\section{Introduction}\label{sec:intro}
Expander graphs, graphs in which all small subsets exhibit good expansion properties, are intriguing objects of study that arise in such diverse fields as number theory, computer science and discrete geometry. As Hoory, Linial and Wigderson remark in their wide-ranging survey on expanders \cite{HLW}, one reason for their ubiquity is that they may be defined in at least three languages: combinatorial/geometric, probabilistic and algebraic. We refer the reader to this survey and to the expository article of Sarnak \cite{S} for more background on expander graphs and their applications.

We shall be concerned, almost exclusively, with {\em $d$-regular} graphs, in which every vertex has exactly $d$ neighbours.
From the algebraic viewpoint, which we take throughout, a $d$-regular graph $G$ is an expander if there is a significant gap between $\lambda(G)$, the largest eigenvalue of the adjacency matrix of $G$ (for a $d$-regular graph this value is always $d$) and $\lambda_2(G)$, the largest modulus of any other eigenvalue.
Classic results of Dodziuk, Alon--Milman and Alon \cite{D84,AM85,A} show that the difference $d-\lambda_2(G)$ controls the combinatorial expansion of $G$. More precisely, writing $h(G) = \min_{S} |E(S,S^c)|/|S|$, where $E(S,S^c)$ is the set of edges from $S$ to $S^c$, its complement in $V$, and where the minimum is taken over subsets $S$ of the vertices of $G$ with $|S| \leq |S^c|$, we have
\[
\frac{d-\lambda_2(G)}{2} \leq h(G) \leq \sqrt{2d(d-\lambda_2(G))}.
\]
Given the theorem of Alon and Boppana \cite{A,N} that $\lambda_2(G) \ge 2\sqrt{d-1}-o_{n}(1)$ for every $d$-regular graph with $n$ vertices, it is particularly significant if $\lambda_2(G)\le 2\sqrt{d-1}$, in which case the graph is said to be {\em Ramanujan}. A major open problem is to prove the existence of infinite families of $d$-regular Ramanujan graphs for all $d\ge 3$. Explicit constructions coming from number theory given by Lubotzky, Phillips and Sarnak \cite{LPS} and Margulis \cite{Ma} for the case that $d-1$ is prime represented a major breakthrough. Morgenstern \cite{Mo} gives examples of such families whenever $d-1$ is a prime power. However, it seems unlikely that number theoretic approaches will be successful in resolving the problem in its full generality.

A combinatorial approach to the problem, initiated by \citet{F1}, is to prove that one may obtain new (larger) Ramanujan graphs from smaller ones. In this approach one starts with a base graph $H$ which one ``lifts'' to obtain a larger graph $G$ which covers the original graph $H$ in the sense that there is a homomorphism from $G$ to $H$ such that all fibres in $G$ of vertices of $H$ are of equal size. If $G$ is a cover of $H$ and the fibres in $G$ of vertices in $H$ have size $k$, then $G$ is called a $k$-lift of $H$.

It is easily observed that the lift $G$ inherits all the eigenvalues of the base graph $H$.
Indeed, let $\mu$ be an eigenvalue of $H$ with eigenvector $x$, and define a vector $y$ with entries indexed by $V(G)$ by setting, for each $i$, $y_{v}=x_{i}$ for all vertices $v\in V_{i}$; then $y$ is an eigenvector of $G$ with eigenvalue $\mu$. In fact these lifted eigenvectors of $H$ span the space of all vectors that are constant on each of the fibres $V_{i}$ of the lift. The remaining eigenvalues of $G$ are referred to as the {\em new eigenvalues} of the lift (note however that it is possible for some new eigenvalues to be equal to ``old'' eigenvalues). Since eigenvectors of symmetric real matrices corresponding to distinct eigenvalues are orthogonal, these are exactly the eigenvalues which have an eigenvector $x$ which is balanced on each fibre (i.e.\ for which $\sum_{v\in V_{i}}x_{v}=0$ for all $i\in V(H)$).
Since the base graph is given, it suffices to concentrate our study on the new eigenvalues of $G$. We denote by $\lambda^{*}(G)$ the largest absolute value of a new eigenvalue of $G$. (For the remainder of the paper, for any graph $F$ we denote by $\lambda(F)$ the largest eigenvalue of $F$.)

A {\em random $n$-lift} $G$ of a graph $H$ is obtained by assigning to each vertex $i$ of $H$ a distinct set $V_{i}$ of $n$ vertices, and placing a random matching (i.e.\ one chosen uniformly at random from the $n!$ possibilities) between $V_{i}$ and $V_{j}$ for each edge $ij$ of $H$. Random lifts were introduced by Amit, Linial, Matou\v sek and Rozenman \cite{ALMR}. In that article a variety of properties of random lifts are discussed, related to connectivity, expansion, independent sets, colouring and perfect matchings; the proofs of these results, and others, were developed in several subsequent papers \cite{AL1,AL2,AL3,LR}.
As remarked in \cite{F1}, any finite cover $F$ of $H$ in which fibres have size $n$ has a positive probability of appearing as $G$ (in fact this probability is precisely $(n!)^{|V(H)|-|E(H)|}(\mathrm{Aut}(F/H))^{-1}$, where $\mathrm{Aut}(F/H)$ is the group of automorphisms of $F$ over $H$), and so such random lifts form a ``seemingly reasonable model of a probabilistic space of finite quotients of [the infinite $d$-ary tree]''.

Although very few graphs are {\em known} to be Ramanujan, it is conjectured that a positive proportion of regular graphs are in fact Ramanujan, and Alon's conjecture/Friedman's theorem states that for any $\epsilon > 0$, only an asymptotically negligible proportion of $d$-regular graphs have $\lambda_2(G) > 2\sqrt{d-1}+\epsilon$. In this spirit, Friedman \cite{F1} studied the eigenvalues of random lifts of regular graphs, and Lubetzky, Sudakov and Vu \cite{LSV} conjectured that a random lift of a Ramanujan graph has a positive probability of being Ramanujan. Since for all $d$ the complete graph $K_{d+1}$ is a $d$-regular Ramanujan graph, this would imply the existence of arbitrarily large $d$-regular Ramanujan graphs. In the terminology from \cite{F1}, the main result of this paper implies that with extremely high probability the lifts of Ramanujan graphs are $O(\sqrt{d})$-weakly Ramanujan, in that all non-trivial eigenvalues are $O(\sqrt{d})$.

In \cite{F1}, Friedman used the trace method of Wigner to prove results which in particular imply that if $H$ is $d$-regular then $\lambda^{*}(G)= O(d^{3/4})$ whp\footnote{If a statement holds with probability which tends to $1$ as $n$ tends to infinity, we say that it occurs `with high probability', or `whp' for short.}.
This was later tightened to $O(d^{2/3})$ by Linial and Puder, by a careful analysis of the trace method. They also made a conjecture concerning word maps which, if verified, would prove $\lambda^{*}(G)=O(d^{1/2})$ whp. Two initial cases of the conjecture were proved; a third has been proven more recently by Lui and Puder \cite{LP}.
Bilu and Linial \cite{BL} then showed that every $d$-regular graph $H$ has {\em some} $2$-lift $G$ with $\lambda^*(G) = O(d^{1/2}\log^{3/2}{d})$.
The next major step in this area was taken by Lubetzky, Sudakov and Vu \cite{LSV}, who proved that $\lambda^{*}(G)=O(\max(\lambda_2(H),d^{1/2})\log{d})$ whp. In particular, in the case that $\lambda_2(H)=O(d^{1/2})$ this result gives that $\lambda^{*}(G)=O(d^{1/2}\log{d})$.

In this article we prove that whp $\lambda^*(G)=O(d^{1/2})$, a result which is best possible up to the constant.
\begin{thm}\label{main}
Let $H$ be any $d$-regular graph and let $G$ be a random $n$-lift of $H$.
For all $n$ sufficiently large, with probability at least $1-n^{-2d^{1/2}}$, $\lambda^*(G) \leq 430656 \sqrt{d}$.
\end{thm}
Furthermore, we are able to {\em explain} the likely cause of large eigenvalues should they occur. This cause is, with very high probability, a small (i.e.~of size not depending on $n$) subgraph of $G$.
\begin{thm}
\label{explain}
Let $H$ be any $d$-regular graph, write $h=|V(H)|$, and let $G$ be a random $n$-lift of $H$.
For all $n$ sufficiently large, with probability at least $1-n^{-hd}$, $G$ contains an induced subgraph $G'$ with at most $hd$ vertices, such that $\lambda^*(G) \leq 1189248 \lambda(G')$.
\end{thm}
One might protest that the eigenvalue of $G'$ is not necessarily a new eigenvalue of $G$, and so $G'$ is not the `cause' of a new eigenvalue of large modulus in $G$. However, the following approximate converse to \refT{explain} justifies our use of such an epithet for $G'$.
\begin{prop}\label{protest}
For any induced subgraph $G'$ of $G$ with $|V(G')| \leq n-h\sqrt{n}$, we have $\lambda^*(G) \geq \lambda(G') - 7/2$.
\end{prop}
The short proof of \refP{protest} appears in \refS{sec:zbound}.

Costello and Vu \cite{CV} remark that ``[t]he main intuition that underlies many problems concerning the rank of a random matrix is that dependency should come from small configurations,'' and their paper can be seen as confirmation of this intuition for a rather broad class of random matrices. In this spirit, \refT{explain} should be viewed as stating that for random lifts, {\em any exceptionally large eigenvalues come from small configurations.}
It would be very interesting to know whether our ``exceptionally large'' can be replaced by ``slightly large''. A rather ambitious question one could ask in this direction is the following.
\begin{question}
Is there a constant $C$ not depending on $n$ (perhaps depending on $H$) such that for any $\epsilon > 0$, given that $\lambda^*(G) \geq (2+\epsilon)\sqrt{d-1}$, with probability $1-o_n(1)$, $G$ contains a subgraph $G'$ with at most $C$ vertices such that $\lambda(G') \geq (2+\epsilon/2)\sqrt{d-1}$?
\end{question}

We note that the probability bound in \refT{explain} is extremely strong.
Indeed, the failure probability, $n^{-hd}$, is much smaller than the probability that $G$ contains $H$ as a subgraph -- the probability of the latter event is greater than $n^{-hd/2}$ -- in which case $\lambda^{*}(G)=d$.

Our proof, like that of Lubetzky, Sudakov and Vu \cite{LSV} and many others, relies on reducing an uncountable collection of possible `reasons' for a large eigenvalue to a finite (and hopefully relatively small) sub-collection which still expresses all ways in which a large eigenvalue can occur. They used the well-known method of $\epsilon$-nets to make this reduction. (Amit and Linial \cite{AL2} also used $\epsilon$-nets to prove a lower bound on edge expansion for random lifts of connected graphs which need not necessarily be regular.) However, the number of events (points of the $\epsilon$-net) one is required to consider is $\exp(\Theta(nh\log{d}))$. The appearance of the $\log{d}$ here is a major obstacle to proving that $\lambda^*(G)=O(\sqrt{d})$ by the $\epsilon$-net approach. Our approach is based on a convexity argument that allows us to reduce to a smaller, $\exp(\Theta(nh))$-sized family of events. Furthermore, the events in this collection are concerned with vectors with dyadic entries. These are easier to deal with than general vectors; in particular, direct combinatorial arguments may be applied and we need not appeal to martingale inequalities to obtain probability bounds.

Modulo a few trivial changes to our proof (e.g.\ changing ``precisely'' to ``at most'' in the proof of Proposition \ref{zbound}, below), replacing $d$ by $\Delta$ throughout gives a proof of the following generalisation to the case that the base graph $H$ is not regular.

\begin{thm} \label{thm:omit} Let $H$ be any graph of maximum degree $\Delta$ and let $G$ be a random $n$-lift of $H$. For all $n$ sufficiently large, with probability at least $1-n^{-2\Delta^{1/2}}$, $\lambda^*(G) \leq 430656 \sqrt{\Delta}$, and with probability at least $1-n^{-h\Delta}$, $G$ contains an induced subgraph $G'$ with at most $h\Delta$ vertices, such that $\lambda^*(G) \leq 1189248 \lambda(G')$. \end{thm}

Since $G$ will always contain two edge-disjoint stars whose centres have degree $\Delta$ and lie in the same fibre, $\lambda^*(G)\ge \sqrt{\Delta}$. Thus this result is also tight up to the constant.

While we focus here on the case of an $n$-lift where $n$ is large, we would like to bring to the reader's attention the recent result of Oliveira \cite{Olive} that a random $n$-lift $G$ of a graph $H$ on $h$ vertices with maximum degree $\Delta$ satisfies $\lambda^{*}(G)=O(\Delta^{1/2}\log^{1/2}(hn))$ whp.

\section{Notation}

All logarithms in this paper are natural logarithms unless otherwise specified.
For positive integers $k$ we write $[k]=\{1,\ldots,k\}$. We write ${\mathbb N}_0 = \{0,1,2,\ldots\}$ and ${\mathbb N}=\{1,2,\ldots\}$.
For any graph $F=(V,E)$ and $u,v \in V$ we write $u \sim_F v$ if $uv \in E$.
For the remainder of the paper, $d \geq 2$ is a positive integer, $H=([h],E(H))$ is a fixed $d$-regular graph, and $G=(V(G),E(G))$ is the random $n$-lift of $H$, where $V(G)=\{(i,j),i \in [h],j \in [n]\}$.
For $i$ in $[h]$ we write $V_i = \{(i,j): j \in [n]\}$, and call $V_i$ the {\em fibre} of $i$ in $G$.
Let $M$ be the adjacency matrix of $G$.
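To make the model concrete, the following short Python sketch samples a random $n$-lift and its adjacency matrix $M$. This is a toy illustration only: the helper \texttt{random\_lift} and its indexing conventions are ours and play no role in the proofs.
\begin{verbatim}
import numpy as np

def random_lift(H, n, rng=None):
    """Adjacency matrix of a random n-lift of the base graph H.

    H is an h-by-h 0/1 adjacency matrix; vertex (i,j) of the lift is
    indexed by i*n + j, so the fibre V_i occupies block i of size n.
    For each edge ii' of H we place an independent, uniformly random
    matching between V_i and V_{i'}."""
    rng = np.random.default_rng() if rng is None else rng
    h = H.shape[0]
    M = np.zeros((h * n, h * n), dtype=int)
    for i in range(h):
        for ip in range(i + 1, h):
            if H[i, ip]:
                P = np.eye(n, dtype=int)[rng.permutation(n)]
                M[i*n:(i+1)*n, ip*n:(ip+1)*n] = P
                M[ip*n:(ip+1)*n, i*n:(i+1)*n] = P.T
    return M

# example: a random 100-lift of K_4 (so d = 3, h = 4)
d, n = 3, 100
H = np.ones((d + 1, d + 1), dtype=int) - np.eye(d + 1, dtype=int)
M = random_lift(H, n)
assert (M.sum(axis=0) == d).all()   # every lift of K_4 is 3-regular
\end{verbatim}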
For any $m$-by-$m$ matrix $A=(a_{uv})_{u,v \in [m]}$ and vectors $x,y \in \mathbb{R}^m$, we write $\ip{x}{y}_A = \sum_{u,v \in [m]} x_u a_{uv} y_v$. Also, for a set $E \subset \mathbb{R}^2$, we write $\ip{x}{y}_{A,E}$ for the restricted sum
\[
\sum_{\{u,v \in [m]: (x_u,y_v) \in E\}} x_u a_{uv} y_v.
\]
Finally, we use the Vinogradov notation $f \ll g$ to mean that $f=O(g)$, i.e.~$f$ is bounded by a constant times $g$, independent of $n$. We write $f \asymp g$ to mean that $f=O(g)$ and $g=O(f)$.

\section{An overview of the proof} \label{sec:duck}
As in \cite{KS} and much subsequent work, we will bound the eigenvalues of $G$ using the Rayleigh quotient principle, which is to say by bounding $\ipo{x}_M$ for suitable vectors~$x$.
More precisely, writing $X = \{x \in \mathbb{R}^{V(G)}:\|x\|_2^2\leq 1\}$, Rayleigh's quotient principle tells us the following.
\begin{fact}\label{rayleigh}
$\lambda^*(G) = \sup\{ |\ipo{x}_M|: x \in X, \forall~i \in [h],~\sum_{v \in V_i} x_v = 0\}$.
\end{fact}
\refF{rayleigh} forms the basis for our study of $\lambda^*(G)$.
However, to make use of it we first need to have an idea of the diversity of possible ways in which $G$ could admit a balanced vector (a vector satisfying $\sum_{v \in V_i} x_v = 0$ for all $i \in [h]$) having a large value for $\ipo{x}_M$.
To begin to get a feel for this, we now give two rather different examples of how $\lambda^*(G)$ can be large. For simplicity, for the examples we assume that $H$ is the complete graph $K_{d+1}$. We also provide bounds on the probability of such examples occurring in $G$. These bounds give something of the flavour of the bounds we shall be required to prove for the general case.

\textbf{Example 1: $G$ contains a large-ish clique.}

Fix $s\in{\mathbb N}$ and vertices $v_1,\ldots,v_s$ of $G$ in distinct fibres. If $G[\{v_1,\ldots,v_s\}]$ happens to be a clique, then setting $x_{u}$ to be $1$ if $u \in \{v_1,\ldots,v_s\}$, $-1/(n-1)$ for all other vertices in the same fibres as $v_1,\ldots,v_s$, and zero on all other vertices, we obtain a vector $x$, balanced on each fibre, for which $\ipo{x}_M=(s-1)\|x\|_2^2$, so that $\lambda^*(G) \geq s-1$ by \refF{rayleigh}. Thus, if $G$ contains a clique of order at least $K\sqrt{d}+1$ then $\lambda^*(G) \geq K\sqrt{d}$.

To bound the probability that $G$ contains such a clique, for each possible choice of $v_1,\ldots,v_s$ the probability that $G[\{v_{1},\dots ,v_{s}\}]$ is a clique is $n^{-\binom{s}{2}}$ (since each edge $v_{i}v_{i'}$ is present in $G$ independently with probability $n^{-1}$). Since there are only ${(d+1) \choose s} n^{s}$ choices of the $s$-tuple $(v_{1},\dots ,v_{s})$ the probability that $G$ contains a clique of size $s$ is at most ${(d+1) \choose s}n^{s-\binom{s}{2}}$, which is $o(1)$ for any $s\ge 4$.
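The following continuation of the previous sketch plants such a clique by hand and confirms that the Rayleigh quotient of the balanced vector $x$ equals $s-1$ exactly (again purely illustrative; the planting step is ours).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, n, s = 9, 50, 5
h = d + 1
M = np.zeros((h * n, h * n), dtype=int)
for i in range(h):
    for ip in range(i + 1, h):
        perm = rng.permutation(n)
        if ip < s:                     # force vertex 0 of fibres
            j0 = int(np.where(perm == 0)[0][0])   # 0,...,s-1 into
            perm[j0], perm[0] = perm[0], 0        # a planted clique
        P = np.eye(n, dtype=int)[perm]
        M[i*n:(i+1)*n, ip*n:(ip+1)*n] = P
        M[ip*n:(ip+1)*n, i*n:(i+1)*n] = P.T

x = np.zeros(h * n)
for i in range(s):
    x[i * n] = 1.0                        # the clique vertices
    x[i*n + 1:(i+1)*n] = -1.0 / (n - 1)   # balance each fibre
print(x @ M @ x / (x @ x))                # 4.0 = s - 1, up to rounding
\end{verbatim}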
\textbf{Example 2: uneven edge densities all over $G$.}

Suppose that there exist sets $(A_{i})_{i \in V(H)}$, with $A_{i}\subset V_{i}$ and $|A_{i}|=n/2$ for all $i\in V(H)$, such that $e(A_{i},A_{j})\ge n/4+Kn/\sqrt{d}$ for each $i \neq j$, $i,j \in V(H)$. Then setting $x_{u}$ to be $(nh)^{-1/2}$ if $u$ is in $\bigcup_{i}A_{i}$ and $-(nh)^{-1/2}$ otherwise, we obtain a balanced vector $x$ with $\|x\|_2^2 = 1$ and $\ipo{x}_M \geq 2K\sqrt{d}$, so $\lambda^*(G) \geq 2K\sqrt{d}$.

Here, for each choice of sets $(A_{i})_{i\in V(H)}$, with $A_{i}\subset V_{i}$ and $|A_{i}|=n/2$ for all $i\in V(H)$, the probability that for all $i,j \in V(H)$, $i\neq j$, we have $e(A_{i},A_{j})\ge n/4+Kn/\sqrt{d}$, is of order at most $\exp(-c\binom{d+1}{2}K^{2}n/d)$, for some constant $c > 0$ (this is not hard to derive by hand; it can also be obtained straightforwardly from \refP{bigbound} in Section~\ref{sec:prob}). If $K$ is sufficiently large, this bound is strong enough for a union bound to show that with high probability, there is no such choice of sets $(A_i)_{i \in V(H)}$.

The preceding examples present two rather different structures within $G$, both of which give rise to large new eigenvalues, and show how the new eigenvectors are also rather different.
We may also switch our point of view, first fixing a vector $x$ (for the first example a vector taking the value $1$ on a single vertex of each fibre $V_{i}, \, i=1,\dots,s$ and taking the value $-\I{u\in V_{[s]}}/(n-1)$ on other vertices $u$; for the second example a vector taking the value $(nh)^{-1/2}$ on $n/2$ vertices in each fibre and $-(nh)^{-1/2}$ on the rest) then asking what structure in $G$ is required if $|\ipo{x}_M|$ is to be large for this specific vector $x$. This is essentially the perspective we will take for most of the rest of the paper.

From this viewpoint, a possible cause for $\lambda^{*}(G)$ being large consists of a vector $x \in X$ together with evidence, in the form of specified edge counts between subsets of vertices of $G$, that $|\ipo{x}_M|$ is large.
For a vector $x$, $i \in [h]$ and $w \in \mathbb{R}$, let
\[
 A_{i,w}(x) = \{v \in V_i: x_{v} = w\}, \qquad a_{i,w}(x)=|A_{i,w}(x)|.
\]
We write $A_{i,w}=A_{i,w}(x)$ and $a_{i,w}=a_{i,w}(x)$ when the dependency on $x$ is clear.
We say that the collection $\{a_{i,w}:i \in [h],w \in \mathbb{R}\}$ is the {\em type} of the vector $x$, and denote this collection ${\bf a}(x)$. By the symmetry of the model, the probability that $|\ipo{x}_M|$ is large should be a function only of the type of $x$. We will seek sets of constraints on the edge densities between sets corresponding to distinct $a_{i,w}$ and $a_{i',w'}$, which codify all possible ways in which a large new eigenvalue can appear. A type, together with a particular such set of constraints, will be called a {\em pattern}; the precise definition of patterns will appear later in the section.

While this initial concept of a pattern is useful for demonstrating the idea we have in mind, a number of changes are needed before we can put it into play. The most fundamental issue is that we are trying to bound $\sup_{x \in X} \ipo{x}_M$, the supremum being over an uncountable collection.
We will shortly show that for a moderate cost, the uncountable supremum in \refF{rayleigh} can be replaced by a supremum over a more tractable collection.
Also, it turns out that for {\em any} vector $x \in X$, the contribution to $\ipo{x}_M$ made by entries $x_{i,j},x_{i',j'}$ whose weights differ by more than a factor of $\sqrt{d}$ is negligible, which will allow us to further restrict the collection of types we need to consider.
This is not a complete list of the required changes, but before proceeding too far into the argument it is useful to fill in some of these initial steps.

Recall that $M$ is the adjacency matrix of $G$, and let $\overline{M} = (\overline{m}_{(i,j),(i',j')})_{(i,j),(i',j') \in V(G)}$ be the matrix with
\[
\overline{m}_{(i,j),(i',j')} = \begin{cases}
1/n & \mbox{if } i \sim_{H} i' \\
0 & \mbox{otherwise.}
\end{cases}
\]
In other words, $\overline{m}_{(i,j),(i',j')} = \p{(i,j) \sim_G (i',j')}$, so for all $x \in \mathbb{R}^{V(G)}$, we have $\ipo{x}_{\overline{M}} = \E{\ipo{x}_M}$.
If $x$ is balanced on each fibre, i.e., satisfies $\sum_{v \in V_i} x_v = 0$ for all $i \in [h]$, then $\ipo{x}_{\overline{M}} = x^t (\overline{M}x) = x^t \cdot \mathbf{0}=0$.
Thus, letting $N=M-\overline{M}$, for such $x$ we have
\[
\ipo{x}_N = \ipo{x}_M - \ipo{x}_{\overline{M}} = \ipo{x}_M.
\]
If, on the other hand, $x$ is constant on fibres of $G$, then $\ipo{x}_M = \ipo{x}_{\overline{M}}$ and so $\ipo{x}_N=0$.
From this we easily obtain the following fact.
\begin{fact}\label{uncount}
$\lambda^*(G) = \sup_{x \in X} |\ipo{x}_N|$.
\end{fact}
\begin{proof}
The inequality $\lambda^*(G) \leq \sup_{x \in X} |\ipo{x}_N|$ follows from \refF{rayleigh} since $\ipo{x}_N=\ipo{x}_M$ if $x$ is balanced on fibres of $G$.
On the other hand, any vector $x \in X$ can be expressed as $y+z$, where $y \in X$ is balanced on fibres of $H$ and $z$ is constant on fibres of $H$. Since $\langle y,z \rangle_N=0$, we then have $\ipo{x}_N=\ipo{y}_N=\ipo{y}_M$, which proves the other inequality.
\end{proof}
By using $N$ instead of $M$, \refF{uncount} allows us to maintain the property of only considering ``new'' eigenvectors of $M$, without insisting that the vectors we consider remain balanced on each fibre of $G$. This turns out to be remarkably helpful.
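\refF{uncount} also gives a convenient way to compute $\lambda^*(G)$ in examples: since $N$ is symmetric, $\sup_{x \in X}|\ipo{x}_N|$ is just the spectral radius of $N$. A minimal numpy sketch, continuing the earlier ones (the helper name is ours):
\begin{verbatim}
import numpy as np

def lambda_star(M, H, n):
    """Spectral radius of N = M - Mbar, which (by the identity
    lambda*(G) = sup over X of |<x,x>_N|) equals the largest
    absolute new eigenvalue of the lift."""
    Mbar = np.kron(H, np.full((n, n), 1.0 / n))  # 1/n on blocks i ~_H i'
    return np.abs(np.linalg.eigvalsh(M - Mbar)).max()
\end{verbatim}
Applied to the planted-clique lift above, for instance, it returns a value of at least $s-1=4$.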
Next, let $D^+ = \{0\} \cup \{2^i/(nh)^{1/2}: i \in {\mathbb N}\}$, let
\begin{align}
Z & = \left\{x \in \mathbb{R}^{V(G)}:\|x\|_2^2 \leq 10,\forall~v \in V(G),x_v \in D^+, \sup_{v \in V(G)} x_v \leq d \cdot \inf_{v \in V(G): x_v \ne 0} x_v\right\}, \label{zdef}
\end{align}
and let
\[
E^* = \{(x_1,x_2) \in \mathbb{R}^2: x_1,x_2 > 0, x_1/x_2 \in (d^{-1/2},d^{1/2})\}.
\]
The following proposition, which is proved in \refS{sec:zbound}, formalizes our discussion on ``restricting the collection of types we need to consider''.
\begin{prop}\label{zbound}
$\lambda^*(G) \leq 96 \sup_{x \in Z} |\ipo{x}_{N,E^*}| + 480\sqrt{d}.$
\end{prop}

A relatively straightforward step in the proof of \refP{zbound}, which will also turn out to be useful in another part of the paper, is to show that not much is lost when we restrict our attention to pairs of vertices whose weights in $x$ differ by at most a multiplicative factor of $\sqrt{d}$. This is the content of the following lemma, which is proved in \refS{sec:zbound}.
\begin{lem}\label{ipoe}
For all $y \in \mathbb{R}^{V(G)}$, $|\ipo{y}_{N,\mathbb{R}^2\setminus E^{*}}|\le 4\sqrt{d}\|y\|_2^2.$
\end{lem}

We are now in a position to further elaborate on what will constitute a pattern of a large eigenvalue.
Recall that for $y \in \mathbb{R}^{V(G)}$, for each $i \in [h]$ and each $w \in \mathbb{R}$, we have $A_{i,w}(y) = \{v \in V_i:y_{v} = w\}$ and $a_{i,w}(y) = |A_{i,w}(y)|$, and that ${\bf a}(y)$ is called the type of $y$.
We remark that if $y \in Z$ then ${\bf a}(y)$ satisfies the following properties.

\begin{enumerate}
\item[{\bf 1.}] For all $w \in \mathbb{R}$, $w \not \in D^+$, we have $a_{i,w} = 0$.
\item[{\bf 2.}] For all $i \in V(H)$, $\sum_{w\in D^+} a_{i,w} \le n$.
\item[{\bf 3.}] There exists $w_{0}\in D^+$, $w_0>0$, such that $a_{i,w}=0$ unless $w=0$ or $w_{0}\le w\le w_{0}d$.
\item[{\bf 4.}] $\sum_{i \in [h], w \in D^+} w^2 a_{i,w} \le 10$.
\end{enumerate}
We call a collection ${\bf a} = \{a_{i,w}: i \in V(H), w \in \mathbb{R}\}$ a {\em $Z$-type} if it satisfies properties {\bf 1}--{\bf 4}, so that if $y \in Z$ then the type ${\bf a}(y)$ of $y$ is a $Z$-type.
Next, given $y \in \mathbb{R}^{V(G)}$ and $ii' \in E(H)$, $w,w' \in \mathbb{R}$, let $e_{i,w,i',w'}(y,G) = e_G(A_{i,w}(y),A_{i',w'}(y))$, and write
\[
{\bf e}(y,G) = \{e_{i,w,i',w'}(y,G): ii' \in E(H), w,w' \in \mathbb{R}\}.
\]
For all $ii' \in E(H)$ and all $w,w' \in \mathbb{R}$, we necessarily have
\begin{equation*}\label{eq:edef}
e_{i,w,i',w'}(y,G) \in \{0,1,\ldots, \min(a_{i,w}(y),a_{i',w'}(y))\}.
\end{equation*}
A {\em pattern} is a pair $({\bf a},{\bf e})$, where ${\bf a}$ is a $Z$-type and
\[
{\bf e}= \{e_{i,w,i',w'}: ii' \in E(H), w,w' \in \mathbb{R}\}
\]
is a collection with $e_{i,w,i',w'} \in \{0,1,\ldots, \min(a_{i,w},a_{i',w'})\}$ for all $ii' \in E(H)$ and all $w,w' \in \mathbb{R}$.

Write $D^{>0}=D^+\setminus\{0\}=\{2^i/\sqrt{nh}:i \in {\mathbb N}\}$, and let $\Gamma$ be the graph on vertex set $\{(i,w):i\in V(H), w\inD^{>0}\}$ with an edge $(i,w)(i',w')$ if $i\sim_H i'$ and $w/w'\in (d^{-1/2},d^{1/2})$.

For a given pattern $({\bf a},{\bf e})$, we write
\begin{align*}
p({\bf a},{\bf e}) = \left|\sum_{(i,w)\sim_{\Gamma} (i',w')}w w' \left(e_{i,w,i',w'}- \frac{a_{i,w}a_{i',w'}}{n} \right) \right|,
\end{align*}
and call $p({\bf a},{\bf e})$ the \emph{potency} of $({\bf a},{\bf e})$. We remark that for any $y \in Z$,
\begin{align}
|\langle y,y \rangle_{N,E^*}| & = 2\left|\sum_{(i,w)\sim_{\Gamma} (i',w')}w w' \left(e_{i,w,i',w'}(y,G)- \frac{a_{i,w}(y)a_{i',w'}(y)}{n} \right)\right| \nonumber\\
& = 2\, p({\bf a}(y),{\bf e}(y,G)), \label{eq:potconnect}
\end{align}
the factor $2$ arising from the symmetry of the matrix $N$.
We say that a pattern $({\bf a},{\bf e})$ {\em can be found in $G$} if there exists $y \in Z$ such that ${\bf a}(y)={\bf a}$ and ${\bf e}(y,G)={\bf e}$.
In this case we say that $G$ {\em contains} the pattern $({\bf a},{\bf e})$, and call the collection $\{A_{i,w}(y)\}_{(i,w) \in V(\Gamma)}$ a {\em witness} for the pattern $({\bf a},{\bf e})$.
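The definitions of types, edge counts and potency are easy to implement directly, which can be a useful check on small examples. In the sketch below (names ours), the entries of \texttt{y} are assumed to already lie in $D^{+}$, and unordered pairs $(i,w)(i',w')$ are enumerated with $i<i'$:
\begin{verbatim}
import numpy as np
from collections import Counter

def type_of(y, h, n):
    """The type a(y): counts a_{i,w} of each value w within fibre i."""
    return {(i, w): c
            for i in range(h)
            for w, c in Counter(y[i*n:(i+1)*n]).items()}

def edge_counts(y, M, H, h, n):
    """The counts e_{i,w,i',w'}(y,G), one entry per base edge ii'."""
    e = Counter()
    for i in range(h):
        for ip in range(i + 1, h):
            if H[i, ip]:
                rows, cols = np.nonzero(M[i*n:(i+1)*n, ip*n:(ip+1)*n])
                for r, c in zip(rows, cols):
                    e[(i, y[i*n + r], ip, y[ip*n + c])] += 1
    return e

def potency(a, e, H, n, d):
    """p(a,e): absolute sum of deviations over the edges of Gamma."""
    total = 0.0
    for (i, w) in a:
        for (ip, wp) in a:
            if i < ip and H[i, ip] and w > 0 and wp > 0 \
               and d ** -0.5 < w / wp < d ** 0.5:
                dev = e[(i, w, ip, wp)] - a[(i, w)] * a[(ip, wp)] / n
                total += w * wp * dev
    return abs(total)
\end{verbatim}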
\begin{prop}\label{findwit}
For $K > 0$, if $\lambda^*(G) \geq 192(K+3) \sqrt{d}$ then a pattern $({\bf a},{\bf e})$ with potency at least $K\sqrt{d}$ can be found in $G$.
\end{prop}
\begin{proof}
By \refP{zbound}, in this case there exists $y \in Z$ with $|\langle y,y \rangle_{N,E^*}| \ge 2K\sqrt{d}$, and the result follows from \eqref{eq:potconnect}.
\end{proof}
Our main aim, then, is to show that with high probability, no high-potency pattern can be found in $G$.
To do so, we shall use a reduction which is conceptually analogous to how one bounds the probability that a fixed graph $H$ appears as a subgraph of the Erd\H{o}s--R\'enyi random graph $G(n,p)$: rather than proving the bound directly for $H$, one instead considers a strictly balanced (maximally edge-dense) subgraph of $H$.

Given a pattern $({\bf a},{\bf e})$ and a set of vertices $S \subset V(\Gamma)$, we define the {\em sub-pattern of $({\bf a},{\bf e})$ induced by $S$} to be the pattern $({\bf a}',{\bf e}')$ obtained from $({\bf a},{\bf e})$ by setting
\[
a'_{i,w} = \begin{cases}
a_{i,w} & \mbox{if}~(i,w) \in S \\
0 & \mbox{otherwise},
\end{cases}
\]
and setting
\[
e_{i,w,i',w'}' = \begin{cases}
e_{i,w,i',w'} & \mbox{if}~(i,w),(i',w') \in S \\
0 & \mbox{otherwise}.
\end{cases}
\]
We write $({\bf a},{\bf e})_S$ for the sub-pattern of $({\bf a},{\bf e})$ induced by $S$.

We also require the following variant of potency, which allows us to consider a ``maximally potent'' subgraph of $\Gamma$ rather than $\Gamma$ itself. For a pattern $({\bf a},{\bf e})$ we define
\[
\tilde{p}(({\bf a},{\bf e})):=\max_{E\subset E(\Gamma)} \left|\sum_{(i,w)(i',w')\in E}w w' \left(e_{i,w,i',w'}- \frac{a_{i,w}a_{i',w'}}{n}\right) \right|.
\]
We will prove the following theorem.
\begin{thm}\label{redbound}
Fix $L \ge 20$.
For any pattern $({\bf a},{\bf e})$, there exists $S=S(({\bf a},{\bf e}),L) \subset V(\Gamma)$ such that the following properties hold.
\begin{align*}
\tilde{p}(({\bf a},{\bf e})_{S}) & \geq \frac{p({\bf a},{\bf e})}{2}-55L\sqrt{d}\\
\p{({\bf a},{\bf e})_S~\mbox{can be found in G}} & \ll \prod_{(i,w) \in S} a_{i,w}^{d/4}\pran{\prod_{(i,w) \in S} {n \choose a_{i,w}\wedge \lfloor n/2 \rfloor}}^{1-L/10}.
\end{align*}
\end{thm}

We prove this theorem in \refS{sec:key}.
\refT{main} will follow straightforwardly from \refP{findwit} and \refT{redbound}.
More precisely, \refP{findwit} ensures that the number of events whose probability we must bound is not too large, and \refT{redbound} ensures that the probability of each event is sufficiently small that we can simply apply a union bound to prove \refT{main}.
Before {\em providing} the proof, we state one additional, easy bound which we will require, on the total number of patterns satisfying a constraint on the sizes of the $a_{i,w}$.
\begin{lem}\label{patcount}
For fixed $A \in {\mathbb N}$, the number of patterns with $a_{i,w} < A$ for all $(i,w) \in V(\Gamma)$ is at most $\log_2(nh) \cdot A^{2h d\log_2 d}$.
\end{lem}

Assuming this lemma, which is proved in \refS{sec:zbound}, we are now ready to give our proof of \refT{main}. (The proof of \refT{explain}, while conceptually almost identical to that of \refT{main}, ends up requiring a somewhat more technical development and we therefore defer it to \refS{sec:explain}.)
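The constants in the proof below are chosen to make the reduction bookkeeping come out exactly; as a throwaway arithmetic check (this fragment verifies bookkeeping only and carries no mathematical content):
\begin{verbatim}
M0 = 430656          # the constant in the statement of the main theorem
K = M0 / 192 - 3     # = 2240.0
L = K / 112          # = 20.0
assert M0 == 192 * (2240 + 3) and K >= 2240 and L >= 20
assert (112 / 2 - 55) * L == L   # potency surviving the reduction step
\end{verbatim}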
\begin{proof}[Proof of Theorem~\ref{main}]
Fix $M \ge 430656 = 192(2240+3)$, let $K=K(M)=(M/192)-3 \ge 2240$, and let $L(M)=K/112\ge 20$.

By \refP{findwit}, if $\lambda^*(G) \geq M\sqrt{d}$ then there is a pattern $({\bf a},{\bf e})$ with potency at least $K\sqrt{d}=112 L\sqrt{d}$ such that $({\bf a},{\bf e})$ can be found in $G$. Let $({\bf a},{\bf e})_{S(({\bf a},{\bf e}),L)}$ be the sub-pattern of $({\bf a},{\bf e})$ obtained by applying \refT{redbound} to $({\bf a},{\bf e})$; it follows that $\tilde{p}(({\bf a},{\bf e})_{S(({\bf a},{\bf e}),L)}) \ge (112/2-55)L\sqrt{d} =L\sqrt{d}$ and that $({\bf a},{\bf e})_{S(({\bf a},{\bf e}),L)}$ can be found in $G$.

We say a pattern $({\bf a},{\bf e})$ is an {\em $L$-reduction} if there is a pattern $({\bf a}',{\bf e}')$ and $S=S(({\bf a}',{\bf e}'),L)$ as in \refT{redbound}, such that $({\bf a}',{\bf e}')_{S(({\bf a}',{\bf e}'),L)}=({\bf a},{\bf e})$. It follows from the preceding paragraph that for any $M \geq 430656$,
\[
\p{\lambda^*(G) \geq M\sqrt{d}} \leq \p{E},
\]
where $E=E(M)$ is the event that $G$ contains an $L(M)$-reduction $({\bf a},{\bf e})$ with $\tilde{p}({\bf a},{\bf e})\ge L(M)\sqrt{d} \geq 20\sqrt{d}$. We use \refT{redbound} to bound this probability for each fixed pattern (reduction) $({\bf a},{\bf e})$; the proof is then completed with a union bound. We now give the details. We split into two cases, depending on whether the pattern (reduction) $({\bf a},{\bf e})$ has $a_{i,w}\geq 4hd\log_{2}{d}$ for some $(i,w) \in V(\Gamma)$. Let $E_{1}=E_1(M)$ be the event that $G$ contains an $L$-reduction $({\bf a},{\bf e})$ with $\tilde{p}({\bf a},{\bf e})\ge L\sqrt{d}$ in which at least one $a_{i,w}\geq 4hd\log_{2}{d}$, and $E_{2}=E_2(M)$ the event that $G$ contains an $L$-reduction $({\bf a},{\bf e})$ with $\tilde{p}({\bf a},{\bf e})\ge L\sqrt{d}$ in which $a_{i,w}< 4hd\log_{2}{d}$ for all $i,w$. We note that $E_1$ and $E_2$ need not be disjoint. To prove \refT{main} it suffices to show that both $\p{E_1}$ and $\p{E_2}$ are less than $n^{-2d^{1/2}}/2$ for all $n$ sufficiently large.

First note that for {\em any} $L$-reduction $({\bf a},{\bf e})$, writing $\alpha=\sum_{(i,w) \in V(\Gamma)} a_{i,w}$, since $1 - L/10 \le -1$, by \refT{redbound} we have
\[
\p{({\bf a},{\bf e})_S~\mbox{can be found in}~G} \ll \prod_{(i,w) \in S} a_{i,w}^{d/4}\cdot {n \choose \alpha \wedge \lfloor n/2 \rfloor}^{-1}.
\]
For any $L$-reduction $({\bf a},{\bf e})$ considered in $E_1$, we have $\alpha \geq 4hd \log_2(d)$.
We note that since non-zero entries in any $Z$-type differ by a factor of at most $d$, for any $i \in V(H)$ there are at most $\log_2(2d)$ values $w\ne 0$ for which $a_{i,w} \ne 0$. It follows that in total there are at most $h\log_2(2d)$ vertices $(i,w) \in V(\Gamma)$ with $a_{i,w} \ne 0$, so
\begin{align*}
\p{({\bf a},{\bf e})_S~\mbox{can be found in}~G} & \ll n^{(d/4) h\log_2(2d)} \cdot {n \choose 4hd\log_2(d)}^{-1} \\
& \leq n^{(hd\log_2 d)/2} \cdot \pran{\frac{8hd \log_2 d}{n}}^{4hd \log_2 d}.
\end{align*}
We next take a union bound over all $L$-reductions considered in $E_1$.
Clearly the number of such reductions is at most the total number of patterns, which by \refL{patcount} applied with $A=n$ is at most $\log_2(nh) n^{2hd\log_2 d}$.
It follows that
\[
\p{E_1} \ll (8hd \log_2 d)^{4hd \log_2 d} \frac{\log_2(nh)}{n^{3(hd\log_2 d)/2}},
\]
and so $\p{E_1}\le n^{-2d^{1/2}}/2$ for $n$ sufficiently large.
Next, for any $L$-reduction $({\bf a},{\bf e})$ considered in $E_2$, we have
\[
\prod_{(i,w) \in S} a_{i,w}^{d/4} < (4hd\log_2 d)^{(d/4)h\log_2 (2d)}.
\]
Also, for any pattern $({\bf a},{\bf e})$, it follows straightforwardly from the definition of a pattern that
\begin{align*}
\tilde{p}({\bf a},{\bf e}) & \leq \sum_{(i,w)\sim_{\Gamma} (i',w')}w w' \left|e_{i,w,i',w'}- \frac{a_{i,w}a_{i',w'}}{n} \right|\\
& \leq \sum_{(i,w)(i',w') \in E(\Gamma)} ww' \min(a_{i,w},a_{i',w'}) \\
& \leq \sum_{(i,w) \in V(\Gamma)} \mathop{\sum_{(i',w')\sim_{\Gamma}(i,w)}}_{w' \leq w,a_{i,w} >0} w^2 a_{i,w} \\
& \leq \sum_{(i',w') \in V(\Gamma)} a_{i',w'} \cdot \sum_{(i,w) \in V(\Gamma)} w^2 a_{i,w} \\
& \leq \sum_{(i,w) \in V(\Gamma)} a_{i,w}.
\end{align*}
It follows that $\alpha=\sum_{(i,w) \in S} a_{i,w} \geq Ld^{1/2}> 3d^{1/2}$ for every $L$-reduction $({\bf a},{\bf e})$ with $\tilde{p}({\bf a},{\bf e}) \ge L\sqrt{d}$. So, for any pattern considered in $E_2$, for $n \geq 6d^{1/2}$ we have
\[
{n \choose \alpha \wedge \lfloor n/2 \rfloor}^{-1} \leq \frac{(6d^{1/2})^{3d^{1/2}}}{n^{3d^{1/2}}}.
\]
Furthermore, the total number of $L$-reductions considered in $E_2$ is at most the total number of patterns with all $a_{i,w}$ less than $4hd \log_2d$, which by \refL{patcount} is at most $\log_2(nh) (4hd\log_2 d)^{2hd \log_2 d}$.
Thus, by a union bound,
\[
\p{E_2} \ll (4hd\log_2d)^{3hd\log_2 d}\cdot (6d^{1/2})^{3d^{1/2}}\cdot \frac{\log_2(nh)}{n^{3d^{1/2}}},
\]
which implies that $\p{E_2}\le n^{-2d^{1/2}}/2$ for $n$ sufficiently large. This completes the proof.
\end{proof}

\section{Two tools from probability} \label{sec:prob}
In this section, we establish all the probability bounds we will require in the remainder of the paper. \refP{bigbound}, below, bounds the probability that a random matching has certain prescribed edge counts between given pairs of sets.
\refL{lem:measure} gives a lower bound on the integral of a function when its value is not too small relative to another function.
We proceed immediately to details.

Let $V=\{v_1,\ldots,v_n\}$, $W=\{w_1,\ldots,w_n\}$, and let $\mathbf{M}$ be a uniformly random matching of $V$ and $W$.
Let $A_0,\ldots,A_s$ and $B_0,\ldots,B_t$ be partitions of $V$ and $W$ respectively, and write $a_i=|A_i|$ and $b_j=|B_j|$.
(Here $s$ and $t$ are constants not depending on $n$.)
Writing $e_{\mathbf{M}}(A_i,B_j)$ for the number of edges from $A_i$ to $B_j$ in $\mathbf{M}$, it is straightforward that $\mu_{ij} = \E{e_{\mathbf{M}}(A_i,B_j)} = a_ib_j/n$.

Now fix integers $\{e_{ij}\}_{0 \leq i \leq s,0 \leq j \leq t}$ with $\sum_{j=0}^t e_{ij}=a_i$ for each $0 \leq i \leq s$ and $\sum_{i=0}^s e_{ij} = b_j$ for each $0 \leq j \leq t$. (From now on, summations and products over $i$ (resp.~$j$) should be understood to have $0 \leq i \leq s$ (resp.~$0 \leq j \leq t$).) We wish to bound
\[
\p{\bigcap_{ij} \{e_{\mathbf{M}}(A_i,B_j) = e_{ij}\}}.
\]
Our aim is to prove a bound not too different from what we would obtain were the $e_{\mathbf{M}}(A_i,B_j)$ independent with Binomial$(a_ib_j,1/n)$ distribution.
By direct enumeration, $\p{\bigcap_{ij} \{e_{\mathbf{M}}(A_i,B_j) = e_{ij}\}}$ equals
\[
\frac{1}{n!} \cdot \prod_i {a_i \choose e_{i0},\ldots,e_{it}} \prod_j {b_j \choose e_{0j},\ldots,e_{sj}} \prod_{ij} e_{ij}! = \frac{1}{n!}\prod_i a_i! \cdot \prod_j b_j! \cdot \pran{\prod_{ij} e_{ij}!}^{-1},
\]
with the convention that $0!=1$. Applying Stirling's formula, this is of the same order as
\begin{equation}\label{jast}
\pran{\frac{\prod_i a_i \prod_j b_j}{\prod_{ij:e_{ij}\neq 0} e_{ij}}}^{1/2} \cdot n^{-(n+1/2)} \prod_{i} a_i^{a_i} \prod_{j} b_j^{b_j} \prod_{ij:e_{ij}\neq 0} e_{ij}^{-e_{ij}}.
\end{equation}
Now write $e_{ij}=(a_ib_j/n)(1+\epsilon_{ij}) = \mu_{ij} (1+\epsilon_{ij})$.
We then have
\begin{align*}
\prod_{ij:e_{ij}\neq 0} e_{ij}^{e_{ij}} & = \prod_{ij:e_{ij} \neq 0} \mu_{ij}^{\mu_{ij}(1+\epsilon_{ij})} \prod_{ij:e_{ij}\neq 0} (1+\epsilon_{ij})^{\mu_{ij}(1+\epsilon_{ij})} \\
& = \prod_{ij} \mu_{ij}^{\mu_{ij}(1+\epsilon_{ij})} \prod_{ij:e_{ij}\neq 0} (1+\epsilon_{ij})^{\mu_{ij}(1+\epsilon_{ij})} \\
& = \prod_{ij} \mu_{ij}^{\mu_{ij}} \prod_{ij} \mu_{ij}^{\mu_{ij}\epsilon_{ij}} \prod_{ij:e_{ij}\neq 0} (1+\epsilon_{ij})^{\mu_{ij}(1+\epsilon_{ij})}.
\end{align*}
For fixed $i$, since $\sum_{j} e_{ij}=a_i$ we must have $\sum_j \epsilon_{ij} \mu_{ij}=0$.
Likewise for fixed $j$ we have $\sum_i \epsilon_{ij} \mu_{ij} = 0$, and it follows that
\begin{align*}
\prod_{ij} \mu_{ij}^{\epsilon_{ij} \mu_{ij}} & = \prod_{i} \pran{\prod_{j} \pran{\frac{a_i}{n}}^{\epsilon_{ij} \mu_{ij}} \prod_j b_j^{\epsilon_{ij} \mu_{ij}}} \\
& = \prod_{ij} b_j^{\epsilon_{ij} \mu_{ij}} \\
& = \prod_j \pran{ \prod_i b_j^{\epsilon_{ij} \mu_{ij}} } = 1.
\end{align*}
A similar calculation shows that
\begin{align*}
\prod_{ij} \mu_{ij}^{\mu_{ij}} = n^{-n} \cdot \prod_i a_i^{a_i} \prod_j b_j^{b_j},
\end{align*}
and so \refQ{jast} equals
\[
n^{-1/2}\pran{\frac{\prod_i a_i \prod_j b_j}{\prod_{ij:e_{ij}\neq 0} e_{ij}}}^{1/2} \cdot \prod_{ij:e_{ij}\neq 0} (1+\epsilon_{ij})^{-\mu_{ij}(1+\epsilon_{ij})}.
\]
We now focus our attention on the latter product of the preceding equation.
It is helpful to multiply and divide by $1= \prod_{ij} e^{\epsilon_{ij} \mu_{ij}}$, to obtain the equivalent expression
\begin{align*}
& \prod_{ij:e_{ij}\neq 0} \exp\pran{-\mu_{ij}(1+\epsilon_{ij})\log(1+\epsilon_{ij})} \prod_{ij} \exp\pran{\mu_{ij}\epsilon_{ij}} \\
= & \prod_{ij:e_{ij}\neq 0} \exp\pran{-\mu_{ij}[(1+\epsilon_{ij})\log(1+\epsilon_{ij})-\epsilon_{ij}]} \prod_{ij:e_{ij}=0} \exp\pran{-\mu_{ij}}.
\end{align*}
(For the second product we use that when $e_{ij}=0$, $\epsilon_{ij}=-1$.)
Since $(1+\epsilon)\log(1+\epsilon)-\epsilon$ approaches $1$ as $\epsilon \downarrow -1$, adopting the convention that $0\log 0=0$ (so that the expression equals $1$ at $\epsilon=-1$) allows us to combine the two preceding products.
In sum, we have established the following proposition.
\begin{prop}\label{bigbound}
Let $e_{ij} = \mu_{ij}(1+\epsilon_{ij})$ and let $E_{ij} = e_{\mathbf{M}}(A_i,B_j)$. Writing $\chi = n^{-1/2}\pran{\frac{\prod_i a_i \prod_j b_j}{\prod_{ij:e_{ij}\neq 0} e_{ij}}}^{1/2}$, we have
\[
\p{\bigcap_{ij} \{E_{ij} = e_{ij}\}} \asymp \chi \cdot e^{-\sum_{ij} \mu_{ij}[(1+\epsilon_{ij})\log(1+\epsilon_{ij}) - \epsilon_{ij}]}.
\]
\end{prop}
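On small examples the estimate of \refP{bigbound} can be compared with simulation; here is a rough Monte Carlo sketch with one distinguished set on each side ($s=t=1$, so fixing $e(A_1,B_1)$ determines all four cell counts):
\begin{verbatim}
import numpy as np

def b(eps):
    return (1 + eps) * np.log1p(eps) - eps if eps > -1 else 1.0

rng = np.random.default_rng(1)
n, a1, b1, e11 = 60, 20, 30, 14        # demand e(A_1, B_1) = 14
trials = 100000
hits = sum((rng.permutation(n)[:a1] < b1).sum() == e11
           for _ in range(trials))

cells = {(1, 1): e11, (1, 0): a1 - e11,
         (0, 1): b1 - e11, (0, 0): (n - a1) - (b1 - e11)}
mus = {(i, j): (a1 if i else n - a1) * (b1 if j else n - b1) / n
       for (i, j) in cells}
expo = sum(mus[c] * b(cells[c] / mus[c] - 1) for c in cells)
chi = n ** -0.5 * (a1 * (n - a1) * b1 * (n - b1)
                   / np.prod([v for v in cells.values() if v])) ** 0.5
print(hits / trials, chi * np.exp(-expo))   # comparable orders
\end{verbatim}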
In fact, it is a weakening of \refP{bigbound} that will be useful later in the paper.
\begin{cor}
\label{cor:bb}
Under the conditions of \refP{bigbound}, we have
\[
\p{\bigcap_{ij} \{E_{ij} = e_{ij}\}} \ll \pran{\prod_{1 \leq i \leq s}a_i \prod_{1 \leq j \leq t} b_j}^{1/4} \cdot e^{-\sum_{ij} \mu_{ij}[(1+\epsilon_{ij})\log(1+\epsilon_{ij}) - \epsilon_{ij}]}.
\]
\end{cor}
\begin{proof}
For each $0 \leq i \leq s$, the nonzero $e_{ij}$ are at most $t+1$ in number and sum to $a_i$, so $\prod_{0 \leq j \leq t: e_{ij} \neq 0} e_{ij} \geq \max_j e_{ij} \geq a_i/(t+1)$, and hence $\prod_{ij:e_{ij} \neq 0} e_{ij} \gg \prod_i a_i$.
We likewise have $\prod_{ij:e_{ij} \neq 0} e_{ij} \gg \prod_j b_j$, and so
\[
\prod_{ij:e_{ij} \neq 0} e_{ij} \gg \prod_i a_i^{1/2} \prod_j b_j^{1/2}.
\]
Also, $n^{-1/2}a_0^{1/2}b_0^{1/2} \leq 1$. The result follows.
\end{proof}
We will also use a lemma which is essentially a version of the following basic fact: if $X$ and $Y$ are non-negative random variables and $E$ is the event that $X \leq cY$, then $\E{X \I{E}} \le c\E{Y}$.
(For generality we shall state the lemma in terms of a measure space, although we shall only apply it to finite measure spaces, and in this case the reader may think of the integrals as simply weighted sums.)

\begin{lem} \label{lem:measure}
Let $(\Omega, \mathcal{E},\mu)$ be a measure space and $g,h:\Omega\to \mathbb{R}$ positive measurable functions, and fix $0 < c < 1$.
Writing
\[
E = \left\{ \omega \in \Omega: \frac{h(\omega)}{g(\omega)} \geq c\frac{\int h \,d\mu}{\int g \,d\mu}\right\},
\]
we have
\[
\int_E h \,d\mu \ge (1-c) \int_{\Omega} h \,d\mu.
\]
\end{lem}
\begin{proof}
\begin{equation*}
\int_{\Omega \setminus E}h\,d\mu =\int _{\Omega\setminus E}g \cdot \frac{h}{g}\,d\mu \le c \int_{\Omega\setminus E}g \cdot \frac{\int h\,d\mu}{\int g\,d\mu} \,d\mu \le c\int_{\Omega} h \,d\mu.
\end{equation*}
\end{proof}
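Both the basic fact and the lemma are easy to test on a finite weighted space; a minimal sketch:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
m, c = 1000, 0.5
mu = rng.random(m)            # weights mu({omega}) of a finite space
g = rng.random(m) + 0.1       # positive functions g, h
h = rng.random(m) + 0.1
E = (h / g) >= c * (h @ mu) / (g @ mu)       # the event E of the lemma
assert (h * mu)[E].sum() >= (1 - c) * (h @ mu)
\end{verbatim}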
\section{Reduced patterns: a proof of \refT{redbound}} \label{sec:key}
\refT{redbound} requires us to find, given a high potency pattern $({\bf a},{\bf e})$, a sub-pattern $({\bf a},{\bf e})_S$ which is unlikely to occur in $G$. In fact, our aim will be slightly more exigent. We will show the existence of a sub-pattern $({\bf a},{\bf e})_S$ which is `locally unlikely' at all $(i,w)\in S$. Given a pattern $({\bf a},{\bf e})$, for $i\sim_{H} i'$ and $w,w' \in \mathbb{R}$ write $\mu_{i,w,i',w'} = a_{i,w}a_{i',w'}/n$ and write
\[
\epsilon_{i,w,i',w'}=\epsilon_{i,w,i',w'}({\bf a},{\bf e}) =
\begin{cases}
1 & \mbox{ if }a_{i,w}=0\mbox{ or }a_{i',w'}=0,\\
e_{i,w,i',w'} / \mu_{i,w,i',w'} -1 & \mbox{ if } a_{i,w}\neq 0\mbox{ and }a_{i',w'} \neq 0.
\end{cases}
\]
The quantity $\epsilon_{i,w,i',w'}$ encodes the deviation from the expected number $\mu_{i,w,i',w'}$ of edges between sets $A \subset V_i$, $A' \subset V_{i'}$ with $|A|=a_{i,w}$ and $|A'|=a_{i',w'}$ if $|E_G(A,A')|=e_{i,w,i',w'}$. We will bound the likelihood of such deviations using \refC{cor:bb}, and to that end define a function $b:(-1,\infty) \to [0,\infty)$ by
\[
b(\epsilon)= (1+\epsilon)\log(1+\epsilon) - \epsilon, \qquad \epsilon\in (-1,\infty).
\]
We then have the following.
\begin{prop}\label{prop:redbound} Fix $L \ge 20$.
For any pattern $({\bf a},{\bf e})$, there exists $S=S(({\bf a},{\bf e}),L) \subset V(\Gamma)$ such that the following properties hold.
\begin{align*}
\tilde{p}(({\bf a},{\bf e})_{S}) & \geq \frac{p({\bf a},{\bf e})}{2}-55L\sqrt{d}\qquad \text{and}\\
\sum_{(i',w')\in N_{\Gamma}(i,w)\cap S} \frac{a_{i,w}a_{i',w'}}{n} b(\epsilon_{i,w,i',w'})& \ge \frac{L}{10} a_{i,w}\log\left(\frac{en}{a_{i,w}}\right) \qquad \text{for all} \, (i,w)\in S\, .
\end{align*}
\end{prop}
The second inequality in \refP{prop:redbound} encodes the idea that the pattern is everywhere locally unlikely.
Before continuing to the proof of \refP{prop:redbound} let us show that \refT{redbound} does indeed follow from \refP{prop:redbound} and the probability bound \refC{cor:bb}.

\begin{proof}[Proof of \refT{redbound}]
Given $L\ge 20$ and a pattern $({\bf a},{\bf e})$, let $S=S(({\bf a},{\bf e}),L)$ be the subset of $V(\Gamma)$ given by \refP{prop:redbound}. Since it is immediate from the statement of \refP{prop:redbound} that
\[ \tilde{p}(({\bf a},{\bf e})_{S}) \geq \frac{p({\bf a},{\bf e})}{2}-55L\sqrt{d},\]
all that is left to prove is that
\[
\p{({\bf a},{\bf e})_S~\mbox{can be found in G}} \ll \prod_{(i,w) \in S}\!\! a_{i,w}^{d/4}\, \pran{\prod_{(i,w) \in S} {n \choose a_{i,w}\wedge \lfloor n/2 \rfloor}}^{1-L/10}.
\]
We recall that finding a copy of $({\bf a},{\bf e})_S$ in $G$ corresponds exactly to finding a witness, i.e.\ a collection of sets $\{A_{i,w}\}_{(i,w) \in S}$, such that for each $(i,w)\in S$ the set $A_{i,w}\subseteq V_i$ has cardinality $a_{i,w}$, and such that $e(A_{i,w},A_{i',w'})=e_{i,w,i',w'}$ whenever $(i,w)\sim_{\Gamma} (i',w')$.

Since there are at most
\[ \prod_{(i,w)\in S}\binom{n}{a_{i,w}\wedge \lfloor n/2\rfloor}\]
choices of the collection $\{A_{i,w}\}_{(i,w) \in S}$, it suffices to prove that
\[
\p{\bigcap_{(i,w)\sim_{\Gamma} (i',w')} \{e(A_{i,w},A_{i',w'})=e_{i,w,i',w'}\}}\ll \prod_{(i,w) \in S} \!\! a_{i,w}^{d/4}\, \pran{\prod_{(i,w) \in S} {n \choose a_{i,w}\wedge \lfloor n/2 \rfloor}}^{-L/10}
\]
for each such collection $\{A_{i,w}\}_{(i,w) \in S}$. Furthermore, since the inequality $(en/a)^a \ge {n \choose a\wedge \lfloor n/2 \rfloor}$ holds for all $1\le a\le n$, it suffices to prove that
\begin{align*}
& \quad \p{\bigcap_{(i,w)\sim_{\Gamma} (i',w')} \{e(A_{i,w},A_{i',w'})=e_{i,w,i',w'}\}} \\
\ll & \quad \prod_{(i,w) \in S}\!\! a_{i,w}^{d/4}\, \cdot \, \exp\left(-\frac{L}{10}\, \cdot \sum_{(i,w) \in S} a_{i,w}\, \log\left(\frac{en}{a_{i,w}}\right)\right)
\end{align*}
for each such collection $\{A_{i,w}\}_{(i,w) \in S}$.

The required bound now follows by applying \refC{cor:bb} for each matching, then using the independence of the matchings and the second bound of \refP{prop:redbound}.
\end{proof}
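The elementary inequality $(en/a)^a \ge {n \choose a\wedge \lfloor n/2 \rfloor}$ invoked above is standard for $a\le n/2$, and follows for larger $a$ since $a\mapsto a\log(en/a)$ is increasing on $[1,n]$; a brute-force numerical check:
\begin{verbatim}
from math import comb, e

for n in (10, 57, 200):
    for a in range(1, n + 1):
        assert (e * n / a) ** a >= comb(n, min(a, n // 2))
\end{verbatim}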
The rest of the section is dedicated to proving \refP{prop:redbound}. We divide into two cases based on whether the potency of $({\bf a},{\bf e})$ derives mostly from large deviation terms (terms in which $\epsilon_{i,w,i',w'}$ is particularly large) or small deviation terms. The reason for splitting the proof in this way is that the expression $b(\epsilon_{i,w,i',w'})$ behaves rather differently in the two regimes, resembling $\epsilon\log{\epsilon}$ and $\epsilon^2$ respectively.

We shall partition $\Gamma$ into two subgraphs $\Gamma_{\mathrm{LD}}=\Gamma_{\mathrm{LD}}({\bf a},{\bf e})$ and $\Gamma_{\mathrm{SD}}=\Gamma_{\mathrm{SD}}({\bf a},{\bf e})$ by setting
\[ V(\Gamma_{\mathrm{LD}}) \, =\, V(\Gamma_{\mathrm{SD}}) \, = \, V(\Gamma)\]
and defining
\begin{align*}
E(\Gamma_{\mathrm{LD}}) & \, =\, \{(i,w)\sim_{\Gamma} (i',w')\, :\, \epsilon_{i,w,i',w'}({\bf a},{\bf e})> e^2 -1\}\, ,
\\
E(\Gamma_{\mathrm{SD}}) & \, =\, \{(i,w)\sim_{\Gamma} (i',w')\, :\, -1 \le \epsilon_{i,w,i',w'}({\bf a},{\bf e})\le e^2 -1\} \, .
\end{align*}
We have chosen $e^2-1$ as the cutoff between the large and small deviations regimes as a matter of technical convenience, to do with the details of the proof.
Note that the potency $p({\bf a},{\bf e})$ of a pattern $({\bf a},{\bf e})$ may be expressed as
\[ p({\bf a},{\bf e}) \, = \, \left| \sum_{(i,w)\sim_{\Gamma} (i',w')}\frac{w w' a_{i,w} a_{i',w'}}{n} \, \epsilon_{i,w,i',w'} \right|\, .\]
We now define two variants of potency:
\[ p_{\mathrm{LD}}({\bf a},{\bf e})\, = \, \left|\sum_{(i,w)\sim_{\Gamma_{\mathrm{LD}}} (i',w')}\frac{w w' a_{i,w} a_{i',w'}}{n} \, \epsilon_{i,w,i',w'} \right| \]
and
\[ p_{\mathrm{SD}}({\bf a},{\bf e})\, = \, \left|\sum_{(i,w)\sim_{\Gamma_{\mathrm{SD}}} (i',w')}\frac{w w' a_{i,w} a_{i',w'}}{n} \, \epsilon_{i,w,i',w'} \right|\, .\]
These expressions respectively capture the ``large deviations potency'' and ``small deviations potency.''
Since $p({\bf a},{\bf e}) \le p_{\mathrm{LD}}({\bf a},{\bf e})+p_{\mathrm{SD}}({\bf a},{\bf e})$, for every pattern $({\bf a},{\bf e})$ we have that
\[ \max\{p_{\mathrm{LD}}({\bf a},{\bf e}),p_{\mathrm{SD}}({\bf a},{\bf e})\}\ge \frac{p({\bf a},{\bf e})}{2}\, .\]
We shall prove the following propositions.

\begin{prop}\label{prop:rbld} Fix $L \ge 20$.
For any pattern $({\bf a},{\bf e})$, there exists $S=S(({\bf a},{\bf e}),L) \subset V(\Gamma)$ such that the following properties hold:
\[
p_{\mathrm{LD}}(({\bf a},{\bf e})_{S})\geq p_{\mathrm{LD}}({\bf a},{\bf e})-30L\sqrt{d} \, ,
\]
and, for all $(i,w) \in S$,
\[
\sum_{(i',w')\in N_{\Gamma_{\mathrm{LD}}}(i,w)\cap S} \frac{a_{i,w}a_{i',w'}}{n} \, \cdot \, \pran{1+\frac{\epsilon_{i,w,i',w'}}{2}} \log(1+\epsilon_{i,w,i',w'}) \ge \frac{L}{4} a_{i,w}\log\left(\frac{en}{a_{i,w}}\right).
\]
\end{prop}
\begin{prop}\label{prop:rbsd} Fix $L \ge 20$.
For any pattern $({\bf a},{\bf e})$, there exists $S=S(({\bf a},{\bf e}),L) \subset V(\Gamma)$ such that the following properties hold.
\begin{align*}
p_{\mathrm{SD}}(({\bf a},{\bf e})_{S}) & \geq p_{\mathrm{SD}}({\bf a},{\bf e})-55L\sqrt{d}\qquad \text{and}\\
\sum_{(i',w')\in N_{\Gamma_{\mathrm{SD}}}(i,w)\cap S} \frac{a_{i,w}a_{i',w'}}{n} \, \cdot \, \frac{\epsilon_{i,w,i',w'}^2}{15} & \ge \frac{L}{10} a_{i,w}\log\left(\frac{en}{a_{i,w}}\right) \qquad \text{for all} \, (i,w)\in S\, .
\end{align*}
\end{prop}

We note, by elementary calculus, that
\begin{equation}\label{eq:b}
b(\epsilon) \, \ge \begin{cases} \frac{\epsilon^2}{15} & \mbox{ if } \epsilon \leq e^2-1, \\
\pran{1+\frac{\epsilon}{2}} \log(1+\epsilon) & \mbox{ if } \epsilon > e^2 -1.
\end{cases}
\end{equation}
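Both branches of \eqref{eq:b} are routine calculus (the second is equivalent to $\log(1+\epsilon)\ge 2$); a quick numerical confirmation on a grid:
\begin{verbatim}
import numpy as np

def b(eps):
    return (1 + eps) * np.log1p(eps) - eps

cut = np.e ** 2 - 1
small = np.linspace(-1 + 1e-9, cut, 10 ** 5)
large = np.linspace(cut, 10 ** 4, 10 ** 5)
assert (b(small) >= small ** 2 / 15 - 1e-12).all()
assert (b(large) >= (1 + large / 2) * np.log1p(large) - 1e-9).all()
\end{verbatim}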
Given \eqref{eq:b}, \refP{prop:redbound} follows immediately from Propositions~\ref{prop:rbld} and~\ref{prop:rbsd}, by applying the appropriate proposition to the pattern $({\bf a},{\bf e})$ (i.e.\ applying \refP{prop:rbld} if $p_{\mathrm{LD}}({\bf a},{\bf e}) \ge p({\bf a},{\bf e})/2$ and applying \refP{prop:rbsd} if $p_{\mathrm{SD}}({\bf a},{\bf e}) \ge p({\bf a},{\bf e})/2$). The required bound on $\tilde{p}(({\bf a},{\bf e})_S)$ is obtained by considering the set $E\subset E(\Gamma)$ given by $E=E(\Gamma_{\mathrm{LD}}|_S)$ or by $E=E(\Gamma_{\mathrm{SD}}|_S)$, as appropriate.

The proofs of \refP{prop:rbld} and \refP{prop:rbsd} are similar. In both cases the set $S$ is found by repeatedly applying some straightforward reduction rules. In spirit, these rules can be understood by analogy with the following procedure (see the sketch after this paragraph).
Suppose we are given a graph $F=(V,E)$ with average degree $\mu$. To find an induced subgraph with {\em minimum} degree at least $\mu/2$, we may simply repeatedly throw away vertices of degree less than $\mu/2$ until no such vertices remain. Throwing away vertices of such small degree can only increase the average degree in what remains, so this procedure must terminate with a non-empty subgraph satisfying the desired global minimum degree requirement. In this example the measure of `importance' of a vertex is its degree. When we apply a similar style of argument, the analogue of degree will be ``local potency'', suitably defined. Also, our rules for throwing away vertices will be more involved, and so it will take some work to verify that in throwing away vertices we do not decrease the overall potency by too much.
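The degree-based procedure just described takes only a few lines; in the proofs below, `degree' is replaced by local potency and the single threshold by the rules (LD1)--(LD2) or (SD1)--(SD3). A toy Python sketch:
\begin{verbatim}
def peel(adj):
    """adj: dict mapping each vertex to its set of neighbours.
    Repeatedly delete vertices of degree < mu/2, where mu is the
    average degree of the input graph; the result is non-empty and
    has minimum degree at least mu/2."""
    adj = {v: set(nb) for v, nb in adj.items()}
    mu = sum(len(nb) for nb in adj.values()) / len(adj)
    while True:
        low = [v for v in adj if len(adj[v]) < mu / 2]
        if not low:
            return adj
        for v in low:
            for u in adj[v]:
                adj[u].discard(v)
            del adj[v]
\end{verbatim}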
We now proceed to the proofs of Propositions~\ref{prop:rbld} and~\ref{prop:rbsd}. We prove \refP{prop:rbld} first since the reductions required for its proof are simpler.

\subsection{Large deviation patterns: A proof of \refP{prop:rbld}}\label{sec:ldps}
Now and for the remainder of Section~\ref{sec:ldps}, we fix $L\ge 20$ and a pattern $({\bf a},{\bf e})$.
Our aim is to find $S \subset V(\Gamma)$ with $p_{\mathrm{LD}}(({\bf a},{\bf e})_{S}) \geq p_{\mathrm{LD}}({\bf a},{\bf e})-30L\sqrt{d}$ and such that the inequality
\[ \sum_{(i',w')\in N_{\Gamma_{\mathrm{LD}}}(i,w)\cap S} \frac{a_{i,w}a_{i',w'}}{n} \, \cdot \, \pran{1+\frac{\epsilon_{i,w,i',w'}}{2}} \log(1+\epsilon_{i,w,i',w'}) \ge \frac{L}{4} a_{i,w}\log\left(\frac{en}{a_{i,w}}\right) \]
holds for all $(i,w)\in S$.

As alluded to above, we shall choose $S$ by repeatedly removing vertices of $V(\Gamma)$ that correspond to ``inconsequential'' parts of the pattern, so that in the resulting sub-pattern $({\bf a},{\bf e})_S$ there is a significant contribution to the overall potency from every vertex. We write
\[ p_{\mathrm{LD}}(({\bf a},{\bf e});(i,w)) = \left|\sum_{(i',w')\in N_{\Gamma_{\mathrm{LD}}}(i,w)} \frac{w w' a_{i,w} a_{i',w'}}{n} \, \epsilon_{i,w,i',w'}\right| \, ,\]
for the `local' large deviations potency at $(i,w)$, and write
\[ \widehat{N}_{i,w}({\bf a},{\bf e}) = \sum_{(i',w')\in N_{\Gamma}(i,w)} (w')^2 a_{i',w'} \left(\frac{w'}{wd^{1/2}}\right)\]
for the amount of ``$L_2$-squared weight'' of the type ${\bf a}$ that appears on $\Gamma$-neighbours of $(i,w)$, weighted by the factor $w'/wd^{1/2}$.

The intuition behind this weighting factor is that in the large deviations regime, the most significant contributions to potency should be made by edges $(i,w)(i',w') \in E(\Gamma_{\mathrm{LD}})$ with $w$ and $w'$ close to a factor of $d^{1/2}$ apart. Note that this is {\em exactly} what happens for the eigenfunctions of the universal cover (the infinite $d$-ary tree) with near-supremal eigenvalues.

We say that a set $U \subset V(\Gamma)$ satisfies condition (LD1) if for each $(i,w) \in U$ we have
\begin{itemize}
\item[(LD1)] $p_{\mathrm{LD}}(({\bf a},{\bf e})_U;(i,w)) \ge L a_{i,w}w^2 d^{1/2}$,
\end{itemize}
and that $U$ satisfies condition (LD2) if for each $(i,w) \in U$ we have
\begin{itemize}
\item[(LD2)] $p_{\mathrm{LD}}(({\bf a},{\bf e})_U;(i,w)) \ge L \widehat{N}_{i,w}({\bf a},{\bf e}) /d^{1/2}$.
\end{itemize}
Condition (LD1) asks that $p_{\mathrm{LD}}(({\bf a},{\bf e})_U;(i,w))$ is large relative to the size of $a_{i,w}$; the factor $a_{i,w}w^2$ approximately measures the proportion of $L_2$-squared weight of ${\bf a}$ used by $a_{i,w}$. In Condition (LD2) the factor $a_{i,w}w^2$ is replaced by $\widehat{N}_{i,w}/d$, which (in some sense) measures the average level of opportunity available to $a_{i,w}$. (In the case of large deviations a good opportunity corresponds to $(i',w') \in N_{\Gamma_{\mathrm{LD}}}(i,w) \cap U$ with $a_{i',w'}$ large and such that the ratio $w'/w$ is close to $d^{1/2}$.)

We then let $S=S(({\bf a},{\bf e}),L) \subset V(\Gamma)$ be the maximal subset of $V(\Gamma)$ satisfying both (LD1) and (LD2).
The monotonicity of $p_{\mathrm{LD}}(({\bf a},{\bf e})_U;(i,w))$ in $U$ guarantees that $S$ is unique and that $S$ can be determined by starting from $V(\Gamma)$ and repeatedly throwing away vertices that violate at least one of the two conditions, until no such vertices remain.
In other words, writing $k = |V(\Gamma)|-|S|$, we may order the vertices of $V(\Gamma) \setminus S$ as $(i_1,w_1),\ldots,(i_k,w_k)$ in such a way that for each $1 \leq j \leq k$, letting $S_j = V(\Gamma) \setminus \{(i_1,w_1),\ldots,(i_{j-1},w_{j-1})\}$, we have
\[
p_{\mathrm{LD}}(({\bf a},{\bf e})_{S_j};(i_j,w_j)) < \max\left\{L a_{i_j,w_j}w_j^2 d^{1/2},\ L \widehat{N}_{i_j,w_j}({\bf a},{\bf e})/d^{1/2}\right\}.
\]
\refP{prop:rbld} is an immediate consequence of the following two lemmas: the first inequality of the proposition is precisely \refL{lem:potboundLD}, and the second follows by applying \refL{lem:suffLD} with $U=S$.
\begin{lem}\label{lem:potboundLD}
With $S$ as above, $p_{\mathrm{LD}}(({\bf a},{\bf e})_{S})\geq p_{\mathrm{LD}}({\bf a},{\bf e})-30L\sqrt{d}$.
\end{lem}
\begin{lem}\label{lem:suffLD}
If $U \subset V(\Gamma)$ satisfies (LD1) and (LD2) then for all $(i,w) \in U$,
\[
\sum_{(i',w')\in N_{\Gamma_{\mathrm{LD}}}(i,w)\cap U} \frac{a_{i,w}a_{i',w'}}{n} \, \cdot \, \pran{1+\frac{\epsilon_{i,w,i',w'}}{2}} \log(1+\epsilon_{i,w,i',w'}) \ge \frac{L}{4} a_{i,w}\log\left(\frac{en}{a_{i,w}}\right).
\]
\end{lem}
\begin{proof}[Proof of \refL{lem:potboundLD}]
Removing the vertex $(i_j,w_j)$ from $S_j$ removes from the sum defining $p_{\mathrm{LD}}$ precisely the terms corresponding to edges of $\Gamma_{\mathrm{LD}}$ incident to $(i_j,w_j)$, so by the triangle inequality it suffices to prove that
\begin{equation}\label{thirty}
\sum_{j=1}^{k} p_{\mathrm{LD}}(({\bf a},{\bf e})_{S_j};(i_j,w_j)) \le 30L\sqrt{d}.
\end{equation}
Let $W_1$ be the set of removed vertices $(i_j,w_j)$ with $p_{\mathrm{LD}}(({\bf a},{\bf e})_{S_j};(i_j,w_j)) < L a_{i_j,w_j}w_j^2 d^{1/2}$, and let $W_2$ be the set of remaining removed vertices, for which necessarily $p_{\mathrm{LD}}(({\bf a},{\bf e})_{S_j};(i_j,w_j)) < L \widehat{N}_{i_j,w_j}({\bf a},{\bf e})/d^{1/2}$.
Since ${\bf a}$ is a $Z$-type, property {\bf 4} gives
\[
\sum_{(i_j,w_j) \in W_1} p_{\mathrm{LD}}(({\bf a},{\bf e})_{S_j};(i_j,w_j)) \le Ld^{1/2}\sum_{(i,w)\in V(\Gamma)} a_{i,w}w^2 \le 10Ld^{1/2}.
\]
Next, since the elements of $D^{>0}$ are consecutive powers of $2$, and since $w > d^{-1/2}w'$ whenever $(i,w)\sim_{\Gamma}(i',w')$ with $w,w' \in D^{>0}$, it follows that
\begin{equation}\label{twod}
\sum_{(i,w)\in N_{\Gamma}(i',w')} \frac{w'}{wd^{1/2}}
\le \sum_{i \in N_H(i')} \sum_{w \in D^{>0},\, w > d^{-1/2}w'} \frac{w'}{wd^{1/2}}
\le \sum_{i \in N_H(i')} \sum_{j \in {\mathbb N}_0} 2^{-j}
= 2d
\end{equation}
for each $(i',w')\in V(\Gamma)$.
Thus
\begin{align*}
\sum_{(i_j,w_j) \in W_2} p_{\mathrm{LD}}(({\bf a},{\bf e})_{S_j};(i_j,w_j)) &
\le \frac{L}{d^{1/2}}\, \cdot \, \sum_{(i,w)\in W_2} \sum_{(i',w')\in N_{\Gamma}(i,w)}a_{i',w'}w'^2\left(\frac{w'}{wd^{1/2}}\right) \\
&
\le \frac{L}{d^{1/2}}\, \cdot \, \sum_{(i',w')\in V(\Gamma)} a_{i',w'}w'^2\, \cdot \sum_{(i,w)\in N_{\Gamma}(i',w')} \frac{w'}{wd^{1/2}}\\
&
\le 20 Ld^{1/2}\, ,
\end{align*}
where the final inequality follows from \eqref{twod} and the fact that ${\bf a}$ is a $Z$-type.

Since $W_1\cup W_2=\{(i_1,w_1),\dots ,(i_k,w_k)\}$, the above bounds establish \eqref{thirty}, completing the proof of the lemma.\end{proof}

In preparation for the proof of \refL{lem:suffLD}, we first record the following fact.
Given $U \subset V(\Gamma)$ and $(i,w) \in V(\Gamma)$, define
\[
U_{\mathrm{LD}}(i,w) = \left\{(i',w') \in N_{\Gamma_{\mathrm{LD}}}(i,w)\cap U: \frac{\epsilon_{i,w,i',w'}w^2 d}{w'^2}\,\ge \, \frac{Ln}{2a_{i,w}}\right\} \, .
\]
The set $U_{\mathrm{LD}}(i,w)$ contains the vertices we shall view as making an important contribution in our forthcoming use of \refL{lem:measure}.
\begin{lem}\label{lem:pointwise} If $U \subset V(\Gamma)$ satisfies (LD2) then for all $(i,w)\in U$,
\[
\sum_{(i',w')\in U_{\mathrm{LD}}(i,w)} \frac{w w' a_{i,w} a_{i',w'}}{n} \, \epsilon_{i,w,i',w'} \ge \frac{1}{2} p_{\mathrm{LD}}(({\bf a},{\bf e})_U;(i,w))\, .
\]
\end{lem}
\begin{proof}
Fix $(i,w) \in U$, let $\Omega = N_{\Gamma_{\mathrm{LD}}}(i,w)\cap U$, and let $\cE = 2^{\Omega}$.
Then for $(i',w') \in \Omega$, set
\[
\mu(i',w') = a_{i,w}a_{i',w'}, \quad h(i',w') = \frac{ww'\epsilon_{i,w,i',w'}}{n}, \quad g(i',w')=\frac{w'^2}{nd^{1/2}}\frac{w'}{wd^{1/2}}.
\]
Using (LD2), we then have
\[
\int h \,d\mu = p_{\mathrm{LD}}(({\bf a},{\bf e})_U;(i,w))\ge L\frac{\widehat{N}_{i,w}}{d^{1/2}}
\]
and
\[
\int g\, d\mu = \frac{a_{i,w}}{nd^{1/2}} \sum_{(i',w')\in N_{\Gamma_{\mathrm{LD}}}(i,w)\cap U} a_{i',w'}w'^2\, \frac{w'}{wd^{1/2}} \le\frac{a_{i,w}\widehat{N}_{i,w}}{nd^{1/2}}\, .
\]
It follows that
\[
U_{\mathrm{LD}}(i,w) \supseteq \left\{(i',w') \in N_{\Gamma_{\mathrm{LD}}}(i,w)\cap U: \frac{h(i',w')}{g(i',w')} \geq \frac{1}{2}\frac{\int h \,d\mu}{\int g \,d\mu}\right\},
\]
and so by \refL{lem:measure} applied with $c=1/2$,
\[
\sum_{(i',w')\in U_{\mathrm{LD}}(i,w)} \frac{w w' a_{i,w} a_{i',w'}}{n}\, \epsilon_{i,w,i',w'} \ge \int_{\frac{h}{g} \geq \frac{1}{2}\frac{\int h}{\int g}} h \, d\mu \ge
\frac{1}{2} \int_{\Omega} h \, d\mu = \frac{1}{2} p_{\mathrm{LD}}(({\bf a},{\bf e})_U;(i,w)).
\]
\end{proof}

We now prove \refL{lem:suffLD}, completing the proof of \refP{prop:rbld}.

\begin{proof}[Proof of \refL{lem:suffLD}]
We are given $U \subset V(\Gamma)$ satisfying (LD1) and (LD2), and aim to show that
\[
\sum_{(i',w')\in N_{\Gamma_{\mathrm{LD}}}(i,w)\cap U} \frac{a_{i,w}a_{i',w'}}{n} \, \cdot \, \pran{1+\frac{\epsilon_{i,w,i',w'}}{2}} \log(1+\epsilon_{i,w,i',w'}) \ge \frac{L}{4} a_{i,w}\log\left(\frac{en}{a_{i,w}}\right)\, .
\]

Note that $c\log{x}\ge \log(c^2 x)$ for $x\ge e^2$ and $c\ge 1$.
Since $1+\\epsilon_{i,w,i',w'}\\ge e^2$ for all $(i',w')\\in N_{\\Gamma_{\\mathrm{LD}}}(i,w)$ we have that \n\\[\n\\frac{w d^{1\/2}}{w'} \\log(1+\\epsilon_{i,w,i',w'}) \\ge \\log\\left((1+\\epsilon_{i,w,i',w'})\\left(\\frac{w^2 d}{(w')^2}\\right)\\right)\\ge \\log\\left(\\frac{\\epsilon_{i,w,i',w'}w^2 d}{(w')^2}\\right)\\, .\n\\]\nIt follows that\n\\begin{align}\n& \\quad \\frac{a_{i,w}a_{i',w'}}{n} \\, \\cdot \\, \\pran{1+\\frac{\\epsilon_{i,w,i',w'}}{2}} \\log(1+\\epsilon_{i,w,i',w'}) \\nonumber\\\\\n= & \\quad \n\\frac{1}{w^2 d^{1\/2}} \\frac{ww'a_{i,w}a_{i',w'}}{n} \\, \\cdot \\left(\\frac{wd^{1\/2}}{w'}\\right)\\, \\cdot \\pran{1+\\frac{\\epsilon_{i,w,i',w'}}{2}} \\log(1+\\epsilon_{i,w,i',w'}) \\nonumber\\\\\n\\ge & \\quad \n\\frac{1}{2w^2 d^{1\/2}}\\frac{ww'a_{i,w}a_{i',w'}}{n} \\epsilon_{i,w,i',w'} \\log\\left(\\frac{\\epsilon_{i,w,i',w'}w^2 d}{(w')^2}\\right)\\, \\label{eq:rbld1}.\n\\end{align}\n\nNote that the expression inside the preceding logarithm is precisely the expression included in the definition of $U_{\\mathrm{LD}}(i,w)$. \nApplying first (\\ref{eq:rbld1}), then \\refL{lem:pointwise} and finally (LD1), we obtain that \n\\begin{align*}\n& \\quad \\sum_{(i',w')\\in N_{\\Gamma_{\\mathrm{LD}}}(i,w)\\cap U} \\frac{a_{i,w}a_{i',w'}}{n} \\, \\cdot \\, \\pran{1+\\frac{\\epsilon_{i,w,i',w'}}{2}} \\log(1+\\epsilon_{i,w,i',w'}) \\\\\n\\ge & \\quad\n\\frac{1}{2w^2 d^{1\/2}}\\sum_{(i',w')\\in N_{\\Gamma_{\\mathrm{LD}}}(i,w)\\cap U} \\frac{ww'a_{i,w}a_{i',w'}}{n} \\epsilon_{i,w,i',w'} \\log\\left(\\frac{\\epsilon_{i,w,i',w'}w^2 d}{w'^2}\\right)\\\\\n\\ge & \\quad \n\\frac{1}{4w^2 d^{1\/2}}p_{\\mathrm{LD}}(({\\bf a},{\\bf e})_U;(i,w))\\log\\left(\\frac{Ln}{2a}\\right)\\\\\n\\ge & \\quad \n\\frac{L}{4} a_{i,w}\\log\\left(\\frac{en}{a_{i,w}}\\right)\\, ,\n\\end{align*}\ncompleting the proof of the lemma.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Small deviation patterns: A proof of \\refP{prop:rbsd}}\\label{sec:rbsd}\nOur proof of \\refP{prop:rbsd} is similar to our proof of \\refP{prop:rbld} given above. In that spirit, throughout Section~\\ref{sec:rbsd} we fix $L\\ge 20$ and a pattern $({\\bf a},{\\bf e})$. We will find the required set $S$ by repeatedly removing vertices of $V(\\Gamma)$ that make a small contribution to potency.\n\nGiven a pattern $({\\bf a},{\\bf e})$ and $(i,w) \\in V(\\Gamma)$, we write \n\\[ p_{\\mathrm{SD}}(({\\bf a},{\\bf e});(i,w)) = \\left|\\sum_{(i',w')\\in N_{\\Gamma_{\\mathrm{SD}}}(i,w)} \\frac{w w' a_{i,w} a_{i',w'}}{n} \\, \\epsilon_{i,w,i',w'}\\right| \\, ,\\]\nfor the `local' small deviations potency at $(i,w)$, \nand write \n\\[\nN_i = N_i ({\\bf a},{\\bf e}) = \\sum_{(i',w') \\in V(\\Gamma): i' \\sim_H i} w'^2 a_{i',w'}.\n\\]\nfor the proportion of the $L_2$-squared weight of ${\\bf a}$ that appears on fibres $V_{i'}$ that neighbour $V_i$.\nOur next aim is to state small deviations analogues of Lemmas~\\ref{lem:potboundLD} and~\\ref{lem:suffLD}. However, our reduction rules are slightly more involved in this case, and require one additional definition. \nLet \n\\[\nM_{i,w} = M_{i,w}({\\bf a},{\\bf e}) = \\max\\left\\{\\frac{N_i}{a_{i,w}w^2 d},\\frac{en}{a_{i,w}}\\right\\}\\,, \n\\]\nand let $m_{i,w} = (\\log M_{i,w})\/M_{i,w}$. Note that we always have $M_{i,w} \\ge en\/a_{i,w} \\ge e$. 
Since the function $x \\log x^{-1}$ is increasing on $(0,e^{-1}]$ it follows that we may equivalently write \n\\[\nm_{i,w} = \t\\begin{cases}\n\t\t\t\\frac{a_{i,w} w^2 d}{N_i} \\log\\left(\\frac{N_i}{a_{i,w} w^2 d}\\right) & \\mbox{ if } \n\t\t\t\\frac{w^2 d}{N_i} \\le \\frac{1}{en} \\\\\n\t\t\t\\frac{a_{i,w}}{en} \\log\\left(\\frac{en}{a_{i,w}}\\right) & \\mbox{ if }\n\t\t\t\\frac{1}{en} \\le \\frac{w^2 d}{N_i} \n\t\t\t\\end{cases}\n\\]\nIn what follows we will use that $m_{i,w} \\le 1.18 M_{i,w}^{-2\/3}$ always holds. \nThis follows from the fact that $\\log x \\le 1.18 x^{1\/3}$ on $(0,\\infty)$.\n\n \nWe say that a set $U \\subset V(\\Gamma)$ satisfies (SD1), (SD2), and (SD3), respectively, if\n\\begin{itemize}\n\\item[(SD1)] $p_{\\mathrm{SD}}(({\\bf a},{\\bf e})_U;(i,w)) \\ge L a_{i,w}w^2 d^{1\/2}$\n\\item[(SD2)] $p_{\\mathrm{SD}}(({\\bf a},{\\bf e})_U;(i,w)) \\ge L N_i({\\bf a},{\\bf e}) a_{i,w}\/(nd^{1\/2})$, or \n\\item[(SD3)] $p_{\\mathrm{SD}}(({\\bf a},{\\bf e})_U;(i,w)) \\ge L N_i({\\bf a},{\\bf e}) m_{i,w}({\\bf a},{\\bf e})\/d^{1\/2} \\, $\n\\end{itemize}\nfor all $(i,w)\\in U$.\n\nWe remark that (SD1) is identical to (LD1), and asks that $p_{\\mathrm{SD}}(({\\bf a},{\\bf e})_U;(i,w))$ is large relative to $a_{i,w}w^2$.\nIn (SD2), $a_{i,w}\/n$ is the proportion of the fibre $V_i$ consumed by a set of size $a_{i,w}$, \nand $N_i \/ d$ represents the `average opportunity' in the neighbourhood of $(i,w)$. \nNotice that unlike in the large deviations case, no factor of the form $(w'\/wd^{1\/2})$ appears. This corresponds to the intuition that in the small deviations case, large potency is most likely to come from large sets of roughly equal weight. \nFinally, (SD3), in which $m_{i,w}$ appears, is a slight strengthening of either (SD1) or (SD2) -- by a logarithmic factor -- depending on which value $m_{i,w}$ takes. \n\nLet $S=S(({\\bf a},{\\bf e}),L)$ be the maximal subset of $V(\\Gamma)$ such that every $(i,w) \\in S$ satisfies (SD1), (SD2), and (SD3). As in \\refS{sec:ldps}, writing $k = |V(\\Gamma)|-|S|$, \nwe may order the vertices of $V(\\Gamma) \\setminus S$ as $(i_1,w_1),\\ldots,(i_k,w_k)$ in such a way that \nfor each $1 \\leq j \\leq k$, letting $S_j = V(\\Gamma) \\setminus \\{(i_1,w_1),\\ldots,(i_{j-1},w_{j-1})\\}$, \nwe have \n\\[\np_{\\mathrm{SD}}(({\\bf a},{\\bf e})_{S_j};(i_j,w_j)) 6$.\n\nIn the remaining case we have $1\/en \\le w^2 d\/N_i$, from which it follows that $m_{i,w}=a_{i,w}\\log(en\/a_{i,w})\/en$. So, by (SD3), we have that\n\\[ \np_{\\mathrm{SD}}(({\\bf a},{\\bf e})_U;(i,w))\\ge \\frac{La_{i,w}N_i}{end^{1\/2}}\\log(en\/a_{i,w})\\, .\n\\]\nCombining this bound with (SD1), \nwe obtain that \n\\[\np_{\\mathrm{SD}}(({\\bf a},{\\bf e})_U;(i,w))^2 \\ge \\frac{L^2a_{i,w}^2 w^2 N_i}{en}\\log(en\/a_{i,w}),\n\\]\nwhich verifies \\eqref{STP} in this case since $L\\ge 20\\ge 6e$.This completes the proof. \n\\end{proof}\n\n\n\\section{Proof of \\refT{explain}}\\label{sec:explain}\n\nOur proof of \\refT{explain} is similar in nature to our proof of \\refT{main}. As in that proof we shall use \\refP{findwit}, which tells us that if $\\lambda^*(G)$ is large then $G$ contains a pattern $({\\bf a},{\\bf e})$ of large potency. We require two other results. The first, which is easily proved, is an approximate converse to \\refP{findwit}. 
The second, whose proof is more involved, is a variant of \\refT{redbound}.\n\nGiven a pattern $({\\bf a},{\\bf e})$ and a witness $\\{A_{i,w}\\}_{(i,w) \\in V(\\Gamma)}$ for the pattern, write $\\alpha=\\alpha({\\bf a},{\\bf e})=\\sum_{(i,w) \\in V(\\Gamma)} a_{i,w}$, and write $G[\\{A_{i,w}\\}_{(i,w) \\in V(\\Gamma)}]$ for the subgraph of $G$ induced by $\\bigcup_{(i,w) \\in V(\\Gamma)} A_{i,w}$. \n\\begin{prop}\\label{witfind}\nIf $G$ contains a pattern $({\\bf a},{\\bf e})$ with $p({\\bf a},{\\bf e})=p\\sqrt{d}$ then $\\lambda^*(G) \\geq (2p-40)\\sqrt{d}$. \nFurthermore, $\\lambda(G[\\{A_{i,w}\\}_{(i,w) \\in V(\\Gamma)}]) \\geq (2p-40)\\sqrt{d} - \\frac{\\alpha\\sqrt{10}}{n}$. \n\\end{prop}\n\\begin{proof}\nLet $\\{A_{i,w}: (i,w) \\in V(\\Gamma)\\}$ be a witness for $({\\bf a},{\\bf e})$, \nand let $y \\in \\mathbb{R}^{nh}$ be defined by, for each \n$(i,w) \\in V(\\Gamma)$, setting $y(v)=w$ for all $v \\in A_{i,w}$ (and setting $y(v)=0$ for all remaining $v$). \nSince \n\\[\n\\|y\\|_2^2 = \\sum_{(i,w) \\in V(\\Gamma)} w^2 a_{i,w} \\le 10, \n\\]\n\nby \\refL{ipoe} we have \n\\[\n|\\ipo{y}_N| = |\\ipo{y}_{N,E^*} + \\ipo{y}_{N,\\mathbb{R}^2\\setminus E^*}| \\geq |\\ipo{y}_{N,\\mathbb{R}^2\\setminus E^*}| - 40\\sqrt{d}.\n\\]\nBut $2p({\\bf a},{\\bf e}) = |\\ipo{y}_{N,\\mathbb{R}^2\\setminus E^*}|$, so $|\\ipo{y}_N| \\geq (2p-40)\\sqrt{d}$, and the first result follows from \\refF{uncount}. \n\nFor the second bound, write $G'=G[\\{A_{i,w}\\}_{(i,w) \\in V(\\Gamma)}]$, and write $M'$ for the adjacency matrix of $G'$. \nAlso, let $y' \\in \\mathbb{R}^{a}$ be the vector obtained from $y$ by retaining only coordinates corresponding to vertices of $G'$ \n(recall that all other coordinates of $y$ are equal to zero). Then \n\\[\n|\\ipo{y}_N| = |\\ipo{y}_M- \\ipo{y}_{\\overline{M}}| = |\\ipo{y'}_{M'}- \\ipo{y}_{\\overline{M}}|,\n\\]\nand since all entries of $\\overline{M}$ are either zero or $1\/n$, \n\\[\n\\ipo{y}_{\\overline{M}} \\leq \\frac{1}{n} (\\sum_{v \\in [nh]} y_v)^2 \\leq \\frac{\\alpha\\sqrt{10}}{n}, \n\\]\nwhere in the last inequality we used that, given the constraints that $y$ has $\\alpha$ non-zero entries and $\\|y\\|_2^2 \\leq 10$, the sum $(\\sum_{v \\in [nh]} y_v)^2$ is maximized by taking all nonzero entries equal to $10^{1\/2}\\alpha^{-1\/2}$. \nThus \n\\[\n\\frac{|\\ipo{y'}_{M'}|}{\\|y'\\|_2^2} \\geq \\ipo{y'}_{M'} \\ge |\\ipo{y}_N|-|\\ipo{y'}_M| \\geq (2p-40)\\sqrt{d} - \\frac{\\alpha\\sqrt{10}}{n}.\\qedhere\n\\]\n\\end{proof}\n\n\\begin{thm}\\label{redboundtwo}\nFix $L \\ge 20$. \nFor any pattern $({\\bf a},{\\bf e})$, there exists $S=S(({\\bf a},{\\bf e}),L) \\subset V(\\Gamma)$ such that the following properties hold.\n\\begin{align*}\np(({\\bf a},{\\bf e})_{S}) \t& \\geq p({\\bf a},{\\bf e})- 150 L\\sqrt{d}\\\\\n\\p{({\\bf a},{\\bf e})_S~\\mbox{can be found in G}} & \\ll \\prod_{(i,w) \\in S} a_{i,w}^{d\/4}\\pran{\\prod_{(i,w) \\in S} {n \\choose a_{i,w}\\wedge \\lfloor n\/2 \\rfloor}}^{1-L\/10}. \n\\end{align*}\n\\end{thm}\n\nIn the same way as \\refT{redbound} was deduced from \\refP{prop:redbound} (together with \\refC{cor:bb}) \\refT{redboundtwo} may be deduced from the following proposition (\\refP{prop:redboundtwo}). We recall from \\refS{sec:key} the definition of $b:(-1,\\infty) \\to [0,\\infty)$,\n\\[ \nb(\\epsilon)= (1+\\epsilon)\\log(1+\\epsilon) - \\epsilon \\qquad \\epsilon\\in (-1,\\infty).\n\\]\n\n\\begin{prop}\\label{prop:redboundtwo} Fix $L \\ge 20$. 
\nFor any pattern $({\\bf a},{\\bf e})$, there exists $S=S(({\\bf a},{\\bf e}),L) \\subset V(\\Gamma)$ such that the following properties hold.\n\\begin{align*}\np(({\\bf a},{\\bf e})_{S})\t& \\geq p({\\bf a},{\\bf e}) - 150 L\\sqrt{d}\\qquad \\text{and}\\\\\n\\sum_{(i',w')\\in N_{\\Gamma}(i,w)\\cap S} \\frac{a_{i,w}a_{i',w'}}{n} b(\\epsilon_{i,w,i',w'})& \\ge \\frac{L}{10} a_{i,w}\\log\\left(\\frac{en}{a_{i,w}}\\right) \\qquad \\text{for all} \\, (i,w)\\in S\\, .\\end{align*}\n\\end{prop}\n\\refT{redboundtwo} follows from \\refP{prop:redboundtwo} in exactly the same way that \\refT{redbound} follows from \\refP{prop:redbound}. Rather than repeat this proof we refer the reader to the proof of \\refT{redbound} given in \\refS{sec:key}.\n\nWe recall that our proof of \\refP{prop:redbound} divided into two cases depending on whether $p_{\\mathrm{LD}}({\\bf a},{\\bf e}) \\ge p({\\bf a},{\\bf e})\/2$ or $p_{\\mathrm{SD}}({\\bf a},{\\bf e}) \\ge p({\\bf a},{\\bf e})\/2$. These two cases were dealt with separately by \\refP{prop:rbld} and \\refP{prop:rbsd} respectively. However, in \\refP{prop:redboundtwo} we seek a lower bound on $p(({\\bf a},{\\bf e})_S)$ rather than on $\\tilde{p}(({\\bf a},{\\bf e})_S)$, and for this reason we can not treat the large and small deviations cases separately. In other words, we must work with the graph $\\Gamma$ rather than exclusively focussing on one of the subgraphs $\\Gamma_{\\mathrm{LD}},\\Gamma_{\\mathrm{SD}}$. That having been said, our proof of \\refP{prop:redboundtwo} resembles the proofs of Propositions~\\ref{prop:rbld} and \\ref{prop:rbsd} in almost all other respects.\n\nWe begin by establishing the following sufficient condition for the second bound of \\refP{prop:redboundtwo} to hold. For the remainder of the section fix a pattern $({\\bf a},{\\bf e})$ and a constant $L \\ge 20$. 
In what follows we denote by\n\\[\np(({\\bf a},{\\bf e});(i,w)) = \\left| \\sum_{(i',w')\\in N_{\\Gamma}(i,w)} \\frac{w w' a_{i,w} a_{i',w'}}{n} \\, \\epsilon_{i,w,i',w'}\\right|\n\\]\nthe `local' potency at $(i,w)$, and we recall the large and small deviations variants defined in \\refS{sec:key}:\n\\[\np_{\\mathrm{LD}}(({\\bf a},{\\bf e});(i,w)) = \\left|\\sum_{(i',w')\\in N_{\\Gamma_{\\mathrm{LD}}}(i,w)} \\frac{w w' a_{i,w} a_{i',w'}}{n} \\, \\epsilon_{i,w,i',w'}\\right|\n\\]\nand \n\\[\np_{\\mathrm{SD}}(({\\bf a},{\\bf e});(i,w)) = \\left|\\sum_{(i',w')\\in N_{\\Gamma_{\\mathrm{SD}}}(i,w)} \\frac{w w' a_{i,w} a_{i',w'}}{n} \\, \\epsilon_{i,w,i',w'}\\right|\\, .\n\\]\nWe say that a set $U \\subset V(\\Gamma)$ satisfies (G1), (G2), (G3), and (G4), respectively, if \n\\begin{itemize}\n\\item[(G1)] $p(({\\bf a},{\\bf e})_U;(i,w)) \\ge 2L a_{i,w}w^2 d^{1\/2}$,\n\\item[(G2)] $p(({\\bf a},{\\bf e})_U;(i,w)) \\ge 2L \\widehat{N}_{i,w} \/d^{1\/2}$, \n\\item[(G3)] $p(({\\bf a},{\\bf e})_U;(i,w)) \\ge 2L N_i a_{i,w}\/(nd^{1\/2})$, or \n\\item[(G4)] $p(({\\bf a},{\\bf e})_U;(i,w)) \\ge 2L N_i m_{i,w}\/d^{1\/2} \\, .$\n\\end{itemize}\nfor all $(i,w)\\in U$, where $N_{i}=N_{i}({\\bf a},{\\bf e})$, $\\widehat{N}_{i,w}=\\widehat{N}_{i,w}({\\bf a},{\\bf e})$ and $m_{i,w}=m_{i,w}({\\bf a},{\\bf e})$ are as defined in \\refS{sec:key}.\n\n\\begin{lem}\\label{lem:suff} If $U$ satisfies (G1),(G2),(G3), and (G4) then\n\\[\n\\sum_{(i',w')\\in N_{\\Gamma}(i,w)\\cap U} \\frac{a_{i,w}a_{i',w'}}{n} b(\\epsilon_{i,w,i',w'}) \\ge \\frac{L}{10} a_{i,w}\\log\\left(\\frac{en}{a_{i,w}}\\right) \\, .\n\\]\n\\end{lem}\n\n\\begin{proof} Since $E(\\Gamma)=E(\\Gamma_{\\mathrm{LD}})\\cup E(\\Gamma_{\\mathrm{SD}})$ it follows from the triangle inequality that $p(({\\bf a},{\\bf e})_S;(i,w))\\le p_{\\mathrm{LD}}(({\\bf a},{\\bf e})_U;(i,w)) + p_{\\mathrm{SD}}(({\\bf a},{\\bf e})_U;(i,w))$, and so\n\\[\n\\max\\{p_{\\mathrm{LD}}(({\\bf a},{\\bf e})_U;(i,w)),p_{\\mathrm{SD}}(({\\bf a},{\\bf e})_U;(i,w))\\}\\ge \\frac{p(({\\bf a},{\\bf e})_U;(i,w))}{2}\\, .\n\\]\nFirst suppose that $p_{\\mathrm{LD}}(({\\bf a},{\\bf e})_U;(i,w)) \\ge p(({\\bf a},{\\bf e})_U;(i,w))\/2$. \nIn this case, (G1) and (G2) imply that $U$ satisfies conditions (LD1) and (LD2), and so, by an application of \\refL{lem:suffLD} we then have \n\\[\n\\sum_{(i',w')\\in N_{\\Gamma_{\\mathrm{LD}}}(i,w)\\cap U} \\frac{a_{i,w}a_{i',w'}}{n}\n\\, \\cdot \\, \\pran{1+\\frac{\\epsilon_{i,w,i',w'}}{2}} \\log(1+\\epsilon_{i,w,i',w'}) \\ge \\frac{L}{4} a_{i,w}\\log\\left(\\frac{en}{a_{i,w}}\\right)\\, .\n\\]\nThe lemma now follows since (by \\eqref{eq:b}) $b(\\epsilon_{i,w,i',w'}) \\ge (1+\\epsilon_{i,w,i',w'}\/2)\\log(1+\\epsilon_{i,w,i',w'})$.\n\n\nBy \\refL{lem:suffLD} we then have\n\\[\n\\sum_{(i',w')\\in N_{\\Gamma_{\\mathrm{LD}}}(i,w)\\cap U} \\frac{a_{i,w}a_{i',w'}}{n}\n\\, \\cdot \\, \\pran{1+\\frac{\\epsilon_{i,w,i',w'}}{2}} \\log(1+\\epsilon_{i,w,i',w'}) \\ge \\frac{L}{2} a_{i,w}\\log\\left(\\frac{en}{a_{i,w}}\\right)\\, ,\n\\]\n(note that the $L$ of conditions (LD1) and (LD2) became $2L$ in conditions (G1) and (G2)), which proves the result in this case.\n\nOtherwise, we must have that $p_{\\mathrm{SD}}(({\\bf a},{\\bf e})_U;(i,w)) \\ge p(({\\bf a},{\\bf e})_U;(i,w))\/2$ and in this case the result follows similarly from \\eqref{eq:b} and \\refL{lem:suffSD}.\n\\end{proof}\n\nNow let $S\\subset V(\\Gamma)$ be the maximal subset of $V(\\Gamma)$ satisfying all of (G1),(G2),(G3), and (G4). 
The proof of \\refP{prop:redboundtwo} is then completed by the following lemma.\n\n\\begin{lem}\\label{lem:Stwo} Let $S$ be the subset defined above. Then\n\\[\np(({\\bf a},{\\bf e})_S)\\ge p({\\bf a},{\\bf e}) - 150L\\sqrt{d}\\, .\n\\]\n\\end{lem}\n\n\\begin{proof} This is proved exactly as Lemmas~\\ref{lem:potboundLD} and~\\ref{lem:potboundSD} were proved; we skip the details. \\qedhere \\end{proof}\n\nWe now have all the necessary preliminaries in place and we may turn to our proof of \\refT{explain}.\n\n\\begin{proof}[Proof of \\refT{explain}] We begin by noting that the star graph $F$ consisting of a single vertex of degree $d$ attached to $d$ vertices of degree one is always a subgraph of $G$ and has $\\lambda(F)=\\sqrt{d}$, which proves the result when $\\lambda^*(G) \\leq 1189248 \\sqrt{d}$. \n\nWe may now suppose that $\\lambda^*(G) = M\\sqrt{d}$ for some $M \\geq 1189248$. It follows by \\refP{findwit} that $G$ contains a pattern $({\\bf a},{\\bf e})$ of potency at least $K\\sqrt{d}$, where $K=K(M)=(M\/192) -3\\ge 6191$. By \\refT{redboundtwo}, applied with $L=L(M)=K\/151\\ge 41$, we have that $G$ contains a reduced pattern $({\\bf a},{\\bf e})$ with $p({\\bf a},{\\bf e}) \\ge L\\sqrt{d}$ for which\n\\begin{equation}\\label{eq:probbound}\n\\p{({\\bf a},{\\bf e})~\\mbox{can be found in G}} \\ll \\prod_{(i,w) \\in S} a_{i,w}^{d\/4}\\pran{\\prod_{(i,w) \\in V(\\Gamma)} {n \\choose \\alpha \\wedge \\lfloor n\/2 \\rfloor}}^{-1}. \n\\end{equation}\nwhere $\\alpha=\\sum_{(i,w) \\in V(\\Gamma)} a_{i,w}$.\n\nLet $\\{A_{i,w}\\}_{(i,w) \\in V(\\Gamma)}$ be any witness for $({\\bf a},{\\bf e})_S$, and write \n$G'=G[\\{A_{i,w}\\}_{(i,w) \\in V(\\Gamma)}]$. By \\refP{witfind} we have $\\lambda(G') \\geq (2L-40)\\sqrt{d}-\\frac{\\alpha\\sqrt{10}}{n}$, \nso for $n \\ge \\sqrt{10}hd$, if it happens that $\\alpha \\leq hd$ then $\\lambda(G') \\geq L\\sqrt{d} \\ge \\lambda^{*}(G)\/1189248$, as required. \n\nTo complete the proof we must bound by $n^{-hd}$ the probability that any reduced pattern $({\\bf a},{\\bf e})$ of potency at least $L\\sqrt{d}$ with $\\alpha>hd$ may be found in $G$. \nWe remind the reader that for reduced patterns $({\\bf a},{\\bf e})$ we have the inequality (\\ref{eq:probbound}). \nWe split the proof of the required bound into two steps. First, recall the event $E_1$ from the proof of \\refT{main}, which was the event \nthat any reduced pattern $({\\bf a},{\\bf e})$ with $p({\\bf a},{\\bf e}) \\ge L\\sqrt{d}$ and for which $a_{i,w}\\ge 4hd\\log_2 d$ for some $(i,w) \\in V(\\Gamma)$, can be found in $G$. In proving \\refT{main} we showed that $\\p{E_1}\\le n^{-hd}\/2$\n\nSecond, let $E_3=E_3(M)$ be the event that \n$G$ contains a reduced pattern $({\\bf a},{\\bf e})$ with $p({\\bf a},{\\bf e}) \\ge L\\sqrt{d}$, with $\\alpha=\\sum_{(i,w) \\in S} a_{i,w} > hd$ and with $a_{i,w} \\leq 4hd\\log_2 d$ for all $(i,w) \\in V(\\Gamma)$. We complete the proof by proving that $\\p{E_3} \\leq n^{-hd}\/2$ for all $n$ sufficiently large. \nAs in the proof of \\refT{main}, by a union bound we obtain that\n\\[\n\\p{E_3(M)} \\leq \\log_2(nh) (4hd\\log_2 d)^{3hd \\log_2 d} {n \\choose \\alpha \\wedge \\lfloor n\/2 \\rfloor}^{-1}.\n\\]\nSince $hd+1 \\leq \\alpha= \\sum_{(i,w) \\in S} a_{i,w}< (4hd \\log_2 d)(h \\log_2 d)$, the following bound holds for all $n \\geq 2\\alpha$:\n\\[\n\\p{E_3(M)} \\leq \\log_2(nh) (4hd\\log_2 d)^{3hd \\log_2 d} \\frac{(2(hd+1))^{hd+1}}{n^{hd+1}},\n\\]\nwhich is less than $n^{-hd}\/2$ for $n$ large enough. 
\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n \n\\section{The remaining proofs}\\label{sec:zbound}\nBefore proving \\refP{zbound} we establish the following, intermediate result, \nwhich contains our key convexity argument. For this step, it is notationally convenient to consider vectors $x \\in \\mathbb{R}^{V(G)}$ with $\\|x\\|_2^2 = O(nh)$ rather than $O(1)$. \nTo this end, \ndefine \n\\begin{align*}\nD & = \\{0 \\} \\cup \\{2^i: i \\in {\\mathbb N}\\} \\cup \\{-2^i: i \\in {\\mathbb N}\\} \\\\\n\\hat{D} & = \\{0 \\} \\cup \\{2^i: i \\in {\\mathbb N}\\},\n\\end{align*}\nand let \n\\begin{align*}\nX^* & = \\{x \\in \\mathbb{R}^{V(G)}:\\|x\\|_2^2 \\le nh\\}\\, , \\\\\nY & = \\{y \\in \\mathbb{R}^{V(G)}:\\|y\\|_2^2 \\leq 5nh,\\forall~v \\in V(G),y_v \\in D\\}\\, , \\nonumber\\\\\nY^+& = \\{y \\in \\mathbb{R}^{V(G)}:\\|y\\|_2^2 \\leq 10nh,\\forall~v \\in V(G),y_v \\in \\hat{D}\\}\\, . \n\\nonumber\n\\end{align*}\nNote that by Fact~\\ref{uncount}, $nh\\lambda^*(G) = \\sup_{x \\in X^*} |\\ipo{x}_N|$. \n\\begin{prop}\\label{dyprop}\n$nh \\lambda^*(G) \\leq 12 \\sup_{y \\in Y^+} |\\ipo{y}_N|.$\n\\end{prop}\n\\begin{proof}[Proof of \\refP{dyprop}] \nThe proof is rather straightforward; it consists of first an averaging argument, and second a polarization argument. \nThe polarization argument becomes a little delicate only because we are trying to maintain the property that the \nentries of all vectors remain in $D$. \n\nFor $r \\in \\mathbb{R}$ let $\\sigma(r)=r\/|r|$ if $r \\neq 0$, and $\\sigma(r)=0$ if $r=0$. We call $\\sigma(r)$ the sign of $r$. \nAlso, let $\\ell(r)$ be the greatest element of $D$ which is less than or equal to $r$, and let $u(r)$ be the least element of \n$D$ which is greater than or equal to $r$. \n\nFor $x \\in \\mathbb{R}^{V(G)}$, write \n\\[S_x = \\{ y \\in \\mathbb{R}^{V(G)}: \\forall v \\in V(G), \\sigma(y_v)=\\sigma(x), y_v \\in \\{\\ell(x_v),u(x_v)\\}\\}.\n\\] \nNow fix any $x \\in X^*$ and let $\\by \\in Y$ be randomly chosen as follows. Independently for each $v \\in V(G)$: \n\\begin{itemize}\n\\item if $0 \\leq |x_v| < 1$ let $\\by_i=\\sigma(x_i)$ with probability $|x_i|$ and $\\by_i=0$ otherwise.\n\\item if $2^j \\leq |x_v| < 2^{j+1}$ for some $j \\geq 1$, then let $\\by_v = \\sigma_v 2^{j+1}$ with probability $(|x_v|-2^j)\/2^{j+1}$ and \nlet $\\by_v = \\sigma_v 2^j$, otherwise. \n\\end{itemize}\nBy definition, all entries of $\\by$ have values in $\\{0,\\pm 1,\\pm 2,\\pm 4,\\ldots\\}$. Furthermore, \nfor each $v$ with $|x_v| \\geq 1$ we have $\\by_v^2 \\leq (2x_v)^2$, and for $i$ with $|x_v| < 1$ we have $\\by_v^2 \\leq 1$. \nIt follows that \n\\[\n\\sum_{v \\in V(G)} \\by_v^2 \\leq \\sum_{v \\in V(G)} (2x_v)^2+1 = 5nh,\n\\]\nso $\\by$ is a random vector in $Y \\cap S_x$. Since, for each $v \\in V(G)$, $\\E{\\by_v}=x_v$ and the coordinates of $\\by$ \nare chosen independently, we then have\n\\[\nx = \\sum_{y \\in Y \\cap S_x} y \\p{\\by=y},\n\\]\nso \n\\[\n\\ipo{x}_N = \\sum_{y,z \\in Y\\cap S_x} \\ip{y}{z}_N \\p{\\by=y}\\p{\\by=z}.\n\\]\nIn other words, $\\ipo{x}_N$ is a convex combination of elements of $\\{\\ip{y}{z}_N:y,z \\in Y\\}$. \nChoosing $x \\in X$ such that $|\\ipo{x}_N|=nh\\lambda^*(G)$, we then have\n\\begin{equation}\\label{dyprop1}\nnh \\lambda^*(G) = |\\ipo{x}{x}_N| \\leq \\sup_{y,z \\in Y \\cap S_x} |\\ip{y}{z}_N|.\n\\end{equation}\nNote that for all $y,z \\in S_x$ and all $v \\in V(G)$, \neither $y_v=z_v$ or else $\\{y_v,z_v\\}=\\{\\ell(x_v),u(x_v)\\}$. In particular, $y_v$ and $z_v$ are either both non-negative \nor both non-positive. 
\n\nChoose $y,z \\in Y \\cap S_x$ for which the supremum in \\refeq{dyprop1} is achieved, and write \n$y=y_+-y_-$, $z=z_+-z_-$, where e.g.~$y_+$ is the vector obtained from $y$ by replacing \nall negative entries of $y$ by zeros. \nThen for all $v \\in V(G)$, $\\sigma(y^+_v)=\\sigma(z^+_v)$ and $\\sigma(y^-_v)=\\sigma(z^-_v)$ \nWe then have \n\\[\nnh\\lambda^*(G) \\leq |\\ip{y}{z}_N| = |\\ip{y_+}{z_+}_N-\\ip{y_+}{z_-}_N - \\ip{y_-}{z_+}_N +\\ip{y_-}{z_-}_N|,\n\\]\nso either one of $|\\ip{y_+}{z_-}_N|$,$|\\ip{y_-}{z_+}_N|$ is at least $nh\\lambda^*(G)\/6$ or else \none of $|\\ip{y_+}{z_+}_N|$, $|\\ip{y_-}{z_-}_N|$ is at least $nh\\lambda^*(G)\/3$. \n\nFirst suppose $|\\ip{y_+}{z_-}_N| \\geq nh\\lambda^*(G)\/6$. Since \n\\begin{equation}\\label{polarize}\n\\ipo{y_++z_-}_N = 2 \\ip{y_+}{z_-}_N + \\ipo{y_+}_N + \\ipo{z_-}_N,\n\\end{equation}\nit follows that either $\\ipo{y_++z_-}_N| \\geq nh\\lambda^*(G)\/9$ or else \none of $|\\ipo{y_+}_N|$ or $|\\ipo{z_-}_N|$ is at least $nh \\lambda^*(G)\/9$. \nAlso, since $y,z \\in S_x$, the non-zero coordinates of $y_+$ correspond to zeros in $z_-$, we have $y_++z_- \\in Y^+$. \nSince $\\|y_+\\|_2^2,\\|z_-\\|_2^2$, and $\\|y_++z_-\\|_2^2$ are all at most $\\|y\\|_2^2+\\|z\\|_2^2 \\leq 10nh$, \nin this case we have proved the proposition. This argument \nalso handles the case $|\\ip{y_-^}{z_+}_N| \\geq nh\\lambda^*(G)\/6$, by symmetry. \n\nThe other case is that $|\\ip{y_+}{z_+}_N| \\geq nh\\lambda^*(G)\/3$ \n(the proof in the case that $|\\ip{y_-}{z_-}_N| \\geq nh\\lambda^*(G)\/3$ is symmetric). \nSince \n\\[\n\\ipo{y_+-z_+}_N= \\ipo{y_+}_N + \\ipo{z_+}_N -2\\ip{y_+}{z_+}_N,\n\\]\nit follows that either one of $\\ipo{y_+}_N, \\ipo{z_+}_N$ is at least $nh\\lambda^*(G)\/12$, \nor else \n\\[\n\\ipo{y_+-z_+}_N \\geq nh\\lambda^*(G)\/2.\n\\] \nIn the former case we have proved the proposition. In the latter case, note that \nsince for all $v \\in V(G)$, either $y_v = z_v$ or else $\\{y_v,z_v\\}=\\{\\ell(x_v),u(x_v)\\}$, \nwe have that all all entries of $y_+-z_+$ are in $D$, and \nfurthermore, for each $v$, either $|y_{v,+}-z_{v,+}| = 0$ \nor $|y_{v,+}-z_{v,+}|=1$ or $|y_{v,+}-z_{v,+}| = |y_{v,+}| \\wedge |z_{v,+}|$. It follows that $\\|y_+-z_+\\|^2 \\leq 6nh$. \nNow write $w=y_+-z_+$, and then separate $w$ into its positive and negative \nparts: $w=w_+-w_-$. \nWe then have \n\\[\nnh\\lambda^*(G)\/2 \\leq |\\ipo{w}_N| = | \\ipo{w_+}_N- 2\\ip{w_+}{w_-}_N + \\ipo{w_-}_N|. \n\\]\nBut we also have \n\\[\n\\ipo{w_++w_-}_N= 2\\ip{w_+}{w_-}_N + \\ipo{w_+}_N+ \\ipo{w_-}_N,\n\\]\nand it follows that either one of $|\\ipo{w_+}_N|$ or $|\\ipo{w_-}_N|$ is at least $nh\\lambda^*(G)\/10$, \nor else $|\\ipo{w_++w_-}_N| \\geq nh\\lambda^*(G)\/10$. \nSince all of $\\|w_+\\|_2^2,\\|w_-\\|_2^2$ and $\\|w_++w_-\\|_2^2$ are at most $\\|w\\|_2^2 \\leq 6nh$, \nthey are all in $Y^+$ and so the proof is complete. \n\\end{proof}\n\n\\begin{proof}[Proof of \\refL{ipoe}]\nFor this proof write $N=(n_{vw})_{v,w \\in V(G)}$. \nFor a given vertex $v \\in V(G)$, there are precisely $d$ vertices $w \\in V(G)$ \nwith $n_{vw}=1-1\/n$, and $(n-1)d$ vertices $w \\in V(G)$ with \n$n_{vw}=-1\/n$ (the remaining entries in row $v$ of $N$ are all zero). 
Thus, for any $v \\in V(G)$, \n\\begin{align*}\n|y_v \\sum_{w \\in V(G)} n_{vw} y_w \\I{y_v\\geq \\sqrt{d}y_w}| & \\leq d(1-1\/n) \\cdot \\frac{y_v^2}{\\sqrt{d}}+ (n-1)d(1\/n) \\frac{y_v^2}{\\sqrt{d}} \\\\\n\t& < 2 y_v^2 \\sqrt{d}.\n\\end{align*}\nWriting $A= \\{(x,y) \\in \\mathbb{R}^2:x \\geq y\\sqrt{d}\\}$, it follows that\n\\begin{align*}\n|\\ipo{y}_{N,\\mathbb{R}^2 \\setminus E^*}| = 2|\\ipo{y}_{N,A}|\n \\leq 2 \\sum_{v \\in V(G)} 2y_v^2 \\sqrt{d} \n\t\t= 4\\sqrt{d} \\|y\\|_2^2.\n\\end{align*}\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition~\\ref{zbound}]\nChoose $z \\in Y^+$ for which $|\\ipo{z}_N| = |\\sup_{y \\in Y^+} \\ipo{y}_N| \\geq nh\\lambda^*(G)\/12$, possible by Proposition \\ref{dyprop}.\nSince $\\|z\\|_2^2 \\le 10nh$, \nby \\refL{ipoe} we have \n\\[\n|\\ipo{z}_{N,E^*}| \\geq \\ipo{z}_N - 40nh\\sqrt{d}. \n\\]\nNext, for each $m \\in {\\mathbb N}_0$ let $I_m = E^* \\cap [d^{m\/2},d^{(m+2)\/2})\\times[d^{m\/2},d^{(m+2)\/2}) \\subset \\mathbb{R}^2$, \nand note that $E^* = \\bigcup_{m \\in {\\mathbb N}} I_m$, so by the triangle inequality \n\\[\n|\\ipo{z}_{N,E^*}| \\leq \\sum_{m =0}^{\\infty} |\\ipo{z}_{N,I_m}|. \n\\]\nSet \n\n\\[\np_m = \\sum_{\\{v \\in V(G): z_v \\in [d^{m\/2},d^{(m+2)\/2})\\}} \\frac{z_v^2}{\\|z\\|_2^2}\\, , \n\\quad \\alpha_m = \\frac{|\\ipo{z}_{N,I_m}|}{(|\\ipo{z}_{N,E^*}|p_m)}\\, ,\n\\]\nso that \n\\[\n|\\ipo{z}_{N,I_m}| = \\alpha_m p_m |\\ipo{z}_{N,E^*}|. \n\\]\nSince no point in $\\mathbb{R}^2$ lies in more than two of the $I_m$, we have \n$\\sum_{m\\in {\\mathbb N}} p_m \\leq 2$, and so there must be $m^* \\in {\\mathbb N}$ for which $\\alpha_{m^*} \\geq 1\/2$. \nLetting $z^*$ the the vector whose entry in position $v$ is $z_v\\I{z_v \\in I_{m^*}}$, \nwe then have that \n\\[\n|\\ipo{z^*}_{N,E}| = |\\ipo{z}_{N,I_{m^*}}| \\geq p_{m^*} |\\ipo{z}_{N,E^*}|\/2, \n\\]\nand furthermore, $\\|z^*\\|_2^2 = p_m^* \\|z\\|_2^2$. \nLet $y$ be the vector obtained from $z^*$ by multiplying all entries of $z^*$. \nBy $2^{\\lfloor -(\\log_2 p_m)\/2 \\rfloor}$, we have \n\\[\n|\\ipo{y}_{N,E}| \\geq \\frac{|\\ipo{z}_{N,E^*}|}{8} \\geq \\frac{|\\ipo{z}_N|}{8} - 5nh\\sqrt{d} \\geq \\frac{nh\\lambda^*(G)}{96}-5nh\\sqrt{d}, \n\\]\nand $\\|y\\|_2^2 \\leq \\|z\\|_2^2 \\leq 10nh$. \nFinally, recall the definition of $Z$ from (\\ref{zdef}). Letting $x$ be the vector obtained \nfrom $y$ by dividing all entries by $(nh)^{1\/2}$, we obtain a vector $x \\in Z$ with \n$|\\ipo{x}_{N,E}| \\ge \\lambda^*(G)\/96- 5\\sqrt{d}$, which completes the proof. \n\\end{proof}\n\nWe now give the promised proofs of \\refP{protest} and Lemma~\\ref{patcount}.\n\n\\begin{proof}[Proof of \\refP{protest}]\nLet $a \\leq n-hn^{1\/2}$ be the number of vertices in $G'$, and let $x' \\in \\mathbb{R}^{V(G')}$ be an eigenvector of $G'$ with eigenvalue $\\lambda$, chosen so that $\\|x'\\|_2^2 = 1$. For each $i \\in [h]$, let $t_i = \\sum_{v \\in V_i \\cap V(G')} x_v$. \nWe define a vector $y \\in \\mathbb{R}^{V(G)}$, for all $i \\in [h]$ and each $v \\in V_i \\setminus V(G')$, setting $y_v = - t_i\/|V_i \\setminus V(G')|$, and taking all other $y_v$ equal to zero. Taking $x=x'+y$, it is immediate that $x$ is balanced. \nAlso, for $v \\in V(G)\\setminus V(G')$, we have $|x_v|=|y_v| \\leq 1\/(n-a)$, and it follows that $\\|x\\|_2^2 \\leq 1+nh\/(n-a)^2 \\leq 1+1\/\\lambda$, since $(n-a) \\geq hn^{1\/2} > (nd^2)^{1\/2} \\geq (nd\\lambda)^{1\/2}$. 
Now, \n\\begin{align*}\n\\ipo{x}_N \t& = \\ipo{x}_M \\\\\n\t\t\t& = \\ipo{x'}_M + \\ipo{y}_M + 2\\ip{x'}{y}_M \\\\\n\t\t\t& = \\lambda + \\ipo{y}_M + 2\\ip{x'}{y}_M.\n\\end{align*}\nSince all entries of $y$ have modulus at most $(n-a)^{-1}$, it follows that $|\\ipo{y}_M| \\leq |E(G)|\/(n-a)^2 = nhd\/(2(n-a)^2) \\leq n^{1\/2}d\/(2(n-a))$. \nSimilarly, since $\\sum_{v \\in V(G')} |x_v| \\leq a^{1\/2}$, we have $2|\\ip{x'}{y}_M| \\leq 2a^{1\/2}d\/(n-a)$. \nIt follows that \n\\[\n|\\ipo{x}_N| \\geq \\lambda - \\frac{d(n^{1\/2}+4a^{1\/2})}{2(n-a)} \\geq \\lambda - \\frac{5dn^{1\/2}}{2(n-a)} > \\lambda - \\frac{5}{2},\n\\]\nand recalling that $\\|x\\|_2^2 \\leq 1+1\/\\lambda < \\lambda\/(\\lambda-1)$ completes the proof. \n\\end{proof}\n\n\\begin{proof}[Proof of Lemma~\\ref{patcount}]\nWe may specify a pattern by first choosing $w_0$, then choosing $a_{i,w}$ for each $(i,w)$ with $i \\in [h]$ \nand $w \\in D^{>0} \\cap [w_0,w_0d]$, and finally choosing $f_{i,w,i',w'}$ for each pair $(i,w),(i',w') \\in V(\\Gamma)$ \nwith $i\\sim_{H} i'$. \n\nIf the pattern is to be non-empty then there is some $(i,w) \\in V(\\Gamma)$ for \nwhich $\\alpha_{i,w} nh\/w^2$ is a positive integer. Since $\\alpha_{i,w} \\leq 1$ it follows that $w_0^2 \\leq w^2 \\leq nh$ \nand so there are at most $|D^{>0} \\cap [0,\\sqrt{nh}]| \\leq \\log_2(nh)$ choices for $w_0$. \n\nHaving chosen $w_0$, since $|D^{>0} \\cap [w_0,w_0d]| \\leq \\log_2 (2d)$, there are at most \n$A^{h\\log (2d)}$ choices for the $a_{i,w}$. \nFinally, for each pair $(i,w),(i',w')$ with $i \\sim_{H} i'$, there are at most $1+\\min(a_{i,w},a_{i',w'}) \\leq A$ choices \nfor $f_{i,w,i',w'}$. There are at most $hd \\log_2(2d)\/2$ such pairs, so the \ntotal number of choices for the $f_{i,w,i',w'}$ is at most $A^{hd \\log_2(2d)\/2}$. \n\nCombining the bounds of the two preceding paragraphs, we obtain that the total number of patterns \nwith $a_{i,w} < A$ for all $(i,w) \\in V(\\Gamma)$ is at most \n\\[\n\\log_2(nh) \\cdot A^{h \\log_2(2d)} A^{hd \\log_2(2d)\/2}< \\log_2(nh) \\cdot A^{2h d\\log_2 d}. \n\\]\n\\end{proof}\n\n\n\\section{Acknowledgements} {\\Small S.G. heard of this problem in a seminar of Benny Sudakov at the IPAM long program on ``Combinatorics: Methods and Applications in Mathematics and Computer Science''. He would like to thank IPAM for providing such an excellent program. Most of the research for this article took place while he was a postdoctoral fellow at McGill University, he would like to thank them for their support and facilities. He is currently supported by CNPq (Proc. 500016\/2010-2). L.A-B. was supported during this research by an NSERC Discovery Grant. Both authors would like to thank two anonymous referees for many valuable comments and suggestions.} \n\n\\bibliographystyle{plainnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Appendix}\n\n\n\\subsection{A. Conformal symmetry in one dimension}\n\n\nLet us briefly recap some basic features of conformal invariance in one dimension. It is helpful to keep in mind that, in interacting systems like the SYK models, this conformal invariance is only an approximate symmetry of certain large $N$ theories.\n\nConsider coupling a CQM to an external metric $h_{\\mu\\nu}$ and a source $\\lambda$ for a scalar operator $\\mathcal{O}_{\\Delta}$ of dimension $\\Delta$. 
Define the generating functional of connected correlation functions,\n\\begin{equation}\nW = - i \\ln \\mathcal{Z}\\,.\n\\end{equation}\nThe connected one-point functions of the stress tensor $t^{\\mu\\nu}$ and $\\mathcal{O}_{\\Delta}$ are defined by functional variation,\n\\begin{equation}\n\\label{E:deltaW}\n\\delta W = \\int dt \\sqrt{-h} \\left( \\frac{1}{2}\\langle t^{\\mu\\nu} \\rangle \\delta h_{\\mu\\nu} - \\langle \\mathcal{O}_{\\Delta}\\rangle \\delta \\lambda\\right)\\,.\n\\end{equation}\nBy assumption, $W$ is invariant under infinitesimal reparameterizations, under which $h_{\\mu\\nu}$ and $\\lambda$ vary as\n\\begin{equation}\n\\delta_{\\xi} h_{\\mu\\nu} =D_{\\mu}\\xi_{\\nu}+D_{\\nu}\\xi_{\\mu}\\,, \\qquad \\delta_{\\xi}\\lambda = \\xi^{\\mu}D_{\\mu}\\lambda\\,,\n\\end{equation}\nwith $D_{\\mu}$ the covariant derivative. Plugging these variations into~\\eqref{E:deltaW} and demanding $\\delta_{\\xi}W=0$ leads to the diffeomorphism Ward identity\n\\begin{equation}\nD_{\\nu}\\langle t^{\\mu\\nu} \\rangle = - (D^{\\mu}\\lambda)\\langle \\mathcal{O}_{\\Delta}\\rangle\\,.\n\\end{equation}\nIn one dimension with $h = - dt^2$, this becomes equation~\\eqref{E:eom3} from the main text,\n\\begin{equation}\n\\label{E:diffWard}\n\\dot{E} = \\dot{\\lambda} \\langle \\mathcal{O}_{\\Delta}\\rangle\\,.\n\\end{equation}\n\nOn the other hand, we ought to couple the CQM to the metric in a Weyl-invariant way. Then $W$ is invariant under an infinitesimal Weyl rescaling under which the metric and source $\\lambda$ vary as\n\\begin{equation}\n\\delta_{\\omega} h_{\\mu\\nu} = 2 \\omega h_{\\mu\\nu}\\,, \\qquad \\delta_{\\omega}\\lambda = (\\Delta - 1)\\omega \\lambda\\,,\n\\end{equation}\nand $\\delta_{\\omega}W=0$ gives the Weyl Ward identity (in terms of $E = - h_{\\mu\\nu}\\langle t^{\\mu\\nu}$)\n\\begin{equation}\n\\label{E:weylWard}\nE = (1-\\Delta)\\lambda \\langle \\mathcal{O}_{\\Delta}\\rangle\\,.\n\\end{equation}\n\nOne way of stating Polcinski's paradox~\\cite{Jensen:2011su} is that the diffeomorphism Ward identity~\\eqref{E:diffWard} is not compatible with the Weyl Ward identity~\\eqref{E:weylWard} in one dimension. Actually there is a loophole: the two are compatible if the correlation functions of $\\mathcal{O}$ are topological, as they are for operators in the theory of $2N$ free Majorana fermions.\n\nSo there is a conflict between reparameterization invariance and conformal symmetry for interacting systems in one dimension.\n\nBefore going on, we should reiterate what is in some sense the main point of this Letter, namely how gravity solves this paradox near AdS$_2$. The boundary dual is not a CQM on its own, but instead a particular hydrodynamics coupled to the correlation functions of a CQM. \n\nThose conformal correlators are strongly constrained by the conformal symmetry, as we now discuss. In any dimension, conformal transformations are the combination of a coordinate transformation $x^{\\mu}=x^{\\mu}(y^{\\nu})$ and Weyl rescaling $h_{\\mu\\nu} \\to e^{2\\Omega}h_{\\mu\\nu}$ which leave the metric invariant. In one dimension with the flat metric $h=-dt^2$, any coordinate transformation gives a conformal transformation: the combined action of\n\\begin{equation}\nt = t(w)\\, \\qquad \\Omega = - \\ln t'(w)\\,,\n\\end{equation} \nsends the metric to itself, $-dt^2 \\to -dw^2$. Wick-rotating to Euclidean signature and compactifying Euclidean time, the modes of $t(w)$ generate a Virasoro algebra with $c=0$. \n\nThere are two ways to think about the vanishing central charge. 
The first is simply that the stress tensor vanishes by the Weyl Ward identity. The second is that there is no Weyl anomaly possible in one dimension.\n\nAs is familiar from two-dimensional CFT, the conformal transformations which are regular everywhere generate the global conformal group $SL(2;\\mathbb{R})$. There is a single special conformal generator $K$, and in the usual way one defines primary operators as those annihilated by $K$. The vacuum two-point function of a primary operator $\\mathcal{O}_{\\Delta}$ of dimension $\\Delta$ is, in Euclidean signature\n\\begin{equation}\n\\langle \\mathcal{O}_{\\Delta}(\\tau)\\mathcal{O}_{\\Delta}(0)\\rangle = \\frac{1}{|\\tau|^{2\\Delta}}\\,,\n\\end{equation}\nand similarly for three-point functions of $\\mathcal{O}_{\\Delta}$.\n\nWe conclude this Subsection with a few comments on thermodynamics. Consider the thermal partition function of a CQM\n\\begin{equation}\n\\mathcal{Z}_E = \\text{tr}\\left( e^{-\\beta H}\\right)\\,,\n\\end{equation}\nwhich as usual is the partition function of the Euclidean theory on a circle of size $\\beta$ with thermal boundary conditions. There is only one local counterterm which can be used to redefine the theory,\n\\begin{equation}\n\\label{E:CT}\n\\ln \\mathcal{Z}_E \\to \\ln\\mathcal{Z}_E + m\\int d\\tau \\sqrt{h_E}\\,,\n\\end{equation}\nwhere $\\tau$ is Euclidean time, $h_E$ the Euclidean metric, and $m$ is some mass scale. This counterterm is not scale-invariant. Thus, in a CQM, the thermal partition function $\\mathcal{Z}_{CQM}$ is an unambiguous observable. This should not be a surprise; in odd dimension, the logarithm of the partition function of a CFT on a Euclidean sphere -- the ``sphere free energy'' -- is a useful and unambiguous (up to a quantized imaginary ambiguity which may exist in $d=4k-1$ dimensions) CFT observable. In one dimension, this partition function is the extremal entropy,\n\\begin{equation}\nS_0 = \\frac{\\partial (T\\ln \\mathcal{Z}_{CQM})}{\\partial T} = \\ln \\mathcal{Z}_{CQM}\\,.\n\\end{equation} \n\nNow consider a non-conformal QM which realizes an emergent conformal invariance at low energies. Suppose that the low-energy description is a(n approximate) CQM with extremal entropy $S_0$ deformed by a dimension $\\Delta$ operator. The low-temperature partition function is\n\\begin{equation}\n\\ln \\mathcal{Z}_E = - \\beta E_0 + S_0 + P_1 T^{\\Delta-1} + \\hdots\\,,\n\\end{equation}\nwhere $E_0$ is the ground state energy. Observe that $E_0$ is unphysical: it is redefined by the counterterm~\\eqref{E:CT}. This partition function leads to a low-temperature entropy\n\\begin{equation}\nS = S_0 +\\Delta P_1 T^{\\Delta-1} + \\hdots\\,.\n\\end{equation}\nComparing this with the entropy~\\eqref{E:nearExtremalS} of near-extremal black holes in dilaton gravity, we see that the black hole entropy is that of a CQM deformed by a $\\Delta = 2$ operator.\n\n\n\\subsection{B. Holographic renormalization}\n\n\nIn this Appendix we fill in various details on the near-AdS$_2$ solutions described in the main text.\n\nWe begin with the perturbative solution near the endpoint of a holographic RG flow, with a small amount of infalling dust $T_{ww}(w)$. 
The full perturbative solution is\n\\begin{align}\n\\nonumber\n\\varphi & = \\varphi_0 + \\ell r + \\mathcal{O}(\\ell^2 r^2)\\,,\n\\\\\n\\nonumber\ng & = - \\left(r^2 + 2 \\{ t(w),w\\} + \\frac{\\ell r^3}{6}U''[\\varphi_0] + \\mathcal{O}(\\ell^2r^2)\\right)dw^2 \n\\\\\n\\label{E:perturbativeSolution}\n& \\qquad + 2dw dr\\,,\n\\end{align}\nwith\n\\begin{equation}\n\\label{E:bulkWard}\n\\ell\\partial_w \\{ f(w),w\\} = -T_{ww}(w)\\,.\n\\end{equation}\n\nWe account for the dust with a massless scalar field $\\chi$\n\\begin{equation*}\nS_{\\rm matter} = - \\frac{1}{2}\\int d^2x \\sqrt{-g} Z_0[\\varphi] (\\partial\\chi)^2\\,,\n\\end{equation*}\nwith $Z[\\varphi_0]=1$. The infalling solutions are $\\chi(w) = \\lambda(w)$, and we take the perturbative limit $\\lambda^2 \\sim\\ell$. To first order in $\\ell$, the stress tensor evaluated on this solution is $T_{ww} = \\kappa^2\\dot{\\lambda}^2$ and the dilaton source vanishes $\\Phi = 0$.\n\nWe proceed to holographically renormalize the dilaton gravity~\\eqref{E:dilatonS} on backgrounds of the form~\\eqref{E:perturbativeSolution}. To proceed we define the dual theory on a constant-$r$ slice at asymptotically large $r$, subject to the constraint that $\\ell r$ is still asymptotically small. In physical terms, we are taking the almost zero energy limit. In practice we drop the corrections to the background~\\eqref{E:perturbativeSolution} with two or more powers of $\\ell$ and then proceed in the usual way.\n\nThe bulk action is divergent as one integrates to large $r$, so we introduce a cutoff slice at $r=\\Lambda$, add covariant boundary terms on the cutoff slice to eliminate divergences, and then remove the cutoff by sending $\\Lambda\\to\\infty$.\n\nThe authors of~\\cite{Grumiller:2007ju} have studied the problem of holographic renormalization in a general dilaton gravity. For the case at hand, the renormalized action is simply\n\\begin{align}\nS_{ren} &= \\lim_{\\Lambda\\to\\infty} \\frac{1}{2\\kappa^2}\\left\\{ \\int d^2x \\sqrt{-g} \\left( \\varphi R + U\\right)\\right.\n\\\\\n\\nonumber\n& \\qquad\\qquad + \\left. \\int dt \\sqrt{-\\gamma} \\left( 2\\varphi K - U \\right) \\right\\} + S_{\\rm matter} + \\mathcal{O}(\\ell^2)\\,,\n\\end{align}\nwhere $\\gamma$ is the induced metric on the cutoff slice and $K$ the trace of its extrinsic curvature.\n\nThe variation of with on-shell action with respect to the metric and scalar is\n\\begin{align}\n\\begin{split}\n\\label{E:deltaSren}\n\\delta S_{ren} =& \\lim_{\\Lambda\\to\\infty} \\int dt \\sqrt{-\\gamma}\\left\\{ - (n^{\\mu}\\partial_{\\mu}\\chi)\\delta\\chi\\right.\n\\\\\n& \\qquad \\quad \\left. +\\frac{1}{2\\kappa^2} \\left( n^{\\rho}\\partial_{\\rho}\\varphi - \\frac{U}{2}\\right)\\gamma^{\\mu\\nu}\\delta g_{\\mu\\nu}\\right\\}\\,,\n\\end{split}\n\\end{align}\nwith $n^{\\mu}$ the outward pointing normal vector to the cutoff slice. Both $n^{\\mu}\\partial_{\\mu}\\varphi$ and $U$ are $\\mathcal{O}(\\ell)$ for the perturbative solution~\\eqref{E:perturbativeSolution}, so we require the on-shell variation of the $g_{\\mu\\nu}$ in response to a variation of the boundary metric to $\\mathcal{O}(\\ell^0)$, which is simply $\\delta g_{\\mu\\nu} = r^2 \\delta h_{\\mu\\nu} + \\mathcal{O}(r^0,\\ell)$. 
Plugging this variation into $\\delta S_{ren}$ and evaluating it on the solution~\\eqref{E:perturbativeSolution} gives the boundary stress tensor\n\\begin{equation}\n\\langle t^{\\mu\\nu}\\rangle = \\frac{2}{\\sqrt{-h}}\\frac{\\delta S_{ren}}{\\delta h_{\\mu\\nu}} = \\frac{\\ell h^{\\mu\\nu}}{\\kappa^2}\\{t(w),w\\}\\,,\n\\end{equation}\nso that the energy is $E = - h_{\\mu\\nu} \\langle t^{\\mu\\nu}\\rangle = - (\\ell\/\\kappa^2) \\{ t(w),w\\}$, which derives the result~\\eqref{E:energy} in the main text. We also obtain the expectation value of the dimension$-1$ operator $\\mathcal{O}$ dual to $\\chi$,\n\\begin{equation}\n\\langle \\mathcal{O}\\rangle = - \\frac{1}{\\sqrt{-h}}\\frac{\\delta S_{ren}}{\\delta \\lambda} = \\dot{\\lambda}\\,,\n\\end{equation}\nso that~\\eqref{E:bulkWard} becomes\n\\begin{equation*}\n\\dot{E} = \\dot{\\lambda}\\langle \\mathcal{O}\\rangle\\,.\n\\end{equation*}\n\nNow we consider the perturbative solution in the presence of massive matter,\n\\begin{equation}\nS_{\\rm matter} = - \\frac{1}{2}\\int d^2x \\sqrt{-g} \\left( Z_0[\\varphi] (\\partial\\chi)^2 + Z_1[\\varphi]m^2 \\chi^2\\right)\\,,\n\\end{equation}\nwith $Z_0[\\varphi_0]=Z_1[\\varphi_0]=1$. The field $\\chi$ is now dual to an operator $\\mathcal{O}_{\\Delta}$ of dimension $\\Delta(\\Delta - 1)=m^2$. We take $\\chi$ to be a free field, but it is easy to allow for self-interactions.\n\nAs above we take the perturbative limit with $\\chi \\ll \\ell$. In this limit,\n\\begin{align}\n\\begin{split}\nT_{\\mu\\nu} & = \\kappa^2 \\left( \\partial_{\\mu}\\chi\\partial_{\\nu}\\chi - \\frac{ (\\partial\\chi)^2 +m^2}{2}g_{\\mu\\nu}\\right)\\,,\n\\\\\n\\Phi & = \\kappa^2 \\left( Z_0'[\\varphi_0](\\partial\\phi)^2 + Z_1'[\\varphi_0]m^2 \\chi^2\\right)\\,.\n\\end{split}\n\\end{align}\nThe dilaton and metric are perturbed as\n\\begin{align}\n\\nonumber\n\\varphi &= \\varphi_0 + \\ell \\psi(w,r) + \\mathcal{O}(\\ell^2 r^2)\\,,\n\\\\\n\\nonumber\ng & = -\\Big(r^2 + 2 \\{ t(w),w\\} + \\ell f(w,r)+ \\mathcal{O}(\\ell^2 r^2)\\Big)dw^2 \n\\\\\n\\label{E:perturbativeSolution2}\n& \\qquad + 2 dw dr \\,.\n\\end{align}\nTo leading order in $\\ell$, the solution for $\\chi$ is given by the most general solution of\n\\begin{equation}\n(\\Box - m^2) \\chi = 0\\,,\n\\end{equation}\non the AdS$_2$ background~\\eqref{E:perturbativeSolution2}, setting all of the $\\mathcal{O}(\\ell)$ corrections to vanish. We take $\\Delta$ to be general but not half-integer, and pick the standard quantization for $\\chi$ so that $\\Delta>1\/2$. Then near the boundary $r\\to\\infty$ $\\chi$ is\n\\begin{equation}\n\\label{E:bdyChi}\n\\chi = r^{\\Delta - 1} \\sum_{n=0}\\frac{a_n(w)}{r^n}+ r^{-\\Delta} \\sum_{m=0}\\frac{b_m(w)}{r^m}\\,,\n\\end{equation}\nwhere the $a_{n}$ with $n>0$ are determined by $a_0$ and similarly for the $b_n$, for example\n\\begin{equation}\na_1 = \\dot{a}_0\\,, \\qquad b_1 = \\dot{b}_0\\,.\n\\end{equation}\nWe identify the source for the dual operator as\n\\begin{equation}\n\\lambda = \\lim_{r\\to\\infty} r^{1-\\Delta} \\chi = a_0(t)\\,.\n\\end{equation}\n\nThe stress tensor and dilaton source have three distinct sets of terms. Near the boundary, they are of the schematic form\n\\begin{equation}\nT_{\\mu\\nu} = r^{2\\Delta}\\Sigma^{a}_{\\mu\\nu}(r,a^2) + \\Sigma_{\\mu\\nu}(r,ab) + r^{-2\\Delta} \\Sigma^b_{\\mu\\nu}(r,b^2)\\,,\n\\end{equation}\nwhere $\\Sigma^a_{\\mu\\nu}(r,a^2)$ is some power series in $1\/r$ with coefficients built out of two powers of the $a_n$ and their derivatives, and similarly for the other two series. 
Because $\\Delta\\cnot \\in \\mathbb{Z}\/2$ by assumption, these three series never mix.\n\nSolving the $rr$ and $rw$ components of Einstein's equations, we obtain the correction to the dilaton to be\n\\begin{align}\n\\label{E:psi}\n&\\psi = r + \\frac{\\kappa^2}{\\ell}r^{2(\\Delta -1)}\\left( \\frac{(\\Delta-1)a_0^2}{2(3-2\\Delta)} + \\mathcal{O}(r^{-1})\\right)\n\\\\\n\\nonumber\n&+ \\frac{\\kappa^2}{\\ell r}\\left(\\Delta(\\Delta-1) a_0 b_0 + \\frac{(\\Delta^2-1)\\dot{(a_0b_0)}-\\Delta \\dot{a}_0b_0}{3r} + \\mathcal{O}(r^{-2})\\right)\n\\\\\n\\nonumber\n& \\qquad - \\frac{\\kappa^2}{\\ell^2}r^{-2\\Delta}\\left( \\frac{\\Delta b_0^2}{2(2\\Delta+1)} + \\mathcal{O}(r^{-1})\\right)\\,,\n\\end{align}\nand, while the correction to the metric is calculable, we do not require it for what follows. The $ww$ component imposes exactly one condition,\n\\begin{equation}\n\\label{E:generalHydro}\n\\frac{\\ell}{\\kappa^2} \\partial_w \\left( \\{t(w),w\\}\\right) =(2\\Delta-1) \\left( \\Delta \\dot{\\left(a_0 b_0\\right)} -a_0\\dot{b}_0\\right)\\,.\n\\end{equation}\n\nAs for a massless $\\chi$, we have succeeded in rewriting the dilaton equations of motion in terms of an equation~\\eqref{E:generalHydro} which only involves the boundary time. Let us rewrite it in terms of boundary variables.\n\nThere are additional $\\mathcal{O}(\\chi^2)$ boundary counterterms required to renormalize the action. The leading divergence is removed by the counterterm,\n\\begin{equation}\n\\label{E:leadingCT}\nS_{\\rm CT} = \\frac{\\Delta -1}{2}\\int dt \\sqrt{-\\gamma}\\,\\chi^2 + \\mathcal{O}(\\partial^2 \\chi^2)\\,,\n\\end{equation}\nand there are subleading counterterms with at least two boundary derivatives which remove subleading divergences. Varying the on-shell action with respect to the boundary metric $h$ and source $\\lambda$ gives\n\\begin{align}\n\\begin{split}\n\\label{E:constitutiveGeneral}\n\\langle \\mathcal{O}_{\\Delta}\\rangle &= (1- 2\\Delta) b_0(t)\\,,\n\\\\\nE & = - \\frac{\\ell}{\\kappa^2}\\{ t(w),w\\} + (2\\Delta-1)(\\Delta-1)a_0b_0\\,,\n\\\\\n& = - \\frac{\\ell}{\\kappa^2}\\{t(w),w\\} + (1-\\Delta)\\lambda \\langle \\mathcal{O}_{\\Delta}\\rangle\\,.\n\\end{split}\n\\end{align}\n(The additional $a_0b_0$ terms in the energy come from (i.) the $\\mathcal{O}(1\/r)$ correction to the dilaton~\\eqref{E:psi}, inserted into the metric variation of $S_{ren}$ in~\\eqref{E:deltaSren}, and (ii.) the metric variation of the leading counterterm~\\eqref{E:leadingCT}.)\nIn terms of $E$, $\\langle \\mathcal{O}_{\\Delta}\\rangle$, and $\\lambda=a_0$, the equation of motion~\\eqref{E:generalHydro} is simply\n\\begin{equation}\n\\label{E:diffAgain}\n\\dot{E} = \\dot{\\lambda}\\langle \\mathcal{O}_{\\Delta}\\rangle\\,.\n\\end{equation}\n\nIt is straightforward to allow for half-integer $\\Delta$. When $\\Delta\\in \\mathbb{Z}\/2$ there are logarithmic terms in the near-boundary solution for bulk fields as well as logarithmic boundary counterterms. It is similarly straightforward to consider self-interacting matter, e.g. $\\chi^4$ theory. The final result is the same: the Einstein's equations can be rewritten as the diffeomorphism Ward identity in the dual quantum mechanics for \\emph{any} value of $\\Delta$.\n\nAs in the main text, we have shown that the gravitational dynamics near AdS$_2$ can be rewritten as the diffeomorphism Ward identity~\\eqref{E:diffAgain} with a ``constitutive relation'' for the energy given by~\\eqref{E:constitutiveGeneral}. 
This equation of motion follows from the same effective action~\\eqref{E:Seff} presented in the main text.\n\nWe see that the hydrodynamic effective action~\\eqref{E:Seff} (equivalently,~\\eqref{E:Shydro}) encodes the gravitational dynamics for dilaton gravity coupled to general scalar matter.\n\n\n\\subsection{C. Electric AdS$_2$}\n\nIn this main text, we considered dilaton gravities with AdS$_2$ vacua at the roots of the dilaton potential. There is another way to realize AdS$_2$ solutions: the AdS$_2$ may be supported by an electric field. In this Appendix we work out various aspects of these ``electric'' AdS$_2$ vacua of dilaton gravities with a Maxwell field, Einstein-Maxwell-Dilaton gravity.\n\nOur motivation for studying this problem is somewhat indirect. Hartman and Strominger~\\cite{Hartman:2008dq} have claimed that the dual to gravity on electric AdS$_2$ vacua is invariant under a Virasoro algebra with \\emph{nonzero} central charge, a claim which was supported by way of holographic renormalization in~\\cite{Castro:2008ms} (although see also~\\cite{Castro:2014ima}). \n\nIn the setting of two-dimensional CFT, Roberts and Stanford~\\cite{Roberts:2014ifa} have shown that large central charge, a large gap for higher-spin Virasoro primary operators, and a ``low'' density of light states is enough to guarantee a maximal Lyapunov exponent $\\lambda_L = 2\\pi T$. Their argument would go through more or less unaltered for a CQM invariant under a Virasoro symmetry with large central charge, assuming that such a thing existed.\n\nThus we revisit electric AdS$_2$ vacua and the claim of a Virasoro algebra with $c \\neq 0$. We will find that the central charge of the Virasoro symmetry vanishes.\n\nConsider the most general Einstein-Maxwell-Dilaton gravity,\n\\begin{equation}\nS_{\\rm bulk} = \\frac{1}{2\\kappa^2}\\int d^2x \\sqrt{-g} \\left( \\varphi R + U[\\varphi] - \\frac{W[\\varphi]}{4}F^2\\right)\\,.\n\\end{equation} \nThe authors of~\\cite{Castro:2008ms} consider the case\n\\begin{equation}\nU=8\\varphi\\,, \\qquad W = 1\\,.\n\\end{equation}\nThe equations of motion are now\n\\begin{align}\n\\nonumber\n0 & = D_{\\mu}D_{\\nu}\\varphi - g_{\\mu\\nu}\\Box \\varphi +\\frac{g_{\\mu\\nu}}{2} \\left(U - \\frac{W}{4}F^2\\right) + \\frac{W}{2}F_{\\mu\\rho}F_{\\nu}{}^{\\rho}\\,,\n\\\\\n\\label{E:einstein2}\n0 & = D_{\\nu}\\left( WF^{\\mu\\nu}\\right)\\,,\n\\\\\n\\nonumber\n0 & = R + U' - \\frac{W'}{4}F^2\\,.\n\\end{align}\nThey admit ``electric'' AdS$_2$ solutions with constant dilaton and constant electric field,\n\\begin{equation}\n\\varphi = \\varphi_0\\,, \\qquad F_{\\mu\\nu} = E \\varepsilon_{\\mu\\nu}\\,, \\qquad R = - \\frac{2}{L^2}\\,,\n\\end{equation}\nprovided the dilaton $\\varphi_0$, electric field $E$, and AdS radius $L$ satisfy the two relations\n\\begin{equation}\n\\label{E:familyOfAdS2}\nU[\\varphi_0] = \\frac{W[\\varphi_0]}{2}E^2\\,, \\quad \\frac{2}{L^2} = U'[\\varphi_0] + \\frac{W'[\\varphi_0]E^2}{2}\\,.\n\\end{equation}\nSo, adjusting the electric field simultaneously adjusts the dilaton and AdS radius. Observe that, for smooth potentials $U$ and $W$, there is generally a one-parameter family of AdS$_2$ solutions controlled by the strength of the electric field. 
The most general such solution is, in a radial gauge,\n\\begin{align}\n\\begin{split}\n\\label{E:electricAdS2}\n\\varphi & = \\varphi_0\\,,\n\\\\\nA & = - EL^2 r \\left( f_1(t) - \\frac{f_2(t)}{r^2}\\right)dt + a(t) dt\\,,\n\\\\\ng & = L^2 \\left( - r^2 \\left( f_1(t) + \\frac{f_2(t)}{r^2}\\right)^2 dt^2 + \\frac{dr^2}{r^2}\\right)\\,.\n\\end{split}\n\\end{align}\nUsing the defining function $1\/(r^2L^2)$ the conformal boundary $r\\to\\infty$ is endowed with a metric\n\\begin{equation}\nh = \\lim_{r\\to\\infty} \\frac{\\gamma}{r^2L^2} = - f_1(t)dt^2\\,,\n\\end{equation}\nwhere $\\gamma$ is the induced metric on a constant-$r$ slice.\n\nNow let us holographically renormalize the bulk theory. Consider the following definition of the renormalized theory:\n\\begin{align}\n\\begin{split}\n\\label{E:S1}\nS_{1} &= \\lim_{\\Lambda\\to\\infty} \\frac{1}{2\\kappa^2}\\left\\{\\int d^2x \\sqrt{-g} \\Big( \\varphi R + U - \\frac{W}{4}F^2\\Big) \\right.\n\\\\\n& \\qquad \\quad\\quad +\\left. \\int dt \\sqrt{-\\gamma} \\Big( 2\\varphi K - LU + \\frac{W}{2L}A^2\\Big)\\right\\},\n\\end{split}\n\\end{align}\nwhere $A^2 = \\gamma^{\\mu\\nu}A_{\\mu}A_{\\nu}$. This matches the renormalization scheme of~\\cite{Castro:2008ms} for the particular dilaton theory they study. It is easy to verify that this renormalized action is finite on-shell. Under a variation of the metric and gauge field, keeping the dilaton fixed and using that $\\partial_{\\mu}\\varphi = 0$ for the solutions at hand, $S_1$ varies as\n\\begin{align}\n\\nonumber\n\\delta S_1 =& \\lim_{\\Lambda\\to\\infty} \\frac{1}{2\\kappa^2}\\int dt \\sqrt{-\\gamma} \\left\\{ -\\frac{L}{2}\\left( U+\\frac{W}{2L^2}A^2\\right)\\gamma^{\\mu\\nu} \\delta g_{\\mu\\nu} \\right.\n\\\\\n\\label{E:deltaS1}\n& \\qquad \\qquad \\left. + W \\left( F^{\\mu\\nu} n_{\\nu} + \\frac{\\gamma^{\\mu\\nu}}{L}A_{\\nu}\\right) \\delta A_{\\mu}\\right\\}\\,.\n\\end{align}\nOn-shell, a variation of the boundary metric $h = - f_1(t)^2 dt^2$ induces a variation of both the bulk metric and gauge field~\\eqref{E:electricAdS2} according to $\\delta f_1 = - \\frac{\\delta h_{tt}}{2f_1}$,\n\\begin{align}\n\\begin{split}\n\\delta g_{tt} &= r^2 L^2 \\delta h_{tt} + \\mathcal{O}(r^0) \\,, \n\\\\\n\\delta A_t &= r \\frac{EL^2}{2}\\frac{\\delta h_{tt}}{f_1} + \\mathcal{O}(r^0)\\,.\n\\end{split}\n\\end{align}\nInserting this variation into~\\eqref{E:deltaS1} and evaluating it on an AdS$_2$ solution~\\eqref{E:electricAdS2} gives the dual stress tensor. This however identically vanishes,\n\\begin{equation}\n\\langle t^{tt}\\rangle \\equiv \\frac{2}{\\sqrt{-h}}\\frac{\\delta S_1}{\\delta h_{tt}} = 0\\,,\n\\end{equation}\nso the Virasoro symmetry at play has zero central charge.\n\nHowever, this is not the end of the story. First, it is more natural to work in an alternative quantization, as one commonly does for Maxwell theory in AdS$_3$~\\cite{Marolf:2006nd,Jensen:2010em}. The growing mode of $A_t$ ought to be fixed as a boundary condition, allowing the constant term to fluctuate. Second, the Cvetic and Papadimitriou~\\cite{Cvetic:2016eiv} have argued that the renormalization scheme~\\eqref{E:S1} is not quite correct. 
They advocate for the prescription\n\\begin{align}\n\\begin{split}\n\\label{E:SrenElectric}\nS_{ren} &= \\lim_{\\Lambda\\to\\infty} \\frac{1}{2\\kappa^2}\\left\\{ \\int d^2x \\sqrt{-g} \\Big( \\varphi R + U - \\frac{W}{4}F^2\\Big) \\right.\n\\\\\n& \\qquad \\quad+ \\int dt \\sqrt{-\\gamma} \\Big( 2\\varphi K -L U \\Big)\n\\\\\n& \\qquad \\quad - \\left.\\int dt\\sqrt{-\\gamma}\\,W\\Big( A_{\\mu}F^{\\mu\\nu}n_{\\nu}+ \\frac{L}{4}F^2\\Big)\\right\\}.\n\\end{split}\n\\end{align}\nIn alternate quantization we fix the canonical momentum conjugate to $A_t$, which in the bulk is\n\\begin{equation}\n\\pi^t \\equiv \\frac{\\delta S_{bulk}}{\\delta (\\partial_r A_t)} = - \\frac{WE}{2\\kappa^2}\\,.\n\\end{equation}\nWe identify this conjugate momentum as a fixed, external charge $\\mathcal{Q}$ in the dual CQM.\n\nThis definition~\\eqref{E:SrenElectric} has the virtue that it conserves the symplectic structure on the boundary.\n\nVarying it with respect to the boundary metric gives\n\\begin{equation}\n\\langle t^{tt}\\rangle = 0\\,.\n\\end{equation}\nThe charge $\\mathcal{Q}$ is conjugate to a $U(1)$ gauge field $\\mathcal{A}$,\n\\begin{equation}\n\\langle \\mathcal{A}\\rangle =- \\frac{\\delta S_{ren}}{\\delta \\mathcal{Q}}\\,.\n\\end{equation}\nThe charge $\\mathcal{Q}$ is time-independent, and so $\\langle \\mathcal{A}\\rangle$ is determined up to a total derivative, as is appropriate for a gauge field. That is, $\\langle \\mathcal{A}\\rangle$ contains no local information. However, its holonomy around the Euclidean time circle is physical, encoding the chemical potential $\\mu$.\n\nTo deduce the holographic formula for $\\langle \\mathcal{A}\\rangle$ we need to deduce the variation of $S_{ren}$ with respect to $W[\\varphi_0]E$. In order to remain on-shell, this variation induces a variation in the AdS radius and dilaton in accordance with~\\eqref{E:familyOfAdS2}, with\n\\begin{equation}\n\\label{E:deltaQ}\n\\delta \\mathcal{Q} = - \\frac{1}{\\kappa^2 E L^2}\\delta \\varphi\\,.\n\\end{equation} \nThe final result is that\n\\begin{equation}\n\\label{E:chemical}\n\\langle \\mathcal{A}(t)\\rangle = a(t)\\,,\n\\end{equation}\nfor $a(t)$ the $\\mathcal{O}(r^0)$ term in the Maxwell field~\\eqref{E:electricAdS2}.\n\nNow consider an electric AdS$_2$ black hole:\n\\begin{align}\n\\begin{split}\n\\varphi &= \\varphi_0\\,,\n\\\\\nA &= - EL^2 \\left(r - 2r_h + \\frac{r_h^2}{r}\\right)dt\\,,\n\\\\\ng & = L^2 \\left( - r^2 \\left( 1 - \\frac{r_h^2}{r^2}\\right)^2dt^2 + \\frac{dr^2}{r^2}\\right)\\,.\n\\end{split}\n\\end{align}\nWe have chosen the $\\mathcal{O}(1)$ term of the gauge field so that $A$ is regular at the Euclideanized horizon. The dual CQM is at a temperature $T = r_h\/\\pi$ and charge $Q = - WE\/(2\\kappa^2)$. 
The chemical potential is given by~\\eqref{E:chemical} to be $\\mu = 2\\pi T E L^2$.\n\nThe Bekenstein-Hawking entropy is\n\\begin{equation}\nS =\\frac{2 \\pi \\varphi_0}{\\kappa^2}\\,.\n\\end{equation}\nIt can also be computed from the on-shell, Euclidean, holographically renormalized action, which is\n\\begin{equation}\n\\label{E:electricAdS2Sren}\nS_{E} = \\frac{2\\pi \\varphi_0}{\\kappa^2} \\,.\n\\end{equation}\nThe Euclidean action computes the canonical ensemble free energy $G(T,\\mathcal{Q}) = - T S_{E}$, whose variation gives the entropy and chemical potential as\n\\begin{equation}\n\\delta G = - S \\delta T + \\mu \\delta \\mathcal{Q}\\,.\n\\end{equation}\nUsing~\\eqref{E:deltaQ} and taking thermodynamic derivatives, we indeed recover the Bekenstein-Hawking entropy and chemical potential $\\mu = 2\\pi T E L^2$.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\n\nLinear programming is a fundamental problem in many areas, such as operations research, network, machine learning, business analysis and finance \\citep{von1947theory,dantzig1963linear,luenberger1984linear,boyd2004convex}.\nIn this paper, we consider the \\emph{maximum support} of the linear feasibility problem\n\\begin{equation}\n\\label{feasibility:primal}\n\\begin{aligned}\n\\mathrm{find} ~~&x \\in {\\mathbb R}^n \\\\\n\\mathrm{subject~to}~~ &Ax=0, ~x\\geq 0,~ x\\neq 0,\n\\end{aligned}\n\\end{equation}\nwith its dual problem\n\\begin{equation}\n\\label{feasibility:dual}\n\\begin{aligned}\n\\mathrm{find} ~~&u \\in {\\mathbb R}^m \\\\\n\\mathrm{subject~to}~~ &A^T u> 0,\n\\end{aligned}\n\\end{equation}\nwhere $A\\in {\\mathbb R}^{m\\times n}$ is an integer (or rational) matrix and $\\mathrm{rank}(A) = m$.\nThe \\emph{maximum support} means that the set of positive coordinates of the returned solution of (\\ref{feasibility:primal}) should be inclusion-wise maximal.\nActually, for the solution $\\hat{x}$ returned by our algorithm, any coordinate $\\hat{x}_i=0$ if and only if this coordinate equals 0 for all feasible solutions of (\\ref{feasibility:primal}).\nThus, our algorithm can be directly used to test the feasibility of the general linear system $Ax=b,x\\geq 0$ with the same time complexity, i.e., given the maximum support solution $(\\bar{x},\\bar{x}')$ to the system $Ax-bx'=0, (x,x')\\geq 0$, if $\\bar{x}'>0$ then the original problem $Ax=b,x\\geq0$ has a solution $\\widetilde{x} = \\bar{x}\/\\bar{x}'$, otherwise it is infeasible.\n\nThere are many (polynomial-time) algorithms for solving linear programming problems, e.g., \\citep{karmarkar1984new}, \\citep{wright1997primal} and \\citep{renegar1988polynomial}.\nRecently, \\cite{chubanov2015polynomial} proposed a polynomial-time projection and rescaling algorithm for solving problem \\eqref{feasibility:primal}.\nDue to its simplicity and efficiency, this kind of algorithm has become a potentially \\emph{practical} class of polynomial algorithms. See e.g., \\citep{dadush2016rescaled}, \\citep{roos2018improved} and \\citep{pena2018computational}.\n\nChubanov's algorithm \\citep{chubanov2015polynomial} and its variants typically consist of two procedures. 
The key part is the \\emph{basic procedure} (BP) and the other part is the \\emph{main algorithm} (MA).\nThe BP returns one of the following three results:\n\\begin{enumerate}[(i)]\n \\item a feasible solution of \\eqref{feasibility:primal};\n \\item a feasible solution of the dual problem \\eqref{feasibility:dual};\n \\item a cut for the feasible region of \\eqref{feasibility:primal}.\n\\end{enumerate}\nNote that exactly one of \\eqref{feasibility:primal} and \\eqref{feasibility:dual} is feasible according to Farkas' lemma. Thus \\eqref{feasibility:primal} is infeasible if BP returns (ii).\nIf BP returns (iii), the other procedure MA rescales the matrix $A$ by using this cut and calls BP again on the rescaled matrix $A$.\nAccording to \\citep{khachian1979polynomial}, which gives a positive lower bound on the entries of a solution of a linear system, after a certain number of rescalings one can conclude that there is no feasible solution for \\eqref{feasibility:primal}. So the number of rescaling operations can be bounded, i.e., the number of MA calls can be bounded.\nConsequently, the algorithm can terminate in finite time no matter whether problem \\eqref{feasibility:primal} is feasible or infeasible.\n\nTo be more precise, we quantify the time complexity.\nThe total time complexity of these Chubanov-type algorithms is typically $O(\\mathsf{T_{MA}}*\\mathsf{T_{BP}})$,\nwhere $\\mathsf{T_{MA}}$ denotes the number of MA calls (rescaling operations), and $\\mathsf{T_{BP}}$ denotes the time required by the basic procedure BP.\nAccording to the classic lower bound \\citep{khachian1979polynomial}, $\\mathsf{T_{MA}}$ can simply be bounded by $O(nL)$ for these Chubanov-type algorithms, where $L$ denotes the bit size of $A$.\nHowever, $\\mathsf{T_{BP}}$ is the most important and tricky part.\nTheoretical results and practical performances vary for different BP procedures.\nThe typical BP procedures include the perceptron method, von Neumann's method, and their variants (see e.g., \\citep{dantzig1992varepsilon,dunagan2008simple,dadush2016rescaled,pena2017solving}).\nWe review more details of these BPs in Section \\ref{sec:bp}.\nUsually, $\\mathsf{T_{BP}}$ equals $O(n^4)$ or $O(n^3m)$ in these BP procedures.\nIn this work, we improve $\\mathsf{T_{BP}}$ by a factor of $\\sqrt{n}$ if (\\ref{feasibility:primal}) or (\\ref{feasibility:dual}) is well-conditioned (measured by \\eqref{eq:rho}), but in the worst case, $\\mathsf{T_{BP}}$ still equals $O(n^3m)$ in our algorithm.\n\n\\topic{Our Motivation}\nIn practice, these Chubanov-type projection and rescaling algorithms usually run much faster on the primal infeasible instances (i.e., \\eqref{feasibility:primal} is infeasible) than on the primal feasible instances (i.e., dual infeasible) no matter what basic procedure (von Neumann, perceptron or their variants) we use (also see Table \\ref{table:comparison} in Section \\ref{sec:exp}).\nIn this paper, we try to explain this phenomenon theoretically. 
Moreover, we try to provide a new algorithm to address this issue.\n\n\\topic{Our Contribution} Concretely, we make the following technical contributions:\n\\begin{enumerate}\n \\item First, for the theoretical explanation, we provide Lemma \\ref{lm:infeasible} which shows that\n the time complexity $\\mathsf{T_{BP}}$ can be $O(n^{2.5}m)$ rather than $O(n^3m)$ (see Lemma \\ref{lm:tbp}) in certain situations if \\eqref{feasibility:primal} is infeasible.\n This gives an explanation of why these Chubanov-type algorithms usually run much faster if \\eqref{feasibility:primal} is infeasible.\n \\item Then, we explicitly develop the \\emph{dual algorithm} (see Section \\ref{sec:dual}) to improve the performance when \\eqref{feasibility:primal} is feasible.\n Our dual algorithm is the first algorithm which rescales the row space of $A$ in MA (see Table \\ref{tab:comp}).\n As a result, we provide a similar Lemma \\ref{lm:infeasibled} which shows that the time complexity $\\mathsf{T_{BP}}$ of our dual algorithm can be $O(n^{2.5}m)$ rather than $O(n^3m)$ in certain situations if \\eqref{feasibility:dual} is infeasible (i.e., \\eqref{feasibility:primal} is feasible).\n\n Naturally, we obtain a new fast polynomial primal-dual projection algorithm (called $\\mathsf{PPDP}$) by integrating our primal algorithm (which runs faster on the primal infeasible instances) and our dual algorithm (which runs faster on the primal feasible instances). See Section \\ref{sec:ppdp}.\n \\item Finally, the numerical results validate that our primal-dual $\\mathsf{PPDP}$ algorithm is quite balanced between feasible and infeasible instances, and it runs significantly faster than other algorithms (see Table \\ref{table:comparison} in Section \\ref{sec:exp}).\n\\end{enumerate}\n\n\\topic{Remark}\nOur algorithms are based on the Dadush-V{\\'e}gh-Zambelli algorithm \\citep{dadush2016rescaled} and the improvements of Roos's algorithm \\citep{roos2018improved} (see Section \\ref{sec:relatealg} and Table \\ref{tab:comp}).\nBesides, we introduce a new step-size term $c$ for practical consideration (see Lines 13 and 14 of Algorithms \\ref{alg:bp} and \\ref{code:dual Dadush}).\nFor the maximum support problem \\eqref{feasibility:primal}, $\\mathsf{T_{BP}}=O(n^4)$ for Chubanov's algorithm and Roos's algorithm, and $\\mathsf{T_{BP}}=O(n^3m)$ for the Dadush-V{\\'e}gh-Zambelli algorithm.\nNote that in the worst case $\\mathsf{T_{BP}}=O(n^3m)$ for our algorithms, but it can be improved by a factor of $\\sqrt{n}$ in certain situations.\nRecall that $\\mathsf{T_{MA}}=O(nL)$ for these Chubanov-type algorithms as we discussed before.\nThus the time complexity of our algorithms (in the worst case) matches the result of the Dadush-V{\\'e}gh-Zambelli algorithm \\citep{dadush2016rescaled}, i.e., $O(\\mathsf{T_{MA}}*\\mathsf{T_{BP}})=O(n^4mL)$ (see our Theorems \\ref{thm:primal}--\\ref{thm:primal_dual}).\nHowever, we point out that the total time complexity of Chubanov's algorithm and Roos's algorithm is $O(n^4L)$, which is faster than ours. They speed it up from $O(n^5L)$ to $O(n^4L)$ by using an amortized analysis, which we currently do not. 
We leave this speedup as future work.\n\n\\topic{Organization}\nIn Section \\ref{sec:pre}, we introduce some useful notations and review some related algorithms.\nThe details and results for our primal algorithm and dual algorithm are provided in Section \\ref{sec:primal} and Section \\ref{sec:dual}, respectively.\nThen, in Section \\ref{sec:ppdp}, we propose the efficient primal-dual $\\mathsf{PPDP}$ algorithm.\nFinally, we conduct the numerical experiments in Section \\ref{sec:exp} and include a brief conclusion in Section \\ref{sec:con}.\n\n\n\n\\section{Preliminaries}\n\\label{sec:pre}\n\nIn this section, we first review some classic basic procedures and then introduce some notation in order to review the related algorithms at the end of the section.\n\\subsection{Classic Basic Procedures} \\label{sec:bp}\nRecall that BP returns one of the following three results:\n(i) a feasible solution of \\eqref{feasibility:primal};\n(ii) a feasible solution of the dual problem \\eqref{feasibility:dual};\n(iii) a cut for the feasible region of \\eqref{feasibility:primal}.\nHere we focus on the first two outputs; the last one is controlled by an upper bound lemma (similar to Lemma \\ref{lm:roosbound}).\n\nLetting $y=Ax$, when solving \\eqref{feasibility:dual}, we know that there is at least one index $k$\nsuch that $a_k^Ty\\leq 0$, where $a_k$ is the $k$th column of $A$ (otherwise $y$ is already a feasible solution for \\eqref{feasibility:dual}).\nOn the other hand, to solve \\eqref{feasibility:primal}, we want to minimize $\\n{y}$. The goal is to let $y$ go to 0 (in which case $x$ is a feasible solution for \\eqref{feasibility:primal}).\nWe review some classic update methods as follows:\n\n\\topic{von Neumann's algorithm}\nIn each iteration, find an index $k$ such that $a_k^Ty\\leq 0$, and then update $x$ and $y$ as\n\\begin{equation}\\label{eq:update}\n\\red{y'}=\\alpha y+\\beta a_k, \\quad x'=\\alpha x+\\beta e_k \\quad (\\mathrm{note~ that~}y=Ax \\mathrm{~and~} \\red{y'}=Ax'),\n\\end{equation}\nwhere $\\alpha,\\beta>0$ are chosen such that $\\|\\red{y'}\\|$ is smallest and $\\alpha+\\beta=1$ \\citep{dantzig1992varepsilon}.\n\\begin{figure}[!h]\n\\centering\\includegraphics[scale=0.8]{von.pdf}\n\\end{figure}\n\n\\topic{Perceptron}\nChoose $\\alpha=\\beta=1$ in \\eqref{eq:update} at every iteration. See e.g., \\citep{rosenblatt1957perceptron,novikoff1963convergence}.\n\n\\topic{Dunagan-Vempala}\nFix $\\alpha=1$ and choose $\\beta$ to minimize $\\n{\\red{y'}}$ \\citep{dunagan2008simple}.\n\n\\subsection{Notations}\nBefore reviewing the related algorithms (in the following Section \\ref{sec:relatealg}), we need to define\/recall some useful notations.\nWe use $P_A$ and $Q_A$ to denote the projections of ${\\mathbb R}^n$ onto the null space ($\\mathcal{N}_A$) and row space ($\\mathcal{R}_A$) of the $m\\times n$ matrix $A$, respectively:\n\\begin{equation*}\nP_A \\triangleq I-A^T (A A^T)^{\\dag}A,\\qquad Q_A \\triangleq A^T (A A^T)^{\\dag}A,\n\\end{equation*}\nwhere $(\\cdot)^{\\dag}$ denotes the Moore-Penrose pseudoinverse; in particular, $(A A^T)^{\\dag}=(A A^T)^{-1}$ if $\\mathrm{rank}(A)=m$. 
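For concreteness, these projections and the three classic updates take only a few lines to implement; the following is a minimal sketch (assuming \\texttt{numpy}; the function names are ours, purely for illustration, and we only track $y$, with $x$ updated in parallel by the same coefficients):\n\\begin{verbatim}\nimport numpy as np\n\ndef projections(A):\n    # Q_A projects onto the row space of A, P_A onto its null space\n    Q = A.T @ np.linalg.pinv(A @ A.T) @ A\n    return np.eye(A.shape[1]) - Q, Q\n\ndef von_neumann_step(y, a):\n    # minimize |(1-beta)*y + beta*a| over beta in [0,1]\n    d = a - y\n    beta = np.clip(-(y @ d) \/ (d @ d), 0.0, 1.0)\n    return y + beta * d\n\ndef perceptron_step(y, a):\n    # alpha = beta = 1\n    return y + a\n\ndef dunagan_vempala_step(y, a):\n    # alpha = 1; beta minimizes |y + beta*a|\n    return y - ((y @ a) \/ (a @ a)) * a\n\\end{verbatim}\nHere $a$ stands for a column $a_k$ of $A$ with $a_k^Ty\\leq 0$.\n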
\n\nWe further define the following notations:\n\\begin{equation}\n\\label{eq:vec}\nv=Q_Ay\\in \\mathcal{R}_A,\\quad z=P_Ay\\in \\mathcal{N}_A,\\quad y=v+z\\in {\\mathbb R}^n.\n\\end{equation}\nUsually, $z$ is used to denote the feasible solution of \\eqref{feasibility:primal} and $v$ indicates the feasibility of \\eqref{feasibility:dual}.\n\nTo analyze case (iii) of BP, we note that \\eqref{feasibility:primal} is feasible if and only if the system\n\\begin{equation}\n\\label{feasibility:normalized primal}\n\\begin{aligned}\nAx=0, ~x\\in [0,1]^n,~ x\\neq 0\n\\end{aligned}\n\\end{equation}\nis feasible since \\eqref{feasibility:primal} is a homogeneous system.\nFrom now on, we will consider problem \\eqref{feasibility:normalized primal} instead of \\eqref{feasibility:primal}. Similarly, we use a normalized version \\eqref{eq:bounded dual} to replace \\eqref{feasibility:dual}.\nNow, we recall a useful lemma which gives an upper bound for the coordinates of any feasible solution. This upper bound will indicate a cut for case (iii).\n\\begin{lemma}[\\citep{roos2018improved}]\n\\label{lm:roosbound}\nLet $x$ be any feasible solution of \\eqref{feasibility:normalized primal}, and let $y$ and $v$ be defined as in (\\ref{eq:vec}); then every non-zero coordinate $v_j$ of $v$ gives rise to an upper bound for $x_j$, according to\n\\begin{equation*}\nx_j\\leq \\mathrm{bound}_j(y)\\triangleq \\mathbf{1}^T\\Big[\\frac{v}{-v_j}\\Big]^+,\n\\end{equation*}\nwhere $x^+\\triangleq \\max\\{0,x\\}$ and $\\mathbf{1}$ denotes the all-ones vector.\n\\end{lemma}\nThis means that we can scale column $j$ of $A$ by a factor $\\mathrm{bound}_j(y)$ to make the feasible solutions of (\\ref{feasibility:normalized primal}) closer to the all-ones vector $\\mathbf{1}$.\nSimilarly to $x^+$, we denote $x^-\\triangleq -(-x)^+$.\nFurthermore, we need the definition of the condition number $\\rho(Q)$ for a matrix $Q$ \\citep{goffin1980relaxation}:\n\\begin{equation}\\label{eq:rho}\n\\rho(Q)\\triangleq \\max_{x:\\|x\\|_2=1}\\min_{i} \\langle x, \\frac{q_i}{\\|q_i\\|_2} \\rangle,\n\\end{equation}\nwhere $q_i$ is the $i$th column of $Q$.\n\n\\subsection{Related Algorithms}\\label{sec:relatealg}\nNow, we are able to review some related algorithms for solving \\eqref{feasibility:normalized primal} based on the BP procedures introduced in Section \\ref{sec:bp}.\n\n\n\\topic{Chubanov's algorithm}\nInstead of updating in the original space $y=Ax$, \\cite{chubanov2015polynomial} updates in the projection space $z=\\blue{P_A}y$, where $P_A = I-A^T (A A^T)^{-1}A$ is a null space projection of $A$.\nIn each BP iteration, it updates $y$ and $z$ in the same way as von Neumann's update (just replacing $A$ by $P_A$).\nIntuitively, BP either finds a feasible solution $x^*$ of \\eqref{feasibility:normalized primal} or finds a cut (i.e., an index $j$ such that $x_j^*\\leq 1\/2$ for any feasible solution $x^*$ of \\eqref{feasibility:normalized primal} in $[0,1]^n$).\nThen the main algorithm MA rescales the null space of $A$ by dividing the $j$th column of $A$ by 2.\nAccording to \\citep{khachian1979polynomial}, there is a positive lower bound for the feasible solutions of \\eqref{feasibility:normalized primal}.\nThus the number of rescaling operations can be bounded.\nFinally, the algorithm terminates in polynomial time, where either BP returns a feasible solution or MA claims the infeasibility according to the lower bound.\n\n\\topic{Roos's algorithm} \\cite{roos2015chubanov,roos2018improved} provided two 
improvements of Chubanov's algorithm:\n\\begin{enumerate}\n \\item A new cut condition was proposed, which is proved to be better than the one used by Chubanov.\n \\item The BP can use \\emph{multiple indices} to update $z$ and $y$ ($z=P_Ay$) in each iteration, e.g., a set of indices satisfying $(P_A)_i^Tz\\leq 0$. Recall that von Neumann's update only uses one index $k$ satisfying $a_k^Ty\\leq 0$.\n\\end{enumerate}\n\n\\topic{Dadush-V{\\'e}gh-Zambelli}\nCompared with Chubanov's algorithm, \\cite{dadush2016rescaled} used the Dunagan-Vempala update instead of von Neumann's update as its BP, along with Roos's new cut condition. Besides, the updates are performed in the orthogonal space $v=\\blue{Q_A}y$, where $Q_A = A^T (A A^T)^{-1}A$ is a row space projection matrix of $A$.\nBut the rescaling space in MA is the same, i.e., the null space of $A$.\n\n\\topic{Comparison}\nTo demonstrate the differences clearly, we provide a comparison of our algorithms with the other algorithms in Table \\ref{tab:comp}. Note that our primal-dual $\\mathsf{PPDP}$ algorithm is the integration of our primal algorithm and dual algorithm.\n\n\\begin{table}[!ht]\n \\caption{Comparison of our algorithms with other algorithms}\n \\vspace{1mm}\n \\label{tab:comp}\n \\centering\n \\begin{tabular}{ccccc}\n \\toprule\n Algorithms & Update method & Update space & Rescaling space&\n \\#indices\\\\\n \\midrule\n Chubanov's algorithm & von Neumann & Null space & Null space &One\\\\\n Roos's algorithm & von Neumann & Null space & Null space&Multiple\\\\\n Dadush-V{\\'e}gh-Zambelli & Dunagan-Vempala &Row space & Null space&One\\\\\n Our primal algorithm& Dunagan-Vempala & Row space& Null space&Multiple\\\\\n Our dual algorithm & Dunagan-Vempala & Null space& Row space&Multiple\\\\\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\n\\newpage\n\\section{Our Primal Algorithm}\n\\label{sec:primal}\n\nIn this section, we introduce our primal algorithm which consists of the basic procedure BP and the main algorithm MA.\nThe details of BP and MA are provided in Section \\ref{sec:bpprimal} and Section \\ref{sec:maprimal}, respectively.\n\n\\subsection{Basic Procedure (BP)}\\label{sec:bpprimal}\nOur BP is similar to that of \\citep{dadush2016rescaled} (or \\citep{dunagan2008simple}) (see Table \\ref{tab:comp}). 
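Before giving the details, we note that the cut test based on Lemma \\ref{lm:roosbound} amounts to a few vector operations; a minimal sketch (again assuming \\texttt{numpy}, with function names of our own choosing):\n\\begin{verbatim}\nimport numpy as np\n\ndef bounds(v):\n    # bound_j(y) = 1^T [v \/ (-v_j)]^+ for every nonzero v_j\n    b = np.full(v.shape[0], np.inf)\n    for j in np.nonzero(v)[0]:\n        b[j] = np.maximum(v \/ (-v[j]), 0.0).sum()\n    return b\n\ndef cut_indices(v):\n    # J = { j : bound_j(y) <= 1\/2 }; MA may then halve the columns A_j, j in J\n    return np.where(bounds(v) <= 0.5)[0]\n\\end{verbatim}\n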
The details are described in Algorithm \\ref{alg:bp}.\nThe main difference is that we use multiple indices $K$ to update (see Line 9 of Algorithm \\ref{alg:bp}) and introduce the step-size $c$ for practical consideration (see Lines 13 and 14 of Algorithm \\ref{alg:bp}).\n\n\n\n\n\\begin{algorithm}[!htb]\n\t\\caption{Basic Procedure for the Primal Problem}\n\t\\label{alg:bp}\n\t\\begin{algorithmic}[1]\n\t\t\\REQUIRE $Q_A$\n \\ENSURE\n $y, z, J, \\mathrm{case}.$\n \\STATE $r=size(Q_A), \\mathrm{threshold}=1\/2r^{3\/2}, c\\in (0,2), \\mathrm{case}=0$\n \\STATE $y=\\mathbf{1}\/r, v=Q_Ay, z=y-v=P_Ay$\n\t\t\\WHILE{$\\mathrm{case}=0$}\n\t\t\\IF {$z>0$}\n\t\t \\STATE $\\mathrm{case} = 1$ ($z$ is primal feasible); \\textbf{return}\n\t\t\\ELSIF {$v> 0\\; and\\; r==n$}\n \\STATE $\\mathrm{case} = 2$ ($v$ is dual feasible); \\textbf{return}\n\t\t\\ELSE\n \\STATE find $K=\\{k: v_k \\leq 0\\}$\n\t\t \\STATE $q_K=Q_A \\sum_{k\\in K}e_k$\n\t\t \\STATE $\\alpha=\\inner{\\frac{q_K}{\\|q_K\\|_2}}{v}$\n\t\t \\IF {$\\alpha \\leq -\\mathrm{threshold}$}\n \\STATE $y=y-c(\\frac{\\alpha}{\\|q_K\\|_2}\\sum_{k\\in K} e_k)$\n\t\t \\STATE $v=v-c(\\frac{\\alpha}{\\|q_K\\|_2} \\sum_{k\\in K} q_k)$\n\t\t \\ELSE\n\t\t \\STATE find a nonempty set $J$ such that $J\\subseteq \\{j: \\mathrm{bound}_j(y)\\leq \\frac{1}{2} \\}$ (a cut); \\textbf{return}\n\t\t \\ENDIF\n\t\t\\ENDIF\n\t\t\\ENDWHILE\n\t\\end{algorithmic}\n\\end{algorithm}\n\nIn the BP (Algorithm \\ref{alg:bp}), the norm of the iterated vector $v=Q_A y$ is decreasing, while each coordinate of $y$ is increasing.\nThus, after a certain number of iterations, we will obtain a feasible solution $z=y-v=P_Ay>0$.\nOtherwise, it is always possible to find a cut $J$ (Line 16), along with some rescaling operations for the matrix $A$, to make the feasible solutions of (\\ref{feasibility:normalized primal}) closer to the all-ones vector.\nThe cut is guaranteed by the following lemma.\n\\begin{lemma}\\label{lm:cut}\nLet $Q_A$ be the projection matrix at a given iteration of BP (Algorithm \\ref{alg:bp}).\nSuppose that $\\alpha=\\inner{\\frac{q_K}{\\|q_K\\|_2}}{v}> -\\mathrm{threshold}$; then the set $J=\\{j: \\mathrm{bound}_j(y)\\leq \\frac{1}{2} \\}$ is nonempty and every solution $x$ of problem (\\ref{feasibility:normalized primal}) satisfies $x_j\\leq \\frac12$ for all $j\\in J$.\n\\end{lemma}\n\nThis lemma is proved using Lemma \\ref{lm:roosbound} and we defer the proof to Appendix \\ref{app:lmcut}.\n\nFor the time complexity of Algorithm \\ref{alg:bp}, i.e., $\\mathsf{T_{BP}}$, we give the following lemma (the proof is in Appendix \\ref{app:tbp}).\n\\begin{lemma}\\label{lm:tbp}\nThe time complexity of Algorithm \\ref{alg:bp} is $\\mathsf{T_{BP}}=O(n^3m)$. Concretely, it uses at most $O(n^2)$ iterations and each iteration costs at most $O(mn)$ time.\n\\end{lemma}\n\nNote that Lemma \\ref{lm:tbp} holds regardless of whether \\eqref{feasibility:normalized primal} is feasible or infeasible. However, as we discussed before, the algorithm usually performs much better on the infeasible instances than on the feasible instances. To explain this phenomenon, we give the following lemma. 
The proof is deferred to Appendix \\ref{app:infeasible}.\n\\begin{lemma}\\label{lm:infeasible}\nIf \\eqref{feasibility:normalized primal} is infeasible,\nthe time complexity of Algorithm \\ref{alg:bp} is $\\mathsf{T_{BP}}=O(n^2m\/\\rho(Q_A))$, where $\\rho(Q_A)$ is the condition number defined in (\\ref{eq:rho}).\nIn particular, $\\rho(Q_A)$ equals $1\/\\sqrt{n}$ in the well-conditioned case (e.g., $A$ is an identity matrix), and then $\\mathsf{T_{BP}}=O(n^{2.5}m)$ if problem \\eqref{feasibility:normalized primal} is infeasible.\n\\end{lemma}\n\n\n\n\\subsection{Main Algorithm (MA)}\\label{sec:maprimal}\nThe details of our MA are described in Algorithm \\ref{alg:ma}.\nIn particular, we rescale the null space of $A$ in Line 8.\n\\begin{algorithm}[!htb]\n\t\\caption{Main Algorithm for the Primal Problem}\n\t\\label{alg:ma}\n\t\\begin{algorithmic}[1]\n\t\t\\REQUIRE\n $A\\in{\\mathbb R}^{m\\times n}, d=\\mathbf{1}, \\tau=2^{-L}, \\mathrm{case}=0, H=\\varnothing$.\n\t\t\\WHILE{$\\mathrm{case}=0$}\n \\STATE $Q_A=A^T (A A^T)^{\\dag}A$\n \\STATE $(y,z,J,\\mathrm{case})\\leftarrow$ Basic Procedure for Primal Problem$(Q_A)$\n\t\t\\IF {$\\mathrm{case}=0$}\n\t\t \\STATE $d_J=d_J \/2$\n\t\t\n\t\t \\STATE $H=\\{i:d_i\\leq \\tau\\}$\n\t\t \\STATE $d_H=0$\n\t\t \n\t\t \\STATE $A_{J}=A_{J}\/2$\n\t\t \\STATE $A=A_{\\overline{H}}$\n\t\t\n\t\t\\ENDIF\n\t\t\\ENDWHILE\n \\IF{$\\mathrm{case}=1$}\n \\STATE $d=d_{\\overline{H}}$\n \\STATE $D=\\mathrm{diag}(d)$\n \\STATE Define $x$ as $x_{\\overline{H}}=Dz, x_H=0$\n \\ENDIF\n\t\\end{algorithmic}\n\\end{algorithm}\n\nNow, we state the complexity of our primal algorithm in the following theorem.\nThe proof is deferred to Appendix \\ref{app:thmprimal}.\n\\begin{theorem}\\label{thm:primal}\nThe time complexity of the primal algorithm is $O(n^4mL)$.\n\\end{theorem}\n\n\n\n\n\\section{Our Dual Algorithm}\n\\label{sec:dual}\nThe Chubanov-type algorithms all focus on the primal problem (\\ref{feasibility:primal}) (or the normalized version \\eqref{feasibility:normalized primal}), i.e.,\ntheir MA always rescales the null space of $A$ (see Table \\ref{tab:comp}).\nWe emphasize that these algorithms usually perform much better on the infeasible instances than on the feasible ones (see our Lemma \\ref{lm:infeasible} which gives an explanation).\nNow, we want to address this unbalanced issue by providing a dual algorithm.\nOur dual algorithm explicitly considers the dual problem (\\ref{feasibility:dual}) and rescales the \\emph{row space} of $A$, unlike the previous algorithms.\nWe already know that the primal algorithm runs faster on the primal infeasible instances.\nThus we expect the dual algorithm to run faster on the dual infeasible instances (i.e., primal feasible instances).\nAs expected, our dual algorithm does work.\nTherefore, in Section \\ref{sec:ppdp}, we integrate our primal algorithm and dual algorithm to obtain a quite \\emph{balanced} primal-dual algorithm, and its performance is also remarkably better than the previous algorithms.\n\nSimilar to our primal algorithm, the dual algorithm also consists of the basic procedure BP and the main algorithm MA.\nThe details of BP and MA are provided in Section \\ref{subsec:bpdual} and Section \\ref{subsec:madual}, respectively.\nSimilar to \\eqref{feasibility:normalized primal}, we consider the normalized version of (\\ref{feasibility:dual}) due to the homogeneity:\n\\begin{equation}\n\\label{eq:bounded dual}\n\\mathrm{find }\\; u\\in {\\mathbb R}^m \\; \\mathrm{subject} \\; \\mathrm{to} \\; x=A^T u> 0, x\\in 
(0,1]^n.\n\\end{equation}\n\n\\subsection{Basic Procedure for the Dual Problem}\n\\label{subsec:bpdual}\n\nThe basic procedure for the dual problem is described in Algorithm \\ref{code:dual Dadush}.\n\\begin{algorithm}[!htb]\n\t\\caption{Basic Procedure for the Dual Problem}\n\t\\label{code:dual Dadush}\n\t\\begin{algorithmic}[1]\n \\REQUIRE\n $P_A$\n \\ENSURE\n $y, z, J, \\mathrm{case}.$\n \\STATE $\\mathrm{threshold}=1\/2n^{3\/2}, c\\in (0,2), \\mathrm{case}=0$\n \\STATE $y = \\mathbf{1}\/n, z=P_Ay, v=y-z=Q_Ay$\n\t\t\\WHILE {$\\mathrm{case}=0$}\n\t\t\\IF {$v> 0$}\n \\STATE $\\mathrm{case}=2$ (dual feasible); \\textbf{return}\n\t\t\\ELSIF {$z\\geq 0$}\n\t\t \\STATE $\\mathrm{case}=1$ (primal feasible); \\textbf{return}\n\t\t\\ELSE\n\t\t \\STATE find $K=\\{k:\\langle z, e_k\\rangle\\leq 0\\}$\n\t\t \\STATE $p_K=P_A \\sum_{k\\in K}e_k$\n\t\t \\STATE $\\alpha=\\langle \\frac{p_K}{\\|p_K\\|_2}, z\\rangle $\n\t\t \\IF {$\\alpha \\leq -\\mathrm{threshold}$}\n\t\t \\STATE $y=y-c(\\frac{\\alpha}{\\|p_K\\|_2}\\sum_{k\\in K} e_k)$\n \\STATE $z=z-c(\\frac{\\alpha}{\\|p_K\\|_2} \\sum_{k\\in K} p_k)$\n \\STATE $v=y-z$\n\t\t \\ELSE\n\t\t \\STATE find a nonempty set $J$ such that $J\\subseteq \\{j: \\mathrm{bound}'_j(y)\\leq \\frac{1}{2} \\}$ (a cut); \\textbf{return}\n\t\t \\ENDIF\n\t\t\\ENDIF\n\t\t\\ENDWHILE\n\t\\end{algorithmic}\n\\end{algorithm}\n\nIn this basic procedure, either a feasible solution for the primal problem is found, or a dual feasible solution is found, or a cut of the bounded row space is found (which is denoted as $\\mathrm{bound}'_j(y)$ in Line 17).\nNow, we need to provide an upper bound in Lemma \\ref{lm:dualcut}, which shows that a cut of the bounded row space can be derived from $z$, instead of from $v$ as in the null space case (see Lemma \\ref{lm:roosbound}).\n\\begin{lemma}\n\\label{lm:dualcut}\n\tLet $x$ be any feasible solution of \\eqref{eq:bounded dual} and $z=P_A y$ for some $y$. Then every non-zero coordinate $z_j$ of $z$ gives rise to an upper bound for $x_j$, according to\n\t\\[\n\tx_j \\leq \\mathrm{bound}'_j(y) \\triangleq \\bm{1}^T \\Big[\\frac{z}{-z_j}\\Big]^+.\n\t\\]\n\\end{lemma}\nThe proof of this lemma is deferred to Appendix \\ref{app:dualcut}.\nAccording to this lemma, we can obtain the following guaranteed cut in Lemma \\ref{lm:cutd}, which is similar to Lemma \\ref{lm:cut}.\nThe proof is almost the same as that of Lemma \\ref{lm:cut}, just replacing Lemma \\ref{lm:roosbound} with our Lemma \\ref{lm:dualcut}.\n\n\\begin{lemma}\\label{lm:cutd}\nLet $P_A$ be the projection matrix at a given iteration of BP (Algorithm \\ref{code:dual Dadush}).\nSuppose that $\\alpha=\\inner{\\frac{p_K}{\\|p_K\\|_2}}{z}> -\\mathrm{threshold}$; then the set $J=\\{j: \\mathrm{bound}'_j(y)\\leq \\frac{1}{2} \\}$ is nonempty and every solution $x$ of problem (\\ref{eq:bounded dual}) satisfies $x_j\\leq \\frac12$ for all $j\\in J$.\n\\end{lemma}\n\nAs with Algorithm \\ref{alg:bp}, for the time complexity $\\mathsf{T_{BP}}$ of the dual Algorithm \\ref{code:dual Dadush}, we have the following lemma.\n\\begin{lemma}\\label{lm:tbpd}\nThe time complexity of Algorithm \\ref{code:dual Dadush} is $\\mathsf{T_{BP}}=O(n^3m)$. 
Concretely, it uses at most $O(n^2)$ iterations and each iteration costs at most $O(mn)$ time.\n\\end{lemma}\nNote that Lemma \\ref{lm:tbpd} also holds regardless of whether \\eqref{eq:bounded dual} is feasible or infeasible.\nNow, we want to point out that our dual algorithm can perform much better on the dual infeasible instances (primal feasible instances) in the well-conditioned case, as we expected and discussed before.\nSimilar to Lemma \\ref{lm:infeasible}, we have the following lemma for the dual algorithm.\n\\begin{lemma}\\label{lm:infeasibled}\nIf \\eqref{eq:bounded dual} is infeasible,\nthe time complexity of Algorithm \\ref{code:dual Dadush} is $\\mathsf{T_{BP}}=O(n^2m\/\\rho(P_A))$, where $\\rho(P_A)$ is the condition number defined in (\\ref{eq:rho}).\nIn particular, $\\rho(P_A)$ equals $1\/\\sqrt{n}$ in the well-conditioned case (e.g., $A=(I,-I)$, where $I$ is an identity matrix), and then $\\mathsf{T_{BP}}=O(n^{2.5}m)$ if problem \\eqref{eq:bounded dual} is infeasible.\n\\end{lemma}\nNote that it is easy to see that \\eqref{feasibility:normalized primal} is feasible (i.e., \\eqref{eq:bounded dual} is infeasible) if $A=(I,-I)$.\n\n\nBesides,\nwhen the dual problem \\eqref{eq:bounded dual} is feasible, we can also utilize the geometry of the problem to bound the iteration complexity instead of Lemma \\ref{lm:tbpd}.\nConsider the following kind of condition number of the set $Im(A)\\bigcap [0,1]^n$:\n\\[\n\\delta_{\\infty}(Im(A)\\bigcap [0,1]^n)\\triangleq \\max_{x}\\{\\prod_{i} x_i : x\\in Im(A)\\bigcap [0,1]^n\\}.\n\\]\n\nAs each rescaling in the basic procedure will at least double the value of $\\delta_{\\infty}(Im(A)\\bigcap [0,1]^n)$, and the largest possible value of $\\delta_{\\infty}(Im(B)\\bigcap [0,1]^n)$ for all matrices $B$ is 1, it takes at most $-\\log_2 \\delta_{\\infty}(Im(A)\\bigcap [0,1]^n)$ basic procedures before getting a feasible solution. 
This means that the iteration complexity of the whole algorithm is $O(n^2 \\log \\frac{1}{\\delta_{\\infty}(Im(A)\\bigcap [0,1]^n)})$.\n\n\n\\subsection{Main Algorithm for the Dual Problem}\\label{subsec:madual}\n\nThe main algorithm for the dual problem is described in Algorithm \\ref{code:main procedure of dual Dadush}.\nIn particular, we rescale the row space of $A$ in Line 6.\n\n\\begin{algorithm}[!htb]\n\t\\caption{Main Algorithm for the Dual Problem}\n\t\\label{code:main procedure of dual Dadush}\n\t\\begin{algorithmic}[1]\n \\REQUIRE\n $A\\in{\\mathbb R}^{m\\times n}, d=\\mathbf{1}, \\tau=2^{-L}, \\mathrm{case}=0$.\n\t\t\\WHILE{$\\mathrm{case}=0$}\n \\STATE $P_A=I-A^T (A A^T)^{\\dag}A$\n \\STATE $(y,z,J,\\mathrm{case})\\leftarrow$ Basic Procedure for Dual Problem$(P_A)$\n\t\t\\IF {$\\mathrm{case}=0$}\n\t\t \\STATE $d_J=2d_J$\n \\STATE $A_J=2A_J$\n\t\t \\STATE $H=\\{i:d_i\\geq 2^L\\}$\n \\STATE $d_H=0$\n\t\t \\STATE $A_H=0$\n\t\t\\ENDIF\n\t\t\\ENDWHILE\n \\IF{$\\mathrm{case}=1$}\n \\STATE $D=\\mathrm{diag}(d)$\n \\STATE $x=Dz$\n \\ENDIF\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\\newpage\nNow, we have the following theorem for our dual algorithm.\nThe proof is provided in Appendix \\ref{app:thmdual}.\n\\begin{theorem}\\label{thm:dual}\nThe time complexity of the dual algorithm is $O(n^4mL)$.\n\\end{theorem}\n\n\n\\section{Our Primal-Dual $\\mathsf{PPDP}$ Algorithm}\n\\label{sec:ppdp}\nIn this section, we propose a new polynomial primal-dual projection algorithm (called $\\mathsf{PPDP}$) to take advantage of our primal algorithm and dual algorithm.\nSimilarly, the $\\mathsf{PPDP}$ algorithm also consists of two procedures (MA and BP).\nIntuitively, the BP solves problems (\\ref{feasibility:primal}) and (\\ref{feasibility:dual}) simultaneously.\nRecall that the primal algorithm runs faster on the infeasible instances and the dual algorithm runs faster on the feasible instances (see Table \\ref{table:comparison}).\nThe MA rescales the matrix (row space or null space) according to the output of BP.\nThe MA and BP are formally described in Algorithms \\ref{code:main procedure of primal-dual} and \\ref{code:primal dual}, respectively. The details are deferred to Appendix \\ref{app:algo}.\nThus, we have the following theorem.\n\n\\begin{theorem}\\label{thm:primal_dual}\nThe time complexity of our primal-dual $\\mathsf{PPDP}$ algorithm is $O(n^4mL)$.\n\\end{theorem}\n\nNote that the final output of our $\\mathsf{PPDP}$ algorithm is a feasible solution for either (\\ref{feasibility:primal}) or (\\ref{feasibility:dual}).\nObviously, the algorithm will stop whenever it finds a solution of \\eqref{feasibility:primal} or \\eqref{feasibility:dual}; thus the time complexity of our $\\mathsf{PPDP}$ algorithm follows easily from Theorems \\ref{thm:primal} and \\ref{thm:dual}.\n\n\\section{Experiments}\n\\label{sec:exp}\nIn this section, we compare the performance of our algorithms with Roos's algorithm \\citep{roos2018improved} and Gurobi (one of the fastest solvers nowadays).\nWe conduct the experiments on randomly generated matrices.\nConcretely, we generate $100$ integer matrices $A$ of size $625\\times 1250$, with each entry uniformly randomly generated in the interval $[-100,100]$. The parameter $c\\in (0,2)$ is the step size, which is a new practical term introduced in this work.\nThe average running time of these algorithms is listed in Table \\ref{table:comparison}.\n\n\\begin{table}[!htb]\n \\caption{Running time (sec.) of the algorithms according to whether 
(\\ref{feasibility:primal}) is feasible or infeasible}\n \\vspace{1mm}\n \\label{table:comparison}\n \\centering\n \\begin{tabular}{ccc}\n \\toprule\n Algorithms & \tFeasible instances & \tInfeasible instances \\\\\n \\midrule\n Gurobi (a fast optimization solver) & 3.08 &1.58 \\\\\n Roos's algorithm \\citep{roos2018improved}\n & 10.75 & 0.83 \\\\\n Our primal algorithm ($c=1.8$) & 9.93 & \\red{0.48} \\\\\n Our dual algorithm ($c=1.8$) & \\red{0.35} & 4.57 \\\\\n Our $\\mathsf{PPDP}$ algorithm ($c=1.8$) & \\red{0.60} & \\red{0.58} \\\\\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\nTable \\ref{table:comparison} validates that our new primal-dual $\\mathsf{PPDP}$ algorithm is quite balanced on the feasible and infeasible instances due to the integration of our primal and dual algorithms.\nMoreover, it shows that our $\\mathsf{PPDP}$ algorithm can be a practical option for linear programming since it runs remarkably faster than the fast optimization solver Gurobi.\n\n\\section{Conclusion}\n\\label{sec:con}\nIn this paper, we try to theoretically explain why the Chubanov-type projection algorithms usually run much faster on the primal infeasible instances.\nFurthermore, to address this unbalanced issue, we provide a new fast polynomial primal-dual projection algorithm (called $\\mathsf{PPDP}$) by integrating our primal algorithm (which runs faster on the primal infeasible instances) and our dual algorithm (which runs faster on the primal feasible instances).\nAs a start, we believe more improvements (e.g., the amortized analysis speedup) can be made for the Chubanov-type projection algorithms both theoretically and practically.\n\n\n\n\\section*{Acknowledgments}\nWe would like to thank Jian Li (Tsinghua University), Yuanxi Dai (Tsinghua University) and Rong Ge (Duke University) for useful discussions.\n\n\n\\bibliographystyle{plainnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{S:Intro}\n\nThe simple-multiplicity case of the BiTangential Nevanlinna-Pick \n(BTNP) Interpolation Problem over the right half plane $\\Pi_{+} = \\{ \nz \\in {\\mathbb C} \\colon \\operatorname{Re} z > 0\\}$ can be formulated \nas follows. Let ${\\mathcal S}^{p \\times m}(\\Pi_{+})$ denote the Schur class of\n$\\mathbb C^{p\\times m}$-valued functions that are analytic and contractive-valued\non $\\Pi_+$:\n$$\n {\\mathcal S}^{p \\times m}(\\Pi_{+}) : = \\{ S \\colon \\Pi_{+} \\to {\\mathbb C}^{p\n \\times m} \\colon \\| S(\\lambda) \\| \\le 1 \\text{ for all } \\lambda \\in\n \\Pi_{+} \\}.\n$$\nThe data set ${\\mathfrak D}_{\\rm simple}$ for the problem consists of\na collection of the form\n\\begin{align}\n{\\mathfrak D}_{\\rm simple} = \\{ & z_{i} \\in \\Pi_{+}, \\; x_{i} \\in \n{\\mathbb C}^{1 \\times p},\\; y_{i} \\in {\\mathbb C}^{1 \\times m}\\; \n\\text{ for }\\; i=1,\\ldots,N, \\notag \\\\\n&w_j\\in \\Pi_{+}, \\; u_{j} \\in {\\mathbb C}^{m \\times 1},\\, v_{j} \\in {\\mathbb C}^{p\n \\times 1}\\; \\text{ for } \\; j = 1, \\dots, N',\\notag \\\\\n & \\rho_{ij} \\in {\\mathbb C} \\text{ for } (i,j) \\text{ such that }\n z_{i} = w_{j} =: \\xi_{ij}\\}. 
\\label{babydata}\n \\end{align}\n The problem then is to find a function $S\\in {\\mathcal S}^{p \\times m}(\\Pi_{+})$ \nthat satisfies the collection of interpolation conditions\n\\begin{align}\n & x_{i} S(z_{i}) = y_{i} \\; \\text{ for } \\; i = 1, \\dots, N, \n \\label{inter1'} \\\\\n & S(w_{j}) u_{j} = v_{j}\\; \\text{ for } \\; j = 1, \\dots, N', \n \\label{inter2'} \\\\\n & x_{i} S'(\\xi_{ij}) u_{j} = \\rho_{ij} \\; \\text{ for } (i,j) \\; \\text{ \n such that } \\; z_{i} = w_{j} =: \\xi_{ij}. \\label{inter3'}\n\\end{align}\nWe note that the existence of a solution $S$ \nto interpolation conditions \\eqref{inter1'}, \\eqref{inter2'}, \n\\eqref{inter3'} forces the data set \\eqref{babydata} to satisfy \nadditional compatibility equations; indeed, if $S$ solves \n\\eqref{inter1'}--\\eqref{inter3'}, and if $(i,j)$ \nis a pair of indices where $z_{i} = w_{j} =: \\xi_{ij}$, then the \nquantity $x_{i} S(\\xi_{ij}) u_{j}$ can be computed in two ways:\n\\begin{align*}\n & x_{i} S(\\xi_{ij}) u_{j} = (x_{i} S(\\xi_{ij})) u_{j} = y_{i} \n u_{j}, \\\\\n& x_{i} S(\\xi_{ij}) u_{j} = x_{i} (S(\\xi_{ij}) u_{j}) = x_{i} v_{j}\n\\end{align*}\nforcing the compatibility condition\n\\begin{equation} \\label{babycomp}\n x_{i} v_{j} = y_{i} u_{j} \\; \\text{ if } \\; z_{i} = w_{j}.\n\\end{equation}\nMoreover, there is no loss of generality in assuming that each row \nvector $x_{i}$ and each column vector $u_{j}$ in \\eqref{babydata} is nonzero;\nif $x_{i}= 0$ for some $i$, existence of a solution $S$ then forces also that \n$y_{i}=0$ and then the interpolation condition $x_{i} S(z_{i}) = \ny_{i}$ collapses to $0 = 0$ and can be discarded, with a similar \nanalysis in case some $u_{j} = 0$. \n\n\\smallskip\n\nThe following result gives the precise solution criterion. The \nresult actually holds even without the normalization conditions on the data \nset discussed in the previous paragraph.\n\n\n\\begin{theorem} \\label{T:BT-NP}\n{\\rm (See \\cite[Section 4]{LA} for the case where $z_{i} \\ne w_{j}$ for all $i,j$).} \nGiven a data set ${\\mathfrak D}_{\\rm simple}$ as in \\eqref{babydata}, there \nexists a solution $S$ of the associated problem {\\rm BTNP} if and only if the \nassociated Pick matrix\n \\begin{equation} \\label{Pick-simple}\n P_{{\\mathfrak D}_{\\rm simple}} : = \\begin{bmatrix} \n P_{11} & P_{12} \\\\ P_{12}^{*} & P_{22} \\end{bmatrix}\n \\end{equation}\n with entries given by\n \\begin{align*}\n&\t[P_{11}]_{ij} = \\frac{x_{i} x_{j}^{*} - y_{i} \ny_{j}^{*}}{z_{i} + \\overline{z}_{j}} \\; \\text{ for } \\; 1 \\le i,j \\le N, \\\\\n& [P_{12}]_{ij} = \\left\\{\\begin{array}{cl}\n {\\displaystyle\\frac{x_{i}v_{j} - y_{i}u_{j}}{ w_{j}-z_{i}}} & \\text{ if } \\; \n z_{i} \\ne w_{j}, \\\\ \\rho_{i j} & \\text{ if } z_{i} = w_{j}, \\end{array}\\right. \n \\text{ for } 1 \\le i \\le N, 1 \\le j \\le N', \\\\\n& [ P_{22}]_{ij} = \\frac{ u_{i}^{*} u_{j} - \nv_{i}^{*} v_{j}}{ \\overline{w}_{i} + w_{j} } \\; \\text{ for } \\; 1 \\le \ni, j \\le N',\n\\end{align*}\nis positive semidefinite.\n\\end{theorem}\n\nGiven a data set ${\\mathfrak D}_{\\rm simple}$ as above, \nit is convenient to repackage it in a more aggregate form \nas follows (see \\cite{BGR}). 
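Before turning to the repackaging, we remark that the criterion of Theorem \\ref{T:BT-NP} is directly checkable numerically; the following is a minimal sketch (assuming \\texttt{numpy}; the data layout and function names are ours, purely for illustration):\n\\begin{verbatim}\nimport numpy as np\n\ndef pick_matrix(z, x, y, w, u, v, rho):\n    # z: (N,), x: (N,p), y: (N,m) encode x_i S(z_i) = y_i;\n    # w: (Nr,), u: (m,Nr), v: (p,Nr) encode S(w_j) u_j = v_j;\n    # rho[i, j] prescribes x_i S'(xi_ij) u_j whenever z_i == w_j\n    N, Nr = len(z), len(w)\n    P11 = np.array([[(x[i] @ x[j].conj() - y[i] @ y[j].conj())\n                     \/ (z[i] + np.conj(z[j]))\n                     for j in range(N)] for i in range(N)])\n    P22 = np.array([[(u[:, i].conj() @ u[:, j] - v[:, i].conj() @ v[:, j])\n                     \/ (np.conj(w[i]) + w[j])\n                     for j in range(Nr)] for i in range(Nr)])\n    P12 = np.array([[rho[i, j] if z[i] == w[j] else\n                     (x[i] @ v[:, j] - y[i] @ u[:, j]) \/ (w[j] - z[i])\n                     for j in range(Nr)] for i in range(N)])\n    return np.block([[P11, P12], [P12.conj().T, P22]])\n\ndef btnp_solvable(P, tol=1e-10):\n    # a solution exists if and only if P is positive semidefinite\n    return np.linalg.eigvalsh(P).min() >= -tol\n\\end{verbatim}\n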
With data as in \\eqref{babydata}, \nform the septet of matrices\n$(Z,X,Y, W, U, V, \\Gamma)$ where:\n\\begin{align}\n & Z = \\begin{bmatrix} z_{1} & & 0 \\\\ & \\ddots & \\\\ 0 & & z_{N} \n\\end{bmatrix}, \\; \\; \n X = \\begin{bmatrix} x_{1} \\\\ \\vdots \\\\ x_{N} \\end{bmatrix}, \\; \\; \n Y = \\begin{bmatrix} y_{1} \\\\ \\vdots \\\\ y_{N} \\end{bmatrix}, \\notag \\\\\n& W = \\begin{bmatrix} w_{1} & & 0 \\\\ & \\ddots & \\\\0 & & w_{N'} \n\\end{bmatrix}, \\; \\; \nU = \\begin{bmatrix} u_{1} & \\cdots & u_{N'} \\end{bmatrix}, \\; \\;\nV = \\begin{bmatrix} v_{1} & \\cdots & v_{N'} \\end{bmatrix}, \\notag \\\\\n& \\Gamma = \\left[ \\gamma_{ij} \\right]_{i=1,\\ldots N}^\n{j=1,\\ldots,N^\\prime} \\; \\text{ where } \\; \n\\gamma_{ij} = \\left\\{\\begin{array}{cl}\n {\\displaystyle\\frac{x_{i}v_{j} - y_{i}u_{j}}{ w_{j}-z_{i}}} & \\text{ if } \\;\n z_{i} \\ne w_{j}, \\\\ \\rho_{i j} & \\text{ if } z_{i} = w_{j}. \n\\end{array}\\right.\n\\label{babydata'}\n\\end{align}\nNote that the compatibility condition \\eqref{babycomp} translates to \nthe fact that $\\Gamma$ satisfies the Sylvester equation\n$$\n\\Gamma W-Z\\Gamma = \\begin{bmatrix} X & -Y \\end{bmatrix} \n \\begin{bmatrix} V \\\\ U \\end{bmatrix}.\n$$\nThe normalization requirements ($x_{i} \\ne 0$ for all $i$ and $u_{j} \n\\ne 0$ for all $j$ together with $z_{1}, \\dots, z_{N}$ all distinct \nand $w_{1}, \\dots, w_{N'}$ all distinct) translate to the conditions\n$$\n (Z,X) \\text{ is controllable}, \\quad (U, W) \\text{ is observable.}\n$$\nThen it is not hard to see that the interpolation conditions \n\\eqref{inter1'}, \\eqref{inter2'}, \\eqref{inter3'} can be written in \nthe more aggregate form \n \\begin{align} \n\t& \\sum_{z_{0} \\in \\sigma(Z)} {\\rm Res}_{\\lambda = z_{0}} (\\lambda I \n\t- Z)^{-1} X S(\\lambda) = Y, \\label{resint1} \\\\\n& \\sum_{z_{0} \\in \\sigma(W)} {\\rm Res}_{\\lambda = z_{0}} S(\\lambda) U (\\lambda \nI - W)^{-1} = V, \\label{resint2} \\\\\n& \\sum_{z_{0} \\in \\sigma(Z)\\cup \\sigma(W)} {\\rm Res}_{\\lambda = z_{0}} \n(\\lambda I - Z)^{-1} \nX S(\\lambda) U (\\lambda I - W)^{-1} = \\Gamma. \\label{resint3}\n\\end{align}\n\nSuppose that $(Z,X)$ is any controllable input pair and that $(U,W)$ \nis an observable output pair. Assume in addition that $\\sigma(Z) \n\\cup \\sigma(W) \\subset \\Pi_{+}$ and that $S$ is an analytic matrix \nfunction (of appropriate size) on $\\Pi_{+}$. 
We define the \n\\textbf{Left-Tangential Operator Argument (LTOA) point evaluation} \n$(XS)^{\\wedge L}(Z)$ of $S$ at $Z$ in left direction $X$ by\n$$\n (X S)^{\\wedge L}(Z) = \\sum_{z_{0}\\in\\sigma(Z)} {\\rm Res}_{\\lambda = z_{0}}\n (\\lambda I - Z)^{-1} X S(\\lambda).\n$$\nSimilarly we define the \\textbf{Right-Tangential Operator Argument \n(RTOA) point evaluation} $(S U)^{\\wedge R}(W)$ of $S$ at $W$ in right \ndirection $U$ by\n$$\n (S U)^{\\wedge R}(W) = \\sum_{z_{0} \\in \\sigma(W)} {\\rm Res}_{\\lambda = z_{0}}\n S(\\lambda) U (\\lambda I - W)^{-1}.\n$$\nFinally the \\textbf{BiTangential Operator Argument (BTOA) point \nevaluation} $(X S U)^{\\wedge L,R}(Z, W)$ of $S$ at left argument $Z$ \nand right argument $W$ in left direction $X$ and right direction $U$ \nis given by\n$$\n(X S U)^{\\wedge L,R}(Z, W) =\n\\sum_{z_{0} \\in \\sigma(Z)\\cup \\sigma(W)} {\\rm Res}_{\\lambda = z_{0}}\n(\\lambda I - Z)^{-1} X S(\\lambda) U (\\lambda I - W)^{-1}.\n$$\nWith this condensed notation, we write the interpolation conditions \n\\eqref{resint1}, \\eqref{resint2}, \\eqref{resint3} simply as\n\\begin{align} \n & (X S)^{\\wedge L}(Z) = Y, \\label{BTOAint1} \\\\\n & (S U)^{\\wedge R} (W) = V, \\label{BTOAint2} \\\\\n & (X S U)^{\\wedge L,R}(Z,W) = \\Gamma. \\label{BTOAint3}\n\\end{align}\nLet us say that the data set \n\\begin{equation} \\label{data}\n {\\mathfrak D} = (Z,X,Y; U,V,W; \\Gamma)\n\\end{equation}\nis a {\\em $\\Pi_{+}$-admissible BiTangential Operator Argument (BTOA)\ninterpolation data set} \nif the following conditions hold:\n\\begin{enumerate}\n \\item Both $Z$ and $W$ have spectrum inside $\\Pi_{+}$: \n $\\sigma(Z) \\cup \\sigma(W) \\subset \\Pi_{+}$.\n \\item $(Z,X)$ is controllable and $(U,W)$ is observable.\n \\item $\\Gamma$ satisfies the Sylvester equation\n \\begin{equation} \\label{Sylvester'}\n \\Gamma W - Z \\Gamma = X V - Y U.\n \\end{equation}\n \\end{enumerate}\n Then it makes sense to consider the collection of interpolation \n conditions \\eqref{BTOAint1}, \\eqref{BTOAint2}, \\eqref{BTOAint3} for \n {\\em any} $\\Pi_{+}$-admissible BTOA interpolation data set $(Z,X,Y; \n U,V,W; \\Gamma)$. It can be shown that these interpolation \n conditions can be expressed equivalently as a set of \n higher-order versions of the interpolation conditions \n \\eqref{inter1'}, \\eqref{inter2'}, \\eqref{inter3'}\n (see \\cite[Theorem 16.8.1]{BGR}), as well as a representation of $S$ in \n the so-called Model-Matching form (see \\cite[Theorem 16.9.3]{BGR}, \n \\cite{Francis})\n $$\n S(\\lambda) = T_{1}(\\lambda) + T_{2}(\\lambda) Q(\\lambda) T_{3}(\\lambda),\n $$\n where $T_{1}$, $T_{2}$, $T_{3}$ are rational matrix functions \n analytic on $\\Pi_{+}$ with $T_{2}$ and $T_{3}$ square and analytic \n and invertible along the imaginary line, and where $Q$ is a \n free-parameter matrix function analytic on all of $\\Pi_{+}$. \n \n\\smallskip\n\n It is interesting to note that the Sylvester equation \n \\eqref{Sylvester'} is still necessary for the existence of a \n $p \\times m$-matrix function $S$ analytic on $\\Pi_{+}$ satisfying the \n BTOA interpolation conditions \\eqref{BTOAint1}, \\eqref{BTOAint2}, \n \\eqref{BTOAint3}. 
Indeed, note that\n \\begin{align*}\n & \\left( (\\lambda I - Z)^{-1} X S(\\lambda) \n U (\\lambda I - W)^{-1}\\right) W -\n Z \\left( (\\lambda I - Z)^{-1} X S(\\lambda) U (\\lambda I - W)^{-1} \\right) \\\\\n & = (\\lambda I - Z)^{-1} X S(\\lambda) \n U (\\lambda I - W)^{-1} (W - \\lambda I + \\lambda I) \\\\\n & \\quad \\quad + (\\lambda I - Z - \\lambda I)\n (\\lambda I - Z)^{-1} X S(\\lambda) U (\\lambda I - W)^{-1} \\\\\n & = -(\\lambda I - Z)^{-1} X S(\\lambda) U +\n \\lambda \\cdot (\\lambda I - Z)^{-1} X S(\\lambda) U (\\lambda I - W)^{-1} \\\\\n & \\quad \\quad + X S(\\lambda) U (\\lambda I - W)^{-1} - \n \\lambda \\cdot (\\lambda I - Z)^{-1} X S(\\lambda) U (\\lambda I - W)^{-1} \\\\\n & = -(\\lambda I - Z)^{-1} X S(\\lambda) U + X S(\\lambda) U (\\lambda I - \n W)^{-1}.\n \\end{align*}\n If we now take the sum of the residues of the first and last \n expression in this chain of equalities over points $z_{0} \\in \n \\Pi_{+}$ and use the interpolation conditions \n \\eqref{resint1}--\\eqref{resint3}, we arrive at\n $$\n \\Gamma W - Z \\Gamma = -Y U + X V\n $$ \n and the Sylvester equation \\eqref{Sylvester'} follows.\n \n\\smallskip \n \n \n We now pose the \\textbf{BiTangential Operator Argument Nevanlinna-Pick \n (BTOA-NP) Interpolation Problem}: Given a $\\Pi_{+}$-admissible BTOA \n interpolation data set \\eqref{data}, find $S$ in the \n matrix Schur class over the right half plane ${\\mathcal S}^{p \\times \n m}(\\Pi_{+})$ which satisfies the BTOA interpolation conditions \n \\eqref{BTOAint1}, \\eqref{BTOAint2}, \\eqref{BTOAint3}. \n \n\\smallskip\n\n Before formulating the solution, we need some additional \n notation. Given a $\\Pi_{+}$-admissible BTOA interpolation data set \n \\eqref{data}, introduce two additional matrices $\\Gamma_{L}$ and \n $\\Gamma_{R}$ as the unique solutions of the respective Lyapunov \n equations\n \\begin{align}\n& \\Gamma_{L} Z^{*} + Z \\Gamma_{L} = X X^{*} - Y Y^{*}, \\label{GammaL}\\\\\n& \\Gamma_{R} W + W^{*} \\Gamma_{R} = U^{*} U - V^{*}V. \\label{GammaR} \n \\end{align}\n We define the BTOA-Pick matrix $\\boldsymbol{\\Gamma}_{\\mathfrak D}$ associated \n with the data set \\eqref{data} by\n \\begin{equation} \\label{GammaData}\n \\boldsymbol{\\Gamma}_{\\mathfrak D} = \\begin{bmatrix} \\Gamma_{L} & \\Gamma\n \\\\ \\Gamma^{*} & \\Gamma_{R} \\end{bmatrix}.\n \\end{equation}\n The following is the canonical generalization of Theorem \\ref{T:BT-NP} \n to this more \n general situation.\n \n \\begin{theorem} \\label{T:BTOA-NP} Suppose that \n $$\n {\\mathfrak D} = ( Z,X,Y; U,V, W; \\Gamma)\n $$\n is a $\\Pi_{+}$-admissible BTOA interpolation data set. Then there \n exists a solution $S \\in {\\mathcal S}^{p \\times m}(\\Pi_{+})$ of the BTOA-NP \n interpolation problem associated with data set ${\\mathfrak D}$ if \n and only if the associated BTOA-Pick matrix $\\boldsymbol{\\Gamma}_{\\mathfrak D}$\n defined by \\eqref{GammaData} is positive semidefinite.\n \n In case $\\boldsymbol{\\Gamma}_{\\mathfrak D}$ is strictly positive definite \n ($\\boldsymbol{\\Gamma}_{\\mathfrak D} \\succ 0$), the set of all solutions is \n parametrized as follows. 
Define a $(p+m) \\times (p+m)$-matrix \n function \n $$\n \\Theta(\\lambda) = \\begin{bmatrix} \\Theta_{11}(\\lambda) & \n \\Theta_{12}(\\lambda) \\\\ \\Theta_{21}(\\lambda) & \\Theta_{22}(\\lambda) \n\\end{bmatrix}\n$$ via\n\\begin{equation} \\label{Theta}\n\\Theta(\\lambda) = \\begin{bmatrix} I_{p} & 0 \\\\ 0 & I_{m} \\end{bmatrix} +\n\\begin{bmatrix}-X^{*} & V \\\\ -Y^{*} & U \\end{bmatrix}\n \\begin{bmatrix} \\lambda I + Z^{*} & 0 \\\\ 0 & \\lambda I - W \n \\end{bmatrix}^{-1} \\boldsymbol{\\Gamma}_{\\mathfrak D}^{-1} \n\\begin{bmatrix} X & -Y \\\\-V ^{*} & U^{*} \\end{bmatrix}. \n\\end{equation}\nThen $S$ is a solution of the BTOA-NP interpolation problem if and \nonly if $S$ has a representation as\n\\begin{equation} \\label{LFTparam}\nS(\\lambda) = (\\Theta_{11}(\\lambda) G(\\lambda) + \\Theta_{12}(\\lambda))\n( \\Theta_{21}(\\lambda) G(\\lambda) + \\Theta_{22}(\\lambda))^{-1}\n\\end{equation}\nwhere $G$ is a free-parameter function in the \nSchur class ${\\mathcal S}^{p \\times m}(\\Pi_{+})$.\n\\end{theorem}\nNote that the first part of Theorem \\ref{T:BTOA-NP} for the special \ncase where the data set ${\\mathfrak D}$ has the \nform \\eqref{babydata'} coming from the data set \\eqref{babydata} for a \nBT-NP problem amounts to the content of Theorem \\ref{T:BT-NP}.\n\n\\smallskip\n\nThe BTOA-NP interpolation problem and closely related problems have \nbeen studied and analyzed using a variety of methodologies by a number \nof authors, especially in the 1980s and 1990s, largely inspired by \nconnections with the then emerging $H^{\\infty}$-control theory (see \n\\cite{Francis}). We mention in \nparticular the Schur-algorithm approach in \\cite{LA, DD, AD}, the method of \nFundamental Matrix Inequalities by the \nPotapov school (see e.g., \\cite{KP}) and the related formalism of the \nAbstract Interpolation Problem of Katsnelson-Kheifets-Yuditskii \n(see \\cite{KKY, Kh}), the Commutant Lifting approach of \nFoias-Frazho-Gohberg-Kaashoek (see \\cite{FF, FFGK}), and the \nReproducing Kernel approach of Dym and collaborators (see \\cite{Dym, \nDymOT143}). Our focus here is to revisit two other approaches: \n(1) the Grassmannian\/Kre\\u{\\i}n-space-geometry \napproach of Ball-Helton \\cite{BH}, and (2) the state-space implementation \nof this approach due to Ball-Gohberg-Rodman (\\cite{BGR}). The first \n(Grassmannian) approach relies on Kre\\u{\\i}n-space geometry to arrive at the \nexistence of a solution; the analysis is constructive only after one \nintroduces bases to coordinatize various subspaces and operators. The \nsecond (state-space) approach has the same starting point as the first \n(encoding the problem in terms of the graph of the sought-after solution rather \nthan in terms of the solution itself), but finds state-space \ncoordinates in which to coordinatize the $J$-inner function \nparametrizing the set of solutions and then verifies the \nlinear-fractional parametrization by making use of intrinsic \nproperties of $J$-inner functions together with an explicit \nwinding-number argument, thereby bypassing any appeal to general \nresults from Kre\\u{\\i}n-space geometry. This second approach \nproved to be more accessible to users (e.g., engineers) who were not comfortable with \nthe general theory of Kre\\u{\\i}n spaces.\n\n\\smallskip\n\nIt turns out that the solution criterion $\\boldsymbol{\\Gamma}_{\\mathfrak D}\\succeq 0$\narises more naturally in the second (state-space) approach. 
\nFurthermore, when $\\boldsymbol{\\Gamma}_{\\mathfrak D} \\succ 0$ \n($\\boldsymbol{\\Gamma}_{\\mathfrak D}$ is strictly positive \ndefinite), one gets a linear-fractional parametrization for the set \nof all Schur-class solutions of the interpolation conditions. The \nmatrix function $\\Theta$ generating the linear-fractional map also \ngenerates a matrix kernel function $K_{\\Theta,J}$, namely \n$K_{\\Theta, J}(z, \\zeta) = \\frac{J - \\Theta(z) J \\Theta(\\zeta)^{*}}{z + \\overline{\\zeta}}$ \nwith $J = I_{p} \\oplus (-I_{m})$, which is a positive \nkernel exactly when $\\boldsymbol{\\Gamma}_{\\mathfrak D} \\succ 0$. We can then \nview the fact that the associated reproducing kernel space \n${\\mathcal H}(K_{\\Theta,J})$ is a Hilbert space as also a solution criterion \nfor the BTOA-NP interpolation problem in the nondegenerate case.\n\n\\smallskip\n\nIn the first (Grassmannian\/Kre\\u{\\i}n-space-geometry) \napproach, on the other hand, the immediate solution criterion is in terms of the \npositivity of a certain finite-dimensional subspace $({\\mathcal M}_{\\mathfrak \nD}^{[\\perp {\\mathcal K}]})_{0}$ of a Kre\\u{\\i}n space constructed \nfrom the interpolation data ${\\mathfrak D}$. In the Left Tangential \ncase, one can identify $\\boldsymbol{\\Gamma}_{\\mathfrak D}$ as the Kre\\u{\\i}n-space \nGramian matrix with respect to a natural basis for \n$({\\mathcal M}_{\\mathfrak D}^{[\\perp {\\mathcal K}]})_{0}$, thereby confirming directly \nthe equivalence of the two seemingly distinct solution criteria. For the general \nBiTangential case, the connection between $\\boldsymbol{\\Gamma}_{\\mathfrak D}$ and \n$({\\mathcal M}_{\\mathfrak D}^{[\\perp {\\mathcal K}]})_{0}$ is not so direct, but nevertheless, \nusing ideas from \\cite{BHMesa}, we present here a direct proof as to why \n$\\boldsymbol{\\Gamma}_{\\mathfrak D} \\succeq 0$ is equivalent to Kre\\u{\\i}n-space positivity \nof $({\\mathcal M}_{\\mathfrak D}^{[\\perp {\\mathcal K}]})_{0}$, which is interesting in its own right. \nAlong the way, we also show how the Fundamental Matrix Inequality approach \nto interpolation of the Potapov school \\cite{KP} can be incorporated into this \nBTOA-interpolation formalism to give an alternative derivation of the linear-fractional \nparametrization, which also bypasses the winding-number argument, at \nleast for the classical Schur-class setting. We also sketch how all \nthe results extend to the more general problem where one seeks \nsolutions of the BTOA interpolation conditions \\eqref{BTOAint1}--\\eqref{BTOAint3} \nin the Kre\\u{\\i}n-Langer generalized Schur class ${\\mathcal S}_{\\kappa}^{p \\times m}(\\Pi_{+})$ \nwith the integer $\\kappa$ as small as possible.\n\n\\smallskip\n\nThe plan of the paper is as follows. In Section \\ref{S:proof1} we \nsketch the ideas of the second (state-space) approach, with the \nFundamental Matrix Inequality approach and the reproducing-kernel \ninterpretation dealt with in succeeding subsections. In Section \n\\ref{S:proof2} we sketch the somewhat more involved ideas behind the \nfirst (Grassmannian\/Kre\\u{\\i}n-space-geometry) approach. In Section \n\\ref{S:synthesis} we identify the connections between the two \napproaches and in particular show directly that the two solution \ncriteria are indeed equivalent. 
In the final Section \n\\ref{S:negsquares} we indicate how the setup extends to interpolation \nproblems for the generalized Schur class ${\\mathcal S}_{\\kappa}^{p \\times \nm}(\\Pi_{+})$.\n\n\n\n\n\\section{The state-space approach to the BTOA-NP interpolation problem} \\label{S:proof1}\n\nIn this section we sketch the analytic proof of Theorem \\ref{T:BTOA-NP} from \\cite{BGR}.\nFor ${\\mathcal U}$ and ${\\mathcal Y}$ Hilbert spaces, we let ${\\mathcal L}({\\mathcal U}, {\\mathcal Y})$ denote the\nspace of bounded linear operators mapping ${\\mathcal U}$ into ${\\mathcal Y}$,\nabbreviated to ${\\mathcal L}({\\mathcal U})$ in case ${\\mathcal U} = {\\mathcal Y}$. We then define the\noperator-valued version of the Schur class ${\\mathcal S}_{\\Omega}({\\mathcal U}, {\\mathcal Y})$\nto consist of holomorphic functions $S$ on $\\Omega$\nwith values equal to contraction operators from ${\\mathcal U}$ into ${\\mathcal Y}$. \n\n\\smallskip\n\nWe first recall some standard facts concerning positive kernels and \nreproducing kernel Hilbert spaces (see e.g., \\cite{BB-HOT}).\nGiven a point-set $\\Omega$ and coefficient Hilbert space ${\\mathcal Y}$ along \nwith a function $K \\colon \\Omega \\times \\Omega \\to {\\mathcal L}({\\mathcal Y})$, we say \nthat $K$ is a {\\em positive kernel on $\\Omega$} if\n\\begin{equation} \\label{posker}\n \\sum_{i,j=1}^{N}\\langle K(\\omega_{i}, \\omega_{j}) y_{j}, y_{i} \n \\rangle_{{\\mathcal Y}} \\ge 0\n\\end{equation}\nfor any collection of $N$ points $\\omega_{1}, \\dots, \\omega_{N} \\in \n\\Omega$ and vectors $y_{1}, \\dots, y_{N} \\in {\\mathcal Y}$ with arbitrary \n$N\\ge 1$. It is well known that \nthe following are equivalent:\n\\begin{enumerate}\n \\item $K$ is a positive kernel on $\\Omega$.\n \\item $K$ is the {\\em reproducing kernel} for a {\\em reproducing kernel \n Hilbert space} ${\\mathcal H}(K)$ consisting of functions $f\\colon \\Omega \n \\to {\\mathcal Y}$ such that, for each $\\omega \\in \\Omega$ and $y \\in {\\mathcal Y}$, the \n\tfunction $k_{\\omega,y} \\colon \\Omega \\to {\\mathcal Y}$ defined by\n\\begin{equation} \\label{RKHSa}\nk_{\\omega,y}(\\omega') = K(\\omega', \\omega)y\n\\end{equation}\nis in ${\\mathcal H}(K)$ and has the reproducing property: for each $f \\in \n{\\mathcal H}(K)$, \n\\begin{equation} \\label{RKHSb}\n\\langle f, k_{\\omega,y}\\rangle_{{\\mathcal H}(K)} = \\langle f(\\omega), y \n\\rangle_{{\\mathcal Y}}.\n\\end{equation}\n\\item $K$ has a {\\em Kolmogorov decomposition}: there is a Hilbert \nspace ${\\mathcal X}$ and a function $H \\colon \\Omega \\to {\\mathcal L}({\\mathcal X}, {\\mathcal Y})$ so that\n\\begin{equation} \\label{Kolmogorov}\n K(\\omega', \\omega) = H(\\omega') H(\\omega)^{*}.\n\\end{equation}\n\\end{enumerate}\n\n\\begin{proof}[Proof of Theorem \\ref{T:BTOA-NP}] We first illustrate the \n proof of necessity for the easier \n simple-multiplicity case as formulated \n in Theorem \\ref{T:BT-NP}; the idea is essentially the same as the necessity proof \n in Limebeer-Anderson \\cite{LA}.\n\n\\smallskip\n \nIt is well known that a Schur-class function $F \\in{\\mathcal S}_{\\mathbb D}({\\mathcal U},{\\mathcal Y})$ \non the unit disk can be characterized not only by the positivity of \nthe de Branges-Rovnyak kernel \n$$\n {\\mathbf K}_{F}(\\lambda, w) = \\frac{ I - F(\\lambda) F(w)^{*}}{1 - \\lambda \n \\overline{w}}\n$$\non the unit disk ${\\mathbb D}$, but also by positivity of the\nblock $2 \\times 2$-matrix kernel defined on $({\\mathbb D} \\times \n {\\mathbb D}) \\times ({\\mathbb D}
\\times {\\mathbb D})$ by\n$$\n \\widetilde {\\mathbf K}_{F}(\\lambda, \\lambda_{*}; w, w_{*}): = \\begin{bmatrix} \n {\\displaystyle\\frac{ I - F(\\lambda) \n F(w)^{*}}{1 - \\lambda \\overline{w}}} & {\\displaystyle\\frac{F(\\lambda) - \n F(\\overline{w}_{*})}{\\lambda - \\overline{w}_{*}}}\\vspace{1mm}\\\\\n {\\displaystyle\\frac{F(\\overline{\\lambda}_{*})^{*} - \n F(w)^{*}}{\\lambda_{*} - \\overline{w}}} &\n {\\displaystyle\\frac{I - F(\\overline{\\lambda}_{*})^{*} F(\\overline{w}_{*})}{1 - \\lambda_{*} \n \\overline{w}_{*}}} \n\\end{bmatrix}.\n$$\nMaking use of the linear-fractional change of variable from ${\\mathbb D}$ to $\\Pi_{+}$\n$$\n \\lambda \\in {\\mathbb D} \\mapsto z = \\frac{ 1+ \\lambda}{1- \\lambda} \\in \\Pi_{+}\n$$\nwith inverse given by\n$$\n z \\in \\Pi_{+} \\mapsto \\lambda = \\frac{z -1}{z +1} \\in {\\mathbb D},\n$$\nit is easily seen that a function $S$ defined on $\\Pi_{+}$ is in the \nSchur class ${\\mathcal S}_{\\Pi_+}({\\mathcal U},{\\mathcal Y})$ over $\\Pi_{+}$ if and only if, not \nonly the $\\Pi_{+}$-de Branges-Rovnyak kernel\n\\begin{equation} \\label{dBRkerPi+}\nK_{S}(z, \\zeta) = \\frac{I - S(z) S(\\zeta)^{*}}{z + \\overline{\\zeta}}\n\\end{equation}\nis a positive kernel on $\\Pi_{+}$, but also the ($2 \\times 2$)-block de \nBranges-Rovnyak kernel\n\\begin{equation} \\label{ker22}\n {\\mathbf K}_{S}(z, z_{*}; \\zeta, \\zeta_{*}): = \\begin{bmatrix} \n{\\displaystyle\\frac{ I - S(z) S(\\zeta)^{*}}{ z + \\overline{\\zeta}}} &\n{\\displaystyle\\frac{S(z) - S(\\overline{\\zeta}_{*})}{z - \\overline{\\zeta}_{*}}} \n\\vspace{1mm}\\\\ {\\displaystyle\\frac{ S(\\overline{z}_{*})^{*} - S(\\zeta)^{*}}\n{ z_{*} - \\overline{\\zeta}}} &\n{\\displaystyle\\frac{I - S(\\overline{z}_{*})^{*} S(\\overline{\\zeta}_{*})}\n{ z_{*} + \\overline{\\zeta}_{*}}}\n\\end{bmatrix}\n\\end{equation}\nis a positive kernel on $(\\Pi_{+}\\times\\Pi_+) \\times (\\Pi_+\\times \\Pi_{+})$. Specifying the \nlatter kernel at the points $(z, z_{*}), (\\zeta, \\zeta_{*}) \\in \\Pi_{+} \\times \\Pi_{+}$ \nwhere $z, \\zeta = z_{1}, \\dots, z_{N}$ and $z_{*}, \\zeta_{*} = \\overline{w}_{1}, \\dots, \n\\overline{w}_{N'}$, leads to the conclusion that the block matrix\n\\begin{equation} \\label{Pick-pre}\n\\begin{bmatrix} \\left[ {\\displaystyle\\frac{I - S(z_{i}) \n S(z_{j})^{*}}{z_{i} + \\overline{z}_{j}}} \\right] &\n \\left[ {\\displaystyle\\frac{S(z_{i}) - S(w_{j'})}{ z_{i} - w_{j'}}} \\right] \n \\vspace{1mm}\\\\\n \\left[ {\\displaystyle\\frac{S(w_{i'})^{*} - S(z_{j})^{*}}{\\overline{w}_{i'} - \n \\overline{z}_{j}}} \\right] & \\left[ {\\displaystyle\\frac{ I - S(w_{i'})^{*} \n S(w_{j'})}{\\overline{w}_{i'} + w_{j'}}} \\right] \\end{bmatrix},\n\\end{equation}\nwhere $1 \\le i,j \\le N$ and $1 \\le i', j' \\le N'$, is positive \nsemidefinite. Note that the entry $\\frac{S(z_{i}) - S(w_{j'})}{ \nz_{i} - w_{j'}}$ in the upper right corner is to be interpreted as \n$S'(\\xi_{i,j'})$ in case $z_{i} = w_{j'} =: \\xi_{i,j'}$ for some pair \nof indices $i,j'$. \n\n\\smallskip\n\nSuppose now that $S \\in {\\mathcal S}_{\\Pi_{+}}({\\mathcal U}, {\\mathcal Y})$ is a \nSchur-class solution of the interpolation conditions \\eqref{inter1'}, \n\\eqref{inter2'}, \\eqref{inter3'}. 
When we multiply the matrix \n\\eqref{Pick-pre} on the left by the block diagonal matrix\n$$\n \\begin{bmatrix} \\operatorname{diag}_{1 \\le i \\le N}[ x_{i}] & 0 \\\\ \n 0 & \\operatorname{diag}_{1 \\le i' \\le N'} [ u_{i'}^{*}] \n \\end{bmatrix}\n$$\nand on the right by its adjoint, we arrive at the matrix \n$P_{{\\mathfrak D}_{\\rm simple}}$ \\label{Psimple}. This verifies the \nnecessity of the condition $P_{{\\mathfrak D}_{\\rm simple}} \\succeq 0$ \nfor a solution of the BT-NP interpolation problem to exist.\n\n\\smallskip\n\nWe now consider the proof of necessity for the general case.\nWe note that the proof of necessity in \n \\cite{BGR} handles explicitly only the case where the Pick matrix \n is invertible and relies on use of the matrix-function $\\Theta$ \n generating the linear-fractional parametrization (see \n \\eqref{oct1} below). We give a proof \n here which proceeds directly from the BTOA-interpolation formulation; it \n amounts to a specialization of the proof of necessity for the more \n complicated multivariable interpolation problems in the Schur-Agler \n class done in \\cite{BB05}. \n\n\\smallskip\n\nThe starting point is the observation that the positivity of the \nkernel ${\\mathbf K}_{S}$ implies that it has a Kolmogorov \ndecomposition \\eqref{Kolmogorov}; furthermore the extra structure of the arguments of \nthe kernel ${\\mathbf K}_{S}$ implies that the Kolmogorov \ndecomposition can be taken to have the form\n\\begin{equation} \\label{KS2}\n {\\mathbf K}_{S}(z, z_{*}; \\zeta, \\zeta_{*}) =\n \\begin{bmatrix} H(z) \\\\ G(z_{*})^* \\end{bmatrix}\n \\begin{bmatrix} H(\\zeta)^{*} & G(\\zeta_{*}) \\end{bmatrix}\n\\end{equation}\nfor holomorphic operator functions\n$$ \nH \\colon \\Pi_{+} \\to {\\mathcal L}({\\mathcal X}, {\\mathcal Y}), \\quad G \\colon \\Pi_{+} \\to {\\mathcal L}({\\mathcal U}, {\\mathcal X}).\n$$\nIn the present matricial setting of $\\mathbb C^{p\\times m}$-valued functions, \nthe spaces ${\\mathcal U}$ and ${\\mathcal Y}$ are finite dimensional and can be identified\nwith $\\mathbb C^m$ and $\\mathbb C^p$, respectively. \nIn particular we read off the identity\n\\begin{equation}\\label{Gamma'12}\n H(z)G(\\zeta) = \\frac{ S(z) - S(\\zeta)}{z - \\zeta}\n \\end{equation}\nwith appropriate interpretation in case $z = \\zeta$. 
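\nTo make the diagonal interpretation explicit (a routine remark which we insert here for the reader's convenience): since $S$, $H$ and $G$ are holomorphic on $\\Pi_{+}$, letting $\\zeta \\to z$ in \\eqref{Gamma'12} gives\n$$\nH(z) G(z) = \\lim_{\\zeta \\to z} \\frac{S(z) - S(\\zeta)}{z - \\zeta} = S'(z),\n$$\nso on the diagonal the difference quotient is to be read as the derivative $S'(z)$.\n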
Observe that \nfor a fixed $\\zeta\\in \\Pi_+\\backslash\\sigma(Z)$, we have from \\eqref{Gamma'12}\n\\begin{align}\n(XH)^{\\wedge L}(Z)\\cdot G(\\zeta)\n&=\\sum_{z_{0} \\in \\sigma(Z)} {\\rm Res}_{\\lambda \n= z_{0}} (\\lambda I- Z)^{-1} X H(\\lambda)G(\\zeta)\\notag\\\\ \n&=\\sum_{z_{0} \\in \\sigma(Z)} {\\rm Res}_{\\lambda = z_{0}} (\\lambda I- Z)^{-1} X \n\\frac{ S(\\lambda) - S(\\zeta)}{\\lambda - \\zeta}\\notag\\\\\n&=\\sum_{z_{0} \\in \\sigma(Z)} {\\rm Res}_{\\lambda = z_{0}} (\\lambda I- Z)^{-1}\n\\frac{Y - XS(\\zeta)}{\\lambda - \\zeta}\\notag\\\\\n&=(\\zeta I-Z)^{-1}(XS(\\zeta)-Y)\\label{aug1}\n\\end{align}\nwhere we used the interpolation condition \\eqref{resint1} for the third equality.\nSince the function $g(\\zeta)=(\\zeta I-Z)^{-1}YU(\\zeta I-W)^{-1}$ \nsatisfies an estimate of the form\n$ \\|g(\\zeta)\\| \\le \\frac{M}{ |\\zeta|^{2}}\\; $ as $\\; |\\zeta| \\to \\infty$, it follows that\n$$\n\\sum_{z_{0} \\in \\sigma(Z)\\cup\\sigma(W)}{\\rm Res}_{\\zeta= z_{0}}\n(\\zeta I-Z)^{-1}YU(\\zeta I-W)^{-1}=0.\n$$\nOn the other hand, due to condition \\eqref{resint1}, the function on the \nright hand side of \\eqref{aug1}\nis analytic (in $\\zeta$) on $\\Pi_+$, so that \n\\begin{align*}\n&\\sum_{z_{0} \\in \\sigma(W)} {\\rm Res}_{\\zeta=z_0}\n(\\zeta I-Z)^{-1}(XS(\\zeta)-Y)U(\\zeta I-W)^{-1}\\\\\n&=\\sum_{z_{0} \\in \\sigma(Z)\\cup\\sigma(W)} \n{\\rm Res}_{\\zeta=z_0}(\\zeta I-Z)^{-1}(XS(\\zeta)-Y)U(\\zeta I-W)^{-1}.\n\\end{align*}\nWe now apply the {\\bf RTOA} point evaluation to both sides in \\eqref{aug1} and\nmake use of the last two equalities and the interpolation condition \\eqref{resint3}:\n\\begin{align}\n&(XH)^{\\wedge L}(Z)(GU)^{\\wedge R}(W)\\notag \\\\\n&=\\sum_{z_{0} \\in \\sigma(W)} {\\rm Res}_{\\zeta=z_0}(\\zeta I-Z)^{-1}\n(XS(\\zeta)-Y)U(\\zeta I-W)^{-1}\\notag\\\\\n&=\\sum_{z_{0} \\in \\sigma(Z)\\cup\\sigma(W)} {\\rm Res}_{\\zeta=z_0}\n(\\zeta I-Z)^{-1}(XS(\\zeta)-Y)U(\\zeta I-W)^{-1}\\notag\\\\\n&=\\sum_{z_{0} \\in \\sigma(Z)\\cup\\sigma(W)} \n{\\rm Res}_{\\zeta=z_0}(\\zeta I-Z)^{-1}XS(\\zeta)U(\\zeta I-W)^{-1}=\\Gamma.\n\\label{aug2}\n\\end{align}\nLet us now introduce the block $2 \\times 2$-matrix $\\boldsymbol{\\Gamma}'_{\\mathfrak \nD}$ by\n\\begin{equation} \n \\boldsymbol{\\Gamma}'_{\\mathfrak D} = \\begin{bmatrix} (XH)^{\\wedge L}(Z) \n \\\\ (GU)^{\\wedge R}(W)^{*} \\end{bmatrix}\n \\begin{bmatrix} ((XH)^{\\wedge L}(Z) )^{*} & (GU)^{\\wedge R}(W) \n \\end{bmatrix}.\n\\label{aug4}\n\\end{equation}\nWe claim that $\\boldsymbol{\\Gamma}'_{\\mathfrak D} = \\boldsymbol{\\Gamma}_{\\mathfrak D}$. Note \nthat equality of the off-diagonal blocks follows from \n\\eqref{aug2}. It remains to show the two equalities\n\\begin{align}\n\\Gamma'_{L}&:=(XH)^{\\wedge L}(Z) ((XH)^{\\wedge L}(Z))^{*}= \\Gamma_{L}, \\label{verifyGamma}\\\\\n\\Gamma'_{R}&:=((GU)^{\\wedge R}(W))^{*} (GU)^{\\wedge R}(W)=\n \\Gamma_{R}.\\label{verifyGamma1}\n\\end{align}\nTo verify \\eqref{verifyGamma}, we note \nthat $\\Gamma_{L}$ is defined as the unique solution of the Lyapunov \nequation \\eqref{GammaL}. Thus it suffices to verify that $\\Gamma'_{L}$\nalso satisfies \\eqref{GammaL}. 
Toward this end, the two expressions \\eqref{ker22} and \n\\eqref{KS2} for ${\\mathbf K}_{S}$ give us equality of the \n$(1,1)$-block entries:\n$$\n H(z) H(\\zeta)^{*} = \\frac{ I - S(z) S(\\zeta)^{*}}{ z + \n \\overline{\\zeta}}\n$$\nwhich we prefer to rewrite in the form\n\\begin{equation} \\label{identity}\n z \\cdot H(z) H(\\zeta)^{*} + H(z) H(\\zeta)^{*} \\cdot \n \\overline{\\zeta} = I - S(z) S(\\zeta)^{*}.\n\\end{equation}\nTo avoid confusion, let us introduce the notation $\\chi$ for the \nidentity function $\\chi(z) = z$ on $\\Pi_{+}$. Then it is \neasily verified that\n\\begin{equation} \\label{identity'}\n ( X \\chi \\cdot H)^{\\wedge L}(Z) = Z (X H)^{\\wedge L}(Z).\n\\end{equation}\nMultiplication on the left by $X$ and on the right by $X^{*}$ and \nthen plugging in the left operator argument $Z$ for $z$ in \\eqref{identity} then gives\n\\begin{align*}\n& Z (X H)^{\\wedge L}(Z) H(\\zeta)^{*}X^{*} + (X H)^{\\wedge L}(Z) H(\\zeta)^{*}X^{*} \n \\cdot \\overline{\\zeta} \\\\\n & \\quad = XX^{*} - (X S)^{\\wedge L}(Z) S(\\zeta)^{*}X^{*} = XX^{*} - Y S(\\zeta)^{*}X^{*}.\n\\end{align*}\nReplacing the variable $\\zeta$ by the operator argument $Z$ and \napplying the adjoint of the identity \\eqref{identity'} then brings us to\n$$\nZ (XH)^{\\wedge L}(Z)( (XH)^{\\wedge L}(Z))^{*} + (XH)^{\\wedge L}(Z)( (XH)^{\\wedge L}(Z))^{*} Z^{*} = \nX X^{*} - Y \\left( (X S)^{\\wedge L}(Z) \\right)^{*} = X X^{*} - Y Y^{*},\n$$\ni.e., $\\Gamma'_{L}$ satisfies \\eqref{GammaL} as wanted. The proof \nthat $\\Gamma'_{R}$ (see \\eqref{verifyGamma1}) \nsatisfies \\eqref{GammaR} proceeds in a similar way.\n \n\\smallskip\n\nFor the sufficiency direction, for simplicity we shall assume \n that $\\boldsymbol{\\Gamma}_{\\mathfrak D}$ is strictly positive definite rather \n than just positive semidefinite. We then must show that \n solutions $S$ of the BTOA-NP problem exist and in fact the set of \n all solutions is given by the linear-fractional parametrization\n \\eqref{LFTparam}. The case where the Pick matrix is \n positive-semidefinite then follows by perturbing the semidefinite \n Pick matrix to a definite Pick matrix and using an approximation \n and normal families argument. The ideas follow \\cite{BGR}.\n \n\\smallskip\n\nLet us therefore assume that $\\boldsymbol{\\Gamma}_{\\mathfrak D}$ is positive \ndefinite. Then we can form the rational matrix function $\\Theta$ \ngiven by \\eqref{Theta}. Let us write $\\Theta$ in the more condensed \nform \n\\begin{equation}\n\\Theta(\\lambda) = I_{p+m} - {\\mathbf C} (\\lambda I - {\\mathbf A})^{-1} \\boldsymbol{\\Gamma}_{\\mathfrak \nD}^{-1} {\\mathbf C}^{*} J\n\\label{oct1}\n\\end{equation}\nwhere we set\n\\begin{equation}\n{\\mathbf A} = \\begin{bmatrix} -Z^{*} & 0 \\\\ 0 & W \\end{bmatrix}, \\quad\n{\\mathbf C} = \\begin{bmatrix} -X^{*} & V \\\\ -Y^{*} & U \\end{bmatrix},\\quad \nJ = \\begin{bmatrix} I_{p} & 0 \\\\ 0 & -I_{m} \\end{bmatrix}.\n\\label{oct2}\n\\end{equation}\nRecall that $\\Gamma_{L}$, $\\Gamma_{R}$, $\\Gamma$ satisfy the \nLyapunov\/Sylvester equations \\eqref{GammaL}, \\eqref{GammaR}, \n\\eqref{Sylvester'}. 
Consequently one can check that \n$\\boldsymbol{\\Gamma}_{\\mathfrak D}$ satisfies the $(2 \\times 2)$-block \nLyapunov\/Sylvester equation\n\\begin{align*}\n & \\begin{bmatrix} \\Gamma_{L} & \\Gamma \\\\ \\Gamma^{*} & \\Gamma_{R} \n\t\\end{bmatrix} \\begin{bmatrix} -Z^{*} & 0 \\\\ 0 & W \\end{bmatrix}\n+ \\begin{bmatrix} -Z & 0 \\\\ 0 & W^{*} \\end{bmatrix} \\begin{bmatrix} \n\\Gamma_{L} & \\Gamma \\\\ \\Gamma^{*} & \\Gamma_{R} \\end{bmatrix} \\\\\n& \\quad = \\begin{bmatrix} Y Y^{*} - X X^{*} & XV - YU \\\\ V^{*} X^{*} - U^{*} \nY^{*} & U^{*} U - V^{*} V \\end{bmatrix},\n\\end{align*}\nor, in more succinct form,\n\\begin{equation} \\label{bigLySyl}\n \\boldsymbol{\\Gamma}_{\\mathfrak D} {\\mathbf A} + {\\mathbf A}^{*} \\boldsymbol{\\Gamma}_{\\mathfrak D} = - {\\mathbf C}^{*} J {\\mathbf C}.\n\\end{equation}\nUsing this we compute\n\\begin{align*}\n & J - \\Theta(\\lambda) J \\Theta(\\zeta)^{*} = \\\\\n& J - \\left( I - {\\mathbf C} (\\lambda I - {\\mathbf A})^{-1} \\boldsymbol{\\Gamma}_{\\mathfrak D}^{-1} \n {\\mathbf C}^{*} J \\right) J \\left( I - J {\\mathbf C} \\boldsymbol{\\Gamma}_{\\mathfrak D}^{-1} \n (\\overline{\\zeta} I - {\\mathbf A}^{*})^{-1} {\\mathbf C}^{*} \\right) \\\\\n & ={\\mathbf C} (\\lambda I - {\\mathbf A})^{-1} \\boldsymbol{\\Gamma}_{\\mathfrak D}^{-1} {\\mathbf C}^{*}\n + {\\mathbf C} \\boldsymbol{\\Gamma}_{\\mathfrak D}^{-1} (\\overline{\\zeta} I -{\\mathbf A}^{*})^{-1} \n {\\mathbf C}^{*} \\\\\n & \\quad \\quad \\quad \\quad - {\\mathbf C} (\\lambda I - {\\mathbf A})^{-1} \\boldsymbol{\\Gamma}_{\\mathfrak \n D}^{-1} {\\mathbf C}^{*} J {\\mathbf C} \\boldsymbol{\\Gamma}_{\\mathfrak D}^{-1} (\\overline{\\zeta} I \n - {\\mathbf A}^{*})^{-1} {\\mathbf C}^{*} \\\\\n & = {\\mathbf C} (\\lambda I - {\\mathbf A})^{-1} \\boldsymbol{\\Gamma}_{\\mathfrak D}^{-1} \\Xi(\\lambda,\\zeta) \\boldsymbol{\\Gamma}_{\\mathfrak D}^{-1} (\\overline{\\zeta} I - \n {\\mathbf A}^{*})^{-1} {\\mathbf C}^{*}\n \\end{align*}\n where\n$$\n \\Xi(\\lambda,\\zeta) =(\\overline{\\zeta} I - {\\mathbf A}^{*}) \\boldsymbol{\\Gamma}_{\\mathfrak D} \n + \\boldsymbol{\\Gamma}_{\\mathfrak D} (\\lambda I - {\\mathbf A}) - {\\mathbf C}^{*} J {\\mathbf C}= (\\lambda + \\overline{\\zeta}) \n \\boldsymbol{\\Gamma}_{\\mathfrak D},\n$$\n the last equality following from \\eqref{bigLySyl}. We conclude that \n \\begin{equation}\n K_{\\Theta,J}(\\lambda,\\zeta):= \\frac{J - \\Theta(\\lambda) J \\Theta(\\zeta)^{*}}\n {\\lambda + \\overline{\\zeta}}= \n{\\mathbf C}(\\lambda I - {\\mathbf A})^{ -1} \\boldsymbol{\\Gamma}_{\\mathfrak D}^{-1} \n (\\overline{\\zeta}I - {\\mathbf A}^{*})^{-1} {\\mathbf C}^{*}.\n \\label{ad1}\n\\end{equation}\n By assumption, $\\sigma(Z)\\cup\\sigma(W)\\subset \\Pi_+$, so \nthe matrix ${\\mathbf A} = \\sbm{ -Z^{*} & 0 \\\\ 0 & W }$ has no \n eigenvalues on the imaginary line, and hence $\\Theta$ is analytic and \n invertible on $i {\\mathbb R}$. As a consequence of \\eqref{ad1}, \n we see that $\\Theta(\\lambda)$ is a $J$-coisometry for $\\lambda \n \\in i{\\mathbb R}$. 
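\nHere $J$-coisometry is meant in the indefinite sense: explicitly,\n$$\n\\Theta(\\lambda) J \\Theta(\\lambda)^{*} = J \\; \\text{ for } \\; \\lambda \\in i{\\mathbb R}.\n$$\nIndeed, setting $\\zeta = \\lambda \\in i{\\mathbb R}$ in \\eqref{ad1}, the right-hand side remains bounded (as ${\\mathbf A}$ has no imaginary eigenvalues) while the denominator $\\lambda + \\overline{\\lambda}$ vanishes, forcing the numerator $J - \\Theta(\\lambda) J \\Theta(\\lambda)^{*}$ to vanish as well.\n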
As $J$ is a finite matrix we actually have (see \\cite{AI}): \n \\begin{itemize}\n \\item {\\em for $\\lambda \\in i{\\mathbb R}$, $\\Theta(\\lambda)$ is \n $J$-unitary:}\n \\begin{equation} \\label{ThetaJunitary}\n J - \\Theta(\\lambda)^{*} J \\Theta(\\lambda) = J - \\Theta(\\lambda) J \n \\Theta(\\lambda)^{*} = 0 \\; \\text{ for } \\; \\lambda \\in i{\\mathbb R}.\n \\end{equation}\n \\end{itemize}\n The significance of the assumption that $\\boldsymbol{\\Gamma}_{\\mathfrak D}$ is \n not only invertible but also positive definite is that\n \\begin{itemize}\n \\item {\\em for $\\lambda \\in \\Pi_{+}$ a point of analyticity for \n $\\Theta$, $\\Theta(\\lambda)$ is $J$-bicontractive:}\n \\begin{equation} \\label{ThetaJcontr}\n\tJ - \\Theta(\\lambda)^{*} J \\Theta(\\lambda) \\succeq 0, \\quad\n\tJ - \\Theta(\\lambda) J \\Theta(\\lambda)^{*} \\succeq 0 \\text{ for } \n\t\\lambda \\in \\Pi_{+}.\n\\end{equation}\n \\end{itemize}\n Here we make use of the fact that $J$-co-contractive is equivalent \n to $J$-contractive in the matrix case (see \\cite{AI}). \nThese last two observations have critical consequences. Again \n writing out $\\Theta$ and $J$ as\n$$ \\Theta(\\lambda) = \\begin{bmatrix} \\Theta_{11}(\\lambda) & \n\\Theta_{12}(\\lambda) \\\\ \\Theta_{21}(\\lambda) & \\Theta_{22}(\\lambda) \n\\end{bmatrix}, \\quad J = \\begin{bmatrix} I_{p} & 0 \\\\ 0 & -I_{m} \n\\end{bmatrix},\n$$\nrelations \\eqref{ThetaJunitary} and \\eqref{ThetaJcontr} \ngive us (with the variable $\\lambda$ suppressed)\n $$\n \\begin{bmatrix} \\Theta_{11} \\Theta_{11}^{*} - \\Theta_{12} \n \\Theta_{12}^{*} & \\Theta_{11} \\Theta_{21}^{*} - \\Theta_{12} \n \\Theta_{22}^{*} \\\\\n \\Theta_{21} \\Theta_{11}^{*} - \\Theta_{22} \\Theta_{12}^{*} &\n \\Theta_{21} \\Theta_{21}^{*} - \\Theta_{22} \\Theta_{22}^{*} \n\\end{bmatrix} \\preceq \\begin{bmatrix} I_{p} & 0 \\\\ 0 & - I_{m} \n\\end{bmatrix}\n$$\nfor $\\lambda$ a point of analyticity of $\\Theta$ in $\\Pi_{+}$ with equality for \n$\\lambda$ in $i {\\mathbb R} = \\partial \\Pi_{+}$ (including the point \nat infinity). In particular,\n$$\n \\Theta_{21} \\Theta_{21}^{*} - \\Theta_{22} \\Theta_{22}^{*} \\preceq \n -I_{m}\n$$\nor equivalently,\n\\begin{equation} \\label{Theta22inv}\n \\Theta_{21} \\Theta_{21}^{*} + I_{m} \\preceq \\Theta_{22} \n \\Theta_{22}^{*}.\n\\end{equation}\nHence, $\\Theta_{22}(\\lambda)$ is invertible at all points \n$\\lambda$ of analyticity in $\\Pi_+$, namely, $\\Pi_{+} \\setminus \n\\sigma(W)$, and then, since multiplying \non the left by $\\Theta_{22}^{-1}$ and on the right by its adjoint \npreserves the inequality, we get \n\\begin{equation} \\label{Theta22invTheta21}\n \\Theta_{22}^{-1} \\Theta_{21} \\Theta_{21}^{*} \\Theta_{22}^{*-1} + \n \\Theta_{22}^{-1} \\Theta_{22}^{* -1} \\preceq I_{m}.\n\\end{equation}\nWe conclude: \n\\begin{itemize}\n \\item {\\em $\\Theta_{22}^{-1}$ has analytic continuation to a \ncontractive $m \\times m$-matrix function on all of $\\Pi_{+}$ and \n$\\Theta_{22}^{-1} \\Theta_{21}$ has analytic continuation to an \nanalytic $m \\times p$-matrix rational function which is pointwise \nstrictly contractive on the closed right half \nplane $\\overline{\\Pi}_{+} = \\Pi_{+} \\cup i{\\mathbb R}$.}\n\\end{itemize}\n\nIt remains to make the connection of $\\Theta$ with the BTOA-NP interpolation \nproblem. 
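\nBefore doing so, we record a minimal illustration of the properties just derived (a toy example inserted here only for orientation; it does not arise from any interpolation data set). Take $p = m = 1$, $J = \\sbm{ 1 & 0 \\\\ 0 & -1}$, fix $z_{0} \\in \\Pi_{+}$, and set\n$$\n\\Theta(\\lambda) = \\begin{bmatrix} b(\\lambda) & 0 \\\\ 0 & 1 \\end{bmatrix}, \\quad b(\\lambda) = \\frac{\\lambda - z_{0}}{\\lambda + \\overline{z}_{0}}.\n$$\nThen $J - \\Theta(\\lambda) J \\Theta(\\lambda)^{*} = \\sbm{ 1 - |b(\\lambda)|^{2} & 0 \\\\ 0 & 0 }$, which vanishes for $\\lambda \\in i{\\mathbb R}$ (where $|b(\\lambda)| = 1$) and is positive semidefinite for $\\lambda \\in \\Pi_{+}$ (where $|b(\\lambda)| < 1$), exhibiting \\eqref{ThetaJunitary} and \\eqref{ThetaJcontr} in the simplest possible case.\n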
Let us introduce some additional notation.\nFor $N$ a positive integer, \n$ H^{2}_{N}(\\Pi_{+})$ is short-hand notation for the ${\\mathbb \nC}^{N}$-valued Hardy space $H^{2}(\\Pi_{+}) \\otimes {\\mathbb C}^{N}$ \nover the right half plane $\\Pi_{+}$. Similarly $L^{2}_{N}(i \n{\\mathbb R}) = L^{2}(i {\\mathbb R}) \\otimes {\\mathbb C}^{N}$ is the \n${\\mathbb C}^{N}$-valued $L^{2}$-space over the imaginary line $i {\\mathbb R}$. \n\n\\smallskip\n\nIt is well known (see e.g.\\ \\cite{Hoffman}) that the space \n$H^{2}_{N}(\\Pi_{+})$ (consisting of analytic functions on $\\Pi_{+}$) \ncan be identified with \na subspace of $L^{2}_{N}(i {\\mathbb R})$ (consisting of measurable \nfunctions on $i {\\mathbb R}$ defined only almost everywhere with \nrespect to linear Lebesgue measure) via the process of taking \nnontangential limits from $\\Pi_{+}$ to a point on $i {\\mathbb R}$. \nSimilarly the Hardy space $H^{2}_{N}(\\Pi_{-})$ over the left half \nplane can also be identified with a subspace (still denoted as \n$H^{2}_{N}(\\Pi_{-})$) of $L^{2}_{N}(i {\\mathbb R})$, and, after these \nidentifications, $H^{2}_{N}(\\Pi_{-}) = H^{2}_{N}(\\Pi_{+})^{\\perp}$ \nas subspaces of $L^{2}_{N}(i {\\mathbb R})$:\n$$\n L^{2}_{N}(i {\\mathbb R}) = H^{2}_{N}(\\Pi_{+}) \\oplus \n H^{2}_{N}(\\Pi_{-}).\n$$\nWe shall use these identifications freely in the discussion to follow.\nGiven the $\\Pi_{+}$-admissible interpolation data set\n\\eqref{data}, we define a subspace of \n$L^{2}_{p+m}(i{\\mathbb R})$ by\n\\begin{align}\n {\\mathcal M}_{\\mathfrak D} = &\\left \\{ \\begin{bmatrix} V \\\\ U \\end{bmatrix} (\\lambda I - W)^{-1} x + \n \\begin{bmatrix} f(\\lambda) \\\\ \ng(\\lambda) \\end{bmatrix} \\colon x \\in {\\mathbb C}^{n_{W}} \\text{ and } \n\\begin{bmatrix} f \\\\ g \\end{bmatrix} \\in H^{2}_{p+m}(\\Pi_{+} ) \n \\right. \\notag \\\\\n& \\left. 
\\text{ such that } \n \\sum_{z_{0} \\in \\Pi_{+}} {\\rm Res}_{\\lambda = z_{0}} (\\lambda I - Z)^{-1} \n\\begin{bmatrix} X & -Y \\end{bmatrix} \\begin{bmatrix} f(\\lambda) \\\\ \n g(\\lambda) \\end{bmatrix} = \\Gamma x \\right\\}\n \\label{cMrep}\n\\end{align}\nand a subspace of $L^{2}_{m}(i {\\mathbb R})$ by \n$$ \n{\\mathcal M}_{{\\mathfrak D}, -}=\\{ U (\\lambda I - W)^{-1} x \\colon x \\in {\\mathbb C}^{n_{W}} \\}\n \\oplus H^{2}_{m}(\\Pi_{+}).\n$$\nUsing $\\Pi_{+}$-admissibility assumptions on the data set ${\\mathfrak D}$ one can show\n(we refer to \\cite{BGR} for details, subject to the disclaimer in \nRemark \\ref{R:Amaya} below) that \n$$\n{\\mathcal M}_{{\\mathfrak D}, -}= P_{\\sbm{ 0 \\\\ L^{2}_{m}(i{\\mathbb R})}}{\\mathcal M}_{\\mathfrak D}.\n$$\nFurthermore, a variant of the Beurling-Lax Theorem assures us that there is an \n$m \\times m$-matrix \ninner function $\\psi$ on $\\Pi_{+}$ so that\n\\begin{equation} \\label{cM-rep}\n {\\mathcal M}_{{\\mathfrak D},-} = \\psi^{-1} \\cdot H^{2}_{m}(\\Pi_{+}).\n\\end{equation}\nMaking use of \\cite[Theorem 6.1]{BGR} applied to the null-pole triple \n$(U,W; \\emptyset, \\emptyset; \\emptyset)$ over $\\Pi_{+}$,\none can see that such a \n$\\psi$ (defined uniquely up to a constant unitary factor on the left) \nis given by the state-space realization formula \n\\begin{equation}\n\\psi(z)=I_m-UP^{-1}(zI+W^*)^{-1}U^*,\n\\label{sep1}\n\\end{equation}\nwhere the positive definite matrix $P$ is uniquely defined from the Lyapunov equation \n$PW+W^*P=U^*U$, with $\\psi^{-1}$ given by\n\\begin{equation} \\label{psi-inv}\n\\psi(z)^{-1}=I_m+U(zI-W)^{-1}P^{-1}U^*,\n\\end{equation}\ni.e., that $(U,W)$ is the right null pair of $\\psi$. \nFurthermore, a second application of \\cite[Theorem 6.1]{BGR} \nto the null-pole triple $(\\sbm{ V \\\\ U}, W; Z, \\sbm{ X & -Y}; \\Gamma)$ \nover $\\Pi_{+}$ leads to:\n\\begin{itemize}\n \\item {\\em ${\\mathcal M}_{\\mathfrak D}$ has the Beurling-Lax-type \n representation} \n\\begin{equation} \\label{cMBLrep}\n{\\mathcal M}_{\\mathfrak D} = \\Theta \\cdot \n H^{2}_{p+m}(\\Pi_{+}).\n\\end{equation}\n\\end{itemize}\nBy projecting the identity \\eqref{cMBLrep} onto the bottom component and recalling \nthe identity \n\\eqref{cM-rep}, we see that\n\\begin{equation} \\label{Theta22-1}\n\\begin{bmatrix} \\Theta_{21} & \\Theta_{22} \\end{bmatrix} H^{2}_{p + \n m}(\\Pi_{+}) = {\\mathcal M}_{{\\mathfrak D}, -} =\n \\psi^{-1} H^{2}_{m}(\\Pi_{+}).\n \\end{equation} \n On the other hand, for any $\\sbm{ f_{+} \\\\ f_{-} } \\in \n H^{2}_{p+m}(\\Pi_{+})$, we have\n \\begin{align}\n \\begin{bmatrix} \\Theta_{21} & \\Theta_{22} \\end{bmatrix} \n \\begin{bmatrix} f_{+} \\\\ f_{-} \\end{bmatrix} &=\n\\Theta_{21} f_{+} + \\Theta_{22} f_{-} \\notag\\\\\n&= \\Theta_{22} \n(\\Theta_{22}^{-1} \\Theta_{21} f_{+} + f_{-}) \\in \\Theta_{22} \nH^{2}_{m}(\\Pi_{+})\\label{Theta22-2},\n\\end{align}\nsince $\\Theta_{22}^{-1} \\Theta_{21}$ is analytic on $\\Pi_+$. 
Since \nthe reverse containment\n$$\n\\Theta_{22}\\cdot H^{2}_{m}(\\Pi_{+}) \\subset \\begin{bmatrix} \\Theta_{21} & \n\\Theta_{22} \\end{bmatrix}\\cdot H^{2}_{p+m}(\\Pi_{+})\n$$\nis obvious, we may combine \\eqref{Theta22-1} and \\eqref{Theta22-2} to conclude that\n\\begin{equation} \\label{Theta22-3}\n \\Theta_{22}\\cdot H^{2}_{m}(\\Pi_{+}) = \\begin{bmatrix} \\Theta_{21} & \n \\Theta_{22} \\end{bmatrix} \\cdot H^{2}_{p+m}(\\Pi_{+}) = \\psi^{-1} \n \\cdot H^{2}_{m}(\\Pi_{+}).\n \\end{equation}\nIt turns out that the geometry of ${\\mathcal M}_{\\mathfrak D}$ encodes the \ninterpolation conditions:\n\\begin{itemize}\n \\item {\\em An analytic function $S \\colon \\Pi_{+} \\to {\\mathbb C}^{p \\times m}$ \n satisfies the interpolation conditions \\eqref{BTOAint1}, \n \\eqref{BTOAint2}, \\eqref{BTOAint3} if and only if}\n \\begin{equation} \\label{sol-rep}\n \\begin{bmatrix} S \\\\ I_m \\end{bmatrix} \\cdot {\\mathcal M}_{{\\mathfrak D}, -}\n \\subset {\\mathcal M}_{\\mathfrak D}.\n \\end{equation}\n \\end{itemize}\n \n It remains to put the pieces together to arrive at the \n linear-fractional parametrization \\eqref{LFTparam} for the set of \n all solutions (and thereby prove that solutions exist).\n Suppose that $S \\in {\\mathcal S}^{p \\times m}(\\Pi_{+})$ satisfies the \n interpolation conditions \\eqref{BTOAint1}, \\eqref{BTOAint2}, \n \\eqref{BTOAint3}. As a consequence of the criterion \\eqref{sol-rep}\n combined with \\eqref{cM-rep} and \\eqref{cMBLrep}, we have\n $$\n \\begin{bmatrix} S \\\\ I_{m} \\end{bmatrix} \\psi^{-1} \\cdot\n H^{2}_{m}(\\Pi_{+}) \\subset \\begin{bmatrix} \\Theta_{11} & \n \\Theta_{12} \\\\ \\Theta_{21} & \\Theta_{22} \\end{bmatrix} \\cdot\n H^{2}_{p+m}(\\Pi_{+}).\n $$\n Hence there must be a $(p+m) \\times m$ matrix function\n $\\sbm{Q_{1} \\\\ Q_{2}} \\in \n H^2_{(p+m) \\times m}(\\Pi_{+})$ so that\n \\begin{equation} \\label{Sparam}\n \\begin{bmatrix} S \\\\ I_{m} \\end{bmatrix} \\psi^{-1} = \n\\begin{bmatrix} \\Theta_{11} & \\Theta_{12} \\\\ \n \\Theta_{21} & \\Theta_{22} \\end{bmatrix} \\begin{bmatrix} Q_{1} \n \\\\ Q_{2} \\end{bmatrix}.\n\\end{equation}\n \nWe next combine this identity with the $J$-unitary property of \\eqref{ThetaJunitary}: \nfor the (suppressed) argument $\\lambda \\in i{\\mathbb R}$ we have\n\\begin{align*}\n 0 \\succeq \\psi^{-1 *} (S^{*} S - I) \\psi^{-1}& = \\psi^{-1 *} \n\\begin{bmatrix} Q_{1}^{*} & Q_{2}^{*} \\end{bmatrix}\n \\Theta^{*} J \\Theta \\begin{bmatrix} Q_{1} \\\\ Q_{2} \\end{bmatrix} \n \\psi^{-1} \\\\\n & =\\psi^{-1 *} \\begin{bmatrix} Q_{1}^{*} & Q_{2}^{*} \\end{bmatrix} \n J \\begin{bmatrix} Q_{1} \\\\ Q_{2} \\end{bmatrix} \\psi^{-1}\\\\\n & = \\psi^{-1 *} (Q_{1}^{*} Q_{1} - Q_{2}^{*} Q_{2}) \\psi^{-1}.\n\\end{align*}\nWe conclude that\n$$\n \\| Q_{1}(\\lambda) x\\|^{2} \\le \\| Q_{2}(\\lambda) x \\|^{2} \\; \\text{ for all \n } \\; x \\in {\\mathbb C}^{m} \\; \\text{ and }\\; \\lambda \\in i {\\mathbb R}.\n$$\nIn particular, if $Q_{2}(\\lambda) x = 0$ \nfor some $\\lambda \\in i{\\mathbb R}$ and $x \\in {\\mathbb C}^{m}$, then \nalso $Q_{1}(\\lambda) x = 0$ and hence \n$$\n \\psi(\\lambda)^{-1} x = \\begin{bmatrix} \\Theta_{21}(\\lambda) & \n \\Theta_{22}(\\lambda) \\end{bmatrix} \\begin{bmatrix} Q_{1}(\\lambda) \\\\ \n Q_{2}(\\lambda) \\end{bmatrix} x = 0,\n$$\nwhich forces $x=0$ since $\\psi$ is rational matrix inner. 
We conclude:\n\\begin{itemize}\n \\item {\\em for $\\lambda \\in i{\\mathbb R}$, $Q_{2}(\\lambda)$ is \n invertible and $G(\\lambda) = Q_{1}(\\lambda) Q_{2}(\\lambda)^{-1}$\nis a contraction.}\n\\end{itemize}\n\nThe next step is to apply a winding-number argument to get similar \nresults for $\\lambda \\in \\Pi_+$. From the bottom component of \\eqref{Sparam} we have, \nagain for the moment with $\\lambda \\in i {\\mathbb R}$,\n\\begin{equation} \\label{bottom}\n \\psi^{-1} = \\Theta_{21} Q_{1} + \\Theta_{22} Q_{2} \n= \\Theta_{22} (\\Theta_{22}^{-1} \\Theta_{21} G + I_{m}) Q_{2}.\n\\end{equation}\nWe conclude that, for the argument $\\lambda \\in i {\\mathbb R}$, \n\\begin{equation} \\label{wno'}\n \\operatorname{wno} \\det( \\psi^{-1}) = \n \\operatorname{wno}\\det(\\Theta_{22}) + \\operatorname{wno} \n \\det(\\Theta_{22}^{-1} \\Theta_{21} G + I_{m}) + \n \\operatorname{wno} \\det(Q_{2}) \n\\end{equation}\nwhere we use the notation $\\operatorname{wno}f$ to indicate {\\em \nwinding number} or {\\em change of argument} of the function $f$ as \nthe variable runs along the imaginary line. Since both $\\det \n\\Theta_{22}^{-1}$ and $\\det \\psi$ are analytic on \n$\\overline{\\Pi}_{+}$, a consequence of the \nidentity \\eqref{Theta22-3} is that\n\\begin{equation} \\label{wno''}\n\\operatorname{wno} \\det( \\psi^{-1}) = \\operatorname{wno}\\det(\\Theta_{22}).\n\\end{equation}\nCombining the two last equalities gives\n\\begin{equation}\n \\operatorname{wno}\n \\det(\\Theta_{22}^{-1} \\Theta_{21} G + I_{m}) + \\operatorname{wno} \\det(Q_{2})=0.\n\\label{wno}\n\\end{equation}\n\n\nWe have already observed that\n$$\n \\| \\Theta_{22}(\\lambda)^{-1} \\Theta_{21}(\\lambda)\\| < 1\\quad\\mbox{and}\\quad\n \\| G(\\lambda) \\| \\le 1 \\; \\text{ for } \\; \\lambda \\in i {\\mathbb R}.\n$$\nHence, for $0 \\le t \\le 1$ we have $\\| t\\Theta_{22}(\\lambda)^{-1} \n\\Theta_{21}(\\lambda) G(\\lambda) \\| < 1$ and hence\n$t \\Theta_{22}(\\lambda)^{-1} \\Theta_{21}(\\lambda) G(\\lambda) + I$ is \ninvertible for $\\lambda \\in i {\\mathbb R}$ for all $0 \\le t \\le 1$.\nHence \n$$ \ni(t): = \\operatorname{wno} \\det(t \\Theta_{22}(\\lambda)^{-1} \n\\Theta_{21}(\\lambda) G(\\lambda) + I)\n$$\nis well defined and independent of $t$ for $0 \\le t \\le 1$. \nAs clearly $i(0) = 0$, it follows that \n$$\ni(1) = \\operatorname{wno} \\det( \\Theta_{22}(\\lambda)^{-1} \n\\Theta_{21}(\\lambda) G(\\lambda) + I) = 0\n$$\nwhich, on account of \\eqref{wno}, implies \n$ \\operatorname{wno} \\det(Q_{2}) = 0$. As $Q_{2}$ is analytic \non $\\Pi_{+}$, we conclude that $\\det Q_{2}$ has no zeros in \n$\\Pi_{+}$, i.e., $Q_{2}^{-1}$ is analytic on $\\Pi_{+}$. By the \nmaximum modulus theorem it then follows that\n$G(\\lambda) : = Q_{1}(\\lambda) Q_{2}(\\lambda)^{-1}$ is in the Schur class ${\\mathcal S}^{p \n\\times m}(\\Pi_{+})$. Furthermore, from \\eqref{Sparam} we have\n\\begin{equation}\n\\begin{bmatrix} S \\\\ I_{m} \\end{bmatrix} = \\begin{bmatrix} \n \\Theta_{11} & \\Theta_{12} \\\\ \\Theta_{21} & \\Theta_{22} \n\\end{bmatrix} \\begin{bmatrix} G \\\\ I \\end{bmatrix} Q_{2} \\psi.\n\\label{ad3}\n\\end{equation}\nFrom the bottom component we read off that $Q_{2} \\psi = (\\Theta_{21} \nG + \\Theta_{22})^{-1}$. 
From the first component we then get\n$$\n S = (\\Theta_{11} G + \\Theta_{12}) Q_{2} \\psi = (\\Theta_{11} G + \n \\Theta_{12}) (\\Theta_{21} G + \\Theta_{22})^{-1}\n$$\nand the representation \\eqref{LFTparam} follows.\n\n\\smallskip\n\nConversely, if $G \\in {\\mathcal S}^{p \\times m}(\\Pi_{+})$, we can reverse the \nabove argument (with $Q_{1}(\\lambda) = G(\\lambda)$ and $Q_{2}(\\lambda) = I_{m}$) to see that \n$S$ of the form \\eqref{LFTparam} is a Schur-class solution of the interpolation \nconditions \\eqref{BTOAint1}, \\eqref{BTOAint2}, \\eqref{BTOAint3}.\n\\end{proof}\n\n\\begin{remark} \\label{R:Amaya}\n The theory from \\cite{BGR} is worked out explicitly only with \n$H^{2}_{m}(\\Pi_{+})$ replaced by its rational subspace $\\operatorname{\\rm \nRat} H^{2}_{m}$ consisting of elements of $H^{2}_{m}$ with \nrational-function column entries, and similarly $H^{2}_{m}(\\Pi_{-})$ and \n$L^{2}(i {\\mathbb R})$ replaced by their respective rational subspaces $\\operatorname{Rat} \nH^{2}_{m}(\\Pi_{-})$ and $\\operatorname{Rat} L^{2}(i {\\mathbb R})$.\nNevertheless the theory is easily adapted to the $L^{2}$-setting here. \nSubspaces ${\\mathcal M}$ of $L^{2}_{p+m}(i {\\mathbb R})$ having a representation of \nthe form \\eqref{cMrep} (with $\\sbm{V \\\\ U}, W, \\begin{bmatrix} X & -Y \n\\end{bmatrix}, Z, \\Gamma$ all equal to finite matrices rather than infinite-dimensional \noperators) are characterized by the conditions: (1) \n${\\mathcal M}$ is forward-shift invariant, i.e., ${\\mathcal M}$ is invariant under multiplication \nby the function $b(\\lambda) = \\frac{ \\lambda -1}{ \\lambda +1}$, (2) the subspace\n$({\\mathcal M} + H^{2}_{p+m}(\\Pi_{+}))\/ H^{2}_{p+m}(\\Pi_{+})$ has finite dimension, and (3) the \nquotient space ${\\mathcal M}\/\\left({\\mathcal M} \\cap H^{2}_{p+m}(\\Pi_{+})\\right)$ has finite dimension.\nThe representation \\eqref{cM-rep} with $\\psi^{-1}$ of the form \n\\eqref{psi-inv} with finite matrices $U,W,P$ is roughly the special case \nof the statement above where ${\\mathcal M} = {\\mathcal M}_{{\\mathfrak D},-} \\supset\nH^{2}_{m}(\\Pi_{+})$. The analogue of such representations \n\\eqref{cMrep} and \\eqref{cM-rep}--\\eqref{psi-inv} for more general \nfull-range pure forward shift-invariant subspaces of \n$L^{2}_{p+m}(i {\\mathbb R})$ (or dually of full-range pure backward \nshift-invariant subspaces of $L^{2}_{p+m}(i {\\mathbb R})$) involving infinite-dimensional \n(even unbounded) operators $\\sbm{ V \\\\ U}, W, Z, \\sbm{X & -Y }, \\Gamma$ \nis worked out in the Virginia Tech dissertation of Austin Amaya \\cite{Amaya}.\n\\end{remark}\n\n\n\\subsection{The Fundamental Matrix Inequality approach of Potapov} \n\\label{S:FMI}\nThe linear fractional parametrization formula \\eqref{LFTparam} can be \nalternatively established by Potapov's method of the Fundamental\nMatrix Inequality. As we will see, this method bypasses the winding-number argument.\n\n\n\\smallskip\n\nConsider a $\\Pi_{+}$-admissible BTOA interpolation data set ${\\mathfrak D}$ as in \n\\eqref{data} giving rise to the collection \\eqref{BTOAint1}, \n\\eqref{BTOAint2}, \\eqref{BTOAint3} of BTOA interpolation conditions \nimposed on a Schur-class function $S \\in {\\mathcal S}^{p \\times m}(\\Pi_{+})$.\nWe assume that $\\boldsymbol{\\Gamma}_{\\mathfrak D}$ is positive definite. 
We form \nthe matrix $\\Theta(\\lambda)$ as in \\eqref{oct1}--\\eqref{oct2} and \nfreely use all the properties of $\\Theta$ falling out of \nthe positive-definiteness of $\\boldsymbol{\\Gamma}_{\\mathfrak D}$, specifically\n\\eqref{bigLySyl}--\\eqref{Theta22inv} above.\n\n\\smallskip\n\nThe main idea is to extend the interpolation data by one extra interpolation node \n$z\\in\\Pi_+$ with the corresponding full-range value $S(z)$, i.e., by \nthe tautological full-range interpolation condition\n\\begin{equation} \\label{generic-int}\n S(z) = S(z)\n\\end{equation}\nwhere $z$ is a generic point in the right half plane. \nTo set up this augmented problem as a BTOA problem, we have a choice \nas to how we incorporate the global generic interpolation condition \n\\eqref{generic-int} into the BTOA formalism: (a) \nas an LTOA interpolation condition:\n\\begin{equation} \\label{genericLTOA}\n (X_{z} S)^{\\wedge L}(Z_{z}) = Y_{z} \\; \\text{ where } \\; X_{z} = I_{p}, \n \\, Y_{z} = S(z), \\, Z_{z} = z I_{p},\n\\end{equation}\nor (b) as an RTOA interpolation condition:\n\\begin{equation} \\label{genericRTOA}\n (S U_{z})^{\\wedge R}(W_{z}) = V_{z} \\; \\text{ where } \\;\n U_{z} = I_{m}, \\, V_{z} = S(z), \\, W_{z} = z I_{m}.\n\\end{equation}\nWe choose here to work with the left version \\eqref{genericLTOA} exclusively; working \nwith the right version \\eqref{genericRTOA} will give seemingly different but in the end \nequivalent parallel results. \n\n\\smallskip\n\nAs a first step, we wish to combine \\eqref{BTOAint1} and \n\\eqref{genericLTOA} into a single LTOA interpolation condition.\nThis is achieved by augmenting the matrices $(Z, X, Y)$ to the \naugmented triple $(Z_{\\rm aug}, X_{\\rm aug}, Y_{\\rm aug})$ given by\n$$\nZ_{\\rm aug} = \\begin{bmatrix} Z & 0 \\\\ 0 & z I_{p} \\end{bmatrix}, \n\\quad\nX_{\\rm aug} = \\begin{bmatrix} X \\\\ I_{p} \\end{bmatrix}, \\quad\nY_{\\rm aug} = \\begin{bmatrix} Y \\\\ S(z) \\end{bmatrix}.\n$$\nHere all matrices indexed by aug depend on the parameter $z$, but for \nthe moment we suppress this dependence from the notation.\nAs the RTOA-interpolation conditions for the augmented problem remain \nthe same as in the original problem (namely, \\eqref{BTOAint2}), we set\n$$\nU_{\\rm aug} = U, \\quad V_{\\rm aug} = V, \\quad W_{\\rm aug} = W.\n$$\nWe therefore take the \\textbf{augmented data set} ${\\mathfrak D}_{\\rm \naug}$ to have the form\n\\begin{equation} \\label{Daug}\n {\\mathfrak D}_{\\rm aug} = (X_{\\rm aug}, Y_{\\rm aug}, Z_{\\rm aug}; \n U_{\\rm aug}, V_{\\rm aug}, W_{\\rm aug}; \\Gamma_{\\rm aug})\n\\end{equation}\nwhere the coupling matrix $\\Gamma_{\\rm aug}$ is still to be \ndetermined.\n\n\\smallskip\n\nWe know that $\\Gamma_{\\rm aug}$ must solve the Sylvester equation\n\\eqref{Sylvester'} associated with the data set ${\\mathfrak D}_{\\rm \naug}$, i.e., $\\Gamma_{\\rm aug}$ must have the form $\\Gamma_{\\rm aug} \n= \\sbm{ \\Gamma_{{\\rm aug},1} \\\\ \\Gamma_{{\\rm aug},2} }$ with\n$$ \\begin{bmatrix} \\Gamma_{{\\rm aug},1} \\\\ \\Gamma_{{\\rm aug},2} \n \\end{bmatrix} W - \\begin{bmatrix} Z & 0 \\\\ 0 & z I_{p} \n\\end{bmatrix} \\begin{bmatrix} \\Gamma_{{\\rm aug},1} \\\\ \\Gamma_{{\\rm aug},2} \n \\end{bmatrix} = \n \\begin{bmatrix} X \\\\ I_{p} \\end{bmatrix} V - \\begin{bmatrix} Y \\\\ \n S(z) \\end{bmatrix} U.\n$$\nEquivalently, $\\Gamma_{\\rm aug} = \\sbm{ \\Gamma_{{\\rm aug},1} \\\\ \n\\Gamma_{{\\rm aug},2}}$ is determined by the decoupled system of equations\n\\begin{align} \n & \\Gamma_{{\\rm aug}, 1} W - Z \\Gamma_{{\\rm 
aug}, 1} = X V - Y U, \n \\notag \\\\\n& \\Gamma_{{\\rm aug},2} W - (z I_{p}) \\Gamma_{{\\rm aug}, 2} = V - S(z) U.\n\\label{aug-Sylvester''}\n\\end{align}\nIn addition, the third augmented interpolation condition takes the \nform\n$$\n \\left( \\begin{bmatrix} X \\\\ I_{p} \\end{bmatrix} S \\, U \n \\right)^{\\wedge L,R} \\left( \\begin{bmatrix} Z & 0 \\\\ 0 & zI_{p} \n\\end{bmatrix}, W \\right) = \\begin{bmatrix} \\Gamma_{{\\rm aug}, 1} \\\\\n\\Gamma_{{\\rm aug}, 2} \\end{bmatrix}\n$$\nwhich can be decoupled into two independent bitangential \ninterpolation conditions\n\\begin{equation} \\label{coupledBTint}\n ( X S U)^{\\wedge L,R}(Z,W) = \\Gamma_{{\\rm aug},1}, \\quad\n (I_{p} S U)^{\\wedge L,R}(z I_{p}, W ) = \\Gamma_{{\\rm aug},2}.\n\\end{equation}\nFrom the first of the conditions \\eqref{coupledBTint} coupled with \nthe interpolation condition \\eqref{BTOAint3}, we are forced to take\n$ \\Gamma_{{\\rm aug}, 1} = \\Gamma$.\n\n\\smallskip\n\nSince the point $z \\in \\Pi_{+}$ is generic, we may assume as a first \ncase that $z$ does not belong to the spectrum $\\sigma(W)$ of $W$. Then \nwe can solve the second of the equations \\eqref{aug-Sylvester''} \nuniquely for $\\Gamma_{{\\rm aug}, 2}$:\n\\begin{equation} \\label{Gammaaug2}\n\\Gamma_{{\\rm aug},2} = (S(z) U - V) (zI_{n_{W}} - W)^{-1}.\n\\end{equation}\nA consequence of the RTOA interpolation condition \\eqref{BTOAint2} is \nthat the right-hand side of \\eqref{Gammaaug2} has analytic \ncontinuation to all of $\\Pi_{+}$. It is not difficult to see that \n$(I_{p} S U)^{\\wedge L,R}(z I_{p}, W)$ is just the value of this \nanalytic continuation at the point $z$; we conclude that the formula \n\\eqref{Gammaaug2} holds also at points $z$ in $\\sigma(W)$ with proper \ninterpretation. In this way we have completed the computation of the \naugmented data set \\eqref{Daug}:\n\\begin{equation} \\label{Daug'}\n {\\mathfrak D}(z):= {\\mathfrak D}_{\\rm aug} = \n \\left( \\begin{bmatrix} X \\\\ I_{p} \\end{bmatrix}, \\, \n\\begin{bmatrix} Y \\\\ S(z) \\end{bmatrix}, \\, \n\\begin{bmatrix} Z & 0 \\\\ 0 & zI_{p} \\end{bmatrix}; \\, U,V, W; \\begin{bmatrix} \n \\Gamma \\\\ T_{S,1}(z) \\end{bmatrix} \\right),\n\\end{equation}\nwhere we set\n\\begin{equation} \\label{TS1}\n T_{S,1}(z) = (S(z) U - V) (zI_{n_{W}} - W)^{-1}.\n\\end{equation}\n\nWe next compute the Pick matrix $\\boldsymbol{\\Gamma}_{{\\mathfrak D}_{\\rm \naug}(z)}$ for the augmented data set ${\\mathfrak D}_{\\rm aug}$ \n\\eqref{Daug'} according to the recipe \\eqref{GammaL}--\\eqref{GammaData}. 
\nThus \n$$\n\\boldsymbol{\\Gamma}_{{\\mathfrak D}_{\\rm aug}(z)} = \n\\begin{bmatrix}\\Gamma_{{\\rm aug}, L} & \\Gamma_{\\rm aug} \\\\ (\\Gamma_{\\rm \naug})^{*} & \\Gamma_{{\\rm aug}, R}\\end{bmatrix},\\quad\\mbox{where}\\quad\n\\Gamma_{\\rm aug} = \\begin{bmatrix} \\Gamma \\\\ T_{S,1}(z) \\end{bmatrix}, \n\\; \\Gamma_{{\\rm aug},R} = \\Gamma_{R},\n$$\nand where $\\Gamma_{{\\rm aug}, L} = \\sbm{\\Gamma_{{\\rm aug}, L11} & \n\\Gamma_{{\\rm aug}, L12} \\\\ \\Gamma_{{\\rm aug}, L21} & \\Gamma_{{\\rm \naug}, L22 }}$ is determined by the Lyapunov equation \\eqref{GammaL} \nadapted to the interpolation data set ${\\mathfrak D}(z)$:\n\\begin{align*}\n&\\begin{bmatrix} \\Gamma_{{\\rm aug}, L11} & \\Gamma_{{\\rm aug}, L12} \\\\ \n \\Gamma_{{\\rm aug}, L21} & \\Gamma_{{\\rm aug}, L22} \\end{bmatrix}\n\\begin{bmatrix} Z^{*} & 0 \\\\ 0 & \\overline{z} I_{p} \\end{bmatrix} +\n\\begin{bmatrix} Z & 0 \\\\ 0 & z I_{p} \\end{bmatrix}\n\\begin{bmatrix} \\Gamma_{{\\rm aug}, L11} & \\Gamma_{{\\rm aug}, L12} \\\\ \n \\Gamma_{{\\rm aug}, L21} & \\Gamma_{{\\rm aug}, L22} \\end{bmatrix} \\\\\n& \\quad = \\begin{bmatrix} X \\\\ I_{p} \\end{bmatrix} \\begin{bmatrix} X^{*} & \n I_{p} \\end{bmatrix} - \\begin{bmatrix} Y \\\\ S(z) \\end{bmatrix}\n\\begin{bmatrix} Y^{*} & S(z)^{*} \\end{bmatrix}.\n\\end{align*}\nOne can solve this equation uniquely for $\\Gamma_{{\\rm aug}, Lij}$ ($i,j=1,2$) \nwith the result\n\\begin{align*}\n & \\Gamma_{{\\rm aug}, L11} = \\Gamma_{L}, \\quad \\Gamma_{{\\rm aug}, L21} \n = (\\Gamma_{{\\rm aug}, L12})^{*} = T_{S,2}(z), \\notag \\\\\n& \\Gamma_{{\\rm aug},L22} = \\frac{I - S(z) S(z)^{*}}{z + \\overline{z}}\n\\end{align*}\nwhere we set\n\\begin{equation} \\label{TS2}\n T_{S,2}(z): = (X^{*} - S(z) Y^{*})(zI_{n_{Z}} + Z^{*})^{-1}.\n\\end{equation}\nIn this way we arrive at the Pick matrix for data set ${\\mathfrak \nD}(z)$, denoted for convenience as $\\boldsymbol{\\Gamma}_{\\mathfrak D}(z)$ \nrather than as $\\boldsymbol{\\Gamma}_{{\\mathfrak D}(z)}$:\n$$\n\\boldsymbol{\\Gamma}_{\\mathfrak D}(z) = \\begin{bmatrix} \\Gamma_{L} & \nT_{S,2}(z)^{*} & \\Gamma \\\\ T_{S,2}(z) & \\frac{ I - S(z) S(z)^{*}}{ z + \n\\overline{z}} & T_{S,1}(z) \\\\ \\Gamma^{*} & T_{S,1}(z)^{*} & \n\\Gamma_{R} \\end{bmatrix}.\n$$\nIf we interchange the second and third rows and then also the second \nand third columns (i.e., conjugate by a permutation matrix), we get a \nnew matrix having the same inertia; for simplicity from now on we use \nthe same notation $\\boldsymbol{\\Gamma}_{\\mathfrak D}(z)$ for this transformed \nmatrix:\n$$\n \\boldsymbol{\\Gamma}_{{\\mathfrak D}}(z) = \\begin{bmatrix} \\Gamma_{L} & \\Gamma & \n T_{S,2}(z)^{*} \\\\ \\Gamma^{*} & \\Gamma_{R} & T_{S,1}(z)^{*} \\\\\n T_{S,2}(z) & T_{S,1}(z) & \\frac{ I - S(z) S(z)^{*}}{z + \n \\overline{z}} \\end{bmatrix}.\n$$\n\nHad we started with a finite number ${\\mathbf z} = \\{z_{1}, \\dots, z_{N}\\}$ \nof generic interpolation nodes in $\\Pi_{+}$ rather than a single \ngeneric point $z$ and augmented the interpolation conditions \n\\eqref{BTOAint1}, \\eqref{BTOAint2}, \\eqref{BTOAint3} with the \ncollection of tautological interpolation conditions\n$$\n S(z_{i}) = S(z_{i}) \\; \\text{ for } \\; i = 1, \\dots, N\n$$\nmodeled as the additional LTOA interpolation condition\n$$\n (X_{{\\mathbf z}} S)^{\\wedge L}(Z_{{\\mathbf z}}) = Y_{{\\mathbf z}}\n$$\nwhere\n$$\nZ_{{\\mathbf z}} = \\sbm{ z_{1}I_{p} & & 0 \\\\ & \\ddots & \\\\ 0 & & z_{N} I_{p} }, \\quad\nX_{{\\mathbf z}} = \\sbm{I_{p} \\\\ \\vdots \\\\ I_{p} }, \n\\quad Y_{{\\mathbf z}} = \\sbm{ 
S(z_{1}) \\\\ \\vdots \\\\ S(z_{N})},\n$$\nthe same analysis as above would lead us to the following conclusion: \n{\\em there is a matrix function $S$ in the Schur class ${\\mathcal S}^{p \\times \nm}(\\Pi_{+})$ satisfying the interpolation conditions \n\\eqref{BTOAint1}, \\eqref{BTOAint2}, \\eqref{BTOAint3} if and only if, \nfor ${\\mathbf z} = \\{z_{1}, \\dots, z_{N}\\}$ any collection of $N$ distinct \npoints in $\\Pi_{+}$, the associated augmented Pick matrix \n$\\boldsymbol{\\Gamma}_{\\mathfrak D}({\\mathbf z})$ is positive-semidefinite, where}\n$$\n\\boldsymbol{\\Gamma}_{\\mathfrak D}({\\mathbf z}) =\n\\begin{bmatrix} \\Gamma_{L} & \\Gamma & T_{S,2}(z_{1})^{*} & \\cdots & \n T_{S,2}(z_{N})^{*} \\\\\n\\Gamma^{*} & \\Gamma_{R} & T_{S,1}(z_{1})^{*} & \\cdots & \nT_{S,1}(z_{N})^{*} \\\\\nT_{S,2}(z_{1}) & T_{S,1}(z_{1}) & \n\\frac{I - S(z_{1})S(z_{1})^{*}}{z_{1} + \\overline{z}_{1}} & \\cdots &\n\\frac{I - S(z_{1}) S(z_{N})^{*}}{z_{1} + \\overline{z}_{N}} \\\\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\nT_{S,2}(z_{N}) & T_{S,1}(z_{N}) & \n\\frac{ I - S(z_{N}) S(z_{1})^{*}}{z_{N} + \\overline{z}_{1}} & \\cdots & \n\\frac{ I - S(z_{N}) S(z_{N})^{*}}{z_{N} + \\overline{z}_{N}} \\end{bmatrix} \\succeq 0.\n$$\nAs the finite set of points ${\\mathbf z} = \\{z_{1}, \\dots, z_{N}\\}$ ($N=1,2, \n\\dots$) is an arbitrary finite subset of $\\Pi_{+}$, this condition in \nturn amounts to the assertion that the kernel \n$\\boldsymbol{\\Gamma}_{\\mathfrak D}(z, \\zeta)$ defined by\n\\begin{equation} \\label{sep8a}\n \\boldsymbol{\\Gamma}_{\\mathfrak D}(z, \\zeta) = \\begin{bmatrix} \\Gamma_{L} & \\Gamma & \n T_{S,2}(\\zeta)^{*} \\\\\n \\Gamma^{*} & \\Gamma_{R} & T_{S,1}(\\zeta)^{*} \\\\\n T_{S,2}(z) & T_{S,1}(z) & \\frac{ I - S(z) S(\\zeta)^{*}}{z + \n \\overline{\\zeta}} \\end{bmatrix}\n\\end{equation}\nis a positive kernel on $\\Pi_{+}$ (see \\eqref{posker}).\nObserve from \\eqref{TS2}, \\eqref{TS1} that\n\\begin{align*}\n\\begin{bmatrix}T_{S,2}(z) & T_{S,1}(z)\\end{bmatrix}&=\n- \\begin{bmatrix}I_p & -S(z)\\end{bmatrix}\\begin{bmatrix}\n -X^{*} & V \\\\ -Y^{*} & U \\end{bmatrix}\n \\begin{bmatrix}(zI+Z^*)^{-1} &0 \\\\ 0 & (zI-W)^{-1}\\end{bmatrix}\\notag\\\\\n&= - \\begin{bmatrix}I_p & -S(z)\\end{bmatrix}{\\bf C}(z I-{\\bf A})^{-1},\n\\end{align*}\nwhere ${\\bf C}$ and ${\\bf A}$ are defined as in \\eqref{oct2}. 
Taking the latter \nformula into account, we way write\n\\eqref{sep8a} in a more structured form as\n\\begin{equation} \\label{sep8c}\n\\boldsymbol{\\Gamma}_{\\mathfrak D}(z,\\zeta)=\\begin{bmatrix} \\Gamma_{\\mathfrak D} &\n-(\\overline\\zeta I-{\\bf A}^*)^{-1}\n{\\bf C}^*\\begin{bmatrix}I_p \\\\ -S(\\zeta)^*\\end{bmatrix}\\\\\n-\\begin{bmatrix}I_p & -S(z)\\end{bmatrix}{\\bf C}(z I-{\\bf A})^{-1}&\n{\\displaystyle\\frac{I - S(z)S(\\zeta)^*}{z+\\overline{\\zeta}}}\\end{bmatrix}.\n\\end{equation}\nSince the matrix $\\boldsymbol{\\Gamma}_{\\mathfrak D}$ is positive definite, the kernel \\eqref{sep8c} \nis positive if and only if the \nSchur complement of $\\Gamma_{\\mathfrak D}$ is a positive kernel on \n$\\Pi_+\\backslash \\sigma(W)$ \nand therefore, admits a unique positive extension to the whole $\\Pi_+$:\n$$\n\\frac{I - S(z)S(\\zeta)^*}{z+\\overline{\\zeta}}-\n\\begin{bmatrix}I_p & -S(z)\\end{bmatrix}{\\bf C}(z I-{\\bf A})^{-1}\\Gamma_{\\mathfrak D}^{-1}\n(\\overline\\zeta I-{\\bf A}^*)^{-1}{\\bf C}^*\\begin{bmatrix}I_p \\\\ \n-S(\\zeta)^*\\end{bmatrix}\\succeq 0.\n$$\nThe latter can be written as\n$$\n\\begin{bmatrix}I_p & -S(z)\\end{bmatrix}\\left\\{\\frac{J}{z+\\overline{\\zeta}}-\n{\\bf C}(z I-{\\bf A})^{-1}\\Gamma_{\\mathfrak D}^{-1}\n(\\overline\\zeta I-{\\bf A}^*)^{-1}{\\bf C}^*\\right\\}\n\\begin{bmatrix}I_p \\\\ -S(\\zeta)^*\\end{bmatrix}\\succeq 0,\n$$\nand finally, upon making use of \\eqref{ad1}, as\n\\begin{equation}\n\\begin{bmatrix}I_p & -S(z)\\end{bmatrix}\\frac{\\Theta(z)J\\Theta(\\zeta)^*}\n {z+\\overline{\\zeta}}\n\\begin{bmatrix}I_p \\\\ -S(\\zeta)^*\\end{bmatrix}\\succeq 0.\n\\label{sep8d}\n\\end{equation}\nWe next define two functions $Q_1$ and $Q_2$ by the formula\n\\begin{equation}\n\\begin{bmatrix}Q_2(z) & -Q_1(z)\\end{bmatrix}=\\begin{bmatrix}I_p & -S(z)\\end{bmatrix}\n\\begin{bmatrix}\\Theta_{11}(z) & \\Theta_{12}(z)\\\\ \\Theta_{21}(z) & \\Theta_{22}(z)\n\\end{bmatrix}, \n\\label{sep8f}\n\\end{equation}\nand write \\eqref{sep8d} in terms of these functions as\n$$\n\\begin{bmatrix}Q_2(z) & -Q_1(z)\\end{bmatrix}\\frac{J}{z+\\overline{\\zeta}}\n\\begin{bmatrix}Q_2(\\zeta)^* \\\\ -Q_1(\\zeta)^*\\end{bmatrix}=\n\\frac{Q_2(z)Q_2(\\zeta)^*-Q_1(z)Q_1(\\zeta)^*}{z+\\overline{\\zeta}}\n\\succeq 0.\n$$\nBy Leech's theorem \\cite{leech}, there exists a Schur-class function \n$G\\in\\mathcal S^{p\\times m}(\\Pi_+)$ such that \n$$\nQ_2 G=Q_1,\n$$\nwhich, in view of \\eqref{sep8f} can be written as \n$$\n(\\Theta_{11}-S\\Theta_{21})G=S\\Theta_{22}-\\Theta_{12},\n$$\nor equivalently, as\n\\begin{equation} \\label{sep8g}\n S (\\Theta_{21} G + \\Theta_{22}) = \\Theta_{11} G + \\Theta_{12}.\n\\end{equation}\nNote that $\\Theta_{22}(z)$ is invertible and that \n$\\Theta_{22}(z)^{-1} \\Theta_{21}(z)$ is strictly contractive on all of \n$\\Pi_{+} \\setminus \\sigma(W)$ (and then on all of $\\Pi_{+}$ by \nanalytic continuation) as a consequence of the bullet immediately \nafter \\eqref{Theta22invTheta21} above. As $G$ is in the Schur class \nand hence is contractive on all of $\\Pi_{+}$, \nit follows that $\\Theta_{22}(z)^{-1}\\Theta_{21}(z)G(z)+I_m$ is \ninvertible on all of $\\Pi_{+}$.\nHence \n$$\\Theta_{21}(z)G(z)+\\Theta_{22}(z)=\\Theta_{22}(z)(\\Theta_{22}(z)^{-1}\n\\Theta_{21}(z)G(z)+I_m)\n$$ \nis invertible for all $z\\in\\Pi_+$ and we can solve \\eqref{sep8g} for $S$ \narriving at at the formula \\eqref{LFTparam}. 
\n\n\\begin{remark} \\label{R:FMI} Note that in this Potapov approach to \n the derivation of the linear-fractional parametrization via the \n Fundamental Matrix Inequality, the winding-number argument \n from the state-space approach never enters. What \n apparently replaces it, once everything is properly organized, is \n the theorem of Leech.\n\\end{remark}\n\n\\subsection{Positive kernels and reproducing kernel Hilbert spaces} \\label{S:RKHS}\nAssume now that we are given a $\\Pi_{+}$-admissible interpolation data \nset and that the Pick matrix $\\boldsymbol{\\Gamma}_{\\mathfrak D}$ is invertible. \nThen one can define the matrix function $\\Theta(z)$ as in \n\\eqref{oct1} and then $K_{\\Theta,J}(z, \\zeta) = \n\\frac{J - \\Theta(z) J \\Theta(\\zeta)^{*}}{z + \\overline{\\zeta}}$ is \ngiven by \\eqref{ad1}. A straightforward computation then shows that, \nfor any $N=1,2,\\dots$ with points $z_{1}, \\dots, z_{N}$ in \n$\\Pi_{+}\\setminus \\sigma(W)$ and vectors ${\\mathbf y}_{1}, \\dots, {\\mathbf y}_{N}$ in \n${\\mathbb C}^{p+m}$, we have\n\\begin{align*}\n&\\sum_{i,j=1}^{N} \\langle K_{\\Theta,J}(z_{i}, z_{j}) {\\mathbf y}_{j}, {\\mathbf y}_{i} \n\\rangle_{{\\mathbb C}^{p+m}} \\\\\n& \\quad = \\bigg\\langle \\boldsymbol{\\Gamma}_{\\mathfrak D}^{-1}\n\\bigg( \\sum_{j=1}^{N} (\\overline{z}_{j} I - {\\mathbf A}^*)^{-1}{\\mathbf C}^{*} {\\mathbf y}_{j} \n\\bigg), \\; \\sum_{i=1}^{N} (\\overline{z}_{i} I - {\\mathbf A}^*)^{-1}{\\mathbf C}^{*} {\\mathbf y}_{i} \n \\bigg\\rangle, \n\\end{align*}\nand hence\n $K_{\\Theta,J}$ is a positive kernel on $\\Pi_{+} \\setminus \\sigma(W)$ \n if $\\boldsymbol{\\Gamma}_{\\mathfrak D} \\succ 0$. More generally, if \n $\\boldsymbol{\\Gamma}_{\\mathfrak D}$ has some number $\\kappa$ of negative \n eigenvalues, then for any choice of points $z_{1}, \\dots, z_{N} \\in \n \\Pi_{+} \\setminus \\sigma(W)$ the block Hermitian matrix\n\\begin{equation} \\label{KThetaJblock}\n \\left[ K_{\\Theta,J}(z_{i}, z_{j}) \\right]_{i,j=1, \\dots, N}\n \\end{equation}\nhas at most $\\kappa$ negative eigenvalues. If we impose the controllability and observability \nassumptions on the matrix pairs $(U,W)$ and $(Z,X)$, then there \nexists a choice of $z_{1}, \\dots, z_{N} \\in \\Pi_{+} \\setminus \n\\sigma(W)$ so that the matrix \\eqref{KThetaJblock} has exactly \n$\\kappa$ negative eigenvalues, in which case we say that $\\Theta$ is \nin the generalized $J$-Schur class ${\\mathcal S}_{J, \\kappa}(\\Pi_{+})$ \n(compare with the Kre\\u{\\i}n-Langer generalized Schur class discussed \nat the beginning of Section \\ref{S:negsquares} below). In the case \nwhere $\\Theta \\in {\\mathcal S}_{J, \\kappa}(\\Pi_{+})$ with $\\kappa > 0$, there \nis still associated a space of functions ${\\mathcal H}(K_{\\Theta,J})$ as in \n\\eqref{RKHSa}--\\eqref{RKHSb}; the space ${\\mathcal H}(K_{\\Theta,J})$ is \nnow a {\\em Pontryagin space} with negative index equal to $\\kappa$\n(see Section \\ref{S:Kreinprelim} for background on Pontryagin and \nKre\\u{\\i}n spaces). In any case, in this way we arrive at yet another \ninterpretation of the condition that $\\boldsymbol{\\Gamma}_{\\mathfrak D}$ be \npositive definite.\n\n\\begin{theorem} \\label{T:RKHS}\n Assume that we are given a $\\Pi_{+}$-admissible interpolation \n data set with $\\boldsymbol{\\Gamma}_{\\mathfrak D}$ invertible (so \n that $\\Theta$ and $K_{\\Theta,J}$ are defined). 
Then \n ${\\mathcal H}(K_{\\Theta,J})$ is a Hilbert space if and only if \n $\\boldsymbol{\\Gamma}_{\\mathfrak D} \\succ 0$.\n\\end{theorem}\n\nIn Section \\ref{S:synthesis} below (see display \\eqref{HKThetaJ})\nwe shall spell this criterion out in more detail and arrive at another \ncondition equivalent to positive-definiteness \nof the Pick matrix $\\boldsymbol{\\Gamma}_{\\mathfrak D}$.\n\n\\section{The Grassmannian\/Kre\\u{\\i}n-space-geometry approach to the \nBTOA-NP interpolation problem} \n\\label{S:proof2}\nIn this section we sketch the Grassmannian\/Kre\\u{\\i}n-space geometry \nproof of Theorem \\ref{T:BTOA-NP} based on the work in \\cite{BH}---see \nalso \\cite{Ball-Indiana} for a more expository account and \\cite{BF} \nfor a more recent overview which also highlights the method in \nvarious multivariable settings. These treatments work with the \nSarason \\cite{Sarason67} or Model-Matching \\cite{Francis} formulation \nof the Nevanlinna-Pick interpolation problem, while we work with the \nLTOA-interpolation formulation.\nThe translation between the two is given in \\cite[Chapter 16]{BGR}\n(where the Sarason\/Model Matching formulation is called \n{\\em divisor-remainder form}).\n\n\\subsection{Kre\\u{\\i}n-space preliminaries} \\label{S:Kreinprelim}\nLet us first review a few preliminaries concerning Kre\\u{\\i}n spaces.\nA Kre\\u{\\i}n space by definition is a linear space ${\\mathcal K}$ endowed\nwith an indefinite inner product $[ \\cdot, \\cdot]$ which is\n{\\em complete} in the following sense: there are two subspaces\n${\\mathcal K}_{+}$ and ${\\mathcal K}_{-}$ of ${\\mathcal K}$ such that the restriction of $[\n\\cdot, \\cdot ]$ to ${\\mathcal K}_{+} \\times {\\mathcal K}_{+}$ makes ${\\mathcal K}_{+}$ a\nHilbert space while the restriction of $-[ \\cdot, \\cdot]$ to\n${\\mathcal K}_{-} \\times {\\mathcal K}_{-}$ makes ${\\mathcal K}_{-}$ a Hilbert space, and \n\\begin{equation}\n{\\mathcal K} = {\\mathcal K}_{+} [\\dot +] {\\mathcal K}_{-}\n\\label{dec}\n\\end{equation}\nis a $[ \\cdot, \\cdot]$-orthogonal direct sum decomposition\nof ${\\mathcal K}$. In this case the decomposition \\eqref{dec}\nis said to form a {\\em fundamental decomposition} for ${\\mathcal K}$.\nFundamental decompositions are never unique except in the trivial\ncase where one of ${\\mathcal K}_{+}$ or ${\\mathcal K}_{-}$ is equal to the zero space.\nIf $\\min(\\dim {\\mathcal K}_{+}, \\, \\dim {\\mathcal K}_{-})=\\kappa<\\infty$, then \n${\\mathcal K}$ is called a Pontryagin space of index $\\kappa$.\n\n\\smallskip\n\nUnlike the case of Hilbert spaces where closed subspaces all look\nthe same, there is a rich geometry for subspaces of a Kre\\u{\\i}n\nspace. A subspace ${\\mathcal M}$ of a Kre\\u{\\i}n space ${\\mathcal K}$ is said to be\n{\\em positive}, {\\em isotropic}, or {\\em negative} depending on\nwhether $[u, u] \\ge 0$ for all $u \\in {\\mathcal M}$, $[ u, u ] =0$ for all\n$u \\in {\\mathcal M}$ (in which case it follows that $[u,v] = 0$ for all\n$u,v \\in {\\mathcal M}$ as a consequence of the Cauchy-Schwarz inequality),\nor $[u,u] \\le 0$ for all $u \\in {\\mathcal M}$. Given any subspace ${\\mathcal M}$,\nwe define the Kre\\u{\\i}n-space orthogonal complement\n${\\mathcal M}^{[\\perp]}$ to consist of all $v \\in {\\mathcal K}$ such that $[u,v] =\n0$ for all $u \\in {\\mathcal M}$. Note that the statement that ${\\mathcal M}$ is\nisotropic is just the statement that ${\\mathcal M} \\subset {\\mathcal M}^{[ \\perp\n]}$. 
If it happens that ${\\mathcal M} = {\\mathcal M}^{[ \\perp]}$, we say that ${\\mathcal M}$\nis a {\\em Lagrangian} subspace of ${\\mathcal K}$. Simple examples show that \nin general, unlike the Hilbert space case, it can happen that ${\\mathcal M}$ \nis a closed subspace of the Kre\\u{\\i}n space ${\\mathcal K}$ yet the space \n${\\mathcal K}$ cannot be split as the ${\\mathcal K}$-orthogonal direct sum of ${\\mathcal M}$ and \n${\\mathcal M}^{[\\perp]}$ (e.g., this happens dramatically if ${\\mathcal M}$ is an \nisotropic subspace of ${\\mathcal K}$). If ${\\mathcal M}$ is a subspace of ${\\mathcal K}$ for which the splitting does \noccur, i.e., such that ${\\mathcal K} = {\\mathcal M} [+] {\\mathcal M}^{[\\perp]}$, we say \nthat ${\\mathcal M}$ is a {\\em regular subspace} of ${\\mathcal K}$.\n\n\\smallskip\n\nExamples of such subspaces arise from placing appropriate\nKre\\u{\\i}n-space inner products on the direct sum ${\\mathcal H}^\\prime \\oplus\n{\\mathcal H}$ of two Hilbert spaces and looking at graphs of operators of\nan appropriate class.\n\n\\begin{example} \\label{E:unitary} Suppose that ${\\mathcal H}'$ and\n ${\\mathcal H}$ are two Hilbert spaces and we take ${\\mathcal K}$ to be the\n external direct sum ${\\mathcal H}' \\oplus {\\mathcal H}$ with inner product\n$$\n \\left[ \\begin{bmatrix} x \\\\ y \\end{bmatrix}, \\,\n \\begin{bmatrix} x' \\\\ y' \\end{bmatrix} \\right] =\n \\left\\langle \\begin{bmatrix} I_{{\\mathcal H}'} & 0 \\\\ 0 & -I_{{\\mathcal H}}\n \\end{bmatrix} \\begin{bmatrix} x \\\\ y \\end{bmatrix}, \\,\n \\begin{bmatrix} x' \\\\ y' \\end{bmatrix} \\right\\rangle_{{\\mathcal H}' \\oplus {\\mathcal H}}\n$$\nwhere $\\langle \\cdot, \\cdot \\rangle_{{\\mathcal H}' \\oplus {\\mathcal H}}$ is the\nstandard Hilbert-space inner product on the direct-sum Hilbert\nspace ${\\mathcal H}' \\oplus {\\mathcal H}$. In this case it is easy to find a\nfundamental decomposition: take ${\\mathcal K}_{+} = \\sbm{ {\\mathcal H}' \\\\ \\{0\\}}$\nand ${\\mathcal K}_{-} = \\sbm{ \\{0\\} \\\\ {\\mathcal H} }$. Now let $T$ be a bounded\nlinear operator from ${\\mathcal H}$ to ${\\mathcal H}'$ and let ${\\mathcal M}$ be the graph of\n$T$:\n$$\n {\\mathcal M} = {\\mathcal G}_{T} = \\left\\{ \\begin{bmatrix} T x \\\\ x \\end{bmatrix}\n \\colon x \\in {\\mathcal H} \\right\\} \\subset {\\mathcal K}.\n $$\n Then a nice exercise is to work out the following facts:\n \\begin{itemize}\n \\item ${\\mathcal G}_{T}$ is negative if and only if $\\|T\\| \\le 1$, in \n which case ${\\mathcal G}_{T}$ is {\\em maximal negative}, i.e., the subspace\n ${\\mathcal G}_{T}$ is not contained in any strictly larger negative \n subspace.\n\n \\item ${\\mathcal G}_{T}$ is isotropic if and only if $T$ is isometric\n ($T^{*} T = I_{{\\mathcal H}}$).\n\n \\item ${\\mathcal G}_{T}$ is Lagrangian if and only if $T$ is unitary:\n $T^{*} T = I_{{\\mathcal H}}$ and $T T^{*} = I_{{\\mathcal H}'}$.\n \\end{itemize}\n\\end{example}\n\nLet ${\\mathcal M}$ be a fixed subspace of a Kre\\u{\\i}n space ${\\mathcal K}$ and ${\\mathcal G}$ a \nclosed subspace of ${\\mathcal M}$. In order that ${\\mathcal G}$ be maximal negative as \na subspace of ${\\mathcal K}$, it is clearly necessary that ${\\mathcal G}$ be maximal \nnegative as a subspace of ${\\mathcal M}$. 
The following lemma (see \\cite{BH} or \n\\cite{Ball-Indiana}\nfor the proof) identifies when the converse holds.\n\n\\begin{lemma} \\label{L:maxneg} Suppose that ${\\mathcal M}$ is a closed \n subspace of a Kre\\u{\\i}n-space ${\\mathcal K}$ and ${\\mathcal G}$ is a negative \n subspace of ${\\mathcal M}$. Then a subspace ${\\mathcal G} \\subset {\\mathcal M}$ which is maximal-negative \n as a subspace of ${\\mathcal M}$ is automatically also maximal negative as \n a subspace of ${\\mathcal K}$ if and only if the Kre\\u{\\i}n-space \n orthogonal complement\n $$\n {\\mathcal K} [-] {\\mathcal M} = \\{ k \\in {\\mathcal K} \\colon [k, m ]_{{\\mathcal K}} = 0 \\text{ for \n all } m \\in {\\mathcal M}\\}\n $$\n is a positive subspace of ${\\mathcal K}$.\n\\end{lemma}\n\n\\subsection{The Grassmannian\/Kre\\u{\\i}n-space approach to \ninterpolation}\nSuppose now that we are given a $\\Pi_{+}$-admissible \nBTOA-interpolation data set as in \\eqref{data}.\nLet ${\\mathcal M}_{\\mathfrak D} \\subset L^{2}_{p+m}(i {\\mathbb R})$ be as in \n\\eqref{cMrep}. We view ${\\mathcal M}_{\\mathfrak D}$ as a subspace of the \nKre\\u{\\i}n space \n\\begin{equation} \\label{defcK} \n{\\mathcal K} =\\begin{bmatrix} L^{2}_{p}(i {\\mathbb R}) \\\\ \\mathcal M_{\\mathfrak D,-} \n\\end{bmatrix}=\n\\begin{bmatrix} L^{2}_{p}(i {\\mathbb R}) \\\\ \\psi^{-1} \n H^{2}_{m}(\\Pi_{+}) \\end{bmatrix}\n\\end{equation}\n(where we use the notation in \\eqref{cM-rep})\nwith Kre\\u{\\i}n-space inner product $[ \\cdot, \\cdot ]_{J}$ induced by the matrix $J = \n\\sbm{ \nI_{p} & 0 \\\\ 0 & -I_{m} }$:\n$$\n\\left[ \\begin{bmatrix} f_{1} \\\\ f_{2} \\end{bmatrix}, \\, \n\\begin{bmatrix} g_{1} \\\\ g_{2} \\end{bmatrix} \n\\right]_{J}: =\n\\langle f_{1}, \\, g_{1} \\rangle_{L^{2}_{p}(i {\\mathbb R})} -\n\\langle f_{2}, \\, g_{2} \\rangle_{ \\psi^{-1} H^{2}_{m}(\\Pi_{+})}.\n$$\nA key subspace in the Kre\\u{\\i}n-space geometry approach to the BTOA-NP problem is \nthe $J$-orthogonal complement of ${\\mathcal M}_{\\mathfrak D}$ inside ${\\mathcal K}$:\n\\begin{equation} \\label{cPdata}\n {\\mathcal M}_{\\mathfrak D}^{[ \\perp {\\mathcal K}]} : = {\\mathcal K} [-]_{J} {\\mathcal M}_{\\mathfrak D} = \n \\{ f \\in {\\mathcal K} \\colon [ f, g ]_{J} = 0 \\text{ for all } g \\in \n {\\mathcal M}_{\\mathfrak D} \\}.\n\\end{equation}\nWe then have the following result.\n\n\\begin{theorem} \\label{T:BTOA-NP'} \nThe BTOA-NP has a solution $S \\in {\\mathcal S}^{p \\times m}(\\Pi_{+})$ if and \nonly if the subspace ${\\mathcal M}_{\\mathfrak D}^{[\\perp {\\mathcal K}]}$ \\eqref{cPdata} \nis a positive subspace of ${\\mathcal K}$ \\eqref{defcK}, i.e.,\n $$\n [ f, f ]_{_J} \\ge 0\\; \\text{ for all } \\; f \\in {\\mathcal M}_{\\mathfrak \n D}^{[\\perp {\\mathcal K}]}.\n $$\n If it is the case that ${\\mathcal M}_{\\mathfrak D}^{[\\perp {\\mathcal K}]}$ is a Hilbert \n space in the ${\\mathcal K}$-inner product, then there is rational \n $J$-inner function $\\Theta$ so that \n \\begin{enumerate}\n \\item $\\Theta$ provides a \n Beurling-Lax representation for ${\\mathcal M}_{\\mathfrak D}$ \\eqref{cMBLrep}, \n and\n \\item the set of all Schur-class solutions $S \\in {\\mathcal S}^{p \\times \n m}(\\Pi_{+})$ of the interpolation conditions \\eqref{BTOAint1}, \n \\eqref{BTOAint2}, \\eqref{BTOAint3} is given by the linear-fractional \n parametrization formula \\eqref{LFTparam} with $G \\in {\\mathcal S}^{p \\times \n m}(\\Pi_{+})$.\n \\end{enumerate}\n \\end{theorem}\n \n \\begin{proof}[Sketch of the proof of Theorem \\ref{T:BTOA-NP'}]\nWe first argue 
the ${\\mathcal M}_{\\mathfrak D}^{[\\perp {\\mathcal K}]}$ being a positive \nsubspace of ${\\mathcal K}$ is necessary for the BTOA-NP to have a solution.\nLet $S\\in {\\mathcal S}^{p \\times m}(\\Pi_{+})$ be such a solution and let \n$M_{S} \\colon \\psi^{-1} H^{2}_{m}(\\Pi_{+}) \\to L^{2}_{p}(i {\\mathbb R})$ be the \noperator of multiplication by $S$:\n$$\n M_{S} \\colon \\psi^{-1} h \\mapsto S \\cdot \\psi^{-1} h.\n$$\nThe operator norm of $M_{S}$ is the same as the supremum norm of $S$\nover $i {\\mathbb R}$:\n$$\n \\| M_{S}\\|_{\\rm op} = \\|S\\|_\\infty:=\\sup_{\\lambda \\in i {\\mathbb R}} \\| S(\\lambda) \\|.\n$$\nLet us consider the graph space of $M_{S}$, namely\n\\begin{equation}\n {\\mathcal G}_{S} = \\begin{bmatrix} M_{S} \\\\ I_m \\end{bmatrix} \\psi^{-1} H^{2}_{m}(\\Pi_{+}) \n = \\begin{bmatrix} S \\\\ I_{m} \\end{bmatrix} \\cdot \\psi^{-1} \n H^{2}_{m}(\\Pi_{+}).\n\\label{ad2}\n\\end{equation}\nBy the first bullet in Example \\ref{E:unitary}, it follows that \n\\begin{itemize} \n \\item {\\em $\\| S\\|_{\\infty} \\le 1$ if and only if ${\\mathcal G}_{S}$ \n is a maximal negative subspace of ${\\mathcal K}$.}\n\\end{itemize}\nMoreover, as a consequence of the criterion \\eqref{sol-rep} for $S$ \nto satisfy the interpolation conditions, we have\n\\begin{itemize}\n \\item {\\em $S$ satisfies the interpolation conditions if and only \n if ${\\mathcal G}_{S} \\subset {\\mathcal M}_{\\mathfrak D}$.}\n \\end{itemize}\nBy combining these two observations, we see that if $S$ is a \nsolution to the BTOA-NP, then the \nsubspace ${\\mathcal G}_{S}$ is contained in ${\\mathcal M}_{\\mathfrak D}$ and is maximal \nnegative in ${\\mathcal K}$. It follows that ${\\mathcal M}_{\\mathfrak D}^{[\\perp {\\mathcal K}]}$ is \na positive subspace in ${\\mathcal K}$ as a consequence of Lemma \\ref{L:maxneg}.\nThis verifies the necessity part in Theorem \\ref{T:BTOA-NP'}.\n\n\\smallskip\n\nConversely, suppose that ${\\mathfrak D}$ is a $\\Pi_{+}$-admissible \nBTOA-interpolation data set. Then we can form the space \n$$\n{\\mathcal M}_{\\mathfrak D}^{[\\perp {\\mathcal K}]} \\subset {\\mathcal K} = \\sbm{ L^{2}_{p}(i{\\mathbb \nR}) \\\\ \\psi^{-1} H^{2}_{m}(\\Pi_{+}) }.\n$$\nSuppose that ${\\mathcal M}_{\\mathfrak D}^{[\\perp {\\mathcal K}]}$ is a positive subspace of ${\\mathcal K}$. \nBy Lemma \\ref{L:maxneg}, a subspace ${\\mathcal G}$ of ${\\mathcal M}_{\\mathfrak \nD}$ which is maximal negative as a subspace of ${\\mathcal M}_{\\mathfrak D}$ is \nalso maximal negative as a subspace of ${\\mathcal K}$. We also saw in the necessity \nargument that if the subspace ${\\mathcal G}$ has the form ${\\mathcal G}_{S}$ \\eqref{ad2} \nfor a matrix function $S$ \nand ${\\mathcal G}_{S}\\subset {\\mathcal M}_{\\mathfrak D}$, then $S$ satisfies \nthe interpolation conditions \\eqref{BTOAint1}, \\eqref{BTOAint2}, \n\\eqref{BTOAint3}. However, not all maximal negative subspaces ${\\mathcal G} = \n\\sbm{ T \\\\ I} \\psi^{-1} H^{2}_{m}(\\Pi_{+})$ \nof ${\\mathcal K}$ have the form ${\\mathcal G} = {\\mathcal G}_{S}$ for a matrix function $S$; the \nmissing property is {\\em shift-invariance}, i.e., one must require \nin addition that ${\\mathcal G}$ is invariant under multiplication by the \ncoordinate function $\\chi(\\lambda) = \\frac{\\lambda - 1}{\\lambda +1}$. Then \none gets that $T$ and $M_{\\chi}$ commute and one can conclude that $T$ \nis a multiplication operator: $T = M_{S}$ for some multiplier \nfunction $S$. 
Thus the issue is to construct maximal negative \nsubspaces of ${\mathcal M}_{\mathfrak D}$ (which are then also maximal \nnegative as subspaces of ${\mathcal K}$ by Lemma \ref{L:maxneg}) which are \nalso shift-invariant.\n\nTo achieve this goal, it is convenient to assume that ${\mathcal M}_{\mathfrak \nD}^{[\perp {\mathcal K}]}$ is strictly positive, i.e., that ${\mathcal M}_{\mathfrak \nD}^{[\perp {\mathcal K}]}$ is a Hilbert space. It then follows in particular that \n${\mathcal M}_{\mathfrak D}^{[\perp {\mathcal K}]}$ is regular, i.e., ${\mathcal M}_{\mathfrak D}^{[\perp {\mathcal K}]}$ \nand its $J$-orthogonal complement (relative to ${\mathcal K}$) ${\mathcal M}_{\mathfrak D}$ form\na $J$-orthogonal decomposition of ${\mathcal K}$:\n$$\n {\mathcal K} = {\mathcal M}_{\mathfrak D} [+]_{J} {\mathcal M}_{\mathfrak D}^{[\perp {\mathcal K}]}.\n$$\nOne can use an approximation\/normal-families \nargument to reduce the general case to this special case, but we do \nnot go into details on this point here. Then results from \cite{BH} \nimply that there is a $J$-Beurling-Lax representer for \n${\mathcal M}_{\mathfrak D}$, i.e., there is a $J$-phase function \n$$\n\Theta \in L^{2}_{(p+m) \times (p+m)}(i {\mathbb R})\quad\mbox{with}\quad \n\Theta(\lambda)^{*} J \Theta(\lambda) = J \; \text{ for a.e. } \; \lambda \in i {\mathbb R}\n$$ \nsuch that \eqref{cMBLrep} holds. As both \n$$\n{\mathcal M}_{\mathfrak D} \ominus ({\mathcal M}_{\mathfrak D} \cap \nH^{2}_{p+m}(\Pi_{+}))\quad\mbox{and}\quad H^{2}_{p+m}(\Pi_{+})\ominus(\nH^{2}_{p+m}(\Pi_{+}) \cap {\mathcal M}_{\mathfrak D})\n$$ \nare finite-dimensional, in \nfact one can show that $\Theta$ is rational and bounded on $i \n{\mathbb R}$. Then the multiplication operator $M_{\Theta} \colon k \n\mapsto \Theta \cdot k$ is a Kre\u{\i}n-space isomorphism from \n$H^{2}_{p+m}(\Pi_{+})$ (a Kre\u{\i}n space with inner product induced \nby $J = \sbm{ I_{p} & 0 \\ 0 & -I_{m}}$) onto ${\mathcal M}_{\mathfrak D}$ \nwhich also intertwines the multiplication operator $M_{\chi}$ on the \nrespective spaces. It follows that shift-invariant ${\mathcal M}_{\mathfrak \nD}$-maximal-negative subspaces ${\mathcal G}$ are exactly those of the form\n$$\n{\mathcal G} = \Theta \cdot \begin{bmatrix} G \\ I_{m} \end{bmatrix} \cdot \nH^{2}_{m}(\Pi_{+}), \; \text{ where } G \in {\mathcal S}^{p \times m}(\Pi_{+}).\n$$\nBy the preceding analysis, any such subspace ${\mathcal G}$ also has the form\n$$\n {\mathcal G} = \begin{bmatrix} S \\ I_{m} \end{bmatrix} \cdot \psi^{-1} \n H^{2}_{m}(\Pi_{+})\n$$\nwhere $S \in {\mathcal S}^{p \times m}(\Pi_{+})$ is a Schur-class solution of \nthe interpolation conditions \eqref{BTOAint1}, \eqref{BTOAint2}, \n\eqref{BTOAint3}. Moreover one can reverse this analysis to see that \nany solution $S$ of the BTOA-NP interpolation problem arises in this \nway from a $G \in {\mathcal S}^{p \times m}(\Pi_{+})$. 
From the subspace \nequality\n$$\n\begin{bmatrix} S \\ I_{m} \end{bmatrix} \cdot \psi^{-1} \n H^{2}_{m}(\Pi_{+}) = \begin{bmatrix} \Theta_{11} & \Theta_{12} \\ \n \Theta_{21} & \Theta_{22} \end{bmatrix} \cdot \begin{bmatrix} G \\ I_{m} \n\end{bmatrix} \cdot \nH^{2}_{m}(\Pi_{+})\n$$\none can solve for $S$ in terms of $G$: in particular we have\n$$\n\begin{bmatrix} S \\ I_{m} \end{bmatrix} \cdot \psi^{-1} I_{m} \n \in \begin{bmatrix} \Theta_{11} & \Theta_{12} \\ \n \Theta_{21} & \Theta_{22} \end{bmatrix} \cdot \begin{bmatrix} G \\ I_{m} \n\end{bmatrix} \cdot H^{2}_{m}(\Pi_{+}),\n$$\nso there must be a function $Q \in H^{\infty}_{m \times m}(\Pi_{+})$ \nso that\n$$ \n\begin{bmatrix} S \\ I_{m} \end{bmatrix} \cdot \psi^{-1} I_{m} \n = \begin{bmatrix} \Theta_{11} & \Theta_{12} \\ \n \Theta_{21} & \Theta_{22} \end{bmatrix} \cdot \begin{bmatrix} G \\ I_{m} \end{bmatrix}\n \cdot Q.\n$$\nAs we saw in Section 2, the latter equality (which is the same as \eqref{ad3}) implies the \nrepresentation formula \eqref{LFTparam} for the set of solutions $S$.\nThis completes the proof of Theorem \ref{T:BTOA-NP'}.\n\end{proof}\n\n\begin{remark} \label{R:no-wno}\nNote that in this Grassmannian\/Kre\u{\i}n-space approach\nwe have not even mentioned that the $J$-phase $\Theta$ is actually $J$-inner \n(i.e., $\Theta(\lambda)$ is $J$-contractive at its points of analyticity \nin $\Pi_{+}$); this condition and the winding number argument in the \nproof via the state-space approach in Section \ref{S:proof1} have been \nreplaced by the condition that ${\mathcal M}_{\mathfrak D}^{[\perp {\mathcal K}]}$ is a \npositive subspace and consequences of this assumption coming out of \nLemma \ref{L:maxneg}.\n\end{remark}\n\n\section{State-space versus Grassmannian\/Kre\u{\i}n-space-geometry \nsolution criteria} \label{S:synthesis}\n\nAssume that we are given a $\Pi_{+}$-admissible interpolation data \nset ${\mathfrak D}$ with $\boldsymbol{\Gamma}_{\mathfrak D}$ invertible.\nWhen we combine the results of Theorems \ref{T:BTOA-NP}, \n\ref{T:BTOA-NP'} and \ref{T:RKHS}, we see immediately that $\boldsymbol{\Gamma}_{\mathfrak D} \n\succ 0$ if and only if the subspace ${\mathcal M}_{\mathfrak D}^{[\perp {\mathcal K}]}$ \nis positive as a subspace of the Kre\u{\i}n space ${\mathcal K}$ \n\eqref{defcK}, since each of these two conditions is equivalent to \nthe existence of solutions for the BTOA-NP interpolation problem with \ndata set ${\mathfrak D}$. 
It is not too much of a stretch to \nspeculate that the strict positive definiteness of $\boldsymbol{\Gamma}_{\mathfrak D}$ \nis equivalent to strict positivity of ${\mathcal M}_{\mathfrak D}^{[\perp {\mathcal K}]}$.\nFurthermore, in the case where $\boldsymbol{\Gamma}_{\mathfrak D}$ is invertible, \nby the analysis in Section \ref{S:RKHS} we know that \npositive-definiteness of $\boldsymbol{\Gamma}_{\mathfrak D}$ is equivalent to \npositivity of the kernel $K_{\Theta,J}$ \eqref{ad1}, or to the \nreproducing kernel space ${\mathcal H}(K_{\Theta,J})$ being a Hilbert space.\nThe goal of this section is to carry out some additional geometric \nanalysis to verify these equivalences for the nondegenerate case \n($\boldsymbol{\Gamma}_{\mathfrak D}$ invertible) directly.\n\n\begin{corollary} \label{C:BTOA-NP} Suppose that ${\mathfrak D}$ is \n a $\Pi_{+}$-admissible BTOA interpolation data set, let \n $\boldsymbol{\Gamma}_{\mathfrak D}$ be the matrix given in \eqref{GammaData} and let \n ${\mathcal M}_{\mathfrak D}^{[\perp {\mathcal K}]} \subset {\mathcal K}$ be the subspace defined in\n \eqref{cPdata}. Then the following are equivalent:\n \begin{enumerate}\n \item $\boldsymbol{\Gamma}_{\mathfrak D} \succ 0$.\n\item ${\mathcal M}_{\mathfrak D}^{[\perp {\mathcal K}]}$ is a strictly positive subspace \n of ${\mathcal K}$ (i.e., ${\mathcal M}_{\mathfrak D}^{[ \perp {\mathcal K}]}$ is a Hilbert space in the \n $J$-inner product).\n\item The reproducing kernel Pontryagin space ${\mathcal H}(K_{\Theta,J})$ is \nactually a Hilbert space.\n\end{enumerate}\n\end{corollary}\n\n\smallskip\n\n\begin{proof} For simplicity we consider first the case where the data set \n $\mathfrak D$ has the form\n\begin{equation} \label{dataL}\n{\mathfrak D}_{L} = (\emptyset, \emptyset, \emptyset; Z, X, Y; \emptyset),\n\end{equation}\ni.e., there are only Left Tangential interpolation conditions \n\eqref{BTOAint1}. 
\n\n\\smallskip\n\n\\noindent\n\\textbf{Case 1: The LTOA setting.}\nIn case $\\mathfrak D$ has the form ${\\mathfrak D} = {\\mathfrak \nD}_{L}$ as in \\eqref{dataL}, the matrix $\\boldsymbol{\\Gamma}_{\\mathfrak D}$ collapses down to \n$\\boldsymbol{\\Gamma}_{{\\mathfrak D}_{L}} = \\Gamma_{L}$ and ${\\mathcal M}_{{\\mathfrak D}_{L}}$ collapses down to\n$$\n {\\mathcal M}_{{\\mathfrak D}_{L}} = \\left\\{ \\begin{bmatrix} f \\\\ g \n\\end{bmatrix} \n \\in H^{2}_{p+m}(\\Pi_{+}) \\colon\n\\left( \\begin{bmatrix}X & -Y \\end{bmatrix} \\begin{bmatrix} f \\\\ g \n\\end{bmatrix} \\right)^{\\wedge L}(Z) = 0 \\right\\}.\n$$\nFurthermore, in the present case, ${\\mathcal M}_{{\\mathfrak D}_{L},-}=H^2_m(\\Pi_+)$ and \ntherefore, ${\\mathcal K}$ given by \\eqref{defcK} is simply\n ${\\mathcal K} = \\sbm{L^{2}_{p}(i {\\mathbb R}) \\\\ H^{2}_{m}(\\Pi_{+})}$.\n\n\\smallskip\n\n We view the map $\\sbm{ f \\\\ g} \\mapsto \\left(\\sbm{X & -Y } \\sbm{ f \n \\\\ g} \\right)^{\\wedge L}(Z)$ as an operator\n $$\n {\\mathcal C}_{Z, \\sbm{ X & -Y}} \\colon H^{2}_{p+m}(\\Pi_{+}) \\to {\\mathbb \n C}^{n_{Z}} \n $$\n which can be written out more explicitly as an integral operator \n along the imaginary line:\\footnote{We view operators of this form as \n {\\em control-like} operators; they and their cousins ({\\em observer-like} operators)\nwill be discussed in a broader context as part of the analysis of Case 2 to \ncome below.}\n$$ \n {\\mathcal C}_{Z, \\, \\sbm{ X & -Y }} \\colon\n \\begin{bmatrix} f_{+} \\\\ f_{-} \\end{bmatrix} \\mapsto\n \\frac{1}{2 \\pi} \\int_{-\\infty}^{\\infty}\n- (iyI - Z)^{-1} \\begin{bmatrix} X & -Y \\end{bmatrix} \\begin{bmatrix} \nf_{+}(iy) \\\\ f_{-}(iy) \\end{bmatrix} \\, dy.\n$$\nThen we can view ${\\mathcal M}_{{\\mathfrak D}_{L}}$ as an operator kernel:\n $$\n {\\mathcal M}_{{\\mathfrak D}_{L}} = \\operatorname{Ker} \\, {\\mathcal C}_{Z, \\sbm{ X & -Y}}.\n $$\n We are actually interested in the $J$-orthogonal complement \n\\begin{align*}\n {\\mathcal M}_{{\\mathfrak D}_{L}}^{[\\perp {\\mathcal K}]}:={\\mathcal K} [-]_J {\\mathcal M}_{{\\mathfrak D}_{L}}\n&=\\sbm{L^{2}_{p}(i {\\mathbb R}) \\\\ H^{2}_{m}(\\Pi_{+})} [-]_J {\\mathcal M}_{{\\mathfrak D}_{L}}\\\\\n&=\\sbm{ H^{2}_{p}(\\Pi_{-}) \\\\ 0 } \\oplus\\left( H^{2}_{p+m}(\\Pi_+)\n [-]_{J} {\\mathcal M}_{{\\mathfrak D}_{L}}\\right).\n\\end{align*}\n As the subspace $\\sbm{ H^{2}_{p}(\\Pi_{-}) \\\\ 0 }$ is clearly \n positive, we see that ${\\mathcal M}_{{\\mathfrak D}_{L}}^{[\\perp {\\mathcal K}]}$ is \n positive if and only if its subspace \n$$\n{\\mathcal M}_{{\\mathfrak D}_{L}}^{[\\perp H^{2}_{p+m}(\\Pi_{+})]}: = H^{2}_{p+m}(\\Pi_{+}) \n [-]_{J} {\\mathcal M}_{{\\mathfrak D}_{L}}\n$$\nis positive. By standard operator-theory duality, we \n can express the latter (finite-dimensional and hence closed) subspace as an operator range:\n $$\n {\\mathcal M}_{{\\mathfrak D}_{L}}^{[\\perp H^{2}_{p+m}(\\Pi_{+})]} = \\operatorname{Ran} J \\left({\\mathcal C}_{Z,\\sbm{ X & -Y}} \n \\right)^{*},\n $$\n where the adjoint is with respect to the standard Hilbert-space \n inner product on $H^{2}_{p+m}(\\Pi_{+})$ and the standard Euclidean \n inner product on ${\\mathbb C}^{n_{Z}}$. 
\n One can compute the adjoint $\left({\mathcal C}_{Z, \sbm{ X & -Y}} \n \right)^{*}: \; {\mathbb C}^{n_{Z}}\to \sbm{ H^{2}_{p}(\Pi_+) \\\nH^{ 2}_{m}(\Pi_+) }$ explicitly as \n$$\n({\mathcal C}_{Z, \sbm{ X & -Y}})^{*} \colon x \mapsto \n\begin{bmatrix} X^{*} \\ -Y^{*} \end{bmatrix} (\lambda I + Z^{*})^{-1} x.\n$$\nThen the Kre\u{\i}n-space orthogonal complement $H^{2}_{p+m}(\Pi_{+}) [-]_{J} {\mathcal M}_{{\mathfrak D}_{L}}$ \ncan be identified with\n\begin{align} \n{\mathcal M}_{{\mathfrak D}_{L}}^{[\perp H^{2}_{p+m}(\Pi_{+}) ]} &=\nJ \cdot {\rm Ran} ({\mathcal C}_{Z, \sbm{ X & -Y}})^{*} \notag\\\n&=\left\{ \begin{bmatrix} X^{*} \\ Y^{*} \end{bmatrix} (\lambda I + \nZ^{*})^{-1} x \colon x \in {\mathbb C}^{n_Z} \right\}. \label{cMDL}\n\end{align}\n\nTo characterize when ${\mathcal M}_{{\mathfrak D}_{L}}^{[\perp \nH^{2}_{p+m}(\Pi_{+})]}$ is a \npositive subspace, it suffices to compute the Kre\u{\i}n-space \ninner-product gramian matrix ${\mathbb G}$ for ${\mathcal M}_{{\mathfrak \nD}_{L}}^{[\perp H^{2}_{p+m}(\Pi_{+})]}$ with respect to its parametrization by \n${\mathbb C}^{n_{Z}}$ in \eqref{cMDL}:\n\begin{align*}\n& \langle {\mathbb G} x, x' \rangle_{{\mathbb C}^{n_{Z}}} \\\n& \quad = \n \frac{1}{2 \pi}\left\langle J \begin{bmatrix} X^{*} \\ Y^{*} \n\end{bmatrix} (\lambda I + Z^{*})^{-1} x,\, \n\begin{bmatrix} X^{*} \\ Y^{*} \end{bmatrix} (\lambda I \n+ Z^{*})^{-1} x' \right\rangle_{H^{2}_{p+m}(\Pi_{+})} \\\n& \quad = \frac{1}{2 \pi}\int_{-\infty}^{\infty}\langle (-iy I + Z)^{-1} (X X^{*} - \nY Y^{*}) (iyI + Z^{*})^{-1} x,\, x' \rangle_{{\mathbb C}^{n_{Z}}}\, dy. \n\end{align*}\nThus ${\mathbb G}$ is given by\n$$ \n{\mathbb G} = \frac{1}{2 \pi} \int_{-\infty}^{\infty} (-iyI + Z)^{-1} (X X^{*} - Y \nY^{*}) (iyI + Z^{*})^{-1}\, dy.\n$$\nIntroduce the change of variable $\zeta = i y$, $d\zeta = i \, dy$ to \nwrite this as a complex line \nintegral\n\begin{align*}\n {\mathbb G} & = \frac{1}{2 \pi i} \lim_{R \to \infty} \int_{\Gamma_{R,1}} \n (-\zeta I + Z)^{-1} (X X^{*} - \n Y Y^{*}) (\zeta I + Z^{*})^{-1}\, d \zeta \\\n & = \frac{1}{2 \pi i} \lim_{R \to \infty} \n \int_{-\Gamma_{R,1}} (\zeta I - Z)^{-1} (X X^{*} - YY^{*}) (\zeta I \n + Z^{*})^{-1}\, d\zeta\n\end{align*}\nwhere $\Gamma_{R,1}$ is the straight line from $-iR$ to $iR$ and \n$-\Gamma_{R,1}$ is the same path but with reverse orientation (the \nstraight line from $iR$ to $-iR$). Since the integrand\n\begin{equation} \label{fzeta}\n f(\zeta) =\n(\zeta I - Z)^{-1} (X X^{*} - YY^{*}) (\zeta I + Z^{*})^{-1}\n\end{equation}\nsatisfies an estimate of the form\n$ \| f(\zeta) \| \le \frac{M}{ |\zeta|^{2}}\; $ as $\; |\zeta| \to \infty$, \nit follows that\n$$\n\lim_{R \to \infty} \int_{\Gamma_{R,2}} (\zeta I - Z)^{-1} (X X^{*} - YY^{*}) (\zeta I \n + Z^{*})^{-1}\, d\zeta = 0\n$$\nwhere $\Gamma_{R,2}$ is the semicircle of radius $R$ with counterclockwise \norientation starting at the point $-iR$ and ending at the point \n$iR$ (parametrization: $\zeta = R e^{i \theta}$ with $-\pi\/2 \le \n\theta \le \pi\/2$). Hence we see that \n$$\n{\mathbb G} = \frac{1}{ 2 \pi i} \lim_{R \to \infty} \int_{\Gamma_{R}} \n(\zeta I - Z)^{-1} (X X^{*} - YY^{*}) (\zeta I \n + Z^{*})^{-1}\, d\zeta\n$$\nwhere $\Gamma_{R}$ is the simple closed curve $-\Gamma_{R,1} + \n\Gamma_{R,2}$. 
By the residue theorem, this last expression is \nindependent of $R$ once $R$ is so large that all the RHP poles of the \nintegrand $f(\zeta)$ \eqref{fzeta} are inside the curve $\Gamma_{R}$, \nand hence\n$$\n{\mathbb G} = \frac{1}{2 \pi i} \int_{\Gamma_{R}}\n(\zeta I - Z)^{-1} (X X^{*} - Y Y^{*} ) (\zeta I + Z^{*})^{-1}\, \nd\zeta \n$$\nfor any $R$ large enough. This enables us to compute ${\mathbb G}$ via residues:\n\begin{equation} \label{bbG}\n {\mathbb G} = \sum_{z_{0} \in\Pi_+} {\rm Res}_{\zeta = z_{0}}\n(\zeta I - Z)^{-1} (X X^{*} - Y Y^{*} ) (\zeta I + Z^{*})^{-1}.\n\end{equation}\n\nWe wish to verify that ${\mathbb G}$ satisfies the Lyapunov equation\n\begin{equation} \label{LyapunovG}\n {\mathbb G} Z^{*} + Z {\mathbb G} = X X^{*} - Y Y^{*}.\n\end{equation}\nToward this end let us first note that\n\begin{align*}\n&(\zeta I - Z)^{-1}A(\zeta I + Z^{*})^{-1}Z^*+Z(\zeta I - Z)^{-1}A\n(\zeta I + Z^{*})^{-1}\\\n&=(\zeta I - Z)^{-1}A-A(\zeta I + Z^{*})^{-1}\n\end{align*}\nfor any $A\in\mathbb C^{n_Z\times n_Z}$. Making use of the latter equality with \n$A=X X^{*} - YY^{*}$ \nwe now deduce from the formula \eqref{bbG} for ${\mathbb G}$ that\n\begin{align*}\n {\mathbb G} Z^{*} + Z {\mathbb G} & = \n \sum_{z_{0} \in \Pi_+} {\rm Res}_{\zeta = z_{0}} \n \left( (\zeta I - Z)^{-1} (XX^{*} - Y Y^{*}) \right. \\\n & \quad \quad \quad \quad \quad \quad \left. - (XX^{*} - Y Y^{*}) \n (\zeta I + Z^{*})^{-1} \right) \\\n & = I \cdot (XX^{*} - Y Y^{*}) - (XX^{*} - Y Y^{*}) \cdot 0\n = XX^{*} - YY^{*}\n\end{align*}\nwhere for the last step we use that $Z$ has all its spectrum in the \nright half plane while $-Z^{*}$ has all its spectrum in the left\nhalf plane; also note that in general the sum of the residues of any \nresolvent matrix $R(\zeta) = ( \zeta I - A)^{-1}$ is the identity \nmatrix, due to the Laurent expansion at infinity for $R(\zeta)$: \n$R(\zeta) = \sum_{n=0}^{\infty} A^{n} \zeta^{-n-1}$.\nThis completes the verification of \eqref{LyapunovG}.\n\n\smallskip\n\nSince both $\Gamma_{L}$ and ${\mathbb G}$ satisfy the same Lyapunov \nequation \eqref{GammaL} which has a \nunique solution since $\sigma(Z) \cap \sigma(-Z^{*}) = \emptyset$, we \nconclude that ${\mathbb G} = \Gamma_{L}$. This completes the direct proof \nof the equivalence of conditions (1) and (2) in Corollary \ref{C:BTOA-NP} for the case \nthat $\mathfrak D = {\mathfrak D}_{L}$.\n\nTo make the connection with the kernel $K_{\Theta,J}$, we note that there \nis a standard way to identify a reproducing kernel Hilbert space ${\mathcal H}(K)$ of a \nparticular form with an operator range (see e.g.\ \cite{Sarason} \nor \cite{BB-HOT}). Specifically, let $M_{\Theta}$ be the \nmultiplication operator\n$$\n M_{\Theta} \colon f(\lambda) \mapsto \Theta(\lambda) f(\lambda)\n$$\nacting on $H^{2}_{p+m}(\Pi_{+})$, identify $J$ with $J \otimes \nI_{H^{2}(\Pi_{+})}$ acting on $H^{2}_{p+m}(\Pi_{+})$, and define $W \n\in {\mathcal L}(H^{2}_{p+m}(\Pi_{+}))$ by\n$$\n W = J - M_{\Theta} J (M_{\Theta})^{*}.\n$$\nFor $w \in \Pi_{+}$ and ${\mathbf y} \in {\mathbb C}^{p+m}$, let \n$k_{w,{\mathbf y}}(z) = \frac{1}{z + \overline{w}}\,{\mathbf y}$ be the kernel element \nassociated with the Szeg\H{o} kernel $k_{\rm Sz} \otimes I_{{\mathbb \nC}^{p+m}}$. 
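\nThe computation which follows uses only two standard facts about these kernel elements, which we record for the reader's convenience (suppressing the normalization constant relating the $H^{2}$-inner product to the kernel, which is fixed by the convention in \eqref{ad1}): the reproducing property\n$$\n\langle f, k_{w, {\mathbf y}} \rangle_{H^{2}_{p+m}(\Pi_{+})} = \langle f(w), {\mathbf y} \rangle_{{\mathbb C}^{p+m}},\n$$\nand its consequence\n$$\n(M_{\Theta})^{*} k_{w, {\mathbf y}} = k_{w, \Theta(w)^{*} {\mathbf y}},\n$$\nwhich follows since $\langle f, (M_{\Theta})^{*} k_{w, {\mathbf y}} \rangle = \langle (\Theta f)(w), {\mathbf y} \rangle = \langle f(w), \Theta(w)^{*} {\mathbf y} \rangle$ for all $f \in H^{2}_{p+m}(\Pi_{+})$. 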
One can verify\n$$\n W k_{w,{\mathbf y}} = K_{\Theta,J}( \cdot, w) {\mathbf y} \in {\mathcal H}(K_{\Theta, J}),\n$$\nand furthermore,\n\begin{align*}\n \langle W k_{w_{j},{\mathbf y}_{j}}, \, W k_{w_{i},{\mathbf y}_{i}} \n \rangle_{{\mathcal H}(K_{\Theta,J})} & =\n \langle K_{\Theta,J}(w_{i},w_{j}) {\mathbf y}_{j},\, {\mathbf y}_{i} \n \rangle_{{\mathbb C}^{p+m}} \\\n & = \langle W k_{w_{j}, {\mathbf y}_{j}}, \, k_{w_{i}, {\mathbf y}_{i}} \n \rangle_{H^{2}_{p+m}(\Pi_{+})}.\n\end{align*}\nAs $\Theta$ is rational and $M_{\Theta}$ is a $J$-isometry, one can see that \n$\operatorname{Ran} W$ is already closed. Hence we have \nthe concrete identification ${\mathcal H}(K_{\Theta,J}) = \operatorname{Ran} \nW$ with lifted inner product\n$$\n\langle W f, W g\rangle_{{\mathcal H}(K_{\Theta,J})} = \langle W f, g \n\rangle_{H^{2}_{p+m}(\Pi_{+})}.\n$$\nAs $M_{\Theta}$ is a $J$-isometry, the operator \n$M_{\Theta} J (M_{\Theta})^{*} J = M_{\Theta}(M_{\Theta})^{[*]}$ is \nthe $J$-selfadjoint projection onto $\Theta \cdot \nH^{2}_{p+m}(\Pi_{+})$ and $WJ = I - M_{\Theta} (M_{\Theta})^{[*]}$ is \nthe $J$-self-adjoint projection onto $H^{2}_{p+m}(\Pi_{+}) [-]_{J} \Theta \cdot \nH^{2}_{p+m}(\Pi_{+}) = {\mathcal M}_{{\mathfrak D}_{L}}^{[\perp H^{2}_{p+m}(\Pi_{+})]}$. We then see \nthat, for all $f,g \in H^{2}_{p+m}(\Pi_{+})$,\n$$\langle W J f, W J g \rangle_{{\mathcal H}(K_{\Theta,J})} =\n \langle W J f, J g \rangle_{H^{2}_{p+m}(\Pi_{+})} =\n\langle J \cdot WJ f, WJg \rangle_{H^{2}_{p+m}(\Pi_{+})},\n$$\ni.e., the identity map is a Kre\u{\i}n-space isomorphism between \n${\mathcal H}(K_{\Theta,J})$ and ${\mathcal M}_{{\mathfrak D}_{L}}^{[ \perp H^{2}_{p+m}(\Pi_{+})]}$ with the \n$J$-inner product.\nIn particular, we arrive at the equivalence of conditions (2) and (3) \nin Corollary \ref{C:BTOA-NP} for Case 1.\n\n\smallskip\n\n\noindent\n\textbf{Case 2: The general BTOA setting:} \nTo streamline formulas to come, we introduce two types of \ncontrol-like operators and two types of observer-like operators\nas follows (for fuller details and systems-theory motivation, we \nrefer to \cite{BR} for the discrete-time setting and \cite{Amaya} for \nthe continuous-time setting). Suppose that $(A,B)$ is an input pair of matrices \n(so $A$ has, say, \nsize $N \times N$ and $B$ has size $N \times n$). We assume that either $A$ is \nstable ($\sigma(A) \n\subset \Pi_{-}$) or $A$ is antistable ($\sigma(A) \subset \Pi_{+}$). 
\nIn case $\\sigma(A) \\subset \\Pi_{+}$, we define a \ncontrol-like operator as appeared in the Case 1 analysis\n$$\n{\\mathcal C}_{A,B} \\colon H^{2}_{n}(\\Pi_{+}) \\to {\\mathbb C}^{N}\n$$\nby\n\\begin{align*}\n {\\mathcal C}_{A,B} \\colon g & \\mapsto (Bg)^{\\wedge L}(A): = \\sum_{z \\in \\Pi_{+}} {\\rm \n Res}_{\\lambda = z} (\\lambda I - A)^{-1} B g(\\lambda) \\\\\n & = - \\frac{1}{2 \\pi} \\int_{-\\infty}^{\\infty} (iy I - A)^{-1} B \n g(iy)\\, dy.\n\\end{align*}\nIn case $\\sigma(A) \\subset \\Pi_{-}$, we define a complementary \ncontrol-like operator \n$$\n{\\mathcal C}^{\\times}_{A,B} \\colon H^{2}_{n}(\\Pi_{-}) \\to {\\mathbb C}^{N}\n$$\nby \n\\begin{align*} \n {\\mathcal C}^{\\times}_{A,B} \\colon g & \\mapsto (Bg)^{\\wedge L}(A): = \\sum_{z \n \\in \\Pi_{-}} {\\rm Res}_{\\lambda = z} (\\lambda I - A)^{-1} B g(\\lambda) \\\\\n & = \\frac{1}{2 \\pi} \\int_{-\\infty}^{\\infty} (iy I - A)^{-1} B \n g(iy)\\, dy.\n\\end{align*}\nSuppose next that $(C,A)$ is an output-pair, say of respective sizes \n$n \\times N$ and $N \\times N$, and that $A$ is either stable or \nantistable. In case $A$ is antistable ($\\sigma(A) \\subset \\Pi_{+}$), \nwe define the observer-like operator \n$$\n {\\mathcal O}_{C,A} \\colon {\\mathbb C}^{N} \\to H^{2}_{n}(\\Pi_{-})\n$$\nby\n$$\n{\\mathcal O}_{C,A} \\colon x \\mapsto C (\\lambda I - A)^{-1} x.\n$$\nIn case $A$ is stable (so $\\sigma(A) \\subset \\Pi_{-})$, then the \ncomplementary observer-like operator is given by the same formula \nbut maps to the complementary $H^{2}$ space:\n$$\n{\\mathcal O}^{\\times}_{C,A} \\colon {\\mathbb C}^{N} \\mapsto H^{2}_{n}(\\Pi_{+})\n$$\ngiven again by\n$$\n {\\mathcal O}^{\\times}_{C,A} \\colon x \\mapsto C (\\lambda I - A)^{-1} x.\n$$\nWe are primarily interested in the case where $A$ is antistable and \nwe consider the operators ${\\mathcal C}_{A,B} \\colon H^{2}_{n}(\\Pi_{+}) \\to \n{\\mathbb C}^{N}$ and ${\\mathcal O}_{C,A} \\colon {\\mathbb C}^{N} \\mapsto \nH^{2}_{n}(\\Pi_{-})$. However a straightforward exercise is to show \nthat the complementary operators come up when computing adjoints:\nfor $A$ antistable, $-A^{*}$ is stable and we have the formulas\n$$\n({\\mathcal O}_{C,A})^{*} = -{\\mathcal C}^{\\times}_{-A^{*}, C^{*}} \\colon \nH^{2}_{n}(\\Pi_{-}) \\mapsto {\\mathbb C}^{N}, \\quad\n({\\mathcal C}_{A,B})^{*} = {\\mathcal O}^{\\times}_{B^{*}, -A^{*}} \\colon {\\mathbb C}^{N} \n\\mapsto H^{2}_{n}(\\Pi_{+}).\n$$\n\nAssume now that \n${\\mathcal M}_{\\mathfrak D} \\subset L^{2}_{p+m}(\\Pi_{+})$ is defined as in \n\\eqref{cMrep} for a $\\Pi_{+}$-admissible interpolation data set \n${\\mathfrak D} = (U,V,W; Z, X, Y; \\Gamma)$. Thus $(U,W)$ and $(V,W)$ are \noutput pairs with $\\sigma(W) \\subset \\Pi_{+}$ and $(Z,X)$ and $(Z,Y)$ \nare input pairs with $\\sigma(Z) \\subset \\Pi_{+}$. 
\nAssume now that \n${\mathcal M}_{\mathfrak D} \subset L^{2}_{p+m}(i {\mathbb R})$ is defined as in \n\eqref{cMrep} for a $\Pi_{+}$-admissible interpolation data set \n${\mathfrak D} = (U,V,W; Z, X, Y; \Gamma)$. Thus $(U,W)$ and $(V,W)$ are \noutput pairs with $\sigma(W) \subset \Pi_{+}$ and $(Z,X)$ and $(Z,Y)$ \nare input pairs with $\sigma(Z) \subset \Pi_{+}$. We therefore have \nobserver-like and control-like operators\n\begin{align*}\n&{\mathcal O}_{V,W} \colon {\mathbb C}^{n_{W}} \to H^{2}_{p}(\Pi_{-}), \n\quad {\mathcal O}_{U,W} \colon {\mathbb C}^{n_{W}} \to H^{2}_{m}(\Pi_{-}), \\\n& {\mathcal C}_{Z,X} \colon H^{2}_{p}(\Pi_{+}) \to {\mathbb C}^{n_{Z}}, \quad\quad\n{\mathcal C}_{Z,Y} \colon H^{2}_{m}(\Pi_{+}) \to {\mathbb C}^{n_{Z}}\n\end{align*}\ndefined as above, as well as the observer-like and control-like operators\n$$\n {\mathcal O}_{\sbm{ V \\ U}, W} : = \begin{bmatrix} {\mathcal O}_{V,W} \\ {\mathcal O}_{U,W} \n \end{bmatrix},\quad \n {\mathcal C}_{Z, [ X \; \; -Y]} = \begin{bmatrix} \n {\mathcal C}_{Z,X} & - {\mathcal C}_{Z,Y} \end{bmatrix}.\n $$\nThen the adjoint operators have the form\n \begin{align*}\n&({\mathcal O}_{V,W})^{*} = -{\mathcal C}^{\times}_{-W^{*}, V^{*}} \colon H^{2}_{p}(\Pi_{-}) \to\n{\mathbb C}^{n_{W}}, \\ \n& ({\mathcal O}_{U,W})^{*} = -{\mathcal C}^{\times}_{-W^{*}, U^{*}} \colon \nH^{2}_{m}(\Pi_{-}) \to {\mathbb C}^{n_{W}}, \\\n& ({\mathcal C}_{Z,X})^{*} = {\mathcal O}^{\times}_{X^{*}, -Z^{*}} \colon {\mathbb \nC}^{n_{Z}} \to H^{2}_{p}(\Pi_{+}), \\\n& ({\mathcal C}_{Z,Y})^{*} = {\mathcal O}^{\times}_{Y^{*}, -Z^{*}} \colon {\mathbb C}^{n_{Z}}\n\to H^{2}_{m}(\Pi_{+}) \n\end{align*}\nand are given explicitly by: \n\begin{align*}\n & ({\mathcal O}_{V,W})^{*} \colon g_{1} \mapsto \n -\frac{1}{2 \pi} \int_{-\infty}^{\infty} (iy I + W^{*})^{-1} V^{*} \n g_1(iy)\, dy, \\\n & ({\mathcal O}_{U,W})^{*} \colon g_{2} \mapsto \n -\frac{1}{2 \pi} \int_{-\infty}^{\infty} (iy I + W^{*})^{-1} \n U^{*}g_2(iy)\, dy, \\\n& ({\mathcal C}_{Z,X})^{*} \colon x \mapsto X^{*} (\lambda I + Z^{*})^{-1} x,\n\quad ({\mathcal C}_{Z,Y})^{*} \colon x \mapsto Y^{*} (\lambda I + Z^{*})^{-1} x.\n \end{align*}\nFurthermore one can check via computations as in the derivation of \n\eqref{bbG} above that the $J$-observability and $J$-controllability gramians\n\begin{align*} & {\mathcal G}^{J}_{Z,\sbm{X & -Y}} : = \n {\mathcal C}_{Z,X} {\mathcal C}_{Z,X}^{*} - {\mathcal C}_{Z,Y} {\mathcal C}_{Z,Y}^{*} =: {\mathcal G}_{Z,X} - \n {\mathcal G}_{Z,Y}, \\\n & {\mathcal G}^{J}_{\sbm{ V \\ U}, W} : = {\mathcal O}^{*}_{V,W} {\mathcal O}_{V,W} - \n {\mathcal O}_{U,W}^{*} {\mathcal O}_{U,W} =: {\mathcal G}_{V,W} - {\mathcal G}_{U,W}\n\end{align*}\nsatisfy the respective Lyapunov equations\n\begin{align*}\n & {\mathcal G}^{J}_{Z,\sbm{ X & -Y}} Z^{*} + Z {\mathcal G}^{J}_{Z,\sbm{ X & -Y}} \n = X X^{*} - Y Y^{*}, \\\n & {\mathcal G}^{J}_{\sbm{ V \\ U}, W} W + W^{*} {\mathcal G}^{J}_{\sbm{V \\ U}, W} = \n V^{*}V - U^{*} U.\n\end{align*}\nHence, by the uniqueness of such solutions and the characterizations \nof $\Gamma_{L}$ and $\Gamma_{R}$ in \eqref{GammaL}, \eqref{GammaR},\nwe get \n\begin{equation} \label{GammaLRid}\n {\mathcal G}^{J}_{Z, \sbm{X & -Y}} = \Gamma_{L}, \quad\n {\mathcal G}^{J}_{\sbm{V \\ U}, W} = -\Gamma_{R}.\n\end{equation}\nThen the representation \eqref{cMrep} for ${\mathcal M}_{\mathfrak D}$ can be rewritten more \nsuccinctly as\n\begin{align} \n {\mathcal M}_{\mathfrak D} = &\left\{ {\mathcal O}_{\sbm{V \\ U}, W} x + \sbm{f_{1} \n \\ f_{2}} \colon x \in {\mathbb C}^{n_{W}} \text{ and } \sbm{ \n f_{1} \\ f_{2} } \in H^{2}_{p+m}(\Pi_{+}) \right. \notag \\\n& \left. 
\\quad \\quad \\text{such that } {\\mathcal C}_{Z,\\sbm{ X & -Y}} \\sbm{ f_{1} \\\\ f_{2}} = \n \\Gamma x \\right\\}. \n\\label{cMrep'}\n\\end{align}\nIt is readily seen from the latter formula that \n \\begin{align}\nP_{H^{2}_{p+m}(\\Pi_-)} {\\mathcal M}_{\\mathfrak D}&=\\operatorname{Ran}{\\mathcal O}_{\\sbm{ V \\\\ U}, W},\n\\label{ad8} \\\\\n{\\mathcal M}_{\\mathfrak D} \\cap H^{2}_{p+m}(\\Pi_{+}) &=\n\\operatorname{Ker} {\\mathcal C}_{Z, \\sbm{ X & -Y}}, \\notag\n\\end{align}\nand therefore, \n$$\n{\\mathcal M}_{\\mathfrak D} \\cap \\sbm{ H^{2}_{p}(\\Pi_{+}) \\\\ 0 } =\n\\sbm{ \\operatorname{Ker} {\\mathcal C}_{Z, X} \\\\ 0 }, \\quad\n{\\mathcal M}_{\\mathfrak D} \\cap \\sbm{0 \\\\ H^{2}_{m}(\\Pi_{+})} =\n\\sbm{ 0 \\\\ \\operatorname{Ker} {\\mathcal C}_{Z, Y} }.\n$$\n\n\\begin{lemma} \\label{L:cMDperp} If ${\\mathcal M}_{\\mathfrak D}$ is given by \n \\eqref{cMrep'}, then the $J$-orthogonal complement \n${\\mathcal M}_{\\mathfrak D}^{[\\perp]} = L^{2}_{p+m}(i {\\mathbb R}) [-]_{J} {\\mathcal M}_{\\mathfrak D}$\nwith respect to the space $L^{2}_{p+m}(i {\\mathbb R})$\nis given by\n\\begin{align} \n {\\mathcal M}_{\\mathfrak D}^{[\\perp]} = &\\left\\{ J( {\\mathcal C}_{Z, \\sbm{ X & \n -Y}})^{*} y + \\sbm{ g_{1} \\\\ g_{2}} \\colon y \\in {\\mathbb \n C}^{n_{Z}} \\text{ and } \\sbm{ g_{1} \\\\ g_{2}} \\in \n H^{2}_{p+m}(\\Pi_{-}) \\right. \\notag \\\\\n& \\left. \\quad \\quad \\text{such that } \\; ({\\mathcal O}_{V,W})^{*} g_{1} - ({\\mathcal O}_{U,W})^{*} g_{2} = \n-\\Gamma^{*} y \\right\\}.\n\\label{cMperprep'}\n\\end{align}\n \\end{lemma}\n \n\\begin{proof} Since ${\\mathcal M}_{\\mathfrak D}^{[\\perp]}$ is\n $J$-orthogonal to ${\\mathcal M}_{\\mathfrak D} \\cap H^{2}_{p+m}(\\Pi_{+}) =\n \\operatorname{Ker} {\\mathcal C}_{Z, \\sbm{ X & -Y}}$, it follows that \n$P_{H_{p+m}^{2}(\\Pi_+)} {\\mathcal M}_{\\mathfrak D}^{[\\perp]}$ is also \n $J$-orthogonal to $\\operatorname{Ker} {\\mathcal C}_{Z, \\sbm{ X & -Y}}$. 
Hence\n $P_{H^{2}_{p+m}(\Pi_+)} {\mathcal M}_{\mathfrak D}^{[\perp]} \subset J \n \operatorname{Ran} ({\mathcal C}_{Z, \sbm{X & -Y}})^{*}$ and \n each ${\mathbf g}\in{\mathcal M}_{\mathfrak D}^{[\perp]}$ has the form \n$$\n{\mathbf g} = J({\mathcal C}_{Z, \sbm{X & -Y}})^{*}y + \sbm{ g_{1} \\ g_{2}}\quad\mbox{with}\quad \n y \in {\mathbb C}^{n_{Z}} \; \; \mbox{and} \; \; \sbm{g_{1} \\ g_{2}} \in H^{2}_{p+m}(\Pi_{-}).\n$$\n To determine when such an element belongs to \n ${\mathcal M}_{\mathfrak D}^{[\perp]}$, we compute its $J$-inner product against a generic element of ${\mathcal M}_{\mathfrak D}$: for all \n $\sbm{f_{1} \\ f_{2}} \in H^{2}_{p+m}(\Pi_{+})$ and $x \in \n {\mathbb C}^{n_{Z}}$ such that ${\mathcal C}_{Z, X} f_{1} - {\mathcal C}_{Z, Y} \n f_{2} = \Gamma x$, we must have\n \begin{align*}\n 0 & = \left\langle J \left( J ({\mathcal C}_{Z, \sbm{ X & -Y}})^{*} y + \n \sbm{ g_{1} \\ g_{2}} \right), {\mathcal O}_{\sbm{V \\ U}, W} x + \sbm{ \n f_{1} \\ f_{2}} \right\rangle_{L^{2}_{p+m}(i {\mathbb R})} \\\n & = \langle y, {\mathcal C}_{Z,X} f_{1} - {\mathcal C}_{Z,Y} f_{2} \n \rangle_{{\mathbb C}^{n_{Z}}} +\n \langle ({\mathcal O}_{V,W})^{*} g_{1} - ({\mathcal O}_{U,W})^{*} g_{2}, x \n \rangle_{{\mathbb C}^{n_{W}}} \\\n & = \langle y, \Gamma x \rangle_{{\mathbb C}^{n_{Z}}} + \n \langle ({\mathcal O}_{V,W})^{*} g_{1} - ({\mathcal O}_{U,W})^{*} g_{2}, x \n \rangle_{{\mathbb C}^{n_{W}}} \n\end{align*} \nwhich leads to the coupling condition\n$({\mathcal O}_{V,W})^{*} g_{1} - ({\mathcal O}_{U,W})^{*} g_{2} = - \Gamma^{*} y$ in \eqref{cMperprep'}.\n\end{proof}\nAs a consequence of the representation \eqref{cMperprep'} we see that\n\begin{align}\n& P_{H^{2}_{p+m}(\Pi_{+})} {\mathcal M}_{\mathfrak D}^{[\perp]} =\n\operatorname{Ran} J ({\mathcal C}_{Z,\sbm{ X & -Y}})^{*}, \notag\\\n& {\mathcal M}_{\mathfrak D}^{[\perp]} \cap H^{2}_{p+m}(\Pi_{-}) =\n\operatorname{Ker} \begin{bmatrix} ({\mathcal O}_{V,W})^{*} & - \n({\mathcal O}_{U,W})^{*} \end{bmatrix}\n\label{ad9}\n\end{align}\nand therefore,\n$$\n {\mathcal M}_{\mathfrak D}^{[\perp]} \cap \sbm{ H^{2}_{p}(\Pi_{-}) \\ 0 } =\n\sbm{\operatorname{Ker} ({\mathcal O}_{V,W})^{*} \\ 0},\quad \n {\mathcal M}_{\mathfrak D}^{[\perp]} \cap \sbm{ 0 \\ H^{2}_{m}(\Pi_{-})} =\n\sbm{ 0 \\ \operatorname{Ker} ({\mathcal O}_{U,W})^{*}}.\n$$\nIn this section we shall impose an additional assumption:\n\smallskip\n\n\noindent\n\textbf{Nondegeneracy assumption:}\n{\sl Not only ${\mathcal M}_{\mathfrak D}$ but also ${\mathcal M}_{\mathfrak D} \cap H^{2}_{p+m}(\Pi_{+})$ \nand ${\mathcal M}_{\mathfrak D}^{[\n\perp]} \cap H^{2}_{p+m}(\Pi_{-})$ (see \eqref{ad8} and \n\eqref{ad9}) are regular subspaces (i.e., have good Kre\u{\i}n-space \northogonal complements---as explained in Section \ref{S:Kreinprelim}) of the\nKre\u{\i}n space $L^{2}_{p+m}(i {\mathbb R})$ (with the $J$-inner product).}\n\n\smallskip\n\nWe proceed via a string of lemmas.\n\n\begin{lemma} \label{L:decom}\n{\rm (1)} The space ${\mathcal M}_{\mathfrak D}$ given in \eqref{cMrep'} decomposes as\n\begin{equation} \label{Mdecom}\n {\mathcal M}_{\mathfrak D} = \widehat{\mathbb G}_{T} [+] {\mathcal M}_{{\mathfrak D},1} [+] \n {\mathcal M}_{{\mathfrak D},2},\n\end{equation}\nwhere \n\begin{align}\n \widehat{\mathbb G}_{T} &= {\mathcal M}_{\mathfrak D} [-]_{J} \n \operatorname{Ker} {\mathcal C}_{Z, \sbm{ X
& -Y}}, \\notag \\\\\n {\\mathcal M}_{{\\mathfrak D},1} &= \\operatorname{Ker} {\\mathcal C}_{Z, \\sbm{ X & -Y}}[-]_{J} \n\\left(\\sbm{\\operatorname{Ker} {\\mathcal C}_{Z, X}\\\\ 0} \\oplus \n\\sbm{0\\\\ \\operatorname{Ker} {\\mathcal C}_{Z, Y}}\\right), \\notag \\\\\n {\\mathcal M}_{{\\mathfrak D}, 2} &=\n\\sbm{\\operatorname{Ker} {\\mathcal C}_{Z, X}\\\\ 0} \\oplus \\sbm{0\\\\ \\operatorname{Ker} {\\mathcal C}_{Z, Y}}.\n \\label{def012}\n\\end{align}\nMore explicitly, the operator $T: \\operatorname{Ran}\n{\\mathcal O}_{\\sbm{ V \\\\ U}, W}\\to \\operatorname{Ran} J ({\\mathcal C}_{Z,\\sbm{X & -Y}})^{*}$ \nis uniquely determined by the identity\n\\begin{equation} \\label{TGamma}\n {\\mathcal C}_{Z, \\sbm{ X & -Y}} T {\\mathcal O}_{\\sbm{ V \\\\ U}, W} = -\\Gamma,\n\\end{equation}\nand $\\widehat{\\mathbb G}_{T}$ is the graph space for $-T$ parametrized as\n\\begin{align}\n\\widehat{\\mathbb G}_{T} & = \\left\\{- {\\mathbf f} + T {\\mathbf f} \\colon \n{\\mathbf f}\\in \\operatorname{Ran}\n{\\mathcal O}_{\\sbm{V \\\\ U}, W}\\right\\} \\notag \\\\\n& = \\left\\{- {\\mathcal O}_{\\sbm{V \\\\ U}, W} x + T {\\mathcal O}_{\\sbm{V \\\\ U}, W} x \\colon\nx \\in {\\mathbb C}^{n_{W}} \\right\\},\n\\label{graphT}\n\\end{align}\nwhile ${\\mathcal M}_{{\\mathfrak D},1}$ is given\nexplicitly by\n\\begin{equation} \\label{cMD1}\n {\\mathcal M}_{{\\mathfrak D},1} = \\operatorname{Ran} \\begin{bmatrix}\n ( {\\mathcal C}_{Z,X})^{*} ({\\mathcal G}_{Z,X})^{-1} {\\mathcal G}_{Z,Y} \\\\ ({\\mathcal C}_{Z,Y})^{*} \\end{bmatrix}.\n\\end{equation}\n\n{\\rm (2)} Dually, the subspace ${\\mathcal M}_{\\mathfrak D}^{[\\perp]} =\nL^{2}_{p+m}(i {\\mathbb R}) [-]_{J} {\\mathcal M}_{\\mathfrak D}$ \n decomposes as\n \\begin{equation} \\label{Mperpdecom}\n {\\mathcal M}_{\\mathfrak D}^{[\\perp]} = {\\mathbb G}_{T^{[*]}} [+] \n ({\\mathcal M}_{\\mathfrak D}^{[\\perp]})_{1} [+] ( {\\mathcal M}_{\\mathfrak D}^{[\\perp]})_{2},\n \\end{equation}\nwhere\n \\begin{align}\n & {\\mathbb G}_{T^{[*]}} = {\\mathcal M}_{\\mathfrak D}^{[\\perp]} \n [-]_{J} \\operatorname{Ker} \\begin{bmatrix} ({\\mathcal O}_{V,W})^{*} & - \n ({\\mathcal O}_{U,W})^{*} \\end{bmatrix},\\notag \\\\\n & ({\\mathcal M}_{\\mathfrak D}^{[ \\perp]})_{1} = \n \\operatorname{Ker}\\begin{bmatrix} ({\\mathcal O}_{V,W})^{*} & - \n ({\\mathcal O}_{U,W})^{*} \\end{bmatrix}\n[-]_{J} \\left( \\sbm{ \\operatorname{Ker} ({\\mathcal O}_{V,W})^{*} \\\\ 0} \n\\oplus \\sbm{ 0 \\\\ \\operatorname{Ker} \n({\\mathcal O}_{U,W})^{*} } \\right),\\notag\\\\ \n & ({\\mathcal M}_{\\mathfrak D}^{[\\perp]})_{2} =\n \\sbm{ \\operatorname{Ker} ({\\mathcal O}_{V,W})^{*} \\\\ 0} \\oplus \\sbm{ 0 \\\\ \\operatorname{Ker} \n ({\\mathcal O}_{U,W})^{*}}. 
\n\\label{def012perp}\n \\end{align}\nMore explicitly,\n \\begin{align*}\n {\\mathbb G}_{T^{[*]}} & = \\left\\{ {\\mathbf g} + T^{[*]} {\\mathbf g} \\colon {\\mathbf g}\n \\in \\operatorname{Ran} J ({\\mathcal C}_{Z, \\sbm{ X & -Y}})^{*} \\right\\} \\\\\n & =\n \\left\\{ J({\\mathcal C}_{Z, \\sbm{ X & -Y}})^{*} x + T^{[*]} J ({\\mathcal C}_{Z, \\sbm{ X\n & -Y}})^{*} x \\colon x \\in {\\mathbb C}^{n_{Z}}\\right\\}\n \\end{align*}\nwhere $T^{[*]} = J T^{*} J\\colon \\operatorname{Ran} J ({\\mathcal C}_{Z, \\sbm{X & -Y}})^{*}\n\\to \\operatorname{Ran} {\\mathcal O}_{\\sbm{ V \\\\ U}, W}$ is the $J$-adjoint of $T$, and\n \\begin{equation} \\label{cMDperp1}\n ({\\mathcal M}_{\\mathfrak D}^{[ \\perp]})_{1} = \\operatorname{Ran} \n \\begin{bmatrix} {\\mathcal O}_{V,W} \\\\ {\\mathcal O}_{U,W}({\\mathcal G}_{U,W})^{-1} {\\mathcal G}_{V,W} \\end{bmatrix}. \n \\end{equation}\n\\end{lemma}\n\n\\begin{proof} By the Nondegeneracy Assumption we can define subspaces\n\\eqref{def012} and \\eqref{def012perp}, so that ${\\mathcal M}_{\\mathfrak D}$\nand ${\\mathcal M}_{\\mathfrak D}^{[\\perp]}$ decompose as in \\eqref {Mdecom} and \n\\eqref{Mperpdecom}, respectively.\n\n\\smallskip\n\nGiven an element ${\\mathbf g} \\in P_{H^{2}_{p+m}(\\Pi_-)} {\\mathcal M}_{\\mathfrak D}$,\nthere is an ${\\mathbf f}\\in H^{2}_{p+m}(\\Pi_{+})$ so that $-{\\mathbf g} + {\\mathbf f} \\in {\\mathcal M}_{\\mathfrak D}$; \nfurthermore, one can choose \n$$\n{\\mathbf f}\\in H^{2}_{p+m}(\\Pi_{+}) [-]_{J} ({\\mathcal M}_{\\mathfrak D} \\cap H^{2}_{p+m}(\\Pi_{+}))=\nP_{H^{2}_{p+m}(\\Pi_{+})}{\\mathcal M}_{\\mathfrak D}^{[\\perp]}. \n$$\nIf ${\\mathbf f}'$ is another such choice, \n then $(-{\\mathbf g} + {\\mathbf f}) - (-{\\mathbf g} + {\\mathbf f}') = {\\mathbf f} - {\\mathbf f}'$ is in\n ${\\mathcal M}_{\\mathfrak D} \\cap H^{2}_{p+m}(\\Pi_{+})$ as well as in \n $H^{2}_{p+m}(\\Pi_{+}) [-]_{J} ({\\mathcal M}_{\\mathfrak D} \\cap H^{2}_{p+m}(\\Pi_{+}))$. By \n the Nondegeneracy Assumption, we conclude that ${\\mathbf f} = {\\mathbf f}'$. \n Hence there is a well-defined map ${\\mathbf g} \\mapsto {\\mathbf f}$ defining a \n linear operator $T$ from \n$$\nP_{H^{2}_{p+m}(\\Pi_-)} {\\mathcal M}_{\\mathfrak D}=\\operatorname{Ran}{\\mathcal O}_{\\sbm{ V \\\\ U}, W}\n\\quad\\mbox{into}\\quad\nP_{H^{2}_{p+m}(\\Pi_{+})}{\\mathcal M}_{\\mathfrak D}^{[\\perp]}=\n\\operatorname{Ran} J ({\\mathcal C}_{Z,\\sbm{X & -Y}})^{*}\n$$\n(see \\eqref{ad8} and \\eqref{ad9}).\n In this way we arrive at a well-defined operator $T$ so that \n $\\widehat{\\mathbb G}_{T}$ as in \\eqref{graphT} is equal to the subspace \n (see \\eqref{def012})\n$$\n{\\mathcal M}_{\\mathfrak D} [-]_{J} \\left({\\mathcal M}_{\\mathfrak D} \\cap \nH^{2}_{p+m}(\\Pi_{+})\\right)\n={\\mathcal M}_{\\mathfrak D} [-]_{J} \\operatorname{Ker} {\\mathcal C}_{Z, \\sbm{ X & -Y}}.\n$$\nTo check that $T$ is also given by \n \\eqref{TGamma}, combine the fact that \n $$\n -{\\mathcal O}_{\\sbm{V \\\\ U}, W} x + T {\\mathcal O}_{\\sbm{V \\\\ U}, W} x\\in {\\mathcal M}_{\\mathfrak D}\n $$\n together with the characterization \\eqref{cMrep'} for \n ${\\mathcal M}_{\\mathfrak D}$ to \n deduce that\n $$\n {\\mathcal C}_{Z, \\sbm{ X & -Y}} \\cdot T {\\mathcal O}_{\\sbm{V \\\\ U}, W} x = \n -\\Gamma x\n $$\n for all $x$ to arrive at \\eqref{TGamma}. 
\n \n\\smallskip\n\n To get the formula \\eqref{cMD1}, we first note that\n\\begin{equation} \\label{DecoupledCZXY}\n H^{2}_{p+m}(\\Pi_{+}) [-]_{J}\\sbm{\\operatorname{Ker} {\\mathcal C}_{Z, X} \\\\ \n \\operatorname{Ker} {\\mathcal C}_{Z, Y}}\n= \\sbm{ \\operatorname{Ran} \n ({\\mathcal C}_{Z,X})^{*} \\\\ \\operatorname{Ran} ({\\mathcal C}_{Z,Y})^{*} }.\n\\end{equation}\nThe space ${\\mathcal M}_{{\\mathfrak D}, 1}$ is the intersection of this space \nwith ${\\mathcal M}_{\\mathfrak D} \\cap H^{2}_{p+m}(\\Pi_{+})$.\nTherefore, it consists of elements of the form $\\sbm{ ({\\mathcal C}_{Z,X})^{*} y_{1} \\\\ \n({\\mathcal C}_{Z,Y})^{*} y_{2} }$ \nsubject to condition\n$$\n 0 = \\begin{bmatrix} {\\mathcal C}_{Z,X} & - {\\mathcal C}_{Z,Y} \\end{bmatrix}\n \\begin{bmatrix} ({\\mathcal C}_{Z,X})^{*} y_{1} \\\\ ({\\mathcal C}_{Z,X})^{*} y_{2} \n \\end{bmatrix} = {\\mathcal C}_{Z,X} ({\\mathcal C}_{Z,X})^{*} y_{1} - {\\mathcal C}_{Z,Y} \n ({\\mathcal C}_{Z,Y})^{*} y_{2}.\n$$\nBy the $\\Pi_{+}$-admissibility requirement on the data set ${\\mathfrak \nD}$, the gramian ${\\mathcal G}_{Z,X}: = {\\mathcal C}_{Z,X} ({\\mathcal C}_{Z,X})^{*}$ is \ninvertible and hence we may solve this last equation for $y_{1}$:\n$$\n y_{1} = {\\mathcal G}_{Z,X}^{-1} {\\mathcal C}_{Z,Y} ({\\mathcal C}_{Z,Y})^{*} y_{2}.\n$$\nWith this substitution, the element $\\sbm{ ({\\mathcal C}_{Z,X})^{*} y_{1} \\\\\n({\\mathcal C}_{Z,Y})^{*} y_{2} }$ \nof the $J$-orthogonal complement space \\eqref{DecoupledCZXY} assumes \nthe form\n$$\n \\begin{bmatrix} ({\\mathcal C}_{Z,X})^{*} {\\mathcal G}_{Z,X}^{-1} {\\mathcal C}_{Z,Y} \n ({\\mathcal C}_{Z,Y})^{*} y_{2} \\\\\n ({\\mathcal C}_{Z,Y})^* y_{2} \\end{bmatrix}\n$$\nand we have arrived at the formula \\eqref{cMD1} for ${\\mathcal M}_{{\\mathfrak \nD},1}$.\n\n\\smallskip\n\nFor the dual case (2), similar arguments starting with the \nrepresentation \\eqref{cMperprep'} for ${\\mathcal M}^{[\\perp]}_{\\mathfrak D}$ show that \nthere is an operator $T^{\\times}$ from \n$\\operatorname{Ran} J ({\\mathcal C}_{Z, \\sbm{ X & -Y}})^{*}$ into \n${\\mathcal M}_{\\mathfrak D}^{[\\perp]} [-]_{J} \\left({\\mathcal M}_{\\mathfrak D}^{[\\perp]} \n\\cap H^{2}_{p+m}(\\Pi_{-}) \\right)$ so that\n$$\n {\\mathcal M}_{\\mathfrak D}^{[\\perp]} [-]_{J} \\left( {\\mathcal M}_{\\mathfrak D}^{[\\perp]} \n \\cap H^{2}_{p+m}(\\Pi_{-})\\right) = (I + T^{\\times}) \n \\operatorname{Ran} J ({\\mathcal C}_{Z, \\sbm{ X & -Y}})^{*}.\n$$\nFrom the characterization \\eqref{cMperprep'} of the space \n${\\mathcal M}_{\\mathfrak D}^{[\\perp]}$ we see that\nthe condition \n$$\n J ({\\mathcal C}_{Z, \\sbm{ X & -Y}})^{*}y + T^{\\times} J ({\\mathcal C}_{Z, \\sbm{ X & \n-Y}})^{*}y \\in {\\mathcal M}_{\\mathfrak D}^{[\\perp]}\n$$\nrequires that, for all $y \\in {\\mathbb C}^{n_{Z}}$,\n$$\n\\begin{bmatrix} ({\\mathcal O}_{V,W})^{*} & - ({\\mathcal O}_{U,W})^{*} \\end{bmatrix} \n T^{\\times} \\begin{bmatrix} ({\\mathcal C}_{Z,X})^{*} \\\\ ({\\mathcal C}_{Z,Y})^{*} \n \\end{bmatrix} y = - \\Gamma^{*}y.\n$$\nCancelling off the vector $y$ and rewriting as an operator equation \nthen gives:\n\\begin{align*}\n & \\begin{bmatrix} ({\\mathcal O}_{V,W})^{*} & - ({\\mathcal O}_{U,W})^{*} \\end{bmatrix} \n T^{\\times} \\begin{bmatrix} ({\\mathcal C}_{Z,X})^{*} \\\\ ({\\mathcal C}_{Z,Y})^{*} \n \\end{bmatrix} \\\\\n & \\quad \\quad =\n\\begin{bmatrix} ({\\mathcal O}_{V,W})^{*} & ({\\mathcal O}_{U,W})^{*} \\end{bmatrix} J \n T^{\\times} J \\begin{bmatrix} ({\\mathcal C}_{Z,X})^{*} \\\\ (-{\\mathcal C}_{Z,Y})^{*} \n \\end{bmatrix} \\\\\n & 
\\quad \\quad = ({\\mathcal O}_{\\sbm{V \\\\ U}, W})^{*} J T^{\\times} J ({\\mathcal C}_{Z, \\sbm{ X & \n -Y}})^{*} = -\\Gamma^{*}.\n\\end{align*}\nTaking adjoints of both sides of the identity \\eqref{TGamma} \nsatisfied by $T$, we see that \n$$\n ({\\mathcal O}_{\\sbm{V \\\\ U}, W} )^{*} T^{*} ( {\\mathcal C}_{Z, \\sbm{ X & -Y}})^{*} = \n -\\Gamma^{*}.\n$$\nSince $({\\mathcal O}_{\\sbm{V \\\\ U}, W} )^{*}$ is injective on \nthe range space of $T^{\\times}$ or $JT^{*}J$ and $({\\mathcal C}_{Z, \\sbm{ X & \n-Y}})^{*}$ maps onto the domain space of $T^{\\times}$ or $T^{*}$, it \nfollows that $T^{\\times} = J T^{*} J = T^{[*]}$.\nThe remaining points in statement (2) of the Lemma follow in much the \nsame way as the corresponding points in statement (1).\n\\end{proof}\n\\begin{lemma} \\label{L:MperpK}\n{\\rm (1)} With ${\\mathcal K}$ as in \\eqref{defcK}, the subspace \\eqref{cPdata} decomposes as\n\\begin{equation} \\label{MperpK}\n {\\mathcal M}_{\\mathfrak D}^{[\\perp {\\mathcal K}]} = {\\mathbb G}_{T^{[*]}} \n [+] ({\\mathcal M}_{\\mathfrak D}^{[\\perp]})_{1} [+] \\sbm{ \n \\operatorname{Ker} ({\\mathcal O}_{V,W})^{*} \\\\ 0 }.\n\\end{equation}\nIn particular, ${\\mathcal M}_{\\mathfrak D}^{[\\perp {\\mathcal K}]}$ is $J$-positive if and only \nif its subspace \n$$\n ({\\mathcal M}_{\\mathfrak D}^{[\\perp {\\mathcal K}]})_{0}:= {\\mathbb G}_{T^{[*]}} \n [+] ({\\mathcal M}_{\\mathfrak D}^{[\\perp]})_{1}\n$$\nis $J$-positive.\n\n\\smallskip\n\n{\\rm (2)} Dually, define a space ${\\mathcal K}' \\subset L^{2}_{p+m}(i{\\mathbb R})$ \nby\n\\begin{equation} \\label{defcK'}\n {\\mathcal K}' = \\sbm{ H^{2}_{p}(\\Pi_{-}) \\oplus \\operatorname{Ran} \n ({\\mathcal C}_{Z,X})^{*} \\\\ L^{2}_{m}(i {\\mathbb R}) }.\n\\end{equation}\nThen ${\\mathcal M}_{\\mathfrak D}^{[ \\perp]} \\subset {\\mathcal K}'$ and the space\n$$\n({\\mathcal M}_{\\mathfrak D}^{[ \\perp]})^{[ \\perp {\\mathcal K}']}: = {\\mathcal K}' [-]_{J} {\\mathcal M}_{\\mathfrak D}^{[ \\perp]} \n= {\\mathcal K}' \\cap {\\mathcal M}_{\\mathfrak D}\n$$\nis given by\n$$\n ({\\mathcal M}^{[\\perp]})^{[ \\perp {\\mathcal K}']} = \\widehat{\\mathbb G}_{T} [+] \n{\\mathcal M}_{{\\mathfrak D},1} [+] \\sbm{ 0 \\\\ \\operatorname{Ker} {\\mathcal C}_{Z,Y} }.\n$$\n In particular, $({\\mathcal M}_{\\mathfrak D}^{[\\perp]})^{[ \\perp {\\mathcal K}']}$ is \n $J$-negative if and only if its subspace\n$$ \n( ({\\mathcal M}_{\\mathfrak D}^{[\\perp]})^{[ \\perp {\\mathcal K}']})_{0}: = {\\mathbb G}_{T} [+] \n{\\mathcal M}_{{\\mathfrak D}, 1}\n$$\nis $J$-negative.\n\n \\end{lemma}\n \n\\begin{proof}\n By definition, ${\\mathcal M}_{\\mathfrak D}^{[ \\perp {\\mathcal K}]} = {\\mathcal K} \\cap \n {\\mathcal M}_{\\mathfrak D}^{[ \\perp ]}$, where ${\\mathcal M}_{\\mathfrak D}^{[ \\perp\n ]}$ is given by \\eqref{Mperpdecom} and where, due to \\eqref{defcK} and \n\\eqref{cMrep}, ${\\mathcal K} =\\sbm{ L^{2}(i {\\mathbb R}) \\\\ \\operatorname{Ran} {\\mathcal O}_{U,W} \n\\oplus H^{2}_{m}(\\Pi_{+})}$. 
Note that \n $$\n{\mathbb G}_{T^{[*]}} \subset {\mathcal K}, \quad ({\mathcal M}_{\mathfrak D}^{[ \n\perp]})_{1} \subset H^{2}_{p+m}(\Pi_{+}) \subset {\mathcal K},\n$$\nwhile\n$$\n ({\mathcal M}_{\mathfrak D}^{[ \perp]})_{2} \cap {\mathcal K} =\n \sbm{ \operatorname{Ker} ({\mathcal O}_{V,W})^{*} \\ \operatorname{Ker} \n ({\mathcal O}_{U,W})^{*}} \cap \sbm{ L^{2}_{p}(i{\mathbb R}) \\ \n \operatorname{Ran} {\mathcal O}_{U,W} \oplus H^{2}_{m}(\Pi_{+}) } = \n\sbm{ \operatorname{ Ker } ({\mathcal O}_{V,W})^{*} \\ 0 }.\n$$\nPutting the pieces together leads to the decomposition \eqref{MperpK}.\nSince the $J$-orthogonal summand $\sbm{ \operatorname{Ker} \n({\mathcal O}_{V,W})^{*} \\ 0 }$ is clearly $J$-positive, it follows \nthat ${\mathcal M}_{\mathfrak D}^{[\perp {\mathcal K}]}$ is $J$-positive if and only if \n${\mathbb G}_{T^{[*]}} [+] ({\mathcal M}_{\mathfrak D}^{[ \perp \n]})_{1}$ is $J$-positive.\nStatement (2) follows in a similar way.\n\end{proof}\n\n\begin{lemma} \label{L:posnegsubspaces}\n{\rm (1)} The subspace ${\mathbb G}_{T^{[*]}}$ is $J$-positive if and only if \n$I + T T^{[*]}$ is $J$-positive on the subspace\n$P_{H^{2}_{p+m}(\Pi_{+})}{\mathcal M}_{\mathfrak D}^{[\perp]} = \n\operatorname{Ran} J ({\mathcal C}_{Z, \sbm{ X & -Y }})^{*}$.\n\n{\rm (2)} The subspace $({\mathcal M}_{\mathfrak D}^{[ \perp ]})_{1}$ is \n$J$-positive if and only if \nthe subspace \\ $P_{H^{2}_{p+m}(\Pi_-)} {\mathcal M}_{\mathfrak D} = \n\operatorname{Ran} {\mathcal O}_{\sbm{ V \\ U}, W}$\n is $J$-negative.\n\n{\rm (3)} The subspace $\widehat {\mathbb G}_{T}$ is $J$-negative if and \nonly if $I + T^{[*]} T$ is a $J$-negative operator on the subspace \n$P_{H^{2}_{p+m}(\Pi_-)} {\mathcal M}_{\mathfrak D} = \operatorname{Ran} \n{\mathcal O}_{\sbm{ V \\ U},W}$.\n\n\smallskip\n\n{\rm (4)} The subspace ${\mathcal M}_{{\mathfrak D}, 1}$ is $J$-negative if and only \nif the subspace \\\n$P_{H^{2}_{p+m}(\Pi_+)} {\mathcal M}_{\mathfrak D}^{[ \perp ]} = \n\operatorname{Ran} J ({\mathcal C}_{Z, \sbm{ X & -Y}})^{*}$\nis $J$-positive.\n\end{lemma}\n \n\begin{proof}\nTo prove (1), note that ${\mathbb G}_{T^{[*]}}$ being a $J$-positive \nsubspace means that\n$$\n \left\langle \begin{bmatrix} I \\ T^{[*]} \end{bmatrix} x, \, \n \begin{bmatrix} I \\ T^{[*]} \end{bmatrix} x\right\rangle_{J \oplus J} =\n \langle (I + T T^{[*]}) x, x \rangle_{J} \ge 0\n$$\nfor all $x \in \operatorname{Ran} J ({\mathcal C}_{Z, \sbm{ X & -Y}})^{*}$, i.e., that\n$I + T T^{[*]}$ is a $J$-positive operator.\n\n\smallskip\n\nTo prove (2), use \eqref{cMDperp1} to see that elements ${\mathbf g}$ of \n$({\mathcal M}_{\mathfrak D}^{[\perp]})_{1}$ have the form \n$$\n{\mathbf g} =\sbm{ {\mathcal O}_{V,W} \\ \n{\mathcal O}_{U,W} ({\mathcal G}_{U,W})^{-1} {\mathcal G}_{V,W} } x\quad\mbox{for some} \quad x \in {\mathbb \nC}^{n_{W}}.\n$$ \nThe associated $J$-gramian is then given by\n\begin{align*}\n & \begin{bmatrix} ({\mathcal O}_{V,W})^{*} & {\mathcal G}_{V,W} ({\mathcal G}_{U,W})^{-1} \n\t({\mathcal O}_{U,W})^{*} \end{bmatrix} \begin{bmatrix} I_{p} & 0 \\ 0 \n\t& -I_{m} \end{bmatrix} \begin{bmatrix} {\mathcal O}_{V,W} \\ {\mathcal O}_{U,W} \n\t({\mathcal G}_{U,W})^{-1} {\mathcal G}_{V,W} \end{bmatrix} \\\n& = {\mathcal G}_{V,W} - {\mathcal G}_{V,W} ({\mathcal G}_{U,W})^{-1} {\mathcal G}_{V,W}.\n\end{align*}
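\nHere we will use the following standard Schur-complement fact, recorded for the reader's convenience: if $M = \sbm{A & B \\ B^{*} & C}$ is a Hermitian block matrix with $C \succ 0$, then\n$$\nM \succeq 0 \quad \Longleftrightarrow \quad A - B C^{-1} B^{*} \succeq 0,\n$$\nand similarly with $\succeq$ replaced by $\succ$ throughout. The operator ${\mathcal G}_{V,W} - {\mathcal G}_{V,W} ({\mathcal G}_{U,W})^{-1} {\mathcal G}_{V,W}$ just computed is exactly such a Schur complement (take $A = B = {\mathcal G}_{V,W}$ and $C = {\mathcal G}_{U,W}$).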
By a Schur-complement analysis, this last operator is positive semidefinite \n(in fact positive definite, by our Nondegeneracy Assumption) if and only if\n\begin{align*}\n \begin{bmatrix} {\mathcal G}_{V,W} & {\mathcal G}_{V,W} \\ {\mathcal G}_{V,W} & {\mathcal G}_{U,W} \n \end{bmatrix} = \begin{bmatrix} {\mathcal G}_{V,W}^{\frac{1}{2}} & 0 \\ 0 \n & I \end{bmatrix}\begin{bmatrix} I_{\operatorname{Ran} {\mathcal G}_{V,W}} \n & {\mathcal G}_{V,W}^{\frac{1}{2}} \\ {\mathcal G}_{V,W}^{\frac{1}{2}} & {\mathcal G}_{U,W} \n \end{bmatrix} \begin{bmatrix} {\mathcal G}_{V,W}^{\frac{1}{2}} & 0 \\ 0 & \n I \end{bmatrix} \succeq 0,\n\end{align*}\nwhich in turn happens if and only if\n$$ \n\begin{bmatrix} I_{\operatorname{Ran} {\mathcal G}_{V,W}} \n & {\mathcal G}_{V,W}^{\frac{1}{2}} \\ {\mathcal G}_{V,W}^{\frac{1}{2}} & {\mathcal G}_{U,W} \n \end{bmatrix} \succeq 0.\n$$\nYet another Schur-complement analysis converts this to the condition \n$$\n{\mathcal G}_{U,W} - {\mathcal G}_{V,W} \succeq 0\n$$ \nwhich is equivalent to \n$\operatorname{Ran} {\mathcal O}_{\sbm{V \\ U}, W}$ being a $J$-negative subspace.\n\nThe proofs of statements (3) and (4) are parallel to those of (1) \nand (2) respectively.\n\end{proof} \n\n\begin{lemma} \label{L:GammaD-factored} The Pick matrix $\boldsymbol{\Gamma}_{\mathfrak D}$ \n \eqref{GammaData} can be factored as follows:\n\begin{equation} \label{GammaData-factored}\t\n\boldsymbol{\Gamma}_{\mathfrak D}\n = \begin{bmatrix} -{\mathcal C}_{Z, \sbm{ X & -Y}} & 0 \\ 0 & \n({\mathcal O}_{\sbm{V \\ U},W})^{*} J \end{bmatrix}\n\begin{bmatrix} I & T \\ T^{[*]} & -I \end{bmatrix}\n \begin{bmatrix} -J ({\mathcal C}_{Z, \sbm{ X & -Y}})^{*} & 0 \\ 0 & \n\t{\mathcal O}_{\sbm{ V \\ U}, W} \end{bmatrix}.\n\end{equation}\n\end{lemma}\n\n\begin{proof} Multiplying out the expression on the right-hand \n side in \eqref{GammaData-factored}, we get\n$$\n\begin{bmatrix} {\mathcal G}^{J}_{Z, \sbm{ X & -Y}} & - {\mathcal C}_{Z, \sbm{ X & -Y}} \n T {\mathcal O}_{\sbm{V \\ U}, W} \\\n-( {\mathcal O}_{\sbm{V \\ U}, W})^{*} J T^{[*]} J ({\mathcal C}_{Z, \sbm{X & -Y}})^{*} \n & - {\mathcal G}^{J}_{\sbm{V \\ U}, W} \end{bmatrix},\n$$\nwhich is exactly $\sbm{\Gamma_{L} & \Gamma \\ \Gamma^{*} & \Gamma_{R}}=:\n\boldsymbol{\Gamma}_{\mathfrak D}$\nas we can see from the identities \eqref{GammaLRid} and \eqref{TGamma}.\n\end{proof}\n\n\begin{lemma} \label{L:GammaDpos}\n The following conditions are equivalent:\n \n\begin{enumerate}\n \item The matrix $\boldsymbol{\Gamma}_{\mathfrak D}$ \eqref{GammaData} is \n positive definite.\n \n \item The subspace $P_{H^{2}_{p+m}(\Pi_-)} {\mathcal M}_{\mathfrak D} = \n \operatorname{Ran} {\mathcal O}_{\sbm{V \\ U}, W}$ is $J$-negative and the \n subspace ${\mathbb G}_{T^{[*]}}$ is $J$-positive.\n \n \item The subspace $P_{H^{2}_{p+m}(\Pi_+)} {\mathcal M}_{\mathfrak D}^{[ \perp]} \n = \operatorname{Ran} J ({\mathcal C}_{Z, \sbm{X & -Y}})^{*}$ is \n $J$-positive and the subspace $\widehat {\mathbb G}_{T}$ is \n $J$-negative.\n \n \n\end{enumerate}\n\end{lemma}\n\n\begin{proof}\n From the factorization \eqref{GammaData-factored} we see that \n $\boldsymbol{\Gamma}_{\mathfrak D} \succ 0$ if and only if the Hermitian form \n on the subspace $\sbm{\operatorname{Ran} J ({\mathcal C}_{Z, \sbm{ X & \n -Y}})^{*} \\ \operatorname{Ran} {\mathcal O}_{\sbm{V \\ U}, W} }$\n induced by the operator $\sbm{I & T \\ T^{[*]} & - I}$ in the $J \n \oplus J$-inner product is positive. 
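\nBoth directions of the analysis below rest on a simple congruence principle: if $L$ is invertible, then for all $u$\n$$\n[\, L D L^{[*]} u, \, u \,]_{J \oplus J} = [\, D L^{[*]} u, \, L^{[*]} u \,]_{J \oplus J},\n$$\nso that $L D L^{[*]}$ induces a positive form if and only if $D$ does. The two factorizations which follow exhibit $\sbm{ I & T \\ T^{[*]} & -I }$ in exactly this form, with $D$ block diagonal.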
On the one hand we may consider the factorization\n$$\n \\begin{bmatrix} I & T \\\\ T^{[*]} & -I \\end{bmatrix} =\n \\begin{bmatrix} I & 0 \\\\ T^{[*]} & I \\end{bmatrix} \n\t \\begin{bmatrix} I & 0 \\\\ 0 & -I - T^{[*]} T \\end{bmatrix}\n\\begin{bmatrix} I & T \\\\ 0 & I \\end{bmatrix}\n$$\nto deduce that $\\sbm{ I & T \\\\ T^{[*]} & -I }$ is $(J \\oplus \nJ)$-positive if and only if \n\\begin{enumerate}\n\\item[(i)] the identity operator $I$ is $J$-positive on \n$\\operatorname{Ran} J ({\\mathcal C}_{Z, \\sbm{X & -Y}})^{*}$ (i.e., the subspace\n$\\operatorname{Ran} J ({\\mathcal C}_{Z, \\sbm{X & -Y}})^{*}$ is $J$-positive), and \n\\item[(ii)] $-I - T^{[*]}T$ is a $J$-positive operator on \n$\\operatorname{Ran} {\\mathcal O}_{\\sbm{V \\\\ U}, W}$, i.e., $\\widehat{\\mathbb \nG}_{T}$ is a $J$-negative subspace. \n\\end{enumerate}\nNote that this analysis \namounts to taking the $J$-symmetrized Schur complement of the matrix \n$\\sbm{ I & T \\\\ T^{[*]} & -I}$ with respect to the (1,1)-entry. This \nestablishes the equivalence of (1) and (3).\n\n\\smallskip\n\nOn the other hand we may take the $J$-symmetrized Schur complement of $\\sbm{ I & T \\\\ T^{[*]} & -I}$\nwith respect to the (2,2)-entry, corresponding to the factorization\n$$\n \\begin{bmatrix} I & T \\\\ T^{[*]} & -I \\end{bmatrix} = \n \\begin{bmatrix} I & -T \\\\ 0 & I \\end{bmatrix}\n\\begin{bmatrix} I + T T^{[*]} & 0 \\\\ 0 & -I \\end{bmatrix}\n \\begin{bmatrix} I & 0 \\\\ -T^{[*]} & I \\end{bmatrix}.\n$$\nIn this way we see that $(J \\oplus J)$-positivity of $\\sbm{ I & T \\\\ T^{[*]} & -I}$\ncorresponds to \n\\begin{enumerate}\n\\item[(i$^{\\prime}$)] $I + T T^{[*]}$ is a $J$-positive operator (i.e., \nthe subspace ${\\mathbb G}_{T^{[*]}}$ is $J$-positive), and \n\\item[(ii$^{\\prime}$)] minus the identity operator $-I$ is $J$ positive on $\\operatorname{Ran} \n{\\mathcal O}_{\\sbm{ V \\\\ U}, W}$ (i.e., the subspace is $\\operatorname{Ran} \n{\\mathcal O}_{\\sbm{ V \\\\ U}, W}$ is $J$-negative). \n\\end{enumerate}\nThis establishes the equivalence of (1) and (2).\n\\end{proof}\n\nTo conclude the proof of Corollary \\ref{C:BTOA-NP} for the general \nBiTangential case (at least with the Nondegeneracy Assumption in \nplace), it remains only to assemble the various pieces. By Lemma \n\\ref{L:MperpK} part (1), we see that ${\\mathcal M}_{\\mathfrak D}^{[\\perp {\\mathcal K}]}$ \nbeing $J$-positive is equivalent to \n\\begin{equation} \\label{condition1}\n {\\mathbb G}_{T^{[*]}} \\text{ and } ({\\mathcal M}_{\\mathfrak D}^{[\\perp]})_{1} \n\\text{ are $J$-positive subspaces.}\n\\end{equation}\nBy Lemma \\ref{L:posnegsubspaces}, we see that $ ({\\mathcal M}_{\\mathfrak \nD}^{[\\perp]})_{1} $ being $J$-positive is equivalent to $\\operatorname{Ran} {\\mathcal O}_{\\sbm{ V \\\\ U}, W}$ \nbeing $J$-negative. We therefore may amend \\eqref{condition1} to\n\\begin{equation} \\label{condition2}\n{\\mathbb G}_{T^{[*]}} \\text{ is $J$-positive and } \nP_{H^{2 \\perp}_{p+m}(\\Pi_+)} {\\mathcal M}_{\\mathfrak D} = \\operatorname{Ran} {\\mathcal O}_{\\sbm{ V \\\\ U}, W}\n\\text{ is $J$-negative}\n\\end{equation}\nwhich is exactly statement (2) in Lemma \\ref{L:GammaDpos}. 
Thus (1) \n$\\Leftrightarrow$ (2) in Corollary \\ref{C:BTOA-NP} follows from (1) \n$\\Leftrightarrow$ (2) in Lemma \\ref{L:GammaDpos}.\n\n\\smallskip\n\nFor the general BTOA case, the reproducing kernel space \n${\\mathcal H}(K_{\\Theta,J})$ again can be identified with a range space, namely\n\\begin{equation} \\label{HKThetaJid}\n {\\mathcal H}(K_{\\Theta,J}) = \\operatorname{Ran} (P^{J}_{H^{2}_{p+m}(\\Pi_{+})} - \n P^{J}_{{\\mathcal M}_{\\mathfrak D}})\n\\end{equation}\nwith lifted indefinite inner product, \nwhere $ P^{J}_{H^{2}_{p+m}(\\Pi_{+})}$ and $P^{J}_{{\\mathcal M}_{\\mathfrak D}}$ are the \n$J$-orthogonal projections of $L^{2}_{p+m}(i{\\mathbb R})$ onto $H^{2}_{p+m}(\\Pi_{+})$ and ${\\mathcal M}_{\\mathfrak D}$\nrespectively (see \\cite[Theorem 3.3]{BHMesa}). Due to $J$-orthogonal decompositions\n\\begin{align*}\n & H^{2}_{p+m}(\\Pi_{+}) \n= \\operatorname{Ran} J ({\\mathcal C}_{Z, \\sbm{ X & -Y}})^{*} \\, [+] \n {\\mathcal M}_{{\\mathfrak D}, 1} [+] \\, {\\mathcal M}_{{\\mathfrak D}, 2}, \\\\\n& {\\mathcal M} = \\widehat{\\mathbb G}_{T} \\, [+] {\\mathcal M}_{{\\mathfrak D}, 1} [+] \\, \n{\\mathcal M}_{{\\mathfrak D}, 2},\n \\end{align*}\n we can simplify the difference of $J$-orthogonal projections to\n$$\n P^{J}_{H^{2}_{p+m}} - P^{J}_{{\\mathcal M}_{\\mathfrak D}}\n = P^{J}_{\\operatorname{Ran} J ({\\mathcal C}_{Z, \\sbm{ X &-Y}})^{*}} - \n P^{J}_{\\widehat {\\mathbb G}_{T}}.\n $$\n By a calculation as in the proof for Case 1, one can show that \n \\begin{equation} \\label{HKThetaJ}\n {\\mathcal H}(K_{\\Theta,J}) = ( \\operatorname{Ran} J ({\\mathcal C}_{Z, \\sbm{X & \n -Y}})^{*})_{J} \\, [+] (\\widehat {\\mathbb G}_{T})_{-J}\n \\end{equation}\n with the identity map a Kre\\u{\\i}n-space isomorphism, \n where the subscripts on the right hand side indicating that one should use the $J$-inner \n product for the first component but the $-J$-inner product for the \n second component. We conclude that ${\\mathcal H}(K_{\\Theta, J})$ is a HIlbert \n space exactly when condition (3) in Lemma \\ref{L:GammaDpos} holds. We now see that \n (1) $\\Leftrightarrow$ (3) in Corollary \\ref{C:BTOA-NP} is an immediate consequence of (1) \n $\\Leftrightarrow$ (3) in Lemma \\ref{L:GammaDpos}.\n \\end{proof}\n \nThe above analysis actually establishes a bit more which we collect \nin the following Corollary.\n\n\\begin{corollary} \\label{C:BTOA-NP'} The following conditions are \n equivalent:\n\\begin{enumerate}\n \\item The subspace ${\\mathcal M}_{\\mathfrak D}^{[\\perp {\\mathcal K}]}$ is $J$-positive.\n \n \\item The subspace $({\\mathcal M}_{\\mathfrak D}^{[ \\perp ]})^{[\\perp {\\mathcal K}']}$ is \n $J$-negative.\n \\end{enumerate}\n\\end{corollary}\n\n\\begin{proof}\n We have seen in Lemma \\ref{L:MperpK} part (2) that \n $({\\mathcal M}_{\\mathfrak D}^{[\\perp]})^{[ \\perp {\\mathcal K}']}$ being $J$-negative is \n equivalent to\n\\begin{equation} \\label{condition1'}\n \\widehat{\\mathfrak G}_{T} \\text{ and } ({\\mathcal M}_{\\mathfrak D}^{[\\perp]})_{1} \\text{ \n are $J$-negative subspaces.}\n\\end{equation}\nLemma \\ref{L:posnegsubspaces} (4) tells us that $ {\\mathcal M}_{{\\mathfrak D},1}$ \nbeing $J$-negative is equivalent to \\\\\n$\\operatorname{Ran} J ({\\mathcal C}_{Z, \\sbm{ X & -Y}})^{*}$ \nbeing $J$-positive. 
Thus condition \\eqref{condition1'} can be amended to\n\\begin{equation} \\label{condition2'}\n \\widehat{\\mathbb G}_{T} \\text{ is negative and }\n P_{H^2_{p+m}(\\Pi_+)} {\\mathcal M}_{\\mathfrak D}^{[ \\perp]} = \n\\operatorname{Ran} J ({\\mathcal C}_{Z, \\sbm{ X & -Y}})^{*} \\text{ is $J$-positive.}\n\\end{equation}\nWe next use the equivalence of (1) $\\Leftrightarrow$ (3) in Theorem \n\\ref{L:GammaDpos} to see that condition \\eqref{condition2'} is also \nequivalent to $\\boldsymbol{\\Gamma}_{\\mathfrak D} \\succ 0$. We then use the \nequivalence (1) $\\Leftrightarrow$ (2) in Theorem \\ref{L:GammaDpos} \nto see that this last condition in turn is equivalent \nto ${\\mathcal M}_{\\mathfrak D}^{[ \\perp {\\mathcal K}]}$ being $J$-positive.\n\\end{proof}\n\n\n\n\n \n\\section{Interpolation problems in the generalized Schur class} \\label{S:negsquares}\nMuch of the previous analysis extends from the Schur class ${\\mathcal S}^{p \\times m}(\\Pi_{+})$ \nto a larger class ${\\mathcal S}_\\kappa^{p \\times m}(\\Pi_{+})$ (generalized Schur class)\nconsisting of $\\mathbb C^{p\\times m}$-valued\nfunctions that are meromorphic on $\\Pi_+$ with total pole multiplicity equal $\\kappa$ \nand such that their $L^\\infty$ norm (that is, $\\sup_{y\\in\\mathbb R}\\|S(iy)\\|$) does \nnot exceed one. The values \n$S(iy)$ are understood in the sense of non-tangential boundary limits that exist for \nalmost all \n$y\\in\\mathbb R$. The multiplicity of a pole $z_0$ for a matrix-valued function $S$ is \ndefined as the sum of absolute values of all negative partial multiplicities \nappearing in the Smith form of $S$\nat $z_0$ (see e.g.\\ \\cite[Theorem 3.1.1]{BGR}). Then the total pole multiplicity of $S$ \nis defined as the sum of multiplicities of all poles. Let us \nintroduce the notation \n$$\nm_{P}(S) = \\text{ sum of all pole multiplicities of $S$ over all poles in } \\Pi_{+}.\n$$\nIt follows by the maximum modulus principle that ${\\mathcal S}^{p \\times m}_0(\\Pi_{+})$ \nis just the classical Schur class. Generalized Schur functions appeared first \nin \\cite{T} in the interpolation context and \nwere comprehensively studied by Kre\\u{\\i}n and Langer in \\cite{kl,kl1}. 
Later\nwork on the classes ${\\mathcal S}_{\\kappa}$ include \\cite{dls1}, \\cite{J},\n\\cite{CG1}, and \\cite{ADRS}, as well as \\cite{Gol}, \\cite{nud}, \\cite{BH}, \n\\cite{BH86} and the book \\cite{BGR} in the context of interpolation.\n\n\\smallskip\n\nThe class ${\\mathcal S}_{\\kappa}(\\Pi_{+})$ can alternatively be characterized \nby any of the following conditions:\n\\begin{enumerate}\n \\item ${\\rm sq}_{-}(K_{S}) = \\kappa$ where the kernel $K_{S}$ is \n given by \\eqref{dBRkerPi+}.\n \n \\item ${\\rm sq}_{-}({\\mathbf K}_{S}) = \\kappa$, where ${\\mathbf \n K}_{S}$ is the $2 \\times 2$-block matrix kernel \\eqref{ker22}.\n \n \\item $S$ admits left and right \n(coprime) {\\em Kre\\u{\\i}n-Langer factorizations}\n$$\nF(\\lambda)=S_R(\\lambda)\\vartheta_R(\\lambda)^{-1}= \\vartheta_L(\\lambda)^{-1}S_L(\\lambda),\n$$\nwhere $S_L, \\, S_R\\in{\\mathcal S}^{p\\times m}(\\Pi_+)$ and $\\vartheta_L$ and $\\vartheta_R$ are matrix-valued finite\nBlaschke products of degree $\\kappa$ (see \\cite{kl1} for the scalar-valued case and \n\\cite{dls1} for the Hilbert-space operator-valued case).\nBy a $\\mathbb C^{n\\times n}$-valued finite Blaschke product we mean the product of $\\kappa$ Blaschke \n(or Blaschke-Potapov) factors \n$$\nI_n-P+\\frac{\\lambda-\\alpha}{\\lambda+\\overline{\\alpha}}P\n$$\nwhere $\\alpha\\in\\Pi_+$ and $P$ is an orthogonal projection in $\\mathbb C^n$. \n\\end{enumerate}\nThere is also an intrinsic characterization of matrix triples \n$(C,A,B)$ which can arise as the pole triple over the unit disk for a \ngeneralized Schur class function---see \\cite{bolleiba} for details.\n\n\\smallskip\n\nLet us take another look at the BiTangential Nevanlinna-Pick problem \n\\eqref{inter1'}--\\eqref{inter3'}.\nIf the Pick matrix \\eqref{Pick-simple} is not positive semidefinite, \nthe problem has no solutions in the Schur class ${\\mathcal S}^{p\\times n}(\\Pi_+)$, \nby Theorem \\ref{T:BT-NP}. However, there always exist generalized Schur \nfunctions that are analytic at all interpolation nodes $z_i,w_j$ and satisfy \ninterpolation conditions \\eqref{inter1'}--\\eqref{inter3'}. One can show that \nthere exist such functions with only one pole of a sufficiently high multiplicity \nat any preassigned point in $\\Pi_+$. The question of interest is\nto find the smallest integer $\\kappa$, for which interpolation conditions \n\\eqref{inter1'}--\\eqref{inter3'} \nare met for some function $S\\in {\\mathcal S}_\\kappa^{p \\times m}(\\Pi_{+})$ and then \nto describe the set of all such functions. \n\n\\smallskip\n\nThe same question makes sense in the more general setting of the {\\bf BTOA-NP} \ninterpolation problem:\n{\\em given a $\\Pi_{+}$-admissible BTOA interpolation data set \\eqref{data},\nfind the smallest integer $\\kappa$, for which interpolation conditions \n\\eqref{BTOAint1}--\\eqref{BTOAint3} are satisfied for some function \n$S\\in {\\mathcal S}_\\kappa^{p \\times m}(\\Pi_{+})$ which is analytic on \n$\\sigma(Z)\\cup\\sigma(W)$, and describe the set of all such functions}. \n\n\\smallskip\n\nThe next theorem gives the answer to the question above in the so-called nondegenerate case.\n\n\\begin{theorem} \\label{T:BTOA-NPkap} Suppose that ${\\mathfrak D} = ( X,Y,Z; U,V, W; \\Gamma)$\n is a $\\Pi_{+}$-admissible BTOA interpolation data set and let us \n assume that the BTOA-Pick matrix \n$\\boldsymbol{\\Gamma}_{\\mathfrak D}$ defined by \\eqref{GammaData} is invertible. 
\nLet $\\kappa$ be the smallest \ninteger for which there is a function $S\\in {\\mathcal S}_\\kappa^{p \\times m}(\\Pi_{+})$ which is analytic on \n$\\sigma(W) \\cup \\sigma(Z)$ and satisfies the interpolation conditions \\eqref{BTOAint1}--\\eqref{BTOAint3}.\nThen $\\kappa$ is given by any one of the following three equivalent \nformulas:\n\\begin{enumerate}\n \\item $\\kappa=\\nu_-(\\boldsymbol{\\Gamma}_{\\mathfrak D})$, the number of negative \n eigenvalues of $\\boldsymbol{\\Gamma}_{\\mathfrak D}$.\n \\item $\\kappa = \\nu_{-}({\\mathcal M}_{\\mathfrak D}^{[\\perp] {\\mathcal K}})$, the negative signature \n of the Kre\\u{\\i}n-space ${\\mathcal M}_{\\mathfrak D}^{[\\perp] {\\mathcal K}}$ \n in the $J$-inner product.\n \n \\item $\\kappa = \\nu_{-}({\\mathcal H}(K_{\\Theta, J}))$, the negative signature of the \nreproducing kernel Pontryagin space ${\\mathcal H}(K_{\\Theta,J})$, where $\\Theta$ is defined \nas in \\eqref{sep1} and $K_{\\Theta,J}$ as in \\eqref{ad1}.\n\\end{enumerate}\nFurthermore, the function $S$ belongs to the generalized Schur class \n${\\mathcal S}_\\kappa^{p \\times m}(\\Pi_{+})$ and satisfies the interpolation conditions \n\\eqref{BTOAint1}--\\eqref{BTOAint3}\nif and only if it is of the form\n\\begin{equation} \\label{LFTparamkap}\nS(\\lambda) = (\\Theta_{11}(\\lambda) G(\\lambda) + \\Theta_{12}(\\lambda))\n( \\Theta_{21}(\\lambda) G(\\lambda) + \\Theta_{22}(\\lambda))^{-1}\n\\end{equation}\nfor a Schur class function $G\\in {\\mathcal S}^{p \\times m}(\\Pi_{+})$ such that\n\\begin{equation} \\label{LFTparampar}\n\\det (\\psi(\\lambda)(\\Theta_{21}(\\lambda) G(\\lambda) + \\Theta_{22}(\\lambda)))\\neq 0,\\quad \\lambda\\in \\Pi_{+}\n\\backslash(\\sigma(Z)\\cup\\sigma(W))\n\\end{equation}\nwhere $\\psi$ is the $m \\times m$-matrix function defined in \n\\eqref{sep1}.\n\\end{theorem}\n\n\\subsection{The state-space approach} \\label{S:statekap}\nThe direct proof of the necessity of condition (1) in Theorem \n\\ref{T:BTOA-NPkap} for the existence of class-${\\mathcal S}_{\\kappa}^{p \\times \nm}(\\Pi_{+})$ solution of the interpolation conditions \n\\eqref{BTOAint1}--\\eqref{BTOAint3} relies on the\ncharacterization of the class ${\\mathcal S}_\\kappa^{p \\times m}(\\Pi_{+})$ in \nterms of the kernel \\eqref{ker22} mentioned above: {\\em a \n$\\mathbb C^{p\\times n}$-valued function meromorphic on $\\Pi_+$ belongs to ${\\mathcal S}_\\kappa^{p \\times m}(\\Pi_{+})$\nif and only if the kernel ${\\mathbf K}_{S}(\\lambda, \\lambda_{*}; \\zeta, \\zeta_{*})$ defined as in \\eqref{ker22}\nhas $\\kappa$ negative squares on $\\Omega_S^4$}:\n\\begin{equation}\n{\\rm sq}_-{\\mathbf K}_{S}=\\kappa,\n\\label{aug3}\n\\end{equation}\nwhere $\\Omega_S\\subset\\Pi_+$ is the domain of analyticity of $S$. 
The latter equality means that the \nblock matrix $\\left[{\\mathbf K}_{S}(z_i, z_i; z_j, z_j)\\right]_{i,j=1}^N$ has at most $\\kappa$ \nnegative eigenvalues for any \nchoice of finitely many points $z_1.\\ldots,z_N\\in\\Omega_S$, and it has exactly $\\kappa$ negative eigenvalues \nfor at least one such choice.\n\n\\smallskip\nNow suppose that $S \\in {\\mathcal S}_{\\kappa}^{p \\times m}(\\Pi_{+})$ satisfies \nthe interpolation conditions \\eqref{BTOAint1}--\\eqref{BTOAint3}.\nThe kernel ${\\mathbf K}_{S}$ satisfying condition \\eqref{aug3} still admits the Kolmogorov decomposition\n\\eqref{KS2}, but this time the state space ${\\mathcal X}$ is a Pontryagin space of negative index $\\kappa$.\nAll computations following formula \\eqref{KS2} go through with $\\Pi_+$ replaced by $\\Omega_S$ showing \nthat the matrix $\\boldsymbol{\\Gamma}'_{\\mathfrak D}$ defined in \\eqref{aug4} is equal to the Pick matrix \n$\\boldsymbol{\\Gamma}_{\\mathfrak D}$ given in \\eqref{GammaData}. Note that the operations bringing the \nkernel ${\\mathbf K}_{S}$ to the matrix $\\boldsymbol{\\Gamma}_{\\mathfrak D}$ amount \nto a sophisticated conjugation of the kernel ${\\mathbf K}_{S}$. We \nconclude that $\\nu_-(\\boldsymbol{\\Gamma}_{\\mathfrak D})=\\nu_-(\\boldsymbol{\\Gamma}'_{\\mathfrak D}) \\le \\kappa$.\nOnce one of the sufficiency arguments has been carried out (by whatever \nmethod) to show that $\\nu_-(\\boldsymbol{\\Gamma}_{\\mathfrak \nD})=\\nu_-(\\boldsymbol{\\Gamma}'_{\\mathfrak D}) < \\kappa$ implies that there is a \na function $S$ in a generalized Schur class ${\\mathcal S}^{p \\times \nm}_{\\kappa'}(\\Pi_{+})$ with $\\kappa' < \\kappa$ satisfying the \ninterpolation conditions, then $\\nu_{-}(\\boldsymbol{\\Gamma}_{\\mathfrak D}) < \n\\kappa$ leads to a contradiction to \nthe minimality property of $\\kappa$. We conclude \nthat $\\nu_{-}(\\boldsymbol{\\Gamma}_{\\mathfrak D}) = \\kappa$ is necessary for \n$\\kappa$ to be the smallest integer so that there is a solution $S$ \nof class ${\\mathcal S}_{\\kappa}^{p \\times m}(\\Pi_{+})$ of the interpolation \nconditions \\eqref{BTOAint1}--\\eqref{BTOAint3}. \n\n\\smallskip\n\nWe now suppose that $\\nu_-(\\boldsymbol{\\Gamma}_{\\mathfrak D})=\\kappa$. The identity \\eqref{ad1} relies on \nequality \\eqref{bigLySyl} and on the assumption that $\\boldsymbol{\\Gamma}_{\\mathfrak D}$ \nis invertible. 
In particular, the matrix $\\Theta(\\lambda)$ still is $J$-unitary for each \n$\\lambda \\in i \\mathbb R$, i.e., equalities \\eqref{ThetaJunitary} hold for all $\\lambda\\in i\\mathbb R$.\nBy using the \ncontrollability\/observability assumptions on $(Z,X)$ and $(U,W)$, it \nfollows from the formula on the right hand side of \\eqref{ad1} that the kernel \n$K_{\\Theta,J}$ \\eqref{ad1} has $\\kappa$ negative squares on\n$\\Omega_\\Theta = \\Pi_{+} \\setminus \\sigma(W)$ (the points of \nanalyticity for $\\Theta$ in the right half plane $\\Pi_{+}$):\n$$\n {\\rm sq}_{-} K_{\\Theta,J} = \\kappa.\n$$\n\n\n\\smallskip\n\nWe shall have need of the {\\em Potapov-Ginsburg transform} $U = \\sbm{ \nU_{11} & U_{12} \\\\ U_{21} & U_{22}}$ of \na given block $2 \\times 2$-block matrix function $\\Theta = \\sbm{ \n\\Theta_{11} & \\Theta_{21} \\\\ \\Theta_{21} & \\theta_{22}} $(called \nthe {\\em Redheffer transform} in \\cite{BGR}) defined by\n$$\n U = \\begin{bmatrix} U_{11} & U_{12} \\\\ U_{21} & \n U_{22} \\end{bmatrix} : = \\begin{bmatrix} \\Theta_{12} \n \\Theta_{22}^{-1} & \\Theta_{11} - \\Theta_{12} \\Theta_{22}^{-1} \n \\Theta_{21} \\\\ \\Theta_{22}^{-1} & - \\Theta_{22}^{-1} \\Theta_{21} \n \\end{bmatrix}.\n$$\n This transform is the result of rearranging the inputs and outputs in the system of \n equations\n\\begin{equation} \\label{chain}\n\\begin{bmatrix} \\Theta_{11} & \\Theta_{12} \\\\ \\Theta_{21} & \\Theta_{22} \\end{bmatrix} \n\\begin{bmatrix} x_{2} \\\\ y_{2} \\end{bmatrix} = \\begin{bmatrix} y_{1} \\\\ x_{1} \n \\end{bmatrix}\n\\end{equation}\nto have the form\n\\begin{equation} \\label{scattering}\n \\begin{bmatrix} U_{11} & U_{12} \\\\ U_{21} & U_{22} \\end{bmatrix} \n \\begin{bmatrix} x_{1} \\\\ x_{2} \\end{bmatrix} = \\begin{bmatrix} \n\t y_{1} \\\\ y_{2} \\end{bmatrix},\n\\end{equation}\nand in circuit theory has the interpretation as the change of \nvariable from the {\\em chain formalism} \\eqref{chain} to the {\\em scattering \nformalism} \\eqref{scattering}. Based on this connection it is not hard to show that\n$$\n {\\rm sq}_{-} K_{U} = {\\rm sq}_{-} K_{\\Theta,J} = \\kappa\n$$\nwhere the notation $K_{U}$ is as in \\eqref{dBRkerPi+} and \n$K_{\\Theta,J}$ as in \\eqref{ad1}\n(see \\cite[Theorem 13.1.3]{BGR}).\nWe conclude that $U$ is in the generalized Schur class ${\\mathcal S}_{\\kappa}^{(p+m) \\times (m+p)}(\\Pi_{+})$. 
\nBy the Kre\\u{\\i}n-Langer factorization result for the generalized Schur \nclass (see \\cite{kl1}), it follows that $\\kappa$ is also equal to the \ntotal pole multiplicity of $U$ over points in $\\Pi_{+}$:\n$$\nm_{P}(U) = \\kappa.\n$$\n\\smallskip\n\nWe would like to show next that\n\\begin{equation} \\label{toshow1}\n m_{P}(U_{22}) = m_{P}(\\Theta_{22}^{-1} \\Theta_{21}) = \\kappa.\n\\end{equation}\nVerification of this formula will take several steps and \nfollow the analysis in \\cite[Chapter 13]{BGR}.\nWe first note that the calculations \\eqref{cMrep}--\\eqref{cMBLrep} \ngo through unchanged so we still have the Beurling-Lax representation\n\\begin{equation} \\label{BLrep}\n {\\mathcal M}_{\\mathfrak D} = \\Theta \\cdot H^{2}_{p+m}(\\Pi_{+})\n\\end{equation}\nwhere ${\\mathcal M}_{\\mathfrak D}$ also has the representation \\eqref{cMrep}.\nThe observability assumption on the output pair $(U,W)$ translates to \nan additional structural property on ${\\mathcal M}_{\\mathfrak D}$:\n\\begin{itemize}\n \\item $(U,W)$ {\\em observable implies}\n\\begin{equation} \\label{obs-imp1}\n {\\mathcal M}_{\\mathfrak D} \\cap \\begin{bmatrix} L^{2}(i {\\mathbb R}) \\\\ 0 \n \\end{bmatrix} = {\\mathcal M}_{\\mathfrak D} \\cap \\begin{bmatrix} \n H^{2}_{p}(\\Pi_{+}) \\\\ 0 \\end{bmatrix}.\n\\end{equation}\n\\end{itemize}\nMaking use of \\eqref{BLrep}, condition \\eqref{obs-imp1} translates to \nan explicit property of $\\Theta$, namely:\n$$\nf \\in H^{2}_{p}(\\Pi_{+}),\\, g \\in H^{2}_{m}(\\Pi_{+}),\\, \\Theta_{21}f \n+ \\Theta_{22}g = 0 \\Rightarrow \\Theta_{11} f + \\Theta_{12}g \\in \nH^{2}_{p}(\\Pi_{+}).\n$$\nSolving the first equation for $g$ gives $g = -\\Theta_{22}^{-1} \n\\Theta_{21} f$ and this last condition can be rewritten as\n$$\nf \\in H^{2}_{p}(\\Pi_{+}), \\, \\Theta_{22}^{-1} \n\\Theta_{21} f \\in H^{2}_{m}(\\Pi_{+}) \\Rightarrow\n(\\Theta_{11} - \\Theta_{12} \\Theta_{22}^{-1} \\Theta_{21}) f \\in \nH^{2}_{p}(\\Pi_{+}),\n$$\nor, more succinctly,\n$$\n f \\in H^{2}_{p}(\\Pi_{+}),\\, U_{22} f \\in H^{2}_{m}(\\Pi_{+}) \n \\Rightarrow U_{12} f \\in H^{2}_{m}(\\Pi_{+}).\n$$\nThis last condition translates to\n\\begin{equation} \\label{obs-imp2}\n m_{P}(U_{22}) = m_{P}\\left( \\sbm{ U_{12} \\\\ U_{22} } \\right).\n\\end{equation}\n\nSimilarly, the controllability assumption on the input pair $(Z,X)$ \ntranslates to an additional structural property on ${\\mathcal M}_{\\mathfrak \nD}$, namely:\n\\begin{itemize}\n \\item $(Z,X)$ {\\em controllable implies}\n\\begin{equation} \\label{control-imp1}\n P_{\\sbm{ 0 \\\\ H^{2}_{m}(\\Pi_{+})}} \\left( {\\mathcal M}_{\\mathfrak D} \\cap \n H^{2}_{p+m}(\\Pi_{+}) \\right) = \\begin{bmatrix} 0 \\\\ H^{2}_{m}(\\Pi_{+}) \n \\end{bmatrix}.\n\\end{equation}\n\\end{itemize}\nIn terms of $\\Theta$, from the representation \\eqref{BLrep} we see \nthat this means that, given any $h \\in H^{2}_{m}(\\Pi_{+})$, we can \nfind $f \\in H^{2}_{p}(\\Pi_{+})$ and $g \\in H^{2}_{m}(\\Pi_{+})$ so that\n$$\n\\Theta_{11} f + \\Theta_{12} g \\in H^{2}_{p}(\\Pi_{+}), \\quad\n\\Theta_{21} f + \\Theta_{22} g = h.\n$$\nWe can solve the second equation for $g$\n$$\n g = \\Theta_{22}^{-1} h - \\Theta_{22}^{-1} \\Theta_{21} f \\in \n H^{2}_{m}(\\Pi_{+})\n$$\nand rewrite the first expression in terms of $f$ and $h$:\n$$\n (\\Theta_{11} - \\Theta_{12} \\Theta_{22}^{-1} \\Theta_{21}) f + \n \\Theta_{12} \\Theta_{22}^{-1} h \\in H^{2}_{p}(\\Pi_{+}).\n$$\nPutting the pieces together, we see that an equivalent form of \ncondition \\eqref{control-imp1} is: {\\em for any $ h \\in 
H^{2}_{m}(\\Pi_{+})$, there exists \nan $f \\in H^{2}_{p}(\\Pi_{+})$ such that\n$$\n\\Theta_{22}^{-1} h - \\Theta_{22}^{-1} \\Theta_{21} f \\in \nH^{2}_{m}(\\Pi_{+}), \\quad (\\Theta_{11} - \\Theta_{12} \\Theta_{22}^{-1} \n\\Theta_{21}) f + \\Theta_{12} \\Theta_{22}^{-1} h \\in \nH^{2}_{p}(\\Pi_{+}).\n$$}\nMore succinctly,\n\\begin{align*}\n& h \\in H^{2}_{m}(\\Pi_{+}) \\Rightarrow \\exists f \\in \n H^{2}_{p}(\\Pi_{+}) \\text{ so that } \\\\\n& U_{21} h + U_{22} f \\in H^{2}_{m}(\\Pi_{+}), \\quad\nU_{12} f + U_{11} h \\in H^{2}_{p}(\\Pi_{+}),\n\\end{align*}\nor, in column form, for each $h \\in H^{2}_{m}(\\Pi_{+})$ there exists \n$f \\in H^{2}_{p}(\\Pi_{+})$ so that\n$$\n \\begin{bmatrix} U_{11} \\\\ U_{21} \\end{bmatrix} h + \\begin{bmatrix} \n U_{12} \\\\ U_{22} \\end{bmatrix} f \\in H^{2}_{p+m}(\\Pi_{+}).\n$$\nThe meaning of this last condition is:\n\\begin{equation} \\label{control-imp2}\n m_{P}(U) = m_{P}\\left(\\sbm{ U_{12} \\\\ U_{22}} \\right).\n \\end{equation}\nCombining \\eqref{obs-imp2} with \\eqref{control-imp2} gives us \n\\eqref{toshow1} as wanted.\n\n\\smallskip\nSince $\\Theta$ is not $J$-contractive in $\\Pi_+$ anymore,\nwe cannot conclude that $\\Theta_{22}^{-1}\\Theta_{21}$ is contraction valued. However, \ndue to equalities \\eqref{ThetaJunitary}, the function $\\Theta_{22}(\\lambda)^{-1}\\Theta_{21}(\\lambda)$\nis a contraction for each $\\lambda\\in i\\mathbb R$.\nTherefore, $\\Theta_{22}^{-1}\\Theta_{21}$ belongs to the generalized Schur class \n$\\mathcal S^{m\\times p}_\\kappa(\\Pi_+)$. We next wish to argue that\n\\begin{equation} \\label{toshow2}\n{\\rm wno} \\det \\Theta_{22}+{\\rm wno} \\det \\psi=\\kappa,\n\\end{equation}\nwhere $\\psi$ is given by \\eqref{sep1}.\nFrom the representation \\eqref{BLrep} and the form of ${\\mathcal M}_{\\mathfrak \nD}$ in \\eqref{BLrep} we see that\n$$\n\\Theta_{22} \\begin{bmatrix} \\Theta_{22}^{-1} \\Theta_{21} & I_{m}\n\\end{bmatrix} H^{2}_{p+m}(\\Pi_{+}) = \\begin{bmatrix} \\Theta_{21} & \n\\Theta_{22} \\end{bmatrix} H^{2}_{p+m}(\\Pi_{+}) = \\psi^{-1} \nH^{2}_{m}(\\Pi_{+}).\n$$\nWe rewrite this equality as\n\\begin{equation} \\label{subspace-eq}\n \\begin{bmatrix} \\Theta_{22}^{-1} \\Theta_{21} & I_{m} \\end{bmatrix} \n H^{2}_{p+m}(\\Pi_{+}) = \\Theta_{22}^{-1} \\psi^{-1} \n H^{2}_{m}(\\Pi_{+}).\n\\end{equation}\nIn particular,\n$$\n \\Theta_{22}^{-1} \\psi^{-1} H^{2}_{m}(\\Pi_{+}) \\supset \n H^{2}_{m}(\\Pi_{+})\n$$\nso the matrix function $\\Theta_{22}^{-1} \\psi^{-1}$ has no zeros (in \nthe sense of its Smith-McMillan form) in $\\Pi_{+}$. 
As \n$\\Theta_{22}^{-1}$ and $\\psi^{-1}$ are invertible on the boundary $i \n{\\mathbb R}$, we see that ${\\rm wno} \\det(\\Theta_{22}^{-1} \n\\psi^{-1})$ is well-defined and by the Argument Principle we have\n\\begin{align}\n& -{\\rm wno} \\det \\Theta_{22} - {\\rm wno} \\det(\\psi) \n= {\\rm wno} \\det(\\Theta_{22}^{-1} \\psi^{-1}) \\notag \\\\\n& \\quad \\quad \\quad \\quad = \n m_{Z}(\\det(\\Theta_{22}^{-1} \\psi^{-1})) - m_{P}(\\det(\\Theta_{22}^{-1} \n \\psi^{-1})) \\notag \\\\\n & \\quad \\quad \\quad \\quad = -m_{P}(\\det(\\Theta_{22}^{-1} \\psi^{-1})) = \n -m_{P}(\\Theta_{22}^{-1} \\psi^{-1}) \\notag \\\\\n &\\quad \\quad \\quad \\quad = -\\dim P_{H^{2}_{m}(\\Pi_{-})} \\Theta_{22}^{-1} \\psi^{-1} \n H^{2}_{m}(\\Pi_{+})\n \\label{know1}\n\\end{align}\nwhere $m_{Z}(S)$ is the total zero multiplicity of the rational matrix \nfunction $S$ over all zeros in $\\Pi_{+}$.\nOn the other hand we have\n\\begin{equation} \\label{know2}\n \\dim P_{H^{2}_{m}(\\Pi_{-})} \\begin{bmatrix} \\Theta_{22}^{-1} \\Theta_{21} & I_{m} \\end{bmatrix} \n H^{2}_{p+m}(\\Pi_{+}) = m_{P}(\\Theta_{22}^{-1} \\Theta_{21}) = \n \\kappa\n\\end{equation}\nwhere we make use of \\eqref{toshow1} for the last step. Combining \n\\eqref{know1} and \\eqref{know2} with \\eqref{subspace-eq} finally \nbrings us to \\eqref{toshow2}.\n\n\\smallskip\n\nIn addition to the Beurling-Lax representation \\eqref{cMBLrep} or \n\\eqref{BLrep}, we also still have the Beurling-=Lax representation \n\\eqref{cM-rep} for ${\\mathcal M}_{{\\mathfrak D},-}$ with $\\psi, \\psi^{-1}$ \ngiven by \\eqref{sep1} and \\eqref{psi-inv}.\nHowever, the condition \\eqref{sol-rep} should be modified as follows:\n\\begin{itemize}\n \\item {\\em A meromorphic function $S \\colon \\Pi_{+} \\to {\\mathbb C}^{p \\times m}$ \nhas total pole multiplicity at most $\\kappa$ over $\\Pi_+$ and \n satisfies the interpolation conditions \\eqref{BTOAint1}--\\eqref{BTOAint3} if and only if there \n is an $m\\times m$-matrix\nvalued function $\\Psi$ analytic on $\\Pi_+$ with $\\det \\Psi$ having no zeros on $\\sigma(Z)\\cup\\sigma(W)$ \nand with $\\kappa$ zeros in $\\Pi_+$ such that \n \\begin{equation} \\label{sol-rep'}\n \\begin{bmatrix} S \\\\ I_m \\end{bmatrix}\\psi^{-1}\\Psi H^2_m(\\Pi_+) \n \\subset {\\mathcal M}_{\\mathfrak D}.\n \\end{equation}}\n \\end{itemize}\nNow instead of \\eqref{Sparam}, we have\n \\begin{equation} \\label{Sparam'}\n \\begin{bmatrix} S \\\\ I_{m} \\end{bmatrix} \\psi^{-1} \\Psi =\n\\begin{bmatrix} \\Theta_{11} & \\Theta_{12} \\\\\n \\Theta_{21} & \\Theta_{22} \\end{bmatrix} \\begin{bmatrix} Q_{1}\n \\\\ Q_{2} \\end{bmatrix}\n\\end{equation}\nfor some $(p+m) \\times m$ matrix function $\\sbm{Q_{1} \\\\ Q_{2}} \\in H^2_{(p+m) \\times m}(\\Pi_{+})$.\nThen we conclude from the $J$-unitarity of $\\Theta$ on $i\\mathbb R$ (exactly as in Section 2) that for \nalmost all \n$\\lambda\\in i\\mathbb R$, the matrix $Q_2(\\lambda)$ is invertible whereas the matrix \n$G(\\lambda)=Q_1(\\lambda)Q_2(\\lambda)^{-1}$\nis a contraction. 
The identity \\eqref{bottom} arising from looking at \nthe bottom component of \\eqref{Sparam'} must be modified to read\n$$\n \\psi^{-1} \\Psi = \\Theta_{21} Q_{1} + \\Theta_{22} Q_{2} = \n \\Theta_{22} (\\Theta_{22}^{-1} \\Theta_{21} G + I_{m}) Q_{2}\n$$\nleading to the modification of \\eqref{wno'}:\n\\begin{align*} \n & {\\rm wno} \\det \\psi^{-1} + {\\rm wno} \\det \\Psi = \\notag \\\\\n & \\quad \\quad \\quad \\quad \n{\\rm wno} \\det \\Theta_{22} + {\\rm wno} \\det (\\Theta_{22}^{-1} \\Theta_{21} G + \n I_{m}) + \\rm{ wno} \\det Q_{2}.\n\\end{align*}\nThe identity \\eqref{wno''} must be replaced by \\eqref{toshow2}. \nUsing that ${\\rm wno} \\det \\Psi = \\kappa$, with all these adjustments \nin place we still arrive at ${\\rm wno} \\det Q_{2} = 0$ and hence \n$Q_{2}$ has no zeros in $\\Pi_+$ and $G$ extends inside $\\Pi_+$ as a \nSchur-class function.\nThe representation \\eqref{LFTparamkap} follows from \\eqref{Sparam'} as well as the equality\n$\\Psi = \\psi(\\Theta_{21}G+\\Theta_{22}) Q_{2}$. Since $\\Psi$ has no zeros \nin $\\sigma(Z) \\cap \\sigma(W)$ while $\\psi(\\Theta_{21}G+\\Theta_{22})$ \nand $Q_2$ are analytic on all of $\\Pi_+$, we see that $\\psi(\\Theta_{21}G+\\Theta_{22})$ has no zeros in \n$\\sigma(Z) \\cap \\sigma(W)$ as well.\n\n\\smallskip\n\nConversely, for any $G\\in\\mathcal S^{p\\times m}(\\Pi_+)$ such that $\\psi(\\Theta_{21}G+\\Theta_{22})$ has \nno zeros on $\\sigma(Z)\\cup\\sigma(W)$, we let\n$$\n\\begin{bmatrix}S_1 \\\\ S_2\\end{bmatrix}=\\Theta\\begin{bmatrix}G \\\\ I_m\\end{bmatrix},\\quad \\Psi=\\psi S_2, \n \\quad S=S_1S_2^{-1}, \n$$\nso that \n$$\n\\begin{bmatrix}S \\\\ I_m\\end{bmatrix}\\psi^{-1}\\Psi=\\Theta\\begin{bmatrix}G \\\\ I_m\\end{bmatrix}.\n$$\nSince $\\Theta$ is $J$-unitary on $i\\mathbb R$ and $G$ is a Schur-class, \nit follows that $S(\\lambda)$ is contractive for almost all $\\lambda\\in i\\mathbb R$.\nSince $\\det \\Psi$ has no zeros on $\\sigma(Z)\\cup\\sigma(W)$ and has $\\kappa$ zeros in $\\Pi_+$, due to \nthe equalities\n$$\n{\\rm wno}\\det \\Psi={\\rm wno}\\det \\psi+{\\rm wno}\\det\\Theta_{22}+\n{\\rm wno}\\det(\\Theta_{22}^{-1}\\Theta_{21}G+I)=\\kappa\n$$\nwe see that $S$ satisfies the interpolation conditions \\eqref{BTOAint1}--\\eqref{BTOAint3} \nby the criterion \\eqref{sol-rep'}\nand has total pole multiplicity at most $\\kappa$ in $\\Pi_+$. However, since \n$\\nu_-(\\boldsymbol{\\Gamma}_{\\mathfrak D})=\\kappa$,\nby the part of the sufficiency criterion already proved we know that $S$ must \nhave at least $\\kappa$ poles in $\\Pi_+$. Thus $S$ has exactly \n$\\kappa$ poles in $\\Pi_{+}$ and therefore is in the $\\mathcal \nS_\\kappa^{p\\times m}(\\Pi_{+})$-class.\n\n\\smallskip\n\n\\subsection{The Fundamental Matrix Inequality approach for the \ngeneralized Schur-class setting} \nThe Fundamental Matrix Inequality method extends to the present setting as follows. \nAs in the definite case,\nwe extend the interpolation data by an arbitrary finite set of \nadditional full-matrix-value interpolation conditions to conclude that the kernel \n$\\boldsymbol{\\Gamma}_{\\mathfrak D}(z,\\zeta)$ \ndefined as in \\eqref{sep8c} has at most $\\kappa$ negative squares in \n$\\Omega_S \\setminus \\sigma(W)$. 
\nSince the constant block\n(the matrix $\\Gamma_{\\mathfrak D}$) has $\\kappa$ negative eigenvalues (counted with multiplicities), \nit follows that \n${\\rm sp}_-\\boldsymbol{\\Gamma}_{\\mathfrak D}(z,\\zeta)=\\kappa$ which holds if and only if the Schur complement \nof $\\Gamma_{\\mathfrak D}$ in \\eqref{sep8c} is a positive kernel on $\\Omega_S \n\\setminus \\sigma(W)$: \n$$\n\\frac{I_{p} - S(z)S(\\zeta)^*}{z+\\overline{\\zeta}}-\n\\begin{bmatrix}I_p & -S(z)\\end{bmatrix}{\\bf C}(z I-{\\bf A})^{-1}\\Gamma_{\\mathfrak D}^{-1}\n(\\overline\\zeta I-{\\bf A}^*)^{-1}{\\bf C}^*\\begin{bmatrix}I_p \\\\ -S(\\zeta)^*\\end{bmatrix}\\succeq 0.\n$$\nAs in Section \\ref{S:FMI}, the latter positivity condition can be written in the form \\eqref{sep8d} \n(all we need is formula \\eqref{ad1} which still holds true) and eventually, implies equality \n\\eqref{sep8g} for some $G\\in\\mathcal S^{p\\times m}(\\Pi_+)$, which in turn, implies \nthe representation \\eqref{LFTparamkap}. However, establishing the necessity of the condition \n\\eqref{LFTparampar} \nrequires a good portion of extra work. Most of the known proofs are still based the Argument Principle \n(the winding number computations \\cite{BGR} or the operator-valued \nversion of Rouch\\'e's theorem \\cite{GS}).\nFor example, it can be shown that if $K$ is a $p\\times m$ matrix-valued polynomial \nsatisfying interpolation conditions \n\\eqref{BTOAint1}--\\eqref{BTOAint3} and if $\\varphi$ is the inner function given \n(analogously to \\eqref{sep1}) by\n$$\n\\varphi(z)=I_p-X^*(zI+Z^*)^{-1}\\widetilde{P}^{-1}X,\n$$\nwhere the positive definite matrix $\\widetilde{P}$ is uniquely defined from the Lyapunov equation\n$\\widetilde{P}Z+Z^*\\widetilde{P}=XX^*$, then the matrix function \n\\begin{equation}\n\\Sigma:=\\begin{bmatrix}\\Sigma_{11} & \\Sigma_{12}\\\\ \\Sigma_{21} & \\Sigma_{22}\\end{bmatrix}=\n\\begin{bmatrix}\\varphi^{-1} & -\\varphi^{-1}K \\\\ 0 & \\psi\\end{bmatrix}\n\\begin{bmatrix}\\Theta_{11} & \\Theta_{12}\\\\ \\Theta_{21} & \\Theta_{22}\\end{bmatrix}\n\\label{nov2}\n\\end{equation}\nis analytic on $\\Pi_+$. Let us observe that by the formulas \\eqref{oct1}, \\eqref{bigLySyl} and well known\nproperties of determinants,\n\\begin{align}\n\\det\\Theta(\\lambda)&=\\det\\left(I- {\\mathbf C} (\\lambda I - {\\mathbf A})^{-1} \\boldsymbol{\\Gamma}_{\\mathfrak D}^{-1} {\\mathbf C} J\\right)\\notag\\\\\n&=\\det\\left(I- {\\mathbf C} J{\\mathbf C} (\\lambda I - {\\mathbf A})^{-1} \\boldsymbol{\\Gamma}_{\\mathfrak D}^{-1}\\right)\\notag\\\\\n&=\\det(\\boldsymbol{\\Gamma}_{\\mathfrak D} (\\lambda I - {\\mathbf A})+\\boldsymbol{\\Gamma}_{\\mathfrak D}{\\mathbf A}+{\\mathbf A}^*\\boldsymbol{\\Gamma}_{\\mathfrak D})\\cdot\n\\det((\\lambda I - {\\mathbf A})^{-1} \\boldsymbol{\\Gamma}_{\\mathfrak D}^{-1})\\notag\\\\\n&=\\frac{\\det(\\lambda I+{\\mathbf A}^*)}{\\det(\\lambda I - {\\mathbf A})}=\\frac{\\det(\\lambda I-Z)\\det(\\lambda I+W^*)}\n{\\det(\\lambda I +Z^*)\\det(\\lambda I-W)}.\\notag\n\\end{align}\nSimilar computations show that \n$$\n\\det \\psi(\\lambda)=\\frac{\\det(\\lambda I-W)}{\\det(\\lambda I +W^*)},\\quad \n\\det \\varphi(z)=\\frac{\\det(\\lambda I-Z)}{\\det(\\lambda I +Z^*)}.\n$$\nCombining the three latter equalities with \\eqref{nov2} gives $\\det \\Sigma(\\lambda)\\equiv 1\\neq 0$. 
\nTherefore, for \n$G\\in\\mathcal S^{p\\times m}$, the total pole multiplicity of the function \n$$\n\\Upsilon=(\\Sigma_{11}G+\\Sigma_{12})(\\Sigma_{21}G+\\Sigma_{22})^{-1}\n$$\nis the same as the number of zeros of the denominator \n$$\\Sigma_{21}G+\\Sigma_{22}=\n\\psi(\\Theta_{21}G+\\Theta_{22}),\n$$\nthat is $\\kappa$, by the winding number argument.\nOn the other hand, since \n\\begin{equation} \\label{Takagi-Sarason}\n S=K+\\varphi \\Upsilon\\psi,\n \\end{equation}\nas can be seen from \\eqref{LFTparamkap} and \\eqref{nov2},\nthe total pole multiplicity of $S$ equals $\\kappa$ if no poles of $\\Upsilon$ \noccur at zeros of $\\varphi$ and \n$\\Psi$, that is, in $\\sigma(Z)\\cup\\sigma(W)$. We note that the form \n\\eqref{Takagi-Sarason} where $K,\\varphi,\\psi$ are part of the data \nand $\\Upsilon$ is a free meromorphic function with no poles on $i \n{\\mathbb R}$ but $\\kappa$ poles in $\\Pi_{+}$ (including possibly at \npoints of $\\sigma(W) \\cup \\sigma(Z)$) corresponds to a \nvariant of the interpolation problem \n\\eqref{BTOAint1}, \\eqref{BTOAint2}, \\eqref{BTOAint3} sometimes \ncalled the {\\em Takagi-Sarason problem} (see \\cite[Chapter 19]{BGR}, \\cite{Bolo04}). It \nturns out that discarding the side-condition \\eqref{LFTparampar} on \nthe Schur-class free-parameter function $G$ leads to a \nparametrization of the set of all solutions of the Takagi-Sarason \nproblem.\n\n\\subsection{Indefinite kernels and reproducing kernel Pontryagin \nspaces} \\label{S:RKHSkap}\nFrom the formula \\eqref{ad1} for $K_{\\Theta,J}$, we see from \nthe observability assumption on $({\\mathbf C}, {\\mathbf A})$ (equivalently, the \nobservability and controllability assumptions on $(\\sbm{V \\\\ U}, W)$ \nand $(Z, \\begin{bmatrix} X & -Y \\end{bmatrix})$) that \n$$\n \\nu_{-}(\\boldsymbol{\\Gamma}_{\\mathfrak D}) = {\\rm sq}_{-}(K_{\\Theta,J}).\n$$\nBy the general theory of reproducing kernel Hilbert spaces sketched \nin Section \\ref{S:RKHSkap}, it follows that ${\\mathcal H}(K_{\\Theta,J})$ is a Pontryagin space with \nnegative index $\\nu_{-}({\\mathcal H}(K_{\\Theta,J})$ equal to the number of \nnegative eigenvalues of $\\boldsymbol{\\Gamma}_{\\mathfrak D}$:\n$$\n \\nu_{-}({\\mathcal H}(K_{\\Theta,J})) = \\nu_{-}(\\boldsymbol{\\Gamma}_{\\mathfrak D}).\n$$\nWe conclude that the formula for $\\kappa$ in statement (1) agrees with that in statement (2)\nin Theorem \\ref{T:BTOA-NPkap}. \n\n\n\\subsection{The Grassmannian\/Kre\\u{\\i}n-space approach for the \ngeneralized Schur-class setting} \\label{S:Grass-kappa}\n\nThe Grassmannian approach extends to to the present setting as follows. \nThe suitable analog of Lemma \\ref{L:maxneg} is the following:\n\\begin{lemma} \\label{L:maxnegkap} \nSuppose that ${\\mathcal M}$ is a closed subspace of a Kre\\u{\\i}n-space ${\\mathcal K}$ \nsuch that the ${\\mathcal K}$-relative orthogonal \ncomplement ${\\mathcal M}^{[\\perp]}$ has negative signature equal $\\kappa$. If ${\\mathcal G}$ is a negative\nsubspace of ${\\mathcal M}$, then ${\\mathcal G}$ has codimension at least $\\kappa$ in any maximal negative \nsubspace of ${\\mathcal K}$. Moreover, the codimension of such a ${\\mathcal G}$ in any maximal negative \nsubspace of ${\\mathcal K}$ is equal to $\\kappa$ if and only if ${\\mathcal G}$ is a maximal negative\nsubspace of ${\\mathcal M}$. \n\\end{lemma}\n\nLet us now assume that we are given a $\\Pi_{+}$-admissible \ninterpolation data set $\\mathfrak D$ with $\\boldsymbol{\\Gamma}_{\\mathfrak D}$ \ninvertible. 
Then ${\\mathcal M}_{\\mathfrak D}$ given by \\eqref{cMrep} is a regular subspace of the \nKre\\u{\\i}n space $L^{2}_{p+m}(i {\\mathbb R})$ with the $J (= \n\\sbm{I_{p} & 0 \\\\ 0 & - I_{m}}$)-inner product. \n\nWith Lemma \\ref{L:maxnegkap} in hand, we argue that \n$\\nu_{-}({\\mathcal M}_{\\mathfrak D}^{[\\perp {\\mathcal K}]}) \\ge \\kappa$ is necessary for the existence of \n${\\mathcal S}_{\\kappa}^{p \\times m}(\\Pi_{+})$-functions $S$ analytic on \n$\\sigma(Z) \\cup \\sigma(W)$ satisfying the interpolation conditions\n\\eqref{BTOAint1}, \\eqref{BTOAint2}, \\eqref{BTOAint3}. \n\n\\smallskip\n\n\\noindent\n\\textbf{Proof of necessity for the generalized Schur-class setting.}\nIf $S \\in \n{\\mathcal S}_{\\kappa'}^{p \\times m}(\\Pi_{+})$ is a solution of the \ninterpolation conditions with $\\kappa' \\le \\kappa$, then as in Section \n\\ref{S:statekap}, there is a $m \\times m$-matrix \nfunction $\\Psi$ with $\\det \\Psi$ having no zeros in $\\sigma(Z) \\cup \n\\sigma(W)$ and having $\\kappa$ zeros in $\\Pi_{+}$ so that the \nsubspace ${\\mathcal G}_{S}: = \\sbm{ S \\\\ I_{m} } \\psi^{-1} \\Psi H^{2}_{m}(\\Pi_{+})$ \nsatisfies the inclusion \\eqref{sol-rep'}. We note that then \n${\\mathcal G}_{S}$ is a negative subspace of ${\\mathcal K}$ and the fact that $\\Psi$ has \n$\\kappa$ zeros means that ${\\mathcal G}_{S}$ has codimension $\\kappa$ in a \nmaximal negative subspace of ${\\mathcal K}: = \\sbm{ L^{2}_{p}(i {\\mathbb R}) \\\\ \nH^{2}_{m}(\\Pi_{+})}$. As ${\\mathcal G}_{S}$ is also a subspace of \n${\\mathcal M}_{\\mathfrak D}$, it follows by Lemma \\ref{L:maxnegkap} that \nthe negative signature of ${\\mathcal M}_{\\mathfrak D}^{[ \\perp {\\mathcal K}]}$ must be \nat least $\\kappa$. Thus $\\nu_{-}({\\mathcal M}_{\\mathfrak D}^{[ \\perp {\\mathcal K}]}) \n\\ge \\kappa$ is necessary for the existence of a solution $S$ of the \ninterpolation problem in the class ${\\mathcal S}_{\\kappa}^{p \\times \nm}(\\Pi_{+})$. As part of the sufficiency direction, we shall show \nthat conversely, if $\\kappa = \\nu_{-}({\\mathcal M}^{[ \\perp {\\mathcal K}]})$, then we \ncan always find solutions $S$ of the interpolation conditions in the \nclass ${\\mathcal S}_{\\kappa}^{p \\times m}(\\Pi_{+})$. This establishes \nthe formula in statement (2) of Theorem \\ref{T:BTOA-NPkap} as the \nminimal $\\kappa$ such that solutions of the interpolation \nconditions can be found in class ${\\mathcal S}_{\\kappa}^{p \\times m}(\\Pi_{+})$.\n\n\\smallskip\n\n\\noindent\n\\textbf{Proof of sufficiency for the generalized Schur-class setting.}\nLet us suppose that $\\boldsymbol{\\Gamma}_{\\mathfrak D}$ is invertible and hence \nthat ${\\mathcal M}_{\\mathfrak D}$ is a regular subspace of the Kre\\u{\\i}n \nspace $L^{2}_{p+m}(i {\\mathbb R})$ with the $J = \\sbm{ I_{p} & 0 \\\\ 0 \n& -I_{m}}$-inner product. By the results of \n\\cite{BH}, there is a $J$-phase function $\\Theta$ so that the \nBeurling-Lax representation \\eqref{cMBLrep} holds (we avoid using the \nformula \\eqref{oct1} for $\\Theta$ at this stage).\nWe now assume that ${\\mathcal M}_{\\mathfrak D}^{[ \\perp \n{\\mathcal K}]}$ has negative signature $\\nu_{-}({\\mathcal M}_{\\mathfrak D}^{[ \\perp \n{\\mathcal K}]})$ equal to $\\kappa$. 
We wish to verify the linear-fractional \nparametrization \\eqref{LFTparamkap}--\\eqref{LFTparampar} for the set \nof all ${\\mathcal S}_{\\kappa}^{p\\times m}(\\Pi_{+})$-class solutions of the \ninterpolation conditions.\n\nSuppose first that $S$ is any ${\\mathcal S}_{\\kappa}^{p \\times m}(\\Pi_{+})$-class \nsolution of the interpolation conditions. By the graph-space criterion\nfor such solutions, there is a $m \\times m$-matrix valued function \n$\\Psi$ analytic on $\\Pi_{+}$ with $\\det \\Psi$ having $\\kappa$ zeros \nbut none in $\\sigma(Z) \\cup \\sigma(W)$, so that \\eqref{sol-rep'} holds.\nBut then \n$$\n{\\mathcal G}_{S}: = \\sbm{ S \\\\ I_{m} } \\psi^{-1} \\Psi \nH^{2}_{m}(\\Pi_{+})\n$$ \nis a shift-invariant negative subspace of ${\\mathcal K}$\ncontained in ${\\mathcal M}_{\\mathfrak D}$ and having codimension $\\kappa$ in a maximal \nnegative subspace of ${\\mathcal K}$. It now follows from Lemma \n\\ref{L:maxnegkap} that ${\\mathcal G}_{S}$ is maximal negative as a subspace of \n${\\mathcal M}_{\\mathfrak D}$. As ${\\mathcal G}_{S}$ is also shift-invariant and \nmultiplication by $\\Theta$ is a Kre\\u{\\i}n-space isomorphism from \n$H^{2}_{p+m}(\\Pi_{+})$ onto ${\\mathcal M}_{\\mathfrak D}$, it follows that\n${\\mathcal G}_{S}$ is the image under multiplication by $\\Theta$ of a \nshift-invariant $J$-maximal negative subspace of \n$H^{2}_{p+m}(\\Pi_{+})$, i.e.,\n\\begin{equation} \\label{Srep'}\n {\\mathcal G}_{S}: = \\begin{bmatrix} S \\\\ I_{m} \\end{bmatrix} \\cdot \\psi^{-1} \\Psi \n\\cdot H^{2}_{m}(\\Pi_{+}) = \\Theta \\cdot \\begin{bmatrix} G \\\\ I_{m} \n\\end{bmatrix} \\cdot H^{2}_{m}(\\Pi_{+})\n\\end{equation}\nfor a ${\\mathcal S}^{p \\times m}(\\Pi_{+})$-class function $G$. From the fact \nthat $\\Psi$ has no zeros in $\\sigma(Z) \\cup \\sigma(W)$ one can read \noff from \\eqref{Srep'} that $\\psi (\\Theta_{21} G + \\Theta_{22})$ has no zeros in \n$\\sigma(Z) \\cup \\sigma(W)$ and from the representation \\eqref{Srep'} \nthe linear-fractional representation \\eqref{LFTparamkap} follows as \nwell. From the subspace identity \\eqref{Srep'} one can also read off \nthat there is a $m \\times m$ matrix function $Q$ with $Q^{\\pm 1} \\in \nH^{\\infty}_{m \\times m}(\\Pi_{+})$ such that\n$$\nS \\psi^{-1} \\Psi = (\\Theta_{11} G + \\Theta_{12}) Q\\quad\\mbox{and}\\quad\n\\psi^{-1} \\Psi = (\\Theta_{21} G + \\Theta_{22})Q.\n$$\nSolving the second equation for $Q$ then gives\n$$\n Q = (\\Theta_{22} G + \\Theta_{22})^{-1} \\psi^{-1} \\Psi.\n$$\nSubstituting this back into the first equation and then solving for \n$S$ leads to the linear-fractional representation \\eqref{LFTparamkap} \nfor $S$.\n\n\\smallskip\n\nLet now $G$ be any Schur-class function satisfying the additional \nconstraint \\eqref{LFTparampar}. Since multiplication by $\\Theta$ is \na Kre\\u{\\i}n-space isomorphism from $H^{2}_{p+m}(\\Pi_{+})$ to \n${\\mathcal M}_{\\mathfrak D}$ and $\\sbm{ G \\\\ I_{m}} H^{2}_{m}(\\Pi_{+})$ is a \nmaximal negative shift-invariant subspace of ${\\mathcal M}_{\\mathfrak D}$, it follows that \n$\\Theta \\cdot \\sbm{ G \\\\ I_{m}} H^{2}_{m}(\\Pi_{+})$ is maximal \nnegative as a subspace of ${\\mathcal M}_{\\mathfrak D}$.\nBy Lemma \\ref{L:maxnegkap}, it follows that $\\Theta \\cdot \\sbm{ G \\\\ \nI_{m}} H^{2}_{m}(\\Pi_{+})$ has codimension $\\kappa = \n\\nu_{-}({\\mathcal M}_{\\mathfrak {\\mathcal D}}^{[ \\perp] {\\mathcal K}})$ in a maximal negative \nsubspace of ${\\mathcal K}$. 
As $\\Theta \\cdot \\sbm{ G \\\\ \nI_{m}} H^{2}_{m}(\\Pi_{+})$ is also shift-invariant, it follows that\nthere must be a contractive matrix function $S$ on the unit circle \nand a bounded analytic $m \\times m$-matrix function $\\Psi$ on \n$\\Pi_{+}$ such that $\\Psi$ has exactly $\\kappa$ zeros in $\\Pi_{+}$\nand $\\Psi$ is bounded and invertible on $i {\\mathbb R}$ so that\n\\begin{equation} \\label{Srep}\n\\begin{bmatrix} S \\\\ I_{m} \\end{bmatrix} \\cdot \\psi^{-1} \\Psi \n \\cdot H^{2}_{m}(\\Pi_{+}) = \\Theta \\cdot \\begin{bmatrix} G \\\\ I \n\\end{bmatrix} \\cdot H^{2}_{m}(\\Pi_{+}).\n\\end{equation}\nIn particular, $\\psi^{-1} \\Psi\\cdot H^{2}_{m}(\\Pi_{+}) \\subset \n(\\Theta_{21} G + \\Theta_{22}) \\cdot H^{2}_{m}(\\Pi_{+})$, so\nthere is a $Q \\in H^{\\infty}_{p \\times m}(\\Pi_{+})$ so that\n$ \\psi^{-1} \\Psi = (\\theta_{21} G + \\Theta_{22}) Q$, i.e., so that\n$$\n \\Psi = \\psi (\\Theta_{21} G + \\Theta_{22}) Q.\n$$\nAs $\\psi (\\Theta_{21} G + \\Theta_{22})$ has no zeros in $\\sigma(Z) \n\\cup \\sigma(W)$ by assumption, it follows that none of the zeros of \n$\\Psi$ are in $\\sigma(Z) \\cup \\sigma(W)$. By the criterion \n\\eqref{sol-rep'} for ${\\mathcal S}_{\\kappa'}^{p \\times m}(\\Pi_{+})$-class \nsolutions of the interpolation conditions with $\\kappa' \\le \\kappa$, \nwe read off from \\eqref{Srep} that $S$ so constructed is a \n${\\mathcal S}_{\\kappa'}^{p \\times m}(\\Pi_{+})$-class solution of the \ninterpolation conditions for some $\\kappa' \\le \\kappa$. However, from \nthe proof of the necessity direction already discussed, it follows \nthat necessarily $\\kappa' \\ge \\kappa$. Thus $S$ so constructed is a \n${\\mathcal S}_{\\kappa'}^{p \\times m}(\\Pi_{+})$-class solution of the \ninterpolation conditions. The subspace identity \\eqref{Srep} leads to\nthe formula \\eqref{LFTparamkap} for $S$ in terms of $G$ just as in \nthe previous paragraph.\n\n\n\\begin{remark} \\label{R:summary}\nWe conclude that the Grassmannian approach extends to the generalized \nSchur-class setting. As in the classical Schur-class case, \none can avoid the elaborate winding-number argument used in Section \n\\ref{S:statekap} by using Kre\\u{\\i}n-space geometry (namely, the fact \nthe a Kre\\u{\\i}n-space isomorphism maps maximal negative \nsubspaces to maximal negative subspaces combined with Lemma \n\\ref{L:maxnegkap}), unlike the story for the Fundamental Matrix Inequality \nPotapov approach, which avoids the winding number argument in an \nelegant way for the definite case but appears to still require such \nan argument for the indefinite generalized Schur-class setting.\n\\end{remark}\n\n\\subsection{State-space versus Grassmannian\/Kre\\u{\\i}n-space-geometry solution criteria in the generalized \nSchur-class setting} \\label{S:synthesiskap}\nThe work of the previous subsections shows that each of conditions \n(1) and (2) in Theorem \\ref{T:BTOA-NPkap} is equivalent to the \nexistence of ${\\mathcal S}_{\\kappa}^{p \\times m}(\\Pi_{+})$-class solutions f \nthe interpolation conditions \\eqref{BTOAint1}--\\eqref{BTOAint3}, \nand that condition (2) is equivalent to condition \n(1). It follows that conditions (1), (2), (3) are all equivalent to \neach other. 
Here we wish to see this latter fact directly in a more \nconcrete from, analogously to what is done in Section \n\\ref{S:synthesis} above for the classical Schur-class setting.\n\n\\smallskip\n\nAs in Section \\ref{S:synthesis}, we impose an assumption a little \nstronger than the condition that $\\boldsymbol{\\Gamma}_{\\mathfrak D}$ be \ninvertible, namely, the Nondegeneracy Assumption: {\\em \n${\\mathcal M}_{\\mathfrak D}$, ${\\mathcal M}_{\\mathfrak D} \\cap H^{2}_{p+m}(\\Pi_{+})$, \nand ${\\mathcal M}_{\\mathfrak D} \\cap H^{2}_{p+m}(\\Pi_{-})$ are all regular \nsubspaces of $L^{2}_{p+m}(i {\\mathbb R})$ (with the $J\\, (= \\sbm{ I_{p} \n& 0 \\\\ 0 & -I_{m} })$-inner product).} Then Lemmas \\ref{L:cMDperp} and \n\\ref{L:decom} go through with no change. Lemma \\ref{L:MperpK} goes \nthrough, but with the {\\em in particular} statement generalized to \nthe following (here $\\nu_{-}({\\mathcal L})$ refers to negative signature of \nthe given subspace ${\\mathcal L}$ of $L^{2}_{p+m}(i {\\mathbb R})$ with respect \nto the $J$-inner product):\n\\begin{itemize}\n \\item {\\em In particular, $\\nu_{-}({\\mathcal M}_{\\mathfrak D}^{[ \\perp \n {\\mathcal K}]}) = \\kappa$ if and only if}\n $$\n \\nu_{-}({\\mathcal M}_{\\mathfrak D}^{[\\perp {\\mathcal K}]})_{0}) = \\kappa\n $$\n{\\em if and only if}\n$$\n \\nu_{-}({\\mathbb G}_{T^{[*]}}) + \\nu_{-}(({\\mathcal M}_{\\mathfrak D}^{[ \n \\perp]})_{1}) = \\kappa.\n$$\n \\end{itemize}\nLemma \\ref{L:posnegsubspaces} has the more general form:\n\\begin{enumerate}\n \\item $\\nu_{-}({\\mathbb G}_{T^{[*]}}) =\\kappa$ {\\em if and only if }\n $\\nu_{-}(I + T T^{[*]}) = \\kappa$ ({\\em where $I + T T^{[*]}$ is \n considered as an operator on } $\\operatorname{Ran} J ({\\mathcal C}_{Z, \n \\sbm{ X & -Y}})^{*}$).\n \n \\item $\\nu_{-}(({\\mathcal M}_{\\mathfrak D}^{[ \\perp]})_{1}) = \n \\nu_{-}(\\operatorname{Ran} {\\mathcal O}_{\\sbm{V \\\\ U}, W})$.\n \n \\item $\\nu_{-}(\\widehat {\\mathbb G}_{T}) = \\nu_{-}( I + T^{[*]} \n T)$ ({\\em where } $I + T^{[*]}T$ {\\em is considered as an operator \n on } $\\operatorname{Ran} {\\mathcal O}_{\\sbm{ V \\\\ U}, W}$).\n \n \\item $\\nu_{+}({\\mathcal M}_{{\\mathfrak D},1}) = \\nu_{-}(\\operatorname{Ran} \n J ({\\mathcal C}_{Z, \\sbm{ X & -Y}})^{*})$.\n \\end{enumerate}\n Lemma \\ref{L:GammaD-factored} is already in general form but its \n corollary, namely Lemma \\ref{L:GammaDpos}, can be given in a more \n general form:\n \\begin{itemize}\n \\item {\\em The following conditions are equivalent:}\n\\begin{enumerate}\n \\item $\\nu_{-}(\\boldsymbol{\\Gamma}_{\\mathfrak D}) = \\kappa$.\n \n \\item $\\nu_{+}(\\operatorname{Ran} {\\mathcal O}_{\\sbm{V \\\\ U}, W}) + \n \\nu_{-}({\\mathbb G}_{T^{[*]}}) = \\kappa.$\n \n \\item $\\nu_{-}( \\operatorname{Ran} J ({\\mathcal C}_{Z, \\sbm{ X & \n -Y}})^{*} ) + \\nu_{+}(\\widehat{\\mathbb G}_{T}) = \\kappa.$\n \\end{enumerate}\n \\end{itemize}\n Putting the pieces together, we have the following chain of \n reasoning. 
By the generalized version of Lemma \\ref{L:MperpK}, we \n have\n \\begin{equation} \\label{1}\n \\nu_{-}({\\mathcal M}_{\\mathfrak D})^{[ \\perp {\\mathcal K}]} = \\nu_{-}({\\mathbb \n G}_{T^{[*]}}) + \\nu_{-}(({\\mathcal M}_{\\mathfrak D}^{[ \\perp]})_{1})\n \\end{equation}\n where, by the generalized version of Lemma \\ref{L:posnegsubspaces} \n part (2),\n $$\n \\nu_{-}(({\\mathcal M}_{\\mathfrak D}^{[\\perp]})_{1}) = \n \\nu_{-}(\\operatorname{Ran} {\\mathcal O}_{\\sbm{V \\\\U}, W}).\n $$\n Thus \\eqref{1} becomes\n $$\n \\nu_{-}({\\mathcal M}_{\\mathfrak D})^{[ \\perp {\\mathcal K}]} = \\nu_{-}({\\mathbb G}_{T^{[*]}}) \n + \\nu_{-}(\\operatorname{Ran} {\\mathcal O}_{\\sbm{V \\\\U}, W}).\n $$\n By (1) $\\Leftrightarrow$ (2) in the generalized Lemma \n \\ref{L:GammaD-factored}, we get\n $$\n \\nu_{-}({\\mathcal M}_{\\mathfrak D}^{[ \\perp {\\mathcal K}]}) = \n \\nu_{-}(\\boldsymbol{\\Gamma}_{\\mathfrak D})\n $$\n which gives us (1) $\\Leftrightarrow$ (2) in Theorem \n \\ref{T:BTOA-NPkap}.\n \n\\smallskip\n\n To give a direct proof of (1) $\\Leftrightarrow$ (3) in Theorem \n \\ref{T:BTOA-NPkap}, we note the concrete identification \n \\eqref{HKThetaJid} of the space ${\\mathcal H}(K_{\\Theta,J})$ (with \n $J$-inner product on $\\operatorname{Ran} (P^{J}H^{2}_{p+m}(\\Pi_{+}) \n - P^{J}_{{\\mathcal M}_{\\mathfrak D}})$ which again leads to the more compact \n identification \\eqref{HKThetaJ} from which we immediately see that\n $$\n \\nu_{-}({\\mathcal H}(K_{\\Theta,J})) = \\nu_{-}(\\operatorname{Ran} J({\\mathcal C}_{Z, \n \\sbm{ X & -Y}})^{*}) + \\nu_{+}(\\widehat {\\mathbb G}_{T}).\n $$\n By (1) $\\Leftrightarrow$ (3) in the generalized Lemma \n \\ref{L:GammaDpos}, this last expression is equal to \n $\\nu_{-}(\\boldsymbol{\\Gamma}_{\\mathfrak D})$, and we have our more concrete \n direct proof of the equivalence of conditions (1) and (3) in Theorem \n \\ref{T:BTOA-NPkap}.\n\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{ Introduction}\n\\label{intro}\nThe notion of a ``Siegel zero\" was first introduced in the context of Dirichlet\n$L$-functions, $L(s,\\chi)$, where $\\chi$ is a quadratic Dirichlet character\nof conductor $D.$ \nRoughly speaking, a Siegel zero of $L(s,\\chi)$ is a zero\non the real axis close to 1. More precisely, given \n$\\epsilon>0$, an $\\epsilon$-Siegel zero of $L(s,\\chi)$ is a real number\n$\\beta$ in the interval \n$(1-\\epsilon (\\log D)^{-1},1)$ such that $L(\\beta,\\chi)=0$. \nThe idea of a \"Siegel zero\" has been generalized to\n$L$-functions associated with Maass forms by Hoffstein and \nLockhart ~\\cite{hofflock}. In this context, the conductor $D$ is replaced by\n$(\\lambda N+1)$, where $\\lambda$ and $N$ are the Laplace eigenvalue\nand level of a Maass\nform, respectively.\n\nThe connection with Eisenstein series comes from a 1975 paper of Goldfeld\n~\\cite{dg} \nin which he proves that if one assumes that the class number, $h(-d)$, of \nthe imaginary quadratic field $k$ of discriminant $-d$\nis small, \nthen the Siegel zero is given by a certain asymptotic formula. 
This \nasymptotic formula is computed by writing the Dedekind zeta function\n$\\zeta_k$ in \ntwo ways: \n\\begin{equation}\\label{dg}\n\\zeta(s)L(s,\\chi_{-d})=\\zeta_k(s)= \\frac{1}{|\\frak{o}_k^{\\times}|}\n\\sum_C 2^s d^{-\\frac{s}{2}}E(z_C,s),\n\\end{equation}\n where \n$E(z,s)$ is the nonholomorphic Eisenstein series on the upper half\nplane, the sum\nis over Heegner points $z_C$ corresponding to ideal classes $C$ of $k$,\nand $\\chi_{-d}$ is the unique quadratic character of conductor $d$ such that\n$\\chi_{-d}(-1)=-1.$\nIt is interesting, in this context, to \nnote that for any fixed $\\epsilon,$ and \n$d$ sufficiently large, a positive proportion of the functions\n$E(z_C,s)$ vanish for some $s$ \nin the interval $(1-\\epsilon (\\log d)^{-1},1)$. \nTo prove this one uses the equidistribution of Heegner points \nand the following result:\n\n\\begin{thm} ($GL(2)$ Case, due to Bateman and Grosswald \\cite{bg}): \nGiven $\\epsilon >0,$ there exists $Y >0$ such that\nif $y>Y$ then for each $x,$ $E(x+iy, \\beta(x,y)) =0$ for some \n$\\beta(x,y) \\in (1-\\epsilon(\\log y)^{-1},1),$ and for all $x,\\epsilon$,\nwe have\n$$1-\\beta(x,y)\\sim \\frac{3}{\\pi y}, \\mbox{ as } y \\rightarrow \\infty.$$ \n\\end{thm}\n\nHoffstein \\cite{jhoff} has proved an analogous result for Hilbert modular\nEisenstein series. We extend it to the generality of\nMoeglin and Waldspurger \\cite{mw}. However, our \nresult will be weaker than Hoffstein's in that we will not obtain a \nprecise error term, and in that we will not obtain any estimates for the \noptimal value of $Y$. \n\nThe author wishes to thank Carlos Moreno, who suggested that the result, \noriginally proved only for $GL_n$ over $\\ensuremath{{\\mathbb Q}}$ would go through in this \ngenerality, as well as Dorian Goldfeld, Herv\\'e Jacquet, and David \nGinzburg, for help and advice along the way.\n\n\\subsection{Notation}\n\\label{notation}\n\nFor the most part, we follow the notation of Moeglin and\nWalspurger \\cite{mw}. This entails \ncertain redundancies: for example the symbol $k$ is used both for\nthe global field and an element of the maximal compact, while $M$ is \nboth a Levi subgroup and an intertwining operator. All references\nare to \\cite{mw} until section 4\n\nThus, let $k$ be a global field, $\\ensuremath{{\\mathbb A}}$ the adeles of $k$,\n$G$ a\nconnected reductive algebraic group defined over $k$, and $\\mathbf{G}$\na finite central covering of $G(\\ensuremath{{\\mathbb A}})$ such that $G(k)$ lifts to a \nsubgroup of $\\ensuremath{\\mathbf{G}}$.\n\nFix once and for all a choice of minimal parabolic $P_0$ of $G$ \nand a Levi subgroup\n$M_0$ of $P_0,$ both defined over $k$. \nThis also fixes a definition of ``standard'' for parabolics and Levis (p.4).\nLet $T_0$ be the maximal split torus in the center of $M_0$ and\n$\\Delta_0$ the set of simple positive roots for $T_0$ determined by\n$P_0.$ \nFix a maximal compact subgroup $\\mathbf{K}$ of $\\ensuremath{\\mathbf{G}}$,\nas in I.1.4.\n\nLet $P$ be a standard parabolic subgroup of $G$, and let $U$ be its \nunipotent radical, and $M$ its standard Levi. \nThen $U(\\ensuremath{{\\mathbb A}})$ lifts canonically into $\\ensuremath{\\mathbf{G}}$. \nLet $\\ensuremath{\\mathbf{M}}$ denote the preimage of $M(\\ensuremath{{\\mathbb A}})$ in $\\ensuremath{\\mathbf{G}}$. 
\nIf $\\chi$ is a rational character of $M$, we obtain a map \n$\\ensuremath{\\mathbf{M}}\\mapsto \\ensuremath{{\\mathbb R}}^{\\times}_+$ by projecting to $M(\\ensuremath{{\\mathbb A}})$, using $\\chi$ to \nget to $\\ensuremath{{\\mathbb A}}^{\\times}$, and then taking the absolute value. \nLet $\\ensuremath{\\mathbf{M}}^1$ denote the intersection of the kernels of all such maps.\nLet $X_M^{\\ensuremath{\\mathbf{G}}}$ and $\\mbox{Re} X_M^{\\ensuremath{\\mathbf{G}}}$ \ndenote the groups of continuous homomorphisms of $\\ensuremath{\\mathbf{M}}$ to\n$\\ensuremath{{\\mathbb C}}^{\\times}$ and $\\ensuremath{{\\mathbb R}}^{\\times}_+$ respectively, which\nare trivial on $\\ensuremath{\\mathbf{M}}^1$ and on the center of $\\ensuremath{\\mathbf{G}}$. When $k$ is a \nnumber field, $X_M^{\\ensuremath{\\mathbf{G}}}$ may be identified with a complex vector\nspace $(\\frak{a}_M^{\\ensuremath{\\mathbf{G}}})^*$. When $k$ is a function field, it has \na finite number of connected components, and there is a natural \nprojection from $(\\frak{a}_M^{\\ensuremath{\\mathbf{G}}})^*$ to the identity\ncomponent, which still identifies $\\mbox{Re} X_M^\\ensuremath{\\mathbf{G}}$ with a real vector \nspace. See I.1.4 and I.1.6.\n\nThere is a unique map $m_P:\\ensuremath{\\mathbf{G}}\\longrightarrow \\ensuremath{\\mathbf{M}}^1\\ensuremath{\\backslash} \\ensuremath{\\mathbf{M}}$ defined by requiring that if $g=umk$ for\nsome $u\\in U(\\ensuremath{{\\mathbb A}}), m\\in \\ensuremath{\\mathbf{M}}$ and $k \\in \\ensuremath{\\mathbf{K}}$, then $m_P(g) =\\ensuremath{\\mathbf{M}}^1m.$ \nIf $\\varphi$ is a function $U(\\ensuremath{{\\mathbb A}})M(k)\\ensuremath{\\backslash} \\ensuremath{\\mathbf{G}} \\mapsto \\ensuremath{{\\mathbb C}}$ and $k\\in \\ensuremath{\\mathbf{K}}$, \nlet us define a function $\\varphi_{k}$ on $M(k)\\ensuremath{\\backslash} \\ensuremath{\\mathbf{M}}$ by \n$\\varphi_{k}(m)=m^{-\\rho_P}\\varphi(mk),$ where $\\rho_P$ is half the sum of the\nroots of $M$ in Lie $U$. \nLet $\\phi_{\\pi}$ be an automorphic form on\n$U(\\ensuremath{{\\mathbb A}})M(k)\\ensuremath{\\backslash} \\ensuremath{\\mathbf{G}}$ such that\nfor each $k$, the function $\\phi_{\\pi,k}$ is a cusp form on $\\ensuremath{\\mathbf{M}}$ which \ngenerates a\nsemisimple isotypic submodule of type\n $\\pi,$ where $\\pi$ is an automorphic subrepresentation of \n$\\ensuremath{\\mathbf{M}}$ in the sense of \\cite{mw}, page 78.\n\nFor each \n$\\lambda \\in X_M^{\\ensuremath{\\mathbf{G}}}$, let \n$$\\lambda\\phi_{\\pi}(g)=m_{P}(g)^{\\lambda}\\phi_{\\pi}(g).$$ Then for each $k$, the function $\\lambda \\phi_{\\pi,k}$ is a cusp form on $\\ensuremath{\\mathbf{M}}$ which generates a semisimple isotypic submodule of type $\\pi\\otimes \\lambda.$\n\nFor $\\lambda$ in a suitable cone in $X_M^{\\ensuremath{\\mathbf{G}}}$, the Eisenstein series is defined by the following convergent sum:\n$$E(\\lambda \\phi_{\\pi},\\pi\\otimes \\lambda)(g)=\\sum_{\\gamma \\in P(k)\\ensuremath{\\backslash} G(k)}\\lambda \\phi_{\\pi}(\\gamma g).$$ It is holomorphic for $\\lambda$ in the domain of convergence, and extends to a meromorphic \nfunction on all of $X_M^{\\ensuremath{\\mathbf{G}}}$. We may assume without loss of generality\nthat $\\pi$ is unitary. 
This amounts to,\nat most, altering our choice of ``base point.'' (See I.3.3)\n(We deviate from \\cite{mw} in viewing the \nEisenstein series as a function on $X_M^{\\ensuremath{\\mathbf{G}}}$, rather than their $\\frak{P}$,\nwhich is a principal homogeneous space for the quotient of $X_M^{\\ensuremath{\\mathbf{G}}}$ by \na certain finite group.) \n\nSuppose that $\\phi_{\\pi}$ is real valued. Then \n$E(\\lambda \\phi_{\\pi},\\pi\\otimes \\lambda)$ is real valued for \n$\\lambda \\in \\mbox{Re} X_M^{\\ensuremath{\\mathbf{G}}}.$\nLet us fix $P$ and $\\phi_{\\pi}$ once and for all. \nFor each $\\alpha \\in \\Delta_0,$ let \n$P_{\\alpha}=M_{\\alpha}U_{\\alpha}$\ndenote the standard maximal parabolic such that every element of \n$\\Delta_0$ is a root of $M_{\\alpha}$ except $\\alpha.$ \nOur choice of $\\Delta_0$ determines a set of positive roots for \nthe action of $T_0$ on any Levi $M'$ which we denote by $R^+(T_0,M')$.\n\nLet $W$ denote the Weyl group of $G$, and $W_M$ that of $M$. \nLet $W(M)$ be the set of $w\\in W$ of\nminimal length in their class $wW_M$, such that $wMw^{-1}$ is a \nstandard Levi of $G$.\nLet $W(M,M_{\\alpha})$ denote the set of $w \\in W$ such that $w^{-1}\\theta >0$ for every $\\theta \\in R^+(T_0,M_{\\alpha})$ and $wMw^{-1}$ is a standard \nLevi subgroup of $M_{\\alpha}.$ \n\nIf $k$ is a function field we fix once and for all a place $v_0$, \nand a uniformizing parameter $\\frak{w}$, and let $q=|\\frak{w}|^{-1}_{v_0}.$ \nLet\n$\\frak{m}=\\ensuremath{{\\mathbb R}}_+^{\\times}$ in the number\nfield case or $\\frak{w}^{\\ensuremath{{\\mathbb Z}}}$ in the function field case. Thus $\\ensuremath{\\frak{m}}$ may be \nembedded in $\\ensuremath{{\\mathbb A}}^{\\times}$ either at $k_{v_0}$ or diagonally at the infinite \nplaces, as a subgroup on which \n the absolute value is injective. One then has a \nsubgroup of $T_0(\\ensuremath{{\\mathbb A}})$ isomorphic to $\\ensuremath{\\frak{m}}^R$, where $R$ is the rank of $T_0$, \nand\nin I.2.1 of \\cite{mw} this is extended to a subgroup $A_{\\ensuremath{\\mathbf{M}}_0}$ of $\\ensuremath{\\mathbf{G}}$, \nstill isomorphic to $\\ensuremath{\\frak{m}}^R.$ We then define $A_{\\ensuremath{\\mathbf{M}}_{\\alpha}}=A_{\\ensuremath{\\mathbf{M}}_0}\\cap \nZ_{\\ensuremath{\\mathbf{M}}_{\\alpha}}.$ \n\nIf $\\lambda \\in \\mbox{Re} X_M^{\\ensuremath{\\mathbf{G}}}$, then $\\left. \\lambda \\right|_{A_{\\ensuremath{\\mathbf{M}}_{\\alpha}}}$\nfactors through the quotient $A_{\\ensuremath{\\mathbf{M}}_{\\alpha}}\/A_\\ensuremath{\\mathbf{G}}$, and is trivial on any \ntorsion in this quotient. It follows that \nwe may fix a map\n$\\tilde{\\alpha}:\\ensuremath{\\frak{m}} \\rightarrow A_{\\ensuremath{\\mathbf{M}}_{\\alpha}}$ such that\n$\\left. 
\\lambda \\right|_{A_{\\ensuremath{\\mathbf{M}}_{\\alpha}}}$ is determined by $\\lambda \\circ \n\\tilde{\\alpha}.$ \nIn the function field case, we also denote by $\\tilde{\\alpha}$ the \ncomposition of $\\tilde{\\alpha}$ with $|\\cdot |_{v_0}^{-1}:q^{\\ensuremath{{\\mathbb Z}}}\n\\rightarrow \\frak{w}^\\ensuremath{{\\mathbb Z}}.$\nThen $\\lambda \\circ \\ensuremath{\\tilde{\\alpha}}$ is a continuous homomorphism of a subgroup of \n$\\ensuremath{{\\mathbb R}}_+^{\\times}$ to $\\ensuremath{{\\mathbb R}}_+^{\\times}$, so \nwe may define $\\langle\\tilde{\\alpha},\\lambda\\rangle$ to be the \nunique real number such that\n$$\\lambda\\circ\\tilde{\\alpha}(y)=y^{\\langle\\tilde{\\alpha},\\lambda\\rangle}\n.$$\nReplacing $\\tilde{\\alpha}$ by the map $y\\mapsto \\tilde{\\alpha}(y^{-1})$ \nif necessary, we may add the stipulation that \n$\\langle \\tilde{\\alpha},\\alpha\\rangle >0.$\nWe will also need to refer to the Siegel set $S$, defined in section I.2.1 of \n\\cite{mw}, using a compact subset $\\omega$ of $\\ensuremath{\\mathbf{P}}_0.$\n\n{\\bf Definition: } Let us say that a map $\\Lambda : \\ensuremath{{\\mathbb C}}\\rightarrow X_M^{\\ensuremath{\\mathbf{G}}}$ is \n\\emph{elementary} if it is holomorphic and the restriction to $\\ensuremath{{\\mathbb R}}$ is an \naffine map into the real vector space $\\mbox{Re} X_M^\\ensuremath{\\mathbf{G}}.$\n\nThe constant term of $E(\\lambda\\phi_\\pi,\\lambda\\otimes\\pi)$ along $P_\\alpha$\n(see I.2.6)\nis given in terms of Eisenstein series $E^{M_{\\alpha}},$ \ndefined analogously, with $M_{\\alpha}$ replacing $G$, and\nintertwining operators $M(w,\\pi)$ defined in II.1.6:\n\\begin{equation}\\label{ctfrommw}\nE_{P_{\\alpha}}(\\lambda\\phi_{\\pi},\\lambda\\otimes\\pi)=\n\\sum_{w\\in W(M,M_{\\alpha})}E^{M_{\\alpha}}\n(M(w,\\lambda\\otimes \\pi)\\lambda\\phi_{\\pi},w(\\lambda \\otimes \\pi)).\n\\end{equation}\n(See II.1.7).\n\nSuppose that $E$ has a singularity along a root \nhyperplane $H$, associated to a root $\\theta$ as in IV.1.6. \nOur theorem applies only to the case when $H_\\ensuremath{{\\mathbb R}}:=H\\cap \\mbox{Re} X_M^\\ensuremath{\\mathbf{G}}$ \nis non-empty.\nIn this case, $H_\\ensuremath{{\\mathbb R}}$ will be of the form\n$$\\{\\lambda \\in \\mbox{Re} X_M^{\\ensuremath{\\mathbf{G}}}: \\langle \\lambda, \\check{\\theta} \\rangle=c_H\\},$$ \nfor some $c_H\\in \\ensuremath{{\\mathbb R}}$. See\nIV.1.6 and I.1.11. \nHere $\\check{\\theta}$ may be taken to be the coroot associated to a positive\nroot $\\theta,$ or its projection to the dual of $\\mbox{Re} X_M^\\ensuremath{\\mathbf{G}}$: the set $H$ is\nthe same either way.\nWe say that $\\lambda\\in H$ is \\emph{generic for $\\alpha$} \nif $\\lambda$ does not lie in \nany other hyperplane along which $E$ has a singularity, and\n$\\langle\\tilde{\\alpha},w_1\\lambda\\rangle = \n\\langle\\tilde{\\alpha},w_2\\lambda\\rangle$ for $w_1,w_2\\in W(M,M_{\\alpha})$\niff $w_1=w_2$.\nIt follows that for $W(M,M_\\alpha)$ nonempty, and $\\lambda$ generic \nfor $\\alpha,$\nthere is a unique \n$\\ensuremath{w_{\\mbox{\\tiny max}}}(\\lambda,\\alpha)$ \nsuch that $\\langle\\tilde{\\alpha},w\\lambda\\rangle$ is maximal. 
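\n\nBefore stating the main result, it may help to indicate, informally, what these objects are in the most classical case (this sketch is for orientation only and is not used in the sequel): for $G=GL(2)$ over $\\ensuremath{{\\mathbb Q}}$, with $P=P_0$ the Borel and $M=T_0$, the space $\\mbox{Re} X_M^{\\ensuremath{\\mathbf{G}}}$ is one dimensional and may be identified with the classical variable $s$, and we may take $\\tilde{\\alpha}(y)$ to be $\\mbox{diag}(y,1)$, embedded at the infinite place. The set $W(M,M_{\\alpha})$ then has two elements, corresponding to the two terms $y^{s}$ and $c(s)y^{1-s}$ of the constant term of $E(x+iy,s)$, and the relevant singular hyperplane $H$ is the point $s=1$, where $c(s)$ has its pole; in this language the theorem below recovers the Bateman--Grosswald asymptotic $1-\\beta(x,y)\\sim 3\/(\\pi y)$ quoted in the introduction.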
\n\n\\subsection{Statement of Main Result}\n\\label{statement}\n\n\\begin{thm}\\label{mainthm}\nFix a root hyperplane $H$ along which the Eisenstein series is singular, \na root $\\alpha$,\nan elementary map $\\Lambda$ such that $\\Lambda(\\ensuremath{{\\mathbb C}})\\cap H=\\Lambda(0)$ is\ngeneric for $\\alpha$, and elements $p\\in \\omega, k\\in \\ensuremath{\\mathbf{K}}.$ \nLet $$E(y,s)=E(\\Lambda(s)\\phi_{\\pi}, \\Lambda(s)\\otimes \\pi)\n(p\\tilde{\\alpha}(y)k),$$\nand for each $w\\in W(M,M_\\alpha),$ let \n$$E_w(y,s)=E^{M_\\alpha}(\nM(w,\\Lambda(s)\\otimes \\pi)\\Lambda(s)\\phi_{\\pi},w( \\Lambda(s)\\otimes \\pi))\n(p\\tilde{\\alpha}(y)k).$$\nWe say that $\\beta$ is an $\\epsilon$-Siegel zero of $E(y,\\sigma)$ if\n$\\beta \\in (-\\epsilon (\\log y)^{-1},\\epsilon (\\log y)^{-1})$ and\n$E(y,\\beta)=0.$\nLet $\\ensuremath{W_{\\mbox{\\tiny sng}}}(H,\\alpha,\\Lambda,pk)=\\{w\\in W(M,M_\\alpha): E_w(1,s) \\mbox{ has a \npole at }s=0\\}.$\nIf $H,\\alpha,p,k,$ and $\\Lambda$ satisfy\n\\begin{enumerate}\n\\item{The set $W(M,M_\\alpha)$ is nonempty,\nand $\\ensuremath{w_{\\mbox{\\tiny max}}}(\\Lambda(0),\\alpha)\\notin \\ensuremath{W_{\\mbox{\\tiny sng}}}(H,\\alpha,\\Lambda,pk),$}\n\\item{$E_{\\ensuremath{w_{\\mbox{\\tiny max}}}(\\Lambda(0),\\alpha)}(1,0)\\neq 0,$}\n\\item{$E(y,s)$ has a simple pole at $s=0.$}\n\\end{enumerate}\nthen we have the following:\n\\begin{description}\n\\item[A)]{ \nIf $\\ensuremath{W_{\\mbox{\\tiny sng}}}(H,\\alpha,\\Lambda,pk)\\neq \\emptyset$, \nlet\n$\\ensuremath{w_{\\mbox{\\tiny ms}}}$ be the unique element of $\\ensuremath{W_{\\mbox{\\tiny sng}}}$ such that \n$\\langle \\tilde{\\alpha},\\ensuremath{w_{\\mbox{\\tiny ms}}}\\Lambda(0) \\rangle$ is maximal \n(among elements of \\ensuremath{W_{\\mbox{\\tiny sng}}}), and denote $\\ensuremath{w_{\\mbox{\\tiny max}}}(\\Lambda(0),\\alpha)$\nmore briefly by $\\ensuremath{w_{\\mbox{\\tiny max}}}.$ Then\n\\begin{description}\n\\item[i)]{There exists $Y>0$ \n(dependent on $\\epsilon,H,\\alpha,p,k,\\Lambda,$ and $\\phi_\\pi$), \nsuch that for all $y>Y$, $E(y,\\sigma)$\nhas an $\\epsilon$-Siegel zero.\n}\n\\item[ii)]{Let $\\beta:(Y,\\infty)\\rightarrow \\ensuremath{{\\mathbb R}}$ or $q^\\ensuremath{{\\mathbb Z}}\\cap\n(Y,\\infty)\\rightarrow \\ensuremath{{\\mathbb R}}$ in the function field case,\nbe any function such that for\neach $y$, $\\beta(y)$ is an $\\epsilon$-Siegel zero of $E(y,\\sigma).$ \nThen\n$\\beta(y) \\sim e y^{-\\langle \\ensuremath{\\tilde{\\alpha}}, \\ensuremath{w_{\\mbox{\\tiny max}}} \\Lambda(0)-\\ensuremath{w_{\\mbox{\\tiny ms}}} \\Lambda(0) \\rangle},$\nwhere \n$$e=\n \\frac{\\chi_\\pi(\\ensuremath{w_{\\mbox{\\tiny ms}}}^{-1}\\ensuremath{\\tilde{\\alpha}}(y)\\ensuremath{w_{\\mbox{\\tiny ms}}})\n\\left({{\\mbox{Res}}\\atop{s =0}}E^{M_\\alpha}(M(\\ensuremath{w_{\\mbox{\\tiny ms}}},\\Lambda(s)\\otimes \\pi)\\Lambda(s)\n\\phi_\\pi,\\ensuremath{w_{\\mbox{\\tiny ms}}}(\\Lambda(s)\\otimes \\pi))(pk)\\right)\n}{\\chi_\\pi(\\ensuremath{w_{\\mbox{\\tiny max}}}^{-1}\\ensuremath{\\tilde{\\alpha}}(y)\\ensuremath{w_{\\mbox{\\tiny max}}})\nE^{M_\\alpha}(M(\\ensuremath{w_{\\mbox{\\tiny max}}},\\Lambda(0)\\otimes \\pi)\\Lambda(0)\n\\phi_\\pi,\\ensuremath{w_{\\mbox{\\tiny max}}}(\\Lambda(0)\\otimes \\pi))(pk)\n}.\n$$\n}\n\\end{description}}\n\\item[B)]{\nIf $\\ensuremath{W_{\\mbox{\\tiny sng}}}(H,\\alpha,\\Lambda,pk)=\\emptyset$,\nthen the conclusion is \nweaker:\nif\n$y>Y$ then either $E(y,\\sigma)$ has an $\\epsilon$-Siegel zero, or\n$E(y,\\sigma)$ is holomorphic at zero.\nIf $k$ is a function field, then it is always 
the latter.}\n\\end{description}\n\\end{thm}\nWe will also prove the following lemma, relevant to the problem \nof choosing $\\alpha,\\Lambda,p$ and $k$ so that conditions 1--3 above \nare satisfied. \n\\begin{lem}\\label{condlem}\nFor $\\alpha \\in \\Delta_0$ such that $\\alpha \\notin R^+(T_0,M)$ and \n$\\theta\\notin R^+(T_0,M_\\alpha)$, we have\n\\begin{enumerate}\n\\item{the identity element, $1$, is in $W(M,M_\\alpha),$}\n\\item{for all $p,k$ and $\\Lambda$, $1 \\notin \\ensuremath{W_{\\mbox{\\tiny sng}}}(H,\\alpha,\\Lambda,pk),$}\n\\item{if $c_H>0$, then \n$\\{\\lambda \\in H_\\ensuremath{{\\mathbb R}},\\mbox{ generic for }\\alpha: \\ensuremath{w_{\\mbox{\\tiny max}}}(\\lambda,\\alpha)=1\\}$ is a \nnonempty open subset of $H_\\ensuremath{{\\mathbb R}}$.}\n\\end{enumerate} \n\\end{lem}\nIt is also known (IV.1.11(c)) \nthat when $c_H>0,$ the singularity along $H$ is without\nmultiplicity. For each $p,k$ the restriction of $E_1$ and the \ncontinuation of $h_H(\\lambda)E$ are meromorphic functions of $\\lambda\n\\in H$. \nAs we see in an example below, either or both may be trivial \n(i.e., zero for all $\\lambda$) for \ncertain values of $p,k$. But, for any $p,k$ such that both are \nnontrivial, $\\Lambda$ may\nthen be chosen so that all three conditions are satisfied. \n\n\\section{ Basic Analytic Result}\n\\label{bar}\nOur approach to proving the existence of Siegel zeros of Eisenstein series is\nas follows. First, we show that any function that can be written in a \ncertain form will have a zero on the real axis very close to its pole. Then, \nwe show that our Eisenstein series can always be put into that form. \nThe lengthy definition of the function $F(y,s)$ in the following lemma should,\ntherefore, be taken as the ``mold'' into which we will fit the Eisenstein \nseries. As we see here, any function which fits into this mold will have\nSiegel zeros.\\label{secwitlem}\n\n\\begin{lem}\\label{mainlemma}\nFix real numbers $a, b, c,$ and $d,$ such that \n$b > d$. \nLet $\\sigma$ be a real variable, \nwhich we may think of as restricted to a small neighborhood of $0$, \nand\nlet $y$ be another real variable, which we think of as positive and large,\nkeeping in mind that when we apply this Lemma to the function field case, \n$y$ will only range over $q^{\\ensuremath{{\\mathbb Z}}}.$ Let \n$A(\\sigma)$ and $C(\\sigma)$ be two real valued \nfunctions that are both continuous and nonvanishing\nfor $\\sigma$ in a neighborhood of $0,$ \nand \nlet $B(y,\\sigma)$ and $D(y,\\sigma)$ be two more real valued functions, \nwhich are \nboth continuous in $\\sigma$, and such that $B(y,\\sigma)y^{-(a\\sigma+b)}$ and \n$D(y,\\sigma)y^{-(c\\sigma+d)}$ tend to zero as $y$ tends to $\\infty$ for all values \nof $\\sigma$ in some \nneighborhood of $0$, and that convergence is uniform as $\\sigma$ ranges\nover this neighborhood. \n\nDefine:\n$$F(y,\\sigma)=(A(\\sigma)y^{a\\sigma+b}+B(y,\\sigma))+\\frac{1}{\\sigma}\n(y^{c\\sigma+d}C(\\sigma)+D(y,\\sigma)).$$\n\nThen we have\n\\begin{description}\\item[i)]{\n For every $\\epsilon>0,$ there exists $Y(\\epsilon)>0,$ \nsuch that if $y>Y(\\epsilon),$ \nthen $F(y,\\sigma)$ has a zero \nin the interval $(-\\epsilon (\\log y)^{-1},\\epsilon (\\log y)^{-1})$.}\n\\item[ii)]{\nNow fix an $\\epsilon >0$, and \ntake $\\beta: (Y(\\epsilon),\\infty)\\rightarrow\\ensuremath{{\\mathbb R}}$ such that for each $y$ \nwe have \n $\\beta(y)\\in (-\\epsilon (\\log y)^{-1},\\epsilon (\\log y)^{-1}),$ and\n$F(y,\\beta(y))=0$. 
For any such $\\beta$, \n$$ \n-\\beta(y) \\sim \\frac{C(0)}{y^{(b-d)}A(0)} \\mbox { as }\ny \\rightarrow \\infty.$$} \\end{description} \n\\end{lem}\n\n{\\bf Proof: } \nBy replacing \n$F(y,\\sigma)$ with \n$y^{-(c\\sigma+d)}F(y,\\sigma)$, which has the same zeros, we may assume $c=d=0.$\nBy considering the four functions $\\pm F(y,\\pm \\sigma)$ we may assume that\n$A(0)$ and $C(0)$ are positive. \n\nNext, choose $\\delta, Y_1, m, M$ such that \n\\begin{equation} \\label{yano} \ny>Y_1, |\\sigma|<\\delta \\Rightarrow \\left\\{\\begin{array}{l}\n0<m\\leq A(\\sigma)\\leq M, \\quad 0<m\\leq C(\\sigma)\\leq M,\\\\\n|B(y,\\sigma)|\\leq \\frac{m}{2}\\,y^{a\\sigma+b}, \\quad |D(y,\\sigma)|\\leq \\frac{m}{2}.\n\\end{array}\\right.\n\\end{equation}\nThen, for fixed $y>Y_1,$ the function\n$A(\\sigma)y^{a\\sigma+b}+B(y,\\sigma)$ is bounded on \n$|\\sigma|<\\delta,$\nwhile $C(\\sigma)+D(y,\\sigma)$ is bounded away from zero.\nHence, for\nevery such $y$, there is a neighborhood of the form $(-l,0)$ on which \n$F(y,\\sigma)<0$.\nNow suppose that $y>Y_1$, and \n$\\epsilon (\\log y)^{-1} <\\delta.$ Then the bounds (\\ref{yano}) are valid at\n$\\sigma=-\\epsilon (\\log y)^{-1},$ yielding\n$$F(y,-\\epsilon (\\log y)^{-1})>\\frac{m}{2}e^{-a\\epsilon} y^{b}\n-\\epsilon^{-1} (M+\\frac{m}{2}) \\log y.$$ \nClearly, if we fix an $\\epsilon >0$, we may first choose $Y_2\\geq Y_1$ such\nthat \nthe above is valid for all $y>Y_2$, and then\n choose $Y_3$ such that if $y>Y_3,$ the right side is\npositive. We then let $Y=\\max(Y_2,Y_3)$, \nand the first assertion is proved. \nMoreover, if we fix an $\\epsilon'<\\epsilon$, we can choose $Y_2'$ \nsuch that for every $y>Y_2', \\epsilon'<\\varepsilon<\\epsilon,$ $\\varepsilon \n(\\log y)^{-1} <\\delta,$ and then choose $Y_3'$ such that for\n$y>Y_3', \\epsilon'<\\varepsilon<\\epsilon,$ \n$$\\frac{m}{2}e^{-a\\varepsilon} y^{b}\n-\\varepsilon^{-1} (M+\\frac{m}{2}) \\log y >0.$$ \nIt follows that any $\\beta(y),$ defined as above \\emph{with respect to}\n$\\epsilon$ satisfies\n$$\\beta(y) \\in (-\\epsilon'(\\log y)^{-1},0) \\mbox { for }y>Y':=\\max(Y_2',Y_3').$$\nSince this works for any $\\epsilon',$ we have\n$$\\lim_{y \\rightarrow \\infty} y^{-\\beta(y)}=1$$\nindependently of the choice of $\\epsilon,$ and independently of any possible \nchoice of $\\beta(y).$\nTo prove \\emph{ii)}, we note that \n$F(y,\\beta(y))=0$ iff $$A(\\beta(y))y^{a \\beta(y) +b} +B(y,\\beta(y))=\n\\frac{-1}{\\beta(y)}(C(\\beta(y))+D(y,\\beta(y))).$$ We have seen that \nfor $\\beta(y)$ near $0$ and $y$ large the left side is nonzero, so we may \nput this into the form\n$$\\beta(y)=-\\frac{C(\\beta(y))+D(y,\\beta(y))}{A(\\beta(y))y^{a \\beta(y) +b} +B(y,\\beta(y))}.$$\nSo\n\\begin{eqnarray*}&&\n\\hskip -.2in \\lim_{y\\rightarrow \\infty}\n\\frac{C(0)\/(A(0)y^{b})}{-\\beta(y)}\n\\\\&&=\\lim_{y\\rightarrow \\infty} \\frac{C(0)}{C(\\beta(y))+D(y,\\beta(y))}\n\\frac{\\left(A(\\beta(y))y^{a\\beta(y)+b}+B(y,\\beta(y))\\right)y^{-b}}{A(0)}.\n\\end{eqnarray*}\nThe second limit is evidently 1, which proves the asymptotic formula. \\begin{flushright}$\\blacksquare$\\end{flushright}\n\n{\\bf Remarks 1.} If, on the other hand, $F$ is in the same form, with \n$b < d,$ but all\nthe other assumptions are the same, then similar arguments show that \n$F(y,\\sigma)$ will be nonvanishing \nfor $\\sigma$ in a neighborhood of $0$ and $y$ sufficiently large.\n\n{\\bf 2.} If $F$ is as above, but $C(0)=0,$ then the asymptotic\nformula is no longer correct, but the Siegel zero still exists\nfor all values of $y>Y$ such that $D(y,\\sigma)\\neq 0.$ \n\n\\subsection{Necessary Fact}\n\\label{necessaryfacts}\nWe will need one more well-known fact from the theory of Eisenstein series: \nthe Eisenstein series is well approximated by its constant term. 
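(In the prototypical case of the nonholomorphic Eisenstein series $E(z,s)$ on the upper half plane, this is the familiar fact that $E(z,s)$ minus its constant term $y^{s}+c(s)y^{1-s}$ decays rapidly as $y\\rightarrow\\infty$, uniformly for $s$ in compact sets away from the poles, as one reads off from the Fourier expansion in terms of $K$-Bessel functions. The general statement we will use is the following.)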
\n\\begin{prop}\n\\label{approx}\nIf $k$ is a function field, then there is a constant $c$ such that\n$E(\\lambda\\phi_{\\pi},\\pi\\otimes\\lambda)(g)-\nE_{P_{\\alpha}}(\\lambda\\phi_{\\pi},\\pi\\otimes\\lambda)(g)=0$,\nwhenever $g\\in S$ and $m_{P_0}(g)^{\\alpha}>c$, where $S$ is a Siegel domain\nas in I.2.1. \nIn particular, \nfor $\\Lambda,p,k,\\ensuremath{\\tilde{\\alpha}}$ as in the main theorem,\n$$(E-E_{P_{\\alpha}})(\\Lambda(\\sigma)\\phi_{\\pi},\\pi\\otimes\\Lambda(\\sigma))\n(p\\tilde{\\alpha}(y)k)=0,$$\nfor $p\\in \\omega, k\\in K$ and $y$ sufficiently large.\nIf $k$ is a number field, then for $\\Lambda,p,k,\\ensuremath{\\tilde{\\alpha}}$ as in the main theorem,\n$$\n\\sigma\\left(\n(E-E_{P_{\\alpha}})(\\Lambda(\\sigma)\\phi_{\\pi},\\Lambda(\\sigma)\n\\otimes\\pi)(p\\tilde{\\alpha}(y)k)\n\\right)\n$$ \nis rapidly decreasing as a function of $y$ for $p\\in \\ensuremath{\\mathbf{G}}^1\\cap \\omega,k\\in K$, \nuniformly for $\\sigma$ in a neighborhood of zero.\n\\end{prop}\n{\\bf Proof: }\nThe function field case is immediate from Lemma I.2.7. \nFor the number field case we use Lemma I.2.10. \nLemma I.4.4, in conjunction with I.2.5, yields the bounds required by\nthe hypotheses of Lemma I.2.10.\n\\hfill $\\blacksquare$\n\\section{Proofs}\n\\subsubsection{Proof of Theorem \\ref{mainthm}}\nAs $H,\\alpha,\\Lambda,p$ and $k$ are fixed, we suppress them from the\nnotation, denoting $\\ensuremath{W_{\\mbox{\\tiny sng}}}(H,\\alpha,\\Lambda,pk)$ by $\\ensuremath{W_{\\mbox{\\tiny sng}}}$, etc.\nThe idea is to fit $E(y,\\sigma)$ into the mold described in section 2.\nIn the new notation we have introduced, equation \\eqref{ctfrommw} reads\n\\begin{equation}\\label{decomp}\nE_{P_{\\alpha}}(y,\\sigma)=\\sum_{w\\in W(M,M_{\\alpha})}E_w(y,\\sigma).\n\\end{equation}\nWe observe that\n\\begin{equation}\\label{athing}E_w(y,\\sigma)=\n\\chi_\\pi(w^{-1}\\tilde{\\alpha}(y)w)\ny^{\\langle\\tilde{\\alpha}, \\rho_{P_\\alpha}+w \\Lambda(\\sigma)\\rangle}\nE_w(1,\\sigma).\n\\end{equation}\nNow, $\\chi_\\pi$ is both real-valued and unitary. When $k$ is a number\nfield, \n$\\chi_\\pi(w^{-1}\\tilde{\\alpha}(y)w)$\n is trivial, but when $k$ is a function field, \n$\\chi_\\pi(w^{-1}\\tilde{\\alpha}(q^n)w)$ may equal $-1$ for some $w$ when $n$ is\nodd. In this case, one may consider restrictions to odd and even $n$ \nseparately, and combine the results. 
We omit the details, and assume that\n$\\chi_\\pi(w^{-1}\\tilde{\\alpha}(y)w)$ is identically $1$, when $w$ is either\n$\\ensuremath{w_{\\mbox{\\tiny max}}}$ or $\\ensuremath{w_{\\mbox{\\tiny ms}}}.$\n\nFor each $w$, the map \n$\\sigma \\mapsto \\langle \\tilde{\\alpha},w \\Lambda(\\sigma) \\rangle$ \nis an affine map $\\ensuremath{{\\mathbb R}} \\rightarrow \\ensuremath{{\\mathbb R}}$, and so we may define \n$a,b,c,$ and $d$ by the conditions \n$\\langle \\tilde{\\alpha}, \\rho_{P_\\alpha}+\\ensuremath{w_{\\mbox{\\tiny max}}}\\Lambda(\\sigma)\\rangle\n=a\\sigma+b,$ and,\nif $\\ensuremath{W_{\\mbox{\\tiny sng}}}$ is nonempty,\n$\\langle \\tilde{\\alpha}, \\rho_{P_\\alpha}+\\ensuremath{w_{\\mbox{\\tiny ms}}}\\Lambda(\\sigma)\\rangle\n=c\\sigma+d.$\nIf $\\ensuremath{W_{\\mbox{\\tiny sng}}}$ is empty, we take $c=0$, and $d$ any real number less than \n$b$.\nWe take \n\\begin{eqnarray*}\nA(\\sigma)&=&E_{\\ensuremath{w_{\\mbox{\\tiny max}}}}(1,\\sigma),\\\\\nC(\\sigma)&=&\\sigma E_{w_{\\mbox{\\tiny ms}}}(1,\\sigma), \\mbox{ or }\n0 \\mbox{ if }\\ensuremath{W_{\\mbox{\\tiny sng}}}=\\emptyset,\\\\\nD(y,\\sigma)&=&\\sigma \\left(\n\\sum_{w \\in \\ensuremath{W_{\\mbox{\\tiny sng}}}-\\left\\{w_{\\mbox{\\tiny ms}}\\right\\}}E_w(y,\\sigma)\n+(E-E_{P_\\alpha})(y,\\sigma)\\right)\n,\\\\\nB(y,\\sigma)&=&\\sum_{w \\in W(M,M_{\\alpha})-\\ensuremath{W_{\\mbox{\\tiny sng}}}-\\left\\{\\ensuremath{w_{\\mbox{\\tiny max}}}\\right\\}}E_w(y,\\sigma),\n\\end{eqnarray*}\nwhere we extend $C$ and $D$ to $\\sigma=0$ by continuity. \n\nWe check that all the hypotheses of the main lemma are satisfied. \nAs $\\Lambda(0)$ is generic, \nwe may choose an open ball containing it \nwhich intersects no other hyperplane along which $E$ is singular. \nThe continuity of each function on the set of $\\sigma$ mapping into \nthis ball is clear. The fact that all functions are real valued follows\nfrom the fact that $\\phi_\\pi$ is. Condition 2 is precisely that \n$A(0)\\neq 0,$ and in the case when $\\ensuremath{W_{\\mbox{\\tiny sng}}} \\neq \\emptyset,$ the value \nof $C$ at $0$ is the residue of $E_{w_{\\mbox{\\tiny ms}}}(1,\\sigma)$\nat zero. The bounds on $B$ and $D$ as $y \\rightarrow \\infty$ follow\neasily from \\eqref{athing} and Proposition \\ref{approx}.\\hfill $\\blacksquare$\n\n\\subsubsection{Proof of Lemma \\ref{condlem}}\nThe first assertion is trivial.\nThe second and third follow from more general assertions, which we now\nprove.\nFor $w \\in W(M,M_\\alpha)$, let $W_{M_\\alpha}(wMw^{-1})$ be defined in the same\nway as $W(M),$ with $M_\\alpha$ replacing $G$ and $wMw^{-1}$ replacing $M$.\n\n\\begin{lem} \\label{hollemma}\n Let $H$ be a root hyperplane associated to a root $\\theta$,\nand fix $\\alpha.$\nIf there exist $p,k,\\Lambda$ such that \n$\\Lambda(0)\\in H$ is generic for $\\alpha$, and\n$w \\in \\ensuremath{W_{\\mbox{\\tiny sng}}}(H,\\alpha,\\Lambda,pk)$, \nthen there exists $w'\\in W_{M_\\alpha}(wMw^{-1})$ such that \n$w'w\\theta<0.$\n\\end{lem}\n\n{\\bf Proof: } For some\n$w'\\in W_{M_\\alpha}(wMw^{-1}),$ write $w'w=s_{\\gamma_\\ell} \\dots\ns_{\\gamma_1}$, where for each $i$, $s_{\\gamma_i}$ is an ``elementary\nsymmetry'' as in section I.1.8. See also section IV.4.1. 
Let\n$w(j)=s_{\\gamma_j}\\dots s_{\\gamma_1}.$ Then\n$$M(w'w,\\pi \\otimes \\lambda)=M(s_{\\gamma_\\ell},w(\\ell-1)(\\pi \\otimes \\lambda))\n\\circ \\dots \\circ M(s_{\\gamma_1},\\pi\\otimes\\lambda).$$\nThe singularities of $M(s_\\gamma,\\tau \\otimes \\mu)$ are carried by a \nlocally finite set of hyperplanes associated to $\\gamma.$ Hence the \nsingularities of $M(s_{\\gamma_i},w(i-1)(\\pi\\otimes\\lambda))$ are carried by\na locally finite set of root hyperplanes associated to the root \n$w(i-1)^{-1}\\gamma_i,$ and, in general, the singularities of \n$M(w'w,\\pi\\otimes\\lambda)$ are carried by a locally finite set of root \nhyperplanes each of which is associated to a root $\\gamma$ such that \n$w'w\\gamma <0.$ \n\nSuppose that $w'w\\theta >0$ for all $w'\\in W_{M_\\alpha}(wMw^{-1}).$ \nThen there is a locally finite set of root hyperplanes, not containing\n$H$, that carries the singularities of $M(w'w,\\pi \\otimes \\lambda)$ \nfor every $w'.$ Hence it carries the singularities of all the \ncuspidal components of all the constant terms of \n$E^{M_\\alpha}(M(w,\\pi\\otimes \\lambda)\\lambda \\phi_\\pi,w(\\pi\\otimes \\lambda))$ \n(equation \\eqref{ctfrommw} above, and IV.1.9 (b)). By I.4.10 it\ncarries the singularities of\n$E^{M_\\alpha}(M(w,\\pi\\otimes \\lambda)\\lambda \\phi_\\pi,w(\\pi\\otimes \\lambda))$\nas well. The result follows.\n\\hfill $\\blacksquare$\n\nUnder the hypotheses of Lemma \\ref{condlem}, \n$w'\\theta >0$ for all $w'\\in W_{M_\\alpha}(M),$ and\n2 follows. As for \n3, it's clear that the set in question is always open, and that it's \nempty iff $\\{\\lambda \\in H_\\ensuremath{{\\mathbb R}}:\\ensuremath{w_{\\mbox{\\tiny max}}}(\\lambda,\\alpha)=1\\}$ is.\nThus, we only need to prove that this latter set is nonempty.\n\n\\begin{lem}\nIf $c_H>0$, and $1\\in W(M,M_\\alpha)$, then there exists $\\lambda \\in H_\\ensuremath{{\\mathbb R}}$\nsuch that $\\ensuremath{w_{\\mbox{\\tiny max}}}(\\lambda,\\alpha)=1.$\n\\end{lem}\n{\\bf Proof: } The linear functional on\n$\\Delta_0$ given by pairing with $\\tilde{\\alpha}$ corresponds to a \npositive multiple of the one given by pairing with the\nfundamental\ncoweight $\\hat{\\varpi}_{\\alpha}$, in the basis dual to $\\Delta_0.$\nGiven an element $\\lambda$ of $\\mbox{Re} X_{M_0}^\\ensuremath{\\mathbf{G}},$ which is \nirrational, i.e., has trivial kernel in the coweight lattice,\nwe can define\nnotions of $\\lambda$-positivity for coroots and $\\lambda$-dominance for\ncoweights. If the definitions of positivity for coroots that come from\n$\\lambda$ and $\\Delta_0$ coincide, then the definitions of dominance \nfor coweights will as well. Hence, whenever\n$$\\langle \\lambda ,\\check{\\gamma} \\rangle >0 \\hskip .5in \n \\forall \\gamma \\in \\Delta_0,$$\nand $\\lambda$ is irrational, \nwe have\n$$\\langle \\lambda, \\tilde{\\alpha} \\rangle \\geq \n\\langle w\\lambda ,\\tilde{\\alpha} \\rangle \n\\hskip .5in \\forall w \\in W,$$\nwith equality only if $w^{-1}\\ensuremath{\\tilde{\\alpha}}=\\ensuremath{\\tilde{\\alpha}}.$\nNow project $\\lambda$ to $\\bar{\\lambda}\\in \\mbox{Re} X_M^\\ensuremath{\\mathbf{G}}.$ This projection \ncorresponds to restriction from $Z_{\\ensuremath{\\mathbf{M}}_0}$ to $Z_{\\ensuremath{\\mathbf{M}}}$ (see pp.7,11). 
\nSince the image of $w^{-1}\\tilde{\\alpha}$ is contained in $Z_\\ensuremath{\\mathbf{M}}$ whenever \n$w\\in W(M,M_\\alpha)$, we find that \n$$\\langle \\bar{\\lambda},\\tilde{\\alpha}\\rangle \\geq \n\\langle w\\bar{\\lambda},\\tilde{\\alpha}\\rangle\n\\hskip .5in \\forall w\\in W(M,M_\\alpha),$$ \nwith equality only if $w=1.$ \nThus, for $\\bar{\\lambda}$ the projection of any $\\lambda$ as above, we have\n$\\ensuremath{w_{\\mbox{\\tiny max}}}(\\bar{\\lambda},\\alpha)=1.$ Now, let $\\gamma_\\theta$ be the root \nassociated to $\\theta$ as in section I.1.11 (so that $\\check{\\theta}$ is \ndefined as the projection of $\\check{\\gamma_\\theta}$ to $\\mbox{Re} \\frak{a}_M$).\nIt should be emphasized that $\\langle \\lambda,\\check{\\gamma_\\theta} \\rangle$\nand $\\langle \\bar{\\lambda},\\check{\\theta} \\rangle$ need not be equal. \nHowever, $\\langle \\lambda, \\check{\\theta} \\rangle$ and \n$\\langle \\bar{\\lambda}, \\check{\\theta} \\rangle$ \\emph{are} always equal. \nSo, it's enough to verify that for some irrational $\\lambda$, \nwith $\\langle \\lambda, \\check{\\gamma} \\rangle >0 \\forall \\gamma \\in \\Delta_0,$\nthe quantity\n$\\langle \\lambda, \\check{\\theta} \\rangle$ is also positive. This is\nstraightforward, when $\\check{\\theta}$ is written in terms of the basis\n$\\Delta_0^\\vee$ of coroots $\\check{\\gamma}$,\nand $\\lambda$ in terms of the dual basis.\n\\hfill $\\blacksquare$\n\n\\section{Examples}\n\\label{examples}\n\nLet us consider the example when $G=GL(n)$, the representation $\\pi$ is unramified, and we choose $\\phi_{\\pi}$ to be the spherical vector. In this case, \nthe intertwining operators may be given explicitly in terms of automorphic $L$-functions, as in \\cite{ep}. It follows that a pole of the Eisenstein series comes from either a zero or a pole of one of these $L$-functions. \nThe zeroes are quite mysterious, but the poles of \nRankin-Selberg $L$-functions on $GL(n)$ have been completely determined by \nJacquet and Shalika \\cite{jsi},\\cite{jsii}, and referring back to the \n$GL(2)$ case, we see that it is these poles which provide the correct \nanalog of the pole of the $GL(2)$ Eisenstein series at 1. \n\nAs noted, the case $G=GL(2)$ is the simplest example of our theorem. Let us consider two more which exhibit some of the features of the general case, but are not so complicated as to become unwieldy. \n\\subsection{Example One: $GL(4)$ over $\\ensuremath{{\\mathbb Q}}$}\n\nFirst let us consider the case $k=\\ensuremath{{\\mathbb Q}},$ and $\\ensuremath{\\mathbf{G}}=GL(4,\\ensuremath{{\\mathbb A}})$. We take the \nusual Borel, consisting of all upper triangular elements, and let $P$ \nbe the standard maximal\nparabolic such that $M\\cong GL(2)\\times GL(2).$\n\nIn this case, $\\mbox{Re} X_M^{G}$ is one dimensional, consisting of all real powers\nof the modulus\n$$\\left(\\begin{smallmatrix} h_1 &*\\\\ & h_2\\end{smallmatrix}\\right)\n\\mapsto \\left|\\frac{\\det h_1}{\\det h_2}\\right|^2,$$\nso we identify it with $\\ensuremath{{\\mathbb R}}$. 
In order to have good knowledge of the poles of the intertwining operators, we will want to make a suitable choice of $\\pi$ and $\\phi_{\\pi}$, which can also be described quite explicitly.\n\nLet $\\varphi$ be a real-valued Maass Hecke eigenform of level 1,\nright invariant by the center and the maximal compact, and define a \nfunction on $GL(4,\\ensuremath{{\\mathbb A}})$ by \n$$I\\left(\\left(\\begin{smallmatrix}h_1 &*\\\\&h_2\\end{smallmatrix}\\right) k\n,s,\\varphi\\right)=\n\\left|\\frac{\\det h_1}{\\det h_2}\\right|^{2s+1} \\varphi(h_1)\\varphi(h_2),$$\nfor $k$ now in the maximal compact of $GL(4,\\ensuremath{{\\mathbb A}}).$\nOur Eisenstein series is \n$$E(g,s,\\varphi)= \\sum_{\\gamma \\in P(\\ensuremath{{\\mathbb Q}})\\ensuremath{\\backslash}\nGL(4,\\ensuremath{{\\mathbb Q}})}I(\\gamma g, s,\\varphi).$$\nIn this case $W(M,M')=\\emptyset$, unless $M'=M,$ and\n$$E_P(g,s,\\varphi)=I(g,s,\\varphi)+\n\\frac{L(2s,\\varphi\\times\\varphi)}{L(2s+1,\\varphi\\times\\varphi)}\nI(g,-s,\\varphi).$$\nHere $L$ denotes the completed $L$-function -- including gamma factors.\n(This is essentially a special case of the computation in \\cite{ep}.)\nSince $\\mbox{Re} X_M^G$ is one dimensional, a codimension 1 subspace is a point. \nThus our ``root hyperplane'' is just $s=\\frac{1}{2},$ or some zero of \n$L(2s+1,\\varphi\\times \\varphi).$ We consider the one at $\\frac{1}{2},$ \nand let\n$\\Lambda(\\sigma)=\\sigma+\\frac{1}{2},$ and $\\ensuremath{\\tilde{\\alpha}}(y)=\\mbox{diag}(y,y,1,1),$ embedded\nat the infinite place. Note that if $p=\\left(\\begin{smallmatrix} h_1&*\\\\\n&h_2\\end{smallmatrix}\\right)$ with $\\varphi(h_1)$ or $\\varphi(h_2)=0,$\nthen the constant term is zero for all $s$. \nFor any other $p$,\n$\\ensuremath{W_{\\mbox{\\tiny sng}}}=\n\\left\\{\\left(\\begin{smallmatrix}&I\\\\I&\\end{smallmatrix}\\right)\\right\\}.$\nFor $\\sigma$ positive, $\\ensuremath{w_{\\mbox{\\tiny max}}}=I,$ and the theorem applies.\n\nLet $A(\\sigma)=I(pk,\\sigma+\\frac{1}{2},\\varphi),\nC(\\sigma)=\\frac{L(2\\sigma+1,\\varphi\\times\\varphi)}\n{L(2\\sigma+2,\\varphi\\times\\varphi)}I(pk,-\\sigma-\\frac{1}{2},\\varphi)$ and \n$D(y,\\sigma)=E(p\\tilde{\\alpha}(y)k,\\sigma+\\frac{1}{2},\\varphi)-\nE_P(p\\tilde{\\alpha}(y)k,\\sigma+\\frac{1}{2},\\varphi).$ Let $a=4$, \n$b=4$, $c=-4$ and $d=0.$ The main analytic lemma applies. \nThe asymptotic formula it produces is\n\n$$-\\beta \\sim \\frac{( \\varphi,\\varphi)}\n{2 L(2,\\varphi \\times \\varphi)}\n\\left|\\frac{\\det h_1}{\\det h_2}\\right|^{-2}\ny^{-4},$$\nwhere $(\\varphi,\\varphi)$ is the Petersson inner product of \n$\\varphi$ with itself.\n\\subsection{Example Two: $GL(3)$ over $\\ensuremath{{\\mathbb Q}}$}\n\nNow let us consider the case $k=\\ensuremath{{\\mathbb Q}}$, and $\\ensuremath{\\mathbf{G}}=GL(3,\\ensuremath{{\\mathbb A}})$. We take $P=P_0$ to\nbe the Borel subgroup of upper triangular matrices and $M=T_0$ to be the\ntorus of diagonal matrices. The associated set of simple positive roots\nis $\\{\\alpha,\\theta\\}$, defined by\n$$\\alpha\\left(\\left(\\begin{smallmatrix}t_1&&\\\\&t_2&\\\\&&t_3\\end{smallmatrix}\\right)\\right)\n=\\frac{t_1}{t_2}\\hskip .5in\n\\theta\\left(\\left(\\begin{smallmatrix}t_1&&\\\\&t_2&\\\\&&t_3\\end{smallmatrix}\\right)\\right)\n=\\frac{t_2}{t_3}.\n$$\nWe take $\\pi$ to be the trivial representation, and $\\phi_\\pi$ to be the\nconstant function 1. 
We parametrize \n$\\mbox{Re} X_M^{\\ensuremath{\\mathbf{G}}}$ by associating the map\n$$\\ensuremath{\\mathcal{I}}_{\\lambda}\\left[\\left(\\begin{smallmatrix}t_1&&\\\\&t_2&\\\\&&t_3\\end{smallmatrix}\\right)\\right]=|t_1|^{\\lambda_1+1}|t_2|^{\\lambda_2}|t_3|^{\\lambda_3-1},$$ to the triple $\\lambda=(\\lambda_1,\\lambda_2,\\lambda_3) \\in \\ensuremath{{\\mathbb R}}^3$\nsuch that \n$\\lambda_1+\\lambda_2+\\lambda_3=0.$\n \nWe work with $\\alpha$, and define the map $\\tilde{\\alpha}$ by\n$\\tilde{\\alpha}(y)=\n\\left(\\begin{smallmatrix}y&&\\\\&1&\\\\&&1\\end{smallmatrix}\\right),$ embedded\nat the infinite place.\nThe Weyl group $W$ of $G$ may be identified with the group of permutation \nmatrices. Then \n$$\nW(M,M_{\\alpha})=\\left\\{I,\n\\left(\\begin{smallmatrix}&1&\\\\1&&\\\\&&1\\end{smallmatrix}\\right),\n\\left(\\begin{smallmatrix}&&1\\\\1&&\\\\&1&\\end{smallmatrix}\\right)\n\\right\\}.$$\nWe compute the intertwining operators as usual, by reducing from the \ngeneral case to the relative rank one case, i.e., to a Levi isomorphic to $GL(2).$ As on $GL(2)$, each root contributes a ratio of Riemann\nzeta functions. Specifically, if $\\zeta^*(s)=\\pi^{-\\frac{s}{2}}\\Gamma(\\frac{s}{2})\\zeta(s)$ is the ``completed'' Riemann zeta function, then we have\n\\begin{eqnarray*}\nM\\left(\n\\left(\\begin{smallmatrix}&1&\\\\1&&\\\\&&1\\end{smallmatrix}\\right),\n\\ensuremath{\\mathcal{I}}_{\\lambda}\\right)\\ensuremath{\\mathcal{I}}_{\\lambda}&=&\n\\frac{\\zeta^*(\\lambda_1-\\lambda_2)}{\\zeta^*(\\lambda_1-\\lambda_2+1)}\n\\ensuremath{\\mathcal{I}}_{(\\lambda_2,\\lambda_1,\\lambda_3)}, \\\\\nM\\left(\n\\left(\\begin{smallmatrix}&&1\\\\1&&\\\\&1&\\end{smallmatrix}\\right),\n\\ensuremath{\\mathcal{I}}_{\\lambda}\\right)\\ensuremath{\\mathcal{I}}_{\\lambda}&=&\\frac{\\zeta^*(\\lambda_2-\\lambda_3)}{\\zeta^*(\\lambda_2-\\lambda_3+1)}\\frac{\\zeta^*(\\lambda_1-\\lambda_3)}{\\zeta^*(\\lambda_1-\\lambda_3+1)}\n\\ensuremath{\\mathcal{I}}_{(\\lambda_3,\\lambda_1,\\lambda_2)}.\n\\end{eqnarray*}\nThe value of $E^{M_{\\alpha}}$ at a point\n$$g=\\left(\\begin{smallmatrix}1&x_2&x_3\\\\&1&x_1\\\\&&1\\end{smallmatrix}\\right)\\left(\\begin{smallmatrix}y_1y_2&&\\\\&y_1&\\\\&&1\\end{smallmatrix}\\right) k\n\\in GL(3,\\ensuremath{{\\mathbb R}})$$\nmay be given in terms of the Eisenstein series \n$$e(\\tau,s)=\\sum_{{(c,d)=1}\\atop{c>0}}\\frac{y^s}{|c\\tau+d|^{2s}}$$\nfor $\\tau=x+iy$ \nin the upper half\nplane. 
Specifically, \nif we let $\\tau_1=x_1+iy_1$, then we find that\n$$\nE^{M_{\\alpha}}(\\ensuremath{\\mathcal{I}}_{\\lambda},\\ensuremath{\\mathcal{I}}_{\\lambda}\\otimes 1)(g)= \n(y_2\\sqrt{y_1})^{\\lambda_1+1} e(\\tau_1,\n\\frac{\\lambda_2-\\lambda_3+1}{2}).\n$$\nIt follows that the value of $E_{P_{\\alpha}}(\\ensuremath{\\mathcal{I}}_{\\lambda})$ is given by \n\\begin{eqnarray*}\n&&(y_2\\sqrt{y_1})^{\\lambda_1+1} e(\\tau_1,\n\\frac{\\lambda_2-\\lambda_3+1}{2})\\\\&&\n+(y_2\\sqrt{y_1})^{\\lambda_2+1} \\frac{\\zeta^*(\\lambda_1-\\lambda_2)}{\\zeta^*(\\lambda_1-\\lambda_2+1)}e(\\tau_1,\\frac{\\lambda_1-\\lambda_3+1}{2})\\\\&&\n+(y_2\\sqrt{y_1})^{\\lambda_3+1} \\frac{\\zeta^*(\\lambda_2-\\lambda_3)}{\\zeta^*(\\lambda_2-\\lambda_3+1)}\n\\frac{\\zeta^*(\\lambda_1-\\lambda_3)}{\\zeta^*(\\lambda_1-\\lambda_3+1)}e(\\tau_1,\\frac{\\lambda_1-\\lambda_2+1}{2}).\\end{eqnarray*}\nIf we multiply $E(\\ensuremath{\\mathcal{I}}_{\\lambda},\\ensuremath{\\mathcal{I}}_{\\lambda}\\otimes 1)(g)$ by\n$$\\zeta^*(\\lambda_1-\\lambda_2+1)\\zeta^*(\\lambda_2-\\lambda_3+1)\\zeta^*(\\lambda_1-\\lambda_3+1)$$ the resulting\nfunction is essentially the function $G_{\\nu_1,\\nu_2}$ appearing in \nBump \\cite{yellowbump}. It is invariant under the six permutations of $\\lambda_1,\\lambda_2$, and $\\lambda_3.$\nIts poles are along the lines $\\lambda_i-\\lambda_j=1,$ and are permuted \ntransitively by \nits functional equations. Let us therefore restrict our attention to the\nplane $\\lambda_2-\\lambda_3=1.$\nNow, it's clear that of the three terms that make up $E_{P_{\\alpha}}$, the \nfirst and third will be singular along this hyperplane, while the second will\nnot. It is thus clear that\n$\\{\\lambda \\mbox{ generic for }\\alpha:\\ensuremath{w_{\\mbox{\\tiny max}}}(\\lambda,\\alpha)\n=\\left(\\begin{smallmatrix}&1&\\\\1&&\\\\&&1\n\\end{smallmatrix}\\right)\\}$ \n is the union of\n$\\Omega_1:=\\{\\lambda\\in H: \\lambda_2>\\lambda_1>\\lambda_3\\}$ and\n$\\Omega_2:=\\{\\lambda\\in H: \\lambda_1<\\lambda_3,\\lambda_3\\neq \\lambda_1+1\\}$. \nThe function $e(\\tau,s)$ does not vanish identically in $s$ for any $\\tau,$\nand its residue is never zero. Thus, we may use any $p,k.$ \nWe fix \n$\\lambda^0 \\in \\Omega_1\\cup \\Omega_2$ such that \n$e(\\tau_1,\\frac{\\lambda_1^0-\\lambda_3^0+1}{2})\\neq 0$, and \n $\\lambda^1$ in the hyperplane $\\lambda_2-\\lambda_3=0$ and define $\\Lambda$\nby $\\Lambda(\\sigma)=(1+\\sigma)\\lambda^0-\\sigma \\lambda^1.$ There is \nno real loss of generality, since an arbitrary $\\Lambda$ may be obtained \nfrom this one by a simple scaling. But this ensures that\n$\\Lambda(\\sigma)_2-\\Lambda(\\sigma)_3=\\sigma+1,$ so that the residue of\n$\\zeta^*(\\Lambda(\\sigma)_2-\\Lambda(\\sigma)_3)$ at zero is 1, while that of\n$e(\\tau, \\frac{\\Lambda(\\sigma)_2-\\Lambda(\\sigma)_3+1}{2})$ is \n$\\frac{1}{\\zeta^*(2)}=\\frac{6}{\\pi}$.\nIf $\\lambda^0\\in \\Omega_1$, then\n$\\ensuremath{w_{\\mbox{\\tiny ms}}}=I$. 
In this case, the asymptotic formula\nreads\n$$\\beta \\sim \\frac{6\\zeta^*(\\lambda_1^0-\\lambda_2^0+1)}\n{\\pi \\zeta^*(\\lambda_1^0-\\lambda_2^0)e(\\tau_1,\\frac{\\lambda_1^0-\\lambda_3^0+1}{2})(y_2\\sqrt{y_1})^{\\lambda_2^0-\\lambda_1^0}}.$$\nOn the other hand, if $\\lambda^0\\in \\Omega_2$, then \n$\\ensuremath{w_{\\mbox{\\tiny ms}}}=\\left(\\begin{smallmatrix}&&1\\\\1&&\\\\&1&\\end{smallmatrix}\\right),$\nand the asymptotic formula is \n$$\\beta \\sim \\frac{6\\zeta^*(\\lambda_1^0-\\lambda_2^0+1)\\zeta^*(\\lambda_1^0-\\lambda_3^0)e(\\tau_1,\\frac{\\lambda_1^0-\\lambda_2^0+1}{2})}\n{\\pi\\zeta^*(\\lambda_1^0-\\lambda_2^0)\\zeta^*(\\lambda_1^0-\\lambda_3^0+1)e(\\tau_1,\\frac{\\lambda_1^0-\\lambda_3^0+1}{2})(y_2\\sqrt{y_1})}.$$\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbkqk b/data_all_eng_slimpj/shuffled/split2/finalzzbkqk new file mode 100644 index 0000000000000000000000000000000000000000..922c9dc4841741eb3bec36781562a342b52755be --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbkqk @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThe spectral crowding problem and the closely related problem of leakage to \nhigher-energy states have pestered the field of superconducting quantum computing \nsince its very inception \\cite{Fazio1999}. For almost twenty years a tremendous amount \nof effort has been made to resolve both of these problems \n(see, e.\\ g., \n\\cite{JM2003, Montangero2007, Safaei2008, Rebentrost2009, Safaei2009, Gambeta2009,\nForney2010, Stojanovic2012, Schutjens2013, Motzoi2013, Martinis2016}).\nThe effort was certainly worth making, because every time an operation lifting \nthe system off its ground state is performed (such as, e.\\ g., a strong Rabi pulse), leakage \nout of the computational subspace develops, which results in the degradation of the \ngate fidelity. One may argue that even if the other problems facing quantum computing, \nsuch as scalability and decoherence, are ever successfully resolved, the crowding \nand the leakage problems would still be with us, simply as a matter of principle. \nAs long as one insists on using Josephson junctions, one will have to deal with the matrix \nelements that kick the system all over the spectra of the corresponding \nanharmonic potentials.\n\nAs an example, consider the Greenberger-Horne-Zeilinger (GHZ) state \nprotocol proposed in Ref.\\ \\cite{AGJM2008} for a\n{\\it fully connected network of three identical} Josephson phase qubits. 
\nIn the rotating wave approximation (RWA) the system is described by the Hamiltonian, \n\\begin{align}\nH &= \n\\vec{\\Omega}_1 \\cdot \\vec{\\sigma}_1 + \\vec{\\Omega}_2\\cdot\\vec{\\sigma}_2\n+\\vec{\\Omega}_3 \\cdot \\vec{\\sigma}_3\n\\nonumber \\\\\n& \\quad \n+ \\frac{1}{2}\\bigg[g\\left(\\sigma_x^1 \\sigma_x^2+\\sigma_y^1 \\sigma_y^2\\right)\n\t\t\t\t+ \\tilde{g}\\sigma_z^1\\sigma_z^2\n\\nonumber \\\\\n& \\quad \\quad \\quad \\quad\n+ g\\left(\\sigma_x^2 \\sigma_x^3+\\sigma_y^2 \\sigma_y^3\\right)\n\t\t\t\t+ \\tilde{g}\\sigma_z^2\\sigma_z^3\n\\nonumber \\\\\n& \\quad \\quad \\quad \\quad\n+ g\\left(\\sigma_x^1 \\sigma_x^3+\\sigma_y^1 \\sigma_y^3\\right)\n\t\t\t\t+ \\tilde{g}\\sigma_z^1\\sigma_z^3\n \\bigg],\n\\end{align}\nwhere the $\\Omega$'s are the Rabi frequencies, $g$ and $\\tilde{g}$ are the coupling \nconstants, and the $\\sigma$'s are the Pauli matrices.\nThe protocol consists of the following sequence of {\\it symmetric} pulses:\n\\begin{equation}\nX_{\\pi\/2} U_{\\rm int} Y_{\\pi\/2}|000\\rangle = e^{-i\\alpha}e^{i(\\pi\/4)}|{\\rm GHZ}\\rangle,\n\\end{equation}\nwith the entangling time set to $t_{\\rm GHZ} = \\pi\/(2|g-\\tilde{g}|)$,\nand\n\\begin{align}\nX_{\\theta} &= X^{(3)}_{\\theta}X^{(2)}_{\\theta}X^{(1)}_{\\theta},\n\\\\\nY_{\\theta} &= Y^{(3)}_{\\theta}Y^{(2)}_{\\theta}Y^{(1)}_{\\theta},\n\\end{align}\nbeing the simultaneous single-qubit rotations (the unimportant overall \nphase $\\alpha$ depends on the values of $g$ and $\\tilde{g}$). \nThe crucial step in the protocol is the initial $Y$-rotation \nby $\\pi\/2$ performed on all the qubits in the circuit, which is supposed to \nresult in the {\\it fully uniform} superposition of the computational states,\n\\begin{align}\n\\label{eq:symmetricSTATE}\n{|\\psi\\rangle}_{\\rm unif.} \n&= (1\/\\sqrt{8})\n\\left( |000\\rangle + |001\\rangle+ \\dots + |110\\rangle + |111\\rangle \\right).\n\\end{align}\nThe protocol is very fast, but, unfortunately, is impossible to implement \ndue to severe problems with spectral crowding \\cite{MN2010}. The energy levels,\nsuch as $|200\\rangle$, etc., quickly get populated during the initial rotations.\n\nSince the problem of leakage out of the computational subspace is not going away\nany time soon, one may try to tackle it by using the proverbial ``If you can't beat \nthem, lead them'' principle. Since the higher energy levels are here \nto stay, we should look for ways to minimize their influence or make them irrelevant \naltogether. It seems that Nature herself, by way of quantum mechanical trickery, \noffers a curious possibility of doing just that, in the form of the so-called quantum \nZeno effect \\cite{Khalfin1957, MS1977, Itano1990, Panov1999, MITexperiment2006, \nFacchi2009, Matsuzaki2010, Kakuyanagi2015}. 
\nBefore analyzing that effect in superconducting qubits, let us take a quick look at \nhow it works in the simplest possible scenario.\n\n\n\\section{Brief review of the quantum Zeno effect}\n\nConsider a two-level system, with basis states $|1\\rangle$ and $|2\\rangle$, whose \ndynamics is described by the Hamiltonian \n\\begin{align}\nH &= V (|1\\rangle\\bra2| + |2\\rangle\\bra1|) \n=\n\\begin{pmatrix}\n0 & V \\cr \nV & 0\n\\end{pmatrix},\n\\end{align}\nwhere the off-diagonal matrix elements are chosen to be real, for convenience.\nDenote the general time-dependent state of the system by\n\\begin{equation}\n|\\psi(t)\\rangle = a_1(t)|1\\rangle + a_2(t)|2\\rangle, \n\\end{equation}\nsubject to the normalization condition,\n\\begin{equation}\n\\label{eq:3}\n|a_1(t)|^2+|a_2(t)|^2=1.\n\\end{equation}\nThe evolution of the corresponding amplitudes, $a_1$ and $a_2$, \nis given by (here we use $\\hbar =1$)\n\\begin{align}\ni \\frac{da_1}{dt}=Va_2, \\quad\ni \\frac{da_2}{dt}=Va_1.\n\\end{align}\nAssume\n\\begin{equation}\na_1(0)=1, \\quad a_2(0) = 0.\n\\end{equation}\nThen, in linear order, for small times, $t=\\Delta t$, such that $|V \\Delta t|\\ll 1$, \nthe amplitudes are given by\n\\begin{equation}\na_1(\\Delta t) \\approx 1 + O(\\Delta t^2), \\quad\na_2(\\Delta t) \\approx -iV\\Delta t,\n\\end{equation}\nwith the probability of finding the system in state $|2\\rangle$ being\n\\begin{equation}\nw_2(\\Delta t) = |a_2(\\Delta t)|^2 \\approx V^2 \\Delta t^2. \n\\end{equation}\nNormalization condition (\\ref{eq:3}) then gives the probability of finding the \nsystem in state\n$|1\\rangle$,\n\\begin{equation}\nw_1(\\Delta t)=1-w_2(\\Delta t)=1-V^2 \\Delta t^2.\n\\end{equation}\n\nAssume now that the total time, $T$, of the system's evolution is divided into $n$ \nsmall equal time intervals, $\\Delta t$, such that $T=n\\Delta t$, with \n$|V\\Delta t|\\ll 1$. Assume additionally that at the end of the first time interval \nan ideal instantaneous measurement represented by the projection operator \n$P_2\\equiv |2\\rangle\\bra2|$ has been made -- typically implemented by some kind \nof tunneling process out of state $|2\\rangle$ -- that gave a {\\it negative} result. \nThen the state of the system immediately after the measurement is described \nby the same ket $|1\\rangle$ in which the system was initially prepared at $t=0$.\n\nWe now calculate the probability $W^{(n)}_{1}(T)$ of the {\\it complex event}, \n${\\cal E}$, consisting of a series of $n$ $P_2$-measurements, resulting in the \nsystem {\\it remaining} in the $|1\\rangle$ state at time $T$. For this to be possible, \neach of the intermediate measurements has to produce a {\\it negative} result. \nSince all intermediate ``negative'' events are independent and each occurs with \nprobability $w_1$, the overall probability of the complex event ${\\cal E}$ is \ngiven by\n\\begin{align}\n\\label{eq:9}\nW^{(n)}_{1}(T)&=[w_1(\\Delta t)]^n \n\\approx \\left(1-V^2 \\Delta t^2\\right)^n\n\\nonumber \\\\\n&= \\left(1- \\frac{V^2 T^2}{n^2}\\right)^n \n\\equiv \\left(1- \\frac{q}{n^2}\\right)^n,\n\\end{align}\nwhere $q \\equiv V^2 T^2$.\nIn the limit $n\\rightarrow \\infty$, we get\n\\begin{align}\nW^{(\\infty)}_{1}(T)\n&= \\lim_{n\\rightarrow \\infty}\n\\left[\\left(1- \\frac{q}{n^2}\\right)^{n^2\/q}\\right]^{q\/n}\n\\nonumber \\\\\n&=\\lim_{n\\rightarrow \\infty}e^{-q\/n}=1,\n\\end{align}\nwhich constitutes the so-called quantum Zeno effect. 
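For a quick numerical illustration, take $q=V^2T^2=1$ in Eq.\\ (\\ref{eq:9}): then $W^{(n)}_{1}(T)\\approx 0.904$ for $n=10$ and $W^{(n)}_{1}(T)\\approx 0.990$ for $n=100$, already close to the limiting value of $1$. 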
Essentially, and \ncounter-intuitively, by continually ``watching'' the system's $|2\\rangle$ state, \nwe ``freeze'' the system in state $|1\\rangle$. \n\n\\section{Application to a Rabi-driven superconducting qubit}\n\nIt is now clear how to use the quantum Zeno effect to restrict the qubit's dynamics\nto the computational subspace. By continually measuring the qubit's higher \nstate(s), we will automatically confine the qubit to its computational subspace \nspanned by $|1\\rangle$ (ground state) and $|2\\rangle$, thus suppressing the leakage \nto states $|3\\rangle$, $|4\\rangle$, etc.\n\nTo be more specific, let us restrict consideration to a three-level system, \n\\{$|1\\rangle$, $|2\\rangle$, $|3\\rangle$\\}, with energies \\{$E_1$, $E_2$, $E_3$\\}, \nwhose Rabi oscillation dynamics is implemented by a microwave drive of \nfrequency $\\omega = \\omega_{12}$, which is resonant with the \n$|1\\rangle \\leftrightarrow |2\\rangle$ transition. This is described by the Hamiltonian, \nwhich, in the rotating wave approximation, is given by \\cite{JM2003}\n\\begin{align}\nH &= \n\\begin{pmatrix}\n0 & \\Omega e^{i\\phi} & 0 \\cr \n\\Omega e^{-i\\phi} & 0 & \\sqrt{2} \\Omega e^{i\\phi} \\cr\n0 & \\sqrt{2} \\Omega e^{-i\\phi} & \\eta \\cr\n\\end{pmatrix},\n\\end{align}\nwhere $\\eta \\equiv E_3-2E_2<0$ is the qubit anharmonicity \n(assuming $E_1=0$), $\\phi$ is the phase of the drive, and $\\Omega$ is \nthe time-dependent (pulse-shaped) Rabi frequency, which we take to be \nconstant, for simplicity. \n\nFor future use, we will be interested in the case \n$\\phi = -\\pi\/2$, which implements qubit rotations around the $Y$-axis of \nthe Bloch sphere, with the corresponding Hamiltonian being\n\\begin{align}\n\\label{eq:12}\nH &= \n\\begin{pmatrix}\n0 & -i\\Omega & 0 \\cr \ni\\Omega & 0 & -i\\sqrt{2} \\Omega \\cr\n0 & i\\sqrt{2} \\Omega & \\eta \\cr\n\\end{pmatrix}.\n\\end{align}\nWriting the unitary evolution operator in the form\n\\begin{equation}\nU(\\Delta t) = 1 +(-i)H\\Delta t + \\frac{(-i)^2}{2!}H^2\\Delta t^2 + \\dots,\n\\end{equation}\nand assuming that the initial state of the system is\n\\begin{align}\n|\\psi(0)\\rangle = a_{1}(0)|1\\rangle + a_{2}(0)|2\\rangle,\n\\\\\n|a_{1}(0)|^2+|a_{2}(0)|^2=1,\n\\\\\na_3(0) = 0,\n\\end{align}\nwe find, in {\\it second} order, the state at a later time $\\Delta t$ to be\n\\begin{align}\n\\label{eq:15}\na_1(\\Delta t)&= a_{1}(0)\\left(1-\\frac{1}{2}{\\Omega}^2\\Delta t^2\\right) \n- a_{2}(0){\\Omega} \\Delta t,\n\\\\\n\\label{eq:16}\na_2(\\Delta t)&= a_{2}(0)\\left(1-\\frac{3}{2}{\\Omega}^2\\Delta t^2\\right) \n+ a_{1}(0) {\\Omega}\\Delta t,\n\\\\\n\\label{eq:17}\na_3(\\Delta t)&=\\sqrt{2}a_{2}(0) {\\Omega}\\Delta t +O(\\Delta t^2). 
\n\\end{align}\nCorrespondingly, the probabilities for finding the system in state $|3\\rangle$ and \nin the computational subspace are, respectively,\n\\begin{align}\nw_3(\\Delta t) &= |a_3(\\Delta t)|^2 = 2|a_{2}(0)|^2 {\\Omega}^2\\Delta t^2,\n\\\\\nw_{\\rm comp.}(\\Delta t) &\\equiv w_1(\\Delta t)+w_2(\\Delta t) \n= 1- 2|a_{2}(0)|^2 {\\Omega}^2\\Delta t^2.\n\\end{align}\nBy analogy with the discussion around Eq.\\ (\\ref{eq:9}), in the case of a long \nsequence of $n$ $P_3$-measurements with negative outcomes, the probability \n$W^{(n)}_{{\\rm comp.}}(T)$ of the complex event ${\\cal E}_{\\rm comp.}$, \nin which the qubit {\\it remains in its computational subspace} at time $T$, can \nbe estimated as,\n\\begin{widetext}\n\\begin{align}\n\\label{eq:22}\nW^{(n)}_{{\\rm comp.}}(T)\n& \\approx \n\\left(1- 2|a_{2}|_{t=(n-1)\\Delta t}^2 {\\Omega}^2 \\Delta t^2\\right)\n\\dots\n\\left(1- 2|a_{2}|_{t=2\\Delta t}^2 {\\Omega}^2 \\Delta t^2\\right)\n\\left(1- 2|a_{2}|_{t=\\Delta t}^2 {\\Omega}^2 \\Delta t^2\\right)\n\\left(1- 2|a_{2}|_{t=0}^2 {\\Omega}^2 \\Delta t^2\\right)\n\\\\\n& \\geq \n\\left(1- 2 {\\Omega}^2 \\Delta t^2\\right)^n\n= \\left(1- \\frac{2{\\Omega}^2 T^2}{n^2}\\right)^n \\rightarrow 1, \n\\quad\nn\\rightarrow \\infty,\n\\end{align} \n\\end{widetext}\nsince $|a_2|^2\\leq 1$ at every step of the system's evolution.\n\n\nAs far as the evolution of the {\\it amplitudes} is concerned, \nwe return to Eqs.\\ (\\ref{eq:15}), (\\ref{eq:16}), (\\ref{eq:17}), and notice that \nif at time $\\Delta t$, $|\\Omega\\Delta t|\\ll 1$, an ideal $P_3$-measurement is made \nwith a negative outcome (for the system to be found in its $|3\\rangle$-state), \nthe updated renormalized state of the system will be, in second order,\n\\begin{align}\na_1(\\Delta t)&= a_{1}-a_{2}{\\Omega}\\Delta t \n- a_1\\left(\\frac{1}{2}-|a_2|^2\\right){\\Omega}^2\\Delta t^2,\n\\\\\na_2(\\Delta t)&= a_{2}+a_{1}{\\Omega}\\Delta t \n- a_2\\left(\\frac{1}{2}+|a_1|^2\\right){\\Omega}^2\\Delta t^2,\n\\\\\na_3(\\Delta t) & = 0,\n\\end{align}\nwhere on the right hand side of each equation \nthe label $(0)$ has been dropped for notational simplicity.\nFor $a_1=1$, $a_2=0$, we get\n\\begin{align}\na_1(\\Delta t)&= 1-\\frac{{\\Omega}^2\\Delta t^2}{2}\\approx \\cos ({\\Omega}\\Delta t),\n\\\\\na_2(\\Delta t)&= {\\Omega}\\Delta t \\approx \\sin ({\\Omega}\\Delta t),\n\\\\\na_3(\\Delta t) & = 0,\n\\end{align}\nand for $a_1=0$, $a_2=1$, we get\n\\begin{align}\na_1(\\Delta t)&= -{\\Omega}\\Delta t \\approx -\\sin ({\\Omega}\\Delta t),\n\\\\\na_2(\\Delta t)&= 1-\\frac{{\\Omega}^2\\Delta t^2}{2}\n\\approx \\cos ({\\Omega}\\Delta t),\n\\\\\na_3(\\Delta t) & = 0.\n\\end{align}\nThe above discussion indicates that in the presence \nof continuous higher-energy state measurement the qubit will keep evolving \nas if the higher-energy state did not exist.\n\nFigures \\ref{fig:1}, \\ref{fig:2}, and \\ref{fig:3} show the time dependence of \nvarious state populations,\n\\begin{equation}\np_i(t) \\equiv |a_i(t)|^2, \\quad i = 1,2,3,\n\\end{equation}\nduring the execution of the Rabi pulse applied to the system initially prepared\nin its ground state, $|\\psi(0)\\rangle = (1,0,0)$, and interrupted by $n=25$, 50, \nand 100 ideal $P_3$-measurements, with the assumption that the detector \nnever clicked. 
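Such an interrupted evolution is also easy to reproduce numerically; the following is a minimal sketch (it assumes only NumPy and SciPy, and the values of $\\eta$ and of the total pulse area are illustrative choices, not necessarily those used for the figures):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import expm\n\n# Illustrative parameters (assumptions): Omega and eta in 1/ns\nOmega, eta = 0.05, -0.2\nn = 50                           # number of P3-measurements\ndt = (np.pi / (2 * Omega)) / n   # a pi/2 pulse, split into n steps\n\n# RWA Hamiltonian of Eq. (12)\nH = np.array([[0, -1j*Omega, 0],\n              [1j*Omega, 0, -1j*np.sqrt(2)*Omega],\n              [0, 1j*np.sqrt(2)*Omega, eta]])\nU = expm(-1j * H * dt)           # evolution over one step\nP = np.diag([1.0, 1.0, 0.0])     # projector onto span{|1>, |2>}\n\npsi = np.array([1.0, 0.0, 0.0], dtype=complex)  # ground state\nW = 1.0                          # survival probability\nfor _ in range(n):\n    psi = U @ psi                # free evolution for dt\n    W *= 1.0 - abs(psi[2])**2    # detector never clicks\n    psi = P @ psi                # projective update (negative outcome)\n    psi /= np.linalg.norm(psi)\n\nprint(W, np.round(abs(psi)**2, 4))\n\\end{verbatim}\n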
We distinguish the following three cases:\n\nThe ideal case, shown just for comparison, corresponds to the evolution \nunder the action of the model Hamiltonian,\n\\begin{align}\nH_{\\rm ideal} &= \n\\begin{pmatrix}\n0 & -i{\\Omega} & 0 \\cr \ni{\\Omega} & 0 & 0 \\cr\n0 & 0 & \\eta \\cr\n\\end{pmatrix},\n\\end{align}\nwithout any leakage out of the computational subspace.\n\nThe Zeno case corresponds to the evolution under the action of $H$ given \nin (\\ref{eq:12}) and subjected to $n$ projective $P_3$-measurements. The \nevolution was numerically calculated in accordance with the iterative scheme,\n\\begin{align}\n|\\psi_1\\rangle &= |1\\rangle ,\n\\nonumber \\\\\n|\\psi_k\\rangle &= \n\\frac{P_{\\rm comp.} |\\psi_{k-1}\\rangle}\n{||P_{\\rm comp.} |\\psi_{k-1}\\rangle||} ,\n\\nonumber \\\\\n|\\psi_{k+1}\\rangle &= \\exp\\{-i H \\Delta t\\} |\\psi_k\\rangle ,\n\\end{align}\nfor $k=2, 3, \\dots, n$, where $P_{\\rm comp.}$ is the projection operator \nonto the computational subspace,\n\\begin{equation}\nP_{\\rm comp.} \\equiv 1 - P_3 = 1-|3\\rangle\\bra3|.\n\\end{equation}\nAt the final time $T=n\\Delta t$ the qubit remains in its computational \nsubspace (that is, $p_{3}(T)\\equiv |a_3(T)|^2=0$), with the probability \n(let us call it the {\\it survival probability}) given by the product \n(compare to (\\ref{eq:22})),\n\\begin{align}\nW^{(n)}_{{\\rm comp.}}(T)\n& = \\prod_{k=1}^{n} \\left(1- p_3(t_k)\\right).\n\\end{align} \n\nFinally, the no-Zeno case corresponds to the exact evolution,\n\\begin{equation}\nU(t) = \\exp\\{-iHt\\},\n\\end{equation}\nwith the leakage to $|3\\rangle$ taken into account. The probability for the \nsystem to be found in the computational subspace in a single \nmeasurement performed at the end of the evolution is \n\\begin{equation}\nW_{\\rm comp.}(T)= 1 - p_3(T).\n\\end{equation} \n\nA quick glance at the figures shows the presence of the Zeno effect in our \nsystem. There is a critical number of the measuring steps, $n_{\\rm crit}$, \ndepending on the choice of the system parameters, \nabove which the survival probability $W^{(n)}_{{\\rm comp.}}(T)$ exceeds \n$W_{\\rm comp.}(T)$. In the limit $n\\rightarrow \\infty$, \n$W^{(n)}_{{\\rm comp.}}(T)$ approaches 1, as expected.\n\n\\begin{figure}[!ht]\n\\includegraphics[angle=0,width=1.00\\linewidth]{fig1}\n\\caption{ \\label{fig:1} \nPopulations of various qubit states during the Rabi pulse interrupted by \n$n=25$ projective measurements of state $|3\\rangle$.\n}\n\\end{figure}\n\\begin{figure}[!ht]\n\\includegraphics[angle=0,width=1.00\\linewidth]{fig2}\n\\caption{ \\label{fig:2} \nPopulations of various qubit states during the Rabi pulse interrupted by \n$n=50$ projective measurements of state $|3\\rangle$.\n}\n\\end{figure}\n\\begin{figure}[!ht]\n\\includegraphics[angle=0,width=1.00\\linewidth]{fig3}\n\\caption{ \\label{fig:3} \nPopulations of various qubit states during the Rabi pulse interrupted by \n$n=100$ projective measurements of state $|3\\rangle$.\n}\n\\end{figure}\n\n\\section{Taking a closer look at Eq.\\ (\\ref{eq:17}) and designing a \nmeasurement model}\n\n\\begin{figure}\n\\includegraphics[angle=0,width=1.00\\linewidth]{fig4}\n\\caption{ \\label{fig:4} \nPopulations of various qubit states during the Rabi pulse with the state $|3\\rangle$\nsubjected to tunneling.\n}\n\\end{figure}\n\nWe now take a closer look at Eq.\\ (\\ref{eq:17}), in which the second order \nterm was ignored. 
Restoring that second-order term, we find\n\\begin{align}\na_3(\\Delta t) &= \\sqrt{2}a_{2}(0) {\\Omega}\\Delta t \n+ \\frac{\\sqrt{2}}{2}\\left[a_1(0) \\Omega^2 - a_2(0)\\Omega\\eta i\\right] \n\\Delta t^2.\n\\end{align}\nAssume now that the $|3\\rangle$ level is allowed to tunnel out with the rate \n$\\Gamma$. The third diagonal matrix element of the Hamiltonian (\\ref{eq:12}) \nwould then get modified as\n\\begin{equation}\n\\eta \\rightarrow \\eta - i\\Gamma\/2,\n\\end{equation}\ngiving\n\\begin{align}\n\\label{eq:42}\na_3(\\Delta t) &= \\sqrt{2}a_{2}(0) {\\Omega}\\Delta t \n\\nonumber \\\\\n&+ \\frac{\\sqrt{2}}{2}\\left[a_1(0) \\Omega^2 \n- a_2(0) \\Omega \\left(\\frac{\\Gamma}{2}+\\eta i\\right)\\right] \\Delta t^2.\n\\end{align}\nWe now make the following observation: if the tunneling rate is chosen to be\n\\begin{equation}\n\\Gamma = \\frac{4}{\\Delta t},\n\\end{equation}\nthe first-order term, $\\sqrt{2}a_{2}(0) {\\Omega}\\Delta t$, in Eq.\\ (\\ref{eq:42}) \nwould get canceled, giving\n\\begin{align}\n\\label{eq:49}\na_3(\\Delta t) &=\n\\frac{\\sqrt{2}}{2}\\left[a_1(0) \\Omega^2 - a_2(0)\\Omega \\eta i\\right]\n\\Delta t^2,\n\\end{align}\nwhich would result in a population of the unwanted state \n$|3\\rangle$ of {\\it fourth} order in $\\Delta t$.\n\nFigure \\ref{fig:4} shows the result of an exact simulation for the system initially \nprepared in the ground state, using the Hamiltonian\n\\begin{align}\n\\label{eq:50}\nH &= \n\\begin{pmatrix}\n0 & -i\\Omega & 0 \\cr \ni\\Omega & 0 & -i\\sqrt{2} \\Omega \\cr\n0 & i\\sqrt{2} \\Omega & \\eta - i\\Gamma\/2 \\cr\n\\end{pmatrix},\n\\end{align}\nwith $\\Gamma = 40$ [ns$^{-1}$] and $\\Omega = 0.05$ [GHz] \n(this effectively corresponds to the Zeno case with $n=50$ and $\\Delta t = 0.1$ [ns]). The survival probability is defined by\n\\begin{equation}\nW^{({\\rm tunnel.})}_{{\\rm comp.}}(t) = |a_1(t)|^2+|a_2(t)|^2.\n\\end{equation}\nThe corresponding dynamics can be viewed as a reasonable measurement model \nfor state $|3\\rangle$. We see that for the chosen system parameters leakage out of the \ncomputational subspace is virtually nonexistent.\n\n\\newpage\n\\section{Conclusions}\n\nWorking with the RWA Hamiltonian of a three-level qubit driven by a Rabi \npulse, we showed that, by allowing the system to continuously tunnel out \nof its higher-energy state (the greater $\\Gamma$, the better), we effectively \nconfine the system's dynamics to the computational subspace. 
Such quantum \nZeno behavior can potentially be used to resolve the long-standing spectral \ncrowding problem that has plagued the field of superconducting quantum \ncomputing for almost two decades.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{#1}\\vspace{-0.05in}}\n\\newcommand{\\mysubsection}[1]{\\vspace{-0.1in}\\subsection{#1}\\vspace{-0.05in}}\n\\newcommand{\\mysubsubsection}[1]{\\vspace{-0.1in}\\subsubsection{#1}\\vspace{-0.05in}}\n\\newcommand{\\myparagraph}[1]{\\par\\smallskip\\par\\noindent{\\bf{}#1:~}}\n\\newcommand{\\ceil}[1]{{\\left\\lceil#1 \\right\\rceil}}\n\\newcommand{\\comment}[1]{}\n\\newcommand\\numberthis{\\addtocounter{equation}{1}\\tag{\\theequation}}\n\\newcommand{\\textnormal{poly}}{\\textnormal{poly}}\n\n\n\\newcommand{{\\mathcal{A}}}{{\\mathcal{A}}}\n\\newcommand{{\\mathcal{B}}}{{\\mathcal{B}}}\n\\newcommand{{\\mathcal{G}}}{{\\mathcal{G}}}\n\\newcommand{{\\mathcal{I}}}{{\\mathcal{I}}}\n\\newcommand{{\\mathcal{S}}}{{\\mathcal{S}}}\n\\newcommand{{\\mathcal{M}}}{{\\mathcal{M}}}\n\\newcommand{{\\mathcal{L}}}{{\\mathcal{L}}}\n\\newcommand{{\\mathcal{J}}}{{\\mathcal{J}}}\n\\newcommand{{\\mathcal{I}_R}}{{\\mathcal{I}_R}}\n\n\\def {\\cal N} {{\\cal N}}\n\\def {\\cal C} {{\\cal C}}\n\n\n\\newcommand{{\\mathcal{C}}}{{\\mathcal{C}}}\n\\newcommand{{\\bar{x}}}{{\\bar{x}}}\n\\newcommand{{\\bar{y}}}{{\\bar{y}}}\n\\newcommand{{\\bar{z}}}{{\\bar{z}}}\n\\newcommand{{\\bar{c}}}{{\\bar{c}}}\n\\newcommand{\\textnormal{OPT}}{\\textnormal{OPT}}\n\\newcommand{\\textnormal{\\textsf{Greedy}}}{\\textnormal{\\textsf{Greedy}}}\n\\newcommand{\\textsc{Greedy+Singleton}}{\\textsc{Greedy+Singleton}}\n\\newcommand{\\textsc{EnumGreedy}}{\\textsc{EnumGreedy}}\n\\newcommand{{\\varepsilon}}{{\\varepsilon}}\n\\newcommand{{\\mu}}{{\\mu}}\n\\newcommand{{\\mathbb{E}}}{{\\mathbb{E}}}\n\\newcommand{\\floor}[1]{\\left\\lfloor #1 \\right\\rfloor}\n\\DeclareMathOperator*{\\argmax}{arg\\,max}\n\\DeclareMathOperator*{\\argmin}{arg\\,min}\n\\newcommand{{\\varphi}}{{\\varphi}}\n\\begin{document}\n\t\\newtheorem{thm}{Theorem}[section]\n\t\\newtheorem{prop}[thm]{Proposition}\n\t\\newtheorem{assm}[thm]{Assumption}\n\t\\newtheorem{lem}[thm]{Lemma}\n\t\\newtheorem{obs}[thm]{Observation}\n\t\\newtheorem{cor}[thm]{Corollary}\n\t\\newtheorem{lemma}[thm]{Lemma}\n\t\\newtheorem{definition}[thm]{Definition}\n\t\\newtheorem{theorem}[thm]{Theorem}\n\t\\newtheorem{proposition}[thm]{Proposition}\n\t\\newtheorem{observation}[thm]{Observation}\n\t\\newtheorem{claim}[thm]{Claim}\n\t\t\\newtheorem{example}[thm]{Example}\n\t\\newtheorem{defn}[thm]{Definition}\n\t\\newcommand{\\ariel}[1]{{\\color{red} (Ariel :#1)}}\n\t\\newcommand{\\ilan}[1]{{\\color{blue} Ilan :\\color{magenta}#1}}\n\t\\def {\\mathcal I} {{\\mathcal I}}\n\t\\newcommand{\\mathbbm{1}}{\\mathbbm{1}}\n\t\n\n\t\\begin{titlepage}\n\t\\title{\n\t\t Bin Packing with Partition Matroid can be Approximated within $o(OPT)$ Bins\n\t}\n\t\\author{Ilan Doron-Arad\\thanks{Computer Science Department, \n\t\tTechnion, Haifa, Israel. \\texttt{idoron-arad@cs.technion.ac.il}}\n\t\\and\n\tAriel Kulik\\thanks{CISPA Helmholtz Center for Information Security, Saarland Informatics Campus, Germany. \\texttt{ariel.kulik@cispa.de}} \n\t\\and\n\tHadas Shachnai\\thanks{Computer Science Department, \n\t\tTechnion, Haifa, Israel. 
\\texttt{hadas@cs.technion.ac.il}}\n}\n\n\t\\maketitle\n\n\t\\input{abstract}\n\t\n\t\n\t\\thispagestyle{empty}\n\\end{titlepage}\n\n\t\\tableofcontents\n\\thispagestyle{empty}\n\t\\newpage\n\\setcounter{page}{1}\n\n\\input{introduction}\n\\input{preliminaries}\n\\input{Overview}\n\n\\input{eviction}\n\\input{shifting}\n\\input{FromPolytope}\n\\input{pack}\n\\input{Reconstruct}\n\\input{discussion}\n\n\\section{Algorithm $\\textsf{Partition}$}\n\\label{sec:FromPolytope}\n\n\nAlgorithm \\textsf{Partition} constructs an ${\\varepsilon}$-nice partition of small size based on the integrality properties of the polytope, as given in Lemma~\\ref{O(1)}. Let $\\bar{z}$ be \n a good prototype (satisfying the conditions in Definition~\\ref{def:goodprototype}).\nSince the values of entries in the support of $\\bar{z}$ are not necessarily integral, $\\bar{z}$ may not satisfy the conditions of Lemma~\\ref{O(1)}. Thus, we \nmodify $\\bar{z}$ to a prototype having integral entries. Define the {\\em integralization} of $\\bar{z}$ as a prototype $\\bar{z}^* \\in \\mathbb{R}^{{\\mathcal{C}}}_{\\geq 0}$ such that, for all $C \\in {\\mathcal{C}}$, $\\bar{z}^*_C = \\ceil{\\bar{z}_C}$.\n\n\\begin{observation}\n\t\\label{ob:y}\n\tThe $\\bar{z}^*$-polytope is non-empty, $\\textnormal{\\textsf{supp}} (\\bar{z}^*) =\\textnormal{\\textsf{supp}} (\\bar{z})$, and $\\|\\bar{z}^*\\| \\leq \\|\\bar{z}\\|+|\\textnormal{\\textsf{supp}}(\\bar{z})|$. \n\\end{observation}\n\nBy Observation~\\ref{ob:y}, it follows that $\\bar{z}^*$ is a good prototype, and there is a vertex $\\bar{\\gamma}$ of the $\\bar{z}^*$-polytope that satisfies constraint \\eqref{F4} with equality. Let $F = \\{\\{\\ell\\}~|~ \\ell \\in I, \\exists t \\in I \\cup {\\mathcal{C}} \\text{ s.t. } \\bar{\\gamma}_{\\ell,t} \\in (0,1)\\}$ be the set of \nitems that are fractionally assigned to some type by $\\bar{\\gamma}$. Since $\\bar{z}^*$ is a good prototype with integral entries, by Lemma~\\ref{O(1)} $|F|$ is small. Thus, \nan ${\\varepsilon}$-nice partition is obtained by assigning the integral items in $\\bar{\\gamma}$ and then adding the {\\em fractional} items,\nwith only a small increase in the size of the ${\\varepsilon}$-nice partition. \n\nWe start by finding a packing for items assigned integrally by $\\bar{\\gamma}$ to slot-types via bipartite matching. Specifically, let $$V = \\left\\{\\left(C,j,k\\right) ~|~C\\in \\textsf{supp}(\\bar{z}^*), j \\in C, k \\in \\left[\\bar{z}^*_C\\right] \\right\\}$$ where each vertex in $V$ represents a slot within a configuration $C \\in \\textsf{supp}(\\bar{z}^*)$, and an index for one of the $\\bar{z}^*_C$ bins associated with $C$. Also, let $U = \\{\\ell \\in I~|~ \\exists j \\in I \\text{ s.t. } \\bar{\\gamma}_{\\ell,j} = 1\\}$ be all items assigned integrally to some slot-type by $\\bar{\\gamma}$. Now, define the {\\em assignment graph} of $\\bar{\\gamma}$ as the bipartite graph $G = (U, V, E)$, where $E = \\{\\left(\\ell, \\left(C,j,k\\right)\\right) \\in U \\times V~|~ \\bar{\\gamma}_{\\ell,j} = 1\\}$. That is, there is an edge between any item $\\ell \\in U$ and $(C,j,k) \\in V$ if $\\ell$ is assigned integrally (i.e., completely) to $j$ by $\\bar{\\gamma}$. If the edge $(\\ell,(C,j,k))$ is taken into the matching, we replace the slot $j$ by $\\ell$ in the $k$-th bin associated with $C$. \n\n\\begin{lemma}\n\t\\label{lem:matchingU}\nThere is a matching in $G$ of size $|U|$. 
\n\\end{lemma}\n The proof of Lemma~\\ref{lem:matchingU} is based on finding a fractional matching in $G$, by taking $\\frac{1}{d(\\ell)}$ of an edge $(\\ell,v) \\in E$ to the matching, where $d(\\ell)$ is the degree of $\\ell$ in $G$. By constraint \\eqref{F3} of the $\\bar{z}^*$-polytope, this guarantees a feasible fractional matching of size $|U|$. Since $G$ is bipartite, there is also an integral matching of size $|U|$ in $G$ \\cite{schrijver2003combinatorial}. Let $M$ be a matching in $G$ of size $|U|$. \n\n\nWe construct below a packing based on $M$. For all $C \\in \\textsf{supp}(\\bar{z}^*)$ and $k \\in [\\bar{z}^*_C]$, define $$A_{C,k} = \\{\\ell \\in U ~|~ \\exists j \\in I \\text{ s.t. } (\\ell,(C,j,k)) \\in M\\}$$ as the {\\em bin} of $C$ and $k$, which contains all items coupled by the matching to the $k$-th bin associated with $C$. The next lemma follows since $M$ is a matching in the assignment graph of $\\bar{\\gamma}$.\n\n\n\n\\begin{lemma}\n\t\\label{lem:allowed}\nFor all $C \\in \\textnormal{\\textsf{supp}} (\\bar{z}^*)$ and $k \\in [\\bar{z}^*_C]$, $A_{C,k}$ is allowed in $C$. \n\\end{lemma} \n\nWe now define a packing of all items.\nGiven two tuples $S,T$, let $S \\oplus T$ denote the concatenation of $S$ and $T$. The {\\em packing of} $\\bar{\\gamma}$ and $M$ is given by\n\n\\begin{equation}\n\\label{Astar}\nA^* = \\big(h~|~h \\in F\\big) \\oplus \\big(A_{C,k} ~|~ C \\in \\textsf{supp}(\\bar{z}^*), k \\in [\\bar{z}^*_C]\\big). \n\\end{equation} In words, $A^*$ contains a bin for every fractional item, as well as all the bins associated with configurations in the support of $\\bar{z}^*$. It follows that $A^*$ is a packing of $U$ and all fractional items. \n\nWe construct an ${\\varepsilon}$-nice partition below whose packing is $A^*$. Let $\\mathcal{H} = \\textsf{supp}(\\bar{z}^*) \\cup F$. By Lemma~\\ref{O(1)}, we get $|F| \\leq {\\varepsilon}^{-21}|\\textsf{supp}(\\bar{z}^*)|^2$, and it follows that $|\\mathcal{H}| \\leq {\\varepsilon}^{-22}Q^2({\\varepsilon})$ . For all $C \\in \\mathcal{H}$ define the {\\em category of} $C$ as \\begin{equation}\n\t\\label{BC}\n\tB_C = \\left\\{A_{C,k}~|~ k \\in [\\bar{z}^*_{C}] \\right\\} \\cup \\big\\{ h \\in F ~|~ h = C \\big\\}\n\\end{equation} that contains all the bins associated with $C$, and possibly a singleton of a fractional item.\\footnote{If $C$ is a singleton of this item.} By \\eqref{Astar} and \\eqref{BC}, $\\{B_C\\}_{C \\in \\mathcal{H}}$ is a partition of the bins in $A^*$. In addition, for all $C \\in \\mathcal{H}$ define $$D_{C} = \\{\\ell \\in I~|~\\bar{\\gamma}_{\\ell,C} = 1\\}$$ as all items assigned integrally to $C$ by $\\bar{\\gamma}$. Finally, define the {\\em division} of $\\bar{\\gamma}$ and $M$, as $A^*$, $\\mathcal{H}$, $(B_C)_{C \\in \\mathcal{H}}$, and $(D_C)_{C \\in \\mathcal{H}}$. Algorithm $\\textsf{Partition}$ computes the above ${\\varepsilon}$-nice partition. 
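\nTo make the matching step concrete, here is a minimal sketch of constructing the assignment graph and reading the bins $A_{C,k}$ off a maximum matching. It is an illustration only (not part of the formal algorithm); it assumes items are integers, each configuration is a tuple of slots, $\\bar{z}^*$ and the integral slot assignments of $\\bar{\\gamma}$ are given as dictionaries, and it uses the \\texttt{networkx} library:\n\\begin{verbatim}\nimport networkx as nx\nfrom networkx.algorithms import bipartite\n\ndef bins_from_matching(z_star, gamma_slots):\n    # z_star: {C (tuple of slots): multiplicity}\n    # gamma_slots: {item ell: slot j} whenever gamma[ell, j] = 1\n    G = nx.Graph()\n    U = set(gamma_slots)\n    G.add_nodes_from(U)\n    for C, m in z_star.items():\n        for j in set(C):\n            for k in range(m):\n                G.add_node((C, j, k))   # slot j of the k-th bin of C\n    for ell, j in gamma_slots.items():\n        for C, m in z_star.items():\n            if j in C:\n                for k in range(m):\n                    G.add_edge(ell, (C, j, k))\n    M = bipartite.hopcroft_karp_matching(G, top_nodes=U)\n    bins = {}\n    for ell in U:   # the matching lemma guarantees all of U is matched\n        C, j, k = M[ell]\n        bins.setdefault((C, k), set()).add(ell)\n    return bins\n\\end{verbatim}\n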
We give the pseudocode in Algorithm~\ref{Alg:Assign}.\n\n\n\\begin{algorithm}[htb]\n\t\\caption{$\\textsf{Partition}(\\bar{z})$}\n\t\\label{Alg:Assign}\n\t\n\tGenerate $\\bar{z}^*$, the integralization of $\\bar{z}$\n\t\n\tFind a vertex $\\bar{\\gamma}$ of the $\\bar{z}^*$-polytope satisfying \\eqref{F4} with equality \\label{assign:find_gamma}\n\t\n\tConstruct the assignment graph $G$ of $\\bar{\\gamma}$\n\t\n\tFind a maximum matching $M$ in $G$\n\t\n\tReturn the division of $\\bar{\\gamma}$ and $M$\n\t\n\\end{algorithm}\n\n \\begin{claim}\n\t\\label{eqeq6}\n\tFor any $C \\in \\mathcal{H}$ and $G \\in {\\mathcal{G}}$ the following hold.\n\t\\begin{enumerate}\n\t\t\\item $D_C \\subseteq \\textnormal{\\textsf{fit}}(C)$. \n\t\t\\item $s(D_C) \\leq \\left(1-s(C)\\right) \\cdot |B_C|$. \n\t\t\\item $|D_C \\cap G| \\leq |B_C| \\cdot \\left( k(G) - |G \\cap C| \\right)$.\n\t\\end{enumerate} \n\\end{claim} \n\n The properties of the $\\bar{z}^*$-polytope are the key to proving Claim~\\ref{eqeq6} and consequently Lemma~\\ref{FromPolytope}. Recall that $\\bar{\\gamma}$ is in the $\\bar{z}^*$-polytope; thus, the first property of Claim~\\ref{eqeq6} follows from constraint \\eqref{F1}, the second by constraint \\eqref{F2}, and the third by constraint \\eqref{F5}. Moreover, using constraint \\eqref{F3}, we construct the matching $M$. Finally, we use constraint \\eqref{F4} to show that $\\{D_C\\}_{C \\in \\mathcal{H}}$ is a partition of items not packed in $A^*$.\n\\begin{lemma}\n\t\\label{lem:assign}\n\tThe division of $\\bar{\\gamma}$ and $M$ is an ${\\varepsilon}$-nice partition of size at most $\\|\\bar{z}\\|+{\\varepsilon}^{-22}|\\textsf{supp}(\\bar{z})|^2$.\n\\end{lemma}\n\n\\noindent{\\bf Proof of Lemma~\\ref{FromPolytope}}\nThe correctness of Algorithm~\\ref{Alg:Assign} follows from Lemma~\\ref{lem:assign}. Observe that the vertex~$\\bar{\\gamma}$ in Step~\\ref{assign:find_gamma} can be found via standard linear programming; thus, the algorithm runs in polynomial time. \n\\qed\n\n\n\\section{Approximation Algorithm for ${\\varepsilon}$-Structured Instances}\n\\label{sec:overview}\n\nOur algorithm uses a configuration-LP \nrelaxation of the given BPP instance.\nFor ${\\varepsilon} \\in (0,0.1)$ such that ${\\varepsilon}^{-1}\\in\\mathbb{N}$, let $\\mathcal{I} = (I,\\mathcal{G},s,k)$ be an ${\\varepsilon}$-structured BPP instance. Recall that a {\\em configuration} of $\\mathcal{I}$ is a subset of items $C \\subseteq I$ such that $\\sum_{\\ell \\in C} s(\\ell) \\leq 1$, and $|C \\cap G| \\leq k(G)$ for all $G \\in \\mathcal{G}$. Let $\\mathcal{C}({\\mathcal{I}})$ be the set of all configurations of $\\mathcal{I}$; we use ${\\mathcal{C}}$ when the instance ${\\mathcal{I}}$ is clear from the context. Given $S \\subseteq I$, a partition $(A_1, \\ldots, A_m)$ of $S$ is a {\\em packing} (or, a {\\em packing of} $S$) if $A_b \\in {\\mathcal{C}}$ for all $b \\in [m]$; if $S = I$ then we say that $A$ is a {\\em packing of} ${\\mathcal{I}}$. For $\\ell \\in I$, let $\\mathcal{C}[\\ell] = \\{C \\in \\mathcal{C}~|~\\ell \\in C\\}$ be the set of configurations of $\\mathcal{I}$ that contain $\\ell$. 
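\nStated in code, checking the two defining conditions of a configuration is immediate. The following sketch is ours, purely for illustration, with hypothetical dictionaries for the size function $s$, the cardinality bounds $k$, and the item-to-group map:\n\\begin{verbatim}\nfrom collections import Counter\n\ndef is_configuration(C, s, k, group):\n    # C: iterable of items; s: item -> size in [0, 1]\n    # k: group -> cardinality bound; group: item -> its group\n    if sum(s[ell] for ell in C) > 1:\n        return False    # total size exceeds the unit bin capacity\n    counts = Counter(group[ell] for ell in C)\n    return all(counts[g] <= k[g] for g in counts)\n\\end{verbatim}\n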
Our algorithm initially solves the following configuration-LP relaxation of the problem.\n\n\\begin{equation}\n\t \\label{C-LP}\n\t\\begin{aligned}\n\t\t~~~~~ \\min\\quad & ~~~~~\\sum_{C \\in \\mathcal{C}} \\bar{x}_C \\\\\n\t\t\\textsf{s.t.\\quad} & ~~~\\sum_{~C \\in \\mathcal{C}[\\ell]} \\bar{x}_C = 1 & \\forall \\ell \\in I~~~~~\\\\ \n\t\t& ~~~~~\\bar{x}_C \\geq 0 ~~~~~~~~&~~~~~~~~~~~~~~~~~~~~ \\forall C \\in \\mathcal{C}~~~~\n\t\\end{aligned}\n\\end{equation}\n\n\nA solution for the LP~(\\ref{C-LP}) assigns to each configuration \n$C \\in \\mathcal{C}$ a real number $\\bar{x}_C \\in [0,1]$ which indicates the fractional selection of $C$ for the solution such that each item is fully {\\em covered}. For simplicity, denote by $\\|\\bar{x}\\| = \\sum_{C \\in \\mathcal{C}} \\bar{x}_C $ the $\\ell_1$-norm of $\\bar{x}$ (that is, the objective value in~(\\ref{C-LP})). Note that the configuration-LP~(\\ref{C-LP}) has an exponential number of variables; thus, it cannot be solved in polynomial time by applying standard techniques. A common approach for solving such linear programs is to use a {\\em separation oracle} for the dual program.\n\n\nConsider the {\\em configuration maximization problem (CMP)} in which we are given a BPP instance $\\mathcal{I}=(I,\\mathcal{G},s,k)$ and a weight function $w:I\\rightarrow \\mathbb{R}_{\\geq 0}$; the objective is to find a configuration $C\\in \\mathcal{C}$ such that $\\sum_{\\ell \\in C} w(\\ell)$ is maximized. By a well known connection between separation and optimization, an FPTAS for CMP implies an FPTAS for the configuration-LP of $\\mathcal{I}$~\\cite{grigoriadis2001approximate,fleischer2011tight,grotschel2012geometric,plotkin1995fast}. CMP\ncan be solved via an easy reduction \nto {\\em knapsack with partition matroid}, which admits an FPTAS~\\cite{chakaravarthy2013knapsack}. Thus, we have\n\\begin{lemma}\n\\label{configurationLP}\nThere is an algorithm $\\textnormal{\\textsf{SolveLP}}$ which given a BPP instance ${\\mathcal{I}}$ and ${\\varepsilon}>0$, returns in time $\\textnormal{poly} (| {\\mathcal{I}} |, \\frac{1}{{\\varepsilon}})$ a solution for the configuration-LP of $\\mathcal{I}$ of value at most $(1+{\\varepsilon})\\textnormal{OPT}$, where $\\textnormal{OPT}$ is the value of an optimal solution for the configuration-LP of $\\mathcal{I}$.\n\\end{lemma}\n\nWe give the proof of Lemma~\\ref{configurationLP} in Appendix~\\ref{app:omitted}. A key component in our scheme is the construction of a {\\em prototype} of a packing. \n\n\\begin{definition}\n\t\\label{def:prototype}\n\tGiven a BPP instance ${\\mathcal{I}}$, a {\\em prototype} is a vector $\\bar{x} \\in \\mathbb{R}_{\\geq 0}^{{\\mathcal{C}}}$. \n\\end{definition}\n\n\nIn particular, a solution for~\\eqref{C-LP} is a prototype. \nIn the context of a prototype, each configuration $C \\in \\mathcal{C}$ is considered as a set of {\\em slots}: each slot $\\ell \\in C$ is a placeholder for an item, where items that {\\em fit} in the place of a slot $\\ell$ are those in the group of $\\ell$ of smaller or equal size. \nFor any $\\ell \\in I$\ndefine $\\textsf{group}(\\ell) = G$, where $G\\in {\\mathcal{G}}$ is the unique group such that $\\ell\\in G$. 
Now, we define the subset of items that fit in place of a slot $\\ell \\in I$ by\n\n\\begin{equation}\n\t\\label{fitl}\n\t\\textsf{fit}(\\ell) = \\{\\ell' \\in \\textsf{group}(\\ell)~|~ s(\\ell') \\leq s(\\ell)\\}~~~~~~~~~~~~~~\\forall \\ell \\in I.\n\\end{equation}\n\n\n Each configuration is viewed as a set of slots, representing subsets of items that can \n replace the slots in the actual packing of the instance.\n We associate a polytope with the prototype, in which additional items (i.e., items which do not replace a slot) may be assigned to the unused capacity of a configuration $C$. To enable efficient packing of the instance, these additional items\n must be {\\em small} relative to the unused capacity of $C$. To this end, we define the set of items which {\\em fit with} a set of slots $C \\in \\mathcal{C}$ as \\begin{equation}\n\t\\label{fitS}\n\t\\textsf{fit}(C) = \\{\\ell \\in I~|~ s(\\ell) \\leq \\min \\{{\\varepsilon}^2, {\\varepsilon} \\cdot (1-s(C))\\}\\}~~~~~~~~~~~~~~\\forall C \\in \\mathcal{C}.\n\\end{equation} \n\n In words, $\\textsf{fit}(C)$ contains the items whose size is at most ${\\varepsilon}^2$ and also at most an ${\\varepsilon}$-fraction of the unused capacity of $C$. Given a prototype $\\bar{x}$, the {\\em $\\bar{x}$-polytope} is a relaxation for a packing of the instance that assigns items fractionally, either as replacement for slots or in addition to a set of slots. We define the set of {\\em types} of the instance $\\mathcal{I}$ to be $I\\cup {\\mathcal{C}}$. The set of types includes the {\\em slot-types}, i.e., slots (items) in $I$, and {\\em configuration-types}, i.e., configurations in $\\mathcal{C}$. A point in the $\\bar{x}$-polytope\n has an entry for each pair of an item $\\ell \\in I$ and a type $t \\in I \\cup {\\cal C}$ which\n represents the fractional {\\em assignment} of the item to the type. Formally,\n\n\n \n \n \\begin{definition}\n \t\\label{def:polytope}\n \tGiven a BPP instance ${\\mathcal{I}}$, the set ${\\mathcal{C}}$ of configurations for ${\\mathcal{I}}$, and a prototype $\\bar{x}$ of ${\\mathcal{I}}$, the $\\bar{x}$-polytope is the set containing all points $\\bar{\\gamma} \\in [0,1]^{I \\times (I\\cup \\mathcal{C})}$ which satisfy the following constraints.\n\n\n\\begin{align}\n\t\t\\displaystyle \\bar{\\gamma}_{\\ell,t} = 0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~\t~~~~~~~~~~\\forall \\ell \\in I , t \\in I \\cup \\mathcal{C}\\textnormal{ s.t. 
} \\ell \\notin \\textnormal{\\textsf{fit}}\\left(t \\right)~~~~ \\label{F1}\\\\\n\t\t\\rule{0pt}{1.8em}\n\t\t\\displaystyle \\sum_{\\ell \\in I} \\bar{\\gamma}_{\\ell,C} \\cdot s(\\ell) \\leq \\left(1-s(C)\\right) \\cdot \\bar{x}_C ~~~~~~~~~~~~~~~~~~~\t~~~~~~\\forall C \\in \\mathcal{C} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \\label{F2}\\\\\n\t\t\\rule{0pt}{1.8em}\n\t\t\\displaystyle \\sum_{\\ell \\in G} \\bar{\\gamma}_{\\ell,C} \\leq \\bar{x}_C \\cdot \\left(k(G)-|C \\cap G|\\right)~~~~ ~~~~~~~~~~\t~~~~~~~~~~\\forall G \\in \\mathcal{G}, C \\in \\mathcal{C}~~~~~~~~~~~~~~~~~~~~~~~\\label{F5}\\\\\n\t\t\\rule{0pt}{1.8em}\n\t\t\\displaystyle \\sum_{\\ell \\in I} \\bar{\\gamma}_{\\ell,j} \\leq \\sum_{\\substack{C \\in \\mathcal{C}[j]}} \\bar{x}_C~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~\t~~~~~~~~~~\\forall j \\in I ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\label{F3}\\\\\n\t\t\\rule{0pt}{1.8em}\n\t\t\\displaystyle \\sum_{t \\in I \\cup \\mathcal{C}} \\bar{\\gamma}_{\\ell,t} \\geq 1 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~\t\\forall \\ell \\in I~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \\label{F4}\n\t\\end{align}\n\t \\end{definition}\n\t\n\tConstraints~(\\ref{F1}) indicate that an item $\\ell \\in I$ cannot be assigned to type $t$ if $\\ell$ does not fit in~$t$. \n\tConstraints~(\\ref{F2}) set an upper bound on the total (fractional) size of items assigned to each configuration-type $C \\in \\mathcal{C}$. This bound is equal to the residual capacity of $C$ times the number of bins packed with $C$, given by $\\bar{x}_C$.\n\tConstraints~(\\ref{F5}) bound the number of items in each group $G$ assigned to configuration-type $C$; at most $k(G)-|C \\cap G|$ items in $G$ can be added to $C$ without violating the cardinality constraint of $G$. Constraints~(\\ref{F3}) bound the number of items assigned to slot-type $j \\in I$ by the total selection of configurations containing $j$ in $\\bar{x}$.\n\tFinally, constraints~(\\ref{F4}) guarantee that each item is fully assigned to the types.\n\tWe note that if a prototype $\\bar{x}$ is a solution for (\\ref{C-LP}) then\n\tthe $\\bar{x}$-polytope is non-empty;\n\tin particular, it contains the point \n\t$\\bar{\\gamma}$\n\twhere $\\bar{\\gamma}_{\\ell, j}=1$ $\\forall \\ell, j \\in I$ \n\tsuch that \n\t$\\ell=j$, and $\\bar{\\gamma}_{\\ell, t}=0$ otherwise.\n\t \nLet $\\textsf{supp}(\\bar{x}) = \\{C \\in \\mathcal{C}~|~ \\bar{x}_C>0\\}$ be the {\\em support} of $\\bar{x}$. Throughout this paper, we use prototypes $\\bar{x}$ for which $\\textsf{supp}(\\bar{x})$ is polynomial in the input size; thus, these prototypes have\nsparse representations.\nThe next lemma shows that if the prototype $\\bar{x}$ has a small support, and each configuration in the support contains a few items then the vertices of the $\\bar{x}$-polytope are almost integral.\nThus, given a vertex $\\bar{\\lambda}$ of such $\\bar{x}$-polytope, the items assigned fractionally by $\\bar{\\lambda}$ can be packed using only a small number of extra bins. \n\n\n \\begin{lemma}\n \t\\label{O(1)}\n \tLet ${\\mathcal{I}}$ be a BPP instance, $k \\geq 1$ an integer and $\\bar{x}$ a prototype of $\\mathcal{I}$ such that $|C| \\leq k$\n \tfor all $C \\in \\textnormal{\\textsf{supp}}(\\bar{x})$, and $\\bar{x}_C \\in \\mathbb{N}$.\n \tThen given a vertex $\\bar{\\lambda}$\n \tof the $\\bar{x}$-polytope for which constraints \\eqref{F4} hold with equality, $$\\left|\\left\\{\\ell \\in I~|~ \\exists t \\in I \\cup {\\mathcal{C}} \\text{ s.t. 
} \\bar{\\lambda}_{\\ell,t} \\in (0,1)\\right\\}\\right| \\leq 8k^2 \\cdot |\\textnormal{\\textsf{supp}}(\\bar{x})|^2.$$ \n \\end{lemma}\n\nThe proof of Lemma~\\ref{O(1)} bears some similarity to a proof of~\\cite{DKS21_arXiv}, which shows the integrality properties of a somewhat different polytope. We give the complete proof in Appendix~\\ref{sec:poly}. We now describe the main components of our scheme, which converts an initial prototype $\\bar{x}$ (defined by a solution for the configuration-LP) into another prototype $\\bar{z}$, and then constructs a packing based on $\\bar{z}$. The conditions required of a prototype that enables the construction of an efficient packing are given below. Let $Q:(0,0.1)\\to \\mathbb{R}$ where $Q({\\varepsilon}) = \\exp({\\varepsilon}^{-17})$ for all ${\\varepsilon}\\in (0,0.1)$. \n\n\\begin{definition}\n\t\\label{def:goodprototype}\n\tGiven ${\\varepsilon} \\in (0, 0.1)$ and an ${\\varepsilon}$-structured BPP instance ${\\mathcal{I}}$, a prototype $\\bar{x}$ of ${\\mathcal{I}}$ is a {\\em good prototype} if the $\\bar{x}$-polytope is non-empty, $|\\textnormal{\\textsf{supp}}(\\bar{x})| \\leq Q({\\varepsilon})$, and $|C| \\leq {\\varepsilon}^{-10}$ for all $C \\in \\textnormal{\\textsf{supp}}(\\bar{x})$. \n\\end{definition}\nObserve that a solution $\\bar{x}$ for the configuration-LP of $\\mathcal{I}$\nis not necessarily a good prototype, since it may have a support of large size.\nGiven this initial prototype, our scheme generates a good prototype in two steps.\nIn the first step, algorithm {\\sf Evict} constructs an intermediate prototype $\\bar{y}$ \nwhich selects only configurations containing\na small number of items. This results in a small increase in the total number of bins used. Also, the $\\bar{y}$-polytope is non-empty. Given ${\\varepsilon}>0$, we say that an item $\\ell\\in I$ is ${\\varepsilon}$-large if $s(\\ell)\\geq {\\varepsilon}^2$. We use $L({\\varepsilon},{\\mathcal I})$ to denote the set of ${\\varepsilon}$-large items of an instance ${\\mathcal I}$. If ${\\mathcal I}$ and ${\\varepsilon}$ are known by context we simply use $L$ (instead of $L({\\varepsilon},{\\mathcal I})$). The properties of $\\bar{y}$ are summarized in the next lemma (see the details of algorithm {\\sf Evict} in Section~\\ref{sec:eviction}).\n \n\n\\begin{lemma}\n\t\\label{lem:eviction}\n\tThere is an algorithm $\\textsf{Evict}$ that given ${\\varepsilon} \\in (0,0.1)$ such that ${\\varepsilon}^{-1}\\in \\mathbb{N}$, an ${\\varepsilon}$-structured BPP instance ${\\mathcal{I}}$, and a solution $\\bar{x}$ for the configuration-LP~\\eqref{C-LP}, returns in time $\\textnormal{poly} (| {\\mathcal{I}} |, \\frac{1}{{\\varepsilon}})$ a prototype $\\bar{y}$ which satisfies the following. (i) There exists $\\bar{\\gamma}$ in the $\\bar{y}$-polytope such that $\\bar{\\gamma}_{\\ell,j} = 0$ for all $\\ell,j \\in I, \\ell \\neq j$; (ii) for all $C \\in \\textnormal{\\textsf{supp}}(\\bar{y})$ it holds that $|C| \\leq {\\varepsilon}^{-10}$, and $s(C \\setminus L) \\leq {\\varepsilon}$; (iii) $\\|\\bar{y}\\| \\leq (1+{\\varepsilon})\\|\\bar{x}\\|$; (iv) $\\sum_{C\\in {\\mathcal{C}}[\\ell]} {\\bar{y}}_C\\leq 2$ for every $\\ell \\in I$.\n\\end{lemma}\n\n\nObserve that property (i) of Lemma~\\ref{lem:eviction} allows $\\bar{\\gamma}_{\\ell,C}>0$ for an item $\\ell \\in I$ and a set of slots $C\\in {\\mathcal{C}}$. Note that \\textsf{Evict} does not return the vector $\\bar{\\gamma}$ and only guarantees its existence. 
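\nThroughout, prototypes are handled via their sparse representations. The basic bookkeeping on prototypes (support, $\\ell_1$-norm, and the integralization used by Algorithm \\textsf{Partition}) amounts to the following minimal sketch, assuming a prototype is stored as a dictionary over its support:\n\\begin{verbatim}\nimport math\n\ndef supp(x):             # support of a prototype {C: value}\n    return {C for C, v in x.items() if v > 0}\n\ndef norm(x):             # l1-norm ||x|| of a nonnegative prototype\n    return sum(x.values())\n\ndef integralization(x):  # entrywise ceiling, yielding z* from z\n    return {C: math.ceil(v) for C, v in x.items() if v > 0}\n\\end{verbatim}\nIn particular, the bound $\\|\\bar{z}^*\\| \\leq \\|\\bar{z}\\|+|\\textnormal{\\textsf{supp}}(\\bar{z})|$ of Observation~\\ref{ob:y} is immediate from this description, since each entry increases by less than $1$.\n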
\nIn the second step, our scheme uses algorithm $\\textsf{Shift}$ to obtain a good prototype. This is done by a novel constructive use of fractional grouping over a small subset of carefully chosen groups. This step is formalized in the next lemma.\n\n\\begin{lemma}\n\\label{lem:Cnf}\t\nGiven ${\\varepsilon} \\in (0, 0.1)$ such that ${\\varepsilon}^{-1}\\in \\mathbb{N}$, let ${\\mathcal{I}}$ be an ${\\varepsilon}$-structured BPP instance. Furthermore, let $\\bar{y}$ be a prototype of $\\mathcal{I}$ with non-empty $\\bar{y}$-polytope which satisfies the following. $(i)$ For all $C \\in \\textsf{supp}(\\bar{y})$ it holds that $|C| \\leq {\\varepsilon}^{-10}$ and $s(C \\setminus L) \\leq {\\varepsilon}$;\n$(ii)$ there exists $\\bar{\\gamma}$ in the $\\bar{y}$-polytope such that \n$\\bar{\\gamma}_{\\ell,j} = 0$ for all $\\ell,j \\in I$ where $\\ell \\neq j$; (iii) $\\sum_{C\\in {\\mathcal{C}}[\\ell]} {\\bar{y}}_C\\leq 2$ for every $\\ell \\in I$. Then, given ${\\mathcal{I}}$, $\\bar{y}$, and ${\\varepsilon}$, Algorithm $\\textsf{Shift}$ returns in time $\\textnormal{poly} (| {\\mathcal{I}} |, \\frac{1}{{\\varepsilon}})$ a good prototype $\\bar{z}$ such that $\\|\\bar{z}\\| \\leq (1+5{\\varepsilon})\\|\\bar{y}\\|+Q({\\varepsilon})$.\n\\end{lemma}\n\nAlgorithm $\\textsf{Shift}$ is presented in Section~\\ref{sec:AlgPrototype}. Given a good prototype $\\bar{z}$, our scheme proceeds to find a partition of the items into slot-types and configuration-types using the following construction. \n\n\n For configurations $S,C \\in \\mathcal{C}$, we say that $S$ is {\\em allowed} in $C$ if each item $\\ell \\in S$ can be mapped to a distinct slot $j \\in C$, such that $\\ell \\in \\textsf{fit}(j)$. \n We consider a packing of a subset of $I$ such that the bins in the packing are partitioned into a bounded number of {\\em categories}. Each category is associated with \n $(i)$ a configuration $C \\in {\\mathcal{C}}$ such that all bins in the category are allowed in $C$, and $(ii)$ a {\\em completion}: a subset of (unpacked) items bounded by total size and number of items per group, where each item fits with $C$. Also, we require that each item $\\ell \\in I$ is either in this packing or in a completion of a category. The above constraints are analogous to constraints \\eqref{F1}-\\eqref{F4} of the $\\bar{x}$-polytope, that is used for finding an assignment of the items to slots and configurations. 
Formally, \n \n \n \n \\begin{definition}\n \t\\label{def:partition}\n \t\n \tGiven ${\\varepsilon} \\in (0, 0.1)$ and an ${\\varepsilon}$-structured BPP instance ${\\mathcal{I}} = (I,{\\mathcal{G}},s,k)$, an ${\\varepsilon}$-{\\em nice partition} $\\mathcal{B}$ of $\\mathcal{I}$ is a packing $(A_1,..., A_m)$ of a subset of $I$, a subset of configurations $\\mathcal{H} \\subseteq {\\mathcal{C}}$, categories $\\left(B_C\\right)_{C \\in \\mathcal{H}}$, and completions $\\left(D_C\\right)_{C \\in \\mathcal{H}}$ such that the following conditions hold.\n \t\n \t\\begin{itemize}\n \t\t\\item $|\\mathcal{H}| \\leq {\\varepsilon}^{-22}Q^2({\\varepsilon})$.\n \t\t\n \t\t\\item $\\{B_C\\}_{C \\in \\mathcal{H}}$ is a partition of $\\{A_i~|~i \\in [m]\\}$\n \t\t\n \t\t\\item $\\{D_C\\}_{C \\in \\mathcal{H}}$ is a partition of $I \\setminus \\bigcup_{i \\in [m]} A_i$.\n \t\t\n \t\t\\item For any $C \\in \\mathcal{H}$ and $A \\in B_C$ it holds that $A$ is allowed in $C$\n \t\t\n \t\t\\item For any $C \\in \\mathcal{H}$ and $G \\in {\\mathcal{G}}$ it holds that:\n \t\t\n \t\t\\begin{enumerate}\n \t\t\t\\item $D_C \\subseteq \\textnormal{\\textsf{fit}}(C)$.\n \t\t\t\n \t\t\t\\item $s(D_C) \\leq \\left(1-s(C)\\right) \\cdot |B_C|$.\n \t\t\t\n \t\t\t\\item $|D_C \\cap G| \\leq |B_C| \\cdot \\left(k(G)-|C \\cap G|\\right)$.\n \t\t\\end{enumerate}\n \t\\end{itemize} \n \n The {\\em size} of $\\mathcal{B}$ is $m$, and for all $C \\in \\mathcal{H}$ we say that $B_C$ is the category of $C$ and $D_C$. \n \\end{definition} \n \n Algorithm {\\sf Partition} initially rounds up the entries of $\\bar{z}$ to obtain the prototype $\\bar{z}^*$.\n It then finds a vertex $\\bar{\\lambda}$ of the $\\bar{z}^*$-polytope, which is almost integral by Lemma~\\ref{O(1)}. Thus, with the exception of a small number of items, each item is fully assigned either to a slot or to a configuration. Algorithm {\\sf Partition} uses $\\bar{\\lambda}$ to construct an ${\\varepsilon}$-nice partition. We generate a category for each $C \\in \\textnormal{\\textsf{supp}}(\\bar{z}^*)$ and define $D_C = \\{ \\ell \\in I~| \\bar{\\lambda}_{\\ell,C}=1 \\}$ to be the set of all items assigned to $C$. We also generate $\\bar{z}^*_C$ copies (bins) of each configuration and replace its slots by items via matching.\n \n\\begin{lemma}\n\t\\label{FromPolytope}\n\tThere is an algorithm $\\textsf{Partition}$ that given ${\\varepsilon} \\in (0, 0.1)$ such that ${\\varepsilon}^{-1}\\in \\mathbb{N}$, an ${\\varepsilon}$-structured BPP instance ${\\mathcal{I}}$, and a good prototype $\\bar{z}$ of $\\mathcal{I}$, returns in time $\\textnormal{poly} (| {\\mathcal{I}} |, \\frac{1}{{\\varepsilon}})$ an ${\\varepsilon}$-nice partition of $\\mathcal{I}$ of size at most $\\|\\bar{z}\\|+{\\varepsilon}^{-22}Q^2({\\varepsilon})$.\n\\end{lemma}\n\nAlgorithm $\\textsf{Partition}$ is presented in Section~\\ref{sec:FromPolytope}. Given an ${\\varepsilon}$-nice partition of size $m$, a packing of the instance in roughly $m$ bins is obtained using the next lemma. \n\n\\begin{lemma}\n \\label{lem:GREEDY}\n There is a polynomial-time algorithm \\textsf{Pack} which given ${\\varepsilon} \\in (0, 0.1)$ such that ${\\varepsilon}^{-1}\\in \\mathbb{N}$, an ${\\varepsilon}$-structured BPP instance ${\\mathcal{I}}$, and ${\\varepsilon}$-nice partition of $\\mathcal{I}$ of size $m$, returns in time $\\textnormal{poly} (| {\\mathcal{I}} |, \\frac{1}{{\\varepsilon}})$ a packing of $\\mathcal{I}$ in at most $(1+2{\\varepsilon})m+2{\\varepsilon}^{-22}Q^2({\\varepsilon})$ bins. 
\n \\end{lemma} \n\n\nAlgorithm \\textsf{Pack} utilizes Algorithm \\textsf{Greedy} to add the items in a completion of a category to the bins of this category, possibly using a few extra bins. Algorithm \\textsf{Pack} is presented in Section~\\ref{sec:alg_pack} and Algorithm \\textsf{Greedy} is presented in Section~\\ref{sec:greedy}. Using the above components, we construct an AFPTAS for ${\\varepsilon}$-structured BPP instances. The pseudocode of the scheme is given in Algorithm~\\ref{alg:Fscheme}. We summarize the above in the next result.\n\n\\begin{algorithm}[h]\n\t\\caption{$\\textsf{AFPTAS}({{\\mathcal I}}, {\\varepsilon})$}\n\t\\label{alg:Fscheme}\n\t\n\t\\SetKwInOut{Input}{Input}\n\t\t\\SetKwInOut{Output}{Output}\n\t\\Input{An ${\\varepsilon}$-structured instance ${\\mathcal I}$ and ${\\varepsilon}\\in (0,0.1)$ such that ${\\varepsilon}^{-1}\\in \\mathbb{N}$}\n\t\\Output{A packing of ${\\mathcal I}$}\n\t\n\t\n\tFind a solution for the configuration-LP of $\\mathcal{I}$; that is, $\\bar{x} = \\textsf{SolveLP}({\\varepsilon},\\mathcal{I})$\\label{LP} \n\t\n\t\n\tLet $\\bar{y} = \\textsf{Evict}({\\varepsilon},{\\mathcal{I}},\\bar{x})$ \\label{step:evicAFPTAS}\n\t\n\tFind a good prototype $\\bar{z} = \\textsf{Shift}({\\varepsilon},{\\mathcal{I}},\\bar{y})$ \\label{GetPolytope}\n\t\n\t\n\tFind an ${\\varepsilon}$-nice partition $\\mathcal{B}$ of $\\mathcal{I}$ by $\\textsf{Partition}({\\varepsilon},{\\mathcal{I}},\\bar{z})$ \\label{step:BPPnice}\n\t\t\n\n\tReturn \n\t $\\Phi = \\textsf{Pack}({\\varepsilon},\\mathcal{I},\\mathcal{B})$ \\label{step:greedy1}\n\n\t\n\n\\end{algorithm}\n\n\n\n\n\n\n\n\n \n\\begin{lemma}\n\\label{lem:AFPTAS}\nGiven ${\\varepsilon} \\in (0, 0.1)$ such that ${\\varepsilon}^{-1}\\in \\mathbb{N}$, and an ${\\varepsilon}$-structured BPP instance ${\\mathcal{I}}$, Algorithm~\\ref{alg:Fscheme} returns in time $\\textnormal{poly} (| {\\mathcal{I}} |, \\frac{1}{{\\varepsilon}})$ a packing of ${\\mathcal I}$ in at most $(1+60{\\varepsilon})\\textnormal{OPT}({\\mathcal I})+Q^3({\\varepsilon})$ bins.\n\\end{lemma}\n\n\nThe proof of Lemma~\\ref{lem:AFPTAS} follows immediately from Lemmas~\\ref{configurationLP}, \\ref{lem:eviction}, \\ref{lem:Cnf}, \\ref{FromPolytope}, and~\\ref{lem:GREEDY}. \nA formal proof is given in Appendix~\\ref{app:omitted}. \nThe next lemma is an immediate consequence of Lemmas~\\ref{lem:AFPTAS} and~\\ref{lem:reductionReconstruction}. \n\\begin{lemma}\n\t\\label{lem:gen_afptas}\nThere is an algorithm $\\textnormal{\\textsf{Gen-AFPTAS}}$ which given a BPP instance ${\\mathcal{J}}$ and ${\\varepsilon}\\in (0,0.1)$ such that ${\\varepsilon}^{-1}\\in \\mathbb{N}$, returns in time $\\textnormal{poly}(|{\\mathcal{J}}|,\\frac{1}{{\\varepsilon}})$ a packing of ${\\mathcal{J}}$ using at most $(1+130{\\varepsilon})\\cdot \\textnormal{OPT}({\\mathcal{J}}) + 3\\cdot Q^3({\\varepsilon})$ bins. \n\\end{lemma}\nWe give the proof of Lemma~\\ref{lem:gen_afptas} in Appendix~\\ref{app:omitted}. \nGiven a BPP instance $\\mathcal{J}$, Theorem~\\ref{thm:main} follows by applying $\\textnormal{\\textsf{Gen-AFPTAS}}$ to $\\mathcal{J}$, taking ${\\varepsilon} = \\left( \\ln \\ln W \\right)^{-\\frac{1}{17}}$, where \n$W=s(I) + V({\\mathcal I}) +\\exp(\\exp(100^{17}))=\\Theta(\\textnormal{OPT}({\\mathcal{J}}))$. By the above selection of ${\\varepsilon}$, it holds that \n$Q^3({\\varepsilon})=o(\\textnormal{OPT}(\\mathcal{J}))$ and ${\\varepsilon} \\textnormal{OPT}(\\mathcal{J})=o(\\textnormal{OPT}(\\mathcal{J}))$. 
This yields the $o(\\textnormal{OPT}({\\mathcal{J}}))$ additive approximation ratio. The proof of Theorem~\\ref{thm:main} is given in Appendix~\\ref{app:omitted}. \n\n\n\n\n\n\n\n\n\n\n\\section{Omitted Proofs of Section~\\ref{sec:FromPolytope}}\n\\label{app:omittedFromPolytope}\n\n\n\\noindent{\\bf Proof of Lemma~\\ref{lem:matchingU}:}\nFor all $\\ell \\in U$, let $d(\\ell) = |\\{v \\in V~|~ (\\ell,v) \\in E\\}|$ be the degree of $\\ell$ in $G$. For all $\\ell \\in U$ there is exactly one $j \\in I$ such that $\\bar{\\gamma}_{\\ell,j} = 1$ because $\\bar{\\gamma}$ satisfies constraint \\eqref{F4} of the $\\bar{z}^*$-polytope with equality; let $\\ell^* = j$. Therefore, for every $\\ell \\in U$ it holds that:\n\n\n\\begin{equation}\n\t\\label{welldegree}\n\td(\\ell) = |\\{ (C,\\ell^*,k) \\in V~|~ (\\ell,(C,\\ell^*,k)) \\in E\\}| = \\sum_{C \\in {\\mathcal{C}}[\\ell^*]} \\sum_{k \\in [\\bar{z}^*_C]} 1 = \\sum_{C \\in {\\mathcal{C}}[\\ell^*]} \\bar{z}^*_C \\geq \\sum_{\\ell' \\in I} \\bar{\\gamma}_{\\ell',\\ell^*} \\geq \\bar{\\gamma}_{\\ell,\\ell^*} = 1. \n\\end{equation} The first inequality is because $\\bar{\\gamma}$ is in the $\\bar{z}^*$-polytope; thus, it follows by \\eqref{F3}. Define a fractional matching as a vector $\\bar{M} \\in [0,1]^E$ with $\\bar{M}_{(\\ell,v)} = \\frac{1}{d(\\ell)}$ for all $(\\ell,v) \\in E$; note that $\\bar{M}$ is well-defined (i.e., for all $\\ell \\in U$ it holds that $d(\\ell)>0$) by \\eqref{welldegree}. We show below that $\\bar{M}$ is a fractional matching in $G$ of size~$|U|$. \n\n\\begin{enumerate}\n\t\\item $\\bar{M}$ is a feasible fractional matching in $G$. Let $\\ell' \\in U$. It holds that:\n\t\n\t$$\\sum_{(\\ell',v) \\in E} \\bar{M}_{(\\ell',v)} = \\sum_{(\\ell',v) \\in E} \\frac{1}{d(\\ell')} = d(\\ell') \\cdot \\frac{1}{d(\\ell')}=1.$$ In addition, let $(C,j,k) \\in V$ and let $v = (C,j,k)$. It holds that:\n\t\n\t\n\t\\begin{equation*}\n\t\t\\begin{aligned}\n\t\t\t\\sum_{(\\ell,v) \\in E} \\bar{M}_{(\\ell,v)} ={} & \\sum_{(\\ell,v) \\in E} \\frac{1}{d(\\ell)} = \\sum_{\\ell \\in U \\text{ s.t. } \\bar{\\gamma}_{\\ell,j} = 1} \\frac{1}{d(\\ell)} \\leq \\sum_{\\ell \\in U \\text{ s.t. } \\bar{\\gamma}_{\\ell,j} = 1} \\bar{\\gamma}_{\\ell,j} \\cdot \\frac{1}{d(\\ell)}\\\\ \\leq{} &\n\t\t\t\\sum_{\\ell \\in U \\text{ s.t. } \\bar{\\gamma}_{\\ell,j} = 1} \\bar{\\gamma}_{\\ell,j} \\cdot \\frac{1}{\\sum_{\\ell' \\in I} \\bar{\\gamma}_{\\ell',j}} \\leq \\frac{1}{\\sum_{\\ell' \\in I} \\bar{\\gamma}_{\\ell',j}} \\cdot \\sum_{\\ell \\in I} \\bar{\\gamma}_{\\ell,j} = 1.\n\t\t\\end{aligned}\n\t\\end{equation*} The second inequality is by \\eqref{welldegree}. By the above, we conclude that $\\bar{M}$ is a feasible fractional matching in $G$.\n\t\n\t\\item $|\\bar{M}| = |U|$. $$|\\bar{M}| = \\sum_{e \\in E}\\bar{M}_{e} = \\sum_{\\ell \\in U} \\sum_{(\\ell,v) \\in E} \\bar{M}_{e} = \\sum_{\\ell \\in U} \\sum_{(\\ell,v) \\in E} \\frac{1}{d(\\ell)} = \\sum_{\\ell \\in U} \\frac{d(\\ell)}{d(\\ell)}= |U|.$$\n\\end{enumerate} By the above, $\\bar{M}$ is a fractional matching in $G$ of size $|U|$. Since $G$ is a bipartite graph, there is an integral matching of size $|U|$ in $G$ \\cite{schrijver2003combinatorial}. \\qed\n\n\n\\noindent{\\bf Proof of Lemma~\\ref{lem:allowed}:}\nFor all $\\ell \\in A_{C,k}$ there is exactly one $j \\in C$ such that $(\\ell,(C,j,k)) \\in M$ by the definition of $A_{C,k}$ and because $M$ is a matching; we define $\\ell^* = j$ to be the {\\em slot of} $\\ell$. 
We construct an injective function $\\sigma: A_{C,k} \\rightarrow C$ by setting $\\sigma(\\ell) = \\ell^*$ for all $\\ell \\in A_{C,k}$. By the above, for all $\\ell \\in A_{C,k}$ it holds that $\\ell^*$ is well-defined, and it follows that $\\sigma$ is a function. Let $\\ell_1,\\ell_2 \\in A_{C,k}$ such that $\\ell_1 \\neq \\ell_2$. Since $M$ is a matching, it follows that $\\ell^*_1 \\neq \\ell^*_2$; hence, $\\sigma(\\ell_1) \\neq \\sigma(\\ell_2)$ and we conclude that $\\sigma$ is injective. Finally, let $\\ell \\in A_{C,k}$. Since $(\\ell,(C,\\ell^*,k)) \\in M$, by the definition of $E$ it follows that $\\bar{\\gamma}_{\\ell,\\ell^*} = 1$; thus, $\\ell \\in \\textsf{fit}(\\ell^*)$ by \\eqref{F1} since $\\bar{\\gamma}$ is in the $\\bar{z}^*$-polytope. \\qed\n\n\n\n\n\n\n\n\n\n\\noindent{\\bf Proof of Claim~\\ref{eqeq6}:}\nWe prove the three properties of the claim below. \n\n\\begin{enumerate}\n\t\\item $D_C \\subseteq \\textsf{fit}(C)$: Let $\\ell \\in D_C$. By the definition of $D_C$ it holds that $\\bar{\\gamma}_{\\ell,C} = 1$, and in particular $\\bar{\\gamma}_{\\ell,C} >0$; thus, since $\\bar{\\gamma}$ is in the $\\bar{z}^*$-polytope it follows by \\eqref{F1} that $\\ell \\in \\textsf{fit}(C)$. \n\t\n\t\\item $s(D_C) \\leq \\left(1-s(C)\\right) \\cdot |B_C|$:\n\t\\begin{equation*}\n\t\ts(D_C) = \\sum_{\\ell \\in I \\text{ s.t. } \\bar{\\gamma}_{\\ell,C} = 1} s(\\ell) = \\sum_{\\ell \\in I \\text{ s.t. } \\bar{\\gamma}_{\\ell,C} = 1} \\bar{\\gamma}_{\\ell,C} \\cdot s(\\ell) \\leq \\sum_{\\ell \\in I} \\bar{\\gamma}_{\\ell,C} \\cdot s(\\ell) \n\t\t\\leq (1-s(C)) \\bar{z}^*_C \\leq \\left(1-s(C)\\right) \\cdot |B_C|. \n\t\\end{equation*} The second inequality is because $\\bar{\\gamma}$ is in the $\\bar{z}^*$-polytope; thus, the inequality follows by \\eqref{F2}. The last inequality is by \\eqref{BC}. \n\t\n\t\n\t\n\t\\item $|D_C \\cap G| \\leq |B_C| \\cdot \\left( k(G) - |G \\cap C| \\right)$:\n\t\\begin{equation*}\n\t\t\\begin{aligned}\n\t\t\t|D_C \\cap G| ={} & \\left|\\bigcup_{\\ell \\in I \\text{ s.t. } \\bar{\\gamma}_{\\ell,C} = 1} \\{\\ell\\} \\cap G \\right| \\leq \\sum_{\\ell \\in I \\text{ s.t. } \\bar{\\gamma}_{\\ell,C} = 1} \\left| \\{\\ell\\} \\cap G \\right| = \\sum_{\\ell \\in G \\text{ s.t. } \\bar{\\gamma}_{\\ell,C} = 1} \\left| \\{\\ell\\} \\right|\\\\\n\t\t\t={} & \\sum_{\\ell \\in G \\text{ s.t. } \\bar{\\gamma}_{\\ell,C} = 1} \\bar{\\gamma}_{\\ell,C} \\leq \\sum_{\\ell \\in G} \\bar{\\gamma}_{\\ell,C} \\leq \\bar{z}^*_C \\cdot \\left( k(G) - |G \\cap C| \\right) \\leq |B_C| \\cdot \\left( k(G) - |G \\cap C| \\right). \n\t\t\\end{aligned} \n\t\\end{equation*} The third inequality is because $\\bar{\\gamma}$ is in the $\\bar{z}^*$-polytope; thus, the inequality follows by \\eqref{F5}. The last inequality is by \\eqref{BC}. \\qed\n\t\n\\end{enumerate} \n\n\n\n\n\n\n\\noindent{\\bf Proof of Lemma~\\ref{lem:assign}:} \nWe divide the proof of Lemma~\\ref{lem:assign} into the following claims. For simplicity, let $A^* = (A_1, \\ldots, A_m)$.\n\n\\begin{claim}\n\t\\label{eqeq1}\n\t$A^*$ is a packing.\n\\end{claim} \n\n\\begin{proof}\n\tLet $i \\in [m]$. By \\eqref{Astar}, we split into two cases. \n\t\n\t\\begin{enumerate}\n\t\t\\item $A_i \\in F$. Then, by the definition of $F$ it holds that $A_i$ is a singleton, a set containing a single item. Thus, $s\\left(A_i\\right) \\leq 1$ and for all $G \\in {\\mathcal{G}}$ it holds that $|G \\cap A_i| \\leq 1 \\leq k(G)$.\n\t\t\n\t\t\\item There are $C \\in \\textsf{supp}(\\bar{z}^*)$ and $k \\in [\\bar{z}^*_{C}]$ such that $A_i = A_{C,k}$. 
Therefore, by Lemma~\\ref{lem:allowed} it holds that $A_i$ is allowed in $C$. Hence, there is an injective function $\\sigma: A_i \\rightarrow C$ such that for all $\\ell \\in A_i$ it holds that $\\ell \\in \\textsf{fit}\\left(\\sigma(\\ell)\\right)$. Therefore, \n\t\t\n\t\t\\begin{enumerate}\n\t\t\t\\item $s(A_i) \\leq 1$. \t$$s(A_i) = \\sum_{\\ell \\in A_i} s(\\ell) \\leq \\sum_{\\ell \\in A_i} s\\left(\\sigma(\\ell)\\right) \\leq \\sum_{\\ell \\in C} s(\\ell) = s(C) \\leq 1.$$ The first inequality is because for all $\\ell \\in A_i$ it holds that $\\ell \\in \\textsf{fit}\\left(\\sigma(\\ell)\\right)$; thus, the inequality follows by \\eqref{fitl}. The second inequality is because $\\sigma$ is injective. The last inequality is because $C$ is a configuration. \n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\\item For all $G \\in {\\mathcal{G}}$ it holds that $|A_i \\cap G| \\leq k(G)$. Let $G \\in {\\mathcal{G}}$. Now, $$|A_i \\cap G| = \\left|\\bigcup_{\\ell \\in A_i} \\{\\ell\\} \\cap G\\right| = \\sum_{\\ell \\in A_i} \\left|\\{\\ell\\} \\cap G\\right| \\leq \\sum_{\\ell \\in A_i} \\left|\\{\\sigma(\\ell)\\} \\cap G\\right| \\leq \\left|\\bigcup_{\\ell \\in C} \\{\\ell\\} \\cap G\\right| = |C \\cap G| \\leq k(G).$$ The first inequality is because, by \\eqref{fitl}, each $\\ell \\in A_i$ belongs to the same group as $\\sigma(\\ell)$; thus, for all $\\ell \\in A_i \\cap G$ it holds that $\\sigma(\\ell) \\in C \\cap G$. The second inequality is because $\\sigma$ is injective. The last inequality is because $C$ is a configuration. \n\t\t\\end{enumerate}\n\t\t\n\t\t\n\t\t\n\t\\end{enumerate}\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\\begin{claim}\n\t\\label{eqeq2}\n\t$|\\mathcal{H}| \\leq {\\varepsilon}^{-22}Q^2({\\varepsilon})$. \n\\end{claim} \n\n\\begin{proof}\n\t$$|\\mathcal{H}| = \\left|\\textsf{supp}(\\bar{z}^*) \\cup F\\right| \\leq \\left|\\textsf{supp}(\\bar{z}^*)\\right| + \\left| F\\right| \\leq |\\textsf{supp}(\\bar{z}^*)|+{\\varepsilon}^{-21}|\\textsf{supp}(\\bar{z}^*)|^2 \\leq {\\varepsilon}^{-22}|\\textsf{supp}(\\bar{z})|^2 \\leq {\\varepsilon}^{-22}Q^2({\\varepsilon}).$$\n\t\n\tThe second inequality is because $\\bar{z}^*$ satisfies the conditions of Lemma~\\ref{FromPolytope} by Observation~\\ref{ob:y}; thus, the inequality follows by Lemma~\\ref{O(1)}. The third inequality is because $0<{\\varepsilon}<0.1$ and by Observation~\\ref{ob:y}. The last inequality is because $\\bar{z}$ satisfies the conditions of Lemma~\\ref{FromPolytope}, that is, $\\bar{z}$ is a good prototype.\n\\end{proof}\n\n\n\\begin{claim}\n\t\\label{eqeq3}\n\tThe size of $\\mathcal{B}$ is at most $\\|\\bar{z}\\| + {\\varepsilon}^{-22}|\\textnormal{\\textsf{supp}}(\\bar{z})|^2$.\n\\end{claim} \n\n\\begin{proof}\n\tThe size of $\\mathcal{B}$ is defined as the number of entries in $A^*$. 
Therefore, by \\eqref{Astar} the size of $\\mathcal{B}$ is at most:\n\t\\begin{equation*}\n\t\t\\begin{aligned}\n\t\t\t|F|+\\sum_{C \\in \\textsf{supp}(\\bar{z}^*)} \\sum_{k \\in [\\bar{z}^*_C]} 1 ={} & |F|+\\sum_{C \\in \\textsf{supp}(\\bar{z}^*)} \\bar{z}^*_C \\leq \\|\\bar{z}^*\\| + {\\varepsilon}^{-21}|\\textsf{supp}(\\bar{z}^*)|^2\\\\ \\leq{} & \\|\\bar{z}\\| +|\\textsf{supp}(\\bar{z})|+ {\\varepsilon}^{-21}|\\textsf{supp}(\\bar{z})|^2 \\leq \\|\\bar{z}\\| + {\\varepsilon}^{-22}|\\textsf{supp}(\\bar{z})|^2 \\leq \\|\\bar{z}\\| + {\\varepsilon}^{-22}Q^2({\\varepsilon}).\n\t\t\\end{aligned}\n\t\\end{equation*} The first inequality is because $\\bar{z}^*$ satisfies the conditions of Lemma~\\ref{FromPolytope} by Observation~\\ref{ob:y}; thus, the inequality follows by Lemma~\\ref{O(1)}. The second inequality is by Observation~\\ref{ob:y}. The last inequality is because $\\bar{z}$ is a good prototype. \n\t\n\t\n\t\n\t\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\\begin{claim}\n\t\\label{partition:BC}\n\t$\\{B_C\\}_{C \\in \\mathcal{H}}$ is a partition of $\\{A_i~|~i \\in [m]\\}$.\n\\end{claim} \n\n\\begin{proof}\n\t\n\tLet $i \\in [m]$. By \\eqref{Astar}, we split into two cases. \n\t\n\t\\begin{enumerate}\n\t\t\\item $A_i \\in F$. Then, by the definition of $\\mathcal{H}$ it holds that $A_i \\in \\mathcal{H}$; thus, by \\eqref{BC} it follows that $A_i \\in B_{A_i}$ and for all $C \\in \\mathcal{H} \\setminus \\{A_i\\}$ it holds that $A_i \\notin B_{C}$.\n\t\t\n\t\t\\item There are $C \\in \\mathcal{H}$ and $k \\in [\\bar{z}^*_{C}]$ such that $A_i = A_{C,k}$. Therefore, by \\eqref{BC} it follows that $A_i \\in B_{C}$ and for all $C' \\in \\mathcal{H} \\setminus \\{C\\}$ it holds that $A_i \\notin B_{C'}$.\n\t\\end{enumerate} \n\t\n\\end{proof}\n\n\nLet $f = \\{\\ell~|~\\{\\ell\\} \\in F\\}$ be all fractional items. We use the following auxiliary claim. \n\n\n\\begin{claim}\n\t\\label{fU}\n\t$A^*$ is a packing of $f \\cup U$. \n\\end{claim} \n\n\\begin{proof}\n\tLet $\\ell \\in U \\cup f$. We split into two cases.\n\t\n\t\\begin{enumerate}\n\t\t\\item $\\ell \\in f$. By \\eqref{Astar}, there is $i \\in [m]$ such that $A_i = \\{\\ell\\}$.\n\t\t\n\t\t\\item $\\ell \\in U$. Since $M$ is a matching of size $|U|$ in $G$, there is $(C,j,k) \\in V$ such that $(\\ell,(C,j,k)) \\in M$. Therefore, by the definition of the bin of $C$ and $k$ it holds that $\\ell \\in A_{C,k}$. Hence, by \\eqref{Astar} there is $i \\in [m]$ such that $A_i = A_{C,k}$ and the claim follows.\n\t\\end{enumerate} Moreover, by \\eqref{Astar} it follows that for any $\\ell \\in I \\setminus (U \\cup f)$ and $i \\in [m]$ it holds that $\\ell \\notin A_i$. \n\\end{proof}\n\n\n\n\n\n\\begin{claim}\n\t\\label{eqeq4}\n\t$\\{D_C\\}_{C \\in \\mathcal{H}}$ is a partition of $I \\setminus \\bigcup_{i \\in [m]} A_i$.\n\\end{claim} \n\n\\begin{proof}\n\t\n\tLet $\\ell \\in I \\setminus \\bigcup_{i \\in [m]} A_i$. By Claim~\\ref{fU}, it follows that $I \\setminus \\bigcup_{i \\in [m]} A_i = I \\setminus (U \\cup f)$ and we get $\\ell \\in I \\setminus (U \\cup f)$. Thus, $\\ell$ is not fractional and, since $\\ell \\notin U$, it is not fully assigned to a slot-type; it follows by \\eqref{F4}, since $\\bar{\\gamma}$ is in the $\\bar{z}^*$-polytope, that there is $C \\in \\textsf{supp}(\\bar{z}^*)$ such that $\\bar{\\gamma}_{\\ell,C} = 1$. In addition, since \\eqref{F4} holds with equality, we get that there is exactly one such $C$. 
Since $\\textsf{supp}(\\bar{z}^*) \\subseteq \\mathcal{H}$, it follows that $\\ell \\in D_C$ and for all $C' \\in \\mathcal{H} \\setminus \\{C\\}$ it holds that $\\ell \\notin D_{C'}$. Hence, the claim follows. \n\t\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{claim}\n\t\\label{eqeq5}\n\tFor any $C \\in \\mathcal{H}$ and $A \\in B_C$ it holds that $A$ is allowed in $C$.\n\\end{claim} \n\n\\begin{proof}\n\t\n\tBy Claim~\\ref{partition:BC}, there is $i \\in [m]$ such that $A = A_i$. By \\eqref{Astar}, we split into two cases. \n\t\n\t\\begin{enumerate}\n\t\t\\item $A_i \\in F$. Then, by \\eqref{BC} it holds that $A_i = C$ and $C$ is a singleton. Hence, it trivially follows that $A_i$ is allowed in $A_i$, since for each item $\\ell \\in A_i$ it holds that $\\ell \\in \\textsf{fit}(\\ell)$ by \\eqref{fitl}. \n\t\t\n\t\t\\item There are $C \\in \\mathcal{H}$ and $k \\in [\\bar{z}^*_{C}]$ such that $A_i = A_{C,k}$. Therefore, by Lemma~\\ref{lem:allowed} the claim follows. \n\t\\end{enumerate} \n\t\n\\end{proof}\n\n\n\nThe proof of Lemma~\\ref{lem:assign} follows by Claim~\\ref{eqeq1}, Claim~\\ref{eqeq2}, Claim~\\ref{eqeq3}, Claim~\\ref{partition:BC}, Claim~\\ref{eqeq4}, Claim~\\ref{eqeq5}, and Claim~\\ref{eqeq6}. \\qed\n\n\n\\section{Omitted Proofs of Section~\\ref{sec:eviction}}\n\\label{app:omittedEvict}\n\n\\noindent{\\bf Proof of Lemma~\\ref{lem:evictionHelp}:}\n\tLet $\\ell_1, \\ldots, \\ell_{r}$ be the items in $C \\setminus L$ in decreasing order of size, i.e., for all $a,b \\in [r]$ with $a<b$ it holds that $s(\\ell_a) \\geq s(\\ell_b)$.\n\t\n\t\\begin{claim}\n\t\t\\label{claim:C1}\n\t\tFor any $i,k \\in [h]$ such that $k>i+{\\varepsilon}^{-4}$, it holds that $s(\\ell_i) > \\frac{s(\\ell_k)}{{\\varepsilon}^2}$.\n\t\\end{claim}\n\t\n\t\\begin{proof}\n\t\t Assume towards a contradiction that there are $i,k \\in [h]$ which satisfy $k>i+{\\varepsilon}^{-4}$ and $s(\\ell_i) \\leq \\frac{s(\\ell_k)}{{\\varepsilon}^{2}}$. Thus, \n\t\t\n\t\t\\begin{equation}\n\t\t\t\\label{eq:sik}\n\t\t\ts(\\{\\ell_i, \\ell_{i+1}, \\ldots, \\ell_k\\}) \\geq (k-i) \\cdot s(\\ell_k) \\geq {\\varepsilon}^{-4} \\cdot s(\\ell_k) \\geq {\\varepsilon}^{-4} \\cdot {\\varepsilon}^{2} \\cdot s(\\ell_i) = {\\varepsilon}^{-2} s(\\ell_i) \n\t\t\\end{equation} The first inequality is because the items are ordered by non-increasing size. The second inequality is because $k>i+{\\varepsilon}^{-4}$. The third inequality is by the assumption that $s(\\ell_i) \\leq \\frac{s(\\ell_k)}{{\\varepsilon}^{2}}$. \n\t\t\n\t Note that $T_i<1$, since $T_i$ is the total size of items in a proper subset of $C$ and the sizes of the items are strictly larger than zero. Now, because $\\ell_i \\in U_C$ and $i < \\alpha$, we conclude by \\eqref{fitS} and \\eqref{h} that \\begin{equation}\n\t\t\t\\label{eq:llli}\n\t\t\ts(\\ell_i) > {\\varepsilon} \\left( 1-T_i \\right).\n\t\t\\end{equation} Therefore, \n\t\t\\begin{equation}\n\t\t\t\\label{eq:contr}\n\t\ts(C) \\geq T_i +s(\\{\\ell_i, \\ldots, \\ell_k\\}) \\geq T_i +{\\varepsilon}^{-1} \\left( 1-T_i \\right) \\geq T_i +\\left( 1-T_i \\right)+({\\varepsilon}^{-1}-1) ( 1-T_i) > 1.\n\t\t\\end{equation}\n\t\tThe first inequality is because $(C \\cap L) \\cup \\{\\ell_1, \\ldots, \\ell_{i-1}\\}$ is disjoint from $\\{\\ell_i, \\ldots, \\ell_k\\}$. The second inequality is by \\eqref{eq:sik} and \\eqref{eq:llli}. The last inequality follows since $0<{\\varepsilon}<0.1$ and $0\\leq T_i<1$. By \\eqref{eq:contr}, we reach a contradiction to the fact that $s(C) \\leq 1$ because $C$ is a configuration. 
\n\t\\end{proof}\n\t\n\t\n\t\t\\begin{claim}\n\t\t\\label{claim:C1*}\n\t\tFor any $i \\in [h]$ such that $i+{\\varepsilon}^{-4} < h-1$ it holds that $\\ell_i \\in \\mathcal{L}_C$. \n\t\\end{claim}\n\t\n\t\\begin{proof}\n\t\\begin{equation}\n\t\t\\label{eq:LC}\n\t\ts(\\ell_i)> \\frac{s(\\ell_{h-1})}{{\\varepsilon}^{2}} > {\\varepsilon}^{-2} \\cdot {\\varepsilon} (1-T_{h-1}) = {\\varepsilon}^{-1} (1-T_{h-1}) \\geq {\\varepsilon}^{-1} (1-s(R(C))).\n\t\\end{equation}\n\t\n\tThe first inequality is by Claim~\\ref{claim:C1}. The second inequality is because $h-1 < h \\leq \\alpha$; thus, similarly to \\eqref{eq:llli}, it holds that $s(\\ell_{h-1}) > {\\varepsilon} \\left( 1-T_{h-1} \\right)$. The last inequality is because $T_{h-1}$ is the total size in a proper subset of $R(C)$; thus, $T_{h-1} \\leq s(R(C))$. By \\eqref{eq:LC}, we conclude that for any $i \\in [h]$ such that $i +{\\varepsilon}^{-4}< h-1$ it holds that $\\ell_i \\in \\mathcal{L}_C$.\n\t\\end{proof}\n\n\t\t\n\t Now, using the above claim: \n\t\n\t$$|\\mathcal{L}_C| \\geq h-2-{\\varepsilon}^{-4} \\geq {\\varepsilon}^{-5}- 2{\\varepsilon}^{-4} = {\\varepsilon}^{-4} ({\\varepsilon}^{-1}-2)> {\\varepsilon}^{-4}.$$\n\t\n\tThe first inequality is since by Claim~\\ref{claim:C1*} it holds that $\\ell_1, \\ldots, \\ell_{t} \\in \\mathcal{L}_C$ for $t = h-2-{\\varepsilon}^{-4}$. The second inequality is because $h = \\alpha$. \\qed \n\t\n\\noindent{\\bf Proof of Lemma~\\ref{lem:wProperties}:}\n\t\tLet $C \\in \\mathcal{C}$. We split the proof into the following claims which together form the proof of the lemma.\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\\begin{claim}\n\t\t\t\\label{claim:evic1}\n\t\t\t Condition~1 of Lemma~\\ref{lem:wProperties} holds for $|U_C| = \\alpha$.\n\t\t\\end{claim}\n\t\n\t\\begin{proof}\n\t\t\n\t\t\n\t\t$$\\|\\bar{w}^C\\| = \\sum_{\\ell \\in \\mathcal{L}_C} \\bar{w}^C_{R(C) \\setminus \\{\\ell\\}} = \\sum_{\\ell \\in \\mathcal{L}_C} \\frac{1}{|\\mathcal{L}_C|-1} = \\frac{|\\mathcal{L}_C|}{|\\mathcal{L}_C|-1} \\leq \\frac{{\\varepsilon}^{-4}}{{\\varepsilon}^{-4}-1} \\leq 1+2{\\varepsilon}^4. $$\n\t\t\n\t\tThe first equality is because by the definition $\\bar{w}^C$, for any other entry $C'$ that is not in the second summation it holds that $\\bar{w}^C_{C'} = 0$. The second equality is by the definition of $\\bar{w}^C$ in case that $|U_C| = \\alpha$. The first inequality is because $|U_C| = \\alpha$; therefore, by Lemma~\\ref{lem:evictionHelp} it holds that $|\\mathcal{L}_C| \\geq {\\varepsilon}^{-4}$. The last inequality is because $0<{\\varepsilon}<0.1$. \n\t\\end{proof}\n\t\t\n\t\t\n\t\t\n\t\t\t\\begin{claim}\n\t\t\t\\label{claim:evic2}\n\t\t\t\t Condition~2 of Lemma~\\ref{lem:wProperties} holds for $|U_C| = \\alpha$.\n\t\t\\end{claim}\n\t\t\n\t\t\\begin{proof}\n\t\t\t Let $\\ell \\in R(C)$. Then, \n\t\t\t\n\t\t\t$$\\sum_{C' \\in \\mathcal{C}[\\ell]} \\bar{w}^C_{C'} \\geq \\sum_{j \\in \\mathcal{L}_C \\setminus \\{\\ell\\}} \\bar{w}^C_{R(C) \\setminus \\{j\\}} = \\sum_{j \\in \\mathcal{L}_C \\setminus \\{\\ell\\}} \\frac{1}{|\\mathcal{L}_C|-1} \\geq \\frac{|\\mathcal{L}_C|-1}{|\\mathcal{L}_C|-1} = 1.$$ \n\t\t\t\n\t\t\tThe first inequality is since for any $j \\in \\mathcal{L}_C \\setminus \\{\\ell\\}$ it holds that $R(C) \\setminus \\{j\\} \\in \\mathcal{C}[\\ell]$ and that all entries in $\\bar{w}^C$ are non-negative. The first equality is by the definition of $\\bar{w}^C$ in case that $|U_C| = \\alpha$. The last inequality is because $|\\mathcal{L}_C \\setminus \\{\\ell\\}| \\geq |\\mathcal{L}_C|-1$. 
\n\t\t\\end{proof}\n\t\t\n\t\t\t\\begin{claim}\n\t\t\t\\label{claim:evic3}\n\t\t\t\t Condition~3 of Lemma~\\ref{lem:wProperties} holds for $|U_C| = \\alpha$.\n\t\t\\end{claim}\n\t\t\n\t\t\\begin{proof}\n\t\t\t For the third condition of the lemma, let $C' \\in \\textsf{supp}(\\bar{w}^C)$. By the definition of $\\bar{w}^C$ in case that $|U_C| = \\alpha$, there is $\\ell \\in \\mathcal{L}_C$ such that $C' = R(C) \\setminus \\{\\ell\\}$. Now,\n\t\t\t\n\t\t\t\n\t\t\t\\begin{equation}\n\t\t\t\t\\label{eq:1-RC}\n\t\t\t\t1-s(R(C)) = 1-s(R(C)\\setminus \\{\\ell\\})-s(\\ell) \\leq 1-s(R(C)\\setminus \\{\\ell\\}) -\\frac{1-s(R(C))}{{\\varepsilon}}.\n\t\t\t\\end{equation}\n\t\t\tThe first equality is because $\\ell \\in \\mathcal{L}_C$ and $\\mathcal{L}_C \\subseteq R(C)$. The first inequality is by the definition of $\\mathcal{L}_C$. Therefore, \n\t\t\t\n\t\t\t\\begin{equation}\n\t\t\t\t\\label{eq:1-RC2}\n\t\t\t\t1-s(R(C)) \\leq \\left(\\frac{1}{1+\\frac{1}{{\\varepsilon}}}\\right)(1-s(R(C)\\setminus \\{\\ell\\})) \\leq {\\varepsilon}(1-s(R(C)\\setminus \\{\\ell\\})).\n\t\t\t\\end{equation}\n\t\t\tThe first inequality is by \\eqref{eq:1-RC}. The second inequality is because $\\frac{1}{1+\\frac{1}{{\\varepsilon}}} = \\frac{{\\varepsilon}}{1+{\\varepsilon}} \\leq {\\varepsilon}$. Let $j \\in C \\setminus R(C)$. Now, \n\t\t\t\n\t\t\t\\begin{equation}\n\t\t\t\t\\label{eq:fit1}\n\t\t\t\ts(j) \\leq 1-s(R(C)) \\leq {\\varepsilon}(1-s(R(C)\\setminus \\{\\ell\\})).\n\t\t\t\\end{equation}\n\t\t\t\n\t\t\tThe first inequality is because $j \\notin R(C)$ and $s(C) \\leq 1$. The second inequality is by \\eqref{eq:1-RC2}. Therefore, by \\eqref{eq:fit1} and \\eqref{fitS} we get that $j \\in \\textsf{fit}(C')$ (because $j \\in C \\setminus R(C)$ it holds that $j \\notin L$). Because $j$ is an arbitrary item of $C \\setminus R(C)$, we conclude that $C \\setminus R(C) \\subseteq \\textsf{fit}(C')$. In addition, \n\t\t\t\n\t\t\t$$s(C \\setminus R(C)) \\leq 1-s(R(C)) \\leq 1-s(R(C) \\setminus \\{\\ell\\}) = 1-s(C').$$\n\t\t\t\n\t\t\tThe first inequality is because $R(C) \\subseteq C$ and $s(C) \\leq 1$. The second inequality is because $s(\\ell)\\geq 0$. The first equality is by the definition of $C'$. \n\t\t\\end{proof}\n\t\t\n\t\t\t\\begin{claim}\n\t\t\t\\label{claim:evic4}\n\t\t\t\t Condition~4 of Lemma~\\ref{lem:wProperties} holds.\n\t\t\\end{claim}\n\t\t\n\t\t\\begin{proof}\n\t\t\tLet $C' \\in \\textsf{supp}(\\bar{w}^C)$ and assume towards a contradiction that $s(C' \\setminus L)>{\\varepsilon}$. By the definition of $\\bar{w}^C$ it holds that $C' \\subseteq R(C)$; thus, $C' \\setminus L \\subseteq R(C) \\setminus L = U_C$, and in particular $U_C \\neq \\emptyset$, i.e., $\\ell_1 \\in U_C$, where $\\ell_1$ is a largest item in $C \\setminus L$. Hence, by \\eqref{fitS} and \\eqref{h} it holds that $s(\\ell_1) > {\\varepsilon} \\left(1-s(C \\cap L)\\right)$; on the other hand, $\\ell_1 \\notin L$ and thus $s(\\ell_1) < {\\varepsilon}^2$. Combining the two bounds gives $1-s(C \\cap L) < {\\varepsilon}$. Therefore, \n\t\t\t\\begin{equation}\n\t\t\t\t\\label{eq:w4}\n\t\t\t\t{\\varepsilon} < s(C' \\setminus L) \\leq s(U_C) \\leq s(C)-s(C \\cap L) \\leq 1-s(C \\cap L) < {\\varepsilon}.\n\t\t\t\\end{equation}\n\t\t\tBy \\eqref{eq:w4}, since ${\\varepsilon}>0$, we reach a contradiction to the assumption that $s(C' \\setminus L) > {\\varepsilon}$. \n\t\t\\end{proof}\n\t\t\n\t\t\n\t\t\t\\begin{claim}\n\t\t\t\\label{claim:evic6}\n\t\t\tCondition~5 of Lemma~\\ref{lem:wProperties} holds.\n\t\t\\end{claim}\n\t\t\n\t\t\\begin{proof}\n\t\t\tLet $C' \\in \\textsf{supp}(\\bar{w}^C)$ and $G \\in {\\mathcal{G}}$. Now, $$\\left|G \\cap \\left(C \\setminus R(C)\\right)\\right| \\leq |G \\cap C|-|G \\cap R(C)| \\leq k(G) -|G \\cap R(C)| \\leq k(G) - |G \\cap C'|.$$ The first inequality is because $R(C) \\subseteq C$. The second inequality is because $C$ is a configuration; thus, $|G \\cap C| \\leq k(G)$. The last inequality is because by the definition of $\\bar{w}^C$ it holds that $C' \\subseteq R(C)$.\n \t\t\\end{proof}\n\t\n\t\t\n\t\t\n\t\t\t\\begin{claim}\n\t\t\t\\label{claim:evic5}\n\t\t\tConditions 1-3 of Lemma~\\ref{lem:wProperties} hold for $|U_C| < \\alpha$. \n\t\t\\end{claim}\n\t\t\n\t\t\\begin{proof}\n\t\t\tFor the first condition of the lemma: $$\\|\\bar{w}^C\\| = \\bar{w}^C_{R(C)} = 1 \\leq 1+2{\\varepsilon}^4. 
$$ The first equality is because, by the definition of $\\bar{w}^C$, for any other configuration $C' \\in \\mathcal{C} \\setminus \\{R(C)\\}$ it holds that $\\bar{w}^C_{C'} = 0$. The second equality is by the definition of $\\bar{w}^C$ in case that $|U_C| < \\alpha$. \n\t\t\t\n\t\t\tFor the second condition of the lemma, let $\\ell \\in R(C)$. Then, $$\\sum_{C' \\in \\mathcal{C}[\\ell]} \\bar{w}^C_{C'} = \\bar{w}^C_{R(C)} = 1.$$ The first equality is because, by the definition of $\\bar{w}^C$, for any other configuration $C' \\in \\mathcal{C} \\setminus \\{R(C)\\}$ it holds that $\\bar{w}^C_{C'} = 0$. The second equality is by the definition of $\\bar{w}^C$ in case that $|U_C| < \\alpha$.\n\t\t\t\n\t\t\tFor the third condition of the lemma, let $C' \\in \\textsf{supp}(\\bar{w}^C)$. By the definition of $\\bar{w}^C$ in case that $|U_C| < \\alpha$, it holds that $C' = R(C)$. Let $j \\in C \\setminus R(C)$. It holds that $j \\in \\textsf{fit}(C')$ by \\eqref{h}, since $|U_C|<\\alpha$. Because $j$ is an arbitrary item of $C \\setminus R(C)$, we conclude that $C \\setminus R(C) \\subseteq \\textsf{fit}(C')$. In addition, \n\t\t\t\n\t\t\t$$s(C \\setminus R(C)) \\leq 1-s(R(C)) = 1-s(C'). $$ The first inequality is because $R(C) \\subseteq C$ and $s(C) \\leq 1$. The first equality is by the definition of $C'$. \n\t\t\\end{proof} The proof of Lemma~\\ref{lem:wProperties} follows by Claim~\\ref{claim:evic1}, Claim~\\ref{claim:evic2}, Claim~\\ref{claim:evic3}, Claim~\\ref{claim:evic4}, Claim~\\ref{claim:evic5}, and Claim~\\ref{claim:evic6}.\n\t\t\n\t\n\t\n\n\t\t\n\t\\noindent{\\bf Proof of Lemma~\\ref{lem:eviction}:}\nWe use in the proof of Lemma~\\ref{lem:eviction} the following claims. \n\n\\begin{claim}\n\t\\label{lem:evictionPolynomial}\n\tAlgorithm~\\ref{Alg:eviction} is polynomial.\n\\end{claim}\n\n\n\\begin{proof}\n\tThe number of elements over which the loop in Step~\\ref{step:forsupp} iterates is polynomial, since $|\\textsf{supp}(\\bar{x})|$ is at most the encoding size of the input (in a sparse representation). Let $C \\in \\textsf{supp}(\\bar{x})$ be a configuration in the support of $\\bar{x}$. It follows by \\eqref{h} that $U_C$ can be easily computed in polynomial time by iteratively adding items from $C \\setminus L$ in decreasing order of size. Therefore, given $U_C$, the relaxation vector $\\bar{w}^C$ of $C$ can be computed in polynomial time: by the definition of $\\bar{w}^C$, the number of nonzero entries in $\\bar{w}^C$ is at most $|\\mathcal{L}_C|$, which is polynomial in the size of the instance, since $\\mathcal{L}_C \\subseteq I$ is a subset of the items. Finally, Step~\\ref{step:return} is also polynomial as $\\bar{y}$ is a linear combination of a polynomial number (i.e., $|\\textsf{supp}(\\bar{x})|$) of vectors, each with a polynomial number of nonzero entries. By the above, the claim follows. \n\\end{proof}\n\t\n\t\n\t\n\t\\begin{claim}\n\t\t\\label{lem:eviction1}\n\t\tFor any $0<{\\varepsilon}<0.1$, an ${\\varepsilon}$-structured BPP instance $\\mathcal{I}$, and a solution $\\bar{x}$ to the configuration LP in \\eqref{C-LP}, Algorithm~\\ref{Alg:eviction} returns a prototype $\\bar{y}$ of $\\mathcal{I}$ with $\\bar{\\lambda}$ in the $\\bar{y}$-polytope such that for all $\\ell,j \\in I, \\ell \\neq j$ it holds that $\\bar{\\lambda}_{\\ell,j} = 0$. 
\n\t\\end{claim}\n\t\n\t\\begin{proof}\n\t\tWe define below a point $\\bar{\\lambda} \\in \\mathbb{R}^{I \\times (I \\cup \\mathcal{C})}_{\\geq 0}$ in the $\\bar{y}$-polytope. Thus, it follows that the $\\bar{y}$-polytope is non-empty. For any $C \\in \\mathcal{C}$, define the {\\em point of $C$} as a vector $\\bar{\\gamma}^C \\in \\mathbb{R}^{I \\times (I \\cup \\mathcal{C})}_{\\geq 0}$ as follows. For any $\\ell \\in R(C)$, define \n\t\t\n\t\t\\begin{equation}\n\t\t\t\\label{gammal}\n\t\t\t\\bar{\\gamma}^C_{\\ell,\\ell} = 1.\n\t\t\\end{equation}\n\t\n\tMoreover, for any $\\ell \\in C \\setminus R(C)$ and $C' \\in \\mathcal{C}$ define \n\t\t\\begin{equation}\n\t\t\\label{gammaC}\n\t\t\\bar{\\gamma}^C_{\\ell,C'} = \\bar{w}^C_{C'}.\n\t\\end{equation}\n\nFor any other $\\ell \\in I$ and $t \\in (I \\cup \\mathcal{C})$ define\n\n\t\\begin{equation}\n\t\t\\label{gammae}\n\t\\bar{\\gamma}^C_{\\ell,t} = 0.\n\\end{equation}\n\n\tFinally, define \n\n\\begin{equation}\n\t\\label{lambda}\n\t\\bar{\\lambda} = \\sum_{C \\in \\mathcal{C}} \\bar{x}_C \\cdot \\bar{\\gamma}^C.\n\\end{equation}\n\n\nWe show below that all constraints of the $\\bar{y}$-polytope are satisfied for $\\bar{\\lambda}$. Let $\\ell \\in I$ and $t \\in I \\cup \\mathcal{C}$ such that $\\ell \\notin \\textsf{fit}\\left(t \\right)$. Then, \n\n\\begin{equation}\n\t\\label{eq:ev1}\n\t\\bar{\\lambda}_{\\ell,t} = \\sum_{C \\in \\mathcal{C}} \\bar{x}_C \\cdot \\bar{\\gamma}^C_{\\ell,t} = \\sum_{C \\in \\mathcal{C}} \\bar{x}_C \\cdot 0 = 0.\n\\end{equation}\n\nThe first equality is by \\eqref{lambda}. Because $\\ell \\notin \\textsf{fit}\\left(t \\right)$, it holds that $\\ell \\neq t$ by \\eqref{fitl}; moreover, for all $C \\in \\mathcal{C}$ such that $\\ell \\in C \\setminus R(C)$ and $C' \\in \\textsf{supp}(\\bar{w}^C)$, it holds that $t \\neq C'$ because $\\ell \\in \\textsf{fit}(C')$ by Lemma~\\ref{lem:wProperties}. Therefore, by \\eqref{gammae} the second equality follows. By \\eqref{eq:ev1} we conclude that constraint \\eqref{F1} is satisfied for $\\bar{\\lambda}$. \n\nLet $C \\in \\mathcal{C}$. Then, \n\n\n\n\n\n\\begin{equation}\n\t\t\\label{eq:ev2}\n\t\\begin{aligned}\n\t\\sum_{\\ell \\in I} \\bar{\\lambda}_{\\ell,C} \\cdot s(\\ell) &= \\sum_{\\ell \\in I~} \\sum_{C' \\in \\mathcal{C}} \\bar{x}_{C'} \\cdot \\bar{\\gamma}^{C'}_{\\ell,C} \\cdot s(\\ell) = \\sum_{C' \\in \\mathcal{C}~} \\sum_{\\ell \\in I} \\bar{x}_{C'} \\cdot \\bar{\\gamma}^{C'}_{\\ell,C} \\cdot s(\\ell)\\\\\n\t={} & \\sum_{C' \\in \\mathcal{C}~} \\sum_{\\ell \\in C' \\setminus R(C')} \\bar{x}_{C'} \\cdot \\bar{w}^{C'}_{C} \\cdot s(\\ell) \n\t= \\sum_{C' \\in \\mathcal{C}} \\bar{x}_{C'} \\cdot \\bar{w}^{C'}_{C} \\cdot s\\left(C' \\setminus R(C')\\right)\\\\\n\t\\leq{} & \\sum_{C' \\in \\mathcal{C}} \\bar{x}_{C'} \\cdot \\bar{w}^{C'}_{C} \\cdot \\left(1-s(C)\\right) =(1-s(C)) \\cdot \\sum_{C' \\in \\textsf{supp}(\\bar{x})} \\bar{x}_{C'} \\cdot \\bar{w}^{C'}_{C} = (1-s(C)) \\bar{y}_{C}. \n\t\\end{aligned}\n\\end{equation}\n\n\n\n\n\n\n\nThe first equality is by \\eqref{lambda}. The second equality is by changing the order of summation. The third equality is by \\eqref{gammal}, \\eqref{gammaC} and \\eqref{gammae}. The inequality is because for all $C' \\in \\mathcal{C}$, if $C \\in \\textsf{supp}({\\bar{w}^{C'}})$ then $s(C' \\setminus R(C')) \\leq 1-s(C)$ by Lemma~\\ref{lem:wProperties}. The last equality is by Step~\\ref{step:return} of Algorithm~\\ref{Alg:eviction}. Therefore, by \\eqref{eq:ev2} we conclude that constraint \\eqref{F2} is satisfied for $\\bar{\\lambda}$. 
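\n\nFor a concrete illustration of \\eqref{gammal}, \\eqref{gammaC} and \\eqref{gammae}, consider a hypothetical configuration $C = \\{a,b,c\\}$ with $R(C) = \\{a,b\\}$ and $|U_C| < \\alpha$, so that $\\bar{w}^C_{R(C)} = 1$ is the only nonzero entry of $\\bar{w}^C$. Then,\n\n\\begin{equation*}\n\t\\bar{\\gamma}^C_{a,a} = \\bar{\\gamma}^C_{b,b} = 1, ~~~~~~ \\bar{\\gamma}^C_{c,\\{a,b\\}} = \\bar{w}^C_{\\{a,b\\}} = 1,\n\\end{equation*}\n\nand all other entries of $\\bar{\\gamma}^C$ are zero; that is, the retained items $a,b$ are assigned to themselves, while the evicted item $c$ is (fractionally) reassigned according to $\\bar{w}^C$. 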
\n\n\nLet $G \\in \\mathcal{G}$ and $C \\in \\mathcal{C}$. Then, \n\n\n\\begin{equation}\n\t\\label{eq:ev3}\n\t\\begin{aligned}\n\t \\sum_{\\ell \\in G} \\bar{\\lambda}_{\\ell,C} ={} & \\sum_{\\ell \\in G~} \\sum_{C' \\in \\mathcal{C}} \\bar{x}_{C'} \\cdot \\bar{\\gamma}^{C'}_{\\ell,C} = \\sum_{C' \\in \\mathcal{C}~} \\sum_{\\ell \\in G} \\bar{x}_{C'} \\cdot \\bar{\\gamma}^{C'}_{\\ell,C} = \\sum_{C' \\in \\mathcal{C}~~} \\sum_{\\ell \\in G \\cap \\left(C' \\setminus R(C') \\right)} \\bar{x}_{C'} \\cdot \\bar{w}^{C'}_{C}\\\\\n\t ={} & \\sum_{C' \\in \\mathcal{C} \\text{ s.t. } C \\in \\textsf{supp}(\\bar{w}^{C'})~~} \\bar{x}_{C'} \\cdot \\bar{w}^{C'}_{C} \\sum_{\\ell \\in G \\cap \\left(C' \\setminus R(C') \\right)} 1 \n\t \\leq \\sum_{C' \\in \\mathcal{C}} \\bar{x}_{C'} \\cdot \\bar{w}^{C'}_{C} \\cdot \\left(k(G)-|G \\cap C|\\right) \\\\ ={} & \\left(k(G)-|G \\cap C|\\right) \\sum_{C' \\in \\textsf{supp}(\\bar{x})~} \\bar{x}_{C'} \\cdot \\bar{w}^{C'}_{C} =\\bar{y}_C \\cdot \\left(k(G)-|G \\cap C|\\right).\n\t\\end{aligned}\n\\end{equation}\n\n\n\n\n\nThe first equality is by \\eqref{lambda}. The second equality is by changing the order of summation. The third equality is by \\eqref{gammal}, \\eqref{gammaC} and \\eqref{gammae}. The inequality is by Condition~5 of Lemma~\\ref{lem:wProperties} and because all summands are non-negative. The last equality is by Step~\\ref{step:return} of Algorithm~\\ref{Alg:eviction}. Therefore, by \\eqref{eq:ev3} we conclude that constraint \\eqref{F5} is satisfied for $\\bar{\\lambda}$. \n\n\nLet $j \\in I$. Then, \n\n\\begin{equation}\n\t\\label{eq:ev4}\n\t\\begin{aligned}\n\t \\sum_{\\ell \\in I} \\bar{\\lambda}_{\\ell,j} ={} & \\sum_{\\ell \\in I~} \\sum_{C \\in \\mathcal{C}} \\bar{x}_{C} \\cdot \\bar{\\gamma}^{C}_{\\ell,j} = \\sum_{C \\in \\mathcal{C}~} \\sum_{\\ell \\in I} \\bar{x}_{C} \\cdot \\bar{\\gamma}^{C}_{\\ell,j} = \\sum_{C \\in \\mathcal{C}[j] \\text{ s.t. } j \\in R(C)} \\bar{x}_{C}\\\\ \\leq{} & \\sum_{C \\in \\mathcal{C}[j] \\text{ s.t. } j \\in R(C)} \\bar{x}_{C} \\cdot \\left( \\sum_{C' \\in \\mathcal{C}[j]} \\bar{w}^{C}_{C'} \\right) \n\t\\leq \\sum_{C \\in \\mathcal{C}[j]} \\sum_{C' \\in \\mathcal{C}[j]} \\bar{x}_{C} \\cdot \\bar{w}^{C}_{C'}\\\\ ={} & \\sum_{C' \\in \\mathcal{C}[j]} \\sum_{C \\in \\mathcal{C}[j]} \\bar{x}_{C} \\cdot \\bar{w}^{C}_{C'}\n\t \\leq \\sum_{C' \\in \\mathcal{C}[j]} \\sum_{C \\in \\textsf{supp}(\\bar{x})} \\bar{x}_{C} \\cdot \\bar{w}^{C}_{C'} = \\sum_{\\substack{C' \\in \\mathcal{C}[j]}} \\bar{y}_{C'} = \\sum_{\\substack{C \\in \\mathcal{C}[j]}} \\bar{y}_{C}.\n\t\\end{aligned}\n\\end{equation}\n\n\n\n\nThe first equality is by \\eqref{lambda}. The second equality is by changing the order of summation. The third equality is by \\eqref{gammal}, \\eqref{gammaC} and \\eqref{gammae}. The first inequality is because for all $C \\in {\\mathcal{C}}$ and $j \\in R(C)$ it holds that $ \\sum_{C' \\in \\mathcal{C}[j]} \\bar{w}^{C}_{C'} \\geq 1$ by Lemma~\\ref{lem:wProperties}. The second inequality is because for all $C,C' \\in {\\mathcal{C}}$ it holds that $ \\bar{x}_{C} \\cdot \\bar{w}^{C}_{C'} \\geq 0$. The fourth equality is by changing the order of summation. The third inequality is because for all $C \\in \\mathcal{C}[j] \\setminus \\textsf{supp}(\\bar{x})$ it holds that $\\bar{x}_{C} \\cdot \\bar{w}^{C}_{C'} = 0$. The fifth equality is by Step~\\ref{step:return} of Algorithm~\\ref{Alg:eviction}. The sixth equality is by renaming the summation variable. 
Therefore, by \\eqref{eq:ev4} we conclude that constraint \\eqref{F3} is satisfied for $\\bar{\\lambda}$. \n\nLet $\\ell \\in I$. Then, we use the following equations. \n\n\\begin{equation}\n\t\\label{eq:ev5a}\n\t\\sum_{t \\in I} \\bar{\\lambda}_{\\ell,t} = \\sum_{t \\in I~} \\sum_{C \\in \\mathcal{C}} \\bar{x}_{C} \\cdot \\bar{\\gamma}^{C}_{\\ell,t} = \\sum_{C \\in \\mathcal{C}~} \\sum_{t \\in I} \\bar{x}_{C} \\cdot \\bar{\\gamma}^{C}_{\\ell,t} = \\sum_{C \\in \\mathcal{C}[\\ell] \\text{ s.t. } \\ell \\in R(C)} \\bar{x}_{C}.\n\\end{equation}\n\nThe first equality is by \\eqref{lambda}. The second equality is by changing the order of summation. The third equality is by \\eqref{gammal} and \\eqref{gammae}. In addition, \n\n\n\\begin{equation}\n\t\\label{eq:ev5b}\n\t\\begin{aligned}\n\t\t\\sum_{t \\in \\mathcal{C}} \\bar{\\lambda}_{\\ell,t} ={} & \\sum_{t \\in \\mathcal{C}~} \\sum_{C \\in \\mathcal{C}} \\bar{x}_{C} \\cdot \\bar{\\gamma}^{C}_{\\ell,t} = \\sum_{C \\in \\mathcal{C}~} \\sum_{t \\in \\mathcal{C}} \\bar{x}_{C} \\cdot \\bar{\\gamma}^{C}_{\\ell,t} = \\sum_{C \\in \\mathcal{C}[\\ell] \\text{ s.t. } \\ell \\in C \\setminus R(C)~} \\sum_{t \\in \\mathcal{C}} \\bar{x}_{C} \\cdot \\bar{w}^{C}_{t}\\\\ ={} & \\sum_{C \\in \\mathcal{C}[\\ell] \\text{ s.t. } \\ell \\in C \\setminus R(C)~} \\bar{x}_{C} \\sum_{t \\in \\mathcal{C}} \\bar{w}^{C}_{t} \\geq \\sum_{C \\in \\mathcal{C}[\\ell] \\text{ s.t. } \\ell \\in C \\setminus R(C)~} \\bar{x}_{C} \\sum_{t \\in \\mathcal{C}[\\ell]} \\bar{w}^{C}_{t} \\geq \\sum_{C \\in \\mathcal{C}[\\ell] \\text{ s.t. } \\ell \\in C \\setminus R(C)~} \\bar{x}_{C}.\n\t\\end{aligned}\n\\end{equation}\n\nThe first equality is by \\eqref{lambda}. The second equality is by changing the order of summation. The third equality is by \\eqref{gammaC} and \\eqref{gammae}. The first inequality is because $\\mathcal{C}[\\ell] \\subseteq \\mathcal{C}$. The second inequality is because $ \\sum_{t \\in \\mathcal{C}[\\ell]} \\bar{w}^{C}_{t} \\geq 1$ by Lemma~\\ref{lem:wProperties}. Thus, \n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{equation}\n\t\\label{eq:ev5d}\n\\sum_{t \\in I \\cup \\mathcal{C}} \\bar{\\lambda}_{\\ell,t} = \t\\sum_{t \\in I} \\bar{\\lambda}_{\\ell,t}+ \t\\sum_{t \\in \\mathcal{C}} \\bar{\\lambda}_{\\ell,t} \\geq \\sum_{C \\in \\mathcal{C}[\\ell] \\text{ s.t. } \\ell \\in R(C)} \\bar{x}_{C} +\\sum_{C \\in \\mathcal{C}[\\ell] \\text{ s.t. } \\ell \\in C \\setminus R(C)~} \\bar{x}_{C} = \\sum_{C \\in \\mathcal{C}[\\ell]} \\bar{x}_{C} \\geq 1. \n\\end{equation}\n\n\nThe first equality is because $I \\cap \\mathcal{C} = \\emptyset$. The first inequality is by \\eqref{eq:ev5a} and \\eqref{eq:ev5b}. The second equality is because for all $C \\in \\mathcal{C}[\\ell]$ either $\\ell \\in R(C)$ or $\\ell \\in C\\setminus R(C)$. The last inequality is because $\\bar{x}$ is a solution for \\eqref{C-LP}. Therefore, by \\eqref{eq:ev5d}, constraint \\eqref{F4} is satisfied for $\\bar{\\lambda}$. In summary, we conclude that $\\bar{\\lambda}$ is a point in the $\\bar{y}$-polytope; hence, the $\\bar{y}$-polytope is non-empty. 
\n\n\t\\end{proof}\n\t\n\t\n\n\t\n\t\t\t\\begin{claim}\n\t\t\\label{lem:eviction2}\n\t\tFor any $0<{\\varepsilon}<0.1$, an ${\\varepsilon}$-structured BPP instance $\\mathcal{I}$, and a prototype $\\bar{x}$ of $\\mathcal{I}$, Algorithm~\\ref{Alg:eviction} returns a prototype $\\bar{y}$ such that $\\|\\bar{y}\\| \\leq (1+{\\varepsilon})\\|\\bar{x}\\|$.\n\t\\end{claim}\n\n\t\t\t\n\t\t\t\\begin{proof}\n\t\t\t\t\n\t\t\t\n\t\t\t\t\\begin{align*}\n\t\t\t\t\\|\\bar{y}\\| ={} & \\left\\| \\sum_{C \\in \\textsf{supp}(\\bar{x})} \\bar{x}_{C} \\cdot \\bar{w}^{C} \\right\\| \\leq \\sum_{C \\in \\textsf{supp}(\\bar{x})} \\bar{x}_{C} \\cdot \\| \\bar{w}^{C} \\| \\leq \\sum_{C \\in \\textsf{supp}(\\bar{x})} \\bar{x}_{C} \\cdot (1+2{\\varepsilon}^4) = (1+2{\\varepsilon}^4)\\|\\bar{x}\\| \\leq (1+{\\varepsilon})\\|\\bar{x}\\|.\n\t\t\t\\end{align*}\n\t\t\t\t\n\t\tThe first equality is by Step~\\ref{step:return} of Algorithm~\\ref{Alg:eviction}. The first inequality is by the triangle inequality. The second inequality is because $\\|\\bar{w}^{C}\\| \\leq (1+2{\\varepsilon}^4)$ by Lemma~\\ref{lem:wProperties} for all $C \\in \\textsf{supp}(\\bar{x})$. The last inequality is because $0<{\\varepsilon}< 0.1$. \n\t\t\n\n\t\t\t\t\n\t\t\t\\end{proof}\n\t\t\t\n\t\t\t\\begin{claim}\n\t\t\t\t\\label{lem:eviction3}\n\t\t\t\tFor any $0<{\\varepsilon}<0.1$, an ${\\varepsilon}$-structured BPP instance $\\mathcal{I}$, and a prototype $\\bar{x}$ of $\\mathcal{I}$, Algorithm~\\ref{Alg:eviction} returns a prototype $\\bar{y}$ such that for all $C \\in \\textsf{supp}(\\bar{y})$ it holds that $|C| \\leq {\\varepsilon}^{-10}$.\n\t\t\t\\end{claim}\n\t\t\t\n\t\t\t\\begin{proof}\n\t\t\t\tLet $C \\in \\textsf{supp}(\\bar{y})$ be a configuration. Because $C \\in \\textsf{supp}(\\bar{y})$, by Step~\\ref{step:return} of Algorithm~\\ref{Alg:eviction} there is $C' \\in \\textsf{supp}(\\bar{x})$ such that $\\bar{w}^{C'}_C \\neq 0$. Therefore, $$|C| \\leq |R(C')| = |(C' \\cap L) \\cup U_{C'}| \\leq |C' \\cap L|+|U_{C'}| \\leq {\\varepsilon}^{-2}+\\alpha \\leq 2{\\varepsilon}^{-6} \\leq {\\varepsilon}^{-10}.$$ \n\t\t\t\t\n\t\t\t\tThe first inequality is because the maximal number of items in $C$ is bounded by $|R(C')|$ by the definition of the relaxation vector of a configuration. The first equality is by the definition of $R(C')$. The second inequality is because the cardinality of a union is at most the sum of the cardinalities. The third inequality is since the size of each large item in $L$ is at least ${\\varepsilon}^2$ and there can be at most ${\\varepsilon}^{-2}$ such items in a configuration without exceeding the maximal total size of items in a configuration, which is $1$; moreover, by \\eqref{h} it holds that $|U_{C'}| \\leq \\alpha$. The fourth inequality is because $\\alpha = \\lceil {\\varepsilon}^{-5} \\rceil$. The last inequality is because $0<{\\varepsilon}<0.1$. \n\t\t\t\\end{proof}\n\t\t\t\n\t\t\t\t\\begin{claim}\n\t\t\t\t\\label{lem:eviction4}\n\t\t\t\tFor any $0<{\\varepsilon}<0.1$, an ${\\varepsilon}$-structured BPP instance $\\mathcal{I}$, and a prototype $\\bar{x}$ of $\\mathcal{I}$, Algorithm~\\ref{Alg:eviction} returns a prototype $\\bar{y}$ such that for all $C \\in \\textsf{supp}(\\bar{y})$ it holds that $s(C \\setminus L) \\leq {\\varepsilon}$. \n\t\t\t\\end{claim}\n\t\t\t\n\t\t\t\\begin{proof}\n\t\t\t\tLet $C \\in \\textsf{supp}(\\bar{y})$ be a configuration. 
Because $C \\in \\textsf{supp}(\\bar{y})$, by Step~\\ref{step:return} of Algorithm~\\ref{Alg:eviction} there is $C' \\in \\textsf{supp}(\\bar{x})$ such that $\\bar{w}^{C'}_C \\neq 0$. Therefore, $C \\in \\textsf{supp}(\\bar{w}^{C'})$ and it follows that $s(C \\setminus L) \\leq {\\varepsilon}$ by Lemma~\\ref{lem:wProperties}. \n\t\t\t\\end{proof}\n\t\t\t\n\t\t\t\n\t\t\t\t\\begin{claim}\n\t\t\t\t\\label{lem:evictionF4}\n\t\t\tFor every $\\ell \\in I$ it holds that $\\sum_{C\\in {\\mathcal{C}}[\\ell]} {\\bar{y}}_C\\leq 2$. \n\t\t\t\\end{claim}\n\t\t\t\n\t\t\t\\begin{proof}\n\t\t\t\t\\begin{align*}\n\t\t\\sum_{C\\in {\\mathcal{C}}[\\ell]} {\\bar{y}}_C ={} & \\sum_{C\\in {\\mathcal{C}}[\\ell]~} \\sum_{C' \\in \\textsf{supp}(\\bar{x})} \\bar{x}_{C'} \\cdot \\bar{w}^{C'}_C = \\sum_{C' \\in \\textsf{supp}(\\bar{x})~} \\sum_{C\\in {\\mathcal{C}}[\\ell]} \\bar{x}_{C'} \\cdot \\bar{w}^{C'}_C \\leq \\sum_{C' \\in \\textsf{supp}(\\bar{x})~} \\bar{x}_{C'} \\cdot \\sum_{C\\in {\\mathcal{C}}} \\bar{w}^{C'}_C \\\\ ={} &\\sum_{C' \\in \\textsf{supp}(\\bar{x})~} \\bar{x}_{C'} \\cdot \\|\\bar{w}^{C'}\\| \\leq \\sum_{C' \\in \\textsf{supp}(\\bar{x})~} \\bar{x}_{C'} \\cdot (1+2{\\varepsilon}^{4}) \\leq \\sum_{C' \\in \\textsf{supp}(\\bar{x})~} \\bar{x}_{C'} \\cdot 2 = 2 \\cdot \\|\\bar{x}\\| = 2. \n\t\t\t\\end{align*}\n\t\t\tThe first equality is by Step~\\ref{step:return} of Algorithm~\\ref{Alg:eviction}. The first inequality is because all entries of $\\bar{w}^{C'}$ are non-negative and ${\\mathcal{C}}[\\ell] \\subseteq {\\mathcal{C}}$. The second inequality is because $\\|\\bar{w}^{C'}\\| \\leq (1+2{\\varepsilon}^4)$ by Lemma~\\ref{lem:wProperties}. The third inequality is because $0<{\\varepsilon}< 0.1$. The last equality is by \\eqref{LP}. \n\t\t\t\\end{proof}\n\t\t\n\t\t\t\n\t\t\n\t\t\tThe proof of Lemma~\\ref{lem:eviction} follows by Claim~\\ref{lem:evictionPolynomial}, Claim~\\ref{lem:eviction1}, Claim~\\ref{lem:eviction2}, Claim~\\ref{lem:eviction3}, Claim~\\ref{lem:eviction4}, and Claim~\\ref{lem:evictionF4}. \\qed\n\t\n\t\n\t\n\t\n\n\\section{Omitted Proofs of Section~\\ref{sec:greedy}}\n\\label{sec:proofsGreedy}\n\n Throughout this section, as in Section~\\ref{sec:greedy}, we consider a BPP instance ${\\mathcal{I}} = (I, {\\mathcal{G}},s,k)$ and $\\delta \\in (0,0.5)$ such that for all $\\ell \\in I$ it holds that $s(\\ell) \\leq \\delta$. In addition, we assume that $I \\neq \\emptyset$; otherwise, all proofs in this section are trivial. \n \n\\noindent{\\bf Proof of Lemma~\\ref{clm:confappend}:} We prove the claim using the definition of a packing. For convenience, let $T = (S, X_1, \\ldots, X_m)$ and write $T = (T_1, \\ldots, T_{m+1})$; that is, $T_1 = S$ and $T_{i+1} = X_i$ for all $i \\in [m]$.\n\n\\begin{enumerate}\n\t\\item It holds that $T$ is a partition of $I$. Let $\\ell \\in I$. If $\\ell \\in S$, then $\\ell \\in T_1$ and for all $i \\in [m+1] \\setminus \\{1\\}$ it holds that $\\ell \\notin T_i$ because $X_{i-1} \\subseteq I \\setminus S$. Otherwise, it holds that $\\ell \\in I \\setminus S$; therefore, because $X$ is a packing of ${\\mathcal{I}} \\setminus S$ there is exactly one $i \\in [m]$ such that $\\ell \\in X_i$, and it follows that $\\ell \\in T_{i+1}$ and $\\ell \\notin T_j$ for all $j \\neq i+1$.\n\t\n\t\\item For all $i \\in [m+1]$ it holds that $s(T_i) \\leq 1$. If $i = 1$ then $s(T_1) = s(S) \\leq 1$ because $S \\in {\\mathcal{C}}({\\mathcal{I}})$. Otherwise, it holds that $s(T_i) = s(X_{i-1}) \\leq 1$ because $X$ is a packing of ${\\mathcal{I}} \\setminus S$. \n\t\n\t\\item For all $G \\in {\\mathcal{G}}$ and $i \\in [m+1]$ it holds that $|T_i \\cap G| \\leq k(G)$. If $i = 1$ then $|T_i \\cap G| = |G \\cap S| \\leq k(G)$ because $S \\in {\\mathcal{C}}({\\mathcal{I}})$. 
Otherwise, it holds that $|T_i \\cap G| = |X_{i-1} \\cap G|\\leq k_S(G \\setminus S) = k(G)$. The inequality is because $X$ is a packing of ${\\mathcal{I}} \\setminus S$. The last equality is by \\eqref{IIS}.\\qed\n\\end{enumerate}\n\n\n\n\n\\noindent{\\bf Proof of Lemma~\\ref{bG}:} Let $n = |G|$; by \\eqref{promise} and \\eqref{bounding} it holds that $n \\geq 1$. Now, let $\\ell_1, \\ldots, \\ell_n$ be the items in $G$ in an increasing order of item sizes. Define\t\\begin{equation}\n\t\\label{bounding:psi}\n\t\\psi = \\argmin_{\\left\\{i \\in [n] ~\\big|~ \\ceil{ \\frac{n-i}{k(G)}} \\leq V({\\mathcal{I}})-1\\right\\}} i.\n\\end{equation} Observe that for $i =n$ it holds that $ \\ceil{ \\frac{n-i}{k(G)}} = 0 \\leq V({\\mathcal{I}})-1$, because $n \\geq 1$ implies $V({\\mathcal{I}}) \\geq 1$; thus, it follows that $\\psi$ is well defined. Now, define $S^* = \\{\\ell_1, \\ldots, \\ell_{\\psi}\\}$. We show next that $S^* \\in b^*(G)$.\n\n\\begin{itemize}\n\t\\item $S^* \\in b(G)$. First, $S^* \\subseteq G$ by definition. Second, $$ \\ceil{ \\frac{|G \\setminus S^*|}{k(G)} } = \\ceil{ \\frac{n-\\psi}{k(G)}} \\leq V({\\mathcal{I}})-1 \\leq p({\\mathcal{I}})-1.$$ The first inequality is by \\eqref{bounding:psi}. The last inequality is by \\eqref{promise}. \n\t\n\t\\item $|S^*| \\leq k(G)$. \\begin{equation}\n\t\t\\label{S:psi}\n\t\t\\begin{aligned}\n\t\t\t|S^*| ={} & \\psi \\leq |G|-(V({\\mathcal{I}})-1 ) \\cdot k(G) \\leq |G|-\\left(\\frac{|G|}{k(G)}-1 \\right) \\cdot k(G) = k(G).\n\t\t\\end{aligned}\n\t\\end{equation} The first inequality is by \\eqref{bounding:psi}, since $G \\in {\\mathcal{B}}({\\mathcal{I}})$ and $\\ceil{\\frac{|G|-\\left( |G|-(V({\\mathcal{I}})-1) \\cdot k(G)\\right)}{k(G)}} \\leq (V({\\mathcal{I}})-1)$ (recall that $V({\\mathcal{I}}) \\in \\mathbb{N}$). The second inequality is because $V({\\mathcal{I}}) \\geq \\ceil{ \\frac{|G|}{k(G)}} \\geq \\frac{|G|}{k(G)}$. \n\t\n\t\\item $s(S^*) \\leq \\frac{s(G)}{s(I)}$. It holds that: $$s(S^*) =\\psi \\cdot \\frac{ s(S^*)}{\\psi} \\leq \\psi \\cdot \\frac{s(G)}{n} \\leq \\frac{k(G)}{|G|} \\cdot s(G) \\leq \\frac{s(G)}{p({\\mathcal{I}})-2} \\leq \\frac{s(G)}{(1+2\\delta) \\cdot s(I)+2-2} \\leq \\frac{s(G)}{ s(I)}.$$ The first inequality is because $S^*$ is a subset of items of $G$ in which, by the sorted order of the items, each item of $G \\setminus S^*$ has size at least that of any item in $S^*$; thus, the average item size in $S^*$ is at most the average item size in $G$. The second inequality is by \\eqref{S:psi}. The third inequality is by \\eqref{bounding} and that $G \\in {\\mathcal{B}}({\\mathcal{I}})$: it holds that $\\frac{|G|}{k(G)}+1 \\geq \\ceil{\\frac{|G|}{k(G)}} > p({\\mathcal{I}})-1$. The fourth inequality is by \\eqref{promise}. The last inequality is because $(1+2\\delta) \\geq 1$.\\qed\n\\end{itemize}\n\n\n\n\nGiven a BPP instance ${\\mathcal{J}}$, let $A^{{\\mathcal{J}}}$ be the first entry (from the left) of the tuple returned by $\\textsf{Greedy}({\\mathcal{J}},\\delta)$ and let $A^{{\\mathcal{J}}}_0,A^{{\\mathcal{J}}}_1$ be the object $A^{{\\mathcal{J}}}$ after Step~\\ref{gr:end0} and after Step~\\ref{gr:end1} in the computation of $\\textsf{Greedy}({\\mathcal{J}},\\delta)$, respectively. Note that $A^{{\\mathcal{J}}}_0$ is well defined (the algorithm is guaranteed to construct $A^{{\\mathcal{J}}}_0$) by Lemma~\\ref{bI}. For simplicity, we use $A, A_0,A_1$ instead of $A^{{\\mathcal{J}}}, A^{{\\mathcal{J}}}_0, A^{{\\mathcal{J}}}_1$, respectively, when ${\\mathcal{J}} = {\\mathcal{I}}$. Now, for the proof of Lemma~\\ref{thm:greedy}, we use the following auxiliary claims. 
\n\n\n\\begin{claim}\n\t\\label{clm:A00}\n\tFor any BPP instance ${\\mathcal{I}} = (I,{\\mathcal{G}},s,k)$ it holds that $A_0 \\in {\\mathcal{C}}({\\mathcal{I}})$ and for each bounding group $G \\in {\\mathcal{B}}({\\mathcal{I}})$ it holds that $ \\ceil{ \\frac{|G \\setminus A_0|}{k(G)} } \\leq p({\\mathcal{I}})-1$.\n\\end{claim}\n\n\\begin{proof}\nObserve that $A_0 \\in b^*({\\mathcal{I}})$ by Step~\\ref{gr:end0}. In addition, note that $A_0$ is well defined by Lemma~\\ref{bI}. Then, by \\eqref{bounding:A} for each $G \\in {\\mathcal{B}}({\\mathcal{I}})$ there is $S_G \\in b^*(G)$ such that $A_0 = \\bigcup_{G \\in {\\mathcal{B}}({\\mathcal{I}})} S_G$. We show the conditions of the claim as follows. \\begin{itemize}\n\t\t\\item $s(A_0) \\leq 1$. $$s(A_0) = s\\left( \\bigcup_{G \\in {\\mathcal{B}}({\\mathcal{I}})} S_G \\right) \\leq \\sum_{G \\in {\\mathcal{B}}({\\mathcal{I}})} \\frac{s(G)}{s(I)} \\leq \\frac{s(I)}{s(I)} = 1.$$ The first equality is by \\eqref{bounding:A}. The first inequality is by the definition of a minimal bounding subset of a bounding group. The second inequality is because the groups in ${\\mathcal{B}}({\\mathcal{I}})$ are pairwise disjoint subsets of $I$.\n\t\t\n\t\t\\item For all $G \\in {\\mathcal{G}}$ it holds that $|G \\cap A_0| \\leq k(G)$. If $G \\in {\\mathcal{G}} \\setminus {\\mathcal{B}}({\\mathcal{I}})$ then $|G \\cap A_0| =0 < k(G)$ by \\eqref{bounding:A}. Otherwise, $|G \\cap A_0| = \\left| S_G \\right| \\leq k(G)$ where the equality is by \\eqref{bounding:A} and the inequality is by the definition of a minimal bounding subset of a bounding group. \n\t\t\n\t\t\\item For each bounding group $G \\in {\\mathcal{B}}({\\mathcal{I}})$ it holds that $ \\ceil{ \\frac{|G \\setminus A_0|}{k(G)} } \\leq p({\\mathcal{I}})-1$. $$ \\ceil{ \\frac{|G \\setminus A_0|}{k(G)} } = \\ceil{ \\frac{|G \\setminus S_G|}{k(G)} }\\leq p({\\mathcal{I}})-1.$$ The first equality is by \\eqref{bounding:A}. The first inequality is because $S_G \\in b^*(G)$ and in particular $S_G \\in b(G)$; thus, the inequality follows by the definition of a bounding subset of $G$. \n\t\\end{itemize}\n\t\n\n\\end{proof}\n\n\n\n\n\\begin{claim}\n\t\\label{clm:A1}\n\tFor any BPP instance ${\\mathcal{I}} = (I,{\\mathcal{G}},s,k)$ it holds that $A_1 \\in {\\mathcal{C}}({\\mathcal{I}})$ and for each bounding group $G \\in {\\mathcal{B}}({\\mathcal{I}})$ it holds that $ \\ceil{ \\frac{|G \\setminus A_1|}{k(G)} } \\leq p({\\mathcal{I}})-1$.\n\\end{claim}\n\n\\begin{proof}\n\tWe prove the claim using a loop invariant for the while loop of Step~\\ref{gr:while2}. Let $A(s),A(t)$ be the object $A$ before and after an iteration of the while loop of Step~\\ref{gr:while2}, respectively. Now, assume that $A(s) \\in {\\mathcal{C}}({\\mathcal{I}})$ and for each bounding group $G \\in {\\mathcal{B}}({\\mathcal{I}})$ it holds that $ \\ceil{ \\frac{|G \\setminus A(s)|}{k(G)} } \\leq p({\\mathcal{I}})-1$. We show the conditions of the claim below for $A(t)$. By Step~\\ref{gr:swap2}, let $\\ell \\in I$ such that $A(t) = A(s) \\cup \\{\\ell\\}$. Therefore, \\begin{itemize}\n\t\t\\item $s(A(t)) \\leq 1$. $$s(A(t)) = s\\left( A(s) \\right) + s(\\ell) \\leq (1-\\delta)+\\delta =1.$$ The first equality is by Step~\\ref{gr:swap2}. The inequality is because $s(A(s)) \\leq 1-\\delta$ by Step~\\ref{gr:while2}; in addition, for all items the size is at most $\\delta$. \n\t\t\n\t\t\n\t\t\n\t\t\\item For all $G \\in {\\mathcal{G}}$ it holds that $|G \\cap A(t)| \\leq k(G)$. 
If $G \\in {\\mathcal{G}} \\setminus \\{\\textsf{group}(\\ell)\\}$, then $|G \\cap A(t)| = |A(s) \\cap G| \\leq k(G)$. The equality is by Step~\\ref{gr:swap2} and the inequality is by the assumption that $A(s) \\in {\\mathcal{C}}({\\mathcal{I}})$. Otherwise, $|G \\cap A(t)| = \\left| A(s) \\cap G \\right| +|\\{\\ell\\} \\cap G| = |A(s) \\cap G|+1 \\leq k(G)$. The first equality is by Step~\\ref{gr:swap2} and the inequality is by Step~\\ref{gr:while2}. \n\t\t\n\t\t\n\t\t\\item For each bounding group $G \\in {\\mathcal{B}}({\\mathcal{I}})$ it holds that $ \\ceil{ \\frac{|G \\setminus A(t)|}{k(G)} } \\leq p({\\mathcal{I}})-1$. If $G \\in {\\mathcal{G}} \\setminus \\{\\textsf{group}(\\ell)\\}$, then $ \\ceil{ \\frac{|G \\setminus A(t)|}{k(G)} } = \\ceil{ \\frac{|G \\setminus A(s)|}{k(G)} } \\leq p({\\mathcal{I}})-1$. The equality is by Step~\\ref{gr:swap2} and the inequality is by the assumption on $A(s)$. Otherwise, $$ \\ceil{ \\frac{|G \\setminus A(t)|}{k(G)} } = \\ceil{ \\frac{|G \\setminus A(s)|-|\\{\\ell\\}|}{k(G)} } \\leq \\ceil{ \\frac{|G \\setminus A(s)|}{k(G)} } \\leq p({\\mathcal{I}})-1.$$ The first equality is by Step~\\ref{gr:swap2} and the last inequality is by the assumption on $A(s)$. \n\t\\end{itemize}\n\tTherefore, by Claim~\\ref{clm:A00} and the above loop invariant, the claim follows. \n\t\n\\end{proof}\n\n\n\n\n\\begin{claim}\n\t\\label{clm:A}\n\t$A \\in {\\mathcal{C}}({\\mathcal{I}})$ and for each bounding group $G \\in {\\mathcal{B}}({\\mathcal{I}})$ it holds that $\\ceil{ \\frac{|G \\setminus A|}{k(G)} } \\leq p({\\mathcal{I}})-1$.\n\\end{claim}\n\n\\begin{proof}\n\tWe prove the claim using a loop invariant for the while loop of Step~\\ref{gr:while1}. Let $A(s),A(t)$ be the object $A$ before and after an iteration of the while loop of Step~\\ref{gr:while1}, respectively. Now, assume that $A(s) \\in {\\mathcal{C}}({\\mathcal{I}})$ and for each bounding group $G \\in {\\mathcal{B}}({\\mathcal{I}})$ it holds that $ \\ceil{ \\frac{|G \\setminus A(s)|}{k(G)} } \\leq p({\\mathcal{I}})-1$. We show the conditions of the claim below for $A(t)$. By Step~\\ref{gr:swap}, let $\\ell,y \\in I$ such that $A(t) = \\left(A(s) \\setminus \\{\\ell\\} \\right) \\cup \\{y\\}$. Therefore, \\begin{itemize}\n\t\t\\item $s(A(t)) \\leq 1$. $$s(A(t)) = s\\left( A(s) \\right) -s(\\ell)+s(y) \\leq s\\left( A(s) \\right) +s(y) \\leq (1-\\delta)+\\delta =1.$$ The first equality is by Step~\\ref{gr:swap}. The first inequality is because $s(\\ell) \\geq 0$. The second inequality is because $s(A(s)) \\leq 1-\\delta$ by Step~\\ref{gr:while1}; in addition, for all items the size is at most $\\delta$. \n\t\t\n\t\t\n\t\t\n\t\t\\item For all $G \\in {\\mathcal{G}}$ it holds that $|G \\cap A(t)| \\leq k(G)$. If $G \\in {\\mathcal{G}} \\setminus \\{\\textsf{group}(\\ell)\\}$, then $|G \\cap A(t)| = |A(s) \\cap G| \\leq k(G)$. The equality is by Step~\\ref{gr:swap} and the inequality is by the assumption that $A(s) \\in {\\mathcal{C}}({\\mathcal{I}})$. Otherwise, $|G \\cap A(t)| = \\left| A(s) \\cap G \\right| -|\\{\\ell\\} \\cap G|+|\\{y\\} \\cap G| = |A(s) \\cap G| \\leq k(G)$. The first equality is by Step~\\ref{gr:swap} and the inequality is by the assumption that $A(s) \\in {\\mathcal{C}}({\\mathcal{I}})$.\n\t\t\n\t\t\n\t\t\\item For each bounding group $G \\in {\\mathcal{B}}({\\mathcal{I}})$ it holds that $\\ceil{ \\frac{|G \\setminus A(t)|}{k(G)} } \\leq p({\\mathcal{I}})-1$. 
If $G \\in {\\mathcal{G}} \\setminus \\{\\textsf{group}(\\ell)\\}$, then $\\ceil{ \\frac{|G \\setminus A(t)|}{k(G)} } = \\ceil{ \\frac{|G \\setminus A(s)|}{k(G)} }\\leq p({\\mathcal{I}})-1$. The equality is by Step~\\ref{gr:swap} and the inequality is by the assumption on $A(s)$. Otherwise, $$\\ceil{ \\frac{|G \\setminus A(t)|}{k(G)} } = \\ceil{ \\frac{|G \\setminus A(s)|+|\\{\\ell\\} \\cap G|-|\\{y\\} \\cap G|}{k(G)} }= \\ceil{ \\frac{|G \\setminus A(s)|}{k(G)} } \\leq p({\\mathcal{I}})-1.$$ The equalities are by Step~\\ref{gr:swap} and the inequality is by the assumption on $A(s)$. \n\t\\end{itemize}\n\tTherefore, by Claim~\\ref{clm:A1} and the above loop invariant, the claim follows. \n\t\n\\end{proof}\n\n\n\n\n\\begin{claim}\n\t\\label{ABCV}\n\t$V({\\mathcal{I}} \\setminus A)\\leq p({\\mathcal{I}})-1$.\n\\end{claim}\n\n\\begin{proof}\n\t\\begin{equation*}\n\t\tV({\\mathcal{I}} \\setminus A) = \\max_{G \\in {\\mathcal{G}}_A} \\ceil{ \\frac{|G|}{k_A(G)} } = \\max_{G \\in {\\mathcal{G}}} \\ceil{ \\frac{|G \\setminus A|}{k(G)} } \\leq p({\\mathcal{I}})-1.\n\t\\end{equation*} The second equality is by \\eqref{IIS}. The last inequality is because for all bounding groups $G \\in {\\mathcal{B}}({\\mathcal{I}})$ it holds that $ \\ceil{ \\frac{|G \\setminus A|}{k(G)} } \\leq p({\\mathcal{I}})-1$ by Claim~\\ref{clm:A} and for all $G \\in {\\mathcal{G}} \\setminus {\\mathcal{B}}({\\mathcal{I}})$ it holds that $ \\ceil{ \\frac{|G \\setminus A|}{k(G)} } \\leq p({\\mathcal{I}})-1$ by \\eqref{bounding}.\n\\end{proof}\n\n\n\\begin{lemma}\n\t\\label{delta<}\n\tIf $s(A) < 1-\\delta$, then Algorithm~\\ref{alg:GREEDY} returns a packing for ${\\mathcal{I}}$ with at most $p({\\mathcal{I}})$ bins.\n\\end{lemma}\n\n\\begin{proof}\n\t\n\tWe use the following auxiliary claims. \n\t\n\t\\begin{claim}\n\t\t\\label{VD1}\n\t\tFor all $G \\in {\\mathcal{G}}$ it holds that $|A \\cap G| = \\min\\{k(G),|G|\\}$. \n\t\\end{claim}\n\t\n\t\\begin{proof}\n\t\tAssume towards a contradiction that there is $G \\in {\\mathcal{G}}$ such that $|A \\cap G| < \\min\\{k(G),|G|\\}$. Therefore, there is $\\ell \\in G \\setminus A$. Consider $A$ after Step~\\ref{gr:end0}. Because $s(A) < 1-\\delta$, it follows by Step~\\ref{gr:while2} and Step~\\ref{gr:swap2} that $\\ell$ can be added to $A$, a contradiction. \n\t\\end{proof}\n\t\n\t\\begin{claim}\n\t\t\\label{VD2}\n\t\tFor all $G \\in {\\mathcal{G}}$, $\\ell \\in A \\cap G$ and $y \\in G \\setminus A$ it holds that $s(y) \\leq s(\\ell)$. \n\t\\end{claim}\n\t\n\t\\begin{proof}\n\t\tAssume towards a contradiction that there are $\\ell \\in A, y \\in I \\setminus A$ such that $s(y) > s(\\ell)$ and $\\textnormal{\\textsf{group}}(\\ell) = \\textnormal{\\textsf{group}}(y)$. Consider $A$ after Step~\\ref{gr:end1}. Because $s(A) < 1-\\delta$, it follows by Step~\\ref{gr:while1} and Step~\\ref{gr:swap} that $y$ can be added to $A$ and $\\ell$ can be removed from $A$, a contradiction. 
\n\t\t\n\t\\end{proof}\n\t\n\t\n\t\\begin{claim}\n\t\t\\label{ABCV2}\n\t\tIf $s(A) < 1-\\delta$ then $V({\\mathcal{I}} \\setminus A)\\leq V({\\mathcal{I}})-1$.\n\t\\end{claim}\n\t\n\t\\begin{proof}\n\t\t\\begin{equation*}\n\t\t\t\\begin{aligned}\n\t\t\t\t\tV({\\mathcal{I}} \\setminus A) ={} & \\max_{G \\in {\\mathcal{G}}_A} \\ceil{ \\frac{|G|}{k_A(G)} } = \\max_{G \\in {\\mathcal{G}}} \\ceil{ \\frac{|G \\setminus A|}{k(G)} } = \\max_{G \\in {\\mathcal{G}}} \\ceil{ \\frac{|G|-\\min\\{ |G|,k(G) \\} }{k(G)}} \\\\ \\leq{} & \\max_{G \\in {\\mathcal{G}}} \\ceil{ \\frac{|G|}{k(G)} -1 } = \\max_{G \\in {\\mathcal{G}}} \\ceil{ \\frac{|G|}{k(G)}}-1 = V({\\mathcal{I}})-1.\n\t\t\t\\end{aligned}\n\t\t\\end{equation*} The second equality is by \\eqref{IIS}. The third equality is by Claim~\\ref{VD1}. For the inequality: if $\\min\\{ |G|,k(G) \\} = k(G)$ then the corresponding terms on the two sides are equal, and if $\\min\\{ |G|,k(G) \\} = |G|$ then the corresponding term on the left-hand side is $0$, which is at most $V({\\mathcal{I}})-1$ since $I \\neq \\emptyset$. The fourth equality is because $\\ceil{x-1} = \\ceil{x}-1$ for any $x \\in \\mathbb{R}$. \n\t\\end{proof}\n\t\n\t\n\t\n\t\n\t\n\t\n\tNow, back to the proof of Lemma~\\ref{delta<}. We prove the lemma by induction on $V({\\mathcal{I}})$. For $V({\\mathcal{I}}) = 1$: it follows by Claim~\\ref{ABCV2} that $V({\\mathcal{I}} \\setminus A) \\leq V({\\mathcal{I}})-1$; hence, $V({\\mathcal{I}} \\setminus A) = 0$. Therefore, by \\eqref{IIS} it follows that $I \\setminus A = \\emptyset$ and therefore $A = I$. Thus, $I$ is a configuration of ${\\mathcal{I}}$ by Claim~\\ref{clm:A}. Hence, by Step~\\ref{gr:if1} and Step~\\ref{gr:if} it holds that Algorithm~\\ref{alg:GREEDY} returns $(I)$, which is a packing of ${\\mathcal{I}}$ because $I = A$ and $A$ is a configuration of ${\\mathcal{I}}$. \n\t\n\tRecall that for any BPP instance ${\\mathcal{J}}$ we define $A^{{\\mathcal{J}}}$ as the first entry of the tuple returned by $\\textsf{Greedy}({\\mathcal{J}},\\delta)$. Now, for the induction, assume that $V({\\mathcal{I}}) \\geq 2$, and assume that for any BPP instance ${\\mathcal{J}} = (J,{\\mathcal{G}}_J,s_J,k_J)$ such that (i) $s_J(\\ell) \\leq \\delta ~\\forall \\ell \\in J$, (ii) $V({\\mathcal{J}}) \\leq V({\\mathcal{I}})-1$, and (iii) $s\\left(A^{{\\mathcal{J}}}\\right) < 1-\\delta$, it holds that $\\textsf{Greedy}({\\mathcal{J}},\\delta)$ returns a packing of ${\\mathcal{J}}$ of at most $V({\\mathcal{J}})$ bins. For the step of the induction, we use the following auxiliary claim. \n\t\n\t\n\t\\begin{claim}\n\t\t\\label{VD3}\n\t\t$s\\left(A^{{\\mathcal{I}} \\setminus A}\\right) < 1-\\delta$.\n\t\\end{claim}\n\t\n\t\\begin{proof}\n\t\tAssume towards a contradiction that $s\\left(A^{{\\mathcal{I}} \\setminus A}\\right) \\geq 1-\\delta$. Therefore, because $s(A) < 1-\\delta$, it holds that $s\\left(A^{{\\mathcal{I}} \\setminus A}\\right) > s(A)$; thus, we reach a contradiction if one of the following conditions holds. \n\t\t\n\t\t\\begin{enumerate}\n\t\t\t\n\t\t\t\\item $|A^{{\\mathcal{I}} \\setminus A}| > |A|$. 
Therefore, $$|A^{{\\mathcal{I}} \\setminus A}| = \\sum_{G \\in {\\mathcal{G}}} |A^{{\\mathcal{I}} \\setminus A} \\cap G| \\leq \\sum_{G \\in {\\mathcal{G}}} \\min\\{k(G),|G|\\} = \\sum_{G \\in {\\mathcal{G}}} |A \\cap G| = |A|.$$ The inequality is because $A^{{\\mathcal{I}} \\setminus A} \\in {\\mathcal{C}}({\\mathcal{I}} \\setminus A)$ by Claim~\\ref{clm:A} and ${\\mathcal{C}}({\\mathcal{I}} \\setminus A) \\subseteq {\\mathcal{C}}({\\mathcal{I}})$ by \\eqref{IIS}.\\footnote{Claim~\\ref{clm:A} is stated and proven for $A$ and not for $A^{{\\mathcal{I}} \\setminus A}$ for simplicity; however, the exact arguments can be used for proving the claim also for $A^{{\\mathcal{I}} \\setminus A}$.} The second equality is by Claim~\\ref{VD1}. The above equation contradicts the assumption of this case that $|A^{{\\mathcal{I}} \\setminus A}| > |A|$. \n\t\t\t\n\t\t\t\n\t\t\t\\item There are $G \\in {\\mathcal{G}}$, $\\ell \\in A^{{\\mathcal{I}} \\setminus A} \\cap G$ and $y \\in G \\setminus A^{{\\mathcal{I}} \\setminus A}$ such that $s(y) > s(\\ell)$. This is a contradiction to Claim~\\ref{VD2}. \n\t\t\\end{enumerate}\n\t\t\n\t\tOtherwise, if the two cases above are not satisfied:\n\t\t\n\t\t$$s\\left(A^{{\\mathcal{I}} \\setminus A}\\right) = \\sum_{G \\in {\\mathcal{G}}~} s\\left(G\\cap A^{{\\mathcal{I}} \\setminus A}\\right) \\leq \\sum_{G \\in {\\mathcal{G}}~} s\\left(G\\cap A \\right) = s(A).$$ The inequality is by Claim~\\ref{VD1}, Claim~\\ref{VD2} and that conditions 1.~and 2.~above are not satisfied. By the above equation, we reach a contradiction to $s\\left(A^{{\\mathcal{I}} \\setminus A}\\right) > s(A)$. \n\t\t\n\t\\end{proof}\n\t\n\t\n\t\n\tBy Claim~\\ref{VD3} we can apply the assumption of the induction on ${\\mathcal{I}} \\setminus A$; thus, Algorithm~\\ref{alg:GREEDY} returns a packing for ${\\mathcal{I}} \\setminus A$ with at most $V({\\mathcal{I}} \\setminus A)$ bins. In addition, by Claim~\\ref{ABCV2} it holds that $V({\\mathcal{I}} \\setminus A) \\leq V({\\mathcal{I}})-1$. Hence, by Step~\\ref{gr:return}, the object returned by the algorithm is $X = (A) \\oplus \\textsf{Greedy}\\left(\\mathcal{I} \\setminus A, \\delta \\right)$, which contains at most $V({\\mathcal{I}})$ entries. Moreover, it holds that $X$ is a packing of ${\\mathcal{I}}$ by Claim~\\ref{clm:confappend} using the following arguments: (i) $A \\in {\\mathcal{C}}({\\mathcal{I}})$ by Claim~\\ref{clm:A} and (ii) $ \\textsf{Greedy}\\left(\\mathcal{I} \\setminus A, \\delta \\right)$ is a packing of ${\\mathcal{I}} \\setminus A$ by the assumption of the induction. \n\\end{proof}\n\n\n\n\\begin{lemma}\n\t\\label{delta>>}\n\tIf $s(A) \\geq 1-\\delta$, then Algorithm~\\ref{alg:GREEDY} returns a packing for ${\\mathcal{I}}$ with at most $p({\\mathcal{I}})$ bins.\n\\end{lemma}\n\n\\begin{proof}\n\tWe prove the claim by induction on $p({\\mathcal{I}})$. For $p({\\mathcal{I}}) \\leq 1$: by Claim~\\ref{ABCV} it follows that $V({\\mathcal{I}} \\setminus A) \\leq p({\\mathcal{I}})-1 \\leq 0$; therefore, by \\eqref{IIS} it follows that $I \\setminus A = \\emptyset$ and $A = I$. Hence, by Step~\\ref{gr:if1} and Step~\\ref{gr:if} it holds that Algorithm~\\ref{alg:GREEDY} returns $(I)$, which is a packing of ${\\mathcal{I}}$ because $I = A$ is a configuration of ${\\mathcal{I}}$ by Claim~\\ref{clm:A}. 
Now, for the assumption of the induction, assume that for any BPP instance ${\\mathcal{J}} = (J,{\\mathcal{G}}_J,s_J,k_J)$ such that (i) $s_J(\\ell) \\leq \\delta ~\\forall \\ell \\in J$, (ii) $s\\left(A^{{\\mathcal{J}}}\\right) \\geq 1-\\delta$, and (iii) $p({\\mathcal{J}}) \\leq p({\\mathcal{I}})-1$, it holds that $\\textsf{Greedy}({\\mathcal{J}},\\delta)$ returns a packing of ${\\mathcal{J}}$ of at most $p({\\mathcal{J}})$ bins.\n\t\n\tFor the step of the induction, assume that $p({\\mathcal{I}}) > 1$. We use the following auxiliary claim. \n\t\n\t\n\t\\begin{claim}\n\t\t\\label{prom:2}\n\t\t$p\\left({\\mathcal{I}} \\setminus A\\right) \\leq p({\\mathcal{I}})-1$.\n\t\\end{claim}\n\t\n\t\\begin{proof}\n\t\t\n\t\tWe use the following inequality. \\begin{equation}\n\t\t\t\\label{ABC}\n\t\t\t(1+2\\delta) \\cdot s(I \\setminus A) +2 \\leq (1+2\\delta) \\cdot (s(I)-(1-\\delta))+2 \\leq (1+2\\delta) \\cdot s(I) -1+2\\leq p({\\mathcal{I}})-1. \n\t\t\\end{equation} The first inequality is because $s(A) \\geq 1-\\delta$. The second inequality is because $(1+2\\delta)(1-\\delta) = 1+\\delta-2\\delta^2 \\geq 1$ for $\\delta \\in (0,0.5)$. The last inequality is by \\eqref{promise}. Now, $$p\\left({\\mathcal{I}} \\setminus A\\right) = \\max \\left\\{ (1+2\\delta) \\cdot s(I \\setminus A)+2, V({\\mathcal{I}} \\setminus A) \\right\\} \\leq p({\\mathcal{I}})-1.$$ The first equality is by \\eqref{IIS} and \\eqref{promise}. The first inequality is by \\eqref{ABC} and Claim~\\ref{ABCV}.\n\t\\end{proof}\n\t\n\tTo conclude the proof of the lemma, observe the following two complementary cases. \\begin{enumerate}\n\t\t\\item $s\\left(A^{{\\mathcal{I}} \\setminus A}\\right) \\geq 1-\\delta$. Then, by Claim~\\ref{prom:2} and that $s\\left(A^{{\\mathcal{I}} \\setminus A}\\right) \\geq 1-\\delta$, we can apply the assumption of the induction on ${\\mathcal{I}} \\setminus A$; thus, Algorithm~\\ref{alg:GREEDY} applied on ${\\mathcal{I}} \\setminus A$ returns a packing for ${\\mathcal{I}} \\setminus A$ with at most $p\\left({\\mathcal{I}} \\setminus A\\right)$ bins. In addition, by Claim~\\ref{prom:2} it holds that $p\\left({\\mathcal{I}} \\setminus A\\right) \\leq p({\\mathcal{I}})-1$. Hence, by Step~\\ref{gr:return}, the object returned by the algorithm on ${\\mathcal{I}}$ is $X = (A) \\oplus \\textsf{Greedy}\\left(\\mathcal{I} \\setminus A, \\delta \\right)$, which contains at most $ 1+p({\\mathcal{I}})-1 = p({\\mathcal{I}})$ entries. Moreover, it holds that $X$ is a packing of ${\\mathcal{I}}$ by Claim~\\ref{clm:confappend} using the following arguments: (i) $A \\in {\\mathcal{C}}({\\mathcal{I}})$ by Claim~\\ref{clm:A} and (ii) $ \\textsf{Greedy}\\left(\\mathcal{I} \\setminus A, \\delta \\right)$ is a packing of ${\\mathcal{I}} \\setminus A$ by the assumption of the induction. \n\t\t\n\t\t\\item $s\\left(A^{{\\mathcal{I}} \\setminus A}\\right) < 1-\\delta$. Then, by Lemma~\\ref{delta<} and that $s\\left(A^{{\\mathcal{I}} \\setminus A}\\right) < 1-\\delta$, Algorithm~\\ref{alg:GREEDY} applied on ${\\mathcal{I}} \\setminus A$ returns a packing for ${\\mathcal{I}} \\setminus A$ with at most $V({\\mathcal{I}} \\setminus A)$ bins. In addition, by Claim~\\ref{ABCV} it holds that $V({\\mathcal{I}} \\setminus A) \\leq p({\\mathcal{I}})-1$. Hence, by Step~\\ref{gr:return}, the object returned by the algorithm is $X = (A) \\oplus \\textsf{Greedy}\\left(\\mathcal{I} \\setminus A, \\delta \\right)$, which contains at most $ 1+p({\\mathcal{I}})-1 = p({\\mathcal{I}})$ entries. 
Moreover, it holds that $X$ is a packing of ${\\mathcal{I}}$ by Claim~\\ref{clm:confappend} using the following arguments: (i) $A \\in {\\mathcal{C}}({\\mathcal{I}})$ by Claim~\\ref{clm:A} and (ii) $ \\textsf{Greedy}\\left(\\mathcal{I} \\setminus A, \\delta \\right)$ is a packing of ${\\mathcal{I}} \\setminus A$ by Lemma~\\ref{delta<}. \n\t\\end{enumerate}\n\\end{proof}\n\n\n\\begin{lemma}\n\t\\label{gr:running}\n\tThe running time of Algorithm~\\ref{alg:GREEDY} is $\\textnormal{poly} (|{\\mathcal{I}} |)$. \n\\end{lemma}\n\n\\begin{proof}\n\tStep~\\ref{gr:if} and Step~\\ref{gr:if1} can be trivially computed in linear time in $|{\\mathcal{I}}|$. Step~\\ref{gr:end0} can be computed in polynomial time in $|{\\mathcal{I}}|$ by (i) finding the bounding groups by \\eqref{bounding}, and (ii) computing a bounding subset of ${\\mathcal{I}}$ as follows. For each bounding group $G$ we compute a bounding subset of $G$ by sorting the items and choosing a minimal number of items according to the sorted order that form a bounding subset of $G$ (for more details see the proof of Lemma~\\ref{bG}). \n\t\n\tThe while loop of Step~\\ref{gr:while2} runs at most $|I|$ times because in each iteration an item is added to the constructed bin and is not added again by Step~\\ref{gr:while2} and Step~\\ref{gr:swap2}. Finally, the while loop of Step~\\ref{gr:while1} runs at most ${|I| \\choose 2} = \\textnormal{poly} (|{\\mathcal{I}} |)$ times because in each iteration two items are replaced in Step~\\ref{gr:swap}, one is added to the constructed bin and the other one is removed; for any two items, this can happen at most once by Step~\\ref{gr:while1}. We conclude that the running time of Algorithm~\\ref{alg:GREEDY} is $\\textnormal{poly} (|{\\mathcal{I}} |)$. \n\\end{proof}\n\n\n\n\\noindent{\\bf Proof of Lemma~\\ref{thm:greedy}:} If $s(A) < 1-\\delta$ it holds that $X = (A) \\oplus \\textsf{Greedy}\\left(\\mathcal{I} \\setminus A, \\delta \\right)$ is a packing of ${\\mathcal{I}} of at most $V({\\mathcal{I}})$ bins by Lemma~\\ref{delta<}; therefore, the number of bins is at most $p({\\mathcal{I}})$ by \\eqref{promise}. Otherwise, it holds that $s(A) \\geq 1-\\delta$; thus, it holds that $X$ is a packing of ${\\mathcal{I}}$ of at most $p({\\mathcal{I}})$ bins by Lemma~\\ref{delta>>}. Therefore, the number of bins in the packing is at most $p({\\mathcal{I}}) \\leq (1+2\\delta) \\cdot \\max\\left\\{ s(I),~V(\\mathcal{I})\\right\\}+2$ by \\eqref{promise}. Finally, the running time is $\\textnormal{poly} (|{\\mathcal{I}} |)$ by Lemma~\\ref{gr:running}.\\qed\n\n\n\\section{Omitted Proofs of Section~\\ref{sec:overview}}\n\\label{app:omitted}\n\n\\noindent{\\bf Proof of Lemma~\\ref{configurationLP}:}\nThe approach presented here is standard, and the result is often stated without proof (see, e.g., Theorem~1.1 in \\cite{bansal2010new}). We include the proof \nfor completeness. We refer to terms such as separation oracle and well-described polyhedron as defined in~\\cite{grotschel2012geometric}. \n\nLet ${\\mathcal I}=(I,{\\mathcal{G}}, s, k )$ be a BPP instance and ${\\varepsilon}\\in (0,0.1)$. 
\nFor any $\\mathcal{D}\\subseteq {\\mathcal{C}}$ we define the following linear program.\n\\begin{equation}\n\t\\label{C-LP-relaxed}\n\t\\textnormal{LP}(\\mathcal{D}):~~~~~\n\t\\begin{aligned}\n\t\t~~~~~ \\min\\quad & ~~~~~\\sum_{C \\in \\mathcal{D}} \\bar{x}_C \\\\\n\t\t\\textsf{s.t.\\quad} & ~~~\\sum_{~C \\in \\mathcal{C}[\\ell]\\cap \\mathcal{D}} \\bar{x}_C \\geq 1 & \\forall \\ell \\in I~~~~~\\\\ \n\t\t& ~~~~~\\bar{x}_C \\geq 0 ~~~~~~~~&~~~~~~~~~~~~~~~~~~~~ \\forall C \\in \\mathcal{C}~~~~\n\t\\end{aligned}\n\\end{equation}\nWhile $\\textnormal{LP}({\\mathcal{C}})$ is not identical to \\eqref{C-LP}, solving $\\textnormal{LP}({\\mathcal{C}})$ is equivalent to solving \\eqref{C-LP}, and we will focus on this objective. \n\nFor any $\\mathcal{D}\\subseteq {\\mathcal{C}}$, the dual linear program of $\\textnormal{LP}(\\mathcal{D})$ is the following. \n\\begin{equation}\n\t\\label{dual}\n\t\\textnormal{Dual}(\\mathcal{D}):~~~~~\n\t\\begin{aligned}\n\t\t~~~~~ \\max\\quad & ~~~~~\\sum_{\\ell \\in I} \\bar{\\lambda}_{\\ell} \\\\\n\t\t\\textsf{s.t.\\quad} & ~~~\\sum_{\\ell \\in C} \\bar{\\lambda}_{\\ell} \\leq 1 & \\forall C\\in \\mathcal{D}~~~~~\\\\ \n\t\t& ~~~~~\\bar{\\lambda}_{\\ell} \\geq 0 ~~~~~~~~&~~~~~~~~~~~~~~~~~~~~ \\forall \\ell \\in I~~~~\n\t\\end{aligned}\n\\end{equation}\nWe note that $\\textnormal{Dual}({\\mathcal{C}})$ can be solved in polynomial time given a separation oracle (see Theorem 6.3.2 in~\\cite{grotschel2012geometric}). However, since no such separation oracle exists, we apply a technique dating back to~\\cite{karmarkar1982efficient} in order to obtain an approximate solution.\n\nGiven $\\mathcal{D}\\subseteq {\\mathcal{C}}$ and $v\\in \\mathbb{R}$, define a polytope\n\\begin{equation}\n\t\\label{eq:F_def}\n\tF(\\mathcal{D}, v) = \\left\\{\\bar{\\lambda}\\in \\mathbb{R}^I_{\\geq 0} ~~\\middle|~~\\begin{aligned}\n\t\\sum_{\\ell \\in I} \\bar{\\lambda}_{\\ell} \\geq v \\\\ \n\t\\sum_{\\ell \\in C} \\bar{\\lambda}_{\\ell} \\leq 1 &~~~~~~~\\forall C\\in \\mathcal{D}\n\\end{aligned}\\right\\}\n\\end{equation}\nClearly, $\\textnormal{OPT}(\\textnormal{Dual}(\\mathcal{D})) \\geq v$ if and only if $F(\\mathcal{D},v) \\neq \\emptyset$. \n\nObserve that, for every $\\mathcal{D}\\subseteq {\\mathcal{C}}$ and $v\\in \\mathbb{R}_{\\geq 0}$, each of the inequalities in the definition of $F(\\mathcal{D}, v)$ can be represented using $\\varphi= O(|I| + \\|v\\|)$ bits, where $\\|v\\|$ is the size of the representation of the number~$v$; thus, $(F(\\mathcal{D}, v), n ,\\varphi)$, where $n = |I|$, is a well-described polyhedron (Definition 6.2.2 in~\\cite{grotschel2012geometric}). \nBy Theorem 6.4.1 in~\\cite{grotschel2012geometric} there is an algorithm $\\textnormal{\\textsf{Ellipsoid}}$ which given a separation oracle for $F(\\mathcal{D},v)$ determines if $F(\\mathcal{D},v)\\neq \\emptyset$ in time $\\textnormal{poly}(|I|, \\|v\\|)$. \n\nWe use $\\textnormal{\\textsf{Ellipsoid}}$ to determine if $F({\\mathcal{C}},v)\\neq \\emptyset$ with the following (flawed) separation oracle. Given $\\bar{\\lambda}\\in \\mathbb{R}^I_{\\geq 0}$, the oracle first checks whether $\\sum_{\\ell \\in I} \\bar{\\lambda}_{\\ell} < v$; if so, this violated inequality is returned as a separating hyperplane. Otherwise, the oracle executes an FPTAS, with accuracy parameter $\\frac{{\\varepsilon}}{10}$, for the problem of finding a configuration $C \\in {\\mathcal{C}}$ which maximizes $\\sum_{\\ell \\in C} \\bar{\\lambda}_{\\ell}$. If the configuration $C$ returned by the FPTAS satisfies $\\sum_{\\ell \\in C} \\bar{\\lambda}_{\\ell}>1$, the algorithm returns $\\sum_{\\ell \\in C} \\bar{\\lambda}_{\\ell}>1$ as a separating hyperplane. If the configuration returned by the FPTAS does not meet this condition, then the oracle aborts the execution of $\\textnormal{\\textsf{Ellipsoid}}$.\n\nObserve that the execution of the separation oracle runs in time $\\textnormal{poly}(|{\\mathcal{I}}|, \\frac{1}{{\\varepsilon}}, \\|v\\|)$. 
Thus, the execution of $\\textnormal{\\textsf{Ellipsoid}}$ with the oracle terminates in polynomial time. The execution can either end with a declaration that $F({\\mathcal{C}},v)=\\emptyset$ or be aborted by the separation oracle. Consider each of these two cases:\n\\begin{itemize}\n\t\\item\nThe execution terminated by declaring that $F({\\mathcal{C}},v)=\\emptyset$. Then, as the hyperplanes returned by the oracle are indeed separating hyperplanes, it follows that $F({\\mathcal{C}},v)=\\emptyset$ is a correct statement. Let $\\mathcal{D}$ be the set of configurations returned as separating hyperplanes throughout the execution. Then, as all the separating hyperplanes returned are separating hyperplanes for $F(\\mathcal{D}, v)$, we conclude that $F(\\mathcal{D},v)=\\emptyset$ as well. Furthermore, $|\\mathcal{D}|$ is polynomial in $|I|$ and $\\|v\\|$, as the running time of $\\textnormal{\\textsf{Ellipsoid}}$ is polynomial in these variables. \n\n\\item Otherwise, the execution of $\\textnormal{\\textsf{Ellipsoid}}$ has been aborted. Let $\\bar{\\lambda}$ be the value given to the separation oracle on its last call (the call which resulted in the abort). It follows that $\\sum_{\\ell \\in I} \\bar{\\lambda}_{\\ell}\\geq v$ and $\\sum_{\\ell \\in C} \\bar{\\lambda}_{\\ell} \\leq \\frac{1}{1-\\frac{{\\varepsilon}}{10}}$ for all $C \\in {\\mathcal{C}}$ (otherwise the FPTAS must return a solution $C$ for which $\\sum_{\\ell \\in C} \\bar{\\lambda}_{\\ell} >1$). Then it holds that $\\left(1-\\frac{{\\varepsilon}} {10}\\right) \\bar{\\lambda} \\in F\\left({\\mathcal{C}},\\left(1-\\frac{{\\varepsilon}} {10}\\right)\\cdot v \\right)$, and consequently $F\\left({\\mathcal{C}},\\left(1-\\frac{{\\varepsilon}} {10}\\right)\\cdot v \\right)\\neq \\emptyset$.\n\\end{itemize}\n\nThus, using a binary search, we can find $v\\in [0, n]$ and $\\mathcal{D}\\subseteq {\\mathcal{C}}$ such that $F({\\mathcal{C}},v)=F(\\mathcal{D}, v)=\\emptyset$, $F\\left({\\mathcal{C}}, \\left( 1-\\frac{{\\varepsilon}}{2}\\right) \\cdot v\\right)\\neq \\emptyset$, and $|\\mathcal{D}|$ is polynomial in $|I|$. As $F\\left({\\mathcal{C}}, \\left( 1-\\frac{{\\varepsilon}}{2}\\right) \\cdot v\\right)\\neq \\emptyset$ it follows that $\\textnormal{OPT}(\\textnormal{Dual}({\\mathcal{C}})) \\geq\\left( 1-\\frac{{\\varepsilon}}{2}\\right) \\cdot v$, and by strong duality it holds that $\\textnormal{OPT}(\\textnormal{LP}({\\mathcal{C}})) \\geq \\left( 1-\\frac{{\\varepsilon}}{2}\\right) \\cdot v$. Furthermore, as $F({\\mathcal{C}},v)=F(\\mathcal{D}, v)=\\emptyset$ it follows that $\\textnormal{OPT}(\\textnormal{Dual}(\\mathcal{D}))\\leq v$, and by strong duality $\\textnormal{OPT}(\\textnormal{LP}(\\mathcal{D}))\\leq v$. As $|\\mathcal{D}|$ is polynomial we can solve $\\textnormal{LP}(\\mathcal{D})$ in polynomial time and obtain a solution ${\\bar{x}}$ (which is also a solution for $\\textnormal{LP}({\\mathcal{C}})$), such that $$\\|{\\bar{x}}\\| =\\textnormal{OPT}(\\textnormal{LP}(\\mathcal{D})) \\leq v \\leq \\frac{\\textnormal{OPT}(\\textnormal{LP}({\\mathcal{C}}))}{1-\\frac{{\\varepsilon}}{2}} \\leq (1+{\\varepsilon})\\cdot \\textnormal{OPT}(\\textnormal{LP}({\\mathcal{C}})).$$\n\nOverall, we obtained a $(1+{\\varepsilon})$-approximate solution in $\\textnormal{poly}(|{\\mathcal{I}}|, \\frac{1}{{\\varepsilon}})$ time, as required. 
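\n\nFor illustration only, the binary search over $v$ may be outlined as follows. This is a schematic sketch under simplifying assumptions (the names \\texttt{search\\_value}, \\texttt{ellipsoid} and \\texttt{stub} are ours and do not appear in the procedure above); \\texttt{ellipsoid(v)} abstracts one run of $\\textnormal{\\textsf{Ellipsoid}}$ with the above separation oracle on $F({\\mathcal{C}},v)$.\n\\begin{verbatim}\n# Schematic sketch of the binary search over v (illustration only).\n# ellipsoid(v) returns ("empty", D) when the run proves F(C, v) is empty,\n# where D is the set of configurations returned as separating hyperplanes,\n# and ("aborted", None) when the oracle aborts, which certifies that\n# F(C, (1 - eps/10) * v) is non-empty.\ndef search_value(n, eps, ellipsoid):\n    lo, hi = 0.0, float(n)  # F(C, 0) is trivially non-empty\n    witness = None          # a set D with F(D, hi) empty, once found\n    while hi - lo > eps * max(lo, 1.0) / 10.0:\n        mid = (lo + hi) / 2.0\n        status, configs = ellipsoid(mid)\n        if status == "aborted":\n            lo = mid                    # F(C, (1 - eps/10) * mid) non-empty\n        else:\n            hi, witness = mid, configs  # F(C, mid) empty, witnessed by D\n    return hi, witness\n\n# Toy usage with a stub oracle whose underlying LP value is 7.3:\nstub = lambda v: ("aborted", None) if v <= 7.3 else ("empty", ["D"])\nprint(search_value(20, 0.05, stub))\n\\end{verbatim}\n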
\noindent{\bf Proof of Lemma~\ref{lem:AFPTAS}:}
Observe that the optimum of \eqref{C-LP} is at most $\textnormal{OPT}({\mathcal I})$; thus, by Lemma~\ref{configurationLP}, $\bar{x}$ is a solution to the configuration LP \eqref{C-LP} of $\mathcal{I}$ with $\|{\bar{x}}\|\leq(1+{\varepsilon})\textnormal{OPT}(\mathcal{I})$. Therefore, by Lemma~\ref{lem:eviction}, Algorithm $\textsf{Evict}({\varepsilon},{\mathcal{I}},\bar{x})$ in Step~\ref{step:evicAFPTAS} returns a prototype $\bar{y}$ with $\bar{\gamma}$ in the $\bar{y}$-polytope such that (i) for all $\ell,j \in I, \ell \neq j$ it holds that $\bar{\gamma}_{\ell,j} = 0$; (ii) for all $C \in \textnormal{\textsf{supp}}(\bar{y})$ it holds that $|C| \leq {\varepsilon}^{-10}$ and $s(C \setminus L) \leq {\varepsilon}$; (iii) $\|\bar{y}\| \leq (1+{\varepsilon})\|\bar{x}\|$; (iv) for all $\ell \in I$ it holds that $\sum_{C \in {\mathcal{C}}[\ell]} {\bar{y}}_C \leq 2$. Then, in Step~\ref{GetPolytope}, Algorithm $\textsf{Shift}({\varepsilon},{\mathcal{I}},\bar{y})$ returns a good prototype $\bar{z}$ by Lemma~\ref{lem:Cnf} such that

\begin{equation}
	\label{final1}
	\begin{aligned}
		\|\bar{z}\| \leq{} & (1+5{\varepsilon})\|\bar{y}\|+Q({\varepsilon}) \leq (1+5{\varepsilon})\cdot (1+{\varepsilon})\cdot \|\bar{x}\|+Q({\varepsilon}) \\ \leq{} & (1+9{\varepsilon})(1+{\varepsilon}) \textnormal{OPT}(\mathcal{I})+Q({\varepsilon})\leq (1+19{\varepsilon}) \textnormal{OPT}(\mathcal{I})+Q({\varepsilon}).
	\end{aligned}
\end{equation} The first inequality is by Lemma~\ref{lem:Cnf}. The second inequality is by Lemma~\ref{lem:eviction}. The third inequality is by Lemma~\ref{configurationLP} and because $0<{\varepsilon} < 0.1$; the last inequality, similarly, holds as $0<{\varepsilon}<0.1$. Consequently, by Lemma~\ref{FromPolytope}, in Step~\ref{step:BPPnice} it holds that $\mathcal{B}$ is an ${\varepsilon}$-nice partition of size at most

\begin{equation}
	\label{final2}
	\|\bar{z}\| + {\varepsilon}^{-22}Q^2({\varepsilon}) \leq (1+19{\varepsilon}) \textnormal{OPT}(\mathcal{I})+Q({\varepsilon})+ {\varepsilon}^{-22}Q^2({\varepsilon}) \leq (1+19{\varepsilon}) \textnormal{OPT}(\mathcal{I})+{\varepsilon}^{-23}Q^2({\varepsilon}).
\end{equation} The first inequality is by \eqref{final1}. The second inequality holds because $0<{\varepsilon}<0.1$. Then, by Lemma~\ref{lem:GREEDY}, in Step~\ref{step:greedy1}, a full packing $\Phi$ of $\mathcal{I}$ is constructed. The number of bins in $\Phi$ is bounded by

\begin{equation*}
	\label{final3}
	\begin{aligned}
		\#\textsf{bins}(\Phi) \leq{} & (1+2{\varepsilon}) \left( (1+19{\varepsilon}) \textnormal{OPT}(\mathcal{I})+{\varepsilon}^{-23}Q^2({\varepsilon}) \right) +2{\varepsilon}^{-22}Q^2({\varepsilon}) \leq (1+60{\varepsilon}) \textnormal{OPT}(\mathcal{I})+3{\varepsilon}^{-23}Q^2({\varepsilon})\\
		\leq{} &(1+60{\varepsilon}) \textnormal{OPT}(\mathcal{I})+{\varepsilon}^{-24}Q^2({\varepsilon}) \leq (1+60{\varepsilon}) \textnormal{OPT}(\mathcal{I})+Q^3({\varepsilon}).
	\end{aligned}
\end{equation*} The first inequality is by \eqref{final2} and Lemma~\ref{lem:GREEDY}. The remaining inequalities hold as $0<{\varepsilon}<0.1$.
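The constant manipulations above are elementary; for completeness, for every $0<{\varepsilon}<0.1$,
\[
(1+5{\varepsilon})(1+{\varepsilon}) \leq 1+6.5{\varepsilon} \leq 1+9{\varepsilon},
\qquad
(1+9{\varepsilon})(1+{\varepsilon}) \leq 1+10.9{\varepsilon} \leq 1+19{\varepsilon},
\]
\[
(1+2{\varepsilon})(1+19{\varepsilon}) \leq 1+24.8{\varepsilon} \leq 1+60{\varepsilon},
\]
and the last inequality in the bound on $\#\textsf{bins}(\Phi)$ uses ${\varepsilon}^{-24}\leq \exp({\varepsilon}^{-17}) = Q({\varepsilon})$, hence ${\varepsilon}^{-24}Q^2({\varepsilon}) \leq Q^3({\varepsilon})$.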
\qed

\noindent{\bf Proof of Lemma~\ref{lem:gen_afptas}:}

\begin{algorithm}[h]
	\caption{$\textnormal{\textsf{Gen-AFPTAS}}({{\mathcal{J}}}, {\varepsilon})$}
	\label{alg:genalg}
	
	\SetKwInOut{Input}{Input}
	\SetKwInOut{Output}{Output}
	\Input{A BPP instance ${\mathcal{J}}$ and ${\varepsilon}\in (0,0.1)$ such that ${\varepsilon}^{-1}\in \mathbb{N}$}
	\Output{A packing of ${\mathcal{J}}$}
	
	Compute an ${\varepsilon}$-structured instance $\mathcal{I}$ by $\textsf{Reduce}({\cal J},{\varepsilon})$ \label{step:reduction}\;
	
	Compute $\Phi =\textsf{AFPTAS}({\mathcal I},{\varepsilon})$ (Algorithm~\ref{alg:Fscheme}) \label{step:solve_structured}\;
	
	Return a packing for ${\cal J}$ by $A = \textsf{Reconstruct}({\mathcal{J}},{\varepsilon},\Phi)$ \label{step:recon1}
\end{algorithm}
The pseudo-code of $\textnormal{\textsf{Gen-AFPTAS}}$ is given in Algorithm~\ref{alg:genalg}.
Consider the execution of $\textnormal{\textsf{Gen-AFPTAS}}$ with a BPP instance ${\mathcal{J}}$ and ${\varepsilon}\in(0,0.1)$ such that ${\varepsilon}^{-1}\in \mathbb{N}$.
By Lemma~\ref{lem:reductionReconstruction}, the instance~${\mathcal I}$ computed in Step~\ref{step:reduction} is an ${\varepsilon}$-structured instance and $\textnormal{OPT}({\mathcal I}) \leq \textnormal{OPT}({\mathcal{J}})$. Thus, by Lemma~\ref{lem:AFPTAS}, it holds that $\Phi$ (computed in Step~\ref{step:solve_structured}) is a packing of ${\mathcal I}$ which uses at most
$$ (1+60{\varepsilon}) \cdot \textnormal{OPT}({\mathcal I}) + Q^3({\varepsilon})\leq (1+60{\varepsilon}) \cdot \textnormal{OPT}({\mathcal{J}}) + Q^3({\varepsilon})$$
bins. Thus, by Lemma~\ref{lem:reductionReconstruction}, the packing $A$ returned by the algorithm is a packing of ${\mathcal{J}}$ which uses at most
$$
(1+60{\varepsilon}) \cdot \textnormal{OPT}({\mathcal{J}}) + Q^3({\varepsilon}) +13{\varepsilon}\cdot \textnormal{OPT}({\mathcal{J}}) +1 \leq (1+130{\varepsilon}) \cdot \textnormal{OPT}({\mathcal{J}}) +3\cdot Q^{3}({\varepsilon})
$$
bins.

It easily follows from Lemmas~\ref{lem:AFPTAS} and~\ref{lem:reductionReconstruction} that the overall running time of Algorithm~\ref{alg:genalg} is $\textnormal{poly}(|{\mathcal{J}}|, \frac{1}{{\varepsilon}})$.
\qed

\noindent{\bf Proof of Theorem~\ref{thm:main}:}
Let $\mathcal{J} = (I, \mathcal{G},s,k)$ be a BPP instance. Recall that $V({\mathcal{J}}) = \max_{G \in \mathcal{G}} \ceil{\frac{|G|}{k(G)}}$ is the maximum, over the groups $G \in \mathcal{G}$, of the cardinality of $G$ divided by the cardinality constraint of $G$, and let $W =s(I)+ V({\mathcal{J}})+c$, where $c = \exp\left(\exp\left({100^{17}}\right)\right)$ is a large constant.
\begin{claim}
	\label{clm:Weps}
$$
\textnormal{OPT}(\mathcal{J}) \leq 2\cdot W
$$
\end{claim}
\begin{proof}
	Let $L_{\frac{1}{2}}= \{ \ell \in I~|~s(\ell) > \frac{1}{2}\}$.
By Lemma~\ref{thm:greedy} it follows that $\textnormal{\textsf{Greedy}}$ finds a packing of the instance ${\mathcal{J}} \setminus L_{\frac{1}{2}}$ using at most $$\left(1+2\cdot \frac{1}{2}\right) \cdot\max\left\{s(I\setminus L_{\frac{1}{2}}), V({\mathcal{J}}\setminus L_{\frac{1}{2}}) \right\}+2\leq 2 \cdot\max \left\{s(I\setminus L_{\frac{1}{2}}), V({\mathcal{J}}) \right\}+2$$
	bins, where ${\mathcal{J}} \setminus L_{\frac{1}{2}} = (I_{L_{\frac{1}{2}}},{\mathcal{G}}_{L_{\frac{1}{2}}},s_{L_{\frac{1}{2}}},k_{L_{\frac{1}{2}}})$ is the BPP instance defined by
	\begin{equation*}
		\begin{aligned}
			&I_{L_{\frac{1}{2}}} = I \setminus L_{\frac{1}{2}}, &&
			\mathcal{G}_{L_{\frac{1}{2}}} = \left\{G \cap I_{L_{\frac{1}{2}}} ~\middle|~ G \in {\mathcal{G}},~ G \cap I_{L_{\frac{1}{2}}} \neq \emptyset\right\},\\
			&s_{L_{\frac{1}{2}}}(\ell) = s(\ell)~~\forall \ell \in I_{L_{\frac{1}{2}}}, &&
			k_{L_{\frac{1}{2}}}\left(G \cap I_{L_{\frac{1}{2}}}\right) = k(G)~~\forall G \in {\mathcal{G}} \text{ s.t. } G \cap I_{L_{\frac{1}{2}}} \neq \emptyset.
		\end{aligned}
	\end{equation*} Since each item in $L_{\frac{1}{2}}$ has size greater than $\frac{1}{2}$, the items in $L_{\frac{1}{2}}$ can be packed into $|L_{\frac{1}{2}}|\leq 2 \cdot s \left( L_{\frac{1}{2}}\right) $ bins (that is, partitioned into $|L_{\frac{1}{2}}|$ configurations) with a single item per bin. Thus, the set of items $I$ can be packed into a number of bins bounded by
	$$
	2 \cdot\max \left\{s(I\setminus L_{\frac{1}{2}}), V({\mathcal{J}}) \right\}+2 + 2 \cdot s \left( L_{\frac{1}{2}}\right) \leq
	2 \left( s(I) + V({\mathcal{J}})\right)+2 \leq 2 \cdot W.
	$$
\end{proof}
 Now, define ${\varepsilon} = \floor{\left( \ln \ln W \right)^{\frac{1}{17}}}^{-1}$. Since $W \geq c$, it holds that $0<{\varepsilon} <0.1$.
In the following we show that the running time of $\textnormal{\textsf{Gen-AFPTAS}}$ on the input $\mathcal{J}$ and ${\varepsilon}$, as defined above, is polynomial in $|\mathcal{J}|$, while the number of bins in the packing returned by the algorithm is $\textnormal{OPT}(\mathcal{J})+o(\textnormal{OPT}(\mathcal{J}))$.

By Lemma~\ref{lem:gen_afptas}, there are polynomial functions $f,g$ such that the running time of $\textnormal{\textsf{Gen-AFPTAS}} \left( \mathcal{J}, {\varepsilon} \right)$ is at most $f\left(\frac{1}{{\varepsilon}}\right) \cdot g(|\mathcal{J}|)$. We assume without loss of generality that $f$ is monotone non-decreasing. Therefore,
\begin{equation}
	\label{eq:polymain0}
	f\left(\frac{1}{{\varepsilon}}\right) \cdot g(|\mathcal{J}|) = f\left(\floor{ \left( \ln \ln W \right)^{\frac{1}{17}}}\right) \cdot g(|\mathcal{J}|) \leq f(W) \cdot g(|\mathcal{J}|) \leq f(2\cdot |{\mathcal{J}}|+c) \cdot g(|{\mathcal{J}}|).
\end{equation}
The first equality is by the definition of ${\varepsilon}$. The first inequality is because $W>c>1$ and for all $x \geq 1$ it holds that $\ln x \leq x$. The last inequality holds as $V({\mathcal{J}}), s(I)\leq |I|\leq |{\mathcal{J}}|$.
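The choice of ${\varepsilon}$ is a direct computation; the following minimal sketch takes $\ln \ln W$ as its input, since the constant $c$ (and hence $W$) is far too large to represent numerically, and the function name is illustrative.
\begin{verbatim}
import math

def choose_epsilon(lnln_W):
    # epsilon = 1 / floor((ln ln W)^(1/17)); since W >= c, we have
    # ln ln W >= 100**17, hence the floor is at least 100 (up to
    # floating-point rounding) and 0 < epsilon <= 1/100 < 0.1
    return 1.0 / math.floor(lnln_W ** (1.0 / 17.0))
\end{verbatim}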
Since $c$ is a constant, it follows from \eqref{eq:polymain0} that the running time of $\textnormal{\textsf{Gen-AFPTAS}}({\mathcal{J}},{\varepsilon})$ is polynomial in $|{\mathcal{J}}|$.

By Lemma~\ref{lem:gen_afptas}, the packing $A$ returned by $\textsf{Gen-AFPTAS} \left( \mathcal{J}, {\varepsilon} \right)$ is a packing of $\mathcal{J}$ which uses at most $ (1+130{\varepsilon})\textnormal{OPT}(\mathcal{J})+3\cdot Q^3({\varepsilon})$ bins.
Now,
\begin{equation}
	\label{eq:Q}
	\begin{aligned}
			Q({\varepsilon}) ={} & \exp({\varepsilon}^{-17}) = \exp \left( \floor{\left( \ln \ln W \right)^{\frac{1}{17}}}^{17} \right) \\ \leq{} & \exp \left( \ln \ln W \right) = \ln W \leq \ln \left( 2\cdot\textnormal{OPT}(\mathcal{J}) +c \right).
	\end{aligned}
\end{equation}
The first equality is by the definition of $Q({\varepsilon})$. The second equality is by the definition of ${\varepsilon}$. The last inequality holds by $\textnormal{OPT}({\mathcal{J}})\geq s(I)$, $\textnormal{OPT}({\mathcal{J}})\geq V({\mathcal{J}})$ and the definition of $W$. Hence, the number of bins used by $A$ is at most
\begin{equation}
	\begin{aligned}
		\label{eq:n1}
	& (1+130{\varepsilon})\textnormal{OPT}(\mathcal{J})+3\cdot Q^3({\varepsilon}) \\
		\leq& \textnormal{OPT}(\mathcal{J})+ 130 \floor{\left( \ln \ln W \right)^{\frac{1}{17}}}^{-1}\textnormal{OPT}(\mathcal{J})+3\cdot\ln^3 \left( 2\cdot\textnormal{OPT}(\mathcal{J}) +c \right)\\
		\leq & \textnormal{OPT}(\mathcal{J})+ 130\cdot \frac{ \textnormal{OPT}({\mathcal{J}})}{\left(\ln \ln\left( \frac{\textnormal{OPT}({\mathcal{J}})}{2}\right) \right)^{\frac{1}{17}} -1} +3\cdot \ln^3 \left( 2\cdot\textnormal{OPT}(\mathcal{J}) +c \right) \\ ={} & \textnormal{OPT}(\mathcal{J})+O\left(\frac{\textnormal{OPT}(\mathcal{J})}{ \left(\ln \ln \textnormal{OPT}({\mathcal{J}}) \right)^{\frac{1}{17}}}\right).
	\end{aligned}
\end{equation}
The first inequality is by the definition of ${\varepsilon}$ and \eqref{eq:Q}. The second inequality holds since $W\geq \frac{\textnormal{OPT}({\mathcal{J}})}{2}$ by Claim~\ref{clm:Weps} and $\floor{x}\geq x-1$ for all $x$.

 Overall, we showed that the algorithm returns a packing of ${\mathcal{J}}$ using $ \textnormal{OPT}(\mathcal{J})+O\left(\frac{\textnormal{OPT}(\mathcal{J})}{ \left(\ln \ln \textnormal{OPT}({\mathcal{J}}) \right)^{\frac{1}{17}}}\right)$ bins in time polynomial in $|{\mathcal{J}}|$. \qed

\section{Omitted Proofs of Section~\ref{sec:alg_pack}}
\label{sec:PackProofs}

We use the following auxiliary claim. \begin{claim}
	\label{clm:fitifit}
	For all $C \in \mathcal{H}$ and $\ell \in D_C$ it holds that $s_C(\ell) < 0.1$.
\end{claim}

\begin{proof}
	$$s_C(\ell) = \frac{s(\ell)}{1-s(C)} \leq \frac{{\varepsilon} \cdot (1-s(C))}{1-s(C)} = {\varepsilon} < 0.1.$$
	 The first equality is by \eqref{IC}. The inequality is because $\ell \in \textsf{fit}(C)$ by Definition~\ref{def:partition}; hence, it follows by \eqref{fitS}.
\end{proof}

\noindent{\bf Proof of Lemma~\ref{clm:GREEDY}:} Let $X = \textsf{tuple}(B_C), Y = \textsf{Greedy}({\mathcal{I}}_C,{\varepsilon})$, $X = (X_1, \ldots, X_r)$, $Y = (Y_1, \ldots, Y_t)$, $S = D_C \cup \bigcup_{B \in B_C} B$, $Z = X+Y$, and $Z = (Z_1, \ldots, Z_p)$. We prove the following.

\begin{enumerate}
	\item $Z$ is a partition of $S$:
Let $\ell \in S$. If $\ell \in D_C$, then there is $i \in [t]$ such that $\ell \in Y_i$; therefore, by the definition of $Z$ it holds that $\ell \in Z_i$. Otherwise, $\ell \in B$ for some $B \in B_C$, so there is $i \in [r]$ such that $\ell \in X_i$; therefore, by the definition of $Z$ it holds that $\ell \in Z_i$.
	
	\item For all $i \in [p]$ it holds that $s(Z_i) \leq 1$: If $D_C = \emptyset$ then the claim is satisfied because $s(Z_i) = s(X_i) \leq s(C) \leq 1$; the first inequality is because $X_i \in B_C$ and thus $X_i$ is allowed in $C$, and the second inequality is because $C$ is a configuration. Otherwise, if $i \leq t$ and $i \leq r$: $$s(Z_i) = s(X_i)+s(Y_i) \leq s(C)+s(Y_i) \leq s(C)+(1-s(C)) \cdot s_C(Y_i) \leq s(C)+1-s(C) = 1.$$ The first inequality is because by Definition~\ref{def:partition} it holds that $X_i$ is allowed in $C$. The second inequality is by \eqref{IC}. The third inequality is by Lemma~\ref{thm:greedy}; the conditions of Lemma~\ref{thm:greedy} are indeed satisfied by Claim~\ref{clm:fitifit}. Using similar arguments, we can show the claim for $i > t$ or $i >r$ by the definition of $Z$.
	
	\item For all $i \in [p]$ and $G \in {\mathcal{G}}$ it holds that $|G \cap Z_i| \leq k(G)$: If $|G\cap D_C| = 0$ then the claim is trivially satisfied because $X_i$ is allowed in the configuration $C$. Otherwise, if $i \leq t$ and $i \leq r$: \begin{equation*}
		\begin{aligned}
		|G \cap Z_i| \leq{} & |G \cap X_i| +|G \cap Y_i| \leq |G \cap C|+|G \cap Y_i| \leq |G \cap C|+k_C(G \cap D_C) \\ \leq{} & |G \cap C|+\left(k(G)-|G \cap C|\right) = k(G).
		\end{aligned}
	\end{equation*} The second inequality is because by Definition~\ref{def:partition} it holds that $X_i$ is allowed in $C$. The third inequality is by Lemma~\ref{thm:greedy}; the conditions of Lemma~\ref{thm:greedy} are indeed satisfied by Claim~\ref{clm:fitifit}. The fourth inequality is by \eqref{IC}. Using similar arguments, we can show the claim for $i > t$ or $i >r$ by the definition of $Z$.

\item It holds that $p \leq (1+2{\varepsilon}) \cdot |B_C|+2$: \begin{equation*}
	\begin{aligned}
	p ={} & \max\{r, t\} \leq \max\left\{|B_C|, (1+2{\varepsilon})\cdot \max\left\{ s_C(D_C) , \max_{G \in {\mathcal{G}}_{C}} \frac{|G|}{k_C(G)} \right\} +2 \right\} \\ \leq{} & \max\left\{|B_C|, (1+2{\varepsilon})\cdot \max\left\{ \frac{s(D_C)}{1-s(C)} , \max_{G \in {\mathcal{G}} \text{ s.t. } G \cap D_C \neq \emptyset} \frac{|G\cap D_C|}{k(G)-|G \cap C|} \right\} +2 \right\} \\ \leq{} & \max\left\{|B_C|, (1+2{\varepsilon})\cdot \max\left\{ \frac{(1-s(C)) \cdot |B_C|}{1-s(C)} , \max_{G \in {\mathcal{G}} \text{ s.t. } G \cap D_C \neq \emptyset} \frac{|B_C| \cdot (k(G) - |C \cap G|)}{k(G)-|G \cap C|} \right\} +2 \right\} \\={} & (1+2{\varepsilon})\cdot |B_C|+2.
	\end{aligned}
\end{equation*} The first inequality is by Lemma~\ref{thm:greedy}; the conditions of Lemma~\ref{thm:greedy} are indeed satisfied by Claim~\ref{clm:fitifit}. The second inequality is by \eqref{IC}. The third inequality is by Definition~\ref{def:partition}.
\qed
\end{enumerate}

\noindent{\bf Proof of Lemma~\ref{lem:GREEDY}:} For all $C \in \mathcal{H}$ let $X^C = \textsf{tuple}(B_C), Y^C = \textsf{Greedy}({\mathcal{I}}_C,{\varepsilon})$, $X^C = (X^C_1, \ldots, X^C_{r(C)})$, $Y^C = (Y^C_1, \ldots, Y^C_{t(C)})$, $S^C = D_C \cup \bigcup_{B \in B_C} B$, $Z^C = X^C+Y^C$, and $P_C = (Z^C_1, \ldots, Z^C_{p(C)})$. In addition, let $P = (P_1, \ldots, P_a)$ be the tuple returned by Algorithm~\ref{alg:pack}. We prove the following.

\begin{enumerate}
	\item $P$ is a partition of $I$: Let $\ell \in I$. Then, by Definition~\ref{def:partition} there is exactly one $C \in \mathcal{H}$ such that $\ell \in S^C$; then, by Lemma~\ref{clm:GREEDY} there is exactly one $i \in [p(C)]$ such that $\ell \in Z^C_i$; therefore, by Step~\ref{step:pPack} of Algorithm~\ref{alg:pack} there is exactly one $j \in [a]$ such that $\ell \in P_j$, and the claim follows.
	
	\item For all $i \in [a]$ it holds that $s(P_i) \leq 1$: By Step~\ref{step:pPack} of Algorithm~\ref{alg:pack} there are $C \in \mathcal{H}$ and $j \in [p(C)]$ such that $P_i = Z^C_j$. Therefore, $s(P_i) = s(Z^C_j) \leq 1$ by Lemma~\ref{clm:GREEDY}.
	
	\item For all $i \in [a]$ and $G \in {\mathcal{G}}$ it holds that $|G \cap P_i| \leq k(G)$: By Step~\ref{step:pPack} of Algorithm~\ref{alg:pack} there are $C \in \mathcal{H}$ and $j \in [p(C)]$ such that $P_i = Z^C_j$. Therefore, $|P_i \cap G| = |Z^C_j \cap G| \leq k(G)$ by Lemma~\ref{clm:GREEDY}.
	
	\item The size of $P$, namely $a$, is at most $(1+2{\varepsilon}) \cdot m+ 2 \cdot {\varepsilon}^{-22} \cdot Q^2({\varepsilon})$:
	%
	$$a = \sum_{C \in \mathcal{H}} p(C) \leq \sum_{C \in \mathcal{H}} \left((1+2{\varepsilon})\cdot |B_C|+2\right) \leq (1+2{\varepsilon}) \cdot m+ 2 \cdot {\varepsilon}^{-22} \cdot Q^2({\varepsilon}).$$ The first equality is by Step~\ref{step:pPack} of Algorithm~\ref{alg:pack}. The first inequality is by Lemma~\ref{clm:GREEDY}. The second inequality is because $\{B_C\}_{C \in \mathcal{H}}$ is a partition of $\{A_i~|~i \in [m]\}$ and because $|\mathcal{H}| \leq {\varepsilon}^{-22} \cdot Q^2({\varepsilon})$ by Definition~\ref{def:partition}.
	
	 \qed
\end{enumerate}

\section{Reduction and Reconstruction of the Instance}
\label{sec:reduction}

In this section we present algorithms \textsf{Reduce} and \textsf{Reconstruct}, which handle the structuring of the instance and the transformation of a solution for the structured instance into a solution for the original one. For this section, fix ${\mathcal{J}} = (I, \mathcal{G}, s,k)$ to be a BPP instance, and let ${\varepsilon} \in (0, 0.1]$.

Algorithm \textsf{Reduce} generates from a given instance ${\mathcal{J}}$ a structured instance ${\mathcal I}$ whose optimal packing requires at most the minimum number of bins required for packing ${\mathcal{J}}$ (see Section~\ref{sec:F1}). Given a packing of ${\mathcal I}$, algorithm \textsf{Reconstruct} finds a packing of the original instance ${\mathcal{J}}$ with only a slight increase in the total number of bins used (see Section~\ref{sec:RECONSTRUCTION}). The proofs of the results in this section are given in Appendix~\ref{sec:reductionProofs}.
\subsection{The Reduction}
\label{sec:F1}

Towards structuring the instance, we first classify the items and groups in ${\mathcal{J}}$.
We note that similar classifications were used in prior work (see, e.g., \cite{DW17,Jansen_et_al:2019,DKS21}).
 For all $i \in \{2,\ldots, {\varepsilon}^{-1}+1\}$ let
 \begin{equation}
	\label{Interval}
	I_i = \big\{\ell \in I~|~ s(\ell) \in \big[{\varepsilon}^{i+1}, {\varepsilon}^{i}\big)\big\}
\end{equation} be the $i$-th {\em interval} of ${\mathcal{J}}$, containing all items whose sizes lie in the interval $[{\varepsilon}^{i+1}, {\varepsilon}^{i})$; we refer to $i$ as a {\em pivot} of ${\mathcal{J}}$. Now, for $w \in \{2,\ldots,{\varepsilon}^{-1}+1\}$, we say that $w$ is a {\em minimal pivot} of ${\mathcal{J}}$ if the $w$-th interval of ${\mathcal{J}}$ has minimal total size of items among all intervals of ${\mathcal{J}}$; that is,

\begin{equation}
	\label{eq:k}
	w = \argmin_{i \in \{2, \ldots, {\varepsilon}^{-1}+1\}} s(I_i).
\end{equation}

The classification of the items depends on the selection of a pivot for ${\mathcal{J}}$. Specifically, given $w \in \{2,\ldots, {\varepsilon}^{-1}+1\}$ we classify item $\ell$ as $w$-{\em heavy} if $s(\ell) \geq \varepsilon^{w}$, $w$-{\em medium} if $s(\ell) \in [\varepsilon^{w+1},\varepsilon^{w})$, and $w$-{\em light} otherwise. Let $H_w = \{\ell \in I~|~s(\ell) \geq {\varepsilon}^{w}\}$ be the set of $w$-heavy items in ${\mathcal{J}}$.

For the classification of groups, fix a pivot $w \in \{2, \ldots, {\varepsilon}^{-1}+1\}$ of ${\mathcal{J}}$. We sort the groups in $\mathcal{G}$ in non-increasing order by the total number of $w$-heavy and $w$-medium items in the group; we then classify the first up to ${\varepsilon}^{-3w-5}$ groups in this order as the $w$-{\em large} groups, and call the remaining groups $w$-{\em small}. More formally, let $g_w(G) = |G \cap (H_w \cup I_w)|$ for all $G \in {\mathcal{G}}$, and let $n = |{\mathcal{G}}|$. Let $G_1, \ldots, G_{n}$ be a non-increasing order of ${\mathcal{G}}$ by $g_w$, where groups having the same $g_w$ value are placed in a fixed arbitrary order. Now, let $\kappa_w = \min\left\{{\varepsilon}^{-3w-5},|{\mathcal{G}}| \right\}$ and define the set of $w$-large groups as ${\mathcal{G}}_L(w) = \left\{G_i~|~i \in \left[\kappa_w \right] \right\}$ and the set of $w$-small groups as ${\mathcal{G}} \setminus {\mathcal{G}}_L(w)$.

\begin{lemma}
	\label{lem:few_large_groups}	
	For all $w \in \{2,\ldots, {\varepsilon}^{-1}+1\}$ there are at most ${\varepsilon}^{-3w-5}$ groups $G \in {\mathcal{G}}$ such that $$|G \cap (H_w\cup I_w) | \geq {\varepsilon}^{2w+4} \cdot \textnormal{OPT}({\mathcal{J}}).$$
\end{lemma}

The proof of Lemma~\ref{lem:few_large_groups} follows by noting that the total size of items in a group having at least ${\varepsilon}^{2w+4} \cdot \textnormal{OPT}({\mathcal{J}})$ items that are $w$-medium or $w$-heavy is at least ${\varepsilon}^{3w+5} \cdot \textnormal{OPT}({\mathcal{J}})$, as each such item has size at least ${\varepsilon}^{w+1}$.

By Lemma~\ref{lem:few_large_groups}, the number of $w$-heavy and $w$-medium items in a $w$-small group is at most ${\varepsilon}^{2w+4}\cdot \textnormal{OPT}(\mathcal{J})$. This is useful for the reduction, in which the matroid constraint is slightly relaxed for $w$-small groups, as described below.
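Before turning to the reduction itself, the following sketch illustrates the classification above. The names are illustrative (\texttt{sizes} maps items to their sizes, \texttt{groups} is a list of item lists, and \texttt{inv\_eps} $={\varepsilon}^{-1}$ is an integer); the computation simply mirrors the definitions of this subsection.
\begin{verbatim}
def minimal_pivot(sizes, inv_eps):
    eps = 1.0 / inv_eps
    # total size s(I_i) of each interval I_i = {l : eps^(i+1) <= s(l) < eps^i}
    total = {i: 0.0 for i in range(2, inv_eps + 2)}
    for s in sizes.values():
        for i in range(2, inv_eps + 2):
            if eps ** (i + 1) <= s < eps ** i:
                total[i] += s
    return min(total, key=total.get)  # a minimal pivot w

def classify_groups(sizes, groups, inv_eps, w):
    eps = 1.0 / inv_eps
    # H_w cup I_w is exactly the set of items of size at least eps^(w+1)
    heavy_or_medium = {l for l, s in sizes.items() if s >= eps ** (w + 1)}
    g_w = lambda G: len(set(G) & heavy_or_medium)
    order = sorted(groups, key=g_w, reverse=True)  # non-increasing by g_w
    kappa = min(int(round(eps ** -(3 * w + 5))), len(groups))
    return order[:kappa], order[kappa:]            # w-large, w-small groups
\end{verbatim}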
The $w$-large groups appear in the reduced instance with the original cardinality constraint. In contrast, for each $w$-small group we keep only the $w$-light and $w$-medium items with the original cardinality constraint of the group, whereas all $w$-heavy items from $w$-small groups are placed in a single {\em union group} with unbounded cardinality constraint. Specifically, define the {\em reduced $w$-small groups}, containing only $w$-light and $w$-medium items, as
\begin{equation}
\label{Is}
{\mathcal{G}}_S(w) = \{G \setminus H_w~|~G \in {\mathcal{G}} \setminus {\mathcal{G}}_L(w)\}.
\end{equation}
Also, let the {\em $w$-union group}, containing all $w$-heavy items from $w$-small groups, be
\begin{equation}
\label{Iu}
\Gamma_w = \{\ell \in H_w~|~\exists G \in {\mathcal{G}} \setminus {\mathcal{G}}_L(w) \text{ s.t. } \ell \in G\}.
\end{equation}

\begin{algorithm}[h]
	\caption{$\textsf{Reduce}({\varepsilon},{\mathcal{J}} = (I,{\mathcal{G}},s,k))$}
	\label{alg:reduction}
	
	Find a minimal pivot $w \in \{2,\ldots,{\varepsilon}^{-1}+1\}$ of ${\mathcal{J}}$.\label{step:pivot}
	
	Find ${\mathcal{G}}_L(w), {\mathcal{G}}_S(w), \Gamma_w$: the $w$-large groups, the reduced $w$-small groups, and the $w$-union group of ${\mathcal{J}}$.
	
	Let ${\mathcal{G}}_R = {\mathcal{G}}_L(w) \cup {\mathcal{G}}_S(w) \cup \{\Gamma_w\}$.\label{step:groups}
	
	Define the following new cardinality bounds for all $G \in {\mathcal{G}}_R$:\label{kr} \[
	k_R(G) =
	\begin{cases}
		k(G) & G \in {\mathcal{G}}_L(w)\\
		k(G') & G = G' \setminus H_w \text{ s.t. } G' \in {\mathcal{G}} \setminus {\mathcal{G}}_L(w) \\
		|\Gamma_w|+1 & G = \Gamma_w.\\
	\end{cases}
	\]
	
	Return the reduced instance $\left(I,{\mathcal{G}}_R,s,k_R\right)$. \label{Ir}
	
\end{algorithm}

Algorithm \textsf{Reduce} constructs the above groups, together with the new cardinality constraints, and preserves the initial set of items and the item sizes. The pseudocode of Algorithm \textsf{Reduce} is given in Algorithm~\ref{alg:reduction}. The first claim of Lemma~\ref{lem:reductionReconstruction} holds since the only groups containing items larger than ${\varepsilon}^2$ are the $w$-large groups and the $w$-union group. Moreover, a packing of the original instance is also a feasible packing of the reduced instance; thus, the optimum of the reduced instance can only be smaller or equal.

\subsection{The Reconstruction}
\label{sec:RECONSTRUCTION}

We now describe how a packing of the reduced instance is transformed into a packing of the original instance. For simplicity, assume that objects such as $w$ computed by algorithm $\textsf{Reduce}({\varepsilon},{\mathcal{J}})$ are known and fixed for the reconstruction of ${\mathcal{J}}$. In addition, for the remainder of this section, fix a packing $A = (A_1, \ldots, A_m)$ for ${\mathcal{I}} = \textsf{Reduce}({\varepsilon},{\mathcal{J}})$.

Recall that Step~\ref{kr} of algorithm \textsf{Reduce} relaxes the cardinality constraint for the $w$-heavy items in the set $\Gamma_w$.
The reconstruction algorithm redefines $A$ for items in $\Gamma_w$ to ensure that the matroid constraint is satisfied for these items; this may require a few extra bins. Then, $w$-medium and $w$-light items from $w$-small groups are discarded from the packing and packed separately using Algorithm \textsf{Greedy} (see Lemma~\ref{thm:greedy}). This results in a feasible packing of the original instance ${\mathcal{J}}$.
We now describe how our reconstruction algorithm resolves violations among $w$-heavy items. This is done by rearranging the packing of $w$-heavy items in $A$ using a variant of the classic {\em linear shifting} technique of~\cite{fernandez1981bin}.
Let $\beta = {\varepsilon}^{-w-2}$ be the {\em shifting parameter}, and $P_1,\ldots, P_q$ the ${\mathcal{I}}$-{\em partition} obtained by applying linear shifting to the items in $\Gamma_w$. More specifically, given the set $\Gamma_w$ in non-decreasing order of item sizes, for all $1\leq i \leq j