diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzznpmw" "b/data_all_eng_slimpj/shuffled/split2/finalzznpmw" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzznpmw" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nA rational system $A\\bfm x\\le \\bfm b$ is called {\\it totally dual integral} (TDI) if the minimum in the \nLP-duality equation \n\\begin{equation}\\label{eq:lp}\n\\max\\{\\bfm w^T\\bfm x: A\\bfm x\\le \\bfm b\\} = \\min\\{\\bfm y^T\\bfm b: \\bfm y^TA= \\bfm w^T; \\ \\bfm y\\ge \\bfm 0\\}\n\\end{equation}\nhas an integral optimal solution, for every integral vector $\\bfm w$ for which the minimum is finite. \nEdmonds and Giles \\cite{tdi} proved that total dual integrality implies primal integrality: if $A\\bfm x \n\\le \\bfm b$ is TDI and $\\bfm b$ is integral, then both programs in (\\ref{eq:lp}) have integral \noptimal solutions whenever they have finite optimum. So the model of TDI systems serves as a general\nframework for establishing min-max results in combinatorial optimization (see Schrijver \\cite{schrijver3}\nfor an comprehensive and in-depth account). As summarized by Schrijver \\cite{S}, the importance of a \nmin-max relation is twofold: first, it serves as an optimality criterion and as a good characterization \nfor the corresponding optimization problem; second, a min-max relation frequently yields an elegant \ncombinatorial theorem, and allows a geometrical representation of the corresponding problem in \nterms of a polyhedron. Many well-known results and difficult conjectures in combinatorial optimization \ncan be rephrased as saying that a certain linear system is TDI; in particular, by Lov\\'asz' Replication \nLemma \\cite{L}, a graph $G$ is perfect if and only if the system $A_{_G}\\bfm x\\le \\bfm 1$, $\\bfm x\\ge\\bfm0$ \nis TDI, where $A_G$ is the clique-vertex incidence matrix of $G$. The reader is referred to \nChudnovsky {\\em et al.} \\cite{CRST,ep} for the proof of the Strong Perfect Graph Theorem and\nto Chudnovsky {\\em et al.} \\cite{CCLSV} for recognition of perfect graphs.\n\nA rational system $A\\bfm x\\le \\bfm b$ is called {\\it box-totally dual integral} (box-TDI) \nif $A\\bfm x\\le \\bfm b$, $\\bfm l\\le \\bfm x\\le \\bfm u$ is TDI for all vectors $\\bfm l$ and $\\bfm u$, \nwhere each coordinate of $\\bfm l$ and $\\bfm u$ is either a rational number or $\\pm\\infty$. By taking \n$\\bfm l=-\\bfm \\infty$ and $\\bfm u=\\bfm \\infty$ it follows that every box-TDI system must be TDI. Cameron \nand Edmonds \\cite{cameron,cameron2} proposed to call a graph $G$ {\\it box-perfect} if the system \n$A_{_G}\\bfm x\\le \\bfm 1$, $\\bfm x\\ge\\bfm0$ is box-TDI; they also posed the problem of characterizing such graphs.\n\nWe make some preparations before presenting an equivalent definition of box-perfect graphs. \nLet $G=(V,E)$ be a graph (all graphs considered in this paper are simple unless otherwise stated). For any $X\\subseteq V$, let $G[X]$ denote the subgraph of $G$ induced by $X$. For any $v\\in V$, let $N_G(v)$ denote the set of vertices incident with $v$. Members of $N_G(v)$ are called {\\it neighbors} of $v$. By {\\it duplicating} a vertex $v$ of $G$ we obtain a new graph $G'$ constructed as follows: we first add a new vertex $v'$ to $G$, which may or may not be adjacent to $v$, and then we join $v'$ to all vertices in $N_G(v)$.\n\nAs usual, let $\\alpha(G)$ and $\\chi(G)$ denote respectively the stable number and chromatic number of $G$. 
\nLet $\\bar\\chi(G)=\\chi(\\bar G)$, which is the clique cover number of $G$.\nFor any integer $q\\ge1$, let \\medskip\\\\\n\\indent $\\alpha_q(G)=\\max\\{|X|: X\\subseteq V(G)$ with $\\chi(G[X])\\le q\\}$, and \\\\ \n\\indent $\\bar\\chi_q(G)=\\min\\{q\\bar\\chi(G-X)+|X|: X\\subseteq V(G)\\}$. \\medskip\\\\ \nNotice that $\\alpha_1=\\alpha$ and $\\bar\\chi_1=\\bar\\chi$. A graph $G$ is called {\\it $q$-perfect} if $\\alpha_q(G[X])=\\bar\\chi_q(G[X])$ holds for all $X\\subseteq V(G)$. This concept was introduced by Lov\\'asz \\cite{lovasz} as an extension of perfect graphs, since 1-perfect graphs are precisely perfect graphs. Let us call a graph {\\it totally perfect} if it is $q$-perfect for all integers $q\\ge1$. Lov\\'asz pointed out that comparability graphs, incomparability graphs, and line graphs of bipartite graphs are totally perfect. However, $S_3$ is not 2-perfect, showing that a perfect graph does not have to be $q$-perfect when $q>1$.\n\n\\begin{figure}[ht]\n\\centerline{\\includegraphics[scale=0.58]{s3.eps}}\n\\caption{Graph $S_3$ and its complement $\\bar S_3$}\n\\label{fig:s3}\n\\end{figure}\n\n\\begin{theorem}[Cameron \\cite{cameron1}] \\label{thm:cameron1} \nA graph is box-perfect if and only if every graph obtained from this graph by repeatedly duplicating vertices is totally perfect. \n\\end{theorem}\n\nThis theorem implies the following immediately. \n\n\\begin{corollary}[Cameron \\cite{cameron1}] \\label{cor:basic}\n(1) Induced subgraphs of a box-perfect graph are box-perfect. \\\\ \n\\indent (2) Duplicating vertices in a box-perfect graph results in a box-perfect graph. \\\\ \n\\indent (3) Comparability and incomparability graphs are box-perfect. \n\\end{corollary}\n\nThe next proposition contains a few other important observations made by Cameron \\cite{cameron1}. A matrix $A$ is {\\it totally unimodular} if the determinant of every square submatrix of $A$ is 0 or $\\pm1$. A $\\{0,1\\}$-matrix $A$ is {\\it balanced} if none of its submtrices is the vertex-edge incidence matrix of an odd cycle. For each graph $G$, let $B_G$ be the submatrix of $A_G$ obtained by keeping only rows that correspond to maximal cliques of $G$. Let us call $G$ {\\it totally unimodular} or {\\it balanced} if $B_G$ is totally unimodular or balanced. It is worth pointing out that bipartite graphs and their line graphs are totally unimodular, and every totally unimodular graph is balanced. In addition, as shown by Berge \\cite{berge2}, all balanced graphs are totally perfect. \nLet $\\bar S_3^+$ be obtained from the complement $\\bar S_3$ of $S_3$ by adding a new vertex $v$ and joining $v$ to all six vertices of $\\bar S_3$.\n\n\\begin{proposition}[Cameron \\cite{cameron1}] \\label{prop:Sn}\n(1) $\\bar S_3^+$ is not box-perfect. \\\\\n\\indent (2) Totally unimodular graphs are box-perfect. \\\\ \n\\indent (3) Balanced graphs do not have to be box-perfect, shown by $\\bar S_3^+$. \\\\\n\\indent (4) The complement of a box-perfect graph does not have to be box-perfect, shown by $\\bar S_3$. \\\\\n\\indent (5) Box-perfectness is not preserved under taking clique sums, shown by $S_3$.\n\\end{proposition}\n\nAs we have seen, many nice properties of perfect graphs are not satisfied by box-perfect graphs. Another property of this kind is substitution: substituting a vertex of a box-perfect graph by a box-perfect graph does not have to yield a box-perfect graph, as shown by $\\bar S_3^+$ (which is obtained by substituting a vertex of $K_2$ with $\\bar S_3$). 
To our knowledge, almost none of the \nknown summing operations that preserve perfectness carries over to box-perfectness -- this makes it extremely hard to obtain\na structural characterization of box-perfect graphs! \n\nAt this point, the only known box-perfect graphs are totally unimodular graphs, comparability graphs, incomparability graphs, and $p$-comparability graphs (where $p\ge1$ and 1-comparability graphs are precisely comparability graphs) \cite{cameron, cameron2}. \nCameron and Edmonds \cite{cameron} conjectured that every parity graph is box-perfect. In this paper we confirm this conjecture\nand identify several other classes of box-perfect graphs, including claw-free box-perfect graphs.\nIn the next section we construct a class $\cal R$ of non-box-perfect graphs, from which we characterize box-perfect split graphs. It turns out that every minimal non-box-perfect graph that we know of is contained in a graph from $\cal R$. This observation \nraises the question: is it true that a graph $G$ is box-perfect if and only if $G$ does not contain any graph in $\cal R$ as an induced subgraph? \n\nIn addition to the difficulty of obtaining a structural description, the other difficulty with the study of box-perfect graphs lies in the lack of a\nproper tool for establishing box-perfectness. In Section 3 we introduce a so-called ESP property, which is sufficient \nfor a graph to be box-perfect. Although recognizing box-perfectness is an optimization problem, our approach based on \nthe ESP property is of a transparent combinatorial nature and hence is fairly easy to work with. For convenience,\nwe call a graph ESP if it has the aforementioned ESP property. In the remainder of this paper, we shall establish several \nclasses of box-perfect graphs by showing that they are actually ESP, including all classes obtained by Cameron \n\cite{cameron, cameron1, cameron2}. We strongly believe that the ESP property is exactly the tool one needs for the study of box-perfect graphs. \n\n\begin{conjecture}\nA perfect graph is box-perfect if and only if it is ESP if and only if it contains none of the members of $\cal R$ as an induced subgraph. \n\end{conjecture}\n\nWe close this section by mentioning a result on the complexity of recognizing box-perfect graphs. \n\n\begin{theorem}[Cook \cite{cook}]\nThe class of box-perfect graphs is in co-NP.\n\end{theorem}\n\n\section{A class of non-box-perfect graphs}\n\nLet $S_n$ be the graph obtained from cycle $v_1v_2...v_{2n}v_1$ by adding edges $v_iv_j$ for all distinct even $i,j$. It was proved in \cite{cameron1} that $S_{2n+1}$ is not box-perfect for all $n\ge 1$. In this section we construct a class of non-box-perfect graphs, which includes $\bar S_3^+$ and $S_{2n+1}$ ($n\ge 1$). We will use this result to characterize box-perfect split graphs (a graph is {\it split} if its vertex set can be partitioned into a clique and a stable set).\n\nLet $G=(U,V,E)$ be a bipartite graph, where $U=\{u_1,...,u_m\}$ and $V=\{v_1,...,v_n\}$. The {\it biadjacency matrix} of $G$ is the $\{0,1\}$-matrix $M$ of dimension $m\times n$ such that $M_{i,j}=1$ if and only if $u_iv_j\in E$. Let $\cal Q$ be the set of bipartite graphs $G$ whose biadjacency matrix $M$ is not totally unimodular but all proper submatrices of $M$ are. The following is a classical result of Camion.\n\n\begin{lemma}[Camion \cite{camion}]\label{lem:camion}\nEvery graph $G=(U,V,E)$ in $\cal Q$ is Eulerian. 
In addition, $G$ satisfies $|U|=|V|$ and $|E|\\equiv 2 \\pmod 4$.\n\\end{lemma}\n\nLet $\\cal R$ be the class of graphs constructed as follows. \nTake a bipartite graph $G'=(U,V,E')\\in \\cal Q$ and a graph $G''=(V,E'')$ such that $N_{G'}(u)$ is a clique of $G''$ \nfor all $u\\in U$. \nLet $G=(U\\cup V, E'\\cup E'')$. \nIf there exists $u\\in U$ with $N_{G'}(u)=V$ then $G-u$ belongs to $\\cal R$; otherwise $G$ belongs to $\\cal R$.\n\n\\noindent{\\bf Examples.} For each odd $n\\ge3$, $S_n$ belongs to $\\cal R$ since $S_n$ can be constructed from a cycle $G'=C_{2n}\\in \\cal Q$ and a complete graph $G''=K_n$, where no vertex is deleted in the construction. Graph $\\bar S_3^+$ also belongs to $\\cal R$. In this case a vertex is deleted in the construction, see Figure \\ref{fig:s3p}.\n\n\\begin{figure}[ht]\n\\centerline{\\includegraphics[scale=0.55]{s3p.eps}}\n\\caption{Graph $\\bar S_3^+$ is constructed from a bipartite graph in $\\cal Q$ and $K_4$}\n\\label{fig:s3p}\n\\end{figure}\n\n\\begin{lemma}\\label{lem:O}\nNo graph in $\\cal R$ is box-perfect.\n\\end{lemma}\n\n\\vspace{-2mm}\n\\noindent{\\bf Proof.} Let $G\\in\\cal R$ be constructed from $G'=(U,V,E')\\in \\cal Q$ and $G''=(V,E'')$. Let $A_G$ and $B_G$ be the clique and maximal clique matrices of $G$. Then $A_G$ can be expressed as $A_G=[^{B_G}_{\\ C}]$.\nLet $M$ be the biadjacency matrix of $G'$ and let $n:=|U|\\ (=|V|)$. Since every $u\\in U$ belongs to exactly one maximal clique of $G$, the column of $B_G$ that corresponds to $u$ has precisely one nonzero entry. If no vertex was deleted in the construction of $G$ then $B_G$ can be expressed as $B_G=[^{M\\ I_n}_{N\\ \\ \\mathbf 0}]$, \nwhere the first $n$ columns are indexed by $V$ and the last $n$ columns are indexed by $U$. If a vertex $u_0\\in U$ was deleted in the construction of $G$, then $G''$ has to be a complete graph. In this case, since $U$ does not have a second vertex adjacent to all vertices in $V$, $B_G$ can be expressed as $[M,J]$, where $J_{n\\times (n-1)}=[^{I_{n-1}}_{\\ \\ \\mathbf 0}]$ \nand the last row of $M$, which corresponds to $u_0$, is a vector of all ones. \nBy Lemma \\ref{lem:camion}, all entries of $\\bfm1^TM$ and $M\\bfm1$ are even, and $\\bfm1^TM\\bfm1=4m+2$, for an integer $m>0$. We consider the dual programs (with $A=A_G$)\n\\begin{equation}\\label{eq:box}\n\\max\\{\\bfm w^T\\bfm x: A\\bfm x\\le \\bfm1; \\bfm x\\ge \\bfm l\\}=\\min\\{\\bfm y^T\\bfm1- \\bfm z^T\\bfm l: \\bfm y^TA-\\bfm z^T= \\bfm w^T; \\bfm y, \\bfm z\\ge\\bfm0\\}.\n\\end{equation}\n\nSuppose no vertex was deleted in the construction of $G$. Let $p>2m+1$ be a prime and let \n$$\\bfm w= \\begin{bmatrix}\\frac{1}{2}M^T\\bfm 1\\\\ \\bfm 0\\end{bmatrix}, \\ \\ \n\\bfm l=\\begin{bmatrix}\\bfm 0\\\\ \\bfm1-\\frac{1}{2p}M\\bfm1\\end{bmatrix}, \\ \\ \n\\bfm x=\\begin{bmatrix}\\frac{1}{2p}\\bfm1\\\\ \\bfm1 - \\frac{1}{2p} M \\bfm1\\end{bmatrix}, \\ \\ \n\\bfm y=\\begin{bmatrix}\\frac{1}{2}\\bfm 1\\\\ \\bfm0\\\\ \\bfm 0\\end{bmatrix}, \\ \\ \n\\bfm z=\\begin{bmatrix}\\bfm0\\\\ \\frac{1}{2}\\bfm 1\\end{bmatrix}.$$ \nThen it is routine to verify that $\\bfm w$ is integral, $\\bfm l\\ge\\bfm0$, and $\\bfm x, (\\bfm y, \\bfm z)$ are feasible solutions to (\\ref{eq:box}). Moreover $\\bfm w^T \\bfm x =\\frac{2m+1}{2p}= \\bfm y^T\\bfm1- \\bfm z^T\\bfm l$, so $\\bfm x, (\\bfm y, \\bfm z)$ are optimal solutions. Since the optimal value is not $\\frac{1}{p}$-integral, while $\\bfm l$ is, it follows that the dual does not have an integral optimal solution and so $G$ is not box-perfect. 
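(To see the common objective value directly, using only the quantities defined above: since $\bfm1^TM\bfm1=4m+2$, we have $\bfm w^T\bfm x=\frac{1}{2}\bfm1^TM\cdot\frac{1}{2p}\bfm1=\frac{2m+1}{2p}$, while $\bfm y^T\bfm1-\bfm z^T\bfm l=\frac{n}{2}-\frac{1}{2}\bfm1^T(\bfm1-\frac{1}{2p}M\bfm1)=\frac{2m+1}{2p}$.) 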
Next, suppose that a vertex was deleted in the construction of $G$. The proof for this case is almost identical to the proof for the previous case. The only difference is that $B_G$ has $2n-1$ columns, instead of $2n$ columns. Thus we need to truncate the corresponding vectors. To be precise, let \n$$\bfm w= \begin{bmatrix}\frac{1}{2}M^T\bfm 1\\ \bfm 0\end{bmatrix}, \ \ \n\bfm l=\begin{bmatrix}\bfm 0\\ J^T(\bfm1-\frac{1}{n}M\bfm1)\end{bmatrix}, \ \ \n\bfm x=\begin{bmatrix}\frac{1}{n}\bfm1\\ J^T(\bfm1 - \frac{1}{n} M \bfm1)\end{bmatrix}, \ \ \n\bfm y=\begin{bmatrix}\frac{1}{2}\bfm 1\\ \bfm0\end{bmatrix}, \ \ \n\bfm z=\begin{bmatrix}\bfm0\\ \frac{1}{2}\bfm 1\end{bmatrix}.$$\nUsing the fact that the last row of $M$ is $\bfm 1^T$ we deduce that $\bfm x$ and $(\bfm y, \bfm z)$ are feasible solutions, and $\bfm w^T \bfm x=\frac{2m+1}{n}= \bfm y^T\bfm 1- \bfm z^T \bfm l$, which implies that both solutions are optimal. Furthermore, since every entry of $M\bfm1$ is even and its last entry is $n$, we deduce that $n$ is even and thus $\bfm l$ is $\frac{1}{n\/2}$-integral. However, the optimal value $\frac{2m+1}{n}$ is not $\frac{1}{n\/2}$-integral, so $G$ is not box-perfect, which proves the lemma. \hfill \rule{4pt}{7pt}\n\nTo identify all minimally non-box-perfect split graphs, we consider the following subsets of $\cal Q$. Let $\mathcal Q_1$ consist of all bipartite graphs $G=(U,V,E)\in \cal Q$ such that $U$ has a vertex adjacent to all vertices of $V$. Let $\mathcal Q_2$ consist of all bipartite graphs $G=(U,V,E)\in \mathcal Q\backslash \mathcal Q_1$ such that the graph obtained from $G$ by adding a vertex and making it adjacent to all vertices of $V$ does not contain any graph in $\mathcal Q_1$ as an induced subgraph. Let $\cal S$ consist of all graphs in $\cal R$ that are constructed from a bipartite graph $G'\in \mathcal Q_1\cup \mathcal Q_2$ and a complete graph $G''$. It is clear that all members of $\cal S$ are split graphs. Moreover, $\bar S_3^+$ and $S_{2n+1}$ ($n\ge1$) belong to $\cal S$.\n\n\begin{theorem} \label{thm:split}\nThe following are equivalent for any split graph $G$. \\ \n\indent (1) $G$ is box-perfect; \\ \n\indent (2) no graph in $\cal S$ is an induced subgraph of $G$; \\ \n\indent (3) $G$ is totally unimodular.\n\end{theorem}\n\n\vspace{-2mm}\n\noindent{\bf Proof.} Implication (3) $\Rightarrow$ (1) follows from Proposition \ref{prop:Sn}(2) and implication (1) $\Rightarrow$ (2) follows from Lemma \ref{lem:O} and Corollary \ref{cor:basic}(1). To prove (2) $\Rightarrow$ (3), let $G=(U,V,E)$ be a split graph, where $U$ is a stable set and $V$ is a clique. Let $G''=G[V]$ and $G'=G\backslash E(G'')$. Let $G'''$ be the bipartite graph obtained from $G'$ by adding a vertex $w$ adjacent to all vertices in $V$. Let $M$ be the biadjacency matrix of $G'''$. \n\nWe first prove that $M$ is totally unimodular. Suppose otherwise. Then $G'''$ has an induced subgraph $H'\in \cal Q$. Let us choose $H'$ so that $H'$ contains the new vertex $w$ whenever it is possible. Consequently, $H'\in\mathcal Q_1\cup\mathcal Q_2$. Let $H$ be constructed from $H'$ and a complete graph $H''$. Then $H\in\cal S$ and, by the construction of $G$, $G$ contains $H$ as an induced subgraph. This contradicts (2) and thus $M$ has to be totally unimodular. \n\nLet $N$ be the biadjacency matrix of $G'$. Then $B_G=[N,I]$ or $[^{N\ I}_{\,\mathbf 1\ \,\mathbf 0}]$, \ndepending on whether $V$ is a maximal clique of $G$. 
Notice that $M=[^N_{\, \mathbf 1}]$. So $B_G$, and thus $G$, is totally unimodular. \hfill \rule{4pt}{7pt}\n\nThis theorem shows that all minimally non-box-perfect split graphs are contained in $\cal S$. In fact, $\cal S$ consists of precisely such graphs.\n\n\begin{theorem}\label{thm:minsplit}\nA split graph $G$ belongs to $\cal S$ if and only if $G$ is not box-perfect but all its induced subgraphs are.\n\end{theorem}\n\n\vspace{-2mm}\n\noindent{\bf Proof.} The backward implication follows immediately from Theorem \ref{thm:split}. To prove the forward implication, let $G\in\cal S$. By Lemma \ref{lem:O}, we only need to show that $G-w$ is box-perfect for all $w\in V(G)$. Suppose $G$ is constructed from a bipartite graph $G'=(U,V,E')\in \mathcal Q_1\cup \mathcal Q_2$ and a complete graph $G''=(V,E'')$. Let $M$ be the biadjacency matrix of $G'$ and let $n:=|U|=|V|$. Observe that if $G'\in\mathcal Q_1$ then $B_G=[N,J]$, where $N=M$ and $J=[^{I_{n-1}}_{\ \ \mathbf 0}]$; if $G'\in \mathcal Q_2$ then $B_G=[N,J]$, where $N=[^M_{\, \mathbf 1}]$ and $J=[^{I_n}_{\, \mathbf 0}]$.\n\nNow it is straightforward to verify that, for each $u\in U$, $B_{G-u}=[N',J']$ is obtained from $B_G$ by deleting the row and the column indexed by $u$; for each $v\in V$, $B_{G-v}=[N',J']$ is obtained from $B_G$ by deleting the column indexed by $v$ and also possibly the last row. In both cases, $N'$ is a proper submatrix of $N$. This implies that $N'$ is totally unimodular and thus so is $[N',J']$. Consequently, $G-w$ is box-perfect (totally unimodular) for all $w\in V(G)$, which proves the theorem. \hfill \rule{4pt}{7pt}\n\nAs we observed earlier, $\bar S_3^+$ and $S_{2n+1}$ $(n\ge 1)$ belong to $\cal S$. Thus these graphs are minimally non-box-perfect. We point out that, in addition to graphs in $\cal S$, other minimally non-box-perfect graphs can also be obtained using Lemma \ref{lem:O}. For instance, the graph illustrated in Figure \ref{fig:newnbp} is constructed from $G'=C_{10}$ and $G''=C_5+e$. By Lemma \ref{lem:O}, this graph $G$ is not box-perfect. However, $G$ is not minimally non-box-perfect since $H=G-\{9,0\}$ is not box-perfect, which is certified by vectors $\bfm w^T=(1, 1, 1, 1, 1, 0, 0, 0)$, $\bfm l^T=(0,0,0,0,0,\frac{1}{2},\frac{1}{2},\frac{1}{2})$, $\bfm x^T=(\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{3}{4},\frac{1}{2},\frac{1}{2},\frac{1}{2})$, $\bfm y^T=(0,\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2})$, and $\bfm z^T=(0,0,0,0,0,\frac{1}{2},\frac{1}{2},\frac{1}{2})$, where the first row of $B_H$ is the triangle 123.\nIt can be shown that $H$ is in fact minimally non-box-perfect because $H-x$ is totally unimodular for $x=1,2,3,4,5,6,7$, and $H-8$ has the ESP property defined in the next section, which implies box-perfectness.\n\n\begin{figure}[ht]\n\centerline{\includegraphics[scale=0.5]{mingraph.eps}} \n\caption{A new non-box-perfect graph $G$}\n\label{fig:newnbp}\n\end{figure}\n\n\section{ESP graphs}\n\nIn this section we introduce a so-called ESP property, which is sufficient for a graph to be box-perfect. We shall\nuse this combinatorial property to identify several new classes of box-perfect graphs. We begin with a few lemmas.\n\n\begin{lemma}[Chen, Ding and Zang \cite{cdz}] \label{lem:cdz}\nSuppose $\bfm a_1$ and $\bfm a_2$ are rational vectors with $\bfm a_1 \ge \bfm a_2$, and $b_1$ and $b_2$ are rational numbers with $b_1 \le b_2$. 
Then the system $A\\bfm x\\le \\bfm b$, $\\bfm a^T_1\\bfm x\\le b_1$, $\\bfm a^T_2\\bfm x \\le b_2$, $\\bfm x \\ge \\bfm 0$ is box-TDI if and only if the system $A\\bfm x\\le \\bfm b$, $\\bfm a^T_1\\bfm x\\le b_1$, $\\bfm x \\ge \\bfm 0$ is box-TDI.\n\\end{lemma}\n\n\\begin{lemma}[Cameron \\cite{cameron1}] \\label{lem:upper}\nThe system $A\\bfm x\\le \\bfm b$ is box-TDI if and only if the system $A\\bfm x\\le \\bfm b$, $\\bfm x\\le \\bfm u$ is TDI, for all vectors $\\bfm u$, where each coordinate of $\\bfm u$ is either a rational number or $+\\infty$.\n\\end{lemma}\n\nThe next two lemmas are reformulations of Theorem 22.7 and Theorem 22.13 of Schrijver \\cite{schrijver}. \n\n\\begin{lemma}[Schrijver \\cite{schrijver}] \\label{lem:inf}\nSuppose the system $A\\bfm x\\le \\bfm b$, $x_1\\le u$ is TDI for all rational numbers $u$, where $x_1$ is the first coordinate of $\\bfm x$. Then $A\\bfm x\\le \\bfm b$ is TDI.\n\\end{lemma}\n\n\\begin{lemma}[Schrijver \\cite{schrijver}] \\label{lem:int}\nA rational system $A\\bfm x\\le \\bfm b$, $\\bfm x \\ge \\bfm 0$ is TDI if and only if $\\min\\{\\bfm y^T\\bfm b: \\bfm y^TA\\ge \\bfm w^T$\\!, $\\bfm y\\ge\\bfm 0$ {\\rm is half-integral}\\} is finite and is attained by an integral $\\bfm y$, for each integral vector $\\bfm w$ for which $\\min\\{\\bfm y^T\\bfm b: \\bfm y^TA\\ge \\bfm w^T$\\!, $\\bfm y\\ge\\bfm 0\\}$ is finite.\n\\end{lemma}\n\nThe next are two easy corollaries. \n\n\\begin{lemma} \\label{lem:A2B}\nA graph $G$ is box-perfect if and only if the system $B_{_G}\\bfm x\\le\\bfm 1$, $\\bfm 0\\le \\bfm x\\le \\bfm u$ is TDI for all rational vectors $\\bfm u\\ge\\bfm0$.\n\\end{lemma}\n\n\\vspace{-2mm}\n\\noindent{\\bf Proof.} The forward implication follows immediately from the definition of box-TDI and Lemma \\ref{lem:cdz}. Conversely, Lemma \\ref{lem:upper} and Lemma \\ref{lem:inf} imply that $B_{_G}\\bfm x\\le\\bfm 1$, $\\bfm x\\ge \\bfm 0$ is box-TDI. Then the result follows from Lemma \\ref{lem:cdz}. \\hfill \\rule{4pt}{7pt}\n\n\\begin{lemma} \\label{lem:halfi}\nA graph $G$ is box-perfect if and only if for all rational $\\bfm u\\ge\\bfm 0$ and integral $\\bfm w\\ge\\bfm 0$, \\medskip\\\\ \n\\makebox[30.5mm]{} $\\min\\{\\bfm y^T\\bfm 1 + \\bfm z^T\\bfm u|\\ \\bfm y^TB_{_G} + \\bfm z^T \\ge 2\\bfm w^T; \\bfm y, \\bfm z \\ge\\bfm0$ {\\rm integral}\\} \\\\ \n\\makebox[24mm]{} $\\ge 2\\min\\{\\bfm y^T\\bfm 1 + \\bfm z^T\\bfm u|\\ \\bfm y^TB_{_G} + \\bfm z^T \\ge \\bfm w^T; \\bfm y, \\bfm z \\ge\\bfm0$ {\\rm integral}\\}. \\hfill $(3.1)$\n\\end{lemma}\n\n\\vspace{-2mm}\n\\noindent {\\bf Proof.} Observe that, for all vectors $\\bfm u\\ge\\bfm 0$ and $\\bfm w$, the three programs \\smallskip\\\\ \n\\indent\\indent $\\min\\{\\bfm y^T\\bfm 1 + \\bfm z^T\\bfm u|\\ \\bfm y^TB_{_G} + \\bfm z^T \\ge \\bfm w^T; \\bfm y, \\bfm z \\ge\\bfm0\\}$ \\\\ \n\\indent\\indent $\\min\\{\\bfm y^T\\bfm 1 + \\bfm z^T\\bfm u|\\ \\bfm y^TB_{_G} + \\bfm z^T \\ge \\bfm w^T; \\bfm y, \\bfm z \\ge\\bfm0$ half-integral\\}\\\\ \n\\indent\\indent $\\min\\{\\bfm y^T\\bfm 1 + \\bfm z^T\\bfm u|\\ \\bfm y^TB_{_G} + \\bfm z^T \\ge \\bfm w^T; \\bfm y, \\bfm z \\ge\\bfm0$ integral\\}\\smallskip\\\\ \nare finite. Moreover, replacing $\\bfm w$ by $\\bfm w_+$ does not change the minimum values of these programs, where $\\bfm w_+$ is obtained from $\\bfm w$ by turning its negative coordinates into zero. Therefore, the result follows immediately from Lemma \\ref{lem:A2B} and Lemma \\ref{lem:int}. \\hfill \\rule{4pt}{7pt}\n\nLet $G=(V,E)$ be a graph. 
For any multiset $\\Lambda$ of cliques of $G$ and any $v\\in V$, let $d_{\\Lambda}(v)$ denote the number of members of $\\Lambda$ that contain $v$. We call $G$ {\\it equitably subpatitionable} ({\\it ESP}) if for every set $\\Lambda$ of maximal cliques of $G$ there exist two multisets $\\Lambda_1$ and $\\Lambda_2$ of cliques of $G$ (which are not necessarily members of $\\Lambda$) such that \\smallskip\\\\ \n\\indent (i) \\ $|\\Lambda_1| + |\\Lambda_2| \\le |\\Lambda|$; \\\\ \n\\indent (ii) \\ $d_{\\Lambda_1}(v)+ d_{\\Lambda_2}(v)\\ge d_{\\Lambda}(v)$, for all $v\\in V$; and \\\\ \n\\indent (iii) \\ $\\min\\{d_{\\Lambda_1}(v),d_{\\Lambda_2}(v)\\}\\ge \\lfloor d_{\\Lambda}(v)\/2\\rfloor$, for all $v\\in V$. \\smallskip\\\\ We call $(\\Lambda_1,\\Lambda_2)$ an {\\it equitable subpartition} of $\\Lambda$, and refer to the above (i), (ii), and (iii)\nas {\\em ESP property}. Note that (i) is equivalent to $|\\Lambda_1| + |\\Lambda_2| = |\\Lambda|$ since we may include empty cliques in $\\Lambda_1$ and $\\Lambda_2$. Similarly, (ii) is equivalent to $d_{\\Lambda_1}(v)+ d_{\\Lambda_2}(v)= d_{\\Lambda}(v)$ for all $v$, since cliques in $\\Lambda_1,\\Lambda_2$ can be replaced by smaller ones. Finally, it is also easy to see that in an ESP graph every multiset $\\Lambda$ of cliques admits an equitable subpartition. We will use these facts without further explanation. \n\n\\begin{theorem}\\label{thm:esp}\nEvery ESP graph $G=(V,E)$ is box-perfect.\n\\end{theorem}\n\n\\vspace{-2mm}\n\\noindent{\\bf Proof.} By Lemma \\ref{lem:halfi} we only need to show that inequality (3.1) holds for all rational $\\bfm u\\ge \\bfm0$ and all integral $\\bfm w\\ge \\bfm 0$. Let $(\\bfm y^T, \\bfm z^T)$ be an optimal solution of the first minimum in (3.1). Let $\\cal C$ be the set of maximal cliques of $G$ and let $\\cal D$ be the multiset of members of $\\cal C$ such that each $C\\in\\cal C$ appears in $\\cal D$ exactly $y_C$ times. Let $\\Lambda$ be the set of $C\\in\\cal C$ such that $y_C$ is odd. Since $G$ is ESP, $\\Lambda$ admits a equitable subpartition $(\\Lambda_1,\\Lambda_2)$. Since every clique can be extended into a maximal clique, we may assume without loss of generality that members of $\\Lambda_1$ and $\\Lambda_2$ are all in $\\cal C$. Let $\\mathcal D_0$ be the multiset of members of $\\cal C$ such that each $C\\in\\cal C$ appears $\\lfloor y_C\/2\\rfloor$ times. It follows that $\\mathcal D=\\mathcal D_0\\uplus \\mathcal D_0\\uplus \\Lambda$, where $\\uplus$ stands for multiset sum. For $i=1,2$, let $\\mathcal D_i=\\mathcal D_0\\uplus \\Lambda_i$. \nWe deduce from (i) that \n\n\\noindent(1) \\ $|\\mathcal D_1| + |\\mathcal D_2|\\le |\\mathcal D|$.\n\nLet $\\bfm p = \\bfm y^TB_{_G} + \\bfm z^T - 2\\bfm w^T$ and let $v\\in V$. Without loss of generality we assume \n\n\\noindent (2) \\ $d_{\\mathcal D_1}(v)\\ge d_{\\mathcal D_2}(v)$ \\ and \\ $\\bfm p_v \\bfm z_v=0$.\n\nSince $d_{\\mathcal D}(v)=2d_{\\mathcal D_0}(v)+d_{\\Lambda}(v)$, we deduce from (ii-iii) that $d_{\\mathcal D_1}(v)+ d_{\\mathcal D_2}(v) \\ge d_{\\mathcal D}(v)$ and $d_{\\mathcal D_i}(v) = d_{\\mathcal D_0}(v)+d_{\\Lambda_i}(v) \\ge \\lfloor d_{\\mathcal D}(v)\/2\\rfloor$ ($i=1,2$). Thus we conclude from (2) that\n\n\\noindent (3) \\ $d_{\\mathcal D_1}(v) \\ge \\lceil d_{\\mathcal D}(v)\/2\\rceil$ \\ and \\ $d_{\\mathcal D_2}(v) \\ge \\lfloor d_{\\mathcal D}(v)\/2\\rfloor$. \n\nBy the definition of $\\cal D$ we have $d_{\\cal D}(v)=\\bfm y^T B_v$, where $B_v$ is the column of $B_G$ indexed by $v$. 
So \n\n\noindent(4) \ $d_{\cal D}(v)+ \bfm z_v = \bfm p_v + 2\bfm w_v \ge 2\bfm w_v$. \n\nSince $\bfm w_v$ is an integer, we deduce that \n\n\noindent(5) \ $\bfm w_v\le \lfloor (d_{\cal D}(v)+ \bfm z_v)\/2\rfloor$.\n\nSetting $\bfm z_{1v}= \lfloor \bfm z_v\/2\rfloor$ and $\bfm z_{2v}= \lceil \bfm z_v\/2\rceil$, we have \n\n\noindent(6) \ $\bfm z_{1v}\bfm u_v + \bfm z_{2v} \bfm u_v= \bfm z_v\bfm u_v$. \n\nWe further claim that \n\n\noindent(7) \ $d_{\mathcal D_i}(v) + \bfm z_{iv}\ge \bfm w_v$, for $i=1,2$. \n\nTo see (7), recall $\bfm p_v \bfm z_v=0$ from (2). \nIf $d_{\cal D}(v)$ is even, we deduce from (4) that $\bfm z_v$ is even, which implies, by (3-4), that $d_{\mathcal D_i}(v) + \bfm z_{iv}\ge \frac{1}{2} (d_{\mathcal D}(v) + \bfm z_v) \ge \bfm w_v$. \nSo we assume that $d_{\cal D}(v)$ is odd. \nIf $\bfm z_v=0$ then, by (3) and (5), $d_{\mathcal D_i}(v) + \bfm z_{iv} = d_{\mathcal D_i}(v) \ge \lfloor d_{\mathcal D}(v)\/2\rfloor \ge \bfm w_v$. \nElse, by (2) and (4), $\bfm z_v$ is odd. Thus $d_{\mathcal D_i}(v) + \bfm z_{iv} \ge \frac{1}{2}(d_{\mathcal D}(v)\pm1)+\frac{1}{2}(\bfm z_v\mp1) = \frac{1}{2}(d_{\mathcal D}(v) + \bfm z_v)\ge \bfm w_v$, because of (3), (5), and the definition of $\bfm z_{iv}$. So (7) holds.\n\nFor $i=1,2$, let $\bfm z_i=(\bfm z_{iv}: v\in V)$ and $\bfm y_i\in\mathbb Z_+^{\cal C}$ be the multiplicity function of $\mathcal D_i$. It follows from (7) that $\bfm y_i^TB_{_G} + \bfm z_i^T \ge \bfm w^T$, which means that both $(\bfm y_1,\bfm z_1)$ and $(\bfm y_2,\bfm z_2)$ are feasible solutions of the second program in (3.1). From (1) and (6) we also conclude that $\bfm y_i^T\bfm 1 + \bfm z_i^T\bfm u \le (\bfm y^T\bfm 1 + \bfm z^T\bfm u)\/2$ holds for at least one $i\in\{1,2\}$. Hence inequality (3.1) holds, which proves the theorem. \hfill \rule{4pt}{7pt}\n\nFor a perfect graph $G$, being ESP can be characterized as follows. Let $\mathbb Z_+$ denote the set of nonnegative integers. For any $d\in \mathbb Z_+^{V(G)}$, let $G^d$ denote the graph obtained from $G$ by substituting each vertex $v$ with a stable set of size $d(v)$. Note that $v$ is deleted when $d(v)=0$. Let $c_{_G} =\bfm 1^TB_G$. In other words, for each $v\in V(G)$, $c_{_G}(v)$ is the number of maximal cliques of $G$ that contain $v$. \n\n\begin{theorem}\label{thm:esp1}\nLet $G$ be perfect. Then $G$ is ESP if and only if for every $d\in \mathbb Z_+^{V(G)}$ with $d\le c_{_G}$ there exists $d'\in \mathbb Z_+^{V(G)}$ such that $\lfloor d\/2\rfloor \le d'\le \lceil d\/2\rceil$ and $\alpha(G^{d'}) + \alpha(G^{d-d'}) \le \alpha(G^d)$.\n\end{theorem}\n\n\vspace{-2mm}\n\noindent{\bf Proof.} To prove the forward implication, let $G$ be ESP and let $d\in \mathbb Z_+^{V(G)}$. Since $G^d$ is perfect, its vertex set can be partitioned into $\alpha(G^d)$ cliques. These cliques naturally correspond to a multiset $\Lambda$ of $\alpha(G^d)$ cliques of $G$. Note that $|\Lambda|=\alpha(G^d)$ and $d_\Lambda = d$. Since $G$ is ESP, $\Lambda$ admits an equitable subpartition $(\Lambda_1,\Lambda_2)$. By deleting vertices from cliques in $\Lambda_1$ and $\Lambda_2$ we can obtain multisets $\Lambda_1^*$ and $\Lambda_2^*$ of cliques of $G$ such that $|\Lambda_1^*| + |\Lambda_2^*|\le |\Lambda_1| + |\Lambda_2|$, $d_{\Lambda_1^*} + d_{\Lambda_2^*} =d$, and $\min\{d_{\Lambda_1^*}, d_{\Lambda_2^*}\}\ge \lfloor d\/2\rfloor$. Let $d'=d_{\Lambda_1^*}$. 
Then $\\lfloor d\/2\\rfloor \\le d'\\le \\lceil d\/2\\rceil$ and \\smallskip\\\\ \n\\indent\\indent $\\alpha(G^{d'}) + \\alpha(G^{d-d'}) \\le \\alpha(G^{d_{\\Lambda_1}}) + \\alpha(G^{d_{\\Lambda_2}}) \\le |\\Lambda_1| + |\\Lambda_2| \\le |\\Lambda| = \\alpha(G^d)$, \\smallskip\\\\ \nwhich proves the forward implication.\n\nTo prove the backward implication, let $\\Lambda$ be a set of maximal cliques of $G$. Then $d:=d_{\\Lambda}\\le c_{_G}$ and thus there exists $d'$ as stated in the theorem. Let $d_1=d'$ and $d_2=d-d'$. For $i=1,2$, vertices of $G^{d_i}$ can be partitioned into $\\alpha(G^{d_i})$ cliques, and these cliques correspond to a multiset $\\Lambda_i$ of $\\alpha(G^{d_i})$ cliques of $G$. Note that $d_{\\Lambda_i} =d_i$. Thus $(\\Lambda_1,\\Lambda_2)$ is an equitable subpartition of $\\Lambda$, which proves the theorem. \\hfill \\rule{4pt}{7pt}\n\n\nWe first remark that $\\alpha(G^d)$ is exactly the maximum of $\\sum_{v\\in S} d(v)$ over all stable sets $S$ of $G$. Sometimes this interpretation is more convenient. We also remark that we do not know a box-perfect graph that is not ESP. It seems reasonable to conjecture that no such a graph exists. \n\n\\section{Known box-perfect graphs}\n\nCameron \\cite{cameron1} identified a few classes of box-perfect graphs. In this section we prove that they are in fact ESP graphs. Our results could be stronger than the results of Cameron if ESP and box-perfect are not equivalent. But the main reason for establishing our results is for future applications. We envision that more ESP graphs (possibly all box-perfect graphs) can be constructed from basic ESP graphs. Therefore, it is important to make sure that all known box-perfect graphs are ESP. \n\n\\subsection{Totally unimodular graphs}\n\nIt is well known (see Theorem 19.3 of \\cite{schrijver}) that in a totally unimodular matrix, each set of rows can be partitioned so that the sum of one part minus the sum of the other part is a $\\{0,\\pm1\\}$-vector. If $G$ is totally unimodular then $B_G$ has this partition property, which implies immediately that $G$ satisfies the definition of ESP graphs. Thus we have the following.\n\n\\begin{theorem}\nTotally unimodular graphs are ESP.\n\\end{theorem}\n\nWe point out that totally unimodular graphs include graphs like interval graphs, bipartite graphs, and block graphs (every block is a complete graph).\n\n\\subsection{Incomparability graphs}\n\n\\begin{theorem}\\label{thm:incomp}\nEvery incomparability graph $G$ is ESP.\n\\end{theorem}\n\n\\vspace{-2mm}\n\\noindent{\\bf Proof.} Since $G$ is perfect, we may apply Theorem \\ref{thm:esp1}. \nLet $d\\in\\mathbb Z_+^{V(G)}$. Note that $G^d$ is again an incomparability graph. In fact, let $P$ be a poset such that $G$ is the incomparability of $P$ and let $P^d$ be obtained from $P$ by replacing each element $v$ with a chain of size $d(v)$. Then $G^d$ is the incomparability graph of poset $P^d$. For each positive integer $i$, let $A_i$ be the set of maximal elements of $P^d - (A_1\\cup ... \\cup A_{i-1})$. Then $(A_1, ..., A_n)$ is a partition of $V(G^d)$ into cliques, where $n=\\alpha(G^d)$. Let $V_1$ be the union of $A_i$ for all odd $i$ and let $V_2$ be the union of $A_i$ for all even $i$. Then $G^d[V_1]$ and $G^d[V_2]$ can be expressed as $G^{d_1}$ and $G^{d_2}$, respectively, for some $d_1,d_2\\in \\mathbb Z_+^{V(G)}$. It is easy to see that $d_1+d_2=d$ and $\\lfloor d\/2\\rfloor \\le d_j\\le \\lceil d\/2\\rceil$ ($j=1,2$). 
Moreover, each $\\alpha(G^{d_j})$ is bounded by the number of $A_i$s contained in $V_j$. Therefore, $\\alpha(G^{d_1}) + \\alpha(G^{d_2}) \\le \\alpha(G^d)$, which implies that $d'=d_1$ satisfies Theorem \\ref{thm:esp1} and thus $G$ is ESP. \\hfill \\rule{4pt}{7pt}\n\n\\subsection{$p$-Comparability graphs}\n\n$p$-Comparability graphs were introduced in \\cite{cameron} and were shown \\cite{cameron, cameron2} to be box-perfect. We show that they are ESP. Let $D$ be a digraph with a special set $T$ of vertices such that every arc is in a dicycle (directed cycle) and every dicycle meets $T$ exactly once. In particular, $D$ has no arc between any two vertices of $T$. If $p$ is an integer with $|T|\\le p$, then a {\\it p-comparability graph} $G$ is defined from $D$ by adding all chords of all dicycles, then deleting $T$, and finally ignoring all directions on edges. Note that 1-comparability graphs are precisely comparability graphs. \n\n\n\\begin{theorem} \\label{thm:pcom}\nEvery $p$-comparability graph $G$ is ESP.\n\\end{theorem}\n\nTo prove this theorem we will need the following Lemma. Let $D=(V,A)$ be a digraph. For each dicycle $C$ of $D$, the {\\it incidence vector} of $C$ is the vector $\\chi^C\\in \\{0,1\\}^A$ such that $\\chi^C(a)=1$ if and only if $a$ is on $C$. A sum of incidence vectors of (not necessarily distinct) dicycles of $D$ is called a {\\it circulation} of $D$. The following is a special case of Corollary 11.2b of \\cite{schrijver3}.\n\n\\begin{lemma}\\label{lem:cir}\nEvery circulation $f$ is the sum of two circulations $f_1$, $f_2$ such that $\\lfloor f\/2\\rfloor \\le f_i \\le \\lceil f\/2\\rceil$ holds for both $i=1,2$. \n\\end{lemma}\n\n\\noindent{\\bf Proof of Theorem \\ref{thm:pcom}.} Let $G$ be constructed from $D$ and $T$. Let $D^*$ be obtained from $D$ by splitting each vertex $v$ into $v'$ and $v''$ such that arcs entering $v$ are now entering $v'$, and arcs leaving $v$ are now leaving $v''$. We also add an arc from $v'$ to $v''$. Observe that for every dicycle $C$ of $D$, $D^*$ has a unique dicycle $C^*$ such that $A(C^*)\\cap A(D) = A(C)$. Moreover, every dicycle of $D^*$ can be expressed as $C^*$ for a dicycle $C$ of $D$. \n\nWe will use a fact proved in \\cite{cameron2} that for every clique $K$ of $G$, there exists a dicycle $C_K$ of $D$ such that $K\\subseteq V(C_K)$.\n\nLet $\\Lambda$ be a set of maximal cliques of $G$. We prove the theorem by showing that $\\Lambda$ admits an equitable subpartition. Let $f$ be the sum of incidence vectors of $C^*_K$ over all $K\\in \\Lambda$. Since each $C_K$ meets $T$ exactly once, each $C_K^*$ must meet $T^*=\\{t't'':t\\in T\\}$ exactly once. As a result, $|\\Lambda|$ equals the sum of $f(a)$ over all $a\\in T^*$. In addition, since each $K\\in\\Lambda$ is a maximal clique, we must have $V(C_K)-T=K$. This implies that $d_{\\Lambda}(v) = f(v'v'')$ holds for all $v\\in V(D)$.\n\nLet $f_1$ and $f_2$ be the two circulations of $D^*$ determined by Lemma \\ref{lem:cir}. For $i=1,2$, let $\\mathcal C_i^*$ be the multiset of dicycles of $D^*$ such that $f_i$ is the sum of $\\chi^{_{C^*}}$ over all $C^*\\in\\mathcal C_i^*$. Then let $\\mathcal C_i$ be the multiset $\\{C: C^*\\in \\mathcal C_i^*\\}$ and $\\Lambda_i=\\{V(C)-T:C\\in\\mathcal C_i\\}$. By the construction of $G$, each member of $\\Lambda_i$ is a clique of $G$. Moreover, $d_{\\Lambda_i}(v)=f_i(v'v')$ holds for all $v\\in V(G)$, and $|\\Lambda_i| = \\sum_{a\\in T^*} f_i(a)$. 
Therefore, $(\\Lambda_1,\\Lambda_2)$ is an equitable subpartition of $\\Lambda$, which proves that $G$ is ESP. \\hfill \\rule{4pt}{7pt}\n\n\\noindent{\\bf Remark.} Let us call a graph {\\it strong ESP} if every set $\\Lambda$ of maximal cliques admits an equitable subpartition $(\\Lambda_1,\\Lambda_2)$ with $\\max\\{|\\Lambda_1|,|\\Lambda_2|\\}\\le \\lceil |\\Lambda|\/2\\rceil$. This proof also proves that (1-)comparability graphs are in fact strong ESP.\n\n\n\\section{Parity graphs}\n\nA graph is called a {\\it parity graph} if any two induced paths between the same pair of vertices have the same parity. These are natural extensions of bipartite graphs and they are perfect \\cite{sachs}. Cameron and Edmonds \\cite{cameron} conjectured that\nevery parity graph is box-perfect. The objective of this section is to present a proof of this conjecture. \n\nTo establish our result we need a structural characterization of parity graphs. \nLet $H$ be a graph with a stable set $S$ such that all vertices of $S$ have the same set of neighbors. \nLet $B$ be a bipartite graph and let $T$ be a subset of a color class of $B$ with $|T|=|S|$. \nLet $G$ be obtained from the disjoint union of $H$ and $B$ by identifying $S$ with $T$. We call $G$ a {\\it bipartite extension} of $H$ by $B$, and we also call the construction of $G$ from $H$ {\\it bipartite extension}.\n\n\\begin{lemma}[Burlet and Uhry \\cite{uhry}]\nEvery connected parity graph can be constructed from a single vertex by repeatedly duplicating vertices and bipartite extensions.\n\\end{lemma}\n\n\\begin{lemma}\\label{lem:twin}\nDuplicating a vertex in an ESP graph results in an ESP graph.\n\\end{lemma}\n\n\\vspace{-2mm}\n\\noindent{\\bf Proof.} Let ESP graph $G$ have a vertex $v$. Let $G'$ be obtained by duplicating $v$ and let $v'$ be the new vertex. For any set $\\Lambda'$ of maximal cliques of $G'$, we prove that $\\Lambda'$ has an equitable subpartition. \n\nWe define $\\Lambda$ as follows. \nIf $vv'$ is an edge then $\\Lambda = \\{K-v':K\\in \\Lambda'\\}$; if $vv'$ is not an edge then $\\Lambda =\\{K:v'\\not\\in K\\in\\Lambda\\} \\uplus \\{K-v'+v:v'\\in K\\in\\Lambda'\\}$. Note that $\\Lambda$ is a multiset of maximal cliques of $G$. Since $G$ is ESP, $\\Lambda$ admits an equitable subpartition $(\\Lambda_1, \\Lambda_2)$. By deleting vertices from cliques in $\\Lambda_1$ and $\\Lambda_2$ we may assume that $d_{\\Lambda_1} + d_{\\Lambda_2} =d_{\\Lambda}$ and $\\lfloor d_{\\Lambda}\/2 \\rfloor \\le d_{\\Lambda_i}\\le \\lceil d_{\\Lambda}\/2 \\rceil$ ($i=1,2$).\n\nIf $vv'$ is an edge, let $\\Lambda_i' =\\{K:v\\not\\in K\\in\\Lambda_i\\} \\uplus \\{K+v':v\\in K\\in\\Lambda_i\\}$ ($i=1,2$). Then $(\\Lambda_1', \\Lambda_2')$ is an equitable subpartition of $\\Lambda'$ because $d_X(v')=d_X(v)$ holds for $X\\in \\{\\Lambda,\\Lambda_1', \\Lambda_2'\\}$. \n\nNow suppose $vv'$ is not an edge. Note that $d_{\\Lambda} (v) = d_{\\Lambda'}(v) + d_{\\Lambda'}(v')$. Also we may assume that $d_{\\Lambda_1}(v) = \\lfloor d_{\\Lambda}(v)\/2 \\rfloor$ and $d_{\\Lambda_2}(v) = \\lceil d_{\\Lambda}(v)\/2 \\rceil$. Let \n$$m_1=\\lfloor d_{\\Lambda'}(v)\/2 \\rfloor, \\ \\ m_2=\\lceil d_{\\Lambda'}(v)\/2 \\rceil, \\ \\ m_1'=d_{\\Lambda_1}(v)-m_1, \\ \\ m_2'=d_{\\Lambda_2}(v)-m_2.$$\nThen \n$$m_1+m_2=d_{\\Lambda'}(v), \\ \\ m_1'+m_2'=d_{\\Lambda'}(v'), \\ \\ \\min\\{m_1',m_2'\\}\\ge \\lfloor d_{\\Lambda'}(v')\/2 \\rfloor.$$\nFor $i=1,2$, let $\\Lambda_i'$ be obtained from $\\Lambda_i$ by turning $m_i'$ cliques $K$ that contain $v$ into $K-v+v'$. 
Then the above equalities and inequalities imply that $(\Lambda_1', \Lambda_2')$ is an equitable subpartition of $\Lambda'$. \hfill \rule{4pt}{7pt}\n\n\noindent{\bf Remark.} Clearly, this proof also shows that duplicating a vertex in a strong ESP graph results in a strong ESP graph. \n\n\begin{theorem}\nParity graphs are ESP.\n\end{theorem}\n\n\vspace{-2mm}\n\noindent{\bf Proof.} By Lemma \ref{lem:twin}, we only need to show that if $G$ is a bipartite extension of an ESP graph $H$ by a bipartite graph $B=(X,Y,E)$, then $G$ is ESP. Let $X_0\subseteq X$ be the intersection of $V(H)$ and $V(B)$. Let $\Lambda$ be a set of maximal cliques of $G$. Naturally, $\Lambda$ can be partitioned into $\Lambda_H$ and $\Lambda_B$, which consist of maximal cliques of $H$ and of edges of $B$, respectively. Now we find an equitable subpartition $(\Lambda_B',\Lambda_B'')$ of $\Lambda_B$ and an equitable subpartition $(\Lambda_H',\Lambda_H'')$ of $\Lambda_H$ such that $(\Lambda_B'\cup\Lambda_H',\Lambda_B''\cup\Lambda_H'')$ is an equitable subpartition of $\Lambda$. Let $X_0$ be partitioned into $(X_1,X_2)$ such that $X_1$ consists of $x\in X_0$ with both $d_{\Lambda_B}(x)$ and $d_{\Lambda_H}(x)$ odd. Since $(\Lambda_B',\Lambda_B'')$ and $(\Lambda_H',\Lambda_H'')$ are always compatible on vertices in $X_2$, we only need to focus on vertices in $X_1$.\n\nWithout loss of generality, let $\Lambda_B=E$. Suppose $B$ has $2t$ vertices of odd degree. Then $E$ can be partitioned into cycles and $t$ paths $P_1,..., P_t$. Let $(\Lambda_B',\Lambda_B'')$ be defined by assigning edges to the two parts alternately along the cycles and paths. Then $(\Lambda_B',\Lambda_B'')$ is an equitable subpartition of $\Lambda_B$. Note that we have the following freedom in the assignment. Let $x\in X_1$ and let $P_i$ be the path with $x$ as an end. If the other end of $P_i$ is not in $X_1$, then we may choose $d_{\Lambda_B'}(x)$ to be $\lfloor d_{\Lambda_B}(x)\/2 \rfloor$ or $\lceil d_{\Lambda_B}(x)\/2\rceil$, as we wish (without changing $d_{\Lambda_B'}(z)$ and $d_{\Lambda_B''}(z)$ for any other $z\in X_1$). If the other end of $P_i$ is a vertex $x'$ in $X_1$, then we may assume that $d_{\Lambda_B'}(x)=\lfloor d_{\Lambda_B}(x)\/2 \rfloor$ and $d_{\Lambda_B'}(x')=\lceil d_{\Lambda_B}(x')\/2\rceil$. Let $(x_1,x_1')$, ..., $(x_k,x_k')$ be these pairs in $X_1$.\n\nLet $H_1$ be obtained from $H$ by deleting $x_1',...,x_k'$ and let $\Lambda_1$ be obtained from $\Lambda_H$ by replacing each $x_i'$ with $x_i$. Note that $d_{\Lambda_1}(x_i) = d_{\Lambda_H}(x_i) + d_{\Lambda_H}(x_i')$ for all $i$, while $d_{\Lambda_1}(v)=d_{\Lambda_H}(v)$ for all other vertices $v$ of $H_1$. Since $H$ is ESP, so is $H_1$. Let $(\Lambda_1', \Lambda_1'')$ be an equitable subpartition of $\Lambda_1$. Without loss of generality, we assume $d_{\Lambda_1'}(x_i)=d_{\Lambda_1''}(x_i) = d_{\Lambda_1}(x_i)\/2$ for all $i$. Let $\Lambda_H'$ be obtained from $\Lambda_1'$ by turning $\lfloor d_{\Lambda_H}(x_i')\/2 \rfloor$ of its cliques $K$ that contain $x_i$ into $K-x_i+x_i'$ (for every $i$). Then $d_{\Lambda_H'}(x_i)=\lceil d_{\Lambda_H}(x_i)\/2\rceil$ and \n$d_{\Lambda_H'}(x_i')=\lfloor d_{\Lambda_H}(x_i')\/2 \rfloor$. Let $\Lambda_H''$ be obtained analogously. Now it is straightforward to verify that the freedom in the partition $(\Lambda_B',\Lambda_B'')$ allows us to make adjustments so that $(\Lambda_B'\cup\Lambda_H',\Lambda_B''\cup\Lambda_H'')$ is an equitable subpartition of $\Lambda$. 
\\hfill \\rule{4pt}{7pt}\n\n\n\\section{Complements of line graphs}\n\nIn the rest of this paper we allow some graphs to have loops and parallel edges. We call these {\\it multigraphs} and we reserve the word {\\it graph} for simple graphs. If a multigraph $H$ is obtained from a graph $H_0$ by adding loops and parallel edges, then $H_0$ is called a {\\it simplification} of $H$ and is denoted by $si(H)$.\n\nLet $L(H)$ denote the line graph of a multigraph $H$. Under this circumstance, we always make the following implicit assumptions: \\\\ \n\\indent (i) $H$ has no isolated vertices (deleting an isolated vertex does not affect $L(H)$); \\\\ \n\\indent (ii) $H$ has no loops (replacing a loop with a pendent edge does not affect $L(H)$); \\\\ \n\\indent (iii) $H$ has no distinct vertices $x,y,z$ such that $z$ is the only neighbor of $x$ and the only neighbor \\makebox[34pt]{} of $y$ (replacing edges between $y$ and $z$ by edges between $x$ and $z$ does not affect $L(H)$). \n\nThe complement of $L(H)$ will be denoted by $\\bar L(H)$. Our results in the next two sections imply a characterization of box-perfect line graphs. The goal of this section is to characterize box-perfect graphs that are complements of line graphs.\n\n\\begin{theorem} \\label{thm:linebar}\nLet $G=\\bar L(H)$ be perfect. Then $G$ is box-perfect if and only if $G$ is $\\{S_3,\\bar S_3^+\\}$-free.\n\\end{theorem}\n\nOur proof of this theorem is divided into a sequence of lemmas. We first determine the structure of $\\{S_3,\\bar S_3^+\\}$-free perfect graphs of the form $\\bar L(H)$, and then we confirm that all such graphs are ESP. We will see that some of these graphs are in fact strong ESP.\n\nWe need a result of Gallai \\cite{gallai} which identifies eight classes and ten individual graphs such that a graph is a comparability graph if and only if it does not contain any of these identified graphs as an induced subgraph. We will use the following immediate consequence of Gallai's theorem. Let $\\Gamma$ be the graph obtained from a 6-cycle $v_1v_2v_3v_4v_5v_6v_1$ by adding two edges $v_1v_3$ and $v_1v_5$.\n\n\\begin{lemma} \\label{lem:inc}\nLet $G$ be claw-free and perfect. Then $G$ is an incomparability graph if and only if $G$ does not contain any of $S_3$, $\\bar{S}_3$, $\\Gamma$, and $C_{2n}\\ (n\\ge3)$ as an induced subgraph. \n\\end{lemma}\n\nLet $K_4^+$ denote the graph obtained from $K_4$ by adding two pendent edges to two of its distinct vertices. Let $K_{2,n}^+$ denote the graph obtained from $K_{2,n}$ ($n\\ge3$) by adding a pendent edge to a degree-2 vertex and an edge between the two degree-$n$ vertices.\n\n\\begin{lemma}\\label{lem:bars3}\nLet $\\bar L(H)$ be $\\{C_5, S_3,\\bar S_3^+\\}$-free. If $H$ contains $\\bar{S}_3$ as a subgraph then $si(H)$ is either $K_4^+$ or a subgraph of $K_{2,n}^+$ for some $n\\ge3$. \n\\end{lemma}\n\n\\vspace{-2mm}\n\\noindent{\\bf Proof.} Since $\\bar{S}_3$ is a subgraph of $H$, we assume $V(H)=\\{x_1, x_2, x_3, y_1, y_2, y_3, z_1, ..., z_m\\}$ such that $x_1x_2x_3$ is a triangle and $x_iy_i\\in E(H)$ ($i=1,2,3$). If $m=0$ then it is straightforward to verify the conclusion of the lemma, using the fact that $H$ does not contain $C_5$ as a subgraph. So we assume $m>0$. Let $K_{1,3}^*$ denote the graph obtained from $K_{1,3}$ by subdividing each edge exactly once. Note that $K_{1,3}^*$ is not a subgraph of $H$ since $\\bar{L}(K_{1,3}^*)=S_3$. As a result, each $z_i$ is adjacent to none of $y_1,y_2,y_3$, and at most two of $x_1,x_2,x_3$. 
Furthermore, since $\\bar L(H)$ is $\\bar S_3^+$-free, the entire neighborhood of each $z_i$ must be a subset of $\\{x_1,x_2,x_3\\}$ of size one or two (here we also use assumption (i) above). By assumption (iii) above we may assume that each $z_i$ is adjacent to exactly two of $x_1,x_2,x_3$. Since $C_5$ is not a subgraph of $H$, all $z_i$'s must have the same set of neighborhood. Now, since $m>0$, it is straightforward to verify that $si(H)$ is a subgraph of $K_{2,m+3}^+$. \\hfill \\rule{4pt}{7pt}\n\nLet $C$ be an even cycle of length $\\ge4$. Let $X$ be a stable set of $C$ and let $Y=V(C)-X-N_C(X)$, where $X$ is allowed to be empty. We construct a bipartite graph from $C$ by adding a pendent edge to each vertex in $Y$ and by repeatedly duplicating vertices in $X$. Let $\\cal C$ consist of all graphs that can be constructed in this way. \n\n\\begin{lemma} \\label{lem:nos3}\nLet $L(H)$ be perfect and $\\bar S_3$-free. Suppose $H$ is connected and $H$ does not contain $\\bar S_3$ as a subgraph. If $L(H)$ contains an induced $\\Gamma$ or $C_{2n}$ $(n\\ge3)$, then $si(H)$ is a subgraph of a graph in $\\mathcal C\\cup\\{K_{3,3}\\}$. \n\\end{lemma}\n\n\\vspace{-2mm}\n\\noindent{\\bf Proof.} Suppose $\\Gamma$ is an induced subgraph of $L(H)$. Then $H$ has a subgraph with a 4-cycle $x_1x_2x_3x_4$ and two pendent edges $x_1y_1$, $x_2y_2$. Note that $x_1x_3$ and $x_2x_4$ are not edges of $H$ since $\\bar S_3$ is not a subgraph of $H$. Let $z_1,...,z_m$ be the remaining vertices of $H$. If $m=0$, then either $si(H)$ is a subgraph of $K_{3,3}$ or $H$ contains a 5-cycle. So we assume $m>0$. Like in the proof of the last lemma, since $C_5$ and $K_{1,3}^*$ are not subgraphs of $H$, for each $i$ we must have $N_H(z_i)=\\{x_1,x_3\\}$ or $\\{x_2,x_4\\}$, or $\\{x_j\\}$ for some $j$. In addition, $N_H(y_i)\\subseteq\\{x_i,x_{i+2}\\}$ ($i=1,2$) and $|N_H(y_1)\\cup N_H(y_2)|\\le 3$. Now, since $H$ does not contain $K_{1,3}^*$, it is routine to check that $si(H)$ is a subgraph of a graph in $\\cal C$.\n\nNext, suppose $L(H)$ is $\\Gamma$-free. Then $H$ contains a $2n$-cycle $x_1x_2... x_{2n}$ ($n\\ge3$). Note that this cycle has no chord (otherwise $L(H)$ contains an induced $\\Gamma$, $\\bar S_3$, or $C_{2k+1}$ with $k\\ge2$). Let $z_1,...,z_m$ be the remaining vertices of $H$. Using the same argument we used in the last paragraph it is straightforward to show that each $N_H(z_i)$ is $\\{x_j\\}$ or $\\{x_j,x_{j+2}\\}$ for some $j$ (where $x_{2n+t}$ is $x_t$). In addition, if $N_H(z_i)=\\{x_j,x_{j+2}\\}$ then $N_H(x_{j+1})=N_H(z_i)$. Therefore, $si(H)$ is a subgraph of a graph in $\\cal C$. \\hfill \\rule{4pt}{7pt}\n\n\\begin{lemma} \\label{lem:ab}\nSuppose $G$ has a vertex $u$ such that $G-u$ is bipartite and $G-N(u)$ is edge-less. Then $G$ is totally unimodular. \n\\end{lemma}\n\n\\vspace{-2mm}\n\\noindent{\\bf Proof.} By Theorem 19.3 of \\cite{schrijver}, we only need to show that each set $\\Lambda$ of maximal cliques admits an {\\it equitable} partition $(\\Lambda_1,\\Lambda_2)$, meaning that $\\min\\{d_{\\Lambda_1}(v), d_{\\Lambda_2}(v)\\} \\ge \\lfloor d_{\\Lambda}(v)\\rfloor$, for all $v\\in V(G)$. Suppose to the contrary that some $\\Lambda$ does not admit such a partition. We choose $\\Lambda$ with $|\\Lambda|$ as small as possible. \n\nLet $A,B,C,D$ be a partition of $V(G)-u$ such that $A\\cup C$, $B\\cup D$ are stable and $N(u)=B\\cup C$. Let $G'$ be the subgraph of $G$ formed by edges in $K-u$, over all $K\\in \\Lambda$. We claim that $G'$ is a forest. Suppose $G'$ has a cycle $x_1x_2...x_n$. 
Note that for each $i$, exactly one of $x_ix_{i+1}$ and $ux_ix_{i+1}$ is a clique in $\Lambda$. Let $\Lambda'$ consist of the remaining cliques in $\Lambda$. By the minimality of $|\Lambda|$, $\Lambda'$ admits an equitable partition $(\Lambda_1',\Lambda_2')$. Let us extend $\Lambda_j'$ ($j=1,2$) to $\Lambda_j$ by including $x_ix_{i+1}$ or $ux_ix_{i+1}$ (whichever belongs to $\Lambda$) for all $i$ with $i-j$ even. Then it is easy to see that $(\Lambda_1,\Lambda_2)$ is an equitable partition of $\Lambda$. This contradicts the choice of $\Lambda$ and thus the claim is proved. The same argument also shows that $G'$ has no maximal path with two ends both in $A\cup B$ or both in $C\cup D$. Thus all components of $G'$ are paths with one end in $A\cup B$ and one end in $C\cup D$. If $G'$ has only one path then the same argument still works. If $G'$ has two or more paths then we can take any two of them and treat their union as a cycle and again apply the same argument. \hfill \rule{4pt}{7pt}\n\nRecall that a graph $G$ is {\it strong ESP} if every set $\Lambda$ of maximal cliques of $G$ admits an equitable subpartition $(\Lambda_1,\Lambda_2)$ with $\max\{|\Lambda_1|,|\Lambda_2|\}\le \lceil |\Lambda|\/2\rceil$. The next lemma follows immediately from this definition.\n\n\begin{lemma} \label{lem:strongesp}\n(1) If $G$ is strong ESP then so are all its induced subgraphs. \\ \n\indent (2) Let $G_1,G_2$ be strong ESP and let $G$ be obtained from the disjoint union of $G_1,G_2$ by adding all edges between them. Then $G$ is also strong ESP. \n\end{lemma}\n\nIn a (loopless) multigraph $G$, the {\it degree} of a vertex $v$, denoted $d_G(v)$, is the number of edges incident with $v$. The next lemma is the key step for proving Theorem \ref{thm:linebar}.\n\n\begin{lemma} \label{lem:k33}\nFor every $H\in\mathcal C\cup\{K_{3,3}\}$, $\bar L(H)$ is strong ESP.\n\end{lemma}\n\n\vspace{-2mm}\n\noindent{\bf Proof.} For each $\mu\in \mathbb Z_+^{E(H)}$, let $\mu H$ denote the multigraph with vertex set $V(H)$ such that the number of edges between any two vertices $x,y$ is zero (if $xy\not\in E(H)$) or $\mu(xy)$ (if $xy\in E(H)$). Note that $\mu H$ is bipartite since $H$ is bipartite. Let $\Delta(\mu)$ denote the maximum degree of $\mu H$. By K\H{o}nig's edge-coloring theorem, $E(\mu H)$ is the union of $k$ matchings if and only if $k\ge \Delta(\mu)$. Because of this theorem and the one-to-one correspondence between cliques of $\bar L(H)$ and matchings of $H$, to prove the lemma it is enough for us to show that \n\n($*$) \ for any $\mu\in \mathbb Z_+^{E(H)}$ there exist $\mu_1,\mu_2\in \mathbb Z_+^{E(H)}$ such that $\mu_1+\mu_2=\mu$, $\mu_i\ge\lfloor \mu\/2 \rfloor$ ($i=1,2$), \\ \n\makebox[35pt]{} $\Delta(\mu_1) \le \lceil \Delta(\mu)\/2\rceil$, and $\Delta(\mu_2) \le \lfloor \Delta(\mu)\/2\rfloor$.\n\nIn the following we construct a partition $(E_1,E_2)$ of $E(\mu H)$ such that the multiplicity functions $\mu_i$ of $E_i$ ($i=1,2$) satisfy ($*$). This partition will be constructed in several steps. In the process we determine a partition $(E_1,E_2,E_3)$ of $E(\mu H)$, where we begin with $(E_1,E_2,E_3)=(\emptyset,\emptyset, E(\mu H))$ and we keep moving edges from $E_3$ to $E_1,E_2$ until $E_3$ becomes empty. For $i=1,2,3$, let $H_i$ denote the subgraph of $\mu H$ formed by edges in $E_i$. 
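(For orientation, here is a small illustration of ($*$), with $H=K_{3,3}$ and $\mu=\bfm 1$: then $\Delta(\mu)=3$, and letting $\mu_1$ be the sum of the indicator vectors of two disjoint perfect matchings of $K_{3,3}$ and $\mu_2$ that of the remaining perfect matching gives $\mu_1+\mu_2=\mu$, $\mu_i\ge\lfloor \mu\/2 \rfloor=\bfm 0$, $\Delta(\mu_1)=2=\lceil \Delta(\mu)\/2\rceil$, and $\Delta(\mu_2)=1=\lfloor \Delta(\mu)\/2\rfloor$.) 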
\n\nFirst, for each edge $e=xy$ of $H$, among all $\\mu(e)$ edges of $E_3$ that are between $x$ and $y$, we move $\\lfloor \\mu(e)\/2 \\rfloor$ of them to $E_1$ and $\\lfloor \\mu(e)\/2 \\rfloor$ of them to $E_2$. At the end of this process, $H_3$ becomes a simple graph. It follows that $\\mu_i\\ge\\lfloor \\mu\/2 \\rfloor$ ($i=1,2$) and this inequality will be satisfied no matter how edges of $H_3$ are moved to $E_1$ and $E_2$ in later steps.\n\nIf $H_3$ has a cycle $C$, since $H$ is bipartite, $E(C)$ can be partitioned into two matchings $M_1,M_2$. We move $M_i$ from $E_3$ to $E_i$ ($i=1,2$). We repeat this process until $H_3$ become a forest. At this point, $H_1$ and $H_2$ have the same degree on every vertex.\n\nLet $S=\\{v: d_{\\mu H}(v)=\\Delta(\\mu)\\}$. Suppose $H_3$ has a leaf $v$ that is not in $S$. Let $P$ be a maximal path of $H_3$ starting from $v$. Let $E(P)$ be partitioned into two matchings $M_1,M_2$, where we assume the edge of $P$ that is incident with the other end $u$ of $P$ belongs to $M_1$. Then we move $M_i$ from $E_3$ to $E_i$ ($i=1,2$). After this change, $d_{H_1}(u)=\\lceil d_{\\mu H}(u)\/2 \\rceil \\le \\lceil \\Delta(\\mu)\/2\\rceil$, $d_{H_2}(u)=\\lfloor d_{\\mu H}(u)\/2 \\rfloor \\le \\lfloor \\Delta(\\mu)\/2\\rfloor$, and $d_{H_i}(v)\\le \\lceil d_{\\mu H}(v)\/2\\rceil \\le \\lfloor \\Delta(\\mu)\/2\\rfloor$ ($i=1,2$). In addition, $d_{H_1}(w)= d_{H_2}(w)$ for all $w\\ne u,v$, and $d_{H_i}(u),d_{H_i}(v)$ will remain unchanged in the remaining process. By repeating this process we may assume that all leaves of $H_3$ are in $S$. As a consequence, $\\Delta(\\mu)$ is odd. Note that the same argument works if $H_3$ has a maximal path with an odd number of edges. Thus we further assume that in every component of $H_3$, all leaves are in the same color class (of any 2-coloring of $H_3$).\n\nWe first consider the case $H=K_{3,3}$. We claim that each component of $H_3$ is a path. Suppose a component $H_3'$ of $H_3$ is not a path. Then $H_3'$ has at least three leaves. Since all these leaves are in the same color class, $H_3'$ must have exactly three leaves $z_1,z_2,z_3$ and they form a color class of $H$. Consequently, $H_3'=H_3=K_{1,3}$. Moreover, in the previous steps of reducing $H_3$, no path was ever deleted because otherwise $H_3$ would be a subgraph of $K_{2,2}$. It follows that $d_{\\mu H}(v^*)$ is even, where $v^*\\in V(H)-V(H_3)$. However, the fact $z_1,z_2,z_3\\in S$ implies that $\\mu H$ is $\\Delta(\\mu)$-regular, and thus $d_{\\mu H}(v^*)=\\Delta(\\mu)$ is odd. This contradiction proves our claim. Now, since each non-leaf $v$ of $H_3$ has degree two, its degree in $\\mu H$ is even and thus $v\\not\\in S$. It follows that moving all edges of $E_3$ to $E_1$ results in the required partition. \n\nNext suppose $H\\in\\cal C$. Let $H_3'$ be a component of $H_3$. Then $H_3'$ is a {\\it caterpillar} since $K_{1,3}^*$ is not a subgraph of $H$. Therefore, $H_3'$ has a path $x_1x_2...x_{2k+1}$ such that every leaf of $H_3'$ is adjacent to some $x_{2i+1}$. We assume that $H_3'$ is not a path because otherwise we may move the entire path from $E_3$ to $E_1$. We make two observations before we continue. First, $d_H(v)>1$ holds for every leaf $v$ of $H_3'$, because otherwise the only edge of $H$ that is incident with $v$ would be the only edge of $H_3'$ (as $v\\in S$). Second, if $u,v\\in V(H_3')$ are of degree-2 in $H$ and are contained in a 4-cycle $uxvy$ of $H$, then at most one of $u,v$ is in $S$. 
This is because otherwise $\\mu(ux)=\\mu(vy)$, $\\mu(uy)=\\mu(vx)$, and both $x,y\\in S$, which implies that $H_3'$ is a subgraph of the 4-cycle $uxvy$. It follows from these two observations and the construction of graphs in $\\cal C$ that each $x_{2i+1}$ is adjacent to at most two leaves of $H_3'$. For the same reasons, there must exist $i_0\\in\\{0,1,...,k\\}$ such that $d_{H_3'}(x_{2i_0+1})=2$.\n\nFor $i=1,2,3,4$, let $V_i=\\{v: d_{H_3'}(v)=i\\}$. Note that $V_2\\cup V_3\\cup V_4 =\\{x_1,...,x_{2k+1}\\}$. Let $M$ be the matching $\\{x_{2i-1}x_{2i}:i=1,...,i_0\\} \\cup \\{x_{2i}x_{2i+1}:i_0+1,...,k\\}$. From $H_3'$ we move $M$ to $E_2$ and the rest of $E(H_3')$ to $E_1$. Now we verify that, after this change, $d_{H_1}(v)\\le \\lceil \\Delta(\\mu)\/2 \\rceil$ and $d_{H_2}(v) \\le \\lfloor \\Delta(\\mu)\/2\\rfloor$ hold for all $v\\in V_1\\cup V_2\\cup V_3\\cup V_4$. For each $v\\in V_1$ it is easy to see that in fact $d_{H_1}(v)=\\lceil \\Delta(\\mu)\/2 \\rceil$ and $d_{H_2}(v) = \\lfloor \\Delta(\\mu)\/2\\rfloor$. For each even $i$, we have $x_i\\in V_2$ and $d_{H_1}(x_i) = d_{H_2}(x_i) = d_{\\mu H}(x_i)\/2\\le \\lfloor \\Delta(\\mu)\/2\\rfloor$. For each odd $i$ we consider two cases. If $x_i\\in V_3$ then $d_{H_1}(x_i)= (d_{\\mu H}(x_i)+1)\/2\\le \\lceil \\Delta(\\mu)\/2 \\rceil$ and $d_{H_2}(x_i)= (d_{\\mu H}(x_i)-1)\/2\\le \\lfloor \\Delta(\\mu)\/2\\rfloor$. If $x_i\\in V_2\\cup V_4$ then $d_{H_1}(x_i)\\le (d_{\\mu H}(x_i)+2)\/2\\le \\lceil \\Delta(\\mu)\/2 \\rceil$ and $d_{H_2}(x_i)\\le d_{\\mu H}(x_i)\/2\\le \\lfloor \\Delta(\\mu)\/2\\rfloor$. Therefore, we may apply this split to all components of $H_3$ and create the required partition $E_1,E_2$. \\hfill \\rule{4pt}{7pt}\n\n\\bigskip\n\\noindent{\\bf Proof of Theorem \\ref{thm:linebar}.} The forward implication is obvious so we only show that $G=\\bar L(H)$ is ESP when $G$ is perfect and $\\{S_3, \\bar{S}_3^+\\}$-free. \n\nSuppose $L(H)$ contains an induced $S_3$. Then $H$ contains $\\bar S_3$ as a subgraph. By Lemma \\ref{lem:bars3}, $si(H)$ is either $K_4^+$ or a subgraph of $K_{2,n}^+$ for some $n\\ge3$. In both cases, it is straightforward to verify that $\\bar L(si(H))$ satisfies the assumptions in Lemma \\ref{lem:ab}. So $\\bar L(si(H))$ is totally unimodular and thus is also ESP. By Lemma \\ref{lem:twin}, $\\bar L(H)$ is ESP.\n\nNow suppose $L(H)$ is $S_3$-free. We claim that $\\bar L(H')$ is strong ESP for every component $H'$ of $H$. If $\\bar L(H')$ is a comparability graph, then the claim follows immediately from the Remark at the end of Section 4. So we assume that $\\bar L(H')$ is not a comparability graph. By Lemma \\ref{lem:inc}, $L(H)$ contains an induced $\\Gamma$ or $C_{2n}$ ($n\\ge3$). This implies, by Lemma \\ref{lem:nos3}, that $si(H')$ is a subgraph of a graph in $\\mathcal C\\cup\\{K_{3,3}\\}$. Then the claim follows from Lemma \\ref{lem:k33}, Lemma \\ref{lem:strongesp}(1), and the Remark of Lemma \\ref{lem:twin}. Finally, this claim and Lemma \\ref{lem:strongesp}(2) imply that $\\bar L(H)$ is ESP. \\hfill \\rule{4pt}{7pt}\n\n\\section{Trigraphs}\n\nOur next objective is to characterize claw-free box-perfect graphs. To accomplish this goal, we will need a result of Chudnovsky and Plumettaz \\cite{CP} on the structure of claw-free perfect graphs. The purpose of this section is to explain their result, which requires many definitions. 
\n\nA \\emph{trigraph} $G$ consists of a finite set $V$ of {\\it vertices} and an {\\it adjacency function} $\\theta: \\binom{V}{2} \\rightarrow \\{1, 0, -1\\}$ such that $\\{uv: \\theta(uv)=0\\}$ is a matching. \nTwo distinct vertices $u$ and $v$ of $G$ are \\emph{strongly adjacent} if $\\theta(uv)=1$, \\emph{strongly antiadjacent} if $\\theta(uv)=-1$, and \\emph{semiadjacent} if $\\theta(uv)=0$. We call $u$, $v$ \\emph{adjacent} if $\\theta(uv)\\geq 0$, and \\emph{antiadjacent} if $\\theta(uv)\\leq 0$. Note that every graph can be considered as a trigraph with $\\{uv: \\theta(uv)=0\\}=\\emptyset$. In other words, graphs are exactly trigraphs with no semiadjacent pairs. The result of Chudnovsky and Plumettaz is in fact about trigraphs.\n\nFor any trigraph $G =(V,\\theta)$, let $G^{\\ge0}$ denote the graph $(V,\\{uv:\\theta(uv)\\ge0\\})$. Conversely, for any graph $G=(V,E)$, let $tri(G)$ denote the set of all trigraphs $(V,\\theta)$ such that for any distinct $u,v\\in V$, $\\theta(uv)\\ge0$ if $uv\\in E$ and $\\theta(uv)\\le 0$ if $uv\\not\\in E$. \n\nLet $G=(V,\\theta)$ be a trigraph. \nWe call $G$ {\\it connected} if $G^{\\ge0}$ is connected. \nFor each $v\\in V$, let $N_G(v)=N_{G^{\\ge0}}(v)$. We often write $N(v)$ for $N_G(v)$ if the dependency on $G$ is clear. \nFor any $X\\subseteq V$, let $G|X$ be the trigraph such that its vertex set is $X$ and its adjacency function is the restriction of $\\theta$ to $\\binom{X}{2}$. If a trigraph $H$ is isomorphic to $G|X$ for some $X\\subseteq V$, then we call $H$ a {\\it subtrigraph} of $G$ and we say that $G$ {\\it contains} $H$.\n\nA trigraph is a {\\it hole} if it belongs to $tri(C_n)$ for some $n\\ge4$. A trigraph $(V,\\theta)$ is an {\\it antihole} if $(V,-\\theta)$ is a hole. A hole or antihole is {\\it odd} if its number of vertices is odd. A trigraph is {\\it Berge} if it contains neither odd hole nor odd antihole. A trigraph is a {\\it claw} if it belongs to $tri(K_{1,3})$. \nA trigraph is {\\it claw-free} if it does not contain any claw. In general, if $\\cal H$ is a set of trigraphs, then a trigraph is {\\it $\\cal H$-free} if it does not contain any trigraph in $\\cal H$. The result of Chudnovsky and Plumettaz characterizes \\{claw, holes, antiholes\\}-free trigraphs, that is, claw-free Berge trigraphs. To describe the resulting structure we need more definitions.\n\nLet $G=(V,\\theta)$ be a trigraph. For any two disjoint $X, Y\\subseteq V$, we say that $X$ is \\emph{complete} (resp. \\emph{strongly complete}, \\emph{anticomplete}, \\emph{strongly anticomplete}) to $Y$ if every $x\\in X$ and every $y\\in Y$ are adjacent (resp. strongly adjacent, antiadjacent, strongly antiadjacent). A \\emph{clique} (resp. \\emph{strong clique}) of $G$ is a set $C\\subseteq V$ such that any two distinct vertices of $C$ are adjacent (resp. strongly adjacent). A \\emph{stable set} (resp. \\emph{strong stable set}) of $G$ is a set $S\\subseteq V$ such that any two distinct vertices of $S$ are antiadjacent (resp. 
strongly antiadjacent).\n\nA trigraph $H$ is a \\emph{thickening} of a trigraph $G$ if $V(H)$ admits a partition $(X_v:v\\in V(G))$ such that \\\\ \n\\indent$\\bullet$ \\ if $v\\in V(G)$ then $X_v\\ne\\emptyset$ is a strong clique of $H$;\\\\ \n\\indent$\\bullet$ \\ if $u, v \\in V(G)$ are strongly adjacent in $G$ then $X_u$ is strongly complete to $X_v$ in $H$; \\\\ \n\\indent$\\bullet$ \\ if $u, v \\in V(G)$ are strongly antiadjacent, then $X_u$ is strongly anticomplete to $X_v$ in $H$;\\\\ \n\\indent$\\bullet$ \\ if $u, v \\in V(G)$ are semiadjacent, then $X_u$ is neither strongly complete nor strongly \\\\ \\makebox[9mm]{} anticomplete to $X_v$ in $H$.\n\nLet ${\\cal C}$ be the class of all trigraphs illustrated in Figure \\ref{fig:c}, where \\\\ \n\\indent $\\bullet$ \\ $|B^j_i| \\leq 1$ for all $i,j\\in\\{1,2,3\\}$\\\\ \n\\indent $\\bullet$ \\ $|B_2^1\\cup B_3^1|$, $|B_1^2\\cup B_3^2|$, $|B_1^3\\cup B_2^3|\\in \\{0,2\\}$ \\\\ \n\\indent $\\bullet$ \\ if $\\theta(a_1a_3)=0$ then $B_2^1\\cup B_3^1 =\\emptyset$\\\\ \n\\indent $\\bullet$ \\ there exists $x_i\\in B_i^1\\cup B_i^2\\cup B_i^3$ for $i=1,2,3$, such that $\\{x_1,x_2,x_3\\}$ is a clique.\n\n\\begin{figure}[ht]\n\\centerline{\\includegraphics[scale=0.5]{C.eps}}\n\\caption{Trigraphs in $\\cal C$}\n\\label{fig:c}\n\\end{figure}\n\nIt turns out that there are two kinds of claw-free Berge trigraphs. The first are thickenings of trigraphs in $\\cal C$. The second are constructed (in a way like constructing line graphs) from certain basic trigraphs. In the following, we first define the building blocks and then describe the construction.\n\nLet $G$ have three vertices $v, z_1, z_2$ such that $\\theta(vz_1)=\\theta(vz_2)=1$ and $\\theta(z_1z_2)=-1$. Then the pair $(G, \\{z_1, z_2\\})$ is a {\\it spot}. Let $G$ have four vertices $v_1,v_2,z_1,z_2$ such that $\\theta(v_1z_1)=\\theta(v_2z_2)=1$, $\\theta(v_1v_2)=0$, $\\theta(z_1z_2)=\\theta(z_1v_2)=\\theta(z_2v_1)=-1$. Then the pair $(G,\\{z_1, z_2\\})$ is a {\\it spring}. \n\nA trigraph is a {\\it linear interval} if its vertices can be ordered as $v_1,...,v_n$ such that if $i < j < k$ and $\\theta(v_iv_k)\\ge 0$ then $\\theta(v_iv_j) = \\theta(v_jv_k)=1$. Let $G$ be such a trigraph with $n\\ge4$. We call $(G, \\{v_1, v_n\\})$ a {\\it linear interval stripe} if: $v_1$ and $v_n$ are strongly antiadjacent, $v_i$ and $v_{i+1}$ are adjacent for every $i\\in\\{1,...,n-1\\}$, no vertex is complete to $\\{v_1, v_n\\}$, and no vertex is semiadjacent to $v_1$ or $v_n$.\n\nLet $(G,\\{p,q\\})$ be a spring or a linear interval strip. Let $H$ be a thickening of $G$ and let $X_v$ ($v\\in V(G)$) be the corresponding sets. If $|X_p|=|X_q|=1$, then $(H,X_p\\cup X_q)$ is called a {\\it thickening} of $(G,\\{p,q\\})$.\n\nLet $\\mathcal C'$ be the class of all pairs $(H, \\{z\\})$ such that $H$ is a thickening of a trigraph $G\\in\\cal C$ and $z\\in X_{a_i}$ for some $i\\in \\{1,2,3\\}$ for which $B^{i+2}_{i+1}\\cup B^{i+2}_{i}=\\emptyset$ and $N(z)\\cap (X_{a_{i+1}}\\cup X_{a_{i+2}})=\\emptyset$ (here we use the notation from the definitions of $\\cal C$ and thickening).\n\nA {\\it signed graph} $(G,s)$ consists of a multigraph $G=(V,E)$ and a function $s: E\\rightarrow \\{0, 1\\}$. If $\\sum_{e\\in E(C)}s(e)$ is even for all cycles $C$ of $G$, then $(G, s)$ is an \\emph{evenly signed graph}. In the following we define another three classes of signed graphs. 
\nFor any $F\\subseteq E$, let $G[F]= (V, F)$.\n\nLet $\\mathcal F_1$ be the class of loopless signed graphs $(G,s)$ such that $si(G)=K_4$ and $s \\equiv 1$. \nLet $\\mathcal F_2$ be the class of loopless signed graphs $(G,s)$ such that $si(G)$ is obtained from $K_{2,n}$ ($n\\ge1$) by adding an edge $e^*$ between its two degree-$n$ vertices, and edges in $\\{e:s(e) = 0\\}$ are all parallel to $e^*$ (while $s(e^*)=1$). \nWe remark that our $\\mathcal F_1$ is $\\mathcal F_2$ of \\cite{CP} and our $\\mathcal F_2$ is $\\mathcal F_1\\cup \\mathcal F_3$ of \\cite{CP}\n\nIn a connected multigraph $G$ with $E(G)\\ne\\emptyset$, a subgraph $B$ is a {\\it block} of $G$ if $B$ is a loop or $B$ is maximal with the property that $B$ is loopless and $si(B)$ is a block of $si(G)$. A signed graph $(G, s)$ is called an \\emph{even structure} if $E(G)\\ne\\emptyset$ and for all blocks $B$ of $G$, $(B, s|_{E(B)})$ is a member of ${\\cal F}_1\\cup {\\cal F}_2$ or an evenly signed graph or a loop.\n\nNow we describe how the pieces defined above can be put together. A trigraph $G=(V,\\theta)$ is called an \\emph{evenly structured linear interval join} if it can be constructed in the following manner: \\\\ \n\\indent $\\bullet$ Let $(H,s)$ be an even structure. \\\\ \n\\indent $\\bullet$ For each edge $e\\in E(H)$, let $Z_e\\subseteq V(H)$ be the set of ends of $e$ (so $|Z_e|=1$ or 2). \\\\ \\makebox[23pt]{} Let $S_e=(G_e,Z_e)$ such that $G_e$ is a trigraph with $V(G_e)\\cap V(H)=Z_e$ and \\\\ \n\\indent\\indent $*$ if $e$ is not on any cycle then $S_e$ is a spot or a thickening of a linear interval stripe, \\\\ \n\\indent\\indent $*$ if $e$ is on a cycle of length $>1$ and $s(e)=0$ then $S_e$ is a thickening of a spring,\\\\ \n\\indent\\indent $*$ if $e$ is on a cycle of length $>1$ and $s(e)=1$ then $S_e$ is a spot, \\\\ \n\\indent\\indent $*$ if $e$ is a loop then $S_e\\in {\\cal C}'$. \\\\ \n\\indent $\\bullet$ For all distinct $e,f\\in E(H)$, $V(G_e)\\cap V(G_f)\\subseteq Z_e\\cap Z_f$. \\\\ \n\\indent $\\bullet$ Let $V=\\cup_{e\\in E(H)} V(G_e)\\backslash Z_e$ and let $\\theta$ be given by: for any $u,v\\in V$ \\\\\n\\indent\\indent $*$ if $u,v\\in V(G_e)\\backslash Z_e$ for some $e\\in E(H)$ then $\\theta(uv)=\\theta_{G_e}(uv)$ \\\\ \n\\indent\\indent $*$ if $u\\in\\! N_{G_e}(x)$ and $v\\in\\! N_{G_f}(x)$ for distinct $e,f\\in\\! E(H)$ with a common end $x$, then $\\theta(uv)=\\!1$ \\\\ \n\\indent\\indent $*$ in all other cases, $\\theta(uv)=-1$. \\\\ \n\\indent $\\bullet$ We will write $G=\\Omega(H,s,\\{S_e:e\\in E(H)\\})$.\n\n\\begin{theorem}[Chudnovsky and Plumettaz \\cite{CP}] \\label{thm:cp}\nA connected trigraph is claw-free and Berge if and only if it is a thickening of a trigraph in $\\cal C$ or an evenly structured linear interval join.\n\\end{theorem}\n\nIn the following we produce a different formulation of this result. A vertex $x$ of a trigraph is {\\it simplicial} if $N(x)\\ne\\emptyset$ and $\\{x\\}\\cup N(x)$ is a strong clique. For $i=1,2$, let $G_i=(V_i,\\theta_i)$ be a trigraph with a simplicial vertex $x_i$ and with $|V_i|\\ge3$. The {\\it simplicial sum} of $G_1,G_2$ (over $x_1,x_2$) is the trigraph $G=(V,\\theta)$ such that $V=(V_1-x_1)\\cup (V_2-x_2)$ and, for all distinct $v_1$, $v_2\\in V$, \\\\ \n\\indent $\\bullet$ $\\theta(v_1v_2)=\\theta_i(v_1v_2)$ if $\\{v_1,v_2\\}\\subseteq V_i$ for some $i=1,2$ \\\\ \n\\indent $\\bullet$ $\\theta(v_1v_2)=1$ if $v_i\\in N_{G_i}(x_i)$ for both $i=1,2$ \\\\ \n\\indent $\\bullet$ $\\theta(v_1v_2)=-1$ if otherwise. 
\\\\ \nWe point out that both $G_1$ and $G_2$ are contained in $G$. Moreover, using the language of \\cite{CP}, $G$ admits either a 1-join or a homogeneous set of size $\\ge2$. \n\n\\begin{lemma}\\label{lem:sum}\nLet $G$ be a simplicial sum of $G_1,G_2$. Then $G$ is claw-free if and only if both $G_1,G_2$ are; and $G$ is Berge if and only if both $G_1,G_2$ are. \n\\end{lemma}\n\nWe omit the proof since it is straightforward. This lemma suggests that we can characterize claw-free Berge trigraphs by determining all such trigraphs that are not simplicial sums. In the following we describe these trigraphs.\n\nLet $\\cal I$ be the class of linear interval trigraphs. \nLet $\\cal L$ be the class of trigraphs \n$G$ such that $G^{\\ge0}$ is the line graph of a bipartite multigraph and every {\\it triangle} (a clique of size 3) of $G$ is a strong clique. Let $J_1$ be the first graph in Figure \\ref{fig:j}. We consider $J_1$ as a trigraph with no semiadjacent pairs. Let $\\mathcal J_1$ consists of trigraphs obtained from $J_1$ by deleting $k$ of its cubic vertices $(0\\le k\\le 4)$. Let $J_2(n)$ be the second trigraph in Figure \\ref{fig:j}, where $Q_1,Q_2$, and all vertical triples are strong cliques, $\\theta(uv)$ could be 0, 1, or $-1$, and all other pairs are strongly antiadjacent. Note that $J_2(0)\\in\\cal I$. Let $\\mathcal J_2$ consist of trigraphs of the form $J_2(n)-X$ for all $n\\ge 1$ and all $X\\subseteq \\{u,v\\}$. Let $\\mathcal J=\\mathcal J_1\\cup \\mathcal J_2$.\n\n\\begin{figure}[ht]\n\\centerline{\\includegraphics[scale=0.5]{J.eps}}\n\\caption{$J_1$ and $J_2$}\n\\label{fig:j}\n\\end{figure}\n\n\\begin{theorem}\\label{thm:sum}\nA connected trigraph is claw-free and Berge if and only if it is obtained by simplicial summing thickenings of trigraphs in $\\cal C\\cup L\\cup I\\cup J$.\n\\end{theorem}\n\nWe need a few lemmas in order to prove this theorem. A {\\it $1$-separation} of a multigraph $H$ is a pair $(H_1,H_2)$ of edge-disjoint proper subgraphs of $H$ such that $H_1\\cup H_2=H$ and $|V(H_1)\\cap V(H_2)|=1$. Suppose $G=\\Omega(H,s,\\{S_e\\})$. Then a 1-separation $(H_1,H_2)$ of $H$ is called {\\it trivial} if there exists $i\\in\\{1,2\\}$ such that $H_i=K_2$ and $S_f$ is a spot, where $f$ is the only edge of $H_i$. \n\n\n\\begin{lemma} \\label{lem:trivial}\nSuppose $G=\\Omega(H,s,\\{S_e\\})$ and suppose $H$ has a nontrivial 1-separation $(H_1,H_2)$. Then $G$ is a simplicial sum of two trigraphs. \n\\end{lemma}\n\n\\vspace{-2mm}\n\\noindent\\textbf{Proof.} Let $x$ be the common vertex of $H_1,H_2$. For $i=1, 2$, let $H_i'$ be obtained from $H_i$ by adding a new vertex $x_i$ and a new edge $xx_i$. Let $s_i$ be the signing of $H_i'$ which agrees with $s$ on $H_i$, and $s_i(xx_i)=1$. Since all blocks of $H_i'$ (other than $xx_i$) are blocks of $H$, $(H_i', s_i)$ is an even structure. Let $S_{xx_i}$ be a spot and let $G_i=\\Omega(H_i',s_i,\\{S_e:e\\in E(H_i')\\})$. Since separation $(H_1,H_2)$ is nontrivial, $G_i$ must have $\\ge 3$ vertices. Now it is straightforward to verify that $x_i$ is a simplicial vertex of $G_i$ ($i=1,2$) and $G$ is the simplicial sum of $G_1$ and $G_2$ over $x_1$ and $x_2$. \\hfill \\rule{4pt}{7pt}\n\n\\begin{lemma}\\label{lem:thick}\nLet $H$ be a thickening of $G$. \\\\ \n\\indent (i) $H$ is claw-free if and only if $G$ is claw-free. \\\\ \n\\indent (ii) $H$ is Berge if and only if $G$ is Berge. 
\n\\end{lemma}\n\n\\vspace{-2mm}\n\\noindent{\\bf Proof.} Part (ii) is (6.4) of \\cite{CP} and part (i) is easy to verify, as pointed out in \\cite{claw5}.\\hfill \\rule{4pt}{7pt}\n\nA trigraph $G$ is {\\it quasi-line} if $N(v)$ is the union of two strong cliques for every $v\\in V(G)$. It is easy to see that if $G$ is quasi-line then $G$ is claw-free. A trigraph $G$ is {\\it cobipartite} if $V(G)$ is the union of two strong cliques. Clearly, if $G$ is cobipartite then $G$ is quasi-line and thus is claw-free. It is also clear that every connected cobipartite trigraph with $\\ge 2$ vertices is a thickening of a two-vertex trigraph. Thus every cobipartite trigraph is Berge.\n\n\\bigskip\n\\noindent {\\bf Proof of Theorem \\ref{thm:sum}.} To prove the backward implication, by Lemma \\ref{lem:sum} and Lemma \\ref{lem:thick}, we only need to consider trigraphs $G\\in \\cal C\\cup L\\cup I\\cup J$. If $G\\in\\cal C$ then the result follows from Theorem \\ref{thm:cp}. If $G\\in\\cal I$ then $G$ is claw-free \\cite{claw5} and Berge \\cite{CP}. If $G\\in\\cal L\\cup J$ then $G$ is quasi-line and thus $G$ is claw-free. If $G\\in \\cal J$, then deleting simplicial vertices from $G$ results in a cobipartite trigraph, which implies that $G$ is Berge. Finally, assume $G\\in \\cal L$ and $G^{\\ge0}=L(B)$ is the line graph of a bipartite multigraph $B$. We need to show that $G$ is Berge. Since no semiadjacent pairs are contained in a triangle, every hole of $G$ must come from a cycle of $B$ and thus $G$ contains no odd holes. If $G$ has an antihole $v_1v_2...v_nv_1$ with $n\\ge 7$, then we consider the restriction of $G$ on $v_1,...,v_6$. If $\\theta(v_iv_{i+1})=-1$ for all $i=1,...,5$, then the graph $X$ formed by $\\{v_iv_j:\\theta(v_iv_j)\\ge0\\}$ would be the complement of a path on six vertices, which is one of the minimal non-line-graphs. This is impossible since $X$ is an induced subgraph of $L(B)$. So $\\theta(v_iv_{i+1})=0$ holds for some $i$, which makes $v_i,v_{i+1},v_k$ a triangle for some $k$. This contradiction (two semiadjacent vertices are contained in a triangle) shows that $G$ contains no antihole of length $\\ge7$. Thus $G$ is Berge, which completes the proof of the backward direction.\n\nTo prove the forward implication, by Theorem \\ref{thm:cp}, we assume $G=\\Omega(H,s,\\{S_e\\})$. Since $G$ is connected, $H$ is connected as well. By Lemma \\ref{lem:trivial}, we also assume that all 1-separations of $H$ are trivial. Let $U$ be the set of all degree-one vertices $u$ of $H$ such that $S_e$ is a spot, where $e$ denotes the only edge incident with $u$. We assume $V(H)\\ne U$ because otherwise $H=K_2$ and $G=K_1$ and thus the result holds. \nLet $H_0=H-U$. Note that $H_0$ is connected, as $H$ is connected. Moreover, by its construction, $H_0$ does not have a 1-separation. Thus either $H_0=K_1$ or $H_0$ is a block of $H$.\n\nSuppose $H_0$ is $K_1$ or $K_2$. It follows that $H$ is a tree with 1, 2, or 3 edges. Moreover, $S_e$ is a thickening of a linear interval stripe for at most one $e$, and every other $S_e$ is a spot. In all cases, it is routine to check that $G$ is a thickening of a trigraph in $\\cal I$.\n\nSuppose $H_0$ is a loop $e$. Let $S_e=(G_e,\\{z\\})\\in\\cal C'$ and let $G_e$ be a thickening of $C\\in\\cal C$. If $H$ has $\\ge2$ edges then $H$ consists of $e$ and a pendent edge $f$ with $S_f$ a spot. It follows that $G=G_e$, which is a thickening of a trigraph in $\\cal C$. So we may assume that $e$ is the only edge of $H$ and $G=G_e-z$. 
If $z$ is not the only vertex of $X_{a_i}$ (here we use the notation in the definition of $\\cal C'$) then $G$ is also a thickening of $C$. If $z$ is the unique vertex of $X_{a_i}$ then $G$ is cobipartite. In this case $G$ is a thickening of a two-vertex trigraph and thus $G$ is a thickening of a trigraph in $\\cal I$.\n\nSuppose none of the last two cases occurs. Then $H_0$ is a block in which every edge is on a cycle of length $\\ge2$. Let $s_0$ be the restriction of $s$ on $H_0$. Then $(H_0, s_0)$ is either in $\\mathcal F_1\\cup \\mathcal F_2$ or evenly signed. First we assume $(H_0, s_0)$ is evenly signed. Then $(H,s)$ is also evenly signed. Moreover, $S_e$ is a thickening of a spring for every edge in $E_0=\\{e\\in E(H_0): s(e)=0\\}$, and $S_e$ is a spot for every other edge of $H$. Let $S_e'$ be a spring for each $e\\in E_0$ and let $S_e'=S_e$ for every other edge of $H$. Then $G$ is a thickening of $G'=\\Omega(H,s,\\{S_e'\\})$. Now we only need to show that $G'\\in \\cal L$. Let $H'$ be obtained from $H$ by subdividing each edge in $E_0$ exactly once. Then $H'$ is bipartite. It follows from the construction of $\\Omega$ that adjacent pairs of $G'$ are exactly adjacent pairs of the line graph $L(H')$. In addition, all semiadjacent pairs of $G'$ come from a spring, and thus no such pair is contained in a triangle. Therefore, $G'$ belongs to $\\cal L$, as required.\n\n\nIt remains to consider the case $(H_0, s_0)\\in \\mathcal F_1\\cup \\mathcal F_2$. If $(H_0, s_0)\\in \\mathcal F_1$, then $H$ is obtained from $K_4$ by adding parallel edges and adding pendent edges to distinct vertices. Moreover, every $S_e$ is a spot. It follows that $G$ is an ordinary graph (meaning that $G$ has no semiadjacent pairs) and this graph is exactly $L(H)$. Now it is clear that $G$ is a thickening of $L(si(H))$, which belongs to $\\mathcal J_1$. So we assume $(H_0, s_0)\\in \\mathcal F_2$. Let $V(H_0)=\\{x_1, x_2, y_1, ..., y_m\\}$ ($m\\ge1$) such that $x_i$ ($i=1,2$) is adjacent to all other vertices. Like before, we assume that $H_0$ has no parallel edges, except for two possible edges $e_0,e_1$ between $x_1,x_2$, and such that $s(e_0)=0$ and $s(e_1)=1$. We also assume that $S_{e_0}$ is a spring, if $e_0$ is present. Suppose $H$ is obtained by adding pendent edges to $y_1,...,y_n$ ($n\\ge0$) and to $k$ of $x_1,x_2$ ($0\\le k\\le 2$). If $e_0$ is present, then $G$ is a thickening of $J_2(n)$, where $\\theta(uv)=0$. So assume that $e_0$ is not in $H$, and thus $G=L(H)$. For $i=1,2$, let $Q_i$ be the clique of $G$ formed by edges of $H$ incident with $x_i$. Let $Q_i'=Q_i-\\{x_1x_2, x_iy_1,...,x_iy_n\\}$. If $Q_1'\\ne\\emptyset$ is neither complete nor anticomplete to $Q_2'\\ne\\emptyset$, then again $G$ is a thickening of $J_2(n)$ with $\\theta(uv)=0$. In the remainder cases (which are: some $Q_i'$ is empty, or $Q_1'\\ne\\emptyset$ is complete or anticomplete to $Q_2'\\ne\\emptyset$), if $n=0$ then $G$ is a thickening of $K_3$, and if $n\\ge 1$ then $G=J_2(n)-X$ for some $X\\subseteq \\{u,v\\}$. \\hfill \\rule{4pt}{7pt}\n\n\\section{Claw-free box-perfect graphs}\n\nIn this section we prove the following.\n\n\\begin{theorem} \\label{thm:claw}\nA claw-free perfect graph is box-perfect if and only if it is $S_3$-free.\n\\end{theorem}\n\nWe divide the proof into several lemmas. \nLet $G$ be a trigraph. \nWe call $G$ a {\\it sun} if $G\\in tri(S_3)$. \nWe call $G$ an {\\it incomparability} trigraph if $G^{\\ge0}$ is an incomparability graph. 
\nWe call $G$ {\\it elementary} if it is a thickening of a trigraph in $\\cal L$. We remark that when an elementary trigraph has no semiadjacent pairs then they are exactly {\\it elementary graphs} discussed in \\cite{maffray}.\n\n\\begin{lemma} \\label{lem:clawsun}\nLet $G$ be a connected Berge trigraph. If $G$ is \\{claw, sun\\}-free then $G$ is obtained by simplicial summing incomparability trigraphs and elementary trigraphs. \n\\end{lemma}\n\n\\vspace{-2mm}\n\\noindent{\\bf Proof.} Since $G$ is connected, Berge, and claw-free, by Theorem \\ref{thm:sum}, $G$ is obtained by simplicial summing thickenings of trigraphs in $\\cal C\\cup L\\cup I\\cup J$. Therefore, we may assume that $G$ is a thickening of a trigraph $G_0\\in \\cal C\\cup L\\cup I\\cup J$. If $G_0\\in \\cal L$ then $G$ is elementary and we are done. If $G_0\\in \\cal C$ then $G_0|\\{a_1,a_2,a_3,x_1,x_2,x_3\\}$ (here we are using the notation in the definition of $\\cal C$) is a sun and thus $G$ contains a sun, which is impossible. So we assume that $G_0\\in \\cal I\\cup J$. In the following we prove that $G$ is an incomparability trigraph.\n\nSuppose $G_0\\in \\cal I$. Then vertices of $G_0$ can be ordered as $v_1,...,v_n$ such that if $i1$ and we put the rest into $\\Lambda_1$. For each $x\\in V(H_i) - (C_1^{(i)}\\cap X_{v_i})$, it is clear that $\\min\\{d_{\\Lambda_1}(x), d_{\\Lambda_2}(x)\\}\\ge \\lfloor d_{\\Lambda}(x)\/2\\rfloor$. For each $x\\in C_1^{(i)}\\cap X_{v_i}$, we have $d_{\\Lambda}(x)=1+n_i$. Our partition yields $d_{\\Lambda_2}(x) = n_i\/2$, which also leads to $\\min\\{d_{\\Lambda_1}(x), d_{\\Lambda_2}(x)\\}\\ge \\lfloor d_{\\Lambda}(x)\/2\\rfloor$. \\hfill \\rule{4pt}{7pt}\n\n\\bigskip\n\\noindent {\\bf Proof of Theorem \\ref{thm:claw}.} The forward implication is clear, so we only need to consider the backward implication. Let $G$ be perfect and $\\{claw, S_3\\}$-free. By Lemma \\ref{lem:clawsun}, each component of $G$ is obtained by simplicial summing incomparability graphs and elementary graphs. By Theorem \\ref{thm:incomp} and Lemma \\ref{lem:elementary}, incomparability graphs and elementary graphs are ESP. Thus $G$ is ESP by Lemma \\ref{lem:simsum}, which proves that $G$ is box-perfect by Theorem \\ref{thm:esp}. \\hfill \\rule{4pt}{7pt}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{S1}\n\nNumerous test statistics can be formulated or approximated in terms of\ndegenerate $U$- or $V$-type statistics. Examples include the Cram\\'er--von\nMises statistic, the Anderson--Darling statistic or the $\\chi\n^2$-statistic.\nFor i.i.d.~random variables the limit distributions of $U$- and\n$V$-statistics can be derived via a spectral decomposition of their\nkernel if the latter is squared integrable.\nTo use the same method for dependent data, often restrictive\nassumptions are required whose validity is quite complicated or even\nimpossible to verify in many cases. The first of our two main results\nis the derivation of the asymptotic distributions of $U$- and\n$V$-statistics under assumptions that are fairly easy to check. This\napproach is based on a wavelet decomposition instead of a spectral\ndecomposition of the kernel.\n\nThe limit distributions for both independent and dependent observations\ndepend on certain parameters which in turn depend on the underlying\nsituation in a complicated way. Therefore, problems arise as soon as\ncritical values for test statistics of $U$- and $V$-type have to be\ndetermined. 
The bootstrap offers a convenient way to circumvent these problems; see Arcones and Gin{\\'e} \\cite{ArGi92}, Dehling and Mikosch \\cite{DM94} or Leucht and Neumann~\\cite{LeuNeu08} for the i.i.d.~case. To our knowledge, there are no results concerning bootstrapping general degenerate $U$-statistics of non-independent observations. As a second main result of the paper, we establish consistency of model-based bootstrap methods for $U$- and $V$-type statistics of weakly dependent data.\n\nIn order to describe the dependence structure of the sample, we do not invoke the concept of mixing although a great variety of processes satisfy these constraints and various tools of probability theory and statistics such as central limit theorems, probability and moment inequalities can be carried over from the i.i.d.~setting to mixing processes. However, these methods of measuring dependence are inappropriate in the present context since we are concerned not only with the asymptotic behaviour of $U$- and $V$-type statistics but also with bootstrap consistency. Model-based bootstrap methods can yield samples that are no longer mixing even though the original sample satisfies some mixing condition. A simple example is presented in Section~\\ref{SS42}. There we consider a model-specification test within the class of nonlinear $\\operatorname{AR}(1)$ processes. Under ${\\mathcal H_0}$, $X_k=g_0(X_{k-1})+\\varepsilon_k$, where $g_0$ is Lipschitz contracting and $(\\varepsilon_k)_k$ is a sequence of i.i.d.~centered innovations. It is most natural to draw the bootstrap innovations $(\\varepsilon_k^*)_k$ via Efron's bootstrap from the recentered residuals first. Then the bootstrap counterpart of $(X_k)_k$ is generated iteratively by choosing an initial variable~$X_0^*$ independently of $(\\varepsilon_k^*)_k$ and defining $X_k^*=g_0(X_{k-1}^*)+\\varepsilon_k^*$. Due to the discreteness of the bootstrap innovations, commonly used coupling techniques to prove mixing properties for Markovian processes fail; see also Andrews \\cite{An84}. It turns out that the characterization of dependence structures introduced by Dedecker and Prieur~\\cite{DP05} is exceptionally suitable here. Based on their $\\tau$-dependence coefficient it is possible to construct an $L_1$-coupling in the following sense. Let ${\\mathcal M}$ denote a $\\sigma$-algebra generated by sample variables of the ``past'' and let $X$ be a random variable of a certain ``future'' time point. Then, the minimal $L_1$-distance between $X$ and a random variable that has the same distribution as $X$ but that is independent of ${\\mathcal M}$ coincides with the $\\tau$-dependence coefficient $\\tau({\\mathcal M}, X)$.\n\nWe exploit this coupling property in order to derive the asymptotic distribution for the original as well as the bootstrap statistics of degenerate $U$-type. Basically, both proofs follow the same lines. First, the (almost) Lipschitz continuous kernels of the $U$-statistics are approximated by a finite wavelet series expansion. There are two crucial points that assure asymptotic negligibility of the approximation error. On the one hand, the smoothness of the kernel function carries over to its wavelet approximation uniformly in scale, cf.~Lemma~\\ref{l.2}. On the other hand, Lipschitz continuity of the kernel and the $L_1$-coupling property of the underlying $\\tau$-dependent sample perfectly fit together. 
The next step consists of applying a central limit theorem and the continuous mapping theorem to determine the limits of the approximating statistics of $U$-type. Based on these investigations, the asymptotic distribution of the $U$-statistic and its bootstrap counterpart is then deduced via passage to the limit. It can be expressed as an infinite weighted sum of normal variables.\n\nOur paper is organized as follows. We start with an overview of asymptotic results on degenerate $U$-type statistics of dependent random variables. In Section~\\ref{SS22}, we introduce the underlying concept of weak dependence and derive the asymptotic distributions of $U$- and $V$-statistics. On the basis of these results, we deduce consistency of general bootstrap methods in Section~\\ref{S3}. Some applications of the theory to hypothesis testing are presented in Section~\\ref{S4}. All proofs are deferred to the final Section~\\ref{S5}.\n\n\\section{Asymptotic distributions of $U$- and $V$-statistics}\\label{S2}\n\n\\subsection{Survey of literature}\\label{SS21}\n\nLet $(X_n)_{n\\in\\N}$ be a sequence of $\\R^d$-valued random variables with common distribution~$P_X$. In the case of i.i.d.~random variables, the limit distributions of degenerate $U$- and $V$-type statistics, that is,\n\\[\nn U_n=\\frac{1}{n}\\sum_{j=1}^n\\sum_{k\\neq j} h(X_j,X_k)\\quad \\mbox{and}\\quad n V_n=\\frac{1}{n}\\sum_{j,k=1}^n h(X_j,X_k),\n\\]\nwith $h\\dvt \\R^d\\times\\R^d\\to\\R$ symmetric and $\\int_{\\R^d} h(x,y)P_X(\\mathrm{d}x)=0, \\forall y\\in\\R^d,$ can be derived by using a spectral decomposition of the kernel, $h(x,y)=\\sum_{k=1}^\\infty\\lambda_k \\Phi_k(x)\\Phi_k(y)$, which holds true in the $L_2$-sense. Here, $(\\Phi_k)_k$ denote orthonormal eigenfunctions and $(\\lambda_k)_k$ the corresponding eigenvalues of the integral equation\n\\begin{equation}\\label{eq.inteq}\n\\int_{\\R^d}h(x,y)g(y)P_X(\\mathrm{d}y)=\\lambda g(x).\n\\end{equation}\nApproximate $n U_n$ by $n U_n^{(K)}=\\sum_{k=1}^K \\lambda_k \\{ ( n^{-1\/2} \\sum_{i=1}^n \\Phi_k(X_i) )^2 - n^{-1} \\sum_{i=1}^n \\Phi_k^2(X_i) \\}$. Then the sum within the round brackets is asymptotically standard normal while the latter sum converges in probability to 1. Finally, one obtains\n\\begin{equation}\\label{eq.ustat}\nn U_n \\stackrel{d}{\\longrightarrow}\\sum_{k=1}^\\infty\\lambda_k (Z_k^2-1),\n\\end{equation}\nwhere $(Z_k)_{k}$ is a sequence of i.i.d.~standard normal random variables; cf.~Serfling \\cite{serf80}. If additionally $\\E|h(X_1,X_1)|<\\infty$, the weak law of large numbers and Slutsky's theorem imply $n V_n\\stackrel{d}{\\longrightarrow}\\sum_{k=1}^\\infty\\lambda_k (Z_k^2 -1)+ \\E h(X_1,X_1)$. (Here, $\\stackrel{d}{\\longrightarrow}$ denotes convergence in distribution.)\n\nSo far, most previous attempts to derive the limit distributions of degenerate $U$- and $V$-statistics of dependent random variables are based on the adoption of this method of proof. Eagleson \\cite{E79} developed the asymptotic theory in the case of a strictly stationary sequence of $\\phi$-mixing, real-valued random variables under the assumption of absolutely summable eigenvalues. This condition is satisfied if the kernel function is of the form $h(x,y)=\\int_\\R h_1(x,z)h_1(z,y)P_X(\\mathrm{d}z)$ and $h_1$ is squared integrable w.r.t. 
$P_X$.\nUsing general heavy-tailed weight functions instead of $P_X$, the\neigenvalues are not necessarily absolutely summable; see, for example,\nde Wet \\cite{dew87}.\nCarlstein \\cite{Car88} analysed $U$-statistics of $\\alpha$-mixing, real-valued\nrandom variables in the case of finitely many eigenfunctions. He\nderived a~limit distribution of the form (\\ref{eq.ustat}), where\n$(Z_k)_{k\\in\\N}$ is a sequence of centered normal random variables.\nDenker \\cite{Den82} considered stationary sequences $(X_n=f(Y_n,Y_{n+1},\\ldots\n))_n$ of functionals of $\\beta$-mixing random variables $(Y_n)_n$. He\nassumed $f$ and the cumulative distribution function of $X_1$ to be H\\\"\nolder continuous. Imposing some smoothness condition on~$h$, the limit\\vadjust{\\goodbreak}\ndistribution of $n U_n$ was derived under the additional\nassumption~$\\|\\Phi_k\\|_\\infty<\\infty$, $\\forall k\\in\\N$. The condition on $(\\Phi\n_k)_k$ is difficult or even impossible to check in a multitude of cases\nsince this requires to solve the associated integral equation~(\\ref\n{eq.inteq}). Similar difficulties occur if one wants to apply the\nresults of Dewan and Prakasa~Rao~\\cite{DPR01} or Huang and Zhang \\cite{HuZh06}. They studied $U$-statistics\nof associated, real-valued random variables. Besides the absolute\nsummability of the eigenvalues, certain regularity conditions have to\nbe satisfied uniformly by the eigenfunctions in order to obtain the\nasymptotic distribution of~$n U_n$.\n\nA different approach was used by Babbel \\cite{Ba89} to determine the limit\ndistribution of $U$-statistics of $\\phi$- and $\\beta$-mixing random\nvariables. She deduced the limit distribution via a Haar wavelet\ndecomposition of the kernel and empirical process theory without\nimposing the critical conditions mentioned above. However, she presumed\nthat $\\iint h(x,y) P_{X_k,X_{k+n}}(\\mathrm{d}x, \\mathrm{d}y)=0, \\forall k\\in\\Z,\nn\\in\\N$. This assumption does in general not hold true within our\napplications in Section~\\ref{S3}.\nMoreover, this approach is not suitable when dealing with\n$U$-statistics of $\\tau$-dependent random variables since Lipschitz\ncontinuity will be the crucial property of the (approximating) kernel\nin order to exploit the underlying dependence structure.\n\n\\subsection{Main results}\\label{SS22}\n\nLet $(X_n)_{n\\in\\N}$ be a sequence of $\\R^d$-valued random variables on\nsome probability space $(\\Omega,{\\mathcal A}, P)$ with common\ndistribution~$P_X$.\nIn this subsection, we derive the limit distributions of\n\\[\nn U_n=\\frac{1}{n}\\sum_{j=1}^n\\sum_{k\\neq j} h(X_j,X_k) \\quad \\mbox\n{and} \\quad n V_n=\\frac{1}{n}\\sum_{j,k=1}^n h(X_j,X_k),\n\\]\nwhere $h:\\R^d\\times\\R^d\\to\\R$ is a symmetric function with $\\int_{\\R\n^d} h(x,y)P_X(\\mathrm{d}x)=0, \\forall y\\in\\R^d$.\nIn order to describe the dependence structure of $(X_n)_{n\\in\\N}$, we\nrecall the definition of the $\\tau$-dependence coefficient for $\\R\n^d$-valued random variables of Dedecker and Prieur \\cite{DP05}.\n\n\\begin{defn}\\label{def1}\nLet $(\\Omega,{\\mathcal A}, P)$ be a probability space, ${\\mathcal M}$~a\nsub-$\\sigma$-algebra of ${\\mathcal A}$ and $X$ an $\\R^d$-valued random\nvariable. 
Assume that $\\E\\|X\\|_{l_1} < \\infty$, where $\\|x\\|_{l_1}=\\sum_{i=1}^d |x_i|,$ and define\n\\[\n\\tau({\\mathcal M}, X)=\\E\\biggl( \\sup_{f\\in\\Lambda_1(\\R^d)} \\biggl|\\int_{\\R^d} f(x) P_{X|{\\mathcal M}}(\\mathrm{d}x)-\\int_{\\R^{d}} f(x)P_X(\\mathrm{d}x)\\biggr|\\biggr).\n\\]\nHere, $P_{X|{\\mathcal M}}$ denotes the conditional distribution of $X$ given ${\\mathcal M}$ and $\\Lambda_1(\\R^d)$ denotes the set of 1-Lipschitz functions from $\\R^d$ to $\\R$.\n\\end{defn}\n\n\nWe assume\n\n\\begin{enumerate}[(A1)]\n\\item[(A1)]\n\\begin{enumerate}[(ii)]\n\\item[(i)]\n$(X_n)_{n\\in\\N}$ is a (strictly) stationary sequence of $\\R^d$-valued random variables on some probability space $(\\Omega,{\\mathcal A}, P)$ with common distribution $P_X$ and $\\E\\|X_1\\|_{l_1} < \\infty$.\n\\item[(ii)]\nThe sequence $(\\tau_r)_{r\\in\\N}$, defined by\n\\begin{eqnarray*}\n\\tau_r &=& \\sup\\{\\tau(\\sigma(X_{s_1},\\ldots,X_{s_u}),(X_{t_1}^\\prime,X_{t_2}^\\prime,X_{t_3}^\\prime)^\\prime) |\\\\\n&&\\phantom{\\sup\\{}{} u\\in\\N, s_1\\leq\\cdots\\leq s_u< s_u+r\\leq t_1\\leq t_2\\leq t_3 \\in\\N\\},\n\\end{eqnarray*}\nsatisfies $\\sum_{r=1}^\\infty r \\tau_r^{\\delta} <\\infty$ for some $\\delta\\in(0,1)$.\n(Here, prime denotes the transposition.)\n\\end{enumerate}\n\\end{enumerate}\n\n\\begin{rem}\\label{r.1}\nIf $\\Omega$ is rich enough, due to Dedecker and Prieur \\cite{DP04} the validity of (A1) allows for the construction of a random vector $(\\widetilde X_{t_1}^\\prime,\\widetilde X_{t_2}^\\prime,\\widetilde X_{t_3}^\\prime)^\\prime\\stackrel{d}{=}(X_{t_1}^\\prime,X_{t_2}^\\prime,X_{t_3}^\\prime)^\\prime$ that is independent of $X_{s_1},\\ldots,X_{s_u}$ and such that\n\\begin{equation}\\label{eq.a1}\n\\sum_{i=1}^3 \\E\\|\\widetilde X_{t_i}-X_{t_i}\\|_{l_1}\\leq\\tau_r.\n\\end{equation}\n\\end{rem}\n\nThe notion of $\\tau$-dependence is more general than mixing. If, for example, $(X_n)_n$ is $\\beta$-mixing, we obtain an upper bound for the dependence coefficient $\\tau_r\\leq 6\\int_0^{\\beta(r)}Q_{|X_1|}(u) \\,\\mathrm{d}u$, where $Q_{|X_1|}(u)=\\inf\\{t\\in\\R | P(\\|X_1\\|_{l_1}>t)\\leq u\\}, u\\in[0,1],$ and $\\beta(r)$ denotes the ordinary $\\beta$-mixing coefficient $\\beta(r) :=\\E\\sup_{B\\in\\sigma(X_s, s\\geq t+r), t\\in\\Z}|P(B|\\sigma(X_s, s\\leq t))-P(B)|.$ This is a~consequence of Remark~2 of Dedecker and Prieur \\cite{DP04}.\nMoreover, inequality~(\\ref{eq.a1}) immediately implies\n\\begin{equation}\\label{eq.cov}\n| \\operatorname{cov}( h(X_{s_1},\\ldots,X_{s_u}),k(X_{t_1},\\ldots,X_{t_v}) ) |\\leq2\\|h\\|_\\infty \\operatorname{Lip}(k)\\biggl\\lceil\\frac{v}{3}\\biggr\\rceil\\tau_r\n\\end{equation}\nfor $s_1\\leq\\cdots\\leq s_u<s_u+r\\leq t_1\\leq\\cdots\\leq t_v$, any bounded function $h$, and any Lipschitz continuous function $k$.\n\nBesides (A1), we assume the following conditions on the kernel.\n\\begin{enumerate}[(A2)]\n\\item[(A2)] For some $\\nu>(2-\\delta)\/(1-\\delta)$, with $\\delta$ as in (A1)(ii), and an independent copy $\\widetilde X_1$ of $X_1$:\n\\[\n\\sup_{k\\in\\N}\\E|h(X_1,X_{1+k})|^{\\nu}<\\infty \\quad\\mbox{and}\\quad \\E|h(X_1,\\widetilde X_1)|^{\\nu}<\\infty.\n\\]\n\\item[(A3)] The kernel $h$ is Lipschitz continuous.\n\\end{enumerate}\nUsing an appropriate kernel truncation, it is possible to reduce the problem of deriving the asymptotic distribution of $n U_n$ to statistics with bounded kernel functions.\n\n\\begin{lem}\\label{l.1}\nSuppose that \\textup{(A1)}, \\textup{(A2)}, and \\textup{(A3)} are fulfilled. 
Then there exists a\nfamily of bounded functions $(h_c)_{c\\in\\R^+}$ satisfying \\textup{(A2)} and \\textup{(A3)}\nuniformly such that\n\\begin{equation}\\label{eq1}\n\\lim_{c\\to\\infty}\\sup_{n\\in\\N} n^2 \\E(U_n-U_{n,c})^2=0,\n\\end{equation}\nwhere $U_{n,c}=n^{-2}\\sum_{j=1}^n\\sum_{k\\neq j} h_c(X_j,X_k)$.\n\\end{lem}\n\nAfter this simplification of the problem, we intend to develop a\ndecomposition of the kernel that allows for the application of a\ncentral limit theorem (CLT) for weakly dependent random variables.\nOne could try to imitate the proof of the i.i.d.~case. According to the\ndiscussion in the previous subsection, this leads to prerequisites that\ncan hardly be checked in numerous cases. Therefore, we do not use a\nspectral decomposition of the kernel but a wavelet decomposition. It\nturns out that Lipschitz continuity is the central property the kernel\nfunction should satisfy in order to exploit~(\\ref{eq.a1}). For this\nreason, the choice of Haar wavelets, as they were employed by Babbel \\cite\n{Ba89}, is inappropriate in the present situation. Instead, the\napplication of Lipschitz continuous scale and wavelet functions is more\nsuitable.\n\nIn the sequel, let $\\phi$ and $\\psi$ denote scale and wavelet functions\nassociated with an one-dimensional multiresolution analysis. As\nillustrated by Daubechies \\cite{Dau02}, Section~8, these functions can be selected\nin such a manner that they possess the following\nproperties:\\looseness=1\n\\begin{enumerate}[(1)]\n\\item[(1)] $\\phi$ and $\\psi$ are Lipschitz continuous,\n\\item[(2)] $\\phi$ and $\\psi$ have compact support,\n\\item[(3)] $\\int_{-\\infty}^{\\infty}\\phi(x)\\, \\mathrm{d}x=1$ and $\\int_{-\\infty\n}^{\\infty}\\psi(x) \\,\\mathrm{d}x=0.$\n\\end{enumerate}\\looseness=0\nIt is well known that an orthonormal basis in $L_2(\\R^d)$ can be\nconstructed from $\\phi$ and~$\\psi$. For this purpose, define $E:=\\{0,1\\}\n^d\\setminus\\{0_d\\}$, where $0_d$ denotes the $d$-dimensional null\nvector. In addition, set\n\\[\n\\varphi^{(i)}:=\n\\cases{\n\\phi& \\quad $\\mbox{for } i=0,$\\vspace*{2pt}\\cr\n\\psi&\\quad $\\mbox{for } i=1$\n}\n\\]\nand define functions $\\Psi^{(e)}_{j,k}\\dvtx \\R^d\\to\\R, j\\in\\Z\n,k=(k_1,\\ldots,k_d)^\\prime\\in\\Z^d,$ by\n\\[\n\\Psi^{(e)}_{j,k}(x):=2^{jd\/2}\\prod_{i=1}^d \\varphi\n^{(e_i)}(2^{j}x_i-k_i)\\qquad \\forall e=(e_1,\\ldots,e_d)^\\prime\\in E,\nx=(x_1,\\ldots,x_d)^\\prime\\in\\R^d.\n\\]\nThe system\n$(\\Psi^{(e)}_{j,k})_{e \\in E, j\\in\\Z, k\\in\\Z^d}$\nis an orthonormal basis of $L_2(\\R^d)$, see Wojtaszczyk \\cite{Woj97}, Section~5. The\nsame holds true for\n$\n(\\Phi_{0,k})_{k\\in\\Z^d }\\cup(\\Psi^{(e)}_{j,k})_{ j\\geq\n0,e \\in E, k\\in\\Z^d},\n$\nwhere the functions $\\Phi_{j,k}\\dvtx \\R^d\\to\\R$ are given by $\\Phi\n_{j,k}(x):=2^{jd\/2}\\prod_{i=1}^d \\phi(2^{j}x_i-k_i), j\\in\\Z, k\\in\\Z^d$.\n\nNow, an $L_2$-approximation of $n U_{n,c}$ by a statistic based on a\nwavelet approximation of $h_c$ can be established. 
To this end, we\nintroduce $\\widetilde h^{(K,L)}_c$ with\n\\begin{eqnarray}\\label{eq.hckl}\n\\widetilde h^{(K,L)}_c(x,y)\n&:=& \\sum_{ k_1,k_2 \\in\\{-L,\\ldots,L\\}^d } \\alpha^{(c)}_{k_1,k_2}\\Phi\n_{0,k_1}(x)\\Phi_{0,k_2}(y)\n\\nonumber\n\\\\[-8pt]\n\\\\[-8pt]\n\\nonumber\n&&{}+ \\sum_{j=0}^{J(K)-1}\\sum_{ k_1,k_2\\in\\{-L,\\ldots,L\\}^d}\\sum_{e\\in\\bar\nE} \\beta_{j;k_1,k_2}^{(c,e)}\\Psi_{j;k_1,k_2}^{(e)}(x,y),\n\\end{eqnarray}\nwhere $\\bar E:=(E\\times E)\\cup(E\\times\\{0_d\\})\\cup(\\{0_d\\}\\times E)$,\n\\[\n\\Psi_{j;k_1,k_2}^{(e)}:=\n\\cases{\n\\Psi_{j,k_1}^{(e_1)} \\Psi_{j,k_2}^{(e_2)} &\\quad$\\mbox{for } (e_1^\\prime\n,e_2^\\prime)^\\prime\\in E\\times E,$\\vspace*{2pt}\\cr\n\\Psi_{j,k_1}^{(e_1)} \\Phi_{j,k_2} &\\quad$\\mbox{for } (e_1^\\prime,e_2^\\prime\n)^\\prime\\in E\\times\\{0_d\\},$\\vspace*{2pt}\\cr\n\\Phi_{j,k_1} \\Psi_{j,k_2}^{(e_2)} &\\quad$\\mbox{for } (e_1^\\prime,e_2^\\prime\n)^\\prime\\in\\{0_d\\}\\times E,$\n}\n\\]\n$\\alpha_{k_1,k_2}^{(c)}=\\iint_{\\R^d\\times\\R^d}h_c(x,y) \\Phi\n_{0,k_1}(x) \\Phi_{0,k_2}(y) \\,\\mathrm{d}x \\,\\mathrm{d}y$ and $\\beta\n^{(c,e)}_{j;k_1,k_2}=\\iint_{\\R^d\\times\\R^d}h_c(x,y)\\times \\Psi\n_{j;k_1,k_2}^{(e)}(x,\\allowbreak y) \\,\\mathrm{d}x \\,\\mathrm{d}y$.\nWe refer to the degenerate version of $\\widetilde h_c^{(K,L)}$ as\n$h_c^{(K,L)}$, given by\n\\begin{eqnarray*}\nh_c^{(K,L)}(x,y)&:= & \\widetilde h_c^{(K,L)}(x,y)-\\int_{\\R\n^d}\\widetilde h_c^{(K,L)}(x,y)P_X(\\mathrm{d}x)-\\int_{\\R^d}\\widetilde\nh_c^{(K,L)}(x,y)P_X(\\mathrm{d}y)\\\\\n&&{}+\\iint_{\\R^d\\times\\R^d}\\widetilde\nh_c^{(K,L)}(x,y)P_X(\\mathrm{d}x)P_X(\\mathrm{d}y).\n\\end{eqnarray*}\nThe associated $U$-type statistic will be denoted by $U_{n,c}^{(K,L)}$.\n\n\\begin{lem}\\label{l.5}\nAssume that \\textup{(A1)}, \\textup{(A2}), and \\textup{(A3)} are fulfilled. Then the sequence of\nindices $(J(K))_{K\\in\\N}$ in (\\ref{eq.hckl}) with $J(K)\\longrightarrow\n_{K\\to\\infty} \\infty$ can be chosen such that\n\\[\n\\lim_{K\\to\\infty}\\mathop{\\lim\\sup}_{L\\to\\infty}\\sup_{n\\in\\N}n^2 \\E\n\\bigl(U_{n,c}-U_{n,c}^{(K,L)}\\bigr)^2= 0.\n\\]\n\\end{lem}\n\nEmploying the CLT of Neumann and Paparoditis \\cite{NeuPa05} and the continuous mapping\ntheorem, we obtain the limit distribution of $n U_{n,c}^{(K,L)}$.\nFinally, based on this result, the asymptotics of the $U$-type\nstatistic $n U_n$ can be derived. Moreover, a weak law of large\nnumbers (Lemma~\\ref{l.lln} in Section~\\ref{SS52}) allows for\ndeducing the limit distribution of $n V_n$ since $n V_n=n\nU_n+n^{-1}\\sum_{k=1}^n h(X_k,X_k)$.\n\nBefore stating the main result of this section, we introduce constants\n$A_{k_1,k_2}:=\\operatorname{cov}(\\Phi_{0,k_1}(X_1), \\Phi_{0,k_2}(X_1) )$ and\n\\[\nB_{j;k_1,k_2}^{(c,e)}:=\n\\cases{\n\\operatorname{cov}\\bigl(\\Psi_{j,k_1}^{(e_1)}(X_1),\\Psi_{j,k_2}^{(e_2)}(X_1)\\bigr) &\\quad$\\mbox{for }\n(e_1^\\prime,e_2^\\prime)^\\prime\\in E\\times E,$\\vspace*{2pt}\\cr\n\\operatorname{cov}\\bigl(\\Psi_{j,k_1}^{(e_1)}(X_1),\\Phi_{j,k_2}(X_1)\\bigr) &\\quad$\\mbox{for }\n(e_1^\\prime,e_2^\\prime)^\\prime\\in E\\times\\{0_d\\},$\\vspace*{2pt}\\cr\n\\operatorname{cov}\\bigl(\\Phi_{j,k_1}(X_1),\\Psi_{j,k_2}^{(e_2)}(X_1)\\bigr) &\\quad$\\mbox{for }\n(e_1^\\prime,e_2^\\prime)^\\prime\\in\\{0_d\\}\\times E,$}\n \\qquad j\\in\\Z,\\ k_1,k_2\\in\\Z^d.\n\\]\n\n\\begin{thmm}\\label{t.1}\nSuppose that the assumptions \\textup{(A1)}, \\textup{(A2)}, and \\textup{(A3)} are fulfilled. 
Then,\nas $n\\to\\infty$,\n\\[\nn U_n \\stackrel{d}{\\longrightarrow} Z\n\\]\nwith\n\\begin{eqnarray*}\nZ&:=&\\lim_{c\\to\\infty}\\Biggl(\n\\sum_{ k_1,k_2 \\in\\Z^d}\\alpha^{(c)}_{ k_1,k_2}[Z_{k_1} Z_{k_2}-A_{k_1,k_2}]\n\\\\\n&&\\hphantom{\\lim_{c\\to\\infty}(}{}+ \\sum_{j = 0}^\\infty\n\\sum_{k_1,k_2 \\in\\Z^d} \\sum_{e=(e_1^\\prime,e_2^\\prime)^\\prime\\in\\bar E}\n\\beta_{j; k_1,k_2}^{(c,e)} \\bigl[Z_{j;k_1}^{(e_1)}\nZ_{j;k_2}^{(e_2)}-B_{j;k_1,k_2}^{(c,e)}\\bigr]\\Biggr).\n\\end{eqnarray*}\nHere, $(Z_k)_{k\\in\\Z^d}$ as well as $(Z_{j;k}^{(e)})_{j\\geq0, k\\in\\Z\n^d, e\\in\\{0,1\\}^d}$ are centered and jointly normally distributed\nrandom variables and the r.h.s.~converges in the $L_2$-sense.\nIf additionally $\\E|h(X_1,X_1)|<\\infty$, then\n\\[\nn V_n \\stackrel{d}{\\longrightarrow} Z+\\E h(X_1,X_1).\n\\]\n\\end{thmm}\n\nAs in the case of i.i.d.~random variables, the limit distributions of\n$n U_n$ and $n V_n$ are, up to a constant, weighted sums of products\nof centered normal random variables. In contrast to many other results\nin the literature, the prerequisites of this theorem, namely moment\nconstraints and Lipschitz continuity of the kernel, can be checked\nfairly easily in many cases. Nevertheless, the asymptotic distribution\nhas a complicated structure. Hence, quantiles can hardly be determined\non the basis of the previous result. However, we show in the following\nsection that the conditional distributions of the bootstrap\ncounterparts of $n U_n$ and $n V_n$, given $X_1,\\ldots,X_n$, converge\nto the same limits in probability.\n\nOf course, the assumption of Lipschitz continuous kernels is rather\nrestrictive. Thus, we extend our theory to a more general class of\nkernel functions. The costs for enlarging the class of feasible kernels\nare additional moment constraints.\n\nBesides (A1) and (A2), we assume\n\\begin{enumerate}[(A4)]\n\\item[(A4)]\n\\begin{enumerate}[(ii)]\n\\item[(i)] The kernel function satisfies\n\\[\n|h(x,y)-h(\\bar x,\\bar y)|\\leq f(x,\\bar x,y,\\bar y)[ \\|x-\\bar x\\|\n_{l_1}+\\|y-\\bar y\\|_{l_1}] \\qquad \\forall x,\\bar x,y,\\bar y\\in\\R^d,\n\\]\nwhere $f\\dvtx\\R^{4d}\\to\\R$ is continuous.\nMoreover,\n\\[\n\\sup_{Y_1,\\ldots,Y_5\\sim P_X} \\E\\Bigl(\\max_{a_1,a_2\\in[-A,A]^d}\n[f(Y_{1},Y_{2}+a_1,Y_{3},Y_{4}+a_2)]^\\eta\\|Y_{5}\\|_{l_1}\\Bigr)<\\infty\n\\]\nfor $\\eta:=1\/(1-\\delta)$ with $\\delta$ satisfying (A2) and some $A>0$.\n\\item[(ii)]\n$\\sum_{r=1}^\\infty r (\\tau_r)^{\\delta^2}<\\infty.$\n\\end{enumerate}\n\\end{enumerate}\nEven though the assumption (A4)(i) has a rather technical structure, it\nis satisfied for example,~by polynomial kernel functions as long as the\nsample variables have sufficiently many finite moments.\nAnalogous to Lemma~\\ref{l.1} and Lemma~\\ref{l.5}, the following\nassertion holds.\n\n\\begin{lem}\\label{l.4}\nSuppose that \\textup{(A1)}, \\textup{(A2)}, and \\textup{(A4)} are fulfilled. Then a family of\nbounded kernels $(h_c)_c$ satisfying \\textup{(A2)} and \\textup{(A4)} uniformly and the\nsequence of indices $(J(K))_{K\\in\\N}$ in (\\ref{eq.hckl}) with\n$J(K)\\longrightarrow_{K\\to\\infty} \\infty$ can be chosen such that\n\\[\n\\lim_{c\\to\\infty} \\limsup_{K\\to\\infty} \\limsup_{L\\to\\infty} \\sup\n_{n\\in\\N} \\E\\bigl(U_{n}-U_{n,c}^{(K,L)}\\bigr)^2=0.\n\\]\n\\end{lem}\n\n\nThis auxiliary result implies the analogue of Theorem~\\ref{t.1} for\nnon-Lipschitz kernels.\n\n\\begin{thmm}\\label{t.2}\nAssume that \\textup{(A1)}, \\textup{(A2)}, and \\textup{(A4)} are satisfied. 
Then, as $n\\to\\infty$,\n\\[\nn U_n\\stackrel{d}{\\longrightarrow}Z,\n\\]\nwhere $Z$ is defined as in Theorem~\\ref{t.1}. If additionally $\\E\n|h(X_1,X_1)|<\\infty$, then\n\\[\nn V_n \\stackrel{d}{\\longrightarrow} Z+\\E h(X_1,X_1).\n\\]\n\\end{thmm}\n\n\\section{Consistency of general bootstrap methods}\\label{S3}\n\nAs we have seen in the previous section, the limit distributions of\ndegenerate $U$- and $V$-statistics have a rather complicated structure.\nTherefore, in the majority of cases it is quite difficult to determine\nquantiles, which are required in order to derive asymptotic critical\nvalues of $U$- and $V$-type test statistics. The bootstrap offers a\nsuitable way of approximating these quantities.\n\nGiven $X_1,\\ldots,X_n$, let $X^*$ and $Y^*$ denote vectors of bootstrap\nrandom variables with values in $\\R^{d_1}$ and $\\R^{d_2}$.\nIn order to describe the dependence structure of the bootstrap sample,\nwe introduce, in analogy to Definition~\\ref{def1},\n\\[\n\\tau^*(Y^*, X^*,x_n):=\\E\\biggl( \\sup_{f\\in\\Lambda_1(\\R^{d_1})}\n\\biggl|\\int_{\\R^{d_1}} f(x) P_{X^*|Y^*}(\\mathrm{d}x)-\\int_{\\R^{d_1}}\nf(x)P_{X^*}(\\mathrm{d}x)\\biggr|\\big| \\X_n=x_n\\biggr)\n\\]\nprovided that $\\E(\\|X^*\\|_{l_1}\\vert\\X_n=x_n)<\\infty$ with $\\X\n_n:=(X_1^\\prime,\\ldots,X_n^\\prime)^\\prime$. We make the following assumptions:\n\n\\begin{enumerate}[$\\mathrm{(A1^*)}$]\n\\item[$\\mathrm{(A1^*)}$]\n\\begin{enumerate}[(ii)]\n\\item[(i)]The sequence of bootstrap variables is stationary with probability\ntending to one. Additionally, $ (X^{*\\prime}_{t_1},X^{*\\prime\n}_{t_2})^\\prime\\stackrel{d}{\\longrightarrow} (X_{t_1}^\\prime\n,X_{t_2}^\\prime)^\\prime, \\forall t_1,t_2 \\in\\N,$ holds true in probability.\n\\item[(ii)]Conditionally on $X_1,\\ldots,X_n$, the random variables $(X_k^*)_{k\\in\\Z\n}$ are $\\tau$-weakly dependent, that is,~there exist a sequence of\ncoefficients $(\\bar\\tau_r)_{r\\in\\N}$ with $\\sum_{r=1}^\\infty r(\\bar\\tau\n_r)^\\delta<\\infty$ for some $\\delta\\in(0,1)$, a constant $C_1<\\infty$,\nand a sequence~of sets $(\\XX_n^{(1)})_{n\\in\\N}$ with $P(\\X_n\\in\\XX\n_n^{(1)})\\longrightarrow_{n\\to\\infty} 1$ and the following property:\nFor any sequence $(x_n)_{n\\in\\N}$ with $x_n \\in\\XX_n^{(1)}, n\\in\\N$,\n\\mbox{$\\sup_{k\\in\\N}\\E(\\|X_k^*\\|_{l_1}\\vert\\X_n=x_n)\\leq C_1$} and\n\\begin{eqnarray*}\n\\tau_r^*(x_n) &:=& \\sup\\{\\tau^*((X^{*\\prime}_{s_1},\\ldots\n,X^{*\\prime}_{s_u})^\\prime,(X_{t_1}^{*\\prime},X_{t_2}^{*\\prime\n},X_{t_3}^{*\\prime})^\\prime, x_n) |\\\\[-2pt]\n&&\\hphantom{\\sup\\{}{} u\\in\\N, s_1\\leq\\cdots\\leq s_u< s_u+r\\leq t_1\\leq t_2\\leq t_3 \\in\\N\n\\}\n\\end{eqnarray*}\ncan be bounded by $\\bar\\tau_r$ for all $r\\in\\N$.\\vspace*{-1pt}\n\\end{enumerate}\n\\end{enumerate}\n\n\\begin{rem}\n\\begin{enumerate}[(ii)]\n\\item[(i)]\nNeumann and Paparoditis \\cite{NeuPa05} proved that in case of stationary Markov chains of\nfinite order, the key for convergence of the finite-dimensional\ndistributions is convergence of the conditional distributions, cf.\ntheir Lemma~4.2. In particular, they showed that $\\operatorname{AR}(p)$~bootstrap and\n$\\operatorname{ARCH}(p$)~bootstrap yield samples that satisfy (A$1^*$)(i).\n\\item[(ii)]\nIn Section~\\ref{SS42}, we present another example that satisfies\n(A$1^*$), namely a residual-based bootstrap procedure for a Lipschitz\ncontracting nonlinear $\\operatorname{AR}(1)$~process, given by\n$X_{t}=g(X_{t-1})+\\varepsilon_t$. 
In particular, note that the\nbootstrap process there cannot be proved to be mixing according to the\ndiscreteness of the bootstrap innovations that are generated via\nEfron's bootstrap from the empirical distribution of the recentered\nresiduals of the original process.\\vspace*{-1pt}\n\\end{enumerate}\n\\end{rem}\n\n\\begin{lem}\\label{l.7}\nSuppose that \\textup{(A1)} and \\textup{(A$1^*$)} hold true. Further let \\mbox{$h\\dvtx \\R^d\\times\\R\n^d\\to\\R$} be a~bounded, symmetric, Lipschitz continuous function such\nthat\n$\\E h(X_1,y)=\\E(h(X_1^*,y)|\\allowbreak X_1,\\ldots,X_n)= 0, \\forall y\\in\\R^d$. Then,\n\\[\n\\frac{1}{n}\\sum_{j=1}^n\\sum_{k\\neq j}h(X_j^*,X_k^*)\\stackrel\n{d}{\\longrightarrow}Z\\quad \\mbox{and}\\quad\n\\frac{1}{n}\\sum_{j,k=1}^n h(X_j^*,X_k^*)\\stackrel{d}{\\longrightarrow\n}Z+\\E h(X_1,X_1)\n\\]\nhold in probability as $n\\to\\infty$. Here, $Z$ is defined as in\nTheorem~\\textup{\\ref{t.1}}.\\vspace*{-1pt}\n\\end{lem}\n\nIn order to deduce bootstrap consistency, additionally, convergence in\na certain metric~$\\rho$ is required, that is,\n\\[\n\\rho\\Biggl(P\\Biggl(\\frac{1}{n}\\sum_{j,k=1}^nh(X_j^*,X_k^*) \\leq x\n|X_1,\\ldots,X_n\\Biggr), P\\Biggl(\\frac{1}{n}\\sum_{j,k=1}^nh(X_j,X_k) \\leq x\n\\Biggr)\\Biggr)\\stackrel{P}{\\longrightarrow}0.\n\\]\n(Here, $\\stackrel{P}{\\longrightarrow}$ denotes convergence in\nprobability.) Convergence in the uniform metric follows from Lemma~\\ref\n{l.7} if the limit distribution has a continuous cumulative\ndistribution function. The next assertion gives a necessary and\nsufficient condition for this.\\vspace*{-1pt}\n\n\\begin{lem}\\label{l.6}\nThe limit variable $Z$, derived in Theorem~\\textup{\\ref{t.1}}\/Theorem~\\textup{\\ref\n{t.2}} under \\textup{(A1)}, \\textup{(A2)}, and \\textup{(A3)}\/\\textup{(A4)}, has a continuous cumulative\ndistribution function if $\\operatorname{var}(Z)>0$.\\vspace*{-1pt}\n\\end{lem}\n\nKernels of statistics emerging from goodness-of-fit tests for composite\nhypotheses often depend on an unknown\\vadjust{\\goodbreak} parameter. We establish bootstrap\nconsistency for this setting, that is,~when parameters have to be\nestimated. 
Moreover, the class of feasible kernels is enlarged.\nFor this purpose, we additionally assume\n\n\\begin{enumerate}[$\\mathrm{(A2^*)}$]\n\\item[$\\mathrm{(A2^*)}$]\n\\begin{enumerate}[(iii)]\n\\item[(i)] $\\widehat\\theta_n\\stackrel{P}{\\longrightarrow}\\theta\n\\in\\Theta\\subseteq\\R^p.$\n\\item[(ii)]\n$\\E(h(X_1^*,y,\\widehat\\theta_n)| \\X_n )=0, \\forall y\\in\\R^d$.\n\\item[(iii)]\nFor some $\\delta$ satisfying (A$1^*$)(ii), $\\nu>(2-\\delta)\/(1-\\delta)$,\nand a constant $C_2<\\infty$, there exists a sequence of sets $(\\XX\n_n^{(2)})_{n\\in\\N}$ such that $P(\\X_n\\in\\XX_n^{(2)})\\longrightarrow\n_{n\\to\\infty} 1$ and $\\forall (x_n)_{n\\in\\N}$ with $x_n\\in\\XX\n_n^{(2)}$ the following moment constraint holds true:\n\\[\n\\sup_{1 \\leq k< n}\\E\\bigl(|h(X_1^*,X_{1+k}^*,\\widehat\\theta_n)|^{\\nu\n}+|h(X_1^*,\\widetilde X_1^{*},\\widehat\\theta_n)|^{\\nu}|\\X_n=x_n\n\\bigr)\\leq C_2,\n\\]\nwhere (conditionally on $\\X_n$) $\\widetilde X_1^{*}$ denotes an\nindependent copy of $X_1^*$.\n\\end{enumerate}\n\\item[$\\mathrm{(A3^*)}$]\n\\begin{enumerate}[(iii)]\n\\item[(i)] The kernel is continuous in its third argument in\nsome neighbourhood $U(\\theta)\\subseteq\\Theta$ of $\\theta$ and satisfies\n\\[\n|h(x,y,\\widehat\\theta_n)-h(\\bar x,\\bar y,\\widehat\\theta_n)|\\leq\nf(x,\\bar x,y,\\bar y,\\widehat\\theta_n)[ \\|x-\\bar x\\|_{l_1}+\\|y-\\bar\ny\\|_{l_1}]\n\\]\nfor all $x,\\bar x,y,\\bar y\\in\\R^d$, where $f\\dvtx\\R^{4d}\\times\\R^p\\to\\R$\nis continuous on $\\R^{4d}\\times U(\\theta)$.\nMoreover, for $\\eta:=1\/(1-\\delta)$ and some constants $A>0, C_3<\\infty$\nthere exists a sequence of sets $(\\XX_n^{(3)})_{n\\in\\N}$ such that\n$P(\\X_n\\in\\XX_n^{(3)})\\longrightarrow_{n\\to\\infty} 1$ and $\\forall\n(x_n)_{n\\in\\N}$ with $x_n\\in\\XX_n^{(3)}$ the following moment\nconstraint holds true:\\looseness=1\n\\[\n\\E\\Bigl(\\max_{a_1,a_2\\in[-A,A]^d}\n[f(Y_{1}^*,Y_{2}^*+a_1,Y_{3}^*,Y_{4}^*+a_2,\\widehat\\theta_n)]^\\eta\\|\nY_{5}^*\\|_{l_1}\\big|\\X_n=x_n\\Bigr)\\leq C_3\n\\]\\looseness=0\nfor all $Y_1^*,\\ldots, Y_5^*$ with $Y_k^*\\stackrel{d}{=}X_1^*, k\\!\\in\\!\\{\n1,\\ldots,5\\}$ (conditionally on $X_1,\\ldots,X_n$).\n\\item[(ii)]\n$\\sum_{r=1}^\\infty r(\\bar\\tau_r)^{\\delta^2} <\\infty$.\n\\end{enumerate}\n\\end{enumerate}\nUnder these assumptions a result concerning the asymptotic\ndistributions of $n U_n^*=n^{-1}\\times \\sum_{j=1}^n\\sum_{k\\neq j}\nh(X_j^*,X_k^*, \\widehat\\theta_n)$ and $n V_n^*=n^{-1}\\sum_{j,k=1}^n\nh(X_j^*,X_k^*, \\widehat\\theta_n)$ can be derived. To this end, we\ndenote the $U$- and $V$-statistics with kernel $h(\\cdot,\\cdot,\\theta)$\nand arguments $X_1,\\ldots, X_n$ by $U_n$ and $V_n$, respectively.\n\n\\begin{thmm}\\label{t.3}\nSuppose that the conditions \\textup{(A$1$)}, \\textup{(A$2$)}, and \\textup{(A$4$)} as well as\n\\textup{(A$1^*$)}, \\textup{(A$2^*$)}, and \\textup{(A$3^*$)} are fulfilled.\n\\begin{enumerate}[(ii)]\n\\item[(i)] As $n\\to\\infty$,\n\\[\nn U_n^* \\stackrel{d}{\\longrightarrow} Z,\\qquad \\mbox{in probability,}\n\\]\nwhere $Z$ is defined as in Theorem~\\ref{t.1}. 
If furthermore\n$\\operatorname{var}(Z)>0$, then\n\\[\n\\sup_{-\\infty< x < \\infty}|P(n U_n^* \\leq x |X_1,\\ldots,X_n)- P(n\nU_n \\leq x)| \\stackrel{P}{\\longrightarrow}0.\n\\]\n\\item[(ii)]\nIf additionally $\\E|h(X_1,X_1,\\theta)|<\\infty$ and\n$\n\\E(|h(X_1^*,X_1^*,\\widehat\\theta_n)| \\vert\\X_n )\\stackrel\n{P}{\\longrightarrow}\\E|h(X_1,X_1,\\theta)|,\n$\nthen as $n\\to\\infty$,\n\\[\nn V_n^* \\stackrel{d}{\\longrightarrow} Z+ \\E h(X_1,X_1,\\theta),\\qquad\n\\mbox{in probability}.\n\\]\nMoreover, in case of $\\operatorname{var}(Z)>0$,\n\\[\n\\sup_{-\\infty< x < \\infty}|P(n V_n^* \\leq x |X_1,\\ldots,X_n)- P(n\nV_n \\leq x)| \\stackrel{P}{\\longrightarrow}0.\n\\]\n\\end{enumerate}\n\\end{thmm}\n\n\\begin{rem}\nTheorem~\\ref{t.3} implies that bootstrap-based tests of $U$- or\n$V$-type have asymptotically a prescribed size $\\alpha$, that is, $P(n\nU_n>t_{u,\\alpha}^*)\\longrightarrow_{n\\to\\infty} \\alpha$ and $P(n\nV_n>t_{v,\\alpha}^*)\\longrightarrow_{n\\to\\infty} \\alpha$, where\n$t^*_{u,\\alpha}$ and $t^*_{v,\\alpha}$ denote the $(1-\\alpha)$-quantiles\nof $n U_n^*$ and $n V_n^*$, respectively, given $X_1,\\ldots,X_n$.\n\\end{rem}\n\n\\section{$L_2$-tests for weakly dependent observations}\\label{S4}\n\nThis section is dedicated to two applications in the field of\nhypothesis testing. For sake of simplicity, we restrict ourselves to\nreal-valued random variables and consider simple null hypotheses only.\nThe test for symmetry as well as the model-specification test can be\nextended to problems with composite hypotheses, cf. Leucht \\cite{Leu10a,Leu10b}.\n\n\\subsection{A test for symmetry}\\label{SS41}\n\nAnswering the question whether a distribution is symmetric or not is\ninteresting for several reasons. Often robust estimators of and\nrobust tests for location parameters assume the observations to arise\nfrom a symmetric\ndistribution, see, for example, Staudte and Sheather \\cite{StSh90}. Consequently, it is\nimportant to check\nthis assumption before applying those methods. Moreover, symmetry\nplays a central role in analyzing and modeling real-life phenomena. For\ninstance, it is often presumed that an observed process can be\ndescribed by an $\\operatorname{AR}(p)$~process with Gaussian innovations which in turn\nimplies a~Gaussian marginal distribution. Rejecting the hypothesis of\nsymmetry contradicts this type of marginal distribution. Furthermore,\nthis result of the test excludes any kind of symmetric innovations in\nthat context.\n\nSuppose that we observe $X_1,\\ldots, X_n$ from a sequence of real-valued\nrandom variables with common distribution~$P_X$ and satisfying~(A1).\nFor some $\\mu\\in\\R$, we are given the problem\n\\[\n{\\mathcal H}_0\\dvt P_{X-\\mu}=P_{\\mu-X}\n\\quad\\mbox{vs.}\\quad {\\mathcal H}_1\\dvt P_{X-\\mu}\\neq P_{\\mu-X}.\n\\]\nSimilar to Feuerverger and Mureika \\cite{FM77}, who studied the problem for i.i.d.~random\nvariables, we propose the following test statistic:\n\\[\nS_n=n \\int_\\R\\bigl[\\Im\\bigl(c_n(t)\\mathrm{e}^{-\\mathrm{i}\\mu t}\\bigr)\\bigr]^2 w(t) \\,\\mathrm{d}t\n=\\frac{1}{n}\\sum_{j,k=1}^n \\int_\\R\\sin\\bigl(t(X_j-\\mu)\\bigr) \\sin\\bigl(t(X_k-\\mu)\\bigr)\nw(t)\\, \\mathrm{d}t\n\\]\nwhich makes use of the fact that symmetry of a distribution is\nequivalent to a vanishing imaginary part of the associated\ncharacteristic function. 
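For instance, for the purely illustrative weight choice $w(t)=\\mathrm{e}^{-t^2}$ (any weight\nwith the properties required below may be used instead), the integral over $t$ can be\nevaluated in closed form,\n\\[\n\\int_\\R\\sin\\bigl(t(X_j-\\mu)\\bigr)\\sin\\bigl(t(X_k-\\mu)\\bigr)\\mathrm{e}^{-t^2}\\,\\mathrm{d}t\n=\\frac{\\sqrt{\\uppi}}{2}\\bigl[\\mathrm{e}^{-(X_j-X_k)^2\/4}-\\mathrm{e}^{-(X_j+X_k-2\\mu)^2\/4}\\bigr],\n\\]\nso that $S_n$ can then be computed exactly, without numerical integration. 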
Here, $\\Im(z)$ denotes the imaginary part of\n$z\\in\\C$, $c_n$ denotes the empirical characteristic function and $w$\nis some positive measurable weight function with $\\int_\\R(1+|t|) w(t)\n\\,\\mathrm{d}t <\\infty.$ Obviously, $S_n$ is a $V$-type statistic whose kernel\nsatisfies (A2) and~(A3). Thus, its limit distribution can be determined\nby Theorem~\\ref{t.1}. Assuming that the observations come from a\nstationary $\\operatorname{AR}(p)$ or $\\operatorname{ARCH}(p)$~process, the validity of (A$1^*$) is\nassured when the $\\operatorname{AR}(p)$ or $\\operatorname{ARCH}(p)$~bootstrap methods given by Neumann and Paparoditis \\cite\n{NeuPa05} are used in order to generate the bootstrap counterpart of\nthe sample. Hence, in these cases the prerequisites of Lemma~\\ref{l.7}\nare satisfied excluding degeneracy. Inspired by Dehling and Mikosch~\\cite{DM94}, who\ndiscussed this problem for Efron's Bootstrap in the i.i.d.~case, we\npropose a bootstrap statistic with the kernel\n\\[\nh_n^*(x,y)=h(x,y)-\\int_\\R h(x,y)P_n^*(\\mathrm{d}x)-\\int_\\R h(x,y)P_n^*(\\mathrm{d}y)\n+\\int_{\\R^2} h(x,y)P_n^*(\\mathrm{d}x)P_n^*(\\mathrm{d}y).\n\\]\nHere, $h$ denotes the kernel function of $S_n$ and $P_n^*$ the\ndistribution of $X_1^*$ conditionally on $X_1,\\ldots,X_n$.\nSimilar to the proof of Theorem~\\ref{t.3}, the desired convergence\nproperty of~$S_n^*$ can be verified.\n\n\\subsection{A model-specification test}\\label{SS42}\n\nLet $X_0,\\ldots,X_n$ be observations resulting from a stationary\nreal-valued nonlinear autoregressive process with centered\ni.i.d.~innovations~$(\\varepsilon_k)_{k\\in\\Z}, $ that\nis,~$X_k=g(X_{k-1})+\\varepsilon_k.$ Suppose that $\\E|\\varepsilon\n_0|^{4+\\delta}\\,{<}\\,\\infty$ for some $\\delta\\,{>}\\,0$ and that\n$g\\,{\\in}\\, G\\,{:=}\\,\\{f\\dvtx \\R\\,{\\to}\\,\\R | f \\mbox{ Lipschitz continuous}$\n$\\mbox{with } \\operatorname{Lip}(f)<1\\}$. Thus, the process $(X_k)_{k\\in\\Z}$ is $\\tau\n$-dependent with exponential rate, see Dedecker and Prieur \\cite{DP05}, Example~4.2.\nWe will present a test for the problem\n\\[\n{\\mathcal H}_0\\dvt P\\bigl(\\E(X_1|X_{0})=g_0(X_{0})\\bigr)=1\n\\quad\\mbox{vs.}\\quad {\\mathcal H}_1\\dvt P\\bigl(\\E\n(X_1|X_{0})=g_0(X_{0})\\bigr)<1\n\\]\nwith $g_0\\in G$.\nFor sake of simplicity, we stick to these small classes of functions\n$G$ and of processes $(X_k)_{k\\in\\Z}.$ An extension to a more\ncomprehensive variety of model-specification tests is investigated in a\nforthcoming paper, cf.~Leucht \\cite{Leu10b}.\n\nSimilar to Fan and Li \\cite{FL99}, we propose the following test statistic:\n\\begin{eqnarray*}\nT_n&=&\\frac{1}{n \\sqrt h}\\sum_{j=1}^n\\sum_{k\\neq\nj}\\bigl(X_j-g_0(X_{j-1})\\bigr)\\bigl(X_k-g_0(X_{k-1})\\bigr)K\\biggl(\\frac\n{X_{j-1}-X_{k-1}}{h}\\biggr)\\\\\n&=:& \\frac{1}{n}\\sum_{j=1}^n\\sum_{k\\neq j} H(Z_j,Z_k),\n\\end{eqnarray*}\nthat is,~a kernel estimator (multiplied with $n\\sqrt h$) of $\\E\n([X_1-g(X_0)]\\E(X_1-g(X_0)|\\break X_0)p(X_0))$ that is equal to zero under\n${\\mathcal H_0}$. Here, $Z_k:=(X_k,X_{k-1})^\\prime, k\\in\\Z,$ and $p$\ndenotes the density of the distribution of $X_0$.\\vadjust{\\goodbreak}\nFan and Li \\cite{FL99}, who considered $\\beta$-mixing processes, used a similar\ntest statistic with a vanishing bandwidth. In contrast, we consider the\ncase of a fixed bandwidth. These tests are more powerful against Pitman\nalternatives $g_{1,n}(x)=g_0(x)+n^{-\\beta}w(x)+\\mathrm{o}(n^{-\\beta}), \\beta\n>0, w\\in G$. 
For a detailed discussion of this topic, see Fan and Li \\cite{FL00}.\n\nObviously, $T_n$ is degenerate under ${\\mathcal H_0}$. If we assume $K$\nto be a bounded, even, and Lip\\-schitz continuous function, then there\nexists a function $f\\dvtx\\R^8\\to\\R$ with $ |H(z_1,z_2)-H(\\bar z_1,\\bar\nz_2)|\\leq f(z_1,\\bar z_1,z_2,\\bar z_2)(\\|z_1-\\bar z_1\\|_{l_1}+\\|\nz_2-\\bar z_2\\|_{l_1})$ and such that (A4) is valid. Moreover, under\nthese conditions $H$ satisfies (A2). Hence, the assertion of\nTheorem~\\ref{t.2} holds true.\nIn order to determine critical values of the test, we propose the\nbootstrap procedure given by Franke and Wendel~\\cite{FW92} (without estimating the\nregression function). The bootstrap innovations $(\\varepsilon_t^*)_{t}$\nare drawn with replacement from the set $\\{\\tilde\\varepsilon\n_t=\\varepsilon_t-n^{-1}\\sum_{k=1}^n\\varepsilon_k\\}_{t=1}^n$, where\n$\\varepsilon_t=X_t-g_0(X_{t-1}), t=1,\\ldots,n$. After choosing a\nstarting value $X_0^*$ independently of $(\\varepsilon_t^*)_{t\\geq1}$,\nthe bootstrap sample $X_t^*=g(X_{t-1}^*)+\\varepsilon_t^*$ as well as\nthe bootstrap counterpart $T_n^*= n^{-1}\\sum_{j=1}^n\\sum_{k\\neq j}\nH(Z_j^*,Z_k^*)$ of the test statistic with\n$Z_k^*=(X_k^*,X_{k-1}^*)^\\prime, k=1,\\ldots,n,$ can be computed. In\ncontrast to the previous subsection, the proposed bootstrap method\nleads to a degenerate kernel function.\nObviously, the bootstrap sample is $\\tau$-dependent in the sense of\n(A$1^*$) and satisfies \\mbox{$\\E(|X_k^*| | Z_1,\\ldots,Z_n)c_h(\\widehat\\theta_n)$}\n\\]\nwith $c_h(\\widehat\\theta_n):=\\max_{x,y\\in[-c,c]^d} |h(x,y,\\widehat\\theta\n_n)|\\leq\\max_{x,y\\in[-c,c]^d,\\|\\bar\\theta\\|_{l_1}\\leq\\delta_1}\n|h(x,y,\\bar\\theta)|<\\infty$. The associated $U$-statistics are denoted\nby $U^*_{n,c}$. Now, imitating the proof of Lemma~\\ref{l.1} results in\n\\[\n\\mathop{\\lim\\sup}_{n\\to\\infty} n^2 \\E[(U_n^*-U_{n,c}^*)^2|\\X_n=x_n] \\mathop\n{\\longrightarrow} _{c\\to\\infty}0.\n\\]\nWithin the calculations, the relation $\\limsup_{n\\to\\infty}P\n(X_1^*\\notin (-c,c)^d\\vert\\X_n=x_n)\\leq P(X_1\\notin\n(-c,c)^d)\\longrightarrow_{c\\to\\infty}0$ has to be invoked which\nfollows from Portmanteau's theorem in conjunction with~(\\ref{eqn.2}).\nNext, we approximate the bounded kernel by the degenerate version of\n\\[\n\\widetilde h^{*(K,L)}_c\n:= \\sum_{ k_1,k_2 \\in\\{-L,\\ldots,L\\}^d } \\widehat\\alpha\n^{(c)}_{k_1,k_2}\\Phi_{0,k_1}\\Phi_{0,k_2}+ \\sum_{j=0}^{J(K)-1}\\sum_{\nk_1,k_2 \\in\\{-L,\\ldots,L\\}^d}\\sum_{e\\in\\bar E} \\widehat\\beta\n_{j;k_1,k_2}^{(c,e)}\\Psi_{j;k_1,k_2}^{(e)},\n\\]\nwhere $\\widehat\\alpha_{k_1,k_2}^{(c)}=\\iint_{\\R^d\\times\\R\n^d}h^{*}_c(x,y, \\widehat\\theta_n) \\Phi_{0,k_1}(x) \\Phi_{0,k_2}(y)\\, \\mathrm{d}x\\, \\mathrm{d}y$\nand $\\widehat\\beta^{(c,e)}_{j;k_1,k_2}=\\iint_{\\R^d\\times\\R\n^d}h^{*}_c(x,y,\\allowbreak\\widehat\\theta_n) \\Psi_{j;k_1,k_2}^{(e)}(x,y)\\, \\mathrm{d}x\\, \\mathrm{d}y$.\nDenoting the associated $U$-statistic by $\\widehat U_{n,c}^{*(K,L)}$\nleads to\n\\[\n\\lim_{K\\to\\infty}\\limsup_{L\\to\\infty}\\limsup_{n\\to\\infty}\nn^2 \\E\\bigl[\\bigl( U_{n,c}^*-\\widehat U_{n,c}^{*(K,L)}\\bigr)^2|\\X_n=x_n\\bigr]=0\n\\]\nwhich can be proved by following the lines of the proof of Lemma~\\ref{l.4}.\nHere,~$J(K)$ is chosen as follows: We first select some $b=b(K)<\\infty\n$ such that $P(X_1\\notin(-b,b)^d)\\leq1\/K$. 
Afterwards, we choose\n$J(K)$ such that $\\max_{x,y\\in[-b,b]^d}|h_c(x,y,\\theta)-\\widetilde\nh^{(K)}_c(x,y,\\theta)|\\leq{1\/K}$ and $S_\\phi\/2^{J(K)}0$ the inequalities $|\\frac{1}{n}\n\\operatorname{var}(Q_1^*+\\cdots+ Q_n^* |\\X_n=x_n)-\\sigma^2|<\\varepsilon, \\forall\nn\\geq n_0(\\varepsilon),$ hold true with $\\sigma^2$ as in the proof of\nTheorem~\\ref{t.1}, the abbreviations\n$\\operatorname{var}^*(\\cdot)= \\operatorname{var}(\\cdot|\\X_n=x_n)$ and $\\operatorname{cov}^*(\\cdot)= \\operatorname{cov}(\\cdot|\\X\n_n=x_n)$ are used. Hence,\n\\begin{eqnarray*}\n&& \\biggl|\\frac{1}{n}\\operatorname{var}{}^*[Q_1^*+\\cdots+Q_n^* ] - \\sigma^2\\biggr| \\\\\n&&\\quad \\leq2\\sum_{r=2}^\\infty\\min\\biggl\\{\\frac{r-1}{n},1\\biggr\\}\n|\\operatorname{cov}{}^*(Q_1^*,Q_r^*)| +\\Biggl|\\operatorname{var}{}^*(Q_1^*)+2\\sum_{r=2}^\\infty\n\\operatorname{cov}{}^*(Q_1^*,Q_r^*) - \\sigma^2\\Biggr|\\\\\n&&\\quad \\leq2\\sum_{r=2}^\\infty\\min\\biggl\\{\\frac{r-1}{n},1\\biggr\\}\n|\\operatorname{cov}{}^*(Q_1^*,Q_r^*)|\n+2\\Biggl|\\sum_{r=2}^{R-1}[\\operatorname{cov}{}^*(Q_1^*,Q_r^*)-\\operatorname{cov}(Q_1,Q_r)]\\Biggr|\\\\\n&&\\qquad{} +|\\operatorname{var}{}^*(Q_1^*)-\\operatorname{var}(Q_1)|+2\\biggl|\\sum_{r\\geq R}\n\\operatorname{cov}{}^*(Q_1^*,Q_r^*)\\biggr|\n+2\\biggl|\\sum_{r\\geq R} \\operatorname{cov}(Q_1,Q_r)\\biggr|.\n\\end{eqnarray*}\nBy (A1) and (A$1^*$), $R$ can be chosen such that $|\\sum_{r\\geq R}\n\\operatorname{cov}(Q_1,Q_r)| +|\\sum_{r\\geq R}\\operatorname{cov}^*(Q_1^*, Q_r^*)|$\n$\\leq\\varepsilon\/4$. Moreover, (A$1^*$) implies that the first summand\ncan be bounded from above by~$\\varepsilon\/4$ as well if $n\\geq\nn_0(\\varepsilon)$ for some $n_0(\\varepsilon)\\in\\N$. According to the\nconvergence of the two-dimensional distributions and the uniform\nboundedness of $(Q_k^*)_{k\\in\\Z}$, it is possible to pick\n$n_0(\\varepsilon)$ such that additionally the two remaining summands\nare bounded by $\\varepsilon\/8$. For the\\vadjust{\\goodbreak} validity of the CLT of Neumann and Paparoditis \\cite\n{NeuPa05} in probability, it remains to verify\ntheir inequality~(6.4). By Lipschitz continuity of $Q_{t_1}^*Q_{t_2}^*$\nthis holds with $\\bar\\theta_r=\\operatorname{Lip}(Q_{t_1}^*Q_{t_2}^*)\\bar\\tau_r\\leq\nC\\bar\\tau_r$.\nThe application of the continuous mapping theorem results in\n$n U_{n,c}^{*(K,L)} \\stackrel{d}{\\longrightarrow} Z_c^{(K,L)}$, in\nprobability. Invoking the same arguments as in the proof of Theorem~\\ref\n{t.1}, this implies $n U_{n}^*\\stackrel{d}{\\longrightarrow} Z$, in probability.\n\nIn order to obtain the analogous result of convergence for $n V_n^*$,\nwe define \\mbox{$\\widetilde\\XX_n^\\theta\\,{\\subseteq}\\,\\XX_n^\\theta, n\\,{\\in}\\,\\N,$}\nsuch that\n$|\\E(|h(X_1^*,X_1^*,\\widehat\\theta_n)|\\vert\\X_n=x_n)-\\E|h(X_1,X_1,\\theta\n)||\\leq\\eta_n, \\forall x_n\\in\\widetilde\\XX_n^\\theta$. 
Here, the null\nsequence $(\\eta_n)_{n\\in\\N}$ is chosen in such a way that $P(\\X_n\\in\n\\widetilde\\XX_n^\\theta)\\longrightarrow_{n\\to\\infty} 1.$\nNow, additionally to our previous considerations,\n\\[\nP\\Biggl(\\Biggl|\\frac{1}{n}\\sum_{i=1}^n h(X_i^*,X_i^*,\\widehat\\theta\n_n)-\\E h(X_1,X_1,\\theta)\\Biggr| > \\varepsilon\\Big|\\X_n=x_n\\Biggr)\n\\mathop{\\longrightarrow} _{n\\to\\infty} 0\n\\]\nhas to be proved for arbitrary $\\varepsilon>0$ and any sequence\n$(x_n)_{n\\in\\N}$ with $x_n\\in\\widetilde\\XX_n^\\theta, n\\in\\N$.\nAccording to the definition of the sets $(\\widetilde\\XX_n^\\theta)_n$,\nwe get $\\E(h(X_1^*,X_1^*,\\widehat\\theta_n)\\vert\\X_n=x_n\n)\\longrightarrow_{n\\to\\infty} \\E h(X_1,X_1,\\theta)$. Therefore, it\nsuffices to prove\n\\[\nP\\Biggl(\\Biggl|\\frac{1}{n}\\sum_{k=1}^n \\bigl[h(X_k^*,X_k^*,\\widehat\n\\theta_n)-\\E\\bigl(h(X_1^*,X_1^*,\\widehat\\theta_n)|\\X_n=x_n\\bigr)\\bigr]\\Biggr| >\n\\frac{\\varepsilon}{2} \\Big|\\X_n=x_n\\Biggr)\\ninfty0.\n\\]\nThis in turn is a consequence of Lemma~\\ref{l.lln}\nsince under the assumptions of the theorem the sequence of functions\n$(g_n)_{n\\in\\N}$ with $g^{(n)}(\\cdot)=h(\\cdot,\\cdot,\\widehat\\theta_n)-\\E\n( h(X_1^*,X_1^*,\\widehat\\theta_n)\\vert\\X_n=x_n)$ is uniformly integrable\nand satisfies the smoothness property presumed in Lemma~\\ref{l.lln}.\nFinally, bootstrap consistency follows from Lemma~\\ref{l.6}.\n\\end{pf*}\n\n\\subsection{Proofs of auxiliary results}\\label{SS52}\n\nFirst, we derive a weak law of large numbers for smooth functions of\ntriangular arrays of $\\tau$-dependent random variables.\n\n\\begin{lem}[(Weak law of large numbers)]\\label{l.lln}\nLet $(X_{n,k})_{k=1}^n, n\\in \\N,$ be a triangular scheme of\n(row-wise) stationary, $\\R^d$-valued, integrable random variables such\nthat $\\lim_{K\\to\\infty} \\sup_{n\\in\\N}P(\\|X_{n,1}\\|_{l_1}>K)=0.$\nSuppose that the coefficients $\\bar\\tau_r:=\\sup_{n>r}\\tau_{r,n}$\nsatisfy $\\bar\\tau_r\\longrightarrow_{r\\to\\infty}0$, where\n\\begin{eqnarray*}\n\\tau_{r,n}&:=&\n\\sup\\{\\tau(\\sigma(X_{n,s_1},\\ldots,X_{n,s_u}),(X_{n,t_1}^\\prime\n,X_{n,t_2}^\\prime,X_{n,t_3}^\\prime)^\\prime)\\vert u\\in\\N,\\\\\n&&\\phantom{\\sup\\{} 1\\leq s_1\\leq\\cdots\\leq s_u < s_u+r\\leq t_1\\leq t_2\\leq t_3\\leq n\\}.\n\\end{eqnarray*}\nMoreover, suppose that the functions $g^{(n)}\\dvtx\\R^d\\to\\R^p$ with $\\E\ng^{(n)}(X_{n,1})=0_p$ are uniformly Lipschitz continuous on any bounded\ninterval.\nIf additionally the sequence $(g^{(n)}(X_{n,1}))_{n\\in\\N}$ is uniformly\nintegrable, then\n\\[\n\\frac{1}{n}\\sum_{k=1}^n g^{(n)}(X_{n,k})\\stackrel{P}{\\longrightarrow}\n0_p.\\vadjust{\\goodbreak}\n\\]\n\\end{lem}\n\n\\begin{pf}\nW.l.o.g.~let $p=1$. We prove that for arbitrary $\\varepsilon, \\eta>0$\nthere exists an $n_0$ such that for all $n>n_0$ the inequality\n$P(|n^{-1}\\sum_{k=1}^n g^{(n)}(X_{n,k})|>\\varepsilon)\\leq\\eta$ holds.\nTo this end, a truncation argument is invoked. Let $w_K$ denote a\nLipschitz continuous, nonnegative function that is bounded from above\nby one such that $w_K(x)=1$ for $x\\in[-K,K]^d$ and $w_K(x)=0$ for\n$x\\notin[-K-1,K+1]^d$ with $K\\in\\R_+$. 
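(One admissible choice is, for instance, $w_K(x)=\\min\\{1,\\max\\{0,K+1-\\max_{1\\leq i\\leq d}|x_i|\\}\\}$,\nwhich takes values in $[0,1]$, equals one on $[-K,K]^d$, vanishes outside $[-K-1,K+1]^d$ and is\nLipschitz continuous with constant one; the particular form of $w_K$ is irrelevant in the sequel.) 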
For a finite constant $M$, that\nis specified later, define functions $ g_{M,K}^{(n)}\\dvtx \\R^d\\to\\R$ by\n\\[\ng_{M,K}^{(n)}(x):=\n\\cases{\ng^{(n)}(x) w_K(x)&\\quad$\\mbox{for }\\bigl|g^{(n)}(x) w_K(x)\\bigr|\\leq\nM,$\\vspace*{1pt}\\cr\n-M&\\quad$\\mbox{for }g^{(n)}(x) w_K(x)< -M,$\\vspace*{1pt}\\cr\nM&\\quad$\\mbox{for }g^{(n)}(x) w_K(x)> M$}\\vspace*{-2pt}\n\\]\nand $g_{M,K}^{(n,c)}$ by $g_{M,K}^{(n,c)}(x)= g_{M,K}^{(n)}(x)-\\E\ng^{(n)}_{M,K}(X_{n,1})$.\nThis allows for the estimation\n\\begin{eqnarray*}\nP\\Biggl(\\Biggl|\\frac{1}{n}\\sum_{k=1}^n g^{(n)}(X_{n,k})\\Biggr|>\\varepsilon\\Biggr)\n&\\leq& P\\Biggl(\\Biggl|\\frac{1}{n}\\sum_{k=1}^n g^{(n)}(X_{n,k})-\ng_{M,K}^{(n)}(X_{n,k})\\Biggr|>\\frac{\\varepsilon}{3}\\Biggr)\n\\\\[-3pt]\n&&{} +P\\biggl(\\bigl|\\E g_{M,K}^{(n)}(X_{n,1})\\bigr|>\\frac{\\varepsilon}{3}\n\\biggr)+P\\Biggl(\\Biggl|\\frac{1}{n}\\sum_{k=1}^n g_{M,K}^{(n,c)}(X_{n,k})\\Biggr|>\\frac\n{\\varepsilon}{3}\\Biggr).\\vspace*{-2pt}\n\\end{eqnarray*}\nAccording to Markov's inequality, the first summand on the r.h.s.~can\nbe bounded by\n\\[\n\\frac{3}{\\varepsilon}\\Bigl[\\sup_{n\\in\\N}\\E\\bigl|g^{(n)}(X_{n,1})\\bigr|\\I\n_{|g^{(n)}(X_{n,1})|>M}+M\\sup_{n\\in\\N}P(\\|X_{n,1}\\|_{l_1}>K)\\Bigr].\\vspace*{-2pt}\n\\]\nSince the functions $g^{(n)}, n\\in\\N,$ are centered, we additionally obtain\n\\begin{eqnarray*}\n&&P\\biggl(\\bigl|\\E g_{M,K}^{(n)}(X_{n,1})\\bigr|>\\frac{\\varepsilon}{3}\\biggr)\\\\[-3pt]\n&&\\quad\\leq P\\biggl(\\sup_{n\\in\\N}\\E\\bigl|\ng_{M,K}^{(n)}(X_{n,1})-g^{(n)}(X_{n,1})\\bigr|>\\frac{\\varepsilon}{3}\\biggr)\\\\[-3pt]\n&&\\quad \\leq P\\biggl(\\sup_{n\\in\\N}\\E\\bigl|g^{(n)}(X_{n,1})\\bigr|\\I\n_{|g^{(n)}(X_{n,1})|>M}+M\\sup_{n\\in\\N}P(\\|X_{n,1}\\|_{l_1}>K)>\\frac\n{\\varepsilon}{3}\\biggr).\\vspace*{-2pt}\n\\end{eqnarray*}\nTherefore, by choosing $M$ and $K=K(M)$ sufficiently large,\nwe get\n\\[\nP\\Biggl(\\Biggl|\\frac{1}{n}\\sum_{k=1}^n g^{(n)}(X_{n,k})-\ng_{M,K}^{(n)}(X_{n,k})\\Biggr|>\\frac{\\varepsilon}{3}\\Biggr)\n+P\\biggl(\\bigl|\\E g_{M,K}^{(n)}(X_{n,1})\\bigr|>\\frac{\\varepsilon}{3}\n\\biggr)\\leq\\frac{\\eta}{2}.\\vspace*{-2pt}\n\\]\nConcerning the remaining term, Chebyshev's inequality leads to\n\\[\nP\\Biggl(\\Biggl|\\frac{1}{n}\\sum_{k=1}^n g_{M,K}^{(n,c)}(X_{n,k})\\Biggr|>\\frac\n{\\varepsilon}{3}\\Biggr)\n\\leq\\frac{9M^2}{\\varepsilon^2 n}+\\frac{18}{\\varepsilon^2 n^2}\\sum\n_{j\\min\\{j,k\\}-i}\\bigl|\\E H(X_i,X_j)H(X_k,X_l)-\\E\nH(X_i,X_j)H\\bigl(X_k,\\widetilde X_l^{(r)}\\bigr)\\bigr|,\\\\\nZ_{n,r}^{(3)}&:=&\\mathop{\\sum_{1\\leq i\\leq k< l< j\\leq n}}_{r:=k-i\\geq\nj-l}\\bigl|\\E H(X_i,X_j)H(X_k,X_l)-\\E H\\bigl(X_i,\\widetilde\nX_j^{(r)}\\bigr)H\\bigl(\\widetilde X_k^{(r)},\\widetilde X_l^{(r)}\\bigr)\\bigr|,\\\\\nZ_{n,r}^{(4)}&:=&\\mathop{\\sum_{1\\leq i\\leq k< l< j\\leq n}}_{\nr:=j-l>k-i}\\bigl|\\E H(X_i,X_j)H(X_k,X_l)-\\E H\\bigl(X_i,\\widetilde\nX_j^{(r)}\\bigr)H(X_k, X_l)\\bigr|.\n\\end{eqnarray*}\nHere, in every summand of $Z_{n,r}^{(1)}$ and $Z_{n,r}^{(3)}$ the\nvector $(\\widetilde X_j^{(r)\\prime},\\widetilde X_k^{(r)\\prime\n},\\widetilde X_l^{(r)\\prime})^\\prime$ is chosen such that it is\nindependent of the random variable $X_i$, $(\\widetilde X_j^{(r)\\prime\n},\\widetilde X_k^{(r)\\prime},\\widetilde X_l^{(r)\\prime})^\\prime\\stackrel\n{d}{=}( X_j^\\prime, X_k^\\prime, X_l^\\prime)^\\prime$, and (\\ref{eq.a1})\nholds. 
Within $Z_{n,r}^{(2)}$ (resp., $Z_{n,r}^{(4)})$, the\nrandom variable $\\widetilde X_l^{(r)}$ (resp., $\\widetilde\nX_j^{(r)}$) is chosen to be independent of the vector $( X_i^\\prime,\nX_j^\\prime, X_k^\\prime)^\\prime$ (resp., $( X_i^\\prime, X_k^\\prime\n, X_l^\\prime)^\\prime$) such that $ \\widetilde X_l^{(r)}\\stackrel{d}{=}\nX_l$ (resp., $ \\widetilde X_j^{(r)}\\stackrel{d}{=} X_j$) and\n(\\ref{eq.a1}) holds. This may possibly require an enlargement of the\nunderlying probability space.\nMoreover, note that the subtrahends of these expressions vanish due to\nthe degeneracy of $H$ and that the number of summands of\n$Z_{n,r}^{(t)}, t=1,\\ldots,4,$ is bounded by $(r+1)n^2$.\nFor sake of notational simplicity, the upper index $r$ is omitted in\nthe sequel.\n\n\\begin{pf*}{Proof of Lemma~\\textup{\\protect\\ref{l.1}}}\nFor $c>0$, we define $c_h:=\\max_{x,y\\in[-c,c]^d} |h(x,y)|$,\n\\[\n\\widetilde h^{(c)}(x,y) :=\n\\cases{\nh(x,y) &\\quad$\\mbox{for } |h(x,y)|\\leq c_h$,\\vspace*{2pt}\\cr\n-c_h &\\quad$\\mbox{for } h(x,y) <-c_h,$\\vspace*{2pt}\\cr\nc_h &\\quad$\\mbox{for } h(x,y) >c_h$}\n\\]\nand its degenerate version\n\\begin{eqnarray*}\nh_c(x,y) &:= &\\widetilde h^{(c)}(x,y) - \\int_{\\R^d}\\widetilde\nh^{(c)}(x,y) P_X(\\mathrm{d}x)\n- \\int_{\\R^d}\\widetilde h^{(c)}(x,y) P_X(\\mathrm{d}y)\\\\\n&&{}+ \\iint_{\\R^d\\times\\R^d} \\widetilde h^{(c)}(x,y) P_X(\\mathrm{d}x) P_X(\\mathrm{d}y).\n\\end{eqnarray*}\nThe approximation error $n^2 \\E(U_n-U_{n,c})^2$ can be reformulated in\nterms of $Z_n$ with kernel $H=H^{(c)}:=h-h^{(c)}$. Hence, it remains to\nverify that $\\sup_{k\\in\\N}\\E|H^{(c)}(H_1,X_{1+k})|^2$ and $\\sup_{n\\in\\N\n}n^{-2}\\sum_{r=1}^{n-1}\\sum_{t=1}^4 Z_{n,r}^{(t)}$\ntend to zero as $c\\to\\infty.$\nFirst, we consider $\\sup_{n\\in\\N}n^{-2}\\sum_{r=1}^{n-1}$\n$Z_{n,r}^{(1)}$, the remaining quantities can be treated similarly. 
The\nsummands of $Z_{n,r}^{(1)}$ are bounded as follows:\n\\begin{eqnarray}\\label{eq.unb1}\n&& \\bigl| \\E H^{(c)}(X_i,X_j)H^{(c)}(X_k, X_l)- \\E H^{(c)}(X_i,\\widetilde\nX_j)H^{(c)}(\\widetilde X_{k},\\widetilde X_{l})\\bigr|\\nonumber\\\\\n&&\\quad\\leq\\E\\bigl|H^{(c)}(X_k,X_l)\n\\bigl[H^{(c)}(X_i,X_j)-H^{(c)}(X_i,\\widetilde X_{j})\\bigr]\\I_{(X_k^\\prime\n,X_l^\\prime)^\\prime\\in[-c,c]^{2d}}\\bigr|\\nonumber\\\\\n&&\\qquad{} +\\E\\bigl|H^{(c)}(X_k,X_l)\n\\bigl[H^{(c)}(X_i,X_j)-H^{(c)}(X_i,\\widetilde X_{j})\\bigr]\\I_{(X_k^\\prime,\nX_l^\\prime)^\\prime\\notin[-c,c]^{2d}}\\bigr|\n\\nonumber\n\\\\[-8pt]\n\\\\[-8pt]\n\\nonumber\n&&\\qquad{} + \\E\\bigl|H^{(c)}(X_i,\\widetilde X_j)\n\\bigl[H^{(c)}(X_k,X_l)-H^{(c)}(\\widetilde X_{k},\\widetilde X_{l})\\bigr]\\I\n_{(X_i^\\prime,\\widetilde X_j^\\prime)^\\prime\\in[-c,c]^{2d}}\\bigr|\\\\\n&&\\qquad{} + \\E\\bigl|H^{(c)}(X_i,\\widetilde X_j)\n\\bigl[H^{(c)}(X_k,X_l)-H^{(c)}(\\widetilde X_{k},\\widetilde X_{l})\\bigr]\\I\n_{(X_i^\\prime,\\widetilde X_j^\\prime)^\\prime\\notin[-c,c]^{2d}}\\bigr|\\nonumber\\\\\n&&\\quad= E_1+E_2+E_3+E_4.\\nonumber\n\\end{eqnarray}\n\nThe functions $H^{(c)}$ are obviously Lipschitz continuous uniformly in $c$.\nTherefore, an iterative application of H\\\"older's inequality to $E_2$\nyields\n\\begin{eqnarray}\\label{eq.zerl2}\nE_2&\\leq& \\bigl(\\E\\bigl|H^{(c)}(X_i,X_j)-H^{(c)}(X_i,\\widetilde\nX_{j})\\bigr|\\bigr)^\\delta\\nonumber\\\\\n&&{}\\times\\bigl(\\E\\bigl|H^{(c)}(X_k,X_l)\\bigr|^{1\/(1-\\delta\n)}\\bigl|H^{(c)}(X_i,X_j)-H^{(c)}(X_i,\\widetilde X_{j})\\bigr|\\I_{(X_k^\\prime,\nX_l^\\prime)^\\prime\\notin[-c,c]^{2d}}\\bigr)^{1-\\delta}\\qquad\\nonumber\\\\\n&\\leq& C\\tau_r^\\delta\\bigl\\{\\bigl(\\E\\bigl|H^{(c)}(X_k,X_l)\\bigr|^{(2-\\delta\n)\/(1-\\delta)}\\I_{(X_k^\\prime, X_l^\\prime)^\\prime\\notin\n[-c,c]^{2d}}\\bigr)^{1\/(2-\\delta)}\\\\\n&&{}\\times\\bigl(\\E\\bigl|H^{(c)}(X_i,X_j)\\bigr|^{(2-\\delta)\/(1-\\delta)}+\\E\n\\bigl|H^{(c)}(X_i,\\widetilde X_j)\\bigr|^{(2-\\delta)\/(1-\\delta)}\\bigr)^{(1-\\delta\n)\/(2-\\delta)}\\bigr\\}^{1-\\delta}\\nonumber\\\\\n&\\leq& C\\tau_r^\\delta\\bigl(\\E\\bigl|H^{(c)}(X_k,X_l)\\bigr|^{(2-\\delta)\/(1-\\delta\n)}\\I_{(X_k^\\prime, X_l^\\prime)^\\prime\\notin[-c,c]^{2d}}\n\\bigr)^{(1-\\delta)\/(2-\\delta)}.\\nonumber\n\\end{eqnarray}\nAs $\\sup_{k\\in\\N}\\E| h(X_1,X_{1+k})|^\\nu<\\infty$ for $\\nu>(2-\\delta\n)\/(1-\\delta)$, we obtain $E_2\\leq\\tau_r^\\delta \\varepsilon_1(c)$ with\n$\\varepsilon_1(c)\\longrightarrow_{c\\to\\infty}0$ after employing H\\\"\nolder's inequality once again.\nAnalogous calculations yield $E_4\\leq\\tau_r^\\delta \\varepsilon_2(c)$\nwith $\\varepsilon_2(c)\\longrightarrow_{c\\to\\infty}0$. Likewise, the\napproximation methods for $E_1$ and $E_3$ are equal. 
Therefore, only\n$E_1$ is considered:\n\\begin{eqnarray*}\nE_1&\\leq& \\E\\biggl|\\int_{\\R^d}\\widetilde h^{(c)}(X_k,y)P_X(\\mathrm{d}y)\n\\bigl[H^{(c)}(X_i,X_j)-H^{(c)}(X_i,\\widetilde X_{j})\\bigr]\\I_{X_k\\in\n[-c,c]^d}\\biggr|\\\\\n&&{}+\\E\\biggl|\\int_{\\R^d}\\widetilde h^{(c)}(y,X_l)P_X(\\mathrm{d}y)\n\\bigl[H^{(c)}(X_i,X_j)-H^{(c)}(X_i,\\widetilde X_{j})\\bigr]\\I_{X_l\\in\n[-c,c]^d}\\biggr|\\\\\n&&{}+\\E\\biggl|\\iint_{\\R^d\\times\\R^d}\\widetilde\nh^{(c)}(x,y)P_X(\\mathrm{d}x)P_X(\\mathrm{d}y)\\bigl[H^{(c)}(X_i,X_j)-H^{(c)}(X_i,\\widetilde\nX_{j})\\bigr]\\biggr|\\\\\n&=& E_{1,1}+E_{1,2}+E_{1,3}.\n\\end{eqnarray*}\nAnalogous to ~(\\ref{eq.zerl2}), we obtain\n\\begin{eqnarray*}\nE_{1,1}&\\leq& C \\tau_r^\\delta \\biggl\\{\\biggl(\\E\\biggl|\\int_{\\R^d}\nh(X_k,y)-\\widetilde h^{(c)}(X_k,y)P_X(\\mathrm{d}y)\\biggr|^{(2-\\delta)\/(1-\\delta)}\\I\n_{X_k\\in[-c,c]^d}\\biggr)^{1\/(2-\\delta)}\\\\\n&&{}\\times\\Bigl[\\sup_{k\\in\\N}\\E\\bigl|H^{(c)}(X_1,X_{1+k})\\bigr|^{(2-\\delta)\/(1-\\delta\n)}+\\E\\bigl|H^{(c)}(X_i,\\widetilde X_j)\\bigr|^{(2-\\delta)\/(1-\\delta)}\n\\Bigr]^{(1-\\delta)\/(2-\\delta)}\\biggr\\}^{1-\\delta}\\\\\n&\\leq& C \\tau_r^\\delta\\biggl (\\int_{\\R^d}\\int_{\\R^d}\n\\bigl|h(x,y)-\\widetilde h^{(c)}(x,y)\\bigr|^{(2-\\delta)\/(1-\\delta)}\\\\\n&&\\hspace*{54pt}{}\\times P_X(\\mathrm{d}y)\\I_{x\\in\n[-c,c]^d} P_X(\\mathrm{d}x)\\biggr)^{(1-\\delta)\/(2-\\delta)}\\\\\n&\\leq& \\tau_r^\\delta \\varepsilon_3(c)\n\\end{eqnarray*}\nwith $\\varepsilon_3(c)\\longrightarrow_{c\\to\\infty}0$. The estimation of\n$E_{1,2}$ coincides with the previous one.\nThe expression $E_{1,3}$ can be bounded as follows:\n\\begin{eqnarray*}\nE_{1,3}&\\leq& C \\tau_r\\iint_{\\R^d\\times\\R^d}\\bigl|h(x,y)-\\widetilde\nh^{(c)}(x,y)\\bigr|P_X(\\mathrm{d}x)P_X(\\mathrm{d}y)\\\\\n&\\leq& C \\tau_r \\iint_{\\R^d\\times\\R^d}|h(x,y)|\\I_{(x^\\prime,y^\\prime\n)^\\prime\\notin[-c,c]^{2d}}P_X(\\mathrm{d}x)P_X(\\mathrm{d}y)\\\\\n&\\leq&\\tau_r \\varepsilon_4(c)\n\\end{eqnarray*}\nwith $\\varepsilon_4(c)\\longrightarrow_{c\\to\\infty}0.$ To sum up,\nwe have $E_1+E_2+E_3+E_4\\leq\\varepsilon_5(c) \\tau_r^\\delta$, where\n$\\varepsilon_5(c)\\longrightarrow_{c\\to\\infty}0$ uniformly in $n$. This\nleads to\n\\[\n\\lim_{c\\to\\infty}\\sup_{n\\in\\N}\\frac{1}{n^2}\\sum_{r=1}^{n-1}Z_{n,r}^{(1)}\n\\leq\\lim_{c\\to\\infty}\\sup_{n\\in\\N} \\frac{1}{n^2}\\sum_{r=1}^{n-1}(r+1)\nn^2 \\tau_r^\\delta \\varepsilon_5(c)\n=0.\n\\]\nIt remains to examine\n\\begin{eqnarray*}\n\\sup_{k\\in\\N}\\E\\bigl[H^{(c)}(X_1,X_{1+k})\\bigr]^2 &\\leq& C\\Bigl( \\sup_{k\\in\\N} \\E\n\\bigl[h(X_1,X_{1+k})-\\widetilde h^{(c)}(X_1,X_{1+k})\\bigr]^2\\\\[-3pt]\n&&\\hphantom{C\\Bigl(}{}+ \\E\\bigl[h(X_1,\\widetilde X_1)-\\widetilde h^{(c)}(X_1,\\widetilde\nX_1)\\bigr]^2\\Bigr).\n\\end{eqnarray*}\nHere, $\\widetilde X_1$ denotes an independent copy of $X_1$. Similar\narguments as before yield $\\lim_{c\\to\\infty}\\sup_{k\\in\\N}\\E\n[H^{(c)}(X_1,X_{1+k})]^2=0.$\n\\end{pf*}\n\n\nThe characteristics stated in the following two lemmas will be\nessential for a wavelet approximation of the kernel function~$h$.\n\n\\begin{lem}\\label{l.2}\nGiven a Lipschitz continuous function $g\\dvtx\\R^d\\to\\R$, define a wavelet\nseries approximation~$g_j$ by $g_j(x):=\\sum_{k\\in\\Z^d}\\alpha_{j,k}\\Phi\n_{j,k}(x),j\\in\\Z$, where $\\alpha_{j,k}=\\int_{\\R^d}g(x) \\Phi_{j,k}(x)\n\\,\\mathrm{d}x$. 
Then $g_j$ is Lipschitz continuous with a constant that is\nindependent of $j$.\n\\end{lem}\n\n\\begin{pf}\nIn order to establish Lipschitz continuity, the function~$g_{j}$ is\ndecomposed into two parts\n\\begin{eqnarray*}\ng_{j}(x)\n&= &\\sum_{ k \\in\\Z^d} \\biggl[ \\int_{\\R^d}\\Phi_{j,k}(u) g(x)\\, \\mathrm{d}u\\biggr]\n\\Phi_{j,k}(x)\n+ \\sum_{ k \\in\\Z^d} \\biggl[ \\int_{\\R^d} \\Phi_{j,k}(u) [g(u) - g(x)]\n\\,\\mathrm{d}u\\biggr]\n\\Phi_{j,k}(x) \\\\\n&=& H_1(x) + H_2(x).\n\\end{eqnarray*}\nAccording to the above choice of the scale function (with\ncharacteristics (1)--(3) of Section~\\ref{SS22}), the prerequisites\nof Corollary~8.1 of H{\\\"a}rdle \\textit{et al.} \\cite{Hetal98} are fulfilled for $N=1$. This\nimplies that $\\int_{-\\infty}^\\infty\\sum_{l\\in\\Z} \\phi(y-l)\\phi(z-l)\\,\\mathrm{d}z\n=1, \\forall y\\in\\R$. Based on this result, we obtain\n\\[\n\\sum_{ k \\in\\Z^d} \\int_{\\R^d}\\Phi_{j,k}(u)\\Phi_{j,k}(x) \\,\\mathrm{d}u\n= 2^{jd} \\prod_{i=1}^d\\int_{-\\infty}^\\infty\\sum_{l \\in\\Z}\\phi\n(2^{j}u_i-l)\\phi(2^{j}x_i-l)\\,\\mathrm{d}u_i=1\\qquad \\forall x\\in\\R^d,\n\\]\nby applying an appropriate variable substitution. To this end, note\nthat for every fixed $x$, the number of non-vanishing summands can be\nbounded by a finite constant uniformly in~$j$ because of the finite\nsupport of $\\phi$. Therefore, the order of summation and integration is\ninterchangeable. Hence, $H_1=g$ which in turn immediately implies the\ndesired continuity property for~$H_1$.\n\nIn order to investigate $H_2$, we define a sequence of functions\n$(\\kappa_{k})_{k\\in\\Z}$ by\n\\[\n\\kappa_{k}(x) = \\int_{\\R^d}\\Phi_{j,k}(u) [g(u) - g(x)]\n\\,\\mathrm{d}u.\\vadjust{\\goodbreak}\n\\]\nThese functions are Lipschitz continuous with a constant decreasing in $j$:\n\\begin{equation}\\label{eq.1}\n| \\kappa_{k}(x) - \\kappa_{k}(\\bar x) |\n\\leq \\operatorname{Lip}(g) \\mathrm{O}(2^{-jd\/2}) \\|x-\\bar x\\|_{l_1}.\n\\end{equation}\nMoreover, boundedness and Lipschitz continuity of~$\\phi$ yield\n\\begin{equation}\\label{eq.2}\n\\| \\Phi_{j,k} \\|_\\infty= \\mathrm{O}(2^{jd\/2})\\quad \\mbox{and}\\quad\n|\\Phi_{j,k}(x) - \\Phi_{j,k}(\\bar x) |\n= \\mathrm{O}\\bigl(2^{j(d\/2+1)}\\bigr) \\|x-\\bar x\\|_{l_1}.\n\\end{equation}\nThus,\n\\begin{eqnarray*}\n|H_2(x)-H_2(\\bar x)|\n&\\leq &\\sum_{k\\in\\Z^d} |\\Phi_{j,k}(x)| |\\kappa_{k}(x) - \\kappa\n_{k}(\\bar x) |\\\\\n&&{}+ \\sum_{ k \\in\\Z^d} |\\kappa_{k}(\\bar x)||\\Phi\n_{j,k}(x) - \\Phi_{j,k}(\\bar x) |\\\\\n&\\leq&C \\|x-\\bar x\\|_{l_1} + \\sum_{k\\in\\Z^d} |\\kappa_{k}(\\bar x)|\n|\\Phi_{j,k}(x) - \\Phi_{j,k}(\\bar x) |.\n\\end{eqnarray*}\nNow, it has to be distinguished whether or not $\\bar x\\in\\operatorname{supp}(\\Phi_{j,k})$\nin order to approximate the second summand. (Here,\n$\\operatorname{supp}$\ndenotes the support of a function.) In the first case, it is\nhelpful to illuminate $|\\kappa_{k}(\\bar x)|= |\\int_{\\R^d} \\Phi_{j,k}(u)\n[g(u) - g(\\bar x)] \\,\\mathrm{d}u|$. The integrand is non-trivial only if $u\\in\n\\operatorname{supp } (\\Phi_{j,k})$. In these situations, $|g(u) - g(\\bar\nx)|=\\mathrm{O}(2^{-j})$ by Lipschitz continuity. 
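(Indeed, in this case $u$ and $\\bar x$ both lie in $\\operatorname{supp}(\\Phi_{j,k})$, a cube whose side length\nequals the length of the support of $\\phi$ scaled by $2^{-j}$, so that $\\|u-\\bar x\\|_{l_1}=\\mathrm{O}(2^{-j})$.) 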
Consequently, we get\n\\[\n|\\kappa_{k}(\\bar x)|\\leq \\mathrm{O}(2^{-j})\\int_{\\R^d}|\\Phi_{j,k}(u)|\\,\\mathrm{d}u= \\mathrm{O}\\bigl(2^{-j(d\/2+1)}\\bigr)\n\\]\nwhich leads to\n\\[\n\\sum_{k\\in\\Z^d} |\\kappa_{k}(\\bar x)||\\Phi_{j,k}(x) - \\Phi\n_{j,k}(\\bar x) |\\leq C \\|x-\\bar x\\|_{l_1}\n\\]\nas the number of nonvanishing summands is finite, independently of the\nvalues of $x$ and~$\\bar x$. Therefore, Lipschitz continuity of $H_2$ is\nobtained as long as $\\bar x\\in\\operatorname{supp }(\\Phi_{j,k})$.\n\nIn the opposite case, we only have to consider the situation of $x\\in\n\\operatorname{supp }(\\Phi_{j,k})$ since the setting~$\\bar x$, $x\\notin\\operatorname\n{supp }(\\Phi_{j,k})$ is trivial. With the aid of (\\ref{eq.1}) and (\\ref\n{eq.2}), the first term of the r.h.s.~of\n\\begin{equation}\\label{eq.l2.1}\n|\\kappa_{k}(\\bar x)[\\Phi_{j,k}(x) - \\Phi_{j,k}(\\bar x)\n] |\\leq|\\kappa_{k}(\\bar x) - \\kappa_{k}(x) | |\\Phi\n_{j,k}(x)|+ |\\kappa_{k}(x) | |\\Phi_{j,k}(x) - \\Phi_{j,k}(\\bar x)\n|\n\\end{equation}\ncan be estimated from above by $C\\|x-\\bar x\\|_{l_1} $. The\ninvestigation of the second summand is identical to the analysis of the\ncase $\\bar x\\in\\operatorname{supp }(\\Phi_{j,k})$.\n\nFinally, we obtain $|H_2(x)-H_2(\\bar x)| \\leq C \\|x-\\bar x\\|_{l_1}$,\nwhere $ C < \\infty$ is a constant that is independent of $j$. This\nyields the assertion of the lemma.\n\\end{pf}\n\n\\begin{lem}\\label{l.3}\nLet $g\\dvtx \\R^d\\to\\R$ be a function that is continuous on some interval $(-c,c)^d$.\nFor arbitrary $b\\in(0,c)$ and $K\\in\\N$ there exists a $J(K,b,c)\\in\\N$\nsuch that for $g$ and its approximation~$ g_J$ given by $ g_J(x) = \\sum\n_{ k \\in\\Z^d} \\alpha_{J,k}\\Phi_{J,k}(x)$ it holds\n\\[\n\\max_{x\\in[-b,b]^d} |g(x) - {g}_J(x)|\n \\leq 1\/K\\qquad \\forall J\\geq J(K,b,c) .\\vspace*{-2pt}\n\\]\n\\end{lem}\n\n\\begin{pf}\nGiven $b\\in(0,c)$, we define $\\bar g^{(b,c)}(x):=g(x) w_{b,c}(x)$,\nwhere $w_{b,c}$ is a Lipschitz continuous and nonnegative weight\nfunction with compact support $S_w\\subset(-c,c)^d$. Moreover,~$w_{b,c}$\nis assumed to be bounded from above by~1 and $w_{b,c}(x) :=1$ for $x\n\\in(-b-\\delta,\\allowbreak b+\\delta)^d$ for some $\\delta>0$ with $b+\\delta< c$.\nAdditionally, we set\n$\\alpha^{(b,c)}_{J,k}:=\\int_{\\R^d} \\bar g^{(b,c)}(u)\\Phi_{J,k}(u) \\,\\mathrm{d}u$. 
Hence,\n\\begin{eqnarray*}\n&&\\max_{ x\\in[-b,b]^d}|g(x) - g_J(x)|\\\\[-2pt]\n&&\\qquad\\leq \\max_{ x\\in[-b,b]^d} \\biggl|\\bar g^{(b,c)} (x)\n- \\sum_{k \\in\\Z^d} \\alpha^{(b,c)}_{J, k} \\Phi_{J,k}(x) \\biggr|+ \\max_{\nx\\in[-b,b]^d}\\biggl |\\sum_{k \\in\\Z^d} \\alpha^{(b,c)}_{J, k} \\Phi_{J,k}(x)\n - g_J(x)\\biggr|\\\\[-2pt]\n&&\\qquad= \\max_{ x\\in[-b,b]^d} A^{(J)}(x)+ \\max_{ x\\in[-b,b]^d}B^{(J)}(x).\n\\end{eqnarray*}\nSince $\\bar g^{(b,c)}\\in C_0(\\R^d)$, Theorem~8.4 of Wojtaszczyk \\cite{Woj97}\nimplies that there exists a~$J_0(K,b,c)\\in\\N$ such that $\\max_{ x\\in\n[-b,b]^d} A^{(J)}(x)\\leq1\/K$ for all $J\\geq J_0(K,b,c)$.\nMoreover, the introduction of the finite set of indices\n\\[\n\\bar Z(J):= \\{k\\in\\Z^d | \\Phi_{J,k}(x)\\neq0 \\mbox{ for some\n} x\\in[-b,b]^d\\}\n\\]\nleads to\n\\[\n\\max_{x\\in[-b,b]^d}B^{(J)}(x) =\n\\max_{x\\in[-b,b]^d}\\biggl|\n\\sum_{k\\in\\bar Z(J)} \\bigl( \\alpha_{J,k} -\\alpha_{J,k}^{(b,c)}\\bigr) \\Phi\n_{J,k}(x)\\biggr|.\n\\]\nThis term is equal to zero for all $J\\geq J(K,b,c)$ and some\n$J(K,b,c)\\geq J_0(K,b,c)$ since the definition of $\\bar g^{(b,c)}$\nimplies $\\alpha_{J,k} = \\alpha_{J,k}^{(b,c)},~ \\forall k\\in\\bar Z,$\nfor all sufficiently large~$J$.\n\\end{pf}\n\n\\begin{pf*}{Proof of Lemma~\\textup{\\protect\\ref{l.5}}}\nThe assertion of the lemma is verified in two steps.\nFirst, the bounded kernel $h_c$, constructed in the proof of Lemma~\\ref\n{l.1}, is approximated by $\\widetilde h^{(K)}_c$ which is defined by\n$\\widetilde h^{(K)}_c(x,y)=\\sum_{k_1,k_2\\in\\mathbb{Z}^d}\\alpha\n_{J(K);k_1,k_2}^{(c)}\\Phi_{J(K),k_1}(x)\\Phi_{J(K),k_2}(y)$ with $\\alpha\n_{J(K);k_1,k_2}^{(c)}=\\iint_{\\R^d\\times\\R^d}h_c(x, y)\\Phi\n_{J(K),k_1}(x)\\Phi_{J(K),k_2}(y) \\,\\mathrm{d}x \\,\\mathrm{d}y$. Here, the indices\n$(J(K))_{K\\in\\N}$ with $J(K)$ $\\longrightarrow_{K\\to\\infty} \\infty$ are\nchosen such that the assertion of Lemma~\\ref{l.3} holds true for\n$b=b(K)\\in\\R$ with $P(X_1\\notin[-b,b]^d)\\leq K^{-1}$ and $c=2b$.\nSince the function $\\widetilde h^{(K)}_c$ is not degenerate in general,\nwe introduce its degenerate counterpart\n\\begin{eqnarray*}\nh^{(K)}_c(x,y)&= &\\widetilde h^{(K)}_c(x,y)-\\int_{\\R^d}\\widetilde\nh^{(K)}_c(x,y)P_X(\\mathrm{d}x)-\\int_{\\R^d}\\widetilde h^{(K)}_c(x,y)P_X(\\mathrm{d}y)\\\\[-2pt]\n&&{}+\\iint_{\\R^d\\times\\R^d}\\widetilde h^{(K)}_c(x,y)P_X(\\mathrm{d}x)P_X(\\mathrm{d}y)\n\\end{eqnarray*}\nand denote the corresponding $U$-statistic by $U_{n,c}^{(K)}$.\\vadjust{\\goodbreak}\n\nNow, the structure of the proof is as follows. First, we prove\n\\begin{equation}\\label{eq.l5.1}\n\\sup_{n\\in\\N}n^2 \\E\\bigl(U_{n,c}-U_{n,c}^{(K)}\\bigr)^2 \\mathop\n{\\longrightarrow} _{K\\to\\infty} 0.\n\\end{equation}\nIn a second step, it remains to show that for every fixed $K$\n\\begin{equation}\\label{eq.l5.2}\n\\sup_{n\\in\\N}n^2 \\E\\bigl(U_{n,c}^{(K)}-U_{n,c}^{(K,L)}\\bigr)^2 \\mathop\n{\\longrightarrow} _{L\\to\\infty} 0.\n\\end{equation}\nIn order to verify (\\ref{eq.l5.1}), we rewrite $n^2 \\E\n(U_{n,c}-U_{n,c}^{(K)})^2$ in terms of $Z_n$ with kernel function\n$H:=H^{(K)}=h_c-h^{(K)}_c$. Hence, it remains to verify that $\\sup_{n\\in\n\\N}n^{-2}\\sum_{r=1}^{n-1}\\sum_{t=1}^4 Z_{n,r}^{(t)}$\nand $\\sup_{k\\in\\N}\\E|H^{(K)}(H_1,X_{1+k})|^2$ tend to zero as $K\\to\n\\infty.$\nExemplarily, we investigate $\\sup_{n\\in\\N}n^{-2}\\sum\n_{r=1}^{n-1}Z_{n,r}^{(1)}$. 
The summands of $Z_{n,r}^{(1)}$ can be\nbounded as follows:\n\\begin{eqnarray*}\n&&\\bigr| \\E H^{(K)}(X_i,X_j)H^{(K)}(X_k, X_l)-H^{(K)}(X_i,\\widetilde\nX_j)H^{(K)}(\\widetilde X_k,\\widetilde X_l)\\bigr|\\\\\n&&\\quad\\leq \\E\\bigl|H^{(K)}(X_k,X_l)\n\\bigl[H^{(K)}(X_i,X_j)-H^{(K)}(X_i,\\widetilde X_{j})\\bigr]\\bigr|\\\\\n&&\\qquad{} + \\E\\bigl|H^{(K)}(X_i,\\widetilde X_j)\n\\bigl[H^{(K)}(X_k,X_l)-H^{(K)}(\\widetilde X_{k},\\widetilde X_{l})\n\\bigr]\\bigr|.\n\\end{eqnarray*}\nSince further approximations are similar for both summands, we\nconcentrate on the first one. Note that boundedness of $h_c$ implies\nuniform boundedness of $(H^{(K)})_K$ due to the compact support of the\nfunction $\\phi$. Moreover, the constant $\\operatorname{Lip}(H^{(K)})$ does not depend\non $K$ in consequence of Lemma~\\ref{l.2}. Therefore, the application of\nH\\\"older's inequality leads to\n\\[\n\\E\\bigl|H^{(K)}(X_k,X_l)\\bigl[H^{(K)}(X_i,X_j)-H^{(K)}(X_i,\\widetilde\nX_{j})\\bigr]\\bigr|\n\\leq C\\tau_r^\\delta\\bigl[\\E\\bigl|H^{(K)}(X_k,X_l)\\bigr|^{1\/(1-\\delta)}\n\\bigr]^{1-\\delta}.\n\\]\nThe construction of the sequence $(b(K))_K$ above allows for the\nfollowing estimation:\n\\begin{eqnarray*}\n&&\\E\\bigl|H^{(K)}(X_k,X_l)\\bigr|^{1\/(1-\\delta)}\\\\\n&&\\quad= \\E\\bigl|H^{(K)}(X_k,X_l)\\bigr|^{1\/(1-\\delta)}\\I_{X_k, X_l\\in[-b(K),\nb(K)]^d}+\\mathrm{O}\\bigl(P\\bigl(X_1\\notin[-b(K),b(K)]^d\\bigr)\\bigr)\\\\\n&&\\quad\\leq\\sup_{x,y\\in\\bigl[-b(K),b(K)\\bigr]^d} \\bigl|H^{(K)}(x,y)\\bigr|^{1\/(1-\\delta)}+\\frac{C}{K}.\n\\end{eqnarray*}\nAccording to Lemma~\\ref{l.3} and the above choice of the sequence\n$(b(K))_K$, we obtain\n\\begin{eqnarray*}\n&&\\sup_{x,y\\in[-b(K),b(K)]^d}\\bigl|H^{(K)}(x,y)\\bigr|\\\\\n&&\\quad\\leq\\frac{1}{K} + 2\\sup_{{x,y\\in[-b(K),b(K)]^d}} \\E\\bigl|h_c(x,\nX_1)-\\widetilde{h}^{(K)}_c(x,X_1)\\bigr|\\\\\n&&\\qquad{} + \\biggl| \\iint_{\\R^d\\times\\R^d}h_c(x,y)- \\widetilde\n{h}^{(K)}_c(x,y) P_X(\\mathrm{d}x) P_X(\\mathrm{d}y)\\biggr|\\\\\n&&\\quad\\leq\\frac{4}{K}+ 2\\sup_{{x\\in[-b(K),b(K)]^d}} \\E\\bigl|h_c(x,\nX_1)-\\widetilde{h}^{(K)}_c(x,X_1)\\bigr|\\I_{X_1\\notin[-b(K),b(K)]^d}\\\\\n&&\\qquad{} + 2 \\int_{\\R^d}\\int_{\\R^d\\setminus[-b(K),b(K)]^d}\n\\bigl|h_c(x,y)-\\widetilde{h}^{(K)}_c (x,y)\\bigr| P_X(\\mathrm{d}x) P_X(\\mathrm{d}y)\\\\\n&&\\quad\\leq\\frac{C}{K}.\n\\end{eqnarray*}\nConsequently,\n\\[\n\\bigl| \\E H^{(K)}(X_i,X_j)H^{(K)}(X_k, X_l)-\\E H^{(K)}(X_i,\\widetilde\nX_j)H^{(K)}(\\widetilde X_k,\\widetilde X_l)\\bigr|\\leq C \\varepsilon\n_K \\tau^{\\delta}_r\n\\]\nfor some null sequence $(\\varepsilon_K)_K$. This implies that $\\sup\n_{n\\in\\N} n^{-2}\\sum_{r=1}^n Z_{n,r}^{(1)}$ tends to zero as $K$ increases.\nFurthermore, one obtains $\\sup_{k\\in\\N}\\E\n[H^{(K)}(X_1,X_{1+k})]^2=\\mathrm{O}(K^{-1})$ similarly to the\nconsideration of $\\E|H^{(K)}(X_k,X_l)|^{1\/(1-\\delta)}$ above.\nThus, we get $\\sup_{n}n^2 \\E(U_{n,c}-U_{n,c}^{(K)})^2 \\longrightarrow\n_{K\\to\\infty} 0.$\n\nThe main goal of the previous step was the multiplicative separation of\nthe random variables which are cumulated in $h_c$.\nThe aim of the second step is the approximation of~$h_c^{(K)}$, whose\nrepresentation is given by an infinite sum, by a function consisting of\nonly finitely many summands.\nSimilar to the foregoing part of the proof the approximation error\n$n^2 \\E(U_{n,c}^{(K)}-U_{n,c}^{(K,L)})^2$ is reformulated in terms of\n$Z_n$ with kernel $H:=H^{(L)}= h^{(K)}_c-h^{(K,L)}_c$. 
As before, we\nexemplarily take $n^{-2}\\sum_{r=1}^{n-1}Z_{n,r}^{(1)} $ and $\\sup_{k\\in\n\\N}\\E|H^{(L)}(X_1,X_{1+k})|^2$ into further consideration.\nConcerning the summands of $Z_{n,r}^{(1)}$, we obtain\n\\begin{eqnarray*}\n&&\\bigl| \\E H^{(L)}(X_i,X_j)H^{(L)}(X_k, X_l)-\\E H^{(L)}(X_i,\\widetilde\nX_j)H^{(L)}(\\widetilde X_k,\\widetilde X_l)\\bigr|\\\\\n&&\\quad\\leq\\E\\bigl|H^{(L)}(X_k,X_l)\n\\bigl[H^{(L)}(X_i,X_j)-H^{(L)}(X_i,\\widetilde X_{j})\\bigr]\\I_{(X_k^\\prime\n,X_l^\\prime)^\\prime\\in[-B,B]^{2d}}\\bigr|\\\\\n&&\\qquad{} +\\E\\bigl|H^{(L)}(X_k,X_l)\n\\bigl[H^{(L)}(X_i,X_j)-H^{(L)}(X_i,\\widetilde X_{j})\\bigr]\\I_{(X_k^\\prime\n,X_l^\\prime)^\\prime\\notin[-B,B]^{2d}}\\bigr|\\\\\n&&\\qquad{} + \\E\\bigl|H^{(L)}(X_i,\\widetilde X_j)\n\\bigl[H^{(L)}(X_k,X_l)-H^{(L)}(\\widetilde X_{k},\\widetilde X_{l})\\bigr]\\I\n_{(X_i^\\prime,\\widetilde X_j^\\prime)^\\prime\\in[-B,B]^{2d}}\\bigr|\\\\\n&&\\qquad{} + \\E\\bigl|H^{(L)}(X_i,\\widetilde X_j)\n\\bigl[H^{(L)}(X_k,X_l)-H^{(L)}(\\widetilde X_{k},\\widetilde X_{l})\\bigr]\\I\n_{(X_i^\\prime,\\widetilde X_j^\\prime)^\\prime\\notin[-B,B]^{2d}}\\bigr|\\\\\n&&\\quad=E_1+E_2+E_3+E_4\n\\end{eqnarray*}\nfor arbitrary $B>0$. Obviously, it suffices to take the first two\nsummands into further considerations. The both remaining terms can be\ntreated similarly. First, note that $(H^{(L)})_L$ is uniformly bounded.\nSince $\\phi$ and $\\psi$ have compact support, the number of overlapping\nfunctions within $(\\Phi_{0,k})_{k\\in\\{-L,\\ldots,L\\}^d}$ and $(\\Psi\n_{j,k}^{(e)})_{k\\in\\{-L,\\ldots,L\\}^d, 0\\leq j0$ such that\n\\begin{eqnarray}\\label{eq.t2.1}\n&&\\bigl|\\widetilde h^{(K)}_c(\\bar x,\\bar y)-\\widetilde h^{(K)}_c(x,y)\\bigr|\\nonumber\\\\\n&&\\quad\\leq f_1(x,\\bar x,y, \\bar y)[\\|x-\\bar x\\|_{l_1}+\\|y-\\bar y\\|\n_{l_1}]+|H_2(\\bar x,\\bar y)-H_2(x,y)|\n\\nonumber\n\\\\[-8pt]\n\\\\[-8pt]\n\\nonumber\n&&\\quad\\leq C f_1(x,\\bar x,y, \\bar y)[\\|x-\\bar x\\|_{l_1}+\\|y-\\bar y\\|\n_{l_1}]\\\\\n&&\\qquad{} +\\sum_{k_1,k_2\\in\\Z^d} \\bigl(|\\kappa_{k_1,k_2}(\\bar x, \\bar\ny)|\\bigl|\\Phi_{J(K),k_1}(x)\\Phi_{J(K),k_2}(y)-\\Phi_{J(K),k_1}(\\bar x)\\Phi\n_{J(K),k_2}(\\bar y)\\bigr|\\bigr),\\nonumber\n\\end{eqnarray}\nwhere\n$\\kappa_{k_1,k_2}$ is given by\n\\[\n\\kappa_{k_1,k_2}(x, y):=\n\\int_{\\R^d}\\int_{\\R^d}\\Phi_{J(K),k_1}(u)\\Phi\n_{J(K),k_2}(v)[h_c(u,v)-h_c(x,y)]\\, \\mathrm{d}u \\,\\mathrm{d}v\n\\]\nand $H_2$ is defined as in the proof of Lemma~\\ref{l.2}.\nIn order to approximate the last summand of~(\\ref{eq.t2.1}), we\ndistinguish again between the cases whether or not $(\\bar x^\\prime,\n\\bar y^\\prime)^\\prime\\in\\operatorname{supp }(\\Phi_{J(K),k_1}\\times \\Phi_{J(K),k_2})$.\nIn the first case, an upper bound of order\n\\[\n\\mathrm{O}\\Bigl(\\max_{a_1,a_2\\in[-S_\\phi\/2^{J(K)},S_\\phi\/2^{J(K)}]^d} f_1(\\bar x,\n\\bar x+a_1,\\bar y,\\bar y+a_2)\\Bigr)(\\|\\bar x-x\\|_{l_1}+ \\|\\bar y-y\\|_{l_1})\n\\]\ncan be obtained since\n\\begin{eqnarray*}\n|\\kappa_{k_1,k_2}(\\bar x,\\bar y)|&\\leq&\\frac{ S_\\phi}{2^{J(K)}} \\max\n_{a_1,a_2\\in[-S_\\phi\/2^{J(K)},S_\\phi\/2^{J(K)}]^d}f_1(\\bar x, \\bar\nx+a_1,\\bar y,\\bar y+a_2)\\\\\n&&{} \\times\\iint_{\\R^d\\times\\R^d}\\bigl|\\Phi_{J(K),k_1}(u)\\Phi\n_{J(K),k_2}(v)\\bigr| \\,\\mathrm{d}u \\,\\mathrm{d}v\\\\\n&\\leq& \\mathrm{O}\\bigl(2^{-J(K)(d+1)}\\bigr)\\max_{a_1,a_2\\in[-S_\\phi\n\/2^{J(K)},S_\\phi\/2^{J(K)}]^d}f_1(\\bar x, \\bar x+a_1,\\bar y,\\bar y+a_2).\n\\end{eqnarray*}\nHere, $S_\\phi$ denotes the length of the support of $\\phi$. 
In the\nsecond case, a decomposition similar to~(\\ref{eq.l2.1}) can be employed\nwhich leads to\nthe upper bound\n\\[\n\\mathrm{O}\\Bigl(f_1(x,\\bar x,y,\\bar y)+\\max_{a_1,a_2\\in[-S_\\phi\/2^{J(K)},S_\\phi\n\/2^{J(K)}]^d} f_1( x, x+a_1, y, y+a_2)\\Bigr)(\\|\\bar x-x\\|_{l_1}+ \\|\\bar y-y\\|_{l_1}).\n\\]\nConsequently, we get\n\\begin{eqnarray*}\n\\bigl|\\widetilde h^{(K)}_c(\\bar x,\\bar y)-\\widetilde h^{(K)}_c(x,y)\\bigr|&\\leq&\n\\mathrm{O}\\Bigl( \\max_{a_1,a_2\\in[-S_\\phi\/2^{J(K)},S_\\phi\/2^{J(K)}]^d} f_1( x,\nx+a_1, y, y+a_2)\\\\\n&&\\hphantom{\\mathrm{O}\\Bigl(}{} + \\max_{a_1,a_2\\in[-S_\\phi\/2^{J(K)},S_\\phi\/2^{J(K)}]^d} f_1(\\bar x,\n\\bar x+a_1,\\bar y,\\bar y+a_2) \\\\\n&&\\hphantom{\\mathrm{O}\\Bigl(}{}+f_1(x,\\bar x,y,\\bar y)\\Bigr)\\times(\\|\\bar x-x\\|_{l_1}+ \\|\\bar y-y\\|\n_{l_1})\\\\\n&=:& f_2(x,\\bar x, y, \\bar y)(\\|\\bar x-x\\|_{l_1}+ \\|\\bar y-y\\|_{l_1}).\n\\end{eqnarray*}\nThis yields $|H^{(K)}(x,y)-H^{(K)}(\\bar x, \\bar y)|\\leq f_3(x,\\bar x,y\n,\\bar y)(\\|x-\\bar x\\|_{l_1}+\\|y-\\bar y\\|_{l_1})$ with $f_3(x,\\bar x,y,\\allowbreak\n\\bar y)=2 f_2(x,\\bar x,y ,\\bar y)+\\int_{\\R^d} f_2(x,\\bar\nx,z,z)P_X(\\mathrm{d}z)+\\int_{\\R^d} f_2(z,z,\\bar y,y)P_X(\\mathrm{d}z)$.\nNote that under~(A4)(i), $\\E[ f_3(Y_i,Y_j,Y_k,Y_l)]^\\eta(\\|Y_i\\|\n_{l_1}+\\|Y_j\\|_{l_1}+\\|Y_k\\|_{l_1}+\\|Y_l\\|_{l_1})<\\infty$ if $J(K)$ is\nsufficiently large. Thus, we have\n\\[\n\\E\\bigl|H^{(K)}(Y_{k_1},Y_{k_2})-H^{(K)}(Y_{k_3},Y_{k_4})\\bigr|\n\\leq C (\\E\\|Y_{k_1}-Y_{k_3}\\|_{l_1}+\\E\\|Y_{k_2}-Y_{k_4}\\|_{l_1})^\\delta\n\\]\nfor $Y_{k_i} (k_i=1,\\ldots,5, i=1,\\ldots,4)$, as defined in (A4).\nMoreover, Lemma~\\ref{l.3} remains valid with $g=h_c$.\nTherefore, one can follow the lines of the proof of Lemma~\\ref{l.4} and\nplug in the inequality above. This procedure leads to $\\sup_{n\\in\\N\n}n^2 \\E(U_{n,c}-U_{n,c}^{(K)})^2 \\longrightarrow_{K\\to\\infty} 0$.\n\nIn the third step of the proof, we verify $\\sup_{n\\in\\N} n^2 \\E\n(U_{n,c}^{(K)}-U_{n,c}^{(K,L)})^2 \\longrightarrow_{L\\to\\infty}0$.\nFor this purpose, it suffices to plug in a modified approximation of\n$H^{(L)}(x,y)-H^{(L)}(\\bar x,\\bar y)$ into the second part of the proof\nof Lemma~\\ref{l.5}.\nLipschitz continuity of $h_c^{(K,L)}$ implies\n\\[\n\\bigl|H^{(L)}(x,y)-H^{(L)}(\\bar x,\\bar y)\\bigr|\\leq f_4(x,\\bar x,y ,\\bar y)[\\|\nx-\\bar x\\|_{l_1}+\\|y-\\bar y\\|_{l_1}]\\vspace*{-2pt}\n\\]\nwith $f_4(x,\\bar x,y ,\\bar y)=C+f_3(x,\\bar x,y ,\\bar y)$.\nSince, $f_4$ satisfies the moment assumption of (A4)(i) with $A=0$ for\nsufficiently large $J(K)$, we obtain\n\\[\n\\E\\bigl|H^{(L)}(Y_{k_1},Y_{k_2})-H^{(L)}(Y_{k_3},Y_{k_4})\\bigr|\\leq C [\\E(\\|\nY_{k_1}-Y_{k_3}\\|_{l_1}+\\|Y_{k_2}-Y_{k_4}\\|_{l_1})]^\\delta.\\vspace*{-2pt}\n\\]\nHence, $\\sup_{n\\in\\N} n^2 \\E(U_{n,c}^{(K)}-U_{n,c}^{(K,L)})^2\n\\longrightarrow_{L\\to\\infty}0$.\nSumming up the three steps yields\n\\[\n\\lim_{c\\to\\infty}\\limsup_{K\\to\\infty}\\limsup_{L\\to\\infty}\\sup_{n\\in\\N\n}n^2 \\E\\bigl(U_n-U_{n,c}^{(K,L)}\\bigr)^2=0.\n\\]\n\\upqed\\vspace*{-2pt}\\end{pf*}\n\n\\begin{pf*}{Proof of Lemma~\\textup{\\protect\\ref{l.6}}}\nA positive variance of $Z$ implies the existence of constants $V>0$ and\n$c_0>0$ such that for every $c\\geq c_0$ we can find a $K_0\\in\\N$ such\nthat for every $K\\geq K_0$ there is an $L_0$ with $\\operatorname{var}(Z^{(K,L)}_c)\\geq\nV, \\forall L\\geq L_0.$ Moreover, uniform equicontinuity of the\ndistribution functions of $(((Z^{(K,L)}_c)_{L})_{K})_c$ yields the\ndesired property of $Z$. 
By matrices-based notation of $Z_c^{(K,L)}$,\nwe obtain\n\\[\nZ^{(K,L)}_c =C^{(K,L)}+\\sum_{k_1,k_2=1}^{M(K,L)}\\gamma\n_{k_1,k_2}^{(c,K,L)}Z_{k_1}^{(K,L)} Z_{k_2}^{(K,L)}=C^{(K,L)}+\\bigl[\\bar\nZ^{(K,L)}\\bigr]^\\prime\\Gamma^{(K,L)}_c\\bar Z^{(K,L)},\\vspace*{-2pt}\n\\]\nwith a constant $C^{(K,L)}$, a symmetric matrix of coefficients $\\Gamma\n^{(K,L)}_c$, and a normal vector\n$\\bar Z^{(K,L)}=(Z_1^{(K,L)},\\ldots, Z_{M(K,L)}^{(K,L)})^\\prime$.\nHence, $Z^{(K,L)}_c-C^{(K,L)}$ can be rewritten as follows:\n\\begin{eqnarray*}\nZ^{(K,L)}_c-C^{(K,L)}&\\stackrel{d}{=}& \\bar Y^\\prime\n\\bigl[U_c^{(K,L)}\\bigr]^\\prime\\Lambda^{(K,L)}_c U_c^{(K,L)}\\bar Y\n=Y^\\prime \\Lambda^{(K,L)}_c Y\\\\[-2pt]\n&=&\\sum_{k=1}^{M(K,L)}\\lambda_k^{(c,K,L)}Y_k^2.\\vspace*{-2pt}\n\\end{eqnarray*}\nHere $U_c^{(K,L)}$ is a certain orthogonal matrix, $\\Lambda\n^{(K,L)}_c:=\\operatorname{diag}(\\lambda_1^{(c,K,L)}, \\ldots,\\lambda\n_{M{(K,L)}}^{(c,K,L)})$ with $|\\lambda_1^{(c,K,L)}|\\geq\\cdots\\geq|\\lambda\n_{M{(K,L)}}^{(c,K,L)}|$, and $\\bar Y$ as well as $ Y$ are multivariate\nstandard normally distributed random vectors. For notational\nsimplicity, we suppress the upper index $(c,K,L)$ in the sequel.\nDue to the above choice of the triple~$(c,K,L)$, either $\\sum\n_{k=1}^{4}(\\lambda_k)^2$ or $\\sum_{k=5}^{M(K,L)}(\\lambda\n_k)^2$ is bounded from below by $V\/4$.\nIn the first case, $\\lambda_1\\geq\\sqrt{V\/16}$ holds true which implies\n\\[\nP\\bigl(Z^{(K,L)}_c\\in[x-\\varepsilon,x+\\varepsilon]\\bigr)\\leq\\int\n_{0}^{2\\varepsilon} f_{\\lambda_1Y_1^2}(t)\\, \\mathrm{d}t\n\\leq P(Y_1^2\\leq2\\varepsilon)\\max\\biggl\\{1,\\frac{4}{\\sqrt V}\\biggr\\}\\qquad\n \\forall x\\in\\R.\\vspace*{-2pt}\n\\]\nHere, the first inequality results from the fact that convolution\npreserves the continuity properties of the smoother function.\\vadjust{\\goodbreak}\nIn the opposite case, that is, $\\sum_{k=5}^{M(K,L)}(\\lambda_k\n)^2\\geq V\/4$, it is possible to bound the uniform norm of the density\nfunction of $Z^{(K,L)}_c$ by means of its variance. 
To this end, we\nfirst consider the characteristic function $\\varphi_{Z^{(K,L)}_c}$ of\n$Z^{(K,L)}_c$ and assume w.l.o.g.~that $M(K,L)$ is divisible by 4.\nDefining a sequence $(\\mu_k)_{k=1}^{M(K,L)\/4}$ by $\\mu_k=\\lambda_{4k}$\nfor $k\\in\\{1,\\ldots,M(K,L)\/4\\}$ allows for the approximation:\n\\begin{eqnarray*}\n\\bigl| \\varphi_{Z^{(K,L)}_c}(t) \\bigr|\n& =& \\Biggl\\{ \\prod_{j=1}^{M(K,L)} ( 1 + [2\\lambda_j t]^2\n) \\Biggr\\}^{-1\/4} \\leq\\Biggl\\{ \\prod_{j=1}^{M(K,L)\/4} ( 1 +\n[2\\mu_j t]^2 ) \\Biggr\\}^{-1} \\\\[-3pt]\n& \\leq &\\frac{1}{1 + 4(\\mu_1^2+\\cdots+\\mu_{M(K,L)\/4}^2) t^2}.\n\\end{eqnarray*}\nBy inverse Fourier transform, we obtain the following result concerning\nthe density function of $Z^{(K,L)}_c$:\n\\begin{eqnarray*}\n\\bigl\\| f_{Z^{(K,L)}_c} \\bigr\\|_\\infty\n& \\leq& \\frac{1}{2\\uppi} \\| \\varphi_{Z^{(K,L)}_c} \\|_1 \\leq \\frac{1}{2\\uppi\n} \\int_{-\\infty}^\\infty\n\\frac{1}{1 + (2\\sqrt{\\mu_1^2+\\cdots+\\mu_{M(K,L)\/4}^2} t)^2} \\, \\mathrm{d}t \\\\[-3pt]\n& =& \\frac{1}{\\sqrt{\\mu_1^2+\\cdots+\\mu_{M(K,L)\/4}^2}} \\frac{1}{2\\uppi}\n\\int_0^\\infty\\frac{1}{1+u^2} \\mathrm{d}u \\\\[-3pt]\n&\\leq&\\frac{1}{2 \\sqrt{4(\\mu_1^2+\\cdots+\\mu_{M(K,L)\/4-1}^2})}\\\\[-3pt]\n&\\leq&\\frac{1}{2 \\sqrt{\\lambda_5^2+\\cdots+\\lambda_{M(K,L)}^2}}\n\\leq\\frac{1}{\\sqrt{V}}.\n\\end{eqnarray*}\nThus, $P(Z^{(K,L)}_c\\in[x-\\varepsilon,x+\\varepsilon])\\leq2\\varepsilon\/\\sqrt\n{V}$ which completes the studies of the case $\\sum_{k=5}^{M(K,L)}\n(\\lambda_k)^2>V\/4$ and finally yields the assertion.\\vspace*{-2pt}\n\\end{pf*}\n\n\\begin{pf*}{Proof of Lemma~\\textup{\\protect\\ref{l.7}}}\nThis result is an immediate consequence of Theorem~\\ref{t.3}.\\vspace*{-2pt}\n\\end{pf*}\n\n\\section*{Acknowledgements}\\vspace*{-2pt}\nThe author is grateful to Michael H.~Neumann for his constructive\nadvice and fruitful discussions. She also thanks an anonymous referee\nfor helpful comments that led to an improvement of the paper.\nThis research was funded by the German Research Foundation DFG (project:\nNE~\\mbox{606\/2-1}).\\vspace*{-2pt}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:Introduction}\n\nThe Cabibbo-Kobayashi-Maskawa(CKM) matrix has proved to be an effective description of the weak CP violation where the observed weak CP phase seems to be maximal \\cite{CKM73}. The maximality of the weak CP violation has been pointed out by a number of authors from the known quark mass ratios \\cite{Shin84}. If it is maximal, there is a hope to obtain it from the dynamical details of the mass generation mechanism \\cite{MassGenFirst}.\n\nThe maximal CP violation is an attractive idea from the theoretical point of view. For example, from U(1)$^3$ symmetry, Georgi, Nelson and Shin tried to obtain the phase $\\frac{\\pi}2$ which however has led to an unacceptably large generation of the vacuum angle $\\theta_{\\rm weak}$ \\cite{GeorgiShin85}.\nThe CKM matrix can be expanded in terms of a parameter $\\lambda$ which has been suggested as a superheavy mass ratio by Froggatt and Nielsen(FN) \\cite{FN79}. This superheavy mass ratio is typically provided by a superheavy vacuum expectation value(VEV) of the standard model(SM) singlet field compared to the Planck scale $M_P\\simeq 2.44\\times 10^{18}\\,\\textrm{GeV}$. 
In view of the recent accurate determination of the CKM matrix elements in the last decade \\cite{PData10}, now time is ripe enough to scrutinize these old ideas whether they are realizable or not.\n\nThe well-known facts on the CKM matrix are\n\\begin{itemize}\n\\item[A.] To read the weak CP violation directly from the $V_{\\rm CKM}$~ elements, Det.$V_{\\rm CKM}$~ is better to be real \\cite{KimSeo11}. Note however that the phase of Det.$V_{\\rm CKM}$~ is not observable. To have the weak CP violation, $V_{\\rm CKM}$~ itself must be complex.\n\n\\item[B.] If any among the nine elements of $V_{\\rm CKM}$~ is zero, then there is no\n weak CP violation\n\\item[C.] The Cabibbo angle $\\sin\\theta_C=\\lambda$ is a good expansion parameter \\cite{Wol83}.\n\n\\item[D.] With Item A satisfied, the product $V_{31}V_{22}V_{13}$ is the barometer of weak CP violation \\cite{KimSeo11}.\n\n\\item[E.] $V_{\\rm CKM}$~ is derivable from the Yukawa texture \\cite{MassGenFirst}.\n\n\\end{itemize}\n\nIn Ref. \\cite{KimSeo11}, an exact CKM matrix satisfying the above and containing the approximate Wolfenstein form was presented. In this exact form with Det.$V_{\\rm CKM}$=1, vanishing of any one parameter among $\\theta_{1,2,3}$ or $\\delta$ makes $V_{\\rm CKM}$~ real. The CP phase $\\delta$ has been determined to be almost maximal, $\\delta= 89.0^{\\rm o}\\pm 4.4^{\\rm o}$ \\cite{PData10}. On the other hand, in Wolfenstein's approximate form there are two phases, the phase $\\delta_b$ of the (13) element and the phase $\\delta_t$ of the (31) element. These phases are proportional to the imaginary parameter $i\\eta$ of Wolfenstein \\cite{Wol83} but the physically observable CP phase is $\\delta_b+\\delta_t$ which is our single parameter $\\delta$.\\footnote{This was observed also in \\cite{Ahn11}. See also \\cite{QinMaPLB11}.}\nAs shown in \\cite{KimSeo11}, Item D shows the weak CP violation directly from each element of\n$V_{\\rm CKM}$. The weak CP invariant quantity the Jarlskog determinant \\cite{Jarlskog85} removes the real part in the expression of Ref. \\cite{KimSeo11} and leaves only the imaginary part. Therefore, Item A is simpler than calculating the Jarlskog determinant. Among 6 contributions to Det.$V_{\\rm CKM}$, any one is a good barometer of CP violation. In particular, the product of the skew diagonal elements (Item D) is a quick barometer of the weak CP violation.\n\nThe largest error, numerically 0.0011, in the currently determined matrix elements resides in $V_{\\rm CKM}$$_{(23)}$ and $V_{\\rm CKM}$$_{(32)}$. It is of order $\\lambda^3$ which is an order $\\lambda$ smaller than the leading terms of $V_{\\rm CKM}$$_{(23)}$ and $V_{\\rm CKM}$$_{(32)}$, and hence the leading terms in each element of $V_{\\rm CKM}$~ are pretty well determined by now. Since the $\\lambda$ expansion of $V_{\\rm CKM}$~ has determined all the elements accurately, with the current knowledge of the quark mass ratios \\cite{PData10} it is fairly straightforward to calculate the quark mass matrices in the weak basis, $\\tilde M^{u}$ and $\\tilde M^{d}$ \\cite{KimSeo11}. Now, we can attack the problem posed in Item E \\cite{MassGenFirst}.\n\nToward the Yukawa textures, the most obvious try would be U(1) symmetries \\cite{GeorgiShin85}. If they are gauged, the anomaly cancelation should be satisfied. Here, however, we attempt to introduce a discrete symmetry toward the Yukawa textures \\cite{KimSeo10}, in particular a ${\\bf Z}_{12}$ symmetry. 
The CKM fitting with up and down quark mass matrices allows the smallest entry, {\\it i.e.}\\ the (11) entry of the up quark mass matrix, of ${\\cal O}(\\lambda^6)$ and hence ${\\bf Z}_{12}$ is tried in this paper. For discrete symmetries, it is better for them to be of discrete gauge symmetry \\cite{Krauss89}. Here, however, we do not satisfy the discrete gauge symmetry at the Planck scale. We anticipate that the gravitational interaction breaks the discrete symmetry, but the gravitational interaction, breaking the discrete symmetry, respects the flavor independence \\cite{FritzschDemo90} since the quark masses are much smaller than the Planck scale $M_P$. Far below $M_P$, the nongravitational interaction respects the discrete symmetry we propose here.\n\nIn Sec. \\ref{sec:Qmasses}, we parametrize the quark masses as powers of $\\lambda$. From the known CKM matrix, we identify the left-hand and right-hand unitary matrices for diagonalization of quark mass matrices. If the quark mass matrices are given, these unitary matrices are determined. So, our choices are confined to Hermitian mass matrices. Specifying the left-- and right--unitary matrices is a kind of a solution of an inverse problem. In Sec. \\ref{sec:YukTexture}, we obtain the Yukawa texture and introduce ${\\bf Z}_{12},\\, {\\bf Z}_{4}$ and ${\\bf Z}_{3}$ discrete symmetries. In Sec. \\ref{sec:MaxCP}, we introduce supersymmetry(SUSY) and attempt to obtain the CP phase $\\frac{\\pi}2$ from the allowed superpotential terms. Sec. \\ref{sec:Conclusion} is a conclusion.\n\n\\section{Quark mass matrices}\\label{sec:Qmasses}\n\nTo determine the quark mass matrices as accurately as possible, it is necessary to have\nthe Wolfenstein parametrizations valid up to high orders of $\\lambda$. In Ref. \\cite{KimSeo11}, the $\\lambda$ expansion was obtained up to ${\\cal O}(\\lambda^6)$,\n\\begin{widetext}\n\\dis{\n\\left(\\begin{array}{lll} 1-\\frac{\\lambda^2}{2}-\\frac{\\lambda^4}{8}-\\frac{\\lambda^6}{16}(1+8\\kappa_b^2), \\quad &\\lambda , & \\lambda^3 \\kappa_b\\left(1+\\frac{\\lambda^2}{3}\\right) \\\\ [1em]\n-\\lambda+\\frac{\\lambda^5}{2}(\\kappa_t^2-\\kappa_b^2) ,\\quad\n & \\begin{array}{l}\n1-\\frac{\\lambda^2}{2}-\\frac{\\lambda^4}{8}-\\frac{\\lambda^6}{16} \\\\[0.2em]\n-\\frac{\\lambda^4}{2}(\\kappa_t^2+\\kappa_b^2-2\\kappa_b\\kappa_t e^{-i\\delta})\\\\[0.2em]\n-\\frac{\\lambda^6}{12}\\left(7 \\kappa_b^2+\\kappa_t^2-8\\kappa_t \\kappa_b e^{-i \\delta}\\right)\n\\end{array}, &\n\\begin{array}{l}\n\\lambda^2\\left(\\kappa_b-\\kappa_t e^{-i\\delta} \\right) \\\\[0.2em]\n -\\frac{\\lambda^4}{6}(2\\kappa_t e^{-i \\delta}+\\kappa_b)\n\\end{array}\\\\ [2.5em]\n -\\lambda^3 \\kappa_t e^{i\\delta}\\left(1+\\frac{\\lambda^2}{3}\\right) , &\n\\begin{array}{l}\n -\\lambda^2\\left(\\kappa_b-\\kappa_t e^{i\\delta} \\right)\\\\[0.2em]\n -\\frac{\\lambda^4}{6}(2\\kappa_b+ \\kappa_te^{i\\delta})\n\\end{array}\n , &\n\\begin{array}{c}\n1-\\frac{\\lambda^4}{2}(\\kappa_t^2+\\kappa_b^2\n -2\\kappa_b\\kappa_t e^{i\\delta})\\\\[0.2em]\n-\\frac{\\lambda^6}{6} \\left(2[\\kappa_b^2+\\kappa_t^2]-\\kappa_t \\kappa_b e^{i \\delta}\n\\right)\n\\end{array}\n\\end{array}\\right) \\label{eq:KSrotated}\n}\n\\end{widetext}\nwhere\n$\\lambda=0.22527\\pm 0.00092, \\kappa_t=0.7349\\pm 0.0141,~~\\kappa_b=0.3833\\pm 0.0388$, and\n$\\delta=89.0^{\\rm o}\\pm 4.4^{\\rm o}$.\n\nUnder the mass eigenstate bases $u^{\\rm (mass)}=(u,c,t)^T$ and $d^{\\rm (mass)}=(d,s,b)^T$, the observed quark masses are\n\\dis{\n \\frac{M^{(u)}}{m_t}=\\left(\\begin{array}{ccc} \\lambda^7 u &0 , 
& 0 \\\\\n 0 & \\lambda^4 c& 0 \\\\\n 0 & 0&1\n\\end{array}\\right), \\\n\\frac{M^{(d)}}{m_b}=\\left(\\begin{array}{ccc} \\lambda^4 d &0 , & 0 \\\\\n 0 & \\lambda^2 s& 0 \\\\\n 0 & 0&1\n\\end{array}\\right) \\label{eq:MuMd} }\nwhere $u,c,d$ and $s$ are four real parameters of ${\\cal O}(1)$ \\cite{PData10},\n\\dis{\n&u= 0.50^{+0.16}_{-0.13},~ c= 2.8\\pm 0.2, \\\\\n&d= 0.45^{+0.10}_{-0.08},~ s=0.49\\pm 0.13.\n }\nThese are ${\\cal O}(1)$ parameters but $c$ is about $1\/\\lambda$ times larger than the others. Even though this is a peculiarity, we use this form so that the second family $c$ element of Eq. (\\ref{eq:MuMd}) is an even power of $\\lambda$. For the first family member $u$, using $\\lambda^7$ or $\\lambda^8$ does not matter since the parameter $u$ does not appear as an important term in the determinant. If we used $c$ as the coefficient of $\\lambda^3$, we do not achieve the nice features of the present model discussed below. So, we speculate that $u,c,d,$ and $s$ are determined by another mechanism, probably by topological numbers of the internal space rather than the VEVs of the FN fields.\nThe mass matrices in the weak eigenstate bases are related to the above by bi-unitary transformations,\n\\dis{\n\\tilde M^{(u)}= R^{(u)\\dagger} M^{(u)}L^{(u)} ,\\quad \\tilde M^{(d)}= R^{(d)\\dagger} M^{(d)} L^{(d)}\n}\nwhere $R^{(u),(d)}$ and $L^{(u),(d)}$ are unitary matrices used for the R-handed and L-handed quark fields.\n\nActually, obtaining the specific forms of mass matrices from $V_{\\rm CKM}$\\ is a kind of an inverse problem, needing the information on the unitary matrices diagonalizing mass matrices. So, there are two ambiguities: firstly the right handed unitary matrices are arbitrary and second even in this case the left handed unitary matrices have many possibilities. We will choose the unitary matrices so that many zeros appear in the left-handed matrices. The CKM matrix can be represented as\n \\dis{& V_{\\rm CKM}= L^{(u)} L^{(d)\\dagger}=\\left(\\begin{array}{ccc} 1&0&0\\\\[1 em]\n0& 1 & 0\\\\ [1em] 0& 0 &e^{i \\delta} \\end{array}\\right)\n\\left(\n\\begin{array}{ccc} 1&0&0\\\\[1 em]0& c_2 &s_2\\\\ [1em]\n0& -s_2 &c_2 \\end{array}\\right)\\\\\n&~\\times\n\\left( \\begin{array}{ccc} c_1&s_1&0\\\\[1 em]\n-s_1& c_1 & 0\\\\ [1em] 0& 0 &1 \\end{array}\\right)\n\\left( \\begin{array}{ccc} 1&0&0\\\\[1 em] 0& c_3 & -e^{i \\delta}s_3\\\\ [1em]\n0& e^{i \\delta}s_3 &c_3 \\end{array}\\right)\n\\left( \\begin{array}{ccc} 1&0&0\\\\[1 em]\n0& 1 & 0\\\\ [1em]0& 0 &e^{-i \\delta} \\end{array}\\right).\\label{eq:CKMfivefact}\n }\nOf course, Eq. (\\ref{eq:CKMfivefact}) is one among many published forms in the literature \\cite{eq:CKMfactors}.\nSince Eq. (\\ref{eq:CKMfivefact}) is composed of a product of five matrices, $L^{(u)}$ and $L^{(d)}$ can take different forms. 
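As a numerical cross-check (ours, for illustration only, and not part of the derivation), the five-factor form (\\ref{eq:CKMfivefact}) can be evaluated with the central values quoted below Eq. (\\ref{eq:KSrotated}), together with the hypothetical leading-order identifications $s_1\\simeq\\lambda$, $s_2\\simeq\\kappa_t\\lambda^2$ and $s_3\\simeq\\kappa_b\\lambda^2$ suggested by the entries of the expansion. A minimal Python sketch:
\\begin{verbatim}
import numpy as np

# Central values quoted in the text; the identification of the rotation angles
# (s1 ~ lambda, s2 ~ kappa_t*lambda^2, s3 ~ kappa_b*lambda^2) is an assumption.
lam, kt, kb = 0.22527, 0.7349, 0.3833
delta = np.deg2rad(89.0)
s1, s2, s3 = lam, kt * lam**2, kb * lam**2
c1, c2, c3 = np.sqrt(1 - s1**2), np.sqrt(1 - s2**2), np.sqrt(1 - s3**2)
e = np.exp(1j * delta)

P  = np.diag([1, 1, e])                                   # phase matrix
R2 = np.array([[1, 0, 0], [0, c2, s2], [0, -s2, c2]])
R1 = np.array([[c1, s1, 0], [-s1, c1, 0], [0, 0, 1]])
R3 = np.array([[1, 0, 0], [0, c3, -e * s3], [0, e * s3, c3]])

V = P @ R2 @ R1 @ R3 @ np.conj(P).T      # five-factor form of V_CKM

print(abs(V[0, 1]), lam)                 # |V_us| ~ lambda
print(abs(V[0, 2]), kb * lam**3)         # |V_ub| ~ kappa_b * lambda^3
print(abs(V[2, 0]), kt * lam**3)         # |V_td| ~ kappa_t * lambda^3
\\end{verbatim}
The printed magnitudes agree with the leading powers of the $\\lambda$ expansion at the percent level, as expected from the neglected higher-order terms.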
Now, let us choose the left hand matrices such that many zeros appear in $L^{(d)}$,\n\\dis{L^{(u)}=\\left(\\begin{array}{ccc} 1&0&0\\\\[1 em]\n0& 1 & 0\\\\ [1em]0& 0 &e^{i \\delta} \\end{array}\\right)\n\\left(\\begin{array}{ccc} 1&0&0\\\\[1 em]0& c_2 &s_2\\\\ [1em]\n0& -s_2 &c_2 \\end{array}\\right)\n\\left(\\begin{array}{ccc} c_1&s_1&0\\\\[1 em]\n-s_1& c_1 & 0\\\\ [1em]0& 0 &1 \\end{array}\\right)}\nand\n\\dis{L^{(d) \\dagger}=\\left(\\begin{array}{ccc} 1&0&0\\\\[1 em]\n0& c_3 & -e^{i \\delta}s_3\\\\ [1em]0& e^{i \\delta}s_3 &c_3 \\end{array}\\right)\n\\left(\\begin{array}{ccc} 1&0&0\\\\[1 em]0& 1 & 0\\\\ [1em]\n0& 0 &e^{-i \\delta} \\end{array}\\right).\n}\nThen, for $R^{(u),(d)}=L^{(u),(d)}$,\\footnote{The mass matrices $\\tilde M^{(u)}$ and $\\tilde M^{(d)}$ must be hermitian.} $\\tilde M^{(d)}$ contain four zeros\n\\begin{widetext}\n\\dis{\n\\tilde M^{(u)}&=\\left(\\begin{array}{ccc} (c+\\kappa_t^2 \\lambda) \\lambda^6 , & -(c+\\kappa_t^2 ) \\lambda^5 ,\n & \\kappa_t\\lambda^3 (1+\\frac{1}{3}\\lambda^2) \\\\[0.2em]\n -(c+\\kappa_t^2 ) \\lambda^5 ,& c\\lambda^4(1- \\frac{1}{3} \\lambda^2), & -\\kappa_t \\lambda^2+\\frac{\\kappa_t}{6}\\lambda^4 +O(\\lambda^6) \\\\[0.2em]\n \\kappa_t\\lambda^3 (1+\\frac{1}{3}\\lambda^2), &-\\kappa_t \\lambda^2+\\frac{\\kappa_t}{6}\\lambda^4 +O(\\lambda^6) ,\n & 1-\\kappa_t^2\\frac{\\lambda^4}{2}-\\kappa_t^2\\frac{\\lambda^6}{3}\n\\end{array}\\right)\\\\[0.4em]\n \\tilde M^{(d)}&=\\left(\\begin{array}{ccc} d \\lambda^4(1+\\frac{2}{3}\\lambda^2), & 0 , & 0 \\\\[0.2em]\n 0 ,& s\\lambda^2+(\\kappa_b+\\frac{s}{3})\\lambda^4+(\\frac{8}{45}s+\\frac{2\\kappa_b^2}{3})\\lambda^6,\n & \\kappa_b e^{i \\delta} (-\\lambda^2+(s-\\frac{1}{3})\\lambda^4) +O(\\lambda^6) \\\\[0.2em]\n 0, & \\kappa_b e^{-i \\delta} (-\\lambda^2+[s-\\frac{1}{3}]\\lambda^4) +O(\\lambda^6),\n & 1-\\kappa_b^2\\lambda^4+\\kappa_b^2(s-\\frac{2}{3})\\lambda^6\n\\end{array}\\right). \\label{eq:TextureRL}\n}\n\\end{widetext}\nwhich will be used below. Note that there appear four zeros in $\\tilde M^{(d)}$.\n\n\\section{Yukawa textures}\\label{sec:YukTexture}\nThe maximality of CP violation can be related to the Yukawa texture. The $\\lambda$ expansion may come from the FN mechanism \\cite{FN79}. So far, the FN mechanism is mostly applied to continuous symmetries. Here, we attempt to obtain the texture (\\ref{eq:TextureRL}) using discrete symmetries \\cite{KimSeo10}, which may be useful in determining the CP phase. Since the observed CP phase seems maximal $\\delta\\simeq\\frac\\pi{2}$, discrete symmetries might have worked in determining it. Because the highest power of $\\lambda$ in Eq. (\\ref{eq:TextureRL}) is ${\\cal O}(\\lambda^6)$, here we choose the discrete symmetry ${\\bf Z}_{12}$: $n\\pm 12$ is identified with $n$. Since we will try $|X_{\\pm 1}^{6+a}|=|X_{\\pm 1}^{6-a}|$, ${\\bf Z}_{12}$ is the discrete symmetry we need.\n\nTo facilitate the algebra and also toward the gauge hierarchy solution, we introduce $N=1$ supersymmetry (SUSY). With SUSY, we can follow the set-and-forget principle in the superpotential $W$. But that is a fine-tuning in a sense, and hence we try to introduce more symmetries to obtain Eq. (\\ref{eq:TextureRL}) naturally.\n\nLet us introduce two Higgs doublets $H_u$ and $H_d$, and several GUT scale singlets $X_1^{d,u}, X_{-1}^{d,u}, X_{6}^{d,u}$ and $X_{0}^{d,u}$. The fields with indices $u$ give mass to up-type quarks and the fields with indices $d$ give mass to down-type quarks. 
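As an aside (our own numerical illustration), the texture (\\ref{eq:TextureRL}) can be reproduced from the mass eigenvalues of Eq. (\\ref{eq:MuMd}) and the above $L^{(d)}$ with $R^{(d)}=L^{(d)}$, using the hypothetical leading-order identification $s_3\\simeq\\kappa_b\\lambda^2$: the four zeros appear exactly, the nonzero entries show the quoted leading powers of $\\lambda$, and the eigenvalues remain $(d\\lambda^4, s\\lambda^2, 1)$ by construction. A minimal Python sketch:
\\begin{verbatim}
import numpy as np

lam, kb = 0.22527, 0.3833
d, s = 0.45, 0.49                      # O(1) parameters quoted above
delta = np.deg2rad(89.0)
e = np.exp(1j * delta)

# Assumption for illustration: s3 ~ kappa_b*lambda^2 at leading order
s3 = kb * lam**2
c3 = np.sqrt(1 - s3**2)

Ld_dag = np.array([[1, 0, 0],
                   [0, c3, -e * s3],
                   [0, e * s3, c3]]) @ np.diag([1, 1, np.conj(e)])
Ld = Ld_dag.conj().T
Md_diag = np.diag([d * lam**4, s * lam**2, 1.0])   # mass basis, in units of m_b

Md_weak = Ld_dag @ Md_diag @ Ld        # with R = L this is L^dagger M L
print(np.round(Md_weak, 6))
# The first row and column vanish apart from the (11) entry (the four zeros of
# the texture); the (22), (23) and (33) entries show the leading lambda powers,
# and the eigenvalues are exactly (d lam^4, s lam^2, 1) by unitary conjugation.
\\end{verbatim}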
The $H_u$ and $H_d$ fields coupling to the up-type quarks and down-type quarks separately are common with the Peccei-Quinn(PQ) symmetry \\cite{PQ77,KimRMP10} and with SUSY. The GUT scale singlets $X_{\\pm 1}^{d,u}$ are the FN fields. We do not introduce $X_{\\pm m}^{d,u}$ for $2\\le m\\le 5$ in the hope that the expansion parameter in each quark mass texture is just one parameter $|X_{\\pm 1}^{d,u}|$. In Table \\ref{tab:Discrete}, we present ${\\bf Z}_{12}$ quantum numbers of these fields. In the table, in addition we also presented the fields $X_6^{d,u}$ and $X_0^{d,u}$ which are needed in the superpotential to determine $\\delta=\\frac{\\pi}{2}$.\n\nAs commented before, with SUSY the needed Yukawa couplings can be written with the set-and-forget principle. But, we introduce ${\\bf Z}_4$ and ${\\bf Z}_3$ without invoking the set-and-forget principle. In Table \\ref{tab:Discrete}, we present ${\\bf Z}_{12}, {\\bf Z}_4$ and ${\\bf Z}_3$ quantum numbers of left-handed quark doublets, right-handed quark singlets, Higgs doublets, the FN fields $X_{\\pm 1}^{d,u}$, and the SM singlet fields $X_{6,0}^{d,u}$. The SM singlet fields $X_{6,0}^{d,u}$ are needed for generating the needed VEVs. $X_0^{d,u}$ is expected to be of order 1. $X_{\\pm 1}^{d,u}$ is of order $\\lambda$, and $X_{6}^{d,u}$ is of order $\\lambda^6$. The ${\\bf Z}_4$ and ${\\bf Z}_3$ symmetries are needed to keep the leading terms of Eq. (\\ref{eq:TextureRL}). These guarantee the vanishing entries of the quantum number elements of $\\tilde M^{(d)}$ in Eq. (\\ref{eq:TextureRL}). With ${\\bf Z}_4$ and ${\\bf Z}_3$, the up-type $X^u_{\\pm 1}$ and the down-type $X^d_{\\pm 1}$ couple to $Q_{\\rm em}=\\frac23$ quarks and $Q_{\\rm em}=-\\frac13$ quarks separately.\nOne can obtain the transformation properties of the fields under ${\\bf Z}_4$ and ${\\bf Z}_3$ from Table \\ref{tab:Discrete}.\n\\begin{widetext}\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{c|ccc|ccc|ccc|cc|cccc|cccc}\n &$\\overline{q}_{1L}$& $\\overline{q}_{2L}$& $\\overline{q}_{3L}$& $d_R$& $s_R$& $b_R$ & $u_R$ & $c_R$ & $t_R$ & $H_d$& $H_u$ & $X_1^{d}$ & $X_{-1}^{d}$ &$X_1^{u}$ & $X_{-1}^{u}$ & $X_6^{d}$&$X_0^{d}$ &$X_{6}^u$ &$X_0^{u}$ \\\\[0.3em] \\hline\n ${\\bf Z}_{12}$ &+1 &0 & $-2$ & $-5$& 0 & +2 & +5& +4&+2 & 0 & 0 & +1 & $-1$ &+1& $-1$\n &6 &0 &6 & 0 \\\\\n\\hline\n${\\bf Z}_4$ & 2& 0& 0& 0& 0& 0& 2&0&0& $2$&0& $1$& $3$& $2$& $2$& 2& 2& 0& 0\\\\\n${\\bf Z}_3$ & 0&1&0&2&2& 1& 0&1&0& 1&0& $0$&$0$&$2$&$1$& 0&2&0 &0\n\\end{tabular}\n\\caption{The ${\\bf Z}_{12}$ charges of the fields. There can be additional ${\\bf Z}_4$ and ${\\bf Z}_3$ symmetries. 
The indices $d$ and $u$ denote the coupling to the right-handed $d$ and $u$ quarks, respectively.\n}\n\\label{tab:Discrete}\n\\end{center}\n\\end{table}\n\nThen, the up and down type quark mass matrices are given, only for the leading term in each element, as\n\\dis{\n \\tilde M^{(u)}&=\\left(\\begin{array}{c|ccc} & u_R(+5) & c_R(+4) & t_R(+2)\\\\[0.3em] \\hline\n \\overline{q}_1(+1) & cX_{-1}^{u\\,6} & -c X_{-1}^{u\\,5} & \\kappa_t X_{-1}^{u\\,3} \\\\[0.2em]\n \\overline{q}_2(0) & -cX_{-1}^{u\\,5} & c X_{-1}^{u\\,4} & -\\kappa_t X_{-1}^{u\\,2} \\\\[0.2em]\n \\overline{q}_3(-2) & \\kappa_t X_{-1}^{u\\,3} & -\\kappa_t X_{-1}^{u\\,2} & 1\n\\end{array}\\right) v_u\\, ,\\\\[0.4em]\n \\tilde M^{(d)}&=\\left(\\begin{array}{c|ccc} & d_R(-5) & s_R(0) & b_R(+2)\\\\[0.3em] \\hline\n \\overline{q}_1(+1) & dX_{+1}^{d\\,4} & 0 & 0\\\\[0.2em]\n \\overline{q}_2(0) & 0 & s X_{+1}^d X_{-1}^d & \\kappa_b X_{-1}^{d\\,2} \\\\[0.2em]\n \\overline{q}_3(-2) & 0 & \\kappa_b X_{+1}^{d\\,2} & 1\n\\end{array}\\right) v_d\\,. \\label{eq:TexMuMd}\n}\n\\end{widetext}\nwhere the appropriate powers of $M_P^{-1}$ are multiplied to make the elements of the mass dimension. Note the parameter $u$ (the coefficient of the mass eigenvalue of the up quark in Eq. (\\ref{eq:MuMd})) does not appear in the leading terms in Eq. (\\ref{eq:TexMuMd}).\n\nThe specific forms $\\tilde M^{(u)}$ and $\\tilde M^{(d)}$ of (\\ref{eq:TextureRL}) are obtained requiring Arg.Det.$V_{\\rm CKM}=0$, which however is not a physically required condition. Changing this condition allows two unobservable quark phases. If Det.$\\tilde M^{(u)}$ has a phase, then the Det.$\\tilde M^{(u)}$ phase can be removed by redefining $u_R,c_R,$ and $t_R$ each absorbing the third of the Det.$\\tilde M^{(u)}$ phase. Similarly, if Det.$\\tilde M^{(d)}$ has a phase, then that phase also is removed by redefining $d_R,s_R,$ and $b_R$ each absorbing the third of the Det.$\\tilde M^{(d)}$ phase. Therefore, even though the form (\\ref{eq:TextureRL}) is derived from the useful CKM matrix \\cite{KimSeo11}, we must allow two overall phases, one in $\\tilde M^{(u)}$ and the other in $\\tilde M^{(d)}$. In this paper, however, this possibility is not needed.\n\nNote that the (33) elements of $\\tilde M^{(u)}$ and $\\tilde M^{(d)}$ in Eq. (\\ref{eq:TexMuMd}) are set to 1 since their ${\\bf Z}_{12}$ quantum numbers are zero. The (33) element of $\\tilde M^{(u)}$ satisfies the ${\\bf Z}_4$ and ${\\bf Z}_3$ discrete symmetries also. However, the (33) element of $\\tilde M^{(d)}$ does not satisfy the ${\\bf Z}_4$ and ${\\bf Z}_3$ discrete symmetries. Here, we use the discrete gauge symmetry idea that the Planck scale physics destroys the discrete symmetry if it is not a subgroup of a gauge symmetry \\cite{Krauss89}. Except the (33) element of $\\tilde M^{(d)}$, the matrices in Eq. (\\ref{eq:TexMuMd}) describe the terms respecting the SM gauge and ${\\bf Z}_{12}\\times {\\bf Z}_4\\times {\\bf Z}_3$ discrete symmetries. The Planck scale physics would lead to a democratic form of a mass matrix, even though it does not respect the discrete symmetries. To keep all other terms respect the symmetries of Table \\ref{tab:Discrete}, we assign 1 at the (33) position of $\\tilde M^{(d)}$. 
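As a bookkeeping illustration (ours, not part of the original argument), the powers of $X_{\\pm 1}^u$ in the up-type matrix of Eq. (\\ref{eq:TexMuMd}) follow from the ${\\bf Z}_{12}$ charges of Table \\ref{tab:Discrete} alone: each entry requires the smallest number of FN insertions that brings its total charge to zero mod 12, and that number sets its power of $\\lambda$. The down-type zeros and the $\\lambda^2$ of its (22) entry rely in addition on the ${\\bf Z}_4\\times{\\bf Z}_3$ assignments discussed above. A minimal Python sketch of the counting:
\\begin{verbatim}
# Z_12 charges read off Table tab:Discrete (H_u carries charge 0)
Z12_qbar = {1: +1, 2: 0, 3: -2}     # left-handed quark doublets (rows)
Z12_uR   = {1: +5, 2: +4, 3: +2}    # u_R, c_R, t_R (columns)

def fn_power(charge_sum, N=12):
    """Smallest |n| with charge_sum + n = 0 (mod N); |n| is the X_{+-1} power."""
    r = (-charge_sum) % N
    return min(r, N - r)

for i in sorted(Z12_qbar):
    print([fn_power(Z12_qbar[i] + Z12_uR[j]) for j in sorted(Z12_uR)])
# Output: [6, 5, 3], [5, 4, 2], [3, 2, 0] -- the lambda powers of the up-type
# matrix in Eq. (eq:TexMuMd).
\\end{verbatim}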
Under this philosophy, we assume that the matrices are proportional to a democratic form \\cite{FritzschDemo90},\n\\dis{\n\\left(\\begin{array}{ccc}\n\\frac13 & \\frac13 & \\frac13 \\\\ \\frac13 & \\frac13 & \\frac13 \\\\ \\frac13 & \\frac13 & \\frac13\n \\end{array} \\right),\\nonumber\n}\nwhich gives one massive quark and two massless quarks; in the new basis the quark mass matrix becomes\n\\dis{\n\\left(\\begin{array}{ccc}\n0 & 0 & 0 \\\\ 0 & 0 & 0 \\\\ 0 & 0 & 1\n \\end{array} \\right).\\label{eq:DemoMass}\n}\nNote in particular that the condition that two eigenvalues vanish must be satisfied.\nAn important lesson from this is that we must reproduce the correct traces and determinants of the matrices. For the up-type quarks the trace must be $1+{\\cal O}(\\lambda^4)+{\\cal O}(\\lambda^6)$, and for the down-type quarks the trace must be $1+{\\cal O}(\\lambda^2)+{\\cal O}(\\lambda^4)$. These trace conditions are read off from the mass textures, not from the mass eigenvalues. For example, for the up quark sector, even though the trace of the mass eigenvalues is $m_t+m_c+m_t(u\\lambda^7)$, we keep the trace only up to ${\\cal O}(\\lambda^6)$, which is the order of the corrections to $m_t$ and $m_c$. Since there are two zero eigenvalues for the democratic form (\\ref{eq:DemoMass}), the following $2\\times 2$ submatrix (in the lower right corner) condition is also satisfied,\n\\dis{\n\\left(\\begin{array}{cc}\n 0 & 0 \\\\ 0 & 1\n \\end{array} \\right).\\label{eq:twobytwo}\n}\nWith this understanding, below $M_P$ we take the Planck scale contribution to the mass matrices as Eq. (\\ref{eq:DemoMass}). If we added more ${\\cal O}(1)$ terms to Eq. (\\ref{eq:DemoMass}) due to gravity, we could not satisfy the $3\\times 3$ matrix condition Eq. (\\ref{eq:DemoMass}) and the $2\\times 2$ submatrix condition Eq. (\\ref{eq:twobytwo}). This is the logic by which we insert 1 in the (33) elements of Eq. (\\ref{eq:TexMuMd}) even though those (33) entries violate the symmetries of Table \\ref{tab:Discrete}. All the other entries, however, are required to respect the symmetries. Namely, the discrete symmetry violation by gravity is moved to the (33) entries only, in fact to a single position in the present example: the (33) entry of $\\tilde M^{(d)}$.\n\n\\section{Maximal CP violation}\\label{sec:MaxCP}\nFor a calculable CP phase, we start with real couplings in the Lagrangian, as is usually adopted in calculable $\\bar\\theta$ models. With SUSY all the parameters of the superpotential $W$ are real, and the initial QCD vacuum angle $\\theta_{QCD}$ is zero \\cite{KimRMP10}.\n\nTerms including $X_0^{(d,u)}$ are\n\\dis{\nW^{(0)}=&~ X_0^{(u)}(\\alpha X_{+1}^{(d)}X_{-1}^{(d)} +\\tilde\\alpha X_{+1}^{(u)}X_{-1}^{(u)})\n+ \\tilde m_0^2 X_0^{(u)} \\\\\n&+\\frac{\\alpha'}3 X_0^{(u)3}-\\tilde M_0 X_0^{(u)2}\\\\\n &+\\frac{h_{00}^3}{6M_P^3} X_0^{(d)6} +\\frac{h_{06}^2}{3M_P^2} X_0^{(d)3}X_6^{(d)}X_6^{(u)}.\n \\label{eq:WX0}\n}\nIn Eq. 
(\\ref{eq:WX0}), we will take the limit $\\tilde m_0^2\\to 0$.\n\\dis{\n\\frac{dW^{(0)}}{dX_{0}^{(u)}}=&\\alpha X_{+1}^{(d)}X_{-1}^{(d)} +\\tilde\\alpha X_{+1}^{(u)}X_{-1}^{(u)} -2\\tilde M_0 X_{0}^{(u)}\\\\\n&+\\alpha' X_0^{(u)2}+\\tilde m_0^2 =0\\\\\n\\frac{dW^{(0)}}{dX_{0}^{(d)}}=& \\frac{h_{00}^3}{M_P^3} X_{0}^{(d)5} +\\frac{h_{06}^2}{M_P^2} X_0^{(d)2}X_6^{(d)}X_6^{(u)} =0\\label{eq:SUSYcond}\n}\nIn the limit $\\tilde m_0^2\\to 0$, $X_{0}^{(u)}$ is determined as\n\\dis{\nX_{0}^{(u)}\\simeq \\frac{\\tilde M_0}{\\alpha'}\\pm \\sqrt{\\left(\\frac{\\tilde M_0}{\\alpha'}\n\\right)^2- \\frac{\\alpha X_{+1}^{(d)}X_{-1}^{(d)} +\\tilde\\alpha X_{+1}^{(u)}X_{-1}^{(u)}}{\\alpha'} }.\\nonumber\n}\nThe second equation of (\\ref{eq:SUSYcond}) leads to\n\\dis{\nX_0^{(d)}= &\\left(-\\frac{h_{06}^2 X_6^{(d)}X_6^{(u)} M_P}{h_{00}^3 }\\right)^{1\/3}\\\\\n&\\approx \\lambda^4\\left(-\\frac{h_{06}^{2\/3}}{h_{00} } \\right) M_P \\gtrsim \\lambda M_P,\n}\nwhere we assumed $X_6^{(d,u)}\\simeq \\lambda^6 M_P$, and we allow the possibility of a smaller $m_s$ compared to $m_\\mu$ \\cite{GeoJarlskog79}.\nSo, we need a fine-tuning of couplings $|{h_{00}}\/{h_{06}^{2\/3}}| \\lesssim\\lambda^3\\sim 10^{-2}$.\n\nThe needed weak CP violation occurs through developing complex vacuum expectation values \\cite{LeeTD73}. The symmetries of Table \\ref{tab:Discrete} allow the following down-type singlets terms in $W$,\\footnote{To simplify the notations, for the $W^{(d)}$ discussion we suppress the superscript $d$ of the down-type singlets, and for the $W^{(u)}$ discussion we suppress the superscript $u$ of the up-type singlets.}\n\\dis{\nW^{(d)}= &\\frac12 \\mu_6 X_{6}X_{6} +\\mu_1 X_{+1}X_{-1}\\\\\n&+\\frac{c_{11}}{2M_P}X_{+1}^2X_{-1}^2 +\\frac{c_{66}}{4M_P}X_{6}^4 +W' \\\\\nW'=&\\frac{f}{M_P^4}X_{6}X_{-1}^6+\\frac{g}{M_P^4}X_{6}X_{+1}^6\n\\label{eq:Veffdown}\n}\nwhere $\\mu_1$ and $\\mu_2$ are real and $g=\\pm f$ and $c_{ij}$ are real. There are more dimension 4 terms including $X_{6}^2$ and $X_{+1}X_{-1}$ which are neglected for simplicity, since they do not change our introduction of the phase $\\frac{\\pi}{2}$. We introduced the mixing term $W'$ which relates the phases of $X_6$ and $X_{+1}$. We can multiply $X_0^{(u)}$ to any of these terms which however is assumed to be smaller.\nThe SUSY points for $X_{\\pm 1}$ and $X_{+6}$ fields are given by,\n\\begin{widetext}\n\\dis{\n&\\frac{dW^{(d)}}{dX_{-1}}X_{-1}=\\mu_1 X_{+1}X_{-1}+6\\frac{f}{M_P^4}X_{+6} X_{-1}^6+\\frac{c_{11}}{M_P}X_{+1}^2 X_{-1}^2=0\\\\\n&\\frac{dW^{(d)}}{dX_{+1}}X_{+1}=\\mu_1 X_{+1}X_{-1}+6\\frac{g}{M_P^4}X_{+6} X_{+1}^6+\\frac{c_{11}}{M_P}X_{+1}^2 X_{-1}^2=0\\\\\n&\\frac{dW^{(d)}}{dX_{+6}}=\\mu_6 X_{+6}+\\frac{f}{M_P^4} X_{-1}^6+\\frac{g}{M_P^4} X_{+1}^6 +\\frac{c_{66}}{M_P}X_{+6}^3=0\\label{eq:VEVsX1}\n}\n\\end{widetext}\nFor $g=f$, we do not obtain a phase for $X_{\\pm 1}^2$. For $g=-f$, we obtain from the first two equations,\n\\dis{\nX_{+1}^6+X_{-1}^6 &=0,~~ X_{\\pm 1}=\\left(\\frac{1}{\\sqrt2} \\pm i\\frac{1}{\\sqrt2}\\right)\\left| X_{\\pm1} \\right|\\,,\\\\\n&X_{\\pm 1}^2=\\pm i |X_{\\pm 1}|^2\\label{eq:pio4}\n}\nand\n\\dis{\n6\\frac{f}{M_P^4}I_6|X_{+1}|^4-\\frac{c_{11}}{M_P}|X_{+1}|^2-\\mu_1=0 \\label{eq:AbsX1}\n}\nwhere $X_6=R_6 +iI_6$, {\\it i.e.}\\ $X_{+6}$ is pure imaginary $X_6=iI_6$.\nFrom Eq. 
(\\ref{eq:AbsX1}), we determine the smaller $|X_{+1}|^2$ as\n\\dis{\n\\frac{|X_{+1}|^2}{M_P^2}&=\\frac{c_{11}M_P}{12 fI_6}-\\sqrt{\\left(\\frac{c_{11}M_P}{12 fI_6} \\right)^2 +\\frac{\\mu_1}{6 fI_6}}\\\\\n&\\simeq -\\frac{\\mu_1}{c_{11}M_P}\\simeq -\\lambda^2\\,,\\label{eq:lambda2}\n}\nwhere $\\mu_1$ and $c_{11}$ are tuned to satisfy Eq. (\\ref{eq:lambda2}).\nFrom the third equation of (\\ref{eq:VEVsX1}), we have\n\\dis{\n\\left(\\frac{\\mu_6}{c_{66}M_P}\\right)\\frac{I_6}{M_P}+ \\frac{2f}{c_{66}}\\lambda^6-\\left(\\frac{I_6}{M_P} \\right)^3=0\n}\nwhich has a solution $I_6\\simeq M_P\\sqrt{ \\mu_6\/c_{66}M_P}$ in the limit $\\lambda^6\\to 0$. However, the solution $I_6\\sim \\lambda^6$ is the one we need so that a small expansion parameter is $X_{\\pm 1}$ and any expansion parameter involving $I_6$ must be of order $\\lambda^6$ and smaller. For $I_6\\to 0$, we have,\n\\dis{\n\\frac{I_6}{M_P}\\simeq -\\left(\\frac{2f M_P}{\\mu_6} \\right)\\lambda^6.\\label{eq:I6mag}\n}\n\nThis solution leads to the maximal weak CP violation since the phase $\\delta$ appearing in Eq. (\\ref{eq:TexMuMd}) is the phase of $X_{+1}^2$ which is $\\frac\\pi{2}$, viz. Eq. (\\ref{eq:pio4}). But the maximal CP violation is completed with obtaining an appropriate up-type quark mass texture of Eq. (\\ref{eq:TexMuMd}).\n\nFor the up-type Higgs singlets, we need the real VEVs except the phase leading to an overall phase of $V_{\\rm CKM}$. The superpotential for the $Q_{\\rm em}=+\\frac23$ quarks is\\footnote{Note that for the $W^{(u)}$ we suppress the superscript $u$ of the up-type singlets.}\n\\dis{\nW^{(u)}= &\\frac12\\tilde \\mu_6 X_{6}X_{6} +\\tilde \\mu_1 X_{+1}X_{-1}\\\\\n&+\\frac{\\tilde c_{11}}{2M_P}X_{+1}^2X_{-1}^2 +\\frac{\\tilde c_{66}}{4M_P}X_{6}^4 +W' \\\\\n\\tilde W'=&\\frac{\\tilde f}{M_P^4}X_{6}X_{-1}^6+\\frac{\\tilde g}{M_P^4}X_{6}X_{+1}^6\n\\label{eq:Veffup}\n}\nwhere $\\tilde \\mu_1$ and $\\tilde \\mu_2$ are real and $\\tilde g=\\pm \\tilde f$ and $\\tilde c_{ij}$ are real. We introduced the mixing term $\\tilde W'$ which relates the phases of $X_6$ and $X_{+1}$.\n\nThe SUSY points for $X_{\\pm 1}$ and $X_{6}$ fields are given by,\\\\\n\n\\begin{widetext}\n\\dis{\n&\\tilde \\mu_1 X_{+1}X_{-1}+6\\frac{\\tilde f}{M_P^4}X_{6} X_{-1}^6+\\frac{\\tilde c_{11}}{M_P}X_{+1}^2 X_{-1}^2=0\\\\\n&\\tilde \\mu_1 X_{+1}X_{-1}+6\\frac{\\tilde g}{M_P^4}X_{6} X_{+1}^6+\\frac{\\tilde c_{11}}{M_P}X_{+1}^2 X_{-1}^2=0\\\\\n&\\tilde \\mu_6 X_{6}+\\frac{\\tilde f}{M_P^4} X_{-1}^6+\\frac{\\tilde g}{M_P^4} X_{+1}^6 +\\frac{\\tilde c_{66}}{M_P}X_{6}^3=0\\label{eq:VEVsX}\n}\n\\end{widetext}\nNot to have a phase, we choose $\\tilde g=\\tilde f$. Then, we obtain the real solution, $X_6=R_6$ and $X_{\\pm 1}=R_1$, and obtain an equation,\n\\dis{\n\\frac{R_1^4}{M_P^4} + \\frac{\\tilde{c}_{11}}{6\\tilde fR_6 M_P}R_1^2+\\frac{\\tilde\\mu_1 }{6\\tilde fR_6} =0\\,,\\label{eq:upR1}\n}\nwhich leads to a smaller solution of $R_1^2$ as\n\\dis{\n\\frac{R_1^2}{M_P^2}&=- \\frac{\\tilde c_{11}M_P}{12\\tilde fR_6}+ \\sqrt{\\left(\\frac{\\tilde{c}_{11}M_P}{12 \\tilde fR_6} \\right)^2 -\\frac{\\tilde\\mu_1}{6 \\tilde fR_6}}\\\\\n&\\simeq -\\frac{\\tilde \\mu_1}{\\tilde c_{11}M_P}\\simeq -\\tilde\\lambda^2\\,,\\label{eq:smallR1}\n}\nwhere $\\tilde \\mu_1$ and $\\tilde c_{11}$, having the opposite signs, are tuned to satisfy Eq. (\\ref{eq:lambda2}). As in the down-type case, $R_6$ is determined from a cubic equation. To have a universal $\\lambda$, {\\it i.e.}\\ $\\lambda$ of Eq. (\\ref{eq:lambda2}) and $\\tilde\\lambda$ of Eq. 
(\\ref{eq:smallR1}) the same, we need $\\tilde\\mu_1\/\\tilde{c}_{11}= \\mu_1\/{c}_{11}$.\n\nThe ${\\bf Z}_{12}$ symmetry has ${\\bf Z}_4$ and ${\\bf Z}_3$ as its subgroups. So, the down-type symmetry ${\\bf Z}_4$ can allow the phase $\\frac{\\pi}{2}$.\nThe chief merit of the ${\\bf Z}_{12}$ symmetry is to introduce the FN type powers of $\\lambda$.\n\nThe above determination of the CKM matrix with the maximal CP phase needs to be completed in a more complete theory. We present two speculations related to our determination of the CKM matrix:\n\\begin{itemize}\n\\item\nThe discrete symmetries of Table \\ref{tab:Discrete} allow the following superpotential term,\n\\dis{\n X_0^{(d)} H_uH_d\n\\sim M_P H_uH_d \\label{eq:mu}\n}\\\\\nwhich gives a too large $\\mu$-term. Therefore, we need to introduce a PQ symmetry \\cite{PQ77} to suppress the $\\mu$-term of (\\ref{eq:mu}).\nIf it were not for this unacceptably large $\\mu$ of Eq. (\\ref{eq:mu}), this model is a good example of the calculable $\\bar{\\theta}$ \\cite{KimRMP10} because the higher order corrections do not destroy the hermiticity nature of $\\tilde M^{(d)}$.\n\n\\item We speculate that the parameters $c,d,s, \\kappa_t$, and $\\kappa_b$ of Eq. (\\ref{eq:TexMuMd}) are determined in an ultraviolet completed theory, probably by geometrical factors.\n\n\\item There are a few unsatisfactory features in the present example. For example, firstly the equivalence of $\\lambda$ of Eq. (\\ref{eq:lambda2}) and $\\tilde\\lambda$ of Eq. (\\ref{eq:smallR1}) requires a fine tuning. Second, the initial flavor democratic mass matrix is redefined such that the discrete symmetry violating entry, the (33) element of $\\tilde M^{(d)}$, absorbs the nonzero mass eigenvalue. We hope that they can be explained in a better model.\\\\\n\n\\end{itemize}\n\n\\section{Conclusion}\\label{sec:Conclusion}\nDue to the accurate determination of the CKM parameters, it is possible to obtain the quark mass matrices fairly reliably. For the same left-hand unitary matrices $L^{(u),(d)}$ and the right-hand unitary matrices $R^{(u),(d)}$, {\\it i.e.}\\ $L^{(u),(d)}=R^{(u),(d)}$, the weak-basis quark-mass matrices take simple forms. In contrast to continuous symmetries, with discrete symmetries the CP phase of $2\\pi$ divided by an integer can be obtained from a discrete symmetry since the degenerate vacua of discrete symmetries are countable. Introducing a ${\\bf Z}_{12}$ symmetry, we obtain the maximal CP phase $\\frac{\\pi}{2}$. We introduced a ${\\bf Z}_{12}$ discrete symmetry with the FN scalars since the the lightest element($\\bar uu$ element) of the quark mass matrices has a power $\\lambda^6$. We considered the ${\\bf Z}_{12}$ symmetry at field theory level. With SUSY, we can follow the set-and-forget principle for the needed and forbidden terms in the superpotential. Barring the set-and-forget principle, however, we need additional ${\\bf Z}_4\\times {\\bf Z}_3$ symmetries to obtain the desired superpotential. In this paper, we have shown the maximality of CP violation with SUSY but the maximality might be obtained also without SUSY if we introduced an appropriate discrete symmetry. Finally, we note that the ${\\bf Z}_{12}$ symmetry may have a root in string theory such as in a ${\\bf Z}_{12}$ orbifold compactification \\cite{KimKaye07}.\n\n\\acknowledgments{I thank B. Kyae and M. Seo for helpful discussions. This work is supported in part by the National Research Foundation (NRF) grant funded by the Korean Government (MEST) (No. 
2005-0093841).}\n\n\\vskip 0.5cm\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLet $(M,g)$ be a Riemannian manifold and $f_0:M\\to \\R^n$ be a $C^\\infty$ map such that $f_0^*h$ {\\em rank\\,}$H$ then $f_0$ can be homotoped to a partial isometry $f:(M,g_H)\\to (N,h)$. Furthermore, the homotopy can be made to lie in a given neighbourhood of $f_0$ in the fine $C^0$ topology. \n\\label{main}\\end{thm}\nIf we take $H=TM$ in Theorem~\\ref{main} then we obtain the Nash-Kuiper isometric $C^1$-immersion theorem. \nTaking $N$ to be an Euclidean space we prove the existence of partial isometries. \n\\begin{cor} Every sub-Riemannian manifold $(M,H,g_H)$ admits a partial isometry in $\\R^n$ provided $n\\geq \\dim M+${\\em rank }$H$. \\label{partial isometry}\\end{cor}\nWe also discuss several other consequences of Theorem~\\ref{main} in Corollaries \\ref{integrable subbundle} and ~\\ref{trivial subbundle}. \n\nWe use the convex integration technique (see \\cite{gromov}, \\cite{eliash}) to prove the main theorem of this paper. It would be appropriate to mention here that Gromov developed the convex integration theory on the foundation of Kuiper's technique \\cite{kuiper} and applied this theory to solve many interesting problems which appear in the context of geometry. \n\nWe organise the paper as follows. In Section 2, we outline the proof of Theorem~\\ref{main}, and in Section 3 we briefly discuss the convex integration technique following the beatiful exposition of Eliashberg and Mishachev \\cite{eliash}. In Section 4 we prove the main results of the paper stated above and in Section 5 we discuss some applications of Theorem~\\ref{main}.\n \n\\section{Sketch of the proof}\nLet $(N,h)$ and $(M,H,g_H)$ be as in Section 1 and let $g_0$ be a fixed Riemannian metric on $M$ such that $g_0|_H=g_H$.\n\\begin{defn} A $C^1$ map from $M$ to $N$ is called an $H$-\\textit{immersion} if its derivative restricts to a monomorphism on $H$ (We have borroed this terminology from \\cite{dambra-loi}). \n\nA $C^1$ map $f_0:M\\to N$ is said to be $g_H$-\\textit{short} if $g_H-f_0^*h$ restricted to $H$ is positive definite. We use the notation $f_0^*h|_H0$ on $M$. Then by Nash's decomposition formula (see \\cite{dambra-datta} and \\cite{nash}), there exist smooth functions $\\psi_i$ and $\\phi_i$ as described in the lemma such that $g_M-f_0^*h=\\sum_i\\phi_i^2\\,d\\psi_i^2$. By restricting both sides to $H$ we get the desired decomposition.\\end{proof}\n\n\\noindent\\textbf{Construction of an Approximate solution:} Applying Lemma~\\ref{decomposition} we get a decomposition of $g_H-f_0^*h$. We then use this decomposition to obtain a $C^\\infty$ map $\\tilde{f}$ which is very close to a partial isometry in the sense that $n(g_H-\\tilde{f}^*h)$ is sufficiently small. This is achieved following successive deformations $\\bar{f}_1$, $\\bar{f}_2, \\dots, \\bar{f}_n,\\dots,$ of $f_0$ such that $\\bar{f}_{i}^*h$ is approximately equal to $\\bar{f}_{i-1}^*h+\\phi_{i}^2d\\psi_{i}^2$ for each $i$. 
Each step of deformation involves a convex integration (discussed in Section 3) and the deformation takes place on the open set $U_\\lambda$ containing supp\\,$\\phi_i$ in such a way that the value of the derivative $d\\bar{f}_{i-1}$ along $\\tau_{i}=\\ker d\\psi_{i}$ is affected by a small amount.\nSince at most finitely many $\\phi_i$ are non-zero on any $U_\\lambda$, the sequence $\\{\\bar{f}_i\\}$ is eventually constant on each $U_\\lambda$ and therefore converges to a $C^\\infty$ map $\\tilde{f}:M\\to N$ which is very close to being a partial isometry. Indeed, for the final map $\\tilde{f}$ the total error $g_H-\\tilde{f}^*h$, estimated by the function $n(g_H-\\tilde{f}^*h)$, can be made arbitrarily small. Moreover, $d(f_0,\\tilde{f})$ can be controlled by the function $n(g_H-f_0^*h)$. See Lemma~\\ref{approximation} and Lemma~\\ref{recursion} for a detailed proof.\\\\\n\n\\noindent\\textbf{Obtaining a partial isometry:}\nThe principal idea is to obtain a partial isometry as the limit of a sequence of $C^\\infty$ smooth $g_H$-short $H$-immersions $f_j:M\\longrightarrow (N,h)$ which is Cauchy in the fine $C^1$-topology and is such that the induced metric $f_j^*h$ approaches to $g_H$ on $H$ in the limit. More explicitly, the sequence $\\{f_j\\}$ will satisfy the following relations:\n\\begin{enumerate}\n\\item $n(g_H-f_j^*(h))\\approx \\frac{1}{2}n(g_H-f_{j-1}^*(h)),$\n\n\\item $d(f_j,f_{j-1})< c(m) n(g_H-f_{j-1}^*(h))^\\frac{1}{2}$,\n\\end{enumerate}\nwhere $c(m)$ is a constant depending on the dimension $m$ of the\nmanifold $M$. The $j$-th map $f_j$ can be seen as an improved approximate solution over $f_{j-1}$. The conditions $(1)$ and $(2)$ together guarantee that the sequence $\\{f_n\\}$ is a\nCauchy sequence in the fine $C^1$ topology and hence it converges\nto some $C^1$ map $f:M\\longrightarrow N$. Then the induced metric $f^*h$ must be equal to $g_H$ when restricted to $H$ by condition $(1)$. Thus $f$ is the desired partial isometry.\n\n\n\\section{Preliminaries of Convex Integration Theory}\n\nIn this section, we recall from \\cite{gromov} and \\cite{spring} the basic terminology of the theory of $h$-principle and preliminaries of convex integration theory.\n\nLet $M$ and $N$ be smooth manifolds and $x\\in M$. If $f:U\\to N$ is a $C^r$ map defined on an open subset $U$ of $M$ containing $x$, then the $r$-jet of $f$ at $x$, denoted by $j^r_f(x)$, corresponds to the $r$-th degree Taylor's polynomial of $f$ relative to a coordinate system around $x$. Let $J^r(M,N)$ denote the space of $r$-jets of germs of $C^r$-maps $M\\to N$ and let $p^r:J^r(M,N)\\lgra M$ be the natural projection map taking $j^r_f(x)$ to $x$, which is a fibration. For any $C^r$ map $f:M\\to N$, $j^r_f$ is a section of $p^r$. Moreover, if $r>s$ then there is a canonical projection $p^r_s:J^r(M,N)\\to J^s(M,N)$ which takes an $r$ jet at $x$ represented by a germ $f$ to the $s$ jet of $f$ at $x$.\n\nA \\textit{partial differential relation} of order $r$ for $C^r$ maps $M\\to N$ is defined as a subspace ${\\mathcal R}$ of $J^r(M,N)$. If $\\mathcal R$ is an open subset then we say that ${\\mathcal R}$ is an {\\em open\\\/} relation.\n\nA section $\\sigma$ of $p^r:J^r(M,N)\\to M$ is said to be a \\textit{section} of ${\\mathcal R}$ if the image of $\\sigma$ is contained in ${\\mathcal R}$. A section of $\\mathcal R$ is often referred as a formal solution of $\\mathcal R$. 
If $f:M\\to N$ is such that $j^r_f$ maps $M$ into $\\mathcal R$ then $f$ is called a {\\em solution\\\/} of ${\\mathcal R}$.\nA section $\\sigma:M\\to{\\mathcal R}$ is called {\\em holonomic\\\/} if $\\sigma=j^r_f$ for a $C^r$-map $f:M\\lgra N$.\n\nLet $\\Gamma({\\mathcal R})$ denote the space of sections of the $r$-jet bundle $p^r:J^r(M,N)\\longrightarrow M$ whose images lie in ${\\mathcal R}$. We endow this space with the $C^0$-compact open topology. The space of $C^r$ solutions of $\\mathcal R$ is denoted by Sol\\,$\\mathcal R$. We endow it with the $C^r$ compact-open topology. Then the $r$-th jet map $j^r:\\mbox{Sol\\,}{\\mathcal R}\\lgra \\Gamma({\\mathcal R})$ defined by $j^r(f)=j^r_f$ is continuous relative to the given topologies.\n\n\\begin{defn} A relation $\\mathcal R$ is said to satisfy the\n$h$-\\textit{principle} if given a section $\\sigma$ of $\\mathcal R$ there\nexists a solution $f$ of ${\\mathcal R}$ such that $j^r_f$ is\nhomotopic to $\\sigma$ in $\\Gamma(\\mathcal R)$. If the $r$-jet map $j^r$ is a weak homotopy equivalence then $\\mathcal R$ is said to satisfy the \\textit{parametric $h$-priniple}.\n\nLet $\\mathcal U$ be a subspace of $C^0$ maps $M\\to N$. A relation $\\mathcal R$ is said to satisfy the $C^0$ \\textit{dense $h$-principle near} $\\mathcal U$ provided given any $f\\in {\\mathcal U}$ and any neighbourhood $N$ of $j^0_f(M)$, every section $\\sigma$ of $\\mathcal R$ which lies over $j^0_f$ (i.e., $p^r_0\\circ \\sigma=j^0_f)$ admits a homotopy $\\sigma_t$ such that $\\sigma_t$ lies in $(p^r_0)^{-1}(\\mathcal U)\\cap{\\mathcal R}$ and $\\sigma_1$ is holonomic.\\end{defn}\n\nGiven a differential relation $\\mathcal R$, the main problem is to determine whether or not it has a solution. Proving $h$-principle is a step forward towards this goal. If a relation satisfies the $h$-principle then we can not at once say that the solution exists; however, we can conclude that if $\\mathcal R$ has a section (i.e., a formal solution) then it has a solution. Thus, we reduce a differential topological problem to a problem in algebraic topology. There are several techniques due to Gromov which address the question of $h$-principle. The convex integration theory is one such. Here we will review the convex integration theory only for first order differential relations.\n\nLet $\\tau$ be a codimension 1 integrable hyperplane distribution on $M$. Let $f$ and\n$g$ be germs at $x\\in X$ of two $C^1$ smooth maps from $M$ to $N$. We say\nthat $f$ and $g$ are $\\perp$-equivalent if\n$$f(x)=g(x)\\ \\ \\mbox{and}\\ \\\nDf_x|_{\\tau}=Dg_x|_{\\tau}.$$ The $\\perp$-equivalence is an equivalence relation on\nthe $1$-jet space $J^1(M,N)$. The equivalence class of $j^1_f(x)$\nis denoted by $j^\\perp_f(x)$ and is called the $\\perp$-jet of $f$ at $x$.\nSince $\\tau$ is integrable, we can choose local coordinate\nsystems $(U;x_1,\\dots,x_{n-1},t)$ so that\n$\\{(x_1,\\dots,x_{n-1},t):t=$\\,const\\} are integral submanifolds of\n$\\tau$. 
Moreover, we can express $j^1_f(x)$ as\n$(j^\\perp_f(x),\\partial_tf(x))$, where $j^\\perp_f=(\\frac{\\partial f}{\\partial x_1},\\dots,\\frac{\\partial f}{\\partial x_{n-1}})$ and $\\partial_tf$ denotes the partial derivative of $f$ in the direction of $t$.\nIn particular, if $M=\\R^n$, $N=\\R^q$ and $\\tau$ is defined by the codimension one foliation $\\R^{n-1}\\times\\R$ on $\\R^n$, then the 1-jet space gets a splitting $J^1(\\R^n,\\R^q)=J^\\perp(\\R^n,\\R^q)\\times \\R^q$.\nThe set of all $\\perp$-jets, denoted by $J^\\perp(M,N)$, has a manifold structure \\cite[6.1.1]{spring} and the natural\nprojection map $p^{1}_\\perp: J^1(M,N)\\lgra J^\\perp(M,N)$, taking a\n$1$-jet to its $\\perp$-equivalence class (relative to the given $\\tau$), defines an affine bundle, in which\nthe fibres are affine spaces of dimension $n=\\dim N$. The fibres of\nthis affine bundle are called {\\em principal subspaces\\\/} relative\nto $\\tau$. Note that there is a unique principal subspace through each point of $J^{1}(M,N)$. In fact, the\nfibre of $J^{1}(M,N)\\lgra J^0(M,N)$ over any $b\\in J^0(M,N)$ is\nfoliated by these principal subspaces and the translation map\ntakes principal subspaces onto principal subspaces.\\\\\n\n\\noindent\\textbf{Notation:} We shall denote the principal subspace through $a\\in J^{1}(M,N)$ by $R(a,\\tau)$. If $\\mathcal R$ is a first order relation and $a\\in{\\mathcal R}$, then the connected component of $a$ in $\\mathcal R\\cap R(a,\\tau)$ will be denoted by ${\\mathcal R}(a,\\tau)$.\n\nThe following theorem, known as the $h$-Stability Theorem in the literature, (\\cite[2.4.2(B)]{gromov} and \\cite[Theorems\n7.2, 7.17]{spring}) is a key result in the theory of convex integration.\n\n\\begin{thm} Let ${\\mathcal R}$ be an open relation and let $f_0:M\\lgra N$ be a $C^1$ map such\nthat \\begin{enumerate}\\item $j^\\perp_{f_0}$ lifts to a section\n$\\sigma_0$ of $\\mathcal R$ and\n\\item $j^1_{f_0}(x)$ lies in the convex hull of ${\\mathcal\nR}(\\sigma_0(x),\\tau_x)$ for every $x\\in M$.\\end{enumerate}\nLet $\\mathcal N$ be any neighbourhood of $j^\\perp_f(M)$.\nThen there exists a homotopy $\\sigma_t:M\\to {\\mathcal R}$, $t\\in [0,1]$, such that\n\\begin{enumerate}\\item[$(i)$] $\\sigma_1=j^1_{f_1}$ for some $C^1$ map $f_1:M\\to N$, so that $f_1$ is a solution of $\\mathcal R$ and\n\\item[$(ii)$] $(p^1_\\perp\\circ \\sigma_t)(M)\\subset {\\mathcal N}$ for all $t\\in[0,1]$. In particular $f_1$ is close to $f_0$ in the fine $C^0$ topology.\\end{enumerate}\n\nFurther, if the initial map $f_0$ is a solution on $Op\\,K$ for some closed set $K$ then the homotopy remains constant on $Op\\,K$.\\label{C-perp}\n\\end{thm}\n\n\\begin{rem} Since $C^\\infty(M,N)$ is dense in $C^1(M,N)$ relative to the fine $C^1$-topology and $\\mathcal R$ is open, we can perturb any $C^1$-solution of $\\mathcal R$ to obtain a $C^\\infty$ solution.\n\\end{rem}\n\n\\begin{defn} A connected subset $S$ in a vector space (or in an affine space) $V$ is said to be \\textit{ample} if the convex hull of $S$ is all of $V$. The subset defined by the polynomial $x^2+y^2-z^2=0$ in $\\R^3$ is an example of an ample subset. 
However, the complement of a 2-dimensional vector subspace in $\\R^3$ is not ample.\n\nA relation $\\mathcal R$ is said to be \\textit{ample} if for every hyperplane distribution $\\tau$ on $M$, ${\\mathcal R}(a,\\tau)$ is ample in $R(a,\\tau)$ for all $a\\in {\\mathcal R}$.\n\\end{defn}\n\\begin{thm}$($\\cite[2.4.3, Theorem (A)]{gromov}$)$ Every open ample relation satisfies the $C^0$-dense parametric $h$-principle.\\label{ample}\\end{thm}\n\nWe end this section with an application of Theorem~\\ref{ample} to the $H$-immersion relation; (see \\cite[8.3.4]{eliash} for an alternative proof).\n\\begin{prop} Let $M$ be a smooth manifold and $H$ a subbundle of $TM$. Then $H$-immersions $f:M\\to N$ satisfy the $C^0$-dense parametric $h$-principle provided $\\dim N>$ {\\em rank}\\,$H$. In other words, every bundle map $(F_0,f_0):TM\\to TN$ such that $F_0|_H$ is a monomorphism is homotopic through such bundle maps to an $(F,f):TM\\to TN$ such that $F=df$ provided $\\dim N>$ {\\em rank}\\,$H$.\\label{H-immersion}\\end{prop}\n\n\\begin{proof} The $H$-immersions $f:M\\to N$ are solutions to the first order partial differential relation\n\\begin{center}${\\mathcal R}=\\{(x,y,\\alpha)\\in J^1(M,N)|\\ \\alpha|_{H_x}:H_x\\to T_yN \\mbox{ is injective linear} \\}$.\\end{center}\nFirst of all, we prove that $\\mathcal R$ is an open relation. Recall that if $(U,\\phi)$ and $(V,\\psi)$ are coordinate charts in $M$ and $N$ respectively then the bijection $\\tau:J^1(U,V)\\to J^1(\\phi(U),\\psi(V)=\\phi(U)\\times\\psi(V)\\times L(\\R^m,\\R^n)$ defined by\n\\begin{center}$\\tau(j^1_f(x))=(\\phi(x),\\psi(f(x)),d(\\psi f\\phi^{-1})_{\\phi(x)})$\n\\end{center}\nis a coordinate chart for the total space $J^1(M,N)$ of the 1-jet bundle \\cite{guillemin}, where $m=\\dim M$ and $n=\\dim N$. Since $H$ is a subbundle of $TM$ we can further choose a trivialisation $\\Phi:TM|_U\\to U\\times\\R^m$ of the bundle $TM|_U$ (possibly after shrinking $U$), such that $\\Phi$ maps $H$ onto $U\\times \\R^k$. Then $\\bar{\\tau}: J^1(U,V)\\to \\phi(U)\\times\\psi(V)\\times L(\\R^m,\\R^n)$ by $\\bar{\\tau}(j^1_f(x))=\n(\\phi(x),\\psi(f(x)),d(\\psi f)_x\\circ \\bar{\\Phi}^{^{-1}}_{\\phi(x)})$ is a diffeomorphism, where $\\bar{\\Phi}=(\\phi\\times {Id\\,})\\circ \\Phi:TM|_U\\to \\phi(U) \\times \\R^m$.\n\nNow, consider the restriction morphism $r:L(\\R^m,\\R^n)\\to L(\\R^k,\\R^n)$ which takes a linear transformation $L\\in L(\\R^m,\\R^n)$ onto its restriction $L|_{\\R^k}$. Let $L_k(\\R^m,\\R^n)$ denote the inverse image under $r$ of the set of all monomorphisms $\\R^k\\to\\R^n$. This is clearly an open set and it is easy to see that $\\bar{\\tau}$ maps $\\mathcal R\\cap J^1(U,V)$ diffeomorphically onto $\\phi(U)\\times \\psi(V)\\times L_k(\\R^m,\\R^n)$. Consequently $\\mathcal R$ is open in the 1-jet space $J^1(M,N)$.\n\nNext, we shall show that $\\mathcal R$ is an ample relation. To see this, consider a codimension 1 subspace $\\tau_x$ of $TM_x$ for some $x\\in M$ and take a 1-jet $j^1_f(x)\\in{\\mathcal R}$. We need to show that the principal subspace\n\\begin{center} $R(j^1_f(x),\\tau_x)=\\{(x,f(x),\\beta)\\in J^1(M,N)| \\beta=df_x \\mbox{ on } \\tau_x\\}$\\end{center} intersects the relation $\\mathcal R$ in a pathconnected set and moreover the convex hull of the intersection, denoted by ${\\mathcal R}(j^1_f(x),\\tau_x)$, is all of $R(j^1_f(x),\\tau_x)$. There are two possible cases:\n\nCase 1. $H_x\\subset \\tau_x$. In this case, the principal subspace is completely contained in ${\\mathcal R}$. 
Thus ${\\mathcal R}(j^1_f(x),\\tau_x)$ is equal to the principal subspace itself.\n\nCase 2. $H_x\\cap\\tau_x$ is a codimension 1 subspace of $H_x$. Choose a vector $v\\in H_x$ which is transverse to $H_x\\cap \\tau_x$. First observe that $R(j^1_f(x),\\tau_x)$ is affine isomorphic to $T_{f(x)}N$ since any 1-jet $(x,y,\\beta)$ is completely determined by $\\beta(v)$. Therefore, ${\\mathcal R}(j^1_f(x),\\tau_x)$ is affine equivalent to the subset\n\\begin{center} $S(j^1_f(x))=\\{w\\in T_{f(x)}N|w\\not\\in df_x(\\tau_x\\cap H_x)\\}$.\\end{center}\nSince $\\tau_x\\cap H_x$ has dimension $k-1$ and $df_x$ is injective on $H_x$, the subspace $df_x(\\tau_x\\cap H_x)$ is of codimension at least 2 in $T_{f(x)}N$ provided $\\dim N>k$. Hence $S(j^1_f(x))$ is path-connected and its convex hull is all of $T_{f(x)}N$. In other words, the convex hull of ${\\mathcal R}(j^1_f(x),\\tau_x)$ is all of $R(j^1_f(x),\\tau_x)$.\n\nThis proves that $\\mathcal R$ is an open, ample relation. Hence, we can apply Theorem~\\ref{ample} to conclude that $\\mathcal R$ satisfies the $C^0$-dense parametric $h$ principle.\n\\end{proof}\n\n\\begin{cor} Suppose that $f_0:M\\to N$ is a smooth map. If $\\dim N\\geq \\dim M+$ {\\em rank} $H$, then $f_0$ can be homotoped within its $C^0$-neighbourhood to a $C^\\infty$ $H$-immersion $f:M\\to N$.\\end{cor}\n\\begin{proof} In view of the above proposition it is enough to show that $f_0$ can be covered by a monomorphism $F:H\\to TN$ if $\\dim N\\geq \\dim M+$ {\\em rank} $H$. It is well-known that the obstruction to the existence of such an $F$ lies in certain homotopy groups of the Stiefel manifold $V_k(\\R^n)$, namely in $\\pi_i(V_k(\\R^n))$ for $0\\leq i\\leq m-1$, where $m=\\dim M$, $n=\\dim N$ and $k=\\mbox{ rank\\,}H$. Since $V_k(\\R^n)$ is $n-k-1$ connected the obstructions vanish for $n\\geq m+k$. This proves the corollary.\n\\end{proof}\n\n\\begin{rem} The set of smooth $H$-immersions $M\\to N$ is an open, dense subset of $C^\\infty(M,N)$ relative to the fine $C^\\infty$ topology when $\\dim N\\geq \\dim M+$ {\\em rank} $H$ \\cite[Proposition 2.2]{dambra-loi}.\\end{rem}\n\n\n\\section{Proof of Theorem~\\ref{main} and Corollary~\\ref{partial isometry}}\n\nLet $(N,h)$ be a smooth Riemannian manifold of dimension $n$ and $(M,H,g_H)$ be as in Section 1. Let $g_0$ be a Riemannian metric on $M$ such that $g_0|_H=g_H$. Suppose that $\\dim N>$ rank $H$.\n\n\\begin{lem}$($Main Lemma$)$ Let $g$ be a Riemannian metric on $H$ such that $gk$, $S_x$ is path-connected. This proves (a).\n\nAlso, $df_x(\\textbf{v}_x)$ lies in the convex hull of $S_x$. Indeed, the condition $g-f^*h=\\phi^2d\\psi^2$ on $H_x$ implies that $h(df_x(\\textbf{v}_x),df_x(w))=0$ for all $w\\in H_x\\cap \\tau_x$ and therefore, $df_x(\\textbf{v}_x)$ is $h$-orthogonal to $df_x(H_x\\cap\\tau_x)$. Moreover, as $g-f^*h|_{H_x}>0$ and $f$ is an $H$-immersion it also follows that $0<\\|df_x(\\textbf{v}_x)\\|< 1$. Hence, $df_x(\\textbf{v}_x)$ lies in the convex hull of $S_x$ proving (b).\n\nTo prove (c) note that $df_x(\\textbf{v}_x)$ is orthogonal to $df_x(H_x\\cap\\tau_x)$. Therefore, if we define $w_0(x)={df_x(\\textbf{v}_x)}\/{\\|df_x(\\textbf{v}_x)\\|_h}$ then $w_0(x)\\in S_x$. Let $\\sigma_0(x)$ denote the 1-jet in $R(j^1_f(x),\\tau_x)\\cap{\\mathcal I}$ which corresponds to $w_0(x)$. Thus, $\\sigma_0$ is a continuous section of $\\mathcal I$ over $V$ as mentioned in (c).\n\nIf $x\\in U\\setminus V$, then either $\\phi(x)=0$ or $d\\psi_x|_{H_x}=0$ i.e., $H_x$ is contained in $\\tau_x=\\ker d\\psi_x$. 
If $\\phi(x)=0$ then proceeding as in the above case we can prove that (a) and (b) are true. If $H_x\\subset \\tau_x$ then the principal subspace $R(j^1_f(x),\\tau_x)$ is completely contained in $\\mathcal I$. Therefore (a) and (b) are clearly true in this case also. Further, $x\\in U\\setminus V$ implies that $j^1_f(x)\\in{\\mathcal I}$ and we can choose $\\sigma_0(x)=j^1_f(x)$ on $U\\setminus V$ so that (c) is proved on all of $U$. This completes the proof of the claim made above.\n\nIn fact, we have proved that the map $f:M\\to N$ satisfies both (1) and (2) of the hypothesis of Theorem~\\ref{C-perp} relative to the relation $\\mathcal I$. Indeed, by our construction $\\sigma_0$ lifts $j^\\perp_f$. Further, $j^1_f(x)$ lies in the convex hull of ${\\mathcal I}(\\sigma_0(x),\\tau_x)$ for all $x\\in U$. This follows from (a) and (b) since $R(j^1_f(x),\\tau_x)\\cap {\\mathcal I}=R(\\sigma_0(x),\\tau_x)\\cap {\\mathcal I}={\\mathcal I}(\\sigma_0(x),\\tau_x)$ (see section 3). \nHowever, we cannot apply Theorem~\\ref{C-perp} to $(f,\\mathcal I)$, since $\\mathcal I$ is not open. To surpass this difficulty, we consider a small open neighbourhood $Op\\,{\\mathcal I}$ of ${\\mathcal I}$ in the $H$-immersion relation $\\mathcal R$ and apply Theorem~\\ref{C-perp} to the pair $(f,Op\\,{\\mathcal I})$ to obtain a smooth $H$-immersion $\\tilde{f}:M\\to N$ which is a solution of $Op\\,{\\mathcal I}$. By choosing $Op\\,{\\mathcal I}$ sufficiently small we can ensure that $\\tilde{f}^*h|_H$ is arbitrarily close to $g$. Thus, we prove (i) and (ii) as stated in the theorem.\n\nIn order that $\\tilde{f}$ satisfies condition (iii) as well, we need to modify the relation $Op\\,\\mathcal I$ further.\nConsider the subset $S'_x=\\{w\\in S_x| h(w,df_x(\\textbf{v}_x))\\geq h(df_x(\\textbf{v}_x),df_x(\\textbf{v}_x)\\}$ of $S_x$ (see \\cite[\\S 21.5]{eliash}). This is path-connected, symmetric about $w_0(x)$ and contains $df_x(\\textbf{v}_x)$ in its convex hull. Moreover, for any vector $w$ in $S'_x$, $\\|w-df_x(\\textbf{v}_x)\\| \\leq \\sqrt{1-\\|df_x(\\textbf{v}_x)\\|^2}$. Let ${\\mathcal I}'$ denote the subset of ${\\mathcal I}$ defined by $S'_x$, $x\\in M$. Now, applying Theorem~\\ref{C-perp} to $(Op\\,{\\mathcal I}',f)$ we obtain a $C^\\infty$ map $\\tilde{f}:M\\to N$ which is homotopic to $f$ and is a solution of $Op\\,{\\mathcal I}'$. As we have already observed, $\\tilde{f}$ satisfies (i) and (ii) as stated in the theorem. Further, we have,\n\\begin{center}$\\begin{array}{rcl}\\|d\\tilde{f}_x(\\textbf{v}_x)-df_x(\\textbf{v}_x)\\|_h & \\leq & \\sqrt{1-\\|df_x(\\textbf{v}_x)\\|_h^2}+\\vare\\\\\n& = & \\sqrt{(g-f^*h)(\\textbf{v}_x,\\textbf{v}_x)} + \\vare\n\\end{array}$\\end{center}\nwhere the `error term' $\\vare$ appears because of enlarging $\\mathcal I$.\nSince $g_0|_H=g_H$ and $\\textbf{v}_x\\in H$, dividing out both sides by $\\|\\textbf{v}_x\\|_{g_0}$ we obtain from the above that\n$$\\frac{\\|d\\tilde{f}_x(\\textbf{v}_x)-df_x(\\textbf{v}_x)\\|_h}{\\|\\textbf{v}_x\\|_{g_0}}\\leq\n\\sqrt{n_{g_H}(g-f^*h)}+\\vare.$$\nMoreover, by Theorem~\\ref{C-perp} we can choose $\\tilde{f}$ so that the directional derivatives of $\\tilde{f}$ along $\\tau$ are arbitrarily close to the corresponding derivatives of $f$. Thus we obtain that $d_{g_0}(f,\\tilde{f})\\leq\\sqrt{n_{g_H}(g-f^*h)}+\\vare$.\\end{proof}\n\n\\begin{rem} In the above lemma we started with a $C^\\infty$ map $f$ satisfying $f^*h0$ obtained an $\\tilde{f}$ satisfying the condition $n(g-\\tilde{f}^*h)<\\delta$. 
Therefore, if we choose $\\delta$ sufficiently small then $\\tilde{f}$ can be made to satisfy the inequality $f^*h<\\tilde{f}^*h< g_H$.\\end{rem}\n\nWe now fix a countable open covering ${\\mathcal U}=\\{U_\\lambda|\\lambda\\in\\Lambda\\}$ of the manifold $M$\nwhich has the following properties:\n\\begin{enumerate}\n\\item[$(a)$] each $U_\\lambda$ is a coordinate neighbourhood in $M$ and\n\\item[$(b)$] for any $\\lambda_0$, $U_{\\lambda_0}$ intersects atmost $c_1(m)$ many $U_\\lambda$'s\nincluding itself,\n\\end{enumerate}\nwhere $c_1(m)$ is an integer depending on $m=\\dim M$. This open convering will remain fixed througout. All decompositions of Riemannian metrics on $H$ will be considered with respect to this covering.\n\n\\begin{lem}Let $f_0:M\\to N$ be a smooth $H$-immersion such that $f_0^*h0$ we get a decomposition as follows:\n\\begin{center}$g_H-f_0^*h=2\\sum_{k=1}^\\infty\\phi_k^2d\\psi_k^2$ \\ on \\ $H$,\\end{center}\nwhere $\\phi_k$ and $\\psi_k$ are as described in Lemma~\\ref{decomposition}. It further follows from the lemma that all but finitely many $\\phi_i$ vanish on any $U_p$ and and at most $c(m)$ number of $\\phi_i$ are non-vanishing at any point $x$. Define a sequence of Riemannian metrics on $H$ as follows: $\\bar{g}_0=f_0^*h|_H$ and $\\bar{g}_k=\\bar{g}_{k-1}+\\phi_k^2d\\psi_k^2|_H$. Then each $\\bar{g}_ki_\\lambda$ and supp $\\phi_i\\subset U_{\\lambda'}$ then $U_\\lambda\\cap U_{\\lambda'}=\\emptyset$, so that $\\phi_i$ vanishes identically on $U_\\lambda$. This implies that $\\bar{f}_i=\\bar{f}_{i-1}$ on $U_\\lambda$ by the given construction. \nThus, $f_1$ is a smooth map and $f_1^*h=f_0^*h+\\frac{1}{2}(g-f_0^*h)+\\sum_k\\delta_k$. We shall prove that $f_1$ is the desired map. To see this note that\n\\begin{eqnarray*}n_{g_H}(g_H-f_1^*h) & = & n_{g_H}(g_H-[f_0^*h+\\frac{1}{2}(g_H-f_0^*h)+\\sum_k\\delta_k])\\\\\n & = & n_{g_H}(\\frac{1}{2}(g_H-f_0^*h)-\\sum_k\\delta_k)\\\\\n & \\leq & \\frac{1}{2}n_{g_H}(g_H-f_0^*h)+\\sum_k n_{g_H}(\\delta_k) \\ \\text{(since the sum is locally finite)}\\\\\n & \\leq & \\frac{1}{2}n_{g_H}(g_H-f_0^*h)+\\sum_k \\delta'_k\\\\\n\\end{eqnarray*}\nWe can choose $\\delta'_k$ at each stage so that $\\sum_k\\delta'_k<\\frac{1}{6}n_{g_H}(g_H-f_0^*h)$, and we obtain relation (ii). On the other hand,\n\n\\begin{eqnarray*}d_{g_0}(f_0,f_1) & \\leq & \\sum_{k\\geq 1} d_{g_0}(\\bar{f}_{k-1},\\bar{f}_{k})\\\\\n& \\leq & \\sum_{k\\geq 1} n_{g_H}(\\bar{g}_{k}-\\bar{g}_{k-1})^{1\/2}+\\sum_{k\\geq 1}\\vare_k \\ \\ \\mbox{by (3) above}\n\\end{eqnarray*}\nEach term of the first series on the right hand side can be estimated as follows:\n\\begin{eqnarray*}n_{g_H}(\\bar{g}_{k+1}-\\bar{g}_k)^{1\/2} & \\leq &\nn_{g_H}(g_H-\\bar{g}_k)^{1\/2}\\\\\n& = & n_{g_H}(g_H-\\bar{f}_0^*h)^{1\/2}.\\end{eqnarray*}\nHowever, since $\\bar{g}_{k}-\\bar{g}_{k-1}=\\phi_{k}^2d\\psi_{k}^2$ and at most $c(m)$ number of $\\phi_k$ are non-vanishing at a point, the series \n$\\sum_{k\\geq 1} n_{g_H}(\\bar{g}_{k}-\\bar{g}_{k-1})^{1\/2}$ is bounded above by $c(m) n_{g_H}(g_H-\\bar{f}_0^*h)^{1\/2}$. On the other hand, by Lemma~\\ref{approximation} we are allowed to choose the sequence $\\{\\vare_k\\}$ so that $\\sum_k\\vare_k<\\infty$. This gives the desired relation (iii). \\end{proof}\n\nWe have made all necessary preparation for the proof of Theorem~\\ref{main}.\n\\begin{proof} \\textit{of Theorem}~\\ref{main}. Let $f_0$ be as in the hypothesis of the theorem. 
Applying Lemma~\\ref{recursion} on $f_0$ recursively we can construct a sequence of $C^\\infty$ maps $\\{f_i:M\\to N: i=1,2,\\dots\\}$ which has the following properties. \n\\begin{enumerate}\n\\item $0m+k$, then we will show that there exists a vector $v\\in\\R^n$ such that $P_v\\circ f$ is an $H$-immersion, where $P_v$ denotes the orthogonal projection of $\\R^n$ onto $v^\\perp$. \n\nWe first cover $M$ by countably many open neighbouhoods $U_j$ such that $TM|_{U_j}$ is trivial. We may assume that under the trivialising map $H|_{U_j}$ sits inside $U_j\\times \\R^n$ as $U_j\\times \\R^k$. A vector $v\\in \\R^n$ for which $P_v\\circ f$ is not an $H$-immersion on $U_j$ corresponds to a pair $(x,u)\\in U_j\\times \\R^k$ such that $df_x(u)$ is a scalar multiple of $v$. Thus, for $v\\in S^{n-1}$, $P_v\\circ f$ is not an $H$-immersion on $U_j$ if and only if $v$ lies in the image of the map $F:U_j\\times S^{k-1}\\to S^{n-1}$ given by $(x,u)\\mapsto \\frac{df_x(u)}{\\|df_x(u)\\|}$. If $n>m+k$ then the image of this map is a set of measure zero by Sard's theorem \\cite{guillemin}. Since $M$ can be covered by countably many $U_j$'s and the countable union of sets of measure zero is again a set of measure zero, we have proved that $P_v\\circ f$ is an $H$-immersion for almost all $v\\in \\R^n$. Finally, we observe that the projection operators are length decreasing. Hence, $P_v\\circ f$ is also a $g_H$ short $H$-immersion since $f$ is so. Hence $M$ admits a $g_H$-short $H$-immersion $(M,g_H)\\to (\\R^n,g_{can})$ for $n\\geq \\dim M+$ rank $H$.\\end{proof}\n\nIn the special situation, when $H$ is an integrable subbundle, we can reformulate Corollary~\\ref{partial isometry} as follows.\n\\begin{cor} Every Riemannian manifold $(M,g_0)$ with a regular foliation $\\mathcal F$ admits a $C^1$-map $f:M\\to \\R^n$ which restricts to an isometric immersion on each leaf of the foliation, provided $n\\geq\\dim M+\\dim {\\mathcal F}$.\\label{integrable subbundle}\n\\end{cor} \n\n\\begin{rem} We observed in Section 1 that a partial isometry of a sub-Riemannian manifold $(M,H,g_H)$ is also a path isometry with respect to the Carnot-Caratheodory metric $d_H$ on $M$ induced by $g_H$. Therefore, by Corollary~\\ref{partial isometry} there is a path-isometry $f:(M,d_H)\\to (\\R^n, d_{can})$, provided $n\\geq \\dim M+$ rank $H$. We refer to a result in \\cite[Corollary 1.5]{donne} which is of similar inerest.\\end{rem}\n\n\\section{Applications of Theorem~\\ref{main}}\n\nIn this section we discuss some applications of Theorem~\\ref{main}. Throughout, we assume $M$ to be a closed manifold. \nFirst observe that if $M$ is a closed manifold and $N$ is an Euclidean space, then the hypothesis of Theorem~\\ref{main} can be relaxed to conclude the existence of partial isometry. Indeed, we do not require the $g_H$-shortness condition on $f_0$; given any $H$-immersion $f_0:M\\to \\R^n$ we can obtain a $g_H$-short $H$-immersion $f_1$ which is of the form $\\lambda f_0$, where $\\lambda$ is a positive real number. Applying Theorem~\\ref{main} we can then homotope $f_1$ to a partial isometry $f:M\\to \\R^n$. However, the resulting partial isometry cannot be made $C^0$-close to $f_0$ by this technique, since $f_1=\\lambda f_0$ may not be $C^0$-close to $f_0$.\n\n\\begin{cor}(\\cite{gromov}) Let $M$ be a closed manifold and $\\partial_i$, $i=1,2,\\dots,k$, be linearly independent vector fields on $M$. 
Then there exists a $C^1$ map $f:M\\to\\R^{k+1}$ such that $\\langle \\partial_if,\\partial_jf\\rangle=\\delta_{ij}$, $1\\leq i\\leq j\\leq k$, where $\\delta_{ij}=1$ if $i=j$ and $0$ if $i< j$.\\label{trivial subbundle}\\end{cor}\n\n\\begin{proof} Let $H$ be the (trivial) subbundle of $TM$ spanned by the vector fields $\\partial_i$, $i=1,2,\\dots,k$. Define a Riemannian metric $g_H$ on $H$ by the relations\n\\begin{center}$g_H(\\partial_i,\\partial_j)=0$ if $i\\neq j$ and $g_H(\\partial_i,\\partial_i)=1$\\end{center}\nfor $i,j=1,\\dots,k$. Consider the triple $(M,H,g_H)$ as defined above. Since $H$ is trivial, Proposition~\\ref{H-immersion} guarantees the existence of an $H$-immersion $M\\to \\R^{k+1}$, which can be scaled appropriately to obtain a strictly $g_H$-short $H$-immersion, since the manifold $M$ is closed. Hence by Theorem~\\ref{main} there exists a $C^1$ partial isometry $f:(M, g_H)\\to (\\R^{k+1}, g_{can})$. This means that $\\langle \\partial_if,\\partial_jf\\rangle=g_H\\langle \\partial_i,\\partial_j\\rangle$ and the proof is now complete.\\end{proof}\n\n\\begin{rem} A more general form of the above result is in fact true. Let $\\Sigma(k,\\R)$ denote the set of all positive definite symmetric matrices over the reals and let $g:M\\to \\Sigma(k,\\R)$ be any smooth map. Then $g$ can be realised as the matrix $(\\langle\\partial_i f,\\partial_j f\\rangle)_{i,j}$ for some $C^1$-function $f:M\\to \\R^{k+1}$ provided $M$ is a closed manifold.\\end{rem}\n\nGromov observed in \\cite{gromov} that if we take $k=1$ in Corollary~\\ref{trivial subbundle} then we can actually obtain $C^\\infty$ partial isometries. Here we give a direct proof of this result without going into the convex integration theory. \n\\begin{thm} If $M$ is a closed manifold and $X$ is a smooth nowhere vanishing vector field on $M$, then there exists a $C^\\infty$-map $f:M\\to\\R^2$ such that $\\langle Xf,Xf\\rangle=1$. \\label{smooth partial isometry}\\end{thm}\n\\begin{proof}{\\em of Theorem~\\ref{smooth partial isometry}} Let $X$ be a smooth vector field on $M$ which is nowhere vanishing. We need to solve the equation $\\langle Xf,Xf\\rangle=1$ for smooth functions $f:M\\to \\R^2$. Let $H$ denote the 1-dimensional (integrable) distribution on $M$ determined by $X$. By Proposition~\\ref{H-immersion} there exists an $H$-immersion $f_0:M\\to \\R^2$, which implies that $Xf_0$ is a nowhere vanishing function on $M$. Since $M$ is a closed manifold, without loss of generality we may assume that $0<\\langle Xf_0,Xf_0\\rangle=\\phi^2<1$. This condition means that $f_0$ is $g_H$-short if we define $g_H$ by $g_H(X,X)=1$. Consider the equation $\\langle X(f_0+\\alpha),X(f_0+\\alpha)\\rangle=1$, where $\\alpha:M\\to \\R^2$ is a smooth map. This reduces to\n\\begin{equation} \\langle X\\alpha,X\\alpha\\rangle+ 2\\langle Xf_0,X\\alpha\\rangle=1-\\phi^2.\n\\end{equation}\n We split this into a system of two equations as follows:\n\\begin{equation} \\langle Xf_0,X\\alpha\\rangle=0\\ \\ \\ \\ \\langle X\\alpha,X\\alpha\\rangle=1-\\phi^2.\\end{equation}\nNow note that $\\beta=\\frac{\\sqrt{1-\\phi^2}}{\\|Xf_0\\|}\\rho_{\\pi\/2}\\circ (Xf_0)$ is a formal solution of the above system, where $\\rho_{\\pi\/2}$ is the rotation of $\\R^2$ through the angle $\\pi\/2$ in the anticlockwise direction. 
Hence the problem of finding the desired $f$ reduces to solving the equation $X\\alpha=\\beta$, where $\\beta:M\\to\\R^2$ is a nowhere vanishing smooth function.\n\nThe vector field $X$ can be considered as a first order linear differential operator on $C^\\infty(M,\\R^2)$. We will define a differential operator $\\mathcal{M}:C^\\infty(M,\\R^2)\\to C^\\infty(M,\\R^2)$ such that $X(\\mathcal{M}(\\beta))=\\beta$ for all $\\beta\\in C^\\infty(M,\\R^2)$. We first observe that this problem is local and therefore it is enough to define local inversion operators on open sets around each point \\cite[2.3.8]{gromov}. To see this, let $\\{U_\\mu\\}$ be a locally finite open covering of $M$ by coordinate neighbourhoods on each of which we have a local inversion $\\mathcal{M}_\\mu$ of $X$. Define $\\mathcal{M}$ by $\\mathcal{M}\\beta=\\sum_\\mu \\mathcal{M}_\\mu(\\phi_\\mu \\beta)$, where $\\{\\phi_\\mu\\}$ is a partition of unity subordinate to the open covering $\\{U_\\mu\\}$. Then $\\mathcal{M}$ is a global inversion operator since\n\n\\begin{center}$\\begin{array}{rcll} X(\\mathcal{M}\\beta) & = & \\sum_\\mu X(\\mathcal{M}_\\mu(\\phi_\\mu \\beta)) & \\mbox{(since the sum is locally finite)}\\\\\n& = & \\sum_\\mu \\phi_\\mu\\beta & \\mbox{(since } \\mathcal{M}_\\mu\\mbox{ inverts } X\\mbox{ on } U_\\mu\\mbox{, where } \\phi_\\mu\\beta\\mbox{ is supported)}\\\\\n& = & \\beta.\\end{array}$\\end{center}\n\nIt now remains to prove the local existence of the inversions $\\mathcal{M}_\\mu$ of $X$. Recall that the distribution $H$ is integrable, so that $M$ is foliated by integral curves of $H$. Indeed, around each point of $M$ there exists a coordinate system $(U,(x,t))$ such that $U$ is diffeomorphic to $\\R^{n-1}\\times \\R$ and $\\frac{\\partial}{\\partial t}$ is tangent to $H$, so that $X$ can be expressed as $\\psi(x,t)\\frac{\\partial}{\\partial t}$ on $U$. Therefore, the problem reduces to solving the equation $\\psi(x,t)\\frac{\\partial\\alpha}{\\partial t}=\\beta(x,t)$. Since $\\psi(x,t)$ is nowhere vanishing, we can define $\\mathcal{M}_\\mu\\beta$ by $(x,t)\\mapsto\\int_0^t\\beta(x,s)\/\\psi(x,s)\\,ds$. This completes the proof.\\end{proof}\n\n\\begin{rem} It is possible that the closedness condition on $M$ is not required in the above two results (see \\cite{gromov}).\\end{rem}\n\nWe end this section with an example of a $C^\\infty$ partial isometry which supports the above theorem.\n\\begin{ex} {\\em Let $\\psi:\\R^2\\to \\R^3$ be the smooth immersion defined by \\[\\psi(\\theta,\\phi)=((b+a\\cos\\theta)\\cos\\phi,(b+a\\cos\\theta)\\sin\\phi, a\\sin\\theta),\\]\nwhere $(\\theta,\\phi)\\in\\R^2$ and $a$, $b$ are two real numbers with $0<a<b$.}\\end{ex}\n\nThe shape of the measured current--voltage characteristic above $4\\;\\mr{mV}$ indicates a degree of asymmetry in the individual junction resistances, which can be expected for the ultrasmall contacts. The minimum voltage below which the current $I$ is strongly suppressed corresponds to $4\\Delta\/e\\approx 900\\;\\mu\\mr{V}$, typical for the series combination of two Al-based SINIS structures with $\\Delta\\approx 225\\;\\mu\\mr{eV}$. In contrast, the maximum blockade voltage $4\\Delta+E_{\\mr{c},\\Sigma}$ gives an estimate $E_{\\mr{c},\\Sigma}\\approx 1\\;\\mr{meV}$ for the sum of the charging energies of the individual turnstiles.\n\n\\begin{figure}[!htb]\n\\includegraphics[width=\\columnwidth]{dutchar_a.png}\n\\caption{(color online) $\\left(\\mathrm{a}\\right)\\;$ Current--voltage characteristic of the series turnstiles L and R. The two points at each $V_{\\mr{b}}$ show the minimal and maximal current when the gate voltages $V_{\\mr{g,L}}$ and $V_{\\mr{g,R}}$ are swept over several periods. The top inset shows the gate modulation of the current $I$ at fixed $V_{\\mr{b}}=2\\;\\mr{mV}$. 
The bottom inset displays the gate-dependent residual tunnel-event rate observed by the detector when the series turnstiles are biased at fixed $V_{\\mr{b}}=-50\\;\\mu\\mr{V}$ and the gate offsets are scanned. The light blue arrows sketch how the gates are operated sequentially during electronic pumping. $\\left(\\mathrm{b}\\right)\\;$ From bottom to top: plateaus of the average pumped current $I$ as a function of the drive amplitude $A_{\\mr{g}}$ (peak-to-peak) under continuous pumping operation at 3, 4, and $5\\;\\mr{MHz}$, respectively.} \\label{fig:dut}\n\\end{figure}\n\nThe top left inset of Fig.~\\ref{fig:dut}~$\\left(\\mathrm{a}\\right)\\;$ displays a surface plot of the current $I$ through the turnstiles at fixed bias $V_{\\mr{b}}=2\\;\\mr{mV}$ as a function of the two gate voltages: $I$ is $e$-periodic in both $V_{\\mr{g,L}}$ and $V_{\\mr{g,R}}$, and maximized when each turnstile is tuned to charge degeneracy. The lack of skewness, {\\it i.e.}, distortion of the underlying square lattice of current maxima, indicates negligible coupling between the L and R gate signals. As with the envelope curves in the main panel of Fig.~\\ref{fig:dut}~$\\left(\\mathrm{a}\\right)$, the shape of the cross sections here also suggests asymmetry of the turnstile junctions. Based on SEM observations, a difference of $50\\%$ in the junction areas is typical.\n\nAt $V_{\\mr{b}}\\ll 1\\;\\mr{mV}$ the average current $I$ is strongly suppressed. However, in this bias regime the detector comes into play as a direct probe of the charge fluctuations through the turnstiles: The bottom right inset of Fig.~\\ref{fig:dut}~$\\left(\\mathrm{a}\\right)\\;$ shows a 2D slice of the total rate $\\Gamma$ of tunneling events onto or off the central island at $V_{\\mr{b}}=-50\\;\\mu\\mr{V}$. This plot inherits its shape from the behavior of $I$ as a function of the two gate voltages, evident as strong peaking of $\\Gamma$ at double charge degeneracy. As $V_{\\mr{b}}$ is increased, the areas of elevated $\\Gamma$ around these points and the lines connecting them grow larger. Nevertheless, even at $V_{\\mr{b}}\\approx 400\\;\\mu\\mr{V}$, essential for the driven operation of the turnstiles, a large range of suppressed $\\Gamma$ remains around each gate offset that corresponds to an integer charge state on the respective turnstile island. Notably, in the present experiment where the electrons are counted in the S lead, we are able to probe the tunneling rates even when the turnstile islands are in Coulomb blockade. This was not the case for the previous SINIS counting experiments~\\cite{saira12,maisi11}, and constitutes an essential ingredient needed for detection of higher-order tunneling processes in SINIS structures.\n\nIn Fig.~\\ref{fig:dut}~$\\left(\\mathrm{b}\\right)\\;$ we demonstrate that, under continuous drive, electrons can be transferred through the two series turnstiles as expected. The plot shows the average pumped current $I$ at $V_{\\mr{b}}=400\\;\\mu\\mr{V}$ when the L and R gates were driven simultaneously, with relative delay time $\\tau=0$, by pulses of 50\\% duty cycle and increasing amplitude $A_{\\mr{g}}$ (peak-to-peak), centered around the charge degeneracy points for each turnstile. The blue, green, and red curves from bottom to top correspond to continuous pumping at $f=3$, 4, and $5\\;\\mr{MHz}$, respectively. 
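As a point of reference, with $e\\approx 1.602\\times 10^{-19}\\;\\mr{C}$ the ideal plateau currents $I=ef$ evaluate to\n\\begin{center}$ef\\approx 0.48$, $0.64$, and $0.80\\;\\mr{pA}$ for $f=3$, 4, and $5\\;\\mr{MHz}$, respectively.\\end{center}\n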
In this measurement, whose accuracy is limited by the short averaging time, the resulting average currents of 0.48, 0.64, and $0.80\\;\\mr{pA}$ on the plateau are within 1\\% of the expected values given by $ef$.\n\n\\begin{figure}[!htb]\n\\includegraphics[width=\\columnwidth]{dcchar_a.png}\n\\caption{(color online) $\\left(\\mathrm{a}\\right)\\;$ Detector current--voltage characteristic, with the points at each $V_{\\mr{det}}$ obtained by sweeping the gate voltage $V_{\\mr{g,det}}$ over several periods. The horizontal dashed line indicates the fixed bias current used for most of the measurements. The inset shows a typical time trace of the detector signal in the absence of time-dependent drives of $V_{\\mr{g,L}}$ and $V_{\\mr{g,R}}$. The red dots indicate tunneling events identified by a simple edge-detecting algorithm. $\\left(\\mathrm{b}\\right)\\;$ Bias voltage dependence of the maximum observed event rate $\\Gamma$ of the undriven system, corresponding to both turnstiles tuned to charge degeneracy. The rates are extracted as the maxima of 2D gate scans similar to the bottom inset of Fig.~\\ref{fig:dut}~$\\left(\\mathrm{a}\\right)$. The error bars show the standard deviation from a few repetitions of the measurement.} \\label{fig:dc}\n\\end{figure}\n\nThe main panel of Fig.~\\ref{fig:dc}~$\\left(\\mathrm{a}\\right)\\;$ displays the IV characteristic of the detector SET with $R_{\\mr{T,det}}\\approx1\\;\\mr{M}\\Omega$. The dots at each $V_{\\mr{det}}$ indicate the range of currents $I_{\\mr{det}}$ flowing through the detector as the gate voltage $V_{\\mr{g,det}}$ is swept over a period corresponding to several $e$. The bias current $I_{\\mr{det}}\\approx 40\\;\\mr{pA}$, shown by the dashed gray horizontal line and employed throughout the electron counting experiments described in this work, was chosen based on high sensitivity and small backaction onto the turnstiles. The inset of Fig.~\\ref{fig:dc}~$\\left(\\mathrm{a}\\right)\\;$ shows a typical time trace of the detector signal for an undriven system at $V_{\\mr{b}}=-50\\;\\mu\\mr{V}$, with $V_{\\mr{g,L}}$ and $V_{\\mr{g,R}}$ tuned close to charge degeneracy. Here we chose to operate the detector at fixed current (and gate voltage $V_{\\mr{g,det}}$) and record the varying voltage $V_{\\mr{det}}$. To facilitate extraction of the rate $\\Gamma$ from such traces, the red dots indicate edges identified by a simple algorithm based on threshold detection and numerical differentiation of the detector signal after digital low-pass filtering. We note that the detector could equally well be operated at a fixed bias voltage $V_{\\mr{det}}$, in which case changes in $I_{\\mr{det}}$ would be monitored. For the present measurements, current bias was preferred due to its higher signal-to-noise ratio.\n\nIn Fig.~\\ref{fig:dc}~$\\left(\\mathrm{b}\\right)\\;$ we plot the $V_{\\mr{b}}$-dependence of the maximum of the event rate $\\Gamma$, corresponding to both turnstiles at charge degeneracy. In this sample we found the voltage applied to the M-gate to have no clearly distinguishable effect on the rates, and this gate electrode, coupled to the middle island, was kept grounded during the majority of the measurements. 
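To make the edge-detection step concrete, a minimal sketch in Python could read as follows; the cutoff frequency and the threshold value below are illustrative placeholders, not the parameters used in the experiment.\n\\begin{verbatim}\nimport numpy as np\n\ndef detect_edges(trace, fs, f_cut=100.0, threshold=1.0):\n    # trace: detector signal samples, fs: sampling rate (Hz)\n    trace = np.asarray(trace, dtype=float)\n    # digital low-pass filtering by a simple moving average\n    n_avg = max(1, int(fs \/ f_cut))\n    smoothed = np.convolve(trace, np.ones(n_avg) \/ n_avg, mode='same')\n    # numerical differentiation of the filtered signal\n    derivative = np.diff(smoothed) * fs\n    # threshold detection on the magnitude of the derivative\n    above = np.abs(derivative) > threshold\n    # keep only the first sample of each contiguous run above threshold\n    return np.flatnonzero(above & ~np.roll(above, 1))\n\\end{verbatim}\nApplied to a trace such as the one in the inset of Fig.~\\ref{fig:dc}~$\\left(\\mathrm{a}\\right)$, the returned sample indices would correspond, up to the choice of cutoff and threshold, to the marked tunneling events.\n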
For the pumping operation under pulsed drive described in the following sections, it is important to note that in our measurements only one of the turnstile gates was in general driven at any given time of the operation cycle, {\\it cf.} the blue arrows in the bottom inset of Fig.~\\ref{fig:dut}~$\\left(\\mathrm{a}\\right)$.\n\n\\section{Detection of all electrons at slow repetition rates}\nWe now consider sequential operation under drive with low duty cycle and repetition frequency. The main finding of the present work is illustrated in Fig.~\\ref{fig:pump}~$\\left(\\mathrm{a}\\right)$: The L and R turnstile gates are driven sequentially by the pulse sequences sketched on the left, where each gate voltage pulse of amplitude close to $1e$ is ideally expected to result in the transfer of one electron through the turnstile in question. As evidenced by the time traces of the detector voltage on the right, this is indeed what we directly observe -- the detector signal, and hence the charge state on the middle island, depends only on the repetition frequency $f$ and the phases (relative time delay $\\tau$) of the two drive signals, whereas no dependence on the length $T_{\\mr{pulse}}\\ll f_{\\mr{S}}^{-1},f^{-1},\\tau$ of the individual drive pulses is seen. The three example sequences correspond to the transfer of $+1\\rightarrow -1$ (top), $+1\\rightarrow -1 \\rightarrow -1 \\rightarrow +1$ (middle), and $+1\\rightarrow +1 \\rightarrow -1 \\rightarrow -1$ (bottom) electrons to the middle island. Each timing diagram sketches two identical cycles of the drive, with each cycle followed by a slight delay for easier visual recognition. The colored bands and the numbers on the right indicate the number of electrons on the counting node, relative to the value at the start of the drive.\n\n\\begin{figure}[!htb]\n\\includegraphics[width=\\columnwidth]{slowpump_a.png}\n\\caption{(color online) $\\left(\\mathrm{a}\\right)\\;$ Examples of control sequences and corresponding detector signals during slow manipulation of the charge state on the counting node (see the main text for details). $\\left(\\mathrm{b}\\right)\\;$ Detection of each electron over a longer time span of slow sequential pumping operation. An error event where turnstile L transfers two electrons occurs around 1.2 s. Such relatively rare errors remain distinguishable with a bandwidth-limited detector even at high $f$, when each transferred charge is no longer individually resolvable. In this limit, however, the setup with a single counting node cannot discriminate whether the error is due to missed or extra electron tunneling.} \\label{fig:pump}\n\\end{figure}\n\nThe strong coupling of the detector is evident in the traces in Fig.~\\ref{fig:pump}~$\\left(\\mathrm{a}\\right)$: Notably, the vertical scale is the same in all three panels, illustrating that the nonlinearity of the detector gate modulation becomes relevant after the charge state on the middle island changes by only a few electrons. Note also that the detector is operating on a different slope of its gate modulation in the top panel compared to the other two sequences -- in this case pulsing $V_{\\mr{g,L}}$ results in a step down in the detected signal.\n\nIn Fig.~\\ref{fig:pump}~$\\left(\\mathrm{a}\\right)\\;$ we considered different sequences of manipulation of the charge state on the counting node, for the duration of a few cycles of the periodic drive pulse trains. 
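The expected staircase patterns follow from simple bookkeeping: for the bias polarity used here we take each L-gate pulse to move one electron onto the counting node and each R-gate pulse to move one electron off it (this sign convention is an assumption of the sketch and follows the bias direction). A minimal Python sketch with illustrative pulse times:\n\\begin{verbatim}\ndef node_charge(l_pulse_times, r_pulse_times):\n    # net electron number on the counting node, relative to the start\n    events = sorted([(t, +1) for t in l_pulse_times] +\n                    [(t, -1) for t in r_pulse_times])\n    charge, staircase = 0, []\n    for t, step in events:\n        charge += step\n        staircase.append((t, charge))\n    return staircase\n\n# two cycles of an L,R,R,L sequence (times in arbitrary units)\nprint(node_charge([0, 3, 10, 13], [1, 2, 11, 12]))\n# -> [(0, 1), (1, 0), (2, -1), (3, 0), (10, 1), (11, 0), (12, -1), (13, 0)]\n\\end{verbatim}\nThe resulting list reproduces the kind of $+1\\rightarrow -1 \\rightarrow -1 \\rightarrow +1$ step pattern sketched in the middle panel of Fig.~\\ref{fig:pump}~$\\left(\\mathrm{a}\\right)$.\n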
Figure~\\ref{fig:pump}~$\\left(\\mathrm{b}\\right)\\;$ extends this with a typical time trace over a longer span, demonstrating the sequential transfer of more than 100 electrons through the counting node without errors. Here, $V_{\\mr{b}}$ was set to $-400\\;\\mu\\mr{V}$ and pulses of $500\\;\\mr{ns}$ length and $1e$ amplitude were applied to $V_{\\mr{g,L}}$ and $V_{\\mr{g,R}}$ at $f=80\\;\\mr{Hz}$ with $\\tau=T_{\\mr{rep}}\/4$. The typical observed duration of faultless operation varied from 0.5 up to 10 seconds, in line with the residual event rate [{\\it cf.} Fig.~\\ref{fig:dut}~$\\left(\\mathrm{a}\\right)$] at the operating points when no drive pulses are applied. Also evident in Fig.~\\ref{fig:pump}~$\\left(\\mathrm{b}\\right)\\;$ is one error event where an extra electron is transferred through turnstile L. In such measurements at low repetition rates $f$ up to $100$--$200\\;\\mr{Hz}$, where the timing of each tunneling event is clearly resolved by the detector, in the spirit of Ref.~\\onlinecite{wulf13} we can reliably identify in which of the two turnstiles the errors appear to originate, the direction in which they occur, and whether they are caused by background charge jumps. In Ref.~\\onlinecite{fricke14}, the advantage of using two detectors is that such information is available even at faster repetition rates. We aim to test this in future experiments. Moreover, based on straightforward modeling, it is further possible to estimate, e.g., the probability of misattributing one missed tunneling in turnstile L to one extra electron tunneling in turnstile R, which would result in close to identical detector signals.\n\nThe operation points for the measurements in Fig.~\\ref{fig:pump} were determined by first scanning the L gate offset while the R offset was kept fixed at some constant value. Under pulsed, constant-amplitude drive at 50\\% duty cycle, the center of the resulting plateau in the average pumped current was then determined. The L gate offset was set to this optimal value, and another scan was subsequently performed, this time varying the R offset. Such a gate offset search method, based on the average pumped current under continuous drive, was found to be reliable as it is possible to keep $V_{\\mr{b}}$ constant once it has been set to the desired value. The advantage here is that the only action needed to transition between the faster continuous pumping measurements in Fig.~\\ref{fig:dut}~$\\left(\\mathrm{b}\\right)\\;$ and the representative on-chip electrometer traces in Fig.~\\ref{fig:pump}~$\\left(\\mathrm{a}\\right)\\;$ is a change in the duty cycle and repetition frequency of the L and R gate drives.\n\n\\section{Detection of pumping errors only at higher repetition rates}\nWe next describe detection of pumping errors at drive frequencies $f$ too high to distinguish each electron entering or exiting the counting node, but on the other hand low enough that the error rate remains within the sub-kHz bandwidth of the detector. For our present device, this limits the highest usable $f$ to around $100\\;\\mr{kHz}$ or less. The symbols in Fig.~\\ref{fig:fdep}~$\\left(\\mathrm{a}\\right)\\;$ show error rates $\\Gamma$ extracted from 30 s long time traces of the detector signal. Here, the pulsed turnstile drive with $T_{\\mr{pulse}}=100\\;\\mr{ns}$ and $\\tau=T_{\\mr{rep}}\/4$ at repetition frequencies $f$ up to $400\\;\\mr{kHz}$ was switched on\/off at the rate of $1\\;\\mr{Hz}$ to help verify the stability of the gate offsets. 
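One simple way to turn such on\/off-modulated traces into an error rate is to count the detected jumps that fall within the `drive on' windows and normalize by the accumulated on-time; a minimal sketch (assuming, for illustration, that the drive is on during the first half of each modulation period):\n\\begin{verbatim}\nimport numpy as np\n\ndef error_rate(edge_times, period=1.0, duty=0.5, total_time=30.0):\n    # edge_times: detected jump times (s) from the edge detector above\n    phase = np.mod(np.asarray(edge_times), period)\n    on_events = np.count_nonzero(phase < duty * period)  # jumps while drive is on\n    return on_events \/ (duty * total_time)               # events per second\n\\end{verbatim}\nWith, say, 96 jumps found in the on-windows of a 30 s trace, this would give $\\Gamma=6.4\\;\\mr{Hz}$; the counts registered during the off-windows provide an estimate of the background contribution.\n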
The trace in the inset of Fig.~\\ref{fig:fdep}~$\\left(\\mathrm{b}\\right)$, obtained at $V_{\\mr{b}}=400\\;\\mu\\mr{V}$ and $f=50\\;\\mr{kHz}$, illustrates a typical error signal. During the `drive off' sections the detector registers a small number of extra counts due to the background rate, {\\it cf.} Fig.~\\ref{fig:dc}, but this contribution remains negligible compared to the jumps recorded during the driven operation. The red and blue symbols correspond to measurements at $V_{\\mr{b}}=-300\\;\\mu\\mr{V}$ and $-420\\;\\mu\\mr{V}$, respectively. Assuming independent $1e$ error events, we expect $\\Gamma$ to scale linearly with $f$. The black and gray solid lines plot $\\Gamma_0=rf$ with the respective relative error rates $r=3.2\\times 10^{-3}$ and $7.5\\times 10^{-4}$, in reasonable agreement with the detected rates at $f\\lesssim 100\\;\\mr{kHz}$.\n\nAt higher $f$ the measured values deviate increasingly from the expected linear behavior, and for $V_{\\mr{b}}=-300\\;\\mu\\mr{V}$ they even display a tendency to decrease. We attribute this to the limited detector bandwidth: When two events happen within or almost within the $\\approx 1\\;\\mr{ms}$ rise time, they cannot be reliably distinguished by the edge detection algorithm. The dashed lines model this by an effective error rate $\\Gamma_{\\mr{eff}}=\\Gamma_0\\exp(-\\Gamma_0\\tau_{\\mr{det}})$, {\\it i.e.}, the `intrinsic' rate multiplied by the estimated probability that the interval between two successive independent (Poisson-distributed) events exceeds the detector time constant $\\tau_{\\mr{det}}\\approx 1.4\\;\\mr{ms}$. The initial, close to linear increase of the observed $\\Gamma$ with $f$ demonstrates that we are able to reliably count the number of pumping errors. However, in contrast to Ref.~\\onlinecite{fricke14}, where three pumps in series were monitored by two independent detectors on two separate counting nodes, we are unable to tell the direction of the errors or in which turnstile they occurred. Unlike in Fig.~\\ref{fig:pump}~$\\left(\\mathrm{a}\\right)$, where $f\\ll f_{\\mr{S}}$, we can no longer deduce this based on detailed timing information of individual tunneling events.\n\n\\begin{figure}[!htb]\n\\includegraphics[width=\\columnwidth]{rates_fdep_a.png}\n\\caption{(color online) $\\left(\\mathrm{a}\\right)\\;$ Observed error rate $\\Gamma$ as a function of the drive frequency $f$, at $V_{\\mr{b}}=-300\\;\\mu\\mr{V}$ (red\/light symbols), and at $V_{\\mr{b}}=-420\\;\\mu\\mr{V}$ (blue\/dark). The respective gray and black solid lines are straight lines through the origin, showing the expected linear behavior. The dashed lines include the effect of the limited detector bandwidth. $\\left(\\mathrm{b}\\right)\\;$ Bias voltage dependence of the error rate at the fixed repetition frequency $f=50\\;\\mr{kHz}$. The inset shows a typical time trace of error events recorded at $V_{\\mr{b}}=400\\;\\mu\\mr{V}$, $f=50\\;\\mr{kHz}$, and $T_{\\mr{pulse}}=100\\;\\mr{ns}$, with the pulsed drive switched on\/off every 1 s. Here, the drive is on for approximately $0.5\\;\\mr{s}