diff --git a/.gitattributes b/.gitattributes index f51205c24227faeeca8bd3c71bacd8def546225e..0e31cf53af5c4903c0852bca7159becdf894111d 100644 --- a/.gitattributes +++ b/.gitattributes @@ -223,3 +223,4 @@ data_all_eng_slimpj/shuffled/split/split_finalac/part-09.finalac filter=lfs diff data_all_eng_slimpj/shuffled/split/split_finalac/part-00.finalac filter=lfs diff=lfs merge=lfs -text data_all_eng_slimpj/shuffled/split/split_finalac/part-17.finalac filter=lfs diff=lfs merge=lfs -text data_all_eng_slimpj/shuffled/split/split_finalac/part-18.finalac filter=lfs diff=lfs merge=lfs -text +data_all_eng_slimpj/shuffled/split/split_finalac/part-13.finalac filter=lfs diff=lfs merge=lfs -text diff --git a/data_all_eng_slimpj/shuffled/split/split_finalac/part-13.finalac b/data_all_eng_slimpj/shuffled/split/split_finalac/part-13.finalac new file mode 100644 index 0000000000000000000000000000000000000000..79b56ba0fd0ff1e4b5fccce2889d94e18b29a29d --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split/split_finalac/part-13.finalac @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61c3d2bd1ad5c05abf69e3eca8b27a0afb3f3c4a5695fbf60c06a85bc54d90fe +size 12576662197 diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzaebm b/data_all_eng_slimpj/shuffled/split2/finalzzaebm new file mode 100644 index 0000000000000000000000000000000000000000..c4333f41044dd7d05ed9830e9e41bba2484c38a6 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzaebm @@ -0,0 +1,5 @@ +{"text":"\\section*{Introduction}\n\nSet $[n]:=\\{x_1,\\ldots,x_n\\}$. Let $K$ be a field and\n$S=K[x_1,\\ldots,x_n]$, a polynomial ring over $K$. Let $\\Delta$ be a\nsimplicial complex over $[n]$. For an integer $t\\geq 0$, Haghighi,\nYassemi and Zaare-Nahandi introduced the concept of ${\\mathrm{CM}}_t$-ness,\nwhich is the pure version of the notion of simplicial complexes\n\\emph{Cohen-Macaulay in codimension $t$} studied in \\cite{MiNoSw}. A\nreason for the importance of ${\\mathrm{CM}}_t$ simplicial complexes is that\nthey generalize two notions for simplicial complexes: being\nCohen-Macaulay and Buchsbaum. In particular, by the results from\n\\cite{Re,Sh}, ${\\mathrm{CM}}_0$ is the same as Cohen-Macaulayness and ${\\mathrm{CM}}_1$\nis identical with the Buchsbaum property.\n\nIn \\cite{HaYaZa1}, the authors described some combinatorial properties\nof ${\\mathrm{CM}}_t$ simplicial complexes, gave some characterizations of them,\nand generalized some results of \\cite{Hi,Mi}. Then, in \\cite{HaYaZa2},\nthey generalized a characterization of Cohen-Macaulay bipartite graphs\nfrom \\cite{HeHi} and a result of \\cite{CoNa} on unmixed Buchsbaum graphs.\n\nBayati and Herzog defined the expansion functor in the category of\nfinitely generated multigraded $S$-modules and studied some homological\nbehaviors of this functor (see \\cite{BaHe}). The expansion functor allows\none to produce, from a given finitely generated multigraded $S$-module,\nother multigraded $S$-modules which may share some of the algebraic\nproperties of the original module. In this way one can introduce new\nstructures with the same properties and, especially, extend some homological\nor algebraic results to larger classes (see for example \\cite[Theorem 4.2]{BaHe}).\nThere are some combinatorial versions of the expansion functor, which we will recall in this paper.\n\nThe purpose of this paper is to study the behavior of the expansion\nfunctor on ${\\mathrm{CM}}_t$ complexes. 
We first recall some notations and\ndefinitions of ${\\mathrm{CM}}_t$ simplicial complexes in Section 1. In the\nnext section we describe the expansion functor in three contexts:\nthe expansion of a simplicial complex, the expansion of a simple\ngraph, and the expansion of a monomial ideal. We show that there is a\nclose relationship between these three contexts. In Section 3 we\nprove that the expansion of a ${\\mathrm{CM}}_t$ complex $\\Delta$ with respect to\n$\\alpha$ is ${\\mathrm{CM}}_{t+e-k+1}$ but it is not ${\\mathrm{CM}}_{t+e-k}$, where\n$e=\\dim(\\Delta^\\alpha)+1$ and $k$ is the minimum of those components of\n$\\alpha$ which are greater than $1$ (see Theorem \\ref{main}). In Section 4, we introduce a new\nfunctor, called contraction, which acts in contrast to the expansion\nfunctor. As a main result of this section we show that if the\ncontraction of a ${\\mathrm{CM}}_t$ complex is pure and all components of the\nvector obtained from contraction are greater than or equal to $t$,\nthen the contraction is Buchsbaum (see Theorem \\ref{contract,CM-t}). The section\nconcludes with a view towards the contraction of simple graphs.\n\n\\section{Preliminaries}\n\nLet $t$ be a non-negative integer. We recall from \\cite{HaYaZa1}\nthat a simplicial complex $\\Delta$ is called ${\\mathrm{CM}}_t$ or\n\\emph{Cohen-Macaulay in codimension $t$} if it is pure and for every\nface $F\\in\\Delta$ with $\\#(F)\\geq t$, $\\mathrm{link}_\\Delta(F)$ is Cohen-Macaulay.\nEvery ${\\mathrm{CM}}_t$ complex is also ${\\mathrm{CM}}_r$ for all $r\\geq t$. For $t<0$,\n${\\mathrm{CM}}_t$ means ${\\mathrm{CM}}_0$. The properties ${\\mathrm{CM}}_0$ and ${\\mathrm{CM}}_1$ are the\nsame as Cohen-Macaulayness and Buchsbaumness, respectively.\n\nThe link of a face $F$ in a simplicial complex $\\Delta$ is denoted by\n$\\mathrm{link}_\\Delta(F)$ and is defined as $$\\mathrm{link}_\\Delta(F)=\\{G\\in\\Delta: G\\cap F=\\emptyset,\nG\\cup F\\in\\Delta\\}.$$ The following lemma is useful for checking the\n${\\mathrm{CM}}_t$ property of simplicial complexes:\n\n\\begin{lem}\\label{CM-t eq}\n(\\cite[Lemma 2.3]{HaYaZa1}) Let $t\\geq 1$ and let $\\Delta$ be a nonempty complex. Then $\\Delta$ is ${\\mathrm{CM}}_t$ if and only if $\\Delta$ is pure and $\\mathrm{link}_\\Delta(v)$ is ${\\mathrm{CM}}_{t-1}$ for every vertex $v\\in\\Delta$.\n\\end{lem}\n\nLet $\\mathcal G=(V(\\mathcal G),E(\\mathcal G))$ be a simple graph with vertex set $V=V(\\mathcal G)$ and edge set $E=E(\\mathcal G)$. The \\emph{independence complex} of $\\mathcal G$ is the complex $\\Delta_\\mathcal G$ with vertex set $V$ and with faces consisting of independent sets of vertices of $\\mathcal G$. Thus $F$ is a face of $\\Delta_\\mathcal G$ if and only if there is no edge of $\\mathcal G$ joining any two\nvertices of $F$.\n\nThe \\emph{edge ideal} of a simple graph $\\mathcal G$, denoted by $I(\\mathcal G)$, is the ideal of $S$ generated by all squarefree monomials $x_ix_j$ with $x_ix_j\\in E(\\mathcal G)$.\n\nA simple graph $\\mathcal G$ is called ${\\mathrm{CM}}_t$ if $\\Delta_\\mathcal G$ is ${\\mathrm{CM}}_t$, and it is called \\emph{unmixed} if $\\Delta_\\mathcal G$ is pure.\n\nFor a monomial ideal $I\\subset S$, we denote by $G(I)$ the unique minimal set of monomial generators of $I$.\n\n\\section{The expansion functor in combinatorial and algebraic concepts}\n\nIn this section we define the expansion of a simplicial complex and recall the expansion of a simple graph from \\cite{Sc} and the expansion of a monomial ideal from \\cite{BaHe}. 
We show that these concepts are intimately related to each other.\n\n(1) Let $\\alpha=(k_1,\\ldots,k_n)\\in\\mathbb N^n$. For $F=\\{x_{i_1},\\ldots,x_{i_r}\\}\\subseteq \\{x_1,\\ldots,x_n\\}$ define\n$$F^\\alpha=\\{x_{i_11},\\ldots,x_{i_1k_{i_1}},\\ldots,x_{i_r1},\\ldots,x_{i_rk_{i_r}}\\}$$\nas a subset of $[n]^\\alpha:=\\{x_{11},\\ldots,x_{1k_1},\\ldots,x_{n1},\\ldots,x_{nk_n}\\}$. $F^\\alpha$ is called \\emph{the expansion of $F$ with respect to $\\alpha$.}\n\nFor a simplicial complex $\\Delta=\\langle F_1,\\ldots,F_r\\rangle$ on $[n]$, we define \\emph{the expansion of $\\Delta$ with respect to $\\alpha$} as the simplicial complex\n$$\\Delta^\\alpha=\\langle F^\\alpha_1,\\ldots,F^\\alpha_r\\rangle.$$\n\n(2) The \\emph{duplication} of a vertex $x_i$ of a simple graph $\\mathcal G$ was first introduced by Schrijver \\cite{Sc}; it means extending the vertex set $V(\\mathcal G)$ by a new vertex $x'_i$ and replacing $E(\\mathcal G)$ by\n$$E(\\mathcal G)\\cup\\{(e\\backslash\\{x_i\\})\\cup\\{x'_i\\}:x_i\\in e\\in E(\\mathcal G)\\}.$$\nFor the $n$-tuple $\\alpha=(k_1,\\ldots,k_n)\\in\\mathbb N^n$, with positive integer entries, the \\emph{expansion} of the simple graph $\\mathcal G$ is denoted by $\\mathcal G^\\alpha$ and it is obtained from $\\mathcal G$ by successively duplicating every vertex $x_i$ exactly $k_i-1$ times.\n\n(3) In \\cite{BaHe} Bayati and Herzog defined the expansion functor in the category of finitely generated multigraded $S$-modules and studied some homological behaviors of this functor. We recall the expansion functor defined by them only in the category of monomial ideals and refer the reader to \\cite{BaHe} for the more general case in the category of finitely generated multigraded $S$-modules.\n\nLet $S^\\alpha$ be a polynomial ring over $K$ in the variables\n$$x_{11},\\ldots,x_{1k_1},\\ldots,x_{n1},\\ldots,x_{nk_n}.$$\nWhenever $I\\subset S$ is a monomial ideal minimally generated by\n$u_1,\\ldots,u_r$, the expansion of $I$ with respect to $\\alpha$ is\ndefined by\n$$I^\\alpha=\\sum^r_{i=1}P^{\\nu_1(u_i)}_1\\ldots P^{\\nu_n(u_i)}_n\\subset S^\\alpha$$\nwhere $P_j=(x_{j1},\\ldots,x_{jk_j})$ is a prime ideal of $S^\\alpha$ and $\\nu_j(u_i)$ is the exponent of $x_j$ in $u_i$.\n\nIt was shown in \\cite{BaHe} that the expansion functor is exact and\nso $(S\/I)^\\alpha=S^\\alpha\/I^\\alpha$. In the following lemmas we\ndescribe the relations between the above three concepts of expansion.\n\n\\begin{lem}\\label{expansion s-R}\nFor a simplicial complex $\\Delta$ we have $I^\\alpha_\\Delta=I_{\\Delta^\\alpha}$. In particular, $K[\\Delta]^\\alpha=K[\\Delta^\\alpha]$.\n\\end{lem}\n\\begin{proof}\nLet $\\Delta=\\langle F_1,\\ldots,F_r\\rangle$. Since $I_\\Delta=\\bigcap^r_{i=1}P_{F^c_i}$, it follows from Lemma 1.1 in \\cite{BaHe} that $I^\\alpha_\\Delta=\\bigcap^r_{i=1}P^\\alpha_{F^c_i}$. The result follows from the fact that $P^\\alpha_{F^c_i}=P_{(F^\\alpha_i)^c}$.\n\\end{proof}\n\nLet $u=x_{i_1}\\ldots x_{i_t}\\in S$ be a monomial and $\\alpha=(k_1,\\ldots,k_n)\\in\\mathbb N^n$. We set $u^\\alpha=G((u)^\\alpha)$\nand for a set $A$ of monomials in $S$, $A^\\alpha$ is defined as $$A^\\alpha=\\bigcup_{u\\in A} u^\\alpha.$$\nOne can easily obtain the following lemma.\n\n\\begin{lem}\nLet $I\\subset S$ be a monomial ideal and $\\alpha\\in\\mathbb N^n$. 
Then $G(I^\\alpha)=G(I)^\\alpha$.\n\\end{lem}\n\n\\begin{lem}\nFor a simple graph $\\mathcal G$ on the vertex set $[n]$ and $\\alpha\\in\\mathbb N^n$ we have $I(\\mathcal G^\\alpha)=I(\\mathcal G)^\\alpha$.\n\\end{lem}\n\\begin{proof}\nLet $\\alpha=(k_1,\\ldots,k_n)$ and $P_j=(x_{j1},\\ldots,x_{jk_j})$. Then it follows from Lemma 11(ii,iii) of \\cite{BaHe} that\n$$I(\\mathcal G^\\alpha)=(x_{ir}x_{js}:x_ix_j\\in E(\\mathcal G), 1\\leq r\\leq k_i,1\\leq s\\leq k_j)=\\sum_{x_ix_j\\in E(\\mathcal G)}P_iP_j$$\n$$=\\sum_{x_ix_j\\in E(\\mathcal G)}(x_i)^\\alpha (x_j)^\\alpha=(\\sum_{x_ix_j\\in E(\\mathcal G)}(x_i)(x_j))^\\alpha=I(\\mathcal G)^\\alpha.$$\n\\end{proof}\n\n\\section{The expansion of a ${\\mathrm{CM}}_t$ complex}\n\nThe following proposition gives us some information about the\nexpansion of a simplicial complex which are useful in the proof of\nthe next results.\n\n\\begin{prop}\\label{ex indepen}\nLet $\\Delta$ be a simplicial complex and let $\\alpha\\in\\mathbb N^n$.\n\\begin{enumerate}[\\upshape (i)]\n \\item For all $i\\leq\\dim(\\Delta)$, there exists an epimorphism $\\theta:\\tilde{H}_{i}(\\Delta^\\alpha;K)\\rightarrow\\tilde{H}_{i}(\\Delta;K)$.\n\nIn particular in this case\n$$\\tilde{H}_{i}(\\Delta^\\alpha;K)\/\\ker(\\theta)\\cong\\tilde{H}_{i}(\\Delta;K);$$\n \\item For $F\\in\\Delta^\\alpha$ such that $F=G^\\alpha$ for some $G\\in\\Delta$, we have\n$$\\mathrm{link}_{\\Delta^\\alpha}(F)=(\\mathrm{link}_\\Delta (G))^\\alpha;$$\n \\item For $F\\in\\Delta^\\alpha$ such that $F\\neq G^\\alpha$ for every $G\\in\\Delta$, we have\n$$\\mathrm{link}_{\\Delta^\\alpha}F=\\langle U^\\alpha\\backslash F\\rangle\\ast \\mathrm{link}_{\\Delta^\\alpha}U^\\alpha$$\nfor some $U\\in\\Delta$ with $F\\subseteq U^\\alpha$. Here $\\ast$ means the join of two simplicial complexes.\n\nIn the third case, $\\mathrm{link}_{\\Delta^\\alpha}F$ is a cone and so acyclic,\ni.e., $\\tilde{H}_i(\\mathrm{link}_{\\Delta^\\alpha}F;K)=0$ for all $i>0$.\n\n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\n(i) Consider the map $\\pi:[n]^\\alpha\\rightarrow [n]$ by $\\pi(x_{ij})=x_i$ for all $i,j$. Let the simplicial map $\\varphi:\\Delta^\\alpha\\rightarrow\\Delta$ be defined by $\\varphi(\\{x_{i_1j_1},\\ldots,x_{i_qj_q}\\})=\\{\\pi(x_{i_1j_1}),\\ldots,\\pi(x_{i_qj_q})\\}=\\{x_{i_1},\\ldots,x_{i_q}\\}$. Actually, $\\varphi$ is an extension of $\\pi$ to $\\Delta^\\alpha$ by linearity. Define $\\varphi_\\#:\\tilde{\\mathcal C}_q(\\Delta^\\alpha;K)\\rightarrow\\tilde{\\mathcal C}_q(\\Delta;K)$, for each $q$, by\n$$\\varphi_\\#([x_{i_0j_0},\\ldots,x_{i_qj_q}])=\\left\\{\n \\begin{array}{ll}\n 0 & \\mbox{if for some indices}\\ i_r=i_t \\\\\n \\left[\\varphi(\\{x_{i_0j_0}\\}),\\ldots,\\varphi(\\{x_{i_qj_q}\\})\\right] & \\mbox{otherwise}.\n \\end{array}\n\\right.\n$$\nIt is clear from the definitions of $\\tilde{\\mathcal C}_q(\\Delta^\\alpha;K)$ and $\\tilde{\\mathcal C}_q(\\Delta;K)$ that $\\varphi_\\#$ is well-defined. Also, define $\\varphi_\\alpha:\\tilde{H}_{i}(\\Delta^\\alpha;K)\\rightarrow\\tilde{H}_{i}(\\Delta;K)$ by\n$$\\varphi_\\alpha:z+B_i(\\Delta^\\alpha)\\rightarrow \\varphi_\\#(z)+B_i(\\Delta).$$\nIt is trivial that $\\varphi_\\alpha$ is onto.\n\n(ii) The inclusion $\\mathrm{link}_{\\Delta^\\alpha}(F)\\supseteq(\\mathrm{link}_\\Delta (G))^\\alpha$ is trivial. So we show the reverse inclusion. Let $\\sigma\\in\\mathrm{link}_{\\Delta^\\alpha}(G^\\alpha)$. Then $\\sigma\\cap G^\\alpha=\\emptyset$ and $\\sigma\\cup G^\\alpha\\in\\Delta^\\alpha$. We want to show $\\pi(\\sigma)\\in\\mathrm{link}_\\Delta (G)$. 
Indeed, if $\\pi(\\sigma)\\in\\mathrm{link}_\\Delta (G)$ then $\\pi(\\sigma)^\\alpha\\in(\\mathrm{link}_\\Delta (G))^\\alpha$, and since $\\sigma\\subseteq \\pi(\\sigma)^\\alpha$, we can conclude that $\\sigma\\in (\\mathrm{link}_\\Delta (G))^\\alpha$.\n\nClearly, $\\pi(\\sigma)\\cup G\\in\\Delta$. To show that $\\pi(\\sigma)\\cap G=\\emptyset$, suppose, on the contrary, that $x_i\\in \\pi(\\sigma)\\cap G$. Then $x_{ij}\\in \\sigma$ for some $j$. Especially, $x_{ij}\\in G^\\alpha$. Therefore $\\sigma\\cap G^\\alpha\\neq\\emptyset$, a contradiction.\n\n(iii) Let $\\tau\\in\\mathrm{link}_{\\Delta^\\alpha}F$. First suppose that $\\tau\\cap \\pi(F)^\\alpha=\\emptyset$. It follows from $\\tau\\cup F\\in \\Delta^\\alpha$ that $\\pi(\\tau)^\\alpha\\cup\\pi(F)^\\alpha\\in \\Delta^\\alpha$. Now by $\\tau\\subset\\pi(\\tau)^\\alpha$ it follows that $\\tau\\cup\\pi(F)^\\alpha\\in \\Delta^\\alpha$. Hence $\\tau\\in\\mathrm{link}_{\\Delta^\\alpha}(\\pi(F)^\\alpha)$. Now suppose that $\\tau\\cap \\pi(F)^\\alpha\\neq\\emptyset$. We write\n$\\tau=(\\tau\\cap \\pi(F)^\\alpha)\\cup (\\tau\\backslash \\pi(F)^\\alpha)$. It is clear that $\\tau\\cap \\pi(F)^\\alpha\\subset \\pi(F)^\\alpha\\backslash F$ and $\\tau\\backslash \\pi(F)^\\alpha\\in\\mathrm{link}_{\\Delta^\\alpha}\\pi(F)^\\alpha$. The reverse inclusion is trivial.\n\\end{proof}\n\n\\begin{rem}\\label{pure expan}\nLet $\\Delta=\\langle x_1x_2,x_2x_3\\rangle$ be a complex on $[3]$ and\n$\\alpha=(2,1,1)\\in\\mathbb N^3$. Then $\\Delta^\\alpha=\\langle\nx_{11}x_{12}x_{21},x_{21}x_{31}\\rangle$ is a complex on\n$\\{x_{11},x_{12},x_{21},x_{31}\\}$. Notice that $\\Delta$ is pure but\n$\\Delta^\\alpha$ is not. Therefore, the expansion of a pure simplicial\ncomplex is not necessarily pure.\n\\end{rem}\n\n\\begin{thm}\\label{main}\nLet $\\Delta$ be a simplicial complex on $[n]$ of dimension $d-1$ and\nlet $t\\geq 0$ be the least integer such that $\\Delta$ is ${\\mathrm{CM}}_t$. Suppose\nthat $\\alpha=(k_1,\\ldots,k_n)\\in\\mathbb N^n$ is such that $k_i>1$ for some\n$i$ and that $\\Delta^\\alpha$ is pure. Then $\\Delta^\\alpha$ is ${\\mathrm{CM}}_{t+e-k+1}$\nbut it is not ${\\mathrm{CM}}_{t+e-k}$, where $e=\\dim(\\Delta^\\alpha)+1$ and\n$k=\\min\\{k_i:k_i>1\\}$.\n\\end{thm}\n\\begin{proof}\nWe use induction on $e\\geq 2$. If $e=2$, then $\\dim(\\Delta^\\alpha)=1$ and $\\Delta$ must be of the form $\\Delta=\\langle x_1,\\ldots,x_n\\rangle$. In particular, $\\Delta^\\alpha$ is of the form\n$$\\Delta^\\alpha=\\langle \\{x_{i_11},x_{i_12}\\},\\{x_{i_21},x_{i_22}\\},\\ldots,\\{x_{i_r1},x_{i_r2}\\}\\rangle.$$\nIt is clear that $\\Delta^\\alpha$ is ${\\mathrm{CM}}_1$ but it is not Cohen-Macaulay.\n\nAssume that $e>2$. Let $\\{x_{ij}\\}\\in\\Delta^\\alpha$. We want to show that $\\mathrm{link}_{\\Delta^\\alpha}(x_{ij})$ is ${\\mathrm{CM}}_{t+e-k}$; by Lemma \\ref{CM-t eq}, this implies that $\\Delta^\\alpha$ is ${\\mathrm{CM}}_{t+e-k+1}$. Consider the following cases:\n\nCase 1: $k_i>1$. Then\n$$\\mathrm{link}_{\\Delta^\\alpha}(x_{ij})=\\langle\\{x_i\\}^\\alpha\\backslash x_{ij}\\rangle\\ast(\\mathrm{link}_{\\Delta}(x_i))^\\alpha.$$\n$(\\mathrm{link}_{\\Delta}(x_i))^\\alpha$ is of dimension $e-k_i-1$ and, by induction hypothesis, it is ${\\mathrm{CM}}_{t+e-k_i-k+1}$. On the other hand, $\\langle\\{x_i\\}^\\alpha\\backslash x_{ij}\\rangle$ is Cohen-Macaulay of dimension $k_i-2$. Therefore, it follows from Theorem 1.1(i) of \\cite{HaYaZa2} that $\\mathrm{link}_{\\Delta^\\alpha}(x_{ij})$ is ${\\mathrm{CM}}_{t+e-k}$.\n\nCase 2: $k_i=1$. 
Then\n$$\\mathrm{link}_{\\Delta^\\alpha}(x_{ij})=(\\mathrm{link}_{\\Delta}(x_i))^\\alpha$$\nwhich is of dimension $e-2$ and, by induction, it is ${\\mathrm{CM}}_{t+e-k}$.\n\nNow suppose that $e>2$ and $k_s=k$ for some $s\\in [n]$. Let $F$ be a facet of $\\Delta$ such that $x_s$ belongs to $F$.\n\nIf $\\dim(\\Delta)=0$, then $k_l=k$ for all $l\\in [n]$. In particular, $e=k$. It is clear that $\\Delta^\\alpha$ is not ${\\mathrm{CM}}_{t+e-k}$ (or Cohen-Macaulay). So suppose that $\\dim(\\Delta)>0$. Choose $x_i\\in F\\backslash x_s$. Then\n$$\\mathrm{link}_{\\Delta^\\alpha}(x_{ij})=\\langle \\{x_i\\}^\\alpha\\backslash x_{ij}\\rangle\\ast (\\mathrm{link}_\\Delta(x_i))^\\alpha.$$\nBy induction hypothesis, $(\\mathrm{link}_\\Delta(x_i))^\\alpha$ is not ${\\mathrm{CM}}_{t+e-k_i-k}$. It follows from Theorem 3.1(ii) of \\cite{HaYaZa2} that $\\mathrm{link}_{\\Delta^\\alpha}(x_{ij})$ is not ${\\mathrm{CM}}_{t+e-k-1}$. Therefore $\\Delta^\\alpha$ is not ${\\mathrm{CM}}_{t+e-k}$.\n\\end{proof}\n\n\\begin{cor}\nLet $\\Delta$ be a non-empty Cohen-Macaulay simplicial complex on\n$[n]$. Then for any $\\alpha\\in\\mathbb N^n$, with $\\alpha\\neq\\mathbf 1$,\n$\\Delta^\\alpha$ can never be Cohen-Macaulay.\n\\end{cor}\n\n\\section{The contraction functor}\n\nLet $\\Delta=\\langle F_1,\\ldots,F_r\\rangle$ be a simplicial complex on $[n]$. Consider the equivalence relation `$\\sim$' on the vertices of $\\Delta$ given by\n$$x_i\\sim x_j\\Leftrightarrow\\langle x_i\\rangle\\ast\\mathrm{link}_\\Delta(x_i)=\\langle x_j\\rangle\\ast\\mathrm{link}_\\Delta(x_j).$$\nIn fact $\\langle x_i\\rangle\\ast\\mathrm{link}_\\Delta(x_i)$ is the cone over\n$\\mathrm{link}_\\Delta(x_i)$, and the elements of\n$\\langle x_i\\rangle\\ast\\mathrm{link}_\\Delta(x_i)$ are those faces of $\\Delta$,\nwhich contain $x_i$. Hence $\\langle x_i\\rangle\\ast\\mathrm{link}_\\Delta(x_i)\n=\\langle x_j\\rangle\\ast\\mathrm{link}_\\Delta(x_j)$, means the cone with vertex $x_i$ is equal to the cone with vertex $x_j$. In other words,\n$x_i\\sim x_j$ is equivalent to saying that for a facet $F\\in\\Delta$, $F$ contains $x_i$ if and only if it contains $x_j$.\n\nLet $[\\bar{m}]=\\{\\bar{y}_1,\\ldots,\\bar{y}_m\\}$ be the set of\nequivalence classes under $\\sim$. Let\n$\\bar{y}_i=\\{x_{i1},\\ldots,x_{ia_i}\\}$. Set\n$\\alpha=(a_1,\\ldots,a_m)$. For $F_t\\in\\Delta$, define\n$G_t=\\{\\bar{y}_i:\\bar{y}_i\\subset F_t\\}$ and let $\\Gamma$ be a\nsimplicial complex on the vertex set $[m]$ with facets\n$G_1,\\ldots,G_r$. We call $\\Gamma$ the \\emph{contraction of $\\Delta$\nby $\\alpha$} and $\\alpha$ is called \\emph{the vector obtained from\ncontraction}.\n\nFor example, consider the simplicial complex $\\Delta=\\langle x_1x_2x_3,x_2x_3x_4,x_1x_4x_5,x_2x_3x_5\\rangle$\non the vertex set $[5]=\\{x_1,\\ldots,x_5\\}$. Then $\\bar{y}_1=\\{x_1\\}$, $\\bar{y}_2=\\{x_2,x_3\\}$, $\\bar{y}_3=\\{x_4\\}$,\n$\\bar{y}_4=\\{x_5\\}$ and $\\alpha=(1,2,1,1)$. Therefore, the contraction of $\\Delta$ by $\\alpha$ is\n$\\Gamma=\\langle \\bar{y}_1\\bar{y}_2,\\bar{y}_2\\bar{y}_3,\\bar{y}_1\\bar{y}_3\\bar{y}_4,\\bar{y}_2\\bar{y}_4\\rangle$ a\ncomplex on the vertex set $[\\bar{4}]=\\{\\bar{y}_1,\\ldots,\\bar{y}_4\\}$.\n\n\\begin{rem}\nNote that if $\\Delta$ is a pure simplicial complex then the contraction of $\\Delta$ is not necessarily pure (see the above example). In the special case where the vector $\\alpha=(k_1,\\dots,k_n)\\in\\mathbb N^n$ and $k_i=k_j$ for all $i,j$, it is easy to check that in this case $\\Delta$ is pure if and only if $\\Delta^\\alpha$ is pure. 
Another case is introduced in the following proposition.\n\\end{rem}\n\n\\begin{prop}\nLet $\\Delta$ be a simplicial complex on $[n]$ and assume that\n$\\alpha=(k_1,\\dots,k_n)\\in\\mathbb N^n$ satisfies the following condition:\n\n$(\\dag)$ for all facets $F,G\\in\\Delta$, if $x_i\\in F\\backslash G$ and $x_j\\in G\\backslash F$ then $k_i=k_j$.\n\nThen $\\Delta$ is pure if and only if $\\Delta^\\alpha$ is pure.\n\\end{prop}\n\\begin{proof}\nLet $\\Delta$ be a pure simplicial complex and let $F,G\\in\\Delta$ be two facets of $\\Delta$. Then $$|F^\\alpha|-|G^\\alpha|=\\sum_{x_i\\in F}k_i-\\sum_{x_i\\in G}k_i=\\sum_{x_i\\in F\\backslash G}k_i-\\sum_{x_i\\in G\\backslash F}k_i.$$\nNow the condition $(\\dag)$ implies that $|F^\\alpha|=|G^\\alpha|$, since $|F\\backslash G|=|G\\backslash F|$ and all the $k_i$ appearing in the last two sums are equal. This means that all facets of $\\Delta^\\alpha$ have the same cardinality.\n\nLet $\\Delta^\\alpha$ be pure. Suppose that $F,G$ are two facets in\n$\\Delta$. If $|F|>|G|$ then $|F\\backslash G|>|G\\backslash F|$, and since by $(\\dag)$ all the $k_i$ appearing in the two sums below are equal, $\\sum_{x_i\\in F\\backslash G}k_i>\\sum_{x_i\\in G\\backslash F}k_i$. It follows that $|F^\\alpha|=\\sum_{x_i\\in F}k_i>\\sum_{x_i\\in G}k_i=|G^\\alpha|$, a contradiction.\n\\end{proof}\n\nThere is a close relationship between a simplicial complex and its contraction. In fact, the expansion of the contraction of a simplicial complex is isomorphic to the original complex. The precise statement is the following.\n\n\\begin{lem}\nLet $\\Gamma$ be the contraction of $\\Delta$ by $\\alpha$. Then $\\Gamma^\\alpha\\cong \\Delta$.\n\\end{lem}\n\\begin{proof}\nSuppose that $\\Delta$ and $\\Gamma$ are on the vertex sets $[n]=\\{x_1,\\ldots,x_n\\}$ and $[\\bar{m}]=\\{\\bar{y}_1,\\ldots,\\bar{y}_m\\}$, respectively. Let $\\alpha=(a_1,\\ldots,a_m)$. For $\\bar{y}_i\\in\\Gamma$, suppose that $\\{\\bar{y}_i\\}^\\alpha=\\{\\bar{y}_{i1},\\ldots,\\bar{y}_{ia_i}\\}$. So $\\Gamma^\\alpha$ is a simplicial complex on the vertex set $[\\bar{m}]^\\alpha=\\{\\bar{y}_{ij}:i=1,\\ldots,m,\\ j=1,\\ldots,a_i\\}$. Now define $\\varphi:[\\bar{m}]^\\alpha\\rightarrow [n]$ by $\\varphi(\\bar{y}_{ij})=x_{ij}$. Extending $\\varphi$, we obtain the isomorphism $\\varphi:\\Gamma^\\alpha\\rightarrow \\Delta$.\n\\end{proof}\n\n\\begin{prop}\\label{CM indepen}\nLet $\\Delta$ be a simplicial complex and assume that $\\Delta^\\alpha$ is\nCohen-Macaulay for some $\\alpha\\in\\mathbb N^n$. Then $\\Delta$ is\nCohen-Macaulay.\n\\end{prop}\n\\begin{proof}\nBy Proposition \\ref{ex indepen}(i) and (ii), for all $i\\leq \\dim(\\mathrm{link}_\\Delta F)$ and all $F\\in\\Delta$ there exists an epimorphism $\\theta:\\tilde{H}_{i}(\\mathrm{link}_{\\Delta^\\alpha} F^\\alpha;K)\\rightarrow \\tilde{H}_{i}(\\mathrm{link}_\\Delta F;K)$ such that\n$$\\tilde{H}_{i}(\\mathrm{link}_{\\Delta^\\alpha} F^\\alpha;K)\/\\ker(\\theta)\\cong\\tilde{H}_{i}(\\mathrm{link}_\\Delta F;K).$$\nNow suppose that $i<\\dim(\\mathrm{link}_\\Delta F)$. Then $i<\\dim(\\mathrm{link}_{\\Delta^\\alpha} F^\\alpha)$ and by Cohen-Macaulayness of $\\Delta^\\alpha$, $\\tilde{H}_{i}(\\mathrm{link}_{\\Delta^\\alpha} F^\\alpha;K)=0$. Therefore $\\tilde{H}_{i}(\\mathrm{link}_\\Delta F;K)=0$. This means that $\\Delta$ is Cohen-Macaulay.\n\\end{proof}\n\nCombining the previous lemma with Proposition \\ref{CM indepen}, it follows that:\n\n\\begin{cor}\\label{CM-ness}\nThe contraction of a Cohen-Macaulay simplicial complex $\\Delta$ is Cohen-Macaulay.\n\\end{cor}\n\nThis can be generalized in the following theorem.\n\n\\begin{thm}\\label{contract,CM-t}\nLet $\\Gamma$ be the contraction of a ${\\mathrm{CM}}_t$ simplicial complex\n$\\Delta$, for some $t\\geq 0$, by $\\alpha=(k_1,\\ldots,k_n)$. 
If\n$k_i\\geq t$ for all $i$ and $\\Gamma$ is pure, then $\\Gamma$ is\nBuchsbaum.\n\\end{thm}\n\\begin{proof}\nIf $t=0$, then we saw in Corollary \\ref{CM-ness} that $\\Gamma$ is Cohen-Macaulay and so it is ${\\mathrm{CM}}_t$. Hence assume that $t>0$. Let $\\Delta=\\langle F_1,\\ldots,F_r\\rangle$. We have to show that $\\tilde{H}_i(\\mathrm{link}_\\Gamma G;K)=0$, for all faces $G\\in\\Gamma$ with $|G|\\geq 1$ and all $i<\\dim(\\mathrm{link}_\\Gamma G)$.\n\nLet $G\\in\\Gamma$ with $|G|\\geq 1$. Then $|G^\\alpha|\\geq t$. It follows from Lemma \\ref{CM-t eq} and ${\\mathrm{CM}}_t$-ness of $\\Delta$ that\n$$\\tilde{H}_{i}(\\mathrm{link}_\\Gamma G;K)\\cong\\tilde{H}_{i}(\\mathrm{link}_\\Delta G^\\alpha;K)=0$$\nfor $i<\\dim(\\mathrm{link}_\\Delta G^\\alpha)$ and, particularly, for $i<\\dim(\\mathrm{link}_\\Gamma G)$. Therefore $\\Gamma$ is Buchsbaum.\n\\end{proof}\n\n\\begin{cor}\nLet $\\Gamma$ be the contraction of a Buchsbaum simplicial complex\n$\\Delta$. If $\\Gamma$ is pure, then $\\Gamma$ is also Buchsbaum.\n\\end{cor}\n\nLet $\\mathcal G$ be a simple graph on the vertex set $[n]$ and let $\\Delta_\\mathcal G$ be its independence complex on $[n]$, i.e., a simplicial complex whose faces are the independent vertex sets of $G$. Let $\\Gamma$ be the contraction of $\\Delta_\\mathcal G$. In the following we show that $\\Gamma$ is the independence complex of a simple graph $\\mathcal H$. We call $\\mathcal H$ the \\emph{contraction} of $\\mathcal G$.\n\n\\begin{lem}\nLet $\\mathcal G$ be a simple graph. The contraction of $\\Delta_\\mathcal G$ is the independence complex of a simple graph $\\mathcal H$.\n\\end{lem}\n\\begin{proof}\nIt suffices to show that $I_\\Gamma$ is a squarefree monomial ideal generated in degree 2. Let $\\Gamma$ be the contraction of $\\Delta_\\mathcal G$ and let $\\alpha=(k_1,\\ldots,k_n)$ be the vector obtained from the contraction. Let $[n]=\\{x_1,\\ldots,x_n\\}$ be the vertex set of $\\Gamma$. Suppose that $u=x_{i_1}\\ldots x_{i_t}\\in G(I_\\Gamma)$. Then $u^\\alpha\\subset G(I_\\Gamma)^\\alpha=G(I_{\\Delta_\\mathcal G})=G(I(\\mathcal G)$. 
Since $u^\\alpha=\\{x_{i_1j_1}\\ldots x_{i_tj_t}:1\\leq j_l\\leq k_{i_l},1\\leq l\\leq t\\}$ we have $t=2$ and the proof is completed.\n\\end{proof}\n\n\\begin{exam}\nLet $\\mathcal G_1$ and $\\mathcal G_2$ be, respectively, from left to right the following graphs:\n\n$$\\begin{array}{cccc}\n\\begin{tikzpicture}\n\\coordinate (a) at (0,0);\\fill (0,0) circle (1pt);\n\\coordinate (b) at (0,1);\\fill (0,1) circle (1pt);\n\\coordinate (c) at (1,0);\\fill (1,0) circle (1pt);\n\\coordinate (d) at (0,-1);\\fill (0,-1) circle (1pt);\n\\coordinate (e) at (-1,0);\\fill (-1,0) circle (1pt);\n\\draw[black] (a) -- (c) -- (d) -- (e) -- (b);\n\\end{tikzpicture}\n&&&\n\\begin{tikzpicture}\n\\coordinate (a) at (0,0);\\fill (0,0) circle (1pt);\n\\coordinate (b) at (0,1);\\fill (0,1) circle (1pt);\n\\coordinate (c) at (1,0);\\fill (1,0) circle (1pt);\n\\coordinate (d) at (0,-1);\\fill (0,-1) circle (1pt);\n\\coordinate (e) at (-1,0);\\fill (-1,0) circle (1pt);\n\\draw[black] (e) -- (a) -- (c) -- (d) -- (e) -- (b) -- (c);\n\\end{tikzpicture}\n\\end{array}$$\n\nThe contraction of $\\mathcal G_1$ and $\\mathcal G_2$ are\n\n$$\\begin{array}{cccc}\n\\begin{tikzpicture}\n\\coordinate (a) at (0,0);\\fill (0,0) circle (1pt);\n\\coordinate (b) at (0,1);\\fill (0,1) circle (1pt);\n\\coordinate (c) at (1,0);\\fill (1,0) circle (1pt);\n\\coordinate (d) at (0,-1);\\fill (0,-1) circle (1pt);\n\\coordinate (e) at (-1,0);\\fill (-1,0) circle (1pt);\n\\draw[black] (a) -- (c) -- (d) -- (e) -- (b);\n\\end{tikzpicture}\n&&&\n\\begin{tikzpicture}\n\\coordinate (a) at (0,0);\\fill (0,0) circle (1pt);\n\\coordinate (b) at (1,0);\\fill (1,0) circle (1pt);\n\\draw[black] (a) -- (b);\n\\end{tikzpicture}\n\\end{array}$$\nThe contraction of $\\mathcal G_1$ is equal to itself but $\\mathcal G_2$ is contracted to an edge and the vector obtained from contraction is $\\alpha=(2,3)$.\n\\end{exam}\n\nWe recall that a simple graph is ${\\mathrm{CM}}_t$ for some $t\\geq 0$, if the associated independence complex is ${\\mathrm{CM}}_t$.\n\n\\begin{rem}\nThe simple graph $\\mathcal G'$ obtained from $\\mathcal G$ in Lemma 4.3 and Theorem\n4.4 of \\cite{HaYaZa2} is the expansion of $\\mathcal G$. Actually, suppose\nthat $\\mathcal G$ is a bipartite graph on the vertex set $V(\\mathcal G)=V\\cup W$\nwhere $V=\\{x_1,\\ldots,x_d\\}$ and $W=\\{x_{d+1},\\ldots,x_{2d}\\}$. Then\nfor $\\alpha=(n_1,\\ldots,n_d,n_1,\\ldots,n_d)$ we have\n$\\mathcal G'=\\mathcal G^\\alpha$. It follows from Theorem \\ref{main} that if $\\mathcal G$ is\n${\\mathrm{CM}}_t$ for some $t\\geq 0$ then $\\mathcal G'$ is ${\\mathrm{CM}}_{t+n-n_{i_0}+1}$ where\n$n=\\sum^d_{i=1}n_i$ and $n_{i_0}=\\min\\{n_i>1:i=1,\\ldots,d\\}$. This\nimplies that the first part of Theorem 4.4 of \\cite{HaYaZa2} is an\nobvious consequence of Theorem \\ref{main} for $t=0$.\n\\end{rem}\n\n\\subsection*{Acknowledgment}\n\nThe author would like to thank Hassan Haghighi from K. N. Toosi University of Technology and Rahim Zaare-Nahandi from University of Tehran for careful\nreading an earlier version of this article and for their helpful comments.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nRandom unitary operators have often been used to approximate chaotic dynamics. Notably, in the context of black holes Hayden and Preskill used a model of random dynamics to show such systems can be efficient information scramblers; initially localized information will quickly thermalize and become thoroughly mixed across the entire system \\cite{Hayden07}. 
It was later conjectured \\cite{Sekino08} and then proven \\cite{Shenker:2013pqa,Maldacena:2015waa} that black holes are the fastest scramblers in nature, suggesting their dynamics must have much in common with random unitary evolution. Such ``scrambling'' is a byproduct of strongly-coupled chaotic dynamics \\cite{Lashkari13,Almheiri13b,Hosur:2015ylk}, and so the work of \\cite{Hayden07} suggests there should be a strong quantitative connection between such chaos and pseudorandomness.\\footnote{\n Scrambling has a close connection to the decoupling theorem, see e.g.~\\cite{Berta, Brown15}. Also, see \\cite{Chamon14,Hayden:2016cfa,Nahum:2016muy} for studies connecting randomness to entanglement.\n}\n\n\n\nThe connection between pseudorandomness and chaotic dynamics can also be understood at the level of operators. For example, consider $W$ a local operator of low weight (e.g. a Pauli operator acting on a single spin). With a chaotic Hamiltonian $H$, the operator $W(t)=e^{iHt} W e^{-iHt}$ will be a complicated nonlocal operator that has an expansion as a sum of products of many local operators with an exponential number of terms each with a pseudorandom coefficient \\cite{Roberts:2014isa}. We can gain intuition for this by considering the Baker-Campbell-Hausdorff expansion of $W(t)$\n\\begin{equation}\n W(t) = \\sum_{j=0}^\\infty \\frac{(it)^j}{j!}\\underbrace{[H, \\dots[H}_j,W\\underbrace{] \\dots]}_j. \\label{eq:BCH}\n\\end{equation}\nIf $H$ is $q$-local and sufficiently ``generic'' to contain all possible $q$-local interactions, then the $j$th term will consist of a sum of roughly $\\sim (n\/q)^{qj}$ terms each of weight ranging from $1$ to $\\sim j(q-1)$, where we assume the system consists of $n$ spins so that the Hilbert space is of dimension $d=2^n$: \n\\begin{itemize}\n \\item At roughly $j \\sim n \/ (q-1)$, there will be many terms in the sum of weight $n$. These terms are delocalized over the entire system. For a system without spatial locality, the relationship between time $t$ and when the $j$th term becomes $O(1)$ is roughly $t \\sim \\log j$. The timescale $t \\sim O(\\log n)$ for the operator to cover the entire system is indicative of fast-scrambling behavior.\n \\item At around $j \\sim 2n \/ q \\log(n\/q)$, the total number of terms will reach $2^{2n}$, equal to the total number of orthogonal linear operators acting on the Hilbert space.\\footnote{The number of terms is actually not a well-defined quantity, since it can change under local rotations of spin, and one should perhaps instead consider the minimum number of terms over all local changes of basis. For chaotic systems, the distinction should not be important, since there will not exist a local change of basis where the number of terms drastically simplifies. We thank Douglas Stanford who thanks Juan Maldacena for raising this point.} Even after the operator covers the entire system, it continues to spread over the unitary group (though possibly only until a time roughly $O(\\log n) +$ a constant). \n\\end{itemize}\nFurthermore, the coefficient of any given term will be incredibly complicated, depending on the details of the path through the interaction graph and the time. \nOver time, $W(t)$ should cover the entire unitary group (possibly quotiented by a group set by the symmetries of the Hamiltonian). 
At sufficiently large $t$, one might even suspect that for many purposes $W(t)$ can be approximated by a random operator $\\tilde{W}\\equiv U^\\dagger W U$, with $U$ sampled randomly from the unitary group.\\footnote{Of course, for a less generic and sparser Hamiltonian e.g. a free system, the expansion of $W(t)$ will organize itself in a way such that the commutator in Eq.~\\eqref{eq:BCH} does not produce an exponential number of terms. Said another way, the terms in the expansion of $W(t)$ will only cover a small subset of the unitary group, and the assumption of uniform randomness will not hold.} If this is true, then we would say that $W(t)$ behaves pseudorandomly.\n\n\n\n\n\n\\subsubsection*{Chaos}\n\nThis pattern of growth of $W(t)$ can be measured by a second local operator $V$. For example, the group commutator of $W(t)$ with $V$, given by $W(t)^\\dagger\\, V^\\dagger \\, W(t)\\, V$, measures the effect of the small perturbation $V$ on a later measurement of $W$. In other words, it is a measure of the butterfly effect and the strength of chaos:\n\\begin{itemize}\n \\item If $W(t)$ has low weight and few terms, then $W(t)$ and $V$ approximately commute, $[W(t),V]\\approx 0$, and the operator $W(t)^\\dagger\\, V^\\dagger \\, W(t)\\, V$ is close to the identity. \n \\item If instead the dynamics are strongly chaotic, $W(t)$ will grow to eventually have a large commutator with all other local operators in the system (in fact, just about all other operators), and so $W(t)^\\dagger\\, V^\\dagger \\, W(t)\\, V$ will be nearly random and have a small expectation in most states.\n\\end{itemize}\nThus, the decay of out-of-time-order (OTO) four-point functions of the form\n\\begin{equation}\n \\langle W(t)^\\dagger\\, V^\\dagger \\, W(t)\\, V \\rangle = \\langle U(t)^\\dagger W^\\dagger U(t) ~ V^\\dagger ~ U(t)^\\dagger W U(t) ~ V \\rangle, \\label{oto-4pt-def}\n\\end{equation} \ncan act as a simple diagnostic of quantum chaos, where \n$U(t)=e^{-iHt}$ is the unitary time evolution operator, and the correlator is usually evaluated on the thermal state $\\langle \\cdot \\rangle \\equiv \\text{tr} \\, \\{ e^{-\\beta H} \\, \\cdot \\, \\} \/ \\text{tr} \\, e^{-\\beta H}$ \\cite{Larkin:1969abc,Almheiri13b,Shenker:2013pqa,Kitaev:2014t1,Maldacena:2015waa}.\\footnote{Usually $W, V$ are taken to be Hermitian, but here we will more generally allow them to be unitary.} For further discussion, please see a selection (but by all means not a complete set) of recent work on out-of-time-order four-point functions and chaos \\cite{Shenker:2013pqa,Shenker:2013yza,Roberts:2014isa,Roberts:2014ifa,Kitaev:2014t1,Shenker:2014cwa,Maldacena:2015waa,Hosur:2015ylk,Stanford:2015owe,Fitzpatrick:2016thx,Gu:2016hoy,Caputa:2016tgt,Swingle:2016var,Perlmutter:2016pkf,Blake:2016wvh,Roberts:2016wdl,Blake:2016sud,Swingle:2016jdj,Huang:2016knw,fan2016out,Halpern:2016zcm}.\n\n\n\nFor sufficiently chaotic systems and sufficiently large times, the correlators Eq.~\\eqref{oto-4pt-def} will reach a floor value equivalent to the substitution $W(t) \\to U^\\dagger W U$, with $U$ chosen randomly. \nFurthermore, it can be shown \\cite{Hosur:2015ylk} that the decay of correlators Eq.~\\eqref{oto-4pt-def} implies the sort of information-theoretic scrambling studied by Hayden and Preskill in \\cite{Hayden07}. This explains why the random dynamics model of \\cite{Hayden07} was such a good approximation for strongly-chaotic systems, such as black holes. 
However, are out-of-time-order four-point functions Eq.~\\eqref{oto-4pt-def} actually a sufficient diagnostic of chaos?\n\n\n\nIn \\cite{Hayden07} the authors did not actually require the dynamics to be a uniformly random unitary operator sampled from the Haar measure on the unitary group. Instead, it would have been sufficient to sample from a simpler ensemble of operators that could reproduce only a few moments of the larger Haar ensemble.\\footnote{Inspection of Eq.~\\eqref{oto-4pt-def} suggests that only two moments are required since there are only two copies of $U$ and two copies of $U^\\dagger$ in the correlator. We will explain this more carefully in the rest of the paper.} Of course, there may be other finer-grained information theoretic properties of a system that are dependent on higher moments and would require a larger ensemble to replicate the statistics of the Haar random dynamics. If random dynamics is a valid approximation for computing some, but not all, of these finer-grained quantities then they can represent a measure of the degree of pseudorandomness of the underlying dynamics. In this paper, we will make some progress in developing some of these finer-grained quantities and therefore connect measures of chaos to measures of randomness.\n\n\\subsubsection*{Unitary design}\n\nThe extent to which an ensemble of operators behaves like the uniform distribution can be quantified by the notion of unitary $k$-designs \\cite{DiVincenzo02, Emerson05, Ambainis07, Gross07, Dankert09}.\\footnote{N.B. in the literature, these are often referred to as unitary $t$-designs. Here, we will reserve $t$ for time. For recent work on unitary designs, see~\\cite{Emerson03, Emerson05, Brown10, Brown15, Hayden07, Harrow09, Knill08, Brandao12, Kueng15, Webb15, Low_thesis, Scott08, Zhu15, Collins16,Nakata:2016blv}.} A unitary $k$-design is a subset of the unitary group that replicates the statistics of at least $k$ moments of the distribution. Consider a finite-dimensional Hilbert space $\\mathcal{H}^{\\otimes k}=(\\mathbb{C}^{d})^{\\otimes k}$ consisting of $k$ copies of $\\mathcal{H}=\\mathbb{C}^d$. Given an ensemble of unitary operators $\\mathcal{E}=\\{p_{j},U_{j}\\}$ acting on $\\mathcal{H}$ with probability distribution $p_{j}$, the ensemble $\\mathcal{E}$ is a unitary $k$-design if and only if\n \\begin{align}\n \\sum_{j}p_{j} (\\underbrace{U_{j}\\otimes \\cdots \\otimes U_{j}}_{k}) \\rho (\\underbrace{U_{j}^{\\dagger}\\otimes \\cdots \\otimes U_{j}^{\\dagger}}_{k}) = \\int_{\\text{Haar}}dU(\\underbrace{U\\otimes \\cdots \\otimes U}_{k}) \\rho (\\underbrace{U^{\\dagger}\\otimes \\cdots \\otimes U^{\\dagger}}_{k}), \n \\end{align}\nfor all quantum states $\\rho$ in $\\mathcal{H}^{\\otimes k}$. Therefore, whether or not a given ensemble $\\mathcal{E}$ forms at most a $k$-design is a fine-grained notion of the randomness of the ensemble.\\footnote{With this definition, a $k+1$-design is automatically a $k$-design.}\n\n\n\nInspection of Eq.~\\eqref{oto-4pt-def} suggests that a $2$-design is sufficient for OTO four-point functions to decay. This was also the requirement for a system to appear scrambled \\cite{Page93,Hayden07,Sekino08}. However, the four-point functions Eq.~\\eqref{oto-4pt-def} are not the only observables related to chaos and the butterfly effect. 
In an attempt to understand the geometry and interior of a typical microstate of a holographic black hole, Shenker and Stanford constructed a series of black hole geometries by making multiple perturbations of an essentially maximally entangled state \\cite{Shenker:2013yza}. While they were unfortunately unsuccessful at constructing a generic microstate, studying correlation functions in these geometries leads one to a family of $2k$-point out-of-time-order correlation functions of the form\n \\begin{equation}\n \\T{\\mc{W}^\\dagger \\, V(0)^\\dagger \\, \\mc{W} \\, V(0) } = \\T{W_1(t_1)^\\dagger \\cdots W_{k-1}(t_{k-1})^\\dagger ~ V(0)^\\dagger ~ W_{k-1}(t_{k-1}) \\cdots W_1(t_1) ~ V(0) }, \\label{multiple-shocks-correlator}\n \\end{equation}\nwhere $V(0)$ and the $W_j(0)$ are local operators, and $\\mc{W} \\equiv W_{k-1}(t_{k-1}) \\cdots W_1(t_1)$ is a composite operator that lets us understand how the correlator is organized. The existence of these correlators raises the question: do these contain any additional information about the system or are they redundant with the four-point functions? The fact that the correlators Eq.~\\eqref{multiple-shocks-correlator} involve many more copies of unitary time evolution suggests a relationship between higher-point functions and unitary design. In fact, we will show that generalizations of the higher-point OTO correlators Eq.~\\eqref{multiple-shocks-correlator} are probes of unitary design. \n\n\n\\subsubsection*{Complexity}\n\n\n\nProbes of chaos and unitary design can only indirectly address a sharp question of recent interest in holography: the existence and growth of the black hole interior via the time evolution of the state. To be precise, one cannot make use of unitary designs since evolution by a time-independent Hamiltonian does not define any ensemble. However, even in this setting we still expect that higher-point OTO correlators should be able to see fine-grained properties of time-evolved states that are not captured by four-point OTO correlators.\n\nOne fine-grained quantity is the quantum circuit complexity of a quantum state $|\\psi(t)\\rangle = U(t)|\\psi(0)\\rangle$. Consider a simple initial state, such as the product state $|\\psi(0)\\rangle= |0\\rangle^{\\otimes n}$, undergoing chaotic time evolution. After a short thermalization time of $O(1)$, the system will evolve to an equilibrium in which local quantities will reach their thermodynamic values. Next, after the scrambling time of $O(\\log n)$, the initial state $|\\psi(0)\\rangle$ will be forgotten: the information will be distributed in such a way that measurements of even a large number of collections of local quantities of $|\\psi(t)\\rangle$ will not reveal the initial state. However, even after the scrambling time, the quantum circuit complexity of the time-evolved state $|\\psi(t)\\rangle$, as quantified by the number of elementary quantum gates necessary to reach it from the initial product state, will continue to evolve. In fact, it is expected to keep growing linearly in time until it saturates at a time exponential in the system size $e^{O(n)}$ \\cite{Knill95,Susskind:2015toa}. \n\n\n\n\n\n\\sbreak\n\nWe hope this presents an intuitive picture that chaos, pseudorandomness, and quantum circuit complexity should be related. \nTo that end, having first established a connection between higher-point out-of-time-order correlators and pseudorandomness, we will next connect the randomness of an ensemble to computational complexity. 
Finally, we will use correlation functions to probe pseudorandomness by comparing different random averages to expectations from time evolution.\n\n\n\nBelow, we will summarize our main results, deferring the technical statements to the body and appendices.\n\n\\subsection*{Main results}\n\nWe will focus on a particular form of $2k$-point correlation functions evaluated on the maximally mixed state $\\rho=\\frac{1}{d}I$\n \\begin{equation}\n \\langle A_{1} \\, U^{\\dagger}B_1U \\cdots A_{k} \\, U^{\\dagger}B_kU \\rangle := \\frac{1}{2^n}\\text{tr} \\, \\{ A_{1}\\, U^{\\dagger}B_1U \\cdots A_{k}\\, U^{\\dagger}B_kU \\}, \\label{OCO-correlator}\n \\end{equation}\nwhere any of the $A_{j}, B_{j}$ may be products of Pauli operators that act on single spins.\\footnote{To be clear, this means that the operators we are correlating are not necessarily simple or local.} Note that each of the $B_j$ is conjugated by the same unitary $U$ (which is similar to picking all the time arguments in Eq.~\\eqref{multiple-shocks-correlator} to be either $0$ or $t$). Furthermore, $U$ will not necessarily represent Hamiltonian time evolution, and instead we will let $U$ be sampled from some ensemble.\\footnote{As a result, these correlators are not really out-of-\\emph{time}-order, since there may not be a notion of time. Instead, they are probably more accurately called \\emph{out-of-complexity-order} (OCO) since we might generalize the notion of time ordering to complexity ordering, where we put unitaries of smaller complexity to the right of unitaries of larger complexity. In order to (hopefully) avoid confusion, we will continue to call the $2k$-point functions in Eq.~\\eqref{OCO-correlator} out-of-time-order, despite there not necessarily being a notion of time.} From this point forward, we will use the notation\n\\begin{equation}\n\\tilde{B} \\equiv U^{\\dagger}BU,\n\\end{equation}\nto simplify expressions involving unitary conjugation. Therefore, we can represent the ensemble average of OTO $2k$-point correlation functions as\n\\begin{align}\n\\langle A_{1}\\tilde{B_{1}}\\cdots A_{k}\\tilde{B_{k}} \\rangle_{\\mathcal{E}} := \\int_{\\mathcal{E}} dU \\langle A_{1}\\tilde{B_{1}}\\cdots A_{k}\\tilde{B_{k}} \\rangle,\n\\end{align}\nwhere the integral is with respect to the probability distribution in an ensemble of unitary operators $\\mathcal{E}=\\{p_{j},U_{j}\\}$. Finally, the $k$-fold channel over the ensemble $\\mathcal{E}$ is \n\\begin{align}\n\\Phi_{\\mathcal{E}}^{(k)}(\\cdot) = \\int_{\\mathcal{E}} dU\\, (U\\otimes \\cdots \\otimes U) (\\cdot )(U^{\\dagger}\\otimes \\cdots \\otimes U^{\\dagger}),\n\\end{align}\nwhich is a superoperator. \n\n\\subsubsection*{Chaos and $k$-designs}\n\nFirst, we will prove a theorem stating that a particular set of $2k$-point OTO correlators, averaged over an ensemble $\\mathcal{E}$, is in a one-to-one correspondence with the $k$-fold channel~$\\Phi_{\\mathcal{E}}^{(k)}$ \n\\begin{align}\n\\text{$2k$-point OTO correlators}\\ \\leftrightarrow \\ \\text{$k$-fold channel $\\Phi_{\\mathcal{E}}^{(k)}$},\n\\end{align}\nand we provide a simple formula to convert from one to the other. Such an explicit relation between OTO correlators and the $k$-fold channel may have practical and experimental applications such as statistical testing (e.g. a quantum analog of the $\\chi^2$-test) and randomized benchmarking~\\cite{Knill08}. 
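\n\nAs a minimal illustration, at $k=1$ the correspondence can be seen directly from the Pauli basis expansion: any operator $B$ on $n$ qubits satisfies $B=\\sum_{P\\in\\mathcal{P}_{n}}\\langle P^{\\dagger}B \\rangle P$ (with $\\mathcal{P}_{n}$ the Pauli group defined in \\S\\ref{sec:review}), and applying this to $\\Phi_{\\mathcal{E}}^{(1)}(B)$ gives\n\\begin{align}\n\\Phi_{\\mathcal{E}}^{(1)}(B) = \\sum_{A\\in\\mathcal{P}_{n}} \\langle A^{\\dagger}\\tilde{B} \\rangle_{\\mathcal{E}}\\, A,\n\\end{align}\nso the $1$-fold channel is completely determined by the set of averaged $2$-point correlators $\\langle A^{\\dagger}\\tilde{B} \\rangle_{\\mathcal{E}}$ with Pauli $A$.\n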
Next, we prove that generic ``smallness'' of $2k$-point OTO correlators implies that the ensemble $\\mathcal{E}$ is close to a $k$-design. We will make this statement precise by relating OTO correlators to a useful quantity known as the frame potential \n\\begin{align}\nF_{\\mathcal{E}}^{(k)}=\\frac{1}{|\\mathcal{E}|^{2}}\\sum_{U,V\\in\\mathcal{E}}\\big|\\text{tr}\\, \\{ U^{\\dagger}V \\} \\big|^{2k}.\n\\end{align}\nThis quantity, first introduced in \\cite{Scott08}, measures the ($2$-norm) distance between $\\Phi_{\\mathcal{E}}^{(k)}(\\cdot)$ and $\\Phi_{\\text{Haar}}^{(k)}(\\cdot)$, and has been shown to be minimized if and only if the ensemble $\\mathcal{E}$ is a $k$-design. We will derive the following formula:\n\\begin{align}\n\\text{Average of $|\\text{$2k$-point OTO correlator}|^{2}$}\\ \\propto \\ \\text{$k$th frame potential $F_{\\mathcal{E}}^{(k)}$},\n\\end{align}\nwhich shows that $2k$-point OTO correlators $\\langle A_{1}\\tilde{B_{1}}\\cdots A_{k}\\tilde{B_{k}} \\rangle_{\\mathcal{E}}$ are measures of whether an ensemble $\\mathcal{E}$ is a unitary $k$-design.\nThus, the decay of OTO correlators can be used to quantify an increase in pseudorandomness. \n\n\\subsubsection*{Chaos, randomness, and complexity}\n\n\n\nWe prove a lower bound on the quantum circuit complexity needed to generate an ensemble of unitary operators $\\mathcal{E}$\n\\begin{align}\n\\text{Complexity of $\\mathcal{E}$}\\ \\geq \\ \\frac{2kn \\log(2) - \\log F_{\\mathcal{E}}^{(k)} }{\\log( \\text{choices})}.\\label{intro-complexity-lower-bound}\n\\end{align}\nThis bound is actually given by a rather simple counting argument. The denominator should be thought of as (the log of) the number of choices made at each step in generating the circuit. For instance, if we have a set $\\mc{G}$ of cardinality $g$ of $q$-qubit quantum gates available and at each step randomly select $q$ qubits out of $n$ total and select one of the gates in $\\mc{G}$ to apply, then we would make $g\\binom{n}{q}$ choices at each step. Recalling our result relating OTO correlators and the frame potential, this result implies that generic smallness of OTO correlators leads to higher quantum circuit complexity. This is a direct and quantitative link between chaos and complexity. \n\nHowever, we caution the reader that in many cases Eq.~\\eqref{intro-complexity-lower-bound} may not be a very tight lower bound. We will provide some discussion of this point as well as a few examples; however, further work is most likely required to better understand the utility of this bound.\n\n\\subsection*{Haar vs. simpler ensemble averages}\nFinally, we present calculations of the Haar average of some higher-point OTO correlators and compare them to averages in simpler ensembles. These results suggest that the floor value of OTO correlators of local operators might be good diagnostics of pseudorandomness.\n\nFor $4$-point OTO correlators, we find\n\\begin{equation}\n\\begin{split}\n&\\langle A\\tilde{B}C\\tilde{D} \\rangle_{\\text{Haar}} = \\langle AC \\rangle \\langle B \\rangle\\langle D \\rangle + \\langle A \\rangle\\langle C \\rangle \\langle BD \\rangle - \\langle A\\rangle \\langle C \\rangle \\langle B \\rangle\\langle D \\rangle - \\frac{1}{d^2-1}\\langle\\!\\langle AC\\rangle\\!\\rangle \\langle\\!\\langle BD\\rangle\\!\\rangle,\n\\end{split} \\label{intro-4p-haar-average}\n\\end{equation}\nwhere $\\langle\\!\\langle AC\\rangle\\!\\rangle$ represents a connected correlator and $d=2^{n}$ is the total number of states. 
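As a quick illustration of Eq.~\\eqref{intro-4p-haar-average}, consider the choice $A=V^{\\dagger}$, $B=W$, $C=V$, $D=W^{\\dagger}$ with $V,W$ non-identity Pauli operators, so that $\\langle V \\rangle = \\langle W \\rangle = 0$ while $\\langle V^{\\dagger}V \\rangle = \\langle WW^{\\dagger} \\rangle = 1$. Then only the last term survives, giving the floor value\n\\begin{align}\n\\langle V^{\\dagger}\\tilde{W}V\\tilde{W}^{\\dagger} \\rangle_{\\text{Haar}} = -\\frac{1}{d^2-1}.\n\\end{align}\n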
In contrast, if the $U$ are averaged over the Pauli operators (which form a $1$-design but not a $2$-design), we find\n\\begin{align}\n\\langle A\\tilde{B}C\\tilde{D} \\rangle_{\\text{Pauli}}&= \n\\langle AC \\rangle \\langle B D \\rangle. \\label{intro-4p-pauli-average}\n\\end{align}\nWe will present intuitive explanations of this difference from the viewpoint of local thermal dissipation vs. global thermalization or scrambling. \n\nFor $8$-point OTO correlators with Pauli operators $A,B,C, D$, we compute averages over the unitary group and the Clifford group (which form a $3$-design on qubits but not a $4$-design)\n\\begin{equation}\n\\begin{split}\n&\\langle A\\tilde{B}C\\tilde{D}A^{\\dagger} \\tilde{D}^{\\dagger} C^{\\dagger}\\tilde{B}^{\\dagger}\\rangle_{\\text{Haar}} \\sim \\frac{1}{d^4},\\\\\n&\\langle A\\tilde{B}C\\tilde{D}A^{\\dagger} \\tilde{D}^{\\dagger} C^{\\dagger}\\tilde{B}^{\\dagger}\\rangle_{\\text{Clifford}} \\sim \\frac{1}{d^2}.\n\\end{split} \\label{intro-8p-averages}\n\\end{equation}\nThis suggests that forming a higher $k$-design leads to a lower value of the correlator.\n\n\nThe results Eq.~\\eqref{intro-4p-haar-average}-\\eqref{intro-8p-averages} are exact for any choice of operators. (The extended results for the correlators in Eq.~\\eqref{intro-8p-averages} are presented in Appendix~\\ref{sec:8-pt-functions}.) However, for a particular ordering of OTO $4m$-point functions where we average over choices of operators, we will also show that the Haar average over the unitary group scales as\n\\begin{equation}\n\\textrm{OTO}^{(4m)} \\sim \\frac{1}{d^{2m}}.\\label{intro-4m-averages}\n\\end{equation}\nThis result hints that these correlators continue to be probes of increasing pseudorandomness. \n\n\n\\subsection*{Organization of the paper}\nFor the convenience of the reader, we include an (almost) self-contained introduction to Haar random unitary operators and unitary design in \\S\\ref{sec:review}. In \\S\\ref{sec:OTO_channel}, we establish a formal connection between chaos and unitary design by proving the theorems mentioned above. In \\S\\ref{sec:complexity-bound}, we connect complexity to unitary design by proving the complexity lower bound \\eqref{intro-complexity-lower-bound}.\nIn \\S\\ref{sec:haar-averages}, we include the explicit calculations of $2$-point and $4$-point functions averaged over different ensembles and discuss how these averages relate to expectations from time evolution with chaotic Hamiltonians. We also discuss results and expectations for higher-point functions.\nWe conclude in \\S\\ref{sec:discussion} with an extended discussion of these results, their relevance for physical situations, and an outline of some future research directions.\n\nDespite page counting to the contrary, this is actually a short paper. A knowledgeable reader may learn our results by simply reading \\S\\ref{sec:OTO_channel}, \\S\\ref{sec:complexity-bound} and \\S\\ref{sec:haar-averages}. On the other hand, a large number of extended calculations and digressions are relegated to the Appendices: \n\\begin{itemize}\n\\item In \\S\\ref{app:proof}, we collect some proofs that we felt interrupted the flow of the main discussion. \n\\item In \\S\\ref{sec:appendix:orthogonal}, we discuss the number of nearly orthogonal states in large-dimensional Hilbert spaces.\n\\item In \\S\\ref{sec:complexity-appendix}, we extend our complexity lower bound to minimum circuit depth by considering gates that may be applied in parallel. 
We also derive a bound on early-time complexity for evolution with an ensemble of Hamiltonians.\n\\item In \\S\\ref{sec:appendix-averages}, we hide the details of our $8$-point function Haar averages and also derive the ${}\\sim d^{-2m}$ scaling of Haar averages of certain $4m$-point functions.\n\\item In \\S\\ref{sec:sub-space-randomization}, we provide a generalization of the frame potential that equals an average of the square of OTO correlators for arbitrary states rather than just the maximally mixed state.\n\\item Finally, in \\S\\ref{sec:more-chaos} we prove some extended results relating to our earlier work \\cite{Hosur:2015ylk} that are somewhat outside the main focus of the current paper.\n\\end{itemize}\n\n\\section{Measures of Haar}\\label{sec:review}\n\n\nThe goal of this section is to provide a review of the theory of Haar random unitary operators and unitary design in a self-contained manner. The presentation of this section owes a lot to a recent paper by Webb~\\cite{Webb15} as well as a course note by Kitaev \\cite{Kitaev-Haar}.\\footnote{We also would like to highlight a Master's thesis by Yinzheng Gu, which considers a certain matrix multiplication problem with conjugations by Haar random unitary operators. Although not described in this language there, these are actually out-of-time-order correlation functions in a disguised form~\\cite{gu2013moments}.}\n\n\\subsection*{Haar random unitaries}\n\n\n\\subsubsection*{Schur-Weyl duality}\nConsider a finite-dimensional Hilbert space $\\mathcal{H}^{\\otimes k}=(\\mathbb{C}^{d})^{\\otimes k}$ consisting of $k$ copies of $\\mathcal{H}=\\mathbb{C}^d$. A permutation operator $W_{\\pi}$ with a permutation $\\pi=\\pi(1)\\ldots \\pi (k)$ is defined as follows\n\\begin{align}\nW_{\\pi} |a_{1},\\ldots,a_{k} \\rangle = |a_{\\pi(1)},\\ldots,a_{\\pi(k)} \\rangle,\n\\end{align}\nand $W_{\\pi}(A_{1}\\otimes \\cdots \\otimes A_{k})W_{\\pi}^{-1} = A_{\\pi(1)}\\otimes \\cdots \\otimes A_{\\pi(k)}$. If $\\pi$ is a cyclic permutation ($\\pi(j)=j+1$), then $W_{\\text{cyc}}$ acts as follows\n\\begin{align}\n\\includegraphics[width=0.35\\linewidth]{fig_permutation}.\n\\end{align}\n\n\n\\begin{theorem}\\emph{\\tb{[Schur-Weyl duality]}}\nLet $L(\\mathcal{H}^{\\otimes k})$ be the algebra of all the operators acting on $\\mathcal{H}^{\\otimes k}$. Let $U(\\mathcal{H})$ be the unitary group on $\\mathcal{H}$. An operator $A \\in L(\\mathcal{H}^{\\otimes k})$ commutes with all operators $V^{\\otimes k}$ with $V\\in U(\\mathcal{H})$ if and only if $A$ is a linear combination of permutation operators $W_{\\pi}$\n\\begin{align}\n[A,V^{\\otimes k}]=0, ~ \\forall V \\ \\Leftrightarrow \\ A = \\sum_{\\pi\\in S_k} c_{\\pi}\\cdot W_{\\pi}.\\label{Schur-Weyl-duality} \n\\end{align}\n\\end{theorem}\n\nWhen an operator $A$ is a linear combination of permutation operators, it is clear that $A$ commutes with $V^{\\otimes k}$ ($\\Leftarrow$). The difficult part is to prove the converse ($\\Rightarrow$), which relies on von Neumann's double commutant theorem. \n\n\\subsubsection*{Pauli operators}\nPauli operators for $\\mathbb{C}^d$ (i.e. $d$-state spins or qudits) are defined by \n\\begin{align}\nX|j\\rangle = |j+1\\rangle, \\qquad Z|j\\rangle = \\omega^j|j\\rangle, \\label{eq:pauli-def}\n\\end{align}\nwhere $\\omega \\equiv e^{2\\pi i\/d}$. 
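For $d=2$, we have $\\omega=-1$, and Eq.~\\eqref{eq:pauli-def} reproduces the familiar qubit Pauli matrices\n\\begin{align}\nX = \\left(\n\\begin{array}{cc}\n0 & 1 \\\\\n1 & 0\n\\end{array}\n\\right),\\qquad Z = \\left(\n\\begin{array}{cc}\n1 & 0 \\\\\n0 & -1\n\\end{array}\n\\right).\n\\end{align}\n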
We note that Eq.~\eqref{eq:pauli-def} implies $ZX = \omega XZ$ and $X^d = Z^d = I$, and that for $d>2$, the Pauli operators are unitary and traceless, but not Hermitian.\n\nThe Pauli group is $\tilde{\mathcal{P}}=\langle \tilde{\omega}I,X,Z \rangle$, where $\tilde{\omega}=\omega$ for odd $d$, and $\tilde{\omega}=e^{i\pi\/d}$ for even $d$. Since we are usually uninterested in global phases, we will consider the quotient of the group\n\begin{align}\n\mathcal{P} = \tilde{\mathcal{P}} \backslash \langle \tilde{\omega} I \rangle.\n\end{align}\nThere are $d^2$ (representative) Pauli operators in $\mathcal{P}$. When the Hilbert space is built up from the space of $n$ qubits, we will denote the Pauli group by $\mathcal{P}_{n}$. For such systems, (the representatives of) $\mathcal{P}_{n}$ consist of tensor products of qubit Pauli operators, such as $X \otimes Y \otimes I \otimes Z \otimes \cdots $ without any global phases.\n\nThe Pauli operators provide a basis for the space of linear operators acting on the Hilbert space. They are orthogonal, $\text{tr} \, \{P_{i}^{\dagger}P_{j} \}=d\delta_{ij}$ for $P_{i},P_{j}\in \mathcal{P}$, and therefore we can expand any operator $A$ acting on $\mathcal{H}$ as\n\begin{align}\nA = \sum_{j} a_{j} P_{j},\qquad a_{j}=\frac{1}{d}\text{tr} \, \{P_{j}^{\dagger}A\}.\n\end{align}\nWith this property, the cyclic permutation operator $W_{\text{cyc}}$ on $\mathcal{H}^{\otimes k}$ can be decomposed as\n\begin{align}\nW_{\text{cyc}} = \frac{1}{d^{k-1}}\sum_{P_{1},\ldots,P_{k-1}\in \mathcal{P}} P_{1}\otimes P_{2} \otimes \cdots \otimes P_{k-1}\otimes Q^{\dagger}, \n\qquad Q = P_{1}P_{2}\cdots P_{k-1}, \label{eq:pauli-decomposition-of-cyc}\n\end{align}\nwhere the sum is over $k-1$ copies of $\mathcal{P}$.\n\nThe case of $k=2$ is particularly important, giving an operator that swaps two subsystems. Explicitly, we have $\text{SWAP}=\frac{1}{d}\sum_{P} P\otimes P^{\dagger}$, or graphically (up to a multiplicative factor)\n\begin{align}\n\includegraphics[width=0.35\linewidth]{fig_SWAP}.\label{eq:fig_SWAP}\n\end{align}\nHere and in what follows, a dotted line represents an average over all Pauli operators. For example, let us consider the Pauli channel\n\begin{align}\n\frac{1}{d^2}\sum_{P \in \mathcal{P}}P^{\dagger} A P = \frac{1}{d}\, \text{tr} \{A \}\, I,\label{eq:Pauli_twirl}\n\end{align}\nwhere $A$ is any operator on the system.\nThis equation can be derived graphically by applying Eq.~(\ref{eq:fig_SWAP})\n\begin{align}\n\includegraphics[width=0.50\linewidth]{fig_Pauli_twirl}.\n\end{align}\n\n\n\subsubsection*{$k$-fold channel}\nLet $A$ be an operator acting on $\mathcal{H}^{\otimes k}$. The $k$-fold channel of $A$ with respect to the unitary group is defined as\n\begin{align}\n\Phi_{\text{Haar}}^{(k)}(A) := \int_{\text{Haar}} (U^{\otimes k})^{\dagger} A \, U^{\otimes k} dU,\n\end{align}\nwhere the integral is taken over the Haar measure. Note that this is sometimes referred to as the $k$-fold twirl of $A$. The Haar measure is the unique probability measure on the unitary group that is both left-invariant and right-invariant~\cite{Watrous}\n\begin{align}\n\int_{\text{Haar}} dU = 1,\qquad \int_{\text{Haar}} f(VU)dU=\int_{\text{Haar}} f(UV)dU = \int_{\text{Haar}} f(U)dU, \label{eq:def_Haar}\n\end{align} \nfor all $V\in U(\mathcal{H})$, where $f$ is an arbitrary function. 
If we take $f(U)= (U^{\otimes k})^{\dagger}A\,U^{\otimes k}$, then we can show that the twirl of $A$ is invariant under $k$-fold unitary conjugation, \n\begin{align}\n(V^{\otimes k})^{\dagger}\big(\Phi^{(k)}_{\text{Haar}}(A)\big) V^{\otimes k} &= \int_{\text{Haar}} f(UV)dU = \Phi^{(k)}_{\text{Haar}}(A),\label{eq:duality1}\n\end{align}\nand that the twirl of the $k$-fold unitary conjugation of $A$ equals the twirl of $A$\n\begin{align}\n\Phi^{(k)}_{\text{Haar}}\big((V^{\otimes k})^{\dagger}A V^{\otimes k}\big)&=\int_{\text{Haar}} f(VU)dU = \Phi^{(k)}_{\text{Haar}}(A),\label{eq:duality2}\n\end{align}\nwhere for Eq.~\eqref{eq:duality1} we used the right-invariance property, and for Eq.~\eqref{eq:duality2} we used left-invariance.\n\n\subsubsection*{Weingarten function}\nThe content of Eq.~(\ref{eq:duality1}) is that $\Phi^{(k)}_{\text{Haar}}(A)$ commutes with all operators $V^{\otimes k}$. Thus, we may use the Schur-Weyl duality Eq.~\eqref{Schur-Weyl-duality} to rewrite it as\n\begin{align}\n\Phi^{(k)}_{\text{Haar}}(A) = \sum_{\pi \in S_k} W_{\pi} \cdot u_{\pi}(A).\n\end{align}\nHere, $S_k$ is the permutation group, and $u_{\pi}(A)$ is some linear function of $A$. Since $u_{\pi}(A)$ is a linear function, it can be written as\n\begin{align}\nu_{\pi}(A) = \text{tr} \{ C_{\pi} A\},\n\end{align}\nfor some operators $C_{\pi}$. From Eq.~(\ref{eq:duality2}), we find that $C_{\pi}$ commutes with all operators $V^{\otimes k}$. Then again, by the Schur-Weyl duality, we have\n\begin{align}\n\Phi^{(k)}_{\text{Haar}}(A) =\sum_{\pi,\sigma\in S_k} c_{\pi,\sigma} W_{\pi}\, \text{tr} \{ W_{\sigma}A \}.\n\end{align}\nThe coefficients $c_{\pi,\sigma}$ form the Weingarten matrix~\cite{Collins03}. Since $\Phi^{(k)}_{\text{Haar}}(W_{\lambda})=W_{\lambda}$, we have $W_{\lambda} = \sum_{\pi,\sigma} c_{\pi,\sigma}W_{\pi} \, \text{tr} \{W_{\sigma}W_{\lambda} \}$. Recalling that $\text{tr} \{ W_{\sigma}W_{\lambda}\}=d^{\# \text{cycles}(\sigma\lambda)}$, we have\n\begin{align}\n\delta_{\pi,\lambda} = \sum_{\sigma\in S_k}c_{\pi,\sigma} Q_{\sigma,\lambda},\qquad Q_{\sigma,\lambda}:=d^{\# \text{cycles}(\sigma\lambda)}.\n\end{align}\nSo finally, we find\n\begin{align}\n\Phi^{(k)}_{\text{Haar}}(A) = \sum_{\pi,\sigma\in S_k} (Q^{-1})_{\pi,\sigma} W_{\pi} \cdot \text{tr} \{W_{\sigma}A\}. \label{eq:Weingarten_decomposition}\n\end{align}\nHere, we assumed the existence of the inverse $Q^{-1}$, which is guaranteed for $k \leq d$.\n\n\subsection*{Examples}\nFor $k=1$, $Q_{I,I}=d$, so one has\n\begin{align}\n\Phi^{(1)}_{\text{Haar}}(A) = \frac{1}{d} \text{tr}\{A\}\, I.\n\end{align}\nFor $k=2$, one has\n\begin{align}\nQ = \left(\n\begin{array}{cc}\nQ_{I,I} & Q_{I,S} \\\nQ_{S,I} & Q_{S,S}\n\end{array}\n\right) =\n\left(\n\begin{array}{cc}\nd^2 & d \\\nd & d^2\n\end{array}\n\right), \quad \nC = Q^{-1} = \left(\n\begin{array}{cc}\nC_{I,I} & C_{I,S} \\\nC_{S,I} & C_{S,S}\n\end{array}\n\right) =\n\left(\n\begin{array}{cc}\n\frac{1}{d^2-1} & \frac{-1}{d(d^2-1)} \\\n\frac{-1}{d(d^2-1)} & \frac{1}{d^2-1}\n\end{array}\n\right).\n\end{align}\nExplicitly, we have\n\begin{align}\n\Phi^{(2)}_{\text{Haar}}(A) = \frac{1}{d^2-1} \left( I \ \text{tr} \{A\} + S\ \text{tr} \{ SA\} - \frac{1}{d} S\ \text{tr} \{A\} - \frac{1}{d} I\ \text{tr} \{SA\} \right),\n\end{align}\nwhere $S$ is the SWAP operator. 
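\n\nAs a quick numerical sanity check of the $k=2$ formula, one can sample Haar random unitary operators via the QR decomposition of a complex Ginibre matrix and compare the Monte Carlo twirl against the Weingarten prediction. The following is a minimal sketch of our own in Python with NumPy, not part of the derivation:\n\begin{verbatim}\nimport numpy as np\n\nd, samples = 3, 20000\nrng = np.random.default_rng(0)\n\ndef haar_unitary(d, rng):\n    # Haar sample via QR of a complex Ginibre matrix (Mezzadri's recipe)\n    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))\n    q, r = np.linalg.qr(z)\n    ph = np.diagonal(r)\n    return q * (ph \/ np.abs(ph))\n\n# SWAP operator: S|i,j> = |j,i> on C^d (x) C^d\nS = np.zeros((d * d, d * d))\nfor i in range(d):\n    for j in range(d):\n        S[j * d + i, i * d + j] = 1.0\n\n# random test operator A on the doubled Hilbert space\nA = rng.normal(size=(d * d, d * d)) + 1j * rng.normal(size=(d * d, d * d))\n\n# Monte Carlo estimate of the two-fold twirl\ntwirl = np.zeros_like(A)\nfor _ in range(samples):\n    U = haar_unitary(d, rng)\n    U2 = np.kron(U, U)\n    twirl = twirl + U2.conj().T @ A @ U2\ntwirl = twirl \/ samples\n\n# Weingarten prediction from the k = 2 formula above\nI2 = np.eye(d * d)\npred = (I2 * np.trace(A) + S * np.trace(S @ A)\n        - S * np.trace(A) \/ d - I2 * np.trace(S @ A) \/ d) \/ (d ** 2 - 1)\n\nprint(np.abs(twirl - pred).max())  # shrinks like 1\/sqrt(samples)\n\end{verbatim}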
\n\n\n\\subsubsection*{Haar random states}\nWe can also consider the $k$-fold average of a Haar random state. Define a random state by $|\\psi\\rangle = U|0\\rangle$, where $U$ is sampled uniformly from the unitary group. Then, we have\n\\begin{align}\n\\int_{\\text{Haar}} (|\\psi\\rangle \\langle \\psi| )^{\\otimes k} d\\psi := \\Phi^{(k)}_{\\text{Haar}}\\big((|0\\rangle\\langle 0 |)^{\\otimes k}\\big) = \\sum_{\\pi}c_{\\pi}W_{\\pi},\\label{eq:Haar_state}\n\\end{align}\nfor some coefficients $c_{\\pi}$.\nSince $\\int_{\\text{Haar}} (|\\psi\\rangle \\langle \\psi| )^{\\otimes k}$ commutes with $V^{\\otimes k}$, we again decomposed it with permutation operators. Furthermore, one has\n\\begin{align}\nW_{\\pi}\\int_{\\text{Haar}} (|\\psi\\rangle \\langle \\psi| )^{\\otimes k}=\\int_{\\text{Haar}} (|\\psi\\rangle \\langle \\psi| )^{\\otimes k}, \\qquad \\forall \\pi \\in S_{k}\n\\end{align}\nwhich implies $c_{\\pi}=c$ for all $\\pi$. By taking the trace in Eq.~(\\ref{eq:Haar_state}), we have\n\\begin{align}\nc_{\\pi}= \\frac{1}{\\sum_{\\pi\\in S_k} d^{\\text{\\#cycles($\\pi$)}}} = \\frac{k!}{\n\\Big(\\begin{array}{c}\nk+d-1 \\\\\nk\n\\end{array}\\Big)\n}.\n\\end{align}\nDefining the projector onto the symmetric subspace as $\\Pi_{\\text{sym}} = \\frac{1}{k!}\\sum_{\\pi\\in S_k}W_{\\pi}$, one has\n\\begin{align}\n\\int_{\\text{Haar}} (|\\psi\\rangle \\langle \\psi| )^{\\otimes k} d\\psi = \\frac{\\Pi_{\\text{sym}}}{\n\\Big(\\begin{array}{c}\nk+d-1 \\\\\nk\n\\end{array}\\Big) \\label{haar-random-states-equation}\n}.\n\\end{align}\n\n\n\n\\subsubsection*{Frame potential} In this paper, we will be particularly interested in the following quantity\n\\begin{align}\nF_{\\text{Haar}}^{(k)}=\\iint dU dV\\big|\\text{tr}\\, \\{ U^{\\dagger}V \\} \\big|^{2k}.\n\\end{align}\nUsing Eq.~\\eqref{eq:Weingarten_decomposition}, we can rewrite this\n\\begin{equation}\n\\begin{split}\nF_{\\text{Haar}}^{(k)} &= \\sum_{\\pi_{1},\\pi_{2},\\pi_{3},\\pi_{4}\\in S_{k}} \\text{tr} \\{ W_{\\pi_{1}}W_{\\pi_{2}} \\} (Q^{-1})_{\\pi_{2},\\pi_{3}} \\text{tr} \\{ W_{\\pi_{3}}W_{\\pi_{4}} \\} (Q^{-1})_{\\pi_{4},\\pi_{1}}, \\\\\n&= \\sum_{\\pi_{1},\\pi_{2},\\pi_{3},\\pi_{4}\\in S_{k}}Q_{\\pi_{1},\\pi_{2}} (Q^{-1})_{\\pi_{2},\\pi_{3}}Q_{\\pi_{3},\\pi_{4}}(Q^{-1})_{\\pi_{4},\\pi_{1}} = k!,\n\\end{split}\n\\end{equation}\nfor $k\\leq d$, where we used $C=Q^{-1}$ and $|S_{k}|=k!$.\n\n\n\\subsection*{Unitary design}\n\n\nConsider an ensemble of unitary operators $\\mathcal{E}=\\{p_{j},U_{j}\\}$ where $p_{j}$ are probability distributions such that $\\sum_{j}p_{j}=1$, and $U_{j}$ are unitary operators. The action of the $k$-fold channel with respect to the ensemble $\\mathcal{E}$ is given by \n\\begin{align}\n\\Phi^{(k)}_{\\mathcal{E}}(A) := \\sum_{j} p_{j} (U_{j}^{\\otimes k})^{\\dagger}A (U_{j})^{\\otimes k },\n\\end{align}\nor for continuous distributions\n\\begin{align}\n\\Phi^{(k)}_{\\mathcal{E}}(A) := \\int_{\\mathcal{E}} dU (U_{j}^{\\otimes k})^{\\dagger}A (U_{j})^{\\otimes k }.\n\\end{align}\nThe ensemble $\\mathcal{E}$ is a unitary $k$-design if and only if $ \\Phi^{(k)}_{\\mathcal{E}}(A)=\\Phi^{(k)}_{\\text{Haar}}(A)$ for all $A$. Intuitively, a unitary $k$-design is as random as the Haar ensemble up to the $k$th moment. That is, a unitary $k$-design is an ensemble which satisfies the definition of the Haar measure in Eq.~(\\ref{eq:def_Haar}) when $f(U)$ contains up to $k$th powers of $U$ and $U^{\\dagger}$ (i.e. balanced monomials of degree at most $k$). 
By this definition, if an ensemble is a $k$-design, then it is also a $(k-1)$-design. However, the converse is not true in general. \n\n\nIt is convenient to write the above definition of a $k$-design in terms of Pauli operators. An ensemble $\mathcal{E}$ is a $k$-design if and only if $ \Phi^{(k)}_{\mathcal{E}}(P)=\Phi^{(k)}_{\text{Haar}}(P)$ for all Pauli operators $P\in (\mathcal{P})^{\otimes k}$, since the Pauli operators form a basis of the operator space. Furthermore, for an arbitrary ensemble $\mathcal{E}$\n\begin{align}\n\Phi_{\text{Haar}}\left( \Phi_{\mathcal{E}}\big( A \big) \right) =\Phi_{\text{Haar}}\left( A\right), \qquad \forall \mathcal{E},\label{eq:ensemble-followed-by-Haar}\n\end{align}\ndue to the left\/right-invariance of the Haar measure. By using Eq.~\eqref{eq:ensemble-followed-by-Haar}, we can derive the following useful criterion for $k$-designs~\cite{Webb15}\n\begin{align}\n\text{$\mathcal{E}$ is a $k$-design} \quad \Leftrightarrow \quad \text{$\Phi_{\mathcal{E}}(P)$ is a linear combination of $W_{\pi}$ for all $P\in (\mathcal{P})^{\otimes k}$.} \label{eq:criteria}\n\end{align}\nTo make use of this, we will look at some illustrative examples. \n\n\subsubsection*{Pauli is a $1$-design}\nThe Pauli operators form a unitary $1$-design. In fact, we have already shown this with Eq.~\eqref{eq:Pauli_twirl}. We have shown that an average over Pauli operators gives $\frac{1}{d^2}\sum_{P\in \mathcal{P}}P^{\dagger} A P = \frac{1}{d}\, \text{tr}\{A\}\, I$, and the Haar random channel gives $\Phi^{(1)}_{\text{Haar}}(A)=\frac{1}{d}\, \text{tr}\{A\}\, I$, and therefore\n\begin{align}\n\frac{1}{d^2}\sum_{P\in \mathcal{P}}P^{\dagger} \rho P=\Phi^{(1)}_{\text{Haar}}(\rho),\n\end{align}\nfor all $\rho$. For example, if $d=2$ and $A=X$ (Pauli $X$ operator), then we have\n\begin{align}\n\frac{1}{4}(IXI + XXX + YXY + ZXZ) = \frac{1}{4}(X + X - X - X) =0,\n\end{align}\nwhich is consistent with $\text{tr} \{X \}=0$. \n\nThus, an average over Pauli operators is equivalent to taking a trace. Since Pauli operators can be written in a tensor product form: $P_{1}\otimes P_{2}\otimes \ldots \otimes P_{n}$ for a system of $n$ qubits, they do not create entanglement between different qubits and do not scramble. Instead, they can only mix quantum information locally. This implies some kind of relationship between $1$-designs and local thermalization that we will revisit in the discussion \S\ref{sec:discussion}.\n\n\subsubsection*{Clifford is a $2$-design}\nThe Clifford operators form a unitary $2$-design. The Clifford group $\mathcal{C}_{n}$ is a group of unitary operators acting on a system of $n$ qubits that transform a Pauli operator into another Pauli operator\n\begin{align}\nC^{\dagger} P C = Q, \qquad P,Q\in \tilde{\mathcal{P}}, \quad C\in \mathcal{C}_{n}.\n\end{align}\nClearly, Pauli operators are Clifford operators, since $PQP = e^{i\theta} Q$ for any pair of Pauli operators $P,Q$: Pauli operators transform a Pauli operator to itself up to a global phase. However, non-trivial Clifford operators are those which transform a Pauli operator into a different Pauli operator. An example of such an operator is the Control-$Z$ gate\n\begin{align}\n\text{C$Z$}|a,b\rangle = (-1)^{ab}|a,b\rangle, \qquad a,b =0,1.\n\end{align}\n
The conjugation of a Pauli operator with a Control-$Z$ gate is as follows\n\begin{align}\nX\otimes I \rightarrow X \otimes Z, \qquad Z\otimes I \rightarrow Z \otimes I, \n\qquad\nI\otimes X \rightarrow Z \otimes X, \qquad I\otimes Z \rightarrow I \otimes Z. \label{eq:control-z}\n\end{align}\n\nLet us prove that the Clifford group is a $2$-design by using Eq.~(\ref{eq:criteria})~\cite{Webb15}. For qubit Pauli operators of the form $P\otimes P$ ($P\not=I$), the action of the Clifford $2$-fold channel is\n\begin{align}\n\sum_{C\in \mathcal{C}_{n} } (C^{\dagger}\otimes C^{\dagger}) P\otimes P (C\otimes C) \propto \sum_{Q\in \mathcal{P}_{n}, Q\not=I }Q\otimes Q,\n\end{align}\nbecause a random Clifford operator will transform $P$ into some other non-identity Pauli operator. Recalling the definition of the swap operator $\text{SWAP}=\frac{1}{d}\sum_{P\in \mathcal{P}_{n}} P\otimes P$, the RHS is a linear combination of $I\otimes I$ and $\text{SWAP}$. On the other hand, for other Pauli operators $P\otimes Q$ with $P\not=Q$, the action of the channel is\n\begin{align}\n\sum_{C\in \mathcal{C}_{n} } (C^{\dagger}\otimes C^{\dagger}) P\otimes Q (C\otimes C) =0.\n\end{align}\nThis can be seen by rewriting the sum as \n\begin{equation}\n\frac{1}{2}\sum_{C\in \mathcal{C}_{n} } (C^{\dagger}\otimes C^{\dagger}) P\otimes Q (C\otimes C) + \frac{1}{2}\sum_{RC\in \mathcal{C}_{n} } ( C^\dagger R^{\dagger}\otimes C^\dagger R^{\dagger}) P\otimes Q (RC\otimes RC), \n\end{equation}\nsince the Clifford group is invariant under element-wise multiplication by a Pauli $R$. Since we assumed the Pauli operators are different, we have $PQ \neq I$, and thus we can pick $R$ such that it anti-commutes with $PQ$. This implies $R^\dagger P R \otimes R^\dagger Q R = - P \otimes Q$, and therefore the two terms cancel.\nFinally, $I\otimes I$ remains invariant. Thus, since the action of the channel on Pauli operators gives a linear combination of permutation operators, by Eq.~\eqref{eq:criteria} the Clifford group is a unitary $2$-design. \n\nNote that Clifford operators do not have a tensor product form, in general. This means that unlike evolution restricted to Pauli operators, they can change the size of an operator, as seen in the Control-$Z$ gate example Eq.~\eqref{eq:control-z}. In other words they can grow local operators into global operators, indicative of the butterfly effect. \n\nIn fact, Clifford operators can prepare a large class of interesting quantum states called stabilizer states. Let $|\psi_{0}\rangle=|0\rangle^{\otimes n}$ be an initial product state. This state satisfies $Z_{j}|\psi_{0}\rangle = |\psi_{0}\rangle$. Let $U$ be an arbitrary Clifford operator and consider $|\psi\rangle = U |\psi_{0}\rangle$. This state $|\psi\rangle$ satisfies the following\n\begin{align}\nS_{j}|\psi\rangle = |\psi\rangle, \qquad S_{j}=UZ_{j}U^{\dagger}.\n\end{align}\nBy definition, $S_{j}$ are Pauli operators and will commute with each other. A quantum state that can be represented by a set of commuting Pauli operators $S_{j}$ is called a stabilizer state. Examples of stabilizer states include ground states of the toric code and the perfect tensors used in the construction of holographic quantum error-correcting codes \cite{Pastawski15b}. The upshot is that Clifford operators can create global entanglement and can scramble quantum information. 
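\n\nThe conjugation table Eq.~\eqref{eq:control-z} is also easy to check directly. The following minimal sketch (our illustration in Python with NumPy, not part of the original argument) verifies all four rules, using the fact that C$Z$ is real, Hermitian, and self-inverse:\n\begin{verbatim}\nimport numpy as np\n\nI = np.eye(2)\nX = np.array([[0, 1], [1, 0]])\nZ = np.array([[1, 0], [0, -1]])\nCZ = np.diag([1, 1, 1, -1])  # real, Hermitian, and self-inverse\n\n# conjugation rules of Eq. (control-z): P -> CZ P CZ\nrules = [(np.kron(X, I), np.kron(X, Z)),\n         (np.kron(Z, I), np.kron(Z, I)),\n         (np.kron(I, X), np.kron(Z, X)),\n         (np.kron(I, Z), np.kron(I, Z))]\n\nfor P, expected in rules:\n    assert np.allclose(CZ @ P @ CZ, expected)\nprint('all four conjugation rules verified')\n\end{verbatim}\n\n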
We will return to the relation between Clifford operators and scrambling in the discussion \S\ref{sec:discussion}.\n\n\subsubsection*{? is a higher-design}\nCurrently there is no known method of constructing an ensemble which forms an exact $k$-design for $k\geq 4$ in a way which generalizes to large $d$. Instead, there are several constructions for preparing approximate $k$-designs in an efficient manner~\cite{Brandao12,Nakata:2016blv}.\n\n\section{Measures of chaos and design}\label{sec:OTO_channel}\n\nIn this section, we show that $2k$-point OTO correlators are probes of unitary $k$-designs. We will focus on a Hilbert space $\mathcal{H}=\mathbb{C}^d$ with $2k$-point correlators of the following form\n\begin{align}\n\big\langle A_{1}\tilde{B_{1}}\cdots A_{k}\tilde{B_{k}} \big\rangle := \frac{1}{d}\text{tr}\, \{A_{1}\tilde{B_{1}}\cdots A_{k}\tilde{B_{k}}\},\label{eq-form-of-correlator}\n\end{align}\nwhere $\tilde{B_{j}}=U^{\dagger}B_{j}U$. We can think of this as a correlator evaluated in a maximally mixed or infinite temperature state $\rho = \frac{1}{d}I$. The trace can be rewritten as\n\begin{align}\n\text{tr} \,\big\{ (A_{1} \otimes \cdots \otimes A_{k}) (U^{\dagger}\otimes \cdots\otimes U^{\dagger})\n(B_{1} \otimes \cdots \otimes B_{k})(U\otimes \cdots\otimes U)\n W_{\pi_{\text{cyc}}} \big\},\label{eq:cyclic}\n\end{align}\nby considering an enlarged Hilbert space $\mathcal{H}^{\otimes k}$ that consists of $k$ copies of the original Hilbert space $\mathcal{H}$, where $W_{\pi_{\text{cyc}}}$ represents a cyclic permutation operator on $\mathcal{H}^{\otimes k}$. The action of $W_{\text{cyc}}$ is to send the $j$th Hilbert space to the $(j+1)$th Hilbert space (modulo $k$), see Fig.~\ref{fig_k-fold_twirl} for a graphical representation.\footnote{N.B. this trick of using cyclic permutation operators is similar to the method used for computing R\'enyi-$k$ entanglement entropies in quantum field theories via the insertion of twist operators.} \n\n\begin{figure}[htb!]\n\centering\n\includegraphics[width=0.40\linewidth]{fig_k-fold_twirl}\n\caption{Schematic form of the $2k$-point OTO correlation functions Eq.~\eqref{eq-form-of-correlator}, interpreted as a correlation function on the enlarged $k$-copied system. The dotted line diagram surrounds the $k$-fold channel $\Phi_{\mathcal{E}}(B_{1} \otimes \cdots \otimes B_{k})$, which is probed by $A_{1}\otimes \ldots \otimes A_{k}$. (Periodic boundary conditions are implied to take the trace.)\n} \n\label{fig_k-fold_twirl}\n\end{figure}\n\nObserve that Eq.~(\ref{eq:cyclic}) contains a $k$-fold unitary action \n\begin{equation}\n(U\otimes \cdots\otimes U){}\n(B_{1} \otimes \cdots \otimes B_{k})\n(U^{\dagger}\otimes \cdots\otimes U^{\dagger}), \notag\n\end{equation} \nsuggesting correlators of the form Eq.~\eqref{eq-form-of-correlator} have the potential to be sensitive to whether an ensemble is or is not a $k$-design.\nUnitary $k$-designs concern the \emph{$k$-fold channel} of an ensemble of unitary operators\n\begin{align}\n\Phi_{\mathcal{E}}(B_{1} \otimes \cdots \otimes B_{k}) = \int_{\mathcal{E}} dU (U\otimes \cdots\otimes U)\n(B_{1} \otimes \cdots \otimes B_{k})\n(U^{\dagger}\otimes \cdots\otimes U^{\dagger}),\n\end{align} \nwhere $\mathcal{E}$ is an ensemble of unitary operators. 
To further this point, let us consider an average of these correlators over an ensemble of unitary operators $\mathcal{E}$\n\begin{align}\n\left|\big\langle A_{1}\tilde{B_{1}}\cdots A_{k}\tilde{B_{k}} \big\rangle\right|_{\mathcal{E}} := \frac{1}{d}\int_{\mathcal{E}} dU\, \text{tr}\, \{ A_{1}\tilde{B_{1}}\cdots A_{k}\tilde{B_{k}}\}. \label{eq-form-of-average-oto}\n\end{align}\nLooking back at Eq.~\eqref{eq:cyclic}, the idea is that $A_{1},\ldots,A_{k}$ operators probe the outcome of the $k$-fold channel $\Phi_{\mathcal{E}}(B_{1} \otimes \cdots \otimes B_{k})$. Indeed, the part of Fig.~\ref{fig_k-fold_twirl} surrounded by a dotted line is $\Phi_{\mathcal{E}}(B_{1} \otimes \cdots \otimes B_{k})$. Below, we will make this intuition precise by proving that this set of OTO correlators Eq.~\eqref{eq-form-of-correlator} completely determines the $k$-fold channel $\Phi^{(k)}_{\mathcal{E}}$. \n\n\n\n\subsection{Chaos and $k$-designs}\n\nIn this subsection, we prove that $2k$-point OTO correlators completely determine the $k$-fold channel of an ensemble $\mathcal{E}$, denoted by $\Phi_{\mathcal{E}}^{(k)}$. \nTo recap, we write the $k$-fold channel of an ensemble $\mathcal{E}=\{p_{j},U_{j}\}$ by \n\begin{align}\n\Phi^{(k)}_{\mathcal{E}}(\rho) = \sum_{j} p_{j} (U_{j}^{\dagger}\otimes \cdots \otimes U_{j}^{\dagger})\rho (U_{j}\otimes \cdots \otimes U_{j}),\n\end{align}\nwhere $\rho$ is defined over $k$ copies of the system, $\mathcal{H}^{\otimes k}$. The map is linear, completely positive, and trace-preserving (CPTP), i.e. a quantum channel. For simplicity of discussion, we assume that the system $\mathcal{H}$ is made of $n$ qubits so that $\mathcal{H}=\mathbb{C}^{d}$ with $d=2^n$. The input density matrix $\rho$ can be expanded by Pauli operators \n\begin{align}\n\rho_{\text{in}} = \sum_{B_{1},\ldots,B_{k}}\beta_{B_{1},\ldots,B_{k}}(B_{1}\otimes \cdots \otimes B_{k}),\n\end{align}\nwhere $\beta_{B_{1},\ldots,B_{k}}= d^{-k} \, \text{tr} \, \big\{(B_{1}^{\dagger}\otimes \cdots \otimes B_{k}^{\dagger}) \rho\big\}$. The output density matrix is given by\n\begin{align}\n\rho_{\text{out}} = \sum_{B_{1},\ldots,B_{k}}\beta_{B_{1},\ldots,B_{k}}\Phi_{\mathcal{E}}(B_{1}\otimes \cdots \otimes B_{k}).\n\end{align}\nFor given Pauli operators $B_{1},\ldots,B_{k}$, we would like to examine $\Phi_{\mathcal{E}}(B_{1}\otimes \cdots \otimes B_{k})$. \n\nLet us fix the Pauli operators $B_{1},\ldots,B_{k}$ for the rest of the argument. Note that the output $\Phi_{\mathcal{E}}(B_{1}\otimes \cdots \otimes B_{k})$ can also be expanded by Pauli operators\n\begin{align}\n\Phi_{\mathcal{E}}(B_{1}\otimes \cdots \otimes B_{k}) = \sum_{C_{1},\ldots,C_{k}}\gamma_{C_{1},\ldots,C_{k}}(C_{1}\otimes \cdots \otimes C_{k}).\label{eq:expansion}\n\end{align}\nSince we have fixed $B_{1}\otimes \cdots \otimes B_{k}$, for notational simplicity we have not included $B_{1}, \cdots, B_{k}$ indices on the tensor $\gamma$. In order to characterize the $k$-fold channel, we need to know the values of $\gamma_{C_{1},\ldots,C_{k}}$ for a given $B_{1}, \cdots, B_{k}$. We would like to show that we can determine the values of $\gamma_{C_{1},\ldots,C_{k}}$ by knowing a certain collection of OTO correlators. 
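\n\nTo make these expansion coefficients concrete, the following minimal sketch (our addition, in Python with NumPy) computes them in the simplest possible setting, $k=1$ on a single qubit, for a toy ensemble consisting of the single Hadamard gate:\n\begin{verbatim}\nimport numpy as np\n\nI = np.eye(2)\nX = np.array([[0, 1], [1, 0]])\nY = np.array([[0, -1j], [1j, 0]])\nZ = np.array([[1, 0], [0, -1]])\npaulis = [I, X, Y, Z]\n\nH = np.array([[1, 1], [1, -1]]) \/ np.sqrt(2)  # toy ensemble: the single gate H\n\nB = X                      # input Pauli operator\nout = H.conj().T @ B @ H   # the channel output for k = 1\n\n# gamma_C = (1\/d) tr{C^dag Phi_E(B)}, cf. Eq. (eq:expansion)\ngamma = [np.trace(C.conj().T @ out) \/ 2 for C in paulis]\nprint(np.round(gamma, 10))  # the Hadamard maps X -> Z: only gamma_Z = 1\n\end{verbatim}\n\n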
Consider a $2k$-point OTO correlator labeled by the set of $A$ operators, averaged over an ensemble $\mathcal{E}$\n\begin{align}\n\alpha_{A_{1},\ldots,A_{k}} = \left|\big\langle A_{1} \tilde{B_{1}} \cdots A_{k}\tilde{B_{k}} \big\rangle \right|_{\mathcal{E}}\label{eq:definition},\n\end{align}\nwhere as always $\tilde{B_{j}}=U^{\dagger}B_{j}U$ and $A_{1},\ldots,A_{k}$ are Pauli operators. As before, for simplicity of notation we have not included $B_{1},\ldots,B_{k}$ indices on $\alpha$. Now that the notation is set up, the main question is whether one can determine the coefficients $\gamma_{C_{1},\ldots,C_{k}}$ from the numbers $\alpha_{A_{1},\ldots,A_{k}}$.\n\n Substituting Eq.~\eqref{eq:expansion} into Eq.~\eqref{eq:definition}, we see\n\begin{align}\n\alpha_{A_{1},\ldots,A_{k}} = \frac{1}{d}\cdot M^{C_{1},\ldots,C_{k}}_{A_{1},\ldots,A_{k}} \gamma_{C_{1},\ldots,C_{k}},\n\qquad\nM^{C_{1},\ldots,C_{k}}_{A_{1},\ldots,A_{k}} =\n \text{tr} \, \{ A_{1}C_{1}\cdots A_{k}C_{k} \}, \label{eq:normal}\n\end{align}\nwhere tensor contractions are implicit following the Einstein summation convention.\nThis shows that we can compute the OTO correlators $\alpha_{A_{1},\ldots,A_{k}}$ from the coefficients $\gamma_{C_{1},\ldots,C_{k}}$ defining the $k$-fold channel. To establish the converse, we must prove that the tensor $M$ is invertible. \n\n\n\begin{theorem}\label{theorem:OTO_chaos}\nConsider the tensor $M^{C_{1},\ldots,C_{k}}_{A_{1},\ldots,A_{k}}$ \nand its conjugate transpose ${M^{\dagger}}_{C_{1},\ldots,C_{k}}^{A_{1},\ldots,A_{k}}$. Then\n\begin{align}\n\sum_{C_{1},\ldots,C_{k}}{M^{\dagger}}_{C_{1},\ldots,C_{k}}^{A_{1}',\ldots,A_{k}'}M^{C_{1},\ldots,C_{k}}_{A_{1},\ldots,A_{k}} = d^{2k}\cdot \n\delta^{A_{1}'}_{A_{1}}\cdots \delta^{A_{k}'}_{A_{k}},\label{eq:tensor_M}\n\end{align}\nwhere $\delta^{P}_{Q}$ is the delta function for Pauli operators $P,Q$\n\begin{align}\n\delta^{P}_{Q} = 1, \quad (P=Q), \qquad \delta^{P}_{Q} = 0, \quad (P\not=Q).\n\end{align}\n\end{theorem}\n\nThe proof of this theorem is somewhat technical and has been relegated to Appendix~\ref{sec:proof:oto-channnel}. Thus, from the OTO correlators $\alpha_{A_{1},\ldots,A_{k}}$, we can completely determine the $k$-fold channel coefficients $\gamma_{C_{1},\ldots,C_{k}}$\n\begin{align}\n\gamma_{C_{1},\ldots,C_{k}}= \frac{1}{d^{2k-1}} {M^{\dagger}}_{C_{1},\ldots,C_{k}}^{A_{1},\ldots,A_{k}}\alpha_{A_{1},\ldots,A_{k}}.\n\end{align}\nAs an obvious corollary, this means that $2k$-point OTO correlators can measure whether or not an ensemble forms a $k$-design.\n\n\n\n\subsection{Frame potentials}\n\n\nIn this subsection we introduce the \emph{frame potential}, a single quantity that can measure whether an ensemble is a $k$-design. Furthermore, we show how the frame potential may be computed from OTO correlators.\n\nGiven an ensemble of unitary operators $\mathcal{E}$, the $k$th frame potential is defined by the following double sum \cite{Scott08}\n\begin{align}\nF_{\mathcal{E}}^{(k)} := \frac{1}{|\mathcal{E}|^{2}}\sum_{U,V\in \mathcal{E}} \left| \text{tr} \{U^{\dagger}V\} \right|^{2k},\n\end{align}\nwhere $|\mathcal{E}|$ denotes the cardinality of $\mathcal{E}$. Denote the frame potential for the Haar ensemble as $F^{(k)}_{\text{Haar}}$. 
Then, the following theorem holds.\n\n\begin{theorem}\nFor any ensemble $\mathcal{E}$ of unitary operators, \n\begin{align}\nF^{(k)}_{\mathcal{E}} \geq F^{(k)}_{\mathrm{Haar}},\n\end{align}\nwith equality if and only if $\mathcal{E}$ is a $k$-design.\n\end{theorem}\n\nThe proof of this theorem is insightful and beautiful; we reprint it from~\cite{Scott08}. \n\n\begin{proof}\nLetting $S=\int_{\mathcal{E}}(U^{\dagger})^{\otimes k}\otimes U^{\otimes k} - \int_{\text{Haar}}(U^{\dagger})^{\otimes k}\otimes U^{\otimes k}$, we have\n\begin{equation}\n\begin{split}\n0 \leq \text{tr} \{ S^{\dagger}S\} = &\int_{U\in\mathcal{E}}\int_{V\in\mathcal{E}}dUdV\, |\text{tr} \{U^{\dagger}V \} |^{2k} - 2 \int_{U\in\mathcal{E}}\int_{V\in\text{Haar}}dUdV \, |\text{tr} \{U^{\dagger}V \} |^{2k} \\\n&+ \iint_{U,V\in\text{Haar}}dUdV \, |\text{tr} \{ U^{\dagger}V \} |^{2k}.\n\end{split}\n\end{equation}\nThe first term is $F^{(k)}_{\mathcal{E}}$, and the third term is $F^{(k)}_{\text{Haar}}$ by definition. The second term is equal to $- 2F^{(k)}_{\text{Haar}}$ by the left\/right-invariance of the Haar measure. Thus, we see that\n\begin{align}\n0 \leq F^{(k)}_{\mathcal{E}} - 2F^{(k)}_{\text{Haar}} + F^{(k)}_{\text{Haar}}\n=F^{(k)}_{\mathcal{E}} - F^{(k)}_{\text{Haar}},\n\end{align}\nwith equality if and only if $\mathcal{E}$ is a $k$-design. \n\end{proof}\n\n\nNote that we derived the minimal value of the frame potential in \S\ref{sec:review},\n\begin{equation}\nF_{\text{Haar}}^{(k)}=k!,\n\end{equation}\n which holds for $k \le d$. The frame potential quantifies the $2$-norm distance between the Haar ensemble and the $k$-fold $\mathcal{E}$-channel.\footnote{The distance between two quantum channels is typically measured by the diamond norm, while the frame potential measures the $2$-norm (or more precisely the Frobenius norm) distance between a given ensemble and the Haar ensemble. Importantly, the $2$-norm is weaker than the diamond norm. For precise statements regarding the different norms and bounds relating them, see~\cite{Low_thesis}.} Here we show that the frame potential can be expressed as a certain average of OTO correlation functions. \n\n\begin{theorem}\nFor any ensemble $\mathcal{E}$ of unitary operators,\n\begin{align}\n\frac{1}{d^{4k}}\sum_{A_{1},\cdots,B_{1},\cdots}\left|\big\langle A_{1}\tilde{B_{1}}\cdots A_{k}\tilde{B_{k}} \big\rangle_{\mathcal{E}}\right|^{2} = \frac{1}{d^{2(k+1)}}\cdot F^{(k)}_{\mathcal{E}}\label{eq:OTO_frame},\n\end{align}\nwhere summations are over all possible Pauli operators.\n\end{theorem} \n\nThe LHS of the equation is the operator average of the squared magnitude of OTO correlators, and the RHS is the $k$th frame potential up to a constant factor. There are $d^{4k}$ choices of Pauli operators $A_{1},\ldots, A_k, B_{1},\ldots, B_k$, which accounts for the normalization $1\/d^{4k}$. The theorem implies that the quantitative effect of random unitary evolution is to decrease the frame potential, which is equivalent to the decay of OTO correlators. \n\n\n\begin{proof}\nWe take the averages over $A_{1},\cdots,A_k, B_{1},\cdots, B_k$ first. 
Expanding the LHS gives \n\begin{align}\n\frac{1}{d^{4k}}\frac{1}{|\mathcal{E}|^{2}d^2}\sum_{U,V\in \mathcal{E}}\sum_{A_{1},\cdots,B_{1},\cdots}\text{tr} \{A_{1}U^{\dagger}B_{1}U\cdots A_{k}U^{\dagger}B_{k}U \} \cdot\text{tr} \{ V^{\dagger}B_{k}^{\dagger}VA_{k}^{\dagger}\cdots V^{\dagger}B_{1}^{\dagger}VA_{1}^{\dagger}\}.\n\end{align}\nFor $k=2$, this can be depicted graphically as\n\begin{align}\n\includegraphics[width=0.40\linewidth]{fig_frame}.\n\end{align}\nRecall that a SWAP operator is given by $\text{SWAP} = \frac{1}{d}\sum_{P} P \otimes P^{\dagger}$\n\begin{align}\n\includegraphics[width=0.35\linewidth]{fig_SWAP}.\n\end{align}\nThus, we replace each average of Pauli operators $A_{1},\cdots,A_k, B_{1},\cdots, B_k$ by SWAP operators. There are $2k$ loops, of which $k$ contribute to $\text{tr} \{UV^{\dagger}\}$ and the remaining $k$ loops contribute to $\text{tr} \{ VU^{\dagger} \}$. Keeping track of the number of factors of $d$, we find \n\begin{align}\n\frac{1}{d^{2(k+1)}|\mathcal{E}|^2}\sum_{U,V\in\mathcal{E}}\text{tr} \{ UV^{\dagger} \} ^k\cdot \text{tr} \{ VU^{\dagger}\}^k = \frac{1}{d^{2(k+1)}} \cdot F^{(k)}_{\mathcal{E}},\n\end{align}\nwhich is the desired result.\n\end{proof}\n\n\nLastly, we note that we can use the frame potential to lower bound the size of the ensemble. Since all the terms in the sum are positive, by taking the diagonal part of the double sum, we find\n\begin{align}\nF_{\mathcal{E}}^{(k)} \geq \frac{1}{|\mathcal{E}|^2}\sum_{U,V\in \mathcal{E},\, U=V} \big|\text{tr} \{I \}\big|^{2k}=\n \frac{1}{|\mathcal{E}|}d^{2k}, \label{frame-potential-vs-size}\n\end{align}\nmeaning the cardinality is lower bounded as $|\mathcal{E}| \ge d^{2k} \/ F_{\mathcal{E}}^{(k)}$. Using the fact that $F^{(k)}_\mathrm{Haar} = k!$, we can simply bound the size of a $k$-design\footnote{Actually, one can slightly improve this lower bound for a $k$-design, see~\cite{Roy:2009aa, Brandao12}.}\n\begin{equation}\n|\mathcal{E}| \ge \frac{d^{2k}}{k!}, \qquad \text{($k$-design)}. \label{eq-cardinality-of-k-design}\n\end{equation}\n\n\n\n\subsubsection*{Large $k$}\n\n\nIn Appendix~\ref{sec:appendix:orthogonal}, we provide an intuitive way of counting the number of nearly orthogonal states in a high-dimensional vector space $\mathbb{C}^d$ by introducing a precision tolerance $\epsilon$. A similar argument holds for the number of operators; if we can only distinguish operators up to some precision $\epsilon$, then the total number of unitary operators is given by the volume of the unitary group measured in balls of size $\epsilon$, which roughly goes like $\sim \epsilon^{-2^{2n}}$.\n\nThe key point is that if we have a tolerance $\epsilon$, then the number of operators is finite even for a continuous ensemble. This means that there is a maximum size to any ensemble that is a subset of the unitary group. Looking at our bound Eq.~\eqref{eq-cardinality-of-k-design}, we see that for $k\sim d$ we begin to reach the maximum size for some $\epsilon$. \n\nAs a corollary, this means that for large $k \gtrsim d$, being a $k$-design implies that the ensemble is an approximate $(k+1)$-design. In this sense, there are really only $O(d)$ nontrivial moments of the Haar ensemble.\n\nAs an explicit example, we will consider the case of $d=2$. Here, we can compute the Haar frame potential exactly for all $k$ \cite{Scott08}\n\begin{align}\nF^{(k)}_{\text{Haar}} = \frac{(2k)!}{k!\, 
(k+1)!},\n\end{align}\nand thus we have the following relationship\n\begin{align}\n\frac{F_{\text{Haar}}^{(k+1)}}{F_{\text{Haar}}^{(k)}} = 4 \frac{k+1\/2}{k+2}.\n\end{align}\nOn the other hand, for any ensemble the frame potential satisfies\n\begin{align}\nF^{(k+1)} \leq d^2 F^{(k)}=4F^{(k)}.\n\end{align}\nTherefore, for $d=2$ we see that if an ensemble is a $k$-design, for $k \gtrsim 2$ it will automatically be close to being a $(k+1)$-design.\n\n\n\n\n\subsection{Ensemble vs. time averages}\label{sec:F-time-average}\n\nIn this subsection, we consider the time average of the frame potential\nto ask whether the ensemble specified by sampling $U(t)=e^{-iHt}$ at different times $t$ will ever form a $k$-design. In the classical statistical mechanics and chaos literature, this is the question of whether a system is ergodic---whether the average over phase space equals an average over the time evolution of some initial state.\n\nConsider the one-parameter ensemble of unitary matrices defined by evolving with a fixed time-independent Hamiltonian $H$, \n\begin{equation}\n\mathcal{E} = \big\{ e^{-iHt} \big\}_{t=0}^{\infty}.\n\end{equation} \nWe can compute the frame potential as\n\begin{align}\nF_{\mathcal{E}}^{(k)} &= \lim_{T\to\infty} \frac{1}{T^2}\int_0^T dt_1 dt_2 \, \Big|\trr{e^{-iH(t_1 - t_2)}}\Big|^{2k}, \\\n &= \lim_{T\to\infty} \frac{1}{T^2}\int_0^T dt_1 dt_2 \, \sum_{\ell_{1}\dots \ell_{k}} \sum_{m_{1}\dots m_{k}} \exp \Big\{ -i (t_2 - t_1) \sum_{i=1}^k E_{\ell_{i}} + i(t_2-t_1)\sum_{j=1}^k E_{m_{j}} \Big\}. \notag\n\end{align}\nIf the spectrum is chaotic\/generic, then the energy levels are all incommensurate. The terms in the sum evaluate to zero unless the $E_{\ell_{i}}$ and $E_{m_{j}}$ agree under some pairing $\sigma$\n\begin{align}\nF_{\mathcal{E}}^{(k)} &= \sum_{\ell_{1}\dots \ell_{k}} \sum_{m_{1}\dots m_{k}} \sum_{\sigma\in S_k} \prod_{i=1}^{k} \delta_{\ell_{i}m_{\sigma(i)}}, \\\n &= k! \Big(\sum_{\ell} 1 \Big)^k = k!\,d^k. \notag\n\end{align}\nThis is larger than $F_{\text{Haar}}^{(k)}$ by a factor of $d^k$; the ensemble $\mathcal{E}=\{e^{-iHt}\}$ does not form a $k$-design!\nRecall that $F_{\mathcal{E}}=d^{2k}$ for a trivial ensemble $\mathcal{E}=\{I\}$ while $F_{\mathcal{E}}=k!$ for a Haar ensemble. The time-average ensemble sits between these two extremes.\n\nThis ``half-randomness'' can be understood in the following way. Let us write the Hamiltonian as\n\begin{align}\nH = \sum_{j} E_{j} |\psi_{j}\rangle \langle \psi_{j}|.\n\end{align}\nRotating $H$ by a unitary operator does not affect the frame potential, so we can consider a classical Hamiltonian \n\begin{align}\nH' = \sum_{j} E_{j} |j\rangle \langle j|,\n\end{align}\nwith the same frame potential. Even though $H'$ is classical, it has the ability to generate entanglement. Namely, if an initial state is $|+\rangle\propto\sum_{j}|j\rangle$, then time-evolution will create entanglement. In terms of the original Hamiltonian $H$, we see that the system keeps evolving in a non-trivial manner as long as the initial state is ``generic'' and is different from eigenstates of the Hamiltonian $H$. This argument suggests that the frame potential, in a time-average sense, can only see the distribution of the spectrum. 
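\n\nThis behavior is straightforward to observe numerically. The following minimal sketch (our illustration, in Python with NumPy) draws a generic spectrum, samples the time-evolution ensemble at uniformly random times, and reproduces the $k!\,d^{k}$ asymptotics:\n\begin{verbatim}\nimport math\nimport numpy as np\n\nd, k, T, samples = 32, 2, 1e7, 200000\nrng = np.random.default_rng(1)\nE = rng.normal(size=d)  # a generic, incommensurate spectrum\n\n# sample pairs of times and average |tr exp(-iH(t1 - t2))|^(2k)\ndt = rng.uniform(0, T, size=samples) - rng.uniform(0, T, size=samples)\ntr = np.exp(-1j * np.outer(dt, E)).sum(axis=1)\nF = np.mean(np.abs(tr) ** (2 * k))\n\nprint(F, math.factorial(k) * d ** k)  # agree up to sampling error\n\end{verbatim}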
\n\n\nThis fact, that Hamiltonian time evolution can never lead to random unitaries, can also be understood in terms of the distribution of the spacing of the eigenvalues.\footnote{This argument has been made by Michael Berry, who related it to Steve Shenker, who related it to Douglas Stanford, who related it to us. As far as we know, it does not otherwise appear in the literature.} The phases of the Haar random unitary operators have Wigner-Dyson statistics; they are repulsed from being degenerate. The eigenvalues of a typical Hamiltonian $H$ also have this property. However, these eigenvalues live on the real line, while the phases of $e^{-iHt}$ live on the circle. In mapping the line to the circle, the eigenvalues of $H$ wrap many times. This means that the difference of neighboring phases of $e^{-iHt}$ will not be repulsed; $e^{-iHt}$ will have Poisson statistics! In this sense, the ensemble formed by sampling $e^{-iHt}$ over time may never become Haar random.\footnote{Relatedly, the space of unitaries with a fixed Hamiltonian is $d$-dimensional, while the space of unitaries is $d^2$-dimensional.} The computation at the beginning of this subsection provides an explicit calculation of the $2$-norm distance between such an ensemble and $k$-designs and quantifies the degree to which the ensemble average is ``too random'' as compared to the time average.\n\n\n\n\section{Measures of complexity}\label{sec:complexity-bound}\nFor an operator or state, the computational complexity is a measure of the minimum number of elementary gates necessary to generate the operator or the state from a simple reference operator (e.g. the identity operator) or reference state (e.g. the product state). With the assumption that the elementary gates are easy to apply, the complexity is a measure of how difficult it is to create the state or operator.\n\n\nOn the other hand, an ensemble contains many different operators with different weights or probabilities. In that case, the computational complexity of the ensemble should be understood as the number of steps it takes to generate the ensemble by probabilistic applications of elementary gates. For instance, to generate the ensemble of Pauli operators, we randomly choose one of the Pauli operators $I,X,Y,Z$, each with probability $1\/4$, to apply to a qubit, and then repeat this procedure for all the qubits.\n\nThe complexity of an ensemble is related to the complexity of an operator in the following way. If an ensemble can be prepared in $\mathcal{C}$ steps, then all the operators in the ensemble can be generated by applications of at most $\mathcal{C}$ elementary gates. On the other hand, if an ensemble cannot be prepared (or approximated) in $\mathcal{C}$ steps, then---for the sorts of ensembles we are interested in---most of the operators cannot be generated by applications of $\mathcal{C}$ elementary gates. \nFor example, generating the Haar ensemble will take exponential complexity since, on average, individual elements have exponential complexity.\n\n\n\nThe complexity of the ensemble can be lower bounded in terms of the number of elements or cardinality of the ensemble $|\mathcal{E}|$. If all the elements are represented equally (with uniform probabilities), then clearly at least $|\mathcal{E}|$ distinct circuits need to be generated from probabilistic applications of the elementary gates. 
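\n\nAs a concrete illustration of this probabilistic generation, the following minimal sketch (our addition, in Python with NumPy) produces a uniformly random $n$-qubit Pauli operator in $n$ steps of four choices each:\n\begin{verbatim}\nimport numpy as np\n\nI = np.eye(2)\nX = np.array([[0, 1], [1, 0]])\nY = np.array([[0, -1j], [1j, 0]])\nZ = np.array([[1, 0], [0, -1]])\n\ndef random_pauli(n, rng):\n    # one probabilistic step per qubit: I, X, Y, or Z with probability 1\/4\n    op = np.array([[1.0 + 0j]])\n    for _ in range(n):\n        op = np.kron(op, [I, X, Y, Z][rng.integers(4)])\n    return op\n\nrng = np.random.default_rng(0)\nP = random_pauli(3, rng)  # a uniformly random element of P_3\n\n# the ensemble has 4^n elements but is generated in n steps of 4 choices,\n# consistent with the counting bound below\nprint(P.shape)\n\end{verbatim}\n\n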
Making use of a fact introduced in the previous section that $F_{\mathcal{E}}^{(k)}$ provides a lower bound on $|\mathcal{E}|$, here we show that the frame potential provides a lower bound on the circuit complexity of generating $\mathcal{E}$. We will also explain how this bound applies to ensembles that depend continuously on some parameters and thus have a divergent number of elements. \n\nTwo additional bounds that are somewhat outside the scope of the main presentation, one on circuit depth and one on the early-time complexity growth with a disordered ensemble of Hamiltonians, are relegated to Appendix~\ref{sec:complexity-appendix}.\n\n\subsection{Discrete ensembles}\label{sec:complexity-discrete}\n\nConsider a system of $n$ qubits. Let $\mathcal{G}$ denote an elementary gate set that consists of a finite number of two-qubit quantum gates. We denote the number of elementary two-qubit gates by $g:=|\mathcal{G}|$. At each time step we assume that we can implement any of the gates from $\mathcal{G}$. One typically chooses $\mathcal{G}$ so that gates in $\mathcal{G}$ enable universal quantum computation. A well-known example is\n\begin{align}\n\mathcal{G} = \{\text{$2$-qubit Clifford}, T\}, \n\end{align}\nwhere $T$ is the $\pi\/4$ phase shift operator; $T=\text{diag}(1,e^{i\pi\/4})$. (Of course, this is not the only choice of elementary gate set.) \n\nOur goal is to generate an ensemble of unitary operators $\mathcal{E}$ by sequentially implementing quantum gates from $\mathcal{G}$. Let us denote the necessary number of steps (i.e. the circuit complexity) to generate $\mathcal{E}$ by $\mathcal{C}(\mathcal{E})$. Then one has the following complexity lower bound. \n\begin{theorem}\nLet $g$ be the number of distinct two-qubit gates from the elementary gate set. Then the circuit complexity $\mathcal{C}(\mathcal{E})$ to generate an ensemble $\mathcal{E}$ is lower bounded by \n\begin{align}\n\mathcal{C}(\mathcal{E}) \geq \frac{\log |\mathcal{E}| }{\log(g n^2)}.\label{eq-for-theorem-complexity-ensemble-size-bound}\n\end{align}\n\label{theorem-complexity-ensemble-size-bound}\n\end{theorem}\n\nThe proof relies on an elementary counting argument. Arguments along this line of thought have been commonly used in the literature. \n\n\begin{proof}\n At each step, we randomly pick a pair of qubits. Since there are $g$ implementable quantum gates and $\binom{n}{2}$ qubit pairs, there are in total $\simeq g n^2$ choices at each step. \nIf this procedure runs for $\mathcal{C}$ steps, the number of unique circuits this procedure can implement is upper bounded by\n\begin{align}\n\text{$\#$ of circuits}\leq (g n^2)^{\mathcal{C}}.\n\end{align}\nSince there are $|\mathcal{E}|$ unitary operators in an ensemble $\mathcal{E}$, we must have\n\begin{align}\n(g n^2)^{\mathcal{C}} \geq |\mathcal{E}|, \n\end{align}\nwhich implies\n\begin{align}\n\mathcal{C}(\mathcal{E}) \geq \frac{\log |\mathcal{E}| }{\log(g n^2)}. \n\end{align} \n\end{proof}\n\n\nIn Appendix~\ref{sec:appendix:orthogonal}, we provide an intuitive way of counting the number of states in a $d=2^n$-dimensional Hilbert space (a counting that has been known for a long time \cite{Knill95}). 
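\n\nAs a worked example of Theorem~\ref{theorem-complexity-ensemble-size-bound}, the sketch below (our addition, in Python) evaluates the bound for the $n$-qubit Clifford group; the group order modulo phases, $2^{n^2+2n}\prod_{j=1}^{n}(4^j-1)$, is quoted from the standard counting and should be treated as an assumption of this illustration:\n\begin{verbatim}\nimport math\n\ndef clifford_order(n):\n    # order of the n-qubit Clifford group modulo phases (assumed):\n    # 2^(n^2 + 2n) * prod_{j=1..n} (4^j - 1)\n    size = 2 ** (n * n + 2 * n)\n    for j in range(1, n + 1):\n        size *= 4 ** j - 1\n    return size\n\ndef complexity_bound(ensemble_size, g, n):\n    # Theorem: C(E) >= log|E| \/ log(g n^2)\n    return math.log(ensemble_size) \/ math.log(g * n * n)\n\nn, g = 10, 20\nprint(complexity_bound(clifford_order(n), g, n))  # roughly n^2, up to logs\n\end{verbatim}\n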
\nFor another sanity check on Theorem~\ref{theorem-complexity-ensemble-size-bound}, if we substitute $|\mathcal{E}| \gtrsim 2^{2^n}$ into Eq.~\eqref{eq-for-theorem-complexity-ensemble-size-bound} we see that the complexity \nof most states is exponential in the number of qubits, $\mathcal{C} \gtrsim 2^n \log 2\/\log(g n^2)$.\n\n\nFinally, let us examine the relation between the frame potential and the circuit complexity. Using Eq.~\eqref{frame-potential-vs-size} to trade $|\mathcal{E}|$ for $F_{\mathcal{E}}^{(k)}$, we find\n\begin{align}\n\mathcal{C}(\mathcal{E}) \geq \frac{2kn \log(2) - \log F_{\mathcal{E}}^{(k)} }{\log(g n^2)}.\n\end{align}\nOf course, this bound depends on the choice of basic elements $g$ and the fact that we are using two-qubit gates. If we had considered $q$-body gates, we would have found a denominator $\log(g\binom{n}{q}) \approx \log(gn^q)$ for $n\gg q$.\nThis is no more than a choice of ``units'' with which to measure complexity. Thus, we state our key result as:\n\begin{theorem}\label{theorem-bound}\nFor an ensemble $\mathcal{E}$ with the $k$th frame potential $F_{\mathcal{E}}^{(k)}$, the circuit complexity is lower bounded by\n\begin{align}\n\mathcal{C}(\mathcal{E}) \geq \frac{2kn \log(2) - \log F_{\mathcal{E}}^{(k)} }{\log( \mathrm{choices})}.\label{eq:lower-bound}\n\end{align}\n\end{theorem}\nIn this context, $\log( \mathrm{choices})$ simply indicates the logarithm of the number of decisions that are made at each step. If we imagine we have some kind of decision tree for determining which gate to apply where we make a binary decision at each step (and use $\log_2$), then we may set the denominator to unity and measure complexity in bits rather than gates, i.e. $\mathcal{C}(\mathcal{E}) \geq 2kn - \log_2 F_{\mathcal{E}}^{(k)}$.\n\n\nIn the above discussion, we glossed over a subtlety. Here, we considered the quantum circuit complexity to prepare or approximate an entire ensemble $\mathcal{E}$. A closely related but different question concerns the quantum circuit complexity required to implement a typical unitary operator from the ensemble $\mathcal{E}$. Nevertheless, in ordinary settings the typical operator complexity and the ensemble complexity are roughly of the same order. While establishing a rigorous result in this direction is beyond the scope of this paper, see~\cite{Knill95} for some basic proof techniques that are useful in establishing this connection. \n\nAn important consequence of Theorem~\ref{theorem-bound} is that the smallness of the frame potential (i.e. generic smallness of OTO correlators) implies an increase in the quantum circuit complexity of generating the ensemble $\mathcal{E}$. As a corollary, we can simply rewrite Eq.~\eqref{eq:lower-bound} as\n\begin{align}\n\mathcal{C}(\mathcal{E}) \geq(2k-1)2n - \log_2 \sum_{A_{1},\cdots,B_{1},\cdots}\left|\big\langle A_{1}\tilde{B_{1}}\cdots A_{k}\tilde{B_{k}} \big\rangle_{\mathcal{E}}\right|^{2}.\label{eq:lower-bound-oto} \n\end{align}\nIn this sense, we see how the decay of OTO correlators is directly related to an increase in the (lower bound on the) complexity.\n\n\nNext, recall that the frame potential for a $k$-design is given by $F^{(k)}_{\text{Haar}}=k!$, which does not grow with $n$. 
The complexity of a $k$-design is thus lower bounded as\n\begin{equation}\n\mathcal{C}(k\text{-design}) \ge 2kn - k\log_2 k + k\log_2 e + \dots, \label{eq:complexity-k-design}\n\end{equation}\nwhich for large $n$ grows roughly linearly in $k$ and $n$ (at least). In \S\ref{sec:complexity:depth}, we show that the minimum circuit depth to make a $k$-design also grows linearly in $k$ (at least).\n\n\n\n\n\n\n\n\nFinally, we offer an additional information-theoretic interpretation for our lower bound and generalize it for ensembles with non-uniform probability distributions. Consider an ensemble $\mathcal{E}=\{ p_{j}, U_{j} \}$ with probability distribution $\{p_{j} \}$ such that $\sum_{j}p_{j}=1$. The second R\'{e}nyi entropy of the distribution $\{p_{j} \}$ is defined as $S^{(2)}= - \log_2 \big( \sum_{j} p_{j}^2 \big)$. In this more general situation, we can still bound the frame potential by considering the diagonal part of the sum\n\begin{align}\nF^{(k)}_{\mathcal{E}} = \sum_{i,j} p_{i}p_{j} \, \big|\text{tr} \{ U_{i}U_{j}^{\dagger} \}\big|^{2k} \geq \sum_{i} p_{i}^2 \, \big|\text{tr} \{ I \} \big|^{2k} = 2^{-S^{(2)}}d^{2k}. \n\end{align}\nSince the von Neumann entropy $S^{(1)} = - \sum_{j} p_j \log_2(p_j)$ is always at least the second R\'enyi entropy, $S^{(1)} \geq S^{(2)}$, we can bound the von Neumann entropy as\n\begin{align}\nS^{(1)} \geq 2kn - \log_2 F^{(k)}_{\mathcal{E}}.\n\end{align}\nThe entropy of the ensemble is a notion of complexity measured in bits.\footnote{However, this is not to be confused with the entanglement entropy. The entropy of the ensemble is essentially the logarithm of the number of different operators, and therefore can be exponential in the size of the system. Instead, the entanglement entropy (as a measure of entanglement) can only be as large as (half) the size of the system.} \n\n\n\n\n\n\n\n\n\n\subsection{Continuous ensembles}\label{sec:continuous}\n\nMany interesting ensembles of unitary operators are defined by continuous parameters, e.g. a disordered system has a time evolution that may be modeled by an ensemble of Hamiltonians.\footnote{A notable example of recent interest to the holography and condensed matter community is the ensemble implied by time evolving with the Sachdev-Ye-Kitaev Hamiltonian \cite{Sachdev:1992fk,Kitaev:2014t2,Maldacena:2016hyu}.} While the counting argument in \S\ref{sec:complexity-discrete} is not directly applicable to these systems with continuous parameters, the complexity lower bound generalizes to such systems by allowing approximations of unitary operators. To be concrete, imagine that we wish to create some unitary operator $U_{0}$ by combining gates from the elementary gate set. In practice, we do not need to create an exact unitary operator $U_{0}$. Instead, we may be fine with preparing some $U$ that faithfully approximates $U_{0}$ to within a trace distance\n\begin{align}\n||U_{0}-U||_{1}\leq \epsilon,\n\end{align}\nwhere the notation $||\cdot||_p \equiv (\text{tr} \, \{ |\cdot|^p \} )^{1\/p}$ specifies the $p$-norm of an operator.\n\nNow, let us derive a complexity lower bound up to an $\epsilon$-tolerance. 
We begin by taking $N_s$ samples from the ensemble and use them to estimate the frame potential of the continuous distribution\n\begin{equation}\nF_{\mathcal{E}}^{(k)} \approx \frac{1}{N_s^2}\sum_{i,j} \big|\text{tr}\, \{ U_i^{\dagger}V_j \} \big|^{2k},\label{sampled-frame-potential}\n\end{equation}\nwhere each of the two sums runs over all $N_s$ samples. We can lower bound Eq.~\eqref{sampled-frame-potential} as follows\n\begin{equation}\nF_{\mathcal{E}}^{(k)} \ge \frac{1}{N_s^2}\sum_{i} \sum_{\substack{j \\ ||U_i - V_j||_1 < \epsilon } } \big|\text{tr}\, \{ U_i^{\dagger}V_j \} \big|^{2k},\label{epsilon-frame-potential}\n\end{equation}\nwhere the sum over $i$ runs from $1$ to $N_s$. The sum over $j$ runs only over a smaller subset, of size $N_\epsilon(U_i)$, the number of sampled operators within a trace distance $\epsilon$ of a particular $U_i$.\n\n\nTo continue, let's bound the summand. First, note that\n\begin{equation}\n\big| \text{tr} \{ U^\dagger V \} \big| \ge \text{Re} \big\{ \, \text{tr} \, \{ U^\dagger V \} \big\} = d - \frac{1}{2}||U-V||_2^2.\n\end{equation}\nThe $2$-norm is upper bounded by the $1$-norm as $|| \mathcal{O} ||_2 \le \sqrt{d} || \mathcal{O}||_1$, which lets us rewrite this as\n\begin{equation}\n\big| \text{tr}\, \{ U^{\dagger}V \} \big|^{2k} \ge d^{2k}\bigg(1 - \frac{1}{2}||U-V||_1^2\bigg)^{2k}.\label{summand-bound}\n\end{equation}\nFor this formula to be sensible, this approximation requires that $\epsilon < \sqrt{2}$. Substituting Eq.~\eqref{summand-bound} into Eq.~\eqref{epsilon-frame-potential}, we can bound the frame potential\n\begin{equation}\nF_{\mathcal{E}}^{(k)} \ge \frac{1}{N_s} d^{2k}\bigg(1 - \frac{\epsilon^2}{2}\bigg)^{2k} \bigg[ \frac{1}{N_s} \sum_{i=1}^{N_s} N_{\epsilon}(U_i) \bigg] = d^{2k}\bigg(\frac{ \overline{N_\epsilon} }{N_s} \bigg) \bigg(1 - \frac{\epsilon^2}{2}\bigg)^{2k}. \n\end{equation}\nThe term in brackets in the middle expression is the average number of operators within a trace distance $\epsilon$ of an operator in our sample set. In the final expression this is represented by the symbol $\overline{N_\epsilon}$.\n\n\nNow, let's run the counting argument again. If we want to make $N_s$ circuits exactly, then in $\mathcal{C}$ steps we must have\n\begin{equation}\n(\text{choices})^\mathcal{C} > N_s,\n\end{equation}\nwhere as before $(\text{choices})$ summarizes the information about our choice of gate set, etc. Instead, if we only care about making circuits to within an $\epsilon$-accuracy, then in $\mathcal{C}_\epsilon$ steps $N_s$ instead satisfies\n\begin{equation}\n(\text{choices})^{\mathcal{C}_\epsilon} \, \overline{N_\epsilon} > N_s.\n\end{equation}\nThis lets us lower bound the complexity of the ensemble at precision $\epsilon$ as\n\begin{equation}\n\mathcal{C}_\epsilon(\mathcal{E}) > \frac{2k \log(d) - k \epsilon^2 - \log F_{\mathcal{E}}^{(k)}}{\log (\text{choices}) }. \label{complexity-boudnd-eps}\n\end{equation}\nWe then take the continuum limit by taking $N_s \to \infty$. The number of operators within an $\epsilon$-ball of a given sample will also diverge, but the ratio $ N_s\/ \overline{N_\epsilon}$ should remain finite and converge to some value, roughly the volume of the ensemble as measured in balls of $\epsilon$-radius.\footnote{Note that $\log(\text{choices})$ diverges if the elementary gate set is continuous. 
This problem may also be fixed by employing the $\epsilon$-tolerance.}\n\nFinally, in \S\ref{sec:complexity:early}, we further extend this notion of bounding complexity for continuous ensembles to show that the early-time complexity for evolution with an ensemble of Hamiltonians grows as $t^2$ for times $t < 1\/\sqrt{\log(d)}$.\n\n\n\n\n\n\n\n\section{Measures of correlators}\label{sec:haar-averages}\n\nWhile much of the focus of this paper has been on the behavior of ensembles, we were originally motivated by the following question: When is a random unitary operator an appropriate approximation to the underlying dynamics? In this section, we will attempt to return the focus to this question by computing random averages over correlation functions and comparing them to expectations for chaotic time evolution in physical systems.\n\n\subsection{Haar random averages}\n\nIn this subsection, we will explicitly compute some ensemble averages of OTO correlators for different choices of ensembles. A particular goal will be to understand the asymptotic behavior of these averages in the limit of a large number of degrees of freedom $d\to \infty$. We present explicit calculations for $2$-point and $4$-point functions here and provide results for $6$-point and $8$-point functions. Additional calculations may be found in Appendix~\ref{sec:appendix-averages}.\n\n\n\subsubsection*{$2$-point functions}\n\nConsider a $2$-point correlator, averaged over Haar random unitary operators\n\begin{align}\n\langle A \tilde{B} \rangle_{\text{Haar}}=\frac{1}{d}\int_{\text{Haar}} dU \, \text{tr} \, \{ A \, U^{\dagger}BU\} .\n\end{align}\nSince $U$ and $U^\dagger$ each only appear in the expression once, we will obtain the same answer if the average is instead performed over a 1-design: $\langle A \tilde{B} \rangle_{\text{Haar}} = \langle A \tilde{B} \rangle_{\text{1-design}}$. By using a formula from \S\ref{sec:review}, we can derive the following expression\n\begin{align}\n\langle A \tilde{B} \rangle_{\text{Haar}} = \langle A\rangle\langle B \rangle. \label{2-pt-haar}\n\end{align}\nGraphically, the calculation goes as follows\n\begin{align}\n\includegraphics[width=0.65\linewidth]{fig_2point.pdf}.\n\end{align}\nIt is often convenient to consider physical observables with zero mean by shifting $A \rightarrow A -\langle A\rangle$. Then, we see that these $2$-point correlation functions vanish. Of course, this always holds for Pauli operators\n\begin{align}\n\langle A \tilde{B} \rangle_{\text{Haar}} = 0, \qquad A,B\in \mathcal{P},\quad A,B\not=I.\n\end{align}\n\nNext, let us consider the norm squared of a $2$-point correlator averaged over Haar random unitary operators $|\langle A \tilde{B} \rangle|^2_{\text{Haar}}=\frac{1}{d^2}\int_{\text{Haar}} dU\, \text{tr} \{ A\,U^{\dagger}BU \} \, \text{tr} \{ U^{\dagger}B^\dagger U \, A^{\dagger}\}$. Note that we take the Haar average after squaring the correlator. Since there are two pairs of $U$ and $U^{\dagger}$ appearing, we can perform the average over a $2$-design: $| \langle A \tilde{B} \rangle|^{2}_{\text{Haar}} = |\langle A \tilde{B} \rangle|^{2}_{\text{2-design}}$. Let us assume that $A,B$ are Pauli operators so that we can neglect contributions from $\langle A\rangle$ and $\langle B\rangle$. There are four terms, but only one term survives because the trace of non-identity Pauli operators is zero. 
We depict the calculation graphically as\n\\begin{align}\n\\includegraphics[width=0.6\\linewidth]{fig_2point_absolute.pdf},\n\\end{align}\nwhere $C_{I}=1\/(d^2-1)$ comes from the Weingarten function as shown in \\S\\ref{sec:review}. The final result is\n\\begin{align}\n|\\langle A \\tilde{B}\\rangle|^2_{\\text{Haar}}=\\frac{1}{d^2-1},\\qquad A,B\\in \\mathcal{P},\\quad A,B\\not=I.\\label{eq:2-pt-variance}\n\\end{align}\nThus, the variance of the Haar averaged $2$-point function is exponentially small in the number of qubits. \n\n\\subsubsection*{$4$-point functions}\n\nNext, consider a $4$-point OTO correlator averaged over Haar random unitary operators: \n\\begin{align}\n\\langle A \\tilde{B} C \\tilde{D} \\rangle_{\\text{Haar}} = \\frac{1}{d}\\int_{\\text{Haar}} dU \\text{tr} \\{ A\\, U^\\dagger B U \\,C \\, U^\\dagger D U \\}. \n\\end{align}\nAs has already been explained, we will obtain the same answer if the average is performed with a $2$-design: $\\langle A \\tilde{B} C \\tilde{D} \\rangle_{\\text{Haar}} = \\langle A \\tilde{B} C \\tilde{D} \\rangle_{\\text{2-design}}$. By using formulas from \\S\\ref{sec:review} we can derive the following expression\\footnote{Note: this formula was independently obtained by Kitaev.}\n\\begin{equation}\n\\langle A \\tilde{B} C \\tilde{D} \\rangle_{\\text{Haar}} = \\langle AC \\rangle \\langle B \\rangle\\langle D \\rangle + \\langle A \\rangle\\langle C \\rangle \\langle BD \\rangle - \\langle A\\rangle \\langle C \\rangle \\langle B \\rangle\\langle D \\rangle - \\frac{1}{d^2-1}\\langle\\!\\langle AC\\rangle\\!\\rangle \\langle\\!\\langle BD\\rangle\\!\\rangle, \\label{eq:OTO_formula}\n\\end{equation}\nwhere $d=2^n$ and $\\langle \\! \\langle AC\\rangle \\! \\rangle\\equiv\\langle AC\\rangle-\\langle A\\rangle\\langle C\\rangle$. In particular, for Pauli operators $A,B,C,D\\not=I$, one has\n\\begin{equation}\n\\begin{split}\n\\langle A \\tilde{B} C \\tilde{D} \\rangle_{\\text{Haar}} &= - \\frac{1}{d^2-1} \\qquad (A=C^{\\dagger},\\ B=D^{\\dagger} ), \\\\\n&= 0 \\qquad \\quad \\qquad (\\text{otherwise}).\n\\end{split}\n\\end{equation}\nWhen nonzero, the result is exponentially small in the number of qubits $n$. The derivation of the aforementioned formula can be understood graphically as follows\n\\begin{equation}\n\\includegraphics[width=0.88\\linewidth]{fig_4pt_OTO}.\n\\end{equation}\nBy rewriting the expression in terms of connected correlators, we obtain the formula Eq.~\\eqref{eq:OTO_formula}.\n\nOf course, we can obtain the same result by instead averaging over the Clifford group. When $A\\not=C^{\\dagger}$ or $B\\not=D^{\\dagger}$, it is easy to show $\\langle A\\tilde{B}C\\tilde{D} \\rangle_{\\text{Clifford}}=0$. When $A=C^{\\dagger}$ and $B=D^{\\dagger}$, the value of the correlator depends on the commutation relations between $A$ and~$B$\n\\begin{equation}\n\\begin{split}\n\\langle A\\tilde{B}A^{\\dagger}\\tilde{B}^{\\dagger} \\rangle&=1, \\quad \\qquad [A,B]=0,\\\\\n&=-1, \\qquad \\{A,B\\}=0.\n\\end{split}\n\\end{equation}\nSince by definition Clifford operators transform a Pauli operator to another Pauli operator, $\\tilde{B}$ is a random Pauli operator with $\\tilde{B}\\not=I$. There are $d^2-1$ non-identity Pauli operators. Among them, $d^2\/2-1$ Pauli operators commute with $A$, and $d^2\/2$ anti-commute with $A$. 
Therefore, we find\n\\begin{align}\n\\langle A\\tilde{B}A^{\\dagger}\\tilde{B}^{\\dagger} \\rangle_{\\text{Clifford}}\n= \\frac{d^2\/2-1}{d^2-1} - \\frac{d^2\/2}{d^2-1} = \\frac{-1}{d^2-1}.\n\\end{align}\nAs expected, the ensemble average of the $4$-point OTO correlator over the Clifford group equals the Haar average. Recalling our result from \\S\\ref{sec:OTO_channel} that $4$-point OTO values completely determine the $2$-fold channel, this explicit calculation gives an alternative proof that the Clifford group forms a unitary $2$-design.\n\nFinally, we will present one additional way of computing the Haar average of $4$-point OTO correlators. \nFor convenience, we introduce the following notation\n\\begin{align}\n\\text{OTO}^{(4)}(A,B) :=\\langle A \\tilde{B} A^{\\dagger}\\tilde{B}^{\\dagger} \\rangle_{\\text{Haar}}.\n\\end{align}\nSince $U$ is sampled uniformly over the unitary group, $\\text{OTO}^{(4)}(A,B)$ does not depend on $A,B$ as long as $A,B\\not=I$. Consider the average of OTO correlation functions over all Pauli operators $A$, including the identity operator: $\\sum_{A} \\langle A \\tilde{B} A^{\\dagger}\\tilde{B}^{\\dagger} \\rangle_{\\text{Haar}}$. Since $\\frac{1}{d}\\sum_{A}A\\otimes A^{\\dagger}$ is the swap operator, we have\n\\begin{align}\n\\includegraphics[width=0.65\\linewidth]{fig_short_cut},\n\\end{align}\nwhere a dotted line represents an average over all the Pauli operators. This expression must be zero for $B\\not=I$. Thus, we have\n\\begin{align}\n\\sum_{A}\\text{OTO}^{(4)}(A,B) = 0, \\qquad B\\not=I. \n\\end{align}\nIf $A=I$, we have $\\text{OTO}^{(4)}(A,B)=1$. Since there are $d^2$ Pauli operators and, as noted above, the $d^2-1$ terms with $A\\not=I$ are all equal, we have \n\\begin{align}\n\\text{OTO}^{(4)}(A,B) = -\\frac{1}{d^2-1}, \\qquad A,B\\not=I. \n\\end{align}\nIn Appendix~\\ref{sec:k-point-averages}, we use this method to estimate the scaling of higher-point OTO correlators with $d$, finding that $4m$-point functions of a related ordering scale as $\\sim 1\/d^{2m}$.\n\n\n\\subsubsection*{$6$-point functions}\n\nNext, consider the Haar average of $6$-point OTO correlators, $\\langle A \\tilde{B} C \\tilde{D} E \\tilde{F} \\rangle_{\\text{Haar}}$. We will assume that $A,\\ldots,F\\not=I$ are Pauli operators. In order to have non-zero contributions, we must have $ACE \\propto I$ and $BDF \\propto I$, so we will only consider such cases.\n\nThe results depend on commutation relations between $A,C$ and $B,D$, but always have the following scaling\n\\begin{align}\n\\langle \\text{6-point} \\rangle_{\\text{Haar}} \\sim \\frac{1}{d^2}, \\qquad (A C E=I, \\quad B D F=I).\n\\end{align}\nExplicitly, when $[A,C]=0$ and $[B,D]=0$, we find\n\\begin{align}\n\\langle \\text{6-point} \\rangle_{\\text{Haar}} = \\frac{2d^2}{(d^2-1)(d^2-4)}.\n\\end{align}\nThe more general expression is slightly complicated though it has the same scaling.\nThus, the Haar average of $6$-point OTO correlators does not reach a lower floor value than the Haar average of $4$-point OTO correlators. \n\n\\subsubsection*{$8$-point functions}\nFinally, we will study Haar averages of $8$-point OTO correlators. In this case, there are two different types of nontrivial out-of-time ordering, which behave differently at large $d$. 
These computations are annoyingly technical, and so the details are hidden in Appendix~\\ref{sec:8-pt-functions}.\n\n\nThe $8$-point OTO correlators of the first type can be written in the following manner\n\\begin{align}\n\\langle A\\tilde{B}C\\tilde{D}A^{\\dagger} \\tilde{B}^{\\dagger} C^{\\dagger}\\tilde{D}^{\\dagger}\\rangle, \\qquad \\text{(non-commutator type)}.\\label{eq:non-commutator-type}\n\\end{align}\nFor Hermitian operators, this essentially repeats $A\\tilde{B}C\\tilde{D}$ twice. For reasons that will subsequently become clear, we will call such OTO correlators ``non-commutator types.'' (However, the result does depend on the commutation relations between $A,C$ and $B,D$.) For these correlators, the scaling of the Haar average with respect to $d$ is\n\\begin{align}\n\\langle A\\tilde{B}C\\tilde{D}A^{\\dagger} \\tilde{B}^{\\dagger} C^{\\dagger}\\tilde{D}^{\\dagger}\\rangle_{\\text{Haar}} \\sim \\frac{1}{d^2}, \\qquad \\text{(non-commutator type)},\n\\end{align}\nand this scaling does not depend on any commutation relations. Similar to what we found for the Haar average of the $6$-point functions, these non-commutator type $8$-point OTO correlators have the same scaling with $d$ as the Haar average of $4$-point OTO correlators. \n\nThe $8$-point OTO correlators of the second type take the form\n\\begin{align}\n\\langle A\\tilde{B}C\\tilde{D}A^{\\dagger} \\tilde{D}^{\\dagger} C^{\\dagger}\\tilde{B}^{\\dagger}\\rangle, \\qquad \\text{(commutator type)},\n\\end{align}\nand are denoted ``commutator-type'' correlators.\nThese correlators have the property that they can be written in the form\n\\begin{align}\n\\langle A\\tilde{B}C\\tilde{D}A^{\\dagger} \\tilde{D}^{\\dagger} C^{\\dagger}\\tilde{B}^{\\dagger}\\rangle = \\langle AKA^{\\dagger} K^{\\dagger}\\rangle,\n\\qquad K = \\tilde{B}C\\tilde{D},\n\\end{align}\ni.e. they are the expectation value of the group commutator $AKA^{\\dagger} K^{\\dagger}$.\nThe OTO correlators Eq.~\\eqref{eq:non-commutator-type} cannot be written in this way. As with the non-commutator types, the exact Haar average depends on commutation relations between $A,C$ and $B,D$. However, the scaling with respect to $d$ does not\n\\begin{align}\n\\langle A\\tilde{B}C\\tilde{D}A^{\\dagger} \\tilde{D}^{\\dagger} C^{\\dagger}\\tilde{B}^{\\dagger}\\rangle_{\\text{Haar}} \\sim \\frac{1}{d^4}, \\qquad \\text{(commutator type)}.\n\\end{align}\nThe Haar average of these correlators is much smaller than the Haar average of the non-commutator types and the $4$- and $6$-point Haar averages! 
This suggests they might be a useful statistic for distinguishing ensembles that form a $4$-design from ensembles that form a $2$-design but not a higher design.\n\nTo test this idea, we can average the commutator-type $8$-point OTO correlators over the Clifford group.\nSince we have assumed the operators $A, \\dots, D$ are Pauli operators, we find\n\\begin{align}\n\\langle A\\tilde{B}C\\tilde{D}A^{\\dagger} \\tilde{D}^{\\dagger} C^{\\dagger}\\tilde{B}^{\\dagger}\\rangle_{\\text{Clifford}} = \\langle A\\tilde{B}\\tilde{D}CA^{\\dagger} C^{\\dagger} \\tilde{D}^{\\dagger} \\tilde{B}^{\\dagger} \\rangle_{\\text{Clifford}} = \\frac{K(A,C^{\\dagger})}{d} \\, \\text{tr} \\{ A\\tilde{B}\\tilde{D}A^{\\dagger} \\tilde{D}^{\\dagger} \\tilde{B}^{\\dagger}\\},\n\\end{align}\nwhere in the first equality we commuted $C,\\tilde{D}$ and $C^{\\dagger},\\tilde{D}^{\\dagger}$, which holds because $C, D$ are Pauli operators, and in the second equality we\nhave defined $K(P,Q)$ by\n\\begin{equation}\nQ^{\\dagger}PQ = K(P,Q)P,\n\\end{equation} \nfor Pauli operators $P,Q$. The final answer is\n\\begin{align}\n\\langle A\\tilde{B}C\\tilde{D}A^{\\dagger} \\tilde{D}^{\\dagger} C^{\\dagger}\\tilde{B}^{\\dagger}\\rangle_{\\text{Clifford}} = \\frac{-K(A,C^{\\dagger})}{d^2-1}\\sim \\frac{1}{d^2}.\n\\end{align}\nRecall that the Clifford group is a unitary $2$-design, is not a $3$-design in general (except for a system of qubits) \\cite{Webb15}, and is never a $4$-design. Therefore, we see that commutator-type correlators may provide a statistical test of whether an ensemble forms a $k$-design but not a $(k+1)$-design. We explore this idea further in \\S\\ref{sec:k-point-averages}.\n\n\n\n\\subsection{Dissipation vs. scrambling}\nIn this subsection, we will return to the $2$- and $4$-point averages and compare them against expectations from time evolution. Furthermore, we will attempt to provide some physical intuition for the behavior of these averages over different ensembles. This will support our picture of chaotic time evolution leading to increased pseudorandomness.\n\nFor strongly coupled thermal systems, it is expected that the connected part of the $2$-point correlation functions decays exponentially within a time scale $t_{d}$ of order the inverse temperature $\\beta$\n\\begin{align}\n\\langle A(0)B(t) \\rangle \\rightarrow \\langle A\\rangle \\langle B \\rangle + O(e^{-t\/t_d}). \\label{2-pt-decay}\n\\end{align}\nThis time scale is often referred to as a ``dissipation'' or ``thermalization'' time and is related to the time it takes local thermodynamic quantities to reach equilibrium.\\footnote{For weakly coupled systems where the quasiparticle picture is valid (e.g. Fermi-liquids) the time scale is instead given by $t_{d}\\sim \\beta^2$.} \nIt is suggestive that the results Eq.~\\eqref{2-pt-haar} and Eq.~\\eqref{2-pt-decay} are so similar. After a short time $t_d$, chaotic dynamics give the same results for these $2$-point functions as Haar random dynamics.\n\nNext, we turn to the variance of the $2$-point correlator $\\langle A(0)B(t) \\rangle$. For a closed system with a finite number of degrees of freedom, the $2$-point function will be quasi-periodic, with recurrences after a timescale $t_r \\sim e^{d}$ that is exponential in the dimension $d=2^n$ and doubly exponential in the number of degrees of freedom $n$. As such, the long-time average of $|\\langle A(0)B(t) \\rangle|^2$ must be nonzero. 
This can be estimated by performing a time average and gives a well-known result \\cite{Dyson:2002pf,Barbon:2003aq,Barbon:2014rma}\\footnote{N.B. there is an error in this calculation as presented in \\cite{Dyson:2002pf,Barbon:2003aq} and so interested readers should consult \\cite{Barbon:2014rma} for the actual details.}\n\\begin{align}\n\\lim_{T\\rightarrow \\infty}\\frac{1}{T}\\int_{0}^{T} |\\langle A(0)B(t)\\rangle|^2 \\, dt \\sim \\frac{1}{d^2}.\n\\end{align}\nComparing against our result for the Haar-averaged dynamics Eq.~\\eqref{eq:2-pt-variance}, we see that they coincide.\n\n\nNext, let's consider $4$-point correlators in strongly-coupled theories with a large number of degrees of freedom $N$.\\footnote{We thank Douglas Stanford for conversations relating to dissipative behavior of $4$-point functions.} (For example, this can be thought of as a system of $N$ qubits where all the qubits interact but the interactions are at most $q$-local, with $q\\ll N$ and $N\\to \\infty$.) First, let's consider the case of a \\emph{time-ordered} correlator. Similar to the case of the $2$-point functions, two of the three Wick contractions are expected to decay exponentially within a dissipation time $t_{d}$\n\\begin{equation}\n\\begin{split}\n\\langle A(0)C(0)B(t)D(t) \\rangle \\rightarrow \n\\langle AC \\rangle \\langle B D \\rangle + O(e^{-t\/t_d}),\n\\end{split}\n\\end{equation}\nwhich for qubits is analogous to considering a $2$-point function between the composite operators $AC$ and $B D$.\\footnote{Note that the exact timescale $t_{d}$ might depend on the operators being correlated and the particular contraction.} Thus, this correlator will equilibrate after a time $t_{d}$ with a late-time value that depends on the expectations $\\langle AC \\rangle$ and $\\langle B D \\rangle$.\n\nNow, let's consider the out-of-time-order $4$-point correlator in a large $N$ strongly interacting theory. For $t \\sim t_d$, this will behave similarly to the time-ordered correlator with two of the three Wick contractions decaying exponentially\n\\begin{equation}\n\\begin{split}\n\\langle A(0)B(t)C(0)D(t) \\rangle \\rightarrow \n\\langle AC \\rangle \\langle B D \\rangle + O(e^{-t\/t_d}), \\qquad t< t_d.\n\\end{split}\n\\end{equation}\nHowever, for $t > t_d$ the correlator obtains an exponentially growing connected component\n\\begin{equation}\n\\langle A(0)B(t)C(0)D(t) \\rangle \\rightarrow \n\\langle AC \\rangle \\langle B D \\rangle - O(e^{\\lambda (t-t_*)}), \\qquad t_d < t < t_*.\n\\end{equation}\nThis growth occurs in the regime $t_d < t < t_*$. The time scale $t_* = \\lambda^{-1} \\log N$, known as the fast scrambling time, is the time at which the exponentially growing piece of the correlator compensates its $1\/N$ suppression and becomes $O(1)$. The coefficient $\\lambda$ has the interpretation of a new kind of Lyapunov exponent \\cite{Kitaev:2014t1} and is bounded from above by $2\\pi \/ \\beta$ \\cite{Maldacena:2015waa}. Finally, for $t > t_*$, these OTO $4$-point correlators are expected to decay to a small floor value that is exponentially small in $N$. A natural guess for this floor is\n\\begin{equation}\n\\langle A(0)B(t)C(0)D(t) \\rangle \\rightarrow e^{-O(N)}\\langle AC\\rangle \\langle BD\\rangle, \\qquad t > t_*,\n\\end{equation}\nwhich is reproduced from Eq.~\\eqref{eq:OTO_formula} with $1$-point functions assumed to be subtracted off.\n\n\nAs we mentioned, the $2$-point function Eq.~\\eqref{2-pt-decay} reached its Haar random value after a short dissipation time $t_d$. 
This is very suggestive of a picture where chaotic dynamics behave as a pseudo-$1$-design after a time $t_d$. Taking this point further, let's consider the $4$-point OTO correlator averaged over Pauli operators, an ensemble that forms a $1$-design but not a $2$-design. Furthermore, we will assume that the operators $A$ and $B$ have zero overlap. Under this assumption, we can show that \n\\begin{equation}\n\\begin{split}\n\\langle A \\tilde{B} C \\tilde{D} \\rangle_{\\text{Pauli}} = \n\\langle AC \\rangle \\langle B D \\rangle. \\label{eq:some_result}\n\\end{split}\n\\end{equation}\nThe proof of this is relegated to \\S\\ref{sec:proof:pauli}.\nApparently, Pauli operators capture the behavior of the dynamics around $t\\sim t_d$, i.e. from after the dissipative regime until the scrambling regime, but then a $2$-design is required to capture the behavior after $t\\sim t_*$, i.e. the post-scrambling regime.\\footnote{Note that these observations depend on the ensemble we average over actually being the Pauli operators, and not just any ensemble that forms a $1$-design without forming a $2$-design. Furthermore, we assume that $A,B,C,D$ are simply few-body operators so that the correlator is of local operators. These choices are determined for us by the basis in which the Hamiltonian is $q$-local.}\n\nThus, we might say that after a time $\\sim t_d$, the system becomes a pseudo-$1$-design, and then after $\\sim t_*$ the system becomes a pseudo-$2$-design. (See Fig.~\\ref{fig_scrambling} for a cartoon of this behavior.) However, it remains an open question whether there are any additional meaningful timescales that can be probed with correlators after $t_*$, though we are hopeful that such timescales might be hiding in higher-point OTO correlators.\n\n\n \n\n\\begin{figure}[htb!]\n\\centering\n\\includegraphics[width=0.70\\linewidth]{fig_plot2}\n\\caption{Thermalization and scrambling in the decay of a four-point OTO correlator. These correlators typically decay to $\\langle AC \\rangle \\langle B D \\rangle$ in a thermal dissipation time $t_d$, and then they decay to a floor value $\\sim d^{-2}$ at around the scrambling time $t_*$. These regimes are very well captured by replacing the dynamics with $1$-designs or $2$-designs, respectively. \n} \n\\label{fig_scrambling}\n\\end{figure}\n\n\n\n\n\n\n\\section{Discussion}\\label{sec:discussion}\nIn this paper, we have connected the related ideas of chaos and complexity to pseudorandomness and unitary design. A cartoon of these ideas is expressed nicely by Fig.~\\ref{fig-universe}. Operators can be thought of as being organized by increasing complexity. Regions defined by circles of larger and larger radius correspond to designs with increasing $k$.\\footnote{\n While this picture is a cartoon, the manifold for the unitary group $U(n)$ has a dimension exponentially large in $n$. However, following \\cite{Dowling2006:gt}, one can find a metric in which the length of a minimal geodesic between points computes operator complexity and in which most sections are expected to be hyperbolic. Taking this further, \\cite{Brown:2016wib} considered an interesting analog system on a hyperbolic plane that captures many of the expected properties of complexity.\n} In the rest of the discussion, we will make some related points, tie up loose ends, and mention future work.\n\n\n\\begin{figure}[htb!]\n\\centering\n\\includegraphics[scale=.45]{fig_universe_rev4}\n\\caption{A cartoon of the unitary group, with operators arranged by design. 
We pick the identity operator to be the reference operator of zero complexity and place it at the center. Typical operators have exponential complexity and live near the edge. Operators closer to the center have lower complexity, which makes them both atypical and more physically realizable in a particular computational model.\n}\n\n \n\\label{fig-universe}\n\\end{figure}\n\n\n\n\n\\subsection*{Generalized frame potentials and designs}\n\nIn realistic physical systems, one usually does not have access to the full Hilbert space. For example, there may be some conserved quantities, such as energy or particle numbers, or the system may be at some finite temperature $\\beta$. In that case, one would be interested in understanding pseudorandomness inside a subspace of the Hilbert space, i.e. restricted to some state $\\rho$. In Appendix~\\ref{sec:sub-space-randomization}, we generalize the frame potential for an arbitrary state $\\rho$, finding that the quantity\n\\begin{align}\n\\mathcal{F}^{(k)}_{\\mathcal{E}}(\\rho)&= \\iint dU dV \\ [\\text{tr} \\{ \\rho^{1\/k} UV^{\\dagger}\\} \\, \\text{tr} \\{ \\rho^{1\/k} VU^{\\dagger} \\}]^{k}, \\label{eq:discussion:generalized}\n\\end{align}\nhas all the useful properties desired of a frame potential. In particular, it is minimized by the Haar ensemble, and it provides a lower bound on ensemble size and complexity.\n\nHowever, if the state $\\rho$ is the thermal density matrix $\\propto e^{-\\beta H}$ and the ensemble is given by time evolution with an ensemble of Hamiltonians $\\mathcal{E}= \\{ e^{-iHt}\\}$, then we need to take into account the fact that the state itself depends on the ensemble. Instead, we can define a \\emph{thermal} frame potential\n\\begin{align}\n\\mathcal{W}^{(k)}_{\\beta}(t)= \\iint dG dH \\ \\frac{ \n\\big|\\text{tr} \\, \\{e^{-(\\beta\/2k-it )G } e^{-(\\beta\/2k+it )H }\\}\\big|^{2k}\n}{\\text{tr} \\, \\{e^{-\\beta G} \\}\\text{tr} \\, \\{e^{-\\beta H} \\}}.\n\\end{align}\n In this case, even at $t=0$ one can derive an interesting bound on the complexity of the ensemble. We hope to return to this in the future to analyze the \\emph{complexity of formation}: the computational complexity of forming the thermal state $\\rho_{\\beta}$ from a suitable reference state.\\footnote{This is also a question of interest in holography, see e.g. \\cite{CofFormation}.}\n\nFinally, it would be similarly interesting to consider a different generalization of unitary designs where, under some physical constraints, we can only access some limited degrees of freedom in the system. In this sense, one could think of the unitary ensemble (as opposed to the state) as being generated by tracing over these additional degrees of freedom. These ``subsystem designs'' would then be ``purified'' by integrating back in the original degrees of freedom.\\footnote{We thank Patrick Hayden and Michael Walter for initial discussions about this idea.}\n This interesting direction is a potential subject of future work.\n \n\n\\subsection*{More chaos in quantum channels}\n\nIn Appendix~\\ref{sec:more-chaos}, we revisit some ideas from our previous work \\cite{Hosur:2015ylk}, though these ideas are also relevant to the current work. In particular, in \\S\\ref{sec:more:HP} we reconsider the Hayden-Preskill notion of black hole scrambling \\cite{Hayden07}. In this thought experiment, Alice throws a secret quantum state into a black hole. 
Assuming Bob knows the initial state and dynamics of the black hole, we show how the question of whether Bob can reconstruct Alice's secret is related to the decay of an average of a certain set of $4$-point OTO correlators.\n\n\n\n In \\S\\ref{sec:more:op}, we provide an operational interpretation of taking averages over OTO correlators. We show that this is related to a quantum game of ``catch'' where Alice may ``spit on'' or otherwise perturb the ball before throwing it to Bob. The average over $4$-point OTO correlators gives the probability that Alice did not modify the ball. We also show that an average over higher-point OTO correlators can be interpreted as an ``iterated'' game of catch (i.e. what people normally just call ``catch'') where both Alice and Bob have the opportunity to modify the ball each turn. In this case, the OTO correlator average is related to the joint probability that neither Alice nor Bob perturbs the ball.\n\n\n\n Finally, in \\S\\ref{sec:more:renyi} we show that an average over a particular ordering of $2k$-point OTO correlators can be related to the $k$th R\\'enyi entropy of the operator $U$ interpreted as a state. We find that\n\\begin{equation}\n-\\log( \\text{a certain average of $2k$-point OTO correlators}) \\propto S_{ \\text{subsystem}}^{(k)}(U),\\label{eq:summary-of-renyi-formula}\n\\end{equation}\nwhere $S_{\\text{subsystem} }^{(k)}$ is the R\\'enyi $k$-entropy of a particular subsystem of the density matrix $\\rho = \\ket{U}\\bra{U}$.\n\n\n\n\\subsection*{Volume of unitary operators}\n\nThe argument in \\S\\ref{sec:continuous} led to a bound on the ratio $ N_s \/ \\overline{N_\\epsilon}$, which can be interpreted as the volume of $\\mathcal{E}$ in terms of $\\epsilon$-balls. An interesting application of this bound might be to think about the volume of unitary operators in $U(d)$ that can be probed in a finite time scale $T$, i.e. the volume of operators with depth $\\mathcal{D} \\sim T$. (See \\S\\ref{sec:complexity:depth} for further discussion of a lower bound on circuit depth in terms of the frame potential.)\n\n\n\nIn fact, in certain situations (such as the Brownian circuit introduced in \\cite{Lashkari13} or the random circuit model), it is not difficult to show that the volume of unitary operators with depth $T$ grows at least as $\\sim \\exp(\\text{const} \\cdot n \\, T)$ for small $T$, with a constant independent of $n$, by computing the $k=1$ frame potential. This implies that the space of unitary operators, with the metric being quantum gate complexity, has hyperbolic structure with constant curvature, as discussed in e.g. \\cite{Dowling2006:gt,Susskind:2014jwa,Brown:2016wib} (see also \\cite{Chemissany:2016qqq}). On the other hand, one can upper bound the volume of unitary operators with circuit depth $T$ by counting the possible circuits at each depth, $V(T) \\sim \\big[\\binom{n}{2}\\binom{n-2}{2}\\binom{n-4}{2} \\cdots \\binom{2}{2}\\big]^{T} \\approx \\exp(n \\log n \\cdot T)$. Thus, for small $T$ and large $n$, the lower bound seems to be reasonably tight.\n\nOnce a lower bound on the volume of unitary operators in an ensemble is obtained in units of $\\epsilon$-balls, we can also obtain a lower bound (of the same order) on the complexity of a typical operator in the ensemble. This seems possible by using the formal arguments given in \\cite{Knill95} even when the elementary gate set is not discrete, e.g. 
all the two-qubit gates.\n\nFinally, it's a curious fact that for systems with time-dependent Hamiltonian ensembles (such as the random circuit models or the Brownian circuit of \\cite{Lashkari13}) we get an initial linear growth of the volume with $T$. As argued in \\S\\ref{sec:complexity:early} (and confirmed numerically), for time-independent Hamiltonian evolution---e.g. in SYK or in the Gaussian unitary ensemble (GUE)---we get a lower bound $V(T) \\sim \\exp(\\text{const} \\cdot n\\, T^2 )$, which persists for a short time $T \\sim 1\/\\sqrt{n}$. It would be very interesting to understand the difference in this scaling.\\footnote{\n One might worry that this bound saturates at a value smaller than unity. However, we expect that there may be a continuous definition of complexity sensible for small complexities, see e.g. \\cite{Brown:2017jil}. \n}\n\n\n\n\n\n\\subsection*{Tightness of the complexity bound}\n\nWhile the frame potential provides a rigorous lower bound on the complexity of generating an ensemble of unitary operators, there may be a cost: the bound may not be very tight when applied to time evolution by an ensemble of Hamiltonians.\\footnote{We have learned this by some numerical investigations of the frame potential.} \n\nLet us try to understand this better. \nTo be concrete, consider the $k=2$ frame potential for a strongly coupled spin system that scrambles in $t_* \\sim \\log n$ time. In such a system, for local operators $W, V$ of unit weight, $4$-point OTO correlators $\\langle W(t)^{\\dagger}V^{\\dagger}W(t)V\\rangle$ will begin to decay after $t\\sim O(\\log n)$. Since the $k=2$ frame potential is the average of $4$-point OTO correlators, one might expect that the frame potential will also start to decay at $t\\sim O(\\log n)$. \n\nHowever, this is not quite right. We expect the decay time for more general correlators of larger operators to be reduced to $\\bar{t}_* \\sim t_* - O\\big(\\log(\\text{size}~W)\\big) - O\\big(\\log(\\text{size}~V)\\big)$, where $t_*\\sim O(\\log n)$ is the scrambling time when $V$ and $W$ are low-weight operators. If we randomly select Pauli operators $W$ and $V$, they will typically be nonlocal operators with $O(n)$ weights, and therefore the OTO decay time $\\bar{t}_*$ will be reduced to $O(1)$ for $V$ and $W$ of typical sizes.\n\nIn fact, the above estimate suggests most of the correlators determining the complexity bound should begin to decay immediately. As we can see in Eq.~\\eqref{eq:lower-bound-oto}, each correlator itself only makes a logarithmic contribution to the complexity and so we shouldn't expect the remaining slowly decaying local correlators to be dominant. (To be sure, a further investigation of this point is required.)\n\nOne possible way to fix this problem would be to generalize the frame potential by using a $p$-norm with $p \\neq 2$ so that it is more sensitive to the slower decaying local correlators. We leave the study of such a generalization to the future.\n\n\n\n\n\n\\subsection*{Complexity and holography}\nFinally, we will return to the question of complexity and holography discussed in the introduction.\nIn the context of holography, computational complexity was ``introduced'' \\cite{Harlow:2013tf} as a possible resolution to the firewall paradox of \\cite{Almheiri13,Almheiri13b}. 
A direct connection between complexity and black hole geometry was first proposed by Susskind \\cite{Susskind:2014rva,Susskind:2014ira}, which culminated in proposals that the interior of the black hole geometry is holographically dual to the spatial volume \\cite{Stanford:2014jda} or the spacetime action \\cite{Brown:2015bva,Brown:2015lvg}. These proposals are motivated by the fact that the black hole interior continues to grow as the state evolves long past the time entropic quantities equilibrate \\cite{Hartman13}. While there is nice qualitative evidence for both of these proposals \\cite{Susskind:2014jwa,Roberts:2014isa,Brown:2016wib}, what is missing is a direct understanding of computational complexity in systems that evolve continuously in time with a time-independent Hamiltonian.\n\n\n\n A hint can be obtained by considering some of the motivations for these holographic complexity proposals. In particular, building on the work of \\cite{Hartman13} and previous work of Swingle \\cite{Swingle12,Swingle12b}, Maldacena suggested that the black hole interior could be found in the boundary theory by a tensor network construction of the state \\cite{Maldacena:2013t1}. A tensor network toy model of the evolution of the black hole interior was investigated in \\cite{Hosur:2015ylk}. In this toy model, the interior of the black hole was modeled as a flat tiling of perfect tensors, see Fig.~\\ref{tn-erb}. These tensors were elements of the Clifford group and acted as two-qubit unitary operators that highly entangled neighboring qubits at each time step. From the perspective of the boundary theory, this is a model for Hamiltonian time evolution.\\footnote{In addition to providing a model for the black hole interior, Swingle's identification of the ground state of AdS with a tensor network \\cite{Swingle12,Swingle12b} has led to numerous other quantum information insights in holography (e.g. quantum error correction and its relationship to bulk operator reconstruction \\cite{Almheiri14}) as well as additional toy models that demonstrate those features (e.g. \\cite{Pastawski15b,Hayden:2016cfa}).}\n\n\n\\begin{figure}[htb!]\n\\centering\n\\includegraphics[width=1\\linewidth]{fig_circuit}\n\\caption{A $6$-qubit tensor network model for the geometry of the interior of a black hole. Via holography, the growth of the interior is expected to correspond to chaotic time evolution of a strongly coupled quantum theory. Here, each node corresponds to a perfect tensor and the numbers label the qubits. \n} \n\\label{tn-erb}\n\\end{figure}\n\n\n This toy model captures some important features of the complexity growth of the black hole state. The number of tensors in the network grows linearly in time, by construction. Operators will grow ballistically, exhibiting the butterfly effect, and the network scrambles in linear time. Thus, this network captures the aspects of black hole chaos related to local scrambling and ballistic operator growth discussed in \\cite{Roberts:2014isa} as well as aspects of complexity growth discussed in \\cite{Hartman13,Susskind:2014ira,Stanford:2014jda,Brown:2015bva}. \n\n\n\n However, since in this model the perfect tensor is a repeated element of the Clifford group, the complexity can never actually grow to be very large. In fact, the quantum recurrence time of the model was investigated in \\cite{Hosur:2015ylk} and was found to be exponential in the entropy $\\sim e^{n}$ rather than doubly exponential $\\sim e^{e^n}$ as expected in a fully chaotic model. 
This is related to our oft-stated fact that the Clifford group generally does not form a higher-than-$2$-design. In fact, this model can actually be mapped to a classical problem, and \n by the Gottesman-Knill theorem its complexity can be no greater than $O(n^2)$ gates \\cite{Nielsen_Chuang}.\\footnote{Since the Clifford group is a group, any particular circuit in our model is an element of the group. Thus, we should be able to reach it with at most a polynomial number of applications of $2$-qubit gates.} \n\n\n\n These observations were the inspiration for this current work, since in this toy model $4$-point OTO correlators behave chaotically, but higher-point OTO correlators do not. Nevertheless, this model can be ``improved'' by using random $2$-qubit tensors rather than a repeated perfect tensor.\\footnote{Note: the case of averaging over a single random tensor is like time evolution with a time-independent Hamiltonian. On the other hand, the case of averaging separately over all tensors has a continuum limit of time evolution with a time-dependent Hamiltonian with couplings that evolve at each time step. This model is known as the Brownian circuit \\cite{Lashkari13}.} In \\cite{Brandao12}, it was shown that this local random quantum circuit approaches a unitary $k$-design in a circuit depth that scales at most as $O(k^{10})$. Our complexity lower bound for a $k$-design Eq.~\\eqref{eq:complexity-k-design} suggests that the time to become a $k$-design is lower bounded by $k$, and we suspect that this can be saturated.\\footnote{It is believed that this is actually saturated by the local random circuit of \\cite{Brandao12}. For example, there is some numerical evidence for this claim in \\cite{mozrzymas2013local}.} \n\nIt is in this sense that we speculate that the complexity growth of the chaotic black hole is pseudorandom. That is, we suspect that as the complexity of the black hole state increases linearly with time evolution $t$, the dynamics evolve to become pseudo-$k$-designs, with the value $k$ roughly scaling with $t$\n \\begin{equation}\n \\mathcal{C}(e^{-iHt}) \\sim t \\sim k,\n \\end{equation}\n and that this may be quantified by either representative $2k$-point OTO correlators or by an appropriate generalization of unitary design. With this in mind, it would be interesting to see whether one could use the tools of unitary design to prove a version of the conjectures of \\cite{Brown:2015bva,Brown:2015lvg} suggesting that complexity is (greater than or) equal to action.\n\n\n\n\n\n\n\\section*{Acknowledgments}\n\nWe are grateful to Fernando Brandao, Adam Brown, Jordan Cotler, Guy Gur-Ari, Patrick Hayden, Alexei Kitaev, M\\'ark Mezei, Xiao-Liang Qi, Steve Shenker, Lenny Susskind, Douglas Stanford, and Michael Walter for discussions. \n\nDR and BY both acknowledge support from the Simons Foundation through the ``It from Qubit'' collaboration.\nDR is supported by the Fannie and John Hertz Foundation, the National Science Foundation grant\nnumber PHY-1606531 and the Paul Dirac Fund, and is also very thankful for the hospitality of the Stanford Institute for Theoretical Physics and the Perimeter Institute of Theoretical Physics during the completion of this work. \nResearch at the Center for Theoretical Physics at MIT is supported by the U.S. 
Department of Energy under cooperative research agreement Contract Number DESC0012567.\nResearch at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation. This paper was brought to you unitarily by the Haar measure.\\footnote{\n Finally, we would like to thank one of our anonymous JHEP referees for pointing out the prior work \\cite{cbd:cd} and suggesting an acknowledgment. While we were unaware of \\cite{cbd:cd} at the time of submission, we are nevertheless quite happy to include a citation in our revision.\n}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\nCreating realistic music pieces automatically has always been regarded as one of the frontier subjects in the field of computational creativity. With recent advances in deep learning, deep generative models and their variants have been widely used in automatic music generation \\cite{HerremansCC17}\\cite{deepgenerationsurvey}. However, most deep composition methods focus on Western music rather than Chinese music. How to employ deep learning to model the structure and style of Chinese music is a novel and challenging problem.\n\nChinese folk songs, an important part of traditional Chinese music, are improvised by local people and passed on from one generation to the next orally. Folk tunes from the same region exhibit a similar style, while tunes from different areas present different regional styles \\cite{Miao1985}\\cite{LiLDZY19}. For example, the songs named \\emph{Mo Li Hua} have different versions in many areas of China and show various music styles, though they share the same name and similar lyrics.\\footnote{The Chorus of \\emph{Mo Li Hua Diao} from various regions in China by the Central National Orchestra: \\url{http:\/\/ncpa-classic.cntv.cn\/2017\/05\/11\/VIDEEMEg82W5MuXUMM1jpEuL170511.shtml}.} The regional characteristics of Chinese folk songs are not well explored and should be utilized to guide automatic composition for Chinese folk tunes. Furthermore, folk song composition based on regional style provides abundant potential materials for Chinese national music creation, and promotes the spread and development of Chinese national music and even Chinese culture in the world.\n\nThere are lots of studies on music style composition of Western music \\cite{Dai2018}. However, few studies employ deep generative models for Chinese music composition. There is a clear difference between Chinese and Western music. Unlike Western music, which focuses on the vertical structure of music, Chinese music focuses on the horizontal structure, i.e., the development of melody, and the regional style of Chinese folk songs is mainly reflected in their rhythm and pitch interval patterns \\cite{Guan2014}. \n\nIn this paper, we propose a deep music generation model named MG-VAE to capture the regional style of Chinese folk songs (\\emph{Min Ge}) and create novel tunes with a controlled regional style. Firstly, a MIDI dataset with more than 2000 Chinese folk songs covering six regions is collected. After that, we encode the input music representations to the latent space and decode the latent space to reconstruct music notes. In detail, the latent space is divided into two parts to represent the pitch features and rhythm features, namely, the \\emph{pitch variable} and the \\emph{rhythm variable}. 
Then we further divide the pitch latent space into a \\emph{style variable} part and a \\emph{content variable} part to represent the style and style-independent features of pitch; the same operation is applied to the rhythm variable. In order to capture the regional style of Chinese folk songs precisely and generate regional style songs in a controllable way, we propose a method based on adversarial training to disentangle the four latent variables, where temporal supervision is employed to separate the pitch and rhythm variables, and label supervision is used to disentangle the style and content variables. The experimental results and the visualization of the latent spaces show that our model is effective in disentangling the latent variables and is able to generate folk songs with a specific regional style. \n\nThe rest of the paper is structured as follows: after introducing related work on deep music generation in Section 2, we present our music representations and model in Section 3. Section 4 describes the experimental results and analysis of our methods. Conclusions and future work are presented in Section 5.\n\n\\section{Related Work}\nRNN (Recurrent Neural Network) is one of the earliest models introduced into the domain of deep music generation. Researchers employ RNNs to model the music structure and generate different formats of music, including monophonic folk melodies \\cite{SturmSBK16}, rhythm composition \\cite{MakrisKKK19}, expressive music performance \\cite{Sageev19}, and multi-part music harmonization \\cite{YanLVD18}. Other recent studies have started to combine convolutional structures and explore using VAE, GAN (Generative Adversarial Network) and Transformer for music generation. MidiNet \\cite{YangCY17} and MuseGAN \\cite{DongHYY18} combine CNN (Convolutional Neural Network) and GAN architectures to generate music with multiple MIDI tracks. MusicVAE \\cite{RobertsERHE18} introduces a hierarchical decoder into the general VAE model to generate music note sequences with long-term structure. Due to the impressive results of the Transformer in neural machine translation, Huang et al. modify this sequence model's relative attention mechanism and generate minutes-long music clips with high long-range structural coherence \\cite{MusicTransformer}. In addition to the study of music structure, researchers also employ deep generative models to model music styles, such as producing jazz melodies through two LSTM networks \\cite{JohnsonKW17} or harmonizing a user-made melody in Bach's style \\cite{Coconet2019}. Most of them are trained on a dataset of a specific style, so the generated music can only mimic the single style embodied in the training data. \n\nMoreover, little attention has been paid to Chinese music generation with deep learning techniques, especially for modeling the style of Chinese music, though some researchers utilize a Seq2Seq model to create multi-track Chinese popular songs from scratch \\cite{ZhuLYQLZZWXC18} or generate melodies of Chinese popular songs from given lyrics \\cite{Bao2018}. The existing generation algorithms for Chinese traditional songs are mostly based on non-deep models such as Markov models \\cite{HuangLNC16} and genetic algorithms \\cite{ZhengWLSGGW17}. These studies cannot break through the bottleneck in melody creation and style imitation. \n\nSome recent work in the domain of music style transfer has begun to generate music with mixed styles or to recombine music content and style. For example, Mao et al. 
propose an end-to-end generative model to produce music with a mixture of different classical composer styles \\cite{MaoSC18}. Lu et al. study the deep style transfer between Bach chorales and Jazz \\cite{LuS18a}. Nakamura et al. complete melody style conversion among different music genres \\cite{NakamuraSNY19}. The above studies are based on music data from different genres or composing periods. However, the regional style generation of Chinese folk songs studied here models style within the same genre, which is more challenging.\n\n\\section{Approach}\n\n\\subsection{Music Representation}\nA monophonic folk song $M$ can be represented as a sequence of note tokens, each a combination of pitch, interval and rhythm. Pitch and rhythm are essential information for music. The interval is an important indicator for distinguishing regional music features, especially for Han Chinese folk songs \\cite{Han1989}. The detailed processing is described below and shown in Fig.~\\ref{fig:1}.\n\n\\begin{figure}[htb]\n\t\\centering\n\t\\includegraphics[width=3.6in]{data_representation.png}\n\t\\caption{Chinese folk song representation including the pitch sequence, interval sequence and rhythm sequence.}\n\t\\label{fig:1}\n\\end{figure}\n\n\\begin{itemize}\n\\item \\textbf{Pitch Sequence} $P$: Sequence of pitch tokens consisting of the pitch types present in the melody sequence. Rest notes are assigned a special token.\n\t\n\\item \\textbf{Interval Sequence} $I$: Sequence of interval tokens derived from $P$. Each interval token is represented as the deviation between the next pitch and the current pitch in steps of semitones.\n\t\n\\item \\textbf{Rhythm Sequence} $R$: Sequence of duration tokens consisting of the duration types present in the melody sequence.\n\\end{itemize}\n\n\n\\subsection{Model}\n\nAs mentioned in Section 1, the regional characteristics of Chinese folk songs are mainly reflected in their pitch patterns and rhythm patterns. In some areas, the regional characteristics of folk songs depend more on pitch features, while in other areas the rhythm patterns are more distinctive. For example, in terms of pitch, folk songs in northern Shaanxi tend to use the perfect fourth, Hunan folk songs often use the combination of major third and minor third \\cite{Miao1985}, while Uighur folk songs employ non-pentatonic scales. In terms of rhythm, Korean folk songs have their own special rhythm system named \\emph{Jangdan}, while Mongolian \\emph{Long Song}s generally prefer long-duration notes \\cite{Du2014}. \n\nInspired by the above observations, it is necessary to further refine the style of folk songs in both pitch and rhythm. Therefore, we propose a VAE-based model to separate the pitch space and the rhythm space, and to further disentangle the style and content spaces from the pitch and rhythm spaces, respectively. \n\n\\subsubsection{VAE and its Latent Space Division}\n\nThe VAE introduces a continuous latent variable $z$ from a Gaussian prior $p_{\\theta}(z)$, and then generates the sequence $x$ from the distribution $p_{\\theta}(x|z)$ \\cite{KingmaW13}. Concisely, a VAE includes an encoder $q_{\\phi}(z|x)$, a decoder $p_{\\theta}(x|z)$ and a latent variable $z$. 
The loss function of the VAE is \n\n\\begin{equation}\nJ(\\phi, \\theta) = -\\mathbb{E}_{q_{\\phi}(z|x)}[{\\rm log}p_{\\theta}(x|z)] + \\beta KL(q_{\\phi}(z|x)\\|p_{\\theta}(z))\n\\label{eq:1}\n\\end{equation}\n\nwhere the first term denotes the reconstruction loss, and the second term refers to the Kullback-Leibler (KL) divergence, which is added to regularize the latent space. The weight $\\beta$ is a hyperparameter to balance the two loss terms. By setting $\\beta<1$, we can improve the generation quality of the model \\cite{Higgins2017}. $p_{\\theta}(z)$ is the prior and generally obeys the standard normal distribution, i.e., $p_{\\theta}(z) = \\mathcal{N}(0,I)$. The posterior approximation $q_{\\phi}(z|x)$ is parameterized by the encoder and is also assumed to be Gaussian; the encoder outputs its mean and variance, and the reparameterization trick is used to sample from it.\n\n\\begin{figure}[htb]\n\t\\centering\n\t\\includegraphics[width=4.4in]{model1.png}\n\t\\caption{Architecture of our model; it consists of the melody encoder $E$, the pitch decoder $D_P$, the rhythm decoder $D_R$ and the melody decoder $D_M$.}\n\t\\label{fig:2}\n\\end{figure}\n\nWith labeled data, we can disentangle the latent space of the VAE in a way that different parts of the latent space correspond to different external attributes, which makes the generation process more controllable. In our case, we assume that the latent space can first be divided into two independent parts, i.e., the \\emph{pitch variable} and the \\emph{rhythm variable}. The pitch variable learns the pitch features of Chinese folk songs, while the rhythm variable captures the rhythm patterns. Further, we assume both the pitch variable and the rhythm variable consist of two independent parts, which refer to the music \\emph{style variable} and the music \\emph{content variable}, respectively. \n\nSpecifically, given a melody sequence $M=\\{m_1,m_2,\\cdots,m_n\\}$ as the input sequence with $n$ tokens (notes), where $m_k$ denotes the feature combination of the corresponding pitch token $p_k$, interval token $i_k$ and rhythm token $r_k$, we first encode $M$ and obtain four latent variables from linear transformations of the encoder's output. The four latent variables are the pitch style variable $Z_{P_s}$, the pitch content variable $Z_{P_c}$, the rhythm style variable $Z_{R_s}$ and the rhythm content variable $Z_{R_c}$, respectively. Then, we concatenate $Z_{P_s}$ and $Z_{P_c}$ into the total pitch variable $Z_{P}$, which is used to predict the pitch sequence $\\hat{P}$. The same operation is applied to the rhythm variables to predict $\\hat{R}$. Finally, all latent variables are concatenated to predict the total melody sequence $\\hat{M}$. The architecture of our model is shown in Fig.~\\ref{fig:2}. \n\n\n\nBased on the above assumptions and operations, it is easy to extend the basic loss function:\n\\begin{equation}\nJ_{vae} = H(\\hat{P},P) + H(\\hat{R},R) + BCE(\\hat{M},M) + \\beta KL_{total}\n\\label{eq:2}\n\\end{equation}\n\nwhere $H(\\cdot,\\cdot)$ and $BCE(\\cdot,\\cdot)$ denote the cross entropy and binary cross entropy between predicted and target values, respectively, and $KL_{total}$ denotes the total KL loss of the four latent variables.\n\n\n\\subsubsection{Adversarial Training for Latent Space Disentanglement}\n\nHere, we propose an adversarial-training-based method to disentangle pitch from rhythm and music style from content. 
The detailed processing is shown in Fig.~\\ref{fig:3}.\n\n\\begin{figure}[htb]\n\t\\centering\n\t\\includegraphics[width=4.5in]{model2.png}\n\t\\caption{Detailed processing of the latent space disentanglement. The\n\t\tdashed lines indicate the adversarial training parts.}\n\t\\label{fig:3}\n\\end{figure}\n\nAs shown in Fig.~\\ref{fig:2}, we use two parallel decoders to reconstruct the pitch sequence and the rhythm sequence, respectively. Ideally, we expect the pitch variable $Z_P$ and the rhythm variable $Z_R$ to be independent of each other. In practice, however, pitch features may be implicit in the rhythm variable and vice versa, since the two variables are sampled from the same encoder output. \n\nIn order to separate the pitch and rhythm variables explicitly, temporal supervision is employed, similar to the work on disentangled representations of pitch and timbre \\cite{IJCAI2019}. Specifically, we deliberately feed each latent variable to the wrong decoder and force the decoder to predict nothing, i.e., an all-zero sequence, resulting in the following two loss terms based on cross entropy:\n\n\\begin{equation}\nJ_{adv,P} = -\\Sigma[\\mathbf{0}\\cdot {\\rm log}\\hat{P}_{adv} + (1-\\mathbf{0})\\cdot {\\rm log}(1-\\hat{P}_{adv})]\n\\label{eq:3}\n\\end{equation}\n\n\\begin{equation}\nJ_{adv,R} = -\\Sigma[\\mathbf{0}\\cdot {\\rm log}\\hat{R}_{adv} + (1-\\mathbf{0})\\cdot {\\rm log}(1-\\hat{R}_{adv})]\n\\label{eq:4}\n\\end{equation}\n\nwhere $\\mathbf{0}$ denotes the all-zero sequence and `$\\cdot$' denotes the element-wise product.\n\nFor the disentanglement of music style and content, we first obtain the total music style variable $Z_s$ and content variable $Z_c$:\n\n\\begin{equation}\nZ_s = Z_{P_s} \\oplus Z_{R_s}, Z_c = Z_{P_c} \\oplus Z_{R_c}\n\\label{eq:5}\n\\end{equation}\n\nwhere $\\oplus$ denotes the concatenation operation. \n\nThen two classifiers are defined to force the separation of style and content in the latent space using the regional information. The style classifier ensures that the style variable is discriminative for the regional label, while the adversary classifier forces the content variable to be uninformative about the regional label. The style classifier is trained with the cross entropy defined by\n\\begin{equation}\nJ_{dis, Z_s} = -\\Sigma y {\\rm log}p(y|Z_s)\n\\label{eq:6}\n\\end{equation}\n\nwhere $y$ denotes the ground truth and $p(y|Z_s)$ is the predicted probability distribution from the style classifier.\n\nThe adversary classifier is trained by maximizing the empirical entropy of its prediction \\cite{FuTPZY18}\\cite{Vineet2018}. The training process is divided into two steps. First, the parameters of the adversary classifier are trained independently, i.e., the gradients of the classifier do not propagate back to the VAE. Second, we compute the empirical entropy based on the output from the adversary classifier, as defined by\n\n\\begin{equation}\nJ_{adv, Z_c} = -\\Sigma p(y|Z_c) {\\rm log}p(y|Z_c)\n\\label{eq:7}\n\\end{equation}\nwhere $p(y|Z_c)$ is the predicted probability distribution from the adversary classifier. 
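To make these loss terms concrete, the following is a schematic PyTorch sketch of Eqs.~\\eqref{eq:3}, \\eqref{eq:4}, \\eqref{eq:6} and \\eqref{eq:7} (a minimal illustration under our own naming assumptions: the function and tensor names are hypothetical and not from the authors' code; we assume the adversarial decoder outputs are sigmoid probabilities and the classifier outputs are logits).\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef disentanglement_losses(p_adv, r_adv, style_logits,\n                           content_logits, region):\n    # Eqs. (3) and (4): the 'wrong' decoder is forced to\n    # predict an all-zero sequence (temporal supervision).\n    j_adv_p = F.binary_cross_entropy(p_adv, torch.zeros_like(p_adv))\n    j_adv_r = F.binary_cross_entropy(r_adv, torch.zeros_like(r_adv))\n    # Eq. (6): the style classifier must predict the regional label.\n    j_dis_zs = F.cross_entropy(style_logits, region)\n    # Eq. (7): empirical entropy of the adversary classifier's\n    # prediction; subtracted in the total objective below, so\n    # minimizing the total maximizes this entropy.\n    p_c = F.softmax(content_logits, dim=-1)\n    j_adv_zc = -(p_c * torch.log(p_c + 1e-8)).sum(dim=-1).mean()\n    return j_adv_p + j_adv_r + j_dis_zs - j_adv_zc\n\\end{verbatim}\n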
\n\nIn summary, the overall training objective of our model is the minimization of the loss function defined by\n\n\\begin{equation}\nJ_{total} = J_{vae} + J_{adv,P} + J_{adv, R} + J_{dis, Z_s} - J_{adv, Z_c}\n\\label{eq:8}\n\\end{equation}\n\n\n\\section{Experimental Results and Analysis}\n\\subsection{Datasets and Preprocessing}\n\nThe lack of large-scale Chinese folk song datasets makes it impossible to apply deep learning methods for automatic generation and analysis of Chinese music. Therefore, we digitize more than 2000 Chinese folk songs in MIDI format from the record of \\emph{Chinese Folk Music Integration}\\footnote{\\emph{Chinese Folk Music Integration} is one of the major national cultural projects led by the former Ministry of Culture, National Ethnic Affairs Commission and Chinese Musicians Association from 1984 to 2001. This set of books contains more than 40000 selected folk songs of different nationalities. The project website is \\url{http:\/\/www.cefla.org\/project\/book }.}. These songs comprise Han folk songs from the Wu dialect district, the Xiang dialect district\\footnote{According to the analysis of Han Chinese folk songs \\cite{Han1989}\\cite{Du1993}, the folk song style of each region is closely related to the local dialects. Therefore, we classify Han folk songs based on dialect divisions. The Wu dialect district here mainly includes Southern Jiangsu, Northern Zhejiang and Shanghai. The Xiang dialect district here mainly includes Yiyang, Changsha, Hengyang, Loudi and Shaoyang in Hunan province.} and northern Shaanxi, as well as folk songs of three ethnic minorities: Uygur in Xinjiang, Mongolian in Inner Mongolia and Korean in northeast China.\n\nAll melodies in the dataset are transposed to the key of C. We use the Pretty-midi python toolkit \\cite{raffel2014intuitive} to process each MIDI file, and count the numbers of pitch, interval and rhythm token types as the feature dimensions of the corresponding sequences, which are 40, 46 and 58, respectively. Then pitch, interval and rhythm sequences are extracted from the raw note sequences with an overlapping window of length 32 tokens and a hop size of 1. Finally, we get 65508 ternary sequences in total. The regional labels of the token sequences drawn from the same song are consistent.\n\n\\subsection{Experimental Setup}\n\n\\begin{figure}[htb]\n\t\\centering\n\t\\includegraphics[width=2.8in]{encoder.png}\n\t\\caption{Encoder with residual connections.}\n\t\\label{fig:4}\n\\end{figure}\n\nIn order to extract melody features into the latent space effectively, we employ a bidirectional GRU with residual connections \\cite{HeZRS16} as the encoder, as illustrated in Fig.~\\ref{fig:4} (a minimal code sketch is given at the end of this subsection). The decoder is a standard two-layer GRU. All recurrent hidden sizes in this paper are 128. Both the style classifier and the adversary classifier are a single linear layer with a Softmax function. The size of the pitch style variable and the rhythm style variable is set to 32, while the size of the pitch content variable and the rhythm content variable is 96. During training, the KL term coefficient $\\beta$ increases linearly from 0.0 to 0.15 to alleviate posterior collapse.\n\nThe Adam optimizer is employed with an initial learning rate of 0.01 for VAE training, and a vanilla SGD optimizer with an initial learning rate of 0.005 for the classifiers. 
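For concreteness, the following is a minimal PyTorch sketch of such a residual bidirectional GRU encoder (the exact layer arrangement is our guess based on Fig.~\\ref{fig:4}, not the authors' code; the input projection is an assumption made so that the residual dimensions match).\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass ResidualBiGRUEncoder(nn.Module):\n    # Bidirectional GRU encoder with a residual connection\n    # (cf. Fig. 4); hidden size 128 as in the paper.\n    def __init__(self, input_dim, hidden=128):\n        super().__init__()\n        self.proj = nn.Linear(input_dim, 2 * hidden)\n        self.gru1 = nn.GRU(2 * hidden, hidden, batch_first=True,\n                           bidirectional=True)\n        self.gru2 = nn.GRU(2 * hidden, hidden, batch_first=True,\n                           bidirectional=True)\n\n    def forward(self, x):\n        h = self.proj(x)               # (batch, seq, 2*hidden)\n        out1, _ = self.gru1(h)\n        out2, _ = self.gru2(out1 + h)  # residual connection\n        # summary vector from which the four latent variables\n        # are obtained by linear transformations\n        return out2.mean(dim=1)\n\\end{verbatim}\n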
All test models are trained for 30 epochs and the mini-batch size is set to 50.\n\n\n\n\\subsection{Evaluation and Results Analysis}\n\nTo evaluate the generated music, we employ the following metrics from objective and subjective perspectives.\n\n\\begin{itemize}\n\t\\item \\textbf{Reconstruction Accuracy}: We calculate the accuracy between the target note sequences and the reconstructed note sequences on our test set to evaluate the music generation quality.\n\t\\item \\textbf{Style Recognition Accuracy}: We train a separate style evaluation classifier using the architecture in Fig.~\\ref{fig:4} to predict the regional style of the tunes that are generated using different latent variables. The classifier achieves a reasonable regional classification accuracy of 82.71\\% on an independent test set. \n\t\\item \\textbf{Human Evaluation}: As humans should be the ultimate judges of creations, human evaluations are conducted to overcome the discrepancies between objective metrics and user studies. We invite three experts who are well educated and have expertise in Chinese music. Each expert is asked to listen to five randomly selected folk songs of each region on-site, and rate each song on a 5-point scale from 1 (very low) to 5 (very high) according to the following two criteria: a) \\emph{Musicality}: Does the song have a clear music pattern or structure? b) \\emph{Style Significance}: Does the song's style match the given regional label?\n\\end{itemize}\n\n\\begin{table}[htb]\n\t\\centering\n\t\\caption{Results of automatic evaluations.}\n\t\\begin{tabular}{ccc}\n\t\t\\hline\n\t\tObjectives & Reconstruction Accuracy & Style Recognition Accuracy \\\\\n\t\t\\hline\n\t\t$J_{vae}$ & 0.7684 & 0.1726\/0.1772\/0.1814 \\\\\n\t\t\\hline\n\t\t$J_{vae}, J_{adv,P,R}$ & 0.7926 & 0.1835\/0.1867\/0.1901 \\\\\n\t\t\\hline\n\t\t$J_{vae}, J_{adv, P,R}, J_{adv, Z_c}$ & 0.7746 & 0.4797\/0.4315\/0.5107 \\\\\n\t\t\\hline\n\t\t$J_{vae}, J_{adv, P,R}, J_{dis, Z_s}$ & \\textbf{0.8079} & 0.5774\/0.5483\/0.6025 \\\\\n\t\t\\hline\n\t\t$J_{total}$ & 0.7937 & \\textbf{0.6271}\/\\textbf{0.5648}\/\\textbf{0.6410} \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\label{tab:1}\n\\end{table}\t\t\n\n\n\nTab~\\ref{tab:1} shows all evaluation results of our models. The three values in the third column denote the accuracies derived from the following three kinds of latent variables: a) the concatenation of the pitch style variable $Z_{P_s}$ and a random variable sampled from a standard normal distribution; b) the concatenation of the rhythm style variable $Z_{R_s}$ and the random variable; c) the concatenation of the total style variable $Z_s$ and the random variable. $J_{adv,P,R}$ denotes the sum of $J_{adv,P}$ and $J_{adv, R}$. \n\n\n\nThe model with $J_{total}$ achieves the best results in style recognition accuracy and a sub-optimal result in reconstruction accuracy. The model without any constraints performs poorly on the two objective metrics. The addition of $J_{adv,P,R}$ improves the reconstruction accuracy but fails to bring meaningful improvement to style classification. With the addition of either $J_{adv, Z_c}$ or $J_{dis, Z_s}$, all three recognition accuracies improve considerably, which indicates that the latent spaces are disentangled into style and content subspaces as expected. 
Moreover, employing only the pitch style or the rhythm style for style recognition also obtains fair results, demonstrating that the disentanglement of pitch and rhythm is effective.\n\n\\begin{figure}[htb]\n\t\\centering\n\t\\includegraphics[width=4.6in]{human_new.png}\n\t\\caption{Results of human evaluations including musicality and style significance. The heights of the bars represent the means of the ratings and the error bars represent the standard deviations.}\n\t\\label{fig:5_add}\n\\end{figure}\n\nThe results of the human evaluations are shown in Fig.~\\ref{fig:5_add}. In terms of musicality, all test models have similar performance, which demonstrates that the addition of the extra loss functions has no negative impact on the generation quality of the original VAE. Moreover, the model with the total objective $J_{total}$ performs significantly better than the other models in terms of style significance (two-tailed $t$-test, $p<0.05$), which is consistent with the results in Tab.~\\ref{tab:1}.\n\n\n\n\\begin{figure*}[!t]\n\t\\centering\n\t\\subfigure[Pitch style latent space ]{\\includegraphics[width=2.2in]{ps_space.png} \\label{fig:5a}}~~\n\t\\subfigure[Rhythm style latent space ]{\\includegraphics[width=2.2in]{ds_space.png} \\label{fig:5b}}\n\t\\subfigure[Total style latent space]{\\includegraphics[width=2.2in]{style_space.png} \\label{fig:5c}}~~\n\t\\subfigure[Total content latent space ]{\\includegraphics[width=2.2in]{content_space.png} \\label{fig:5d}}\n\t\\caption{t-SNE visualization of the model with $J_{total}$.}\n\t\\label{fig:5} \n\\end{figure*}\n\nFig.~\\ref{fig:5} shows the t-SNE visualization \\cite{maaten2008visualizing} of our model with $J_{total}$. We can observe that music with different regional labels is noticeably separated in the pitch style space, the rhythm style space and the total style space, but looks chaotic in the content space. This further demonstrates the validity of our proposed methods for disentangling pitch, rhythm, style and content.\n\nFinally, we present in Fig.~\\ref{fig:6} several examples\\footnote{Online Supplementary Material: \\url{https:\/\/csmt201986.github.io\/mgvaeresults\/}.} of folk songs generated with our methods given regional labels. As seen, we can create novel folk songs with prominent regional features, such as long-duration notes and large intervals in Mongolian songs, or the combination of major and minor thirds in Hunan folk songs. However, there are still several failed examples. For instance, a few generated songs repeat the same melody pattern. More commonly, some songs do not show the correct regional features, especially when the given regions belong to Han nationality areas. This may be due to the fact that folk tunes in those regions share the same tonal system.\n\n\n\\begin{figure}[htb]\n\t\\centering\n\t\\includegraphics[width=4.8in]{music_example.png}\n\t\\caption{Examples of folk song generation given regional labels. In order to align each row, the scores of several regions are not completely displayed.}\n\t\\label{fig:6}\n\\end{figure}\n\n\\section{Conclusion}\n\nIn this paper, we focus on how to capture the regional style of Chinese folk songs and generate novel folk songs with specific regional labels. We first collect a database including more than 2000 Chinese folk songs for analysis and generation. 
Then, inspired by the observation of regional characteristics in Chinese folk songs, a model named MG-VAE based on adversarial learning is proposed to disentangle the pitch, rhythm, style and content variables in the latent space of the VAE. Three metrics covering automatic and subjective evaluation are used in our experiments to evaluate the proposed model. Finally, the experimental results and the t-SNE visualization show that the disentanglement of the four variables is successful and that our model is able to generate folk songs with controllable regional styles. In the future, we plan to extend the proposed model to generate longer melody sequences using more powerful models like Transformers, and to explore the evolution of tune families like \\emph{Mo Li Hua Diao} and \\emph{Chun Diao} among different regions.\n\n\n\\bibliographystyle{spmpsci} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Analysis and Results}\n\\label{sec:results}\n\\subsection{RQ1- Influence of grammatical, temporal and sentimental sentence characteristics on FR\/NFR classification}\nFor the classification of functional and non-functional requirements, we used the approach of Hussain et al. \\cite{hussain2008using}. We applied this approach to the unprocessed data set of requirements as well as to the processed one resulting from our preprocessing.\n\n{\\it Classification Process:}\nFirstly, we clean up the respective data set by iteratively removing encoding and formatting errors to enable further processing. Subsequently, we apply the part-of-speech tagger of the Stanford Parser \\cite{Klein:2003:AUP:1075096.1075150} to assign parts of speech to each word in each requirement.\n\nBased on the tagging of all requirements, we extract the five syntactic features \\textit{number of adjectives}, \\textit{number of adverbs}, \\textit{number of adverbs that modify verbs}, \\textit{number of cardinals}, and \\textit{number of degree adjective\/adverbs}. For each feature, we determine its rank based on the feature's probability of occurrence in the requirements of the data set. Following Hussain et al.~\\cite{hussain2008using}, we selected a cutoff threshold of $>0.8$. Therefore, we determined \\textit{number of cardinals} and \\textit{number of degree of adjectives\/adverbs} as valid features among all five for the unprocessed data set. For the processed data set, we identified \\textit{number of cardinals} and \\textit{number of adverbs} as valid features.\n\nAfterwards, we extract the required keyword features for the eight defined part-of-speech keyword groups \\textit{adjective}, \\textit{adverb}, \\textit{modal}, \\textit{determiner}, \\textit{verb}, \\textit{preposition}, \\textit{singular noun}, and \\textit{plural noun}. For each keyword group, we calculate the smoothed probability measure and select the respective cutoff threshold manually to determine the most discriminating keywords for each data set, following Hussain et al. 
\\cite{hussain2008using}.\n\nOur final feature list for the unprocessed data set consisted of the ten features \\textit{number of cardinals}, \\textit{number of degree of adjectives\/adverbs}, \\textit{adjective}, \\textit{adverb}, \\textit{modal}, \\textit{determiner}, \\textit{verb}, \\textit{preposition}, \\textit{singular noun}, and \\textit{plural noun}.\n\nOur final feature list for the processed data set consisted of the ten features \\textit{number of cardinals}, \\textit{number of adverbs}, \\textit{adjective}, \\textit{adverb}, \\textit{modal}, \\textit{determiner}, \\textit{verb}, \\textit{preposition}, \\textit{singular noun}, and \\textit{plural noun}.\n\nTo classify each requirement of the respective data set, we implemented a Java-based feature extraction prototype that parses all requirements from the data set and extracts the values for all ten features mentioned above. Subsequently, we used Weka \\cite{witten2016data} to train a C4.5 decision tree \\cite{quinlan2014c4}, which ships with Weka as the J48 implementation. Following Hussain et al. \\cite{hussain2008using}, we set the minimum number of instances per leaf to 6 to counter possible over-fitting.\n\n\n\n\nSince the data set, with 625 requirements, was not very large, we performed a 10-fold cross-validation. In the following, we report our classification results for each data set.\\\\\n{\\bf Results:} The classification of the \\emph{unprocessed data set} results in $89.92\\%$ correctly classified requirements with a weighted average precision and recall of $0.90$. The classification of the \\emph{processed data set} results in $94.40\\%$ correctly classified requirements with a weighted average precision of $0.95$ and recall of $0.94$. \\tablename{ \\ref{tb:classification_unprocessed}} and \\tablename{ \\ref{tb:classification_processed}} show the details. By applying our approach, we could achieve an improvement of $4.48\\%$ correctly classified requirements. In total, we could correctly classify $28$ additional requirements, which consist of $9$ functional and $19$ non-functional ones.
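\n\nFor reproducibility, the following sketch shows a rough scikit-learn analogue of this classification step; it is our own approximation (Weka's J48 corresponds roughly to a decision tree with the entropy criterion), and \\texttt{X} and \\texttt{y} stand for the extracted 10-dimensional feature vectors and the FR\/NFR labels:\n\n\\begin{verbatim}\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.tree import DecisionTreeClassifier\n\n# min_samples_leaf=6 mirrors J48's minimum of 6 instances per leaf\nclf = DecisionTreeClassifier(criterion='entropy', min_samples_leaf=6)\nscores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation\nprint(scores.mean())\n\\end{verbatim}\n\nWhen classifying NFRs into sub-categories, the influence of our preprocessing is much stronger. The last two columns of \\tablename{ \\ref{tab:compare}} show the overall precision and recall of six different machine learning algorithms for sub-classifying NFRs into the categories listed in columns 1--10 of the table. 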
For all algorithms, results are dramatically better when using the preprocessed data (column Total P) compared to using the raw data (column Total UP).\n\n\n\\begin{table}[htbp]\n\t\\centering\n\t\\caption{Classification results of the unprocessed data set}\n\n\t\\label{tb:classification_unprocessed}\n\t\\resizebox{\\linewidth}{!}{\\begin{tabular}{c|c|c|c|c|c|c|}\n\t\t\\cline{2-7}\n\t\t& \\begin{tabular}[c]{@{}c@{}}Correctly \\\\ Classified\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Incorrectly \\\\ Classified\\end{tabular} & Precision & Recall & F-Measure & Kappa \\\\ \\hline\n\t\t\\multicolumn{1}{|c|}{NFR} & 325 (87.84\\%) & 45 (12.16\\%) & 0.95 & 0.88 & 0.91 & \\multirow{3}{*}{0.79} \\\\ \\cline{1-6}\n\t\t\\multicolumn{1}{|c|}{FR} & 237 (92.94\\%) & 18 (7.06\\%) & 0.84 & 0.93 & 0.88 & \\\\ \\cline{1-6}\n\t\t\\multicolumn{1}{|c|}{Total} & 562 (89.92\\%) & 63 (10.08\\%) & 0.90 & 0.90 & 0.90 & \\\\ \\hline\n\t\\end{tabular}}\n\\end{table}\n\n\n\n\n\n\\begin{table}[htbp]\n\t\\centering\n\t\\caption{Classification results of the processed data set}\n\n\t\\label{tb:classification_processed}\n\t\\resizebox{\\linewidth}{!}{\\begin{tabular}{c|c|c|c|c|c|c|}\n\t\t\t\\cline{2-7}\n\t\t\t& \\begin{tabular}[c]{@{}c@{}}Correctly \\\\ Classified\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Incorrectly \\\\ Classified\\end{tabular} & Precision & Recall & F-Measure & Kappa \\\\ \\hline\n\t\t\t\\multicolumn{1}{|c|}{NFR} & 344 (92.97\\%) & 26 (7.03\\%) & 0.98 & 0.93 & 0.95 & \\multirow{3}{*}{0.89} \\\\ \\cline{1-6}\n\t\t\t\\multicolumn{1}{|c|}{FR} & 246 (96.47\\%) & 9 (3.53\\%) & 0.90 & 0.97 & 0.93 & \\\\ \\cline{1-6}\n\t\t\t\\multicolumn{1}{|c|}{Total} & 590 (94.40\\%) & 35 (5.60\\%) & 0.95 & 0.94 & 0.94 & \\\\ \\hline\n\t\\end{tabular}}\n\n\\end{table}\n\n\\vspace{-3mm}\n\n\n\n\n\n\n\\subsection{RQ2- Classifying Non-functional Requirements}\nIn this section, we describe the machine learning algorithms we used to classify NFRs. The performance of each method is assessed in terms of its recall and precision.\n\n\n\\begin{figure*}[!ht]\n\\centering\n\n\\subfloat[{\\scriptsize Hopkins statistic to assess the clusterability of the data set (hopkins-stat = 0.1)}]{\\includegraphics[scale=0.26]{Figures\/Tendency}}\n\\subfloat[Hierarchical= 0.13]{\\includegraphics[scale=0.28]{Figures\/HCL}}\n\\subfloat[K-means= 0.1]{\\includegraphics[scale=0.28]{Figures\/Kmeans}}\n\\subfloat[Hybrid= 0.13]{\\includegraphics[scale=0.28]{Figures\/HB}}\n\\subfloat[A visual representation of the confusion matrix (BNB algorithm)]{\\includegraphics[scale=0.2]{Figures\/ConfMatrix}}\\\\\n\\caption{Detailed visual representation of classifying NFRs }\n\\label{fig:cluster}\n\\end{figure*}\n\n\n\n\\subsubsection{Topic Modeling}\nTopic modeling is an unsupervised text analysis technique that groups small sets of highly correlated words from a large volume of unlabelled text \\cite{TM1} into {\\it topics}. \n\n\n{\\bf Algorithms:} The {\\it Latent Dirichlet Allocation (LDA)} algorithm classifies documents based on the frequency of word co-occurrences. Unlike LDA, the {\\it Biterm Topic Model} (BTM) models topics based on word co-occurrence patterns, learning topics by exploring word-word (i.e., biterm) patterns. 
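\n\nAs a rough illustration of the LDA step, the following gensim-based sketch fits a topic model over tokenized requirement texts; the tokenization, the variable \\texttt{requirements} and the choice of \\texttt{num\\_topics} are our assumptions, not details of the original setup:\n\n\\begin{verbatim}\nfrom gensim.corpora import Dictionary\nfrom gensim.models import LdaModel\n\ndocs = [r.lower().split() for r in requirements]  # tokenized NFR texts\nvocab = Dictionary(docs)\ncorpus = [vocab.doc2bow(d) for d in docs]\nlda = LdaModel(corpus, num_topics=10, id2word=vocab, random_state=0)\nprint(lda.show_topics(num_words=5))  # top words per modeled topic\n\\end{verbatim}\n\n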
Some recent studies on the application of topic modeling to the classification of short text documents have stated that the BTM approach is better suited to modeling short and sparse texts, such as those typical of requirements specifications.\n\n\n\n\n\n\n\n\\begin{table*}\n\\centering\n\\footnotesize\n\\caption{Comparison between classification algorithms for classifying non-functional requirements [(U)P= (Un)Processed]}\n\\label{tab:compare}\n\n\\begin{tabular}{ |c|cc|cc|cc|cc|cc|cc|cc|cc|cc|cc|cc|cc| cc| }\n \\hline\n{\\bf Algorithm}& \\multicolumn{2}{c|}{\\bf A}& \\multicolumn{2}{c|}{\\bf US}& \\multicolumn{2}{c|}{\\bf SE}& \\multicolumn{2}{c|}{\\bf SC}& \\multicolumn{2}{c|}{\\bf LF}& \\multicolumn{2}{c|}{\\bf L}& \\multicolumn{2}{c|}{\\bf MN}& \\multicolumn{2}{c|}{\\bf FT}& \\multicolumn{2}{c|}{\\bf O}& \\multicolumn{2}{c|}{\\bf PE} & \\multicolumn{2}{c|}{\\bf PO}& \\multicolumn{2}{c|}{\\bf Total [P]}& \\multicolumn{2}{c|}{\\bf Total [UP]}\\\\ \\cline{2-27}\n&{\\bf R}&{\\bf P} &{\\bf R}&{\\bf P} & {\\bf R}&{\\bf P}& {\\bf R}&{\\bf P} &{\\bf R}&{\\bf P} &{\\bf R}&{\\bf P} &{\\bf R}&{\\bf P} &{\\bf R}&{\\bf P} &{\\bf R}&{\\bf P} &{\\bf R}&{\\bf P} & {\\bf R}&{\\bf P} & {\\bf R}&{\\bf P}&{\\bf R}&{\\bf P}\\\\\\hline\n{\\bf LDA}&95&60 &61&76 & 87&87& 81&57 &60&85 &47&20 &70&52 &10&2 &35&70 &70&95 & -&- &\\cellcolor{Gray}62&\\cellcolor{Gray}62&31&31\\\\\\cline{2-27}\n{\\bf BTM}&0&0 &6&12 & 13&18& 9&8 &5&7 &0&0 &0&0 &40&17 &0&0 &18&43 & -&- &\\cellcolor{Gray}8&\\cellcolor{Gray}8&3&3\\\\\\cline{2-27}\n{\\bf Hierarchical }&13&14 &25&20 &24&17 &16&29 &5&3 &6&15 &19&35 &18&40 &32&29 &26&22 &-&- & \\cellcolor{Gray}21&\\cellcolor{Gray}21&16&16\\\\ \\cline{2-27}\n{\\bf K-means}&10&23 &19&14 &29&18 &14&14 &21&21 &8&15 &22&47 &18&40 &26&30 &31&11& -&- &\\cellcolor{Gray}20&\\cellcolor{Gray}20&15&15\\\\ \\cline{2-27}\n{\\bf Hybrid}&15&14 &27&22& 29&18& 20&4& 26&24 &6&15 &17&35 &18&40 &22&27 &26&22& -&- &\\cellcolor{Gray}22&\\cellcolor{Gray}22&19&17\\\\\\cline{2-27}\n{\\bf Na\\\"{i}ve Bayes} &\\cellcolor{Gray}90&\\cellcolor{Gray}90&\\cellcolor{Gray} 97&\\cellcolor{Gray}77 &\\cellcolor{Gray}97&\\cellcolor{Gray}100 &\\cellcolor{Gray}83&\\cellcolor{Gray}83 &\\cellcolor{Gray}94&\\cellcolor{Gray}94 &\\cellcolor{Gray}75&\\cellcolor{Gray}100 &\\cellcolor{Gray}90&\\cellcolor{Gray}82 &\\cellcolor{Gray}97&\\cellcolor{Gray}90 &\\cellcolor{Gray}78&\\cellcolor{Gray}91 &\\cellcolor{Gray}90&\\cellcolor{Gray}100 &-&- &\\cellcolor{Gray}91&\\cellcolor{Gray}90&45&45\\\\\\hline\n\\end{tabular}\n\n\\end{table*}\n{\\bf Results and Evaluation:} The modeled topics for both LDA and BTM, including the top frequent words and the NFR assigned to each topic, are provided in our source code package\\footnote{http:\/\/wcm.ucalgary.ca\/zshakeri\/projects}. We determined each topic by the most probable words that are assigned to it. For instance, LDA yields the word set \\{user, access, allow, prior, and detail\\} for the topic describing the Fault Tolerance sub-category, while BTM yields the set \\{failure, tolerance, case, use and data\\}. \n\n\nGenerally, the word lists generated by BTM for each topic are more intuitive than those produced by LDA. This confirms previous research finding that BTM performs better than LDA in terms of modeling and generating the topics\/themes of a corpus consisting of short texts. However, surprisingly, BTM performed much worse than LDA for sub-classifying NFRs, as shown in Table~\\ref{tab:compare}. 
This might be because BTM performs its modeling directly at the corpus level and biterms are generated independently from topics. \n\n\n \\noindent\\makebox[\\linewidth]{\\resizebox{0.3333\\linewidth}{1pt}{$\\bullet$}}\\bigskip \n \\vspace{-3mm}\n\\subsubsection{Clustering}\nClustering is an unsupervised classification technique which categorizes documents into groups based on likeness \\cite{Cluster}. This likeness can be defined as the numerical distance between two documents \\(D_i\\) and \\(D_j\\), which is measured as:\n\\vspace{-3mm}\n\n\\begingroup\n\\everymath{\\scriptstyle}\n\\scriptsize\n\\[\nd(D_i, D_j)= \\sqrt{({d_i}_1-{d_j}_1)^2+ ({d_i}_2-{d_j}_2)^2 + \\cdots + ({d_i}_n-{d_j}_n)^2}\n\\] \\endgroup\n\nwhere \\(({d_i}_1, {d_i}_2, ..., {d_i}_n)\\) and \\(({d_j}_1, {d_j}_2, ..., {d_j}_n)\\) represent the coordinates (i.e., word frequencies) of the two documents. \n\n\n{\\bf Algorithms: }The {\\it Hierarchical} (i.e., agglomerative) algorithm first assigns each document to its own cluster and then iteratively merges the closest clusters until the entire corpus forms a single cluster. Unlike the hierarchical approach, in which we do not need to specify the number of clusters upfront, the {\\it K-means} algorithm assigns documents randomly to {\\it k} bins, computes the location of the centroid of each bin, and computes the distance between each document and each centroid. We set {\\it k=10} to run this algorithm. However, the K-means approach is highly sensitive to the initial random selection of cluster centroids (i.e., means), which might lead to different results each time the algorithm is run. Thus, we also used a {\\it Hybrid} algorithm, which combines the hierarchical and K-means algorithms: it first computes the center (i.e., mean) of each cluster by applying the hierarchical approach, and then runs K-means using this set of cluster centers. \n\n\n{\\bf Results and Evaluation: }Before applying the clustering algorithms, we used the Hopkins (H) statistic to test spatial randomness and assess the {\\it clustering tendency} (i.e., clusterability) of our data set. To this end, we posed the following null hypothesis: {\\it \\(H_0\\): the NFR data set is uniformly distributed and has no meaningful clusters}. As presented in Figure \\ref{fig:cluster} (a), the {\\it H-value} of this test is 0.1 (close to zero), which rejects this hypothesis and indicates that our data set is significantly clusterable. However, as presented in Table \\ref{tab:compare}, the clustering algorithms had poor performance at classifying NFRs. This may imply that the data set under study is quite unstructured and that the sub-categories of NFRs are not well separated, so an unsupervised algorithm (e.g., hierarchical or K-means) cannot accurately achieve segmentation.\n\nMoreover, we used silhouette (s) analysis to assess the cohesion of the resulting clusters, computing the silhouette coefficient with the function {\\it silhouette()} of the {\\it cluster} package. A small {\\it s-value} (i.e., around 0) means that an observation lies between two clusters and has low cohesion. The results of this test and the details of each cluster, including the number of requirements assigned to it and its {\\it s-value}, are illustrated in Figure \\ref{fig:cluster} (b-d). 
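\n\nFor illustration, the hybrid scheme described above can be sketched as follows with scikit-learn and NumPy (the document-term matrix \\texttt{X} and $k=10$ follow the setup above; everything else is our own naming):\n\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.cluster import AgglomerativeClustering, KMeans\n\nk = 10\nhier = AgglomerativeClustering(n_clusters=k).fit(X)\n# use the mean of each hierarchical cluster as a fixed K-means seed\ncenters = np.vstack([X[hier.labels_ == c].mean(axis=0)\n                     for c in range(k)])\nhybrid = KMeans(n_clusters=k, init=centers, n_init=1).fit(X)\n\\end{verbatim}\n\nSeeding K-means with the hierarchical centers removes its dependence on the random initialization noted above.\n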
\n\n \\noindent\\makebox[\\linewidth]{\\resizebox{0.3333\\linewidth}{1pt}{$\\bullet$}}\\bigskip \n \\vspace{-3mm}\n \n\\subsubsection{Na\\\"{i}ve Bayes Classification}\n\nThis approach is a supervised learning method which classifies unseen data based on {\\it Bayes' theorem} \\cite{Naive}, used to calculate the conditional probability:\n\n\\begingroup\n\\everymath{\\scriptstyle}\n\\scriptsize\n\\[\nP(C=c_k\\mid F=f)= \\dfrac{P(F=f\\mid C=c_k) P(C=c_k)}{P(F=f)}\n\\] \\endgroup\n\nwhere \\(C=(c_1, c_2, ..., c_k)\\) represents the classes and \\(F= (f_1, f_2, ..., f_d)\\) is a vector random variable, with one feature vector per document. \n\n\n{\\bf Algorithm:} We use a variation of the multinomial Na\\\"{i}ve Bayes (BNB) algorithm known as {\\it Binarized Na\\\"{i}ve Bayes}. In this method, the term frequencies are replaced by Boolean presence\/absence features. The logic behind this is that word occurrence matters more than word frequency for sentiment classification.\n\n\n{\\bf Results and Evaluation:} To apply this algorithm, we employed 5-fold cross-validation. To reduce data splitting bias, we performed five runs of the 5-fold cross-validation. Overall accuracy is calculated at just over 90\\% with a {\\it p-value} of 2.2e-16. As illustrated in Table \\ref{tab:compare}, results obtained using the BNB algorithm were generally more accurate. All of the NFRs (except for PO) were recalled at relatively high values ranging from 75\\% (i.e., legal requirements) to 97\\% (i.e., security and performance requirements). To provide more detail on the performance of our classifier for each NFR, we visualized the confusion matrix resulting from applying the BNB algorithm (Figure \\ref{fig:cluster} (e)). The columns and rows of this matrix represent the actual (i.e., reference) and the predicted data, respectively. The blocks are colored based on the frequency of the intersection between actual and predicted classes (e.g., the diagonal represents the correct predictions for the actual class). Since some of the NFRs in our data set occur more frequently, we normalized our data set before visualizing the confusion matrix. As illustrated in Figure \\ref{fig:cluster} (e), requirements in classes FT, L, MN, O, and SC were often assigned to class US. We can infer that {\\it the terminology we use for representing usability requirements is very general and covers other NFRs that are indirectly related to usability.} This shows a clear need for additional (or better) sentimental patterns that differentiate usability requirements from other types of NFRs.\n\n\n\\begin{tcolorbox}[colback=white, title= Findings]\n\\footnotesize\n \\textcolor{white}{.................. } \\\\\n\\vspace{-5mm}\n \n{\\bf Finding 1:} Our preprocessing approach positively impacted the performance of the classification into functional and non-functional requirements. We could improve the accuracy from 89.92\\% to 94.40\\%.\\\\\n\\vspace{-2mm}\n\n{\\bf Finding 2:} Our preprocessing approach strongly impacted the performance of all applied sub-classification methods. For LDA and BNB, both precision and recall doubled. 
\\\\\n\\vspace{-2mm}\n\n{\\bf Finding 3:} Among the machine learning algorithms LDA, BTM, Hierarchical, K-means, Hybrid and Binarized Na\\\"{i}ve Bayes (BNB), {\\it BNB} had the highest performance for sub-classifying NFRs.\\\\\n\\vspace{-2mm}\n\n\n{\\bf Finding 4:} While BTM generally works better than LDA for exploring the general themes and topics of a short-text corpus, it did not perform well for sub-classifying NFRs. \\\\\n\\vspace{-2mm}\n\n{\\bf Finding 5:} There is a clear need for additional sentimental patterns\/sentence structures to differentiate usability requirements from other types of NFRs.\n\\end{tcolorbox}\n\\vspace{-2mm}\n\n\n\n\n\n\n\\section{Conclusion and Implications}\n\\label{sec:agenda}\n\n\n\n\nOur findings are summarized in the box at the end of Section~\\ref{sec:results}. \nIn particular, we conclude that using our preprocessing approach improves the performance of both the FR\/NFR classification and the sub-classification of NFRs. Further, we found that, among popular machine learning algorithms, Binarized Na\\\"{i}ve Bayes (BNB) performed best for the task of classifying NFRs into sub-categories. Our results further show that, although BTM generally works better than LDA for extracting the topics of short texts, BTM does not perform well for classifying NFRs into sub-categories. Finally, additional (or better) sentimental patterns and sentence structures are needed for differentiating usability requirements from other types of NFRs.\n\\vspace{-2mm}\n\\section{Introduction}\n \n In requirements engineering, classifying the requirements of a system by their kind into \\emph{functional requirements}, \\emph{quality requirements} and \\emph{constraints} (the latter two usually called \\emph{non-functional requirements}) \\cite{re_glossary} is a widely accepted standard practice today.\n \nWhile the different kinds of requirements are known and well-described today~\\cite{Martin}, automated classification of requirements written in natural language into functional requirements (FRs) and the various sub-categories of non-functional requirements (NFRs) is still a challenge \\cite{Ernst2010}. This is particularly due to the fact that stakeholders, as well as requirements engineers, use different terminologies and sentence structures to describe the same kind of requirements \\cite{RO, IWSPM}. The high level of inconsistency in documenting requirements makes automated classification more complicated and therefore error-prone.\n\nIn this paper, we investigate how automated classification algorithms for requirements can be improved and how well some of the frequently used machine learning approaches work in this context. We make two contributions. (1)~We investigate whether and to what extent an existing decision tree learning algorithm~\\cite{hussain2008using} for classifying requirements into FRs and NFRs can be improved by preprocessing the requirements with a set of rules for (automatically) standardizing and normalizing the requirements found in a requirements specification. (2)~We study how well several existing machine learning methods perform for automated classification of NFRs into sub-categories such as availability, security, or usability.\n\nWith this work, we address the \\emph{RE Data Challenge} posed by the 25th IEEE International Requirements Engineering Conference (RE'17). 
\n\n \n\n \n\n\n\n\n\n\n\\section{Limitations and Threats to Validity}\n\\label{sec:limits}\nIn this section, we discuss the potential threats to the validity\nof our findings along two main threads: \n\n{\\bf (1) Data Analysis Limitations:} The biggest threat to the validity of this work is the fact that our preprocessing model was developed on the basis of the data set given for the RE Data Challenge and that we had to use the same data set for evaluating our approach.\nWe mitigate this threat by using sentence structure features such as temporal, entity, and functional features, which are applicable to sentences with different structures from different contexts. We also created a set of regular expressions which are less context-dependent and have been formed mainly based on the semantics of NFRs.\n\nAnother limiting factor is that our work depends on\nthe number and choice of the NFR sub-categories used by the creators of our data set.\nHowever, our preprocessing can be adapted to a different set of NFR sub-categories. In terms of the co-occurrence rules and regular expressions presented in Table \\ref{tab:reg}, we aim to\nexpand these rules by adding more NFR sub-categories in future work, as we gain additional insights from processing real-world requirements specifications.\n\n{\\bf (2) Dataset Limitations:} Due to the nature of the RE'17 Data Challenge, we used the data set as is, although it has major data quality issues: (1)~Some requirements are incorrectly labeled. For example, R2.18 ``The product shall allow the user to view previously downloaded search results, CMA reports and appointments'' is labeled as NFR. Obviously, however, this is a functional requirement.\n(2) The important distinction between quality requirements and constraints is not properly reflected in the labeling. (3)~The selection of the requirements has some bias. For example, the data set does not contain any compatibility, compliance or safety requirements. Neither does it contain any cultural, environmental or physical constraints. (4)~Only a single requirement is classified as PO, which makes this sub-category useless for our study. Repeating our study on a data set of higher quality is left for future work.\n\nFurthermore, the {\\it unbalanced data set} we used for classifying the NFRs may affect the findings of this study. However, a study by Xue and Titterington \\cite{unbalanced} revealed that there is no reliable empirical\nevidence to support the claim that an unbalanced data\nset negatively impacts the performance of the LDA\/BTM approaches. Further, a recent study by L\\'{o}pez et al. \\cite{unbalanced2} shows that the imbalance ratio by itself does not have the most significant effect on the classifiers'\nperformance; rather, other issues must be taken into account, such as (a) the presence of small disjuncts, (b) the lack of density, (c) class overlapping,\n(d) noisy data, (e) the management of borderline examples, and (f) dataset shift. The pre-processing step we conducted before applying the classification algorithms helps discriminate the NFR sub-classes more precisely and mitigates the negative impact of the noisy data and borderline problems. Moreover, we employed the n-fold cross-validation technique, which helps generate enough positive class instances in different folds and reduces additional problems in the data distribution, especially for highly unbalanced datasets. 
This technique, to a great extent, mitigates the negative impact of the class overlapping, dataset shift, and small disjunct issues on the performance of the classification algorithms we applied in this study. \n \n\\section{Preprocessing of Requirements Specifications}\n\\label{sec:PP}\nIn this section, we describe the preprocessing\nwe applied to reduce the inconsistency of requirements specifications by leveraging rich sentence features and latent co-occurrence relations.\n\\subsection{Part Of Speech (POS) Tagging} We used the part-of-speech tagger of the Stanford Parser \\cite{Klein:2003:AUP:1075096.1075150} to assign parts of speech, such as noun, verb, and adjective, to each word in each requirement. \nThe POS tags\\footnote{Check https:\/\/gist.github.com\/nlothian\/9240750 for a complete list of tags} are necessary to perform the FR\/NFR classification based on the approach of Hussain et al. \\cite{hussain2008using}.\n\\subsection{Entity Tagging}\nTo improve the generalization of input requirements, we used a ``supervised training data'' method in which all context-specific products and users are blinded by replacing them with the generic names {\\it PRODUCT} and {\\it USER}, respectively. To this end, we used the LingPipe NLP toolkit\\footnote{http:\/\/alias-i.com\/lingpipe\/} and created the \\(SRS\\_dictionary\\) by defining project-specific users\/customers and products (e.g., program administrators, nursing staff members, realtors, or card members are marked as USER), as below:\n\\vspace{-2mm}\n\\begin{figure}[H]\n\\centering\n{\\includegraphics[scale=0.85]{Figures\/User}}\n\n\\caption*{}\n\n\\end{figure}\n\n\nThen, each sentence is tokenized and POS tagged with the developed SRS dictionary. All of the tokens associated with {\\it user} and {\\it product} were discarded and we only kept these two keywords to represent these two entities. Finally, we used the POS tagger of the Stanford Parser \\cite{Klein:2003:AUP:1075096.1075150} and replaced all Noun Phrases (NPs) including ``USER'' and ``PRODUCT'' with \\(USER\\) and \\(PRODUCT\\), respectively. For instance, \\mybox[fill=gray!30]{registered USER} is replaced with \\(USER\\). \n\\subsection{Temporal Tagging} {\\it Time} is a key factor in characterizing non-functional requirements, such as availability, fault tolerance, and performance. \n\nFor this purpose, we used SUTime, a rule-based temporal tagger for recognizing and normalizing temporal expressions according to the TIMEX3 standard\\footnote{See http:\/\/www.timeml.org for details on the TIMEX3 tag}. SUTime detects the following basic types of temporal objects \\cite{SUTIME}:\n\\begin{enumerate}\n\\item {\\it Time:} A particular instance on a time scale. SUTime also handles absolute times, such as {\\it Date}. As in: \n\n\\begin{figure}[H]\n\\centering\n{\\includegraphics[scale=0.67]{Figures\/TIME}}\n\n\\caption*{}\n\n\\end{figure}\n\n\\item {\\it Duration and Intervals:} The amount of intervening time in a time interval. As in:\n\n\\begin{figure}[H]\n\\centering\n{\\includegraphics[scale=0.67]{Figures\/Duration}}\n\n\\caption*{}\n\n\\end{figure}\n\nIntervals can be described as a range of time defined by start and end time points. SUTime represents this type in terms of the other types. \n\\item {\\it Set:} A set of temporals, representing times that occur with some frequency. 
As in:\n\n\\begin{figure}[H]\n\\centering\n{\\includegraphics[scale=0.67]{Figures\/SET1}}\n\n\\caption*{}\n\\end{figure}\n\\end{enumerate}\n\n\n\\begin{table*}[!htb]\n\\centering\n\\caption{\\small Proposed co-occurrence and regular expressions for preprocessing SRSs [(\\(CO(w)\\)): the set of terms co-occur with word \\(w\\)] }\n\n\\label{tab:reg}\n\\begin{tabular}{ |c|p{4.1cm}|p{5.1cm}p{5.2cm}| }\n \\hline\n {\\bf NFR}&{\\bf Keywords} & {\\bf Part of Speech (POS) and Regular Expressions} & {\\bf Replacements}\\\\\\hline \n\n\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n{\\bf Security [SE]} &protect, encrypt, policy, authenticate, prevent, malicious, login, logon, password, authorize, secure, ensure, access&\n\\( (\\mybox[fill=gray!30]{only}\/ ...\/\\mybox[fill=gray!30]{ nsubj}\/...\/\\mybox[fill=gray!30]{root}\/... )\\)\n\\( \\mid\\mid (\\mybox[fill=gray!30]{only}\/ ...\/\\mybox[fill=gray!30]{ root}\/...\/\\mybox[fill=gray!30]{nmod: agent}\/... )\\)\n\\(\\wedge \\big(\\mybox[fill=gray!30]{root}\\text{ is a } VBP\\big)\\) \n\n\n& \\( \\Rightarrow \\begin{cases}\\mybox[fill=gray!30]{nsubj}\\mybox[fill=gray!30]{agent} \\gets\\text{\\it authorized user}&\\\\ \\mybox[fill=gray!30]{root} \\gets access& \\end{cases}\\)\n\n{\\raisebox{-\\totalheight}{\\includegraphics[scale=.43]{Figures\/SE2}}}\\\\\\cline{3-4}\n\n&&\\( \\forall \\omega \\in \\{\\text{\\it username \\& password, login, logon}\\}\\)\n\n \\(\\text{\\it security, privacy, right, integrity, polict}\\}\\)\n \n \\(\\wedge CO(\\omega)\\cap CO(NFR_{se}) \\neq \\emptyset\\)\n &\\(\\Rightarrow \\omega \\gets authorization\\)\\\\\\cline{3-4}\n \n &&\\( \\forall \\omega \\in \\{\\text{\\it reach, enter, protect, input, interface}\\)\n \n \\(\\wedge \\text{product is obj }\\wedge CO(\\omega)\\cap CO(NFR_{se}) \\neq \\emptyset\\)\n &\\(\\Rightarrow \\omega \\gets access\\)\\\\\\hline\n\n \n \\hline\n\\end{tabular}\n\\vspace{-6mm}\n\\end{table*}\n\\vspace{-6mm}\n\nAfter tagging a sample set of all the classified NFRs, we identified the following patterns and used them to normalize the entire data set. To do this, first, we replaced all expressions in this format ``24[-\/\/*]7'' and ``\\(24\\times7\\times365\\)'' with ``24 hours per day 365 days per year'', and ``everyday'' with ``every day''. Likewise, we replaced all ``sec(s)'' and ``min(s)'' with ``seconds'' and ``minutes'', respectively. In the rest of this section, we present the rules we defined and applied to normalize the temporal expressions embedded in requirements' descriptions. 
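\n\nBefore listing those rules, we give a concrete illustration of the surface replacements above as a minimal sketch using Python's \\texttt{re} module (only the substitutions listed above are covered; the patterns are our own):\n\n\\begin{verbatim}\nimport re\n\ndef normalize_temporal(text):\n    text = re.sub(r'24[-\/*x]7(x365)?',\n                  '24 hours per day 365 days per year', text)\n    text = text.replace('everyday', 'every day')\n    text = re.sub(r'\\bsecs?\\b', 'seconds', text)\n    text = re.sub(r'\\bmins?\\b', 'minutes', text)\n    return text\n\\end{verbatim}\n\n\\noindent The temporal rules we defined on top of these normalizations are listed in the box below.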
\\\\\n\\vspace{-3mm}\n\\begin{tcolorbox}[colback=white, title= Temporal Rules]\n\\scriptsize\n\\begin{enumerate}\n\\item \\(\\forall \\text{ } [\\backslash exp]\\big, exp \\gets within\\) \\\\\nwhere \\(exp \\in\\)\\{\\it no longer than, under, no more than, not be more than, no later, in, for less than, at a maximum\\} \\\\\n\\vspace{-3mm}\n\n\n \\vspace{2mm}\n\\item \\(\\forall \\text{ } [\\backslash DURATION \\backslash TIME \\backslash DATE]^+ \\gets alltimes\\) \\\\\n\n\\begin{minipage}[t]{1\\linewidth}\n \\includegraphics[scale=.55]{Figures\/Temporal1}\\hfill\n \\vspace{-7mm}\n \\captionsetup{labelformat=empty}\n { \\captionof{figure}{ }\n \\label{fig:Terminology}}\n \\end{minipage}\\hfill\\\\\n \\vspace{2mm}\n\n\n\\item \\(within \\text{ } \\big \\gets fast\\) \\\\\n\\(if \\big == [\\backslash seconds \\backslash minutes]\\)\\\\\n \\vspace{-2mm}\n \n\\begin{minipage}[t]{1\\linewidth}\n\\vspace{-2mm}\n \\includegraphics[scale=.6]{Figures\/Temporal2}\n \\vspace{-3mm}\n \\captionsetup{labelformat=empty}\n { \\captionof{figure}{ }\n \\label{fig:Terminology}}\n \\end{minipage}\\hfill\\\\\n\\vspace{-5mm}\n\n \\vspace{2mm}\n\\item \\(\\{timely, quick\\} \\mid\\mid [\\backslash \\text{\\it positive adj } \\backslash time] \\gets fast\\)\\\\\n\\item \\({[8\\text{-}9][0\\text{-}9][{\\bf \\backslash.}?[0\\text{-}9]?{\\bf \\%}?]}[IN\\mid DET]^* time \\gets alltimes\\)\n\n\\vspace{-1mm}\n \\noindent \n\n\\end{enumerate}\n\\end{tcolorbox}\n\\vspace{-5mm}\n\n\n\n\n\n\n\n\n\\subsection{Co-occurrence and Regular Expressions}\nOnce the sentence features are utilized to reduce the complexity of the text, we used the co-occurrence and regular expressions to increase the weight of the influential words for each type of NFRs. To explore these rules we manually analyzed 6 to 10 requirements of each NFR and deployed different components of Stanford Parser such as part-of-speech, named entities, sentiment, and relations. Moreover, in this step, we recorded co-occurrence counts of each term within the provided NFR data set as the co-occurrence vector. We used this parameter as a supplement for exploring the SRS regular expressions. For instance, Table \\ref{tab:reg} represents the details of the rules we proposed for Security (SE) NFR. Please refer to the footnote\\footnote{ http:\/\/wcm.ucalgary.ca\/zshakeri\/projects} for the complete list of these rules containing regular expressions for all of the provided NFRs. \n\n\n\n\n\\section{Related Work}\n \\label{sec:RW}\n Software Requirements Specifications (SRSs) are written in natural language, with mixed statements of functional and non-functional requirements. There is a growing body of research studies that compare the effect of using manual and automatic approaches for classification of requirements~\\cite{nanniu,Ernst2010}. An efficient classification enables focused communication and prioritization of requirements~\\cite{janerej2}. Categorization of requirements allows filtering relevant requirements for a given important aspect. Our work is also closely related to the research on automatic classification of textual requirements.\n\nKnauss and Ott \\cite{knaussREFSQ} introduced a model of a socio-technical system for requirements classification. They evaluated their model in an industrial setting with a team of ten practitioners by comparing a manual, a semi-automatic, and a fully-automatic approach of requirements classification. 
\nThey reported that the semi-automatic approach offers the best ratio of quality to effort as well as the best learning performance, making it the most promising of the three approaches.\n\nCleland-Huang et al. \\cite{janere} investigated mining large requirements documents for non-functional requirements. Their results indicate that their NFR-Classifier adequately distinguishes several types of NFRs. However, further work is needed to improve the results for some other NFR types such as 'look-and-feel'. Although their study is similar to ours in that they trained a classifier to recognize a set of weighted indicator terms indicative of each type of requirement, we used different classification algorithms and additionally assessed their precision and recall to compare their performance with each other.\n\nRahimi et al. \\cite{monaRE} present a set of machine learning and data mining methods for automatically extracting quality concerns from requirements, feature requests, and online forums. They then generate a basic goal model from the requirements specification, with each concern modeled as a softgoal. For attaching topics to softgoals, they use an LDA approach to estimate the similarity between each requirement and the discovered topics; they also use LDA to identify the best sub-goal placement for each of the unattached requirements. In contrast, in this research, we used LDA as one of our approaches for classifying the non-functional requirements. \n\n\nThe Na\\\"{i}ve Bayes classifier is used in several studies \\cite{KnaussRE, koj} for the automatic classification of requirements. Therefore, we included Na\\\"{i}ve Bayes in our study so that our results are comparable with those of other classifiers.\n\\section{The Challenge and Research Questions}\n\\label{sec:RQ}\n\n\n\\subsection{Context and Data Set}\nThe challenge put forward by the Data Track of RE'17 consists of taking a given data set and performing an automated RE task on the data, such as tracing, identifying\/classifying requirements or extracting knowledge. For this paper, we chose the task of automated classification of requirements.\n\nThe data set given for this task comes from the OpenScience tera-PROMISE repository\\footnote{https:\/\/terapromise.csc.ncsu.edu\/!\/\\#repo\/view\/head\/requirements\/nfr}. It consists of 625 labeled natural language requirements (255 FRs and 370 NFRs). The labels classify the requirements first into FR and NFR. Within the latter category, eleven sub-categories are defined: (a)~ten \\emph{quality requirement categories}: Availability (A), Look \\& Feel (LF), Maintainability (MN), Operability (O), Performance (PE), Scalability (SC), Security (SE), Usability (US), Fault Tolerance (FT), and Portability (PO); (b)~one \\emph{constraint category}: Legal \\& Licensing (L). These labels constitute the ground truth for our investigations.\n\n\n\\subsection{Research Questions}\nWe frame the goal of our study in two research questions:\n {\\it RQ1. How do grammatical, temporal and sentimental characteristics of a sentence affect the accuracy of classifying requirements into functional and non-functional ones?}\n \nWith this research question, we investigate whether our preprocessing approach, which addresses the aforementioned characteristics, has a positive impact on the classification into FRs and NFRs in terms of precision and recall.\n\n {\\it RQ2. 
To what extent is the performance of classifying NFRs into sub-categories influenced by the chosen machine learning classification method?}\n\n\nWith this research question, we study the effects of the chosen machine learning method on the precision and recall achieved when classifying the NFRs in the given data set into the sub-categories defined in the data set.\n\n\n\n\n \n\n \n \n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\n\nDeep neural networks have achieved state-of-the-art performance in various natural and medical-imaging problems \\cite{litjens2017survey}. However, they tend to under-perform when the test-image distribution differs from the distributions seen during training. In medical imaging, this is due to, for instance, variations in imaging modalities and protocols, vendors, machines, clinical sites and subject populations. For semantic segmentation problems, labelling a large number of images for each different target distribution is impractical, time-consuming, and often impossible. To circumvent those impediments, methods learning robust networks with less supervision have triggered interest in medical imaging \\cite{Cheplygina2018Notsosupervised}.\n\nThis motivates {\\em Domain Adaptation} (DA) methods: DA amounts to adapting a model trained on an annotated source domain to another target domain, with no or minimal new annotations for the latter. Popular strategies involve minimizing the discrepancy between source and target distributions in the feature or output spaces \\cite{ADDA,tsai2018learning}; integrating a domain-specific module in the network \\cite{dou2018pnp}; translating images from one domain to the other \\cite{cyclegan}; or integrating a domain-discriminator module and penalizing\nits success in the loss function \\cite{ADDA}.\n\nIn medical applications, separating source training from adaptation is critical for privacy and regulatory reasons, as the source and target data may come from different clinical sites. Therefore, it is crucial to develop adaptation methods that neither assume access to the source data nor modify the pre-training stage. Standard DA methods, such as \\cite{dou2018pnp,tsai2018learning,ADDA,cyclegan}, do not comply with these restrictions. This has recently motivated \\emph{Source-Free Domain Adaptation} (SFDA) \\cite{Bateson2020,KARANI2021101907}, a setting where the source data (neither the images nor the ground-truth masks) is unavailable during the adaptation phase. \n\nEvaluating SFDA methods consists in: (i) adapting on a dedicated training set \\textit{Tr} from the target domain; and (ii) measuring the generalization performance on an unseen test set \\textit{Te} in the target domain. However, emerging and very recent \\emph{Test-Time Adaptation} (TTA) works in machine learning \\cite{wang2021tent,Sun2020} and medical imaging \\cite{KARANI2021101907,Varsavsky} argue that this is not as useful as adapting directly to the test set \\textit{Te}. In various practical applications, access to the target training distribution might not be possible. This is particularly common in medical image segmentation when only a single target-domain subject is available for test-time inference. 
In the context of image classification, the authors of \\cite{wang2021tent} recently showed that simple adaptation of batch normalization's scale and bias parameters on a set of test-time samples can deal competitively with domain shifts.\n\nWith this context in mind, we propose a simple formulation for source-free and single-subject test-time adaptation of segmentation networks. During inference for a single testing subject, we optimize a loss integrating shape priors and the entropy of predictions with respect to the batch normalization's scale and bias parameters. Unlike the standard SFDA setting, we perform test-time adaptation on each subject separately, and forgo the use of the target training set \\textit{Tr} during adaptation. Our setting is most similar to the image classification work in \\cite{wang2021tent}, which minimized a label-free entropy loss defined over test-time samples. Building on this entropy loss, we further guide segmentation adaptation with domain-invariant shape priors on the target regions, and show the substantial effect of such shape priors on TTA performance. \nWe report comprehensive experiments and comparisons with state-of-the-art TTA, SFDA and DA methods, which show the effectiveness of our shape-guided entropy minimization in two different adaptation scenarios: cross-modality cardiac segmentation (from MRI to CT) and prostate segmentation in MRI\nacross different sites. Our method exhibits substantially better performance than the existing TTA methods. Surprisingly, it also fares better than various state-of-the-art SFDA and DA methods, although it does not train on source and additional target data during adaptation, but just performs joint inference and adaptation on a single 3D data point in the target domain.\nOur results and ablation studies question the usefulness of training on target set \\textit{Tr} during adaptation and point to the surprising and substantial effect of embedding shape priors during inference on domain-shifted testing data. \nOur framework can be readily used for integrating various priors and adapting any\nsegmentation network at test time. \n\n\n\n\\begin{figure}[t]\n \\includegraphics[width=1\\linewidth]{figures\/over2.png}\n \\caption[]{Overview of our framework for Test-Time Adaptation with Shape Moments: we leverage entropy minimization and shape priors to adapt a segmentation network on a single subject at test-time.}\n \\label{fig:overview}\n\n\\end{figure}\n\n\\section{Method} \n\nWe consider a set of $M$ source images ${I}_{m}: \\Omega_s\\subset \\mathbb R^{2} \\rightarrow {\\mathbb R}$, $m=1, \\dots, M$, and denote their ground-truth $K$-class segmentation for each pixel $i \\in \\Omega_s$ as a $K$-simplex vector ${\\mathbf y}_m (i) = \\left(y^{(1)}_{m} (i), \\dots, y^{(K)}_{m} (i)\\right) \\in \\{0,1\\}^K$. 
For each pixel $i$, its coordinates in the 2D space are represented by the tuple $\\left(u_{(i)}, v_{(i)}\\right) \\in \\mathbb{R}^{2}$.\n\n\\paragraph{Pre-training Phase} The network is first trained on the source domain only, by minimizing the cross-entropy loss with respect to the network parameters $\\theta$: \n\\begin{equation}\\label{eq:crossent}\n\\begin{aligned}\n\\min_{\\theta} \\frac{1}{\\left|\\Omega_{s}\\right|} \\sum_{m=1}^{M} \\sum_{i \\in \\Omega_s} \\ell\\left({\\mathbf y}_{m} (i), {\\mathbf s}_{m} (i, \\theta)\\right)\n \\end{aligned}\n\\end{equation}\nwhere ${\\mathbf s}_{m} (i, \\theta) = (s^{(1)}_{m} (i,\\theta), \\dots, s^{(K)}_{m} (i, \\theta)) \\in [0,1]^K$ denotes the vector of predicted softmax probabilities, with $s^{(k)}_{m}(i,\\theta)$ the probability of class $k \\in\\{1, \\ldots, K\\}$.\n\\paragraph{Shape moments and descriptors}\nShape moments are well-known in classical computer vision \\cite{nosrati2016incorporating}, and were recently shown useful in the different context of supervised training \\cite{KervadecMIDL2021}. Each moment is parametrized by its orders $p, q \\in \\mathbb{N}$, and each order represents a different characteristic of the shape. For a given $p, q \\in \\mathbb{N}$ and class $k$, the shape moments of the segmentation prediction of an image $I_n$ can be computed as follows from the softmax matrix $\\mathrm{S}_{n}(\\theta)=\\left(\\mathrm{s}_{n}^{(k)}(\\theta)\\right)_{k=1\\ldots K}$:\n$$\n\\mu_{p, q}\\left(\\mathrm{s}_{n}^{(k)}(\\theta)\\right)=\\sum_{i \\in \\Omega} s_{n}^{(k)}(i, \\theta) u_{(i)}^{p} v_{(i)}^{q}\n$$\nCentral moments are derived from shape moments to guarantee translation invariance. They are computed as follows:\n$$\n \\bar{\\mu}_{p, q}\\left({\\mathbf s}_{n}^{(k)}(\\theta)\\right)=\\sum_{i \\in \\Omega} s_{n}^{(k)}(i,\\theta)\\left(u_{(i)}-\\bar{u}^{(k)}\\right)^{p}\\left(v_{(i)}-\\bar{v}^{(k)}\\right)^{q}\n$$\nwhere $\\left(\\bar{u}^{(k)}, \\bar{v}^{(k)}\\right)=\\left(\\frac{\\mu_{1,0}(s_{n}^{(k)}(\\theta))}{\\mu_{0,0}(s_{n}^{(k)}(\\theta))},\\frac{\\mu_{0,1}(s_{n}^{(k)}(\\theta))}{\\mu_{0,0}(s_{n}^{(k)}(\\theta))}\\right)$ are the components of the centroid. \nWe use the vectorized form onwards, e.g. $\\mu_{p, q}\\left(s_{n}(\\theta)\\right) =\\left ( \\mu_{p, q}(s_{n}^{(1)}(\\theta)), \\dots, \\mu_{p, q}(s_{n}^{(K)}(\\theta)) \\right )^\\top$.
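\n\nAs an illustration, these quantities can be computed directly from a class-$k$ softmax map with a few lines of NumPy (a sketch with our own naming; \\texttt{s} is the $H\\times W$ map and \\texttt{n\\_pix} is $|\\Omega|$):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef moment(s, p, q):\n    u, v = np.indices(s.shape).astype(float)\n    return (s * u**p * v**q).sum()\n\ndef descriptors(s, n_pix):\n    m00, m10, m01 = moment(s, 0, 0), moment(s, 1, 0), moment(s, 0, 1)\n    ratio = m00 \/ n_pix                    # class-ratio R\n    cu, cv = m10 \/ m00, m01 \/ m00          # centroid C\n    u, v = np.indices(s.shape).astype(float)\n    mu20 = (s * (u - cu) ** 2).sum()       # central moments\n    mu02 = (s * (v - cv) ** 2).sum()\n    dist = (np.sqrt(mu20 \/ m00), np.sqrt(mu02 \/ m00))  # descriptor D\n    return ratio, (cu, cv), dist\n\\end{verbatim}\n\nBuilding from these definitions, we obtain 2D shape moments from the network predictions. We then derive the shape descriptors $\\mathcal{R}, \\mathcal{C}, \\mathcal{D}$ defined in Table \\ref{table:shapes}, which respectively inform on the size, position, and compactness of a shape. 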
\n\\vspace{-0.3em}\n\\begin{table}[h!!!]\n\\centering\n \\caption{Examples of shape descriptors based on softmax predictions.}\n\\begin{tabular}{ll}\n\\toprule\nShape Descriptor & \\multicolumn{1}{c}{Definition} \\\\\n\\midrule\nClass-Ratio & $\\mathcal{R}(s):=\\frac{1}{\\left| \\Omega_T \\right|}\\mu_{0, 0}\\left(s\\right) $ \\\\\nCentroid & $\\mathcal{C}\\left(s\\right):=\\left(\\frac{\\mu_{1,0}\\left(s\\right)}{\\mu_{0,0}\\left(s\\right)}, \\frac{\\mu_{0,1}\\left(s\\right)}{\\mu_{0,0}\\left(s\\right)}\\right)$\\\\ \nDistance to Centroid & $\\mathcal{D}\\left(s\\right):=\\left(\\sqrt{\\frac{\\bar{\\mu}_{2,0}\\left(s\\right)}{\\mu_{0,0}\\left(s\\right)}}, \\sqrt{\\frac{\\bar{\\mu}_{0,2}\\left(s\\right)}{\\mu_{0,0}\\left(s\\right)}}\\right) $\\\\ \n\\bottomrule\n\\label{table:shapes}\n\\end{tabular}\n\\end{table}\n\n\\vspace{-1.5em}\n\\paragraph{Test-time adaptation and inference with shape-prior constraints}\n\nGiven a single new subject in the target domain composed of $N$ 2D slices, ${I}_n: \\Omega_t\\subset \\mathbb R^{2} \\rightarrow {\\mathbb R}$, $n=1, \\ldots, N$, the first loss term in our adaptation phase is derived from \\cite{wang2021tent}: to encourage high confidence in the softmax predictions, we minimize their weighted Shannon entropy, $\\ell_{ent}({\\mathbf s}_n (i,\\theta)) = - \\sum_k \\nu_k s^{(k)}_n (i,\\theta) \\log s^{(k)}_n (i, \\theta)$, where $\\nu_k$, $k=1, \\ldots, K$, are class weights added to mitigate imbalanced class-ratios.\n\nIdeally, to guide adaptation for each slice $I_n$, we would penalize the deviations between the shape descriptors of the softmax predictions ${S}_{n}(\\theta)$ and those corresponding to the ground truth $\\mathbf{y}_n$. As the ground-truth labels are unavailable, we instead estimate the shape descriptors using the predictions for the whole subject, $\\left\\{ S_{n}(\\theta), n=1,\\dots,N\\right\\}$; we denote these estimates $\\mathcal{\\bar{C}}$ and $\\mathcal{\\bar{D}}$, respectively.\n\nThe first shape moment we leverage is the simplest: the zero-order class-ratio $\\mathcal{R}$. 
Seeing these class ratios as distributions, we integrate a KL divergence with the Shannon entropy: \n\\begin{equation}\\label{eq:AdaMI}\n\\begin{aligned}\n \\mathcal{L}_{TTAS}(\\theta) = \\sum_n \\left[ \\frac{1}{\\left|\\Omega_{t}\\right|} \\sum_{i \\in \\Omega_t} \\ell_{ent}({\\mathbf s}_n (i, \\theta)) + \\mbox{KL}(\\mathcal{R}(S_n (\\theta)),\\mathcal{\\bar{R}}) \\right].\n \\end{aligned}\n\\end{equation}\nIt is worth noting that, unlike \\cite{Bateson2022}, which used a loss of the form in Eq~\\eqref{eq:AdaMI} for training on target data, here we use this term for inference on a test subject, as a part of our overall shape-based objective.\nAdditionally, we integrate the centroid ($\\mathcal{M}=\\mathcal{C}$) and the distance to centroid ($\\mathcal{M}=\\mathcal{D}$) to further guide adaptation to plausible solutions:\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:ineq}\n \\min_{\\theta} &\\quad\\mathcal{L}_{TTAS}(\\theta) \n \\\\\n &\\text{s.t. } \\left|\\mathcal{M}^{(k)}(S_n (\\theta))-\\mathcal{\\bar{M}}^{(k)} \\right|\\leq 0.1\\, \\mathcal{\\bar{M}}^{(k)}, & k=2,\\dots, K, \\; n = 1, \\dots, N.\n \\end{aligned}\n\\end{equation}\n\n\nImposing such hard constraints is typically handled through the minimization of the Lagrangian dual in standard convex optimization. As this is computationally intractable in deep networks, inequality constraints such as Eq~\\eqref{eq:ineq} are typically relaxed to soft penalties \\cite{He2017,kervadec2019constrained,Jia2017}.\nTherefore, we experiment with integrating $\\mathcal{C}$ and $\\mathcal{D}$ through a quadratic penalty, leading to the following unconstrained objective for joint test-time adaptation and inference:\n\\begin{equation}\\label{eq:TTAS}\n \\sum_n \\left[ \\frac{1}{\\left|\\Omega_{t}\\right|} \\sum_{i \\in \\Omega_t} \\ell_{ent}({\\mathbf s}_n (i, \\theta)) + \\mbox{KL}(\\mathcal{R}(S_n (\\theta)),\\mathcal{\\bar{R}}) + \\lambda \\mathcal{F}(\\mathcal{M}(S_n (\\theta)),\\mathcal{\\bar{M}}) \\right],\n\\end{equation}\nwhere $\\mathcal{F}$ is the quadratic penalty corresponding to the relaxation of Eq~\\eqref{eq:ineq}, $\\mathcal{F}(m_1,m_2)= [0.9m_2-m_1]_+^2 + [m_1-1.1m_2]_+^2$ with $[m]_+ = \\max (0,m)$, and $\\lambda$ is a weighting hyper-parameter.\nFollowing recent TTA methods \\cite{wang2021tent,KARANI2021101907}, we only optimize the scale and bias parameters of the batch normalization layers, while the rest of the network is frozen. Figure \\ref{fig:overview} shows the overview of the proposed framework.
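\n\nFor concreteness, the objective of Eq.~\\eqref{eq:TTAS} can be sketched in PyTorch as follows (our own naming; \\texttt{probs} is the $N \\times K \\times H \\times W$ softmax output for the subject, and the priors \\texttt{r\\_bar}, \\texttt{m}, \\texttt{m\\_bar} are assumed to be given as tensors):\n\n\\begin{verbatim}\nimport torch\n\ndef quad_penalty(m, m_bar):\n    # zero inside the band [0.9*m_bar, 1.1*m_bar], quadratic outside\n    return (torch.clamp(0.9 * m_bar - m, min=0) ** 2\n            + torch.clamp(m - 1.1 * m_bar, min=0) ** 2)\n\ndef tta_loss(probs, nu, r_bar, m, m_bar, lam=1e-4):\n    logp = probs.clamp_min(1e-8).log()\n    ent = -(nu.view(1, -1, 1, 1) * probs * logp).sum(1).mean()\n    ratio = probs.mean(dim=(2, 3))  # class-ratio of each slice\n    kl = (ratio * (ratio.clamp_min(1e-8) \/ r_bar).log()).sum(1).mean()\n    return ent + kl + lam * quad_penalty(m, m_bar).sum()\n\n# Only the batch-norm scale\/bias parameters are optimized:\n# params = [p for mod in net.modules()\n#           if isinstance(mod, torch.nn.BatchNorm2d)\n#           for p in mod.parameters()]\n\\end{verbatim}\n\n\\section{Experiments}\n \n\\subsection{Test-time Adaptation with shape descriptors} \n\n\\paragraph{\\textbf{Heart Application}} We employ the 2017 Multi-Modality Whole Heart Segmentation (MMWHS) Challenge dataset for cardiac segmentation \\cite{Zhuang2019}. The dataset consists of 20 MRI (source domain) and 20 CT volumes (target domain) of non-overlapping subjects, with their manual annotations of four cardiac structures: the Ascending Aorta (AA), the Left Atrium (LA), the Left Ventricle (LV) and the Myocardium (MYO). 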
We employ the pre-processed data provided by \\cite{dou2018pnp}.\nThe scans were normalized to zero mean and unit variance, and data augmentation based on affine transformations was performed. For the domain adaptation benchmark methods (DA and SFDA), we use the data split in \\cite{dou2018pnp}: 14 subjects for training, 2 for validation, and 4 for testing. Each subject has $N=256$ slices.\n\n\\paragraph{\\textbf{Prostate Application}} We employ the dataset from the publicly available NCI-ISBI 2013 Challenge\\footnote{https:\/\/wiki.cancerimagingarchive.net}. It is composed of manually annotated T2-weighted MRI from two different sites: 30 samples from Boston Medical Center (source domain), and 30 samples from Radboud University Medical Center (target domain). For the DA and SFDA benchmark methods, 19 scans were used for training, one for validation, and 10 scans for testing. We used the pre-processed dataset from \\cite{SAML}, in which each sample was resized to $384\\times384$ in the axial plane and normalized to zero mean and unit variance. We employed data augmentation based on affine transformations on the source domain. Each subject has $N \\in \\left [15,24 \\right ] $ slices.\n\n\n\\paragraph{\\textbf{Benchmark Methods}} Our first model, denoted $TTAS_{\\mathcal{R}\\mathcal{C}}$, constrains the class-ratio $\\mathcal{R}$ and the centroid $\\mathcal{C}$ using Eq~\\eqref{eq:TTAS}; similarly, $TTAS_{\\mathcal{R}\\mathcal{D}}$ constrains $\\mathcal{R}$ and the distance-to-centroid $\\mathcal{D}$. We compare to two \\emph{TTA} methods: the method in \\cite{KARANI2021101907}, denoted $TTDAE$, where an auxiliary branch is used to denoise the segmentation masks, and $Tent$ \\cite{wang2021tent}, which is based on the following loss: $\\min_{\\theta}\\sum_{n} \\sum_{i \\in \\Omega_n} \\ell_{ent}({\\mathbf s}_n (i, \\theta))$. Note that $Tent$ corresponds to performing an ablation of both shape-moment terms in our loss. As an additional ablation study, $TTAS_{\\mathcal{R}}$ is trained with the class-ratio matching loss in Eq~\\eqref{eq:AdaMI} only.\nWe also compare to two \\emph{DA} methods based on class-ratio matching, \\textit{CDA} \\cite{Bateson2021} and $CurDA$ \\cite{zhang2019curriculum}, and to the recent source-free domain adaptation (\\emph{SFDA}) method $AdaMI$ \\cite{Bateson2022}. \nA model trained on the source only, \\textit{NoAdap}, was used as a lower bound. 
A model trained on the target domain with the cross-entropy loss, $Oracle$, served as an upper bound.\n\n\\paragraph{\\textbf{Estimating the shape descriptors}}\\label{sec:sizeprior} For the estimation of the class-ratio $\\mathcal{\\bar{R}}$, we employed the coarse estimation in \\cite{Bateson2021}, which is derived from anatomical knowledge available in the clinical literature.\nFor $\\mathcal{M} \\in \\left\\{ \\mathcal{C},\\mathcal{D}\\right\\}$, we estimate the target shape descriptor from the network prediction masks $\\mathbf{\\hat{y}_n}$ after each epoch: $\\mathcal{\\bar{M}}^{(k)} = \\frac{1}{\\left|V^k \\right|}\\sum_{v \\in V^{k}}v$, with $V^{k} = \\left\\{ \\mathcal{M}^{(k)}(\\mathbf{\\hat{y}_n}) \\,:\\, \\mathcal{R}^{(k)}(\\mathbf{\\hat{y}_n})>\\epsilon^k,\\; n=1,\\dots,N \\right\\}$.\n\nNote that, for a fair comparison, we used exactly the same class-ratio priors and weak supervision employed in the benchmark methods in \\cite{Bateson2021,Bateson2022,zhang2019curriculum}.\nWeak supervision takes the form of simple image-level tags by setting $\\mathcal{\\bar{R}}^{(k)}=\\textbf{0}$ and $\\lambda=0$ for the target images that do not contain structure $k$. \n\n\\paragraph{\\textbf{Training and implementation details}} For all methods, the segmentation network employed was UNet \\cite{UNet}. A model trained on the source data with Eq~\\eqref{eq:crossent} for 150 epochs was used as initialization. Then, for TTA models, adaptation is performed on each test subject independently, without target training. Our model was initialized with Eq~\\eqref{eq:AdaMI} for 150 epochs, after which the additional shape constraint was added using Eq~\\eqref{eq:TTAS} for 200 epochs. As there is no training or validation set in the target domain, the hyper-parameters are set following those in the source training, and are fixed across experiments: we trained with the Adam optimizer \\cite{Adam}, a batch size of $\\min(N,22)$, an initial learning rate of $5\\times10^{-4}$, a learning rate decay of 0.9 every 20 epochs, and a weight decay of $10^{-4}$. The weights $\\nu_k$ are calculated as: $\\nu_k=\\frac{\\bar{\\mathcal{R}}_k^{-1}}{\\sum_k \\bar{\\mathcal{R}}_k^{-1}}$. We set $\\lambda=1\\times10^{-4}$.\n \n\n\\paragraph{\\textbf{Evaluation}} The 3D Dice similarity coefficient (DSC) and the 3D Average Surface Distance (ASD) were used as evaluation metrics in our experiments. \n\n\n\\subsection{Results \\& discussion} \n\n\n\nTable~\\ref{table:resultswhs} and Table~\\ref{table:resultspro} report quantitative metrics for the heart and prostate applications, respectively. Among DA methods, the source-free $AdaMI$ achieves the best DSC improvement over the lower baseline \\textit{NoAdap}, with a mean DSC of 75.7\\% (cardiac) and 79.5\\% (prostate). Surprisingly though, in both applications, our method $TTAS_{\\mathcal{R}\\mathcal{D}}$ yields comparable or better scores: 76.5\\% DSC, 5.4 vox. ASD (cardiac) and 79.5\\% DSC, 3.90 vox. ASD (prostate); while $TTAS_{\\mathcal{R}\\mathcal{C}}$ achieves the best DSC across methods: 80.0\\% DSC and 5.3 vox. ASD (cardiac), 80.2\\% DSC and 3.79 vox. ASD (prostate).\nFinally, compared to the TTA methods, both $TTAS_{\\mathcal{R}\\mathcal{C}}$ and $TTAS_{\\mathcal{R}\\mathcal{D}}$ widely outperform $TTDAE$, which yields 40.7\\% DSC, 12.9 vox. ASD (cardiac) and 73.2\\% DSC, 5.80 vox. ASD (prostate), and $Tent$, which reaches 48.2\\% DSC, 11.2 vox. ASD (cardiac) and 68.7\\% DSC, 5.87 vox. ASD (prostate). 
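The adaptation procedure evaluated above can be summarized in a short PyTorch-style sketch. It is ours and purely illustrative: \\texttt{descriptors} is the helper sketched earlier, the argument names are hypothetical, and the per-slice update simplifies the two-phase schedule described in the training details. It combines the objective of Eq~\\eqref{eq:TTAS} with the batch-norm-only optimization:\n\\begin{verbatim}\nimport torch\n\ndef tolerance_penalty(m, m_bar):\n    # quadratic relaxation F of the 10-percent tolerance constraint\n    return (torch.relu(m - 1.1 * m_bar) ** 2\n            + torch.relu(0.9 * m_bar - m) ** 2).sum()\n\ndef adapt_subject(model, slices, nu, bar_r, bar_cu, bar_cv, lam=1e-4):\n    # adapt only the scale and bias of the batch-norm layers\n    bn = [p for mod in model.modules()\n          if isinstance(mod, torch.nn.BatchNorm2d)\n          for p in (mod.weight, mod.bias)]\n    opt = torch.optim.Adam(bn, lr=5e-4)\n    for x in slices:                    # the N slices of the test subject\n        s = model(x).softmax(dim=1)[0]  # softmax prediction, (K, H, W)\n        ent = -(nu.view(-1, 1, 1) * s * s.clamp_min(1e-8).log()).sum(0)\n        r, (cu, cv), _ = descriptors(s)\n        kl = (r * (r.clamp_min(1e-8) / bar_r.clamp_min(1e-8)).log()).sum()\n        loss = (ent.mean() + kl\n                + lam * (tolerance_penalty(cu, bar_cu)\n                         + tolerance_penalty(cv, bar_cv)))\n        opt.zero_grad()\n        loss.backward()\n        opt.step()\n\\end{verbatim}\nFreezing everything except the normalization parameters keeps the number of adapted weights small, in line with the protocol described in the training details. 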
\n\n\n\n\n\\begin{table}[t]\n\\footnotesize\n \\caption{Test-time metrics on the cardiac dataset, for our method and various \\textit{Domain Adaptation} (DA), \\textit{Source Free Domain Adaptation} (SFDA) and \\textit{Test Time Adaptation} (TTA) methods.}\n \\centering\n \\resizebox{\\textwidth}{!}{\n \\begin{tabular}{lccccccc|cp{5pt}cccc|c}\n\n\\toprule\n\n\\multirow{2}{3em}{Methods} & \\multirow{2}{3em}{DA}& \\multirow{2}{3em}{SFDA} & \\multirow{2}{3em}{TTA} & \n \\multicolumn{5}{c}{ DSC (\\%)} && \\multicolumn{5}{c}{ASD (vox)} \\\\\n \\cmidrule(lr){5-9}\\cmidrule(lr){11-15}\n& & & & AA & LA & LV & Myo & Mean && AA & LA & LV & Myo & Mean\n\\\\\n\\hline\nNoAdap (lower b.)&&&& 49.8&62.0&21.1&22.1&38.8 && 19.8&13.0&13.3&12.4&14.6 \\\\\nOracle \\, \\,(upper b.)&&&&91.9 & 88.3 & 91.0 & 85.8 & 89.2 && 3.1 & 3.4 & 3.6 & 2.2 & 3.0 \\\\\n\\midrule\nCurDA \\cite{zhang2019curriculum}&$\\checkmark$&$\\times$& $\\times$&79.0 & 77.9 & 64.4 & 61.3 & 70.7&& 6.5 & 7.6 & 7.2 & 9.1 & 7.6 \\\\\nCDA \\cite{Bateson2021} & $\\checkmark$& $\\times$&$\\times$&77.3 & 72.8 & 73.7 & 61.9 &71.4 && \\textbf{4.1} &6.3 & 6.6 &6.6 & 5.9 \\\\\nAdaMI \\cite{Bateson2022} & $\\times$&$\\checkmark$&$\\times$ &83.1&78.2&74.5&66.8& 75.7&&5.6&\\textbf{4.2}&\\textbf{5.7}&6.9&5.6\\\\\nTTDAE \\cite{KARANI2021101907} &$\\times$& $\\times$& $\\checkmark$& 59.8 & 26.4 & 32.3& 44.4 & 40.7 && 15.1 & 11.7 & 13.6 &11.3 & 12.9 \\\\\nTent \\cite{wang2021tent} & $\\times$& $\\times$& $\\checkmark$& 55.4 & 33.4 &63.0 &41.1 & 48.2 && 18.0 & 8.7 & 8.1 & 10.1 & 11.2 \\\\\n\\rowcolor{Gray}\\begin{tabular}[c]{@{}l@{}}Proposed Method \\end{tabular} && & & & & & && & & & & &\\\\\n\\textbf{TTAS$_{\\mathcal{RC}}$} (Ours) &$\\times$&$\\times$&$\\checkmark$&\\textbf{85.1}&\\textbf{82.6}&\\textbf{79.3}&\\textbf{73.2}& \\textbf{80.0}&&5.6&4.3&6.1&\\textbf{5.3}&\\textbf{5.3}\\\\\n\\textbf{TTAS$_{\\mathcal{RD}}$} (Ours) &$\\times$&$\\times$&$\\checkmark$&82.3&78.9&76.1&68.4& 76.5&&4.0&5.8&6.1&5.7&5.4\\\\\n\\rowcolor{Gray}\\begin{tabular}[c]{@{}l@{}}Ablation study \\end{tabular}& & & & & & & && & & & & &\\\\\n\\textbf{TTAS$_{\\mathcal{R}}$} &$\\times$&$\\times$&$\\checkmark$ &78.9&77.7&74.8&65.3& 74.2&& 5.2&4.9&7.0&7.6&6.2\\\\\n\\bottomrule\n \\end{tabular}\n }\n \\label{table:resultswhs}\n\\end{table}\n\n\\vspace{-0.5em}\n\\begin{table}[h!]\n\\centering\n\\footnotesize\n\\caption{Test-time metrics on the prostate dataset.}\n\n\\begin{tabular}{lccccc}\n\\toprule\nMethods & DA & SFDA &TTA & DSC (\\%) & ASD (vox) \\\\\n\n\\midrule\nNoAdap (lower bound) &&& & 67.2 & 10.60\\\\\nOracle \\, \\,(upper bound) &&& & 88.9 & 1.88\\\\\n\\midrule\n\\begin{tabular}[c]{@{}l@{}}CurDA \\cite{zhang2019curriculum}\\end{tabular} & $\\checkmark$ &$\\times$& $\\times$ & 76.3 & 3.93\\\\\n\\begin{tabular}[c]{@{}l@{}}CDA \\cite{Bateson2021}\\end{tabular} & $\\checkmark$&$\\times$ &$\\times$ & 77.9 & \\textbf{3.28}\\\\\n\\begin{tabular}[c]{@{}l@{}}AdaMI \\cite{Bateson2022}\\end{tabular} & $\\times$&$\\checkmark$& $\\times$& 79.5 & 3.92\\\\\n\\begin{tabular}[c]{@{}l@{}}TTDAE \\cite{KARANI2021101907}\\end{tabular} &$\\times$ & $\\times$ & $\\checkmark$ & 73.2 & 5.80\\\\\n\\begin{tabular}[c]{@{}l@{}}Tent \\cite{wang2021tent}\\end{tabular} &$\\times$ &$\\times$ & $\\checkmark$ & 68.7 & 5.87\\\\\n\\rowcolor{Gray}\\begin{tabular}[c]{@{}l@{}}Proposed Method \\end{tabular} & & & & & \\\\\n\\begin{tabular}[c]{@{}l@{}}TTAS$_{\\mathcal{RC}}$ 
(Ours)\\end{tabular} & $\\times$& $\\times$ & $\\checkmark$& \\textbf{80.2} & 3.79\\\\\n\\begin{tabular}[c]{@{}l@{}}TTAS$_{\\mathcal{RD}}$ (Ours)\\end{tabular} & $\\times$& $\\times$& $\\checkmark$ & 79.5 & 3.90\\\\\n\\rowcolor{Gray}\\begin{tabular}[c]{@{}l@{}}Ablation study \\end{tabular} & & & & & \\\\\n\\begin{tabular}[c]{@{}l@{}}TTAS$_{\\mathcal{R}}$ (Ours)\\end{tabular} & $\\times$& $\\times$& $\\checkmark$ & 75.3 & 5.06\\\\\n\\bottomrule\n\\end{tabular}\n\\label{table:resultspro}\n\\end{table}\n\n\nQualitative segmentations are depicted in Figure~\\ref{fig:seg}. These visual results confirm that, without adaptation, a model trained only on source data cannot properly segment the structures in the target images. The segmentation masks obtained using the TTA formulations $Tent$ \\cite{wang2021tent} and $TTDAE$ \\cite{KARANI2021101907} show only little improvement. Both methods are unable to recover existing structures when the initialization $NoAdap$ fails to detect them (see fourth and fifth rows, Figure~\\ref{fig:seg}). By contrast, the masks produced by our ablated model $TTAS_\\mathcal{R}$ show more regular edges and are closer to the ground truth. However, the improvement over $TTAS_\\mathcal{R}$ obtained by our two models $TTAS_{\\mathcal{R}\\mathcal{C}}$ and $TTAS_{\\mathcal{R}\\mathcal{D}}$ is remarkable regarding the shape and position of each structure: the prediction masks show better centroid positions (first row, Figure~\\ref{fig:seg}, see LA and LV) and better compactness (third, fourth and fifth rows, Figure~\\ref{fig:seg}).\n\n\n\\begin{figure}[t]\n\\centering\n \\includegraphics[width=\\textwidth]{figures\/segall8.png}\n \\caption[]{Qualitative performance on cardiac images (top) and prostate images (bottom): examples of the segmentations achieved by our formulation ($TTAS_{\\mathcal{R}\\mathcal{C}},TTAS_{\\mathcal{R}\\mathcal{D}}$), and benchmark TTA models. The cardiac structures of MYO, LA, LV and AA are depicted in blue, red, green and yellow respectively. }\n \\label{fig:seg}\n\\end{figure} \n\n\n\n\\section{Conclusion}\n\nIn this paper, we proposed a simple formulation for \\emph{single-subject} test-time adaptation (TTA), which needs neither access to the source data nor the availability of target training data. \nOur approach performs inference on a test subject by minimizing, over the batch normalization parameters, the entropy of predictions combined with a class-ratio prior. To further guide adaptation, we integrate shape priors through penalty constraints.\nWe validate our method on two challenging tasks, the MRI-to-CT adaptation of cardiac segmentation and the cross-site adaptation of prostate segmentation. Our formulation achieved better performances than state-of-the-art TTA methods, with a 31.8\\% and a 7.0\\% DSC improvement on cardiac and prostate images, respectively. Surprisingly, it also fares better than various state-of-the-art DA and SFDA methods. These results highlight the effectiveness of shape priors for test-time inference, and question the usefulness of training on target data in segmentation adaptation. 
Future work will involve the introduction of higher-order shape moments, as well as the integration of multiple shape moments in the adaptation loss. Our test-time adaptation framework is straightforward to use with any segmentation network architecture.\n\n\n\n\n\n\n\n\n\n\\bibliographystyle{splncs04}\n\n\\section{Introduction}\n\nDeep convolutional neural networks have demonstrated state-of-the-art performance in many natural and medical imaging problems \\cite{litjens2017survey}. However, deep-learning methods tend to under-perform when trained on a dataset with an underlying distribution different from the target images. In medical imaging, this is due for instance to variations in imaging modalities and protocols, vendors, machines and clinical sites (see Fig~\\ref{fig:s_t_im}). For semantic segmentation problems, labelling for each different target distribution is impractical, time-consuming, and often impossible.\nTo circumvent those impediments, methods learning robust networks with less supervision have been popularized in computer vision.\n\n\\paragraph{Domain Adaptation} This motivates Domain Adaptation (DA) methods, which adapt a model trained on an annotated source domain to another target domain with no or minimal annotations. While some approaches to the problem require labelled data from the target domain, others adopt an unsupervised approach to domain adaptation (UDA). \n\n\\paragraph{Source Free Domain Adaptation} Additionally, a more recent line of works tackles source-free domain adaptation (SFDA) \\cite{Bateson2020}, a setting where the source data is unavailable (neither images, nor labeled masks) during the adaptation phase. In medical imaging, this may be the case when the source and target data come from different clinical sites, due to, for instance, privacy concerns or the loss or corruption of source data.\n\n\\paragraph{Test-Time Domain Adaptation} In the standard DA setting, the first step consists in fine-tuning or retraining the model with some (unlabeled) samples from the target domain. Then, the evaluation consists of measuring the model's ability to generalise to unseen data\nin the target domain. However, the emerging field of Test-Time Adaptation (TTA) \\cite{wang2021tent} argues that this is not as useful as adapting to the test set directly. In some applications, it might not even be possible, such as when only a single target-domain subject is available. \n\n\\paragraph{}Our framework will therefore be Source-Free Test-Time Adaptation for Segmentation. We propose an evaluation framework where we perform test-time adaptation on each subject from the target data separately. This can be seen as an extreme case of the active field of \"learning with less data\", as our method should work with only a single sample from a new data distribution.\n\n\\begin{figure}[t]\n \\includegraphics[width=1\\linewidth]{figures\/s_t_im_crop.png}\n \\caption[]{Visualization of 2 aligned slice pairs in two different MRI modalities: Water and In-Phase, showing the different aspects of the structures to be segmented.}\n \\label{fig:s_t_im}\n\\end{figure}\n\n\n\\section{Project} \n\nIn this project, we will propose a simple formulation for test-time adaptation (TTA), which removes the need for concurrent access to the source and target data, as well as the need for multiple target subjects, in the context of semantic segmentation. 
Our approach will substitute the standard supervised loss in the source domain by a direct minimization of the entropy of predictions in the target domain. To prevent trivial solutions, we will integrate the entropy loss with different shape moments, such as those introduced in the less challenging context of semi-supervised learning in \\cite{KervadecMIDL2021}. Our formulation is usable with any segmentation network architecture; we will use UNet \\cite{UNet}.\n\nTTA is a nascent field, with only a few works in segmentation \\cite{KARANI2021101907}. The question we wish to answer is the following: can we leverage the inherent structure of anatomical shapes to further guide test-time adaptation?\n\n\\section{Datasets}\n \n\\subsection{Test-time Adaptation with shape moments} \n\n\\subsubsection{\\textbf{Prostate Application}} We will first evaluate the proposed method on the publicly available NCI-ISBI 2013 Challenge\\footnote{https:\/\/wiki.cancerimagingarchive.net\/display\/Public\/NCI-ISBI+2013+Challenge+-+Automated+Segmentation+of+Prostate+Structures} dataset. It is composed of manually annotated 3D T2-weighted magnetic resonance images from two different sites, including 30 samples from Radboud University Nijmegen Medical Centre (Site A, target domain $T$) and 30 samples from Boston Medical Center (Site B, source domain $S$).\n\n\\subsubsection{\\textbf{Heart Application}} We will employ the 2017 Multi-Modality Whole Heart Segmentation (MMWHS) Challenge dataset for cardiac segmentation \\cite{Zhuang2019}. The dataset consists of 20 MRI (source domain $S$) and 20 CT volumes (target domain $T$) of non-overlapping subjects, with their manual annotations. We will adapt the segmentation network for parsing four cardiac structures: the Ascending Aorta (AA), the Left Atrium blood cavity (LA), the Left Ventricle blood cavity (LV) and the Myocardium of the left ventricle (MYO). We will employ the pre-processed data provided by \\cite{dou2018pnp}. \n\n\\subsubsection{\\textbf{Benchmark Methods}}\n\nA model trained on the source only, \\textit{NoAdaptation}, will be used as a lower bound. A model trained with the cross-entropy loss on the target domain, referred to as $Oracle$, will be the upper bound. Regarding test-time adaptation methods, we will compare our method to \\cite{wang2021tent}, the seminal test-time adaptation method, as well as \\cite{KARANI2021101907}. Then, we will also compare the TTA setting to the DA setting, where a set of subjects from the target distribution is available for adapting the network. We will compare to both SFDA methods \\cite{Bateson2020,zhang2019curriculum} and standard DA methods, such as adversarial methods \\cite{tsai2018learning}.\n\n \n\\subsubsection{\\textbf{Evaluation.}}\nThe Dice similarity coefficient (DSC) and the Hausdorff distance (HD) will be used as evaluation metrics in our experiments. \n\n\n\n\n\\section{Outline} \n\nTable \\ref{tab:our} shows the outline of the project.\n\n \\begin{table}[]\n\\begin{tabular}{ll}\n\\toprule\n1 week & Feasibility study \\\\\n2 weeks & Literature study \\\\\n1 week & Project proposal \\\\\n1 week & Literature Report \\\\\n5 weeks & Implementation, evaluation, and report writing \\\\\n\\bottomrule\n\\end{tabular}\n \\label{tab:our}\n\\end{table}\n\n\n\n\n\n\n\\section{Critical Points}\n\n\\paragraph{Experiments} We will introduce an adaptation loss combining an entropy term with various shape moments to constrain the structures to be segmented. 
We will investigate simple shape priors, such as the size, the centroid, the distance to the centroid, and the eccentricity.\n\n\\paragraph{Questions to be answered} How can good priors for the shape moments be derived? Should the whole network be adapted, or just the batch normalization parameters, as is advocated in recent works \\cite{KARANI2021101907,wang2021tent}?\n\n\\paragraph{Code}\nThe code will build on previous work \\cite{Bateson2020}, using PyTorch.\n\n\\bibliographystyle{splncs04}\n\n\\section{Introduction}\n\nDeep convolutional neural networks have demonstrated state-of-the-art performance in many natural and medical imaging problems \\cite{litjens2017survey}. However, deep-learning methods tend to under-perform when trained on a dataset with an underlying distribution different from the target images. In medical imaging, this is due for instance to variations in imaging modalities and protocols, vendors, machines and clinical sites. For semantic segmentation problems, labelling for each different target distribution is impractical, time-consuming, and often impossible. To circumvent those impediments, methods learning robust networks with less supervision have been popularized in computer vision.\n\nThis motivates Domain Adaptation (DA) methods, which adapt a model trained on an annotated source domain to another target domain with no or minimal annotations. While some approaches to the problem require labelled data from the target domain, others adopt an unsupervised approach to domain adaptation (UDA). Evaluating UDA methods consists of measuring the model's ability to generalise to unseen data\nin the target domain. However, the emerging field of Test-Time Adaptation (TTA) \\citep{wang2021tent} argues that this is not as useful as adapting to the test set directly. In some applications, it might not even be possible, such as when only a single target-domain subject is available. \n\nMoreover, a more recent line of works tackles source-free domain adaptation \\citep{Bateson2020}, a setting where the source data is unavailable (neither images, nor labeled masks) during the adaptation phase. In medical imaging, this may be the case when the source and target data come from different clinical sites, due to, for instance, privacy concerns or the loss or corruption of source data.\n\nOur framework will therefore be source-free Test-Time Adaptation for Segmentation. We propose an evaluation framework where we perform test-time adaptation on each subject separately.\n\n\n\n\\section{Project} \n\nIn this project, we will propose a simple formulation for test-time adaptation (TTA), which removes the need for concurrent access to the source and target data, as well as the need for multiple target subjects, in the context of semantic segmentation. Our approach will substitute the standard supervised loss in the source domain by a direct minimization of the entropy of predictions in the target domain. To prevent trivial solutions, we will integrate the entropy loss with different shape moments, such as in \\citep{KervadecMIDL2021,Bateson2020}. 
Our formulation is usable with any segmentation network architecture; we will use UNet \\citep{UNet}.\n\n\\section{Datasets}\n \n\\subsection{Test-time Adaptation with shape moments} \n\n\\subsubsection{\\textbf{Prostate Application}} We will first evaluate the proposed method on the publicly available NCI-ISBI 2013 Challenge\\footnote{https:\/\/wiki.cancerimagingarchive.net\/display\/Public\/NCI-ISBI+2013+Challenge+-+Automated+Segmentation+of+Prostate+Structures} dataset. It is composed of manually annotated 3D T2-weighted magnetic resonance images from two different sites, including 30 samples from Radboud University Nijmegen Medical Centre (Site A, target domain $T$) and 30 samples from Boston Medical Center (Site B, source domain $S$).\n\n\\subsubsection{\\textbf{Heart Application}} We employ the 2017 Multi-Modality Whole Heart Segmentation (MMWHS) Challenge dataset for cardiac segmentation \\citep{Zhuang2019}. The dataset consists of 20 MRI (source domain $S$) and 20 CT volumes (target domain $T$) of non-overlapping subjects, with their manual annotations. We will adapt the segmentation network for parsing four cardiac structures: the Ascending Aorta (AA), the Left Atrium blood cavity (LA), the Left Ventricle blood cavity (LV) and the Myocardium of the left ventricle (MYO). We will employ the pre-processed data provided by \\citep{dou2018pnp}. \n\n\\subsubsection{\\textbf{Benchmark Methods}}\n\nA model trained on the source only, \\textit{NoAdaptation}, will be used as a lower bound. A model trained with the cross-entropy loss on the target domain, referred to as $Oracle$, will be the upper bound. Regarding test-time adaptation methods, we will compare our method to \\citep{wang2021tent}, the seminal test-time adaptation method, as well as \\citep{KARANI2021101907}. Then, we will also compare the TTA setting to the DA setting, where a set of subjects from the target distribution is available for adaptation. We will compare to both SFDA methods \\citep{Bateson2020,zhang2019curriculum} and standard DA methods, such as adversarial methods \\citep{tsai2018learning}.\n\n \n\\subsubsection{\\textbf{Evaluation.}}\nThe Dice similarity coefficient (DSC) and the Hausdorff distance (HD) will be used as evaluation metrics in our experiments. \n\n\n\n\n\\section{Outline} \n\nTable \\ref{tab:our} shows the outline of the project.\n\n \\begin{table}[]\n\\begin{tabular}{ll}\n\\toprule\n1 week & Feasibility study \\\\\n2 weeks & Literature study \\\\\n1 week & Project proposal \\\\\n1 week & Literature Report \\\\\n5 weeks & Implementation, evaluation, and report writing \\\\\n\\bottomrule\n\\end{tabular}\n \\label{tab:our}\n\\end{table}\n\n\n\n\n\n\n\\section{Conclusion}\n\nIn this paper, we proposed a simple formulation for domain adaptation (DA), which removes the need for concurrent access to the source and target data, in the context of semantic segmentation for multi-modal magnetic resonance images. Our approach substitutes the standard supervised loss in the source domain by a direct minimization of the entropy of predictions in the target domain. To prevent trivial solutions, we integrate the entropy loss with a class-ratio prior, which is built from an auxiliary network. Unlike the recent domain-adaptation techniques, our method tackles DA without resorting to source data during the adaptation phase. Interestingly, our formulation achieved better performances than related state-of-the-art methods with access to both source and target data. 
\nThis shows the effectiveness of our prior-aware entropy minimization and that, in several cases of interest where the domain shift is not too large, adaptation might not need access to the source data. \nOur proposed adaptation framework is usable with any segmentation network architecture.\n\n\n\n\n\n\n\n\n\n\n\n\\bibliographystyle{splncs04}\n\n\\section{Introduction}\n\\lipsum[2]\n\\lipsum[3]\n\n\n\\section{Headings: first level}\n\\label{sec:headings}\n\n\\lipsum[4] See Section \\ref{sec:headings}.\n\n\\subsection{Headings: second level}\n\\lipsum[5]\n\\begin{equation}\n\\xi _{ij}(t)=P(x_{t}=i,x_{t+1}=j|y,v,w;\\theta)= {\\frac {\\alpha _{i}(t)a^{w_t}_{ij}\\beta _{j}(t+1)b^{v_{t+1}}_{j}(y_{t+1})}{\\sum _{i=1}^{N} \\sum _{j=1}^{N} \\alpha _{i}(t)a^{w_t}_{ij}\\beta _{j}(t+1)b^{v_{t+1}}_{j}(y_{t+1})}}\n\\end{equation}\n\n\\subsubsection{Headings: third level}\n\\lipsum[6]\n\n\\paragraph{Paragraph}\n\\lipsum[7]\n\n\\section{Examples of citations, figures, tables, references}\n\\label{sec:others}\n\\lipsum[8] \\cite{kour2014real,kour2014fast} and see \\cite{hadash2018estimate}.\n\nThe documentation for \\verb+natbib+ may be found at\n\\begin{center}\n \\url{http:\/\/mirrors.ctan.org\/macros\/latex\/contrib\/natbib\/natnotes.pdf}\n\\end{center}\nOf note is the command \\verb+\\citet+, which produces citations\nappropriate for use in inline text. For example,\n\\begin{verbatim}\n \\citet{hasselmo} investigated\\dots\n\\end{verbatim}\nproduces\n\\begin{quote}\n Hasselmo, et al.\\ (1995) investigated\\dots\n\\end{quote}\n\n\\begin{center}\n \\url{https:\/\/www.ctan.org\/pkg\/booktabs}\n\\end{center}\n\n\n\\subsection{Figures}\n\\lipsum[10] \nSee Figure \\ref{fig:fig1}. Here is how you add footnotes. \\footnote{Sample of the first footnote.}\n\\lipsum[11] \n\n\\begin{figure}\n \\centering\n \\fbox{\\rule[-.5cm]{4cm}{4cm} \\rule[-.5cm]{4cm}{0cm}}\n \\caption{Sample figure caption.}\n \\label{fig:fig1}\n\\end{figure}\n\n\\subsection{Tables}\n\\lipsum[12]\nSee awesome Table~\\ref{tab:table}.\n\n\\begin{table}\n \\caption{Sample table title}\n \\centering\n \\begin{tabular}{lll}\n \\toprule\n \\multicolumn{2}{c}{Part} \\\\\n \\cmidrule(r){1-2}\n Name & Description & Size ($\\mu$m) \\\\\n \\midrule\n Dendrite & Input terminal & $\\sim$100 \\\\\n Axon & Output terminal & $\\sim$10 \\\\\n Soma & Cell body & up to $10^6$ \\\\\n \\bottomrule\n \\end{tabular}\n \\label{tab:table}\n\\end{table}\n\n\\subsection{Lists}\n\\begin{itemize}\n\\item Lorem ipsum dolor sit amet\n\\item consectetur adipiscing elit. \n\\item Aliquam dignissim blandit est, in dictum tortor gravida eget. 
In ac rutrum magna.\n\\end{itemize}\n\n\n\\section{Conclusion}\nYour conclusion here\n\n\\section*{Acknowledgments}\nThis was supported in part by......\n\n\\bibliographystyle{unsrt} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzafzd b/data_all_eng_slimpj/shuffled/split2/finalzzafzd new file mode 100644 index 0000000000000000000000000000000000000000..f597f926e3702c43a97bc5eb9925628efefabd42 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzafzd @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nLet $\\A$ be a Banach algebra (over the complex field $\\mathbb{C}$), and $\\U$ be a Banach $\\A$-bimodule. A linear map $\\delta:\\A\\rightarrow\\U$ is called a \\textit{derivation} if $\\delta (ab)=a\\delta(b)+\\delta (a)b$ holds for all $a,b\\in\\A$. For any $x\\in\\U$, the map $id_x:\\A\\rightarrow\\U$ given by $id_x(a)=ax-xa$ is a continuous derivation called \\textit{inner derivation}. 
The set of all continuous derivations from $\\A$ into $\\U$ is denoted by $Z^{1}(\\A,\\U)$, while $N^{1}(\\A,\\U)$ denotes the set of all inner derivations from $\\A$ into $\\U$. $Z^{1}(\\A,\\U)$ is a linear subspace of $\\mathbb{B}(\\A,\\U)$ (the space of all continuous linear maps from $\\A$ into $\\U$) and $N^{1}(\\A,\\U)$ is a linear subspace of $Z^{1}(\\A,\\U)$. We denote by $H^{1}(\\A,\\U)$ the quotient space $\\frac{Z^{1}(\\A,\\U)}{N^{1}(\\A,\\U)}$, which is called \\textit{the first cohomology group of $\\A$ with coefficients in $\\U$}. We briefly call the first cohomology group of $\\A$ with coefficients in $\\A$ the first cohomology group of $\\A$. Derivations and cohomology groups are important subjects in the study of Banach algebras. Among the most important problems related to them are the following questions: under what conditions is a derivation $\\delta:\\A\\rightarrow\\U$ continuous? Under what conditions does one have $H^{1}(\\A,\\U)=(0)$ (i.e., when is every continuous derivation from $\\A$ into $\\U$ inner)? Also the calculation of $H^{1}(\\A,\\U)$, up to isomorphism of linear spaces, is a considerable problem.\n\\par \nThe problem of continuity of derivations is related to the subject of automatic continuity, an important subject in mathematical analysis with a long history. We may refer to \\cite{da}, a detailed source on this subject, for more information. Here we only review the most important known results concerning the automatic continuity of derivations. Johnson and Sinclair in \\cite{john1} have shown that every derivation on a semisimple Banach algebra is continuous. Ringrose in \\cite{rin} showed that every derivation from a $C^{*}$-algebra $\\A$ into a Banach $\\A$-bimodule $\\U$ is continuous. In \\cite{chr}, Christensen proved that every derivation from a nest algebra on a Hilbert space $\\mathcal{H}$ into $\\mathbb{B}(\\mathcal{H})$ is continuous. Additionally, some results on the automatic continuity of derivations on prime Banach algebras have been established by Villena in \\cite{vi1} and \\cite{vi2}.\n\\par \nThe study of the first cohomology group of Banach algebras is also a considerable topic which may be used to study the structure of Banach algebras. Johnson in \\cite{john2}, using the first cohomology group, defined amenable Banach algebras, and various types of amenability have since been defined by means of the first cohomology group. We refer the reader to \\cite{run} for more information. Among the interesting problems in the theory of derivations are characterizing the algebras on which every continuous derivation is inner (that is, the first cohomology group is trivial) and characterizing the first cohomology group of Banach algebras up to isomorphism of vector spaces. Sakai in \\cite{sa} showed that every continuous derivation on a $W^{*}$-algebra is inner. Kadison \\cite{ka} proved that every derivation of a $C^{*}$-algebra on a Hilbert space $\\mathcal{H}$ is spatial (i.e., it is of the form $a\\mapsto ta-at$ for some $t\\in \\mathbb{B}(\\mathcal{H})$) and, in particular, every derivation on a von Neumann algebra is inner. Some results have also been obtained in the case of non-self-adjoint operator algebras. Christensen \\cite{chr} showed that every continuous derivation from a nest algebra on $\\mathcal{H}$ to itself and to $\\mathbb{B}(\\mathcal{H})$ is inner, and this result was later generalized in several directions, among which we may refer to \\cite{li} and the references therein. 
Gilfeather and Smith have calculated the first cohomology group of some operator algebras called joins (\\cite{gil1}, \\cite{gil2}). In \\cite{do}, the cohomology group of operator algebras called seminest algebras has been calculated. All of these operator algebras and nest algebras have a structure like that of triangular Banach algebras, so, motivated by these studies, Forrest and Marcoux in \\cite{for} investigated the first cohomology group of triangular Banach algebras. The first cohomology group of a triangular Banach algebra is not necessarily zero; in \\cite{for} it has in fact been calculated under some special conditions, and using these results various examples of Banach algebras with non-trivial cohomology have been given and their first cohomology groups computed. Also in \\cite{for}, some results concerning the automatic continuity of derivations on triangular Banach algebras are presented. A generalization of triangular Banach algebras is the class of module extensions of Banach algebras, whose weak amenability was studied by Zhang \\cite{zh}; subsequently, Medghalchi and Pourmahmood in \\cite{med} computed the first cohomology group of module extensions of Banach algebras and, using those results, gave various examples of Banach algebras with non-trivial cohomology group, for which they calculated the first cohomology group explicitly.\n\\par \nAnother class of Banach algebras, considered during the last thirty years, is the class of algebras obtained by a special product called the $\\theta$-Lau product. This product was first introduced by Lau \\cite{lau} for a special class of Banach algebras which are pre-duals of von Neumann algebras such that the unit element of the dual is a multiplicative linear functional. Afterwards, various studies have been devoted to it. For instance, Monfared in \\cite{mn} investigated the structure of this product, and in \\cite{gha} the amenability of Banach algebras equipped with the same product has been studied. For more information about this product the reader may refer to \\cite{gha}, \\cite{mn} and the references therein. \n\\par \nLet $\\mathcal{B}$ be a Banach algebra such that $\\mathcal{B} =\\A\\oplus\\U$ (as a direct sum of Banach spaces), where $\\A$ is a closed subalgebra of $\\mathcal{B}$ and $\\U$ is a closed ideal in $\\mathcal{B}$. In this case, we say that $\\mathcal{B}$ is the \\textit{semidirect product} of $\\A$ and $\\U$ and write $\\mathcal{B}=\\A\\ltimes \\U$. Semidirect product Banach algebras appear in the study of many classes of Banach algebras. For instance, in the strong Wedderburn decomposition of a Banach algebra $\\mathcal{B}$, it is assumed that $\\mathcal{B}=\\A\\ltimes Rad\\mathcal{B}$, where $Rad\\mathcal{B}$ is the Jacobson radical of $\\mathcal{B}$ and $\\A $ is a closed subalgebra of $\\mathcal{B}$ with $\\A\\cong \\frac{\\mathcal{B}}{Rad\\mathcal{B}}$. As another example, in \\cite{da2}, using the structure of semidirect products, the authors have studied the amenability of measure algebras, since every measure algebra has a decomposition as a semidirect product of Banach algebras. In \\cite{tho2}, Thomas has investigated necessary conditions for the decomposition of a commutative Banach algebra into a semidirect product of a closed subalgebra and a principal ideal. We may also refer to \\cite{ba}, \\cite{ber} and \\cite{wh}, where semidirect product Banach algebras are studied from different points of view. 
Equivalently, one may describe semidirect product Banach algebras as follows: let $\\A$ and $\\U$ be Banach algebras such that $\\U$ is a Banach $\\A$-bimodule with compatible actions and an appropriate norm. Consider the multiplication on $\\A\\times\\U$ given by \n\\[(a,x)(b,y)=(ab,ay+xb+xy)\\quad\\quad ((a,x),(b,y)\\in \\A\\times\\U).\\]\nIt can be shown that with this multiplication and the $l^{1}$-norm, $\\A\\times\\U$ is a Banach algebra in which $\\A$ is a closed subalgebra and $\\U$ is a closed ideal. So this Banach algebra is in fact $\\A\\ltimes\\U$. By considering different module actions or algebra multiplications, it can be seen that $\\A\\ltimes\\U$ is a generalization of direct products of Banach algebras, trivial extension Banach algebras, triangular Banach algebras and $\\theta$-Lau product Banach algebras. In this paper we consider semidirect product Banach algebras as described above and study the derivations on this special product of Banach algebras. We establish various results concerning the automatic continuity of derivations on $\\A\\ltimes\\U$ and its first cohomology group, and we specialize these results to particular cases of $\\A\\ltimes\\U$, obtaining various examples of Banach algebras whose derivations are automatically continuous and whose first cohomology group is trivial, or whose first cohomology group we compute.\n\\par\nThis paper is organized as follows. In Section 2, we investigate the definition of semidirect product Banach algebras and show that this product generalizes various types of Banach algebras. In Section 3, the structure of derivations on semidirect product Banach algebras is discussed; using it, we obtain several results about the automatic continuity of derivations on these Banach algebras and also study the decomposition of derivations into the sum of a continuous derivation and another derivation. In Section 4, we consider the first cohomology group of semidirect product Banach algebras, compute it under various conditions and establish several results in this context. In Section 5, we apply the results obtained in Sections 3 and 4 to some special cases of semidirect products of Banach algebras. In fact, we investigate the automatic continuity of derivations and the first cohomology group for direct products of Banach algebras, module extension Banach algebras and $\\theta$-Lau products of Banach algebras, and establish various results about the derivations on these Banach algebras. \n\\par \nAt the end of this section we introduce some notation and terminology used in the paper.\n\\par \nIf $\\mathcal{X}$ and $\\mathcal{Y}$ are Banach spaces, for a linear map $T:\\mathcal{X}\\rightarrow \\mathcal{Y}$, define the separating space $\\mathfrak{S}(T)$ as \n\\[\\mathfrak{S}(T):=\\{y\\in \\mathcal{Y}\\, \\mid \\, \\text{there is}\\, \\, \\{x_n\\}\\subseteq \\mathcal{X}\\,\\, \\text{with}\\, \\, x_n\\rightarrow 0 , \\, T(x_n)\\rightarrow y\\} .\\]\nBy the closed graph theorem, $T$ is continuous if and only if $\\mathfrak{S}(T)=(0)$.\n\\par \nLet $\\A$ be a Banach algebra and $\\U$ be a Banach $\\A$-bimodule. By $Z(\\A)$, we mean the center of $\\A$. Consider the set $ann_{\\A}\\U$ defined as\n\\[ann_{\\A}\\U:=\\{a\\in\\A\\, \\mid \\, a\\U=\\U a=(0)\\}.\\]\nIf $\\mathcal{N}$ is an $\\A$-submodule of $\\U$, we put \n\\[(\\mathcal{N}:\\U)_{\\A}:=\\{a\\in\\A \\, \\mid \\, a\\U\\subseteq \\mathcal{N}, \\, \\U a\\subseteq \\mathcal{N}\\}. 
\\] \nIt is clear that if $\\mathcal{N}=(0)$, then $((0):\\U)_{\\A}=ann_{\\A}\\U$. Let $\\U$ and $\\mathcal{V}$ be Banach $\\A$-bimodules. A linear map $\\phi :\\U\\rightarrow \\mathcal{V}$ is said to be a \\textit{left $\\A$-module homomorphism} if $\\phi (ax)=a\\phi (x)$ whenever $a\\in \\A$ and $x\\in\\U$, and it is a \\textit{right $\\A$-module homomorphism} if $\\phi (xa)=\\phi (x) a\\quad (a\\in \\A,x\\in\\U)$. The linear map $\\phi$ is called an \\textit{$\\A$-module homomorphism} if $\\phi$ is both a left and a right $\\A$-module homomorphism. The set of all continuous $\\A$-module homomorphisms from $\\U$ into $\\mathcal{V}$ is denoted by $Hom_{\\A}(\\U,\\mathcal{V})$. Note that if the spaces coincide, we just write $Z^{1}(\\A)$, $N^{1}(\\A)$, $H^{1}(\\A)$ and $Hom_{\\A}(\\U)$.\n\\section{Semidirect products of Banach algebras}\nIn this section we introduce the notion of semidirect products of Banach algebras and give some properties of this concept.\\\\\nLet $\\A$ and $\\U$ be Banach algebras such that $\\U$ is a Banach $\\A$-bimodule with\n\\[\\parallel ax \\parallel \\leq \\parallel a \\parallel \\parallel x \\parallel, \\quad \\parallel xa \\parallel \\leq \\parallel x \\parallel \\parallel a \\parallel \\quad\\quad (a\\in \\A, x\\in \\U), \\]\nand compatible actions, that is, \n\\[ (a.x)y=a.(xy),\\,\\,\\, (xy).a=x(y.a), \\,\\,\\, (x.a)y=x(a.y) \\quad\\quad (a \\in \\A, \\, x,y\\in \\U). \\]\nIf we equip the set $\\A \\times \\U$ with the usual $\\mathbb{C}$-module structure, then the multiplication \n\\[ (a,x)(b,y)=(ab, a.y+x.b+xy) \\]\nturns $\\A \\times \\U$ into an associative algebra. \n\\par \nIn what follows we also denote the module actions simply by $ax$ and $xa$.\n\\par \nThe \\textit{semidirect product} of Banach algebras $\\A$ and $\\U$, denoted by $\\A \\ltimes \\U$, is defined as the space $\\A \\times \\U$ with the above algebra multiplication and with the norm \n\\[ \\parallel (a,x) \\parallel = \\parallel a \\parallel + \\parallel x \\parallel . \\]\nThe semidirect product $\\A \\ltimes \\U$ is a Banach algebra.\n\\begin{rem}\\label{1}\nIn $\\A \\ltimes \\U$ we identify $\\A \\times \\lbrace 0 \\rbrace$ with $\\A$, and $\\lbrace 0 \\rbrace \\times \\U $ with $\\U$. Then $\\A$ is a closed subalgebra while $\\U$ is a closed ideal of $\\A \\ltimes \\U$, and\n\\[ \\A \\ltimes \\U \/ \\U \\cong \\A \\quad (\\text{isometric isomorphism}). \\]\nIndeed, $\\A \\ltimes \\U$ is equal to the direct sum of $\\A$ and $\\U$ as Banach spaces.\n\\par\nConversely, let $\\mathcal{B}$ be a Banach algebra which has the form $\\mathcal{B}=\\A \\oplus \\U$ as a direct sum of Banach spaces, where $\\A$ is a closed subalgebra of $\\mathcal{B}$ and $\\U$ is a closed ideal in $\\mathcal{B}$. In this case, the product of two elements $a+x$ and $b+y$ of $\\mathcal{B}=\\A \\oplus \\U$ is given by \n\\[(a+x)(b+y)=ab+(ay+xb+xy), \\] where $ab\\in \\A$ and $ay+xb+xy\\in \\U$. Also the norm on $\\mathcal{B}$ is equivalent to the one given by \n\\[ \\parallel a+x \\parallel =\\parallel a \\parallel + \\parallel x \\parallel \\quad\\quad (a\\in \\A, x \\in \\U).\\]\nOn the other hand, $\\A$ and $\\U$ are Banach algebras such that $\\U$ is a Banach $\\A$-bimodule with compatible actions and\n\\[\\parallel ax \\parallel \\leq \\parallel a \\parallel \\parallel x \\parallel, \\quad \\parallel xa \\parallel \\leq \\parallel x \\parallel \\parallel a \\parallel \\quad\\quad (a\\in \\A, x\\in \\U). 
\\] \nBy the above arguments we have \n\\[ \\mathcal{B}\\cong \\A \\ltimes \\U , \\]\nas an isomorphism of Banach algebras.\n\\end{rem}\nThe following examples provide various\ntypes of semidirect products of Banach algebras.\n \\begin{exm}\nLet $\\A$ be a Banach algebra. Then the unitization of $\\A$, denoted by $\\A^{\\#}$, is in fact the semidirect product $\\mathbb{C}\\ltimes \\A$.\n\\end{exm}\n\\begin{exm}\\label{dp}\nSuppose that the action of $\\A$ on $\\U$ is trivial, that is, $ \\A\\U=\\U\\A=(0)$. Then we obtain the usual $l^{1}$-direct product of the Banach algebras $\\A$ and $\\U$. In this case, $\\A \\ltimes \\U= \\A \\times \\U$.\n\\end{exm}\n\\begin{exm}\\label{me}\nLet the algebra multiplication on $\\U$ be trivial, that is, $\\U^2=(0)$. Then $\\A\\ltimes \\U$ is the same as the module extension (or trivial extension) of $\\A$ by $\\U$, which we denote by $T(\\A,\\U)$.\n\\end{exm}\n\\begin{exm}\\label{tri}\nLet $\\A$ and $\\mathcal{B}$ be Banach algebras and $\\mathcal{M}$ be a Banach $(\\A,\\mathcal{B})$-bimodule. The triangular Banach algebra introduced in \\cite{for} is \n$Tri(\\A, \\mathcal{M}, \\mathcal{B}):=\\begin{pmatrix}\n \\A & \\mathcal{M} \\\\\n 0 &\\mathcal{B} \n\\end{pmatrix}$ with the usual matrix operations and $l^1$-norm. If we turn $\\mathcal{M}$ into a Banach $\\A \\times\\mathcal{B}$-bimodule with the actions \n$$(a,b)m=am\\quad ,\\quad m(a,b)=mb\\quad\\quad ((a,b)\\in \\A\\times\\mathcal{B}, m\\in \\mathcal{M} )$$\n($\\A \\times \\mathcal{B}$ is considered with the $l^1$-norm), then $Tri(\\A, \\mathcal{M}, \\mathcal{B})\\cong T(\\A\\times\\mathcal{B},\\mathcal{M})$ as an isomorphism of Banach algebras. Therefore triangular Banach algebras are examples of semidirect products of Banach algebras.\n\\end{exm}\n\\begin{rem}\nLet $\\A$ and $\\U$ be Banach algebras such that $\\U$ is a Banach $\\A$-bimodule with the compatible actions and norm, and let $\\overline{\\A\\U}=\\U$ or $\\overline{\\U\\A}=\\U$. Let the Banach algebra $\\A$ be the direct sum of its closed ideals $I_1$ and $I_2$, that is, $\\A=I_1 \\oplus I_2$. If $I_2\\U=\\U I_1=(0)$, then $\\U^2=(0)$. Indeed, if $a\\in I_1$, $b\\in I_2$ and $x,y\\in \\U$ are arbitrary, then by associativity we have $$x[(a+b)y]=[x(a+b)]y.$$\nSo $$x(ay)=(xb)y.$$\nIf $\\overline{\\A\\U}=\\U$ and we put $b=0$ in the above equation, then $xy=0$; analogously, if $\\overline{\\U\\A}=\\U$, letting $a=0$ gives $xy=0$.\n\\par \nBy the above arguments and hypotheses, and according to Example \\ref{tri}, in this case we have \n\\begin{equation*}\n\\A\\ltimes \\U=T(\\A,\\U)\\cong\\begin{pmatrix}\n I_1 & \\U \\\\\n 0 &I_2\n\\end{pmatrix}\n\\end{equation*}\n as an isomorphism of Banach algebras.\n\\end{rem}\n\\begin{exm}\\label{la}\nLet $\\A$ and $\\U$ be Banach algebras and $\\theta\\in \\Delta(\\A)$, where $\\Delta(\\A)$ is the set of all non-zero characters of $\\A$. With the following module actions, $\\U$ becomes a Banach $\\A$-bimodule:\n$$ax=xa=\\theta(a)x\\quad\\quad (a\\in \\A ,x\\in \\U).$$\nThe norm and the actions on $\\U$ are compatible, and one can consider $\\A\\ltimes \\U$ with the multiplication\n$$(a,x)(b,y)=(ab,\\theta(a)y+\\theta(b)x+xy).$$\nIn this case $\\A\\ltimes \\U$ is the $\\theta$-Lau product introduced in \\cite{lau}. \n\\end{exm}\n\\begin{exm}\nLet $G$ be a locally compact group. Then $M(G)=l^{1}(G)\\ltimes M_{c}(G)$, where $M_{c}(G)$ is the subspace of $M(G)$ consisting of all continuous measures, i.e., those $\\mu\\in M(G)$ with $\\mu(\\{s\\})=0$ for all $s\\in G$. 
We denote the subspace of discrete measures by $M_d({G})$, which is isomorphic to $l^{1}(G)$, where \n\\begin{equation*}M_{d}(G)=\\{\\mu =\\sum_{s\\in G}{\\alpha_{s}\\delta_{s}}:\\Vert\\mu\\Vert =\\sum_{s\\in G}{\\vert \\alpha_{s}\\vert }<\\infty \\}.\n\\end{equation*}\nIndeed $l^{1}(G)$ is a closed subspace of $M(G)$ and $M_c(G)$ is a closed ideal of $M(G)$. If $G$ is discrete, then $M(G)=l^{1}(G)$ and $M_{c}(G)=\\{0\\}$. But if $G$ is not discrete, then $M_{c}(G)\\neq\\{0\\}$. \n\\end{exm}\nFor a Banach algebra $\\mathcal{B}$ with a closed ideal $\\mathcal{I}$, there need not exist a closed subalgebra $\\A$ of $\\mathcal{B}$ such that $\\mathcal{B}=\\A\\ltimes \\mathcal{I}$. In the following we give an example of such a Banach algebra.\n\\begin{exm}\nLet $\\mathcal{B}:=C([0,1])$ be the Banach algebra of continuous complex-valued functions on $[0,1]$ and let $\\mathcal{I}:=\\{f\\in \\mathcal{B} :f(0)=f(1)=0\\}$. $\\mathcal{I}$ is a closed ideal of $\\mathcal{B}$. If $\\A $ is a closed subalgebra of $\\mathcal{B}$ satisfying $\\mathcal{B}=\\A \\oplus \\mathcal{I}$ as a direct sum of Banach spaces, then for $f\\in \\mathcal{I}$ and $g\\in \\A$ with $f(x)+g(x)=x$ for $x\\in [0,1]$, we have $g(0)=0$, $g(1)=1$ and $g-g^2\\in \\A\\cap \\mathcal{I}=(0)$. Yet, since $g$ is continuous and takes the values $0$ and $1$, it takes some value strictly between them, so $g-g^2\\neq 0$; this contradiction shows that no such $\\A$ exists.\n\\end{exm}\nNote that $\\A\\ltimes\\U$ is commutative if and only if both $\\A$ and $\\U$ are commutative Banach algebras and $\\U$ is a commutative $\\A$-bimodule.\n\\par \nIn the rest of this section we introduce some special maps which are used in the next sections.\n\\par \nFor $a\\in\\A$ define the map $r_{a}:\\U\\rightarrow\\U$ by $r_{a}(x)=xa-ax$. Some properties of this map are given in the following remark.\n\\begin{rem}\\label{inn1}\nFor $a\\in\\A$, consider the map $r_{a}:\\U\\rightarrow\\U$.\n\\begin{enumerate}\n\\item[(i)]\n$r_a$ is a derivation on $\\U$.\n\\item[(ii)]\nFor every $b\\in\\A$ and $x\\in\\U$,\n\\[r_{a}(bx)=br_{a}(x)+id_{a}(b)x \\quad \\text{and} \\quad r_{a}(xb)=r_{a}(x)b+x id_{a}(b).\\]\n\\item[(iii)]\nFor $a\\in \\A$, if $id_{a}=0$, then $r_a$ is an $\\A$-bimodule homomorphism. Also, if $ann_{\\A}\\U =(0)$ and $r_a$ is an $\\A$-bimodule homomorphism, then $id_{a}=0$.\n\\end{enumerate}\n\\end{rem}\nInner derivations on $\\U$ also have significant properties, used in determining the first cohomology group of $\\A\\ltimes\\U$, which are given in the next remark.\n\\par \nNote that both inner derivations from $\\A$ to $\\U$ and inner derivations from $\\U$ to $\\U$ would be denoted by $id_{x}$. In order to avoid confusion, we denote by $id_{\\A , x}$ the inner derivations from $\\A$ to $\\U$, while $id_{\\U , x}$ denotes the inner derivations from $\\U$ to $\\U$.\n\\begin{rem}\\label{inn2}\nFor $x_0\\in\\U$, consider the inner derivation $id_{\\U , x_0}:\\U\\rightarrow\\U$.\n\\begin{enumerate}\n\\item[(i)]\nFor every $a\\in\\A$ and $x\\in\\U$, \n\\[id_{\\U ,x_0}(ax)=a \\, id _{\\U ,x_0}(x)+id _{\\A , x_0}(a)x \\quad \\text{and} \\quad id_{\\U ,x_0}(xa)=id _{\\U ,x_0}(x)a+x \\, id _{\\A, x_0}(a).\\]\n\\item[(ii)]\nFor $x_{0}\\in \\U$, if $id_{\\A, x_0}=0$, then $id_{\\U , x_0}$ is an $\\A$-bimodule homomorphism. 
If $ann_{\\U}\\U =(0)$ and $id_{\\U , x_0}$ is an $\\A$-bimodule homomorphism, then $id_{\\A, x_0}=0$.\n\\end{enumerate}\n\\end{rem}\nThe following sets play an important role in determining the first cohomology group of $\\A\\ltimes\\U$:\n\\[R_{\\A}(\\U):=\\{r_{a}: \\U\\rightarrow\\U \\, \\mid \\, a\\in \\A\\};\\]\n\\[C_{\\A}(\\U):=\\{r_{a} :\\U\\rightarrow\\U \\, \\mid \\, id_{a}=0 \\, \\, (a\\in \\A)\\};\\]\n\\[I(\\U):=\\{id_{\\U, x}:\\U\\rightarrow\\U \\, \\mid \\, id_{\\A, x}=0 \\, \\, (x\\in \\U)\\}.\\]\n\\indent In view of the above remarks, the set $R_{\\A}(\\U)$ is a linear subspace of $Z^{1}(\\U)$. Also $C_{\\A}(\\U)$ is a linear subspace of $Hom_{\\A}(\\U) \\cap R_{\\A}(\\U)$ and $I(\\U)$ is a linear subspace of $Hom_{\\A}(\\U) \\cap N^{1}(\\U)$. Indeed, we have the following inclusions of linear subspaces:\n\\[ C_{\\A}(\\U)+I(\\U)\\subseteq Hom_{\\A}(\\U) \\cap (R_{\\A}(\\U)+N^{1}(\\U))\\subseteq Hom_{\\A}(\\U) \\cap Z^{1}(\\U).\\]\nIf $\\A$ is commutative, then $R_{\\A}(\\U)= C_{\\A}(\\U)$, and if $\\U$ is a commutative $\\A$-bimodule, then $N^{1}(\\U)=I(\\U)$. If $\\U ^{2}=(0)$, then $Z^{1} (\\U)=\\mathbb{B}(\\U)$ and $N^{1}(\\U)=I(\\U )=(0)$.\n\\section{Derivations on $\\A \\ltimes \\U$}\nIn this section we determine the structure of derivations on $\\A\\ltimes \\U$. From this description we derive some results concerning the automatic continuity of derivations on $\\A\\ltimes \\U$. We also use the results of this section to determine the first cohomology group of $\\A\\ltimes \\U$ in the next sections.\n\\par \nThroughout this section we always assume that $\\A$ and $\\U$ are Banach algebras, where $\\U$ is a Banach $\\A$-bimodule with the compatible actions and norm (as in Section 2); any further conditions will be specified where needed.\n\\par \nIn the following theorem the structure of derivations on $\\A\\ltimes \\U$ is determined.\n\\begin{thm}\\label{asll}\nLet $D:\\A\\ltimes \\U\\rightarrow \\A\\ltimes \\U$ be a map. Then the following conditions are equivalent.\n\\begin{enumerate}\n\\item[(i)] $D$ is a derivation.\n\\item[(ii)] \n\\[D((a,x))=(\\delta_1 (a)+\\tau_1 (x),\\delta_2 (a)+\\tau_2 (x))\\quad\\quad (a\\in \\A,x\\in \\U)\\]\nsuch that \n\\begin{enumerate}\n\\item[(a)] \n$\\delta_1:\\A\\rightarrow\\A$ is a derivation.\n\\item[(b)]\n$\\delta_2:\\A\\rightarrow \\U$ is a derivation.\n\\item[(c)]\n$\\tau_1 :\\U\\rightarrow \\A$ is an $\\A$-bimodule homomorphism such that \n$$\\tau_1(xy)=0,$$\nfor all $x,y\\in \\U$.\n\\item[(d)]\n$\\tau_2 :\\U\\rightarrow \\U$ is a linear map such that for every $a\\in \\A$ and $x,y\\in \\U$ it satisfies the following conditions:\n\\begin{eqnarray*}\n\\tau_2 (ax)&=&a\\tau_2 (x)+\\delta_1 (a)x+\\delta_2 (a)x; \\\\\n\\tau_2 (xa)&=&\\tau_2 (x)a+x\\delta_1 (a)+x\\delta_2 (a); \\\\\n\\tau_2(xy)&=&x\\tau_1(y)+\\tau_1(x)y+x\\tau_2(y)+\\tau_2(x)y.\n\\end{eqnarray*}\n\\end{enumerate}\n\\end{enumerate}\nMoreover, $D$ is an inner derivation if and only if $\\tau_1 =0$ and there exists $(a_0,x_0)\\in\\A\\ltimes\\U$ such that $\\delta_1 =id_{a_0}$, $\\delta_2 =id_{\\A, x_0}$ and $\\tau_2 =id_{\\U, x_0}+r_{a_0}$.\n\\end{thm}\n\\begin{proof}\n$(i)\\implies (ii)$. 
Since $D$ is a linear map and $\\A\\ltimes \\U$ is the direct sum of the linear spaces $\\A$ and $\\U$, there are linear maps $\\delta_1:\\A\\rightarrow \\A$, $\\delta_2:\\A\\rightarrow \\U$, $\\tau_1 :\\U\\rightarrow \\A$ and $\\tau_2 :\\U\\rightarrow \\U$ such that for any $(a,x)\\in \\A\\ltimes \\U$ we have \n$$D((a,x))=(\\delta_1 (a)+\\tau_1 (x),\\delta_2 (a)+\\tau_2 (x)).$$\nBy applying $D$ to the equality $(a,0)(b,0)=(ab,0)$ we conclude that $\\delta_1$ and $\\delta_2$ are derivations. Analogously, applying $D$ to the equalities \n\\[ (a,0)(0,x)=(0,ax),\\,\\,\\, (0,x)(a,0)=(0,xa) \\,\\,\\, \\text{and} \\,\\,\\, (0,x)(0,y)=(0,xy)\\]\nestablishes the desired properties for the maps $\\tau_1$ and $\\tau_2$ given in parts $(c)$ and $(d)$, respectively.\n$(ii)\\implies (i)$ is clear.\n\\par \nThe equivalent conditions for the innerness of $D$ follow from the straightforward calculation \n\\[(a,x)(a_0,x_0)-(a_0,x_0)(a,x)=\\big(id_{a_0}(a),\\, id_{\\A, x_0}(a)+r_{a_0}(x)+id_{\\U, x_0}(x)\\big)\\quad\\quad ((a,x)\\in \\A\\ltimes \\U).\\]\n\\end{proof}\nIn the sequel, for a derivation $D$ on $\\A\\ltimes \\U$, we always assume that \n$$D((a,x))=(\\delta_1 (a)+\\tau_1 (x),\\delta_2 (a)+\\tau_2 (x))\\quad\\quad ((a,x)\\in \\A\\ltimes \\U),$$ in which the mentioned maps satisfy the conditions of the preceding theorem.\n\\par \nBy Theorem \\ref{asll}, if $D$ is an inner derivation on $\\A\\ltimes \\U$, then $\\tau_1 =0$. So the following question is of interest: for a given derivation $D$, under what conditions does one have $\\tau_1 =0$?\nBy part $(ii)-(c)$ of Theorem \\ref{asll}, if $\\U^2=\\U$ (or $\\overline{\\U^2}=\\U$, if $D$ is continuous), then $\\tau_1 =0$. If $\\U$ has a bounded approximate identity, then by Cohen's factorization theorem we have $\\U^2=\\U$ and therefore in this case $\\tau_1 =0$. All unital Banach algebras, $C^{*}$-algebras and group algebras have bounded approximate identities. Also, if $\\U$ is a simple Banach algebra with $\\U^{2}\\neq(0)$, then $\\U^{2}=\\U$.\n\\par \n The following corollary follows from Theorem \\ref{asll}.\n \\begin{cor}\\label{tak}\n Suppose that $\\delta_1:\\A\\rightarrow \\A$, $\\delta_2:\\A\\rightarrow \\U$, $\\tau_1 :\\U\\rightarrow \\A$ and $\\tau_2 :\\U\\rightarrow \\U$ are linear maps.\n \\begin{enumerate}\n \\item[(i)]\n $D:\\A\\ltimes\\U\\rightarrow \\A\\ltimes\\U $ defined by $D((a,x))=(\\delta_1(a),0)$ is a derivation if and only if $\\delta_1$ is a derivation and $\\delta_1 (\\A)\\subseteq ann_{\\A}\\U$. In this case, if $\\delta_1 =id_{a_0}$ where $aa_0-a_0 a\\in ann_{\\A}\\U \\, \\, (a\\in \\A) $ and $a_0x =xa_0$ for any $x\\in \\U$, then $D$ is inner.\n \\item[(ii)]\n $D:\\A\\ltimes\\U\\rightarrow \\A\\ltimes\\U $ with $D((a,x))=(0,\\delta_2(a))$ is a derivation if and only if $\\delta_2$ is a derivation and $\\delta_2 (\\A)\\subseteq ann_{\\U}\\U$. Moreover, if $\\delta_2 =id_{\\A, x_0}$ is inner, $ax_0 - x_0a\\in ann_{\\U}\\U$ (for all $a\\in \\A$) and $x_0\\in Z(\\U)$, then $D$ is inner.\n \\item[(iii)]\n$D:\\A\\ltimes\\U\\rightarrow \\A\\ltimes\\U $ with $D((a,x))=(\\tau_1(x),0)$ is a derivation if and only if $\\tau_1$ is an $\\A$-bimodule homomorphism with $\\tau_1(xy)=0$ and $x\\tau_1(y)+\\tau_1(x)y=0$ for all $x,y\\in\\U$. In this case $D$ is inner if and only if $\\tau_1 =0$.\n\\item[(iv)] $D:\\A\\ltimes\\U\\rightarrow \\A\\ltimes\\U $ with $D((a,x))=(0,\\tau_2(x))$ is a derivation if and only if $\\tau_2$ is a derivation and also an $\\A$-bimodule homomorphism. 
In this case $D$ is inner if and only if $\\tau_2(x)=r_{a_0}(x)+id_{\\U, x_0}(x)$ where $a_0\\in Z(\\A)$ and $ax_0 =x_0 a$ for all $a\\in \\A$.\n\\end{enumerate}\n \\end{cor}\nIn the following we give an example showing that the condition $\\tau_1 =0$ does not hold in general.\n\\begin{exm}\nLet $\\mathcal{B}$ be a Banach algebra and $\\A:=T(\\mathcal{B},\\mathcal{B})$ and let $\\U :=\\mathcal{B}$. The Banach algebra $\\U$ becomes a Banach $\\A$-bimodule by the following compatible module actions:\n\\[(a,b)x=ax,\\quad x(a,b)= xa\\quad\\quad (a,b\\in \\A , x\\in \\U).\\]\nNow consider the Banach algebra $T(\\A,\\U)$ and define the map $\\tau_1:\\U\\rightarrow \\A$ by \n$$\\tau_1(x)=(0,x)\\quad\\quad (x\\in \\U)$$\nThen $\\tau_1\\neq 0$ is an $\\A$-bimodule homomorphism with $\\tau_1(\\U ^2)=(0)$ and for any $x,y\\in \\U$ we have \n$$x\\tau_1(y)+\\tau_1(x)y=x(0,y)+(0,x)y=0$$\nNow by Corollary \\ref{tak} it can be seen that the map $D$ on $T(\\A,\\U)$ defined by $D((a,b),x)=(\\tau_1(x),0)$ is a derivation in which $\\tau_1\\neq 0$.\n\\end{exm}\nIn this example $\\tau_1\\neq 0$ but $x\\tau_1(y)+\\tau_1(x)y=0$ $ (x,y\\in\\U)$. The next example shows that $\\tau_1$ does not satisfy the condition $x\\tau_1(y)+\\tau_1(x)y=0 \\,\\, (x,y\\in\\U)$ in general.\n\\begin{exm}\nLet $\\A$ be a Banach algebra, $\\mathcal{C}$ be a Banach $\\A$-bimodule and $\\gamma :\\mathcal{C}\\rightarrow \\A$ be a nonzero $\\A$-bimodule homomorphism such that \n\\[ c\\gamma (c')+\\gamma (c)c' =0\\]\nfor all $c,c'\\in\\mathcal{C}$.\n\\\\\nLet $\\U :=\\A\\times \\mathcal{C}$. With the usual actions, $\\U$ is a Banach $\\A$-bimodule. Consider the multiplication on $\\U$ given by\n\\[(x,y)(x',y')=(xx',0)\\quad\\quad ((x,y),(x',y')\\in\\U). \\]\nWith this multiplication $\\U$ is a Banach algebra such that the multiplication on $\\U$ is compatible with its module actions. Now we may consider the Banach algebra $\\A\\ltimes\\U$. Define the maps $\\tau_1:\\U\\rightarrow\\A$ and $\\tau_2:\\U\\rightarrow\\U$ by \n\\[\\tau_1 ((x,y))=\\gamma(y)\\quad , \\quad \\tau_2((x,y))=(-\\gamma (y),0).\\]\n$\\tau_1$ and $\\tau_2$ are $\\A$-bimodule homomorphisms such that $\\tau_1(\\U ^2)=0$, $\\tau_1\\,,\\tau_2\\neq 0$ and \n\\[0=\\tau_2((x,y)(x',y'))=(x,y)\\tau_1 ((x',y'))+(x,y)\\tau_2((x',y'))+\\tau_1 ((x,y)) (x',y')+\\tau_2((x,y))(x',y').\\]\nBy Theorem \\ref{asll} it can be seen that $D=(\\tau_1,\\tau_2)$ is a derivation on $\\A\\ltimes\\U$. If we assume further that $\\A$ is a unital algebra, then for any $y,y'\\in\\mathcal{C}$ with $\\gamma (y)\\neq 0$,\n\\begin{eqnarray*}\n(0,y)\\tau_1((1,y'))+\\tau_1((0,y))(1,y')&=&(0,y)\\gamma (y')+\\gamma (y)(1,y')\\\\&=&(\\gamma(y),y\\gamma (y')+\\gamma(y)y')\\\\&=&(\\gamma (y),0)\\neq 0.\n\\end{eqnarray*}\n \\end{exm}\nIn the following we investigate the automatic continuity of derivations on $\\A\\ltimes\\U$. It is clear that a derivation $D$ on $\\A\\ltimes \\U$ is continuous if and only if the maps $\\delta_1 , \\delta_2 , \\tau_1$ and $\\tau_2$ are continuous. \n\\par \n In the next theorem we state the relation between separating spaces of $\\delta_i$ and $\\tau_i$ ($i=1,2)$ and then by using this theorem, we obtain some results concerning the automatic continuity of the derivations on $\\A\\ltimes\\U$.\n\\begin{thm}\\label{joda}\nLet $D$ be a derivation on $\\A\\ltimes\\U$ such that \n$$D((a,x))=(\\delta_1 (a)+\\tau_1 (x),\\delta_2 (a)+\\tau_2 (x))\\quad\\quad ((a,x)\\in \\A\\ltimes \\U),$$\nthen\n\\begin{enumerate}\n\\item[(i)]\n$\\mathfrak{S} (\\tau_1)$ is an ideal in $\\A$. 
If $\\tau_2$ is continuous, then $\\mathfrak{S} (\\tau_1)\\subseteq ann_{\\A}\\U$ and if $\\tau_2(\\U)\\subseteq ann_{\\U}\\U$, then $\\mathfrak{S} (\\tau_1)\\subseteq(\\mathfrak{S} (\\tau_2):\\U)_\\A$.\n\\item[(ii)]\n$\\mathfrak{S} (\\tau_2)$ is an $\\A$-sub-bimodule of $\\U$ and if $\\tau_1$ is continuous or $\\tau_1 (\\U)\\subseteq ann_{\\A}\\U$, then $\\mathfrak{S} (\\tau_2)$ is an ideal in $\\U$.\n\\item[(iii)]\n$\\mathfrak{S} (\\delta_1)$ is an ideal in $\\A$ and if $\\delta_2$ is continuous or $\\delta_2 (\\A)\\subseteq ann_{\\U}\\U$, then $\\mathfrak{S} (\\delta_1)\\subseteq(\\mathfrak{S} (\\tau_2):\\U)_\\A$.\n\\item[(iv)]\n$\\mathfrak{S} (\\delta_2)$ is an $\\A$-subbimodule of $\\U$ and if $\\delta_1$ is continuous or $\\delta_1 (\\A)\\subseteq ann_{\\A}\\U$, then $\\mathfrak{S} (\\delta_2)\\subseteq(\\mathfrak{S} (\\tau_2):\\U)_\\U$.\n\\end{enumerate}\n\\end{thm}\n\\begin{proof}\n$(i)$ Let $a\\in \\mathfrak{S} (\\tau_1)$. Thus there is a sequence $\\{x_n\\}$ in $\\U$ such that \n$x_n\\rightarrow 0$ and $ \\tau_1(x_n)\\rightarrow a$.\nFor any $b\\in \\A$, $bx_n\\rightarrow 0$, so $\\tau_1(bx_n)=b\\tau_1 (x_n)\\rightarrow ba $ and hence $ba\\in \\mathfrak{S} (\\tau_1)$. Similarly, $ab\\in \\mathfrak{S} (\\tau_1)$. Therefore $\\mathfrak{S} (\\tau_1)$ is an ideal in $\\A$.\\\\\n Now for every $x\\in \\U$ we have \n$$\\tau_2(xx_n)=x\\tau_1(x_n)+x\\tau_2(x_n)+\\tau_1(x)x_n+\\tau_2(x)x_n$$\nand\n$$\\tau_2(x_nx)=x_n\\tau_1(x)+x_n\\tau_2(x)+\\tau_1(x_n)x+\\tau_2(x_n)x.$$\nIf $\\tau_2$ is continuous, taking limit of the above equations gives $ax=xa=0$ ($x\\in \\U$). So $a\\in ann_{\\A}\\U$. Hence $\\mathfrak{S} (\\tau_1)\\subseteq ann_{\\A}\\U$. \n\\par \nIf $\\tau_2 (\\U)\\subseteq ann_{\\U}\\U$, by taking limit of the above equations we get \n$$\\tau_2(xx_n)\\rightarrow xa\\quad \\text{and} \\quad \\tau_2(x_n x)\\rightarrow ax.$$ So $ax,xa\\in \\mathfrak{S} (\\tau_2)$ and hence $\\mathfrak{S} (\\tau_1)\\subseteq(\\mathfrak{S} (\\tau_2):\\U)_\\A$.\n\\\\\n$(ii)$ If $x_n\\rightarrow 0$ and $\\tau_2(x_n)\\rightarrow x$, then for any $a\\in \\A$ we have \n$$\\tau_2(ax_n)=a\\tau_2(x_n)+\\delta_1(a)x_n+\\delta_2(a)x_n$$\nand\n$$\\tau_2(x_{n}a)=\\tau_2(x_n)a+x_n\\delta_1(a)+x_n\\delta_2(a).$$\nBy taking limits of these equations we get \n$$\\tau_2(ax_n)\\rightarrow ax\\quad\\text{and}\\quad \\tau_2(x_{n}a)\\rightarrow xa.$$\nSo $ax,xa\\in \\mathfrak{S} (\\tau_2)$ and hence $\\mathfrak{S} (\\tau_2)$ is an $\\A$-subbimodule of $\\U$.\n\\par \nFor any $y\\in \\U$ we have\n$$\\tau_2(yx_n)=\\tau_1(y)x_n+\\tau_2(y)x_n+y\\tau_1(x_n)+y\\tau_2(x_n)$$\n$$\\tau_2(x_n y)=\\tau_1(x_n)y+\\tau_2(x_n)y+x_n\\tau_1(y)+x_n\\tau_2(y).$$\nIf $\\tau_1$ is continuous or $\\tau_1(\\U)\\subseteq ann_{\\A} \\U$, by taking limit we obtain \n$$\\tau_2(yx_n)\\rightarrow yx\\quad \\text{and}\\quad \\tau_2(x_{n}y)\\rightarrow xy.$$\nTherefore $xy,yx\\in \\mathfrak{S} (\\tau_2)$ and $\\mathfrak{S} (\\tau_2)$ is an ideal.\n\\\\\n$(iii)$ Let $a_n\\rightarrow 0 $ and $\\delta_1 (a_n)\\rightarrow a$. Since $\\delta_1$ is a derivation, it follows that $\\mathfrak{S} (\\delta_1)$ is an ideal in $\\A$. 
For every $x\\in \\U$ we have \n$$\\tau_2(a_nx)=a_n\\tau_2(x)+\\delta_1(a_n)x+\\delta_2(a_n)x$$\nand\n$$\\tau_2(ax_n)=\\tau_2(x)a_n+x\\delta_1(a_n)+x\\delta_2(a_n).$$\nIf $\\delta_2$ is continuous or $\\delta_2(\\A)\\subseteq ann_{\\U}\\U$, taking limit of the above equations yields \n$$\\tau_2(a_n x)\\rightarrow ax\\quad \\text{and}\\quad \\tau_2(x a_n)\\rightarrow xa.$$\nSince $xa_n, a_nx\\rightarrow 0$, it follows that $ax,xa\\in \\mathfrak{S} (\\tau_2)$. So $\\mathfrak{S} (\\delta_1)\\subseteq(\\mathfrak{S} (\\tau_2):\\U)_\\A$.\n\\\\\n$(iv)$ The proof is similar to part $(iii)$.\n\\end{proof}\nNow we give some results of the preceding theorem.\n\\begin{prop}\\label{au1}\nLet $D$ be a derivation on $\\A\\ltimes\\U$ such that \n$$D((a,x))=(\\delta_1 (a)+\\tau_1 (x),\\delta_2 (a)+\\tau_2 (x))\\quad\\quad ((a,x)\\in \\A\\ltimes \\U),$$\nthen \n\\begin{enumerate}\n\\item[(i)]\nif $ann_{\\A}\\U =(0)$, $\\tau_2$ and $\\delta_2$ are continuous, then $D$ is continuous.\n\\item[(ii)]\nif $ann_{\\U}\\U =(0)$, $\\tau_2$ and $\\delta_1$ are continuous, then $\\delta_2$ is continuous. If we also add the assumption of continuity of $\\tau_1$, then $D$ is continuous.\n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\n$(i)$ Since $\\tau_2$ is continuous so $\\mathfrak{S} (\\tau_2)=0$. By Theorem \\ref{joda}-$(i)$ from the continuity of $\\tau_2$ we conclude that $\\mathfrak{S} (\\tau_1)\\subseteq ann_{\\A}\\U=(0)$, thus $\\tau_1$ is continuous. Moreover, since the conditions of Theorem \\ref{joda}-$(iii)$ hold, we have\n$$\\mathfrak{S} (\\delta_1)\\subseteq (\\mathfrak{S} (\\tau_2):\\U)_\\A=((0):\\U)_\\A =ann_{\\A}\\U=(0).$$ Therefore $\\delta_1$ is continuous. So $D$ is continuous.\n\\\\\n$(ii)$ Since the conditions of Theorem \\ref{joda}-$(iv)$ hold and $\\tau_2$ is continuous, it follows that\n $$\\mathfrak{S} (\\delta_2)\\subseteq (\\mathfrak{S} (\\tau_2):\\U)_\\U=((0):\\U)_\\U=ann_{\\U}\\U =(0).$$ So $\\delta_2$ is continuous . If $\\tau_1$ is continuous, then we have the continuity of all the maps and thus $D$ is continuous.\n\\end{proof}\nIf we add the condition $ann_{\\A}\\U=(0)$ to part $(ii)$ of the previous proposition, then as in part $(i)$, this implies that $\\tau_1$ is continuous and in this case $D$ is continuous as well.\n\\par \nNow by the preceding proposition, we obtain some results concerning the automatic continuity of the derivations on $\\A\\ltimes\\U$.\n\\begin{cor}\\label{n1}\nSuppose that for every derivation $D$ on $\\A\\ltimes \\U$ we have \n$x\\tau_1 (y)+\\tau_1(x)y=0$ $ (x,y\\in \\U)$. If $ann_{\\A}\\U=(0)$ and each derivation from $\\U$ to $\\U$ and each derivation from $\\A$ to $\\U$ is continuous, then every derivation on $\\A\\ltimes \\U$ is continuous.\n\\end{cor}\n\\begin{proof}\nBy Theorem \\ref{asll}, for any derivation $D=(\\delta_1 +\\tau_1,\\delta_2+\\tau_2)$ on $\\A\\ltimes \\U$, the maps $\\delta_2:\\A \\rightarrow\\U$ and $\\tau_2:\\U\\rightarrow \\U$ are derivations, since we have \n$x\\tau_1 (y)+\\tau_1(x)y=0$ $ (x,y\\in \\U)$. So $\\delta_2$ and $\\tau_2$ are continuous by the hypothesis. Now the continuity of $D$ follows from Proposition \\ref{au1}-$(i)$.\n\\end{proof}\n\\begin{cor}\\label{n2}\nSuppose that for any derivation $D$ on $\\A\\ltimes \\U$ we have $x\\tau_1 (y)+\\tau_1(x)y=0 $ $(x,y\\in \\U)$. 
If $ann_{\\U}\\U=(0)$, every derivation from $\\U$ to $\\U$ and every derivation from $\\A$ to $\\A$ is continuous and every $\\A$-bimodule homomorphism from $\\U$ to $\\A$ is continuous, then every derivation on $\\A\\ltimes \\U$ is continuous.\n\\end{cor}\n\\begin{proof}\nConsider the derivation $D=(\\delta_1 +\\tau_1,\\delta_2+\\tau_2)$. By Theorem \\ref{asll}, the maps \n $\\delta_1:\\A\\rightarrow \\A$ and $\\tau_2:\\U\\rightarrow \\U$ are derivation and $\\tau_1:\\U\\rightarrow \\A$ is an $\\A$-bimodule homomorphism. By the hypothesis, $\\delta_1, \\tau_1$ and $\\tau_2$ are continuous. Therefore $D$ is continuous by Proposition \\ref{au1}-$(ii)$.\n\\end{proof}\nBy Johnson's and Sinclair's theorem \\cite{john1}, every derivation on a semisimple Banach algebra is continuous, so we have the following corollary.\n\\begin{cor}\\label{semi}\nSuppose that $\\A$ and $\\U$ are semisimple Banach algebras such that $\\U$ has a bounded approximate identity. Then every derivation on $\\A\\ltimes \\U$ is continuous.\n\\end{cor}\n\\begin{proof}\nLet $D=(\\delta_1 +\\tau_1,\\delta_2+\\tau_2)$ be a derivation on $\\A\\ltimes \\U$. By the Cohen's factorization theorem, $\\U ^2 =\\U$ and so $\\tau_1=0$. Also from the hypothesis we conclude that $ann_{\\U}\\U =(0)$. By Johnson's and Sinclair's theorem \\cite{john1}, every derivation on $\\A$ and $\\U$ is continuous. Now by Proposition \\ref{au1}-$(ii)$, every derivation on $\\A\\ltimes \\U$ is continuous.\n\\end{proof}\nAll $C^{*}$-algebras, semigroup algebras, measure algebras and unital simple algebras are semisimple Banach algebras with bounded approximate identity. \n\\par \nIn next results we investigate some conditions under which we can express the derivation $D$ as the sum of two derivations one of which being continuous.\n\\begin{prop}\\label{taj1}\nLet $D$ be a derivation on $\\A\\ltimes\\U$ such that \n\\[D((a,x))=(\\delta_1 (a)+\\tau_1 (x),\\delta_2 (a)+\\tau_2 (x))\\quad\\quad ((a,x)\\in \\A\\ltimes \\U),\\]\nthen\n\\begin{enumerate}\n\\item[(i)]\nif $ann_{\\U}\\U =(0)$, $x\\tau_1 (y)+\\tau_1(x)y=0 $ $(x,y\\in \\U)$, $\\delta_1$ and $\\tau_2$ are continuous, then $D=D_1 +D_2$ in which \n$$D_1((a,x))=(\\delta_1(a),\\delta_2(a)+\\tau_2(x))\\quad\\text{and}\\quad D_2((a,x))=(\\tau_1(x),0),$$\nsuch that $D_1$ and $D_2$ are derivations on $\\A\\ltimes\\U$ and $D_1$ is continuous.\n\\item[(ii)]\nif $ann_{\\U}\\U =(0)$, $\\delta_1(\\A)\\subseteq ann_{\\A}\\U$ and $\\tau_1, \\tau_2$ are continuous, then \\[D_1((a,x))=(\\tau_1(x),\\delta_2 (a)+\\tau_2 (x))\\] is a continuous derivation on $\\A\\ltimes \\U$ and $D=D_1 +D_2$ where $D_2((a,x))=(\\delta_1(a),0)$ is a derivation on $\\A\\ltimes \\U$.\n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\n$(i)$ By Proposition \\ref{au1}-$(ii)$, $\\delta_2$ is continuous. By Corollary \\ref{tak}, $D=D_1 +D_2$ where \n$$D_1((a,x))=(\\delta_1(a),\\delta_2(a)+\\tau_2(x)) \\quad \\text{and} \\quad D_2((a,x))=(\\tau_1(x),0)\\quad\\quad ((a,x)\\in \\A\\ltimes\\U)$$\nare derivations on $\\A\\ltimes \\U$. By the assumptions and that $\\delta_2$ is continuous, it follows that $D_1$ is a continuous derivation.\n\\\\\n$(ii)$ By Corollary \\ref{tak} \n$$D_1((a,x))=(\\tau_1(x),\\delta_2 (a)+\\tau_2 (x))\\quad\\text{and}\\quad D_2((a,x))=(\\delta_1(a),0)$$\nare derivations on $\\A\\ltimes \\U$. 
Now by the continuity of $\\tau_2$, $ann_{\\U}\\U =(0)$ and that $\\delta_1(\\A)\\subseteq ann_{\\A}\\U$, from Theorem \\ref{joda}-$(iv)$, it follows that \n$$\\mathfrak{S} (\\delta_2)\\subseteq (\\mathfrak{S} (\\tau_2):\\U)_\\U =((0):\\U)_\\U =ann_{\\U}\\U=(0).$$ \nTherefore $\\delta_2$ is continuous and hence so is $D_1$.\n\\end{proof}\nTo prove the next proposition we need the following lemma.\n\\begin{lem}(\\cite [Proposition 5.2.2]{da}).\\label{da}\nLet $\\mathcal{X}$, $\\mathcal{Y}$ and $\\mathcal{Z}$ be Banach spaces, and let $T:\\mathcal{X}\\rightarrow \\mathcal{Y}$ be linear.\n\\begin{enumerate}\n\\item[(i)] \nSuppose that $R:\\mathcal{Z}\\rightarrow \\mathcal{X}$ is a continuous surjective linear map. Then $ \\mathfrak{S} (TR)= \\mathfrak{S} (T). $\n\\item[(ii)]\n Suppose that $S:\\mathcal{Y}\\rightarrow \\mathcal{Z}$ is a continuous linear map. Then $ST$ is continuous if and only if $S(\\mathfrak{S} (T))=(0)$.\n\\end{enumerate}\n\\end{lem}\n\\begin{prop}\\label{taj2}\nLet $D$ be a derivation on $\\A\\ltimes \\U$ such that \n$$D((a,x))=(\\delta_1 (a)+\\tau_1 (x),\\delta_2 (a)+\\tau_2 (x))\\quad\\quad (a\\in \\A,x\\in \\U)$$\nand $\\delta_2(\\A)\\subseteq ann_{\\U}\\U$. Then under any of the following conditions, \\[D_1((a,x))=(\\delta_1(a)+\\tau_1(x),\\tau_2(x))\\]\nis a continuous derivation on $\\A\\ltimes \\U$ and $D=D_1+D_2$ where $D_2((a,x))=(0,\\delta_2(a))$ is a derivation on $\\A\\ltimes \\U$.\n\\begin{enumerate}\n\\item[(i)]\n$ann_{\\A}\\U =(0)$ and $\\tau_2$ is continuous.\n\\item[(ii)]\n$\\A$ possesses a bounded right approximate identity, there is a surjective left $\\A$-module homomorphism \n$\\phi :\\A\\rightarrow \\U$ and both $\\delta_1$ and $\\tau_1$ are continuous.\n\\item[(iii)]\n $\\A$ possesses a bounded right approximate identity, there is an injective left $\\A$-module homomorphism \n$\\phi :\\A\\rightarrow \\U$ and both $\\tau_1$ and $\\tau_2$ are continuous.\n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\n From Corollary \\ref{tak}-$(ii)$,\n\\[D_1((a,x))=(\\delta_1(a)+\\tau_1(x),\\tau_2(x)) \\quad \\text{and} \\quad\nD_2((a,x))=(0,\\delta_2(a))\\quad\\quad ((a,x))\\in \\A\\ltimes \\U) \\]\nare derivations on $\\A\\ltimes \\U$.\\\\\n$(i)$ Since $ann_{\\A}\\U =(0)$, as in Proposition \\ref{au1}-$(i)$, it can be proved that $\\tau_1$ is continuous. By continuity of $\\tau_2$, $ann_{\\A}\\U =(0)$, $\\delta_2(\\A)\\subseteq ann_{\\U}\\U$ and Theorem \\ref{joda}-$(iii)$, we have \n$$\\mathfrak{S} (\\delta_1)\\subseteq (\\mathfrak{S} (\\tau_2):\\U)_{\\A}=((0),\\U)_{\\A}=ann_{\\A}\\U =(0).$$\nTherefore $\\delta_1$ is continuous and so is $D_1$.\\par \nBefore proving parts $(ii)$ and $(iii)$ we show that if $\\psi :\\A\\rightarrow \\U$ is any left $\\A$-module homomorphism, then it is continuous. Let $\\{a_k\\}$ be a sequence in $\\A$ such that $a_k\\rightarrow 0$. By the Cohen's factorization theorem there is some $c\\in \\A$ and some sequence $b_k$ in $\\A$ for which $b_k\\rightarrow 0$ and $a_k=b_k c \\, \\, (k\\in \\mathbb{N})$. Thus $\\psi (a_k)=b_k\\psi (c)\\rightarrow 0$ and hence $\\psi$ is continuous.\n\\\\\n$(ii)$ Let $\\psi :\\A\\rightarrow \\U$ be the map $\\psi = \\tau_2 o\\phi - \\phi o\\delta_1$. By the condition $\\delta_2(\\A)\\subseteq ann_{\\U}\\U$ and the properties of $\\delta_1$ and $\\tau_2$ it can be shown that $\\psi$ is a left $\\A$-module homomorphism. Therefore $\\psi$ and $\\phi$ are continuous and hence $\\tau_2 o\\phi$ is continuous. So $\\mathfrak{S} (\\tau_2 o\\phi)=(0)$. 
On the other hand by Lemma \\ref{da}-$(i)$, since $\\phi$ is surjective, $\\mathfrak{S} (\\tau_2 o\\phi)=\\mathfrak{S} (\\tau_2)$. Thus $\\mathfrak{S} (\\tau_2)=0$. Therefore $\\tau_2$ is continuous. So by the hypothesis $D_1$ is continuous.\n\\\\\n$(iii)$\nAs in the previous part we put $\\psi = \\tau_2 o\\phi - \\phi o\\delta_1$ which is a left $\\A$-module homomorphism. Since $\\tau_2$ and $\\phi$ are continuous, it follows that $\\phi o\\delta_1 $ is continuous as well. By Lemma \\ref{da}-$(ii)$ we have $\\phi(\\mathfrak{S} (\\delta_1))=(0)$. Since $\\phi$ is injective, it follows that $\\mathfrak{S} (\\delta_1)=(0)$. So $\\delta_1$ is continuous and by the hypothesis $D_1$ is continuous as well.\n\\end{proof}\nNote that Propositions \\ref{taj2} also holds if \"bounded right approximate identity\" and \"left $\\A$-module homomorphism\" are replaced respectively by \"bounded left approximate identity\" and \"right $\\A$-module homomorphism\".\n\\begin{rem}\\label{taj11}\nIn Propositions \\ref{taj1} and \\ref{taj2} if we assume that $ann_{\\A}\\U =(0)$, then as in Proposition \\ref{au1}-$(i)$, the continuity of $\\tau_2$ implies the continuity of $\\tau_1$ or if we assume that $\\U$ has a bounded approximate identity, we conclude that $\\tau_1 =0$. Therefore any of the conditions $ann_{\\A}\\U =(0)$ or $\\U$ to have a bounded approximate identity implies that $\\tau_1$ is continuous and therefore so $D_1$ is continuous.\n\\end{rem}\n\\par \nIn the following example we show that the derivation $D_2$ in Propositions \\ref{taj1} and \\ref{taj2}, can be discontinuous.\n\\begin{exm}\nLet $\\mathcal{B}$ be a Banach algebra and $d:\\mathcal{B}\\rightarrow \\mathcal{B}$ be a discontinuous derivation.\n\\begin{enumerate}\n\\item[(i)]\nLet $\\A:=\\mathcal{B}\\times \\mathcal{B}$ be the direct product of Banach algebras and let $\\U :=\\mathcal{B}$ becomes a Banach $\\A$-bimodule with compatible actions, by the following module actions;\n\\[ \n(a,b)x=ax \\quad \\text{and} \\quad \nx(a,b)= xa\\quad\\quad (a,b\\in \\A , x\\in \\U).\n\\]\nTherefore $(0)\\times \\mathcal{B}\\subseteq ann_{\\A}\\U$.\nDefine the map $\\delta_1: \\A\\rightarrow \\A$ by $\\delta_1((a,b))=(0,d(b))$. Then $\\delta_1$ is a discontinuous derivation on $\\A$ such that \n$$\\delta_1 (\\A)\\subseteq (0)\\times \\mathcal{B}\\subseteq ann_{\\A}\\U. $$\nSo the map $D_2: T(\\A,\\U)\\rightarrow T(\\A,\\U)$ defined by $D_2((a,b),x)=(\\delta_1((a,b)),0)$ is discontinuous.\n\\item[(ii)]\nAssume that $\\U :=\\mathcal{B}\\times \\mathcal{B}$ as the direct product of Banach spaces which becomes a Banach algebra with thr product \n$$(x,y)(x',y')=(xx',0)\\quad \\quad ((x,y),(x',y')\\in \\U).$$\nLet $\\A:=\\mathcal{B}$ and $\\U$ is became a Banach $\\A$-bimodule with the pointwise module actions which are compatible with its algebraic operations.\\par \nNow consider the map $\\delta_2: \\A\\rightarrow \\U$ defined by $\\delta_2(a)=(0,d(a))$. $\\delta_2$ is a discontinuous derivation such that \n$$\\delta_2(\\A)\\subseteq (0)\\times \\mathcal{B}\\subseteq ann_{\\U}\\U.$$\nHence the map $D_2: \\A \\ltimes \\U \\rightarrow \\A \\ltimes \\U$ defined by \n$$D_2((a,(x,y)))=(0,\\delta_2(a))$$\nis a discontinuous derivation on $\\A\\ltimes \\U$.\n\\item[(iii)]\nSuppose that $\\U$ is a Banach space and $T:\\U\\rightarrow \\mathbb{C}$ is a discontinuous linear functional. 
\nSet $\\A:=T(\\mathbb{C},\\mathbb{C})$ and we turn $\\U$ into a Banach $\\U$-bimodule by the actions below;\n\\[(a,b)x=ax \\quad \\text{and} \\quad x(a,b)=xb \\quad\\quad ((a,b)\\in A, x\\in \\U).\\]\nConsider the Banach algebra $T(\\A,\\U)$ and the map $\\tau_1:\\U\\rightarrow \\A$ defined by \n$$\\tau_1(x)=(0,T(x)).$$\nSo $\\tau_1$ is an $\\A$-bimodule homomorphism such that \n$$x\\tau_1(x')+\\tau_1 (x)x'=0\\quad\\quad (x,x'\\in \\U).$$\nThus the map $D_2((a,b),x)=(\\tau_1(x),0)$ is a derivation on $T(\\A,\\U)$ which is discontinuous.\n\\end{enumerate}\n\\end{exm}\nNote that in Proposition \\ref{taj2} if we assume that $\\delta_2$ is continuous, then derivation $D$ is continuous as well.\n\n\\section{The first cohomology group of $\\A\\ltimes\\U$}\nIn this section we determine the first cohomology group of $\\A\\ltimes\\U$ in some special cases. Throughout this section for two linear spaces $\\mathcal{X}$ and $\\mathcal{Y}$, we shall write $ \\mathcal{X} \\cong \\mathcal{Y}$ to indicate that the spaces are linearly isomorphic. \n\\par \nFrom this point up to the last section we assume that every derivation $D$ on $\\A\\ltimes\\U$ is of the form \n\\[D(a,x)=(\\delta_1 (a),\\delta_2 (a)+\\tau_2 (x))\\quad\\quad (a\\in \\A,x\\in \\U)\\]\nin which $\\delta_1:\\A\\rightarrow\\A$, $\\delta_2:\\A\\rightarrow\\U$ and $\\tau_2:\\U\\rightarrow\\U$\nare derivations such that \n\\[\n\\tau_2 (ax)=a\\tau_2 (x)+\\delta_1 (a)x+\\delta_2 (a)x \\,\\, \\text{and} \\,\\, \n\\tau_2 (xa)=\\tau_2 (x)a+x\\delta_1 (a)+x\\delta_2 (a) \\quad\\quad (a\\in \\A,x\\in \\U).\n\\]\nIndeed, we consider that for every derivation $D$ on $\\A\\ltimes\\U$ we have $\\tau_1=0$, where $\\tau_1$ is as in Theorem \\ref{asll}. \n\\par \nIf for all $\\delta\\in Z^{1}(\\A)$, $\\delta_1(\\A)\\subseteq ann_{\\A}\\U$, then by Corollary \\ref{tak}, the map $D$ given by $D((a,x))=(\\delta(a),0)$ is a continuous derivation on $\\A\\ltimes\\U$ and for every continuous derivation $D$ on $\\A\\ltimes\\U$ the maps $D_1((a,x))=(0,\\delta_2 (a)+\\tau_2 (x))$ and $D_2((a,x))=(\\delta_1(a),0)$ are derivations in $Z^{1}(\\A\\ltimes\\U)$ and $D=D_1+D_2$. By these arguments, in this case,\n\\[Z^{1}(\\A\\ltimes\\U)\\cong \\mathcal{W} \\times Z^{1}(\\A),\\]\nwhere $\\mathcal{W}$ is a linear subspace consisting of all continuous derivations on $\\A\\ltimes\\U$ of the form \n$D(a,x)=(0,\\delta_2 (a)+\\tau_2 (x))$, such that $\\delta_2\\in Z^{1}(\\A,\\U)$, $\\tau_2\\in Z^{1}(\\U)$ and \n\\[\\tau_2 (ax)=a\\tau_2 (x)+\\delta_2 (a)x \\,\\, \\text{and} \\,\\,\\tau_2 (xa)=\\tau_2 (x)a+x\\delta_2 (a)\\quad\\quad (a\\in\\A,x\\in\\U).\\]\n\\begin{thm}\\label{11}\nLet for every derivation $\\delta\\in Z^{1}(\\A)$ we have $\\delta(\\A)\\subseteq ann_{\\A}\\U$. 
If $H^{1}(\\A,\\U)=(0)$, then \\[H^{1}(\\A\\ltimes\\U)\\cong \\frac{Z^{1}(\\A)\\times [Hom_{\\A}(\\U)\\cap Z^{1}(\\U)]}{\\mathcal{E}},\\]\nwhere $\\mathcal{E}$ is the following linear subspace of $Z^{1}(\\A)\\times [Hom_{\\A}(\\U)\\cap Z^{1}(\\U)]$;\n\n\\[\\mathcal{E}=\\{(id_{a},r_{a}+id_{\\U,x})\\, \\mid \\, a\\in\\A, \\, id_{\\A , x}=0 \\, \\, (x\\in\\U)\\}.\\]\n\\end{thm}\n\\begin{proof}\nDefine the map \\[\\Phi:Z^{1}(\\A)\\times [Hom_{\\A}(\\U)\\cap Z^{1}(\\U)]\\rightarrow H^{1}(\\A\\ltimes\\U)\\]\nby $\\Phi ((\\delta,\\tau))=[D_{\\delta,\\tau}]$\nwhere $D_{\\delta,\\tau} ((a,x))=(\\delta (a),\\tau (x))$ and $[D_{\\delta,\\tau}]$ represents the equivalence class of $D_{\\delta,\\tau}$ in $H^{1}(\\A\\ltimes\\U).$ By Corollary \\ref{tak} the maps $D_1:\\A\\ltimes\\U\\rightarrow \\A\\ltimes\\U$ and $D_2:\\A\\ltimes\\U\\rightarrow\\A\\ltimes\\U$ given respectively by $D_1((a,x))=(0,\\tau (x))$ and $D_2((a,x))=(\\delta(a),0)$, are continuous derivations on $\\A\\ltimes\\U$ and so $\\Phi$ is well-defined. Clearly $\\Phi$ is linear. Let $D\\in Z^{1}(\\A\\ltimes\\U)$. By the hypothesis and the discussion before the theorem, $D=D_1+D_2$ where $D_1((a,x))=(\\delta_1(a),0)$ and $D_2((a,x))=(\\delta_1(a),0)$ are continuous derivations on $\\A\\ltimes\\U$ and $\\delta_1\\in Z^{1}(\\A)$, $\\delta_2\\in Z^{1}(\\A,\\U)$ and $\\tau_2\\in Z^{1}(\\U)$. Since $H^{1}(\\A,\\U)=(0)$, there is some $x_0\\in \\U$ such that $\\delta_2 =id_{\\A,x_0}$. Define the map $\\tau:\\U\\rightarrow \\U $ by $\\tau=\\tau_2 - id_{\\U,x_0}$. By the properties of $\\tau$ and Remark \\ref{inn2}, it follows that $\\tau\\in Hom_{\\A}(\\U)\\cap Z^{1}(\\U)$. Now we have\n\\[\nD((a,x))-D_{\\delta_1 ,\\tau} ((a,x))=(0,id_{\\A,x_0}(a)+id_{\\U,x_0}(x))= id_{(0,x_0)}(a,x) \\quad \\quad ((a,x)\\in\\A\\ltimes\\U).\\]\nSo $[D]=[D_{\\delta,\\tau}]$ and hence $\\Phi ((\\delta_1,\\tau))=[D_{\\delta_1 ,\\tau}]=[D]$. Thus $\\Phi$ is surjective.\n\\par\nIt can be easily checked that $\\mathcal{E}$ is a linear subspace and if $(\\delta,\\tau)\\in ker \\Phi$, then $D_{\\delta,\\tau}$ is an inner derivation on $\\A\\ltimes\\U$. So for some $(a_0,x_0)\\in \\A\\ltimes\\U$, \n\\[D_{\\delta,\\tau}((a,x))=id_{a_0,x_0}(a,x)=(id_{a_0}(a),id_{\\A,x_0}(a)+r_{a_0}(x)+id_{\\U,x_0}(x)),\\]\nwhich implies that \\[\\delta =id_{a_0}, \\quad \\tau (x)=r_{a_0}(x)+id_{\\U,x_0}(x)\\quad \\text {and} \\quad id_{\\A,x_0}=0 \\quad \\quad (a\\in\\A ,x\\in\\U).\\] Hence $(\\delta,\\tau)\\in \\mathcal{E}$. Conversely, Suppose that $(\\delta,\\tau)\\in \\mathcal{E}$. Then for $a_0\\in\\A$ and $x_0\\in\\U$ we have $\\delta =id_{a_0}$ and $ \\tau =r_{a_0}+id_{\\U,x_0}$ where $id_{\\A,x_0}=0$ for all $a\\in\\A$. So \n\\[\nD_{\\delta,\\tau}((a,x))=(id_{a_0}(a),r_{a_0}(x)+id_{\\U ,x_0}(x))\n= id_{(a_0,x_0)}((a,x)).\\]\nThus $D_{\\delta,\\tau}\\in N^{1}(\\A\\ltimes\\U)$ and hence $(\\delta,\\tau)\\in ker\\Phi$. So $\\mathcal{E}=ker\\Phi$. Therefore by the above arguments we have \n\\[H^{1}(\\A\\ltimes\\U)\\cong \\frac{Z^{1}(\\A)\\times [Hom_{\\A}(\\U)\\cap Z^{1}(\\U)]}{\\mathcal{E}}.\\]\n\\end{proof}\n\\begin{cor}\nSuppose that for every $\\delta\\in Z^{1}(\\A)$, $\\delta (\\A)\\subseteq ann_{\\A}\\U$. If $H^{1}(\\A,\\U)=(0)$ and $H^{1}(\\A\\ltimes\\U)=(0)$, then $H^{1}(\\A)=(0)$ and $\\frac{Hom_{\\A}(\\U)\\cap Z^{1}(\\U)}{C_{\\A}(\\U)+I(\\U)}=(0).$\n\\end{cor}\n\\begin{proof}\nLet $\\tau\\in Hom_{\\A}(\\U)\\cap Z^{1}(\\U)$ and $\\delta\\in Z^{1}(\\A)$. 
Since $\\delta (\\A)\\subseteq ann_{\\A}\\U$, it follows that \n\\[ (0,\\tau)\\,,\\, (\\delta,0)\\in Z^{1}(\\A)\\times [Hom_{\\A}(\\U)\\cap Z^{1}(\\U)].\\]\nBy the hypothesis and according to the preceding theorem, there exist some $a_0\\in\\A$ and $x_0\\in\\U$ such that $id_{\\A,x_0}=0$ and \\[(\\delta,0)=(id_{a_0},r_{a_0}+id_{\\U,x_0})\\quad ,\\quad (0,\\tau)=(id_{a_0},r_{a_0}+id_{\\U,x_0}).\\]\nHence $\\delta=id_{a_0}$ and so $H^{1}(\\A)=(0)$. Moreover, $\\tau=r_{a_0}+id_{\\U,x_0}$ where $id_{a_0}=0$. Thus $\\tau\\in C_{\\A}(\\U)+I(\\U)$.\n \\end{proof}\n If we assume that $\\A$ is a Banach algebra with $ann_{\\A}\\A=(0)$ and $\\delta\\in Z^{1}(\\A)$ where $\\delta\\neq 0$, then $\\delta(\\A)\\not\\subseteq ann_{\\A}\\A =(0)$. So in this case, the condition $\\delta (\\A)\\subseteq ann_{\\A}\\A$ is not stisfied on $\\A\\ltimes\\A$ for every derivation $\\delta\\in Z^{1}(\\A)$ and this shows that this condition does not hold in general. In the following we give an example of Banach algebras which are satisfying in conditions of Theorem \\ref{11}.\n \\begin{exm}\nLet $\\A$ be a semisimple commutative Banach algebra. By Thomas' theorem \\cite{tho}, we have $Z^{1}(\\A)=(0)$. So in this case, for every $\\delta\\in Z^{1}(\\A)$ and every Banach $\\A$-bimodule $\\U$, $\\delta (\\A)\\subseteq ann_{\\A}\\U$ (in fact $\\delta=0$). In particular for a semisimple commutative Banach algebra $\\A$, if $\\U$ is a Banach $\\A$-bimodule such that $H^{1}(\\A,\\U)=(0)$ and $\\A\\ltimes\\U$ is satisfying in conditions of Theorem \\ref{11}, then\n \\[H^{1}(\\A\\ltimes\\U)\\cong \\frac{Hom_{\\A}(\\U)\\cap Z^{1}(\\U)}{R_{\\A}(\\U)+ I(\\U)}.\\]\n \\end{exm}\n Note that if $\\A$ is a commutative Banach algebra with $H^{1}(\\A)=(0)$ and $H^{1}(\\A,\\U)=(0)$, then above example holds again.\n \\par \nWe continue by characterizing the first cohomology group of $\\A\\ltimes\\U$ in another case.\n\\par \nIf for each $\\delta_2\\in Z^{1}(\\A,\\U)$, $\\delta_2(\\A)\\subseteq ann_{\\U}\\U$, then by Corollary \\ref{tak}, the map $D ((a,x))=(0,\\delta_2(a))$ is a continuous derivation on $\\A\\ltimes\\U$ and for each derivation $D\\in Z^{1}(\\A\\ltimes\\U)$ we conclude that the maps $D_1 ((a,x))=(\\delta_1 (a),\\tau_2(x))$ and $D_2((a,x))=(0,\\delta_2(a))$ are derivations in $Z^{1}(\\A\\ltimes\\U)$ and $D=D_1+D_2$. So by this argument in this case,\n \\[Z^{1}(\\A\\ltimes\\U)\\cong Z^{1}(\\A ,\\U)\\times \\mathcal{W}\\]\n where $\\mathcal{W}$ is the linear subspace of all continuous derivations on $\\A\\ltimes\\U$ which are of the form $D_1((a,x))=(\\delta_1 (a),\\tau_2(x))$ with $\\delta_1\\in Z^{1}(\\A)\\,,\\tau_2\\in Z^{1}(\\U)$ and \n \\[\\tau_2 (ax)=a\\tau_2 (x)+\\delta_1 (a)x\\quad \\text{and} \\quad \\tau_2 (xa)=\\tau_2 (x)a+x\\delta_1 (a)\\quad\\quad ((a,x)\\in \\A\\ltimes\\U).\\]\n \\begin{thm}\\label{22}\n Suppose that for each derivation $\\delta\\in Z^{1}(\\A,\\U)$ we have $\\delta(\\A)\\subseteq ann_{\\U}\\U$. 
If $H^{1}(\\A)=(0)$, then \n \\[H^{1}(\\A\\ltimes\\U)\\cong \\frac{Z^{1}(\\A,\\U)\\times [Hom_{\\A}(\\U)\\cap Z^{1}(\\U)]}{\\mathcal{F}},\\]\n where $\\mathcal{F}$ is the following linear subspace of $Z^{1}(\\A,\\U)\\times [Hom_{\\A}(\\U)\\cap Z^{1}(\\U)]$;\n \\[\\mathcal{F}=\\{ (id_{\\A,x},r_{a}+id_{\\U, x})\\, \\mid \\, x\\in\\U, \\, id_{a}=0 \\, \\, (a\\in\\A) \\}.\\]\n \\end{thm}\n \\begin{proof}\n Define the map \n \\[\\Phi: Z^{1}(\\A,\\U)\\times [Hom_{\\A}(\\U)\\cap Z^{1}(\\U)]\\rightarrow H^{1}(\\A\\ltimes\\U)\\]\n by $\\Phi ((\\delta,\\tau))=[D_{\\delta,\\tau}]$\n where $D_{\\delta,\\tau}((a,x))=(0,\\delta(a)+\\tau(x))$\n is a continuous derivation on $\\A\\ltimes\\U$. Clearly $\\Phi$ is a well-defined linear map. If \n $D\\in Z^{1}(\\A\\ltimes\\U)$, then $D=D_1+D_2$ where $D_1((a,x))=(\\delta_1 (a),\\tau_2(x))$ and $D_2((a,x))=(0,\\delta_2(a))$ are continuous derivations on $\\A\\ltimes\\U$ with $\\delta_1\\in Z^{1}(\\A), \\delta_2\\in Z^{1}(\\A,\\U)$ and $\\tau\\in Z^{1}(\\U)$. Since $H^{1}(\\A)=(0)$, there is some $a_0\\in \\A$ such that $\\delta_1=id_{a_0}$. Consider the map $\\tau=\\tau_2-r_{a_0}$ on $\\U$. We have \n $\\tau\\in Hom_{\\A}(\\U)\\cap Z^{1}(\\U)$. Also\n \\[D((a,x))-D_{\\delta_2 ,\\tau}((a,x))=id_{(a_0,0)}((a,x)).\\]\n So $[D]=[D_{\\delta_2 ,\\tau}]$ and hence $\\Phi ((\\delta_2,\\tau))=[D_{\\delta_2,\\tau}]=[D]$.\n Therefore $\\Phi$ is surjective. A straightforward proceeding shows that $\\mathcal{F}$ is a vector subspace and $ker\\Phi =\\mathcal{F}$. This establishes the desired vector space isomorphism. \n \\end{proof}\n \\begin{cor}\\label{222}\n Suppose that for any derivation $\\delta\\in Z^{1}(\\A,\\U)$, $\\delta(\\A)\\subseteq ann_{\\U}\\U$. If \n $H^{1}(\\A)=(0)$ and $H^{1}(\\A\\ltimes\\U)=(0)$, then $H^{1}(\\A,\\U)=(0)$ and \n $\\frac{Hom_{\\A}(\\U)\\cap Z^{1}(\\U)}{C_{\\A}(\\U)+I(\\U)}=(0).$\n \\end{cor}\n \\begin{proof}\n Let $\\delta\\in Z^{1}(\\A,\\U)$ and $\\tau\\in Hom_{\\A}(\\U)\\cap Z^{1}(\\U)$. By the hypothesis, \n$(\\delta,0),(0,\\tau)\\in Z^{1}(\\A,\\U)\\times Hom_{\\A}(\\U)\\cap Z^{1}(\\U)$. Again by the hypothesis and the preceding theorem, there are $x_0\\in\\U$ and $a_0\\in \\A$ such that $id_{a_0}=0$ and \n\\[(\\delta,0)=(id_{\\A,x_0},r_{a_0}+id_{\\U,x_0})\\quad , \\quad (0,\\tau)=(id_{\\A,x_0},r_{a_0}+id_{\\U,x_0}).\\]\nNow the result follows from these equalities.\n \\end{proof}\nIf we assume that $\\A$ is a Banach algebra with $ann_{\\A}\\A =(0)$ and $\\delta\\in Z^{1}(\\A)$ where $\\delta\\neq 0$, then by putting $\\U =\\A$ we have $\\delta(\\A)\\not\\subseteq ann_{\\A}\\A =ann_{\\A}\\U =(0)$. So in this case, the condition $\\delta(\\A)\\subseteq ann_{\\A}\\U$ is not satisfied on $\\A\\ltimes\\U$ for every derivation $\\delta\\in Z^{1}(\\A,\\U)$. This shows that this condition does not hold in general. In the following we give an example of Banach algebras which are satisfying in conditions of Theorem \\ref{22}.\n\\begin{exm}\nIf $\\A$ is a Banach algebra and $\\U$ is a commutative Banach $\\A$-bimodule such that $H^{1}(\\A,\\U)=(0)$, then $Z^{1}(\\A,\\U)=(0)$. So in this case, for every $\\delta\\in Z^{1}(\\A,\\U)$, $\\delta(\\A)\\subseteq ann_{\\U}\\U$ (in fact $\\delta =0$). 
In this case if $H^{1}(\\A)=(0)$ and $\\A\\ltimes\\U$ satisfies in Theorem \\ref{22}, then\n \\[ H^{1}(\\A\\ltimes\\U)\\cong \\frac{Hom_{\\A}(\\U)\\cap Z^{1}(\\U)}{C_{\\A}(\\U)\\cap N^{1}(\\U)}.\\]\n\\end{exm}\n Note that if $\\A$ is a super amenable Banach algebra and $\\U$ is a commutative Banach $\\A$-bimodule, then above example holds.\n \\par \n In the continuation we consider a case on $\\A\\ltimes\\U$ in which for any derivation $\\delta_1\\in Z^{1}(\\A)$ and $\\delta_2\\in Z^{1}(\\A,\\U)$ we have $\\delta_1(\\A)\\subseteq ann_{\\A}\\U$ and $\\delta_2(\\A)\\subseteq ann_{\\U}\\U$. In fact by these conditions on $\\A\\ltimes\\U$, for every $\\delta_1\\in Z^{1}(\\A)$ and $\\delta_2\\in Z^{1}(\\A,\\U);$\n \\[\\delta_1(a)x+\\delta_2(a)x=0\\quad \\text {and} \\quad x\\delta_1 (a)+x\\delta_2 (a)=0 \\quad\\quad (a\\in \\A,x\\in \\U).\\]\nIn this case, every derivation $D\\in Z^{1}(A\\ltimes\\U)$ can be written as $D=D_1+D_2$ where \n \\[D_{1}((a,x))=(\\delta_1 (a),\\delta_2(a))\\quad \\text{and} \\quad D_{2}((a,x))=(0,\\tau_2(x))\\]\nare continuous derivations on $\\A\\ltimes\\U$ and $\\tau\\in Hom_{\\A}(\\U)\\cap Z^{1}(\\U)$. In this case, we conclude that \n$$R_{\\A}(\\U)+N^{1}(\\U)\\subseteq Hom_{\\A}(\\U)\\cap Z^{1}(\\U).$$ \n\\begin{thm}\\label{33}\n Suppose that for every $\\delta_1\\in Z^{1}(\\A)$ and $\\delta_2\\in Z^{1}(\\A,\\U)$ we have \\[\\delta_1(\\A)\\subseteq ann_{\\A}\\U \\quad \\text{and} \\quad \\delta_{2}(\\A)\\subseteq ann _{\\U}\\U.\\]\nSuppose further that $\\frac{Hom_{\\A}(\\U)\\cap Z^{1}(\\U)}{R_{\\A}(\\U)+ N^{1}(\\U)}=(0)$. Then \n \\[H^{1}(\\A\\ltimes\\U)\\cong \\frac{Z^{1}(\\A)\\times Z^{1}(\\A,\\U)}{\\mathcal{K}},\\]\n where $\\mathcal{K}$ is the following linear subspace of $Z^{1}(\\A)\\times Z^{1}(\\A,\\U)$;\n \\[\\mathcal{K}=\\{(id_{a},id_{\\A,x})\\, \\mid \\, r_{a}+id_{\\U,x}=0\\,\\,(a\\in \\A,x\\in \\U)\\}.\\]\n \\end{thm}\n \\begin{proof}\n Define the map \n \\[\\Phi: Z^{1}(\\A)\\times Z^{1}(\\A,\\U)\\rightarrow H^{1}(\\A\\ltimes\\U)\\]\n by $\\Phi ((\\delta_1,\\delta_2))=[D_{\\delta_1 , \\delta_2}]$\n where $D_{\\delta_1 , \\delta_2} ((a,x))=(\\delta_1 (a),\\delta_2(a))$ is a continuous derivation on $\\A\\ltimes\\U$. The map $\\Phi$ is well-defined and linear. If $D\\in Z^{1}(\\A\\ltimes\\U)$, then $D((a,x))=(\\delta_1 (a),\\delta_2(a)+\\tau_2(x))$. By the hypothesis, $\\tau_2=r_{a}+id_{\\U,x}$ where $a\\in\\A$ and $x\\in\\U$. Define the derivations $d_1\\in Z^{1}(\\A)$ and $d_2\\in Z^{1}(\\A,\\U)$ by $d_1=\\delta_1 -id_{a}$ and $d_2=\\delta_2 -id_{\\A,x}$ respectively. Then $D-D_{d_1 ,d_2}=id_{(a ,x )}$. So $\\Phi ((\\delta_1,\\delta_2))=[D_{d_1 , d_2}]=[D]$.\nThus $\\Phi$ is surjective. It can be easily seen that $ker \\Phi =\\mathcal{K}$. So we have the desired vector spaces isomorphism.\n \\end{proof}\n \\begin{cor}\\label{333}\n Suppose that for every $\\delta_1\\in Z^{1}(\\A)$ and $\\delta_2\\in Z^{1}(\\A,\\U)$ we have \\[\\delta_1(\\A)\\subseteq ann_{\\A}\\U \\quad \\text{and} \\quad \\delta_2(\\A)\\subseteq ann _{\\U}\\U.\\] Suppose further that \n $\\frac{Hom_{\\A}(\\U)\\cap Z^{1}(\\U)}{{R_{\\A}(\\U)+ N^{1}(\\U)}}=(0)$. If $H^{1}(\\A\\ltimes\\U)=(0)$, then $H^{1}(\\A)=(0)$ and $H^{1}(\\A,\\U)=(0)$.\n \\end{cor}\n \\begin{proof}\n Let $\\delta_1\\in Z^{1}(\\A)$ and $\\delta_2\\in Z^{1}(\\A,\\U)$. 
By the hypothesis \n \\[(\\delta_1 , 0)\\,,\\, (0,\\delta_2)\\in Z^{1}(\\A)\\times Z^{1}(\\A,\\U).\\]\n Again by the hypothesis and the preceding theorem, there exists some $(id_{a},id_{\\A,x})\\in \\mathcal{K}$ such that \n $(\\delta_1 , 0)=(id_{a},id_{\\A,x})$ and $(0,\\delta_2)=(id_{a},id_{\\A,x})$ and so $\\delta_1$ and $\\delta_2$ are inner.\n \\end{proof} \nIf $H^{1}(\\U)=(0)$, since $R_{\\A}(\\U)+ N^{1}(\\U)\\subseteq Z^{1}(\\U)= N^{1}(\\U)$, It follows that \n $\\frac{Hom_{\\A}(\\U)\\cap Z^{1}(\\U)}{{R_{\\A}(\\U)+ N^{1}(\\U)}}=(0)$. So if $H^{1}(\\U)=(0)$, Theorem \\ref{33} and Corollary \\ref{333} hold again. \n \\par \n Let $\\A$ be a Banach algebra with $ann_{\\A}\\A =(0)$ and $\\delta_1 ,\\delta_2\\in Z^{1}(\\A)$ such that $\\delta_1 +\\delta_2 \\neq 0$. If we put $\\tau =\\delta_1+\\delta_2$ and define linear map $D$ on $\\A\\ltimes\\A$ by \n \\[D((a,x))=(\\delta_1 (a),\\delta_2(a)+\\tau(x))\\quad\\quad ((a,x)\\in \\A\\ltimes\\A),\\]\n then $D\\in Z^{1}(\\A\\ltimes\\A )$. But for $a, x\\in\\A$, the equation $\\delta_1 (a)x+\\delta_2 (a)x=0$\n is not necessarily true. This example does not satisfy the conditions of Theorem \\ref{33}. \n \\par \nThe last case we consider is as follows.\n \\begin{thm}\\label{44}\n Let $H^{1}(\\A)=(0)$ and $H^{1}(\\A,\\U)=(0)$. Then \n \\[H^{1}(\\A\\ltimes\\U)\\cong \\frac{Hom_{\\A}(\\U)\\cap Z^{1}(\\U)}{C_{\\A}(\\U)+I(\\U)}.\\]\n\\end{thm} \n\\begin{proof}\nDefine the map \n\\[\\Phi : Hom_{\\A}(\\U)\\cap Z^{1}(\\U)\\rightarrow H^{1}(\\A\\ltimes\\U)\\]\nby $\\Phi(\\tau)=[D_{\\tau}]$ where $D_{\\tau}((a,x))=(0,\\tau (x))$ is a continuous derivation on $\\A\\ltimes\\U$. The map $\\Phi$ is well-defined linear. If $D\\in Z^{1}(\\A\\ltimes\\U )$, then $D=(\\delta_1 , \\delta_2 +\\tau_2)$. By hypotheses $\\delta_1 =id _{a}$ and $\\delta_2 =id_{\\A, x}$ for some $a\\in \\A$ and $x\\in \\U$. Define the map $\\tau:\\U\\rightarrow \\U$ by $\\tau=\\tau_2-r_{a}-id_{\\U, x}$. Then $\\tau\\in Hom_{\\A}(\\U)\\cap Z^{1}(\\U)$ and so $D-D_{\\tau}=id_{(a , x)}$. Hence $\\Phi (\\tau)=[D_\\tau]=[D]$. Thus $\\Phi$ is surjective. By Corollary \\ref{tak}-$(iv)$, $ker \\Phi =C_{\\A}(\\U)+I(\\U)$. So the desired vector space isomorphism is established.\n\\end{proof}\nThe following corollary follows immediately from the preceding theorem.\n\\begin{cor}\nIf $H^{1}(\\A)=(0),H^{1}(\\A,\\U)=(0)$ and $H^{1}(\\A\\ltimes\\U)=(0)$, then for every $\\tau\\in Hom_{\\A}(\\U)\\cap Z^{1}(\\U)$ there are some $r_{a}\\in C_{\\A}(\\U)$ and $id_{\\U , x}\\in I(\\U)$ such that $\\tau =r_{a}+id_{x}$.\n\\end{cor}\nIn the following an example of Banach algebras satisfying the conditions of Theorem \\ref{44}, is given.\n\\begin{exm}\nIf $\\A$ is a weakly amenable commutative Banach algebra, then for any commutative Banach $\\A$-bimodule $\\mathcal{X}$ we have $H^{1}(\\A,\\mathcal{X})=(0)$. So in this case, if $\\U$ is a commutative Banach $\\A$-bimodule satisfying the conditions of Theorem \\ref{44}, then $Z^{1}(\\A)=H^{1}(\\A)=(0)$ and $Z^{1}(\\A,\\U)=H^{1}(\\A,\\U)=(0)$ and hence \n\\[H^{1}(\\A\\ltimes\\U)\\cong \\frac{Hom_{\\A}(\\U)\\cap Z^{1}(\\U)}{R_{\\A}(\\U)+ N^{1}(\\U)}.\\]\n\\end{exm}\nThis example could be also derived from Theorem \\ref{22}.\n\\section{Applications}\nIn this section we investigate applications of the previous sections and give some examples.\n\\subsection*{Direct product of Banach algebras}\nLet $\\A$ and $\\U$ be Banach algebras. 
With trivial module actions $\\A\\U=\\U\\A =(0)$, as we saw in Example \\ref{dp}, $\\A\\ltimes\\U=\\A\\times\\U$ where $\\A\\times\\U$ is $l^{1}$-direct product of Banach algebras. In this case, $ann_{\\A}\\U =\\A$ and so for every derivation $\\delta:\\A\\rightarrow \\A$ we have $\\delta(\\A)\\subseteq ann_{\\A}\\U$. Also in this case, $R_{\\A}(\\U)=(0)$ and $Hom_{\\A}(\\U)=\\mathbb{B}(\\U)$. The following proposition follows from Theorem \\ref{asll}.\n\\begin{prop}\\label{dpd}\nLet $\\A$ and $\\U$ be Banach algebras and $D:\\A\\times\\U\\rightarrow\\A\\times\\U$ be a map. The following are equivalent.\n\\begin{enumerate}\n\\item[(i)]\n$D$ is a derivation.\n\\item[(ii)]\n\\[D((a,x))=(\\delta_1(a)+\\tau_1(x),\\delta_2(a)+\\tau_2(x))\\quad \\quad ((a,x)\\in\\A\\times\\U)\\]\nsuch that $\\delta_1:\\A\\rightarrow\\A,\\tau_2:\\U\\rightarrow\\U$ are derivations and $\\tau_1:\\U\\rightarrow\\A$ and $\\delta_2:\\A\\rightarrow\\U$ are linear maps satisfying the following conditions;\n\\[\\tau_1(\\U)\\subseteq ann_{\\A}\\A , \\,\\, \\delta_2(\\A)\\subseteq ann_{\\U}\\U, \\,\\, \\tau_1(xy)=0, \\, \\text{and} \\,\\, \\delta_2(ab)=0 \\quad \\quad(a,b\\in\\A , x,y\\in \\U).\\]\n\\end{enumerate}\nMoreover, if $ann_{\\U}\\U=(0)$ or $\\A^{2}=\\A$ and $ann_{\\A}\\A=(0)$ or $\\U ^{2}=\\U$, then $\\delta_2=0$ and $\\tau_1=0$, respectively.\n\\end{prop}\nBy this proposition it is clear if $\\A$ or $\\U$ has a bounded approximate identity, then $\\delta_2=0$ and $\\tau_1=0$. Hence in this case every derivation $D$ on $\\A\\times\\U$ is of the form $D((a,x))=(\\delta(a),\\tau(x))$ where $\\delta$ and $\\tau$ are derivations on $\\A$ and $\\U$, respectively.\n\\begin{rem}\\label{d1}\nIf $\\A$ and $\\U$ are Banach algebras, then for any derivation $\\delta:\\A\\rightarrow\\A$ and $\\tau:\\U\\rightarrow\\U$ the maps $D_1$ and $D_2$ on $\\A\\times\\U$ given by \n\\[D_{1}((a,x))=(\\delta (a),0)\\quad \\text{and} \\quad D_2((a,x))=(0,\\tau (x)),\\]\nare derivations. According to this point, if every derivation on $\\A\\times\\U$ is continuous, then every derivation on $\\A$ and every derivation on $\\U$ is continuous.\n\\par \nAlso let $\\A$ or $\\U$ have a bounded approximate identity. So by above arguments if every derivation on $\\A$ and every derivation on $\\U$ is continuous, then every derivation on $\\A\\times\\U$ is continuous.\n\\end{rem}\n\\begin{rem}\\label{d2}\nLet $\\A$ and $\\U$ be Banach algebras and $D\\in Z^{1}(\\A\\ltimes\\U)$. If $ann_{\\U}\\U =(0)$ or $\\overline{\\A^{2}}=\\A$ and $ann_{\\A}\\A =(0)$ or $\\overline{\\U ^{2}}=\\U$, then in the representation of $D$ as in the preceding proposition, $\\delta_2=0$ and $\\tau_1=0$. So $D((a,x))=(\\delta(a),\\tau(x))$ where $\\delta \\in Z^{1}(\\A)$ and $\\tau\\in Z^{1}(\\U)$. In this case we can conclude\n\\[ Z^{1}(\\A\\times\\U)\\cong Z^{1}(\\A)\\times Z^{1}(\\U) \\quad \\text{and} \\quad N^{1}(\\A\\times\\U)\\cong N^{1}(\\A)\\times N^{1}(\\U).\\]\nMoreover\n\\[ H^{1}(\\A\\times\\U)\\cong H^{1}(\\A)\\times H^{1}(\\U).\\]\nIn particular if $\\A$ or $\\U$ have a bounded approximate identity, then above observation holds.\n\\par \nIn the case of isomorphism of the first cohomology group by Theorem \\ref{11}, one can weaken the conditions. Indeed $\\A\\times\\U$ with $\\overline{\\U ^{2}}=\\U$ or $ann_{\\A}\\A =(0)$ satisfies the conditions of Theorem \\ref{11}. 
Since $Hom_{\\A}(\\U)=\\mathbb{B}(\\U)$ and $R_{\\A}(\\U)=(0)$, in this case \n\\[H^{1}(\\A\\times\\U)\\cong H^{1}(\\A)\\times H^{1}(\\U)\\]\n(In fact in this case, $\\mathcal{E}=N^{1}(\\A)\\times N^{1}(\\U)$).\n\\par \nIf $\\overline{\\A ^{2}}=\\A$ or $ann_{\\U}\\U =(0)$, by symmetry as above we obtain the same result again.\n\\end{rem}\nAs the next example confirms, it is not necessarily true that for any derivation on $\\A\\times\\U$, $\\tau_1 =0$ or $\\delta_1 =0$ in the decomposition of it. The example is given from \\cite{ess}.\n \\begin{exm}\n Let $\\A$ be a Banach algebra with $ann_{\\A}\\A\\neq 0$. Put $\\U:=ann_{\\A}\\A$. Then $\\U$ is a closed subalgebra of $\\A$. Define the map $D$ on $\\A\\times\\U$ by $D((a,x))=(x,0)$. Then $D$ is a derivation on $\\A\\times\\U$ such that in its representation the map $\\tau_1:\\U\\rightarrow\\A$ is given by $\\tau_1(x)=x$ with $\\tau_1\\neq 0$. \n \\end{exm}\nLet $\\A$ and $\\U$ be Banach algebras and $\\alpha:\\A\\rightarrow \\U$ be a continuous algebra homomorphism with $\\Vert \\alpha\\Vert \\leq 1$. Then the following module actions turn $\\U$ into a Banach $\\A$-bimodule with the compatible actions and norm;\n\\[\nax=\\alpha(a)x \\quad \\text{and} \\quad\nxa=x\\alpha(a)\\quad\\quad (a\\in \\A,x\\in \\U).\\]\nIn this case we can consider $\\A\\ltimes \\U$ with the multiplication given by \n$$(a,x)(b,y)=(ab,\\alpha(a) y+x\\alpha(b)+xy).$$\nWe denote this Banach algebra by $\\A\\ltimes_{\\alpha} \\U$ which is introduced in \\cite{bh}. In the next proposition we see that $\\A\\ltimes_{\\alpha} \\U$ is isomorphic as a Banach algebra to the direct product $\\A\\times \\U$.\n\\begin{prop}\\label{dal}\nLet $\\A$, $\\U$ be Banach algebras and $\\alpha:\\A\\rightarrow \\U$ be a continuous algebra homomorphism with $\\Vert \\alpha\\Vert \\leq 1$. Consider the Banach algebra $\\A\\ltimes_{\\alpha} \\U$ as above. Then $\\A\\ltimes_{\\alpha} \\U$ is isomorphic as a Banach algebra to $\\A\\times \\U$.\n\\end{prop}\n\\begin{proof}\nDefine $\\theta:\\A\\times \\U\\rightarrow\\A\\ltimes_{\\alpha} \\U$ by $\\theta((a,x))=(a, x-\\alpha(a))$ $(a,x)\\in \\A\\times \\U$. The map $\\theta$ is linear and continuous. Also\n\\[\\theta((a,x)(b,y))=(a,x-\\alpha(a))(b,y-\\alpha(b))=(ab,xy-\\alpha(ab))=\\theta((a,x))\\theta((b,y)),\\]\nfor any $(a,x),(b,y)\\in \\A\\times \\U$. It is obvious that $\\theta$ is bijective. Hence $\\theta$ is a continuous algebra isomorphism.\n\\end{proof}\n\\begin{rem}\nLet $\\A$, $\\U$ and $\\alpha$ are as Proposition \\ref{dal}. By Remarks \\ref{d1}, \\ref{d2} and Proposition \\ref{dal} we have the following.\n\\par \nIf every derivation on $\\A\\ltimes_{\\alpha}\\U$ is continuous, then every derivation on $\\A$ and every derivation on $\\U$ is continuous. If $\\A$ or $\\U$ have a bounded approximate identity and every derivation on $\\A$ and every derivation on $\\U$ is continuous, then every derivation on $\\A\\ltimes_{\\alpha}\\U$ is continuous. Also if $\\overline{\\U ^{2}}=\\U$ or $ann_{\\A}\\A =(0)$ ($\\overline{\\A ^{2}}=\\A$ or $ann_{\\U}\\U =(0)$), then $H^{1}(\\A\\ltimes_{\\alpha}\\U)\\cong H^{1}(\\A)\\times H^{1}(\\U)$.\n\\end{rem}\nIf $\\U$ is a Banach algebra and $\\A$ is a closed subalgebra of it, then it is clear that the embedding map $i:\\A\\rightarrow\\U$ is continuous algebra homomorphism and hence we can conmsider $\\A\\ltimes_{i} \\U$ which is isomorphic as a Banach algebra to $\\A\\times \\U$ by Proposition \\ref{dal}. 
So by above remark we have the following examples.\n\\begin{exm}\n\\begin{enumerate}\n\\item[(i)] If $\\A$ is a semisimle Banach algebra with a bounded approximate identity, then every derivation on $\\A\\ltimes_{i}\\A$ is continuous. \n\\item[(ii)] Consider the group algebra $L^{1}(G)$ as a closed ideal in the measure algebra $M(G)$. Every derivation on $M(G)$ and every derivation on $L^{1}(G)$ is continuous and $M(G)$ is unital. So every derivation on $L^{1}(G)\\ltimes_{i} M(G)$ is continuous.\n\\item[(iii)] Let $\\mathcal{H}$ be a Hilbert space and $\\mathcal{N}$ be a complete nest in $\\mathcal{H}$. The associated nest algebra $Alg\\mathcal{N}$ is a closed subalgebra of $\\mathbb{B}(\\mathcal{H})$ which is unital. By \\cite{chr} every derivation on $Alg\\mathcal{N}$ is continuous. Also $\\mathbb{B}(\\mathcal{H})$ is a unital $C^{*}$-algebra. Hence every derivation on $Alg\\mathcal{N}\\ltimes_{i} Alg\\mathcal{N}$ and $Alg\\mathcal{N}\\ltimes_{i} \\mathbb{B}(\\mathcal{H})$ is continuous.\n\\end{enumerate}\n\\end{exm}\n\\begin{exm}\n\\begin{enumerate}\n\\item[(i)] Let $\\A$ be a weakly amenable commutative Banach algebra. Since $H^{1}(\\A)=(0)$ and $\\overline{\\A ^{2}}=\\A$, it follows that $H^{1}(\\A\\ltimes_{i}\\A)=(0)$.\n\\item[(ii)]\nSakai showed in \\cite{sa} that every continuous derivation on a $W^{*}$-algebra is inner. Every von Neumann algebra is a $W^{*}$-algebra which is unital. Let $\\A$ be a von Neumann algebra on a Hilbert space $\\mathcal{H}$. Hence $H^{1}(\\A\\ltimes_{i}\\A)=(0)$ and $H^{1}(\\A\\ltimes_{i}\\mathbb{B} (\\mathcal{H}))=(0)$.\n\\item[(iii)]\nLet $\\mathcal{H}$ be a Hilbert space and $\\mathcal{N}$ be a complete nest in $\\mathcal{H}$. In \\cite{chr}, Christensen proved that $H^{1}(Alg\\mathcal{N})=(0)$. Also $\\mathbb{B}(\\mathcal{H})$ is a von Neumann algebra. Hence \\[ H^{1}(Alg\\mathcal{N}\\ltimes_{i} Alg\\mathcal{N})=(0) \\quad \\text{ and } \\quad H^{1}(Alg\\mathcal{N}\\ltimes_{i} \\mathbb{B}(\\mathcal{H}))=(0).\\]\n\\end{enumerate}\n\\end{exm}\n\\subsection*{Module extension Banach algebras}\nLet $\\A$ be a Banach algebra and $\\U$ be a Banach $\\A$-bimodule. With trivial product $\\U ^{2}=(0)$, as we saw in Example \\ref{me}, $\\A\\ltimes\\U$ is the same as module extension of $\\A$ by $\\U$, namely $T(\\A,\\U)$. In this case, $ann_{\\U}\\U =\\U$ and so for every derivation $\\delta:\\A\\rightarrow\\U$ we have $\\delta(\\A)\\subseteq ann_{\\U}\\U$. By these notes, Theorem \\ref{asll} and Corollary \\ref{tak}, we have the following proposition on derivations on $T(\\A,\\U)$.\n\\begin{prop}\\label{ttd}\n Let $D:T(\\A,\\U)\\rightarrow T(\\A,\\U)$ be a linear map such that \n \\[D((a,x))=(\\delta_1(a)+\\tau_1(x),\\delta_2(a)+\\tau_2(x))\\quad\\quad ((a,x)\\in \\A\\ltimes\\U).\\]\n The following are equivalent.\n \\begin{enumerate}\n \\item[(i)]\n $D$ is a derivation. \n \\item[(ii)]\n $D=D_1+D_2$ where $D_1((a,x))=(\\delta_1(a)+\\tau_1(x),\\tau_2(x))$ and $D_{2}((a,x))=(0,\\delta_2(a))$ are derivations on $T(\\A,\\U)$.\n \\item[(iii)]\n $\\delta_1:\\A\\rightarrow \\A$, $\\delta_2:\\A\\rightarrow \\U$ are derivations, $\\tau_{2}:\\U\\rightarrow \\U$ is a linear map such that \n \\[\\tau_2 (ax)=a\\tau_2 (x)+\\delta_2 (a)x\\quad \\text{and} \\quad \\tau_2 (xa)=\\tau_2 (x)a+x\\delta_2(a)\\quad \\quad (a\\in\\A,x\\in\\U).\\] \n and $\\tau_1:\\U\\rightarrow \\A$ is an $\\A$-bimodule homomorphism such that $x\\tau_1(y)+\\tau_1(x)y=0$ $ (x,y\\in\\U)$. 
\\\\\n Moreover, $D$ is an inner derivation if and only if $\\delta_1 ,\\delta_2$ are inner derivations, $\\tau_1 =0$ and if $\\delta_1 =ad_{a}$ and $ \\delta_2 =ad_{x}$, then $\\tau_2 =r_{a}$.\n \\end{enumerate}\n \\end{prop}\n Proposition 2.2 of \\cite{med} is a consequence of this proposition. \n\\begin{rem} \\label{r0} \n By Proposition \\ref{ttd}, the linear map $\\delta:\\A\\rightarrow\\U$ is a derivation if and only if the linear map $D((a,x))=(0,\\delta (a))$ on $T(\\A,\\U)$ is a derivation. So any derivation $\\delta:\\A\\rightarrow\\U$ is continuous, if every derivation on $T(\\A,\\U)$ is continuous.\n \\end{rem} \n \\begin{prop}\\label{tau}\nSuppose that there are $\\A$-bimodule homomorphisms $\\phi:\\A\\rightarrow\\U$ and $\\psi:\\U\\rightarrow\\A$ such that $\\phi o\\psi = I_{\\U}$ ($I_\\U$ is the identity map on $\\U$). If every derivation on $T(\\A,\\U)$ is continuous, then every derivation on $\\A$ is continuous.\n\\end{prop}\n\\begin{proof}\nLet $\\delta$ be a derivation on $\\A$. Define the map $\\tau:\\U\\rightarrow\\U$ by $\\tau =\\phi o\\delta o\\psi$.\nThen for every $a\\in\\A,x\\in\\U$,\n\\[\\tau(ax)=a\\tau(x)+\\delta(a)x\\quad \\text{and} \\quad \\tau(xa)=\\tau(x)a+x\\delta(a).\\]\nSo the map $D:T(\\A,\\U)\\rightarrow T(\\A,\\U)$ defined by $D((a,x))=(\\delta(a),\\tau(x))$ is a derivation which is continuous by the hypothesis. Thus $\\delta$ is continuous.\n\\end{proof}\nIn the previous proposition the assumption of existence of $\\A$-bimodule homomorphisms $\\phi:\\A\\rightarrow\\U$ and $\\psi:\\U\\rightarrow\\A$ with $\\phi o \\psi =I_{\\U}$ is equivalent to that there exists a subbimodule $\\mathcal{V}$ of $\\A$ such that $\\A=\\U\\oplus \\mathcal{V}$ as $\\A$-bimodules direct sum. (In fact in this case, $\\U$ and $\\mathcal{V}$ are ideals of $\\A$). \n \\par \nSince for every derivation $\\delta:\\A\\rightarrow\\U$, $\\delta(\\A)\\subseteq ann_{\\U}\\U$, in the stated cases in Proposition \\ref{taj2}, one can express any derivation $D$ on $T(\\A,\\U)$ as the sum of two derivations one of which being continuous. Also the Proposition \\ref{au1}-$(i)$ holds in the case of module extension Banach algebras. \n\\begin{rem}\\label{r1}\nIf $\\A$ is a Banach algebra and $\\mathcal{I}$ is a closed ideal on it, then $\\frac{\\A}{\\mathcal{I}}$ is a Banach $\\A$-bimodule and so we can consider $T(\\A, \\frac{\\A}{\\mathcal{I}})$. Suppose that $\\A$ possesses a bounded right (or left) approximate identity and every derivation on $\\A$ and every derivation from $\\A$ to $\\frac{\\A}{\\mathcal{I}}$ is continuous. Let $D$ be a derivation on $T(\\A, \\frac{\\A}{\\mathcal{I}})$ which has a structure as in Proposition \\ref{ttd}. Then $\\tau_1:\\frac{\\A}{\\mathcal{I}}\\rightarrow \\A$ is an $\\A$-bimodule homomorphism. Since $\\A$ has a bounded right(left) approximate identity, then so does $\\frac{\\A}{\\mathcal{I}}$. Hence $\\tau_1$ is continuous. Now from Proposition \\ref{taj2}-$(ii)$ it follows that $D$ is continuous. 
Hence in this case any derivation on $T(\\A, \\frac{\\A}{\\mathcal{I}})$ is continuous.\n\\end{rem}\n\\begin{rem}\\label{r2}\nIf $\\mathcal{I}$ is a closed ideal in a Banach algebra $\\A$ and $\\delta:\\A\\rightarrow\\A$ is a derivation such that $\\delta(\\mathcal{I})\\subseteq \\mathcal{I}$, then the map $\\tau: \\frac{\\A}{\\mathcal{I}}\\rightarrow \\frac{\\A}{\\mathcal{I}}$ defined by $\\tau(a+\\mathcal{I})=\\delta(a)+\\mathcal{I}$ is well-defined and linear and \n\\[\\tau (a(x+\\mathcal{I}))=a\\tau (x+\\mathcal{I})+\\delta (a)(x+\\mathcal{I})\\quad,\\quad \\tau((x+\\mathcal{I})a)=\\tau(x+\\mathcal{I})a+(x+\\mathcal{I})\\delta(a).\\]\n Therefore the map $D$ on $T(\\A, \\frac{\\A}{\\mathcal{I}})$ defined by $D((a,x))=(\\delta (a),\\tau(x+\\mathcal{I}))$ is a derivation. So if every derivation on $T(\\A, \\frac{\\A}{\\mathcal{I}})$ is continuous, then every derivation $\\delta:\\A\\rightarrow \\A $ with $\\delta(\\mathcal{I})\\subseteq \\mathcal{I}$ is continuous.\n\\end{rem}\n In Remarks \\ref{r1} and \\ref{r2}, if we let $\\mathcal{I}=(0)$, then we have the following corollary.\n \\begin{cor}\n Let $\\A$ be a Banach algebra.\n\\begin{enumerate}\n\\item[(i)] If $\\A$ has a right (left) approximate identity and every derivation on $\\A$ is continuous, then any derivation on $T(\\A, \\A)$ is continuous. \n\\item[(ii)] If every derivation on $T(\\A, \\A)$ is continuous, then any derivation on $\\A$ is continuous.\n\\end{enumerate} \n \\end{cor}\nThe part (ii) of this corollary could also be derived from Remark \\ref{r0} or Proposition \\ref{tau}.\n\\par \nIn continue we give some results of Proposition \\ref{taj2} in the case of module extension Banach algebras.\n\\begin{cor}\\label{trs}\nLet $\\A$ be a semisimple Banach algebra which has a bounded approximate identity and $\\U$ be a Banach $\\A$-bimodule with $ann_{\\A}\\U=(0)$. Suppose that there exists a surjective left $\\A$-module homomorphism $\\phi:\\A\\rightarrow\\U$ and every derivation from $\\A$ into $\\U$ is continuous, then every derivation on $T(\\A,\\U)$ is continuous.\n\\end{cor}\n\\begin{proof}\nLet $D$ be a derivation on $T(\\A,\\U)$ which has a structure as in Proposition \\ref{ttd}. Since $\\A$ is semisimple, every derivation from $\\A$ into $\\A$ is continuous. Now by Proposition \\ref{taj2}-$(ii)$ and Remark \\ref{taj11}, it follows that every derivation on $T(\\A,\\U)$ is continuous.\n\\end{proof}\nRingrose in \\cite{rin} proved that every derivation from a $C^{*}$-algebra into a Banach bimodule is continuous. So we have the following example which satisfies the conditions of the above corrolary.\n\\begin{exm}\nLet $\\A$ be a $C^{*}$-algebra and $\\U$ be a Banach $\\A$-bimodule with $ann_{\\A}\\U=(0)$. Suppose that there exists a surjective left $\\A$-module homomorphism $\\phi:\\A\\rightarrow\\U$. Hence $\\A$ is a semisimple Banach algebra with a bounded approximate identity. So by \\cite{rin} and Corrolary \\ref{trs}, any derivation on $T(\\A,\\U)$ is continuous. \n\\end{exm}\nAn element $p$ in an algebra $\\A$ is called an \\textit{idempotent} if $p^{2}=p$.\n\\begin{cor}\nLet $\\A$ be a prime Banach algebra with a non-trivial idempotent\n$p$ (i.e. $p \\neq 0$) such that $\\A p$ is finite dimensional. Then every derivation\non $\\A$ is continuous.\n\\end{cor}\n\\begin{proof}\nLet $\\U:=\\A p$. Then $\\U$ is a closed left ideal in $\\A$. By the following make $\\U$ into a Banach\n$\\A$-bimodule:\n\\[ xa=0 \\quad \\quad (x\\in \\A p, a\\in \\A), \\]\nand the left multiplication is the usual multiplication of $\\A$. 
So we can consider $T(\\A,\\U)$ in this case. Since $\\A$ is prime, it follows that $ann_{\\A}\\U=(0)$. Let $\\delta:\\A \\rightarrow \\A$ be a derivation. Define the map $\\tau:\\U \\rightarrow \\U$ by $\\tau(ap)=\\delta(ap)p$ $(a\\in \\A)$. The map $\\tau$ is well-defined and linear. Also\n\\[\\tau (ax)=a\\tau (x)+\\delta (a)x\\quad \\text{and} \\quad \\tau (xa)=\\tau (x)a+x\\delta(a)\\quad \\quad (a\\in\\A,x\\in\\U).\\] \nSince $\\U$ is finite dimensional, it follows that $\\tau$ is continuous. By Proposition \\ref{ttd}, the mapping $D:T(\\A,\\U)\\rightarrow T(\\A,\\U)$ defined by $D((a,x))=(\\delta(a),\\tau(x))$ is a derivation. Now for $D$ the conditions of Proposition \\ref{taj2}-$(i)$ hold and hence $D$ is continuous. Therefore $\\delta$ is continuous.\n\\end{proof}\n Now we investigate the first cohomology group of $T(\\A ,\\U)$. \n \\par \n In module extension $T(\\A ,\\U)$ since $\\U ^{2}=(0)$, for every derivation $\\delta:\\A\\rightarrow\\U$ on $T(\\A ,\\U)$ we have $\\delta (\\A)\\subseteq ann_{\\U}\\U$ and also $Z^{1}(\\U)=\\mathbb{B}(\\U)$ and $N^{1}(\\U)=(0)$, we may conclude the following proposition from Theorem \\ref{22}. \n \\begin{prop}\\label{cte}\nConsider the module extension $T(\\A,\\U)$ of a Banach algebra $\\A$ and a Banach $\\A$-bimodule $\\U$. Suppose that $H^{1}(\\A)=(0)$ and the only continuous $\\A$-bimodule homomorphism $T:\\U\\rightarrow\\A$ satisfying $T(x)y+xT(y)=0$ is $T=0$. Then \n\\[H^{1}(T(\\A ,\\U))\\cong H^{1}(\\A ,\\U)\\times \\frac{Hom_{\\A}(\\U)}{C_{\\A}(\\U)}.\\]\n \\end{prop} \nIn fact with the assumptions of the previous proposition, in Theorem \\ref{22} we have $\\mathcal{F}=N^{1}(\\A)\\times C_{\\A}(\\U)$. Proposition \\ref{cte} is the same as Theorem 2.5 of \\cite{med}. So it can be said that Theorem \\ref{22} is a generalization of Theorem 2.5 in \\cite{med}. \n\\begin{rem}\n \\begin{enumerate}\n \\item[(i)]\n If $\\U$ is a closed ideal of a Banach algebra $\\A$ such that $\\overline{\\U^{2}}=\\U$, then for any continuous $\\A$-bimodule homomorphism $T:\\U\\rightarrow\\A$ with $T(x)y+xT(y)=0\\, (x,y\\in\\U)$ we have $T(xy)=0$. Since $\\overline{\\U^{2}}=\\U$ and $T$ is continuous, it follows that $T=0$. Hence in this case, for any $\\delta\\in Z^{1}(T(\\A ,\\U))$ in its representation, $\\tau_1=0$. Also in this case, for any $r_a \\in C_{\\A}(\\U)$, since $a\\in Z(\\A)$, then $r_{a}=0$. Therefore if $H^{1}(\\A)=(0)$, in this case by Proposition \\ref{22} we have \n \\[H^{1}(T(\\A ,\\U))\\cong H^{1}(\\A,\\U)\\times Hom_{\\A}(\\U).\\]\n \\item[(ii)]\n If Banach algebra $\\A$ has no nonzero nilpotent elements, $\\U$ is a Banach $\\A$-bimodule and $T:\\U\\rightarrow\\A$ is an $\\A$-bimodule homomorphism such that $T(x)y+xT(y)=0\\, (x,y\\in\\U)$, then $2T(x)T(y)=0\\, (x,y\\in \\U)$. Thus for any $x\\in \\U$, $T(x)^{2}=0$ and by the hypothesis $T(x)=0$. So in this case for any derivation on $T(\\A ,\\U)$, in its structure by Proposition \\ref{ttd} we have $\\tau_1 =0$.\n\\end{enumerate}\n \\end{rem} \n \\begin{exm}\n If $\\A$ is a weakly amenable commutative Banach algebra and $\\U$ is a commutative Banach $\\A$-bimodule such that for any continuous $\\A$-bimodule homomorphism $T:\\U\\rightarrow\\A$ with $T(x)y+xT(y)=0\\, (x,y\\in\\U)$ implies that $T=0$, then $H^{1}(\\A)=(0)$, $H^{1}(\\A,\\U)=(0)$ and $C_{\\A}(\\U)=(0)$ and thus \n \\[H^{1}(T(\\A ,\\U))\\cong Hom_{\\A}(\\U).\\]\n \\end{exm}\nVarious examples of the trivial extension of Banach algebras and computing their first cohomology group are given in \\cite{med}. 
\n\\par \nIf $\\delta\\in Z^{1}(\\A,\\U)$, then the map $D_{\\delta}:T(\\A,\\U)\\rightarrow T(\\A,\\U)$ defined by $D_{\\delta}((a,x))=(0,\\delta(a))$ is a continuous derivation. Now we may define the linear map $\\Phi :Z^{1}(\\A,\\U)\\rightarrow H^{1}(T(\\A ,\\U))$ by $\\Phi (\\delta)=[D_{\\delta}]$. By noting that $\\delta$ is inner if and only if $D_{\\delta}$ is inner (by Corollary \\ref{tak}), it can be seen that $ker\\Phi= N^{1}(\\A ,\\U)$. Thus $H^{1}(\\A ,\\U)$ is isomorphic to some subspace of $H^{1}(T(\\A ,\\U))$. So we have the following corollary. \n \\begin{cor}\n Let $\\A$ be a Banach algebra and $\\U$ be a Banach $\\A$-bimodule. Then there is a linear isomorphism from \n $H^{1}(\\A ,\\U)$ onto a subspace of $H^{1}(T(\\A ,\\U))$.\n \\end{cor}\n By the above corollary, $H^{1}(T(\\A ,\\U))=(0)$ implies that $H^{1}(\\A ,\\U)=(0)$. In particular, if $H^{1}(T(\\A ,\\A))=(0)$, then $H^{1}(\\A)=(0)$. We can also obtain this result from the fact that any derivation $\\delta:\\A\\rightarrow\\A$ gives rise to a derivation $D:T(\\A,\\A)\\rightarrow T(\\A,\\A)$ given by $D((a,x))=(\\delta(a),\\delta(x))$ (by Remark \\ref{r2}). So if $D$ is inner, then $\\delta$ is inner.\n\n \\subsection*{$\\theta$-Lau products of Banach algebras} \nIn this subsection we assume that $0\\neq \\theta\\in \\Delta (\\A)$ and $\\U$ is a Banach algebra. By the module action given in Example \\ref{la} we turn $\\U$ into a Banach $\\A$-bimodule with compatible actions and norm, and when necessary we denote this module by $ \\U_{\\theta}$. Note that $ann_{\\A}\\U_{\\theta} =ker \\theta$. Consider $\\A\\ltimes\\U$ and denote it by $\\A\\ltimes_{\\theta}\\U$, which is called the $\\theta$-Lau product. Throughout the rest of this section we consider $\\A\\ltimes_{\\theta}\\U$ as just described. \n\\par \nThe following proposition, which is obtained from Theorem \\ref{asll}, characterizes the structure of derivations on $\\A\\ltimes_{\\theta}\\U$.\n \\begin{prop}\\label{lau-der}\nLet $D:\\A\\ltimes_{\\theta} \\U\\rightarrow \\A \\ltimes_{\\theta}\\U$ be a map. Then the following conditions are equivalent.\n\\begin{enumerate}\n\\item[(i)] $D$ is a derivation.\n\\item[(ii)] \n\\[D((a,x))=(\\delta_1 (a)+\\tau_1 (x),\\delta_2 (a)+\\tau_2 (x))\\quad\\quad (a\\in \\A,x\\in \\U)\\]\nsuch that \n\\begin{enumerate}\n\\item[(a)]\n$\\delta_1 :\\A\\rightarrow\\A, \\delta_2 :\\A\\rightarrow\\U$ are derivations such that \n\\[\\theta (\\delta_1 (a))x+\\delta_2(a)x=0\\quad \\text{and}\\quad \\theta (\\delta_1 (a))x+x\\delta_2(a)=0\\quad (a\\in\\A,x\\in\\U).\\]\n\\item[(b)]\n$\\tau_1:\\U\\rightarrow \\A$ is an $\\A$-bimodule homomorphism such that $\\tau_1(xy)=0\\quad (x,y\\in\\U)$.\n\\item[(c)]\n$\\tau_2:\\U\\rightarrow \\U$ is a linear map such that \n\\[\\tau_2(xy)=\\theta (\\tau_1(y))x+\\theta (\\tau_1(x))y+x\\tau_2(y)+\\tau_2(x)y\\quad \\quad (x,y\\in\\U).\\]\nAlso $D$ is inner if and only if $\\tau_1 =0, \\delta_2 =0, \\delta_1 =id_{a}$ and $\\tau_2 =id_{\\U,x}$.\n\\end{enumerate}\n\\end{enumerate}\n\\end{prop}\nBy the above proposition, for a derivation $D$ on $\\A\\ltimes_{\\theta}\\U$ we have \n\\[\\delta_2 (\\A)\\subseteq Z(\\U), \\quad \\theta (a)\\tau_1 (x) =a\\tau_1(x)=\\tau_1(x)a\\]\nand so $\\tau_1(\\U)\\subseteq Z(\\A)$. Also $x\\tau_1 (y)+\\tau_1 (x)y =0$ for all $x,y\\in \\U$ if and only if $\\tau_1 (\\U)\\subseteq ker \\theta$. 
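The last equivalence can be checked directly. Assuming, as in Example \\ref{la}, that the module actions on $\\U_{\\theta}$ are given by $a\\cdot x=x\\cdot a=\\theta (a)x$ $(a\\in\\A, x\\in\\U)$, we have\n\\[x\\tau_1 (y)+\\tau_1 (x)y=\\theta (\\tau_1 (y))x+\\theta (\\tau_1 (x))y\\quad \\quad (x,y\\in\\U),\\]\nwhich vanishes for all $x,y\\in\\U$ if and only if $\\theta \\circ \\tau_1 =0$, that is, $\\tau_1 (\\U)\\subseteq ker \\theta$ (for the forward direction take $y=x$; then $\\theta (\\tau_1 (x))x=0$ for all $x$ forces $\\theta (\\tau_1 (x))=0$).\n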
Additionally, $\\delta_1 (\\A)\\subseteq ann_{\\A}\\U =ker\\theta$ if and only if $\\delta_2 (\\A)\\subseteq ann_{\\U}\\U$.\n\\begin{rem} \\label{cla}\nIf $\\A$ is a commutative Banach algebra, then by Thomas' theorem \\cite{tho}, for any derivation $\\delta:\\A\\rightarrow\\A$, $\\delta(\\A)\\subseteq rad (\\A)$. So in this case for any derivation $D$ on $\\A\\ltimes_{\\theta}\\U$ we always have $\\delta_1 (\\A)\\subseteq rad (\\A) \\subseteq ker\\theta=ann_{\\A}\\U $ and hence $\\delta_2 (\\A)\\subseteq ann_{\\U}\\U$. Also $D=D_1+D_2+D_{3}$ where $D_1((a,x))=(\\delta_1(a),0)$, $D_{2}((a,x))=(0,\\delta_2(a))$ and $D_3((a,x))=(\\tau_1(x),\\tau_2(x))$ are derivations on $\\A\\ltimes_{\\theta}\\U$. In this case by Corollary \\ref{tak}-$(i)$, for every derivation $\\delta:\\A\\rightarrow\\A$ the map $D$ on $\\A\\ltimes_{\\theta}\\U$ defined by $D((a,x))=(\\delta(a),0)$ is a derivation.\n\\end{rem}\nIn the following we always assume that for any derivation $D$ on $\\A\\ltimes_{\\theta}\\U$ we have $\\tau_1 =0$ and study the derivations under this condition. In this case $\\tau_2$ is always a derivation on $\\U$. \n\\par \nThe results on the automatic continuity of derivations established in Section 3 clearly also hold in the special case of the $\\theta$-Lau product. Now we are ready to state some results concerning the automatic continuity of derivations on $\\A\\ltimes_{\\theta}\\U$.\n\\par \nBy the definition of the $\\theta$-Lau product and Corollary \\ref{tak}-$(iv)$, any linear map $\\tau:\\U\\rightarrow\\U$ is a derivation if and only if the linear map $D((a,x))=(0,\\tau(x))$ on $\\A\\ltimes_{\\theta}\\U$ is a derivation. According to this point, the following corollary is clear.\n\\begin{cor}\nIf every derivation on $\\A\\ltimes_{\\theta}\\U$ is continuous, then every derivation on $\\U$ is continuous. In particular, if every derivation on $\\A\\ltimes_{\\theta}\\A$ is continuous, then every derivation on $\\A$ is continuous.\n\\end{cor}\nIf $\\A$ and $\\U$ are semisimple and $\\U$ has a bounded approximate identity, then by Proposition \\ref{semi} every derivation on $\\A\\ltimes_{\\theta}\\U$ is continuous. In particular if $\\A$ is a semisimple Banach algebra with a bounded approximate identity, then all derivations on $\\A\\ltimes_{\\theta}\\A$ are continuous. In the case of $C^{*}$-algebras we can drop the semisimplicity of $\\U$. In fact by Ringrose's result \\cite{rin}, Proposition \\ref{lau-der} and the preceding corollary we have the next corollary.\n\\begin{cor}\nLet $\\A$ be a $C^{*}$-algebra. Then every derivation on $\\A\\ltimes_{\\theta}\\U$ is continuous if and only if every derivation on $\\U$ is continuous.\n\\end{cor}\nFrom Proposition \\ref{taj1}-$(ii)$ and Remark \\ref{cla} we have the following corollary.\n\\begin{cor}\nLet $\\A$ be a commutative Banach algebra and $\\U$ a semisimple Banach algebra. Then every derivation $D$ on $ \\A\\ltimes_{\\theta}\\U $ is of the form $D=D_1+D_2$ where $D_1 ((a,x))=(0,\\delta_2 (a)+\\tau_2(x))$ is a continuous derivation on $\\A\\ltimes_{\\theta}\\U$ and $D_2 ((a,x))=(\\delta_1 (a),0)$ is a derivation on $\\A\\ltimes_{\\theta}\\U$. In particular, in this case, if every derivation on $\\A$ is continuous, then every derivation on $\\A\\ltimes_{\\theta}\\U$ is continuous.\n\\end{cor}\nNote that if $\\A$ is commutative, then $\\U$ is a commutative $\\A$-bimodule and hence the sets of left $\\A$-module homomorphisms, right $\\A$-module homomorphisms and $\\A$-bimodule homomorphisms coincide. 
Now by Proposition \\ref{taj2} and Remark \\ref{cla} we obtain the next corollary.\n\\begin{cor}\nLet $\\A$ be a commutative Banach algebra which has a bounded approximate identity. Then any derivation $D$ on $\\A\\ltimes_{\\theta}\\U$ is of the form $D=D_1+D_2$ where $D_1 ((a,x))=(\\delta_1 (a),\\tau_2(x))$ and $D_2 ((a,x))=(0,\\delta_2(a))$ are derivations on $\\A\\ltimes_{\\theta}\\U$ and under any of the following conditions, $D_1$ is continuous.\n\\begin{enumerate}\n\\item[(i)]\nThere is a surjective $\\A$-module homomorphism $\\phi:\\A\\rightarrow\\U$ and $\\delta_1$ is continuous.\n\\item[(ii)]\nThere is an injective $\\A$-module homomorphism $\\phi:\\A\\rightarrow\\U$ and $\\tau_2$ is continuous.\n\\end{enumerate}\n\\end{cor}\nFrom now on we assume that for any continuous derivation $D$ on $\\A\\ltimes_{\\theta}\\U$ we have $\\tau_1 =0$. By the definition of $\\A\\ltimes_{\\theta}\\U$, in this case, $Hom_{\\A}(\\U)=\\mathbb{B}(\\U), N^{1}(\\A,\\U)=(0), Z^{1}(\\A,\\U)=H^{1}(\\A,\\U), R_{\\A}(\\U)=C_{\\A}(\\U)=(0)$ and $N^{1}(\\U)=I(\\U)$. \n\\begin{rem}\\label{pri}\nFor any derivation $\\delta\\in Z^{1}(\\A)$, we have $\\delta(\\A)\\subseteq ker \\theta=ann_{\\A}\\U $, since Sinclair's theorem \\cite{sinc} implies that $\\delta(P)\\subseteq P$ for any primitive ideal $P$ of $\\A$. So in this case for any derivation $D\\in Z^{1}(\\A\\ltimes_{\\theta}\\U)$ we always have $\\delta_1 (\\A)\\subseteq ker \\theta =ann_{\\A}\\U $ and hence $\\delta_2 (\\A)\\subseteq ann_{\\U}\\U$. Also $D=D_1+D_2+D_{3}$ where $D_1((a,x))=(\\delta_1(a),0)$, $D_{2}((a,x))=(0,\\delta_2(a))$ and $D_3((a,x))=(\\tau_1(x),\\tau_2(x))$ are in $Z^{1}(\\A\\ltimes_{\\theta}\\U)$. \n\\end{rem}\nNote that it is not necessarily true that $\\delta\\in Z^{1}(\\A,\\U)$ implies $\\delta(\\A)\\subseteq ann_{\\U}\\U$, as the next example shows.\n\\begin{exm}\nAssume that $G$ is a non-discrete abelian group. In \\cite{br}, it has been shown that there is a nonzero continuous point derivation $d$ at a nonzero character $\\theta$ on $M(G)$. Now consider $M(G)\\ltimes_{\\theta} \\mathbb{C}$. Every derivation from $M(G)$ into $\\mathbb{C}_\\theta$ is a point derivation at $\\theta$. It is clear that $ann_{\\mathbb{C}}\\mathbb{C} =(0)$. But $d\\in Z^{1}(M(G),\\mathbb{C}_\\theta)$ is a nonzero derivation such that $d(M(G))\\not\\subseteq ann_{\\mathbb{C}}\\mathbb{C} =(0)$.\n\\end{exm}\n\\par \nNow we determine the first cohomology group of $\\A\\ltimes_{\\theta}\\U$ in some other cases.\n\\par \nThe linear space $\\mathcal{E}$ of Theorem \\ref{11} in the case of $\\A\\ltimes_{\\theta}\\U$ is $\\mathcal{E}=N^{1}(\\A)\\times N^{1}(\\U)$. Therefore by Theorem \\ref{11} and Remark \\ref{pri} we have the next proposition.\n\\begin{prop}\\label{a1}\nIf $H^{1}(\\A,\\U)=(0)$, then \n\\[H^{1}(\\A\\ltimes_{\\theta}\\U)\\cong H^{1}(\\A)\\times H^{1}(\\U).\\]\n\\end{prop}\n\\begin{cor}\nLet $\\A$ be a Banach algebra with $H^{1}(\\A,\\A_{\\theta})=(0)$ and $\\overline{\\A^{2}}=\\A$. Then $H^{1}(\\A \\ltimes_{\\theta}\\A)=(0)$ if and only if $H^{1}(\\A)=(0)$.\n\\end{cor}\n\\par \nThe linear space $\\mathcal{F}$ of Theorem \\ref{22} in the case of $\\A\\ltimes_{\\theta}\\U$ is $\\mathcal{F}=(0)\\times N^{1}(\\U)$. So we have the following proposition.\n\\begin{prop}\nSuppose that for any $\\delta\\in Z^{1}(\\A,\\U)$ we have $\\delta(\\A)\\subseteq ann_{\\U}\\U$. 
If $H^{1}(\\A)=(0)$, then \n\\[H^{1}(\\A\\ltimes_{\\theta}\\U)\\cong H^{1}(\\A,\\U)\\times H^{1}(\\U).\\]\n\\end{prop}\n The linear space $\\mathcal{K}$ in Theorem \\ref{33} in the case of $\\A\\ltimes_{\\theta}\\U$ is $\\mathcal{K}=N^{1}(\\A)\\times (0)$. Therefore by Theorem \\ref{33} we have the next proposition.\n \\begin{prop}\\label{prop8}\nSuppose that for every $\\delta\\in Z^{1}(\\A,\\U)$ we have $ \\delta(\\A)\\subseteq ann _{\\U}\\U$.\n If $H^{1}(\\U)=(0)$, then \n \\[H^{1}(\\A\\ltimes_{\\theta}\\U)\\cong H^{1}(\\A)\\times H^{1}(\\A,\\U).\\]\n\\end{prop}\nThe following proposition follows directly from Theorem \\ref{44} and the properties of $\\A\\ltimes_{\\theta}\\U$.\n\\begin{prop}\\label{prop10}\nSuppose that $H^{1}(\\A)=(0)$ and $H^{1}(\\A,\\U)=(0)$. Then\n\\[H^{1}(\\A\\ltimes_{\\theta}\\U)\\cong H^{1}(\\U).\\] \n\\end{prop}\n\\begin{rem}\\label{akh}\nFrom Proposition \\ref{prop10} it follows that if $H^{1}(\\A)=(0), H^{1}(\\A,\\U)=(0)$ and $H^{1}(\\U)=(0)$, then $H^{1}(\\A\\ltimes_{\\theta}\\U)=(0)$. Under some conditions the converse is also true. That is, if for any $\\delta\\in Z^{1}(\\A,\\U)$ we have $\\delta (\\A)\\subseteq ann_{\\U}\\U$ and $H^{1}(\\A\\ltimes_{\\theta}\\U)=(0)$, then $H^{1}(\\A)=(0), H^{1}(\\A,\\U)=(0)$ and $H^{1}(\\U)=(0)$. This does not follow from the above propositions, so we prove it directly as follows. By the hypotheses, for any $\\delta_1\\in Z^{1}(\\A)$, $\\delta_2\\in Z^{1}(\\A,\\U)$ and $\\tau\\in Z^{1}(\\U)$ the map $D:\\A\\ltimes_{\\theta}\\U\\rightarrow \\A\\ltimes_{\\theta}\\U$ given by $D((a,x))=(\\delta_1(a),0)$, $D((a,x))=(0,\\delta_2(a))$ or $D((a,x))=(0,\\tau(x))$ is a continuous derivation. If $H^{1}(\\A\\ltimes_{\\theta}\\U)=(0)$, then $\\delta_2=0$ and $\\delta_1 , \\tau $ are inner.\n\\end{rem}\n \\begin{exm}\nLet $\\A$ be a weakly amenable commutative Banach algebra and suppose that $\\overline{\\U^{2}}=\\U$. Then $Z^{1}(\\A)=(0)$, $Z^{1}(\\A,\\U)=(0)$ and from Remark \\ref{akh} we have $H^{1}(\\A\\ltimes_{\\theta}\\U)=(0)$ if and only if $H^{1}(\\U)=(0)$.\n \\end{exm}\n In particular if $\\A$ is a weakly amenable commutative Banach algebra, the above example implies that $H^{1}(\\A\\ltimes_{\\theta}\\A)=(0)$. Note that for a weakly amenable Banach algebra $\\A$ the equality $\\overline{\\A^{2}}=\\A$ always holds.\n\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n \\label{Sec_intro}\n\n In this paper we present results on the longitudinal double spin\nasymmetry $A_1^{\\rho }$ for exclusive incoherent $\\rho ^0$ production in the\nscattering of high energy muons on nucleons.\nThe experiment was carried out at CERN by the COMPASS collaboration\nusing the 160~GeV muon beam and the large $^{6}$LiD polarised target.\n\nThe studied reaction is \n\\begin{equation} \n \\mu + N \\rightarrow \\mu^{\\prime} + \\rho ^0 + N^{\\prime}, \\label{murho}\n\\end{equation}\nwhere $N$ is a quasi-free nucleon from the polarised deuterons.\nThe reaction (\\ref{murho}) can be described in terms of the virtual photoproduction\nprocess\n\\begin{equation}\n \\gamma^{\\ast} + N \\rightarrow \\rho^0 + N^{\\prime}. 
\\label{phorho} \n\\end{equation}\nThe reaction (\\ref{phorho}) can be regarded as a fluctuation of the\nvirtual photon into a quark-antiquark pair (in partonic language),\nor an off-shell vector meson (in the Vector Meson Dominance model), which then scatters\noff the target nucleon resulting in the production of an on-shell vector meson.\nAt high energies this is predominantly a diffractive process and plays\nan important role in the investigation of Pomeron exchange and its\ninterpretation in terms of multiple gluon exchange.\n\nMost of the presently available information on the spin structure \nof reaction (\\ref{phorho})\nstems from the $\\rho ^0$ spin density matrix elements,\nwhich are obtained from the analysis of angular distributions\nof $\\rho ^0$ production and decay \\cite{Schilling73}. Experimental results\non $\\rho ^0$ spin density matrix elements \ncome from various experiments \\cite{NMC,E665-1,ZEUS,H1,HERMES00} including\nthe preliminary results from COMPASS \\cite{sandacz}.\n \nThe emerging picture of the spin structure of the considered process \nis the following. At low photon virtuality $Q^2$ the cross section for transverse virtual photons\n$\\sigma _T$\ndominates, while the relative contribution of the cross section for longitudinal\nphotons $\\sigma _L$ \nrapidly increases with $Q^2$. At $Q^2$ of about 2~(GeV\/{\\it c})$^2$ both\ncomponents become comparable and at a larger $Q^2$ the contribution of\n$\\sigma _L$ becomes dominant and continues to grow, although\nat a lower rate than at low $Q^2$. \nApproximately, the so-called $s$-channel helicity\nconservation (SCHC) is valid, i.e.\\ the helicity of the vector meson is the same\nas the helicity of the parent virtual photon. The data indicate that \nthe process can be described approximately by the \nexchange in the $t$-channel of an object with natural \nparity $P$.\nSmall deviations from SCHC are observed, also at the highest energies, whose\norigin is still to be understood. \nAn interesting suggestion was made in Ref.~\\cite{igivanov} that at high energies\nthe magnitudes of various helicity amplitudes for the reaction (\\ref{phorho})\nmay shed light on the spin-orbital momentum structure of the vector meson. 
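\n\nFor orientation, we recall how $R$ is accessed in such analyses: under SCHC the ratio $R=\\sigma _L\/\\sigma _T$ is related to the spin density matrix element $r^{04}_{00}$ by the standard expression \\cite{Schilling73}\n\\begin{equation}\n r^{04}_{00} = \\frac{\\epsilon R}{1+\\epsilon R}\\ , \\qquad {\\rm equivalently} \\qquad R = \\frac{1}{\\epsilon }\\, \\frac{r^{04}_{00}}{1-r^{04}_{00}}\\ ,\n\\end{equation}\nwhere $\\epsilon $ denotes the virtual-photon polarisation parameter.\n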
\n\nComplementary information can be obtained from measurements of the double\nspin cross section asymmetry, when the information on both the beam\nand target polarisation is used.\nThe asymmetry is defined as\n\\begin{equation}\n A_{1}^{\\rho} = \n \\frac{\n \\sigma_{1\/2} - \\sigma_{3\/2}\n }{\n \\sigma_{1\/2} + \\sigma_{3\/2}}\n ,\n \\label{A1def}\n\\end{equation}\nwhere $\\sigma_{1\/2 (3\/2)}$ stands for the cross sections of the reaction (\\ref{phorho})\nand the subscripts denote the total virtual photon--nucleon angular momentum\ncomponent along the virtual photon\ndirection.\nIn the following we will also use the asymmetry\n\\ALL\\\nwhich is defined for reaction (\\ref{murho}) as the asymmetry of\nmuon--nucleon cross sections for antiparallel and parallel beam and\ntarget longitudinal spin orientations.\n\n\nIn the Regge approach \\cite{manaenkov} the longitudinal double spin asymmetry\n$A_1^{\\rho }$ can arise due \nto the interference of amplitudes for exchange in the $t$-channel of Reggeons with natural parity\n(Pomeron, $\\rho $, $\\omega $, $f$, $A_2$) with amplitudes for Reggeons with \nunnatural parity \n($\\pi , A_1$).\nNo significant asymmetry is expected \nwhen only a non-perturbative \nPomeron is exchanged because it has small spin-dependent couplings as found from \nhadron-nucleon data for cross sections and polarisations.\n\nSimilarly, in the approach of Fraas \\cite{fraas76}, \nassuming approximate validity of \nSCHC, the spin asymmetry $A_1^{\\rho }$ arises from the interference between\nparts of the helicity amplitudes for transverse photons corresponding\nto the natural and unnatural parity exchanges\nin the $t$ channel. While a measurable asymmetry can arise even from a small\ncontribution of the unnatural parity exchange, the latter may remain\nunmeasurable in the cross sections.\nA significant unnatural-parity contribution may indicate an exchange \nof certain Reggeons like $\\pi$, $A_{1}$ or in\npartonic terms an exchange of $q\\bar{q}$ pairs.\n\nIn the same reference a theoretical prediction for $A_1^{\\rho}$ was\npresented, which is based on the description of forward exclusive $\\rho^{0}$\nleptoproduction and inclusive inelastic lepton--nucleon\nscattering by the off-diagonal Generalised Vector Meson Dominance (GVMD) model,\napplied to the case of polarised lepton--nucleon scattering.\nAt values of the Bjorken variable $x < 0.2$, with additional assumptions\n\\cite{HERMES01}, \\Aor\\ can be related \nto the $A_{1}$ asymmetry for inclusive inelastic lepton scattering at the same\n\\xBj\\ as\n\\begin{equation}\n A_1^{\\rho } = \\frac{2 A_1}{1 + (A_1)^2} . \\label{A1-Fraas}\n\\end{equation} \nThis prediction is consistent with the HERMES results for both the proton\nand deuteron targets, although with rather large errors.\n\nIn perturbative QCD, there exists a general proof of factorisation \\cite{fact}\nfor exclusive vector meson production by longitudinal photons. \nIt allows a decomposition of the full amplitude for reaction (\\ref{phorho})\ninto three components:\na hard scattering amplitude for the exchange of quarks or gluons,\na distribution amplitude for the meson and \nthe non-perturbative description of the target nucleon in terms of\nthe generalised parton distributions (GPDs), which are related to the internal\nstructure\nof the nucleon.\nNo similar proof of factorisation exists for transverse virtual photons, and as a consequence\nthe interpretation of $A_{1}^{\\rho}$ in perturbative QCD is not possible\nat leading twist. 
However, a model including higher twist effects proposed\nby Martin et al. \\cite{mrt} describes the behaviour of both $\\sigma_{L}$\nand $\\sigma _T$ reasonably well. An extension of this model by Ryskin\n\\cite{Ryskin} for the\nspin dependent cross sections allows one to relate\n\\Aor\\ to the spin dependent GPDs of gluons and quarks in the nucleon.\nThe applicability of this model is limited to the range $Q^2 \\geq 4~(\\mathrm{GeV}\/c)^2$.\nMore recently another pQCD-inspired model involving GPDs has been proposed by Goloskokov and Kroll \n\\cite{krgo,gokr}. The non-leading twist asymmetry $A_{LL}$ results from the \ninterference between the dominant GPD $H_g$ and the helicity-dependent GPD\n$\\tilde{H}_g$. The asymmetry is estimated to be of the order \n$ k_T^2 \\tilde{H}_g \/ (Q^2 H_g )$,\nwhere $k_T$ is the transverse momentum of the quark and the antiquark.\n\nUp to now little experimental information has been available on the\ndouble spin asymmetries for exclusive leptoproduction of vector mesons. \nThe first observation of a non-zero asymmetry $A_1^{\\rho }$ in polarised\nelectron--proton deep-inelastic scattering was reported by the HERMES experiment \n\\cite{HERMES01}.\nIn the deep inelastic region $(0.8 < Q^2 < 3~(\\mathrm{GeV}\/c)^2)$\nthe measured asymmetry is equal to 0.23 $\\pm$ 0.14 (stat) $\\pm$ 0.02 (syst)\n\\cite{HERMES03},\nwith little dependence on the kinematical variables. In contrast, for the `quasi-real photoproduction' data, with \n$\\langle Q^2\\rangle = 0.13~(\\mathrm{GeV}\/c)^2$, the asymmetry for the proton target is consistent with zero.\nOn the other hand the measured asymmetry $A_1^{\\rho }$ for the polarised deuteron target and the asymmetry $A_1^{\\phi }$ for exclusive production of the $\\phi $\nmeson \neither on polarised protons or deuterons\n are consistent with zero both in the deep inelastic and in the \nquasi-real photoproduction regions\n \\cite{HERMES03}.\n\nThe HERMES result indicating a non-zero $A_1^{\\rho }$ for the proton\ntarget differs from the unpublished result of similar measurements by \nthe SMC experiment \\cite{ATrip-1} at comparable values of $Q^2$\\ but at\nabout three times higher values of the photon-nucleon centre of mass energy $W$, i.e.\\ at smaller \\xBj. The SMC measurements of \n\\ALL\\ \nin several bins of $Q^2$ are consistent with zero\nfor both proton and deuteron targets. \n\n\\section{The experimental set-up}\n \\label{Sec_exper}\n\nThe experiment \\cite{setup} was performed with\nthe high intensity positive muon beam from the CERN M2 beam line.\nThe $\\mu ^{+}$ beam intensity is $2 \\cdot 10^8$ per spill of 4.8~s with a cycle time of\n16.8~s. The average beam energy is 160~GeV and the momentum spread is $\\sigma _{p}\/p = 0.05$.\nThe momentum of each beam muon is measured upstream of the experimental area in a beam\nmomentum station consisting of several planes of scintillator strips or scintillating fibres\nwith a dipole magnet in between. The precision of the momentum determination is typically\n$\\Delta p\/p \\leq 0.003$. The $\\mu ^{+}$ beam is naturally polarised by the weak decays \nof the parent\nhadrons. The polarisation of the muon varies with its energy and the \naverage polarisation is $-0.76$.\n\nThe beam traverses the two cells of the polarised target, which are placed one after the other; \neach cell is 60~cm long and 3~cm in diameter, and the cells are separated by 10~cm. 
\nThe target cells are filled with $^{6}$LiD, which is used as polarised deuteron target material\nand is longitudinally polarised by dynamic nuclear polarisation (DNP).\nThe two cells are polarised in opposite directions so that data from both spin directions\nare recorded at the same time. The typical values of polarisation are about 0.50.\nA mixture of liquid $^3$He\nand $^4$He, used to refrigerate the target, and a small amount of heavier nuclei are also\npresent in the target. \nThe spin directions in the two target cells are reversed every 8 hours by rotating\nthe direction of the magnetic field in the target. \nIn this way fluxes and acceptances cancel in the calculation of spin asymmetries, provided that\nthe ratio of acceptances of the two cells remains unchanged after the reversal. \n\nThe COMPASS spectrometer is designed to reconstruct the scattered muons\nand the produced hadrons in wide momentum and angular ranges.\nIt is divided into two stages with two dipole magnets, SM1 and SM2. The first magnet, \nSM1, accepts charged particles of momenta larger than 0.4~GeV\/{\\it c}, and the second one, SM2, those\nlarger than 4~GeV\/{\\it c}. The angular acceptance of the spectrometer is limited by\nthe aperture of the polarised target magnet. For the upstream\nend of the target it is $\\pm 70$~mrad.\n\nTo match the expected particle flux at various locations in the spectrometer, COMPASS\nuses various tracking detectors. Small-angle tracking is provided by stations of\nscintillating fibres, silicon detectors, micromesh gaseous chambers and gas electron\nmultiplier chambers. Large-angle tracking devices are multiwire proportional chambers,\ndrift chambers and straw detectors. Muons are identified in large-area mini drift \ntubes and drift tubes placed downstream of hadron absorbers. Hadrons are detected by\ntwo large iron-scintillator sampling calorimeters installed in front of the absorbers\nand shielded to avoid electromagnetic contamination.\nThe identification of charged particles is possible with a RICH detector, although\nin this paper we have not utilised the information from the RICH.\n\nThe data recording system is activated by various triggers indicating the presence\nof a scattered muon and\/or an energy deposited by hadrons\nin the calorimeters. In addition to the inclusive trigger, in which the scattered muon\nis identified by coincidence signals in the trigger hodoscopes, several semi-inclusive\ntriggers were used. They select events fulfilling the requirement to detect\nthe scattered muon together with the energy deposited in the hadron calorimeters exceeding\na given threshold. In 2003 the acceptance was further extended towards high $Q^2$\nvalues by the addition of a standalone calorimetric trigger in which no condition \nis set for the scattered muon. \nThe COMPASS trigger system allows us to cover a wide range of $Q^2$, from quasi-real\nphotoproduction to deep inelastic interactions.\n \nA more detailed description of the COMPASS apparatus can be found in Ref.~\\cite{setup}.\n\n\\section{Event sample}\n \\label{Sec_sample}\n\nFor the present analysis the whole data sample taken in 2002 and 2003 with the \nlongitudinally polarised target is used. For an event to be accepted for further analysis it is required to originate in the target and to have a reconstructed \nbeam track, a\nscattered muon track, and only two additional tracks \nof oppositely charged hadrons associated with the primary vertex. 
\nThe fluxes of beam muons passing through each target cell are equalised using\nappropriate cuts on the position and angle of the beam tracks.\n\nThe charged\npion mass hypothesis is assigned to each hadron track and the invariant mass of two\npions, $m_{\\pi \\pi}$, calculated. A cut on the invariant mass of two pions, $0.5 < m_{\\pi \\pi} < 1~\\mathrm{GeV}\/c^2$, is applied to select\nthe $\\rho ^0$.\nAs slow recoil target particles are not detected, in order to select\nexclusive events we use the cut on the missing energy, $-2.5 < E_{miss} < 2.5~\\mathrm{GeV}$, and on the transverse momentum of $\\rho^0$ with respect to the direction of\nvirtual photon, $p_t^2 < 0.5~(\\mathrm{GeV}\/c)^2$. Here $E_{miss} = (M^{2}_{X} - M^{2}_{p})\/2M_{p}$, where $M_X$ is the missing mass of the unobserved recoiling\nsystem and $M_p$ is the proton mass.\nCoherent interactions on the\ntarget nuclei are removed by a cut $p_t^2 > 0.15~(\\rm{GeV}\/c)^2$. \nTo avoid large corrections for acceptance and misidentification of\nevents, additional cuts \n$\\nu > 30~\\mathrm{GeV}$ and $E_{\\mu'} > 20~\\mathrm{GeV}$ are applied.\n\n\nThe distributions of $m_{\\pi \\pi}$, $E_{miss}$ and $p_t^2$ are shown in \nFig.~\\ref{hadplots}. Each plot is obtained applying all cuts except\nthose corresponding to the displayed variable. \nOn the left top panel of Fig.~\\ref{hadplots} a clear peak of the \\rn\\ resonance, centred at 770~MeV\/$c^2$, is visible on the top of the small contribution of\nbackground of the non-resonant \\pip\\pin\\ pairs. Also the skewing of the resonance peak towards smaller values\nof \\mpipi, due to an interference with the non-resonant background, is noticeable. A small bump below 0.4~GeV\/$c^2$\nis due to assignment of the charged pion mass to the kaons from decays of $\\phi$ mesons.\nThe mass cuts eliminate the non-resonant background outside of the \\rn\\ peak, as well as the contribution of $\\phi$\nmesons.\n\n\\begin{figure}[t]\n \\begin{center}\n \\epsfig{figure=mpipi_0203.eps,height=5cm,width=7.5cm}\n \\epsfig{figure=Emiss_0203.eps,height=5cm,width=7.5cm}\n \\epsfig{figure=ptsq_0203.eps,height=5cm,width=7.5cm}\n \\caption{Distributions of \\mpipi\\ (top left), \\Emi\\ (top right) \nand \\pts\\ (bottom) for the exclusive sample. \nThe arrows show cuts imposed on each variable to define the final sample.}\n \\label{hadplots}\n \\end{center}\n\\end{figure}\n \nOn the right top panel of the figure the peak at $E_{miss} \\approx 0$ is the signal of \nexclusive \\rn\\ production. The width of the peak, $\\sigma \\approx 1.1~\\mathrm{GeV}$,\nis due to the spectrometer resolution. Non-exclusive events, where in addition to the recoil nucleon other undetected hadrons are produced, appear at $E_{miss} > 0$.\nDue to the finite resolution, however, they are not resolved from the exclusive peak.\nThis background\nconsists of two components: the double-diffractive events where additionally to \\rn\\ an excited nucleon state is produced in the\nnucleon vertex of reaction (\\ref{phorho}), and events with semi-inclusive \\rn\\ production, in which other\nhadrons are produced but escape detection. \n\nThe $p_t^2$ distribution shown on the bottom panel of the figure\nindicates a contribution from coherent production on target \nnuclei at small $p_t^2$ values. 
A three-exponential fit to this distribution \nwas performed, which also indicates \na contribution of non-exclusive background increasing\nwith $p_t^2$.\nTherefore to select the sample of exclusive incoherent \\rn\\ production, \nthe aforementioned $p_t^2$ cuts, indicated by arrows, were applied. \n\nAfter all selections the final sample consists of about 2.44 million events.\nThe distributions of $Q^2$, \\xBj\\ and $W$ are shown in Fig.~\\ref{kinplots}.\nThe data cover a wide range in $Q^2$\\ and \\xBj\\ which extends towards the small values \nby almost two orders of magnitude compared to the similar studies reported in \nRef.~\\cite{HERMES03}. The sharp edge of the $W$ distribution at the low $W$ values\nis a consequence of the cut applied on $\\nu $. For this sample \n$\\langle W \\rangle$ is equal to 10.2~GeV and $\\langle p_{t}^{2}\n\\rangle = 0.27~(\\mathrm{GeV}\/c)^2$.\n\n\\begin{figure}[t]\n \\begin{center}\n \\epsfig{figure=Qsq_0203_incoh_liny.eps,height=5cm,width=7.5cm}\n \\epsfig{figure=Qsq_0203_incoh_logy.eps,height=5cm,width=7.5cm}\n \\epsfig{figure=xBj_0203_incoh.eps,height=5cm,width=7.5cm}\n \\epsfig{figure=Wene_0203_incoh.eps,height=5cm,width=7.5cm}\n \n \\caption{Distributions of the kinematical variables for the final sample: $Q^2$\\ with linear and\n logarithmic vertical axis scale (top left and right panels respectively), \\xBj\\ (bottom left), and the energy\n $W$ (bottom right).}\n \\label{kinplots}\n \\end{center}\n\\end{figure}\n\n\\section{Extraction of asymmetry \\Aor}\n \\label{Sec_extract} \n\nThe cross section asymmetry $A_{LL} = \n(\\sigma_{\\uparrow \\downarrow} - \\sigma_{\\uparrow \\uparrow})\/\n(\\sigma_{\\uparrow \\downarrow} + \\sigma_{\\uparrow \\uparrow})$ for reaction\n(\\ref{murho}), for antiparallel ($\\uparrow \\downarrow $) and parallel \n($\\uparrow \\uparrow $) spins of\nthe incoming muon and the target nucleon, is related to the virtual-photon\nnucleon asymmetry $A_{1}^{\\rho}$ by\n\\begin{equation}\n A_{LL} = D \\left( A_{1}^{\\rho} + \\eta A_{2}^{\\rho} \\right) , \\label{A1rALL}\n\\end{equation}\nwhere the factors $D$ and $\\eta $ depend on the event kinematics and \n$A_{2}^{\\rho}$ is related to\nthe interference cross section for exclusive production by \nlongitudinal and transverse virtual photons. \nAs the presented results extend into the range of very small $Q^2$, \nthe exact formulae for the depolarisation factor $D$ and kinematical \nfactor $\\eta$ \\cite{JKir}\nare used without neglecting terms proportional to the lepton mass squared $m^2$.\nThe depolarisation factor is given by\n\\begin{equation}\n D(y, Q^{2}) = \n \\frac{\n y \\left[ (1 + \\gamma^{2} y\/2) (2 - y) - \n 2 y^{2} m^{2} \/ Q^{2} \\right]\n }{\n y^{2} (1 - 2 m^{2} \/ Q^{2}) (1 + \\gamma^{2}) + \n 2 (1 + R) (1 - y - \\gamma^{2} y^{2}\/4)\n }, \\label{depf}\n\\end{equation}\nwhere $R = \\sigma_{L} \/ \\sigma_{T}$, $\\sigma_{L(T)}$ is the cross section for reaction (\\ref{phorho})\ninitiated by longitudinally (transversely) polarised virtual photons, $y = \\nu \/E_{\\mu }$ is the fraction of the muon energy lost, and $\\gamma^{2} = Q^{2} \/ \\nu^{2}$.\nThe kinematical factor $\\eta (y, Q^{2})$ is the same as for the inclusive \nasymmetry.\n\nThe asymmetry $A_{2}^{\\rho}$ obeys the positivity limit $A_{2}^{\\rho} < \\sqrt{R}$, analogous to the one for the inclusive case.\nFor $Q^2 \\leq 0.1~(\\mathrm{GeV}\/c)^2$ the ratio $R$ for the reaction (\\ref{phorho}) is\nsmall, cf.\\ Fig.~\\ref{R-pict}, and the positivity limit constrains $A_{2}^{\\rho}$ to small values. 
Although for larger $Q^2$\\ the ratio $R$ for the process (\\ref{phorho}) increases\nwith $Q^2$, because of small values of $\\eta $ the product $\\eta \\sqrt{R}$ is small in the whole $Q^2$\\ range of our sample.\nTherefore the second term in Eq.~\\ref{A1rALL} can be neglected, so that\n\\begin{equation}\n A_{1}^{\\rho} \\simeq \\frac{1}{D} A_{LL},\n\\end{equation}\nand the effect of this approximation is included in the systematic uncertainty of \\Aor.\n\\begin{figure}[thb]\n\\begin{center}\n \\epsfig{file=RLT.eps,height=5cm,width=7.4cm}\n \\caption{The ratio $R= \\sigma_L\/\\sigma_T$ as a function of $Q^2$\\ measured in the E665 experiment.\n The curve is a fit to the data described in the text.}\n \\label{R-pict}\n \\end{center}\n\\end{figure}\n\nThe number of events $N_i$ collected from a given target cell in a given time interval\nis related to the spin-independent cross section $\\bar{\\sigma}$ for reaction (\\ref{phorho})\nand to the asymmetry $A_1^{\\rho}$ by\n\\begin{equation}\nN_i = a_i \\phi _i n_i \\bar{\\sigma } (1+P_B P_T f D A_1^{\\rho } ),\n\\label{nevents}\n\\end{equation}\nwhere $P_B$ and $P_T$ are the beam and target polarisations, $\\phi _i$ is the incoming\nmuon flux, $a_i$ the acceptance for the target cell, $n_i$ the corresponding number of target nucleons, and $f$ the target dilution factor.\nThe asymmetry is extracted from the data sets taken before and after a reversal of the\ntarget spin directions. The four relations of Eq.~\\ref{nevents}, corresponding to the two \ncells ($u$ and $d$) and the two spin orientations (1 and 2), lead to a second-order\nequation in \\Aor\\ for the ratio $(N_{u,1}N_{d,2}\/N_{d,1}N_{u,2})$. Here fluxes cancel out\nas well as acceptances, if the ratio of acceptances for the two cells is the same before\nand after the reversal \\cite{SMClong}. In order to minimise the statistical error\nall quantities used in the asymmetry calculation are evaluated event by event with the\nweight factor $w=P_BfD$. The polarisation of the beam muon, $P_B$, is obtained from\na simulation of the beam line and parameterised as a function of the beam momentum. The target polarisation is not \nincluded in the event weight $w$ because it may vary in time and generate false\nasymmetries. 
An average $P_T$ is used for each target cell and each spin orientation.\n\n The ratio $R$, which enters the formula for $D$ and strongly depends on $Q^2$\\ for reaction (\\ref{phorho}), was calculated on an event-by-event basis\nusing the parameterisation \n\\begin{equation}\n R(Q^{2}) = \n a_{0} (Q^{2})^{a_{1}} ,\n\\end{equation}\nwith $a_{0} = 0.66 \\pm 0.05$ and $a_{1} = 0.61 \\pm 0.09$.\nThe parameterisation was obtained by the Fermilab E665 experiment from a fit\nto their $R$ measurements for exclusive $\\rho ^0$ muoproduction on protons \\cite{E665-1}.\nThese are shown in Fig.~\\ref{R-pict} together with the fitted $Q^2$-dependence.\nThe preliminary COMPASS results on $R$ for the incoherent exclusive $\\rho ^0$\nproduction on the nucleon \\cite{sandacz}, which cover a broader kinematic region in $Q^2$,\nagree reasonably well with this parameterisation.\nThe uncertainty of $a_{0}$ and $a_{1}$ is included in the \nsystematic error of $A_{1}^{\\rho}$.\n\nThe dilution factor $f$ gives the fraction of events of reaction (\\ref{phorho}) \noriginating from nucleons in polarised deuterons inside the target material.\nIt is calculated event-by-event using the formula\n\\begin{equation}\n f = C_1 \\cdot f_{0} = C_1 \\cdot \n \\frac{\n n_{\\rm D}\n }{\n n_{\\rm D} + \\Sigma_{A} n_{\\rm A} \n (\\tilde{\\sigma}_{\\rm A} \/ \\tilde{\\sigma}_{\\rm D})\n }. \n \\vspace*{3mm}\n\\end{equation}\nHere $n_{\\rm D}$ and $n_{\\rm A}$ denote the numbers of nucleons in the deuteron and in a nucleus of atomic mass $A$\nin the target, and \n$\\tilde{\\sigma}_{\\rm D}$ and $\\tilde{\\sigma}_{\\rm A}$ are the cross sections\n per nucleon for reaction (\\ref{phorho}) occurring on the deuteron and on the nucleus of atomic mass\n$A$, respectively. The sum runs over all nuclei present in the COMPASS target.\nThe factor $C_1$ takes into account that there are two polarised deuterons in\nthe $^6$LiD molecule,\nas the $^6$Li nucleus is in a first approximation composed of a deuteron and\nan $\\alpha $ particle.\n\nThe measurements of $\\tilde{\\sigma}_{\\rm A} \/ \\tilde{\\sigma}_{\\rm D}$ \nfor incoherent exclusive $\\rho ^0$ production come from the NMC \\cite{NMC}, E665 \\cite{E665-2} and early experiments on \\rn\\ photoproduction \\cite{BPSY}. They were fitted in Ref.~\\cite{ATrip-2}\nwith the formula:\n\\begin{equation}\n \\tilde{\\sigma}_{\\rm A} = \n \\sigma_{\\rm p} \\cdot A^{\\alpha(Q^{2}) - 1} , \\hspace*{1cm}\n {\\rm with}\\ \\ \\alpha(Q^{2}) - 1 = -\\frac{1}{3} \\exp\\{-Q^{2}\/Q^{2}_{0}\\},\n \\label{alpha}\n\\end{equation}\nwhere $\\sigma_{\\rm p}$ is the cross section for reaction (\\ref{phorho})\non the free proton.\nThe value of the fitted parameter $Q^{2}_{0}$ is equal to $9 \\pm 3~(\\mathrm{GeV}\/c)^2$.\nThe measured values of the parameter $\\alpha $ and the fitted curve $\\alpha (Q^2)$\nare shown on the left panel of \nFig.~\\ref{alpha-fig} taken from Ref.~\\cite{ATrip-2}.\nOn the right panel of the figure the average value of $f$ is plotted for\nthe various $Q^2$ bins used in the present analysis. The values of $f$\nare equal to about 0.36 in most of the $Q^2$ range, rising to about 0.38 at the\nhighest $Q^2$.\n\n\\begin{figure}[t]\n\\begin{center}\n \\epsfig{file=alpha.eps,height=5cm,width=7.5cm}\n \\epsfig{file=f_Qsq_0203.eps,height=5cm,width=7.5cm}\n \\caption{(Left) Parameter $\\alpha$ of Eq.~\\ref{alpha} as a function of $Q^2$\\ (from Ref.~\\cite{ATrip-2}). The experimental points and the fitted curve are shown. See text for details. 
(Right) The dilution factor $f$ as a function of $Q^2$.}\n \\label{alpha-fig}\n \\end{center}\n\\end{figure}\n\n The radiative corrections (RC) have been neglected in the present analysis, in particular in the calculation of $f$, because\nthey are expected to be small for reaction (\\ref{murho}). They were evaluated \\cite{KKurek} to be\nof the order of 6\\% for the NMC exclusive \\rn\\ production analysis.\nThe small values of RC are mainly due to the requirement of event exclusivity via cuts on \\Emi\\ and $p_t^2$,\nwhich largely suppress the dominant external photon radiation.\nThe internal (infrared and virtual) RC were\nestimated in Ref.~\\cite{KKurek} to be of the order of 2\\%. \n\n\\section{Systematic errors}\n \\label{Sec_syst}\n\nThe main systematic uncertainty of \\Aor\\ comes from an estimate of possible false asymmetries. \nIn order to improve the accuracy of this estimate,\nin addition to the standard sample of incoherent events, a second sample\nwas selected by changing the $p_t^2$ cuts to\n\\begin{equation}\n 0 < p_{t}^{2} < 0.5~(\\mathrm{GeV}\/c)^{2}, \\label{ptsext}\n\\end{equation} and keeping all the remaining selections and cuts the same\nas for the `incoherent sample'.\nIn the following it will be\n referred to as the `extended $p_t^2$ sample'.\nSuch an extension of the \\pts\\ range allows one to obtain a sample which is\nabout\nfive times larger than the incoherent sample. \nHowever, in addition to incoherent events such a sample contains a large fraction of events originating from coherent \\rn\\ production.\nTherefore, for the estimate of the dilution factor $f$ a different nuclear dependence of the\nexclusive cross section was used, applicable for the sum of coherent and incoherent\ncross sections \\cite{NMC}.\nThe physics asymmetries \\Aor\\ for both samples are consistent within statistical errors.\n\nPossible false experimental asymmetries were searched for by modifying the selection\nof data sets used for the asymmetry calculation. The grouping of the data into\nconfigurations with opposite target polarisation was varied from large samples,\ncovering at most two weeks of data taking, to about 100 small samples, taken in time\nintervals of the order of 16 hours.\nA statistical test was performed on the distributions of asymmetries obtained\nfrom these small samples. In each of the $Q^2$\\ and \\xBj\\ bins the dispersion of the values of\n$A_{1}^{\\rho}$ around their mean agrees with the statistical error.\nTime-dependent effects which would lead to a broadening of these distributions\nwere thus not observed. Allowing the dispersion of $A_{1}^{\\rho}$ to vary within its\ntwo standard deviations, we \nobtain for each bin an upper bound for the systematic error arising from\ntime-dependent effects\n\\begin{equation}\n \\sigma_{\\rm falseA,tdep} < \n 0.56~\\sigma_{\\rm stat}.\n\\end{equation}\nHere $\\sigma _{\\rm stat}$ is the statistical error on $A_{1}^{\\rho}$ for the extended\n$p_t^2$ sample.\nThe uncertainty on the estimates of possible false asymmetries due to the time-dependent \neffects is the dominant contribution to the total systematic error in most of the kinematical\nregion.\n\nAsymmetries for configurations where spin effects cancel out were calculated to\ncheck the cancellation of effects due to fluxes and acceptances. They were found compatible\nwith zero within the statistical errors. 
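A convenient way to formalise the statistical test on the small-sample asymmetries described above (our notation; the estimator actually used in the analysis may differ in detail) is a $\\chi ^{2}$ comparison of the sub-sample asymmetries $A_i$, with statistical errors $\\sigma _i$, against their weighted mean,\n\\begin{equation}\n \\chi ^{2} = \\sum_{i}\\frac{(A_i-\\bar{A})^{2}}{\\sigma _i^{2}}\\ , \\qquad \\bar{A} = \\frac{\\sum_{i}A_i\/\\sigma _i^{2}}{\\sum_{i}1\/\\sigma _i^{2}}\\ ;\n\\end{equation}\na value of $\\chi ^{2}$ compatible with the number of degrees of freedom signals the absence of time-dependent effects beyond statistical fluctuations.\n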
Asymmetries obtained with different\nsettings of the microwave (MW) frequency, used for DNP, were compared in order to test possible\neffects related to the orientation of the target magnetic field. \nThe results for the extended $p_t^2$ sample tend to show that there is a small\ndifference between asymmetries for the two MW configurations.\nHowever, because the numbers of events of the data samples taken with each MW setting \nare approximately balanced, the effect of this difference on \\Aor\\ is\nnegligible for the total sample.\n \nThe systematic error on $A_{1}^{\\rho}$ also contains an overall scale uncertainty of\n6.5\\% due to uncertainties on $P_B$ and $P_T$. The uncertainty of the \nparameterisation of $R(Q^2)$ \naffects the depolarisation factor $D$. The uncertainty of the dilution factor\n$f$ is mostly due to the uncertainty of the parameter $\\alpha (Q^2)$, which takes into account\nnuclear effects in the incoherent $\\rho ^0$ production. \nThe neglect of the $A_2^{\\rho }$ term mainly affects the highest bins of $Q^2$\\ and \\xBj.\n\nAnother source of systematic errors is due to the contribution of\nthe non-exclusive background to our sample.\nThis background originates from two sources.\nThe first one is due to the production of\n\\rn\\ accompanied by the dissociation of the target nucleon,\nand the second one is the production of \\rn\\ in inclusive scattering.\nIn order to evaluate the amount of background in the sample of exclusive\nevents it is necessary to\ndetermine the $E_{miss}$ dependence for the non-exclusive background\nin the region under the exclusive peak (cf.\\ Fig.~\\ref{hadplots}).\nFor this purpose complete Monte Carlo simulations of the experiment were \nused, with events \ngenerated by either the PYTHIA 6.2 or LEPTO 6.5.1 generators.\nEvents generated with LEPTO come only from deep inelastic scattering and cover the range of $Q^2 > 0.5~(\\mathrm{GeV}\/c)^2$. \nThose generated with PYTHIA cover the whole kinematical range of the experiment and include exclusive production of vector mesons and processes with diffractive excitation of the target nucleon or the vector meson, in addition to inelastic\nproduction.\n\nThe generated MC events were reconstructed\nand selected for the analysis using the same procedure as for the data.\nIn each bin of $Q^2$\\ the $E_{miss}$ distribution for the MC was \nnormalised to the corresponding one for the data\nin the range $E_{miss} > 7.5~\\mathrm{GeV}$. Then the normalised MC\ndistribution was used to estimate the number of background events under the exclusive peak in the data. \nThe fraction of background events in the sample of\nincoherent exclusive $\\rho ^0$ production was estimated to be about $0.12 \\pm 0.06$\nin most of the kinematical range, except in the largest $Q^2$\\ region, where it is about\n$0.24 \\pm 0.12$. \nThe large uncertainties of these fractions reflect the differences between \nestimates from LEPTO and PYTHIA in the region where they overlap. In \nthe case of PYTHIA the\nuncertainties on the cross sections for diffractive photo- and\nelectroproduction of vector mesons also contribute.\nFor events generated with PYTHIA the $E_{miss}$ \ndistributions for various physics processes could be studied separately. 
\nIt was found\nthat events of \\rn\\ production with an\nexcitation of the target nucleon into $N^*$ resonances\nof small mass, $M < 2~\\mathrm{GeV}\/c^2$, cannot be resolved from the exclusive\npeak and therefore were not included \nin the estimates of number of background events.\n\nAn estimate of the asymmetry \\Aor\\ for the background was obtained using a non-exclusive sample,\nwhich was selected with the standard cuts used in this analysis,\nexcept the cut on $E_{miss}$ which was modified to $E_{miss} > 2.5$~GeV. In different high-$E_{miss}$ bins \\Aor\\ for this sample was\nfound compatible with zero.\n\nBecause no indication of a non-zero \\Aor\\ for the background was found,\nand also due to a large uncertainty of the estimated amount of background \nin the exclusive sample, no background corrections were made.\nInstead, the effect of background was treated as a source\nof systematic error. Its contribution \nto the total systematic error was not significant in most of the kinematical\nrange, except for the highest $Q^2$ and $x$.\n\nThe total systematic error on \\Aor\\ was obtained as a quadratic sum of the errors from\nall discussed sources. \nIts values for each $Q^2$\\ and \\xBj\\ bin are given\nin Tables~\\ref{binsq2-1} and \\ref{binsx-1}. The total systematic error \namounts to about 40\\% of the \nstatistical error for most of the kinematical range. Both errors become\ncomparable in the highest bin of $Q^2$. \n\n\\section{Results}\n \\label{Sec_resu}\n\nThe COMPASS results on $A^{\\rho}_1$ are shown as a function of $Q^2$\\ and \\xBj\\ \nin Fig.~\\ref{A1-Com} and listed in \nTables~\\ref{binsq2-1} and \\ref{binsx-1}.\nThe statistical errors are represented by vertical bars and the total systematic\nerrors by shaded bands.\n\n\\begin{figure}[ht]\n\\begin{center}\n \\epsfig{file=A1_Qsq_incoh_cons_0203.eps,height=5cm,width=7.5cm}\n \\epsfig{file=A1_xBj_incoh_cons_0203.eps,height=5cm,width=7.5cm}\n \\caption{\\Aor\\ as a function of $Q^2$\\ (left) and \\xBj\\ (right) from the present analysis.\nError bars correspond to statistical errors, while bands at the bottom represent the systematical errors.}\n \\label{A1-Com}\n \\end{center}\n\\end{figure}\n\\begin{table}[ht]\n \\caption{Asymmetry $A_1^{\\rho }$ as a function of $Q^2$. 
Both the statistical errors (first) and the total systematic errors (second) are listed.}\n \\label{binsq2-1}\n \\small\n \\begin{center}\n \\begin{tabular}{||c|c|c|c|c||}\n \\hline\n \\hline\n \\raisebox{0mm}[4mm][2mm]{$Q^2$\\ range}\\ \\ &\n $\\langle Q^{2} \\rangle$ [$(\\mathrm{GeV}\/c)^2$] & $\\langle x \\rangle$ &\n $\\langle \\nu \\rangle$ [GeV] & $A_1^{\\rho }$ \\\\ \\hline\n \\hline\n \\raisebox{0mm}[4mm][2mm] \\ \\ $0.0004 - 0.005~$ & \\ \\ \\ 0.0031\\ \\ \\ & $4.0 \\cdot 10^{-5}$ & 42.8 & $-0.030 \\pm 0.045 \\pm 0.014$\\\\\n \\hline\n \\raisebox{0mm}[4mm][2mm] \\ \\ $0.005 - 0.010$ & ~0.0074 & $8.4 \\cdot 10^{-5}$ & 49.9 & $~~0.048 \\pm 0.038 \\pm 0.013$\\\\\n \\hline\n \\raisebox{0mm}[4mm][2mm] \\ \\ $0.010 - 0.025$ & 0.017 & $1.8 \\cdot 10^{-4}$ & 55.6 & $~~0.063 \\pm 0.026 \\pm 0.014$\\\\\n \\hline\n \\raisebox{0mm}[4mm][2mm] \\ \\ $0.025 - 0.050$ & 0.036 & $3.7 \\cdot 10^{-4}$ & 59.9 & $-0.035 \\pm 0.027 \\pm 0.009$\\\\\n \\hline\n \\raisebox{0mm}[4mm][2mm] \\ \\ $0.05 - 0.10$ & 0.072 & $7.1 \\cdot 10^{-4}$ & 62.0 & $-0.010 \\pm 0.028 \\pm 0.008$\\\\\n \\hline\n \\raisebox{0mm}[4mm][2mm] \\ \\ $0.10 - 0.25$ & 0.16~ & 0.0016 & 62.3 & $-0.019 \\pm 0.029 \\pm 0.009$\\\\\n \\hline\n \\raisebox{0mm}[4mm][2mm] \\ \\ $0.25 - 0.50$ & 0.35~ & 0.0036 & 60.3 & $~~0.016 \\pm 0.045 \\pm 0.014$\\\\\n \\hline\n \\raisebox{0mm}[4mm][2mm] \\ \\ $0.5 - 1~~$ & 0.69~ & 0.0074 & 58.6 & $~~0.141 \\pm 0.069 \\pm 0.030$\\\\\n \\hline\n \\raisebox{0mm}[4mm][2mm] \\ \\ $1 - 4$ & 1.7~~ & 0.018~~ & 59.7 & $~~0.000 \\pm 0.098 \\pm 0.035$\\\\\n \\hline\n \\raisebox{0mm}[4mm][2mm] \\ \\ $~4 - 50$ & 6.8~~ & 0.075~~ & 55.9 & $-0.85 \\pm 0.50 \\pm 0.39 $\\\\\n \\hline\n \\hline\n \\end{tabular}\n \\end{center}\n \\normalsize\n\\end{table}\n\\begin{table}[ht]\n \\caption{Asymmetry $A_1^{\\rho }$ as a function of \\xBj. 
Both the statistical errors (first) and the total systematic errors (second) are listed.}\n \\label{binsx-1}\n \\small\n \\begin{center}\n \\begin{tabular}{||c|c|c|c|c||}\n \\hline\n \\hline\n \\raisebox{0mm}[4mm][2mm]{\\xBj\\ range}\\ \\ & $\\langle x \\rangle$ & $\\langle Q^{2} \\rangle$ [$(\\mathrm{GeV}\/c)^2$] &\n $\\langle \\nu \\rangle$ [GeV] & $A_1^{\\rho }$ \\\\ \\hline\n \\hline\n \\raisebox{0mm}[4mm][2mm] \\ \\ $8 \\cdot 10^{-6} - 1 \\cdot 10^{-4}$ & $5.8 \\cdot 10^{-5}$ & ~0.0058 & 51.7 & $~~0.035 \\pm 0.026 \\pm 0.011$ \\\\\n \\hline\n \\raisebox{0mm}[4mm][2mm] \\ \\ $1 \\cdot 10^{-4} - 2.5 \\cdot 10^{-4}$ & $1.7 \\cdot 10^{-4}$ & 0.019 & 59.7 & $~~0.036 \\pm 0.024 \\pm 0.010$\\\\\n \\hline\n \\raisebox{0mm}[4mm][2mm] \\ \\ $2.5 \\cdot 10^{-4} - 5 \\cdot 10^{-4}$ & $3.6 \\cdot 10^{-4}$ & 0.041 & 61.3 & $-0.039 \\pm 0.027 \\pm 0.012$\\\\\n \\hline\n \\raisebox{0mm}[4mm][2mm] \\ \\ $5 \\cdot 10^{-4} - 0.001$ & $7.1 \\cdot 10^{-4}$ & 0.082 & 60.8 & $-0.010 \\pm 0.030 \\pm 0.010$\\\\\n \\hline\n \\raisebox{0mm}[4mm][2mm] \\ \\ $0.001 - 0.002$ & 0.0014 & 0.16~~ & 58.6 & $-0.005 \\pm 0.036 \\pm 0.013$ \\\\\n \\hline\n \\raisebox{0mm}[4mm][2mm] \\ \\ $0.002 - 0.004$ & 0.0028 & 0.29~~ & 54.8 & $ ~~0.032 \\pm 0.050 \\pm 0.019$ \\\\\n \\hline\n \\raisebox{0mm}[4mm][2mm] \\ \\ $0.004 - 0.01~~$ & 0.0062 & 0.59~~ & 50.7 & $ ~~0.019 \\pm 0.069 \\pm 0.026$ \\\\\n \\hline\n \\raisebox{0mm}[4mm][2mm] \\ \\ $0.01 - 0.025$ & 0.015~~ & 1.3~~~~ & 47.5 & $-0.03 \\pm 0.14 \\pm 0.06$ \\\\\n \\hline\n \\raisebox{0mm}[4mm][2mm] \\ \\ $0.025 - 0.8~~~~$ & 0.049~~ & 3.9~~~~ & 43.8 & $-0.27 \\pm 0.38 \\pm 0.19$ \\\\\n \\hline\n \\hline\n \\end{tabular}\n \\end{center}\n \\normalsize\n\\end{table}\n\nThe wide range in $Q^2$\ncovers four orders of magnitude from $3 \\cdot 10^{-3}$ to 7~$(\\mathrm{GeV}\/c)^2$. The domain in \\xBj\\ which is strongly correlated\nwith $Q^2$, varies from $5 \\cdot 10^{-5}$ to about 0.05 (see Tables for more details).\nFor the whole kinematical range the \\Aor\\\nasymmetry measured by COMPASS is consistent with zero. As discussed in the introduction,\nthis indicates that the role of unnatural parity exchanges,\nlike $\\pi$- or $A_{1}$-Reggeon exchange, \nis small in that kinematical domain, which is to be expected if diffraction\nis the dominant process for reaction (\\ref{phorho}). \n\nIn Fig.~\\ref{A1-Com-Her} the COMPASS results are compared to the HERMES results on $A^{\\rho}_1$ obtained\non a deuteron target \\cite{HERMES03}.\nNote that the lowest $Q^2$\\ and \\xBj\\ HERMES points, referred to as `quasi-photoproduction', come from measurements where the kinematics of the small-angle scattered \nelectron was not measured but estimated from a MC simulation. This is in contrast to COMPASS,\nwhere scattered muon kinematics is measured even at the smallest $Q^2$.\n\\begin{figure}[ht]\n\\begin{center}\n \\epsfig{file=A1_Qsq_incoh_cons_0203_wHermes.eps,height=5cm,width=7.5cm}\n \\epsfig{file=A1_xBj_incoh_cons_0203_wHermes.eps,height=5cm,width=7.5cm}\n \\caption{\\Aor\\ as a function of $Q^2$\\ (left) and \\xBj\\ (right) from the present analysis (circles)\ncompared to HERMES results on the deuteron target (triangles). For the COMPASS results inner bars represent statistical errors, while the outer bars correspond to the total error.\n For the HERMES results vertical bars represent the quadratic sum of statistical and systematic errors. 
The curve represents the prediction explained in the text.}\n \\label{A1-Com-Her}\n\\end{center}\n\\end{figure}\n\nThe results from both experiments are consistent within errors. \nThe kinematical range covered by the present analysis extends further towards small\nvalues of \\xBj\\ and $Q^2$\\ by almost two orders of \nmagnitude. In each of the two experiments $A^{\\rho}_1$ is measured\nat different average $W$, which is equal to about 10~GeV for\nCOMPASS and 5~GeV for HERMES. Thus, no significant $W$ dependence\nis observed for $A^{\\rho}_1$ on an isoscalar nucleon target.\n\nThe \\xBj\\ dependence of the measured \\Aor\\ is compared in Fig.~\\ref{A1-Com-Her} to the prediction given by Eq.~\\ref{A1-Fraas}, which\nrelates $A_1^{\\rho}$ to the asymmetry $A_{1}$ for inclusive inelastic\nlepton--nucleon scattering. To produce the curve\nthe inclusive asymmetry $A_{1}$ was parameterised as\n$A_1(x) = (x^{\\alpha } - \\gamma^{\\alpha }) \\cdot (1- e^{-\\beta x})$, where\n$\\alpha = 1.158 \\pm 0.024$, $\\beta = 125.1 \\pm 115.7$ and $\\gamma = 0.0180 \\pm 0.0038$.\nThe values of the parameters have been obtained\nfrom a fit of $A_{1}(x)$ to the world data\nfrom polarised deuteron targets \\cite{smc,e143,e155_d,smc_lowx,hermes_new,compass_a1_recent} including COMPASS measurements at very \nlow $Q^2$ and \\xBj\\ \\cite{compass_a1_lowq2}. Within the present accuracy the results on \\Aor\\ are consistent with this prediction. \n\n\nIn the highest $Q^2$\\ bin, $\\langle Q^{2} \\rangle = 6.8~(\\mathrm{GeV}\/c)^2$, in the kinematical domain of applicability of pQCD-inspired models which relate\nthe asymmetry to the spin-dependent\nGPDs for gluons and quarks (cf.\\ Introduction), \none can observe a hint of a possible nonzero asymmetry, although with a\nlarge error.\nIt should be noted that in Ref.~\\cite{ATrip-1} a negative value of $A_{LL}$ different from zero by about 2 standard deviations\nwas reported at $\\langle Q^{2} \\rangle = 7.7~(\\mathrm{GeV}\/c)^2$.\nAt COMPASS, the inclusion of the data \ntaken with the longitudinally polarised deuteron target in 2004 and 2006 will result in an\nincrease of\nstatistics by a factor of about three compared to the present analysis,\nand thus may help to clarify the issue.\n \nFor the whole $Q^2$ range future COMPASS data, to be taken with the polarised proton target, would be very valuable for checking if the role of the flavour-blind\nexchanges is indeed dominant, as expected for the Pomeron-mediated process.\n\n \n\\section{Summary}\n \\label{Sec_summ}\n\nThe longitudinal double spin asymmetry \\Aor\\ for the diffractive muoproduction of the \\rn\\ meson, $\\mu + N \\rightarrow \n\\mu + N + \\rho$, has been measured by scattering longitudinally polarised muons off longitudinally polarised deuterons from the\n$^6$LiD target and selecting incoherent exclusive $\\rho^0$ production.\nThe presented results for the COMPASS 2002 and 2003 data cover a range of energy $W$ from about 7 to 15~GeV.\n\nThe $Q^2$\\ and \\xBj\\ dependence of \\Aor\\ is presented in a wide kinematical range\n$3 \\cdot 10^{-3} \\leq Q^{2} \\leq 7~(\\mathrm{GeV}\/c)^2$ and $5 \\cdot 10^{-5} \\leq x \\leq 0.05$.\nThese results extend the range in $Q^2$\\ and \\xBj\\ downwards by two orders of magnitude \nwith respect to the existing data from HERMES.\n\nThe asymmetry $A^{\\rho}_1$ is compatible with zero in the whole \\xBj\\ and\n$Q^2$\\ range.\nThis may indicate that the role of unnatural parity exchanges like $\\pi$- or $A_{1}$-Reggeon exchange is\nsmall in that kinematical domain.\n\nThe \\xBj\\ 
dependence of measured \\Aor\\ is consistent with the prediction of Ref.~\\cite{HERMES01} which relates $A^{\\rho}_1$\nto the asymmetry $A_{1}$ for the inclusive inelastic lepton--nucleon scattering.\n\n\\section{Acknowledgements}\n \\label{Sec_ack}\n\nWe gratefully acknowledge the support of the CERN management and staff and\nthe skill and effort of the technicians of our collaborating institutes.\nSpecial thanks are due to V. Anosov and V. Pesaro for their support during the\ninstallation and the running of the experiment. This work was made possible by \nthe financial support of our funding agencies.\n\n\n\\noindent\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\nMany tasks in natural language processing, computational biology, reinforcement learning, and time series analysis rely on learning\nwith sequential data, i.e.\\ estimating functions defined over sequences of observations from training data. \nWeighted finite automata~(WFAs) and recurrent neural networks~(RNNs) are two powerful and flexible classes of models which can efficiently represent such functions.\nOn the one hand, WFAs are tractable, they encompass a wide range of machine learning models~(they can for example compute any probability distribution defined by a hidden Markov \nmodel~(HMM)~\\cite{denis2008rational} and can model the transition and observation behavior of\npartially observable Markov decision processes~\\cite{thon2015links}) and they offer appealing theoretical guarantees. In particular, \nthe so-called \n\\emph{spectral\nmethods} for learning HMMs~\\cite{hsu2009spectral}, WFAs~\\cite{bailly2009grammatical,balle2014spectral} and related models~\\cite{glaude2016pac,boots2011closing}, \nprovide an alternative to Expectation-Maximization based algorithms that is both computationally efficient and \nconsistent. \nOn the other hand, RNNs are \nremarkably expressive models --- they can represent any computable function~\\cite{siegelmann1992computational} --- and they have successfully\ntackled many practical problems in speech and audio recognition~\\cite{graves2013speech,mikolov2011extensions,gers2000learning}, but\ntheir theoretical analysis is difficult. Even though recent work provides interesting results on their\nexpressive power~\\cite{khrulkov2018expressive,yu2017long} as well as alternative training algorithms coming with learning guarantees~\\cite{sedghi2016training},\nthe theoretical understanding of RNNs is still limited.\n\n\\footnotetext{\\footnotemark[1] Mila \\footnotemark[2] Universit\u00e9 de Montr\u00e9al \\footnotemark[3] McGill University}\n\n\\renewcommand*{\\thefootnote}{\\arabic{footnote}}\n\nIn this work, we bridge a gap between these two classes of models by unraveling a fundamental connection between WFAs and second-order RNNs~(2-RNNs): \n\\textit{when considering input sequences of discrete symbols, 2-RNNs with linear activation functions and WFAs are one and the same}, i.e.\\ they are expressively\nequivalent and there exists a one-to-one mapping between the two classes~(moreover, this mapping conserves model sizes). While connections between\nfinite state machines~(e.g.\\ deterministic finite automata) and recurrent neural networks have been noticed and investigated in the past~(see e.g.\\ \\cite{giles1992learning,omlin1996constructing}), to the\nbest of our knowledge this is the first time that such a \\rev{rigorous} equivalence between linear 2-RNNs and \\emph{weighted} automata is explicitly formalized. 
\n\\rev{More precisely, we pinpoint exactly the class of recurrent neural architectures to which weighted automata are equivalent, namely second-order RNNs with\nlinear activation functions.}\nThis result naturally leads to the observation that linear 2-RNNs are a natural generalization of WFAs~(which take sequences of \\emph{discrete} observations as\ninputs) to sequences of \\emph{continuous vectors}, and raises the question of whether the spectral learning algorithm for WFAs can be extended to linear 2-RNNs. \nThe second contribution of this paper is to show that the answer is in the positive: building upon the spectral learning algorithm for vector-valued WFAs introduced\nrecently in~\\cite{rabusseau2017multitask}, \\emph{we propose the first provable learning algorithm for second-order RNNs with linear activation functions}.\nOur learning algorithm relies on estimating sub-blocks of the so-called Hankel tensor, from which the parameters of a 2-linear RNN can be recovered\nusing basic linear algebra operations. One of the key technical difficulties in designing this algorithm resides in estimating\nthese sub-blocks from training data where the inputs are sequences of \\emph{continuous} vectors. \nWe leverage multilinear properties of linear 2-RNNs and the fact that the Hankel sub-blocks can be reshaped into higher-order tensors of low tensor train rank~(a\nresult we believe is of independent interest) to perform this estimation efficiently using matrix sensing and tensor recovery techniques.\nAs a proof of concept, we validate our theoretical findings in a simulation study on toy examples where we experimentally compare \ndifferent recovery methods and investigate the robustness of our algorithm to noise and rank mis-specification. \\rev{We also show that refining the estimator returned\nby our algorithm using stochastic gradient descent can lead to significant improvements.}\n\n\\rev{\n\\paragraph{Summary of contributions.} We formalize a \\emph{strict equivalence between weighted automata and second-order RNNs with linear activation\nfunctions}~(Section~\\ref{sec:WFAs.and.2RNNs}), showing that linear 2-RNNs can be seen as a natural extension of (vector-valued) weighted automata for input sequences of \\emph{continuous} vectors. We then\npropose a \\emph{consistent learning algorithm for linear 2-RNNs}~(Section~\\ref{sec:Spectral.learning.of.2RNNs}).\nThe relevance of our contributions can be seen from two perspectives.\nFirst, while learning feed-forward neural networks with linear activation functions is a trivial task (it reduces to linear or reduced-rank regression), this\nis not at all the case for recurrent architectures with linear activation functions; to the best of our knowledge, our algorithm is the \\emph{first consistent learning algorithm\nfor the class of functions computed by linear second-order recurrent networks}. Second, from the perspective of learning weighted automata, we propose a natural extension of WFAs to continuous inputs and \\emph{our learning algorithm addresses the long-standing limitation of the spectral learning method to discrete inputs}.\n}\n\n\\paragraph{Related work.}\nCombining the spectral learning algorithm for WFAs with matrix completion techniques~(a problem which is closely related to matrix sensing) has\nbeen theoretically investigated in~\\cite{balle2012spectral}. 
An extension of probabilistic transducers to continuous inputs~(along with a spectral learning algorithm) has been proposed in~\\cite{recasens2013spectral}.\nThe connections between tensors and RNNs have been previously leveraged to study the expressive power of RNNs in~\\cite{khrulkov2018expressive}\nand to achieve model compression in~\\cite{yu2017long,yang2017tensor,tjandra2017compressing}. \nExploring relationships between RNNs and automata has recently received a renewed interest~\\cite{peng2018rational,chen2018recurrent,li2018nonlinear}. In particular, such connections have been explored for interpretability purposes~\\cite{weiss2018extracting,ayache2018explaining} and the ability of RNNs to learn classes of formal languages\nhas been investigated in~\\cite{avcu2017subregular}. \\rev{Connections between the tensor train decomposition and WFAs have been previously noticed in~\\cite{critch2013algebraic,critch2014algebraic,rabusseau2016thesis}.} \nThe predictive state RNN model introduced in~\\cite{downey2017predictive} is closely related to 2-RNNs and the authors propose\nto use the spectral learning algorithm for predictive state representations to initialize a gradient based algorithm; their approach however comes without\ntheoretical guarantees. Lastly, a provable algorithm for RNNs relying on the tensor method of moments has been proposed in~\\cite{sedghi2016training} but\nit is limited to first-order RNNs with quadratic activation functions~(which do not encompass linear 2-RNNs).\n\n\\emph{The proofs of the results given in the paper can be found in the supplementary material.}\n\n\n\n\n\n\n\n\n\\section{Preliminaries}\\label{sec:prelim}\nIn this section, we first present basic notions of tensor algebra before introducing second-order recurrent neural \nnetwork, \nweighted finite automata and the spectral learning algorithm.\nWe start by introducing some notation.\nFor any integer $k$ we use $[k]$ to denote the set of integers from $1$ to $k$. We use\n$\\lceil l \\rceil$ to denote the smallest integer greater or equal to $l$.\nFor any set $\\Scal$, we denote by $\\Scal^*=\\bigcup_{k\\in\\Nbb}\\Scal^k$ the set of all\nfinite-length sequences of elements of $\\Scal$~(in particular, \n$\\Sigma^*$ will denote the set of strings on a finite alphabet $\\Sigma$). \nWe use lower case bold letters for vectors (e.g.\\ $\\vec{v} \\in \\Rbb^{d_1}$),\nupper case bold letters for matrices (e.g.\\ $\\mat{M} \\in \\Rbb^{d_1 \\times d_2}$) and\nbold calligraphic letters for higher order tensors (e.g.\\ $\\ten{T} \\in \\Rbb^{d_1\n\\times d_2 \\times d_3}$). We use $\\ten_i$ to denote the $i$th canonical basis \nvector of $\\Rbb^d$~(where the dimension $d$ will always appear clearly from context).\nThe $d\\times d$ identity matrix will be written as $\\mat{I}_d$.\nThe $i$th row (resp. column) of a matrix $\\mat{M}$ will be denoted by\n$\\mat{M}_{i,:}$ (resp. $\\mat{M}_{:,i}$). This notation is extended to\nslices of a tensor in the straightforward way.\nIf $\\vec{v} \\in \\Rbb^{d_1}$ and $\\vec{v}' \\in \\Rbb^{d_2}$, we use $\\vec{v} \\otimes \\vec{v}' \\in \\Rbb^{d_1\n\\cdot d_2}$ to denote the Kronecker product between vectors, and its\nstraightforward extension to matrices and tensors.\nGiven a matrix $\\mat{M} \\in \\Rbb^{d_1 \\times d_2}$, we use $\\vectorize{\\mat{M}} \\in \\Rbb^{d_1\n\\cdot d_2}$ to denote the column vector obtained by concatenating the columns of\n$\\mat{M}$. 
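For concreteness, these conventions are easy to reproduce in NumPy; the short sketch below~(purely illustrative, with variable names that are ours) emphasizes that, since $\\vectorize{\\mat{M}}$ concatenates the \\emph{columns} of $\\mat{M}$, it corresponds to a column-major~(Fortran-order) flattening.
\\begin{verbatim}
import numpy as np

# Kronecker product of two vectors: v (x) w lives in R^(d1*d2)
v = np.array([1., 2.])            # d1 = 2
w = np.array([3., 4., 5.])        # d2 = 3
vw = np.kron(v, w)                # shape (6,)

# vec(M): stack the columns of M, i.e. column-major flattening
M = np.arange(6.).reshape(2, 3)
vec_M = M.flatten(order='F')      # shape (6,)
\\end{verbatim}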
The inverse of $\\mat{M}$ is denoted by $\\mat{M}^{-1}$, its Moore-Penrose pseudo-inverse\nby $\\mat{M}^\\dagger$, and the transpose of its inverse by $\\mat{M}^{-\\top}$; the Frobenius norm\nis denoted by $\\norm{\\mat{M}}_F$ and the nuclear norm by $\\norm{\\mat{M}}_*$.\n\n\\paragraph{Tensors.}\nWe first recall basic definitions of tensor algebra; more details can be found\nin~\\cite{Kolda09}. \nA \\emph{tensor} $\\ten{T}\\in \\Rbb^{d_1\\times\\cdots \\times d_p}$ can simply be seen\nas a multidimensional array $(\\ten{T}_{i_1,\\cdots,i_p}\\ : \\ i_n\\in [d_n], n\\in [p])$. The\n\\emph{mode-$n$} fibers of $\\ten{T}$ are the vectors obtained by fixing all\nindices except the $n$th one, e.g.\\ $\\ten{T}_{:,i_2,\\cdots,i_p}\\in\\Rbb^{d_1}$.\nThe \\emph{$n$th mode matricization} of $\\ten{T}$ is the matrix having the\nmode-$n$ fibers of $\\ten{T}$ for columns and is denoted by\n$\\tenmat{T}{n}\\in \\Rbb^{d_n\\times d_1\\cdots d_{n-1}d_{n+1}\\cdots d_p}$.\nThe vectorization of a tensor is defined by $\\vectorize{\\ten{T}}=\\vectorize{\\tenmat{T}{1}}$.\nIn the following $\\ten{T}$ always denotes a tensor of size $d_1\\times\\cdots \\times d_p$.\n\nThe \\emph{mode-$n$ matrix product} of the tensor $\\ten{T}$ and a matrix\n$\\mat{X}\\in\\Rbb^{m\\times d_n}$ is a tensor denoted by $\\ten{T}\\ttm{n}\\mat{X}$. It is \nof size $d_1\\times\\cdots \\times d_{n-1}\\times m \\times d_{n+1}\\times\n\\cdots \\times d_p$ and is defined by the relation \n$\\ten{Y} = \\ten{T}\\ttm{n}\\mat{X} \\Leftrightarrow \\tenmat{Y}{n} = \\mat{X}\\tenmat{T}{n}$.\nThe \\emph{mode-$n$ vector product} of the tensor $\\ten{T}$ and a vector\n$\\vec{v}\\in\\Rbb^{d_n}$ is a tensor defined by $\\ten{T}\\ttv{n}\\vec{v} = \\ten{T}\\ttm{n}\\vec{v}^\\top\n\\in \\Rbb^{d_1\\times\\cdots \\times d_{n-1}\\times d_{n+1}\\times\n\\cdots \\times d_p}$.\nIt is easy to check that the $n$-mode product satisfies $(\\ten{T}\\ttm{n}\\mat{A})\\ttm{n}\\mat{B} = \\ten{T}\\ttm{n}\\mat{BA}$\nwhere we assume compatible dimensions of the tensor $\\ten{T}$ and\nthe matrices $\\mat{A}$ and $\\mat{B}$.\n\n\nGiven strictly positive integers $n_1,\\cdots, n_k$ satisfying\n$\\sum_i n_i = p$, we use the notation $\\tenmatgen{\\ten{T}}{n_1,n_2,\\cdots,n_k}$ to denote the $k$th order tensor \nobtained by reshaping $\\ten{T}$ into a tensor\\footnote{Note that the specific ordering used to perform matricization, vectorization\nand such a reshaping is not relevant as long as it is consistent across all operations.} of size \n$(\\prod_{i_1=1}^{n_1} d_{i_1}) \\times (\\prod_{i_2=1}^{n_2} d_{n_1 + i_2}) \\times \\cdots \\times (\\prod_{i_k=1}^{n_k} d_{n_1+\\cdots+n_{k-1} + i_k})$.\nIn particular we have $\\tenmatgen{\\ten{T}}{p} = \\vectorize{\\ten{T}}$ and $\\tenmatgen{\\ten{T}}{1,p-1} = \\tenmat{\\ten{T}}{1}$.\n\n\nA rank $R$ \\emph{tensor train (TT) decomposition}~\\cite{oseledets2011tensor} of a tensor \n$\\ten{T}\\in\\Rbb^{d_1\\times \\cdots\\times d_p}$ consists in factorizing $\\ten{T}$ into the product of $p$ core tensors\n$\\ten{G}_1\\in\\Rbb^{d_1\\times R},\\ten{G}_2\\in\\Rbb^{R\\times d_2\\times R},\n\\cdots, \\ten{G}_{p-1}\\in\\Rbb^{R\\times d_{p-1} \\times R},\n\\ten{G}_p \\in \\Rbb^{R\\times d_p}$, and is defined\\footnote{The classical definition of the TT-decomposition allows the rank $R$ to be different\nfor each mode, but this definition is sufficient for the purpose of this paper.} by\n\\begin{align*}\n\\MoveEqLeft\\ten{T}_{i_1,\\cdots,i_p} = \n&(\\ten{G}_1)_{i_1,:}(\\ten{G}_2)_{:,i_2,:}\\cdots \n 
(\\ten{G}_{p-1})_{:,i_{p-1},:}(\\ten{G}_p)_{:,i_p}\n\\end{align*}\n %\nfor all indices $i_1\\in[d_1],\\cdots,i_p\\in[d_p]$; we will use the notation $\\ten{T} = \\TT{\\ten{G}_1,\\cdots,\\ten{G}_p}$\nto denote such a decomposition. A tensor network representation of this decomposition is shown in Figure~\\ref{fig:tn.TT}.\n While the problem of finding the best approximation of TT-rank $R$\nof a given tensor is NP-hard~\\cite{hillar2013most}, \na quasi-optimal SVD based compression algorithm~(TT-SVD) has been proposed \nin~\\cite{oseledets2011tensor}.\nIt is worth mentioning that the TT decomposition is invariant under change of basis: \nfor any invertible matrix $\\mat{M}$ and any core tensors $\\ten{G}_1,\\ten{G}_2,\\cdots,\\ten{G}_p$, we have\n$\\TT{\\ten{G}_1,\\cdots,\\ten{G}_p} = \\TT{\\ten{G}_1\\ttm{2}\\mat{M}^{-\\top},\\ten{G}_2\\ttm{1}\\mat{M}\\ttm{3}\\mat{M}^{-\\top},\\cdots,\n\\ten{G}_{p-1}\\ttm{1}\\mat{M}\\ttm{3}\\mat{M}^{-\\top},\\ten{G}_p\\ttm{1}\\mat{M}}$.\n\n\n\n\n\\begin{figure}\n\\begin{center}\n\\resizebox{0.45\\textwidth}{0.08\\textwidth}{%\n\\begin{tikzpicture}\n\t\\input{tikz_tensor_networks}\n\t\\node[tensor](G1){$\\ten{G}_1$};\n\t\\node[draw = none,below=0.8cm of G1](G11){};\n\t\n\t\\node[tensor,right = 1cm of G1](G2){$\\ten{G}_2$};\n\t\\node[draw=none,below=0.8cm of G2](G22){};\n\t\n\t\\node[tensor,right = 1cm of G2](G3){$\\ten{G}_3$};\n\t\\node[draw=none,below=0.8cm of G3](G32){};\n\t\n\t\\node[tensor,right = 1cm of G3](G4){$\\ten{G}_4$};\n\t\\node[draw=none,below=0.8cm of G4](G41){};\n\t\n\t\\node[tensor,left=3cm of G1](T){$\\ten{T}$};\n\t\\node[draw=none,below left = 0.2cm and 1cm of T](T1){};\n\t\\node[draw=none,below left = 0.8cm and 0.2cm of T](T2){};\n\t\\node[draw=none,below right = 0.8cm and 0.2cm of T](T3){};\n\t\\node[draw=none,below right = 0.2cm and 1cm of T](T4){};\n\t\\node[draw=none,right=1.5cm of T](eq){$=$};\t\n\t\t\n\t\\edgeports{T}{1}{above left}{T1}{}{}{$d_1$};\n\t\\edgeports{T}{2}{below right}{T2}{}{}{$d_2$};\n\t\\edgeports{T}{3}{below right=-0.1cm and 0.1cm}{T3}{}{}{$d_3$};\n\t\\edgeports{T}{4}{above right}{T4}{}{}{$d_4$};\n\t\n\t\\edgeports{G1}{1}{below left = -0.1cm and 0.01cm }{G11}{}{}{$d_1$};\n\t\n\t\\edgeports{G2}{1}{below left}{G1}{2}{below right}{$R$};\t\n\t\\edgeports{G2}{2}{below left = -0.1cm and 0.01cm }{G22}{}{}{$d_2$};\n\t\n\t\\edgeports{G3}{1}{below left}{G2}{3}{below right}{$R$};\t\n\t\\edgeports{G3}{2}{below left = -0.1cm and 0.01cm }{G32}{}{}{$d_3$};\n\t\n\t\\edgeports{G4}{1}{below left}{G3}{3}{below right}{$R$};\t\n\t\\edgeports{G4}{2}{below left = -0.1cm and 0.01cm }{G41}{}{}{$d_4$};\n\n\t\n\\end{tikzpicture}\n}%\n\\end{center}\n\\caption{Tensor network representation of a rank $R$ tensor train decomposition~(nodes represent tensors and an edge between two nodes\nrepresents a contraction between the corresponding modes of the two tensors).}\n\\label{fig:tn.TT}\n\\end{figure}\n\n\\paragraph{Second-order RNNs.}\nA \\emph{second-order recurrent neural network} (2-RNN)~\\cite{giles1990higher,pollack1991induction,lee1986machine}\\footnote{Second-order reccurrent architectures have also been successfully used more recently, see e.g.\\ \\cite{sutskever2011generating} and \\cite{wu2016multiplicative}.} with $n$ hidden units can be defined as a tuple \n$M=(\\vec{h}_0,\\Aten,\\vecs{\\vvsinfsymbol})$ where $\\vec{h}_0\\in\\Rbb^n$ is the initial state, $\\Aten\\in\\Rbb^{ n\\times d \\times n }$ is the transition tensor, and\n$\\vecs{\\vvsinfsymbol}\\in \\Rbb^{p\\times n}$ is the output matrix,\nwith $d$ and $p$ being the input and output 
dimensions respectively. \nA 2-RNN maps any sequence of inputs $\\vec{x}_1,\\cdots,\\vec{x}_k\\in\\Rbb^d$ to\na sequence of outputs $\\vec{y}_1,\\cdots,\\vec{y}_k\\in\\Rbb^p$ defined for any $t=1,\\cdots,k$ by\n\\begin{equation}\\label{eq:2RNN.definition}\n\\vec{y}_t = z_2(\\vecs{\\vvsinfsymbol}\\vec{h}_t) \\text{ with }\\vec{h}_t = z_1(\\Aten\\ttv{1}\\vec{h}_{t-1}\\ttv{2}\\vec{x}_t)\n\\end{equation}\nwhere $z_1:\\Rbb^n\\to\\Rbb^n$ and $z_2:\\Rbb^p\\to\\Rbb^p$ are activation functions.\nAlternatively, one can think of a 2-RNN as computing\na function $f_M:(\\Rbb^d)^*\\to\\Rbb^p$ mapping each input sequence $\\vec{x}_1,\\cdots,\\vec{x}_k$ to the corresponding final output $\\vec{y}_k$.\nWhile $z_1$ and $z_2$ are usually non-linear\ncomponent-wise functions, we consider in this paper the case where both $z_1$ and $z_2$ are the identity, and we refer to\nthe resulting model as a \\emph{linear 2-RNN}.\nFor a linear 2-RNN $M$, the function $f_M$ is multilinear in the sense that, for any integer $l$, its restriction to the domain $(\\Rbb^d)^l$ is\nmultilinear. Another useful observation is that linear 2-RNNs are invariant under change of basis: for any invertible matrix\n$\\P$, the linear 2-RNN $\\tilde{M}=(\\P^{-\\top}\\vec{h}_0,\\Aten\\ttm{1}\\P\\ttm{3}\\P^{-\\top},\\P\\vecs{\\vvsinfsymbol})$ is such that $f_{\\tilde{M}}=f_M$. A linear 2-RNN $M$ with $n$ hidden units is called \\emph{minimal} if its number of hidden units is minimal~(i.e.\\ any linear 2-RNN computing $f_M$ has\nat least $n$ hidden units).\n\n\n\\paragraph{Weighted automata and spectral learning.} \\emph{Vector-valued weighted finite automata}~(vv-WFAs) have\nbeen introduced in~\\cite{rabusseau2017multitask} as a natural generalization of weighted automata from scalar-valued functions\nto vector-valued ones. A $p$-dimensional vv-WFA with $n$ states is a tuple $A=\\vvwa$\nwhere $\\vecs{\\szerosymbol}\\in\\Rbb^n$ is the initial weights vector, $\\vecs{\\vvsinfsymbol}\\in\\Rbb^{p\\times n}$ is the matrix of final weights, \nand $\\mat{A}^\\sigma\\in\\Rbb^{n\\times n}$ is the transition matrix for each symbol $\\sigma$ in a finite alphabet $\\Sigma$.\nA vv-WFA $A$ computes a function $f_A:\\Sigma^*\\to\\Rbb^p$ defined by \n$$f_A(x) =\\vecs{\\vvsinfsymbol} (\\mat{A}^{x_1}\\mat{A}^{x_2}\\cdots\\mat{A}^{x_k})^\\top\\vecs{\\szerosymbol} $$\nfor each word $x=x_1x_2\\cdots x_k\\in\\Sigma^*$. We call a vv-WFA \\emph{minimal} if its number of states\nis minimal. Given a function $f:\\Sigma^*\\to \\Rbb^p$ we denote by $\\rank(f)$ the number of states of a minimal vv-WFA computing $f$~(which is\nset to $\\infty$ if $f$ cannot be computed by a vv-WFA).\n\nThe spectral learning algorithm for vv-WFAs relies on the following fundamental theorem relating\nthe rank of a function $f:\\Sigma^*\\to\\Rbb^p$ to its Hankel tensor $\\ten{H} \\in \\Rbb^{\\Sigma^*\\times\\Sigma^*\\times p}$, which is defined\nby $\\ten{H}_{u,v,:} = f(uv)$ for all $u,v\\in\\Sigma^*$.\n\\begin{theorem}[\\cite{rabusseau2017multitask}]\n\\label{thm:fliess-vvWFA}\nLet $f:\\Sigma^*\\to\\Rbb^p$ and let $\\ten{H}$ be its Hankel tensor. Then $\\rank(f) = \\rank(\\tenmat{H}{1})$.\n\\end{theorem}\nThe vv-WFA learning algorithm leverages the fact that the proof of this theorem is constructive: one can recover a vv-WFA computing $f$\nfrom any low rank factorization of $\\tenmat{H}{1}$. 
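Before describing how this factorization is exploited in practice, let us make the computation of a linear 2-RNN concrete. The sketch below~(a minimal illustration; the function name is ours) implements Eq.~\\eqref{eq:2RNN.definition} with identity activations, contracting the transition tensor $\\Aten\\in\\Rbb^{n\\times d\\times n}$ with the previous hidden state on its first mode and with the current input on its second mode.
\\begin{verbatim}
import numpy as np

def linear_2rnn_forward(h0, A, Omega, xs):
    """f_M(x_1, ..., x_k) for a linear 2-RNN M = (h0, A, Omega).

    h0:    (n,)       initial hidden state
    A:     (n, d, n)  transition tensor
    Omega: (p, n)     output matrix
    xs:    iterable of (d,) input vectors
    """
    h = h0
    for x in xs:
        # h_t = A times_1 h_{t-1} times_2 x_t (mode-n vector products)
        h = np.einsum('idj,i,d->j', A, h, x)
    return Omega @ h
\\end{verbatim}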
In practice, a finite sub-block $\\ten{H}_{\\Pcal,\\Scal} \\in \\Rbb^{\\Pcal\\times \\Scal\\times p}$ of the Hankel tensor is used\nto recover the vv-WFA, where $\\Pcal,\\Scal\\subset\\Sigma^*$ are finite sets of prefixes and suffixes forming a \\emph{complete basis} for $f$, i.e.\\ such that\n$\\rank(\\tenmatpar{\\ten{H}_{\\Pcal,\\Scal}}{1}) = \\rank(\\tenmat{H}{1})$. More details can be found \nin~\\cite{rabusseau2017multitask}.\n\n\n\\section{A Fundamental Relation between WFAs and Linear 2-RNNs}\\label{sec:WFAs.and.2RNNs}\n\nWe start by unraveling a fundamental connection between vv-WFAs and linear 2-RNNs: vv-WFAs and\nlinear 2-RNNs are expressively equivalent for representing functions defined over sequences of\ndiscrete symbols. \\rev{Moreover, both models have the same capacity in the sense that there is a direct\ncorrespondence between the hidden units of a linear 2-RNN and the states of a vv-WFA computing the same function}. More formally, we have the following theorem. \n\\begin{theorem}\\label{thm:2RNN-vvWFA}\nAny function that can be computed by a vv-WFA with $n$ states can be computed by a linear 2-RNN with $n$ hidden units.\nConversely, any function that can be computed by a linear 2-RNN with $n$ hidden units on sequences of one-hot vectors~(i.e.\\ canonical basis \nvectors) can be computed by a WFA with $n$ states.\n\n\nMore precisely, the WFA $A=\\vvwa$ with $n$ states and the linear 2-RNN $M=(\\vecs{\\szerosymbol},\\Aten,\\vecs{\\vvsinfsymbol})$ with\n$n$ hidden units, where $\\Aten\\in\\Rbb^{n\\times \\Sigma \\times n}$ is defined by $\\Aten_{:,\\sigma,:}=\\mat{A}^\\sigma$ for all $\\sigma\\in\\Sigma$, are\nsuch that\n$f_A(\\sigma_1\\sigma_2\\cdots\\sigma_k) = f_M(\\vec{x}_1,\\vec{x}_2,\\cdots,\\vec{x}_k)$ for all sequences of input symbols $\\sigma_1,\\cdots,\\sigma_k\\in\\Sigma$,\nwhere for each $i\\in[k]$ the input vector $\\vec{x}_i\\in\\Rbb^\\Sigma$ is\nthe one-hot encoding of the symbol $\\sigma_i$.\n\n\\end{theorem}\n\nThis result first implies that linear 2-RNNs defined over sequence of discrete symbols~(using one-hot encoding) \\emph{can be provably learned using the spectral\nlearning algorithm for WFAs\/vv-WFAs}; indeed, these algorithms have been proved to return consistent estimators.\n\\rev{Let us stress again that, contrary to the case of feed-forward architectures, learning recurrent networks with linear activation functions is not a trivial task.}\nFurthermore, Theorem~\\ref{thm:2RNN-vvWFA} reveals that linear 2-RNNs are a natural generalization of classical weighted automata to functions\ndefined over sequences of continuous vectors~(instead of discrete symbols). This spontaneously raises the question of whether the spectral learning algorithms\nfor WFAs and vv-WFAs can be extended to the general setting of linear 2-RNNs; we show that the answer is in the positive in the next section.\n\n\n\n \n\n\n\n\\section{Spectral Learning of Linear 2-RNNs}\\label{sec:Spectral.learning.of.2RNNs}\nIn this section, we extend the learning algorithm for vv-WFAs to linear 2-RNNs, \\rev{thus at the same time addressing the limitation of the spectral learning algorithm to discrete inputs and providing the first consistent learning algorithm for linear second-order RNNs.}\n\n\n\n\\subsection{Recovering 2-RNNs from Hankel Tensors}\\label{subsec:SL-2RNN}\nWe first present an identifiability result showing how one can recover a linear 2-RNN computing a function $f:(\\Rbb^d)^*\\to \\Rbb^p$ from observable tensors extracted from some Hankel tensor\nassociated with $f$. 
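Before moving on to this result, note that the correspondence of Theorem~\\ref{thm:2RNN-vvWFA} is straightforward to check numerically; the self-contained sketch below~(illustrative only, with arbitrary small dimensions) stacks the transition matrices $\\mat{A}^\\sigma$ of a random vv-WFA into a transition tensor and compares both computations on a one-hot encoded word.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p, n_sym = 3, 2, 4                  # states, output dim, |Sigma|
alpha = rng.normal(size=n)             # initial weights
Omega = rng.normal(size=(p, n))        # final weights
As = rng.normal(size=(n_sym, n, n))    # one matrix A^sigma per symbol

A = np.transpose(As, (1, 0, 2))        # A[:, sigma, :] = A^sigma
word = [2, 0, 3, 1]
one_hot = np.eye(n_sym)

# vv-WFA: f_A(x) = Omega (A^{x_1} A^{x_2} ... A^{x_k})^T alpha
f_wfa = Omega @ np.linalg.multi_dot([As[s] for s in word]).T @ alpha

# linear 2-RNN on one-hot inputs: h_t = (A^{sigma_t})^T h_{t-1}
h = alpha
for s in word:
    h = np.einsum('idj,i,d->j', A, h, one_hot[s])
f_rnn = Omega @ h

assert np.allclose(f_wfa, f_rnn)
\\end{verbatim}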
Intuitively, we obtain this result by reducing the problem to that of learning a vv-WFA. This is done by considering the restriction of\n$f$ to canonical basis vectors; loosely speaking, since the domain of this restricted function is isomorphic to $[d]^*$, this allows us to fall back onto the \nsetting of sequences of discrete symbols.\n\nGiven a function $f:(\\Rbb^d)^*\\to\\Rbb^p$, we define its Hankel tensor $\\ten{H}_f\\in \\Rbb^{[d]^* \\times [d]^* \\times p}$ by\n$$(\\ten{H}_f)_{i_1\\cdots i_s, j_1\\cdots j_t,:} = f(\\ten_{i_1},\\cdots,\\ten_{i_s},\\ten_{j_1},\\cdots,\\ten_{j_t}),$$ \nfor all $i_1,\\cdots,i_s,j_1,\\cdots,j_t\\in [d]$, which is infinite in two\nof its modes. It is easy to see that $\\ten{H}_f$ is also the Hankel tensor associated with the function $\\tilde{f}:[d]^* \\to \\Rbb^p$ mapping any\nsequence $i_1i_2\\cdots i_k\\in[d]^*$ to $f(\\ten_{i_1},\\cdots,\\ten_{i_k})$. Moreover, in the special case where $f$ can be computed by a linear 2-RNN, one\ncan use the multilinearity of $f$ to show that $f(\\vec{x}_1,\\cdots,\\vec{x}_k) = \\sum_{i_1,\\cdots,i_k = 1}^d (\\vec{x}_1)_{i_1}\\cdots(\\vec{x}_k)_{i_k} \\tilde{f}(i_1\\cdots i_k)$,\n\\rev{giving us some intuition on how one could} learn $f$ by learning a vv-WFA computing $\\tilde{f}$ using the spectral learning algorithm. \nThat is, given a large enough sub-block $\\ten{H}_{\\Pcal,\\Scal}\\in \\Rbb^{\\Pcal\\times \\Scal\\times p}$ of $\\ten{H}_f$ for some prefix and suffix sets $\\Pcal,\\Scal\\subseteq [d]^*$, \none should be able to recover a vv-WFA computing $\\tilde{f}$ and consequently a linear 2-RNN computing $f$ using Theorem~\\ref{thm:2RNN-vvWFA}. \n\\rev{Before devoting the remainder of this section to formalizing this intuition~(leading to Theorem~\\ref{thm:2RNN-SL}), it is worth \nobserving that while this approach is sound, it is not realistic since it requires observing entries of the Hankel tensor $\\ten{H}_f$, which implies having access to input\/output examples where the inputs are \\emph{sequences of canonical basis vectors}; this issue will be discussed in more detail and addressed in the \nnext section.} \n\n\n\n\\emph{For the sake of clarity, we present the learning algorithm for the particular case where there exists an $L$ such that \nthe prefix and suffix sets consisting of all sequences of length $L$, that is $\\Pcal = \\Scal = \n[d]^L$, form a complete basis for $\\tilde{f}$}~\\rev{(i.e.\\ the sub-block $\\ten{H}_{\\Pcal,\\Scal}\\in\\Rbb^{[d]^L\\times [d]^L\\times p}$ of the Hankel tensor $\\ten{H}_f$ is\nsuch that $\\rank(\\tenmatpar{\\ten{H}_{\\Pcal,\\Scal}}{1}) = \\rank(\\tenmatpar{\\ten{H}_f}{1})$)}. This assumption allows us to present all the key elements of the algorithm in a simpler way; the technical details\nneeded to lift this assumption are given in the supplementary material.\n\nFor any integer $l$, we define the finite tensor \n$\\ten{H}^{(l)}_f\\in \\Rbb^{ d\\times \\cdots \\times d\\times p}$ of order $l+1$ by\n$$ (\\ten{H}^{(l)}_f)_{i_1,\\cdots,i_l,:} = f(\\ten_{i_1},\\cdots,\\ten_{i_l}) \\ \\ \\ \\text{for all } i_1,\\cdots,i_l\\in [d].$$ \nObserve that for any integer $l$, the tensor $\\ten{H}^{(l)}_f$ can be obtained by reshaping a finite sub-block of the Hankel tensor $\\ten{H}_f$. 
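As a concrete illustration, $\\ten{H}^{(l)}_f$ can be filled by querying a black-box $f$ on every length-$l$ sequence of canonical basis vectors; the helper below~(illustrative only) does exactly that. Note that it requires $d^l$ evaluations of $f$, which is precisely the unrealistic access to Hankel entries mentioned above and motivates the recovery techniques of Section~\\ref{subsec:Hankel.tensor.recovery}.
\\begin{verbatim}
import itertools
import numpy as np

def hankel_tensor(f, d, l, p):
    """Build H^(l)_f in R^{d x ... x d x p} by evaluating f on all
    length-l sequences of canonical basis vectors."""
    E = np.eye(d)
    H = np.zeros((d,) * l + (p,))
    for idx in itertools.product(range(d), repeat=l):
        H[idx] = f([E[i] for i in idx])  # f: list of R^d vectors -> R^p
    return H
\\end{verbatim}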
\nWhen $f$ is computed by a linear $2$-RNN, we have the useful property that, for any integer $l$,\n\\begin{equation}\\label{eq:}\n f(\\vec{x}_1,\\cdots,\\vec{x}_l) = \\ten{H}^{(l)}_f \\ttv{1} \\vec{x}_1 \\ttv{2} \\cdots \\ttv{l} \\vec{x}_l\n\\end{equation}\nfor any sequence of inputs $\\vec{x}_1,\\cdots,\\vec{x}_l\\in\\Rbb^d$~(which can be shown using the\nmultilinearity of $f$). Another fundamental property of the tensors $\\ten{H}^{(l)}_f$ is that they are of low tensor train rank. Indeed, for any $l$, one can check that\n$\\ten{H}^{(l)}_f = \\TT{\\Aten\\ttv{1}\\vecs{\\szerosymbol}, \\underbrace{\\Aten, \\cdots, \\Aten}_{l-1\\text{ times}}, \\vecs{\\vvsinfsymbol}^\\top}$~(the tensor network representation of this\ndecomposition is shown in Figure~\\ref{fig:Hl.TT.rank}).\nThis property will be particularly relevant to the learning algorithm we design in the following section, but it is also a fundamental relation\nthat deserves some attention on its own: it implies in particular that, beyond the classical relation between the rank of the Hankel matrix $\\H_f$ and the number states of\na minimal WFA computing $f$, the Hankel matrix possesses a deeper structure intrinsically connecting weighted automata to the tensor\ntrain decomposition. \n\\rev{We now state the main result of this section, showing that a (minimal) linear 2-RNN computing a function $f$ can be exactly recovered from sub-blocks of the Hankel tensor $\\ten{H}_f$.}\n\n\n\n\\begin{figure}\n\\begin{center}\n\\resizebox{0.45\\textwidth}{0.08\\textwidth}{%\n\\begin{tikzpicture}\n\t\\input{tikz_tensor_networks}\n\t\\node[tensor](G1){$\\vecs{\\szerosymbol}$};\n\t\\node[draw=none,left of = G1](H){$\\ten{H}^{(4)}_f=\\ \\ \\ $};\n\t\\node[tensor,right = 1cm of G1](G2){$\\ten{A}$};\n\t\\node[draw=none,below=0.8cm of G2](G22){};\n\t\n\t\\node[tensor,right = 1cm of G2](G3){$\\ten{A}$};\n\t\\node[draw=none,below=0.8cm of G3](G32){};\n\t\n\n\t\\node[tensor,right = 1cm of G3](G4){$\\ten{A}$};\n\t\\node[draw=none,below=0.8cm of G4](G42){};\t\n\t\n\t\\node[tensor,right = 1cm of G4](G4-){$\\ten{A}$};\n\t\\node[draw=none,below=0.8cm of G4-](G42-){};\t\n\t\n\t\\node[tensor,right = 1cm of G4-](G5){$\\vecs{\\vvsinfsymbol}$};\n\t\\node[draw=none,below=0.8cm of G5](G51){};\n\t\n\n\t\n\t\\edgeports{G2}{1}{below left = 0.01cm }{G1}{1}{below right}{$n$};\t\n\t\\edgeports{G2}{2}{below left = -0.1cm and 0.01cm }{G22}{}{}{$d$};\n\t\n\t\\edgeports{G3}{1}{below left}{G2}{3}{below right}{$n$};\t\n\t\\edgeports{G3}{2}{below left = -0.1cm and 0.01cm }{G32}{}{}{$d$};\n\t\n\t\\edgeports{G4}{1}{below left}{G3}{3}{below right}{$n$};\t\n\t\\edgeports{G4}{2}{below left = -0.1cm and 0.01cm }{G42}{}{}{$d$};\n\t\n\t\\edgeports{G4-}{1}{below left}{G4}{3}{below right}{$n$};\t\n\t\\edgeports{G4-}{2}{below left = -0.1cm and 0.01cm }{G42-}{}{}{$d$};\n\t\n\t\\edgeports{G5}{1}{below left}{G4-}{3}{below right}{$n$};\t\n\t\\edgeports{G5}{2}{below left = -0.1cm and 0.01cm }{G51}{}{}{$p$};\n\n\t\n\\end{tikzpicture}\n}%\n\\end{center}\n\\caption{Tensor network representation of the TT decomposition of the Hankel tensor $\\ten{H}^{(4)}_f$ induced\nby a linear $2$-RNN $(\\vecs{\\szerosymbol},\\Aten,\\vecs{\\vvsinfsymbol})$.}\n\\label{fig:Hl.TT.rank}\n\\end{figure}\n\n\n\n\\begin{theorem}\\label{thm:2RNN-SL}\nLet $f:(\\Rbb^d)^*\\to \\Rbb^p$ be a function computed by a minimal linear $2$-RNN with $n$ hidden units and let\n$L$ be an integer such that $\\rank(\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1}) = n$.\n\nThen, for any $\\P\\in\\Rbb^{d^L\\times n}$ and $\\S\\in\\Rbb^{n\\times d^Lp}$ 
such that $\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1} = \\P\\S$, the\nlinear 2-RNN $M=(\\vecs{\\szerosymbol},\\Aten,\\vecs{\\vvsinfsymbol})$ defined by\n\n$$\\vecs{\\szerosymbol} = (\\S^\\dagger)^\\top\\tenmatgen{\\ten{H}^{(L)}_f}{L+1}, \\ \\ \\ \\ \\vecs{\\vvsinfsymbol}^\\top = \\P^\\dagger\\tenmatgen{\\ten{H}^{(L)}_f}{L,1}\n$$\n$$\\Aten = (\\tenmatgen{\\ten{H}^{(2L+1)}_f}{L,1,L+1})\\ttm{1}\\P^\\dagger\\ttm{3}(\\S^\\dagger)^\\top$$\nis a minimal linear $2$-RNN computing $f$.\n\\end{theorem}\nFirst observe that such an integer $L$ exists under the assumption that $\\Pcal = \\Scal = \n[d]^L$ forms a complete basis for $\\tilde{f}$.\nIt is also worth mentioning that a necessary condition for $\\rank(\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1}) = n$ is that\n$d^L\\geq n$, i.e.\\ $L$ must be of the order $\\log_d(n)$.\n\n\n\n\n\n\n\n\n\n\n\\subsection{Hankel Tensors Recovery from Linear Measurements}\\label{subsec:Hankel.tensor.recovery}\n\n\n\nWe showed in the previous section that, given the Hankel tensors $\\ten{H}^{(L)}_f$, $\\ten{H}^{(2L)}_f$ and $\\ten{H}^{(2L+1)}_f$, one can recover \na linear 2-RNN computing $f$ if it exists. This first implies that the class of functions that can be computed by linear 2-RNNs is learnable in Angluin's\nexact learning model~\\cite{angluin1988queries} where one has access to an oracle that can answer membership queries~(e.g.\\ \\textit{what is the value computed by the target $f$ \non~$(\\vec{x}_1,\\cdots,\\vec{x}_k)$?}) and equivalence queries~(e.g.\\ \\textit{is my current hypothesis $h$ equal to the target $f$?}). While this fundamental result is \nof significant theoretical interest, assuming access to such an oracle is unrealistic. In this section, we show that a stronger learnability result can be obtained in a more realistic setting,\nwhere we only\nassume access to randomly generated input\/output examples $((\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_l^{(i)}),\\vec{y}^{(i)})\\in(\\Rbb^d)^*\\times\\Rbb^p$ where $\\vec{y}^{(i)} = f(\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_l^{(i)})$.\n\n\nThe key observation is that such an input\/output example $((\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_l^{(i)}),\\vec{y}^{(i)})$ can be seen as a \\emph{linear measurement} of the \nHankel tensor $\\ten{H}^{(l)}$. Indeed, we have\n\\begin{align*}\n\\vec{y}^{(i)} &= f(\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_l^{(i)}) = \\ten{H}^{(l)}_f \\ttv{1} \\vec{x}_1 \\ttv{2} \\cdots \\ttv{l} \\vec{x}_l \\\\\n&= \n\\tenmatgen{\\ten{H}^{(l)}}{l,1}^\\top \\vec{x}^{(i)}\n\\end{align*}\nwhere $\\vec{x}^{(i)} = \\vec{x}^{(i)}_1\\otimes\\cdots \\otimes \\vec{x}_{l}^{(i)}\\in\\Rbb^{d^l}$. Hence, by regrouping $N$ output examples $\\vec{y}^{(i)}$ into\nthe matrix $\\Ymat\\in\\Rbb^{N\\times p}$ and the corresponding input vectors $\\vec{x}^{(i)}$ into the matrix $\\mat{X}\\in\\Rbb^{N\\times d^l}$,\none can recover $\\ten{H}^{(l)}$ by solving the linear system $\\Ymat = \\mat{X}\\tenmatgen{\\ten{H}^{(l)}}{l,1}$, which has a unique\nsolution whenever $\\mat{X}$ is of full column rank. \nThis naturally leads to the following theorem, whose proof relies on the fact that\n$\\mat{X}$ will be of full column rank whenever $N\\geq d^l$ and the components of each $\\vec{x}^{(i)}_j$ for $j\\in[l],i\\in[N]$\nare drawn independently from a continuous distribution over $\\Rbb^{d}$~(w.r.t. 
the Lebesgue measure).\n\n\n\n\n\n\n\n\\begin{theorem}\\label{thm:learning-2RNN}\nLet $(\\vec{h}_0,\\Aten,\\vecs{\\vvsinfsymbol})$ be a minimal linear 2-RNN with $n$ hidden units computing a function $f:(\\Rbb^d)^*\\to \\Rbb^p$, and let $L$ be an integer\\footnote{Note that the theorem can be adapted if such an integer $L$ does not exists~(see supplementary material).}\nsuch that $\\rank(\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1}) = n$.\nSuppose we have access to $3$ datasets\n$D_l = \\{((\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_l^{(i)}),\\vec{y}^{(i)}) \\}_{i=1}^{N_l}\\subset(\\Rbb^d)^l\\times \\Rbb^p$ for $l\\in\\{L,2L,2L+1\\}$ \nwhere the entries of each $\\vec{x}^{(i)}_j$ are drawn independently from the standard normal distribution and where each\n$\\vec{y}^{(i)} = f(\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_l^{(i)})$.\n\nThen, if $N_l \\geq d^l$ for $l =L,\\ 2L,\\ 2L+1$,\nthe linear 2-RNN $M$ returned by Algorithm~\\ref{alg:2RNN-SL} with the least-squares method satisfies $f_M = f$ with probability one.\n\\end{theorem}\n\n\n\\begin{algorithm}[tb]\n \\caption{\\texttt{2RNN-SL}: Spectral Learning of linear 2-RNNs }\n \\label{alg:2RNN-SL}\n\\begin{algorithmic}[1]\n \\REQUIRE Three training datasets $D_L,D_{2L},D_{2L+1}$ with input sequences of length $L$, $2L$ and $2L+1$ respectively, a \\texttt{recovery\\_method}, rank $R$ and learning rate $\\gamma$~(for IHT\/TIHT).\n \n \\FOR{$l\\in\\{L,2L,2L+1\\}$}\n \\STATE\\label{alg.firstline.forloop} Use $D_l = \\{((\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_l^{(i)}),\\vec{y}^{(i)}) \\}_{i=1}^{N_l}\\subset(\\Rbb^d)^l\\times \\Rbb^p$ to build $\\mat{X}\\in\\Rbb^{N_l\\times d^l}$ with rows $\\vec{x}^{(i)}_1\\otimes\\vec{x}_2^{(i)}\\otimes\\cdots\\otimes\\vec{x}_l^{(i)}$ for $i\\in[N_l]$ and $\\Ymat\\in\\Rbb^{N_l\\times p}$ with rows $\\vec{y}^{(i)}$ for $i\\in[N_l]$.\n \\IF{\\texttt{recovery\\_method} = \"Least-Squares\"}\n \\STATE\\label{alg.line.lst-sq} $\\ten{H}^{(l)} = \\displaystyle\\argmin_{\\ten{T}\\in \\Rbb^{d\\times\\cdots\\times d\\times p}} \\norm{\\mat{X}\\tenmatgen{\\ten{T}}{l,1} - \\Ymat}_F^2$.\n \\ELSIF{\\texttt{recovery\\_method} = \"Nuclear Norm\"}\n \\STATE $\\ten{H}^{(l)} = \\displaystyle\\argmin_{\\ten{T}\\in \\Rbb^{d\\times\\cdots\\times d\\times p}} \\norm{\\tenmatgen{\\ten{T}}{\\ceil{l\/2},l-\\ceil{l\/2} + 1}}_*$ subject to $\\mat{X} \\tenmatgen{\\ten{T}}{l,1} = \\Ymat$.\n \\label{alg.line.nucnorm}\n \\ELSIF{\\texttt{recovery\\_method} = \"(T)IHT\"}\n \\STATE Initialize $\\ten{H}^{(l)} \\in \\Rbb^{d\\times\\cdots\\times d\\times p}$ to $\\mat{0}$.\n \\REPEAT\\label{alg.line.iht.start}\n \\STATE\\label{alg.line.iht.gradient} $\\tenmatgen{\\ten{H}^{(l)}}{l,1} = \\tenmatgen{\\ten{H}^{(l)} }{l,1} + \\gamma\\mat{X}^\\top(\\Ymat - \\mat{X}\\tenmatgen{\\ten{H}^{(l)} }{l,1})$\n \n \\STATE $\\ten{H}^{(l)} = \\texttt{project}(\\ten{H}^{(l)},R)$~(using either SVD for IHT or TT-SVD for TIHT)\n \n \\UNTIL{convergence}\\label{alg.line.iht.end}\n \\ENDIF\\label{alg.lastline.forloop} \n \n \\ENDFOR\n \\STATE\\label{alg.line.svd} Let $\\tenmatgen{\\ten{H}^{(2L)}}{L,L+1} = \\P\\S$ be a rank $R$ factorization.\n \\STATE Return the linear 2-RNN $(\\vec{h}_0,\\Aten,\\vecs{\\vvsinfsymbol})$ where \n \\begin{align*}\n \\vecs{\\szerosymbol}\\ &= (\\S^\\dagger)^\\top\\tenmatgen{\\ten{H}^{(L)}_f}{L+1},\\ \\ \\ \\ \\vecs{\\vvsinfsymbol}^\\top = \\P^\\dagger\\tenmatgen{\\ten{H}^{(L)}_f}{L,1}\\\\\n \\Aten\\ &= (\\tenmatgen{\\ten{H}^{(2L+1)}_f}{L,1,L+1})\\ttm{1}\\P^\\dagger\\ttm{3}(\\S^\\dagger)^\\top\n\\end{align*} 
\n\\end{algorithmic}\n\\end{algorithm}\n\nA few remarks on this theorem are in order. The first observation is that the $3$\ndatasets $D_L$, $D_{2L}$ and $D_{2L+1}$ can either be drawn independently or not~(e.g.\\ the sequences in $D_{L}$ can\nbe prefixes of the sequences in $D_{2L}$ but it is not necessary). In particular, the result still holds when the datasets $D_l$ are constructed from a unique dataset \n$S =\\{((\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_T^{(i)}),(\\vec{y}^{(i)}_1,\\vec{y}^{(i)}_2,\\cdots,\\vec{y}^{(i)}_T)) \\}_{i=1}^{N}$\nof input\/output sequences with $T\\geq 2L+1$, where $\\vec{y}^{(i)}_t = f(\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_t^{(i)})$ for any $t\\in[T]$.\nObserve that having access to such input\/output training sequences is not an unrealistic assumption: for example when training RNNs for\nlanguage modeling the output $\\vec{y}_t$ is the conditional probability vector of the next symbol, and for classification tasks the output is\nthe one-hot encoded label for all time steps. Lastly, when the outputs $\\vec{y}^{(i)}$ are noisy, one can solve the least-squares problem\n$\\norm{\\Ymat - \\mat{X}\\tenmatgen{\\ten{H}^{(l)}}{l,1}}^2_F$ to approximate the Hankel tensors; we will empirically evaluate this approach \nin Section~\\ref{sec:xp} and we defer its theoretical analysis in the noisy setting to future work.\n\n\n\n\\subsection{\\rev{Leveraging the low rank structure of the Hankel tensors}}\nWhile the least-squares method is sufficient to obtain the theoretical guarantees of Theorem~\\ref{thm:learning-2RNN}, it does not leverage\nthe low rank structure of the Hankel tensors $\\ten{H}^{(L)}$, $\\ten{H}^{(2L)}$ and $\\ten{H}^{(2L+1)}$. We now propose three alternative recovery\nmethods to leverage this structure, whose sample efficiency will be assessed in a simulation study in Section~\\ref{sec:xp}~(deriving improved sample\ncomplexity guarantees using these methods is left for future work). In the noiseless setting, we first propose to replace solving the linear \nsystem $\\Ymat = \\mat{X}\\tenmatgen{\\ten{H}^{(l)}}{l,1}$ with a nuclear norm minimization problem~(see line~\\ref{alg.line.nucnorm} of Algorithm~\\ref{alg:2RNN-SL}), thus leveraging the\nfact that $\\tenmatgen{\\ten{H}^{(l)}}{\\ceil{l\/2},l-\\ceil{l\/2} + 1}$ is potentially of low matrix rank. We also propose to use iterative hard thresholding~(IHT)~\\cite{jain2010guaranteed}\nand its tensor counterpart TIHT~\\cite{rauhut2017low}, which are based on the classical projected gradient descent algorithm and have shown to be\nrobust to noise in practice. These two methods are implemented in lines~\\ref{alg.line.iht.start}-\\ref{alg.line.iht.end} of Algorithm~\\ref{alg:2RNN-SL}. There,\nthe \\texttt{project} method either projects $\\tenmatgen{\\ten{H}^{(l)}}{\\ceil{l\/2},l-\\ceil{l\/2} + 1}$ onto the manifold of low rank matrices \nusing SVD~(IHT) or projects $\\ten{H}^{(l)}$ onto the manifold of tensors with TT-rank $R$~(TIHT). 
\n\n\\input{xp_figures.tex}\n\n\n\\rev{\nThe low rank structure of the Hankel tensors can also be leveraged to improve the scalability of the learning algorithm.\nOne can check that the computational complexity of Algorithm~\\ref{alg:2RNN-SL} is exponential in the maximum sequence length: indeed,\nbuilding the matrix $\\mat{X}$ in line~2 is already in $\\bigo{N_ld^l}$, where $l$ is in turn equal to $L,\\ 2L$ and $2L+1$.\nFocusing on the TIHT recovery method, a careful analysis shows that the computational complexity of the algorithm\nis in \n$$\\bigo{d^{2L+1}\\left(p(TN+R) +R^2\\right) + TL\\max(p,d)^{2L+3}}, $$\nwhere $N=\\max(N_L,N_{2L},N_{2L+1})$ and $T$ is the number of iterations of the loop on line~\\ref{alg.line.iht.start}.\nThus, in its present form, our approach cannot scale to high dimensional inputs and long sequences. However, one can leverage\nthe low tensor train rank structure of the Hankel tensors to circumvent this issue:\nby storing \nboth the estimates of the Hankel tensors $\\ten{H}^{(l)}$ and the matrices $\\mat{X}$ in TT format~(with decompositions of ranks $R$ and $N$ respectively),\nall the operations needed to implement Algorithm~\\ref{alg:2RNN-SL} with the TIHT recovery method can be performed in time $\\bigo{T(N+R)^3(Ld + p)}$~(more details can be found in the supplementary\nmaterial). By leveraging the tensor train structure, one can thus lift the dependency on $d^{2L+1}$ by paying the price of an increased cubic complexity \nin the number of examples $N$ and the number of states $R$. While the\ndependency on the number of states is not a major issue~($R$ should be negligible w.r.t. $N$), the dependency on $N^3$ can quickly become prohibitive for realistic application scenario. \nFortunately, this issue can be\ndealt with by using mini-batches of training data for the gradient updates on line~\\ref{alg.line.iht.gradient} instead of the whole dataset $D_l$, in which case the overall complexity\nof Algorithm~\\ref{alg:2RNN-SL} becomes $\\bigo{T(M+R)^3(Ld + p)}$ where $M$ is the mini-batch size~(the overall algorithm in TT format is summarized in Algorithm~\\ref{alg:2RNN-SL-TT} in the supplementary material).\n}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\input{xp}\n\n\n\n\n\n\\section{Conclusion and Future Directions}\n\nWe proposed the first provable learning algorithm for second-order RNNs with linear activation functions:\nwe showed that linear 2-RNNs are a natural extension of vv-WFAs to the setting of input sequences of \\emph{continuous vectors}~(rather than\ndiscrete symbol) and we extended the vv-WFA spectral learning\nalgorithm to this setting. We believe that the results presented in this paper open a number of exciting and promising research directions on both the\ntheoretical and practical perspectives. We first plan to use the spectral learning estimate as a starting point for\ngradient based methods to train non-linear 2-RNNs. More precisely, linear 2-RNNs can be thought of as 2-RNNs using LeakyRelu activation functions with negative slope $1$, therefore one could use \na linear 2-RNN as initialization before gradually reducing the negative slope parameter during training. The extension of the spectral method to linear 2-RNNs also\nopens the door to scaling up the classical spectral algorithm to problems with large discrete alphabets~(which is a known caveat of the spectral algorithm for WFAs) since\nit allows one to use low dimensional embeddings of large vocabularies~(using e.g.\\ word2vec or latent semantic analysis). 
From the theoretical perspective, we plan on \nderiving learning guarantees for linear 2-RNNs in the noisy setting~(e.g.\\ using the PAC learnability framework). Even though it is intuitive that such guarantees should hold~(given\nthe continuity of all operations used in our algorithm), we believe that such an analysis may entail results of independent interest. In particular, analogously to the\nmatrix case studied in~\\cite{cai2015rop}, obtaining rate-optimal convergence rates for the recovery of the low TT-rank Hankel tensors from rank-one measurements is an interesting\ndirection; such a result could for example allow one to improve the generalization bounds provided in~\\cite{balle2012spectral} for spectral learning of \ngeneral WFAs. \n\n\n\n\\newpage\n\\subsubsection*{Acknowledgements} This work was done while G. Rabusseau was an IVADO postdoctoral scholar at McGill University. \n\\bibliographystyle{plain}\n\n\n\\section*{\\centering \\LARGE Connecting Weighted Automata and Recurrent Neural Networks through Spectral Learning \\\\ \\vspace*{0.3cm}(Supplementary Material)}\n\n\n\\section{Proofs}\n\\subsection{Proof of Theorem~\\ref{thm:2RNN-vvWFA}}\n\\begin{theorem*}\nAny function that can be computed by a vv-WFA with $n$ states can be computed by a linear 2-RNN with $n$ hidden units.\nConversely, any function that can be computed by a linear 2-RNN with $n$ hidden units on sequences of one-hot vectors~(i.e.\\ canonical basis \nvectors) can be computed by a WFA with $n$ states.\n\n\nMore precisely, the WFA $A=\\vvwa$ with $n$ states and the linear 2-RNN $M=(\\vecs{\\szerosymbol},\\Aten,\\vecs{\\vvsinfsymbol})$ with\n$n$ hidden units, where $\\Aten\\in\\Rbb^{n\\times \\Sigma \\times n}$ is defined by $\\Aten_{:,\\sigma,:}=\\mat{A}^\\sigma$ for all $\\sigma\\in\\Sigma$, are\nsuch that\n$f_A(\\sigma_1\\sigma_2\\cdots\\sigma_k) = f_M(\\vec{x}_1,\\vec{x}_2,\\cdots,\\vec{x}_k)$ for all sequences of input symbols $\\sigma_1,\\cdots,\\sigma_k\\in\\Sigma$,\nwhere for each $i\\in[k]$ the input vector $\\vec{x}_i\\in\\Rbb^\\Sigma$ is\nthe one-hot encoding of the symbol $\\sigma_i$.\n\\end{theorem*}\n\\begin{proof}\nWe first show by induction on $k$ that, for any sequence $\\sigma_1\\cdots\\sigma_k\\in\\Sigma^*$, the hidden state $\\vec{h}_k$ computed by $M$~(see\nEq.~\\eqref{eq:2RNN.definition})\non the corresponding one-hot encoded sequence $\\vec{x}_1,\\cdots,\\vec{x}_k\\in\\Rbb^d$\nsatisfies $\\vec{h}_k = (\\mat{A}^{\\sigma_1}\\cdots\\mat{A}^{\\sigma_k})^\\top\\vecs{\\szerosymbol}$. The case $k=0$\nis immediate. Suppose the result is true for sequences of length up to $k$. One can easily check that $\\Aten\\ttv{2}\\vec{x}_i = \\mat{A}^{\\sigma_i}$\nfor any index $i$. 
Using the induction hypothesis it then follows that\n\\begin{align*}\n\\vec{h}_{k+1} &= \\Aten \\ttv{1}\\vec{h}_k \\ttv{2} \\vec{x}_{k+1} = \\mat{A}^{\\sigma_{k+1}}\\ttv{1} \\vec{h}_k = (\\mat{A}^{\\sigma_{k+1}})^\\top \\vec{h}_k\\\\\n&= (\\mat{A}^{\\sigma_{k+1}})^\\top (\\mat{A}^{\\sigma_1}\\cdots\\mat{A}^{\\sigma_k})^\\top\\vecs{\\szerosymbol} = (\\mat{A}^{\\sigma_1}\\cdots\\mat{A}^{\\sigma_{k+1}})^\\top\\vecs{\\szerosymbol} .\n\\end{align*} \nTo conclude, we thus have\n\\begin{equation*}\nf_M(\\vec{x}_1,\\vec{x}_2,\\cdots,\\vec{x}_k) = \\vecs{\\vvsinfsymbol}\\vec{h}_{k} = \\vecs{\\vvsinfsymbol}(\\mat{A}^{\\sigma_1}\\cdots\\mat{A}^{\\sigma_{k}})^\\top\\vecs{\\szerosymbol} = f_A(\\sigma_1\\sigma_2\\cdots\\sigma_k).\\qedhere\n\\end{equation*}\n\\end{proof}\n\n\n\\subsection{Proof of Theorem~\\ref{thm:2RNN-SL}}\n\n\n\\begin{theorem*}\nLet $f:(\\Rbb^d)^*\\to \\Rbb^p$ be a function computed by a minimal linear $2$-RNN with $n$ hidden units and let\n$L$ be an integer such that $\\rank(\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1}) = n$.\n\nThen, for any $\\P\\in\\Rbb^{d^L\\times n}$ and $\\S\\in\\Rbb^{n\\times d^Lp}$ such that $\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1} = \\P\\S$, the\nlinear 2-RNN $M=(\\vecs{\\szerosymbol},\\Aten,\\vecs{\\vvsinfsymbol})$ defined by\n$$\\vecs{\\szerosymbol} = (\\S^\\dagger)^\\top\\tenmatgen{\\ten{H}^{(L)}_f}{L+1},\\ \\ \\ \\ \\Aten = (\\tenmatgen{\\ten{H}^{(2L+1)}_f}{L,1,L+1})\\ttm{1}\\P^\\dagger\\ttm{3}(\\S^\\dagger)^\\top,\\ \\ \\ \\ \n\\vecs{\\vvsinfsymbol}^\\top = \\P^\\dagger\\tenmatgen{\\ten{H}^{(L)}_f}{L,1}$$\nis a minimal linear $2$-RNN computing $f$.\n\\end{theorem*}\n\\begin{proof}\nLet $(\\vecs{\\szerosymbol}^\\star,\\Aten^\\star,\\vecs{\\vvsinfsymbol}^\\star)$ be a minimal linear $2$-RNN computing $f$, and let $\\P\\in\\Rbb^{d^L\\times n}$ and $\\S\\in\\Rbb^{n\\times d^Lp}$ be such that $\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1} = \\P\\S$.\nDefine the tensors \n$$\\Pten^* = \\TT{\\Aten^\\star\\ttv{1}\\vecs{\\szerosymbol}^\\star, \\underbrace{\\Aten^\\star, \\cdots, \\Aten^\\star}_{L-1\\text{ times}}, \\mat{I}_n}\\in\\Rbb^{d\\times\\cdots\\times d\\times n}\\ \\ \\ \\ \n\\text{ and }\\ \\ \\ \\ \n\\Sten^* = \\TT{\\mat{I}_n,\\underbrace{\\Aten^\\star, \\cdots, \\Aten^\\star}_{L\\text{ times}}, (\\vecs{\\vvsinfsymbol}^\\star)^\\top}\\in\\Rbb^{n\\times d\\times\\cdots\\times d\\times p}$$ \nof order $L+1$ and $L+2$ respectively, and let $\\P^\\star = \\tenmatgen{\\Pten^*}{L,1} \\in\\Rbb^{d^L\\times n}$ \nand $\\S^\\star = \\tenmatgen{\\Sten^*}{1,L+1} \\in\\Rbb^{n\\times d^Lp}$. Using the identity $\\ten{H}^{(j)}_f = \\TT{\\Aten^\\star\\ttv{1}\\vecs{\\szerosymbol}^\\star, \\underbrace{\\Aten^\\star, \\cdots, \\Aten^\\star}_{j-1\\text{ times}}, (\\vecs{\\vvsinfsymbol}^\\star)^\\top}$ for\nany $j$, one can easily check the following identities:\n\\begin{gather*}\n\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1} = \\P^\\star\\S^\\star,\\ \\ \\ \\ \\tenmatgen{\\ten{H}^{(2L+1)}_f}{L,1,L+1}= \\Aten^\\star \\ttm{1} \\P^\\star \\ttm{3} (\\S^\\star)^\\top,\\\\\n\\tenmatgen{\\ten{H}^{(L)}_f}{L,1} = \\P^\\star(\\vecs{\\vvsinfsymbol}^\\star)^\\top, \\ \\ \\ \\ \\ \\ \\\n\\tenmatgen{\\ten{H}^{(L)}_f}{L+1} = (\\S^\\star)^\\top\\vecs{\\szerosymbol}^\\star.\n\\end{gather*}\n\nLet $\\mat{M} = \\P^\\dagger\\P^\\star$. We will show that $\\vecs{\\szerosymbol} = \\mat{M}^{-\\top}\\vecs{\\szerosymbol}^\\star$, $\\Aten = \\Aten^\\star \\ttm{1}\\mat{M}\\ttm{3}\\mat{M}^{-\\top}$ and\n$\\vecs{\\vvsinfsymbol} = \\mat{M}\\vecs{\\vvsinfsymbol}^\\star$, which will entail the results since linear 2-RNNs are invariant under change of basis~(see Section~\\ref{sec:prelim}). First observe that $\\mat{M}^{-1} = \\S^\\star\\S^\\dagger$. 
Indeed,\nwe have\n$\\P^\\dagger\\P^\\star\\S^\\star\\S^\\dagger = \\P^\\dagger\\tenmatgen{\\ten{H}^{(2l)}_f}{l,l+1}\\S^\\dagger = \\P^\\dagger\\P\\S\\S^\\dagger = \\mat{I}$\nwhere we used the fact that $\\P$~(resp. $\\S$) is of full column rank~(resp. row rank) for the last equality. \n\nThe following derivations then follow from basic tensor algebra:\n\\begin{align*}\n\\vecs{\\szerosymbol} \n&= \n(\\S^\\dagger)^\\top\\tenmatgen{\\ten{H}^{(L)}_f}{L+1} \n=\n(\\S^\\dagger)^\\top (\\S^\\star)^\\top\\vecs{\\szerosymbol}\n=\n(\\S^\\star\\S^\\dagger)^\\top\n=\n\\mat{M}^{-\\top}\\vecs{\\szerosymbol}^\\star,\\\\\n\\ \\\\\n\\Aten \n&= \n(\\tenmatgen{\\ten{H}^{(2L+1)}_f}{L,1,L+1})\\ttm{1}\\P^\\dagger\\ttm{3}(\\S^\\dagger)^\\top\\\\\n&=\n(\\Aten^\\star \\ttm{1} \\P^\\star \\ttm{3} (\\S^\\star)^\\top)\\ttm{1}\\P^\\dagger\\ttm{3}(\\S^\\dagger)^\\top\\\\\n&=\n\\Aten^\\star \\ttm{1} \\P^\\dagger\\P^\\star \\ttm{3} (\\S^\\star\\S^\\dagger)^\\top = \\Aten^\\star \\ttm{1}\\mat{M}\\ttm{3}\\mat{M}^{-\\top},\\\\\n\\ \\\\\n\\vecs{\\vvsinfsymbol}^\\top \n&= \n\\P^\\dagger\\tenmatgen{\\ten{H}^{(L)}_f}{L,1}\n=\n\\P^\\dagger\\P^\\star(\\vecs{\\vvsinfsymbol}^\\star)^\\top\n=\n\\mat{M}\\vecs{\\vvsinfsymbol}^\\star,\n\\end{align*}\nwhich concludes the proof.\n\\end{proof}\n\n\n\n\n\n\n\n\\subsection{Proof of Theorem~\\ref{thm:learning-2RNN}}\n\n\n\\begin{theorem*}\nLet $(\\vec{h}_0,\\Aten,\\vecs{\\vvsinfsymbol})$ be a minimal linear 2-RNN with $n$ hidden units computing a function $f:(\\Rbb^d)^*\\to \\Rbb^p$, and let $L$ be an integer\\footnote{Note that the theorem can be adapted if such an integer $L$ does not exists~(see supplementary material).}\nsuch that $\\rank(\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1}) = n$.\n\nSuppose we have access to $3$ datasets\n$D_l = \\{((\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_l^{(i)}),\\vec{y}^{(i)}) \\}_{i=1}^{N_l}\\subset(\\Rbb^d)^l\\times \\Rbb^p$ for $l\\in\\{L,2L,2L+1\\}$ \nwhere the entries of each $\\vec{x}^{(i)}_j$ are drawn independently from the standard normal distribution and where each\n$\\vec{y}^{(i)} = f(\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_l^{(i)})$.\n\nThen, whenever $N_l \\geq d^l$ for each $l\\in\\{L,2L,2L+1\\}$,\nthe linear 2-RNN $M$ returned by Algorithm~\\ref{alg:2RNN-SL} with the least-squares method satisfies $f_M = f$ with probability one.\n\\end{theorem*}\n\\begin{proof}\nWe just need to show for each $l\\in \\{L,2L,2L+1\\}$ that, under the hypothesis of the Theorem, the Hankel tensors $\\hat{\\ten{H}}^{(l)}$ computed in line~\\ref{alg.line.lst-sq} of\nAlgorithm~\\ref{alg:2RNN-SL} are equal to the true Hankel tensors $\\ten{H}^{(l)}$ with probability one. Recall that these tensors are computed by solving the least-squares\nproblem\n$$\\hat{\\ten{H}}^{(l)} = \\argmin_{T\\in \\Rbb^{d\\times\\cdots\\times d\\times p}} \\norm{\\mat{X}\\tenmatgen{\\ten{T}}{l,1} - \\Ymat}_F^2$$\nwhere $\\mat{X}\\in\\Rbb^{N_l\\times d_l}$ is the matrix with rows $\\vec{x}^{(i)}_1\\otimes\\vec{x}_2^{(i)}\\otimes\\cdots\\otimes\\vec{x}_l^{(i)}$ for each $i\\in[N_l]$. Since $\\mat{X}\\tenmatgen{\\ten{H}^{(l)}}{l,1} = \\Ymat$ and since the solution\nof the least-squares problem is unique as soon as $\\mat{X}$ is of full column rank, we just need to show that this is the case with probability one\nwhen the entries of the vectors $\\vec{x}^{(i)}_j$ are drawn at random from a standard normal distribution. 
The result will then directly follow\nby applying Theorem~\\ref{thm:2RNN-SL}.\n\nWe will show that the set \n$$\\Scal = \\{ (\\vec{x}_1^{(i)},\\cdots, \\vec{x}_l^{(i)}) \\mid \\ i\\in[N_l],\\ dim(span(\\{ \\vec{x}^{(i)}_1\\otimes\\vec{x}_2^{(i)}\\otimes\\cdots\\otimes\\vec{x}_l^{(i)} \\})) < d^l\\} $$\nhas Lebesgue measure $0$ in $((\\Rbb^d)^{l})^{N_l}\\simeq \\Rbb^{dlN_l}$ as soon as $N_l \\geq d^l$, which will imply that it has probability $0$ under any continuous probability, hence\nthe result. For any $S=\\{(\\vec{x}_1^{(i)},\\cdots, \\vec{x}_l^{(i)})\\}_{i=1}^{N_l}$, we denote by $\\mat{X}_S\\in\\Rbb^{N_l\\times d^l}$ the matrix with rows $\\vec{x}^{(i)}_1\\otimes\\vec{x}_2^{(i)}\\otimes\\cdots\\otimes\\vec{x}_l^{(i)}$.\nOne can easily check that $S\\in\\Scal$ if and only if $\\mat{X}_S$ is of rank strictly less than $d^l$, which is equivalent to the determinant of \n$\\mat{X}_S^\\top\\mat{X}_S$ being equal to $0$. Since this determinant is a polynomial in the entries of the vectors $\\vec{x}_j^{(i)}$, $\\Scal$ is an algebraic\nsubvariety of $\\Rbb^{dlN_l}$.\nIt is then easy to check that the polynomial $det(\\mat{X}_S^\\top\\mat{X}_S)$ is not uniformly 0 when $N_l \\geq d^l$. Indeed, \nit suffices to choose the vectors $\\vec{x}_j^{(i)}$ such that the family $(\\vec{x}^{(i)}_1\\otimes\\vec{x}_2^{(i)}\\otimes\\cdots\\otimes\\vec{x}_l^{(i)})_{n=1}^{N_l}$ spans the whole space \n$\\Rbb^{d^l}$~(which is possible since we can choose arbitrarily any of the $N_l\\geq d^l$ elements of this family), hence the result. \nIn conclusion, $\\Scal$ is a proper algebraic subvariety of $\\Rbb^{dlN_l}$ and hence has Lebesgue\nmeasure zero~\\cite[Section 2.6.5]{federer2014geometric}.\n\n\n\n\\end{proof}\n\n\n\\section{Lifting the simplifying assumption}\\label{app:lift}\nWe now show how all our results still hold when there does not exist an $L$ such that $\\rank(\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1}) = n$.\nRecall that this simplifying assumption followed from assuming that the sets $\\Pcal=\\Scal=[d]^L$ form a complete basis for the function\n$\\tilde{f}:[d]^*\\to \\Rbb^p$ defined by $\\tilde{f}(i_1i_2\\cdots i_k) = f(\\ten_{i_1},\\ten_{i_2},\\cdots,\\ten_{i_k})$. \nWe first show that there always exists an integer $L$ such that $\\Pcal=\\Scal=\\cup_{i\\leq L} [d]^i$ forms a complete basis for $\\tilde{f}$.\n Let $M = (\\vecs{\\szerosymbol}^\\star,\\Aten^\\star,\\vecs{\\vvsinfsymbol}^\\star)$ be a linear 2-RNN with $n$ hidden units computing\n$f$~(i.e.\\ such that $f_M=f$). It follows from Theorem~\\ref{thm:2RNN-vvWFA} and from the discussion at the beginning of Section~\\ref{subsec:SL-2RNN}\nthat there exists a vv-WFA computing $\\tilde{f}$ and it is easy to check that $\\rank(\\tilde{f}) = n$. This implies $\\rank(\\tenmatpar{\\ten{H}_f}{1}) = n$ by\nTheorem~\\ref{thm:fliess-vvWFA}. Since $\\Pcal=\\Scal=\\cup_{i\\leq l} [d]^i$ converges to $[d]^*$ as $l$ grows to infinity, there exists an $L$ such that \nthe finite sub-block $\\tilde{\\ten{H}}_f \\in \\Rbb^{\\Pcal\\times\\Scal\\times p}$ of $\\ten{H}_f\\in \\Rbb^{[d]^*\\times[d]^*\\times p}$\nsatisfies $\\rank(\\tenmatpar{\\tilde{\\ten{H}}_f}{1}) = n$, i.e.\\ such that\n$\\Pcal=\\Scal=\\cup_{i\\leq L} [d]^i$ forms a complete basis for $\\tilde{f}$. 
\n\nNow consider the finite sub-blocks $\\tilde{\\ten{H}}^{+}_f\\in \\Rbb^{\\Pcal \\times [d] \\times \\Scal \\times p}$ and $\\tilde{\\H}^{-}_f\\in \\Rbb^{\\Pcal \\times p}$ of $\\ten{H}_f$ defined by\n$$(\\tilde{\\ten{H}}^{+}_f)_{u,i,v,:}=\\tilde{f}(uiv),\\ \\ \\ \\text{and} (\\tilde{\\H}^{-}_f)_{u,:}=f(u)$$\nfor any $u\\in \\Pcal=\\Scal$ and any $i\\in [d]$. One can check that Theorem~\\ref{thm:2RNN-SL} holds by replacing \\emph{mutatis mutandi}\n$\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1}$ by $\\tenmatpar{\\tilde{\\ten{H}}_f}{1}$, $\\tenmatgen{\\ten{H}^{(2L+1)}_f}{L,1,L+1}$ by $\\tilde{\\ten{H}}^{+}_f$, $\\tenmatgen{\\ten{H}^{(L)}_f}{L,1}$ by $\\tilde{\\H}^{-}_f$\nand $\\tenmatgen{\\ten{H}^{(L)}_f}{L+1}$ by $\\vectorize{\\tilde{\\H}^{-}_f}$.\n\nTo conclude, it suffices to observe that both $\\tilde{\\ten{H}}^{+}_f$ and $\\tilde{\\H}^{-}_f$ can be constructed\nfrom the entries for the tensors $\\ten{H}^{(l)}$ for $1\\leq l \\leq 2L+1$, which can be recovered~(or estimated in the noisy setting) using\nthe techniques described in Section~\\ref{subsec:Hankel.tensor.recovery}~(corresponding to lines \\ref{alg.firstline.forloop}-\\ref{alg.lastline.forloop} of\nAlgorithm~\\ref{alg:2RNN-SL}).\n\nWe thus showed that linear 2-RNNs can be provably learned even when there does not exist an $L$ such that $\\rank(\\tenmatgen{\\ten{H}^{(2L)}_f}{L,L+1}) = n$. In this\nsetting, one needs to estimate enough of the tensors $\\ten{H}^{(l)}$ to reconstruct a complete sub-block $\\tilde{\\ten{H}}_f$ of the Hankel tensor \n$\\ten{H}$~(along with the corresponding tensor $\\tilde{\\ten{H}}^{+}_f$ and matrix $\\tilde{\\H}^{-}_f$)\nand recover the linear 2-RNN by applying Theorem~\\ref{thm:2RNN-SL}. In addition, one needs to have access to sufficiently large datasets \n$D_l$ for each $l\\in [2L+1]$ rather than only the three datasets mentioned in Theorem~\\ref{thm:learning-2RNN}. However the data requirement remains the\nsame in the case where we assume that each of the datasets $D_l$ is constructed from a unique training dataset \n$S =\\{((\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_T^{(i)}),(\\vec{y}^{(i)}_1,\\vec{y}^{(i)}_2,\\cdots,\\vec{y}^{(i)}_T)) \\}_{i=1}^{N}$\nof input\/output sequences.\n\n\\section{Leveraging the tensor train structure for computational efficiency}\nThe overall learning algorithm using the TIHT recovery method in TT format is summarized in Algorithm~\\ref{alg:2RNN-SL-TT}. The key ingredients to improve the complexity of Algorithm~\\ref{alg:2RNN-SL} are (i) to estimate the gradient using mini-batches of data and (ii) to directly use the TT format to represent and perform operations on the tensors $\\ten{H}^{(l)}$ and the tensors $\\Xten^{(l)}\\in\\Rbb^{M\\times d\\times \\cdots \\times d}$ defined by\n\\begin{equation}\\label{eq:Xten}\n \\Xten_{i,:,\\cdots,:}= \\vec{x}^{(i)}_1\\otimes\\vec{x}_2^{(i)}\\otimes\\cdots\\otimes\\vec{x}_l^{(i)}\\ \\ \\ \\text{for }i\\in[M]\n\\end{equation}\nwhere $M$ is the size of a mini-batch of training data~($\\ten{H}^{(l)}$ is of TT-rank $R$ by design and it can easily be shown that $\\Xten^{(l)}$ is of TT-rank at most $M$, cf. Eq.~\\eqref{eq:XtenTT}). \nThen, all the operations of the algorithm can be expressed in terms of these tensors and performed efficiently in TT format.\nMore precisely, the products and sums needed to compute the gradient update on line~\\ref{algtt.line.iht.gradient} can be performed in~$\\bigo{(R+M)^2(ld+p)+(R+M)^3d}$. 
After the gradient update, the tensor $\\ten{H}^{(l)}$ has TT-rank at most $(M+R)$ but can be efficiently projected back to a tensor of TT-rank $R$ using the tensor train rounding operation~\\cite{oseledets2011tensor} in $\\bigo{(R+M)^3(ld+p)}$~(which is the operation dominating the complexity of the whole algorithm).\nThe subsequent operations on line~\\ref{line.algtt.return} can be performed efficiently in the TT format in~$\\bigo{R^3d+R^2p}$~(using the method described in~\\cite{klus2018tensor} to compute the pseudo-inverses of the matrices $\\P$ and $\\S$). \nThe overall complexity of Algorithm~2 is thus in~$\\bigo{T(R+M)^3(Ld+p)}$ where $T$ is the number of iterations of the inner loop.\n\n\n\\begin{algorithm}\n \\caption{\\texttt{2RNN-SL-TT}: Spectral Learning of linear 2-RNNs \\textbf{in tensor train format}}\n \\label{alg:2RNN-SL-TT}\n\\begin{algorithmic}[1]\n \\REQUIRE Three training datasets $D_L,D_{2L},D_{2L+1}$ with input sequences of length $L$, $2L$ and $2L+1$ respectively, rank $R$, learning rate $\\gamma$ and\n mini-batch size $M$.\n \n \\FOR{$l\\in\\{L,2L,2L+1\\}$}\n \\STATE Initialize all cores of the rank $R$ TT-decomposition $\\ten{H}^{(l)} = \\TT{\\Gten^{(l)}_1,\\cdots,\\Gten^{(l)}_{l+1}} \\in \\Rbb^{d\\times\\cdots\\times d\\times p}$ to $\\mat{0}$.\\\\\n \/\/ \\emph{Note that all the updates of $\\ten{H}^{(l)}$ stated below are in effect applied directly to the core tensors $\\Gten^{(l)}_k$, i.e. the tensor $\\ten{H}^{(l)}$ is never \n explicitely constructed.}\n \n \\REPEAT\\label{algtt.line.iht.start}\n \\STATE Subsample a minibatch $$\\{((\\vec{x}^{(i)}_1,\\vec{x}_2^{(i)},\\cdots,\\vec{x}_l^{(i)}),\\vec{y}^{(i)}) \\}_{i=1}^{M}\\subset(\\Rbb^d)^l\\times \\Rbb^p$$ of size $M$ from $D_l$.\n \\STATE Compute the rank $M$ TT-decomposition of the tensor $\\Xten=\\Xten^{(l)}$~(defined in Eq.~\\eqref{eq:Xten}), which is given by\n \\begin{equation}\\label{eq:XtenTT}\n \\Xten = \\TT{\\mat{I}_M, \\Aten_1,\\cdots,\\Aten_l} \\text{ where the cores are defined by } (\\Aten_k)_{i,:,j}=\\delta_{ij}\\vec{x}^{(i)}_k \\ \\ \\text{ and } \\ \\ (\\Aten_l)_{i,:} = \\vec{x}^{(i)}_k\n \\end{equation}\n for all $1\\leq k