\\section*{Introduction} \\setcounter{equation}{0}\n\nTensors are multilinear objects ubiquitous in applications, where they appear in different flavours, see e.g. \\cite{allman2008phylogenetic, bernardi2012algebraic, bernardi2021high, comon2014tensors}. They can be seen as elements of $V_1 \\otimes \\dots \\otimes V_d$, where each $V_i$ is a finite dimensional vector space over some field $\\mathbb{K}$. Since tensors carry an additive structure, a particular interest is {\\it tensor decomposition}, i.e. the additive decomposition of tensors into sums of elementary objects, often referred to as tensors of {\\it rank 1}. Throughout this paper we assume that $\\mathbb{K}$ is algebraically closed and of characteristic $0$.\n\n\\noindent {\\it Structured} tensors are tensors with prescribed symmetries between the factors of the tensor product. Also in this case we can talk about structured tensors of {\\it structured rank} $1$, which are in some sense the most elementary tensors with that structure. \n\n\\noindent Some examples are {\\it symmetric} tensors $\\Sym^d V$, i.e. tensors invariant under permutations of the factors, and {\\it skew-symmetric} tensors $\\superwedge^d V$, i.e. tensors invariant under permutations of the factors up to the sign of the permutation. In these instances the tensors of {\\it structured rank} $1$ are determined by the underlying geometry. Indeed, since both $\\Sym^d V$ and $\\superwedge^d V$ are irreducible representations of the group $SL(V)$, one can consider the highest weight vector of each of them and its orbit in the projectivization under the group action. These orbits turn out to be projective varieties, namely the Veronese and Grassmann varieties respectively. 
The (skew-)symmetric tensors of (skew-)symmetric rank $1$ are the points of the Veronese (Grassmann) variety. In the symmetric case, due to the canonical identification of $\\Sym^d V$ with the vector space of homogeneous polynomials of degree $d$ in $\\dim V$ variables, the symmetric rank $1$ tensors are $d$-th powers of linear forms $l^d$, with $l \\in V$. In the skew-symmetric case, the tensors of skew-symmetric rank $1$ look like $v_1 \\wedge \\dots \\wedge v_d$ for some $v_1,\\dots,v_d \\in V$. \\smallskip\n\n\\noindent The irreducible representations of $SL(V)$ are known and usually described as {\\it Schur modules}, as defined in \\cite{fulton2013representation}. The respective minimal orbit inside their projectivization is in general a {\\it Flag variety}, and tensors of structured rank $1$ in this general context represent flags, possibly partial, of the vector space $V$. \\smallskip\n\nComing back to (skew-)symmetric tensors, clearly one would like to compute the (skew-)symmetric rank of any (skew-)symmetric tensor. In both cases, chronologically first in the symmetric one, see \\cite{iarrobino1999power}, and later in the skew-symmetric one, see \\cite{arrondo2021skew}, {\\it apolarity theories} have been developed to compute the ranks of such tensors. Even though they may seem different, these two theories share a common key idea, which is {\\it evaluation}. Starting from the usual contraction $V^* \\times V \\longrightarrow \\mathbb{K}$, one gets respectively the maps\n\n$$ \\Sym^d V \\otimes \\Sym^e V^* \\longrightarrow \\Sym^{d-e} V\\quad \\text{and} \\quad \\superwedge^d V \\otimes \\superwedge^e V^* \\longrightarrow \\superwedge^{d-e} V$$\ncalled {\\it apolarity actions}. Consequently, given any (skew-)symmetric tensor, one can compute its annihilator, i.e. the set of elements in the dual world, symmetric or skew-symmetric respectively, that kill the given tensor in any degree via the suitable apolarity action. 
Such an annihilator turns out to be an ideal in the symmetric or exterior algebra of $V^*$ respectively. In both cases, given a tensor of (skew-)symmetric rank $1$, there is an {\\it ideal} attached to it, generated in the respective symmetric or exterior algebra. The ideal of a sum of several rank $1$ tensors is just the intersection of the respective ideals in both cases. The key result in both theories is the {\\it apolarity lemma}, which has the following simple statement: finding a decomposition of a (skew-)symmetric tensor into a sum of rank $1$ (skew-)symmetric tensors is equivalent to finding an inclusion of the ideal of the rank $1$ (skew-)symmetric tensors involved in the decomposition inside the annihilator of the (skew-)symmetric tensor we would like to decompose. Remark that if a (skew-)symmetric tensor admits more than one decomposition, the apolarity lemma implies that the ideals associated to all such decompositions are contained in the annihilator. This naturally leads to the following question.\n\n\\begin{question}\nIs it possible to define an apolarity theory for any other irreducible representation of $SL(V)$?\n\\end{question}\n\nThe main motivation of this document is to answer this question. We will present a suitable apolarity action which will be called the {\\it Schur apolarity action}. If we denote with $\\mathbb{S}_{\\lambda} V$ the Schur module determined by the partition $\\lambda$, it is the map\n\n$$ \\mathbb{S}_{\\lambda} V \\otimes \\mathbb{S}_{\\mu} V^* \\longrightarrow \\mathbb{S}_{\\lambda \/ \\mu} V $$\nwhere $\\mathbb{S}_{\\lambda \/ \\mu} V$ is called a {\\it skew Schur module}, cf. Definition \\ref{skewSchurModule} of this article. Several examples as well as an analogous {\\it Schur apolarity lemma} are provided. \\smallskip\n\nThe content of this article is original, but it is worth noting the strong connection of the Schur apolarity with the {\\it non-abelian apolarity}. 
First introduced in \\cite{landsberg2013equations} to seek new sets of equations of secant varieties of Veronese varieties, the latter is an apolarity theory which implements vector bundle techniques for any projective variety $X$. More specifically, the non-abelian apolarity action always has an irreducible representation as its codomain. By contrast, in the Schur apolarity case the skew Schur module is in general reducible. Hence we may think of the Schur apolarity as a slight generalization of the non-abelian one. Another apolarity theory worth noting is described in \\cite{galkazka2016multigraded} for toric varieties, which also features the use of Cox rings. Earlier, the special case in which the toric variety is a Segre-Veronese variety was introduced in \\cite{catalisano2002ranks} as {\\it multigraded apolarity}.\n\n\\smallskip\n\nThe article is organized as follows. In Section \\ref{primasez} we recall the basic definitions needed to develop the theory. In Section \\ref{secondasez} we give a description of the maps $\\mathbb{S}_{\\lambda} V \\otimes \\mathbb{S}_{\\mu} V \\longrightarrow \\mathbb{S}_{\\nu}V$ which turn out to be useful in the Schur apolarity setting. In Section \\ref{terzasez} we introduce the Schur apolarity action. In Section \\ref{quartasez} we state and prove the Schur apolarity lemma. In Section \\ref{quintasez} we give a description of the ranks of certain linear maps induced by the apolarity action when a structured tensor has structured rank $1$. In Section \\ref{sestasez} we investigate the secant variety of the Flag variety of flags $\\mathbb{C}^1 \\subset \\mathbb{C}^k \\subset \\mathbb{C}^n$ together with an algorithm which distinguishes tensors of the secant variety with different ranks.\n\\bigskip\n\n\n\n\n\\section{Notation and basic definitions} \\label{primasez} \\setcounter{equation}{0} \\medskip\n\nLet $V$ be a vector space of dimension $\\operatorname{dim}(V) = n < \\infty$ over a field $\\mathbb{K}$. 
In the following $\\mathbb{K}$ is always intended algebraically closed and of characteristic $0$. We adopt the following notation for the algebras:\n\\begin{enumerate}\n\\item[$\\bullet$] $\\Sym^{\\bullet}{V} = \\bigoplus_{k\\geq 0} \\Sym^k V$ is the {\\it symmetric} algebra, i.e. the algebra of polynomials in $n$ variables;\n\\item[$\\bullet$] $\\superwedge^{\\bullet} V = \\bigoplus_{k \\geq 0} \\superwedge^k V$ is the {\\it exterior} algebra.\n\\end{enumerate}\n\n\\noindent We will indicate with $V^*$ the dual vector space of $V$, i.e. the vector space of linear functionals defined on $V$. Remark that $V^*$ comes with a natural pairing $\\operatorname{tr} \\colon V \\times V^* \\to \\mathbb{K}$, called the {\\it trace}. \n\n\\noindent The group $SL(V)$ is the group of automorphisms of $V$ inducing the identity on the space $\\superwedge^n V$. Such a group defines an action on $V$\n\\begin{align*} SL(V) &\\times V\\ \\to\\ V \\\\ (&g , v) \\longmapsto g(v). \\end{align*}\nThis action can be extended to $V^{\\otimes d}$ by setting \n\n$$g \\cdot (v_1 \\otimes \\dots \\otimes v_d) = g(v_1) \\otimes \\dots \\otimes g(v_d).$$\n\n\\noindent The symmetric group $\\mathfrak{S}_d$ acts on $V^{\\otimes d}$ by permuting the factors of the tensor product. It can be easily seen that the actions of $SL(V)$ and $\\mathfrak{S}_d$ commute. These standard facts and their consequences in terms of representations are classical (cf. \\cite{fulton2013representation}).\n\\medskip\n\nIn order to develop an apolarity theory for any irreducible representation of the special linear group we need to set some notation and basic definitions. The facts we are going to recall are borrowed from \\cite{fulton2013representation}, \\cite{fulton1997young}, \\cite{sturmfels2008algorithms} and \\cite{procesi2007lie}. \\smallskip\n\nIn the following we regard the irreducible representations of $SL(n)$ as Schur modules. 
They are denoted with $\\mathbb{S}_{\\lambda}V$ where $\\lambda = (\\lambda_1,\\dots,\\lambda_k)$, with $k < n$, is a partition. The number $k$ is the {\\it length} of the partition. We denote with $|\\lambda| := \\lambda_1 + \\dots + \\lambda_k$ the number partitioned by $\\lambda$. Sometimes we may also write $\\lambda \\vdash |\\lambda|$ to underline that $\\lambda$ is a partition of $|\\lambda|$. We follow \\cite{fulton2013representation} for a construction of these representations. \n\n\\noindent Fix a partition $\\lambda$ of the integer $d$ whose length is less than $\\dim(V)$. We can draw its {\\it Young diagram} placing $\\lambda_1$ boxes in a row, $\\lambda_2$ boxes below it and so on, where all the rows are left justified. A filling with positive integers turns the diagram into a {\\it tableau of shape} $\\lambda$. A tableau of shape $\\lambda$ is called {\\it semistandard} if its rows, read from left to right, are weakly increasing, while its columns, read from top to bottom, are strictly increasing. We will abbreviate this with just {\\it sstd} tableau. A sstd tableau of shape $\\lambda$ is called {\\it standard} if also the row sequences are strictly increasing and there are no repetitions. In this case we will simply write {\\it std} tableau. Let $T$ be a std tableau of shape $\\lambda$ with entries in $\\{1,\\dots,d\\}$. The {\\it Young symmetrizer associated to $\\lambda$ and $T$} is the endomorphism $c_{\\lambda}^T$ of $V^{\\otimes d}$ which sends the tensor $v_1 \\otimes \\dots \\otimes v_d$ to\n\n\\begin{equation} \\label{YoungSymm} \\sum_{\\tau \\in C_{\\lambda}} \\sum_{\\sigma \\in R_{\\lambda}} \\operatorname{sign}(\\tau) v_{\\sigma(\\tau(1))} \\otimes \\dots \\otimes v_{\\sigma(\\tau(d))} \\end{equation}\n\n\\noindent where $C_{\\lambda}$, respectively $R_{\\lambda}$, is the subgroup of $\\mathfrak{S}_d$ of permutations which fix the content of every column, respectively of every row, of $T$. 
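\noindent The Young symmetrizer is straightforward to evaluate on small examples. The following Python sketch (a toy illustration, with function names of our choosing) represents a tensor in the monomial basis as a dictionary mapping index tuples to coefficients; it applies the row symmetrization first and then the signed column symmetrization, which is the composition $b_{\\lambda} \\cdot a_{\\lambda}$ recalled later in the text and the convention matching the worked examples of this section.

```python
from itertools import permutations, product

def block_perms(blocks, d):
    """All permutations of {0,...,d-1} permuting each block within itself."""
    perms = []
    for choice in product(*[permutations(b) for b in blocks]):
        p = list(range(d))
        for block, image in zip(blocks, choice):
            for src, tgt in zip(block, image):
                p[src] = tgt
        perms.append(tuple(p))
    return perms

def sign(p):
    """Sign of a permutation given as a tuple of images."""
    s, seen = 1, [False] * len(p)
    for i in range(len(p)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j, length = p[j], length + 1
            s *= (-1) ** (length - 1)
    return s

def young_symmetrizer(tableau, tensor):
    """Apply the symmetrizer of a std tableau (rows of 1-based entries):
    slot i of each output term receives v_{sigma(tau(i))}."""
    d = len(next(iter(tensor)))
    rows = [[e - 1 for e in row] for row in tableau]
    ncols = len(tableau[0])
    cols = [[row[j] - 1 for row in tableau if j < len(row)] for j in range(ncols)]
    R, C = block_perms(rows, d), block_perms(cols, d)
    out = {}
    for t, coeff in tensor.items():
        for tau in C:
            for sig in R:
                key = tuple(t[sig[tau[i]]] for i in range(d))
                out[key] = out.get(key, 0) + sign(tau) * coeff
    return {k: v for k, v in out.items() if v}

# c applied to v_1 (x) v_1 (x) v_2 for the std tableau with rows (1,2),(3):
result = young_symmetrizer([[1, 2], [3]], {(1, 1, 2): 1})
```

On this input the sketch returns $2 (v_1 \\otimes v_1 \\otimes v_2 - v_2 \\otimes v_1 \\otimes v_1) = 2 (v_1 \\wedge v_2) \\otimes v_1$, in accordance with the example worked out below.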
The image of the Young symmetrizer \n\n$$\\mathbb{S}_{\\lambda}^T V := c_{\\lambda}^T (V^{\\otimes d}) $$\n\n\\noindent is called the {\\it Schur module} associated to $\\lambda$. It is easy to see that any two Schur modules that are images of $c_{\\lambda}^T$ and $c_{\\lambda}^{T'}$, with $T$ and $T'$ two different std tableaux of shape $\\lambda$, are isomorphic. Hence we drop the $T$ in the notation and we write only $\\mathbb{S}_{\\lambda} V$. It can be proved that Schur modules are irreducible representations of $SL(n)$ via the induced action of the group, and they are completely determined by the partition $\\lambda$, cf. \\cite[p. 77]{fulton2013representation}. From the construction of such modules we have the inclusion\n\n$$ \\mathbb{S}_{\\lambda} V \\subset \\superwedge^{\\lambda_1'} V \\otimes \\dots \\otimes \\superwedge^{\\lambda_h'} V =: \\superwedge_{\\lambda'} V, $$\n\n\\noindent where $\\lambda'$ is the {\\it conjugate} partition to $\\lambda$, i.e. the one obtained transposing the diagram of $\\lambda$ as if it were a matrix. \\smallskip\n\nWe would like to give a more explicit description of the elements of such modules, since we will need it in the following. We briefly recall the construction given in \\cite{sturmfels2008algorithms} of a basis of these spaces in terms of sstd tableaux. The Schur-Weyl duality, see \\cite[Theorem 6.4.5.2, p. 148]{Landsberg2011TensorsGA}, gives the isomorphism\n\n\\begin{equation} \\label{SWduality} V^{\\otimes d} \\simeq \\bigoplus_{\\lambda\\ \\vdash\\ d} \\mathbb{S}_{\\lambda} V^{\\oplus m_{\\lambda}} \\end{equation}\n\n\\noindent where $m_{\\lambda}$ is the number of std tableaux of shape $\\lambda$ with entries in $\\{1,\\dots,d\\}$. For example $m_{(2,1)} = 2$ since we have the two std tableaux\n\n\\begin{equation} \\label{EsT1} T_1 = \\begin{ytableau} 1 & 2 \\\\ 3 \\end{ytableau}\\ , \\quad T_2 = \\begin{ytableau} 1 & 3 \\\\ 2 \\end{ytableau}\\ . 
\\end{equation}\n\n\\noindent Now fix a module $\\mathbb{S}_{\\lambda} V$ together with a std tableau $T$ of shape $\\lambda$, where $\\lambda \\vdash d$. Fix also a basis $v_1,\\dots,v_n$ of $V$ and consider a sstd tableau $S$ of shape $\\lambda$. The pair $(T,S)$, regarded in \\cite{sturmfels2008algorithms} as a {\\it bitableau}, describes an element of $\\mathbb{S}_{\\lambda}V$ in the following way. At first build the element \n$$v_{(T,S)} := v_{i_1} \\otimes \\dots \\otimes v_{i_d},$$\nwhere $v_{i_j} = v_k$ if there is a $k$ in the box of $S$ corresponding to the box of $T$ in which a $j$ appears. We drop the $T$ in $v_{(T,S)}$ if the choice of $T$ is clear. After that, one applies the Young symmetrizer $c_{\\lambda}^T$ to the element $v_{(T,S)}$. For example, if $\\lambda =(2,1)$, consider the std tableau $T_1$ in \\eqref{EsT1} and the sstd tableau\n\n\\begin{center} S = \\begin{ytableau} 1 & 1 \\\\ 2 \\end{ytableau} . \\end{center}\n\n\\noindent Then $c_{\\lambda}(v_S) = c_{\\lambda}(v_1 \\otimes v_1 \\otimes v_2) = 2 \\cdot v_1 \\wedge v_2 \\otimes v_1$, which we may represent pictorially as\n\n$$ c_{\\lambda}(v_S) = 2 \\cdot \\left ( \\begin{ytableau} 1 & 1 \\\\ 2 \\end{ytableau} - \\begin{ytableau} 2 & 1 \\\\ 1 \\end{ytableau} \\right )$$\n\n\\noindent where in this particular instance the tableaux represent tensor products of vectors with the order prescribed by $T_1$. We may sometimes use this notation.\n\n\\noindent As a consequence of \\cite[Theorem 4.1.12, p. 142]{sturmfels2008algorithms}, one has the following result.\n\n\\begin{prop} \\label{BaseSchur}\nThe set \n$$\\{ c_{\\lambda}^T (v_{(T,S)})\\ :\\ S\\ \\text{is a sstd tableau of shape}\\ \\lambda \\}$$\n is a basis of the module $\\mathbb{S}_{\\lambda} V$. \n\\end{prop} \n\n\\noindent As already mentioned, choosing different std tableaux of the same shape gives rise to isomorphic Schur modules. The only difference between them is the way they embed in $V^{\\otimes |\\lambda|}$. 
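\noindent The passage from a bitableau $(T,S)$ to the index tuple of $v_{(T,S)}$ can be sketched in a few lines of Python (a toy illustration, with names of our choosing; vectors are recorded through their indices):

```python
def bitableau_vector(T, S):
    """Index tuple (i_1,...,i_d) of v_{(T,S)}: slot j holds the entry of S
    sitting in the box of T that contains j."""
    d = sum(len(row) for row in T)
    v = [0] * d
    for row_t, row_s in zip(T, S):
        for j, k in zip(row_t, row_s):
            v[j - 1] = k
    return tuple(v)

# T_1 and T_2 from the text, with the sstd tableau S of shape (2,1):
T1, T2, S = [[1, 2], [3]], [[1, 3], [2]], [[1, 1], [2]]
v1 = bitableau_vector(T1, S)  # v_1 (x) v_1 (x) v_2
v2 = bitableau_vector(T2, S)  # v_1 (x) v_2 (x) v_1
```

The two outputs agree with the elements $v_{(T_1,S)}$ and $v_{(T_2,S)}$ computed in the example that follows.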
Let us see an example.\n\n\\begin{example}\nLet $\\lambda = (2,1)$ and consider the std tableaux\n\n$$T_1 = \\begin{ytableau} 1 & 2 \\\\ 3 \\end{ytableau}\\ ,\\ T_2 = \\begin{ytableau} 1 & 3 \\\\ 2 \\end{ytableau}$$\n\n\\noindent and the sstd tableau\n\n$$S = \\begin{ytableau} 1 & 1 \\\\ 2 \\end{ytableau}\\ .$$\n\n\\noindent We get that \n\n$$ v_{(T_1,S)} = v_1 \\otimes v_1 \\otimes v_2,\\ v_{(T_2,S)} = v_1 \\otimes v_2 \\otimes v_1. $$\n\n\\noindent Applying the respective Young symmetrizers we get that \n\n$$ c_{\\lambda}^{T_1} (v_{(T_1,S)}) = 2 \\cdot (v_1 \\otimes v_1 \\otimes v_2 - v_2 \\otimes v_1 \\otimes v_1) = 2 \\cdot (v_1 \\wedge v_2) \\otimes v_1, $$\n$$ c_{\\lambda}^{T_2} (v_{(T_2,S)}) = 2 \\cdot (v_1 \\otimes v_2 \\otimes v_1 - v_2 \\otimes v_1 \\otimes v_1) = 2 \\cdot (v_1 \\wedge v_2) \\otimes v_1. $$\n\\end{example}\n\n\\noindent In both cases we get the same element of $\\superwedge^2 V \\otimes V$; however, it embeds in $V^{\\otimes 3}$ in a different way depending on the specific choice of the std tableau of shape $\\lambda$. Since we are interested only in the elements of $\\superwedge^2 V \\otimes V$, we will ignore the way they embed in $V^{\\otimes d}$. For this reason we restrict to work in the vector space\n\n$$ \\mathbb{S}^{\\bullet} V := \\bigoplus_{\\lambda} \\mathbb{S}_{\\lambda} V$$\nwhere roughly every Schur module appears exactly once for each partition $\\lambda$. 
One can see that such a space can also be obtained as the quotient\n\n$$ \\mathbb{S}^{\\bullet} V = \\Sym^{\\bullet} \\left (\\superwedge^{n-1} V \\oplus \\superwedge^{n-2} V \\oplus \\dots \\oplus \\superwedge^1 V \\right)\/I^{\\bullet}$$\n\n\\noindent where $I^{\\bullet}$ is the two-sided ideal generated by the elements\n\n\\begin{align} \\begin{split} \\label{PluckRel}\n(v_1 \\wedge & \\dots \\wedge v_p) \\cdot (w_1 \\wedge \\dots \\wedge w_q) \\\\\n& - \\sum_{i=1}^p (v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge w_1 \\wedge v_{i+1} \\wedge \\dots \\wedge v_p) \\cdot (v_i \\wedge w_2 \\wedge \\dots \\wedge w_q)\n\\end{split}\n\\end{align}\n\n\\noindent called {\\it Pl\\\"ucker relations}, for all $p \\geq q \\geq 0$. Remark that the elements \\eqref{PluckRel} are the equations which define Flag varieties in general as incidence varieties inside products of Grassmann varieties. See \\cite{fulton2013representation, towber1977two, towber1979young} for more details. Let us highlight that we are not going to use the natural symmetric product that $\\mathbb{S}^{\\bullet} V$ has as a quotient of a commutative symmetric algebra.\n\n\\noindent Finally, since we are dealing with only one copy of $\\mathbb{S}_{\\lambda}V$ for each partition $\\lambda$, to ease the construction of the theory we will build every Schur module using a fixed std tableau. If $\\lambda \\vdash d$, this one will be given by filling the diagram of $\\lambda$ column by column, from top to bottom starting from the first column, with the integers $1,\\dots,d$. For instance, if $\\lambda = (3,2,1)$, then the fixed tableau will be\n\n\\begin{center} \\begin{ytableau} 1 & 4 & 6 \\\\ 2 & 5 \\\\ 3 \\end{ytableau} . \\end{center}\n\n\\smallskip\n\n\n\\section{Toward Schur apolarity} \\label{secondasez} \\setcounter{equation}{0} \\medskip\n\nIn the following we will introduce the apolarity theory. For this purpose we construct an action called the {\\it Schur apolarity action}. 
\\smallskip\n\nIn order to develop our theory we need to use multiplication maps $\\mathcal{M}^{\\lambda,\\mu}_{\\nu} : \\mathbb{S}_{\\lambda} V^* \\otimes \\mathbb{S}_{\\mu} V^* \\longrightarrow \\mathbb{S}_{\\nu} V^*$. Since $SL(n)$ is reductive, every representation is completely reducible and in this particular instance the behaviour of the tensor product of any two irreducible representations is well-known and computable via the Littlewood-Richardson rule, which we now recall. Given two Schur modules we have\n\n\\begin{equation} \\label{LR} \\mathbb{S}_{\\lambda} V^* \\otimes \\mathbb{S}_{\\mu} V^* \\simeq \\bigoplus_{\\nu} N^{\\lambda,\\mu}_{\\nu} \\mathbb{S}_{\\nu} V^* \\end{equation}\n\n\\noindent where the coefficients $N^{\\lambda,\\mu}_{\\nu} = N^{\\mu,\\lambda}_{\\nu} \\geq 0$ are called {\\it Littlewood-Richardson coefficients}. Sufficient conditions to get $N^{\\lambda,\\mu}_{\\nu} = 0$ are either $|\\nu| \\neq |\\lambda| + |\\mu|$ or that the diagram of either $\\lambda$ or $\\mu$ does not fit in the diagram of $\\nu$. Therefore as soon as $N^{\\lambda,\\mu}_{\\nu} \\neq 0$, the map $\\mathcal{M}^{\\lambda,\\mu}_{\\nu} : \\mathbb{S}_{\\lambda} V^* \\otimes \\mathbb{S}_{\\mu} V^* \\longrightarrow \\mathbb{S}_{\\nu} V^*$ is non trivial and it will be a projection onto one of the summands $\\mathbb{S}_{\\nu} V^*$ appearing in the decomposition of $\\mathbb{S}_{\\lambda} V^* \\otimes \\mathbb{S}_{\\mu} V^*$ into a sum of irreducible representations as described by \\eqref{LR}. Remark also that via \\eqref{LR} it follows that the vector space of such equivariant morphisms has dimension $N^{\\lambda,\\mu}_{\\nu}$. \\smallskip\n\nBefore proceeding we recall some basic combinatorial facts.\n\n\\begin{definition} \\label{skewtab}\n\\noindent Two partitions $\\nu \\vdash d$ and $\\lambda \\vdash e$, with $0 \\leq e \\leq d$, are such that $\\lambda \\subset \\nu$ if $\\lambda_i \\leq \\nu_i$ for all $i$, possibly setting some $\\lambda_i$ equal to $0$. 
In this case $\\lambda$ is a {\\it subdiagram} of $\\nu$. The {\\it skew Young diagram} $\\nu \/ \\lambda =(\\nu_1 - \\lambda_1,\\dots,\\nu_k - \\lambda_k)$ is the diagram of $\\nu$ with the diagram of $\\lambda$ removed in the top left corner. A {\\it skew Young tableau} $T$ of shape $\\nu \/ \\lambda$ is the diagram of $\\nu \/ \\lambda$ with a filling. The definitions of sstd and std tableau apply also in this context.\n\\end{definition}\n\n\\noindent For example if $\\nu = (3,2,1)$ and $\\lambda = (2)$, the skew diagram $\\nu \/ \\lambda$ together with a sstd skew tableau of shape $\\nu \/ \\lambda$ is\n\n\\begin{center} \\begin{ytableau} *(gray) & *(gray) & 1 \\\\ 1 & 2 \\\\ 3 \\end{ytableau} . \\end{center}\n\n\\noindent Skew tableaux can be seen as a generalization of regular tableaux by setting $\\lambda = (0)$ in Definition \\ref{skewtab}. \n\n\\begin{definition} \\label{skewSchurModule} Fix a std skew tableau of shape $\\nu \/ \\lambda$. One can define analogously to the formula \\eqref{YoungSymm} the Young symmetrizer $c_{\\nu \/ \\lambda} : V^{\\otimes |\\nu|-|\\lambda|} \\longrightarrow V^{\\otimes |\\nu|-|\\lambda|}$. The {\\it skew Schur module} $\\mathbb{S}_{\\nu \/ \\lambda} V$ is the image of $c_{\\nu \/ \\lambda}$. \n\\end{definition}\n\nClearly also in this case two skew Schur modules determined by two different skew std tableaux of the same shape are isomorphic. Moreover they are still representations of $SL(n)$, but in general they may be reducible. Indeed it holds that\n\n\\begin{equation} \\label{SKEW} \\mathbb{S}_{\\nu \/ \\lambda} V \\cong \\bigoplus_{\\mu} N^{\\lambda, \\mu}_{\\nu} \\mathbb{S}_{\\mu} V \\end{equation}\n\n\\noindent where the coefficients $N^{\\lambda, \\mu}_{\\nu}$ are the same as those appearing in \\eqref{LR}. For more details see \\cite[p. 83]{fulton2013representation}. 
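\noindent The multiplicities governing the decomposition \\eqref{SKEW} can be computed by brute force for small shapes. A minimal Python sketch (our own naming, for illustration only) counts the Littlewood-Richardson skew tableaux described in the definitions that follow: semistandard fillings of $\\nu \/ \\lambda$ of content $\\mu$ whose reading word satisfies the reverse lattice condition.

```python
from itertools import product

def is_y_word(word):
    """Reverse lattice condition: in every suffix of the word, each integer i
    occurs at most as many times as i - 1."""
    for s in range(1, len(word) + 1):
        suffix = word[-s:]
        if any(suffix.count(i) > suffix.count(i - 1) for i in set(suffix) if i > 1):
            return False
    return True

def lr_coefficient(nu, lam, mu):
    """Count Littlewood-Richardson skew tableaux of shape nu/lam and content mu;
    this number is the coefficient N^{lam,mu}_{nu}."""
    lam = tuple(lam) + (0,) * (len(nu) - len(lam))
    boxes = [(i, j) for i in range(len(nu)) for j in range(lam[i], nu[i])]
    count = 0
    for fill in product(range(1, len(mu) + 1), repeat=len(boxes)):
        if any(fill.count(i + 1) != mu[i] for i in range(len(mu))):
            continue  # content must be exactly mu
        T = dict(zip(boxes, fill))
        rows_ok = all(T[(i, j)] >= T[(i, j - 1)]
                      for (i, j) in T if (i, j - 1) in T)
        cols_ok = all(T[(i, j)] > T[(i - 1, j)]
                      for (i, j) in T if (i - 1, j) in T)
        # reading word: entries left to right, starting from the bottom row
        word = [T[b] for b in sorted(boxes, key=lambda b: (-b[0], b[1]))]
        if rows_ok and cols_ok and is_y_word(word):
            count += 1
    return count

n1 = lr_coefficient((3, 2), (2, 1), (1, 1))     # expected 1
n2 = lr_coefficient((3, 2, 1), (2, 1), (2, 1))  # expected 2
```

The first value agrees with the coefficient $N^{(2,1),(1,1)}_{(3,2)} = 1$ used in the example at the end of this section, and the second with the well-known multiplicity $2$ of $\\mathbb{S}_{(3,2,1)} V$ inside $\\mathbb{S}_{(2,1)} V \\otimes \\mathbb{S}_{(2,1)} V$.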
Also in this case we assume that such modules are built using a std skew tableau of shape $\\nu \/ \\lambda$ filled column by column with the integers from $1$ to $|\\nu| - |\\lambda|$, from top to bottom starting from the first column.\n\\smallskip\n\n\\begin{definition} \nLet $\\nu \/ \\lambda$ be any skew partition and consider a sstd skew tableau $T$ of shape $\\nu \/ \\lambda$. The {\\it word associated to} $T$ is the string of integers obtained from $T$ reading its entries from left to right, starting from the bottom row. The obtained word $w_1 \\dots w_k$ is called either a {\\it Yamanouchi word} or a {\\it reverse lattice word} if for any $s$ from $0$ to $k-1$, the sequence $w_k w_{k-1} \\dots w_{k-s}$ contains the integer $i+1$ at most as many times as it contains the integer $i$. For short these words will be called $Y${\\it -words}. The {\\it content} of $T$ is the sequence of integers $\\mu = (\\mu_1,\\dots,\\mu_n)$ where $\\mu_i$ is the number of $i$'s in $T$. Note that the content may not be a partition.\n\\end{definition}\n\n\\noindent For example, given\n\n\\begin{center} $T_1$ = \\begin{ytableau} *(gray) & *(gray) & 1 \\\\ 1 & 2 \\\\ 3 \\end{ytableau} , \\quad $T_2$ = \\begin{ytableau} *(gray) & *(gray) & 1 \\\\ 1 & 3 \\\\ 2 \\end{ytableau} \\end{center}\n\n\\noindent their associated words are $w_{T_1} = 3121$ and $w_{T_2} = 2131$ respectively. Remark that only the first one is a Y-word, since in $w_{T_2}$ the subsequence $13$ has the integer $3$ appearing more times than the integer $2$. In both cases the content is $(2,1,1)$.\n\n\\begin{definition}\nLet $\\lambda$ and $\\nu$ be two partitions such that $\\lambda \\subset \\nu$ and consider a skew sstd tableau $T$ of shape $\\nu \/ \\lambda$. The tableau $T$ is a {\\it Littlewood-Richardson skew tableau} if its associated word is a Y-word.\n\\end{definition}\n\n\\begin{prop}[Prop. 3, p. 
64, \\cite{fulton1997young}]\nLet $\\mu$, $\\lambda$ and $\\nu$ be such that $\\mu, \\lambda \\subset \\nu$ and $|\\mu| + |\\lambda| = |\\nu|$, and consider the skew diagram $\\nu \/ \\lambda$. The number of Littlewood-Richardson skew tableaux of shape $\\nu \/ \\lambda$ and content $\\mu$ is exactly $N^{\\lambda,\\mu}_{\\nu}$.\n\\end{prop}\n\n\\begin{remark}\nThe set of std tableaux of shape $\\lambda$ with entries in $\\{1,\\dots,|\\lambda|\\}$ is in $1$ to $1$ correspondence with the set of Y-words of length $|\\lambda|$ and {\\it content} $\\lambda$, i.e. with $\\lambda_i$ times the integer $i$. \n\\end{remark}\n\nIndeed we can define two functions $\\alpha$ and $\\beta$\n\n$$ \\left \\{ \\text{std tableaux of shape}\\ \\lambda\\ \\right \\} \\xrightarrow{\\quad\\alpha\\quad} \\left \\{ \\text{Y-words of length}\\ |\\lambda|\\ \\text{and content}\\ \\lambda \\right \\} $$\nand \n\n$$ \\left \\{ \\text{std tableaux of shape}\\ \\lambda\\ \\right \\} \\xleftarrow{\\quad\\beta\\quad} \\left \\{ \\text{Y-words of length}\\ |\\lambda|\\ \\text{and content}\\ \\lambda \\right \\} $$\nthat are inverse to each other. \\smallskip\n\n\\noindent Let $T$ be a std tableau of shape $\\lambda$ with entries in $\\{1,\\dots,|\\lambda|\\}$. Remark that each entry $l \\in \\{1,\\dots,|\\lambda|\\}$ appears in a certain position $(i,j)$, i.e. in the box in the $i$-th row and $j$-th column. Then consider the sequence $(a_1,\\dots,a_{|\\lambda|})$, where we set $a_l = i$ if $l$ appears in the position $(i,j)$. Roughly, starting with the smallest entry in $\\{1,\\dots,|\\lambda|\\}$, we record the row in which each entry appears in $T$. We define $\\alpha(T)$ to be the reversed sequence \n\n$$\\alpha(T):= \\operatorname{rev}(a_1,\\dots,a_{|\\lambda|}) := (a_{|\\lambda|},\\dots,a_1),$$\nwhere we consider $\\operatorname{rev}$ as the involution that acts on the set of words reversing them. It is easy to see that $\\alpha(T)$ is a Y-word, since the sequences of integers are determined by the entries of the std tableau $T$. 
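\noindent The map $\\alpha$ just described, together with the inverse map $\\beta$ constructed in the next paragraph, can be sketched in Python as follows (function names are ours; tableaux are represented as lists of rows):

```python
def alpha(T):
    """Std tableau -> Y-word: for l = 1,...,|lambda| record the row of T
    containing l, then reverse the resulting sequence."""
    row_of = {entry: i + 1 for i, row in enumerate(T) for entry in row}
    a = [row_of[l] for l in range(1, len(row_of) + 1)]
    return a[::-1]

def beta(word):
    """Y-word -> std tableau: row i collects the positions (in the reversed
    word) of the entries equal to i, in their order of appearance."""
    rev = word[::-1]
    rows = {}
    for position, i in enumerate(rev, start=1):
        rows.setdefault(i, []).append(position)
    return [rows[i] for i in sorted(rows)]

# the tableau of the worked example below, with rows (1,2,4) and (3):
T = [[1, 2, 4], [3]]
w = alpha(T)      # the Y-word (1,2,1,1)
T_back = beta(w)  # recovers T
```

The round trip reproduces the computation $\\beta(\\alpha(T)) = T$ carried out in the example below.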
\\smallskip\n\n\\noindent Let $\\underline{a}$ be a Y-word of content $\\lambda$, so that $\\lambda_i$ is the number of times the integer $i$ appears in $\\underline{a}$. Consider its reversed sequence $\\operatorname{rev}(\\underline{a}) = (a_1,\\dots,a_{|\\lambda|})$. We define the std tableau $\\beta(\\underline{a})$ of shape $\\lambda$ in the following way. For any integer $i \\in \\{1,\\dots,k\\}$, where $k= l(\\lambda)$ is the length of $\\lambda$, consider the subsequence \n\n$$\\operatorname{rev}(\\underline{a})_i := (a_{k_1},\\dots,a_{k_{\\lambda_i}})$$\ngiven by all the entries equal to $i$, taken in the order in which they appear in $\\operatorname{rev}(\\underline{a})$. Then we set the $(i,j)$-th entry of $\\beta(\\underline{a})$ to be equal to the index $k_j$ appearing in the subsequence $\\operatorname{rev}(\\underline{a})_i$. Also in this case it is clear that the image will be a std tableau of shape $\\lambda$. \n\n\\noindent For more details we refer to \\cite[Definition IV.1.3, p. 252]{akin1982schur}. \n\n\\begin{example}\nConsider $\\lambda = (3,1)$ and let $T$ be the tableau\n\n$$ T = \\begin{ytableau} 1 & 2 & 4 \\\\ 3 \\end{ytableau} .$$\nWe apply $\\alpha$ to $T$. We get at first $(a_1,\\dots,a_4) = (1,1,2,1)$, so that $\\alpha(T) = (1,2,1,1)$. Now we apply $\\beta$ to see that we actually come back to $T$. Consider the reversed sequence $\\operatorname{rev}(\\alpha(T)) = (1,1,2,1)$, which has content $\\lambda$. Here we have only two subsequences\n\n$$\\operatorname{rev}(\\alpha(T))_1 = (a_1,a_2,a_4) = (1,1,1)\\ \\text{and}\\ \\operatorname{rev}(\\alpha(T))_2 = (a_3) = (2).$$\nHence it follows that\n\n$$ \\beta(\\alpha(T)) = T = \\begin{ytableau} 1 & 2 & 4 \\\\ 3 \\end{ytableau} .$$\n\\end{example}\n\n\\begin{definition}[Definition IV.2.2, p. 257 in \\cite{akin1982schur}]\nLet $T$ be a Littlewood-Richardson skew tableau of shape $\\nu \/ \\lambda$ and content $\\mu$. We can define a new tableau $T'$ of shape $\\nu \/ \\lambda$ and content $\\mu'$, not necessarily sstd, as follows. 
Let $\\underline{a}$ be the word associated to $T$ and consider $\\underline{a}' := \\alpha(\\beta(\\underline{a})')$, where with the notation $\\beta(\\underline{a})'$ we mean the conjugate tableau to $\\beta(\\underline{a})$, i.e. the one obtained from the latter transposing it as if it were a matrix. Then define $T'$ as the skew tableau of shape $\\nu \/ \\lambda$ whose associated word is $\\underline{a}'$. \n\\end{definition}\n\n\\begin{example}\nLet $\\nu = (3,2)$, $\\lambda = (1)$ and consider\n\n$$ T = \\begin{ytableau} *(gray) & 1 & 1 \\\\ 1 & 2 \\end{ytableau},$$\nwhere in this case the content is $\\mu = (3,1)$. The associated word is $\\underline{a} = (1,2,1,1)$ which is a Y-word. Then \n\n$$ \\beta(\\underline{a}) = \\begin{ytableau} 1 & 2 & 4 \\\\ 3 \\end{ytableau}\\quad \\text{and}\\quad \\beta(\\underline{a})' = \\begin{ytableau} 1 & 3 \\\\ 2 \\\\ 4 \\end{ytableau} $$\nso that $\\alpha(\\beta(\\underline{a})') = (3,1,2,1)$. Therefore we get\n\n$$ T' = \\begin{ytableau} *(gray) & 2 & 1 \\\\ 3 & 1 \\end{ytableau} .$$\n\\end{example}\n\n\\begin{definition}[Definition IV.2.4, p. 258 in \\cite{akin1982schur}]\nConsider three partitions $\\lambda$, $\\mu$ and $\\nu$ such that $N^{\\lambda,\\mu}_{\\nu} \\neq 0$ and consider the diagrams of $\\nu \/ \\lambda$ and $\\mu$. Let $T$ be a Littlewood-Richardson skew tableau of shape $\\nu \/ \\lambda$ and content $\\mu$. When considering a Young diagram, we denote with $(i,j)$ the entry of the diagram positioned in the $i$-th row and $j$-th column. We can define a map $\\sigma_T : \\nu \/ \\lambda \\longrightarrow \\mu$ from the entries of the diagram of $\\nu \/ \\lambda$ to the entries of the diagram of $\\mu$ such that\n\n$$\\sigma_T (i,j) := ( T(i,j), T'(i,j)), $$\nfor every entry $(i,j)$ of $\\nu \/ \\lambda$. In the following we adopt the notation $V^{\\otimes \\lambda}$ to denote $V^{\\otimes |\\lambda|}$ in which every factor is identified with some entry $(i,j)$ of the Young diagram of $\\lambda$. 
\n\\end{definition}\n\n\\begin{remark}\nRecall that we can write the Young symmetrizer $c_{\\lambda}^T : V^{\\otimes d} \\longrightarrow V^{\\otimes d}$ as the composition $b_{\\lambda} \\cdot a_{\\lambda}$ of two endomorphisms $a_{\\lambda}^T,\\ b_{\\lambda}^T : V^{\\otimes d} \\longrightarrow V^{\\otimes d}$ such that on decomposable elements they act as\n\n$$ a_{\\lambda} (v_1 \\otimes \\dots \\otimes v_d) := \\sum_{\\sigma \\in R_{\\lambda}} v_{\\sigma(1)} \\otimes \\dots \\otimes v_{\\sigma(d)},$$\n\n$$ b_{\\lambda} (v_1 \\otimes \\dots \\otimes v_d) := \\sum_{\\tau \\in C_{\\lambda}} \\operatorname{sgn}(\\tau) v_{\\tau(1)} \\otimes \\dots \\otimes v_{\\tau(d)}.$$\nIn the following we are going to use these two maps. In particular remark that\n\n$$a_{\\lambda} (V^{\\otimes d}) = \\Sym_{\\lambda} V := \\Sym^{\\lambda_1} V \\otimes \\dots \\otimes \\Sym^{\\lambda_k} V,$$\nwhile for $b_{\\lambda}$ we have\n$$b_{\\lambda} (V^{\\otimes d}) = \\superwedge_{\\lambda'} V$$\nso that we get the inclusion $\\mathbb{S}_{\\lambda} V \\subset \\superwedge_{\\lambda'} V$ we have already seen. \n\\end{remark}\n\n\\noindent We are ready to give the definition we need of the maps $\\mathcal{M}^{\\lambda,\\mu}_{\\nu} : \\mathbb{S}_{\\lambda} V^* \\otimes \\mathbb{S}_{\\mu} V^* \\longrightarrow \\mathbb{S}_{\\nu} V^*$. \n\n\\begin{definition} \\label{defmapmult}\nLet $\\lambda$, $\\mu$ and $\\nu$ be three partitions such that $N^{\\lambda,\\mu}_{\\nu} \\neq 0$, and fix a Littlewood-Richardson tableau $T$ of shape $\\nu \/ \\lambda$ and content $\\mu$. 
We define the map $\\mathcal{M}^{\\lambda,\\mu}_{\\nu,T} : \\mathbb{S}_{\\lambda} V^* \\otimes \\mathbb{S}_{\\mu} V^* \\longrightarrow \\mathbb{S}_{\\nu} V^*$ as the following composition of maps\n\n\\begin{align*} \\mathbb{S}_{\\lambda} V^* \\otimes \\mathbb{S}_{\\mu} V^* \\rightarrow &\\Sym_{\\lambda} V^* \\otimes \\Sym_{\\mu} V^* \\rightarrow \\\\\n&\\rightarrow \\Sym_{\\lambda} V^* \\otimes \\Sym_{\\nu \/ \\lambda} V^* \\rightarrow \\Sym_{\\nu} V^* \\rightarrow \\mathbb{S}_{\\nu} V^*, \\end{align*}\nwhere the first map is the transpose of the map\n\n$$ a_{\\lambda} \\otimes a_{\\mu} : \\Sym_{\\lambda} V^* \\otimes \\Sym_{\\mu} V^* \\rightarrow \\mathbb{S}_{\\lambda} V^* \\otimes \\mathbb{S}_{\\mu} V^*.$$\nThe second map is the tensor product of the identity on $\\Sym_{\\lambda}V^*$ with the map\n\n$$\\Sym_{\\mu} V^* \\longrightarrow (V^*)^{\\otimes \\mu} \\longrightarrow (V^*)^{\\otimes \\nu \/ \\lambda} \\longrightarrow \\Sym_{\\nu \/ \\lambda} V^*$$\nthat first embeds $\\Sym_{\\mu} V^*$ in $(V^*)^{\\otimes |\\mu|}$, denoting the factors of $(V^*)^{\\otimes |\\mu|}$ with the entries of the diagram of $\\mu$. Then it rearranges such factors according to the map $\\sigma_T$, and eventually it applies the map $a_{\\nu \/ \\lambda}$ to get $\\Sym_{\\nu \/ \\lambda} V^*$. \n\\noindent The third map is the tensor product of the maps\n\n$$\\Sym^{\\lambda_i} V^* \\otimes \\Sym^{\\nu_i - \\lambda_i} V^* \\longrightarrow \\Sym^{\\nu_i} V^*.$$\nFinally, the last map is the skew-symmetrizing part of the Young symmetrizer $c_{\\nu}$, i.e.\n\n$$b_{\\nu} : \\Sym_{\\nu} V^* \\longrightarrow \\mathbb{S}_{\\nu} V^*.$$\n\\end{definition}\n\n\\begin{example}\nConsider the partitions $\\lambda = (2,1)$, $\\mu = (1,1)$ and $\\nu = (3,2)$, so that $N^{(2,1),(1,1)}_{(3,2)} = 1$. 
In particular we have only one Littlewood-Richardson skew tableau of shape $(3,2) \/ (2,1)$ and content $(1,1)$, which is\n\n$$ T = \\begin{ytableau} *(gray) & *(gray) & 1 \\\\ *(gray) & 2 \\end{ytableau} . $$\nWe describe the image of the element \n\n$$ t = (v_1 \\wedge v_2 \\otimes v_3 - v_2 \\wedge v_3 \\otimes v_1) \\otimes (\\alpha \\wedge \\beta) \\in \\mathbb{S}_{(2,1)} V \\otimes \\mathbb{S}_{(1,1)} V$$\nvia the map\n\n$$\\mathcal{M}^{(2,1),(1,1)}_{(3,2)} : \\mathbb{S}_{(2,1)} V \\otimes \\mathbb{S}_{(1,1)} V \\longrightarrow \\mathbb{S}_{(3,2)} V\\ .$$\nTo this end we make explicit all the maps involved in the composition defining $\\mathcal{M}^{(2,1),(1,1)}_{(3,2)}$ given in Definition \\ref{defmapmult}. Remark that the element belonging to $\\mathbb{S}_{(2,1)} V$ is the one determined by the sstd Young tableau\n\n$$ \\begin{ytableau} 1 & 3 \\\\ 2 \\end{ytableau}\\ .$$\nAt first we have the map\n\n$$\\mathbb{S}_{(2,1)} V \\otimes \\mathbb{S}_{(1,1)} V \\longrightarrow \\Sym_{(2,1)} V \\otimes \\Sym_{(1,1)} V$$\nthat sends\n\n$$ t\\ \\mapsto\\ (v_1v_3 \\otimes v_2 - v_2v_3 \\otimes v_1 ) \\otimes (\\alpha \\otimes \\beta - \\beta \\otimes \\alpha). $$\nThen we have to apply the map $\\Sym_{(2,1)} V \\otimes \\Sym_{(1,1)} V \\longrightarrow \\Sym_{(2,1)} V \\otimes \\Sym_{(3,2)\/(2,1)} V$ to this last element. Remark that in this case the map $\\sigma_T$ acts as\n\n$$\\sigma_T :\\ \\begin{ytableau} *(gray) & *(gray) & b \\\\ *(gray) & a \\end{ytableau} \\longrightarrow \\begin{ytableau} b \\\\ a \\end{ytableau} $$\nand moreover $\\Sym_{(1,1)} V = \\Sym_{(3,2)\/(2,1)} V = V \\otimes V$. Now we apply the map $\\Sym_{(2,1)} V \\otimes \\Sym_{(3,2)\/(2,1)} V \\longrightarrow \\Sym_{(3,2)} V$, which is given by the tensor product of the maps\n\n$$\\Sym^2 V \\otimes \\Sym^1 V \\longrightarrow \\Sym^3 V,\\quad \\Sym^1 V \\otimes \\Sym^1 V \\longrightarrow \\Sym^2 V. 
$$\nThe image of our element is \n\n$$ v_1v_3\\alpha \\otimes v_2 \\beta - v_1v_3\\beta \\otimes v_2 \\alpha - v_2v_3\\alpha \\otimes v_1 \\beta + v_2v_3\\beta \\otimes v_1 \\alpha.$$\nEventually we have to apply the map $\\Sym_{(3,2)} V \\longrightarrow \\mathbb{S}_{(3,2)} V$ which is the skew-symmetrizing part of the Young symmetrizer. We can represent the image of such a map using Young tableaux of shape $(3,2)$\n\n$$\\begin{ytableau} 1 & 3 & \\alpha \\\\ 2 & \\beta \\end{ytableau} - \\begin{ytableau} 1 & 3 & \\beta \\\\ 2 & \\alpha \\end{ytableau} - \\begin{ytableau} 2 & 3 & \\alpha \\\\ 1 & \\beta \\end{ytableau} + \\begin{ytableau} 2 & 3 & \\beta \\\\ 1 & \\alpha \\end{ytableau}$$\nwhere we are considering such tableaux as elements of $V^{\\otimes 5}$ as described in Section \\ref{primasez}. \n\\end{example}\n\n\n\n\n\\begin{remark}The map $\\mathcal{M}^{\\lambda,\\mu}_{\\nu,T} : \\mathbb{S}_{\\lambda} V \\otimes \\mathbb{S}_{\\mu} V \\longrightarrow \\mathbb{S}_{\\nu} V$ is clearly equivariant and by construction its image is contained in $\\mathbb{S}_{\\nu}V$. Moreover, it is easy to see that, since these maps are determined by the choice of a Littlewood-Richardson skew tableau, they all act in different ways and are linearly independent.\n\\end{remark}\n\n\\medskip\n\n\\begin{remark} \\label{trim} The last thing we would like to underline is that some of these multiplication maps are involved in the construction of the elements of $\\mathbb{S}_{\\lambda}V$ via Young symmetrizers. Indeed, consider for example $\\lambda=(3,2,1)$. We know that $\\mathbb{S}_{(3,2,1)} V$ is the image of the respective Young symmetrizer applied to $V^{\\otimes 6}$ using symmetrization along rows and then skew-symmetrization along columns. Via this interpretation we can see $\\mathbb{S}_{\\lambda}V$ as obtained by adding one row at a time, i.e. 
via the following composition\n\n$$\\mathbb{S}_{(3)}V \\otimes \\mathbb{S}_{(2)} V \\otimes \\mathbb{S}_{(1)} V \\longrightarrow \\mathbb{S}_{(3,2)}V \\otimes \\mathbb{S}_{(1)} V \\longrightarrow \\mathbb{S}_{(3,2,1)} V $$\n\n\\noindent where the first map is $\\mathcal{M}^{(3),(2)}_{(3,2)} \\otimes \\text{id}_{\\mathbb{S}_{(1)}V}$ and the second one is $\\mathcal{M}^{(3,2),(1)}_{(3,2,1)}$. Note that all these maps that add only a row are unique up to scalar multiplication since the respective Littlewood-Richardson coefficient is $1$. Hence we can see every module as obtained by adding one row at a time in the order specified by the partition. We sometimes refer to the inverse passage, i.e. deleting one row at a time, as {\\it trimming}.\n\\end{remark}\n\n\\begin{remark} \\label{gradedring}\nWe conclude this section with a final remark. The space $\\mathbb{S}^{\\bullet} V$ together with the maps $\\mathcal{M}^{\\lambda,\\mu}_{\\nu}$ is a graded ring. Precisely, the graded pieces are given by\n\n$$\\left ( \\mathbb{S}^{\\bullet} V \\right )_a := \\bigoplus_{\\lambda\\ :\\ |\\lambda| = a} \\mathbb{S}_{\\lambda} V. $$\nMoreover, given two elements $g \\in \\mathbb{S}_{\\lambda} V$ and $h \\in \\mathbb{S}_{\\mu} V$ where $|\\lambda| = a$ and $|\\mu| = b$, if $N^{\\lambda,\\mu}_{\\nu} \\neq 0$ we get that the element $\\mathcal{M}^{\\lambda,\\mu}_{\\nu}(g \\otimes h)$ belongs to $\\mathbb{S}_{\\nu} V$, where $|\\nu| = |\\lambda| + |\\mu| = a+b$. Therefore the condition\n\n$$\\left ( \\mathbb{S}^{\\bullet} V \\right )_a \\cdot \\left ( \\mathbb{S}^{\\bullet} V \\right )_b \\subset \\left ( \\mathbb{S}^{\\bullet} V \\right )_{a+b}$$\nis satisfied.\n\\end{remark}\n\n\n\\section{Schur apolarity action} \\label{terzasez} \\setcounter{equation}{0} \\medskip\n\n\\noindent In this section we define the {\\it Schur apolarity action} which will be the foundation of our results. 
\\smallskip\n\nWe would like to define an apolarity action whose domain is $\\mathbb{S}_{\\lambda} V \\otimes \\mathbb{S}_{\\mu} V^*$, with $\\lambda$ and $\\mu$ suitably chosen, which extends both the symmetric and the skew-symmetric action. Naively it seems natural to require that $\\mu \\subset \\lambda$, that this map is \n\n$$ \\varphi : \\mathbb{S}_{\\lambda} V \\otimes \\mathbb{S}_{\\mu} V^* \\longrightarrow \\mathbb{S}_{\\lambda \/ \\mu} V $$\n\n\\noindent and that $\\varphi$ is the zero map if $\\mu \\not\\subset \\lambda$. \n\nBefore proceeding with the description, we have to recall the definition of the {\\it skew-symmetric apolarity action} given in \\cite{arrondo2021skew}.\n\n\\begin{definition} Given two integers $1 \\leq h \\leq k \\leq \\dim(V)$, the skew-symmetric apolarity action is the map\n\n$$ \\superwedge^{k} V \\otimes \\superwedge^{h} V^* \\longrightarrow \\superwedge^{k-h} V$$ \n$$ (v_1 \\wedge \\dots \\wedge v_k) \\otimes (\\alpha_1 \\wedge \\dots \\wedge \\alpha_h)\\ \\longmapsto \\sum_{R \\subset \\{1,\\dots,k\\}} \\operatorname{sign}(R) \\cdot \\det(\\alpha_i(v_j)_{j \\in R}) \\cdot v_{\\overline{R}} $$\n\n\\noindent where the sum runs over all the possible ordered subsets $R$ of $\\{1,\\dots,k\\}$ of cardinality $h$ and the set $\\overline{R}$ is the complementary set to $R$ in $\\{1,\\dots,k\\}$. The element $v_{\\overline{R}}$ is the wedge product of the vectors whose index is in $\\overline{R}$, and $\\operatorname{sign}(R)$ is the sign of the permutation which sends the sequence of integers from $1$ to $k$ to the sequence in which the elements of $R$ appear first already ordered, keeping the order of the other elements. 
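For instance, for $k = 2$ and $h = 1$ the action reduces to the usual contraction\n\n$$ (v_1 \\wedge v_2) \\otimes \\alpha\\ \\longmapsto\\ \\alpha(v_1) \\, v_2 - \\alpha(v_2) \\, v_1,$$\nwhere the minus sign on the second summand is $\\operatorname{sign}(R)$ for $R = \\{2\\}$, i.e. the sign of the permutation sending $(1,2)$ to $(2,1)$. 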
\n\\end{definition} \n\\begin{remark}\nThe skew-symmetric apolarity action can be regarded as the composition\n\n$$ \\superwedge^h V^* \\otimes \\superwedge^k V \\longrightarrow \\superwedge^h V^* \\otimes \\superwedge^{h} V \\otimes \\superwedge^{k-h} V \\longrightarrow \\superwedge^{k-h}V$$\n\n\\noindent where the first map is the tensor product of the identity on the first factor and the comultiplication of the exterior algebra regarded as a bialgebra on the second one. The second map acts with the identity on the last factor and acts with the determinantal pairing $\\superwedge^h V^* \\otimes \\superwedge^{h} V \\longrightarrow \\mathbb{C}$ on the first two. \n\\end{remark}\n\n\\begin{definition} \\label{Schurapodef} Let $\\mathbb{S}_{\\lambda} V \\subset \\superwedge_{\\lambda'} V$ and $\\mathbb{S}_{\\mu} V^*\\subset \\superwedge_{\\mu'} V^*$ be two Schur modules. Then the Schur apolarity action is defined as the map\n\n $$\\varphi: \\mathbb{S}^{\\bullet} V \\otimes \\mathbb{S}^{\\bullet} V^* \\longrightarrow \\mathbb{S}^{\\bullet} V$$\n\n\\noindent such that when restricted to a product $\\mathbb{S}_{\\lambda} V \\otimes \\mathbb{S}_{\\mu} V^*$, it is the trivial map if $\\mu \\not\\subset \\lambda$. Otherwise it is given by the restriction of the map\n\n$$\\tilde{\\varphi} : \\superwedge_{\\lambda'} V \\otimes \\superwedge_{\\mu'} V^* \\longrightarrow\n \\superwedge_{\\lambda' \/ \\mu'} V $$\n\n\\noindent that acts as the tensor product of the skew-symmetric apolarity actions $\\superwedge^{\\lambda_i'} V \\otimes \\superwedge^{\\mu_i'} V^* \\longrightarrow \\superwedge^{\\lambda_i'-\\mu_i'} V$.\n\\end{definition}\n\n\\begin{example} \\label{esschurapoflag} Consider the partitions $\\lambda = (3,2,1)$ and $\\mu = (2)$. We have that $\\mu \\subset \\lambda$ so that the Schur apolarity action $\\varphi$ restricted to $\\mathbb{S}_{\\lambda} V \\otimes \\mathbb{S}_{\\mu} V^*$ is not trivial. 
Consider the element\n\n$$ t = v_1 \\wedge v_2 \\wedge v_3 \\otimes v_1 \\wedge v_2 \\otimes v_1 \\in \\mathbb{S}_{(3,2,1)} V$$\nand let $\\alpha\\beta \\in \\Sym^2 V^*$ be any element. Then the Schur apolarity action acts as\n\n\\begin{align*}\n&\\mathcal{C}^{(3,2,1),(2)}_{t}(\\alpha \\beta) = \\\\\n&= \\sum_{i=1}^3 (-1)^{i+1} \\left [ \\alpha(v_i)\\beta(v_1) + \\alpha(v_1)\\beta(v_i) \\right ] \\cdot v_1 \\wedge \\dots \\wedge \\hat{v_i} \\wedge \\dots \\wedge v_3 \\otimes v_2 \\otimes v_1 + \\\\\n&- \\sum_{i=1}^3 (-1)^{i+1} \\left [ \\alpha(v_i)\\beta(v_2) + \\alpha(v_2)\\beta(v_i) \\right ] \\cdot v_1 \\wedge \\dots \\wedge \\hat{v_i} \\wedge \\dots \\wedge v_3 \\otimes v_1 \\otimes v_1.\n\\end{align*} \n\\end{example}\n\n\\begin{example}\nConsider $\\lambda = (2,2)$ and $\\mu = (1,1)$. Let\n$$ t = v_1 \\wedge v_2 \\otimes v_1 \\wedge v_3 + v_1 \\wedge v_3 \\otimes v_1 \\wedge v_2 \\in \\mathbb{S}_{(2,2)}\\mathbb{C}^4$$\n\n\\noindent and let $s = x_1 \\wedge x_2 \\in \\mathbb{S}_{(1,1)}(\\mathbb{C}^4)^*$. Then\n\n\\begin{align*} \\varphi(t \\otimes s)& = \\\\ & = \\det \\left (\\begin{matrix} x_1(v_1) & x_1(v_2) \\\\ x_2(v_1) & x_2(v_2) \\end{matrix}\\right ) v_1 \\wedge v_3 + \\det \\left (\\begin{matrix} x_1(v_1) & x_1(v_3) \\\\ x_2(v_1) & x_2(v_3) \\end{matrix}\\right ) v_1 \\wedge v_2 \\\\ & = v_1 \\wedge v_3. \\end{align*}\n\\end{example}\n\\medskip\n\n\\noindent A priori we are not able to say that the image is contained in the skew Schur module $\\mathbb{S}_{\\lambda \/ \\mu} V$. The following result clarifies the situation.\n\n\\begin{prop} \\label{imgSchurAction}\nLet $ \\varphi: \\mathbb{S}_{\\lambda} V \\otimes \\mathbb{S}_{\\mu} V^* \\to \\superwedge_{\\lambda'\/\\mu'} V$ be the Schur apolarity action restricted to $\\mathbb{S}_{\\lambda} V \\otimes \\mathbb{S}_{\\mu} V^*$. Then the image of $\\varphi$ is contained in $\\mathbb{S}_{\\lambda \/ \\mu} V$. 
\n\\end{prop}\n\\begin{proof}\n\nWe prove it for two elements of the bases of $\\mathbb{S}_{\\lambda}V$ and $\\mathbb{S}_{\\mu} V^*$ given by the projections $c_{\\lambda}$ and $c_{\\mu}$ respectively. In the following the letters $a,b,\\dots$ denote the elements of the filling of tableaux of shape $\\lambda$, while Greek letters $\\alpha,\\beta,\\dots$ denote the elements of the filling of tableaux of shape $\\mu$. We use the pictorial notation we have introduced when generating a basis of all these spaces in Section \\ref{primasez}. Performing the Schur apolarity action means erasing the diagram of $\\mu$ from the top left corner of the diagram of $\\lambda$, contracting the respective letters coming from the elements of $V$ with those of $V^*$. Every such skew tableau comes with a coefficient given by the contraction of the $\\alpha, \\beta, \\dots$ with the $a, b, \\dots$ in a specific order according to our notation. The image of the Schur apolarity action between two elements of the bases is then given by a sum of skew tableaux of shape $\\lambda \/ \\mu$ with proper fillings and coefficients. Remark that skew tableaux may contain {\\it disjoint} subdiagrams, i.e. diagrams which share neither a row nor a column. For example \n\n \\begin{center}\n\\ytableausetup{nosmalltableaux}\n\\begin{ytableau}\n*(gray) & *(gray) & c \\\\\nd & e \\\\\nf \n\\end{ytableau} \\end{center}\n\n\\noindent contains two disjoint subdiagrams, $\\lambda_1 = (1)$ and $\\lambda_2 = (2,1)$. Let us group the summands in the image in such a way that they all have the same coefficients and the respective disjoint subdiagrams share the same fillings. For example, consider the tableaux\n\n$$ U = \\begin{ytableau} a & b & c \\\\ d & e \\\\ f \\end{ytableau} \\quad \\text{and} \\quad S = \\begin{ytableau} \\alpha & \\beta \\end{ytableau} $$\n\n\\noindent and perform all the permutations according to the maps $c_{\\lambda}$ and $c_{\\mu}$. 
Then we have to contract every single summand of the first tableau with every summand of the second as we have said above. In all these contractions we may find a summand like\n\n\\begin{equation} \\label{SchurApoImg} \\alpha(a) \\beta(b) \\cdot \\left ( \\begin{ytableau} *(gray) & *(gray) & c \\\\ d & e \\\\ f \\end{ytableau} - \\begin{ytableau} *(gray) & *(gray) & c \\\\ f & e \\\\ d \\end{ytableau} + \\begin{ytableau} *(gray) & *(gray) & c \\\\ e & d \\\\ f \\end{ytableau} - \\begin{ytableau} *(gray) & *(gray) & c \\\\ f & d \\\\ e \\end{ytableau} \\right ). \\end{equation}\n\n\\noindent It is possible to find such elements for two reasons. At first, if we consider permutations of the bigger diagram which send the erased elements to the right positions, we can find disjoint subdiagrams which share the same fillings. Moreover, if the fixed elements are contracted by a single summand of the element in $\\mathbb{S}_{\\mu} V^*$, the coefficients are always the same. We have collected every single group of skew tableaux such that the fillings of every subdiagram are permuted according to the symmetrization rules of a std skew tableau of shape $\\lambda \/ \\mu$. Hence, every such group is the image via $c_{\\lambda \/ \\mu}$ of some element of $V^{\\otimes |\\lambda| - |\\mu|}$. In \\eqref{SchurApoImg}, the element is the image via $c_{\\lambda \/ \\mu}$ of the tableau\n\n$$ \\alpha(a) \\beta(b) \\cdot \\begin{ytableau} *(gray) & *(gray) & c \\\\ d & e \\\\ f \\end{ytableau} . $$ \n\n\\noindent In particular, remark that the signs due to permutations come out correctly, since permutations along rows of the skew tableau come from permutations along rows of the bigger diagram. The same happens for exchanges along columns, with the proper sign of the permutations. This proves that the image is contained in $\\mathbb{S}_{\\lambda \/ \\mu} V$. 
\n\\end{proof} \n\n\\begin{remark} Proposition \\ref{imgSchurAction} tells us that this action restricts to the known apolarity maps. In the case $\\lambda = (d)$ and $\\mu = (e)$ with $e \\leq d$ we have that $\\mathbb{S}_{\\lambda}V = \\Sym^d V$ and $\\mathbb{S}_{\\mu} V^* = \\Sym^e V^*$. It follows that $\\mathbb{S}_{\\lambda \/ \\mu} V = \\Sym^{d-e}V$ and by Proposition \\ref{imgSchurAction} the Schur apolarity action coincides with the classic apolarity action. Transposing all the diagrams, if $\\lambda = (1^d)$ and $\\mu = (1^e)$, then the Schur apolarity action coincides with the skew-symmetric apolarity action.\n\\end{remark}\n\n\\noindent We conclude with some basic definitions.\n\n\\begin{definition} \\label{schurcatdef}\nFix $f \\in \\mathbb{S}_{\\lambda}V$ and let $\\mu \\subset \\lambda$, where $\\mu$ and $\\lambda$ both have length strictly less than $\\dim(V)$. The map induced by the Schur apolarity action $\\varphi$, introduced in Definition \\ref{Schurapodef}, is\n\n$$ \\mathcal{C}^{\\lambda,\\mu}_f : \\mathbb{S}_{\\mu} V^* \\longrightarrow \\mathbb{S}_{\\lambda \/ \\mu} V$$\ndefined as $\\mathcal{C}^{\\lambda,\\mu}_f(g):=\\varphi(f \\otimes g)$ for any $g \\in \\mathbb{S}_{\\mu}V^*$, and is called the {\\it catalecticant map of $\\lambda$ and $\\mu$}, or simply {\\it catalecticant map} if no specification is needed.\n\\end{definition}\n\n\\begin{definition}\nThe {\\it apolar set to} $f \\in \\mathbb{S}_{\\lambda}V$ is the vector subspace $f^{\\perp}\\subset\\mathbb{S}^{\\bullet}V^*$ defined as\n\n$$ f^{\\perp} := \\bigoplus_{\\mu} \\ker \\mathcal{C}^{\\lambda,\\mu}_f. 
$$ \n\\end{definition}\nThe apolar set of a point $f \\in \\mathbb{S}_{\\lambda} V$ will turn out to be useful in computing its structured rank.\n\\bigskip\n\n\\section{Additive rank and algebraic varieties} \\label{quartasez} \\setcounter{equation}{0}\\bigskip\n\nThis section is devoted to the description of rational homogeneous varieties and the use of the Schur apolarity action. We first recall basic facts of the theory and we conclude with a result linking additive rank decompositions and the Schur apolarity.\n\n\\begin{definition} \\label{Xrango}\nLet $X \\subset \\mathbb{P}^N$ be a non-degenerate algebraic variety and let $p \\in \\mathbb{P}^N$. The $X$-{\\it rank of $p$} is\n$$r_X(p) := \\min \\{ r\\ :\\ p \\in \\langle p_1,\\dots,p_r \\rangle,\\ \\text{with}\\ p_1,\\dots,p_r \\in X \\}. $$\n\\end{definition} \\smallskip\n\n\\noindent Let us see in detail the rational homogeneous varieties we are interested in. Given the group $G=SL(V)$, a {\\it representation of} $G$ is a vector space $W$ together with a morphism $\\rho : G \\longrightarrow GL(W)$. We use the notation $g \\cdot w$ instead of $\\rho(g) w$, with $g \\in G$ and $w \\in W$. If we choose a basis for $V$, we may identify $G$ with $SL(n)$. Consider the subgroup $H$ of diagonal matrices $x = \\operatorname{diag}(x_1,\\dots,x_n)$. An element $w\\in W$ is called a {\\it weight vector} with {\\it weight} $\\alpha = (\\alpha_1,\\dots,\\alpha_n)$, with $\\alpha_i$ integers, if\n\n$$ x \\cdot w = x_1^{\\alpha_1} \\dots x_n^{\\alpha_n} w,\\ \\text{for all}\\ x \\in H. $$ \n\n\\noindent Since $H$ is a subgroup in which every element commutes, the space $W$ can be decomposed as\n\n$$ W = \\bigoplus_{\\alpha} W_{\\alpha}, $$\n\n\\noindent where $W_{\\alpha}$ is given by all weight vectors of weight $\\alpha$ and is called a {\\it weight space}. If $W = \\mathbb{S}_{\\lambda} V$, every element of the basis $c_{\\lambda}(w_S)$, with $S$ a sstd tableau of shape $\\lambda$, is a weight vector. 
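For instance, if $W = \\mathbb{S}_{(2,1)} V$, the element $v_1 \\wedge v_2 \\otimes v_1 \\in \\mathbb{S}_{(2,1)} V$ is a weight vector of weight $(2,1,0,\\dots,0)$, since\n\n$$ x \\cdot (v_1 \\wedge v_2 \\otimes v_1) = (x_1v_1) \\wedge (x_2v_2) \\otimes (x_1v_1) = x_1^2 x_2 \\, (v_1 \\wedge v_2 \\otimes v_1),\\ \\text{for all}\\ x \\in H. $$\nIn general, the weight of $c_{\\lambda}(w_S)$ records the number of occurrences of each index in the filling of $S$. 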
Let $B \\subset G$ be the subgroup of upper triangular matrices. A weight vector $w \\in W$ is called a {\\it highest weight vector} if $B \\cdot w = \\mathbb{C}^* \\cdot w$. It is well known that $W$ is an irreducible representation if and only if there is a unique highest weight vector up to scalar multiplication, see \\cite{fulton2013representation}. In the case of Schur modules we have\n\n\\begin{prop} \\label{hwvSchur}\nIf $W = \\mathbb{S}_{\\lambda}V$, then the only highest weight vector in $W$ is $c_{\\lambda}(v_U)$ up to scalar multiplication, where $U$ is the tableau of shape $\\lambda$ whose $i$-th row contains only the number $i$, and $v_U \\in V^{\\otimes |\\lambda|}$ is defined in Section \\ref{primasez}.\n\\end{prop} \n\n\\noindent For a proof see \\cite[Lemma 4, p. 113]{fulton1997young}. The theory of highest weights and geometry are closely related. Given a highest weight vector $v$, there is a subgroup of $G$\n\n$$ P = \\{g \\in G\\ :\\ g \\cdot [v] = [v] \\in \\mathbb{P}(W) \\} $$\n\n\\noindent called a {\\it parabolic subgroup}. The subgroup $P$ may not be normal and hence the quotient $G \/ P$ is just a space of cosets. Moreover, by the definition of $P$, the space $G\/P$ can be identified with the orbit $G \\cdot [v] \\subset \\mathbb{P}(W)$. It is a general fact that $G\/P$ is compact and hence a closed subvariety of $\\mathbb{P}(W)$, called a {\\it rational homogeneous variety}. \n\\smallskip\n\n\\noindent Coming back to the symmetric and skew-symmetric cases, the highest weight vectors of the modules $\\Sym^d V = \\mathbb{S}_{(d)} V$ and $\\superwedge^k V = \\mathbb{S}_{(1,\\dots,1)} V$, $k < \\dim(V)$, are the elements $v_1^d$ and $v_1 \\wedge \\dots \\wedge v_k$ respectively. The action of $G$ on these two elements generates the Veronese and the Grassmann varieties respectively. \n\n\\begin{example} Consider $\\lambda = (2,1)$. 
Then the respective highest weight vector is determined by the tableau\n\n$$ U =\\begin{ytableau} 1 & 1 \\\\ 2 \\end{ytableau} $$\n\n\\noindent and $c_{\\lambda}(v_U) = v_1 \\wedge v_2 \\otimes v_1$. The action of $G$ on such an element generates a closed orbit which may be identified with the variety\n\n\\begin{align*} \\mathbb{F}(1,2;V) = \\{ (V_1,V_2)\\ :\\ V_1 \\subset V_2 \\subset V,\\ \\dim(V_1)=1,\\ &\\dim(V_2)=2 \\} \\\\\n&\\subset \\mathbb{G}(1,V) \\times \\mathbb{G}(2,V) \\end{align*}\n\n\\noindent called the {\\it flag variety} of lines in planes in $V$. \n\\end{example}\nIn general, the minimal orbit inside $ \\mathbb{P}(\\mathbb{S}_{\\lambda}V)$ is the following Flag variety\n\n$$ \\mathbb{F}(k_1,\\dots,k_s; V) := \\{ (V_1,\\dots,V_s)\\ :\\ V_1 \\subset \\dots \\subset V_s \\subset V,\\ \\dim V_i = k_i \\} $$\n\n\\noindent embedded with $\\mathcal{O}(d_1,\\dots,d_s)$, where the $k_i$ and $d_i$ are integers determined by $\\lambda$ which we are going to describe in a moment. The Veronese and the Grassmann varieties appear as particular cases. Once a rational homogeneous variety $X \\subset \\mathbb{P}(\\mathbb{S}_{\\lambda}V)$ is fixed, we will refer to the $X$-rank as the $\\lambda${\\it -rank} in order to underline the connection with the respective representation in which $X$ is embedded. The points of $\\lambda$-rank $1$ are of the form \n\n$$ (v_1 \\wedge \\dots \\wedge v_{k_s})^{\\otimes d_s} \\otimes \\dots \\otimes (v_1 \\wedge \\dots \\wedge v_{k_1})^{\\otimes d_1} $$\n\n\\noindent and they represent the flag\n\n$$ \\langle v_1,\\dots,v_{k_1} \\rangle \\subset \\dots \\subset \\langle v_1,\\dots,v_{k_s} \\rangle.$$\n\n\\noindent Remark that the order of the factors in the tensor may seem inverted with respect to the flag, but it is coherent with the action of the group. 
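\\smallskip\n\n\\noindent For instance, if $\\lambda = (3,1)$ then $k_1 = 1$, $k_2 = 2$, $d_1 = \\lambda_1 - \\lambda_2 = 2$ and $d_2 = \\lambda_2 = 1$, so that the minimal orbit inside $\\mathbb{P}(\\mathbb{S}_{(3,1)}V)$ is the flag variety $\\mathbb{F}(1,2;V)$ embedded with $\\mathcal{O}(2,1)$. The points of $(3,1)$-rank $1$ are of the form\n\n$$ (v_1 \\wedge v_2) \\otimes v_1^{\\otimes 2} $$\n\n\\noindent and they represent the flag $\\langle v_1 \\rangle \\subset \\langle v_1, v_2 \\rangle$. 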
\\smallskip\n \n\\begin{remark} In the classic and skew-symmetric apolarity theories, to every point of symmetric or skew-symmetric rank $1$ respectively there is attached an ideal generated in the symmetric or exterior algebra respectively.\nIn analogy with the known apolarity theories, we give the following definition.\n\\end{remark}\n\n\n\\begin{definition} \\label{subpoint}\nLet $\\lambda = (\\lambda_1^{a_1},\\dots,\\lambda_k^{a_k})$ be a partition, where $i^j$ means that $i$ is repeated $j$ times, such that $a_1 + \\dots + a_k < n$. The variety $X \\subset \\mathbb{P}(\\mathbb{S}_{\\lambda} V)$ is the flag variety $\\mathbb{F}(h_1,\\dots,h_k; V)$ embedded with $\\mathcal{O}(d_1,\\dots,d_k)$ where\n\n$$ h_i = \\sum_{j=1}^i a_j,\\ \\text{and}\\ d_i = \\lambda_i - \\lambda_{i+1},\\ \\text{setting}\\ \\lambda_{k+1}=0. $$\n\n\\noindent A point $p \\in X$ is of the form\n\n$$ p = (v_1 \\wedge \\dots \\wedge v_{h_k})^{\\otimes d_k} \\otimes \\dots \\otimes (v_1 \\wedge \\dots \\wedge v_{h_1})^{\\otimes d_1}$$\n\n\\noindent and it represents the flag \n\n$$W_1 = \\langle v_1, \\dots, v_{h_1} \\rangle \\subset \\dots \\subset W_k = \\langle v_1,\\dots,v_{h_k} \\rangle. $$\n\n\\noindent We may assume that their annihilators are generated by\n\n$$W_1^{\\perp} = \\langle x_{h_1 + 1},\\dots, x_n \\rangle \\supset \\dots \\supset W_k^{\\perp} = \\langle x_{h_k+1},\\dots,x_n \\rangle. $$\nConsider the spaces\n\n$$\\Sym^1 W_k^{\\perp},\\ \\Sym^{d_k+1} W_{k-1}^{\\perp},\\dots,\\ \\Sym^{d_k+\\dots+d_2+1} W_1^{\\perp}$$\nwhich we will refer to as ``generators''. We define the {\\it ideal $I(p)$ associated to the point} $p$ as the ideal generated by the generators inside the graded ring $\\left(\\mathbb{S}^{\\bullet} V^*,\\ \\mathcal{M}^{\\lambda,\\mu}_{\\nu} \\right )$ as described in Remark \\ref{gradedring}, where $\\mathcal{M}^{\\lambda,\\mu}_{\\nu}$ are the multiplication maps introduced in Definition \\ref{defmapmult}. 
\n\\end{definition}\n\n\\begin{prop} \\label{PropIdealKill} Let $p \\in \\mathbb{S}_{\\eta} V$ be a point of $\\eta$-rank $1$. Then the associated ideal $I(p)$ is such that all its elements kill $p$ via the Schur apolarity action. \n\\end{prop}\n\n\\begin{proof}\nRemark at first that the generators kill $p$ via the Schur apolarity action. Then one has to prove that, given $g \\in I(p)$ such that $\\varphi(g \\otimes p)=0$, the product of $g$ and $h$ is still apolar to $p$ for any $h \\in \\mathbb{S}^{\\bullet} V^*$. Without loss of generality we can assume that $g \\in \\mathbb{S}_{\\lambda}V^*$ and $h \\in \\mathbb{S}_{\\mu} V^*$. Consider a partition $\\nu$ such that $N^{\\lambda,\\mu}_{\\nu} \\neq 0$. Let us denote the element $\\mathcal{M}^{\\lambda,\\mu}_{\\nu}(g \\otimes h)$ simply by $g \\cdot h$. Clearly if $\\nu \\not \\subset \\eta$, then $g \\cdot h$ kills $p$ by Definition \\ref{Schurapodef}. Otherwise, to prove that $g \\cdot h$ kills $p$ it is enough to recall the description of the multiplication maps given in Definition \\ref{defmapmult}, and to consider the diagram \\medskip\n\n\\begin{tikzcd}\n & \\mathbb{S}_{\\lambda}V^* \\otimes \\mathbb{S}_{\\mu}V^* \\otimes \\mathbb{S}_{\\eta} V \\arrow[ld] \\arrow[rd] & \\\\\n\\mathbb{S}_{\\nu} V^* \\otimes \\mathbb{S}_{\\eta} V \\arrow[rdd] & & \\mathbb{S}_{\\mu}V^* \\otimes \\mathbb{S}_{\\eta \/ \\lambda} V \\arrow[d] \\\\\n & & \\mathbb{S}_{\\nu \/ \\lambda} V^* \\otimes \\mathbb{S}_{\\eta \/ \\lambda} V \\arrow[ld] \\\\\n & \\mathbb{S}_{\\eta \/ \\nu}V & \n\\end{tikzcd}\n\n\\noindent from which it follows that $\\varphi(g\\cdot h \\otimes p)=0$, by the fact that $\\varphi( h \\otimes \\varphi(g \\otimes p)) = \\varphi( h \\otimes 0) =0$ for any $h \\in \\mathbb{S}_{\\mu}V^*$ by hypothesis.\n\\end{proof}\n\n\\begin{remark}\nGiven any element $f \\in \\mathbb{S}_{\\eta} V$, from the proof of Proposition \\ref{PropIdealKill} it follows that $f^{\\perp} \\subset \\mathbb{S}^{\\bullet} V^*$ is an ideal. 
Indeed, one needs only to prove the following fact. Given any element $g \\in f^{\\perp}$, which without loss of generality we can assume belongs to $f^{\\perp} \\cap \\mathbb{S}_{\\lambda} V^*$ for some partition $\\lambda$, and given any element $h \\in \\mathbb{S}_{\\mu}V^*$ for a partition $\\mu$, the product\n\n$$ g \\cdot h := \\mathcal{M}^{\\lambda,\\mu}_{\\nu}(g \\otimes h) \\in \\mathbb{S}_{\\nu} V^*,$$\nwhere we assume that $N^{\\lambda,\\mu}_{\\nu} \\neq 0$ and $\\nu \\subset \\eta$, is still an element of $f^{\\perp}$, i.e. it holds\n\n$$\\varphi(g \\cdot h \\otimes f)=0. $$\nTo prove this it is enough to apply verbatim the proof of Proposition \\ref{PropIdealKill}.\n\\end{remark}\n\n\\begin{remark}\nThe choice of the integers appearing on the symmetric powers of the generators depends on the embedding of the variety. For instance consider the flag variety $\\mathbb{F}(1,2,3;V)$ embedded with $\\mathcal{O}(1,1,1) $ in $\\mathbb{S}_{(3,2,1)} V$. Let $\\{v_1,\\dots,v_n\\}$ and $\\{x_1,\\dots,x_n\\}$ be bases of $V$ and $V^*$ respectively, dual to each other. Consider the $(3,2,1)$-rank $1$ tensor \n\n$$ t = v_1 \\wedge v_2 \\wedge v_3 \\otimes v_1 \\wedge v_2 \\otimes v_1. $$\nAccording to the notation of Definition \\ref{subpoint} we have that\n\n$$W_1 = \\langle v_1 \\rangle \\subset W_2 = \\langle v_1, v_2 \\rangle \\subset W_3 = \\langle v_1,v_2,v_3 \\rangle$$\nand\n$$W_1^{\\perp} =\\langle x_2,\\dots,x_n \\rangle \\supset W_2^{\\perp} = \\langle x_3,\\dots,x_n \\rangle \\supset W_3^{\\perp} = \\langle x_4,\\dots,x_n \\rangle.$$\nFrom Definition \\ref{subpoint} the generators are\n\n$$\\Sym^1 W_3^{\\perp},\\ \\Sym^2 W_2^{\\perp}\\ \\text{and}\\ \\Sym^3 W_1^{\\perp}.$$\nConsider the element $x_2^2x_3 \\in \\Sym^3 W_1^{\\perp}$. It is easy to see that $\\varphi(t \\otimes x_2^2x_3) = 0$ since either $x_2$ or $x_3$ is always evaluated on the part of $t$ representing the subspace $W_1$ of the flag. 
\n\n\\noindent On the other hand, consider the same variety embedded with $\\mathcal{O}(1,2,1) $ in $\\mathbb{S}_{(4,3,1)} V$. The element analogous to $t$ is\n\n$$s = v_1 \\wedge v_2 \\wedge v_3 \\otimes (v_1 \\wedge v_2)^{\\otimes 2} \\otimes v_1. $$\nRemark that the linear spaces $W_i$ and $W_i^{\\perp}$ are the same as before, with $i = 1,2, 3$. However it is easy to see that the element $x_2^2x_3$ is no longer apolar to $s$, indeed\n\n$$\\varphi(s \\otimes x_2^2x_3) = v_1 \\wedge v_2 \\otimes (v_1)^{\\otimes 3}.$$\nTherefore, we need to change the powers appearing on the generators as described in Definition \\ref{subpoint} to get as generators the spaces\n\n$$\\Sym^1 W_3^{\\perp},\\ \\Sym^2 W_2^{\\perp}\\ \\text{and}\\ \\Sym^4 W_1^{\\perp}.$$\nThis will allow us to evaluate elements of $W_1^{\\perp}$ on the part of the tensor $s$ representing $W_1$.\n\\end{remark}\n\n\\begin{remark}[Restriction to the known apolarities]\nLet $p = l^d \\in \\nu_d(\\mathbb{P}^{n-1})$ be a point of a Veronese variety. The point $p$ represents a line contained in $V$ and hence the respective annihilator is generated by $n-1$ linear forms. Applying Definition \\ref{subpoint} we get the ideal $I(p) \\subset \\mathbb{S}^{\\bullet}V^*$. In particular remark that the multiplication maps $\\Sym^d V^* \\otimes \\Sym^e V^* \\longrightarrow \\Sym^{d+e}V^*$ are involved in the definition. Hence it is not hard to check that the intersection $I(p) \\cap \\Sym^{\\bullet} V^*$ is the usual ideal of the point $p$ contained in $\\Sym^{\\bullet} V^*$ used in the classic apolarity theory. See \\cite{iarrobino1999power} for more details.\n\n\\noindent The same happens when we consider $p = v_1 \\wedge \\dots \\wedge v_k \\in \\superwedge^k V$ a point of a Grassmannian $\\mathbb{G}(k,V)$. Recall that such a point represents a $k$-dimensional subspace $W$ of $V$. The annihilator $W^{\\perp}$ of $W$ is $(\\dim(V)-k)$-dimensional. 
Applying Definition \\ref{subpoint} we get the ideal $I(p)$ generated inside $\\mathbb{S}^{\\bullet}V^*$ using the maps $\\mathcal{M}^{\\lambda,\\mu}_{\\nu}$ introduced in Definition \\ref{defmapmult}. In particular remark that the multiplication maps $\\superwedge^d V^* \\otimes \\superwedge^e V^* \\longrightarrow \\superwedge^{d+e} V^*$ are involved in the definition. Hence the intersection $I(p) \\cap \\superwedge^{\\bullet} V^*$ is the usual ideal of the point $p$ contained in $\\superwedge^{\\bullet} V^*$ used in the skew-symmetric apolarity theory. See \\cite{arrondo2021skew} for more details.\n\\end{remark}\n\n\\begin{example}\nLet $ V = \\mathbb{C}^4$ and $\\lambda = (2,2)$. The minimal orbit inside $\\mathbb{P}(\\mathbb{S}_{(2,2)} \\mathbb{C}^4)$ is the Grassmann variety $X = (\\mathbb{G}(2,\\mathbb{C}^4),\\mathcal{O}(2))$. Let $\\{v_1,\\dots,v_4 \\}$ be a basis of $\\mathbb{C}^4$ and $\\{x_1,\\dots,x_4\\}$ be the respective dual basis of $(\\mathbb{C}^4)^*$. Let $p = (v_1 \\wedge v_2)^{\\otimes 2} \\in \\mathbb{S}_{(2,2)} \\mathbb{C}^4$ be a point of $(2,2)$-rank $1$. The element $p$ is associated to the sstd tableau\n\\begin{center} \\begin{ytableau} 1 & 1 \\\\ 2 & 2 \\end{ytableau} \\end{center}\nand it represents the subspace $W_1$ spanned by $v_1$ and $v_2$. Hence the annihilator is spanned by $x_3$ and $x_4$. One can readily check that \n\n$$ I(p) \\cap \\mathbb{S}_{(1)} (\\mathbb{C}^4)^* = \\langle x_3,x_4 \\rangle,$$\n$$ I(p) \\cap \\mathbb{S}_{(1,1)} (\\mathbb{C}^4)^* = \\langle x_i \\wedge x_j\\ :\\ j \\in \\{3,4\\} \\rangle,$$\n$$ I(p) \\cap \\mathbb{S}_{(2)} (\\mathbb{C}^4)^* = \\langle x_i x_j\\ :\\ j \\in \\{3,4\\} \\rangle$$\nusing the maps $\\mathbb{S}_{(1)} (\\mathbb{C}^4)^* \\otimes \\mathbb{S}_{(1)} (\\mathbb{C}^4)^* \\longrightarrow \\mathbb{S}_{\\mu}(\\mathbb{C}^4)^*$, where $\\mu = (2), (1,1)$, restricted to $\\Sym^1 W_1^{\\perp}$. \n\n\\noindent Consider now $\\mu = (2,1)$. 
We have that $I(p) \\cap \\mathbb{S}_{(2,1)} (\\mathbb{C}^4)^*$ is given as the span of the images of the maps $\\mathcal{M}^{(1),(2)}_{(2,1)}$ and $\\mathcal{M}^{(1),(1,1)}_{(2,1)}$ restricted to $I(p)$ in one of the factors in the domain. For instance, the map $\\mathcal{M}^{(1),(1,1)}_{(2,1)}$ restricted to $\\langle x_3, x_4 \\rangle \\otimes \\mathbb{S}_{(1,1)} (\\mathbb{C}^4)^*$ has as image\n\n\\begin{align*} \\langle x_i \\wedge \\alpha \\otimes \\beta + \\beta \\wedge \\alpha \\otimes x_i -&x_i \\wedge \\beta \\otimes \\alpha - \\alpha \\wedge \\beta \\otimes x_i,\\ \\\\ \n&\\text{for all}\\ \\alpha \\wedge \\beta \\in \\mathbb{S}_{(1,1)} (\\mathbb{C}^4)^*,\\ i = 3,\\ 4 \\rangle. \\end{align*}\nOn the other hand, if we consider the same map restricted to $\\mathbb{S}_{(1)} (\\mathbb{C}^4)^* \\otimes (I(p) \\cap \\mathbb{S}_{(1,1)} (\\mathbb{C}^4)^* )$, we get as image \n\n$$ \\langle \\alpha \\wedge \\beta \\otimes \\gamma - \\alpha \\wedge \\gamma \\otimes \\beta,\\ \\text{for all}\\ \\alpha \\in V^*,\\ \\beta \\wedge \\gamma \\in I(p) \\cap \\mathbb{S}_{(1,1)} (\\mathbb{C}^4)^* \\rangle.$$\nIf one considers the span of all such images together with the ones obtained from the map $\\mathcal{M}^{(1),(2)}_{(2,1)}$, one gets that\n\n$$ I(p) \\cap \\mathbb{S}_{(2,1)} (\\mathbb{C}^4)^* = \\langle c_{(2,1)}(e_S)\\ :\\ S\\ \\text{is a sstd tableau in which an entry is 3 or 4} \\rangle.$$\nAnalogously one can compute that\n\n$$ I(p) \\cap \\mathbb{S}_{(2,2)} (\\mathbb{C}^4)^* = \\langle c_{(2,2)}(e_S)\\ :\\ S\\ \\text{is a sstd tableau in which an entry is 3 or 4} \\rangle.$$\nNote that all the elements in such spaces kill $p$ via the Schur apolarity action.\n\\end{example}\n\\smallskip\n\n\\begin{example}\nLet $ V = \\mathbb{C}^3$ and $\\lambda = (2,1)$. The minimal orbit inside $\\mathbb{P}(\\mathbb{S}_{(2,1)} \\mathbb{C}^3)$ is the Flag variety $X = (\\mathbb{F}(1,2;\\mathbb{C}^3),\\mathcal{O}(1,1))$. 
Let $\\{v_1,v_2,v_3 \\}$ be a basis of $\\mathbb{C}^3$ and $\\{x_1,x_2,x_3\\}$ be the respective dual basis of $(\\mathbb{C}^3)^*$. Let $p = v_1 \\wedge v_2 \\otimes v_1 \\in \\mathbb{S}_{(2,1)} \\mathbb{C}^3$ be a point of $(2,1)$-rank $1$. It is the element associated to the sstd tableau\n\\begin{center} \\begin{ytableau} 1 & 1 \\\\ 2 \\end{ytableau} \\end{center}\nand it represents the line $W_1$ generated by $v_1$ contained in the subspace $W_2$ spanned by $v_1$ and $v_2$. Hence the annihilators are $W_1^{\\perp} = \\langle x_2,\\ x_3 \\rangle$ and $W_2^{\\perp} = \\langle x_3 \\rangle$. In this case the generators are\n$$\\Sym^1 W_2^{\\perp} = \\langle x_3 \\rangle\\ \\text{and}\\ \\Sym^2 W_1^{\\perp} = \\langle x_2^2, x_2x_3, x_3^2 \\rangle. $$\nFor any $\\mu \\subset (2,1)$ we can check that the subspace $I(p)$ associated to $p$ is such that\n\n$$ I(p) \\cap \\mathbb{S}_{(1)} (\\mathbb{C}^3)^* = \\langle x_3 \\rangle,$$\n$$ I(p) \\cap \\mathbb{S}_{(1,1)} (\\mathbb{C}^3)^* = \\langle x_1 \\wedge x_3, x_2 \\wedge x_3 \\rangle,$$\n$$ I(p) \\cap \\mathbb{S}_{(2)} (\\mathbb{C}^3)^* = \\langle x_1 x_3, x_2 x_3, x_3^2, x_2^2 \\rangle,$$\n$$ I(p) \\cap \\mathbb{S}_{(2,1)} (\\mathbb{C}^3)^* = \\langle c_{(2,1)}(e_S)\\ :\\ S\\ \\text{is a sstd tableau in which an entry is 3} \\rangle.$$\n\\end{example} \\smallskip\n\nAs recalled in the Introduction, in the classic (respectively skew-symmetric) apolarity theory a result called the {\\it apolarity lemma} (respectively the {\\it skew-symmetric apolarity lemma}) links the symmetric (respectively skew-symmetric) rank of a point with the equivalent condition of the inclusion of the ideal of a set of points of rank $1$ inside the apolar set of the point. We now approach an analogous result called the {\\it lemma of Schur apolarity}, cf. Theorem \\ref{LemmaSchurApo}. At first we need a preparatory lemma.\n\n\\begin{lemma} \\label{topdeg}\nLet $\\lambda$ be a partition of length less than $n$. 
Let $p \\in X \\subset \\mathbb{P}(\\mathbb{S}_{\\lambda} V)$ be a point of $\\lambda$-rank $1$. Then we have the equality \n\n$$ I(p)_{\\lambda} = (p^{\\perp})_{\\lambda} $$\n\n\\noindent where $p^{\\perp}$ is introduced in Definition \\ref{schurcatdef}, and where $I(p)_{\\lambda}$ and $(p^{\\perp})_{\\lambda}$ denote $I(p) \\cap \\mathbb{S}_{\\lambda} V^*$ and $(p^{\\perp})\\cap \\mathbb{S}_{\\lambda} V^*$ respectively. \n\\end{lemma}\n\n\\begin{proof}\nWe present here a proof only for the case in which $\\lambda$ is a partition whose diagram is the union of two rectangles, the general case being identical but more cumbersome in terms of notation. \n\n\\noindent Assume that $\\lambda$ is given by the union of two rectangles. This means that $\\lambda = ((d+e)^k,e^{h-k})$, where $d,e >0$ and $0 < k < h$.\n\\end{proof}\n\n\\begin{remark}\nRecall that $\\dim \\mathbb{S}_{(a^k)} \\mathbb{C}^n > \\dim \\mathbb{S}_{(b^k)} \\mathbb{C}^n$ if $a > b$. Hence the most square catalecticant map is the one given by $h = \\lceil \\frac{d}{2} \\rceil$. \n\\end{remark} \\smallskip\n\nWe discuss now the case of any flag variety. The situation here is a bit different as the following example shows.\n\n\\begin{example} \\label{EsempioFakeRk2}\nConsider the complete flag variety $\\mathbb{F}(1,2,3;\\mathbb{C}^4)$ embedded with $\\mathcal{O}(1,1,1)$ in $\\mathbb{P}(\\mathbb{S}_{(3,2,1)} \\mathbb{C}^4)$. Consider the element $t$\n\n\\begin{equation} \\label{FakeRk2} t = v_1 \\wedge v_2 \\wedge v_3 \\otimes v_1 \\wedge v_2 \\otimes v_1 + v_1 \\wedge v_2 \\wedge v_3 \\otimes v_2 \\wedge v_3 \\otimes v_3 \\in \\mathbb{S}_{(3,2,1)} \\mathbb{C}^4. \\end{equation}\nIn \\eqref{FakeRk2} the element is written as a sum of two $(3,2,1)$-rank $1$ elements representing the flags \n\n$$\\langle v_1 \\rangle \\subset \\langle v_1, v_2 \\rangle \\subset \\langle v_1,v_2,v_3 \\rangle, \\quad \\langle v_3 \\rangle \\subset \\langle v_2, v_3 \\rangle \\subset \\langle v_1,v_2,v_3 \\rangle.$$\nHence the $\\lambda$-rank of $t$ is at most $2$. 
Consider for a moment only the first addend\n\n$$ t_1 = v_1 \\wedge v_2 \\wedge v_3 \\otimes v_1 \\wedge v_2 \\otimes v_1 $$\nand consider the catalecticant map $\\mathcal{C}^{(3,2,1),(2)}_{t_1} : \\mathbb{S}_{(2)} (\\mathbb{C}^4)^* \\longrightarrow \\mathbb{S}_{(3,2,1)\/(2)} \\mathbb{C}^4$. From Example \\ref{esschurapoflag} we have that\n\n\\begin{align*}\n&\\mathcal{C}^{(3,2,1),(2)}_{t_1}(\\alpha \\beta) = \\\\\n&= \\sum_{i=1}^3 (-1)^{i+1} \\left [ \\alpha(v_i)\\beta(v_1) + \\alpha(v_1)\\beta(v_i) \\right ] \\cdot v_1 \\wedge \\dots \\wedge \\hat{v_i} \\wedge \\dots \\wedge v_3 \\otimes v_2 \\otimes v_1 + \\\\\n&- \\sum_{i=1}^3 (-1)^{i+1} \\left [ \\alpha(v_i)\\beta(v_2) + \\alpha(v_2)\\beta(v_i) \\right ] \\cdot v_1 \\wedge \\dots \\wedge \\hat{v_i} \\wedge \\dots \\wedge v_3 \\otimes v_1 \\otimes v_1.\n\\end{align*}\nIt is easy to see that\n\n$$\\ker \\mathcal{C}^{(3,2,1),(2)}_{t_1} = \\langle x_1x_4, x_2x_4,x_3x_4,x_4^2,x_3^2 \\rangle$$\nand hence $ \\operatorname{rk}(\\mathcal{C}^{(3,2,1),(2)}_{t_1}) = 5$. Clearly this equality holds for any $t_1 \\in \\mathbb{F}(1,2,3;\\mathbb{C}^4)$. Coming back to $t$, if we consider the same catalecticant map we get\n\n$$\\ker \\mathcal{C}^{(3,2,1),(2)}_t = \\langle x_1x_4, x_2x_4,x_3x_4,x_4^2 \\rangle$$\nwhich means that $ \\operatorname{rk}(\\mathcal{C}^{(3,2,1),(2)}_{t}) = 6$. This implies that $t$ cannot have $(3,2,1)$-rank $1$ and hence the decomposition \\eqref{FakeRk2} is minimal. \n\n\\noindent Differently from the case of Grassmann varieties, chopping a column of the diagram of $(3,2,1)$ does not give the right information about the $(3,2,1)$-rank $1$ tensors. 
Indeed if we consider the partition $(1,1,1) \\subset (3,2,1)$ together with the respective catalecticant map we have that\n\n$$ \\ker \\mathcal{C}^{(3,2,1),(1,1,1)}_t = \\langle x_1 \\wedge x_2 \\wedge x_4, x_1 \\wedge x_3\\wedge x_4, x_2 \\wedge x_3 \\wedge x_4 \\rangle$$\nand hence $\\operatorname{rk}(\\mathcal{C}^{(3,2,1),(1,1,1)}_{t}) = 1$ which happens also for any $t \\in \\mathbb{F}(1,2,3;\\mathbb{C}^4)$. This is because the two flags appearing in the decomposition share the same biggest subspace $\\langle v_1,v_2,v_3 \\rangle$.\n\\end{example}\n\nEven though it seems that there is no connection between the rank of catalecticant maps and the $\\lambda$-rank of tensors, we can give a lower bound on the $\\lambda$-rank of a tensor. Let $X= \\left(\\mathbb{F}(n_1,\\dots,n_s;\\mathbb{C}^n),\\mathcal{O}(d_1,\\dots,d_s)\\right)$ be the minimal orbit in $\\mathbb{P}(\\mathbb{S}_{\\lambda} V)$. With this notation we have that\n\n$$ \\lambda =\\left ((d_1+\\dots+d_s)^{n_1},(d_2+\\dots+d_s)^{n_2-n_1},\\dots,d_s^{n_s-n_{s-1}} \\right). $$\nHence the Young diagram of $\\lambda$ is given by $d_s$ columns of length $n_s$, then $d_{s-1}$ columns of length $n_{s-1}$ and so on up to $d_1$ columns of length $n_1$. For example if $X=\\left(\\mathbb{F}(1,2,4;\\mathbb{C}^5),\\mathcal{O}(1,2,2)\\right)$, then $\\lambda =(5,4,2^2)$ and its Young diagram is \n\n$$ \\ydiagram{5,4,2,2}.$$\nLet $\\lambda$ be a partition such that it has $d_i$ columns of length $n_i$, with $1 \\leq i \\leq s$. Consider the catalecticant map determined by $(e^{n_s}) \\subset \\lambda$ with $e \\leq d_s$, i.e. the one that removes the first $e$ columns of length $n_s$ from the diagram of $\\lambda$. 
Then it is easy to see that $\\mathbb{S}_{\\lambda \/ (e^{n_s})} V \\simeq \\mathbb{S}_{\\mu_e} V$\nwhere\n\n\\begin{align}\\label{FormulambdaU}\\mu_e = ((d_1+\\dots+d_{s-1}+(d_s-e))^{n_1}&,\\ (d_2+\\dots+d_{s-1}+(d_s-e))^{n_2-n_1},\\dots \\\\ &\\dots, (d_{s-1}+(d_s-e))^{n_{s-1}-n_{s-2}},(d_s-e)^{n_s-n_{s-1}}) \\nonumber\n\\end{align}\ni.e. $\\lambda$ with the first $e$ columns of length $n_s$ removed. \n\n\n\n\\begin{algorithm} \\label{algoritmo2} In order to get a bound on the $\\lambda$-rank using catalecticants, we provide an algorithm that computes the ranks of a sequence of ``consecutive'' catalecticant maps; the last registered rank is a lower bound on the $\\lambda$-rank of the given tensor $t$.\n\n\\noindent Before proceeding with the procedure, we need a preparatory fact.\n\n\\begin{prop} \\label{PropCheServe}\nLet $\\lambda$ be a partition with $d_i$ columns of length $n_i$, with $i= 1,\\dots,s$, and let $t \\in \\mathbb{S}_{\\lambda} V$. If \n\n$$\\operatorname{rk} \\mathcal{C}^{\\lambda,\\left ( \\lceil \\frac{d_s}{2} \\rceil \\right)^{n_s}}_t = 1,\\ \\text{then for any}\\ 1 \\leq e \\leq d_s,\\ \\text{we have that}\\ \\operatorname{rk} \\mathcal{C}^{\\lambda,(e^{n_s})}_t = 1.$$\n\\end{prop}\n\\begin{proof}\nLet us prove the contrapositive, i.e. if there exists an $e \\in \\{1,\\dots,d_s\\}$ such that \n\n$$\\operatorname{rk} \\mathcal{C}^{\\lambda,(e^{n_s})}_t > 1,\\ \\text{then}\\ \\operatorname{rk} \\mathcal{C}^{\\lambda,\\left(\\lceil \\frac{d_s}{2} \\rceil \\right)^{n_s}}_t > 1. $$\nAssume that for a certain $e \\in \\{1,\\dots,d_s\\}$ it happens that $\\operatorname{rk} \\mathcal{C}^{\\lambda,(e^{n_s})}_t > 1$. We can assume that the $\\lambda$-rank of $t$ is at least $2$: otherwise it is easy to see that the rank of $\\mathcal{C}^{\\lambda,(e^{n_s})}_t$ would be equal to $1$, against the hypothesis. 
Assume then that $t = t_1 + \\dots + t_r$ has $\\lambda$-rank $r \\geq 2$, where every $t_i$ has $\\lambda$-rank $1$ and it is written as\n\n$$ t_i = (v_{1,i} \\wedge \\dots \\wedge v_{n_s,i})^{\\otimes d_s} \\otimes \\dots \\otimes (v_{1,i} \\wedge \\dots \\wedge v_{n_1,i})^{\\otimes d_1}$$\nfor some vectors $v_{1,i},\\dots,v_{n_s,i} \\in V$, for all $i = 1,\\dots,r$. The catalecticant map $\\mathcal{C}^{\\lambda,(e^{n_s})}_t$ clearly acts on the first products $(v_{1,i} \\wedge \\dots \\wedge v_{n_s,i})^{\\otimes d_s}$ of every $t_i$. Since the rank of such map is at least $2$, we can find at least two points $t_i$ and $t_j$ such that\n\n$$(v_{1,i} \\wedge \\dots \\wedge v_{n_s,i}) \\neq (v_{1,j} \\wedge \\dots \\wedge v_{n_s,j})$$\nand also such that the images via the catalecticant map of the respective duals $(x_{1,i} \\wedge \\dots \\wedge x_{n_s,i})^{\\otimes e}$ and $(x_{1,j} \\wedge \\dots \\wedge x_{n_s,j})^{\\otimes e}$ are linearly independent. \nConsider now the catalecticant map \n\n$$\\mathcal{C}^{\\lambda,\\left(\\lceil \\frac{d_s}{2} \\rceil \\right)^{n_s}}_t : \\mathbb{S}_{\\left(\\lceil \\frac{d_s}{2} \\rceil \\right)^{n_s}} V^* \\longrightarrow \\mathbb{S}_{\\mu_{\\lceil \\frac{d_s}{2} \\rceil}} V$$\nand the elements $(x_{1,i} \\wedge \\dots \\wedge x_{n_s,i})^{\\otimes \\lceil \\frac{d_s}{2} \\rceil}$ and $(x_{1,j} \\wedge \\dots \\wedge x_{n_s,j})^{\\otimes \\lceil \\frac{d_s}{2} \\rceil}$. It is clear that they are linearly independent and that their images via $\\mathcal{C}^{\\lambda,\\left(\\lceil \\frac{d_s}{2} \\rceil \\right)^{n_s}}_t$ are also linearly independent. Therefore we get that the rank of $\\mathcal{C}^{\\lambda,\\left(\\lceil \\frac{d_s}{2} \\rceil \\right)^{n_s}}_t$ is at least $2$. 
By contraposition we get that if $\\operatorname{rk} \\mathcal{C}^{\\lambda,\\left(\\lceil \\frac{d_s}{2} \\rceil \\right)^{n_s}}_t = 1,$ then $\\operatorname{rk} \\mathcal{C}^{\\lambda,(e^{n_s})}_t = 1$ for any $1 \\leq e \\leq d_s$.\nThis concludes the proof.\n\\end{proof}\n\n\\noindent Now we describe the algorithm which provides a lower bound on the $\\lambda$-rank. Let $t \\in \\mathbb{S}_{\\lambda} V$ and consider\nthe catalecticant map that removes half of the $d_s$ columns of maximal length from the diagram of $\\lambda$, rounded up to the next integer if needed. Compute the rank of such a catalecticant map \n\n$$\\mathcal{C}^{\\lambda,(\\lceil \\frac{d_s}{2} \\rceil^{n_s})}_t : \\mathbb{S}_{(\\lceil \\frac{d_s}{2} \\rceil^{n_s})} V^* \\longrightarrow \\mathbb{S}_{\\mu_{\\lceil \\frac{d_s}{2} \\rceil}} V,$$\nwhere $\\mu_{\\lceil \\frac{d_s}{2} \\rceil}$ denotes the partition as in \\eqref{FormulambdaU}. If the rank of the catalecticant is strictly greater than $1$, the algorithm stops and outputs this number. Such a number is a lower bound on the $\\lambda$-rank of $t$. Otherwise, by Proposition \\ref{PropCheServe} we get that\n\n$$\\operatorname{rk} \\mathcal{C}^{\\lambda,(e^{n_s})}_t = 1,$$\nfor any $1 \\leq e \\leq d_s$. Hence we can consider the catalecticant with $e = d_s$ and its image will be generated by a unique element up to scalar multiplication. Such an image is contained in $\\mathbb{S}_{\\mu_{d_s}} V$, where $\\mu_{d_s}$ denotes the partition with $d_i$ columns of length $n_i$, with $i = 1,\\dots,s-1$, in accordance with the notation used in \\eqref{FormulambdaU}. Choose a generator $t_1$ of the image of the chosen catalecticant map. Then set $\\lambda = \\mu_{d_s}$ and $t = t_1$ and repeat the previous steps. 
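\n\n\\noindent To fix ideas, the control flow of the procedure just described can be sketched as follows. This is only an illustrative sketch: the two helpers \\texttt{cat\\_rank} and \\texttt{cat\\_image\\_generator}, standing for the rank and for a generator of the image of the catalecticant map $\\mathcal{C}^{\\lambda,(e^{n_i})}_t$, are hypothetical oracles passed in as arguments and are not implemented here.

```python
from math import ceil

def lambda_rank_lower_bound(t, degs, dims, cat_rank, cat_image_generator):
    """Lower bound on the lambda-rank of t via consecutive catalecticants.

    degs = (d_1, ..., d_s) and dims = (n_1, ..., n_s) encode the partition
    lambda with d_i columns of length n_i.  The oracles are hypothetical:
    cat_rank(t, e, n) should return the rank of C^{lambda,(e^n)}_t, and
    cat_image_generator(t, e, n) a generator of its one-dimensional image.
    """
    degs, dims = list(degs), list(dims)
    while degs:
        d, n = degs[-1], dims[-1]          # rectangle of maximal length
        r = cat_rank(t, ceil(d / 2), n)    # remove half of the d columns
        if r > 1:
            return r                       # lower bound on the lambda-rank
        # rank 1: the map removing all d columns has a one-dimensional image;
        # replace t by a generator and drop the last rectangle of lambda
        t = cat_image_generator(t, d, n)
        degs.pop()
        dims.pop()
    return 1                               # every registered rank was 1
```

With oracles computing the actual ranks and image generators, the returned integer would coincide with the output of Algorithm \\ref{algoritmo2}.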
\\smallskip\n\n\\noindent Generalizing, at the $i$-th step we set $\\lambda = \\lambda_i$, where $\\lambda_i$ is the diagram with $d_j$ columns of length $n_j$ for $j = 1,\\dots,s-i+1$, and we set $t = t_i$, where $t_i \\in \\mathbb{S}_{\\lambda_i} V$ is the generator of the one-dimensional image obtained from the previous step of the algorithm. Then compute \n\n$$\\operatorname{rk} \\left ( \\mathcal{C}^{\\lambda_i,\\left \\lceil \\frac{d_{s-i+1}}{2} \\right \\rceil^{n_{s-i+1}}}_{t_i} \\right ).$$ \nIf it is strictly greater than $1$ the algorithm outputs this number and it stops. Otherwise, consider the catalecticant map that removes all the $d_{s-i+1}$ columns of length $n_{s-i+1}$. Compute the generator $t_{i+1} \\in \\mathbb{S}_{\\lambda_{i+1}} V$ of the image of this last catalecticant map, set $\\lambda = \\lambda_{i+1}$ and $t = t_{i+1}$, and move to the $(i+1)$-th step. \n\n\\noindent If the rank of every catalecticant map we compute along the procedure is $1$, the algorithm outputs $1$. Obviously the output of the algorithm is an integer greater than or equal to $1$ and it is a lower bound on the $\\lambda$-rank of $t$. Indeed, preliminarily we have\n\n\\begin{prop} \\label{RanghiConsecutivi}\nConsider the flag variety $\\mathbb{F}(n_1,\\dots,n_s;V)$ embedded with $\\mathcal{O}(d_1,\\dots,d_s)$ in $\\mathbb{P}(\\mathbb{S}_{\\lambda} V)$. Let $t \\in \\mathbb{S}_{\\lambda} V$ be any point. If $t$ has $\\lambda$-rank $1$, then the Algorithm \\ref{algoritmo2} outputs $1$. The converse is true if $d_i \\geq 2$ for any $i = 1,\\dots,s$. \n\\end{prop}\n\n\\begin{proof}\nAssume that $t$ has $\\lambda$-rank $1$, i.e.\n\n$$ t = (v_1 \\wedge \\dots \\wedge v_{n_s})^{\\otimes d_s} \\otimes \\dots \\otimes (v_1 \\wedge \\dots \\wedge v_{n_1})^{\\otimes d_1} $$\nfor some $v_i \\in V$. The image of the first catalecticant map of the algorithm, i.e. 
the one determined by $e=1$ and the partition $(1^{n_s})$, is the span of\n\n$$ (v_1 \\wedge \\dots \\wedge v_{n_{s}})^{\\otimes d_{s}-1} \\otimes (v_1 \\wedge \\dots \\wedge v_{n_{s-1}})^{\\otimes d_{s-1}} \\otimes \\dots \\otimes (v_1 \\wedge \\dots \\wedge v_{n_1})^{\\otimes d_1}$$\nand hence the map has rank $1$ since $t$ is non zero. The same happens for the next steps until $e = \\lfloor \\frac{d_s}{2} \\rfloor$. Therefore consider the catalecticant map that removes all the first $d_s$ columns and consider the only generator $t_{s-1}$ of its image\n\n$$ t_{s-1} = (v_1 \\wedge \\dots \\wedge v_{n_{s-1}})^{\\otimes d_{s-1}} \\otimes \\dots \\otimes (v_1 \\wedge \\dots \\wedge v_{n_1})^{\\otimes d_1}. $$\nAt this point set $t = t_{s-1}$ and $\\lambda = \\mu_{d_s}$ and repeat the previous steps. It is obvious that the algorithm will not stop when computing the rank of any catalecticant map since any such number is equal to $1$. Therefore it eventually outputs $1$.\n\n\\noindent For the converse part, assume that $d_i \\geq 2$ for any $i$ and suppose that the output of the algorithm is $1$. Suppose that $t$ has a $\\lambda$-rank $r$ decomposition $t = t_1 + \\dots + t_r$, with $t_i$ of $\\lambda$-rank $1$. We may assume that\n\n$$ t_i = (v_{1,i} \\wedge \\dots \\wedge v_{n_s,i})^{\\otimes d_s} \\otimes \\dots \\otimes (v_{1,i} \\wedge \\dots \\wedge v_{n_1,i})^{\\otimes d_1}$$\nfor some vectors $v_{1,i},\\dots,v_{n_s,i} \\in V$, for all $i = 1,\\dots,r$. For any $1 \\leq e \\leq d_s$, the image of the catalecticant map is one dimensional by Proposition \\ref{PropCheServe} and it is contained in $\\langle t_1^e,\\dots,t_r^e \\rangle$, where $t_i^e$ denotes\n\n$$ t_i^e = (v_{1,i} \\wedge \\dots \\wedge v_{n_s,i})^{\\otimes d_s-e} \\otimes \\dots \\otimes (v_{1,i} \\wedge \\dots \\wedge v_{n_1,i})^{\\otimes d_1}$$\nfor all $i = 1,\\dots, r$. 
We can assume that the element $(x_{1,1} \\wedge \\dots \\wedge x_{n_s,1})^{\\otimes e}$, defined by taking the dual elements $x_{i,j}$ to the vectors appearing in the biggest subspace associated to $t_1$, is not apolar to $t$. Hence its image is a generator of the one-dimensional image of the respective catalecticant map. If such an element of $\\mathbb{S}_{(e^{n_s})} V^*$ is the only one with this property, then we can already conclude that the biggest subspace must be the same for any $t_i$. Otherwise, if we could find another element defined in the same way as $(x_{1,1} \\wedge \\dots \\wedge x_{n_s,1})^{\\otimes e}$ but using this time another $t_i$, then its image must be a scalar multiple of the image we have already obtained. Hence a certain linear combination of these two elements is apolar to $t$. On the other hand, in the respective images of the two selected elements of $\\mathbb{S}_{(e^{n_s})} V^*$ there are also the tensors $t_1^e$ and $t_i^e$ which are linearly independent, unless $e = d_s$ and $t_1^{d_s} = t_i^{d_s}$, which happens only if the points $t_1$ and $t_i$ share the same remaining part of the flag. However, since we are assuming $d_s > 1$, and since the algorithm is set to pick $\\lceil \\frac{d_s}{2} \\rceil$, we are always considering $e < d_s$, which allows us to avoid such a problem. Hence, if the rank of the catalecticant is $1$, we get that all the subspaces are the same. Therefore the image of the catalecticant map is one-dimensional and is generated by $t' = t_1^{d_s} + \\dots + t_r^{d_s}$. At this point the proof is just a repetition of the previous reasoning until one gets $t_1 = \\dots = t_r$, i.e. $t$ has $\\lambda$-rank $1$. This concludes the proof. \n\\end{proof}\n\n\\begin{cor} \\label{RanghiConsecutivi2}\nLet $t \\in \\mathbb{S}_{\\lambda} V$ and suppose that the output of the Algorithm \\ref{algoritmo2} applied to $t$ is $r$. 
Then $t$ has $\\lambda$-rank greater than or equal to $r$.\n\\end{cor}\n\n\n\n\n\\noindent We describe briefly the Algorithm \\ref{algoritmo2} \\index{algorithm!that computes a lower bound on the $\\lambda$-rank} in general for any $t \\in \\mathbb{S}_{\\lambda} V$. \\medskip\n\n\\hrule \\noindent {\\bf Algorithm \\ref{algoritmo2}.} \\hrule \\medskip\n\n\\noindent {\\bf Input}: A partition $\\lambda$ and an element $t \\in \\mathbb{S}_{\\lambda} V$, where the related minimal orbit inside the projectivization of the space is $X = \\left(\\mathbb{F}(n_1,\\dots,n_s;V),\\mathcal{O}(d_1,\\dots,d_s)\\right)$.\n\n\\noindent {\\bf Output}: A lower bound on the $\\lambda$-rank of $t$.\n\\smallskip\n\n\\begin{enumerate}[nosep, label = \\arabic*)]\n\\item set $i =s$;\n\\item set $r = 0$;\n\\item \\label{Sicomincia} {\\bf if} $i = 0$, {\\bf then}\n\\item $\\quad$ {\\bf print} $1$ {\\bf and exit};\n\\item set $r = \\operatorname{rk}\\mathcal{C}^{\\lambda,(\\lceil \\frac{d_i}{2} \\rceil^{n_i})}_t$;\n\\item {\\bf if} $r >1$, {\\bf then}\n\\item $\\quad$ {\\bf print} $r$ and {\\bf exit};\n\\item consider the map $\\mathcal{C}^{\\lambda,(d_i^{n_i})}_t$ and compute the only generator $t'$ of the image;\n\\item set $t = t'$ and $\\lambda = \\lambda_{i-1}$, where this last partition is the one with $d_j$ columns of length $n_j$, for $j = 1,\\dots,i-1$; then set $i = i-1$ and come back to \\ref{Sicomincia};\n\\end{enumerate}\n\\end{algorithm}\n{\\hrule \\medskip}\n\n\\begin{remark} \\label{CounterAlg}\nLet us highlight that if the Algorithm \\ref{algoritmo2} outputs $1$, obviously this does not imply that $t$ has $\\lambda$-rank $1$. Indeed, consider as an example of this phenomenon the partition $\\lambda = (3,2,1,1)$ and the tensor $t \\in \\mathbb{S}_{\\lambda} V$\n\n$$ t = v_1 \\wedge v_2 \\wedge v_3 \\wedge v_4 \\otimes v_1 \\wedge v_2 \\otimes v_1 + v_1 \\wedge v_2 \\wedge v_5 \\wedge v_6 \\otimes v_1 \\wedge v_2 \\otimes v_1. 
$$\nThe output of the Algorithm \\ref{algoritmo2} in this case is $1$. This is due to the fact that the two points share the same partial flag $\\langle v_1 \\rangle \\subset \\langle v_1,\\ v_2\\rangle$. Hence the kernel of the first catalecticant map of the algorithm contains in particular the element $x_1 \\wedge x_2 \\wedge x_3 \\wedge x_4 - x_1 \\wedge x_2 \\wedge x_5 \\wedge x_6$. Nonetheless, it is obvious that $t$ has $\\lambda$-rank $2$. \n\\end{remark}\n\n\n\\begin{example}[Example \\ref{EsempioFakeRk2} reprise]\nConsider the tensor\n\n$$ t = v_1 \\wedge v_2 \\wedge v_3 \\otimes v_1 \\wedge v_2 \\otimes v_1 + v_1 \\wedge v_2 \\wedge v_3 \\otimes v_2 \\wedge v_3 \\otimes v_3 \\in \\mathbb{S}_{(3,2,1)} \\mathbb{C}^4. $$\nFollowing the notation of the Algorithm \\ref{algoritmo2}, when $i=3$ we have $\\lambda_0 = (3,2,1)$ and the first catalecticant map is \n\n$$ \\mathcal{C}^{(3,2,1),(1,1,1)}_t : \\mathbb{S}_{(1,1,1)} (\\mathbb{C}^4)^* \\longrightarrow \\mathbb{S}_{(2,1)} \\mathbb{C}^4. $$\nAs we have already remarked, this map has rank $1$, so the algorithm can continue. The generator of the image is \n\n$$ t_3 = v_1 \\wedge v_2 \\otimes v_1 + v_2 \\wedge v_3 \\otimes v_3. $$\nSet $\\lambda_1 = (2,1)$, $i=2$ and $t=t_3$ and restart the algorithm. This time we consider the catalecticant map\n\n$$ \\mathcal{C}^{(2,1),(1,1)}_t : \\mathbb{S}_{(1,1)} (\\mathbb{C}^4)^* \\longrightarrow \\mathbb{S}_{(1)} \\mathbb{C}^4.$$\nIt is easy to see that this time the rank is $2$. Hence the algorithm stops: the registered ranks are $(r_0,r_1) = (1,2)$, so the output is $2$. By Corollary \\ref{RanghiConsecutivi2}, we get that $t$ has $\\lambda$-rank at least $2$. \n\\end{example}\n\nWe now use the procedure to investigate the $\\lambda$-rank of a tensor. \n\n\\begin{prop} \\label{RanghiConsecutiviBis}\nConsider the flag variety $\\mathbb{F}(k_1,\\dots,k_s;n)$ embedded with $\\mathcal{O}(d_1,\\dots,d_s)$ in $\\mathbb{P}(\\mathbb{S}_{\\lambda} \\mathbb{C}^n)$. 
Let $t \\in \\mathbb{S}_{\\lambda} \\mathbb{C}^n$ be any point. Then $t$ has $\\lambda$-rank $1$ if and only if the output of the Algorithm \\ref{algoritmo2} is a sequence of $1$'s of length $s$. As a consequence, if the last integer of the sequence obtained with the Algorithm \\ref{algoritmo2} is $r \\neq 1$, then $t$ has $\\lambda$-rank at least $r$.\n\\end{prop}\n\n\\begin{proof}\nAssume that $t$ has $\\lambda$-rank $1$, i.e.\n\n$$ t = (v_1 \\wedge \\dots \\wedge v_{k_s})^{\\otimes d_s} \\otimes \\dots \\otimes (v_1 \\wedge \\dots \\wedge v_{k_1})^{\\otimes d_1}. $$\nThen it is easy to see that each catalecticant map computed by the algorithm has rank $1$, where at every step $t$ is set equal to \n\n$$t_i = (v_1 \\wedge \\dots \\wedge v_{k_i})^{\\otimes d_i} \\otimes \\dots \\otimes (v_1 \\wedge \\dots \\wedge v_{k_1})^{\\otimes d_1}$$\nfor all $i$ from $s$ down to $1$. Hence the output will be a sequence of $1$'s of length $s$. \n\\noindent On the other hand, assume that the output of the Algorithm \\ref{algoritmo2} is a sequence of $1$'s of length $s$. Suppose $t = p_1 + \\dots + p_r$, where every $p_i$ has $\\lambda$-rank $1$. Hence each $p_i$ represents a flag of subspaces, each of them repeated a certain number of times. Assume that $r > 1$. Then we have two alternatives. Either all the $p_i$'s share the same biggest subspace, which is coherent with the hypothesis, or there are at least two summands $p_i$ and $p_j$ such that the respective two biggest subspaces are different. If this is the case, then, up to choosing a suitable basis, it is easy to see that the rank of the map will be at least $2$. This contradicts the hypothesis and hence all the $p_i$ must share the same biggest subspace. Now repeating this argument at every step of the algorithm one obtains that $t = p_1 = \\dots = p_r$, i.e. 
$t$ has $\\lambda$-rank $1$.\n\\end{proof}\n\n\n\\section{Secant varieties of Flag varieties} \\label{sestasez} \\setcounter{equation}{0}\\medskip\n\nThis section is devoted to the study of the $\\lambda$-ranks appearing on the second secant variety of a Flag variety. An algorithm collecting all the results obtained will follow.\n\\medskip\n\n\\subsection{The case $\\lambda = (2,1^{k-1})$.} \\setcounter{equation}{0}\\medskip\n\nThe first case we consider is given by Flag varieties $\\mathbb{F} = \\mathbb{F}(1,k;\\mathbb{C}^n)$ embedded in $\\mathbb{P}(\\mathbb{S}_{(2,1^{k-1})} \\mathbb{C}^n)$. We will see that such varieties are related to the adjoint varieties $\\mathbb{F}(1,k;\\mathbb{C}^{k+1})$. In this section only, we will refer to the $(2,1^{k-1})$-rank of a tensor simply as its rank.\n\n\\begin{definition} Given a non-degenerate variety $X \\subset \\mathbb{P}^N$, we use the following definition for the {\\it tangential variety} of $X$\n\n$$\\tau (X) = \\bigcup_{p \\in X} T_p X$$\n\n\\noindent where $T_p X$ denotes the tangent space to $X$ at $p$. \n\\end{definition}\nIn order to study the elements appearing on the second secant variety of $\\mathbb{F}$, we have to understand which ranks appear on the tangential variety of $\\mathbb{F}$. Since $\\mathbb{F}$ is homogeneous, we may reduce to studying what happens on a single tangent space. By virtue of this fact, choose $p$ as the highest weight vector of this representation, i.e.\n\n\\begin{equation} \\label{hwtang} p = v_1 \\wedge \\dots \\wedge v_k \\otimes v_1 \\end{equation}\n\n\\noindent for some $v_i \\in \\mathbb{C}^n$. This element may be represented with the sstd tableau\n\n\\begin{center} \\begin{ytableau}\n1 & 1 \\\\ 2 \\\\ \\vdots \\\\ k\n\\end{ytableau} .\\end{center}\n\n\\noindent Recall the following classic result.\nLet $p=v_1 \\wedge \\dots \\wedge v_k$ be a point of $\\mathbb{G}(k, V) \\subset \\mathbb{P} (\\superwedge^k V)$. 
Then \n\n\\begin{equation} \\label{TangGrass} \\widehat{T_p \\mathbb{G}(k, V)} = \\sum_{i=1}^k v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge V \\wedge v_{i+1} \\wedge \\dots \\wedge v_k. \\end{equation}\n\n\\begin{prop}\nLet $p \\in \\mathbb{F}$ be the highest weight vector in \\eqref{hwtang}. The cone over the tangent space $T_p \\mathbb{F}$ to $\\mathbb{F}$ at $p$ is the subspace \n\\begin{align*}\n\\langle &v_1 \\wedge \\dots \\wedge v_k \\otimes v_1,\\ v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge v_h \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes v_1,\\ i \\in \\{2,\\dots,k\\},\\ h \\in \\{k+1,\\dots,n\\}, \\\\ &v_1 \\wedge \\dots \\wedge v_k \\otimes v_h + v_h \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes v_1,\\ h \\in \\{2,\\dots,n \\}\\rangle \\subset \\mathbb{S}_{(2,1^{k-1})} \\mathbb{C}^n.\n\\end{align*}\n\\end{prop}\n\n\\begin{proof}\n\\noindent By definition we have the inclusion $\\mathbb{F} \\subset \\mathbb{G}(1,\\mathbb{C}^n) \\times \\mathbb{G}(k,\\mathbb{C}^n)$. By \\cite[Prop. 4.4]{freire2019secant}, we have the equality\n\n$$ \\widehat{T_p \\mathbb{F}} = \\left ( \\widehat{T_p \\mathbb{G}(1,\\mathbb{C}^n) \\times \\mathbb{G}(k,\\mathbb{C}^n)} \\right ) \\cap \\mathbb{S}_{(2,1^{k-1})} \\mathbb{C}^n$$\n\n\\noindent where $\\hat{Y}$ denotes the affine cone over the projective variety $Y$. 
Applying Formula \\eqref{TangGrass} we see that $\\widehat{T_p \\left(\\mathbb{G}(1,\\mathbb{C}^n) \\times \\mathbb{G}(k,\\mathbb{C}^n)\\right)}$ is the subspace \n\n\\begin{align*}\n\\langle v_1 &\\wedge \\dots \\wedge v_k \\otimes v_1,\\ v_1 \\wedge \\dots \\wedge v_k \\otimes v_2,\\dots, v_1 \\wedge \\dots \\wedge v_k \\otimes v_n,\n\\\\ &v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge v_h \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes v_1,\\ h \\in \\{k+1,\\dots,n\\},\\ i \\in \\{1,\\dots,k\\} \\rangle.\n\\end{align*}\n\n\\noindent It is easy to see that if $i \\neq 1$, the elements \n$$v_1 \\wedge \\dots \\wedge v_k \\otimes v_1,\\ v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge v_h \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes v_1$$\nwith $h \\in \\{k+1,\\dots,n\\}$ satisfy the relations \\eqref{PluckRel}. Indeed, they are, up to the sign, the elements of the Schur module determined by the sstd tableaux\n\n\\begin{equation} \\label{TangFlag1} \\begin{ytableau}\n1 & 1 \\\\ 2 \\\\ \\vdots \\\\ k\n\\end{ytableau}\\quad \\text{and}\\quad \\begin{ytableau}\n1 & 1 \\\\ 2 \\\\ \\vdots \\\\ \\hat{i} \\\\ \\vdots \\\\ k \\\\ h\n\\end{ytableau}\\end{equation}\n\n\\noindent respectively, where $\\hat{i}$ means that $i$ is not appearing in the list. We can see also that the elements\n\n$$ v_1 \\wedge \\dots \\wedge v_k \\otimes v_h + v_h \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes v_1$$\n\n\\noindent satisfy the equations \\eqref{PluckRel} for any $h=2,\\dots,n$ and hence they belong to the module. Indeed they are the elements associated to the sstd tableaux\n\n\\begin{equation} \\label{TangFlag2} \\begin{ytableau}\n1 & h \\\\ 2 \\\\ \\vdots \\\\ k\n\\end{ytableau}\\ . \\end{equation}\n\n\\noindent Consider then the span of the elements of the Schur modules whose associated sstd tableau is either in \\eqref{TangFlag1} or in \\eqref{TangFlag2}. Since they are all different, the respective elements of the module are linearly independent. 
Moreover the number of elements in \\eqref{TangFlag1} is $(k-1)(n-k)+1$, and those in \\eqref{TangFlag2} are $n-1$, for a total of $-k^2 + kn + k$. Since the variety $\\mathbb{F}$ is smooth of dimension $-k^2+kn+k-1$, we can conclude that $\\widehat{T_p \\mathbb{F}}$ is the subspace\n\\begin{align*}\n\\langle &v_1 \\wedge \\dots \\wedge v_k \\otimes v_1,\\ v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge v_h \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes v_1,\\ i \\in \\{2,\\dots,k\\},\\ h \\in \\{k+1,\\dots,n\\}, \\\\ &v_1 \\wedge \\dots \\wedge v_k \\otimes v_h + v_h \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes v_1,\\ h \\in \\{2,\\dots,n \\}\\rangle.\n\\end{align*}\\vskip-0.6cm\\end{proof}\n\n\\noindent In order to understand which ranks appear in this space, we split the generators of $T_p \\mathbb{F}$ into the following three sets:\\begin{enumerate}[label = (\\arabic*)]\n\\item \\label{item1} $v_1 \\wedge \\dots \\wedge v_k \\otimes v_1$,\n\\item \\label{item2} $v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge v_h \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes v_1$, for $i = 2,\\dots,k$ and $h = k+1,\\dots,n$,\n\\item \\label{item3} $v_1 \\wedge \\dots \\wedge v_k \\otimes v_h + v_h \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes v_1$ for $h = 2,\\dots,n$.\n\\end{enumerate}\nThe elements from \\ref{item1} and \\ref{item2} have rank $1$ and they represent the flags \n$$ \\langle v_1 \\rangle \\subset \\langle v_1,\\dots,v_k \\rangle $$ \n\\noindent and\n$$ \\langle v_1 \\rangle \\subset \\langle v_1,\\dots,v_{i-1},v_h,v_{i+1},\\dots,v_k \\rangle $$\n\\noindent respectively. \n\n\\begin{prop} \\label{famiglia3} The elements $t$ from \\ref{item3} all have rank $1$ for $h = 2,\\dots,k$ and they represent the flags\n$$\\langle v_h \\rangle \\subset \\langle v_1,\\dots,v_k \\rangle. 
$$ \nIf $h = k+1,\\dots,n$, then the corresponding element has rank $2$ and it decomposes as\n $$ t = -\\frac{1}{2} (v_1-v_h) \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes (v_1-v_h) + \\frac{1}{2} (v_1+v_h) \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes (v_1+v_h).$$\n\\end{prop}\n\\begin{proof} Suppose that $h = 2,\\dots,k$. Then $t$ has the form \n$$ v_1 \\wedge \\dots \\wedge v_h \\wedge \\dots \\wedge v_k \\otimes v_h$$\nand hence it has rank $1$ and it represents the flag\n$$\\langle v_h \\rangle \\subset \\langle v_1,\\dots,v_k \\rangle. $$ \nIf $h=k+1,\\dots,n$, we can compute the kernel\n$$ \\ker \\mathcal{C}_t^{(2,1^{k-1}),(1)} = \\langle x_{k+1},\\dots,\\hat{x_h},\\dots,x_n \\rangle $$\nwhere $\\hat{x_h}$ means that $x_h$ does not appear among the generators. Note that, in general, if $t$ has rank $1$, say for instance $t = p$ in \\eqref{hwtang}, then the catalecticant map $\\mathcal{C}^{(2,1^{k-1}),(1)}_t$ has rank $k$. This implies that if $t = t_1 +\\dots + t_r$, where every $t_i$ has rank $1$, we get the inequality\n\n\\begin{equation} \\operatorname{rk} \\mathcal{C}^{(2,1^{k-1}),(1)}_t = \\operatorname{rk} \\mathcal{C}^{(2,1^{k-1}),(1)}_{t_1 + \\dots + t_r} \\leq \\sum_{i=1}^r \\operatorname{rk} \\mathcal{C}^{(2,1^{k-1}),(1)}_{t_i} = r \\cdot k. \\end{equation}\n\n\\noindent Since in this case $\\operatorname{rk}(\\mathcal{C}^{(2,1^{k-1}),(1)}_t) = k+1$, we can already conclude that $t$ does not have rank $1$. Hence the decomposition \n$$ t = -\\frac{1}{2} (v_1-v_h) \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes (v_1-v_h) + \\frac{1}{2} (v_1+v_h) \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes (v_1+v_h)$$\nis minimal and $t$ has rank $2$.\n\\end{proof}\n\n\\begin{remark} \\label{primosistema}\nThe fact that $\\operatorname{rk}(\\mathcal{C}^{(2,1^{k-1}),(1)}_t) = k+1$ suggests that we can restrict our study to the flag $\\mathbb{F}(1,k;\\mathbb{C}^{k+1})$, which is an adjoint variety. 
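For completeness (this check is not part of the original argument), the decomposition in Proposition \\ref{famiglia3} can be verified directly: expanding each summand by bilinearity of $\\wedge$ and $\\otimes$ gives

```latex
\\begin{align*}\n\\mp\\tfrac{1}{2}\\,(v_1 \\mp v_h) \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes (v_1 \\mp v_h) = {} & \\mp\\tfrac{1}{2}\\, v_1 \\wedge \\dots \\wedge v_k \\otimes v_1 + \\tfrac{1}{2}\\, v_1 \\wedge \\dots \\wedge v_k \\otimes v_h \\\\ & + \\tfrac{1}{2}\\, v_h \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes v_1 \\mp \\tfrac{1}{2}\\, v_h \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes v_h,\n\\end{align*}
```

so that, summing the expressions for the two choices of sign, the first and last terms cancel and one recovers $t = v_1 \\wedge \\dots \\wedge v_k \\otimes v_h + v_h \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes v_1$.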
In this restricted case, in order to find a rank $2$ decomposition of the tensor, we should find at least one product of two distinct linear forms inside $\\ker\\mathcal{C}_t^{(2,1^{k-1}),(2)}$. Such linear forms are the equations of the two $k$-dimensional linear spaces in $\\mathbb{C}^{k+1}$ of the two flags associated with the decomposition. We can see that\n$$ (x_1-x_h)(x_1+x_h) \\in \\ker \\mathcal{C}_t^{(2,1^{k-1}),(2)} $$\nis the product of the linear forms we are looking for, and the respective $k$-dimensional linear spaces are\n$$ v(x_1-x_h) = \\langle v_1 + v_h,v_2,\\dots,v_k \\rangle $$\nand\n$$ v(x_1+x_h) = \\langle v_1 - v_h,v_2,\\dots,v_k \\rangle. $$\n\\end{remark} \n\\smallskip\n\n\\noindent Now we have to study the possible sums of elements from the three different sets. \n\n\\begin{remark} The sum of two elements from \\ref{item1} and \\ref{item2} gives\n\\begin{align*} a\\cdot v_1 \\wedge \\dots \\wedge v_k \\otimes v_1 + &b\\cdot v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge v_h \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes v_1 = \\\\ &v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge (a \\cdot v_i + b \\cdot v_h) \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes v_1\n\\end{align*}\nwhich has rank $1$. 
\n\\end{remark}\n\n\\begin{remark}\nThe sum of two elements from \\ref{item1} and \\ref{item3} with $h = 2,\\dots,k$ turns out to be\n\\begin{align*} a\\cdot v_1 \\wedge \\dots \\wedge v_k \\otimes v_1 + b \\cdot v_1 \\wedge \\dots \\wedge v_k \\otimes v_h = v_1 \\wedge \\dots \\wedge v_k \\otimes (a \\cdot v_1 + b \\cdot v_h)\n\\end{align*}\nwhich has rank $1$, while if $h = k+1,\\dots,n$ it is\n\\begin{align*} & a\\cdot v_1 \\wedge \\dots \\wedge v_k \\otimes v_1 +b \\cdot ( v_1 \\wedge \\dots \\wedge v_k \\otimes v_h + v_h \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes v_1) = \\\\ & v_1 \\wedge \\dots \\wedge v_k \\otimes (\\frac{a}{2} \\cdot v_1 + b \\cdot v_h) + (\\frac{a}{2} \\cdot v_1 + b \\cdot v_h) \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes v_1\n\\end{align*}\nwhich is again an element of the set \\ref{item3}. \n\\end{remark}\n\n\\noindent Now we focus on the case \\ref{item2} $+$ \\ref{item3}.\n\\begin{prop}\nFor any $v_j \\in \\mathbb{C}^n$, the sum of two elements from \\ref{item2} and \\ref{item3}, i.e. the tensor \n\\begin{align} \\label{2+3}\nt = a \\cdot v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge &v_j \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes v_1 + \\\\ &+ b \\cdot (v_1 \\wedge \\dots \\wedge v_k \\otimes v_h + v_h \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes v_1), \\nonumber\n\\end{align}\nhas \n\\begin{enumerate}[label = (\\roman*)]\n\\item \\label{item11} rank $2$ if $v_j$ and $v_h$ are not proportional and $v_h \\in \\langle v_2,\\dots,v_k \\rangle$,\n\\item \\label{item12} rank $3$ if $v_j$ and $v_h$ are not proportional and $v_h \\not\\in \\langle v_2,\\dots,v_k \\rangle$,\n\\item \\label{item13} rank $2$ if $v_j$ and $v_h$ are proportional.\n\\end{enumerate}\n\\end{prop}\n\n\\begin{proof}\nAssume at first that $v_j$ and $v_h$ are not proportional to each other. 
If $v_h \\in \\langle v_2,\\dots,v_k \\rangle$, the sum \\eqref{2+3} reduces to\n\\begin{equation} \\label{2+3.1}a \\cdot v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge v_j \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes v_1 + b \\cdot v_1 \\wedge \\dots \\wedge v_k \\otimes v_h. \\end{equation}\nWe claim that this element has rank $2$. Indeed, if the tensor in \\eqref{2+3.1} had rank $1$, then the catalecticant map \n$$\\mathcal{C}_t^{(2,1^{k-1}),(1)} : \\mathbb{S}_{(1)} (\\mathbb{C}^n)^* \\rightarrow \\mathbb{S}_{(2,1^{k-1})\/(1)} \\mathbb{C}^n$$\n would have rank $k$, as already discussed in the proof of Proposition \\ref{famiglia3}. Since the catalecticant map in this case has rank $k+1$, we can already conclude that the decomposition in \\eqref{2+3.1} is minimal. This proves \\ref{item11}.\n\n\\noindent Assume now that $v_h \\not \\in \\langle v_2,\\dots,v_k \\rangle$. In this case we use the catalecticant map \n$$\\mathcal{C}_t^{(2,1^{k-1}),(2)} : \\mathbb{S}_{(2)} (\\mathbb{C}^n)^* \\rightarrow \\mathbb{S}_{(2,1^{k-1})\/(2)} \\mathbb{C}^n$$\nto compute the rank of the element. First, note that if $t$ is an element of rank $1$, then the rank of this catalecticant map is $k$. Indeed, for instance, if $t = p$, then the only elements of $\\mathbb{S}_{(2)} (\\mathbb{C}^n)^*$ which do not kill $p$ and whose images via the catalecticant map are linearly independent are\n$$ x_1^2, x_1x_2, \\dots,x_1x_k$$\nwhich are exactly $k$. This implies that if $t = t_1 + \\dots + t_r$ has rank $r$, then\n\\begin{equation} \\label{rankbound} \\operatorname{rk} \\mathcal{C}^{(2,1^{k-1}),(2)}_t = \\operatorname{rk} \\mathcal{C}^{(2,1^{k-1}),(2)}_{t_1 + \\dots + t_r} \\leq \\sum_{i=1}^r \\operatorname{rk} \\mathcal{C}^{(2,1^{k-1}),(2)}_{t_i} = r \\cdot k. 
\\end{equation}\nIn the instance of a tensor $t$ like \\eqref{2+3}, one gets that the kernel of $\\mathcal{C}^{(2,1^{k-1}),(2)}_t$ is the subspace\n$$\\ker \\mathcal{C}^{(2,1^{k-1}),(2)}_t = \\langle x_px_q,\\ \\text{where either}\\ (p,q)=(h,h)\\ \\text{or}\\ p,q \\neq 1,h \\rangle $$\ni.e. the elements of $\\mathbb{S}_{(2)} (\\mathbb{C}^n)^*$ not killing $t$ are all the ones in the span\n$$ \\langle x_1x_h,\\dots,x_kx_h,x_1x_2,\\dots,x_1x_k,x_1x_j,x_1^2\\rangle. $$\nThis means that $\\operatorname{rk} \\mathcal{C}^{(2,1^{k-1}),(2)}_t = 2k+1$. By \\eqref{rankbound}, this implies that $t$ has rank at least $3$. However, by Proposition \\ref{famiglia3}, the element $t$ can be written as a sum of $3$ rank $1$ elements and hence its rank is $3$.\n\n\\noindent Finally, assume that $v_j$ and $v_h$ are proportional. The element in \\eqref{2+3} reduces to \n\\begin{align*} \nt = a \\cdot v_1 \\wedge \\dots \\wedge v_{i-1} \\wedge &v_h \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes v_1 + \\\\ &+ b \\cdot (v_1 \\wedge \\dots \\wedge v_k \\otimes v_h + v_h \\wedge v_2 \\wedge \\dots \\wedge v_k \\otimes v_1). \\nonumber\n\\end{align*}\nWe obtain again that $\\operatorname{rk} \\mathcal{C}^{(2,1^{k-1}),(1)}_t = k+1$ and hence $t$ does not have rank $1$. One can see that (for instance when $a = b = 1$, the general case being analogous) $t$ can be written as\n\\begin{align*} \nt = -\\frac{1}{2}&(v_1-v_h) \\wedge v_2 \\wedge \\dots \\wedge v_{i-1} \\wedge (v_i-v_h) \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes (v_1-v_h) + \\\\ &+ \\frac{1}{2}(v_1+v_h) \\wedge v_2 \\wedge \\dots \\wedge v_{i-1} \\wedge (v_i+v_h) \\wedge v_{i+1} \\wedge \\dots \\wedge v_k \\otimes (v_1+v_h)\n\\end{align*}\nand hence $t$ has rank $2$. Note that up to change of coordinates this is the tensor described in Remark \\ref{primosistema}. 
This concludes the proof.\n\\end{proof}\n\n\\noindent We collect the elements we found in a table.\n\n\\begin{table}[ht]\\begin{center}\n\\begin{tabular}{| c | c | c | c |}\n\\hline\n$(2,1^{k-1})$-rank & $\\operatorname{rk} \\mathcal{C}_t^{(2,1^{k-1}),(1)}$ & $\\operatorname{rk} \\mathcal{C}_t^{(2,1^{k-1}),(2)}$ & Notes \\\\ \\hline\n1 & $k$ & $k$ & sets $(1)$, $(2)$ and $(3)$ \\\\ \\hline\n2 & $k+1$ & $2k-1$ & set $(3)$ \\\\ \\hline\n3 & $k+2$ & $2k+1$ & $(2) + (3)$ \\\\ \\hline\n\\end{tabular}\n\\caption{The ranks appearing on the tangential variety to $\\mathbb{F}$.} \\label{tabella1}\\end{center}\n\\end{table}\n\n\\noindent Let us now study the elements of $\\mathbb{S}_{(2,1^{k-1})} V$ lying on a secant line to $\\mathbb{F}$. Such elements can be written as\n$$ v_1 \\wedge \\dots \\wedge v_k \\otimes v_1 + w_1 \\wedge \\dots \\wedge w_k \\otimes w_1 $$\nso that their rank is at most $2$. Note that, letting the group $SL(n)$ act on this element, the rank and the numbers $\\dim \\langle v_1,\\dots,v_k \\rangle \\cap \\langle w_1,\\dots,w_k \\rangle$ and $\\dim \\langle v_1,\\dots,v_k \\rangle + \\langle w_1,\\dots,w_k \\rangle$ are preserved. In particular, we may pick as a representative of the orbit of the element in the previous formula \n\n\\begin{equation} \\label{rango2} t=v_1 \\wedge \\dots \\wedge v_h \\wedge v_{h+1} \\wedge \\dots \\wedge v_k \\otimes v_i + v_1 \\wedge \\dots \\wedge v_h \\wedge v_{k+1} \\wedge \\dots \\wedge v_{2k-h} \\otimes v_j \\end{equation}\nin which the intersection of the $k$-dimensional spaces of the flags is explicit, i.e.\n\n$$\\langle v_1,\\dots,v_k \\rangle \\cap \\langle v_1,\\dots,v_h,v_{k+1},\\dots,v_{2k-h} \\rangle = \\langle v_1,\\dots,v_h \\rangle .$$ \nThe vectors $v_i$ and $v_j$ appearing after the tensor products in the first and second summands are generators of the spaces $\\langle v_1,\\dots,v_k \\rangle$ and $\\langle v_1, \\dots,v_h,v_{k+1},\\dots,v_{2k-h}\\rangle$ respectively. 
Note that if both $v_i$ and $v_j$ belong to the intersection of the $k$-dimensional subspaces of the flags, then they can be the same vector up to scalar multiplication. Hence we may distinguish the elements on secant lines to $\\mathbb{F}$ using only two invariants: the dimension of the intersection of the $k$-dimensional subspaces of the two flags and whether the equality $\\langle v_i \\rangle = \\langle v_j \\rangle$ holds or not. In terms of the Schur apolarity action, we can use the catalecticant maps to determine which orbit we are studying. Specifically, the map\n\n$$ \\mathcal{C}_t^{(2,1^{k-1}),(1)} : \\mathbb{S}_{(1)} (\\mathbb{C}^n)^* \\longrightarrow \\mathbb{S}_{(2,1^{k-1})\/(1)} \\mathbb{C}^n $$\nwill give us information about the dimension of the intersection. Indeed, consider a rank $2$ element $t$ as in \\eqref{rango2} and denote by $\\{x_i,\\ i=1,\\dots,n\\}$ the basis dual to $\\{v_i,\\ i=1,\\dots,n\\}$. Then the image of $x_i$ is either $0$ if $x_i = x_{2k-h+1},\\dots,x_n$, or nonzero if $x_i = x_1,\\dots,x_{2k-h}$. It is easy to see that in this latter case all the images that we get are linearly independent as elements of $\\mathbb{S}_{(2,1^{k-1})\/(1)}\\mathbb{C}^n$. Hence the rank of the catalecticant map is equal to the dimension of the sum of the two $k$-dimensional subspaces of the two flags involved. Once this number is fixed, the rank of the catalecticant\n$$ \\mathcal{C}_t^{(2,1^{k-1}),(2)} : \\mathbb{S}_{(2)} (\\mathbb{C}^n)^* \\longrightarrow \\mathbb{S}_{(2,1^{k-1})\/(2)} \\mathbb{C}^n $$\nwill help us discriminate whether $\\langle v_i \\rangle = \\langle v_j \\rangle$ holds or not. 
Indeed, by the definition of the Schur apolarity action, the element $x_p x_q \\in \\mathbb{S}_{(2)}(\\mathbb{C}^n)^*$, which can be written as $x_p \\otimes x_q + x_q \\otimes x_p$, acts on both factors of the tensor product $\\superwedge^k \\mathbb{C}^n \\otimes \\superwedge^1 \\mathbb{C}^n$ in which $\\mathbb{S}_{(2,1^{k-1})}\\mathbb{C}^n$ is contained. Hence whether or not $\\langle v_i \\rangle = \\langle v_j \\rangle$ holds will change the rank of this catalecticant map.\n\n\\noindent We give in the following table a classification of all the possible orbits depending on the invariants we have mentioned. For brevity, let us denote $\\dim \\langle v_1,\\dots,v_k \\rangle \\cap \\langle w_1,\\dots,w_k \\rangle$ simply by $\\dim V \\cap W$. \n\n\n\\begin{table}[ht] \\begin{center}\n\\begin{tabular}{| c | c | c | c | c |}\n\\hline\n$\\dim V \\cap W$ & $\\langle v_1 \\rangle = \\langle w_1 \\rangle$ & $(2,1^{k-1})$-rank & $\\operatorname{rk} \\mathcal{C}_t^{(2,1^{k-1}),(1)}$ & $\\operatorname{rk} \\mathcal{C}_t^{(2,1^{k-1}),(2)}$ \\\\ \\hline\n$k$ & True\/False & 1 & $k$ & $k$ \\\\ \\hline\n$k-1$ & True & 1 & $k$ & $k$ \\\\ \\hline\n$k-1$ & False & 2 & $k+1$ & $2k-1$ \\\\ \\hline\n$\\vdots$ & & & & \\\\ \\hline\n$h$ & False & 2 & $2k-h$ & $2k$ \\\\ \\hline\n$h$ & True & 2 & $2k-h$ & $2k-h$ \\\\ \\hline\n$\\vdots$ & & & & \\\\ \\hline\n$0$ & False & 2 & $2k$ & $2k$ \\\\ \\hline\n\\end{tabular}\n\\caption{Orbits of points on a secant line to $\\mathbb{F}$, where $h = 0,\\dots,k$.} \\end{center}\n\\end{table}\n\\vskip-0.5cm\n\n\\noindent The results obtained so far can be collected in the following algorithm.\n\n\\begin{algorithm} The following algorithm determines the rank of the elements of border rank at most $2$.\n\\vskip0.5cm\n\n\\noindent {\\bf Input}: An element $t \\in \\widehat{\\sigma_2(\\mathbb{F}(1,k;\\mathbb{C}^n))} \\subset \\mathbb{S}_{(2,1^{k-1})} \\mathbb{C}^n$.\n\n\\noindent {\\bf Output}: If the border rank of $t$ is less than or equal to 
$2$, it returns the rank of $t$.\n\n\\begin{enumerate}[nosep]\n\\item[1:] compute $(r_1,r_2,r_3) = \\left (\\operatorname{rk} \\mathcal{C}_t^{(2,1^{k-1}),(1^k)},\\operatorname{rk} \\mathcal{C}_t^{(2,1^{k-1}),(1)},\\operatorname{rk} \\mathcal{C}_t^{(2,1^{k-1}),(2)} \\right)$\n\\item[2:] {\\bf if} $r_1 = 1$ {\\bf then}\n\\item[3:] $\\quad$ $t$ has both border rank and rank equal to $1$, {\\bf exit};\n\\item[4:] {\\bf else if} $r_1 \\geq 3$ {\\bf then}\n\\item[5:] $\\quad$ $t$ has border rank at least $3$, {\\bf exit};\n\\item[6:] {\\bf else if} $r_1 = 2$ {\\bf then}\n\\item[7:] $\\quad$ $t$ has border rank $2$ and\n\\item[8:] $\\quad$ {\\bf if} $(r_1,r_2,r_3) = (2,k+2,2k+1)$ {\\bf then}\n\\item[9:] $\\quad\\quad$ $t$ has rank $3$ and it is the element given by \\ref{item2}$+$\\ref{item3} in Table \\ref{tabella1};\n\\item[10:] $\\quad$ {\\bf else if} $(r_1,r_2,r_3) = (2,2k-h,2k)$ {\\bf then}\n\\item[11:] $\\quad\\quad$ $t$ has rank $2$ and it is in the orbit with $\\dim V \\cap W = h$ and $\\langle v_i \\rangle \\neq \\langle v_j \\rangle$;\n\\item[12:] $\\quad$ {\\bf else if} $(r_1,r_2,r_3) = (2,2k-h,2k-h)$ {\\bf then}\n\\item[13:] $\\quad\\quad$ $t$ has rank $2$ and it is in the orbit with $\\dim V \\cap W = h$ and $\\langle v_i \\rangle = \\langle v_j \\rangle$;\n\\item[14:] $\\quad$ {\\bf end if}\n\\item[15:] {\\bf end if}\n\\item[16:] {\\bf end}\n\\end{enumerate}\n\\end{algorithm}\n\n\\begin{remark}\nNote that this is not a complete classification of the orbits appearing on $\\sigma_2(X)$ under the action of $SL(V)$. For this purpose one has to make a more specific distinction of the orbits related to secant lines to $\\mathbb{F}$. 
In particular, one has to discriminate whether the lines $\\langle v_i \\rangle$ and $\\langle v_j \\rangle$ both belong to the intersection of the two $k$-dimensional subspaces, whether only one of them belongs to this intersection, or whether none of them does.\n\\end{remark}\n\n\\noindent As a conclusion of the discussion of this section, we obtain the following result. For a given nondegenerate irreducible projective variety $X \\subset \\mathbb{P}^N$, for $s \\geq r$ we use the notation\n\n$$ \\sigma_{r,s} (X) := \\{ p \\in \\sigma_r(X): r_X(p)=s \\}. $$\n\\smallskip\n\n\\begin{cor}\nLet $\\mathbb{F} = \\mathbb{F}(1,k;n) $ embedded with $\\mathcal{O}(1,1)$ in $\\mathbb{P}(\\mathbb{S}_{(2,1^{k-1})} \\mathbb{C}^n)$. Then we have\n\n$$ \\sigma_2(\\mathbb{F}) \\setminus \\mathbb{F} = \\sigma_{2,2} (\\mathbb{F}) \\cup \\sigma_{2,3} (\\mathbb{F}). $$\n\\end{cor}\n\\bigskip\n\n\\section*{Acknowledgements}\nI thank Giorgio Ottaviani for his help and useful comments, and Alessandra Bernardi for suggesting the topic to me and for her support. I would also like to thank Jan Draisma and the referees for their useful comments and remarks. The author is partly supported by GNSAGA of INDAM. \\bigskip\n\n\n\\noindent Contact: reynaldo.staffolani@unitn.it, Ph.D. student at the University of Trento.\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn open online communities, such as Free, Libre, and Open Source (FLOSS) ones \\cite{ArafatRiehle09}, it has been shown that the number of articles and edits per author follows a power law \\cite{Voss05,JohnsonFarajKudaravalli14}, as in scientific publication \\cite{Maillartetal08}. 
Even for Wikipedia, which claims that it is 'the encyclopedia that everybody can edit', this skewed distribution exists \\cite{Kitturetal07b,Ortegaetal09,Zhangetal10}, and 'the top 10\\% of editors by number of edits contributed 86\\% of the PWVs {[}persistent word views{]}, and top 0.1\\% contributed 44\\% - nearly half! The domination of these very top contributors is increasing over time.' \\cite[p. 5]{Priedhorskyetal07}. \nThis apparent paradox is easy to understand: contributing to an online community is not only about having something to say but more\nand more about knowing how to say it \\cite{FordGeiger2012}.\nAnd, as Wikipedia has become bigger, the editing tasks have increased in complexity (see \\cite{FongBiuk-Aghai10} for a proposed classification of these various types of edits in terms of semantic complexity), and the proportion of non-editing tasks has also increased. In other words, participants' types of activity have multiplied. \nBeyond the writing, which can be seen as the visible part of the iceberg (and also, for an encyclopedia, the most important part), there are the actions leading to the writing (coordination tasks, discussions on the topic of the project, etc.), but also the actions of maintaining the existing content, which take on a growing importance as articles mature \\cite{KaneJohnsonMajchrzak14}.\n\nOne consequence is that over time, the amount of effort needed to add new content increases, since new edits are more likely to be rejected, making the work less rewarding \\cite{RansbothamKane11,AaltonenSeitler16}.\nThis may also partly explain the contributor turnover \\cite{Farajetal16,KaneJohnsonMajchrzak14}: once a project is finished, or at least mature, some people, those interested in adding content, drop out. 
As a consequence, there is a constant need for these projects to recruit new contributors, and to turn them into 'big' contributors, to guarantee the survival of the project in the long run.\n\nThere have been many experiments on slowly engaging people in contribution (from simple edits to more complex tasks), based on the concept of legitimate peripheral participation \\cite{LaveWenger91,Wengeretal02}. For example, and still on Wikipedia, some experiments show that readers or contributors can be asked to perform small tasks, which they do, and then keep participating \\cite{HalfakerKeyesTaraborelli13}. Acknowledging newcomers' contributions with moral rewards ('barnstars') increases their investment and their retention, at least over the first year \\cite{Gallus15}.\nBut it may not be a very sustainable activity, as those who respond the most to these kinds of initiatives seem to be those who are already willing to participate \\cite{Narayanetal17}.\nAnd, statistically speaking, big contributors seem to have been so from the beginning, and if there is a path to contribution, it concerns the learning of the rules more than the level of contribution \\cite{PancieraHalfakerTerveen09,DejeanJullien15}.\n\nIn a nutshell, beyond the statistical fact that a majority of big contributors have been so from the beginning, what these experiments seem to indicate is the existence of different profiles of contributors regarding their involvement. And one may want to know more about these different profiles and whether they appear in the same proportions across projects.\nThis is important for the managers of such projects. It would allow them to better adapt their responses to newcomers' contributions, and to improve their retention rate.\nRecent studies \\cite{Weietal15,Yangetal16, Arazyetal16} have strongly improved our knowledge of the different types or profiles of contributors, from casual to very involved ones, through focused people. 
However, they do so by using very complex methodologies (a qualitative-quantitative mix, with a high workload to manually codify\/characterize the edits), making their replication by practitioners limited. These studies are on the English Wikipedia only. The objective of this paper is to highlight different profiles of contributors with clustering techniques. The originality is to show how using only the edits, and their distribution over time, allows these contributor profiles to be built with good accuracy and stability across languages. These profiles are identifiable early in the history of involvement, suggesting that light monitoring of newcomers may be sufficient to adapt the interaction with them and increase the retention rate.\n\n\nThe paper continues as follows: the next section reviews our theoretical background to develop our hypotheses regarding the profiles of the contributors, and the good balance between the simplicity of the variables and the accuracy of the results. Then, we describe our data collection strategy (choice of Wikipedia, data and variables), before presenting the methodology and the results. Finally, we discuss our findings and highlight their implications for both theory and practice, before concluding.\n\n\\section{Research hypotheses: Contributor's Behaviors and Roles Detection}\n\nIt is no longer a matter of debate that regular contributors vary in the tasks they perform, leading to various 'careers' within the projects. 
For instance, \\cite{OkoliOh07}, looking at English Wikipedia contributors, showed that people who participate a lot in various\narticles (and thus collaborate with a lot of people, but not in a sustained manner, something they assimilate to 'weak links', in a Granovetter perspective \\cite{Granovetter85}) are more likely to become administrators (to have administrative rights) than those more focused on a subset of articles who talk repeatedly with a small subset of people (and thus develop strong(er) links). This leads \\cite{Zhuetal11}, relying on \\cite{Bryantetal05}'s study, to propose two main careers for Wikipedia contributors, consistent with \\cite{OkoliOh07}'s findings: from non-administrator to administrator, and from non-member to Wiki-project regular member to Wiki-project core member (Figure 1, page 3433). On that aspect, \\cite{AntinCheshireNov12} confirmed that people involved from the beginning in more diverse revision activities are more likely to take administrative responsibilities.\n\nQualitative research has refined our understanding of people's interests and focus: in their in-depth analysis of one English Wikipedia article (autism), \\cite{KaneJohnsonMajchrzak14} showed that in the lifetime of an article, different tasks were required (content editing, article structuring, knowledge stability protection), requiring different skills and centers of interest, and consequently endorsed by different persons, with different levels of edits.
These profiles illustrate how diverse even knowledge contribution can be, between the topics but also between the sources of information contributors rely on; yet the contributing profiles remain: some people focus on an area of expertise, others contribute a lot on a lot of subjects, others are more casual, etc.\n\nInformed by these findings, several authors proposed quantitative techniques to retrieve and quantify the different roles the qualitative research has identified. \nBeing able to do quantitative identification makes automation possible, which can decrease the supervision burden, in addition to increasing accuracy and rapidity.\nHowever, as for article quality identification\\footnote{It is well known, since \\cite{WilkinsonHuberman07}, that there is a strong correlation between the number of edits and the probability for an (English) Wikipedia article to be of best quality. Nevertheless, as detailed by \\cite{DangIgnat17}, if one wants to refine this finding, more costly methods are needed in terms of data collection and analytic techniques}, there is a trade-off between the simplicity and the accuracy of the methods used.\nWhat is directly observable, in most of the open, online projects, is the number of contributions (edits, commits) over time. What is less accessible, requiring more data preparation and, most of the time, allowing only ex-post analyses, is the content, and the quality, of such contributions. \n\\cite{Yangetal16}, in a defense of the second strand of research, summarized this trade-off as follows: \n'While classification based on edit histories can be constructed for most active editors, current approaches focus on simple edit counts and access privileges fail to provide a finer grained description of the work actually performed in an edit'. \n\nAnd it is to be acknowledged that, as far as the English Wikipedia is concerned, research has made tremendous progress. 
Via a mix of unsupervised and supervised techniques \\cite{Yangetal16, Arazyetal16,Weietal15}, scholars identified and characterized the edits, and then constructed editor roles based on their characterized edits.\nLooking at the English Wikipedia, \\cite{Yangetal16} proposed a two-step methodology. First, to enrich the description of the edits, they used a multi-class classifier to assign edit types to edits, based on a training data set, called \"the Annotated Edit Category Corpus\", that they annotated themselves. Then they applied an LDA graphical model, in order, in a second step, to identify editors' repeating patterns of activity, and to cluster the editors according to their editing behaviors. Afterwards, the authors tried to link these behaviors to the improvement of article quality.\n\\cite{Arazyetal16} clustered, on a stratified subset of a thousand English Wikipedia articles, the contributors according to their edits, the edits being classified using supervised learning techniques. They confirmed and refined the above qualitative results. They also showed, in \\cite{Arazyetal17}, that some people can take different roles over time, while others stick to the same behavior in the various articles they contribute to. \n\nIn citizen science, \\cite{Jacksonetal16} used a similar approach to study newcomers' activities (contributing sessions) and clustered their behavior in a Zooniverse project (a citizen science contributive platform), Planet Hunters, \"an online astronomy citizen science project, in which astronomers seek the help of volunteers to filter data collected from the Kepler space telescope\". Based on mixed qualitative-quantitative methods, they first observed and interviewed participants regarding their contributing behaviors, in order to define the tasks to be observed to characterize a contributing pattern. 
Then, they aggregated page view data and server logs containing the annotations and comments of each participant, and grouped the data by activity into 'sessions' (a session was defined as \"a submission of an annotation where no more than 30 minutes exists between the current and next annotation\"). They clustered the sessions based on counts of dimensions (e.g., number of contributions to \/object, \/discussion, annotations), using a k-means clustering algorithm to define types of sessions, and, finally, they described the people by their history of participation (the types of sessions they did). Interestingly, the types of sessions and the contributor profiles they found are very similar to those found in Wikipedia\\footnote{Their principal findings are 1) that many newcomers remained in a single session type (so they can be detected quite early in their participation journey); 2) that the contributor patterns can be grouped into three types: Casual Workers, Community Workers, and Focused Workers.}. \n\nEven if these studies can be extended to case studies other than English-speaking projects, it is not certain that they could go further in terms of precision in the description of the different profiles, nor that the people involved in those projects would be willing to invest their time to manually create the dataset of coded contributions these methods require.\nWe argue that there is still some work to do in the detection of these profiles, especially amongst newcomers, but more on the simplification of the detection methods than on their over-sophistication. \nWhat our discussion shows is that practitioners and researchers have the two extremities of the story: newcomers seem to engage themselves in a contributing profile very early in their contributing history, and they converge toward different contributing profiles.\nBut how much data do we need to connect the dots, and how early is it possible to do so? 
There are strong managerial reasons for advocating detection as early as possible, without too much apparatus.\n\nTo be able to respond quickly to contributors\/newcomers, community managers need not ex-post data analyses (which are very good at describing the behaviors), but tools to identify people along the way, so as to adapt the interactions as soon as possible.\nWhile the development of complex artificial intelligence tools is well advanced for the 'big' Wikipedias, it is slower for the smaller ones, and their tuning is based on the cultural and organizational principles of those big Wikipedias, and especially of the English one\\footnote{Such as the ORES project, which is quite developed over the different Wikipedias for detecting the quality of an edit, but not very much at article level \\url{https:\/\/www.mediawiki.org\/wiki\/ORES}.}.\n\n\n\nThis calls for a temporal description of the contributors, with a minimum of data extraction or qualification (and possibly with the data which are already available for consultation). As stressed by all these studies, the minimal information is the contribution (commit, edit), and all the studies cited are based on the analysis of the edits (sometimes the enriched edits).\nAs a consequence, we wonder, in this article, whether, observing only the edit behavior over time, it is possible to distinguish different profiles, and, if so, to link these profiles with the ones detailed by more complex methods. We follow the path of \\cite{Welseretal11} and of other sociological studies \\cite{GusevaRona-Tas01}, wondering whether it is possible to find 'structural signatures of social attributes of actors'.\n\n\n\\section{Research methodology}\nThis section will describe the different steps that compose our research methodology, from the raw data to the interpretation of the 4 clusters in terms of contributors' activity and roles. 
Figure \\ref{flowchart} gives the global picture of the methodology\ndetailed in the next subsections.\\footnote{Figure inspired by \\cite{Jacksonetal16}.}\n\\begin{figure}[!h]\n\\begin{center}\n\\includegraphics[width=12cm]{flowchart.png}\n\\end{center}\n\\caption{Methodological flowchart.}\n\\label{flowchart}\n\\end{figure}\n\\subsection{Data collection strategy}\nOne of the most useful things about Wikipedia is that much of its data is publicly available for download and analysis. This includes information about Wikipedia content, discussion pages, contributor pages, editing activity (who, what and when), administrative tasks (reviewing content, blocking users, deleting or moving pages), and many other details\\footnote{It has to be understood that when speaking of a \"user\" page, Wikipedia means the users of the wiki, or, more simply, the contributors. The simple readers are called readers. In this article we use the terms contributor and user interchangeably.}. There are many different ways of retrieving data from Wikipedia, such as web crawlers or the available APIs. \nWe used the database dump files which are publicly available for every language and can be downloaded from the Wikimedia Downloads center. An important advantage of retrieving information from these dump files is that researchers have complete flexibility as to the type and format of the information they want to obtain. These dump files are usually available in XML and SQL formats. An important remark about these dump files is that every new file again includes all data already stored in prior versions plus the new changes performed in the system since the last dump process, excluding all information and meta-data pertaining to pages that have been deleted in that interval.\n\n\n\nIn our research, we studied the Danish and Romanian Wikipedias to show how our methodology can be implemented on mid-size language projects. 
The required data for our analysis was present in the \"pages-meta-history\" dump file which was completed on January 1st, 2018. This dump file contains the complete Wikipedia text and meta-data for every change in the Wikipedia from the launch of that Wikipedia until December 2017. After getting the dump file, we used WikiDAT\\footnote{\\url{http:\/\/glimmerphoenix.github.io\/WikiDAT\/}} for the extraction of data from the dumps. WikiDAT (Wikipedia Data Analysis Toolkit) is a tool that automates the extraction and preparation of Wikipedia data into 5 different tables of a MySQL database (page, people, revision, revision hash, logging). WikiDAT uses Python and MySQL and was developed with the aim of creating an extensible toolkit for Wikipedia data analysis.\n\n\\subsection{Construction of the variables}\nIn the field of pattern recognition, it is very important to have features that are informative and discriminative, and that explain the variability present in the data. As a primary data filtering step, the study has been limited to those contributors who have contributed more than 100 edits (irrespective of whether the edits made by them were minor or major) on the respective Wikipedias\\footnote{The definition of what a contributor is is still a matter of debate. \\cite{PancieraHalfakerTerveen09}, studying the English Wikipedia, defined \"Wikipedians\", or regular, really involved contributors, as people having made at least 250 edits in their lifetime. We chose a smaller figure because we wanted to capture the behavior of the not-so-involved, in a nutshell, all those who have been active for several months. An \"editor\", for the Wikimedia Foundation, is somebody who has contributed 5 edits or more in a month. We also wanted to have a big enough number of contributors.}. We removed those contributors who were robots, who contributed only in a single month, or who contributed anonymously. 
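The primary filtering step just described can be sketched as follows. This is an illustrative Python fragment with made-up field names (`edit_timestamps`, `is_bot`, `is_anonymous`); the actual filtering was performed on the WikiDAT MySQL tables:

```python
from datetime import datetime

def keep_contributor(user, min_edits=100):
    """Primary filter: more than `min_edits` edits in total, not a robot,
    not anonymous, and active in more than a single month."""
    months = {(t.year, t.month) for t in user["edit_timestamps"]}
    return (len(user["edit_timestamps"]) > min_edits
            and not user["is_bot"]
            and not user["is_anonymous"]
            and len(months) > 1)

# Hypothetical contributors: 150 edits spread over a year vs. 150 edits in one month.
spread = {"edit_timestamps": [datetime(2017, m % 12 + 1, 5) for m in range(150)],
          "is_bot": False, "is_anonymous": False}
burst = {"edit_timestamps": [datetime(2017, 6, 5)] * 150,
         "is_bot": False, "is_anonymous": False}
print(keep_contributor(spread), keep_contributor(burst))  # True False
```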
There were 171 such contributors in the Romanian Wikipedia and 274 in the Danish Wikipedia. \nAs said, our goal was to use simple activity measures based only on the edits and their distribution over time. With respect to the state of the art, contributors are likely to be grouped in terms of volume, intensity (focus) or duration of activity. Starting with a \"brainstorming\" list of 12 initial features, a short list of 6 features was obtained after studying the correlation matrix. The features that were dropped were the number of edits\/contributions made, the number of days the user has been on Wikipedia, the minimum and the median gap between two consecutive posts, the median number of edits\/contributions made during different months, and the number of different months a user has contributed in. Indeed, redundancy was reduced by removing heavily correlated features. The final features used for the statistical analysis are described in Table \\ref{tab:Variable-description}. \n\n\n\n\n\n\\begin{table}[!h]\n\\caption{\\label{tab:Variable-description}Description of the variables}\n\\begin{tabular}{|c|m{6 cm}|}\n\\hline \nVariable & Description\\tabularnewline\n\\hline \n\\hline \nRatio & The ratio between the number of edits and the number of days a contributor has been on Wikipedia from the very first edit \\tabularnewline\n\\hline \nMean\\textunderscore gap & The average gap between two consecutive posts measured in months\\tabularnewline\n\\hline \nMax\\textunderscore gap & The maximum gap between any two consecutive posts measured in months. 
\\tabularnewline\n\\hline \nNum\\textunderscore cons & The number of pairs of consecutive months with contributions \\tabularnewline\n\\hline \nMean\\textunderscore Month & Average number of edits made per month \\tabularnewline\n\\hline \nSD & Standard deviation of the number of edits across months \\tabularnewline\n\\hline \n\\end{tabular}\n\\end{table}\n\n\n\nRatio is a measure of how massively the contributors have contributed during their entire period of contribution, and it captures the relationship between the number of edits and the number of days. Mean\\textunderscore Month provides information about the average number of edits made in a month, and SD tells us about the variations in the contributions made during these months. Collectively, we can say that the features Ratio, Mean\\textunderscore Month and SD evaluate the quantity and variability of the contributions made by the contributors. Mean\\textunderscore gap is a measure that describes the average time gap between two consecutive posts, and Max\\textunderscore gap measures the longest period of inactivity between two successive posts. Both features give us information about how often the contributors get active and for how long they quit the community before coming back. The feature Num\\textunderscore cons tells us how many times the contributors have contributed in two consecutive months. For example, if a contributor made edits in January 2011 and February 2011, the count is increased by 1. In other words, Num\\textunderscore cons is a measure of the regularity of contributors' edits over time. \n\n\n\n\\subsection{Statistical methods}\nClustering techniques were used to group contributors into similar clusters, highlighting various patterns in terms of activity and roles. In order to draw robust conclusions, the Romanian Wikipedia was used to calibrate the methods and come up with a first interpretation of the groups. 
Then, the Danish Wikipedia was used as a validation dataset to check the group correspondence across different datasets. A contribution of this article is to provide this double checking in terms of cluster validation. Regarding the methods, a two-stage cluster analysis was performed:\n\\begin{enumerate}\n\\item A hierarchical clustering was done based on the features described in Table \\ref{tab:Variable-description} with the \\textit{hclust} function of the R platform for statistical computing \\cite{RRR}. The criterion used was the Ward distance, adapted to quantitative features \\cite{duda2012pattern}. The resulting dendrogram can suggest a first trend about the optimal number of clusters, in terms of loss of inertia.\n\n\n\\item Partitioning algorithms were used as alternative clustering methods in order to select the final typology. In our research, the contributors were clustered using a k-medoids clustering algorithm called PAM (Partitioning Around Medoids), from the R package \\textit{cluster}. The PAM algorithm is based on the search for $k$ representative objects, or medoids, among the observations of the data set. It is known to be more robust than the k-means algorithm, especially with respect to the initialization \\cite{kaufman2009finding}.\n\nIn this work, different typologies have been formed for $k$ ranging in the interval selected in step 1. Results of the PAM algorithm were consistent with those obtained from step 1. Then, the optimal number of clusters was selected with cluster validation techniques such as the silhouette index \\cite{halkidi2002cluster}, which models how well contributors are clustered into their groups (intra- vs. inter-cluster inertia). \n\\end{enumerate}\n\nTo assist the interpretation of the resulting clusters, Principal Component Analysis (PCA) has been carried out in order to project the data onto a small number of dimensions that are combinations of the initial variables \\cite{saporta2006probabilites}. 
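The PAM and silhouette steps described above can be sketched with a small, self-contained Python implementation. The paper itself relies on the R functions \textit{hclust} and \textit{pam}; the numpy version below is only an illustration of the underlying algorithms, on a made-up toy dataset:

```python
import numpy as np

def pairwise(X):
    """Euclidean distance matrix between all rows of X."""
    return np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

def k_medoids(X, k, n_iter=50, seed=0):
    """PAM-style k-medoids: assign each point to its nearest medoid, then
    move each medoid to the member minimizing total intra-cluster distance."""
    rng = np.random.default_rng(seed)
    D = pairwise(X)
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            # each medoid belongs to its own cluster, so clusters are nonempty
            members = np.where(labels == c)[0]
            new_medoids[c] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return np.argmin(D[:, medoids], axis=1), medoids

def mean_silhouette(X, labels):
    """Average silhouette width: (b - a) / max(a, b) for each point, where
    a is the mean intra-cluster distance and b the smallest mean distance
    to another cluster."""
    D, scores = pairwise(X), []
    for i in range(len(X)):
        own = labels == labels[i]
        a = D[i, own].sum() / max(own.sum() - 1, 1)
        b = min(D[i, labels == c].mean() for c in set(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Toy example: two well-separated groups of 'contributors' in feature space.
X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]], dtype=float)
labels, medoids = max((k_medoids(X, 2, seed=s) for s in range(5)),
                      key=lambda r: mean_silhouette(X, r[0]))
print(mean_silhouette(X, labels))  # close to 1 for well-separated clusters
```

Scanning $k$ over the interval suggested by the dendrogram and keeping the $k$ with the largest average silhouette mimics the selection step described in the enumeration above.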
In this article, three dimensions were enough to explain almost $90\\%$ of the data variability. In addition to PCA, ANOVA and Tukey statistical tests have helped to determine the significant variables within each cluster. Altogether, these methods ensure a full and robust interpretation of the clustering results. \n\n\n\\section{Results}\nHierarchical clustering gives a first visualization of the data structure (Figure \\ref{den}). As for the optimal number of clusters $k$, the dendrogram suggests an interval between 2 and 10 clusters that could be investigated by further evaluation.\n\\begin{figure}[!h]\n\\begin{center}\n\\includegraphics[width=10cm]{Dendrogram.png}\n\\end{center}\n\\caption{Cluster dendrogram of the Romanian Wikipedia}\n\\label{den}\n\\end{figure}\nAs mentioned in the previous section, the evaluation has been made with cluster validity indexes such as the average silhouette width and the total within sum of squares. Figure \\ref{validity} depicts the evaluation results, pointing to four clusters of contribution behavior in the Romanian Wikipedia, this number being validated afterward with the Danish Wikipedia.\n\\begin{figure}[!h]\n\\begin{center}\n\\includegraphics[width=8cm]{Rowiki.png}\n\\end{center}\n\\caption{Cluster Validation Plots}\n\\label{validity}\n\\end{figure}\n One cluster in our analysis contains the smallest number of contributors in both cases. The distribution of the contributors in the clusters for both Wikipedias is given in Table \\ref{Table:Size_of_Clusters}. \n\\begin{table}[h] \n\\centering\n\\caption{Size of Clusters}\n\\label{Table:Size_of_Clusters}\n\\begin{tabular}{l c c c c }\n\\hline\nWikipedia & Cluster 1 & Cluster 2 & Cluster 3 & Cluster 4 \\\\\n\\hline \nRomanian& 25 & 92 & 48 & 6 \\\\\nDanish& 45 & 144 & 61 & 24\\\\\n\\hline\n\\end{tabular}\n\\end{table}\nWith respect to cluster interpretation, a PCA with three principal components explains almost 90\\% of the total dataset variance. 
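The variance-explained computation behind this statement can be sketched with a plain singular value decomposition. This is illustrative numpy code on random data, not the authors' R implementation:

```python
import numpy as np

def explained_variance_ratio(X):
    """PCA via SVD of the column-standardized data matrix: the squared
    singular values give the share of variance carried by each component."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)  # center and scale each feature
    s = np.linalg.svd(Z, compute_uv=False)    # singular values, descending
    return s**2 / (s**2).sum()

# Hypothetical data: 100 contributors described by 6 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
ratio = explained_variance_ratio(X)
print(ratio[:3].sum())  # share of variability covered by the first three dimensions
```

For the real feature matrix, `ratio[:3].sum()` is the quantity reported as "almost 90%" above.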
Figure \\ref{PCA} depicts the projection of the labeled contributors onto these first three dimensions. Analyzing the loadings for both wikis, it turns out that the first dimension (PC1) is correlated with the volume of the activity (ratio, mean number of edits), with a relative intra-cluster variability. Dimension 2 (PC2) relates to the periods of inactivity (the gaps). The correlation is negative in Figure \\ref{PCA}. Dimension 3 (PC3) mainly refers to the variable Num\\textunderscore cons; it relates to the notion of regularity. Please note that, due to computational details, this correlation is positive for the Romanian Wikipedia and negative for the Danish Wikipedia. \n\n\\begin{figure*}[!h]\n\\begin{center}\n\\includegraphics[width=\\textwidth]{Rplot.png}\n\\end{center}\n\\caption{PCA Analysis with projection of the four clusters}\n\\label{PCA}\n\\end{figure*}\n\n\nThe interpretation enables the extraction of the following contributor profiles:\n\\begin{itemize}\n\\item Cluster 1: contributors \"on a mission\". After joining the community or editing their first articles, these contributors left Wikipedia for long periods. When they did contribute, they did it with a high, but short-term, activity.\n\\item Cluster 2: basic, or 'casual', contributors. This group shows basic characteristics with no significant markers, besides the fact that their activity is neither particularly intense, nor particularly focused (in terms of period, or of number of articles addressed).\n\\item Cluster 3: regular contributors. The activity is above the average (even if not by much) and the most regular among all groups, especially in terms of the number of consecutive months of presence. \n\\item Cluster 4: top contributors. These contributors present a huge activity ratio; they are the core, or very active, contributors found in other research articles. 
Nevertheless, this cluster contains higher variability than the others.\n\\end{itemize}\n\n\nThese interpretations are confirmed by the unidimensional boxplot distributions (Figure \\ref{dawiki_boxplots} and Figure \\ref{rowiki_boxplots}, in the Appendix).\n\nGenerally, the boxplots give a fine picture of the feature distributions within each cluster, with a focus on the intra-cluster variability. An illustrative variable was added to the analysis: the number of different articles a contributor has contributed to. This external feature confirms the analysis above.\n\n\n\n\\section{Discussion}\n\n\\subsection{Simple methods = solid conclusions}\nOur goal was to evaluate if, with simple measures of contributing activity over time, it was possible to detect the different profiles of contributors with data reduction techniques. At least on the Wikipedia example, we have been able to detect the focused workers (cluster 1), the casual workers (cluster 2), and the regular workers (clusters 3 and 4), and even to discriminate, among the latter, the very involved (the top, or very top, contributors \\cite{Priedhorskyetal07}). As far as the objective is to identify contributor profiles, our article shows that following the edits is quite enough. The number of articles involved has been added as an illustrative variable, in order to better link our findings to the descriptions realized by \\cite{Balestraetal16,Arazyetal17}. In terms of methodology, it is remarkable that simple data reduction techniques such as clustering and PCA reach a level of information comparable to more refined approaches, such as \\cite{Weietal15}, who applied non-parametric Hidden Markov clustering models of profiles.\n\n\\subsection{Limitations and future research}\nHowever, this work suffers from some limitations that should be discussed, while opening future research directions. First, a strong hypothesis has been made by focusing only on contributors with more than 100 edits. 
If a potential application of such a clustering approach is to increase the user retention rate, it would be relevant to pay special attention to the small contributors with fewer than 100 edits, and design retention strategies for them. However, dealing with such a population would lead to more data quality issues and uncertainty. A deeper analysis of the clusters also revealed the presence of peripheral participation periods, but mainly for the people on a mission (Cluster 1), so the first edits are of paramount importance and may need special treatment to distinguish between those learners and the casual contributors (Cluster 2), for instance.\nThe second limitation concerns the volume of data analyzed: the results should be generalized to bigger datasets like the English, or the French, Wikipedia. Nevertheless, our research methodology gives some guarantees about the work's generalization capabilities, since the methods have been first calibrated on the Romanian Wikipedia and then validated with the Danish Wikipedia (with very good consistency). However, those two are occidental Wikipedias, and it would be just as interesting to run the same analysis on the Arabic, Thai, or Hindi Wikipedias, in a word, on any other non-occidental, medium-size Wikipedia.\nAnother weakness is related to the limited number of features used to detect the profiles. It would be relevant to consider other characteristics that add variety. For instance, a first step would consist in using the number of different articles as an explanatory variable instead of just an illustrative one. Other variables should be added as well, as long as they remain simple and easily observable (and computable) by the project 'managers' in all Wikipedias. \nThe highlighted profiles are identifiable early in the history of involvement, suggesting that light monitoring of newcomers may be sufficient to adapt the interaction with them and increase the retention rate. 
\n\nBut above all, further research will deal with the extension of this offline clustering towards dynamic techniques. The principle is to dynamically adapt the clusters as new contributors join the community. Online clustering methods (such as Growing Neural Gas) could be adapted in order to develop a dynamic decision support tool for online contributors assistance. \n\n\n\n\n\n \n\n\n\n\n\n\n\\bibliographystyle{SIGCHI}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nUltracold atoms in optical lattices provide tunable experimental\nrealizations of correlated many-particle systems. Even the dimensionality\nof these systems can be chosen between one and three.\nRecently, the transition between superfluid and Mott insulating (MI) phases\nhas been observed by tuning the ratio between the kinetic and the interaction\nenergy \\cite{grein02,pared04}. Dynamical aspects can also be addressed \n experimentally \\cite{stofe04,kohl05}.\n\n\nThere are many theoretical treatments of these issues, ranging from \nmean-field approaches \\cite{jaksc98} to more sophisticated \ntechniques \\cite{fishe89,pai96,kashu96,kuhne98,elstn99,kuhne00}. \nThe transition at zero temperature ($T=0$)\nhas been analyzed in one-dimensional \nsystems \\cite{pai96,kashu96,kuhne98,elstn99,kuhne00}. \nBut there is still a multitude of open questions.\nOur aim is to explain the dynamics of the excitations\n which has moved in the focus recently\n\\cite{stofe04,kohl05,batro05}.\n\nThe present work is intended to clarify the significance of various\nexcitation processes depending on the kinetic and the \ninteraction energy and on the temperature. To this end, we will\nanalyze the spectral weights of the\none-dimensional MI phase with one boson per site $n=1$.\n\nThe low-energy excitations are either double occupancies ($n=2$),\nwhich we will call `particles', or they are holes ($n=0$). Both excitations \nare gapped as long as the system is insulating. 
They become soft at the \nquantum critical point which separates the insulator from the superfluid.\nWe use a continuous unitary transformation (CUT)\n\\cite{wegne94,knett00a} to obtain an effective Hamiltonian \nin terms of the elementary excitations `particle' and `hole'. \nThis effective Hamiltonian\nconserves the number of particles and of holes \\cite{knett03a}.\nThe CUT is realized in real space in close analogy to the \nderivation of the generalized $t$-$J$ model from the Hubbard model\n\\cite{reisc04}. The strongly correlated many-boson problem\nis reduced to a problem involving a small number of particles and holes.\nThis simplification enables us to calculate \nkinetic properties like the dispersions and\nspectral properties like spectral weights. \n\n\nThe article is set-up as follows. First, the model and the relevant\nobservable are introduced (Sect.\\ II). \nThen, the method used is presented and described (Sect.\\ III).\nThe spectral weights are computed in Sect.\\ IV. In Sect.\\ V, the\nexperiment is analyzed which requires calculations at finite temperatures\nas well. Finally, the results are discussed in Sect.\\ VI.\n \n \n\\section{Model}\nTo be specific, we study the Bose-Hubbard model \n$H = t H_t+ U H_U$ in one dimension\n\\begin{equation}\n\\label{hamiltonian}\n H = -t\\sum\\limits_i( b^\\dagger_i b^{\\phantom{\\dagger}}_{i+1}+b^\\dagger_{i+1}\n b^{\\phantom{\\dagger}}_i ) + (U\/2)\\sum_i \\hat{n}_i ( \\hat{n}_i-1) \n\\end{equation}\nwhere the first term is the kinetic part $t H_t$ and the second term the \nrepulsive interaction $U H_U$ with $U>0$. The \n bosonic annihilation (creation) operators are denoted by \n$b^{(\\dagger)}_i$, the number of bosons by \n$\\hat{n}_i =b^\\dagger_i b^{\\phantom{\\dagger}}_i$.\nIf needed the term $H_\\mu= -\\mu\\sum_i \\hat{n}_i$ is added to $H$\nto control the particle number.\nFor numerical simplicity, we truncate the local bosonic Hilbert space \nto four states. 
This does not change the relevant physics significantly\n\\cite{pai96,kashu96,kuhne98}.\n\nBesides the Hamiltonian we need to specify the excitation operator $R$. In the\nset-up of Refs.~\\onlinecite{stofe04} and \\onlinecite{kohl05} the depth\nof the optical lattices is changed periodically to excite the system. \nIn terms of the tight binding model (\\ref{hamiltonian}) this amounts\nto a periodic change of $t$ and of $U$\nleading to \n\\begin{equation}\nR \\propto \\delta t H_t +\\delta U H_U\\ .\n\\end{equation} \nSince multiples of $H$ do not induce excitations we consider\n\\begin{equation}\nR \\to \\tilde R = R- (\\delta U\/U) H\\ .\n\\end{equation} \nBoth operators $R$ and $\\tilde R$ induce the same transitions\n$\\langle n | R | m \\rangle = \\langle n | \\tilde R | m \\rangle$\nwhere $| n \\rangle \\neq | m \\rangle$ are eigen states of $H$.\nEventually, the relevant part of $R$ is proportional to $H_t$.\nFor simplicity, we set the factor of proportionality to one.\n\n\nIf the interaction dominates ($U\/t \\to \\infty$) the ground state of\n(\\ref{hamiltonian}) is the product state of precisely one boson per site\n$|\\text{ref}\\rangle=|1\\rangle_1\\otimes |1\\rangle_2 \\ldots\\otimes |1\\rangle_N$,\nwhere $|n\\rangle_i$ denotes the local state at site $i$ with $n$ bosons.\nWe take $|\\text{ref}\\rangle$ as our reference state; all deviations\nfrom $|\\text{ref}\\rangle$ are considered as elementary excitations. \nSince we restrict\nour calculation to four states per site we define three creation operators:\n$h^\\dagger_i |1\\rangle_i= |0\\rangle_i$ induces a hole at site $i$,\n$p^\\dagger_i |1\\rangle_i= |2\\rangle_i$ induces a particle at site $i$,\nand $d^\\dagger_i |1\\rangle_i= |3\\rangle_i$ induces a double-particle\nat site $i$. The operators $h$, $p$ and $d$ obey the \ncommutation relations for hardcore bosons.\n\n\\section{CUT}\nThe Hamiltonian (\\ref{hamiltonian}) conserves the number of \nbosons $b$. 
But if it is rewritten in terms of $h,p$, and $d$ it is no longer\nparticle-conserving, e.g., the application of $H_t$ to $|\\text{ref}\\rangle$\ngenerates a particle-hole pair. We use a CUT defined by\n\\begin{equation}\n \\label{eq:fleq}\n \\partial_l H (l) = [\\eta (l),H(l)]\n\\end{equation}\nto transform $H(l=0)=H$ from its form in \n(\\ref{hamiltonian}) to an effective Hamiltonian\n$H_\\text{eff}:=H(l=\\infty)$ which \\emph{conserves} the number\nof elementary excitations, i.e., $[H_U,H_\\text{eff}]=0$. An appropriate\nchoice of the infinitesimal generator $\\eta$ is defined by the matrix\nelements \n\\begin{equation}\n\\label{eq:generator}\n\\eta_{i,j} (l)={\\rm sgn}\\left(q_i-q_j\\right)H_{i,j}(l)\n\\end{equation}\nin an eigen basis of $H_U$; $q_i$ is the corresponding eigenvalue\n\\cite{knett00a}. The \nstructure of the resulting $H_\\text{eff}$ and the\nimplementation of the flow equation in second quantization in real space\nis described in detail in Refs.\\ \\onlinecite{knett03a} and \n\\onlinecite{reisc04}. \n\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=\\columnwidth]{.\/fig1.eps}\n \\end{center}\n \\caption{Residual off-diagonality for values of $t\/U\\in \\{0.1,0.2\\}$ for\n maximal $r=2$ and $r=3$. {\\em Inset:} Magnification for small values of\n the flow parameter $l$. The ROD for $r=3$ and $t\/U=0.2$\n displays non-monotonic behavior. In this case, we stop \n the flow at the first minimum of the ROD. 
\n Its position is indicated by a vertical line.}\n \\label{fig_ROD}\n\\end{figure}\nThe flow equation (\\ref{eq:fleq}) generates a \nproliferating number of terms.\nFor increasing $l$, more and more hardcore\nbosons are involved, e.g., annihilation and creation of three bosons \n$p^\\dagger_i p^\\dagger_{i+1} p^\\dagger_{i+2}\np^{\\phantom\\dagger}_i p^{\\phantom\\dagger}_{i+1} p^{\\phantom\\dagger}_{i+2}$, \nand processes over a larger and larger range occur,\ne.g., hopping over $j$ sites $p^\\dagger_{i+j} p^{\\phantom\\dagger}_i$.\nTo keep the number of terms finite we omit normal-ordered terms\nbeyond a certain extension $r$.\nNormal-ordering means that the creation operators of the elementary\nexcitations appear to the left of the annihilation operators. If\nterms appear which are not normal-ordered the commutation relations\nare applied to rewrite them in normal-ordered form. The normal-ordering\nis important since it ensures that only less important terms are omitted.\nGenerically, such terms involve more particles \\cite{knett03a,reisc04}.\n\nWe define the extension $r$ as the distance between\nthe rightmost and the leftmost creation or annihilation\noperator in a term \\cite{knett03a,reisc04}. The extension $r$ of a term \nmeasures the range of the physical process which is described by\nthis term. So our restriction to a finite extension restricts the\nrange of processes kept in the description. This is the most\nserious restriction in our approach.\nNote that the extension $r$ implies for hardcore bosons\nthat at maximum $r+1$ bosons are involved.\n\n\n\n\nFor $l<\\infty$, $H(l)$ still contains terms\nlike $p^\\dagger_i h^\\dagger_{i+1}$ which do not conserve the number\nof particles. To measure the extent to which $H(l)$ deviates from the\ndesired particle-conserving form, we introduce the\nresidual off-diagonality (ROD) as the sum of the \nmoduli squared of the coefficients of all terms which change the number of\nelementary excitations. 
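The decay of the ROD under the flow can be illustrated at a toy level by integrating the flow equation (\ref{eq:fleq}) with the generator (\ref{eq:generator}) for a small matrix. The sketch below uses simple Euler steps on a hypothetical $3\times 3$ Hamiltonian with two excitation sectors; the actual calculation works with the full operator algebra in second quantization, not a finite matrix:

```python
import numpy as np

def cut_flow(H, q, dl=0.01, steps=4000):
    """Integrate dH/dl = [eta, H] with eta_ij = sgn(q_i - q_j) H_ij,
    where q_i counts the elementary excitations of basis state i.
    Returns the flowing Hamiltonian and the ROD history, i.e. the sum of
    squared coefficients of all sector-changing (off-diagonal) terms."""
    H = H.astype(float).copy()
    sign = np.sign(q[:, None] - q[None, :])
    rod = []
    for _ in range(steps):
        eta = sign * H                       # generator, Eq. (eq:generator)
        H = H + dl * (eta @ H - H @ eta)     # Euler step of Eq. (eq:fleq)
        rod.append(np.sum(H[sign != 0] ** 2))
    return H, rod

# Toy model: one 0-excitation state coupled to two 1-excitation states,
# mimicking particle-number-changing terms of the Bose-Hubbard model.
q = np.array([0, 1, 1])
H0 = np.array([[0.0, 0.3, 0.2],
               [0.3, 1.0, 0.1],
               [0.2, 0.1, 1.2]])
H_eff, rod = cut_flow(H0, q)
print(rod[0], rod[-1])  # the ROD decays towards zero along the flow
```

Here the excitation number is correlated with the energy, so the ROD decays monotonically; the non-monotonic behavior discussed above arises when this correlation is violated, e.g. by overlapping continua.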
The ROD for maximal extensions $r=2$ and $r=3$ is \nshown in Fig.~\\ref{fig_ROD}. \nIdeally, the ROD decreases monotonically with $l$. \nThe calculation with $r=2$ indeed shows the desired\nmonotonic behavior of the ROD for all values of $t\/U$.\n \n \nBut non-monotonic behavior can also occur \\cite{reisc04}. It is observed in \nFig.\\ \\ref{fig_ROD} for extension $r=3$. \nWhile for small $t\/U$ the ROD still decays monotonically it displays\nnon-monotonic behavior for larger $t\/U$, see e.g.\\ the ROD for $t\/U=0.2$ and \n$r=3$ in Fig.~\\ref{fig_ROD}. \nAn uprise at larger values $l \\gtrapprox 1\/t$ signals that the \nintended transformation cannot be performed in a well-controlled way.\nThe uprise stems from matrix elements which do not decrease but increase,\nat least in an intermediate interval of the running variable $l$.\nSuch an increase occurs if the number of excitations is not correlated\nwith the total energy of the states. This means that two states are linked by\nan off-diagonal matrix element of which the state with low\nnumber of excitations is \\emph{higher} in total energy than the state with\na higher number of excitations. The increase of such a matrix element\nbears the risk that the unavoidable\ntruncations imply a too severe approximation.\n\nThe situation that the number of excitations is not correlated with the\ntotal energy can occur where the lower band edge of a continuum of more \nparticles falls below the upper band edge of a continuum of less particles or\nbelow the dispersion of a single excitation,\nsee, e.g., Ref.\\ \\onlinecite{schmi05b}. This situation implies additional\nlife-time effects since the states with less particles can decay\ninto the states with more particles. 
Even if the decay rates are very\nsmall the CUT can be spoilt by them because the CUT in its form defined \nby Eq.\\ (\\ref{eq:generator}) correlates the particle number and the energy of\nthe states.\n\nIn order to proceed in a controlled way, we neglect the small life-time \neffects if they occur at all. This is done by stopping the CUT at\n$l_\\text{min}$ at the first minimum of the ROD. \nThe position of $l_\\text{min}$ for $t\/U=0.2$ and $r=3$ is indicated in the\ninset of Fig.~\\ref{fig_ROD} by a vertical line. It is found that the remaining \nvalues of the ROD are small and thus negligible. Hence we omit the remaining\noff-diagonal terms. \nOnly close to the critical point, where\nthe MI phase vanishes, the present approach\nbecomes insufficient.\n\n\n\\begin{figure}[thbp]\n \\begin{center}\n \\includegraphics[width=\\columnwidth,height=\\columnwidth]\n\t\t {.\/fig2.eps}\n \\end{center}\n \\caption{(color online) Phase diagram of the Mott insulating (MI) and the\n superfluid (SF) phase in the $(t\/U,\\mu\/U)$ plane.\n Dotted (dashed) lines show CUT results for maximal extension $r=2$ \n ($r=3$). Solid lines (symbols) show series (DMRG) results from Refs.\\\n \\onlinecite{elstn99} and \\onlinecite{kuhne98}.}\n \\label{fig_PD}\n\\end{figure}\nWe check the reliability of our approach by comparing its results to those\nof other methods\nfor the phase diagram in Fig.~\\ref{fig_PD}. The upper\nboundary of the MI phase is given by the particle gap\n$\\Delta^\\text{p}:=\\text{min}_k \\omega^\\text{p}(k)$. \nThe lower boundary is given by the negative hole gap\n$-\\Delta^\\text{h}$ where\n$\\Delta^\\text{h}:=\\text{min}_k \\omega^\\text{h}(k)$.\nThe dotted (dashed) curves result from CUTs for $r=2$ ($r=3$).\nFor $r=2$, the CUT can be performed till $l=\\infty$;\nfor $r=3$, the flow is stopped at $l_\\text{min}$.\nSolid curves depict the findings by series\nexpansion, the symbols those obtained by DMRG \\cite{elstn99,kuhne98}. 
\nThe agreement is very good in view of the truncation of the \nHilbert space and in view of the low value of $r$. Note that the $r=3$\nresult agrees better with the series and DMRG results\nthan the $r=2$ result. As expected, the deviations\nincrease for larger values of $t$ because longer-range processes become\nmore important. Yet the values obtained by the CUT for the critical ratio \n$x_c:=t\/U$, where the MI phase vanishes, are reasonable.\nWe find $x_c^{(r=2)}=0.271$ and $x_c^{(r=3)}=0.258$. By high accuracy \ndensity-matrix renormalization $x_c=0.297\\pm0.01$ was found, see\nRef.\\ \\onlinecite{kuhne00} and references therein. Series expansion\nprovides $x_c=0.26\\pm0.01$ which is very close to our value $x_c^{(r=3)}$.\nThis fact underlines the similarity between series expansions and the\nreal space CUT as employed in the present work.\n\nWe conclude from the above findings for the phase diagram (Fig.\\\n\\ref{fig_PD}) that the mapping to the particle-conserving $H_\\text{eff}$\nworks very well for large parts of the phase diagram. It\ndoes not, however, capture the Kosterlitz-Thouless nature of the transition\nitself \\cite{kuhne00}.\nThe CUT yields reliable results within the MI phase for \n$t\\lessapprox 0.2U$. Henceforth, results for this regime\nwill be shown which were obtained in the $r=3$ scheme.\n\n\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=\\columnwidth,height=\\columnwidth]\n\t\t {.\/fig3.eps}\n \\end{center}\n \\caption{(color online) The dispersions of a particle $\\omega^{\\rm p}(k)$ \n (black) and a hole $\\omega^{\\rm h}(k)$ (green\/grey) for $t=0.1U$ (solid),\n $t=0.15U$ (dotted) and $t=0.2U$ (dashed).}\n \\label{fig_disp}\n\\end{figure}\nIn Fig.~\\ref{fig_disp}, the single-particle dispersion $\\omega^{\\rm p}(k)$ \n(one-hole dispersion $\\omega^{\\rm h}(k)$) is shown as black \n(green or grey, resp.) curves.\nBoth dispersions increase with $t$; the particle dispersion \nalways exceeds the hole dispersion. 
\nOn increasing $t$, the center of the hole dispersion shifts\nto higher energies while the center of the particle dispersion \nremains fairly constant in energy.\n\n\\section{Observable}\nAt the beginning of the flow ($l=0$), the observable $R$ is proportional to \n$H_t$. The dynamic structure factor $S_R(k,\\omega)$ encodes the response of the\nsystem to the application of the observable $R$. The observable $R$ transfers \nno momentum because it is invariant with respect to translations.\nTherefore, Bragg spectroscopy \\cite{stofe04,kohl05} measures the response \n$S_R(k=0,\\omega)$ at momentum $k=0$ and energy $\\omega$. \nA sketch of the spectral density $S_R(k=0,\\omega)$ for this observable is shown\nin Fig.~\\ref{fig_sketch}. We assume that the average energy \nis mainly determined by $H_U$ in the MI regime.\nThen the first continuum is located at $U$ and a \nsecond one at $2U$. For small $t\/U$, the continua will be well separated. \nThe energy-integrated spectral density in the first continuum is the spectral\nweight $S_1$. Correspondingly, $S_2$ stands for \nthe spectral weight in the second continuum. \n\n\nTo analyze the spectral weights in the effective model obtained by the CUT,\nthe observable $R$ must be transformed as well.\nBefore the CUT, the observable is $R(l=0)=H_t$. It is a sum of \nlocal terms \n\\begin{equation}\n R(l=0)=H_t= \\sum_{{\\mathbf r}} \n b^\\dagger_{{\\mathbf r}-1\/2} b^{\\protect\\phantom\\dagger}_{{\\mathbf r}+1\/2}\n +b^\\dagger_{{\\mathbf r}+1\/2} b^{\\protect\\phantom\\dagger}_{{\\mathbf r}-1\/2}\\ ,\n\\end{equation}\nwhere we have rewritten the sum as a sum over the bonds ${\\mathbf r}$. The bond\npositions lie midway between two neighboring sites. This notation emphasizes\nthat the observable acts locally on bonds.
The observable is\ntransformed by the CUT to\n\\begin{equation}\n\\label{eq:Reff}\n {R}^{\\rm eff}=R(l=\\infty)=\\sum_{{\\mathbf r}} R(l=\\infty,{\\mathbf r})\\ .\n\\end{equation}\nIt is the sum over the transformed local observables $R(l=\\infty,{\\mathbf r})$ \nwhich are centered at the bonds ${\\mathbf r}$. \n\nThe local observable is sketched schematically in Fig.~\\ref{fig_obsaction}. \nThe sites on which the local observable acts are shown as filled circles. \nThe state of the sites shown as empty circles is not altered by the observable.\nAt $l=0$, the observable is $H_t$. It acts only locally on adjacent sites of \nthe lattice as shown in Fig.~\\ref{fig_obsaction}a. \n\n\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=\\columnwidth]{.\/fig4.eps}\n \\end{center}\n \\caption{Sketch of the distribution of spectral weight $S(k=0,\\omega)$. \n The weight centered around $\\omega=U$ is given by\n $S_1$, the weight around $\\omega=2U$ by $S_2$.}\n \\label{fig_sketch}\n\\end{figure}\n\\begin{figure}[h]\n \\begin{center}\n \\includegraphics[width=\\columnwidth]{.\/fig5.eps}\n \\end{center}\n \\caption{Terms of the local observable $R(l,{\\mathbf r})$. \n The arrow indicates the bond on\n which a term acts. Filled circles: sites on which a non-trivial\n operator acts; `non-trivial' means that the operator is different from\n the identity. Open circles: sites on which the term under study does not \n act, i.e., the term acts as identity on these sites.\n (a) At $l=0$ the observable is $H_t$. \n It is composed of local terms that act only on adjacent sites of the\n lattice. \n (b) and (c) More complicated terms appear during the flow. The sum\n of the distances of all operators in a term is our measure \n $r_{\\mathcal O}$ for the extension of a term. \n It is $2$ for (b) and $4$ for (c). 
Terms beyond\n a certain extension are omitted.}\n \\label{fig_obsaction}\n\\end{figure}\n\nWe compute the matrix elements of the excitation\noperator after the CUT, i.e., of ${R}^{\\rm eff}$. This operator\nconsists of terms of the form \n$c_{(\\bar{n};\\bar{m})} (h^\\dagger)^{n_h}(p^\\dagger)^{n_p} (d^\\dagger)^{n_d}\nh^{m_h}p^{m_p} d^{m_d}$, where we have omitted all spatial subscripts,\ndenoting only the powers \n$(\\bar{n};\\bar{m}) = (n_h\\; n_p\\; n_d; m_h\\; m_p\\; m_d)$ of the particular \noperator type. \nFor the full expression we refer the reader to Appendix~\\ref{sec:app}. \nThe coefficient $c_{(\\bar{n};\\bar{m})}$ is the corresponding prefactor.\nThe spectral weight $I^\\text{eff}_{(\\bar{n};\\bar{m})}$ \nis the integral of $S(k=0,\\omega)$ over all frequencies for momentum \ntransfer $k=0$. It stands for\n the excitation process starting from the states with $m_d$ double-particles,\n$m_p$ particles, and $m_h$ holes and leading to \nthe states with $n_d$ double-particles, $n_p$ particles, and $n_h$ holes. \n\nIn the course of the flow, contributions to the observable appear which\ndo not act on the bond on which the initial observable is centered.\nThe initial local process spreads out in real space for $l\\to \\infty$.\nExamples are sketched in Fig.~\\ref{fig_obsaction}b-c. \nIn order to avoid proliferation, the terms in the observable also have to be \ntruncated. As for the terms in the Hamiltonian, we introduce a measure\nfor the extension of the terms in the observable. This measure, however,\nis slightly different: It is the sum $r_{\\mathcal O}$ of the distances \nof all its local creation or annihilation operators to ${\\mathbf r}$.\nIf the value $r_{\\mathcal O}$ of a certain term\nexceeds a preset truncation criterion, this term is neglected.\n\nFigures \\ref{fig_obsaction}b-c illustrate the truncation criterion for the\nobservable.
An operator that acts on the sites shown in \nFigure~\\ref{fig_obsaction}b meets the truncation criterion for \n$r_{\\mathcal O}=3$. It is kept in an $r_{\\mathcal O}=3$ calculation. \nAn operator acting on the sites in Figure~\\ref{fig_obsaction}c \nis kept in the calculation with $r_{\\mathcal O}=4$. But it does not meet\nthe truncation criterion for $r_{\\mathcal O}=3$; in a calculation with \n$r_{\\mathcal O}=3$ it is discarded. \n\nOnce the local effective observables $R(l=\\infty,{\\mathbf r})$ are known,\nthe total effective expression $R^\\text{eff}$ is given by the sum over \nall bonds (\\ref{eq:Reff}).\n\n\\begin{figure}[thbp]\n \\begin{center}\n \\includegraphics[width=\\columnwidth]\n {.\/fig6a.eps}\n \\includegraphics[width=\\columnwidth]\n {.\/fig6b.eps}\n \\end{center}\n \\caption{(color online) Spectral weights of various processes for \n truncations $r_{\\mathcal O}\\in \\{2,3,4\\}$. \n (a) \n $I^\\text{eff}_{(\\text{hp;0})}$ (black curves) and \n $I^\\text{eff}_{(2\\text{h2p};0)}$ (green\/grey). \n (b) \n $I^\\text{eff}_{(\\text{h2p;p})}+I^\\text{eff}_{(\\text{2hp;h})}$ (black)\n and $I^\\text{eff}_{(\\text{hd;p})}$ (green\/grey).}\n \\label{fig_sw}\n\\end{figure}\nThe results for the observable truncations\n$r_{\\mathcal O}\\in \\{2, 3, 4\\}$ are shown in Fig.~\\ref{fig_sw}. At zero temperature, only the processes starting from the ground state\nare relevant. These processes are the $(\\bar{n};0)$ processes \nwhich start from the reference state $|\\text{ref}\\rangle$ because\nthe CUT is constructed such that\nthe reference state, which is the vacuum of excitations,\nbecomes the ground state after the transformation\n\\cite{knett00a,mielk98,knett03a}.\nThe weights of the $(\\bar{n};0)$ processes are shown \nin Fig.~\\ref{fig_sw}a.
The by far dominant weight is\n $I^\\text{eff}_{(\\text{hp};0)}$; the process \n$I^\\text{eff}_{(\\text{2h2p};0)}$ is lower by orders of magnitude.\nThe agreement between results for various truncations is very good.\n\n\nThe particle-hole pair excited in $I^\\text{eff}_{(\\text{hp};0)}$ \nhas about the energy $U$ for low values of $t\/U$, \nwhile $I^\\text{eff}_{(\\text{2h2p};0)}$ leads to a response at $2U$.\nHence we find noticeable weight only around $U$ in accordance with\nrecent quantum Monte Carlo (QMC) data \\cite{batro05}.\n\n\nAt finite temperature, excitations are present before $R^\\text{eff}$\nis applied. At not too high temperatures, independent particles $p^\\dagger$\nand holes $h^\\dagger$ are the prevailing excitations. Other excitations,\nfor instance $d^\\dagger$ or correlated states $(p^\\dagger)^2$\nof two $p$-particles, are higher in energy and thus much less likely.\nSo the processes starting from a particle or a hole are the important ones\nwhich come into play for $T>0$. Hence we focus on\n $I^\\text{eff}_{(\\text{h2p;p})}$, $I^\\text{eff}_{(\\text{2hp;h})}$,\nand $I^\\text{eff}_{(\\text{hd;p})}$. \nThese weights are shown in Fig.~\\ref{fig_sw}b. The results \nfor $I^\\text{eff}_{(\\text{hd;p})}$ depend only very little on truncation. \nA larger dependence on the truncation is found for\n$I^\\text{eff}_{(\\text{h2p;p})}+I^\\text{eff}_{(\\text{2hp;h})}$. But the \nagreement is still satisfactory. \n\nThe processes \n$I^\\text{eff}_{(\\text{h2p;p})}$ and $I^\\text{eff}_{(\\text{2hp;h})}$\nincrease the energy by about $U$ because they \ncreate an additional particle-hole pair. The \n$I^\\text{eff}_{(\\text{hd;p})}$ process increments the energy\nby about $2U$ because a hole and a double-particle is generated.\n\n\n\\section{Approaching Experiment}\n\n\nLet us address the question what causes the high energy peak in\nRefs.~\\cite{grein02,stofe04,kohl05}. 
It was suggested that\ncertain defects, namely an adjacent pair of a singly and a doubly\noccupied site, are at the origin of the high energy peak.\nThe weight of such processes for a given particle state is\nquantified by $I^\\text{eff}_{(\\text{hd;p})}$. Its \nrelatively high value, see Fig.\\ \\ref{fig_sw}b, puts the presumption\nthat such defects cause the peak at $2U$ on a quantitative basis.\n\n\\subsection{Zero Temperature}\n\nBut what generates such defects? At zero or at very low temperature,\nthe inhomogeneity of the parabolic trap can imply the existence\nof plateaus of various occupations $\\langle n\\rangle \\in \\{0,1,2,\\ldots\\}$\ndepending on the total filling \\cite{batro02,kolla04,batro05}.\nIn the transition region from one integer value of $\\langle n\\rangle$\nto the next, defects occur which lead to excitations at $2U$.\n\nYet it is unlikely that this mechanism explains the experimental\nfinding since the transition region is fairly short at low values of\n$T\/U$ and $t\/U$, i.e., the plateaus prevail. The high energy peak at $2U$, \nhowever, has only a factor of $2$ to $5$ less weight than the weight \nin the low energy peak at $U$ \\cite{stofe04}. So we conclude that the \ninhomogeneity of the traps alone cannot be sufficient to\naccount for the experimental findings.\n\n\\subsection{Finite Temperature}\nAt higher temperatures, thermal fluctuations are a likely candidate\nfor the origin of the defects. Hence we estimate their effect in the following\nway.
Thermally induced triply occupied $d$ states are\nneglected because they are very rare due to their high energy of\n$\\omega^\\text{d}(k)-2\\mu\\approx 2U$ above the vacuum $|\\text{ref}\\rangle$.\nWe focus on the particles $p$ and the holes $h$.\nThe average occupation of these states is estimated by a previously introduced\napproximate hardcore boson statistics \\cite{troye94}\n\\begin{subequations}\n\\label{eq:statistik}\n\\begin{eqnarray}\n \\langle n^{\\sigma}_k \\rangle &=& \n \\exp\\left({-\\beta( \\omega^{\\sigma}(k) -\\mu^{\\sigma} )}\\right)\n \\langle n^{\\rm vac} \\rangle\n\\\\\n \\langle n^{\\rm vac} \\rangle &=& \\left(1+z^{\\rm p}(\\beta) + z^{\\rm\n h}(\\beta)\\right)^{-1}\\ ,\n\\end{eqnarray}\n\\end{subequations}\nwhere \n$z^\\sigma = (2\\pi)^{-1}\\int_0^{2\\pi}dk e^{-\\beta(\\omega^{\\sigma}(k)\n-\\mu^{\\sigma})}$, $\\mu=\\mu^\\text{p}=-\\mu^\\text{h}$. The chemical potential\nchanges sign for the holes because a hole stands for the absence of\none of the original bosons $b$. Equation (\\ref{eq:statistik}) is obtained\nfrom the statistics of bosons without any interaction by correcting globally\nfor the overcounting of states. The fact that the overcounting is \nremedied on an average level implies that (\\ref{eq:statistik}) \nrepresents only an approximate, classical description \\cite{schmi05c}.\nThe chemical potential $\\mu$ is determined self-consistently such that \nas many particles as holes are excited, i.e.,\nthe average number of bosons $b$ per site remains one.
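The self-consistent determination of $\mu$ from Eq.\ (\ref{eq:statistik}) can be made concrete in a few lines. The sketch below is purely illustrative and is not the computation performed in this work: the dispersions are assumed to be sampled on a finite $k$-grid, the $k$-integrals in $z^\sigma$ are replaced by grid averages, and the condition that as many particles as holes are excited ($z^{\rm p}(\mu)=z^{\rm h}(-\mu)$) is solved by bisection; the function name, the bracket for $\mu$, and any inputs are our own assumptions.

```python
import math

def hardcore_boson_occupations(omega_p, omega_h, beta):
    """Evaluate the approximate hardcore-boson statistics of Eq. (eq:statistik).

    omega_p, omega_h: particle/hole dispersions sampled on a k-grid (lists).
    Returns (mu, n_vac, n_p, n_h), with mu fixed self-consistently so that
    as many particles as holes are excited, i.e. z_p(mu) == z_h(-mu).
    """
    def z(omega, mu):
        # z^sigma = (2 pi)^{-1} int dk exp(-beta (omega(k) - mu^sigma)),
        # approximated here by the mean over the sampled k-grid
        return sum(math.exp(-beta * (w - mu)) for w in omega) / len(omega)

    def imbalance(mu):
        # monotonically increasing in mu, so bisection is safe
        return z(omega_p, mu) - z(omega_h, -mu)

    lo, hi = -10.0, 10.0  # assumed bracket for the chemical potential
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if imbalance(lo) * imbalance(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    mu = 0.5 * (lo + hi)

    z_p, z_h = z(omega_p, mu), z(omega_h, -mu)
    n_vac = 1.0 / (1.0 + z_p + z_h)          # vacuum probability
    n_p = [math.exp(-beta * (w - mu)) * n_vac for w in omega_p]
    n_h = [math.exp(-beta * (w + mu)) * n_vac for w in omega_h]
    return mu, n_vac, n_p, n_h
```

With toy dispersions in which the particle branch lies above the hole branch, $\mu$ comes out positive, and the particle and hole numbers balance by construction.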
\n\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=\\columnwidth]{.\/fig7.eps}\n \\end{center}\n \\caption{(color online) Temperature dependence of the relative \n spectral weight $S_2\/S_1$ for various values of $t\/U$.\n {\\it Inset:} \n $S_2\/S_1$ as a function of $t\/U$ for various temperatures.}\n \\label{fig_sw_T}\n\\end{figure}\nNext, we turn to the computation of spectral weights at finite temperatures.\nAt not too high temperatures,\nthe relevant channels are $(\\text{hp};0)$, $(\\text{h2p;p})$, $(\\text{2hp;h})$,\nwhich excite at around $U$, and $(\\text{2h2p};0)$, $(\\text{hd;p})$,\nwhich excite at around $2U$.\nThe spectral weights at finite $T$ are calculated by Fermi's golden rule\nas before. The essential additional ingredient is the probability of\nfinding a particle, or a hole, respectively, to start from, and of finding\nsites which can be excited, i.e., which are not occupied by a particle\nor a hole. This leads us to the equations\n\\begin{subequations}\n\\begin{eqnarray}\nI^{\\text{eff},T}_{(\\bar{n};0)} \n&=& I^{\\rm eff}_{(\\bar{n};0)}\\langle n^{\\rm vac} \\rangle^{n_h+n_p+n_d}\n\\\\\n I^{\\text{eff},T}_{(\\bar{n};\\sigma)} &=& \n\\frac{\\langle n^{\\rm vac} \\rangle^{n_h+n_p+n_d}}{2\\pi}\n\\int_0^{2\\pi} I^{k,{\\rm eff}}_{(\\bar{n};\\sigma)} \\langle n^{\\sigma}_k\\rangle dk\n\\end{eqnarray}\n\\end{subequations}\nwith $\\sigma\\in\\{\\text{p,h}\\}$. The powers of $\\langle n^{\\rm vac} \\rangle$\naccount for the probability that the sites to be excited are not blocked\nby other excitations; the factor\n$\\langle n^{\\sigma}_k\\rangle$ accounts for the probability \nthat the excitation necessary for the particular process\nis present.\nThe momentum dependence in $I^{k,{\\rm eff}}$ stems from the momentum\ndependence of the annihilated particle or hole. It is computed by the\nsum of the moduli squared of the \nFourier transform of the matrix elements $c_{(\\bar{n};\\bar{m})}$\nin the real space coordinate of the annihilated excitation.
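The two finite-$T$ formulas reduce to simple blocking and occupation factors, which the following sketch makes explicit. These helper functions are our own illustration, not part of the actual CUT computation: the $k$-integral is approximated by an average over a sampled $k$-grid, and the numerical inputs stand in for the CUT matrix elements, which we do not reproduce here.

```python
def weight_from_vacuum(I_eff, n_created, n_vac):
    """I^{eff,T}_{(n;0)}: the T=0 weight reduced by the probability
    n_vac**(n_h + n_p + n_d) that the sites to be excited are not blocked."""
    return I_eff * n_vac ** n_created

def weight_from_excitation(I_k_eff, n_sigma_k, n_created, n_vac):
    """I^{eff,T}_(n;sigma): weight of a process consuming a thermally
    occupied particle/hole mode, averaged over the sampled k-grid."""
    assert len(I_k_eff) == len(n_sigma_k)
    # grid-average approximation of (2 pi)^{-1} int dk I(k) <n_k>
    k_avg = sum(i * n for i, n in zip(I_k_eff, n_sigma_k)) / len(I_k_eff)
    return n_vac ** n_created * k_avg
```

The blocking factor alone suppresses the zero-temperature weights as soon as $\langle n^{\rm vac}\rangle$ drops below one, while the thermal channels are additionally weighted by the occupation of the annihilated mode.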
\n\nThe spectral weight in the high energy peak (at $2U$) relative to the weight in\nthe low energy peak (at $U$) is given by the ratio $S_2\/S_1$, where \n$S_2=I^{\\text{eff},T}_{(\\text{2h2p};0)}+I^{\\text{eff},T}_{(\\text{hd;p})}$ and \n$S_1=I^{\\text{eff},T}_{(\\text{hp};0)}+I^{\\text{eff},T}_{(\\text{h2p;p})}+\nI^{\\text{eff},T}_{(\\text{2hp;h})}$. Fig.~\\ref{fig_sw_T}\ndisplays this key quantity as a function of $t\/U$ and $T\/U$ for \n$r_{\\mathcal O}=4$. The difference from the result for $r_{\\mathcal O}=3$ is \nless than $0.005$. \n\nTwo regimes can be distinguished. For low values of\n$T\\lessapprox 0.19U$, the ratio $S_2\/S_1$ increases on increasing $t\/U$\nbecause the particle gap $\\Delta^\\text{p}$ decreases so that\n$\\langle n^\\text{p}_{k\\approx 0}\\rangle$ grows.\nFor higher values of $T\\gtrapprox 0.19U$, the increase of \n$\\langle n^\\text{p}_{k\\approx 0}\\rangle$ \nis overcompensated by the decrease of the weights\n$I^{\\text{eff},T}_{(\\text{2h2p};0)}+I^{\\text{eff},T}_{(\\text{hd;p})}$\nand the increase of the weights $I^{\\text{eff},T}_{(\\text{hp};0)}+\nI^{\\text{eff},T}_{(\\text{h2p;p})}+ I^{\\text{eff},T}_{(\\text{2hp;h})}$, cf.\\\nFig.~\\ref{fig_sw}, so that the relative spectral weight\ndecreases on increasing $t\/U$. Around $T=0.19U$, the ratio is fairly\nindependent of $t\/U$.\n\n\nThe experimental value of $S_2\/S_1$ \\cite{stofe04} is about $0.2-0.5$ \nfor small values of $t\/U$ ($t\\leq 0.03U$). It increases\non approaching the superfluid phase. Hence, our estimate for\n$S_2\/S_1$ implies a \\emph{significant} temperature $T\\approx U\/3$\nin the MI phase, which was not expected.
\nThis is the main result of our analysis of the spectral weights.\n\n\\section{Discussion}\nThe analysis of the spectral weights leads us to the conclusion that\nthe temperature of the Mott-insulating phases must be quite considerable.\nAt first sight, this comes as a surprise because other experiments\non cold atoms imply very low temperatures, see for instance Ref.\\\n\\onlinecite{pared04}. But the seeming contradiction can be resolved.\n\nOne has to consider the entropies involved \\cite{blaki04,rey04}.\nThe entropy per boson $S\/N$ in a three-dimensional harmonic trap\nis given by $\\approx 3.6(1-f_0)$, where $f_0$ is the fraction\nof the Bose-Einstein condensate \\cite{schmi05c}. \nAdiabatic loading into the optical lattice\nkeeps $S\/N$ constant so that we can estimate the temperature of\nthe MI phase from the derivative of the free energy \\cite{troye94}. \nFor $f_0=0.95; 0.9; 0.8$ we obtain significant temperatures\n$T\/U=0.12; 0.17; 0.27$ in satisfactory\nagreement with the analysis of the spectral weights \\cite{schmi05c}.\nThe analogous estimate in the case of the Tonks-Girardeau limit (large $U$ and\naverage filling $n$ below unity, see Ref.\\ \\onlinecite{pared04})\nleads to temperatures of the order of the hopping $t$:\n$T\/t= 0.17; 0.32; 0.61$ (assuming $n=1\/2$) \\cite{schmi05c}.\nThese values are again in good agreement with\nexperiment. \n\nThe physical interpretation is the following.\nOn approaching the Mott insulator by changing the filling to the\ncommensurate value $n\\to 1$, the temperature has to rise because\nthe available phase space decreases. For $n<1$ there is phase space \nwithout occupying any site with two or more bosons $b$ because\none can choose between occupation 0 or 1. But at $n=1$ the state\nwithout any doubly occupied site is unique. Hence no entropy \ncan be generated without inducing doubly occupied and empty sites, i.e., \nexcitations of $p$- and of $h$-type.
This in turn requires that\nthe temperature is of the order of the gap, which agrees well with\nour analysis of the spectral weights.\n\nWe would like to draw the reader's attention to the fact that our result\nof a fairly large temperature ($T\\approx U\/3$) also provides a \npossible explanation for why no response at $\\approx 2U$ was found\nby QMC at low temperatures \\cite{batro05}. \nFurther investigations will certainly be fruitful, e.g.,\nit would be interesting to obtain QMC results for the \nexcitation operator $R$ that we used in our present work.\nThe excitation operator used in Ref.\\ \\onlinecite{batro05}\nvanishes for vanishing momentum so that it is less suited to\ndescribe the experimental Bragg spectroscopy \\cite{stofe04}.\n\n\nIn the attempt to provide quantitative numbers for the temperature\nin MI Bose systems, one must be cautious because the bosonic systems\nare shaken fairly strongly in experiment \\cite{stofe04,kohl05}. \nThough it was ascertained that the experiment was conducted\nin the linear regime, it might be that the systems are heated by\nthe probe procedure so that the temperatures seen in the \nspectroscopic investigations are higher than those seen by other\nexperimental investigations.\n\nIt would be rewarding to clarify the precision of the\n spectroscopic investigations. Then the pronounced $T$ dependence of the\nrelative spectral weight in Fig.\\ \\ref{fig_sw_T} could be used as\na thermometer for Mott-insulating bosons in optical lattices.\n\nIn summary, we have studied spectral properties of bosons in one-dimensional\noptical lattices using particle-conserving continuous\nunitary transformations (CUTs). At $T=0$ and for small $t\/U$, \nspectral weight is only present at energies $\\approx U$. \nRecent experimental peaks at $\\approx 2U$ \\cite{stofe04} can\nbe explained assuming $T\\approx U\/3$.
Our results suggest investigating the\neffects of finite $T$ on bosons in optical lattices much more thoroughly.\n\n\\begin{acknowledgments}\nWe thank T. St\\\"oferle for providing the experimental data.\nFruitful discussions are acknowledged with A. L\\\"auchli, \nT. St\\\"oferle, I. Bloch,\nS. Dusuel, D. Khomskii, and E. M\\\"uller-Hartmann. \nThis work was supported by the DFG via SFB 608 and SP 1073.\n\\end{acknowledgments}\n\n\\bigskip\n\n\\begin{appendix}\n\\section{Explicit expression for the observable}\\label{sec:app}\nThe local observable depending on the flow parameter $l$ reads \n\\begin{eqnarray}\n \\label{eq:bose:obsformel2}\n R(l,\\mathbf r)&=&\\sum_{\\{i, i^\\prime, j, j^\\prime, k,k^\\prime\\}}\nc(l,\\{i, i^\\prime, j, j^\\prime, k,k^\\prime\\})\\times\\nonumber\\\\\n&&\\quad h_{i_1+r}^{\\dagger}\\cdots h_{i_{n_h}+r}^{\\dagger} h_{i^\\prime_1+r}^{\\phantom{\\dagger}} \\cdots h_{i^\\prime_{m_h}+r}^{\\phantom{\\dagger}}\\times\\nonumber\\\\\n&&\\quad p_{j_1+r}^{\\dagger}\\cdots p_{j_{n_p}+r}^{\\dagger} p_{j^\\prime_1+r}^{\\phantom{\\dagger}} \\cdots p_{j^\\prime_{m_p}+r}^{\\phantom{\\dagger}}\\times\\nonumber\\\\\n&&\\quad d_{k_1+r}^{\\dagger}\\cdots d_{k_{n_d}+r}^{\\dagger}\nd_{k^\\prime_1+r}^{\\phantom{\\dagger}} \\cdots\nd_{k^\\prime_{m_d}+r}^{\\phantom{\\dagger}}\\nonumber. \\\\\n\\end{eqnarray}\nThe numbers $(n_h\\; n_p\\; n_d; m_h\\; m_p\\; m_d)$ are defined as the numbers of \noperators involved in a term in Eq.~(\\ref{eq:bose:obsformel2}). The number of\ncreation operators $h^\\dagger$, $p^\\dagger$, and $d^\\dagger$ is \ngiven by $n_h$, $n_p$, and $n_d$, respectively. \nThe number of annihilation operators $h$, $p$, and $d$ \nis given by $m_h$, $m_p$, and $m_d$, respectively. \nA set of these six numbers defines the type\n$(\\bar{n};\\bar{m})$ of a process.
\nThe variables $\\{i, i^\\prime, j, j^\\prime, k,k^\\prime\\}$\nare multi-indices, e.g., $i=\\{i_1,...,i_{n_h}\\}$, which give the positions of\nthe operators. The coefficients $c(l,\\{i, i^\\prime, j, j^\\prime, k,k^\\prime\\})$\nkeep track of the amplitudes of these processes during the flow. Their value\nat $l=\\infty$ defines the effective observable $R^\\text{eff}$. \n\nThe total observable is the sum \n\\begin{equation}\n R(l)= \\sum_{{\\mathbf r}} R(l,{\\mathbf r}).\n\\end{equation}\nNo phase factors occur so that the observable is invariant with respect\nto translations. Hence no momentum transfer takes place and the\nresponse at $k=0$ is the relevant one. \n\\end{appendix}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAdversarial images are perturbed visual stimuli that can fool a high-performing image classifier with carefully chosen noise that is often imperceptible to humans~\\citep{szegedy2013intriguing,goodfellow2014explaining}. These images are synthesized using an optimization procedure that maximizes the wrong output class of a model observer, while minimizing any noticeable differences in the image for a reference observer. Understanding why adversarial images exist has been studied extensively in machine learning as a way to explore gaps in generalization~\\citep{gilmer2018adversarial,yuan2019adversarial,ilyas2019adversarial}, computer vision with applications to real-world robustness~\\citep{dubey2019defense,yin2019fourier,richardson2020bayes}, and recently in vision science to understand similar and divergent visual representations with humans~\\citep{zhou2019humans,feather2019metamers,golan2019controversial,reddy2020biologically,dapello2020simulating}. Thus far there have been gaps in the literature on how natural image distributions and classification tasks impact the robustness of a model to adversarial images.
\n\nThis paper addresses whether training on a specific natural image distribution or task plays a role in the adversarial robustness of a model. \\textit{Natural images} are images that are representative of the real world. MNIST, CIFAR-10, and ImageNet are examples of natural image datasets. The goal is to understand what it means for a model to inherently be more adversarially robust to objects vs scenes or objects vs digits, where the latter is addressed in this paper. The thesis of this paper is that both the natural image distribution and task (independently and jointly) play a role in the adversarial robustness of a model trained on them. \n\nAnswering questions about the role the image distribution and task play in adversarial robustness could be critical for applications where an adversarial image can be detrimental (e.g. self-driving cars~\\citep{lu2017no}, radiology~\\citep{hirano2020vulnerability} and military surveillance~\\citep{ortiz2018defense,deza2019assessment}). These applications are often models of natural image distributions. Understanding if and how the dataset and task play a role in the adversarial robustness of a model could lead to better adversarial defenses for the aforementioned applications and a better understanding of the existence of adversarial images.\n\nThe works most closely related to the thesis of this paper are the following: \\citet{ilyas2019adversarial} found that adversarial vulnerability is not necessarily tied to the training scheme, but rather is a property of the dataset. Similarly, \\citet{ding2019on} finds that semantic-preserving shifts on the image distribution could result in drastically different adversarial robustness even for adversarially trained models.\n\nAll work in this paper studying the role of natural image distribution and classification task in the adversarial robustness of a model is empirical. The experiments presented require careful performance comparisons.
Therefore, we propose an unbiased metric to measure the adversarial robustness of a model for a particular set of images over an interval of perturbation strengths. Using this metric, we compare MNIST and CIFAR-10 models and find that MNIST models are inherently more adversarially robust than CIFAR-10 models. We then create a Fusion dataset (an $\\alpha$-blend of MNIST and CIFAR-10 images) to determine whether the image distribution or task is causing this difference in adversarial robustness and discover that both play a role. Finally, we examine whether pretraining on one dataset (CIFAR-10 or MNIST) and then training on the other results in a more adversarially robust learned representation of the dataset the model is trained on, and find that this impacts robustness in unexpected ways. \n\n\\begin{figure*}[t!]\n \\centering\n \\includegraphics[width=2.0\\columnwidth]{figures\/Figure1_Cartoon_Final.pdf}\n \\caption{(A) After using the same hyperparameters and training scheme (SGD) for both models, MNIST achieves around $99\\%$ accuracy, while CIFAR-10 peaks around $80\\%$ with ResNet50 (both without data-augmentation). In cases like these it may be obvious to say that better-performing models will be more adversarially robust -- but this is not always the case; in some cases it is the opposite when fixing the image distribution~\\citep{zhang2019theoretically}; (B) One solution: example graphs showing the area under the curve of $f(\\epsilon)$ and $g(\\epsilon)$, functions outputting the accuracy of an adversarial attack for a given $\\epsilon$ of two models before (top) and after (bottom) accuracy normalization.
This shows how at $\\epsilon_0$, models go from an unmatched accuracy to a matched upper-bounded score of 1, allowing an unbiased computation of area under the curve.}\n \\label{fig:normalized}\n\\end{figure*}\n\n\n\\section{Proposed Adversarial Robustness Metric}\n\n\nIn order to make comparisons of how robust a model is to adversarial perturbations, a proper metric for adversarial robustness must be defined. \nWe define the adversarial robustness $R$ as a measure of the rate at which the accuracy of a model changes as $\\epsilon$ (adversarial perturbation strength) increases over a particular $\\epsilon$-interval of interest. The faster the accuracy of a model decreases as $\\epsilon$ increases, the lower the adversarial robustness is for that model. We propose an adaptation of area under the curve (AUC) to measure adversarial robustness. A good measure of how much the accuracy changes over an $\\epsilon$-interval is the AUC of a function that outputs the model's accuracy under an adversarial attack of strength $\\epsilon$. This AUC provides a total measure of model performance for an $\\epsilon$-interval. If the accuracy decreases quickly as $\\epsilon$ increases, then the AUC will be smaller. \n\nDespite how intuitive the previous notion may sound, we immediately run into a problem: Some datasets are more discriminable than others independently of model observers, as shown in Figure~\\ref{fig:normalized}(A). This must be taken into account when computing the area under the curve. It could be possible that under unequal initial performances, one model seems more `adversarially robust' than the other purely by virtue of the initial offset in the better performance.
This yields the following expression: \\begin{equation}\n\\label{eq:Normalization}\n R=\\frac{1}{f(\\epsilon_0)(\\epsilon_1-\\epsilon_0)}\\int_{\\epsilon_0}^{\\epsilon_1}f(\\epsilon)d\\epsilon\n\\end{equation}\nwhich can be interpreted as the normalized area under the curve of a function $f(\\epsilon)$ that outputs the accuracy of a model for an adversarial attack of strength $\\epsilon$ over an $\\epsilon$-interval (i.e. $[\\epsilon_0, \\epsilon_1]$). Note that $f(\\epsilon_0)>0$ and $\\epsilon_1 > \\epsilon_0$. Computing $R$ is the same as integrating relative change (shown in supplementary material). Therefore, $R$ is an aggregate measure of relative change in accuracy over an $\\epsilon$-interval. The division by $f(\\epsilon_0)$ normalizes the function because it now represents the change in accuracy with respect to no adversarial perturbations (i.e. it is now a relative change). Further, the accuracy $f(\\epsilon_0)$ can be considered an \\textit{`oracle'} for the adversarial attacks of the model (i.e. the likely optimal or best performance for that $\\epsilon$-interval). The term $\\frac{1}{\\epsilon_1-\\epsilon_0}$ of Eq.~\\ref{eq:Normalization} puts the area under the curve of the normalized accuracy between $(0,1]$. This is so that it is easier to interpret and so that the metric is normalized for different $\\epsilon$-intervals (i.e. the maximum value is not $\\epsilon_1 - \\epsilon_0$, but instead is 1). Note that the metric is valid independently of the adversarial attack method.\n\nIf for a particular model, $R=1$, this implies that $f(\\epsilon)$ is constant over $[\\epsilon_0, \\epsilon_1]$. If for a model, $R\\approx 0$, that means that for all $\\epsilon$ in the interval, the model classifies nearly all the perturbed images of a given set incorrectly.
$R$ can be arbitrarily close to 0.\n\nTo guarantee that $R\\leq 1$, the following constraint must be satisfied: \n\\begin{equation}\n \\int_{\\epsilon_0}^{\\epsilon_1}f(\\epsilon)d\\epsilon \\leq f(\\epsilon_0)(\\epsilon_1 - \\epsilon_0)\n\\end{equation}\nThis is a reasonable constraint to make. One interpretation of $f(\\epsilon_0)(\\epsilon_1 - \\epsilon_0)$ is as the largest admissible AUC for $f(\\epsilon)$. This AUC occurs when $f(\\epsilon) = f(\\epsilon_0)$ for all $\\epsilon\\in[\\epsilon_0, \\epsilon_1]$. In other words, as $\\epsilon$ increases, the classification performance on the adversarial images does not change. An AUC greater than $f(\\epsilon_0)(\\epsilon_1 - \\epsilon_0)$ would imply that the accuracy increases above the starting accuracy (i.e. $f(\\epsilon_0)$). This behavior would contradict what it means to perform an adversarial attack.\n\nTo measure the impact $C$ that adversarial attacks have on a model between two specific $\\epsilon$ points instead of an interval, the following can be used: \\begin{equation}\n\\label{eq:relative}\n C =\\frac{f(\\epsilon)-f(\\epsilon_0)}{f(\\epsilon_0)}\n\\end{equation}\n\nwhere $C$ is the relative change between the performance of a model for two different $\\epsilon$'s of adversarial attacks. Normalizing to compute $R$ by taking the relative change in error with respect to a reference or optimal value $f(\\epsilon_0)$ (i.e. Eq. \\ref{eq:relative}) results in a less biased measure of adversarial robustness than other normalization schemes, such as taking the difference (i.e. $f(\\epsilon) - f(\\epsilon_0)$). This is because the other schemes are unable to properly account for differences in performance of models on a particular dataset or task.
Broadly, we are not interested in how much the performance differs overall, but how much it differs relative to where it started.\n\nThere are two methods to find $f(\\epsilon)$: 1) empirically compute $f$ at multiple values of $\\epsilon$ and estimate the area under the curve using integral approximations, such as the trapezoid method; 2) find the closed form expression of $f(\\circ)$ as one would do for psychometric functions~\\citep{wichmann2001psychometric} and integrate. In this paper, we do the former (compute multiple values of $\\epsilon$ and estimate the integral using the trapezoid method), although this method is extendable to the latter.\n\nPicking $\\epsilon_0$ and $\\epsilon_1$ is an experimental choice. Choosing $\\epsilon_0 = 0$ allows measuring the adversarial robustness starting from no perturbations, yet $\\epsilon_0 > 0$ can also be used. For too high a choice of $\\epsilon_1$, the image can saturate and the performance will likely approach chance. This rebounding effect can be seen in some of the CIFAR-10 curves in our experiments.\n\nThere are certain assumptions for this normalization scheme to hold. For example, in both of our experiments MNIST and CIFAR-10 are equalized to have 10 classes and we assume an independent and identically distributed testing distribution such that chance performance for any model observer is the same at $10\\%$. One could see how the normalization scheme would give a misleading result if one dataset has 2 i.i.d. classes that yield 50\\% chance and another has 10 i.i.d. classes that yield 10\\% chance.
In this case, proportions correct are not comparable and a more principled way of equalizing performance -- likely using $d'$ (a generalized form of Proportion Correct used in Signal Detection Theory) -- would be required~\\citep{green1966signal}.\n\nOverall, this robustness metric can be used to get a sense of whether a model is adversarially robust over a particular $\\epsilon$-interval or to measure how adversarially robust a model is compared to other models over that interval for a particular set of inputs. Note that this metric is not intended to be used to certify the adversarial robustness of an artificial neural network since it is an approximation of the change of accuracy of a model over an $\\epsilon$-interval for, in this paper, a specific set of images. \n\n\n\\begin{figure*}[t!]\n \\centering\n \\includegraphics[width=2.0\\columnwidth]{figures\/experiment_figure.pdf}\n \\caption{(A) 20 models are trained for each dataset\/task (MNIST, CIFAR-10, and later a Fusion Dataset) and network architecture (LeNet, ResNet50, FullyConnectedNet), using a different set of randomly initialized weights (i.e. 60 models per dataset); (B) The models are then tested on adversarial images generated using FGSM \\cite{goodfellow2014explaining} of various perturbation strengths. Testing produces graphs similar to Figure~\\ref{fig:normalized}(A). Using these results, the adversarial robustness is computed using Eq. \\ref{eq:Normalization}. The average adversarial robustness across each set of models is then compared to determine which model is more adversarially robust.}\n \\label{fig:setup}\n\\end{figure*}\n\n\\section{Experimental Design}\n\\label{sec:Methods}\n\nFigure~\\ref{fig:setup} visualizes the general experimental design, where models are trained on either MNIST or CIFAR-10 images, and later Fusion images.
The architecture, optimization and learning scheme, and initial random weights between each MNIST and CIFAR-10 model are the same, allowing us to draw comparisons between the adversarial robustness of the models after attacking the trained models. \n\n\\subsection{Architectures}\nAll experiments used 3 networks: LeNet \\cite{lenet}, ResNet50 \\cite{resnet}, and a fully connected network (FullyConnectedNet) where we explored adversarial robustness over 20 paired network runs and their learning dynamics. FullyConnectedNet has 1 hidden layer with 7500 hidden units. This number of hidden units was chosen so the number of parameters for the FullyConnectedNet has the same order of magnitude as the number of parameters for ResNet50. FullyConnectedNet has only 1 hidden layer so that the network is not biased to approximate a hierarchical function the way a convolutional neural network is (See~\\cite{mhaskar2016deep,poggio2017and} and recently \\cite{neyshabur2020towards,deza2020hierarchically}).\n\n\\subsection{Datasets}\nThe datasets used were MNIST, CIFAR-10, and a Fusion Dataset. To use the exact same architectures with the datasets, MNIST was upscaled to $32\\times32$ and converted to 3 channels to match the dimensions of CIFAR-10 (i.e. $32\\times 32 \\times 3$). MNIST was changed instead of CIFAR-10 because the low image complexity of MNIST images -- mainly their low spatial frequency structure, which lends itself to upscaling -- makes it less likely that this modification changes the accuracy of models trained on that dataset. Preliminary results showed that there is a difference (insignificant in comparison to the other differences in results in this paper) in the adversarial robustness of models trained on the scaled and 3 color channel version and the regular version, with the scaled and 3 color channel version being less robust.
Whether the changes to MNIST entirely caused the difference was not determined due to the differences between architectures that were used for each version. No other changes to the datasets were made (such as color normalization, which is typically used for CIFAR-10) in order to preserve the natural image distribution.\n\nThe Fusion dataset that is used in the experiments is not a natural image distribution. It was created with the purpose of better understanding the inherent adversarial robustness properties of models trained on natural image distributions. Each fusion image in the dataset is generated with the following $\\alpha$-blending procedure: \n\\begin{equation}\n\\label{eq:fusion}\n F = 0.5M + 0.5C,\n\\end{equation} where $F$ is a new fusion image, $M$ is an MNIST image modified to $32\\times32\\times3$ (by upscaling and increasing the number of color channels), and $C$ is a CIFAR-10 image. Example fusion images can be found in Figure~\\ref{fig:fusion}. This dataset is similar to \\textit{Texture~shiftMNIST} from~\\citet{jacobsen2018excessive}.\n\nThe Fusion dataset was created online, at each mini-batch, during training and testing using Eq.~\\ref{eq:fusion}. The fusion image training set was constructed using the MNIST and CIFAR-10 training sets and the fusion image test set was constructed using the MNIST and CIFAR-10 test sets. During training, the MNIST and CIFAR-10 datasets are shuffled at the start of every epoch. Therefore, it is likely that no fusion images are shown to the model more than once. This was done to ensure that the model cannot learn any correlation between any CIFAR-10 object and any MNIST digit, as well as to improve generalization of the model. Additionally, it is important to note that no two models were trained on the exact same set of fusion images, but were evaluated on the same test images.
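The $\alpha$-blending of Eq.~\ref{eq:fusion} can be sketched as follows (a minimal, dependency-free illustration; the helper names and the nearest-neighbour upscaling are assumptions, not taken from the authors' pipeline):

```python
# Hedged sketch of the alpha-blending F = 0.5*M + 0.5*C. Nested lists stand in
# for image arrays of shape (H, W, 3); all names here are illustrative.

def upscale_and_tile(mnist_img, size=32):
    # Nearest-neighbour upscale of a 28x28 grayscale image to size x size,
    # then replication across 3 color channels (28x28x1 -> 32x32x3).
    h = len(mnist_img)
    scaled = [[mnist_img[i * h // size][j * h // size] for j in range(size)]
              for i in range(size)]
    return [[[px] * 3 for px in row] for row in scaled]

def fuse(mnist_img, cifar_img):
    # Per-pixel alpha blend of the modified MNIST image M with a CIFAR-10 image C.
    m = upscale_and_tile(mnist_img)
    return [[[0.5 * m[i][j][c] + 0.5 * cifar_img[i][j][c] for c in range(3)]
             for j in range(32)] for i in range(32)]

mnist = [[1.0] * 28 for _ in range(28)]              # toy all-ones digit image
cifar = [[[0.0, 0.5, 1.0]] * 32 for _ in range(32)]  # toy CIFAR-10 image
fused = fuse(mnist, cifar)
print(fused[0][0])  # [0.5, 0.75, 1.0]
```

In an online setting, `fuse` would be applied to freshly shuffled MNIST/CIFAR-10 pairs in each mini-batch, matching the description above.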
Since we train 20 random models, this should average out any possible noise to a certain degree; strictly speaking, the images were different but the statistics were approximately matched.\n\n \\begin{figure}[!t]\n \\centering\n \\includegraphics[width=1.0\\columnwidth]{figures\/Figure_Fusion_Dataset_Only.pdf}\n \\caption{The Fusion dataset was created to tease apart the causes and effects of the inherent adversarial robustness of models trained on natural image distributions. Here, we show sample images from the Fusion dataset consisting of alpha-blended MNIST + CIFAR-10 stimuli.}\n \\label{fig:fusion}\n\\end{figure}\n \n\\subsection{Hyperparameters, Optimization Scheme, and Initialization}\nIt is important to note that all hyperparameters are held constant, including the $\\epsilon$-interval. The only difference between the models using a certain architecture is the dataset\/task they are trained and tested on (just the task in the case of the Fusion dataset). In the experiments presented, the independent variables are the dataset and task, while the dependent variable being measured is the adversarial robustness of the model. Since all other variables are held fixed, if the adversarial robustness of the models trained on the different datasets\/tasks is different, then this change is due to the dataset\/task itself (i.e. the image distribution and classification task). If the $\\epsilon$-interval used to attack the two models were different, we could not directly conclude that any differences are due to the image distribution and task because the difference could also be due to the differences in the strengths of the adversarial attacks on each model. Experiments using the Fusion dataset are presented in this paper to investigate which of the independent variables (i.e.
whether image distribution or task) is playing a role in the differences in adversarial robustness.\n\nThe loss function used for all models was cross-entropy loss and the optimizer used was stochastic gradient descent (SGD) with weight decay $5\\times10^{-4}$, momentum $0.9$, and with an initial learning rate 0.01 for the FullyConnectedNet and LeNet models and an initial learning rate 0.1 for the ResNet50 models. The learning rate was divided by 10 at 50\\% of the training. The FullyConnectedNet and LeNet models were trained to 300 epochs and the ResNet50 models were trained to 125 epochs. ResNet50 models required fewer epochs during training because those models reached high levels of performance sooner than the other architectures. A batch size of 125 was used. The batch size was 125 since this is the closest number to a more typical batch size of 128 that divides both the number of CIFAR-10 images and MNIST images. This was needed to ensure that the batches align properly when creating the fusion images. These hyperparameters and optimization scheme were chosen since they resulted in the best performance of those tested in preliminary experiments. \n\nFor all experiments, each model was trained 20 times with matched initial random weights across different datasets. For example in the case of LeNet, 20 different LeNet models all with different initial random weights:~$\\{w_1,w_2,...,w_{20}\\}$ were used to train for CIFAR-10 in our first experiment, and these same initial random weights were used to train for MNIST. This removed the variance induced by a particular initialization (\\textit{e.g.} a lucky\/unlucky noise seed) that could bias the comparisons by arriving at a better solution via SGD.
This procedure was possible because our MNIST dataset was resized to a 3-channeled version with a new size of $32\\times32\\times3$ instead of $28\\times28\\times1$ (original MNIST).\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=2.0\\columnwidth]{figures\/Figure_Regular_Fusion_Tree.pdf}\n \\caption{MNIST-trained networks (bottom left) across all architectures show greater adversarial robustness after accuracy normalization than CIFAR-10 trained networks (top left for each architecture). Notice too that ResNet50 appears to be more adversarially robust than the other network architectures (LeNet and FullyConnectedNet) independent of learning dynamics. Graphs of the normalized accuracy of the Fusion dataset on the object recognition task (top right) and digit recognition task (bottom right) for LeNet, ResNet50, and FullyConnectedNet. Generally, models trained on the digit task were more adversarially robust than those trained on the object task, showing the role that task plays in the adversarial robustness of a model. Additionally, these models were generally less adversarially robust than their MNIST and CIFAR-10 model counterparts. In combination, these results imply that both task and image distribution play distinct roles in the adversarial robustness of a model. The gold lines represent chance performance in the graphs.}\n \\label{fig:MNISTvsCIFAR_Original_and_Fusion}\n\\end{figure*}\n\n\\subsection{Adversarial Attacks}\n\nThe method used for generating adversarial images in the experiments presented in this paper is the Fast Gradient Sign Method (FGSM) presented in \\citet{goodfellow2014explaining}. The focus of the attacks was to create images that cause the model to misclassify in general, rather than misclassifying an image to a particular class.
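The FGSM update $x_{\mathrm{adv}} = x + \epsilon \cdot \mathrm{sign}\brk{\nabla_x L(x, y)}$ from \citet{goodfellow2014explaining} can be illustrated on a toy model where the input gradient is analytic (a hedged sketch; the linear logistic model stands in for the trained networks used in the paper, and all names are illustrative):

```python
import math

# Hedged sketch of the Fast Gradient Sign Method on a toy logistic model.
# A linear model keeps the input gradient analytic; the paper instead attacks
# trained networks via backpropagation.

def fgsm(x, y, w, eps):
    # Loss L = -log(sigmoid(y * w.x)) for label y in {-1, +1};
    # input gradient: dL/dx = -y * sigmoid(-y * w.x) * w.
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    coeff = -y / (1.0 + math.exp(margin))       # equals -y * sigmoid(-margin)
    grad = [coeff * wi for wi in w]
    # FGSM step: move each pixel by eps in the direction of the loss-gradient sign.
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w = [2.0, -1.0, 0.5]
x = [0.3, 0.6, 0.1]          # "clean" input, correctly classified as y = +1
x_adv = fgsm(x, +1, w, eps=0.3)
print(x_adv)                  # each coordinate shifted by +-0.3
```

For this toy example the perturbed input lands on the wrong side of the decision boundary, i.e. the attack flips the predicted label.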
FGSM was chosen over other optimization based attacks such as Projected Gradient Descent (PGD) \\cite{madry2019deep} based on preliminary results, as FGSM was sufficient to successfully adversarially attack the model. FGSM also has a lower computational cost than PGD, allowing us to run more experiments and train more models. Adversarial training or other data-augmentation schemes that may bias the outcome were not performed. Importantly, given that an adversarial defense mechanism is not being proposed or used, strong adversarial attack methods, such as PGD, are not necessary in this first work -- contrary to the advice from \\citet{carlini2019evaluating}, but justified in this setting.\n\nThe $\\epsilon$-interval used in the experiments is $[0,0.3]$ (i.e. $\\epsilon_0 = 0, \\epsilon_1 = 0.3$). The upper bound of $0.3$ was chosen because adversarial images at that magnitude are difficult for many undefended classifiers to classify correctly. The trained models were adversarially attacked with $\\epsilon\\in\\{0, 0.0125, 0.025, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3\\}$ to approximate $f(\\epsilon)$. Models using the LeNet and FullyConnectedNet architectures were adversarially attacked at 1, 10, 25, 50, 150, and 300 epochs. Models using the ResNet50 architecture were adversarially attacked at 1, 10, 25, 50, 100, and 125 epochs. Different epochs were adversarially attacked to determine whether the results differed at different stages of learning.\n\n\\section{Experimental Results}\n\nThe following experiments provide a glimpse into the role of classification task and image distribution in the adversarial robustness of models.\n\nAll differences in robustness that are mentioned are statistically significant using a Welch's t-test with significance level $\\alpha=0.05$.
This test was used because the models are unpaired and do not have equal variance since the models are trained on different datasets.\n\n\\subsection{Comparing MNIST vs CIFAR-10 Adversarial Robustness}\nThis experiment investigates whether MNIST models are inherently more adversarially robust than CIFAR-10 models. This was investigated by comparing the adversarial robustness of the CIFAR-10 models and the MNIST models for the three architectures. Figure~\\ref{fig:MNISTvsCIFAR_Original_and_Fusion} (top left for each architecture) shows normalized accuracy graphs for the CIFAR-10 trained models and Figure~\\ref{fig:MNISTvsCIFAR_Original_and_Fusion} (bottom left) shows graphs of normalized accuracy for MNIST trained models. For both LeNet and FullyConnectedNet, the MNIST models were more adversarially robust than CIFAR-10 models, for each epoch we examined. The same pattern of results held for ResNet50 models except for the first epoch where there was no difference between the MNIST and CIFAR-10 models.\n\n\\underline{Result 1:} For the three network architectures tested (that all vary in approximation power and architectural constraints), MNIST trained models are inherently more adversarially robust than CIFAR-10 models. This implies that the task and\/or image distribution play a role in the adversarial robustness of the model.\n\n\\subsection{Comparing Object vs Digit Classification in the Fusion (MNIST + CIFAR-10) dataset}\n\nThe previous results suggested that after taking into account different measures of accuracy normalization, MNIST (both dataset and digit recognition task) models are intrinsically more adversarially robust than CIFAR-10 models.
This implies that it is harder to fool an MNIST model than a CIFAR-10 model, likely in part due to the fact that number digits are highly shape-selective and show less perceptual variance than objects.\n\nNaturally, the next question that arises is whether the task itself is somehow making each perceptual system less adversarially robust. To test this hypothesis the Fusion dataset was used. Models were trained to perform either digit recognition or object recognition on these fusion images -- thus we have approximately fixed the image distribution but varied the approximation task~\\citep{deza2020hierarchically}. They are only approximately matched because no model is trained on the exact same images; however, the image distribution is approximately the same on average given the random sampling procedure. The goal with this new hybrid dataset is to re-run the same set of previous experiments, testing adversarial robustness for both the digit recognition task and the object recognition task. This probes the role of the type of classification task while fixing the dataset, so that we test how adversarial robustness varies when all other variables remain constant.\n\nObservation: When examining the first epoch for the fusion trained models, the standard deviation of the curves in Figure~\\ref{fig:fusion}(B) is generally high. This is likely due to the design choice of never showing the same fusion image twice. This does not occur in later stages of training.\n\n\\underline{Result 2a:} Task plays a critical role in the adversarial robustness of a model. Figure~\\ref{fig:MNISTvsCIFAR_Original_and_Fusion} contains the normalized curves of the results for the digit and object recognition tasks on the fusion dataset for each of the architectures. The models were evaluated on fusion images constructed from the MNIST and CIFAR-10 test sets.
The FullyConnectedNet fusion image models were more adversarially robust on the digit recognition task than on the object recognition task for all epochs examined, and the ResNet50 and LeNet fusion image models for all epochs examined excluding the first epoch. This suggests that even if the image distribution is approximately equalized at training, the representation learned varies given the task, and impacts adversarial robustness differently.\n\n\\underline{Result 2b:} Image distribution also plays a role in the adversarial robustness of a model. Comparing the three architectures trained on the Fusion Dataset vs their regular image-distribution trained models shows that increasing the image complexity (by adding a conflicting image with the hope of increasing invariance) in fact decreases adversarial robustness when compared to regularly trained networks. Comparing fusion image models trained on the digit task and MNIST models: for the FullyConnectedNet and LeNet architectures, the MNIST models were more robust. The same holds for the ResNet50 MNIST models except at the first epoch, where there was no difference. CIFAR-10 models using the FullyConnectedNet architecture were more adversarially robust than the fusion image models trained on the object recognition task for all epochs tested. The same was true for the LeNet and ResNet50 architectures, except that there were no differences in adversarial robustness between the CIFAR-10 models and the object-task fusion models at 1 and 50 epochs. \n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=2.0\\columnwidth]{figures\/Figure_Regular_PreTraining_Tree.pdf}\n \\caption{\\textit{Visual Hysteresis}: \n FullyConnectedNet and LeNet networks seem to carry over the learned representation and adversarial vulnerability from the pretrained system.
However, only LeNet experiences a clear visual hysteresis where pretraining on CIFAR-10 for MNIST is worse (less adversarially robust) than only training on MNIST, yet pretraining on MNIST for CIFAR-10 is better (more adversarially robust) than only training on CIFAR-10 (See supplementary material). The gold lines represent chance performance in the graphs.}\n \\label{fig:MNISTvsCIFAR_Original_and_Pretraining}\n\\end{figure*}\n\n\\subsection{Impact of Pretraining on Out-Of-Distribution (o.o.d) image datasets}\n\nThis experiment investigates whether pretraining on one dataset (CIFAR-10 or MNIST), then training on the other, results in a more adversarially robust learned representation of the dataset the model is trained on. \n\nThe pretraining procedure was done by using the existing fully trained CIFAR-10 or MNIST FullyConnectedNet, LeNet, and ResNet50 models as bases and then training\/fine-tuning them using the same training scheme but with MNIST or CIFAR-10 respectively. These models were then tested using the test sets of the datasets the models were fine-tuned on.\n\nFor the FullyConnectedNet, the MNIST models were more adversarially robust than the MNIST models pretrained on CIFAR-10 during early stages of learning, but the pretrained models were more robust when examined at 150 and 300 epochs of fine-tuning. The MNIST LeNet models were more adversarially robust for all stages of learning than the pretrained models. The pretrained ResNet50 models had no differences in robustness compared to the MNIST ResNet50 models, except for the first epoch where the pretrained models were more robust. This result is unexpected as this does not occur for the other architectures. These results would seem to suggest that architecture plays a role in the adversarial robustness of the learned representation, contingent on the given dataset\/task and potentially its compositional nature.
\n\nPretraining on CIFAR-10 and then training on MNIST generally does not lead to more adversarially robust models. Next we investigate whether pretraining on MNIST and then training on CIFAR-10 has this same effect. We find that this is not always the case. Pretraining on MNIST then training on CIFAR-10 led to marginal improvements in adversarial robustness for LeNet, except for the first epoch (Figure~\\ref{fig:MNISTvsCIFAR_Original_and_Pretraining}). For ResNet50, pretraining resulted in less adversarially robust models at the start and end of training (1 and 125 epochs); otherwise, there was no difference compared to not pretraining. The FullyConnectedNet pretrained models were more adversarially robust in earlier stages of learning, but were less robust in later stages. Tables of the robustness metrics for the CIFAR-10 models pretrained on MNIST (as well as for other experiments) can be found in the supplementary material. These findings require further investigation. \n\nFor the ResNet50, LeNet, and FullyConnectedNet architectures, the models pretrained on CIFAR-10 then trained on MNIST were statistically significantly more adversarially robust than models pretrained on MNIST then trained on CIFAR-10 for all epochs examined.\n\n\\underline{Result 3}: Pretraining on CIFAR-10 followed by training on MNIST does not generally produce a more adversarially robust model than training on MNIST alone, with any of the tested architectures. This is counterintuitive given that humans typically base their learned representations on objects rather than figures~\\citep{janini2019shape}. On the other hand, pretraining on MNIST, then training on CIFAR-10 only aided LeNet; for FullyConnectedNet it helped in earlier stages of learning, while decreasing robustness later. Generally, however, ResNet50 models were not affected in terms of carried-over robustness at any intermediate stages of learning.
Investigating the origins of this visual hysteresis (an asymmetry in learned representation visible through robustness given the pretraining scheme)~\\citep{sadr2004object} and how it may relate to shape\/texture bias~\\citep{geirhos2018imagenet,hermann2019exploring}, spatial frequency sensitivity~\\citep{dapello2020simulating,deza2020emergent}, or common perturbations~\\citep{hendrycks2018benchmarking} is a subject of ongoing work.\n\n\\section{Discussion}\nThis work verified that both the image distribution and task (independently or jointly) can impact the adversarial robustness of a model under FGSM. The next step is to investigate why, and what specific factors of the image statistics and task play a role. It is likely that MNIST trained networks are intrinsically more adversarially robust than CIFAR-10 trained networks in part due to the lower-dimensional subspace in which they live given their image structure~\\citep{henaff2014local} compared to CIFAR-10 (\\textit{i.e.} MNIST has fewer non-zero singular values than CIFAR-10, allowing for greater compression for a fixed number of principal components). Additionally, in future work we want to know whether these observations hold with other optimization based attacks and gradient-free attacks, such as PGD \\cite{madry2019deep} and NES \\cite{ilyas2018blackbox} respectively. Given that FGSM is not considered a strong attack, would a stronger attack exacerbate these results? Based on the noticeable differences in adversarial robustness between the models tested using only FGSM, this is a promising direction. \n\nIndeed, this paper has only scratched the surface of the role of natural image distribution and task in the adversarial robustness of a model by comparing two well known candidate datasets over their learning dynamics: MNIST and CIFAR-10.
Continuing this line of work by exploring the role of the image distribution on adversarial robustness for other natural image distributions such as textures or scenes is another promising next step. Finally, future experiments should continue to investigate the effect of the learning objective on the learned representation induced from the image distribution. We have already seen how the task affects the adversarial robustness of a model even when image statistics are approximately matched under a supervised training paradigm. With the advent of self-supervised~\\citep{konkle2020instance,geirhos2020on,purushwalkam2020demystifying} and unsupervised~\\citep{zhuang2020unsupervised} objectives that may be predictive of human visual coding, it may be relevant to investigate the changes in adversarial robustness for the current (objects, digits) and new (texture, scenes) image distributions with the proposed adversarial robustness metric for these new learning objectives. \n\n\\section{Acknowledgements}\nThe authors thank Dr. Christian Bueno and Dr. Susan Epstein for their helpful feedback on this paper. This work was supported in part by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF \u2013 1231216.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nFeynman integrals \\brk{FIs} are a cornerstone of perturbative\nQuantum Field Theory \\brk{pQFT}, at least in its contemporary formulation.\nWithin that branch of research one fruitful discovery was the\nintegration-by-parts identities \\brk{IBPs}~\\cite{Tkachov:1981wb,\nChetyrkin:1981qh}, i.e.
linear identities among FIs.\nFor a given pQFT problem \\brk{such as, for example, scattering amplitudes}\nIBPs allow one to reduce an infinite set of contributing FIs to a linear\ncombination of a finite number of basic objects known as the master integrals\n\\brk{MIs}.\n\nRecently a novel framework~\\cite{\n Mastrolia:2018uzb,\n Frellesvig:2019kgj,\n Mizera:2019gea,\n Frellesvig:2019uqt,\n Frellesvig:2020qot%\n} based on the twisted cohomology theory \\cite{\n matsumoto:1994,\n cho:1995,\n matsumoto:1998,\n ohara:1998,\n OST:2003,\n \n goto:2013,\n \n goto:2015,\n goto:2015b,\n goto:2015c,\n Mizera:2017rqa,\n matsubaraheo:2019%\n} was proposed to describe relations among FIs.\nIt was shown that \\brk{for fixed topology of the corresponding Feynman graphs}\nFIs form a finite dimensional vector space endowed with a scalar product called\nthe intersection number. Among other applications, this structure then helped\nto derive novel algorithms for the direct projection of FIs onto the basis of MIs.\n\nIn \\secref{sec:cohom} we review the basics of twisted cohomology theory and\nthe computation of intersection numbers.\nThen in \\secref{sec:second} we present another algorithm\\footnote{\n This section is based on the joint work with\n Federico Gasparotto,\n Manoj K. Mandal,\n Pierpaolo Mastrolia,\n Saiei J. Matsubara-Heo,\n Henrik J.
Munch,\n Nobuki Takayama.\n}~\\cite{Chestnov:2022alh}\nfor the reduction of FIs exploiting the connection with the\nGel'fand-Kapranov-Zelevinsky \\brk{GKZ} hypergeometric systems and the secondary\nequation~\\cite{matsubaraheo:2019}.\n\n\\section{Twisted cohomology}\n\\label{sec:cohom}\nHere we review some aspects of the twisted cohomology and intersection theory,\nsee also \\cite{Mastrolia:2022tww, Mizera:2019ose, Frellesvig:2021vem,\nMandal:2022vok} and \\cite{Weinzierl:2022eaz}.\nOur central subject of study is going to be generalized hypergeometric\nintegrals of the form:\n\\begin{align}\n \\mathcal{I} = \\int_\\mathrm{\\Gamma} u\\brk{x} \\, \\varphi\\brk{x}\n \\ ,\n \n \n \\label{eq:FI-def}\n\\end{align}\nwhere $u\\brk{x} = \\prod_i \\mathcal{B}_i^{\\gamma_i}$ is a multivalued function,\n$\\mathrm{\\Gamma}$ is an $n$-dimensional integration contour such that $\\prod_i\n\\mathcal{B}_i\\brk{\\partial \\mathrm{\\Gamma}} = 0$, and $\\varphi \\equiv \\hat{\\varphi} \\> \\mathrm{d} x_1\n\\wedge \\ldots \\wedge \\mathrm{d} x_n$ is a holomorphic $n$-form \\brk{meaning that the\ncoefficient $\\hat{\\varphi}\\brk{x}$ is a rational function}.\n\nIntegrals such as~\\eqref{eq:FI-def} often appear as parametric representations of FIs. For\nexample, in the Baikov representation of a FI with $\\ell$ loops and $E$\nexternal legs in $d$ dimensions, the multivalued function $u\\brk{x}$\ncontains a single factor $u\\brk{x} = \\mathcal{B}\\brk{x}^\\gamma$, where $\\mathcal{B}$ is\nthe Baikov polynomial~\\cite{Baikov:1996iu}, and the exponent\nis $\\gamma = \\brk{d - \\ell - E - 1} \/ 2$~.
Hence in the following we will\nrefer to the integrals~\\eqref{eq:FI-def} as generalized Feynman Integrals\n\\brk{GFI}.\n\nFIs satisfy the linear equivalence relation:\n\\begin{align}\n \\mathcal{I} =\n \\int_\\mathrm{\\Gamma} u \\> \\varphi \\equiv \\int_\\mathrm{\\Gamma} u \\> \\brk{\\varphi\n + \\nabla_\\omega \\xi}\\ ,\n \\label{eq:FI-equiv}\n\\end{align}\nwhere we introduced the covariant derivative: $\\nabla_\\omega := \\mathrm{d} +\n\\omega \\wedge$ and the $1$-form\n\\begin{align}\n \\omega := \\mathrm{d} \\log{u}\\ ,\n \\label{eq:omega-def}\n\\end{align}\nwhich will be very useful in the following.\nThe equivalence relation~\\eqref{eq:FI-equiv} follows from the Stokes theorem:\n$\n 0\n = \\int_{\\partial \\mathrm{\\Gamma}} u \\> \\xi\n = \\int_\\mathrm{\\Gamma} \\mathrm{d} \\brk{u \\> \\xi}\n = \\int_\\mathrm{\\Gamma} u \\> \\nabla_\\omega \\xi\n \\ ,\n$\n where we used $\\nabla_\\omega \\xi = \\mathrm{d} \\xi + \\omega \\wedge \\xi$ together\n with $\\mathrm{d} u = u \\> \\omega$.\n\nFixing the contour of integration $\\mathrm{\\Gamma}$ allows us to interpret\nrelation~\\eqref{eq:FI-equiv} as an equivalence of integrands.\nNamely, we collect $n$-forms $\\varphi$ into equivalence classes\n$\\bra{\\varphi}: \\varphi \\sim \\varphi + \\nabla_\\omega \\xi$ generated by adding\ncovariant derivatives of $\\brk{n - 1}$-forms.\nTheir totality forms the twisted cohomology group:\n\\begin{align}\n \\bra{\\varphi} \\in \\mathbb{H}^n_\\omega\n :=\n \\bigbrc{\n \\text{$n$-forms $\\varphi$} \\> | \\>\n \\nabla_\\omega \\varphi = 0\n }\n \\Big\/\n \\bigbrc{\n \\nabla_\\omega \\xi\n }\\ ,\n \\label{eq:cohom-def}\n\\end{align}\nwhich can be thought of as the space of linearly independent FIs \\brk{of a\ngiven topology}.\n\nAnalogously we can introduce the dual integrals $\\mathcal{I}^\\vee$, whose definition\nmimics~\\eqref{eq:FI-def} up to $u \\mapsto u^{-1}$ and $\\nabla_\\omega\n\\mapsto \\nabla_{-\\omega}$\\ .
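In one variable the key identity behind the equivalence relation, $u \> \nabla_\omega \xi = \mathrm{d}\brk{u \> \xi}$, can be verified numerically (a hedged sketch; the choices $u = \brk{x(1-x)}^\gamma$ and $\xi = x^2$ are illustrative, not tied to a particular FI):

```python
import math

# Hedged numeric check of u * (xi' + omega * xi) = (u * xi)', i.e. that adding
# nabla_omega xi to the integrand only changes it by a total derivative.
# Toy choices: u = (x*(1-x))**gamma and xi = x**2.

gamma = 0.25

def u(x):
    return (x * (1 - x)) ** gamma

def omega(x):
    # omega = dlog(u) = gamma * (1/x - 1/(1-x))
    return gamma * (1.0 / x - 1.0 / (1 - x))

def xi(x):
    return x * x

def covariant(x):
    # u * nabla_omega xi = u * (xi' + omega * xi), with xi' = 2x.
    return u(x) * (2 * x + omega(x) * xi(x))

def d_u_xi(x, h=1e-6):
    # Central finite difference of the product u(x) * xi(x).
    return (u(x + h) * xi(x + h) - u(x - h) * xi(x - h)) / (2 * h)

x0 = 0.3
print(covariant(x0), d_u_xi(x0))  # the two expressions agree
```

Since $u$ vanishes on $\partial\mathrm{\Gamma} = \brc{0, 1}$ here, the total derivative integrates to zero over $\mathrm{\Gamma} = [0, 1]$, which is exactly the Stokes argument above.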
Elements of the dual twisted\ncohomology group will be denoted by kets $\\ket{\\psi}$.\n\n\n\n\\subsection{Counting the number of Master integrals}\nThe framework of twisted cohomology unites several seemingly independent\nmethods for the computation of the number of MIs $r$:\n\\begin{enumerate}\n \\item Number of unreduced integrals produced by the Laporta algorithm \\cite{Laporta:2000dsw}.\n \\item Number of critical points, i.e. solutions of $\\dd \\log u\\brk{x} = 0$\\ \\cite{Baikov:2005nv, Lee:2013hzt}.\n \\item Number of independent integration contours $\\mathrm{\\Gamma}_\\lambda$ \\cite{Bosma:2017ens, Primo:2017ipr}.\n \\item Number of independent $n$-forms, i.e. $\\mathop{\\mathrm{dim}}\\bigbrk{\\mathbb{H}^n_{\\pm \\omega}}$\n \\cite{Mastrolia:2018uzb, Frellesvig:2020qot}.\n \\item Holonomic rank of the GKZ system \\brk{volumes of certain polytopes}\n \\cite{Chestnov:2022alh, Henrik:2022}.\n\\end{enumerate}\n\n\\subsection{Scalar product between Feynman integrals}\nThe twisted cohomology theory allows us to view the set of FIs\n\\brk{of a given topology} as a finite dimensional vector space.\nA set of MIs $\\bra{e_\\lambda}$ for $\\lambda \\in \\brc{1, \\ldots, r}$\nthen forms a basis in that space.\n\nThe dual FIs indeed form a vector space dual to that of the FIs due to the\nexistence of a scalar product:\n\\begin{align}\n \\vev{\\varphi | \\psi}\n = \\frac{1}{\\brk{2 \\pi \\mathrm{i}}^n}\n \\int \\iota\\brk{\\varphi} \\wedge \\psi\n \\ ,\n \\label{eq:interx-def}\n\\end{align}\ncalled the intersection number.\nThis scalar product allows us to directly\ndecompose a given integral $\\mathcal{I}$\nin a basis of MIs $\\ensuremath{\\mathcal{J}}_\\lambda := \\int_\\mathrm{\\Gamma} u \\> e_\\lambda$,\nnamely $\\mathcal{I} = \\sum_{\\lambda = 1}^r c_\\lambda \\, \\ensuremath{\\mathcal{J}}_\\lambda$\\ .\nLinear algebra leads us to the master decomposition\nformula~\\cite{Mastrolia:2018uzb, Frellesvig:2020qot}:\n\\begin{align}\n &\\bra{\\varphi} = \\sum_{\\lambda = 1}^r c_\\lambda \\,
\\bra{e_\\lambda}\\ ,\n \\quad\n c_\\lambda = \\sum_{\\mu = 1}^r \\vev{\\varphi | h_\\mu}\n \\bigbrk{C^{-1}}_{\\mu \\lambda}\n \\ ,\n \\\\\n &C_{\\lambda \\mu} := \\vev{e_\\lambda |\n h_\\mu}\n \\ ,\n \\label{eq:cmat-def}\n\\end{align}\nfor any choice of the dual basis $\\ket{h_\\mu}$.\nTherefore the intersection numbers~\\eqref{eq:interx-def} completely determine the\ndecomposition coefficients. Let's see now how they can be computed.\n\n\\subsection{Univariate intersection numbers}\n\\label{ssec:uni}\nIn the $n = 1$ case, intersection numbers~\\eqref{eq:interx-def}\nturn into a sum of residues~\\cite{cho:1995, matsumoto:1998, Frellesvig:2021vem}:\n\\begin{align}\n \\vev{\\varphi | \\psi}\n \n \n \\equiv \\frac{1}{2 \\pi \\mathrm{i}} \\int_X \\lrbrk{\n \\varphi - \\sum_{p \\in \\mathbb{P}_\\omega}\n \\nabla_\\omega \\bigbrk{\\theta_p\\brk{x, \\bar{x}} f_p}\n }\\wedge \\psi\n = \\sum_{p \\in \\mathbb{P}_\\omega} \\res{x = p}\\lrsbrk{f_p \\, \\psi}\\ ,\n\\end{align}\nwhere\n\\begin{itemize}\n \\item Integration goes over $X = \\mathrm{\\Complex P}^1$.\n \n \\item $\\mathbb{P}_\\omega := \\bigbrc{p \\>\\big|\\> \\text{poles of $\\omega$}}$, including\n the $\\infty$\\ .\n \\item Terms with Heaviside $\\theta$-functions regulate the integral with\n the help of a local potential $f_p$, which satisfies\n $\\nabla_\\omega f_p \\equiv \\brk{\\mathrm{d} + \\omega \\wedge} f_p = \\varphi$\n around the pole $p$\\ .\n %\n This differential equation can be solved via an Ansatz: $f_p =\nf_{p, \\, \\mathrm{min}} \\brk{x - p}^{\\mathrm{min}} +\nf_{p, \\, \\mathrm{min} + 1} \\brk{x - p}^{\\mathrm{min} + 1} + \\ldots +\nf_{p, \\, \\mathrm{max}} \\brk{x - p}^{\\mathrm{max}}\n$\\ .\n\\end{itemize}\n\n\\subsection{Multivariate intersection numbers}\nOne strategy for dealing with the intersection numbers of multivariate FIs\nis to apply the univariate procedure recursively one variable at a\ntime~\\cite{ohara:1998, Mizera:2017rqa, Mastrolia:2018uzb, Frellesvig:2019uqt, 
Frellesvig:2020qot}.\nConsider a two-variable problem: given two 2-forms $\varphi\brk{x_1, x_2}$ and $\psi\brk{x_1, x_2}$,\nwe would like to compute $\vev{\varphi | \psi}$ by first integrating out $x_1$ and then $x_2$\ .\nTo do that we pick a basis $\bra{e_\lambda}$ and its dual\n$\ket{h_\mu}$ for the internal $x_1$-integration and project $\varphi$,\n$\psi$ onto them \brk{omitting the summation signs}:\n\begin{alignat}{3}\n \bra{\varphi} &= \bra{e_\lambda} \wedge \bra{\varphi_{\lambda}}\n \ ,\n \quad\n &&\n \bra{\varphi_{\lambda}} = \vev{\varphi | h_\mu}\n \bigbrk{C^{-1}}_{\mu \lambda}\n \ ,\n &&\n \\\n \ket{\psi} &= \ket{h_\mu} \wedge \ket{\psi_{\mu}}\n \ ,\n \quad\n &&\n \ket{\psi_{\mu}} = \bigbrk{C^{-1}}_{\mu \lambda}\n \vev{e_\lambda| \psi}\n \ .\n &&\n \label{eq:psi-proj}\n\end{alignat}\nThe internal $x_1$-integration can be seen as the insertion of the identity operator\n$\n \mathbb{I}_{\mathrm{c}}\n = \ket{h_\mu} \bigbrk{C^{-1}}_{\mu \lambda}\n \bra{e_\lambda}\ ,\n \n$\nwhich consequently allows us to write the remaining integral in $x_2$ as a sum over residues:\n\begin{align}\n \vev{\varphi | \psi} =\n \langle \varphi \underbrace{\n | h_\mu \rangle\n \bigbrk{C^{-1}}_{\mu \lambda}\n \langle e_\lambda |\n }_{\mathbb{I}_{\mathrm{c}}}\n \psi \rangle\n =\n \sum_{p \in \mathbb{P}_P} {\rm Res}_{x_2 = p}\lrsbrk{\n f_{p, \lambda} \, C_{\lambda \mu} \, \psi_{\mu}\n }\n \ .\n \label{eq:interx-res}\n\end{align}\nAs in~\secref{ssec:uni}, this formula requires knowledge of a local vector potential\n$f_{p, \lambda}$ near each pole $x_2 = p$.\nThe potential is fixed by the following system of first-order differential\nequations \brk{omitting the $p$ subscript}:\n\begin{align}\n \partial_{x_2} f_{\lambda} + f_{\mu} \> P_{\mu\n \lambda} = \varphi_\lambda\n \quad \text{near $x_2 = p$}\ .\n\end{align}\nThe differential equation matrix $P$ and its dual 
version $P^\\vee$\nare made out of $x_1$-intersection numbers:\n\\begin{align}\n P_{\\lambda \\nu} :=\n \\vev{\n \\brk{\\partial_{x_2} + \\omega_2} e_\\lambda | h_\\mu\n }\n \\bigbrk{C^{-1}}_{\\mu \\nu}\n \\ ,\\quad\n P^\\vee_{\\mu \\xi} :=\n \\bigbrk{C^{-1}}_{\\mu \\lambda}\n \\vev{\n e_\\lambda |\n \\brk{\\partial_{x_2} - \\omega_2} h_\\xi\n }\n \\ ,\n \\label{eq:pfaff-def}\n\\end{align}\nso they can be computed using the univariate method of~\\secref{ssec:uni}. The\nset $\\mathbb{P}_P$ in eq.~\\eqref{eq:interx-res} is defined as $\\mathbb{P}_P\n:= \\bigbrc{p \\> \\big| \\> \\text{poles of $P$}}$\\ .\n\nIn practice, to compute the residue at, say, $x_2 = 0$ we solve for $\\rho$ the\nfollowing system:\n\\begin{align}\n \\begin{cases}\n \n \\lrsbrk{x_2 \\, \\partial_{x_2} + P\\brk{x_2}} \\vv{f} = \\vv{\\varphi}\n \\\\[5pt]\n \\rho = \\res{x_2 = 0} \\lrsbrk{\n \\vv{f} \\, \\cdot \\, \\vv{\\psi}\n }\n \n \\end{cases}\n \\ ,\n \\label{eq:res-sys}\n\\end{align}\nwhere we rescaled $\\vv{\\varphi}\\brk{x_2} \\mapsto 1 \/ x_2 \\> \\vv{\\varphi}\\brk{x_2}$ and\n$P\\brk{x_2} \\mapsto 1 \/ x_2 \\> P\\brk{x_2}$,\nand canceled the $C$ matrix in the residue~\\eqref{eq:interx-res} against the\n$C^{-1}$ coming from eq.~\\eqref{eq:psi-proj}.\nThe series expansion of the system~\\eqref{eq:res-sys} is build from:\n\\begin{align}\n P\\brk{x_2} = \\sum_{i \\ge 0} x_2^i \\> P_i\n \\ , \\quad\n \\vv{\\varphi} = \\sum_{i \\ge k} x_2^i \\> \\vv{\\varphi}_i\n \\ , \\quad\n \\vv{\\psi} = \\sum_{i \\ge m} x_2^i \\> \\vv{\\psi}_i\n \\ ,\n \n\\end{align}\nfor integer $k, m \\in \\mathbb{Z}$.\nInserting an Ansatz\n$\\vv{f} = \\sum_i x_2^i \\> \\vv{f}_i$\\ ,\nand matching the powers of $x_2$ order by order, we obtain the linear system\n\\brk{here dots $\\gr{0}$ denote zeros}:\n\\begin{align}\n \n \\lrsbrk{\n \\begin{array}{c|ccccc|c}\n -1\n & \\vv{\\psi}_{1} & \\vv{\\psi}_{0} & \\vv{\\psi}_{-1} & \\vv{\\psi}_{-2} & \\vv{\\psi}_{-3}\n \n & \\cellcolor{gr1}\\gr{0}\n \\\\\n \\gr{0}\n & P_0 - 2 & 
\\gr{0} & \\gr{0} & \\gr{0} & \\gr{0}\n & \\vv{\\varphi}_{-2}\n \\\\\n \\gr{0}\n & \\gr{P_1} & P_0 - 1 & \\gr{0} & \\gr{0} & \\gr{0}\n & \\vv{\\varphi}_{-1}\n \\\\\n \\gr{0}\n & \\gr{P_2} & \\gr{P_1} & P_0 & \\gr{0} & \\gr{0}\n & \\vv{\\varphi}_{0}\n \\\\\n \\gr{0}\n & \\gr{P_3} & \\gr{P_2} & \\gr{P_1} & P_0 + 1 & \\gr{0}\n & \\vv{\\varphi}_{1}\n \\\\\n \\gr{0}\n & \\gr{P_4} & \\gr{P_3} & \\gr{P_2} & \\gr{P_1} & P_0 + 2\n & \\vv{\\varphi}_{2}\n \\end{array}\n }\n \\cdot\n \\lrsbrk{\n \\def1{1.2}\n \\begin{array}{c}\n \\rho\n \\\\\n \\vv{f}_{-2}\n \\\\\n \\vdots\n \\\\\n \\vv{f}_2\n \\\\\n -1\n \\end{array}\n } = 0\n \\ .\n\\end{align}\nThis equation has to be solved only for $\\rho$\\ .\nRow reduction of this matrix can be carried out only until the first row is\nfilled with zeros except for the element in the last column \\brk{highlighted\nwith grey}, which will contain the needed residue.\nOther poles of eq.~\\eqref{eq:interx-res} are treated in the same manner and\nthe sum of their residues produces the intersection number $\\vev{\\varphi | \\psi}$\\ .\n\n\n\\section{Decomposition via the secondary equation}\n\\label{sec:second}\nAs was observed in \\cite{Chestnov:2022alh} \\brk{see also \\cite{Henrik:2022}},\nthe twisted cohomology framework provides another method for computation of\nthe decomposition coefficients~\\eqref{eq:cmat-def}.\nThe first key idea is the so-called secondary equation\n\\cite{matsubaraheo:2019, Frellesvig:2020qot, Weinzierl:2020xyy}, which is a\nmatrix differential equation satisfied by the intersection matrix\n$C$:\n\\begin{align}\n \\begin{cases}\n \\partial_{z_i} \\, \\bra{e_\\lambda} = \\bigbrk{P_i}_{\\lambda \\nu}\n \\, \\bra{e_\\nu}\n \\\\\n \\partial_{z_i} \\, \\ket{h_\\mu} = \\ket{h_\\xi} \\, \\bigbrk{P^\\vee_i}_{\\xi \\mu}\n \\end{cases}\n \\Longrightarrow\n \\partial_{z_i} \\, C =\n P_i \\cdot C + C \\cdot \\lrbrk{P_i^\\vee}^\\mathrm{T}\n \\ ,\n \\label{eq:second}\n\\end{align}\nwhere $z_i$ are some external kinematical 
variables.\nThe other key step is the computation of the differential equation\nmatrices $P$ and $P^{\mathrm{aux}}$, made available thanks to the connection\nbetween the twisted cohomology theory, the GKZ formalism, and $D$-module theory.\nWe assume that this step is completed and refer the\ninterested reader to~\cite{Chestnov:2022alh, Henrik:2022} for the full story.\nOnce the secondary equation~\eqref{eq:second} is written down, we employ known\nalgorithms for finding rational solutions of such systems, e.g.\nthe \soft{Maple} package \soft{IntegrableConnections}~\cite{Barkatou:2012}.\n\nFinally, to determine the decomposition coefficients~\eqref{eq:cmat-def} we\nrepeat the above procedure for an auxiliary basis $e^{\mathrm{aux}} :=\n\brc{e_1, \ldots, e_{r - 1}, \varphi}$, i.e. we compute an\nauxiliary $P^{\mathrm{aux}}$ and then $C^{\mathrm{aux}}$\ . The FI decomposition is then\nencoded in the following matrix product:\n\begin{align}\n \lrsbrk{\n \n \arraycolsep = -1pt \def\arraystretch{0.9}\n \begin{array}{c}\n e_1\\\n \vdots\\\n e_{r-1}\\\n \varphi\n \end{array}\n \n }\n =\n C^{\mathrm{aux}} \cdot C^{-1}\n \lrsbrk{\n \n \arraycolsep = -1pt \def\arraystretch{0.9}\n \begin{array}{c}\n e_1\\\n \vdots\\\n e_{r-1}\\\n e_r\n \end{array}\n \n }\n \quad\n \Longrightarrow\n \quad\n C^{\mathrm{aux}} \cdot C^{-1}\n = \lrsbrk{\n \n \arraycolsep = .7pt \def\arraystretch{0.9}\n \begin{array}{ccc|c}\n & & & 0\n \\\n & {\mathrm{id}_{r-1}} & & \vdots\n \\\n & & & 0\n \\\n \hline\n \rowcolor{gr1}\n c_1 & \cdots & c_{r-1} & c_r\n \end{array}\n \n }\n \ ,\n \label{eq:CauxCinv}\n\end{align}\nwhere $\mathrm{id}_{r - 1}$ denotes the identity matrix of size $\brk{r - 1}$,\nand the decomposition coefficients $c_\lambda$ are collected in the last row,\nhighlighted in grey.\n\n\subsection{A simple example}\nLet us briefly showcase how the secondary equation can produce the reduction\ncoefficients of a box diagram with a single dot $\varphi = \boxd$ in 
terms of the\nbasis $\n \brk{e_1, e_2, e_3} = \lrbrk{\n \tbub,\n \sbub,\n \boxx\n }\n$\ .\nThis topology has the twist\n$u = \brk{x_1 + x_2 + x_3 + x_4 + x_1 x_3 + t / s \> x_2 x_4}^\gamma$\ .\nUsing the algorithm of\n\cite{Chestnov:2022alh} and the \soft{Asir} computer algebra system\n\cite{url-asir} we generate the differential equation matrices:\n\begin{align}\n P = \lrsbrk{\n \arraycolsep = 2pt \def\arraystretch{1.5}\n \begin{array}{ccc}\n - \frac{ \epsilon \left(\delta ^2 (12 z+11)+7 \delta (z+1)+z+1 \right)}{(3 \delta\n +1) z (z+1)} & -\frac{\delta ^2 \epsilon }{(3 \delta+1)(z+1)} & \frac{\delta ^2\n \epsilon (\delta (z+2)+1)}{2 (3 \delta +1) z\n (z+1) (\delta \epsilon +1)} \\\n \frac{\delta ^2 \epsilon }{(3 \delta +1)\n z \left(z+1\right)} & -\frac{\delta ^2\n \epsilon }{(3\delta+1) (z+1)} &\n -\frac{\delta ^2 \epsilon (\delta +2 \delta\n z+z)}{2 (3 \delta +1) z (z+1) (\delta\n \epsilon +1)} \\\n -\frac{2 (2 \delta +1) \epsilon (\delta\n \epsilon +1)}{(3 \delta +1) z (z+1)} & \frac{2\n (2 \delta +1) \epsilon (\delta \epsilon\n +1)}{(3 \delta +1) (z+1)} & -\frac{\epsilon\n \left(\delta ^2 (5 z+7)+\delta (2\n z+5)+1\right)}{(3 \delta +1) z (z+1)}\n \end{array}\n }\ ,\n\end{align}\nwhere $z = t / s$ is the ratio of the Mandelstam invariants, $\delta$ is an\nadditional regularization parameter which should be sent to zero, $\delta \to 0$, at the\nend of the computation, and $P^\vee = P \big|_{\epsilon \to\n-\epsilon}$\n\brk{see~\cite{Chestnov:2022alh,Henrik:2022} for further details}.\nThe rational solution to the secondary equation~\eqref{eq:second}\nreads:\n\begin{align}\n C =\n \lrsbrk{\n \arraycolsep = 1pt \def\arraystretch{1}\n \begin{array}{ccc}\n -\frac{(2 \delta +1) (4 \delta +1)}{\delta } &\n \delta & -2 (\delta \epsilon -1) \\\n \delta & -\frac{(2 \delta +1) (4 \delta\n +1)}{\delta } & -2 (\delta \epsilon -1) \\\n 2 (\delta \epsilon +1) & 2 (\delta \epsilon\n +1) & 
-\\frac{4 \\left(10 \\delta ^2+6 \\delta\n +1\\right) (\\delta \\epsilon -1) (\\delta\n \\epsilon +1)}{\\delta ^3}\n \\end{array}\n }\n \\ .\n\\end{align}\nWe repeat the same procedure for the auxiliary $C^{\\mathrm{aux}}$ and mount everything\ninto eq.~\\eqref{eq:CauxCinv} to produce:\n\\begin{align}\n \\boxd\n =\n -\\frac{2 \\varepsilon \\brk{2 \\varepsilon + 1}}{z \\brk{\\varepsilon + 1}} \\cdot\n \\tbub\n + 0 \\cdot\n \\sbub\n + \\brk{2 \\varepsilon + 1} \\cdot\n \\boxx\n \\ .\n\\end{align}\nTherefore the secondary equation method allows us to decompose FIs in terms of MIs via\nsolving a first order matrix differential equation~\\cite{Chestnov:2022alh}!\n\n\\section{Conclusion}\nWe reviewed the connection between FIs and the twisted cohomology theory,\nfocusing on the algorithms for computation of the uni- and multivariate\nintersection numbers, that is the scalar products between FIs.\n\nFurthermore, following~\\cite{Chestnov:2022alh}, we showed how the twisted\ncohomology together with the theory of GKZ hypergeometric system provide a\nway to compute the IBP reduction coefficients via essentially finding rational\nsolutions to a system of PDEs~\\eqref{eq:second} called the secondary equation.\nIn the future it would be interesting to further develop this connection and\napply it to other problems and processes within pQFT.\n\nFigures were made with \\soft{AxoDraw2}~\\cite{Collins:2016aya}.\n\n\\bibliographystyle{JHEP}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}