diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzngib" "b/data_all_eng_slimpj/shuffled/split2/finalzzngib" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzngib" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction} \\label{S1}\nLet $\\mathbb F_q$ be the finite field with $q=2^n$ elements, where $n$ is a positive integer. We denote {the} multiplicative group of nonzero elements of $\\mathbb F_q$ by $\\mathbb F_q^*$. Let $f$ be a function from {the} finite field $\\mathbb F_q$ to itself. It is well-known that any function from {the} finite field $\\mathbb F_q$ to itself can be uniquely represented by a polynomial in $\\mathbb F_q[x]$ of degree at most $q-1.$\n\n\nSubstitution boxes play a very crucial role in the design of secure cryptographic primitives, such as block ciphers. The differential attack, introduced by Biham and Shamir~\\cite{BS91} is one of the most efficient attacks on block ciphers. To quantify the degree of security of a substitution box, used in a block cipher, against the differential attacks, Nyberg~\\cite{KN93} introduced the notion of differential uniformity. For any $\\epsilon \\in \\mathbb F_q$, the derivative of $f$ in the direction of $\\epsilon$ is given by $D_{\\epsilon}f(x) = f(x+\\epsilon)+f(x)$ for all $x\\in \\mathbb F_q.$ For any $a,b \\in \\mathbb F_q$, the Difference Distribution Table (DDT) entry at the point $(a,b)$ of $f$ is given by\n\\begin{equation}\\label{ddt}\n \\Delta_f(a,b) = \\lvert \\{ x \\in \\mathbb F_q \\mid D_{a}f(x)=b \\} \\rvert,\n\\end{equation}\nand the differential uniformity is $\\Delta_f = \\max \\{\\Delta_f(a,b) \\mid a,b \\in \\mathbb F_q, a\\neq0\\}$. When $\\Delta_f=1,2$, then the function $f$ is called perfect nonlinear (PN) function, almost perfect nonlinear (APN) function, respectively. It is easy to see that over finite fields of even characteristic, $\\Delta_f$ is always even and hence APN functions give the optimal resistance against the differential attack. \n\nWagner~\\cite{DW99} introduced a new cryptanalysis method against block ciphers, which became known as the boomerang attack. This attack may be thought of as an extension {of} the differential attack~\\cite{BS91}. In order to analyze the boomerang attack in a better way, and analogously to the Difference Distribution Table (DDT) concerning differential attack, Cid et al.~\\cite{cid} introduced the notion of Boomerang Connectivity Table (BCT). Further, to quantify the resistance of a function against the boomerang attack, Boura and Canteaut~\\cite{BC18} introduced the concept of boomerang uniformity, which is the maximum value in the BCT excluding the first row and first column. For effectively computing the entries in the BCT, Li et al.~\\cite{KLi19} proposed an equivalent formulation as described below. For any $a,b \\in \\mathbb F_q$, the Boomerang Connectivity Table (BCT) entry of the function $f$ at point $(a,b)$, denoted by $\\mathcal B_f(a,b)$, is the number of solutions in $\\mathbb F_q \\times \\mathbb F_q$ of the following system of equations\n\\begin{equation}\\label{bs}\n \\begin{cases}\n f(x)+f(y)=b,\\\\\n f(x+a)+f(y+a)=b.\n \\end{cases}\n\\end{equation}\nThe boomerang uniformity of the function $f$, denoted by $\\mathcal B_f$, is given by \n$$\\mathcal B_f = \\max\\{ \\mathcal B_f(a,b) \\mid a,b \\in\\mathbb F_q^* \\}.$$\nFor {any permutation $f$}, Cid et al.~\\cite[Lemma 1]{cid} showed that $\\mathcal B_f(a,b) \\geq \\Delta_f(a,b)$ for all $(a,b)\\in \\mathbb F_q \\times \\mathbb F_q$. Later, Mesnager et. 
al~\\cite{SM20} showed that it holds for non-permutation functions as well. Cid et. al~\\cite[Lemma 4]{cid} showed that for APN {permutations}, the BCT is the same as the DDT, except for the first row and the first column. Thus APN {permutations} offer an optimal resistance to both differential and boomerang attacks. However, over finite fields $\\mathbb F_{2^n}$ with $n$ even, which is the most interesting case in cryptography, the only known example of APN {permutation} is due to Dillon~\\cite{Dil10} over $\\mathbb F_{2^6}$. The existence of APN permutations over $\\mathbb F_{2^n}$, $n\\geq 8$ even, is an open problem and often referred to as the Big APN Problem. Therefore, over $\\mathbb F_{2^n}$, $n$ even, the functions with differential and boomerang uniformity four offer the best (known) resistance to differential and boomerang attacks. So far, there are six classes of {permutations} over $\\mathbb F_{2^n}$, $n\\equiv 2 \\pmod 4$ with boomerang uniformity $4$ (see \\cite{BC18, KLi21, KLi19, NLi21, SM20, NLi20}). \n\nIn this paper, we consider the boomerang uniformity of an infinite class of locally-APN (see Definition~\\ref{D1}) functions $f(x)=x^{2^m-1}$ over the finite field $\\mathbb F_{2^n}$, where $n=2m$ with $m>1$. In Section~\\ref{S2}, we recall some results concerning the differential uniformity of $f$. Section~\\ref{S3} will be devoted to the boomerang uniformity of this power map and we shall show that the power map $f$ is boomerang $2$-uniform when $n \\equiv 0 \\pmod 4$ (i.e. when $m$ is even) and boomerang $4$-uniform when $n \\equiv 2 \\pmod 4$ (i.e. when $m$ is odd), respectively. \n\n\nCid et al.~\\cite{cid} (see also~\\cite[Theorem 1]{SM20}) showed that for permutation functions $f$, $\\mathcal B_f \\geq \\Delta_f$. However, perhaps, due to lack of any explicit example in the case of non-permutations, in several follow up papers of~\\cite{cid} such as~\\cite{Cal21, CV20, KLi21}, the term ``permutation\" was not emphasized and it has been stated that for any function~$f$, the differential uniformity is less than the boomerang uniformity. In this paper, we shall show that for non-permutations, the differential uniformity is not necessarily smaller than the boomerang uniformity. To the best of our knowledge, this is the first such example. Finally, we end with some concluding remarks in Section~\\ref{S4}. \n\nWhile one might wonder if investigating non-permutation is worthy, and we believe that these questions and their answers may reveal results of interests that do have applications to cryptography. With one exception~\\cite{Dil10}, all APN functions on even dimension are non-permutations. In fact, even the known example from~\\cite{Dil10} is an APN permutation that is CCZ-equivalent to the known Kim (non-permutation) APN function. \nMoreover, it is known that the boomerang uniformity is not invariant under the CCZ or even extended affine equivalence, while the differential uniformity is invariant. There are many open questions asking whether by adding a linearized polynomial (or even monomial) to an APN non-permutation function might render a permutation (surely, APN) function.\nIf we know exactly how the boomerang uniformity behaves under such small perturbations, then we can possibly answer some of these questions. 
For example, a simple consequence of our main results is that there is no permutation in the CCZ-equivalent class of $x^{2^m-1}$ over $\\mathbb F_{2^{2m}}$ that has boomerang uniformity smaller than $2^m-2$.\n\n\n\\section{Differential Uniformity of $x\\to x^{2^m-1}$}\n\\label{S2}\n\nThe differential properties of the power maps of the form $x^{2^t-1}$ over $\\mathbb F_{2^n}$, $1 < t 1$ is odd then $\\gcd(m-1, 2m) = 2$ and the above equation~\\eqref{b1} can have at most $4$ solutions, namely $0,1,\\omega, \\omega^2$, where $\\omega$ is a primitive cubic root of unity. Hence $\\Delta_f(1,1)=4$. When $m>1$ is even then $\\gcd(m-1, 2m) = 1$, thus $0,1$ are only solutions of the equation~\\eqref{b1}. Hence in this case $\\Delta_f(1,1)=2$.\n\n\\textbf{Case 3.} Let $b \\in \\mathbb F_{2^{2m}} \\backslash \\mathbb F_2$. It is easy to see that in this case $x=0$ and $x=1$ are not solutions of Equation~\\eqref{Deq}. Therefore, the DDT entry at $(1,b)$ is the number of solutions in $\\mathbb F_{2^{2m}} \\backslash \\mathbb F_2$ of the following equivalent equation\n\\begin{equation}\n\\label{due1}\n x^{2^m}+bx^2+(b+1)x=0.\n\\end{equation}\nNow, raising the above equation to the power $2^m$, we have \n\\begin{equation}\n\\label{due2}\n x^{2^{2m}}+b^{2^m}x^{2^{m+1}}+(b^{2^m}+1)x^{2^m}=0.\n\\end{equation}\nCombining~\\eqref{due1} and~\\eqref{due2}, we have\n\\begin{equation*}\n \\begin{split}\n b^{2^m+2}x^4+(b^{2^m+2}+b^{2^m+1}+b^{2^m}+b)x^2\n +(b^{2^m+1}+b^{2^m}+b)x=0.\n \\end{split}\n\\end{equation*}\nWe note that the above equation can have at most $4$ solutions in $\\mathbb F_{2^{2m}}$, two of which are $0$ and $1$ and thus it can have at most two solutions in $\\mathbb F_{2^{2m}} \\backslash \\mathbb F_2$. Therefore for $b \\in \\mathbb F_{2^{2m}} \\backslash \\mathbb F_2$, $\\Delta_f(1,b) \\leq 2$. This completes the proof.\n\\end{proof}\n\n\n\n\\section{Boomerang uniformity of $x\\to x^{2^m-1}$}\\label{S3}\n\nIn this section, we shall discuss the boomerang uniformity of the locally-APN functions given in the previous section. The boomerang uniformity of the power maps of the type $x^{2^t-1}$ over $\\mathbb F_{2^n}$ has been considered in~\\cite{ZH19}, where the authors give bounds on the boomerang uniformity in terms of the differential uniformity under the condition $\\gcd(n,t)=1$ and also show that the power permutation $x^7$ has boomerang uniformity $10$ over $\\mathbb F_{2^n}$, where $n\\geq 8$ is even and $\\gcd(3, n) = 1$. The following theorem gives the boomerang uniformity of the power map~$f(x)=x^{2^m-1}$ over $\\mathbb F_{2^{2m}}$ where $m>1$ is odd. \n\n\\begin{thm}\\label{T1}\nLet $f(x)=x^{2^m-1}$, $m>1$ odd, be a power map from the finite field $\\mathbb F_{2^{2m}}$ to itself. Then, the boomerang uniformity of $f$ is~$4$.\n\\end{thm}\n\\begin{proof}\nRecall that for any $b \\in \\mathbb F_q^*$, $q=2^{2m}$, the BCT entry $\\mathcal B_f(1,b)$ at point $(1,b)$ of $f$, is given by the number of solutions in $\\mathbb F_q \\times \\mathbb F_q$ of the following system of equations\n \\begin{equation}\\label{pbs}\n \\begin{cases}\n x^{2^m-1}+y^{2^m-1}=b,\\\\\n (x+1)^{2^m-1}+(y+1)^{2^m-1}=b.\n \\end{cases}\n \\end{equation}\nNotice that the above system~\\eqref{pbs} cannot have solutions of the form $(x_1,y_1)$ with $x_1=y_1$ as $b \\neq 0$. Also it is easy to observe that if $(x_1,y_1)$ is a solution of the above system~\\eqref{pbs}, then so are $(y_1,x_1), (x_1+1,y_1+1)$ and $(y_1+1,x_1+1)$. 
We shall split the analysis of the solutions of the system~\\eqref{pbs} in the following five cases.\n\n\\textbf{Case 1.} Let $x=0$. In this case, the system~\\eqref{pbs} reduces to \n\\begin{equation}\\label{eqx0}\n \\begin{cases}\n y^{2^m-1}=b,\\\\\n (y+1)^{2^m-1}+y^{2^m-1}=1.\n \\end{cases}\n \\end{equation}\nFrom Lemma~\\ref{L2}, we know that the second equation of the above system has four solutions, namely $y= 0,1,\\omega$ and $\\omega^2$. Also since $m$ is odd, we have $2^m-1 \\equiv 1 \\pmod 3$. Since $b\\neq0$, $y=0$ cannot be a solution of the system~\\eqref{eqx0} and $y= 1,\\omega$ and $\\omega^2$ will be a solution of the system~\\eqref{eqx0} when $b=1,\\omega$ and $\\omega^2$, respectively. Equivalently, when $b=1,\\omega, \\omega^2$ then $(0,1), (0,\\omega), (0,\\omega^2)$ are solutions of the system~\\eqref{pbs}, respectively. When $b \\in \\mathbb F_q \\backslash \\mathbb F_{2^2}$ then there is no solution of the system~\\eqref{pbs} of the form $(0,y)$.\n\n\\textbf{Case 2.} Let $x=1$. In this case, the system~\\eqref{pbs} reduces to \n\\begin{equation}\\label{eqx1}\n \\begin{cases}\n y^{2^m-1}=b+1,\\\\\n (y+1)^{2^m-1}+y^{2^m-1}=1.\n \\end{cases}\n \\end{equation}\n Similar to the previous case, the second equation of the above system~\\eqref{eqx1} has four solutions, namely $y= 0,1,\\omega$ and $\\omega^2$. Since $b\\neq0$, $y=1$ cannot be a solution of~\\eqref{eqx1} and $y= 0,\\omega$ and $\\omega^2$ will be a solution of~\\eqref{eqx1}, when $b=1,\\omega^2$ and $\\omega$, respectively. Equivalently, when $b=1,\\omega, \\omega^2$ then $(1,0), (1,\\omega^2), (1,\\omega)$ are solutions of the system~\\eqref{pbs}, respectively. When $b \\in \\mathbb F_q \\backslash \\mathbb F_{2^2}$ then there is no solution of the system~\\eqref{pbs} of the form $(1,y)$.\n\n\n\\textbf{Case 3.} Let $y=0$. Since the system~\\eqref{pbs} is symmetric in the variables $x$ and $y$, this case directly follows from Case 1. That is, when $b=1,\\omega, \\omega^2$ then $(1,0), (\\omega,0), (\\omega^2,0)$ are solutions of the system~\\eqref{pbs}, respectively. When $b \\in \\mathbb F_q \\backslash \\mathbb F_{2^2}$ then there is no solution for~\\eqref{pbs} of the form~$(x,0)$. \n\n\\textbf{Case 4.} Let $y=1$. This case directly follows from Case 2. That is, when $b=1,\\omega, \\omega^2$ then $(0,1), (\\omega^2,1), (\\omega,1)$ are solutions of the system~\\eqref{pbs}, respectively. When $b \\in \\mathbb F_q \\backslash \\mathbb F_{2^2}$ then there is no solution for~\\eqref{pbs} of the form~$(x,1)$.\n\n\\textbf{Case 5.} Let $x,y \\neq 0,1$. In this case system~\\eqref{pbs} reduces to \n \\begin{equation}\n \\label{eq:xy1}\n \\begin{cases}\n x^{2^m}y+xy^{2^m}=bxy,\\\\\n (x+y)^{2^m}+(b+1)(x+y)+b=0.\n \\end{cases}\n \\end{equation}\n Let $y=x+z$. 
Then, the above system becomes \n \\begin{equation}\\label{pbs3}\n \\begin{cases}\n x^{2^m}z+xz^{2^m}=bx(x+z),\\\\\n z^{2^m}+(b+1)z+b=0.\n \\end{cases}\n \\end{equation}\n Now, raising the second equation of the above system to the power $2^m$, we have\n \\begin{equation}\n \\label{pbs4}\n (b^{2^m}+1)z^{2^m}+z+b^{2^m} =0.\n \\end{equation}\n Combining the second equation of~\\eqref{pbs3} and Equation~\\eqref{pbs4}, we obtain\n \\begin{equation}\n \\label{pbs5}\n ((b+1)^{2^m+1}+1)(z+1) =0.\n \\end{equation}\n Therefore, the system~\\eqref{pbs3} reduces to\n \\begin{equation}\n \\label{pbs6}\n \\begin{cases}\n x^{2^m}z+xz^{2^m}=bx(x+z),\\\\\n ((b+1)^{2^m+1}+1)(z+1) =0.\n \\end{cases}\n \\end{equation} \nNow, we shall consider following two cases\n \n\\textbf{Subcase 5.1.} Let $(b+1)^{2^m+1} \\neq 1$. In this case, the first equation of~\\eqref{pbs6} reduces to \n\\[\n x^{2^m}+bx^2+(b+1)x=0,\n\\]\nwhich is equivalent to \n\\begin{equation}\n\\label{pbs7}\n b^{2^m+2}x^4+(b^{2^m+2}+b^{2^m+1}+b^{2^m}+b)x^2\n +(b^{2^m+1}+b^{2^m}+b)x=0.\n\\end{equation}\nWhen $b=1$, the above equation becomes $x^4+x =0$, which has four solutions $x=0,1,\\omega, \\omega^2$. Since we assumed $x,y \\neq0$, the only solutions we consider are $x=\\omega$ and $\\omega^2$. Thus for $b=1$, $(\\omega, \\omega^2)$ and $(\\omega^2, \\omega)$ are solutions of the system~\\eqref{pbs3}. When $b \\in \\mathbb F_q \\backslash \\mathbb F_2$ with $(b+1)^{2^m+1}\\neq 1$, by Lemma~\\ref{L2}, Equation~\\eqref{pbs7} can have at most two solutions. \n\n\\textbf{Subcase 5.2.} Let $(b+1)^{2^m+1} = 1$. It is more convenient, now, to work with~\\eqref{eq:xy1}. We then raise the first equation of the system~\\eqref{eq:xy1} to the $2^m$-th power obtaining\n\\[\nx^{2^{2m}} y^{2^m}+y^{2^{2m}} x^{2^m}=b^{2^m} x^{2^m}y^{2^m},\n\\]\nwhich is equivalent to \n\\[\nx y^{2^m}+y x^{2^m}=b^{2^m} x^{2^m}y^{2^m}.\n\\]\nCombining this with the first equation of~\\eqref{eq:xy1}, we infer that\n\\[\nb^{2^m} x^{2^m}y^{2^m}=bxy,\n\\]\nand so, $bxy=\\alpha\\in\\mathbb F_{2^m}^*$.\nUsing $y=\\frac{\\alpha}{bx}$ in the first equation of~\\eqref{eq:xy1}, we obtain \n\\begin{equation}\n\\label{eq:x1}\nx^{2^m-1}\\frac{1}{b}+x^{1-2^m} \\frac{1}{b^{2^m}}=1.\n\\end{equation}\nLabel $T=x^{2^m-1}$. Then the above equation reduces to \n \\begin{equation*}\n \\begin{split}\n \\frac{T}{b}+ \\frac{T^{-1}}{b^{2^m}} &= 1\\\\\n \\iff \\frac{T^2}{b}+ \\frac{1}{b^{2^m}} &= T\\\\\n \\iff T^2b^{2^m}+b &= Tb^{2^m+1}.\n \\end{split}\n \\end{equation*}\nSince, $(b+1)^{2^m+1} = 1$, by expansion, we get $b^{2^m+1}+b^{2^m}+b=0$, and so, $b^{2^m+1}=b^{2^m}+b$. The previous equation becomes \n\\begin{equation*}\n\n T^2b^{2^m}+b = Tb^{2^m} +Tb\\iff\n (Tb^{2^m}+b)(T+1) =0.\n\n \\end{equation*}\n If $T=1$, then $x\\in\\mathbb F_{2^m}$ and so, $by\\in\\mathbb F_{2^m}$. Taking this back into~\\eqref{eq:xy1}, we then obtain\n \\begin{equation*}\n \\begin{cases}\n y^{2^m}+(b+1)y=0,\\\\\n y^{2^m}+(b+1)y=(x+1)b,\n \\end{cases}\n \\end{equation*}\nwhich is inconsistent with $x\\neq 1$ and $b \\in \\mathbb F_q^*.$ If $Tb^{2^m}+b=0$, then we have \n\\begin{equation*}\n \\begin{split}\n Tb^{2^m}+b &=0 \\\\\n \\iff Tb^{2^m-1}+1 &=0\\\\\n \\iff (bx)^{2^m-1}&=1.\n \\end{split}\n\\end{equation*}\nTherefore $bx \\in \\mathbb F_{2^m}$ and hence $\\frac{\\alpha}{bx}= y \\in \\mathbb F_{2^m}$. 
Taking this back into~\\eqref{eq:xy1}, we then obtain\n\\begin{equation*}\n \\begin{cases}\n x^{2^m}+(b+1)x=0,\\\\\n x^{2^m}+(b+1)x=(y+1)b,\n \\end{cases}\n \\end{equation*}\n which is inconsistent with $y \\neq 1$ and $b \\in \\mathbb F_q^*.$ This completes the proof.\n\\end{proof}\n\n \\begin{exmp}\nAs an example, we checked by SageMath that the differential uniformity of the non-permutation power map $x^7$ over $\\mathbb F_{2^6}$ is $6$, whereas its boomerang uniformity is $4$.\n \\end{exmp}\n \nThe following theorem gives the boomerang uniformity of the power map $f(x)=x^{2^m-1}$ over $\\mathbb F_{2^{2m}}$ , where $m>1$ is even.\n\n\\begin{thm}\n\\label{T2}\nLet $f(x)=x^{2^m-1}$, $m>1$ even, be a power map from the finite field $\\mathbb F_{2^{2m}}$ to itself. Then, the boomerang uniformity of $f$ is~$2$.\n\\end{thm}\n\\begin{proof} Following similar arguments as in the proof of Theorem~\\ref{T1}, it is straightforward to see that when $b=1$, $(0,1)$ and $(1,0)$ are the only solutions of the system~\\eqref{pbs} with either of the coordinates $x,y$ being $0$ or $1$. On the other hand, when $b \\in \\mathbb F_{2^{2m}} \\backslash \\mathbb F_{2}$, there is no solution of the system~\\eqref{pbs} with either of the coordinates $x,y \\in \\{0,1\\}$. \n\nWe now consider the case when $x,y \\neq 0,1$. In this case, the system~\\eqref{pbs} reduces to \n \\begin{equation}\n \\label{BE:xy1}\n \\begin{cases}\n x^{2^m}y+xy^{2^m}=bxy,\\\\\n (x+y)^{2^m}+(1+b)(x+y)+b=0.\n \\end{cases}\n \\end{equation}\n Let $y=x+z$. Now, raising the second equation of the above system to the power $2^m$ and adding it to the second equation of the above system, we have \n \\begin{equation} \\label{BE6}\n \\begin{cases}\n x^{2^m}z+xz^{2^m}=bx(x+z),\\\\\n ((b+1)^{2^m+1}+1)(z+1) =0.\n \\end{cases}\n \\end{equation} \nNow, we shall consider the following two cases.\n \n\\textbf{Case 1.} Let $(b+1)^{2^m+1} \\neq 1$. In this case, the system~\\eqref{BE6} reduces to \n\\[\n x^{2^m}+bx^2+(b+1)x=0,\n\\]\nwhich is equivalent to \n\\begin{equation}\\label{BE7}\n b^{2^m+2}x^4+(b^{2^m+2}+b^{2^m+1}+b^{2^m}+b)x^2\n +(b^{2^m+1}+b^{2^m}+b)x=0.\n\\end{equation}\nWhen $b=1$, the above equation becomes $x^4+x =0$, which has two solutions $x=0,1$, as $m$ is even. Since we assumed $x,y \\neq0,1$, we do not get any solution of the system~\\eqref{BE7} in this case. When $b \\in \\mathbb F_{2^{2m}} \\backslash \\mathbb F_2$ with $(b+1)^{2^m+1}\\neq 1$, by Lemma~\\ref{L2}, Equation~\\eqref{BE7} can have at most two solutions. \n\n\\textbf{Case 2.} Let $(b+1)^{2^m+1} = 1$, the argument is similar to Subcase 5.2 of Theorem~\\ref{T1} and in this case the system~\\eqref{pbs} will have no solution.\n\\end{proof}\n\n\n \\begin{exmp}\nThe differential uniformity of the non-permutation power map $x^{15}$ over $\\mathbb F_{2^8}$ is $14$, whereas its boomerang uniformity is $2$. \n \\end{exmp}\n\nIn the following we shall focus on APN functions. 
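Before doing so, we remark that small-field examples such as the ones above are easy to verify directly from the definitions. The following Python sketch (hypothetical illustration code, not the SageMath script used for our checks) computes the differential and boomerang uniformities of a power map straight from~\\eqref{ddt} and~\\eqref{bs}, assuming the representation of $\\mathbb F_{2^6}$ as $\\mathbb F_2[x]$ modulo the irreducible polynomial $x^6+x+1$; both uniformities are independent of this choice of representation.
\\begin{verbatim}
# Brute-force check of the differential and boomerang uniformities of a
# power map x -> x^d over F_{2^n}, directly from the definitions.
# Assumed representation: F_{2^6} = F_2[x] mod (x^6 + x + 1).
N, POLY = 6, 0b1000011
Q = 1 << N

def gf_mul(a, b):
    # multiply two field elements modulo the chosen irreducible polynomial
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & Q:
            a ^= POLY
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def diff_uniformity(f):
    # max over a != 0 and all b of #{x : f(x+a) + f(x) = b}
    return max(sum(1 for x in range(Q) if f[x ^ a] ^ f[x] == b)
               for a in range(1, Q) for b in range(Q))

def boom_uniformity(f):
    # max over nonzero a, b of the number of pairs (x, y) solving the system
    best = 0
    for a in range(1, Q):
        counts = {}
        for x in range(Q):
            for y in range(Q):
                key = (f[x] ^ f[y], f[x ^ a] ^ f[y ^ a])
                counts[key] = counts.get(key, 0) + 1
        best = max(best, max(counts.get((b, b), 0) for b in range(1, Q)))
    return best

f = [gf_pow(x, 7) for x in range(Q)]   # the power map x -> x^7 on F_{2^6}
print(diff_uniformity(f), boom_uniformity(f))
\\end{verbatim}
On the example $x^7$ over $\\mathbb F_{2^6}$ this sketch should reproduce the values $6$ and $4$ quoted above, and changing the exponent, $N$ and the reduction polynomial gives the analogous check for $x^{15}$ over $\\mathbb F_{2^8}$.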
First, we recall two lemmas which give a connection between the DDT and BCT entries of arbitrary permutation functions, respectively, APN permutations.\n\n\n\\begin{lem}\\cite[Lemma 1]{cid} \\label{LL1}\n If $f$ is a permutation function on $\\mathbb F_q$, then $\\Delta_f(a,b) \\leq \\mathcal B_f(a,b)$, for all $(a,b)\\in \\mathbb F_q \\times \\mathbb F_q.$\n\\end{lem}\n\n\\begin{lem}\\cite[Lemma 4]{cid}\\label{LL2}\n For any permutation with $2$-uniform DDT, the BCT entries equal the DDT entries, except for the first row and the first column.\n\\end{lem}\n\nFrom Lemma~\\ref{LL1} and Lemma~\\ref{LL2}, we can deduce (surely, known) that APN permutations have boomerang uniformity $2$. Of course, it is rather easy to see that the converse is also true.\n\n\\begin{prop}\n Let $f$ be a permutation on $\\mathbb F_q$. If the boomerang uniformity of $f$ is $2$, then it is APN.\n\\end{prop}\n\\begin{proof}\n Since the boomerang uniformity of the permutation $f$ is $2$, we have\n \\[\n \\Delta_f(a,b) \\leq \\mathcal B_f(a,b) \\leq 2,\n \\]\nfor all $(a,b)\\in \\mathbb F_q^* \\times \\mathbb F_q^*.$ Also notice that for any $a \\in \\mathbb F_q^*$,\n\\begin{equation*}\n \\begin{split}\n \\Delta_f(a,0) &= \\lvert \\{ x\\in \\mathbb F_q \\mid f(x+a)+f(x)=0\\} \\rvert \\\\\n &=0.\n \\end{split}\n\\end{equation*}\nThus $\\Delta_f(a,b) \\leq 2$ for all $(a,b) \\in \\mathbb F_q^* \\times \\mathbb F_q$ and hence $f$ is APN.\n\\end{proof}\nBy providing an extension of the boomerang uniformity to the case of arbitrary functions, Mesnager et. al~\\cite{SM20} observed that for any arbitrary function $f$, $\\Delta_f(a,b) \\leq \\mathcal B_f(a,b)$, for all $(a,b)\\in \\mathbb F_q \\times \\mathbb F_q.$ In the following, we shall show that the boomerang uniformity of any arbitrary APN function $f$ is $2$.\n\n\\begin{thm}\\label{P2}\n Let $f$ be an arbitrary APN function over $\\mathbb F_q$. Then the boomerang uniformity of $f$ is $2$.\n\\end{thm}\n\\begin{proof}\n Recall that the BCT entry $\\mathcal B_f(a,b)$ of $f$ at ponint $(a,b)$ is given by\n \\begin{equation*}\n \\mathcal B_f(a,b) = \\left \\lvert \\left\\{ (x,y)\\in \\mathbb F_q \\times \\mathbb F_q \\mid\n \\begin{cases}\n f(x)+f(y)=b \\\\\n f(x+a)+f(y+a) =b\n \\end{cases}\n \\right \\} \\right \\rvert .\n \\end{equation*}\nLet $y=x+\\gamma$, then the above equation becomes\n\\begin{equation*}\n \\begin{split}\n \\mathcal B_f(a,b) &= \\left \\lvert \\left \\{ (x,\\gamma)\\in \\mathbb F_q \\times \\mathbb F_q \\mid \n \\begin{cases}\n f(x+\\gamma)+f(x)=b \\\\\n f(x+a+\\gamma)+f(x+a)=b\n \\end{cases}\n \\right \\} \\right \\rvert \\\\\n &= \\sum_{\\gamma \\in \\mathbb F_q} \\left \\lvert \\left \\{ x\\in \\mathbb F_q \\mid \n \\begin{cases}\n f(x+\\gamma)+f(x)=b \\\\\n f(x+a+\\gamma)+f(x+a)=b\n \\end{cases}\n \\right \\} \\right \\rvert.\n \\end{split}\n \\end{equation*}\n Also observe that for any $(a,b)\\in \\mathbb F_q^* \\times \\mathbb F_q^*$, we have \n \\begin{equation*}\n \\mathcal B_f(a,b) = \\sum_{\\gamma \\in \\mathbb F_q^*} \\left \\lvert \\left \\{ x\\in \\mathbb F_q \\mid \n \\begin{cases}\n f(x+\\gamma)+f(x)=b \\\\\n f(x+a+\\gamma)+f(x+a)=b\n \\end{cases}\n \\right \\} \\right \\rvert.\n \\end{equation*}\n Since $f$ is an APN function, therefore, for any $(a,b)\\in \\mathbb F_q^* \\times \\mathbb F_q^*$, the quantity under summation will contribute only if $a=\\gamma$. 
Now for $\\gamma=a$, $\\mathcal B_f(a,b)=\\Delta_f(a,b) \\leq 2$ and hence the boomerang uniformity of $f$ is $2$.\n\\end{proof}\n\n\\begin{rmk}\n The converse of the above Theorem~\\textup{\\ref{P2}} is not necessarily true and counterexamples can be easily constructed. For instance, the boomerang uniformity of the power map $x^{15}$ over $\\mathbb F_{2^8}$ is $2$, though it is not an APN function. \n\\end{rmk}\n\n \n \n\\section{concluding remarks}\\label{S4}\nIn this note we compute the boomerang uniformity of the power map $x^{2^m-1}$ over $\\mathbb F_{2^{2m}}$. As an immediate consequence, we find that the differential uniformity is not necessarily smaller than the boomerang uniformity (for non-permutations), as it was previously shown for permutations. The presented class is not just an isolated example. Our computer programs reveal quickly some other like-functions, for instance, $x\\mapsto x^{45}$ on $\\mathbb F_{2^8}$, which is locally-APN, has differential uniformity~$14$, and boomerang uniformity~$2$. We could not extrapolate an infinite class out of all these examples, though.\n It would be interesting to construct some new (infinite) classes of functions for which the boomerang uniformity is strictly smaller than the differential uniformity. \n\n\\section*{Acknowledgements}\nWe would like to express our sincere appreciation to the editors for handling our paper and to the reviewers for their careful reading, beneficial comments and constructive suggestions. \n\nThe research of Sartaj Ul Hasan is partially supported by MATRICS grant MTR\/2019\/000744 from the Science and Engineering Research Board, Government of India. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThis article addresses the problem of recovering a class of signals with periodic-like structure that are masked both by noise and by a general warping of the domain. Additive noise is mitigated by averaging over groups of samples, but this requires care to preserve structure of the signal. If the signal has a definite spectral shape, then a matched linear filter has the optimal weights for the samples to be averaged. If the signal does not have a definite spectral shape -- for instance, if it is subject to an unknown time warping -- linear matched filters do not exist. This article presents a novel, two-stage adaptive filter for signals that are subjected to unknown warping of the domain, which may be a general smooth manifold. We call this filter the \\emph{quasiperiodic low pass filter (QPLPF)}.\n\n\\subsection{Historical context}\n\nAlthough \\emph{almost periodic} signals -- those within a certain metric distance of a periodic signal -- are a natural generalization beyond periodic signals, they do not accurately represent signals that are periodic under a warped timescale. These kind of signals are common in music processing \\cite{Mueller_2015}. If the domain has two or more dimensions, then many more possibilities for warping arise. The path to greater generality is embodied in the two dimensional images captured by cryo-electron microscopy. These images have a different underlying symmetry group -- the group of rotations in $\\mathbb{R}^3$ -- and the smooth structure of this group can be exploited to great effect \\cite{Singer_2013}.\n\nAdaptive filters are often used in image processing (for instance \\cite{Dabov_2007}, among many others), but ignoring internal structure of the signal can lead to poor results \\cite{Willett_2014}. 
Class averaging \\cite{Singer_2011,vanHeel_1984} is usually presented as a way to ensure that this structure is preserved, but theoretical guarantees are usually given for a specific problem domain. The QPLPF we present in this paper is a \\emph{general} class averaging filter, and is applicable to many problem domains. To support the broad application of the QPLPF, we impose only weak theoretical constraints on the input signals. Under these constraints we obtain surprisingly strong theoretical guarantees.\n\nSignals that have a hidden state space are identifiable using the topology of \\emph{delay embeddings} \\cite{DeSilva_2011}, a concept that can be traced to a paper by Takens \\cite{Takens_1981}. Many papers have discussed ways to find the hidden state of a dynamical system; recovering the \\emph{phase space} from measurements \\cite{Chelidze_2008,Chelidze_2006,Casdagli_1991, Sauer_1991,Hegger_2002}. The key theoretical guarantees arise from transversality results for smooth manifolds. These can be lifted to geometric conditions for recovering state spaces up to topology under noisy conditions \\cite{Chazal_2009,Chazal_2006,Niyogi_2008}. Although the present paper does not require a complete estimation of a topological space, we obtain similar performance bounds in the face of noise.\n\n\\section{Problem statement}\nWe begin by specifying the class of signals of interest: those with nontrivial \\emph{quasiperiodic factorizations}.\n\n\\begin{df} \\cite{RobinsonSampTA2015}\nA function $u:M\\to N$ from one smooth manifold $M$ to another $N$ is called \\emph{$(\\phi,U)$-quasiperiodic} if there exists another smooth manifold $C$, a smooth function $U:C \\to N$, and a surjective submersion $\\phi: M \\to C$ such that $u=U \\circ \\phi$. We say $u$ \\emph{factors through} $\\phi$ and call $C$ the \\emph{phase space}. \n\\end{df}\n\nQuasiperiodic functions are a strict generalization of dynamically time warped functions, in which the phase function $\\phi:\\mathbb{R}\\to\\mathbb{R}$ is a diffeomorphism. We treat dynamic time warping experimentally in Section \\ref{sec:results}, though our algorithm works for all quasiperiodic functions as shown by Theorem \\ref{thm:phase_recovery} (noisless case) and in Section \\ref{sec:noise_perf} (noisy case). Although our simulated data is rather simplistic, we note that Theorem \\ref{thm:phase_recovery} establishes a substantially more general condition for class averaging.\n\nThe main problem addressed by this article is the following:\n\\begin{prob}\n Assume the following:\n \\begin{enumerate}\n \\item $M$ is a finite dimensional manifold,\n \\item $N$ is a finite dimensional vector space,\n \\item $n$ is a random field $M \\to N$ whose values are identically distributed and independent from one another, and\n \\item $M$ is acted upon transitively by a group $G$ of diffeomorphisms.\n \\end{enumerate}\n Given a function $\\tilde{u}(x)=u(x)+n(x)$ consisting of the sum of a $(\\phi,U)$-quasiperiodic function $u:M\\to N$ and a noise signal $n:M\\to N$, recover an estimate of $u$. 
We will assume that $\\tilde{u}$ is only specified at a discrete set of points $X=\\{x_i\\} \\subset M$.\n\\end{prob}\n\n\\section{Algorithm description}\n\nThe \\emph{quasiperiodic low pass filter} (QPLPF) estimates $u$ from samples of $\\tilde{u}$ and is tuned by several parameters:\n\\begin{enumerate}\n\\item The \\emph{delays} $g_1, \\dotsc, g_m \\subset G$, and\n\\item The \\emph{neighborhood size} $S$, which is a positive integer.\n\\end{enumerate}\nThe QPLPF consists of two distinct stages:\n\\begin{enumerate}\n\\item \\emph{Topological estimation,} a discrete estimation of the phase function $\\phi$. This stage consists of two steps:\n \\begin{enumerate}\n \\item \\emph{Delay immersion,} constructing an auxillary phase function $F: M \\to N^{m+1}$\n \\begin{equation*}\n F(x) = \\left(\\tilde{u}(x),\\tilde{u}(g_1 x), \\dotsc, \\tilde{u}(g_m x)\\right),\n \\end{equation*}\n using a fixed set $\\{g_1, \\dotsc, g_m\\} \\subset G$ of group elements to translate copies of $\\tilde{u}$.\n \\item \\emph{Discretization,} which extracts a distance-based graph $H$ using the set $X \\subseteq M$ as vertices based on the image of $F$. Since $N$ is a normed vector space, we can select a metric $d$ on $N^{m+1}$. For a given $x\\in X$, its set of adjacent edges in $H$ is defined to be the $S$ nearest neighbors\\footnote{If there are more than $S$ nearest neighbhors, then the adjacent edges are drawn arbitrarily from this set. To simplify the notation we assert that each vertex is adjacent to itself, but that this does not count against the $S$ nearest neighbors.} measured via $d(F(x),F(y))$. \n \\end{enumerate}\n\\item \\emph{Neighborhood averaging,} a statistical estimator for $U$ using the neighborhoods of $H$:\n \\begin{equation}\n \\label{eq:qplpf_highlevel}\n (\\text{QPLPF}\\; \\tilde{u})(x_i) = \\frac{1}{1+ S}\\left(\\sum_{[x_i,x_j]\\in H} \\tilde{u}(x_j)\\right).\n \\end{equation}\n\\end{enumerate}\n\n\\section{Theoretical discussion}\n\nQuasiperiodic factorizations of smooth functions have a number of interesting properties that make them both expressive and useful models of signals.\n\n\\begin{eg}\nEvery smooth function $u:M\\to N$ has a \\emph{trivial quasiperiodic factorization}, namely $(\\textrm{id}_M, u)$, where $\\textrm{id}_M:M\\to M$ is the identity function. The QPLPF filter reduces to a sliding window average on functions that have \\emph{only} the trivial factorization.\n\\end{eg}\n\n\\begin{eg}\nConsider the phase modulated sinusoid $u(t) = \\sin \\left(\\phi(t)\\right)$ for $t\\in (-\\infty,\\infty)$. If we use $\\phi: \\mathbb{R} \\to S^1$, where $S^1 = \\{(x,y) \\in \\mathbb{R}^2: x^2+y^2=1\\}$ is the unit circle and $U: S^1 \\to \\mathbb{R}$ with $U(x,y) = y$, then $U \\circ \\phi = u$. This is a nontrivial quasiperiodic factorization of $u$ if the derivative of $\\phi$ is never zero.\n\\end{eg}\n\n\\begin{prop}\nIf a smooth function $u$ from a manifold $M$ to a metric space $N$ has a quasiperiodic factorization with a compact phase space, then $u$ is bounded.\n\\end{prop}\nUnbounded smooth functions cannot have $S^1$ as a phase space, for instance.\n\\begin{proof}\nSuppose that $u$ is $(\\phi,U)$-quasiperiodic and that the domain of $U:C \\to N$ is compact. The image of $u$ coincides with the image of $U$, which is compact since $U$ is continuous. Thus this image is closed and bounded, hence $u$ is bounded.\n\\end{proof}\n\n\\begin{prop}\nAny compactly supported smooth function from $\\mathbb{R}^n \\to \\mathbb{R}$ is quasiperiodic with phase space $C=S^n$. 
\n\\end{prop}\nThis might be a wildly uninformative quasiperiodic factorization. There are usually better ones as Proposition \\ref{prop:uqf} states.\n\\begin{proof}\n$S^n$ is the one-point compactification of $\\mathbb{R}^n$, formed by adding a point at infinity. Since $u$ is compactly supported, we merely construct $\\phi$ so that a neighborhood of infinity in $S^n$ has zero preimage, and then the complement (which includes the support of $u$) is diffeomorphic to $\\mathbb{R}^n$.\n\\end{proof}\n\n\n\\subsection{Obtaining quasiperiodic factorizations}\n\nTo establish the theoretical validity of the QPLPF, we show that if $u$ is $(\\phi,U)$-quasiperiodic, then the QPLPF will produce a (possibly less compact) quasiperiodic factorization of $u$ in which $F$ is the phase function. \n\n\\begin{lem}\n \\label{lem:toprank}\n Suppose $u:M\\to N$ is a smooth function. If $M$ is a compact manifold that is acted upon transitively by a group $G$ of diffeomorphisms, then there is a finite set $\\{g_1, \\dotsc, g_m\\}\\subset G$ for which the function $F: M \\to N^{m+1}$ given by\n \\begin{equation*}\n F(x) = \\left(u(x),u(g_1 x), \\dotsc, u(g_m x)\\right)\n \\end{equation*}\n has constant rank\n \\begin{equation*}\n \\textrm{rank } dF(x) = \\max_{y\\in M} \\textrm{rank } du(y)\n \\end{equation*}\n for all $x\\in M$.\n\\end{lem}\n\\begin{proof}\n Consider the set $R \\subseteq M$ given by\n \\begin{equation*}\n R = \\{x \\in M : \\textrm{rank } du(x) = \\max_{y\\in M} \\textrm{rank } du(y) \\}.\n \\end{equation*}\n Because $u$ is smooth, it is continuous, so $R$ is open. Then the collection\n \\begin{equation*}\n \\mathcal{R} = \\{g R : g \\in G\\}\n \\end{equation*}\n is an open cover of $M$ because each $g$ is a diffeomorphism and $G$ acts transitively. Because $M$ is compact, there is a finite subcollection\n \\begin{equation*}\n \\mathcal{R}' = \\{g_1 R, \\dotsc, g_m R \\} \\subset \\mathcal{R}\n \\end{equation*}\n that is also an open cover of $M$. Thus for any $x\\in M$, $g_i x \\in R$ for at least one of $i = 1, \\dotsc, m$. Thus\n \\begin{eqnarray*}\n \\textrm{rank } dF(x) &= &\\max\\{\\textrm{rank } du(x), \\textrm{rank } du(g_1 x), \\dotsc,\\\\\n && \\textrm{rank } du(g_m x) \\} \\\\\n &=& \\textrm{rank } du(g_i x) \\\\\n &=& \\max_{y\\in M} du(y).\n \\end{eqnarray*}\n\\end{proof}\n\nWhen there is no noise, the topological estimation stage of the QPLPF recovers a quasiperiodic factorization.\n\n\\begin{thm}\n \\label{thm:phase_recovery}\n Suppose $u:M\\to N$ is a smooth function, where $M$ is a compact manifold that is acted upon transitively by a group $G$ of diffeomorphisms. Using the finite set $\\{g_1, \\dotsc, g_m\\}\\subset G$ and the function $F: M \\to N^{m+1}$ defined in Lemma \\ref{lem:toprank},\n \\begin{equation*}\n F(x) = \\left(u(x),u(g_1 x), \\dotsc, u(g_m x)\\right)\n \\end{equation*}\n define $C= \\textrm{image } F$. If $m=0$, then $(F,\\textrm{id})$ is a quasiperiodic factorization of $u$. If $m>0$, then\n \\begin{enumerate}\n \\item $C$ is an immersed submanifold of $N^{m+1}$, let $i:C' \\to C \\subset N^{m+1}$ be the immersion, and \n \\item $F$ can be pulled back to $\\phi : M \\to C'$ so that there is a $U:C' \\to N$ with $u = U \\circ \\phi$ being a quasiperiodic factorization.\n \\end{enumerate}\n\\end{thm}\n\\begin{proof}\n \\begin{enumerate}\n \\item By Lemma \\ref{lem:toprank}, $dF$ can be constructed so that it has constant rank, so $C$ is an immersed submanifold \\cite[Thm. 7.13]{Lee_2003}. Let $i: C' \\to C$ be the immersion. 
Without loss of generality, assume that self-intersections of $C'$ are transverse. Self-intersections are therefore finite sets, because they have dimension\n \\begin{equation*}\n 2 \\dim C' - (m+1) \\dim N \\le (1 - m) \\dim C' \\le 0\n \\end{equation*}\n since $\\dim C' \\le \\dim N$ by construction.\n \\item $F$ is surjective onto $C$ by construction, so we wish to construct a surjective $\\phi$ so that the diagram\n \\begin{equation*}\n \\xymatrix{\n M \\ar[r]^F \\ar[d]_\\phi& C\\\\\n C' \\ar[ur]_i & \\\\\n }\n \\end{equation*}\n commutes. The only issue is when the image of $C'$ intersects itself, because away from those self-intersections, $i$ is injective. Let $x\\in M$ be such that $F(x)$ is at a place where $C'$ intersects itself in $C$. We assumed self-intersections of $C'$ are transverse, so there are finitely many preimages $y_1,\\dotsc, y_p$ of $F(x)$ in $C'$ which could be chosen as $\\phi(x)$. Because $dF$ is of constant rank and because the self-intersections are transverse, $dF$ will take the tangent space at $x\\in M$ to exactly one of the images of the tangent spaces $T_{y_j}$ through $i$. We simply let $\\phi(x) = y_j$, and define $U = \\textrm{pr}_1 \\circ i$ to obtain the quasiperiodic factorization of $u$.\n \\end{enumerate}\n\\end{proof}\n\n\\subsection{Universal quasiperiodic factorizations}\nAlthough there are many quasiperiodic factorizations of a smooth function, they are related to one another. Although $(F,\\textrm{pr}_1)$ may differ from $(\\phi,U)$, its use in the QPLPF will not destroy the structure of $u$.\n\n\\begin{df}\n The quasiperiodic factorizations of $u:M \\to N$ form a category ${\\bf QuasiP}(u)$ in which the objects are quasiperiodic factorizations $(\\phi,U)$, the morphisms $(\\phi,U)\\to(\\phi',U')$ are commutative diagrams of the form\n \\begin{equation*}\n \\xymatrix{\n M \\ar[r]^\\phi \\ar[d]^{\\phi'}& C \\ar[dl]\\ar[d]^U \\\\\n C' \\ar[r]^{U'} & N \\\\\n }\n \\end{equation*}\n\\end{df}\n\n\\begin{eg}\n The category ${\\bf QuasiP}(u)$ is usually not finite: consider $u(x) = \\sin x$, because then if $\\phi:\\mathbb{R}\\to S^1$, $U$ can represent any finite number of periods of $\\sin$ on $S^1$.\n\\end{eg}\n\n\\begin{prop} \\cite[Thm. 5]{RobinsonSampTA2015}\n\\label{prop:uqf}\n If $u:M \\to N$ is a smooth map, the category ${\\bf QuasiP}(u)$ has a unique final object called the \\emph{universal quasiperiodic factorization} of $u$. It also has a trivial initial object $(\\text{id},u)$. The category ${\\bf QuasiP}(u)$ also has coproducts, which allow one to constructively reduce the phase space.\n\\end{prop}\n\nQuasiperiodic factorizations impose specific restrictions on the ranks of the derivatives of $\\phi$ and $U$.\n\n\\begin{lem}\n \\label{lem:quasirank}\n If $(\\phi,U)$ is any quasiperiodic factorization of $u$, then\n \\begin{equation*}\n \\textrm{rank } du(x) \\le \\min\\left\\{\\textrm{rank } d\\phi(x), \\textrm{rank } dU(\\phi(x))\\right\\} \\le \\textrm{rank } d\\phi.\n \\end{equation*}\n and $\\textrm{rank } dU(\\phi(x)) = \\textrm{rank } du(x)$ for all $x\\in M$.\n\\end{lem}\n\\begin{proof}\n Merely note that the $\\textrm{rank } d\\phi$ is constant because $\\phi$ is a submersion. 
Additionally, by Sylvester's inequality, if $\\phi : M \\to C$,\n \\begin{eqnarray*}\n \\textrm{rank } d\\phi + \\textrm{rank } dU(\\phi(x)) - \\dim C& \\le& \\textrm{rank } du(x)\\\\\n \\textrm{rank } dU(\\phi(x)) &\\le & \\textrm{rank } du(x)\\\\\n \\end{eqnarray*}\n from which the result follows.\n\\end{proof}\n\nThe universal quasiperiodic factorization involves the unique minimal phase space.\n\n\\begin{prop}\n \\label{prop:uqf_dim}\n If $(\\phi,U)$ is a universal quasiperiodic factorization, then\n \\begin{equation*}\n \\textrm{rank } d\\phi = \\max_{y \\in M} \\textrm{rank } du(y).\n \\end{equation*}\n\\end{prop}\n\\begin{proof}\n If it happens that $\\textrm{rank } d\\phi > \\max_{y \\in M} \\textrm{rank } du(y)$, then we can show the factorization is not universal. Specifically, notice that by Lemma \\ref{lem:quasirank}\n \\begin{equation*}\n \\dim \\ker dU(y) > 0 \n \\end{equation*}\n for all $y \\in C$. Thus, there is at least one nonvanishing, smooth vector field $v$ on $C$ that is annhiliated by $dU$. Solving for the flow along $v$ yields a 1-parameter family of diffeomorphisms $D_t$. The action of $D_t$ is a symmetry of $U$, namely for all $t\\in \\mathbb{R}$, $U \\circ D_t = U$.\n Thus, $\\phi: M \\to C$ descends to the quotient $C \/ D$ -- whose dimension is strictly less than that of $C$ -- yielding a unique $\\phi'$ making the diagram\n \\begin{equation*}\n \\xymatrix{\n M \\ar[r]^{\\phi} \\ar[rd]_{\\phi'}& C \\ar[d] \\ar[r]^{U} & N \\\\\n & C\/D\\ar[ru]_{U'} &\\\\\n }\n \\end{equation*}\n commute. Observe that $(\\phi',U')$ is a quasiperiodic factorization, so we conclude that $(\\phi,U)$ was not final in ${\\bf QuasiP}(u)$ and therefore not a universal quasiperiodic factorization.\n\\end{proof}\n\n\\subsection{Noise performance}\n\\label{sec:noise_perf}\nPerformance of the QPLPF on noisy signals is governed both by Theorem \\ref{thm:phase_recovery} and by the neighborhood size $S$. We would like to minimize the recovery error in the $L^2$ norm,\n\\begin{align*}\n & \\left\\|(\\text{QPLPF}\\; \\tilde{u})(x_i) - u(x_i) \\right\\| = \\left\\| \\frac{1}{1+ S}\\sum_{[x_i,x_j]\\in H} \\tilde{u}(x_j) - u(x_i) \\right\\|\\\\\n & \\le \\left\\| \\frac{1}{1+ S}\\sum_{[x_i,x_j]\\in H} u(x_j) - u(x_i)\\right\\| + \\frac{\\sigma}{\\sqrt{1+S}}\n\\end{align*}\nwhere we have used independence of the noise in the last step. The first term above is the \\emph{Stage 1 error} and the second term is the \\emph{Stage 2 error}. The Stage 2 error in the QPLPF is essentially the best that can be obtained without further knowledge of the statistics of $n$.\n\nGiven that $u$ is $(\\phi,U)$-quasiperiodic, we can have substantially better control of the Stage 1 error. Unless it is perfectly matched to the signal, a traditional filter has nonzero Stage 1 error even if there is no noise. If there is no noise present and $S$ is small enough, so that\n\\begin{equation}\n \\label{eq:neighborhood_size}\n S \\le \\# (\\phi^{-1}(x_i) \\cap X)\n\\end{equation}\nfor all $x_i \\in X$, we have that $\\phi(x_j) = \\phi(x_i)$ for all adjacent pairs $(x_i,x_j)$. This situation causes the Stage 1 error\n\\begin{align*}\n &\\left\\| \\frac{1}{1+ S}\\sum_{[x_i,x_j]\\in H} u(x_j) - u(x_i)\\right\\| \\\\\n & = \\left\\| \\frac{1}{1+ S}\\sum_{[x_i,x_j]\\in H} U\\left(\\phi(x_j)\\right) - U\\left(\\phi(x_i)\\right)\\right\\|\n\\end{align*}\nto completely vanish for the QPLPF! 
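Before quantifying the effect of noise on the neighborhood graph, it may help to make the two stages concrete. The following Python sketch is illustrative only (function and parameter names are not from a reference implementation): it treats the simplest setting of a uniformly sampled scalar signal, realizes the delays $g_k$ as integer sample shifts, uses the Euclidean metric on the image of $F$, and averages according to~\\eqref{eq:qplpf_highlevel}.
\\begin{verbatim}
import numpy as np

def qplpf(samples, delays, S):
    # Minimal sketch of the two-stage QPLPF for a 1-D uniformly sampled signal.
    samples = np.asarray(samples, dtype=float)
    m = max(delays)
    # Stage 1a, delay immersion: F(x_i) = (u(x_i), u(x_i+g_1), ..., u(x_i+g_m));
    # the trailing max(delays) samples are dropped in this sketch.
    F = np.stack([samples[:len(samples) - m]] +
                 [samples[d:len(samples) - m + d] for d in delays], axis=1)
    out = np.empty(F.shape[0])
    for i in range(F.shape[0]):
        # Stage 1b, discretization: S nearest neighbors of F(x_i), plus x_i itself
        dist = np.linalg.norm(F - F[i], axis=1)
        nbrs = np.argsort(dist)[:S + 1]
        # Stage 2, neighborhood averaging over the 1+S adjacent samples
        out[i] = samples[nbrs].mean()
    return out
\\end{verbatim}
A brute-force nearest-neighbor search is used above for clarity; in practice a spatial index would replace it.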
\n\n\\begin{prop}\nWhen a $(\\phi,U)$-quasiperiodic function $u$ with $\\textrm{rank } du(x) < \\dim M$ for all $x$ is given as input to the QPLPF, the output is exactly $u$.\n\\end{prop}\n\\begin{proof}\nThe condition $\\textrm{rank } du(x) < \\dim M$ ensures that preimages of points through $\\phi$ have dimension greater than zero, so that \\eqref{eq:neighborhood_size} can be satisfied.\n\\end{proof}\n\n\nWhen noise is present, there is a tradeoff between keeping $S$ small enough to satisfy \\eqref{eq:neighborhood_size} but large enough to control the Stage 2 error. The Stage 1 error is controlled both by $S$ and $m$ through the construction of the graph $H$. A loose upper bound on the Stage 1 error is\n\\begin{align*}\n &\\left\\| \\frac{1}{1+ S}\\sum_{[x_i,x_j]\\in H} U\\left(\\phi(x_j)\\right) - U\\left(\\phi(x_i)\\right)\\right\\| \\\\\n\n & \\le \\|U\\|_\\infty \\max_{[x_i,x_j]\\in H} \\left\\| \\phi(x_j) - \\phi(x_i)\\right\\| \n\\end{align*}\nAlthough noise does not enter into the norm expression, it does impact our construction of $H$. If the phase function (and hence $H$ also) is known outright, then $S$ can be chosen optimally even in the face of noise. Otherwise, the QPLPF must rely on its estimate $F$ of $\\phi$ instead.\n\\begin{align*}\n & \\|U\\|_\\infty \\max_{[x_i,x_j]\\in H} \\left\\| \\phi(x_j) - \\phi(x_i)\\right\\| \\\\\n & \\approx \\|U\\|_\\infty \\max_{[x_i,x_j]\\in H} \\left\\| F(x_j) - F(x_i)\\right\\| \\\\\n & \\le \\|U\\|_\\infty \\max_{[x_i,x_j]\\in H} \\left(\\left\\| u(x_j) - u(x_i)\\right\\| + \\frac{\\sigma}{\\sqrt{m}}\\right)\n\\end{align*}\nAgain, if $S$ is small enough, then all $[x_i,x_j]\\in H$ will satisfy $u(x_i) \\approx u(x_j)$, so first term above will typically be small. The second term will usually dominate for small amounts of noise, and this can be controlled by increasing $m$.\n\n\\section{Results}\n\\label{sec:results}\nThis section presents three experimental data sets that validate both the theory and implementation of the QPLPF. The first two data sets are simulated, while the third set uses image data collected by a satellite.\n\n\\subsection{Performance on simulated data}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=3in]{qpi_images} \n\\caption{A noisy quasiperiodic image (left) and QPLPF output (right) when filtered with an matching window of 10 pixels and averaging 10 pixels. Axes are in pixels.}\n\\label{fig:qpi_images}\n\\end{center}\n\\end{figure}\n\nFigure \\ref{fig:qpi_images} shows the performance of the QPLPF applied to a noisy quasiperiodic image (left). The QPLPF output is shown at right, and shows a visible improvement over the entire image.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=3in]{lfm_space} \n\\caption{An example of the estimated space for the case of an LFM chirp. Coordinates are the first four principal components ($x$, $y$, $z$, and color)}\n\\label{fig:lfm_space}\n\\end{center}\n\\end{figure}\n\nOur implementation of the QPLPF on images is not particularly efficient, therefore for our statistical validation, we considered the discretized linear frequency modulated (LFM) chirp given by\n\\begin{equation}\n \\label{eq:lfm}\nu(t)=\\sin\\left(\\frac{2\\pi}{5} t (t+1)\\right) + n(t)\n\\end{equation}\nwhere $t=0, 1\/50, \\dotsc 10$ and $n(t)$ is additive white Gaussian noise. This function is quasiperiodic, with a period that decreases with increasing $t$ over the given interval. 
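In terms of the sketch given in the noise-performance discussion above, this experiment can be mocked up as follows (the noise level, random seed, and window parameters shown here are illustrative rather than the exact values used to produce the figures).
\\begin{verbatim}
import numpy as np

t = np.linspace(0.0, 10.0, 501)                 # t = 0, 0.02, ..., 10
clean = np.sin(0.4 * np.pi * t * (t + 1.0))     # the LFM chirp defined above
rng = np.random.default_rng(0)
noisy = clean + 0.5 * rng.standard_normal(t.size)   # additive white Gaussian noise
filtered = qplpf(noisy, delays=range(1, 51), S=15)  # 50-sample matching window
\\end{verbatim}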
The output of Stage 1 of the QPLPF using a window size of (50 samples for the topological estimation stage and 15 samples for the averaging stage) is shown in Figure \\ref{fig:lfm_space}, which suggests that the state space is a knotted circle. The output of the QPLPF is shown as the red curve at right in Figure \\ref{fig:lfm_output}.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=3.5in]{lfm_output} \n\\caption{Comparison of output of a typical adaptive filter (left) and QPLPF (right) applied to a noisy LFM signal \\eqref{eq:lfm}. Both filters used a window size of 15 samples for averaging. The QPLPF used a window size of 50 samples for topological estimation}\n\\label{fig:lfm_output}\n\\end{center}\n\\end{figure}\n\nFor comparison, the left frame of Figure \\ref{fig:lfm_output} also shows the output of an adaptive variable-bandwidth filter, that uses as sliding window of 15 samples (same as the QPLPF) to estimate a local maximum frequency, and then sets the local averaging block size according to that frequency. As the Figure shows, although the adaptive filter recovers the signal's frequency well, it does not produce a stable amplitude. In contrast, the QPLPF does a better job of recovering the amplitude. The QPLPF suffers no penalty as a function of SNR for this stability.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=3.5in]{qplpf_topo_perf_50_11_} \n\\caption{Comparison of the RMS filter error for a noisy LFM chirp as a function of SNR. See text for window parameters.}\n\\label{fig:lfm_error_rms}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=3.5in]{qplpf_envelope_perf_50_11_} \n\\caption{Comparison of the RMS envelope variability as a function of SNR. See text for window parameters.}\n\\label{fig:lfm_error_env}\n\\end{center}\n\\end{figure}\n\nFigures \\ref{fig:lfm_error_rms} and \\ref{fig:lfm_error_env} shows the performance of the QPLPF and the adaptive filter as a function of SNR for an LFM signal like what is shown in Figure \\ref{fig:lfm_output}. A boxcar filter and the averaging stage of the QPLPF using the true phase space -- both with a fixed window size of 11 samples -- are included for comparison. The QPLPF used a window size of 50 samples for topological estimation and a window size of 11 samples for averaging. The adaptive boxcar filter used a window size of 50 samples for frequency estimation, and its averaging window was set adaptively at Nyquist based on this estimate.\n\nThe vertical axis of Figure \\ref{fig:lfm_error_rms} shows the RMS difference between the original (noiseless) signal and the output of each filter. Since the amplitude of the original signal was held constant at 1, the RMS measurement of the envelope of the ideal output should be zero. The envelope signal is produced by linearly interpolating between peaks of the output signal. The vertical axis of Figure \\ref{fig:lfm_error_env} shows the RMS envelope of each output signal. \n\nThe three variable-bandwidth filters (the QPLPF, the adaptive filter, and the QPLPF averaging stage) all exhibit improved RMS error and improved envelope stability as the SNR improves. However, the QPLPF exhibits better performance when the SNR is lower. 
The QPLPF exhibits considerably greater envelope stability than the adaptive filter, an effect which is most pronounced at low SNR.\n\n\\subsection{A maritime SAR image}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=3in]{sar_image} \n\\caption{Ocean SAR image before (left) and after (right) the application of the QPLPF with topological estimation window of $10\\times 10$ pixels and an averaging window of $150$ pixels. Axes in pixels, each of which is a square, 25 meters on a side. Images copyright \\copyright DLR 2014.}\n\\label{fig:sar_image}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=3in]{sar_spectrum} \n\\caption{Spectrum of SAR images before (left) and after (right) the application of the QPLPF. Axes in radian\/meter}\n\\label{fig:sar_spectrum}\n\\end{center}\n\\end{figure}\n\nThis example demonstrates the QPLPF applied to the left frame of Figure \\ref{fig:sar_image}, a $100\\times 100$ pixel SAR image acquired by the German satellite TerraSAR-X on 9 March 2014 over the Gulf of Maine at 25 meters per pixel. The diagonal striations in the image are produced by ocean swells that are roughly 80 meters in wavelength. Figure \\ref{fig:sar_spectrum} at left shows the 2d FFT of the image, from which the ocean wave spatial frequency and direction can be easily discerned. Both the image and spectrum have been corrupted by speckle and noise, and the spectrum shows a horizontal streak artifact. After applying the QPLPF with a matching window size of 10 pixels and a blocksize of 150 pixels, we obtain the images at right in Figures \\ref{fig:sar_image} and \\ref{fig:sar_spectrum}. Notice that the QPLPF improves both the apparent contrast of individual waves and the SNR in the spectrum.\n\n\\section{Conclusion}\n\nThis article presented the QPLPF, a two-stage topological filter that performs averaging on an estimated phase space of a signal. The correctness of this approach was proven theoretically, was demonstrated statistically on simulated data, and was exhibited on experimental data.\n\n\n\\section*{Acknowledgements}\nThe author would like to thank the American University Vice Provost for Graduate Studies and Research and the DC Space Grant Consortium for providing partial funding for this project. Partial funding was also provided by the Office of Naval Research via Federal Contract No. N00014-15-1-2090. 
The author also thanks the Deutsches Zentrum f\\\"{u}r Luft und Raumfahrt (DLR) for supplying the SAR imagery used on this project.\n\n\\bibliographystyle{IEEEbib}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nOver the last decade, the Sachdev-Ye-Kitaev (SYK) model has been established as a paradigmatic model accounting for a variety of phenomena ranging from aspects of the physics of black holes to non-Fermi liquids~\\cite{Chowdhury-RMP2022,Rosenhaus2019-review,Franz2018-review,patel_quantum_2017}.\nThere exist two main variants of this model in the literature: one that is formulated in terms of $N$ 'complex' Dirac fermions,\nand another one written in terms of $N$ 'real' Majorana fermions.\nIn both cases, the fermions interact via random four-body terms.\nIrrespective of the formulation, one of the main features of the model is that it exhibits emergent conformal symmetry in the infrared in the strong-coupling and large-$N$ limit.\nThe scaling dimension of the fermion correlation function is given by $\\Delta = \\frac{1}{4}$ \\cite{maldacena_comments_2016,polchinski_spectrum_2016},\nindicative of strong interactions (for comparison, a free fermion has scaling dimension $1\/2$). \n\nThere has been a variety of proposals for the creation of SYK-like models in laboratory setups.\nThey range from mesoscopic systems hosting Majorana modes~\\cite{Pikulin2017,Chew2017},\nor Dirac fermions in graphene flakes~\\cite{Chen2018,can_charge_2019},\nto ultracold atomic systems \\cite{danshita2017creating,wei2021optical}.\nA comprehensive review of such possible setups can be found in Refs.~\\cite{Chowdhury-RMP2022,Franz2018-review} and references.\n\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=1.0\\columnwidth]{pic\/TimeLine.png}\n \\caption{The SYK model exhibits multiple characteristic timescales, and with that associated regimes of dynamics.\nCrucial quantities in distinguishing the different limits are the number of fermions $N$ and coupling strength $\\beta J$.\nThis paper studies the region characterized by Lyapunov growth.\n }\n \\label{fig:TimeLine}\n\\end{figure}\n\nThe SYK model involves three important time scales, as shown in Fig.~\\ref{fig:TimeLine} (henceforth,\nwe measure time $t$ in units of $\\beta$ and set $\\hbar=k_b=1$).\nThey are called the Plackian time~\\cite{Zaanen2019,hartnoll2021planckian,patel_magnetotransport_2018,hartnoll2022colloquium}, $t_P$,\nthe Ehrenfest time~\\cite{gu2019relation,larkin1969quasiclassical,hashimoto_out--time-order_2017,kobrin_many-body_2021,craps_lyapunov_2020}, $t_E$,\nand the Heisenberg time $t_H$. The shortest time scale, $t_P$,\nis set by the condition $t_P\/\\beta\\approx 1$. For times shorter than $t_P$,\nwe expect non-universal physics determined by processes at the cutoff scale.\nFor $t_P 0.7$ and $\\beta J \\gtrsim 300$ the b-SYK model saturates the quantum chaos bound of $\\lambda = 2\\pi\/\\beta$.\n The special case $\\kappa=1$ has identical $\\lambda$ as in the SYK model. \n }\n \\label{fig_Lyaponov}\n\\end{figure}\n\\begin{figure}\n\\includegraphics[width=1.0\\columnwidth]{pic\/Ly_vs_Delta.png}\n \\caption{\n The strong-coupling limit $\\beta J \\gg 1$ of the Lyapunov exponent $\\lambda$ as function of the fermion flavor ratio $\\kappa=N_A\/N_B$ (upper panel) or the scaling dimension $\\Delta_A$ (lower panel).\n We find a finite region of $\\kappa$ where the quantum chaos bound of the Lynapunov exponent is saturated. 
Conversely,\nit drops to zero when the fermion species are strongly imbalanced,\n$N_A \\ll N_B$ or $N_A \\gg N_B$ ($\\kappa\\to 0$ or $\\kappa\\to\\infty$, respectively).\n \\label{fig_Ly_vs}}\n\\end{figure}\n\n\n\n\n\\section{Results}\n\\label{sec_Results}\n\nWe now present and discuss the results of our numerical calculations.\nIn Figure~\\ref{fig_Lyaponov} we plot the Lyapunov exponent $\\lambda$ as a function of the coupling $\\beta J$, and for various values of $\\kappa=N_A\/N_B$.\nWe see that $\\lambda$ grows with the interaction strength $J$, as well as $\\kappa$, when it goes from $0$ to $1$. \nFor $\\kappa > 0.7$ and $\\beta J \\gtrsim 300$, the b-SYK model saturates the quantum chaos bound of $\\lambda = 2\\pi\/\\beta$, at least up to numerical precision.\nThe special case $\\kappa=1$ has identical $\\lambda$ as the SYK model. Qualitatively, a reason for this is that both flavors of Majoranas in this limit have exactly the same scaling dimensions as in the regular SYK model, which is $\\Delta = \\frac{1}{4}$, and their Green functions are identical. \nIf $\\kappa < 0.7$, then $\\lambda$ reaches a maximum at large $\\beta J$, only to decrease marginally at even larger values.\nWe believe that this reversal and decay for large couplings $\\beta J$ is a finite-size effect related to the resolution of our numerical calculations and that $\\lambda$ should flatten out. \n\nIn Figure~\\ref{fig_Ly_vs}, we plot the strong coupling Lyapunov exponent (fixed large $\\beta J$) as a function of the ratio of fermion flavors $\\kappa=N_A\/N_B$, or the scaling dimension $\\Delta_A$.\nThere is a finite region of $ \\kappa $ where the quantum bound is saturated.\nHowever, the maximum Lyapunov exponent goes to zero when $N_A \\ll N_B$ (or $\\kappa\\to0$).\n\nWe can understand the vanishing of the Lyapunov exponent in the following way.\nWhen $\\kappa\\to0$, our system is described by two different types of free fermions.\nThe $A$-fermions have a scaling dimension of $\\Delta_A = \\frac{1}{2}$,\nand the $B$-fermions have a scaling dimension of $\\Delta_B = 0$.\nThe interpretation then is that the huge bath of $B$-fermions has no dynamics,\nand the sparse number of $A$-fermions then are effectively free to move around in the bath.\nWe also see in this limit that the model is non-chaotic, and the Lyapunov exponent vanishes even at strong coupling. \n\n\n\nThe shape of the curve in Fig.~\\ref{fig_Ly_vs} for intermediate $\\kappa\\neq 0,1$ leaves room for interpretation.\nThere are two possible scenarios that are very different in nature. The first scenario is that as soon as $\\kappa$ is non-zero,\nthe non-trivial scaling dimension leads to the Lyapunov exponent saturating at the maximal bound at strong enough coupling.\nIn that case the smooth shape of the curve could be seen as a result of intermediate values of coupling; the numerics allows us to reach a maximum $\\beta J = 1000$.\nBased on our data, however, we believe this to be unlikely. Specifically,\nFig.~\\ref{fig_Lyaponov} shows a saturating Lyapunov exponent for small values of $\\kappa$ across two decades of coupling strength,\nsuggesting that the strong-coupling limit has indeed been reached. \n\nA second scenario is that the non-chaotic behavior and vanishing Lyapunov exponent at $\\kappa=0$ dominates the four-point function even at small but finite $\\kappa$,\nleading to a non-maximal value of the Lyapunov exponent. 
In this case,\none would expect a crossover behavior as a function of $\\kappa$ (or equivalently,\n$\\Delta_{A,B}$) from a situation with the absence of chaos to one with maximal chaos.\n\nThe accuracy of our numerics does not allow a clear-cut answer to this question,\nbut for the reasons outlined above we strongly favor scenario two.\nFurther analysis of these questions in the present and related models will add to the discussion of the possible reasons for non-maximal or maximal chaos and Lyapunov exponents to emerge in models with conformal symmetry in the IR~\\cite{blake2021systems,tikhanovskaya2022maximal,jiang2019thermodynamics}.\nFor the future, we hope that we can further analyze the shape of the curve,\na possible starting point being a perturbative expansion around $\\kappa=0$.\n\nThe present work on the calculation of the Lyapunov exponent in the b-SYK model shows that the features of emergent conformal symmetry and maximal quantum chaos of the SYK model are quite robust to the couplings obeying additional internal symmetries.\nBesides the particular model considered here, there are many setups where parity, charge, spin,\nor general flavor symmetries of the underlying fermions carry over to the interaction matrix elements~\\cite{Chowdhury-RMP2022,Franz2018-review,Kim2019,sahoo_traversable_2020,xu_sparse_2020}.\nThe methods used here readily carry over to those models and can be applied to the calculation of Lyapunov exponents and, in general, to the analysis of Bethe-Salpeter equations.\n\n\n\\begin{acknowledgments}\nWe acknowledge discussions with Y. Cheipesh, A. Kamenev, K. Schalm, M. Haque, and S. Sachdev.\nSP thanks E. Lantagne-Hurtubise, O. Can, S. Sahoo, and M. Franz for many useful discussions related to SYK models and holography. This work is part of the D-ITP consortium,\na program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW).\nSP received funding through the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program.\n\\end{acknowledgments}\n\n\n\\section*{Author Contributions}\nA.S.S and M.F contributed equally to this work.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:int}\nLet $\\mathcal{H}_0, \\mathcal{H}_1,\\ldots, \\mathcal{H}_n$ be real Hilbert spaces and let $\\inner{\\cdot}{\\cdot}$ and\n $\\|\\cdot\\|=\\sqrt{\\inner{\\cdot}{\\cdot}}$ denote the inner product and norm (respectively) in $\\mathcal{H}_i$ ($i=0,\\dots, n$).\n Assume that $\\mathcal{H}_0=\\mathcal{H}_n$. 
\n %\n Let $\\boldsymbol{\\mathcal{H}}:=\\mathcal{H}_0\\times \\dots\\times \\mathcal{H}_{n-1}$ be endowed with the inner product and norm\ndefined, respectively, as follows (for some $\\gamma >0$):\n\\begin{align}\n \\label{eq:def.inner}\n\\inner{(z,w)}{(z',w')}_{\\gamma}=\\gamma\\inner{z}{z'}+\\sum_{i=1}^{n-1}\\inner{w_i}{w'_i},\\quad\n\\norm{(z,w)}_\\gamma^2=\\gamma \\norm{z}^2+\\sum_{i=1}^{n-1}\\norm{w_i}^2,\n\\end{align}\nwhere $z,z'\\in \\mathcal{H}_0$ and $w:=(w_1,\\dots, w_{n-1}), w':=(w'_1,\\dots, w'_{n-1})\\in \\mathcal{H}_1\\times \\ldots\\times \\mathcal{H}_{n-1}$.\n\nConsider the monotone inclusion problem of finding $z\\in \\mathcal{H}_0$ such that\n\\begin{align}\n \\label{eq:Pmip}\n 0\\in \\sum_{i=1}^{n}G_i^*T_iG_i(z)\n\\end{align}\nwhere $n\\geq 2$ and the following assumptions hold:\n\\begin{itemize}\n \\item[\\mbox{(A1)}] For each $i=1,\\dots, n$, the operator $T_i:\\mathcal{H}_i\\rightrightarrows \\mathcal{H}_i$ is (set-valued) maximal \nmonotone and $G_i:\\mathcal{H}_0\\rightarrow \\mathcal{H}_i$ is a bounded linear operator.\n \\item[\\mbox{(A2)}] The linear operator $G_n$ is equal to the identity map in $\\mathcal{H}_0=\\mathcal{H}_n$, i.e., $G_n:z\\mapsto z$ \nfor all $z\\in \\mathcal{H}_0$.\n \\item[\\mbox{(A3)}] The solution set of \\eqref{eq:Pmip} is nonempty, i.e., there exists at least one $z\\in \\mathcal{H}_0$ satisfying\nthe inclusion in \\eqref{eq:Pmip}.\n\\end{itemize}\n\n\n\nProblem \\eqref{eq:Pmip} appears in different fields of applied mathematics and optimization, \nincluding machine learning, inverse problems and image processing~\\cite{alot.combet.shah.2014,Johnst.Eckst.2018,Johnst.Eckst.2019}, especially in connection with\nthe convex optimization problem\n\\begin{align}\n \\label{eq:opt}\n %\n \\min_{z\\in \\mathcal{H}_0}\\,\\sum_{i=1}^n\\,f_i(G_i z)\n\\end{align}\nwhere, for $i=1,\\dots, n$, each $f_i:\\mathcal{H}_i\\to (-\\infty,+\\infty]$ is proper, closed and convex. Indeed, under mild assumptions\non $f_i$ and $G_i$, the minimization problem \\eqref{eq:opt} is equivalent to the monotone inclusion problem \\eqref{eq:Pmip} with $T_i=\\partial f_i$ ($i=1,\\dots, n$). \n\nA very popular strategy to find approximate solutions of \\eqref{eq:Pmip} is that of (monotone) operator splitting algorithms, \nwhich traces back to the development of some well-known numerical schemes such as the Douglas--Rachford splitting algorithm and Spingarn's method of partial inverses, among others. 
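\n\nFor the sake of completeness, we recall one standard set of assumptions under which the equivalence between \\eqref{eq:opt} and \\eqref{eq:Pmip} mentioned above holds: if, for instance, there exists $\\bar z\\in \\mathcal{H}_0$ such that $f_n(\\bar z)<+\\infty$ and, for $i=1,\\dots, n-1$, $f_i$ is finite and continuous at $G_i\\bar z$, then standard subdifferential calculus gives\n\\begin{align*}\n \\partial\\left(\\sum_{i=1}^n\\,f_i\\circ G_i\\right)(z)=\\sum_{i=1}^{n}G_i^*\\partial f_i(G_i z)\\qquad \\forall z\\in \\mathcal{H}_0,\n\\end{align*}\nso that, by Fermat's rule, $z$ solves \\eqref{eq:opt} if and only if $0\\in \\sum_{i=1}^{n}G_i^*\\partial f_i(G_i z)$, i.e., if and only if $z$ solves \\eqref{eq:Pmip} with $T_i=\\partial f_i$ ($i=1,\\dots, n$).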
\n\nThe family of \\emph{projective splitting algorithms} for solving \\eqref{eq:Pmip}, originated in \\cite{eck.sva-gen.sjco09}\nfor the case that $G_i$ is the identity ($i=1,\\dots, n$), and later on developed in different directions, e.g., \nin \\cite{alot.combet.shah.2014,Johnst.Eckst.2018,Johnst.Eckst.2019}, has received a lot of attention\nin modern operator splitting research, mainly due to its flexibility (when compared to other classes of operator splitting algorithms) regarding parameters and the activation of $T_i$ and $G_i$ separately during the iterative process.\n\nThe derivation of the class of projective splitting algorithms can be motivated as follows.\nFirst note that using Assumption (A2) above, we obtain that \\eqref{eq:Pmip} can equivalently be written as\n\\begin{align}\n \\label{eq:Pmip02}\n 0\\in \\sum_{i=1}^{n-1}G_i^*T_iG_i(z) + T_n(z)\n\\end{align}\nwhich, in turn, is clearly equivalent to the (feasibility) problem of finding a point in the \\emph{extended solution set} of \\eqref{eq:Pmip} \n(or \\eqref{eq:Pmip02}):\n\\begin{align}\n\\label{eq:KTSi}\n \\mathcal{S}:=\\left\\{(z,w_1,\\ldots,w_{n-1})\\in \\boldsymbol{\\mathcal{H}}\\;\\;|\\;\\;w_i\\in T_i(G_iz),\\, i=1,\\ldots,n-1,\\, \n-\\sum_{i=1}^{n-1}G^*_iw_i\\in T_n(z)\\right\\}.\n\\end{align}\nSince $\\mathcal{S}$ is nonempty (see Assumption (A3)), closed and convex \n(see, e.g., \\cite{alot.combet.shah.2014,eck.sva-gen.sjco09}) in \n$\\boldsymbol{\\mathcal{H}}$, it follows that problem \\eqref{eq:Pmip} reduces to the task of finding a point in $\\mathcal{S}$ (a fact that\nmotivates the abstract framework developed in Section \\ref{sec:spm} below).\n\nNote now that, if we pick $y_i^k\\in T_i(x_i^k)$ ($i=1,\\dots, n$), then from the monotonicity of $T_i$ and the inclusions\nin \\eqref{eq:KTSi}, it follows that\n\\begin{align}\n \\label{eq:sep.i}\n \\sum_{i=1}^n\\,\\inner{G_i z-x_i^k}{y_i^k- w_i}\\leq 0\\qquad \\forall (z,w_1,\\dots, w_{n-1})\\in \\mathcal{S},\n\\end{align}\nwhere \n\\begin{align}\n \\label{eq:wni}\n w_n:=-\\sum_{i=1}^{n-1}G^*_iw_i.\n %\n \\end{align}\nThe inequality \\eqref{eq:sep.i} says, in particular, that $\\{(x_i^k,y_i^k)\\}_{i=1}^n$ defines a function of $(z,w_1,\\dots, w_{n-1})$ which is nonpositive on $\\mathcal{S}$. \nSince this function can be proved to be affine (see, e.g., Lemma \\ref{lmm:grad.sep} below), it follows \nfrom \\eqref{eq:sep.i} that it defines a semispace in $\\boldsymbol{\\mathcal{H}}$ containing the extended solution set $\\mathcal{S}$.\n\nBased on the exposition above, it follows that the main mechanism behind the idea of projective splitting algorithms is basically: \nat the iteration $(z^k,w_1^k,\\dots, w_{n-1}^k)$, pick, for each $i=1,\\dots, n$, a pair $(x_i^k,y_i^k)$ in the graph of $T_i$ and\nthen update the current iterate $p^k:=(z^k,w_1^k,\\dots, w_{n-1}^k)$ to \n$p^{k+1}:=(z^{k+1},w_1^{k+1},\\dots, w_{n-1}^{k+1})$ by projecting $p^k$ onto the semispace defined by the affine function given on the left-hand side of \\eqref{eq:sep.i}. \nComputation of $(x_i^k,y_i^k)$ is in general performed by (inexactly) activating the resolvent operator $(T_i+I)^{-1}$ of each $T_i$ to guarantee, in particular, that\nthe current iterate $(z^k,w_1^k,\\dots, w_{n-1}^k)$ belongs to the positive side of the corresponding hyperplane.\n\n\n\\vspace{.1in}\n\nIn this paper, we propose and study a relative-error inertial-relaxed inexact projective splitting algorithm for solving \\eqref{eq:Pmip} and, in particular, for solving the convex program \\eqref{eq:opt}. 
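\n\nBefore proceeding, we illustrate the basic mechanism just described with a minimal sketch (in Python with NumPy); the function and variable names below are ours and purely illustrative. Given the gradient $g\\neq 0$ and the constant term $c$ of an affine function $\\varphi(p)=\\inner{g}{p}+c$ which is nonpositive on $\\mathcal{S}$, the projection of a point onto the semispace $\\{p\\,|\\,\\varphi(p)\\leq 0\\}$ has the closed form used below; the second function anticipates the inertial ($\\alpha$) and relaxation ($\\beta$) parameters that will be attached to this step in the next sections.\n\\begin{verbatim}\nimport numpy as np\n\ndef project_onto_semispace(p, g, c):\n    # Projection of p onto {q : <g, q> + c <= 0}, assuming g != 0.\n    phi = g @ p + c\n    if phi <= 0.0:\n        return p.copy()\n    return p - (phi / (g @ g)) * g\n\ndef inertial_relaxed_projection_step(p, p_prev, g, c, alpha, beta):\n    # Extrapolate (inertia), project onto the semispace, then relax.\n    p_hat = p + alpha * (p - p_prev)\n    p_tilde = project_onto_semispace(p_hat, g, c)\n    return p_hat + beta * (p_tilde - p_hat)\n\\end{verbatim}\nWith $\\alpha=0$ and $\\beta=1$ the second function reduces to the plain projection of $p^k$ described above.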
Inertial algorithms for solving monotone inclusions of the form $0\\in T(z)$, where $T$ is maximal monotone, \nwere first proposed in \\cite{alv.att-iner.svva01}, and since then developed by different authors and in different directions of \nresearch (see, e.g., \\cite{alv.eck.ger.mel-preprint19,att.pey-con.mp19,bot.cse.hen-ine.amc15,com.gla-qua.sjo17} and references therein). \nAt a current iterate, say $p^k$, the inertial effect in the iterative process is produced by an extrapolation step of the form \n(see also Algorithm \\ref{inertial_projective} and Figure \\ref{fig:arrows} below):\n\\[\n \\widehat p^k = p^k + \\alpha_k(p^k - p^{k-1}).\n\\]\nSince $\\alpha_k\\geq 0$ controls the magnitude of extrapolation performed in the direction of the vector $p^k-p^{k-1}$, it follows\nthat the asymptotic behavior and size of $\\alpha_k$ have a direct influence on the convergence analysis of inertial-type algorithms. \nA usual sufficient condition \\cite{alv.att-iner.svva01} imposed on the sequence $\\{\\alpha_k\\}$, guaranteeing weak convergence \nof $\\{p^k\\}$, is that\n$\\{\\alpha_k\\}$ is nondecreasing and $\\alpha_k<1\/3$ for all $k\\geq 0$. The upper bound $1\/3$ has been recently improved in\ncombination with relaxation effects~\\cite{alv.eck.ger.mel-preprint19,att.cab-con.pre18}.\n\nThe main goal of this paper is to develop a projective splitting-type algorithm for solving \\eqref{eq:Pmip} with\nboth inertial and relaxation effects and, additionally, with inexact solution of the subproblems within a relative-error criterion.\n To the best of the authors' knowledge, this is the first time in the literature that\ninertial effects are considered in projective splitting-like algorithms. Our main algorithm is Algorithm \\ref{alg:iPSM} from\nSection \\ref{sec:ps}, for which the convergence is studied in Theorems \\ref{th:conv01} and \\ref{th:conv02}, under flexible\nassumptions on the inertial and relaxation parameters. Motivated by the discussion above that \\eqref{eq:Pmip} is equivalent\nto the problem of finding a point in the closed and convex set $\\mathcal{S}$ as in \\eqref{eq:KTSi}, we first introduce in Section \n\\ref{sec:spm} an inertial-relaxed separator-projector method for solving the (feasibility) problem of finding points in closed\nconvex subsets of Hilbert spaces.\n\n\n\\vspace{.1in}\n\nThe following well-known property will be useful in this paper: for all $x,y$ in a real Hilbert space $\\mathcal{H}$ and $t\\in \\mathbb{R}$,\n it holds that\n\\begin{align}\n \\label{eq:ineq.s}\n %\n \\norm{tx+(1-t)y}^2=t\\norm{x}^2+(1-t)\\norm{y}^2-t(1-t)\\norm{x-y}^2.\n %\n \\end{align}\nWe shall also use the following inequality:\n\\begin{align}\n \\label{eq:ineq.sum2}\n \\left\\|\\sum_{i=1}^n\\,x_i\\right\\|^2\\leq n\\sum_{i=1}^n\\,\\norm{x_i}^2.\n\\end{align}\n\n\n\\section{An inertial-relaxed separator-projection method}\n \\label{sec:spm}\n\nIn this section, we propose and study a general separator-projection framework (Algorithm \\ref{inertial_projective}) for \nfinding a point in a closed and convex subset of a Hilbert space. \nThe main motivation comes from the fact (as previously discussed in Section \\ref{sec:int}) that the monotone inclusion \nproblem \\eqref{eq:Pmip} can be reformulated as the problem of finding a point in the extended solution set $\\mathcal{S}$ as in \\eqref{eq:KTSi}. 
\nAlgorithm \\ref{inertial_projective} will be used in Section \\ref{sec:ps} to analyze the convergence of the main algorithm proposed in \nthis paper (namely Algorithm \\ref{alg:iPSM}) for solving \\eqref{eq:Pmip1}. \n\nLet $\\mathcal{H}$ be a real Hilbert space with inner product $\\inner{\\cdot}{\\cdot}$\nand norm $\\|\\cdot\\|=\\sqrt{\\inner{\\cdot}{\\cdot}}$. We denote the gradient of an affine\nfunction $\\varphi:\\mathcal{H}\\to \\mathbb{R}$ by the usual notation $\\nabla \\varphi$ and, in this case, we\nalso write $\\varphi(z)=\\inner{\\nabla \\varphi}{z}+\\varphi(0)$ for all $z\\in \\mathcal{H}$.\n\n\n\\vspace{.1in}\n\\vspace{.1in}\n\n\n\\noindent\n\\fbox{\n\\addtolength{\\linewidth}{-2\\fboxsep}%\n\\addtolength{\\linewidth}{-2\\fboxrule}%\n\\begin{minipage}{\\linewidth\n\\begin{algorithm}\n\\label{inertial_projective}\n{\\bf An inertial-relaxed linear separator-projection method for finding a point in a nonempty closed convex set $\\mathcal{S}\\subset \\mathcal{H}$}\n\\end{algorithm}\n\\begin{itemize}\n\\item[{\\bf(0)}] Let $p^0=p^{-1}\\in \\mathcal{H}$, $\\alpha\\in [0,1)$ and $0<\\underline{\\beta}<\\overline{\\beta}<2$ be given and let $k\\leftarrow0$.\n %\n\\item [{\\bf(1)}] Choose $\\alpha_{k}\\in [0,\\alpha]$ and define\n %\n \\begin{align}\n \\label{eq:ext}\n \\widehat p^{\\,k}= p^{k}+\\alpha_{k}(p^{k}-p^{k-1}).\n \\end{align}\n\\item [{\\bf(2)}] Find an affine function $\\varphi_k$ such that $\\nabla \\varphi_k\\neq 0$ and $\\varphi_k(p)\\leq 0$ for all $p\\in \\mathcal{S}$.\nChoose $\\beta_k\\in [\\underline{\\beta},\\overline{\\beta}]$ and set\n %\n \\begin{align}\n \\label{eq:ext.proj}\n p^{k+1}=\\widehat p^{\\,k} - \\dfrac{\\beta_k \\max\\{0,\\varphi_k(\\widehat p^{\\,k})\\}}{\\norm{\\nabla \\varphi_k}^2}\\nabla \\varphi_k.\n \\end{align}\n\\item[{\\bf(3)}] Let $k\\leftarrow k+1$ and go to step 1.\n\\end{itemize}\n\\noindent\n\\end{minipage}\n}\n\n\\vspace{.1in}\n\\vspace{.1in}\n %\n\\noindent\n{\\bf Remarks}.\n\\begin{itemize}\n\\item[\\mbox{(i)}] Letting $\\widetilde p^{\\,k+1}$ be the (orthogonal) projection of $\\widehat p^{\\,k}$ onto the semispace\n$\\{p\\in \\mathcal{H}\\,|\\,\\varphi_k(p)\\leq 0\\}$, i.e.,\n\\begin{align}\n \\label{eq:001}\n \\widetilde p^{\\,k+1} = \\widehat p^{\\,k} - \\dfrac{\\max\\{0,\\varphi_k(\\widehat p^{\\,k})\\}}\n{\\norm{\\nabla \\varphi_k}^2}\\nabla \\varphi_k\n\\end{align}\nand using \\eqref{eq:ext.proj} we conclude that\n\\begin{align}\n \\label{eq:333}\n p^{k+1}\n = \\widehat p^{\\,k} + \\beta_k (\\widetilde p^{\\,k+1} - \\widehat p^{\\,k}).\n\n\\end{align}\n\\item[\\mbox{(ii)}] Note that \\eqref{eq:ext} and \\eqref{eq:333} illustrate the different effects promoted in \nAlgorithm \\ref{inertial_projective} by inertia and relaxation, which are respectively controlled by the parameters $\\alpha_k$ and\n$\\beta_k$. 
See Figure \\ref{fig:arrows} below.\n\\begin{figure}[!htb]\n\\centering\n\\vspace{0.5cm}\n\\begin{tikzpicture}[scale=3]\n\\draw[thick,->,blue] (0, 0) -- (1.5, 0.3);\n\\draw[thick,->,green] (1.5, 0.3) -- (2, 0.75);\n\\draw[dashed] (1.1,1.4) -- (2.40,-0.1);\n\\filldraw (0,0) circle (0.6pt) node[align=center, below] {$p^{k-1}$};\n\\filldraw (1,0.2) circle (0.6pt) node[align=center, below] {$p^{k}$};\n\\filldraw (1.5,0.3) circle (0.5pt) node[align=center, below] {$\\widehat p^k$};\n\\filldraw (2,0.75) circle (0.5pt) node[align=center, right] {$p^{k+1}$};\n\\draw (2.55,0) circle (0.0pt) node[align=center, below] {$\\{p\\in \\mathcal{H}\\;|\\;\\varphi_k(p)=0\\}$};\n\\draw[thick,->,blue] (1,0.2) -- (2.5,1.025);\n\\filldraw (2.5,1.025) circle (0.6pt) node[align=center, right] {$\\widehat p^{k+1}$};\n\\end{tikzpicture}\n\\vspace{-0.5ex}\n\\caption{Geometric interpretation of steps~\\eqref{eq:ext}\nand~\\eqref{eq:ext.proj} in Algorithm~\\ref{inertial_projective}. The (overrelaxed)\nprojection step~\\eqref{eq:ext.proj} is orthogonal to the separating hyperplane\n$\\{p\\in \\mathcal{H}\\;|\\;\\varphi_k(p)=0\\}$, which can differ from the direction defined by $p^{k-1}$, $p^k$,\nand $\\widehat p^k$ when $\\alpha_k > 0$.}\\label{fig:arrows}\n\\end{figure}\n\\item[\\mbox{(iii)}] If $\\alpha_k\\equiv 0$, in which case $\\widehat p^k=p^k$ in \\eqref{eq:ext}, then it follows that Algorithm \n\\ref{inertial_projective} reduces to the well-known linear separator-projection method for finding a point in \n$\\mathcal{S}\\subset \\mathcal{H}$ (see, e.g., \\cite{alot.combet.shah.2014}).\n\\item[\\mbox{(iv)}] As we mentioned earlier, Algorithm \\ref{inertial_projective} will be used in the next section for analyzing the\nconvergence of Algorithm \\ref{alg:iPSM}. The main convergence results for Algorithm \\ref{inertial_projective} will be stated in \nthis section, in Theorems \\ref{lm:conv.proj} and \\ref{thm:second.conv} below.\n\\end{itemize}\n\n\n\\vspace{.1in}\n\nNext lemma plays the role of Fej\\'er-monotonicity for Algorithm \\ref{inertial_projective} and will be used in the proofs of\nTheorems \\ref{lm:conv.proj} and \\ref{thm:second.conv}.\n\n\n\\begin{lemma}\n \\label{lm:inv}\n %\nConsider the sequences evolved by \\emph{Algorithm \\ref{inertial_projective}} and let $\\widetilde p^{k+1}$ be as\nin \\eqref{eq:001}. 
For an arbitrary $p\\in \\mathcal{S}$, define\n\\begin{align}\n \\label{eq:def.hk}\n h_k=\\norm{p^k-p}^2\\qquad \\forall k\\geq -1.\n\\end{align}\nThen the following hold:\n\\begin{itemize}\n\\item[\\emph{(a)}] For all $k\\geq 0$,\n\\begin{align*}\n\n h_{k+1}-h_k-\\alpha_k(h_k-h_{k-1}) \\leq \\alpha_k(1+\\alpha_k)\\norm{p^k-p^{k-1}}^2 - s_{k+1},\n\\end{align*}\nwhere\n\\begin{align}\n \\label{eq:s.k}\n %\n s_{k+1}:= \\beta_k(2-\\beta_k)\\norm{\\widehat p^{\\,k}-\\widetilde p^{\\,k+1}}^2\\qquad \\forall k\\geq 0.\n %\n \\end{align}\n\\item[\\emph{(b)}] For all $k\\geq 0$,\n\\begin{align}\n \\label{eq:97}\n h_{k+1}-h_k-\\alpha_k(h_k-h_{k-1}) \\leq \\gamma_k \\norm{p^k-p^{k-1}}^2\n - (2-\\overline{\\beta})\\overline{\\beta}^{\\,-1}(1-\\alpha_k)\\norm{p^{k+1}-p^k}^2,\n\\end{align}\nwhere\n\\begin{align}\n \\label{eq:def.gamma}\n \\gamma_k := 2\\left(1 - \\overline{\\beta}^{\\,-1}\\right)\\alpha_k^2+2\\overline{\\beta}^{\\,-1}\\alpha_k\\qquad \\forall k\\geq 0.\n\\end{align}\n\\end{itemize}\n\\end{lemma}\n\\begin{proof}\n(a) We shall first prove that\n\\begin{align}\n \\label{eq:1832}\n \\norm{p^{k+1}-p}^2 + \\beta_k(2-\\beta_k)\\norm{\\widehat p^{\\,k}-\\widetilde p^{\\,k+1}}^2 \n\\leq \\norm{\\widehat p^{\\,k}-p}^2\\qquad \\forall p\\in \\mathcal{S},\n\\end{align}\nwhere $\\widetilde p^{\\,k+1}$ is as in \\eqref{eq:001}, i.e., it is the projection of $\\widehat p^k$ onto\nthe semispace $\\{p\\in \\mathcal{H}\\;|\\;\\varphi_k(p)\\leq 0\\}$.\nTo this end, note first that, for all $p\\in \\mathcal{S}$,\n\\begin{align}\n\\nonumber\n \\norm{\\widehat p^{\\,k}-p}^2 - \\norm{\\widetilde p^{\\,k+1}-p}^2 &=\n \\norm{\\widehat p^{\\,k}-\\widetilde p^{\\,k+1}}^2+2\\inner{\\widehat p^{\\,k}- \\widetilde p^{\\,k+1}}{\\widetilde p^{\\,k+1}-p}\\\\\n \\label{eq:002}\n &\\geq \\norm{\\widehat p^{\\,k}-\\widetilde p^{\\,k+1}}^2\n %\n\\end{align}\nwhere we have used \\eqref{eq:001} and the fact that\n$\\mathcal{S}\\subset \\{p\\in \\mathcal{H}\\,|\\,\\varphi_k(p)\\leq 0\\}$ (see Step 2 of Algorithm \\ref{inertial_projective}) to obtain\nthe inequality $\\inner{\\widehat p^{\\,k}- \\widetilde p^{\\,k+1}}{\\widetilde p^{\\,k+1}-p}\\geq 0$.\nNote now that \\eqref{eq:333} is trivially equivalent to \n$p^{k+1}= (1-\\beta_k)\\widehat p^{\\,k}+\\beta_k \\widetilde p^{\\,k+1}$,\nwhich in turn combined with the property \\eqref{eq:ineq.s} yields\n\\begin{align*}\n \\norm{p^{k+1}-p}^2= (1-\\beta_k)\\norm{\\widehat p^{\\,k}-p}^2 + \\beta_k \\norm{\\widetilde p^{\\,k+1}-p}^2\n - \\beta_k(1-\\beta_k)\\norm{\\widehat p^{\\,k}-\\widetilde p^{\\,k+1}}^2\n\\end{align*}\nor, equivalently, \n\\begin{align}\n \\label{eq:003}\n\\beta_k\\left(\\norm{\\widehat p^{\\,k}-p}^2 - \\norm{\\widetilde p^{\\,k+1}-p}^2\\right) = \\norm{\\widehat p^{\\,k}-p}^2\n- \\beta_k(1-\\beta_k)\\norm{\\widehat p^{\\,k}-\\widetilde p^{\\,k+1}}^2 - \\norm{p^{k+1}-p}^2.\n\\end{align}\nThe desired inequality \\eqref{eq:1832} now follows by multiplying the inequality in \\eqref{eq:002} by $\\beta_k\\geq 0$, \nby combining the resulting inequality with \\eqref{eq:003} and by using some simple algebraic manipulations.\n\nNow, from \\eqref{eq:ext} we have\n\\begin{align}\n \\label{eq:101}\n p^k-p=\\frac{1}{1+\\alpha_k}(\\widehat p^{\\,k}-p)+\\frac{\\alpha_k}{1+\\alpha_k}(p^{k-1}-p)\\;\\;\\mbox{and}\\;\\;\n \\widehat p^{\\,k}-p^{k-1}=(1+\\alpha_k)(p^k-p^{k-1}).\n\\end{align}\nUsing \\eqref{eq:ineq.s} and the first identity in \\eqref{eq:101} we obtain\n\\begin{align*}\n \\norm{p^k-p}^2=\\frac{1}{1+\\alpha_k}\\norm{\\widehat 
p^{\\,k}-p}^2+\\frac{\\alpha_k}{1+\\alpha_k}\\norm{p^{k-1}-p}^2-\\frac{\\alpha_k}{(1+\\alpha_k)^2}\\norm{\\widehat p^{\\,k}-p^{k-1}}^2,\n\\end{align*}\nwhich combined with the second identity in \\eqref{eq:101} and some algebraic manipulations gives\n\\begin{align}\\label{eq:005}\n \\norm{\\widehat p^{\\,k}-p}^2=(1+\\alpha_k)\\norm{p^k-p}^2-\\alpha_k\\norm{p^{k-1}-p}^2+\\alpha_k(1+\\alpha_k)\\norm{p^k-p^{k-1}}^2.\n\\end{align}\nHence, (a) follows directly from \\eqref{eq:1832}, \\eqref{eq:005} and the definitions of $h_k$ and $s_{k+1}$ in\n\\eqref{eq:def.hk} and \\eqref{eq:s.k}, respectively.\n\n\n\n(b) Note that \\eqref{eq:333} is also trivially equivalent to $\\widehat p^k - \\widetilde p^{k+1} = \\beta_k^{-1}(\\widehat p^k -p^{k+1})$, which in turn combined with the definition of $s_{k+1}$ in \\eqref{eq:s.k} and the fact \nthat $\\beta_k\\leq \\overline{\\beta}$ -- see Step 2 of Algorithm \\ref{inertial_projective} -- yields\n\\begin{align}\n \\label{eq:07}\n s_{k+1}=\\beta_k(2-\\beta_k) \\norm{\\widehat p^{\\,k}-\\widetilde p^{\\,k+1}}^2\n =\\left(2\\beta_k^{-1}-1\\right)\\norm{\\widehat p^{\\,k}-p^{k+1}}^2\n\\geq \\left(2\\overline{\\beta}^{\\,-1}-1\\right)\\norm{\\widehat p^{\\,k}-p^{k+1}}^2.\n\\end{align}\nUsing \\eqref{eq:ext}, the Cauchy-Schwarz inequality, the Young inequality ($2ab\\leq a^2+b^2$ with $a=\\norm{p^{k+1}-p^k}$ and $b=\\norm{p^k-p^{k-1}}$) and some algebraic manipulations, we find\n\\begin{align}\\label{eq:008}\n \\norm{\\widehat p^{\\,k}-p^{k+1}}^2 = &\\norm{p^{k+1}-p^{k}}^2+\\alpha_k^2\\norm{p^k-p^{k-1}}^2-2\\alpha_k\\langle p^{k+1}-p^k,p^k-p^{k-1}\\rangle\\nonumber \\\\\n\\geq&\\norm{p^{k+1}-p^k}^2+\\alpha_k^2\\norm{p^k-p^{k-1}}^2-2\\alpha_k\\norm{p^{k+1}-p^k}\\norm{p^k-p^{k-1}}\\nonumber\\\\\n\\geq&\\norm{p^{k+1}-p^k}^2+\\alpha_k^2\\norm{p^k-p^{k-1}}^2-\\alpha_k\\big(\\norm{p^{k+1}-p^k}^2+\\norm{p^k-p^{k-1}}^2\\big)\\nonumber\\\\\n =&(1-\\alpha_k)\\left(\\norm{p^{k+1}-p^k}^2-\\alpha_k \\norm{p^k-p^{k-1}}^2\\right).\n \\end{align}\n %\n From \\eqref{eq:07} and \\eqref{eq:008} we obtain\n %\n \\begin{align*}\n \n s_{k+1}\\geq \\left(2\\overline{\\beta}^{\\,-1}-1\\right)(1-\\alpha_k)\\left(\\norm{p^{k+1}-p^k}^2-\\alpha_k \\norm{p^k-p^{k-1}}^2\\right),\n \\end{align*}\nwhich in turn combined with the inequality in (a) and \\eqref{eq:def.gamma}, and after some simple manipulations, gives exactly the desired inequality in (b).\n %\n\\end{proof}\n\n\n\\vspace{.1in}\nNext is our first result on the (asymptotic) convergence of Algorithm \\ref{inertial_projective}. 
The key assumption is the summability condition \\eqref{eq:sum.p}, for which a sufficient condition, only depending on the parameters $\\alpha_k$ and \n$\\beta_k$, will be given in Theorem \\ref{thm:second.conv} -- see conditions \\eqref{eq:alpha_k}, \\eqref{eq:beta(alpha)}\nand Figure \\ref{fig02}.\n\n\n\n\n\\begin{theorem}[First result on the convergence of Algorithm \\ref{inertial_projective}]\n \\label{lm:conv.proj}\n Let $\\{p^k\\}$, $\\{\\varphi_k\\}$, $\\{\\widehat p^k\\}$ and $\\{\\alpha_k\\}$ be generated by \\emph{Algorithm \\ref{inertial_projective}} and\n assume that\n %\n \\begin{align}\n \\label{eq:sum.p}\n \\sum_{k=0}^\\infty\\,\\alpha_k\\norm{p^k-p^{k-1}}^2<\\infty.\n \\end{align}\nThen the following hold:\n\\begin{itemize}\n %\n \\item[\\emph{(a)}] $\\{p^k\\}$ and $\\{\\widehat p^k\\}$ are bounded sequences.\n %\n \\item[\\emph{(b)}] If every weak cluster point of $\\{p^k\\}$ belongs to $\\mathcal{S}$, then $\\{p^k\\}$ converges weakly to some element in $\\mathcal{S}$.\n\\item[\\emph{(c)}] We have, \n\\begin{align*}\n \\dfrac{\\max\\{0,\\varphi_k(\\widehat p^k)\\}}{\\norm{\\nabla \\varphi_k}}\\to 0.\n\\end{align*}\n\\end{itemize}\n\\end{theorem}\n\\begin{proof}\nDefining $\\delta_k=\\alpha_k(1+\\alpha_k)\\norm{p^k-p^{k-1}}^2$ and using Lemma \\ref{lm:inv}(a), we conclude that condition \\eqref{eq:alv.att02} in Lemma \\ref{lm:alv.att} below holds with $h_k$ and $s_{k+1}$ as in\n\\eqref{eq:def.hk} and \\eqref{eq:s.k}, respectively. \nHence, using the assumption \\eqref{eq:sum.p}, Lemma \\ref{lm:alv.att}(b) and \\eqref{eq:def.hk}, we conclude that\n$$\\lim_{k\\to \\infty}\\,\\|p^k-p\\|\\;\\; \\mbox{exists for all}\\;\\; p\\in \\mathcal{S}.$$\nThis gives, in particular, that $\\{p^k\\}$ and $\\{\\widehat p^k\\}$ are bounded (see \\eqref{eq:ext})\nand, after using Lemma \\ref{lm:opial} below, that $\\{p^k\\}$ converges weakly to some element in $\\mathcal{S}$ whenever \nevery weak cluster point of $\\{p^k\\}$ belongs to $\\mathcal{S}$. So we have proved (a) and (b).\n\nTo prove (c), note first that from \\eqref{eq:333} we have\n\\begin{align*}\n \\dfrac{\\max\\{0,\\varphi_k(\\widehat p^k)\\}}{\\norm{\\nabla \\varphi_k}} = \\norm{\\widetilde p^{k+1}-\\widehat p^k}.\n\\end{align*}\nHence, to conclude the proof of (c), it suffices to prove that $\\norm{\\widetilde p^{\\,k+1}-\\widehat p^{\\,k}}\\to 0$.\nTo this end, note that \\eqref{eq:sum.p} combined with the definition of $\\delta_k$ above,\nthe fact that $\\alpha_k^2\\leq \\alpha_k$ \n and Lemma \\ref{lm:alv.att}(a) \ngives $\\sum_{k=0}^\\infty\\,s_{k+1}<\\infty$, with $s_{k+1}$ (for all $k\\geq 0$) as in \\eqref{eq:s.k}, and so\n$s_{k+1}\\to 0$. 
The desired result now follows from this fact, \\eqref{eq:s.k} and the fact that \n$0<\\underline{\\beta}\\leq \\beta_k\\leq \\overline{\\beta}<2$ (see Step 2 of Algorithm \\ref{inertial_projective}).\n\\end{proof}\n\n\\vspace{.1in}\n\n\n\n\n\n\\begin{theorem}[Second result on the convergence of Algorithm \\ref{inertial_projective}]\n\\label{thm:second.conv}\n Let $\\{p^k\\}$ and $\\{\\alpha_k\\}$ be generated by \\emph{Algorithm \\ref{inertial_projective}}.\n %\n Assume that $\\alpha\\in [0,1)$, $\\overline{\\beta}\\in (0,2)$ and $\\{\\alpha_k\\}$ satisfy the following \\emph{(}for some\n $\\overline{\\alpha}>0$\\emph{)}:\n %\n \\begin{align}\\label{eq:alpha_k}\n 0\\leq \\alpha_k\\leq \\alpha_{k+1}\\leq \\alpha<\\overline{\\alpha}<1\\qquad \\forall k\\geq 0\n \\end{align}\n and\n %\n \\begin{align}\\label{eq:beta(alpha)}\n \\overline{\\beta}=\\overline{\\beta}(\\overline{\\alpha}):=\\dfrac{2(\\overline{\\alpha}-1)^2}\n {2(\\overline{\\alpha}-1)^2+3\\overline{\\alpha}-1}.\n \\end{align}\n %\n Then the following hold:\n\\begin{itemize}\n %\n\\item[\\emph{(a)}] We have\n\\begin{align}\n \\label{sum(p_k)}\n \\sum_{k=0}^{\\infty}\\,\\norm{p^k-p^{k-1}}^2<\\infty.\n \\end{align}\n %\n\\item[\\emph{(b)}] Under the assumptions \\eqref{eq:alpha_k} and \\eqref{eq:beta(alpha)},\n if every weak cluster point of $\\{p^k\\}$ belongs to $\\mathcal{S}$, then $\\{p^k\\}$ converges weakly to some element in $\\mathcal{S}$.\n\\end{itemize}\n \\end{theorem}\n\\begin{proof}\n(a) Define, for all $k\\geq 0$,\n\\begin{align}\n \\label{eq:muka}\n \n \\mu_k =h_k-\\alpha_k h_{k-1}+\\gamma_k\\norm{p^k-p^{k-1}}^2\n\\end{align}\nwhere $h_k$ is as in \\eqref{eq:def.hk} (for some $p\\in \\mathcal{S}$) and $\\gamma_k$ is as in \\eqref{eq:def.gamma}.\nUsing the assumption \\eqref{eq:alpha_k} and Lemma \\ref{lm:inv}(b), we obtain, for all $k\\geq 0$,\n\\begin{align}\n \\label{eq:010}\n \\mu_{k+1}-\\mu_k\n&\\leq h_{k+1}-\\alpha_{k}h_k+ \\gamma_{k+1}\\norm{p^{k+1}-p^k}^2-h_k+\\alpha_kh_{k-1}-\\gamma_k\\norm{p^k-\n p^{k-1}}^2 \\quad \\text{[by \\eqref{eq:alpha_k}]}\\nonumber\\\\\n&= h_{k+1}-h_k-\\alpha_k(h_k-h_{k-1})+\\gamma_{k+1}\\norm{p^{k+1}-p^k}^2-\\gamma_k\\norm{p^k-p^{k-1}}^2 \\nonumber \\\\\n&\\leq \\left[-\\left(2-\\overline{\\beta}\\right)\\overline{\\beta}^{\\,-1}(1-\\alpha_k)+ \\gamma_{k+1}\\right]\\norm{p^{k+1}-p^k}^2 \\qquad\\qquad\\quad\\text{[by Lemma \\ref{lm:inv}(b)]}\\nonumber\\\\\n&\\leq \\left[-\\left(2-\\overline{\\beta}\\right)\\overline{\\beta}^{\\,-1}(1-\\alpha_{k+1})+\\gamma_{k+1}\\right]\\norm{p^{k+1}-p^k}^2 \\quad\\qquad\\qquad\\qquad\\qquad \\text{[by \\eqref{eq:alpha_k}]}\\nonumber\\\\\n&=-q(\\alpha_{k+1})\\norm{p^{k+1}-p^k}^2 \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\quad\n\\text{[by \\eqref{eq:def.gamma} and \\eqref{eq:q(alpha)}]}\n\\end{align}\nwhere\n \\begin{align}\\label{eq:q(alpha)}\n q(\\nu):=2\\left(\\overline{\\beta}^{\\,-1}-1\\right)\\nu^2-\\left(4\\overline{\\beta}^{\\,-1}-1\\right)\\nu+2\\overline{\\beta}^{\\,-1}-1,\\quad \\nu\\in \\mathbb{R}.\n \\end{align}\n %\n %\nNext we will show that $q(\\alpha_{k+1})$ admits a uniform lower bound. To this end, note first that \\eqref{eq:beta(alpha)} and Lemma \\ref{lmm:inverse} below yield\n\\[\n\\overline{\\alpha}=\\dfrac{2(2-\\overline{\\beta})}{4-\\overline{\\beta}+\\sqrt{16\\overline{\\beta}-7\\overline{\\beta}^2}},\n\\]\nwhich in turn combined with Lemma \\ref{lm:quadratic} below implies that $q(\\overline{\\alpha})=0$ and $q(\\cdot)$ is decreasing in $[0,\\overline{\\alpha}]$. 
Thus, in view of \\eqref{eq:alpha_k},\n we obtain\n\\[\nq(\\alpha_{k+1})\\geq q(\\alpha)>q(\\overline{\\alpha})=0\n\\]\nand so, in view of \\eqref{eq:010}, it follows that\n\\begin{align}\n \\label{eq:2351}\n \\norm{p^{k+1}-p^k}^2\\leq \\dfrac{1}{q(\\alpha)}(\\mu_k - \\mu_{k+1})\\qquad \\forall k\\geq 0.\n\\end{align}\nHence, for all $k\\geq 0$,\n\\begin{align}\n \\label{eq:2348}\n %\n\\sum_{j=0}^k\\,\\norm{p^{j+1}-p^j}^2&\\leq \\dfrac{1}{q(\\alpha)}(\\mu_0 - \\mu_{k+1})\\nonumber\\\\\n &\\leq \\dfrac{1}{q(\\alpha)}(\\mu_0 + \\alpha h_k)\n\\end{align}\nwhere in the second inequality above we also used the fact that $\\mu_{k+1}\\geq -\\alpha h_k$ (in view of \\eqref{eq:muka}\nand \\eqref{eq:alpha_k}).\nTherefore, to finish the proof of (a) it is enough to find an upper bound on $h_k$ and use \\eqref{eq:2348}.\nTo this end, note that from \\eqref{eq:2351} and \\eqref{eq:alpha_k} we have, for all $k\\geq -1$,\n\\begin{align*}\n\\mu_0\\geq \\mu_1\\geq \\ldots\\geq \\mu_{k+1}&=h_{k+1}-\\alpha_{k+1}h_{k}+\n\\gamma_{k+1}\\norm{p^{k+1}-p^{k}}^2\\\\\n &\\geq h_{k+1}-\\alpha h_{k}\n\\end{align*}\nand so, for all $k\\geq -1$,\n\\begin{align*}\n h_{k+1}&\\leq \\alpha^{k+1}h_0 + \\left(\\sum_{i=0}^k\\,\\alpha^i\\right)\\mu_0\\\\\n &\\leq h_0 + \\dfrac{\\mu_0}{1-\\alpha}\n\\end{align*}\nwhere in the second inequality we also used the fact -- from \\eqref{eq:muka} -- that $\\mu_0=(1-\\alpha_0)h_0\\geq 0$.\n(b) The result follows trivially from (a), the fact that $\\alpha_k\\leq 1$ for all $k\\geq 0$ and Theorem \\ref{lm:conv.proj}(b). \n\\end{proof}\n\n\n \\begin{figure}\n \\centering\n \\begin{tikzpicture}[scale=2] \\centering\n \\draw[->,line width = 0.50mm] (-0.2,0) -- (1.4,0) node[right] { $\\overline{\\alpha}$};\n \\draw[->,line width = 0.50mm] (0,-0.2) -- (0,2.4) node[above] {$\\overline{\\beta}(\\overline{\\alpha})$};\n \\draw[domain=0:1,smooth,variable=\\x,red,line width = 0.50mm,scale=1] plot ({\\x},{(2*(\\x -1)^2)\/((2*(\\x -1)^2)+3*\\x -1)});\n \\draw[red] (0,2) circle (0.35mm);\n \\draw[red] (1,0) circle (0.35mm);\n \\draw[dashed] (0.3333,0) -- (0.3333,1);\n \\draw[dashed] (0,1) -- (0.3333,1);\n \\node[below left,black] at (0,0) {0};\n \\node[below right ,black] at (1,0) {1};\n \\node[above left,black] at (0,2) {2};\n \\node[below ,black] at (0.3333,0) {$\\frac{1}{3}$};\n \\node[left,black] at (0,1) {1};\n \\end{tikzpicture}\n \\caption{{The relaxation parameter upper bound\n $\\overline{\\beta}(\\overline{\\alpha})$ from \\eqref{eq:beta(alpha)} \n as a function of inertial step upper bound $\\overline{\\alpha}>0$\n of \\eqref{eq:alpha_k}. Note that $\\overline \\beta(1\/3)=1$, while\n $\\overline{\\beta}(\\overline{\\alpha})>1$ whenever $\\overline \\alpha<1\/3$.}}\n \\label{fig02}\n \\end{figure}\n\n\\vspace{.1in}\n\\vspace{.1in}\n %\n\\noindent\n{\\bf Remarks}.\n\\begin{itemize}\n\\item[\\mbox{(i)}] The proofs of Theorems \\ref{lm:conv.proj} and \\ref{thm:second.conv} have followed the same outline of the\nproofs of Theorems 2.4 and 2.5 in \\cite{alv.eck.ger.mel-preprint19}. 
On the other hand, we emphasize that\nAlgorithm \\ref{inertial_projective} proposed in this work is more general than Algorithm 1 from \\cite{alv.eck.ger.mel-preprint19}, since\nthe latter has been designed to solve inclusions with monotone operators.\n\\item[\\mbox{(ii)}] We deduce from conditions \\eqref{eq:alpha_k} and \\eqref{eq:beta(alpha)} that overrelaxation effects\ncan be achieved in Algorithm \\ref{inertial_projective} at the price of choosing the inertial parameter upper bound\n$\\overline{\\alpha}$ strictly smaller than $1\/3$ (see Figure \\ref{fig02}). We also emphasize that the interplay between inertial and \nrelaxation effects has also been investigated, e.g., in \n\\cite{att.cab-con.jde18,att.pey-con.mp19,bot.cse.hen-ine.amc15,com.gla-qua.sjo17}.\n\\end{itemize}\n\n\n\\section{A relative-error inertial-relaxed inexact projective splitting algorithm}\n \\label{sec:ps}\n %\nIn this section, we propose a relative-error inertial-relaxed inexact projective splitting algorithm \n(Algorithm \\ref{alg:iPSM}) and study its asymptotic convergence. The main convergence results are stated in Theorems \\ref{th:conv01} and \\ref{th:conv02}.\n\nWe start by considering the monotone inclusion problem \\eqref{eq:Pmip} (or, equivalently, \\eqref{eq:Pmip02}), i.e., \n the problem of finding $z\\in \\mathcal{H}_0$ such that\n\\begin{align}\n \\label{eq:Pmip1}\n 0\\in \\sum_{i=1}^{n}G_i^*T_iG_i(z)\n\\end{align}\nwhere $n\\geq 2$ and Assumptions (A1)--(A3) of Section \\ref{sec:int} are assumed to hold. \n\nConsider the extended solution set (or generalized Kuhn-Tucker set) as in \\eqref{eq:KTSi} for the problem \\eqref{eq:Pmip1}, i.e.:\n\\begin{align}\n\\label{eq:KTS}\n \\mathcal{S}:=\\left\\{(z,w_1,\\ldots,w_{n-1})\\in \\boldsymbol{\\mathcal{H}}\\;\\;|\\;\\; w_i\\in T_i(G_iz),\\, i=1,\\ldots,n-1,\\, -\\sum_{i=1}^{n-1}G^*_iw_i\\in T_n(z) \\right\\}.\n\\end{align}\nAs we pointed out earlier, $z\\in \\mathcal{H}_0$ is a solution of \\eqref{eq:Pmip1} if and only if there exist $w_i\\in \\mathcal{H}_i$ ($i=1,\\ldots,n-1$) such that $(z,w_1,\\ldots, w_{n-1})\\in \\mathcal{S}$. \nWe deduce from Assumption (A3) above that $\\mathcal{S}$ is nonempty. Moreover, it follows from \n\\cite[Lemma 3]{Johnst.Eckst.2018} that\n$\\mathcal{S}$ is closed and convex in $\\boldsymbol{\\mathcal{H}}$ (endowed with inner product and norm as in \\eqref{eq:def.inner}). \nAs a consequence, one can apply the framework (Algorithm \\ref{inertial_projective}) of Section \\ref{sec:spm} for $\\mathcal{S}$ as in \\eqref{eq:KTS} and the Hilbert space \n$\\boldsymbol{\\mathcal{H}}$ with the inner product and norm as in \\eqref{eq:def.inner}. 
The resulting scheme is Algorithm \\ref{alg:iPSM}, which, in particular, will be shown in Proposition \\ref{prop:Q1} to be a special instance of Algorithm \\ref{inertial_projective}.\n\nSince Step 2 of Algorithm \\ref{inertial_projective} demands the construction of an (nonconstant) affine function $\\varphi_k$ such that\n$\\varphi_k(p)\\leq 0$ for all $p\\in \\mathcal{S}$, next we discuss the construction of such $\\varphi_k$ satisfying the latter inequality for\n$\\mathcal{S}$ as in \\eqref{eq:KTS}.\n\nMotivated by \\eqref{eq:sep.i} and \\eqref{eq:wni}, for $y_i^k\\in T_i(x_i^k)$ ($i=1,\\dots, n$), we define $\\varphi_k:\\boldsymbol{\\mathcal{H}}\\to \\mathbb{R}$ by\n\\begin{align}\n \\label{eq:sep.F02}\n \\varphi_k(\\underbrace{z,w_1,\\ldots,w_{n-1}}_{p})=\\sum_{i=1}^{n-1}\\langle G_iz-x_i^k,y_i^k-w_i\\rangle+\n \\langle G_n z-x_n^k,y_n^k + \\sum_{i=1}^{n-1}G_i^*w_i\\rangle.\n \\end{align}\nWe shall also use the fact, from \\eqref{eq:sep.F02} and \\eqref{eq:wni}, that\n\\begin{align}\n \\label{eq:sep.F03}\n \\varphi_k(p) = \\sum_{i=1}^{n}\\langle G_iz-x_i^k,y_i^k-w_i\\rangle.\n\\end{align}\nNote that the construction above depends on the computation of pairs $(x_i^k,y_i^k)$ in the graph of $T_i$, for each $i=1,\\dots, n$, which\ncan be computed by inexact evaluation (with relative-error tolerance) of the resolvent $J_{T_i}=(T_i+I)^{-1}$ of $T_i$ (see Step 2 of Algorithm \\ref{alg:iPSM}).\n\n\\vspace{.1in}\n\nNext lemma presents some properties of $\\varphi_k$ which will be useful in this paper.\n\n\\begin{lemma}\\emph{(\\cite[Lemma 4]{Johnst.Eckst.2018})}\n \\label{lmm:grad.sep}\n %\n Let $\\varphi_k(\\cdot)$ and $\\mathcal{S}$ be as in \\eqref{eq:sep.F02} and \\eqref{eq:KTS}, respectively.\n The following hold:\n \\begin{enumerate}\n \\item [\\emph{(a)}] $\\varphi_k$ is affine on $\\boldsymbol{\\mathcal{H}}$.\n \\item [\\emph{(b)}] $\\varphi_k(p)\\leq 0$ for all $p\\in \\mathcal{S}$.\n \\item [\\emph{(c)}] The gradient of $\\varphi_k$ with respect to the inner product $\\inner{\\cdot}{\\cdot}_{\\gamma}$ as in \\eqref{eq:def.inner} is\n \\begin{align}\\label{eq:grad.phi_k}\n \\nabla\\varphi_k=\n \\left(\\dfrac{1}{\\gamma}\\left(\\sum_{i=1}^{n-1}G_i^*y_i^k+y_n^k\\right),x_1^k-G_1x_n^k,\n \\ldots,x_{n-1}^k-G_{n-1}x_{n}^k\\right).\n \\end{align}\n \n \\item [\\emph{(d)}] If $\\nabla \\varphi_k=0$, then $(x_n^k,y_1^k,\\ldots,y_{n-1}^k)\\in \\mathcal{S}$.\n \\end{enumerate}\n \\end{lemma}\n\n\\vspace{.1in}\n\nAs a direct consequence of Lemma \\ref{lmm:grad.sep}(c) and \\eqref{eq:def.inner}, we have\n\\begin{align}\n \\label{eq:norm.grad.pi_k}\n \\norm{\\nabla\\varphi_k}_{\\gamma}^2=\n \\gamma^{-1}\\left\\|\\sum_{i=1}^{n-1}G_i^*y_i^k+y_n^k\\right\\|^2+\\sum_{i=1}^{n-1}\\norm{x_i^k-G_ix_{n}^k}^2.\n\\end{align}\n %\n \n \n \\vspace{.1in}\n\nNext we present the main algorithm of this paper. 
As we mentioned before, it consists of a relative-error inertial-relaxed inexact projective splitting method for solving \\eqref{eq:Pmip1}.\n\n\n\\vspace{.1in}\n\n\\vspace{.1in}\n\\noindent\n\\fbox{\n\\addtolength{\\linewidth}{-2\\fboxsep}%\n\\addtolength{\\linewidth}{-2\\fboxrule}%\n\\begin{minipage}{\\linewidth\n\\begin{algorithm}\n\\label{alg:iPSM}\n{\\bf A relative-error inertial-relaxed inexact projective splitting algorithm for solving \\eqref{eq:Pmip1}}\n\\end{algorithm}\n\\begin{itemize}\n\\item[{\\bf(0)}] Let $(z^{-1},w_1^{-1},\\ldots,w_{n-1}^{-1})=(z^0,w_1^{0},\\ldots, w_{n-1}^{0}) \\in \\boldsymbol{\\mathcal{H}}$, \n$0\\leq \\alpha,\\sigma<1$, $0<\\underline{\\beta}\\leq \\overline{\\beta}< 2$ and $\\gamma >0$ be given; let $k\\leftarrow0$.\n\\item [{\\bf(1)}] Choose $\\alpha_k\\in [0,\\alpha]$ and let\n\\begin{align}\n\\label{eq:ext.022}\n &\\widehat z^{\\,k}=z^k+\\alpha_k(z^k-z^{k-1}),\\\\\n \\label{eq:ext.02}\n &\\widehat w_i^{\\,k}= w_i^{k}+\\alpha_k(w_i^k-w_i^{k-1}),\\quad i=1,\\ldots,n-1,\\\\\n \\label{eq:ext.023}\n &\\widehat w_n^{\\,k}=-\\sum_{i=1}^{n-1}G_i^*\\widehat w_i^{\\,k}.\n\\end{align}\n\\item [{\\bf(2)}] Choose scalars $\\rho_i^k>0$ and compute $(x_i^k,y_i^k)$ ($i=1,\\ldots,n$) satisfying\n\\begin{align}\n\\left\\{\n \\begin{array}{ll}\n\t\t\t \\label{eq:proxT_i}\n\t y_i^k\\in T_i(x_i^k),\\quad x_i^k+\\rho_i^k y_i^k = G_i\\widehat z^{\\,k}+\\rho_i^k \\widehat w_i^{\\,k}+e_i^k,\\\\[5mm]\n \\norm{e_i^k}^2\\leq \\sigma^2\\left(\\norm{G_i\\widehat z^k-x_i^k}^2+\\norm{\\rho_i^k(\\widehat w_i^k-y_i^k)}^2\\right).\n \\end{array}\n \\right.\n\\end{align}\n\n\n\n\n\\item [{\\bf(3)}]\nIf $x_i^k=G_i x_n^k$ ($i=1,\\ldots,n-1$) and $\\displaystyle \\sum_{i=1}^{n-1}G_i^*y_i^k +y_n^k=0$, then STOP.\nOtherwise, define\n\\begin{align}\n \\label{eq:sep.F}\n &\\varphi_k (\\widehat p^{\\,k}) =\\sum_{i=1}^{n-1}\\langle G_i\\widehat z^k-x_i^k,y_i^k - \\widehat w_i^k\\rangle+\n \\langle G_n \\widehat z^k-x_n^k,y_n^k + \\sum_{i=1}^{n-1}G_i^*\\widehat w^k_i\\rangle,\\\\\n\\label{eq:theta_k}\n&\\theta_k =\\dfrac{\\max\\{0, \\varphi_k(\\widehat p^{\\,k}) \\}}{\\gamma^{-1}\\norm{\\sum_{i=1}^{n-1}G_i^*y_i^k+y_n^k}^2+ \\sum_{i=1}^{n-1}\\norm{x_i^k-G_i x_n^k}^2}.\n\\end{align}\n\\item[{\\bf(4)}] Choose some relaxation parameter $\\beta_k\\in [\\underline{\\beta},\\overline{\\beta}]$ and define\n\\begin{align}\n\\label{eq:ext.proj02}\n&z^{k+1}=\\widehat z^{\\,k}-\\gamma^{-1}\\beta_k\\theta_k\\left(\\sum_{i=1}^{n-1}G_i^*y_i^k+y_n^k \\right),\\\\\n&w_i^{k+1}=\\widehat w_i^{\\,k}-\\beta_k\\theta_k\\left(x_i^k-G_i x_n^k\\right), \\quad i=1,\\ldots,n-1.\\label{eq:ext.proj002}\n\\end{align}\n\\item[{\\bf(5)}] Let $k\\leftarrow k+1$ and go to step 1.\n\\end{itemize}\n\\end{minipage}\n}\n\\vspace{.1in}\n\\vspace{.1in}\n\\vspace{.1in}\n\\vspace{.1in}\n\n\n\n\\noindent\n{\\bf Remarks.}\n\n\\begin{itemize}\n \\item[\\mbox{(i)}] Similarly to Algorithm \\ref{inertial_projective} of Section \\ref{sec:spm}, Algorithm \\ref{alg:iPSM} also promotes\n inertial and relaxation effects, controlled by the parameters $\\alpha_k$ and $\\beta_k$, respectively. The inertial (extrapolation) step\n is performed in \\eqref{eq:ext.022} and \\eqref{eq:ext.02}, while the relaxed projective step is given in \\eqref{eq:ext.proj02} and\n \\eqref{eq:ext.proj002} (in the context of Algorithm \\ref{inertial_projective}, see Figure \\ref{fig:arrows} of Section \\ref{sec:spm}). 
Conditions on the choice of the upper bounds $\\alpha$ and $\\overline{\\beta}$, as well as on the sequence of extrapolation parameters\n $\\{\\alpha_k\\}$, to guarantee the convergence of Algorithm \\ref{alg:iPSM} will be given in Theorem \\ref{th:conv02}. \n\\item[\\mbox{(ii)}] Direct substitution of \\eqref{eq:ext.02} into \\eqref{eq:ext.023} gives that, similarly to\n$\\widehat w_i^k$ for $i=1,\\dots, n-1$, $\\widehat w_n^k$ also satisfies\n\\begin{align}\n \\widehat w_n^k = w_n^k + \\alpha_k(w_n^k - w_n^{k-1}),\n\\end{align}\nwhere\n \\begin{align}\n\\label{eq:w_nk}\n w_n^k := -\\sum_{i=1}^{n-1}G_i^* w_i^k,\\qquad \\forall k\\geq 0.\n\\end{align}\n\\item[\\mbox{(iii)}] The computation of $(x_i^k,y_i^k)$ in \\eqref{eq:proxT_i} can be performed inexactly within a relative-error tolerance \ncontrolled by the parameter $\\sigma\\in [0,1)$. In practice, the error condition in \\eqref{eq:proxT_i} is used as a stopping criterion for\nsome computational procedure (e.g., conjugate gradient algorithm) applied to (inexactly) solving the related inclusion (for $i=1,\\dots, n$)\n\\begin{align*}\n 0\\in \\rho_i^k T_i(x) + x- (G_i\\widehat z^k +\\rho_i^k \\widehat w_i^k)\n\\end{align*}\nuntil the error condition in \\eqref{eq:proxT_i} is satisfied for the first time. Note also that $(x_i^k,y_i^k)$ is given explicitly by\n$x_i^k=J_{\\rho_i^k T_i}(G_i\\widehat z^k +\\rho_i^k \\widehat w_i^k)$ and \n$y_i^k = \\frac{G_i\\widehat z^k - x_i^k}{\\rho_i^k}+\\widehat w_i^k$ whenever the resolvent $J_{\\rho_i^k T_i} = (\\rho_i^k T_i +I)^{-1}$ of $T_i$ is assumed to be easily computed and $\\sigma=0$ in \\eqref{eq:proxT_i}.\nFor the particular case of the minimization problem \\eqref{eq:opt}, the computation of $(x_i^k,y_i^k)$ reduces to the (inexact) computation of the proximity operator $\\mbox{prox}_{\\rho_i^k f_i}$, i.e., in this case\n\\begin{align}\n \\label{eq:prox_fi}\n x_i^k \\approx \\mbox{arg}\\min_{z\\in \\mathcal{H}_i}\\,\\left\\{f_i(z)+\\dfrac{1}{2\\rho_i^k}\\norm{z-(G_i\\widehat z^k+\\rho_i^k\\widehat w_i^k)}^2\\right\\}.\n\\end{align}\nSee also Section \\ref{sec:ne} for an additional discussion in the context of LASSO problems.\n \n\\item[\\mbox{(iv)}] It follows from Lemma \\ref{lmm:grad.sep}, items (c) and (d), that $(x_n^k,y_1^k,\\dots, y_{n-1}^k)$ belongs\nto the extended solution set $\\mathcal{S}$ whenever Algorithm \\ref{alg:iPSM} stops at Step 3. In particular, in this case, $x_n^k$ is a solution of\n\\eqref{eq:Pmip1}. \n\n\\emph{From now on in this paper, we assume that \\emph{Algorithm \\ref{alg:iPSM}} generates infinite sequences, i.e., we assume\nthat it never stops at \\emph{Step 3}.}\n\\item[\\mbox{(v)}] We also emphasize that if $\\alpha_k\\equiv 0$ in Algorithm \\ref{alg:iPSM}, then it reduces to the projective splitting\nalgorithm (or some of its variants) originated in \\cite{eck.sva-gen.sjco09} and later developed in different directions in, e.g.,\n\\cite{alot.combet.shah.2014,Johnst.Eckst.2018,jonhst.eckst.siam2019,Johnst.Eckst.2019}. 
The advantages and flexibility of projective splitting algorithms (beyond inertial effects) when compared to other proximal-splitting strategies are also extensively discussed in the latter references.\n\\end{itemize}\n\n\n\n\\vspace{.1in}\n\nNext we show that Algorithm \\ref{alg:iPSM} (under the assumption that it never stops at Step 3; see Remark (iv) above) is a special instance of Algorithm \\ref{inertial_projective} for finding a point in $\\mathcal{S}$\nas in \\eqref{eq:KTS} in the Hilbert space $\\boldsymbol{\\mathcal{H}}$ endowed with the inner product and norm as in \\eqref{eq:def.inner}.\n\n\\vspace{.1in}\n\n %\n %\n\n\n\n\n\n\n\\begin{proposition}\n\\label{prop:Q1}\n %\n Assume that \\emph{Algorithm \\ref{alg:iPSM}} does not stop at \\emph{Step 3}, let\n %\n$\\{z^k\\}$, $\\{w_1^k\\}, \\dots, \\{w_{n-1}^k\\}$ be generated by \\emph{Algorithm \\ref{alg:iPSM}}, let $\\{\\varphi_k\\}$ be as\nin \\eqref{eq:sep.F02} and define\n\\begin{align}\n \\label{eq:def.pk2}\n p^k = (z^k,w_1^k,\\dots, w_{n-1}^k)\\qquad \\forall k\\geq -1.\n\\end{align}\nThen the following hold:\n\\begin{itemize}\n\\item[\\emph{(a)}] For all $k\\geq 0$,\n\\[\n \\nabla \\varphi_k \\neq 0\\;\\;\\mbox{and}\\;\\; \\varphi_k(p)\\leq 0\\qquad \\forall p\\in \\mathcal{S},\n\\]\nwhere $\\mathcal{S}$ is as in \\eqref{eq:KTS}.\n\\item[\\emph{(b)}] For all $k\\geq 0$,\n\\begin{align}\n \\label{eq:prop:Q101}\n p^{k+1}=\\widehat p^{\\,k} - \\dfrac{\\beta_k \\max\\{0,\\varphi_k(\\widehat p^{\\,k})\\}}{\\norm{\\nabla \\varphi_k}_\\gamma^2}\\nabla \\varphi_k\\;\\;\\emph{and}\\;\\; \\widehat p^k = (\\widehat z^k,\\widehat w_1^k,\\dots, \\widehat w_{n-1}^k),\n \\end{align}\nwhere $\\widehat p^k$ is as in \\eqref{eq:ext}. \n\\end{itemize}\n As a consequence of \\emph{(a)} and \\emph{(b)} above, it follows that \\emph{Algorithm \\ref{alg:iPSM}} is a special instance of \\emph{Algorithm \\ref{inertial_projective}} for finding a point in the\nextended solution set $\\mathcal{S}$ as in \\eqref{eq:KTS}.\n \\end{proposition}\n\\begin{proof}\n(a) The fact that $\\nabla \\varphi_k\\neq 0$ follows from the assumption that Algorithm \\ref{alg:iPSM} does not\nstop at Step 3 and Lemma \\ref{lmm:grad.sep}(c). \nUsing now Lemma \\ref{lmm:grad.sep}(b) and the inclusions in \\eqref{eq:proxT_i}, we conclude that\n$\\varphi_k(p)\\leq 0$ for all $p\\in \\mathcal{S}$. \n\n(b) The second identity in \\eqref{eq:prop:Q101} follows from \\eqref{eq:ext}, \\eqref{eq:def.pk2}, \\eqref{eq:ext.022} and \\eqref{eq:ext.02}.\nOn the other hand, the first identity in \\eqref{eq:prop:Q101} is a direct consequence of \\eqref{eq:theta_k}--\\eqref{eq:ext.proj002},\n\\eqref{eq:grad.phi_k}, \\eqref{eq:norm.grad.pi_k} and the second identity in \\eqref{eq:prop:Q101}. \n\nFinally, the last statement of the proposition is a consequence of items (a) and (b) as well as of Algorithm \\ref{inertial_projective}'s definition.\n\\end{proof}\n\n\\vspace{.1in}\n\nSince Algorithm \\ref{alg:iPSM} is a special instance of Algorithm \\ref{inertial_projective} of Section \\ref{sec:spm}, it follows from \nTheorems \\ref{lm:conv.proj}(b) and \\ref{thm:second.conv}(b), under\nthe assumptions \\eqref{eq:sum.p} and \\eqref{eq:alpha_k}--\\eqref{eq:beta(alpha)}, respectively, that to prove the convergence of Algorithm \\ref{alg:iPSM} it suffices to check that every weak cluster point of the sequence $\\{p^k\\}$ in \\eqref{eq:def.pk2} belongs to $\\mathcal{S}$ as in \\eqref{eq:KTS}. 
This will be done in Proposition \\ref{pr:varios}(e), but before that we need the lemma below.\n\n\n\\begin{lemma}\n \\label{lmm:seg01_new}\nConsider the sequences evolved by \\emph{Algorithm \\ref{alg:iPSM}}, let \n$\\widehat p^k = (\\widehat z^k,\\widehat w_1^k,\\dots, \\widehat w_{n-1}^k)$ and let $\\widehat w_n^k$ be as in \\eqref{eq:ext.023}.\nAssume that, for $i=1,\\dots, n$,\n\\begin{align}\n \\label{eq:assum.rho}\n 0<\\underline{\\rho}\\leq \\rho_i^k\\leq \\overline{\\rho}<\\infty\\qquad \\forall k\\geq 0.\n\\end{align}\nThen the following hold:\n\\begin{itemize}\n\\item[\\emph{(a)}] For all $k\\geq 0$,\n\\begin{align}\\label{eq:max.phi_k}\n \\varphi_k(\\widehat p^k)\\geq\n \\dfrac{(1-\\sigma^2)\\min\\left\\{\\overline{\\rho}^{\\,-1},\\overline{\\rho}\\right\\}}{2}\n \n \\sum_{i=1}^n\\,\n \\left(\\norm{G_i\\widehat z^k-x_i^k}^2+\\norm{\\widehat w_i^k-y_i^k}^2\\right)\\geq 0.\n \\end{align}\n %\n \\item[\\emph{(b)}] There exists a constant $c>0$ such that, for all $k\\geq 0$,\n %\n \\begin{align}\n \\label{eq:smale}\n \\dfrac{\\varphi_k(\\widehat p^k)^2}{c\\norm{\\nabla \\varphi_k}_\\gamma^2}\\geq \n \\varphi_k(\\widehat p^k)\\geq c\\,\\norm{\\nabla \\varphi_k}^2_\\gamma.\n \\end{align}\n\\end{itemize}\n\\end{lemma}\n\\begin{proof}\n(a) Using the identity $\\inner{a}{b}=(1\/2)\\left(\\norm{a+b}^2-\\norm{a}^2-\\norm{b}^2\\right)$ \nwith $a=x_i^k-G_i\\widehat z^k$ and $b=\\rho_i^k(y_i^k-\\widehat w_i^k)$, and some algebraic manipulations, \nwe obtain, for $i=1,\\dots, n$,\n\\begin{align*}\n \\inner{x_i^k-G_i\\widehat z^k}{\\rho_i^k(y_i^k-\\widehat w_i^k)} &\n = \\dfrac{1}{2}\\left(\\norm{\\underbrace{x_i^k-G_i\\widehat z^k+\\rho_i^k(y_i^k-\\widehat w_i^k)}_{= e^k_i}}^2\n -\\norm{G_i\\widehat z^k-x_i^k}^2-\\norm{\\rho_i^k(\\widehat w_i^k-y_i^k)}^2\\right)\\\\\n & \\leq \\dfrac{-(1-\\sigma^2)}{2}\\left(\\norm{G_i\\widehat z^k-x_i^k}^2+\\norm{\\rho_i^k(\\widehat w_i^k-y_i^k)}^2\\right),\\\\\n \\end{align*}\nwhere we also used the error condition in \\eqref{eq:proxT_i}. 
Note now that the desired result follows by dividing the latter\ninequality by $-\\rho_i^k$ and by using \\eqref{eq:sep.F03} and assumption \\eqref{eq:assum.rho}.\n\n(b) First note that using the property \\eqref{eq:ineq.sum2}, \\eqref{eq:ext.023} and the assumption that $G_n=I$, we obtain\n\\begin{align}\n \\label{eq:0946}\n \\left\\|\\sum_{i=1}^{n}G_i^*y_i^k\\right\\|^2 = \\left\\|\\sum_{i=1}^{n}G_i^*(\\widehat w_i^k-y_i^k)\\right\\|^2\n \\leq n\\left(\\max_{i=1,\\dots, n}\\,\\norm{G_i^*}^2\\right)\\,\\sum_{i=1}^n\\norm{\\widehat w_i^k-y_i^k}^2.\n\\end{align}\nOn the other hand, using the inequality $\\norm{a+b}^2\\leq 2(\\norm{a}^2+\\norm{b}^2)$, (again) the fact that $G_n=I$\nand some algebraic manipulations, we find\n\\begin{align}\n\\nonumber\n \\sum_{i=1}^{n-1}\\norm{x_i^k-G_i x_n^k}^2 &= \\sum_{i=1}^{n-1}\\norm{x_i^k - G_i\\widehat z^k + G_i(\\widehat z^k-x_n^k)}^2\\\\\n \\nonumber\n &\\leq 2\\sum_{i=1}^{n-1}\\,\\left(\\norm{G_i\\widehat z^k-x_i^k}^2+\\norm{G_i(\\widehat z^k-x_n^k)}^2\\right)\\\\\n \\nonumber\n &\\leq 2\\left(\\sum_{i=1}^{n-1}\\,\\norm{G_i\\widehat z^k-x_i^k}^2+(n-1)\\max_{i=1,\\dots, n-1}\\{\\norm{G_i}^2\\}\n \\norm{\\widehat z^k-x_n^k}^2\\right)\\\\\n \\label{eq:avila}\n &\\leq 2 \\max\\left\\{1, (n-1)\\max_{i=1,\\dots, n-1}\\{\\norm{G_i}^2\\}\\right\\}\\,\\sum_{i=1}^n\\,\\norm{G_i\\widehat z^k-x_i^k}^2.\n %\n\\end{align}\nWe know from \\eqref{eq:norm.grad.pi_k} (and the fact that $G_n=I$) that\n\\begin{align*}\n\\norm{\\nabla\\varphi_k}_{\\gamma}^2&=\n \\gamma^{-1}\\left\\|\\sum_{i=1}^{n}G_i^*y_i^k\\right\\|^2+\\sum_{i=1}^{n-1}\\norm{x_i^k-G_ix_{n}^k}^2,\n\\end{align*}\nwhich combined with \\eqref{eq:0946}, \\eqref{eq:avila} and \\eqref{eq:max.phi_k} yields the second inequality in \\eqref{eq:smale}, for some constant $c>0$. To finish the proof, note that the first inequality in \\eqref{eq:smale} is a direct consequence of the second one.\n\\end{proof}\n\n\n\n\\vspace{.1in}\n\\vspace{.1in}\n\n\n\n\\begin{proposition}\n \\label{pr:varios}\n %\n Consider the sequences evolved by \\emph{Algorithm \\ref{alg:iPSM}} and let $\\{w_n^k\\}$ and $\\{p^k\\}$ be as in \\eqref{eq:w_nk}\n and \\eqref{eq:def.pk2}, respectively. 
\n\n %\n\n Assume that\n %\n \\begin{align}\n \\label{eq:901}\n \\sum_{k=0}^\\infty\\,\\alpha_k\\norm{p^k-p^{k-1}}_\\gamma^2<\\infty\n \\end{align}\n %\nand, for $i=1,\\dots, n$,\n %\n \\begin{align}\n \\label{eq:assum.rho2}\n 0<\\underline{\\rho}\\leq \\rho_i^k\\leq \\overline{\\rho}<\\infty\\qquad \\forall k\\geq 0.\n\\end{align}\n Then,\n %\n \\begin{itemize}\n %\n \\item[\\emph{(a)}] We have, $\\varphi_k(\\widehat p^k)\\to 0$ and $\\norm{\\nabla \\varphi_k}_\\gamma \\to 0$.\n\\item[\\emph{(b)}] We have, $\\sum_{i=1}^n\\,G_i^* y_i^k\\to 0$ and $x_i^k-G_i x_n^k\\to 0$ \\emph{(}$i=1,\\dots, n-1$\\emph{)}.\n\n\n\\item[\\emph{(c)}] For each $i=1,\\dots, n$, we have \n $\\norm{G_i\\widehat z^k-x_i^k}\\to 0$ and $\\norm{\\widehat w_i^k-y_i^k}\\to 0$.\n\n \\item[\\emph{(d)}] For each $i=1,\\dots, n$, we have $\\norm{G_i z^k-x_i^k}\\to 0$ and $\\norm{w_i^k-y_i^k}\\to 0$.\n %\n \\item[\\emph{(e)}] Every weak cluster point of $\\{p^k\\}$ belongs to $\\mathcal{S}$, where $\\mathcal{S}$ is as in \\eqref{eq:KTS}.\n %\n \\end{itemize}\n %\n\\end{proposition}\n\\begin{proof}\n(a) Using the last statement in Proposition \\ref{prop:Q1}, Theorem \\ref{lm:conv.proj}(c) and the fact from \\eqref{eq:max.phi_k} that\n$\\varphi_k(\\widehat p^k)\\geq 0$, we obtain\n\\begin{align*}\n \\dfrac{\\varphi_k(\\widehat p^k)}{\\norm{\\nabla \\varphi_k}_\\gamma}\\to 0,\n\\end{align*}\nwhich after taking limit in \\eqref{eq:smale} gives the desired result in item (a).\n\n(b) This follows from the second limit in item (a) combined with \\eqref{eq:norm.grad.pi_k} (and the fact that $G_n=I$).\n\n(c) This follows from the first limit in item (a) and \\eqref{eq:max.phi_k}.\n\n(d) Using the triangle inequality, the identity \\eqref{eq:ext.022}, \\eqref{eq:def.pk2} and \\eqref{eq:def.inner},\nwe find\n\\begin{align}\n\\nonumber\n \\norm{G_iz^k-x_i^k}&\\leq \\norm{z^k-\\widehat z^{\\,k}}\\norm{G_i}+\\norm{G_i \\widehat z^{\\,k}-x_i^k}\\\\\n \\nonumber\n & = \\alpha_k\\norm{z^k-z^{k-1}}\\norm{G_i}+\\norm{G_i\\widehat z^{\\,k}-x_i^k}\\\\\n \\label{eq:905}\n &\\leq \\sqrt{\\gamma^{-1}}\\sqrt{\\alpha_k}\\,\\norm{p^k-p^{k-1}}_\\gamma\\norm{G_i}+\\norm{G_i\\widehat z^{\\,k}-x_i^k},\n \\quad i=1,\\dots, n,\n\\end{align}\nwhere we also used the fact that $\\alpha_k\\leq \\sqrt{\\alpha_k}$ (because $0\\leq \\alpha_k<1$).\nUsing a similar reasoning, we also find\n\\begin{align}\n\\label{eq:TM01}\n \\norm{y_i^k-w_i^k} \\leq \\sqrt{\\alpha_k}\\,\\norm{p^k-p^{k-1}}_\\gamma+\\norm{y_i^k-\\widehat w_i^k},\\qquad i=1,\\dots, n-1.\n \\end{align}\nNote also that, using \\eqref{eq:ext.023}, \\eqref{eq:ext.02}, \\eqref{eq:w_nk}, the property \\eqref{eq:ineq.sum2}, the fact that $\\alpha_k^2\\leq \\alpha_k$ and \\eqref{eq:def.inner}, we obtain\n\\begin{align}\n \\nonumber\n \\dfrac{1}{2}\\norm{y_n^k-w_n^k}^2 &\\leq \\norm{\\widehat w_n^k - w_n^k}^2+\\norm{y_n^k- \\widehat w_n^k}^2\\\\\n \\nonumber\n & \\leq (n-1)\\max_{i=1,\\dots, n-1}\\{\\norm{G_i^*}^2\\}\n \\left(\\sum_{i=1}^{n-1}\\,\\norm{\\widehat w_i^k-w_i^k}^2\\right)+\\norm{y_n^k- \\widehat w_n^k}^2\\\\\n \\nonumber\n & = (n-1)\\max_{i=1,\\dots, n-1}\\{\\norm{G_i^*}^2\\}\n \\left(\\sum_{i=1}^{n-1}\\,\\alpha_k^2\\norm{w_i^{k-1}-w_i^k}^2\\right)+\\norm{y_n^k- \\widehat w_n^k}^2\\\\\n \\nonumber\n & \\leq (n-1)\\max_{i=1,\\dots, n-1}\\{\\norm{G_i^*}^2\\}\n \\left(\\sum_{i=1}^{n-1}\\,\\alpha_k\\norm{w_i^{k-1}-w_i^k}^2\\right)+\\norm{y_n^k- \\widehat w_n^k}^2\\\\\n \\label{eq:904}\n & \\leq (n-1)\\max_{i=1,\\dots, n-1}\\{\\norm{G_i^*}^2\\}\n \\alpha_k\\norm{p^k-p^{k-1}}^2_\\gamma+\\norm{y_n^k- 
\\widehat w_n^k}^2.\n\\end{align}\nTo finish the proof of (d), combine \\eqref{eq:905}--\\eqref{eq:904} with item (c) and assumption \\eqref{eq:901} \n(which, in particular, implies that $\\alpha_k\\norm{p^k-p^{k-1}}_\\gamma^2\\to 0$).\n\n\n(e) Let $p^\\infty:=(z^\\infty, w_1^\\infty, \\cdots, w_{n-1}^\\infty)\\in \\boldsymbol{\\mathcal{H}}$ be a weak cluster point of $\\{p^k\\}$ (by Proposition\n\\ref{prop:Q1} and Theorem \\ref{lm:conv.proj}(a), we have that $\\{p^k\\}$ is bounded) and let\n$\\{p^{k_j}\\}$ be a subsequence of $\\{p^k\\}$ such that $p^{k_j}\\rightharpoonup p^\\infty$, i.e.,\n\\begin{align}\n \\label{eq:903}\n z^{k_j}\\rightharpoonup z^\\infty\\;\\;\\mbox{and}\\;\\; w_i^{k_j}\\rightharpoonup w_i^{\\infty},\\quad i=1,\\dots, n-1.\n %\n\\end{align}\nUsing item (d), \\eqref{eq:903} and the fact that $G_n=I$ (see Assumption (A2)), we obtain\n\\begin{align}\n \\label{eq:16103}\n x_n^{k_j}\\rightharpoonup z^\\infty\\;\\;\\mbox{and}\\;\\; y_i^{k_j}\\rightharpoonup w_i^{\\infty},\\quad i=1,\\dots, n-1.\n\\end{align}\nNow define the maximal monotone operators $A:\\mathcal{H}_0\\rightrightarrows \\mathcal{H}_0$,\n$B:\\mathcal{H}_1\\times\\cdots\\times\\mathcal{H}_{n-1}\\rightrightarrows \\mathcal{H}_1\\times\\cdots\\times\\mathcal{H}_{n-1}$\nand the bounded linear operator $G:\\mathcal{H}_0\\to \\mathcal{H}_1\\times\\cdots\\times\\mathcal{H}_{n-1}$ by\n\\begin{align}\n\\label{eq:161011}\n A:= T_n,\\quad B:=T_1\\times \\cdots \\times T_{n-1}\\;\\;\\mbox{and}\\;\\; G:=(G_1,\\dots, G_{n-1}).\n\\end{align}\nUsing the above definitions of $A$ and $B$ and the inclusions in \\eqref{eq:proxT_i}, we have\n\\begin{align}\n \\label{eq:1610}\n a^{k_j}\\in A(r^{k_j})\\;\\;\\mbox{and}\\;\\; b^{k_j}\\in B(s^{k_j}),\n\\end{align}\nwhere\n\\begin{align}\n \\label{eq:16102}\n a^{k_j}:=y_n^{k_j},\\quad r^{k_j}:=x_n^{k_j},\n \\quad b^{k_j}:=(y_1^{k_j},\\dots, y_{n-1}^{k_j})\\;\\;\\mbox{and}\\;\\; s^{k_j}:=(x_1^{k_j},\\dots, x_{n-1}^{k_j}).\n\\end{align}\nMoreover, \\eqref{eq:16102} and \\eqref{eq:16103} yield \n\\begin{align}\n \\label{eq:16104}\n r^{k_j}\\rightharpoonup r^\\infty\\;\\;\\mbox{and}\\;\\; b^{k_j}\\rightharpoonup b^\\infty,\n\\end{align}\nwhere\n\\begin{align}\n \\label{eq:16105}\n r^\\infty:= z^\\infty\\;\\;\\mbox{and}\\;\\; b^\\infty:=(w_1^\\infty,\\cdots, w_{n-1}^\\infty).\n\\end{align}\nNote now that using \\eqref{eq:16102}, the fact that $G^*(w_1,\\dots, w_{n-1})=\\sum_{i=1}^{n-1}G_i^*w_i$,\nfor all $(w_1,\\cdots, w_{n-1})\\in \\mathcal{H}_1\\times\\cdots\\times\\mathcal{H}_{n-1}$, the fact that $G_n=I$ and the first limit in item (b), we find\n\\begin{align}\n\\label{eq:16106}\n a^{k_j} + G^* b^{k_j} = \\sum_{i=1}^{n}G_i^*y_i^{k_j}\\to 0.\n\n\\end{align}\nUsing now the second limit in item (b) combined with \\eqref{eq:16102} and the definition of $G$ in \\eqref{eq:161011}, we obtain\n\\begin{align}\n\\label{eq:16109}\n G r^{k_j} - s^{k_j} \\to 0.\n\\end{align}\nUsing Lemma \\ref{lm:comb} below combined with \\eqref{eq:1610}, \\eqref{eq:16104}, \\eqref{eq:16106} and \\eqref{eq:16109} we \nconclude that\n\\begin{align*}\n b^\\infty\\in B(Gr^\\infty)\\;\\;\\mbox{and}\\;\\; -G^*b^\\infty \\in A(r^\\infty),\n\\end{align*}\nwhich, in turn, combined with \\eqref{eq:161011} and \\eqref{eq:16105} implies that\n\\begin{align*}\n w_i^\\infty \\in T_i(G_iz^\\infty),\\;\\; i=1,\\cdots, n-1,\\;\\; -\\sum_{i=1}^{n-1}G_i^* w_i^\\infty \\in T_n(z^\\infty),\n\\end{align*}\nwhich means exactly (see \\eqref{eq:KTS}) that $p^\\infty = (z^\\infty, w_1^\\infty, \\cdots, w_{n-1}^\\infty)\\in \\mathcal{S}$. 
Hence, \nwe conclude that every weak cluster point of $\\{p^k\\}$ belongs to $\\mathcal{S}$.\n\\end{proof}\n\n\n\\vspace{.1in}\n\nWe next present the first result on the asymptotic convergence of Algorithm \\ref{alg:iPSM}.\n\n\\begin{theorem}[First result on the convergence of Algorithm \\ref{alg:iPSM}]\n \\label{th:conv01}\n %\n Consider the sequences generated by \\emph{Algorithm \\ref{alg:iPSM}} and let $\\{p^k\\}$ be as in \\eqref{eq:def.pk2}.\n %\n Assume that conditions \\eqref{eq:901} and \\eqref{eq:assum.rho2} of \\emph{Proposition \\ref{pr:varios}} hold, i.e., assume that\n %\n \\begin{align}\n \\label{eq:1423}\n \\sum_{k=0}^\\infty\\,\\alpha_k\\norm{p^k-p^{k-1}}^2_\\gamma<\\infty\n \\end{align}\nand, for $i=1,\\dots, n$,\n\\begin{align}\n \\label{eq:161012}\n 0<\\underline{\\rho}\\leq \\rho_i^k\\leq \\overline{\\rho}<\\infty\\qquad \\forall k\\geq 0.\n\\end{align}\nThen, there exists $(z^\\infty, w_1^\\infty, \\cdots, w_{n-1}^\\infty)\\in \\mathcal{S}$ such that\n$z^k \\rightharpoonup z^\\infty$ and\n$w_i^k\\rightharpoonup w_i^\\infty$, for $i=1,\\dots, n-1$.\nFurthermore, \n$x_i^k\\rightharpoonup G_i z^\\infty$ and\n$y_i^k\\rightharpoonup w_i^\\infty$, for $i=1,\\dots, n$, where $w_n^k$ is as in \\eqref{eq:w_nk}.\n\\end{theorem}\n\\begin{proof}\nIn view of Propositions \\ref{prop:Q1} and \\ref{pr:varios}(e) and Theorem \\ref{lm:conv.proj}(b), one concludes\nthat $\\{p^k\\}$ converges weakly to some $p^\\infty:=(z^\\infty, w_1^\\infty, \\cdots, w_{n-1}^\\infty)$ in \n$\\mathcal{S}$ as in \\eqref{eq:KTS}. Using the definition of $p^k$ in \\eqref{eq:def.pk2} one easily concludes\nthat $z^k \\rightharpoonup z^\\infty$ and $w_i^k\\rightharpoonup w_i^\\infty$, for $i=1,\\dots, n-1$, which \nin turn combined with Proposition \\ref{pr:varios}(d) implies that $x_i^k\\rightharpoonup G_i z^\\infty$ and\n$y_i^k\\rightharpoonup w_i^\\infty$, for $i=1,\\dots, n$.\n\\end{proof}\n\n\n\\vspace{.1in}\n\nThe next theorem shows the convergence of Algorithm \\ref{alg:iPSM} under certain assumptions on $\\alpha$, $\\overline{\\beta}$ and the\nsequence $\\{\\alpha_k\\}$ (see the remarks below).\n\n\\begin{theorem}[Second result on the convergence of Algorithm \\ref{alg:iPSM}]\n \\label{th:conv02}\nConsider the sequences generated by \\emph{Algorithm \\ref{alg:iPSM}} and assume that\n$\\alpha\\in [0,1)$, $\\overline{\\beta}\\in (0,2)$ and $\\{\\alpha_k\\}$ satisfy \\emph{(}for some $\\overline{\\alpha}>0$\\emph{)} the conditions\n\\eqref{eq:alpha_k} and \\eqref{eq:beta(alpha)} of \\emph{Theorem \\ref{thm:second.conv}}, i.e.,\n\\begin{align}\n 0\\leq \\alpha_k\\leq \\alpha_{k+1}\\leq \\alpha<\\overline{\\alpha}<1\\qquad \\forall k\\geq 0\n \\end{align}\n and\n %\n \\begin{align}\n \\label{eq:beta_alpha_4}\n \\overline{\\beta}=\\overline{\\beta}(\\overline{\\alpha}):=\\dfrac{2(\\overline{\\alpha}-1)^2}\n {2(\\overline{\\alpha}-1)^2+3\\overline{\\alpha}-1}.\n \\end{align}\n %\n Assume also that condition \\eqref{eq:161012} holds, i.e., assume that, for $i=1,\\dots, n$,\n %\n \\begin{align}\n 0<\\underline{\\rho}\\leq \\rho_i^k\\leq \\overline{\\rho}<\\infty\\qquad \\forall k\\geq 0.\n\\end{align}\n Then, the same conclusions as in \\emph{Theorem \\ref{th:conv01}} hold, i.e., \n there exists $(z^\\infty, w_1^\\infty, \\cdots, w_{n-1}^\\infty)\\in \\mathcal{S}$ such that\n$z^k \\rightharpoonup z^\\infty$ and\n$w_i^k\\rightharpoonup w_i^\\infty$, for $i=1,\\dots, n-1$.\nFurthermore, \n$x_i^k\\rightharpoonup G_i z^\\infty$ and\n$y_i^k\\rightharpoonup w_i^\\infty$, for $i=1,\\dots, n$, where $w_n^k$ is as in 
\\eqref{eq:w_nk}.\n \\end{theorem}\n\\begin{proof}\nIn view of Propositions \\ref{prop:Q1} and \\ref{pr:varios}(e) and Theorem \\ref{thm:second.conv}(b), one concludes\nthat $\\{p^k\\}$ converges weakly to some $p^\\infty:=(z^\\infty, w_1^\\infty, \\cdots, w_{n-1}^\\infty)$ in \n$\\mathcal{S}$ as in \\eqref{eq:KTS}. The rest of the proof follows the same argument as in the proof of Theorem \\ref{th:conv01}.\n\\end{proof}\n\n\\vspace{.1in}\n\n\\noindent\n{\\bf Remarks.}\n\n\\begin{itemize}\n \\item[\\mbox{(i)}] We emphasize that the conditions on $\\alpha$, $\\overline{\\beta}$ and on the sequence $\\{\\alpha_k\\}$ above are exactly\n the same as in Theorem \\ref{thm:second.conv}, namely \\eqref{eq:alpha_k} and \\eqref{eq:beta(alpha)}. See also the second remark following\n Theorem \\ref{thm:second.conv} and Figure \\ref{fig02} for a discussion of the interplay between inertial and overrelaxation \n parameters.\n %\n \\item[\\mbox{(ii)}] Note that, since $(z^\\infty, w_1^\\infty, \\cdots, w_{n-1}^\\infty)\\in \\mathcal{S}$ in Theorem \\ref{th:conv02}, it follows that the weak limit $z^\\infty \\in \\mathcal{H}_0$ is a solution of the monotone inclusion problem \\eqref{eq:Pmip1}.\n\\end{itemize}\n\n\n\\section{Numerical experiments on LASSO problems}\n \\label{sec:ne}\n\nIn this section, we present simple numerical experiments on $\\ell_1$-regularized least-squares problems\n\\begin{align}\n\\label{eq:lasso}\n\\min_{x\\in \\mathbb{R}^d}\\,\\left\\{\\dfrac{1}{2}\\norm{Qx-b}_2^2 + \\lambda \\norm{x}_1\\right\\},\n\\end{align}\nwhere $Q\\in \\mathbb{R}^{m\\times d}$, $b\\in \\mathbb{R}^m$ and $\\lambda\\geq 0$. \nLet $\\mathcal{R}=\\{R_1,\\dots, R_r\\}$ be an arbitrary partition\n\\footnote{$R_i\\neq \\emptyset\\; (i=1,\\dots, r)$, $R_i\\cap R_j=\\emptyset$ for $i\\neq j$ and $\\cup_{i=1}^r\\,R_i = \\{1,\\dots, m\\}$.} \nof $\\{1,\\dots, m\\}$\nand, for $i=1,\\dots, r$, let $Q_i\\in \\mathbb{R}^{|R_i|\\times d}$ be the submatrix of $Q$ with rows corresponding to indices\nin $R_i$ and similarly let $b_i\\in \\mathbb{R}^{|R_i|}$ be the corresponding subvector of $b$.\nThen, problem \\eqref{eq:lasso} is equivalent to the minimization problem\n\\begin{align*}\n %\n \\min_{x\\in \\mathbb{R}^d}\\,\\left\\{\\sum_{i=1}^r\\,\\dfrac{1}{2}\\norm{Q_i x-b_i}_{2}^2 + \\lambda \\norm{x}_1\\right\\},\n\\end{align*}\nwhich, in turn, is clearly equivalent to the monotone inclusion problem\n\\begin{align}\n \\label{eq:lasso03}\n 0\\in \\sum_{i=1}^r\\,Q_i^T(Q_i x - b_i) + \\partial (\\lambda \\norm{x}_1).\n\\end{align} \nOn the other hand, \\eqref{eq:lasso03} is a special instance of the monotone inclusion problem \\eqref{eq:Pmip1} with $z\\leftarrow x$, $n=r+1$, $G_i=I$ ($i=1,\\dots, n$), \n\\begin{align*}\n T_i(x)=Q_i^T(Q_i x -b_i)\\;( i=1,\\dots, n-1)\\;\\;\\mbox{and}\\;\\; T_n(x)=\\partial (\\lambda \\norm{x}_1).\n\\end{align*}\n\nWe shall apply Algorithm \\ref{alg:iPSM} for solving \\eqref{eq:lasso03} (and, in particular, \\eqref{eq:lasso}) with the following choice of\nparameters (see Steps 0, 1, 2 and 4 of Algorithm \\ref{alg:iPSM}):\n\\begin{center}\n$\\alpha_k\\equiv \\alpha=0.1$,\\; $\\sigma=0.99$,\\; $\\gamma=1$,\\; $\\rho_i^k\\equiv 1$\\;\\;\\mbox{and}\\;\\;\n$\\beta_k\\equiv \\underline{\\beta}=\\overline{\\beta} = 1.5519$.\n\\end{center}\nThe value $1.5519$ is computed from \\eqref{eq:beta_alpha_4} with $\\overline{\\alpha}=0.17>\\alpha$. 
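Explicitly, substituting $\\overline{\\alpha}=0.17$ into \\eqref{eq:beta_alpha_4} gives
\\begin{align*}
\\overline{\\beta}(0.17)=\\dfrac{2\\,(0.17-1)^2}{2\\,(0.17-1)^2+3\\,(0.17)-1}
=\\dfrac{1.3778}{1.3778-0.49}=\\dfrac{1.3778}{0.8878}\\approx 1.5519.
\\end{align*}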
\nFollowing \\cite{Johnst.Eckst.2018}, we stop\nthe algorithm using the stopping criterion\n\\begin{align}\n\\label{eq:stop_crit}\n \\dfrac{|F(z^k) - F^*|}{F^*}\\leq 10^{-4},\n\\end{align}\nwhere $F(\\cdot)$ denotes the objective function in \\eqref{eq:lasso} and $F^*$ is the optimal value of the \nproblem estimated by running Algorithm \\ref{alg:iPSM} at least $10^4$ iterations and taking the minimum objective value. \n\nAt each iteration $k$ of Algorithm \\ref{alg:iPSM}, we used two different strategies for computing $(x_i^k,y_i^k)$ ($i=1,\\dots, n$) satisfying \\eqref{eq:proxT_i}: for $i=1,\\dots, n-1$, in which case $ T_i(x)=Q_i^T(Q_i x -b_i)$, we implemented the\nstandard conjugate gradient (CG) algorithm for computing $x=x_i^k$ as an approximate solution of the linear system \n\\begin{align*}\n (Q_i^T Q_i + I) x =\\widehat z^k + \\widehat w_i^k + Q_i^T b_i\n\\end{align*}\nuntil the satisfaction of the relative-error condition in \\eqref{eq:proxT_i} with $y_i^k:=T_i(x_i^k)$ by the residual $e_i^k:=T_i(x_i^k)+x_i^k - (\\widehat z^k + \\widehat w_i^k)$. On the other hand, for $i=n$, in which case $T_n(x)=\\partial (\\lambda \\norm{x}_1)$, we\nset $x_i^k = \\mbox{prox}_{\\lambda \\|\\cdot\\|_1}(\\widehat z^k + \\widehat w_i^k)$ and\n$y_i^k = (\\widehat z^k + \\widehat w_i^k) - x_i^k$ (in this case, $e_i^k=0$).\n\n\n\\vspace{.1in}\n\\vspace{.1in}\n\n\\noindent\n{\\bf Data sets}.\nWe implemented Algorithm \\ref{alg:iPSM} using the following data sets:\n\\begin{itemize}\n\\item Four randomly generated instances of \\eqref{eq:lasso}: RandomA, RandomB, RandomC and RandomD. We used the Matlab command ``randn'' to generate $Q\\in \\mathbb{R}^{m\\times d}$, and $b\\in \\mathbb{R}^m$ with \n$b_j \\in \\{0,1\\}\\; (j=1,\\dots, m)$ where $b=(b_1,\\dots, b_j, \\dots, b_m)$ (see Table \\ref{tab:random}).\n\\begin{table}\n\\caption{Dimensions of $Q\\in \\mathbb{R}^{m\\times d}$ and $b\\in \\mathbb{R}^m$, size $r$ of the partition $\\mathcal{R}$ of $\\{1,\\dots, m\\}$ and number of rows of each submatrix of $Q$ on \nfour randomly generated instances of \\eqref{eq:lasso}}\n\\begin{center}\n\\begin{tabular}{cc|c|c|c|l}\n\\cline{2-5}\n& \\multicolumn{1}{ |c| }{\\multirow{2}{*}{$m$}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{$d$}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{$r$}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{$ \\mid R_i \\mid $}} \\\\ \n & \\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c | }{} & \\multicolumn{1}{ |c | }{} \\\\\\cline{1-5}\n\\multicolumn{1}{ |c | }{\\multirow{2}{*}{RandomA} } & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{1000}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{1000}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{10}} & \\multicolumn{1}{ |c| }\n{\\multirow{2}{*}{\\hspace{-0.2cm}$100$ $\\; (i=1, \\dots , 10)$}} & \\\\\n\\multicolumn{1}{ |c| }{} &\n\\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\\\ \\cline{1-5}\n\\multicolumn{1}{ |c| }{\\multirow{2}{*}{RandomB} } & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{5000}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{100}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{20}} & \\multicolumn{1}{ |c| }\n{\\multirow{2}{*}{\\hspace{-0.2cm}$250$ $\\; (i=1, \\dots , 20)$}} & \\\\ \n\\multicolumn{1}{ |c | }{} &\n\\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\\\ \\cline{1-5}\n\\multicolumn{1}{ |c | }{\\multirow{2}{*}{RandomC} } & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{50000}} & 
\\multicolumn{1}{ |c| }{\\multirow{2}{*}{100}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{250}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{$200$ $ \\; (i=1, \\dots , 250)$}} & \\\\ \n\\multicolumn{1}{ |c | }{} &\n\\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\\\ \\cline{1-5}\n\\multicolumn{1}{ |c | }{\\multirow{2}{*}{RandomD} } & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{100000}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{100}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{325}} & \\multicolumn{1}{ |c| }{$307$ $ \\; (i=1, \\dots , 324)$} & \\\\ \\cline{5-5}\n\\multicolumn{1}{ |c| }{} &\n\\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }\n{\\hspace{-1cm}$532$ $ \\; (i=325)$} & \\\\ \\cline{1-5}\n\\end{tabular}\n\\end{center}\n\\label{tab:random}\n\\end{table}\n\\item Five data sets (real examples) from the UCI Machine Learning Repository \\cite{Dua:2019}: \n the blog feedback dataset (BlogFeedback)\n\\footnote{https:\/\/archive.ics.uci.edu\/ml\/datasets\/BlogFeedback.}\n, \ncommunities and crime dataset (Crime)\n\\footnote{http:\/\/archive.ics.uci.edu\/ml\/datasets\/communities+and+crime.}\n, \nDrivFace dataset (DrivFace)\n\\footnote{https:\/\/archive.ics.uci.edu\/ml\/datasets\/DrivFace.}\n,\nSingle-Pixed Camera (Mug32)\n\\footnote{see \\cite{alv.eck.ger.mel-preprint19}.}\nand Breast Cancer Wisconsin (Diagnostic)\ndataset (Wisconsin) \\footnote{https:\/\/archive.ics.uci.edu\/ml\/datasets\/Breast+Cancer+Wisconsin+(Diagnostic).}\n(see Table \\ref{tab:exampleUCI}).\n\\begin{table}\n\\caption{Dimensions of $Q\\in \\mathbb{R}^{m\\times d}$ and $b\\in \\mathbb{R}^m$, size $r$ of the partition $\\mathcal{R}$ of $\\{1,\\dots, m\\}$ and number of rows of each submatrix of $Q$ on five real examples (from the UCI Machine Learning Repository \\cite{Dua:2019}) of \\eqref{eq:lasso}}\n\\begin{center}\n\\begin{tabular}{cc|c|c|c|l}\n\\cline{2-5}\n& \\multicolumn{1}{ |c| }{\\multirow{2}{*}{$m$}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{$d$}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{$r$}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{$\\mid R_i\\mid$}} \\\\ \n & \\multicolumn{1}{ |c | }{} & \\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c | }{} \\\\\\cline{1-5}\n\\multicolumn{1}{ |c | }{\\multirow{2}{*}{BlogFeedback} } & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{60021}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{280}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{175}} & \\multicolumn{1}{ |c| }{$343$ $ \\; (i=1, \\dots,174)$} & \\\\ \\cline{5-5}\n\\multicolumn{1}{ |c| }{} &\n\\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }\n{\\hspace{-1.08cm}$339$ $ \\; (i=175)$} & \\\\ \\cline{1-5}\n\\multicolumn{1}{ |c| }{\\multirow{2}{*}{Crime} } & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{1994}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{121}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{10}} & \\multicolumn{1}{ |c| }\n{\\hspace{-0.4cm}$200$ $ \\; (i=1, \\dots, 9)$} & \\\\ \\cline{5-5}\n\\multicolumn{1}{ |c | }{} &\n\\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }\n{\\hspace{-1.3cm}$194$ $ \\; (i=10)$} & \\\\ \\cline{1-5}\n\\multicolumn{1}{ |c |}{\\multirow{2}{*}{DrivFace} } & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{606}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{6400}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{6}} & \\multicolumn{1}{ 
|c|}{\\multirow{2}{*}\n{\\hspace{-0.406cm}$101$ $ \\; (i=1,\\dots,6)$}} & \\\\ \n\\multicolumn{1}{ |c| }{} &\n\\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\\\ \\cline{1-5}\n\\multicolumn{1}{ |c| }{\\multirow{2}{*}{Mug32} } & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{410}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{1024}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{4}} & \\multicolumn{1}{ |c| }\n{\\hspace{-0.7cm}$100$ $ \\; (i=1,2,3)$} & \\\\ \\cline{5-5}\n\\multicolumn{1}{ |c | }{} &\n\\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }\n{\\hspace{-1.408cm}$110$ $ \\; (i=4)$} & \\\\ \\cline{1-5}\n\\multicolumn{1}{ |c| }{\\multirow{2}{*}{Wisconsin} } & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{198}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{30}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}{3}} & \\multicolumn{1}{ |c| }{\\multirow{2}{*}\n{\\hspace{-0.8cm}$66$ $\\;(i=1,2,3)$}} & \\\\ \n\\multicolumn{1}{ |c | }{} &\n\\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\multicolumn{1}{ |c| }{} & \\\\ \\cline{1-5}\n\\end{tabular}\n\\end{center}\n\\label{tab:exampleUCI}\n\\end{table} \n\\end{itemize}\nWe also used $\\lambda = 0.1\\norm{Q^T b}_\\infty$ (see \\cite{boy.par.chu-dis.ftml11}) in\n\\eqref{eq:lasso}. Table \\ref{lasso_outer} shows the number of outer iterations, and Table \\ref{lasso_runtime}\nshows runtimes in seconds.\nFigures \\ref{figura:graf} and \\ref{figura:graf02} show the same results graphically (see the stopping criterion \\eqref{eq:stop_crit}). \n\n\n\\begin{table}\n\\caption{Outer iterations for LASSO problems} \n\\vspace{0.3cm}\n\\centering\n \\begin{tabular}{llrrcrrcrrcc}\\hline \\hline \\\\\n Problem & & & & PS & & & PS\\_in\\_rel & & & $ \\dfrac{iteration 2}{ iteration 1} $ &\\\\ \n & & & & \\tiny $iteration 1$ & & & \\tiny $iteration 2$ & & & & \\\\ \\hline \\\\\n BlogFeedback & & & & 2968 & & & \\textbf{2342} & & & 0.7891 & \\\\\n Crime & & & & \\textbf{211} & & & 216 & & & 1.0237 & \\\\\n DrivFace & & & & 2008 & & & \\textbf{585} & & & 0.2913 & \\\\\n Mug32 & & & & 203 & & & \\textbf{192} & & & 0.9458 & \\\\\n Wisconsin & & & & 211 & & & \\textbf{210} & & & 0.9952 & \\\\\n RandomA & & & & 219 & & & \\textbf{185} & & & 0.8447 & \\\\\n RandomB & & & & 23 & & & \\textbf{21} & & & 0.9131 & \\\\\n RandomC & & & & 408 & & & \\textbf{151} & & & 0.3701 & \\\\\n RandomD & & & & 507 & & & \\textbf{278} & & & 0.5483 & \\\\ \\hline \\\\\n Geometric mean & & & & 337.04 & & & \\textbf{231.98} & & & 0.6883 & \\\\ \\hline \\hline \\\\\n \\end{tabular}\n\\label{lasso_outer}\n\\end{table}\n\\begin{table}\n\\caption{LASSO runtimes in seconds} \n\\vspace{0.3cm}\n\\centering\n \\begin{tabular}{llrrcrrcrrcc}\\hline \\hline \\\\\n Problem & & & & PS & & & PS\\_in\\_relerr & & & $ \\dfrac{time 2}{ time 1} $ &\\\\ \n & & & & \\tiny $time 1$ & & & \\tiny $time 2$ & & & & \\\\ \\hline \\\\\n BlogFeedback & & & & 207.18 & & & \\textbf{130.44} & & & 0.6296 & \\\\\n Crime & & & & 0.85 & & & \\textbf{0.78} & & & 0.9176 & \\\\\n DrivFace & & & & 133.19 & & & \\textbf{37.11} & & & 0.2786 & \\\\\n Mug32 & & & & 1.36 & & & \\textbf{1.18} & & & 0.8676 & \\\\\n Wisconsin & & & & 0.15 & & & \\textbf{0.11} & & & 0.7333 & \\\\\n randomA & & & & 2.45 & & & \\textbf{1.69} & & & 0.6898 & \\\\\n randomB & & & & 0.25 & & & \\textbf{0.13} & & & 0.52 & \\\\\n randomC & & & & 10.53 & & & \\textbf{4.08} & & & 0.3875 & \\\\\n randomD & & & & 20.09 & & & 
\\textbf{11.31} & & & 0.5629 & \\\\ \\hline \\\\\n Geometric mean & & & & 3.79 & & & \\textbf{2.57} & & & 0.6793 & \\\\ \\hline \\hline \\\\\n \\end{tabular}\n\\label{lasso_runtime}\n\\end{table}\n\n\\begin{figure}\n \\centering\n \\caption{Comparison of performance in LASSO problems}\n \\label{figura:graf}\n \\vspace{0.2cm}\n \\begin{minipage}{\\linewidth}\n \\centering\n \\subfloat{\n \\includegraphics[scale=0.32]{fig1.eps} } \n ~\n \\subfloat{\n \\includegraphics[scale=0.32]{fig2.eps}}\n \\end{minipage}\\par\\medskip\n %\n \\begin{minipage}{\\linewidth}\n \\centering\n \\subfloat{\n \\includegraphics[scale=0.32]{fig3.eps}} \n ~\n \\subfloat{\n \\includegraphics[scale=0.32]{fig4.eps}}\n \\end{minipage}\\par\\medskip\n %\n \\begin{minipage}{\\linewidth}\n \\centering\n \\subfloat{\n \\includegraphics[scale=0.32]{fig5.eps}} \n \\end{minipage}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\caption{Comparison of performance in LASSO problems}\n \\label{figura:graf02}\n \\vspace{0.2cm}\n\\begin{minipage}{\\linewidth}\n \\centering\n \\subfloat{\n \\includegraphics[scale=0.32]{fig6.eps}} \n ~\n \\subfloat{\n \\includegraphics[scale=0.32]{fig7.eps}}\n \\end{minipage}\\par\\medskip\n %\n \\begin{minipage}{\\linewidth}\n \\centering\n \\subfloat{\n \\includegraphics[scale=0.32]{fig8.eps}}\n ~\n \\subfloat{\n \\includegraphics[scale=0.32]{fig9.eps}}\n \\end{minipage}\n \\end{figure}\n\n
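As a complement to the description of the inner solvers given before the tables, the listing below is a minimal Python (NumPy) sketch of the two strategies used to produce $(x_i^k,y_i^k)$: a conjugate gradient loop for the quadratic operators $T_i(x)=Q_i^T(Q_ix-b_i)$, $i=1,\\dots,n-1$, and the closed-form proximal step (soft-thresholding) for $T_n=\\partial(\\lambda\\norm{\\cdot}_1)$. It is an illustration only, with synthetic data and illustrative variable names, and is not the code used for the experiments reported above; in particular, the relative-error acceptance test of \\eqref{eq:proxT_i} is not reproduced here, we merely form the residual $e_i^k$ that enters it.
\\begin{verbatim}
import numpy as np

def prox_l1(v, lam):
    # Soft-thresholding: exact proximal map of lam*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def cg(matvec, rhs, tol=1e-10, max_iter=500):
    # Standard conjugate gradient for the SPD system matvec(x) = rhs.
    x = np.zeros_like(rhs)
    r = rhs - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        a = rs / (p @ Ap)
        x += a * p
        r -= a * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
Qi, bi = rng.standard_normal((50, 20)), rng.standard_normal(50)
zhat, what = rng.standard_normal(20), rng.standard_normal(20)  # stand-ins for the points fed to the solvers
lam = 0.1

# Quadratic block (i < n): solve (Qi^T Qi + I) x = zhat + what + Qi^T bi by CG,
# then set y = T_i(x) and form the residual entering the relative-error test.
x_i = cg(lambda v: Qi.T @ (Qi @ v) + v, zhat + what + Qi.T @ bi)
y_i = Qi.T @ (Qi @ x_i - bi)
e_i = y_i + x_i - (zhat + what)

# l1 block (i = n): exact prox, so the residual vanishes identically.
x_n = prox_l1(zhat + what, lam)
y_n = (zhat + what) - x_n
\\end{verbatim}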
For all these reasons,\nspin-squeezed states of BECs have met with wide interest, and it has\nbeen pointed out early on that such states can naturally arise in\ntwo-component BECs due to different scattering lengths between the\ninternal states \\cite{Sorensen01}. Yet, experiments with two-component\nBECs have not produced such states, except when atomic interactions\nwere enhanced with the help of a Feshbach resonance \\cite{Gross10} or\nby actively separating the spin components in a state-dependent trap\n\\cite{Riedel10}. Both methods have led to spectacular results, but\ncome at the price of a considerably more complex setup. Here we\ndescribe an experiment where spin squeezing occurs spontaneously after\nan internal state quench, the dynamics being initiated simply by an\ninitial $\\pi\/2$ pulse \\cite{Haine14} applied to a rubidium BEC in a\nharmonic trap.\n\n\\section{Origin of spontaneous spin squeezing}\nThe basic idea of creating spin squeezing by atomic interaction in a\nBEC, as originally envisaged in 2001 \\cite{Sorensen01}, is easily\nunderstood in the basis of well-defined atom numbers $\\ket{N_1}$ and\n$\\ket{N_2}$=$\\ket{N-N_1}$, where the index refers to the spin state\nand $N$ is the total atom number, which we consider fixed for now. On\nthe $N$-atom Bloch sphere, each state with a given $N_1$ corresponds\nto a circle of fixed latitude. If the energy of these states depends\nmonotonically on $N_1$, a superposition of several $\\ket{N_1}$, such\nas a coherent state, will not evolve with constant phase speed on the\nBloch sphere, but will be sheared. This leads to spin squeezing due to\nthe well-known ``one-axis twisting'' Hamiltonian\n\\cite{Kitagawa93}. More precisely, for a BEC with spin states $i=1,2$\nhaving spatial wavefunctions $\\phi_i(\\mathbf r)$, and\n$S_z=(N_2-N_1)\/2$, the spin interaction can be written\n\\begin{equation}\nH_{\\text{int}}\/\\hbar = \\chi S_z^2\\,.\n\\label{eq:Hint}\n\\end{equation}\nNeglecting the dependence of the condensate mode on atom number,\n$\\chi$ can be written simply \\cite{Pezze16}\\footnote{In general, $\\chi$ can be\n expressed as the derivative of the condensate relative phase with\n respect to the relative number of particles \\cite{Li09}, and in\n stationary conditions\n $U_{jk}=-\\frac{1}{2\\hbar}\\partial_{N_j} \\mu_k$.}.\n\\begin{equation}\n\\chi = (U_{11}+U_{22}-2U_{12})\/(2\\hbar)\\qquad \\mbox{and}\\qquad\nU_{jk} = g_{jk}\\int dr^3|\\phi_j|^2|\\phi_k|^2\n\\end{equation}\nwith $g_{jk}=4\\pi \\hbar^2 a_{jk}\/m$ and $m$ the mass of the\natom. Significant squeezing develops for times $t$ such that\n$\\chi t \\ge \\frac{1}{N}$ \\cite{Sinatra12a}. However, when all\nscattering lengths are nearly equal,\n$a_{11}\\approx a_{22}\\approx a_{12}$ as in the case of $^{87}$Rb, and\nthere is full spatial overlap between the components, population\nimbalance causes only a small energy change, and $\\chi$ becomes so\nsmall that the required $t$ is unrealistically large. This rules out\nthe straightforward implementation of BEC squeezing in $^{87}$Rb --\nthe most widely used atom in two-component BEC experiments and in\ncold-atom metrology today. If, on the other hand, the overlap of the\ncomponents is reduced, then a sizeable nonlinear interaction exists\neven for identical scattering lengths\n\\cite{Riedel10,Maussang10,Haine14}.\n\nUnder properly chosen trapping conditions, spatial dynamics of the\nspin components will occur spontaneously \\cite{Mertes07}, reducing the\noverlap and thus creating spin squeezing. 
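The orders of magnitude involved can be seen from a minimal numerical sketch. It only compares the scattering-length combinations entering $\\chi$, assuming (crudely) identical mode functions for both components and ignoring how the density profiles change upon separation, so it is not a calculation of $\\chi$ itself; the $^{87}$Rb values are those quoted in the simulation section below.
\\begin{verbatim}
# 87Rb scattering lengths in units of the Bohr radius (values quoted later
# in the text): a11 = 100.4, a22 = 95.44, a12 = 98.00.
a11, a22, a12 = 100.4, 95.44, 98.00

# chi is proportional to U11 + U22 - 2*U12.  If both components share the
# same mode function, all overlap integrals are equal and chi scales as
full_overlap = a11 + a22 - 2.0 * a12   # = -0.16, i.e. nearly zero
# With vanishing overlap, U12 = 0 and (keeping the same mode functions)
no_overlap = a11 + a22                 # = 195.84

print(no_overlap / abs(full_overlap))  # ~1.2e3 gain in |chi|
\\end{verbatim}
The roughly three orders of magnitude gained by suppressing the overlap are what make the spontaneously occurring spatial separation discussed next so effective.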
Note that the spatial\ndynamics is created by the same difference of scattering lengths\nwhich, while too weak to create spin squeezing on its own, can be\nstrong enough to drive the spatial separation which then causes the\nsqueezing. $\\chi$ dynamically increases during the separation,\ngenerating the squeezing, and then decreases again as the atoms\noscillate back to their initial position.\n\nIn this article, we experimentally demonstrate this effect. In our\nexperiment, an elongated trap is operated near the ``magic'' bias\nfield \\cite{Harber02,Szmuk15} where trapping frequencies are identical\nfor two hyperfine ground state sublevels\n$\\ket{1}\\equiv\\ket{F=1,m_F=-1}$ and $\\ket{2}\\equiv\\ket{F=2,\n m_F=1}$. The condensate is initially prepared in $\\ket{1}$ and\nsubjected to a $\\pi\/2$ pulse on the $\\ket{1}\\leftrightarrow\\ket{2}$\ntransition. The subsequent free evolution in the cigar-shaped trap\nleads to demixing of the two components\n\\cite{Hall98,Mertes07,Egorov11,Haine14,Nicklas15}, initiating the\nsqueezing dynamics (Fig.~\\ref{fig:scheme}). By applying a second pulse\nto close the spin interferometer when the $\\ket{1}$ component\noscillates back into overlap with $\\ket{2}$, we indeed observe not\nonly a contrast revival, but also a simultaneous reduction of spin\nprojection noise, yielding metrological spin squeezing.\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.8\\columnwidth]{Bloch_Spheres2.pdf}\n\\caption{Experimental sequence. A first $\\pi\/2$ pulse of Rabi\n frequency $\\Omega$ places the condensate in a coherent\n superposition. This initiates state-dependent spatial dynamics,\n leading to the shearing of the spin noise distribution. Due to\n asymmetric losses, the mean spin is also tilted below the equatorial\n plane by an angle $\\theta_c$. A second pulse of variable duration\n is applied in order to rotate the spin distribution before detecting\n the atom numbers $N_1$ and $N_2$. The clouds below the Bloch spheres\n represent the spatial dynamics undergone by the two states $\\ket{1}$\n (blue) and $\\ket{2}$ (red).}\n\\label{fig:scheme}\n\\end{figure}\n\n\\section{Experiment}\n\nThe experiment is performed on the Trapped-Atom Clock on a Chip (TACC)\nplatform, described in detail in\n\\cite{Lacroute10,Deutsch10,Szmuk15}. In contrast to those references,\nhere we use a BEC. An atom chip generates the magnetic field gradients\nfor trapping and also carries the two-photon, radiofrequency (RF) and\nmicrowave (MW) signals for exciting the clock transition. Atoms are\ninitially trapped in $\\ket{1}$ and cooled by forced RF evaporation in\na tight trap. We continue the RF ramp well into the BEC regime,\nobtaining condensates with no discernible thermal fraction and\ncontaining up to $\\sim 14000$ atoms. The magnetic potential is then\nslowly (600\\,ms) transformed into an interrogation trap with\nfrequencies $\\omega_{x,y,z}=2\\pi\\times(2.7,92,74)\\,\\mbox{Hz}$ unless\notherwise specified, located $z=350\\,\\mu$m below the chip surface\n($z=0$). The lifetime of the BEC in this trap is about $5\\,s$, limited\nby collisions with thermal atoms in the single vacuum cell. The atom\nnumber in this trap is controlled with the MOT loading time and the\nfinal frequency of the evaporation ramp.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.8\\columnwidth]{cloudProfiles2.pdf}\n\\includegraphics[width=0.8\\columnwidth]{Demix3.pdf}\n\\caption{Spatial dynamics observed in absorption imaging after a 30\\,ms\n time of flight. 
Note that these images are taken with an auxiliary\n imaging system along the $y$ axis, which has higher noise than the\n one used for the squeezing measurements below. \n For these measurements, a BEC of $10^{4}$ atoms is\n produced in state $\\ket{1}$ in a trap with trap frequencies\n $\\omega_{x,y,z}=2\\pi\\times(2.7,92,74)\\,\\mbox{Hz}$. A resonant $\\pi\/2$ pulse\n prepares an equal superposition of $\\ket{1}$ and $\\ket{2}$ and the\n cloud dynamics are monitored in time. \n (a) Individual images taken after the evolution times indicated in\n the figure. (b) Many such images integrated along $z$ and assembled to\n show the spatial dynamics along $x$. (A common-mode sloshing that was\n present in this experiment has been subtracted.) (c) 3D coupled\n Gross-Pitaevskii numerical simulation for the atom numbers and trap\n frequencies of the experiment.}\n\\label{fig:TypicalImages}\n\\end{figure}\n\nWe use the $\\ket{1} \\leftrightarrow \\ket{2}$ clock transition, which\nenables first-order cancellation of spatial inhomogeneity of the\ntransition frequency in a magnetic trap\n\\cite{Harber02,Treutlein04}. The transition is driven by a two-photon,\nRF and MW pulse \\cite{Szmuk15} with Rabi frequency\n$\\Omega = 2\\pi\\times 3.6\\,$Hz. The MW signal at 6.8\\,GHz is generated\nby a custom-built synthesizer \\cite{Ramirez10}, while the RF photon of\n$\\approx 2\\,$MHz comes from a commercial direct-digital\nsynthesizer. Both are referenced to SYRTE's active hydrogen maser\n\\cite{Szmuk15}. After preparing a BEC in the interrogation trap, the\nsequence always starts by applying a resonant $\\pi\/2$ pulse to create\nthe superposition $1\/2^{N\/2}(\\ket{1}+\\ket{2})^{\\otimes N}$ (see\n\\ref{sec:appendixExp}). Due to the slight difference in scattering\nlengths, the initial density distribution no longer corresponds to a\nstationary state, and the two components start to oscillate\n\\cite{Hall98,Mertes07,Papp08,Tojo10,Egorov13,Nicklas15}. To reveal the\nresulting spatial dynamics, we have imaged both states using an\nauxiliary imaging system on the $y$ axis, so that the slow $x$ axis\nis visible. Images are taken at variable times after the pulse. A\ntypical result is shown in Fig.~\\ref{fig:TypicalImages}. The\n$\\ket{1}$ component splits into two parts which oscillate along the\nweak axis, while the $\\ket{2}$ component does not separate but\nundergoes a breathing-type oscillation in the center between the\n$\\ket{1}$ component's two lobes. After a period of $1.2\\,$s, the\n$\\ket{1}$ component has come back into superposition with $\\ket{2}$\nand another oscillation begins. For longer times, a third oscillation\nis barely visible. A 3D numerical simulation using coupled\nGross-Pitaevskii equations (GPEs) reproduces the features of the\nobserved oscillation (Fig.~\\ref{fig:TypicalImages}(b)) reasonably\nwell, however, the calculated and measured oscillation frequencies\ndiffer by 20\\,\\%. One possible reason for this difference could be a\nresidual thermal cloud too weak to be visible on the camera images.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.45\\columnwidth]{Lifetime_log_paper.pdf}\n\\includegraphics[width=0.45\\columnwidth]{N_vs_Tr.pdf}\n\\end{center}\n\\caption{Atom number as a function of trapping time in the\n interrogation trap for condensates with all atoms in $\\ket{1}$ (cyan\n squares), all atoms in $\\ket{2}$ (magenta stars) and with atoms in\n an equal superposition (blue diamonds: $N_1$, red circles:\n $N_2$). 
The fit to the $\\ket{1}$ data is a simple exponential,\n yielding the $5$\\,s background-limited lifetime. The other lines are\n not fits, but predictions without adjustable parameters, using\n published values \\cite{Egorov13} for the two-body decay rates,\n $\\gamma_{22} = 8.1(3)\\times10^{-14} \\mbox{cm}^3\/\\mbox{s}$ and\n $\\gamma_{12} = 1.51(18)\\times 10^{-14} \\mbox{cm}^3\/\\mbox{s}$, and\n our experimentally determined densities. The latter are obtained\n from the measured atom numbers and the trap frequencies, assuming a\n BEC in the dimensional crossover regime. (b) Total atom number\n $N_1+N_2$ as a function of time, starting from an initial atom\n number $N\\sim 1.2\\dip{4}$ in equal superposition of $\\ket{1}$ and\n $\\ket{2}$. The blue points are experimental data, while the solid\n line has been obtained by numerical integration of coupled\n GPEs.}\n\\label{fig:losses}\n\\end{figure}\n\nWhile these images are instructive for observing the spatial dynamics,\nmeasurements of the atom numbers are better performed by imaging along\nthe $x$ axis, where the cloud covers fewer pixels. We use saturated\nabsorption imaging \\cite{Reinaudi07} and state-selective release from\nthe trap so that both states can be detected in the same image with a\nback-illuminated deep depletion CCD with high quantum efficiency. The\nimaging system is carefully calibrated for absolute accuracy as\ndescribed in \\ref{sec:appendixExp}.\n\nDensity-dependent atom losses are an important limiting factor in BEC\nspin squeezing \\cite{Gross10,Li08}. While the background-limited\nlifetime of state $\\ket{1}$ is much longer than the oscillation\nperiod, state $\\ket{2}$ has additional loss channels which reduce its\nlifetime. To measure the relevant loss parameters, we prepare a BEC in\n$\\ket{1}$, $\\ket{2}$ or an equal superposition of both states, and\nmeasure the remaining populations in the interrogation trap after\ndifferent trapping times. The results are displayed on\nFig.~\\ref{fig:losses} (points). An exponential fit to the $\\ket{1}$\ndata yields a $5$\\,s background-limited lifetime. The other curves in\nFig.~\\ref{fig:losses}(a) are not fits but predictions without\nadjustable parameters as described in the caption. They reproduce the\ndata well, as does the simulation using coupled GPEs\n(Fig.~\\ref{fig:losses}(b)). In our experiments, number densities in\nthe $\\ket{2}$ state range from $1\\dip{12}$ to\n$8\\dip{12}\\,\\mbox{cm}^{-3}$, corresponding to two-body loss limited\nlifetimes ranging from $12\\,$s to as short as $1.5\\,$s,\nrespectively. In the following, $N_{\\text{i}}$ refers to the initial atom\nnumber and $N_{\\text{f}}$ to the atom number measured at the end of the\nexperimental sequence.\n\n\n\\section{Oscillation of the Ramsey contrast}\n\\label{sec:contrast}\n\nBecause the oscillation puts the atoms into motion and changes the\nspatial overlap of the components, it also manifests itself in the\ncontrast of the Ramsey fringes when a second $\\pi\/2$ pulse is added\nafter a time $T_R$. Fig.~\\ref{fig:contrast}(a) shows the evolution of\nthis contrast (red circles) as a function of $T_R$ for\n$N_{\\text{i}} = 1.2\\times 10^4$. For this atom number, the initial contrast of\n98\\% drops to about 50\\% around 600\\,ms, and then shows a revival at\n$T_R=1.2\\,$s , which reflects the spatial overlap between the two\nmodes at this time. Indeed, we find that the contrast revival time\ncoincides with the spatial oscillation period observed by absorption\nimaging. 
This period depends on trap frequencies and atom number\n\\cite{Egorov13}.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.45\\columnwidth]{contrast_vs_Tr.pdf}\n\\includegraphics[width=0.45\\columnwidth]{Contrast_vs_Nat.pdf}\n\\end{center}\n\\caption{Contrast of a Ramsey measurement consisting of two $\\pi\/2$\n pulses separated in time by $T_R$, performed in the interrogation\n trap with frequencies\n $\\omega_{x,y,z}=2\\pi\\times (2.7,92,74)\\,\\mbox{Hz}$. For each $T_R$,\n the frequency of the two pulses is scanned around resonance to\n measure the fringe contrast $C$, defined by a fit according to\n $P_2=\\frac{1}{2}(1+C\\text{cos}(2\\pi\\Delta\\nu T_R+ \\varphi_{lo}))$,\n where $P_2=N_2\/N_{\\text{f}}$, $T_R$ is the Ramsey time and corresponds to the\n time during which the interferometer is sensitive to phase\n variations, and $\\Delta\\nu$ is the detuning from atomic\n resonance. (a) Measured contrast (red circles), corrected contrast\n (blue squares) and corresponding predictions of the coupled GPE\n simulation (solid lines), as a function of Ramsey time for\n $N_{\\text{i}}=1.2\\times 10^4$. (b) Corrected contrast at the revival time\n (red circles) and half the revival time (purple diamonds) as a\n function of $N_{\\text{f}}$. The corresponding solid lines are results of the\n GPE simulations. The dashed line shows the simulation result\n multiplied with a decoherence term (exponential decay) which\n accounts for decoherence sources not contained in the\n simulation. Its value $\\gamma_d=0.5s^{-1}$ is chosen to match the\n maximum experimental contrast.}\n\\label{fig:contrast}\n\\end{figure}\n\nDue to the state-dependent losses, the population imbalance is\ntime-dependent. Right after the initial $\\pi\/2$ pulse, the polar angle\nof the Bloch vector is $\\theta=\\pi\/2$, but then slowly evolves to a\nvalue $\\theta=\\pi\/2+\\theta_c$ at time $T_R$ (Fig.~\\ref{fig:scheme}).\nTo obtain maximum contrast (and thus, maximum phase sensitivity in the\nfinal measurement), $\\theta_c$ must be taken into account. In an\nactual atomic clock or interferometer, this can be achieved by\ninserting a correction pulse with a well-defined phase before the\nsecond $\\pi\/2$ pulse to remove the known mean value $\\bar{\\theta}_c$.\nThe contrast that one would get by applying such a correction pulse\ncan also be derived numerically using simple geometric considerations\n(\\ref{sec:appendixExp}). We have used both, correction pulses and\nnumerical contrast correction, and find that the results are\nconsistent. Unless otherwise indicated, the results below use the\nnumerical correction method. Note that only the mean value of\n$\\theta_c$ can be removed or corrected, while the noise introduced by the\nstatistical nature of the losses remains and contributes to the final\nnoise budget \\cite{Gross10,Li08}. We will come back to this point\nbelow.\n\nThe corrected contrast, represented by blue squares in\nFig.~\\ref{fig:contrast}(a), shows a first revival of 82\\% for\n$N_{\\text{i}}=1.2\\times 10^4$. The precise\nvalue of the contrast, and thus the spatial dynamics, also depends on\natom number, as shown in Fig.~\\ref{fig:contrast}(b). Lower atom\nnumbers result in higher contrast revivals.\n\nThe numerical simulations described above qualitatively reproduce the\nobserved time evolution and provide some additional insight (solid\nlines in Fig.~\\ref{fig:contrast}). The contrast minimum occurring at\nhalf the revival time decays faster than the contrast maximum at\nthe revival time. 
Its simulated\nvalue is in good agreement with the experiment, confirming that the\ndecay is caused by a stronger spatial separation for higher atom\nnumbers. Both the demixing period and the contrast at revival time are\noverestimated in the simulation, even though the population decay is\nwell reproduced (cf.~Fig.~\\ref{fig:losses}(b)). The lower revival\ncontrast suggests experimental sources of decoherence that are not\ncontained in the simulation. Indeed, multiplying by\na decoherence term with $\\gamma_d=0.5\\,\\mbox{s}^{-1}$ brings the simulated\ncontrast into agreement with the measured values (dashed line in\nFig.~\\ref{fig:contrast}(b)). For the revival period, the source of the\ndeviation is less obvious, one possible candidate being a dilute\nthermal cloud as mentioned above.\n\n\nNote that the density-dependent frequency shift \\cite{Harber02},\ncombined with the spatial dynamics and atom losses, leads to a time\ndependence of the atomic transition frequency $\\nu_{12}$: the\nresonance frequency of the pulse applied in the beginning of the\nsequence is slightly higher than that at the revival time. Although\nthe shift is small (on the order of $-10^{-4}\\,$Hz per atom for our\ntrap), it is easily detected in a metrology setup like ours and\nneeds to be taken into account in the squeezing measurements, as\ndetailed below. In particular, for every atom number and Ramsey time,\nwe use the appropriate effective resonance frequency, which is determined\nin a separate measurement (see~\\ref{sec:appendixExp}).\n\n\\section{Spin noise measurements}\n\n\n\\begin{figure}[tb!]\n\\centering\n\\includegraphics[width=\\columnwidth]{Squeezing_final_test.pdf}\n\\caption{Measured spin noise for a final atom number $N_{\\text{f}}\\approx5000$\n and trap frequencies\n $\\omega_{x,y,z}=2\\pi\\times (2.7,92,74)\\,\\mbox{Hz}$. The squeezing\n factor $\\xi^2=\\Delta_n S_z^2 = 4\\Delta S_z^2\/C^2\\overline{N_{\\text{f}}}$ is\n shown as a function of the tomography angle $\\alpha$, with error bars\n corresponding to a 68$\\%$ confidence interval. The inset is a zoom\n of the main plot in the region where $\\xi^2$ reaches its minimum.}\n\\label{fig:squeezing_result}\n\\end{figure}\n\nWe use spin noise tomography to characterize the spin distribution\nthat is generated in the dynamically evolving two-component BEC. As\nbefore, a BEC with a precisely controlled atom number is produced in\n$\\ket{1}$ and we apply a first near-resonant $\\pi\/2$ pulse which puts\neach atom into a coherent superposition of the two clock\nstates. The BEC then evolves freely in the trap for a time $T_R$,\nwhich we adjust to coincide exactly with the contrast revival time\nmeasured above. During the free evolution, the spin distribution\nundergoes the nonlinear collisional interaction enhanced by the\nspatial separation of the two components. At the time $T_R$, a second\npulse (``analysis pulse'') is used to rotate the spin distribution\nabout its center, which has been determined separately (see\nFig.~\\ref{fig:scheme} and \\ref{sec:appendixExp}). By changing the\nduration of the analysis pulse, the rotation angle $\\alpha$\n(``tomography angle'') can be varied. After this rotation, the trap is\nswitched off and the atom numbers $N_1$ and $N_2$ are measured. For\neach $\\alpha$, the whole preparation and analysis sequence is repeated\na large number of times (typically 300 repetitions) and the normalized\npopulation difference $S_z^n = \\frac{1}{2}\\frac{N_2-N_1}{N_1+N_2}$ is\ndetermined. 
The number squeezing\n$\\mathcal{V}^2 = \\frac{4\\Delta\n S_z^2}{\\overline{N_{\\text{f}}}\\cos(\\overline{\\theta_c})^2}$ and the\nsqueezing factor\n$\\xi^2=\\frac{4\\Delta\n S_z^2}{\\overline{N_{\\text{f}}}C^2\\cos(\\overline{\\theta_c})^2}$\n\\cite{Wineland94} are then derived in order to quantify the spin noise\nreduction and the metrologically useful spin squeezing, respectively,\n$\\Delta S_z^2$ being the variance of the spin in the $y$-$z$ plane\n(Fig.~\\ref{fig:scheme}). The contrast is determined separately using\nthe procedure detailed in\nFig.~\\ref{fig:contrast}. Fig.~\\ref{fig:squeezing_result} shows the\nresult for a final atom number $N_{\\text{f}}\\approx5000$ and trap frequencies\n$\\omega_{x,y,z}=2\\pi\\times (2.7,92,74)\\,\\mbox{Hz}$. As expected, the\nmeasured noise corresponds to a slightly tilted, ellipse-shaped\ndistribution. The minimum squeezing factor occurs for an angle\n$\\alpha=2.5^\\circ$ and reaches $\\xi^2 = -1.3 \\pm 0.4$ dB with a\ncontrast of 90$\\pm$1$\\%$ for this parameter set. This corresponds to\natom number fluctuations of $\\pm 32$ atoms for each component.\n\n\\begin{figure}[tb!]\n\\centering\n\\includegraphics[width=0.47\\columnwidth]{Squeezing_vs_N_final.pdf}\n\\includegraphics[width=0.47\\columnwidth]{SqueezingDSz_vs_N_final.pdf}\n\\caption{Squeezing factor (a) and number squeezing (b) as a function\n of the detected atom number for two different traps. Black circles:\n $\\omega_{x,y,z}=2\\pi\\times (2.7,92,74)\\,\\mbox{Hz}$, green squares:\n $\\omega_{x,y,z}=2\\pi\\times (4.4,128,113)\\,\\mbox{Hz}$. The red point\n corresponds to the data displayed in\n Fig.~\\ref{fig:squeezing_result}. The results for the stronger trap\n are generally worse in spite of the faster dynamics\n ($T_R=0.7\\,$s). While the metrological squeezing factor deteriorates\n with increasing atom number, no clear tendency is visible in the number\n squeezing. This indicates that the squeezing factor is\n mostly limited by the contrast reduction occurring when increasing\n the atom number, as shown in Fig.~\\ref{fig:contrast}(b).}\n\\label{fig:squeezingVsNat}\n\\end{figure}\n\nWe have repeated these measurements for different atom numbers up to\nthe maximum BEC atom number accessible in our experiment, and for a\nsecond, stronger trap with frequencies\n$\\omega_{x,y,z}=2\\pi\\times (4.4,128,113)\\,\\mbox{Hz}$. The results are\nshown in Fig.~\\ref{fig:squeezingVsNat}. The squeezing factor $\\xi$\ndeteriorates for $N_{\\text{f}}>5000$, but seems to saturate for smaller\n$N_{\\text{f}}$. Interestingly, the number fluctuations\n(Fig.~\\ref{fig:squeezingVsNat}(b)) do not show these tendencies, but\nmaintain a constant level within the error bars. The deterioration of\n$\\xi$ is mostly due to the reduced contrast at high atom numbers\n(Fig.~\\ref{fig:contrast}(b)). Both effects will be discussed in the\nnext section.\n\n\\section{Limiting factors}\nThe squeezing factor $\\xi$ observed at the revival time results from\nthe competition between the twisting interaction (eq.~\\ref{eq:Hint})\nand the state-dependent losses and imperfect spatial revival\ndynamics: the latter two introduce new fluctuations and reduce\ncontrast. The two main parameters that can be experimentally\ncontrolled are the atom number $N$ and the trap frequencies\n$\\omega_{x,y,z}$. Higher atom numbers and higher trap frequencies\nincrease the condensate density, which accelerates the squeezing\ndynamics by increasing $\\chi$, but also accelerates the two-body\nlosses.
In the case of a homogeneous system or in separated harmonic\ntraps for the two components, the two effects cancel\n\\cite{Li08,Sinatra12a}, so that one does not expect a density\ndependence of the squeezing factor $\\xi$. In our case, the spatial\ndynamics lead to a significantly more complicated situation. $\\chi$ as\nwell as the contrast depend on the spatial overlap of the spin\ncomponents, which is time-dependent (cf.~Fig.~\\ref{fig:contrast}) with\nan evolution that depends on $N$ as well as on $\\omega_{x,y,z}$. The\nrather high value of the contrast at $T_R\/2$ (Fig.~\\ref{fig:contrast})\nindicates that the component separation is not complete. For complete\nseparation and our range of atom numbers, it is known that $\\chi$\nwould be large enough to reduce number fluctuations by several orders\nof magnitude in a time much shorter than our revival time (see\n\\ref{sec:appendixSimul}), even when losses are taken into account. In\nthe absence of losses, these high values could still be reached with\nincomplete separation, at the expense of a longer squeezing\ntime. Thus, in our situation, the state-dependent losses\n(Fig.~\\ref{fig:losses}) clearly have a major effect on the final\nresult. In an attempt to obtain more quantitative predictions, we have\nperformed beyond-GPE simulations, described in the next section.\n\nApart from these fundamental contributions, technical noise such as\nphase noise can limit the measurement of the noise reduction induced\nby the squeezing process. In order to evaluate our system in terms of\ntechnical instabilities, a standard clock measurement, similar to the\none conducted in \\cite{Szmuk15}, has been performed using the same\nexperimental conditions as for Fig.~\\ref{fig:squeezing_result}. This\nmeasurement yielded a fractional frequency stability of\n$9.7\\times 10^{-12}\\tau^{-1\/2}$. Several noise sources have been\ninvestigated to explain this stability, and atom loss has been\nidentified as the major contribution to the stability budget\n($8.47\\times 10^{-12}\\tau^{-1\/2}$). This is due to the fact that, for\neach shot, we only have access to the final populations. We therefore\ndo not precisely know how many atoms have been lost during the\nsequence, nor when they were lost. For instance, if an atom is lost at\nthe beginning of the Ramsey time, it will not contribute to the\ncollisional shift, whereas if it is lost right before the second\ninterrogation pulse, it was partly responsible for this frequency\nshift, but will not be detected. This leads to noise on $S_z$ that\nwe cannot correct. This noise also impacts the squeezing measurement,\ncontributing about 9\\% of the quantum projection\nnoise. Subtracting this noise would bring $\\xi^2$ from $-1.3\\,$dB to\n$\\xi^2 \\approx -2\\,$dB: its contribution is non-negligible, but it\ndoes not limit the order of magnitude of the observed squeezing.\n\n\\section{Simulations beyond GPE}\nWhile the spatial dynamics is well described by the coupled\nGross-Pitaevskii simulations mentioned above, the quantum spin\ndynamics generating the spin squeezing cannot be captured by such a\nmean-field approach. Furthermore, as we have seen, the asymmetric\nlosses significantly affect the state of the system over the\nrelatively long times needed for the spontaneous spin squeezing to\noccur. In order to take into account all of these features in a\nconsistent way, we performed simulations using a Wigner method\ninspired by \\cite{Opanchuk12}.
To limit the drawbacks of the\ntruncated Wigner method \\cite{Sinatra02} -- which are related to the\nfact that the added quantum noise in each mode efficiently thermalizes\nin 3D, introducing spurious effects -- we implemented a ``minimal\nversion'' of the Wigner method, where we project the quantum noise on\nthe condensate mode for each component.\n\n\\subsection{Description of the projected Wigner method}\nThe implementation of the method consists in $(i)$ generating\nclassical fields $\\psi_1(\\mathbf{r},0^+)$ and $\\psi_2(\\mathbf{r},0^+)$\nnormalized to the atom number in each component, that sample the\ninitial probability distribution after the pulse, and $(ii)$ evolving\nthem with stochastic equations. Besides the usual Hamiltonian terms,\nthese equations involve a damping term due to non-linear losses and\nthe associated noise. The results for the observables are then\nobtained by averaging over many stochastic realizations.\n\n\\subsubsection{Initial state with partition noise}\nAt $t=0^-$, before the mixing pulse, all $N$ particles are in the\ninternal state $\\ket{1}$ where a condensate of wave function\n$\\phi_1(\\mathbf{r},0^-)$ is present. We approximate the field for\n$\\ket{1}$ by $\\psi_1(\\mathbf{r},0^-)=\\sqrt{N}\\phi_1(\\mathbf{r},0^-)$.\nThe field for $\\ket{2}$ is in vacuum, that is, it is filled with\nquantum noise in each mode. In contrast to what is usually done in the\nWigner method, we project the vacuum fluctuations of field 2 on the\ncondensate mode $\\phi_1(\\mathbf{r},0^-)$ that will be macroscopically\npopulated after the pulse, and we keep only this contribution. We then\nobtain after the mixing pulse\n\\begin{eqnarray}\n\\psi_1(\\mathbf{r},0^+)&=&\\frac{1}{\\sqrt{2}} \\left[\\psi_1(\\mathbf{r},0^-)-\\phi_1(\\mathbf{r},0^-)b \\right]\\\\\n\\psi_2(\\mathbf{r},0^+)&=&\\frac{1}{\\sqrt{2}} \\left[\\psi_1(\\mathbf{r},0^-)+\\phi_1(\\mathbf{r},0^-)b \\right]\n\\end{eqnarray}\nwhere $b$ is a stochastic complex Gaussian variable with $\\langle b^\\ast b \\rangle=\\frac{1}{2}$.\nAs $N_j(t)=\\int \\mathbf{dr} |\\psi_j(\\mathbf{r},t)|^2$, one has $\\langle N_1-N_2\\rangle(0^+)=0$ and $\\Delta^2(N_1-N_2)(0^+)=N$.\n\n\\subsubsection{Time evolution}\nStarting from the stochastic equations in \\cite{Opanchuk12}, we apply\nthe same idea and project the noise due to non-linear losses over the\ntime-dependent condensate modes. Including $2-2$ and $1-2$ two-body\nlosses, plus one-body losses for the two states, we finally have for\nthe evolution during $dt$:\n\\begin{eqnarray}\n d\\psi_1&=&-i dt \\left[(\\hat{h}_1- i K_1)+g_{11}|\\psi_1|^2+(g_{12}-i K_{12})|\\psi_2|^2 \\right]\\psi_1 + \n {\\cal P}_{\\phi_1}[{\\rm \\Delta_1}] \\\\\n d\\psi_2&=&-i dt \\left[(\\hat{h}_2- i K_2)+(g_{22}- 2 i K_{22} ) |\\psi_2|^2+ (g_{12} - i K_{12})|\\psi_1|^2 \\right]\\psi_2 \n \\nonumber \\\\\n && + {\\cal P}_{\\phi_2}[{\\rm \\Delta_2}]\\,, \n\\end{eqnarray}\nwhere $\\hat{h}_j$ is the one-body Hamiltonian operator including the\nkinetic energy and the external potential for the internal state $j$,\n$g_{jk}=(4\\pi \\hbar^2 a_{jk})\/m$ as above, $K_{12}=\\gamma_{12}\/2$,\n$K_{22}=\\gamma_{22}\/4$ are two-body loss rate constants, and\n$K_1=K_2=\\tau^{-1}$ are one-body loss rate constants equal to the\ninverse lifetime in the trap. 
The projected noises have the\nexpressions\n\\begin{eqnarray}\n {\\cal P}_{\\phi_1}[{\\rm \\Delta_1}] &=& \\phi_1(\\mathbf{r},t) \\left[ B_{12}(t) \\sqrt{K_{12}I_{12}}+B_1(t)\\sqrt{K_1}\\right]\\\\\n {\\cal P}_{\\phi_2}[{\\rm \\Delta_2}] &=& \\phi_2(\\mathbf{r},t) \\left[\n B_{22}(t)\n \\sqrt{4K_{22}I_{22}}+B_{12}(t)\n \\sqrt{K_{12}I_{12}}+B_2(t)\\sqrt{K_2}\\right]\n\\end{eqnarray}\nwhere the $B_{12}(t)$, $B_{22}(t)$, $B_{1}(t)$ and $B_{2}(t)$ are\nindependent $\\delta$-correlated complex Gaussian noises of variance\n$dt$, e.g. $\\langle B_{12}^\\ast(t) B_{12}(t')\\rangle=\\delta(t-t')dt$,\nand\n$I_{jk}=\\int \\mathbf{dr} |\\phi_j(\\mathbf{r},t)\n\\psi_k(\\mathbf{r},t)|^2$.\nWe have tested this method by comparing its results with an\nexact solution of the two-mode model with losses \\cite{Li08}. Details\ncan be found in \\ref{sec:appendixSimul}.\n\n\\subsection{Simulation results}\n\n\\begin{figure}\n\\begin{center}\n \\includegraphics[width=0.495\\linewidth]{fig_WigC.pdf}\\includegraphics[width=0.505\\linewidth]{fig_WigXi.pdf}\n \\caption{Contrast (a) and Spin squeezing (b) as a function of time.\n Solid lines: Wigner simulations, including spatial dynamics,\n quantum spin dynamics and particle losses, for three different\n choices of the scattering lengths. Symbols: experiment (for (b),\n the experimental squeezing is measured at the time corresponding\n to the first contrast revival). The initial atom number is\n $N=10^4$. Trap frequencies\n $\\omega_{x,y,z}=2\\pi\\times(2.9,92,74)$Hz. Lifetime\n $\\tau=5$s. Two-body loss rate constants\n $\\gamma_{22}=8.1\\times10^{-14}$cm${}^{3}\/$s and\n $\\gamma_{12}=1.51\\times10^{-14}$cm${}^{3}\/$s. Scattering lengths\n in Bohr radii units: $a_{11}=100.4$ for the three curves. Red\n curve $a_{22}=95.44$ \\cite{Egorov13}, $a_{12}=98.00$\n \\cite{Egorov13}. Green curve $a_{22}=95.00$ \\cite{Mertes07},\n $a_{12}=97.66$ \\cite{Mertes07}. Blue curve $a_{22}=95.68$\n \\cite{KokkelmansPC}, $a_{12}=97.66$ \\cite{Mertes07}. We used 800\n realizations for the Wigner simulation. The statistical\n uncertainty is around $10\\%$ for the spin squeezing, corresponding\n to 0.4dB on the figure. The spatial grid had $128\\times8\\times8$\n points in the three directions and the initial temperature is\n zero. At the squeezing time in the simulation $T\\simeq 1.37$s,\n approximately $N_{\\text{f}} \\simeq 6000$ atoms are left in the trap, in a\n proportion $N_{1f}\/N_{2f} \\simeq 5\/3$ for the two states.\n\\label{fig:Wigner}\n}\n\\end{center}\n\\end{figure}\n\nIn Fig.~\\ref{fig:Wigner} we show the results of the projected Wigner\nsimulation for an initial atom number $N_{\\text{i}}=10^4$ and three different\nchoices of the scattering lengths $a_{12}$ and $a_{22}$ chosen among\npublished values that differ by $0.7\\%$ at most.\nFig.~\\ref{fig:Wigner}(a) shows the contrast and\nFig.~\\ref{fig:Wigner}(b) shows the squeezing as a function of\ntime. The experimental data for similar parameters is shown as symbols\nfor comparison. Given that the demixing dynamics which induces the\nsqueezing is driven by the small differences between the scattering\nlengths, it is not surprising that, in the absence of an external\nstate-dependent potential imposing the spatial separation\n\\cite{Hall98,Riedel10}, the squeezing result is very sensitive to the\nprecise values of the scattering lengths. 
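For orientation, the way the squeezing parameter is extracted from such stochastic realizations can be illustrated with a zero-dimensional toy version in Python: two modes, no losses, a fixed $\\chi$ of the order quoted in the next paragraph, and the same initial sampling as above. It is a qualitative sketch only, not the projected 3D simulation used for Fig.~\\ref{fig:Wigner}.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, chi, t, n_real = 5000, 7.5e-5, 1.2, 4000   # atoms, chi (1/s), time (s), realizations

# Initial Wigner sampling after the pi/2 pulse (two-mode version of the
# prescription above), with <b* b> = 1/2.
b = (rng.standard_normal(n_real) + 1j * rng.standard_normal(n_real)) / 2.0
psi1 = (np.sqrt(N) - b) / np.sqrt(2.0)
psi2 = (np.sqrt(N) + b) / np.sqrt(2.0)

# Lossless one-axis-twisting evolution of the classical fields:
# d(psi1)/dt = +i*chi*Sz*psi1,  d(psi2)/dt = -i*chi*Sz*psi2,  Sz conserved.
Sz = 0.5 * (np.abs(psi2) ** 2 - np.abs(psi1) ** 2)
psi1 = psi1 * np.exp(1j * chi * Sz * t)
psi2 = psi2 * np.exp(-1j * chi * Sz * t)

# Spin components and noise tomography in the plane orthogonal to the mean spin.
Sx = np.real(np.conj(psi1) * psi2)
Sy = np.imag(np.conj(psi1) * psi2)
alphas = np.linspace(0.0, np.pi, 720)
variances = [np.var(np.cos(a) * Sz + np.sin(a) * Sy) for a in alphas]
xi2 = N * min(variances) / np.mean(Sx) ** 2   # Wineland squeezing parameter
print(10.0 * np.log10(xi2), "dB")             # of order -2 dB for these numbers
\\end{verbatim}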
To estimate the effective\nnonlinearity for the different choices of the scattering lengths, we\ncalculate the parameter $\\chi$ of the one-axis twisting Hamiltonian\nat the stationary state in our geometry. We obtain\n$\\chi=7.5\\times10^{-5}\\,\\mbox{s}^{-1}$ with the scattering length values\nfrom \\cite{Egorov13} (red curve), $\\chi=7.3\\times10^{-5}\\,\\mbox{s}^{-1}$\nwith the values from \\cite{Mertes07} (green curve), and\n$\\chi=24.6\\times10^{-5}\\,\\mbox{s}^{-1}$ for the combination\n\\cite{Mertes07}-\\cite{KokkelmansPC} (blue curve).\n\nComparing simulations and experiment, the contrast oscillations in the\nexperiment have a smaller amplitude and shorter period than in the\nsimulations (Fig.~\\ref{fig:Wigner}(a)), as was already observed with the GPE\nsimulations. For the squeezing factor, the simulations do not allow a\nquantitative comparison to experiment due to their strong\ndependence on the scattering lengths. As shown in\nFig.~\\ref{fig:Wigner}(b), depending on the choice of the scattering\nlength values, and despite their relatively high accuracy (compared to\nthe values available for other elements), the prediction at the\nrevival time varies between no squeezing at all and about $-1.8\\,$dB.\n\nWe conclude that spontaneous squeezing in our geometry is compatible\nwith the results of our simulations, although we cannot reproduce all\nthe features of the experimental data. If a quantitative agreement for\nthe contrast dynamics can be attained, it would be interesting to use\nthe extreme sensitivity to the scattering lengths to infer very\nprecise values for them from the experiment, similarly to\n\\cite{Egorov13}.\n\n\n\\section{Conclusion and outlook}\n\nOur results show that nonclassical spin dynamics occur spontaneously\nin a two-component BEC, and can produce spin squeezing in BECs with\nsizeable atom numbers. This supports the notion of squeezing as a\nnaturally occurring form of entanglement. In order to use this\nsqueezing as a resource for quantum metrology in particular, a higher\nlevel of squeezing is desirable. Our results suggest several possible\nroutes. The first is to accelerate the component separation, so that\nsqueezing would be produced on a faster timescale, before particle\nlosses become dominant. To do this, it would suffice to induce a small\nasymmetry between the trapping potentials of the two states at the\nbeginning of the sequence, in order to help the dynamics to start. It\nhas indeed been shown that this greatly enhances and accelerates the\nspatial separation \\cite{Ockeloen13}. In our case, this asymmetry\ncould come from the combination of the quadratic Zeeman effect with\ngravity. This leads to a displacement of the center of the trapping\npotentials for the two clock states that depends on the difference\nbetween the field at the bottom of the trap and the magic field\n\\cite{Rosenbusch09}. Therefore, by scanning the magnetic field at the\ntrap bottom, one could displace the position of the two states and\nstudy its influence on the spatial dynamics. In the same spirit, it\nwould be interesting to study whether the component separation can be\nimproved by modifying the aspect ratio of the trap, perhaps\ndynamically.\n\nAnother path would be to act on the asymmetric two-body losses\nthemselves to reduce their rate.
A possible approach could be to use\nmicrowave dressing during the interrogation time to shift the\n$\\ket{2,2}$ state upward and induce an energy difference between the\ntransitions $\\ket{2,1}\\rightarrow\\ket{2,2}$ and\n$\\ket{2,1}\\rightarrow\\ket{2,0}$, thereby reducing the two-body\ncollision rate in state $\\ket{2,1}$. This could be accomplished using\na one-photon dressing with a $\\sigma$-polarized microwave field. Of\ncourse, one would need to check that this additional coupling does not\nintroduce too much noise on the clock transition. Observing a\nreduction of these losses would also be an interesting subject in itself.\n\n\n\\ack We thank Markus Oberthaler, Philipp Treutlein and Christian Gross\nfor inspiring discussions. This work was supported by the\nD\\'el\\'egation G\\'en\\'erale de l'Armement (DGA) through the ANR ASTRID\nprogram (contract no. ANR-14-ASTR-0010, project ``eeTACC''), the\nEuropean Research Council (ERC) (GA 671133, Advanced Grant\n``EQUEMI''), and the Institut Francilien pour la Recherche sur les\nAtomes Froids (IFRAF).\n