diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzcuxh" "b/data_all_eng_slimpj/shuffled/split2/finalzzcuxh" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzcuxh" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n The classical algorithm for multiplying two $n\\times n$ matrices performs\n $2n^3-n^2$ additions and multiplications. Strassen's algorithm~\\citep{strassen1969gaussian}\n does the job with only $\\mathcal{O}(n^{\\log_27})$ additions and multiplications, by\n recursively applying a certain scheme for computing the product of two $2\\times\n 2$-matrices with only 7 instead of the usual 8 multiplications. The discovery\n of Strassen's algorithm has initiated substantial work during the past 50 years\n on finding the smallest exponent $\\omega$ such that matrix multiplication costs\n $\\mathcal{O}(n^\\omega)$ operations in the coefficient ring.\n The current record is $\\omega\\leq 2.3728639$ and was obtained by~\\cite{LeGall:2014:PTF:2608628.2608664}.\n It improves the previous record of~\\cite{Williams:2012:MMF:2213977.2214056} by just $3\\cdot10^{-7}$.\n Extensive background in this direction is available in text books~\\citep{buergisser-book,landsberg2017geometry} and\n survey articles~\\citep{blaeser-survey2013,pan-survey2018}. \n Contrary to wide-spread belief, Strassen's algorithm is not only efficient in theory but also in practice. \n Special purpose software for exact linear algebra, such as the FFLAS and FFPACK packages~\\citep{DGP:2008},\n have been using it since long, and there are also reports that its performance in a numerical context\n is not as bad as its reputation~\\citep{Huang:2016:SAR:3014904.3014983}.\n \n Besides the quest for the smallest exponent, which only concerns the asymptotic\n complexity for asymptotically large~$n$, it is also interesting to know how many\n multiplications are needed for a specific (small) $n$ to compute the product of two $n\\times\n n$-matrices. Thanks to Strassen, we know that the answer is at most 7 for\n $n=2$, and it can be shown~\\citep{winograd1971multiplication} that there is no way to do it with 6\n multiplications. It can further be shown that, in a certain sense, Strassen's scheme\n is the only way of doing it with 7 multiplications~\\citep{de1978varieties}. \n\n Already for $n=3$, the situation is not completely understood. 
\\cite{laderman1976noncommutative}\n showed that 23 multiplications suffice, and \\cite{blaser2003complexity} showed that at least 19\n multiplications are needed.\n For larger sizes as well as rectangular matrices, many people have been searching\n for new schemes using fewer and fewer coefficient multiplications.\n For $n=4$, the best we know is to apply Strassen's scheme recursively, which requires~49 multiplications.\n For $n=5$, the record of 100 multiplications was held by~\\cite{MAKAROV1987205} for 30 years until it was\n improved to 99 by~\\cite{DBLP:journals\/corr\/Sedoglavic17aa}.\n For $n=6$, there is a recent scheme by~\\cite{smirnov2013bilinear} which needs only 160 multiplications.\n For $n=7$, \\cite{DBLP:journals\/corr\/abs-1712-07935} found a way to compute the product with 250 multiplications.\n For larger sizes and rectangular matrices, see the extensive tables compiled by~\\cite{smirnov2013bilinear,smirnovseveral}\n and~\\cite{fastmm}.\n Many of the schemes for larger matrix sizes are obtained by combining\n multiplication schemes for smaller matrices~\\citep{DBLP:journals\/tcs\/DrevetIS11}.\n \n Although nobody knows whether there is a scheme using only 22 multiplications for~$n=3$ (in an exact and\n non-commutative setting), 23 multiplications can be achieved in many different\n ways. \\cite{DBLP:journals\/siamcomp\/JohnsonM86} have in fact found infinitely many ways.\n They presented a family of schemes involving three free parameters.\n However, their family involves fractional coefficients and therefore\n does not apply to arbitrary coefficient rings~$K$. Many others have reported isolated\n schemes with fractional or approximate coefficients. Such schemes can be \n constructed for example by numerically solving a certain optimization problem,\n or by genetic algorithms. In Laderman's multiplication scheme, all coefficients are\n $+1$,~$-1$, or~$0$, which has the nice feature that it works for any\n coefficient ring. As far as we know, there are so far only three other schemes\n with this additional property; they are due to \\cite{smirnov2013bilinear}, \\cite{oh2013inequivalence}, and\n \\cite{DBLP:journals\/corr\/abs-1108-2830}, respectively. We add more than 13\\,000 new schemes to this list. \n\n The isolated scheme presented by Courtois et al. was not found numerically but with the help of\n a SAT solver. SAT~\\citep{DBLP:series\/faia\/2009-185} refers to the decision problem of propositional logic:\n given a Boolean formula in conjunctive normal form, is there an assignment of the Boolean variables\n such that the formula evaluates to true under this assignment? Although SAT is a prototypical example\n of an NP-complete problem, modern SAT solvers are able to solve very large instances. In addition to\n various industrial applications, they have recently also contributed to the solution of difficult\n mathematical problems; see \\cite{DBLP:conf\/sat\/HeuleKM16} and~\\cite{DBLP:conf\/aaai\/Heule18} for two\n examples. SAT solvers also play a central role in our approach. \n As explained in Section~\\ref{sec:searching}, we first use a SAT solver to find multiplication\n schemes for the coefficient ring~$\\set Z_2$, starting from some known solutions. 
\n In a second step, explained in Section~\\ref{sec:sieving}, we discard solutions that\n are equivalent to solutions found earlier.\n Next, we simplify the new solutions (Sect.~\\ref{sec:switching}), and use them as \n starting points for a new round of searching.\n Altogether about 35 years of computation time were spent in several iterations of this process. \n In the end, we lifted the solutions from $\\set Z_2$ to arbitrary coefficient rings (Sect.~\\ref{sec:signing}),\n and we extracted families with up to 17 free parameters from them (Sect.~\\ref{sec:families}).\n Our 13,000 isolated schemes and our parameterized families are provided in various formats on our website~\\citep{mmr}. \n \n \\section{The Brent Equations}\\label{sec:brent}\n\n The general pattern of a matrix multiplication scheme consists of two sections.\n In the first section, several auxiliary quantities $M_1,M_2,\\dots, M_m$ are computed,\n each of which is a product of a certain linear combination of the entries of the first matrix\n with a certain linear combination of the entries of the second matrix.\n In the second section, the entries of the resulting matrix are obtained as certain\n linear combinations of the auxiliary quantities $M_1,M_2,\\dots, M_m$.\n\n For example, writing\n \\[\n A = \\begin{pmatrix}\n a_{1,1}&a_{1,2}\\\\\n a_{2,1}&a_{2,2}\\\\\n \\end{pmatrix},\\quad\n B = \\begin{pmatrix}\n b_{1,1}&b_{1,2}\\\\\n b_{2,1}&b_{2,2}\n \\end{pmatrix},\\quad\\text{and}\\quad\n C = \\begin{pmatrix}\n c_{1,1}&c_{1,2}\\\\\n c_{2,1}&c_{2,2}\n \\end{pmatrix} := AB, \n \\]\n Strassen's multiplication scheme proceeds as follows:\n \n \\hangindent=3em\\hangafter=1\\emph{First section.}\\\\\n $M_1 = (a_{1,1} + a_{2,2}) (b_{1,1} + b_{2,2})$\\\\\n $M_2 = (a_{2,1} + a_{2,2}) (b_{1,1})$\\\\\n $M_3 = (a_{1,1}) (b_{1,2} - b_{2,2})$\\\\\n $M_4 = (a_{2,2}) (b_{2,1} - b_{1,1})$\\\\\n $M_5 = (a_{1,1} + a_{1,2})(b_{2,2})$\\\\\n $M_6 = (a_{2,1} - a_{1,1}) (b_{1,1}+ b_{1,2})$\\\\\n $M_7 = (a_{1,2} - a_{2,2}) (b_{2,1} + b_{2,2})$\n\n \\hangindent=3em\\hangafter=1\\emph{Second section.}\\\\\n $c_{1,1} = M_1 + M_4 - M_5 + M_7$\\\\\n $c_{1,2} = M_3 + M_5$\\\\\n $c_{2,1} = M_2 + M_4$\\\\\n $c_{2,2} = M_1 - M_2 + M_3 + M_6$.\n\n Observe that the number of multiplications is exactly the number of~$M$'s.\n Also observe that while it is not obvious how to construct such a scheme\n from scratch, checking that a given scheme is correct is an easy and\n straightforward calculation. For example, $c_{2,1}=M_2+M_4=(a_{2,1} + a_{2,2}) (b_{1,1})\n + (a_{2,2}) (b_{2,1} - b_{1,1}) = a_{2,1}b_{1,1} + a_{2,2}b_{2,1}$. 
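\n\n To illustrate how mechanical this check is, the following small Python\/sympy sketch (an illustration only, and just one of many ways to organize the computation) verifies all four entries of Strassen's scheme at once, keeping the matrix entries non-commutative, since the scheme must not rely on commutativity:\n \\begin{verbatim}\nimport sympy as sp\n# matrix entries as non-commuting symbols\na11, a12, a21, a22 = sp.symbols('a11 a12 a21 a22', commutative=False)\nb11, b12, b21, b22 = sp.symbols('b11 b12 b21 b22', commutative=False)\nA = sp.Matrix([[a11, a12], [a21, a22]])\nB = sp.Matrix([[b11, b12], [b21, b22]])\nM1 = (a11 + a22)*(b11 + b22); M2 = (a21 + a22)*b11\nM3 = a11*(b12 - b22);         M4 = a22*(b21 - b11)\nM5 = (a11 + a12)*b22;         M6 = (a21 - a11)*(b11 + b12)\nM7 = (a12 - a22)*(b21 + b22)\nC = sp.Matrix([[M1 + M4 - M5 + M7, M3 + M5],\n               [M2 + M4,           M1 - M2 + M3 + M6]])\nprint((C - A*B).expand())   # expected output: the zero matrix\n\\end{verbatim}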
\n\n In order to search for a multiplication scheme for a prescribed shape of\n matrices (e.g., $3\\times 3$) and a prescribed number of multiplications (e.g.,\n $23$), we can make an ansatz for the coefficients of the various linear\n combinations,\n \\begin{alignat*}1\n M_1 &= (\\alpha_{1,1}^{(1)}a_{1,1} + \\alpha_{1,2}^{(1)}a_{1,2}+\\cdots)(\\beta_{1,1}^{(1)}b_{1,1} + \\beta_{1,2}^{(1)}b_{1,2}+\\cdots)\\\\\n M_2 &= (\\alpha_{1,1}^{(2)}a_{1,1} + \\alpha_{1,2}^{(2)}a_{1,2}+\\cdots)(\\beta_{1,1}^{(2)}b_{1,1} + \\beta_{1,2}^{(2)}b_{1,2}+\\cdots)\\\\\n &\\vdots\\\\\n M_{23} &= (\\alpha_{1,1}^{(23)}a_{1,1} + \\alpha_{1,2}^{(23)}a_{1,2}+\\cdots)(\\beta_{1,1}^{(23)}b_{1,1} + \\beta_{1,2}^{(23)}b_{1,2}+\\cdots)\\\\\n c_{1,1} &= \\gamma_{1,1}^{(1)}M_1 + \\gamma_{1,1}^{(2)}M_2 + \\cdots + \\gamma_{1,1}^{(23)}M_{23}\\\\\n c_{1,2} &= \\gamma_{1,2}^{(1)}M_1 + \\gamma_{1,2}^{(2)}M_2 + \\cdots + \\gamma_{1,2}^{(23)}M_{23}\\\\\n &\\vdots\\\\\n c_{3,3} &= \\gamma_{3,3}^{(1)}M_1 + \\gamma_{3,3}^{(2)}M_2 + \\cdots + \\gamma_{3,3}^{(23)}M_{23}\n \\end{alignat*}\n and then compare coefficients such as to enforce $c_{i,j}=\\sum_k\n a_{i,k}b_{k,j}$. Doing so leads to a system of polynomial equations for the\n undetermined coefficients $\\alpha_{i_1,i_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}},\\beta_{j_1,j_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}},\n \\gamma_{k_1,k_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}$. The equations in this system are known as the Brent\n equations~\\citep{brent1970algorithms}. For $3\\times 3$-matrices and 23 multiplications, the\n equations turn out to be\n \\[\n \\sum_{\\iota=1}^{23} \\alpha_{i_1,i_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}\\beta_{j_1,j_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}\\gamma_{k_1,k_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} = \\delta_{i_2,j_1}\\delta_{i_1,k_1}\\delta_{j_2,k_2}\n \\]\n for $i_1,i_2,j_1,j_2,k_1,k_2\\in\\{1,2,3\\}$, i.e., there are 621 variables and\n 729 cubic equations. The $\\delta_{u,v}$ on the right refer to the\n Kronecker-delta, i.e., $\\delta_{u,v}=1$ if $u=v$ and $\\delta_{u,v}=0$\n otherwise.\n\n The equations become a bit more symmetric if we connect the matrices $A,B,C$\n through $C^\\top=AB$ rather than $C=AB$. In the version with the transposition,\n which we shall use from now on, and which is also more common in the\n literature, the right hand side has to be replaced with\n $\\delta_{i_2,j_1}\\delta_{j_2,k_1}\\delta_{k_2,i_1}$.\n\n In any case, the problem boils down to finding a solution of the Brent\n equations. In principle, this system could be solved using Gr\\\"obner\n bases~\\citep{buchberger65,cox2013ideals,buchberger10}, but doing so would require an absurd amount of computation\n time. Some of the solutions reported in the literature have been found using\n numerical solvers~\\citep{smirnov2013bilinear,oh2013inequivalence},\n and~\\cite{laderman1976noncommutative} claims that his solution\n was found by solving the Brent equations by hand. He writes that he would explain in a later paper\n how exactly he did this, but apparently this later paper has never been written. 
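\n\n Whatever method is used to find them, candidate solutions are cheap to verify against the equations. As a minimal sketch (in Python; the array names and calling conventions are purely illustrative and not part of any of the cited implementations), the following function evaluates all 729 residuals of the transposed Brent equations for a candidate $3\\times3$ scheme given by its coefficient arrays:\n \\begin{verbatim}\nimport itertools\n\ndef brent_residuals(alpha, beta, gamma, modulus=None):\n    # alpha[m][i1][i2], beta[m][j1][j2], gamma[m][k1][k2] hold the coefficients\n    # of the m-th summand (m = 0..22); all residuals vanish iff the scheme is correct\n    res = []\n    for i1, i2, j1, j2, k1, k2 in itertools.product(range(3), repeat=6):\n        s = sum(alpha[m][i1][i2]*beta[m][j1][j2]*gamma[m][k1][k2] for m in range(23))\n        rhs = int(i2 == j1 and j2 == k1 and k2 == i1)\n        res.append((s - rhs) % modulus if modulus else s - rhs)\n    return res\n\n# a scheme is correct over Z_2 iff not any(brent_residuals(a, b, c, modulus=2))\n\\end{verbatim}\n\n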
Only recently,\n \\cite{DBLP:journals\/corr\/Sedoglavic17} has\n given a convincing explanation of how Laderman's scheme can be derived from Strassen's scheme\n for $2\\times 2$ matrices.\n \\cite{DBLP:journals\/corr\/abs-1108-2830} found their solution using a SAT solver.\n We also start our search using SAT solvers.\n\n \\section{SAT Encoding and Streamlining}\\label{sec:searching}\n\n In order to encode the problem as a SAT\n problem, we view the Brent equations as equations over the finite field~$\\set Z_2$, interpreting \n its elements as truth values, \n its addition as exclusive {\\sc or} ($\\oplus$), \n and its multiplication as\n conjunction ($\\land$). These propositional formulas cannot be \n directly processed by most state-of-the-art SAT solvers, because they \n require the formulas to be in conjunctive normal form (CNF). A formula is in \n CNF if it is a conjunction of clauses, where a clause is a disjunction \n ($\\lor$) of \n literals and a literal is a Boolean variable $x$ or the negation \n of a Boolean \n variable ($\\bar x$). To avoid an exponential blow-up when transforming an \n arbitrarily structured formula to CNF, auxiliary variables are introduced \n that abbreviate certain subformulas. For every $i_1,i_2,j_1,j_2\\in\\{1,2,3\\}$ and\n every $\\iota=1,\\dots,23$, we introduce a fresh variable $s^{\\text{\\tiny (}\\iota\\text{\\tiny )}}_{i_1,i_2,j_1,j_2}$\n and impose the condition\n \\[\n s_{i_1,i_2,j_1,j_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}\\leftrightarrow (\\alpha_{i_1,i_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} \\land \\beta_{j_1,j_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}),\n \\]\n whose translation to CNF requires three clauses.\n Similarly, for every $i_1,i_2,j_1,j_2,k_1,k_2\\in\\{1,2,3\\}$ and every\n $\\iota=1,\\dots,23$, we introduce a fresh variable $t_{i_1,i_2,j_1,j_2,k_1,k_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}$\n and impose the condition\n \\[\n t_{i_1,i_2,j_1,j_2,k_1,k_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}\\leftrightarrow (s_{i_1,i_2,j_1,j_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} \\land \\gamma_{k_1,k_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}),\n \\]\n whose translation to CNF costs again three clauses.\n\n \\def\\even{\\mathrm{even}}\\def\\odd{\\mathrm{odd}}\n For each fixed choice $i_1,i_2,j_1,j_2,k_1,k_2\\in\\{1,2,3\\}$, there is a Brent equation\n which says that the number of $\\iota$'s for which $t_{i_1,i_2,j_1,j_2,k_1,k_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}$\n is set to true should be even (if $\\delta_{i_2,j_1}\\delta_{j_2,k_1}\\delta_{k_2,i_1}=0$)\n or that it should be odd (if $\\delta_{i_2,j_1}\\delta_{j_2,k_1}\\delta_{k_2,i_1}=1$).\n It therefore remains to encode the condition that an even number (or an odd number) of a\n given set of $p$ variables should be true, i.e., we need to construct a formula\n $\\even(x_1,\\dots,x_p)$ which is true if and only if an even number among the\n variables $x_1,\\dots,x_p$ is true. Such a formula can again be constructed using\n auxiliary variables. Note that $\\even(x_1,\\dots,x_p)$ is true if and only if\n $\\even(x_1,\\dots,x_i,y)\\land\\even(x_{i+1},\\dots,x_p,y)$ is true, because this is the\n case if and only if both $\\{x_1,\\dots,x_i\\}$ and $\\{x_{i+1},\\dots,x_p\\}$ contain an\n even number of variables set to true (and then $y$ is set to false) or both sets contain\n an odd number of variables set to true (and then $y$ is set to true). 
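\n\n As a concrete illustration of these building blocks, the following Python sketch (using the common DIMACS convention that literals are nonzero integers and negation is integer negation; the variable numbering and the chunk size are illustrative choices, and the precise way of grouping the arguments is immaterial) generates the three clauses for a definition of the form $s\\leftrightarrow(a\\land b)$ and recursively splits a parity constraint into small chunks; for four arguments it reproduces the eight clauses displayed below.\n \\begin{verbatim}\nimport itertools\n\nfresh = itertools.count(10**6)   # assumed to start above all variable indices in use\n\ndef and_clauses(s, a, b):\n    # the three CNF clauses expressing  s <-> (a and b)\n    return [[-a, -b, s], [a, -s], [b, -s]]\n\ndef even_direct(lits):\n    # forbid every assignment in which an odd number of the literals is true\n    out = []\n    for signs in itertools.product([1, -1], repeat=len(lits)):\n        if signs.count(-1) % 2 == 1:\n            out.append([sg*l for sg, l in zip(signs, lits)])\n    return out\n\ndef even_clauses(lits):\n    # an even number of lits is true; split off chunks of three plus a fresh variable\n    lits = list(lits)\n    if len(lits) <= 4:\n        return even_direct(lits)\n    y = next(fresh)\n    return even_clauses(lits[:3] + [y]) + even_clauses(lits[3:] + [y])\n\ndef odd_clauses(lits):\n    lits = list(lits)\n    return even_clauses([-lits[0]] + lits[1:])\n\\end{verbatim}\n\n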
Applying this principle\n recursively for $p = 23$ (the number of summands in each Brent equation), the problem can be broken down to chunks of size four:\n \\begin{alignat*}5\n &\\even(x_1,x_2,x_3,y_1)&&\\land\n \\even(x_4,x_5,x_6,y_2)&&\\land\n \\even(x_7,x_8,x_9,y_3)&&\\land\n \\even(x_{10},x_{11},x_{12},y_4)\\\\\n {}\\land{}&\\even(x_{13},x_{14},x_{15},y_5)&&\\land\n \\even(x_{16},x_{17},x_{18},y_6)&&\\land\n \\even(x_{19},x_{20},x_{21},y_7)&&\\land\n \\even(x_{22},x_{23},y_1,y_8)\\\\\n {}\\land{}&\\even(y_2,y_3,y_4,y_9)&&\\land\n \\even(y_5,y_6,y_7,y_{10})&&\\land\n \\even(y_8,y_9,y_{10},y_{11}).\n \\end{alignat*}\n The small chunks can be encoded directly by observing that $\\even(a,b,c,d)$ is equivalent\n to\n \\begin{alignat*}1\n &(a \\lor b \\lor c \\lor \\bar d) \\land\n (a \\lor b \\lor \\bar c \\lor d) \\land\n (a \\lor \\bar b \\lor c \\lor d) \\land\n (\\bar a \\lor b \\lor c \\lor d) \\land\\\\\n &(a \\lor \\bar b \\lor \\bar c \\lor \\bar d) \\land\n (\\bar a \\lor \\bar b \\lor c \\lor \\bar d) \\land\n (\\bar a \\lor b \\lor \\bar c \\lor \\bar d) \\land\n (\\bar a \\lor \\bar b \\lor \\bar c \\lor d).\n \\end{alignat*}\n For the cases where an odd number of the variables $x_1,\\dots,x_{23}$ must be true, we can\n apply the encoding described above to $\\even(\\bar x_1,x_2,x_3,\\dots,x_{23})$.\n\n The SAT problems obtained in this way are very hard. In order to make the problems more\n tractable, we added further constraints in order to simplify the search performed by the\n solver. This approach is known as streamlining~\\citep{streamlining}. The following\n restrictions turned out to be successful:\n \\begin{itemize}\n \\item Instead of a faithful encoding of the sums in the Brent equations using the\n even predicate as described above, we also used a more restrictive sufficient condition \n which instead of requiring an even number of arguments to be true enforces that zero or\n two arguments should be true. This predicate zero-or-two can be broken\n into at-most-two and not-exactly-one, which can be efficiently encoded as\n \\begin{alignat*}1\n \\textrm{not-exactly-one}(x_1,\\dots,x_p) &= \\bigwedge_{i=1}^p \\Bigl(x_i \\rightarrow \\bigvee_{j\\neq i} x_j\\Bigr)\\\\\n \\textrm{at-most-two}(x_1,\\dots,x_p) &=\n (\\bar x_1\\lor\\bar x_2\\lor\\bar x_3)\\land(\\bar x_1\\lor\\bar x_2\\lor\\bar x_4)\\\\\n &\\quad\\land(\\bar x_1\\lor\\bar x_3\\lor\\bar x_4)\\land(\\bar x_2\\lor\\bar x_3\\lor\\bar x_4)\\\\\n &\\quad\\land(\\bar x_1\\lor y)\\land(\\bar x_2\\lor y)\\land(\\bar x_1\\lor\\bar x_2\\lor z)\\\\\n &\\quad\\land(\\bar x_3\\lor z)\\land(\\bar x_4\\lor z)\\land(\\bar x_3\\lor\\bar x_4\\lor y)\\\\\n &\\quad\\land\\textrm{at-most-two}(y,z,x_5,\\dots,x_p),\n \\end{alignat*}\n where $y$ and $z$ are fresh variables. \nThe first two lines of at-most-two assert that at most two variables \nof $x_1, x_2, x_3, x_4$ are true. \nIf two or more of those variables are \ntrue then the new variables \n$y$ and $z$ have to be both true, if one variable is true, \nthen either $y$ or $z$ has to be true, and if all four variables are false, \nthen also $y$ and $z$ can be both false. Encoding this information \nin $y$ and $z$ allows to recursively apply at-most-two with two \narguments less. 
\nA straightforward direct encoding as in the first two lines \n\t\t is used when $p\\leq 4$.\n \\item We selected a certain portion, say 50\\%, of the variables $\\alpha_{i_1,i_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}$, $\\beta_{j_1,j_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}$, $\\gamma_{k_1,k_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}$ and instantiate them with the values they have in one of the known solutions.\n The SAT solver then has to solve for the remaining variables. It turns out that in\n many cases, it does not just rediscover the known solution but finds a truly different one\n that only happens to have an overlap with the original solution.\n \\item Another approach was to randomly set half of the terms $\\alpha_{i_1,i_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}\\beta_{j_1,j_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}\\gamma_{k_1,k_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}$ with $i_2\\neq j_1$ and $j_2\\neq k_1$ and $k_2\\neq i_1$ to zero. This strategy was\n motivated by the observation that in most of the known solutions, almost all these\n terms are zero.\n \\item A third approach concerns the terms $\\alpha_{i_1,i_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}\\beta_{j_1,j_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}\\gamma_{k_1,k_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}$ with\n $i_2=j_1$ and $j_2=k_1$ and $k_2=i_1$. Again motivated by the inspection of known solutions,\n we specified that for each $\\iota$ either one or two such terms should be one.\n More precisely, we randomly chose a distribution of the 27 terms with $i_2=j_1$ and $j_2=k_1$ and $k_2=i_1$\n to the 23 summands of the scheme, with the condition that 19 summands should contain one term each\n and the remaining four summands should contain two terms each.\n \\end{itemize}\n Each of the latter three approaches was used in combination with both the `even' and the `zero-or-two' encoding\n of the Brent equations. The resulting instances were presented to the SAT solver yalsat by \\cite{yalsat}. When it didn't\n find a solution for an instance within a few minutes, the instance was discarded and a new instance with another\n random choice was tried. A detailed analysis of the effect of our optimizations on the performance of the solver\n is provided in a separate paper~\\citep{sls}.\n\n \\section{Recognizing Equivalences}\\label{sec:sieving}\n\n From any given solution of the Brent equations we can generate many equivalent\n solutions. For example, exchanging $\\alpha$ with $\\beta$ and flipping all\n indices maps a solution to another solution. This operation corresponds to the\n fact that $(AB)^\\top=B^\\top A^\\top$. It is also clear from the equations\n that replacing $\\alpha$ by~$\\beta$, $\\beta$ by~$\\gamma$, and $\\gamma$\n by~$\\alpha$ maps a solution to another solution, although this operation is\n less obvious in terms of matrix multiplication. Finally, for any fixed\n invertible matrix~$U$, we can exploit the fact $AB=AUU^{-1}B$ to map solutions\n to other solutions.\n\n The operations just described form a group of symmetries of matrix\n multiplication which was introduced by \\cite{de1978varieties}, who used them\n for showing that Strassen's scheme for $2\\times 2$ matrices is essentially\n unique: it is unique modulo the action of this symmetry group. 
\n To describe the group more formally, it is convenient to express matrix\n multiplication schemes as tensors,\n \\[\n \\sum_{\\iota=1}^{23}\n \\begin{pmatrix}\n \\alpha_{1,1}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} & \\alpha_{1,2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} & \\alpha_{1,3}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} \\\\\n \\alpha_{2,1}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} & \\alpha_{2,2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} & \\alpha_{2,3}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} \\\\\n \\alpha_{3,1}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} & \\alpha_{3,2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} & \\alpha_{3,3}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}\n \\end{pmatrix}\\otimes\n \\begin{pmatrix}\n \\beta_{1,1}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} & \\beta_{1,2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} & \\beta_{1,3}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} \\\\\n \\beta_{2,1}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} & \\beta_{2,2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} & \\beta_{2,3}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} \\\\\n \\beta_{3,1}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} & \\beta_{3,2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} & \\beta_{3,3}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}\n \\end{pmatrix}\\otimes\n \\begin{pmatrix}\n \\gamma_{1,1}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} & \\gamma_{1,2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} & \\gamma_{1,3}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} \\\\\n \\gamma_{2,1}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} & \\gamma_{2,2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} & \\gamma_{2,3}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} \\\\\n \\gamma_{3,1}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} & \\gamma_{3,2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}} & \\gamma_{3,3}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}\n \\end{pmatrix}.\n \\]\n A scheme is correct if and only if it is equal, as element of $(K^{3\\times\n 3})^{\\otimes3}$, to $\\sum_{i,j,k=1}^3 E_{i,k}\\otimes E_{k,j}\\otimes E_{j,i}$,\n where $E_{u,v}\\in K^{3\\times3}$ refers to the matrix which has a $1$ at\n position $(u,v)$ and zeros everywhere else.\n\n A permutation $\\pi\\in S_3$ acts on\n a tensor $A\\otimes B\\otimes C$ by permuting the three factors, and transposing\n each of them if $\\operatorname{sgn}(\\pi)=-1$. For example, $(1\\ 2)\\cdot(A\\otimes B\\otimes\n C)=B^\\top\\otimes A^\\top\\otimes C^\\top$ and $(1\\ 2\\ 3)\\cdot(A\\otimes B\\otimes\n C)=B\\otimes C\\otimes A$. A triple $(U,V,W)\\in\\operatorname{GL}(K,3)^3$ of invertible\n matrices acts via\n \\[\n (U,V,W)\\cdot(A\\otimes B\\otimes C)=UAV^{-1}\\otimes VBW^{-1}\\otimes WCU^{-1}.\n \\]\n A tuple $(U,V,W,\\pi)\\in\\operatorname{GL}(K,3)^3\\times S_3$ acts on a tensor $A\\otimes\n B\\otimes C$ by first letting the permutation act as described above, and then\n applying the matrices as described above. The set $G=\\operatorname{GL}(K,3)^3\\times S_3$ is\n turned into a group by defining the multiplication in such a way that the\n operation described above becomes a group action. The action of the group $G$\n defined on tensors $A\\otimes B\\otimes C$ is extended to the whole space\n $(K^{3\\times 3})^{\\otimes 3}$ by linearity. In other words, elements of $G$ act\n on sums of tensors by acting independently on all summands.\n\n Two matrix multiplication schemes are called equivalent if they belong to the\n same orbit under the action of~$G$. 
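\n\n For experimenting with schemes it is convenient to have this action available in executable form. The following Python\/sympy sketch (an illustration only; the data layout and the restriction to the field $\\set Z_2$ are choices made for this example) stores a scheme as a list of triples of $3\\times3$ matrices, applies the sandwiching part of the action as well as two of the permutations, and makes the correctness criterion above explicit:\n \\begin{verbatim}\nimport itertools\nimport sympy as sp\n\ndef scheme_tensor(scheme, p=2):\n    # coefficient tensor of sum_i A_i (x) B_i (x) C_i, with entries reduced mod p\n    return {idx: sum(A[idx[0], idx[1]]*B[idx[2], idx[3]]*C[idx[4], idx[5]]\n                     for A, B, C in scheme) % p\n            for idx in itertools.product(range(3), repeat=6)}\n\ndef target_tensor():\n    # the tensor of 3x3 matrix multiplication in the transposed convention\n    return {idx: int(idx[1] == idx[2] and idx[3] == idx[4] and idx[5] == idx[0])\n            for idx in itertools.product(range(3), repeat=6)}\n\ndef sandwich(U, V, W, scheme, p=2):\n    # the action of (U, V, W): A -> U A V^-1, B -> V B W^-1, C -> W C U^-1 (mod p)\n    red = lambda M: M.applyfunc(lambda e: e % p)\n    Ui, Vi, Wi = U.inv_mod(p), V.inv_mod(p), W.inv_mod(p)\n    return [(red(U*A*Vi), red(V*B*Wi), red(W*C*Ui)) for A, B, C in scheme]\n\ndef rotate(scheme):\n    # the 3-cycle (1 2 3):  A (x) B (x) C  ->  B (x) C (x) A\n    return [(B, C, A) for A, B, C in scheme]\n\ndef swap12(scheme):\n    # the transposition (1 2):  A (x) B (x) C  ->  B^T (x) A^T (x) C^T\n    return [(B.T, A.T, C.T) for A, B, C in scheme]\n\n# a scheme S is correct iff scheme_tensor(S) == target_tensor(); applying any of\n# the operations above to a correct scheme again yields a correct scheme\n\\end{verbatim}\n\n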
Whenever a new matrix multiplication scheme\n is discovered, the question is whether it is equivalent to a known scheme, for\n if it is, it should not be considered as new. A common test for checking that\n two schemes are not equivalent proceeds by computing certain invariants of the\n group action. For example, since permutation and multiplication by invertible\n matrices do not change the rank of a matrix, we can count how many matrices of\n rank 1,~2, and~3 appear in the scheme. If the counts differ for two schemes,\n then these schemes cannot be equivalent. For example, \\cite{DBLP:journals\/corr\/abs-1108-2830}\n and \\cite{oh2013inequivalence} proved in this way that their schemes were indeed\n new. Writing a scheme in the form $\\sum_{\\iota=1}^{23}(A_\\iota\\otimes\n B_\\iota\\otimes C_\\iota)$, we can encode this invariant as the polynomial\n $\\sum_{\\iota=1}^{23} (x^{\\operatorname{rank}(A_\\iota)}+x^{\\operatorname{rank}(B_\\iota)}+x^{\\operatorname{rank}(C_\\iota)})$.\n Similarly, also the polynomials\n \\[\n \\sum_{\\iota=1}^{23} x^{\\operatorname{rank}(A_\\iota)+\\operatorname{rank}(B_\\iota)+\\operatorname{rank}(C_\\iota)}\n \\quad\\text{and}\\quad\n x^{\\sum_{\\iota=1}^{23}\\operatorname{rank}(A_\\iota)} +\n x^{\\sum_{\\iota=1}^{23}\\operatorname{rank}(B_\\iota)} +\n x^{\\sum_{\\iota=1}^{23}\\operatorname{rank}(C_\\iota)} \n \\]\n are invariants, because changing the order of summation does not affect the\n relative order of the factors in the tensor, and applying a permutation changes\n the relative order of the factors in every summand in the same way.\n\n When we have two schemes for which all three invariants match, they may\n nevertheless be inequivalent. For checking whether a solution found\n by the SAT solver is really new, comparing invariants is useful as a\n first step, but it is not sufficient. In fact, many solutions found\n by the SAT solver were inequivalent although all three invariants stated\n above agreed. Fortunately, it is not too hard to decide the equivalence of two given\n schemes by constructing, whenever possible, a group element that maps one to\n the other. We can proceed as follows. \n\n Suppose we are given two\n multiplication schemes $S,S'$ and we want to decide whether there exists a\n tuple $(U,V,W,\\pi)\\in\\operatorname{GL}(K,3)^3\\times S_3$ such that $(U,V,W,\\pi)\\cdot S=S'$.\n As far as the permutation is concerned, there are only six candidates,\n so we can simply try each of them. Writing $S=\\sum_{\\iota=1}^{23}(A_\\iota\\otimes\n B_\\iota\\otimes C_\\iota)$ and $S'=\\sum_{\\iota=1}^{23}(A_\\iota'\\otimes B_\\iota'\\otimes\n C_\\iota')$, it remains to find $U,V,W$ that map all the summands of $S$ to the\n summands of~$S'$, albeit possibly in a different order. 
We search for a suitable\n order by the following recursive algorithm, which is initially called with\n $Q$ being full space $K^{3\\times 3}\\times K^{3\\times 3}\\times K^{3\\times\n 3}$.\n\n \\medskip\n \\par\\noindent\n \\emph{Input:} $S,S'$ as above, a basis of a subspace $Q$ of $K^{3\\times 3}\\times K^{3\\times 3}\\times K^{3\\times 3}$\\\\\n \\emph{Output:} A triple $(U,V,W)\\in\\operatorname{GL}(K,3)^3\\cap Q$ with $(U,V,W)\\cdot S=S'$, or $\\bot$ if no such triple exists.\n\n \\step10 if $S$ and $S'$ are empty, then:\n \\step21 return any element $(U,V,W)$ of $Q$ with $\\det(U)\\det(V)\\det(W)\\neq0$, or $\\bot$ if no such element exists.\n \\step30 for all summands $A_\\iota'\\otimes B_\\iota'\\otimes C_\\iota'$ of $S'$, do:\n \\step41 if $\\operatorname{rank}(A_1)=\\operatorname{rank}(A_\\iota')$ and $\\operatorname{rank}(B_1)=\\operatorname{rank}(B_\\iota')$ and $\\operatorname{rank}(C_1)=\\operatorname{rank}(C_\\iota')$, then:\n \\step52 compute a basis of the space $P$ of all $(U,V,W)$ such that $UA_1=A_\\iota'V$, $VB_1=B_\\iota'W$, $WC_1=C_\\iota'U$ by\n making an ansatz, comparing coefficients, and solving a homogeneous linear system.\n \\step62 compute a basis of $R=P\\cap Q$.\n \\step72 if $R$ contains at least one triple $(U,V,W)$ with $\\det(U)\\det(V)\\det(W)\\neq0$, then:\n \\step83 call the algorithm recursively with the first summand of $S$ and the $\\iota$th summand of $S'$ removed, and with $R$ in place of~$Q$.\n \\step93 if the recursive call yields a triple $(U,V,W)$, return it.\n \\step{10}0 return $\\bot$.\n\n \\medskip\n The algorithm terminates because each recursive call is applied to a sum with\n strictly fewer summands.\n The correctness of the algorithm is clear because it essentially performs an\n exhaustive search through all options. \n In order to perform the check in\n step~7, we can consider a generic linear combination of the basis elements\n of~$R$, with variables as coefficients. Then $\\det(U)\\det(V)\\det(W)$ is a\n polynomial in these variables, and the question is whether this polynomial\n vanishes identically on~$K$. Since we are interested in the case $K=\\set Z_2$,\n we can answer this by an exhaustive search.\n\n The recursive structure of the algorithm with up to 23 recursive calls at every\n level may seem prohibitively expensive. However, the two filters in lines~4\n and~7 turn out to cut down the number of recursive calls considerably. A\n straightforward implementation in Mathematica needs no more than about one\n second of computation time to decide whether or not two given schemes are\n equivalent. Of course, we first compare the invariants, which is almost for\n free and suffices to settle many cases.\n\n For each scheme found by the SAT solver we have checked whether it is\n equivalent (for $K=\\set Z_2$) to one of the schemes found earlier, or to one\n of the four known schemes found by Laderman, Smirnov, Oh et al., and\n Courtois et al., respectively. From the roughly $270\\,000$ solutions\n found by the SAT solver that were distinct modulo the order of the summands,\n we isolated about $13\\,000$ schemes that were distinct modulo equivalence.\n In the appendix, we list the number of schemes we found separated by\n invariant. \n\n \\section{Simplifying Solutions}\\label{sec:switching}\n\n We can use the symmetries introduced in the previous section not only to\n recognize that a seemingly new scheme is not really new. We can also use\n them for simplifying schemes. 
A scheme can for example be regarded\n as simpler than another scheme if the number of terms\n $\\alpha_{i_1,i_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}\\beta_{j_1,j_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}\\gamma_{k_1,k_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}$ in\n it which evaluate to $1$ is smaller. Calling this number the \\emph{weight} of\n a scheme, we prefer schemes with smaller weight.\n\n Ideally, we would like to replace every scheme $S$ by an equivalent scheme\n with smallest possible weight. In principle, we could find such a minimal\n equivalent element by applying all elements of $G$ to $S$ and taking the\n smallest result. Unfortunately, even for $K=\\set Z_2$, the group $G$ has\n $168^3\\cdot6=28\\,449\\,792$ elements, so trying them all might be feasible if we had\n to do it for a few schemes, but not for thousands of them.\n If we do not insist in the smallest possible weight, we can take a pragmatic\n approach and just spend for every scheme $S$ a prescribed amount of computation\n time (say half an hour) applying random elements of $G$ to~$S$:\n\n \\medskip\n \\par\\noindent\n \\emph{Input:} a multiplication scheme $S$\\\\\n \\emph{Output:} an equivalent multiplication scheme whose weight is less than or equal to the weight of~$S$.\n\n \\step10 while the time limit is not exhausted, do\n \\step21 pick a group element $g$ at random\n \\step31 if $\\mathrm{weight}(g(S))<\\mathrm{weight}(S)$, then set $S = g(S)$\n \\step40 return $S$\n\n \\medskip\n With this algorithm, we were able to replace about 20\\% of the new schemes found by the SAT solver\n by equivalent schemes with smaller weight. It is not too surprising that no\n improvement was found for the majority of cases, because the way we specified the\n problem to the SAT solver already induces a bias towards solutions with a small weight.\n\n The figure below shows the distribution of our $13\\,000$ schemes according to\n weight, after simplification. It is clear that the weight is always odd, hence\n the small gaps between the bars. 
It is less clear why we seem to have an\n overlay of three normal distributions, but we believe that this is rather an\n artifact of the way we generated the solutions than a structural feature of the\n entire solution set.\n\n \\begin{center}\n \\begin{tikzpicture}[x=2pt,y=2pt,yscale=.1,xscale=1]\n\\foreach \\x\/\\y in {257\/227,259\/129,261\/186,263\/81,265\/153,267\/65,269\/101,271\/36,273\/66,167\/4,275\/31,277\/36,279\/24,281\/21,283\/10,285\/10,287\/11,289\/9,163\/3,293\/4,295\/3,297\/1,299\/2,173\/8,175\/13,177\/14,179\/14,181\/17,183\/38,185\/52,187\/72,189\/90,169\/6,191\/143,193\/165,195\/185,197\/215,199\/252,201\/300,203\/353,205\/331,291\/3,207\/424,209\/379,211\/484,213\/364,215\/509,217\/339,219\/554,221\/315,223\/588,225\/266,227\/557,229\/297,165\/3,231\/533,233\/331,171\/5,235\/484,237\/349,239\/451,241\/330,243\/360,245\/352,247\/278,249\/314,251\/227,253\/290,255\/165}\n\\draw[fill=gray] (\\x,0) rectangle (\\x+1,\\y);\n\\draw[->] (150,0)--(310,0) node[above left] {weight};\n\\draw[->] (150,0)--++(0,610) node[right] {count};\n\\foreach \\x in {160,180,200,220,240,260,280,300} \\draw (\\x.5,0)--++(0,-10) node[below] {\\footnotesize\\x};\n\\foreach \\y in {0,100,200,300,400,500,600} \\draw (150,\\y)--++(0,-1) node[left] {\\footnotesize\\y};\n \\end{tikzpicture}\n \\end{center}\n \n \\section{Generalizing the Coefficient Ring}\\label{sec:signing}\n\n At this point, we have a considerable number of new matrix multiplication\n schemes for the coefficient field $K=\\set Z_2$. The next step is to lift them\n to schemes that work in any coefficient ring. \n The SAT solver presents us with a solution for $\\set Z_2$ in which all\n coefficients are $0$ or~$1$, and in order to lift such a solution, we make the\n hypothesis that this solution originated from a solution for an arbitrary coefficient\n ring in which all coefficients are $+1$, $-1$, or~$0$. The distinction between\n $+1$ and $-1$ gets lost in~$\\set Z_2$, and the task consists in recovering it.\n There is a priori no reason why such a lifting should exist, and indeed, we have\n seen a small number of instances where it fails. One such example is given in the\n appendix. Interestingly however, these examples seem to be very rare. In almost all cases,\n a lifting turned out to exist. \n\n In order to explain the lifting process, we return to the Brent equations discussed in\n Section~\\ref{sec:brent}. We set variables corresponding to coefficients\n that are zero in the SAT solution to zero, which simplifies the system\n considerably. According to the axioms of\n tensor products, we have $(\\lambda A)\\otimes B\\otimes C=A\\otimes(\\lambda B)\n \\otimes C=A\\otimes B\\otimes(\\lambda C)$ for any $A,B,C$ and every\n constant~$\\lambda$. We may therefore select in every summand $A\\otimes\n B\\otimes C$ one variable appearing in $A$ and one variable appearing in $B$ and\n set them to~$+1$. This reduces the number of variables further. However, the\n resulting system is still to hard to be solved directly.\n\n Before calling a general\n purpose Gr\\\"obner bases engine, we apply some simplifications to the system,\n which take into account that we are only interested in solutions whose\n coordinates are $-1$ or~$+1$. In particular, we can replace any exponent $k$\n appearing in any of the polynomials by~$k\\bmod 2$, we can cancel factors that\n clearly do not vanish on the points of interest, and we can replace polynomials\n of the from $xy\\pm1$ by $x\\pm y$.\n These simplifications may bring up some linear polynomials. 
By triangularizing the linear system\n corresponding to these polynomials, we can eliminate some of the variables. We can then\n simplify again, and possibly obtain new linear equations. The process is repeated until\n no further linear equations appear. We then add for each variable $x$ the polynomial $x^2-1$\n and compute a Gr\\\"obner basis with respect to a degree order. If this leads to new\n linear polynomials, we return to iterating triangularization, elimination,\n and simplification until no further linear equations show up, and then compute again a\n degree Gr\\\"obner basis. The whole process is repeated until we obtain a Gr\\\"obner basis\n that does not contain any new linear equations. If there are more than 15 variables left,\n we next compute a minimal associated prime ideal of an elimination ideal involving only five\n variables, and check whether adding it to the original system and computing a Gr\\\"obner basis\n leads to new linear equations. If it does, we start over with the whole procedure.\n Otherwise, we compute the minimal associated prime ideal of the whole system and return\n the solution corresponding to one of the prime factors. The process is summarized in the following\n listing.\n\n \\medskip\n \\par\\noindent\n \\emph{Input:} A finite subset $B$ of $\\set Q[x_1,\\dots,x_n]$\\\\\n \\emph{Output:} A common root $\\xi\\in\\{-1,1\\}^n$ of all the elements of~$B$, or $\\bot$ if no such common root exists.\n\n \\step10 Replace every exponent $k$ appearing in an element of $B$ by $k\\bmod2$\n \\step20 For every $p\\in B$ and every $i$ with $x_i\\mid p$, replace $p$ by $p\/x_i$\n \\step30 Replace every element of the form $xy-1$ or $-xy-1$ by $x-y$ or $x+y$, respectively.\n \\step40 if $B$ now contains linear polynomials, then:\n \\step51 Use them to eliminate some variables, say $y_1,\\dots,y_k$\n \\step61 Call the procedure recursively on the resulting set of polynomials\n \\step71 if there is a solution, extend it to the eliminated variables $y_1,\\dots,y_k$ and return the result\n \\step81 if there is no solution, return $\\bot$.\n \\step{9}0 Compute a Gr\\\"obner basis $G$ of $B\\cup\\{x_i^2-1:i=1,\\dots,n\\}$ with respect to a degree order\n \\step{10}0 if $G=\\{1\\}$, return $\\bot$\n \\step{11}0 if $G$ contains linear polynomials, then call this procedure recursively and return the result\n \\step{12}0 if $n>15$, then:\n \\step{13}1 Compute a basis $P$ of one of the minimal associated prime ideals of $\\langle G\\rangle\\cap\\set Q[x_1,\\dots,x_5]$.\n \\step{14}1 Compute a Gr\\\"obner basis $G'$ of $G\\cup P$ with respect to a degree order\n \\step{15}1 if $G'$ contains linear polynomials, then call this procedure recursively and return the result\n \\step{16}0 Compute a basis $P$ of one of the minimal associated prime ideals of $\\langle G\\rangle\\subseteq\\set Q[x_1,\\dots,x_n]$.\n \\step{17}0 Return the common solution $\\xi$ of~$P$.\n\n \\medskip\n An implementation of this procedure in Mathematica is available on the website of this\n article~\\citep{mmr}. In this implementation, we use Singular \\citep{greuel02} for\n doing the Gr\\\"obner basis calculations and for the computation of minimal associated\n prime ideals. Despite the large number of\n variables, Singular handles the required computations with impressive\n speed, so that the whole signing process takes only about 20 seconds per solution on\n the average. 
Only a small number of cases, which happen to have a few more\n variables than the others, need much longer, up to a few hours.\n\n \\section{Introducing Parameters}\\label{sec:families}\n\n The idea of instantiating some of the variables based on a known scheme and\n then solving for the remaining variables not only applies to SAT\n solving. It also has an algebraic counterpart. Solving the Brent equations with\n algebraic methods is infeasible because the equations are nonlinear, but\n observe that we only have to solve a linear system if we start from a known\n scheme and only replace all $\\gamma_{k_1,k_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}$ by fresh\n variables. Solving linear systems is of course much easier than\n solving nonlinear ones.\n\n More generally, we can select for each $\\iota\\in\\{1,\\dots,23\\}$ separately\n whether we want to replace all $\\alpha_{i_1,i_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}$'s or all\n $\\beta_{j_1,j_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}$'s or all $\\gamma_{k_1,k_2}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}$'s by fresh\n variables, and we still just get a linear system for these variables. Once we\n make a selection, solving the resulting linear system yields an affine vector\n space. One might expect that this affine space will typically consist of a single\n point only, but this is usually not the case.\n\n A solution space with positive dimension can be translated into a\n multiplication scheme involving one or more free parameters. Starting from the\n resulting parameterized scheme, we can play the same game with another\n selection of variables, which may allow us to introduce further parameters. If\n we repeat the procedure several times with random selections of which variables\n are known, we obtain huge schemes involving 40 or more parameters.\n These parameters are however algebraically dependent, or at least it is too\n costly to check whether they are dependent or not. 
We got better results by\n proceeding more systematically, as summarized in the following listing.\n\n \\medskip\n \\par\\noindent\n \\emph{Input:} A matrix multiplication scheme $S=\\sum_{\\iota=1}^{23}(A_\\iota\\otimes B_\\iota\\otimes C_\\iota)$.\n Write $A_\\iota=((\\alpha_{i,j}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}))$, $B_\\iota=((\\beta_{i,j}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}))$, $C_\\iota=((\\gamma_{i,j}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}))$.\\\\\n \\emph{Output:} A family of matrix multiplication schemes with parameters $x_1,x_2,\\dots$\n\n \\step10 for $\\iota=1,\\dots,23$, do:\n \\step21 for every choice $u,v\\in\\{\\alpha,\\beta,\\gamma\\}$ with $u\\neq v$, do:\n \\step32 replace all entries $u_{i,j}^{\\text{\\tiny (}\\iota\\text{\\tiny )}}$ for $i,j=1,\\dots,3$ in $S$ by fresh variables\n \\step42 replace all entries $v_{i,j}^{(m)}$ for $i,j=1,\\dots,3$ and $m\\neq\\iota$ in $S$ by fresh variables\n \\step52 equate the resulting scheme $S$ to $\\sum_{i,j,k} E_{i,j}\\otimes E_{j,k}\\otimes E_{k,i}$ and compare coefficients\n \\step62 solve the resulting inhomogeneous linear system for the fresh variables introduced in steps 3 and~4\n \\step72 substitute the generic solution, using new parameters $x_i,x_{i+1},\\dots$, into~$S$\n \\step80 return $S$\n\n \\medskip\n With this algorithm and some slightly modified variants (e.g., letting the outer loop run backwards or transposing\n the inner and the outer loop), we were able to obtain schemes with altogether up to 17 parameters.\n Although all new parameters introduced in a certain iteration can only appear\n linearly in the scheme, old parameters that were considered as belonging to the\n ground ring during the linear solving can later appear rationally. However, by manually\n applying suitable changes of variables, we managed to remove all denominators from\n all the families we inspected. Not even integer denominators are needed. \n We can also check using Gr\\\"obner bases\n whether the parameters are independent, and for several families with 17 parameters\n they turn out to be. In the language of algebraic geometry, this means that the solution\n set of the Brent equations has at least dimension~17 as an algebraic variety.\n\n One of our families is shown in the appendix, and some further ones are provided\n electronically on our website. These families should be contrasted with the family found by\n Johnson and McLoughlin in the 1980s~\\citep{DBLP:journals\/siamcomp\/JohnsonM86}. In particular, while they lament\n that their family contains fractional coefficients such as $\\frac12$ and $\\frac13$\n and therefore does not apply in every coefficient ring, our families only involve\n integer coefficients and therefore have no such restriction. Moreover, their family\n has only three parameters, and with the method described above, only $6$ additional\n parameters can be introduced into it. The numbers of parameters we managed to introduce\n into the known solutions by Laderman, Courtois et al., Oh et al., and Smirnov\n are $0$,~$6$, $10$, and~$14$, respectively.\n \n \\section{Concluding Remarks}\\label{sec:skimming}\n\n Although we have found many new multiplication schemes with 23 multiplications,\n we did not encounter a single scheme with 22 multiplications. We have checked for\n all schemes whether some of their summands can be merged together using tensor product\n arithmetic. 
For doing so, it would suffice if a certain scheme contains some summands\n which share the same $A$'s, say, and where the\n corresponding $B$'s, say, of these rows are linearly independent. We could then\n express one of these $B$'s in terms of the others and eliminate the summand in\n which it appears. For example, if $B_3=\\beta_1B_1+\\beta_2B_2$, then we have $A\\otimes B_1\\otimes C_1\n + A\\otimes B_2\\otimes C_2+A\\otimes B_3\\otimes C_3=A\\otimes\n B_1\\otimes(C_1+\\beta_1C_3)+A\\otimes B_2\\otimes(C_2+\\beta_2C_3)$. Since none of\n our schemes admits a simplification of this kind, it remains open whether a\n scheme with 22 multiplications exists.\n\n Another open question is: how many further schemes with 23 multiplications and coefficients in $\\{-1,0,1\\}$\n are there? We have no evidence that we have found them all. In fact, we rather believe that there\n are many further ones, possibly including schemes that are very different from\n ours. There may also be parametrized families with more than 17 parameters, and it would\n be interesting to know the maximal possible number of parameters, i.e., the actual dimension\n of the solution set of the Brent equations. \n\n\\bibliographystyle{elsarticle-harv}\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction and Previous Work} \\label{sec:intro}\nDuring the elaboration of~\\cite{BirkvonNeu:36} John von Neumann wrote\nto Garret Birkhoff: ``I would like to make a confession\nwhich may seem immoral: I do not believe absolutely in Hilbert space any more. \nAfter all Hilbert-space (as far as quantum-mechanical things are concerned) \nwas obtained by generalizing Euclidean space, footing on the principle of \n``conserving the validity of all formal rules''. \nThis is very clear, if you consider the axiomatic-geometric definition of\nHilbert-space, where one simply takes Weyl's axioms for a unitary-Euclidean\nspace, drops the condition on the existence of a finite linear basis, and\nreplaces it by a minimum of topological assumptions \n(completeness + separability). Thus Hilbert-space is the straightforward\ngeneralization of Euclidean space, if one considers the {\\em vectors}\nas the essential notions. Now we begin to believe that it is not the \n{\\em vectors} which matter but the {\\em lattice of all linear (closed) \nsubspaces}. Because:\n\\begin{enumerate}\n\\item The vectors ought to represent the physical {\\em states}, but they\ndo it redundantly, up to a complex factor only.\n\\item And besides the {\\em states} are merely a derived notion, the primitive\n(phenomenologically given) notion being the {\\em qualities}, which correspond \nto the {\\em linear closed subspaces}'' \n(see~\\cite{vNeumann_letters}, p. 59, letter dated Nov. 13, Wednesday, 1935).\n\\end{enumerate}\n\nThe goal of this work is to pursue von Neumann's program of describing \nQuantum Logic in terms of closed subspaces and without vectors one step\nfurther. 
\nThis work presents two original features:\n\\begin{itemize}\n\\item it takes a logical approach to Quantum Physics, where states and\npropositions take the main roles, and\n\\item while it assumes the formalism of Hilbert spaces that fits Quantum \nPhysics, it tries the utmost to use only notions, such as states, propositions,\nprojections, orthogonality and so on, that have a meaning, albeit mostly\ntrivial, in Classical Physics.\nSpecial care will be taken to ensure that the quantic principles proposed\nhold classically.\n\\end{itemize}\n\n\\section{Quantum Logic} \\label{sec:Logic}\nOne may say that Logic is the study of the relation between states of the \nworld and propositions used to talk about those states. Quantum logic must\ntherefore be the study of the relation between quantum states and quantum\npropositions. The accepted view is that both quantum states and quantum\npropositions should be represented by closed subspaces of a Hilbert space.\nQuantum states are one-dimensional subspaces.\nQuantum logic is therefore the study of the relation between one-dimensional \nsubspaces and arbitrary closed subspaces.\nOne obvious topic for Quantum logic is therefore the study of the properties\nof projections in Hilbert spaces: a one-dimensional subspace projects onto\na one-dimensional or zero-dimensional subspace of any closed subspace.\nProjections are also central to Quantum Physics since they correspond to the \nchange brought about by the measurement of a physical property. \nPrevious works~\\cite{LEG:Malg} and~\\cite{AndThen:Leibniz} provided \na first study of some of the properties of such projections: \nthey dealt only with qualitative properties. \nThe present paper inaugurates the quantitative study of the \nprojective geometry of complex Hilbert spaces.\n\nThe purpose of the exercise is to shed light on the notion of measurement \nin Quantum Physics by developing a geometry of Hilbert spaces whose \nentities are physically meaningful: states of physical systems and \nmeasurements on physical systems. Our goal can be understood in considering \nthe history of geometry. Euclidean plane geometry was the starting point. \nIts elements are points and lines. \nMathematical developments (due to Descartes in particular) \nenabled a treatment of geometry in the vector space\n$\\mbox{${\\cal R}$}^{n}$. A new definition of geometry, abstracting from the vector space\nstructure and returning to the basic notions of points and lines, enabled\nthe development of non-Euclidean geometries.\nFor Hilbert spaces, historically the algebraic presentation came first.\nThe purpose of this paper is to extract from the algebraic presentation\na leaner presentation similar in spirit to Euclid's geometry.\nOur basic entities are one-dimensional subspaces and, more\ngenerally, closed subspaces {\\em and not vectors}.\n\n\nIn an obvious way, two elements (vectors) of a Hilbert space \ndefine a number, their inner product. \nWe are looking for numbers that characterize relations between subspaces, not \nvectors. \nThis paper proposes to associate a real number\nwith any pair of one-dimensional subspaces $x, y$: \\mbox{$p(x, y)$} \nand, by extension,\nto any pair of a one-dimensional subspace and a closed \nsubspace $\\alpha$: \\mbox{$p(x, \\alpha)$}. \nThis number is always in the interval $[0,1]$ \nand behaves in many ways like {\\em the probability that the proposition \n$\\alpha$ is\nfound true when it is tested for in state $x$}, in line with the probabilistic\ninterpretation of Quantum Physics. 
\nIt satisfies further properties that\nare more difficult to interpret and that characterize the linear dependence\nstructure and the structure of projections.\n\nAnother numerical quantity, an angle, $\\theta$, is defined by any triple of \none-dimensional subspaces. It is interpreted as the source of the\ninterference occurring between alternative paths a system could take.\nThis paper is devoted to the study of those aspects of the geometry of\nHilbert spaces related to the numbers $p$ and $\\theta$. \nThe study of those M-algebras \n(see~\\cite{LEG:Malg}) that admit quantities satisfying the properties of $p$,\n$\\theta$ and superpositions is left for further study.\n\n\\section{Background and Notations} \\label{sec:background}\nWe assume a Hilbert space \\mbox{${\\cal H}$}\\ on the field \\mbox{${\\cal C}$}\\ of complex numbers is given.\nThe complex conjugate of a complex number $c$ is $\\overline{c}$.\nFor any complex number $c$, $\\mid \\! c \\! \\mid$ \nrepresents its modulus, which is \na nonnegative real number.\nFor any complex number $c$ different from $0$, $\\arg(c)$ represents its\ncomplex argument: \\mbox{$c = \\, \\mid \\! c \\! \\mid e^{i \\arg(c)}$}.\nElements of \\mbox{${\\cal H}$}\\ will typically be: \\mbox{$\\vec{u}, \\vec{v} \\ldots$}.\nThe zero vector is denoted by $\\vec{0}$. The inner product of $\\vec{u}$\nand $\\vec{v}$ is \\mbox{$\\langle \\vec{u} \\, , \\, \\vec{v} \\rangle$}.\nThe inner product is linear in its first argument and conjugate-linear in\nits second argument.\nTwo vectors $\\vec{u}$ and $\\vec{v}$ are perpendicular, written \n\\mbox{$\\vec{u} \\perp \\vec{v}$}, iff \n\\mbox{$\\langle \\vec{u} \\, , \\, \\vec{v} \\rangle = 0$}.\nThe norm of $\\vec{u}$ is \\mbox{$\\parallel \\vec{u} \\parallel$}.\nA unit vector is a vector of norm $1$.\nWe shall use the notation \\mbox{$\\langle \\vec{u} \\, , \\, \\vec{v} \\rangle > 0$}\nto denote the fact that the inner product is a strictly positive {\\em real} \nnumber.\n\nThe set of all closed subspaces of \\mbox{${\\cal H}$}\\ will be denote by $M$.\nThe elements of $M$ should be thought of representing propositions, \nor, results of physical measurements. Greek letters from the beginning\nof the alphabet will be used to denote elements of $M$. The reader may\nthink of a typical element of $M$, $\\alpha$ as meaning {\\em the spin\nin the $z$-direction is nonnegative}. Note that propositions represent\nmeasurements with a specified result or a set of possible results: \nsuch as measuring the value $1\/2$ for\nthe spin in the $z$-direction or measuring a nonnegative value for this spin.\nTo every \\mbox{$\\alpha \\in M$}\none may associate its orthogonal complement, which will be denoted\n$\\neg \\alpha$. The proposition $\\neg \\alpha$ is interpreted as the measurement\nthat measures the quantity measured by $\\alpha$ but provides a value that is\nnot in the set specified by $\\alpha$. If $\\alpha$ claims that the spin in\nthe $z$-direction is nonnegative, $\\neg \\alpha$ measures the spin along the\nsame direction but finds it negative. \nTwo specific propositions are worth mentioning: falsehood, $0$\nis the null subspace $\\{\\vec{0}\\}$ and truth, $1$ is the whole space \\mbox{${\\cal H}$}.\nAny closed subspace $\\alpha$ of \\mbox{${\\cal H}$}\\ defines the projection of \\mbox{${\\cal H}$}\\ onto\n$\\alpha$. For any \\mbox{$\\vec{u} \\in \\mbox{${\\cal H}$}$} its projection on $\\alpha$ will\nbe denoted $\\alpha(\\vec{u})$. 
The relation between physical measurements and\nprojections will be explained after we discuss states.\n \nAmong the closed subspaces of \\mbox{${\\cal H}$}\\ particular attention will be paid to\none-dimensional subspaces. The set of one-dimensional subspaces of \\mbox{${\\cal H}$}\\\nis denoted $X$ and the elements of $X$ are typically letters from the end\nof the alphabet: $x$, $y$ and so on. As mentioned just above:\n\\mbox{$X \\subseteq M$}. Elements of $X$ will be called {\\em states}.\nA one-dimensional subspace $x$ represents a possible (pure) state of the \nphysical system. Think of the state in which the spin in the $z$-direction is \n$1\/2$, for example. We assume that states are propositions.\nThe fact that \\mbox{$X \\subseteq M$} reflects\nthe situation in which every pure state has an associated measurement that\ncharacterizes it: one may measure the spin in the $z$-direction and one\nof the possible values is $1\/2$. The proposition ``the spin in the \n$z$-direction is nonnegative'' is not a state.\n\nSince a proposition \\mbox{$\\alpha \\in M$} is a closed subspace of \\mbox{${\\cal H}$},\nfor any \\mbox{$x \\in X$}, either \\mbox{$x \\subseteq \\alpha$} or $\\alpha$ \ncontains no vector of $x$ except the zero vector. Any proposition is the union\nof the states it includes and any proposition can be seen as the set of all\nthe states it includes. We shall indeed prefer the notation \n\\mbox{$x \\in \\alpha$} to \\mbox{$x \\subseteq \\alpha$}.\n\nNote that if \n\\mbox{$\\vec{v} \\in x \\in X$} and \\mbox{$\\vec{u} \\perp \\vec{v}$} then\n\\mbox{$\\vec{u} \\perp \\vec{w}$} for every \\mbox{$\\vec{w} \\in x$}.\nWe denote such a situation by \\mbox{$\\vec{u} \\perp x$}.\nIf every \\mbox{$\\vec{u} \\in \\alpha$} is orthogonal to $x$ we say that\n\\mbox{$x \\perp \\alpha$}. If every \\mbox{$x \\in X$}, \n\\mbox{$x \\in \\alpha$} is orthogonal to $\\beta$ we say that\n\\mbox{$\\alpha \\perp \\beta$}.\nThe image of any \\mbox{$x \\in X$} by any (projection) \\mbox{$\\alpha \\in M$} \nis either a one-dimensional subspace \\mbox{$y \\in X$} or the zero-dimensional\nsubspace. This second possibility occurs exactly when $x$ is orthogonal\nto $\\alpha$. We shall denote by $\\alpha(x)$ the one-dimensional \nor zero-dimensional subspace that is the projection of $x$ onto $\\alpha$.\nNote that \\mbox{$\\alpha(x) = x$} iff \\mbox{$x \\in \\alpha$}.\nWe write \\mbox{$\\alpha(x) = 0$} to denote the case $\\alpha(x)$ \nis zero-dimensional, i.e., the case \\mbox{$x \\perp \\alpha$}.\nThe projection of the zero-dimensional subspace on any $\\alpha$ is the\nzero-dimensional subspace and we shall extend the action of $\\alpha$ by\nsetting \\mbox{$\\alpha(0) = 0$}.\n\nIn Quantum Physics measurements may change\nthe state of the system. The state obtained when measuring $\\alpha$ in state\n$x$ is precisely $\\alpha(x)$, the projection of $x$ on the subspace $\\alpha$.\nIf $x$ is orthogonal to $\\alpha$, then the measurement $\\alpha$ is impossible\nin state $x$: this happens precisely when the quantity measured by $\\alpha$\nhas, in $x$, a well-defined value that is not in the set specified by $\\alpha$.\nEquivalently, this happens precisely when $x$ is in the subspace $\\neg \\alpha$,\nor \\mbox{$(\\neg \\alpha)(x) = x$}. 
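\n\n The notions just introduced can be made concrete in a small finite-dimensional example. The following Python\/numpy sketch (an illustration only, and necessarily phrased in terms of vectors and coordinates, which the rest of the paper deliberately avoids) represents a proposition by the orthogonal projector onto the corresponding closed subspace and a state by any spanning unit vector:\n \\begin{verbatim}\nimport numpy as np\n\ndef projector(vectors):\n    # orthogonal projector onto the subspace spanned by the given vectors\n    q, _ = np.linalg.qr(np.column_stack(vectors))\n    return q @ q.conj().T\n\ndef measure(P, u, tol=1e-12):\n    # the state alpha(x) obtained by measuring alpha in the state spanned by u;\n    # returns None when x is orthogonal to alpha, i.e. the measurement is impossible\n    v = P @ u\n    n = np.linalg.norm(v)\n    return None if n < tol else v \/ n\n\ndef lies_in(P, u):\n    # x is in alpha  iff  alpha(x) = x\n    return np.allclose(P @ u, u)\n\n# toy example in C^3: alpha is spanned by e1 and e2, x is spanned by e1 + e3\ne = np.eye(3)\nP = projector([e[:, 0], e[:, 1]])\nx = (e[:, 0] + e[:, 2]) \/ np.sqrt(2)\nprint(lies_in(P, measure(P, x)))        # True: the state alpha(x) lies in alpha\nprint(lies_in(np.eye(3) - P, e[:, 2]))  # True: e3 lies in (not alpha)\n\\end{verbatim}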
\n\n\\section{Classical Physics} \\label{sec:classical}\nThe notions described in Section~\\ref{sec:background} have been given\na meaning grounded in the Hilbert space formalism of Quantum Mechanics.\nThis seems to preclude their application to Classical Mechanics, since,\nclassically, states are not rays in a Hilbert space.\nNevertheless, the common wisdom is that Quantum Mechanics should apply \neverywhere and that Classical Mechanics should be a limiting case of\nQuantum Mechanics. Indeed, both Classical Mechanics and Quantum Mechanics\ncan be studied in structures that abstract from the concepts\nof Section~\\ref{sec:background}, preserving the properties of states\nand measurements. A full treatment is left for future work, \nbut the following remark explains the main feature of classical systems.\n\nClassically, measurements do not change the state of a system, therefore if\na state $x$ is not orthogonal to a proposition $\\alpha$,\nwe have \\mbox{$\\alpha(x) = x$}, expressing the fact that either $x$ possesses\nthe property $\\alpha$ or it possesses its negation $\\neg \\alpha$. We have:\n\\[\n{\\bf Principle \\ of \\ Classical \\ Physics \\ } \n{\\rm \\ Any \\ two \\ different \\ states \\ are \\ orthogonal.}\n\\]\n\n\\section{The Reciprocity Principle} \\label{sec:reciprocity}\nBefore proceeding to the analysis of the notion of a superposition which is the\ncrux of this paper, we need a simple remark. It will be presented as a \nprinciple, to stress the physical meaning of a fact that is woven so deep in \nthe familiar linear structure of Hilbert spaces that we tend not to reflect \non it anymore.\nIf the measurement $\\neg x$ acting on state $y$ and on state $z$ \nproduces the same state, then $x$, $y$ and $z$ must sit in the same\nplane, and therefore the measurement $\\neg y$ must produce the same state\nwhen acting on $x$ and on $z$.\n\\pagebreak[0]\n\\[\n{\\bf Reciprocity \\ Principle \\ } {\\rm \\ Let \\ } x, y, z \\in X, \n{\\rm \\ be \\ pairwise \\ different}.\n\\]\n\\[\n{\\rm Then \\ } (\\neg x)(y) = (\\neg x)(z) \\: \\Rightarrow \\: \n(\\neg y)(z) = (\\neg y)(x).\n\\]\n\nThe Reciprocity Principle suggests the following definition.\n\\begin{definition} \\label{def:coplanarity}\nWe shall say that states $x$, $y$ and $z$ are {\\em coplanar},\nwritten \\mbox{$coplanar(x, y, z)$} iff either two out of the three\nare equal, or they are pairwise different and \n\\mbox{$(\\neg x)(y) = (\\neg x)(z)$}. \n\\end{definition}\nThe Reciprocity Principle says that \n{\\em coplanarity} is a property of the set \\mbox{$\\{x, y, z\\}$}, \ni.e., for any permutation $x'$, $y'$, $z'$ of $x$, $y$, $z$\n\\mbox{$coplanar(x', y', z')$} is equivalent to \\mbox{$coplanar(x, y, z)$}.\n\nThe Reciprocity Principle is experimentally testable: if the {\\em no} answer\nto a test $x$ gives the same state when performed on $y$ and on $z$, the\n{\\em no} answer on a test $y$ will give the same answer on $z$ and $x$.\n\nIn Hilbert space, indeed, if $y$ and $z$ have the same projection on the \nsubspace orthogonal to $x$, call it $x'$, then all four one-dimensional\nsubspaces: $x$, $x'$, $y$ and $z$ are in the same two-dimensional subspace,\ncall it $\\alpha$,\nand therefore the projections of $z$ and $x$ on the subspace orthogonal\nto $y$ are both the one-dimensional subspace of $\\alpha$ orthogonal to $y$.\n\nIn Classical Physics, the Reciprocity Principle holds trivially,\nsince its assumptions are never satisfied. 
\nIndeed if \\mbox{$x \\neq y$}, we have \n\\mbox{$(\\neg x)(y) = y$}, and similarly\n\\mbox{$(\\neg x)(z) = z$} and therefore the assumption\n\\mbox{$(\\neg x)(y) = (\\neg x)(z)$} implies \\mbox{$y = z$}, contrary to \nassumption.\n\n\\section{Superpositions: Conceptual Analysis} \\label{sec:conceptual}\nThe concept of a superposition is a revolutionary novelty introduced\nby Quantum Mechanics.\nIf a system may be in any one of two pure states $x$ and $y$, we must consider\nthat it may also be in any one of many {\\em superpositions} of $x$ and $y$.\nThis paper is devoted to an in-depth analysis of superpositions.\n\nThe following remark has resulted in a vast literature: \nthe revolutionary character of quantic superpositions is the consequence\nof the fact that no such superpositions have to be considered, or may be \nseen, in classical systems. In Schr\\\"{o}dinger's colorful thought experiment:\nthe cat is either dead or alive, but nobody has evidence of a superposition\nof a dead and a live cat. This seems to contradict the principle exposed\nin Section~\\ref{sec:classical}, of the universality of Quantum Mechanics.\nIf everything in the universe is quantic and any two quantic states can be\nsuperposed, then any two classical states, such as a live and a dead cat,\ncan be superposed. Many explanations have been proposed and this is not\nthe place for a survey. Most explanations accept the existence of \nsuperpositions of classical states and explain why such superpositions are\nnot seen. The analysis of the superposition concept to be developed below\nproposes a radically different explanation. It is not the case, it is claimed \nhere, that, in Quantum Mechanics, any two states can be superposed: \non the contrary, no superposition of orthogonal states can ever be considered. \nSince different classical states are orthogonal,\nthe only superpositions of classical states that can ever occur are\ntrivial: superpositions of a state with itself. Trivial superpositions\nare indeed observed and unproblematic.\n\nTo avoid any misunderstanding: if \\mbox{$\\mid + \\rangle$} and \n\\mbox{$\\mid - \\rangle$} are orthogonal states, the state \n\\mbox{$1 \/ \\sqrt{2} (\\mid + \\rangle + \\mid - \\rangle)$} is a perfectly legal\nstate, but it is not a superposition of \\mbox{$\\mid + \\rangle$} and \n\\mbox{$\\mid - \\rangle$}. It is equal, as will be clear, to many different\nsuperpositions of non-orthogonal states (that are themselves linear \ncombinations of the states \\mbox{$\\mid + \\rangle$} and \\mbox{$\\mid - \\rangle$}).\nThe reader will be well advised {\\em not} to think {\\em linear combination} \nwhen {\\em superposition} is read.\n \nTo explain the surprising position above, let us, first, reflect on the \nnature of superpositions and their origin:\nwhat are they and how do they come into consideration, without trying to\ndescribe such superpositions formally. Then, we shall propose a formalization\nand an algebraic structure.\n\nThe reader should notice that the linear combination of vectors of a Hilbert \nspace provides a formal operation, not a conceptual analysis, and also that,\nsince vectors do not represent states, the linear combination of vectors\ncannot offer a proper formalization for the superpositions of states.\nEven though we announced above that orthogonal states cannot be superposed,\nit is clear that orthogonal unit vectors can be combined linearly to form \nunit vectors. 
This should convince the reader that we shall not formalize\nsuperposition as a straightforward linear combination.\n\n\\subsection{Nature and Origin} \\label{sec:nature}\nSuperpositions must be considered to describe systems about which all we \nknow is that they are the result of one of a number of different possible \npaths (or histories), \ni.e., if we have no way of knowing which history indeed took place.\nIn such a case, we must consider that the system is in some state that is\na superposition, i.e., a {\\em compound} of the states that are produced\nby each of the possible paths. The term {\\em compound} is used here\nwhere, chemically speaking, the term {\\em mixture} may be more appropriate,\nbecause the latter is used in Quantum Mechanics with a different meaning.\n\nIf one knows which path has been taken, or could discover \nwhich path has been taken, then one must consider that the system is in the\nstate that results from the path taken, and one must use probability theory\nto describe one's ignorance about the state of the system.\nIf one does not know and cannot know which path has been taken, then one must\nconsider that the system is in some specific superposition of the states \nresulting from the different possible paths. This is a general principle:\nif one cannot know which path has been taken, then those paths {\\em interfere}\nand therefore the system cannot be described using only probability theory,\nbut must be described by a state that is a compound, i.e., a superposition of\nthe states resulting from the different interfering paths.\nThis general principle holds also in Classical Physics, as will be seen in \nSection~\\ref{sec:conditions}. The way in which the different paths may \ninterfere, i.e., the parameters that characterize the different possible\nsuperpositions, will be described in Section~\\ref{sec:parameters}.\n\nThe paradigmatic example of such a situation is the two-slits\nexperiment in which a particle travels through one of two slits and\none does not know which.\n\n\\subsection{Parameters} \\label{sec:parameters}\nTo keep things simple we shall consider only the superpositions of two\nstates, without loss of generality as long as we consider only a finite\nnumber of possible paths. Generalizing to path integrals is beyond the scope\nof this paper. Suppose therefore that we must deal with a system that may\nresult from two different paths. If path $p_{1}$ was taken, the system is \nin state $y$; if path $p_{2}$ was taken, the system is in state $z$.\nIf one cannot know which path was taken, one must consider that \nthe system is in\na state that is some superposition of the two states $y$ and $z$. \nMany such superpositions are possible and the purpose of this section is\nto describe the experimental parameters that influence the superposition\nto be used.\nIn Section~\\ref{sec:conditions}, the question of whether we can know which\npath was taken will be given an unequivocal answer.\n\nIn a situation in which any one of two paths may have been taken, \nthe experimental conditions determine the respective weights to be\ngiven to each one of the possible paths. These relative weights may be \ninterpreted as describing the a-priori probability of each one of the paths,\nor the relative proportions in which each of the paths is taken. \nA superposition of $y$ and $z$ obtained\nas the result of the interference between the two paths $p_{1}$ and $p_{2}$\nwill therefore be characterized by a single parameter \\mbox{$r \\in [0,1]$}. 
\nThe proper value to be chosen\nfor this parameter is a function of the experimental setup. \nThe reader should notice that, even though\nwe shall describe such a superposition of states $y$ and $z$ as some sort of \n{\\em compound} or {\\em mixture} of $y$ and $z$, a superposition is a pure\nstate, not what is known in QM as a mixed state.\n\nThe parameter $r$ that characterizes a superposition\ndescribes, in a sense, the respective proportions (ratios) of $y$ and $z$\npresent in the superposition, though this intuitive analogy should not \nbe taken too seriously. The parameter $r$ is therefore a real number,\n\\mbox{$0 \\leq r \\leq 1$}, that describes the {\\em weight} of $y$ relative to $z$\nin the superposition.\n\nIn the two-slits experiment, where $y$ represents the state resulting from\nthe electron moving through the upper slit and $z$ the state resulting from\nthe electron moving through the lower slit, the parameter $r$ will depend on\nthe respective widths of the two slits and the respective distances of those\nslits to the origin.\n\nThe superpositions we shall consider are therefore of the\nform \\mbox{$super(y, z, r)$} for states \\mbox{$y, z \\in X$} and\nreal number \\mbox{$0 \\leq r \\leq 1$}.\nThe telling notation \\mbox{$r y \\, + \\, (1 - r) z$} \nwill be used in place of the more austere \\mbox{$super(y, z, r)$},\nbut the reader is warned that $+$ does not mean addition, juxtaposition \ndoes not mean\nmultiplication and some of the properties one would expect from our notation\ndo {\\em not} hold. In particular the composition of superpositions does not \npossess the properties suggested by the notation.\n\n\\subsection{Conditions} \\label{sec:conditions}\nSection~\\ref{sec:parameters} indicated that superpositions of states $y$ and\n$z$ should be considered only if there is no way to know which\none of the paths $p_{1}$ or $p_{2}$ leading to\n$y$ and $z$ respectively has been traveled.\nIt is time to reflect on this condition.\n\nIf the states $y$ and $z$ are orthogonal: \\mbox{$y \\perp z$}, then there is\na way to find out for sure which of the two paths has been traveled:\nperform on the resulting state a measurement testing whether the state is $y$\nor not: a test $y$, $(\\neg y)$. If path $p_{1}$ has been traveled, the result\nwill be a {\\em yes} for sure since the state is $y$. If path $p_{2}$ \nhas been traveled, the result, for sure, will be a {\\em no} since the state\nis $z$, orthogonal to $y$. Similarly, we could have tested for $z$ or for any\nproposition satisfied by one of the states $y$ or $z$ and orthogonal to the\nother one. We see that no superposition of orthogonal states can ever\nbe defined. This is in stark contrast with the linear combination of vectors\nin a Hilbert space.\n\nFurther reflection shows that if the states $y$ and $z$ are not orthogonal,\none can never find out for sure which of the paths $p_{1}$ or $p_{2}$ has\nbeen traveled. Indeed the only situation in which one could find out would be\nto test for some proposition $\\alpha$ satisfied, for sure, by one of the two\nstates $y$ or $z$ and not satisfied, for sure, by the other state.\nIn other terms, a closed subspace $\\alpha$ containing one of $y$ or $z$ and\northogonal to the other one. 
But this implies \\mbox{$y \\perp z$}.\nWe see that:\n\\[\n{\\bf Principle \\ of \\ Superposition} {\\rm \\ The \\ superposition \\ }\nr y \\, + \\, (1 - r) z \n\\]\n\\[ \n{\\rm \\ is \\ defined \\ if \\ and \\ only \\ if \\ } y \\not \\perp z.\n\\]\n\nIn Section~\\ref{sec:superp_def} a definition of superpositions \nin the formalism of Hilbert spaces will be provided, \nbut, first, we shall discuss two general principles, and justify them\nby considerations independent of the Hilbert space formalism. \n\n\\subsection{Trivial Superpositions} \\label{sec:trivial}\nLet us consider, first, the superpositions of a state $y$ with itself:\n\\mbox{$r y \\, + \\, (1 - r) y$}.\nBy the Principle of Classical Physics, these are the only superpositions\npossible in classical physics.\n\nEvidence from both classical and quantum physics shows that such superpositions\nare trivial:\n\\[\n{\\bf Principle \\ of \\ Triviality} \\ \\forall y \\in X, \\forall r \\in [0 , 1],\nr y \\, + \\, (1 - r) y = y.\n\\]\n\nHaving disposed of the cases \\mbox{$y \\perp z$} and \\mbox{$y = z$}, let us\nstudy the generic case of superpositions.\n\n\\subsection{Principle of Coplanarity} \\label{sec:coplanarity}\n\nA superposition is coplanar with its components.\nAssume \\mbox{$y \\not \\perp z$}.\n\\[\n{\\bf Principle \\ of \\ Coplanarity} \\ \\forall r \\in [0 , 1],\ncoplanar(r y \\, + \\, (1 - r) z, \\, y, \\, z) .\n\\]\nThis principle can be justified in the following way.\nThe superposition\n\\mbox{$x = r y \\, + \\, (1 - r) z$} results from\nour inability to know which of $p_{1}$, resulting in $y$ or $p_{2}$,\nresulting in $z$ has been traveled.\nMeasuring $\\neg y$ on $x$ shows that the path $p_{1}$ has not been traveled\nand therefore $p_{2}$ has been traveled and the current state\n$(\\neg y)(x)$ is in fact $(\\neg y)(z)$.\n\nWe shall propose a precise definition of\nsuperpositions such as \\mbox{$r y \\, + \\, (1 - r) z$} for\n\\mbox{$y \\not \\perp z$} in Section~\\ref{sec:superp_def}.\nThen, in Sections~\\ref{sec:euclidean} and~\\ref{sec:theta}, \nwe shall define fundamental geometric\nquantities in terms of which the properties of superpositions \nwill be studied in Section~\\ref{sec:superp_prop}.\n\n\\section{Definition of Superpositions} \\label{sec:superp_def}\nWe shall now present the definition of the superposition\n\\mbox{$r y \\, + \\, (1 - r) z$}.\nOur definition is taken from the everyday practice of physicists.\n\\begin{definition} \\label{def:superposition}\nFor any \\mbox{$r \\in [0 , 1]$}, \nfor any \\mbox{$y , z \\in X$} such that \\mbox{$y \\not \\perp z$},\nwe shall define\n\\mbox{$r y \\, + \\, (1 - r) z$} in the following way.\n\nChoose some arbitrary unit vector $\\vec{v}$ in $y$.\nSince \\mbox{$y \\not \\perp z$}, there is a unique unit vector $\\vec{w}$\nof $z$ such that \\mbox{$\\langle \\vec{v} \\, , \\, \\vec{w} \\rangle \\, > \\, 0$}.\nDefine, now: \n\\begin{equation} \\label{eq:super}\n\\vec{u} \\: = \\: \n\\sqrt{r} \\, \\vec{v} \\, + \\, \\sqrt{1 - r} \\, \\vec{w}.\n\\end{equation}\nNote that \\mbox{$\\vec{u} \\neq \\vec{0}$}: if \\mbox{$y = z$} then \n\\mbox{$\\vec{v} = \\vec{w}$} and \\mbox{$\\sqrt{r} + \\sqrt{1 - r} > 0$}.\nOtherwise $\\vec{v}$ and $\\vec{w}$ are\nlinearly independent and at least one of $\\sqrt{r}$ or $\\sqrt{1 - r}$ is\nstrictly positive.\nWe may now define \\mbox{$r y \\, + \\, (1 - r) z$} \nto be the one-dimensional subspace generated by $\\vec{u}$.\n\\end{definition}\n\nNote that the vector $\\vec{u}$ above is not a unit 
vector.\nDefinition~\\ref{def:superposition} squares well with the Dirac notation\nand the way it is used in everyday physics.\nIf $y$ and $z$ are to be compounded in equal parts (\\mbox{$r = 1\/2$}) then \n\\mbox{$1\/2 y + 1\/2 z$} is defined by the vector \n\\mbox{$1 \/ \\sqrt{2} (\\vec{v} + \\vec{w})$}, which would be a unit vector in case \n\\mbox{$y \\perp z$}. Notice, though, that the case in which $y$ and $z$ are orthogonal\nis a case we do not allow.\n\nThe following is expected on general considerations and easily shown to\nfollow from Definition~\\ref{def:superposition}.\n\\begin{lemma} \\label{le:+commu}\nFor any \\mbox{$y, z \\in X$} such that \\mbox{$y \\not \\perp z$}, we have\n\\begin{enumerate}\n\\item \\label{one}\n\\mbox{$1 y \\, + \\, 0 z = y$}, and\n\\item \\label{comm}\nfor any \\mbox{$r \\in [0 , 1]$}\n\\mbox{$r y \\, + \\, (1 - r) z =$}\n\\mbox{$(1 - r) z \\, + \\, r y$}.\n\\end{enumerate}\n\\end{lemma}\n\nWe shall now define two geometrical quantities that will help us understand\nthe structure of superpositions.\n\n\\section{The Geometry of Hilbert Spaces} \\label{sec:geometry}\nFirst, we shall define a geometrical property of two states.\n\\subsection{Quantities from Euclidean Geometry} \\label{sec:euclidean}\n\\subsubsection{The Quantity $a(x, y)$} \\label{sec:a}\nWe shall now define the first geometric quantity we wish to consider.\nWhen considering the geometry of Hilbert spaces it is useful to\nbegin by reflecting on the geometry of Euclidean spaces, \nabout which we know much more and have a much better intuition. \nConsider two lines, i.e., one-dimensional linear (not affine) subspaces, \nin $\\mbox{${\\cal R}$}^{n}$.\nThe only invariant characterizing their relation is their angle.\nTwo distinct lines define a plane and four angles. Those four angles are two pairs\nof equal angles. Therefore only two quantities are defined by two lines.\nMoreover those two angles add up to $\\pi$, therefore there is essentially\nonly one quantity defined. One can take as the fundamental quantity either\nthe acute or the obtuse angle. Let us consider the acute angle as the\nquantity of interest. Two lines in Euclidean space define an angle $\\varphi$\nin the interval \\mbox{$[0 , \\pi \/ 2]$}. Equivalently, they define a real number\nin the interval \\mbox{$[0 , 1]$}, the value of $\\cos(\\varphi)$.\n\nThe same quantity may be defined in Hilbert spaces. \nConsider two states \\mbox{$x, y \\in X$}. We are trying to associate\na numerical quantity to this pair of states. The most natural thing to consider\nis the inner product of two vectors contained in $x$ and $y$ respectively.\nIt is very natural to choose two unit vectors \\mbox{$\\vec{u} \\in x$} and\n\\mbox{$\\vec{v} \\in y$} and consider the inner product \n\\mbox{$\\langle \\vec{u} \\, , \\, \\vec{v} \\rangle$}. This will not do since the \nquantity depends on the choice of the unit vectors $\\vec{u}$ and $\\vec{v}$\nand we are looking for a quantity that depends only on $x$ and $y$.\nThe inner product depends on the choice of the unit vectors, but its modulus\ndoes not.\nConsider therefore the quantity \n\\[\na(x, y) \\: \\stackrel{\\rm def}{=} \\: \\mid \\! \\langle \\vec{u} \\, , \\, \\vec{v} \\rangle \\! 
\\mid\n\\]\nfor arbitrary unit vectors $\\vec{u}$ and $\\vec{v}$ of $x$ and $y$ respectively.\nAny unit vector $\\vec{u}'$ of $x$ has the form:\n\\mbox{$\\vec{u}' \\, = \\, e^{i \\theta} \\vec{u}$} and any $\\vec{v}'$ of\n$y$ has the form:\n\\mbox{$\\vec{v}' \\, = \\, e^{i \\varphi} \\vec{v}$}.\nTherefore\n\\mbox{$\\langle \\vec{u}' \\, , \\, \\vec{v}' \\rangle \\, = \\,$}\n\\mbox{$e^{i (\\theta - \\varphi)} \\langle \\vec{u} \\, , \\, \\vec{v} \\rangle$},\nand \n\\mbox{$\\mid \\langle \\vec{u}' \\, , \\, \\vec{v}' \\rangle \\mid \\, = \\,$}\n\\mbox{$\\mid \\langle \\vec{u} \\, , \\, \\vec{v} \\rangle \\mid$}.\n\nThe following is easily proved.\n\\begin{lemma} \\label{le:a}\nFor any \\mbox{$x, y \\in X$}:\n\\begin{enumerate}\n\\item \\label{r:01}\n$a(x,y)$ is a real number in the interval $[0,1]$,\n\\item \\label{r:one}\n\\mbox{$a(x,y) = 1$} iff \\mbox{$x = y$},\n\\item \\label{r:zero}\n\\mbox{$a(x,y) = 0$} iff \\mbox{$x \\perp y$},\n\\item \\label{r:sym}\n\\mbox{$a(y,x) = a(x,y)$}.\n\\end{enumerate}\n\\end{lemma}\n\n\\subsubsection{Similarity: $p$} \\label{sec:similarity}\nIt turns out that the square of the quantity \\mbox{$a(x, y)$}, akin to the \n$\\cos^2$ of an angle, has even more remarkable properties.\n\\begin{definition} \\label{def:p}\nGiven any states \\mbox{$x, y \\in X$}, we shall define their similarity\n$p(x, y)$ by\n\\[\np(x, y) = a^{2}(x, y). \n\\]\n\\end{definition}\n\nThe quantity $p$ will be called {\\em similarity} because it measures\nhow similar, i.e., close, are its arguments $x$ and $y$. Its physical \ninterpretation is straightforward: $p(x,y)$ is the probability that,\nwhen, on state $x$, one tests whether $y$ is the case, one gets a positive \nanswer. With probability $1 - p(x,y)$ one gets the answer that $y$ is\nnot the case. This physical interpretation is the reason $p = a^{2}$ \nand not $a$ has been chosen as the quantity of reference. \nNote that $p$ can be directly obtained experimentally.\nBelow, we shall extend the definition of $p$ to measure\nthe {\\em similarity} between any state \\mbox{$x \\in X$} and any proposition\n\\mbox{$\\alpha \\in M$}, i.e., the degree to which state $x$ satisfies \nproposition $\\alpha$.\n\nA straightforward result on Hilbert spaces will be recalled now.\n\\begin{lemma} \\label{le:inner}\nLet \\mbox{$\\vec{u}, \\vec{v} \\in \\mbox{${\\cal H}$}$}.\nAssume $\\vec{v}$ is a unit vector and \\mbox{$\\vec{v} \\in x \\in X$}.\nThen the projection \\mbox{$x(\\vec{u})$} of $\\vec{u}$ on $x$ is \n\\mbox{$\\langle \\vec{u} \\, , \\, \\vec{v} \\rangle \\, \\vec{v}$}.\n\\end{lemma}\n\\begin{proof}\n\\mbox{$\\vec{u} - \\langle \\vec{u} \\, , \\, \\vec{v} \\rangle \\, \\vec{v}$} \nis indeed \northogonal to $\\vec{v}$ and therefore to $x$.\n\\end{proof}\n\nFirst properties of $p$ are described in the following.\n\\begin{lemma} \\label{le:p}\nFor any \\mbox{$x, y \\in X$}:\n\\begin{enumerate}\n\\item \\label{p:01}\n$p(x,y)$ is a real number in the interval $[0,1]$,\n\\item \\label{p:one}\n\\mbox{$p(x,y) = 1$} iff \\mbox{$x = y$},\n\\item \\label{p:zero}\n\\mbox{$p(x,y) = 0$} iff \\mbox{$x \\perp y$},\n\\item \\label{p:sym}\n\\mbox{$p(y,x) = p(x,y)$},\n\\item \\label{p:inner}\nfor any unit vector \\mbox{$\\vec{u} \\in x$},\n\\mbox{$p(x,y) \\, = \\,$}\n\\mbox{$\\langle \\vec{u} \\, , \\, y(\\vec{u}) \\rangle$} \nwhere $y(\\vec{u})$ is the projection of \n$\\vec{u}$ on $y$,\n\\item \\label{p:proj}\nfor any unit vector \\mbox{$\\vec{u} \\in x$},\n\\mbox{$p(x,y) \\, = \\,$}\n\\mbox{$\\parallel \\! y(\\vec{u}) \\! 
\\parallel^{2}$}.\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nFor~\\ref{p:inner}, note that \nfor any unit vector $\\vec{v}$ of $y$, we have, by Lemma~\\ref{le:inner},\n\\mbox{$y(\\vec{u}) \\, = \\, \\langle \\vec{u} \\, , \\, \\vec{v} \\rangle \\, \\vec{v}$},\nand therefore\n\\mbox{$\\langle \\vec{u} \\, , \\, y(\\vec{u}) \\rangle \\, = \\,$}\n\\mbox{$\\overline{\\langle \\vec{u} \\, , \\, \\vec{v} \\rangle} \\, \n\\langle \\vec{u} \\, , \\, \\vec{v} \\rangle \\, = \\,$}\n\\mbox{$\\mid \\! \\langle \\vec{u} \\, , \\, \\vec{v} \\rangle \\! \\mid^{2}$}.\nNote that this implies that the inner product\n\\mbox{$\\langle \\vec{u} \\, , \\, y(\\vec{u}) \\rangle$} is a real number.\nFor~\\ref{p:proj}, note that projections are Hermitian and idempotent, and\ntherefore\n\\mbox{$\\langle y(\\vec{u}) \\, , \\, y(\\vec{u}) \\rangle \\, = \\,$}\n\\mbox{$\\langle \\vec{u} \\, , \\, y(y(\\vec{u})) \\rangle \\, = \\,$}\n\\mbox{$\\langle \\vec{u} \\, , \\, y(\\vec{u}) \\rangle$}.\n\\end{proof}\n\nThe next result is central. \nIt shows that, for any given proposition $\\alpha$, the projection\non $\\alpha$ is determined by the $p$-structure.\n\\begin{theorem} \\label{the:p}\nFor any proposition \\mbox{$\\alpha \\in M$} and any\nstates \\mbox{$x, y \\in X$}, if \\mbox{$x \\not \\perp \\alpha$}\nand \\mbox{$y \\in \\alpha$} then\n\\mbox{$p(x,y) = p(x,\\alpha(x)) \\, p(\\alpha(x), y)$}. \n\\end{theorem}\n\\begin{proof}\nLet $\\vec{u}$ be a unit vector of $x$.\nSince \\mbox{$y \\in \\alpha$}, the projection of any vector on $y$\ncan be obtained by projecting the vector first on $\\alpha$ and then\nprojecting the result on $y$. In particular,\n\\mbox{$y(\\vec{u}) = y(\\alpha(\\vec{u}))$}.\nTherefore\n\\[\np(x,y) = \\parallel \\! y(\\vec{u}) \\! \\parallel^{2} = \n{{\\parallel \\! y(\\alpha(\\vec{u})) \\! \\parallel^{2}} \\over \n{\\parallel \\! \\alpha(\\vec{u}) \\! \\parallel^{2}}} \\: \\times \\:\n\\parallel \\! \\alpha(\\vec{u}) \\! \\parallel^{2}.\n\\]\nLet \\mbox{$\\vec{v} =$}\n\\mbox{$ \\alpha(\\vec{u}) \/ \\parallel \\! \\alpha(\\vec{u}) \\! \\parallel$}. \nNotice that $\\vec{v}$ is a unit vector of $\\alpha(x)$ and therefore\n\\[\np(x,y) = \\parallel \\! y(\\vec{v}) \\! \\parallel^{2} \\: \\times \\:\n\\parallel \\! \\alpha(\\vec{u}) \\! \\parallel^{2} = \np(\\alpha(x), y) \\: \\times \\: p(x, \\alpha(x))\n\\]\nsince $\\alpha(\\vec{u})$ is also the projection of $\\vec{u}$ on the state $\\alpha(x)$,\nand by Lemma~\\ref{le:p}.\n\\end{proof}\n\n\\begin{corollary} \\label{le:max}\nFor any proposition \\mbox{$\\alpha \\in M$} and any\nstate \\mbox{$x \\in X$}, if \\mbox{$x \\not \\perp \\alpha$}\nthen $\\alpha(x)$ is the unique state $y$ of $\\alpha$ on which\nthe value of $p(x,y)$ is maximal. \n\\end{corollary}\nIn short, there is a unique state of $\\alpha$ that is most similar to $x$:\nthis is $x$'s projection on $\\alpha$.\n\\begin{proof}\nBy Theorem~\\ref{the:p}, since \\mbox{$p(\\alpha(x),y) \\leq 1$} by\nLemma~\\ref{le:p}, \\mbox{$p(x,y) \\leq p(x,\\alpha(x))$} for any\n\\mbox{$y \\in \\alpha$}.\n\nFor uniqueness, suppose \\mbox{$y \\in \\alpha$} and \n\\mbox{$p(x,y) = p(x,\\alpha(x))$}.\nBy Theorem~\\ref{the:p}, \n\\mbox{$p(x,\\alpha(x)) =$}\n\\mbox{$p(x, \\alpha(x)) \\, p(\\alpha(x), y)$}.\nSince $x$ is not orthogonal to $\\alpha$, \\mbox{$p(x,\\alpha(x)) > 0$}\nand therefore \\mbox{$p(\\alpha(x),y) = 1$} and \\mbox{$\\alpha(x) = y$}.\n\\end{proof}\n\nIt is now only natural to extend the definition of $p$ to an arbitrary\nproposition as second argument. 
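\nBefore doing so, we pause for a minimal numerical sketch of Theorem~\\ref{the:p} and Corollary~\\ref{le:max}. It is an illustration only, assuming Python with the numpy package; the helper names are ours.\n\\begin{verbatim}\nimport numpy as np\n\ndef ray(v):\n    v = np.asarray(v, dtype=complex)\n    return v \/ np.linalg.norm(v)\n\ndef p(x, y):\n    # p(x, y) = |<u, v>|^2 for unit vectors u, v of x, y\n    # (np.vdot conjugates its first argument; the modulus is unaffected)\n    return abs(np.vdot(x, y)) ** 2\n\nalpha = np.diag([1, 1, 0]).astype(complex)              # projection on span{e_1, e_2} in C^3\nrng = np.random.default_rng(0)\nx = ray(rng.normal(size=3) + 1j * rng.normal(size=3))   # a generic state, not orthogonal to alpha\ny = ray([1.0, 2.0 + 1.0j, 0.0])                         # a state lying in alpha\nax = ray(alpha @ x)                                     # the state alpha(x)\n\nprint(np.isclose(p(x, y), p(x, ax) * p(ax, y)))   # True: p(x,y) = p(x,alpha(x)) p(alpha(x),y)\nprint(p(x, ax) >= p(x, y))                        # True: alpha(x) maximizes p(x, .) over alpha\n\\end{verbatim}\n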
\nFor any \\mbox{$x \\in X$} and \n\\mbox{$\\alpha \\in M$}, we define \\mbox{$p(x,\\alpha)$} in the following way:\n\\begin{itemize}\n\\item \\mbox{$p(x,\\alpha) = 0$} if \\mbox{$x \\perp \\alpha$}, and\n\\item \\mbox{$p(x,\\alpha) = p(x,\\alpha(x))$} otherwise.\n\\end{itemize}\n\nThe following is known, in Physics, as Born's rule.\nThe quantity $p(x,\\alpha)$ is the probability of measuring the property\n$\\alpha$ in state $x$.\n\\begin{lemma} \\label{le:Born}\nFor any state $x \\in X$ and any proposition \\mbox{$\\alpha \\in M$}, if\n\\mbox{$\\vec{u} \\neq \\vec{0} \\in x$},\n\\mbox{$p(x, \\alpha) = \\parallel \\alpha(\\vec{u}) \\parallel^{2} \/ \n\\parallel \\vec{u} \\parallel^{2}$}.\n\\end{lemma}\nThe proof is obvious. The following is an obvious consequence of \nCorollary~\\ref{le:max}.\n\\begin{corollary} \\label{le:satisfaction}\nFor any state $x$ and any proposition $\\alpha$,\n\\mbox{$x \\in \\alpha$} iff \\mbox{$\\alpha(x) = x$} iff \n\\mbox{$p(x, \\alpha) = 1$}.\n\\end{corollary}\n\nThe next two sections prove additional properties of the quantity $p$.\nOn a first reading the reader is advised to advance to \nSection~\\ref{sec:theta}. \nSection~\\ref{sec:probas} shows that, \nfor any given $x$ and different $\\alpha$'s, \\mbox{$p(x, \\alpha)$} \nbehaves very much as a probability on the propositions. \nExactly so, for propositions that commute as projections.\nSection~\\ref{sec:num_inter} proves an intriguing inequality that\nprovides a numerical strengthening of the {\\bf Interference} property\nof~\\cite{LEG:Malg}.\n\n\\subsubsection{Similarity as Probability} \\label{sec:probas}\nThe following results will show that, for any fixed \\mbox{$x \\in X$}, \nthe quantities\n$p(x, \\alpha)$ for different measurements $\\alpha$ play the role of a \nprobability on the propositions.\nFor any two propositions \\mbox{$\\alpha, \\beta \\in M$} we shall define, as\ntraditional since~\\cite{BirkvonNeu:36},\ntheir conjunction \\mbox{$\\alpha \\wedge \\beta$} as their intersection\n\\mbox{$\\alpha \\cap \\beta$} (note the intersection of closed subspaces\nis a closed subspace) and their disjunction \\mbox{$\\alpha \\vee \\beta$}\nas the topological closure of their linear sum: \n\\mbox{$cl(\\alpha + \\beta)$}.\nNote that these notations are inconsistent with those of~\\cite{LEG:Malg}\nwhere conjunction and disjunction were defined only for {\\em commuting}\npropositions.\nWe shall demonstrate a particular interest in {\\em commuting} propositions.\nFor the sake of obtaining a straightforward definition of commutation, \nwe shall extend our notation for projections.\n\\begin{definition} \\label{def:commuting}\nLet \\mbox{$\\alpha, \\beta \\in M$} be two propositions. We shall say that\n$\\alpha$ and $\\beta$ {\\em commute} iff for any \\mbox{$x \\in X$} \n\\mbox{$\\alpha(\\beta(x)) = \\beta(\\alpha(x))$}.\n\\end{definition}\n\n\\begin{lemma} \\label{le:commuting}\nAny two propositions \\mbox{$\\alpha, \\beta \\in M$} commute iff\nthere are three pairwise orthogonal propositions \n\\mbox{$\\gamma_{i}, i = 1, \\ldots , 3$} such that\n\\mbox{$\\alpha =$} \\mbox{$\\gamma_{1} \\vee \\gamma_{2}$} and\n\\mbox{$\\beta =$} \\mbox{$\\gamma_{1} \\vee \\gamma_{3}$}.\n\\end{lemma}\nNote that one of the propositions $\\gamma_{i}$ may be the falsehood $\\bot$.\n\\begin{proof}\nThe {\\em if} claim is obvious. 
The {\\em only if} claim follows \nfrom the fact that \nprojections are Hermitian and that Hermitian operators commute iff they\nhave a joint basis of eigenvectors.\n\\end{proof}\n\n\\begin{corollary} \\label{le:comm}\nFor any \\mbox{$\\alpha, \\beta \\in M$}, if \\mbox{$\\alpha \\subseteq \\beta$}\nor \\mbox{$\\alpha \\perp \\beta$}, then $\\alpha$ and $\\beta$ commute.\n\\end{corollary}\n\\begin{proof}\nIn the first case, take \\mbox{$\\gamma_{1} = \\alpha$}, \n\\mbox{$\\gamma_{2} = \\bot$} and \\mbox{$\\gamma_{3} =$}\n\\mbox{$\\neg \\alpha \\wedge \\beta$}.\nIn the second case, take \\mbox{$\\gamma_{1} = \\alpha$}, \n\\mbox{$\\gamma_{2} = \\bot$} and \\mbox{$\\gamma_{3} = \\beta$}.\n\\end{proof}\n\n\\begin{corollary} \\label{le:comm_neg}\nFor any \\mbox{$\\alpha, \\beta \\in M$}, if $\\alpha$ and $\\beta$ commute\nthen $\\neg \\alpha$ and $\\beta$ commute.\n\\end{corollary}\n\\begin{proof}\nLet \\mbox{$\\alpha =$} \\mbox{$\\gamma_{1} \\vee \\gamma_{2}$} and\n\\mbox{$\\beta =$} \\mbox{$\\gamma_{1} \\vee \\gamma_{3}$}.\nThen \\mbox{$\\neg \\alpha =$} \\mbox{$\\neg \\gamma_{1} \\wedge \\neg \\gamma_{2}$}.\nSince \\mbox{$\\gamma_{3} \\subseteq \\neg \\alpha$}, we have, by the\northomodular property,\n\\mbox{$\\neg \\alpha =$} \n\\mbox{$\\gamma_{3} \\vee \\neg \\gamma_{1} \\wedge \\neg \\gamma_{2} \n\\wedge \\neg \\gamma_{3}$}.\nBut \\mbox{$\\beta =$} \\mbox{$\\gamma_{3} \\vee \\gamma_{1}$} and\n\\mbox{$\\gamma_{1} \\perp \\neg \\gamma_{1} \\wedge \\neg \\gamma_{2}\n\\wedge \\neg \\gamma_{3}$}.\n\\end{proof}\n\nFirst, we shall consider disjunctions of orthogonal propositions.\n\\begin{lemma} \\label{le:orthodisjunction}\nIf \\mbox{$\\alpha \\perp \\beta$} then, for any \\mbox{$x \\in X$},\n\\mbox{$p(x, \\alpha \\vee \\beta) =$}\n\\mbox{$p(x, \\alpha) + p(x, \\beta)$}.\n\\end{lemma}\n\\begin{proof}\nConsider any \\mbox{$\\vec{u} \\neq \\vec{0} \\in x$}.\nNow \\mbox{$(\\alpha \\vee \\beta)(\\vec{u}) =$}\n\\mbox{$\\alpha(\\vec{u}) + \\beta(\\vec{u})$} (see~\\cite{Halmos:Hilbert} \nTheorem 2, page 46). Therefore\n\\mbox{$\\langle \\vec{u} \\, , \\, (\\alpha \\vee \\beta)(\\vec{u}) \\rangle =$}\n\\mbox{$\\langle \\vec{u} \\, , \\, \\alpha(\\vec{u}) \\rangle +$}\n\\mbox{$\\langle \\vec{u} \\, , \\, \\beta(\\vec{u}) \\rangle$}. 
\n\\end{proof}\n\\begin{corollary} \\label{co:orthodisjunction}\nIf \\mbox{$\\alpha_{i}$}, \\mbox{$i \\in I$}, is a finite family of pairwise orthogonal measurements, then\nfor any \\mbox{$x \\in X$} we have\n\\mbox{$p(x, \\bigvee_{i \\in I} \\alpha_{i}) =$}\n\\mbox{$\\sum_{i \\in I} p(x, \\alpha_{i})$}.\n\\end{corollary}\n\\begin{proof}\nBy induction on the size of $I$, and associativity of disjunction.\n\\end{proof}\n\nThe following lemmas are fundamental characteristics of probabilities.\n\\begin{lemma} \\label{le:neg_sum_one}\nFor any \\mbox{$\\alpha \\in M$} and any \\mbox{$x \\in X$}:\n\\mbox{$p(x, \\alpha) + p(x, \\neg \\alpha) = 1$}.\n\\end{lemma}\n\\begin{proof}\nBy Lemma~\\ref{le:orthodisjunction}, \n\\mbox{$p(x, \\alpha) + p(x, \\neg \\alpha) = p(x, \\alpha \\vee \\neg \\alpha)$}.\nBut \\mbox{$\\alpha \\vee \\neg \\alpha = \\top$} and therefore \n\\mbox{$(\\alpha \\vee \\neg \\alpha)(x) = x$} and, \nby Corollary~\\ref{le:satisfaction},\n\\mbox{$p(x, \\alpha \\vee \\neg \\alpha) = 1$}.\n\\end{proof}\n\\begin{lemma} \\label{le:zero_one}\nFor any \\mbox{$\\alpha \\in M$} and any \\mbox{$x \\in X$}:\n\\mbox{$0 \\leq p(x, \\alpha) \\leq 1$}.\n\\end{lemma}\n\\begin{proof}\nBy Lemmas~\\ref{le:Born} and~\\ref{le:neg_sum_one}.\n\\end{proof}\n\\begin{lemma} \\label{le:disj_prob}\nLet \\mbox{$\\alpha, \\beta \\in M$} be any {\\em commuting} measurements.\nFor any \\mbox{$x \\in X$}\n\\mbox{$p(x, \\alpha \\vee \\beta) =$}\n\\mbox{$p(x, \\alpha) + p(x, \\beta) - p(x, \\alpha \\wedge \\beta)$}.\n\\end{lemma}\n\\begin{proof}\nWe know that \\mbox{$\\alpha \\vee \\beta =$}\n\\mbox{$(\\alpha \\wedge \\beta) \\vee (\\alpha \\wedge \\neg \\beta)\n\\vee (\\neg \\alpha \\wedge \\beta)$}. The three parts of the disjunction above\nare pairwise orthogonal, therefore Corollary~\\ref{co:orthodisjunction} implies\nthat \\mbox{$p(x, \\alpha \\vee \\beta) =$}\n\\mbox{$p(x, \\alpha \\wedge \\beta) +$}\n\\mbox{$p(x, \\alpha \\wedge \\neg \\beta) +$}\n\\mbox{$p(x, \\neg \\alpha \\wedge \\beta)$}.\nBut, by Lemma~\\ref{le:orthodisjunction}:\n\\mbox{$p(x, \\alpha \\wedge \\beta) +$}\n\\mbox{$p(x, \\alpha \\wedge \\neg \\beta) =$}\n\\mbox{$p(x, \\alpha)$} and\n\\mbox{$p(x, \\alpha \\wedge \\beta) +$}\n\\mbox{$p(x, \\neg \\alpha \\wedge \\beta) =$}\n\\mbox{$p(x, \\beta)$}.\n\\end{proof}\nThe lemmas above dealt mostly with the properties of disjunction. 
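\nThese additivity properties can be checked numerically. The following minimal sketch is an illustration only; it assumes Python with the numpy package, and the diagonal projections below are a toy example of our own choosing.\n\\begin{verbatim}\nimport numpy as np\n\ndef ray(v):\n    v = np.asarray(v, dtype=complex)\n    return v \/ np.linalg.norm(v)\n\ndef prob(x, P):\n    # Born rule: p(x, alpha) = ||alpha(u)||^2 for a unit vector u of x\n    return np.linalg.norm(P @ x) ** 2\n\n# two commuting propositions of C^4, all relevant subspaces diagonal in the standard basis\nalpha = np.diag([1, 1, 0, 0]).astype(complex)   # span{e_1, e_2}\nbeta  = np.diag([1, 0, 1, 0]).astype(complex)   # span{e_1, e_3}\nmeet  = np.diag([1, 0, 0, 0]).astype(complex)   # alpha AND beta\njoin  = np.diag([1, 1, 1, 0]).astype(complex)   # alpha OR beta\nneg_alpha = np.eye(4, dtype=complex) - alpha    # NOT alpha\n\nrng = np.random.default_rng(1)\nx = ray(rng.normal(size=4) + 1j * rng.normal(size=4))\n\nprint(np.isclose(prob(x, alpha) + prob(x, neg_alpha), 1.0))\nprint(np.isclose(prob(x, join), prob(x, alpha) + prob(x, beta) - prob(x, meet)))\n\\end{verbatim}\n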
The next\nresult concerns conjunction and parallels the consideration of conditional\nprobabilities.\n\\begin{lemma} \\label{le:conj}\nLet \\mbox{$\\alpha, \\beta \\in M$} be any {\\em commuting} measurements.\nFor any \\mbox{$x \\in X$}:\n\\mbox{$p(x, \\alpha \\wedge \\beta) =$}\n\\mbox{$p(x, \\alpha) \\: p(\\alpha(x), \\beta)$}.\n\\end{lemma}\n\\begin{proof}\nSince \\mbox{$\\alpha \\wedge \\beta = \\alpha \\circ \\beta$}, by the definition\nof $p$, taking any \\mbox{$\\vec{u} \\neq \\vec{0} \\in x$}:\n\\[\np(x, \\alpha \\wedge \\beta) =\n{{\\parallel (\\alpha \\circ \\beta)(\\vec{u}) \\parallel^{2}} \\over\n{\\parallel \\vec{u} \\parallel^{2}}} =\n{{\\parallel (\\alpha \\circ \\beta)(\\vec{u}) \\parallel^{2}} \\over\n{\\parallel \\alpha(\\vec{u}) \\parallel^{2}}} \\ \n{{\\parallel \\alpha(\\vec{u}) \\parallel^{2}} \\over \n{\\parallel \\vec{u} \\parallel^{2}}} =\np(\\alpha(x), \\beta) \\ p(x, \\alpha).\n\\]\n\\end{proof}\n\\begin{corollary} \\label{co:leq}\nLet \\mbox{$\\alpha, \\beta \\in M$} be any measurements such that \n\\mbox{$\\alpha \\leq \\beta$}.\nThen for any \\mbox{$x \\in X$}, \\mbox{$p(x, \\alpha) \\leq p(x, \\beta)$}.\n\\end{corollary}\n\\begin{proof}\nIf \\mbox{$\\alpha \\leq \\beta$}, the two measurements commute and\n\\mbox{$\\alpha = \\beta \\wedge \\alpha$}.\nBy Lemma~\\ref{le:conj}, then \\mbox{$p(x, \\alpha) =$}\n\\mbox{$p(x, \\beta) \\: p(\\beta(x), \\alpha) \\leq$}\n\\mbox{$p(x, \\beta)$} by Lemma~\\ref{le:zero_one}. \n\\end{proof}\n\\begin{corollary} \\label{co:comp}\nLet \\mbox{$\\alpha, \\beta \\in M$} be any {\\em commuting} measurements.\nThen for any \\mbox{$x \\in X$}, \n\\mbox{$p(x, \\beta) = p(x, \\alpha) \\, p(\\alpha(x), \\beta) \\: + \\: \np(x, \\neg \\alpha) \\, p((\\neg \\alpha)(x), \\beta)$}.\n\\end{corollary}\n\\begin{proof}\nSince $\\alpha$ and $\\beta$ commute, by Theorem~1 of~\\cite{LEG:Malg},\n\\mbox{$\\beta = (\\alpha \\wedge \\beta) \\vee (\\neg \\alpha \\wedge \\beta)$}.\nBy Lemma~\\ref{le:orthodisjunction} we have:\n\\mbox{$p(x, \\beta) = p(x, \\alpha \\wedge \\beta) \\: + \\: p(x, \\neg \\alpha \\wedge \\beta)$}.\nWe conclude, by Lemma~\\ref{le:conj}, that\n\\mbox{$p(x, \\beta) = p(x, \\alpha) \\, p(\\alpha(x), \\beta) \\: + \\: \np(x, \\neg \\alpha) \\, p((\\neg \\alpha)(x), \\beta)$}.\n\\end{proof}\nIn Corollary~\\ref{co:comp} one cannot omit the requirement \nthat $\\alpha$ and $\\beta$ commute. The consideration of a two-dimensional \nEuclidean space where $\\alpha$ is the x-axis and $x$ makes an angle\n$\\theta$ with the x-axis is sufficient. 
If $\\beta$ is $x$, then\n\\mbox{$p(x, \\beta) = 1$} whereas \n\\mbox{$p(x, \\alpha) =$}\n\\mbox{$\\cos^{2}(\\theta) =$} \\mbox{$p(\\alpha(x), \\beta)$} and\n\\mbox{$p(x, \\neg \\alpha) =$}\n\\mbox{$\\sin^{2}(\\theta) =$} \\mbox{$p((\\neg \\alpha)(x), \\beta)$}.\nAlso taking $\\beta$ orthogonal to $x$ gives\n\\mbox{$p(x, \\beta) = 0$} and \n\\mbox{$p(x, \\alpha) =$} \\mbox{$\\cos^{2}(\\theta) =$}\n\\mbox{$p((\\neg \\alpha)(x), \\beta)$} and\n\\mbox{$p(x, \\neg \\alpha) =$} \\mbox{$\\sin^{2}(\\theta) =$}\n\\mbox{$p(\\alpha(x), \\beta)$}.\nNevertheless the result holds in the following case.\n\\begin{lemma} \\label{le:orthomodular_equality}\nFor any \\mbox{$x \\in X$} and any \\mbox{$\\alpha, \\beta \\in M$} such that\n\\mbox{$\\alpha(x) \\in \\beta$} and \\mbox{$(\\neg \\alpha)(x) \\in \\beta$},\none has \n\\[\np(x, \\beta) = p(x, \\alpha) \\, p(\\alpha(x), \\beta) \\: + \\: \np(x, \\neg \\alpha) \\, p((\\neg \\alpha)(x), \\beta)= 1.\n\\]\n\\end{lemma}\n\\begin{proof}\nBy assumption both $\\alpha(x)$ and $(\\neg \\alpha)(x)$ are subspaces of \n$\\beta$. Given any \\mbox{$\\vec{u} \\in x$}, both \n$\\alpha(\\vec{u})$ and $(\\neg \\alpha)(\\vec{u})$ are in $\\beta$.\nBut $\\beta$ is a subspace and therefore \n\\mbox{$\\alpha(\\vec{u}) + (\\neg \\alpha)(\\vec{u}) =$}\n\\mbox{$\\vec{u} \\in \\beta$}.\nHence \\mbox{$x \\in \\beta$} and, by Corollary~\\ref{le:satisfaction},\n\\mbox{$p(x, \\beta) = 1$}; since, by the same corollary,\n\\mbox{$p(\\alpha(x), \\beta) = p((\\neg \\alpha)(x), \\beta) = 1$},\nLemma~\\ref{le:neg_sum_one} shows that the middle expression equals $1$ as well.\n\\end{proof}\n\n\\begin{lemma} \\label{le:local_comp}\nFor any \\mbox{$x \\in X$} and any \\mbox{$\\alpha, \\beta \\in M$} such that\n\\mbox{$(\\alpha \\circ \\beta)(x) =$} \\mbox{$(\\beta \\circ \\alpha)(x)$}, we have\n\\mbox{$p(x, \\beta) =$} \\mbox{$p(x, \\alpha) \\, p(\\alpha(x), \\beta) \\: + \\: \np(x, \\neg \\alpha) \\, p((\\neg \\alpha)(x), \\beta)$}.\n\\end{lemma}\n\\begin{proof}\nAssume that \\mbox{$(\\alpha \\circ \\beta)(x) =$}\n\\mbox{$(\\beta \\circ \\alpha)(x)$}.\nBy Lemma~\\ref{le:comm_neg}, \n\\mbox{$(\\neg \\alpha \\circ \\beta)(x) =$} \n\\mbox{$(\\beta \\circ \\neg \\alpha)(x)$}.\nTake any \\mbox{$\\vec{u} \\neq \\vec{0} \\in x$}.\nThen,\n\\[\np(x, \\beta) \\ = \\ \n\\parallel \\beta(\\vec{u}) \\parallel^{2} \\: \/ \\: \n\\parallel \\vec{u} \\parallel^{2} \\ = \\ \n\\parallel \\alpha(\\beta(\\vec{u})) + (\\neg \\alpha)(\\beta(\\vec{u})) \\parallel^{2}\n\\: \/ \\: \\parallel \\vec{u} \\parallel^{2} \\ = \n\\]\n\\[\n\\parallel \\alpha(\\beta(\\vec{u})) \\parallel^{2}\\: \/ \\: \n\\parallel \\vec{u} \\parallel^{2} \n+ \\parallel (\\neg \\alpha)(\\beta(\\vec{u})) \\parallel^{2}\n\\: \/ \\: \\parallel \\vec{u} \\parallel^{2} \\ = \\ \n\\]\n\\[\n\\parallel \\beta(\\alpha(\\vec{u})) \\parallel^{2}\\: \/ \\: \n\\parallel \\vec{u} \\parallel^{2} \n+ \\parallel \\beta((\\neg \\alpha)(\\vec{u})) \\parallel^{2}\n\\: \/ \\: \\parallel \\vec{u} \\parallel^{2} \\ = \\ \n\\]\n\\[\n{{\\parallel \\beta(\\alpha(\\vec{u})) \\parallel^{2}} \\over\n{\\parallel \\alpha(\\vec{u}) \\parallel^{2}}} \\ \n{{\\parallel \\alpha(\\vec{u}) \\parallel^{2}} \\over\n{\\parallel \\vec{u} \\parallel^{2}}}\n+ \n{{\\parallel \\beta((\\neg \\alpha)(\\vec{u})) \\parallel^{2}} \\over \n{\\parallel (\\neg \\alpha)(\\vec{u}) \\parallel^{2}}} \\ \n{{\\parallel (\\neg \\alpha)(\\vec{u}) \\parallel^{2}} \\over \n{\\parallel \\vec{u} \\parallel^{2}}} \\ = \\ \n\\]\n\\[\np(\\alpha(x),\\beta) \\, p(x, \\alpha) \\: + \\: p((\\neg \\alpha)(x), \\beta) \\, \np(x, \\neg \\alpha). 
\n\\]\n\\end{proof}\n\n\\subsubsection{An Inequality} \\label{sec:num_inter}\nThe next result strengthens the Interference property of~\\cite{LEG:Malg}\nby presenting a quantitative version of the principle.\n\\begin{theorem} \\label{the:quant_interference}\nFor any \\mbox{$\\alpha, \\beta \\in M$} and any \\mbox{$x \\in X$} such that\n\\mbox{$\\alpha(x) = x$}, \n\\[\np(x, \\beta) \\: (1 - p(\\beta(x), \\alpha))^{2} \\: \\leq \\:\np(\\beta(x), \\alpha) \\: (1 - p(\\alpha(\\beta(x)), \\beta))\n\\]\n\\end{theorem}\nNote that, by Theorem~\\ref{the:p},\n\\mbox{$p(x, \\beta) \\leq p(\\beta(x), \\alpha)$} but\n\\mbox{$(1 - p(\\beta(x), \\alpha)) \\geq (1 - p(\\alpha(\\beta(x)), \\beta))$}.\nThe fact that the quantity \\mbox{$1 - p(\\beta(x), \\alpha)$} \nappears squared seems inevitable. \nAn examination of $\\mbox{${\\cal R}$}^{3}$ shows that it may be the case that\n\\mbox{$p(x, \\beta) \\: (1 - p(\\beta(x), \\alpha)) \\: > \\:$}\n\\mbox{$p(\\beta(x), \\alpha) (1 - p(\\alpha(\\beta(x)), \\beta))$}.\n\\begin{proof}\nAssume \\mbox{$\\vec{t} \\neq \\vec{0} \\in x$}.\nLet \\mbox{$\\vec{u} = \\beta(\\vec{t})$}, \\mbox{$\\vec{v} = \\alpha(\\vec{u})$}\nand \\mbox{$\\vec{w} = \\beta(\\vec{v})$}.\n\nIn a first step we want to show that:\n\\[\n\\parallel \\vec{u} - \\vec{v} \\parallel^{2} =\n\\langle \\vec{t} \\, , \\, \\vec{v} - \\vec{w} \\rangle.\n\\]\n\nIndeed: \\mbox{$\\parallel \\vec{u} - \\vec{v} \\parallel^{2} =$}\n\\mbox{$\\langle \\vec{u} - \\vec{v} \\, , \\, \\vec{u} - \\vec{v} \\rangle =$}\n\\mbox{$\\langle \\vec{u} \\, , \\, \\vec{u} - \\vec{v} \\rangle -$}\n\\mbox{$\\langle \\vec{v} \\, , \\, \\vec{u} - \\vec{v} \\rangle$}.\nBut the last term is null since \\mbox{$\\vec{u} - \\vec{v}$} is orthogonal\nto $\\alpha$ in general and in particular to $\\vec{v}$.\nWe have:\n\\[\n\\parallel \\vec{u} - \\vec{v} \\parallel^{2} =\n\\langle \\vec{u} \\, , \\, \\vec{u} - \\vec{v} \\rangle.\n\\]\nBut \\mbox{$\\vec{t} - \\vec{u}$} is, similarly, orthogonal\nto $\\vec{u}$ and \\mbox{$\\langle \\vec{u} \\, , \\, \\vec{u} \\rangle =$}\n\\mbox{$\\langle \\vec{t} \\, , \\, \\vec{u} \\rangle$}.\nSince \\mbox{$\\vec{u} - \\vec{v}$} is orthogonal to $\\vec{t}$,\n\\mbox{$\\langle \\vec{t} \\, , \\, \\vec{u} \\rangle =$}\n\\mbox{$\\langle \\vec{t} \\, , \\, \\vec{v} \\rangle$}.\nWe have:\n\\[\n\\parallel \\vec{u} - \\vec{v} \\parallel^{2} =\n\\langle \\vec{t} \\, , \\, \\vec{v} \\rangle - \\langle \\vec{u} \\, , \\, \\vec{v} \\rangle.\n\\]\nAgain, \\mbox{$\\vec{v} - \\vec{w}$} is orthogonal to $\\vec{u}$ and therefore:\n\\mbox{$\\langle \\vec{u} \\, , \\, \\vec{v} \\rangle =$}\n\\mbox{$\\langle \\vec{u} \\, , \\, \\vec{w} \\rangle$} \nand \\mbox{$\\vec{t} - \\vec{u}$} is orthogonal to $\\vec{w}$ and\nwe have: \\mbox{$\\langle \\vec{u} \\, , \\, \\vec{w} \\rangle =$}\n\\mbox{$\\langle \\vec{t} \\, , \\, \\vec{w} \\rangle$}.\nTherefore:\n\\[\n\\parallel \\vec{u} - \\vec{v} \\parallel^{2} =\n\\langle \\vec{t} \\, , \\, \\vec{v} \\rangle - \\langle \\vec{t} \\, , \\, \\vec{w} \\rangle =\n\\langle \\vec{t} \\, , \\, \\vec{v} - \\vec{w} \\rangle.\n\\]\nBy Cauchy-Schwarz, therefore, we have:\n\\[\n\\parallel \\vec{u} - \\vec{v} \\parallel^{2} \\: \\leq \\:\n\\parallel \\vec{t} \\parallel \\: \\parallel \\vec{v} - \\vec{w} \\parallel.\n\\]\nand:\n\\[\n\\parallel \\vec{u} - \\vec{v} \\parallel^{4} \\: \\leq \\:\n\\parallel \\vec{t} \\parallel^{2} \\: \\parallel \\vec{v} - \\vec{w} \\parallel^{2}.\n\\]\nBut: \\mbox{$\\parallel \\vec{u} \\parallel^{2} =$}\n\\mbox{$\\parallel \\vec{v} \\parallel^{2} +$}\n\\mbox{$\\parallel \\vec{u} - 
\\vec{v} \\parallel^{2}$}, and\n\\mbox{$\\parallel \\vec{v} \\parallel^{2} =$}\n\\mbox{$\\parallel \\vec{w} \\parallel^{2} +$}\n\\mbox{$\\parallel \\vec{v} - \\vec{w} \\parallel^{2}$}.\nTherefore we have:\n\\[\n(\\parallel \\vec{u} \\parallel^{2} - \\parallel \\vec{v} \\parallel^{2})^{2} \n\\: \\leq \\:\n\\parallel \\vec{t} \\parallel^{2} \\: \n(\\parallel \\vec{v} \\parallel^{2} - \\parallel \\vec{w} \\parallel^{2}).\n\\]\nand\n\\[\n{{\\parallel {\\vec{u}} \\parallel^{2}} \\over \n{\\parallel \\vec{t} \\parallel^{2}}}\n\\: ( 1 - \n{{\\parallel \\vec{v} \\parallel^{2}} \\over \n{\\parallel \\vec{u} \\parallel^{2}}})^{2}\n\\: \\leq \\:\n{{\\parallel \\vec{v} \\parallel^{2} - \\parallel \\vec{w} \\parallel^{2}}\n\\over\n{\\parallel \\vec{u} \\parallel^{2}}},\n\\]\n\\[\np(x, \\beta) \\: ( 1 - p(\\beta(x), \\alpha))^{2} \\: \\leq \\:\n{{\\parallel \\vec{v} \\parallel^{2}} \\over {\\parallel \\vec{u} \\parallel^{2}}}\n\\: (1 - {{\\parallel \\vec{w} \\parallel^{2}} \\over \n{\\parallel \\vec{v} \\parallel^{2}}}).\n\\]\nWe conclude that:\n\\[\np(x, \\beta) \\: ( 1 - p(\\beta(x), \\alpha))^{2} \\: \\leq \\:\np(\\beta(x), \\alpha) \\: (1 - p(\\alpha(\\beta(x)), \\beta)).\n\\]\n\\end{proof}\nTheorem~\\ref{the:quant_interference} is a quantitative strengthening of the\n{\\bf Interference} property of projections in Hilbert spaces that plays a \ncentral role in the definition of an M-algebra~\\cite{LEG:Malg}.\nIndeed, assuming that \\mbox{$x \\in \\alpha$}, if \n\\mbox{$\\alpha(\\beta(x)) \\in \\beta$}, then, by Corollary~\\ref{le:satisfaction},\n\\mbox{$p(\\alpha(\\beta(x)), \\beta) = 1$} and by \nTheorem~\\ref{the:quant_interference}, either \\mbox{$p(x, \\beta) = 0$}\nor \\mbox{$p(\\beta(x), \\alpha) = 1$}. In both cases we have \n\\mbox{$p(\\beta(x), \\alpha) = 1$} and, by Corollary~\\ref{le:satisfaction},\n\\mbox{$\\beta(x) \\in \\alpha$}.\n\n\\subsection{Phases for Triangles: $\\theta(x, y, z)$} \n\\label{sec:theta}\nWe may now proceed to the definition of a second geometric quantity \nrelating three states: $\\theta(x, y, z)$. This quantity does not seem to\nhave been studied previously.\n\nIn section~\\ref{sec:a} a quantity was attached to any pair of states.\nThis quantity was the modulus of some inner product.\nIt seems natural that the argument of a similar inner product represents\nanother important geometrical quantity. But, clearly some thinking must be\ndone to define, out of such an argument, a quantity that does not depend\non the vectors chosen, but only on states.\nA new quantity, \\mbox{$\\theta(x, y, z)$}, \nan angle in the interval $[0, 2 \\pi]$\nwill be attached to triples of states.\nThis quantity can be defined only if no two of the three states\n$x$, $y$ and $z$ are orthogonal.\n\n\\begin{definition} \\label{def:theta}\nLet \\mbox{$x, y, z \\in X$} be such that \\mbox{$x \\not \\perp y$},\n\\mbox{$y \\not \\perp z$} and \\mbox{$z \\not \\perp x$}.\nWe shall define\n\\mbox{$\\theta(x, y, z)$} in the following way.\nChoose arbitrary unit vectors\n$\\vec{u}$, $\\vec{v}$ and $\\vec{w}$ in $x$, $y$ and $z$ respectively \nand let:\n\\[\n\\theta(x, y, z) \\: = \\: \\arg(\\langle \\vec{u} \\, , \\, \\vec{v} \\rangle) \\: + \\:\n\\arg(\\langle \\vec{v} \\, , \\, \\vec{w} \\rangle) \\: + \\:\n\\arg(\\langle \\vec{w} \\, , \\, \\vec{u} \\rangle).\n\\]\n\nNote that each of those three inner products is different from zero, \nby assumption, and therefore the three complex arguments are well-defined. 
\n\\end{definition}\nWe need to justify the definition by showing that the quantity \n$\\theta(x, y, z)$ depends only on $x$, $y$ and $z$ and does not\ndepend on the vectors $\\vec{u}$, $\\vec{v}$ and $\\vec{w}$.\nFor example, the definition is independent of the vector $\\vec{u}$ \nchosen in $x$ since any unit vector $\\vec{s}$ of $x$ has the form\n\\mbox{$\\vec{s} = e^{i \\varphi} \\vec{u}$} for some\n\\mbox{$\\varphi \\in [0 , 2 \\pi]$}.\nHad we used $\\vec{s}$ instead of $\\vec{u}$ we would have obtained:\n\\[\n\\arg(\\langle e^{i \\varphi} \\vec{u} \\, , \\, \\vec{v} \\rangle) \\: + \\:\n\\arg(\\langle \\vec{v} \\, , \\, \\vec{w} \\rangle) \\: + \\:\n\\arg(\\langle \\vec{w} \\, , \\, e^{i \\varphi} \\vec{u} \\rangle) \\: = \\:\n\\]\n\\[\n\\arg(e^{i \\varphi} \\langle \\vec{u} \\, , \\, \\vec{v} \\rangle) \\: + \\:\n\\arg(\\langle \\vec{v} \\, , \\, \\vec{w} \\rangle) \\: + \\:\n\\arg(e^{- i \\varphi} \\langle \\vec{w} \\, , \\, \\vec{u} \\rangle) \\: = \\:\n\\]\n\\[\n\\varphi + \\arg(\\langle \\vec{u} \\, , \\, \\vec{v} \\rangle) \\: + \\:\n\\arg(\\langle \\vec{v} \\, , \\, \\vec{w} \\rangle) \\: - \\: \\varphi \\: + \\:\n\\arg(\\langle \\vec{w} \\, , \\, \\vec{u} \\rangle).\n\\]\nA similar line shows that the choice of none of $\\vec{v}$ or\n$\\vec{w}$ influences $\\theta(x, y, z)$.\n\nWe shall now prove some properties of $\\theta$.\nFirst, \\mbox{$\\theta(x, y, z)$} is invariant under a circular permutation\nof the arguments and antisymmetric under transpositions.\n\\begin{lemma} \\label{le:cyclic}\nFor any generic states $x$, $y$ and $z$, we have:\n\\mbox{$\\theta(y, z, x) =$} \\mbox{$\\theta(x, y, z)$},\n\\mbox{$\\theta(x, z, y) =$} \\mbox{$- \\theta(x, y, z)$} and\n\\mbox{$\\theta(x, y, w) =$}\n\\mbox{$\\theta(x, y, z) + \\theta(x, z, w) + \\theta(z, y, w)$}.\n\\end{lemma}\n\\begin{proof}\nObvious.\n\\end{proof}\nThe behavior of $\\theta$ under (planar) orthogonal complements is also\nantisymmetric.\n\\begin{lemma} \\label{le:prime}\nAssume \\mbox{$x, y, z \\in X$} are states no two of them are equal\nand no two of them are orthogonal\nand such that \\mbox{$coplanar(x, y, z)$}.\nLet \\mbox{$x' =$} \\mbox{$(\\neg x)(y) =$} \\mbox{$(\\neg x)(z)$},\n\\mbox{$y' =$} \\mbox{$(\\neg y)(z) =$} \\mbox{$(\\neg y)(x)$} and\n\\mbox{$z' =$} \\mbox{$(\\neg z)(x) =$} \\mbox{$(\\neg z)(y)$}.\nThen \\mbox{$\\theta(x', y', z') =$} \\mbox{$- \\theta(x, y, z)$}.\n\\end{lemma}\n\\begin{proof}\nChoose an arbitrary unit vector $\\vec{u}$ in $x$. 
\nLet $\\vec{v}$ be the unit vector of $y$ such that \n\\mbox{$\\langle \\vec{u} \\, , \\, \\vec{v} \\rangle > 0$}.\nLet $\\vec{u}'$ be the unit vector of $x'$ such that\n\\mbox{$\\langle \\vec{v} \\, , \\, \\vec{u}' \\rangle > 0$}.\nLet us have \\mbox{$\\vec{v} =$}\n\\mbox{$r_{1} \\vec{u} + r_{2} \\vec{u}'$} for positive real numbers\n\\mbox{$r_{i}, i = 1 , 2$}.\nThe vector \\mbox{$r_{2} \\vec{u} - r_{1} \\vec{u}'$} is a unit vector in $y'$.\nLet \\mbox{$\\vec{v}' = r_{2} \\vec{u} - r_{1} \\vec{u}'$}.\nLet $\\vec{w}$ be the unit vector of $z$ such that \n\\mbox{$\\langle \\vec{u} \\, , \\, \\vec{w} \\rangle > 0$}.\nLet \\mbox{$\\vec{w} = r_{3} \\vec{u} + r_{4} e^{i \\varphi} \\vec{u}'$}\nfor positive $r_{i}$'s, $i = 3 , 4$, and some angle $\\varphi$.\nLet \\mbox{$\\vec{w}' = r_{4} e^{- i \\varphi} \\vec{u} - r_{3} \\vec{u}'$},\na unit vector of $z'$.\n\nWe see that:\n\\[\n\\theta(x, y, z) \\, = \\, \\arg(\\langle \\vec{u} \\, , \\, \\vec{v} \\rangle) \\, + \\,\n\\arg(\\langle \\vec{v} \\, , \\, \\vec{w} \\rangle) \\, + \\,\n\\arg(\\langle \\vec{w} \\, , \\, \\vec{u} \\rangle) \\: = \\:\n0 + \\arg(\\langle \\vec{v} \\, , \\, \\vec{w} \\rangle) + 0.\n\\]\nand\n\\[\n\\theta(x', y', z') \\, = \\, \\arg(\\langle \\vec{u}' \\, , \\, \\vec{v}' \\rangle) \\, + \\,\n\\arg(\\langle \\vec{v}' \\, , \\, \\vec{w}' \\rangle) \\, + \\,\n\\arg(\\langle \\vec{w}' \\, , \\, \\vec{u}' \\rangle) \\, = \\,\n\\pi + \\arg(\\langle \\vec{v}' \\, , \\, \\vec{w}' \\rangle) + \\pi.\n\\]\nWe are left to show that \n\\mbox{$\\arg(\\langle \\vec{v}' \\, , \\, \\vec{w}' \\rangle) = $}\n\\mbox{$- \\arg(\\langle \\vec{v} \\, , \\, \\vec{w} \\rangle)$}.\nIn fact, we shall show that\n\\mbox{$\\langle \\vec{v}' \\, , \\, \\vec{w}' \\rangle = $}\n\\mbox{$\\langle \\vec{w} \\, , \\, \\vec{v} \\rangle$}.\nIndeed, \\mbox{$\\langle \\vec{v}' \\, , \\, \\vec{w}' \\rangle = $}\n\\mbox{$r_{2} r_{4} e^{i \\varphi} + r_{1} r_{3}$}\nand\n\\mbox{$\\langle \\vec{w} \\, , \\, \\vec{v} \\rangle = $} \n\\mbox{$r_{1} r_{3} + r_{2} r_{4} e^{i \\varphi}$}.\n\\end{proof}\n\n\\section{Properties of Superpositions} \\label{sec:superp_prop}\n\nA most remarkable novelty of QM is that the components of a superposition\ninterfere. To make this apparent, let us consider\n\\mbox{$p(r y + (1 - r) z, x)$}. One would expect this quantity to be, \nessentially, equal to \\mbox{$r p(y, x) + (1 - r) p(z, x)$}.\n\\begin{lemma} \\label{le:p_basis}\nLet \\mbox{$x, y, z \\in X$} be such that \\mbox{$x \\not \\perp y$}, \n\\mbox{$y \\not \\perp z$} and \\mbox{$z \\not \\perp x$}, let \\mbox{$r \\in [0,1]$} and let \n\\mbox{$\\omega(r, y, z) = 1 + 2 \\sqrt{r (1 - r) p(y, z)}$}. Then:\n\\[\np(r y \\, + \\, (1 - r) z, x) = \n\\frac{r p(y, x) + (1 - r) p(z, x) + \n2 \\cos(\\theta(x, y, z)) \\sqrt{r (1 - r) p(y,x) p(z, x)}} \n{\\omega(r, y, z)}\n\\]\n\\end{lemma}\nWe see that, indeed, \\mbox{$p(r y + (1 - r) z, x)$} is almost \nequal to \\mbox{$r p(y, x) + (1 - r) p(z, x)$}. But there are two correction\nterms. The term \n\\mbox{$2 \\cos(\\theta(x, y, z)) \\sqrt{r (1 - r) p(y,x) p(z, x)}$} is\nan interference term, a characteristic of QM.\nThe denominator \\mbox{$\\omega(r, y, z)$} is a normalization factor.\nNote that the interference term contains $\\cos(\\theta(x, y, z))$, not\n$\\sin(\\theta(x, y, z))$. Even if all angles $\\theta$ are equal to zero,\nwhich is the case in a Euclidean space, the term is non-zero. 
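\nThe formula can also be checked numerically. The following minimal sketch is an illustration only; it assumes Python with the numpy package, uses helper names of our own choosing, builds the superposition of Definition~\\ref{def:superposition}, and compares both sides of the formula on randomly chosen states.\n\\begin{verbatim}\nimport numpy as np\n\ndef ray(v):\n    v = np.asarray(v, dtype=complex)\n    return v \/ np.linalg.norm(v)\n\ndef inner(u, v):\n    # <u, v>: linear in the first argument, conjugate-linear in the second\n    return np.vdot(v, u)\n\ndef p(x, y):\n    return abs(inner(x, y)) ** 2\n\ndef superpose(y, z, r):\n    # r y + (1 - r) z: take v = y and the unique unit vector w of the state of z\n    # with <v, w> real and positive, then normalize sqrt(r) v + sqrt(1 - r) w\n    c = inner(y, z)\n    w = z * (c \/ abs(c))\n    return ray(np.sqrt(r) * y + np.sqrt(1 - r) * w)\n\ndef theta(x, y, z):\n    # arg<u, v> + arg<v, w> + arg<w, u> for unit vectors u, v, w of x, y, z\n    return np.angle(inner(x, y)) + np.angle(inner(y, z)) + np.angle(inner(z, x))\n\nrng = np.random.default_rng(3)\nx, y, z = (ray(rng.normal(size=3) + 1j * rng.normal(size=3)) for _ in range(3))\nr = 0.3\n\nlhs = p(superpose(y, z, r), x)\nomega = 1 + 2 * np.sqrt(r * (1 - r) * p(y, z))\nrhs = (r * p(y, x) + (1 - r) * p(z, x)\n       + 2 * np.cos(theta(x, y, z)) * np.sqrt(r * (1 - r) * p(y, x) * p(z, x))) \/ omega\nprint(np.isclose(lhs, rhs))   # True\n\\end{verbatim}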
\n\\begin{proof}\nLet $\\vec{v}$ and $\\vec{w}$ be unit vectors of $y$ and $z$\nrespectively with \\mbox{$\\langle \\vec{v} \\, , \\, \\vec{w} \\rangle > 0$}.\nLet \\mbox{$\\vec{u} = \\sqrt{r} \\, \\vec{v} + \\sqrt{1 - r} \\, \\vec{w}$}.\nWe have \\mbox{$\\langle \\vec{v} \\, , \\, \\vec{w} \\rangle =$}\n\\mbox{$\\langle \\vec{w} \\, , \\, \\vec{v} \\rangle =$}\n\\mbox{$\\mid \\langle \\vec{v} \\, , \\, \\vec{w} \\rangle \\mid = $}\n\\mbox{$\\sqrt{p(y, z)}$} and\n\\[\n\\mid \\vec{u} \\mid^{2} = \\langle \\sqrt{r} \\, \\vec{v} + \\sqrt{1 - r} \\, \\vec{w} \n\\, , \\,\n\\sqrt{r} \\, \\vec{v} + \\sqrt{1 - r} \\, \\vec{w} \\rangle = \n\\]\n\\[\nr + 2 \\sqrt{r (1 - r)} \\sqrt{p(y, z)} + (1 - r) = \n\\omega(r, y, z).\n\\]\n\nLet now $\\vec{t}$ be the unit vector of $x$ such that \n\\mbox{$\\langle \\vec{t} \\, , \\, \\vec{v} \\rangle > 0$}. \nWe have:\n\\mbox{$\\theta(x, y, z) =$}\n\\mbox{$0 + 0 + \\arg(\\langle \\vec{w} \\, , \\, \\vec{t} \\rangle)$}.\nTherefore \n\\mbox{$\\langle \\vec{w} \\, , \\, \\vec{t} \\rangle =$}\n\\mbox{$\\sqrt{p(x, z)} e^{i \\theta(x, y, z)}$} and\n\\[\n\\langle \\sqrt{r} \\, \\vec{v} + \\sqrt{1 - r} \\, \\vec{w} \\, , \\, \\vec{t} \\rangle =\n\\sqrt{r} \\sqrt{p(y, x)} + \\sqrt{1 - r} \\sqrt{p(z, x)} (\\cos(\\theta(x, y, z)) \n+ i \\sin(\\theta(x, y, z))).\n\\]\nTherefore\n\\[\n\\mid \\langle \\sqrt{r} \\, \\vec{v} + \\sqrt{1 - r} \\, \\vec{w} \\, , \\, \n\\vec{t} \\rangle \\mid^{2}\n=\nr p(x, y) + (1 - r) p(x, z) \\cos^{2}(\\theta(x, y, z)) + \n\\]\n\\[\n2 \\sqrt{r (1 - r) p(x, y) p(x, z)} \\cos(\\theta(x, y, z)) +\n(1 - r) p(x, z) \\sin^{2}(\\theta(x, y, z)). \n\\]\nThis sum equals \\mbox{$r p(x, y) + (1 - r) p(x, z) + \n2 \\sqrt{r (1 - r) p(x, y) p(x, z)} \\cos(\\theta(x, y, z))$};\ndividing by \\mbox{$\\parallel \\vec{u} \\parallel^{2} = \\omega(r, y, z)$} yields the claimed formula.\n\\end{proof}\n\n\\begin{lemma} \\label{le:prop1}\nIf \\mbox{$y \\not \\perp z$}, \\mbox{$r \\in [0,1]$}, and\n\\mbox{$x = r y \\, + \\, (1 - r) z$} then:\n\\begin{enumerate}\n\\item \\mbox{$coplanar(x, y, z)$},\n\\item \\mbox{$\\theta(x, y, z) = 0$}, \n\\item \\label{form} \n\\mbox{$p(x, y) = 1 - (1 - r) (1 - p(y, z)) \\: \/ \\: \\left( 1 + 2 \\sqrt{r (1 - r) p(y, z)} \\right)$}, and\n\\item for any \\mbox{$0 < r \\leq 1$}, we have\n\\mbox{$p(r y + (1 - r) z, y) > p(y, z)$}.\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nLet \\mbox{$x = r y \\, + \\, (1 - r) z$} and\n\\mbox{$\\vec{u} = \\sqrt{r} \\, \\vec{v} + \\sqrt{1 - r} \\, \\vec{w}$},\nwith $\\vec{v}$ and $\\vec{w}$ as in Definition~\\ref{def:superposition}.\nImmediately, by Definition~\\ref{def:superposition}, \\mbox{$coplanar(x, y, z)$}.\nSince \\mbox{$\\langle \\vec{v} \\, , \\, \\vec{w} \\rangle > 0$},\nwe have \n\\mbox{$\\langle \\vec{u} \\, , \\, \\vec{v} \\rangle > 0$}\nand also \n\\mbox{$\\langle \\vec{u} \\, , \\, \\vec{w} \\rangle > 0$}.\nWe conclude that \\mbox{$\\theta(x, y, z) = 0$}.\nFor~\\ref{form}), the value of \\mbox{$p(x, y)$} is straightforward from \nLemma~\\ref{le:p_basis}.\nFrom the same Lemma,\n\\[\np(r y + (1 - r) z, y) = \n\\frac{r + (1 - r) p(y, z) + 2 \\sqrt{r (1 - r) p(y, z)}}\n{1 + 2 \\sqrt{r (1 - r) p(y, z)}} > \n\\]\n\\[\n\\frac{r p(y, z) + (1 - r) p(y, z) + \n2 p(y, z) \\sqrt{r (1 - r) p(y, z)}}\n{1 + 2 \\sqrt{r (1 - r) p(y, z)}} = p(y, z).\n\\]\n\\end{proof}\n\n\\begin{corollary} \\label{le:co_prime}\nIf $x$, $x'$, $y$ and $z$ are coplanar states with \\mbox{$x \\perp x'$},\none has:\n\\[\n\\cos(\\theta(x', y, z)) = \\frac{\\sqrt{p(y, z)} - \\cos(\\theta(x, y, z)) \\,\n\\sqrt{p(x, y) p(x, z)}} \n{\\sqrt{(1 - p(x, y))(1 - p(x, z))}}.\n\\] \n\\end{corollary}\n\\begin{proof}\nWe have: \\mbox{$p(r y + (1 - r) z, x) + \np(r y + (1 - r) z, x') = 1$}. 
By Lemma~\\ref{le:p_basis}:\n\\[\np(r y + (1 - r) z, x) = \\frac{r p(x, y) + (1 - r) p(x, z) +\n2 \\cos(\\theta(x, y, z)) \\sqrt{r (1 - r) p(x, y) p(x, z)}}\n{1 + 2 \\sqrt{r (1 - r) p(y, z)}}\n\\]\nand\n\\[\np(r y + (1 - r) z, x') = \n\\]\n\\[\n\\frac{r (1 - p(x, y)) + (1 - r) (1 - p(x, z)) +\n2 \\cos(\\theta(x', y, z)) \\sqrt{r (1 - r) (1 - p(x, y)) (1 - p(x, z))}}\n{1 + 2 \\sqrt{r (1 - r) p(y, z)}}.\n\\]\nTherefore\n\\[\n1 + 2 \\sqrt{r (1 - r) p(y, z)} =\n\\]\n\\[\n r + (1 - r) + \n2 \\cos(\\theta(x, y, z)) \\sqrt{r (1 - r) p(x, y) p(x, z)} + \n\\]\n\\[\n2 \\cos(\\theta(x', y, z)) \\sqrt{r (1 - r) (1 - p(x, y)) (1 - p(x, z))}\n\\]\nand\n\\[\n\\sqrt{r (1 - r) p(y, z)} = \n\\]\n\\[ \n\\cos(\\theta(x, y, z)) \\sqrt{r (1 - r) p(x, y) p(x, z)} + \n\\]\n\\[\n\\cos(\\theta(x', y, z)) \\sqrt{r (1 - r) (1 - p(x, y)) (1 - p(x, z))}\n\\]\n\\end{proof}\n\nIn parallel with Lemma~\\ref{le:p_basis},\none would like to express \\mbox{$\\theta(ry + (1-r)z, x1, x2)$}\nin terms of $r$ and the $p$'s and $\\theta$'s of $y$, $z$, $x1$ and $x2$, for\ncoplanar states.\nThe formula obtained (by considering some orthonormal basis for the two\ndimensional subspace) is, unfortunately, not very appealing and shall\nnot be presented here.\n\n\\section{Mappings that Preserve Superpositions} \\label{sec:mappings}\nIt is a thesis of this paper that the structure of superpositions is \nthe fundamental structure of Hilbert spaces that is meaningful for \nQuantum Physics. To support this thesis one should, now, analyze the\nfundamental constructions used in Quantum Physics, such as tensor products \nand quotients as universal, i.e., categorical constructions in the\ncategory of superposition preserving mappings.\nSuch an analysis has not been performed yet. Some first reflections on tensor\nproducts may be found in Section~\\ref{sec:conclusion}.\n\nA preliminary step must be the proper definition of the category of\nsuperposition structures and their superposition preserving mappings.\nThis paper does not provide for a proper definition of such a category,\nwhose objects must include both structures defined by Hilbert spaces,\nstudied here, and classical structures in which any two distinct states\nare orthogonal, and all structures in-between.\nWe shall, therefore, consider only superposition structures defined by\nsome Hilbert space. 
A more general definition abstracting from Hilbert\nspaces and based on the properties of the quantities $p$ and $\\theta$\nis left for future work.\n\nLet \\mbox{${\\cal H}$}\\ be a Hilbert space on the complex field, \nand $X$ be the set of all one-dimensional subspaces of \\mbox{${\\cal H}$}.\nWith any triple \\mbox{$y, z \\in X$},\n\\mbox{$r \\in [0 , 1]$} such that \\mbox{$y \\not \\perp z$}, we can associate\nthe superposition \\mbox{$r y \\: + \\: (1 - r) z$}.\nA function \\mbox{$f : X_{1} \\longrightarrow X_{2}$} between two such sets\nof one-dimensional subspaces $X_{1}$ and $X_{2}$ preserves superpositions\niff for any \\mbox{$y, z \\in X_{1}$}, such that \n\\mbox{$y \\not \\perp z$} and for any\n\\mbox{$r \\in [0 , 1]$} the superposition, in $X_{2}$, \n\\mbox{$r f(y) \\, + \\, (1 - r) f(z)$} is defined,\ni.e., \\mbox{$f(y) \\not \\perp f(z)$} and is equal to\n\\mbox{$f(r y \\, + \\, (1 - r) z)$}.\n\nNote that if \\mbox{$f : X_{1} \\rightarrow X_{2}$} preserves superpositions \nand \\mbox{$x \\not \\perp y$} then \n\\mbox{$f(x) \\not \\perp f(y)$} since the superpositions\n\\mbox{$r f(x) \\, + \\, (1 - r) f(y)$} must be defined.\n \nWe shall now present some preliminary results concerning mappings that preserve\nsuperpositions. First, note that if $\\mbox{${\\cal H}$}_{2}$ is a one-dimensional \nHilbert space, then $X_{2}$ contains one element only and, for any $X_{1}$, \nthe unique mapping \\mbox{$X_{1} \\rightarrow X_{2}$} preserves superpositions.\nSuch a mapping does not preserve $p$ or $\\theta$.\n\nA natural way to obtain a mapping\n\\mbox{$f : X_{1} \\rightarrow X_{2}$} is to start from a \n{\\em linear} map \\mbox{$m : \\mbox{${\\cal H}$}_{1} \\rightarrow \\mbox{${\\cal H}$}_{2}$}.\nSuch a map $m$ associates, with every one-dimensional subspace\nof $\\mbox{${\\cal H}$}_{1}$, i.e., every member of $X_{1}$, \na subspace of $\\mbox{${\\cal H}$}_{2}$ that is either one-dimensional or zero-dimensional.\nAny injective, i.e., left-invertible, linear map $m$ defines an application\n\\mbox{$\\overline{m} : X_{1} \\rightarrow X_{2}$} defined by:\n$\\overline{m}(x)$ is the image $m(x)$ of the subspace $x$.\n\\begin{definition} \\label{def:regular}\nA mapping obtained from an injective linear mapping between \nHilbert spaces in the way described just above will be called {\\em regular}.\nIf such a map \\mbox{$m : \\mbox{${\\cal H}$}_{1} \\rightarrow \\mbox{${\\cal H}$}_{2}$} is a linear isometry, \ni.e., a unitary map of $\\mbox{${\\cal H}$}_{1}$ onto its image, we shall say that \nthe mapping $\\overline{m}$ is an isometry.\n\\end{definition}\nNote that the mappings preserving superpositions described just above \nthat map into a singleton are not regular unless $\\mbox{${\\cal H}$}_{1}$ is also of \ndimension one.\nNote also that if \\mbox{$m : \\mbox{${\\cal H}$}_{1} \\rightarrow \\mbox{${\\cal H}$}_{2}$} is an\ninjective linear map, then, for any complex number $c$ different from zero,\nthe map $c \\, m$ is an injective linear map \n\\mbox{$\\mbox{${\\cal H}$}_{1} \\rightarrow \\mbox{${\\cal H}$}_{2}$}\nand that \\mbox{$\\overline{c \\, m} = \\overline{m} : X_{1} \\rightarrow X_{2}$}.\n\nWe shall now characterize the regular mappings that preserve\nsuperpositions. First, a well-known result from the theory of Hilbert spaces.\n\\begin{theorem} \\label{the:isometry}\nLet \\mbox{$H_{1}, H_{2}$} be Hilbert spaces. 
If \n\\mbox{$f : H_{1} \\rightarrow H_{2}$} is a linear isometry, i.e., \n\\mbox{$\\parallel f(\\vec{u}) \\parallel =$}\n\\mbox{$\\parallel \\vec{u} \\parallel$} for every \\mbox{$\\vec{u} \\in \\mbox{${\\cal H}$}_{1}$}\nthen it preserves inner products:\n\\mbox{$\\langle f(\\vec{u}) \\, , \\, f(\\vec{v}) \\rangle =$}\n\\mbox{$\\langle \\vec{u} \\, , \\, \\vec{v} \\rangle$} for every \n\\mbox{$\\vec{u} \\, , \\, \\vec{v} \\in \\mbox{${\\cal H}$}_{1}$}.\n\\end{theorem}\n\nWe now move to prove that if $\\overline{m}$ is any regular mapping that\npreserves superpositions, then $\\overline{m}$ is an isometry.\n\\begin{lemma} \\label{le:regular}\nLet $\\mbox{${\\cal H}$}_{1}$ and $\\mbox{${\\cal H}$}_{2}$ be Hilbert spaces and let $X_{1}$ and $X_{2}$ \nbe the one-dimensional subspaces of $\\mbox{${\\cal H}$}_{1}$ and $\\mbox{${\\cal H}$}_{2}$ respectively.\nAssume \\mbox{$m : \\mbox{${\\cal H}$}_{1} \\rightarrow \\mbox{${\\cal H}$}_{2}$} is an injective linear mapping \nand that \\mbox{$\\overline{m} : X_{1} \\rightarrow X_{2}$} \npreserves superpositions.\nThen there is a strictly positive real constant $c$ such that, for every\n\\mbox{$\\vec{u} \\in \\mbox{${\\cal H}$}_{1}$}, one has \\mbox{$\\parallel m(\\vec{u}) \\parallel =$}\n\\mbox{$c \\parallel \\vec{u} \\parallel$}, and $\\overline{m}$ is an isometry.\n\\end{lemma}\n\\begin{proof}\nNotice first that, if \\mbox{$\\parallel m(\\vec{u}) \\parallel =$}\n\\mbox{$c \\parallel \\vec{u} \\parallel$} for every \\mbox{$\\vec{u} \\in \\mbox{${\\cal H}$}_{1}$},\nthen, if we define \\mbox{$n = m \/ c$} the mapping $n$ is a linear isometry and \none has \\mbox{$\\overline{m} = \\overline{n}$}, proving that $\\overline{m}$ \nis an isometry.\n \nLet $m$ be linear and assume $\\overline{m}$ preserves superpositions.\nLet \\mbox{$x, y \\in X_{1}$} be one-dimensional subspaces of $\\mbox{${\\cal H}$}_{1}$.\nIt is enough to show that there are unit vectors \\mbox{$\\vec{u}, \\vec{v}$} \nin $x$ and $y$ respectively such that \n\\mbox{$\\parallel m(\\vec{u}) \\parallel =$}\n\\mbox{$\\parallel m(\\vec{v}) \\parallel$}. \n\nIf \\mbox{$x = y$} the result follows from the linearity of $m$.\nWe may therefore assume that \\mbox{$x \\neq y$}.\n\nSuppose, first, that \\mbox{$x \\not \\perp y$}\nand let \\mbox{$r \\in ]0, 1[$}.\nThere are unit vectors \\mbox{$\\vec{u} \\in x$}, \\mbox{$\\vec{v} \\in y$},\n\\mbox{$\\vec{t} \\in \\overline{m}(x)$} and \\mbox{$\\vec{w} \\in \\overline{m}(y)$} \nsuch that\n\\mbox{$\\langle \\vec{u} \\, , \\, \\vec{v} \\rangle > 0$} and \n\\mbox{$\\langle \\vec{t} \\, , \\, \\vec{w} \\rangle > 0$}.\nNote that \\mbox{$\\overline{m}(x) \\neq \\overline{m}(y)$} since $m$ is injective.\nThe vector \\mbox{$\\sqrt{r} \\, \\vec{u} + \\sqrt{1 - r} \\, \\vec{v}$} \nis a vector of\nthe superposition \\mbox{$r x \\, + \\, (1 -r) y$}.\nThe vector \\mbox{$\\sqrt{r} \\, \\vec{t} + \\sqrt{1 - r} \\, \\vec{w}$} \nis a vector of\nthe superposition \\mbox{$r m(x) \\, + \\, (1 -r) m(y) =$}\n\\mbox{$\\overline{m}(r x \\, + \\, (1 - r) y)$} since $\\overline{m}$ preserves \nsuperpositions.\nSince $m$ is linear the vector \n\\mbox{$\\sqrt{r} \\, m(\\vec{u}) + \\sqrt{1 - r} \\, m(\\vec{v})$} is a vector of \n\\mbox{$\\overline{m}(r x \\, + \\, (1 - r) y)$}.\nWe conclude that both vectors\n\\mbox{$\\sqrt{r} \\, \\vec{t} + \\sqrt{1 - r} \\, \\vec{w}$} and\n\\mbox{$\\sqrt{r} \\, m(\\vec{u}) + \\sqrt{1 - r} \\, m(\\vec{v})$}\nare members of the same one-dimensional subspace. 
\nThis implies that \\mbox{$m(\\vec{u}) = d \\vec{t}$} and\n\\mbox{$m(\\vec{v}) = d \\vec{w}$} for some complex number $d$ and\n\\mbox{$\\parallel m(\\vec{u}) \\parallel =$} \n\\mbox{$\\parallel m(\\vec{v}) \\parallel =$}\n\\mbox{$\\mid d \\mid$}.\n\nLet us now assume that \\mbox{$x \\perp y$}. We can find some \n\\mbox{$z \\in X_{1}$} such that \\mbox{$z \\neq x$}, \\mbox{$z \\neq y$},\n\\mbox{$coplanar(z, x, y)$}. Since \\mbox{$z \\not \\perp x$}, by the above\nwe can find unit vectors \\mbox{$\\vec{u} , \\vec{w}$} in $x$ and $z$ respectively\nsuch that \\mbox{$\\parallel m(\\vec{u}) \\parallel =$}\n\\mbox{$\\parallel m(\\vec{w}) \\parallel$}. \nSimilarly, we can find unit vectors \n\\mbox{$\\vec{v} , \\vec{w'}$} in $y$ and $z$ respectively\nsuch that \\mbox{$\\parallel m(\\vec{v}) \\parallel =$}\n\\mbox{$\\parallel m(\\vec{w'}) \\parallel$}. \nBut \\mbox{$\\parallel m(\\vec{w'}) \\parallel =$}\n\\mbox{$\\parallel m(\\vec{w}) \\parallel$}, and therefore\n\\mbox{$\\parallel m(\\vec{u}) \\parallel =$}\n\\mbox{$\\parallel m(\\vec{v}) \\parallel$} in this case as well.\n\\end{proof}\n\nWe now show that any isometry preserves superpositions.\n\\begin{lemma} \\label{le:unit_superp}\nLet \\mbox{$m : \\mbox{${\\cal H}$}_{1} \\rightarrow \\mbox{${\\cal H}$}_{2}$} be a linear isometry. \nThen $\\overline{m}$ preserves $p$, $\\theta$ and superpositions.\n\\end{lemma}\n\\begin{proof}\nBy Theorem~\\ref{the:isometry}, $m$ preserves inner products and therefore \npreserves orthogonality, $p$ and $\\theta$.\n\nAssume now that \\mbox{$x \\not \\perp y$} and\n\\mbox{$z = r x + (1 - r) y$}.\nWe have\n\\mbox{$\\overline{m}(x) \\not \\perp \\overline{m}(y)$} and therefore\nthe superposition \\mbox{$r \\overline{m}(x) + (1 - r) \\overline{m}(y)$}\nis defined.\n\nIf \\mbox{$\\vec{u} , \\vec{v}$} are unit vectors of $x$ and $y$ respectively,\nsuch that \\mbox{$\\langle \\vec{u} \\, , \\, \\vec{v} \\rangle > 0$} then\n\\mbox{$m(\\vec{u}) , m(\\vec{v})$} are unit vectors of \n\\mbox{$\\overline{m}(x) , \\overline{m}(y)$} respectively such that\n\\mbox{$\\langle m(\\vec{u}) \\, , \\, m(\\vec{v}) \\rangle > 0$} and therefore\n\\mbox{$r \\overline{m}(x) + (1 - r) \\overline{m}(y)$} is the one-dimensional\nsubspace generated by\n\\mbox{$\\sqrt{r} \\, m(\\vec{u}) + \\sqrt{1 - r} \\, m(\\vec{v}) =$}\n\\mbox{$m(\\sqrt{r} \\, \\vec{u} + \\sqrt{1 - r} \\, \\vec{v})$} which is\n\\mbox{$\\overline{m}(r x \\, + \\, (1 - r) y)$}.\n\\end{proof}\n\nWe can now characterize regular mappings that preserve superpositions. \n\\begin{theorem} \\label{the:char_morph}\nLet \\mbox{$m : \\mbox{${\\cal H}$}_{1} \\rightarrow \\mbox{${\\cal H}$}_{2}$} be any linear injective mapping.\nThe function \\mbox{$\\overline{m} :$}\n\\mbox{$X_{1} \\rightarrow X_{2}$} preserves superpositions \niff it is an isometry.\n\\end{theorem}\n\\begin{proof}\nThe {\\em if} part is Lemma~\\ref{le:unit_superp}.\nThe {\\em only if} part is Lemma~\\ref{le:regular}.\n\\end{proof}\n\n\\section{Conclusion and Future Work} \\label{sec:conclusion}\nWe have shown that the properties of superpositions are governed by two\ngeometrical quantities $p$ and $\\theta$ defined, respectively, for pairs\nand triples of one-dimensional subspaces in a Hilbert space, \nthus moving forward\nJohn von Neumann's program of focusing on subspaces and not on vectors.\n\nThe most pressing task is probably now to provide an abstract definition of\nstructures admitting a superposition operation, \ngeneralizing those structures provided by Hilbert spaces. 
\n\nA quantic system composed of two sub-systems is represented by the tensor\nproduct of the Hilbert spaces representing the two sub-systems.\nProduct states of the form $x_{1} \\otimes x_{2}$ are elements of this tensor\nproduct. On such product states, the quantities $p$ and $\\theta$ \nare easily analyzed:\nwe have \n\\[\np(x_{1} \\otimes x_{2}, y_{1} \\otimes y_{2}) =\np(x_{1}, y_{1}) p(x_{2}, y_{2})\n\\]\nand\n\\[\n\\theta(x_{1} \\otimes x_{2}, y_{1} \\otimes y_{2}, z_{1} \\otimes z_{2}) =\n\\theta(x_{1}, y_{1}, z_{1}) + \\theta(x_{2}, y_{2}, z_{2}).\n\\]\nThe tensor product can be characterized as the closure of the set of product \nstates under superpositions (in our sense) and the operation of taking \nthe state orthogonal to a given state in a given two-dimensional plane. \n\nExtending this definition to superpositions of product states in accordance\nwith the properties of $p$ and $\\theta$ on superpositions provides a \nsuperposition structure that is a original presentation of the tensor product\nand may be found useful to study symmetry properties. \n\n\\section{Acknowledgements} \\label{sec:ack}\nI am most grateful to Kurt Engesser and Dov Gabbay for extremely fruitful \ndiscussions during the elaboration of this paper. I thank Dorit Aharonov and\nJean-Marc L\\'{e}vy-Leblond for their interest and help.\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nDistributional representations of words have become an indispensable asset in natural language processing (NLP) research due to its wide application in downstream tasks such as parsing \\cite{bansal2014tailoring}, named entity recognition \\cite{lample2016neural}, and sentiment analysis \\cite{tang2014learning}. Of these, ``neural'' word vectors such as Word2Vec \\cite{Mikolov2013}, GloVe \\cite{Pennington2014}, and Paragram \\cite{wieting2015paraphrase} are amongst the most prevalently used and on which we focus in this article. \n\nThere has been a recent thrust in the study of word vector postprocessing methods \\cite{Faruqui2015,Fried2015,Mrksic2016,Mrksic2017,Shiue2017,Mu2018,Liu2019,Tang2019}. These methods directly operate on word embeddings and effectively enhance their linguistic regularities in light-weight fashions. Nonetheless, existing postprocessing methods usually come with a few limitations. For example, some rely on external linguistic resources such as English WordNet \\cite{Faruqui2015,Fried2015,Mrksic2016,Mrksic2017,Shiue2017}, leaving out-of-database word vectors untouched. Others use heuristic methods to flatten the spectrum of word vector embedding matrices \\cite{Mu2018,Liu2019,Wang2019,Tang2019}. Although being effective, these spectral flattening algorithms are primarily motivated by experimental observations but lack of direct interpretability.\n\nIn this paper, we propose a novel word vector postprocessing approach that addresses these limitations. Under a \\emph{causal inference} framework, the proposed method meets the joint desiderata of (1) \\emph{theoretical interpretability}, (2) \\emph{empirical effectiveness}, and (3) \\emph{computational efficiency}. Concretely, the postprocessing pipeline is realized by Half-Sibling Regression (HSR) \\cite{Scholkopf2016}, a method for identifying and removing confounding noise of word vectors. 
Using a simple linear regression method, we obtain results that are either on-par or outperform state-of-the-art results on a wide battery of lexical-level evaluation tasks and downstream sentiment analysis tasks. More specifically, our contributions are as follows:\n\n \\begin{itemize}\n \n \\item We formulate the word vector postprocessing task as a confounding noise identification problem under a putative causal graph. This formulation brings causal interpretability and theoretical support to our postprocessing algorithm.\n \n\\item The proposed method is data-thrifty and computationally simple. Unlike many existing methods, it does not require external linguistic resources (e.g., synonym relationships); besides, the method can be implemented easily via simple linear regressions.\n\n\\item The proposed postprocessing method yields highly competitive empirical results. For example, while achieving the best performance on 20 semantic textual similarity tasks, on average, our proposed method brings 4.71\\%, 7.54\\%, and 6.54\\% improvement respectively compared to the previously best results, and it achieves 7.13\\%, 22.06\\%, and 9.83\\% improvement compared to the original word embedding when testing on Word2Vec, GloVe, and Paragram.\n\\end{itemize} \n \nThe rest of the paper is organized as follows. We first briefly review prior work on word vector postprocessing. Next, we introduce Half-Sibling Regression as a causal inference framework to remove confounding noise; we then proceed to explain how to apply Half-Sibling Regression to remove noise from word embeddings. Then, we showcase the effectiveness of the Half-Sibling Ridge Regression model on word similarity tasks, semantic textual similarity tasks, and downstream sentiment analysis tasks using three different pre-trained English word embeddings. Finally, we conduct statistical significance tests on all experimental results\\footnote{Our codes are available at \\url{https:\/\/github.com\/KunkunYang\/denoiseHSR-AAAI}}.\n \n\\section{Prior Work} \\label{sec:prior}\n In this section, we review prior art for word vector postprocessing. Modern word vector postprocessing methods can be broadly divided into two streams: (1) lexical and (2) spatial approaches.\n\n\\paragraph{The Lexical Approach} The lexical approach uses lexical relational resources to enhance the quality of word vectors. These lexical relational resources specify semantic relationships of words such as synonym and antonym relationships. For example, \\citet{Faruqui2015} inject synonym lexical information into pre-trained collections of word vectors. \\citet{Mrksic2016} generalize this approach and insert both antonym and synonymy constraints into word vectors. \\citet{Mrksic2017} use constraints from mono- and cross-lingual lexical resources to fine-tune word vectors. \\citet{Fried2015} and \\citet{Shiue2017} propose to use hierarchical semantic relations such as hypernym semantics to enrich word vectors. To make sure that word vectors satisfy the lexical relational constraints, supervised machine learning algorithms are used.\n\n\\paragraph{The Spatial Approach} The spatial approach differs from the lexical approach in that it does not require external knowledge bases. The general principle of this approach is to enforce word vectors to be more ``isotropic'', i.e., more spread out in space. This goal is usually achieved by flattening the spectrum of word vectors. 
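In its most common form, this recipe amounts to centring the embedding matrix and then removing (or shrinking) a few of its leading principal components. The short Python sketch below shows only this generic operation; it is our own illustrative rendering of the family of methods discussed next, not a reimplementation of any particular one, and the function name and the choice $d = 2$ are ours.
\\begin{verbatim}
import numpy as np

def flatten_spectrum(V, d=2):
    # V: N x n matrix whose rows are word vectors
    V = V - V.mean(axis=0, keepdims=True)        # centre the embeddings
    _, _, pcs = np.linalg.svd(V, full_matrices=False)
    top = pcs[:d]                                # d leading principal directions (d x n)
    return V - V @ top.T @ top                   # project them out of every vector
\\end{verbatim}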
For example, \\citet{Mu2018} propose All-But-The-Top (ABTT) method which removes leading principal components of word vectors; \\citet{Wang2019} extend this idea by softly shrinking principal components of word embedding matrix using a variance normalization method; \\citet{Liu2019} propose the Conceptor Negation (CN) method, which employs regularized identity maps to filter away high-variance latent features of word vectors; more recently, \\citet{Tang2019} develop SearchBeta (SB) that uses a centralized kernel alignment method to smooth the spectrum of word vectors.\n\n\n\\section{The Causal Inference Approach for Word Vector Postprocessing}\n\nThe lexical and spatial approaches introduced in the previous section have empirically proven to be effective. Nonetheless, they also suffer from a few limitations. A shortcoming of the lexical approach is that it is unable to postprocess out-of-database word vectors. Indeed, lexical relational resources like synonym-antonym relationships are informative for word meaning, in particular word meaning of \\emph{adjectives}. However, many non-adjective words do not have abundant lexical connections with other words, and for this reason, they are not well-represented in lexical-relationship databases. For instance, most nouns (e.g., \\texttt{car}) and verbs (e.g., \\texttt{write}) have few synonyms and even fewer antonyms, making the lexical postprocessing methods inapplicable to these words. The spatial approach favorably avoids this problem by lifting the requirement of lexical relational resources. Yet, one major downside of the spatial approach is its lack of direct interpretability. For example, many spatial approaches propose to completely or softly remove a few leading principal components (PCs) of word vectors. However, it is rather unclear what exactly has been encoded by these leading PCs other than the empirical finding that these leading PCs are somehow correlated with word frequencies \\cite{Mu2018}.\n\nIn this paper, we go beyond the lexical and spatial schemes and introduce a novel \\emph{causal inference approach} for postprocessing word vectors. The method does not seek to infer the causal structure of words or word vectors; instead, in line with \\citet{Scholkopf2012On} and \\citet{Scholkopf2016}, it incorporates causal beliefs and assumptions for empirical objectives -- postprocessing off-the-shelf word vectors in our case. Concretely, this is achieved by identifying and removing confounding noise of word vectors using Half-Sibling Regression (HSR) method \\cite{Scholkopf2016}. Here we first briefly introduce HSR and then explain how to apply HSR to word vectors.\n\n\n\\subsection{Half-Sibling Regression}\n\nIn the passing, we introduce HSR mainly based on the presentation of \\citet{Scholkopf2016}. Consider a hypothetical causal graph, shown in Figure \\ref{fig:causalGraph}, where each vertex labeled by $Q$, $N$, $Y$, and $X$ are random variables defined on an underlying probability space and each directed edge indicates the probabilistic dependency between two random variables. We are mostly interested in quantities taken by the random variable $Q$. Unfortunately, it is not possible to directly observe these quantities. Instead, we are given only the \\emph{corrupted} observations of $Q$, taken value by the random variable $Y$. That is, intuitively $Y$ can be seen as a noisy, lossy version of $Q$. 
A natural assumption of $Y$ is that it statistically depends on its ``clean'' version $Q$ as well as some noise, whose values are taken by some unobservable random variable $N$ that encodes the noise source. We further assume that the noise source $N$ affects another random variable, $X$, whose quantities are directly observable. Importantly, we require $X$ to be independent of $Q$.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width = 0.4\\textwidth]{3106_causalGraph.png}\n\\caption{The causal graph for HSR (adapted from \\citet{Scholkopf2016}). Each vertex labeled by $Q$, $N$, $Y$, and $X$ is a random variable defined on an underlying probability space. Directed edges connecting random variables describe probabilistic dependency between random variables.}\n\\label{fig:causalGraph}\n\\end{figure}\n\n\nRecall that we are mostly interested in the unobservable random variable $Q$. Hence the question we aim to answer is: How to reconstruct the quantities taken by $Q$ by leveraging the underlying statistical dependency structure in Figure \\ref{fig:causalGraph}? HSR provides a simple yet effective solution to this question -- It estimates $Q$ via its approximation $\\hat{Q}$, which is defined as\n\n\\begin{equation} \\label{eq:HSR}\n\\hat{Q} \\coloneqq Y - \\EE [Y \\mid X].\n\\end{equation}\n\nThe HSR Equation \\ref{eq:HSR} can be straightforwardly interpreted as follows. Recall that $X$ is independent of $Q$, and therefore $X$ is \\emph{not} predictive to $Q$ or $Q$'s influence on $Y$. However, $X$ is predictive to $Y$, because $X$ and $Y$ are both influenced by the \\emph{same} noise source $N$. When predicting $Y$ based on $X$ realized by the term $\\EE [Y \\mid X]$, since those signals of $Y$ coming from $Q$ cannot be predicted by $X$, only those noise contained in $Y$ coming from $N$ could be captured. To reconstruct $Q$ from $Y$, we can therefore remove the captured noise $\\EE [Y \\mid X]$ from $Y$, resulting in the reconstruction $\\hat{Q} \\coloneqq Y - \\EE [Y \\mid X]$, which is Equation \\ref{eq:HSR}. This procedure is referred to as Half-Sibling Regression because $X$ and $Y$ share one parent vertex $N$. $Y$ is regressed upon its half-sibling $X$ to capture the components of $Y$ inherited from their shared parent vertex $N$.\n\nHSR enjoys a few appealing theoretical properties. In particular, it is possible to show that $\\hat{Q}$ reconstructs $Q$ (up to its expectation $\\EE[Q]$) at least as good as the mean-subtraction $Y - \\EE [Y]$ does. We refer the readers to \\citet{Scholkopf2016} for detailed theoretical discussions.\n\n\n\\subsection{Applying HSR to De-Noise Word Vectors}\n\nWe now explain how we apply HSR to remove noise from word vectors. Before getting into the details, we first recall some linguistic basics of words, which are the key enablers of our approach. Semantically, words can be divided into two basic classes: (1) content or open-class words and (2) function or closed-class words (also known as stop words). Content words are those that have meaning or semantic value, such as nouns, verbs, adjectives, and adverbs. Function words have little lexical meaning; rather, they mainly exist to explain grammatical or structural relationships with other words. 
In English, examples of function words include \\texttt{a}, \\texttt{to}, \\texttt{for}, \\texttt{of}, \\texttt{the}, and more.\n\n\n\\begin{algorithm}[ht]\n\\SetKwInOut{Input}{Input}\n\\SetKwInOut{Output}{Output}\n\\Input{(i) $\\{v_i^Y\\}_{i = 1}^K$: a collection of $K$ content-word vectors, each of dimension $n$; $\\mathbf{V}^Y$ is a $n \\times K$ matrix whose columns are from $\\{v_i^Y\\}_{i = 1}^K$. (ii) $\\{v_i^X\\}_{i = 1}^P$: a collection of $P$ function-word vectors, each of dimension $n$; $\\mathbf{V}^X$ is a $n \\times P$ matrix whose columns are from $\\{v_i^X\\}_{i = 1}^P$. (iii) Regression constants $\\alpha_1, \\alpha_2 > 0$.} \n\\textbf{Postprocess content-word vectors}: \\newline\n Step 1.1: \\emph{Identify noise contained in content-word vectors}: Estimate a weight matrix $\\textbf{W}_1$ such that\n\\[ \\mathbf{V}^Y \\approx \\mathbf{V}^X \\mathbf{W}_1, \\]\n with ridge regression\n\\[ \\mathbf{W}_1 = \\left ( (\\mathbf{V}^X)^\\top \\mathbf{V}^X + \\alpha_1 \\mathbf{I} \\right )^{-1} (\\mathbf{V}^X)^\\top \\mathbf{V}^Y.\n\\]\n Step 1.2: \\emph{Remove noise contained in content-word vectors}: \n\\[ \\hat{\\mathbf{V}}^Y \\coloneqq \\mathbf{V}^Y - \\mathbf{V}^X \\mathbf{W}_1. \\] \\\\\n\n\\textbf{Postprocess stop-word vectors}: \\newline\n Step 2.1: \\emph{Identify noise contained in stop-word vectors}: Estimate a weight matrix $\\textbf{W}_2$ such that\n\\[ \\mathbf{V}^X \\approx \\mathbf{V}^Y \\mathbf{W}_2, \\]\n with ridge regression\n\\[ \\mathbf{W}_2 = \\left ((\\mathbf{V}^Y)^\\top \\mathbf{V}^Y + \\alpha_2 \\mathbf{I} \\right )^{-1} (\\mathbf{V}^Y)^\\top \\mathbf{V}^X.\n\\]\n Step 2.2: \\emph{Remove noise contained in stop-word vectors}: \n\\[ \\hat{\\mathbf{V}}^X \\coloneqq \\mathbf{V}^X - \\mathbf{V}^Y \\mathbf{W}_2. \\] \\\\\n\\Output{(i) HSR postprocessed content-word vectors $\\{\\hat{v}_i^Y\\}$, which are columns of $\\hat{\\mathbf{V}}^Y$; (ii) HSR postprocessed stop-word vectors $\\{\\hat{v}_i^X\\}$, which are columns of $\\hat{\\mathbf{V}}^X$.} \n\\caption{HSR algorithm for word vector postprocessing}\n\\label{alg:hsr}\n\\end{algorithm}\n\n\n\nBased on these basic linguistic facts, we posit that content-word vectors and function-word vectors can be seen as half-siblings as their linguistic properties align well with the HSR foundations. Indeed, since function-word vectors carry little semantic content, they could not be predictive to clean content-word vectors. Additionally, since content-word vectors and function-word vectors are induced from some shared training corpora, we hypothesize that they are subjected to the same noise profile. Using HSR language of Figure \\ref{fig:causalGraph}, this means we can model the off-the-shelf stop-word vectors with $X$, off-the-shelf content-word vectors with $Y$, and ``clean'' yet unseen content-word vectors with $Q$. Under the HSR framework, when we regress content-word vectors upon function-word vectors, only the noise of the former is captured. Once such noises are identified, they can be directly subtracted, so that the clean content-word vectors will be reconstructed.\n\n\nThe above described procedure can be mathematically realized as follows. Let $\\{v^X_i\\}_{i=1}^P$ be a collection of function-word vectors and let $\\{v^Y_i\\}_{i=1}^K$ be a collection of content-word vectors. To postprocess content-word vectors $\\{v^Y_i\\}_{i=1}^K$, we run a simple two-step algorithm. 
In the first step, we estimate parameters of a linear multiple-output model \\cite[Section 3.2.4]{Hastie2001}, in which we use model inputs $v^X_1, \\cdots, v^X_P$ to predict model outputs $v^Y_1, \\cdots, v^Y_K$. This amounts to estimate each $w_{ij}$ such that $v^Y_j \\approx \\sum_{i = 1}^P w_{ij} v^X_i$ for each $j \\in \\{1, \\cdots, K\\}$. In the second step, we remove the regression result from the target of the regression. That is, we let $\\hat{v}^Y_j \\coloneqq v_j^Y - \\sum_{i = 1}^P w_{ij} v^X_i$ be the postprocessed content-word vector.\n\nSo far, we have described how to postprocess content-word vectors. To postprocess function-word vectors, we can employ a similar pipeline with the predictor and target flipped. That is, to identify confounding noise contained in stop-word vectors, we use off-the-shelf content-word vectors as features to predict off-the-shelf stop-word vectors. The full algorithm is summarized in Algorithm \\ref{alg:hsr}.\n\nWe provide a few remarks on the practical implementations and further generalizations of Algorithm \\ref{alg:hsr}. Our first remark goes to how to identify the function and content words in practice. Throughout our experiments, to identify function words, we use the stop word list provided by Natural Language Toolkit (NLTK) package\\footnote{\\url{https:\/\/www.nltk.org\/}}, which is a list of 179 words. We regard words outside of this list to be content words. A small amount of stop words works efficiently when postprocessing tens of thousands of content-word vectors because in this case, we only have a handful of features. However, when postprocessing stop-word vectors, it is cumbersome because the number of content words as features are too large to be efficiently implemented. For this reason, in practice, we only use a small sample of commonly used content-word vectors as features for postprocessing stop-word vectors. Specifically, borrowing the word list provided by \\citet{Arora2017}, we use the most frequent 1000 content words as features in Step 2.1 and Step 2.2 of Algorithm \\ref{alg:hsr}.\n\nMoreover, while our framework postprocesses both content and function words, we have tried only postprocessing content words and leaving function words unchanged. We discover that the experimental results are still better than the baseline spatial approaches but worse than postprocessing both content and function words. The reason might be that stop words play non-trivial roles in various NLP tasks. As all baseline spatial approaches postprocess both content and function words, we follow this setting.\n\nFinally, we remark that the linear model used in Algorithm \\ref{alg:hsr} can be straightforwardly generalized to non-linear models. For this, we have formulated and tested Multilayer Perceptrons (MLPs) as extensions to the linear model used in Algorithm \\ref{alg:hsr}. 
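For concreteness, the two ridge-regression steps of Algorithm~\\ref{alg:hsr} can be written in a few lines of NumPy. The sketch below mirrors the notation of the algorithm ($\\mathbf{V}^X$ of size $n \\times P$, $\\mathbf{V}^Y$ of size $n \\times K$); it is an illustrative rendering rather than the exact code used in our experiments, and, as noted above, in practice only the most frequent content words are used as features in the second step.
\\begin{verbatim}
import numpy as np

def hsr_postprocess(V_X, V_Y, alpha1=50.0, alpha2=50.0):
    # V_X: n x P matrix of stop-word vectors (columns are words)
    # V_Y: n x K matrix of content-word vectors (columns are words)
    P, K = V_X.shape[1], V_Y.shape[1]
    # Step 1.1: ridge regression of content-word vectors on stop-word vectors
    W1 = np.linalg.solve(V_X.T @ V_X + alpha1 * np.eye(P), V_X.T @ V_Y)
    # Step 1.2: subtract the captured (noise) component
    V_Y_hat = V_Y - V_X @ W1
    # Steps 2.1-2.2: same construction with predictors and targets swapped;
    # in practice V_Y is restricted here to the most frequent content words
    W2 = np.linalg.solve(V_Y.T @ V_Y + alpha2 * np.eye(K), V_Y.T @ V_X)
    V_X_hat = V_X - V_Y @ W2
    return V_Y_hat, V_X_hat
\\end{verbatim}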
The detailed MLP version of Algorithm \\ref{alg:hsr} is postponed to the appendix.\n\n\n\\begin{table*}[t]\n \\centering\n \\caption{Spearman's rank correlation coefficient of seven word similarity tasks}\n\\scalebox{0.72}{\n\n \\begin{tabular}{lrrrrrrrrrrrrrrr}\n \\toprule\n \\multirow{2}[0]{*}{} & \\multicolumn{5}{c}{\\textbf{WORD2VEC}} & \\multicolumn{5}{c}{\\textbf{GLOVE}} & \\multicolumn{5}{c}{\\textbf{PARAGRAM}} \\\\\n \\cmidrule(r){2-6} \\cmidrule(r){7-11} \\cmidrule(r){12-16}\n & \\multicolumn{1}{c}{\\textbf{Orig.}} & \\multicolumn{1}{c}{\\textbf{ABTT}} & \\multicolumn{1}{c}{\\textbf{CN}} & \\multicolumn{1}{c}{\\textbf{SB}} & \\multicolumn{1}{c}{\\textbf{HSR-RR}} & \\multicolumn{1}{c}{\\textbf{Orig.}} & \\multicolumn{1}{c}{\\textbf{ABTT}} & \\multicolumn{1}{c}{\\textbf{CN}} & \\multicolumn{1}{c}{\\textbf{SB}} & \\multicolumn{1}{c}{\\textbf{HSR-RR}} & \\multicolumn{1}{c}{\\textbf{Orig.}} & \\multicolumn{1}{c}{\\textbf{ABTT}} & \\multicolumn{1}{c}{\\textbf{CN}} & \\multicolumn{1}{c}{\\textbf{SB}} & \\multicolumn{1}{c}{\\textbf{HSR-RR}} \\\\\n \\toprule\n \\textbf{RG65} & 0.7494 & \\underline{0.7869} & \\underline{\\textbf{0.8041}} & \\underline{0.7964} & 0.7569 & 0.7603 & 0.7648 & \\underline{\\textbf{0.7913}} & \\underline{0.7850} & \\underline{0.7694} & 0.7630 & \\underline{0.7683} & 0.7594 & \\underline{\\textbf{0.7898}} & \\underline{0.7760} \\\\\n \\textbf{WordSim-353} & \\underline{0.6999} & 0.6929 & \\underline{0.6992} & 0.6856 & \\underline{\\textbf{0.7059}} & 0.7379 & \\underline{0.7668} & \\underline{0.7886} & 0.7115 & \\underline{\\textbf{0.7887}} & 0.7302 & \\underline{\\textbf{0.7386}} & \\underline{0.7321} & 0.7196 & \\underline{0.7338} \\\\\n \\textbf{RW} & 0.5997 & 0.5984 & \\underline{\\textbf{0.6036}} & \\underline{0.5998} & \\underline{0.6033} & 0.5101 & \\underline{0.5716} & \\underline{\\textbf{0.5898}} & 0.4879 & \\underline{0.5580} & 0.5972 & \\underline{\\textbf{0.6038}} & \\underline{0.6006} & 0.5769 & \\underline{0.6023} \\\\\n \\textbf{MEN} & 0.7706 & \\underline{\\textbf{0.7929}} & \\underline{0.7901} & \\underline{0.7888} & 0.7726 & 0.8013 & \\underline{0.8234} & \\underline{\\textbf{0.8339}} & 0.7853 & \\underline{0.8258} & \\underline{0.7728} & 0.7705 & \\underline{0.7746} & 0.7693 & \\underline{\\textbf{0.7750}} \\\\\n \\textbf{MTurk} & \\underline{0.6831} & 0.6538 & 0.6610 & \\underline{0.6846} & \\underline{\\textbf{0.6854}} & 0.6916 & \\underline{\\textbf{0.7233}} & \\underline{0.7116} & 0.6731 & \\underline{0.7074} & \\underline{0.6300} & 0.6106 & \\underline{0.6251} & 0.6147 & \\underline{\\textbf{0.6319}} \\\\\n \\textbf{SimLex-999} & 0.4427 & 0.4629 & \\underline{\\textbf{0.4728}} & \\underline{0.4702} & \\underline{0.4672} & 0.4076 & \\underline{0.4650} & \\underline{\\textbf{0.4858}} & 0.3985 & \\underline{0.4728} & 0.6847 & \\underline{0.6862} & 0.6854 & \\underline{0.6878} & \\underline{\\textbf{0.6903}} \\\\\n \\textbf{SimVerb-3500} & 0.3659 & 0.3792 & \\underline{0.3868} & \\underline{0.3865} & \\underline{\\textbf{0.3978}} & 0.2842 & \\underline{0.3433} & \\underline{0.3632} & 0.2671 & \\underline{\\textbf{0.3980}} & 0.5411 & \\underline{0.5461} & \\underline{0.5413} & 0.5389 & \\underline{\\textbf{0.5518}} \\\\\n \\bottomrule\n \\end{tabular}\n}%\n \\label{tab:word_sim}%\n\\end{table*}%\n\n\\begin{table*}[htbp]\n \\centering\n \\caption{Pearson correlation coefficient of 20 semantic textual similarity tasks}\n\\scalebox{0.70}{%\n\n \\begin{tabular}{lrrrrrrrrrrrrrrr}\n \\toprule\n \\multirow{2}[0]{*}{} & 
\\multicolumn{5}{c}{\\textbf{WORD2VEC}} & \\multicolumn{5}{c}{\\textbf{GLOVE}} & \\multicolumn{5}{c}{\\textbf{PARAGRAM}} \\\\\n \\cmidrule(r){2-6} \\cmidrule(r){7-11} \\cmidrule(r){12-16}\n & \\multicolumn{1}{c}{\\textbf{Orig.}} & \\multicolumn{1}{c}{\\textbf{ABTT}} & \\multicolumn{1}{c}{\\textbf{CN}} & \\multicolumn{1}{c}{\\textbf{SB}} & \\multicolumn{1}{c}{\\textbf{HSR-RR}} & \\multicolumn{1}{c}{\\textbf{Orig.}} & \\multicolumn{1}{c}{\\textbf{ABTT}} & \\multicolumn{1}{c}{\\textbf{CN}} & \\multicolumn{1}{c}{\\textbf{SB}} & \\multicolumn{1}{c}{\\textbf{HSR-RR}} & \\multicolumn{1}{c}{\\textbf{Orig.}} & \\multicolumn{1}{c}{\\textbf{ABTT}} & \\multicolumn{1}{c}{\\textbf{CN}} & \\multicolumn{1}{c}{\\textbf{SB}} & \\multicolumn{1}{c}{\\textbf{HSR-RR}} \\\\\n \\toprule\n \\textbf{STS-2012-MSRpar} & \\textbf{41.78} & 38.70 & 39.42 & 40.77 & 34.42 & \\textbf{42.06} & 41.41 & 41.27 & 41.15 & 32.49 & 39.32 & 38.84 & 39.84 & 37.72 & \\textbf{41.44} \\\\\n \\textbf{STS-2012-MSRvid} & 76.27 & 75.60 & 75.32 & 74.98 & \\textbf{79.63} & 65.85 & 67.84 & 62.50 & 64.71 & \\textbf{80.03} & 56.34 & 57.65 & 56.78 & 55.55 & \\textbf{62.31} \\\\\n \\textbf{STS-2012-surprise.OnWN} & 70.62 & 70.89 & 70.73 & 69.99 & \\textbf{71.27} & 60.74 & 69.48 & 67.87 & 57.02 & \\textbf{72.24} & 62.60 & 64.61 & 63.21 & 60.68 & \\textbf{67.91} \\\\\n \\textbf{STS-2012-SMTeuroparl} & 31.20 & 35.71 & 35.29 & 33.88 & \\textbf{40.32} & 51.97 & \\textbf{54.36} & 52.58 & 50.06 & 51.60 & 50.64 & 51.64 & 50.63 & 51.34 & \\textbf{51.92} \\\\\n \\textbf{STS-2012-surprise.SMTnews} & \\textbf{51.07} & 46.24 & 47.34 & 47.10 & 50.09 & 46.35 & 48.19 & 47.69 & 45.18 & \\textbf{54.41} & 52.94 & 50.18 & 52.66 & \\textbf{54.16} & 53.87 \\\\\n \\hdashline\n \\textbf{STS-2012} & 54.19 & 53.43 & 53.62 & 53.34 & \\textbf{55.15} & 53.39 & 56.26 & 54.38 & 51.62 & \\textbf{58.15} & 52.37 & 52.58 & 52.62 & 51.89 & \\textbf{55.49} \\\\\n \\toprule\n \\textbf{STS-2013-FNWN} & 39.68 & 43.51 & 43.40 & 42.95 & \\textbf{49.09} & 39.48 & 45.81 & 42.03 & 39.15 & \\textbf{46.47} & 35.79 & 36.05 & 35.93 & 34.35 & \\textbf{38.00} \\\\\n \\textbf{STS-2013-OnWN} & 67.98 & 70.56 & 69.29 & 69.12 & \\textbf{75.57} & 53.75 & 63.86 & 57.45 & 52.36 & \\textbf{74.91} & 48.07 & 48.18 & 48.23 & 48.28 & \\textbf{56.57} \\\\\n \\textbf{STS-2013-headlines} & 63.29 & 63.24 & 63.62 & 63.22 & \\textbf{63.65} & 63.54 & 66.70 & 67.00 & 60.65 & \\textbf{68.56} & 64.43 & 65.13 & 64.69 & 62.99 & \\textbf{66.90} \\\\\n \\hdashline\n \\textbf{STS-2013} & 56.98 & 59.10 & 58.77 & 58.43 & \\textbf{62.77} & 52.26 & 58.79 & 55.49 & 50.72 & \\textbf{63.31} & 49.43 & 49.79 & 49.62 & 48.54 & \\textbf{53.82} \\\\\n \\toprule\n \\textbf{STS-2014-OnWN} & 74.85 & 75.92 & 75.27 & 74.43 & \\textbf{81.40} & 61.91 & 70.93 & 66.43 & 60.36 & \\textbf{81.39} & 60.29 & 61.95 & 60.75 & 59.45 & \\textbf{68.30} \\\\\n \\textbf{STS-2014-deft-forum} & 41.30 & 42.25 & 42.74 & 42.03 & \\textbf{46.73} & 28.82 & 38.90 & 37.57 & 25.91 & \\textbf{45.85} & 35.17 & 37.60 & 35.75 & 33.59 & \\textbf{40.84} \\\\\n \\textbf{STS-2014-deft-news} & 66.76 & 64.87 & 65.45 & 64.97 & \\textbf{67.88} & 63.41 & 68.72 & 69.08 & 61.27 & \\textbf{70.60} & 62.19 & 63.73 & 62.75 & 61.09 & \\textbf{66.66} \\\\\n \\textbf{STS-2014-headlines} & 60.87 & 60.61 & \\textbf{61.09} & 60.66 & 60.93 & 59.28 & 61.34 & 61.71 & 56.25 & \\textbf{64.01} & 60.84 & 60.72 & 60.97 & 60.21 & \\textbf{62.83} \\\\\n \\textbf{STS-2014-tweet-news} & 73.33 & 75.13 & 74.87 & 73.66 & \\textbf{76.00} & 62.43 & 74.62 & \\textbf{75.38} & 58.70 & 75.09 & 69.29 & 72.43 & 
70.14 & 66.75 & \\textbf{75.16} \\\\\n \\textbf{STS-2014-images} & 77.44 & 77.81 & 78.42 & 77.11 & \\textbf{80.55} & 61.89 & 69.40 & 65.81 & 59.03 & \\textbf{78.45} & 53.67 & 58.29 & 54.86 & 51.58 & \\textbf{65.10} \\\\\n \\hdashline\n \\textbf{STS-2014} & 65.76 & 66.10 & 66.31 & 65.48 & \\textbf{68.92} & 56.29 & 63.99 & 62.66 & 53.59 & \\textbf{69.23} & 56.91 & 59.12 & 57.54 & 55.45 & \\textbf{63.15} \\\\\n \\toprule\n \\textbf{STS-2015-answers-forums} & 52.65 & 54.01 & 53.99 & 50.51 & \\textbf{66.77} & 36.86 & 49.58 & 48.62 & 36.76 & \\textbf{65.46} & 38.79 & 41.19 & 39.25 & 38.35 & \\textbf{48.37} \\\\\n \\textbf{STS-2015-answers-students} & 70.82 & 70.92 & 71.65 & 69.74 & \\textbf{72.16} & 62.77 & 69.46 & \\textbf{69.68} & 61.84 & 67.38 & 67.52 & 69.46 & 67.96 & 66.80 & \\textbf{71.98} \\\\\n \\textbf{STS-2015-belief} & 60.11 & 61.91 & 61.62 & 58.10 & \\textbf{77.08} & 44.20 & 61.43 & 59.77 & 41.19 & \\textbf{76.12} & 49.77 & 55.57 & 50.79 & 46.98 & \\textbf{61.32} \\\\\n \\textbf{STS-2015-headlines} & 68.11 & 68.28 & 68.65 & 68.19 & \\textbf{69.02} & 65.42 & 68.90 & 69.20 & 63.25 & \\textbf{71.41} & 67.85 & 68.40 & 68.09 & 66.92 & \\textbf{70.38} \\\\\n \\textbf{STS-2015-images} & 80.07 & 80.18 & 80.74 & 79.48 & \\textbf{83.08} & 69.14 & 73.53 & 71.43 & 67.81 & \\textbf{80.58} & 66.55 & 68.29 & 67.08 & 65.55 & \\textbf{73.17} \\\\\n \\hdashline\n \\textbf{STS-2015} & 66.35 & 67.06 & 67.33 & 65.20 & \\textbf{73.62} & 55.68 & 64.58 & 63.74 & 54.17 & \\textbf{72.19} & 58.10 & 60.58 & 58.63 & 56.92 & \\textbf{65.04} \\\\\n \\toprule\n \\textbf{SICK} & 72.25 & \\textbf{72.49} & 72.40 & 72.32 & 72.02 & 66.64 & 68.12 & 66.42 & 66.03 & \\textbf{71.62} & 64.55 & 64.89 & 64.78 & 64.05 & \\textbf{67.07} \\\\\n \\bottomrule\n \\end{tabular}%\n\n }\n \\label{tab:STS}%\n\\end{table*}%\n\n\n\\begin{table*}[htbp]\n \\centering\n \\caption{Five-fold cross-validation accuracy of four sentiment analysis tasks}\n\\scalebox{0.75}{%\n\n \\begin{tabular}{lrrrrrrrrrrrrrrr}\n \\toprule\n & \\multicolumn{5}{c}{\\textbf{WORD2VEC}} & \\multicolumn{5}{c}{\\textbf{GLOVE}} & \\multicolumn{5}{c}{\\textbf{PARAGRAM}} \\\\\n \\cmidrule(r){2-6} \\cmidrule(r){7-11} \\cmidrule(r){12-16}\n & \\multicolumn{1}{l}{\\textbf{Orig.}} & \\multicolumn{1}{l}{\\textbf{CN}} & \\multicolumn{1}{l}{\\textbf{ABTT}} & \\multicolumn{1}{l}{\\textbf{SB}} & \\multicolumn{1}{l}{\\textbf{HSR-RR}} & \\multicolumn{1}{l}{\\textbf{Orig.}} & \\multicolumn{1}{l}{\\textbf{CN}} & \\multicolumn{1}{l}{\\textbf{ABTT}} & \\multicolumn{1}{l}{\\textbf{SB}} & \\multicolumn{1}{l}{\\textbf{HSR-RR}} & \\multicolumn{1}{l}{\\textbf{Orig.}} & \\multicolumn{1}{l}{\\textbf{CN}} & \\multicolumn{1}{l}{\\textbf{ABTT}} & \\multicolumn{1}{l}{\\textbf{SB}} & \\multicolumn{1}{l}{\\textbf{HSR-RR}} \\\\\n \\toprule\n \\textbf{AR} & 0.8375 & 0.8338 & 0.8329 & 0.8302 & \\textbf{0.8377} & 0.8441 & 0.8431 & 0.8444 & 0.8426 & \\textbf{0.8454} & 0.8124 & 0.8129 & 0.8113 & 0.8124 & \\textbf{0.8152} \\\\\n \\textbf{CR} & 0.7800 & 0.7792 & 0.7718 & 0.7726 & \\textbf{0.7824} & \\textbf{0.7829} & 0.7800 & 0.7808 & 0.7819 & 0.7792 & 0.7657 & 0.7649 & 0.7628 & 0.7644 & \\textbf{0.7673} \\\\\n \\textbf{IMDB} & 0.8392 & 0.8369 & 0.8370 & 0.8281 & \\textbf{0.8434} & 0.8491 & 0.8453 & \\textbf{0.8493} & 0.8459 & \\textbf{0.8493} & 0.7957 & 0.7960 & 0.7953 & 0.7938 & \\textbf{0.7999} \\\\\n \\textbf{STS-B} & \\textbf{0.8071} & 0.8062 & 0.8048 & 0.8052 & 0.8056 & 0.8044 & 0.8045 & 0.8049 & 0.8031 & \\textbf{0.8053} & 0.7818 & 0.7819 & 0.7778 & 0.7813 & \\textbf{0.7846} \\\\\n \\bottomrule\n 
\\end{tabular}%\n }\n \\label{tab:sentiment}%\n\\end{table*}%\n\n\\section{Experiments}\n\nWe evaluate the HSR postprocessing algorithm described in Algorithm \\ref{alg:hsr} (denoted by HSR-RR as it is based on ridge regression). We test it on three different pre-trained English word embeddings including Word2Vec\\footnote{\\url{https:\/\/code.google.com\/archive\/p\/word2vec\/}} \\cite{Mikolov2013}, GloVe\\footnote{\\url{https:\/\/nlp.stanford.edu\/projects\/glove\/}} \\cite{Pennington2014}, and Paragram\\footnote{\\url{https:\/\/www.cs.cmu.edu\/~jwieting\/}} \\cite{wieting2015paraphrase}. The original word vectors, as well as word vectors postprocessed by ABTT \\cite{Mu2018}, CN \\cite{Liu2019}, and SB \\cite{Tang2019}, are set as baselines. The performances of these baselines against HSR-RR are compared on word similarity tasks, semantic textual similarity tasks, and downstream sentiment analysis tasks. A statistical significance test is conducted on all experimental results to verify whether our method yields significantly better results compared to the baselines. For ABTT, we set $d = 2$ for GloVe and $d = 3$ for Word2Vec and Paragram as suggested by the original authors. For CN, we fix $d = 2$ for all word embeddings as suggested by the original authors. For HSR, we fix the regularization constants $\\alpha_1, \\alpha_2 = 50$ for HSR-RR. Generally, we recommend using $\\alpha_1, \\alpha_2 = 50$ for HSR-RR and other HSR models. Furthermore, we construct a Multilayer Perceptrons HSR model (denoted by HSR-MLP), and the experimental result of HSR-MLP is shown in the appendix.\n\n\\subsection{Word Similarity} \n\nWe use seven popular word similarity tasks to evaluate the proposed postprocessing method. The seven tasks are RG65 \\cite{Rubenstein1965}, WordSim-353 \\cite{Finkelstein2002}, Rare-words \\cite{Luong2013}, MEN \\cite{Bruni2014}, MTurk \\cite{Radinsky2011}, SimLex-999 \\cite{Hill2015}, and SimVerb-3500 \\cite{Gerz2016}.\n\nFor each task, we calculate the cosine similarity between the vector representation of two words, and the Spearman's rank correlation coefficient \\cite{Myers1995} of the estimated rankings against the human rankings is reported in Table \\ref{tab:word_sim}. In the table, the result marked in bold is the best, and the results underlined are the top three results.\n\n\nFrom the table, we could see that while no postprocessing method performs dominantly better than others, HSR-RR has the best performance overall by performing the best on the most number of tasks for two out of the three word embeddings, which are Word2Vec and Paragram. HSR-RR generally achieves the best on these five tasks: WordSim-353, MEN, MTurk, SimLex-999, and SimVerb-3500. Notably, HSR-RR has the best performance on the task SimVerb-3500 for all three word embeddings, which achieves 8.72\\%, 40.04\\%, and 1.98\\% improvement respectively on SimVerb-3500 dataset relative to the original word embeddings and 2.84\\%, 9.58\\%, and 1.04\\% increase compared to the runner-up method. 
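For completeness, the protocol behind these numbers is the standard one and is easy to restate in code. The Python sketch below is our own; it assumes a dictionary-like word-to-vector lookup and a list of (word, word, human score) triples, and it skips out-of-vocabulary pairs (one common convention; other choices are possible).
\\begin{verbatim}
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def word_similarity_score(embeddings, triples):
    # triples: iterable of (word1, word2, human_score)
    model_scores, human_scores = [], []
    for w1, w2, gold in triples:
        if w1 in embeddings and w2 in embeddings:   # skip out-of-vocabulary pairs
            model_scores.append(cosine(embeddings[w1], embeddings[w2]))
            human_scores.append(gold)
    return spearmanr(model_scores, human_scores).correlation
\\end{verbatim}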
Since SimVerb-3500 is the state-of-the-art task that contains the highest number of word pairs and distinguishes genuine word similarity from conceptual association \\cite{Hill2015}, the result obtained on SimVerb-3500 is usually deemed to be more telling than those of other tasks \\cite{Liu2019}.\n\n\\subsection{Semantic Textual Similarity} \n\nNext, we test the sentence-level effectiveness of our proposed HSR method on semantic textual similarity (STS) tasks, which measure the degree of semantic equivalence between two texts \\cite{Agirre2012}. The STS tasks we employ include 20 tasks from 2012 SemEval Semantic Related task (SICK) and SemEval STS tasks from 2012 to 2015 \\cite{Marco2014,Agirre2012,Agirre2013,Agirre2014,Agirre2015}.\n\nTo construct the embedding of each sentence in the tasks, we first tokenize the sentence into a list of words, then average the word embedding of all words in the list as the vector representation of the sentence. Following \\citet{Agirre2012}, we calculate the cosine distance between the two sentence embeddings and record the Pearson correlation coefficient of the estimated rankings of sentence similarity against the human rankings.\n\nIn Table \\ref{tab:STS}, we present the result of the 20 STS tasks as well as the average result each year. From the table, we could observe that HSR-RR dominantly outperforms the original word embedding as well as other postprocessing methods, as the average result by year of HSR-RR is the best for all tasks except the SICK task on Word2Vec. On average, HSR-RR improves the Pearson correlation coefficient by 4.71\\%, 7.54\\%, and 6.54\\% respectively over the 20 STS tasks compared to the previously best results, and it achieves 7.13\\%, 22.06\\%, and 9.83\\% improvement respectively compared to the original word embeddings.\n\n\\begin{table*}[htbp]\n \\centering\n \\caption{P-value of one-tailed Student's t-test of three experiments}\n\\scalebox{0.77}{%\n \\begin{tabular}{lrrrrrrrrrrrr}\n \\toprule\n & \\multicolumn{4}{c}{\\textbf{Word Similarity}} & \\multicolumn{4}{c}{\\textbf{Semantic Textual Similarity}} & \\multicolumn{4}{c}{\\textbf{Sentiment Analysis}} \\\\\n \\cmidrule(r){2-5} \\cmidrule(r){6-9} \\cmidrule(r){10-13}\n & \\multicolumn{1}{l}{\\textbf{Orig.}} & \\multicolumn{1}{l}{\\textbf{ABTT}} & \\multicolumn{1}{l}{\\textbf{CN}} & \\multicolumn{1}{l}{\\textbf{SB}} & \\multicolumn{1}{l}{\\textbf{Orig.}} & \\multicolumn{1}{l}{\\textbf{ABTT}} & \\multicolumn{1}{l}{\\textbf{CN}} & \\multicolumn{1}{l}{\\textbf{SB}} & \\multicolumn{1}{l}{\\textbf{Orig.}} & \\multicolumn{1}{l}{\\textbf{ABTT}} & \\multicolumn{1}{l}{\\textbf{CN}} & \\multicolumn{1}{l}{\\textbf{SB}} \\\\\n \\toprule\n \\textbf{WORD2VEC} & \\textbf{2.51e-02} & 3.56e-01 & 3.29e-01 & 3.38e-01 & \\textbf{2.92e-03} & \\textbf{1.12e-03} & \\textbf{2.22e-03} & \\textbf{1.42e-03} & 9.27e-02 & \\textbf{1.35e-03} & \\textbf{3.84e-03} & \\textbf{2.49e-04} \\\\\n \\textbf{GLOVE} & \\textbf{6.85e-03} & 1.83e-01 & 2.30e-01 & \\textbf{7.02e-03} & \\textbf{2.88e-05} & \\textbf{1.35e-03} & \\textbf{5.49e-04} & \\textbf{5.51e-06} & 4.02e-01 & 4.58e-01 & 1.25e-01 & 1.28e-01 \\\\\n \\textbf{PARAGRAM} & \\textbf{4.86e-03} & 7.13e-02 & \\textbf{1.62e-02} & 5.13e-02 & \\textbf{5.35e-07} & \\textbf{1.17e-07} & \\textbf{5.94e-07} & \\textbf{3.69e-07} & \\textbf{1.23e-04} & \\textbf{3.32e-04} & \\textbf{5.62e-04} & \\textbf{1.20e-05} \\\\\n \\bottomrule\n \\end{tabular}%\n\n }\n \\label{tab:t_test}%\n\\end{table*}%\n\n\\subsection{Downstream Task: Sentiment Analysis} \n\nSince the 
success of intrinsic lexical evaluation tasks does not imply success on downstream tasks, we test the performance of HSR on four sentiment analysis tasks. The dataset we adopt include Amazon reviews\\footnote{\\url{https:\/\/www.kaggle.com\/bittlingmayer\/amazonreviews\\#train.ft.txt.bz2}} (AR), customer reviews (CR) \\cite{hu2004mining}, IMDB movie reviews (IMDB) \\cite{maas2011learning}, and SST binary sentiment classification (SST-B) \\cite{socher2013recursive}, which are all binary sentence-level sentiment classification tasks. Sentiment analysis is an important task in NLP which has been widely applied in business areas such as e-commerce and customer service. \n\nSimilar to the STS tasks, we first tokenize the sentence, then average the corresponding word embeddings as the vector representation of the sentence. We use a logistic regression model trained by minimizing cross-entropy loss to classify the sentence embeddings into positive or negative emotions. This procedure was adopted in previous studies such as \\citet{zeng2017socialized}. We report the five-fold cross-validation accuracy of the sentiment classification results in Table \\ref{tab:sentiment}.\n\nFrom Table \\ref{tab:sentiment}, we could observe that HSR-RR has the best downstream-task performance among all the tested postprocessing methods. Specifically, for Paragram, HSR-RR achieves the highest classification accuracy on all four tasks; for Word2Vec and GloVe, HSR-RR performs the best on three out of the four tasks.\n\n\\subsection{Statistical Significance Test}\n\nTo show that our proposed method yields significant improvement compared to the baselines, we employ the one-tailed Student's t-test. The p-value of the t-test of HSR-RR against other methods for all three experiments is shown in Table \\ref{tab:t_test} in scientific notation. We use the convention that a p-value is significant if it is smaller than 0.05, and all significant p-values are marked in bold.\n\nFrom Table \\ref{tab:t_test}, we observe that on word similarity and STS tasks, the improvements yielded by HSR are significant when compared to all three original word vectors. On sentiment analysis tasks, the improvement on Paragram is significant. We also test the significance of improvement of results yielded by HSR-RR with those yielded by other state-of-the-art baseline methods (ABTT, CN, and SB). We find that, for STS tasks, improvements against all three baseline methods on all three word vectors are significant; for sentiment analysis, the improvements against all three baseline methods on Word2Vec and Paragram are significant; for word similarity, only two results (SB on GloVe and CN on Paragram) are significant. While in other cases, improvements of HSR-RR over the original word vectors and the baseline algorithms are not significant, conversely, the baseline methods and the original word vectors also fail to surpass the performance of HSR-RR when the null hypothesis and alternative hypothesis are switched. Therefore, we conclude that HSR-RR yields solid improvement when compared to the original word vectors, and it is either significantly better or on-par with other state-of-the-art baseline methods.\n\nWe want to remark that, while statistical significance tests are useful for algorithm comparison, it is mostly excluded in previous word vector evaluation papers \\cite{Bullinaria2007,Levy2015,Faruqui2015,Fried2015,Mrksic2016,Mrksic2017,Shiue2017,Mu2018,Liu2019,tang2014learning}, and there could be a valid reason for this. 
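For transparency, the comparison reported in Table~\\ref{tab:t_test} boils down to a one-tailed test over task-level scores. The sketch below shows one plausible instantiation, namely a paired test over the per-task results of two methods; the function name and the paired formulation are ours, and the caveat that makes such tests delicate in NLP is discussed next.
\\begin{verbatim}
from scipy import stats

def one_tailed_t(scores_ours, scores_baseline):
    # H1: our method scores higher than the baseline across tasks
    t, p_two_sided = stats.ttest_rel(scores_ours, scores_baseline)
    p_one_sided = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2
    return t, p_one_sided
\\end{verbatim}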
As pointed out by \\citet{dror2018hitchhiker}, the way how existing NLP datasets are structured tends to cripple those widely adopted significance tests: while most statistical significance tests (e.g., t-test) assume that the test set consists of independent observations, NLP datasets usually violate this hypothesis. For instance, most STS datasets only contain sentences from a certain source (e.g., news or image captions) and word similarity datasets usually contain words of specialized types (e.g., verbs). This makes a proper significance test quite challenging. Some NLP researchers even contend to abandon the null hypothesis statistical significance test approach due to this hard-to-meet assumption \\cite{koplenig2019against,mcshane2019abandon}.\n\n\\section{Conclusion and Future Work}\n\nIn this paper, we present a simple, fast-to-compute, and effective framework for postprocessing word embeddings, which is inspired by the recent development of causal inference. Specifically, we employ Half-Sibling Regression to remove confounding noise contained in word vectors and to reconstruct latent, ``clean'' word vectors of interest. The key enabler of the proposed Half-Sibling Regression is the linguistic fact that function words and content words are lexically irrelevant to each other, making them natural ``half-siblings''. The experimental results on both intrinsic lexical evaluation tasks and downstream sentiment analysis tasks reveal that the proposed method efficiently eliminates noise and improves performance over the existing alternative methods on three different brands of word embeddings.\n\nThe current work has a few limitations, which we wish to address in the future. The first limitation resides in the way we formulate the regression. Note that, when performing the multiple-output regression step in HSR algorithm (Step 1.1 and Step 2.1 of Algorithm \\ref{alg:hsr}), we do not take the correlation of targets into account. Such correlations, however, could be beneficial in some cases. Consider, for instance, the task of predicting content words based on stop words (Step 1.1 of Algorithm \\ref{alg:hsr}). As content words as targets are strongly correlated (e.g., synonyms and antonyms), such correlations can be further employed to facilitate the regression with well-studied methods such as Reduced-rank regression \\cite{Anderson1949}. For a survey of these multiple outcome regression methods taking output into account, please see \\citet{Hastie2001}, Section 3.7.\n\nThe second line of future work concerns how to use a non-linear model for HSR more effectively. Although we have tried neural-network-based HSR algorithms for various tasks (see the appendix for details), empirically they bring marginally improved results, if not slightly worsened. One hypothesis for explaining this phenomenon is that neural networks tend to be highly expressive, overfitting small datasets easily. For future work, we plan to explore more regularization methods which may improve the results of neural-network-based HSR. \n\nThe third line of future work is to develop a unified framework for understanding word vector postprocessing. As various word vector postprocessing algorithms yield (sometimes surprisingly) similar results in a few cases, it is our hope to establish connections between these approaches in the future. The recent work by \\citet{zhou2019getting} points toward this direction.\n\nLast but not least, we believe that there remain ample opportunities for using HSR in other NLP tasks and models. 
For instance, recently, we have observed that pre-trained language models such as BERT \\cite{devlin2019bert} start to replace word vectors as default feature representations for downstream NLP tasks. The HSR framework, in principle, can be incorporated in language model postprocessing pipelines as well. We would like to explore these possibilities in the future.\n\n\n\\paragraph{Acknowledgement} This work was partially supported by the National Natural Science Foundation of China (grant number 71874197). We appreciate the anonymous reviewers for their detailed and constructive comments. We thank all the people who helped Zekun Yang flee from Hong Kong to Shenzhen on Nov. 12th, 2019 such that she could safely finish writing the camera-ready version of this paper.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nProbabilistic Gaussian processes (GPs)~\\cite{Rasmussen2006} are the method of choice for probabilistic regression: Their non-parametric nature allows for flexible modelling without specifying low-level assumptions (e.g., the degree of a polynomial) in advance. Moreover, for the standard Gaussian likelihood, inference can be performed in closed form in a principled way simply by applying Bayes' theorem. GPs have had substantial impact in various research areas, including geostatistics~\\cite{Cressie1993}, optimisation~\\cite{Jones1998,Brochu2009}, data visualisation~\\cite{Lawrence2005}, robotics and reinforcement learning~\\cite{Deisenroth2014}, spatio-temporal modelling~\\cite{Luttinen2012}, and active learning~\\cite{Krause2008}. A strength of the GP model is that it can be used fairly reliably as a black-box function approximator, i.e., it produces reasonable predictions without manual parameter tuning.\n\n\nA practical limitation of the GP model is its computational demand: In\nstandard implementations, training and predicting scale in\n$\\mathcal{O}(N^3)$ and $\\mathcal{O}(N^2)$, respectively, where $N$ is\nthe size of the training data set. For large $N$ (e.g., $N>10,000$) we\noften use sparse approximations~\\cite{Williams2001,Quinonero-Candela2005,\n Hensman2013, Titsias2009, Lazaro-Gredilla2010, Shen2006}. Typically, these sparse approximations lower the computational burden by implicitly (or explicitly) using a subset of the data. They scale GPs up to multiple tens or hundreds of thousands of data points. However, even with sparse approximations it is inconceivable to apply GPs to data set sizes of tens or hundreds of millions of data points. \n\nAn alternative to sparse approximations is to distribute the computations by using local models. These local models typically require stationary kernels for a notion of ``distance'' and ``locality''. \nShen et al.~\\cite{Shen2006} used KD-trees to recursively partition the data space by means of a multi-resolution tree data structure, which allows to scale GPs up to multiple tens of thousands of training points. However, the approach proposed in~\\cite{Shen2006} does not provide solutions for variance predictions and is limited to stationary kernels. Similarly, \\cite{Nguyen-Tuong2009a} used a heuristic partitioning scheme based on the locality notion of stationary kernels for real-time mean predictions of GPs.\nAlong the lines of exploiting locality, mixture-of-experts (MoE) models~\\cite{Jacobs1991} have been applied to GP regression~\\cite{Meeds2006,Rasmussen2002,Yuan2009}. 
However, these models have not primarily been used to speed up GP regression, but rather to allow for heteroscedasticity (input-dependent variations) and non-stationarity. Each local model possesses its own set of hyper-parameters to be optimised. Predictions are made by collecting the predictions of all local expert models, and weighing them using the responsibilities assigned by the gating network. In these MoE models, a Dirichlet process prior is placed on the multinomial ``responsibility'' vector of each local expert, which allows for a data-driven partitioning on the fly. Unfortunately, inference in these models is computationally intractable, requiring MCMC sampling or variational inference to assign data points to each GP expert, a computationally demanding process.\n\nWithin the context of spatio-temporal models with $10^6$ data points, \\cite{Luttinen2012} propose efficient inference that exploits computational advantages from both separable and compactly supported kernels, leading to very sparse kernel matrices. The authors propose approximate (efficient) sampling methods to deal with both noisy and missing data.\n\nRecently, Gal et al.~\\cite{Gal2014} proposed a distributed approach that scales variational sparse GPs further by exploiting distributed computations. In particular, they derive an exact re-parameterisation of the variational sparse GP model by Titsias~\\cite{Titsias2009}, to update the variational parameters independently on different nodes. This is implemented within a Map-Reduce framework, and the corresponding architecture consists of a central node and many local nodes, i.e., a single-layer architecture.\n\nIn this paper, we follow an approach orthogonal to sparse approximations in order to \\emph{scale full GPs to large data sets} by exploiting massive parallelisation. In particular, we propose a hierarchical mixture of experts model that distributes the computational load amongst a large set of independent computational units. Our model recursively recombines computations by these independent units to an overall GP inference\\slash training procedure. This idea is related to Tresp's Bayesian Committee Machine~\\cite{Tresp2000}, which combines estimators. A key advantage of our model is that all computations can be performed analytically, i.e., no sampling is required. Given sufficient computing power our model can handle arbitrarily large data sets. In our experiments we demonstrate that our model can be applied easily to data sets of size $2^{24}\\approx 1.7\\times 10^7$, which exceeds the typical data set sizes sparse GPs deal with by orders of magnitude. However, even with limited computing resources, our model is practical in the sense that it can train a full GP with a million training points in less than half an hour on a laptop.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Objective and Set-up}\nWe consider a regression setting $y=f(\\vec x)+\\epsilon$, where $\\vec x\\in\\mathds{R}^d$ and $\\epsilon\\sim\\gauss{0}{\\sigma_\\epsilon^2}$ is i.i.d. Gaussian noise. The objective is to infer the latent function $f$ from a training data set $\\boldsymbol X = \\{\\vec x_i\\}_{i=1}^n, \\vec y = [y_1, \\dotsc, y_N]^\\top$. For small data set sizes $N$, a Gaussian process is one of the methods of choice for probabilistic non-parametric regression.\n\n\nA Gaussian process is a non-parametric Bayesian model, which is often used for (small-scale) regression. A GP is defined as a collection of random variables, any finite number of which is Gaussian distributed. 
A GP is fully specified by a mean function and a covariance function (kernel). In this paper, we assume that the prior mean function is 0. Furthermore, we consider the Gaussian (squared exponential) kernel\n\\begin{align}\n\\hspace{-2.6mm}k(\\vec x_i, \\vec x_j) = \\sigma_f^2\\exp\\big(-\\tfrac{1}{2}(\\vec x_i - \\vec x_j)^\\top\\boldsymbol\\Lambda^{-1}(\\vec x_i-\\vec x_j)\\big),\n\\end{align}\n$\\vec x_i,\\vec x_j\\in\\mathds{R}^D$, where $\\sigma_f^2$ is the variance of the latent function $f$ and $\\boldsymbol\\Lambda = \\mathrm{diag}(l_1^2,\\dotsc,l_D^2)$ is the diagonal matrix of squared length-scales $l_i$, $i = 1,\\dotsc,D$. Furthermore, we consider a Gaussian likelihood $p(y|f(\\vec x))=\\gauss{f(\\vec x)}{\\sigma_\\epsilon^2}$ to account for the measurement noise. \n\nA GP is typically trained by finding hyper-parameters $\\vec\\theta = \\{\\sigma_f, l_i, \\sigma_\\epsilon\\}$ that maximise the log-marginal likelihood~\\cite{Rasmussen2006}, which is (up to a constant) given by\n\\begin{align}\n\\log p(\\vec y|\\boldsymbol X, \\vec\\theta) &\\stackrel{.}{=} -\\tfrac{1}{2}\\big(\\vec y(\\boldsymbol K+\\sigma_\\epsilon^2\\boldsymbol I)^{-1}\\vec y + \\log|\\boldsymbol K+\\sigma_\\varepsilon^2\\boldsymbol I|\\big)\\,.\n\\label{eq:log-marginal likelihood}\n\\end{align}\nThe computationally demanding computations in~\\eqref{eq:log-marginal likelihood} are the inversion and the determinant of \\mbox{$\\boldsymbol K + \\sigma_\\epsilon^2\\boldsymbol I$}, both of which scale in $\\mathcal{O}(N^3)$ with a standard implementation.\n\nFor a given set of hyper-parameters $\\vec\\theta$, a training set $\\boldsymbol X, \\vec y$ and a test input $\\vec x_*\\in\\mathds{R}^D$, the GP posterior predictive distribution of the corresponding function value $f_* = f(\\vec x_*)$ is Gaussian with mean and variance given by\n\\begin{align}\n\\mathds{E}[f_*] &= m(\\vec x_*) = \\vec k_*^\\top(\\boldsymbol K + \\sigma_\\varepsilon^2\\boldsymbol I)^{-1} \\vec y\\,,\\label{eq:mean GP}\\\\\n\\mathrm{var}[f_*] &= \\sigma^2(\\vec x_*) = k_{**} -\\vec k_*^\\top(\\boldsymbol K + \\sigma_\\varepsilon^2\\boldsymbol I)^{-1} \\vec k_*\\,,\\label{eq:var GP}\n\\end{align}\nrespectively, where $\\vec k_* = k(\\vec x_*, \\boldsymbol X)$ and $k_{**}=k(\\vec x_*, \\vec x_*)$. When we cache $(\\boldsymbol K + \\sigma_\\epsilon^2\\boldsymbol I)^{-1}$ computing the mean and variance in~\\eqref{eq:mean GP} and \\eqref{eq:var GP} requires $\\mathcal{O}(N)$ and $\\mathcal{O}(N^2)$ computations, respectively.\n\nFor $N>10,000$ training and predicting become rather time-consuming procedures, which additionally require large amounts of memory, i.e., $\\mathcal{O}(N(N+D))$. \n\nOur working hypothesis is that a standard GP can model the latent function $f$. However, due to the data set size $N$ the standard GP is not applicable.\n\nInstead of a sparse approximation~\\cite{Quinonero-Candela2005, Titsias2009}, we address both the computational and the memory issues of full GPs by distributing the computational and memory load to many individual computational units that only operate on subsets of the data. This results in an approximation of the full GP, but this approximation can be computed efficiently (time and memory) by exploiting massive parallelisation. 
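\nFor reference, the following minimal NumPy sketch (illustrative only; it is not our actual implementation, and all function and variable names are chosen for illustration) evaluates the Gaussian kernel and the predictive equations~\\eqref{eq:mean GP} and~\\eqref{eq:var GP} via a Cholesky factorisation, making the $\\mathcal{O}(N^3)$ bottleneck explicit:\n\\begin{verbatim}\nimport numpy as np\n\ndef se_kernel(A, B, sf2, ell):\n    # squared-exponential kernel; ell holds the D length-scales\n    d = (A[:, None, :] - B[None, :, :]) \/ ell\n    return sf2 * np.exp(-0.5 * np.sum(d**2, axis=-1))\n\ndef gp_predict(X, y, Xs, sf2, ell, sn2):\n    # zero-mean GP posterior at test inputs Xs\n    K = se_kernel(X, X, sf2, ell) + sn2 * np.eye(len(X))\n    L = np.linalg.cholesky(K)              # O(N^3)\n    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))\n    Ks = se_kernel(Xs, X, sf2, ell)\n    mean = Ks @ alpha                      # predictive mean\n    v = np.linalg.solve(L, Ks.T)\n    var = sf2 - np.sum(v**2, axis=0)       # predictive variance, k_** = sf2\n    return mean, var\n\\end{verbatim}\nThe cubic cost of the factorisation above is exactly what the hierarchical model in the next section distributes across many small experts.\n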
\n\n\n\\section{Hierarchical Mixture-of-Experts Model}\n\\begin{figure*}[tb]\n\\subfigure[Single-layer model.]{\n\\includegraphics[height = 2.45cm]{.\/figures\/1layer}\n\\label{fig:1layer}\n}\n\\hspace{15mm}\n\\subfigure[Two-layer model.]{\n\\includegraphics[height = 4cm]{.\/figures\/2layer}\n\\label{fig:2layer}\n}\n\\caption{Hierarchical MoE model. Main computations are at the leaf nodes (black). All other nodes (linearly) recombine information from their direct children, allowing for an arbitrarily deep architecture.}\n\\end{figure*}\n\n\nConsider a GP with a training data set $\\mathcal{D} = \\{\\boldsymbol X, \\vec y\\}$. We define a set $\\mathcal{S}$ of $c$ subsets (not necessarily a partition) of the data set as $\\mathcal{S} = \\{\\mathcal{D}^\\idx{1},\\dotsc,\\mathcal{D}^\\idx{c}\\}$ where $\\mathcal{D}^{(i)} = \\{ \\boldsymbol{X}^{(i)} , \\vec{y}^{(i)} \\}$. These subsets are from the full training set $\\mathcal{D}$, and we will use a GP on each of them as a (local) expert\\footnote{The notion of ``locality'' is misleading as our model does not require similarity measures induced by stationary kernels.}. Each of these local expert models computes means and variances conditioned on their respective training data\\footnote{Both mean and variances are necessary for training and inference.}. These (local) predictions are recombined to a mean\\slash variance by a parent node (see Fig.~\\ref{fig:1layer}), which subsequently can play the role of an expert at the next level of the model architecture. Recursively applying these recombinations, our model results in a tree structured architecture with arbitrarily many layers, see Fig.~\\ref{fig:2layer}.\\footnote{We discuss different architecture choices in Section~\\ref{sec:architectures}.}\nIn our model, all GPs at the leaves of this tree are trained jointly and share a single set of hyper-parameters $\\vec\\theta$.\n\n\n\\begin{figure*}[tb]\n\\centering\n\\includegraphics[width = 0.9\\hsize]{.\/figures\/block-diagonal2}\n\\caption{In a single-layer MoE model, partitioning the data leads to a block-diagonal approximation of the kernel matrix (top path). By duplicating data points this clear separation between blocks is smoothed out (bottom path), and the effects of the independence assumption are mitigated.}\n\\label{fig:kernel matrix approx}\n\\end{figure*}\n\nWe train our model and make predictions under the assumption that each expert is independent of the other experts at the same level of the tree, which allows us to parallelise and distribute computations over independent processing units.\nThe assumption that the GPs at each level are independent effectively results in a GP with a block-diagonal covariance matrix, which can be efficiently inverted by distribution. If $\\mathcal{S}$ is a partition of $\\mathcal{D}$, this covariance matrix would be composed of the block-diagonal elements of the original GP's covariance matrix (with zeros on the off block-diagonals). However, with a partition, some information about the structure of the data is lost, which would otherwise be captured by the cross-covariance terms in the full GP. Since our model aims to replicate a full GP we are interested in mitigating the effects of the independence assumption. We do so by sharing arts of the training set amongst multiple subsets in $\\mathcal{S}$. Thereby, we account for the covariance effects between the points in $\\mathcal{D}$ to some degree. 
This approach is illustrated in the bottom path of Fig.~\\ref{fig:kernel matrix approx}, where parts of the training set are shared amongst individual nodes. Note that memory consumption can be kept constant since the training set is not modified (read-only access).\n\n\\subsection{Dividing the Data into Subsets}\nCreating subsets of the training data and using each subset with a GP forms the basis of our hierarchical MoE GP (HGP) model, where we have divided the problem into a number of GPs, each using a smaller training data set. This can be done recursively, further subdividing the problem until a desired training set size for the leaf-GPs is achieved.\\footnote{\nThe data set sizes assigned to the leaves can be chosen depending on the computational resources available.\n}\nFor an efficient implementation, the number $c$ of data subsets $\\mathcal{D}^\\idx{k}$ should be a multiple of the number of computing units available. For disjoint data sets $\\mathcal{D}^\\idx{k}$, every leaf-GP would possess $N\/c$ data points; if we allow for shared data sets, this number scales linearly with the degree of sharing. For instance, if every data point appears in two local experts in a single-layer model, each $\\mathcal{D}^\\idx{k}$ would be of size $2N\/c$.\n\nThere are various ways of assigning data points to the experts at the leaves of the tree in Fig.~\\ref{fig:2layer}. For instance, random assignment is fast and can work quite well. Most of our results, however, are based on a different approach: First, we use a KD-tree to recursively divide the input space into non-overlapping regions. We terminate the division when the required number of regions is reached. Second, each region is then partitioned into $p$ disjoint groups of inputs. Third, we construct each data set $\\mathcal{D}^\\idx{k},\\,k = 1,\\dotsc, c$, for the local experts, such that it contains exactly one group from each region. 
After this procedure, each data set $\\mathcal{D}^\\idx{k}$ will contain points across the entire input space, rather than being clustered in the same region of the input space. Note that neither method for assigning data points to GP experts relies on any locality notion induced by the kernel.\n\n\n\n\\subsection{Training}\nWe train the model by maximising a factorising approximation to the log-marginal likelihood in~\\eqref{eq:log-marginal likelihood}, i.e.,\n\\begin{align}\n\\log p(\\vec y| \\boldsymbol X, \\vec\\theta) \\approx \\sum\\nolimits_k \\log p(\\vec y^\\idx{k}|\\boldsymbol X^\\idx{k}, \\vec\\theta)\n\\label{eq:approximate log-marginal likelihood}\n\\end{align}\nwith respect to the kernel hyper-parameters $\\vec\\theta$, which are shared amongst all individual GPs. The factorising approximation in~\\eqref{eq:approximate log-marginal likelihood} is implied by our independence assumption of the individual (small) GP models. Each term in~\\eqref{eq:approximate log-marginal likelihood} is given by \n\\begin{align}\n\\hspace{-2mm}\\log p(\\vec y^\\idx{k}|\\boldsymbol X^\\idx{k},\\vec\\theta) &= -\\tfrac{1}{2}\\vec y^\\idx{k}(\\boldsymbol K_{\\vec\\theta}^\\idx{k} + \\sigma_\\epsilon^2\\boldsymbol I)^{-1}\\vec y^\\idx{k} \\nonumber\\\\\n&\\quad -\\tfrac{1}{2}\\log |\\boldsymbol K_{\\vec\\theta}^\\idx{k} + \\sigma_\\epsilon^2\\boldsymbol I| + \\text{const}\n\\label{eq:term in approx. LML}\n\\end{align}\nand requires the inversion and determinant of $\\boldsymbol K_{\\vec\\theta}^\\idx{k} +\\sigma_\\epsilon^2\\boldsymbol I$, where $\\boldsymbol K_{\\vec\\theta}^\\idx{k} = k(\\boldsymbol X^\\idx{k}, \\boldsymbol X^\\idx{k})$ is a $p\\times p$ matrix, and $p$ is the size of the data set associated with the GP expert $k$. These computations can be performed in $\\mathcal{O}(p^3)$ time with a standard implementation. Note that $p$ is significantly smaller than $N$, the size of the full data set. The memory consumption is $\\mathcal{O}(p^2 + pD)$ for each individual model.\n\nNote that in~\\eqref{eq:approximate log-marginal likelihood} the number of parameters to be optimised is relatively small since we do not introduce additional variational parameters or inducing inputs that would need to be optimised. The gradients of~\\eqref{eq:approximate log-marginal likelihood} and~\\eqref{eq:term in approx. LML} with respect to $\\vec\\theta$ can be computed independently at all $k$ nodes, which allows for massive parallelisation and a significant speed-up of training compared to training a full GP. \n\n\n\\begin{figure}[tb]\n\\centering\n \\includegraphics[width = \\hsize]{.\/figures\/hgp_timings}\n\\caption{Computing time for the log-marginal likelihood and its gradient with respect to the kernel hyper-parameters as a function of the size of the training data. The HGP scales favourably to large-scale data sets. With an increasing number of child-GPs (but fixed computational resources), the HGP scales to more than $10^7$ data points.}\n\\label{fig:hgp timings}\n\\end{figure}\nTo evaluate the training time for our GP model, we computed the amount of time required to compute the log-marginal likelihood and its gradients with respect to the kernel hyper-parameters. A typical optimisation procedure for the kernel hyper-parameters, e.g., conjugate gradients or (L)BFGS, requires these values. The full training time is proportional to the time it takes to compute the log-marginal likelihood and its gradient (it still depends on the number of line-searches). 
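\nFor concreteness, the quantity that is timed in the following experiment can be sketched in a few lines (illustrative only; the sketch reuses the \\texttt{se\\_kernel} helper from the earlier listing, and in a distributed implementation each call to \\texttt{expert\\_lml} would run on its own computing unit):\n\\begin{verbatim}\nimport numpy as np\n\ndef expert_lml(Xk, yk, theta):\n    # log p(y^(k) | X^(k), theta) up to a constant\n    sf2, ell, sn2 = theta\n    K = se_kernel(Xk, Xk, sf2, ell) + sn2 * np.eye(len(Xk))\n    L = np.linalg.cholesky(K)              # O(p^3) per expert\n    alpha = np.linalg.solve(L.T, np.linalg.solve(L, yk))\n    return -0.5 * yk @ alpha - np.sum(np.log(np.diag(L)))\n\ndef approx_lml(subsets, theta):\n    # factorising approximation: sum of independent expert terms\n    return sum(expert_lml(Xk, yk, theta) for Xk, yk in subsets)\n\\end{verbatim}\nThe gradients with respect to $\\vec\\theta$ decompose in exactly the same way, so each expert can also return its own gradient contribution.\n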
We chose a computer architecture of 64 nodes with 4 cores each. Furthermore, we chose a three-layer model with varying widths (branching factors). For data sets of $\\leq 2^{20}$ data points, the leaf nodes possessed 512 data points each; for data set sizes of $>2^{20}$, we chose 128 data points per node.\n\nFig.~\\ref{fig:hgp timings} shows the time required for computing the log-marginal likelihood and its gradient with respect to the hyper-parameters. The horizontal axis shows the size of the training set (logarithmic scale), and the left vertical axis shows the computation time in seconds (logarithmic scale) for our model (HGP, blue-dashed), a full GP (red-dashed) and a sparse GP with inducing inputs~\\cite{Snelson2006} (green-dashed). For the sparse GP model, we chose the number $M$ of inducing inputs to be 10\\% of the size of the training set, i.e., the computation time is of the order of $\\mathcal{O}(NM^2)=\\mathcal{O}(N^3\/100)$, which offsets the curve of the full GP. Taking even fewer inducing inputs (e.g., 1\\% or 0.1\\% of the data) would push the sparse approximation towards multiple-hundred thousand data points. However, this can only be done if the data set possesses a high degree of redundancy. The right vertical axis shows the number of leaf GPs (black-solid), i.e., the number of GP experts amongst which we distribute the computation. While the training time of the full GP reaches impractical numbers at data set sizes of about 10,000, the sparse GP model can be reasonably trained up to 50,000 data points.\\footnote{In this comparison we did not include any computational overhead for selecting the inducing inputs (either as variational parameters or as free parameters to be optimised), which is often non-negligible.} The computation time required for the HGP to compute the marginal likelihood and gradients is significantly lower than that of the full GP, and we scaled it up to $2^{24} \\approx 1.7 \\times 10^7$ training data points, which required about the same amount of time ($\\approx 230$\\,s) as training a full GP with $2^{14}\\approx 1.6\\times 10^4$ and a sparse GP with $2^{15}\\approx 3.2\\times 10^4$ data points. The figure shows that for any problem size, we are able to find an architecture that allows us to compute the marginal likelihood (and, hence, train the model) within a feasible amount of time. \n\n\n\nEven if a big computing infrastructure is not available, our model is useful in practice: We performed a full GP training cycle (which includes many evaluations of the log-marginal likelihood and its gradient) with $10^6$ data points on a standard laptop in about 20 minutes. This is also a clear indicator that the memory consumption of the HGP is relatively small.\n\n\n\n\n\n\\subsection{Predictions\/Inference}\nThe predictive distribution is computed by an iterative recombination of the computations at the leaf-GPs. In particular, the parent nodes compute\n\\begin{align}\np(y_*|\\vec x_*, \\boldsymbol X, \\vec y) &\\propto \\prod\\nolimits_k p(y_*|\\vec x_*, \\boldsymbol X^\\idx{k}, \\vec y^\\idx{k}) \\label{eq:predictive_likelihood}\\\\\n&\\propto\\gaussx{y_*}{\\mu_*}{\\sigma_*^2}\\,,\\label{eq:predictive_distribution} \n\\end{align}\nas the product of all Gaussians passed on by the children. 
The resulting distribution is also Gaussian with\nmean $\\mu_*$ and variance $\\sigma_*^2$, where\n\\begin{align*}\n\\mu_* = \\sigma_*^{2}\\sum\\nolimits_k \\frac{\\mu_k(\\vec x_*)}{\\sigma_k^2(\\vec x_*)}\\,,\\quad\n\\sigma_*^2 = \\left(\\sum\\nolimits_k \\sigma_k^{-2}(\\vec x_*)\\right)^{-1}.\n\\end{align*}\nBoth $\\mu_*$ and $ \\sigma_*^2$ are computed analytically. We exploit the distributed architecture of our model to ensure small computational overhead. The mean of the HGP predictive distribution can be written as a weighted sum of the means of the child-GPs' predictive distributions. $\\mu_* = \\sum\\nolimits_k w_k \\mu_k(\\vec x_*)$ where\n$w_k = \\sigma_k^{-2}(\\vec x_*)\/\\sum\\nolimits_j \\sigma_j^{-2}(\\vec x_*)$. The weights on the child-GPs' predictions are proportional to the inverse variances of their prediction, which allows more accurate predictions (lower variances) to have bigger weights in the final prediction, and less accurate predictions (higher variances) to have weights closer to zero. This, in general, allows the HGP to remain effective across various methods of assigning data points to the child-GPs.\n\n\\subsection{Architecture Choice}\n\\label{sec:proofs}\nThus far, we have described the single-level version of the HGP model, where\nthe child-GPs are standard GPs. Since the HGP possesses the same ``interface''\n(marginal likelihood, predictive distribution) as a standard GP, we can use\nHGPs themselves as the child-GPs of an HGP. This can be done recursively to build up a tree\nstructure of arbitrary depth and width. \n\nIn the following, we show that a multi-level HGP is\nequivalent to a single-level HGP if its children (experts) are the leaf-GPs (experts) of a\nmulti-level HGP. For this, we show that training and prediction are identical in both models.\n\n\\paragraph{Training} In \\eqref{eq:approximate log-marginal likelihood}, we\nexpressed the log-marginal likelihood of the HGP as a sum of the log marginal\nlikelihoods of its child-GPs. If the child-GPs themselves are also HGPs (``child-HGPs''),\nthen this sum can be expanded and expressed as the sum of the log-marginal\nlikelihood of the child-GPs of the child-HGPs. This generalises to HGPs of\nan arbitrary number of levels, and we can always write the log-marginal likelihood\nof a HGP as a sum of the log-marginal likelihood of its leaf GPs (experts):\n\\begin{align}\n\\log p(\\vec y| \\boldsymbol X, \\vec\\theta) &\\approx \\sum\\nolimits_k \\log p(\\vec y^\\idx{k}|\\boldsymbol X^\\idx{k}, \\vec\\theta) \\nonumber \\\\\n&\\approx \\sum\\nolimits_k \\sum\\nolimits_{i_k} \\log p(\\vec y^\\idx{i_k}|\\boldsymbol X^\\idx{i_k}, \\vec\\theta) \\nonumber \\\\\n&\\approx \\cdots \\nonumber \\nonumber \\\\\n&\\approx \\sum\\nolimits_{l \\in leaves} \\log p(\\vec y^\\idx{l}|\\boldsymbol X^\\idx{l}, \\vec\\theta)\n\\label{eq:hgp_lik_expansion}\n\\end{align}\nEquation~\\eqref{eq:hgp_lik_expansion} shows that the log-marginal likelihood of a multi-level HGP\nis the sum of the log marginal-likelihoods of its leaves, which is equivalent\nto the log-marginal likelihood of a single-level HGP with the (multi-level HGP's) leaves as its child-GPs (experts). Hence, the structure of the HGP above the leaves has no effect on the computation of the log-marginal likelihood.\n\n\\paragraph{Prediction}\nWe now show that the predictions of a multi-level HGP and a single-level HGP are identical. 
The product in \\eqref{eq:predictive_likelihood} can be factorised\\footnote{This is not strictly a factorisation, since it is only proportional to \\eqref{eq:predictive_likelihood}. While it is not crucial to our application, we can easily recover the normalising constant by integration.} into\n$\\prod\\nolimits_{l \\in leaves} p(y_*|\\vec x_*, \\boldsymbol X^\\idx{l}, \\vec y^\\idx{l})$,\na product of terms involving only the leaf-GPs (experts) and not containing terms relating to the intermediate levels.\n\nIt is, however, not immediately obvious that the predictive distribution in\n\\eqref{eq:predictive_distribution} is equivalent to the predictive distribution of a single-level HGP with the same leaves. To show this, we provide a simple proof that the resulting distribution of the product of an arbitrary number of Gaussians, which are in turn the product of Gaussians (we refer to them as the ``sub-Gaussians''), is Gaussian and equivalent to the distribution resulting from the product of all the sub-Gaussians.\n\nThe product of Gaussians is proportional to a Gaussian, i.e., \n$\\prod\\nolimits_k \\gaussx{x}{\\mu_k}{\\sigma_k^2} \\propto\n\\gaussx{x}{\\mu_*}{\\sigma_*^2}\n$ where\n$\\mu_* = \\sigma_*^2\\sum\\nolimits_k \\mu_k\\sigma_k^{-2}$\nand $\\sigma_*^2 = (\\sum\\nolimits_k \\sigma_k^{-2})^{-1} $.\nSuppose each of the component Gaussians are themselves product of Gaussians.\nThat is,\n$\\gaussx{x}{\\mu_k}{\\sigma_k^2}\n\\propto\n\\prod\\nolimits_{i_k} \\gaussx{x}{\\mu_{i_k}}{\\sigma_{i_k}^2}\n$ and\n$\\mu_k = \\sigma_k^2\\sum\\nolimits_{i_k} \\mu_{i_k}\\sigma_{i_k}^{-2}$\nand $\\sigma_k^2 = (\\sum\\nolimits_{i_k} \\sigma_{i_k}^{-2})^{-1} $. Then\n\\begin{align}\n\\mu_* &= \\sigma_*^2\\sum\\nolimits_k \\mu_k\\sigma_k^{-2}\n= \\sigma_*^2\\sum\\nolimits_k \\sum\\nolimits_{i_k} \\mu_{i_k}\\sigma_{i_k}^{-2}\\,,\\\\\n\\sigma_*^2 &= \n\\big(\\sum\\nolimits_k \\sigma_k^{-2}\\big)^{-1} =\n\\big(\\sum\\nolimits_k \\sum\\nolimits_{i_k} \\sigma_{i_k}^{-2} \\big)^{-1}\\,,\n\\end{align}\n(where $i_k$ are the indices corresponding to the children of the child GPs, and so on)\ni.e., the distribution from the product of the Gaussian distributions is equivalent to the distribution from the product of the sub-Gaussians.\nThis result generalises to any number of levels of Gaussian products (if the sub-Gaussians are derived as products of ``sub-sub-Gaussians'' we apply the above again), and completes our proof for the equivalence of a multi-level HGP and a single-level HGP with the same leaves (experts).\n\nTherefore, mathematically it does not matter whether to choose a shallow or deep architecture if the leaf GPs (experts) are the same. However, a multi-level HGP still makes sense from a computational point of view.\n\n\\paragraph{Multi-level HGPs}\nGiven the same leaf-GPs, the depth of the HGP has no effect on the model, for both training and prediction, as shown in Section~\\ref{sec:proofs}. \nAlthough it is mathematically not necessary to construct an HGP of more than one level, in practice, a multi-level HGP allows us to fully utilise a given set of distributed computing hardware. To implement a single-level HGP on a distributed system, we have one ``master'' node, which is responsible for the computational work of the HGP (combining the outputs of the child-GPs). The computational work of the child-GPs is distributed evenly across the other ``slave'' nodes. 
Such a set-up imposes a heavy communication and computational load on the master node, since it has to manage its communication with all slave nodes, and perform the computations required for combining the child-GPs on its own (during which the slave nodes will idle). This is not an optimal use of resources, and we exploit the fact that the HGP model is invariant to the presence of intermediate layers to propose a better solution, which is illustrated in Fig.~\\ref{fig:arch}. Starting from the top of the HGP tree, divide the number of computational nodes available into $c$ groups where $c$ is the number of child-GPs that the HGP possesses, and assign each child-GP\/child-HGP to one group. We do this recursively until we reach the leaves of the HGP or until there is only a single node available to the HGP. This approach leads to a more uniform distribution of network communication and computational load amongst all nodes. \n\n\n\\begin{figure}\n\\includegraphics[width = \\hsize]{.\/figures\/arch_new}\n\\caption{The flexibility of choosing amongst equivalent architectures enables the HGP to evenly distribute computational (and network communication) work. Each blue vertical rectangle represents one distributed computing unit while each coloured node denotes an HGP. The coloured rectangles represent the overall responsibility of the corresponding coloured nodes. The overlap between the coloured rectangles and the blue represent the computing resources available for the computational work related to a particular HGP. The main computations are performed at the leaf-experts (black).}\n\\label{fig:arch}\n\\end{figure}\n\n\n\\paragraph{Number of Experts}\nFig.~\\ref{fig:depth_ndata} (top row) illustrates the effect of the number of leaf-GPs (experts) on the accuracy of the HGP. We constructed 3 HGPs with 1, 2, and 3 levels (4, 16, and 64 experts), respectively, on a training data set of size 200. This resulted in each HGP having experts with data sets of sizes 100, 50, and 25, respectively. As the number of experts increases, the accuracy decreases. Especially with 64 leaves, no expert has enough data points to model the underlying latent function reasonably well. However, with more training data the HGP with 64 experts recovers the prediction accuracy of the full GP. \n\n\n\\begin{figure*}[tb]\n\\begin{center}\n\\includegraphics[width=0.76\\hsize]{figures\/img_02_02_img04_nchild_deep}\n\\caption{Top row: Comparison of HGPs (blue lines) with varying depths (hence, number of experts) with a ground truth GP (red lines). The mean functions and corresponding $2\\sigma$ predictive intervals are shown. The model accuracy decreases with the number of experts. Bottom row: For a fixed depth (and number of experts), the HGP model becomes more and more similar to the ground truth GP model with an increasing number of data points.}\n\\label{fig:depth_ndata}\n\\end{center}\n\\end{figure*}\n\n\n\n\\subsection{Implementation: True Concurrency in Python}\nA known issue of the CPython interpreter, which we use in our implementation, is the lack of true concurrency using the in-built threading library. Due to\nthe \\emph{Global Interpreter Lock} (GIL, which is implemented in the interpreter because Python's memory management is not thread safe), only a single thread of Python code can be executed at any point in time. Therefore, the use of threads in the Python context only provides logical concurrency in terms of the flow of programs, but not true simultaneous computations. 
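\n\nTo illustrate the process-based alternative that we describe next, the following sketch (illustrative only; it assumes the \\texttt{gp\\_predict} routine from the earlier listing and a module-level list \\texttt{SUBSETS} holding the expert data sets) evaluates the precision-weighted recombination of the child-GP predictions with a pool of worker processes:\n\\begin{verbatim}\nimport numpy as np\nfrom multiprocessing import Pool\n\nSUBSETS = []   # list of (Xk, yk); filled once before the workers are forked\n\ndef child_stats(job):\n    # one expert: precision-weighted statistics at the test inputs\n    k, Xs, theta = job\n    Xk, yk = SUBSETS[k]          # on Linux, inherited read-only via fork\n    mu, var = gp_predict(Xk, yk, Xs, *theta)\n    return mu \/ var, 1.0 \/ var\n\ndef hgp_predict(Xs, theta, nproc=4):\n    jobs = [(k, Xs, theta) for k in range(len(SUBSETS))]\n    with Pool(nproc) as pool:    # processes, not threads, sidestep the GIL\n        stats = pool.map(child_stats, jobs)\n    prec = sum(v for _, v in stats)         # sum_k 1\/sigma_k^2(x_*)\n    mean = sum(m for m, _ in stats) \/ prec  # precision-weighted mean\n    return mean, 1.0 \/ prec\n\\end{verbatim}\nThe same pattern applies to the training objective, where each worker returns the log-marginal likelihood term and gradient of its expert.\n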
\n\nThere exists a workaround for the true concurrency problem in Python, via the use of processes instead of threads to perform simultaneous computations. In the POSIX model, threads are lightweight units of computations belonging to the same process, thus, sharing the same memory space. Processes have their own memory space and come with increased system overheads compared to threads. However, on Linux (which we use for this implementation), the creation of duplicate processes (forking) does not incur large memory costs since Linux implements a copy-on-write model. This means that when a process forks into two, the memory is not copied, unless the new process attempts to modify it. In the context of our implementation, we make no modification to the training data, which is shared amongst all child-GPs. In terms of the memory usage, each child-GP only needs to compute its own kernel matrix and the corresponding Jacobian matrix per hyper-parameter, which have no interaction with any other child-GP. Therefore, computing each child-GP using a separate process does not incur any large, redundant memory costs that would not be present in a true concurrency model implemented by native threads.\n\n\n\n\n\n\n\n\\section{Experimental Results}\n\nIn this section, we apply our model to two real-world data sets. For both the full GP (if applicable) and the HGP model we optimised the kernel hyper-parameters by maximising the log-marginal likelihood using BFGS. \nIn all following experiments, we used a single standard desktop computer with four cores (16 GB RAM, Intel Core-i7). \n\n\\subsection{Robotic-Arm Data}\nWe applied our HGP model to the kin40k data set~\\cite{kin40k}. The kin40k data set is generated with a simulation of the forward dynamics of an 8-link all-revolute robot arm. The task in the data set is to predict the distance of the end-effector from a target using joint positions and twist angles. The training set size is 10,000 (i.e., we can still train a full GP on it for comparison), the test set size is 30,000, and the input dimension is 8. We trained our model with various architectures (fixed branching factor), ranging from a single-layer model with four experts to a model with 7 layers and $4^7=16,384$ experts. We chose different architectures to assess the trade-off between computation time and accuracy with respect to the full GP.\n\n\n\n\\begin{table*}\n\\caption{Overview of the kin40K-data set results.}\n\\label{tab:kin40K}\n\\begin{center}\n\\begin{tabular}{c|c|c|c|c|c} \n\\bf \\shortstack{\\-\\\\Model\\\\\\-} &\n\\bf \\shortstack{\\-\\\\Number of Levels\\\\(HGP)} &\n\\bf \\shortstack{\\-\\\\Number of Leaves\\\\(HGP)} &\n\\bf \\shortstack{\\-\\\\Training Time (s)\\\\ per BFGS Iteration} &\n\\bf \\shortstack{\\-\\\\Data Points\\\\per Leaf} &\n\\bf \\shortstack{\\-\\\\Likelihood\\\\ Ratio}\n\\\\ \\hline\nGP (ground truth) & - & - & 218.5 & 10,000 & 1 \\\\ \\hline\n\\multirow{7}{*}{HGP}\n & 1 & $4 $ & 75.6 & 5,000 & 0.992 \\\\ \\cline{2-6}\n & 2 & $4^2=16$ & 56.5 & 2,500 & 0.978 \\\\ \\cline{2-6}\n & 3 & $4^3=64$ & 52.0 & 1,250 & 0.956 \\\\ \\cline{2-6}\n & 4 & $4^4=256$ & 49.4 & 625 & 0.909 \\\\ \\cline{2-6}\n & 5 & $4^5=1024$ & 32.2 & 313 & 0.875 \\\\ \\cline{2-6}\n & 6 & $4^6=4096$ & 17.1 & 157 & 0.834 \\\\ \\cline{2-6}\n & 7 & $4^7=16384$ & 22.0 & 79 & 0.815 \n\\end{tabular}\n\\end{center}\n\\end{table*}\nTable~\\ref{tab:kin40K} summarises the results. 
We report the training time per BFGS iteration (all models required 50--70 iterations), the number of data points per computational unit, and the likelihood ratio $\\LR{GP}{HGP}$, which tells us how close our model is to the full GP. The likelihood ratio $\\LR{G_1}{G_2}$ of two distributions $G_1$ and $G_2$ is defined as\n\\begin{align*}\n\\LR{G_1}{G_2} &:=\n\\prod_{i=1}^N \\frac{p(y_i|G_2)}{p(y_i|G_1)} \\stackrel{N\\to\\infty}{\\longrightarrow} \\exp{\\big( - \\KL{G_1}{G_2} \\big)}\n\\end{align*}\nwhere $y_i\\sim G_1$ (see supplementary material for the proof).\nThe basic single-level HGP with only four experts was able to achieve very similar results in a significantly shorter amount of time. The performance of the HGP decreased with increasing depth since the number of data points per expert became too small (note that the input space is 8D), as illustrated in Fig.~\\ref{fig:depth_ndata}.\n\n\n\\subsection{Airline Delays (US Flight Data)}\nWe considered a data set reporting flight arrival and departure times for every commercial flight in the US from January to April 2008. This data set contains extensive information about almost 2 million flights, including the delay (in minutes) in reaching the destination. For this data set, we followed the procedure described in~\\cite{Hensman2013}\\footnote{The data set can be obtained from \\url{http:\/\/stat-computing.org\/dataexpo\/2009\/}. Thanks to J.~Hensman for providing the pre-processing script.}: We randomly selected 800,000 data points from which we used a random subset of 700,000 samples to train the model and 100,000 to test it. We chose the same eight input variables $\\vec x$ as in~\\cite{Hensman2013}: age of the aircraft, distance that needs to be covered, airtime, departure and arrival times, day of the week, day of the month, and month. This data set has been evaluated in~\\cite{Hensman2013, Gal2014}, both of which use sparse variational GP methods to deal with this training set size. We applied our HGP model, using data duplication (each training instance is used by two experts) and 1,400 experts with 1,000 data points each. Data was assigned randomly to the expert GPs. \n\nWe repeated the experiment four times. Training took on average 35.6 minutes and 14 BFGS iterations.\nTable~\\ref{tab:airline} reports the average RMSE values for predicting airline delays.\n\\begin{table}[tb]\n\\centering\n\\caption{Average RMSE values for predicting airline delays (700K training data, 100K test data).}\n\\label{tab:airline}\n\\begin{tabular}{c|c|c}\nSVGP~\\cite{Hensman2013} & Dist SVGP~\\cite{Gal2014} & HGP\\\\\n\\hline\n 33.00 & 32.95 & \\textbf{27.45}\n\\end{tabular}\n\\end{table}\nTable~\\ref{tab:airline} also relates our results for predicting airline delays to the corresponding results reported in~\\cite{Gal2014}, where 100 inducing points were used for the sparse variational GP (SVGP)~\\cite{Hensman2013} and for the distributed sparse variational GP (Dist SVGP)~\\cite{Gal2014}, which are in line with the results reported in~\\cite{Hensman2013}. Compared to the sparse variational GP methods, our HGP achieves a substantially better predictive performance. 
Additionally, the HGP converged after a few tens of iterations, whereas the sparse variational GP methods~\\cite{Hensman2013,Gal2014} require hundreds or thousands of iterations.\n\n\n\n\n\\section{Discussion}\n\n\nOur approach to scaling up GPs is conceptually straightforward and practical: It recursively recombines computations by independent lower-level experts to an overall prediction. Unlike any other approach to scaling GPs to large data sets our model is not an explicit sparse approximation of the full GP.\nTherefore, the leaf nodes still perform full GP computations, i.e., their computations scale cubically in the number of data points. However, the number of data points at each leaf can be controlled by adjusting the number of leaves.\n\nIn the limit of a single expert, our hierarchical GP model is equivalent to a standard GP. Additionally, even with more than a single expert, our hierarchical mixture-of-experts model is still a Gaussian process: Any finite number of function values is Gaussian distributed, although we make an implicit (smoothed out) block-diagonal approximation of the covariance matrix. Note that the Deep GP model~\\cite{Damianou2013} is actually not a GP. \n\nIn our model, the kernel hyper-parameters are shared amongst all local GP experts. This makes sense as our objective is to reproduce a full ``vanilla'' GP, which, for practical reasons (size of training set) cannot be applied to the problem. Shared hyper-parameters also do not suffer much from overfitting problems: Even if a local expert fits a poor model, its (wrong\/biased) gradient only has a small contribution if the number of local models is high. \nTraining our model is relatively easy: Besides the shared GP hyper-parameters there are no additional parameters, such as inducing inputs~\\cite{Gal2014, Titsias2009, Snelson2006}, i.e., compared to these approaches it is less likely to end up in local optima.\n\nThe main purpose of our model is to scale up the vanilla GP by distributing computational load. Therefore, all kinds of variations that are conceivable for standard GPs could also be implemented in our model. In particular, this includes classification, sparse approximations, heteroscedastic models, and non-Gaussian likelihoods. Note that these models would only be necessary at the level of the leaf GPs: All other computations are (linear) recombinations of the computations at the leaves.\n\n \n\nCompared to other mixture-of-experts models, we chose the simplifying assumption that we know the number of experts, which in our case corresponds to the individual computing units. Thus, we do not need a Dirichlet process prior over partitions and, hence, sampling methods, which dramatically simplifies (and speeds up) training and inference in our model.\n\n\n\\section{Conclusion and Future Work}\nWe presented a conceptually straightforward, but effective, hierarchical model that allows to scale probabilistic Gaussian processes to (in principle) arbitrarily large data sets. The key idea behind our model is to massively parallelise computations by distributing them amongst independent computational units. A recursive (and closed-form) recombination of these independent computations results in a practical hierarchical mixture-of-GP-experts model that is both computationally and memory efficient. Compared to the most recent sparse GP approximations, our model performs very well, learns fast, requires little memory, and does not suffer from high-dimensional variational optimisation of inducing inputs. 
We have demonstrated that our model scales well to large data sets: (a) Training a GP with a million data points takes less than 30 minutes on a laptop, (b) with more computing power, training a GP with more than $10^7$ data points can be done in a few hours.\n\nThe model presented in this paper lays the foundation for a variety of future work in the context of Gaussian processes, including classification, non-Gaussian likelihoods in regression, and the combination with sparse GP methods (for really large data sets and limited computing power) at the level of the leaf nodes of our mixture-of-experts model.\n\n\n\\subsubsection*{Acknowledgements}\nMPD was supported by an Imperial College Junior Research Fellowship.\n\n\\bibliographystyle{abbrv}\n\n\\section{Likelihood Ratio}\nLet $G_1 = \\gauss{\\mu_1}{\\sigma_1^2}$ and $G_2 = \\gauss{\\mu_2}{\\sigma_2^2}$ be two Gaussian distributions. We compare $G_2$ to $G_1$ by evaluating the ratio of the likelihood of $G_2$ to $G_1$ given observations drawn from $G_1$.\nWe can do this empirically by drawing $N$ independent samples $y_1,\\cdots,y_N$\nfrom $G_1$, and evaluating the likelihood ratio $\\LR{G_1}{G_2} :=\n\\prod_{i=1}^N \\frac{p(y_i|G_2)}{p(y_i|G_1)} =\\exp{\\left\\{-\\sum_{i=1}^N\n\\log{\\left(\\frac{p(y_i|G_1)}{p(y_i|G_2)}\\right)}\\right\\}}$. Here $y_i$ are the\nindependent observations of $Y$ and $p(\\cdot|G_2)$ is the likelihood function\nof $G_2$. We write the likelihood ratio as the exponential of\nthe negative sum of log-likelihood ratios to use the Kullback-Leibler (KL)\ndivergence to compute this in closed form, instead of drawing samples, i.e.,\n\\begin{align*}\n\\sum_{i=1}^N \\log{p(y_i|G_j)} &\\propto \\frac{1}{N} \\sum_{i=1}^N \\log{p(y_i|G_j)}\\\\\n&\\stackrel{N\\rightarrow\\infty}{\\approx} \\mathds{E}_{G_1}[\\log p(Y|G_j)]\n\\end{align*}\nWith this substitution,\n\\begin{align*}\n\\sum_{i=1}^{N} &\\log{\\frac{p(y_i|G_1)}{p(y_i|G_2)}} = \n\\sum_{i=1}^{N} \\log{p(y_i|G_1)} - \\sum_{i=1}^{N} \\log{p(y_i|G_2)}\\\\ &\\propto \\sum_{i=1}^{N} \\frac{1}{N}\\log{p(y_i|G_1)} - \\frac{1}{N}\\sum_{i=1}^{N} \\log{p(y_i|G_2)} \\\\\n&\\stackrel{N\\rightarrow\\infty}{\\approx} \\mathds{E}_{G_1}[\\log{p(Y|G_1)}] - \\mathds{E}_{G_1}[\\log{p(Y|G_2)}] \\\\\n&= \\KL{G_1}{G_2}\n\\end{align*}\nthe likelihood ratio becomes\n$$\n\\LR{G_1}{G_2} = \\exp{\\big( - \\KL{G_1}{G_2} \\big)},\n$$\nwhich we can evaluate in closed form for two Gaussian distributions $G_1$ and\n$G_2$. Since $\\KL{G_1}{G_2} \\in [0,\\infty)$ and is continuous\nin all the parameters of $G_1$ and $G_2$, it follows that $\\LR{G_1}{G_2} \\in (0,1]$. Thus, we can interpret $\\LR{G_1}{G_2}$ as the similarity\nof $G_2$ and $G_1$. \nThe likelihood ratio $\\LR{G_1}{G_2}$ is a monotonically decreasing function of\n$\\KL{G_1}{G_2}$ and therefore not symmetric. In the comparisons we make, we set $G_1$ to be the predictive\ndistribution of the full GP, which we assume is ``correct'', and $G_2$ to be the\npredictive distribution of the HGP. $\\LR{G_1}{G_2}$ then tells us how well the\nHGP mimics the full GP.\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\\label{sec_intro}\n\nLow-density parity-check (LDPC) codes, which belong to the family of graph-based codes, are error-correcting codes (ECC) that were introduced by Gallager \\cite{gallager} in 1962. 
Three decades later, with advances in circuit design and in decoding algorithms, LDPC codes were revisited and new code constructions, including repeat-accumulate (RA) \\cite{divsalar} and irregular repeat-accumulate (IRA) codes \\cite{jin_ira}, were proposed. Today, graph-based codes have applications in many areas, including wireless communication and data storage.\n\nIn ECC, unequal error protection (UEP) is used in applications where some of the channel bits are more sensitive to errors or where errors in a specific feature of the data are more costly than errors in others, i.e., the data has unequal value to the user \\cite{calderbank_uep, katsman_uep}. Codes with unequal protection of bits, with higher protection of input information bits than of parity bits, offer performance gains \\cite{wolf_uep,furzun_uep}. In this paper, we apply UEP with higher protection of the parity bits than of the input information bits of an RA code to limit rate loss, and we demonstrate threshold gains.\\footnote{Although in our UEP setup we protect some \\textit{data} bits without any additional error-correction, we stick to the more common nomenclature of unequal \\textit{error} protection. Moreover, error here refers to flips\/erasures.}\n\nThe idea of applying higher protection on parity bits was introduced in \\cite{ahh_loco} in the context of data storage. The authors introduced lexicographically-ordered constrained (LOCO) codes, which significantly mitigate inter-symbol interference (ISI) in magnetic recording (MR) systems. In their model, the parity bits of a spatially-coupled (SC) LDPC code are encoded via a LOCO code, and up to $20\\%$ density gain, with limited rate loss, is achieved compared with using the LDPC code only. Protecting parity bits solely via a LOCO code achieves a significant rate-density gain trade-off. The same UEP idea can also be used to limit speed loss in Flash and other applications where the constrained code rate is already high \\cite{ahh_qaloco, ahh_general}. This UEP setup is successful because when parity bits carry higher-fidelity messages, e.g., log-likelihood ratios (LLRs), those reliable messages are diffused into all bits during message passing \\cite{ahh_loco}. Thus, the decoder effectively experiences a higher signal-to-noise ratio (SNR) compared to the uniform case. The empirical results presented in \\cite{ahh_loco} are the motivation behind this paper, whose goal is to demonstrate the threshold gains of this UEP setup theoretically. We start with the binary erasure channel (BEC) and the binary symmetric channel (BSC), and subject parity bits to lower erasure and crossover probabilities, respectively, compared with input information bits. With this setup, we obtain the closest model to the high reliability on parity bits that was achieved by constrained coding in \\cite{ahh_loco}.\n\nWe use extrinsic information transfer (EXIT) charts \\cite{brink_cid} for the threshold analysis of RA LDPC codes with our idea of UEP via more reliable parity bits on BEC and BSC. EXIT charts are a tool to visualize the asymptotic performance and predict the convergence behavior of iterative decoders \\cite{brink_cid, alexei_bec}. EXIT charts plot the \\textit{average extrinsic information} coming out of the decoder as a function of the \\textit{average a-priori information} going into the decoder during the iterations. In the literature, EXIT charts were used in the design of RA codes that are capacity-approaching \\cite{brink_ra}. 
EXIT functions were derived for BEC, and models for the decoding of RA and general LDPC codes were introduced in \\cite{alexei_bec}. In \\cite{sharon_bms}, methods to obtain EXIT functions for binary-input memoryless symmetric channels were developed through an alternative pseudo-MAP decoder.\n\nIn this paper, we formulate and solve a linear programming (LP) problem to find the optimal degree distribution of the RA code, which maximizes the rate given the erasure or crossover probabilities such that decoding convergence is guaranteed on the EXIT chart, for both the UEP and the uniform setups. For our UEP setup with more reliable parity bits, EXIT functions are used to investigate how the change in the mutual information of parity bits affects the behavior of the overall coding scheme. We discuss an alternative derivation of the EXIT functions for BEC from a combined channel perspective, and using the decoding model introduced in \\cite{alexei_bec}, we derive EXIT functions of variable nodes (VNs) and check nodes (CNs) of an RA LDPC code for BSC. Our experimental results demonstrate the effectiveness of this UEP idea as it achieves up to about $17\\%$ and $28\\%$ threshold gains on BEC and BSC, respectively. The ideas and results we present in this paper are the first step towards developing the UEP theoretical framework for modern data storage systems.\n\n\nThe rest of the paper is organized as follows. In Section~\\ref{sec_prelim}, we discuss the preliminaries. In Section~\\ref{sec_steps}, we introduce our theoretical methodology and derive the (LP) problem for our UEP idea. In Section~\\ref{sec_bec}, we apply this methodology to BEC, and show threshold gains. In Section~\\ref{sec_bsc}, we do the same for BSC. Section~\\ref{sec_conc} concludes the paper.\n\n\\section{Preliminaries}\\label{sec_prelim}\n\nWe use the decoding model shown in Fig.~\\ref{fig_1} for EXIT chart analysis. A binary-symmetric source produces bits that take $0$ or $1$ value with equal probability. There exist $P(\\underline{y}|\\underline{x})$ \\textit{communication channel} and $P(\\underline{w}|\\underline{v})$ \\textit{extrinsic channel} with output vectors $\\underline{y}$ and $\\underline{w}$ (noisy versions of the inputs $\\underline{x}$ and $\\underline{v}$) and LLRs $\\underline{c}$ and $\\underline{a}$, respectively. Fig.~\\ref{fig_1} models iterative decoding where the extrinsic channel, which is actually an artificial channel, models extrinsic information coming from the previous decoding iteration \\cite{alexei_bec}. The decoder uses outputs of both channels $\\underline{y}$ and $\\underline{w}$ to calculate \\textit{a-posteriori} and \\textit{extrinsic} LLRs $\\underline{d}$ and $\\underline{e}$ of $\\underline{v}$, respectively. See \\cite{alexei_bec} for more details.\n\n\\begin{figure}\n\\vspace{-0.3em}\n\\centering\n\\includegraphics[trim={2.1in 3.5in 2.2in 1.7in},clip,width=3.7in]{decoding_model_switch.pdf}\n\\vspace{-2.7em}\n\\caption{Decoding model for EXIT analysis}\n\\label{fig_1}\n\\vspace{-0.8em}\n\\end{figure}\n\nWe investigate the threshold gains of higher protection of the~parity bits of an RA code using EXIT chart analysis. When considering VNs of the RA code, the switch in Fig.~\\ref{fig_1} is closed, resulting in $\\underline{u} = \\underline{x}$, $\\underline{u}$ is one bit, and the Encoder is a repetition code with length $d_{\\textup{v}}$ (VN degree). 
In contrast, when considering the CNs of the RA code, the switch in Fig.~\\ref{fig_1} is open, and the Encoder is a single parity-check (SPC) code with length $d_{\\textup{c}}$ (CN degree) \\cite{alexei_bec}. We investigate the effect of UEP on RA codes in two setups. In the first setup, both the communication and extrinsic channels are BECs with erasure probabilities $q$ and $p$, respectively. In the second, both channels are BSCs with crossover probabilities $\\epsilon$ and $\\delta$, respectively.\n\nIn the model of iterative decoding, the extrinsic LLR $e_{i}$ at the decoder output in one iteration re-enters the decoder as the a-priori LLR $a_{i}$ after passing through an interleaver in the next iteration \\cite{alexei_bec, brink_cid}. This is consistent with the modelling of the extrinsic channel discussed above. Next, EXIT functions are defined. Let $m$ be the length of $\\underline{v}$, $\\underline{w}$, $\\underline{a}$, and $\\underline{e}$. The \\textit{average a-priori information} $I_{\\textup{A}}$ going into the decoder is then \\cite{alexei_bec}:\n\\begin{equation} \\label{IA}\nI_{\\textup{A}} = \\frac{1}{m}\\sum_{i=1}^{m} I(V_{i};A_{i}) = I(V_{1};A_{1}).\n\\end{equation} \nThe second equality follows from the observation that the $V_{i}$, for all $i$, have the same distribution and that the extrinsic channel is memoryless and time invariant. The \\textit{average extrinsic information} $I_{\\textup{E}}$ coming out of the decoder is \\cite{alexei_bec}:\n\\begin{equation} \\label{IE}\nI_{\\textup{E}} = \\frac{1}{m}\\sum_{i=1}^{m} I(V_{i};E_{i}) = I(V_{1};E_{1}) = I(V_{1};\\underline{Y},\\underline{A}_{[1]}),\n\\end{equation} \nwhere an a-posteriori probability (APP) decoder is assumed. We write random variables with upper case letters and their realizations with lower case letters. $\\underline{A}_{[i]}$ denotes the vector $\\underline{A}$ with the $i$th entry removed. The third equality in (\\ref{IE}) follows from the proposition proved in \\cite{alexei_bec} for APP decoders with extrinsic message passing. This allows the extrinsic information to be defined as the mutual information between the input of the extrinsic channel and the inputs of the decoder instead of the extrinsic LLR at the output of the decoder. An EXIT chart plots the extrinsic information as a function of the a-priori information~\\cite{brink_cid}.\n\n\\section{Methodology for UEP Analysis}\\label{sec_steps}\n\nFor our UEP setup, we use EXIT functions to see how a change in the mutual information of the parity bits affects the behavior of the overall coding scheme. In this section, we illustrate the code construction and present our methodology as a sequence of steps, which are applied to BEC and BSC in Sections~\\ref{sec_bec} and~\\ref{sec_bsc}.\n\nTo guarantee the diffusion of higher-fidelity messages from parity to input information bits, a specific property is required in the code construction. That is, for each VN representing an input information bit, there exists at least one VN representing a parity bit connected to it through a CN. This property is satisfied in RA LDPC codes with a parity-check matrix $H$ of the form $H = [J\\textup{ }P]$, where $P$ is an $(n-k)\\times(k+1)$ sparse matrix and $J$ is an $(n-k)\\times(n-k-1)$ matrix of the form:\n\\begin{equation}\nJ = \n\\begin{bmatrix}\n1 & 0 & & & &...& 0 \\\\\n1 & 1 & 0 & & &...& 0 \\\\\n0 & 1 & 1 & 0 & &...& 0 \\\\\n0 & 0 & 1 & 1 &0 &...& 0 \\\\\n & & & ... & && \\\\\n0 & ... & & & 0 & 1 & 1 \\\\\n0 & ... 
& & & & 0 & 1 \n\\end{bmatrix}.\n\\end{equation}\nAssuming that the first column of $P$ has weight $2$, and it is linearly independent from the columns of J, the first $(n-k)$ bits of the RA codeword (corresponding to the columns of $J$ and the first column of $P$) can be considered parity bits, whereas the last $k$ bits can be considered input information bits. Also assuming $d_{\\textup{c}} > 3$, each parity bit (except the first, $(n-k-1)$-th, and the last bits) is connected to another parity bit and at least $d_{\\textup{c}}-3$ input information bits, which satisfies the required property. ($P$ has no $\\underline{0}$ columns.)\n\nWith this RA code construction, we can now derive EXIT functions for our UEP setup via the following steps. Let the communication and extrinsic channels have error probabilities $\\sigma$ and $\\beta$, respectively.\n\n\\noindent\\textbf{Step~1:} Consider the VNs of the RA code. First, derive the extrinsic information $I_{\\textup{E,v}}$ as a function of the a-priori information $I_{\\textup{A,v}}$ for the VNs without considering UEP, i.e., assuming both input information and parity bits are transmitted via a communication channel with fixed error probability $\\sigma$. This step assumes fixed $d_{\\textup{v}}$ for simplicity.\n\n\\noindent\\textbf{Step~2:} Consider the CNs of the RA code. Derive the extrinsic information $I_{\\textup{E,c}}$ as a function of the a-priori information $I_{\\textup{A,c}}$ without considering UEP. Next, derive the inverse EXIT function of CNs.\\footnote{Always $I_{\\textup{A,v}} = I_{\\textup{A,c}}$. Thus, we use $I_{\\textup{A}}$ notation in the rest of the paper.} We adopt fixed $d_{\\textup{c}}$ in our RA code construction.\n\n\\noindent\\textbf{Step~3:} We are now ready to apply unequal protection on parity and input information bits. Let all CNs have degree $d_{\\textup{c}}$. Let $\\lambda_i$ be the fraction of branches (edges) connected to VNs of degree $i$, which we refer to as the {\\textit{degree distribution}}\\cite{jin_ira}.\\footnote{In this paper, we refer to an edge that is adjacent to a node as \"connected\" to the node. Here, we mean they are directly connected.} Let $N$ be the total number of branches. Let $\\lambda_2 = a + b$, where $a$ is the fraction of branches connected to $(n-k)$ VNs corresponding to $(n-k)$ parity bits. It follows from the RA code construction discussed earlier that $n-k = N\\cdot \\frac{1}{d_{\\textup{c}}} = N\\cdot \\frac{a}{2} \\implies a = \\frac{2}{d_{\\textup{c}}}$, where $n-k$ is the number of CNs. Thus we have,\n\\begin{equation} \\label{constraint1}\n \\frac{2}{d_{\\textup{c}}} + b + \\sum_{i \\geq 3}\\lambda_i = 1,\n\\end{equation}\nwhich is the first constraint of the linear programming (LP) problem to be explained in the upcoming step.\n\nLet $\\sigma_{1}$ and $\\sigma_{2}$ be the error probabilities of parity bits and input information bits transmitted through the communication channel, respectively. We now derive $I^{*}_{\\textup{E,v}}$ for the UEP setup:\n\\begin{align} \\label{general_Iev}\n I^{*}_{\\textup{E,v}} &= a \\cdot I_{\\textup{E,v}}(\\sigma = \\sigma_1, d_{\\textup{v}} = 2) + b \\cdot I_{\\textup{E,v}}(\\sigma = \\sigma_2, d_{\\textup{v}} = 2) \\nonumber \\\\ &\\hspace{+1.0em}+ \\sum_{i \\geq 3}\\lambda_i \\cdot I_{\\textup{E,v}}(\\sigma = \\sigma_2, d_{\\textup{v}} = i),\n\\end{align}\nwhich is a weighted sum over branches. 
\\noindent\\textbf{Step~4:} In this step, we formulate an LP problem. Iterative decoding will be successful, i.e., convergence occurs, if the EXIT function of the VNs lies above and does not intersect the inverse of the EXIT function of the CNs \\cite{alexei_bec}, i.e.,\n\\begin{equation} \\label{constraint2}\n I^{*}_{\\textup{E,v}}(I_{\\textup{A}}) > I^{-1}_{\\textup{E,c}}(I_{\\textup{A}}), \\textup{ } I_{\\textup{A}} \\in (0,1), \n\\end{equation}\nwhich is the second constraint of the LP problem.\n\nWe calculate the code rate $R$ as follows:\n\\begin{align} \\label{rate}\n R = 1- \\frac{\\frac{1}{d_{\\textup{c}}}}{\\frac{a}{2} + \\frac{b}{2} + \\sum_{i \\geq 3} \\frac{\\lambda_i}{i}} = 1- \\frac{\\frac{1}{d_{\\textup{c}}}}{\\frac{1}{d_{\\textup{c}}} + \\frac{b}{2} + \\sum_{i \\geq 3} \\frac{\\lambda_i}{i}},\n\\end{align}\nwhich follows from $R = 1 - \\frac{n-k}{n}$ since the code has $n = N(\\frac{a}{2} + \\frac{b}{2} + \\sum_{i \\geq 3} \\frac{\\lambda_i}{i})$ VNs and $n-k = \\frac{N}{d_{\\textup{c}}}$ CNs.\n\nWe then formulate the following LP problem for finding the optimal degree distribution, i.e., the one that maximizes the rate of the code under the code construction and EXIT convergence constraints, (\\ref{constraint1}) and (\\ref{constraint2}), for pre-determined error probabilities $\\sigma_{1}$ and $\\sigma_{2}$:\n\\begin{align} \\label{LP}\n \\nonumber \\textbf{maximize } &\\; \\frac{b}{2} + \\sum_{i \\geq 3} \\frac{\\lambda_i}{i} \\\\ \\nonumber\n \\textbf{subject to } &\\; \\frac{2}{d_{\\textup{c}}} + b + \\sum_{i \\geq 3}\\lambda_i = 1, \\nonumber \\\\\n & I^{*}_{\\textup{E,v}}(I_{\\textup{A}}) > I^{-1}_{\\textup{E,c}}(I_{\\textup{A}}), \\textup{ } I_{\\textup{A}} \\in (0,1). \n\\end{align}\nSince the rate in (\\ref{rate}) is an increasing function of the objective $\\frac{b}{2} + \\sum_{i \\geq 3} \\frac{\\lambda_i}{i}$, maximizing this objective maximizes $R$.\n\nThis LP problem is derived in order to investigate the gains of unequal error protection. We solve the LP problem numerically. Given the channel probabilities, the solution of this optimization problem gives the degree distribution that achieves the highest rate. How to use the LP solution to obtain the threshold gains is discussed in the following sections.
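As an illustration of how (\\ref{LP}) can be set up numerically, the sketch below discretizes the strict inequality (\\ref{constraint2}) on a finite grid of $I_{\\textup{A}}$ values (with a small margin) and calls an off-the-shelf LP solver. It assumes that channel-specific EXIT functions are supplied as callables, e.g., those derived for the BEC in Section~\\ref{sec_bec}; the function and variable names, the grid size, and the margin are our own choices, not part of the formulation itself.
\\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def optimize_degrees(I_Ev, I_Ec_inv, sigma1, sigma2, d_c,
                     max_dv=20, n_grid=200, eps=1e-4):
    """Sketch of the degree-distribution LP: maximize b/2 + sum_i lambda_i/i.

    I_Ev(I_A, sigma, d_v): per-degree VN EXIT function (channel specific).
    I_Ec_inv(I_A): inverse CN EXIT function (channel specific).
    Decision variables x = [b, lambda_3, ..., lambda_{max_dv}]; a = 2/d_c is
    fixed by the code construction. The strict convergence constraint is
    enforced with margin eps on a grid, so this is only an approximation.
    """
    degs = [2] + list(range(3, max_dv + 1))
    a = 2.0 / d_c
    grid = np.linspace(1e-3, 1.0 - 1e-3, n_grid)
    c = -np.array([1.0 / d for d in degs])      # linprog minimizes, so negate
    # Convergence constraint at each grid point t:
    #   sum_j x_j * I_Ev(t, sigma2, d_j) >= I_Ec_inv(t) - a*I_Ev(t, sigma1, 2) + eps
    A_ub = np.array([[-I_Ev(t, sigma2, d) for d in degs] for t in grid])
    b_ub = np.array([-(I_Ec_inv(t) - a * I_Ev(t, sigma1, 2) + eps) for t in grid])
    A_eq = np.ones((1, len(degs)))              # b + sum_i lambda_i = 1 - 2/d_c
    b_eq = np.array([1.0 - a])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, None)] * len(degs), method="highs")
    return res                                  # res.x holds [b, lambda_3, ...]
\\end{verbatim}
The resulting $b$ and $\\lambda_i$'s then determine the achieved rate via (\\ref{rate}).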
\\section{Unequal Error Protection on BEC}\\label{sec_bec}\n\nLet the communication and extrinsic channels in Fig.~\\ref{fig_1} be BECs with erasure probabilities $q$ and $p$, respectively. In this section, we apply the steps outlined in Section~\\ref{sec_steps} and discuss the results of applying unequal error protection to the parity and input information bits of the RA code over the BEC.\n\n\\noindent\\textbf{\\underline{Step~1:}} When considering the VNs of the RA code, $\\underline{u} = \\underline{x}$, and the Encoder is a repetition code with length $d_{\\textup{v}}$. From \\cite{alexei_bec} (see also the intuitive explanation after Step~2), for a fixed $d_{\\textup{v}}$,\n\\begin{align}\n & I_{\\textup{A}} = I(V_{1};A_{1}) = 1-p, \\\\ \n & I_{\\textup{E,v}} = 1-qp^{d_{\\textup{v}}-1}, \\label{bec_Iev} \\\\ \n & I_{\\textup{E,v}}(I_{\\textup{A}}) = 1-q(1-I_{\\textup{A}})^{d_{\\textup{v}}-1}. \\label{fcn_bec_Iev}\n\\end{align}\n\n\\noindent\\textbf{\\underline{Step~2:}} When considering the CNs of the RA code, the switch on the top branch is open, and the Encoder is an SPC code with length $d_{\\textup{c}}$. From \\cite{alexei_bec} (see also the intuitive explanation below) and for a fixed $d_{\\textup{c}}$,\n\\begin{align}\n & I_{\\textup{A}} = I(V_{1};A_{1}) = 1-p, \\\\\n & I_{\\textup{E,c}} = (1-p)^{d_{\\textup{c}}-1}, \\label{bec_Iec} \\\\\n & I_{\\textup{E,c}}(I_{\\textup{A}}) = (I_{\\textup{A}})^{d_{\\textup{c}}-1}, \\label{fcn_bec_Iec} \\\\\n & I^{-1}_{\\textup{E,c}}(I_{\\textup{A}}) = (I_{\\textup{A}})^{\\frac{1}{d_{\\textup{c}}-1}}. \\label{fcn_bec_Iec_inv}\n\\end{align}\n\nTo intuitively explain (\\ref{bec_Iec}), we can think of a combined BEC setup. Let us consider the CN side first. For a CN to send a correct message to a VN, the messages from all other $d_{\\textup{c}}-1$ neighboring VNs must be correct. The probability of getting correct information from a VN straight out of a BEC($p$) \\textit{channel} is $1-p$. Note that here, the \\textit{channel} refers to the extrinsic channel, not the communication channel. The probability that all those $d_{\\textup{c}}-1$ VNs send correct messages is $p_{\\textup{correct}} = (1-p)^{d_{\\textup{c}}-1}$. Thus, the probability of receiving wrong (erased) information is $p_{\\textup{wrong}} = 1-p_{\\textup{correct}} = 1-(1-p)^{d_{\\textup{c}}-1}$, which is the erasure probability ($p_{\\textup{erasure}}$) of the new combined channel. Hence, the mutual information under equiprobable inputs (the capacity) of the combined BEC is $I_{\\textup{E,c}} = 1-p_{\\textup{erasure}} = 1-(1-(1-p)^{d_{\\textup{c}}-1}) = (1-p)^{d_{\\textup{c}}-1}$, which is (\\ref{bec_Iec}).\n\nThe same logic can be applied to the VN side to get (\\ref{bec_Iev}), where the erasure probability of the combined channel is now $qp^{d_{\\textup{v}}-1}$. For a VN to send a wrong (erased) message to a CN, two events must occur: the information at that VN is erased by the communication channel, and all messages coming to that VN from the other $d_{\\textup{v}}-1$ CNs are also erased. The probability of the first event is $q$, and the probability of the second event is $p^{d_{\\textup{v}}-1}$. Note that any single CN with non-erased information suffices to resolve the erasure at that VN (unlike the BSC; see Section~\\ref{sec_bsc}). Hence, $qp^{d_{\\textup{v}}-1}$ is the erasure probability $p_{\\textup{erasure}}$ of the new combined channel. Thus, $I_{\\textup{E,v}} = 1-p_{\\textup{erasure}} = 1-qp^{d_{\\textup{v}}-1}$, which is (\\ref{bec_Iev}).
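The BEC expressions (\\ref{fcn_bec_Iev}) and (\\ref{fcn_bec_Iec_inv}) are straightforward to implement in the form expected by the numerical sketches given earlier; the snippet below is illustrative only, and the parameter values in the commented example are arbitrary.
\\begin{verbatim}
def bec_I_Ev(I_A, q, d_v):
    """VN EXIT function over the BEC: 1 - q * (1 - I_A)^(d_v - 1)."""
    return 1.0 - q * (1.0 - I_A) ** (d_v - 1)

def bec_I_Ec_inv(I_A, d_c):
    """Inverse CN EXIT function over the BEC: I_A^(1 / (d_c - 1))."""
    return I_A ** (1.0 / (d_c - 1))

# Hypothetical usage with the LP sketch above, e.g. d_c = 6, q1 = 0.05, q2 = 0.30:
# res = optimize_degrees(bec_I_Ev, lambda t: bec_I_Ec_inv(t, 6),
#                        sigma1=0.05, sigma2=0.30, d_c=6)
\\end{verbatim}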
\\begin{table*}\n\\caption{Threshold Gains of UEP at Various Erasure Probabilities When LP is Solved for Uniform and Unequal Error Protection on BEC}\n\\vspace{-0.5em}\n\\centering\n\n\\begin{tabular}{|c|c|c|c|c|c!{\\vrule width 0.85pt}c|c|c|c|c|c|}\n\\hline\n\\multicolumn{6}{|c!{\\vrule width 0.85pt}}{\\makecell{\\textbf{First method:} LP solved for uniform protection}} & \\multicolumn{6}{c|}{\\makecell{\\textbf{Second method:} LP solved for unequal protection}} \\\\\n\\hline\nRate & $q_{\\textup{uniform}}$ & $q'_1$ & {$q'_2$} & $q'_{\\textup{avg}}$ & \\textbf{Gain} & Rate & $q_1$ & {$q_2$} & $q_{\\textup{avg}}$ & $q'_{\\textup{uniform}}$ & \\textbf{Gain} \\\\\n\\hline\n$0.6316$ & $0.28$ & $0.003$ & $0.488$ & $0.3093$ & $10.5\\%$ & $0.6376$ & $0.050$ & $0.500$ & $0.3369$ & $0.2883$ & $16.9\\%$ \\\\\n\\hline\n$0.6430$ & $0.25$ & $0.002$ & $0.426$ & $0.2746$ & $9.9\\%$ & $0.6644$ & $0.080$ & $0.430$ & $0.3126$ & $0.2739$ & $14.1\\%$ \\\\\n\\hline\n$0.7237$ & $0.20$ & $0.003$ & $0.299$ & $0.2172$ & $8.6\\%$ & $0.7347$ & $0.080$ & $0.300$ & $0.2416$ & $0.2180$ & $10.8\\%$ \\\\\n\\hline\n$0.7853$ & $0.14$ & $0.003$ & $0.188$ & $0.1483$ & $5.9\\%$ & $0.7876$ & $0.090$ & $0.210$ & $0.1845$ & $0.1750$ & $5.4\\%$ \\\\\n\\hline\n$0.8527$ & $0.10$ & $0.004$ & $0.122$ & $0.1046$ & $4.6\\%$ & $0.8526$ & $0.004$ & $0.122$ & $0.1046$ & $0.1009$ & $3.7\\%$ \\\\\n\\hline\n\\end{tabular}\n\n\\label{table1}\n\\vspace{-0.25em}\n\\end{table*}\n\n\n\\noindent\\textbf{\\underline{Step~3:}} We now apply the UEP setup explained in the previous section to derive $I^{*}_{\\textup{E,v}}$. The parity and input information bits transmitted through the communication channel face erasure probabilities $q_{1}$ and $q_{2}$, respectively. Using (\\ref{general_Iev}) and (\\ref{fcn_bec_Iev}), the EXIT function is derived to be:\n\\begin{align} \\label{UEP_bec_fcn}\n I^{*}_{\\textup{E,v}}(I_{\\textup{A}}) &= a \\cdot (1-q_{1}(1-I_{\\textup{A}})) + b \\cdot (1-q_{2}(1-I_{\\textup{A}})) \\nonumber \\\\ &\\hspace{+1.0em}+ \\sum_{i \\geq 3}\\lambda_i \\cdot (1-q_{2}(1-I_{\\textup{A}})^{i-1}).\n\\end{align}\n\\vspace{-0.6em}\n\n\\noindent\\textbf{\\underline{Step~4:}} Substituting (\\ref{fcn_bec_Iec_inv}) and (\\ref{UEP_bec_fcn}) in (\\ref{constraint2}) completes the LP problem in (\\ref{LP}). We now solve this LP problem to find the optimal degree distribution. That is, we maximize the rate of the code in (\\ref{rate}), subject to the degree distribution constraint (\\ref{constraint1}) and the EXIT convergence constraint (\\ref{constraint2}).\n\\newline\n\nWe can find the solution ($a$, $b$, and the $\\lambda_i$'s) of the LP problem numerically using an LP solver. In our software, we implement two methods to investigate the threshold gains of our UEP idea over uniform protection. In the first method, we mimic the approach adopted in data storage devices. In particular, the graph-based code is designed and optimized assuming all codeword bits will have the same protection \\cite{ahh_loco,ahh_md}. As the device ages, the constrained code can be applied to the parity bits only, resulting in UEP and achieving density\/lifetime gains \\cite{ahh_loco}. Thus, in our first setup, we solve the LP problem for the uniform protection case, i.e., $q_1 = q_2 = q_{\\textup{uniform}}$, and find the optimal degree distribution. An appropriate $d_{\\textup{c}}$ is chosen according to $q_{\\textup{uniform}}$. A code with this degree distribution has \\textit{threshold} $q_{\\textup{uniform}}$.
After solving the LP problem for the uniform setup, we apply higher protection on parity bits (with $q'_1