diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzihfb" "b/data_all_eng_slimpj/shuffled/split2/finalzzihfb" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzihfb" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\nAs part of the broader quantum cognition program a number of researchers have considered whether there is evidence for violation of contextual inequalities in psychology (e.g., \\cite{Aerts2013}, \\cite{Asano2014}, \\cite{Bruza2015}, \\cite{Bruza2015b}). Such inequalities are derived on the assumption that there exist hidden joint preference or probability states for the psychological observables being measured. This is, loosely, equivalent to assuming that judgment processes giving rise to choices between different options operate independently, which is an important constraint on the processes underlying human decision making.\n\nOne complicating factor is that it is hard to rule out the possibility of direct influence between measurements, which can mimic the effect of true contextuality. In other words, contextuality means the outcome of a judgment about observable A has an apparent and unexpected dependence\non what else is being measured, but that can also occur if the outcome of the other measurements directly influence A.\n\nThe absence of such direct influences must be justified or explicitly tested in any particular application. In physics such influences can sometimes be ruled out by reference to some physical principle, but nothing equivalent in psychology can be used to rule out direct influences a priori. 
The challenge of quantifying exactly when violations of contextual inequalities can be accounted for by direct influences, and when they can only be explained by genuine contextuality, was taken up by \\cite{DK}, who derived modified inequalities which, they claim, allow the identification of true contextuality not explainable by \n``signalling'', a specific form of direct influence which we will explain below.\n\nIn a series of papers Dzhafarov and collaborators re-analyzed existing experimental claims of contextuality in psychology and concluded they could all be explained by signalling (\\cite{Dzhafarov2015}, \\cite{Dzhafarov2016}). However, in an elegant paper \\cite{Cervantes2018} presented an experiment which did satisfy their modified contextual inequality, and in a recent paper \\cite{Basieva2019} followed this up with a series of experiments, the majority of which produced data demonstrating genuine contextuality, i.e., over and above that explicable by signalling, according to Dzhafarov and Kujala's modified inequalities.\n\nIn this comment we explain why we nevertheless do not believe that the results presented by \\cite{Basieva2019} provide conclusive\n evidence for contextuality in human decision making. We only consider in detail the form of experiment conducted by \\cite{Basieva2019} and \\cite{Cervantes2018}, but we use the insight gained to question whether contextuality could ever be observable in human decision making.\n\n\\section{Outline of \\cite{Basieva2019}}\n\nConsider one of the experiments in \\cite{Basieva2019}: \n\n``Alice wishes to order a two-course meal. For each course she can choose a high-calorie option (indicated by H) or a low-calorie option (indicated by L). Alice does not want both courses to be high-calorie nor does she want both of them to be low-calorie.''\n\nEach participant was given two out of three courses (Starter, Main, Dessert) to choose from. 
Clearly participants should select options so that while the calorie content of, e.g., the starter is undetermined, it is anti-correlated with the calorie content of the main (indeed this restriction was forced on participants in the experiment). We can easily see why \\cite{Basieva2019} expect to see evidence of contextuality: the underlying probability distribution for all three courses needs to have three binary anti-correlated variables, which is impossible. In other words, the calorie content of at least two of the dishes has to match, since there are three courses and only two calorie options, but then one cannot have the three choices anti-correlating.\n\nThe specific inequality \\cite{Basieva2019} test is an example of the modified Bell-type inequalities derived by Dzhafarov and Kujala (2015).\nThey use the contextuality by default (CbD) approach, in which a set of random variables $R_n$ each taking values $\\pm 1 $ may take different values in different measurement contexts. So $R_n^m$ denotes the value of $R_n$ in the measurement context labeled by $m$. In the above example $n,m=1,2,3$ and context $1$ might be\nthe condition where participants were asked to choose options for Starter and Main. In the CbD approach, direct influence is defined in terms of the degree to which $R_n^m$ and $R_n^k$ may be different.\n\n\nThe modified Bell-type inequalities consist of standard Bell-type inequalities but modified by terms of the form\n\\begin{equation}\n \\Delta = |E[R^1_1]-E[R^3_1]|+|E[R^1_2]-E[R^2_2]|+|E[R^2_3]-E[R^3_3]|,\n\\label{eq1}\n\\end{equation}\nwhere $E[\\cdot]$ denotes an expectation value. 
The quantity $\\Delta$ is held to be a measure of signalling in this situation, which is thus seen to be defined in terms of an {\\it average} of the degree of direct influence, since it involves only terms of the form $E[R_n^m - R_n^k]$.\nSuch modified inequalities permit the identification of contextuality beyond that explainable by signalling.\nIn the specific model considered by \\cite{Basieva2019} the modified Bell-type inequality has the simple form\n\\begin{equation}\n\\Delta < 2,\n\\label{eq1.5}\n\\end{equation}\nhere written in a notational form opposite to the traditional form of the Bell inequalities, so that contextuality is deemed to be present if this equation is satisfied.\nWhat \\cite{Basieva2019}\ndid was to show that this inequality was satisfied by data collected in a number of different experiments which were variants of the one outlined above. \n\nHowever, it is intuitively obvious how participants could solve the problem set by \\cite{Basieva2019}; given two courses to select, e.g., starter and main, choose the calorie content of the first one randomly, then make the opposite choice for the second course. This complies with the instructions and is ``non-contextual'' in a colloquial sense, since it makes no reference to measurement contexts. However it does not sit well with the idea of a pre-existing preference, since one of the judgments is made deterministically based on the other, with no reference to existing preferences. We therefore need to apply a more precise measure of contextuality. Also, as described, this strategy has the feature that it will tend to produce equal preferences for each course, which is not what was observed in \\cite{Basieva2019}. So we need to establish that this heuristic can generalise to cases where the preferences are not equal.\n\n\n\\section{A Possible Non-Contextual Account?}\n\nIf the variables of interest are thought to be the same in different contexts (e.g. 
dish choice for main in the context of starter, dish choice for main in the context of dessert)\nnon-contextuality means that there exists a joint probability matching the set of marginal probabilities characterizing the data. Contextuality is thus defined to be the absence of such a distribution. This definition of contextuality is essentially the same as that frequently employed both in physics (see for example \\cite{Abramsky2011}) and in cognitive models in psychology (see for example, \\cite{Oaksford2007}). \nIf the variables are allowed to take different values in different contexts, then the way this standard notion of non-contextuality is extended in the CbD approach is to require that the variables vary as little as possible across different contexts (Dzhafarov and Kujala, 2015), about which more below.\n\n\n\nLet us begin with our idealized version of the experiment: assume participants solve the problem by choosing the calorie content of the first course randomly, then making the opposite choice for the second course. The expectation value of any of the variables therefore equals zero, regardless of the context in which it is measured. That means,\n\\begin{equation}\n \\Delta =0,\n\\end{equation}\nso Eq.(\\ref{eq1.5}) is satisfied and \\cite{Basieva2019}\nwould presumably claim genuine contextuality in this case, i.e. contextuality over and above that which could be explained by signalling. \n\nHowever, it is still possible to write down a probability distribution on the variables $R^1_1, R^1_2, R^2_2, R^2_3, R^3_1, R^3_3$ which has these correlations and expectation values:\n\\begin{equation}\np(R^1_1, R^1_2, R^2_2, R^2_3, R^3_1, R^3_3)=\\frac{1}{64}(1-R^1_1 R^1_2)(1-R^2_2 R^2_3)(1-R^3_1 R^3_3).\n\\end{equation}\nNote $E[R^1_1 R^1_2]=-1$ etc., as required, and $E[R^i_j]=0$ for all variables and contexts. This proves that a particular type of non-contextual account of this idealisation of the \\cite{Basieva2019} experiments is possible. 
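These properties of the distribution can be checked by direct enumeration. The following sketch (our own numerical check, not part of the original analysis) verifies that the distribution is normalized, that every single-variable expectation vanishes (so the signalling measure is zero), that each context is perfectly anti-correlated, and that, by contrast, no context-independent assignment of three binary values can be pairwise anti-correlated:

```python
from itertools import product

# Joint distribution over (R11, R12, R22, R23, R31, R33), each +/-1:
# p = (1/64) * (1 - R11*R12) * (1 - R22*R23) * (1 - R31*R33)
def p(r11, r12, r22, r23, r31, r33):
    return (1 - r11 * r12) * (1 - r22 * r23) * (1 - r31 * r33) / 64

states = list(product([-1, 1], repeat=6))

# The distribution is non-negative and normalized.
total = sum(p(*s) for s in states)
assert abs(total - 1) < 1e-12
assert all(p(*s) >= 0 for s in states)

def E(f):
    """Expectation of f(R11, R12, R22, R23, R31, R33) under p."""
    return sum(f(*s) * p(*s) for s in states)

# All single-variable expectations vanish, so Delta = 0 ...
assert all(abs(E(lambda *s, i=i: s[i])) < 1e-12 for i in range(6))
# ... and within each context the pair is perfectly anti-correlated.
assert abs(E(lambda r11, r12, *_: r11 * r12) + 1) < 1e-12

# By contrast, no single set of three +/-1 values is pairwise
# anti-correlated, so no context-independent joint distribution
# can reproduce these correlations.
assert not any(a * b == -1 and b * c == -1 and a * c == -1
               for a, b, c in product([-1, 1], repeat=3))
```

The final assertion is the enumeration behind the informal argument that, with three courses and two calorie options, at least two choices must match.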
The explanation is a direct influence of $R^1_1$ on $R^1_2$ etc., such that the value of one random variable in a given context is set equal to minus the value of the other one. This does not conform to the standard notion of a non-contextual model in the CbD approach, since it involves probabilities on variables permitted to take different values in different contexts, but without the requirement of minimal variation across different contexts.\nHowever, it is clearly still of interest since it gives a probabilistic explanation of the data in terms of the action of direct influences. \n\n\nWe note an interesting property of this probability distribution, which is that it clearly factorises as:\n\\begin{equation}\np(R^1_1, R^1_2, R^2_2, R^2_3, R^3_1, R^3_3)=p(R^1_1,R^1_2)p(R^2_2,R^2_3)p(R^3_1,R^3_3).\\label{propfac}\n\\end{equation}\nOne consequence is that correlation functions between the same variable in different contexts are zero, e.g., $E[R^1_1 R^3_1]=0$. The reason, in terms of a process account, is that $R^3_1$ is basically set by $R^3_3$, which is an independent random variable. So the effect of the direct influence is to remove correlations between the same variable in different contexts.\n\nThis idealisation is interesting because the fact that the expectation values of all individual variables are zero means the modified contextual inequality of Dzhafarov and Kujala (2015) reduces to the one in the absence of signalling. In other words, although our account of this experiment involves direct influence between variables measured in the same context, \nit does not involve the weaker notion of signalling. \nThis suggests the origin of the discrepancy between the claims in \\cite{Cervantes2018} and \\cite{Basieva2019} and our construction of a non-contextual model lies in the definition of signalling used by Dzhafarov and Kujala (2015). 
We will explore this further below.\n\nOur idealisation of the experiments in \\cite{Basieva2019} is informative, but the results they reported had non-zero expectation values for $R^1_1, R^2_2$ and $R^3_3$. We can modify our account to deal with this by taking the joint probability to have the same form as Eq.(\\ref{propfac}) above, where now\n\\begin{equation}\np(R^m_i,R^m_j)= \\frac{1}{4}(1+(R^m_i-R^m_j)E[R^m_i]-R^m_i R^m_j),\n\\end{equation}\nwhere the $E[R^m_m]$ are the measured expectation values.\n\nThis obviously has the correct values for the measured expectation values and correlations. It also has the same interpretation, namely that there is a direct influence between, e.g., $R^1_1$ and $R^1_2$, such that, on measuring the value of $R^1_1$, the value of $R^1_2$ is set to minus this. The correlations between variables measured in different contexts are no longer zero; however, we have $E[R^m_i R^n_i]=E[R^m_i] E[R^n_i]$, so they are still independent. There are no constraints on the $E[R^m_m]$ in order that this construction be valid: the expression is a valid probability distribution for any $E[R^m_i]\\in[-1,1]$. \n\nWe have therefore shown by explicitly constructing a joint probability distribution that the experimental results reported in \\cite{Basieva2019} can be accounted for by a particular type of non-contextual model which includes direct influences. More precisely, a model is possible in which preferences for the three dish choices are well defined at all times, but there is an explicitly modeled disturbance caused by eliciting a preference which explains the apparently contextual data.\n\n\\section{Signalling vs Direct Influence}\n\n\nThe roots of our disagreement with \\cite{Basieva2019} lie in the work of Dzhafarov and Kujala (2015), which may be regarded as a generalization of the famous result of \\cite{Fine1982},\nwho established the conditions under which certain sets of marginal probabilities possess a joint probability distribution. 
A crucial assumption in Fine's work is that overlapping pairwise marginal probabilities are compatible with each other, a condition referred to as marginal selectivity, which in the present application reduces to a set of simple conditions of the form\n\\begin{equation}\nE[R_i^j] = E[R_i^k], \\label{E10}\n\\end{equation}\nin other words, the average values of all variables $R_i^j$ are independent of context.\n Dzhafarov and Kujala (2015) essentially demonstrate how to extend Fine's result to embrace the case in which marginal selectivity fails.\n\n\nThis generalized Fine's theorem leads to the set of Bell-type inequalities mentioned above, which Dzhafarov and Kujala (2015) claim to be tests for contextuality in the presence of signalling.\nAs indicated, they define signalling as a failure of conditions such as Eq.(\\ref{E10}), or more generally, by non-zero values of the quantity $\\Delta$ defined in Eq.(\\ref{eq1}).\nSince this definition of signalling corresponds to the {\\it average} degree of direct influence, it leaves a residual degree of direct influence which has the power to explain apparent contextuality not explained by signalling.\nThe presence of direct influence is characterized precisely by non-zero values of probabilities of the form $p(R_i^j \\ne R_i^k)$ (i.e., the probabilities that the same variable measured in different contexts gives different results). This probability can be non-zero even when Eq.(\\ref{E10}) holds. Indeed this possibility occurs in our model above, where direct influence is present trial to trial but averages to zero. \n\nThe difference between signalling and more general direct influence is not very apparent in the approach of Dzhafarov and Kujala (2015) since in their definition of non-contextuality, the underlying joint probability is required to change as little as possible across different contexts. They implement this by requiring that the probabilities of the form $p(R_i^j \\ne R_i^k)$ are \n{\\it minimized}. 
The minimum then depends only on terms proportional to $\\Delta$, Eq.(\\ref{eq1}), hence notions of signalling and more general direct influence coincide in this situation.\n\n\nTo put all this another way, in order to claim contextuality, it is necessary to show that there is no other possible account of the correlations. In physics it is necessary to go to some lengths to be sure of this. The attitude one needs to adopt is of the ``worst case scenario'', where the direct influence is as hard to detect as possible. Only by ruling out this sort of stubborn direct influence can we be sure that a non-contextual account is impossible. In contrast, focussing on changes to the averages can be thought of as a ``best case scenario'', where the direct influence is as easy to detect as possible. Ruling out changes to the average distributions is necessary, but not sufficient to rule out direct influences, because one could imagine, for example, that the process of measuring A changes the correlation between A and B, but not the averages. This clearly implies a direct influence between A and B, but one which is not detectable from the averages alone.\n\nTo give a simple example, imagine we have two coins which are tossed either together or independently and under either circumstance both have probability $1\/2$ of coming up heads. Suppose however that the coins always come up the same when tossed together. There is no signalling in the sense defined above, but there is clearly a direct influence trial to trial.\n\nThe notion of direct influence beyond the average considered here may however not be readily detectable without more elaborate measurements, a key qualitative difference to the weaker notion based just on signalling, which involves readily measurable quantities. In physics such measurements are not hard to devise and higher order signalling conditions that detect direct influences beyond the average have been proposed, e.g. 
by \\cite{CK2015}.\nThis could be a lot harder in psychological experiments, which means that a definition of contextuality phrased only in terms of what can actually be measured is a reasonable one.\n\nIn summary, we see that the claims of Dzhafarov \\& Kujala (2015) about the presence of contextuality beyond that explainable by signalling hinge on a notion of signalling as average direct influence, a notion weaker than that used in physics (where ``signalling'' is more commonly associated with direct influences more generally). We have found that a particular type of non-contextual model is possible if direct influence beyond the average is taken into account.\n\n\n\n\n\n\n\n\n\n\n\n\\section{Discussion}\n\nThe above results raise two interesting questions: first, is it ever possible to rule out direct influences in a psychology setting? This remains an open question, but we suspect the answer is negative. In physics one can always reproduce the results of quantum theory with a model which is non-contextual but non-local \\citep{Bohm1952}. In physics such accounts can be ruled out on the basis of a {\\em physical} principle, locality, but this is an additional assumption going beyond statements about the statistics of measurements. There is nothing equivalent in psychology that would supply such a clear cut limit on the set of allowable models. \n\nSecond, if we cannot rule out models involving direct influence, does ruling out models involving the weaker notion of signalling tell us anything useful? In one sense the answer is clearly no - violations of contextual inequalities such as Eq.(\\ref{eq1.5}) have been billed as tests of the necessity of a contextual (quantum) account of human decision making, and we have seen that such violations do not in fact rule out all possible non-contextual accounts, and therefore cannot definitively prove the necessity of a quantum model for such data. 
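As a concrete illustration of this point, the two-coin example from the previous section can be verified directly. The short check below (our own sketch) shows that the marginal statistics are identical whether the coins are tossed together or independently, so the signalling measure vanishes, yet the joint statistics reveal a direct influence:

```python
from itertools import product

# Coins tossed together: always equal, outcomes (H,H) or (T,T), each with
# probability 1/2. Coins tossed independently: all four pairs have 1/4.
together = {(+1, +1): 0.5, (-1, -1): 0.5, (+1, -1): 0.0, (-1, +1): 0.0}
independent = {(a, b): 0.25 for a, b in product([+1, -1], repeat=2)}

def marginal(dist, coin):
    """Average value of one coin under the given joint distribution."""
    return sum(prob * outcome[coin] for outcome, prob in dist.items())

def correlation(dist):
    """Average value of the product of the two coins."""
    return sum(prob * a * b for (a, b), prob in dist.items())

# No signalling: each coin's average is 0 in both arrangements.
for coin in (0, 1):
    assert marginal(together, coin) == marginal(independent, coin) == 0.0

# But a direct influence is visible in the correlation.
assert correlation(together) == 1.0
assert correlation(independent) == 0.0
```

The averages alone cannot distinguish the two arrangements; only the second-order statistics can, which is exactly the gap between signalling and direct influence discussed above.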
\n\nDoes this mean contextual inequalities have nothing to teach us in psychology? Not necessarily. It has previously been argued \\citep{Yearsley2014} that data satisfying Dzhafarov and Kujala's (2015) inequalities presents us with a choice: either we can construct a model which is non-contextual but which involves unobservable direct influences, or we can construct a model which only involves observed quantities, but which combines them in a contextual way. The correct way to proceed is not fixed by any mathematical law, but depends on the goals of the researcher.\n\nAdded note: after completion of this work we were made aware that a number of other authors have made closely related observations, including \\cite{AF2019}, \\cite{Cal2018} and \\cite{Jones2019}. These connections will be addressed in future publications.\n\n\\section{Acknowledgments}\nWe are grateful to Samson Abramsky, Jerome Busemeyer, Ehtibar Dzhafarov,\nAndrew Simmons and Rob Spekkens for useful communications on this topic.\n\n\\section{Introduction}\n\\subsection{Art gallery problem}\nThe original form of the art gallery problem presented by Victor Klee (see O'Rourke \\cite{ArtGalleryTextbook}) is as follows:\n\nWe say that a closed polygon $P$ can be \\emph{guarded} by $n$ guards if there is a set of $n$ points (called the \\emph{guards}) in $P$ such that every point in $P$ is \\emph{visible} to some guard, that is the line segment between the point and guard is contained in the polygon. The problem asks, for a given polygon $P$ (which we refer to as the \\emph{art gallery}), to find the smallest $n$ such that $P$ can be guarded by $n$ guards. \n\nThe vertices of $P$ are usually restricted to rational or integral coordinates, but even so an optimal configuration might require guards with irrational coordinates (see Abrahamsen, Adamaszek and Miltzow \\cite{IrrationalGuards} for specific examples of polygons where this is the case). 
For this reason, we don't expect algorithms to actually output the guarding configurations, only to determine how many guards are necessary.\n\nThe art gallery problem (and any variant thereof) can be phrased as a decision problem: ``can gallery $P$ be guarded with at most $k$ guards?'' The complexity of this problem is the subject of this paper. Approximation algorithms can also be studied, see for example Bonnet and Miltzow \\cite{BestApprox}.\n\n\\subsection{The complexity class $\\exists\\mathbb R$}\n\nThe decision problem ETR asks whether a sentence of form:\n\\[\\exists X_1\\dots\\exists X_n \\Phi(X_1,\\dots,X_n)\\]\nis true, where $\\Phi$ is a formula in the $X_i$ involving addition, subtraction, multiplication, constants $0$ and $1$, and strict or non-strict inequalities. The complexity class $\\exists\\mathbb R$ consists of problems which can be reduced to ETR in polynomial time. A number of interesting problems have been shown to be complete for this class, including for example the 2-dimensional packing problem \\cite{PackingProblem} and the problem of deciding whether there exists a point configuration with a given order type \\cite{Mnev}\\cite{Stretchability}.\n\nIt is straightforward to show that $\\txt{NP}\\subseteq\\exists\\mathbb R$, and it is also known, though considerably more difficult to prove, that $\\exists\\mathbb R\\subseteq \\txt{PSPACE}$ (see Canny \\cite{SubsetPSPACE}). Both inclusions are conjectured to be strict.\n\n\\subsection{Art gallery variants}\n\nThere are several variants of this problem. We will be interested in ones involving restrictions on the placement of guards and of the region that must be guarded. 
Table \\ref{InitialsTable} lists these variants as well as monikers we use to refer to them.\n\n\\begin{table}[ht]\n\\begin{tabu}{|c|[1pt]c|c|}\n\\hline\n&Guard interior of $P$&Guard only the boundary of $P$\\\\\n\\tabucline[1pt]{-}\nGuards anywhere inside $P$&AG (Art Gallery)&BG (Boundary Guarding)\\\\\n\\hline\nGuards on the boundary of $P$&GOB (Guards on Boundary)&BB (Boundary-Boundary)\\\\\n\\hline\n\\end{tabu}\n\\caption{Variants of the art gallery problem}\n\\label{InitialsTable}\n\\end{table}\n\nLee and Lin \\cite{NPHardness} showed that all of these variants are NP-hard (the result is stated for the AG and GOB variants, but their constructions also work for the BG and BB variants respectively). More recently, Abrahamsen, Adamaszek, and Miltzow \\cite{ExistsRHardness} showed that the AG and GOB variants are $\\exists\\mathbb R$-complete. It is straightforward to extend their proof of membership in $\\exists\\mathbb R$ to any of these variants, but the $\\exists\\mathbb R$-hardness question remained open for the BG and BB variants. We will show that the BG variant is also $\\exists\\mathbb R$-hard:\n\n\\begin{theorem}\\label{BGMain}\nThe BG variant of the art gallery problem is $\\exists\\mathbb R$-complete.\n\\end{theorem}\n\n\\section{The problem $\\txt{ETR-INV}^{rev}$}\n\nThe proof of Theorem \\ref{BGMain} is by reduction of the problem $\\txt{ETR-INV}^{rev}$ to the BG variant of the art gallery problem.\n\n\\begin{definition}\n($\\txt{ETR-INV}^{rev}$) In the problem $\\txt{ETR-INV}^{rev}$, we are given a set of real variables $\\{x_1, \\dots, x_n\\}$ and a set of inequalities of the form:\n\n\\[x=1,\\quad xy\\ge 1,\\quad x\\left(\\frac52-y\\right)\\le 1,\\quad x+y\\le z,\\quad x+y\\ge z,\\]\nfor $x, y, z\\in \\{x_1,\\dots, x_n\\}$. 
The goal is to decide whether there is an assignment of the $x_i$ satisfying these inequalities with each $x_i\\in[\\frac12, 2]$.\n\\end{definition}\n\nAbrahamsen et al \\cite{ExistsRHardness} proved the $\\exists\\mathbb R$-hardness of the AG and GOB variants using a similar problem called ETR-INV, which essentially differs from $\\txt{ETR-INV}^{rev}$ only by having an $xy\\le 1$ constraint instead of one of the form $x\\left(\\frac52-y\\right)\\le 1$: \n\n\\begin{definition}(Abrahamsen, Adamaszek, Miltzow \\cite{ExistsRHardness})\n(ETR-INV) In the problem ETR-INV, we are given a set of real variables $\\{x_1, \\dots, x_n\\}$ and a set of equations of the form:\n\n\\[x=1,\\quad xy= 1,\\quad x+y= z,\\]\nfor $x, y, z\\in \\{x_1,\\dots, x_n\\}$. The goal is to decide whether there is a solution to the system of equations with each $x_i\\in[\\frac12, 2]$.\n\\end{definition}\n\n\\begin{theorem}(Abrahamsen, Adamaszek, Miltzow \\cite{ExistsRHardness})\nThe problem ETR-INV is $\\exists\\mathbb R$-complete.\n\\end{theorem}\n\nGeometrically, the constraint $xy\\le 1$ seems to be difficult to construct in \\emph{any} variant of the art gallery problem. The inversion gadget in \\cite{ExistsRHardness} effectively computes $x\\left(\\frac52-y\\right)=1$ and uses other gadgets to reverse the second variable. This, however, is not strictly necessary, since the proof of the $\\exists\\mathbb R$-hardness of ETR-INV can be easily modified to show the same result for $\\txt{ETR-INV}^{rev}$.\n\n\\subsection{$\\exists\\mathbb R$-hardness of $\\txt{ETR-INV}^{rev}$}\n\nThe proof of the $\\exists\\mathbb R$-hardness of ETR-INV in \\cite{ExistsRHardness} has the interesting property that every time the inversion constraint is used, at least one of the input variables is known {\\it a priori} to be in an interval $[a, b]$ of length less than $\\frac12$. If $x$ is such a variable, then it is possible with the addition constraints to create an auxiliary variable $V$ satisfying $V=\\frac52-x$. 
This allows the full inversion constraint to be constructed. The construction of $V$ follows.\n\nFirst, construct a variable equal to $\\frac12$:\n\n\\[V_1=1\\]\n\\[V_2+V_2=V_1\\]\nNext, let $V_3=x+a$ where $a=0$ or $\\frac12$, so that $V_3\\in [1, 2]$ for any value of $x$. Now:\n\n\\[V_4+V_4=V_3\\quad\\left(V_4=\\frac12x+\\frac12a\\in[\\frac12, 1]\\right)\\]\n\\[V_5+V_5=V_2\\quad\\left(V_5=\\frac14\\right)\\]\n\\[V_5+V_1=V_6\\quad\\left(V_6=\\frac54\\right)\\]\n\\[V_1+V_2=V_7\\quad\\left(V_7=\\frac32\\right)\\]\n\\[V_8+V_4=V_6\\text{ or }V_7\\quad \\left(V_8+\\frac12x+\\frac12a=\\frac54+\\frac12a,\\text{ so }V_8=\\frac54-\\frac12x\\right)\\]\n\\[V_8+V_8=V\\quad\\left(V+x=\\frac52\\right)\\]\nHere $V_6$ is used when $a=0$ and $V_7$ when $a=\\frac12$. Thus, the problem $\\txt{ETR-INV}^{rev}$ can be shown to be $\\exists\\mathbb R$-hard as in the proof in \\cite{ExistsRHardness}.\n\nSince the publication of \\cite{ExistsRHardness}, Miltzow and Schmiermann \\cite{ConvexConcave} have shown that, subject to some minor technical conditions, a continuous constraint satisfaction problem with an addition constraint, a convex constraint, and a concave constraint is $\\exists\\mathbb R$-complete. This would allow us to use constraints other than exact inversion constraints, but we did not find that this could simplify our argument.\n\n\\section{Art gallery construction}\n\n\\subsection{Notation}\n\nHere $AB$ refers to a line segment, $\\overleftrightarrow{AB}$ is the line containing that segment, and $|AB|$ is the length of that segment.\n\n\\subsection{Wedges and variable gadgets}\n\nThe first step of the construction is to prove some results that allow us to restrict the possible guarding configurations.\n\n\\begin{definition}\n(Wedge) A \\emph{wedge} is any pair of adjacent line segments on the boundary of the art gallery with an internal angle between them less than $\\pi$. The \\emph{critical region} of a wedge is the set of points which are visible to the vertex of the wedge. 
\n\\end{definition}\n\n\\ipnc{0.5}{Figures\/CriticalRegion}{A wedge ($W$) and its critical region ($R$)}\n\n\\begin{lemma}\nIn a guarding configuration, the critical region of each wedge must contain a guard.\n\\end{lemma}\n\n\\begin{proof}\nStraightforward.\n\\end{proof}\n\n\\begin{definition}\n(Guard regions and guard segments) We will designate certain regions inside the art gallery called \\emph{guard regions}, which are each formed by the intersection of the critical regions of some number of wedges. The critical regions of wedges corresponding to different guard regions must not intersect. A guard region shaped like a line segment is called a \\emph{guard segment}.\n\\end{definition}\n\n\\ipnc{0.5}{Figures\/VariableGadget}{\\label{VariableGadget} The intersection of the critical regions of the three wedges shown forms a guard segment.}\n\n\\begin{lemma}\nIf we designate $n$ guard regions, then any guarding configuration has at least $n$ guards, and a guarding configuration with exactly $n$ guards has $1$ guard in each guard region.\n\\end{lemma}\n\n\\begin{proof}\nStraightforward.\n\\end{proof}\n\nHaving one guard in each guard region is a necessary but not sufficient condition for a configuration to be a guarding configuration. The gallery we construct will have some number of guard regions, with additional constraints on the positions of the guards which can be satisfied if and only if a given $\\txt{ETR-INV}^{rev}$ problem has a solution.\n\n\\subsection{Nooks and continuous constraints}\n\n\\begin{definition}\n(Nook) A \\emph{nook} consists of a line segment on the boundary of the art gallery, called the \\emph{nook segment}, and geometry around it to restrict the visibility of that segment. The critical region of a nook is the region of points which can see some part of the nook segment.\n\\end{definition}\n\n\\ipnc{0.5}{Figures\/NookRegion}{A nook with nook segment $s$ and critical region $R$. 
This nook has a wedge on one of the side walls; it will occasionally be necessary to intersect nooks and wedges in this way.}\n\nA guarding configuration will have some non-zero number of guards in the critical region of a nook, which together must guard the nook segment. In general, no guard needs to see all of the nook segment; instead it can be guarded by a collaboration of several guards. This is used to enforce continuous constraints between guards.\n\n\\ipnc{0.5}{Figures\/NookCollab}{An art gallery which requires $2$ guards. No guard can see the entirety of the nook segment on the left while also being able to see the tips of the wedges on the right, so an optimal solution has two guards which collaborate to guard the nook segment. The allowed positions of $g_2$ depend continuously on $g_1$.}\n\nIn our construction, the critical region of each constraint nook will intersect exactly $2$ guard regions.\n\n\\subsection{Copying}\n\nIn order to use a variable in multiple constraints, we need a way to force guards on two different guard segments to have the same relative position.\n\n\\begin{lemma}\\label{CopyLemma}\nSuppose segments $AB$ and $CD$ are such that $\\overleftrightarrow{AB}$ and $\\overleftrightarrow{CD}$ are parallel, and suppose $\\overleftrightarrow{AC}$ and $\\overleftrightarrow{BD}$ intersect at a point $P$, as in Figure \\ref{ParallelCopy}. If a line through $P$ intersects $AB$ at a point $X$ and intersects $CD$ at a point $Y$, then $\\frac{|AX|}{|AB|}=\\frac{|CY|}{|CD|}$.\n\\end{lemma}\n\n\\begin{proof}\nTriangles $APB$ and $CPD$ are similar, so $\\frac{|AP|}{|AB|}=\\frac{|CP|}{|CD|}$. Also, the triangles $AXP$ and $CYP$ are similar, so $\\frac{|AX|}{|AP|}=\\frac{|CY|}{|CP|}$. 
Multiplying, we obtain $\\frac{|AX|}{|AB|}=\\frac{|CY|}{|CD|}$.\n\\end{proof}\n\n\\ipnc{0.6}{Figures\/CopyLemma}{\\label{ParallelCopy} By Lemma \\ref{CopyLemma}, we have $\\frac{|AX|}{|AB|}=\\frac{|CY|}{|CD|}$.}\n\nThis allows us to create nooks which copy between two segments. \n\n\\begin{definition}\n(Copy nook) A \\emph{copy nook} is a nook whose critical region intersects exactly two guard segments and no other guard regions. Further, the two guard segments should be parallel to the nook segment.\n\\end{definition}\n\n\\ipnc{0.6}{Figures\/CopyNook}{A copy nook. By Lemma \\ref{CopyLemma}, the position of guard $g_2$ must be above that of guard $g_1$ in order to guard the nook.}\n\nWith these nooks, we can start to arrange the art gallery.\n\n\\subsection{Art Gallery Setup}\n\nThe rough setup for the art gallery is shown in Figure \\ref{CopyScheme}. The variables $x_1,\\dots,x_n$ are represented by a bank of guard segments, called the \\emph{variable segments}. The $y$-coordinate of each of these guards represents the value of a variable in $[\\frac12, 2]$, with larger $y$-coordinates corresponding to larger values of the variable. For each time that a variable appears in a constraint, a pair of \\emph{copy nooks} force the guard on one of the \\emph{input segments} to have a position corresponding to the same value in $[\\frac12, 2]$. The \\emph{constraint gadgets} then enforce constraints on the input segments.\n\n\\ipnc{0.6}{Figures\/GallerySetup}{\\label{CopyScheme} Diagram of the art gallery construction. The copy nooks would be far enough away that it is impractical to fit them on this diagram.}\n\n\\subsection{Specification of the copy nooks}\n\nThis section is concerned with showing that it is indeed possible to arrange all the copy nooks in such a way that none of them interfere with each other. 
The critical region of each copy nook can be made almost as narrow as the convex hull of the segments being copied, and the nook itself can be made arbitrarily small and distant, so it shouldn't be surprising that it is possible to arrange copy nooks in this way. However, working out the exact details requires some tedious calculation. Figure \\ref{CopySpecification} shows the parameters describing the copy nooks that we will use.\n\n\\ipnc{0.3}{Figures\/CopySpecification}{\\label{CopySpecification} Parameters and measurements for the copy nooks. The nook on the left forces the guard on the variable segment to have a position less than or equal to the corresponding position of the guard on the input segment. The nook on the right enforces the corresponding $\\ge$ constraint. }\n\nNote that because the ratio of the length of the nook segment to the length of the input or variable segment is $1:k-1$, the points at the openings of the nooks occur $\\frac1k$th of the way between the nook segment and a guard segment.\n\nThe segments in each bank occur at regular intervals with spacing $d_0$. The two banks should be horizontally aligned, as in Figure \\ref{SegmentBanks}, but no input segment should have a guard segment directly below it. We will need $n$ variable segments; let $i$ be the number of spaces needed for input segments, and let $j$ be the index of the input segment furthest from the first variable segment. \n\n\\ipnc{0.4}{Figures\/SegmentBanks}{\\label{SegmentBanks} The variable and input segments.}\n\nIdeally, the constraint gadgets should only depend on these parameters up to a change of scale. This can be accomplished if $d_0$ is a fixed multiple of $k-1$. We will use $d_0=9(k-1)$. As we will see, this gives sufficient space between segments for the copy nooks and constraint gadgets to work.\n\nAll lines between nook segments and input segments should have slope between $-\\frac12$ and $-1$. 
The steepest of these lines in Figure \\ref{CopySpecification} has slope:\n\n\\[-\\frac{(m-h)k}{\\frac{m-h-1}{h+2}d}=-(h+2)\\frac{(m-h)k}{(m-h-1)d},\\]\n\n\\noindent and the shallowest of these lines has slope:\n\n\\[-\\frac{(m-h-1)k}{\\frac{m-h}{h}d}=-h\\frac{(m-h-1)k}{(m-h)d}\\]\n\n\\noindent The values of $d$ range from $(j-i-n)d_0$ to $jd_0$. We have set $d_0=9(k-1)$, so as long as $k\\ge 9$, we have that $8k\\le d_0\\le9k$. The value of $m$ will be chosen in such a way that $m\\ge 2h$, so if $h\\ge 10$:\n\n\\[\\frac{m-h}{m-h-1}\\le \\frac{10}9\\]\n\n\\noindent So we have:\n\n\\[(h+2)\\frac{(m-h)k}{(m-h-1)d}\\le\\frac{10(h+2)k}{9(j-i-n)d_0}\\le\\frac{5(h+2)}{36(j-i-n)}\\le 1\\]\n\n\\[h\\frac{(m-h-1)k}{(m-h)d}\\ge \\frac{9hk}{10jd_0}\\ge \\frac{h}{10j}\\ge \\frac12\\]\n\n\\noindent Rearranging:\n\n\\[h+2\\le \\frac{36}{5}(j-i-n)\\text{ and }h\\ge 5j,\\]\n\n\\noindent so it is sufficient to have $h+2\\le 6(j-i-n)$ and $h\\ge 5j$. So if $h=5j$ and $j\\ge 2+6(i+n)$, then this is satisfied, and the slopes of lines between nook segments and guard segments will be between $-1$ and $-\\frac12$ for any $k\\ge 9$ and $m\\ge 2h$.\n\nNext, we need to choose $m$ and $k$ so that none of the possible copy nooks interfere with each other. By the construction of the segment banks, $d$ can take values $qd_0$ for $q$ between $j-i-n$ and $j$; if $p>(\\ell+2)j+n$, then each copy nook has a nook segment at a different $x$-coordinate.\n\nWith $p$ (and so $m$) chosen this way, the distance between adjacent copy nooks will be at least $d_0$. The copy nooks have a height of $m+1$. The lines between copy nooks and variable or input segments have slope less than $-\\frac12$, so allowing a distance of $2(m+1)$ between the copy nooks is sufficient to prevent nooks from interfering or obstructing each other. 
So choose $k$ so that $d_0=9(k-1)>2(m+1)$.\n\nThis shows that we can construct the copy nooks themselves, but it remains to show that the critical region of each nook doesn't intersect any guard segments that it isn't supposed to.\n\n\\begin{lemma}\nIf all is as above, then each copy nook has exactly one variable segment and one input segment in its critical region.\n\\end{lemma}\n\n\\begin{proof}\nFor each pair of segments, there are $4$ nearby segments which might intersect the critical region of one of their copy nooks. These are shown in Figure \\ref{CopyRegions}.\n\n\\ipnc{0.25}{Figures\/CopyRegions}{\\label{CopyRegions} The critical regions of the copy nooks and nearby segments which they must avoid. All of the lines shown have slope less than $-\\frac{1}2$.}\n\nThe segment highlighted in red is the closest (both in this diagram and in general) to intersecting the critical region. Figure \\ref{RegionsDetail} shows measurements for the lines near this segment.\n\n\\ipnc{0.4}{Figures\/CopyRegionsDetail}{\\label{RegionsDetail} Closeup near the highlighted segment. The additional red line has slope $-\\frac12$, and shows the maximum allowed value of $\\alpha$.}\n\nSince $k\\ge 2$, it is sufficient to have $\\alpha\\le 3$ in order for the critical region to avoid the highlighted segment. We compute that:\n\n\\[\\alpha=\\frac{\\frac{(k-1)m-h}{kh}}{\\frac{(k-1)m-h}{kh}-1}\\]\n\nThis is $\\le 3$ so long as $\\frac{(k-1)m-h}{kh}\\ge \\frac{3}{2}$. Since $k\\ge 2$, $m\\ge 8h$ is sufficient for this to hold. We set $m=(\\ell+2)h+ph(h+2)$, and $p$ will always be at least $4$, so $m$ is indeed always at least $8h$.\n\nThere is a similar constraint on $m$ for each of the other nearby segments, but $m>8h$ is sufficient in every case.\n\n\\end{proof}\n\n\\subsection{Constraint Gadgets}\n\nNext, we need to create the constraint gadgets which will actually enforce the constraints. 
Figure \\ref{ConstraintGadget} shows how a constraint gadget can be created without interfering with the copy nooks.\n\n\\ipnc{0.4}{Figures\/ConstraintGadget}{\\label{ConstraintGadget} Diagram of a constraint gadget. The region $R_1$ is bounded by lines with slope $-1$ and $-\\frac12$. The region $R_2$ is bounded by lines with slope $-1$.}\n\nEach constraint gadget will have some number (either $2$ or $3$) of input guard segments, but may also take up additional slots in the bank of input segments. These slots will be left empty.\n\nThree wedges form each of these guard segments; two will be on the top wall of the art gallery, but one must be on the bottom of the gadget. Since the distance from the constraint gadgets to the top wall depends on the number of variables and constraints, the width of these wedges also needs to depend on these parameters. All other parameters of each constraint gadget are fixed up to a choice of scale.\n\nThe input segments are tied to the variable segments by the copy gadgets. In order for these gadgets to work, the region $R_1$ should not be obstructed by the walls of the art gallery. Additionally, guards on the variable segments might be able to see anything in the region $R_2$, so the nooks which enforce the constraint should have nook segments which don't intersect this region. Also, the guard regions for any auxiliary guards used should avoid intersecting $R_2$ so that they don't interfere with the copy gadgets, and the critical regions of the wedges making up these guard regions should not interfere with the wedges making up the variable gadgets for the input segments. Figure \\ref{ConstraintGadget} shows examples of constraint nooks and an auxiliary guard region (yellow) which meet these criteria.\n\nConstraints of the form $x=1$ will not have a dedicated constraint gadget. 
Instead, these can be created by adding a wedge to a guard segment to turn it into a guard point, as in Figure \\ref{GuardPoint}.\n\n\\ipnc{0.5}{Figures\/GuardPoint}{\\label{GuardPoint} A point-shaped guard region.}\n\nThe remaining constraints each need a constraint gadget. These constraints are:\n\n\\[xy\\ge 1,\\quad x\\left(\\frac52-y\\right)\\le 1,\\quad x+y\\le z,\\quad x+y\\ge z\\]\n\n\\subsubsection{Inversion}\n\nThe first gadgets we will create are the $xy\\ge 1$ and $x\\left(\\frac52-y\\right)\\le 1$ gadgets. Lemma \\ref{InversionLemma} shows how this type of constraint can arise geometrically.\n\n\\begin{lemma}\\label{InversionLemma}\nStart with non-parallel line segments $AB$ and $CD$, as in Figure \\ref{InversionLines}, and say that $\\overleftrightarrow{AD}$ and $\\overleftrightarrow{BC}$ intersect at a point $P$. Let $E$ be the point on $\\overleftrightarrow{AB}$ such that $\\overleftrightarrow{PE}$ and $\\overleftrightarrow{CD}$ are parallel, and let $F$ be the point on $\\overleftrightarrow{CD}$ such that $\\overleftrightarrow{PF}$ and $\\overleftrightarrow{AB}$ are parallel. Draw a line through $P$ intersecting $AB$ at $X$ and $CD$ at $Y$. Then $\\frac{|EA|}{|EB|}=\\frac{|FC|}{|FD|}$, and letting $\\alpha^2=|EA||EB|$ and $\\beta^2=|FC||FD|$ we have $\\frac{|EX|}{\\alpha}\\cdot\\frac{|FY|}{\\beta}=1$.\n\\end{lemma}\n\nSince we want to enforce inversion constraints on variables in the range $[\\frac12, 2]$, we will use geometry so that $|EB|=4|EA|$ (and therefore $|FD|=4|FC|$), so $\\alpha=2|EA|=\\frac12|EB|$ and $\\beta=2|FC|=\\frac12|FD|$. 
This means that $\\frac{|EX|}{\\alpha}$ and $\\frac{|FY|}{\\beta}$ will map the segments $AB$ and $CD$ respectively onto $[\\frac12, 2]$.\n\n\\ipnc{0.8}{Figures\/InversionLines}{\\label{InversionLines} By Lemma \\ref{InversionLemma}, the value of $|EX||FY|$ is independent of the position of $X$.}\n\n\\begin{proof}[Proof of Lemma \\ref{InversionLemma}]\nThe triangles $PEX$ and $PFY$ are similar, so $\\frac{|EX|}{|EP|}=\\frac{|FP|}{|FY|}$, so $|EX||FY|=|FP||EP|$. In particular, when $X=A$, we have $|EA||FD|=|FP||EP|$, and when $X=B$, we have $|EB||FC|=|FP||EP|$, so $|EA||FD|=|EB||FC|$ and $\\frac{|EA|}{|EB|}=\\frac{|FC|}{|FD|}$. \n\nNow $|EX||FY|=|FP||EP|=|EA||FD|=|EB||FC|$, so:\n\n\\[|EX||FY|=\\sqrt{|FP||EP||EB||FC|}=\\sqrt{\\alpha^2\\beta^2}=\\alpha\\beta\\]\n\\end{proof}\n\nAs long as $\\overleftrightarrow{AB}$ and $\\overleftrightarrow{CD}$ are not parallel, Lemma \\ref{InversionLemma} will work with any arrangement of the points $A, B, C$ and $D$. Importantly for the $xy\\ge1$ gadget, it is okay if the segments $AB$ and $CD$ intersect.\n\nWe do want to enforce inversion constraints between segments which are parallel, and so we will need an additional idea, set out in Lemma \\ref{AngleCopyLemma}.\n\n\\begin{lemma}\\label{AngleCopyLemma}\nSuppose line segments $AB$, $CD$, and $EF$ are such that $\\overleftrightarrow{AB}$, $\\overleftrightarrow{CD}$, and $\\overleftrightarrow{EF}$ all intersect at a point $O$, as in Figure \\ref{AngleCopyGeometry}. Also suppose that the ratios $\\frac{|OA|}{|OB|}$ and $\\frac{|OC|}{|OD|}$ are the same. Let $P$ be the point where the lines $BE$ and $AF$ intersect, and $Q$ be the point where $DE$ and $CF$ intersect. We obtain a mapping $\\varphi$ as follows: draw a line from a point $X$ on $AB$ through $P$, and let $Y$ be the intersection of this line with $EF$. Now draw a line through $Y$ and $Q$. 
The intersection of this line with $CD$ is $\\varphi(X)$.\n\nThe map $\\varphi$ is linear; that is, $\\frac{|AX|}{|AB|}=\\frac{|C\\varphi(X)|}{|CD|}$.\n\\end{lemma}\n\n\\ipnc{1.0}{Figures\/AngleCopyGeometry}{\\label{AngleCopyGeometry} By Lemma \\ref{AngleCopyLemma}, $\\frac{|AX|}{|AB|}=\\frac{|CZ|}{|CD|}$. If $|OA|=|OB|$, then $|OX|=|O\\varphi(X)|$.}\n\n\\begin{proof}\nPlace the figure in the vector space $\\mathbb R^2$ with the point $O$ at $(0, 0)$. Now the pairs of vectors $\\{E, A\\}$ and $\\{E, C\\}$ are each bases for $\\mathbb R^2$. Let $\\theta$ be the linear map $\\mathbb R^2\\rightarrow\\mathbb R^2$ which takes vector $V$ to $(t, s)$ where $V=tE+sA$, and let $\\psi$ be a similar map which writes $V$ as $tE+sC$. Now the linear map $\\psi^{-1}\\circ \\theta$ fixes points on the line containing $E$ and $F$, and sends $A$ to $C$. Also\n\n\\[\\theta\\left(B\\right)=\\frac{|OB|}{|OA|}=\\frac{|OD|}{|OC|}=\\psi(D)\\]\n\nSo $\\psi^{-1}\\circ \\theta$ sends $B$ to $D$. Now by linearity, this composition sends $P$ to $Q$, and so sends $X$ to $\\varphi(X)$. So $\\varphi$ is linear.\n\n\\end{proof}\n\nWe could alternatively prove Lemma \\ref{AngleCopyLemma} by working in an additional dimension. We construct an arrangement of lines in $\\mathbb R^3$ (see Figure \\ref{AngleCopy3D}) of which Figure \\ref{AngleCopyGeometry} is the image under a linear projection, and so that the segments $O'B'$ and $O'D'$ are related by a reflection in $\\mathbb R^3$. Since the three lines intersect in a point, the pairs of segments $(A'B', E'F')$ and $(C'D', E'F')$ are each coplanar, and so $F'A'$ and $E'B'$ intersect at a point $P'$ in $\\mathbb R^3$, and similarly the point $Q'$ exists. 
This means that the map $\\varphi$ can be defined on the figure in $\\mathbb R^3$, and the symmetry of the $3$-dimensional geometry descends to the $2$-dimensional figure, making $\\varphi$ linear.\n\n\\ignc{0.3}{Figures\/AngleCopy2}{\\label{AngleCopy3D}}\n\n\nUnlike in the parallel copying gadgets, the mappings from $AB$ to $EF$ and from $EF$ to $CD$ are in general \\emph{not} linear; only the composition is. This geometry can be combined with the inversion geometry to create a nook which enforces an inversion constraint between two parallel segments. Figure \\ref{ParallelInversion} shows how to create an $xy\\ge 1$ constraint in this way.\n\n\\ipnc{1.0}{Figures\/InversionCombined}{\\label{ParallelInversion} An inversion nook between two parallel segments. The nook attempts to make an angled copy between guard segment $GH$ and the ``phantom'' segment $CD$, but it ends up hitting segment $AB$ instead.}\n\nTo create the $xy\\ge 1$ gadget, we choose geometry as in Figure \\ref{ParallelInversion} with $|AB|=|GH|$, $|FD|=4|FC|$, and $|EB|=4|EA|$. By Lemma \\ref{AngleCopyLemma} we have $\\frac{|GX|+|EA|}{2|EA|}=\\frac{|FZ|}{2|FC|}$, and so by Lemma \\ref{InversionLemma} we have that $\\frac{|GX|+|EA|}{2|EA|}\\cdot\\frac{|EZ|}{2|EA|} = \\frac{|FZ|}{2|FC|}\\cdot\\frac{|EZ|}{2|EA|} = 1$.\n\nBy positioning the two guard segments appropriately, we can create the $xy\\ge 1$ gadget (Figure \\ref{InversionGE}).\n\n\\ipnc{0.25}{Figures\/InversionGE}{\\label{InversionGE} The $xy\\ge 1$ gadget. The constraint $xy\\ge 1$ is symmetrical, so it isn't surprising that the nook will be symmetrical if the nook segment is chosen to lie on a horizontal line. 
}\n\nThe $x\\left(\\frac52-y\\right)\\le 1$ gadget is created in a similar way:\n\n\\ipnc{0.4}{Figures\/LECombined}{\\label{LECombined} Combining the geometry from Lemmas \\ref{InversionLemma} and \\ref{AngleCopyLemma} in a different way.}\n\nAgain $|AB|=|GH|$, $|FD|=4|FC|$, and $|EB|=4|EA|$, so $\\frac{|GX|+|EA|}{2|EA|}\\cdot\\frac{|EZ|}{2|EA|}=1$. The segment $GH$ is now oriented in the reverse direction compared to Figure \\ref{ParallelInversion}, and the nook is now arranged such that guards see more of the nook segment as $|AZ|$ or $|GX|$ increases. Figure \\ref{InversionLE} shows the full $x(\\frac52-y)\\le 1$ gadget:\n\n\\begin{figure}[H]\\begin{center}\\includegraphics[width=5in, height=3.16in]{Figures\/InversionLE.pdf}\\caption{\\label{InversionLE} The $x\\left(\\frac52-y\\right)\\le 1$ gadget. Unlike with the $\\ge$ inversion gadget, constructing the geometry in a naive way requires solving a quadratic, so the positions of the vertices would potentially be irrational. The solution is to choose a setup as in Figure \\ref{LECombined}, and then use a linear transform to position the segments appropriately.}\\end{center}\\end{figure}\n\n\\subsubsection{Addition}\n\nThe addition constraint is the one constraint that involves more than two variables. In order to make this work, we use the fact that a single guard has $2$ coordinates, so a combination of nooks which only interact with $2$ guards each can enforce a constraint which continuously depends on $3$ variables.\n\\ipnc{0.8}{Figures\/TripleConstraint}{\\label{TripleConstraint} Guards $g_1$, $g_2$, and $g_3$ see parts of the nook segments of nooks $N_1$, $N_2$, and $N_3$ respectively. In order to see the rest of each of these nooks, the guard $g_4$ needs to be in the intersection of the shaded regions. This places a constraint on the positions of $g_1$, $g_2$, and $g_3$, since there needs to be at least one point in the intersection of all $3$ regions. 
Each $x+y\\le z$ constraint will have a corresponding $x+y\\ge z$ constraint, so there will never be more than one point in this intersection.}\nIn order to make this constraint correspond to addition and not some other constraint, we will need a lemma about geometry:\n\n\\begin{lemma}\\label{AdditionLemma}\nLet the line segments $AB$, $CD$, and $EF$ have the same length and lie on the same vertical line, as in Figure \\ref{AdditionSetup}, and suppose $|CB|=|DE|$. Let points $P$, $Q$, and $R$ lie on a vertical line, with $|QP|=|QR|$. Note that $\\overleftrightarrow{AP}$, $\\overleftrightarrow{DQ}$, and $\\overleftrightarrow{FR}$ intersect at a single point, and the same is true of $\\overleftrightarrow{BP}$, $\\overleftrightarrow{CQ}$, and $\\overleftrightarrow{ER}$.\n\nChoose points $X$ and $Y$ on $AB$ and $EF$ respectively. Now $\\overleftrightarrow{XP}$ and $\\overleftrightarrow{YQ}$ intersect at a point $I$. Draw a line through points $I$ and $R$. This intersects $CD$ at a point $Z$.\n\nIf all is as above, then $\\frac12\\left(|AX|+|EY|\\right)=|CZ|$.\n\\end{lemma}\n\n\\ipnc{0.6}{Figures\/SkewAdditionSetup}{\\label{AdditionSetup} Setup for the addition gadget. By Lemma \\ref{AdditionLemma}, we will have that $\\frac12\\left(|AX|+|EY|\\right)=|CZ|$.}\n\n\\begin{proof}\nA \\emph{homography} of $\\mathbb R^2$ (or more precisely $\\mathbb{RP}^2$) is a transform which sends straight lines to straight lines. We want to find such a map which fixes points $A, B, C, D, E, F, X, Y, Z$ while sending the points $P$, $Q$ and $R$ to infinity. Additionally, lines through $P$ should be sent to lines with slope $-1$, lines through $Q$ should be sent to lines with slope $0$, and lines through $R$ should be sent to lines with slope $+1$ (see Figure \\ref{AdditionTransformed}).\n\n\\ipnc{0.5}{Figures\/AdditionTransformed}{\\label{AdditionTransformed} The transformed geometry. 
The only parameters of the original geometry which can be recovered after transforming are the lengths of $AB$ and $BC$.}\n\nIn the transformed geometry, it is clear by elementary linear algebra that $\\frac12\\left(|AX|+|EY|\\right)=|CZ|$.\n\nA degrees-of-freedom argument is sufficient to show that such a transformation should exist, but for completeness we will give it explicitly. Let $x_0$ be the $x$-coordinate of $A, B, C, \\dots$, $x_1$ be the $x$-coordinate of $P, Q$ and $R$, $y_0$ be the $y$-coordinate of $Q$, and let $a=|QP|=|QR|$, so $P$ has $y$-coordinate $y_0+a$ and $R$ has $y$-coordinate $y_0-a$. Then the transform with the desired properties is given by:\n\n\\[\\lambda\\begin{bmatrix}x'\\\\y'\\\\1\\end{bmatrix}=\\begin{bmatrix}x_0+a&0&-(x_1+a)x_0\\\\y_0&x_0-x_1&-y_0x_0\\\\1&0&-x_1\\end{bmatrix}\\begin{bmatrix}x\\\\y\\\\1\\end{bmatrix}\\]\n\n\nWriting the map in this form makes it easy to check what happens to lines through $P$, $Q$, and $R$. In particular, for a $3\\times3$ matrix $A$, if:\n\n\\[A\\begin{bmatrix}p_x\\\\p_y\\\\1\\end{bmatrix}=\\begin{bmatrix}a\\\\b\\\\0\\end{bmatrix}\\]\n\n\\noindent then the map:\n\n\\[\\lambda\\begin{bmatrix}x'\\\\y'\\\\1\\end{bmatrix}=A\\begin{bmatrix}x\\\\y\\\\1\\end{bmatrix}\\]\n\n\\noindent sends lines through $(p_x, p_y)$ to lines parallel to $\\begin{bmatrix}a\\\\b\\end{bmatrix}$.\n\n\\end{proof}\n\nLike all homographies of $\\mathbb R^2$, the transformation used in the proof of Lemma \\ref{AdditionLemma} can be obtained geometrically as a projection from a plane, through a point, and onto another plane, as in Figure \\ref{Addition3D}.\n\n\\ignc{0.3}{Figures\/Addition4}{\\label{Addition3D} A geometric realization of the transformation from Lemma \\ref{AdditionLemma}. Only rotation and reflection are required from here to obtain the same transformed figure as in the lemma.}\n\nAbrahamsen et al.\\ \\cite{ExistsRHardness} use an instance of the same type of geometry for their addition gadget. 
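The explicit matrix from the proof of Lemma \ref{AdditionLemma} can be checked mechanically. The sketch below (the values of $x_0$, $x_1$, $y_0$, and $a$ are arbitrary illustrative assumptions) confirms that the homography sends $P$, $Q$, and $R$ to infinity in directions of slope $-1$, $0$, and $+1$ respectively, while fixing every point on the vertical line $x=x_0$ that contains the segments:

```python
# Mechanical check of the homography matrix from the proof above.
x0, x1, y0, a = 1.0, 6.0, 2.0, 1.5   # illustrative parameter values

M = [[x0 + a, 0.0, -(x1 + a) * x0],
     [y0, x0 - x1, -y0 * x0],
     [1.0, 0.0, -x1]]

def apply(M, p):
    """Apply M to the homogeneous coordinates (x, y, 1) of a point."""
    x, y = p
    return [M[r][0] * x + M[r][1] * y + M[r][2] for r in range(3)]

# P, Q, R are mapped to the line at infinity (last coordinate 0),
# in the directions of slopes -1, 0, +1.
for point, slope in [((x1, y0 + a), -1.0), ((x1, y0), 0.0), ((x1, y0 - a), 1.0)]:
    vx, vy, vw = apply(M, point)
    assert abs(vw) < 1e-9
    assert abs(vy - slope * vx) < 1e-9

# Every point on the segments' vertical line x = x0 is fixed.
for y in [0.0, 1.0, 4.5]:
    vx, vy, vw = apply(M, (x0, y))
    assert abs(vx / vw - x0) < 1e-9 and abs(vy / vw - y) < 1e-9
```

Because points of the line $x=x_0$ are fixed, the lengths $|AX|$, $|EY|$, and $|CZ|$ are unchanged by the transformation, which is what the proof relies on.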
The verification of their gadget given in that paper could be generalized to give an alternate proof of Lemma \\ref{AdditionLemma}.\n\n\\subsubsection{The $\\ge$ addition gadget}\n\n\\ipnc{0.05}{Figures\/AdditionGadgetGE}{\\label{AdditionGE} The $x+y\\ge z$ addition gadget. From left to right, the input segments represent the variables $x$, $z$, and $y$.}\n\nWe want the constraint to be $x+y\\ge z$, not $\\frac12(x+y)\\ge z$, so the middle nook has to adjust the scale and offset accordingly. Figure \\ref{MiddleNook} shows how this is done.\n\n\\ipnc{0.5}{Figures\/MiddleNook}{\\label{MiddleNook} The positions of the auxiliary guard correspond to values of $x+y$ in the range $[1, 4]$, while the segment should correspond to values in the range $[\\frac12, 2]$. The middle nook in this gadget is adjusted to compensate for this.}\n\n\\subsubsection{The $\\le$ addition gadget}\n\nThe $x+y\\le z$ addition gadget is very similar to the $x+y\\ge z$ one, just with the nooks oriented in the opposite way.\n\n\\ipnc{0.05}{Figures\/AdditionGadgetLE}{\\label{AdditionLE} The $x+y\\le z$ addition gadget. From left to right, the input segments represent variables $z$, $x$, and $y$.}\n\nWith these gadgets, we can complete the reduction of ETR-INV to the BG variant of the art gallery problem, and hence prove Theorem \\ref{BGMain}.\n\n\\section{Conclusions}\n\nIt is interesting to note that our construction, while intended for the BG variant of the art gallery problem, is also sufficient to show the $\\exists\\mathbb R$-hardness of the standard AG variant. Indeed, all the guarding configurations considered are also guarding configurations in the AG variant. This is similar to the construction from \\cite{ExistsRHardness}, which simultaneously showed the $\\exists\\mathbb R$-hardness of the AG and BOG variants. It doesn't seem to be possible to adapt our construction to the BB variant.\n\nIn our construction, each nook enforces a constraint between only two guards. 
While it is possible to put multiple guard regions in the critical region of one nook, the types of constraints created seem to be unable to depend continuously on more than $2$ of the guards at a time. Addition constraints are only possible because the guards themselves have two coordinates, so a single nook can in principle enforce a constraint on as many as $4$ variables. In the BOG variant, guards have only one coordinate, but have to cover the entire $2$-dimensional interior of the art gallery, so constraints that affect more than $2$ guards can be created. In the BB variant, neither of these ideas works, so it seems unlikely that a problem like $\\text{ETR-INV}$ could be reduced to it in this way. It is very possible that the BB variant is only $\\text{NP}$-complete.\n\n\\section{Acknowledgements}\nWe are grateful to Eric Stade for feedback on a draft of this paper.\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\n\n\n\\subsection{Pre-training natural language representations}\n\nPre-training natural language representations has been the basis of natural language processing (NLP) research for a long time. Most approaches use language modeling (LM) for pre-training natural language representations. The key idea is that ideal representations must convey syntactic and semantic information, and thus we must be able to use a representation of a token to predict its neighboring tokens. For example, embeddings from language models (ELMo) learned contextualized representations by adopting forward and reverse RNNs \\cite{peters2018}. Given a sequence of tokens without additional labels, the forward RNN sequentially processes the sequence left-to-right. It is trained to predict the next token, given its history. The reverse RNN is similar but processes the sequence in reverse, right-to-left. After the pre-training, the hidden states of both RNNs are merged into a single vector representation for each token. 
Thus, the same token can be transformed into different representations based on its context.\n\nThe major limitation of ELMo is that RNNs are trained using unidirectional LM and simply combined afterward. As valuable information often comes from both directions, unidirectional LM is inevitably suboptimal. To address this problem, bidirectional encoder representations from Transformers (BERT) was proposed to pre-train bidirectional natural language representations using the Transformer model \\cite{devlin2018}. Instead of the conventional LM, BERT utilizes a masked language modeling (MLM) pre-training task. It masks some input tokens at random and trains the model to predict them from the context. In addition, BERT includes a complementary NLP-specific pre-training task, next sentence prediction, which enables the learning of sentence relationships by training a model to predict whether a given pair of sentences is consecutive.\n\n\\vspace{0.1cm}\n\n\\subsection{Pre-training protein sequence representations}\n\nNLP-based methods have historically been adapted to learn protein sequence representations \\cite{asgari2015, yang2018}. The previous methods most closely related to our paper are P-ELMo \\cite{bepler2019} and UniRep \\cite{alley2019}, which learn contextualized protein representations. P-ELMo is based on a two-phase algorithm. First, it trained forward and reverse RNNs using LM with an unlabeled dataset. Then, it adopted another bidirectional recurrent neural network (BiRNN) and further trained the model with an additional small labeled dataset. Note that the latter supervised training deviates from the goal of pre-training, namely, exploiting large unlabeled datasets with minimal human effort. UniRep used a unidirectional RNN with multiplicative long short-term memory hidden units \\cite{krause2016}. Similarly, UniRep trained its model using conventional LM. \n\nMost methods have some common limitations and still often lag behind task-specific models \\cite{rao2019}. 
First, some of them learn unidirectional representations from unlabeled datasets. Unidirectional representations are sub-optimal for numerous protein biology tasks, where it is crucial to assimilate global information from both directions. Note that we do not consider a combination of two unidirectional representations to be bidirectional, since they are simply merged after the unidirectional pre-training. Second, most pre-training methods solely rely on LM to learn from unlabeled protein sequences. Although LM is a simple and effective task, a complementary pre-training task tailored for each data modality has often been the key to further improving the quality of representations in other domains. For instance, in NLP, BERT adopted the next sentence prediction task. In another example, ALBERT devised a complementary sentence order prediction task to model the inter-sentence coherence and yielded consistent performance improvements for downstream tasks \\cite{lan2019}. Similarly, a complementary protein-specific task for pre-training might be necessary to better capture the information contained within unlabeled proteins.\n\n\\subsection{Pre-training procedure}\nPLUS\\ can be used to pre-train various model architectures that transform a protein sequence $X = [\\textbf{x}_{1}, \\cdots, \\textbf{x}_{n}]$, which has variable length $n$, into a sequence of bidirectional representations $Z = [\\textbf{z}_{1}, \\cdots, \\textbf{z}_{n}]$ with the same length. In this work, we use PLUS\\ to pre-train a BiRNN and refer to the resulting model as PLUS-RNN. The complete pre-training loss is defined as: \n\\begin{equation*}\n\\mathcal{L}_{\\mathrm{PT}} = \\lambda_{\\mathrm{PT}}\\mathcal{L}_{\\mathrm{MLM}} + (1 - \\lambda_{\\mathrm{PT}})\\mathcal{L}_{\\mathrm{SFP}}\n\\end{equation*}\nwhere $\\mathcal{L}_{\\mathrm{MLM}}$ and $\\mathcal{L}_{\\mathrm{SFP}}$ are the MLM and SFP losses, respectively. 
We use $\\lambda_{\\mathrm{PT}}$ to control their relative importance (Appendix \\ref{sec:appendix_training}).\n\n\\subsubsection{Task \\#1: Masked Language Modeling (MLM)}\nGiven a protein sequence $X$, we randomly select 15\\% of the amino acids. Then, for each selected amino acid $\\textbf{x}_{i}$, we randomly perform one of the following procedures. For 80\\% of the time, we replace $\\textbf{x}_{i}$ with the token denoting an unspecified amino acid. For 10\\% of the time, we randomly replace $\\textbf{x}_{i}$ with one of the 20 amino acids. Finally, for the remaining 10\\%, we keep $\\textbf{x}_{i}$ intact. This is to bias the learning toward the true amino acids. For the probabilities of masking actions, we follow those used in BERT \\cite{devlin2018}.\n\nPLUS-RNN\\ transforms a masked protein sequence $\\hat{X}$ into a sequence of representations. Then, we use an MLM decoder to compute, from the representations, log probabilities over the 20 amino acid types for the predicted sequence $\\widetilde{X}$. The MLM task trains the model to maximize the probabilities corresponding to the masked amino acids. As the model is designed to accurately predict randomly masked amino acids given their contexts, the learned representations must convey syntactic and semantic information within proteins.\n\n\\subsubsection{Task \\#2: Same-Family Prediction (SFP)}\nConsidering that additional pre-training tasks have often been the key to improving the quality of representations in other domains \\cite{devlin2018, lan2019}, we devise a complementary protein-specific pre-training task. The SFP task trains a model to predict whether a given protein pair belongs to the same protein family. The protein family labels provide weak structural information and help the model learn structurally contextualized representations. 
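The 15\% selection and 80--10--10 replacement procedure of the MLM task described above can be sketched as follows (a minimal illustration; the amino-acid alphabet string and the choice of mask symbol are assumptions, not the paper's actual tokenization):

```python
import random

# Sketch of the MLM masking procedure: select 15% of positions, then
# 80% -> mask token, 10% -> random amino acid, 10% -> keep unchanged.
AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")  # the 20 standard amino acids
MASK = "X"  # assumed symbol for an unspecified amino acid

def mask_sequence(seq, rng=random):
    """Return (masked sequence, indices the model must predict)."""
    seq = list(seq)
    n_select = max(1, round(0.15 * len(seq)))   # select 15% of positions
    targets = rng.sample(range(len(seq)), n_select)
    for i in targets:
        r = rng.random()
        if r < 0.8:                             # 80%: replace with MASK
            seq[i] = MASK
        elif r < 0.9:                           # 10%: random amino acid
            seq[i] = rng.choice(AMINO_ACIDS)
        # remaining 10%: keep the original amino acid unchanged
    return "".join(seq), sorted(targets)

masked, targets = mask_sequence("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
```

The MLM loss is then computed only at the returned target positions, comparing the decoder's predictions against the original amino acids.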
Note that PLUS\\ is still a semi-supervised learning method; it is supervised by computationally clustered weak labels rather than human-annotated labels.\n\nWe randomly sample two protein sequences $X^a$ and $X^b$, from the training dataset. In 50\\% of the cases, two sequences are sampled from the same protein family. For the other 50\\%, they are randomly sampled from different families. PLUS-RNN\\ transforms the protein pair into sequences of representations $Z^a = [\\textbf{z}^{a}_{1}, \\cdots, \\textbf{z}^{a}_{n_{1}}]$ and $Z^b = [\\textbf{z}^{b}_{1}, \\cdots, \\textbf{z}^{b}_{n_{2}}]$. Then, we use a soft-align comparison \\cite{bepler2019} to compute their similarity score, $\\hat{c}$, as a negative weighted sum of $l_{1}$-distances between every $\\textbf{z}^{a}_{i}$ and $\\textbf{z}^{b}_{j}$ pair:\n\\begin{equation*}\n\\hat{c} = - \\frac{1}{C} \\sum_{i=1}^{n_{1}} \\sum_{j=1}^{n_{2}} \\omega_{ij} \\left\\| \\textbf{z}^{a}_{i} - \\textbf{z}^{b}_{j} \\right\\|_{1}, \\quad C = \\sum_{i=1}^{n_{1}} \\sum_{j=1}^{n_{2}} \\omega_{ij},\n\\end{equation*}\nwhere weight $\\omega_{ij}$ of each $l_{1}$-distance is computed as\n\n\\begin{equation*}\n\\begin{gathered}\n\\omega_{ij} = 1 - (1 - \\alpha_{ij})(1 - \\beta_{ij}),\\\\\n\\alpha_{ij} = \\frac{\\exp({-\\left\\| \\textbf{z}^{a}_{i} - \\textbf{z}^{b}_{j} \\right\\|_{1}})}{\\sum_{k=1}^{n_{2}} \\exp({-\\left\\| \\textbf{z}^{a}_{i} - \\textbf{z}^{b}_{k} \\right\\|_{1}})}, \\\\\n\\beta_{ij} = \\frac{\\exp({-\\left\\| \\textbf{z}^{a}_{i} - \\textbf{z}^{b}_{j} \\right\\|_{1}})}{\\sum_{k=1}^{n_{1}} \\exp({-\\left\\| \\textbf{z}^{a}_{k} - \\textbf{z}^{b}_{j} \\right\\|_{1}})}.\n\\end{gathered}\n\\end{equation*}\nIntuitively, we can understand the soft-align comparison as computing an {\\it expected alignment score}, where they are summed over all the possible alignments. We suppose that the smaller the distance between representations, the more likely it is that the pair of amino acids is aligned. 
Then, we can consider $\\alpha_{ij}$ as the probability that $\\textbf{z}^{a}_{i}$ is aligned to $\\textbf{z}^{b}_{j}$, considering all the amino acids from $Z^b$ (and vice versa for $\\beta_{ij}$). As a result, $\\hat{c}$ is the expected alignment score over all possible alignments with probabilities $\\omega_{ij}$. Note that the negative signs are applied for converting distances into scores. Therefore, a higher value of $\\hat{c}$ indicates that the pair of protein sequences is structurally more similar. \n\nGiven the similarity score, the SFP output layer computes the probability that the pair belongs to the same protein family. The SFP task trains PLUS-RNN\\ to minimize the cross-entropy loss between the true label and the predicted probability. As the model is designed to produce higher similarity scores for proteins from the same families, learned representations must convey global structural information.\n\\vspace{0.1cm}\n\n\\subsection{Fine-tuning procedure}\nThe fine-tuning procedure of PLUS-RNN\\ follows the conventional usage of BiRNN-based prediction models. For each downstream task, we add one hidden layer and one output layer on top of the pre-trained model. Then, all the parameters are fine-tuned using task-specific datasets. The complete fine-tuning loss is defined as: \n\n\\begin{equation*}\n\\mathcal{L}_{\\mathrm{FT}} = \\lambda_{\\mathrm{FT}}\\mathcal{L}_{\\mathrm{MLM}} + (1 - \\lambda_{\\mathrm{FT}})\\mathcal{L}_{\\mathrm{TASK}}\n\\end{equation*}\nwhere $\\mathcal{L}_{\\mathrm{TASK}}$ is the task-specific loss. $\\mathcal{L}_{\\mathrm{MLM}}$ is the regularization loss. We use $\\lambda_{\\mathrm{FT}}$ to control their relative importance (Appendix \\ref{sec:appendix_training}).\n\nThe model's architectural modifications for the three types of downstream tasks are as follows. For tasks involving a protein pair, we use the same computations used in the SFP pre-training task. 
Specifically, we replace only the SFP output layer with a new output layer. For single protein-level tasks, we adopt an additional attention layer to aggregate variable-length representations into a single vector \\cite{bahdanau2014}. Then, the aggregated vector is fed into the hidden and output layers. For amino-acid-level tasks, the representations of each amino acid are fed into the hidden and output layers. \n\\vspace{0.1cm}\n\n\\subsection{Model architecture}\nPLUS\\ can be used to pre-train any model architecture that transforms a protein sequence into a sequence of bidirectional representations. In this work, we use PLUS-RNN\\ because of its superior sequential modeling capability and lower computational complexity. Refer to Appendix \\ref{sec:appendix_transformer} for more detailed explanations of the advantages of PLUS-RNN\\ over an alternative Transformer-based model, called PLUS-TFM.\n\n\\input{table\/results_summary.tex}\n\nIn this section, we explain the architecture of PLUS-RNN. First, an input embedding layer, $\\mathrm{EM}$, embeds each amino acid $\\textbf{x}_{i}$ into a $d_{e}$-dimensional dense vector $\\textbf{e}_{i}$:\n\\begin{equation*}\nE = [\\textbf{e}_{1}, \\cdots, \\textbf{e}_{n}], \\quad \\textbf{e}_{i} = \\mathrm{EM}(\\textbf{x}_{i}).\n\\end{equation*}\nThen, an $L$-layer BiRNN computes representations as a function of the entire sequence. We use long short-term memory (LSTM) as the basic unit of the BiRNN \\cite{hochreiter1997}. 
In each layer, the BiRNN computes $d_{h}$-dimensional forward and backward hidden states ($\\overrightarrow{\\textbf{h}^{l}_{i}}$ and $\\overleftarrow{\\textbf{h}^{l}_{i}}$) and combines them into a hidden state $\\textbf{h}^{l}_{i}$ using a non-linear transformation:\n\\begin{equation*}\n\\begin{gathered}\n\\overrightarrow{\\textbf{h}^{l}_{i}} = \\sigma(\\overrightarrow{\\textbf{W}^{l}_{x}}\\textbf{h}^{l-1}_{i} + \\overrightarrow{\\textbf{W}^{l}_{h}}\\textbf{h}^{l}_{i-1} + \\overrightarrow{\\textbf{b}^{l}}), \\\\\n\\overleftarrow{\\textbf{h}^{l}_{i}} = \\sigma(\\overleftarrow{\\textbf{W}^{l}_{x}}\\textbf{h}^{l-1}_{i} + \\overleftarrow{\\textbf{W}^{l}_{h}}\\textbf{h}^{l}_{i+1} + \\overleftarrow{\\textbf{b}^{l}}), \\\\ \n\\textbf{h}^{l}_{i} = \\sigma(\\textbf{W}^{l}_{h}[\\overrightarrow{\\textbf{h}^{l}_{i}}; \\overleftarrow{\\textbf{h}^{l}_{i}}] + \\textbf{b}^{l}) \\quad \\text{for} \\quad l = 1, \\cdots, L,\n\\end{gathered}\n\\end{equation*}\nwhere $\\textbf{h}^{0}_{i} = \\textbf{e}_i$; $\\textbf{W}$ and $\\textbf{b}$ denote weight matrices and bias vectors, respectively. We use the final hidden states $\\textbf{h}^{L}_{i}$ as representations $\\textbf{r}_i$ of each amino acid: \n\n\\begin{equation*}\nR = [\\textbf{r}_{1}, \\cdots, \\textbf{r}_{n}], \\quad \\textbf{r}_{i} = \\textbf{h}^{L}_{i}.\n\\end{equation*}\nWe adopt an additional projection layer to obtain smaller $d_{z}$-dimensional representations $\\textbf{z}_i$ of each amino acid with a linear transformation:\n\\begin{equation*}\nZ = [\\textbf{z}_{1}, \\cdots, \\textbf{z}_{n}], \\quad \\textbf{z}_{i} = \\mathrm{Proj}(\\textbf{r}_{i}).\n\\end{equation*}\nDuring pre-training, to reduce computational complexity, we use $R$ and $Z$ for the MLM and SFP tasks, respectively. 
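The pipeline above (embedding, stacked bidirectional recurrences, final hidden states, and a linear projection) can be sketched as follows. This NumPy sketch implements the simplified recurrences shown here rather than the LSTM cells used in the actual model, and all parameter names and shapes are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def birnn_representations(E, params, proj):
    """Compute per-residue representations R and projected Z.

    E: (n, d_e) embedded sequence. `params` is a list of per-layer dicts
    holding forward/backward weights, following the simplified recurrences
    in the text (the actual model uses LSTM cells).
    """
    H = E
    for p in params:
        n = H.shape[0]
        d_h = p["bf"].shape[0]
        hf = np.zeros((n, d_h))
        hb = np.zeros((n, d_h))
        for i in range(n):                       # forward direction
            prev = hf[i - 1] if i > 0 else np.zeros(d_h)
            hf[i] = sigmoid(p["Wxf"] @ H[i] + p["Whf"] @ prev + p["bf"])
        for i in reversed(range(n)):             # backward direction
            nxt = hb[i + 1] if i < n - 1 else np.zeros(d_h)
            hb[i] = sigmoid(p["Wxb"] @ H[i] + p["Whb"] @ nxt + p["bb"])
        # combine both directions into the layer's hidden state
        H = sigmoid(np.concatenate([hf, hb], axis=1) @ p["Wh"].T + p["b"])
    R = H                # final hidden states r_i
    Z = R @ proj.T       # linear projection z_i
    return R, Z
```

A real implementation would replace the sigmoid recurrences with LSTM cells and batch the computation, but the data flow from the embeddings to R and Z is the same.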
During fine-tuning, we can use either $R$ or $Z$, considering the performance on development sets or based on the computational constraints.\n\nWe use two models with the fixed $d_{e}$ of 21 and $d_{z}$ of 100:\n\\vspace{0.1cm}\n\\begin{itemize}\n \\item PLUS-RNN\\textsubscript{BASE}: $L$ = 3, $d_{h}$= 512, 15M parameters\n \\item PLUS-RNN\\textsubscript{LARGE}: $L$ = 3, $d_{h}$= 1024, 59M parameters\n\\end{itemize}\n\\vspace{0.1cm}\nThe hyperparameters (\\textit{i.e.}, $L$ and $d_{h}$) of PLUS-RNN\\textsubscript{BASE}\\ are chosen to match the BiRNN model architecture used in P-ELMo \\cite{bepler2019}. However, as P-ELMo uses additional RNNs, PLUS-RNN\\textsubscript{BASE}\\ has less than half the number of parameters that P-ELMo has (32M).\n\\vspace{0.2cm}\n\\subsection{Pre-training dataset}\nWe used Pfam (release 27.0) as the pre-training dataset \\cite{finn2014}. After pre-processing (Appendix \\ref{sec:appendix_training}), it contained 14,670,860 sequences from 3,150 families. The Pfam dataset provides protein family labels which were computationally pre-constructed by comparing sequence similarity using multiple sequence alignments and HMMs. Owing to the loose connection between sequence and structure similarities, the family labels only provide weak structural information \\cite{elofsson1999}. Note that we did not use any human-annotated labels. Therefore, pre-training does not result in biased evaluations in fine-tuning tasks. The pre-training results are provided in the Appendix \\ref{sec:appendix_pre-training}.\n\\vspace{0.1cm}\n\n\\subsection{Fine-tuning tasks}\nWe evaluated PLUS-RNN\\ on seven protein biology tasks. The datasets were curated and pre-processed by the cited studies. In the main manuscript, we provide concise task definitions and evaluation metrics. Please refer to Appendix \\ref{sec:appendix_fine-tuning} for more details. \n\n\\textbf{Homology} is a protein-level classification task \\cite{fox2013}. 
The goal is to classify the structural similarity level of a protein pair into \\textit{family}, \\textit{superfamily}, \\textit{fold}, \\textit{class}, or \\textit{none}. We report the accuracy of the predicted similarity level and the Spearman correlation, $\\rho$, between the predicted similarity scores and the true similarity levels. Furthermore, we provide the average precision (AP) from prediction scores at each similarity level. \n\n\\textbf{Solubility} is a protein-level classification task \\cite{khurana2018}. The goal is to predict whether a protein is \\textit{soluble} or \\textit{insoluble}. We report the accuracy of this task. \n\n\\textbf{Localization} is a protein-level classification task \\cite{almagro2017}. The goal is to classify a protein into one of 10 subcellular locations. We report the accuracy of this task.\n\n\\textbf{Stability} is a protein-level regression task \\cite{rocklin2017}. The goal is to predict a real-valued proxy for intrinsic stability. This task is from TAPE \\cite{rao2019}, and we report the Spearman correlation, $\\rho$.\n\n\\textbf{Fluorescence} is a protein-level regression task \\cite{sarkisyan2016}. The goal is to predict the real-valued fluorescence intensities. This task is from TAPE, and we report the Spearman correlation, $\\rho$.\n\n\\textbf{Secondary structure (SecStr)} is an amino-acid-level classification task \\cite{klausen2019}. The goal is to classify each amino acid into eight or three classes that describe its local structure. This task is from TAPE. We report both the eight-way and three-way classification accuracies (Q8\/Q3) of this task.\n\n\\textbf{Transmembrane} is an amino-acid-level classification task \\cite{tsirigos2015}. The goal is to detect amino acid segments that cross the cell membrane. We report the accuracy of this task.\n\\vspace{0.1cm}\n\n\\subsection{Baselines}\nWe provide several baselines for comparative evaluations. 
Note that since up-scaling of models and datasets often provides performance improvements, we only considered baselines with a similar scale of model size and pre-training dataset, to focus on evaluating the pre-training schemes.\n\nFirst, in all the tasks, we used two baselines: P-ELMo and PLUS-TFM. The former has a model architecture similar to PLUS-RNN\\textsubscript{BASE}; thus, it can show the effectiveness of the pre-training scheme. The latter is pre-trained with PLUS, so it can show the effectiveness of the BiRNN compared to the Transformer architecture.\n\nSecond, for the tasks from TAPE, we provide their reported baselines: P-ELMo, UniRep, TAPE-TFM, TAPE-RNN, and TAPE-ResNet. Note that these comparisons are in their favor, as they used a larger pre-training dataset (32M proteins from Pfam release 32.0). The TAPE baselines can demonstrate that PLUS-RNN\\ outperforms models of similar size solely pre-trained with the LM.\n\nFinally, we benchmarked PLUS-RNN\\ against task-specific SOTA models trained from scratch. If no deep learning-based baseline exists for a given task, we provided RNN\\textsubscript{BASE} and RNN\\textsubscript{LARGE} models without pre-training. The comparison with models that exploit additional features can help us identify the tasks for which the proposed pre-training scheme is most effective and help us understand its current limitations. \n\\vspace{0.1cm}\n\n\\subsection{Summary of fine-tuning results}\n\nTable \\ref{table:result_summary} presents the summarized results for the benchmark tasks. Specifically, we show the best results from two categories: LM pre-trained models and task-specific SOTA models. Refer to Appendix \\ref{sec:appendix_fine-tuning} for detailed fine-tuning results.\n\nThe PLUS-RNN\\textsubscript{LARGE}\\ model outperformed models of similar size solely pre-trained with the conventional LM in six out of seven tasks. 
Considering that some pre-trained models exhibited higher LM capabilities (Appendix \\ref{sec:appendix_pre-training}), it can be speculated that the protein-specific SFP pre-training task contributed to the improvement. In the ablation studies, we further explained the relative importance of each aspect of PLUS-RNN\\ (Appendix \\ref{sec:appendix_ablation}). Although PLUS-TFM\\ had almost twice as many parameters as PLUS-RNN\\textsubscript{LARGE}\\ (110M vs. 59M), it exhibited inferior performance in most tasks. We infer that this is because it disregarded the \\textit{locality bias} (Appendix \\ref{sec:appendix_transformer}).\n\n\\input{table\/results_homology.tex}\n\\input{table\/results_secstr.tex}\n\\input{figure\/homology_plot.tex}\n\\input{figure\/homology_interpretation.tex}\n\nWe compared PLUS-RNN\\textsubscript{LARGE}\\ with task-specific SOTA models. Although the former performed better in some tasks, it still lagged behind on the others. The results indicated that tailored models with additional features provide powerful advantages that could not be learned through pre-training. A classic example is the use of position-specific scoring matrices generated from multiple sequence alignments. We conjectured that the simultaneous observation of multiple proteins could help capture evolutionary information. In contrast, current pre-training methods use millions of proteins; however, they still consider each one individually. The relatively small performance improvement from PLUS\\ could also be explained by the fact that the SFP task only utilizes pairwise information. We expect that investigating multiple proteins during pre-training might be the key to a superior performance over the task-specific SOTA models.\n\\vspace{0.1cm}\n\n\\subsection{Detailed Homology and SecStr results}\nWe present detailed evaluation results for the Homology and SecStr tasks. 
We chose these two tasks because they are representative protein biology tasks relevant to global and local structures, respectively. Improved results on the former can lead to the discovery of new enzymes and antibiotic-resistant genes \\cite{tavares2013}. The latter is important for understanding the function of proteins in cases where evolutionary structural information is not available \\cite{klausen2019}.\n\nThe detailed Homology prediction results are listed in Table \\ref{table:results_homology}. The results show that PLUS-RNN\\textsubscript{LARGE}\\ outperformed both P-ELMo and task-specific models. In contrast to RNN\\textsubscript{LARGE}, which exhibited overfitting owing to the limited labeled training data, PLUS\\ pre-training enabled us to take advantage of the large model architecture. The correlation differences among PLUS-RNN\\textsubscript{LARGE}\\ (0.697), PLUS-RNN\\textsubscript{BASE}\\ (0.693), and P-ELMo (0.685) were small; however, they were statistically significant with p-values lower than $10^{-15}$ \\cite{steiger1980}. The per-level AP results helped us further examine the level of structural information captured by the pre-training. The largest performance improvement of PLUS-RNN\\ comes at the higher \\textit{class} level rather than the lower \\textit{family} level. This indicates that even though Pfam family labels tend to be structurally correlated with the Homology task \\textit{family} levels \\cite{creighton1993}, they are not the decisive factors for performance improvement. Instead, PLUS\\ pre-training incorporates weak structural information and facilitates inferring higher-level global structure similarities. \n\nThe detailed SecStr prediction results are listed in Table \\ref{table:results_secstr}. CB513, CASP12, and TS115 denote the SecStr test datasets. The results show that PLUS-RNN\\textsubscript{LARGE}\\ outperformed all the other models of similar size pre-trained solely with LM. 
This demonstrates that the SFP task complements the LM task during pre-training and helps in learning improved structurally contextualized representations. However, PLUS-RNN\\textsubscript{LARGE}\\ still lagged behind task-specific SOTA models that employ alignment-based features. We infer that this limitation might be attributable to the following two factors. First, as previously stated, PLUS\\ only utilizes pairwise information, rather than simultaneously examining multiple proteins during pre-training. Second, the SFP task requires an understanding of the global structures, and thus local structures are relatively neglected. Therefore, we believe that devising an additional pre-training task relevant to local structural information would improve the performance on the SecStr task.\n\\vspace{0.1cm}\n\n\\subsection{Qualitative analyses}\nTo better understand the strengths of PLUS\\ pre-training, we provide qualitative analyses. We examined the Homology task to interpret how the learned protein representations help infer the global structural similarities of proteins.\n\nTo compare two proteins, PLUS-RNN\\ used soft-align to compute their similarity score, $\\hat{c}$. Even though the output layer performs one additional computation to produce the Homology prediction, we can use the similarity scores to interpret PLUS-RNN. Note that using the penultimate layer for model interpretation is a widely adopted approach in the machine learning community \\cite{zintgraf2017}. \n\nFigure \\ref{fig:homology_plot} shows a scatter plot of the predicted similarity scores and true similarity levels. For comparison, we also show the NW-align results based on the BLOSUM62 scoring matrix \\cite{eddy2004}. The plot shows that NW-align often produces low similarity scores for protein pairs from the same \\textit{family}. This is because of high sequence-level variations, which result in dissimilar sequences having similar structures. 
In contrast, PLUS-RNN\\textsubscript{LARGE}\\ produces high similarity scores for most protein pairs from the same \\textit{family}. \n\nFurthermore, we examined three types of protein pairs: (1) a similar sequence-similar structure pair, (2) a dissimilar sequence-similar structure pair, and (3) a dissimilar sequence-dissimilar structure pair (Figure \\ref{fig:homology_interpretations}(A) and (B)). Note that similar sequence-dissimilar structure pairs did not exist in the Homology datasets. The sequence and structure similarities were defined by NW-align scores and Homology dataset labels, respectively. The pairs with similar structures were chosen from the same \\textit{family}, and those with dissimilar structures were chosen from the same \\textit{fold}. Figure \\ref{fig:homology_interpretations}(C) shows the heatmaps of the NW-align of raw amino acids and soft-alignment of PLUS-RNN\\ representations ($\\omega_{ij}$) for the three pairs. Owing to space limitations, we only show the top left quadrant of the heatmaps. Each cell in the heatmap indicates the corresponding amino acid pairs from proteins A and B. Blue denotes high sequence similarity in NW-align and high structure similarity in PLUS-RNN.\n\nFirst, we compared the pairs having similar structures (the first and second columns in Figure \\ref{fig:homology_interpretations}(C)). The heatmaps show that NW-align successfully aligned the similar-sequence pair, resulting in a score of 2.65. However, it failed for the dissimilar-sequence pair, with a score of 0.92. This supports the observation that comparing raw sequence similarities cannot identify the correct structural similarities. In contrast, the soft-alignment of PLUS-RNN\\ representations was successful for both similar and dissimilar sequences, with scores of 3.95 and 3.76, respectively. Next, we compared the second and third pairs. 
Although only the second pair had similar structures, NW-align failed for both and even yielded a higher score of 1.03 for the third pair. In contrast, regardless of the sequence similarities, the soft-alignment of PLUS-RNN\\ representations correctly assigned a lower score (2.12) only to the third pair, which has dissimilar structures. Therefore, the interpretation results confirmed that the learned representations from PLUS-RNN\\ are structurally contextualized and perform better in inferring global structure similarities. \n\\vspace{0.2cm}\n\n\n\\subsection{Pre-training procedure}\nWe used Pfam (release 27.0) as the pre-training dataset \\cite{finn2014}. We divided the training and test sets in a random 80\\%\/20\\% split and filtered out sequences shorter than 20 amino acids. Additionally, for the training set, we removed families containing fewer than 1,000 proteins. This resulted in 14,670,860 sequences from 3,150 families being utilized for the pre-training of PLUS. For the test dataset, we sampled 100,000 pairs from the test split.\n\nWe pre-trained PLUS-RNN\\ with a batch size of 64 sequences for 900,000 steps, which is approximately four epochs over the training dataset. We used the Adam optimizer \\cite{kingma2014} with a learning rate of 0.001, $\\beta_{1} = 0.9$, $\\beta_{2} = 0.999$, and without weight decay and dropout. The default $\\lambda_{\\mathrm{PT}}$ was 0.7.\n\nFor pre-training PLUS-TFM, we used different filtering conditions due to its computational complexity. The minimum and maximum lengths of a protein were set to 20 and 256, respectively. The minimum number of proteins for a family was set to 1,000. This resulted in 11,956,227 sequences from 2,657 families. We pre-trained PLUS-TFM\\ with a batch size of 128 for 930,000 steps, which is approximately 10 epochs over the training dataset. 
We used the Adam optimizer with a learning rate of 0.0001, $\\beta_{1} = 0.9$, $\\beta_{2} = 0.999$, $L_{2}$ weight decay of 0.01, a linearly decaying learning rate with warmup over the first 10\\% steps, and a dropout probability of 0.1. Default $\\lambda_{\\mathrm{PT}}$ was 0.7.\n\n\n\\subsection{Fine-tuning procedure}\nWhen fine-tuning PLUS-RNN, most model hyperparameters were the same as those during the pre-training. The commonly used hyperparameters were as follows; we fine-tuned the PLUS-RNN\\ with a batch size of 32 for 20 epochs. We used the Adam optimizer with a smaller learning rate of 0.0005, $\\beta_{1} = 0.9$, $\\beta_{2} = 0.999$, and without weight decay. For the other hyperparameters, we chose the configurations that performed best on the development sets for each task. The possible configurations were as follows:\n\\begin{itemize}\n \\item Number of units in the added output layer: 128, 512\n \\item Usage of the projection layer: True, False\n \\item $\\lambda_{\\mathrm{FT}}$: 0, 0.3, 0.5\n\\end{itemize}\n\nFor fine-tuning PLUS-TFM, we used the following hyperparameters: batch size of 32 for 20 epochs, the Adam optimizer with a smaller learning rate of 0.00005, $\\beta_{1} = 0.9$, $\\beta_{2} = 0.999$, and without weight decay. Default $\\lambda_{\\mathrm{FT}}$ was 0.3.\n\nFor fine-tuning both PLUS-RNN\\ and PLUS-TFM\\ for the Homology task, we additionally followed the procedures of \\cite{bepler2019}. In each epoch, we sampled 100,000 protein pairs using the smoothing rule: the probability of sampling a pair with similarity level $t$ is proportional to $N^{0.5}_{t}$, where $N_{t}$ is the number of protein pairs with similarity level $t$.\n\n\\subsection{Transformer architecture}\nThe key element of the Transformer is a self-attention layer composed of multiple attention heads \\cite{vaswani2017}. 
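Concretely, a single attention head can be sketched in NumPy as follows; the precise per-coefficient definitions are given next in the text, and the parameter names and dimensions here are illustrative:

```python
import numpy as np

def attention_head(X, Wq, Wk, Wv):
    """One self-attention head: z_i = sum_j alpha_ij (x_j Wv).

    X: (n, d) input tokens; Wq, Wk, Wv: (d, d_z) parameter matrices.
    """
    d_z = Wq.shape[1]
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = (Q @ K.T) / np.sqrt(d_z)            # scaled dot products e_ij
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)            # softmax over j
    return A @ V                                 # (n, d_z) outputs
```

Every output token attends to every input token in a single step, which is the constant-depth pairwise interaction property contrasted with recurrent layers in the text.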
Given an input sequence, $X = [\\textbf{x}_{1}, \\cdots, \\textbf{x}_{n}]$, an attention head computes the output sequence, $Z = [\\textbf{z}_{1}, \\cdots, \\textbf{z}_{n}]$. Each output token is a weighted sum of values computed using a weight matrix $\\textbf{W}^{V}$:\n\n\\begin{equation*}\n\\textbf{z}_{i} = \\sum_{j=1}^{n} \\alpha_{ij} (\\textbf{x}_{j}\\textbf{W}^{V}).\n\\end{equation*}\nEach attention coefficient, $\\alpha_{ij}$, is the output of a softmax function applied to the dot products of the query with all keys, which are computed using $\\textbf{W}^{Q}$ and $\\textbf{W}^{K}$:\n\n\\begin{equation*}\n\\label{equation:self_attention}\n\\alpha_{ij} = \\frac{\\exp(\\textbf{e}_{ij})}{\\sum_{k=1}^{n} \\exp(\\textbf{e}_{ik})}, \\qquad\n\\textbf{e}_{ij} = \\frac{(\\textbf{x}_{i}\\textbf{W}^{Q})(\\textbf{x}_{j}\\textbf{W}^{K})^{T}}{\\sqrt{d_{z}}},\n\\end{equation*}\nwhere $d_{z}$ is the output token dimension. The self-attention layer directly performs $O(1)$ computations for all the pairwise tokens, whereas a recurrent layer requires $O(n)$ sequential computations for the farthest pair. This allows easier traversal of forward and backward signals and thus better captures long-range dependencies.\n\nThe model architecture of PLUS-TFM\\ is analogous to the BERT\\textsubscript{BASE} model, consisting of 110M parameters. Because of its significant computational burden, which scales quadratically with the input length, we pre-trained PLUS-TFM\\ using only protein pairs shorter than 512 amino acids, following the procedures used in BERT. \n\\vspace{0.5cm}\n\n\\subsection{Advantages of PLUS-RNN\\ over PLUS-TFM}\nPLUS\\ can be used to pre-train various model architectures including BiRNN and the Transformer. The resulting models are referred to as PLUS-RNN\\ and PLUS-TFM, respectively. In this work, we mainly used PLUS-RNN, because of its two advantages over PLUS-TFM. First, it is more effective for learning the sequential nature of proteins. 
The self-attention layer of the Transformer performs dot products between all pairwise tokens regardless of their positions within the sequence. In other words, it provides an equal opportunity for local and long-range contexts to determine the representations. Although this facilitates the learning of long-range dependencies, the downside is that it completely ignores the \\textit{locality bias} within a sequence. This is particularly problematic for protein biology, where local amino acid motifs often have significant structural and functional implications \\cite{bailey2006}. In contrast, an RNN processes a sequence sequentially, so local contexts are naturally emphasized more.\n\nSecond, PLUS-RNN\\ provides lower computational complexity. Although the model hyperparameters have an effect, Transformer-based models generally demand a larger number of parameters than RNNs \\cite{vaswani2017}. Furthermore, the computations between all pairwise tokens in the self-attention layer impose a considerable computational burden, which scales quadratically with the input sequence length. Considering that pre-training typical Transformer-based models handling 512 tokens already requires tremendous resources \\cite{devlin2018}, it is computationally difficult to use Transformers to manage longer protein sequences, even up to a few thousand amino acids.\n\\vspace{0.5cm}\n\n\n\n\\subsection{Homology}\nFor the Homology task, we used the SCOPe datasets \\cite{fox2013}, which were pre-processed and provided by \\cite{bepler2019}. The dataset was filtered with a maximum sequence identity of 95\\% and split into 20,168 training, 2,240 development, and 5,602 test sequences. For the development and test datasets, we used 100,000 sampled pairs from each dataset.\n\nThe detailed Homology prediction results are listed in Table \\ref{table:results_homology}. We report the excerpted baseline results from \\cite{bepler2019}. 
NW-align computed the similarity between proteins based on sequence alignments with the BLOSUM62 substitution matrix, with a gap open penalty of -11 and a gap extension penalty of -1 \\cite{eddy2004}. HHalign conducted multiple sequence alignments and searched similar sequences with HHbits \\cite{soding2005, remmert2012}. TMalign performed structure alignments and scored a pair as the average of target-to-query and query-to-target scores \\cite{zhang2005}. Note that we additionally normalized the scores from NW-align and HHalign with the sum of the lengths of protein pairs. This results in minor correlation improvements compared with the results reported by \\cite{bepler2019}. \n\n\\subsection{Solubility} \nFor the Solubility task, we used the pepeDB datasets \\cite{berman2009, chang2014}, which were pre-processed and provided by \\cite{khurana2018}. The dataset was split into 62,478 training, 6,942 development, and 2,001 test sequences. The training dataset was filtered with a maximum sequence identity of 90\\% to remove data redundancy. Furthermore, any sequences with more than 30\\% sequence identity to those from the test dataset were removed to avoid any bias from homologous sequences.\n\n\\input{table\/results_solubility.tex}\n\\input{table\/results_localization.tex}\n\\input{table\/results_stability.tex}\n\\input{table\/results_fluorescence.tex}\n\nThe detailed Solubility prediction results are presented in Table \\ref{table:results_solubility}. We report the excerpted top-5 baseline results from \\cite{khurana2018}. PaRSnIP \\cite{rawi2018}, the second-best task-specific baseline model, used a gradient boosting classifier with over 8,000 1,2,3-mer amino acid frequency features, sequence-based features (\\textit{e.g.}, length, molecular weight, and absolute charge), and structural features (\\textit{e.g.}, secondary structures, a fraction of exposed residues, and hydrophobicity). 
DeepSol \\cite{khurana2018}, the best task-specific baseline model, used a convolutional neural network consisting of convolution and max-pooling modules to learn sequence representations. It additionally adopted the sequence-based and structural features used in PaRSnIP to enhance the performance. \n\n\\subsection{Localization} \nFor the Localization task, we used the UniProt datasets \\cite{apweiler2004}, which were pre-processed and provided by \\cite{almagro2017}. The proteins were clustered with 30\\% sequence identity. Then, each cluster of homologous proteins was split into 9,977 training, 1,108 development, and 2,773 test sequences. The subcellular locations are as follows: nucleus, cytoplasm, extracellular, mitochondrion, cell membrane, endoplasmic reticulum, plastid, Golgi apparatus, lysosome, and peroxisome. \n\nThe detailed Localization prediction results are listed in Table \\ref{table:results_localization}. We report the excerpted top-5 baseline results from \\cite{almagro2017}. iLoc-Euk \\cite{chou2011}, the second-best task-specific model, used a multi-label K-nearest neighbor classifier with pseudo-amino acid frequency features. Because iLoc-Euk predicted 22 locations, these were mapped onto our 10 locations. DeepLoc \\cite{almagro2017}, the best task-specific model, used a convolutional neural network to learn motif information and a recurrent neural network to learn sequential dependencies of the motifs. It also adopted sequence-based evolutionary features through the combination of BLOSUM62 encoding \\cite{eddy2004} and homology protein profiles from the Swiss-Prot database \\cite{bairoch2000}. \n\n\\subsection{Stability}\nFor the Stability task, we utilized the datasets from \\cite{rocklin2017}, which were pre-processed by \\cite{rao2019}. The dataset was split into 53,679 training, 2,447 development, and 12,839 test sequences. The test set contained one-Hamming distance neighbors of the top candidates from the training set. 
This allowed us to evaluate the model's ability to localize information from a broad sampling of relevant sequences.\n\nThe detailed Stability prediction results are listed in Table~\\ref{table:results_stability}. As the data splits for this task were created by \\cite{rao2019}, no clear task-specific SOTA existed. Instead, we present the results obtained using RNN\\textsubscript{BASE} and RNN\\textsubscript{LARGE} models without pre-training. \n\n\\subsection{Fluorescence} \nFor the Fluorescence task, we used the datasets from \\cite{sarkisyan2016}, which were pre-processed by \\cite{rao2019}. The dataset was split into 21,446 training, 5,362 development, and 27,217 test sequences. The training dataset contained three-Hamming distance mutations, whereas the test dataset contained more than four mutations. This allowed us to evaluate the model's ability to generalize to unseen mutation combinations.\n\nThe detailed Fluorescence prediction results are presented in Table \\ref{table:results_fluorescence}. As this task was created by \\cite{rao2019}, no clear task-specific SOTA exists. Instead, we present the results obtained using RNN\\textsubscript{BASE} and RNN\\textsubscript{LARGE} models without pre-training. \n\n\\input{table\/results_ablation.tex}\n\\input{table\/results_transmembrane.tex}\n\\input{figure\/homology_length.tex}\n\n\\subsection{Secondary structure (SecStr)} \nFor the SecStr task, we used the training and development datasets from the PDB \\cite{berman2003}, which were pre-processed and provided by \\cite{rao2019}. Any sequences with more than 25\\% sequence identity within the datasets were removed to avoid any bias due to homologous sequences. The training and development datasets contained 8,678 and 2,170 protein sequences, respectively. 
We used three test datasets: CB513 with 513 sequences \\cite{cuff1999}, CASP12 with 21 sequences \\cite{abriata2018}, and TS115 with 115 sequences \\cite{yang2018}.\n\nThe detailed SecStr prediction results are presented in Table \\ref{table:results_secstr}. We report excerpted results from \\cite{rao2019} and \\cite{klausen2019}. RaptorX \\cite{wang2016} used a convolutional neural network to capture structural information and conditional neural fields to model secondary structure correlations among adjacent amino acids. It adopted the position-specific scoring matrix evolutionary features generated by searching the UniProt database \\cite{apweiler2004} with PSI-BLAST \\cite{altschul1997}. NetSurfP-2.0 \\cite{klausen2019} used a convolutional neural network to learn motif information and a BiRNN to learn their sequential dependencies. It adopted HMM evolutionary features from HH-bits \\cite{remmert2012}, including amino acid profiles, state transition probabilities, and local alignment diversities.\n\n\\subsection{Transmembrane} \nFor the Transmembrane task, we used the TOPCONS datasets \\cite{tsirigos2015}, which were pre-processed and provided by \\cite{bepler2019}. The dataset was split into 228 training, 29 development, and 29 test sequences. The goal was to classify each amino acid into one of the following regions: the membrane, inside of the membrane, or outside of the membrane. \n\nThe detailed Transmembrane prediction results are presented in Table \\ref{table:results_transmembrane}. We report the excerpted top-5 baseline results from \\cite{bepler2019}. Although this is an amino-acid-level prediction task, the evaluation was performed at the protein level, following the guidelines from TOPCONS. The predictions were judged correct if the protein had the same number of predicted and true transmembrane regions, and the predicted and true regions overlapped by at least five amino acids. 
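This protein-level matching criterion can be sketched as follows. How predicted and true regions are paired is not fully specified in the text, so the convention below (every true region must overlap some predicted region by at least five residues) is our assumption; regions are inclusive (start, end) index pairs:

```python
def transmembrane_correct(pred_regions, true_regions, min_overlap=5):
    """Protein-level correctness: equal region counts, and each true
    region overlaps some predicted region by >= min_overlap residues.

    The pairing convention is an assumption for illustration only.
    """
    if len(pred_regions) != len(true_regions):
        return False

    def overlap(a, b):
        # overlap length of two inclusive (start, end) intervals
        return max(0, min(a[1], b[1]) - max(a[0], b[0]) + 1)

    return all(any(overlap(t, p) >= min_overlap for p in pred_regions)
               for t in true_regions)
```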
TOPCONS \\cite{tsirigos2015}, the best task-specific baseline model, is a meta-predictor that combines the predictions from five different predictors into a topology profile based on a dynamic programming algorithm.\n\n\\subsection{Pre-training and fine-tuning of PLUS-RNN}\nWe explored the effect of using different $\\lambda_{\\mathrm{PT}}$ values for controlling the relative importance of the MLM and SFP pre-training tasks (Table \\ref{table:result_ablation}). The results indicated that the pre-trained models with different $\\lambda_{\\mathrm{PT}}$ values (PLUS-RNN\\textsubscript{BASE}, PT-A, PT-B, PT-C) always outperformed the RNN\\textsubscript{BASE} model trained from scratch. Both pre-training tasks consistently improve the prediction performance at all structural levels. Of the two pre-training tasks, removing MLM negatively affects the prediction performance more than removing SFP. This coincides with our expectation that the MLM task plays the primary role and the SFP task complements MLM by encouraging the models to compare pairwise protein representations.\n\nDuring the fine-tuning, we simultaneously trained a model for the MLM task as well as the downstream task. Moreover, we explored the effect of using different $\\lambda_{\\mathrm{FT}}$ values for controlling their relative importance (Table \\ref{table:result_ablation}). The results showed that the models simultaneously fine-tuned with the MLM task loss (PLUS-RNN\\textsubscript{BASE}\\ and FT-B) consistently outperformed the (FT-A) model fine-tuned only with the task-specific loss. Based on this, we infer that the MLM task serves as a form of regularization and improves the generalization performance of the models. \n\n\\subsection{Comparison of PLUS-RNN\\ and PLUS-TFM}\nWe compared the Homology prediction performances of PLUS-TFM\\ and PLUS-RNN\\textsubscript{LARGE}\\ for protein pairs of different lengths (Figure \\ref{fig:homology_length}). 
Because PLUS-TFM\\ was pre-trained using protein pairs shorter than 512 amino acids, we denote protein pairs longer than 512 amino acids as {\\it Long} and the others as {\\it Short}. We then evaluated PLUS-TFM\\ on the {\\it Long} protein pairs in two ways: first, we simply used the protein pairs as they were; second, we truncated them to 512 amino acids. The former is denoted as PLUS-TFM-EXT (as in extended), and the latter as PLUS-TFM. \n\nPLUS-RNN\\textsubscript{LARGE}\\ consistently provided competitive performance regardless of the protein length. In contrast, PLUS-TFM-EXT deteriorated for the {\\it Long} protein pairs, whereas PLUS-TFM\\ exhibited relatively less performance degradation. These results expose the limitations of TFM models arising from the limited context size of 512 amino acids. Although the number of {\\it Long} protein pairs in the Homology development dataset was relatively small (13.4\\%), the complex proteins found in nature make the ability to analyze long protein sequences indispensable. Moreover, since this limitation stems from the computational burden of TFM scaling quadratically with the input length, we expect that the recently proposed adaptive attention span approach \\cite{sukhbaatar2019} may help improve PLUS-TFM. 
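The two weighted objectives discussed in this section (MLM plus $\lambda_{\mathrm{PT}}$-weighted SFP during pre-training, and task loss plus $\lambda_{\mathrm{FT}}$-weighted MLM during fine-tuning) reduce to the same simple pattern. The sketch below uses names of our own choosing, not identifiers from the PLUS code:

```python
def combined_loss(primary_loss, auxiliary_loss, lam):
    """Weighted multi-task objective (sketch).

    Pre-training:  primary = MLM loss,  auxiliary = SFP loss, lam = lambda_PT.
    Fine-tuning:   primary = task loss, auxiliary = MLM loss, lam = lambda_FT,
    where the auxiliary MLM term acts as a regularizer. Setting lam = 0
    recovers the single-loss setting (FT-A in the ablation).
    """
    return primary_loss + lam * auxiliary_loss
```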
\n\n\n\n\\section{Introduction}\n\\input{1_introduction}\n\n\\section{Related Works}\n\\input{2_1_related_works_NLP.tex}\n\\input{2_2_related_works_protein.tex}\n\n\\section{Methods}\n\\input{3_1_methods_training.tex}\n\\input{3_2_methods_model.tex}\n\n\\section{Experiments} \n\\input{4_1_2_experiments_datasets.tex}\n\\input{4_3_experiments_baselines.tex}\n\\input{4_4_experiments_fine-tuning.tex}\n\\input{4_5_experiments_qualitative.tex}\n\n\\section{Conclusion}\n\\input{5_conlusion}\n\n\\appendices\n\\section{Details on Training Procedures}\n\\input{6_1_appendix_training}\n\n\\section{Details on PLUS-TFM}\n\\input{6_2_appendix_tranformer}\n\n\\section{Pre-training Results}\n\\input{6_3_appendix_pre-training}\n\n\\section{Fine-tuning Results}\n\\input{6_4_appendix_fine-tuning}\n\n\\section{Ablation Studies}\n\\input{6_5_appendix_ablation}\n\n\n\\bibliographystyle{IEEEtran}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLet $H_0$ be a self-adjoint operator with an isolated simple eigenvalue $\\lambda_0$. Further let $V$ be a bounded or unbounded self-adjoint operator \nsuch that the family of operators $H(\\varkappa) := H_0 + \\varkappa V$\nis well-defined and self-adjoint for sufficiently small coupling constants $\\varkappa \\in \\mathbb{R}$. If $V$ is relatively compact with respect to $H_0$, \nthen there is a smooth function $\\lambda(\\varkappa)$ such that $\\lambda(\\varkappa)$ is a simple eigenvalue of $H(\\varkappa)$ for each $\\varkappa$ and\n$\\lim_{\\varkappa\\to 0}\\lambda(\\varkappa) = \\lambda_0$ holds, \ncf. \\cite{Kato1995}. 
Since the function $\\lambda(\\varkappa)$ is smooth, it admits a Taylor-type expansion of the form \n\\begin{equation}\\label{1.0}\n\\lambda(\\varkappa) = \\lambda_0 + a\\varkappa + b\\varkappa^2 + O(\\varkappa^3).\n\\end{equation}\nThe problem is to compute the coefficients $a$ and $b$ of this perturbation series\nin terms of the operators $H_0$ and $V$.\n\nIn a slightly modified form, a similar problem \nappears for point contacts in quantum mechanics. Typically one considers \ntwo quantum systems which do not interact, where one of them has a simple isolated eigenvalue $\\lambda_0$. If both systems are coupled by a point contact,\nthen the eigenvalue $\\lambda_0$ can either move along the real axis or become a pole of the analytic continuation of the resolvent of the coupled system in the lower complex half-plane. In the latter case one speaks of resonances. \nThe eigenvalue case is realized if the second system has no spectrum around $\\lambda_0$, while the resonance case appears \nif the second system has continuous spectrum around $\\lambda_0$, that is, if \n$\\lambda_0$ is an embedded eigenvalue of the decoupled system. \nIf the point interaction depends on a parameter $\\varkappa$ such that for $\\varkappa \\to 0$ the coupled system converges to the decoupled one, then again a perturbation series is expected either for the eigenvalues or for the resonances.\nIn the following we focus on the eigenvalue case.\n\nPerturbation series for point interactions were perhaps first studied by \nB.\\,S.~Pavlov in \\cite{P84,P87} for \na model of point interactions with an inner structure, where \nthe first order coefficient $a$ was computed.\nA direct sum of two three-dimensional Schr\\\"odinger operators\ncoupled by a point contact was considered by P.~Exner in \\cite{E91}. 
In this paper he was able to compute the first and second order coefficients $a$ and $b$.\nSee also related work \\cite{CCF09} on spin-dependent\npoint interactions and \\cite{CCF10} \nfor perturbation of eigenvalues at threshold in point contact models.\nA survey on the resonance case can be found in \\cite{E13}, \nsee also references therein. \nPoint contact models are often used in other areas of mathematical physics. \nIn \\cite{P93} a model of a small window in the screen is studied. \nIn \\cite{P12} Maxwell and Schr\\\"odinger operators are coupled via a point contact and in \\cite{P13} a model of a three-dimensional Helmholtz resonator is constructed via point coupling.\n\nIn the following we consider an abstract point contact model and are interested in the perturbation series for its eigenvalues. \nIn particular, let $\\widetilde{A}$ and $\\widehat{A}$\nbe two densely defined closed symmetric operators \nin the Hilbert spaces $\\widetilde{\\mathcal H}$ and $\\widehat{\\mathcal H}$,\nrespectively, both having equal finite deficiency indices $(d,d)$.\nLet us consider the direct sum $A := \\widetilde{A} \\oplus \\widehat{A}$ \nwhich is also a densely defined closed symmetric operator in\n$\\widetilde{\\mathcal H}\\oplus\\widehat{\\mathcal H}$\nwith deficiency indices $(2d,2d)$. \nFurther, let $\\widetilde A_{[\\alpha]}$ and $\\widehat A_{[\\beta]}$ be self-adjoint extensions of $\\widetilde{A}$ and $\\widehat{A}$, respectively. \nThe Hamiltonian of the decoupled system is given by \n$H_0 := \\widetilde A_{[\\alpha]} \\oplus \\widehat A_{[\\beta]}$,\nwhich is a self-adjoint extension of $A$. \nAs usual the Hamiltonian of the point coupled system\nis given by another self-adjoint extension $H$ of $A$,\nwhich can not be decomposed into the orthogonal sum with respect \nto the decomposition $\\widetilde {\\mathcal H}\\oplus \\widehat {\\mathcal H}$. 
The family $H(\\varkappa)$ from above is now replaced \nby a one-parametric family of point contacts, that is, \nby a family of self-adjoint extensions $H(\\varkappa)$ of $A$. \n\nTo make the problem precise we use the framework of boundary triples. \nIn this framework a subfamily $A_\\Lambda$ of self-adjoint extensions of $A$\nis labeled by Hermitian matrices $\\Lambda$ in ${\\mathbb C}^{2d}$ via \nan abstract boundary condition involving $\\Lambda$.\nIn particular, there is a Hermitian matrix $\\Lambda_0$ \nsuch that $H_0 = A_{\\Lambda_0}$. Let us \nassume that $\\lambda_0$ is an isolated eigenvalue of \n$\\widehat A_{[\\beta]}$,\nbut a resolvent point of $\\widetilde A_{[\\alpha]}$. \nMoreover, let $\\Lambda(\\varkappa)$ \nbe a one-parametric sufficiently regular\nfamily of Hermitian matrices in ${\\mathbb C}^{2d}$,\nwhich converges to $\\Lambda_0$ as $\\varkappa \\to 0$. \nSetting $H(\\varkappa) := A_{\\Lambda(\\varkappa)}$ \none gets a family of self-adjoint extensions $H(\\varkappa)$ of $A$ \nwhich converges in an appropriate sense to $H_0$ as $\\varkappa \\to 0$\nand whose discrete spectra contain a branch of the type \\eqref{1.0}.\nThe goal is to compute the coefficients $a$ and $b$ for this branch\nin terms of the Taylor coefficients of $\\Lambda(\\varkappa)$\nand the abstract Weyl function $M(\\cdot)$,\nwhich is an important ingredient of the boundary triple approach. \nIn general this problem cannot be reduced\nto the investigation of holomorphic operator families\nof type (A) or (B) in the sense of Kato,\nwhich are thoroughly discussed in \\cite{Kato1995}. \nSuch a reduction is possible only in some special cases.\n\nWe solve this problem for the first order coefficient $a$ \nfor arbitrary $d\\in{\\mathbb N}$.\nIn the special case of $d = 1$\nwe obtain the first and second order coefficients $a$ and $b$, which goes beyond Pavlov~\\cite{P84,P87} and covers~\\cite{E91}. 
\nIn general, it would also be possible to compute $b$ for\narbitrary finite deficiency indices; however, we have not included this\nin order to avoid tedious computations.\n\nOur abstract results are illustrated with a vector two-level quantum model, which is a generalization of the analogous\nscalar model considered in~\\cite{E91,E13}. \n\n\n\n\\section{Boundary triples and Weyl functions}\nThe reader may consult~\\cite{BMN02, BGP08, DM91, DM95, MN13, S} for the theory of boundary triples and its applications. \nIn this note we use this concept only in the case of finite deficiency indices.\nThroughout this section the following hypothesis is employed.\\\\[1.0mm]\n{\\bf Hypothesis I.}\\,{\\em Let $A$ be a closed, symmetric, densely defined operator in a Hilbert space\n${\\mathcal H}$ with equal finite deficiency indices $(d,d)$.\n}\n\\begin{dfn}\nAssume that Hypothesis I holds.\nThe triple $\\{{\\mathbb C}^d,\\Gamma_0,\\Gamma_1\\}$ with $\n\\Gamma_0, \\Gamma_1\\colon{\\rm dom}\\, A^*\\rightarrow{\\mathbb C}^d$\nis a boundary triple for $A^*$ if the following conditions hold:\nthe mapping $\\Gamma := (\\Gamma_0,\\Gamma_1)^\\top$ is surjective onto ${\\mathbb C}^{2d}$ and the abstract Green's identity\n$(A^*f,g)_{{\\mathcal H}}- (f,A^*g)_{{\\mathcal H}} = (\\Gamma_1f, \\Gamma_0g)_{{\\mathbb C}^d}- (\\Gamma_0f, \\Gamma_1g)_{{\\mathbb C}^d}$\nholds for all $f,g\\in{\\rm dom}\\, A^*$.\n\\end{dfn}\n\\noindent Boundary triples are an efficient tool for parametrizing the self-adjoint extensions of a symmetric operator.\n\\begin{proposition}[{\\cite[Proposition 1.4]{DM95},\\cite[Proposition 14.7]{S}}]\\label{prop:bt}\nAssume that Hypothesis I holds. Let $\\{{\\mathbb C}^d,\\Gamma_0,\\Gamma_1\\}$ be a boundary triple for $A^*$. 
Then \nfor each self-adjoint extension ${\\widetilde A}$ of $A$ there is a unique self-adjoint relation $\\Theta$\nin $\\mathbb{C}^d$ such that\n${\\widetilde A} = A_\\Theta :=\nA^*\\upharpoonright\\{f\\in{\\rm dom}\\, A^*\n\\colon \\Gamma f\\in\\Theta\\}$.\n\\end{proposition}\n\\begin{remark}\n\\label{rem:extensions}\nThe self-adjoint extension $A_0 := A^*\\upharpoonright \\ker\\Gamma_0$ is distinguished. It corresponds to the self-adjoint relation\n$\\Theta_\\infty := \\left\\{\\begin{pmatrix}0\\\\h\\end{pmatrix}: h \\in \\mathbb{C}^d\\right\\}$.\nIf $\\Theta$ is the graph of a Hermitian matrix $\\Lambda$ in $\\mathbb{C}^d$, i.e $\\Theta = \\mathrm{graph}(\\Lambda)$, then one easily checks that\n\\begin{equation}\n\\label{AB}\nA_{\\mathrm{graph}(\\Lambda)} = A_\\Lambda := \nA^*\\upharpoonright\\{f\\in{\\rm dom}\\, A^*\n\\colon \n\\Gamma_1f = \\Lambda\\Gamma_0f\\}.\n\\end{equation}\nThe operator $\\Lambda$ is called the boundary operator with respect to the boundary triple $\\{{\\mathbb C}^d,\\Gamma_0,\\Gamma_1\\}$.\n\\end{remark}\n\\noindent One can associate $\\gamma$-fields and Weyl functions with boundary triples.\n\\begin{dfn}\nAssume that Hypothesis I holds. Let $\\{{\\mathbb C}^d,\\Gamma_0,\\Gamma_1\\}$ be a boundary triple for $A^*$. The function $\\gamma \\colon\\rho(A_0)\\rightarrow{\\mathcal B}({\\mathbb C}^d, {\\mathcal H})$ defined as \n\\[\n\\gamma(\\lambda) := \\big(\\Gamma_0\\upharpoonright\\ker(A^*-\\lambda)\\big)^{-1},\n\\qquad \\lambda\\in\\rho(A_0),\n\\]\nis called the \\emph{$\\gamma$-field}. \n\\noindent The function $M\\colon\\rho(A_0)\\rightarrow {\\mathbb C}^{d\\times d}$ defined as $M(\\lambda) := \\Gamma_1\\gamma(\\lambda)$ is called the \\emph{Weyl function}.\n\\end{dfn}\n\\begin{proposition}\n\\label{prop:gammaM}\nAssume that Hypothesis~I holds. Let $M$ be the Weyl function associated with a boundary triple $\\{{\\mathbb C}^d,\\Gamma_0,\\Gamma_1\\}$ for $A^*$. 
Let the self-adjoint operator $A_\\Lambda$ in $\\mathcal H$ \nbe as in~\\eqref{AB}. Then the following statements hold.\n\\begin{itemize}\\setlength{\\parskip}{1.0mm}\n\\item [\\em (i)] The function $M(\\cdot)$ is holomorphic on $\\rho(A_0)$. \n\\item [\\em (ii)] For $\\lambda\\in\\rho(A_0)$ the relation \n${\\rm dim}\\ker(A_\\Lambda-\\lambda) = \n{\\rm \\dim}\\ker(\\Lambda - M(\\lambda))$ holds.\n\\item [\\em (iii)] If $\\lambda_0\\in\\rho(A_0)$ \nis a simple eigenvalue of $A_{\\Lambda}$, then the function \n\\[\nD_{\\Lambda}(\\lambda) := {\\rm det}(\\Lambda - M(\\lambda)),\n\\qquad \\lambda\\in\\rho(A_0),\n\\]\nhas a simple zero at $\\lambda = \\lambda_0$. \nIn particular, $D'_{\\Lambda}(\\lambda_0) \\ne 0$ holds.\n\\end{itemize}\n\\end{proposition}\n\\begin{proof}\nAll the statements of this proposition are known. \nItem (i) can be found in \\cite[Proposition 1.21]{BGP08}, \nsee also \\cite[Proposition 14.15\\,(iv)]{S} and item (ii) \nis given in \\cite[Theorem 1.36\\,(1)]{BGP08}, \nsee also \\cite[Proposition 14.17\\,(ii)]{S}.\nFor item (iii) see \\cite[Corollary 4.4, Proposition 5.1\\,(iii)]{MN13}.\n\\end{proof}\n\\section{Abstract point contact and its weak coupling regime}\nIn this section we present an abstract treatment of point contacts in the framework of boundary triples and obtain the perturbation series of the simple eigenvalue in the weak coupling regime. We make use of the following hypothesis.\\\\[1.0mm]\n\\noindent {\\bf Hypothesis II.}\\,{\\em Let $\\widetilde {\\mathcal H}$ and $\\widehat {\\mathcal H}$ be separable Hilbert spaces.\nLet $\\widetilde A$ and $\\widehat A$ be closed, densely defined, symmetric operators in $\\widetilde {\\mathcal H}$ and $\\widehat {\\mathcal H}$, respectively, both with deficiency indices $(d,d)$. Let $\\{{\\mathbb C}^d,\\widetilde \\Gamma_0,\\widetilde \\Gamma_1\\}$ \nand $\\{{\\mathbb C}^d,\\widehat \\Gamma_0,\\widehat \\Gamma_1\\}$ be boundary triples for \n$\\widetilde A^*$ and $\\widehat A^*$, respectively. 
\n}\n\\\\[1.2mm]\n\\noindent The next lemma appears to be useful in what follows.\n\\begin{lemma}[\\!\\!{\\cite[Section 1.4.4]{BGP08}}] \n\\label{lem:bt}\nAssume that Hypothesis II holds.\nThen the operator $\\widetilde A\\oplus \\widehat A$ is closed, densely defined and symmetric in the Hilbert space $\\widetilde {\\mathcal H}\\oplus \\widehat {\\mathcal H}$ with deficiency indices $(2d,2d)$ and \n$\\{\n{\\mathbb C}^{2d},\n\\widetilde\\Gamma_0\\oplus\\widehat\\Gamma_0,\n\\widetilde\\Gamma_1\\oplus\\widehat\\Gamma_1\n\\}$ \nis a boundary triple for \n$(\\widetilde A\\oplus \\widehat A)^*$.\n\\end{lemma}\nOur model operator $A_{\\Lambda}$ in the Hilbert space $\\widetilde {\\mathcal H} \\oplus \\widehat \n{\\mathcal H}$ is defined as \n\\[\n\\begin{split}\nA_{\\Lambda}\n(\\widetilde f\\oplus \\widehat f) &:= \n\\widetilde A^*\\widetilde f\\oplus \\widehat A^* \\widehat f,\\\\\n{\\rm dom}\\,A_{\\Lambda} &:= \n\\Bigg\\{\\widetilde f\\oplus \\widehat f \\in \n{\\rm dom}\\, \\widetilde A^*\n\\oplus\n{\\rm dom}\\, \\widehat A^*\\colon \n\\Lambda\n\\begin{pmatrix} \n\\widetilde\\Gamma_0\\widetilde f\\\\ \n\\widehat\\Gamma_0\\widehat f \\end{pmatrix} \n= \n\\begin{pmatrix} \n\\widetilde\\Gamma_1\\widetilde f\\\\ \n\\widehat\\Gamma_1\\widehat f \n\\end{pmatrix}\\Bigg\\},\n\\end{split}\n\\]\nwith a Hermitian $2d\\times 2d$ matrix of the form\n\\begin{equation}\n\\label{Lambda}\n\\Lambda := \n\\begin{pmatrix}\n\\alpha I_{d} & \\omega I_{d}\\\\\n\\overline\\omega I_{d} & \\beta I_{d}\n\\end{pmatrix},\n\\qquad\n\\alpha,\\beta\\in{\\mathbb R},~\\omega\\in{\\mathbb C}.\n\\end{equation}\n\\begin{proposition}\nThe operator $A_{\\Lambda}$, defined as above, is self-adjoint in the Hilbert space $\\widetilde {\\mathcal H}\\oplus \\widehat {\\mathcal H}$.\n\\end{proposition}\n\\begin{proof}\nThe statement of this proposition is a straightforward consequence of the structure of the matrix $\\Lambda$, Proposition~\\ref{prop:bt}, Remark~\\ref{rem:extensions} and 
Lemma~\\ref{lem:bt}.\n\\end{proof}\nThe next theorem contains the main results of this note: the two-term expansion of a bound state of $A_{\\Lambda}$ for a small coupling parameter $|\\omega|$ in the case of arbitrary \n$d\\in{\\mathbb N}$, and the analogous three-term expansion in the special case $d = 1$.\nIn its formulation we use the self-adjoint operators\n\\begin{equation}\n\\label{ext}\n\\begin{split}\n\\widetilde A_{[\\alpha]} := \n\\widetilde A^*\\upharpoonright \n\\ker(\\widetilde\\Gamma_1 - \\alpha\\widetilde\\Gamma_0),&\n\\qquad \n\\widetilde A_0 := \n\\widetilde A^*\\upharpoonright \\ker\\widetilde\\Gamma_0,\\\\\n\\widehat A_{[\\beta]} := \n\\widehat A^*\\upharpoonright \n\\ker(\\widehat\\Gamma_1 - \\beta\\widehat\\Gamma_0),&\\qquad\n\\widehat A_0 := \n\\widehat A^*\\upharpoonright \\ker\\widehat\\Gamma_0.\n\\end{split}\n\\end{equation}\n\nLet $L$ be a $d \\times d$ matrix. In the following we use the notion of the adjugate matrix $\\mathrm{adj}(L)$, cf.~\\cite{Bosch2008,MN98}.\nNotice that the adjugate of a matrix is quite different from the adjoint matrix $L^*$. \n\\begin{theorem}\n\\label{thm:main}\nAssume that Hypothesis II holds with some $d \\in{\\mathbb N}$. \nLet $\\widetilde M$ and $\\widehat M$ be the Weyl functions \nassociated with the boundary triples from that hypothesis. Let the self-adjoint operators $\\widetilde A_{[\\alpha]}$ and $\\widehat A_{[\\beta]}$ be as above. 
\nAssume that the real value $\\lambda_0$ satisfies $\\lambda_0\\in\\rho(\\widetilde A_0)\\cap\\rho(\\widehat A_0)\\cap\\rho(\\widetilde A_{[\\alpha]})$ and $\\lambda_0$ is a simple isolated eigenvalue of $\\widehat A_{[\\beta]}$.\n\\begin{itemize}\n\\item [\\em (i)] \nThen for sufficiently small $|\\omega|$ in the discrete spectrum of $A_{\\Lambda}$ there is a branch\n\\begin{equation}\n\\label{branch}\n\\lambda(|\\omega|^2) = \\lambda_0 + a|\\omega|^2 + O(|\\omega|^4),\\qquad |\\omega|\\rightarrow 0+,\n\\end{equation}\nwith \n\\begin{equation}\n\\label{A}\na := \\frac{\n{\\rm tr}\\,\n\\Big(\n{\\rm adj}\\,\\big(\\beta I_d -\\widehat M(\\lambda_0)\\big)\n\\big(\\widetilde M(\\lambda_0)-\\alpha I_d\\big)^{-1}\n\\Big)}\n{\n{\\rm tr}\\,\\Big( \n{\\rm adj}\\,\\big(\\beta I_d -\\widehat M(\\lambda_0)\\big)\n\\widehat{M}'(\\lambda_0)\\Big)},\n\\end{equation}\nwhere ${\\rm adj}\\,(\\beta I_d -\\widehat M(\\lambda_0))$ is the \nadjugate matrix.\n\\item [\\em (ii)]\nSuppose that $d =1$. Then the expansion \\eqref{branch} can be\nextended as\n\\begin{equation}\n\\label{branch2}\n\\lambda(|\\omega|^2) = \\lambda_0 + a|\\omega|^2 + b|\\omega|^4+ O(|\\omega|^6),\\qquad |\\omega|\\rightarrow 0+,\n\\end{equation}\nwith\n\\begin{equation}\n\\label{A1}\na := \n\\frac{1}{(\\widetilde{M}(\\lambda_0) - \\alpha)\\widehat{M}'(\\lambda_0)}\n\\end{equation}\nand\n\\begin{equation}\n\\label{B}\nb := \n\\frac{1}{((\\widetilde{M}(\\lambda_0) - \\alpha)\\widehat{M}'(\\lambda_0))^2}\n\\left(\n\\frac{\\widetilde{M}'(\\lambda_0)}\n{\\alpha-\\widetilde{M}(\\lambda_0)} \n-\\frac{1}{2}\n\\frac{\\widehat{M}''(\\lambda_0)}{\\widehat{M}'(\\lambda_0)}\n\\right).\n\\end{equation}\n\\end{itemize}\n\\end{theorem}\n\\begin{proof}\n(i)\nThe proof of this item is carried out in three steps.\\\\\n\\noindent {\\em Step I}.\nFor sufficiently small $\\varepsilon > 0$ the interval $I := (\\lambda_0 -\\varepsilon,\\lambda_0 +\\varepsilon)$ is contained \nin the set $\\rho(\\widetilde A_0)\\cap\\rho(\\widehat 
A_0)$. By Proposition~\\ref{prop:gammaM}\\,(i) the following matrix-valued function\n\\[\nT(\\lambda) := (\\alpha I_d -\\widetilde M(\\lambda))(\\beta I_d -\\widehat M(\\lambda))\n\\]\nis well-defined and $C^\\infty$-smooth on $I$. \nNext we introduce the scalar-valued function\n\\begin{equation}\n\\label{F}\nF\\colon I\\times{\\mathbb R}\\rightarrow {\\mathbb R},\\quad F(\\lambda,x) := {\\rm det}\\,\\big(T(\\lambda)- x I_d\\big),\n\\end{equation}\nwhich is $C^\\infty$-smooth on $I\\times{\\mathbb R}$. \n\\\\\n\\noindent {\\em Step II}.\nThe following two functions\n\\[\n\\widetilde{D}_\\alpha(\\lambda) := \n{\\rm det}\\,\\big(\\alpha I_d - \\widetilde{M}(\\lambda)\\big)\n\\quad\\text{and}\\quad\n\\widehat{D}_\\beta(\\lambda) := \n{\\rm det}\\,\\big(\\beta I_d - \\widehat{M}(\\lambda)\\big)\n\\]\nare well-defined and $C^\\infty$-smooth on $I$.\nJacobi's formula~\\cite{G72, MN98} and the identity \n${\\rm adj}\\,(L_1L_2)= {\\rm adj}\\,(L_2)\\,{\\rm adj}\\,(L_1)$ imply\n\\[\nF_{x}(\\lambda_0,0) = -{\\rm tr}\\,\\Big(\n{\\rm adj}\\,\\big(\\beta I_d -\\widehat{M}(\\lambda_0)\\big)\n{\\rm adj}\\,\\big(\\alpha I_d -\\widetilde{M}(\\lambda_0)\\big)\n\\Big).\n\\]\nIn view of \n$\\lambda_0\\in\\rho(\\widetilde{A}_{[\\alpha]})$ and of Proposition~\\ref{prop:gammaM}\\,(ii)\nthe matrix $\\alpha I_d - \\widetilde M(\\lambda_0)$ is invertible.\nFor any invertible matrix $L$ the identity\n${\\rm adj}\\,(L) = {\\rm det}\\,(L)\\,L^{-1}$ holds.\nHence, we arrive at\n\\begin{equation}\n\\label{deriv1}\nF_{x}(\\lambda_0,0) = \n\\widetilde{D}_{\\alpha}(\\lambda_0)\n{\\rm tr}\\,\n\\Big(\n{\\rm adj}\\,\\big(\\beta I_d -\\widehat{M}(\\lambda_0)\\big)\n\\big(\\widetilde{M}(\\lambda_0)-\\alpha I_d\\big)^{-1}\n\\Big).\n\\end{equation}\nNote that \n$F(\\lambda,0) = \n\\widetilde{D}_\\alpha(\\lambda)\n\\widehat{D}_\\beta(\\lambda)$, \nwhere the identity \n${\\rm det}\\,(L_1L_2) = \n{\\rm det}\\,(L_1)\\,{\\rm det}\\,(L_2)$ \nis used. 
In view of $\\lambda_0\\in\\sigma_{\\rm d}(\\widehat{A}_{[\\beta]})$ and of Proposition~\\ref{prop:gammaM}\\,(ii) we get \n$\\widehat{D}_\\beta(\\lambda_0) = 0$, which\nimplies $F(\\lambda_0,0) = 0$.\nNext we compute $F_\\lambda$ at the point $(\\lambda_0,0)$\n\\begin{equation}\n\\label{deriv2}\n\\begin{split}\nF_{\\lambda}(\\lambda_0,0) &= \n\\Big(\\frac{d}{d\\lambda}F(\\lambda,0)\\Big)\n\\Big|_{\\lambda = \\lambda_0} \\\\\n&=\n\\widetilde{D}_\\alpha'(\\lambda_0)\n\\widehat{D}_\\beta(\\lambda_0)\n+\n\\widetilde{D}_\\alpha(\\lambda_0)\n\\widehat{D}_\\beta'(\\lambda_0) =\n\\widetilde{D}_\\alpha(\\lambda_0)\n\\widehat{D}_\\beta'(\\lambda_0).\n\\end{split}\n\\end{equation}\nSince the eigenvalue $\\lambda_0$ is simple in the spectrum of $\\widehat{A}_{[\\beta]}$, by Proposition~\\ref{prop:gammaM}\\,(iii) $\\widehat{D}'_\\beta(\\lambda_0) \\ne 0$ holds. \nSimilarly $\\widetilde{D}_\\alpha(\\lambda_0) \\ne 0$ because of $\\lambda_0\\in\\rho(\\widetilde{A}_{[\\alpha]})$. Hence we obtain \nthat $F_{\\lambda}(\\lambda_0,0) \\ne 0$. Recall that $F(\\lambda_0,0) = 0$ and that $F$ is $C^\\infty$-smooth. Therefore, by the classical implicit function theorem~\\cite[Theorem 3.3.1]{KP} there exists the $C^\\infty$-smooth function $\\lambda(\\cdot)$ defined on a sufficiently small neighborhood of the origin such that $\\lambda(0) = \\lambda_0$ and that $F(\\lambda(x),x) = 0$ holds pointwise. 
The derivative of $\\lambda(\\cdot)$ is given as usual by \n\\begin{equation}\n\\label{lambda'}\n\\lambda'(x) = \n-\\frac{F_{x}(\\lambda(x),x)}\n{F_{\\lambda}(\\lambda(x),x)}.\n\\end{equation}\nAgain using Jacobi's formula we get\n\\begin{equation}\n\\label{D'}\n\\widehat{D}'_\\beta(\\lambda_0) = \n-{\\rm tr}\\,\n\\Big(\n{\\rm adj}\\,\n\\big(\\beta I_d -\\widehat{M}(\\lambda_0)\\big)\n\\widehat{M}'(\\lambda_0) \n\\Big).\n\\end{equation}\nSubstituting \\eqref{deriv1}, \\eqref{deriv2} and \\eqref{D'} into \\eqref{lambda'} we arrive at $\\lambda'(0) = a$ with $a$ given by \\eqref{A}.\nHence we obtain that\n\\begin{equation}\n\\label{lambdaexp}\n\\lambda(x) = \n\\lambda_0 \n+ \nax \n+\nO(x^2),\\qquad x\\rightarrow 0.\n\\end{equation}\n\\noindent {\\em Step III.}\nBy Proposition~\\ref{prop:gammaM}\\,(ii) a point $\\lambda\\in\\rho(\\widetilde A_0)\\cap\\rho(\\widehat A_0)$ satisfying\n\\[\n{\\rm det}\\,\n\\begin{pmatrix}\n\\alpha I_d - \\widetilde M(\\lambda) & \n\\omega I_d\\\\\n\\overline\\omega I_d & \n\\beta I_d -\\widehat M(\\lambda)\n\\end{pmatrix} = 0\n\\]\nis in the discrete spectrum of $A_{\\Lambda}$. \nBy \\cite[Theorem 3]{Silvester2000} one gets that \n\\[\n{\\rm det}\\,\n\\begin{pmatrix}\n\\alpha I_d - \\widetilde{M}(\\lambda) & \n\\omega I_d\\\\\n\\overline\\omega I_d & \n\\beta I_d -\\widehat M(\\lambda)\n\\end{pmatrix} = \\mathrm{det}((\\alpha I_d - \\widetilde{M}(\\lambda))(\\beta I_d - \\widehat{M}(\\lambda)) - |\\omega|^2 I_d).\n\\]\nThat is $\\lambda\\in\\rho(\\widetilde A_0)\\cap\\rho(\\widehat A_0)$ satisfying $F(\\lambda,|\\omega|^2) = 0$ with $F$ as in \\eqref{F} belongs to the discrete spectrum of $A_{\\Lambda}$. \nHence for sufficiently small $|\\omega|^2$\nwe have $\\lambda(|\\omega|^2)\\in\\sigma_{\\rm d}(A_{\\Lambda})$ with $\\lambda(\\cdot)$ defined by Step II. Finally, the expansion \\eqref{lambdaexp} implies \\eqref{branch} in the formulation of the theorem. 
\n\n\\noindent (ii)\nThe proof of this item goes along the lines of the proof of (i) and we indicate only the differences.\nLet $F$ be defined as in \\eqref{F}. In this special case ($d = 1$)\nwe have\n\\[\nF(\\lambda,x) = \\big(\\alpha -\\widetilde M(\\lambda)\\big)\n\t\t\t \\big(\\beta -\\widehat M(\\lambda)\\big) - x. \n\\] \nThe 1st and 2nd order partial derivatives of $F$ \nare computed below \n\\begin{equation}\n\\label{deriv1(1)}\n\\begin{split}\nF_{x}(\\lambda,x) &= -1,\\quad \nF_{\\lambda x}(\\lambda,x) = 0,\\quad \nF_{xx}(\\lambda,x) =0,\\\\\nF_{\\lambda}(\\lambda,x) &= \n - \\widetilde{M}'(\\lambda)\n(\\beta -\\widehat{M}(\\lambda)) - \n(\\alpha -\\widetilde{M}(\\lambda))\n\\widehat{M}'(\\lambda),\\\\\nF_{\\lambda\\lambda}(\\lambda,x) &=\n-\\widetilde{M}''(\\lambda)\n(\\beta - \\widehat{M}(\\lambda)) \n+ \n2\\widetilde{M}'(\\lambda)\\widehat{M}'(\\lambda)\n- \n(\\alpha - \\widetilde{M}(\\lambda))\n\\widehat{M}''(\\lambda).\n\\end{split}\n\\end{equation}\nIn particular, we have at the point $(\\lambda_0,0)$\n\\begin{equation}\n\\label{deriv2(1)}\n\\begin{split}\nF_{\\lambda}(\\lambda_0,0) &= \n(\\widetilde{M}(\\lambda_0)-\\alpha)\\widehat{M}'(\\lambda_0),\\\\\nF_{\\lambda\\lambda}(\\lambda_0,0) &= 2\\widetilde{M}'(\\lambda_0)\\widehat{M}'(\\lambda_0) \n+ \n(\\widetilde{M}(\\lambda_0)-\\alpha)\n\\widehat{M}''(\\lambda_0),\n\\end{split}\n\\end{equation}\nwhere we used that $\\beta - \\widehat M(\\lambda_0) = 0$, which is true in view of $\\lambda_0\\in\\sigma_{\\rm d}(\\widehat A_{[\\beta]})$. Similarly as on Step II in the proof of (i) we get that $F(\\lambda_0,0) = 0$ and $F_\\lambda(\\lambda_0,0)\\ne 0$. 
Hence, there exists a $C^\\infty$-smooth function $\\lambda(\\cdot)$ defined on a sufficiently small neighborhood of the origin such that $\\lambda(0) = \\lambda_0$, that $F(\\lambda(x),x) = 0$ holds pointwise and that $\\lambda'(x)$ is as in \\eqref{lambda'}.\nSubstituting the identity $F_x(\\lambda(x),x) = -1$\ninto \\eqref{lambda'} we obtain that\n\\begin{equation}\n\\label{lambda'(1)}\n\\lambda'(x) = \\frac{1}{F_\\lambda(\\lambda(x),x)},\n\\end{equation}\nand further substituting \\eqref{deriv2(1)} into the above formula we get $\\lambda'(0) = a$ with $a$ as in \\eqref{A1}. \nTaking the derivative in \\eqref{lambda'(1)} we get\n\\[\n\\lambda''(x) = \n-\\frac{F_{\\lambda\\lambda}(\\lambda(x),x)\\lambda'(x)\n+ F_{\\lambda x}(\\lambda(x),x)}\n{(F_{\\lambda}(\\lambda(x),x))^2}.\n\\]\nPlugging \\eqref{deriv1(1)} and \\eqref{deriv2(1)} into the above formulae we obtain $\\lambda''(0) = 2b$ with $b$ as in \\eqref{B}. Hence we arrive at the expansion\n\\begin{equation*}\n\\label{lambdaexp2}\n\\lambda(x) = \n\\lambda_0 \n+ \nax \n+ \nbx^2 \n+ \nO(x^3),\n\\qquad x\\rightarrow 0,\n\\end{equation*}\nwhich implies \\eqref{branch2} in the same way as, in Step III of the proof of (i), the expansion \\eqref{lambdaexp} implied the formula \\eqref{branch}.\n\\end{proof}\n\\begin{remark}\nThe roles of the operators $\\widetilde A_{[\\alpha]}$ and $\\widehat A_{[\\beta]}$ in the above theorem can be interchanged.\n\\end{remark}\n\\begin{remark}\nNote that for a $1\\times 1$ matrix ${\\rm adj}\\,(0) = 1$, and hence in the special case $d = 1$ the formula \\eqref{A} reduces to \\eqref{A1}.\n\\end{remark}\n\n\n\n\n\\section{An example}\n\\label{sec:example}\nLet the operator $A$ in the Hilbert space\n$L^2({\\mathbb R}^3)\\otimes{\\mathbb C}^d$ with \n$d \\in{\\mathbb N}$ be defined as\n\\begin{equation}\n\\label{Adef}\nA f := (-\\Delta + Q) f,\\qquad\n{\\rm dom}\\, A := \\big\\{f\\in H^2({\\mathbb R}^3)\\otimes\n{\\mathbb C}^d\\colon \nf(0,0,0) = {\\bf 0}\\big\\},\n\\end{equation}\nwhere $Q = Q^*$ is a $d\\times d$ 
matrix.\nThe operator $A$ is closed, symmetric, densely defined with deficiency indices $(d,d)$,\ncf.~\\cite{AGHH05,BMN08,GMZ12} and \\cite[Section 3]{BNP13}.\nThe adjoint of the above symmetric\noperator can be characterized according to~\\cite{BMN08,GMZ12} as\n\\[\n\\begin{split}\n{\\rm dom}\\,A^*&= \n\\Big \\{f = f_0 + \\vec a\\tfrac{e^{-|x|}}{|x|} + \\vec be^{-|x|}\n\\colon f_0 \\in {\\rm dom}\\, A, \\vec a,\\vec b\\in{\\mathbb C}^{d}\\Big\\},\\\\\nA^*f & = -\\Delta f_0 - \\vec a\\tfrac{e^{-|x|}}{|x|} -\n\\vec b\\Big(e^{-|x|} - \\tfrac{2e^{-|x|}}{|x|}\\Big) + Qf.\n\\end{split}\n\\]\nThe triple $\\{{\\mathbb C}^d,\\Upsilon_0,\\Upsilon_1\\}$\nwith $\\Upsilon_0,\\Upsilon_1\\colon {\\rm dom}\\,A^*\\rightarrow\n{\\mathbb C}^d$, where \n\\[\n\\Upsilon_0 f := \\sqrt{4\\pi}\\vec a\\quad\\text{and}\\quad\n\\Upsilon_1f := \\sqrt{4\\pi}\\lim_{|x|\\rightarrow 0}\\big(f(x) - \n\\tfrac{\\vec a}{|x|}\\big)\n\\]\nis a boundary triple for $A^*$. By \n\\cite[Proposition 4.1\\, (iii)]{GMZ12} and\n\\cite[Proposition 3.3]{BNP13} \nthe Weyl function associated with the boundary triple\n$\\{{\\mathbb C}^d,\\Upsilon_0,\\Upsilon_1\\}$ is given by\n$\nM(\\lambda) = i\\sqrt{\\lambda - Q}$, $\\lambda \\in\n\\rho(A_0)$. Let us assume that $Q$ has only simple eigenvalues $q_1 <\n\\ldots < q_k < \\ldots < q_d$. It turns out that $\\sigma(A_0) =\n[q_1,\\infty)$. Consider the self-adjoint extension \n$A_{[\\alpha]} = A^*\\upharpoonright\\ker(\\Upsilon_1 - \\alpha\\Upsilon_0)$\nof $A$ with $\\alpha < 0$. The extension $A_{[\\alpha]}$ has below\nthe threshold $q_1$ at most $d$ eigenvalues. Moreover, $\\lambda \\in\n(-\\infty,q_1)$ is an eigenvalue if and only if $\\lambda + \\alpha^2 \\in \\sigma(Q)$. \nHence, $\\lambda_k := q_k - \\alpha^2$ is an eigenvalue of $A_{[\\alpha]}$\nbelow the threshold $q_1$ if and only if $q_k - \\alpha^2 < q_1$. 
\nIn particular, we have $A_{[\\alpha]} \\ge\nq_1- \\alpha^2$.\n\n\nSuppose that symmetric operators $\\widetilde A$ and \n$\\widehat A$ are as in \\eqref{Adef} with $Q = \\widetilde Q$\nand $Q = \\widehat Q$, respectively.\nLet us assume that $\\{\\widetilde q_1,\\widetilde\nq_2,\\ldots,\\widetilde q_{d}\\}$ and $\\{\\widehat q_1,\\widehat\nq_2,\\ldots,\\widehat q_{ d}\\}$ are the simple eigenvalues of $\\widetilde Q$ and\n$\\widehat Q$, respectively, ordered increasingly. \nWe denote the instances of the triple $\\{{\\mathbb\n C}^d,\\Upsilon_0,\\Upsilon_1\\}$ for \n$\\widetilde A^*$ by $\\{{\\mathbb C}^d,\\widetilde\\Gamma_0,\\widetilde\\Gamma_1\\}$ and for $\\widehat A^*$ by \n$\\{{\\mathbb C}^d,\\widehat\\Gamma_0,\\widehat\\Gamma_1\\}$.\nThe corresponding Weyl functions are clearly given by \n\\[\n\\widetilde M(\\lambda) = i\\sqrt{\\lambda- \\widetilde Q}\n\\quad\\text{and}\\quad \\widehat M(\\lambda) = \ni\\sqrt{\\lambda- \\widehat Q}.\n\\]\nNote that the triple $\\{{\\mathbb C}^{2d},\\Gamma_0,\\Gamma_1\\}$\nwith $\\Gamma_i = \\widetilde \\Gamma_i \\oplus \\widehat \\Gamma_i$,\n$i=0,1$, is a boundary triple for $(\\widetilde A\\oplus \\widehat A)^*$. \nThe self-adjoint extensions $\\widetilde A_{[\\alpha]}$, $\\widehat A_{[\\beta]}$ ($\\alpha,\\beta < 0$) \nand $\\widetilde A_0$, $\\widehat A_0$ are defined as in \\eqref{ext}.\nThe self-adjoint extension\n$\\widetilde A_{[\\alpha]}$ of $\\widetilde A$ satisfies \n$\\widetilde A_{[\\alpha]} \\ge \\widetilde q_1 - \\alpha^2$.\nThe self-adjoint extension\n${\\widehat A}_{[\\beta]}$ of $\\widehat A$ has the spectrum \n$\\sigma(\\widehat A_{[\\beta]})= \\big(\\cup_{k=1}^l\n\\{\\widehat q_k - \\beta^2\\}\\big) \\cup \\big[\\widehat q_1,+\\infty\\big)$,\nwhere the eigenvalues are simple and $l$ is the greatest\ninteger $l \\in \\{1,\\ldots,d\\}$ satisfying \n$\\widehat q_l - \\beta^2 < \\widehat q_1$. 
\nIf the condition\n\\[\n\\widehat\\lambda_k := \\widehat q_{k} - \\beta^2 < \n\\widetilde q_1 - \\alpha^2 \n\\]\nholds for some $k\\le l$, then $\\widehat \\lambda_k$ is a simple isolated eigenvalue\nof $\\widehat A_{[\\beta]}$ and simultaneously a\nresolvent point of $\\widetilde A_{[\\alpha]}$. \nConsider the self-adjoint extension \n$A_{\\Lambda} := (\\widetilde A\\oplus\\widehat A)^*\\upharpoonright\n\\ker(\\Gamma_1 - \\Lambda\\Gamma_0)$\nof $\\widetilde A\\oplus\\widehat A$ with $\\Lambda$ as in \\eqref{Lambda}. \nSimple computations give us\n\\[\n \\widehat M'(\\widehat \\lambda_k) \n= \\frac{i}{2}\\big(\\widehat \\lambda_k - \\widehat Q\\big)^{-1\/2} = \n\\frac{1}{2}\\big(\\widehat Q - \\widehat \\lambda_k\\big)^{-1\/2}.\n\\]\nWith the above formula at hand, Theorem~\\ref{thm:main}\\,(i) \nshows that the discrete spectrum of $A_{\\Lambda}$ contains a branch with\nthe expansion\n\\[\n\\widehat \\lambda_k(|\\omega|^2) = \\widehat \\lambda_k \n- 2\\frac{\n{\\rm tr}\\,\\Big(\n{\\rm adj}\\,\\big(\\sqrt{\\widehat Q - \\widehat \\lambda_k} + \\beta\\big)\n\\big(\\sqrt{\\widetilde Q - \\widehat \\lambda_k} + \\alpha\\big)^{-1}\\Big)}{\n{\\rm tr}\\,\n\\Big({\\rm adj}\\,\\big(\\sqrt{\\widehat Q - \\widehat \\lambda_k} + \\beta\\big)\n\\big(\\widehat Q - \\widehat \\lambda_k\\big)^{-1\/2} \\Big)}|\\omega|^2\n + O(|\\omega|^4),\\quad \\omega\\rightarrow 0.\\]\nNotice that $\\widehat Q - \\widehat \\lambda_1\n\\ge 0$ and $\\widetilde Q - \\widehat \\lambda_1 \\ge 0$. Since\n$\\sqrt{\\widetilde Q - \\widehat \\lambda_1} + \\alpha > 0$ and furthermore\n${\\rm adj}\\,\\big(\\sqrt{\\widehat Q - \\widehat \\lambda_1} + \\beta\\big) \\ge 0$\nwe get that $\\widehat \\lambda_1(|\\omega|^2) < \\widehat \\lambda_1$ for sufficiently small $\\omega$. \n\nNext we consider the special case $d = 1$. 
Setting \n$\\widetilde q := \\widetilde q_1$ and $\\widehat q := \\widehat q_1$\nwe get $\\sigma(\\widetilde A_{[\\alpha]})= \n\\{\\widetilde q -\\alpha^2\\} \\cup \\big[\\widetilde q,+\\infty\\big)$\nand $\\sigma(\\widehat A_{[\\beta]})= \n\\{\\widehat q -\\beta^2\\} \\cup \\big[\\widehat q,+\\infty\\big)$.\nLet $\\lambda_0 := \\widehat q -\\beta^2$. If \n$\\widehat q -\\beta^2 < \\widetilde q$ and \n$\\widetilde q - \\alpha^2 \\not= \\widehat q -\\beta^2$, then $\\lambda_0 \\in \\rho(\\widetilde A_{[\\alpha]})$. \n\nLet $E := \\widehat q - \\widetilde q$. Notice that $\\beta^2 - E > 0$. \nSimple computations give us\n\\[\n\\begin{split}\n\\widetilde M(\\lambda_0) = -\\sqrt{\\beta^2-E},&\\qquad\n\\widetilde M'(\\lambda_0) = \n\\frac{1}{2\\sqrt{\\beta^2 - E}},\\\\\n\\widehat M'(\\lambda_0) =\n\\frac{1}{2|\\beta|},&\\qquad\n\\widehat M''(\\lambda_0) = \\frac{1}{4|\\beta|^3}.\n\\end{split}\n\\]\nHence, according to Theorem~\\ref{thm:main}\\,(ii)\nthe discrete spectrum of $A_\\Lambda$ contains a branch \nwith the expansion\n\\begin{displaymath}\n\\begin{split}\n\\lambda(|\\omega|^2) = \\lambda_0 +&\n\\frac{2\\beta}\n{\\sqrt{\\beta^2 - E} +\\alpha}|\\omega|^2 +\\\\\n&\\frac{1}\n{\\Big(\\sqrt{\\beta^2 - E} + \\alpha\\Big)^3}\n\\frac{\\beta^2 + E - \\alpha \\sqrt{\\beta^2 - E}}{\\sqrt{\\beta^2 - E}}\n|\\omega|^4+\nO(|\\omega|^6)\n\\end{split}\n\\end{displaymath}\nas $\\omega\\rightarrow 0$.\nThe latter result is consistent with \\cite[Theorem 3.1]{E91}, \nsee also~\\cite{E13}. \n\n\n\n\\section*{Acknowledgment}\nThe work of VL was supported by the Austrian Science Fund (FWF),\nproject P 25162-N26. VL thanks the Weierstrass Institute for Applied Analysis and Stochastics for hospitality.\nThe work of IYP was supported by the Government of the\nRussian Federation (grant 074-U01) and by a State contract of the\nRussian Ministry of Education and Science. IYP thanks TU Graz for\nhospitality. 
Jussi Behrndt and Jonathan Rohleder are acknowledged for\ninteresting and fruitful discussions. The authors are very grateful to\nthe anonymous referee for useful suggestions.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe random phase approximation (RPA)\\cite{Macke1950, Bohm1953} has found widespread use in quantum chemistry for the calculation of covalent and non-covalent interaction energies.\\cite{Furche2001, Furche2008, Hesselmann2011, Eshuis2012a, Ren2012a, Paier2012, Chen2017, Chedid2018, Kreppel2020} The direct (particle-hole) RPA can be derived in the framework of the adiabatic connection (AC) fluctuation-dissipation theorem (ACFD)\\cite{Langreth1975, Langreth1977, Harris1975} or as a subset of terms in the coupled cluster (CC)\\cite{Coester1958, Coester1960, Cizek1966, Cizek1969, Paldus1972} doubles (CCD) expansion.\\cite{Scuseria2008, Scuseria2013} Within many-body perturbation theory (MBPT)\\cite{Abrikosov1975, Mattuck1992, Bruus2004, martin2016}, the RPA is obtained by evaluating the Klein\\cite{Klein1961} or, alternatively, the Luttinger-Ward\\cite{Luttinger1960a} functional with the self-energy in the $GW$ approximation (GWA) using a (non-interacting) Kohn-Sham (KS)\\cite{Kohn1965} density functional theory (DFT)\\cite{Hohenberg1964} Green's function.\\cite{Casida1995, Dahlen2006} \n\nIn the GWA\\cite{Hedin1965}, the self-energy is obtained as the first term of an expansion in terms of a screened electron-electron interaction, where the screening is usually calculated within a pair bubble approximation\\bibnote{The pair bubble approximation is typically also denoted as RPA. 
To avoid potential confusion with the expression for the correlation energy, we will use the term bubble approximation when referring to the screening.}\\cite{martin2016} The RPA is generally believed to describe long-range electron correlation very accurately since the effect of charge screening dominates in this limit.\\cite{Langreth1977} This property is very desirable for the description of long-range dispersion effects or hydrogen bonding, which are omnipresent in chemistry and biology, determining for instance the structure of DNA or protein folding.\\cite{Sedlak2013, Rezac2016} \n\nThe magnitude of dispersion interactions generally grows super-linearly with system size\\cite{Dobson2014} and their relative importance compared to covalent bonding therefore increases with the number of electrons. Especially for larger systems, it becomes decisive to take into account the screening of the electron-electron interaction. CC and MBPT based methods describe this screening by resummation of certain classes of self-energy diagrams to infinite order.\\cite{Mattuck1992,Zhang2019,Keller2022} The RPA is the simplest first-principles method which accounts for these effects and can be implemented with $\\mathcal{O}\\left(N^4\\right)$ scaling with system size using global density fitting (DF).\\cite{Eshuis2010} Modern RPA (and $GW$) implementations typically use local density-fitting approaches to calculate the non-interacting polarizability,\\cite{Wilhelm2016, \nWilhelm2018, \nWilhelm2021, \nForster2020b, \nDuchemin2019, \nDuchemin2021a} leading to quadratic or cubic scaling in practice, and even effectively linearly scaling implementations (for sufficiently sparse and large systems) have been reported.\\cite{Schurkus2016, Luenser2017, Vlcek2017, Graf2018} For these reasons, the RPA is considered a promising method to study non-covalent interactions\\cite{Kresse2009, Lu2010, Lebegue2010, Eshuis2011, Modrzejewski2020} also in large molecules.\\cite{Nguyen2020} At short 
electron-electron distances, however, charge screening becomes less important for the description of electron correlation and taking into account higher-order contributions to the self-energy via the 4-point vertex function becomes decisive.\\cite{Irmler2019a} The absence of these terms in the RPA leads to Pauli exclusion principle violating contributions to the electron correlation energy.\\cite{Hummel2019} As a consequence, total correlation energies are much too high compared to exact reference values.\\cite{Singwi1968, Jiang2007} \n\nIn contrast to the RPA, the approximations to the correlation energy of M\u00f8ller-Plesset perturbation theory (MPPT) are free of Pauli principle violating terms. MP2 in particular is relatively inexpensive and can be routinely applied to systems with more than 100 atoms even close to the complete basis set limit. However, screening effects are entirely absent in MPPT and electron correlation is instead described by HF quasiparticles (QP) interacting via the bare Coulomb interaction, neglecting the fact that the interactions between the HF QPs are generally much weaker than the ones between the undressed electrons. This issue is also present in orbital-optimized MP2 in which the HF QPs are replaced by MP2 QPs.\\cite{Lochan2007, Neese2009d, Kossmann2010} Therefore, MP2 is a suitable method only for (typically small) systems in which screening effects are negligible. 
The divergence of MPPT for the uniform electron gas (see for instance chapter 10 in ref.~\\citen{Mattuck1992} for a thorough discussion) is known at least since early work by Macke\\cite{Macke1950} and has been demonstrated later on for metals\\cite{Gruneis2010a} and recently also for large, non-covalently bound organic complexes.\\cite{Nguyen2020} The divergence of the M\u00f8ller-Plesset series for small-gap systems is directly related to this issue since the magnitude of the screening is proportional to the width of the fundamental gap.\\cite{VanSchilfgaarde2006, Tal2021} \n\nThere have been various approaches to regularize MP2 by an approximate treatment of higher-order screening effects, either using empirical\\cite{Jung2004, \nLochan2005, \nGrimme2003,\nSzabados2006,\nPitonak2009, \nSedlak2013, \nSedlak2013a,\nGoldey2012, \nGoldey2013, \nGoldey2015,\nLee2018} or diagrammatically motivated modifications\\cite{\nPittner2003, \nKeller2022,\nEngel2006, \nJiang2006} or attacking the problem from a DFT perspective.\\cite{Daas2021,Daas2022} Starting from the opposite direction, there have been many attempts to correct the RPA correlation energy expression by adding additional terms to improve the description of short-range correlation. This includes range-separation based approaches,\\cite{Kurth1999, Yan2000, Angyan2005,Janesko2009a, Janesko2009b, Toulouse2009, Zhu2010, Toulouse2010, Toulouse2011b, Beuerle2017} or augmentations by singles contributions.\\cite{Ren2011, Paier2012, Ren2013} Via MBPT, the RPA can generally be improved upon inclusion of the 4-point vertex in the electronic self-energy, either directly, or indirectly through the kernel of the Bethe-Salpeter equation (BSE) for the generalized susceptibility. 
Following the latter approach, approximations often start from the ACFD and go beyond the Coulomb kernel in the BSE by adding additional terms, for instance exact exchange (exx) (often denoted as exx-RPA)\\cite{Hellgren2007, Hellgren2008, Hesselmann2010, Hesselmann2011a, Bates2013, Bleiziffer2015, Mussard2016} and higher order contributions,\\cite{Erhard2016, Olsen2019, Gorling2019} or the statically screened $GW$ kernel,\\cite{Maggio2016, Holzer2018a, Loos2020} but also empirically tuned functions of the eigenvalues of the KS density-density response.\\cite{Trushin2021,Fauser2021} Notice, that the BSE for the generalized susceptibility reduces to a Dyson equation for the density-density response function which makes local kernels very attractive from a computational perspective.\n\nInstead of relying on the ACFD theorem, beyond-RPA energy expressions can also be introduced directly from approximations to the self-energy beyond the GWA. For instance, in RPAx\\cite{Colonna2014, Colonna2016, Hellgren2018, Hellgren2021a} a local 4-point vertex obtained from the functional derivative of the \\emph{local} exact exchange potential calculated within the optimized effective potential method\\cite{Sharp1953, Talman1976, EngelEberhardandDreizler2013} is used in the self-energy. In Freeman's second-order screened exchange (SOSEX) correction,\\cite{DavidL.Freeman1977} the HF vertex (i.e. the functional derivative of the \\emph{non-local} HF self-energy with respect to the single-particle Green's function) is included in the self-energy directly but not in the screened interaction.\\cite{Jansen2010, Paier2010, Paier2012, Ren2013, Gruneis2009, Hummel2019} Another expression for SOSEX can be obtained by including the static $GW$ kernel in the self-energy but not in the density-density response. This possibility has not been explored until recently\\cite{Forster2022} and is the main topic of this work. 
\n\nIn our recent work, we have assessed the accuracy of the statically screened $G3W2$ correction to the $GW$ self-energy for charged excitations.\\cite{Forster2022} This correction has first been applied by Gr\u00fcneis \\emph{at al.}\\cite{Gruneis2014} to calculate the electronic structure of solids and is obtained by calculating the self-energy to second-order in the screened Coulomb interaction (equivalent to including the full $GW$ vertex) and then taking the static limit for both terms. The resulting energy expression fulfills the crossing symmetry of the vertex to first order in the electron-electron interaction. Preliminary results for the correlation energies of atoms have been promising.\\cite{Forster2022} This realization of SOSEX is computationally more efficient than AC-SOSEX since no expensive numerical frequency integration is required. Here, we compare the performance of this method to RPA and RPA+AC-SOSEX for non-covalent interactions for the S66 and S66x8 test sets.\\cite{Rezac2011} Our results show that, independently of the choice of the input KS Green's function, both RPA+SOSEX variants consistently outperform RPA and that the statically screened SOSEX variant is comparable in accuracy to AC-SOSEX.\n\nThe remainder of this work is organized as follows. In section~\\ref{sec::theory} we give a detailed derivation of the different SOSEX energy expressions starting directly from MBPT. After an outline of our computational approach and implementation in section~\\ref{sec::compDetails}, we present and analyze our results for non-covalent interaction energies in section~\\ref{sec::results}. 
Finally, section~\\ref{sec::conclusions} summarizes and concludes this work.\n\n\\section{\\label{sec::theory}Theory} \nWithin MBPT, the electron-electron interaction energy can be obtained as\\cite{Luttinger1960a}\n\\begin{equation}\n\\label{LW}\n\\begin{aligned}\n E_{Hxc}[G] = & E_{Hx}[G] + E_{c}[G] \\\\\n E_{Hx}[G] = & \\frac{1}{2} \\int d 1 d 2 G(1,2) \\Sigma_{Hxc}^{(1)}(2,1)[G] \\\\\n E_{c}[G] = & \\frac{1}{2}\\sum_{n=2}\\frac{1}{n} \\int d 1 d 2 G(1,2) \\Sigma_{Hxc}^{(n)}(2,1)[G] \\;,\n\\end{aligned}\n\\end{equation}\nwhere $\\Sigma_{Hxc}$ is the one-particle irreducible (1PI) electronic self-energy. It is the sum of all 1PI skeleton diagrams (diagrams which do not contain any self-energy insertions) of $n$th order in the electron-electron interaction $v_c$. Space, spin, and imaginary time indices are collected as $1 = (\\bm{r}_1,\\sigma_1,i\\tau_1)$. One can always switch between imaginary time and imaginary frequency using the Laplace transforms\\cite{Rieger1999}\n\\begin{equation}\n\\label{TtoW}\n f(i\\tau) = \\frac{i}{2\\pi} \\int d\\omega F(i\\omega) e^{i\\omega \\tau}\n\\end{equation}\nand\n\\begin{equation}\n\\label{WtoT}\n f(i\\omega) = -i \\int d\\tau F(i\\tau) e^{-i\\omega \\tau} \\;.\n\\end{equation}\n$\\Sigma$ is a functional of the single-particle Green's function $G = G_1$, which in turn depends on the 2-particle Green's function $G_2$. These quantities are defined by \n\\begin{equation}\n\\label{greens_definition}\n G_n(1, \\dots 2n) = \n \\left\\langle\n \\Psi^{(N)}_0\n \\Big|\n \\mathcal{T} \n \\left[\n \\hat{\\psi}^{\\dagger}(1) \n \\hat{\\psi}(2) \n \\dots \n \\hat{\\psi}^{\\dagger}(2n-1) \n \\hat{\\psi}(2n) \n \\right]\n \\Big| \n \\Psi^{(N)}_0\n \\right\\rangle \\;.\n\\end{equation}\nHere, $\\Psi^{(N)}_0$ is the ground state of an $N$-electron system, $\\mathcal{T}$ is the time-ordering operator and $\\hat{\\psi}$ is the field operator. 
The self-energy maps $G$ to its non-interacting counterpart $G^{(0)}$ by means of Dyson's equation,\\cite{Dyson1949}\n\\begin{equation}\n\\label{dyson}\n G(1,2) = G^{(0)}(1,2) + G^{(0)}(1,3)\\Sigma(3,4)G(4,2) \\;.\n\\end{equation}\nThe exchange-correlation (xc) contribution to it can be written compactly as \n\\begin{equation}\n\\label{sigma} \n\\Sigma_{xc}(1,2) = i G(1,2)W(1,2)\n+ i G(1,3)W(1,4)\\chi^{(0)}(6,4,5,4^+)\\Gamma_{xc}^{(0)}(6,5,2,3) \\;.\n\\end{equation}\nThe quantities appearing in \\eqref{sigma} are the particle-hole irreducible 4-point vertex (i.e. the sum of all diagrams which can not be cut into parts by removing a particle and a hole line),\\cite{Baym1961}\n\\begin{equation}\n\\label{vertex}\n \\Gamma_{Hxc}^{(0)}(1,2,3,4) = \\Gamma_H^{(0)}(1,2,3,4) + \\Gamma_{xc}^{(0)}(1,2,3,4) = i \\frac{\\delta\\Sigma_{H} (1,3)}{\\delta G(4,2)} + i \\frac{\\delta\\Sigma_{xc} (1,3)}{\\delta G(4,2)} \\;,\n\\end{equation}\nthe non-interacting generalized susceptibility,\n\\begin{equation}\n\\label{chi0}\n \\chi^{(0)}(1,2,3,4) = -i G(1,4)G(2,3) \\;,\n\\end{equation}\nand the screened Coulomb interaction $W$, defined by \n\\begin{equation}\n\\label{screened-coulomb}\n W(1,2) = W^{(0)}(1,2) + W^{(0)}(1,3)\n P(3,4) W^{(0)}(4,2) \\;,\n\\end{equation}\nwith\n\\begin{equation}\n\\label{polarizability-1}\n P(1,2) = \\int d3 d4 \\chi(1,2,3,4)\\delta (1,4)\\delta (2,3) \\;,\n\\end{equation}\nand\n\\begin{equation}\n W^{(0)}(1,2) = v_c(\\bm{r}_1,\\bm{r}_2)\\delta_{\\sigma,\\sigma'}\\delta(t_1-t_2) \\;,\n\\end{equation}\ngiven in terms of the bare coulomb interaction $v_c$. 
In equation \\eqref{polarizability-1}, $P$ is the reducible polarizability, which is the diagonal of the particle-hole reducible generalized susceptibility $\\chi$, defined by\n\\begin{equation}\n\\label{susceptibility}\n \\chi(1,2,3,4) = -iG_2(1,2,3,4) -i G(1,2)G(3,4) \\;.\n\\end{equation}\nIt is related to its non-interacting counterpart $\\chi^{(0)}$ by a Bethe-Salpeter equation (BSE),\\cite{Salpeter1951, Baym1961}\n\\begin{equation}\n\\label{bse}\n \\chi(1,2,3,4) = \\chi^{(0)}(1,2,3,4)\n + \\chi^{(0)}(1,8,3,7) \\Gamma_{Hxc}^{(0)}(7,5,8,6) \\chi(6,2,5,4) \\;,\n\\end{equation}\nwhich reduces to a Dyson equation for the polarizability $P$ when the xc-contribution to the 4-point vertex is set to zero. One can then also introduce the irreducible polarizability $P^{(0)}$ as\n\\begin{equation}\n\\label{polarizability}\n P^{(0)}(1,2) = \\int d3 d4 \\chi^{(0)}(1,2,3,4)\\delta (1,4)\\delta (2,3) \\;,\n\\end{equation}\nwhich is useful to define the RPA correlation energy. Using this quantity, \\eqref{screened-coulomb} can also be written as \n\\begin{equation}\n\\label{screened-coulomb-2}\n W(1,2) = W^{(0)}(1,2) + W^{(0)}(1,3)\n P^{(0)}(3,4)\n W(4,2) \\;.\n\\end{equation}\nWhen \\cref{dyson,sigma,vertex,chi0,screened-coulomb,polarizability-1,bse} are solved self-consistently, the expression for the correlation energy \\eqref{LW} becomes exact. Note that the equations above are different from, but completely equivalent to, Hedin's equations.\\cite{Hedin1965} They have the advantage that the BSE appears explicitly, and also that only 2-point or 4-point quantities occur. The resulting equations are therefore invariant under unitary transformations of the basis, as has for instance been pointed out by Starke and Kresse\\cite{Starke2012} and in ref.~\\citen{Held2011}.\n\nIn practice, \\cref{dyson,sigma,vertex,chi0,screened-coulomb,polarizability-1,bse} need to be truncated to obtain a closed system of equations. Setting $\\Gamma_{xc}^{(0)} = 0$ defines the GWA. 
One typically also introduces a new non-interacting KS Green's function $G^s$,\\cite{Hybertsen1985} \n\\begin{equation}\n G^{s}(1,2) = G^{(0)}(1,2) + \n G^{(0)}(1,3) v_{s}(3,4) G^{s}(4,2) \\;,\n\\end{equation}\nwith \n\\begin{equation}\n v_s(1,2) = v_H(1,2)\\delta(1,2) + v_{xc}(\\bm{r}_1, \\bm{r}_2) \\delta(\\tau_{12}) \n\\end{equation}\nwhere $v_H$ is the Hartree potential, $v_{xc}$ is a KS xc-potential mixed with a fraction of HF exchange and $\\tau_{12} = \\tau_1 - \\tau_2$, and evaluates \\eqref{LW} with $G^s$,\n\\begin{equation}\n\\label{LW2}\n E_{c} = \\frac{1}{2}\\sum_{n=2}\\frac{1}{n} \\int d 1 d2 G^s(1,2) \\Sigma_{Hxc}^{(n)}(2,1)[G^s] \\;.\n\\end{equation}\nWith the $GW$ approximation for $\\Sigma$ and using \\eqref{polarizability} and \\eqref{screened-coulomb},\n\\begin{equation}\n\\begin{aligned}\n E^{RPA}_{xc} = & i\\frac{1}{2} \\int d 1 d2 G^s(1,2) G^s(2,1)W(2,1) \\\\\n = & - \\frac{1}{2} \\int d 1 d2 \n P^{(0)}(1,2) \\;\n \\left\\{W^{(0)}(1,2) + \\frac{1}{2}W^{(0)}(1,3)P^{(0)}(3,4)W^{(0)}(4,2) + \\dots\\right\\} \n\\end{aligned}\n\\end{equation}\nis obtained. 
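The bubble expansion above is a matrix power series, and it resums to a closed logarithmic form. This can be verified numerically in a toy model; a minimal sketch, where the random symmetric matrix $X$ is merely a stand-in for $P^{(0)}W^{(0)}$ (no physical content, spectral radius fixed to $0.5$ so the series converges):

```python
import numpy as np

# Toy check that the bubble series resums to a logarithm:
# for X with spectral radius < 1,
#   -sum_{n>=2} (1/n) tr[X^n] = tr[ln(1 - X) + X],
# which is the structure behind the RPA correlation energy.
rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4))
S = S + S.T                                         # symmetric stand-in
X = 0.5 * S / np.abs(np.linalg.eigvalsh(S)).max()   # spectral radius 0.5

# left-hand side: truncated power series of traces
series = -sum(np.trace(np.linalg.matrix_power(X, n)) / n
              for n in range(2, 80))

# right-hand side: closed form via the eigenvalues, tr f(X) = sum_k f(x_k)
x = np.linalg.eigvalsh(X)
closed = np.sum(np.log(1.0 - x) + x)

print(series, closed)   # agree to near machine precision
```

The same identity, applied to $X = P^{(0)}W^{(0)}$, is what turns the diagrammatic series into the logarithmic expression used below.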
Isolating the exchange contribution to the Hartree-exchange energy,\n\\begin{equation}\n E_x = \\int d1d2 \\; \\delta(\\tau_{12})G(1,2)W^{(0)}(2,1) G(2,1) \\;,\n\\end{equation}\nwe obtain the RPA correlation energy\n\\begin{equation}\n\\begin{aligned}\n E^{RPA}_c = & -\\frac{1}{2}\\sum_{n=2} \\frac{1}{n} \\int d1d2 \\left[\\int d3 P^{(0)}(1,3)W^{(0)}(3,2)\\right]^n \\\\ \n = & \\frac{1}{2} \\int d1d2\\left\\{ \\ln\\left[\\delta(1,2) - \\int d3 P^{(0)}(1,3)W^{(0)}(3,2)\\right] + \\int d3 P^{(0)}(1,3)W^{(0)}(3,2) \\right\\} \\;,\n\\end{aligned} \n\\end{equation}\nand using \\eqref{TtoW} as well as the symmetry of the polarizability on the imaginary frequency axis, its well-known representation due to Langreth and Perdew\\cite{Langreth1977} is obtained,\n\\begin{equation}\n \\label{e_rpa}\n \\begin{aligned}\n E^{RPA}_c = & \\frac{1}{2\\pi}\\int d \\bm{r}_1 d \\bm{r}_2 \\int^{\\infty}_0 d \\omega \\left\\{ \n \\ln\\left[\\delta(\\bm{r}_1,\\bm{r}_2) - \\int d \\bm{r}' P^{(0)}(\\bm{r}_1,\\bm{r}',i\\omega)v_c(\\bm{r}',\\bm{r}_2)\\right] \\right. \\\\ \n & + \\left.\\int d \\bm{r}' P^{(0)}(\\bm{r}_1,\\bm{r}',i\\omega)v_c(\\bm{r}',\\bm{r}_2) \\right\\} \\;.\n \\end{aligned}\n\\end{equation}\nIn this work, we are interested in approximations to the self-energy beyond the GWA. From the antisymmetry of fermionic Fock space, it follows that $G_2$ needs to change sign when the two creation or annihilation operators in \\eqref{greens_definition} are interchanged. This property is known as crossing symmetry.\\cite{Rohringer2012} In the RPA, the crossing symmetry is violated, which leads to the well-known overestimation of absolute correlation energies. 
One can show (see ref.~\\citen{Rohringer2012} or ref.~\\citen{Rohringer2018} for details) that the crossing symmetry of $G$ translates into the requirement \n\\begin{equation}\n\\label{crossing-relation}\n \\frac{\\delta \\Sigma(1,3)}{\\delta G(4,2)} \\stackrel{!}{=}\n \\frac{\\delta \\Sigma(1,4)}{\\delta G(3,2)} \\;,\n\\end{equation}\nfor the 4-point vertex. In the GWA, the irreducible vertex is approximated with the Hartree-vertex,\\cite{Romaniello2009}\n\\begin{equation}\n \\Gamma^{(0)}_{Hxc}(1,2,3,4) \\approx \\Gamma^{(0)}_{H}(1,2,3,4) = -i \\frac{1}{\\delta G(1,3)} \\delta(1,3)\\int d2 \\; W^{(0)}(1,2)G(2,2^+) \\;.\n\\end{equation}\nWe then obtain\n\\begin{equation}\n \\begin{aligned} \n \\frac{\\delta \\Sigma_H(1,3)}{\\delta G(4,2)}\n = & -i \\delta (1,3) \\delta (2,4)v_c(1,2) \\\\\n \\frac{\\delta \\Sigma_H(1,4)}{\\delta G(3,2)}\n = & -i \\delta (1,4) \\delta (2,3)v_c(1,2) \\;,\n \\end{aligned}\n\\end{equation}\nand therefore \\eqref{crossing-relation} is violated. When the 4-point vertex is approximated by the functional derivative of the Hartree-exchange self-energy, we obtain\n\\begin{equation}\n \\frac{\\delta \\Sigma_{Hx}(1,3)}{\\delta G(4,2)}\n = -i v_c(1,2)\\left[\\delta (1,3) \\delta (2,4) - \n \\delta (1,4) \\delta (2,3)\\right] = \\frac{\\delta \\Sigma_{Hx}(1,4)}{\\delta G(3,2)} \\;,\n\\end{equation}\ni.e. the crossing symmetry is fulfilled. Approximations to the self-energy in Hedin's equations always violate the crossing symmetry.\\cite{Rohringer2018, Krien2021} However, with each iteration of Hedin's pentagon, the crossing symmetry is fulfilled up to an increasingly higher order in $v_c$. We can then expect to obtain improvements over the RPA energies expressions by choosing a self-energy which fulfills the crossing symmetry to first order in $v_c$. 
The easiest approximation to the self-energy of this type is obtained from the HF vertex,\n\\begin{equation}\n \\Gamma_{xc}^{(0),HF} (1,2,3,4) = i\\frac{\\delta \\Sigma^{HF}_{xc} (1,3)}{\\delta G^s (4,2)} = - W^{(0)}(1,2) \\delta(1,4)\\delta(3,2) \\;.\n\\end{equation}\nUsing this expression in \\eqref{sigma} with \\eqref{chi0} yields the AC-SOSEX contribution to the self-energy,\\cite{Jansen2010} \n\\begin{equation}\n\\label{ac-sosex}\n \\Sigma^{\\textrm{SOSEX}(W,v_c)}(1,2) = - \\int d3 d4 G^s(1,3)W(1,4) G^s(3,4)G^s(4,2) W^{(0)}(3,2) \\;,\n\\end{equation}\nwhere we have indicated the screening of the electron-electron interaction in the SOSEX expression in the superscript on the \\emph{l.h.s.} of \\eqref{ac-sosex}. If one instead uses the $GW$ self-energy for $\\Gamma^{(0)}$, the screened exchange kernel is obtained, \n\\begin{equation}\n \\Gamma_{xc}^{(0),GW} (1,2,3,4) = i\\frac{\\delta \\Sigma^{GW}_{xc} (1,3)}{\\delta G^s (4,2)} = - W(1,2) \\delta(1,4)\\delta(3,2) \\;.\n\\end{equation}\nThe resulting self-energy is the complete second-order term in the expansion of the self-energy in terms of the screened electron-electron interaction,\\cite{Hedin1965}\n\\begin{equation}\n\\label{g3w2}\n \\Sigma^{\\textrm{G3W2}}(1,2) = - \\int d3 d4 G^s(1,3)W(1,4) G^s(3,4)G^s(4,2) W(3,2) \n\\end{equation}\nand contains the AC-SOSEX self-energy. The $G3W2$ self-energy can be decomposed into eight skeleton diagrams on the Keldysh contour,\\cite{VanLeeuwen2015} but the AC-SOSEX self-energy only into four.\\cite{Stefanucci2014} In practice, the evaluation of the resulting energy expression requires to perform a double frequency integration while the evaluation of the AC-SOSEX energy only requires a single frequency integration. Since the computation of the AC-SOSEX term is already quite cumbersome, the complete $G3W2$ energy expression is therefore not a good candidate for an efficient beyond-RPA correction. 
Instead, we take the static limit in both $W$ in \\eqref{g3w2} to arrive at a self-energy expression similar to AC-SOSEX,\n\\begin{equation}\n\\label{sosex_w0w0}\n \\Sigma^{\\textrm{SOSEX}(W(0),W(0))}(1,2) = - \\int d3 d4 G^s(1,3)W(1,4) G^s(3,4)G^s(4,2) W(3,2)\\delta(\\tau_{32}) \\delta(\\tau_{14}) \\;.\n\\end{equation}\nThis expression resembles the MP2 self-energy, with the difference that the bare electron-electron interaction is replaced by the statically screened one. The resulting expression for the correlation energy will be different in general due to the factors $\\frac{1}{n}$ in \\eqref{LW}. However, we will show in the following that the energy expression can be evaluated in exactly the same way as the MP2 energy when the sum over $n$ is approximated in a suitable way. Using \\eqref{screened-coulomb}, eq. \\eqref{sosex_w0w0} can be written as \n\\begin{equation}\n \\Sigma^{\\textrm{SOSEX}(W(0),W(0))}(1,2) = \n \\Sigma^{\\textrm{MP2-SOX}}(1,2) + \n \\Sigma^{\\delta\\textrm{MP2-SOX}}(1,2) \\;,\n\\end{equation}\nwith the first term being the SOX term in MP2 and with the remainder accounting for the screening of the electron-electron interaction. Defining\n\\begin{equation}\n\\delta W(1,2) = \\int d3 d4 W^{(0)}(1,3)P(3,4) W^{(0)}(4,2) \\;,\n\\end{equation}\nit can be written as\n\\begin{equation}\n \\Sigma^{\\delta\\textrm{MP2-SOX}}(1,2) = - \\int d3 d4 G^s(1,3)\n \\delta W(1,4)\\delta(\\tau_{14})\n G^s(3,4)G^s(4,2)\n \\delta W(3,2) \\delta(\\tau_{32}) \\;.\n\\end{equation}\nIn the same way one can see, that the statically screened $GW$ vertex contains the HF vertex. The same is obviously true for all other flavours of SOSEX, and therefore all of them fulfill the crossing symmetry of the full 4-point vertex to first order in the electron-electron interaction. 
Therefore, all of these approximations compensate the overestimation of the electron correlation energy in the RPA.\n\nIn contrast to the RPA which is efficiently evaluated in a localized basis, beyond-RPA energies are most easily formulated in the molecular spin-orbital basis $\\left\\{\\phi_i(\\bm{r,\\sigma})\\right\\}$ in which the time-ordered KS Green's function is diagonal,\n\\begin{equation}\n\\begin{aligned}\n\\label{g0}\n G^s_{kk'}(i\\tau_{12}) = & \n \\delta_{kk'}\\Theta(\\tau_{12}) G^{>}_{kk'}(i\\tau_{12}) - \n \\delta_{kk'}\\Theta(\\tau_{21}) G^{<}_{kk'}(i\\tau_{21}) \\\\\n G^{>}_{kk}(i\\tau_{12}) = &\n i \\left(1 - f(\\epsilon_k)\\right)\n e^{-\\epsilon_k\\tau_{12}} \\\\\n G^{<}_{kk}(i\\tau_{12}) = &\n i f(\\epsilon_k)\n e^{-\\epsilon_k\\tau_{12}} \\;.\n \\end{aligned}\n\\end{equation}\nThe $\\epsilon_k$ denote KS eigenvalues which are understood to be measured relative to the chemical potential $\\mu$ and $f(\\epsilon_k)$ denotes the occupation number of the $k$th orbital.\n\nOne can now obtain energy expressions analogous to \\eqref{e_rpa}. For example, inserting the AC-SOSEX self-energy \\eqref{ac-sosex} into \\eqref{LW2}, we obtain\n\\begin{equation}\n\\label{sosex_lambda}\n \\begin{aligned}\n E^{\\textrm{SOSEX}}_c = &\\frac{1}{2} \\int d1 d2 d3 d4 \\;\n G^s(1,2)\n G^s(2,3)\n G^s(3,4)\n G^s(4,1) \\\\\n & \\times\n \\left\\{\n \\frac{1}{2} W^{(0)}(3,1)W^{(0)}(2,4) + \\frac{1}{3}\n W^{(0)}(3,1)W^{(0)}(2,5)P^{(0)}(5,6)W^{(0)}(6,4) + \\dots \n \\right\\} \\;.\n \\end{aligned}\n\\end{equation}\nIn contrast to the RPA energy expression, the terms in this equations can not be summed exactly due to the presence of the $1\/n$-terms. 
However, defining \n\\begin{equation}\n\\label{lambda_eq}\n \\Sigma_{Hxc}^{\\lambda} = \\sum_{n=1}^{\\infty} \\lambda^n \\Sigma_{Hxc}^{(n)} \\left[G^s, v_c\\right] \\;.\n\\end{equation}\nwe can rewrite \\eqref{LW2} \n\\begin{equation}\n E_{c} = \\frac{1}{2}\\sum_{n=2}\\frac{1}{n} \\int d 1 d2 G^s(1,2) \\Sigma_{Hxc}^{(n)}(2,1)[G^s] = \\frac{1}{2} \\int^1_0 \\frac{d \\lambda}{\\lambda} \n \\int d 1 d2 G^s(1,2) \\Sigma_{Hxc}^{(\\lambda)}(2,1)[G^s] \\;,\n\\end{equation}\nas an integral over a coupling constant $\\lambda$. We can now rewrite \\eqref{lambda_eq} as \n\\begin{equation}\n \\Sigma_{Hxc}^{\\lambda} = \n \\sum_{n=1}^{\\infty} \\Sigma_{Hxc}^{(n)} \\left[G^s, \\lambda v_c\\right]\n = \n \\sum_{n=1}^{\\infty} \\Sigma_{Hxc}^{(n)} \\left[G^s, W^{(\\lambda)}\\right] \\;,\n\\end{equation}\nwhere $W^{(\\lambda)}$ is defined as in \\eqref{screened-coulomb-2}, with $W^{(0)}$ replaced by $\\lambda W^{(0)}$. Defining \n\\begin{equation}\n\\label{overlineW}\n \\overline{W} = \\int^1_0 d \\lambda W^{(\\lambda)} \\;,\n\\end{equation}\nand \n\\begin{equation}\n \\overline{\\Sigma} = \\Sigma\\left[\\overline{W}\\right]\n\\end{equation}\nthe correlation energy becomes\n\\begin{equation}\n \\label{general_ec}\n E_c = \\frac{1}{2} \\int d 1 d2 \\; G^s(1,2) \\overline{\\Sigma}_c(2,1) \\;.\n\\end{equation}\nThe integral in \\eqref{overlineW} needs to be computed numerically, but converges typically very fast when Gauss-Legendre grids are employed.\\cite{Ren2013} In ref.~\\citen{Rodriguez-Mayorga2021} a trapezoidal rule for the solution of this integral has been used and also ref.~\\citen{Hesselmann2011} suggests that this choice is often suitable for the calculation of correlation energies within the RPA and beyond. Below, we will assess the effect of such approximate coupling constant integration on absolute and relative correlation energies. 
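In a scalar toy model (all numbers made up) the coupling-constant average can be computed in closed form, which makes it easy to compare a Gauss-Legendre grid against the two-point trapezoidal rule; with $W^{(\lambda)} = \lambda v/(1-\lambda v P^{(0)})$ one has $\overline{W} = v\,[-1/c - \ln(1-c)/c^2]$ for $c = vP^{(0)}$:

```python
import numpy as np

# Scalar toy model of the coupling-constant integration (hypothetical numbers):
# W_lambda = lambda*v / (1 - lambda*v*P0), Wbar = int_0^1 dlambda W_lambda.
v, P0 = 1.0, -0.8
c = v * P0                      # c < 0, so 1 - lambda*c never vanishes on [0, 1]

def W_lam(lam):
    return lam * v / (1.0 - lam * c)

# closed-form reference for the lambda integral
exact = v * (-1.0 / c - np.log(1.0 - c) / c**2)

# 8-node Gauss-Legendre rule mapped from [-1, 1] to [0, 1]
x, w = np.polynomial.legendre.leggauss(8)
gl = np.sum(0.5 * w * W_lam(0.5 * (x + 1.0)))

# two-point trapezoidal rule, cf. the discussion above
trap = 0.5 * (W_lam(0.0) + W_lam(1.0))

print(exact, gl, trap)   # Gauss-Legendre matches the closed form; trapezoid deviates
```

Even in this smooth one-dimensional model the two-point trapezoidal rule is off by several percent, while a few Gauss-Legendre nodes reproduce the integral essentially exactly.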
Notice that, using a trapezoidal rule, \\eqref{general_ec} reduces to\n\\begin{equation}\n\\label{mp2-generate}\n E_c = \\frac{1}{4} \\int d 1 d2 \\; G^s(1,2) \\Sigma_c(2,1) \\;,\n\\end{equation}\nand when the statically screened $G3W2$ self-energy \\eqref{sosex_w0w0} is used in this expression, the energy expression of ref.~\\citen{Forster2022} is obtained. When additionally both $W(0)$ are replaced by $W^{(0)}$, \\eqref{mp2-generate} gives the MP2 correlation energy (evaluated with $G^s$).\\cite{Dahlen2006} Therefore, the correlation energy expression obtained in this way can be interpreted as a renormalized variant of MP2.\n\nUsing \\eqref{general_ec}, simple expressions for the AC-SOSEX energy in the canonical basis of KS orbitals can be obtained. With \\eqref{general_ec}, the self-energy \\eqref{ac-sosex}, and \\eqref{g0} we have \n\\begin{equation}\n\\begin{aligned}\n E^{\\textrm{SOSEX}(W,v_c)} = & \\frac{i}{2}\\sum_{pqrs} \\int d\\tau_{12} d \\tau_3 \n G^s_p(\\tau_{13})\n G^s_q(\\tau_{31})\n G^s_r(\\tau_{12})\n G^s_s(\\tau_{21}) W^{(0)}_{spqr}\\overline{W}_{rspq}(\\tau_{23}) \\\\ \n = & \n- \\frac{1}{4 \\pi} \\sum_{pqrs} \\int d \\omega'\n W^{(0)}_{spqr}\\overline{W}_{rspq}(i\\omega') \n \\int d\\tau_{12}\n G^s_r(\\tau_{12}) \n G^s_s(\\tau_{21}) \\\\\n & \\times \\underbrace{ \\int d \\tau_3 \n e^{-i\\omega' \\tau_{23}}\n G^s_p(\\tau_{13})\n G^s_q(\\tau_{31})}_{I(i\\tau_{12})} \\;.\n\\end{aligned} \n\\end{equation}\nIn going from the first to the second line, we have used \\eqref{TtoW} to transform $W$ to the imaginary frequency axis. 
The integral over $\\tau_3$ can be evaluated by splitting it at $\\tau_1$ and using the definition of the KS Green's function \\eqref{g0},\n\\begin{equation}\n I(i\\tau_{12}) = \\frac{ \\left[\n \\left(1-f(\\epsilon_p)\\right)f(\\epsilon_q)\n - \\left(1-f(\\epsilon_q)\\right)f(\\epsilon_p)\n \\right] e^{i\\omega'\\tau_{12}}}{\\epsilon_p - \\epsilon_q + i\\omega'}\n = - e^{i\\omega'\\tau_{12}} \n \\frac{f(\\epsilon_p) - f(\\epsilon_q)}{\\epsilon_p - \\epsilon_q + i\\omega'} \\;.\n\\end{equation}\nThe remaining integral over $\\tau_{12}$ is\n\\begin{equation}\n I_{\\tau_{12}} = -\\int \n G^s_r(\\tau_{12}) \n G^s_s(\\tau_{21}) e^{i\\omega'\\tau_{12}} d\\tau_{12} = \n \\frac{f(\\epsilon_r) - f(\\epsilon_s)}\n {\\epsilon_r - \\epsilon_s - i\\omega'} \\;,\n\\end{equation}\nso that the correlation energy becomes \n\\begin{equation}\n E^{\\textrm{SOSEX}(W,v_c)} = - \\frac{1}{4 \\pi} \\sum_{pqrs} \\int d \\omega'\n W^{(0)}_{spqr}\\overline{W}_{rspq}(i\\omega') \n \\frac{f(\\epsilon_r) - f(\\epsilon_s)}\n {\\epsilon_r - \\epsilon_s - i\\omega'}\n \\frac{f(\\epsilon_p) - f(\\epsilon_q)}{\\epsilon_p - \\epsilon_q + i\\omega'} \\;.\n\\end{equation}\nEach of the numerators can only give a non-vanishing contribution if one of the two occupation numbers is zero. If the difference of the occupation numbers is $-1$, we simply flip the sign in the denominator. Without loss of generality, we can then take the indices $r$ and $p$ to belong to occupied and the indices $s$ and $q$ to virtual single-particle states. This gives us a factor of $4$. We can then use the symmetry of the Coulomb interaction and finally sum over spins (i.e. transforming to a basis of spatial orbitals), which gives an additional factor of 2. 
Therefore, we recover the well-known SOSEX correlation energy expression for a closed-shell system\\cite{Ren2013, Rodriguez-Mayorga2021} as\n\\begin{equation}\n\\label{e_sosex}\n E^{\\textrm{SOSEX}(W,v_c)} = - \\frac{1}{2\\pi} \\sum^{occ}_{ij}\\sum^{virt}_{ab} \\int^{\\infty}_0 d \\omega\n \\overline{W}_{iajb}(i\\omega) W^{(0)}_{jaib}\n \\frac{4(\\epsilon_i - \\epsilon_a)(\\epsilon_j - \\epsilon_b)}\n {\\left[(\\epsilon_i - \\epsilon_a)^2 + \\omega^2\\right]\n \\left[(\\epsilon_j - \\epsilon_b)^2 + \\omega^2\\right]} \\;.\n\\end{equation}\nIn a spatial orbital basis, the SOSEX$(W(0),W(0))$ correlation energy is obtained from \\eqref{g3w2} and \\eqref{g0} as\n\\begin{equation}\n\\label{e_sosex2}\n\\begin{aligned}\n E^{\\textrm{SOSEX}(W(0),W(0))} = & - \\frac{1}{2}\\sum_{pqrs} \\int d\\tau_{12}\n G^s_p(\\tau_{12})\n G^s_q(\\tau_{21})\n G^s_r(\\tau_{12})\n G^s_s(\\tau_{21}) \\\\ \n & \\times \\overline{W}_{spqr}(i\\omega=0)\\overline{W}_{rspq}(i\\omega=0) \\\\ \n = & \n - \\sum^{occ}_{ij}\\sum^{virt}_{ab}\n \\frac{\\overline{W}_{iajb}(i\\omega=0)\\overline{W}_{jaib}(i\\omega=0)}\n {\\epsilon_i + \\epsilon_j - \\epsilon_a - \\epsilon_b} \\;.\n\\end{aligned}\n\\end{equation}\nThis is the expression we have introduced in ref.~\\citen{Forster2022}. It is completely equivalent to the exchange term in MP2 with the bare electron-electron interaction replaced by the statically screened, coupling-constant averaged one. Both RPA+SOSEX variants can be understood as renormalized MP2 expressions and have a clear diagrammatic interpretation. 
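For a frequency-independent screened interaction, the $\omega$-integral in \eqref{e_sosex} can be performed analytically: for positive orbital-energy gaps $\Delta_1, \Delta_2$ one has $\int_0^{\infty} d\omega\, 4\Delta_1\Delta_2/\left\{\left[\Delta_1^2+\omega^2\right]\left[\Delta_2^2+\omega^2\right]\right\} = 2\pi/(\Delta_1+\Delta_2)$, which turns the frequency integral into an energy denominator of the type appearing in \eqref{e_sosex2}. A quick numerical sanity check of this identity (illustrative only):

```python
import numpy as np

def freq_integral(d1, d2, n=200):
    """Numerically evaluate int_0^inf dw 4*d1*d2 / ((d1^2+w^2)(d2^2+w^2))
    via Gauss-Legendre quadrature after the substitution w = tan(t)."""
    x, wts = np.polynomial.legendre.leggauss(n)
    t = 0.25 * np.pi * (x + 1.0)       # map nodes to (0, pi/2)
    w = np.tan(t)
    jac = 1.0 / np.cos(t) ** 2         # dw = dt / cos^2(t)
    integrand = 4.0 * d1 * d2 / ((d1**2 + w**2) * (d2**2 + w**2))
    return 0.25 * np.pi * np.sum(wts * integrand * jac)

for d1, d2 in [(0.5, 0.8), (1.0, 1.0), (0.2, 2.5)]:
    print(freq_integral(d1, d2), 2.0 * np.pi / (d1 + d2))
```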
In the next section, we briefly outline our implementation of these expressions, before assessing their accuracy for non-covalent interactions in sec.~\\ref{sec::results}.\n\n\\section{\\label{sec::compDetails}Technical and Computational Details} \nAll expressions presented herein have been implemented in a locally modified development version of the Amsterdam Density Functional (ADF) engine of the Amsterdam Modeling Suite 2022 (AMS2022).\\cite{adf2022} The non-interacting polarizability needed to evaluate \\eqref{e_rpa} and \\eqref{screened-coulomb-2} is calculated in imaginary time with quadratic scaling with system size in the atomic orbital basis. The implementation is described in detail in ref.~\\citen{Forster2020b}. In all calculations, we expand the KS Green's functions in correlation consistent bases of Slater-type orbitals of triple- and quadruple-$\\zeta$ quality (TZ3P and QZ6P, respectively).\\cite{Forster2021} All 4-point correlation functions (screened and unscreened Coulomb interactions as well as polarizabilities) are expressed in auxiliary basis sets of Slater-type functions which are usually 5 to 10 times larger than the primary bases. In all calculations, we use auxiliary basis sets of \\emph{VeryGood} quality. The transformation between primary and auxiliary basis (for the polarizability) is implemented with quadratic scaling with system size using the pair-atomic density fitting (PADF) method for products of atomic orbitals.\\cite{Krykunov2009, Wirz2017} For an outline of the implementation of this method in ADF, we refer to ref.~\\citen{Forster2020}. Eq.~\\eqref{e_rpa} is then evaluated in the basis of auxiliary fit functions with cubic scaling with system size. Eqs.~\\eqref{e_sosex} and \\eqref{e_sosex2} are evaluated with quintic scaling with system size in the canonical basis of KS states. This implementation is completely equivalent to the canonical MP2 implementation outlined in ref.~\\citen{Forster2020}. 
Eq.~\\eqref{overlineW} is evaluated using small Gauss-Legendre grids. In the case of a single $\\lambda$-point, a trapezoidal rule is used for integration. \n\nImaginary time and imaginary frequency variables are discretized using non-uniform bases $\\mathcal{T} = \\left\\{\\tau_{\\alpha}\\right\\}_{\\alpha = 1, \\dots N_{\\tau}}$ and $\\mathcal{W} = \\left\\{\\omega_{\\alpha}\\right\\}_{\\alpha = 1, \\dots N_{\\omega}}$ of sizes $N_{\\tau}$ and $N_{\\omega}$, respectively, tailored to each system. In practice, \\eqref{TtoW} and \\eqref{WtoT} are implemented by splitting them into sine- and cosine-transformation parts as\n\\begin{equation}\n\\label{sine-cosine}\n\\begin{aligned}\n \\overline{F}(i\\omega_{\\alpha}) = & \\sum_{\\beta} \\Omega^{(c)}_{\\alpha\\beta} \\overline{F}(i\\tau_\\beta) \\\\\n \\underline{F}(i\\omega_{\\alpha}) = & \\sum_{\\beta} \\Omega^{(s)}_{\\alpha\\beta} \\underline{F}(i\\tau_\\beta) \\;, \n\\end{aligned}\n\\end{equation}\nwhere $\\overline{F}$ and $\\underline{F}$ denote even and odd parts of $F$, respectively. The transformation from imaginary frequency to imaginary time only requires the (pseudo)inversion of $\\Omega^{(c)}$ and $\\Omega^{(s)}$, respectively. Our procedure to calculate $\\Omega^{(c)}$ and $\\Omega^{(s)}$ as well as $\\mathcal{T}$ and $\\mathcal{W}$ follows Kresse and coworkers.\\cite{Kaltak2014,Kaltak2014a,Liu2016} The technical specifications of our implementation have been described in the appendix of ref.~\\citen{Forster2021}. \n\nIn all calculations, we use grids of 24 points in imaginary time and imaginary frequency, which is more than sufficient for convergence.\\cite{Forster2020} The final correlation energies are then extrapolated to the complete basis set limit using the relation\\cite{Helgaker1997}\n\\begin{equation}\n \\label{helgaker}\n E_{CBS} = E^s_{QZ} + \\frac{4^3 E^c_{QZ} - 3^3 E^c_{TZ}}{4^3-3^3} \\;, \n\\end{equation}\nwhere $E^c_{QZ}$ ($E^c_{TZ}$) denotes the correlation energy at the QZ6P (TZ3P) level and $E^s_{QZ}$ the mean-field energy at the QZ6P level. 
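In code, the Helgaker-type inverse-cubic two-point extrapolation of the correlation energy, with the mean-field part taken at the larger basis, amounts to a one-liner (a minimal sketch; the numbers below are made up for illustration and are not taken from our calculations):

```python
def extrapolate_cbs(e_corr_tz, e_corr_qz, e_mf_qz, x=3, y=4):
    """Inverse-cubic two-point extrapolation of the correlation energy
    (Helgaker-type); the mean-field part is not extrapolated but taken
    at the larger (quadruple-zeta) basis. x, y are cardinal numbers."""
    e_corr_cbs = (y**3 * e_corr_qz - x**3 * e_corr_tz) / (y**3 - x**3)
    return e_mf_qz + e_corr_cbs

# illustrative numbers in hartree (not from the calculations herein)
total = extrapolate_cbs(e_corr_tz=-0.300, e_corr_qz=-0.320, e_mf_qz=-76.060)
print(total)
```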
The extrapolation scheme has been shown to be suitable for correlation consistent basis sets but cannot be used for KS or HF contributions.\\cite{Helgaker1997, Jensen2013} Therefore, we do not extrapolate the DFT energies, but assume them to be converged on the QZ level. Since the basis set error is not completely eliminated with this approach, we also counterpoise correct all energies, taking into account 100 \\% of the counterpoise correction. With these settings, we assume all our calculated values to be sufficiently converged to draw quantitative conclusions about the performance of the methods we benchmark herein. We use the \\emph{VeryGood} numerical quality for integrals over real space and distance cut-offs. Dependency thresholds\\cite{Forster2020b} have been set to $5\\times 10^{-4}$. \n\n\\section{\\label{sec::results}Results} \n\n\\subsection{Dependence on the $\\lambda$-Integration}\n\nWe now assess the accuracy of the post-RPA approaches for the S66 database and first discuss the dependence of the SOSEX correlation energies on the $\\lambda$-integration. The magnitude of the SOSEX contribution to the correlation energy as a function of the size of the $\\lambda$-grid is shown in table~\\ref{tab::lambda} for three selected systems. One can show that for a 2-electron system like $\\mathrm{H}_2$, the SOSEX($W, v_c$) correction equals minus half of the RPA correlation energy (in other words, RPA+SOSEX is self-correlation free). This relation is fulfilled for SOSEX($W, v_c$) already with 4 Gauss-Legendre points. The magnitude of the SOSEX correction is underestimated when the $\\lambda$-integration is carried out using a trapezoidal rule. This is illustrated in fig.~\\ref{fig::lambdapath} for $\\mathrm{H}_2$ and $\\mathrm{(H}_2\\mathrm{O)}_2$. Using a single $\\lambda$-point corresponds to approximating the $\\lambda$-dependence of the correlation energy as a straight line, which leads to a small integration error. 
\n\n\\begin{table}[hbt!]\n \\centering\n \\begin{tabular}{lcccccc}\n \\toprule\n & \\multicolumn{3}{c}{SOSEX($W, v_c$)} & \\multicolumn{3}{c}{SOSEX($W(0), W(0)$)} \\\\ \\cline{2-4} \\cline{5-7}\n $N_{\\lambda}$ & \n $\\mathrm{H}_2$ & $\\mathrm{(H}_2\\mathrm{O)}_2$ & Benzene & \n $\\mathrm{H}_2$ & $\\mathrm{(H}_2\\mathrm{O)}_2$ & Benzene \\\\\n \\midrule\n 1 & 15.51400 & 147.15253 & 287.90735 & 10.74228 & 101.45448 & 204.17377 \\\\\n 2 & 16.63810 & 157.79684 & 307.48549 & 12.83037 & 119.99186 & 239.22546 \\\\\n 4 & 16.63421 & 157.71007 & 307.35326 & 12.82006 & 119.79484 & 238.91728 \\\\\n 6 & 16.63421 & 157.70997 & 307.35313 & 12.82006 & 119.79451 & 238.91688 \\\\\n \\midrule\n $\\frac{1}{2}E^{RPA}$ & 16.63421 & & & 16.63421 & & \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Total magnitude of the SOSEX correction in kcal\/mol as a function of the size of the $\\lambda$-grid for selected systems.}\n \\label{tab::lambda}\n\\end{table}\n\n\\begin{figure}[hbt!]\n \\centering\n \\includegraphics[width=0.5\\textwidth]{lambdapath.pdf}\n \\caption{Magnitude of the SOSEX($W, v_c$) correlation energy as a function of $\\lambda$ relative to its value at $\\lambda=1$ for $\\mathrm{H}_2$ and $\\mathrm{(H}_2\\mathrm{O)}_2$.}\n \\label{fig::lambdapath}\n\\end{figure}\n\n\\begin{figure}[hbt!]\n \\centering\n \\includegraphics[width=\\textwidth]{s66_lambda_relative.pdf}\n \\caption{a) Difference in relative correlation energies for the S66 test set for both SOSEX variants when using 1 and 4 integration points for the $\\lambda$-integration. b) Error of relative SOSEX($W, v_c$) correlation energies with respect to the CCSD(T) reference values when using 1 and 4 integration points for the $\\lambda$-integration. c) Same as b), but for SOSEX($W(0), W(0)$). All values are in kcal\/mol.}\n \\label{fig::s66_lambda}\n\\end{figure}\n\nLet us now look at the effect on relative energies for SOSEX($W, v_c$) and SOSEX($W(0), W(0)$). 
For the S66 test set, the results of this comparison are shown in figure~\\ref{fig::s66_lambda}. Fig.~\\ref{fig::s66_lambda} a) shows the error in the interaction energies with respect to the converged $\\lambda$-integration when only a single integration point is used. Generally, the resulting integration errors are very small and do not exceed 0.1 kcal\/mol for most complexes. In parts b) and c) of figure~\\ref{fig::s66_lambda}, the relative errors of the interaction energies with respect to the CCSD(T) reference values are shown. One can clearly see that the error in the $\\lambda$-integration is negligible when looking at the accuracy of relative energies. This is also reflected in the MADs with respect to the CCSD(T) reference shown in figure~\\ref{fig::s66_lambda}.\n\n\\subsection{Dependence on the KS Green's function}\nWe next discuss the dependence of the correlation energies on the choice of the KS Green's function $G^s$. RPA calculations can in principle be performed self-consistently using a variety of approaches\\cite{\nHellgren2007, \nHellgren2012, \nVerma2012, \nBleiziffer2013, \nKlimes2014a,\nHellgren2015, \nVoora2019,\nGraf2020,\nRiemelmoser2021} (see ref.~\\citen{Yu2021} for a review) but this is rarely done in practice. For one, self-consistent RPA calculations are computationally demanding. 
Furthermore, the resulting energies are often worse than the ones evaluated using a Green's function from a generalized gradient approximation (GGA) or hybrid calculation.\\cite{Bleiziffer2013} GGAs like PBE or TPSS are often used to construct $G^s$.\\cite{Ren2013, Nguyen2020, Kreppel2020} However, for $GW$ calculations it is well known that GGAs produce a much too small band gap, and the screening within the bubble approximation is therefore massively overestimated.\\cite{Tal2021} Hybrid functionals with 25--50 \\% exact exchange are generally a much better choice.\\cite{Caruso2016} The situation is similar for RPA calculations.\\cite{Modrzejewski2020}\n\n\\begin{figure}[hbt!]\n \\centering\n \\includegraphics[width=\\textwidth]{functionals_compare.pdf}\n \\caption{Mean absolute deviations (MAD) (lower triangle in each plot) and maximum deviations (MAX) (upper triangle) for a) RPA, b) SOSEX($W(0), W(0)$) and c) SOSEX($W, v_c$) interaction energies for the S66 database using different KS Green's functions as well as to the CCSD(T) reference values (ref.). All values are in kcal\/mol.}\n \\label{fig::startingPoint}\n\\end{figure}\n\nIn figure~\\ref{fig::startingPoint}, mean absolute deviations (MAD) and maximum deviations (MAX) are shown for the S66 database for RPA and both SOSEX variants. All values have been obtained using a single integration point for the $\\lambda$-integral. The interaction energies obtained using the different Green's functions are compared to each other as well as to the CCSD(T) reference values by Hobza and coworkers.\\cite{Rezac2011} RPA and RPA+SOSEX($W(0), W(0)$) are similarly insensitive to the choice of the KS Green's function, with MADs between 0.20 and 0.39 kcal\/mol between the different functionals. However, individual values can differ by almost 2 kcal\/mol, which is a sizable difference, given that the largest interaction energies in the S66 database are only of the order of 20 kcal\/mol. 
The performance of RPA compared to the CCSD(T) reference is rather insensitive to the KS Green's function, even though the hybrid functionals perform slightly better. With 0.52 kcal\/mol, the MAD for RPA@PBE is in excellent agreement with the 0.61 kcal\/mol MAD obtained by Nguyen \\emph{et al.} in ref.~\\citen{Nguyen2020}, which was obtained with GTO-type basis sets and 50 \\% counterpoise correction instead of 100 \\%. This shows that our interaction energies are well converged with respect to the basis set size. Also note that the SOSEX contributions are expected to converge faster with respect to the basis size than the RPA contributions.\\cite{Kutepov2016} The SOSEX($W(0), W(0)$) results are much better using the hybrid functionals than with PBE. SOSEX($W, v_c$) is slightly more accurate using the PBE Green's function than using the PBE0 and PBE0(50) ones, but the differences between the different starting points are negligibly small. Also, the dependence of SOSEX($W,v_c$) on the starting point is smaller than for SOSEX($W(0), W(0)$). Independently of the choice of $G^s$, RPA+SOSEX always outperforms RPA.\n\n\\subsection{S66 Interaction Energies}\nWe now compare the accuracy of the different SOSEX corrections for the S66 database in more detail. The interactions of the first 22 complexes in the database are dominated by hydrogen bonds, while the next 22 complexes are mostly bound by dispersion interactions. The remaining interactions are of mixed nature.\\cite{Rezac2011} It is therefore useful to distinguish between these different interaction patterns in the following comparison. \n\n\\begin{figure}[hbt!]\n \\centering\n \\includegraphics[width=\\textwidth]{s66_absolute.pdf}\n \\caption{Deviations of RPA@PBE0 and both RPA + SOSEX@PBE0 variants for the S66 database with respect to the CCSD(T) reference. 
All values are in kcal\/mol.}\n \\label{fig::s66}\n\\end{figure}\n\n\\begin{table}[hbt!]\n \\centering\n \\begin{tabular}{lcccccccc}\n \\toprule \n & \\multicolumn{8}{c}{MAD} \\\\\n & \\multicolumn{2}{c}{S66} & \\multicolumn{2}{c}{hydr. bond} & \\multicolumn{2}{c}{dispersion} & \\multicolumn{2}{c}{mixed} \\\\ \n Method & [$\\frac{kcal}{mol}$] & [\\%] & [$\\frac{kcal}{mol}$] & [\\%] & [$\\frac{kcal}{mol}$] & [\\%] & [$\\frac{kcal}{mol}$] & [\\%] \\\\\n \\midrule \n SOSEX($W(0), W(0)$)@PBE0 & 0.32 & 7.28 & 0.45 & 5.76 & 0.29 & 10.33 & 0.21 & 5.50 \\\\\n SOSEX($W, v_c$)@PBE0 & 0.29 & 6.85 & 0.31 & 3.39 & 0.33 & 11.63 & 0.21 & 5.33 \\\\\n SOSEX($W, v_c$)@PBE & 0.26 & 6.25 & 0.23 & 3.51 & 0.33 & 10.16 & 0.17 & 4.26 \\\\\n RPA & 0.46 & 11.54 & 0.55 & 7.19 & 0.47 & 17.74 & 0.34 & 9.41 \\\\\n PBE0-D3(BJ) & 0.28 & 5.09 & 0.47 & 4.80 & 0.18 & 5.09 & 0.18 & 5.42 \\\\\n DSD-PBE-P86-D3(BJ) & 0.23 & 5.07 & 0.31 & 3.71 & 0.21 & 6.99 & 0.16 & 4.43 \\\\\n \\bottomrule \n \\end{tabular}\n \\caption{MADs (absolute and in \\%) of different electronic structure methods with respect to the CCSD(T) reference values for the whole S66 database and for its subcategories.}\n \\label{tab::s66_values}\n\\end{table}\n\nFigure~\\ref{fig::s66} shows the deviations of RPA and both RPA+SOSEX variants with respect to CCSD(T) for all datapoints. Absolute MADs and MADs in percent of the CCSD(T) interaction energies are presented in table~\\ref{tab::s66_values} for the whole database as well as for the individual categories. For the whole database, RPA gives a MAD of 11.5 \\%. Both SOSEX corrections lead to a considerable improvement, reducing the MAD to between 6.3 and 7.3 \\%, depending on the variant and starting point. 
SOSEX($W, v_c$) outperforms SOSEX($W(0), W(0)$) by far for the hydrogen-bonded complexes, and is even slightly more accurate than the double-hybrid DSD-PBE-P86-D3(BJ),\\cite{Santra2019a} one of the best double-hybrid functionals for weak interactions.\\cite{Mehta2018} For dispersion interactions, the performance of SOSEX($W(0), W(0)$) and SOSEX($W, v_c$) is comparable. Here, the empirically dispersion-corrected\\cite{Grimme2010, Stefan2011} functionals, the hybrid PBE0-D3(BJ) and DSD-PBE-P86-D3(BJ), are much more accurate than all MBPT based methods. A few exceptions aside, fig.~\\ref{fig::s66} shows that RPA overestimates the non-covalent interaction energies in the S66 database. SOSEX corrections lower the interaction energies, i.e. the non-covalently bound complexes become more stable. SOSEX($W, v_c$) shows a tendency to overstabilize the hydrogen-bonded complexes. For these systems, the RPA+SOSEX($W(0), W(0)$) energies are almost identical to the ones from RPA. \n\nOverall, SOSEX leads to significant improvements over RPA, making RPA+SOSEX competitive with dispersion-corrected hybrid functionals, although its description of dispersion interactions remains inferior. An important observation is that the dynamically screened SOSEX does not lead to an improved description of these effects compared to the statically screened one. This can be linked to the fact that the frequency dependence of the screening tends to be dominated by its static limit with increasing electron-electron separation. When the electron correlation effects are dominated by long-range interactions, the frequency dependence of the screening therefore becomes unimportant. 
We will comment on this observation in more detail below.\n\n\\subsection{S66x8 Interaction Energies}\n\nWe now assess the accuracy of these methods for the S66x8 dataset, which contains the complexes in the S66 database at 8 different geometries.\\cite{Rezac2011} The separations of the monomers in the complexes are given relative to their equilibrium distances, i.e. a relative separation of 2.0 means that the monomer separation in the complex is twice as large as the equilibrium separation. For our assessment of the different SOSEX corrections, we divide the potential energy curve into three regions, which we denote as short (equilibrium distance scaled by a factor 0.9-0.95), middle (1.0-1.25) and long (1.5-2.0). All RPA (+SOSEX) calculations discussed here have been performed using a PBE0 Green's function.\n\n\\begin{figure}[hbt!]\n \\centering\n \\includegraphics[width=0.7\\textwidth]{s66x8_percent.pdf}\n \\caption{MADs (in percent) for the S66x8 database with respect to the CCSD(T) reference values for RPA, RPA+SOSEX$(W,v_c)$ and RPA+SOSEX$(W(0),W(0))$. MADs are shown separately for the whole database (columns on the left) and for different monomer-monomer separations.}\n \\label{fig::s66x8}\n\\end{figure}\n\nThe results of our comparison are shown in figure~\\ref{fig::s66x8}, where the MADs (given in percent) with respect to CCSD(T) for the whole database as well as for the scaled monomer-monomer separations are shown. For the whole database, the average relative deviations with respect to the reference values are larger than for S66. For one, this is due to the geometries with large monomer-monomer separation, for which the interaction energies are very small, which also leads to much larger relative errors. It is important to notice that with smaller interaction energies the relative importance of basis set expansion errors increases as well. 
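The grouping of the S66x8 data points into these three regions and the computation of per-region MADs can be sketched as follows (a hypothetical data layout; the helper names and the synthetic numbers are ours and are not part of the S66x8 distribution):

```python
import numpy as np

# region boundaries used in the text: short (0.9-0.95), middle (1.0-1.25),
# long (1.5-2.0) times the equilibrium separation
def region(scale):
    if scale < 1.0:
        return "short"
    elif scale <= 1.25:
        return "middle"
    return "long"

def mad_percent_by_region(scales, e_method, e_ref):
    """MAD in percent of the reference interaction energy, grouped by
    monomer-monomer separation. Inputs are parallel numpy arrays."""
    out = {}
    for reg in ("short", "middle", "long"):
        mask = np.array([region(s) == reg for s in scales])
        err = np.abs((e_method[mask] - e_ref[mask]) / e_ref[mask]) * 100.0
        out[reg] = err.mean()
    return out

# tiny synthetic example (interaction energies in kcal/mol, made up)
scales = np.array([0.9, 1.0, 1.25, 1.5, 2.0])
e_ref = np.array([-5.0, -6.0, -4.0, -1.0, -0.2])
e_rpa = np.array([-4.5, -5.5, -3.7, -0.8, -0.15])
print(mad_percent_by_region(scales, e_rpa, e_ref))
```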
However, also at short monomer-monomer separations, the relative errors increase slightly compared to the intermediate regime. This can probably be attributed to stronger electron correlation and screening effects at short monomer separations. As already alluded to in the introduction, with decreasing electron-electron distances, vertex corrections in the electronic self-energy become more and more important and an expansion of the self-energy in terms of the screened electron-electron interaction will converge more slowly. Another factor might be that screening effects beyond the bubble approximation become more relevant. \n\n\\begin{table}[hbt!]\n \\centering\n \\begin{tabular}{lccc}\n \\toprule \n & short [\\%] & middle [\\%] & long [\\%] \\\\\n \\midrule\n SOSEX$(W,v_c)$ & 35.2 & 42.8 & 13.5 \\\\\n SOSEX$(W(0),W(0))$ & 31.0 & 37.9 & 19.1 \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Relative improvements obtained with different SOSEX variants over RPA for different groups of monomer-monomer separations.}\n \\label{tab::mads_S66x8_groups}\n\\end{table}\n\nWe can make two interesting observations: First, in the short and medium regime, both SOSEX corrections lead to sizable improvements over the RPA of between 31 and 43 \\% (table~\\ref{tab::mads_S66x8_groups}). This is comparable to the observation for S66. For large monomer-monomer separations, the improvements become much smaller, with 14 \\% for SOSEX$(W,v_c)$ and 19 \\% for SOSEX$(W(0),W(0))$. 
Second, while at the equilibrium geometries SOSEX$(W,v_c)$ shows a tendency to lead to larger improvements over the RPA than SOSEX$(W(0),W(0))$, this is no longer true for large monomer separations.\n\nThe second point can be rationalized by the fact that at large electron-electron distances the frequency dependence of the screening averages out.\\cite{Mattuck1992} The first point can be related to the second Ward identity for the 4-point vertex.\\cite{Ward1950} At large monomer-monomer separation, the interaction energies are dominated by long-range correlation effects. Following the argument by Kotani, van Schilfgaarde and Faleev,\\cite{Kotani2007} the interacting Green's function can be written as \n\\begin{equation}\n G = Z G^{(0)} + \\overline{G} \\;.\n\\end{equation}\nHere, $G^{(0)}$ is to be understood as the QP part of the interacting $G$ (in our case, this is the KS Green's function), while the incoherent part $\\overline{G}$ contains all contributions which are not described within the independent-particle approximation. We can rewrite \\eqref{sigma} as\n\\begin{equation}\n \\Sigma = GW I \\;,\n\\end{equation}\nwhere $I = 1 - \\Gamma_{xc}^{(0)}$. In the limit of large electron-electron separation and zero momentum transfer, $I$ behaves as\n\\begin{equation}\n I \\rightarrow 1 - \\frac{\\partial \\Sigma}{\\partial \\omega} = \\frac{1}{Z} \n\\end{equation} \ndue to the second Ward identity.\\cite{Strinati1988, Tal2021, Nakashima2021} If we now assume that only this limit is important for the description of electron-electron correlation, we obtain \n\\begin{equation}\n \\Sigma = G^{(0)}W + \\frac{1}{Z}\\overline{G}W \\;. \n\\end{equation}\nIf we further assume that $\\overline{G}$ does not contribute to the total correlation energy at large electron-electron distances, the correlation energy reduces to the RPA form. 
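Written out, the cancellation of the renormalization factors in this last step reads
\begin{equation}
 \Sigma = G W I \rightarrow \left(Z G^{(0)} + \overline{G}\right) W \frac{1}{Z} = G^{(0)} W + \frac{1}{Z} \overline{G} W \;,
\end{equation}
i.e. the factor $Z$ of the quasiparticle part of $G$ cancels exactly against the $1/Z$ contributed by the vertex.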
Note that a similar argument also holds for the kernel of \\eqref{bse} in the long-range and low-frequency limit.\\cite{Kotani2007} In practice, we can neither ignore the short-range contributions to the correlation energy nor the incoherent part of the Green's function completely. However, this argument gives some indication as to why SOSEX hardly leads to any improvements over the RPA for large monomer separations. Other technical reasons certainly play a role as well. For instance, the importance of basis set errors will increase, since the magnitude of the interaction energy decreases. \n\n\\section{\\label{sec::conclusions}Conclusions} \nThe accuracy of the RPA can in principle be improved by including vertex corrections in the self-energy. This can be done either directly, or indirectly through the solution of the BSE. In this work, we have assessed the accuracy of two closely related SOSEX corrections to RPA correlation energies for weak interactions. These are the well-known AC-SOSEX, herein termed SOSEX$(W,v_c)$, first introduced by Jansen \\emph{et al.},\\cite{Jansen2010} as well as an energy expression which is obtained from the statically screened $G3W2$ correction to the $GW$ self-energy.\\cite{Gruneis2014, Forster2022} This energy expression has already been introduced in ref.~\\citen{Forster2022}, albeit without a rigorous derivation. In particular, we implicitly assumed there that the integral over the coupling strength is evaluated using a trapezoidal rule. Here, we have introduced this expression (referred to as SOSEX$(W(0),W(0))$ in this work) with its proper $\\lambda$-dependence. \n\nWe have then assessed the accuracy of these approximations for the S66 and S66x8 databases of non-covalently bound complexes. Independently of the input KS Green's function, both SOSEX corrections lead to significant improvements over RPA. 
Also, we have shown here that the coupling constant integration can be approximated by a trapezoidal rule without significant loss of accuracy. Notice that in implementations using density fitting (DF), the evaluation of all $\\lambda$-dependent terms only scales as $N^3$. However, in implementations without DF such steps generally scale as $N^6$, and evaluating the coupling-constant integration with a single $\\lambda$-point therefore results in substantial computational savings.\\cite{Bates2018}\n\nSOSEX$(W,v_c)$ is most accurate for the hydrogen-bonded complexes while SOSEX$(W(0),W(0))$ is slightly more accurate for dispersion interactions, indicating that the frequency dependence of the screened interactions does not seem to be an important factor here. Our results for the S66x8 database revealed that the improvements over RPA are largest for the non-covalently bound complexes at their equilibrium distances. For large monomer-monomer separations, we have argued that the smaller improvements can be attributed to the $Z$-factor cancellation\\cite{Kotani2007} which is a consequence of the second Ward identity.\\cite{Ward1950, Strinati1988} Also, in the long-range limit, the difference between static and dynamic screening vanishes. While for equilibrium distances SOSEX$(W,v_c)$ is slightly more accurate than SOSEX$(W(0),W(0))$, this is no longer true for large monomer-monomer separations. A systematic assessment of the accuracy of RPA+SOSEX for a wider range of reaction types is currently missing. Especially for processes like bond breaking or ionization, for which RPA is known to perform poorly,\\cite{Eshuis2012a, Chen2017} RPA+SOSEX approaches might be a promising alternative.\n\nSOSEX$(W,v_c)$ and SOSEX$(W(0),W(0))$ both formally scale as $N^5$ with system size. While the computation of the SOSEX$(W,v_c)$ correction requires a numerical integration over imaginary frequency, the SOSEX$(W(0),W(0))$ correction comes with the same computational cost as the evaluation of the SOX contribution to MP2. 
However, MP2 is inadequate for large molecules since it neglects screening effects entirely.\\cite{Macke1950, Nguyen2020} In SOSEX$(W(0),W(0))$, the electron-electron interaction is screened, and therefore RPA+SOSEX$(W(0),W(0))$ is in principle applicable to large molecules. A stochastic linear scaling implementation of the SOSEX self-energy has already been developed.\\cite{Vlcek2019} Low-scaling MP2 implementations\\cite{Doser2009a,Pinski2015, Nagy2016} could potentially be generalized to SOSEX as well.\n\nWhile the addition of SOSEX leads to significant improvements over the RPA for the description of dispersion interactions, RPA+SOSEX falls short of the accuracy of dispersion-corrected hybrid or double-hybrid functionals. Going to higher orders in the expansion of the self-energy in terms of $W$ might make MBPT based energy expressions competitive with these functionals. This would lead to at least an $N^6$ scaling with system size $N$. Augmenting the 4-point vertex in the BSE with additional non-local contributions leads to the same unfavourable scaling. It should be noted that faster algorithms for solving the BSE with these kernels have been proposed as well, but so far only for the calculation of polarizabilities.\\cite{Ljungberg2015} It might therefore be a better solution to add local terms to the Hartree-kernel instead. Such modifications have recently been shown to lead to major improvements over the RPA for the description of dispersion interactions,\\cite{Trushin2021} and their combination with SOSEX might result in even better accuracy.\n\n\\begin{acknowledgement}\nThis research received funding (project number 731.017.417) from the Netherlands Organisation for Scientific Research (NWO) in the framework of the Innovation Fund for Chemistry and from the Ministry of Economic Affairs in the framework of the \\enquote{\\emph{TKI\/PPS-Toeslagregeling}}. 
We thank Mauricio Rodr\u00edguez-Mayorga for fruitful discussions.\n\\end{acknowledgement}\n\n\n\\begin{suppinfo}\n.csv files with all calculated total energies and non-covalent interaction energies at the TZ3P and QZ6P levels as well as at the extrapolated CBS limit. A PDF with further explanations.\n\\end{suppinfo}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}