diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzhgoj" "b/data_all_eng_slimpj/shuffled/split2/finalzzhgoj" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzhgoj" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThe purpose of this paper will be to introduce a way to reconstruct objects from their grey-scale digital images. More specifically, we focus on objects that are small compared to the image resolution and satisfy a certain regularity constraint called \\emph{$r$-regularity}. The notion of $r$-regularity was developed independently by Serra \\cite{Serra} and Pavlidis \\cite{Pavlidis} to describe a class of objects for which reconstruction from digital images preserved certain topological features. They both consider \\emph{subset digitisation}, that is, digitisation formed by placing an image grid on top of an object and then colouring an image cell black if its midpoint is on top of the object, and white if the cell midpoint is on top of the complement of the object. This way a binary image is produced, and they consider the set of black cells as the reconstructed set. Serra showed that if the grid is hexagonal and the object satisfies certain constraints, the original and reconstructed sets have the same homotopy, and Pavlidis showed that for a square grid and for certain $r$-regular sets, the set and its reconstruction are homeomorphic. Later on, Stelldinger and K\u00f6the \\cite{StelldingerKothe},\\cite{SK} argued that the concepts of homotopies or homeomorphisms were not strong enough to fully capture human perception of shape similarity. Instead they proposed two new similarity criterions called \\emph{weak} and \\emph{strong $r$-similarity}, and showed that under certain conditions, an $r$-regular set and its reconstruction by a square grid are both weakly and strongly $r$-similar. {\\color{black}We, too, will consider the notion of weak $r$-similarity in this paper.}\n\nHowever, Serra, Pavlidis, Stelldinger and K\u00f6the were modelling images using subset digitisation, which outputs a binary image. In contrast to this approach, Latecki et al. \\cite{LateckiConradGross} modelled an image by requiring that the intensity in each pixel be a monotonic function of the fraction of that object covered by that pixel. This way they seek to model a pixel intensity as the light intensity measured by a sensor in the middle of the pixel, and the result is a grey-level image much like the ones obtained in real situations. They show that after applying any threshold to such an image of an $r$-regular object with certain constraints, the set of black pixels has the same homotopy type as the original object and, in the case where the original object is a manifold with boundary, the two are even homeomorphic. They also conjecture that all $r$-regular objects are manifolds with boundary. This was later proven by Duarte and Torres in \\cite{DTs}.\n\nWe will model our images in the same way as Latecki et al. did, namely by requiring each pixel intensity to be a monotonic function of the fraction of the pixel covered by the object. In contrast to the above reconstruction approaches, we do not wish to use a set of pixels as our reconstructed set, but rather to construct a new set with smooth boundary that we may then use as the reconstruction. Also in contrast to the above, we will not consider binary images, but keep the information stored in the grey values in our endeavour to make a more precise reconstruction.\n\nWhen reconstruction, one should decide which properties one wishes the reconstructed object to share with the original one. Should the reconstructed set have the same topological features as the original one? Should the reconstructed set be close to the original one? Should a digitisation of the reconstructed set yield the same image as the original set? Should the reconstructed set be $r'$-regular for some $r'$ close to $r$? Though all of these comparison criteria are interesting to work with and an ideal reconstruction should satisfy them all, it is hard to construct such a set. In this paper, we will therefore focus on constructing a set that is close to the original one in Hausdorff distance (which will be introduced in the following), has a smooth boundary{\\color{black}, and is homeomorphic to the original set. This means that we show that our reconstructed set and the original are weakly $r$-similar in the sense of \\cite{SK}}.\n\n\\section{Basic definitions and theorems about \\texorpdfstring{$r$}{r}-regular sets}\\label{SecRBasics}\nLet us start by establishing some terminology.\nLet $X\\subset \\R^2$ be a set. We will denote the closure of $X$ by $\\overline{X}$, the interior of $X$ by $\\text{Int}(X)$ and the boundary of $X$ by $\\partial X$. The complement $\\R^2\\backslash X$ will be denoted by $X^C$. The set $X$ is compact if and only if $X$ is closed and bounded.\n\nThe Euclidean distance between two points $x$ and $y$ in $\\R^2$ will be denoted by $d(x,y)$ or, occasionally, by $\\norm{x-y}$.\n\nFor an $s>0$, we let $B_s(x)=\\sett{y\\in \\R^2\\mid d(x,y)0$. A closed set $X\\subseteq \\R^n$ is said to be \\emph{$r$-regular} if for each $x\\in\\partial X$ there exists two $r$-balls $B_r(x_b)\\subseteq X$ and $B_r(x_w)\\subseteq X^C$ such that $\\overline{B_r(x_b)}\\cap \\overline{B_r(x_w)}=\\sett{x}$, see \\cref{FigR-reg}.\n\\end{defn}\n\n\\begin{figure}\n\\includegraphics[scale=1]{R-regularCircles}\n\\caption{An $r$-regular set $X$ is a set where each boundary point belongs to both the boundary of an $r$-ball contained in $X$ and the boundary of an $r$-ball contained in $X^C$}\n\\label{FigR-reg}\n\\end{figure}\n\nIn general, we believe that a reconstruction can be made more accurately by taking the intensities of the grey pixels into account, and we are currently working on this idea. However, in this paper we restrict ourselves to looking at images where each pixel is considered to be either black, grey or white, without taking the exact intensities of the grey pixels into account:\n\n\\begin{defn}\nA \\emph{trinary} digital image is a digital image where the intensities of all grey pixels are set to 0.5.\n\\end{defn}\n\nThese trinary images will be our main interest in this paper. Note that the colour of a pixel (black, grey or white) does not depend on the monotonic function $\\phi$ used for calculating the pixel intensities - in fact, a pixel in a trinary image of an object $X$ is black if it is contained in $X$, white if it is contained in $X^C$ and grey if $\\partial X$ passes through it.\n\nWhen we make the digital image of an $r$-regular object by a lattice $d\\Z^2$, we can in general not be certain that there are any black or white pixels in the image - for instance, if $d$ is large compared to $r$, all pixels could contain an $r$-ball, which would mean that the image would be all grey. Since we cannot hope to make a very good reconstruction in this case, we will put a restriction on the relationship between the $r$ and $d$:\n\n\\begin{Convention}\nThroughout the following, we assume that $X$ is a bounded $r$-regular set and that $d\\sqrt{2}0$. We call $A$ and $B$ weakly $r$-similar if there exists a homeomorphism $f:\\R^2\\to\\R^2$ such that $x\\in A\\iff f(x)\\in B$ and the Hausdorff distance between the set boundaries satisfies $d_H(\\partial A, \\partial B)0$. Then the following are equivalent:\n\\begin{enumerate}\n\\item At any point $x\\in\\partial X$ there exist two closed $r$-balls $B_r\\subseteq A$ and $B_r'\\subseteq\\overline{A^C}$ such that $B_r\\cap B_r'=\\sett{x}$.\n\\item The sets $A$ and $\\overline{A^C}$ are equal to unions of closed $r$-balls.\n\\end{enumerate}\n\\end{prop}\n\n\\begin{defn}\n For $\\delta>0$, we denote the $\\delta$-tubular neighbourhood of\n $\\partial X$ in $\\R^2$ by\n $N_\\delta=\\sett{x\\in\\R^2\\mid d(x,\\partial X)<\\delta}$.\n\\end{defn}\n\n\\begin{lem}[Duarte \\& Torres, \\cite{DTs}, Lemma 5]\\label{LemProjection}\n Let $X$ be an $r$-regular set. For each $x\\in N_{r}$ there is a\n unique point $\\pi(x)\\in\\partial X$ such that $d(x,\\partial\n X)=d(\\pi(x),x)$. Hence there is a well-defined projection\n $\\pi:N_{r}\\to\\partial X$.\n\\end{lem}\n\n\\begin{thm}[Duarte and Torres, \\cite{DTr}]\nThe projection map $\\pi: N_r\\to\\partial X$ is continuous.\n\\end{thm}\n\n\nAnother important fact that we will be using heavily is the following:\n\nThere is a retraction $\\rho_{X^C}:N_r\\to X^C\\cup\\partial X$ (that we will\nsometimes just denote by $\\rho$) defined by\n\\begin{equation*}\n \\rho_{X^C}(x)=\\begin{cases}\n x & \\text{if $x\\in X^C\\cup\\partial X$},\n \\\\\n \\pi(x) & \\text{otherwise},\n \\end{cases}\n\\end{equation*}\nand likewise a retraction $\\rho_{X}:N_r\\to X$ defined by\n\\begin{equation*}\n \\rho_{X}(x)=\\begin{cases}\n x & \\text{if $x\\in X$},\n \\\\\n \\pi(x) & \\text{otherwise}.\n \\end{cases}\n\\end{equation*}\nThese retractions will prove to be crucial in later arguments, since\nthey have some nice properties.\n\nWe now state some results about $\\rho=\\rho_{X^C}$. However, the\nsimilar results for $\\rho_X$ also hold.\n\n\\begin{prop}[Stelldinger et al., \\cite{SLS}] Let $x, y\\in X^C$ with\n $d(x,y)<2r$ and let $L\\subseteq\\R^n$ be the line segment between\n them. Then\n \\begin{enumerate}[(i)]\n \\item The line segment $L$ is a subset of $ X^C\\cup N_r$, and\n $\\rho\\vert_L$ is injective,\n \\item For $sd\\sqrt{2}$ becomes $r>\\sqrt{2}$.\n\n\n\\begin{proof}[Proof of Lemma \\ref{LemTwoIntersections}]\n Let $x$ and $y$ be two points on the common edge of $B$ and $C$ that belongs to $\\partial X$, and let $L$ be the line segment\n between them. Note that since $X$ is $r$-regular and $r>\\sqrt{2}$,\n $X$ is in particular $\\sqrt{2}$-regular (cf. \\cite{TC}, Proposition\n A.2).\n\n Since the distance from $x$ to $y$ is less that $\\sqrt{2}$, there\n must be a path $\\pi(L)$ in $\\partial X$ between them, where $\\pi$ is the\n projection onto $\\partial X$, see Section \\ref{SecRBasics}. Since the\n projection is continuous and fixed at the endpoints, there must be\n a point $p$ on $\\pi(L)$ such that the tangent to $\\partial X$ at $p$ is\n horisontal. Let $p=(p_1,p_2)$. Since $p\\in\\partial X$ and $X$ is an\n $\\sqrt{2}$-regular set, there are balls\n $B_{\\sqrt{2}}(x_b)\\subseteq X$ and $B_{\\sqrt{2}}(x_w)\\subseteq X^c$\n such that\n $\\overline{B_{\\sqrt{2}}(x_b)}\\cap\\overline{B_{\\sqrt{2}}(x_w)}=\\sett{p}$,\n and since the tangent to $\\partial X$ at $p$ is horisontal, the centres\n $x_b$ and $x_w$ must lie on the vertical line through $p$.\n\n Note that $p\\in\\pi(L)\\subseteq S(L,\\sqrt{2})$. By Lemma\n \\ref{LemSpindleWidth}, the thickness of $S(L,\\sqrt{2})$ is\n $\\sqrt{2}-\\sqrt{2-\\frac{L^2}{4}}\\leq\n \\sqrt{2}-\\sqrt{2-\\frac{1}{4}}$. So $d(p, L)\\leq \\sqrt{2}-\\sqrt{2-\\frac{1}{4}}$. Then \n \\begin{equation*}\nd(x_b,L)\\leq d(x_b,p)+d(p,L)\\leq 2\\sqrt{2}-\\sqrt{2-\\frac{1}{4}}<1.51\n \\end{equation*}\n and\n \\begin{equation*}\nd(x_b,L)> d(p,L)-d(p,x_b)>\\sqrt{2}-\\sqrt{2-\\frac{1}{4}}-\\sqrt{2}=\\sqrt{2-\\frac{1}{4}}>1\n \\end{equation*}\n So $x_b$ belongs to either $A$ or $D$ - let us say $D$. Then $D$ must be black. In fact, since the first\n and last inequality are sharp, the common edge of $B$ and $C$ must\n be interior points of $X$, and hence it contains no intersection\n points. A symmetric argument for $x_w$ shows that $x_w$ must belong to $A$, hence\n $A$ must be white, and that the common edge of $A$ and $B$ cannot\n contain any points of $\\partial X$.\n \n If $\\partial X$ is tangent to $L$ at a point $p'$, replacing $p$ with $p'$ in the above argument shows the result.\n\\end{proof}\n\n\\begin{proof}[Proof of Lemma \\ref{LemYieldsTwoIntersections}]\nLet us name the two grey pixels in the configuration $B$ and $C$, as in the right part of Figure \\ref{FigYieldsTwoIntersections}. Choose boundary points $x_C\\in C$ and $x_D\\in D$. Both of these points are contained in a ball $B_{\\sqrt{5}\/2}(p)$, where $p$ is the midpoint of the common edge $e$ of pixel $B$ and $C$. By Corollary \\ref{CorPathInSpindle} and Lemma \\ref{LemSpindleIsIntersection} applied to the projection $\\pi$ in stead of $\\rho$, there is a path $\\gamma$ in $\\partial X$ from $x_C$ to $x_D$ contained in $B_{\\sqrt{5}\/2}(p)$. This path must pass the line containing $e$, and since it cannot do so if passing this line means entering a black pixel, it must in fact pass the edge $e$. The endpoints of $e$ are both black, hence if $\\partial X$ passes $e$ once, it must also pass $e$ a second time, as $\\partial X$ separates black points from white ones.\n\nBut then by Lemma \\ref{LemTwoIntersections} $A$ must be black and $D$ must be white (it cannot be the other way around, because then a black and a white pixel would share a corner, meaning that $\\partial X$ passes through that corner - which is against our assumptions).\n\\end{proof}\n\n\n\\begin{proof}[Proof of Lemma \\ref{LemNo9greys}]\nLet $c$ be the centre point of the grey pixel $C$. Assume $c\\in X$ (the other case is similar). Then $c$ belongs to a black ball of radius $\\sqrt{2}$ by Proposition \\ref{PropEquivalentRregDefinitions}, hence the centre of this black ball belongs to $B_{\\sqrt{2}}(c)$ and thus to either $C$ or one of its neighbours. Since the pixel containing the centre of the black ball must be entirely contained in the black ball, said pixel must be black. Hence one of the neighbours of $C$ must be black.\n\\end{proof}\n\n\\begin{figure}\n\\includegraphics[scale=0.4]{5greys}\n\\caption{We aim to show that the centre of the white ball tangent to $\\partial X$ at $p$ belongs to the red set $Y$.}\n\\label{Fig5greys}\n\\end{figure}\n\n\\begin{proof}[Proof of Lemma \\ref{Thm5greys}]\nPlace the configuration in a coordinate system as in the figure,\n such that the pixels has side length 1. The aim will be to show that\n $x_w$ lies so close to the upper right pixel that the white ball\n $B_r(x_w)$ contains the pixel.\n\n Note first that the line $l$ through $x_b$ and $c$ passes an edge of\n the black pixel, say the right edge. Let $a$ be this intersection\n point, see Figure~\\ref{Fig5greys}. To study the limit case, we\n will first assume that $x_b=a$.\n\n Since $d(a,c)0,\n \\end{align*}\n and on $[1\/2,1]$,\n \\begin{align*}\n \\frac{d}{dx_1}(f-x_2)&=-\\sqrt{7}+\\frac{1}{2}+\\frac{1+x_1}{\\sqrt{8-(1+x_1)^2}}-\\frac{2}{\\sqrt{\\frac{2}{(1+x_1)^2}-\\frac{1}{4}}}\\frac{1}{(1+x_1)^3}\\\\\n &\\leq \\frac{d}{dx_1}(f-x_2)\\vert_{x_1=1}\\\\\n &=-\\sqrt{7}+\\frac{1}{2}+{1}-\\frac{1}{2}\\\\\n &<0.\n \\end{align*}\n So $f-x_2$ is increasing on $[0,1\/2]$ and decreasing on\n $[1\/2,1]$. Hence, on $[0,1\/2]$\n \\begin{equation*}\n (f-x_2)(x_1)\\geq (f-x_2)(0)=-\\frac{\\sqrt{7}}{2}+\\frac{3}{2}>0\n \\end{equation*}\n and similarly, on $[1\/2,1]$\n \\begin{equation*}\n (f-x_2)(x_1)\\geq (f-x_2)(1)=0.\n \\end{equation*}\n Putting the last two equations together we see that\n $f(x_1)-x_2(x_1)\\geq 0$ everywhere on $[0,1]$, so $x_2\\leq f$ as\n claimed. So if $x_b=a$, then $x_w$ belongs to the set $Y$.\n\n \\begin{figure}\n \\includegraphics[scale=0.5]{Closeup5greys}\n \\caption{Close-up on the upper right pixel. The red graph is the\n graph of $x_2(x_1)$, and the blue graph is the graph of\n $f(x_1)$. The point $(x_1,x_2)$ on $l$ is chosen such that\n $d((x_1,x_2),a)=2r$, and hence $x_b$ lies closer to $c$ than\n $(x_1,x_2)$ does.}\n \\label{FigCloseup5greys}\n \\end{figure}\n\n Suppose now that $x_b$ is just any point in the lower left pixel\n that is closer than $\\sqrt{2}$ to $c$, and suppose that the line $l$\n through $x_b$ and $c$ leaves the lower left pixel in a point $a$ on\n the right pixel edge. Then $(0,1-t)$ lies on $l$ and inside $Y$. By\n what we just showed, the point $(x_1,x_2)$, $x_1\\geq0$, on $l$ that\n is at a distance $2r$ from $a$ is also in $Y$, hence the entire line\n segment from $(0,1-t)$ to $(x_1,x_2)$ is in $Y$, since $Y$ was\n convex, see Figure~\\ref{FigCloseup5greys}. But noticing that\n $r\\geq d(a,c)=d(c,(0,1-t))$, we get that\n \\begin{equation*}\n d(x_b,(0,1-t))=d(x_b,c)+d(a,c)\\leq 2r\n \\end{equation*}\n and\n \\begin{equation*}\n d(x_b,(x_1,x_2))=d(x_b,a)+d(a,x_w)\\geq d(a,x_w)=2r.\n \\end{equation*}\n Combining these equations, we see that\n $d(x_b,(0,1-t))\\leq d(x_b,x_w)\\leq d(x_b,(x_1,x_2))$. Hence $x_w$\n belongs to the line segment between $(0,1-t)$ and $(x_1,x_2)$ which\n was contained in $Y$, so $x_w\\in Y$.\n\\end{proof}\n\n\\begin{proof}[Proof of Lemma \\ref{LemSkalfarvessorte1}]\nLet us show $(i)$. Let $x$ and $y$ be\n corner points of the two pixels as in Figure\n \\ref{FigSkalfarvessorte1}, and let $L$ denote the line between them.\n\n If there are points of $ X^C$ in $L$, then $\\partial X$ must either be tangent to $X$ or intersect $L$ in several points (since the endpoints of $L$ clearly all\n belong to $X$). By Lemma \\ref{LemTwoIntersections} this means that\n either the pixel above $C$ or the pixel below pixel $B$ is white. Both of these pixels share a corner with a black pixel. But by the proof of Lemma \\ref{LemTwoIntersections}, the black corner point must be an interior point of $X^C$ and hence white - a contradiction.\nSo $L\\subseteq\\text{Int} (X)$\n\n If $B$ is not black, pick white points $b\\in \\text{Int} (B)$ and\n $c\\in\\text{Int} (C)$. Let $L_{bc}$ denote the line between them. Then\n there is a path $\\rho_{X^C}(L_{bc})$ in $X^C\\cup\\partial X$ connecting\n $b$ and $c$, and this path belongs to all balls of radius less than\n $r$ that contains both $b$ and $c$, (cf. Section\n \\ref{SecRBasics}). In particular,\n $\\rho_{X^C}(L_{bc})\\subseteq {B_{\\sqrt{2}}(x)}$, since this ball\n contains all of $B$ and all of $C$.\n\n Let $\\gamma$ be the piecewise linear path from $z$ though $x$ and\n $y$ to $w$. Then $\\gamma$ is contained in $\\text{Int}(X)$ and separates\n $B$ from $C$ inside $B_{\\sqrt{2}}(x)$. Hence $\\rho_{X^C}(L_{bc})$\n must intersect $\\gamma$ somewhere, but this is impossible, since\n $\\gamma\\subseteq\\text{Int}(X)$ and\n $\\rho_{X^C}(L_{bc})\\subseteq X^C\\cup\\partial X$. So $B$ cannot contain\n any white points, and hence it must be black.\n\n A similar reasoning can be applied to $A$: If $A$ is not black, pick\n a white point $a\\in A$, and let $L_{ac}$ denote the line between $a$\n and $c$ (where $c\\in C$ is the point we chose earlier). Then there\n is a path $\\rho_{X^C}(L_{ac})$ in $X^C\\cup\\partial X$ connecting $a$ and\n $c$, and this path must belong to the ball $B_{\\sqrt{2}}(x)$. But\n since $\\gamma$ also separates $A$ from $C$ inside\n $B_{\\sqrt{2}}(x)$, $\\rho_{X^C}(L_{ac})$ must intersect $\\gamma$\n somewhere. But this is impossible, since $\\gamma\\subseteq\\text{Int}(X)$\n and $\\rho_{X^C}(L_{ac})\\subseteq X^C\\cup\\partial X$ as before. So $A$\n cannot contain any white points, and hence it must also be black.\n \n The second part of this proof also proves $(ii)$. To prove $(iii)$, we apply Lemma \\ref{Thm5greys} to argue that one of the pixels $A_1$, $A_2$, $A_3$, $B_1$, $B_2$, $B_3$ is black.\n \nIndeed, suppose none of the pixels $A_1$, $A_2$, $A_3$, $B_1$, $B_2$, $B_3$ were black. Then $A_1$, $A_3$, $B_1$ and $B_3$ would have to be grey, since black and white pixels cannot be neighbours by assumption. But then by Remark \\ref{Rem3x3Pix}, either $A_2$ or $B_2$ would have to be black - a contradiction. So one of the pixels has to be black.\n If $A_1$, $A_3$, $B_1$ or $B_3$ is black, we are in situation $(i)$ and may use this proof to complete the proof of $(iii)$. If $A_2$ is black, and neither $A_1$ or $A_3$ is black Lemma \\ref{LemYieldsTwoIntersections} shows that both $B_1$ and $B_3$ is black, and we are again in the situation of case $(i)$. If $A_2$ and either $A_1$ or $A_3$ is black, we are in the situation of case $(i)$. The proof works equivalently if $B_2$ is black. So $(iii)$ is true when one of the pixels $A_1$, $A_2$, $A_3$, $B_1$, $B_2$, $B_3$ are black.\n\\end{proof}\n\n\\begin{figure}\n \\begin{subfigure}{0.3\\linewidth}\n \\centering\n \\includegraphics[scale=0.4]{6greysallnamed2}\n \\caption{We are considering 6 grey pixels in a $2\\times3$\n combination, and we wish to show that $\\partial X\\cap(G_1\\cup G_2)$\n belongs to the red set in the figure, and that one of the pixels\n $K_1$, $K_2$ must be black, and the other one white.}\n \\label{Fig6greysAdd}\n \\end{subfigure}\\hspace{0.5cm}\n \\begin{subfigure}{0.3\\linewidth}\n \\centering\n \\includegraphics[scale=0.4]{6greys2spindles2}\n \\caption{If both $K_1$ and $K_2$ were white, then points $x_a$\n and $x_c$ would be joined by path in $X^C\\cup \\partial X$, and so\n would points $x_b$, $x_d$ (these are the white paths in the\n figure).}\n \\label{Fig6greys2spindlesAdd}\n \\end{subfigure}\n \\hspace{0.5cm}\n \\begin{subfigure}{0.3\\linewidth}\n \\includegraphics[scale=0.4]{6greysWithSpindles2}\n \\caption{The projection $\\pi$ yields a path $\\gamma$ in\n $\\partial X$ from $p_{34}$ through $p_{12}$ to $p_{56}$, and this\n path lives inside the spindles between points $p_{34}$ and\n $p_{12}$, and points $p_{12}$ and $p_{56}$.}\n \\label{Fig6greysWithSpindlesAdd}\n \\end{subfigure}\n\\end{figure}\n\n\\begin{proof}[Proof of Lemma \\ref{Lem2x3grey}]\n Let us start by discussing (i).\n\n Consider $G_1$ as in Figure \\ref{Fig6greys2spindlesAdd}, and look at the configuration of $3\\times3$ pixel with $G_1$ as the centre pixel. Then all but the three upper neighbours of $G_1$ are grey. By Remark \\ref{Rem3x3Pix}, the upper d-neighbour $K_1$ of $G_1$ cannot be grey, hence it must be either black or white. By the same reasoning, $K_2$ must be either black or white\n\n\n It remains to prove that $K_1$ and $K_2$ cannot have the same\n colour, so suppose that $K_1$, $K_2$ are both white. Let $x_a$,\n $x_b$ be the lower corners of $K_1$ and $x_c$, $x_d$ the upper\n corners of $K_2$, as in Figure~\\ref{Fig6greys2spindlesAdd}. Note\n that these points are all elements of $X^C\\cup\\partial X$.\n\n Let $L_{ac}$ be the line segment between $x_a$ and $x_c$, and let\n $L_{bd}$ be the line segment between $x_b$ and $x_d$. Since\n $d(x_a,x_c)=d(x_b,x_d)=2<2\\sqrt{2}$, the map $\\rho$ maps these line\n segments to continuous paths in $X^C\\cup\\partial X$ by projecting points\n of $\\text{Int}(X)$ to $\\partial X$ and fixing all other points.\n\n Now, $L_{ac}$ and $L_{bd}$ cannot both be contained entirely in\n $\\overline{X^C}$, since this would imply that $\\rho_{X^C}$ kept them\n fixed. But since $G_1$, $G_2$ were grey, they must contain a point\n of $\\text{Int}(X)$ which in turn would belong to some black $r$-ball\n $B_r(x)\\subseteq X$. However, an interior point of such an $r$-ball would\n have to intersect the boundary of $G_1\\cup G_2$, which hence cannot\n be a subset of $X^C\\cup\\partial X$.\n\n So assume that $\\rho_{X^C}$ does not fix $L_{ac}$. Then there is a\n point $q$ on $\\rho_{X^C}(L_{ac})$ that is furthest from $L_{ac}$ and hence is a point on $\\partial X$ with a vertical tangent. This point must\n belong to $S(L_{ac},r)$ since all of $\\rho_{X^C}(L_{ac})$ does, and there\n must be a black and a white ball that are tangent to $\\partial X$ at\n $q$. Since the thickness of $S(L_{ac},r)\\leq\\sqrt{2}-1$, we must have $d(q,L_{ac})\\leq\\frac{1}{2}$. But then the centre $x$ of the left $\\sqrt{2}$-ball tangent to $\\partial X$ at $q$ must satisfy \n \\begin{equation*}\n 10$. The intersection of all $r$-balls centred in $A$ is equal to the intersection of all $r$-balls centred at the vertices of $A$.\n\\end{lem}\n\n\\begin{proof}\nIt suffices to show the theorem for lines, since if it holds for lines, it holds for any edge of $A$, and hence for all of $A$ by convexity.\n\nSo let $A$ be a line with endpoints $(x_1,0)$, $(x_2,0)$. Let $(p_1,p_2)\\in B_r((x_1,0))\\cap B_r((x_2,0))$ and $(x,0)\\in A$. Assume $p_1\\leq x$ - the other case is symmetrical. Then $\\norm{(x,0)-(p_1,p_2)}^2=(x-p_1)^2+p_2^2<(x_2-p_1)^2+p_2^2=\\norm{(x_2,0)-(p_1,p_2)}^2\\tau_1$) edges over its lifetime.}\nBesides serving the purpose of validating the relevance of our $M^4C$ model, this prediction has many real-world applications.\nFor instance, it is useful for media organizations to forecast popular news stories \\cite{gruhl05predictiveonlinechatter}.\nLikewise, popular videos on social media -- if predicted early -- can be cached by content distribution networks at their servers to achieve better performance \\cite{rodrigues11wordofmouth}.\nFurthermore, solving this problem enables the early detection of epidemic outbreaks and political crisis.\n\nTo the best of our knowledge, this problem has not been investigated in prior literature.\nThe closest effort is that Galuba \\etal analyzed the cascades of URLs on Twitter to predict URLs that users will tweet \\cite{galuba10twitters}.\nTheir proposed approach achieved about $50\\%$ true positive rate with about $15\\%$ false positive rate.\nUnfortunately, this accuracy is not much useful in practice.\n\nWe compare the prediction performance of $M^4C$ based scheme with a baseline scheme that uses the following $8$ cascade graph features with Na\\\"ive Bayes classifier: (1) edge growth rate, (2) number of nodes, (3) degree of the root node, (4) average shortest path length, (5) diameter, (6) number of spanning trees, (7) clustering coefficient, and (8) clique number.\nWe evaluate the effectiveness of these schemes in terms of the following decision sets.\n\\begin{enumerate}\n \\item \\emph{True Positives (TPs)}: The set of cascades that are correctly predicted to have a total of at least $\\tau_2$ edges over their lifetime.\n \\item \\emph{False Positives (FPs)}: The set of cascades that are incorrectly predicted to have a total of at least $\\tau_2$ edges over their lifetime.\n \\item \\emph{True Negatives (TNs)}: The set of cascades that are correctly predicted to have a total of less than $\\tau_2$ edges over their lifetime.\n \\item \\emph{False Negatives (FNs)}: The set of cascades that are incorrectly predicted to have a total of less than $\\tau_2$ edges over their lifetime.\n\\end{enumerate}\nWe further quantify the effectiveness of both cascade size prediction schemes in terms of the following three Receiver Operating Characteristic (ROC) metrics \\cite{fawcett04ROC}.\n\\begin{equation}\n \\emph{\\text{Detection Rate}} = \\frac{|TPs|}{|TPs| + |FNs|}\n\\end{equation}\n\\begin{equation}\n \\emph{\\text{False Positive Rate}} = \\frac{|FPs|}{|FPs| + |TNs|}\n\\end{equation}\n\\begin{equation}\n \\emph{\\text{Precision}} = \\frac{|TPs| + |TNs|}{|TPs| + |TNs| + |FPs| + |FNs|}\n\\end{equation}\n\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=.9\\columnwidth]{number_edges_distribution_tau2_10}\n\\precaption\n\\caption{Evaluation setup for varying $\\tau_1$.} \\label{fig: variation tau1}\n\\postcaption\n\\end{figure}\n\n\nTo ensure that the classification results are generalizable, we divide the data set into $k$ folds and use $k-1$ of them for training and the left over for testing.\nWe repeat these experiments $k$ times and report the average results in the following text.\nThis setup is called stratified $k$-fold cross-validation procedure \\cite{dataminingbook}.\nFor all experimental results reported in this paper, we use the value of $k=10$.\n\n\nIn this paper, we treat the cascade size prediction problem to an equivalent cascade classification problem: given a cascade with $\\tau_1$ edges, classify it into two classes: the class of cascades that will have less than $\\tau_2$ edges over their lifetime and the class of cascades that will have greater than or equal to $\\tau_2$ edges over their lifetime.\nWe use the initial $\\tau_1$ edges to train both the cascade size prediction scheme based on our $M^4C$ model and the baseline scheme that is based on the known cascade graph features.\nFor thorough evaluation, we vary the values of $\\tau_1$ and $\\tau_2$.\nBecause the distribution of the number of edges in our data set is skewed, that is, most cascades having only a few edges over their lifetime, the larger the values of $\\tau_1$ and $\\tau_2 - \\tau_1$ are, the more imbalanced the two classes are.\nTo mitigate the potential adverse effect of class imbalance \\cite{japkowicz02classimbalance}, we employ instance re-sampling to ensure that both classes have equal number of instances before the cross-validation evaluations.\nBelow we discuss the classification accuracies of both schemes as we vary the values of $\\tau_1$ and $\\tau_2$.\n\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=.9\\columnwidth]{roc_tau1}\n\\precaption\n\\caption{ROC plot of $M^4C$ based scheme for varying $\\tau_1$.} \\label{fig: roc tau1}\n\\postcaption\n\\end{figure}\n\n\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=1\\columnwidth]{number_edges_distribution_tau1_10}\n\\precaption\n\\caption{Evaluation setup for varying $\\tau_2 - \\tau_1$.} \\label{fig: variation tau2}\n\\postcaption\n\\end{figure}\n\n\n\n\n\\begin{figure*}[!t]\n\\centering\n \\subfigure[Detection Rate]{\n \\includegraphics[width=.66\\columnwidth]{tp_rate_tau2}\n }\n \\subfigure[False Positive Rate]{\n \\includegraphics[width=.66\\columnwidth]{fp_rate_tau2}\n }\n \\subfigure[Precision]{\n \\includegraphics[width=.66\\columnwidth]{precision_tau2}\n }\n\\precaption\n\\caption{Classification results of $M^4C$ and baseline schemes for varying values of $\\tau_2 - \\tau_1$, at $\\tau_1 = 10$.} \\label{fig: accuracy tau2}\n\\postcaption\n\\end{figure*}\n\n\n\\subsection{Impact of Varying $\\tau_1$}\nFigure \\ref{fig: variation tau1} shows the evaluation setup as we vary the values of $\\tau_1 \\in \\{10, 50, 100\\}$, while keeping $\\tau_2 - \\tau_1$ fixed at $10$.\nThe solid, dashed, and dotted vertical black lines corresponds to $\\tau_1 = 10, 50,$ and $100$.\nThe solid, dashed, and dotted vertical grey lines all correspond to $\\tau_2 - \\tau_1 = 100$.\nThe value of $\\tau_1$ impacts the classification results because it determines the number of edges in each cascade that are available for training.\nTherefore, larger values of $\\tau_1$ generally improve training quality of both cascade size prediction schemes and lead to better prediction accuracy.\n\n\nFigure \\ref{fig: accuracy tau1} plots the detection rate, false positive rate, and precision of $M^4C$ and baseline schemes for varying $\\tau_1 \\in \\{10, 50, 100\\}$, while keeping $\\tau_2 - \\tau_1$ fixed at $10$.\nOverall, we observe that $M^4C$ consistently outperforms the baseline scheme with peak precision of $96\\%$ at $\\tau_1 = 100, \\tau_2 - \\tau_1 = 10s$.\nWith some exceptions, we generally observe that the effectiveness of both schemes decreases as the value of $\\tau_1$ is increased.\nThe standard ROC threshold plots of $M^4C$ shown in Figure \\ref{fig: roc tau1} also confirm this observation.\n\n\n\n\n\n\n\\subsection{Impact of Varying $\\tau_2 - \\tau_1$}\n\\vspace{0.2in}\nFigure \\ref{fig: variation tau2} shows the evaluation setup as we vary the values of $\\tau_2 - \\tau_1 \\in \\{10, 50, 100\\}$, while keeping $\\tau_1$ fixed at $10$.\nThe solid vertical black line corresponds to $\\tau_1 = 10$.\nThe solid, dashed, and dotted vertical grey lines correspond to $\\tau_2-\\tau_1 = 10, 50,$ and $100$, respectively.\nThe value of $\\tau_2 - \\tau_1$ also impacts the classification results because it determines the separation or distance between the two classes.\nTherefore, larger values of $\\tau_2 - \\tau_1$ generally lead to better prediction accuracy.\n\n\nFigure \\ref{fig: accuracy tau2} plots the detection rate, false positive rate, and precision of $M^4C$ and baseline schemes for varying values of $\\tau_2 - \\tau_1$.\nOnce again, we observe that $M^4C$ consistently outperforms the baseline scheme with peak precision of $99\\%$ at $\\tau_2 - \\tau_1 = 100, \\tau_1 = 10$.\nWe also observe that the classification performance of both methods improves as the value of $\\tau_2 - \\tau_1$ is increased.\nThe standard ROC threshold plots of $M^4C$ shown in Figure \\ref{fig: roc tau2} also confirm this observation.\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Conclusions and Future Work}\n\\label{sec: conclusions}\n\\postsub\nIn this paper, we first propose $M^4C$, a multi-order Markov chain based model to represent and quantitatively characterize the morphology of cascades with arbitrary structures, shapes, and sizes.\nWe then demonstrate the relevance of our $M^4C$ model in solving the cascade size prediction problem.\nThe experimental results using a real-world Twitter data set showed that $M^4C$ significantly outperforms the baseline scheme in terms of prediction accuracy.\nIn summary, our $M^4C$ model allows us to formally and rigorously study cascade morphology, which is otherwise difficult.\n\n\n\n\n\n\nIn this paper, we applied our $M^4C$ model in the context of online social networks; however, our model is generally applicable to cascades in other contexts as well such as sociology, economy, psychology, political science, marketing, and epidemiology.\nApplications of our model in these contexts are interesting future work to pursue.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=1\\columnwidth]{roc_tau2}\n\\precaption\n\\caption{ROC plot of $M^4C$ based scheme for varying $\\tau_2 - \\tau_1$.} \\label{fig: roc tau2}\n\\postcaption\n\\end{figure}\n\n\n\\Comment\nWe have planned future work along the following two directions.\nFirst, we plan to explore randomized graph encoding methods such as those based on random walks on graphs \\cite{rosvall08random,figueiredo12randomwalks}.\nSecond, we plan to apply $M^4C$ to solve other important cascade classification problems that can use morphological features.\nFor example, $M^4C$ can be used to differentiate spam and normal activity cascades in online social networks.\n}\n\n\n\n\n\n\n\\section{Data set}\n\\label{sec: data set}\n\\postsec\n\n\\presub\n\\subsection{Data Collection}\n\\postsub\nAmong the popular online social networks, Twitter is one of the social networks that allows systematic collection of public data from its site.\nTherefore, we chose to study the morphology of cascades appearing on Twitter.\nTo collect data from Twitter, we focused on tweets related to the Arab Spring event, which represents an ideal case study because it spans several months.\nFor countries involved in the Arab Spring event, we collected data from Twitter during one complete week in March $2011$.\nWe provide more details of the data collection process in the following text.\n\n\nFor our study, we separately collected two data sets from Twitter.\nThe first data set was collected using Twitter's \\textit{streaming API}, which allows the realtime collection of public tweets matching one or more filter predicates \\cite{twitterapi}.\nTo collect tweet data pertaining to a given country, we provided relevant keywords as filter predicates.\nFor example, we used the keywords `Libya' and `Tripoli' to collect tweets related to Libya.\nIn total, we collected tweets for $8$ countries over a period of a week in March $2011$.\nUsing Twitter's streaming API, we collected more than $8$ million tweets involving more than $200$ thousand unique users.\n\n\n\nAs mentioned in Section \\ref{subsec: cascade construction}, we cannot accurately construct cascade graphs without information about whom the users are following.\nThe one-way following policy of Twitter results in three types of relationships between two given users: (1) both follow each other, (2) only one of them follows the other, and (3) they do not follow each other.\nTwitter provides follower information for a given user via a separate interface called REST API \\cite{twitterapi}.\nREST API employs aggressive rate limiting by allowing clients to make only a limited number of API calls in an hour.\nTwitter applies this limit based on the public IP address or authentication token from the client who issues the request.\nCurrently, rate limiting for REST API permits only $150$ requests per hour for unauthenticated users and $350$ requests per hour for authenticated users.\nIn our tweet data set, we encountered more than $200,000$ unique users and we were required to make at least one request per user to get the follower list.\nFor each user who follows more than $5000$ users, we had to make a separate call to get each subset of $5000$ users.\nHere it is noteworthy that some users were following or were being followed by millions of users, requiring thousands of separate calls for each user.\nIt would take us several months to collect this data if we were to use a single authentication token or a single external IP address.\nTo overcome this limitation, we utilized dozens of public proxy servers to parallelize calls to Twitter's REST API \\cite{wang04codeen}.\nUsing this methodology, we collected follower lists of all users in less than a month.\n\n\nTwitter provides a ``re-tweet'' functionality which allows users to re-post the tweet of other users to their profiles.\nThe reference to the user with original tweet is maintained in all subsequent re-tweets.\nThere is no information on intermediate users in re-tweets.\nUsing the follower graph, we constructed cascade graphs for all sets of re-tweets which are essentially cascades.\nTherefore, the overall graph is a union of all cascades in our data.\nIn Figure \\ref{fig: tweet visualizations}, we visualize two cascades in our data set using radial and circular layout methods in Graphviz \\cite{graphviz}.\nIn a radial layout, we choose the user with original tweet as a center vertex (or root vertex in general) and the remaining vertices are put in concentric circles based on their proximity to the center vertex.\nIn a circular layout, all components are plotted separately with their respective vertices in a circular format.\nVisualization of two example cascades provides us interesting insights about their morphology.\nFrom the first example, we observe that the degree of vertices typically decreases as their distance from the root vertex increases.\nHowever, for the second example, we observe that subsequent vertices have degrees comparable to the root vertex.\nIn this paper, our aim is to capture such differences in an automated fashion using our proposed model.\n\n\n\\begin{figure*}[!t]\n\\precaption\n\\centering\n \\subfigure[Edge and node counts]{\n \\label{fig: url degree}\n \\includegraphics[width=1\\columnwidth]{scatter_fig1}\n }\n \\subfigure[Root node degree and average path length]{\n \\label{fig: url cc}\n \\includegraphics[width=1\\columnwidth]{scatter_fig2}\n }\n \\subfigure[Diameter]{\n \\label{fig: url degree}\n \\includegraphics[width=1\\columnwidth]{cdf_fig4}\n }\n \\subfigure[Number of spanning trees]{\n \\label{fig: url spanning}\n \\includegraphics[width=1\\columnwidth]{cdf_fig6}\n }\n \\subfigure[Clustering coefficient]{\n \\label{fig: url spanning}\n \\includegraphics[width=1\\columnwidth]{cdf_fig3}\n }\n \\subfigure[Clique number]{\n \\label{fig: url cc}\n \\includegraphics[width=1\\columnwidth]{cdf_fig5}\n }\n\\postcaption\n\\caption{Distributions of various cascade graph attributes in the Twitter data set.}\n\\label{fig: distribution properties}\n\\postcaption\n\\postcaption\n\\end{figure*}\n\n\\subsection{Data Analysis}\n\\label{subsec: data analysis}\nWe now analyze the structural features of the cascades in our collected data set in terms of degree, path, and connectivity.\nLater in Section \\ref{sec: applications}, we will use these features for baseline comparison with our proposed model in terms of classification accuracy.\nFor structural features that can only be computed from undirected graphs, such as clustering coefficient and diameter, we compute them on the undirected versions of cascade graphs.\n\n\n\\subsubsection{Degree Properties}\nWe first jointly study the number of edges and the number of nodes for all cascades in our data set.\nThe cascade graphs in our data set are connected and each user in the cascade graph has at least one inward or outward edge.\nTherefore, the number of edges in a cascade graph $|E|$ has the lower bound: $|E| \\ge |V| - 1$, where $|V|$ is the number of users participating in the cascade.\nFigure \\ref{fig: distribution properties}(a) shows the scatter plot between edge and node counts for all cascades in our data set.\nNote that we use the logarithmic scale for both axes.\nFrom this figure, we observe that the scatter plot takes the form of a strip whose thickness represents the average number of additional edges for each node.\nThe average thickness of this strip approximately corresponds to having twice the number of edges compared to the number of nodes.\n\n\\vfill\\eject\n\\subsubsection{Path Properties}\nAnother important characteristic of a cascade is the degree of the root node (user who initiated the cascade), which typically has the highest degree compared to all other nodes in a cascade graph.\nIn our data set, the root node has the highest degree compared to all other nodes in cascade graphs for more than $92\\%$ of the cascades.\nThe degree of the root node essentially represents the number of different routes through which cascade propagates in an online social network.\nNote that these paths may merge together after the first hop; however, we expect some correlation between the degree of root node and the number of unique routes through which a cascade propagates.\nOne relevant characteristic of a graph is average (shortest) path length ($APL$), which denotes the average of all-pair shortest paths \\cite{bondaygraphtheory}.\n\\[\nAPL = \\sum_{\\forall i,j \\in V, i\\neq j} \\frac{d(i,j)}{|V|(|V|-1)},\n\\]\nwhere $d(i,j)$ is the shortest path length between users $i$ and $j$.\nWe expect the average path length of a cascade to be proportional to the degree of the root node.\nFigure \\ref{fig: distribution properties}(b) shows the scatter plot of the root node degree and the average path length.\nAs expected, we observe that cascades with higher root node degrees tend to have larger average path lengths.\nWe have changed the x-axis to logarithm scale to emphasize this relationship.\n\n\nAnother fundamental characteristic of a graph is called diameter, which denotes the largest value of all-pair shortest paths \\cite{bondaygraphtheory}.\nFigure \\ref{fig: distribution properties}(c) shows the distribution of diameter of cascades in our data set.\nThe bars represent the probability mass function and the line represents the cumulative density function (CDF).\nThe minimum diameter is $1$ because the minimum number of nodes in a cascade is $2$.\nCascades with more than $2$ nodes can have a diameter of $1$ only if they are cliques.\nIn our data set, approximately $40\\%$ cascades have a diameter of $1$.\nThe largest cascades in our data set have a diameter of $9$.\n\n\nFinally, we can characterize the number of unique paths that connect nodes in a graph by using the notion of spanning trees.\nFor a given graph, the number of unique paths between nodes is proportional to the number of spanning trees.\nThe number of spanning trees of a graph $G$, denoted by $t(G)$, is given by the product of non-zero eigenvalues of the Laplacian matrix and the reciprocal of the number of nodes \\cite{bondaygraphtheory}.\n\\[\nt(G) = \\frac{1}{n}\\lambda_1\\lambda_2...\\lambda_{n-1},\n\\]\nwhere $n$ is the number of nodes of the graph and $\\lambda_i$ is the $i$-th eigenvalue of the Laplacian matrix of the graph and $\\lambda_i \\neq 0, \\forall i$.\nFigure \\ref{fig: distribution properties}(d) shows the CDF of the number of spanning trees for cascades in our data set.\nNote that the x-axis is converted to logarithm scale.\nWe observe that only a small fraction $(< 15\\%)$ of cascades have more than one spanning tree in our data set, which highlights their sparsity.\n\n\n\\subsubsection{Connectivity Properties}\nThe clustering coefficient of a vertex $v_i$ is denoted by $c_i$ and is defined as the ratio of the number of existing edges among $v_i$ and $v_i$'s neighbors and the number of all possible edges among them \\cite{bondaygraphtheory}.\nUsing $\\Delta_i$ to denote the number of triangles containing vertex $v_i$ and $d_i$ to denote the degree of vertex $v_i$, the clustering coefficient of vertex $v_i$ is defined as:\n\\[\nc_i = \\frac{\\Delta_i}{{d_i\\choose 2}} = \\frac{2\\Delta_i}{d_i(d_i - 1)}\n\\]\nThe average clustering coefficient of a graph $G$ with $n$ nodes is simply the mean of clustering coefficients of individual nodes.\n\\[\nC_{avg} = \\frac{1}{n}\\sum_{\\forall i}c_i\n\\]\nFigure \\ref{fig: distribution properties}(e) shows the CDF of the average clustering coefficient for all cascades in our data set.\nWe note that approximately $86\\%$ of all cascades in our data set have average clustering coefficient value equal to $0$, \\ie, they do not have a single triangle.\nOnly a small fraction (less than $2\\%$) of cascades in our data set have clustering coefficient values greater than $0.5$, which again highlights their sparsity.\n\nWe are also interested in investigating the sizes of cliques in cascades that have one or more triangles.\nTowards this end, we study the clique numbers of all cascade graphs in our data set.\nThe clique number of a graph is the number of vertices in its largest clique \\cite{bondaygraphtheory}.\nFigure \\ref{fig: distribution properties}(f) shows the distribution of clique number for all cascades in our data set.\nSimilar to our observation in Figure \\ref{fig: distribution properties}(e), we observe that approximately $86\\%$ of cascades have a clique number of $2$, which means that they do not have a triangle.\nA little more than $10\\%$ of cascades have at least one triangle.\nThe largest clique number observed in our data set is $6$.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Introduction} \\label{sec: introduction}\n\\postsec\n\n\\subsection{Background and Motivation}\n\\postsub\nThe term \\emph{cascade} describes the phenomenon of something propagating along the links in a social network.\nThat something can be information such as a URL, action such as a monetary donation, influence such as buying a product, discussion such as commenting on a blog article, and a resource such as a torrent file.\nBased on what is being propagated, we can categorize cascades into various classes such as information cascades \\cite{cha09flickr}, action cascades \\cite{dave11icwsm}, influence cascades \\cite{kempe03kddinfluence}, discussion cascades \\cite{gomez11ht}, and resource cascades \\cite{starr92resourcecascade}.\nConsider a toy example where user $A$, connected to users $B$ and $C$ in a social network, broadcasts a piece of information (\\eg a picture or a news article) to his neighbors.\nUsers $B$ and $C$, after receiving it from user $A$, may further rebroadcast it to their neighbors resulting in the formation of a cascade.\n\n\nCascade phenomenon has been a fundamental topic in many disciplines such as sociology, economy, psychology, political science, marketing, and epidemiology with research literature tracing back to the 1950s \\cite{Rogers03Diffusion}.\nA key challenge in these studies is the lack of large scale cascade data.\nAs online social networks have recently become a primary way for people to share and disseminate information, the massive amount of data available on these networks provides unprecedent opportunities to study cascades at a large scale.\nRecent events, such as the Iran election protests, Arab Spring, Japanese earthquake, and London riots, have been significantly impacted by campaigns via cascades in online social networks \\cite{zhou10iran,ray11arab,londonsocial}.\nStudying cascades in online social networks will benefit a variety of domains such as social campaigns \\cite{zhou10iran}, product marketing and adoption \\cite{li04adoption}, online discussions \\cite{gomez11ht}, sentiment flow \\cite{miller11sentiment}, URL recommendation \\cite{rodrigues11wordofmouth}, and meme tracking \\cite{rod10meme}.\n\n\\subsection{Problem Statement}\n\\label{subsec: problem statement}\nThe goal of this paper is to study the morphology of cascades in online social networks.\nCascade morphology encompasses many aspects of cascades such as their structures, shapes, and sizes.\nSpecifically, we aim to develop a model that allows us to \\emph{represent} and \\emph{quantitatively characterize} cascade morphology; which are extremely difficult without a model.\nThere are two important requirements on the desired model of cascade morphology.\nFirst, this model should have enough expressivity and scalability to allow us to represent and describe cascades with arbitrary structures, shapes, and sizes.\nReal-world cascades sometimes have large sizes, containing thousands of nodes and edges \\cite{kwak10twitter}.\nSecond, this model should allow us to quantitatively characterize and rigorously analyze cascades based on the features extracted from this model.\n\n\n\\presub\n\\subsection{Limitations of Prior Art}\n\\postsub\nDespite the numerous publications regarding different aspects of online social networks, little work has been done on the morphology of cascades.\nRecently some researchers have studied the structure of cascades \\cite{leskovec07cascadingbehavior, kwak10twitter, zhou10iran, gomez11ht}; however, their analysis of cascade structures is limited to basic structural properties such as degree distribution, size, and depth.\nThese structural properties of cascades are important; however, they are far from being sufficient to precisely describe and represent cascade morphology.\n\n\n\\presub\n\\subsection{Proposed Model}\n\\postsub\nIn this paper, we propose a Multi-order Markov Model for the Morphology of Cascades ($M^4C$) that can represent and quantitatively characterize the morphology of cascades with arbitrary structures, shapes, and sizes.\n$M^4C$ has two key components: a cascade encoding algorithm and a cascade modeling method.\nThe cascade encoding algorithm uniquely encodes the morphology of a cascade for quantitative representation.\nIt encodes a cascade by first performing a depth-first traversal on the cascade graph and then compressing the traversal results using run-length encoding.\nThe cascade modeling method models the run-length encoded sequence of a cascade as a discrete random process.\nThis random process is further modeled as a Markov chain, which is then generalized into a multi-order Markov chain model.\n$M^4C$ satisfies the aforementioned two requirements.\nFirst, this model can precisely represent cascades with arbitrary structures, shapes, and sizes.\nSecond, this model allows us to quantitatively characterize cascades with different attributes using the state information from the underlying multi-order Markov chain model.\n\n\\presub\n\\subsection{Experimental Evaluation}\n\\postsub\nTo demonstrate the effectiveness of our $M^4C$ model in quantitatively characterizing cascades, we use it to investigate an unexplored but important problem in online social networks -- \\emph{cascade size prediction}:\n\\emph{given the first $\\tau_1$ edges in a cascade, we want to predict whether the cascade will have a total of at least $\\tau_2$ ($\\tau_2>\\tau_1$) edges over its lifetime.}\nThis prediction has many real-world applications.\nFor example, media companies can use it to predict social media stories that can potentially go viral \\cite{gruhl05predictiveonlinechatter,rodrigues11wordofmouth}.\nFurthermore, solving this problem enables early detection of epidemic outbreaks and political crisis.\nDespite its importance, this problem has not been addressed in prior literature.\n\nWe validate the effectiveness of $M^4C$ based cascade size prediction scheme on a real-world data set collected from Twitter containing more than $8$ million tweets, involving more than $200$ thousand unique users.\nThe results show that our $M^4C$ based cascade size prediction scheme consistently achieves more than $90\\%$ classification accuracy under different experimental scenarios.\nWe also compare our $M^4C$ based cascade size prediction scheme with a baseline prediction scheme based on cascade graph features such as edge growth rate, degree distribution, clustering, and diameter.\nThe results show that $M^4C$ allows us to achieve significantly better classification accuracy than the baseline method.\n\n\\vfill\\eject\n\\presub\n\\subsection{Key Contributions}\n\\postsub\nIn this paper, we not only propose the first cascade morphology model, but also propose the first cascade size prediction scheme based on our model. In summary, we make the following key contributions in this paper.\n\\begin{enumerate}\n\\item We propose $M^4C$ for representing and quantitatively characterizing the morphology of cascades with arbitrary structures, shapes, and sizes.\n\n\\item To demonstrate the effectiveness of our $M^4C$ model in quantitatively characterizing cascades, we develop a cascade size prediction scheme based on $M^4C$ features and compare its performance with that based on non-$M^4C$ features.\n\\end{enumerate}\n\n\nThe rest of this paper proceeds as follows.\nWe first review related work in Section \\ref{sec: related work}.\nWe then introduce our proposed model in Section \\ref{sec: model}.\nWe describe the details of our Twitter data set in Section \\ref{sec: data set}.\nWe present the experimental results of the aforementioned application in Section \\ref{sec: applications}.\nFinally, we conclude in Section \\ref{sec: conclusions} with an outlook to our future work.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Proposed Model}\n\\label{sec: model}\n\\postsec\nIn this section, we present $M^4C$ for quantitatively representing the morphology of cascades in online social networks.\nIt consists of two major components.\nThe first component encodes a given cascade graph for quantitative representation such that its morphological information is retained.\nThe second component models the encoded sequence using a multi-order Markov chain.\nBefore we describe these two components, we first present the details of the cascade graph construction process.\n\n\n\\presub\n\\subsection{Cascade Graph Construction}\n\\label{subsec: cascade construction}\n\\postsub\nA social network can be represented using two graphs, a relationship graph and a cascade graph.\nBoth graphs share the same set of nodes (or vertices) $V$, which represents the set of all users in a social network.\nA \\emph{relationship graph} represents the relationships among users in a social network.\nIn this graph, nodes represent users and edges represent the relationship among users.\nIf the edges are directed, where a directed edge from user $u$ to user $v$ denotes that $v$ is a follower of $u$, then this graph is called a \\emph{follower graph}, denoted as $(V, \\overrightarrow{E_f})$, where $V$ is the set of users and $\\overrightarrow{E_f}$ is the set of directed edges.\nIf the edges are undirected, where an undirected edge between user $u$ and user $v$ denotes that $u$ and $v$ are friends, then this graph is called a \\emph{friendship graph}, denoted as $(V, E_f)$, where $V$ is the set of users and $E_f$ is the set of undirected edges.\nBy the nature of our study, we focus on the follower graph denoted as $G_f = (V, \\overrightarrow{E_f})$.\nThe \\emph{cascade graph} represents the dynamic activities that are taking place in a social network (such as users sharing a URL or joining a group).\nA cascade graph is an acyclic directed graph denoted as $G_c = (V, \\overrightarrow{E_c}, T)$ where $V$ is the set of users, $\\overrightarrow{E_c}$ is a set of directed edges where a directed edge $e = (u,v)$ from user $u$ to user $v$ represents the propagation of something from $u$ to $v$, and $T$ is a function whose input is an edge $e \\in \\overrightarrow{E_c}$ and output is the time when the propagation along edge $e$ happens.\n\n\nWhile the static relationship graph is easy to construct from a social network, the dynamic cascade graph is non-trivial to construct because there maybe multiple propagation paths from the cascade source to a node.\nSo far there is no consensus on cascade graph construction in prior literature.\nIn this paper, we use a construction method that is similar to the method described in \\cite{sadikov11missing}.\nWe next explain our construction method through a Twitter example.\nConsider the follower graph in Figure \\ref{fig: cascade construction encoding}(a).\nLet $(u, t)$ denote a user $u$ performing an action, such as posting a URL on $u$'s Twitter profile, at time $t$.\nSuppose the following actions happen in the increasing time order: $(A,t_1)$, $(B,t_2)$, $(D,t_3)$, $(C,t_4)$, $(E,t_5)$, where $t_1 < t_2 < t_3 < t_4 < t_5$.\nSuppose $(A,t_1)$ denotes that $A$ posts a URL on his Twitter profile, and all other actions (namely $(B,t_2)$, $(D,t_3)$, $(C,t_4)$, and $(E,t_5)$) are reposting the same URL from $A$.\n\n\nThe cascade graph regarding the propagation of this URL is constructed as follows.\nFirst, $A$ is the root of the cascade graph because it is the origin of this cascade.\nSecond, $B$ reposting $A$'s tweet (which is a URL in this example) at time $t_2$ must be under $A$'s influence because there is only one path from $A$ to $B$ in the follower graph in Figure \\ref{fig: cascade construction encoding}(a).\nTherefore, in the cascade graph in Figure \\ref{fig: cascade construction encoding}(b), there is an edge from $A$ to $B$ with time stamp $t_2$.\nNote that each repost (or retweet in Twitter's terminology) contains the origin of the tweet ($A$ in this example).\nThird, however, $D$ reposting $A$'s tweet at time $t_3$ could be under either $A$'s influence (because there is a path from $A$ to $D$ in the follower graph in Figure \\ref{fig: cascade construction encoding}(a) and $t_1 < t_3$) or $B$'s influence (because there is a path from $B$ to $D$ in the follower graph as well and $t_2 < t_3$).\nNote that even if $D$ sees $A$'s tweet through $B$'s retweet, the repost of $A$'s tweet on $D$'s profile does not contain any information about $B$ and only shows that the origin of the tweet is $A$.\nIn this scenario, we assume that $D$ is partially influenced by both $A$ and $B$, instead of assuming that $D$ is influenced by either user $B$ or $A$, because this way we can retain more information with respect to the corresponding follower graph.\nTherefore, there is an edge from $A$ to $D$ and another edge from $B$ to $D$ in the cascade graph shown in Figure \\ref{fig: cascade construction encoding}(b), where the time stamps of both edges are $t_3$.\nSimilarly, we add the edge from $B$ to $C$ with a time stamp $t_4$ and the edge from $D$ to $E$ with a time stamp $t_5$ in the cascade graph.\n\n\n\\presub\n\\subsection{Cascade Encoding}\n\\label{subsection: cascade encoding}\nThe first step in cascade encoding is to encode the constructed cascade graph as a binary sequence that uniquely represents the structure of the cascade graph.\nGraph encoding has been studied for a wide range of problems across several domains such as image compression, text and speech recognition, and DNA profiling \\cite{reid97imagecoding, biem06handwriting, hsieha08dnagraph}.\nThe typical goal of graph encoding is to transform large geometric data into a succinct representation for efficient storage and processing.\nHowever, our goal here is to encode a given cascade graph in a way that its morphological information is captured.\nTowards this end, we use the following graph encoding algorithm.\n\nWe first conduct a depth-first traversal of the constructed cascade graph starting from the root node, which results in a spanning tree.\nTo result in a unique spanning tree, at each node in the cascade graph, we sort the outgoing edges in the increasing order of their time stamps, \\ie, sort the outgoing edges $e_1, e_2, \\cdots, e_k$ of a node so that $T(e_1) < T(e_2) < \\cdots < T(e_k)$; and then traverse them in this order.\nFor each edge, we use 1 to encode its downward traversal and 0 to encode its upward traversal.\nFigure \\ref{fig: cascade construction encoding}(c) shows the traversal of the cascade graph in Figure \\ref{fig: cascade construction encoding}(b) and the encoding of each downward or upward traversal.\nThe binary encoding results from this traversal process is \\textcolor{red}{11}0\\textcolor{red}{11}000.\nLet $C$ represent the binary code of a cascade graph $G = (V,\\overrightarrow{E})$.\nThen the length of the binary code $|C|$ is always twice the size of the edge set $|\\overrightarrow{E}|$, \\ie, $|C|=2|\\overrightarrow{E}|$.\nFurthermore, let $C[i]$ be the $i$-th element of the binary code and $I(C[i])$ be an indicator function so that $I(C[i]) = 1$ if $C[i] = 1$, and $I(C[i]) = -1$ if $C[i] = 0$.\nBecause each edge is exactly traversed twice, one downward and one upward, we have:\n\\[\n\\sum_{i=1}^{|C|}I(C[i]) = 0.\n\\]\n\n\nThe second step in cascade encoding is to convert the binary sequence, which is obtained from the depth-first traversal of the cascade graph, into the corresponding run-length encoding.\nA \\emph{run} in a binary sequence is a subsequence where all bits in this subsequence are 0s (or 1s) but the bits before and after the subsequence are 1s (or 0s), if they exist.\nBy replacing each run in a binary sequence with the length of the run, we obtain the run-length encoding of the binary sequence \\cite{jaynat84coding}.\nFor example, for the binary sequence \\textcolor{red}{11}0\\textcolor{red}{11}000, the corresponding run-length encoding is \\textcolor{red}{2}1\\textcolor{red}{2}3.\nSince the binary sequence obtained from our depth-first traversal of a cascade graph always starts with 1, the run-length encoding uniquely and compactly represents the binary sequence.\n\n\n\\presub\n\\subsection{Markov Chain Model of Cascades}\n\\label{subsec: markov model}\nWe want to model cascade encoding to capture characteristics of cascades so that they can be used to identify the similarities and differences among cascades.\nThis model should allow us to extract morphological features for different classes of cascades and then use these features to classify them.\nWe first present our model, and then demonstrate its usefulness in classifying cascades.\n\n\nConsider the run-length encoded sequence $\\hat{C}$ of a cascade graph $G$.\nWe can model this sequence using a discrete random process $\\{\\hat{C}_k\\}$, $k = 1,2,...,|\\hat{C}|$.\nBasic analysis of this process reveals that there is some level of dependencies among the consecutive symbols emitted by the random process.\nIn other words, it would be unreasonable to assume that the process is independent or memoryless.\nMeanwhile, to balance between capturing some of the dependencies within the process and to simplify the mathematical treatment of this encoded sequence, we resort to invoking the Markovian assumption \\cite{pierre08markovchains}.\nAs we show later, this assumption can be reasonably justified (to some extent) by analyzing the autocorrelation function of the underlying process $\\{\\hat{C}_k\\}$.\nFor a first order Markov process, this implies the following assumption:\n$Pr[\\hat{C}_n = c_n | \\hat{C}_1 = c_1, \\hat{C}_2 = c_2, ..., \\hat{C}_{n-1} = c_{n-1}] = Pr[\\hat{C}_n = c_n | \\hat{C}_{n-1} = c_{n-1}]$.\nEquivalently:\n\\begin{equation}\nPr[c_1, c_2, ..., c_n] = Pr[c_1]Pr[c_2|c_1]...Pr[c_n|c_{n-1}].\n\\label{equation: markov}\n\\end{equation}\nIn other words, we invoke the Markovian assumption about the underlying cascade process and its morphology, which is represented by the encoded sequence $\\hat{C}$.\n\nGiven the Markovian assumption with homogeneous time-invariant transition probabilities, $\\hat{C}$ can be represented using a traditional Markov chain.\nFigure \\ref{fig: markov chain} shows the Markov chain corresponding to the toy example in Figure \\ref{fig: cascade construction encoding}, where each unique symbol in $\\hat{C}$ is represented as a state.\nThe Markov chain in Figure \\ref{fig: markov chain} has $3$ states because there are $3$ unique symbols in its run-length encoding.\n\n\\begin{figure}[htbp]\n\\precaption\n\\centering\n\\includegraphics[width=0.8\\columnwidth]{figure_encoding_markovchain1}\n\\caption{Markov chain model for the toy example.}\n\\label{fig: markov chain}\n\\end{figure}\n\nA Markov chain can also be specified in terms of its state transition probabilities, denoted as $T$.\nHence, for the toy example of Figure 2, we have:\n\\[ T = \\left( \\begin{array}{ccc}\nP_{1|1} & P_{1|2} & P_{1|3} \\\\\nP_{2|1} & P_{2|2} & P_{2|3} \\\\\nP_{3|1} & P_{3|2} & P_{3|3} \\end{array} \\right),\\]\nwhere $P_{i|j}$ represents the conditional probabilities $Pr[\\hat{C}_n = i | \\hat{C}_{n-1} = j]$.\nThe Markov chain framework allows us to quantify the probability of an arbitrary sequence of states by using Equation \\ref{equation: markov}.\nThis will help us to identify sequences that are more (or less) probable in one class of cascades.\nWe next further generalize the above basic Markov chain model.\n\n\n\\subsection{Multi-order Generalization}\nEach element of the state transition matrix of a Markov chain is equivalent to a sub-sequence of $\\hat{C}$, which in turn is equivalent to a subgraph of the corresponding cascade.\nWe can generalize a Markov chain model by incorporating multiple consecutive transitions as a single state in the state transition matrix, which will allow us to specify arbitrary sized subgraphs of cascades.\nSuch generalized Markov chains are called multi-order Markov chains and are sometimes referred to as full-state Markov chains \\cite{khayam03markov}.\nThe order of a Markov chain represents the extent to which past states determine the present state.\nThe basic Markov chain model introduced earlier is of order $1$.\n\n\nAutocorrelation is an important statistic for selecting appropriate order for a Markov chain model \\cite{pierre08markovchains}.\nFor a given lag $t$, the autocorrelation function of a stochastic process, $X_m$ (where $m$ is the time or space index), is defined as:\n\\begin{equation}\n\\rho[t] = \\frac{E\\{ X_0 X_t\\} - E\\{ X_0\\} E\\{X_t\\}}{\\sigma_{X_0}\\sigma_{X_t}},\n\\end{equation}\nwhere $E(\\cdot)$ represents the expectation operation and $\\sigma_{X_i}$ is the standard deviation of the random variable at time or space lag $i$.\nThe value of the autocorrelation function lies in the range $[-1,1]$, where $|\\rho[t]|=1$ indicates perfect correlation at lag $t$ and $\\rho[t]=0$ means no correlation at lag $t$.\nFigure \\ref{fig: acf} plots the sample autocorrelation function of the run-length encoding of an example cascade.\nThe dashed horizontal lines represent the $95\\%$ confidence envelope.\nFor this particular example, we observe that sample autocorrelation values jump outside the confidence envelope at lag $= 3$.\nThis indicates that the underlying random process has the third order dependency.\nThus, we select the third order for Markov chain model for this particular cascade.\nThe autocorrelation-based analysis of more complex cascades can lead to even higher order Markov chains.\n\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=1\\columnwidth]{untitled}\n\\vspace{0.1in}\n\\caption{Sample autocorrelation function for the toy example.}\n\\label{fig: acf}\n\\vspace{0.1in}\n\\end{figure}\n\n\nThe number of possible states of a Markov chain increase exponentially with an increase in the order of the Markov chain model.\nFor the $n$-th order extension of a Markov chain with $k$ states, the total number of states is $k^n$.\nFigure \\ref{fig: multiorder} shows the plot of the second order extension of the $3$-state, $1$-st order Markov chain model shown in Figure \\ref{fig: markov chain}.\nThis second order Markov chain contains a total of $3^2 = 9$ states, $4$ of which are shown in the figure due to space limitations.\nIn this second order Markov chain model, the conditional probabilities are in the form $P_{i,j|k,l}$ and the state transition matrix is now defined as follows.\n\\[\n\\vspace{0.2in}\nT_2 = \\left( \\begin{array}{ccccc}\nP_{1,1|1,1} & P_{1,1|1,2} & P_{1,1|1,3} & ... & P_{1,1|3,3}\\\\\nP_{1,2|1,1} & P_{1,2|1,2} & P_{1,2|1,3} & ... & P_{1,2|3,3}\\\\\nP_{1,3|1,1} & P_{1,3|1,2} & P_{1,3|1,3} & ... & P_{1,3|3,3}\\\\\n. & . & . & \\ddots & .\\\\\n. & . & . & \\ddots & .\\\\\nP_{3,2|1,1} & P_{3,2|1,2} & P_{3,2|1,3} & ... & P_{3,2|3,3}\\\\\nP_{3,3|1,1} & P_{3,3|1,2} & P_{3,3|1,3} & ... & P_{3,3|3,3}\\\\\n\\end{array} \\right)\n\\vspace{0.1in}\n\\]\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=1\\columnwidth]{figure_encoding_markovchain2}\n\\caption{Multi-order generalization of the Markov chain model for the toy example.}\n\\label{fig: multiorder}\n\\end{figure}\n\nFor a set of cascade encoding sequences, let $\\mathbb{T}$ denote the set of selected orders as per the aforementioned criterion.\nWe select the maximum value in $\\mathbb{T}$, denoted by $T_{max}$, as the order of a single Markov chain model that we want to employ.\n\n\n\\subsection{Cascade Classification}\n\\label{subsec: cascade classification}\nAs mentioned in Section \\ref{subsec: problem statement}, an important desirable property for our proposed model is to identify differentiating features of cascade morphology that can be potentially leveraged for automated classification of cascades.\nWe now show how to use the aforementioned Markov chain model to classify cascades.\n\n\n\\subsubsection{Feature Selection}\nThe essence of our modeling approach is to capture the morphology of a cascade through the states of the multi-order Markov model.\nEach state in the Markov chain represents a likely sub-structure of cascades' morphology.\nThus, we can use these states to serve as underlying features that can be used to characterize a given cascade and to determine the class that it might belong to.\nHowever, as mentioned earlier, the number of states in a Markov chain increase exponentially for higher orders and so does the complexity of the underlying model.\nFurthermore, higher order Markov chains require a large amount of training data to identify a subset of states that actually appear in the training data.\nIn other words, a Markov chain model trained with limited data is typically sparse.\nTherefore, we use the following two approaches to systematically reduce the number of states in the Markov chain of order $T_{max}$.\n\n\nFirst, we can combine multiple states in the Markov chain to reduce its number of states.\nBy combining states in a multi-order Markov chain, we are essentially using states from lower order Markov chains.\nWe need to establish a criterion to combine states in the Markov chain.\nTowards this end, we use the concept of \\emph{typicality} of Markov chain states.\nTypicality allows us to identify a typical subset of Markov chain states by generating its realizations \\cite{pierre08markovchains}.\nBefore delving into further details, we first state the well-known typicality theorem below:\nFor any stationary and irreducible Markov process $X$ and a constant $c$, the sequence $x_1, x_2, ..., x_m$ is almost surely $(n, \\epsilon)$-typical for every $n \\le c \\log m$ as $m \\rightarrow \\infty$.\nA sequence $x_1, x_2, ..., x_m$ is called $(n, \\epsilon)$-typical for a Markov process $X$ if $\\hat{P}(x_1,x_2,..., x_n) = 0$, whenever $P(x_1,x_2,..., x_n) = 0$, and\n\\[\n\\bigg|\\frac{\\hat{P}(x_1,x_2,..., x_n)}{P(x_1,x_2,..., x_n)} - 1\\bigg| < \\epsilon \\mbox{, when } P(x_1,x_2,..., x_n)>0.\n\\]\nHere $\\hat{P}(x_1,x_2,..., x_n)$ and $P(x_1,x_2,..., x_n)$ are the empirical relative frequency and the actual probability of the sequence $x_1,x_2,..., x_n$, respectively.\nIn other words,\n\\[\n\\hat{P}(x_1,x_2,..., x_n) \\approx {P(x_1,x_2,..., x_n)}.\n\\]\nThis theorem shows us a way of empirically identifying typical sample paths of arbitrary length for a given Markov process.\nBased on this theorem, we generate realizations (or sample paths) of arbitrary lengths from the transition matrix of the Markov process.\nBy generating a sufficiently large number of sample paths of a given length, we can identify a relatively small subset of sample paths that are typical.\nUsing this criterion, we select a subset of up to top-$100,000$ typical states as potential features, whose lengths vary in the range $[0,\\mathbb{T}_{max}]$.\nIn what follows, we further short-list the Markov states from the top-$100,000$ typical subset and use them as features to classify cascades.\n\n\nSecond, to further reduce the number of features to be employed in a classifier, we need to prioritize the aforementioned typical Markov states.\nThe prioritization of features can be based on their differentiation power.\nAn information theoretic measure that can be used to quantify the differentiation power of features (Markov states in our case) is information gain \\cite{cover91infotheory}.\nIn this context, information gain is the mutual information between a given feature $X_i$ and the class variable $Y$.\nFor a given feature $X_i$ and the class variable $Y$, the information gain of $X_i$ with respect to $Y$ is defined as:\n\\[\nIG(X_i;Y) = H(Y) - H(Y|X_i),\n\\]\nwhere $H(Y)$ denotes the marginal entropy of the class variable $Y$ and $H(Y|X_i)$ represents the conditional entropy of $Y$ given feature $X_i$.\nIn other words, information gain quantifies the reduction in the uncertainty of the class variable $Y$ given that we have complete knowledge of the feature $X_i$.\nNote that, in this paper, the class variable $Y$ is $\\{0,1\\}$ because we apply our morphology modeling framework to problems that require differentiating between two classes of cascades (as described later).\nIn this study, we eventually only select the top-$100$ features with highest information gain.\n\n\n\\subsubsection{Classification}\n\\vspace{0.1in}\nLet us assume that the presence of a state $i$ is represented by a binary random variable $X_i, i = 1,2, ..., 100$.\nHence, $P(X_i = 1)$ represents the probability for the presence of state $X_i$.\nWe can think of the $X_i$s as the variables representing potential features.\nThus, our training process proceeds as follows.\nFor a given class $Y$ of cascades, we evaluate the presence of a given feature (state) $X_i$ in $Y$ by analyzing a sufficiently large number of sample cascades that belong to the class $Y$.\nSubsequently, we are able to evaluate the a-priori conditional probability $P(X_i|Y)$ for each class $Y \\in \\{1,2,..., k\\}$, where the number of classes $k$ is usually very small.\nIn our case, we are interested in the traditional binary classifier with $k = 2$.\nHowever, note that this classification methodology can be extended to the cases with $k > 2$ using the well-known one-against-one (pairwise) or multiple one-against-all formulations \\cite{hsu02multiclass}.\n\n\nWe can jointly use multiple features to differentiate between two sets of cascades belonging to different classes.\nIn particular, given the top-$100$ features with respect to information gain, we can classify cascades by deploying a machine learning classifier.\nIn this study, we use a Bayesian classifier to jointly utilize the selected features to classify cascades.\nNa\\\"ive Bayes is a popular probabilistic classifier that has been widely used in the text mining and bio-informatics literature, and is known to outperform more complex techniques in terms of classification accuracy \\cite{dataminingbook}.\nIt trains using two sets of probabilities: the prior, which represents the marginal probability $P(Y)$ of the class variable $Y$; and the a-priori conditional probabilities $P(X_i|Y)$ of the features $X_i$ given the class variable $Y$.\nAs previously explained, these probabilities can be computed from the training set.\n\n\nNow, for a given test instance of a cascade with observed features $X_i$, $i = 1,2,..., n$, the \\textit{a-posteriori} probability $P(Y|X^{(n)})$ can be computed for both classes $Y \\in \\{0,1\\}$, where $X^{(n)} = (X_1, X_2, ..., X_n)$ is the vector of observed features in the test cascade under consideration:\\\\\\\\\n\\begin{equation}\n\\vspace{0.1in}\nP(Y|X^{(n)}) = \\frac{P(X^{(n)},Y)}{P(X^{(n)})} = \\frac{P(X^{(n)}|Y)P(Y)}{P(X^{(n)})}\n\\vspace{0.1in}\n\\end{equation}\nThe na\\\"ive Bayes classifier then combines the a-posteriori probabilities by assuming conditional independence (hence the ``na\\\"ive'' term) among the features.\\\\\n\\begin{equation}\n\\vspace{0.1in}\nP({X^{(n)}}|Y) = \\prod_{i=1}^{n} P(X_i|Y).\n\\vspace{0.1in}\n\\end{equation}\nAlthough the independence assumption among features makes it feasible to evaluate the a-posteriori probabilities with much lower complexity, it is unlikely that this assumption truly holds all the time.\nFor our study, we mitigate the effect of the independence assumption by pre-processing the features using the well-known Karhunen-Loeve Transform (KLT) to uncorrelate them \\cite{dony01klt}.\n\n\nIn the following section, we provide details of the data set that we have collected to demonstrate the usefulness of our $M^4C$ model.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Related Work}\n\\label{sec: related work}\n\\postsec\nCascades in online social networks have attracted much attention and investigation; however, little work has been done on cascade morphology.\nBelow we summarize the prior work related to cascade morphology.\n\n\n\\subsection{Shape} Zhou \\etal studied Twitter posts (\\ie, tweets) about the Iranian election \\cite{zhou10iran}.\nIn particular, they studied the frequency of pre-defined shapes in cascades.\nTheir experimental results showed that cascades tend to have more width than depth.\nThe largest cascade observed in their data has a depth of seven hops.\nLeskovec \\etal studied patterns in the shapes and sizes of cascades in blog and recommendation networks \\cite{leskovec06pakdd,leskovec07cascadingbehavior}.\nTheir work is also limited to studying the frequency of fixed shapes in cascades.\n\n\\subsection{Structure} Kwak \\etal investigated the audience size, tree height, and temporal characteristics of the cascades in a Twitter data set \\cite{kwak10twitter}.\nTheir experimental results showed that the audience size of a cascade is independent of the number of neighbors of the source of that cascade.\nThey found that about $96\\%$ of the cascades in their data set have a height of $1$ hop and the height of the biggest cascade is $11$ hops.\nThey also found that about $10\\%$ of cascades continue to expand even after one month since their start.\nRomero \\etal specifically studied Twitter cascades with respect to hashtags in terms of degree distribution, clustering, and tie strengths \\cite{romero11crosstopics}.\nThe results of their experiments showed that cascades from diverse topics (identified using hashtags), such as sports, music, technology, and politics, have different characteristics.\nSimilarly, Rodrigues \\etal studied structure-related properties of Twitter cascades containing URLs \\cite{rodrigues11wordofmouth}.\nThey studied cascade properties like height, width, and the number of users for cascades containing URLs from different web domains.\nSadikov \\etal investigated the estimation of the sizes and depths of information cascades with missing data \\cite{sadikov11missing}.\nTheir estimation method uses multiple features including the number of nodes, the number of edges, the number of isolated nodes, the number of weakly connected components, node degree, and non-leaf node out-degree.\nTheir empirical evaluation using a Twitter data set showed that their method accurately estimates cascade properties for varying fractions of missing data.\n\n\n\\subsection{Simulation}\nGomez \\etal studied the structure of discussion cascades in Wikipedia, Slashdot, Barrapunto, and Meneame using features solely based on the depth and degree distribution of cascades \\cite{gomez11ht}.\nThey also developed a generative model based on the maximum likelihood estimation of preferential attachment process to simulate synthetic discussion cascades.\nHowever, their model does not capture morphological properties of cascades and is limited to generation of synthetic discussion cascades.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nQuite often in theoretical work, approximation schemes for some\nquantities converge rather slowly. Thus, there is a need for means to\naccelerate convergence or, equivalently, to extrapolate from few\nmembers of a sequence to its limit. Fortunately, the development of\nsuch methods has become a rather active field at the borderline between\nmathematics and the sciences in recent years. Brezinski and Redivo\nZaglia \\cite{BrezinskiRedivoZaglia91} have given an excellent\nmathematical introduction to such methods. There are many methods that\ncan be used to accelerate slowly convergent (or to sum divergent) power\nseries in terms of rational approximations, e.g., Pad\\'e approximants\n\\cite{Baker65,BakerGravesMorris81a,BakerGravesMorris81b} that are\nrelated to the famous epsilon algorithm \\cite{Wynn56a}, Levin-type\nmethods \\cite{Levin73,Weniger89,Homeier93,HomeierWeniger95}, and\niterative methods \\cite{Weniger91} like the recently developed\n$\\mathcal{J}$ transformation\n\\cite{Homeier94a,Homeier94d,Homeier94e}. There are also methods\nthat can be used to accelerate the convergence of Fourier\n\\cite{KieferWeiss81,Homeier92,Homeier93} and other orthogonal\nseries \\cite{Longman87,Homeier94b}. Onedimensional iteration\nsequences can be accelerated very effectively as is demonstrated\nin \\cite{Homeier95} for the case of the inverse Dyson equation.\nThere is also a growing literature on extrapolation of matrix\nand vector sequences (see \\cite{BrezinskiRedivoZaglia91} for an\nintroduction) that have found applications to the computation of\nmatrix functions \\cite{Homeier94c} and the iterative\nsolution of fixed-point equations \\cite{HomeierRastKrienke95}.\nThe full potential for application of these methods in the\nsciences has still to be explored.\n\nOne of the fields where these methods may be applied is\nMany-Body Perturbation Theory (MBPT), that is one of the standard methods to\nobtain the correlation energy in molecular {\\em ab initio}\ncalculations. The convergence acceleration of many-body perturbation\nseries has recently become a topic of increasing interest\n\\cite{SchmidtWarkenHandy93,DietzSchmidtWarkenHess92,%\nDietzSchmidtWarkenHess93a,DietzSchmidtWarkenHess93b,%\nDietzSchmidtWarkenHess93c,DietzSchmidtWarkenHess94a,%\nDietzSchmidtWarkenHess94b}, also in the context of time-dependent\nphenomena \\cite{DietzSchmidtWarken95}. Here, we restrict attention to\napproaches to correlation energy estimation that are based on the\nM{\\o}ller-Plesset (MP) series since the latter is commonly and\nroutinely used in quantum chemistry for closed-shell systems. For\nopen-shell systems, the restricted MP (RMP) method has been developed\n\\cite{KnowlesAndrewsAmosHandyPople91} that is based on an restricted\nopen-shell Hartree-Fock (ROHF) determination of the MP unperturbed\nHamiltonian. In this way, the RMP approach largely avoids spin\ncontaminations that are characteristic for unrestricted MP (UMP) based\non an unrestricted HF (UHF) zero-order calculation. For smaller\nmolecules, calculations up to fourth or even fifth order do not pose\nlarge problems, and MPn (n=2,4) calculations are a popular approach to\nthe correlation problem. However, the computational effort increases\nsteeply with the order of the perturbation series, and with the size of\nthe molecular system. Therefore, there is a need to make the best use\nof the lower-order terms since higher terms are difficult to obtain.\nOrder-by-order summation of the perturbation expansion as given by\n\\begin{equation}\\label{eq0}\nE = E_{0} + E_{1} + E_{2} + E_{3} + E_{4}\n+ E_{5} + \\dots\\>,\n\\end{equation}\ni.e., using the $n$th order estimate\n\\begin{equation}\\label{eq0a}\nE^{(n)} = \\sum_{j=0}^{n} E_j\\>,\n\\end{equation}\nis not the best way to exploit the information content\nof its terms. It has been shown by Schmidt, Warken and Handy\n\\cite{SchmidtWarkenHandy93} that a specific variant of a method\noriginally proposed by Goldhammer and Feenberg\n\\cite{GoldhammerFeenberg56,Feenberg56} for the Brillouin-Wigner\nperturbation expansion allows to obtain better estimates for the\ncorrelation energy than order-by-order summation of the usual MP series.\nThis variant was called the {\\em Feenberg series} in\n\\cite{SchmidtWarkenHandy93}. It is also a special case of the so-called\n{\\em Geometric Approximation}\n\\cite{Amos70,Bhattacharyya81,SchulmanMusher68,WilsonSilverFarrell77}.\nSimilar to the original approach of Goldhammer and Feenberg\n\\cite{GoldhammerFeenberg56,Feenberg56}, the computation of the Feenberg\nseries requires only the terms $E_{j}$ of the perturbation series.\n\n\n\nAlternatively, one may use Pad\\'e approximants\nthat provide rational approximations $[p,q]$ to power series,\nwhere $p$ denotes the order of the numerator polynomial, and $q$\nthat of the denominator polynomial. Pad\\'e approximants may be\ncalculated for the original perturbation series, and also for\nthe renormalized perturbation series. As shown by Wilson,\nSilver, and Farrell \\cite{WilsonSilverFarrell77}, the special\nPad\\'e approximants $[n+1,n]$ have the property that they are\ninvariant under the scaling of the unperturbed Hamilton operator\nand, thus, are identical for the original and the renormalized\ncase. This invariance is an important property of correlation\nenergy estimators since the true correlation energy is\nindependent of our choice of the unperturbed Hamiltonian.\n\nRecently, a method based on effective characteristic polynomials\nhas been applied to correlation energy computations of some model\nsystems\n\\cite{Bracken94,BrackenCizek94,TakahashiBrackenCizekPaldus95,%\nBrackenCizek95a,BrackenCizek95b,CizekBracken95,%\nDowningMichlCizekPaldus79}\nand for the summation of perturbation expansions of anharmonic\noscillators \\cite{CizekWenigerBrackenSpirko96}. We will see that\nresults based on low-order effective characteristic polynomials\nalso have the desirable invariance property under rescaling of\nthe unperturbed Hamiltonian.\n\nAll these methods require only the terms $E_{i}$ of the\nM{\\o}ller-Plesset perturbation series. The additional effort to\ncalculate them besides the usual perturbation series is very\nlow. As will be shown, these methods allow to obtain much better\nestimates of the correlation energy in many cases, and allow the\nidentification of cases where standard perturbation theory\nfails. In these cases, computationally more demanding\ncorrelation energy estimators have to be used \\cite{%\nSchmidtWarkenHandy93,DietzSchmidtWarkenHess92,%\nDietzSchmidtWarkenHess93a,DietzSchmidtWarkenHess93b,%\nDietzSchmidtWarkenHess93c,DietzSchmidtWarkenHess94a,%\nDietzSchmidtWarkenHess94b,%\nCizek69,Paldus74,%\nPaldus76,%\nKutzelnigg77,%\nShavitt77,%\nHoseKaldor79,%\nBartlett81,%\nJeziorskiMonkhorst81,%\nHoseKaldor82,%\nBartlettDykstraPaldus84,%\nWilson84,%\nHoffmanSchaefer86,%\nHose89,%\nPaldus88,%\nMukherjeePal89,%\nSzaboOstlund89,%\nFulde91,%\nKarwowski92,%\nMcWeeny92,%\nPaldus92,%\nRoos92,Roos94,%\nHandy94,%\nRychlewski94,%\nMeissner95,%\nSteinbornSilver96}.\n\n \\section{Methods}\nThe Goldhammer-Feenberg approach \\cite{GoldhammerFeenberg56,Feenberg56}\nrenormalizes the unperturbed Hamiltonian $H_0$ by a constant factor\naccording to\n\\begin{equation}\\label{eq1}\nH_0(\\alpha) = (1-\\alpha) H_0\\>.\n\\end{equation}\nThis leads to a repartitioning of the total Hamiltonian\n$H=H_0+H_1$ as\n\\begin{equation}\\label{eq0alpha}\nH = H_0(\\alpha) + H_1(\\alpha), \\quad H_1(\\alpha)= H_1 + \\alpha H_0\\>.\n\\end{equation}\nIt also leads to a renormalized perturbation series\n\\begin{equation}\\label{eq3}\nE(\\alpha) = E_{0}(\\alpha) + E_{1}(\\alpha) + E_{2}(\\alpha) +\nE_{3}(\\alpha) + E_{4}(\\alpha) + E_{5}(\\alpha) +\n\\dots\\>\n\\end{equation}\nwith partial sums --- i.e., renormalized $n$th order energy\nestimates --- given by\n\\begin{equation}\\label{eq3a}\nE^{(n)}(\\alpha) = \\sum_{j=0}^{n} E_j(\\alpha)\\>\n\\end{equation}\ndepending on renormalized $j$th order contributions \\cite[Eq.\n(12)]{Feenberg56}\n\\begin{equation}\\label{eq5}\n\\begin{array}{rl}\nE_{0}(\\alpha) =&\\displaystyle (1-\\alpha) E_{0}\\>, \\qquad\nE_{1}(\\alpha) =\\displaystyle E_{1} + \\alpha E_{0}\\>,\\\\\nE_{n}(\\alpha) =&\\displaystyle\n\\frac{1}{(1-\\alpha)^{n-1}} \\,\\sum_{j=2}^{n}\n{{n-2}\\choose{j-2}}(-\\alpha)^{n-j} \\, E_{j}\\>,\\qquad (n\\ge 2)\\>.\n\\end{array}\n\\end{equation}\nFor the Feenberg series, the factor $\\alpha$ is determined by\nrequiring that the third order energy $E^{(3)}(\\alpha)$\nof the renormalized perturbation expansion is stationary with respect\nto variations of the factor $\\alpha$. This leads to an optimized\nvalue based on the third order result given by\n$\\alpha^{(3)}=E_{3}\/E_{2}$.\nIn this way, the partitioning of the Hamiltonian is fixed, and the\nFeenberg series is obtained as the usual Rayleigh-Schr\\\"odinger series\nfor the unperturbed Hamilton operator $H_0(\\alpha^{(3)})$.\nThe total energies are\n\\begin{equation}\\label{eqFn}\nF_n = E^{(n)}(\\alpha^{(3)}) = E^{(n)}(E_{3}\/E_{2})\\>.\n\\end{equation}\n\nThe stationarity of the eigenvalue is based on the observation\nthat the exact value of the energy, i.e., the infinite order\nresult should be independent of the value of $\\alpha $ that is\nused. When applying this to an approximation obtained in some\nfinite order, that value of $\\alpha$ is best where the\nderivative of the approximation is as small as possible in\nabsolute value, preferably zero. We remark that this is related\nto the concept of order-dependent mappings as discussed in\n\\cite[Sec. 18]{ArtecaFernandezCastro90}. Since order-by-order\nsummation of the $\\alpha$ dependent Rayleigh-Schr\\\"odinger\nexpansion leads to the $n$th order estimate $E^{(n)}(\\alpha)$\ndefined in Eq.~(\\ref{eq3a}), the optimal value $\\alpha^{(n)}$ of\n$\\alpha$ in $n$th order is determined from the equation $(n>\n1)$\n\\begin{equation}\\label{eq8}\n0=\\frac{d E^{(n)}} {d \\alpha} (\\alpha^{(n)})\\>, \\qquad \\frac{d\nE^{(n)}} {d \\alpha} (\\alpha)\n = (n-1)\\, E_n(\\alpha) \/ (1-\\alpha) \\>.\n\\end{equation}\nThe second equality here follows from an explicit calculation.\nA solution of this equation leads to an approximation\n\\begin{equation}\\label{eqGFn}\nGF_n=E^{(n)}(\\alpha^{(n)})\n\\end{equation}\nfor the total energy. Thus, in each order of the\nrenormalized perturbation series, different values of $\\alpha$\nare\nchosen. This approach has been proposed already by Feenberg. We will\ncall its results the total {\\em Goldhammer-Feenberg\nenergies}\nin order to distinguish it from the Feenberg total energies. Obviously,\nthere can be several solutions of Eq. (\\ref{eq8}), and the\nGoldhammer-Feenberg energies are not guaranteed to be real.\n\nIn the case\nof fifth order, the condition (\\ref{eq8}) reduces in combination\nwith Eq.~(\\ref{eq5}) to requiring that\n$\\alpha^{(5)}$ is a root of the third order polynomial\n$(1-\\alpha)^4\\,E_5(\\alpha)$. The latter has\nreal coefficients and, thus, is guaranteed to have a real\nsolution $\\alpha^{(5)}_r$. The corresponding value $E^{(5)}(\\alpha^{(5)}_r)$ will be called\nGF5 later. Alternatively, one can use the average of the two (in\nthe present case always) complex energies obtained from the\nother roots of the third order polynomial. This average will be\ncalled GF5b later.\n\nAs is well-known (see for instance \\cite{Kutzelnigg77,Hose89}),\nRayleigh-Schr\\\"odinger MBPT is size-extensive order by order,\ni.e., for a super-molecule build up from $N$ non-interacting\nidentical systems, the perturbation energies are linear in $N$\nin each order. Thus, if $E_j$ is the $j$th term of the\nperturbation series of one of the $N$ subsystems, the $j$th\norder term of the perturbation series for the super-molecule is\n$N\\,E_j$.\n\nIn the case of the Feenberg scaling, we note that Eq.\\\n(\\ref{eq5}) implies that for $E_j\\to N\\,E_j$, we also have\n$E_j(\\alpha)\\to N\\,E_j(\\alpha)$. Thus, for any $\\alpha$ that\nis independent of $N$, also\nthe renormalized perturbation series is size-extensive in each\norder. Since\n$\\alpha^{(3)}=E_3\/E_2$ is invariant under $E_j\\to N\\,E_j$, all Feenberg\nenergies $F_n$ are size-extensive as a consequence of Eq.\\\n(\\ref{eqFn}).\n\nThe Goldhammer-Feenberg energies\n$GF_n$ for $n>1$ are also size-extensive. To prove this, we note that\nunder $E_n\\to N\\,E_n$, we have $d E_n\/d \\alpha\\to N\\,d\nE_n\/d\\alpha$. This follows from the last equality in Eq.\\ (\\ref{eq8}), since\n$E_n(\\alpha)\\to N\\,E_n(\\alpha)$ under $E_n\\to N\\,E_n$.\nThis implies that the positions of the zeros of $d E_n\/d \\alpha$,\nand hence the positions $\\alpha^{(n)}$ of the extrema of\n$E_n(\\alpha)$ are invariant under $E_n\\to N\\,E_n$. Since the\n$\\alpha^{(n)}$ are used to define the Goldhammer-Feenberg energies, the\nlatter are size-extensive. In particular, this applies to GF5\nand GF5b.\n\n\n Now, we sketch the method of the effective characteristic polynomial\n that has recently been applied to the summation of divergent\n perturbation series \\cite{CizekWenigerBrackenSpirko96}. In the linear\n variation method with $n$ orthonormal basis functions\n $\\{\\phi_j\\}_{j=1}^{n}$ applied to a Hamiltonian $H$, the\n characteristic polynomial $P_n(E)$ of degree $n$ in the unknown energy\n $E$ has the form\n\\begin{equation}\\label{eqC1}\nP_n(E)={\\rm det}\\,\\left\\vert \\left\\langle \\phi_j \\vert H \\vert\n\\phi_k \\right\\rangle -E\\,\\delta_{j,k}\\right\\vert \\>.\n\\end{equation}\n If $H=H_0+\\beta V$, the polynomial has the form\n (\\cite{CizekWenigerBrackenSpirko96}, Eq. (3.2))\n\\begin{equation}\\label{eqC2}\nP_n(E) = \\sum_{j=0}^{n} E^j \\,\\sum_{k=0}^{n-j} f_{n,j,k} \\beta^k\n\\end{equation}\n with $f_{n,n,0}=1$. Thus, $N=n(n+3)\/2$ coefficients $f_{n,j,k}$ have\n to be determined. They could be obtained\n from the matrix elements of $H_0$ and $V$. In the method of the\n characteristic polynomial, they are obtained from the coefficients of\n the perturbation series for $E$\n\\begin{equation}\\label{eqC3}\nE = \\sum_{j=0}^{\\infty} E_j \\, \\beta^j\\>.\n\\end{equation}\n For this end, one uses (\\ref{eqC3}) in (\\ref{eqC2}) and does a Taylor\n expansion in $\\beta$ with the result\n\\begin{equation}\\label{eqC4}\nP_n\\left(\\sum_{j=0}^{\\infty} E_j \\, \\beta^j\\right) = \\sum_{k=0}^{N-1}\nA_k \\beta^k + O\\left(\\beta^{N}\\right)\\>.\n\\end{equation}\nThe $A_k$ depend on the $f_{n,j,k}$. Since $P_n(E)=0$ for an eigenvalue\n$E$, one demands\n\\begin{equation}\\label{eqC5}\nA_k = 0\\>, \\qquad 0\\le k \\le N-1\\>.\n\\end{equation}\nThis yields a linear equation system for the unknown\n$f_{n,j,k}$, and thus, these coefficients can be determined.\nAfter the determination, the effective characteristic equation\n$P_n(E)=0$ is\nsolved for $E$. If only perturbation coefficients $E_j$ up to $j=5$ are\navailable, only a second degree effective characteristic\npolynomial can be used. In our case, one finally puts $\\beta=1$. In this way,\none obtains an explicit solution of $P_2(E)=0$ as\n\\begin{equation}\\label{eqP2}\n \\Pi_2 = E_0 + E_1 +\n {\\displaystyle \\frac {E_2^2}{2}}\n \\,\\frac{\\displaystyle\n E_2 -E_3 +\n \\sqrt {(E_2- E_3)^2 - 4 \\,(E_2\\,E_4- E_3^2)}\n }\n {\\displaystyle E_2\\,E_4 - E_3^2\n }\n\\end{equation}\nA further solution (with a minus sign of the square root) only\nyields the correct result for small $\\beta$ if $E_2>0$\nholds which does not occur in perturbation theory calculations of\nground states.\n\nDirect calculation shows that the estimate $\\Pi_2$ is\nindependent under a scaling of $H_0$, i.e., we have\n\\begin{equation}\\label{eqP2a}\n\\Pi_2(E_0,\\dots,E_4) = \\Pi_2(E_0(\\alpha),\\dots,E_4(\\alpha))\\>.\n\\end{equation}\nSince the true characteristic polynomials --- depending only on\nthe total Hamiltonian --- are invariant under Feenberg scaling,\nit may be conjectured that this invariance also holds for\nestimates obtained as roots of effective characteristic\npolynomials of higher degree. A proof of this conjecture is\nunder investigation.\n\nWe denote ${\\Pi}_{2}$ also as estimate $\\Pi$2 for the total\nenergy in the following.\n\nIt is easy to see from Eq.\\ (\\ref{eqP2}) that $\\Pi_2\\to\nN\\,\\Pi_2$ if $E_j\\to N\\,E_j$ for all $j$ with $0\\le j\\le 4$. Thus, the\n$\\Pi_2$ estimator is size-extensive.\n\n\n\n\nPad\\'e approximants\n\\cite{Baker65,BakerGravesMorris81a,BakerGravesMorris81b} are defined\nwith respect to a given power series as ratios of two polynomials.\nGiven numerator and denominator polynomial degrees $p$ and $q$, the\ncoefficients of these polynomials in the Pad\\'e approximant $[p,q]$ are\ndetermined by requiring that up to the order $p+q$, the coefficients in\nthe Taylor expansion of the ratio of polynomials are equal to the\ncoefficients of the given power series. In the present contribution, we\ntake as this power series the perturbation expansion (\\ref{eqC3}) in\nthe parameter $\\beta$ that is put equal to one in the final formulas.\nWe note that a different power series that is not explicitly defined,\nseems to have been used for the Pad\\'e approximants in\n\\cite{KucharskiNogaBartlett89}. For the application of rational\napproximants to the M{\\o}ller-Plesset series see also Ref.\\\n\\cite{HandyKnowlesSomasundram85}.\n\n\n \\section{Numerical Results}\nFortunately, excellent data for the test of the methods\ndescribed in the previous section are available in\n\\cite{SchmidtWarkenHandy93}. This paper also includes results given in\n\\cite{KucharskiNogaBartlett89}. In these references, a large\nnumber of M{\\o}ller-Plesset results up to fifth order, and FCI\n(Full Configuration Interaction) or CCSDT (Coupled Cluster\nSingles Doubles Triples) results are given for the ground states\nof benchmark molecules (BH, HF, CH${}_2$, H${}_2$O, NH${}_2$,\nNH${}_3$, CO, C${}_2$H${}_2$, O${}_3$, CN). The results of the\nreanalysis of these data is presented in Table \\ref{tab1}. For\ncompleteness, the MP data are also plotted. If not stated\notherwise, MPn means RMPn in open shell cases. Apart from case\nn (NH${}_3$), the left half of the data in Table \\ref{tab1} is obtained\nfrom the data up to fourth order, while the right half also\ndepends on the fifth order.\n\nIt is seen that in many cases, the correlation energy estimators\nprovide excellent results. Problematic cases are s, t, and u. In case\ns corresponding to CN, the perturbation series is divergent, being based on doubly occupied\nROHF orbitals where for alpha and beta spins the same orbitals are\nused, unlike the RMP orbitals where occupied alpha and beta set both\nare rotated. \\cite{SchmidtWarkenHandy93,Handy94} In cases t and\nu corresponding to H${}_2$O at stretched geometries, the\napproach is based on an UMP series that is monotonously and very slowly\nconvergent \\cite{SchmidtWarkenHandy93,HandyKnowlesSomasundram85}.\n\n\n\\setlongtables\n\\setlength{\\LTleft}{0pt}\n\\setlength{\\LTright}{0pt}\n\\begin{longtable}{l@{\\extracolsep{\\fill}}rr|lrr}\n\\caption{Comparison of Correlation Energy Estimators}\\label{tab1} \\\\\n Method & \\multicolumn{1}{c}{Energy} & \\%Corr\n& Method & \\multicolumn{1}{c}{Energy} &\n\\%Corr \\\\\n\\hline\n\\endfirsthead\n\\multicolumn{6}{l}{(Table \\ref{tab1} -- continued)}\n\\endhead\n\\multicolumn{6}{c}{Case a: BH (${}^1\\Sigma$, $r=2.329\\,a_0$, DZP,\n\\cite{KucharskiNogaBartlett89,HarrisonHandy83,%\nBartlettSekinoPurvis83})} \\\\\n SCF & -25.125260 & 0.00 & MP5 & -25.225101 & 97.53 \\\\\n MP2 & -25.198988 & 72.02 & F5 & -25.226881 & 99.27 \\\\\n MP3 & -25.216566 & 89.19 & GF5 & -25.226971 & 99.36 \\\\\n MP4 & -25.222567 & 95.06 & GF5b & -25.227088 & 99.47 \\\\\n F4 & -25.226167 & 98.57 & [3,2] & -25.227299 & 99.68 \\\\\n $[2,2]$& -25.225294 & 97.72 & [2,3] & -25.227478 & 99.85 \\\\\n $\\Pi$2 & -25.226555 & 98.95 & FCI & -25.227627 & 100.00 \\\\\n\\hline\n\\multicolumn{6}{c}{Case b: BH (${}^1\\Sigma$, $r=1.5 \\times 2.329\\,a_0$, DZP,\n\\cite{KucharskiNogaBartlett89,NogaBartlett87})} \\\\\n SCF & -25.062213 & 0.00 & MP5 & -25.172372 & 96.83 \\\\\n MP2 & -25.139869 & 68.26 & F5 & -25.174484 & 98.69 \\\\\n MP3 & -25.160249 & 86.18 & GF5 & -25.174544 & 98.74 \\\\\n MP4 & -25.168745 & 93.64 & GF5b & -25.177010 & 100.91 \\\\\n F4 & -25.175345 & 99.45 & [3,2] & -25.175078 & 99.21 \\\\\n $[2,2]$& -25.173623 & 97.93 & [2,3] & -25.175106 & 99.24 \\\\\n $\\Pi$2 & -25.176791 & 100.72 & FCI & -25.175976 & 100.00 \\\\\n\\hline\n\\multicolumn{6}{c}{Case c: BH (${}^1\\Sigma$, $r=2 \\times 2.329\\,a_0$, DZP,\n\\cite{KucharskiNogaBartlett89,NogaBartlett87})} \\\\\n SCF & -24.988201 & 0.00 & MP5 & -25.121278 & 95.65 \\\\\n MP2 & -25.074503 & 62.03 & F5 & -25.126844 & 99.65 \\\\\n MP3 & -25.100221 & 80.51 & GF5 & -25.126983 & 99.75 \\\\\n MP4 & -25.114005 & 90.42 & GF5b & -25.130104 & 101.99 \\\\\n F4 & -25.128829 & 101.08 & [3,2] & -25.129407 & 101.49 \\\\\n $[2,2]$& -25.124953 & 98.29 & [2,3] & -25.129475 & 101.54 \\\\\n $\\Pi$2 & -25.137084 & 107.01 & FCI & -25.127333 & 100.00 \\\\\n\\hline\n\\multicolumn{6}{c}{Case d: HF ($r=1.733 \\,a_0$, DZP,\n\\cite{KucharskiNogaBartlett89,BauschlicherLanghoffTaylorHandyKnowles86})} \\\\\n SCF & -100.047087 & 0.00 & MP5 & -100.250158 & 99.60 \\\\\n MP2 & -100.243165 & 96.17 & F5 & -100.250099 & 99.57 \\\\\n MP3 & -100.245531 & 97.33 & GF5 & -100.250276 & 99.66 \\\\\n MP4 & -100.251232 & 100.13 & GF5b & -100.251988 & 100.50 \\\\\n F4 & -100.251443 & 100.23 & [3,2] & -100.250468 & 99.75 \\\\\n $[2,2]$& -100.251547 & 100.28 & [2,3] & -100.250481 & 99.76 \\\\\n $\\Pi$2 & -100.251820 & 100.42 & FCI & -100.250969 & 100.00 \\\\\n\\hline\n\\multicolumn{6}{c}{Case e: HF ($r=1.5 \\times 1.733 \\,a_0$, DZP,\n\\cite{KucharskiNogaBartlett89,BauschlicherLanghoffTaylorHandyKnowles86})} \\\\\n SCF & -99.933230 & 0.00 & MP5 & -100.158121 & 99.00 \\\\\n MP2 & -100.149756 & 95.32 & F5 & -100.158152 & 99.01 \\\\\n MP3 & -100.148543 & 94.78 & GF5 & -100.158247 & 99.05 \\\\\n MP4 & -100.159627 & 99.66 & GF5b & -100.161609 & 100.53 \\\\\n F4 & -100.159443 & 99.58 & [3,2] & -100.158750 & 99.28 \\\\\n $[2,2]$& -100.160091 & 99.87 & [2,3] & -100.158757 & 99.28 \\\\\n $\\Pi$2 & -100.160708 & 100.14 & FCI & -100.160395 & 100.00 \\\\\n\\hline\n\\multicolumn{6}{c}{Case f: HF ($r=2 \\times 1.733 \\,a_0$, DZP,\n\\cite{KucharskiNogaBartlett89,BauschlicherLanghoffTaylorHandyKnowles86})} \\\\\n SCF & -99.817571 & 0.00 & MP5 & -100.073004 & 96.93 \\\\\n MP2 & -100.057062 & 90.88 & F5 & -100.073139 & 96.98 \\\\\n MP3 & -100.054148 & 89.77 & GF5 & -100.073301 & 97.04 \\\\\n MP4 & -100.076267 & 98.16 & GF5b & -100.079678 & 99.46 \\\\\n F4 & -100.075480 & 97.86 & [3,2] & -100.075064 & 97.71 \\\\\n $[2,2]$& -100.077899 & 98.78 & [2,3] & -100.075072 & 97.71 \\\\\n $\\Pi$2 & -100.080476 & 99.76 & FCI & -100.081107 & 100.00 \\\\\n\\hline\n\\multicolumn{6}{c}{Case g: CH${}_2$ (${}^1A_1$, $r=2.11 \\,a_0$,\n$\\theta=102.4\\,{}^{\\circ}$, DZP,\n\\cite{KucharskiNogaBartlett89,BauschlicherTaylor86b})} \\\\\n SCF & -38.886297 & 0.00 & MP5 & -39.024234 & 97.91 \\\\\n MP2 & -38.996127 & 77.96 & F5 & -39.025336 & 98.69 \\\\\n MP3 & -39.016593 & 92.48 & GF5 & -39.025450 & 98.77 \\\\\n MP4 & -39.022203 & 96.47 & GF5b & -39.025413 & 98.74 \\\\\n F4 & -39.024615 & 98.18 & [3,2] & -39.025674 & 98.93 \\\\\n $[2,2]$& -39.024049 & 97.78 & [2,3] & -39.025895 & 99.09 \\\\\n $\\Pi$2 & -39.024791 & 98.30 & FCI & -39.027183 & 100.00 \\\\\n\\hline\n\\multicolumn{6}{c}{Case h: H${}_2$O (${}^1A_1$, $r=1.88973 \\,a_0$,\n$\\theta=104.5\\,{}^{\\circ}$, DZP,\n\\cite{KucharskiNogaBartlett89,BauschlicherTaylor86a})} \\\\\n SCF & -76.040542 & 0.00 & MP5 & -76.255924 & 99.68 \\\\\n MP2 & -76.243660 & 94.00 & F5 & -76.255918 & 99.67 \\\\\n MP3 & -76.249403 & 96.66 & GF5 & -76.255929 & 99.68 \\\\\n MP4 & -76.255706 & 99.58 & GF5b & -76.257338 & 100.33 \\\\\n F4 & -76.256262 & 99.83 & [3,2] & -76.256134 & 99.77 \\\\\n $[2,2]$& -76.256282 & 99.84 & [2,3] & -76.256135 & 99.77 \\\\\n $\\Pi$2 & -76.256729 & 100.05 & FCI & -76.256624 & 100.00 \\\\\n\\hline\n\\multicolumn{6}{c}{Case i: H${}_2$O (${}^1A_1$, $r=1.5\\times 1.88973 \\,a_0$,\n$\\theta=104.5\\,{}^{\\circ}$, DZP,\n\\cite{KucharskiNogaBartlett89,BauschlicherTaylor86a})} \\\\\n SCF & -75.800494 & 0.00 & MP5 & -76.066422 & 98.16 \\\\\n MP2 & -76.048095 & 91.40 & F5 & -76.066368 & 98.14 \\\\\n MP3 & -76.045081 & 90.28 & GF5 & -76.066442 & 98.17 \\\\\n MP4 & -76.065641 & 97.87 & GF5b & -76.068395 & 98.89 \\\\\n F4 & -76.064909 & 97.60 & [3,2] & -76.068528 & 98.94 \\\\\n $[2,2]$& -76.066937 & 98.35 & [2,3] & -76.068533 & 98.94 \\\\\n $\\Pi$2 & -76.068954 & 99.10 & FCI & -76.071405 & 100.00 \\\\\n\\hline\n\\multicolumn{6}{c}{Case j: H${}_2$O (${}^1A_1$, $r=2\\times 1.88973 \\,a_0$,\n$\\theta=104.5\\,{}^{\\circ}$, DZP,\n\\cite{KucharskiNogaBartlett89,BauschlicherTaylor86a})} \\\\\n SCF & -75.582286 & 0.00 & MP5 & -75.935304 & 95.41 \\\\\n MP2 & -75.898603 & 85.50 & F5 & -75.934525 & 95.20 \\\\\n MP3 & -75.877664 & 79.84 & GF5 & -75.935353 & 95.43 \\\\\n MP4 & -75.937410 & 95.98 & GF5b & -75.923566 & 92.24 \\\\\n F4 & -75.927115 & 93.20 & [3,2] & -75.949379 & 99.22 \\\\\n $[2,2]$& -75.941045 & 96.97 & [2,3] & -75.949401 & 99.22 \\\\\n $\\Pi$2 & -75.954930 & 100.72 & FCI & -75.952269 & 100.00 \\\\\n\\hline\n\\multicolumn{6}{c}{Case k: NH${}_2$ (${}^2B_1$, $r=1.013 \\,\\mbox{\\AA}$,\n$\\theta=103.2\\,{}^{\\circ}$, 6-31G,\n\\cite{HandyKnowlesSomasundram85,KnowlesAndrewsAmosHandyPople91})} \\\\\n SCF & -55.530177 & 0.00 & MP5 & -55.632426 & 99.18 \\\\\n MP2 & -55.617272 & 84.48 & F5 & -55.632818 & 99.56 \\\\\n MP3 & -55.627501 & 94.40 & GF5 & -55.632834 & 99.57 \\\\\n MP4 & -55.631220 & 98.01 & GF5b & -55.633280 & 100.00 \\\\\n F4 & -55.632525 & 99.27 & [3,2] & -55.633011 & 99.74 \\\\\n $[2,2]$& -55.632204 & 98.96 & [2,3] & -55.633022 & 99.75 \\\\\n $\\Pi$2 & -55.632825 & 99.56 & FCI & -55.633276 & 100.00 \\\\\n\\hline\n\\multicolumn{6}{c}{Case l: NH${}_2$ (${}^2B_1$, $r=1.5\\times 1.013 \\,\\mbox{\\AA}$,\n$\\theta=103.2\\,{}^{\\circ}$, 6-31G,\n\\cite{HandyKnowlesSomasundram85,KnowlesAndrewsAmosHandyPople91})} \\\\\n SCF & -55.367729 & 0.00 & MP5 & -55.520522 & 96.14 \\\\\n MP2 & -55.489967 & 76.91 & F5 & -55.521721 & 96.89 \\\\\n MP3 & -55.504270 & 85.91 & GF5 & -55.521724 & 96.90 \\\\\n MP4 & -55.516470 & 93.59 & GF5b & -55.523319 & 97.90 \\\\\n F4 & -55.521456 & 96.73 & [3,2] & -55.523696 & 98.14 \\\\\n $[2,2]$& -55.521125 & 96.52 & [2,3] & -55.523706 & 98.14 \\\\\n $\\Pi$2 & -55.526202 & 99.71 & FCI & -55.526658 & 100.00 \\\\\n\\hline\n\\multicolumn{6}{c}{Case m: NH${}_2$ (${}^2B_1$, $r=2\\times 1.013 \\,\\mbox{\\AA}$,\n$\\theta=103.2\\,{}^{\\circ}$, 6-31G,\n\\cite{HandyKnowlesSomasundram85,KnowlesAndrewsAmosHandyPople91})} \\\\\n SCF & -55.181593 & 0.00 & MP5 & -55.418215 & 91.36 \\\\\n MP2 & -55.357617 & 67.96 & F5 & -55.420149 & 92.11 \\\\\n MP3 & -55.375463 & 74.85 & GF5 & -55.420173 & 92.12 \\\\\n MP4 & -55.409165 & 87.87 & GF5b & -55.412429 & 89.13 \\\\\n F4 & -55.421427 & 92.60 & [3,2] & -55.432093 & 96.72 \\\\\n $[2,2]$& -55.426946 & 94.73 & [2,3] & -55.432101 & 96.72 \\\\\n $\\Pi$2 & -55.478348 & 114.58 & FCI & -55.440593 & 100.00 \\\\\n\\hline\n\\multicolumn{6}{c}{Case n: NH${}_3$ ($r=1.91165 \\,a_0$,\n$\\theta=106.7\\,{}^{\\circ}$, DZ,\n\\cite{HarrisonHandy83,BartlettSekinoPurvis83})} \\\\\n SCF & -56.165931 & 0.00 & F4 & -56.291937 & 99.47 \\\\\n MP2 & -56.277352 & 87.95 & $[2,2]$& -56.291782 & 99.35 \\\\\n MP3 & -56.285281 & 94.21 & $\\Pi$2 & -56.292636 & 100.02 \\\\\n MP4 & -56.290692 & 98.48 & FCI & -56.292612 & 100.00 \\\\\n\\hline\n\\multicolumn{6}{c}{Case o: CO (${}^1\\Sigma$,\nDZ,\n\\cite{KucharskiNogaBartlett89})} \\\\\n SCF & -112.760093 & 0.00 & MP5 & -113.059117 & 98.36 \\\\\n MP2 & -113.045824 & 93.99 & F5 & -113.059254 & 98.41 \\\\\n MP3 & -113.044659 & 93.61 & GF5 & -113.060859 & 98.93 \\\\\n MP4 & -113.067749 & 101.20 & GF5b & -113.073579 & 103.12 \\\\\n F4 & -113.067469 & 101.11 & [3,2] & -113.062479 & 99.47 \\\\\n $[2,2]$& -113.069566 & 101.80 & [2,3] & -113.062539 & 99.49 \\\\\n $\\Pi$2 & -113.072074 & 102.62 & CCSDT & -113.064100 & 100.00 \\\\\n\\hline\n\\multicolumn{6}{c}{Case p: C${}_2$H${}_2$ (${}^1\\Sigma_g$,\nDZP,\n\\cite{KucharskiNogaBartlett89})} \\\\\n SCF & -76.831819 & 0.00 & MP5 & -77.118892 & 102.18 \\\\\n MP2 & -77.085307 & 90.23 & F5 & -77.120192 & 102.65 \\\\\n MP3 & -77.097232 & 94.47 & GF5 & -77.122141 & 103.34 \\\\\n MP4 & -77.111732 & 99.63 & GF5b & -77.117205 & 101.58 \\\\\n F4 & -77.113928 & 100.42 & [3,2] & -77.127079 & 105.10 \\\\\n $[2,2]$& -77.114110 & 100.48 & [2,3] & -77.127731 & 105.33 \\\\\n $\\Pi$2 & -77.116235 & 101.24 & CCSDT & -77.112760 & 100.00 \\\\\n\\hline\n\\multicolumn{6}{c}{Case q: O${}_3$ (${}^1A_1$,\nDZP,\n\\cite{KucharskiNogaBartlett89})} \\\\\n SCF & -224.295920 & 0.00 & MP5 & -224.929902 & 97.54 \\\\\n MP2 & -224.931924 & 97.86 & F5 & -224.933812 & 98.15 \\\\\n MP3 & -224.888104 & 91.11 & GF5 & -224.934513 & 98.25 \\\\\n MP4 & -224.952784 & 101.07 & GF5b & -224.952167 & 100.97 \\\\\n F4 & -224.941418 & 99.32 & [3,2] & -224.938301 & 98.84 \\\\\n $[2,2]$& -224.950280 & 100.68 & [2,3] & -224.938367 & 98.85 \\\\\n $\\Pi$2 & -224.952387 & 101.00 & CCSDT & -224.945859 & 100.00 \\\\\n\\hline\n\\multicolumn{6}{c}{Case r: CN (${}^2\\Sigma$, $r=1.1619\\, \\,\\mbox{\\AA}$,\nSTO-3G, RMP\n\\cite{KnowlesAndrewsAmosHandyPople91})} \\\\\n SCF & -90.99752 & 0.00 & MP5 & -91.16157 & 95.07 \\\\\n MP2 & -91.15437 & 90.90 & F5 & -91.16165 & 95.12 \\\\\n MP3 & -91.14799 & 87.20 & GF5 & -91.16166 & 95.12 \\\\\n MP4 & -91.16300 & 95.90 & GF5b & -91.16360 & 96.24 \\\\\n F4 & -91.16133 & 94.93 & [3,2] & -91.16297 & 95.88 \\\\\n $[2,2]$& -91.16321 & 96.02 & [2,3] & -91.16297 & 95.88 \\\\\n $\\Pi$2 & -91.16426 & 96.63 & FCI & -91.17008 & 100.00 \\\\\n\\hline\n\\multicolumn{6}{c}{Case s: CN (${}^2\\Sigma$, $r=1.1619\\, \\,\\mbox{\\AA}$,\nSTO-3G, Hubac-Carsky,\n\\cite{Handy94,HubacCarsky80})} \\\\\n SCF & -90.99752 & 0.00 & MP5 & -91.12039 & 71.20 \\\\\n MP2 & -91.17762 & 104.37 & F5 & -91.15212 & 89.59 \\\\\n MP3 & -91.14160 & 83.50 & GF5 & -91.15998 & 94.15 \\\\\n MP4 & -91.19422 & 113.99 & GF5b & -91.18190 & 106.85 \\\\\n F4 & -91.17389 & 102.21 & [3,2] & -91.16350 & 96.19 \\\\\n $[2,2]$& -91.18753 & 110.11 & [2,3] & -91.16359 & 96.24 \\\\\n $\\Pi$2 & -91.19152 & 112.42 & FCI & -91.17008 & 100.00 \\\\\n\\hline\n\\multicolumn{6}{c}{Case t: H${}_2$O ($r=1.5 \\times 0.967\\, \\,\\mbox{\\AA}$,\n$\\theta=107.6\\,{}^{\\circ}$,\n6-21G,\\cite{HandyKnowlesSomasundram85})} \\\\\n RHF & -75.707206 & 0.00 & UMP5 & -75.853895 & 76.41 \\\\\n UHF & -75.735012 & 14.48 & F5 & -75.855560 & 77.28 \\\\\n UMP2 & -75.829388 & 63.65 & GF5 & -75.856608 & 77.82 \\\\\n UMP3 & -75.836823 & 67.52 & GF5b & -75.850870 & 74.84 \\\\\n UMP4 & -75.848211 & 73.45 & [3,2] & -75.862349 & 80.81 \\\\\n F4 & -75.851276 & 75.05 & [2,3] & -75.862421 & 80.85 \\\\\n $[2,2]$& -75.851994 & 75.42 & FCI & -75.899180 &100.00 \\\\\n $\\Pi$2 & -75.857074 & 78.07 & & & \\\\\n\\hline\n\\multicolumn{6}{c}{Case u: H${}_2$O ($r=2 \\times 0.967\\, \\,\\mbox{\\AA}$,\n$\\theta=107.6\\,{}^{\\circ}$,\n6-21G,\\cite{HandyKnowlesSomasundram85})} \\\\\n RHF & -75.491406 & 0.00 & UMP5 & -75.763370 & 90.72 \\\\\n UHF & -75.699298 & 69.35 & F5 & -75.763704 & 90.83 \\\\\n UMP2 & -75.754669 & 87.82 & GF5 & -75.763826 & 90.88 \\\\\n UMP3 & -75.760219 & 89.67 & GF5b & -75.763657 & 90.82 \\\\\n UMP4 & -75.762422 & 90.41 & [3,2] & -75.764089 & 90.96 \\\\\n F4 & -75.763098 & 90.63 & [2,3] & -75.764104 & 90.97 \\\\\n $[2,2]$& -75.762941 & 90.58 & FCI & -75.791180 &100.00 \\\\\n $\\Pi$2 & -75.763281 & 90.69 & & & \\\\\n\\hline\n\\hline\n\\end{longtable}\n\n\nApart\nfrom these problematic cases, it is seen that in case\nm corresponding to NH${}_2$ at twice the equilibrium distances, the errors are\nrather high. Excluding this case also, one may study the performance of\nthe correlation energy estimators statistically as shown in Table\n\\ref{tab1a}. Plotted are the maximal error, the mean absolute error,\nthe root mean square (rms) absolute error, and the mean percentage of\nthe correlation energy as obtained with the various methods. In\ncases o, p, and q corresponding to the molecules CO,\nC${}_2$H${}_2$, O${}_3$, respectively, no FCI result is\navailable. The statistical comparison is done\nonce excluding these cases, and once including these cases where as\nreference for the error calculation the CCSDT result is taken.\nFor these cases, the given correlation energies\nshould thus be taken with care. Carefully designed fourth order methods like\n$\\Pi$2 yield correlation energy estimates that can compete with fifth\norder results. As regards the fifth order methods, it seems that the\nGoldhammer-Feenberg estimator GF5 is slightly superior to the Feenberg\nenergy F5, and the somewhat {\\em ad hoc} estimator GF5b performs\nsurprisingly well. Among the Pad\\'e approximants, the $[3,2]$ approximant (that\nis invariant under the Feenberg scaling) is a rather successful\ncorrelation estimator while the $[2,3]$ approximant performs very\nsimilarly. Other Pad\\'e approximants (not displayed in Table\n\\ref{tab1}) do not perform as well as the ones given in this table when\napplied to the same data.\n\n\\begin{table}\n\\caption{Statistical comparison of various correlation energy\nestimators}\\label{tab1a}\n\\begin{tabular*}{\\linewidth}{@{}l@{\\extracolsep{\\fill}}rrrr@{}}\n\\hline\nMethod & max $\\vert error\\vert$ & mean $\\vert error\\vert$ & rms\n$\\vert error\\vert$ & mean $\\%$Corr\\\\\n\\hline\n\\multicolumn{5}{@{}c@{}}{Sampling 14 cases (a-l,n,r)}\\\\\nF4 & 0.02515 & 0.00433 & 0.00767 & 98.3\\\\\n$[2,2]$ & 0.01122 & 0.00319 & 0.00433 & 98.3\\\\\n$\\Pi$2 & 0.00975 & 0.00199 & 0.00329 & 100.1\\\\\n\\hline\n\\multicolumn{5}{@{}c@{}}{Sampling 17 cases (a-l,n-r)}\\\\\nF4 & 0.02515 & 0.00409 & 0.00710 & 98.6\\\\\n$[2,2]$ & 0.01122 & 0.00329 & 0.00430 & 98.8\\\\\n$\\Pi$2 & 0.00975 & 0.00269 & 0.00398 & 100.3\\\\\n\\hline\n\\multicolumn{5}{@{}c@{}}{Sampling 13 cases (a-l,r)}\\\\\nF5 & 0.01774 & 0.00407 & 0.00628 & 98.2\\\\\nGF5 & 0.01692 & 0.00394 & 0.00607 & 98.2\\\\\nGF5b & 0.02870 & 0.00400 & 0.00834 & 99.0\\\\\n$[3,2]$ & 0.00711 & 0.00228 & 0.00308 & 99.1\\\\\n$[2,3]$ & 0.00711 & 0.00224 & 0.00307 & 99.1\\\\\n\\hline\n\\multicolumn{5}{@{}c@{}}{Sampling 16 cases (a-l,o-r)}\\\\\n F5 & 0.01774 & 0.00483 & 0.00678 & 98.5\\\\\n GF5 & 0.01692 & 0.00470 & 0.00664 & 98.6\\\\\n GF5b & 0.02870 & 0.00452 & 0.00811 & 99.6\\\\\n $[3,2]$ & 0.01432 & 0.00332 & 0.00492 & 99.4\\\\\n $[2,3]$ & 0.01497 & 0.00332 & 0.00503 & 99.5\\\\\n\\hline\n\\end{tabular*}\n\\end{table}\n\n\nA careful analysis of the data in Table \\ref{tab1} reveals that the\ncorrelation energy estimation based on MP perturbation theory is the\nbetter the closer one is to the optimal geometries of the molecule\nunder consideration. This is not very much surprising since it is\nwell-known that the quality of the MP series deteriorates with\nincreasing separations from the equilibrium geometries. Compare for\ninstance the triples of cases (a,b,c) for BH, (d,e,f) for HF,\n(h,i,j) for H${}_2$O, and (k,l,m) for NH${}_2$,\nwith ratios 1:1.5:2 of the relevant distances. The values away from the\nequilibrium geometries may or may not be reliable. The data, however,\nsuggest that then the correlation energy estimates are reliable if ---\nas in cases f for HF at $2\\times r_e$ and i for H${}_2$O at\n$1.5\\times r_e$--- the values of $\\Pi$2, F4 and $[2,2]$ do not\ndiffer too much from each other. In this situation, the $\\Pi$2\nestimator seems to provide the best results. On the other hand, large\ndifferences between the estimates $\\Pi$2, F4 and $[2,2]$ --- as in the\ncases j for H${}_2$O at $2\\times r_e$ and m for NH${}_2$ at\n$2\\times r_e$ --- clearly indicate that in these cases more\nsophisticated methods (for instance the $\\Lambda$ transformation\n\\cite{SchmidtWarkenHandy93,DietzSchmidtWarkenHess92,%\nDietzSchmidtWarkenHess93a,DietzSchmidtWarkenHess93b,%\nDietzSchmidtWarkenHess93c,DietzSchmidtWarkenHess94a,%\nDietzSchmidtWarkenHess94b} or multi-reference methods\n\\cite{JeziorskiMonkhorst81,Hose89,SteinbornSilver96,%\nLindgren74,WahlDas77,%\nWerner87,MurphyMessmer93}) are needed to\ncalculate the correlation energies reliably. As regards the fifth order\nestimates, it is similarly seen that a large spread of the values of\nthe various estimates reveals that the MP based methods do not provide\nsufficiently accurate results. Reversely, a small spread of the various\nestimates indicates that with a high probability, the (R)MP based\ncorrelation energy estimates are reliable.\n\nComparing fourth and fifth order based estimators, it is seen that the\nlatter do not always provide better estimates of the correlation\nenergy. In many cases, the $\\Pi$2 estimate that is based on fourth\norder, provides results of comparable quality.\n\n\n\\begin{table}\n\\caption{Dissociation barrier (kJ\/mol) of H${}_2$CO$\\longrightarrow$H${}_2+{}$CO\nusing a TZ2P basis at MP2 geometries ${}^{a}$}\\label{tab2}\n\\begin{tabular*}{\\linewidth}{@{}l@{\\extracolsep{\\fill}}...l@{}}\n\\hline\n Method & \\multicolumn{1}{c}{Minimum} &\n \\multicolumn{1}{c}{Transition state} & \\multicolumn{1}{c@{}}{Barrier\n} & Ref. \\\\\n\\hline\n SCF & -113.912879 & -113.748693 & 431.1 & \\cite{SchmidtWarkenHandy93}\\\\\n MP2 & -114.329202 & -114.182435 & 385.3 & \\cite{SchmidtWarkenHandy93}\\\\\n MP3 & -114.334186 & -114.185375 & 390.7 & \\cite{SchmidtWarkenHandy93}\\\\\n MP4 & -114.359894 & -114.219892 & 367.6 & \\cite{SchmidtWarkenHandy93}\\\\\n F4 & -114.360838 & -114.220603 & 368.2 & \\cite{SchmidtWarkenHandy93}\\\\\n $[2,2]$ & -114.362267 & -114.223409 & 364.6 & This work\\\\\n $\\Pi$2 & -114.364840 & -114.227767 & 359.9 & This work\\\\\n BE${}^{b}$\n & & & 360 & \\cite{DupuisLesterLengsfieldLiu83}\\\\\n\\hline\n\\end{tabular*}\n\\\\\n${}^{a}$ \\cite{SchmidtWarkenHandy93} \\\\\n${}^{b}$ Best estimate \\cite{DupuisLesterLengsfieldLiu83}\n\\end{table}\n\n\\begin{table}%\n\\caption{Barrier height and heat of reaction (kJ\/mol) for\nCH${}_3+{}$C${}_2$H${}_4\\longrightarrow{}$C${}_3$H${}_7$ with a\n6-31G${}^{*}$\nbasis${}^{a}$}\\label{tab3}%\n\\begin{tabular*}{\\linewidth}{@{}l@{\\extracolsep{\\fill}}rrrrl@{}}\n\\hline\nMethod & \\multicolumn{1}{l}{Reactants} &\n\\multicolumn{1}{l}{TS${}^{b}$} & \\multicolumn{1}{l}{Product} &\n\\multicolumn{1}{c}{Barrier} & \\multicolumn{1}{c@{}}{HR${}^{c}$}\\\\\n\\hline\n RHF & $-$117.585674 & $-$117.553736 & $-$117.626572 & 83.8 & $-$107.4 \\\\\n RMP2 & $-$117.967150 & $-$117.952092 & $-$118.014126 & 39.5 & $-$123.3 \\\\\n RMP3 & $-$118.004259 & $-$117.986543 & $-$118.049999 & 46.5 & $-$120.1 \\\\\n RMP4 & $-$118.022888 & $-$118.008072 & $-$118.066816 & 38.9 & $-$115.3 \\\\\n F4 & $-$118.028674 & $-$118.014137 & $-$118.071720 & 38.2 & $-$113.0 \\\\\n $[2,2]$ & $-$118.027529 & $-$118.013226 & $-$118.070703 & 37.6 & $-$113.3 \\\\\n $\\Pi$2 & $-$118.030923 & $-$118.017302 & $-$118.073432 & 35.8 & $-$111.6 \\\\\n exp.${}^{d}$ & & & & 33.1 & $-$107 \\\\\n\\hline\n\\end{tabular*}\n\\\\\n${}^{a}$ \\cite{SchmidtWarkenHandy93} \\\\\n${}^{b}$ Transition state\\\\\n${}^{c}$ Heat of reaction\\\\\n${}^{d}$ \\cite{Kerr72,XXX85,CastelhanoGriller82,SchmidtWarkenHandy93}\n\\end{table}\n\n\n\n\n\n\n\n\n\nIn Tables \\ref{tab2} and \\ref{tab3} the correlation energy\nestimators are used to calculate the dissociation barrier for\nH${}_2$CO$\\longrightarrow$H${}_2+{}$CO, and the barrier height\nand the heat of reaction for\nCH${}_3+{}$C${}_2$H${}_4\\longrightarrow{}$C${}_3$H${}_7$.\n\n\nIn both examples, the calculation is based on known M{\\o}ller-Plesset\nenergies up to fourth order \\cite[Tab. 2-4]{SchmidtWarkenHandy93}. The\nresults show that reliable correlation energy estimates as provided by\nthe Feenberg energy F4 \\cite{SchmidtWarkenHandy93}, the Pad\\'e\napproximant $[2,2]$, and the effective characteristic polynomial\nestimate $\\Pi$2 lead to good agreement with experimental data. The\n$\\Pi$2 estimator yields in both cases the best results.\n\nIn summary, it has been shown that the availability of various\nestimators based on (R)MP results allows in many cases the\naccurate calculation of the correlation energy at negligible\nadditional computational costs. Also, larger deviations between\nthe values indicate clearly cases where further work is\nnecessary.\n\nFinally, we note that the above estimators are expected to be\nuseful to improve convergence of perturbation series for the\nenergies also for the multi-reference case. This conjecture is\na promising topic for further investigations.\n\n\\begin{ack}\nI thank Dr. E. J. Weniger and Prof. Dr. J. {{\\v C}{\\' \\i}{\\v\nz}ek} for discussions regarding the effective characteristic\npolynomial approach. I am grateful to Prof.\\ Dr.\\\nE.\\ O.\\ Steinborn for his support and the excellent working\nconditions at Regensburg. Help of the staff of the computing\ncentre of the University of Regensburg is thankfully\nacknowledged.\n\\end{ack}\n\n\n\n\n\\input{varpt.bbl}\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nQuantum general relativity is known to experience problems with renormalizability. At the one-loop level pure general relativity is renormalizable only on-shell \\cite{tHooft:1974toh}. When matter is added to the model, then even on-shell renormalization does not take place \\cite{tHooft:1974toh}. At the two-loop level even pure general relativity is completely non-renormalizable \\cite{Goroff:1985th}. Because of this it can hardly be considered as a fundamental theory.\n\nEffective field theory technique provides a way to account for quantum corrections to general relativity consistently \\cite{Burgess:2003jk,Donoghue:2012zc,Donoghue:1994dn} and avoid problems associated with non-renor\\-malizability. The technique is based on a premise that general relativity describes gravitational interaction at some energy scale $\\mu$ below the Planck scale. From the normalization scale $\\mu$ the theory is extended to the low energy regime via standard loop corrections. Applicability of the theory is limited only to energies below the normalization scale and it is assumed that all divergences in loop corrections can be renormalized within the complete theory.\n\nThe effective theory for general relativity allows one to obtain some verifiable predictions, for instance to recover the low energy effective action \\cite{Donoghue:1994dn,Burgess:2003jk}, to obtain corrections to the Newton law \\cite{BjerrumBohr:2002kt,Bjerrum-Bohr:2016hpa}, etc \\cite{Calmet:2018qwg,Calmet:2018uub}.\n\nIn this paper we highlight a phenomenon that we call ``graviton mixing''. It is similar to the fermion mixing in the standard model. In the electroweak sector of the standard model neutrino free states which are eigenstates of the mass operator $(\\nu_1,\\nu_2,\\nu_3)$ are given as a superposition of eigenstates of an interaction operator $(\\nu_e,\\nu_\\mu,\\nu_\\tau)$. Their relation is described by the PMNS matrix \\cite{Pontecorvo:1957qd,Maki:1962mu,Tanabashi:2018oca}. To put it otherwise, fermion mixing takes place, when a superposition of fermion state with well-defined mass is coupled to gauge bosons \\cite{Tanabashi:2018oca}.\n\nA similar phenomenon takes place in gravity. General relativity contains massless spin-2 degree of freedom on shell, while off-shell it receives an additional massless scalar degree of freedom \\cite{Fierz:1939ix,Hinterbichler:2011tt,RN7}. In the classical theory these degrees of freedom are coupled to an energy-momentum source $T^{\\mu\\nu}$ in the same way:\n\\begin{align}\n T^{\\mu\\nu} h_{\\mu\\nu} = T^{\\mu\\nu} h_{\\mu\\nu}^{(s=2)} + T^{\\mu\\nu} h_{\\mu\\nu}^{(s=0)}.\n\\end{align}\nWithin an effective theory this is not so, as the correspondent couplings are modified due to quantum effects. In full analogy with the case of the standard model one should introduce a mixing matrix $\\mathcal{M}_{\\mu\\nu\\alpha\\beta}$ that couples a linear combination of spin-2 and spin-0 states to an energy-momentum source:\n\\begin{align}\n &T^{\\mu\\nu} h_{\\mu\\nu} \\to T^{\\mu\\nu} \\mathcal{M}_{\\mu\\nu\\alpha\\beta} h^{\\alpha\\beta}= \\nonumber \\\\\n &=T^{\\mu\\nu} ~\\left[ M_{\\mu\\nu\\alpha\\beta} h^{(s=2)}{}^{\\alpha\\beta} + M_{\\mu\\nu\\alpha\\beta} h^{(s=0)}{}^{\\alpha\\beta}\\right].\n\\end{align}\n\nThe main focus of the paper is to draw an attention to the graviton mixing and to highlight its role within effective gravity. In the next section we discuss a motivation behind the graviton mixing in more details and provide a way to construct a graviton mixing matrix. Then we show that the mixing has a non-trivial influence on processes involving matter. Namely, the structure of virtual graviton exchange processes is strongly affected by the mixing. We conclude with a discussion of the role of graviton mixing.\n\n\\section{Graviton Mixing}\n\nWithin general relativity an interaction between a week gravitational field $h_{\\mu\\nu}$ and an energy-momentum source is given by the following Lagrangian:\n\\begin{align}\n \\mathcal{L}_\\text{int} = T^{\\mu\\nu} ~I_{\\mu\\nu\\alpha\\beta} h^{\\alpha\\beta}.\n\\end{align}\nHere $I_{\\mu\\nu\\alpha\\beta}$ is the gauge-invariant generalization of a rank-$2$ unit tensor\n\\begin{align}\n I_{\\mu\\nu\\alpha\\beta} \\overset{\\text{def}}{=} \\cfrac12 \\left( \\theta_{\\mu\\alpha}\\theta_{\\nu\\beta}+\\theta_{\\mu\\beta} \\theta_{\\nu\\alpha} \\right) .\n\\end{align}\nAnd the standard projectors $\\theta_{\\mu\\nu} = \\eta_{\\mu\\nu} - k_\\mu k_\\nu\/k^2$ are used. The interaction contains spin-$2$ and spin-$0$ parts:\n\\begin{align}\\label{interaction_gauge-invariant_form}\n L_\\text{int}&= T^{\\mu\\nu} P^2_{\\mu\\nu\\alpha\\beta} h^{\\alpha\\beta} + T^{\\mu\\nu} P^0_{\\mu\\nu\\alpha\\beta} h^{\\alpha\\beta} \\nonumber \\\\\n &=T^{\\mu\\nu} h^{(s=2)}_{\\mu\\nu} + T^{\\mu\\nu} h^{(s=0)}_{\\mu\\nu}.\n\\end{align}\nWhere $P^2$ and $P^0$ are Nieuwenhuizen operators \\cite{VanNieuwenhuizen:1973fi,Accioly:2000nm}:\n\\begin{align}\n \\begin{cases}\n P^2_{\\mu\\nu\\alpha\\beta} &\\overset{\\text{def}}{=} \\cfrac12 \\left( \\theta_{\\mu\\alpha} \\theta_{\\nu\\beta} + \\theta_{\\mu\\beta} \\theta_{\\nu\\alpha}\\right) - \\cfrac13 ~ \\theta_{\\mu\\nu} \\theta_{\\alpha\\beta}, \\\\\n P^0_{\\mu\\nu\\alpha\\beta} &\\overset{\\text{def}}{=} \\cfrac13 ~ \\theta_{\\mu\\nu} \\theta_{\\alpha\\beta}.\n \\end{cases}\n\\end{align}\nThey are orthogonal projectors on spin-2 and spin-0 states:\n\\begin{align}\n \\begin{cases}\n P^2_{\\mu\\nu\\rho\\sigma} P^2{}^{\\rho\\sigma}{}_{\\alpha\\beta} &= P^2_{\\mu\\nu\\alpha\\beta}, \\\\\n P^0_{\\mu\\nu\\rho\\sigma} P^0{}^{\\rho\\sigma}{}_{\\alpha\\beta} &= P^0_{\\mu\\nu\\alpha\\beta}, \\\\\n P^2_{\\mu\\nu\\rho\\sigma} P^0{}^{\\rho\\sigma}{}_{\\alpha\\beta} &= 0 .\n \\end{cases}\n\\end{align}\nThus, in general relativity spin-$2$ and spin-$0$ graviton states share the matter coupling constant.\n\nWithin a more general setup of effective theory this feature can hardly holds. Even if general relativity does describe gravity at some energy scale $\\mu$ below the Planck scale, then loop corrections can still induce a non-minimal coupling between graviton spin states and matter. In the most general case one can account for such effects by introducing the graviton mixing matrix:\n\\begin{align}\n \\mathcal{M}_{\\mu\\nu\\alpha\\beta} = I_{\\mu\\nu\\alpha\\beta} + A P^2_{\\mu\\nu\\alpha\\beta} + B P^0_{\\mu\\nu\\alpha\\beta}\n\\end{align}\nwith $A$ and $B$ being mixing parameters. The matrix redefines the form of the interaction between gravity and matter:\n\\begin{align}\n \\mathcal{L}_\\text{int}\\to T^{\\mu\\nu} \\mathcal{M}_{\\mu\\nu\\alpha\\beta} h^{\\alpha\\beta}.\n\\end{align}\nA few comment on the mixing are due.\n\nFirstly, the mixing parameters $A$ and $B$ should be given in terms of momentum expansions. Loop corrections can be consistently taken into account within effective field theory \\cite{Donoghue:1994dn,Burgess:2003jk}. The form of the mixing parameters can be recovered via dimension reasoning:\n\\begin{align}\\label{mixing_parameters_expansion}\n \\begin{cases}\n A (\\kappa^2\\square) = a_1 \\kappa^2 \\square + a_2 (\\kappa^2 \\square)^2 + \\cdots \\\\\n B (\\kappa^2\\square)= b_1 \\kappa^2 \\square + b_2 (\\kappa^2 \\square)^2 + \\cdots\n \\end{cases}.\n\\end{align}\nThese expansions are due, as the $n$-loop graviton correction is suppressed by the factor $\\kappa^{2n}$. Thus, constants $a_1$, $b_1$ contain data on one-loop corrections, $a_2$ and $b_2$ describe two-loop corrections, etc. The explicit values of these constants cannot be evaluated within the effective theory, but can be calculated within quantum gravity models.\n\nSecondly, mixing parameters are defined by the particle content of the effective theory. Alongside graviton loop corrections the energy-momentum tensor is also affected by matter loop corrections. It is safe to consider only renormalizable interactions with dimensionless couplings. Corrections from such matter loops do not influence the structure of the momentum expansions \\eqref{mixing_parameters_expansion} and only contribute to coefficients $a_i$, $b_i$. Cases of non-renormalizable interactions and interactions with dimensionful couplings require a special treatment and lie beyond the scope of this paper.\n\nSummarizing all of the above, within an effective theory for general relativity the graviton mixing appears naturally due to loop corrections. The mixing parameters are defined by the structure of the underlying fundamental theory, but their form can be recovered via dimension reasoning. But most importantly, the mixing must be taken into account for processes involving matter states.\n\nThe importance of the graviton mixing can be easily illustrated via a simple example of a virtual graviton exchange. To proceed with the goal one should use the gauge-invariant part of the graviton propagator \\cite{Accioly:2000nm}:\n\\begin{align}\n G_{\\mu\\nu\\alpha\\beta} = \\cfrac{i}{k^2} \\left( P^2_{\\mu\\nu\\alpha\\beta} -\\cfrac12 ~ P^0_{\\mu\\nu\\alpha\\beta} \\right).\n\\end{align}\nTo account for the graviton self-energy the following graviton polarization operator should be used:\n\\begin{align}\n \\Pi_{\\mu\\nu\\alpha\\beta} = i \\mathcal{N} \\kappa^2 k^4 \\left[ P^2_{\\mu\\nu\\alpha\\beta} + \\zeta P^0_{\\mu\\nu\\alpha\\beta} \\right].\n\\end{align}\nIn this expression $\\mathcal{N}$ and $\\zeta$ are numerical coefficients defined by the structure of the complete quantum model. We do not specify their values, as they are irrelevant for the reasoning to be presented. Moreover, this approach goes in line with classical results \\cite{tHooft:1974toh,Goroff:1985th}.\n\nThis allows one to recover the resummed graviton propagator:\n\\begin{align}\\label{resummed_propagator}\n \\mathcal{G}_{\\mu\\nu\\alpha\\beta}=&G_{\\mu\\nu\\alpha\\beta} + (G \\Pi G)_{\\mu\\nu\\alpha\\beta}+\\cdots\\nonumber\\\\\n =&\\cfrac{i}{k^2} \\left(P^2_{\\mu\\nu\\alpha\\beta}-\\cfrac12~P^0_{\\mu\\nu\\alpha\\beta}\\right) -\\cfrac{i~P^2_{\\mu\\nu\\alpha\\beta}}{k^2-\\cfrac{-1}{\\mathcal{N}\\kappa^2}} \\\\\n & +\\cfrac{i~\\frac12\\,P^0_{\\mu\\nu\\alpha\\beta}}{k^2-\\cfrac{1}{\\frac12 ~\\zeta\\mathcal{N}\\kappa^2}} \\nonumber ~.\n\\end{align}\nIn full agreement with classical results \\cite{tHooft:1974toh,Stelle:1976gc} the propagator contains additional poles corresponding to spin-$0$ massive states and spin-$2$ massive ghosts. These poles mark the applicability limits of the effective theory.\n\nIn such a setup an exchange of a virtual graviton between two energy-momentum sources is described by the following expression:\n\\begin{align}\\label{effective_propagator_definition}\n T^{\\mu\\nu}\\mathcal{M}_{\\mu\\nu\\alpha\\beta} \\left( G^{\\alpha\\beta\\rho\\sigma} + \\left( G\\, \\mathcal{M} \\, \\Pi \\, \\mathcal{M} \\, G \\right)^{\\alpha\\beta\\rho\\sigma + \\cdots}\\right) \\mathcal{M}_{\\rho\\sigma\\lambda\\tau} T^{\\lambda\\tau}=T^{\\mu\\nu} \\overline{\\mathcal{G}}_{\\mu\\nu\\alpha\\beta}T^{\\mu\\nu},\n\\end{align}\n\\begin{align}\\label{effective_propagator}\n \\overline{\\mathcal{G}}_{\\mu\\nu\\alpha\\beta} =\\cfrac{i}{k^2} \\left( (1+A)^2~ P^2_{\\mu\\nu\\alpha\\beta}-\\cfrac12 ~(1+B)^2 ~P^0_{\\mu\\nu\\alpha\\beta} \\right) \\\\\n -\\cfrac{i~(1+A)^2~P^2_{\\mu\\nu\\alpha\\beta}}{k^2 -\\cfrac{-1}{\\mathcal{N}\\kappa^2\\,(1+A)^2} } +\\cfrac12~\\cfrac{i~(1+B)^2~P^0_{\\mu\\nu\\alpha\\beta}}{k^2 -\\cfrac{1}{\\frac12 \\zeta\\mathcal{N} \\kappa^2\\,(1+B)^2}}\\nonumber ~.\n\\end{align}\nIn this expression $\\overline{\\mathcal{G}}$ is not a resummed graviton propagator, but a quantity that accounts both for graviton propagation and for the mixing. To be short we will call $\\overline{\\mathcal{G}}$ the effective propagator.\n\nIt may seem that effective and resummed propagators have a similar structure, but this is not so. The mixing coefficients $A$ and $B$ are functions of the transferred momentum $k^2$, so they alter the pole structure. For instance, if $A^2$ is proportional to $k^2+1\/(\\mathcal{N} \\kappa^2)$, then the corresponding matrix element is free from the ghost pole. At the same time, if either $A^2$ or $B^2$ admit additional poles, then these poles are to appear in an expression for the matrix element.\n\nThe difference between the pole structure of \\eqref{resummed_propagator} and \\eqref{effective_propagator} shows the following feature of the mixing. The graviton mixing alters the pole structure of the amplitudes involving matter states. This feature has two immediate corollaries.\n\nFirstly, as the graviton mixing influence the pole structure of amplitudes ivolving matter, it also influence the applicability of the effective model. Poles of the resummed graviton propagator \\eqref{resummed_propagator} indicate the energy scale at which the effective theory applicability should be put under question, as the ghost instability can be triggered \\cite{Solomon:2017nlh}. Poles of the effective propagator \\eqref{effective_propagator} play the same role and indicate the limits of the effective theory applicability. Due to the mixing the position of the pole corresponding to ghost states can be altered, which changes the area of applicability for the effective theory.\n\nSecondly, the graviton mixing can be probed empirically. The matrix element of a virtual graviton exchange can be studied empirically, as it defines the form of the Newton law. Namely, it can be studied in the terrestrial environment via experiments of E\\\"ot-Wash type \\cite{Hoyle:2004cw}. The structure of the corresponding matrix element is defined via the effective propagator \\eqref{effective_propagator} which contains data on the graviton mixing. Therefore the graviton mixing can be put to a direct empirical verification.\n\nThis conclusion is also independently supported with well-known results considering corrections to the Newton potential \\cite{BjerrumBohr:2002kt,Bjerrum-Bohr:2016hpa}. In these papers the graviton mixing is not separated explicitly, but it is taken into account. Without the mixing corrections to the Newton potential would have a Yukawa-like form due to the new poles associated with massive states. As the mixing is accounted for, the one-loop effective non-relativistic potential has corrections of a different form.\n\nFinally, one can argue that the graviton mixing is the reason behind the fact the it is impossible to introduce a universal definition of a running gravitational coupling \\cite{Anber:2011ut}. As it was highlighted before, within general relativity graviton spin components share the matter coupling. Because of this one can introduce a single coupling constant, the Newton constant, to describe these interactions. When loop corrections are taken into account so the spin components are mixed, the corresponding coupling receive different correction and can no longer be described by a single coupling.\n\n\\section{Conclusion}\n\nIn this paper we discussed a mechanism of graviton mixing and its possible application within effective field theory for gravity.\n\nWe define the graviton mixing in the following way. Within general relativity interaction between matter and spin-2 gravitons (that exist on- and off-shell) alongside with spin-0 gravitons (that exist only off-shell) is defined uniquely. Within effective theory this feature does not hold, as loop corrections can change the corresponding couplings of graviton spin states. We introduce the graviton mixing matrix that accounts for possible mixing of spin-2 and spin-0 graviton states in interaction with an energy-momentum source.\n\nWe have shown that graviton mixing strongly influence amplitudes containing matter states. A virtual graviton exchange process was used as an illustrative example. Formulae \\eqref{effective_propagator_definition} and \\eqref{effective_propagator} show that the poles structure of such a process is defined by the graviton mixing coefficients.\n\nThree conclusions can be made about the graviton mixing. First of all, the mixing can change the area of applicability of the effective theory due to the pole structure of the mixing parameters. Secondly, the mixing can be directly probed empirically. Namely, experiments of E\\\"ot-Wash type are sensitive to the graviton mixing. Thirdly, the graviton mixing is the reason behind an inability to define a universal running gravitational coupling. The mixing is due loop corrections that has different influence on spin-$2$ and spin-$0$ graviton states coupling to matter. Consequently the corresponding couplings can no longer be described by a single Newton constant.\n\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction: Young Dense Clusters in the Galaxy and Beyond}\n\nAmong the many massive young clusters now known throughout the local\nuniverse, perhaps the most interesting to dynamicists are those in\nwhich stellar dynamical time scales are short enough that the cluster\ncan undergo significant structural change during the lifetimes of the\nmost massive stars. In such clusters, dynamical evolution opens up\nnovel avenues for stellar and binary evolution, making possible the\ncreation of entirely new stellar species. One obvious modification to\nstandard stellar evolutionary tracks arises from collisions and\nmergers between stars, and we focus on that here. From this\nperspective, the clusters listed by Portegies Zwart et al.~(2004b;\nTable 1) represent an ideal combination of properties, having ages of\nless than a few million years and relaxation times of less than a few\ntens of millions of years. In these clusters, dynamical evolution,\ntraditionally regarded as a ``slow'' process, actually occurs much\nmore rapidly than the stellar evolution of even the most massive\nstars. In fact, cluster dynamics controls the early phases of these\nstars' lives.\n\nPortegies Zwart et al.~(2004b) are primarily concerned with the\nlifetimes and global structural evolution of young dense clusters in\nthe vicinity of the Galactic center. In this paper we consider mainly\nthe stellar evolutionary aspects of life in such an extreme\nenvironment. We start by investigating the circumstances under which\ncollisions are likely to occur, and how a cluster might find its way\ninto such a state. We then present a scenario which may plausibly\nlead to the formation of very massive stars and (perhaps)\nintermediate-mass black holes (IMBHs) in sufficiently young, dense\nsystems. Our results are based in large part on detailed $N$-body\nsimulations of model clusters. Finally, we apply this scenario to\nrecent observations of the starburst galaxy M82. In a companion\ncontribution, Baumgardt et al.~(2004) extend these ideas to the\nsubsequent evolution of a cluster containing a massive, compact\nobject.\n\n\n\\section{Stellar Collisions and Cluster Structure}\n\nWe are interested here in the possibility of runaway collisions\nleading to ultramassive stars. To appreciate the conditions under\nwhich such runaways can occur, consider a massive object moving\nthrough a field of background stars of total mass density $\\rho$ and\nvelocity dispersion $v$. We assume that the mass $M$ and radius $R$\nof the object are large compared to the masses and radii of other\nstars, and that all velocities are small enough that gravitational\nfocusing dominates the total cross section. In that case, the\nobject's collision cross section is\n\\begin{equation}\n \t\\sigma \\approx 2\\pi G M R \/ v^2\\,,\n\\end{equation}\nnearly independent of the properties of the other stars. The rate of\nincrease of the object's mass due to collisions is therefore\n\\begin{eqnarray}\n\t\\frac{dM}{dt} &\\approx& \\rho\\sigma v \\nonumber\\\\\n\t\t &\\approx& 2\\pi G M R \\rho \/ v \\nonumber\\\\\n\t\t &=& 6\\times10^{-11}\n\t\t\t\\left(\\frac{M}{M_\\odot}\\right)\n\t\t\t\\left(\\frac{R}{R_\\odot}\\right)\\nonumber\\\\\n\t\t&~&~~~~~~~~~~~~~~~\\times\\,\n\t\t\t\\left(\\frac{\\rho}{10^6\\,M_\\odot\/{\\rm pc}^3}\\right)\n\t\t\t\\left(\\frac{v}{10\\, {\\rm km\/s}}\\right)^{-1}\n\t\t\tM_\\odot\/{\\rm yr}\\,.\\ \\ \\ \\ \\ \n\\end{eqnarray}\nThus, if the object initially has $M=100 M_\\odot$ and $R=30 R_\\odot$,\nwe fix $v$ at 10 km\/s, and adopt a mass-radius relation $R\\propto\nM^{1\/2}$, we find that in order for the object to accrete\n$10^3M_\\odot$ of material in 5 Myr (to form an IMBH within the typical\nlifetime of a massive star), the local density must satisfy\n\\begin{equation}\n \t\\rho \\ \\ga\\ 5\\times10^8\\, M_\\odot\/{\\rm pc}^3 \\ \\ =\\ \\\n\t \t\\rho_{crit},\\ \\ {\\rm say}\\,.\n\\end{equation}\nSuch a density is much higher than the mean density of any known star\ncluster, young or old. For comparison, the average density of the\nArches cluster is $\\sim$$6\\times10^5\\,M_\\odot\/{\\rm pc}^3$, that of a\nfairly compact globular cluster is $\\sim$$10^4\\,M_\\odot\/{\\rm pc}^3$,\nwhile even the most concentrated globular cluster cores have densities\n$\\la 10^{6-7}\\, M_\\odot\/{\\rm pc}^3$.\n\nMight we be able to generate conditions more conducive to mergers by\nassuming that a cluster is born very centrally concentrated\n(e.g.~Portegies Zwart et al.~2004, Merritt et al.~2004)? As a simple\nlimiting model of a very condensed cluster, consider the nearly\nisothermal system of total mass $M_c$ and half-mass radius $r_h$,\ndescribed by the density profile\n\\begin{eqnarray}\n\t\\rho(r) &=& \\frac{M_c}{8\\pi r_h r^2}\\,,\\\\\n\tM(r) &=& {\\textstyle\\frac12}M_c\\left(\\frac{r}{r_h}\\right)\\,,\n\\end{eqnarray}\nfor $0 \\le r \\le 2r_h$. Densities exceeding $\\rho_{crit}$ are found\nfor $r350 M_\\odot$ IMBH in MGG-11 and the absence of a similar object\nin MGG-9?\n\nPortegies Zwart et al.~(2004a; PZBHMM) have addressed this issue using\ndetailed $N$-body simulations. Starting with MGG-11, they first\ndemonstrate that IMBH formation is a natural outcome of that cluster's\ndynamical evolution, and then go on to show that the same processes\nwould have failed to create a runaway in MGG-9. Their calculations\nwere carried out using two independently developed $N$-body codes,\n{\\tt Starlab} (see Portegies Zwart et al.~2001) and {\\tt NBODY4}\n(Aarseth 1999, Baumgardt 2003). Initial conditions for the model\nclusters were chosen so that at the present time they have mass\nfunctions, luminosities, half-mass radii and velocity dispersions in\nagreement with the McCrady et al.~observations.\n\nSince the initial and the current central densities of both clusters\nare unknown, the concentration parameter $c$ (the logarithm of the\nratio of the tidal radius to the core radius) is treated as a free\nparameter controlling the initial central density of the models.\nPZBHMM find that, for $c > 2$ (which for ``King'' 1966 models is\nequivalent to a dimensionless central potential $W_0 \\ga 9$) the\nMGG-11 models show runaway growth via repeated collisions. The\nmass-segregation time scale of a $50 M_\\odot$ star in MGG-11 is\n$t_s\\sim 4$\\,Myr (Eq.~\\ref{tseg}). Thus, massive stars in MGG-11 can\neasily reach the center of the cluster before leaving the main\nsequence. Given the high central density of MGG-11, once those stars\nhave accumulated in the center, a runaway collision is inevitable,\nleading to IMBHs with masses in the range 800--3000 $M_\\odot$. No\nepisode of runaway growth occurs in the MGG-11 models with $c < 2$,\nnor in any of the MGG-9 simulations, regardless of initial\nconcentration. In MGG-9, $t_s\\ga 15 $\\,Myr even for $100 M_\\odot$\nstars, so mass segregation cannot occur in the time available and no\nrunaway is seen.\n\n\\begin{figure}[!ht]\n\\psfig{figure=.\/mcmillan_fig2.ps,width=\\columnwidth,angle=-90} \n\\caption[]{Growth in mass $M_r(t)$ of the collision runaway for some\nof the simulations of PZBHMM, performed using {\\tt Starlab} and {\\tt\nNBODY4}. The choice of initial concentration parameter $W_0$ is\nindicated. The star symbols indicate the moment when the runaway\nexperiences a supernova, typically around 3\\,Myr. Open and filled\nstars indicate simulations performed with {\\tt NBODY4} and {\\tt\nStarlab}, respectively. The observed age range of MGG-11 and MGG-9 is\nindicated by the horizontal bar near the bottom of the figure. }\n\\label{fig:Mbh}\n\\end{figure}\n\nFigure 2 presents a representative sample of results from a number of\nsimulations performed for a broad range of cluster parameters. It\nshows the growth in mass of the star that will ultimately become the\nmost massive object in the cluster. Following detailed supernova\ncalculations by Heger et al.~(2003), stars having masses greater than\n$260\\,M_\\odot$ are assumed to collapse to black holes without\nsignificant mass loss. The stellar evolution models for stars with\nmasses between 50 and $1000\\,M_\\odot$ are based on work by Stothers \\&\nChin (1997) and Ishii et al.~(1999). The quantitative differences\nbetween the simulations performed with {\\tt Starlab} and those using\n{\\tt NBODY4} are due mainly to the different radii (and hence cross\nsections) assumed for very massive stars in those two packages.\n\nThe solid and dashed curves in the figure show the runaway mass as a\nfunction of time for a Salpeter (1955) IMF with a lower limit of $1\nM_\\odot$ and $c \\approx 2.1$ ($W_0 = 9$) and $c \\approx 2.7$ ($W_0=12$).\nThe dash-dotted curves are for two models with $W_0=9$ with an upper\nlimit to the IMF of $50 M_\\odot$, instead of the standard $100\nM_\\odot$ used in the other calculations; these runs were terminated at\nthe moment the runaway star exploded as a supernova. The\ndash-3-dotted curve shows the result for $W_0=12$ with a Salpeter IMF,\nwith 10\\% of the stars in primordial binaries---any tendency of these\nsystems to arrest core collapse is effectively offset by the larger\ncollision cross sections of the binary components. Finally, the\ndotted curve shows results for $W_0=9$ and a Kroupa (2001) IMF with a\nminimum mass of $0.1 M_\\odot$, in a simulation of 585,000 stars.\n\n\n\\section{Summary and Discussion}\n\nRapid mass segregation in a dense star cluster leads to an effective\ncore collapse on a time scale $\\sim$$0.2 t_{rh}$ for typical initial\nmass functions. This in turn can lead to a runaway series of\ncollisions in the cluster core and the possible formation of a\n$\\sim$1000 $M_\\odot$ IMBH there. We therefore expect an association\nbetween ultraluminous X-ray sources and the cores of dense young star\nclusters. A leading candidate for such an association is M82 X-1,\nwhich appears to lie in the massive young cluster MGG-11. On the\nbasis of $N$-body simulations and elementary considerations of the\ntime scale on which massive stars sink to the cluster center, we can\nreadily explain why MGG-11 might host an IMBH while its more luminous\nneighbor MGG-9 does not. High initial central concentrations are\nrequired in order for this process to operate even in MGG-11, but we\nnote that all of the ``local'' clusters listed in Table 1 of Portegies\nZwart et al.~(2004b) are in fact very centrally condensed.\n\nOf course, it must be conceded that next to nothing is known about the\ndetailed evolution and ultimate fate of stars hundreds or thousands of\ntimes more massive than the Sun, so we should perhaps not take too\nseriously the predictions of a $2000 M_\\odot$ ``star'' in some of our\nsimulations. Nevertheless, the simulations described here do make it\nclear that the hearts of these dense stellar systems can easily\nproduce conditions suitable for repeated stellar collisions. The\ncollision runaway at the center of such a system should be extremely\nluminous and eminently observable during its short lifetime.\nObservations of the cores of dense young star clusters in our Galaxy\nand beyond may thus shed light on the structure and lifetimes of such\nultramassive stellar objects.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}