\\section{Introduction}\n\n\n\\medskip\nThe class of affine semigroup rings is rich with examples that combine the flavors of convex geometry and commutative algebra. The structure of the semigroup ring $K[S]$ is\n intimately related to the structure of the affine semigroup $S$ and the cone $\\ensuremath{\\mathrm{pos}}(S)$ spanned \n by $S$. For example, it is well known that $K[S]$ is normal if and only if $S$ contains all integral points\n of $\\ensuremath{\\mathrm{pos}}(S)$ (see \\cite{BH}). Normal affine semigroup rings are Cohen-Macaulay by a theorem of \n Hochster \\cite{Hoch}. Ishida \\cite{Ish} characterized the S$_2$-ness of $K[S]$ in terms of $S$ and\n the facets of $\\ensuremath{\\mathrm{pos}}(S)$. In \\cite{GW1} Goto and Watanabe announced a characterization of those affine semigroups $S$ for which $K[S]$ is Cohen-Macaulay; Hoa and Trung gave a corrected characterization in terms of both $S$ and the cone spanned by $S$ over the rational numbers in \\cite{HT} and \\cite{HT2}. \n In an earlier paper \\cite{Vi} the author characterized those affine semigroup rings \n which satisfy Serre's condition R$_1$. In this note we characterize those affine semigroup\n rings $K[S]$ over an arbitrary field $K$ which satisfy condition $\\mathrm{R}_{\\ell}$\n of Serre. Our characterization is in terms of the face lattice of the positive cone\n $\\ensuremath{\\mathrm{pos}}(S)$ of $S$. We start by recalling some basic facts about the faces of\n $\\ensuremath{\\mathrm{pos}}(S)$ and consequences for the monomial primes of $K[S]$. After proving our\n characterization we turn our attention to the Rees algebras of a special class of\n monomial ideals in a polynomial ring over a field. 
We may view these as affine semigroup rings; the associated affine semigroups are in some sense complementary to the class of polytopal semigroups introduced and studied by Bruns, Gubeladze, and Trung in \\cite{BG} and \\cite{BGT}. In this special case, most\n of the characterizing criteria are always satisfied. We give examples of nonnormal\n affine semigroup rings that satisfy $\\mathrm{R}_2$. \n \n For the fundamentals on convex geometry we refer the reader to \\cite{Ew}, \\cite{Grun}, or \\cite{Zieg}. For background on monoids and semigroup rings one can consult \\cite{Gil}.\nWe make the standard assumptions that an affine semigroup $S$ is a subsemigroup of $\\mathbb{Z}^n$ and that \n$\\ensuremath{\\mathrm{grp}}(S) = \\mathbb{Z}^n$ for some positive integer $n$. Consider the \\textit{positive cone} $\\ensuremath{\\mathrm{pos}}(S)$ of $S$ defined by\n$$\\ensuremath{\\mathrm{pos}}(S) = \\{ c_1\\boldsymbol{\\alpha}_1 + c_2\\boldsymbol{\\alpha}_2 + \\cdots + c_m\\boldsymbol{\\alpha}_m \\mid m \\ge 0, \\boldsymbol{\\alpha}_i \\in S, c_i \\in \\mathbb{R}_{\\ge} \\; (i = 1, \\ldots, m)\\},$$\nwhere $\\mathbb{R}_{\\ge}$ denotes the set of nonnegative real numbers. Recall that $\\ensuremath{\\mathrm{pos}}(S)$ is a polyhedral cone, that is, $\\ensuremath{\\mathrm{pos}}(S)$ is the intersection of finitely many positive halfspaces $H_i^+ = \\{ \\boldsymbol{\\alpha} \\in \\mathbb{R}^n \\mid \\sigma_i(\\boldsymbol{\\alpha}) \\ge 0\\}$, where $\\sigma_i$ is a \n linear form on $\\mathbb{R}^n$. Since $S$ is a finitely generated subsemigroup of $\\mathbb{Z}^n$ we may \n assume that each $\\sigma_i$ has rational coefficients, that is, $\\ensuremath{\\mathrm{pos}}(S)$ is a \\textit{rational polyhedral\n cone}. After scaling we may assume that the coefficients of each $\\sigma_i$ are relatively prime\n integers; we call such a $\\sigma_i$ a \\textit{primitive linear form}. 
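As a quick illustration of these definitions, the following small example (ours, not drawn from the cited sources) exhibits a positive cone, its primitive linear forms, and a failure of normality of the kind studied later in the paper.

```latex
% Illustrative example (not from the cited sources): a nonnormal affine
% semigroup in Z^2 with its positive cone and primitive linear forms.
\begin{example}
Let $S = \langle (1,0), (1,1), (1,3) \rangle \subseteq \mathbb{Z}^2$. Since
$(0,1) = (1,1) - (1,0)$, we have $\ensuremath{\mathrm{grp}}(S) = \mathbb{Z}^2$, and
$$\ensuremath{\mathrm{pos}}(S) = H_1^+ \cap H_2^+, \qquad
H_i^+ = \{ \boldsymbol{\alpha} \in \mathbb{R}^2 \mid
\sigma_i(\boldsymbol{\alpha}) \ge 0 \},$$
where $\sigma_1(a,b) = b$ and $\sigma_2(a,b) = 3a - b$ are the primitive linear
forms cutting out the two facets. Note that
$(1,2) \in \ensuremath{\mathrm{pos}}(S) \cap \mathbb{Z}^2$ but $(1,2) \notin S$,
so $K[S]$ is not normal.
\end{example}
```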
Recall that a \\textit{supporting hyperplane} of $\\ensuremath{\\mathrm{pos}}(S)$ is a hyperplane $H$ such that $\\ensuremath{\\mathrm{pos}}(S) \\cap H \\ne \\emptyset$ and $\\ensuremath{\\mathrm{pos}}(S)$ lies on one\n side of $H$. A \\textit{face} of $\\ensuremath{\\mathrm{pos}}(S)$ is the intersection of $\\ensuremath{\\mathrm{pos}}(S)$ and a supporting hyperplane.\n The faces of $\\ensuremath{\\mathrm{pos}}(S)$ are again rational polyhedral cones. By the \\textit{dimension} $\\dim(F)$ of a face $F$ we mean the dimension of the vector space spanned by $F$ and by the \\textit{codimension} $\\mathrm{codim}(F)$ we mean $n - \\dim(F)$. A face of codimension one is called a \\textit{facet}. If we represent a polyhedral cone $C$ as an irredundant intersection of positive halfspaces $C= \\cap_{i = 1}^r H_i^+$ then $F_1 = C \\cap H_1, \\ldots, F_r = C \\cap H_r$ are the facets of $C$ (see \\cite[Section 2.6]{Grun}).\n \nBy a \\textit{monomial} in $\\mathcal{R}=K[S]$ we mean an element of the form $x^{\\boldsymbol{\\alpha}}$ and by a \\textit{monomial ideal} we mean an ideal generated by monomials. There is an order-reversing bijective correspondence between the nonempty faces of $\\ensuremath{\\mathrm{pos}}(S)$ and the monomial primes of $\\mathcal{R}$. Indeed, the monomial prime of $\\mathcal{R}$ corresponding to the nonempty face $F$ of $\\ensuremath{\\mathrm{pos}}(S)$ is $P_F = (x^{\\boldsymbol{\\alpha}} \\mid \\boldsymbol{\\alpha} \\in S \\setminus F)$ (e.g. see \\cite{BH}). We let $\\mathfrak{m}$ denote the ideal of $K[S]$ generated by all noninvertible monomials; it is the maximal monomial prime of $K[S]$. Notice that we are considering the zero ideal to be monomial. Finally, we let $\\mathbf{e}_1, \\ldots , \\mathbf{e}_n$ denote the standard basis vectors for $\\mathbb{R}^n$. \n\nThe maximal proper faces of a polyhedral cone $C$ are precisely the facets of $C$. 
If $P = P_F$ is the monomial prime of height $d$ in an affine semigroup ring $K[S]$ corresponding to the face $F$ of $\\ensuremath{\\mathrm{pos}}(S)$, then there exists a chain of monomial primes of length $d$ descending from $P$ (see \\cite[Prop. 1.2.1]{GW2}) and $\\mathrm{ht}(P) = \\mathrm{codim}(F)$. \n\n\n\n\\section{Serre's Regularity Condition for Affine Semigroup Rings} \n\nWe start this section with a basic result about $\\mathbb{Z}^n$-graded rings, which is crucial for our purposes. A version for $\\mathbb{Z}$-graded rings is well known (e.g. see \\cite[Ex. 2.24]{BH}). Then we specialize to the case of an affine semigroup ring defined over a field. Recall that if $P$ is a prime ideal of a $\\mathbb{Z}^n$-graded ring $R$ then $P^*$ denotes the largest homogeneous ideal of $R$ that is contained in $P$ and $R_{(P)}$ the homogeneous localization of $R$ at $P$, i.e., $R_{(P)} = W^{-1}R$, where $W$ is the set of homogeneous elements of $R$ that are not in $P$. The graded ring $R_{(P)}$ is an example of a ${}^*$local ring, that is, a graded ring with a unique maximal homogeneous ideal.\n\n\\begin{prop} \\label{prop regular graded ring} Let $R$ be a Noetherian $\\mathbb{Z}^n$-graded ring. Then, $R$ is regular if and only if $R_{(P)}$ is regular for every homogeneous prime ideal $P$ of $R$. Furthermore, for a ${}^*$local ring $R$ with unique maximal homogeneous ideal $\\mathfrak{m}$, the ring $R$ is regular if and only if $R_{\\mathfrak{m}}$ is regular.\n\\end{prop}\n\\begin{proof} First suppose that $R$ is regular. Let $P$ be a homogeneous prime ideal. Then, $R_P$ is a regular local ring. We must show that the ${}^*$local ring $\\mathcal{R} := R_{(P)}$ is regular. If $P\\mathcal{R}$ is a maximal \nideal of $\\mathcal{R}$ then $R_{(P)} = R_P$ is a regular local ring. So assume $P\\mathcal{R}$ is not maximal and\n choose a prime ideal $Q$ of $R$ such that $Q^*= P$. By assumption, $R_P = R_{Q^*} \\cong \\mathcal{R}_{Q^*\\mathcal{R}}$ is a regular local ring. 
\n Let $\\mathcal{P}$ be any prime ideal of $\\mathcal{R}$. Then $\\mathcal{P}^* \\subseteq Q^*\\mathcal{R}$, so $\\mathcal{R}_{\\mathcal{P}^*}$ is a localization of the regular local ring $\\mathcal{R}_{Q^*\\mathcal{R}}$ and hence is regular; therefore $\\mathcal{R}_{\\mathcal{P}}$ is regular by \\cite[Prop. 1.2.5]{GW2}.\n \n Now suppose that all homogeneous localizations of $R$ at homogeneous primes are regular. Let $P$ be any prime ideal. Then $R_{P^*}$ is a regular local ring since it is a localization of $R_{(P^*)}$. Hence $R_P$ is a regular local ring by \\cite[Prop. 1.2.5]{GW2}. Thus $R$ is a regular ring.\n \n Now suppose $(R, \\mathfrak{m})$ is a ${}^*$local ring. Suppose that $R_{\\mathfrak{m}}$ is regular. Let $P$ be any homogeneous prime ideal. Then, $P \\subseteq \\mathfrak{m}$ implies $R_P$ is regular. Since $R = R_{(\\mathfrak{m})}$ we may deduce that $R$ is regular by the first part of the proof. The other implication is immediate.\n\\end{proof}\n\nNow we limit our attention to an affine semigroup ring $\\mathcal{R} = K[S]$ over a field $K$. We let $\\mathfrak{m}$ denote the ideal generated by the noninvertible monomials in $\\mathcal{R}$.\n\n \\begin{notn}\n{\\rm Notice that for elements $\\alpha, \\beta$ of the affine semigroup $S$ the monomial quotient $x^{\\alpha}\/x^{\\beta} \\in \\mathcal{R}_{\\mathfrak{m}}$ if and only if $x^{\\alpha}\/x^{\\beta} \\in \\mathcal{R}$. One way to see this is to use the fact that the colon ideal $(x^{\\beta}\\mathcal{R}: x^{\\alpha})$ is monomial. Now let $P$ be a monomial prime of $\\mathcal{R}$ corresponding to the nonempty face $F$ of $\\ensuremath{\\mathrm{pos}}(S)$. 
Replacing $S$ by $S_F := S - (S \\cap F)$ and $\\mathcal{R}$ by $K[S_F] \\cong K[S]_{(P)}$, where $K[S]_{(P)}$ denotes the homogeneous localization at $P$, we see that \n$$K[\\ensuremath{\\mathrm{grp}}(S)] \\cap K[S]_P = K[S]_{(P)}.$$\nWe will identify $\\ensuremath{\\mathrm{grp}}(S \\cap F)$ with the group of invertible monomials in $\\mathcal{R}_P$ and refer to $S_F$ as the localization of $S$ at $F$.\n\n \nLet $S_0$ denote the subgroup of invertible elements in the affine semigroup $S$ and let $\\widetilde{S}$ denote the quotient monoid $S\/S_0$. It is well known that $K[S]$ is regular if and only if $S$ is the direct sum of a free abelian group $\\mathbb{Z}^{\\ell}$ and a free abelian monoid $\\mathbb{N}^k$ (e.g. see \\cite[Ex. 6.1.11]{BH}). \nWe shall need a slight variant of this result whose proof we omit.}\n \\end{notn} \n \n\\begin{prop} \\label{prop: K[S] regular} The affine semigroup ring $K[S]$ is regular if and only if $\\widetilde{S} \\cong \\mathbb{N}^k$, where $k = \\dim(K[S]_{\\mathfrak{m}})$.\n\n\\end{prop}\n\n\nSuppose the affine semigroup ring is regular. Notice that if the images of the elements $\\gamma_1, \\ldots , \\gamma_k \\in S \\setminus S_0$ form a free basis for $\\widetilde{S}$, then the monomials $x^{\\gamma_1} , \\ldots , x^{\\gamma_k}$ form a regular system of parameters for $K[S]_{\\mathfrak{m}}$. The local version of the above proposition is the following.\n \n \\begin{prop} Let $P$ be a prime ideal in the affine semigroup ring $\\mathcal{R} = K[S]$ over a field $K$ corresponding to the face $F$ of $\\ensuremath{\\mathrm{pos}}(S)$. Then, $\\mathcal{R}_P$ is regular if and only if the quotient $\\widetilde{S_F}$ is free. \n \\end{prop}\n \n \n \\begin{proof} Notice that the group of units in $S_F$ is $\\ensuremath{\\mathrm{grp}}(S \\cap F)$. By Proposition \\ref{prop regular graded ring}, $\\mathcal{R}_P$ is regular if and only if the homogeneous localization $\\mathcal{R}_{(P)} \\cong K[S_F]$ is regular. 
The result now follows immediately from Proposition \\ref{prop: K[S] regular}. \n \\end{proof}\n \n We now turn our attention to an alternate characterization of the regularity condition in the spirit of a result in \\cite{Vi}. One advantage of the alternate characterization is that it can be easily checked using the program NORMALIZ \\cite{Norm}. We first prove some auxiliary results.\n\n \\begin{lemma}\\label{lemma 1} Let $F = \\ensuremath{\\mathrm{pos}}(S) \\cap H$ be a face of the positive cone of the affine semigroup $S$ and $\\gamma_1 , \\ldots , \\gamma_k \\in S$. \nSuppose that $\\widetilde{S_F}$ is a free abelian monoid and the images of $\\gamma_1 , \\ldots , \\gamma_k $ form a basis. \n Let $P$ denote the monomial prime of $K[S]$ corresponding to $F$.\n The following assertions hold.\n \\begin{enumerate}\n\n \\item $F$ is contained in precisely $k$ facets $F_i = \\ensuremath{\\mathrm{pos}}(S) \\cap H_i$ of $\\ensuremath{\\mathrm{pos}}(S)$, and hence $F = F_1 \\cap \\cdots \\cap F_k$;\n \\item $\\sigma_i(\\gamma_j) = \\delta_{i j}$ for all $1 \\le i,j \\le k$, where $\\sigma_i$ is the primitive linear form corresponding to $ H_i$; and\n \\item $\\ensuremath{\\mathrm{grp}}(S \\cap F) = \\ensuremath{\\mathrm{grp}}(S) \\cap H_1 \\cap \\cdots \\cap H_k$.\n \\end{enumerate}\n \\end{lemma}\n \n \n \\begin{proof}\n \n (1) Since $x^{\\gamma_1} , \\ldots , x^{\\gamma_k}$ is a regular system of parameters for $\\mathcal{R}_P$ we know that $x^{\\gamma_i}\\mathcal{R}_P$ is a height one prime of $\\mathcal{R}_P$. Thus there exist facets $F_i$ of $\\ensuremath{\\mathrm{pos}}(S)$ corresponding to the height one primes $P_i$ of $\\mathcal{R}$ such that $x^{\\gamma_i}\\mathcal{R}_P = P_i\\mathcal{R}_P \\, (i = 1, \\ldots , k)$. We have $P_1 + \\cdots + P_k = P$ since they are both primes contained in $P$ and they are equal after localizing at $P$. Suppose $G$ is a facet of $\\ensuremath{\\mathrm{pos}}(S)$ containing $F$ and let $Q = P_G$. 
Then, $Q\\mathcal{R}_P = x^{\\delta}\\mathcal{R}_P$ for some monomial $x^{\\delta}$, since $\\mathcal{R}_P$ is a UFD. Since $x^{\\delta} \\in P_1 + \\cdots + P_k$ we must have $x^{\\delta} \\in P_i$ for some $i$. Hence $Q = P_i$. So $F_1, \\ldots , F_k$ are precisely the facets of $\\ensuremath{\\mathrm{pos}}(S)$ containing $F$ and $P_1, \\ldots , P_k$ are precisely the height one primes contained in $P$. Thus $F = F_1 \\cap \\cdots \\cap F_k$. \n \n (2) By construction, $\\sigma_i(\\gamma_i) > 0$. Suppose, for contradiction, that $\\sigma_i(\\gamma_j) > 0$ for some $j \\ne i$. Then, $x^{\\gamma_j} \\in P_i$, which implies $P_j\\mathcal{R}_P \\subseteq P_i\\mathcal{R}_P$, which is absurd. Hence $\\sigma_i(\\gamma_j) = 0$ for $i \\ne j$. Since $\\sigma_i$ is primitive, we must have $\\sigma_i(\\gamma_i) = 1$.\n \n (3) Suppose that $\\alpha, \\beta \\in S$ and $\\alpha - \\beta \\in H_1 \\cap \\cdots \\cap H_k$. There exist nonnegative integers $a_1, \\ldots , a_k, b_1, \\ldots, b_k $ such that $\\alpha - \\sum a_i\\gamma_i, \\beta - \\sum b_i\\gamma_i \\in \\ensuremath{\\mathrm{grp}}(S \\cap F)$. Since $\\alpha - \\beta \\in H_1 \\cap \\cdots \\cap H_k$, we must have $a_i = b_i \\; (i = 1 , \\ldots , k)$ by (2). Hence $\\alpha - \\beta \\in \\ensuremath{\\mathrm{grp}}(S \\cap F)$. Since the opposite containment is clear we have equality of groups.\n \\end{proof}\n \n \\begin{lemma}\\label{lemma 2} Let $F$ be a face of $\\ensuremath{\\mathrm{pos}}(S)$ that is the intersection of $k$ facets $F_1 = \\ensuremath{\\mathrm{pos}}(S) \\cap H_1, \\ldots , F_k = \\ensuremath{\\mathrm{pos}}(S) \\cap H_k$ of $\\ensuremath{\\mathrm{pos}}(S)$ and let $\\sigma_i$ be the primitive linear form associated with $H_i \\; (i = 1, \\ldots , k)$. 
Suppose that\n \\begin{enumerate}\n \\item there exist $\\gamma_1 , \\ldots , \\gamma_k \\in S$ such that $\\sigma_i(\\gamma_j) = \\delta_{i j} \\mbox{ for all } 1 \\le i, j \\le k$; and\n \\item $\\ensuremath{\\mathrm{grp}}(S \\cap F) = \\ensuremath{\\mathrm{grp}}(S) \\cap H_1 \\cap \\cdots \\cap H_k$. \n \\end{enumerate}\n Then, the quotient $\\widetilde{S_F} = S_F\/U(S_F)$ of $S_F$ by its group of units is a free monoid with basis given by the images of $\\gamma_1 , \\ldots , \\gamma_k$ in the quotient monoid.\n \\end{lemma}\n \n \\begin{proof} The proof is straightforward. Suppose $\\alpha \\in S$ and $\\sigma_i(\\alpha) = a_i \\; (i = 1, \\ldots, k)$. Then $\\alpha - (\\sum a_i \\gamma_i) \\in \\ensuremath{\\mathrm{grp}}(S) \\cap H_1 \\cap \\cdots \\cap H_k = \\ensuremath{\\mathrm{grp}}(S \\cap F)$ implies the image of $\\sum a_i \\gamma_i$ in the quotient monoid is $\\overline{\\alpha}$. Suppose that $a_i , b_i \\in \\mathbb{N}$ and $\\sum a_i \\overline{\\gamma_i} = \\sum b_i \\overline{\\gamma_i}$. Then there exists $\\mu \\in \\ensuremath{\\mathrm{grp}}(S \\cap F)$ such that $\\sum a_i\\gamma_i + \\mu = \\sum b_i \\gamma_i$. Applying $\\sigma_i$ and using conditions (1) and (2) (which give $\\sigma_i(\\mu) = 0$), we must have $a_i = b_i \\; (i = 1, \\ldots , k)$. 
Hence $\\widetilde{S_F}$ is free with the asserted basis.\n \\end{proof}\n\nCombining the previous two lemmas we immediately obtain the following characterization.\n\n\\begin{theorem}\\label{thm: R_k} An affine semigroup ring $\\mathcal{R} = K[S]$ satisfies condition $\\mathrm{R}_{\\ell}$ of Serre if and only if for each positive integer $k \\le \\ell$ and any face $F$ of $\\ensuremath{\\mathrm{pos}}(S)$ such that $\\mathrm{ht}(P_F) = k$ there exist facets $F_1, F_2, \\ldots, F_k$ of $\\ensuremath{\\mathrm{pos}}(S)$ such that $F = F_1 \\cap F_2 \\cap \\cdots \\cap F_k$ and the following conditions hold:\n\n\\begin{enumerate}\n\\item there exist $\\boldsymbol{\\gamma}_1, \\ldots, \\boldsymbol{\\gamma}_k \\in S$ such that $\\sigma_i(\\boldsymbol{\\gamma}_j) = \\delta_{i j} \\mbox{ for all } 1 \\le i, j \\le k$; and\n\\item $\\ensuremath{\\mathrm{grp}}(S \\cap F_1 \\cap \\cdots \\cap F_k) = \\ensuremath{\\mathrm{grp}}(S) \\cap H_1 \\cap \\cdots \\cap H_k$.\n\\end{enumerate}\n\\end{theorem}\n \nWe end this section with an example of an affine semigroup ring that satisfies condition R$_2$ of Serre but does not satisfy condition S$_2$. It was inspired by an example suggested to the author by I. Swanson.\n\n\\begin{example} Suppose $K$ is a field and\nconsider the semigroup $S$ of $\\mathbb{Z}^3$ generated by the following vectors: \n$$(1,0,0), (1,3,0), (1,0,3), (1,1,0), (1,2,0), (1,0,1), (1,0,2), (1,2,1), \\mbox{and } (1,1,2).$$\n\nNotice that $\\ensuremath{\\mathrm{grp}}(S) = \\mathbb{Z}^3$ and that $\\ensuremath{\\mathrm{pos}}(S) = H_2^+ \\cap H_3^+ \\cap H_4^+$, where \n$H_2 = \\{(a,b,c) \\mid b = 0\\}$ and $H_3 = \\{(a,b,c) \\mid c = 0\\}$ are coordinate hyperplanes and $H_4$ is defined by the\n primitive linear form $\\sigma$, where $\\sigma(a,b,c) = 3a - b - c$. Thus $\\ensuremath{\\mathrm{pos}}(S)$ has 3 \n facets $F_2, F_3, F_4$ and 3 codimension 2 faces $F_{2 3} = F_2 \\cap F_3, F_{2 4} = \n F_2 \\cap F_4, F_{3 4} = F_3 \\cap F_4$. 
One checks that $\\ensuremath{\\mathrm{grp}}(S \\cap F_2) = \\ensuremath{\\mathrm{grp}}(\\{ \n (1,0,0), (1,0,1) \\}) = \\ensuremath{\\mathrm{grp}}(S) \\cap H_2$ and by symmetry, $\\ensuremath{\\mathrm{grp}}(S \\cap F_3) = \\ensuremath{\\mathrm{grp}}(S) \\cap \n H_3$. We also have $\\ensuremath{\\mathrm{grp}}(S \\cap F_4) = \\ensuremath{\\mathrm{grp}}(\\{ (1,3,0), (1,0,3), (1,2,1), (1,1,2)\\}) = \\ensuremath{\\mathrm{grp}}(\\{ \n (1,0,3), (0,1,-1)\\}) = \\ensuremath{\\mathrm{grp}}(S) \\cap H_4$. One can also verify that $\\ensuremath{\\mathrm{grp}}(S \\cap F_2 \\cap\n F_3) = \\ensuremath{\\mathrm{grp}}(\\{ (1,0,0) \\}) = \\ensuremath{\\mathrm{grp}}(S) \\cap H_2 \\cap H_3$, $\\ensuremath{\\mathrm{grp}}(S \\cap F_2 \\cap F_4) = \\ensuremath{\\mathrm{grp}}(\\{ \n (1,0,3) \\}) = \\ensuremath{\\mathrm{grp}}(S) \\cap H_2 \\cap H_4$, and by symmetry $\\ensuremath{\\mathrm{grp}}(S \\cap F_3 \\cap F_4) = \n \\ensuremath{\\mathrm{grp}}(S) \\cap H_3 \\cap H_4$. So the group conditions for the affine semigroup ring \n $K[S]$ to satisfy R$_2$ are satisfied. For each codimension 2 face $F_{ij}$ we must \n produce two vectors $\\boldsymbol{\\gamma}_i, \\boldsymbol{\\gamma}_j$ satisfying $\\sigma_i(\\boldsymbol{\\gamma}_j) =\n \\delta_{i j}$, where $\\sigma_2(a,b,c) = b$, $\\sigma_3(a,b,c) = c$, and \n $\\sigma_4 = \\sigma$ is defined above. The vectors are given below.\n$$\\begin{array}{ccc}\n(i,j) &\\boldsymbol{\\gamma}_i & \\boldsymbol{\\gamma}_j \\\\\n(2,3) & (1,1,0) & (1,0,1) \\\\\n(2,4) & (1,1,2) & (1,0,2) \\\\\n(3,4) & (1,2,1) & (1,2,0)\n\\end{array}$$\nThus condition (1) in Theorem \\ref{thm: R_k} is satisfied and we may conclude that $K[S]$ is regular in codimension 2. However, as we shall now see, $K[S]$ is not normal,\n so it cannot satisfy S$_2$. \n \n Notice that $(1,1,1) = \\frac{1}{3}(1,0,0) + \\frac{1}{3}(1,3,0) + \\frac{1}{3}(1,0,3)$, so $(1,1,1) \\in \\ensuremath{\\mathrm{pos}}(S)$. 
However, $(1,1,1) \\notin S$ and hence $K[S]$ is not normal.\n \n \n There is another, more transparent way to see that $K[S]$ is not normal. Consider the injective homomorphism $\\varphi: \\mathbb{Z}^3 \\rightarrow \\mathbb{Z}^3$ defined by $\\varphi(\\mathbf{e}_1) = (3,0,0)$, $\\varphi(\\mathbf{e}_2) = (-1,1,0)$, and $\\varphi(\\mathbf{e}_3) = (-1,0,1)$. The image of $S$ is the semigroup $\\tilde{S}$ generated by the vectors \n $$(3,0,0), (0,3,0), (0,0,3), (2,1,0), (1,2,0), (2,0,1), (1,0,2), (0,2,1), (0,1,2).$$\n Notice that we have listed all 3-tuples of non-negative integers whose components sum to \n 3 except $(1,1,1)$. This isomorphism of semigroups induces an isomorphism of $K[S]$ \n and $K[\\tilde{S}] \\cong K[x^3, y^3, z^3, x^2y, xy^2, x^2z, xz^2, y^2z, yz^2]$. The latter \n has normalization $K[x^3, y^3, z^3, xyz, x^2y, xy^2, x^2z, xz^2, y^2z, yz^2]$, which is the\n third Veronese subring of $K[x,y,z]$. Hence $K[S]$ is not normal.\n\\end{example}\n\n\n\\section{The Rees Algebras of a Special Class of Monomial Ideals}\n\nWe now look at the Rees algebras of a special class of integrally closed monomial ideals.\n\n\\begin{notn}\\label{blam}\n{\\rm Let $\\boldsymbol{\\lambda} = (\\ensuremath{{\\lambda}}_1, \\ldots, \\ensuremath{{\\lambda}}_n)$ be a tuple of positive integers and $J = J(\\boldsymbol{\\lambda}) = (x_1^{\\ensuremath{{\\lambda}}_1}, \\ldots, x_n^{\\ensuremath{{\\lambda}}_n})$, where the ideal is taken inside the polynomial ring $K[\\xvec{n}] =: R$, and $I = I(\\boldsymbol{\\lambda}) = \\bar{J}$. Thus $I$ is an integrally closed monomial ideal with minimal monomial reduction $J$. Let $L = \\ensuremath{\\mathrm{lcm}}(\\ensuremath{{\\lambda}}_1, \\ldots , \\ensuremath{{\\lambda}}_n)$, $\\omega_i = L\/\\ensuremath{{\\lambda}}_i \\; (i = 1, \\ldots , n)$, and $\\boldsymbol{\\omega} = (\\omega_1 , \\ldots , \\omega_n)$. 
Notice that $L = dw$, where $d = \\gcd(\\lambda_1 , \\ldots , \\lambda_n)$.}\n\\end{notn}\n\nWe will characterize those monomial ideals $I(\\boldsymbol{\\lambda})$ whose Rees algebras satisfy $\\mathrm{R}_{\\ell}$ for some $\\ell < n$. \n\n\nObserve that the Rees algebra $R[It]$ of a monomial ideal\n$I$ can always be identified with an affine semigroup ring over $K$. Namely, if $I =\n(x^{\\boldsymbol{\\beta}_1}, \\ldots, x^{\\boldsymbol{\\beta}_r})$ and $S(I) = \\langle (\\mathbf{e}_1,0),\n\\ldots, (\\mathbf{e}_n,0),(\\boldsymbol{\\beta}_1,1), \\ldots, (\\boldsymbol{\\beta}_r,1) \\rangle \\subseteq\n\\mathbb{N}^{n+1}$, then $R[It] \\cong K[S(I)]$. \n\nIn case $\\mathcal{R} := R[It]$ is the Rees algebra of $I = I(\\boldsymbol{\\lambda})$ the condition that every height $k$ monomial prime corresponds to an intersection of precisely $k$ facets and condition (2) of Theorem \\ref{thm: R_k} automatically hold as we shall see below. First we describe the height $k$ monomial primes of $\\mathcal{R}$. \n\n The facets $F_{\\sigma}, F_1, \\ldots , F_{n+1}$ of\n$\\ensuremath{\\mathrm{pos}}(S)$ are cut out by the supporting hyperplanes \\\\ $H_{\\sigma}, H_1,\n\\ldots, H_{n+1}$ where $\\sigma(\\boldsymbol{\\alpha}, a_{n+1})=\\boldsymbol{\\omega} \\cdot \\boldsymbol{\\alpha}\n- La_{n+1}$ and $ H_1,\n\\ldots, H_{n+1}$ are the coordinate hyperplanes in $\\mathbb{R}^{n+1}$. 
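To make this notation concrete, here is a small worked example (ours, not taken from \cite{Vi} or \cite{Co}) of the ideal $I(\boldsymbol{\lambda})$, the semigroup $S(I)$, and the linear form $\sigma$.

```latex
% Worked example (ours): the data of Notation \ref{blam} for lambda = (2,3).
\begin{example}
Take $n = 2$ and $\boldsymbol{\lambda} = (2,3)$, so that $J = (x_1^2, x_2^3)$,
$L = 6$, and $\boldsymbol{\omega} = (3,2)$. A monomial $x_1^{a_1}x_2^{a_2}$ lies
in $I = \bar{J}$ exactly when $3a_1 + 2a_2 \ge 6$, so
$I = (x_1^2, x_1x_2^2, x_2^3)$ and
$$S(I) = \langle (1,0,0), (0,1,0), (2,0,1), (1,2,1), (0,3,1) \rangle
\subseteq \mathbb{N}^3,$$
with $\sigma(a_1,a_2,a_3) = 3a_1 + 2a_2 - 6a_3$ cutting out the facet
$F_{\sigma}$; for instance $\sigma(1,2,1) = 1 > 0$ while $\sigma(2,0,1) = 0$.
\end{example}
```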
Notice that with this notation a generating set for $S(I)$ is\n$$\\{ (a_1,\\ldots,a_n,d) \\in \\mathbb{N}^{n+1} \\mid a_1\\omega_1 + \\cdots + a_n \\omega_n \\ge dL \\mbox{ and } d \\le 1 \\}.$$\nNotice that $(\\mathbf{e}_1,0), \\ldots , (\\mathbf{e}_n,0), (\\ensuremath{{\\lambda}}_1\\mathbf{e}_1,1) \\in S$ and hence $\\ensuremath{\\mathrm{grp}}(S) = \\mathbb{Z}^{n+1}$, i.e., $S$ is full-dimensional.\n\nThe following description of the height one monomial primes of\n$R[It]$ appeared in \\cite{Vi}.\n\n\\begin{lemma}\\label{lem: ht 1 monl prime} For a monomial ideal $I=I(\\boldsymbol{\\lambda})$ the height one\nmonomial primes of $R[It]$ are as follows:\n\\begin{eqnarray*}\nP_i &= & (x_i) + (x^{\\boldsymbol{\\beta}_j}t \\mid \\mathbf{e}_i \\le_{pr} \\boldsymbol{\\beta}_j) \\; \\mbox{for} \\; i=1,\n\\ldots, n;\\\\\nP_{n+1} &=& (x^{\\boldsymbol{\\beta}_1}t, \\ldots, x^{\\boldsymbol{\\beta}_r}t); \\mbox{ and} \\\\\n P_{\\sigma} &= & (x_1, \\ldots, x_n) + (x^{\\boldsymbol{\\beta}_j}t \\mid\n\\sigma(\\boldsymbol{\\beta}_j,1) > 0).\n\\end{eqnarray*}\n\\end{lemma}\n\nWe wish to describe the height $k$ monomial primes. Towards this end we show that every codimension $k$ face of $\\ensuremath{\\mathrm{pos}}(S)$ is the intersection of precisely $k$ facets of $\\ensuremath{\\mathrm{pos}}(S)$ for each $k$ such that $1 \\le k \\le n$. 
Notice that \n\\begin{eqnarray*}\n\\ensuremath{\\mathrm{pos}}(S(I)) &=& \\ensuremath{\\mathrm{pos}}(S(J)) \\\\\n&=& \\ensuremath{\\mathrm{pos}}((\\mathbf{e}_1,0), \\ldots, (\\mathbf{e}_n,0), (\\ensuremath{{\\lambda}}_1\\mathbf{e}_1,1), \\ldots , (\\ensuremath{{\\lambda}}_n\\mathbf{e}_n,1)).\n\\end{eqnarray*}\n\n Alternate proofs that the following are the height $k$ monomial primes of $R[It]$ can be found in \\cite{Co}.\n\nIn the next few paragraphs, given integers $1 \\le i_1 < i_2 < \\cdots < i_k \\le n$ let $1 \\le j_1< \\cdots < j_{n-k} \\le n$ be such that $\\{i_1, \\ldots , i_k, j_1, \\ldots , j_{n-k} \\} = \\{ 1 , \\ldots , n \\}$.\n\n\\begin{lemma}\\label{lem: face 1} For $k < n$ and integers $1 \\le i_1 < i_2 < \\cdots < i_k \\le n$, let \\\\ $F= F_{i_1} \\cap \\cdots \\cap F_{i_k}$. Then,\n\\begin{enumerate}\n\\item $ F = \\ensuremath{\\mathrm{pos}}((\\mathbf{e}_{j_1},0), \\ldots , (\\mathbf{e}_{j_{n-k}},0), (\\ensuremath{{\\lambda}}_{j_1}\\mathbf{e}_{j_1},1), \\ldots , (\\ensuremath{{\\lambda}}_{j_{n-k}}\\mathbf{e}_{j_{n-k}},1));$\n\\item $\\mathrm{codim}(F_{i_1} \\cap \\cdots \\cap F_{i_k}) = k$ and \n$P_F = (x_{i_1}, \\ldots , x_{i_k}) + (x^{\\boldsymbol{\\beta}}t \\mid (\\boldsymbol{\\beta} ,1) \\in S \\setminus F);$ and \n\\item $\\widetilde{S_F}$ is free with basis given by the images of $(\\mathbf{e}_{i_1},0), \\ldots , (\\mathbf{e}_{i_k}, 0)$ and hence $R[It]$ localized at $P_F$ is regular.\n\\end{enumerate} \n\\end{lemma}\n\n\\begin{proof} Let $\\sigma_i(\\boldsymbol{\\alpha}, a_{n+1}) = a_i$ for $1 \\le i \\le n+1$. \nThe first assertion is a consequence of the fact that $\\sigma_i$ of a sum of vectors in $\\ensuremath{\\mathrm{pos}}(S)$ is zero if and only if $\\sigma_i$ of each summand is zero. The codimension statement follows immediately. The description of the corresponding monomial prime comes from looking at the generators of the prime ideal $S \\setminus F$ of $S$. 
\n\nTo see that the images of $(\\mathbf{e}_{i_1},0), \\ldots , (\\mathbf{e}_{i_k}, 0)$ generate the quotient monoid it suffices to consider generators of $S$ of the form $(\\boldsymbol{\\beta},1)$ that are not in $S \\cap F$. For such a generator, writing $\\boldsymbol{\\beta} = (b_1, \\ldots , b_n)$, we have $b_{i_s} > 0$ for some $s$. \n \nThen, \n\\begin{eqnarray*}\n (\\boldsymbol{\\beta},1) &\\equiv &((\\boldsymbol{\\beta},0) - (\\ensuremath{{\\lambda}}_{j_1} \\mathbf{e}_{j_1}, 0)) + (\\ensuremath{{\\lambda}}_{j_1} \\mathbf{e}_{j_1}, 1) \\pmod{\\ensuremath{\\mathrm{grp}}(S \\cap F)} \\\\\n &\\equiv& (b_{i_1}\\mathbf{e}_{i_1},0) + \\cdots + (b_{i_k}\\mathbf{e}_{i_k},0) \\hspace{.35in} \\pmod{\\ensuremath{\\mathrm{grp}}(S \\cap F)}.\n \\end{eqnarray*}\nThus the images of $(\\mathbf{e}_{i_1},0), \\ldots , (\\mathbf{e}_{i_k}, 0)$ form a free basis for the quotient $\\widetilde{S_F}$, since uniqueness of representation is clear.\n\\end{proof}\n\n\nNext we consider the case where one of the facets in the intersection is $F_{n+1}$. The proof follows from the same observations as the preceding proof, so we omit it.\n\n\\begin{lemma} \\label{lem: face 2} For integers $1 \\le i_1 < i_2 < \\cdots < i_{k} \\le n$ let $F=F_{i_1} \\cap \\cdots \\cap F_{i_{k}} \\cap F_{n+1}$. The following hold. \n\\begin{enumerate}\n\\item $F= \\ensuremath{\\mathrm{pos}}((\\mathbf{e}_{j_1}, 0) , \\ldots, (\\mathbf{e}_{j_{n-k}}, 0)).$ \n\\item $\\mathrm{codim}(F) = k+1$ and the corresponding monomial prime is \n$P_F = (x_{i_1}, \\ldots , x_{i_k}) + (x^{\\boldsymbol{\\beta}}t \\mid (\\boldsymbol{\\beta}, 1) \\in S).$\n\\item If $k < n$, then $\\widetilde{S_F}$ is free with basis given by the images of the vectors $(\\mathbf{e}_{i_1},0), \\ldots , (\\mathbf{e}_{i_k},0), (\\mathbf{e}_{j_1},1)$. 
In case $k < n$ the Rees algebra $R[It]$ localized at $P_F$ is regular.\n\\end{enumerate}\n\\end{lemma}\n\nNotice that $F_1 \\cap \\cdots \\cap F_n = \\{(0, \\ldots , 0)\\} = F_{n+1} \\cap F_{\\sigma}$ and the corresponding monomial prime is $\\mathfrak{m} = (x_1 , \\ldots , x_n) + (x^{\\boldsymbol{\\beta}}t \\mid (\\boldsymbol{\\beta} , 1) \\in S)$, the maximal monomial prime of $K[S]$. In this case, the codimension drops more than the expected amount. We can still realize the apex of the cone as the intersection of $n+1$ facets, namely $ \\{(0, \\ldots , 0)\\} = F_1 \\cap \\cdots \\cap F_{n+1}$. Notice $S_0 = \\{ 0 \\}$ and $\\widetilde{S}$ is not free provided that $n > 1$.\n\nNow we involve the facet $F_{\\sigma}$. By the above lemmas these are the only faces we need to worry about when characterizing which Rees algebras $R[It]$ are regular in codimension $k \\le n$. The proof of the next lemma is a consequence of the same observations and is omitted. \n\n\\begin{lemma} \\label{lem: face 3} For integers $1 \\le i_1 < i_2 < \\cdots < i_k \\le n$ let $F = F_{i_1} \\cap \\cdots \\cap F_{i_k} \\cap F_{\\sigma}$. The following hold.\n\\begin{enumerate}\n\\item $F = \\ensuremath{\\mathrm{pos}}((\\ensuremath{{\\lambda}}_{j_1}\\mathbf{e}_{j_1},1), \\ldots , (\\ensuremath{{\\lambda}}_{j_{n-k}}\\mathbf{e}_{j_{n-k}},1)).$ \n\\item $\\mathrm{codim}(F_{i_1} \\cap \\cdots \\cap F_{i_k} \\cap F_{\\sigma} ) = k+1$ and the corresponding monomial prime is\n$P_F = (x_{i_1}, \\ldots , x_{i_k}) + (x^{\\boldsymbol{\\beta}}t \\mid (\\boldsymbol{\\beta}, 1) \\in S \\setminus F).$\n\\end{enumerate}\n\\end{lemma}\n \n \n\nWe now show that\ncondition (2) of Theorem \\ref{thm: R_k} is always satisfied by the positive dimensional faces of $\\ensuremath{\\mathrm{pos}}(S)$ that are contained in $F_{\\sigma}$. 
The condition is a priori true for positive dimensional faces not contained in $F_{\\sigma}$ by Lemmas \\ref{lem: face 1} and \\ref{lem: face 2}.\n\n\\begin{lemma}\\label{lem: ker Z^n to Z} Let $\\boldsymbol{\\omega} = (\\omvec{n})$ be a tuple of positive integers with $n \\ge 2$ and let \n$\\phi: \\mathbb{Z} ^n \\rightarrow \\mathbb{Z} $ be defined by $\\phi(\\boldsymbol{\\alpha}) = \\boldsymbol{\\omega} \\cdot \\boldsymbol{\\alpha}$. For each $1 \\le i < j \\le n$ let $r_{ij} = \\gcd(\\omega_i, \\omega_j)$. Then, the kernel of $\\phi$ is generated by the tuples $\\boldsymbol{\\mu}_{ij} = \\frac{\\omega_j}{r_{ij}}\\mathbf{e}_i - \\frac{\\omega_i}{r_{ij}}\\mathbf{e}_j \\; \\; (1 \\le i < j \\le n)$, where $\\mathbf{e}_1, \\ldots , \\mathbf{e}_n$ are the standard basis vectors for $\\mathbb{Z}^n$. Furthermore, for integers $1 \\le i_1 < i_2 < \\cdots < i_k \\le n$ we have $\\ker(\\phi) \\cap H_{i_1} \\cap \\cdots \\cap H_{i_k}$ is generated by the vectors $\\boldsymbol{\\mu}_{ij}$ that lie in $H_{i_1} \\cap \\cdots \\cap H_{i_k}$.\n\\end{lemma}\n\\begin{proof} This lemma is used in Gröbner basis theory, but for the reader's convenience we supply a proof. \n\nWe proceed by induction on $n$, the case $n=2$ being straightforward. Suppose $n > 2$ and the assertion holds for $n - 1$. Assume $\\boldsymbol{\\beta} = (b_1, \\ldots, b_n) \\in \\ker(\\phi)$. Let $g = \\gcd(\\omega_1, \\ldots, \\omega_n)$. Then, \\begin{equation}\n\\label{ 1}\nb_n\\frac{\\omega_n}{g}= - (b_1\\frac{\\omega_1}{g} + \\cdots + b_{n-1}\\frac{\\omega_{n-1}}{g}), \n\\end{equation} \nwhich implies\n\\begin{equation}\n\\label{2 }\nb_n = \\sum_{i=1}^{n-1} c_i\\frac{\\omega_i}{g} = \\sum_{i=1}^{n-1} c_is_i\\frac{\\omega_i}{r_{in}},\n\\end{equation}\nfor some integers $c_1, \\ldots , c_{n-1}$, where $r_{in} = s_ig \\, (i=1, \\ldots, n-1)$. 
Then,\n\\begin{equation}\n\\label{ 3}\n\\boldsymbol{\\beta} + \\sum_{i=1}^{n-1} c_is_i\\boldsymbol{\\mu}_{in} = (b_1^{\\prime}, \\ldots, b_{n-1}^{\\prime},0) \\in \\ker(\\phi),\n\\end{equation}\nand the assertion then follows from the induction hypothesis. Notice that if $\\boldsymbol{\\beta} \\in \\ker(\\phi) \\cap H_{i_1} \\cap \\cdots \\cap H_{i_k}$ then each step only involves vectors in $H_{i_1} \\cap \\cdots \\cap H_{i_k}$.\n\\end{proof}\n\nThis enables us to prove that in our setting the group property is automatic. The following two results appear in the unpublished thesis of H. Coughlin \\cite{Co}.\n\n\\begin{lemma} \\label{lem: group 2} For integers $1 \\le i_1 < i_2 < \\cdots < i_k \\le n$ we always have\n$$\\ensuremath{\\mathrm{grp}}(S \\cap F_{i_1} \\cap \\cdots \\cap F_{i_k} \\cap F_{\\sigma}) = \\ensuremath{\\mathrm{grp}}(S) \\cap H_{i_1} \\cap \\cdots \\cap H_{i_k} \\cap H_{\\sigma}.$$\n\\end{lemma}\n\n\\begin{proof} Recall that $\\ensuremath{\\mathrm{grp}}(S) = \\mathbb{Z}^{n+1}$. The containment $\\ensuremath{\\mathrm{grp}}(S \\cap F_{i_1} \\cap \\cdots \\cap F_{i_k} \\cap F_{\\sigma}) \\subseteq \\ensuremath{\\mathrm{grp}}(S) \\cap H_{i_1} \\cap \\cdots \\cap H_{i_k} \\cap H_{\\sigma} = \\mathbb{Z}^{n+1}\\cap H_{i_1} \\cap \\cdots \\cap H_{i_k} \\cap H_{\\sigma} $ is clear. \n\nIf $k = n$ then $\\ensuremath{\\mathrm{grp}}(S) \\cap H_{i_1} \\cap \\cdots \\cap H_{i_k} \\cap H_{\\sigma} = \\mathbb{Z}^{n+1} \\cap H_1 \\cap \\cdots \\cap H_n \\cap H_{\\sigma} = \\{ \\mathbf{0} \\}$ and the assertion follows. Now assume $k < n$. Suppose $(\\boldsymbol{\\beta},d) \\in \\mathbb{Z}^{n+1}\\cap H_{i_1} \\cap \\cdots \\cap H_{i_k} \\cap H_{\\sigma} $. Then, $\\boldsymbol{\\beta} - d\\ensuremath{{\\lambda}}_{j_1}\\mathbf{e}_{j_1} \\in \\ker(\\phi)$, where $\\phi$ is defined as in the preceding lemma. 
By that lemma and the fact that $(\\ensuremath{{\\lambda}}_{j_1}\\mathbf{e}_{j_1},1) \\in S \\cap F_{i_1} \\cap \\cdots \\cap F_{i_k} \\cap F_{\\sigma} $, it suffices to show that $(\\boldsymbol{\\mu}_{ij},0) \\in \\ensuremath{\\mathrm{grp}}(S \\cap H_{i_1} \\cap \\cdots \\cap H_{i_k} \\cap H_{\\sigma})$ for $1 \\le i < j \\le n$ and $\\boldsymbol{\\mu}_{i j} \\in H_{i_1} \\cap \\cdots \\cap H_{i_k}$, where we are viewing $H_{i_j}$ as a coordinate hyperplane in either $\\mathbb{R}^n$ or $\\mathbb{R}^{n+1}$. \n\nLet $1 \\le i < j \\le n$, set $r = \\gcd(\\omega_i,\\omega_j)$ and choose $d \\in \\mathbb{Z}$ such that $0 \\le d\\ensuremath{{\\lambda}}_j - \\frac{\\omega_i}{r} < \\ensuremath{{\\lambda}}_j$. Multiplying by $\\omega_j$ and dividing by $\\omega_i$ we get $0 \\le d\\ensuremath{{\\lambda}}_i - \\frac{\\omega_j}{r} < \\ensuremath{{\\lambda}}_i$ which implies $0 < \\frac{\\omega_j}{r} - (d-1)\\ensuremath{{\\lambda}}_i \\le \\ensuremath{{\\lambda}}_i$. Then,\n\\begin{eqnarray*}\n(\\boldsymbol{\\mu}_{ij},0) &=& (\\frac{\\omega_j}{r}\\mathbf{e}_i +(d\\ensuremath{{\\lambda}}_j - \\frac{\\omega_i}{r})\\mathbf{e}_j,d) - d(\\ensuremath{{\\lambda}}_j\\mathbf{e}_j,1) \\\\\n& = & ((\\frac{\\omega_j}{r} - (d-1)\\ensuremath{{\\lambda}}_i)\\mathbf{e}_i + (d\\ensuremath{{\\lambda}}_j - \\frac{\\omega_i}{r})\\mathbf{e}_j,1) + (d-1)(\\ensuremath{{\\lambda}}_i\\mathbf{e}_i,1)-d(\\ensuremath{{\\lambda}}_j\\mathbf{e}_j,1) \\\\\n& \\in & \\ensuremath{\\mathrm{grp}}(S \\cap F_{i_1} \\cap \\cdots \\cap F_{i_k} \\cap F_{\\sigma}), \n\\end{eqnarray*}\nsince each of the 3 tuples involved is a generator of $S$ that is in $F_{i_1} \\cap \\cdots \\cap F_{i_k} \\cap F_{\\sigma}$.\n\\end{proof}\n\n\nCombining Theorem \\ref{thm: R_k} with the preceding lemma we obtain the following result.\n\n\\begin{theorem} \\label{thm: R_k for Rees} For a positive integer $\\ell < n$ the Rees \nalgebra of $I = I(\\boldsymbol{\\lambda})$ over a field satisfies condition $\\mathrm{R}_{\\ell + 1}$ of 
\nSerre if and only if for all sequences of positive integers $1 \\le i_1 < \\cdots < i_{\\ell} \\le n$\nthere exist $\\boldsymbol{\\gamma}_i = (\\boldsymbol{\\beta}_i , 1) \\in \\mathbb{N}^{n+1} \\; (i = 1 , \\ldots , \\ell +1)$ such that $\\sigma_i(\\boldsymbol{\\gamma}_j) = \n\\delta_{i j} \\, (1 \\le i,j \\le \\ell + 1)$, where $\\sigma_1 , \\ldots , \\sigma_{\\ell +1}$ are the primitive linear forms associated with the hyperplanes $H_{i_1} , \\ldots , H_{i_{\\ell}}, H_{\\sigma}$. \n\\end{thm}\n\n\\begin{proof} First let us determine when $\\mathrm{R}_{\\ell + 1}$ holds. We need only consider faces contained in $F_{\\sigma}$. By Lemma \\ref{lem: group 2} and\n Theorem \\ref{thm: R_k} we need only show that condition (1) of Theorem \\ref{thm: R_k}\n holds for such faces. Let $1 \\le k \\le \\ell +1$ and let $Q$ be a height $k$ monomial prime corresponding to\n a face contained in the facet $F_{\\sigma}$. Then $Q$ is contained in a height $\\ell +1$\n monomial prime corresponding to a face contained in the facet $F_{\\sigma}$ by Lemma\n \\ref{lem: face 3}. Thus it suffices to establish condition (1) of Theorem \\ref{thm: R_k} for\n height $\\ell+1$ monomial primes whose faces are contained in the facet $F_{\\sigma}$. \nHence it is necessary and sufficient that there exist vectors $\\boldsymbol{\\gamma}_i \\in S(I) \\; (i = 1 , \\ldots , \\ell +1)$ such that\n $\\sigma_i(\\boldsymbol{\\gamma}_j) = \\delta_{i j}$, where $\\sigma_1 , \\ldots , \\sigma_{\\ell +1}$ are the\n primitive linear forms associated with the hyperplanes $H_{i_1} , \\ldots , H_{i_{\\ell}},\n H_{\\sigma}$. \n \n Recall that the generators of $S(I)$ have $(n+1)^{\\mathrm{st}}$ component 0 or 1. Write each $\\boldsymbol{\\gamma}_j$ as a sum of generators of $S(I)$. First suppose that $1 \\le j \\le \\ell $. 
The condition\n that $\\sigma_i(\\boldsymbol{\\gamma}_j) = \\delta_{i j} \\; (i = 1, \\ldots , \\ell + 1)$ forces some summand $(\\boldsymbol{\\beta}_j, 1)$ of \n $\\boldsymbol{\\gamma}_j$ to satisfy $\\sigma_i(\\boldsymbol{\\beta}_j,1) = \\delta_{i j}$ for all $1 \\le i \\le \\ell +1$. Replacing $\\boldsymbol{\\gamma}_j$ by this summand we may assume that $\\boldsymbol{\\gamma}_j = (\\boldsymbol{\\beta}_j,1).$ \n Now \n consider the summands involved in the expression for $\\boldsymbol{\\gamma}_{\\ell + 1}$. Consider \n first the possibility that all summands have $(n+1)^{\\mathrm{st}}$ component 0. Then,\n each summand has positive $\\sigma$-value so there is only one summand \n $\\boldsymbol{\\gamma}_{\\ell +1} = (\\mathbf{e}_j ,0)$, where $j \\in \\{j_1, \\ldots , j_{n-\\ell} \\}$ and \n $\\omega_j = 1$. In this case we also have that $((L+1)\\mathbf{e}_{j} ,1)$ satisfies the requirements and we may replace $\\boldsymbol{\\gamma}_{\\ell+1}$ by $((L+1)\\mathbf{e}_{j} ,1)$. The remaining possibility is that some summand has $(n+1)^{\\mathrm{st}}$ component 1 and again we may replace $\\boldsymbol{\\gamma}_{\\ell+1}$ by this summand. In any case, we may assume $\\boldsymbol{\\gamma}_{\\ell + 1}$ has $(n+1)^{\\mathrm{st}}$ component 1. Conversely, if vectors $(\\boldsymbol{\\beta}_j, 1) \\in \\mathbb{N}^{n+1}$ satisfying $\\sigma_i(\\boldsymbol{\\beta}_j,1) = \\delta_{i j}$ for all $1 \\le i, j \\le \\ell +1$ exist, they are automatically in $S(I)$ since $I$ is integrally closed. 
\n \\end{proof}\n\nWe now state the result entirely in terms of the integers $L, \\omega_1, \\ldots , \\omega_n$ determined by the vector $\\boldsymbol{\\lambda} = (\\ensuremath{{\\lambda}}_1 , \\ldots , \\ensuremath{{\\lambda}}_n)$.\n\n\\begin{cor} \\label{cor: R_k for Rees} For a positive integer $\\ell < n$ the Rees \nalgebra of $ I(\\boldsymbol{\\lambda})$ over a field satisfies condition $\\mathrm{R}_{\\ell + 1}$ of \nSerre if and only if for all sequences of positive integers $1 \\le i_1 < \\cdots < i_{\\ell} \\le n$\n and $1 \\le j_1 < \\cdots < j_{n-\\ell} \\le n$ such that $\\{1, \\ldots , n \\} = \\{i_1 , \\ldots , i_{\\ell} \\} \\cup \\{ j_1 , \\ldots , j_{n-\\ell}\n \\} $, we have $$L - \\omega_{i_1}, \\ldots , L - \\omega_{i_{\\ell}}, L + 1 \\in \\langle \\omega_{j_1}, \\ldots , \\omega_{j_{n-\\ell}} \\rangle.$$\n\\end{cor}\n\n\\begin{proof} By Theorem \\ref{thm: R_k for Rees} it suffices to show that for a sequence of positive integers $1 \\le i_1 < \\cdots < i_{\\ell} \\le n$ \n and $1 \\le j_1 < \\cdots < j_{n-\\ell} \\le n$ such that $ \\{1, \\ldots , n \\} = \\{i_1 , \\ldots , i_{\\ell} \\} \\cup \\{ j_1 , \\ldots , j_{n-\\ell} \\} $, \n vectors $\\boldsymbol{\\gamma}_j = (\\boldsymbol{\\beta}_j, 1) \\; (j = 1, \\ldots, \\ell +1) \\in \\mathbb{N}^{n+1}$ exist such that $\\sigma_i(\\boldsymbol{\\gamma}_j) = \\delta_{i j} \\; \\mbox{ for all }1 \\le i, j \\le \\ell + 1$, where the $\\sigma_i$ are as above, if and only if \n$L - \\omega_{i_1}, \\ldots , L - \\omega_{i_{\\ell}}, L + 1 \\in \\langle \\omega_{j_1}, \\ldots , \\omega_{j_{n-\\ell }} \\rangle.$\n \n Suppose first that the vectors $\\boldsymbol{\\gamma}_i = (\\boldsymbol{\\beta}_i, 1) \\in \\mathbb{N}^{n+1}$ satisfying the\n necessary conditions exist. 
By our requirements, $\\boldsymbol{\\gamma}_t= (\\mathbf{e}_{i_t} + a_{j_1}\\mathbf{e}_{j_1} + \\cdots + a_{j_{n - \\ell}}\\mathbf{e}_{j_{n - \\ell}},1)$ for all $t = 1, \\ldots , \\ell$, where the coefficients $a_{j_1} , \\ldots , a_{j_{n-\\ell}}$ are nonnegative. The existence of \n such vectors $\\boldsymbol{\\gamma}_t$ is equivalent to the conditions $L - \\omega_{i_1}, \\ldots , L - \\omega_{i_{\\ell}} \\in \\langle \\omega_{j_1}, \\ldots , \\omega_{j_{n-\\ell}} \\rangle.$ We\n must also have $\\boldsymbol{\\gamma}_{\\ell + 1} = (a_{j_1}\\mathbf{e}_{j_1} + \\cdots + a_{j_{n - \\ell}}\\mathbf{e}_{j_{n - \\ell}} ,1)$, where the coefficients $a_{j_1} , \\ldots , a_{j_{n-\\ell}}$ are\n nonnegative and $ a_{j_1}\\omega_{j_1} + \\cdots + a_{j_{n - \\ell}}\\omega_{j_{n - \\ell}} - L = 1$. The existence of such a $\\boldsymbol{\\gamma}_{\\ell + 1}$ is equivalent to $L + 1 \\in \n \\langle \\omega_{j_1}, \\ldots , \\omega_{j_{n-\\ell }} \\rangle.$\n\\end{proof}\n\nApplying this theorem for values of $\\ell$ close to $n$ gives simple descriptions of when the Rees algebra of $I(\\boldsymbol{\\lambda})$ is regular in codimension $\\ell$.\n\n\\begin{cor} The Rees algebra of $ I(\\boldsymbol{\\lambda}) \\subset K[\\xvec{n}]$ over a field $K$ is regular in codimension $n$ if and only if $\\boldsymbol{\\lambda} = \\ensuremath{{\\lambda}}(1,1, \\ldots , 1)$ and hence, $I(\\boldsymbol{\\lambda}) = \\mathfrak{m}^{\\ensuremath{{\\lambda}}}$.\n\\end{cor}\n\n\\begin{proof} The Rees algebra of $I$ satisfies $\\mathrm{R}_{n}$ if and only if for every sequence $1 < \\cdots < i-1 < i+1 < \\cdots < n$ of length $n-1$, obtained by omitting a single index $i$ from $1, \\ldots, n$, we have\n$$L - \\omega_1, \\ldots , L - \\omega_{i-1} , L - \\omega_{i+1} , \\ldots , L-\\omega_n , L+1 \\in \\langle \\omega_i \\rangle.$$ In particular, $L+1 = a\\omega_i$ for some $a \\ge 0$, which implies $1 = \\omega_i(a - \\ensuremath{{\\lambda}}_i)$. Thus each $\\omega_i = 1$. 
Conversely, if all the $\\omega_i = 1$ then the necessary conditions are satisfied.\n\\end{proof}\n\n\\begin{cor} \\label{cor: 2} Suppose that $n \\ge 3$. The Rees algebra of $ I(\\boldsymbol{\\lambda}) \\subset K[\\xvec{n}]$ over a field $K$ is regular in codimension $n-1$ if and only if the integers $\\omega_i$ are pairwise relatively prime.\n\\end{cor}\n\n\\begin{proof} The sequences of length $n-2$ arise from omitting two integers $1 \\le i < j \\le n$. For each pair $1 \\le i < j \\le n$ we must have $$L - \\omega_k \\in \\langle \\omega_i , \\omega_j \\rangle \\mbox{ for all } k \\ne i,j \\mbox{ and } L+1 \\in \\langle \\omega_i , \\omega_j \\rangle.$$ Write $L+1 = a\\omega_i + b\\omega_j$ for $a, b \\ge 0$ and read modulo \n$\\omega_i$ to obtain the congruence $b\\omega_j \\equiv 1 \\pmod{\\omega_i}$. Hence $\\omega_i$ and\n $\\omega_j$ are relatively prime. This holds for all pairs $1 \\le i < j \\le n$. \n \nNow suppose that $u < L\/2a , v < L\/2b $ or $w < L\/2c $. Without loss of generality we may assume that $w < L\/2c $. Then, $L - wc > L\/2 \\ge (a-1)(b-1)$ so there exist $u_1, v_1 \\in \\mathbb{N}$ such that $u_1a + v_1b = L - wc$. Since $vb < L$, we have $ua + wc > L$. Now $u_1a \\le L - wc < ua$ implies $u_1 < u$. Similarly, $v_1 < v$. Therefore, $$(u,v,w) = (u_1,v_1,w) + (u-u_1, v-v_1,0)$$ is the desired decomposition.\n\nFinally, assume that $u \\ge L\/2a , v \\ge L\/2b $ and $w \\ge L\/2c $. Set $w_1 = \\lceil L\/2c \\rceil$. Then, \n$L - w_1c \\ge L - (ab+1)c\/2 = (c\/2)(ab-1) > (a-1)(b-1)$ and we may write $u_1a + v_1b = L - w_1c$ for some $u_1, v_1 \\in \\mathbb{N}$. Notice that $u_1a \\le L - w_1c \\le L\/2$ and hence $u_1 \\le L\/2a \\le u$. Similarly, $v_1 \\le v$. Thus $$(u,v,w) = (u_1,v_1,w_1) + (u-u_1,v-v_1,w-w_1)$$ is the desired decomposition.\n \\end{proof}\n \nWe now present an example of a Rees algebra $\\mathcal{R} = R[It]$ of a monomial ideal that satisfies $\\mathrm{R}_2$ but is not normal. 
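The representability step used repeatedly in the decomposition argument above is the classical Sylvester--Frobenius fact: for relatively prime $a, b$ every integer $N \\ge (a-1)(b-1)$ can be written as $N = u_1a + v_1b$ with $u_1, v_1 \\in \\mathbb{N}$, while $(a-1)(b-1)-1 = ab-a-b$ cannot. A small illustrative check (the sample pairs below are ours, not from the text):

```python
def representable(n, a, b):
    """Can n be written as u*a + v*b with nonnegative integers u, v?"""
    if n < 0:
        return False
    return any((n - u * a) % b == 0 for u in range(n // a + 1))

# For coprime a, b: every integer >= (a-1)(b-1) is representable, while
# ab - a - b = (a-1)(b-1) - 1 (the Frobenius number) is not.
for a, b in [(2, 3), (3, 5), (4, 7), (5, 9)]:
    bound = (a - 1) * (b - 1)
    assert all(representable(n, a, b) for n in range(bound, bound + 2 * a * b))
    assert not representable(bound - 1, a, b)
```

This is exactly the bound invoked in the form $L - wc > (a-1)(b-1)$ above.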
In order to find an example we must work over a polynomial ring in at least 4 indeterminates by the above remarks. The following example is due to H. Coughlin \\cite{Co}. The example was first explored using the program NORMALIZ \\cite{Norm} of Bruns and Koch.\n\n\\begin{example}\\label{ex: weirdex} Let $\\boldsymbol{\\lambda}=(1443,37,21,91)$. Define $I=I(\\boldsymbol{\\lambda})$, $S=S(I)$, and $\\mathcal{R}$ as above. We claim $\\mathcal{R}$ is not normal but satisfies the equivalent conditions for R$_2$. Hence, $\\mathcal{R}$ does not satisfy S$_2$.\n\\end{example}\n\n\nWe first show that $\\mathcal{R}$ is not normal. Note that $L=10101$.\n The vector $\\boldsymbol{\\alpha}=(2,36,1,89)$ satisfies \n$\\boldsymbol{\\omega} \\cdot \\boldsymbol{\\alpha}=2L$, so that $x^{\\boldsymbol{\\alpha}} \\in \\overline{I^2}$. Direct computation shows that $\\boldsymbol{\\alpha}$ is not the sum of two vectors in $S$ and hence $x^{\\boldsymbol{\\alpha}} \\notin I^2$. Thus $\\mathcal{R}$ is not normal.\n\nWe show that $\\mathrm{R}_2$ holds. As in the proof of Theorem \\ref{thm: R_k for Rees} we need only deal with the height two monomial primes corresponding to the \nfaces $G_1, G_2, G_3, G_4$, where $G_i = \\ensuremath{\\mathrm{pos}}(S) \\cap H_i \\cap H_{\\sigma}$. As in the proof of\nTheorem \\ref{thm: R_k for Rees}, for each $G_i$ we must exhibit a \npair of elements $\\boldsymbol{\\gamma}_i$ and $\\boldsymbol{\\gamma}_6$ in $S$ such that $\\sigma_a(\\boldsymbol{\\gamma}_b) = \\delta_{a b}$ for $a, b \\in \\{ i, 6 \\}$. \n\nThe following vectors satisfy $\\sigma_a(\\boldsymbol{\\gamma}_b)=\\delta_{ab}$ for $a, b \\in \\{ i, 6 \\}$:\n$$\\begin{array}{ccc}\ni&\\boldsymbol{\\gamma}_i & \\boldsymbol{\\gamma}_6\\\\ \n1&(1,18,8,12,1) &(0,8,1,67,1) \\\\\n2&(35,1,1,82,1)&(16,0,0,90,1)\\\\\n3&(35,1,1,82,1) & (16,0,0,90,1)\\\\\n4&(220,1,17,1,1) & (275,0,17,0,1)\\\\\n\\end{array}$$\n\n$\\mathcal{R}$ satisfies R$_2$ by Theorem~\\ref{thm: R_k for Rees}. 
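The claims in Example \\ref{ex: weirdex} can also be verified by brute force. The sketch below is our illustration, not part of the original argument; it assumes the standard facts for this construction that $L=\\mathrm{lcm}(\\lambda_1,\\ldots,\\lambda_4)$, $\\omega_i=L\/\\lambda_i$, and that $x^{\\boldsymbol{\\beta}}\\in I^d$ if and only if $\\boldsymbol{\\omega}\\cdot\\boldsymbol{\\beta}\\ge dL$ (valid since $I$ is integrally closed):

```python
from itertools import product
from math import lcm

lam = (1443, 37, 21, 91)            # the vector lambda of the example
L = lcm(*lam)                       # L = 10101
w = tuple(L // l for l in lam)      # omega = (7, 273, 481, 111)

def deg(beta):
    return sum(wi * bi for wi, bi in zip(w, beta))

# x^alpha lies in the integral closure of I^2 since omega.alpha = 2L ...
alpha = (2, 36, 1, 89)
assert deg(alpha) == 2 * L

# ... but x^alpha in I^2 would need alpha = beta + gamma with
# deg(beta) >= L and deg(gamma) >= L.  No such splitting exists:
splits = [b for b in product(*(range(a + 1) for a in alpha))
          if deg(b) >= L and deg(tuple(a - bi for a, bi in zip(alpha, b))) >= L]
assert splits == []                 # so the Rees algebra is not normal

# R_2 via Corollary (cor: R_k for Rees) with ell = 1: for each i,
# L - omega_i and L + 1 must lie in the numerical semigroup
# generated by the remaining omega_j.
def in_sgp(n, gens):
    reach = [True] + [False] * n
    for k in range(1, n + 1):
        reach[k] = any(k >= g and reach[k - g] for g in gens)
    return reach[n]

for i in range(4):
    gens = [w[j] for j in range(4) if j != i]
    assert in_sgp(L - w[i], gens) and in_sgp(L + 1, gens)
```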
Thus $\\mathcal{R}$ does not satisfy S$_2$.\n\n\n\\section{Introduction}\n\\setcounter{equation}{0}\n\\setcounter{thm}{0}\n\nRecently there is a lot of interest in the following singular diffusion\nequation \\cite{A}, \\cite{DK}, \\cite{P}, \\cite{V},\n\\begin{equation}\\label{fde-eqn}\nu_t=\\frac{n-1}{m}\\Delta u^m\\quad\\mbox{ in }{\\mathbb{R}}^n\\times (0,T)\n\\end{equation}\nwhich arises in the study of many physical models and geometric flows. When\n\\begin{equation}\\label{m-range}\n0<m<\\frac{n-2}{n}, \\quad n\\ge 3,\n\\end{equation}\nholds, solutions of \\eqref{fde-eqn} may become extinct in finite time, while the solution of \\eqref{fde-eqn} exists for all time $t>0$ if the initial value $u_0$ satisfies $u_0(x)\\ge C_1|x|^{-\\frac{2}{1-m}}$ for all $|x|\\ge R_1$ for some constants $C_1>0$ and $R_1>0$ (cf. \\cite{DS}). On the other hand if \\eqref{m-range} holds and $0\\le u_0\\in L_{loc}^p({\\mathbb{R}}^n)$ for some constant $p>\\frac{n(1-m)}{2}$ satisfies \n\\begin{equation*}\n\\liminf_{R\\to\\infty}\\frac{1}{R^{n-\\frac{2}{1-m}}}\\int_{|x|\\le R}u_0\\,dx\n=\\infty,\n\\end{equation*}\nthen S.Y.~Hsu \\cite{Hs2} proved the existence and uniqueness of solutions of \n\\begin{equation}\\label{cauchy-problem}\n\\left\\{\\begin{aligned}\nu_t=&\\frac{n-1}{m}\\Delta u^m\\quad\\mbox{ in }{\\mathbb{R}}^n\\times (0,\\infty)\\\\\nu(x,0)=&u_0(x)\\qquad\\quad\\mbox{ in }{\\mathbb{R}}^n.\n\\end{aligned}\\right.\n\\end{equation}\nThese results say that we will have either global existence or finite time extinction of the solution of \\eqref{fde-eqn}, depending on whether the growth rate of the initial value is large enough at infinity. 
\nWhen \\eqref{m-range} holds, existence, uniqueness and decay rate of self-similar solutions of \\eqref{fde-eqn} were also proved by S.Y.~Hsu in \\cite{Hs1}.\nThe interested reader can consult the book\n\\cite{DK} by P.~Daskalopoulos and C.E.~Kenig and the book \\cite{V} by\nJ.L.~Vazquez for the most recent results on \\eqref{fde-eqn}.\n\nIn the recent paper\n\\cite{CD} B.~Choi and P.~Daskalopoulos proved the higher order expansion of the radially symmetric solution $v_{\\lambda,\\beta}(r)$ of\n\\begin{equation}\\label{elliptic-eqn}\n\\left\\{\\begin{aligned}\n&\\frac{n-1}{m}\\Delta v^m+\\frac{2\\beta}{1-m} v+\\beta x\\cdot\\nabla v=0,\\quad v>0,\\quad\\mbox{ in }{\\mathbb{R}}^n\\\\\n&v(0)=\\lambda\\end{aligned}\\right.\n\\end{equation}\n for any constant $\\lambda>0$ where $m=\\frac{n-2}{n+2}$, $n\\ge 3$, as $r\\to\\infty$. They also proved that if $u$ is the solution of \\eqref{cauchy-problem} in ${\\mathbb{R}}^n\\times (0,\\infty)$ with $m=\\frac{n-2}{n+2}$, $n\\ge 3$, and initial value $u_0\\ge 0$ satisfying \n\\begin{equation*}\nu_0(x)^{1-m}\\approx \\frac{(n-1)(n-2)}{\\beta |x|^2}\\left(\\log |x|+K_1+o(1)\\right)\\quad\\mbox{ as }r=|x|\\to\\infty\n\\end{equation*}\nfor some constants $\\beta>0$ and $K_1\\in{\\mathbb{R}}$, then as $t\\to\\infty$ the rescaled function \n\\begin{equation}\\label{rescaled-soln}\n\\widetilde{u}(x,t)=e^{\\frac{2\\beta}{1-m}t}u(e^{\\beta t}x,t)\n\\end{equation}\nconverges uniformly on every compact subset of ${\\mathbb{R}}^n$ to $v_{\\lambda_1,\\beta}(x)$ for some constant $\\lambda_1>0$.\nNote that for any solution $u$ of \\eqref{fde-eqn} in ${\\mathbb{R}}^n\\times (0,\\infty)$, $\\widetilde{u}$ satisfies\n\\begin{equation}\\label{u-tilde-eqn}\n\\widetilde{u}_t=\\frac{n-1}{m}\\Delta\\widetilde{u}^m+\\frac{2\\beta}{1-m}\\widetilde{u}+\\beta x\\cdot\\nabla\\widetilde{u}\\quad\\mbox{ in }{\\mathbb{R}}^n\\times (0,\\infty).\n\\end{equation}\nThis result of B.~Choi and P.~Daskalopoulos \\cite{CD} shows that the asymptotic large time behaviour 
 of the solution of \\eqref{cauchy-problem} depends critically on the higher order expansion of the initial value of the solution. Moreover the asymptotic large time behaviour of the solution of \\eqref{cauchy-problem} is, after a rescaling, similar to that of the solution of \\eqref{elliptic-eqn} with the same higher order expansion. Hence it is important to study the higher order expansion of the solution of \\eqref{elliptic-eqn}.\n\n\nIn this paper we will extend the result of \\cite{CD}. We will prove the higher order expansion of the radially symmetric solution $v_{\\lambda,\\beta}(r)$ of \\eqref{elliptic-eqn} as $r\\to\\infty$ for any $n\\ge 3$, $0<m<\\frac{n-2}{n}$, $\\beta>0$ and $\\lambda>0$. We will also prove that \nwhen $n\\ge 3$, $0<m<\\frac{n-2}{n}$, and the initial value satisfies \\eqref{phi-expansion} below for some constants $\\beta>0$ and $K_1\\in{\\mathbb{R}}$, then as $t\\to\\infty$ the rescaled solution $\\widetilde{u}$ given by \\eqref{rescaled-soln} will converge uniformly on every compact subset of ${\\mathbb{R}}^n$ to the radially symmetric solution $v_{\\lambda_1,\\beta}(x)$ of \\eqref{elliptic-eqn} for some constant $\\lambda=\\lambda_1>0$.\n\nWe start with a definition. For any $0\\le u_0\\in L_{loc}^1({\\mathbb{R}}^n)$, we say that a function $u$ is a solution of \\eqref{cauchy-problem}\nif $u>0$ in ${\\mathbb{R}}^n\\times (0,\\infty)$ is a classical solution of \\eqref{fde-eqn} in ${\\mathbb{R}}^n\\times (0,\\infty)$ and\n\\begin{equation*}\n\\|u(\\cdot, t)-u_0\\|_{L^1(E)}\\to 0\\quad\\mbox{ as }t\\to 0\n\\end{equation*}\nfor any compact subset $E$ of ${\\mathbb{R}}^n$. For any $\\lambda>0$ and $\\beta>0$, we say that $v$ is a solution of \\eqref{elliptic-eqn} if $v$ is a positive classical solution of \\eqref{elliptic-eqn} in ${\\mathbb{R}}^n$. When there is no ambiguity we will drop the subscript and write $v$ for the radially symmetric solution $v_{\\lambda,\\beta}$ of \\eqref{elliptic-eqn}. 
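For the reader's convenience we also record the elementary chain rule computation behind \\eqref{u-tilde-eqn}; this sketch is ours and is not part of the original argument. With $\\widetilde{u}(x,t)=e^{\\frac{2\\beta}{1-m}t}u(e^{\\beta t}x,t)$ as in \\eqref{rescaled-soln},

```latex
\\begin{align*}
\\widetilde{u}_t(x,t)&=\\frac{2\\beta}{1-m}\\,\\widetilde{u}(x,t)+e^{\\frac{2\\beta}{1-m}t}\\left(\\beta e^{\\beta t}x\\cdot\\nabla u(e^{\\beta t}x,t)+u_t(e^{\\beta t}x,t)\\right)\\\\
&=\\frac{2\\beta}{1-m}\\,\\widetilde{u}(x,t)+\\beta x\\cdot\\nabla\\widetilde{u}(x,t)+e^{\\frac{2\\beta}{1-m}t}\\cdot\\frac{n-1}{m}\\,\\Delta u^m(e^{\\beta t}x,t)
\\end{align*}
```

while, since $\\frac{2m\\beta}{1-m}+2\\beta=\\frac{2\\beta}{1-m}$, we have $\\Delta\\widetilde{u}^m(x,t)=e^{\\frac{2m\\beta}{1-m}t}\\cdot e^{2\\beta t}\\Delta u^m(e^{\\beta t}x,t)=e^{\\frac{2\\beta}{1-m}t}\\Delta u^m(e^{\\beta t}x,t)$; combining the two displays gives \\eqref{u-tilde-eqn}.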
Let \n\\begin{equation}\\label{w-defn}\nw(s)=r^2v(r)^{1-m}\\quad\\mbox{ and }\\quad s=\\log r.\n\\end{equation}\nWe will assume that $n\\ge 3$, $0<m<\\frac{n-2}{n}$, $\\beta>0$, $\\lambda>0$ and that $w$ is given by \\eqref{w-defn} for the rest of this paper. Unless stated otherwise we will also assume that $m\\ne\\frac{n-2}{n+2}$.\n\nWe obtain the following two main theorems in this paper.\n\n\\begin{thm}\\label{higher-order-expansion-thm}\nLet $n\\ge 3$, $0<m<\\frac{n-2}{n}$, $\\beta>0$ and $\\lambda>0$. Let $v_{\\lambda,\\beta}(r)$ be the radially symmetric solution of \\eqref{elliptic-eqn} given by \\cite{Hs1}. Then there exists a constant $K_0$ independent of $\\beta$ and $\\lambda$ and a constant $K(\\lambda,\\beta)$ such that\n\\begin{align}\\label{v-lambda-beta-expansion1}\nv_{\\lambda,\\beta}(r)^{1-m}=&\\frac{2(n-1)(n-2-nm)}{(1-m)\\beta r^2}\\left\\{\\log r -\\frac{n-2-(n+2)m}{2(n-2-nm)}\\log (\\log r)+\\frac{1-m}{2}\\log\\lambda\n\\right.\\notag\\\\\n&\\qquad \\left.+\\frac{1}{2}\\log\\beta +K_0+\\frac{a_0}{\\log r}\n+\\frac{(n-2-(n+2)m)^2}{4(n-2-nm)^2}\\cdot\\frac{\\log(\\log r)}{\\log r}+\\frac{o(1)}{\\log r}\\right\\}\n\\end{align}\nas $r\\to\\infty$ where\n\\begin{equation}\\label{a0-defn}\na_0=\\frac{(n-2-(n+2)m)^2}{4(n-2-nm)^2}-\\frac{(1-m)^2 a_1(1,1)}{4(n-1)(n-2-nm)^2}\n\\end{equation}\nand\n\\begin{align}\\label{a1-defn}\na_1(\\lambda,\\beta)=&\\frac{2(1-2m)(n-1)(n-2-nm)}{(1-m)^2}+\\frac{(n-1)(n-2-(n+2)m)^2}{(1-m)^2}\\notag\\\\\n&\\qquad +\\frac{(n-2-(n+2)m)}{(1-m)}K(\\lambda,\\beta)\\beta.\n\\end{align}\n\\end{thm}\n\n\\begin{thm}\\label{convergence-thm}\nLet $n\\ge 3$, $0<m<\\frac{n-2}{n}$, $\\beta>0$, $0\\le u_0=\\phi+\\psi$, $ \\phi\\in L_{loc}^p({\\mathbb{R}}^n)$, $\\psi\\in L^1({\\mathbb{R}}^n)\\cap L_{loc}^p({\\mathbb{R}}^n)$, for some constant $p>\\frac{(1-m)n}{2}$, be such that\n\\begin{align}\\label{phi-expansion}\n\\phi(x)^{1-m}=&\\frac{2(n-1)(n-2-nm)}{(1-m)\\beta |x|^2}\\left(\\log |x|-\\frac{n-2-(n+2)m}{2(n-2-nm)}\\log (\\log |x|)+K_1+o(1)\\right)\\mbox{ as }|x|\\to\\infty\n\\end{align}\nfor some constant 
$K_1\\in{\\mathbb{R}}$.\nIf $u$ is the unique solution of \\eqref{fde-eqn} in ${\\mathbb{R}}^n\\times (0,\\infty)$ given by\nTheorem 1.1 of \\cite{Hs2}, then as $t\\to\\infty$, the rescaled function $\\widetilde{u}(x,t)$ given by \\eqref{rescaled-soln} converges to $v_{\\lambda_1,\\beta}$ in $L_{loc}^1({\\mathbb{R}}^n)$ with $\\lambda_1=\\left(e^{2(K_1-K_0)}\/\\beta\\right)^{\\frac{1}{1-m}}$, where the constant $K_0$ is given by Theorem \\ref{higher-order-expansion-thm}.\n\nMoreover if in addition $u_0=\\phi\\in L^{\\infty}({\\mathbb{R}}^n)$, then as $t\\to\\infty$, $\\widetilde{u}(x,t)$ also converges to $v_{\\lambda_1,\\beta}$ uniformly in $C^{2,1}(E)$ for any compact subset $E\\subset{\\mathbb{R}}^n$. \n\\end{thm}\n\n\n\n\\section{Proofs}\n\\setcounter{equation}{0}\n\\setcounter{thm}{0}\n\nIn this section we will give the proof of Theorem \\ref{higher-order-expansion-thm} and Theorem \\ref{convergence-thm}. We first recall some results of \\cite{Hs1}, \\cite{DS} and \\cite{HK}.\n\n\\begin{thm}\\label{w's-bd-thm}(Theorem 1.3 of \\cite{Hs1} and its proof)\nLet $v$ be the unique radially symmetric solution of \\eqref{elliptic-eqn} and $w$ be given by \\eqref{w-defn}. Then \n\\begin{equation}\\label{w-decay-infty}\n\\lim_{|x|\\to\\infty}\\frac{|x|^2v(x)^{1-m}}{\\log |x|}=\\lim_{s\\to\\infty}\\frac{w(s)}{s}=\\lim_{s\\to\\infty}w_s(s)\n=\\frac{2(n-1)(n-2-nm)}{(1-m)\\beta}.\n\\end{equation}\n\\end{thm}\n\n\\begin{lem}(cf. Corollary 2.2 of \\cite{DS} and Lemma 2.2 of \\cite{HK})\\label{L1-comparison-lem}\nLet $0\\le u_{0,1}, u_{0,2}\\in L_{loc}^1({\\mathbb{R}}^n)$ be such that $u_{0,1}-u_{0,2}\\in L^1({\\mathbb{R}}^n)$. 
Suppose $u_1$, $u_2$ are solutions of \\eqref{fde-eqn} in ${\\mathbb{R}}^n\\times (0,\\infty)$ with initial values $u_{0,1}, u_{0,2}$ respectively such that for any $T>0$ there exist constants $C_1>0$, $R_1>0$, such that\n\\begin{equation*}\nu_i(x,t)\\ge C_1|x|^{-\\frac{2}{1-m}}\\quad\\forall |x|\\ge R_1, 0<t\\le T, i=1,2.\n\\end{equation*}\nThen\n\\begin{equation*}\n\\|u_1(\\cdot,t)-u_2(\\cdot,t)\\|_{L^1({\\mathbb{R}}^n)}\\le \\|u_{0,1}-u_{0,2}\\|_{L^1({\\mathbb{R}}^n)}\\quad\\forall t>0\n\\end{equation*}\nand hence\n\\begin{equation*}\n\\|\\widetilde{u}_1(\\cdot,t)-\\widetilde{u}_2(\\cdot,t)\\|_{L^1({\\mathbb{R}}^n)}\\le e^{-\\frac{(n-2-nm)\\beta}{1-m}t} \\|\\widetilde{u}_{0,1}-\\widetilde{u}_{0,2}\\|_{L^1({\\mathbb{R}}^n)}\\quad\\forall t >0\n\\end{equation*}\nwhere $\\widetilde{u}_1(\\cdot,t)$, $\\widetilde{u}_2(\\cdot,t)$ are the rescaled solutions of $u_1$, $u_2$ given by \\eqref{rescaled-soln}.\n\\end{lem}\n\nBy the computation of \\cite{Hs1} we have\n\\begin{equation}\\label{w-eqn}\nw_{ss}=\\frac{1-2m}{1-m}\\cdot\\frac{w_s^2}{w}\n-\\frac{n-2-(n+2)m}{1-m}w_s+\\frac{\\beta}{n-1}\\left(\\frac{2(n-1)(n-2-nm)}{(1-m)\\beta}-w_s\\right)w\\quad\\mbox{ in }{\\mathbb{R}}.\n\\end{equation}\nLet\n\\begin{equation}\\label{h-defn10}\nh(s)=w(s)-\\frac{2(n-1)(n-2-nm)}{(1-m)\\beta}s.\n\\end{equation}\nThen by \\eqref{w-eqn},\n\\begin{align*}\nh_{ss}=&\\frac{1-2m}{1-m}\\cdot\\frac{w_s^2}{w}\n-\\frac{(n-2-(n+2)m)}{1-m}\\left(\\frac{2(n-1)(n-2-nm)}{(1-m)\\beta} +h_s\\right)\\notag\\\\\n&\\qquad -\\frac{\\beta}{n-1}\\left(\\frac{2(n-1)(n-2-nm)}{(1-m)\\beta}s+h\\right)h_s\\quad\\mbox{ in }{\\mathbb{R}}.\n\\end{align*}\nHence\n\\begin{equation}\\label{h-eqn2}\nh_{ss}+\\left(\\frac{2(n-2-nm)}{(1-m)}s+\\frac{\\beta}{n-1}h+\\frac{n-2-(n+2)m}{1-m}\\right)h_s=\\frac{1-2m}{1-m}\\cdot\\frac{w_s^2}{w}-b_0\\quad\\mbox{ in }{\\mathbb{R}}\n\\end{equation}\nwhere\n\\begin{equation*}\nb_0=\\frac{2(n-1)(n-2-nm)(n-2-(n+2)m)}{(1-m)^2\\beta}.\n\\end{equation*}\n\n\\begin{lem}\\label{h-limit-lem1}\nLet $n\\ge 3$, $0<m<\\frac{n-2}{n}$, $\\lambda>0$ and $\\beta>0$. Then\n\\begin{equation*}\n\\lim_{s\\to\\infty}\\frac{h(s)}{\\log s}=-\\frac{(n-1)(n-2-(n+2)m)}{(1-m)\\beta}\n\\end{equation*}\nand there exists a constant $s_0\\in{\\mathbb{R}}$ such that\n\\begin{equation*}\nh_s(s)>0\\quad\\forall s\\ge s_0\\qquad\\mbox{ if }\\,\\,\\frac{n-2}{n+2}<m<\\frac{n-2}{n}.\n\\end{equation*}\n\\end{lem}\n\nLet\n\\begin{equation}\\label{h1-defn}\nh_1(s)=h(s)+a_2\\log s\\quad\\mbox{ where }\\quad a_2=\\frac{(n-1)(n-2-(n+2)m)}{(1-m)\\beta}.\n\\end{equation}\nThen by \\eqref{h-eqn2},\n\\begin{equation}\\label{h1-eqn2}\nh_{1,ss}+\\left(\\frac{2(n-2-nm)}{(1-m)}s+\\frac{\\beta}{n-1}h+\\frac{n-2-(n+2)m}{1-m}\\right)h_{1,s}=\\frac{1-2m}{1-m}\\cdot\\frac{w_s^2}{w}+a_2\\left[-\\frac{1}{s^2}+\\left(\\frac{\\beta}{n-1}h+\\frac{n-2-(n+2)m}{1-m}\\right)\\frac{1}{s}\\right]\\quad\\mbox{ in }{\\mathbb{R}}.\n\\end{equation}\nFor $s_1\\in{\\mathbb{R}}$ let\n\\begin{equation}\\label{f-defn}\nf(s)=\\exp\\left(\\int_{s_1}^s\\left(\\frac{2(n-2-nm)}{(1-m)}z+\\frac{\\beta}{n-1}h(z)+\\frac{n-2-(n+2)m}{1-m}\\right)\\,dz\\right).\n\\end{equation}\nThen by Lemma \\ref{h-limit-lem1} and the l'Hospital rule,\n\\begin{equation}\\label{f-expression-ratio-limit}\n\\lim_{s\\to\\infty}\\frac{\\int_{s_1}^sf(z)\\,dz}{f(s)}=0.\n\\end{equation}\n\n\\begin{lem}\nLet $n\\ge 3$, $0<m<\\frac{n-2}{n}$, $m\\ne\\frac{n-2}{n+2}$, $\\lambda>0$ and $\\beta>0$. 
Then \n\\begin{equation}\\label{h1'-limit}\n\\lim_{s\\to\\infty}\\frac{s^2h_{1,s}(s)}{\\log s}=-\\frac{(n-1)(n-2-(n+2)m)^2}{2(n-2-nm)(1-m)\\beta}.\n\\end{equation}\n\\end{lem}\n\\begin{proof}\nLet \n\\begin{equation*}\nH(s)=\\frac{1-2m}{1-m}\\cdot\\frac{w_s^2}{w}+a_2\\left[-\\frac{1}{s^2}+\\left(\\frac{\\beta}{n-1}h+\\frac{n-2-(n+2)m}{1-m}\\right)\\frac{1}{s}\\right].\n\\end{equation*}\nThen by Theorem \\ref{w's-bd-thm} and Lemma \\ref{h-limit-lem1},\n\\begin{equation}\\label{H-infty-limit}\n\\lim_{s\\to\\infty}\\frac{sH(s)}{\\log s}=-\\frac{(n-2-(n+2)m)}{1-m}a_2=-a_3\n\\end{equation}\nwhere\n\\begin{equation}\na_3=\\frac{(n-1)(n-2-(n+2)m)^2}{(1-m)^2\\beta}.\n\\end{equation}\nBy \\eqref{h1-eqn2} and \\eqref{H-infty-limit} for any $0<\\varepsilon<1$ there exists a constant $s_1>1$ such that\n\\begin{align}\\label{h1-ineqn}\n& (-a_3-\\varepsilon)\\frac{\\log s}{s}\\le h_{1,ss}+\\left(\\frac{2(n-2-nm)}{(1-m)}s+\\frac{\\beta}{n-1}h+\\frac{n-2-(n+2)m}{1-m}\\right)h_{1,s}\\le (-a_3+\\varepsilon)\\frac{\\log s}{s}\n\\end{align}\nholds for all $s\\ge s_1$. Let $f$ be given by \\eqref{f-defn}. 
Multiplying \\eqref{h1-ineqn} by $f$ and integrating over $(s_1,s)$,\n\\begin{align}\\label{h1'-ineqn3}\n\\frac{s^2}{\\log s}\\left(\\frac{f(s_1)h_{1,s}(s_1)+(-a_3-\\varepsilon)\\int_{s_1}^s\\frac{\\log z}{z}f(z)\\,dz}{f(s)}\\right)\\le& \\frac{s^2h_{1,s}(s)}{\\log s}\\notag\\\\\n\\le&\n\\frac{s^2}{\\log s}\\left(\\frac{f(s_1)h_{1,s}(s_1)+(-a_3+\\varepsilon)\\int_{s_1}^s\\frac{\\log z}{z}f(z)\\,dz}{f(s)}\\right)\n\\end{align}\nholds for all $s\\ge s_1$.\nBy the l'Hospital rule and Lemma \\ref{h-limit-lem1},\n\\begin{align}\\label{f-expression-ratio-limit2}\n\\lim_{s\\to\\infty}\\frac{s^2\\int_{s_1}^s\\frac{\\log z}{z}f(z)\\,dz}{ f(s)\\log s}\n=&\\lim_{s\\to\\infty}\\frac{f(s)s\\log s+2s\\int_{s_1}^s\\frac{\\log z}{z}f(z)\\,dz}{f(s)\\left(\\frac{2(n-2-nm)}{(1-m)}s+\\frac{\\beta}{n-1}h(s)+\\frac{n-2-(n+2)m}{1-m}\\right)\\log s+\\frac{f(s)}{s}}\\notag\\\\\n=&\\frac{1-m}{2(n-2-nm)}+\\frac{1-m}{n-2-nm}\\lim_{s\\to\\infty}\\frac{\\int_{s_1}^s\\frac{\\log z}{z}f(z)\\,dz}{f(s)\\log s}.\n\\end{align}\nSince \n\\begin{equation*}\n\\left|\\frac{\\int_{s_1}^s\\frac{\\log z}{z}f(z)\\,dz}{f(s)\\log s}\\right|\\le\\frac{\\int_{s_1}^sf(z)\\,dz}{f(s)}\\quad\\forall s\\ge s_1,\n\\end{equation*}\nby \\eqref{f-expression-ratio-limit},\n\\begin{equation}\\label{f-expression-ratio-limit3}\n\\lim_{s\\to\\infty}\\frac{\\int_{s_1}^s\\frac{\\log z}{z}f(z)\\,dz}{f(s)\\log s}=0.\n\\end{equation}\nBy \\eqref{f-expression-ratio-limit2} and \\eqref{f-expression-ratio-limit3},\n\\begin{equation}\\label{f-expression-ratio-limit4}\n\\lim_{s\\to\\infty}\\frac{s^2\\int_{s_1}^s\\frac{\\log z}{z}f(z)\\,dz}{ f(s)\\log s}=\\frac{1-m}{2(n-2-nm)}.\n\\end{equation}\nLetting first $s\\to\\infty$ and then $\\varepsilon\\to 0$ in \\eqref{h1'-ineqn3}, by \\eqref{f-expression-ratio-limit4} we get \\eqref{h1'-limit}\nand the lemma follows.\n\\end{proof}\n\n\\begin{cor}\\label{h1-expansion-cor}\nLet $n\\ge 3$, $0<m<\\frac{n-2}{n}$, $m\\ne\\frac{n-2}{n+2}$, $\\lambda>0$ and $\\beta>0$. 
Then \n\\begin{equation}\\label{K-defn}\nK(\\lambda,\\beta):=\\lim_{s\\to\\infty}h_1(s)\\in{\\mathbb{R}}\\quad\\mbox{ exists}\n\\end{equation} \nand\n\\begin{equation}\\label{h1-expression}\nh_1(s)=K(\\lambda,\\beta)+\\frac{(n-1)(n-2-(n+2)m)^2}{2(n-2-nm)(1-m)\\beta}\\left(\\frac{1+\\log s}{s}\\right)+o\\left(\\frac{1+\\log s}{s}\\right)\\quad\\mbox{ as }s\\to\\infty.\n\\end{equation}\n\\end{cor}\n\\begin{proof}\nBy \\eqref{h1'-limit} there exist constants $C_1>0$ and $s_1>1$ such that\n\\begin{equation}\n\\left|\\frac{s^2h_{1,s}(s)}{\\log s}\\right|\\le C_1\\quad\\forall s\\ge s_1.\n\\end{equation} \nHence\n\\begin{align}\\label{h1-uniform-bd}\n&|h_1(s)-h_1(s_1)|\\le\\int_{s_1}^s|h_{1,s}(z)|\\,dz\\le C_1\\int_{s_1}^s\\frac{\\log z}{z^2}\\,dz\\le C_2\\quad\\forall s\\ge s_1\\notag\\\\\n\\Rightarrow\\quad&|h_1(s)|\\le C_2+|h_1(s_1)|\\quad\\forall s\\ge s_1\n\\end{align}\nfor some constant $C_2>0$. On the other hand by \\eqref{h1'-limit} there exists a constant $s_0>s_1$ such that\n\\begin{equation*}\nh_{1,s}(s)<0\\quad\\forall s\\ge s_0.\n\\end{equation*}\nThen $h_1(s)$ is monotone decreasing in $(s_0,\\infty)$. 
This together with \\eqref{h1-uniform-bd} implies that \\eqref{K-defn} holds.\nBy \\eqref{h1'-limit} for any \n$0<\\varepsilon<\\frac{(n-1)(n-2-(n+2)m)^2}{4(n-2-nm)(1-m)\\beta}$ there exists a constant $s_2>1$ such that\n\\begin{equation}\\label{h1'-ineqn6}\n\\left(-\\frac{(n-1)(n-2-(n+2)m)^2}{2(n-2-nm)(1-m)\\beta}-\\varepsilon\\right)\\frac{\\log s}{s^2}\\le h_{1,s}(s)\\le \\left(-\\frac{(n-1)(n-2-(n+2)m)^2}{2(n-2-nm)(1-m)\\beta}+\\varepsilon\\right)\\frac{\\log s}{s^2}\n\\end{equation}\nholds for all $s\\ge s_2$.\nIntegrating \\eqref{h1'-ineqn6} over $(s,\\infty)$, $s\\ge s_2$,\n\\begin{align*}\n-\\left(\\frac{(n-1)(n-2-(n+2)m)^2}{2(n-2-nm)(1-m)\\beta}+\\varepsilon\\right)\\frac{(1+\\log s)}{s}\n\\le &K(\\lambda,\\beta)-h_1(s)\\\\\n\\le&-\\left(\\frac{(n-1)(n-2-(n+2)m)^2}{2(n-2-nm)(1-m)\\beta}-\\varepsilon\\right)\\frac{(1+\\log s)}{s}\n\\end{align*}\nfor all $s\\ge s_2$ and \\eqref{h1-expression} follows.\n\\end{proof}\n\nLet $K(\\lambda,\\beta)$ be given by \\eqref{K-defn} and\n\\begin{equation}\\label{h2-defn}\nh_2(s)=h_1(s)-K(\\lambda,\\beta)-\\frac{(n-1)(n-2-(n+2)m)^2}{2(n-2-nm)(1-m)\\beta}\\left(\\frac{1+\\log s}{s}\\right).\n\\end{equation}\nThen \n\\begin{equation}\\label{h2-infty}\nh_2(s)=o\\left(\\frac{1+\\log s}{s}\\right)\\mbox{ as }s\\to\\infty\n\\end{equation}\n and by \\eqref{h1-defn} and \\eqref{h1-eqn2},\n\\begin{align}\\label{h2-eqn}\n&h_{2,ss}+\\left(\\frac{2(n-2-nm)}{(1-m)}s+\\frac{\\beta}{n-1}h+\\frac{n-2-(n+2)m}{1-m}\\right)h_{2,s}\\notag\\\\\n=&\\frac{1-2m}{1-m}\\cdot\\frac{w_s^2}{w}+\\frac{(n-1)(n-2-(n+2)m)^2}{(1-m)^2\\beta}\\cdot\\frac{1}{s}+\\frac{(n-2-(n+2)m)}{1-m}\\cdot\\frac{h_1(s)}{s}-\\frac{a_2}{s^2}\\notag\\\\\n&\\qquad +\\frac{(n-1)(n-2-(n+2)m)^2}{2(1-m)(n-2-nm)\\beta}\\cdot\\left[\\frac{(1-2\\log s)}{s^3}+\\left(\\frac{\\beta}{n-1}h(s)+\\frac{n-2-(n+2)m}{1-m}\\right)\\cdot\\frac{\\log s}{s^2}\\right]\\notag\\\\\n=&:H_1(s).\n\\end{align}\n\n\\begin{lem}\\label{h2-limit-lem}\nLet $n\\ge 3$, $0<m<\\frac{n-2}{n}$, $m\\ne\\frac{n-2}{n+2}$, $\\lambda>0$ and $\\beta>0$. 
Then $h_2$ satisfies\n\\begin{equation}\\label{h2-limit}\n\\lim_{s\\to\\infty}s^2\\,h_{2,s}(s)=\\frac{(1-m)a_1(\\lambda,\\beta)}{2(n-2-nm)\\beta}\n\\end{equation}\nwhere $a_1(\\lambda,\\beta)$ is given by \\eqref{a1-defn} with $K(\\lambda,\\beta)$ given by \\eqref{K-defn}.\n\\end{lem}\n\\begin{proof}\nBy Theorem \\ref{w's-bd-thm}, Lemma \\ref{h-limit-lem1} and \\eqref{K-defn},\n\\begin{equation}\\label{H1-decay-rate}\n\\lim_{s\\to\\infty}sH_1(s)=\\frac{a_1(\\lambda,\\beta)}{\\beta}\n\\end{equation}\nwhere $a_1(\\lambda,\\beta)$ is given by \\eqref{a1-defn} with $K(\\lambda,\\beta)$ given by \\eqref{K-defn}.\nThen by \\eqref{h2-eqn} and \\eqref{H1-decay-rate} for any $0<\\varepsilon<1$ there exists a constant $s_1>1$ such that\n\\begin{equation}\\label{h2-ineqn20}\n\\left(\\frac{a_1(\\lambda,\\beta)}{\\beta}-\\varepsilon\\right)\\frac{1}{s}\\le h_{2,ss}+\\left(\\frac{2(n-2-nm)}{(1-m)}s+\\frac{\\beta}{n-1}h+\\frac{n-2-(n+2)m}{1-m}\\right)h_{2,s}\\le\\left(\\frac{a_1(\\lambda,\\beta)}{\\beta}+\\varepsilon\\right)\\frac{1}{s} \n\\end{equation}\nholds for all $s\\ge s_1$.\nLet $f$ be given by \\eqref{f-defn}. 
Multiplying \\eqref{h2-ineqn20} by $f$ and integrating over $(s_1,s)$,\n\\begin{equation}\\label{h-ineqn32}\n\\frac{f(s_1)h_{2,s}(s_1)s^2+\\left(\\frac{a_1(\\lambda,\\beta)}{\\beta}-\\varepsilon\\right)s^2\\int_{s_1}^s\\frac{f(z)}{z}\\,dz}{f(s)}\\le s^2h_{2,s}(s)\\le \\frac{f(s_1)h_{2,s}(s_1)s^2+\\left(\\frac{a_1(\\lambda,\\beta)}{\\beta}+\\varepsilon\\right)s^2\\int_{s_1}^s\\frac{f(z)}{z}\\,dz}{f(s)}\n\\end{equation}\nholds for all $s\\ge s_1$.\nSince by the l'Hospital rule,\n\\begin{equation*}\n\\lim_{s\\to\\infty}\\frac{\\int_{s_1}^s\\frac{f(z)}{z}\\,dz}{f(s)}=\\lim_{s\\to\\infty}\\frac{\\frac{f(s)}{s}}{f(s)\\left(\\frac{2(n-2-nm)}{(1-m)}s+\\frac{\\beta}{n-1}h(s)+\\frac{n-2-(n+2)m}{1-m}\\right)}=0,\n\\end{equation*}\nby Lemma \\ref{h-limit-lem1} and the l'Hospital rule,\n\\begin{align}\\label{f-ratio-limit}\n\\lim_{s\\to\\infty}\\frac{s^2\\int_{s_1}^s\\frac{f(z)}{z}\\,dz}{f(s)}=&\\lim_{s\\to\\infty}\\frac{sf(s)+2s\\int_{s_1}^s\\frac{f(z)}{z}\\,dz}{f(s)\\left(\\frac{2(n-2-nm)}{(1-m)}s+\\frac{\\beta}{n-1}h(s)+\\frac{n-2-(n+2)m}{1-m}\\right)}\\notag\\\\\n=&\\frac{(1-m)}{2(n-2-nm)}+\\frac{(1-m)}{(n-2-nm)}\\lim_{s\\to\\infty}\\frac{\\int_{s_1}^s\\frac{f(z)}{z}\\,dz}{f(s)}\\notag\\\\\n=&\\frac{(1-m)}{2(n-2-nm)}.\n\\end{align}\nHence letting first $s\\to\\infty$ and then $\\varepsilon\\to 0$ in \\eqref{h-ineqn32}, by \\eqref{f-defn} and \\eqref{f-ratio-limit} we get \\eqref{h2-limit} and the lemma follows.\n\\end{proof}\n\nBy \\eqref{h2-infty} and Lemma \\ref{h2-limit-lem} we have the following result.\n\n\\begin{lem}\\label{h2-expansion-lem}\nLet $n\\ge 3$, $0<m<\\frac{n-2}{n}$, $m\\ne\\frac{n-2}{n+2}$, $\\lambda>0$ and $\\beta>0$. 
Then \n\\begin{equation}\\label{h2-expansion}\nh_2(s)=-\\frac{(1-m)a_1(\\lambda,\\beta)}{2(n-2-nm)\\beta s}+\\frac{o(s)}{s}\\quad\\mbox{ as }s\\to\\infty\n\\end{equation}\nwhere $a_1(\\lambda,\\beta)$ is given by \\eqref{a1-defn} with $K(\\lambda,\\beta)$ given by \\eqref{K-defn}.\n\\end{lem}\n\nWe are now ready for the proof of Theorem \\ref{higher-order-expansion-thm}.\n\n\\noindent{\\bf Proof of Theorem \\ref{higher-order-expansion-thm}}: Let $a_0$, $a_1(\\lambda,\\beta)$, be given by \\eqref{a0-defn} and \\eqref{a1-defn} with $K(\\lambda,\\beta)$ given by \\eqref{K-defn}. Let \n\\begin{equation*}\nK_0=\\frac{(1-m)K(1,1)}{2(n-1)(n-2-nm)}.\n\\end{equation*}\nBy \\eqref{w-defn}, \\eqref{h-defn10}, \\eqref{h1-defn}, \\eqref{h2-defn} and \\eqref{h2-expansion}, \n\\begin{align}\\label{v11-expansion}\nv_{1,1}(r)^{1-m}=&\\frac{2(n-1)(n-2-nm)}{(1-m)r^2}\\left\\{\\log r-\\frac{(n-2-(n+2)m)}{2(n-2-nm)}\\log (\\log r)+K_0\\right.\\notag\\\\\n&\\qquad+\\frac{(n-2-(n+2)m)^2}{4(n-2-nm)^2}\\left.\\left(\\frac{1+\\log(\\log r)}{\\log r}\\right)\n-\\frac{(1-m)^2}{4(n-1)(n-2-nm)^2}\\frac{a_1(1,1)}{\\log r}\\right.\\notag\\\\\n&\\qquad \\left.+\\frac{o(\\log r)}{\\log r}\\right\\}\\notag\\\\\n=&\\frac{2(n-1)(n-2-nm)}{(1-m)r^2}\\left\\{\\log r-\\frac{(n-2-(n+2)m)}{2(n-2-nm)}\\log (\\log r)+K_0+\\frac{a_0}{\\log r}\\right.\\notag\\\\\n&\\qquad\\left.+\\frac{(n-2-(n+2)m)^2}{4(n-2-nm)^2}\\cdot\\frac{\\log(\\log r)}{\\log r}+\\frac{o(\\log r)}{\\log r}\\right\\}\\quad\\mbox{ as }r\\to\\infty.\n\\end{align}\nThen by (2.19) of \\cite{CD} and \\eqref{v11-expansion},\n\\begin{align*}\nv_{\\lambda,\\beta}(r)^{1-m}=&\\lambda^{1-m} v_{1,1}(\\lambda^{\\frac{1-m}{2}}\\sqrt{\\beta}r)^{1-m}\\notag\\\\\n=&\\frac{2(n-1)(n-2-nm)}{(1-m)\\beta r^2}\\left\\{\\log (\\lambda^{\\frac{1-m}{2}}\\sqrt{\\beta}r)-\\frac{(n-2-(n+2)m)}{2(n-2-nm)}\\log (\\log (\\lambda^{\\frac{1-m}{2}}\\sqrt{\\beta}r))+K_0\n\\right.\\notag\\\\\n&\\qquad+\\frac{a_0}{\\log 
(\\lambda^{\\frac{1-m}{2}}\\sqrt{\\beta}r)}\n+\\frac{(n-2-(n+2)m)^2}{4(n-2-nm)^2}\\cdot\\frac{\\log(\\log (\\lambda^{\\frac{1-m}{2}}\\sqrt{\\beta}r))}{\\log (\\lambda^{\\frac{1-m}{2}}\\sqrt{\\beta}r)}\\notag\\\\\n&\\qquad\\left.+\\frac{o(\\log (\\lambda^{\\frac{1-m}{2}}\\sqrt{\\beta}r))}{\\log (\\lambda^{\\frac{1-m}{2}}\\sqrt{\\beta}r)}\\right\\}\\notag\\\\\n=&\\frac{2(n-1)(n-2-nm)}{(1-m)\\beta r^2}\\left\\{\\log r -\\frac{(n-2-(n+2)m)}{2(n-2-nm)}\\log (\\log r))\n+\\frac{1-m}{2}\\log\\lambda\\right.\\notag\\\\\n&\\qquad \\left.+\\frac{1}{2}\\log\\beta+K_0+\\frac{a_0}{\\log r}\n+\\frac{(n-2-(n+2)m)^2}{4(n-2-nm)^2}\\cdot\\frac{\\log(\\log r)}{\\log r}+\\frac{o(\\log r)}{\\log r}\\right\\}\n\\end{align*}\nas $r\\to\\infty$ and Theorem \\ref{higher-order-expansion-thm} follows. \n\n{\\hfill$\\square$\\vspace{6pt}} \n\n\\begin{rmk}\nFrom \\eqref{v-lambda-beta-expansion1},\n\\begin{align*}\nh_1(s)=&\\frac{2(n-1)(n-2-nm)}{(1-m)\\beta}\\left\\{\n\\frac{1-m}{2}\\log\\lambda +\\frac{1}{2}\\log\\beta+K_0+\\frac{a_0}{s}\n+\\frac{(n-2-(n+2)m)^2}{4(n-2-nm)^2}\\cdot\\frac{\\log s}{s}\\right.\\\\\n&\\qquad\\left.+\\frac{o(s)}{s}\\right\\}\\qquad\\mbox{ as }s\\to\\infty.\n\\end{align*}\nHence \n\\begin{equation}\\label{h1-limit2}\n\\lim_{s\\to\\infty}h_1(s)=\\frac{2(n-1)(n-2-nm)}{(1-m)\\beta}\\left\\{\n\\frac{1-m}{2}\\log\\lambda +\\frac{1}{2}\\log\\beta+K_0\\right\\}.\n\\end{equation}\nThus by \\eqref{K-defn} and \\eqref{h1-limit2},\n\\begin{equation*}\nK(\\lambda,\\beta)=\\frac{2(n-1)(n-2-nm)}{(1-m)\\beta}\\left\\{\n\\frac{1-m}{2}\\log\\lambda +\\frac{1}{2}\\log\\beta+K_0\\right\\}.\n\\end{equation*}\n\\end{rmk}\n \n\\noindent{\\bf Proof of Theorem \\ref{convergence-thm}}: Since the proof is similar to the proof of Theorem 3.1 and Corollary 3.2 of \\cite{CD} we will only sketch the proof here. Let $K_0$ be given by Theorem \\ref{higher-order-expansion-thm} and $\\lambda_1=\\left(e^{2(K_1-K_0)}\/\\beta\\right)^{\\frac{1}{1-m}}$. 
Then for any $0<\\varepsilon<\\lambda_1$ there exists a constant $R_{\\varepsilon}>0$ such that\n\\begin{align}\\label{initial-data-compare}\n&u_{\\lambda_1-\\varepsilon,\\beta}(x)\\le \\phi(x)\\le u_{\\lambda_1+\\varepsilon,\\beta}(x)\\quad\\forall |x|\\ge R_{\\varepsilon}\\notag\\\\\n\\Rightarrow\\quad&u_{\\lambda_1-\\varepsilon,\\beta}(x)+\\psi(x)\\le u_0(x)\\le u_{\\lambda_1+\\varepsilon,\\beta}(x)+\\psi(x)\\quad\\forall |x|\\ge R_{\\varepsilon}.\n\\end{align}\nFor any $\\delta>0$, let $u_1$, $u_2$, $u_{1,\\delta}$ and $w_{1,\\delta}$ be the solution of \\eqref{cauchy-problem} with initial value \n\\begin{equation*}\n\\min (u_{\\lambda_1-\\varepsilon,\\beta}(x)+\\psi(x), u_0(x)),\\,\\, \\max (u_{\\lambda_1+\\varepsilon,\\beta}(x)+\\psi(x), u_0(x)),\\,\\, \\min (u_{\\lambda_1-\\varepsilon,\\beta}(x)+\\psi(x), u_0(x))+\\delta,\n\\end{equation*}\n and $u_{\\lambda_1-\\varepsilon,\\beta}(x)+\\delta$\nrespectively given by Theorem 1.1 of \\cite{Hs2}. Let $\\widetilde{u}_1$, $\\widetilde{u}_2$, $\\widetilde{u}_{1,\\delta}$ and $\\widetilde{w}_{1,\\delta}$ be given by \\eqref{rescaled-soln} with $u$ being replaced by $u_1$, $u_2$, $u_{1,\\delta}$ and $w_{1,\\delta}$ respectively. 
By \\eqref{initial-data-compare}\nand the construction of solutions in \\cite{Hs2},\n\\begin{align}\n&u_1\\le u\\le u_2, \\quad u_{1,\\delta}\\ge\\delta, \\quad w_{1,\\delta}\\ge\\delta\\quad\\mbox{ in }{\\mathbb{R}}^n\\times (0,\\infty)\\quad\\forall\\delta>0\\label{u1-u-u2-compare}\\\\\n\\Rightarrow\\quad& \\widetilde{u}_1\\le \\widetilde{u}\\le\\widetilde{u}_2, \\quad\\mbox{ in }{\\mathbb{R}}^n\\times (0,\\infty)\\label{u1-u-u2-tilde-compare2}.\n\\end{align}\nBy \\eqref{u1-u-u2-compare} and Lemma \\ref{L1-comparison-lem},\n\\begin{equation}\\label{u-tidle-L1-comparison}\n\\|\\widetilde{u}_{1,\\delta}(\\cdot,t)-\\widetilde{w}_{1,\\delta}(\\cdot,t)\\|_{L^1({\\mathbb{R}}^n)}\\le e^{-\\frac{(n-2-nm)}{1-m}t} \\|\\min (u_{\\lambda_1-\\varepsilon,\\beta}+\\psi, u_0)-u_{\\lambda_1-\\varepsilon,\\beta}\\|_{L^1({\\mathbb{R}}^n)}\\quad\\forall t >0\n\\end{equation} \nSince $\\min (u_{\\lambda_1-\\varepsilon,\\beta}(x)+\\psi(x), u_0(x))+\\delta$ and $u_{\\lambda_1-\\varepsilon,\\beta}(x)+\\delta$ decreases monotonically to \n\\begin{equation*}\n\\min (u_{\\lambda-\\varepsilon,\\beta}(x)+\\psi(x), u_0(x))\\quad\\mbox{ and } \n\\quad u_{\\lambda_1-\\varepsilon,\\beta}(x)\n\\end{equation*}\n as $\\delta\\to 0$, $u_{1,\\delta}$ and $w_{1,\\delta}$ decreases monotonically to $u_1$ and \n$e^{-\\frac{2\\beta}{1-m}t}u_{\\lambda_1-\\varepsilon,\\beta}(e^{-\\beta t}x)$ as $\\delta\\to 0$.\nHence letting $\\delta\\to 0$ in \\eqref{u-tidle-L1-comparison},\n\\begin{equation}\\label{u-tidle-L1-comparison2}\n\\|\\widetilde{u}_1(\\cdot,t)-u_{\\lambda_1-\\varepsilon,\\beta}\\|_{L^1({\\mathbb{R}}^n)}\\le e^{-\\frac{(n-2-nm)}{1-m}t} \\|\\min (u_{\\lambda_1-\\varepsilon,\\beta}+\\psi, u_0)-u_{\\lambda_1-\\varepsilon,\\beta}\\|_{L^1({\\mathbb{R}}^n)}\\quad\\forall t >0\n\\end{equation} \nSimilarly,\n\\begin{equation}\\label{u-tidle-L1-comparison3}\n\\|\\widetilde{u}_2(\\cdot,t)-u_{\\lambda_1+\\varepsilon,\\beta}\\|_{L^1({\\mathbb{R}}^n)}\\le e^{-\\frac{(n-2-nm)}{1-m}t} \\|\\max 
(u_{\\lambda_1-\\varepsilon,\\beta}+\\psi, u_0)-u_{\\lambda_1+\\varepsilon,\\beta}\\|_{L^1({\\mathbb{R}}^n)}\\quad\\forall t >0\n\\end{equation} \nBy \\eqref{u1-u-u2-tilde-compare2}, \\eqref{u-tidle-L1-comparison2} and \\eqref{u-tidle-L1-comparison3}, and an argument similar to the proof of Theorem 3.1 of \\cite{CD}, the rescaled function $\\widetilde{u}(x,t)$ given by \\eqref{rescaled-soln} converges to $v_{\\lambda_1,\\beta}$ in $L_{loc}^1({\\mathbb{R}}^n)$ as $t\\to\\infty$.\n \nSuppose now $u_0$ also satisfies $u_0=\\phi\\in L^{\\infty}({\\mathbb{R}}^n)$. Then by an argument similar to the proof of Corollary 3.2 of \\cite{CD}, there exists a constant $\\lambda_2>0$ such that\n\\begin{equation*}\nu_0\\le v_{\\lambda_2,\\beta}\\quad\\mbox{ in }{\\mathbb{R}}^n\n\\end{equation*}\nHence by maximum principle for solutions of \\eqref{fde-eqn} in bounded domains (cf. Lemma 2.3 of \\cite{DaK}) and the construction of solution \\eqref{cauchy-problem} in \\cite{Hs2},\n\\begin{align}\\label{u-tilde-v-bd}\n&u(x,t)\\le e^{-\\frac{2\\beta}{1-m}t}v_{\\lambda_2,\\beta}(e^{-\\beta t}x)\\quad\\forall x\\in{\\mathbb{R}}^n, t>0\\notag\\\\\n\\Rightarrow\\quad&\\widetilde{u}(x,t)\\le v_{\\lambda_2,\\beta}(x)\\qquad\\qquad\\,\\forall x\\in{\\mathbb{R}}^n, t>0.\n\\end{align} \nThen by \\eqref{u-tilde-eqn}, \\eqref{u-tilde-v-bd}, and an argument similar to the proof on P.10 of \\cite{CD}, the rescaled function $\\widetilde{u}(x,t)$ converges to $v_{\\lambda_1,\\beta}$ uniformly in $C^{2,1}(E)$ for any compact subset $E\\subset{\\mathbb{R}}^n$ as $t\\to\\infty$. 
\n\n{\\hfill$\\square$\\vspace{6pt}} \n\nFinally, by Theorem \\ref{higher-order-expansion-thm} and an argument similar to the proofs of Theorem 3.6 and Proposition 3.9 of \\cite{CD}, we have the following result.\n\n\\begin{thm}\\label{convergence-thm2}\nLet $n\\ge 3$, $0<m<\\frac{n-2}{n}$, $\\beta>0$, $0\\le u_0=\\phi+\\psi$, $ \\phi\\in L_{loc}^p({\\mathbb{R}}^n)$, $\\psi\\in L^1({\\mathbb{R}}^n)\\cap L_{loc}^p({\\mathbb{R}}^n)$, for some constant $p>\\frac{(1-m)n}{2}$, such that \n\\begin{equation*}\nK_2:=\\limsup_{|x|\\to\\infty}\\left[|x|^2u_0(x)^{1-m}-\\frac{c_1}{\\beta}\\left(\\log|x|-\\frac{(n-2-(n+2)m)}{2(n-2-nm)}\\log(\\log |x|)\\right)\\right]<\\infty\n\\end{equation*}\nholds and $\\phi$ satisfies \\eqref{phi-expansion} for some constant $K_1\\in{\\mathbb{R}}$, where $c_1=2(n-1)(n-2-nm)\/(1-m)$.\nIf $u$ is the unique solution of \\eqref{fde-eqn} in ${\\mathbb{R}}^n\\times (0,\\infty)$ given by\nTheorem 1.1 of \\cite{Hs2}, then as $t\\to\\infty$, the rescaled function $\\widetilde{u}(x,t)$ given by \\eqref{rescaled-soln} converges to $v_{\\lambda_1,\\beta}$ uniformly in $C^{2,1}(E)$ for any compact subset $E\\subset{\\mathbb{R}}^n$ with $\\lambda_1=\\left(e^{2(K_1-K_0)}\/\\beta\\right)^{\\frac{1}{1-m}}$, where the constant $K_0$ is given by Theorem \\ref{higher-order-expansion-thm}. Moreover,\n\\begin{align*}\n&\\limsup_{|x|\\to\\infty}\\left[|x|^2u(x,t)^{1-m}-\\frac{c_1}{\\beta}\\left(\\log|x|-\\frac{n-2-(n+2)m}{2(n-2-nm)}\\log(\\log |x|)\\right)\\right]\\notag\\\\\n\\le&K_2-\\frac{2(n-1)(n-2-nm)}{(1-m)}t\\quad\\forall t\\ge 0.\n\\end{align*}\n\\end{thm}\n \n\\section{Appendix}\n\\setcounter{equation}{0}\n\\setcounter{thm}{0} \n\nFor the sake of completeness, in this appendix we state and prove the analogue of Lemma \\ref{h-limit-lem1} for the case $m=\\frac{n-2}{n+2}$, $n\\ge 3$.\n\n\\begin{prop}(Proposition 2.3 of \\cite{CD})\nLet $n\\ge 3$, $m=\\frac{n-2}{n+2}$, $\\lambda>0$ and $\\beta>0$. 
Let $v$ be the solution of \\eqref{elliptic-eqn} and $h$ be given by \\eqref{h-defn10} with $w$ given by \\eqref{w-defn}. Then $h$ satisfies\n\\begin{equation*}\n\\lim_{s\\to\\infty}s^2\\,h_s(s)=\\frac{(6-n)(n-1)}{4\\beta}.\n\\end{equation*}\n\\end{prop}\n\\begin{proof}\nThis proposition is stated and proved in \\cite{CD}. For the sake of completeness we will give a simple different proof of the proposition here. We first observe that by Theorem \\ref{w's-bd-thm},\n\\begin{equation}\\label{w-ratio-limit}\n\\lim_{s\\to\\infty}\\frac{sw_s(s)^2}{w(s)}=\\frac{2(n-1)(n-2-nm)}{(1-m)\\beta}=\\frac{(n-1)(n-2)}{\\beta}.\n\\end{equation}\nLet\n\\begin{equation*}\na_4=\\frac{(1-2m)(n-1)(n-2)}{(1-m)\\beta}.\n\\end{equation*}\nThen by \\eqref{h-eqn2} and \\eqref{w-ratio-limit} for any $0<\\varepsilon<|a_4|\/2$ there exists a constant $s_1\\in{\\mathbb{R}}$ such that\n\\begin{equation}\\label{h-ineqn31}\n(a_4-\\varepsilon)s^{-1}\\le h_{ss}+\\left(\\frac{2(n-2-nm)}{(1-m)}s+\\frac{\\beta}{n-1}h+\\frac{n-2-(n+2)m}{1-m}\\right)h_s\\le (a_4+\\varepsilon)s^{-1}\\quad\\forall s\\ge s_1.\n\\end{equation}\nBy \\eqref{h-ineqn31} and an argument similar to the proof of Lemma \\ref{h2-limit-lem},\n\\begin{equation*}\n\\lim_{s\\to\\infty}s^2\\,h_s(s)=\\frac{(1-m)a_4}{2(n-2-nm)}=\\frac{(6-n)(n-1)}{4\\beta}\n\\end{equation*}\nand the proposition follows.\n\\end{proof}\n\n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nThe following terminology is basic in our research of puns. {\\bf A pun} is a) a short humorous genre,\nwhere a word or phrase is intentionally used in two meanings, b) a means of expression, the essence of which \nis to use a word or phrase so that in the given context the word or phrase can be understood in two meanings simultaneously.\n{\\bf A target word} is a word, that appears in two meanings. 
{\\bf A homographic pun} is a pun that \n``exploits distinct meanings of the same written word''~\\cite{miller2015automatic} (these can be meanings of a polysemantic word, \nor homonyms, including homonymic word forms). {\\bf A heterographic pun} is a pun, in which the target word \nresembles another word, or phrase in spelling; we will call the latter {\\bf the second target word}. \nConsider the following example (the Banker joke):\n\n\\begin{quote}\n``I used to be a banker, but I lost interest.''\n\\end{quote}\n\nThe Banker joke is a homographic pun; {\\em interest} is the target word. Unlike it, the Church joke below is a heterographic pun; \n{\\em propane} is the target word, {\\em profane} is the second target word:\n\n\\begin{quote}\n``When the church bought gas for their annual barbecue, proceeds went from the sacred to the propane.''\n\\end{quote} \n\nOur model of automatic pun analysis is based on the following premise: in a pun, there are two groups of words, \nand their meanings, that indicate the two meanings in which the target word is used. These groups can overlap, \ni.e. contain the same polysemantic words, used in different meanings. \n\nIn the Banker joke, words, and collocations {\\em banker}, {\\em lost interest} point at the professional status of the narrator, and his\/her \ncareer failure. At the same time, {\\em used to}, {\\em lost interest} tell a story of losing emotional attachment to the profession: \nthe narrator became disinterested. The algorithm of pun recognition, which we suggest, discovers these two groups of words, based on common \nsemes\\footnote{Bits of meaning. 
Semes are parts of meaning present both in a word and in its hypernym.\nMoving up a taxonomy, such as Thesaurus or WordNet, hypernyms become more general, and the seme connecting them to the word becomes more general, too.} \n(Subtask 1), finds the words which belong to both groups and chooses the target word (Subtask 2), \nand, based on the common semes, picks the most suitable meaning which the target word exploits (Subtask 3). \nIn the case of heterographic puns, in Subtask 2, the algorithm looks for the word or phrase which appears in one group \nand {\em not} in the other.\n\n\\section{Subtask 1: Mining Semantic Fields}\n\nWe will call a semantic field a group of words and collocations which share a common seme. In taxonomies like WordNet~\\cite{kilgarriff2000wordnet} and \nRoget's Thesaurus~\\cite{roget2004roget} (further referred to as Thesaurus), semes appear as hierarchies of word meanings. Top levels\nattract words with more general meanings (hypernyms). For example, Thesaurus has six top-level Classes, which divide into Divisions, which divide\ninto Sections, and so on, down to the fifth, lowest level. WordNet's structure is not so transparent.\nApplying such dictionaries to get semantic fields (the mentioned common groups of words) in a pun is, therefore, the task of finding \nthe two most general hypernyms in WordNet, or two relevant Classes among the six Classes in Thesaurus. We chose Thesaurus, as its structure \nis only five levels deep, Class labels are not lemmas themselves but arbitrary names (we used numbers instead), and it allows parsing \nat a certain level and inserting corrections (adding lemmas, merging subsections, etc.\\footnote{For example, we edited Thesaurus, adding words \nwhich were absent from it. If a word in a pun was missing in Thesaurus, the system checked for its hypernyms in WordNet, and added the word \nto those Sections in Thesaurus which contained the hypernyms.}). 
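The hypernym back-off described in the footnote can be sketched as follows. All data structures here (`THESAURUS`, `HYPERNYMS`, the Section numbers) are toy stand-ins for illustration, not the real Roget index or the WordNet API.

```python
# Toy stand-in: word -> set of Roget Section numbers (hypothetical values).
THESAURUS = {
    "dog": {29},
    "animal": {29},
}
# Toy stand-in for WordNet hypernym chains (hypothetical).
HYPERNYMS = {"poodle": ["dog", "animal"]}

def sections(word):
    """Return Thesaurus Sections for `word`; if the word is missing, fall
    back to the Sections of its hypernyms and cache the result, i.e.
    'add the word to Thesaurus'."""
    if word in THESAURUS:
        return THESAURUS[word]
    found = set()
    for h in HYPERNYMS.get(word, []):
        found |= THESAURUS.get(h, set())
    if found:
        THESAURUS[word] = found  # insert the correction into Thesaurus
    return found

print(sections("poodle"))  # -> {29}
```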
After some experimentation, instead of Classes, we chose to search for relevant \nSections, which are 34 subdivisions of the six Classes\\footnote{Sections are not always {\\em immediate} subdivisions of a Class. \nSome Sections are grouped in Divisions.}.\n\nAfter normalization (including change to lowercase; part-of-speech tagging, tokenization, and lemmatization with NLTK tools~\\cite{bird2009natural}; \ncollocation extraction\\footnote{To extract collocations and search for them in Thesaurus, we applied our own procedure, \nbased on a part-of-speech analysis.}; stop-words removal\\footnote{After lemmatization, all words are analyzed in collocations, but only nouns, adjectives, and\nverbs compose a list of separate words.}), the algorithm collects Section numbers for every word, and collocation, and removes duplicates \n(in Thesaurus, homonyms proper can belong to different subdivisions in the same or different Sections). \nTable~\\ref{tab:sem} shows what Sections words of the Banker joke belong to.\n\n\\begin{table*}\n\\centering\n\\begin{tabular}{lll}\n \\bf Word & \\bf Section No., Section name in Thesaurus \\\\ \n \\hline\n I & - \\\\\n use & 24, Volition In General \\\\\n & 30, Possessive Relations \\\\\n to & - \\\\\n be & 0, Existence \\\\\n & 19, Results Of Reasoning \\\\\n a & - & \\\\\n banker & 31, Affections In General \\\\\n & 30, Possessive Relations \\\\\n but & - \\\\\n lose & 21, Nature Of Ideas Communicated \\\\\n & 26, Results Of Voluntary Action \\\\\n & 30, Possessive Relations \\\\\n & 19, Results Of Reasoning \\\\\n interest & 30, Possessive Relations \\\\\n & 25, Antagonism \\\\\n & 24, Volition In General \\\\\n & 7, Causation \\\\\n & 31, Affections In General \\\\\n & 16, Precursory Conditions And Operations \\\\\n & 1, Relation \\\\\n\\end{tabular}\n\\caption{Semantic fields in the Banker joke}\\label{tab:sem}\n\\end{table*}\n\nThen the semantic vector of a pun is calculated. 
Every pun \\(p\\) is a vector in a 34-dimensional space:\n\\[p_{i}=p_{i}(s_{1i}, s_{2i},...,s_{34i})\\]\\label{f:pi}\nThe value of every element \\(s_{ki}\\) equals the number of words in a pun which belong to a Section \\(S_{k}\\).\nThe algorithm passes from Section to Section, each time checking every word \\(w_{ji}\\) in the list of extracted words \\(l_{i}\\). \nIf a word belongs to a Section, the value of \\(s_{ki}\\) increases by 1:\n\\[s_{ki}=\\displaystyle\\sum_{j=1}^{l_{i}} \\left\\{1\\middle|w_{ji} \\in S_{k}\\right\\}, k=1,2,...,34, i=1,2,3...\\]\\label{f:ski}\nFor example, the semantic vector of the Banker joke looks as follows: see Table~\\ref{tab:pbnker}.\n\\begin{table*}\n\\centering\n\\begin{tabular}{lll}\n \\(p_{Banker}\\) & \\(\\{ 1 , 1 , 0 , 0 , 0 , 0 , 0 , 1 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 1 , 0 , 0 , 2 , 0 , 1 , 0 , 0 , 2 , 1 , 1 , 0 , 0 , 0 , 4 , 2, 0, 0 \\}\\)\\\\ \n\\end{tabular}\n\\caption{Semantic vector of the Banker joke}\\label{tab:pbnker}\n\\end{table*}\n\nTo test the algorithm, we first collected 2484 puns from different Internet resources and, second, built a corpus\nof 2484 random sentences of length 5 to 25 words from different NLTK corpora~\\cite{bird2009natural} plus several hundred aphorisms and \nproverbs from different Internet sites. We shuffled the puns and the random sentences and split each set into two equal groups, the first halves forming \na training set and the other halves a test set. The classification was conducted using different Scikit-learn~\\cite{pedregosa2011scikit} \nalgorithms. We also singled out 191 homographic puns and 198 heterographic puns, and tested them against the same number of random\nsentences.\nIn all the tests\\footnote{The tests were run before the competition. Results of the competition for our system are given in Table~\\ref{results}.}, \nthe Scikit-learn algorithm of SVM with the Radial Basis Function (RBF) kernel produced the highest average \nF-measure results (\\(\\bar{f}=\\frac{f_{puns}+f_{random}}{2}\\)). 
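The construction of \(s_{ki}\) above amounts to counting, for each of the 34 Sections, how many extracted words belong to it. A minimal sketch, with the Banker-joke lookup from Table 1 standing in for the real Thesaurus index:

```python
# Toy stand-in: word -> Sections, taken from Table 1 (Banker joke).
word_sections = {
    "use": {24, 30}, "be": {0, 19}, "banker": {31, 30},
    "lose": {21, 26, 30, 19},
    "interest": {30, 25, 24, 7, 31, 16, 1},
}

def semantic_vector(words, n_sections=34):
    """Count, per Section k, the words that belong to S_k (formula for s_ki)."""
    vec = [0] * n_sections
    for w in words:
        for k in word_sections.get(w, set()):
            vec[k] += 1
    return vec

p = semantic_vector(["use", "be", "banker", "lose", "interest"])
# p reproduces the vector of Table 2, e.g. p[30] == 4 (Possessive Relations).
# Such 34-dimensional vectors are what the scikit-learn classifiers
# (e.g. sklearn.svm.SVC(kernel="rbf")) are trained on in the experiments.
```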
In addition, its results are smoother, comparing the difference \nbetween precision, and recall (which leads to the highest F-measure scores) within the two classes (puns, and random sentences), \nand between the classes (average scores). Table~\\ref{punrec} illustrates results of different algorithms in class ``Puns'' (not \naverage results between puns, and not puns). The results were higher \nfor the split selection, reaching 0.79 (homographic), and 0.78 (heterographic) scores of F-measure. The common selection \ngot the maximum of 0.7 for average F-measure in several tests. The higher results of split selection may be due to \na larger training set.\n\n\\begin{table*}\n\\centering\n\\begin{tabular}{llll}\n\\hline\n\\bf Method & \\bf Precision & \\bf Recall & \\bf F-measure \\\\\n\\hline\n\\bf Common selection \\\\\n\\hline\nSVM with linear kernel & 0.67 & 0.68 & 0.67 \\\\\nSVM with polynomial kernel & 0.65 & 0.79 & 0.72 \\\\\nSVM with Radial Basis Function (RBF) kernel & 0.70 & 0.70 & 0.70 \\\\\nSVM with linear kernel, normalized data & 0.62 & 0.74 & 0.67 \\\\\n\\hline\n\\bf Homographic puns \\\\\n\\hline\nSVM with RBF kernel & 0.79 & 0.80 & 0.79 \\\\\nMultinomial Naive Bayes & 0.71 & 0.80 & 0.76 \\\\\nLogistic Regression, standardized data & 0.77 & 0.71 & 0.74 \\\\\n\\hline\n\\bf Heterographic puns \\\\\n\\hline\nSVM with RBF kernel & 0.77 & 0.79 & 0.78 \\\\\nLogistic Regression & 0.74 & 0.75 & 0.74 \\\\\n\\hline\n\\end{tabular}\n\\caption{Tests for pun recognition.}\\label{punrec}\n\\end{table*}\n\n\\section{Subtask 2: Hitting the Target Word}\n\nWe suggest that, in a homographic pun, the target word is a word, which immediately belongs to two semantic fields; \nin a heterographic pun, the target word belongs to at least one discovered semantic field, and does not belong to the other. \nHowever, in reality, words in a sentence tend to belong to too many fields, and they create noise in the search. 
\nTo reduce influence of noisy fields, we included such non-semantic features in the model as\nthe tendency of the target word to occur at the end of a sentence, and part-of-speech distribution, given in~\\cite{miller2015automatic}.\nA-group (\\(W_{A}\\)) and B-group (\\(W_{B}\\)) are groups of words in a pun, which belong to the two semantic fields, \nsharing the target word. Thus, for some \\(s_{ki}\\), \\(k\\) becomes \\(A\\), or \\(B\\)~\\footnote{\\(s_{ki}\\) is always an integer; \\(W_{A}\\) and \\(W_{B}\\) \nare always lists of words; \\(A\\) is always an integer, \\(B\\) is a list of one or more integers.}. A-group attracts the maximum number of words in a pun:\n\\[s_{Ai}=\\max_{k} s_{ki}, k=1,2,...,34 \\]\n\nIn the Banker joke, \\(s_{Ai}=4, A=30\\) (Possessive Relations); words, that belong to this group, are {\\em use}, {\\em lose}, \n{\\em banker}, {\\em interest}. B-group is the second largest group in a pun:\n\\[s_{Bi}=\\max_{k} ( s_{ki}\/s_{Ai} ), k=1,2,...,34 \\]\nIn the Banker joke, \\(s_{Bi}=2\\). There are three groups of words, which have two words in them: \\(B_{1}=19\\), \nResults Of Reasoning: {\\em be}, {\\em lose}; \\(B_{2}=24\\), Volition In General: {\\em use}, {\\em interest}; \n\\(B_{3}=31\\), Affections In General: {\\em banker}, {\\em interest}. Ideally, there should be a group of about three words, \nand collocations, describing a person`s inner state ({\\em used to be}, {\\em lose}, {\\em interest}), and two words \n({\\em lose}, {\\em interest}) in \\(W_{A}\\) are a target phrase. However, due to the shortage of data \nabout collocations in dictionaries, \\(W_{B}\\) is split into several smaller groups. Consequently, to find the target word, \nwe have to appeal to other word features. 
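The A-group/B-group selection just described can be sketched as follows; the word-to-Sections mapping is again the toy Banker-joke lookup, not the real Thesaurus, and ties for the second-largest group are all kept, as in the text.

```python
from collections import defaultdict

def ab_groups(word_sections):
    """A-group: the Section attracting the most words; B-groups: the
    second-largest Sections (possibly several, as for the Banker joke)."""
    groups = defaultdict(list)                 # Section -> words in it
    for w, secs in word_sections.items():
        for k in secs:
            groups[k].append(w)
    sizes = {k: len(ws) for k, ws in groups.items()}
    a = max(sizes, key=sizes.get)              # A = argmax_k s_ki
    second = max(v for k, v in sizes.items() if k != a)
    b = [k for k, v in sizes.items() if v == second and k != a]
    return a, groups[a], b, [groups[k] for k in b]

word_sections = {"use": {24, 30}, "be": {0, 19}, "banker": {31, 30},
                 "lose": {21, 26, 30, 19},
                 "interest": {30, 25, 24, 7, 31, 16, 1}}
A, W_A, B, W_Bs = ab_groups(word_sections)
# A == 30 (Possessive Relations); B collects Sections 19, 24 and 31.
```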
In testing the system on homographic puns, we relied on the polysemantic character of words.\nIf in a joke, there are more than one value of \\(B\\), \\(W_{B}\\) candidates merge into one, with duplicates removed, \nand every word in \\(W_{B}\\) becomes the target word candidate: \\(c \\in W_{B}\\). In the Banker joke, \\(W_{B}\\) is a list of \n{\\em be}, {\\em lose}, {\\em use}, {\\em interest}, {\\em banker}; \\(B=\\{19,24,31\\}\\). Based on the definition of the target word \nin a homographic pun, words from \\(W_{B}\\), that are also found in \\(W_{A}\\), should have a privilege. Therefore, \nthe first value \\(v_{\\alpha}\\), each word gets, is the output of the Boolean function:\n\\[v_{\\alpha}(c)=\\left\\{\n \\begin{array}{lr}\n 2 \\quad \\mathrm{if } (c \\in W_{A})\\wedge(c \\in W_{B}) \\\\\n 1 \\quad \\mathrm{if } (c \\notin W_{A})\\wedge(c \\in W_{B}) \\\\\n \\end{array}\n\\right.\n\\]\n\nThe second value \\(v_{\\beta}\\) is the absolute frequency of a word in the union of \\(B_{1}\\), \\(B_{2}\\), etc., \nincluding duplicates: \\(v_{\\beta}(c)=f_c(W_{B_{1}} \\cup W_{B_{2}} \\cup W_{B_{3}})\\).\n\nThe third value \\(v_{\\gamma}\\) is a word position in the sentence: the closer the word is to the end, the\nbigger this value is. If the word occurs several times, the algorithm counts the average of the\nsums of position numbers.\n\nThe fourth value is part-of-speech probability \\(v_{\\delta}\\). 
Depending on the part of speech the word belongs to, it gets the following rate:\n\\[v_{\\delta}(c) = \\left\\{\n \\begin{array}{ll}\n 0.502 & \\mbox{if } c \\mbox{ is a noun} \\\\\n 0.338 & \\mbox{if } c \\mbox{ is a verb} \\\\\n 0.131 & \\mbox{if } c \\mbox{ is an adjective} \\\\\n 0.016 & \\mbox{if } c \\mbox{ is an adverb} \\\\\n 0.013 & \\mbox{otherwise} \\\\\n \\end{array}\n\\right.\n\\]\n\nThe final step is to combine the values, using multiplicative convolution, and choose the word with the\nmaximum rate:\n\\[z_{1}(W_{B})=\\left\\{c|\\max_{c} (v_{\\alpha} \\times v_{\\beta} \\times v_{\\gamma} \\times v_{\\delta}) \\right\\}\\]\n\nValues of the Banker joke are illustrated in Table~\\ref{valB}.\n\n\\begin{table}\n\\centering\n\\small\n\\begin{tabular}{cccccc}\n\\begin{tabular}{|l|l|l|l|l|l|}\n\\hline\n\\bf Word form & \\(v_{\\alpha}\\) & \\(v_{\\beta}\\) & \\(v_{\\gamma}\\) & \\(v_{\\delta}\\) & \\(v_{W_{Bk}}\\) \\\\\n\\hline\nbe & 1 & 1 & 4 & 0.338 & 1.352 \\\\\nlose & 2 & 1 & 9 & 0.338 & 6.084 \\\\\nuse & 2 & 1 & 2 & 0.338 & 1.352 \\\\\ninterest & 2 & 2 & 10 & 0.502 & \\bf 20.08 \\\\\nbanker & 2 & 1 & 6 & 0.502 & 6.024 \\\\\n\\hline\n\\end{tabular}\n\\end{tabular}\n\\caption{Values of the Banker joke.}\\label{valB}\n\\end{table}\n\nIn the solution for heterographic puns, we built a different model of the B-group. Unlike homographic\npuns, here the target word is missing in \\(W_{B}\\) (the reader has to guess the word or phrase\nhomonymous to the target word). Accordingly, we rely on the completeness of the union of \\(W_{A}\\)\nand \\(W_{B}\\): among the candidates for \\(W_{B}\\) (the second largest groups), those groups are relevant\nthat form the longest list with \\(W_{A}\\) (duplicates removed). In Ex. 2 (the Church joke), \\(W_{A}=\\{go, gas,\nannual, barbecue, propane\\}\\), and two groups form the largest union with it: \\(W_{B}=\\{buy,\nproceeds\\} + \\{sacred, church\\}\\). Every word in \\(W_{A}\\) and \\(W_{B}\\) can be the target word. The privilege\npasses to words used only in one of the groups. 
Ergo, the first value is:\n\\[v_{\\alpha}(c)=\\left\\{\n \\begin{array}{lr}\n 2 \\quad \\mathrm{if } (c \\in W_{A})\\oplus(c \\in W_{B}) \\\\\n 1 \\quad \\mathrm{otherwise} \\\\\n \\end{array}\n\\right.\n\\]\nFrequencies are of no value here; values of position in the sentence, and part-of-speech\ndistribution remain the same. The function output is:\n\\[z_{1}(W_{B})=\\left\\{c|\\max_{c} (v_{\\alpha} \\times v_{\\gamma} \\times v_{\\delta}) \\right\\}\\]\n\nValues of the Church joke are illustrated in Table~\\ref{valC}.\n\\begin{table}\n\\centering\n\\small\n\\begin{tabular}{ccccc}\n\\begin{tabular}{|l|l|l|l|l|}\n\\hline\n\\bf Word form & \\(v_{\\alpha}\\) & \\(v_{\\gamma}\\) & \\(v_{\\delta}\\) & \\(v_{W_{Ak}}, v_{W_{Bk}}\\) \\\\\n\\hline\npropane & 2 & 18 & 0.502 & \\bf 18.072 \\\\\nannual & 2 & 8 & 0.131 & 2.096 \\\\\ngas & 2 & 5 & 0.502 & 5.02 \\\\\nsacred & 2 & 15 & 0.338 & 10.14 \\\\\nchurch & 2 & 3 & 0.502 & 3.012 \\\\\nbarbecue & 2 & 9 & 0.502 & 9.036 \\\\\ngo & 2 & 12 & 0.338 & 8.112 \\\\\nproceeds & 2 & 11 & 0.502 & 11.044 \\\\\nbuy & 2 & 4 & 0.338 & 2.704 \\\\\n\\hline\n\\end{tabular}\n\\end{tabular}\n\\caption{Values of the Church joke.}\\label{valC}\n\\end{table}\n\n\\section{Subtask 3: Mapping Roget's Thesaurus to Wordnet}\nIn the last phase, we implemented an algorithm which maps Roget's Sections to synsets in Wordnet. \nIn homographic puns, definitions of a word in Wordnet are analyzed similarly to words in a pun, \nwhen searching for semantic fields, the words belong to. For example, words from the definitions \nof the synset {\\em interest} belong to the following Roget's Sections: \nSynset(interest.n.01)=a sense of concern with and curiosity about someone or something: \n(21, 19, 31, 24, 1, 30, 6, 16, 3, 31, 19, 12, 2, 0); Synset(sake.n.01)=a reason for wanting something done: \n15, 24, 18, 7, 19, 11, 2, 31, 24, 30, 12, 2, 0, 26, 24, etc. 
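The heterographic scoring above can be sketched as follows. The membership flags, sentence positions, and part-of-speech rates are taken from the Church-joke table; the function and variable names are our own illustration, not code from the system.

```python
# Part-of-speech rates for v_delta, as given in the text.
POS_RATE = {"NOUN": 0.502, "VERB": 0.338, "ADJ": 0.131, "ADV": 0.016}

def score(in_A, in_B, position, pos_tag):
    """Heterographic candidate score: v_alpha (XOR membership) times
    v_gamma (position in the sentence) times v_delta (POS rate)."""
    v_alpha = 2 if (in_A != in_B) else 1   # word in exactly one group
    v_gamma = position                     # closer to the end = larger
    v_delta = POS_RATE.get(pos_tag, 0.013)
    return v_alpha * v_gamma * v_delta

# Three candidates from the Church-joke table (tags as the table rates imply).
cands = {"propane":  score(True, False, 18, "NOUN"),
         "sacred":   score(False, True, 15, "VERB"),
         "proceeds": score(False, True, 11, "NOUN")}
target = max(cands, key=cands.get)  # -> "propane", matching the table
```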
When A-Section is discovered (for example,\nin the Banker joke, A=30 (Possessive Relations)), the synset with the maximum number of words in its definition,\nwhich belong to A-Section, becomes the A-synset. The B-synset is found likewise for the B-group with the exception \nthat it should not coincide with A-synset. In heterographic puns the B-group is also a marker of the second target word. \nEvery word in the index of Roget's Thesaurus is compared to the known target word using Damerau-Levenshtein distance. \nThe list is sorted in increasing order, and the algorithm begins to check what Roget's Sections every word belongs to, \nuntil it finds the word that belongs to a Section (or the Section, if there is only one) in the B-group. This word becomes \nthe second target word.\n\nNevertheless, as we did not have many trial data, but for the four examples, released before the competition, the first trials \nof the program on a large collection returned many errors, so we changed the algorithm for the B-group as follows.\n\n{\\em Homographic puns, first run.} B-synset is calculated on the basis of sense frequencies \n(the output is the most frequent sense). If it coincides with A-synset, the program returns the second frequent synset.\n\n{\\em Homographic puns, second run.} B-synset is calculated on the basis of Lesk distance, using\nbuilt-in NLTK Lesk function~\\cite{bird2009natural}. If it coincides with A-synset, the program returns another synset on the basis of sense frequencies, \nas in the first run.\n\n{\\em Heterographic puns, first run.} The second target word is calculated, based on Thesaurus and Damerau-Levenshtein distance; \nwords, missing in Thesaurus, are analyzed as their WordNet hypernyms. 
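The Damerau-Levenshtein comparison used to rank second-target-word candidates can be sketched as the restricted edit distance (substitutions, insertions, deletions, and adjacent transpositions); the candidate list below is a toy stand-in for Roget's index.

```python
def dl_distance(a, b):
    """Restricted Damerau-Levenshtein (optimal string alignment) distance."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if (i > 1 and j > 1 and a[i - 1] == b[j - 2]
                    and a[i - 2] == b[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

index = ["profane", "propel", "prose", "sacred"]   # toy Roget index
ranked = sorted(index, key=lambda w: dl_distance("propane", w))
# "profane" ranks first; the first candidate belonging to a B-group
# Section then becomes the second target word.
```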
In both runs for heterographic puns, synsets are\ncalculated, using the Lesk distance.\n\n{\\em Heterographic puns, second run.} The second target word is calculated on the basis of Brown corpus (NLTK~\\cite{bird2009natural}): \nif the word stands in the same context in Brown as it is in the pun, it becomes the target word. The size of the \ncontext window is (0; +3) for verbs, (0;+2) for adjectives; (-2;+2) for nouns, adverbs and other parts of speech \nwithin the sentence, where a word is used.\n\nTable~\\ref{results} illustrates competition results of our system.\n\n\\begin{table}\n\\centering\n\\small\n\\begin{tabular}{ccccc}\n\\begin{tabular}{|l|l|l|l|l|}\n\\hline\n\\bf Task & \\bf Precision & \\bf Recall & \\bf Accuracy & \\bf F1 \\\\\n\\hline\n1, Ho.\\footnote{Homographic.} & 0.8019 & 0.7785 & 0.7044 & 0.7900 \\\\\n1, He.\\footnote{Heterographic.} & 0.7585 & 0.6326 & 0.5938 & 0.6898 \\\\\n\\hline\n\\bf Task & \\bf Coverage & \\bf Precision & \\bf Recall & \\bf F1 \\\\\n\\hline\n2, Ho., run 1 & 1.0000 & 0.3279 & 0.3279 & 0.3279 \\\\\n2, Ho., run 2 & 1.0000 & 0.3167 & 0.3167 & 0.3167 \\\\\n2, He., run 1 & 1.0000 & 0.3029 & 0.3029 & 0.3029 \\\\ \n2, He., run 2 & 1.0000 & 0.3501 & 0.3501 & 0.3501 \\\\\n3, Ho., run 1 & 0.8760 & 0.0484 & 0.0424 & 0.0452 \\\\\n3, Ho., run 2 & 1.0000 & 0.0331 & 0.0331 & 0.0331 \\\\\n3, He., run 1 & 0.9709 & 0.0169 & 0.0164 & 0.0166 \\\\ \n3, He., run 2 & 1.0000 & 0.0118 & 0.0118 & 0.0118 \\\\\n\\hline\n\\end{tabular}\n\\end{tabular}\n\\caption{Competition results.}\\label{results}\n\\end{table}\n\n\\section{Conclusion}\n\nThe system, that we introduced, is based on one general supposition about the semantic structure of puns and combines two\ntypes of algorithms: supervised learning and rule-based. Not surprisingly, the supervised learning algorithm showed \nbetter results in solving an NLP-task, than the rule-based.\nAlso, in this implementation, we tried to combine two very different dictionaries (Roget's Thesaurus and Wordnet). 
Although the reliability of the Thesaurus in reproducing a universal semantic map can be doubted, it still proved to be quite an effective\nsource of data when used in Subtask 1. Judging by the test results, the attempts to map it to WordNet seem rather weak so far,\nwhich also raises a question: if different dictionaries treat the meanings of words differently, can there be an\nobjective and\/or universal semantic map to apply as the foundation for any WSD task? \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\n\\section{Introduction}\n\\label{sec:intr}\n\nThe decay constant $f_P$ of a pseudoscalar meson $P$ is defined\nby\n\\begin{align}\n  \\langle 0 | A^\\mu | P \\rangle = i p^\\mu f_{P} \\,, \n  \\label{eq:f}\n\\end{align}\nwhere the external state $|P\\rangle$ carries momentum $p^\\mu$, and the\naxial current $A^\\mu = \\bar{\\psi}_{h} \\gamma^\\mu \\gamma_5 \\psi_{l}$.\nHere, the subscript $h$ ($l$) represents heavy (light) flavors in\nthe $B_{(s)}$ and $D_{(s)}$ states.\nWe use the Oktay-Kronfeld (OK) action \\cite{Oktay:2008ex} for valence\nheavy quarks $\\psi_h$ with $h=b,c$, and the HISQ action\n\\cite{Follana:2006rc} for valence light quarks $\\chi$ that are\nrecast into the naive quark field $\\psi_l$.\nThe calculations are done on MILC HISQ ensembles with $N_f=2+1+1$\n\\cite{Bazavov:2012xda}, whose parameters are summarized in Table \\ref{tab:ensembles}. 
\n\n\\begin{table}[!b]\n  \\begin{center}\n  \\begin{tabular}{ l || l | l | c | l l l }\n    \\hline\\hline\n    ensemble ID & $a$ (fm) & $N_s^3 \\times N_t$ & $M_\\pi$\n    (MeV) & $am_l$ & $am_s$ & $am_c$ \\\\\n    \\hline\n    a12m310 & 0.1207(11) & $24^3\\times 64$ & 305.3(4) & 0.0102 &\n    0.0509 & 0.635 \\\\\n    a12m220 & 0.1184(10) & $32^3\\times 64$ & 216.9(2) & 0.00507 &\n    0.0507 & 0.628 \\\\\n    a12m130 & 0.1191(7) & $48^3\\times 64$ & 131.7(1) & 0.00184 &\n    0.0507 & 0.628 \\\\\n    \\hline\n    a09m310 & 0.0888(8) & $32^3\\times 96$ & 312.7(6) & 0.0074 &\n    0.037 & 0.440 \\\\\n    a09m220 & 0.0872(7) & $48^3\\times 96$ & 220.3(2) & 0.00363 &\n    0.0363 & 0.430 \\\\\n    \\hline\n    a06m310 & 0.0871(6) & $48^3\\times 144$ & 319.3(5) & 0.0048 &\n    0.024 & 0.286 \\\\\n    \\hline\\hline\n  \\end{tabular}\n  \\end{center}\n  \\caption{Parameters of the MILC HISQ ensembles with $N_f=2+1+1$\n    \\cite{Bazavov:2012xda} used in our calculations. The lattice\n    spacing $a$ is set by the Sommer scale $r_1$, and $N_s$ ($N_t$) is\n    the lattice size in the spatial (temporal) direction. $M_{\\pi}$\n    is the Goldstone pion mass, and $am_l$, $am_s$ and $am_c$ are the sea\n    quark masses for the light (up and down), strange and charm quarks\n    in lattice units, respectively.\n    \\label{tab:ensembles}}\n\\end{table}\n\n\n\nThe OK action $S_\\text{OK}$ improves on the Fermilab formulation\nof the Wilson clover action \\cite{ElKhadra:1996mp} by including\n$\\mathcal{O}(\\lambda^2)$ and $\\mathcal{O}(\\lambda^3)$ improvement\nterms in heavy quark effective theory (HQET) power counting:\n\\begin{align}\n  S_{\\text{OK}} & = a^{4} \\sum_{x} \\bar{\\psi} (x) \\left[\n    \\vphantom{\\sum_{j\\neq k} \\left\\{ i \\Sigma_{k} B_{k} \\,,\\Delta_{j}\n    \\right\\} } m_{0} + \\gamma_{4} D_{4} \\right. 
& \\leftarrow\n \\mathcal{O}(\\lambda^0) \\nonumber \\\\\n & \\hphantom{\\;=\\ a^{4}\\sum_{x}\\bar{\\psi}(x)} - \\frac{1}{2} a\n \\Delta_{4} + \\zeta \\bm{\\gamma \\cdot D} - \\frac{1}{2} r_{s} \\zeta a\n \\Delta^{(3)} - \\frac{1}{2} c_{B} a \\zeta i \\bm{\\Sigma \\cdot B} &\n \\leftarrow \\mathcal{O}(\\lambda^1) \\nonumber \\\\\n & \\hphantom{\\;=\\ a^{4}\\sum_{x}\\bar{\\psi}(x)} - \\frac{1}{2} c_{E} a\n \\zeta \\bm{\\alpha \\cdot E} & \\leftarrow \\mathcal{O}(\\lambda^2)\n \\nonumber \\\\\n &\\hphantom{=\\ a^{4}\\sum_{x}\\bar{\\psi}(x)}\n \\begin{rcases}\n & + c_{1} a^{2} \\sum_{k} \\gamma_{k} D_{k} \\Delta_{k} + c_{2}\n a^{2} \\left\\{ \\bm{\\gamma \\cdot D},\\Delta^{(3)} \\right\\} + c_{3}\n a^{2} \\left\\{ \\bm{\\gamma \\cdot D} \\, , i \\bm{\\Sigma \\cdot B}\n \\right\\} \\\\\n & \\left. + c_{EE} a^{2} \\left\\{ \\gamma_{4} D_{4} \\, , \\bm{\\alpha\n \\cdot E} \\right\\} + c_{4} a^{3} \\sum_{k} \\Delta_{k}^{2} +\n c_{5} a^{3} \\sum_{k} \\sum_{j \\neq k} \\left\\{ i \\Sigma_{k} B_{k}\n \\, , \\Delta_{j} \\right\\} \\right] \\psi(x)\n \\end{rcases} \\,.\n & \\leftarrow \\mathcal{O}(\\lambda^3)\n\\label{eq:ok-ac-1}\n\\end{align}\nThe definition of the operators in Eq.~\\eqref{eq:ok-ac-1} can be found\nin Ref.~\\cite{Oktay:2008ex}.\nThe bare quark mass $m_0$ is related to the hopping parameter\n$\\kappa$ as follows,\n\\begin{align}\n a m_{0} = \\frac{1}{2} \\left( \\frac{1}{\\kappa} -\n \\frac{1}{\\kappa_{\\text{crit}}} \\right)\\,.\n \\label{eq:bare-m-1}\n\\end{align}\nThe non-perturbatively tuned hopping parameters for bottom and charm\nquarks, $\\kappa_b$, $\\kappa_c$, and the critical hopping parameter\n$\\kappa_\\text{crit}$~\\cite{Bailey:2017xjk} for each measurement are\nsummarized in Table \\ref{tab:parameters}.\n\n\nIn order to achieve a better overlap with the wave functions of the\n$B_{(s)}$ and $D_{(s)}$ meson states, we apply the covariant Gaussian\nsmearing (CGS), $\\left\\{1 + \\sigma^2\\nabla^2\/(4 N_{\\text{GS}})\n\\right\\}^{N_{\\text{GS}}}$ to 
the point source and sink as in\nRef.~\\cite{Yoon:2016dij}.\nThe CGS parameters $\\{ \\sigma, N_\\text{GS} \\}$ for each measurement\nare given in Table \\ref{tab:parameters}.\nHere, we apply the CGS only to the heavy quark fields of the\npseudoscalar interpolating operators.\n\\section{Correlator and current improvement}\n\\label{sec:curim}\nThe meson-meson (MM) and meson-current (MC) 2-point correlators are\ndefined as follows \\cite{Bazavov:2011aa},\n\\begin{align}\n C_{\\text{MM}}(t) &= \\sum_{\\bf{x}} \\left\\langle\n \\mathcal{O}^{\\dagger}_P(t,{\\bf x}) \\mathcal{O}_P(0)\n \\right\\rangle \n = \\sum_{\\alpha = 1}^{4} \\sum_{\\bf{x}} \\left\\langle\n \\mathcal{O}^{\\dagger}_\\alpha(t,{\\bf x}) \\mathcal{O}_\\alpha(0)\n \\right\\rangle \\,, \\\\\n C_{\\text{MC}}(t) &= \\sum_{\\bf{x}} \\left\\langle A^{4\\dagger}(t,{\\bf x})\n \\mathcal{O}_P(0) \\right\\rangle\n = \\sum_{\\alpha = 1}^{4}\n \\sum_{\\bf{x}} \\left\\langle A^{4\\dagger}_\\alpha(t,{\\bf x})\n \\mathcal{O}_\\alpha(0) \\right\\rangle\\,,\n \\label{eq:corr-2pt}\n\\end{align}\nwhere the pseudoscalar heavy-light meson interpolating operator\n$\\mathcal{O}_\\alpha(t,{\\bf x})$ and the axial current operator\n$A^4_{\\alpha}(t,{\\bf x})$ are\n\\begin{align}\n \\mathcal{O}_\\alpha(t,{\\bf x}) &= \\left[ \\bar{\\psi}(t,{\\bf x})\n \\gamma_5 \\Omega(t,{\\bf x}) \\right]_\\alpha \\chi(t,{\\bf x}) \\,\\\\\n A^4_{\\alpha}(t,{\\bf x}) &= \\left[ \\bar{\\Psi}(t,{\\bf x}) \\gamma^4\n \\gamma_{5} \\Omega(t,{\\bf x}) \\right]_{\\alpha} \\chi(t,{\\bf x}) \\,.\n\\end{align}\nHere $\\psi$ is the OK heavy quark field, $\\chi$ is the HISQ \nlight quark field, and \n\\begin{align}\n\\Omega(t,{\\bf x}) \\equiv \\gamma_{1}^{\\;x_1} \\gamma_{2}^{\\;x_2}\n\\gamma_{3}^{\\;x_3} \\gamma_{4}^{\\;t} \\,,\n\\label{eq:omega-trans-1}\n\\end{align}\nand the subscript $\\alpha$ represents the taste degree of the\nstaggered light quarks.\nThe rotated heavy quark field $\\Psi$ is introduced to improve the\naxial current $A^4_{\\alpha}$ up 
to $O(\\lambda^3)$, the same\nlevel as the OK action.\n\\begin{align}\n \\Psi(t,{\\bf x}) & = \\Big( 1 & \\leftarrow\n O(\\lambda^{0}) \\nonumber \\\\\n & \\hphantom{=(\\ \\; } + d_{1} a \\bm{\\gamma \\cdot D} & \\leftarrow\n O(\\lambda^{1}) \\nonumber \\\\\n & \\hphantom{=(\\ \\; } + d_{2} a^{2} \\Delta^{\\left(3\\right)} + d_{B}\n a^{2} i \\bm{\\Sigma \\cdot B} - d_{E} a^{2} \\bm{\\alpha \\cdot\n E} & \\leftarrow O(\\lambda^{2}) \\nonumber \\\\\n &\\hphantom{=(\\ \\, }\n \\begin{rcases}\n & + d_{rE} a^{3} \\left\\{ \\bm{\\gamma \\cdot D}, \\bm{\\alpha \\cdot E}\n \\right\\} - d_{3} a^{3} \\sum_{i} \\gamma_{i} D_{i} \\Delta_{i} -\n d_{4} a^{3} \\left\\{ \\bm{\\gamma \\cdot D} , \\Delta^{\\left(3\\right)}\n \\right\\} \\\\\n & - d_{5} a^{3} \\left\\{ \\bm{\\gamma \\cdot D} , i \\bm{\\Sigma \\cdot\n B} \\right\\} + d_{EE} a^{3} \\left\\{ \\gamma_{4} D_{4} , \\bm{\\alpha\n \\cdot E} \\right\\} - d_{6} a^{3} \\left[ \\gamma_{4} D_{4} ,\n \\Delta^{\\left(3\\right)} \\right] \\\\\n \n \n & - d_{7} a^{3} \\left[ \\gamma_{4} D_{4} , i \\bm{\\Sigma\n \\cdot B} \\right] \\Big) \\psi (t,{\\bf x} ) \\,,\n \\end{rcases}\n & \\leftarrow O(\\lambda^3)\n \\label{eq:cur-imp-1}\n\\end{align}\nwhere the improvement coefficients $d_i$ are given in Ref.~\\cite{\n Bailey:2020uon}.\n\n\n\\begin{table}[!t]\n \\begin{center}\n \\begin{tabular}{ l || r | l l l | l | r }\n \\hline\\hline\n ensemble ID & $m_x \/ m_s$ & $\\kappa_{\\text{crit}}$ & $\\kappa_c$\n & $\\kappa_b$ & $\\{ \\sigma \\,,\\; N_{\\text{GS}} \\}$ &\n $N_{\\text{cfg}} \\times N_{\\text{src}}$ \\\\\n \\hline\n a12m310 & 1\/5, 1 & 0.051211 & 0.048524 & 0.04102 &\n $\\{ 1.5\\,,\\;5 \\}$ & $1053 \\times 3 $ \\\\\n a12m220 & 1\/10, 1 & 0.051218 & 0.048613 & 0.04070 &\n $\\{ 1.5\\,,\\;5 \\}$ & $1000 \\times 3 $ \\\\\n a12m130 & 1\/27, 1 & 0.05119 & 0.048501 & 0.041343 &\n $\\{ 1.5\\,,\\;5 \\}$ & $499 \\times 3 $ \\\\\n \\hline\n a09m310 & 1\/5, 1 & 0.05075 & 0.04894 & 0.0429 &\n $\\{ 2.0\\,,\\;10 \\}$ & $996 \\times 3$ \\\\\n 
a09m220 & 1\/10, 1 & 0.05077 & 0.04902 & 0.0431 &\n $\\{ 2.0\\,,\\;10 \\}$ & $1001 \\times 3$ \\\\\n \\hline\n a06m310 & 1\/5, 1 & 0.050357 & 0.04924 & 0.0452 &\n $\\{ 3.0\\,,\\;22 \\}$ & $1017 \\times 3$ \\\\\n \\hline\\hline\n \\end{tabular}\n \\end{center}\n \\caption{The 2${}^{\\rm nd}$ column gives the valence light quark masses\n $m_x$ and the following columns are the hopping parameters, CGS\n parameters and the number of measurements. $N_{\\text{cfg}}$ represents\n the number of gauge configurations analyzed and $N_{\\text{src}}$ is the number\n of sources used for measurement on each gauge configuration.\n \\label{tab:parameters}}\n\\end{table}\n\n\n\n\\section{Correlator fit}\n\\label{sec:corfit}\nWe fit the 2-point correlation functions $C_\\text{MM}(t)$ and\n$C_\\text{MC}(t)$ with three even time-parity and two odd time-parity\nstates and label it the 3+2-state fit.\nThe time parity is determined with respect to the shift operator in\nthe Euclidean time direction.\nThe fitting function is\n\\begin{align}\n C_{\\text{Y}}(t) & = g_{\\text{Y}}(t) \\pm\n g_{\\text{Y}}(T-t)\\,, \\qquad (\\text{$+$ for MM, $-$ for MC)}\n \\nonumber \\\\\n g_{\\text{Y}}(t) & = A_{0}^{\\text{Y}} e^{-M_0t}\n \\left[ 1 + R_{2}^{\\text{Y}}\n e^{-\\Delta M_{2}t} + R_{4}^{\\text{Y}}\n e^{-(\\Delta M_{2} + \\Delta M_{4})t}\n \\vphantom{e^{-\\Delta M_1^p t}} + \\cdots \\right. \\nonumber \\\\\n & \\hphantom{ = A_{0}^{\\text{Y}}e^{-M_0t} [ } \\left. 
\\ -\n \\left(-1\\right)^t R_{1}^{\\text{Y}} e^{-\\Delta M_1 t} -\n \\left(-1\\right)^t R_{3}^{\\text{Y}}\n e^{-(\\Delta M_1 + \\Delta M_3) t} + \\cdots\n \\right]\n \\label{eq:3+2-fit}\n\\end{align}\nwhere Y=MC or MM, $\\Delta M_i \\equiv M_i - M_{i-2}$, $M_{-1}\\equiv\nM_0$, and\n\\begin{align}\n A^\\text{MM}_i &\\equiv \\frac{1}{2M_i}\n \\langle 0 | \\mathcal{O}_P | P_i \\rangle\n\\langle P_i | \\mathcal{O}_P | 0 \\rangle\\,,\n\\qquad\nA^\\text{MC}_i \\equiv \\frac{1}{2M_i}\n\\langle 0 | A^4 | P_i \\rangle\n \\langle P_i | \\mathcal{O}_P | 0 \\rangle \\,,\n\\qquad\nR^\\text{Y}_i \\equiv \\frac{ A^\\text{Y}_i }{ A^\\text{Y}_0 }\n\\end{align}\nHere, $P_i$ represents the $i$-th excited meson state and $P_0$ the ground state. For a brevity, the subscript ``0'' for\nthe ground state is dropped from now on.\nWe take the following steps to analyze the 2-point correlation\nfunctions.\n\\begin{enumerate}\n\\item We fit the 2-point correlator, $C_\\text{MM}(t)$, data using the\n 3+2-state fit given in Eq.~\\eqref{eq:3+2-fit} to\n extract the ground state pseudoscalar meson mass $M(\\equiv M_0)$ and\n amplitude $A^\\text{MM}(\\equiv A^\\text{MM}_0)$ and control the \n excited states. We impose empirical Bayesian priors \\cite{\n Yoon:2016jzj} on the excited state mass gaps $\\Delta M_i$ and\n amplitude ratios $R^\\text{MM}_i$ to stabilize the fit. 
(See\n  Fig.~\\ref{fig:prior-plots-1}).\n\\item We feed the results for $M_0$ and $\\Delta M_i$ obtained\n  in the previous step as inputs into the fit for $C_\\text{MC}(t)$\n  to extract $A^\\text{MC} (\\equiv A^\\text{MC}_0)$ and the ratios\n  $R^\\text{MC}_i$.\n  We use the same fit range and fit functional form as taken\n  for $C_\\text{MM}(t)$.\n\\end{enumerate}\n\n\n\\begin{figure}[t] \n  \\subfigure[$B$ meson]{\n    \\label{fig:b-prior-1}\n    \\includegraphics[width=0.227\\textwidth]\n    {prior_PION_d_cvg_d_cvg_m00184_k041343_p000.pdf}\n  }\n  \n  \\subfigure[$B_s$ meson]{\n    \\label{fig:b-prior-2}\n    \\includegraphics[width=0.227\\textwidth]\n    {prior_PION_d_cvg_d_cvg_m0507_k041343_p000.pdf}\n  }\n  \n  \\subfigure[$D$ meson]{\n    \\label{fig:d-prior-1}\n    \\includegraphics[width=0.227\\textwidth]\n    {prior_PION_d_cvg_d_cvg_m00184_k048501_p000.pdf}\n  }\n  \n  \\subfigure[$D_s$ meson]{\n    \\label{fig:d-prior-2}\n    \\includegraphics[width=0.227\\textwidth]\n    {prior_PION_d_cvg_d_cvg_m0507_k048501_p000.pdf}\n  }\n  \\caption{ Fit results and Bayesian priors for $\\Delta M_i$ from the\n    $a12m130$ ensemble.}\n  \\label{fig:prior-plots-1}\n\\end{figure}\n\\begin{figure}[t] \n  \\subfigure{\n    \\label{fig:meff-mm-1}\n    \\includegraphics[width=0.475\\textwidth]\n    {meff_PION_d_cvg_d_cvg_m00184_k048501_p000.pdf}\n  }\n  \\hfill\n  \\subfigure{\n    \\label{fig:meff-mc-1}\n    \\includegraphics[width=0.475\\textwidth]\n    {meff_P5_A4_d_cvg_m00184_k048501_p000_l3.pdf}\n  }\n  \\caption{ Effective mass plots for the $C_\\text{MM}(t)$ and\n    $C_\\text{MC}(t)$ correlators of the $D$ meson on the $a12m130$\n    ensemble with $m_l=m_s\/27$.\n    The orange (purple) curves connect 3+2-state fit results on the\n    odd (even) time slices.\n    The horizontal red line shows the ground state mass $M_0$\n    within the fit range $[t_\\text{min},t_\\text{max}]=[5,20]$.\n    The axial current operators are improved up to the $\\lambda^3$\n    order.\n    \\label{fig:meff-plots-1}}\n\\end{figure}\nAn example of the effective mass plot with \n\\begin{align}\n
m_{\\text{eff}}^{Y}(t) &\\equiv \\frac{1}{2}\n \\log\\left[\\frac{C_Y(t)}{C_Y(t+2)}\\right] \n\\end{align}\nfor the 2-point correlators $C_\\text{MM}(t)$ and\n$C_\\text{MC}(t)$ is shown in Fig.~\\ref{fig:meff-plots-1} along with the fits to them. \n\n\n\\section{Results}\n\\label{sec:results}\nThe decay constant $f_P$ defined in Eq.~\\eqref{eq:f} can be expressed\nin terms of the ground state amplitudes $A^\\text{MM}$ and\n$A^\\text{MC}$ as follows,\n\\begin{align}\n f_{P} = Z_{A^4}^{hl} \\sqrt{\\frac{2}{M_P}} \\,\n \\frac{A^\\text{MC}}{\\sqrt{A^\\text{MM}}} \\,,\n\\end{align}\nwhere we take the meson mass $M_P=M_0$ of the ground state obtained\nfrom the fits to $C_\\text{MM}$.\nThe tree-level renormalization factor is given as\n$Z_{A^4}^{hl,\\text{tree}} = e^{m_{1}^h\/2}$ where $m_1^{h} = \\log(1 +\nm_0^{h})$ is the rest mass and $m_0^h$ is the bare mass for the heavy\nquark \\cite{ElKhadra:1996mp}.\nThe perturbative and non-perturbative determination of $Z_{A^4}^{hl}$\nis in progress.\nIn this work, we present the flavor $SU(3)$ breaking ratio of decay\nconstants:\n\\begin{align}\n f_{X_s} \/ f_{X} &= \\frac{Z_{A^4}^{hs}}{Z_{A^4}^{hl}} \\sqrt{\n \\frac{M_{X}}{M_{X_s}} }\n \\sqrt{\\frac{A^\\text{MM}}{A^\\text{MM}_s}}\n \\frac{A^\\text{MC}_s}{A^\\text{MC}}\n \\cong \\sqrt{\n \\frac{M_{X}}{M_{X_s}} }\n \\sqrt{\\frac{A^\\text{MM}}{A^\\text{MM}_s}}\n \\frac{A^\\text{MC}_s}{A^\\text{MC}}\\,,\n\\end{align}\nfor $X_{(s)}= B_{(s)}$ and $D_{(s)}$ mesons.\nHere, $A^\\text{MM}_s$ and $A^\\text{MC}_s$ are the ground state\namplitudes for the heavy-strange mesons.\nIn this ratio, we assume that the light quark mass $m_l$ dependence\nof $Z_{A^4}^{hl}$ is negligible \\cite{Bazavov:2011aa}, so the\nratio $Z_{A^4}^{hs} \/ Z_{A^4}^{hl} \\cong 1$.\n\n\n\\begin{figure}[t!] 
\n  \\subfigure[$f_{B_s} \/ f_B$]{ \\label{fig:dec-b-2}\n    \\includegraphics[width=0.475\\textwidth]{ratio_compare_3+2_k041343.pdf} }\n  \\hfill\n  \\subfigure[$f_{D_s} \/ f_D$]{\n    \\label{fig:dec-d-2}\n    \\includegraphics[width=0.475\\textwidth]{ratio_compare_3+2_k048501.pdf} }\n  \\caption{ The ratios of decay constants, $f_{B_s} \/ f_B$ and\n    $f_{D_s} \/ f_D$, on the a12m130 ensemble as a function of current\n    improvement order in HQET power counting. }\n  \\label{fig:results-2}\n\\end{figure}\nFig.~\\ref{fig:results-2} shows that in the ratios $f_{B_s} \/ f_{B}$ and $f_{D_s} \/ f_{D}$, \nthe effect of the current improvement applied to the heavy quark field (as given in\nEq.~\\eqref{eq:cur-imp-1}) cancels up to $O(\\lambda^3)$. \n\nIn Fig.~\\ref{fig:results-1}, we present preliminary results on\n$f_{B_s} \/ f_{B}$ and $f_{D_s} \/ f_{D}$ calculated on 6 different HISQ\nensembles, and compare them with the continuum limit value given in the FLAG\n2019 review~\\cite{Aoki:2019cca}.\nThe statistical errors in $f_{B_s} \/ f_{B}$\n(Fig.~\\ref{fig:results-1}\\;\\subref{fig:dec-b-1}) are much larger than in\n$f_{D_s}\/f_D$ (Fig.~\\ref{fig:results-1}\\;\\subref{fig:dec-d-1}).\nAs a result, it is difficult to discern chiral or discretization effects in $f_{B_s}\/f_B$, \nother than to note that the result from the physical ensemble $a12m130$ is\nconsistent with the FLAG 2019 value.\nThe results for $f_{D_s}\/f_{D}$ show no significant discretization\neffect on the three lattices with $M_\\pi \\approx 310\\,\\mathop{\\rm MeV}\\nolimits$ and the\ntwo with $M_\\pi \\approx 220\\,\\mathop{\\rm MeV}\\nolimits$. On the other hand, there is a\nshift upwards towards the FLAG result as $M_\\pi$ is lowered towards\nthe physical value.\nPresuming that the OK action has significantly eliminated the heavy\nquark discretization error even on the coarsest lattice spacing\n$a\\approx 0.12\\,\\,\\mathrm{fm}$~\\cite{Bailey:2017nzm}, the leading effect to\nquantify is the pion mass dependence. 
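As a sanity check of the estimator $m_{\\text{eff}}^{Y}(t)$ defined in the previous section, the following minimal sketch (with synthetic single-state data, not our lattice results) verifies that a pure exponential correlator produces a flat plateau at the input mass:

```python
import numpy as np

def effective_mass(C):
    # m_eff(t) = (1/2) * log[ C(t) / C(t+2) ]; the shift by two time
    # slices cancels the (-1)^t oscillation of the opposite-time-parity
    # states, and the factor 1/2 compensates for the double step.
    C = np.asarray(C, dtype=float)
    return 0.5 * np.log(C[:-2] / C[2:])

# Single-state correlator C(t) = A * exp(-M t): the plateau equals M.
t = np.arange(24)
C = 2.0 * np.exp(-0.8 * t)
plateau = effective_mass(C)
```

With excited-state and oscillating contributions present, the plateau is reached only at large $t$, which is why the fits above use the full 3+2-state form instead.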
For the $a \\approx 0.12\\,\\,\\mathrm{fm}$\ndata, the current trend is anchored by the physical ensemble with a\nvalue close to the FLAG 2019 result.\nIn the near future, we plan to add measurements on more ensembles to check for discretization\neffects and on more ensembles near the physical pion mass to improve \nthe chiral-continuum extrapolation.\n\\begin{figure}[h!] \n  \\subfigure[$f_{B_s}\/f_B$]{ \\label{fig:dec-b-1}\n    \\includegraphics[width=0.475\\textwidth]{decay_b.pdf} }\n  \\hfill\n  \\subfigure[$f_{D_s}\/f_D$]{\n    \\label{fig:dec-d-1}\n    \\includegraphics[width=0.475\\textwidth]{decay_d.pdf} }\n  \\caption{The ratios of decay constants, $f_{B_s} \/ f_B$ and $f_{D_s}\n    \/ f_D$, on six ensembles.\n    The errors are purely statistical.\n    The FLAG 2019 \\cite{Aoki:2019cca} results are shown at the\n    physical point.}\n  \\label{fig:results-1}\n\\end{figure}\n\n\n\\acknowledgments\n\\label{sec:ackn}\nWe thank the MILC collaboration for sharing the HISQ ensembles\nwith us.\nComputations for this work were carried out in part on (i) facilities\nof the USQCD collaboration, which are funded by the Office of Science\nof the U.S. Department of Energy, (ii) the Nurion supercomputer at\nKISTI and (iii) the DAVID GPU clusters at Seoul National University.\nThe research of W. Lee is supported by the Mid-Career Research Program\n(Grant No.~NRF-2019R1A2C2085685) of the NRF grant funded by the Korean\ngovernment (MOE).\nThis work was supported by a Seoul National University Research Grant in\n2019.\nW.~Lee would like to acknowledge the support from the KISTI\nsupercomputing center through the strategic support program for the\nsupercomputing application research (No.~KSC-2017-G2-0009).\nT.~Bhattacharya and R.~Gupta were partly supported by the\nU.S. Department of Energy, Office of Science, Office of High Energy\nPhysics under Contract No. 
DE-AC52-06NA25396.\nS.~Park, T.~Bhattacharya, R.~Gupta and Y.-C.~Jang were partly\nsupported by the LANL LDRD program.\nY.-C.~Jang is partly supported by the U.S.~Department of Energy under\nContract No.~DE-SC0012704.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\n\n\\section{Introduction}\nBipartite graphs are widely used to represent networks with two different groups of entities such as user-item networks \\cite{wang2006unifying}, author-paper networks \\cite{konect:DBLP}, and member-activity networks \\cite{brunson2015triadic}.\nIn bipartite graphs, cohesive subgraph mining has numerous applications including fraudster detection \\cite{allahbakhsh2013collusion,beutel2013copycatch,liu2020efficient}, group recommendation \\cite{ding2017efficient,ntoutsi2012fast} and discovering inter-corporate relations \\cite{ornstein1982interlocking,palmer2002interlocking}. \n\n($\\alpha$,$\\beta$)-core and bitruss are two representative cohesive subgraph models on bipartite graphs, extended from the unipartite \\textit{k}-core \\cite{seidman1983network} and \\textit{k}-truss \\cite{cohen2008trusses} models.\n$(\\alpha,\\beta)$-core\\xspace is the maximal subgraph of a bipartite graph $G$ such that every vertex on the upper or lower layer has at least $\\alpha$ or $\\beta$ neighbors, respectively. \n$(\\alpha,\\beta)$-core\\xspace models vertex engagement as degrees and treats each edge equally, but ties (edges) in real networks have different strengths.\n\\textit{k}-bitruss\\xspace is the maximal subgraph where each edge is contained in at least $k$ butterflies (i.e. 
$2 \\times 2$-biclique), which can model the tie strength\n\\cite{sariyuce2018peeling,zou2016bitruss}.\n\n\\begin{figure}[htb]\n\\centering \n\\includegraphics[width=0.50\\textwidth]{fig\/motivation.pdf}\n\\caption{Motivation example}\n\\label{fig:moltivation}\n\\end{figure}\n\nIn the author-paper network shown in Figure \\ref{fig:moltivation}, the whole graph is the $(\\alpha,\\beta)$-core ($\\alpha$=$2$, $\\beta$=$2$) and the light blue region is the \\textit{k}-bitruss\\xspace ($k$=$2$). \nWithout considering tie strength, $(\\alpha,\\beta)$-core blindly includes research groups of different levels of cohesiveness. We can see that $v_0$ and $v_1$ are not as closely connected as the rest of the authors. \nThe \\textit{k}-bitruss\\xspace model can exclude the relatively sparse subgraph containing $v_0$ and $v_1$, but it also deletes edges $(u_3,v_4)$ and $(u_4,v_3)$ even when their incident vertices are present. This exposes the drawbacks of the \\textit{k}-bitruss\\xspace model: ($1$) As \\textit{k}-bitruss\\xspace only keeps strong ties, the weak ties between important vertices are missed.\nIn Figure~\\ref{fig:moltivation}, it fails to recognize the contributions of authors $v_3,v_4$ in papers $u_3,u_4$.\n($2$) After removing weak ties, the tie strengths are modeled inaccurately. Edges $(u_3,v_3)$ and $(u_4,v_4)$ have more supporting butterflies ($u_3,u_4,v_3,v_4$ form a butterfly) than $(u_1,v_2)$, but their tie strengths are modeled as equal. \n\nIn this paper, we study the efficient and scalable computation of $\\tau$-strengthened $(\\alpha,\\beta)$-core\\xspace, \nwhich is the first cohesive subgraph model on bipartite graphs to consider both tie strength and vertex engagement. \nGiven a bipartite graph $G$, we model the tie strength of each edge as the number of butterflies containing it. With a strength level $\\tau$, we consider the edges with tie strength no less than $\\tau$ to be \\textit{strong ties}. 
\nThe \\textit{engagement} of a vertex is modeled as the number of strong ties it is incident to. \nGiven engagement constraints $\\alpha,\\beta$ and strength level $\\tau$, $(\\alpha,\\beta)_{\\tau}$-core\\xspace is the maximal subgraph of $G$ such that each upper or lower vertex in the subgraph has at least $\\alpha$ or $\\beta$ strong ties. The $(\\alpha,\\beta)_{\\tau}$-core\\xspace model is highly flexible and is able to capture unique structures.\nFor instance, in Figure \\ref{fig:moltivation}, the subgraph induced by vertices $\\{u_1, u_2, u_3, u_4, u_5, v_2, v_3, v_4, v_5, v_6\\}$ is the $(2,2)_2$-core, which cannot be found by $(\\alpha,\\beta)$-core\\xspace or \\textit{k}-bitruss\\xspace for any $\\alpha,\\beta$ or $k$.\nAlso, as shown in Figure \\ref{fig:moltivation}, $(\\alpha,\\beta)_{\\tau}$-core\\xspace can preserve the weak ties if the incident vertices are present (e.g., the red edges are preserved due to $u_3,u_4,v_3$ and $v_4$), which better resembles reality.\nThe flexibility of the $(\\alpha,\\beta)_{\\tau}$-core\\xspace model is also evaluated in another experiment conducted on the dataset \\texttt{DBpedia-producer}. \nFigure \\ref{fig:profile} shows the subgraphs of different densities found by $(\\alpha,\\beta)_{\\tau}$-core\\xspace and $(\\alpha,\\beta)$-core\\xspace, where density is the ratio between the number of existing edges and the number of all possible edges \\cite{sariyuce2018peeling}. 
\n$165$ subgraphs with a density greater than $0.2$ are found by $(\\alpha,\\beta)_{\\tau}$-core\\xspace while only $9$ such subgraphs are found by $(\\alpha,\\beta)$-core\\xspace.\n\n\\begin{figure}[tbh]\n\\centering \n\\subfigure[subgraphs found by $(\\alpha,\\beta)_{\\tau}$-core\\xspace]{\n\\label{fig.abt.profile}\n\\includegraphics[width=0.38\\textwidth]{pr_profile_.pdf}}\n\\subfigure[subgraphs found by $(\\alpha,\\beta)$-core\\xspace]{\n\\label{fig.ab.profile}\n\\includegraphics[width=0.38\\textwidth]{pr_ab_profile.pdf}}\n\\caption{Dense subgraphs in \\texttt{DBpedia-producer}.}\n\\label{fig:profile}\n\\end{figure}\n\n\\noindent\n{\\bf Applications.} The $\\tau$-strengthened $(\\alpha,\\beta)$-core\\xspace model has many applications. We list some of them below.\n\n\\noindent\n$\\bullet$ \\textit{Identify nested communities.}\nOn Internet forums like Reddit, Quora, and StackOverflow, users hold conversations on topics that interest them. The users and the topics form a bipartite network. \nIn these networks, communities naturally exist and are nested.\nFor instance, Reddit displays a list of top communities like ``News\", ``Gaming\" and ``Sports\" on the front page. \n``Sports\" community contains many sub-communities including ``Cricket\", ``Bicycling\" and ``Golf\". \nThe edges in sub-communities have higher tie strength because users and topics within them are more closely connected. \nBy increasing strength level $\\tau$, $(\\alpha,\\beta)_{\\tau}$-core\\xspace captures the subgraphs forming a hierarchy, which can model nested communities on bipartite networks. \n\n\\noindent\n$\\bullet$ \\textit{Group similar users and items.}\nIn online shopping platforms like Amazon, eBay and Alibaba, users and items form a bipartite graph, where each edge indicates a purchasing record. \nSuch a network consists of many closely connected communities, where some items are repeatedly bought by the same group of users (i.e., the target market). 
\nExamples of such communities include gym-goers and gym attire, students and stationery, diabetic patients and sugar-free foods, etc. \nWithin one community, items are considered more similar and users tend to be alike due to their common shopping habits. \nAs the edges between these users and items have high tie strength (butterfly support), we can use $(\\alpha,\\beta)_{\\tau}$-core\\xspace to find these communities and group similar users or items together.\n\n\\noindent\n\\textbf{Challenges.}\nTo obtain the $(\\alpha,\\beta)_{\\tau}$-core\\xspace from the input graph, we can first compute the support of edges and the engagement of vertices and then iteratively delete the vertices not meeting the engagement constraints. \nWhen $\\alpha$, $\\beta$, and $\\tau$ are large, the $(\\alpha,\\beta)_{\\tau}$-core\\xspace is small and computing it from the input graph is time-consuming. \nThus, the online computation method cannot support a large number of $(\\alpha,\\beta)_{\\tau}$-core\\xspace queries. \n\nIn this paper, we resort to index-based approaches. \nA straightforward solution is to compute all possible $(\\alpha,\\beta)_{\\tau}$-cores\\xspace and build a total index $I_{\\alpha,\\beta,\\tau}$ based on them. Instead of computing every $(\\alpha,\\beta)_{\\tau}$-core\\xspace from the input graph, we take advantage of the nested property of the $(\\alpha,\\beta)_{\\tau}$-core\\xspace, which means that if $\\alpha \\geq \\alpha^*$, $\\beta \\geq \\beta^*$ and $\\tau \\geq \\tau^*$, the $(\\alpha,\\beta)_{\\tau}$-core\\xspace is a subgraph of the $(\\alpha^*,\\beta^*)_{\\tau^*}$-core. \nSpecifically, for all possible $\\alpha$ and $\\beta$, we first find the $(\\alpha,\\beta)_{1}$-core and then compute the $(\\alpha,\\beta)_{\\tau}$-core\\xspace while gradually increasing the strength level $\\tau$. \nIn this manner, we can compute all $(\\alpha,\\beta)_{\\tau}$-cores\\xspace and construct the index $I_{\\alpha,\\beta,\\tau}$. 
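The online computation just described (compute butterfly supports, then peel vertices violating the engagement constraints) can be sketched as follows. This is a simplified illustration that recomputes all supports from scratch in every round, whereas an efficient implementation would maintain them incrementally:

```python
from itertools import combinations

def edge_support(U_adj):
    # Butterfly support of each edge: for every pair of upper vertices
    # sharing c >= 2 lower neighbors (stored as sets), each edge to a
    # common neighbor lies in (c - 1) butterflies formed with that pair.
    support = {(u, v): 0 for u, nbrs in U_adj.items() for v in nbrs}
    for u1, u2 in combinations(U_adj, 2):
        common = U_adj[u1] & U_adj[u2]
        if len(common) >= 2:
            for v in common:
                support[(u1, v)] += len(common) - 1
                support[(u2, v)] += len(common) - 1
    return support

def abt_core(U_adj, V_adj, alpha, beta, tau):
    # Peel vertices whose number of strong ties (edges of support >= tau)
    # falls below alpha (upper layer) or beta (lower layer).
    U = {u: set(n) for u, n in U_adj.items()}
    V = {v: set(n) for v, n in V_adj.items()}
    while True:
        sup = edge_support(U)
        bad_u = [u for u in U
                 if sum(sup[(u, v)] >= tau for v in U[u]) < alpha]
        bad_v = [v for v in V
                 if sum(sup[(u, v)] >= tau for u in V[v]) < beta]
        if not bad_u and not bad_v:
            return U, V
        for u in bad_u:
            for v in U.pop(u):
                V[v].discard(u)
        for v in bad_v:
            for u in V.pop(v):
                if u in U:
                    U[u].discard(v)
```

For example, on the complete bipartite graph $K_{3,3}$ every edge lies in exactly four butterflies, so the $(2,2)_4$-core keeps the whole graph while the $(2,2)_5$-core is empty.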
\nAlthough $I_{\\alpha,\\beta,\\tau}$ supports optimal retrieval of the vertex set of any $(\\alpha,\\beta)_{\\tau}$-core\\xspace, it still suffers from long construction time on large graphs. \nTo devise more practical index-based approaches, we face the following challenges. \n\\begin{enumerate}\n \\item\n When building index $I_{\\alpha,\\beta,\\tau}$, it is time-consuming to enumerate all butterflies containing the deleted edges.\n Also, the $I_{\\alpha,\\beta,\\tau}$ index construction algorithm is prone to visit the same $(\\alpha,\\beta)_{\\tau}$-core\\xspace subgraph repeatedly as it can correspond to different combinations of $\\alpha, \\beta$, and $\\tau$.\n It is a challenge to speed up butterfly enumeration and avoid repeatedly visiting the same subgraphs during the construction of the total index $I_{\\alpha,\\beta,\\tau}$.\n \\item \n Due to the flexibility of the $(\\alpha,\\beta)_{\\tau}$-core\\xspace model, there are a large number of $(\\alpha,\\beta)_{\\tau}$-cores\\xspace corresponding to different combinations of $\\alpha$, $\\beta$, and $\\tau$.\n The time cost of indexing all $(\\alpha,\\beta)_{\\tau}$-cores\\xspace becomes not affordable on large graphs. \n It is also a challenge to strike a balance between building space-efficient indexes and supporting efficient and scalable query processing.\n\\end{enumerate}\n\n\\noindent\n{\\bf Our approaches.}\nTo address the first challenge, we extend the butterfly enumeration techniques in \\cite{wang2020efficient} and propose novel computation sharing optimizations to speed up the index construction process of $I_{\\alpha,\\beta,\\tau}$. 
\nSpecifically, we build a \\texttt{Bloom}-\\texttt{Edge}-\\texttt{Index} (hereafter denoted by \\texttt{BE-Index}) proposed in \\cite{wang2020efficient} to quickly fetch the butterflies containing an edge.\nThe \\texttt{BE-Index} captures the relationships between edges and $(2\\times k)$-bicliques (also called \\textit{blooms}).\nWhen an edge is deleted, we can quickly locate the blooms containing this edge in the \\texttt{BE-Index} and update the support of affected edges in these blooms accordingly. \nIn addition, computation-sharing optimization is based on the fact that the same $(\\alpha,\\beta)_{\\tau}$-core\\xspace subgraph corresponds to various parameter combinations. If we realize the vertices in a subgraph have already been recorded, we can choose to skip the current parameter combination. \n\nTo address the second challenge, we introduce space-efficient \\gratings including $I_{\\alpha,\\beta}$, $I_{\\beta,\\tau}$, and $I_{\\alpha,\\tau}$, and train a feed-forward neural network to predict the most promising index to handle an $(\\alpha,\\beta)_{\\tau}$-core\\xspace query.\nInstead of indexing all $(\\alpha,\\beta)_{\\tau}$-cores\\xspace, the \\gratings $I_{\\alpha,\\beta}$, $I_{\\beta,\\tau}$, and $I_{\\alpha,\\tau}$ store the vertex sets of all $(\\alpha,\\beta)$-core\\xspace, $(1,\\beta)_{\\tau}$-core\\xspace, and $(\\alpha,1)_{\\tau}$-core\\xspace respectively. These \\gratings are much smaller in size and require significantly less build time,\neach of which can be used to handle $(\\alpha,\\beta)_{\\tau}$-core\\xspace queries. \nFor example, to compute $(\\alpha,\\beta)_{\\tau}$-core\\xspace using $I_{\\beta,\\tau}$, we fetch the vertices in $(1,\\beta)_{\\tau}$-core\\xspace and recover the edges of $(1,\\beta)_{\\tau}$-core\\xspace. Then, we iteratively remove the vertices not having enough engagement from $(1,\\beta)_{\\tau}$-core\\xspace until we find $(\\alpha,\\beta)_{\\tau}$-core\\xspace. 
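The sharing idea can be illustrated with a toy sketch. Here `core_fn` is a hypothetical routine standing in for the actual $(\\alpha,\\beta)_{\\tau}$-core computation; for brevity the sketch only deduplicates the stored vertex sets, while the algorithm described above additionally skips the computation itself once a repeated subgraph is detected:

```python
def build_total_index(core_fn, max_a, max_b, max_t):
    # Many (alpha, beta, tau) combinations yield the same vertex set;
    # store each distinct set once and let repeated combinations
    # reference the stored copy instead of a fresh one.
    index, seen = {}, {}
    for a in range(1, max_a + 1):
        for b in range(1, max_b + 1):
            for t in range(1, max_t + 1):
                key = frozenset(core_fn(a, b, t))
                if key not in seen:
                    seen[key] = key               # first occurrence: record it
                index[(a, b, t)] = seen[key]      # later ones share the copy
    return index
```

Because the shared copies are the same object, the memory footprint grows with the number of distinct cores rather than the number of parameter combinations.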
\nHowever, the query processing performance based on each \\grating is highly sensitive to parameters $\\alpha$, $\\beta$, and $\\tau$. \nThis is because the \\gratings only store the vertices in $(\\alpha,\\beta)$-core\\xspace, $(1,\\beta)_{\\tau}$-core\\xspace, and $(\\alpha,1)_{\\tau}$-core\\xspace and the size difference between $(\\alpha,\\beta)_{\\tau}$-core\\xspace and each of these subgraphs is uncertain. \nWe also observe that there are no simple rules to partition the parameter space so that queries from each partition can be efficiently handled by one type of index. \nThis motivates us to resort to machine learning techniques and train a feed-forward neural network as the classifier to predict the optimal choice of the index for each incoming $(\\alpha,\\beta)_{\\tau}$-core\\xspace query. \nSince we aim to minimize query time rather than maximize accuracy, we propose a scoring function, \\textit{time-sensitive-error}, to tune the hyper-parameters of the classifier. \nThe experimental results show that the resulting hybrid computation algorithm significantly outperforms the query processing algorithms based on $I_{\\alpha,\\beta}$, $I_{\\beta,\\tau}$, and $I_{\\alpha,\\tau}$, and it is less sensitive to varying parameters. \n\n\\noindent\n{\\bf Contribution.}\nOur major contributions are summarized here: \\\\\n$\\bullet$ We propose the first cohesive subgraph model, $\\tau$-strengthened $(\\alpha,\\beta)$-core\\xspace, on bipartite graphs which considers both tie strength and vertex engagement. The flexibility of our model allows it to capture unique and useful structures on bipartite graphs. 
\\\\\n$\\bullet$ We construct index $I_{\\alpha,\\beta,\\tau}$ to support optimal retrieval of the vertex set of any $(\\alpha,\\beta)_{\\tau}$-core\\xspace.\nWe also devise computation sharing and \\texttt{BE}-\\texttt{Index}\\xspace based optimizations to effectively reduce its construction time.\\\\\n$\\bullet$ \nWe build \\gratings that are more space-efficient and require significantly less build time. \nAlso, we propose a learning-based hybrid computation paradigm to predict which index to choose to minimize the response time for an incoming $(\\alpha,\\beta)_{\\tau}$-core\\xspace query.\n\\\\\n$\\bullet$ We validate the efficiency of the proposed algorithms and the effectiveness of our model through extensive experiments on real-world datasets.\nResults show that the \\gratings are scalable and that the hybrid computation algorithm with a well-trained neural network can outperform the algorithms based on each \\grating alone. \n\\\\\n\n\\noindent\n{\\bf Organization.}\nThe rest of the paper is organized as follows.\nSection $2$ reviews the related work. \nSection $3$ summarizes important notations and definitions and\nintroduces $(\\alpha,\\beta)$-core\\xspace and $\\tau$-strengthened $(\\alpha,\\beta)$-core\\xspace. \nSection $4$ presents the online computation algorithm. \nSections $5$ and $6$ present the total index $I_{\\alpha,\\beta,\\tau}$ and optimizations of the index construction process.\nSection $7$ presents the learning-based hybrid computation paradigm. \nSection $8$ shows the experimental results and Section $9$ concludes the paper. \n\\section{Related work}\nIn the literature, there are many recent studies on cohesive subgraph models on both unipartite graphs and bipartite graphs. 
\n\n\\noindent\n\\textit{Unipartite graphs.}\n\\textit{k}-core\\xspace \\cite{seidman1983network,cheng2011efficient,khaouid2015k,zhang2018finding} and \\textit{k}-truss\\xspace \\cite{cohen2008trusses,huang2014querying,shao2014efficient} are two of the most well-known cohesive subgraph models on general, unipartite graphs. \nGiven a unipartite graph, \\textit{k}-core\\xspace is the maximal subgraph such that each vertex in the subgraph has at least $k$ neighbors.\n\\textit{k}-core\\xspace models vertex engagement as degree and assumes the importance of each tie to be equal.\nHowever, on real networks, ties (edges) have different strengths and are not of equal importance \\cite{granovetter1977strength}.\nAs triangles are considered the smallest cohesive units, the number of triangles containing an edge is used to model tie strength on unipartite graphs.\nThus, \\textit{k}-truss\\xspace, the maximal subgraph such that each edge in the subgraph is contained in at least $(k-2)$ triangles, is proposed to better model tie strength.\nThe issue with \\textit{k}-truss\\xspace is that it does not tolerate the existence of weak ties, which is inflexible for modeling real networks.\nTo consider both vertex engagement and tie strength, the ($k$,$s$)-core model is proposed in \\cite{zhang2018discovering}. \nIn addition, recent works have studied problems related to variants of \\textit{k}-core\\xspace such as radius-bounded \\textit{k}-core\\xspace on geo-social networks \\cite{wang2018efficient}, core maintenance on dynamic graphs \\cite{zhang2017fast},\ncore decomposition on uncertain graphs \\cite{bonchi2014core,peng2018efficient}, \nand the anchored \\textit{k}-core\\xspace problem \\cite{bhawalkar2015preventing,zhang2017olak}. 
\nVariants of \\textit{k}-truss\\xspace are also studied, including \\textit{k}-truss\\xspace communities on dynamic graphs \\cite{huang2014querying},\n\\textit{k}-truss\\xspace decomposition on uncertain graphs \\cite{zou2017truss}, and the anchored \\textit{k}-truss\\xspace problem \\cite{zhang2018efficiently}.\nHowever, these algorithms do not apply to bipartite graphs, and attempts to project bipartite graphs to unipartite graphs incur information loss and size inflation \\cite{sariyuce2018peeling}.\n\n\\noindent\n\\textit{Bipartite graphs.}\nIn correspondence with \\textit{k}-core\\xspace and \\textit{k}-truss\\xspace, $(\\alpha,\\beta)$-core\\xspace \\cite{ding2017efficient,liu2020efficient} and \\textit{k}-bitruss\\xspace \\cite{zou2016bitruss,wang2020efficient} are proposed on bipartite graphs.\n$(\\alpha,\\beta)$-core\\xspace is the maximal subgraph such that each vertex in the upper (resp. lower) layer of the subgraph has at least $\\alpha$ (resp. $\\beta$) neighbors. \nJust like \\textit{k}-core\\xspace on unipartite graphs, $(\\alpha,\\beta)$-core\\xspace cannot distinguish weak ties from strong ties. \nOn bipartite graphs, tie strength is often modeled as the number of butterflies (i.e., ($2 \\times 2$)-bicliques) containing an edge because butterflies are viewed as analogs of triangles \\cite{sanei2018butterfly,wang2014rectangle,wang2019vertex,wang2020efficient}. \n\\textit{k}-bitruss\\xspace is the maximal subgraph such that each edge in the subgraph is contained in at least $k$ butterflies, which can model tie strength.\n\\textit{k}-bitruss\\xspace suffers from the same issue as its counterpart \\textit{k}-truss\\xspace does: it forcefully deletes all weak ties even if the incident vertices are strongly-engaged.\nOther works for bipartite graph analysis using cohesive structures, such as the ($p$,$q$)-core \\cite{DBLP:journals\/corr\/CerinsekB15} and the fractional \\textit{k}-core\\xspace \\cite{giatsidis2011evaluating}, cannot be used to address these issues either. 
In contrast to the above studies, we propose the first cohesive subgraph model $\\tau$-strengthened $(\\alpha,\\beta)$-core\\xspace that considers both vertex engagement and tie strength on bipartite graphs.\n\\section{Problem Definition}\n\\begin{table}[htb]\n\\centering\n\\caption{Summary of Notations}\n\\scalebox{1.0}{\n\\begin{tabular}{c|c}\n\\noalign{\\hrule height 1pt}\nNotation & Definition \\\\ \n\\noalign{\\hrule height 0.6pt}\n$G$ & a bipartite graph \\\\\n$\\alpha,\\beta$ & the engagement constraints \\\\\n$\\tau$ & the strength level \\\\\n$nb(u,G)$ & the set of adjacent vertices of $u$ in $G$ \\\\\n$deg(u,G)$ & the number of adjacent vertices of $u$ in $G$ \\\\\n$ \\,\\mathbin{\\resizebox{0.15in}{!}{\\rotatebox[origin=c]{90}{$\\Join$}}}_{G} $ & the number of butterflies in $G$ \\\\\n$\\sup(e)$ & the number of butterflies containing $e$ \\\\\n$eng \\xspace(u)$ & the number of strong ties adjacent to $u$ \\\\\n$(\\alpha,\\beta)_{\\tau}$-core\\xspace & the $\\tau$-strengthened $(\\alpha,\\beta)$-core \\\\\n$I_{\\alpha,\\beta,\\tau}$ & the decomposition-based index \\\\\n$I_{\\alpha,\\beta}$,$I_{\\beta,\\tau}$,$I_{\\alpha,\\tau}$ & the \\gratings \\\\\n\\noalign{\\hrule height 1pt}\n\\end{tabular}\n}\n\\vspace{-2mm}\n\\label{tab:notation}\n\\end{table}\n\nIn this section, we formally define our cohesive subgraph model $\\tau$-strengthened $(\\alpha,\\beta)$-core\\xspace. We consider an unweighted, undirected bipartite graph $G(V, E)$. $V(G)$ = $U(G) \\cup L(G)$ denotes the set of vertices in $G$ where $U(G)$ and $L(G)$ represent the upper and lower layer, respectively. $E(G) \\subseteq U(G) \\times L(G)$ denotes the set of edges in $G$. We use $n$ = $|V(G)|$ to denote the number of vertices and $m$ = $|E(G)|$ to denote the number of edges. \nThe maximum degree in the upper and lower layer is denoted as $d_{max}(U)\\xspace$ and $d_{max}(L)\\xspace$ respectively.\nThe set of neighbors of a vertex $u$ in $G$ is denoted as $nb(u,G)$. 
The degree of a vertex is $deg(u,G)$ = $|nb(u,G)|$. When the context is clear, we omit the input graph $G$ in notations. \n\n\\begin{definition}\n\\label{def:abcore}\n{\\bf $(\\alpha,\\beta)$-core.} Given a bipartite graph $G$ and degree constraints $\\alpha$ and $\\beta$, \na subgraph $G'$ is the $(\\alpha,\\beta)$-core, denoted by $C_{\\alpha,\\beta}(G)$,\nif ($1$) all vertices in $G'$ satisfy the degree constraints,\ni.e. $deg(u,G')\\geq \\alpha$ for each $u \\in U(G')$ and \n$deg(v,G')\\geq \\beta$ for each $v \\in L(G')$;\nand ($2$) $G'$ is maximal, i.e. any subgraph $G'' \\supsetneq G'$ is not an $(\\alpha,\\beta)$-core. \n\\end{definition}\n\n\\begin{definition}\n\\label{def:btf}\n{\\bf Butterfly.} In a bipartite graph $G$, given vertices $u,w \\in U(G)$ and $v,x \\in L(G)$, a butterfly $\\,\\mathbin{\\resizebox{0.15in}{!}{\\rotatebox[origin=c]{90}{$\\Join$}}}$ is the complete subgraph induced by $u,v,w,x$, which means both $u$ and $w$ are connected to $v$ and $x$ by edges. \nThe total number of butterflies in $G$ is denoted as $\\,\\mathbin{\\resizebox{0.15in}{!}{\\rotatebox[origin=c]{90}{$\\Join$}}}_G$.\n\\end{definition}\n\\noindent\n$(\\alpha,\\beta)$-core is a vertex-induced subgraph model, which assumes that the edges are of equal importance. To better model the strength of an edge $e$, we define the support $sup \\xspace (e)$ to be the number of butterflies containing $e$. \n\\begin{definition}\n\\label{def:st}\n{\\bf Strong Tie.} Given an integer $\\tau$, an edge $e \\in E(G)$ is called a strong tie if $sup \\xspace(e) \\geq \\tau$, where $\\tau$ is called the strength level. Weak ties are the edges $e$ such that $sup \\xspace (e) < \\tau$.\n\\end{definition}\n\\begin{definition}\n\\label{def:eng}\n{\\bf Vertex Engagement.} Given a strength level $\\tau$ and $u \\in V(G)$, the engagement $eng \\xspace (u)$ is the number of strong ties incident to $u$. 
At strength level 0, $eng \\xspace(u)=deg(u,G)$.\n\\end{definition}\n\n\\noindent\nIf the engagement of an upper or lower vertex is at least $\\alpha$ or $\\beta$, we call it a {\\bf strongly-engaged} vertex. \nOtherwise, it is a {\\bf weakly-engaged} vertex. \n\\begin{definition}\n\\label{def:abgcore}\n{\\bf $\\tau$-strengthened $(\\alpha,\\beta)$-core\\xspace.} \nGiven a bipartite graph $G$, engagement constraints $\\alpha$ and $\\beta$, and strength level $\\tau$, \na subgraph $G'$ is the $\\tau$-strengthened $(\\alpha,\\beta)$-core\\xspace, denoted by $(\\alpha,\\beta)_{\\tau}$-core\\xspace, \nif ($1$) $eng \\xspace(u) \\geq \\alpha$ for each $u \\in U(G')$ and \n$eng \\xspace(v) \\geq \\beta$ for each $v \\in L(G')$;\nand ($2$) $G'$ is maximal, i.e. any subgraph $G'' \\supsetneq G'$ is not a $\\tau$-strengthened $(\\alpha,\\beta)$-core\\xspace.\n\\end{definition}\n\n\\noindent\n\\textbf{Problem Statement.} Given a bipartite graph $G$ and parameters $\\alpha,\\beta$ and $\\tau$, we study the problem of scalable and efficient computation of $(\\alpha,\\beta)_{\\tau}$-core\\xspace in $G$.\n\\section{The Online Computation Algorithm}\nGiven engagement constraints $\\alpha$, $\\beta$ and strength level $\\tau$, the online algorithm to compute the $(\\alpha,\\beta)_{\\tau}$-core\\xspace is outlined in Algorithm \\ref{algo:compute_naive}. \nFirst, we compute the support of each edge $e$ using the algorithm in \\cite{wang2019vertex} and count how many strong ties each vertex $u$ has. \nThen, Algorithm \\ref{algo:peel} is invoked to iteratively remove the vertices without enough engagement along with their incident edges. \nThe vertices in $U(G)$ and $L(G)$ are sorted by engagement and the edges are sorted by support. In this manner, we can always delete the vertices with the smallest engagement first and quickly identify which edges are strong ties. 
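The first step, computing all edge supports and vertex engagements, can be sketched as follows. This is a naive direct-from-definition version written by us for illustration; the paper instead uses the faster counting algorithm of \cite{wang2019vertex}.

```python
# Naive sketch: butterfly support of every edge and engagement of every
# vertex, computed directly from the definitions.  `adj` maps each
# vertex to its neighbor set and `upper` is U(G).
def supports_and_engagement(adj, upper, tau):
    sup = {}
    for u in upper:
        for v in adj[u]:
            # Butterflies on (u, v): another vertex w adjacent to v plus
            # a common neighbor x of u and w with x != v.
            sup[(u, v)] = sum(len((adj[u] & adj[w]) - {v})
                              for w in adj[v] if w != u)
    eng = {x: 0 for x in adj}
    for (u, v), s in sup.items():
        if s >= tau:          # (u, v) is a strong tie
            eng[u] += 1
            eng[v] += 1
    return sup, eng
```

Note that with $\tau = 0$ every edge is a strong tie, so the engagement of each vertex equals its degree, as stated in Definition \ref{def:eng}.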
\nWhen an edge $e$ is removed due to lack of support (i.e., $sup(e)<\\tau$), we go through all the butterflies containing $e$ and update the supports of the edges in these butterflies (lines 5-9).\nSpecifically, we do not need to update the support of the edges connected to weakly-engaged vertices, because they will be removed (line $10$). \nNeither do we update the support of weak ties, because they do not contribute to any vertex engagement. \nIn other words, we only update the support of strong ties between strongly-engaged vertices. \nWhen a strong tie becomes a weak tie, we decrease the engagement of its incident vertices (lines 4,9). \nThe order of vertices and edges is maintained in linear heaps \\cite{chang2019cohesive} after their engagement and support are updated. \nHere we evaluate the time and space complexity of Algorithm \\ref{algo:compute_naive}. \n\\begin{lemma}\nThe time complexity of Algorithm \\ref{algo:compute_naive} is $O(\\sum_{(u,v)\\in E(G)}\\sum_{w\\in nb(v)} min(deg(u),deg(w)))$ and the space complexity is $O(m)$. \n\\end{lemma}\n\\begin{proof}\nThe butterfly counting process takes $O(\\sum_{(u,v)\\in E(G)} min(deg(u),deg(v)))$ time \\cite{wang2019vertex}. \nAfter the support of each edge is computed, it takes $O(m)$ time to compute the engagement of each vertex. \nThen, we need to run Algorithm \\ref{algo:peel}. For each weakly-engaged vertex $u$, we need to delete all its incident edges, which dominates the time cost of Algorithm \\ref{algo:peel}. \n\nFor each deleted edge $(u,v)$, we need to enumerate the butterflies containing it. \nLet $w$ be a vertex in $nb(v,G) \\setminus \\{u\\} $. \nFor each vertex $x$ in $nb(u,G) \\cap nb(w,G)$, the induced subgraph of $\\{u,v,w,x\\}$ is a butterfly. 
The set intersection (computing $nb(u,G)\\cap nb(w,G)$) can be implemented in $O(min(deg(u),deg(w)))$ time by storing the neighbor set of each vertex in a hash table, which takes $O(m)$ space in total.\n\nThus, the butterfly enumeration for each deleted edge $(u,v)$ takes $O(\\sum_{w\\in nb(v)} min(deg(u),deg(w)))$ time.\nAs each edge can only be deleted once, the butterfly enumeration for all deleted edges takes $O(\\sum_{(u,v)\\in E(G)}\\sum_{w\\in nb(v)} min(deg(u),deg(w)))$ time in total. Thus, the time complexity of Algorithm \\ref{algo:compute_naive} is $O(\\sum_{(u,v)\\in E(G)}\\sum_{w\\in nb(v)} min(deg(u),deg(w)))$, which is denoted as $T_{peel}(G)$ hereafter.\n\nWe store the neighbors of each vertex as adjacency lists as well as the support of edges and engagement of vertices, which in total takes $O(m)$ space. Therefore, the space complexity of Algorithm \\ref{algo:compute_naive} is $O(m)$. \n\\end{proof}\n\n\\begin{figure}[htb]\n\\centering \n\\subfigure[$I_{\\alpha,\\beta,\\tau}$ based on Figure \\ref{fig:moltivation}]{\n\\label{fig.index}\n\\includegraphics[width=0.43\\textwidth]{btree.png}}\n\\subfigure[the relationships of indexes]{\n\\label{fig.cube}\n\\includegraphics[width=0.26\\textwidth]{cube.pdf}}\n\\subfigure[Computation sharing]{\n\\label{fig.share}\n\\includegraphics[width=0.26\\textwidth]{compsharing.pdf}}\n\\caption{Illustrating our ideas}\n\\label{Fig.queries}\n\\end{figure}\n\n\\section{The Decomposition Based Total Index}\nGiven $\\alpha$, $\\beta$, and $\\tau$, Algorithm \\ref{algo:compute_naive} computes the $(\\alpha,\\beta)_{\\tau}$-core\\xspace from the input graph, which is slow and cannot handle a large number of queries. 
\nIn this section, we present a decomposition algorithm that retrieves all $(\\alpha,\\beta)_{\\tau}$-cores\\xspace and builds a total index based on the decomposition output to support efficient query processing.\n\n\\begin{algorithm}[t!]\n\t\\caption{Decomposition}\n\t\\label{algo:naivedecomp}\n\t\\LinesNumbered\n\t\\KwIn{$G(V=(U,L),E)$}\n\t\\KwOut{$\\tau_{max}(\\alpha,\\beta,u)$, for all $\\alpha,\\beta$, $\\forall u \\in V(G)$}\n\t$\\alpha \\gets 1$, $\\beta \\gets 1$, $\\tau \\gets 1$ \\\\\n\tCompute $sup \\xspace (e)$, $\\forall$ $e \\in E(G)$ \\\\\n Compute $eng \\xspace (u)$, $\\forall$ $u \\in V(G)$ \\\\\n \\While{$(\\alpha,1)_{1}$-core in $G$ is not empty}{\n $\\compute((\\alpha,1)_{1}\\textrm{-}core, \\alpha, 1, 1, sup \\xspace, eng \\xspace);\\beta\\gets1$ \\\\ \n \n \\While{$(\\alpha,\\beta)_{1}$-core in $G$ is not empty}{\n $sup \\xspace' \\gets sup \\xspace$;\\ $eng \\xspace' \\gets eng \\xspace$;\\ $\\tau\\gets1$ \\\\\n $\\compute((\\alpha,\\beta)_{1}\\textrm{-}core, \\alpha, \\beta, 1, sup \\xspace ', eng \\xspace ')$\\\\ \n \n \\While{ $(\\alpha,\\beta)_{\\tau}$-core in $G$ is not empty}{ \n $sup \\xspace'' \\gets sup \\xspace'$;\\ $eng \\xspace'' \\gets eng \\xspace'$ \\\\\n $\\compute((\\alpha,\\beta)_{\\tau}\\textrm{-}core, \\alpha, \\beta, \\tau, sup \\xspace '', eng \\xspace '')$, \\ \\ add $\\tau_{max}(\\alpha,\\beta,u) \\gets \\tau-1 $ before line 2\\\\\n $\\tau \\gets \\tau+1$ \\\\\n \\ForEach{$e$=$(u',v') \\in E(G)$, $sup \\xspace (e)$=$\\tau$}{\n $eng \\xspace ''(u')\\gets eng \\xspace ''(u')-1$ \\\\\n $eng \\xspace ''(v')\\gets eng \\xspace ''(v')-1$\n }\n }\n $\\beta \\gets \\beta+1$\n }\n $\\alpha \\gets \\alpha+1$\n }\n\\textbf{return} $\\tau_{max}(\\alpha,\\beta,u)$, for all $\\alpha,\\beta$, $\\forall u \\in V(G)$ \n\\end{algorithm}\n\n\n\\vspace{2mm}\n\\noindent\n{\\bf The decomposition algorithm.} \nThe following lemma, which is immediate from Definition \\ref{def:abgcore}, depicts the nested relationships among 
$(\\alpha,\\beta)_{\\tau}$-cores\\xspace. \n\\begin{lemma}\n$(\\alpha,\\beta)_{\\tau}$-core\\xspace $\\subseteq (\\alpha',\\beta')_{\\tau'}$-core\nif $ \\alpha \\geq \\alpha' $, $ \\beta \\geq \\beta' $, and $ \\tau \\geq \\tau' $.\n\\label{lemma:nest}\n\\end{lemma}\nBased on Lemma \\ref{lemma:nest}, if a vertex $u$ is in $(\\alpha, \\beta)_{\\tau'}$-core, $u$ is also in $(\\alpha,\\beta)_{\\tau}$-core\\xspace if $\\tau < \\tau'$. Thus, in the decomposition, for given vertex $u$ and $\\alpha$, $\\beta$ values, we aim to retrieve the maximum $\\tau$ value such that $u$ is in the corresponding $(\\alpha, \\beta)_\\tau$-core, namely, $\\tau_{max}(\\alpha,\\beta,u) \\textnormal{=} \\max\\{\\tau \\vert u \\in (\\alpha, \\beta)_\\tau\\textrm{-}core \\} $. \nFor each vertex $u$ and all possible combinations of $\\alpha$ and $\\beta$, it is only necessary to store $u$ in $(\\alpha,\\beta)_{\\tau'}$-core to build a space compact index, where $\\tau'$ = $\\tau_{max}(\\alpha,\\beta,u)$ and $u \\in$ $(\\alpha, \\beta)_{\\tau}$-core can be implied if $\\tau < \\tau'$.\nAlgorithm \\ref{algo:naivedecomp} is devised for $(\\alpha,\\beta)_{\\tau}$-core\\xspace decomposition which applies three nested loops to go through all possible $\\alpha, \\beta, \\tau$ combinations. \nNote that when computing the $(\\alpha,\\beta)_{\\tau}$-core\\xspace, we record $\\tau_{max}(\\alpha,\\beta,u)$ for each vertex $u$. Specifically, for a vertex $u$ contained in $(\\alpha,\\beta)_{\\tau_0}$-core but is removed when computing $(\\alpha,\\beta)_{\\tau_0+1}$-core, we assign $\\tau_0$ to $\\tau_{max}(\\alpha,\\beta,u)$ (line 11).\nHere we evaluate the time and space complexity of Algorithm \\ref{algo:naivedecomp}. 
\n\\begin{lemma}\nThe time complexity of Algorithm \\ref{algo:naivedecomp} is \n$O(\\sum_{\\alpha =1 }^{\\alpha_{max}} \\sum_{\\beta=1}^{\\beta_{max}(\\alpha)} T_{peel}((\\alpha,\\beta)_{1}\\textnormal{-}core))$,\nwhere $\\alpha_{max}$ is the maximal $\\alpha$ such that $(\\alpha,1)_1$-core exists and $\\beta_{max}(\\alpha)$ is the maximal $\\beta$ such that $(\\alpha,\\beta)_{1}$-core\\xspace exists.\nAlgorithm \\ref{algo:naivedecomp} takes $O(m)$ space.\n\\end{lemma}\n\n\\begin{proof}\nIn the outer while-loop, we start from the input graph $G$ and compute $(\\alpha,1)_{1}$-core\\xspace, which takes $T_{peel}(G)$ time.\nIn the middle while-loop, we calculate $(\\alpha,\\beta)_{1}$-core\\xspace from $(\\alpha,1)_{1}$-core\\xspace for all possible $\\alpha$, which takes $ \\sum_{\\alpha =1 }^{\\alpha_{max}} T_{peel}((\\alpha,1)_{1}\\textnormal{-}core)$ time. \nThe dominant cost occurs in the innermost while-loop when we run the Peeling algorithm on $(\\alpha,\\beta)_{1}$-core\\xspace for all possible $\\alpha$ and $\\beta$. \nAs iteratively removing edges and vertices from $(\\alpha,\\beta)_{1}$-core\\xspace until it is empty takes $T_{peel}((\\alpha,\\beta)_{1}\\textnormal{-}core)$ time, \nthe overall time complexity is $O(\\sum_{\\alpha =1 }^{\\alpha_{max}} \\sum_{\\beta=1}^{\\beta_{max}(\\alpha)} T_{peel}((\\alpha,\\beta)_{1}\\textnormal{-}core))$.\n\nAt any time during the execution of this algorithm, we always store $G$, $(\\alpha,1)_{1}$-core\\xspace, $(\\alpha,\\beta)_{1}$-core\\xspace, and $(\\alpha,\\beta)_{\\tau}$-core\\xspace in memory, which takes $O(m)$ space. \nWe also store the support of edges and engagement of vertices in these graphs, which also takes $O(m)$ space. \nThus, the total space complexity is $O(m)$. 
\n\\end{proof}\n\n\\noindent\n{\\bf Decomposition-based index.} Based on the decomposition results, a four-level index $I_{\\alpha,\\beta,\\tau}$ can be constructed to support query processing as shown in Figure \\ref{fig.index}.\n\\begin{enumerate}\n \\item[$\\bullet$] \\textit{$\\alpha$ level.} The $\\alpha$ level of $I_{\\alpha,\\beta,\\tau}$ is an array of pointers, each of which points to an array in the $\\beta$ level. The length of the array in the $\\alpha$ level is $\\alpha_{max}$. The $k_{th}$ element is denoted as $I_{\\alpha,\\beta,\\tau}[k]$. \n \\item[$\\bullet$] \\textit{$\\beta$ level.} \n The $\\beta$ level has $\\alpha_{max}$ arrays of pointers. The array pointed to by $I_{\\alpha,\\beta,\\tau}[k]$ has length $\\beta_{max}(\\alpha)$. The $j_{th}$ pointer in the $k_{th}$ array is denoted as $I_{\\alpha,\\beta,\\tau}[k][j]$, which points to an array in the $\\tau$ level. \n \\item[$\\bullet$] \\textit{$\\tau$ level.}\n The $\\tau$ level has $\\sum_{i=1}^{\\alpha_{max}}\\beta_{max}(i) $ arrays of pointers to vertex blocks, corresponding to all pairs of $\\alpha,\\beta$. The array pointed to by $I_{\\alpha,\\beta,\\tau}[k][j]$ has length $\\tau_{max}(\\alpha,\\beta) = \\max\\{\\tau \\vert (\\alpha,\\beta)_{\\tau}\\textnormal{-}core \\ exists \\} $. \n \\item[$\\bullet$] \\textit{vertex blocks.} The fourth level of $I_{\\alpha,\\beta,\\tau}$ is a singly linked list of vertex blocks. Each vertex block corresponds to a triple of $\\alpha,\\beta,\\tau$ values and contains all vertices $u$ such that $\\tau_{max}(\\alpha,\\beta,u) = \\tau$, along with a pointer to the next vertex block. \n The vertex blocks with the same $\\alpha$ and $\\beta$ are sorted by the associated $\\tau$ values and each of them has a pointer to the next. \n Among them, the vertex block with the largest $\\tau$ has its pointer pointing to null. \n \n \n\n\\end{enumerate}\n\nWe can construct index $I_{\\alpha,\\beta,\\tau}$ from the output of Algorithm \\ref{algo:naivedecomp}. 
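The grouping of the decomposition output into vertex blocks can be sketched as follows (our own toy illustration; the real index stores the blocks in the linked-list layout described above):

```python
# Toy sketch of building the fourth level: for a fixed (alpha, beta),
# vertices with the same tau_max(alpha, beta, u) value go into one
# vertex block, and blocks are kept sorted by their associated tau
# value; the block with the largest tau comes last (null next-pointer).
def vertex_blocks(tau_max, alpha, beta):
    blocks = {}
    for (a, b, u), t in tau_max.items():
        if (a, b) == (alpha, beta):
            blocks.setdefault(t, []).append(u)
    return [(t, sorted(blocks[t])) for t in sorted(blocks)]
```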
\nGiven $\\alpha$ and $\\beta$, we store all vertices $u$ in the same vertex block if they have the same $\\tau_{max}(\\alpha,\\beta,u)$, so each vertex block has an associated $(\\alpha,\\beta,\\tau)$ value. \nIn each $I_{\\alpha,\\beta,\\tau}[\\alpha][\\beta][\\tau]$, we store the address of the vertex block associated with $(\\alpha,\\beta,\\tau')$, where $\\tau'$ is the smallest associated $\\tau$ value such that $\\tau \\leq \\tau'$.\nThe index construction time is linear in the size of the decomposition results, which is bounded by the time complexity of Algorithm \\ref{algo:naivedecomp}. \n\n\\begin{algorithm}[t]\n\t\\caption{DecompQuery}\n\t\\label{algo:decomp_query}\n\t\\LinesNumbered\n\t\\KwIn{$I_{\\alpha,\\beta,\\tau},\\alpha,\\beta,\\tau,G$}\n\t\\KwOut{$(\\alpha,\\beta)_{\\tau}$-core\\xspace}\n\t$U',V',E' \\gets \\emptyset$ \\\\\n\t\\If{$I_{\\alpha,\\beta,\\tau}.size <\\alpha$ or $I_{\\alpha,\\beta,\\tau}[\\alpha].size <\\beta$ or $I_{\\alpha,\\beta,\\tau}[\\alpha][\\beta].size < \\tau$ }{\n return $\\emptyset$ \\\\\n\t}\n\t$ ptr\\gets I_{\\alpha,\\beta,\\tau}[\\alpha][\\beta][\\tau]$ \\\\\n \\While{$ptr$ is not null }{\n $v \\gets $ vertices in vertex block pointed by $ptr$ \\\\ \n \\If{$v \\in U(G)$}{\n $U' \\gets U' \\cup v$\n }\\Else{\n $V' \\gets V' \\cup v$\n }\n $ptr \\gets $ the address of the next vertex block \n } \n $E' \\gets E(G) \\cap (U' \\times V')$ \\\\ \n\\textbf{return} $G'=(U',V',E')$\n\\end{algorithm}\n\\begin{lemma}\n\\label{lemma:izero}\nThe space complexity of index $I_{\\alpha,\\beta,\\tau}$ is \n$O(\\sum_{i=1}^{\\alpha_{max}} \\sum_{j=1}^{\\beta_{max}(i)} (\\tau_{max}(i,j)+n))$,\nwhere $\\tau_{max}(i,j)$ is the maximal $\\tau$ among all $(i,j)_{\\tau}$-cores. \n\\end{lemma}\n\n\\begin{proof}\nBy construction, the space complexity of the first two levels of pointers is bounded by that of the $\\tau$ level. \nThe space complexity of the $\\tau$ level is $\\sum_{i=1}^{\\alpha_{max}} \\sum_{j=1}^{\\beta_{max}(i)} \\tau_{max}(i,j)$. 
\nGiven vertex $u$, let $\\alpha_{max}(u)$ be the maximal $\\alpha$ such that $u \\in (\\alpha,1)_{1}\\textnormal{-}core$.\nThe space complexity of the vertex blocks is \n$\\sum_{u\\in V(G)} \\sum_{i=1}^{\\alpha_{max}(u)}\\max \\{\\beta | u \\in (i,\\beta)_1\\textnormal{-}core\\} \\leq \\sum_{i=1}^{\\alpha_{max}} \\sum_{j=1}^{\\beta_{max}(i)} n$. Adding it to the space complexity of the $\\tau$ level completes the proof.\n\\end{proof} \n\n\\noindent\n{\\bf Index based query processing.} \nWhen the index $I_{\\alpha,\\beta,\\tau}$ is built on a bipartite graph $G$, Algorithm \\ref{algo:decomp_query} outlines how to restore $(\\alpha,\\beta)_{\\tau}$-core\\xspace given $\\alpha,\\beta$, $\\tau$ and $I_{\\alpha,\\beta,\\tau}$. \nFirst, it checks the validity of the input parameters. \nIf the queried $(\\alpha,\\beta)_{\\tau}$-core\\xspace does not exist, it terminates immediately (lines 2,3). \nOtherwise, it collects the vertices of $(\\alpha,\\beta)_{\\tau}$-core\\xspace from $I_{\\alpha,\\beta,\\tau}$ and restores the edges in the queried subgraph. \n\\begin{lemma}\n\\label{lemma:query}\nGiven a graph $G$ and parameters $\\alpha,\\beta$ and $\\tau$, \nAlgorithm \\ref{algo:decomp_query} retrieves \n$V((\\alpha,\\beta)_{\\tau}\\textnormal{-}core)$ from index $I_{\\alpha,\\beta,\\tau}$ in $O(|V((\\alpha,\\beta)_{\\tau}\\textnormal{-}core)|)$ time. \nThe edges in $(\\alpha,\\beta)_{\\tau}$-core\\xspace can be retrieved in $O(\\sum_{v \\in V((\\alpha,\\beta)_{\\tau}\\textnormal{-}core)}deg(v))$ time after obtaining the vertex set.\n\\end{lemma}\n\\begin{proof}\nAs each vertex block only stores the vertices with one given $\\tau$ value, the vertex blocks pointed by $I_{\\alpha,\\beta,\\tau}[\\alpha][\\beta][\\tau']$ where $\\tau' \\geq \\tau$ give us all the vertices in $(\\alpha,\\beta)_{\\tau}$-core\\xspace, which takes $O(|V((\\alpha,\\beta)_{\\tau}\\textnormal{-}core)|)$ time. 
\nTo restore the edges in $(\\alpha,\\beta)_{\\tau}$-core\\xspace, for each vertex $v$ in $(\\alpha,\\beta)_{\\tau}$-core\\xspace, we go through each of its neighbors in $G$ and check if it is in $V((\\alpha,\\beta)_{\\tau}\\textnormal{-}core)$. This takes $O(\\sum_{v \\in V((\\alpha,\\beta)_{\\tau}\\textnormal{-}core)}(deg(v)))$ time.\n\\end{proof} \n\n\\noindent\nAccording to Lemma \\ref{lemma:query}, given $\\alpha,\\beta$ and $\\tau$, the vertex set of the $(\\alpha,\\beta)_{\\tau}$-core\\xspace is retrieved in optimal time. \n\\begin{example}\nFigure \\ref{fig.index} illustrates the $I_{\\alpha, \\beta, \\tau}$ index for the bipartite graph in Figure \\ref{fig:moltivation}. When querying $(1,2)_{1}$-core, we start with the vertex block pointed by $I_{\\alpha,\\beta,\\tau}[1][2][1]$ ($u_0,v_0$ and $v_1$). \nThen we keep collecting the vertices until we have fetched the vertices pointed by $I_{\\alpha,\\beta,\\tau}[1][2][2]$ ($u_1$ to $u_5$ and $v_2$ to $v_6$).\nThe collected vertices form the vertex set of $(1,2)_{1}$-core, which is $u_0$ to $u_5$ and $v_0$ to $v_6$. \n\\end{example}\n\\section{Optimizations of Index Construction }\nThe above decomposition algorithm has the following issues:\n($1$) the same subgraph can be computed repeatedly for different $\\alpha$ and $\\beta$ values. \nFor example, if $(1,1)_{\\tau}$-core is the same subgraph as $(1,2)_{\\tau}$-core, \nthen we will compute it twice when $\\beta$=$1$ and $\\beta$=$2$. 
\n($2$) when removing an edge $e$, we need to enumerate all the butterflies containing $e$.\nThe basic implementation of butterfly enumeration is inefficient, which involves finding three connected vertices first and then checking whether a fourth vertex can form a butterfly with the existing ones.\nWe devise computation-sharing optimizations to address the first issue, and adopt the \\texttt{Bloom}-\\texttt{Edge}-\\texttt{Index} proposed in \\cite{wang2020efficient} to speed up the butterfly enumeration process.\n\n\\noindent\n{\\bf Computation sharing optimizations.}\nIn this part, we reduce the number of times the same $(\\alpha,\\beta)_{\\tau}$-core\\xspace subgraphs are visited by skipping some combinations of $\\alpha$, $\\beta$, and $\\tau$, while yielding the same decomposition results. \n\n\\noindent\n{$\\bullet$ \\it Skip computation for $\\tau$.}\nIn Algorithm \\ref{algo:naivedecomp}, if vertex $u$ is removed when computing $(\\alpha,\\beta)_{\\tau+1}$-core from $(\\alpha,\\beta)_{\\tau}$-core\\xspace, we conclude that $\\tau_{max}(\\alpha,\\beta,u)$=$\\tau$. However, if both $(\\alpha,\\beta)_{\\tau}$-core\\xspace and $(\\alpha,\\beta)_{\\tau+1}$-core have already been visited for another $\\beta$, this process is redundant and the current $\\tau$ value can be skipped. \nSpecifically, in the innermost while-loop (lines $9$-$15$), we can use an array $\\beta_{min}[\\tau]$ to store the minimal lower engagement of $(\\alpha,\\beta)_{\\tau}$-core\\xspace for each $\\tau$. \nIf $\\beta_{min}[\\tau] \\geq \\beta$, then the current $(\\alpha,\\beta)_{\\tau}$-core\\xspace has already been visited.\nWe only compute $\\tau_{max}(\\alpha,\\beta,u)$ values when at least one of $(\\alpha,\\beta)_{\\tau}$-core\\xspace and $(\\alpha,\\beta)_{\\tau+1}$-core has not been visited. Otherwise, we skip the current $\\tau$ value. 
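The visited test at the heart of this optimization can be sketched as follows (a minimal illustration written by us, assuming $\beta_{min}$ is maintained as just described, with position $\tau-1$ holding the value for strength level $\tau$):

```python
# Minimal sketch of the skip test for tau: beta_min[tau-1] stores the
# minimal lower-layer engagement observed in the already-visited cores
# at strength level tau; if it is at least the current beta, the
# current (alpha, beta)_tau-core coincides with a visited core.
def visited_before(beta_min, beta, tau):
    return tau - 1 < len(beta_min) and beta_min[tau - 1] >= beta
```

With $\beta_{min} = [2,2]$ as in the example below, the cores for $\beta = 2$ test as visited at both strength levels, while $\beta = 3$ does not.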
\nThe following lemma explains how to correctly obtain multiple $\\tau_{max}(\\alpha,\\beta,u)$ values when removing one vertex.\n\\begin{lemma}\n\\label{lemma:sharing}\nGiven $\\alpha,\\beta,\\tau$ and graph $G$, let $u$ be a vertex in $(\\alpha,\\beta)_{\\tau}$-core but not in $(\\alpha,\\beta)_{\\tau+1}$-core. If there exists an integer $k$ such that $(\\alpha,\\beta)_{\\tau}$-core = $(\\alpha,\\beta+k)_{\\tau}$-core, then $u \\not \\in$ $(\\alpha,\\beta+k)_{\\tau+1}$-core. \n\\end{lemma}\nThis lemma is immediate from Lemma \\ref{lemma:nest}, because if vertex $u$ were in $(\\alpha,\\beta+k)_{\\tau+1}$-core, then it would also be contained in $(\\alpha,\\beta)_{\\tau+1}$-core, which contradicts our assumption. \nTherefore, for vertices like $u$, we can conclude that $\\tau_{max}(\\alpha, \\beta',u)$=$\\tau$, for all $\\beta \\leq \\beta' \\leq \\beta+k$. In this way, we fully preserve the decomposition outputs of Algorithm \\ref{algo:naivedecomp}. \n\n\\noindent\n{$\\bullet$ \\it Skip computation for $\\alpha$ and $\\beta$.}\nTo skip some $\\beta$ values, we keep track of the minimal engagement of lower-level vertices of the visited $(\\alpha,\\beta)_{\\tau}$-cores\\xspace in the middle while-loop (lines $7$-$16$) of Algorithm \\ref{algo:naivedecomp} as $\\beta^*$. At line $16$, if $\\beta^* > \\beta$, then $\\beta$ should be directly increased to $\\beta^*+1$ (the first value that is not computed yet) and the values from $\\beta$ to $\\beta^*$ are skipped.\nThis is because for all $ \\beta \\leq \\beta' \\leq \\beta^*$, the decomposition process is exactly the same. \nLikewise, to skip some $\\alpha$ values, we record the minimal engagement of upper-level vertices of the visited $(\\alpha,\\beta)_{\\tau}$-cores\\xspace in the outermost while-loop (lines $5$-$17$) of Algorithm \\ref{algo:naivedecomp} as $\\alpha^*$. 
At line $17$, if $\alpha^* > \alpha$, then $\alpha$ is directly increased to $\alpha^*+1$ and the values from $\alpha$ to $\alpha^*$ are skipped. \n\n\begin{example}\nAs shown in Figure \ref{fig.index},\nwhen $\beta$=$1$, we remove $u_0,v_0$ and $v_1$ when computing $(1,1)_2$-core from $(1,1)_1$-core. \nThe minimal engagement of the lower-level vertices in both $(1,1)_1$-core and $(1,1)_2$-core is $2$, so the array $\beta_{min}$ is $[2,2]$.\nThis means that $\tau_{max}(1,\beta',u)$ = $1$ and $\tau_{max}(1,\beta',u')$ = $2$, where $u \in \{u_0,v_0,v_1 \}$, $u' \in \{u_1,u_2,u_3,u_4,u_5,v_2,v_3,v_4,v_5,v_6\}$, and $\beta' \in \{1,2\}$. \nWhen $\beta$=$2$, we infer from the array $\beta_{min}$ that $(1,2)_1$-core and $(1,2)_2$-core are already visited, so we can skip $\beta$=$2$. \nWhen $\beta$=$3$, the current $\beta$ value exceeds the values in $\beta_{min}$, so it cannot be skipped. \n\end{example}\n\n\noindent\n{\bf Bloom-Edge-Index-based optimization.}\nDuring the edge deletions of Algorithm \ref{algo:naivedecomp}, we need to repeatedly retrieve the butterflies containing the deleted edges. \nTo address this efficiently, we adopt the \texttt{Bloom}-\texttt{Edge}-\texttt{Index} (hereafter denoted as \texttt{BE}-\texttt{Index}\xspace) proposed in \cite{wang2020efficient} to facilitate butterfly enumeration. \nSpecifically, a bloom is a $2 \times k$-biclique, which contains $\frac{k \times (k\textnormal{-}1)}{2}$ butterflies. \nEach edge in the bloom is contained in $k\textnormal{-}1$ butterflies. \nThe \texttt{BE}-\texttt{Index}\xspace compresses butterflies into blooms and keeps track of the edges they contain. \nThe space complexity of the \texttt{BE}-\texttt{Index}\xspace and the time complexity to build it are both $O( \sum_{e=(u,v) \in E(G)} min(deg(u),deg(v)))$ \cite{wang2020efficient}. 
\nHereafter, we use $T_{BE}$ to represent $\sum_{e=(u,v) \in E(G)} min(deg(u),deg(v))$.\nDeleting an edge $e$ based on the \texttt{BE}-\texttt{Index}\xspace takes $O(sup(e))$ time, where $sup(e)$ is the number of butterflies containing $e$. \nThis is because when $e$ is deleted, the \texttt{BE}-\texttt{Index}\xspace fetches the associated blooms and updates the support numbers of the affected edges in these blooms. In total, there are $O(sup(e))$ affected edges if edge $e$ is deleted. \nHere we evaluate the impact of the \texttt{BE}-\texttt{Index}\xspace on the overall time and space complexity of Algorithm \ref{algo:naivedecomp}. \n\begin{lemma}\n\label{thm:bloom}\nBy adopting the \texttt{BE}-\texttt{Index}\xspace for edge deletions, \nthe time complexity of Algorithm \ref{algo:naivedecomp} becomes \n$O(T_{BE})$+$O(\sum_{\alpha =1 }^{\alpha_{max}} \sum_{\beta=1}^{\beta_{max}(\alpha)} \,\mathbin{\resizebox{0.15in}{!}{\rotatebox[origin=c]{90}{$\Join$}}}_{(\alpha,\beta)_{1}\textnormal{-}core})$, \nwhere $\,\mathbin{\resizebox{0.15in}{!}{\rotatebox[origin=c]{90}{$\Join$}}}_{(\alpha,\beta)_{1}\textnormal{-}core}$ is the number of butterflies in $(\alpha,\beta)_{1}$-core\xspace.\n\end{lemma}\n\begin{proof}\nIn the innermost loop of Algorithm \ref{algo:naivedecomp}, we remove the edges from $(\alpha,\beta)_{1}$-core\xspace to get $(\alpha,\beta)_{\tau}$-core\xspace. \nAs each edge deletion takes $O(sup(e))$ time \cite{wang2020efficient}, it takes $\sum_{e \in E((\alpha,\beta)_{1}\textnormal{-}core)} sup(e)$ = $O( \,\mathbin{\resizebox{0.15in}{!}{\rotatebox[origin=c]{90}{$\Join$}}}_{(\alpha,\beta)_{1}\textnormal{-}core} )$ time to compute $(\alpha,\beta)_{\tau}$-core\xspace from $(\alpha,\beta)_{1}$-core\xspace. 
\nAs we do this for all possible $\alpha$ and $\beta$, the time complexity within the while-loops becomes $O(\sum_{\alpha =1 }^{\alpha_{max}} \sum_{\beta=1}^{\beta_{max}(\alpha)} \,\mathbin{\resizebox{0.15in}{!}{\rotatebox[origin=c]{90}{$\Join$}}}_{(\alpha,\beta)_{1}\textnormal{-}core})$. \nAdding the \texttt{BE}-\texttt{Index}\xspace construction time to it completes the proof. \n\end{proof}\n\section{A Learning-based Hybrid Computation Paradigm}\nAlthough the index $I_{\alpha,\beta,\tau}$ supports the optimal retrieval of the vertices in the queried $(\alpha,\beta)_{\tau}$-core\xspace, it does not scale well to large graphs due to its long build time and large space complexity, even with the above optimizations.\nFor instance, on datasets \texttt{Teams}, \texttt{Wiki-en}, \texttt{Amazon}, and \texttt{DBLP}, the index $I_{\alpha,\beta,\tau}$ cannot be built within two hours, as evaluated in our experiments.\nIn this section, we present \gratings that selectively store the vertices of $(\alpha,\beta)_{\tau}$-core\xspace for some combinations of $\alpha,\beta$, and $\tau$.\nWe also train a feed-forward neural network on a small portion of the queries to predict the choice of \grating that minimizes the running time for each new incoming query. 
\n\n\\begin{table*}[tbh]\n\\centering\n\\scalebox{0.85}{\n\\begin{tabular}{c|c|c|c} \n\\noalign{\\hrule height 1pt}\nIndex & Space Complexity & Build Time & Query Time \\\\\n\\noalign{\\hrule height 0.56pt}\n$I_{\\alpha,\\beta,\\tau}$ & $O(\\sum_{i=1}^{\\alpha_{max}}\\sum_{j=1}^{\\beta_{max}(i)}(\\tau_{max}(i,j)+n))$ & $O(T_{BE}\\textnormal{+}\\sum_{\\alpha =1 }^{\\alpha_{max}} \\sum_{\\beta=1}^{\\beta_{max}(\\alpha)}\\,\\mathbin{\\resizebox{0.15in}{!}{\\rotatebox[origin=c]{90}{$\\Join$}}}_{(\\alpha,\\beta)_{1}\\textnormal{-}core})$ & $O(\\sum_{v \\in V((\\alpha,\\beta)_{\\tau}\\textnormal{-}core)}(deg(v)))$ \\\\\n$I_{\\alpha,\\beta}$ & $O(m)$ & $O(\\delta \\cdot m )$ & $O(T_{peel}((\\alpha,\\beta)\\textnormal{-}core))$ \\\\ \n$I_{\\beta,\\tau}$ & $O( \\sum_{j=1}^{\\beta_{max}}(\\tau_{max}(1,j)+n))$ & $O(T_{BE}\\textnormal{+}\\sum_{\\beta=1}^{\\beta_{max}} \\,\\mathbin{\\resizebox{0.15in}{!}{\\rotatebox[origin=c]{90}{$\\Join$}}}_{(1,\\beta)_{1}\\textnormal{-}core})$ & $O(T_{peel}((1,\\beta)_{\\tau}\\textnormal{-}core)$ \\\\\n$I_{\\alpha,\\tau}$ & $O(\\sum_{i=1}^{\\alpha_{max}}(\\tau_{max}(i,1)+n))$ & $O(T_{BE}\\textnormal{+}\\sum_{\\alpha =1 }^{\\alpha_{max}} \\,\\mathbin{\\resizebox{0.15in}{!}{\\rotatebox[origin=c]{90}{$\\Join$}}}_{(\\alpha,1)_{1}\\textnormal{-}core})$ & $O(T_{peel}((\\alpha,1)_{\\tau}\\textnormal{-}core)$ \\\\ \n\\noalign{\\hrule height 1pt}\n\\end{tabular}\n}\n\\caption{Space complexity, index construction time and query processing time of different indexes}\n\\label{tab:indexCompare}\n\\end{table*} \n\n\\noindent\n{\\bf \\gratings.} \nWe introduce three \\gratings $I_{\\alpha,\\beta}$, $I_{\\beta,\\tau}$, and $I_{\\alpha,\\tau}$ in this part. \nEach of them is a three-level index with two levels of pointers and one level of vertex blocks. The main structures of them are presented as follows. \n\n\n\\noindent\n$\\bullet$ $I_{\\beta,\\tau}$.\nFor all $\\beta$, $\\tau$, $I_{\\beta,\\tau}[\\beta][\\tau]$ points to the vertices $u\\in V(G)$ s.t. 
$\\tau_{max}(1,\\beta,u) = \\tau$.\nIt is the component of $I_{\\alpha,\\beta,\\tau}$ where $\\alpha$=$1$, which can fetch the vertices of all subgraphs of the form $(1,\\beta)_{\\tau}$-core\\xspace in optimal time. \n\n\\noindent\n$\\bullet$ $I_{\\alpha,\\tau}$.\nFor all $\\alpha$, $\\tau$, $I_{\\alpha,\\tau}[\\alpha][\\tau]$ points to the vertices $u\\in V(G)$ s.t. $\\tau_{max}(\\alpha,1,u) = \\tau$.\nIt is the component of $I_{\\alpha,\\beta,\\tau}$ where $\\beta$=$1$, which can fetch the vertices of all subgraphs of the form $(\\alpha,1)_{\\tau}$-core\\xspace in optimal time. \n\n\\noindent\n$\\bullet$ $I_{\\alpha,\\beta}$ consists of $I_{\\alpha,\\beta} U$ and $I_{\\alpha,\\beta} V$ to store the vertices in $U(G)$ and $L(G)$ separately.\n$I_{\\alpha,\\beta} U[\\alpha][\\beta]$ points to the vertices $u\\in U(G)$ s.t. \n$\\beta = \\max\\{\\beta' | u \\in (\\alpha,\\beta')\\textnormal{-}core \\} $ and $I_{\\alpha,\\beta} V[\\beta][\\alpha]$ points to the vertices $v\\in L(G)$ such that $\\alpha = \\max\\{\\alpha' | v \\in (\\alpha',\\beta)\\textnormal{-}core\\} $. \n$I_{\\alpha,\\beta}$ can fetch the vertices of any $(\\alpha,\\beta)$-core\\xspace in optimal time. \n\nNote that, $I_{\\alpha,\\beta}$ is proposed to support efficient $(\\alpha,\\beta)$-core\\xspace computation as introduced in \\cite{liu2020efficient} while $I_{\\beta,\\tau}$ and $I_{\\alpha,\\tau}$ are essentially parts of $I_{\\alpha,\\beta,\\tau}$. For each type of \\gratings, we analyze its construction time, space complexity, and the query time to compute $(\\alpha,\\beta)_{\\tau}$-core\\xspace based on it.\n\\begin{lemma}\nThe time complexity to build $I_{\\beta,\\tau}$ is $O(T_{BE}$+$\\sum_{\\beta=1}^{\\beta_{max}} \\,\\mathbin{\\resizebox{0.15in}{!}{\\rotatebox[origin=c]{90}{$\\Join$}}}_{(1,\\beta)_{1}\\textnormal{-}core})$\nand the space complexity of $I_{\\beta,\\tau}$ is $O( \\sum_{j=1}^{\\beta_{max}}(\\tau_{max}(1,j)+n))$. 
\nIt takes $O(T_{peel}((1,\beta)_{\tau}\textnormal{-}core))$ time to compute $(\alpha,\beta)_{\tau}$-core\xspace using $I_{\beta,\tau}$.\n\end{lemma}\n\begin{proof}\nAs discussed in Lemma \ref{thm:bloom}, the \texttt{BE}-\texttt{Index}\xspace, which takes $O(T_{BE})$ time to construct, can significantly speed up butterfly enumeration during edge deletions. \nThen, we fix $\alpha$ to one and run lines 6-16 of Algorithm \ref{algo:naivedecomp} to compute all $(1,\beta)_{\tau}$-core\xspace. \nFor each possible $\beta$, this process takes $O(\,\mathbin{\resizebox{0.15in}{!}{\rotatebox[origin=c]{90}{$\Join$}}}_{(1,\beta)_{1}\textnormal{-}core})$ time, so in total it takes $O(\sum_{\beta=1}^{\beta_{max}} \,\mathbin{\resizebox{0.15in}{!}{\rotatebox[origin=c]{90}{$\Join$}}}_{(1,\beta)_{1}\textnormal{-}core})$ time. Adding the \texttt{BE}-\texttt{Index}\xspace construction time ($O(T_{BE})$) gives the construction time complexity of $I_{\beta,\tau}$. \n\nAs $I_{\beta,\tau}$ is the part of $I_{\alpha,\beta,\tau}$ with $\alpha$ = $1$, its space equals that of the part of $I_{\alpha,\beta,\tau}$ pointed to by $I_{\alpha,\beta,\tau}[1]$. The size of the arrays of pointers in $I_{\beta,\tau}$ is bounded by $O(\sum_{j=1}^{\beta_{max}}\tau_{max}(1,j))$. The size of the vertex blocks is bounded by the number of vertices in all $(1,\beta)_{\tau}$-core\xspace, which is $O(\sum_{j=1}^{\beta_{max}}|V((1,j)_{\tau}\textnormal{-}core)|) = O(\sum_{j=1}^{\beta_{max}} n)$. Therefore, the space complexity of $I_{\beta,\tau}$ is $O( \sum_{j=1}^{\beta_{max}}(\tau_{max}(1,j)+n))$. \n\end{proof}\n\n\noindent\nIn order to query $(\alpha,\beta)_{\tau}$-core\xspace based on $I_{\beta,\tau}$, we first fetch the $(1,\beta)_{\tau}$-core\xspace from $I_{\beta,\tau}$ and then compute $(\alpha,\beta)_{\tau}$-core\xspace from $(1,\beta)_{\tau}$-core\xspace. 
\n\\begin{lemma}\nThe query time of $(\\alpha,\\beta)_{\\tau}$-core\\xspace based on $I_{\\beta,\\tau}$ is $T_{peel}((1,\\beta)_{\\tau}\\textnormal{-}core)$. \n\\end{lemma}\n\\begin{proof}\nGiven engagement constraints $\\alpha,\\beta$ and strength level $\\tau$, let $G'$ be the $(1,\\beta)_{\\tau}$-core\\xspace on bipartite graph $G$. \nFirst, it takes $O(|V(G')|$ time to fetch the vertices in $(1,\\beta)_{\\tau}$-core\\xspace from $I_{\\beta,\\tau}$. \nRestoring the edges of $(1,\\beta)_{\\tau}$-core\\xspace from $G$ takes $O(\\sum_{u\\in V(G')} deg(u,G'))$ time. \nThen, we call the \\compute algorithm on $(1,\\beta)_{\\tau}$-core\\xspace to compute $(\\alpha,\\beta)_{\\tau}$-core\\xspace, which takes $O(T_{peel}((1,\\beta)_{\\tau}\\textnormal{-}core))$ time.\n\\end{proof}\n\\begin{example}\nIn Figure \\ref{fig.index}, the component of $I_{\\alpha,\\beta,\\tau}$ wrapped in dotted line is the $I_{\\beta,\\tau}$ of the graph in Figure \\ref{fig:moltivation}. If $(2,2)_2$-core is queried, we first to obtain $(1,2)_2$-core from $I_{\\beta,\\tau}$ and compute $(2,2)_2$-core by calling the peeling algorithm.\n\\end{example}\n\nNote that index $I_{\\alpha,\\tau}$ is symmetric to index $I_{\\beta,\\tau}$, with $\\alpha$ and $\\beta$ switched. \nIt is immediate that \nit takes $O(T_{BE}$+$\\sum_{\\alpha=1}^{\\alpha_{max}} \\,\\mathbin{\\resizebox{0.15in}{!}{\\rotatebox[origin=c]{90}{$\\Join$}}}_{(\\alpha,1)_{1}\\textnormal{-}core})$ time to construct $I_{\\alpha,\\tau}$ and its space complexity is $O( \\sum_{i=1}^{\\alpha_{max}}(\\tau_{max}(i,1)+n))$. \nIt takes $T_{peel}((\\alpha,1)_{\\tau}\\textnormal{-}core)$ time to compute $(\\alpha,\\beta)_{\\tau}$-core\\xspace using $I_{\\alpha,\\tau}$.\nAs for $I_{\\alpha,\\beta}$, it takes $O(\\delta \\cdot m)$ time to construct and its space complexity is $O(m)$, where $\\delta$ is the degeneracy of the graph \\cite{liu2020efficient}. 
\nTo compute $(\alpha,\beta)_{\tau}$-core\xspace using $I_{\alpha,\beta}$, we fetch the vertices of $(\alpha,\beta)$-core\xspace and restore the edges in $O(|V(G')|+\sum_{u\in V(G')} deg(u,G'))$ time, where $G'$ = $(\alpha,\beta)$-core\xspace. \nThen, we call the peeling algorithm on $(\alpha,\beta)$-core\xspace to compute $(\alpha,\beta)_{\tau}$-core\xspace, which takes $O(T_{peel}((\alpha,\beta)\textnormal{-}core))$ time. \n\nThe sizes of the \gratings can be viewed as the projections of $I_{\alpha,\beta,\tau}$ onto three planes, as depicted in Figure \ref{fig.cube}. We also summarize the space complexity, build time, and query time of the \gratings in Table \ref{tab:indexCompare}. \n\n\begin{figure}[htb]\n\centering\n\scalebox{0.75}{\n\includegraphics[width=0.7\textwidth]{mol_classification.pdf}\n}\n\caption{Motivation example for learning-based query processing (\texttt{DBpedia-Team})}\n\label{fig:learning}\n\end{figure}\n\n\noindent\n{\bf Learning-based hybrid query processing.}\nAs $I_{\alpha,\beta}$, $I_{\beta,\tau}$, and $I_{\alpha,\tau}$ do not store all the decomposition results like $I_{\alpha,\beta,\tau}$, their construction is more time- and space-efficient than that of $I_{\alpha,\beta,\tau}$. \nHowever, the reduced index computation inevitably compromises the query processing performance.\nThis is because the \gratings only store the vertices of $(\alpha,\beta)$-core\xspace, $(1,\beta)_{\tau}$-core\xspace, and $(\alpha,1)_{\tau}$-core\xspace. 
\nClearly, computing the $(\alpha,\beta)_{\tau}$-core\xspace based on these \gratings results in different response times.\nTo illustrate this point, in Figure \ref{fig:learning} we plot all parameter combinations on dataset \texttt{DBpedia-team} and color each combination according to which query processing algorithm performs best.\nFor ease of presentation, we denote the query processing algorithms based on indexes $I_{\alpha,\beta}$, $I_{\beta,\tau}$, and $I_{\alpha,\tau}$ as $Q_{\alpha,\beta}$, $Q_{\beta,\tau}$, and $Q_{\alpha,\tau}$ respectively.\nThe red points indicate that $Q_{\beta,\tau}$ is the fastest among the three. \nThe green ones represent the win cases for $Q_{\alpha,\tau}$, and the black points are the cases where $Q_{\alpha,\beta}$ is the best.\nEvidently, the points of different colors are mingled together and distributed across the parameter space.\nThis suggests that simple rules partitioning the parameter space are unlikely to decide which of $\{Q_{\alpha,\beta}, Q_{\beta,\tau}, Q_{\alpha,\tau}\}$ is the fastest. \nHence, we formulate the choice as a classification problem and resort to machine learning techniques to solve it. 
\n\\begin{algorithm}[thb]\n \\LinesNumbered\n \t\\caption{Hybrid Computation Algorithm}\n\t\\label{algo:hyrid}\n\t\\tcp{\\textbf{Offline training:}} \n\t\\setcounter{AlgoLine}{0} \n\t\\KwIn{ $G:$ Input bipartite graph} \n\t\\KwOut{Neural network $C:D \\to \\{Q_{\\alpha,\\beta},Q_{\\beta,\\tau},Q_{\\alpha,\\tau}\\}$ } \n Build $I_{\\beta,\\tau},I_{\\alpha,\\tau}$ and $I_{\\alpha,\\beta}$ for $G$\\\\\n \\ForEach{$q\\in \\{\\textnormal{$N$ random queries}\\} $ on $G$}{\n $feature(q)\\gets[\\alpha,\\beta,\\tau\\textnormal{ of $q$}]$ \\\\ \n $label(q)\\gets$ the fastest algorithm in $\\{Q_{\\alpha,\\beta}, Q_{\\beta,\\tau}, Q_{\\alpha,\\tau} \\}$\\\\\n }\n $X=[feature(q)]$, $q\\in$ $N$ queries run on $G$ \\\\\n $Y=[label(q)]$ , $q\\in$ $N$ queries run on $G$ \\\\\n \n $C \\gets$ trained neural network on $X,Y$ \\\\\n\t\\tcp{\\textbf{Online query processing:}} \n\t\\KwIn{Query parameters: $\\alpha,\\beta,\\tau$}\n\t\\KwOut{$(\\alpha,\\beta)_{\\tau}$-core\\xspace in $G$} \n\t\\setcounter{AlgoLine}{0}\n\t$Q_{pred}\\gets$ $C.predict(\\alpha,\\beta,\\tau)$ \\\\\n Run $Q_{pred}$ to compute $(\\alpha,\\beta)_{\\tau}$-core\\xspace \\\\\n\\textbf{return } $(\\alpha,\\beta)_{\\tau}$-core\\xspace \\\\\n\\end{algorithm}\n\nWe introduce a hybrid computation algorithm (Algorithm \\ref{algo:hyrid}, denoted by $Q_{hb}$), which selects from $\\{ Q_{\\alpha,\\beta},Q_{\\beta,\\tau},Q_{\\alpha,\\tau} \\}$ based on the query parameters $\\alpha,\\beta$, and $\\tau$.\nIn the offline training phase, we build $I_{\\beta,\\tau},I_{\\alpha,\\tau}$ and $I_{\\alpha,\\beta}$ on $G$ and obtain the runtime of $Q_{\\alpha,\\beta},Q_{\\beta,\\tau}$, $Q_{\\alpha,\\tau}$ on $N$ queries, \nwhere $N$ is chosen to be less than $5\\%$ of all possible queries. \nThe label of a query is the algorithm that responds to it in the shortest time.\nThen, we train a feed-forward neural network $C$ on the $N$ labeled query instances. 
\nIn the online query processing phase, given a new query of $(\alpha,\beta)_{\tau}$-core\xspace, the trained neural network makes a prediction based on $\alpha,\beta \textnormal{, and }\tau$. Then, we use the predicted query processing algorithm to compute $(\alpha,\beta)_{\tau}$-core\xspace.\n\nHere we detail how to train the feed-forward neural network.\nWe use only one hidden layer in $C$ to avoid over-fitting. \nThe important hyper-parameters of $C$ include the number of hidden units $H$ and the type of optimizer.\nWe use $5$-fold cross-validation to evaluate these hyper-parameters. \nSpecifically, we split the $N$ labeled queries into $5$ partitions, and each time we take one partition as the validation set and the remainder as the training set. \nFor each hyper-parameter setting, we build a classifier on the training set $5$ times and calculate a performance metric on the corresponding validation set. \nIn our model, we define a \textit{time-sensitive error} on the validation set as the performance metric, which calculates a weighted mis-classification cost w.r.t. the actual query times.\nLet $i_k, j_k \in \{ 1,2,3 \}$ (encoding $Q_{\alpha,\beta},Q_{\beta,\tau},Q_{\alpha,\tau}$) be the predicted class and the actual class of the $k$-th instance. \nThe time-sensitive error is defined as \n$$ error(i_k,j_k) \textnormal{=} e_{i_k}^T \n\begin{bmatrix}\n0 & t_{1,k}-t_{2,k} & t_{1,k}-t_{3,k}\\\nt_{2,k}-t_{1,k} & 0 & t_{2,k}-t_{3,k}\\\nt_{3,k}-t_{1,k} & t_{3,k}-t_{2,k} & 0\\\n\end{bmatrix}\ne_{j_k} $$\nwhere $e_{i_k}$ and $e_{j_k}$ are one-hot vectors of length $3$ with the $i_k$-th and $j_k$-th positions being $1$, and \n$t_{1,k}$, $t_{2,k}$, and $t_{3,k}$ are the running times of $Q_{\alpha,\beta},Q_{\beta,\tau},Q_{\alpha,\tau}$ on the $k$-th instance respectively. 
\nThe time-sensitive error measures the gap between the predicted query algorithm and the optimal query algorithm.\nIt is averaged over all instances in the validation set and across the $5$ iterations of cross-validation.\nThen, the hyper-parameter setting with the lowest time-sensitive error is chosen. \nIn this way, we are more likely to find the hyper-parameter settings that minimize the query time instead of those that merely classify each instance correctly. \n\nNote that training the feed-forward neural network (lines 2 - 7) takes significantly less time than the \grating construction process (line 1), since only $N$ ($N \leq 5\%$ of the total number of possible queries) random queries are used. \n\n\begin{example}\nOn dataset \texttt{DBpedia-starring}, suppose $\alpha$=$2$, $\beta$=$8$, and $\tau$=$9$ are given.\n$Q_{\alpha,\beta}$ takes $0.53$ seconds to find the queried subgraph, while \n$Q_{\beta,\tau}$ and $Q_{\alpha,\tau}$ take $0.03$ and $0.05$ seconds respectively. The optimal query processing algorithm on this instance is $Q_{\beta,\tau}$. \nAccuracy as a performance metric would give equal penalty to mis-classifying this instance as $Q_{\alpha,\beta}$ or $Q_{\alpha,\tau}$, which is clearly inappropriate. \nInstead, the time-sensitive error gives a penalty of $0.02$ if we predict $Q_{\alpha,\tau}$ and $0.50$ if we predict $Q_{\alpha,\beta}$. \n\end{example}\n\section{Experiments}\n\nIn this section, we first validate the effectiveness of the $\tau$-strengthened $(\alpha,\beta)$-core\xspace model. 
\nThen, we evaluate the performance of the index construction algorithms as well as the query processing algorithms.\n\n\subsection{Experimental Settings}\n\n\noindent\n{\bf Algorithms.} Our empirical studies evaluate the following algorithms:\n\n\noindent\n{\em $\bullet$ Index construction algorithms.}\nWe compare two $I_{\alpha,\beta,\tau}$ construction algorithms: the naive decomposition algorithm \texttt{decomp-naive}\xspace and the decomposition algorithm with optimizations \texttt{decomp-opt}\xspace. We also evaluate the index construction algorithms of $I_{\beta,\tau}$ and $I_{\alpha,\tau}$.\nAs for $I_{\alpha,\beta}$, we report its size and build time by running the index construction algorithm in \cite{liu2020efficient}. \n\n\noindent\n{\em $\bullet$ Query processing algorithms.}\nWe use the online computation algorithm presented in Section 4 as the baseline method, denoted as $Q_{bs}$.\nWe compare it to the index-based query processing algorithms $Q_{\alpha,\beta,\tau}, Q_{\alpha,\beta}, Q_{\beta,\tau}$, and $Q_{\alpha,\tau}$, which are based on $I_{\alpha,\beta,\tau},I_{\alpha,\beta},I_{\beta,\tau}$, and $I_{\alpha,\tau}$ respectively. \nWe also evaluate the hybrid computation algorithm $Q_{hb}$, which relies on a trained classifier and the indexes $I_{\alpha,\beta}, I_{\beta,\tau}$, and $I_{\alpha,\tau}$. \n\nAll algorithms are implemented in C++ and the experiments are run on a Linux server with Intel Xeon E3-1231 processors and $16$GB main memory. 
\textit{We terminate an algorithm if its running time exceeds two hours.}\n\n\begin{table*}[tbh]\n\centering\n\scalebox{1.0}{\n\begin{tabular}{cccc|cccc}\n\noalign{\hrule height 1.23pt}\nDataset & $|E|$ & $|U|$ & $|L|$ & $\alpha_{max}$ & $\beta_{max}$ & $\tau_{max}$ & $\delta$ \\ \n\noalign{\hrule height 0.7pt}\nCond-mat (AC) & $58$K & $38$K & $16$K & $37$ & $13$ & $63$ & $8$ \\\nWriters (WR) & $144$K & $135$K & $89$K & $11$ & $82$ & $99$ & $6$ \\\nProducers (PR) & $207$K & $187$K & $48$K & $220$ & $18$ & $219$ & $6$ \\\nMovies (ST) & $281$K & $157$K & $76$K & $19$ & $215$ & $222$ & $7$ \\\nLocation (LO) & $294$K & $225$K & $172$K & $12$ & $853$ & $852$ & $8$ \\\nBookCrossing (BX) & $434$K & $264$K & $78$K & $376$ & $100$ & $375$ & $13$ \\\nTeams (TM) & $1.4$M & $935$K & $901$K & $11$ & $1063$ & $373$ & $9$ \\\nWiki-en (WC) & $3.80$M & $2.04$M & $1.85$M & $39$ & $7659$ & $7658$ & $18$ \\\nAmazon (AZ) & $5.74$M & $3.38$M & $2.15$M & $659$ & $294$ & $658$ & $26$ \\\nDBLP (DB) & $8.6$M & $5.4$M & $1.4$M & $421$ & $64$ & $420$ & $10$ \\\n\noalign{\hrule height 1.23pt}\n\end{tabular}\n}\n\vspace{-2mm}\n\caption{This table reports the basic statistics of the $10$ real graph datasets.}\n\label{tab:datainfo}\n\end{table*}\n\n\noindent\n{\bf Datasets.}\nWe use $10$ real graphs in our experiments, which are obtained from the KONECT website\footnote{\url{http:\/\/konect.uni-koblenz.de\/networks\/}}. \nTable \ref{tab:datainfo} lists the statistics of these datasets, sorted by the number of edges in ascending order. \nThe abbreviations of the dataset names are listed in parentheses.\n$|E|$ is the number of edges in the graph. \n$|U|$ and $|L|$ are the numbers of vertices in the upper and lower levels. \n$\alpha_{max}$ is the largest $\alpha$ such that $(\alpha,1)_1$-core exists. \n$\beta_{max}$ is the largest $\beta$ such that $(1,\beta)_1$-core exists. \n$\tau_{max}$ is the largest $\tau$ such that $(1,1)_{\tau}$-core exists. 
\n\n\\subsection{Effectiveness Evaluation}\n\\begin{figure}[htb]\n\\centering\n\\scalebox{0.8}{\n\\includegraphics[width=0.46\\textwidth]{bi_coefficient.pdf}\n\\includegraphics[width=0.44\\textwidth]{density.pdf}\n}\n\\caption{The cohesive metrics comparisons}\n\\label{fig:metric}\n\\end{figure}\n\nIn this section, we validate the effectiveness of the $\\tau$-strengthened $(\\alpha,\\beta)$-core\\xspace model. \nFirst, we compute some cohesive metrics for $(\\alpha,\\beta)$-core\\xspace and $(\\alpha,\\beta)_{\\tau}$-core\\xspace. \nThen, we conduct a case study on dataset \\texttt{DBLP-2019}. \n\n\\noindent\n{\\bf Compare $(\\alpha,\\beta)$-core\\xspace with $\\tau$-strengthened $(\\alpha,\\beta)$-core\\xspace.}\nWe compare the graph density and bipartite clustering coefficient for $(\\alpha,\\beta)$-core\\xspace and $(\\alpha,\\beta)_{\\tau}$-core\\xspace. \nThe graph density \\cite{sariyuce2018peeling} of a bipartite graph is calculated as $|E|\/(|U|\\times|L|)$, where $|E|$ is the number of edges and $|U|$ and $|L|$ are the number of upper and lower vertices. \nThe bipartite clustering coefficient \\cite{aksoy2017measuring} is a cohesive measurement of bipartite networks, which is calculated as $4 \\times \\,\\mathbin{\\resizebox{0.1in}{!}{\\rotatebox[origin=c]{90}{$\\Join$}}}_G$\/$\\mathbin{\\resizebox{0.098in}{!}{\\rotatebox[origin=c]{270}{$\\ltimes$}}}_G$ where $\\mathbin{\\resizebox{0.098in}{!}{\\rotatebox[origin=c]{270}{$\\ltimes$}}}_G$ and $\\,\\mathbin{\\resizebox{0.1in}{!}{\\rotatebox[origin=c]{90}{$\\Join$}}}_G$ are the number of caterpillars (three-path) and the number of butterflies in graph $G$ respectively. \nIn Figure \\ref{fig:metric}, the black bars with $\\tau$=$0$, represents the $(\\alpha,\\beta)$-core\\xspace. \nThe shaded bars and the white bars represent the $(\\alpha,\\beta)_{\\tau}$-core\\xspace with $\\tau$ being $50$ and $100$ respectively. 
\nThe engagement constraints $\alpha$ and $\beta$ are set to $0.6 \delta$ and $0.4 \delta$ respectively, where $\delta$ is the graph degeneracy. As we can see, on all four datasets, the $(\alpha,\beta)_{\tau}$-core\xspace has a higher density and a higher bipartite clustering coefficient than $(\alpha,\beta)$-core\xspace. As $\tau$ increases, both metrics increase as well. \nThis means that with higher values of $\tau$, we can find subgraphs within $(\alpha,\beta)$-core\xspace of higher density and cohesiveness. \n\n\begin{figure}[h!]\n\centering \n\includegraphics[width=0.7\textwidth]{case_study.png}\n\vspace{-2mm}\n\caption{Case study on DBLP-2019}\n\label{fig:case}\n\end{figure}\n\n\noindent\n{\bf Case study.}\nWe further evaluate the effectiveness of our model through a case study on the DBLP-2019 dataset. The graph in Figure \ref{fig:case} is an $(\alpha,\beta)$-core\xspace ($\alpha\textnormal{=}7,\beta\textnormal{=}8$). Given $\tau$=$50$, $(\alpha,\beta)_{\tau}$-core\xspace excludes the relatively sparse group represented by the light blue lines. The \textit{k}-bitruss ($k$=$56$) represented by the red lines is contained in $(\alpha,\beta)_{\tau}$-core\xspace. The black lines are the edges included in $(\alpha,\beta)_{\tau}$-core\xspace but not in \textit{k}-bitruss. The $(\alpha,\beta)_{\tau}$-core\xspace and \textit{k}-bitruss involve the same authors, but \textit{k}-bitruss removes the second-to-last paper on the upper level to enforce the tie-strength constraint.\nFigure \ref{fig:case} implies that:\n(1) Although $(\alpha,\beta)$-core\xspace models vertex engagement via degrees, it fails to distinguish between edges with different tie strengths. \n(2) \textit{k}-bitruss models tie strength via butterfly counting, but it forcefully excludes the weak ties between strongly engaged nodes, which leads to imprecise estimation of tie strength and failure to include important nodes and their incident edges. 
\n(3) $(\alpha,\beta)_{\tau}$-core\xspace considers both vertex engagement and tie strength. Its flexibility allows it to capture unique structures that better resemble the communities in reality. \n\n\begin{figure}[htb]\n\centering\n\includegraphics[width=0.6\textwidth]{index_build.pdf}\n\caption{The $I_{\alpha,\beta,\tau}$ construction time}\n\label{fig:index.compare}\n\end{figure}\n\begin{table*}[tbh]\n\centering\n\scalebox{1.0}{\n\begin{tabular}{c|cccc|cccc}\n\noalign{\hrule height 1.23pt}\n\multirow{2}*{Data} & \multicolumn{4}{c|}{Index size (MB)} & \multicolumn{4}{c}{Index construction time (sec)} \\\n\cline{2-9} \n~ & $I_{\alpha,\beta,\tau}$ & $I_{\alpha,\beta}$ & $I_{\beta,\tau}$ & $I_{\alpha,\tau}$ & $I_{\alpha,\beta,\tau}$ & $I_{\alpha,\beta}$ & $I_{\beta,\tau}$ & $I_{\alpha,\tau}$ \\\n\noalign{\hrule height 0.7pt}\nCond-mat & $1.29$ & $0.78$ & $0.26$ & $0.50$ & $5.11$ & $0.11$ & $0.68$ & $1.37$ \\ \nWriters & $2.14$ & $2.24$ & $0.93$ & $0.24$ & $18.81$ & $0.24$ & $5.01$ & $1.31$ \\\nProducers & $5.55$ & $3.16$ & $0.37$ & $2.43$ & $79.43$ & $0.38$ & $4.99$ & $28.37$ \\\nMovies & $5.26$ & $3.51$ & $2.01$ & $0.53$ & $66.18$ & $0.46$ & $34.29$ & $6.26$ \\ \nLocation & $68.96$ & $4.15$ & $33.22$ & $0.75$ & $77.40$ & $0.36$ & $49.54$ & $9.19$ \\\nBookCrossing & $33.56$ & $5.58$ & $3.07$ & $9.32$ & $342.73$ & $1.02$ & $48.82$ & $75.29$ \\ \nTeams & $-$ & $18.42$ & $114.17$ & $2.44$ & time out & $1.94$ & $944.10$ & $127.27$ \\ \nWiki-en & $-$ & $46.66$ & $945.91$ & $10.92$ & time out & $9.08$ & $3850.51$ & $680.57$ \\ \nAmazon & $-$ & $72.96$ & $74.47$ & $129.21$ & time out & $17.18$ & $3598.13$ & $4731.29$ \\\nDBLP & $-$ & $112.60$ & $29.13$ & $159.74$ & time out & $19.44$ & $559.76$ & $3000.08$ \\\n\noalign{\hrule height 1.23pt}\n\end{tabular}\n}\n\vspace{-2mm}\n\caption{The sizes of the indexes and their build 
time.}\n\label{tab:index_size}\n\end{table*}\n\n\subsection{Performance Evaluation}\nIn this part, we evaluate the efficiency of the index construction algorithms and explore appropriate hyperparameter settings for the feed-forward neural network that $Q_{hb}$ depends on.\nThen, we evaluate the efficiency of the query processing algorithms that retrieve $(\alpha,\beta)_{\tau}$-core\xspace. \n\n\noindent\n{\bf Index construction.} \nFirst, we compare the build times of $I_{\alpha,\beta,\tau},I_{\alpha,\beta},I_{\beta,\tau}$ and $I_{\alpha,\tau}$ on all datasets, as reported in Table \ref{tab:index_size}.\nThe reported build times correspond to the index construction algorithms with the optimization techniques in Section $6$. \nAs shown in Figure \ref{fig:index.compare}, although the computation-sharing and \texttt{Bloom-Edge}-index-based optimizations effectively reduce the running time, $I_{\alpha,\beta,\tau}$ still cannot be built within the time limit on \texttt{Teams}, \texttt{Wiki-en}, \texttt{Amazon}, or \texttt{DBLP}. \nThis is because $I_{\alpha,\beta,\tau}$ stores all the decomposition results; it takes the longest time to build, followed by $I_{\beta,\tau}$, $I_{\alpha,\tau}$, and $I_{\alpha,\beta}$.\nWhen the graph is denser on the upper level, $I_{\alpha,\tau}$ takes longer to construct than $I_{\beta,\tau}$.\nFor example, on \texttt{DBLP}, where $d_{max}(L)\xspace\textnormal{=}119 < d_{max}(U)\xspace\textnormal{=}951$, $I_{\beta,\tau}$ is built within $10$ minutes while $I_{\alpha,\tau}$ takes $50$ minutes. \nThe construction of $I_{\alpha,\beta}$ is the fastest as it does not involve any butterfly counting or support updating. \nIn addition, we report the index sizes in Table \ref{tab:index_size}. \nThe size of $I_{\alpha,\beta,\tau}$ is larger than those of $I_{\alpha,\beta}$, $I_{\beta,\tau}$, and $I_{\alpha,\tau}$. 
\nIn summary, the \\gratings that the hybrid computation algorithm depends on are space-efficient and can be built within a reasonable time.\\\\\n\n\\begin{figure}[htb]\n\\centering\n\\scalebox{0.8}{\n\\includegraphics[width=0.46\\textwidth]{optimizer.pdf}\n\\includegraphics[width=0.46\\textwidth]{hid.pdf}\n}\n\\caption{Effects of hyperparameters}\n\\label{fig:tune}\n\\end{figure}\n\n\\noindent\n{\\bf Tuning hyperparameters for the neural network.}\nWhen training the neural network for the hybrid computation algorithm, we choose the hyperparameters (the type of optimizer and the size of the hidden layer) that minimize the time-sensitive error in cross-validation.\nFor each graph $G$, we fix the size of the hidden layer at $50$, test \\textit{stochastic gradient descent}, the \\textit{L-BFGS method}, and \\textit{Adam}, and compare their time-sensitive errors. \nAs shown in Figure \\ref{fig:tune}, the L-BFGS method consistently outperforms the other methods and is thus chosen for our model. \nThen, we explore the effect of the size of the hidden layer on our model. \nWe report the change of the time-sensitive error w.r.t.\\ the hidden layer size on the dataset \\texttt{DBpedia-location}; the trends are similar on the other datasets.\nAs shown in the plot, $30$ hidden units are enough for the classifiers built on most tested datasets; beyond this point, additional hidden units have little effect on the performance of the model. 
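As an illustration, the two-step search described above can be sketched with scikit-learn's \texttt{MLPClassifier} as a stand-in for our feed-forward network. The synthetic dataset and plain cross-validated accuracy below are hypothetical placeholders for the real query features and the time-sensitive error:

```python
# Minimal sketch of the hyperparameter search (assumptions: synthetic
# data via make_classification and accuracy scoring replace the real
# query features and the time-sensitive error used in the paper).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Hypothetical training data: query features -> fastest index (3 labels)
X, y = make_classification(n_samples=200, n_features=6, n_informative=4,
                           n_classes=3, random_state=0)

# Step 1: fix the hidden layer at 50 units and compare the optimizers.
scores = {}
for solver in ('sgd', 'lbfgs', 'adam'):
    clf = MLPClassifier(hidden_layer_sizes=(50,), solver=solver,
                        max_iter=1000, random_state=0)
    scores[solver] = cross_val_score(clf, X, y, cv=3).mean()
best_solver = max(scores, key=scores.get)

# Step 2: keep the chosen optimizer and vary the hidden-layer size.
for hidden in (10, 20, 30, 50, 100):
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), solver=best_solver,
                        max_iter=1000, random_state=0)
    print(hidden, round(cross_val_score(clf, X, y, cv=3).mean(), 3))
```

In our actual setting, the labels come from timing the three index-based algorithms on training queries; the search above is what yields the L-BFGS optimizer and a hidden layer of around $30$--$50$ units.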
\n\n\\begin{table*}[t!]\n\\centering\n\\scalebox{1.0}{\n\\begin{tabular}{ccccccc}\n\\noalign{\\hrule height 1.23pt}\nDataset & $Q_{bs}$ & $Q_{\\alpha,\\beta,\\tau}$ & $Q_{\\alpha,\\beta}$ & $Q_{\\beta,\\tau}$ & $Q_{\\alpha,\\tau}$ & $Q_{hb}$ \\\\ \n\\noalign{\\hrule height 0.7pt}\nCond-mat & $0.139$ & $0.004$ & $0.030$ & $0.008$ & $0.009$ & $0.006$ \\\\\nWriters & $0.406$ & $0.011$ & $0.069$ & $0.011$ & $0.009$ & $0.006$ \\\\\nProducers & $0.560$ & $0.022$ & $0.200$ & $0.047$ & $0.037$ & $0.029$ \\\\\nMovies & $0.871$ & $0.023$ & $0.235$ & $0.028$ & $0.041$ & $0.027$ \\\\\nLocation & $11.782$ & $0.285$ & $7.005$ & $1.755$ & $2.895$ & $0.234$\\\\\nBookCrossing & $44.397$ & $0.147$ & $13.271$ & $2.935$ & $4.794$ & $1.722$ \\\\\nTeams & $59.510$ & $-$ & $10.911$ & $3.080$ & $2.778$ & $1.394$ \\\\\nWiki-en & $128.536$ & $-$ & $28.589$ & $2.638$ & $13.613$ & $1.775$\\\\\nAmazon & $973.026$ & $-$ & $153.085$ & $88.673$ & $59.228$ & $16.432$ \\\\\nDBLP & $61.101$ & $-$ & $1.843$ & $1.321$ & $1.055$ & $0.269$ \\\\\n\\noalign{\\hrule height 1.23pt}\n\\end{tabular}\n}\n\\vspace{-2mm}\n\\caption{Average response time of all query processing algorithms.}\n\\label{tab:summary}\n\\end{table*}\n\\begin{figure}[h!]\n\\centering \n\\includegraphics[trim= -40 0 0 0 ,width=0.8\\textwidth]{legen.pdf}\n\n\\subfigure[Location (vary $\\alpha$)]{\n\\label{Fig.pr.a}\n\\includegraphics[width=0.3\\textwidth]{PR_a.pdf}}\n\\subfigure[Location (vary $\\beta$)]{\n\\label{Fig.pr.b}\n\\includegraphics[width=0.3\\textwidth]{PR_b.pdf}}\n\\subfigure[Location (vary $\\tau$)]{\n\\label{Fig.pr.g}\n\\includegraphics[width=0.3\\textwidth]{PR_tau.pdf}}\n\n\\subfigure[BookCrossing (vary $\\alpha$)]{\n\\label{Fig.pa.a}\n\\includegraphics[width=0.3\\textwidth]{Pa_a.pdf}}\n\\subfigure[BookCrossing (vary $\\beta$)]{\n\\label{Fig.pa.b}\n\\includegraphics[width=0.3\\textwidth]{Pa_b.pdf}}\n\\subfigure[BookCrossing (vary $\\tau$)]{\n\\label{Fig.pa.g}\n\\includegraphics[width=0.3\\textwidth]{Pa_tau.pdf}}\n\\vspace{-2mm}\n\\caption{Effects 
of varying parameters on each query processing algorithm.}\n\\label{Fig.queries}\n\\end{figure}\n\n\\noindent\n{\\bf Average query time of $Q_{\\alpha,\\beta,\\tau}$, $Q_{\\alpha,\\beta}$, $Q_{\\beta,\\tau}$, $Q_{\\alpha,\\tau}$, and $Q_{hb}$.} \nFor each algorithm, we report the average response time of $50$ randomly generated queries on each dataset in Table \\ref{tab:summary}.\nAs expected, all index-based algorithms outperform $Q_{bs}$, as $Q_{bs}$ computes the $(\\alpha,\\beta)_{\\tau}$-core\xspace directly from the input graph.\nAmong them, $Q_{\\alpha,\\beta,\\tau}$ performs the best on most datasets, as it fetches the vertices from $I_{\\alpha,\\beta,\\tau}$ in optimal time and then restores the edges. \nHowever, the long build time of $I_{\\alpha,\\beta,\\tau}$ prevents $Q_{\\alpha,\\beta,\\tau}$ from scaling to larger graphs such as \\texttt{Teams}, \\texttt{Wiki-en}, \\texttt{Amazon}, or \\texttt{DBLP}. \nThe performances of $Q_{\\alpha,\\beta}$, $Q_{\\beta,\\tau}$, and $Q_{\\alpha,\\tau}$ vary considerably across datasets. \nOn average, $Q_{\\alpha,\\beta}$ is slower than $Q_{\\beta,\\tau}$ and $Q_{\\alpha,\\tau}$, because it needs to delete many edges from the $(\\alpha,\\beta)$-core\xspace to obtain the $(\\alpha,\\beta)_{\\tau}$-core\xspace, especially when $\\tau$ is large. \nAs $Q_{hb}$ is trained to pick the fastest algorithm from $\\{ Q_{\\alpha,\\beta},Q_{\\beta,\\tau},Q_{\\alpha,\\tau} \\}$, it outperforms these three algorithms on average on all datasets.\nIn addition, $Q_{hb}$ outperforms the online computation algorithm $Q_{bs}$ by up to two orders of magnitude.\n\n\\noindent\n{\\bf Effects of varying $\\alpha$, $\\beta$, and $\\tau$.} \nWe investigate the effects of varying $\\alpha$, $\\beta$, and $\\tau$ on each query processing algorithm. \nWe input three types of query streams to all algorithms, each of which increments one of $\\alpha$, $\\beta$, and $\\tau$ while the other two parameters are generated randomly. 
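The query streams just described can be sketched as follows; this is a minimal illustration, and the parameter upper bounds are hypothetical placeholders rather than the ranges used in our experiments:

```python
import random

# Sketch: build three query streams, each incrementing one of
# alpha, beta, tau while the other two are drawn at random.
# BOUNDS holds hypothetical placeholder upper bounds.
BOUNDS = {'alpha': 20, 'beta': 20, 'tau': 20}

def query_stream(varied, rng=random):
    """Yield one query per value of the incremented parameter."""
    for v in range(1, BOUNDS[varied] + 1):
        query = {p: rng.randint(1, b) for p, b in BOUNDS.items()}
        query[varied] = v  # the parameter swept by this stream
        yield query

# One stream per parameter, as in the experiments
streams = {p: list(query_stream(p)) for p in BOUNDS}
```

Feeding each stream to every algorithm isolates the effect of one parameter while averaging out the other two.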
\nAs the trends are similar, we only report the results on \\texttt{Location} and \\texttt{BookCrossing} in Figure \\ref{Fig.queries}.\nEach data point in the figure represents the average response time of $50$ random queries.\nAs expected, the index-based query processing algorithms always outperform $Q_{bs}$.\nThe performance of $Q_{\\alpha,\\beta,\\tau}$ is largely unaffected by the varying parameters, while the \\grating based algorithms are highly sensitive to them. \nEach of $Q_{\\alpha,\\beta}$, $Q_{\\beta,\\tau}$, and $Q_{\\alpha,\\tau}$ tends to perform better when the increased parameter results in a smaller subgraph in the index. \nFor instance, $Q_{\\beta,\\tau}$ performs better as $\\beta$ or $\\tau$ increases. \nIn contrast, the hybrid computation algorithm $Q_{hb}$ has very stable performance, as it stays close to, and in many cases outperforms, the fastest of $Q_{\\alpha,\\beta}$, $Q_{\\beta,\\tau}$, and $Q_{\\alpha,\\tau}$. \nIn summary, the hybrid computation algorithm $Q_{hb}$ with a well-trained classifier can adapt its query processing strategy to different parameters and datasets.\n\n\\section{Conclusion}\nIn this paper, we introduce a novel cohesive subgraph model, the $\\tau$-strengthened $(\\alpha,\\beta)$-core\xspace, which is the first to consider both tie strength and vertex engagement on bipartite graphs.\nWe propose a decomposition-based index $I_{\\alpha,\\beta,\\tau}$ that can retrieve the vertices of any $(\\alpha,\\beta)_{\\tau}$-core\xspace in optimal time. \nWe also apply computation-sharing and \\texttt{BE}-\\texttt{Index}\xspace-based optimizations to speed up the index construction process of $I_{\\alpha,\\beta,\\tau}$. 
\nTo balance space-efficient index construction and time-efficient query processing, we propose a learning-based hybrid computation paradigm.\nUnder this paradigm, we introduce three \\gratings and train a feed-forward neural network to predict which index is the best choice to process an incoming $(\\alpha,\\beta)_{\\tau}$-core\xspace query. \nThe efficiency of the proposed algorithms and the effectiveness of our model are verified through extensive experiments. \n\n