diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzdtkg" "b/data_all_eng_slimpj/shuffled/split2/finalzzdtkg" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzdtkg" @@ -0,0 +1,5 @@ +{"text":"\n\\section{Introduction}\n\nThe core task of Collaborative Filtering (CF) in recommendation is learning the representations of users and items from historical interactions, such that the cosine or Euclidean distance between two representations reflects their semantic similarity. \nHowever, one of the main challenges in learning high-quality representations in CF is the popularity bias issue in data \\cite{unfairness,Crowd,MostPop,Beyond_Parity}.\nTypically, the training data distribution is long-tailed, \\emph{e.g., } a few head items occupy most of the interactions, whereas the majority of tail items are unpopular and receive relatively little attention.\nDue to this imbalanced property of the training data, the resultant CF model will inherit the popularity bias and, in many cases, amplify the bias by over-recommending the head items and under-recommending the tail items. \nAs a result, popularity bias yields biased representations with poor generalization ability, leading recommendations to deviate from users' actual interests.\n\n\nMotivated by the existence of such biases, popularity debiasing methods that aim to lift the tail performance have been studied extensively. \nUnfortunately, most prevalent debiasing techniques focus on the trade-off between the head and tail evaluations, \\emph{e.g., } post-processing re-ranking \\cite{Calibration,RankALS,rescale,FPC,PC_Reg}, balanced training loss \\cite{ESAM,ALS+Reg,Regularized_Optimization,PC_Reg}, sample re-weighting \\cite{ips,IPS-C,IPS-CN,Propensity_SVM-Rank,YangCXWBE18,UBPR}, and head bias removal by causal inference \\cite{CausE,PDA,DecRS,MACR}. 
\nWorse still, many of them rely on assumptions that are infeasible in reality, \\emph{e.g., } that the balanced test distribution is known in advance to guide hyperparameter adjustment \\cite{DICE, MACR}, or that a small unbiased data subset exists for training an unbiased model \\cite{KDCRec,CausE}. \nConsequently, these methods encounter the difficulty that raising tail performance sacrifices head performance and leads to a severe overall performance drop.\nWe argue that the head and tail trade-off results in low-quality representations, which hardly improve the generalization ability of debiasing models.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.9\\linewidth]{figures\/intro.pdf}\n \\vspace{-5pt}\n \\caption{Illustration of separable and discriminative representations. Dots represent item representations, while the star refers to the user representation. Positive and negative samples are in red and blue, respectively.}\n \\label{fig:intro}\n \\vspace{-15pt}\n\\end{figure}\n\nWe conjecture that a debiased model truly improves long-tailed performance in CF by learning better user\/item representations, rather than by playing a head-tail performance trade-off. 
\nFor the recommendation task, users\/items are required to be mapped into a representation space that is not only separable but also discriminative,\nsince it is impractical to pre-collect all the users' positive responses.\nTaking an implicit-feedback recommender as an example, it only leverages positive user-item interactions, such as clicks, to learn personalized user preferences, and the lack of such clicks does not indicate a negative response from the users.\nSuch a positive-unlabeled problem requires the learned representations to be discriminative and generalized enough to predict users' preferences without true labels.\nThe two critical properties for discriminative representations are high compactness for positive samples and large dispersion for negative samples, as shown in Figure \\ref{fig:intro}. \nHowever, the softmax loss \\cite{softmax} only encourages separability to preserve maximal information \\cite{alignment_uniformity}.\n\nThe purpose of this paper, therefore, is to modify the softmax loss into a more general and highly efficient BC loss, short for Bias-margin Contrastive loss, to obtain discriminative representations in the recommendation task.\nSpecifically, we impose an interaction-wise bias-dependent angular margin between every positive user\/item pair and the negative samples.\nThe size of the angular margin can be quantitatively controlled by a bias degree extractor.\nIf the user-item pair's popularity bias degree is low, which means it is a hard example, we add a larger margin to strongly squeeze the tightness of those under-represented samples (usually inactive users with tail items) and improve generalization. 
\nIn contrast, when the value of the bias degree extractor is high, \\emph{e.g., } the user-item interaction can be correctly predicted using only the popularity information of the user-item pair, we adjust the softmax loss with no margin or a smaller margin to reduce the importance of these biased examples (in most cases, some head user-item pairs).\nBy optimizing BC loss, the user\/item representations become more discriminative through pulling positive representations together and pushing negative representations apart.\nAs a result, BC loss can significantly improve both head and tail performance.\n\nThe BC loss has several desirable advantages.\nFirst, it has a clear and intuitive geometric interpretation, as elaborated in Figure \\ref{fig:intro}.\nFrom a geometric view, we impose a popularity bias-related margin on a hypersphere manifold to enlarge the distance between dissimilar representations.\nSecond, the interaction-wise adaptive margin forces the optimization to concentrate on the hard examples, a simple but efficient hard example mining strategy. \nThe implicit effect of the hard example mining strategy is to encourage the model to shift its attention to more informative hard samples instead of relying on biased samples.\nThird, theoretical analysis in Section 2 reveals that BC loss tends to learn a low-entropy cluster for positive samples (\\emph{i.e., } high compactness of similar items\/users) and a high-entropy feature space (\\emph{i.e., } large dispersion of dissimilar items\/users). 
\nIn this way, BC loss benefits both popularity bias alleviation and the general recommendation task.\n\n\nOur main contributions can be summarized as follows.\n\\begin{itemize}[leftmargin=*]\n\\item To the best of our knowledge, we are among the first to reduce the popularity bias in CF by enhancing the discriminative power of the loss function.\n\\item We devise a generalized softmax loss - BC loss, equipped with an intuitive geometric interpretation, an efficient hard example mining strategy, and a clear theoretical analysis.\n\\item Promising results demonstrate that BC loss not only alleviates popularity bias amplification effectively but also improves both head and tail performance in both imbalanced and balanced evaluations.\n\\end{itemize}\nBC loss opens up the possibility of constructing a highly discriminative loss function for traditional recommendation.\nConsidering the theoretical guarantee and empirical effectiveness of BC loss, we recommend using BC loss not only as a debiasing technique but also as a standard loss in CF.\n\n\n\n\n\n\n\n\n\\section{Methodology}\n\n\nPersonalized item recommendation is usually based on implicit feedback \\cite{softmax}, whose goal is to retrieve a subset of interesting items for a given user from a large catalog of items.\nWe consider the standard formulation of implicit feedback recommendation as a top-n ranking problem.\nGiven a historical interaction dataset $\\Set{O}^{+}=\\{(u,i)|y_{ui}=1\\}$ with $u \\in \\Space{U}$ a user, $i \\in \\Space{I}$ an item, and $y_{ui}$ indicating that user $u$ has adopted item $i$ before, a scoring function $\\hat{y}: \\Space{U}\\times \\Space{I} \\rightarrow \\Space{R}$ is optimized to reflect the preference of the user for the item. 
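As an illustrative sketch of this top-n formulation (the tiny embedding tables and the dot-product scorer below are hypothetical stand-ins for a learned recommender, not the model studied here):

```python
import numpy as np

# Hypothetical stand-ins for learned user/item representations.
rng = np.random.default_rng(0)
n_users, n_items, d = 4, 6, 8
user_emb = rng.normal(size=(n_users, d))
item_emb = rng.normal(size=(n_items, d))

def y_hat(u, i):
    """Scoring function y_hat: U x I -> R reflecting user u's preference for item i."""
    return float(user_emb[u] @ item_emb[i])

def top_n(u, n=3):
    """Top-n ranking: the n items with the highest predicted preference for user u."""
    return sorted(range(n_items), key=lambda i: -y_hat(u, i))[:n]
```

In practice the retrieved list would also exclude items the user has already interacted with; that bookkeeping is omitted here for brevity.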
\n\nFor a single user-item pair, a recommender utilizes an item encoder $\\phi: \\Space{I} \\rightarrow \\Space{R}^d$, a user encoder $\\psi: \\Space{U} \\rightarrow \\Space{R}^d$, and a similarity function $s: \\Space{R}^d \\times \\Space{R}^d \\rightarrow \\Space{R}$. \nThese functions are composed as follows: \n\\begin{equation*}\n \\hat{y}(u,i) = s(\\psi(u), \\phi(i))\n\\end{equation*}\nThe choice of item\/user encoder, the backbone of the recommender, can be roughly categorized into ID-based (\\emph{e.g., } MF \\cite{BPR,SVD++}, NMF \\cite{NMF}, CMN \\cite{cmn}), history-based (\\emph{e.g., } SVD++ \\cite{SVD++}, FISM \\cite{FISM}, MultVAE \\cite{MultVAE}), and graph-based models (\\emph{e.g., } NGCF \\cite{NGCF}, PinSage \\cite{PinSage}, LightGCN \\cite{LightGCN}). \nMoreover, the similarity function $s(\\cdot)$ can be a dot product, cosine similarity, or a more complex multilayer perceptron (MLP) \\cite{NMF}.\n\nTo this end, we conduct our experiments on the representative conventional backbone MF and the SOTA backbone LightGCN. \nFurthermore, cosine similarity is used as the similarity function, as suggested in \\cite{RendleKZA20}, for better interpretability, which is denoted as\n\\begin{equation*}\n s(\\psi(u), \\phi(i))=\\frac{\\langle\\psi(u), \\phi(i)\\rangle}{\\norm{\\psi(u)} \\cdot \\norm{\\phi(i)}} \t\\doteq \\cos{(\\hat{\\theta}_{ui})}\n\\end{equation*}\nin which $\\hat{\\theta}_{ui}$ is the angle between the representation of a user $\\psi(u)$ and an item $\\phi(i)$.\n\n\\textbf{Classical Learning Strategy.}\nThe classical learning strategies of collaborative filtering (CF) models can be classified into three groups: point-wise loss (\\emph{e.g., } binary cross-entropy \\cite{CrossEntropy}, mean square error \\cite{SVD++}), pair-wise loss (\\emph{e.g., } BPR \\cite{BPR}, WARP \\cite{WARP}), and softmax loss \\cite{softmax}. \nPoint-wise and pair-wise losses are the long-standing and most-used objective functions in CF. 
\nHowever, large-scale studies theoretically and empirically confirm that both point-wise and pair-wise losses will inevitably propagate more information towards head user-item pairs, which causes gradient shifts and amplifies popularity bias \\cite{PC_Reg,unfairness,tang2020investigating}. \nOn the other hand, recent years have witnessed a growing academic interest in using softmax loss in recommender systems \\cite{sgl,CLRec,S3-Rec}, due to its wide success in self-supervised contrastive learning. \n\nHence, in this paper, we set sampled softmax loss as the learning objective and minimize it to train the parameters over a dataset of size $|\\Set{O}^{+}|$:\n\\begin{gather*}\n \\min\\mathbf{\\mathop{\\mathcal{L}}}_0 = -\\sum_{(u,i)\\in\\Set{O}^{+}}\\log\\frac{\\exp{(\\cos(\\hat{\\theta}_{ui})\/\\tau)}}{\\exp{(\\cos(\\hat{\\theta}_{ui})\/\\tau)}+\\sum_{j\\in\\Set{N}_{u}}\\exp{(\\cos(\\hat{\\theta}_{uj})\/\\tau)}}\n\\end{gather*}\nwhere $(u,i)\\in\\Set{O}^{+}$ is one observed interaction of user $u$, while $\\Set{N}_{u}=\\{j|y_{uj}=0\\}$ is the set of sampled unobserved items that $u$ did not interact with before; $\\tau$ is the hyper-parameter known as the temperature in softmax \\cite{tau}. \n\nSoftmax loss has been outstandingly successful in various fields due to its inherent alignment property, which encourages similar features to cohere together \\cite{alignment,WuXYL18}. \nAs a widespread self-supervised representation learning loss, it also encourages the learned features to distribute uniformly, preserving maximal information for downstream tasks \\cite{alignment_uniformity,SaunshiPAKK19}. \nUnfortunately, modifying softmax loss to enhance discriminative representations as well as reduce popularity bias has seldom been studied. \nTherefore, the goal of this paper is to construct a more generic and broadly applicable variant of softmax loss in CF tasks, which can fundamentally improve the long-tail performance in CF. 
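The sampled softmax objective above can be sketched numerically as follows; the cosine values and temperature are illustrative, and the max-subtraction is only a standard numerical-stability trick, not part of the formulation:

```python
import numpy as np

def sampled_softmax_term(cos_pos, cos_negs, tau=0.1):
    """One summand of L_0: -log( exp(cos(theta_ui)/tau) /
    (exp(cos(theta_ui)/tau) + sum_j exp(cos(theta_uj)/tau)) )."""
    logits = np.concatenate(([cos_pos], np.asarray(cos_negs, dtype=float))) / tau
    logits -= logits.max()  # subtract max for numerical stability
    return float(-np.log(np.exp(logits[0]) / np.exp(logits).sum()))

# The term shrinks as the positive pair becomes more similar than the negatives.
loss_hard = sampled_softmax_term(0.1, [0.2, 0.3])
loss_easy = sampled_softmax_term(0.9, [0.2, 0.3])
```

With one positive and one negative at equal similarity the term reduces to $-\log(1/2) \approx 0.693$, which is a quick sanity check on any implementation.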
\n\n\\subsection{Bias-Margin Contrastive Loss}\n\n\\textbf{Popularity bias extractor.} \nHow do we know if an example exhibits popularity bias? \nHow could we quantify the degree of popularity bias for a single user-item pair?\nAlthough it is impractical to directly measure the distribution shift without accessing the test data, CF models are unlikely to predict well given only biased information.\nSpecifically, the biased information could be the user popularity statistic $p_u \\in \\Space{P}$, the number of historical items that user $u$ interacted with before, and the item popularity statistic $p_i \\in \\Space{P}$, the number of observed interactions that item $i$ is involved in. \nIf the CF models could still correctly predict the user's preference using only $p_u$ and $p_i$, it is likely because the CF models have learned the popularity bias.\nMoreover, the extent of the prediction recovery will capture the degree of popularity bias.\n\nSimilar to a standard CF model, the popularity bias extractor can be formalized as a function $\\hat{y}_b: \\Space{P}\\times \\Space{P} \\rightarrow \\Space{R}$ and composed of an item popularity encoder $\\phi_b:\\Space{P} \\rightarrow \\Space{R}^d$, a user popularity encoder $\\psi_b: \\Space{P} \\rightarrow \\Space{R}^d$, and a cosine similarity function $s: \\Space{R}^d \\times \\Space{R}^d \\rightarrow \\Space{R}$, which is defined as\n\\begin{equation*}\n \\hat{y}_b(p_u,p_i) = s(\\psi_b(p_u), \\phi_b(p_i)) \\doteq \\cos(\\hat{\\xi}_{ui})\n\\end{equation*}\nin which $\\hat{\\xi}_{ui}$ is the angle between the user popularity embedding $\\psi_b(p_u)$ and the item popularity embedding $\\phi_b(p_i)$.\n\nThe popularity bias extractor $\\hat{y}_b$ can be trained using the sampled softmax loss $\\mathbf{\\mathop{\\mathcal{L}}}_b$:\n\\begin{gather} \\label{equ:bias_extractor}\n \\min\\mathbf{\\mathop{\\mathcal{L}}}_b = 
-\\sum_{(u,i)\\in\\Set{O}^{+}}\\log\\frac{\\exp{(\\cos(\\hat{\\xi}_{ui})\/\\tau)}}{\\exp{(\\cos(\\hat{\\xi}_{ui})\/\\tau)}+\\sum_{j\\in\\Set{N}_{u}}\\exp{(\\cos(\\hat{\\xi}_{uj})\/\\tau)}}\n\\end{gather}\n\nThis optimization forces the popularity bias extractor $\\hat{y}_b$ to reconstruct the historical interactions and assess the popularity degree from an interaction perspective.\nThose interactions that cannot be predicted well using only popularity statistics are hard examples, which are more informative for representation learning.\nIn other words, the popularity bias extractor helps identify the hard examples in the training set and carefully measures how hard each user-item pair is.\n\n\\textbf{Bias-margin Contrastive Loss (BC loss).} \nInstead of designing a weighted combination of softmax loss and a popularity bias penalty, we propose a more elegant way to learn discriminative representations by adding a popularity bias-related angular margin.\nOur Bias-margin Contrastive Loss (BC loss) is formulated as:\n\\begin{gather}\\label{equ:bc_loss}\n \\mathbf{\\mathop{\\mathcal{L}}}_{BC} = -\\sum_{(u,i)\\in\\Set{O}^{+}}\\log\\frac{\\exp{(\\cos(\\hat{\\theta}_{ui}+M_{ui})\/\\tau)}}{\\exp{(\\cos(\\hat{\\theta}_{ui}+M_{ui})\/\\tau)}+\\sum_{j\\in\\Set{N}_{u}}\\exp{(\\cos(\\hat{\\theta}_{uj})\/\\tau)}}\n\\end{gather}\nwhere $M_{ui}$ is the interaction-wise additive angular margin corresponding to user $u$ and item $i$. 
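One summand of the BC loss can be sketched as below; the angles, margin value, and temperature are illustrative placeholders, and the margin is taken as a given input here:

```python
import numpy as np

def bc_loss_term(theta_pos, theta_negs, m_ui, tau=0.1):
    """One summand of L_BC: the angular margin M_ui is added to the positive
    angle theta_ui before taking the cosine; negative angles are unchanged."""
    logits = np.concatenate(([np.cos(theta_pos + m_ui)],
                             np.cos(np.asarray(theta_negs, dtype=float)))) / tau
    logits -= logits.max()  # subtract max for numerical stability
    return float(-np.log(np.exp(logits[0]) / np.exp(logits).sum()))

# With zero margin the term reduces to the sampled softmax term; a larger
# margin shrinks cos(theta_ui + M_ui) and makes the same example costlier.
loss_no_margin = bc_loss_term(0.8, [1.5, 2.0], m_ui=0.0)
loss_margin    = bc_loss_term(0.8, [1.5, 2.0], m_ui=0.5)
```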
\n\nWith a larger $M_{ui}$, the additive angular margin becomes larger and the learning objective becomes harder.\nIntuitively, we prefer a larger angular margin for hard examples, which helps the CF model concentrate on them to compensate for biases; more detailed illustrations are in Section \\ref{sec:three_advantages}.\n\nIn this study, we constrain the margin from a popularity bias view:\n\\begin{gather}\n M_{ui} = \\min \\{\\hat{\\xi}_{ui}, \\pi - \\hat{\\theta}_{ui}\\}\n\\end{gather}\nwhere the purpose of the upper bound $\\pi - \\hat{\\theta}_{ui}$ is to restrict $\\cos(\\hat{\\theta}_{ui} + \\cdot)$ to be a monotonically decreasing function of the angular margin. \n\nVarious choices of the bias margin $M_{ui}$ are possible. \nA series of margin selections, such as $M_{ui} = \\min\\{\\lambda \\cdot \\hat{\\xi}_{ui}, \\pi - \\hat{\\theta}_{ui}\\}$, is also reasonable, where $\\lambda$ controls the strength of the bias margin.\nMeanwhile, by carefully constructing a monotonically decreasing similarity function, we can get rid of the restriction $\\pi - \\hat{\\theta}_{ui}$.\nHowever, such modifications are heuristic and come at the price of increased complexity or additional hyper-parameters.\nWe encourage the reader to explore them further, but we will not list and compare them in this article.\n\nBC loss is extremely easy to implement in the recommendation task, requiring the revision of only a few lines of code. \nMoreover, BC loss adds only negligible computational complexity during training compared with softmax loss, but achieves better representations, as clearly shown in Sections \\ref{sec:three_advantages} and \\ref{sec:toy_example}. 
\nHence, we recommend using BC loss not only as a debiasing technique but also as a standard loss in recommender systems.\n\n\n\\subsection{Three Advantages of BC Loss} \\label{sec:three_advantages}\nOur proposed BC loss has three advantages: an intuitive geometric interpretation, a novel hard example mining strategy, and clear theoretical properties.\n\n\\textbf{Geometric Interpretation.}\nWe first revisit the softmax loss by looking at its ranking criterion. To simplify the geometric interpretation, we analyze one user $u$ with one positive item $i$ and only two negative items $j_1, j_2$. Then the posterior probability obtained by softmax loss is:\n\\begin{gather*}\n \\frac{\\exp{(\\cos(\\hat{\\theta}_{ui})\/\\tau)}}{\\exp{(\\cos(\\hat{\\theta}_{ui})\/\\tau)}+\\exp{(\\cos(\\hat{\\theta}_{uj_1})\/\\tau)}+\\exp{(\\cos(\\hat{\\theta}_{uj_2})\/\\tau)}}\n\\end{gather*}\n\n\\begin{figure}[t]\n\t\\centering\n\t\\subcaptionbox{Softmax Loss (2D)\\label{fig:2d_softmax}}{\n\t \\vspace{-5pt}\n\t\t\\includegraphics[width=0.38\\linewidth]{figures\/2d-softmax.pdf}}\n\t\\subcaptionbox{BC Loss (2D)\\label{fig:2d_bc}}{\n\t \\vspace{-5pt}\n\t\t\\includegraphics[width=0.38\\linewidth]{figures\/2d-bc.pdf}}\n\t\\subcaptionbox{Softmax Loss (3D)\\label{fig:3d_softmax}}{\n\t \\vspace{-5pt}\n\t\t\\includegraphics[width=0.38\\linewidth]{figures\/3d-softmax.pdf}}\n\t\\subcaptionbox{BC Loss (3D)\\label{fig:3d_bc}}{\n\t \\vspace{-5pt}\n\t\t\\includegraphics[width=0.38\\linewidth]{figures\/3d-bc.pdf}}\n\t\\vspace{-10pt}\n\t\\caption{Geometric interpretation of softmax loss and BC loss. The first row is the 2D hypersphere manifold for the user representation constraint, and the second row is the 3D constraint. 
The dark red region indicates the discriminative user constraint for each loss, while the light red region is for comparison.}\n\t\\label{fig:geometric_interpretation}\n\t\\vspace{-13pt}\n\\end{figure}\n\n\nIn the training stage, the softmax loss requires $\\hat{\\theta}_{ui} < \\hat{\\theta}_{uj_1}, \\hat{\\theta}_{ui} < \\hat{\\theta}_{uj_2}$ to model the basic assumption in the recommendation task.\nConcretely, if an item $i$ has been viewed by user $u$, then we assume that the user $u$ prefers the item $i$ over all other non-observed items $j_1,j_2$. \nWe will skip the analysis for BPR loss since its constraint and geometric interpretation can be easily obtained by considering one negative item.\n\nWhat if we add an angular margin to make the criterion more stringent? Intuitively, we instead require $\\hat{\\theta}_{ui} + M_{ui} < \\hat{\\theta}_{uj_1}, \\hat{\\theta}_{ui} + M_{ui} < \\hat{\\theta}_{uj_2}$, where $M_{ui}$ is a bias-related value. By directly formulating this idea into the modified softmax loss, we have the posterior probability for BC loss, similar to Equation \\eqref{equ:bc_loss}:\n\n\\begin{gather*}\n \\frac{\\exp{(\\cos(\\hat{\\theta}_{ui}+ M_{ui})\/\\tau)}}{\\exp{(\\cos(\\hat{\\theta}_{ui}+M_{ui})\/\\tau)}+\\exp{(\\cos(\\hat{\\theta}_{uj_1})\/\\tau)}+\\exp{(\\cos(\\hat{\\theta}_{uj_2})\/\\tau)}}\n\\end{gather*}\nWe can see that BC loss is more rigorous about the ranking assumption. More detailed explanations of this assumption are given in Section \\ref{sec:hard_example}. \n\nThe comparison of the geometric interpretations of softmax loss and BC loss is depicted in Figure \\ref{fig:geometric_interpretation}. 
\nAssume the learned items $i,j_1,j_2$ are given, and softmax and BC losses are optimized to the same value.\nThe constraint boundaries for correctly ranking the user's preference in softmax loss are $\\hat{\\theta}_{ui} = \\hat{\\theta}_{uj_1}, \\hat{\\theta}_{ui} = \\hat{\\theta}_{uj_2}$, while in BC loss, the constraint boundaries are $\\hat{\\theta}_{ui} + M_{ui} = \\hat{\\theta}_{uj_1}, \\hat{\\theta}_{ui} + M_{ui} = \\hat{\\theta}_{uj_2}$.\nGeometrically, moving from softmax loss to BC loss yields a more stringent circle-like region on the unit sphere in the 3D case, as illustrated in Figure \\ref{fig:3d_bc}.\nNote that a larger bias-related margin $M_{ui}$ leads to a smaller hypercycle-like region, which is an explicit discriminative constraint on a manifold.\nLimited constraint regions squeeze similar items together and, at the same time, enlarge the distance between dissimilar items.\nAs the dimension of the representations increases, BC loss imposes a more difficult learning objective, since the area of the constraint region for correct ranking decreases exponentially.\nConsequently, BC loss becomes increasingly powerful in learning discriminative representations in recommendation tasks.\n\n\n\n\\textbf{Hard Example Mining Strategy.} \\label{sec:hard_example}\nWe argue that BC loss naturally inherits the idea of hard example mining to improve training efficiency and effectiveness. 
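The margin assignment driving this behavior can be sketched as follows; the two pairs and all angle values are hypothetical illustrations of the rule $M_{ui} = \min\{\hat{\xi}_{ui}, \pi - \hat{\theta}_{ui}\}$:

```python
import numpy as np

def bias_margin(theta_ui, xi_ui):
    """M_ui = min(xi_ui, pi - theta_ui); the upper bound keeps
    cos(theta_ui + M_ui) monotonically decreasing in the margin."""
    return min(xi_ui, np.pi - theta_ui)

# A biased pair: popularity statistics alone predict the interaction well,
# so the extractor angle xi is small and almost no margin is added.
m_biased = bias_margin(theta_ui=0.6, xi_ui=0.1)

# A hard pair: popularity statistics fail to predict the interaction,
# so xi is large and a large margin tightens the ranking criterion.
m_hard = bias_margin(theta_ui=0.6, xi_ui=1.2)
```

The upper bound kicks in only when $\hat{\theta}_{ui} + \hat{\xi}_{ui}$ would exceed $\pi$, e.g. `bias_margin(3.0, 1.0)` returns $\pi - 3.0$.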
\n\nThe key to a hard example mining strategy is to mine the most valuable samples.\nThe popularity bias extractor $\\hat{y}_b$, defined in Equation \\eqref{equ:bias_extractor}, can be viewed as a hard example detector.\nIf the value of the popularity bias is high, which means the interaction could be predicted using only popularity information, then the corresponding user-item pair is a biased sample.\nIn most cases, those biased samples are active users with popular items.\nTake a news recommender system as an example; a reader may click a piece of news only because it is a top story, not because its topic is a long-term interest of that reader. \nHere, the bias-related margin $M_{ui}$ is close to zero.\nIn other words, we still make the basic assumption that the reader favors the popular news over all other unobserved news.\n\nIn contrast, the popularity bias extractor identifies hard examples when the popularity statistics cannot successfully predict the interactions.\nThose user-item pairs, in general, are inactive users with unpopular items.\nRecall the news recommendation example; a reader clicking a piece of niche news is more likely to be attracted by interesting keywords or specific news topics. \nThose interactions are important in helping the recommendation model understand user behavior.\nSpecifically, the bias margin $M_{ui}$ is large in this situation, and we make a more stringent assumption that the reader prefers this unpopular news by a large margin over other unclicked news.\nThe large margin makes the objective function more challenging to learn. 
\nAs a result, the model shifts its attention to hard examples.\n\n\n\\textbf{Theoretical Properties of BC Loss.}\nWe show that BC loss has a compactness part and a dispersion part in representation learning, based on the following theorem.\n\n\\begin{theorem}\nAssuming the representations of users and items are normalized, minimizing the BC loss is equivalent to simultaneously minimizing a compactness part and a dispersion part:\n\\begin{align*}\n \\mathbf{\\mathop{\\mathcal{L}}}_{BC} & \\propto \\underbrace{\\sum_{u=1}^{|\\Space{U}|}\\norm{\\Mat{v}_u-\\Mat{c}_u}^2 + \\sum_{i=1}^{|\\Space{I}|}\\norm{\\Mat{v}_i-\\Mat{c}_i}^2}_{\\text{Compactness part}}\\underbrace{-\\sum_{u=1}^{|\\Space{U}|}\\sum_{j\\in\\Set{N}_{u}}\\norm{\\Mat{v}_u-\\Mat{v}_j}^2}_{\\text{Dispersion part}} \\\\\n & \\propto \\underbrace{\\mathcal{H}(\\Mat{V}|Y)}_{\\text{Compactness}}\\:\\: \\underbrace{- \\:\\: \\mathcal{H}(\\Mat{V})}_{\\text{Dispersion}}\\: = \\:- \\mathcal{I}(\\Mat{V};Y)\n\\end{align*}\n\\end{theorem}\n\n\\begin{proof}\nThroughout this section, for simplicity, $\\Mat{v}_u \\doteq \\psi(u)$ and $\\Mat{v}_i \\doteq \\phi(i)$. \nWe use the upper-case letter $\\Mat{V} \\in \\Space{V}$ to indicate the random vector of representations, where $\\Space{V} \\subseteq \\Space{R}^d$ is the representation space.\nWe also use the normalization assumption for representations to connect cosine and Euclidean distances, \\emph{i.e., } if $\\norm{\\Mat{v}_u}=1, \\forall u$ and $\\norm{\\Mat{v}_i}=1, \\forall i$, then $\\Mat{v}_u^T\\Mat{v}_i = 1 - \\frac{1}{2} \\norm{\\Mat{v}_u-\\Mat{v}_i}^2$. \n\nIt is clear that we can find an upper bound $m$, \\emph{s.t. } $-1 < \\cos(\\hat{\\theta}_{ui}+M_{ui}) \\leq \\Mat{v}_u^T\\Mat{v}_i - m < 1 $. 
\nLet $\\Set{P}_u = \\{i|y_{ui} =1\\}$ denote the positive item set for user $u$, $\\Set{P}_i = \\{u|y_{ui} =1\\}$ the positive user set for item $i$, and $\\Set{N}_{u}=\\{i|y_{ui}=0\\}$ the negative item set for user $u$.\nLet us start by splitting the BC loss into two terms:\n\\begin{align} \n \\mathbf{\\mathop{\\mathcal{L}}}_{BC} & \\geq -\\sum_{(u,i)\\in\\Set{O}^{+}}\\log\\frac{\\exp{\\{(\\Mat{v}_u^T\\Mat{v}_i - m)\/\\tau\\}}}{\\exp{\\{(\\Mat{v}_u^T\\Mat{v}_i - m)\/\\tau\\}}+\\sum_{j\\in\\Set{N}_{u}}\\exp{\\{(\\Mat{v}_u^T\\Mat{v}_j)\/\\tau\\}}} \\nonumber\\\\\n & = -\\sum_{u=1}^{|\\Space{U}|}\\sum_{i \\in \\Set{P}_u} \\frac{\\Mat{v}_u^T\\Mat{v}_i - m}{\\tau} \\nonumber\\\\\n & + \\sum_{u=1}^{|\\Space{U}|}\\sum_{i \\in \\Set{P}_u} \\log \\{\\exp{\\{\\frac{\\Mat{v}_u^T\\Mat{v}_i - m}{\\tau}\\}}+\\sum_{j\\in\\Set{N}_{u}}\\exp{\\{\\frac{\\Mat{v}_u^T\\Mat{v}_j}{\\tau}\\}}\\} \\label{eq:split_bc}\n\\end{align}\n\nLet $\\Mat{c}_{u} = \\frac{1}{|\\Set{P}_u|}\\sum_{i\\in \\Set{P}_{u}}\\Mat{v}_i$ denote the hard mean of all items in the positive item set for user $u$. \nSimilarly, let $\\Mat{c}_{i} = \\frac{1}{|\\Set{P}_{i}|} \\sum_{u\\in\\Set{P}_{i}} \\Mat{v}_{u}$. \nLet the symbol $\\eqc$ indicate equality up to a multiplicative and\/or additive constant. 
\nWe first analyze the first term in Equation \\eqref{eq:split_bc} by relating it to the compactness term of the center loss.\n\\begin{align}\n -\\sum_{u=1}^{|\\Space{U}|}\\sum_{i \\in \\Set{P}_u} \\frac{\\Mat{v}_u^T\\Mat{v}_i - m}{\\tau} \n & = \\sum_{u=1}^{|\\Space{U}|}\\sum_{i \\in \\Set{P}_u} \\frac{\\norm{\\Mat{v}_u-\\Mat{v}_i}^2}{2\\tau} +\\frac{m-1}{\\tau} \\nonumber\\\\\n & \\eqc \\sum_{u=1}^{|\\Space{U}|}\\sum_{i \\in \\Set{P}_u} \\norm{\\Mat{v}_u-\\Mat{v}_i}^2 \\nonumber\\\\\n & = \\sum_{u=1}^{|\\Space{U}|}\\sum_{i \\in \\Set{P}_u} (\\norm{\\Mat{v}_u}^2 + \\norm{\\Mat{v}_i}^2 - 2\\Mat{v}_u^T\\Mat{v}_i)\\nonumber\\\\\n & = \\sum_{u=1}^{|\\Space{U}|}\\sum_{i \\in \\Set{P}_u} (\\norm{\\Mat{v}_u}^2 - \\Mat{v}_u^T\\Mat{v}_i )\n + \\sum_{i=1}^{|\\Space{I}|}\\sum_{u\\in\\Set{P}_i} (\\norm{\\Mat{v}_i}^2 - \\Mat{v}_u^T\\Mat{v}_i )\\nonumber \\\\\n & \\eqc \\sum_{u=1}^{|\\Space{U}|}\\norm{\\Mat{v}_u-\\Mat{c}_u}^2 + \\sum_{i=1}^{|\\Space{I}|}\\norm{\\Mat{v}_i-\\Mat{c}_i}^2 \\label{eq:conpactness}\n\\end{align}\nwhere we use the normalization property in the first line and the definitions of $\\Mat{c}_i$ and $\\Mat{c}_u$ in the last line. \nNote that we technically link the first term of our BC loss in Equation \\eqref{eq:split_bc} to K-means. 
\n\nCombining the two terms in Equation \\eqref{eq:conpactness}, we obtain:\n\\begin{align*}\n -\\sum_{(u,i)\\in\\Set{O}^{+}}\\frac{\\Mat{v}_u^T\\Mat{v}_i - m}{\\tau} \\eqc \\sum_{\\Mat{v}\\in \\Space{V}} \\norm{\\Mat{v} - \\Mat{c}_{v}}^2\n\\end{align*}\n\nFollowing \\cite{BoudiafRZGPPA20}, we can then interpret the first term in Equation \\eqref{eq:split_bc} as a conditional cross-entropy between $\\Mat{V}$ and another random variable $\\bar{\\Mat{V}}$, whose conditional distribution given $Y$ is a standard Gaussian $\\bar{\\Mat{V}}|Y \\sim \\mathcal{N}(\\Mat{c}_{\\Mat{V}},I)$:\n\\begin{align}\n -\\sum_{(u,i)\\in\\Set{O}^{+}}\\frac{\\Mat{v}_u^T\\Mat{v}_i - m}{\\tau} & \\eqc\n \\mathcal{H}(\\Mat{V};\\bar{\\Mat{V}}|Y)=\\mathcal{H}(\\Mat{V}|Y)+\\mathcal{D}_{KL}(\\Mat{V}||\\bar{\\Mat{V}}|Y) \\nonumber\\\\\n & \\propto \\mathcal{H}(\\Mat{V}|Y) \\label{eq:first-term}\n\\end{align}\nAs a consequence, minimizing the first term in Equation \\eqref{eq:split_bc} is equivalent to minimizing $\\mathcal{H}(\\Mat{V}|Y)$.\nThis concludes the proof for the compactness part of BC loss.\n\nWe then analyze the second term in Equation \\eqref{eq:split_bc} to demonstrate its dispersion property:\n\\begin{align}\n & \\sum_{u=1}^{|\\Space{U}|}\\sum_{i \\in \\Set{P}_u} \\log \\{\\exp{\\{\\frac{\\Mat{v}_u^T\\Mat{v}_i - m}{\\tau}\\}}+\\sum_{j\\in\\Set{N}_{u}}\\exp{\\{\\frac{\\Mat{v}_u^T\\Mat{v}_j}{\\tau}\\}}\\}\\nonumber\\\\\n & \\geq \\sum_{u=1}^{|\\Space{U}|}\\sum_{i \\in \\Set{P}_u} \\log{(\n \\sum_{j\\in\\Set{N}_{u}}\\exp{\\{\\frac{\\Mat{v}_u^T\\Mat{v}_j}{\\tau}\\}})} \\nonumber\\\\\n & \\geq \\sum_{u=1}^{|\\Space{U}|}\\sum_{i \\in \\Set{P}_u}\\frac{1}{|\\Set{N}_{u}|}\\sum_{j\\in\\Set{N}_{u}}\\frac{\\Mat{v}_u^T\\Mat{v}_j}{\\tau}\\nonumber\\\\\n & \\eqc - \\sum_{u=1}^{|\\Space{U}|}\\sum_{j\\in\\Set{N}_{u}} \\norm{\\Mat{v}_u - \\Mat{v}_j}^2 \\label{eq:dispersion}\n\\end{align}\nwhere we drop the term shared with the compactness objective in the second line and apply Jensen's inequality to the $\\log$ of the sum in the third line. 
Following \\cite{WangS11}, the negative of the term in Equation \\eqref{eq:dispersion} is an entropy estimator $\\hat{\\mathcal{H}}(\\Mat{V})$. Then, we know that minimizing the second term in Equation \\eqref{eq:split_bc} is equivalent to maximizing $\\mathcal{H}(\\Mat{V})$:\n\\begin{align} \\label{eq:second-term}\n \\sum_{u=1}^{|\\Space{U}|}\\sum_{i \\in \\Set{P}_u} \\log \\{\\exp{\\{\\frac{\\Mat{v}_u^T\\Mat{v}_i - m}{\\tau}\\}}+\\sum_{j\\in\\Set{N}_{u}}\\exp{\\{\\frac{\\Mat{v}_u^T\\Mat{v}_j}{\\tau}\\}}\\} \\propto -\\mathcal{H}(\\Mat{V})\n\\end{align}\nCombining Equations \\eqref{eq:first-term} and \\eqref{eq:second-term}, we conclude the proof.\n\\end{proof}\n\n\\textbf{Discussion.}\nThe BC loss breaks down into a compactness part and a dispersion part.\nFor the compactness part, $\\Mat{c}_{u}$ is the mean of all items that user $u$ has interacted with, which is a simple representation of the interests of user $u$.\nMoreover, $\\Mat{c}_{i}$ denotes the hard mean of all users in the positive user set for item $i$, which is the most straightforward portrait of all users who love item $i$.\nIn other words, the compactness part of BC loss encourages each user to be close to his interests, while each item stays close to the portrait of all users who interact with it.\nFrom the entropy view, the compactness part of BC loss tends to learn a low-entropy cluster for positive samples, \\emph{i.e., } high similar item\/user compactness.\nOn the other hand, the dispersion part of BC loss maximizes the pair-wise Euclidean distance for negative samples, which is equivalent to forcing negative samples to stand far apart from one another.\nFrom the entropy view, the dispersion part enforces the spread of representations to learn a high-entropy representation space, \\emph{i.e., } a large separation degree for dissimilar users\/items. 
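The normalization identity used throughout the proof, and the per-pair rewriting at the start of the compactness analysis, can be checked numerically; the random unit vectors and the values of $m$ and $\tau$ are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
unit = lambda x: x / np.linalg.norm(x)

# Unit-norm user/item representations, as assumed by the theorem.
v_u, v_i = unit(rng.normal(size=8)), unit(rng.normal(size=8))

# Identity linking cosine and Euclidean distance on the unit sphere:
# v_u^T v_i = 1 - ||v_u - v_i||^2 / 2
inner = v_u @ v_i
identity_rhs = 1.0 - 0.5 * np.linalg.norm(v_u - v_i) ** 2

# Per-pair rewriting of the compactness part:
# -(v_u^T v_i - m)/tau = ||v_u - v_i||^2 / (2*tau) + (m - 1)/tau
m, tau = 0.2, 0.1
lhs = -(inner - m) / tau
rhs = np.linalg.norm(v_u - v_i) ** 2 / (2 * tau) + (m - 1) / tau
```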
\n\n\n\n\n\n\n\\subsection{Visualization of A Toy Experiment} \\label{sec:toy_example}\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\subcaptionbox{BPR loss for (head)\\label{fig:head_bpr}}{\n\t \\vspace{-5pt}\n\t\t\\includegraphics[width=0.24\\linewidth]{figures\/head_bpr.pdf}}\n\t\\subcaptionbox{Softmax loss (head)\\label{fig:head_softmax}}{\n\t \\vspace{-5pt}\n\t\t\\includegraphics[width=0.24\\linewidth]{figures\/head_info.pdf}}\n\t\\subcaptionbox{BC loss (head)\\label{fig:head_bc}}{\n\t \\vspace{-5pt}\n\t\t\\includegraphics[width=0.24\\linewidth]{figures\/head_bc.pdf}}\n\t\\subcaptionbox{IPS-CN\\cite{IPS-CN} (head) \\label{fig:head_ips}}{\n\t \\vspace{-5pt}\n\t\t\\includegraphics[width=0.24\\linewidth]{figures\/head_ips.pdf}}\n\t\\subcaptionbox{BPR loss (tail)\\label{fig:tail_bpr}}{\n\t \\vspace{-5pt}\n\t\t\\includegraphics[width=0.24\\linewidth]{figures\/tail_bpr.pdf}}\n\t\\subcaptionbox{Softmax loss (tail)\\label{fig:tail_softmax}}{\n\t \\vspace{-5pt}\n\t\t\\includegraphics[width=0.24\\linewidth]{figures\/tail_info.pdf}}\n\t\\subcaptionbox{BC loss (tail)\\label{fig:tail_bc}}{\n\t \\vspace{-5pt}\n\t\t\\includegraphics[width=0.24\\linewidth]{figures\/tail_bc.pdf}}\n\t\\subcaptionbox{IPS-CN\\cite{IPS-CN} (tail)\\label{fig:tail_ips}}{\n\t \\vspace{-5pt}\n\t\t\\includegraphics[width=0.24\\linewidth]{figures\/tail_ips.pdf}}\n\t\\vspace{-5pt}\n\t\\caption{Visualization of item representations learned with different loss functions on Yelp2018. The identical head\/tail user and the selected 500 items are drawn in (a-d)\/(e-h). The first row of each subfigure shows the 3D item representations projected on the unit sphere. The second row of each subfigure shows the angle distribution of both positive and negative items for the specific head\/tail user. 
\n\tBC loss encourages the majority of positive items to fall into the group closest to the user and achieves the smallest mean positive angle.\n\tUnder BC loss, some item representations are almost clustered at a single point.\n\tMoreover, only BC loss shows a 0.5-radian margin between positive and negative samples for the tail user.\n\tThese phenomena verify that, compared to training with other loss functions, BC loss simultaneously encourages the compactness of similar users\/items and enlarges the dispersion of dissimilar users\/items, strongly enhancing the discriminative power for better recommendation.}\n\t\\label{fig:toy-example}\n\t\\vspace{-8pt}\n\\end{figure*}\n\nTo intuitively visualize the effect of BC loss on the recommendation task, we present a toy example on the Yelp2018 \\cite{LightGCN} dataset.\nWe modify the two-layer LightGCN by reducing the output dimension of the last hidden layer to three, so that the learned representations are 3-dimensional.\nWe can thus directly normalize the obtained representations and plot them on a 3D unit sphere for visualization in Figure \\ref{fig:toy-example}.\nWe train the recommendation models with the identical LightGCN backbone over the training dataset by minimizing different loss functions.\nFor the same head or tail user (green star), we plot the representations of 500 items on the unit sphere, containing all positive items (red dots) and randomly selected negative items (blue dots) from both the training and testing sets.\nThe angle distribution (the second row of each subfigure) of positive and negative samples for a specific user quantitatively shows the discriminative power of each loss. 
\nWe observe that:\n\n\\begin{itemize}[leftmargin = *]\n    \\item \\textbf{In both head and tail user cases, BC loss learns more discriminative representations.}\n    As Figures \\ref{fig:head_bc} and \\ref{fig:tail_bc} show, for head and tail users, BC loss encourages around 40\\% and 50\\% of positive items, respectively, to fall into the group closest to the user representation.\n    In other words, the representations of those samples are almost clustered at a single point.\n    BC loss also achieves the smallest mean positive angle.\n    This verifies that BC loss tends to learn high compactness of similar items\/users.\n    Moreover, a 0.5-radian margin between positive and negative items is clearly shown in Figure \\ref{fig:tail_bc}, reflecting highly discriminative power.\n    We conjecture that, compared with the performance of softmax loss in Figures \\ref{fig:head_softmax} and \\ref{fig:tail_softmax}, the compactness and dispersion properties of BC loss come from incorporating the interaction-wise, popularity-bias-aware margin.\n\n    \\item \\textbf{The representations learned by BPR and Softmax losses are not discriminative enough.}\n    Under the supervision of BPR and Softmax losses, the item representations spread over a wide range of the unit sphere, where the blue and red points occupy almost the same area, as demonstrated in Figures \\ref{fig:head_bpr} and \\ref{fig:tail_softmax}.\n    Furthermore, the angle distributions of positive and negative samples show only a negligible overlap in Figure \\ref{fig:tail_bpr}.\n    However, because the numbers of positive and negative items can differ by more than an order of magnitude for the tail user, even a small overlap causes a large number of negative items to rank ahead of positive items, severely hurting recommendation accuracy.\n    Consequently, directly using BPR and softmax losses is unsatisfactory for the task of personalized recommendation.\n\n    \\item \\textbf{The 
IPS-CN, a well-known popularity debiasing method in CF, tends to lift tail performance by sacrificing representation learning for the head.}\n    Compared with BPR loss in Figure \\ref{fig:tail_bpr}, IPS-CN learns better item representations for the tail user, where only a small part of the positive and negative angle distributions overlap, as illustrated in Figure \\ref{fig:tail_ips}.\n    However, for the head user in Figure \\ref{fig:head_ips}, the positive and negative item representations are mixed over a large region and cannot be distinguished from each other.\n    This results in a huge performance drop in head evaluations; more results can be found in the Experiments section.\n\\end{itemize}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Introduction}\n\nAt the core of leading collaborative filtering (CF) models is \\wx{the learning of} high-quality representations of users and items from historical interactions.\nHowever, most CF models easily suffer from the popularity bias issue in the interaction data \\cite{unfairness,Crowd,MostPop,Beyond_Parity}.\nSpecifically, the training data distribution is typically long-tailed, \\emph{e.g., } a few head items occupy most of the interactions, whereas the majority of tail items are unpopular and receive little attention.\nThe CF models built upon the imbalanced data are prone to learn the popularity bias and even amplify it by over-recommending head items and under-recommending tail items.\nAs a result, the popularity bias causes biased representations with poor generalization ability, making recommendations deviate from users' actual preferences.\n\n\nMotivated by concerns of popularity bias, studies on debiasing have been conducted to lift the tail performance.\nUnfortunately, most prevalent debiasing strategies focus on the trade-off between head and tail evaluations (see Table \\ref{tab:subgroups}), including post-processing re-ranking \\cite{Calibration,RankALS,rescale,FPC,PC_Reg}, balanced training loss 
\\cite{ESAM,ALS+Reg,Regularized_Optimization,PC_Reg}, sample re-weighting \\cite{ips,IPS-C,IPS-CN,Propensity_SVM-Rank,YangCXWBE18,UBPR}, and head bias removal by causal inference \\cite{CausE,PDA,DecRS,MACR}.\nWorse still, many of them hold assumptions that are infeasible in practice, such as that the balanced test distribution is known in advance to guide hyperparameter adjustment \\cite{DICE, MACR}, or that a small unbiased dataset is present to train an unbiased model \\cite{KDCRec,CausE}.\nConsequently, \\wx{they pursue improvements on tail items but exacerbate the performance sacrifice of head items, leading to a severe overall performance drop.}\nThe trade-off between \\wx{the} head and tail evaluations \\wx{results in} suboptimal representations, which derails the generalization ability.\n\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\subcaptionbox{BPR loss (head)\\label{fig:head_bpr}}{\n\t    \\vspace{-6pt}\n\t\t\\includegraphics[width=0.23\\linewidth]{figures\/head_bpr.pdf}}\n\t\\subcaptionbox{Softmax loss (head)\\label{fig:head_softmax}}{\n\t    \\vspace{-6pt}\n\t\t\\includegraphics[width=0.23\\linewidth]{figures\/head_info.pdf}}\n\t\\subcaptionbox{IPS-CN \\cite{IPS-CN} (head)\\label{fig:head_ips}}{\n\t    \\vspace{-6pt}\n\t\t\\includegraphics[width=0.23\\linewidth]{figures\/head_ips.pdf}}\n\t\\subcaptionbox{BC loss (head)\\label{fig:head_bc}}{\n\t    \\vspace{-6pt}\n\t\t\\includegraphics[width=0.23\\linewidth]{figures\/head_bc.pdf}}\n\n\t\\subcaptionbox{BPR loss (tail)\\label{fig:tail_bpr}}{\n\t    \\vspace{-6pt}\n\t\t\\includegraphics[width=0.23\\linewidth]{figures\/tail_bpr.pdf}}\n\t\\subcaptionbox{Softmax loss (tail)\\label{fig:tail_softmax}}{\n\t    \\vspace{-6pt}\n\t\t\\includegraphics[width=0.23\\linewidth]{figures\/tail_info.pdf}}\n\t\\subcaptionbox{IPS-CN \\cite{IPS-CN} (tail)\\label{fig:tail_ips}}{\n\t    \\vspace{-6pt}\n\t\t\\includegraphics[width=0.23\\linewidth]{figures\/tail_ips.pdf}}\n\t\\subcaptionbox{BC loss (tail)\\label{fig:tail_bc}}{\n\t 
\\vspace{-6pt}\n\t\t\\includegraphics[width=0.23\\linewidth]{figures\/tail_bc.pdf}}\n\n\t\\caption{Visualizations of item representations learned by LightGCN \\cite{LightGCN} on Yelp2018~\\cite{LightGCN}, where subfigures (a-d)\/(e-h) depict the identical head\/tail user as a green star, while the red and blue points denote positive and negative items, respectively. In each subfigure, the first row presents the 3D item representations projected on the unit sphere, while the second row shows the angle distribution of items \\emph{w.r.t.}~ the specific user and the statistics of mean angles.\n    Compared to other losses, BC loss learns better head representations (\\emph{cf. } with the smallest mean positive angle, the vast majority of positive items fall into the group closest to the user) and tail representations (\\emph{cf. } a clear margin exists between positive and negative items for the tail user). \\za{BC loss learns a more reasonable representation distribution that is locally clustered and globally separated.} See more details in Appendix \\ref{sec:toy_example}.}\n\t\\label{fig:toy-example}\n\t\\vspace{-10pt}\n\\end{figure*}\n\n\n\n\\wxx{\nIn this paper, we conjecture that an ideal debiasing strategy should learn high-quality head and tail representations with powerful discrimination and generalization abilities, rather than playing a trade-off game between the head and tail performance.\nHere we follow the prior studies \\cite{IPS-CN,DICE,ips,IPS-C} to focus on one key ingredient of representation learning: the loss function.\nFigure \\ref{fig:toy-example} depicts the item representations, which are optimized via two non-debiasing losses (BPR \\cite{BPR} and Softmax \\cite{SampledSoftmaxLoss}) and one debiasing loss (IPS-CN \\cite{IPS-CN}).\nHere, representation discrimination is reflected in how well a user's positive items are separated from the negatives.\nOur insights are:\n(1) For a user, the non-debiasing losses are inadequate to discriminate 
his\/her positive and negative items well, since their representations largely overlap, as Figures \\ref{fig:head_bpr} and \\ref{fig:head_softmax} show;\n(2) Although IPS-CN achieves better discrimination power in the tail group than BPR (\\emph{cf. } positive items get smaller angles to the ego user in Figure \\ref{fig:tail_ips}, as compared to Figure \\ref{fig:tail_bpr}), it gets worse discrimination ability in the head (\\emph{cf. } positive items hold larger angles to the ego user in Figure \\ref{fig:head_ips}, as compared to Figure \\ref{fig:head_bpr}).\n}\n\n\n\n\n\n\n\nTowards this end, we incorporate \\underline{B}ias-aware margins into \\underline{C}ontrastive Loss and devise a simple yet effective \\textbf{BC Loss} to guide the head and tail representation learning of CF models.\nSpecifically, we first \\wx{employ} a bias degree extractor to \\wx{quantify the influence of interaction-wise popularity} --- that is, how well an interaction is predicted when only the popularity information of the target user and item is used.\nInteractions involving inactive users and unpopular items often \\wx{align with} lower bias degrees, indicating that popularity fails to reflect user preference faithfully.\nIn contrast, \\wx{interactions with active users and popular items are spurred by the popularity information and thus easily incline to high bias degrees.}\nWe then \\wx{train} the CF model by converting the bias degrees into angular margins between user and item representations.\nIf the bias degree is low, we impose a larger margin to strongly squeeze the tightness of representations.\nIn contrast, if the bias degree is large, we \\wx{exert} a small or vanishing margin to reduce the influence of biased representations.\n\\wx{In this way, for each ego user's representation, BC loss quantitatively controls its bias-aware margins with item representations --- adaptively intensifying the representation similarity among positive items, while diluting 
that among negative items.}\nBenefiting from stringent and discriminative representations, BC loss significantly improves both head and tail performance.\n\n\n\n\n\n\n\n\n\nFurthermore, BC loss has \\wx{three} desirable advantages.\nFirst, it has a clear geometric interpretation, as \\wx{illustrated} in Figure \\ref{fig:geometric_interpretation}.\nSecond, it brings forth a simple but effective mechanism of hard example mining (see Appendix \\ref{sec:hard_example}).\nThird, we theoretically reveal that BC loss tends to learn a low-entropy cluster for positive pairs (\\emph{e.g., } \\wx{compactness of matched users and items}) and a high-entropy space for negative pairs (\\emph{e.g., } \\wx{dispersion of unmatched users and items}) (see Theorem \\ref{theorem}).\nConsidering the theoretical guarantee and empirical effectiveness, \\wx{we argue that} BC loss is not only promising to alleviate popularity bias, but also suitable as \\wx{a} standard learning strategy in CF.\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Preliminary of Collaborative Filtering (CF)}\n\n\\textbf{Task Formulation.} \\label{sec:task-formulation}\nPersonalized recommendation aims to retrieve a subset of items from a large catalog to match user preference.\nHere we consider a \\wx{typical} scenario, collaborative filtering (CF) with implicit feedback \\cite{softmax}, which can be framed as a top-$N$ recommendation problem.\nLet $\\Set{O}^{+}=\\{(u,i)|y_{ui}=1\\}$ be the historical interactions between users $\\Set{U}$ and items $\\Set{I}$, where $y_{ui}=1$ indicates that user $u\\in\\Set{U}$ has adopted item $i\\in\\Set{I}$ before.\nOur goal is to optimize a CF model $\\hat{y}: \\Space{U}\\times\\Space{I}\\rightarrow\\Space{R}$ that captures user preference towards items.\n\n\n\n\\textbf{Modeling Scheme.}\nScrutinizing leading CF models \\cite{BPR,LightGCN,SVD++,MultVAE}, we systematize the common paradigm as a combination of three modules: user encoder $\\psi(\\cdot)$, item encoder $\\phi(\\cdot)$, and 
similarity function $s(\\cdot)$.\nFormally, we depict one CF model as $\\hat{y}(u,i) = s(\\psi(u), \\phi(i))$,\nwhere $\\psi:\\Space{U}\\rightarrow\\Space{R}^d$ and $\\phi:\\Space{I}\\rightarrow\\Space{R}^d$ encode the identity (ID) information of user $u$ and item $i$ into $d$-dimensional representations, respectively;\n$s:\\Space{R}^d \\times\\Space{R}^d\\rightarrow\\Space{R}$ measures the similarity between user and item representations.\nIn the literature, there are various choices of encoders and similarity functions:\n\\begin{itemize}[leftmargin=*]\n    \\item Common encoders roughly fall into three groups: ID-based (\\emph{e.g., } MF \\cite{BPR,SVD++}, NMF \\cite{NMF}, CMN \\cite{cmn}), history-based (\\emph{e.g., } SVD++ \\cite{SVD++}, FISM \\cite{FISM}, MultVAE \\cite{MultVAE}), and graph-based (\\emph{e.g., } GCMC \\cite{GCMC}, PinSage \\cite{PinSage}, LightGCN \\cite{LightGCN}) fashions.\n    Here we select two high-performing encoders, MF and LightGCN, as the backbone models being optimized.\n\n    \\item The widely-used similarity functions include dot product \\cite{BPR}, cosine similarity \\cite{RendleKZA20}, and neural networks \\cite{NMF}.\n    As suggested in the recent study \\cite{RendleKZA20}, cosine similarity is a simple yet effective and efficient similarity function that achieves strong performance in CF models.\n    For better interpretation, we take a geometric view and denote it by:\n    \\begin{gather}\\label{eq:cosine-similarity}\n        s(\\psi(u), \\phi(i))=\\frac{\\Trans{\\psi(u)}\\phi(i)}{\\norm{\\psi(u)} \\cdot \\norm{\\phi(i)}}\\doteq \\cos{(\\hat{\\theta}_{ui})},\n    \\end{gather}\n    in which $\\hat{\\theta}_{ui}$ is the angle between the user representation $\\psi(u)$ and the item representation $\\phi(i)$.\n\\end{itemize}\n\n\n\n\n\n\\textbf{Learning Strategy.}\nTo optimize the model parameters, CF models mostly frame the top-$N$ recommendation problem as a supervised learning task, and resort to one of three classical learning strategies:\npointwise loss 
(\\emph{e.g., } binary cross-entropy \\cite{CrossEntropy}, mean squared error \\cite{SVD++}),\npairwise loss (\\emph{e.g., } BPR \\cite{BPR}, WARP \\cite{WARP}), and softmax loss \\cite{softmax}.\nAmong them, pointwise and pairwise losses are long-standing and widely-adopted objective functions in CF.\nHowever, extensive studies \\cite{PC_Reg,unfairness,tang2020investigating} have analytically and empirically confirmed that using pointwise or pairwise loss is prone to propagate more information towards the head user-item pairs, which amplifies popularity bias.\n\n\n\nSoftmax loss is much less explored in CF than in other domains like CV \\cite{ArcFace,SphereFace}.\nRecent studies \\cite{RendleKZA20,CLRec,S3-Rec,DBLP:conf\/www\/Lian0C20,DBLP:conf\/recsys\/YiYHCHKZWC19} find that it inherently conducts \\wx{hard example mining} over multiple negatives and aligns well with the ranking metric, thus \\wx{attracting a surge of interest} in recommendation.\nHence, we cast the minimization of softmax loss \\cite{SampledSoftmaxLoss} as the representative learning strategy:\n\\begin{gather}\n    \\mathbf{\\mathop{\\mathcal{L}}}_0 = -\\sum_{(u,i)\\in\\Set{O}^{+}}\\log\\frac{\\exp{(\\cos(\\hat{\\theta}_{ui})\/\\tau)}}{\\exp{(\\cos(\\hat{\\theta}_{ui})\/\\tau)}+\\sum_{j\\in\\Set{N}_{u}}\\exp{(\\cos(\\hat{\\theta}_{uj})\/\\tau)}},\n\\end{gather}\nwhere $(u,i)\\in\\Set{O}^{+}$ is one observed interaction of user $u$, while $\\Set{N}_{u}=\\{j|y_{uj}=0\\}$ is the set of sampled unobserved items that $u$ did not interact with before; $\\tau$ is the hyper-parameter known as the temperature in softmax \\cite{tau}.\nNonetheless, modifying softmax loss to enhance the discriminative power of representations and alleviate the popularity bias remains largely unexplored.\nTherefore, \\wx{our work aims to} devise a more generic and broadly-applicable variant of softmax loss for CF tasks, which can fundamentally improve the long-tail performance.\n\\section{Methodology of BC 
Loss}\nOn the basis of softmax loss, we devise our BC loss and present its desirable characteristics.\n\n\n\\subsection{{Popularity Bias Extractor}}\nBefore mitigating popularity bias, we need to quantify the influence of popularity bias on a single user-item pair.\nOne straightforward solution is to compare the performance difference between the biased and unbiased evaluations.\n\\wx{However, this is not feasible as} the unbiased data is usually unavailable in practice.\nStatistical metrics of popularity could be a reasonable proxy of the biased information, such as the user popularity statistic $p_{u}\\in\\Space{P}$ (\\emph{i.e., } the number of historical items that user $u$ has interacted with before) and the item popularity statistic $p_{i}\\in\\Space{P}$ (\\emph{i.e., } the number of observed interactions that item $i$ is involved in).\nIf \\wx{the impact of} the interaction between $u$ and $i$ can be captured well based solely on such statistics, the model is susceptible to exploiting popularity bias for prediction.\nHence, we argue that the popularity-only prediction will delineate the influence of bias.\n\n\n\nTowards this end, we first train an additional module, termed the popularity bias extractor, which only takes the popularity statistics as input to make predictions.\n\\wx{Similar} to the modeling of CF (\\emph{cf. 
} Section \\ref{sec:task-formulation}), the bias extractor is formulated as a function $\\hat{y}_b:\\Space{P}\\times\\Space{P}\\rightarrow\\Space{R}$:\n\\begin{gather}\\label{eq:bias-prediction}\n    \\hat{y}_b(p_u,p_i) = s(\\psi_b(p_u), \\phi_b(p_i)) \\doteq \\cos(\\hat{\\xi}_{ui}),\n\\end{gather}\nwhere the user popularity encoder $\\psi_b:\\Space{P}\\rightarrow\\Space{R}^d$ and the item popularity encoder $\\phi_b:\\Space{P}\\rightarrow\\Space{R}^d$ map the popularity statistics of user $u$ and item $i$ into $d$-dimensional popularity embeddings $\\psi_b(p_u)$ and $\\phi_b(p_i)$, respectively;\n$s:\\Space{R}^d\\times\\Space{R}^d\\rightarrow\\Space{R}$ is \\wx{the cosine similarity function} between popularity embeddings (\\emph{cf. } Equation \\eqref{eq:cosine-similarity}).\n$\\hat{\\xi}_{ui}$ is the angle between $\\psi_b(p_u)$ and $\\phi_b(p_i)$.\n\nWe then minimize the following softmax loss to optimize the popularity bias extractor:\n\\begin{gather} \\label{eq:bias-extractor}\n    \\mathbf{\\mathop{\\mathcal{L}}}_b = -\\sum_{(u,i)\\in\\Set{O}^{+}}\\log\\frac{\\exp{(\\cos(\\hat{\\xi}_{ui})\/\\tau)}}{\\exp{(\\cos(\\hat{\\xi}_{ui})\/\\tau)}+\\sum_{j\\in\\Set{N}_{u}}\\exp{(\\cos(\\hat{\\xi}_{uj})\/\\tau)}}.\n\\end{gather}\nThis optimization enforces the extractor to reconstruct the historical interactions \\za{using only biased information (\\emph{i.e., } popularity statistics)} and makes the reconstruction reflect the interaction-wise bias degree.\n\\za{As shown in Appendix \\ref{sec:bias_extractor}, interactions with active users and popular items tend to be fitted well via Equation \\eqref{eq:bias-extractor}.}\nFurthermore, we can distinguish hard interactions based on the bias degree, \\wx{\\emph{i.e., } the interactions that can hardly be predicted by popularity statistics} ought to be more informative for representation learning in the target CF model.\nIn a nutshell, the popularity bias extractor underscores the bias degree of each user-item interaction, 
which substantively reflects how hard the interaction is to predict.\n\n\n\\subsection{BC Loss}\nWe move on to \\wx{devise} a new BC loss for the target CF model.\nOur BC loss stems from softmax loss, but converts the interaction-wise bias degrees into bias-aware angular margins among the representations to enhance their discriminative power.\nOur BC loss is:\n\\begin{align}\\label{equ:bc_loss}\n    \\mathbf{\\mathop{\\mathcal{L}}}_{\\text{BC}} =\n    -\\sum_{(u,i)\\in\\Set{O}^{+}}\\log\\frac{\\exp{(\\cos(\\hat{\\theta}_{ui}+M_{ui})\/\\tau)}}{\\exp{(\\cos(\\hat{\\theta}_{ui}+M_{ui})\/\\tau)}+\\sum_{j\\in\\Set{N}_{u}}\\exp{(\\cos(\\hat{\\theta}_{uj})\/\\tau)}},\n\\end{align}\nwhere $M_{ui}$ is the bias-aware angular margin for the interaction $(u,i)$, defined as:\n\\begin{gather}\n    M_{ui} = \\min \\{\\hat{\\xi}_{ui}, \\pi - \\hat{\\theta}_{ui}\\},\n\\end{gather}\nwhere $\\hat{\\xi}_{ui}$ is derived from the popularity bias extractor (\\emph{cf. } Equation \\eqref{eq:bias-prediction}), and $\\pi - \\hat{\\theta}_{ui}$ is the upper bound that restricts $\\cos(\\cdot + M_{ui})$ to be a monotonically decreasing function.\nIntuitively, if a user-item pair $(u,i)$ is a hard interaction \\wx{that can hardly be} reconstructed from its popularity statistics, it holds a high value of $\\hat{\\xi}_{ui}$ and leads to a high value of $M_{ui}$; accordingly, BC loss imposes the large angular margin $M_{ui}$ between the negative item $j$ and the positive item $i$, and optimizes the representations of user $u$ and item $i$ to lower $\\hat{\\theta}_{ui}$.\nSee more details and analyses in Section \\ref{sec:three_advantages}.\n\n\\wx{It is noted that} BC loss is extremely easy to implement in recommendation tasks, requiring only several lines of code to be revised.\nMoreover, compared with softmax loss, BC loss only adds negligible computational complexity during training \n\\za{(\\emph{cf. } Table \\ref{tab:elapse_time})}\nbut achieves more discriminative representations. 
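As a rough sketch of that implementation, the following hypothetical, framework-free NumPy version computes the BC loss term for a single observed interaction, with all angles (from the CF model and the bias extractor) assumed precomputed and the temperature chosen arbitrarily:

```python
import numpy as np

def bc_loss_term(theta_ui, xi_ui, theta_uj, tau=0.1):
    """BC loss for one observed interaction (u, i), given angles in radians.

    theta_ui -- angle between user u and positive item i (CF model)
    xi_ui    -- angle from the popularity bias extractor for (u, i)
    theta_uj -- angles between u and the sampled negative items
    tau      -- softmax temperature (an arbitrary example value)
    """
    m_ui = min(xi_ui, np.pi - theta_ui)      # bias-aware margin with its upper bound
    pos = np.exp(np.cos(theta_ui + m_ui) / tau)
    neg = np.sum(np.exp(np.cos(np.asarray(theta_uj)) / tau))
    return -np.log(pos / (pos + neg))

# A harder interaction (larger xi_ui, i.e., poorly predicted by popularity alone)
# receives a larger margin, hence a larger loss for the same theta_ui.
easy = bc_loss_term(0.8, 0.2, [1.2, 1.5, 2.0])
hard = bc_loss_term(0.8, 1.0, [1.2, 1.5, 2.0])
```

Note that with `xi_ui = 0` the margin vanishes and the term reduces to the plain softmax loss, matching the intuition that interactions fully explained by popularity receive no extra margin.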
\nHence, we recommend \\wx{using} BC loss not only as a debiasing strategy to alleviate the popularity bias, but also as a standard loss in recommender models to enhance the discriminative power.\nNote that the modeling of $M_{ui}$ is worth exploring, such as the more complex version $M_{ui} = \\min\\{\\lambda \\cdot \\hat{\\xi}_{ui}, \\pi - \\hat{\\theta}_{ui}\\}$, where $\\lambda$ controls the strength of the bias-aware margin.\nMeanwhile, carefully designing a monotonically decreasing function would remove the upper-bound restriction.\nWe leave the exploration of the bias-aware margin to future work.\n\n\n\\section{Analyses of BC Loss}\\label{sec:three_advantages}\n\nWe analyze the desirable characteristics of BC loss.\nSpecifically, we start by presenting its geometric interpretation, and then show its theoretical properties \\emph{w.r.t.}~ compactness and dispersion of representations. The hard example mining mechanism of BC loss is discussed in Appendix \\ref{sec:hard_example}.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\subcaptionbox{Softmax Loss (2D)\\label{fig:2d_softmax}}{\n\t    \\vspace{-5pt}\n\t\t\\includegraphics[width=0.23\\linewidth]{figures\/2d-softmax.pdf}}\n\t\\subcaptionbox{BC Loss (2D)\\label{fig:2d_bc}}{\n\t    \\vspace{-5pt}\n\t\t\\includegraphics[width=0.23\\linewidth]{figures\/2d-bc.pdf}}\n\t\\subcaptionbox{Softmax Loss (3D)\\label{fig:3d_softmax}}{\n\t    \\vspace{-5pt}\n\t\t\\includegraphics[width=0.23\\linewidth]{figures\/3d-softmax.pdf}}\n\t\\subcaptionbox{BC Loss (3D)\\label{fig:3d_bc}}{\n\t    \\vspace{-5pt}\n\t\t\\includegraphics[width=0.23\\linewidth]{figures\/3d-bc.pdf}}\n\n\t\\caption{Geometric interpretation of softmax loss and BC loss on the 2D and 3D hyperspheres. 
\n\n\tThe dark red region indicates the discriminative user constraint, while the light red region is shown for comparison.}\n\t\\label{fig:geometric_interpretation}\n\t\\vspace{-10pt}\n\\end{figure}\n\n\\subsection{\\textbf{Geometric Interpretation}}\nHere we probe into the ranking criteria of softmax loss and BC loss from the geometric perspective.\nTo simplify the geometric interpretation, we analyze one user $u$ with one observed item $i$ and only two unobserved items $j$ and $k$.\nThen the posterior probability obtained by softmax loss is: $\\frac{\\exp{(\\cos(\\hat{\\theta}_{ui})\/\\tau)}}{\\exp{(\\cos(\\hat{\\theta}_{ui})\/\\tau)}+\\exp{(\\cos(\\hat{\\theta}_{uj})\/\\tau)}+\\exp{(\\cos(\\hat{\\theta}_{uk})\/\\tau)}}$.\nDuring training, softmax loss encourages the ranking criteria $\\hat{\\theta}_{ui} < \\hat{\\theta}_{uj}$ and $\\hat{\\theta}_{ui} < \\hat{\\theta}_{uk}$ to model the basic assumption that the observed interaction $(u,i)$ indicates more positive cues of user preference than the unobserved interactions $(u,j)$ and $(u,k)$.\n\nIntuitively, to make the ranking criteria more stringent, we can impose an angular margin $M_{ui}$ and establish the new criteria $\\hat{\\theta}_{ui} + M_{ui} < \\hat{\\theta}_{uj}$ and $\\hat{\\theta}_{ui} + M_{ui} < \\hat{\\theta}_{uk}$.\nDirectly formulating this idea yields the posterior probability of BC loss: $\\frac{\\exp{(\\cos(\\hat{\\theta}_{ui}+ M_{ui})\/\\tau)}}{\\exp{(\\cos(\\hat{\\theta}_{ui}+M_{ui})\/\\tau)}+\\exp{(\\cos(\\hat{\\theta}_{uj})\/\\tau)}+\\exp{(\\cos(\\hat{\\theta}_{uk})\/\\tau)}}$.\nObviously, BC loss is more rigorous about the ranking assumption than softmax loss. 
See Appendix \\ref{sec:hard_example} for more detailed explanations.\n\n\nWe then depict the geometric interpretation and comparison of softmax loss and BC loss in Figure \\ref{fig:geometric_interpretation}.\nAssume the learned representations of $i$, $j$, and $k$ are given, and softmax and BC losses are optimized to the same value.\nIn softmax loss, the constraint boundaries for correctly ranking user $u$'s preference are $\\hat{\\theta}_{ui} = \\hat{\\theta}_{uj}$ and $\\hat{\\theta}_{ui} = \\hat{\\theta}_{uk}$; whereas, in BC loss, the constraint boundaries are $\\hat{\\theta}_{ui} + M_{ui} = \\hat{\\theta}_{uj}$ and $\\hat{\\theta}_{ui} + M_{ui} = \\hat{\\theta}_{uk}$.\nGeometrically, moving from softmax loss (\\emph{cf. } Figure \\ref{fig:3d_softmax}) to BC loss (\\emph{cf. } Figure \\ref{fig:3d_bc}) shrinks the constraint region to a more stringent circle-like area on the unit sphere in the 3D case.\nFurther enlarging the margin $M_{ui}$ leads to a smaller hyperspherical cap, which is an explicit discriminative constraint on a manifold.\nAs a result, the limited constraint regions squeeze the tightness of similar items \\wx{and encourage the separation of dissimilar items.}\nMoreover, as the representation dimension increases, BC loss imposes more restricted learning requirements, exponentially decreasing the area of the constraint regions for correct ranking, and becomes progressively more powerful at learning discriminative representations.\n\n\n\n\n\n\n\n\\subsection{\\textbf{Theoretical Properties}}\nBC loss improves head and tail representation learning by enforcing the compactness of matched users and items, while imposing the dispersion of unmatched users and items. 
See the detailed proof in Appendix \\ref{sec:proof}.\n\n\n\n\n\n\\begin{restatable}{theorem}{bclosstheorem} \\label{theorem}\nLet $\\Mat{v}_u \\doteq \\psi(u)$, $\\Mat{v}_i \\doteq \\phi(i)$, and $\\Mat{c}_{u} = \\frac{1}{|\\Set{P}_u|}\\sum_{i\\in \\Set{P}_{u}}\\Mat{v}_i$, $\\Mat{c}_{i} = \\frac{1}{|\\Set{P}_{i}|} \\sum_{u\\in\\Set{P}_{i}} \\Mat{v}_{u}$, where $\\Set{P}_u = \\{i|y_{ui} =1\\}$ and $\\Set{N}_{u}=\\{i|y_{ui}=0\\}$ are the sets of user $u$'s positive and negative items, respectively; $\\Set{P}_i = \\{u|y_{ui} =1\\}$ is the set of item $i$'s positive users.\nAssuming the representations of users and items are normalized, the minimization of BC loss is equivalent to minimizing a compactness part and a dispersion part simultaneously:\n\\begin{align}\n    \\mathbf{\\mathop{\\mathcal{L}}}_{BC} \\geq \\underbrace{\\sum_{u\\in\\Set{U}}\\norm{\\Mat{v}_u-\\Mat{c}_u}^2 + \\sum_{i\\in\\Set{I}}\\norm{\\Mat{v}_i-\\Mat{c}_i}^2}_{\\text{Compactness part}}\\underbrace{-\\sum_{u\\in\\Set{U}}\\sum_{j\\in\\Set{N}_{u}}\\norm{\\Mat{v}_u-\\Mat{v}_j}^2}_{\\text{Dispersion part}}\n    \\propto \\underbrace{H(\\Mat{V}|Y)}_{\\text{Compactness}}\\:\\: \\underbrace{-H(\\Mat{V})}_{\\text{Dispersion}}.\n\\end{align}\n\\end{restatable}\n\n\n\\noindent\\textbf{Discussion.}\n$\\Mat{c}_{u}$ is the averaged representation of all items that $u$ has interacted with, which describes $u$'s interests; similarly, $\\Mat{c}_{i}$ profiles item $i$'s user group.\nFor the compactness part, BC loss forces the user's positive items to be user-centric and vice versa.\n\\wx{From} the entropy perspective, the compactness part tends to learn a low-entropy cluster for positive interactions, \\emph{i.e., } high compactness for similar users and items.\nFor the dispersion part, for users and items from unobserved interactions, BC loss maximizes the pairwise Euclidean distance between their representations and encourages them to be distant from each other.\n\\wx{Hence, from} the entropy viewpoint, the dispersion part levers 
the spread of representations to learn a high-entropy representation space, \\emph{i.e., } a large separation degree for dissimilar users and items.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Experiments}\nWe aim to answer the following research questions:\n\\begin{itemize}[leftmargin=*]\n    \\item \\textbf{RQ1:} How does BC loss perform compared with debiasing strategies in various evaluations?\n    \\wx{\\item \\textbf{RQ2:} Does BC loss cause a trade-off between head and tail performance?}\n    \\item \\textbf{RQ3:} What are the impacts of the components (\\emph{e.g., } temperature, margin) on BC loss?\n\\end{itemize}\n\n\\noindent\\textbf{Baselines \\& Datasets.}\nWe compare with SOTA debiasing strategies from various research lines: sample re-weighting (IPS-CN \\cite{IPS-CN}), bias removal by causal inference (MACR \\cite{MACR}, CausE \\cite{CausE}), and regularization-based frameworks (sam+reg \\cite{Regularized_Optimization}).\nExtensive experiments are conducted on \\za{eight} real-world benchmark datasets: Tencent \\cite{Tencent}, Amazon-Book \\cite{Amazon-Book}, Alibaba-iFashion \\cite{Alibaba-ifashion}, Yelp2018 \\cite{LightGCN}, Douban Movie \\cite{Douban}, \\za{Yahoo!R3 \\cite{Yahoo}, Coat \\cite{ips},} and KuaiRec \\cite{KuaiRec}.\nFor comprehensive comparisons, almost all standard test distributions in CF are covered in the experiments: balanced test sets \\cite{MACR,DICE,KDCRec}, randomly selected imbalanced test sets \\cite{ESAM,ZhuWC20}, temporally split test sets \\cite{PDA,DecRS,Regularized_Optimization}, and unbiased test sets \\cite{ips,KuaiRec,Yahoo}.\nSee more experiments on KuaiRec, \\za{Yahoo!R3, and Coat} for unbiased test evaluation in Appendix \\ref{sec:KuaiRec}, and \\za{more comparison results between BC loss and other standard losses (the most widely used BPR \\cite{BPR}, and the recently proposed CCL \\cite{CCL} and SSM \\cite{SSM}) in Appendix \\ref{sec:standard_loss}.}\n\n\\begin{table*}[t]\n\\caption{Overall debiasing performance comparison in balanced and 
imbalanced test sets. \n}\n\\vspace{-5pt}\n\\label{tab:overall}\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{l|cc|cc|cc|cc|cc|cc}\n\\toprule\n\\multicolumn{1}{c|}{} & \\multicolumn{4}{c|}{Tencent} & \\multicolumn{4}{c|}{Amazon-book} & \\multicolumn{4}{c}{Alibaba-iFashion} \\\\\n\\multicolumn{1}{c|}{} &\n \\multicolumn{2}{c}{Balanced} &\n \\multicolumn{2}{c|}{Imbalanced} &\n \\multicolumn{2}{c}{Balanced} &\n \\multicolumn{2}{c|}{Imbalanced} &\n \\multicolumn{2}{c}{Balanced} &\n \\multicolumn{2}{c}{Imbalanced} \\\\\n \n\\multicolumn{1}{c|}{} & Recall & NDCG & Recall & NDCG & Recall & NDCG & Recall & NDCG & Recall & NDCG & Recall & NDCG \\\\\n\\midrule\nMostPop & 0.0002 & 0.0002 & 0.0384 & 0.0208 & 0.0001 & 0.0001 & 0.0102 & 0.0063 & 0.0003 & 0.0001 & 0.0212 & 0.0084 \\\\\\midrule\nMF & 0.0052 & 0.0040 & \\underline{0.0982} & \\underline{0.0643} & 0.0109 & 0.0103 & \\underline{0.0856} & \\underline{0.0638} & 0.0056 & 0.0028 & \\underline{0.0843} & \\underline{0.0411} \\\\\n+ IPS-CN & \\underline{0.0075} & \\underline{0.0058} & 0.0686 & 0.0421 & 0.0132 & 0.0123 & 0.0765 & 0.0554 & 0.0050 & 0.0027 & 0.0551 & 0.0255 \\\\\n+ CausE & 0.0056 & 0.0043 & 0.0687 & 0.0468 & 0.0115 & 0.0105 & 0.0720 & 0.0551 & 0.0005 & 0.0003 & 0.0185 & 0.0086 \\\\\n+ sam+reg & 0.0070 & 0.0054 & 0.0406 & 0.0266 & 0.0141 & 0.0132 & 0.0599 & 0.0443 & 0.0067 & 0.0032 & 0.0305 & 0.0146 \\\\\n+ MACR & 0.0067 & 0.0046 & 0.0326 & 0.0241 & \\underline{0.0181} & \\underline{0.0146} & 0.0292 & 0.0229 & \\underline{0.0086} & \\underline{0.0041} & 0.0650 & 0.0331 \\\\\n+ BC Loss & \\textbf{0.0087}* & \\textbf{0.0068}* & \\textbf{0.1298}* & \\textbf{0.0904}* & \\textbf{0.0221}* & \\textbf{0.0202}* & \\textbf{0.1198}* & \\textbf{0.0948}* & \\textbf{0.0095}* & \\textbf{0.0048}* & \\textbf{0.0967}* & \\textbf{0.0487}* \\\\\\midrule\nImp. 
\\% & 16.0\\% & 17.2\\% & 32.2\\% & 40.1\\% & 22.1\\% & 38.4\\% & 40.0\\% & 49.6\\% & 10.5\\% & 17.1\\% & 14.7\\% & 18.5\\% \\\\\\midrule\\midrule\nLightGCN & 0.0055 & 0.0042 & \\underline{0.1065} & \\underline{0.0712} &0.0123 & 0.0116 & \\underline{0.0941} & \\underline{0.0724} & 0.0036 & 0.0017 & \\underline{0.0660} & \\underline{0.0322} \\\\\n+ IPS-CN & 0.0072 & 0.0054 & 0.0900 & 0.0599 & 0.0148 & 0.0136 & 0.0836 & 0.0639 & 0.0038 & 0.0017 & 0.0658 & 0.0317 \\\\\n+ CausE & 0.0055 & 0.0040 & 0.0966 & 0.0665 & 0.0134 & 0.0121 & 0.0926 & 0.0717 & 0.0029 & 0.0013 & 0.0449 & 0.0221 \\\\\n+ sam+reg & \\underline{0.0076} & \\underline{0.0056} & 0.0653 & 0.0436 & 0.0157 & 0.0149 & 0.0773 & 0.0600 & \\underline{0.0056} & \\underline{0.0027} & 0.0502 & 0.0252 \\\\\n+ MACR & 0.0075 & 0.0050 & 0.0731 & 0.0532 & \\underline{0.0183} & \\underline{0.0153} & 0.0767 & 0.0600 & 0.0033 & 0.0015 & 0.0475 & 0.0238 \\\\\n+ BC Loss & \\textbf{0.0095}* & \\textbf{0.0073}* & \\textbf{0.1194}* & \\textbf{0.0832}* & \\textbf{0.0257}* & \\textbf{0.0227}* & \\textbf{0.1123}* & \\textbf{0.0903}* & \\textbf{0.0077}* & \\textbf{0.0037}* & \\textbf{0.0992}* & \\textbf{0.0510}* \\\\\\midrule\nImp. \\% & 25.0\\% & 30.1\\% & 12.1\\% & 16.9\\% & 40.4\\% & 48.4\\% & 19.3\\% & 24.7\\% & 37.5\\% & 37.0\\% & 50.3\\% & 58.4\\% \\\\\n\\bottomrule\n\\end{tabular}}\n\\vspace{-15pt}\n\\end{table*}\n\n\n\n\\subsection{Performance Comparison (RQ1)}\n\n\\subsubsection{\\textbf{Evaluations on Imbalanced and Balanced Test Sets}}\n\\noindent\\textbf{Motivation.}\nMany prevalent debiasing methods assume that test distribution is known in advance \\cite{MACR,DICE,ESAM}, \\emph{i.e., } the validation set has similar distribution with the test set. 
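For concreteness, the balanced and imbalanced splits used in this section can be sketched in a few lines. This is a hypothetical illustration rather than the authors' released code: it assumes interactions arrive as a pandas DataFrame with user and item columns, and it reads sampling "with equal probability w.r.t. items" as inverse-popularity weighting of interactions.

```python
import numpy as np
import pandas as pd

def split_interactions(df: pd.DataFrame, seed: int = 0):
    """Hypothetical sketch of the splits in this section:
    a 15% balanced test set drawn with equal probability w.r.t. items,
    then a long-tailed 60:10:15 split of the remaining interactions."""
    rng = np.random.default_rng(seed)
    # Inverse-popularity weights: every item is equally likely
    # to contribute an interaction to the balanced test set.
    pop = df["item"].map(df["item"].value_counts())
    w = (1.0 / pop).to_numpy()
    n_bal = int(0.15 * len(df))
    bal_pos = rng.choice(len(df), size=n_bal, replace=False, p=w / w.sum())
    balanced_test = df.iloc[bal_pos]
    # The remainder keeps the original long-tailed distribution.
    rest = df.drop(df.index[bal_pos]).sample(frac=1.0, random_state=seed)
    n_train = int(len(df) * 0.60)
    n_val = int(len(df) * 0.10)
    train = rest.iloc[:n_train]
    val = rest.iloc[n_train:n_train + n_val]
    imbalanced_test = rest.iloc[n_train + n_val:]
    return train, val, balanced_test, imbalanced_test
```

Under this reading, head items are heavily down-weighted when forming the balanced test set, approximating a uniform test distribution over items; the exact construction in the paper may differ in details such as per-user filtering.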
\nMoreover, existing works typically evaluate on only an imbalanced or a balanced test set.\nHowever, in real-world applications, the test distributions are usually unavailable and can even reverse the popularity prior of the training distribution.\nWe conjecture that a good debiasing recommender should perform well on both imbalanced and balanced test distributions. \nIn our settings, no information about the balanced test is provided in advance.\n\n\\noindent\\textbf{Data Splits.}\nThe models are identical across both imbalanced and balanced evaluations.\nThe test distribution in the balanced evaluation is uniform, \\emph{i.e., } we randomly sample $15\\%$ of interactions with equal probability \\emph{w.r.t.}~ items.\nBesides, the test splits for the imbalanced test are similarly long-tailed to the train and validation sets, \\emph{i.e., } we randomly split the remaining interactions into training, validation, and imbalanced test sets ($60\\%:10\\%:15\\%$).\n\n\\noindent\\textbf{Results.} Table \\ref{tab:overall} reports the comparison of performance in imbalanced and balanced test evaluations.\nThe best-performing methods are in bold and starred, while the strongest baselines are underlined; Imp.\\% measures the relative improvements of BC loss over the strongest baselines.\nWe observe that:\n\\begin{itemize} [leftmargin = *]\n\\item \\textbf{BC loss significantly outperforms the state-of-the-art baselines \nin both balanced and imbalanced evaluations across all datasets.}\nIn particular, it achieves consistent improvements over the best debiasing baselines and original CF models by $12.1\\%\\sim 58.4\\%$.\nThis clearly demonstrates that BC loss not only effectively alleviates the amplification of popularity bias but also improves the discriminative power of representations.\n\\wx{Moreover, Table \\ref{tab:elapse_time} shows the computational costs of all methods. 
Compared to the backbone models, BC loss adds only negligible time complexity.}\n\n\\item \\textbf{Debiasing baselines sacrifice the imbalanced performance and perform inconsistently across datasets.}\nDebiasing strategies generally achieve higher balanced performance at the expense of a large imbalanced performance drop. \nSpecifically, the strongest baselines over all imbalanced test sets are \\wx{the} original CF models.\nWorse still, as the degree of data sparsity increases, some debiasing methods fail to quantify the popularity bias, which limits their bias-removal ability. \nFor example, on the sparsest Alibaba-iFashion dataset, the results of MF+IPS-CN, MF+CausE, LightGCN+MACR, and LightGCN+CausE in the balanced evaluation are lower than those of the original CF models (MF or LightGCN).\nIn contrast, benefiting from the popularity bias-aware margin, BC loss can learn discriminative representations that enable a deeper understanding of users and items, leading to higher head and tail recommendation quality.\n\\end{itemize}\n\n\n\\subsubsection{\\textbf{Evaluations on Temporal Split Test Set}}\n\\noindent\\textbf{Motivation.}\n\\begin{wraptable}{r}{0.6\\columnwidth}\n    \\centering\n    \\vspace{-25pt}\n    \\caption{The performance comparison on the Douban dataset. 
}\n \\label{tab:time_split}\n \\vspace{-5pt}\n \\resizebox{0.6\\columnwidth}{!}{\n \\begin{tabular}{l|ccc|ccc}\n \\toprule\n & \\multicolumn{3}{c|}{MF} & \\multicolumn{3}{c}{LightGCN} \\\\\n \\multicolumn{1}{c|}{} & HR & Recall & NDCG & HR & Recall & NDCG \\\\\\midrule\n Backbone & \\underline{0.2924} & \\underline{0.0294} & \\underline{0.0472} & \\underline{0.3543} & \\underline{0.0313} & \\underline{0.0602} \\\\\n + IPS-CN & 0.2514 & 0.0174 & 0.0324 & 0.3212 & 0.0261 & 0.0502 \\\\\n + CausE & 0.2725 & 0.0203 & 0.0376 & 0.3403 & 0.0275 & 0.0514 \\\\\n + sam+reg & 0.2826 & 0.0191 & 0.0390 & 0.2944 & 0.0252 & 0.0488 \\\\\n + MACR & 0.1084 & 0.0087 & 0.0163 & 0.3127 & 0.0271 & 0.0519 \\\\\n + BC loss & \\textbf{0.3742}* & \\textbf{0.0324}* & \\textbf{0.0601}* & \\textbf{0.3562}* & \\textbf{0.0346}* & \\textbf{0.0652}* \\\\\\midrule\n Imp. \\% & 28.0\\% & 10.2\\% & 27.3\\% & 0.5\\% & 10.4\\% & 8.3\\% \\\\\\bottomrule\n \\end{tabular}}\n \\vspace{-15pt}\n\\end{wraptable}\nIn real applications, popularity bias dynamically changes over time.\nHere we consider temporal split test evaluation on Douban Movie where the historical interactions are sliced into the training, validation, and test sets (7:1:2) according to the timestamps.\n\n\n\\noindent\\textbf{Results.}\nAs Table \\ref{tab:time_split} shows, BC loss is steadily superior to all baselines \\emph{w.r.t.}~ all metrics on Douban Movie.\nFor instance, it achieves significant improvements over the MF and LightGCN backbones \\emph{w.r.t.}~ Recall@20 by 10.2\\% and 10.4\\%, respectively.\nThis validates that BC loss endows the backbone models with better robustness against the popularity distribution shift and alleviates the negative influence of popularity bias.\nSurprisingly, none of the debiasing baselines could maintain a comparable performance to the backbones.\nWe ascribe the failure to their preconceived idea of tail items, which possibly change over time.\n\n\n\n\n\n\n\\begin{table*}[t]\n \\caption{The 
performance evaluations of head, mid, and tail on Tencent dataset. \n \n }\n \\label{tab:subgroups}\n \\vspace{-5pt}\n \\resizebox{\\textwidth}{!}{\\begin{tabular}{l|llll|llll}\n \\toprule\n \\multicolumn{1}{c|}{} & \\multicolumn{4}{c|}{Balanced NDCG@20} & \\multicolumn{4}{c}{Imbalanced NDCG@20} \\\\\n \\multicolumn{1}{c|}{} & \\multicolumn{1}{c}{Tail} & \\multicolumn{1}{c}{Mid} & \\multicolumn{1}{c}{Head} & \\multicolumn{1}{c|}{Overall} & \\multicolumn{1}{c}{Tail} & \\multicolumn{1}{c}{Mid} & \\multicolumn{1}{c}{Head} & \\multicolumn{1}{c}{Overall} \\\\\\midrule\n MF & 0.00004 & 0.00097 & 0.01250 & 0.00402 & 0.00021 & 0.00197 & 0.06837 & 0.06431 \\\\\n + IPS-CN & $0.00009 ^{\\color{+}+125 \\%}$ & $ 0.00212^{\\color{+}+ 119\\%}$ & $ 0.01684^{\\color{+}+ 35\\%}$ & $ 0.00575^{\\color{+}+43 \\%}$ & $0.00056 ^{\\color{+}+167 \\%}$ & $0.00401 ^{\\color{+}+ 104\\%}$ & $ 0.04439^{\\color{-}-35\\%}$ & $ 0.04205^{\\color{-}-35\\%}$ \\\\\n + CausE & $0.00008 ^{\\color{+}+ 100\\%}$ & $ 0.00149^{\\color{+}+54 \\%}$ & $0.01168 ^{\\color{-}-7\\%}$ & $ 0.00430^{\\color{+}+ 7\\%}$ & $ 0.00038^{\\color{+}+81 \\%}$ & $0.00253 ^{\\color{+}+ 28\\%}$ & $ 0.04876^{\\color{-}- 29\\%}$ & $ 0.04680^{\\color{-}- 27\\%}$ \\\\\n + sam-reg & $0.00006 ^{\\color{+}+50 \\%}$ & $0.00135 ^{\\color{+}+39 \\%}$ &\n $ 0.01573^{\\color{+}+ 26\\%}$ & $ 0.00535^{\\color{+}+ 33\\%}$ & $0.00011 ^{\\color{-}-48 \\%}$ &$0.00281 ^{\\color{+}+43 \\%}$ & $0.02850 ^{\\color{-}-58 \\%}$ & $0.02661 ^{\\color{-}-59 \\%}$\\\\\n + MACR & $\\mathbf{0.00188} ^{\\color{+}+ 4600\\%}$ & $\\mathbf{0.00521} ^{\\color{+}+437 \\%}$ & $0.00555 ^{\\color{-}-56 \\%}$ & $0.00456 ^{\\color{+}+13 \\%}$ & $\\mathbf{ 0.00370}^{\\color{+}+ 1662\\%}$ & $ 0.00615^{\\color{+}+ 212\\%}$ & $ 0.02748^{\\color{-}- 60\\%}$ & $ 0.02413^{\\color{-}- 62\\%}$ \\\\\n + BC loss & $ 0.00024^{\\color{+}+ 500\\%}$ & $ 0.00355^{\\color{+}+ 266\\%}$ & $ \\mathbf{0.01831}^{\\color{+}+46 \\%}$ & $ \\mathbf{0.00680} ^{\\color{+}+ 69\\%}$ & $0.00142 
^{\\color{+}+ 576\\%}$ & $ \\mathbf{0.00712}^{\\color{+}+261 \\%}$ & $\\mathbf{0.09552} ^{\\color{+}+ 40\\%}$ & $\\mathbf{0.09040} ^{\\color{+}+ 41\\%}$ \\\\\\midrule\\midrule\n LightGCN & 0.00025 & 0.00193 & 0.01136 & 0.00417 & 0.00094 & 0.00391 & 0.07561 & 0.07121 \\\\\n + IPS-CN & $0.00140 ^{\\color{+}+460 \\%}$ & $0.00241 ^{\\color{+}+25 \\%}$ & $0.01560 ^{\\color{+}+ 37\\%}$ & $0.00544 ^{\\color{+}+ 30\\%}$ & $0.00109 ^{\\color{+}+ 16\\%}$ & $0.00522 ^{\\color{+}+34 \\%}$ & $ 0.06333^{\\color{-}- 16\\%}$ & $ 0.05993^{\\color{-}- 16\\%}$ \\\\\n + CausE & $ 0.00006^{\\color{-}- 76\\%}$ & $ 0.00138^{\\color{-}- 29\\%}$ & $ 0.01177^{\\color{+}+ 4\\%}$ & $0.00403 ^{\\color{-}- 3\\%}$ & $ 0.00040^{\\color{-}-57 \\%}$ & $0.00279 ^{\\color{-}- 29\\%}$ & $0.06996 ^{\\color{-}-7 \\%}$ & $0.06650 ^{\\color{-}-7 \\%}$ \\\\\n + sam-reg & $0.00006 ^{\\color{-}-76 \\%}$ & $0.00120 ^{\\color{-}-38 \\%}$ & $ 0.01727^{\\color{+}+52 \\%}$ & $0.00560 ^{\\color{+}+ 34\\%}$ & $0.00024 ^{\\color{-}-74 \\%}$ & $0.00253 ^{\\color{-}- 35\\%}$ & $0.04647 ^{\\color{-}-39 \\%}$ & $0.04355 ^{\\color{-}- 39\\%}$ \\\\\n + MACR & $\\mathbf{0.00287} ^{\\color{+}+ 1048\\%}$ & $\\mathbf{0.00461} ^{\\color{+}+139 \\%}$ & $0.00454 ^{\\color{-}- 60\\%}$ & $0.00501 ^{\\color{+}+20 \\%}$ & $ \\mathbf{0.00389}^{\\color{+}+ 313\\%}$ & $ \\mathbf{0.00635}^{\\color{+}+ 62\\%}$ & $0.04058 ^{\\color{-}- 46\\%}$ & $0.05323 ^{\\color{-}-25 \\%}$ \\\\\n + BC loss & $0.00057 ^{\\color{+}+ 128\\%}$ & $0.00321 ^{\\color{+}+ 66\\%}$ & $\\mathbf{0.01943} ^{\\color{+}+ 71\\%}$ & $\\mathbf{0.00730} ^{\\color{+}+ 75\\%}$ & $ 0.00125^{\\color{+}+33 \\%}$ & $0.00516 ^{\\color{+}+ 32\\%}$ & $ \\mathbf{0.08823}^{\\color{+}+ 17\\%}$ & $ \\mathbf{0.08320}^{\\color{+}+17 \\%}$\\\\\\bottomrule\n \\end{tabular}}\n \\vspace{-10pt}\n \\end{table*}\n\n\\subsection{Head, Mid, \\& Tail Performance (RQ2)}\n\n\\noindent\\textbf{Motivation.}\nTo further evaluate whether BC loss lifts the tail performance by inevitably sacrificing the 
head performance, we divide the test set of Tencent into three subgroups according to the interaction number of each item: head (popular items in the top third), mid (normal items in the middle), and tail (unpopular items in the bottom third).\nMost previous studies focus on average NDCG@20 for evaluation, especially in balanced test evaluations \\cite{MACR,DICE}. \nHowever, average metrics could be insufficient to reflect the performance of each subgroup.\nA trivial solution to achieve high performance is promoting the rankings of low-popularity items in the recommendations.\nIn this case, the average metrics alone are not reliable on the balanced test.\nTherefore, we report the performance of individual subgroups on both balanced and imbalanced test sets for a more comprehensive comparison.\n\n\\noindent\\textbf{Results.}\nTable \\ref{tab:subgroups} shows the evaluations of the head, mid, and tail subgroups.\nThe red and blue percentages refer to the improvement and decline, respectively, of each method relative to the original CF model (MF or LightGCN).\nWe find that:\n\\begin{itemize}[leftmargin = *]\n    \\item \\textbf{BC loss is the only method that consistently yields remarkable improvements in every subgroup.}\n    With a closer look at the head evaluation, BC loss shows its ability to learn more discriminative representations for popular items across imbalanced and balanced settings.\n    In particular, it achieves significant improvements over MF and LightGCN \\emph{w.r.t.}~ head NDCG by 40\\% and 17\\% in the imbalanced test evaluation, respectively.\n    We attribute these improvements to the bias-aware margin, which boosts the recommendation quality for both tail and head items.\n    \n    \\item \\textbf{As the performance comparison among subgroups in the imbalanced scenario shows, the baselines enhance the tail performance but sacrifice the head performance.}\n    \n    Specifically, these baselines hardly maintain the head performance and show a clear 
trade-off trend between the head and tail performance.\n    Taking MACR as an example, although it achieves a great improvement ($+1662\\%$) over MF in the tail subgroup, it brings a dramatic drop ($-60\\%$) in the head subgroup, which lowers the overall performance by $-62\\%$. \n    We ascribe this trade-off to blindly promoting the rankings of tail items, whether or not they match user interests, rather than improving the discriminative power of representations.\n\\end{itemize}\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\subcaptionbox{Varying margins.\\label{fig:margin}}{\n\t    \\vspace{-5pt}\n\t\t\\includegraphics[width=0.43\\linewidth]{figures\/margin_tencent_mf.pdf}}\n\t\\subcaptionbox{MF+BC loss\\label{fig:tau_mf}}{\n\t    \\vspace{-5pt}\n\t\t\\includegraphics[width=0.26\\linewidth]{figures\/effect_tau_mf.pdf}}\n\t\\subcaptionbox{LightGCN+BC loss\\label{fig:tau_Lightgcn}}{\n\t    \\vspace{-5pt}\n\t\t\\includegraphics[width=0.26\\linewidth]{figures\/effect_tau_lgn.pdf}}\n\t\\vspace{-5pt}\n\t\\caption{(a) Comparisons with a varying margin; (b-c) Temperature $\\tau$ sensitivity analysis on Tencent.}\n\t\\label{fig:study-bc-loss}\n\t\\vspace{-10pt}\n\\end{figure*}\n\n\n\\subsection{Study on BC Loss (RQ3)}\n\\label{sec:ablation}\n\n\\textbf{Effect of Bias-aware Margin.}\nFigure \\ref{fig:margin} compares the performance on the balanced and imbalanced test sets of Tencent among softmax loss, BC loss with a constant margin $M$ \\cite{ArcFace}, and BC loss with the adaptive bias-aware margin.\nBC loss achieves the best performance, illustrating that the bias-aware margin is indeed effective at reducing popularity bias and learning high-quality representations.\n\n\\textbf{Effect of Temperature $\\tau$.}\nBC loss has one hyperparameter to tune --- the temperature $\\tau$ in Equation \\eqref{equ:bc_loss}.\nIn Figures \\ref{fig:tau_mf} and \\ref{fig:tau_Lightgcn}, both balanced and imbalanced evaluations exhibit concave unimodal curves \\emph{w.r.t.}~ $\\tau$, where the curves reach the peak almost 
synchronously within a small range of $\\tau$. For example, MF+BC loss achieves the best performance at $\\tau=0.05$ and $\\tau=0.07$ in the balanced and imbalanced settings, respectively.\nWe observe similar trends on the other datasets and omit them due to the space limit.\nThis justifies that BC loss does not suffer from a trade-off between the balanced and imbalanced evaluations and improves generalization without sacrificing the head performance.\n\n\\section{Related Work}\nPrevalent popularity debiasing strategies in CF roughly fall into four research lines.\n\n\\textbf{Post-processing re-ranking methods} \\cite{Calibration,RankALS,rescale,FPC,PC_Reg} are applied to the output of the recommender system without changing the representations of users and items. \nThe purposes of modifying the model's rankings vary: Calibration \\cite{Calibration} ensures that the proportions of a user's past interests are maintained in the recommendations; RankALS \\cite{RankALS} aims to increase the diversity of recommendations; \nFPC \\cite{FPC} investigates the popularity bias in dynamic recommendation by rescaling the predicted scores.\n\n\\textbf{Regularization-based frameworks} \\cite{ESAM,ALS+Reg,Regularized_Optimization,PC_Reg} explore the use of regularization that provides a tunable mechanism for controlling the trade-off between recommendation accuracy and coverage.\nThese methods differ in the design of their penalty terms: \nALS+Reg \\cite{ALS+Reg} defines intra-list distance as the penalty to achieve fair recommendation; ESAM \\cite{ESAM} introduces attribute correlation alignment, center-clustering, and self-training regularization to learn good feature representations; sam-reg \\cite{Regularized_Optimization} regularizes the biased correlation between user-item relevance and item popularity; Reg \\cite{PC_Reg} decouples item popularity from the model's preference predictions.\n\n\\textbf{Sample 
re-weighting methods} \\cite{ips,IPS-C,IPS-CN,Propensity_SVM-Rank,YangCXWBE18,UBPR}, also known as Inverse Propensity Score (IPS) methods, view the item popularity in the training set as the propensity score and exploit its inverse to re-weight the loss of each instance.\nTo address the high variance of the re-weighted loss, many of them \\cite{IPS-CN,IPS-C} further employ normalization or smoothing penalties to attain a more stable output.\nHowever, these methods are unreliable because they estimate the propensity score from item frequency alone, failing to consider interaction-wise popularity bias.\n\n\\textbf{Bias removal by causal inference methods} \\cite{CausE,KDCRec, DICE, PDA, DecRS, MACR}, drawing inspiration from the recent success of counterfactual inference, specify the role of popularity bias in assumed causal graphs and mitigate the bias effect on the prediction. \nHowever, the causal structure is heuristically assumed based on the authors' understanding, without any theoretical guarantee.\n\nBC loss opens up a new possibility beyond conventional debiasing methods in CF: mitigating popularity bias by enhancing the discriminative power of representations. \nRecent studies on boosting the discriminative power of feature spaces via modified softmax losses mainly appear in face recognition, where a constant margin is added for better classification \\cite{ArcFace}.\nWe transfer this idea to CF and compare it with BC loss in Figure \\ref{fig:margin}.\n\n\n\\section{Conclusion} \\label{sec:conclusion}\nDespite the great success in collaborative filtering, today's popularity debiasing methods are still far from \\wx{being able to improve} the recommendation quality. 
\nIn this work, we proposed a simple yet effective BC loss, which utilizes a popularity bias-aware margin to mitigate popularity bias.\nGrounded in theoretical proof, a clear geometric interpretation, and a real-world visualization study, BC loss boosts the head and tail performance by learning a more discriminative representation space.\nExtensive experiments verify that the remarkable improvements in head and tail evaluations on various test sets indeed come from \\wx{the} better representations rather than \\wx{simply} catering to the tail.\n\nThe limitations of BC loss are in three respects, which will be addressed in future work: 1) the modeling of the bias-aware margin, which could significantly influence the performance of BC loss, is worth further exploration; 2) multiple important biases, such as exposure and selection bias, are not considered; and 3) more experiments comparing BC loss to standard CF losses (\\emph{e.g., } cross-entropy, WARP) are needed to \\wx{further} demonstrate the power of BC loss in regular recommendation tasks (\\za{See comparison to BPR, CCL and SSM in Appendix \\ref{sec:standard_loss}}). 
\nWe believe \\wx{that} this work provides a potential research direction to diagnose the debiasing of long-tail ranking and will inspire more works.\n\\section*{Checklist}\n\n\n\n\n\\begin{enumerate}\n\n\n\\item For all authors...\n\\begin{enumerate}\n \\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?\n \\answerYes{}\n \\item Did you describe the limitations of your work?\n \\answerYes{See Section~\\ref{sec:conclusion}.}\n \\item Did you discuss any potential negative societal impacts of your work?\n \\answerNo{This paper proposes a novel debiasing algorithm for recommendation system, which does not have any negative societal impacts.}\n \\item Have you read the ethics review guidelines and ensured that your paper conforms to them?\n \\answerYes{}\n\\end{enumerate}\n\n\n\\item If you are including theoretical results...\n\\begin{enumerate}\n \\item Did you state the full set of assumptions of all theoretical results?\n \\answerYes{}\n \\item Did you include complete proofs of all theoretical results?\n \\answerYes{See Appendix~\\ref{sec:proof}.}\n\\end{enumerate}\n\n\n\\item If you ran experiments...\n\\begin{enumerate}\n \\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?\n \\answerYes{We include code by URL in abstract.}\n \\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?\n \\answerYes{See Appendix~\\ref{sec:implementation}.}\n \\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?\n \\answerYes{}\n \\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?\n \\answerYes{See Appendix~\\ref{sec:implementation}.}\n\\end{enumerate}\n\n\n\\item If you are using existing assets (e.g., code, data, 
models) or curating\/releasing new assets...\n\\begin{enumerate}\n \\item If your work uses existing assets, did you cite the creators?\n \\answerYes{}\n \\item Did you mention the license of the assets?\n \\answerYes{}\n \\item Did you include any new assets either in the supplemental material or as a URL?\n \\answerYes{}\n \\item Did you discuss whether and how consent was obtained from people whose data you're using\/curating?\n \\answerNA{}\n \\item Did you discuss whether the data you are using\/curating contains personally identifiable information or offensive content?\n \\answerNA{}\n\\end{enumerate}\n\n\n\\item If you used crowdsourcing or conducted research with human subjects...\n\\begin{enumerate}\n \\item Did you include the full text of instructions given to participants and screenshots, if applicable?\n \\answerNA{}\n \\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?\n \\answerNA{}\n \\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?\n \\answerNA{}\n\\end{enumerate}\n\n\n\\end{enumerate}\n\n\n\n\\section{In-depth Analysis of BC loss}\n\\subsection{Visualization of A Toy Experiment} \\label{sec:toy_example}\n\n\n\nHere we visualize a toy example on the Yelp2018 \\cite{LightGCN} dataset to showcase the effect of BC loss.\nSpecifically, we train a two-layer LightGCN whose embedding size is three, and illustrate the 3-dimensional normalized representations on a 3D unit sphere in Figure \\ref{fig:toy-example} (See the magnified view in Figure \\ref{fig:magnified-toy-example}).\nWe train the identical LightGCN backbone with different loss functions: BPR loss, softmax loss, BC loss, and IPS-CN \\cite{IPS-CN}.\nFor the same head\/tail user (\\emph{i.e., } green stars), we plot 500 items in the unit sphere covering all positive items (\\emph{i.e., } red dots) and randomly-selected negative items 
(\\emph{i.e., } blue dots) from both the training and testing sets.\nMoreover, the angle distribution (the second row of each subfigure) of positive and negative items for a certain user quantitatively shows the discriminative power of each loss.\nWe observe that:\n\\begin{itemize}[leftmargin = *]\n    \\item \\textbf{BC loss learns more discriminative representations in both head and tail user cases. Moreover, BC loss learns a more reasonable representation distribution that is locally clustered and globally separated.} \n    As Figures \\ref{fig:head_bc} and \\ref{fig:tail_bc} show, for head and tail users, BC loss encourages around 40\\% and 55\\% of positive items to fall into the group closest to the user representations, respectively.\n    In other words, these item representations are clustered into a small region.\n    BC loss also achieves the smallest mean positive angle.\n    This verifies that BC loss tends to learn high compactness for similar items\/users.\n    Moreover, Figure \\ref{fig:tail_bc} presents a clear margin between positive and negative items, reflecting highly discriminative power.\n    Compared to softmax loss in Figures \\ref{fig:head_softmax} and \\ref{fig:tail_softmax}, the compactness and dispersion properties of BC loss come from the incorporation of the interaction-wise bias-aware margin.\n    \n    \\item \\textbf{The representations learned by standard CF losses (BPR loss and softmax loss) are not discriminative enough.}\n    Under the supervision of BPR and softmax losses, item representations are scattered over a wide range of the unit sphere, where blue and red points occupy almost the same area, as Figures \\ref{fig:head_bpr} and \\ref{fig:head_softmax} demonstrate.\n    Furthermore, Figure \\ref{fig:tail_bpr} shows only a negligible overlap between the angle distributions of positive and negative items.\n    However, as negative items far outnumber positive items for the tail user, even a small overlap will make many 
irrelevant items rank higher than the relevant items, thus significantly hindering the recommendation accuracy.\n    Hence, directly optimizing BPR or softmax loss might be suboptimal for personalized recommendation tasks. \n    \n    \\item \\textbf{IPS-CN, a well-known popularity debiasing method in CF, is prone to lift the tail performance by sacrificing the representation learning for the head.}\n    Compared with BPR loss in Figure \\ref{fig:tail_bpr}, IPS-CN learns better item representations for the tail user, achieving a smaller mean positive angle as illustrated in Figure \\ref{fig:tail_ips}.\n    However, for the head user in Figure \\ref{fig:head_ips}, the positive and negative item representations are mixed and cannot be easily distinguished. Worse still, the representations learned by IPS-CN have a larger mean positive angle for the head user compared to BPR loss. \n    This results in a dramatic performance drop for head evaluations.\n\\end{itemize}\n\n\\subsection{Hard Example Mining Mechanism: A Desirable Property of BC Loss}\\label{sec:hard_example}\n\nWe argue that the mechanism of adaptively mining hard interactions is inherent in BC loss, which improves the efficiency and effectiveness of training.\nDistinct from softmax loss, which relies on predictive scores to mine hard negative samples \\cite{sgl} and leaves the popularity bias untouched, our BC loss considers the interaction-wise biases and adaptively locates hard and informative interactions.\n\nSpecifically, the popularity bias extractor $\\hat{y}_b$ in Equation \\eqref{eq:bias-prediction} can be viewed as a hard sample detector.\nConsidering an interaction $(u,i)$ with a high bias degree $\\cos(\\hat{\\xi}_{ui})$, we can use only its popularity information to predict the user preference and attain a vanishing bias-aware angular margin $M_{ui}$.\nHence, interaction $(u,i)$ \\wx{will} plausibly serve as the biased and easy \\wx{sample, if it involves the} active users and popular items.\nIts 
close-to-zero margin makes $(u,i)$'s BC loss approach softmax loss, thus downgrading the ranking criteria to match the basic assumption of softmax loss.\n\nIn contrast, if the popularity statistics are deficient in recovering the user preference via the popularity bias extractor, the interaction $(u,i)$ receives a low bias degree $\\cos(\\hat{\\xi}_{ui})$ and exerts a significant margin $M_{ui}$ on its BC loss.\nHence, it could work as the hard \\wx{sample}, which typically covers the tail users and items, and yields a more stringent assumption that user $u$ prefers the tail item over \\wx{the} other popular items by a large margin.\nSuch a significant margin makes the loss more challenging to optimize.\n\nIn a nutshell, BC loss adaptively prioritizes the interaction \\wx{samples} based on their bias degree and leads the CF model to shift its attention to hard \\wx{samples}, thus improving both head and tail performance, compared with softmax loss (\\emph{cf. } Section \\ref{sec:ablation}).\n\n\\subsection{Proof of Theorem \\ref{theorem}}\n\\label{sec:proof}\n\n\\bclosstheorem*\n\n\\begin{proof}\nLet the upper-case letter $\\Mat{V} \\in \\Space{V}$ be the random vector of representations and $\\Space{V} \\subseteq \\Space{R}^d$ be the representation space.\nWe use the normalization assumption of representations to connect cosine and Euclidean distances, \\emph{i.e., } if $\\norm{\\Mat{v}_u}=1$ and $\\norm{\\Mat{v}_i}=1$, then $\\Mat{v}_u^T\\Mat{v}_i = 1 - \\frac{1}{2} \\norm{\\Mat{v}_u-\\Mat{v}_i}^2$, $\\forall u,i$. \nLet $\\Set{P}_u = \\{i|y_{ui} =1\\}$ be the set of user $u$'s positive items, $\\Set{P}_i = \\{u|y_{ui} =1\\}$ be the set of item $i$'s positive users, and $\\Set{N}_{u}=\\{i|y_{ui}=0\\}$ be the set of user $u$'s negative items.\nClearly, there exists an upper bound $m$, \\emph{s.t. 
} $-1 < \\cos(\\hat{\\theta}_{ui}+M_{ui}) \\leq \\Mat{v}_u^T\\Mat{v}_i - m < 1 $.\nTherefore, we can analyze BC loss, which has the following relationships:\n \n \n \n \n \\begin{align} \n \\mathbf{\\mathop{\\mathcal{L}}}_{BC} \\geq & -\\sum_{(u,i)\\in\\Set{O}^{+}}\\log\\frac{\\exp{((\\Mat{v}_u^T\\Mat{v}_i - m)\/\\tau})}{\\exp{((\\Mat{v}_u^T\\Mat{v}_i - m)\/\\tau)}+\\sum_{j\\in\\Set{N}_{u}}\\exp{((\\Mat{v}_u^T\\Mat{v}_j)\/\\tau)}} \\nonumber\\\\\n &= -\\sum_{(u,i)\\in\\Set{O}^{+}} \\frac{\\Mat{v}_u^T\\Mat{v}_i - m}{\\tau} + \\sum_{(u,i)\\in\\Set{O}^{+}} \\log (\\exp{\\frac{\\Mat{v}_u^T\\Mat{v}_i - m}{\\tau}}+\\sum_{j\\in\\Set{N}_{u}}\\exp{\\frac{\\Mat{v}_u^T\\Mat{v}_j}{\\tau}}). \\label{eq:split_bc}\n \\end{align}\n We now probe into the first term in Equation \\eqref{eq:split_bc}:\n\\begin{align}\n -\\sum_{(u,i)\\in\\Set{O}^{+}} \\frac{\\Mat{v}_u^T\\Mat{v}_i - m}{\\tau}\n =& \\sum_{(u,i)\\in\\Set{O}^{+}} \\frac{\\norm{\\Mat{v}_u-\\Mat{v}_i}^2}{2\\tau} +\\frac{m-1}{\\tau} \\nonumber\\\\\n \\eqc& \\sum_{(u,i)\\in\\Set{O}^{+}} \\norm{\\Mat{v}_u-\\Mat{v}_i}^2 \\nonumber\\\\\n \n =& \\sum_{u\\in\\Set{U}}\\sum_{i \\in \\Set{P}_u} (\\norm{\\Mat{v}_u}^2 - \\Mat{v}_u^T\\Mat{v}_i )\n + \\sum_{i\\in\\Set{I}}\\sum_{u\\in\\Set{P}_i} (\\norm{\\Mat{v}_i}^2 - \\Mat{v}_u^T\\Mat{v}_i )\\nonumber \\\\\n \\eqc& \\sum_{u\\in\\Set{U}}\\norm{\\Mat{v}_u-\\Mat{c}_u}^2 + \\sum_{i\\in\\Set{I}}\\norm{\\Mat{v}_i-\\Mat{c}_i}^2, \\label{eq:conpactness}\n \\end{align}\n where the symbol $\\eqc$ indicates equality up to a multiplicative and\/or additive constant; $\\Mat{c}_{u} = \\frac{1}{|\\Set{P}_u|}\\sum_{i\\in \\Set{P}_{u}}\\Mat{v}_i$ is the averaged representation of all items that $u$ has interacted with, which describes $u$'s interest; $\\Mat{c}_{i} = \\frac{1}{|\\Set{P}_{i}|} \\sum_{u\\in\\Set{P}_{i}} \\Mat{v}_{u}$ is the averaged representation of all users who have adopted item $i$, which profiles its user group.\n \n We further analyze Equation \\eqref{eq:conpactness} from the entropy 
view by conflating the first two terms:\n \\begin{align}\n -\\sum_{(u,i)\\in\\Set{O}^{+}}\\frac{\\Mat{v}_u^T\\Mat{v}_i - m}{\\tau} \\eqc \\sum_{\\Mat{v}} \\norm{\\Mat{v} - \\Mat{c}_{v}}^2,\n \\end{align}\n where $\\Mat{v}\\in\\{\\Mat{v}_{u}|u\\in\\Set{U}\\}\\cup\\{\\Mat{v}_{i}|i\\in\\Set{I}\\}$ summarizes the representations of users and items, with the mean of $\\Mat{c}_{v}$.\n Following \\cite{BoudiafRZGPPA20}, we further interpret this term as a conditional cross-entropy between $\\Mat{V}$ and another random variable $\\bar{\\Mat{V}}$ whose conditional distribution given $Y$ is a standard Gaussian $\\bar{\\Mat{V}}|Y \\sim \\mathcal{N}(\\Mat{c}_{\\Mat{V}},I)$:\n \\begin{align}\n -\\sum_{(u,i)\\in\\Set{O}^{+}}\\frac{\\Mat{v}_u^T\\Mat{v}_i - m}{\\tau} & \\eqc\n H(\\Mat{V};\\bar{\\Mat{V}}|Y)=H(\\Mat{V}|Y)+D_{KL}(\\Mat{V}||\\bar{\\Mat{V}}|Y) \\nonumber\\\\\n & \\propto H(\\Mat{V}|Y),\\label{eq:first-term}\n \\end{align}\n where $H(\\cdot)$ denotes the cross-entropy, and $D_{KL}(\\cdot)$ denotes the $KL$-divergence.\n As a consequence, the first term in Equation \\eqref{eq:split_bc} is positive proportional to $H(\\Mat{V}|Y)$.\n \n This concludes the proof for the first compactness part of BC loss.\n \n We then inspect the second term in Equation \\eqref{eq:split_bc} to demonstrate its dispersion property:\n \\begin{align}\n & \\sum_{(u,i)\\in\\Set{O}^{+}} \\log (\\exp{\\frac{\\Mat{v}_u^T\\Mat{v}_i - m}{\\tau}}+\\sum_{j\\in\\Set{N}_{u}}\\exp{\\frac{\\Mat{v}_u^T\\Mat{v}_j}{\\tau}})\\nonumber\\\\\n \\geq& \\sum_{(u,i)\\in\\Set{O}^{+}} \\log{(\n \\sum_{j\\in\\Set{N}_{u}}\\exp{\\frac{\\Mat{v}_u^T\\Mat{v}_j}{\\tau}})} \\nonumber\\\\\n \\geq& \\sum_{u\\in\\Set{U}}\\sum_{j\\in\\Set{N}_{u}}\\frac{\\Mat{v}_u^T\\Mat{v}_j}{\\tau}\\nonumber\\\\\n \\eqc& - \\sum_{u\\in\\Set{U}}\\sum_{j\\in\\Set{N}_{u}} \\norm{\\Mat{v}_u - \\Mat{v}_j}^2, \\label{eq:dispersion}\n \\end{align}\n where we drop the redundant terms aligned with the compactness objective in the second line, and adopt 
Jensen's inequality in the third line.\n As shown in prior studies \\cite{WangS11}, minimizing this term is equivalent to maximizing entropy $H(\\Mat{V})$:\n \\begin{align} \\label{eq:second-term}\n \\sum_{(u,i)\\in\\Set{O}^{+}} \\log (\\exp{\\frac{\\Mat{v}_u^T\\Mat{v}_i - m}{\\tau}}+\\sum_{j\\in\\Set{N}_{u}}\\exp{\\frac{\\Mat{v}_u^T\\Mat{v}_j}{\\tau}}) \\propto -\\mathcal{H}(\\Mat{V}).\n \\end{align}\n As a result, the second term in Equation \\eqref{eq:split_bc} works as the dispersion part in BC loss.\n\\end{proof}\n\n\n\\begin{landscape}\n \\begin{figure}[t]\n \\centering\n \\subcaptionbox{BPR loss (head)\\label{fig:head_bpr}}{\n \\vspace{-6pt}\n \\includegraphics[width=0.23\\linewidth]{figures\/head_bpr.pdf}}\n \\subcaptionbox{Softmax loss (head)\\label{fig:head_softmax}}{\n \\vspace{-6pt}\n \\includegraphics[width=0.23\\linewidth]{figures\/head_info.pdf}}\n \\subcaptionbox{IPS-CN \\cite{IPS-CN} (head) \\label{fig:head_ips}}{\n \\vspace{-6pt}\n \\includegraphics[width=0.23\\linewidth]{figures\/head_ips.pdf}}\n \\subcaptionbox{BC loss (head)\\label{fig:head_bc}}{\n \\vspace{-6pt}\n \\includegraphics[width=0.23\\linewidth]{figures\/head_bc.pdf}}\n \n \\subcaptionbox{BPR loss (tail)\\label{fig:tail_bpr}}{\n \\vspace{-6pt}\n \\includegraphics[width=0.23\\linewidth]{figures\/tail_bpr.pdf}}\n \\subcaptionbox{Softmax loss (tail)\\label{fig:tail_softmax}}{\n \\vspace{-6pt}\n \\includegraphics[width=0.23\\linewidth]{figures\/tail_info.pdf}}\n \\subcaptionbox{IPS-CN \\cite{IPS-CN} (tail)\\label{fig:tail_ips}}{\n \\vspace{-6pt}\n \\includegraphics[width=0.23\\linewidth]{figures\/tail_ips.pdf}}\n \\subcaptionbox{BC loss (tail)\\label{fig:tail_bc}}{\n \\vspace{-6pt}\n \\includegraphics[width=0.23\\linewidth]{figures\/tail_bc.pdf}}\n \n \\caption{Magnified View of Figure \\ref{fig:toy-example}.}\n \\label{fig:magnified-toy-example}\n \\vspace{-20pt}\n \\end{figure}\n \\end{landscape}\n\n\n\\section{Experiments} \\label{sec:app_exp}\n\n\n\\begin{table}[b]\n \\centering\n 
\\vspace{-10pt}\n \\caption{Dataset statistics.}\n \n \\label{tab:dataset-statistics}\n \\resizebox{\\columnwidth}{!}{\n \\begin{tabular}{lrrrrrrr}\n \\toprule\n & KuaiRec & Douban Movie & Tencent & Amazon-Book & Alibaba-iFashion & Yahoo!R3 & Coat\\\\ \\midrule\n \\#Users & 7,175 & 36,644 & 95,709 & 52,643 & 300,000 & 14,382 & 290 \\\\\n \\#Items & 10,611 & 22,226 & 41,602 & 91,599 & 81,614 & 1,000 & 295 \\\\\n \\#Interactions & 1,062,969 & 5,397,926 & 2,937,228 & 2,984,108 & 1,607,813 & 129,748 & 2,776 \\\\\n Sparsity & 0.01396 & 0.00663 & 0.00074 & 0.00062 & 0.00007 & 0.00902 & 0.03245\\\\ \\midrule\n $D_{KL}$-Train & 1.075 & 1.471 & 1.425 & 0.572 & 1.678 & 0.854 & 0.356 \\\\\n $D_{KL}$-Validation & 1.006 & 1.642 & 1.423 & 0.572 & 1.705 & 0.822 & 0.350\\\\\n $D_{KL}$-Balanced & - & - & 0.003 & 0.000 & 0.323 & - & - \\\\\n $D_{KL}$-Imbalanced & - & - & 1.424 & 0.571 & 1.703 & - & - \\\\ \n $D_{KL}$-Temporal & - & 1.428 & - & - & - & - & - \\\\\n $D_{KL}$-Unbiased & 1.666 & - & - & - & - & 0.100 & 0.109 \\\\ \\bottomrule\n \\end{tabular}}\n\\end{table}\n\n\\subsection{Experimental Settings} \\label{sec:implementation}\n\\textbf{Datasets.} \nWe conduct experiments on eight real-world benchmark datasets: Tencent \\cite{Tencent}, Amazon-Book \\cite{Amazon-Book}, Alibaba-iFashion \\cite{Alibaba-ifashion}, Yelp2018 \\cite{LightGCN}, Douban Movie \\cite{Douban}, Yahoo!R3 \\cite{Yahoo}, Coat \\cite{ips}, and KuaiRec \\cite{KuaiRec}.\nAll datasets are public and vary in terms of size, domain, and sparsity.\nTable \\ref{tab:dataset-statistics} summarizes the dataset statistics, where the long-tail degree is monitored by the KL-divergence between the item popularity distribution and the uniform distribution, \\emph{i.e., } $D_{KL}(\\hat{P}_{data}|| \\text{Uniform})$.\nA larger KL-divergence value indicates that a heavier portion of interactions concentrates on the head of the distribution.
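The long-tail degree above can be computed directly from raw interaction counts. Below is a minimal sketch of $D_{KL}(\hat{P}_{data}||\text{Uniform})$ over item popularity (illustrative only, not the released code; the natural-log base is an assumption):

```python
import numpy as np

def long_tail_kl(interaction_counts):
    """D_KL(P_data || Uniform) over item popularity; larger = more head-concentrated."""
    p = np.asarray(interaction_counts, dtype=float)
    p = p / p.sum()                    # empirical item popularity distribution
    u = 1.0 / len(p)                   # uniform probability per item
    mask = p > 0                       # convention: 0 * log(0) := 0
    return float(np.sum(p[mask] * np.log(p[mask] / u)))
```

A perfectly uniform popularity profile gives 0, while concentrating most interactions on a few head items drives the value up, matching the trend in Table \ref{tab:dataset-statistics}.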
\nDuring data pre-processing, we follow the standard 10-core setting \\cite{PDA,KGAT} to filter out items and users with fewer than ten interactions.\n\n\\noindent\\textbf{Data Splits.}\nFor comprehensive comparisons, almost all standard test distributions in CF are covered in the experiments: balanced test set \\cite{MACR,DICE,KDCRec}, randomly selected imbalanced test set \\cite{ESAM,ZhuWC20}, temporal split test set \\cite{PDA,DecRS,Regularized_Optimization}, and unbiased test set \\cite{ips,KuaiRec,Yahoo}. \nThree datasets (\\emph{i.e., } Tencent, Amazon-Book, Alibaba-iFashion) are partitioned into both balanced and randomly selected imbalanced evaluations. \nAs an intervention test, the balanced evaluation (\\emph{i.e., } uniform distribution) is frequently employed in recent debiasing CF approaches \\cite{MACR,DICE,KDCRec,Causal_inference}.\nDouban Movie is split based on the temporal splitting strategy \\cite{MengMMO20}. \nKuaiRec is an unbiased, fully-observed dataset in which the feedback for every test-set interaction is explicitly collected.\n\n\n\\vspace{5pt}\n\\noindent\\textbf{Evaluation Metrics.}\nWe adopt the all-ranking strategy \\cite{KricheneR20}, \\emph{i.e., } for each user, all items are ranked by the recommender model, except the positive ones in the training set.
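The all-ranking protocol with masked training positives, together with the HR@$K$, Recall@$K$, and NDCG@$K$ metrics, can be sketched as follows (a simplified illustration rather than the paper's evaluation code; `scores` is a hypothetical user-by-item prediction matrix):

```python
import numpy as np

def evaluate_all_ranking(scores, train_pos, test_pos, k=20):
    """All-ranking evaluation: rank every item per user, masking train positives."""
    hits, recalls, ndcgs = [], [], []
    for u in range(scores.shape[0]):
        s = scores[u].copy()
        s[list(train_pos[u])] = -np.inf                 # exclude training positives
        top_k = np.argsort(-s)[:k]
        rel = np.isin(top_k, list(test_pos[u]))
        hits.append(float(rel.any()))                   # HR@K
        recalls.append(rel.sum() / max(len(test_pos[u]), 1))   # Recall@K
        dcg = (rel / np.log2(np.arange(2, k + 2))).sum()
        idcg = (1.0 / np.log2(np.arange(2, min(len(test_pos[u]), k) + 2))).sum()
        ndcgs.append(dcg / idcg if idcg > 0 else 0.0)   # NDCG@K
    return np.mean(hits), np.mean(recalls), np.mean(ndcgs)
```

Masking the training positives before ranking is the key step: otherwise already-observed items would crowd out the test items at the top of each list.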
To evaluate the quality of recommendation, three widely-used metrics are adopted: Hit Ratio (HR@$K$), Recall@$K$, and Normalized Discounted Cumulative Gain (NDCG@$K$), where $K$ is set to $20$ by default.\n\n\\vspace{5pt}\n\\noindent\\textbf{Baselines.}\nWe validate our BC loss on two widely-used CF models, MF \\cite{BPR} and LightGCN \\cite{LightGCN}, which are representative of conventional and state-of-the-art CF models, respectively.\nWe compare BC loss with popular debiasing strategies from various research lines: sample re-weighting (IPS-CN \\cite{IPS-CN}), bias removal by causal inference (MACR \\cite{MACR}, CausE \\cite{CausE}), and regularization-based frameworks (sam+reg \\cite{Regularized_Optimization}).\n\\za{We also compare BC loss to other standard losses used in collaborative filtering, including the most commonly used loss (BPR loss \\cite{BPR}) and recently proposed softmax losses (CCL \\cite{CCL} and SSM \\cite{SSM})}.\n\n\\vspace{5pt}\n\\noindent\\textbf{Parameter Settings.}\nWe conduct experiments using an Nvidia V100\nGPU (32 GB memory) on a server with a\n40-core Intel CPU (Intel(R) Xeon(R) CPU E5-2698 v4).\nWe implement our BC loss in PyTorch.\n\\wx{Our codes, datasets, and hyperparameter settings are available at \\url{https:\/\/anonymous.4open.science\/r\/BC-Loss-8764\/model.py} to guarantee reproducibility.}\nFor a fair comparison, all methods are optimized by the Adam \\cite{Adam} optimizer with a batch size of 2048, an embedding size of 64, a learning rate of 1e-3, and a regularization coefficient of 1e-5 in all experiments.\nFollowing the default setting in \\cite{LightGCN}, the number of embedding layers for LightGCN is set to 2.\nWe adopt an early stopping strategy that stops training if Recall@20 on the validation set does not increase for 10 successive epochs.\nA grid search is conducted to tune the critical hyperparameters of each strategy to choose the best models \\emph{w.r.t.}~ Recall@20 on the validation set.\n\\za{For softmax, SSM, and BC loss, we search
$\\tau$ in [0.06, 0.14] with a step size of 0.02.}\nFor CausE, 10\\% of the training data with a balanced distribution is used as the intervened set, and $cf\\_pen$ is tuned in $[0.01,0.1]$ with a step size of 0.02.\nFor MACR, we follow the original settings to set the weights for the user branch $\\alpha=1e-3$ and the item branch $\\beta=1e-3$, respectively. We further tune the hyperparameter $c$ in $[0, 50]$ with a step size of 5.\n\\za{For CCL loss, we search $w$ in $\\{1,2,5,10,50,100,200\\}$ and $m$ in the range $[0.2,1]$ with a step size of 0.2. As for the number of negative samples, softmax, SSM, CCL, and BC loss use 128 negative samples for the MF backbone and in-batch negative sampling for LightGCN.}\n\n\\subsection{Training Cost} \n\\begin{table}[t]\n \\centering\n \\caption{Training cost on Tencent (seconds per epoch\/in total). }\n \n \\label{tab:elapse_time}\n \\resizebox{0.9\\linewidth}{!}{\n \\begin{tabular}{l|rrrrrr}\n \\toprule\n & Backbone & +IPS-CN & +CausE & +sam+reg & +MACR & +BC loss\\\\ \\midrule\n MF & 15.5 \/ 17887 & 17.8 \/ 10662 & 16.6 \/ 1859 & 18.2 \/ 3458 & 160 \/ 17600 & 36.1 \/ 12815\\\\\n LightGCN & 78.6 \/ 4147 & 108 \/ 23652 & 47.2 \/ 3376 & 49.8 \/ 10458 & 135 \/ 20250 & 283 \/ 7075\\\\ \\bottomrule\n \\end{tabular}}\n\\end{table}\n\nAs shown in Table \\ref{tab:elapse_time}, we report the per-epoch and total training time of each baseline on Tencent.\nCompared with the backbone methods (\\emph{i.e., } MF and LightGCN), BC loss adds little computational overhead to the training process.
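For intuition on how the temperature $\tau$ and the margin interact, here is a minimal numpy sketch of a sampled softmax loss with an additive angular margin on the positive pair. It is a simplified stand-in for BC loss, not the paper's implementation: embeddings are assumed L2-normalized, and the interaction-wise margin $M_{ui}$ is supplied as a plain scalar rather than produced by the popularity bias extractor.

```python
import numpy as np

def margin_softmax_loss(v_u, v_pos, v_negs, margin, tau=0.1):
    """Sampled softmax loss with an additive angular margin on the positive pair.
    All embeddings are assumed L2-normalized; `margin` stands in for the
    interaction-wise bias-aware margin M_ui (supplied externally here)."""
    cos_pos = np.clip(v_u @ v_pos, -1.0, 1.0)
    pos_logit = np.cos(np.arccos(cos_pos) + margin) / tau   # penalized positive logit
    neg_logits = (v_negs @ v_u) / tau                       # sampled negatives
    logits = np.concatenate(([pos_logit], neg_logits))
    # negative log-probability of the margin-penalized positive
    return float(np.logaddexp.reduce(logits) - pos_logit)
```

With `margin=0` this reduces to the plain sampled softmax loss; a larger margin yields a larger (harder) loss for the same embeddings, which mirrors the hard-example behaviour discussed in Section \ref{sec:hard_example}.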
\n\n\n\n\\subsection{Evaluations on Unbiased Test Set} \\label{sec:KuaiRec}\n\n\\noindent\\textbf{Motivation.}\nBecause of the missing-not-at-random condition in real recommender systems, offline evaluation of collaborative filtering and recommender systems is commonly acknowledged as a challenge.\nTo close the gap, Yahoo!R3 \\cite{Yahoo} and Coat \\cite{ips} are widely used, which offer unbiased test sets collected under the missing-completely-at-random (MCAR) concept.\nAdditionally, the newly proposed KuaiRec \\cite{KuaiRec} also provides a fully-observed unbiased test set with 1,411 users over 3,327 videos.\nWe conduct experiments on all these datasets for comprehensive comparison, and KuaiRec is also included as one of our unbiased evaluations for two key reasons:\n1) It is significantly larger than existing MCAR datasets (\\emph{e.g., } Yahoo! and Coat); \n2) It overcomes the missing-value problem, making it as effective as an online A\/B test.\n\n\\begin{table}[t]\n \\centering\n \\caption{The performance comparison on the KuaiRec dataset. 
}\n \\label{tab:KuaiRec}\n \\resizebox{0.7\\columnwidth}{!}{\n \\begin{tabular}{l|ccc|ccc}\n \\toprule\n & \\multicolumn{3}{c|}{Validation} & \\multicolumn{3}{c}{Unbiased Test} \\\\\n \\multicolumn{1}{c|}{} & HR & Recall & NDCG & HR & Recall & NDCG \\\\\\midrule\n LightGCN & 0.299 & 0.069 & 0.051 & 0.104 & 0.0038 & 0.0064 \\\\\n + IPS-CN & 0.255 & 0.056 & 0.042 & \\underline{0.109} & \\underline{0.0073} & \\underline{0.0083} \\\\\n + CausE & 0.292 & 0.067 & 0.050 & 0.101 & 0.0056 & 0.0077 \\\\\n + sam+reg & 0.274 & 0.060 & 0.047 & 0.107 & 0.0069 & 0.0080 \\\\\n + BC loss & 0.343 & 0.076 & 0.062 & \\textbf{0.139}* & \\textbf{0.0077}* & \\textbf{0.0115}* \\\\\\midrule\n Imp.\\% & - & - & - & 27.5\\% & 4.05\\% & 38.6\\% \\\\\\bottomrule\n \\end{tabular}}\n \\vspace{-15pt}\n\\end{table}\n\n\\noindent\\textbf{Parameter Settings.} For BC loss on Yahoo!R3 and Coat, we search $\\tau_1$ in [0.05, 0.21] with a step size of 0.01, and $\\tau_2$ in [0.1, 0.6] with a step size of 0.1, and search the number of negative samples in [16, 32, 64, 128]. We adopt a batch size of 1024 and a learning rate of 5e-4.\n\n\\noindent\\textbf{Results.}\nTables \\ref{tab:KuaiRec} and \\ref{tab:Yahoo&Coat} illustrate the unbiased evaluations on the KuaiRec, Yahoo!R3, and Coat datasets, using LightGCN and MF as backbone models, respectively.\nThe best-performing methods are in bold and starred, while the strongest baselines are underlined;\nImp.\\% measures the relative improvements of BC loss over the strongest baselines.\nBC loss is consistently superior to all baselines \\emph{w.r.t.}~ all metrics.
\nIt indicates that BC loss truly improves the generalization ability of the recommender.\n\n\n\n\\subsection{Performance Comparison with Standard Loss Functions in CF} \\label{sec:standard_loss}\n\n\\noindent\\textbf{Motivation.}\nTo verify the effectiveness of BC loss as a standard learning strategy in collaborative filtering, we further conduct experiments over various datasets comparing BC loss with BPR loss, CCL loss \\cite{CCL}, and SSM loss \\cite{SSM}. We choose these three losses as baselines for two main reasons: 1) BPR loss is the most commonly applied loss in recommender systems; 2) SSM and CCL are the most recently proposed losses, where SSM is also a softmax loss and CCL employs a global margin. \n\n\\noindent\\textbf{Results.}\nTable \\ref{tab:loss_compare} reports the performance of the different losses on both balanced and imbalanced test sets across various datasets. \nWe have two main observations: (1) Clearly, our BC loss consistently outperforms CCL and SSM; (2) CCL and SSM achieve comparable performance to BC loss in the imbalanced evaluation settings, while performing much worse than BC loss in the balanced evaluation settings. This indicates the superiority of BC loss in alleviating the popularity bias, and further justifies the effectiveness of the bias-aware margins.\n\nTable \\ref{tab:Yahoo&Coat} shows the unbiased evaluations on the Yahoo!R3 and Coat datasets. \nWith regard to all criteria, BC loss consistently outperforms all other losses. \nThis verifies the effectiveness of the instance-wise bias-aware margin of BC loss.\n\n\\begin{table*}[t]\n\\caption{The performance comparison on Tencent, Amazon-Book and Alibaba-iFashion datasets. 
\n}\n\\label{tab:loss_compare}\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{l|cc|cc|cc|cc|cc|cc}\n\\toprule\n\\multicolumn{1}{c|}{} & \\multicolumn{4}{c|}{Tencent} & \\multicolumn{4}{c|}{Amazon-Book} & \\multicolumn{4}{c}{Alibaba-iFashion} \\\\\n\\multicolumn{1}{c|}{} &\n \\multicolumn{2}{c}{Balanced} &\n \\multicolumn{2}{c|}{Imbalanced} &\n \\multicolumn{2}{c}{Balanced} &\n \\multicolumn{2}{c|}{Imbalanced} &\n \\multicolumn{2}{c}{Balanced} &\n \\multicolumn{2}{c}{Imbalanced} \\\\\n \n\\multicolumn{1}{c|}{} & Recall & NDCG & Recall & NDCG & Recall & NDCG & Recall & NDCG & Recall & NDCG & Recall & NDCG \\\\\n\\midrule\n\nBPR & 0.0052 & 0.0040 & 0.0982 & 0.0643 & 0.0109 & 0.0103 & 0.0850 & 0.0638 & 0.0056 & 0.0028 & 0.0843 & 0.0411 \\\\\nSSM & 0.0055 & 0.0045 & \\underline{0.1297} & \n\\underline{0.0872} & 0.0156 & 0.0157 & 0.1125 & 0.0873 & \n\\underline{0.0079} & \\underline{0.0040} & \\underline{0.0963} & \\underline{0.0436} \\\\\nCCL & \\underline{0.0057} & \\underline{0.0047} & 0.1216 & 0.0818 & \\underline{0.0175} & \\underline{0.0167} & \\underline{0.1162} & \\underline{0.0927} & 0.0075 & 0.0038 & 0.0954 & 0.0428 \\\\\nBC Loss & \\textbf{0.0087}* & \\textbf{0.0068}* & \\textbf{0.1298}* & \\textbf{0.0904}* & \\textbf{0.0221}* & \\textbf{0.0202}* & \\textbf{0.1198}* & \\textbf{0.0948}* & \\textbf{0.0095}* & \\textbf{0.0048}* & \\textbf{0.0967}* & \\textbf{0.0487}* \\\\\\midrule\nImp. \\% & 52.6\\% & 44.7\\% & 0.1\\% & 3.7\\% & 26.3\\% & 21.0\\% & 3.1\\% & 2.3\\% & 20.3\\% & 20.0\\% & 0.4\\% & 11.7\\% \\\\\n\\bottomrule\n\\end{tabular}}\n\\end{table*}\n\n\n\n\\begin{table}[t]\n \\centering\n \\caption{The performance comparison on the Yahoo!R3 and Coat datasets. 
}\n \\label{tab:Yahoo&Coat}\n \\resizebox{0.6\\columnwidth}{!}{\n \\begin{tabular}{l|cc|cc}\n \\toprule\n & \\multicolumn{2}{c|}{Yahoo!R3} & \\multicolumn{2}{c}{Coat} \\\\\n \\multicolumn{1}{c|}{} & Recall & NDCG & Recall & NDCG \\\\\\midrule\n \n IPS-CN & 0.1081 & 0.0487 & 0.1700 & 0.1377 \\\\\n CausE & 0.1252 & 0.0537 & \\underline{0.2329} & 0.1635 \\\\\n sam+reg & 0.1198 & 0.0548 & 0.2303 & 0.1869 \\\\\n MACR & 0.1243 & 0.0539 & 0.0798 & 0.0358 \\\\\\midrule\n BPR & 0.1063 & 0.0476 & 0.0741 & 0.0361\\\\\n SSM & \\underline{0.1470} & \n \\underline{0.0688} & 0.2022 & 0.1832\\\\\n CCL & 0.1428 & 0.0676 & 0.2150 & \\underline{0.1885} \\\\\n BC loss & \\textbf{0.1487}* & \\textbf{0.0706}* & \\textbf{0.2385}* & \\textbf{0.1969}* \\\\\\midrule\n Imp.\\% & 1.2\\% & 2.6\\% & 2.4\\% & 4.5\\% \\\\\\bottomrule\n \\end{tabular}}\n\\end{table}\n\n\\subsection{Study on BC Loss (RQ3) - Effect of Popularity Bias Extractor} \\label{sec:bias_extractor}\n\n\\noindent\\textbf{Motivation.} To check the effectiveness of the popularity bias extractor, we need to answer two main questions: 1) What kinds of interactions will be learned well by the bias extractor, and what does the learned bias-angle distribution look like? 2) Can BC loss benefit from the bias margins extracted by the popularity bias extractor across various groups of interactions? We devise two distinct experiments to tackle these two problems.\n\n\\noindent\\textbf{Experiment Setting A.}\nUsers can be divided into three parts: head, mid, and tail, based on their popularity scores. Analogously, items can be partitioned into head, mid, and tail parts. As such, we can categorize all user-item interactions into nine subgroups. \nWe visualize the learned angles for various types of interactions in Figure \\ref{fig:angles}. \n\n\\noindent\\textbf{Results A.} Figure \\ref{fig:angles} shows the learned angles over all subgroups.
We find that interactions between head users and head items tend to hold small angles. Moreover, as evidenced by the high standard deviation, interactions stemming from the same subgroup type are prone to receive a wide range of angular values. This demonstrates the variability and validity of instance-wise angular margins.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.5\\linewidth]{figures\/angle.png}\n \\caption{Visualization of popularity bias angle for different types of interactions on Tencent.}\n \\label{fig:angles}\n\\end{figure*}\n\n\n\\noindent\\textbf{Experiment Setting B.}\nWe partition interactions into disjoint subgroups on the Tencent dataset, based on the bias degree estimated by our popularity bias extractor. These subgroups are composed of (1) hard interactions with low bias degrees (40\\% of total interactions, popularity rank > 1000), and (2) easy interactions with high bias degrees (15\\% of total interactions, popularity rank < 100), respectively. Then, we use four losses (BPR, CCL, SSM, and BC losses) to train the same MF backbone.\n\n\\noindent\\textbf{Results B.}\nFigure \\ref{fig:pop_extractor} depicts the performance of each loss in the hard-interaction subgroup, easy-interaction subgroup, and all-interaction group, during the training phase. As training proceeds, the performance of BPR drops dramatically in the hard-interaction subgroup, while increasing in the all-interaction group. A possible reason is that BPR is easily influenced by interaction-wise bias: BPR focuses largely on the majority subgroups of easy interactions, while sacrificing the performance of the minority subgroups of hard interactions. In contrast, the performance of BC loss consistently increases over every subgroup during training.
This indicates that BC loss is able to mitigate the negative influences of popularity bias, thus further justifying that the popularity bias extractor captures the interaction-wise bias well.\n\n\n\n\\begin{figure*}[b]\n\t\\centering\n\t\\subcaptionbox{Hard interactions}{\n\t\t\\includegraphics[width=0.32\\linewidth]{figures\/easy.png}}\n\t\\subcaptionbox{Easy interactions}{\n\t\t\\includegraphics[width=0.32\\linewidth]{figures\/hard.png}}\n\t\\subcaptionbox{All interactions}{\n\t\t\\includegraphics[width=0.32\\linewidth]{figures\/average.png}}\n\t\\caption{(a) The performance of BPR loss on hard interactions consistently drops; (b)-(c) BC loss shows the promise of mitigating the popularity bias by utilizing the popularity bias extractor.}\n\t\\label{fig:pop_extractor}\n\\end{figure*}\n\n\n\\newpage\n\\za{\n\\subsection{Visualization of Interaction Bias Degree}\nFigures \\ref{fig:item_pop} and \\ref{fig:user_pop} illustrate the relations between popularity scores and the bias degree extracted by the popularity bias extractor \\emph{w.r.t.}~ the item and user sides, respectively. Specifically, positive trends are shown, quantitatively supported by Pearson correlation coefficients (0.7703 and 0.662 for the item and user sides, respectively).
This verifies the power of popularity embeddings to predict popularity scores --- that is, user popularity embeddings derived from the popularity bias extractor are strongly correlated with and sufficiently predictive of user popularity scores; the same holds for the item side.}\n\\begin{figure*}[t]\n\t\\centering\n\t\\subcaptionbox{Bias degree \\emph{w.r.t.}~ Item popularity bias \\label{fig:item_pop}}{\n\t \n\t\t\\includegraphics[width=0.43\\linewidth]{figures\/item_pop.png}}\n\t\\subcaptionbox{Bias degree \\emph{w.r.t.}~ User popularity bias\\label{fig:user_pop}}{\n\t \n\t\t\\includegraphics[width=0.43\\linewidth]{figures\/user_pop.png}}\n\t\\caption{Visualizations of relationships between interaction bias degree estimated by our popularity bias extractor and item\/user popularity statistics. Pearson correlation coefficients are provided.}\n\\end{figure*}\n\n\n\n\n\\begin{table}[b]\n \\centering\n \\caption{Model architectures and hyper-parameters}\n \\label{tab:parameter}\n \\resizebox{0.6\\columnwidth}{!}{\n \\begin{tabular}{l|c|c|c|c|c}\n \\toprule\n & \\multicolumn{5}{c}{BC loss hyper-parameters} \\\\\\midrule\n \n \\multicolumn{1}{c|}{} & $\\tau_1$ & $\\tau_2$ & lr & batch size & No. 
negative samples \\\\\\midrule\n \\textbf{MF} \\\\\\midrule\n Tencent & 0.06 & 0.1 & 1e-3 & 2048 & 128 \\\\\\midrule\n iFashion & 0.08 & 0.1 & 1e-3 & 2048 & 128 \\\\\\midrule\n Amazon & 0.08 & 0.1 & 1e-3 & 2048 & 128 \\\\\\midrule\n Douban & 0.08 & 0.1 & 1e-3 & 2048 & 128 \\\\\\midrule\n Yahoo!R3 & 0.15 & 0.2 & 5e-4 & 1024 & 128 \\\\\\midrule\n Coat & 0.09 & 0.4 & 5e-4 & 1024 & 64\\\\\\midrule\n \\textbf{LightGCN} \\\\\\midrule\n Tencent & 0.12 & 0.1 & 1e-3 & 2048 & in-batch\\\\\\midrule\n iFashion & 0.14 & 0.1 & 1e-3 & 2048 & in-batch\\\\\\midrule\n Amazon & 0.08 & 0.1 & 1e-3 & 2048 & in-batch\\\\\\midrule\n Douban & 0.14 & 0.1 & 1e-3 & 2048 & in-batch\\\\\\midrule\n \\end{tabular}}\n\\end{table}\n\\section{In-depth Analysis of BC loss}\n\\subsection{Visualization of A Toy Experiment} \\label{sec:toy_example}\n\n\n\nHere we visualize a toy example on the Yelp2018 \\cite{LightGCN} dataset to showcase the effect of BC loss.\nSpecifically, we train a two-layer LightGCN whose embedding size is three, and illustrate the 3-dimensional normalized representations on a 3D unit sphere in Figure \\ref{fig:toy-example} (See the magnified view in Figure \\ref{fig:magnified-toy-example}).\nWe train the identical LightGCN backbone with different loss functions: BPR loss, softmax loss, BC loss, and IPS-CN \\cite{IPS-CN}.\nFor the same head\/tail user (\\emph{i.e., } green stars), we plot 500 items in the unit sphere covering all positive items (\\emph{i.e., } red dots) and randomly-selected negative items (\\emph{i.e., } blue dots) from both the training and testing sets.\nMoreover, the angle distribution (the second row of each subfigure) of positive and negative items for a certain user quantitatively shows the discriminative power of each loss.\nWe observe that:\n\\begin{itemize}[leftmargin = *]\n \\item \\textbf{BC loss learns more discriminative representations in both head and tail user cases. 
Moreover, BC loss learns a more reasonable representation distribution that is locally clustered and globally separated.} \n As Figures \\ref{fig:head_bc} and \\ref{fig:tail_bc} show, for head and tail users, BC loss encourages around 40\\% and 55\\% of positive items to fall into the group closest to the user representations, respectively.\n In other words, these item representations are clustered into a small region.\n BC loss also achieves the smallest mean positive angle.\n This verifies that BC loss tends to learn highly compact user\/item clusters.\n Moreover, Figure \\ref{fig:tail_bc} presents a clear margin between positive and negative items, reflecting highly discriminative power.\n Compared to softmax loss in Figures \\ref{fig:head_softmax} and \\ref{fig:tail_softmax}, the compactness and dispersion properties of BC loss come from the incorporation of the interaction-wise bias-aware margin.\n \n \\item \\textbf{The representations learned by standard CF losses - BPR loss and softmax loss - are not discriminative enough.}\n Under the supervision of BPR and softmax losses, item representations are scattered over a wide range of the unit sphere, where blue and red points occupy almost the same space, as Figures \\ref{fig:head_bpr} and \\ref{fig:head_softmax} demonstrate.\n Furthermore, Figure \\ref{fig:tail_bpr} shows only a negligible overlap between positive and negative items' angle distributions.\n However, as negative items far outnumber positive items for the tail user, even a small overlap will make many irrelevant items rank higher than the relevant ones, thus significantly hindering recommendation accuracy.\n Hence, directly optimizing BPR or softmax loss might be suboptimal for personalized recommendation tasks.
\n \n \\item \\textbf{IPS-CN, a well-known popularity debiasing method in CF, is prone to lift the tail performance by sacrificing the representation learning for the head.}\n Compared with BPR loss in Figure \\ref{fig:tail_bpr}, IPS-CN learns better item representations for the tail user, achieving a smaller mean positive angle as illustrated in Figure \\ref{fig:tail_ips}.\n However, for the head user in Figure \\ref{fig:head_ips}, the positive and negative item representations are mixed and cannot be easily distinguished. Worse still, the representations learned by IPS-CN have a larger mean positive angle for the head user compared to BPR loss. \n This results in a dramatic performance drop for head evaluations.\n\\end{itemize}\n\n\n\n\\subsection{Hard Example Mining Mechanism - One Desirable Property of BC Loss}\\label{sec:hard_example}\n\nWe argue that the mechanism of adaptively mining hard interactions is inherent in BC loss, which improves the efficiency and effectiveness of training.\nDistinct from softmax loss, which relies on predictive scores to mine hard negative samples \\cite{sgl} and leaves the popularity bias untouched, our BC loss considers the interaction-wise biases and adaptively locates hard informative interactions.\n\nSpecifically, the popularity bias extractor $\\hat{y}_b$ in Equation \\eqref{eq:bias-prediction} can be viewed as a hard sample detector.\nConsidering an interaction $(u,i)$ with a high bias degree $\\cos(\\hat{\\xi}_{ui})$, we can use only its popularity information to predict user preference and attain a vanishing bias-aware angular margin $M_{ui}$.\nHence, interaction $(u,i)$ \\wx{will} plausibly serve as the biased and easy \\wx{sample, if it involves the} active users and popular items.\nIts close-to-zero margin makes $(u,i)$'s BC loss approach softmax loss, thus downgrading the ranking criteria to match the basic assumption of softmax loss.\n\nIn contrast, if the popularity statistics are deficient in recovering user
preference via the popularity bias extractor, the interaction $(u,i)$ garners a low bias degree $\\cos(\\hat{\\xi}_{ui})$ and exerts a significant margin $M_{ui}$ on its BC loss.\nHence, it could work as the hard \\wx{sample}, which typically covers the tail users and items, and yields a more stringent assumption that user $u$ prefers the tail item over \\wx{the} other popular items by a large margin.\nSuch a significant margin makes the loss more challenging to optimize.\n\nIn a nutshell, BC loss adaptively prioritizes the interaction \\wx{samples} based on their bias degree and leads the CF model to shift its attention to hard \\wx{samples}, thus improving both head and tail performance, compared with softmax loss (\\emph{cf. } Section \\ref{sec:ablation}).\n\n\n\n\n\\subsection{Proof of Theorem \\ref{theorem}}\n\\label{sec:proof}\n\n\\bclosstheorem*\n\n\\begin{proof}\nLet the upper-case letter $\\Mat{V} \\in \\Space{V}$ be the random vector of representations and $\\Space{V} \\subseteq \\Space{R}^d$ be the representation space.\nWe use the normalization assumption of representations to connect cosine and Euclidean distances, \\emph{i.e., } if $\\norm{\\Mat{v}_u}=1$ and $\\norm{\\Mat{v}_i}=1$, then $\\Mat{v}_u^T\\Mat{v}_i = 1 - \\frac{1}{2} \\norm{\\Mat{v}_u-\\Mat{v}_i}^2$, $\\forall u,i$. \n \n \nLet $\\Set{P}_u = \\{i|y_{ui} =1\\}$ be the set of user $u$'s positive items, $\\Set{P}_i = \\{u|y_{ui} =1\\}$ be the set of item $i$'s positive users, and $\\Set{N}_{u}=\\{i|y_{ui}=0\\}$ be the set of user $u$'s negative items.\nClearly, there exists an upper bound $m$, \\emph{s.t.
} $-1 < \\cos(\\hat{\\theta}_{ui}+M_{ui}) \\leq \\Mat{v}_u^T\\Mat{v}_i - m < 1 $.\nTherefore, we can analyze BC loss, which has the following relationships:\n \n \n \n \n \\begin{align} \n \\mathbf{\\mathop{\\mathcal{L}}}_{BC} \\geq & -\\sum_{(u,i)\\in\\Set{O}^{+}}\\log\\frac{\\exp{((\\Mat{v}_u^T\\Mat{v}_i - m)\/\\tau})}{\\exp{((\\Mat{v}_u^T\\Mat{v}_i - m)\/\\tau)}+\\sum_{j\\in\\Set{N}_{u}}\\exp{((\\Mat{v}_u^T\\Mat{v}_j)\/\\tau)}} \\nonumber\\\\\n &= -\\sum_{(u,i)\\in\\Set{O}^{+}} \\frac{\\Mat{v}_u^T\\Mat{v}_i - m}{\\tau} + \\sum_{(u,i)\\in\\Set{O}^{+}} \\log (\\exp{\\frac{\\Mat{v}_u^T\\Mat{v}_i - m}{\\tau}}+\\sum_{j\\in\\Set{N}_{u}}\\exp{\\frac{\\Mat{v}_u^T\\Mat{v}_j}{\\tau}}). \\label{eq:split_bc}\n \\end{align}\n We now probe into the first term in Equation \\eqref{eq:split_bc}:\n\\begin{align}\n -\\sum_{(u,i)\\in\\Set{O}^{+}} \\frac{\\Mat{v}_u^T\\Mat{v}_i - m}{\\tau}\n =& \\sum_{(u,i)\\in\\Set{O}^{+}} \\frac{\\norm{\\Mat{v}_u-\\Mat{v}_i}^2}{2\\tau} +\\frac{m-1}{\\tau} \\nonumber\\\\\n \\eqc& \\sum_{(u,i)\\in\\Set{O}^{+}} \\norm{\\Mat{v}_u-\\Mat{v}_i}^2 \\nonumber\\\\\n \n =& \\sum_{u\\in\\Set{U}}\\sum_{i \\in \\Set{P}_u} (\\norm{\\Mat{v}_u}^2 - \\Mat{v}_u^T\\Mat{v}_i )\n + \\sum_{i\\in\\Set{I}}\\sum_{u\\in\\Set{P}_i} (\\norm{\\Mat{v}_i}^2 - \\Mat{v}_u^T\\Mat{v}_i )\\nonumber \\\\\n \\eqc& \\sum_{u\\in\\Set{U}}\\norm{\\Mat{v}_u-\\Mat{c}_u}^2 + \\sum_{i\\in\\Set{I}}\\norm{\\Mat{v}_i-\\Mat{c}_i}^2, \\label{eq:conpactness}\n \\end{align}\n where the symbol $\\eqc$ indicates equality up to a multiplicative and\/or additive constant; $\\Mat{c}_{u} = \\frac{1}{|\\Set{P}_u|}\\sum_{i\\in \\Set{P}_{u}}\\Mat{v}_i$ is the averaged representation of all items that $u$ has interacted with, which describes $u$'s interest; $\\Mat{c}_{i} = \\frac{1}{|\\Set{P}_{i}|} \\sum_{u\\in\\Set{P}_{i}} \\Mat{v}_{u}$ is the averaged representation of all users who have adopted item $i$, which profiles its user group.\n \n We further analyze Equation \\eqref{eq:conpactness} from the entropy 
view by conflating the first two terms:\n \\begin{align}\n -\\sum_{(u,i)\\in\\Set{O}^{+}}\\frac{\\Mat{v}_u^T\\Mat{v}_i - m}{\\tau} \\eqc \\sum_{\\Mat{v}} \\norm{\\Mat{v} - \\Mat{c}_{v}}^2,\n \\end{align}\n where $\\Mat{v}\\in\\{\\Mat{v}_{u}|u\\in\\Set{U}\\}\\cup\\{\\Mat{v}_{i}|i\\in\\Set{I}\\}$ summarizes the representations of users and items, with $\\Mat{c}_{v}$ being the corresponding center.\n Following \\cite{BoudiafRZGPPA20}, we further interpret this term as a conditional cross-entropy between $\\Mat{V}$ and another random variable $\\bar{\\Mat{V}}$ whose conditional distribution given $Y$ is a standard Gaussian $\\bar{\\Mat{V}}|Y \\sim \\mathcal{N}(\\Mat{c}_{\\Mat{V}},I)$:\n \\begin{align}\n -\\sum_{(u,i)\\in\\Set{O}^{+}}\\frac{\\Mat{v}_u^T\\Mat{v}_i - m}{\\tau} & \\eqc\n H(\\Mat{V};\\bar{\\Mat{V}}|Y)=H(\\Mat{V}|Y)+D_{KL}(\\Mat{V}||\\bar{\\Mat{V}}|Y) \\nonumber\\\\\n & \\propto H(\\Mat{V}|Y),\\label{eq:first-term}\n \\end{align}\n where $H(\\cdot)$ denotes the cross-entropy, and $D_{KL}(\\cdot)$ denotes the $KL$-divergence.\n As a consequence, the first term in Equation \\eqref{eq:split_bc} is positively proportional to $H(\\Mat{V}|Y)$.\n \n This concludes the proof for the compactness part of BC loss.\n \n We then inspect the second term in Equation \\eqref{eq:split_bc} to demonstrate its dispersion property:\n \\begin{align}\n & \\sum_{(u,i)\\in\\Set{O}^{+}} \\log (\\exp{\\frac{\\Mat{v}_u^T\\Mat{v}_i - m}{\\tau}}+\\sum_{j\\in\\Set{N}_{u}}\\exp{\\frac{\\Mat{v}_u^T\\Mat{v}_j}{\\tau}})\\nonumber\\\\\n \\geq& \\sum_{(u,i)\\in\\Set{O}^{+}} \\log{(\n \\sum_{j\\in\\Set{N}_{u}}\\exp{\\frac{\\Mat{v}_u^T\\Mat{v}_j}{\\tau}})} \\nonumber\\\\\n \\geq& \\sum_{u\\in\\Set{U}}\\sum_{j\\in\\Set{N}_{u}}\\frac{\\Mat{v}_u^T\\Mat{v}_j}{\\tau}\\nonumber\\\\\n \\eqc& - \\sum_{u\\in\\Set{U}}\\sum_{j\\in\\Set{N}_{u}} \\norm{\\Mat{v}_u - \\Mat{v}_j}^2, \\label{eq:dispersion}\n \\end{align}\n where we drop the redundant terms aligned with the compactness objective in the second line, and adopt
Jensen's inequality in the third line.\n As shown in prior studies \\cite{WangS11}, minimizing this term is equivalent to maximizing entropy $H(\\Mat{V})$:\n \\begin{align} \\label{eq:second-term}\n \\sum_{(u,i)\\in\\Set{O}^{+}} \\log (\\exp{\\frac{\\Mat{v}_u^T\\Mat{v}_i - m}{\\tau}}+\\sum_{j\\in\\Set{N}_{u}}\\exp{\\frac{\\Mat{v}_u^T\\Mat{v}_j}{\\tau}}) \\propto -\\mathcal{H}(\\Mat{V}).\n \\end{align}\n As a result, the second term in Equation \\eqref{eq:split_bc} works as the dispersion part in BC loss.\n\\end{proof}\n\n\n\n\\begin{landscape}\n \\begin{figure}[t]\n \\centering\n \\subcaptionbox{BPR loss (head)\\label{fig:head_bpr}}{\n \\vspace{-6pt}\n \\includegraphics[width=0.23\\linewidth]{figures\/head_bpr.pdf}}\n \\subcaptionbox{Softmax loss (head)\\label{fig:head_softmax}}{\n \\vspace{-6pt}\n \\includegraphics[width=0.23\\linewidth]{figures\/head_info.pdf}}\n \\subcaptionbox{IPS-CN \\cite{IPS-CN} (head) \\label{fig:head_ips}}{\n \\vspace{-6pt}\n \\includegraphics[width=0.23\\linewidth]{figures\/head_ips.pdf}}\n \\subcaptionbox{BC loss (head)\\label{fig:head_bc}}{\n \\vspace{-6pt}\n \\includegraphics[width=0.23\\linewidth]{figures\/head_bc.pdf}}\n \n \\subcaptionbox{BPR loss (tail)\\label{fig:tail_bpr}}{\n \\vspace{-6pt}\n \\includegraphics[width=0.23\\linewidth]{figures\/tail_bpr.pdf}}\n \\subcaptionbox{Softmax loss (tail)\\label{fig:tail_softmax}}{\n \\vspace{-6pt}\n \\includegraphics[width=0.23\\linewidth]{figures\/tail_info.pdf}}\n \\subcaptionbox{IPS-CN \\cite{IPS-CN} (tail)\\label{fig:tail_ips}}{\n \\vspace{-6pt}\n \\includegraphics[width=0.23\\linewidth]{figures\/tail_ips.pdf}}\n \\subcaptionbox{BC loss (tail)\\label{fig:tail_bc}}{\n \\vspace{-6pt}\n \\includegraphics[width=0.23\\linewidth]{figures\/tail_bc.pdf}}\n \n \\caption{Magnified View of Figure \\ref{fig:toy-example}.}\n \\label{fig:magnified-toy-example}\n \\vspace{-20pt}\n \\end{figure}\n \\end{landscape}\n\n\n\\section{Experiments} \\label{sec:app_exp}\n\n\n\\begin{table}[b]\n \\centering\n 
\\vspace{-10pt}\n \\caption{Dataset statistics.}\n \n \\label{tab:dataset-statistics}\n \\resizebox{\\columnwidth}{!}{\n \\begin{tabular}{lrrrrrrr}\n \\toprule\n & KuaiRec & Douban Movie & Tencent & Amazon-Book & Alibaba-iFashion & Yahoo!R3 & Coat\\\\ \\midrule\n \\#Users & 7,175 & 36,644 & 95,709 & 52,643 & 300,000 & 14,382 & 290 \\\\\n \\#Items & 10,611 & 22,226 & 41,602 & 91,599 & 81,614 & 1,000 & 295 \\\\\n \\#Interactions & 1,062,969 & 5,397,926 & 2,937,228 & 2,984,108 & 1,607,813 & 129,748 & 2,776 \\\\\n Sparsity & 0.01396 & 0.00663 & 0.00074 & 0.00062 & 0.00007 \n & 0.00902 & 0.03245\\\\ \\midrule\n $D_{KL}$-Train & 1.075 & 1.471 & 1.425 & 0.572 & 1.678 & 0.854 & 0.356 \\\\\n $D_{KL}$-Validation & 1.006 & 1.642 & 1.423 & 0.572 & 1.705 & 0.822 & 0.350\\\\\n $D_{KL}$-Balanced & - & - & 0.003 & 0.000 & 0.323 & - & - \\\\\n $D_{KL}$-Imbalanced & - & - & 1.424 & 0.571 & 1.703 & - & - \\\\ \n $D_{KL}$-Temporal & - & 1.428 & - & - & - & - & - \\\\\n $D_{KL}$-Unbiased & 1.666 & - & - & - & - & 0.100 & 0.109 \\\\ \\bottomrule\n \\end{tabular}}\n\\end{table}\n\n\n\n\n\\subsection{Experimental Settings} \\label{sec:implementation}\n\\textbf{Datasets.} \nWe conduct experiments on eight real-world benchmark datasets: Tencent \\cite{Tencent}, Amazon-Book \\cite{Amazon-Book}, Alibaba-iFashion \\cite{Alibaba-ifashion}, Yelp2018 \\cite{LightGCN}, Douban Movie \\cite{Douban}, \\za{Yahoo!R3 \\cite{Yahoo}, Coat \\cite{ips}} and KuaiRec \\cite{KuaiRec}.\nAll datasets are public and vary in terms of size, domain, and sparsity.\nTable \\ref{tab:dataset-statistics} summarizes the dataset statistics, where the long-tail degree is monitored by the KL-divergence between the item popularity distribution and the uniform distribution, \\emph{i.e., } $D_{KL}(\\hat{P}_{data}|| \\text{Uniform})$.\nA larger KL-divergence value indicates that a heavier portion of interactions concentrates on the head of the distribution. 
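For reference, $D_{KL}(\hat{P}_{data}||\text{Uniform})$ reduces to $\log N + \sum_i p_i \log p_i$ over the empirical item popularities $p_i$. A minimal Python sketch of this statistic (the function name is ours, and natural logarithms are assumed; the log base behind the reported values is not stated, so absolute numbers may differ by a constant factor):

```python
import math
from collections import Counter

def popularity_kl(item_ids, n_items):
    """D_KL(P_data || Uniform) of the item popularity distribution, in nats.
    item_ids: iterable with one entry per observed interaction.
    Returns 0 for perfectly uniform popularity and log(n_items) at most."""
    counts = Counter(item_ids)
    total = sum(counts.values())
    # sum_i p_i * log(p_i / (1/n_items)); items with zero interactions contribute 0
    return sum((c / total) * math.log((c / total) * n_items)
               for c in counts.values())
```

Interactions spread evenly over all items give $0$, while interactions concentrated on a single item give the maximal value $\log N$.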
\nWhen pre-processing the data, we follow the standard 10-core setting \\cite{PDA,KGAT} to filter out the items and users with fewer than ten interactions.\n\n\\noindent\\textbf{Data Splits.}\nFor comprehensive comparisons, almost all standard test distributions in CF are covered in the experiments: balanced test set \\cite{MACR,DICE,KDCRec}, randomly selected imbalanced test set \\cite{ESAM,ZhuWC20}, temporal split test set \\cite{PDA,DecRS,Regularized_Optimization}, and unbiased test set \\cite{ips,KuaiRec,Yahoo}. \nThree datasets (\\emph{i.e., } Tencent, Amazon-Book, Alibaba-iFashion) are partitioned into both balanced and randomly selected imbalanced evaluations. \nAs an intervention test, the balanced evaluation (\\emph{i.e., } uniform distribution) is frequently employed in recent debiasing CF approaches \\cite{MACR,DICE,KDCRec,Causal_inference}.\nDouban Movie is split based on the temporal splitting strategy \\cite{MengMMO20}. \nKuaiRec is an unbiased, fully-observed dataset in which feedback for the test-set interactions is explicitly collected.\n\n\n\\vspace{5pt}\n\\noindent\\textbf{Evaluation Metrics.}\nWe adopt the all-ranking strategy \\cite{KricheneR20}, \\emph{i.e., } for each user, all items are ranked by the recommender model, except the positive ones in the training set. 
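Concretely, the all-ranking protocol for one user (mask the training positives, rank everything else, then score the top-$K$ list) can be sketched as follows (an illustrative numpy sketch; the function name is ours and this is not the paper's evaluation code):

```python
import numpy as np

def rank_and_score(scores, train_pos, test_pos, k=20):
    """All-ranking evaluation for one user (illustrative sketch).
    scores:    (n_items,) model scores for every item,
    train_pos: item ids of training positives (excluded from the ranking),
    test_pos:  item ids of held-out positives (assumed non-empty).
    Returns (Recall@k, NDCG@k)."""
    scores = np.asarray(scores, dtype=float).copy()
    scores[list(train_pos)] = -np.inf             # mask training positives
    topk = np.argsort(-scores)[:k]                # item ids, best to worst
    hits = np.isin(topk, list(test_pos)).astype(float)
    recall = hits.sum() / len(test_pos)
    dcg = (hits / np.log2(np.arange(2, len(topk) + 2))).sum()
    idcg = (1.0 / np.log2(np.arange(2, min(k, len(test_pos)) + 2))).sum()
    return recall, dcg / idcg
```

Averaging these per-user values over all users yields the reported Recall@$K$ and NDCG@$K$.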
To evaluate the quality of recommendation, we adopt three widely-used metrics: Hit Ratio (HR@$K$), Recall@$K$, and Normalized Discounted Cumulative Gain (NDCG@$K$), where $K$ is set as $20$ by default.\n\n\\vspace{5pt}\n\\noindent\\textbf{Baselines.}\nWe validate our BC loss on two widely-used CF models, MF \\cite{BPR} and LightGCN \\cite{LightGCN}, which are representatives of the conventional and state-of-the-art CF models.\nWe compare BC loss with popular debiasing strategies from various research lines: sample re-weighting (IPS-CN \\cite{IPS-CN}), bias removal by causal inference (MACR \\cite{MACR}, CausE \\cite{CausE}), and regularization-based frameworks (sam+reg \\cite{Regularized_Optimization}).\n\\za{We also compare BC loss to other standard losses used in collaborative filtering, including the most commonly used loss (BPR loss \\cite{BPR}) and recently proposed softmax losses (CCL \\cite{CCL} and SSM \\cite{SSM})}.\n\n\\vspace{5pt}\n\\noindent\\textbf{Parameter Settings.}\nWe conduct experiments using an Nvidia V100\nGPU (32 GB memory) on a server with a\n40-core Intel CPU (Intel(R) Xeon(R) CPU E5-2698 v4).\nWe implement our BC loss in PyTorch.\n\\wx{Our codes, datasets, and hyperparameter settings are available at \\url{https:\/\/github.com\/anzhang314\/BC-Loss} to guarantee reproducibility.}\nFor a fair comparison, all methods are optimized by the Adam \\cite{Adam} optimizer with a batch size of 2048, an embedding size of 64, a learning rate of 1e-3, and a regularization coefficient of 1e-5 in all experiments.\nFollowing the default setting in \\cite{LightGCN}, the number of embedding layers for LightGCN is set to 2.\nWe adopt an early-stopping strategy that stops training if Recall@20 on the validation set does not increase for 10 successive epochs.\nA grid search is conducted to tune the critical hyperparameters of each strategy to choose the best models \\emph{w.r.t.}~ Recall@20 on the validation set.\n\\za{For softmax, SSM, and BC loss, we search $\\tau$ in [0.06, 
0.14] with a step size of 0.02.}\nFor CausE, 10\\% of the training data with a balanced distribution is used as the intervened set, and $cf\\_pen$ is tuned in $[0.01,0.1]$ with a step size of 0.02.\nFor MACR, we follow the original settings to set the weights for the user branch $\\alpha=1e-3$ and the item branch $\\beta=1e-3$, respectively. We further tune the hyperparameter $c$ in $[0, 50]$ with a step size of 5.\n\\za{For CCL loss, we search $w$ in $\\{1,2,5,10,50,100,200\\}$ and $m$ in the range $[0.2,1]$ with a step size of 0.2. As for the number of negative samples, softmax, SSM, CCL, and BC loss use 128 negative samples for the MF backbone and in-batch negative sampling for the LightGCN models.}\n\n\n\n\\subsection{Training Cost} \n\\begin{table}[t]\n \\centering\n \\caption{Training cost on Tencent (seconds per epoch\/in total). }\n \n \\label{tab:elapse_time}\n \\resizebox{0.9\\linewidth}{!}{\n \\begin{tabular}{l|rrrrrr}\n \\toprule\n & Backbone & +IPS-CN & +CausE & +sam+reg & +MACR & +BC loss\\\\ \\midrule\n MF & 15.5 \/ 17887 & 17.8 \/ 10662 & 16.6 \/ 1859 & 18.2 \/ 3458 & 160 \/ 17600 & 36.1 \/ 12815\\\\\n LightGCN & 78.6 \/ 4147 & 108 \/ 23652 & 47.2 \/ 3376 & 49.8 \/ 10458 & 135 \/ 20250 & 283 \/ 7075\\\\ \\bottomrule\n \\end{tabular}}\n\\end{table}\n\n\nIn terms of time complexity, we report in Table \\ref{tab:elapse_time} the time cost per epoch and in total for each baseline on Tencent.\nCompared with the backbone methods (\\emph{i.e., } MF and LightGCN), BC loss adds little computational overhead to the training process. 
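For concreteness, the margin-softmax form of BC loss that appears in the lower bound of the proof above can be sketched in a few lines of numpy. Here the per-interaction margins are supplied directly, standing in for the bias-aware margins produced by the popularity bias extractor, so this is an illustrative sketch rather than the released implementation:

```python
import numpy as np

def bc_softmax_loss(s_pos, s_neg, m, tau=0.1):
    """Margin-softmax form of the BC-loss lower bound.
    s_pos: (B,)   similarities v_u^T v_i for the positive pairs,
    s_neg: (B, N) similarities v_u^T v_j for sampled negatives,
    m:     (B,)   per-interaction margins (bias-aware in BC loss).
    Returns the mean loss over the batch."""
    logits_pos = (s_pos - m) / tau                 # margin-penalised positive
    logits_neg = s_neg / tau
    # log-sum-exp over [positive, negatives], stabilised by the row maximum
    all_logits = np.concatenate([logits_pos[:, None], logits_neg], axis=1)
    row_max = np.max(all_logits, axis=1)
    lse = row_max + np.log(np.sum(np.exp(all_logits - row_max[:, None]), axis=1))
    return float(np.mean(lse - logits_pos))
```

With the margin set to zero this reduces to a sampled softmax loss; increasing the margin of an interaction strictly increases its loss, which is how the bias-aware margins re-weight the optimization.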
\n\n\n\n\n\\subsection{Evaluations on Unbiased Test Set} \\label{sec:KuaiRec}\n\n\\noindent\\textbf{Motivation.}\nBecause of the missing-not-at-random condition in real recommender systems, offline evaluation of collaborative filtering and recommender systems is commonly acknowledged as a challenge.\nTo close the gap, Yahoo!R3 \\cite{Yahoo} and Coat \\cite{ips} are widely used, which offer unbiased test sets that are collected following the missing-completely-at-random (MCAR) concept.\nAdditionally, the recently proposed KuaiRec \\cite{KuaiRec} also provides a fully-observed unbiased test set with 1,411 users over 3,327 videos.\nWe conduct experiments on all these datasets for comprehensive comparison, and KuaiRec is also included as one of our unbiased evaluations for two key reasons:\n1) It is significantly larger than existing MCAR datasets (\\emph{e.g., } Yahoo! and Coat); \n2) It overcomes the missing-value problem, making it as effective as an online A\/B test.\n\n\\begin{table}[t]\n \\centering\n \\caption{Performance comparison on KuaiRec dataset. 
}\n \\label{tab:KuaiRec}\n \\resizebox{0.7\\columnwidth}{!}{\n \\begin{tabular}{l|ccc|ccc}\n \\toprule\n & \\multicolumn{3}{c|}{Validation} & \\multicolumn{3}{c}{Unbiased Test} \\\\\n \\multicolumn{1}{c|}{} & HR & Recall & NDCG & HR & Recall & NDCG \\\\\\midrule\n LightGCN & 0.299 & 0.069 & 0.051 & 0.104 & 0.0038 & 0.0064 \\\\\n + IPS-CN & 0.255 & 0.056 & 0.042 & \\underline{0.109} & \\underline{0.0073} & \\underline{0.0083} \\\\\n + CausE & 0.292 & 0.067 & 0.050 & 0.101 & 0.0056 & 0.0077 \\\\\n + sam+reg & 0.274 & 0.060 & 0.047 & 0.107 & 0.0069 & 0.0080 \\\\\n + BC loss & 0.343 & 0.076 & 0.062 & \\textbf{0.139}* & \\textbf{0.0077}* & \\textbf{0.0115}* \\\\\\midrule\n Imp.\\% & - & - & - & 27.5\\% & 4.05\\% & 38.6\\% \\\\\\bottomrule\n \\end{tabular}}\n \\vspace{-15pt}\n\\end{table}\n\n\\noindent\\textbf{Parameter Settings.} For BC loss on Yahoo!R3 and Coat, we search $\\tau_1$ in [0.05, 0.21] with a step size of 0.01, $\\tau_2$ in [0.1, 0.6] with a step size of 0.1, and the number of negative samples in \\{16, 32, 64, 128\\}. We adopt a batch size of 1024 and a learning rate of 5e-4.\n\n\\noindent\\textbf{Results.}\nTables \\ref{tab:KuaiRec} and \\ref{tab:Yahoo&Coat} report the unbiased evaluations on the KuaiRec, Yahoo!R3, and Coat datasets, using LightGCN and MF as backbone models, respectively.\nThe best-performing methods are bold and starred, while the strongest baselines are underlined;\nImp.\\% measures the relative improvements of BC loss over the strongest baselines.\nBC loss is consistently superior to all baselines \\emph{w.r.t.}~ all metrics. 
\nIt indicates that BC loss truly improves the generalization ability of the recommender.\n\n\n\n\n\n\n\\subsection{Performance Comparison with Standard Loss Functions in CF} \\label{sec:standard_loss}\n\n\\noindent\\textbf{Motivation.}\nTo verify the effectiveness of BC loss as a standard learning strategy in collaborative filtering, we further compare BC loss with BPR loss, CCL loss \\cite{CCL}, and SSM loss \\cite{SSM} over various datasets. We choose these three losses as baselines for two main reasons: 1) BPR loss is the most commonly applied loss in recommender systems; 2) SSM and CCL are the most recently proposed losses, where SSM is also a softmax loss and CCL employs a global margin. \n\n\\noindent\\textbf{Results.}\nTable \\ref{tab:loss_compare} reports the performance of the different losses on both balanced and imbalanced test sets over various datasets. \nWe have two main observations: (1) Clearly, our BC loss consistently outperforms CCL and SSM; (2) CCL and SSM achieve comparable performance to BC loss in the imbalanced evaluation settings, while performing much worse than BC loss in the balanced evaluation settings. This indicates the superiority of BC loss in alleviating the popularity bias, and further justifies the effectiveness of the bias-aware margins.\n\nTable \\ref{tab:Yahoo&Coat} shows the unbiased evaluations on the Yahoo!R3 and Coat datasets. \nWith regard to all criteria, BC loss consistently outperforms all other losses. 
\nIt verifies the effectiveness of the instance-wise bias margins of BC loss.\n\n\\begin{table*}[t]\n\\caption{Performance comparison on Tencent, Amazon-Book and Alibaba-iFashion datasets.}\n\\label{tab:loss_compare}\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{l|cc|cc|cc|cc|cc|cc}\n\\toprule\n\\multicolumn{1}{c|}{} & \\multicolumn{4}{c|}{Tencent} & \\multicolumn{4}{c|}{Amazon-Book} & \\multicolumn{4}{c}{Alibaba-iFashion} \\\\\n\\multicolumn{1}{c|}{} &\n \\multicolumn{2}{c}{Balanced} &\n \\multicolumn{2}{c|}{Imbalanced} &\n \\multicolumn{2}{c}{Balanced} &\n \\multicolumn{2}{c|}{Imbalanced} &\n \\multicolumn{2}{c}{Balanced} &\n \\multicolumn{2}{c}{Imbalanced} \\\\\n \n\\multicolumn{1}{c|}{} & Recall & NDCG & Recall & NDCG & Recall & NDCG & Recall & NDCG & Recall & NDCG & Recall & NDCG \\\\\n\\midrule\nBPR & 0.0052 & 0.0040 & 0.0982 & 0.0643 & 0.0109 & 0.0103 & 0.0850 & 0.0638 & 0.0056 & 0.0028 & 0.0843 & 0.0411 \\\\\nSSM & 0.0055 & 0.0045 & \\underline{0.1297} & \n\\underline{0.0872} & 0.0156 & 0.0157 & 0.1125 & \\underline{0.0873} & \n\\underline{0.0079} & \\underline{0.0040} & \\underline{0.0963} & \\underline{0.0436} \\\\\nCCL & \\underline{0.0057} & \\underline{0.0047} & 0.1216 & 0.0818 & \\underline{0.0175} & \\underline{0.0167} & \\underline{0.1162} & \\underline{0.0927} & 0.0075 & 0.0038 & 0.0954 & 0.0428 \\\\\nBC Loss & \\textbf{0.0087}* & \\textbf{0.0068}* & \\textbf{0.1298}* & \\textbf{0.0904}* & \\textbf{0.0221}* & \\textbf{0.0202}* & \\textbf{0.1198}* & \\textbf{0.0948}* & \\textbf{0.0095}* & \\textbf{0.0048}* & \\textbf{0.0967}* & \\textbf{0.0487}* \\\\\\midrule\nImp. \\% & 52.6\\% & 44.7\\% & 0.1\\% & 3.7\\% & 26.3\\% & 21.0\\% & 3.1\\% & 2.3\\% & 20.3\\% & 20.0\\% & 0.4\\% & 11.7\\% \\\\\n\\bottomrule\n\\end{tabular}}\n\\end{table*}\n\n\n\n\n\\begin{table}[t]\n \\centering\n \\caption{Performance comparison on Yahoo!R3 and Coat datasets. 
}\n \\label{tab:Yahoo&Coat}\n \\resizebox{0.4\\columnwidth}{!}{\n \\begin{tabular}{l|cc|cc}\n \\toprule\n & \\multicolumn{2}{c|}{Yahoo!R3} & \\multicolumn{2}{c}{Coat} \\\\\n \\multicolumn{1}{c|}{} & Recall & NDCG & Recall & NDCG \\\\\\midrule\n \n IPS-CN & 0.1081 & 0.0487 & 0.1700 & 0.1377 \\\\\n CausE & 0.1252 & 0.0537 & \\underline{0.2329} & 0.1635 \\\\\n sam+reg & 0.1198 & 0.0548 & 0.2303 & 0.1869 \\\\\n MACR & 0.1243 & 0.0539 & 0.0798 & 0.0358 \\\\\\midrule\n BPR & 0.1063 & 0.0476 & 0.0741 & 0.0361\\\\\n SSM & \\underline{0.1470} & \n \\underline{0.0688} & 0.2022 & 0.1832\\\\\n CCL & 0.1428 & 0.0676 & 0.2150 & \\underline{0.1885} \\\\\n BC loss & \\textbf{0.1487}* & \\textbf{0.0706}* & \\textbf{0.2385}* & \\textbf{0.1969}* \\\\\\midrule\n Imp.\\% & 1.2\\% & 2.6\\% & 2.4\\% & 4.5\\% \\\\\\bottomrule\n \\end{tabular}}\n \\end{table}\n \n \n \\subsection{Study on BC Loss (RQ3) - Effect of Popularity Bias Extractor} \\label{sec:bias_extractor}\n \n \\noindent\\textbf{Motivation.} To check the effectiveness of the popularity bias extractor, we need to answer two main questions: 1) What kinds of interactions will be learned well by the bias extractor, and what does the learned bias-angle distribution look like? 2) Can BC loss benefit from the bias margins extracted by the popularity bias extractor across various groups of interactions? \n We devise the following experiment to tackle the aforementioned questions.\n \n \\noindent\\textbf{Experiment Setting.}\n Users can be divided into three parts: head, mid, and tail, based on their popularity scores. Analogously, items can be partitioned into head, mid, and tail parts. As such, we can categorize all user-item interactions into nine subgroups. \n We visualize the learned angles for various types of interactions in Figure \\ref{fig:angles}. \n \n \\noindent\\textbf{Results.} Figure \\ref{fig:angles} shows the learned angles over all subgroups. 
We find that interactions between head users and head items tend to hold small angles. Moreover, as evidenced by the high standard deviation, interactions stemming from the same subgroup type are prone to receive a wide range of angular values. This demonstrates the variability and validity of the instance-wise angular margins.\n \n \\begin{figure*}\n \\centering\n \\includegraphics[width=0.5\\linewidth]{figures\/angle.png}\n \\caption{Visualization of popularity bias angle for different types of interactions on Tencent.}\n \\label{fig:angles}\n \\end{figure*}\n \n \n \n \n \n \n \n \n \n\n\\subsection{Visualization of interaction bias degree}\nFigures \\ref{fig:item_pop} and \\ref{fig:user_pop} illustrate the relations between the popularity scores and the bias degree extracted by the popularity bias extractor \\emph{w.r.t.}~ the user and item sides, respectively. Specifically, positive trends are shown, and their relations are also quantitatively supported by the Pearson correlation coefficients (0.7703 and 0.662 for the item and user sides, respectively). This verifies the power of the popularity embeddings to predict the popularity scores --- that is, user popularity embeddings derived from the popularity bias extractor are strongly correlated with and sufficiently predictive of user popularity scores, and analogously for the item side.\n \\begin{figure*}[t]\n \\centering\n \\subcaptionbox{Bias degree \\emph{w.r.t.}~ Item popularity bias \\label{fig:item_pop}}{\n \n \\includegraphics[width=0.43\\linewidth]{figures\/item_pop.png}}\n \\subcaptionbox{Bias degree \\emph{w.r.t.}~ User popularity bias\\label{fig:user_pop}}{\n \n \\includegraphics[width=0.43\\linewidth]{figures\/user_pop.png}}\n \\caption{Visualizations of relationships between interaction bias degree estimated by our popularity bias extractor and item\/user popularity statistics. 
Pearson correlation coefficients are provided.}\n \\end{figure*}\n \n \n \n \n \\begin{table}\n \\centering\n \\caption{Model architectures and hyper-parameters}\n \\label{tab:parameter}\n \\resizebox{0.6\\columnwidth}{!}{\n \\begin{tabular}{l|c|c|c|c|c}\n \\toprule\n & \\multicolumn{5}{c}{BC loss hyper-parameters} \\\\\\midrule\n \n \\multicolumn{1}{c|}{} & $\\tau_1$ & $\\tau_2$ & lr & batch size & No. negative samples \\\\\\midrule\n \\textbf{MF} \\\\\\midrule\n Tencent & 0.06 & 0.1 & 1e-3 & 2048 & 128 \\\\\\midrule\n iFashion & 0.08 & 0.1 & 1e-3 & 2048 & 128 \\\\\\midrule\n Amazon & 0.08 & 0.1 & 1e-3 & 2048 & 128 \\\\\\midrule\n Douban & 0.08 & 0.1 & 1e-3 & 2048 & 128 \\\\\\midrule\n Yahoo!R3 & 0.15 & 0.2 & 5e-4 & 1024 & 128 \\\\\\midrule\n Coat & 0.09 & 0.4 & 5e-4 & 1024 & 64\\\\\\midrule\n \\textbf{LightGCN} \\\\\\midrule\n Tencent & 0.12 & 0.1 & 1e-3 & 2048 & in-batch\\\\\\midrule\n iFashion & 0.14 & 0.1 & 1e-3 & 2048 & in-batch\\\\\\midrule\n Amazon & 0.08 & 0.1 & 1e-3 & 2048 & in-batch\\\\\\midrule\n Douban & 0.14 & 0.1 & 1e-3 & 2048 & in-batch\\\\ \\bottomrule\n \\end{tabular}}\n \\end{table}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nRecent X-ray observations of galaxy clusters reveal various phenomena\nin the gaseous haloes of clusters, such as mergers, cavities, shocks\nand cold fronts (CFs). CFs are thought to be contact discontinuities,\nwhere the density and temperature jump, while the pressure\nremains continuous (up to projection\/resolution effects). They are\ncommon in clusters \\citep{markevitch07} and vary in morphology (they\nare often arcs but some linear or filamentary CFs exist), contrast\n(the density jump is scattered around a value of $\\sim 2$), quiescence\n(some are in highly disturbed merging regions, and some in smooth,\nquiet locations), and orientation (some are arcs around the cluster\ncentres, and some are radial, or spiral in nature). 
CFs have been postulated to\noriginate from cold material stripped during mergers \\citep[][ for\n example]{markevitch00} or sloshing of the intergalactic medium\n\\citep[IGM; ][]{markevitch01,ascasibar06}.\nIn some cases, a metallicity gradient is observed across the CF,\nindicating that it results from stripped gas, or radial motions of gas\n\\citep[see ][and references therein]{markevitch07}. Shear is often\nfound along CFs in relaxed cluster cores, implying nearly sonic bulk flow\nbeneath the CF (towards the cluster's core) \\citep{keshet10}. Whether\nCFs are present at very large radii ($\\gtrsim 500~{\\rm kpc}$) is\ncurrently unknown because of observational limitations.\n\nCold fronts are not rare. \\citet{markevitch03} find CFs in more than\nhalf of their cool core clusters, suggesting that most, if not all,\nsuch clusters harbor CFs \\citep{markevitch07}. \\citet{ghizzardi10}\nsurvey a sample of 42 clusters using XMM-Newton \nand find that 19 out of 32 nearby ($z<0.075$) clusters host at least one CF. Based on \nobservational limitations and the orientations of cold fronts, they\nconclude that this is a lower limit. While many CFs are directly\nattributed to mergers, out of the $23$ clusters in their\nsample that seem relaxed, $10$ exhibit\ncold fronts. These 10 have systematically lower \ncentral entropy values. Another useful compilation of clusters for\nwhich cold fronts have been observed can be found in\n\\citet{owers09}. Out of the 9 high-quality Chandra observations of clusters\nwith observed cold fronts, 3 of them (RXJ1720.1+2638, MS1455.0+2232\nand Abell 2142) have non-disturbed morphologies.\n\nIn what follows, we propose that some CFs are produced by merging of\nshocks. When two shocks propagating in the same direction merge, a CF is always\nexpected to form. Its parameters can be calculated by solving the\nshock conditions and the corresponding Riemann problem. 
Most of the\nscenarios in which shocks are produced (quasars, AGN jets, mergers)\npredict that shocks will be created at, or near, a cluster centre, and\nexpand outwards (albeit not necessarily isotropically). When a\nsecondary shock trails a primary shock, it always propagates faster (supersonically with\nrespect to the subsonic downstream flow of the primary shock), so\ncollisions between outgoing shocks in a cluster are inevitable if they\nare generated within a sufficiently short time interval. We note that\ncontact discontinuities can also form from head-on collisions between\nshocks, and various other geometrical configurations. We restrict\nourselves in this paper to the merging of two shocks propagating in roughly\nthe same direction.\n\nIn \\S~\\ref{sec:riemann} we solve for the parameters of CFs that are\ncaused by merging two arbitrary planar shocks. We show results of a\nshock-tube hydrodynamic simulation that agree with the calculated\nanalytical prediction. In \\S~\\ref{sec:spherical}, we describe our spherical\nhydrodynamic simulations and realistic cluster profiles that are later\nused to investigate\nhow these\nshocks may be produced, and discuss various ways to produce shocks in\ncluster settings. We identify two general mechanisms that\ncreate shocks in clusters: energetic explosions near the centre, or\nchanges and recoils in the potential well of the cluster that\ncorrespond to various merger events.\nThese mechanisms are simulated in \\S~\\ref{sec:explosions} and \\S~\\ref{sec:mergers} \nand also serve to demonstrate the robustness of the shock-induced\ncold fronts (SICFs) that form.\nIn \\S~\\ref{sec:virial} we follow cluster growth and accretion over a\nHubble time by using self-consistent 1D hydrodynamic simulations of gas and\ndark matter. The virial shock that naturally forms serves as a primary\nshock, and we demonstrate that SICFs form when secondary shocks\npropagating from the centre merge with it. 
We show that oscillations of this\nshock around its steady state position can produce a series of SICFs. \nIn \\S~\\ref{sec:stabil_observe} we discuss aspects regarding the stability and\nsustainability of cold fronts in general, and derive some\nobservational predictions that could serve to distinguish between\nCFs produced via this and other mechanisms. In \\S~\\ref{sec:summary}\nwe discuss our findings and conclude. Compact expressions for SICF\nparameters can be found in appendix \\S~\\ref{sec:gamma53}.\n\n\\section{Merging of Two Trailing Shocks}\n\\label{sec:riemann}\n\nIn this section we examine two planar shocks propagating in the same\ndirection through a homogeneous medium and calculate the parameters of\nthe contact\ndiscontinuity that forms when the \nsecond shock (secondary) overtakes the leading (primary) one. This problem is fully\ncharacterized (up to normalization) by the Mach numbers of the primary\nand secondary shocks, $M_0$ and $M_1$. At the instant of collision, a\ndiscontinuity in velocity, density and pressure develops,\ncorresponding to a Riemann problem \\citep[second case\nin][\\S93]{landau59}. The discontinuity evolves into a (stronger)\nshock propagating in the initial direction and a reflected rarefaction\nwave, separated by a contact discontinuity which we shall refer to as\na shock induced CF (SICF). The density is higher (and the entropy\nlower) on the SICF side closer to the origin of the shocks. In\nspherical gravitational systems, this\nyields a Rayleigh-Taylor stable configuration if the shocks are\nexpanding outwards. Now, we derive the discontinuity parameters.\n\n\\subsection{The Discontinuity Contrast}\n\n\\begin{figure}\n\\includegraphics[width=3.5in, trim =190pt 82pt 200pt 30pt,clip=true ]{mach_scheme}\n\\caption{Schematics of shock merging. Pressure p is shown vs. an\n arbitrary spatial coordinate x, which is not fixed between the\n panels. 
{\\it Top:} two shocks, propagating according to the arrows,\n induce transitions between zones $0\\rightarrow 1$ and between $1\\rightarrow 2$. {\\it Middle:} the instant of collision between the shocks. The transition\n $0\\rightarrow 2$ does not correspond to consistent shock jump\n conditions. {\\it Bottom:} at the Lagrangian location of the\n collision an isobaric discontinuity between $3_{o}$ and $3_{i}$\n forms, separating a forward-moving shock \n ($0\\rightarrow 3_{o}$) from a reflected rarefaction (smooth transition\n between $2$ and $3_{i}$).\\label{fig:scheme}}\n\\end{figure}\n\nWe consider an ideal gas with an adiabatic index $\\gamma.$ Before\nthe shocks collide, denote the unshocked region as zone $0$, the\nregion between the two shocks as zone $1$, and the doubly shocked\nregion as zone $2$ (these definitions are demonstrated schematically\nin fig. \\ref{fig:scheme}). After the merging, regions $0$ and $2$ remain\nintact, but zone $1$ vanishes and is replaced by two regions, zone\n$3_{o}$ (the outer region for outgoing\nspherical shocks; adjacent to zone $0$) and zone $3_{i}$ (inner; adjacent to $2$),\nseparated by the SICF. We refer to the plasma density, pressure,\nvelocity and speed of sound respectively as $\\rho$, $p$, $u$ and $c$.\nVelocities of shocks at the boundaries between zones are denoted by\n$v_i$, with $i$ the upstream zone. We rescale all parameters by the\nunshocked parameters $\\rho_0$ and $p_0$.\n\nThe velocity of the leading shock is\n\\begin{eqnarray}\nv_0&=&u_0+M_0~c_0,\\label{eq:v0}\n\\label{eq:v}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\nc_i&=&\\left(\\frac{\\gamma p_i}{\\rho_i}\\right)^{1\/2}\n\\end{eqnarray}\nfor each zone $i$. 
Without loss of generality, we measure velocities\nwith respect to zone $0$, implying $u_0=0$.\n\nThe state of zone $1$ is related to zone $0$ by the Rankine-Hugoniot conditions,\n\\begin{eqnarray}\np_1&=&p_0\\frac{2\\gamma M_0^2-\\gamma+1}{\\gamma+1}, \\label{eq:purho1}\\\\\nu_1&=& u_0 + \\frac{p_1-p_0}{\\rho_0(v_0-u_0)}, \\label{eq:purho2}\\\\\n\\rho_1&=&\\rho_0\\frac{u_0-v_0}{u_1-v_0}. \\label{eq:purho3}\n\\label{eq:purho}\n\\end{eqnarray}\nThe equations are ordered such that each equation uses only known\nvariables from previous equations.\nThe state of the doubly shocked region (zone $2$) is related to zone\n$1$ by reapplying equations \\ref{eq:v}--\\ref{eq:purho} with the\nsubscripts $0,1$ replaced respectively by $1,2$, and $M_0$ replaced by\n$M_1$.\n\nAcross the contact\ndiscontinuity that forms as the shocks collide (the SICF), the\npressure and velocity are continuous but\nthe density, temperature and entropy are not; the CF contrast is\ndefined as ${q}\\equiv\\rho_{3{i}}\/\\rho_{3{o}}$. Regions $0$ and\n$3_{o}$ are related by the Rankine-Hugoniot jump conditions across\nthe newly formed shock,\n\\begin{eqnarray}\np_3&=&p_0\\frac{(\\gamma+1)\\rho_{3{o}}-(\\gamma-1)\\rho_0}{(\\gamma+1)\\rho_0-(\\gamma-1)\\rho_{3{o}}}, \\label{eq:u2u0a}\\\\\nu_3-u_0&=&\\left[(p_3-p_0)\\left(\\frac{1}{\\rho_0}-\\frac{1}{\\rho_{3{o}}}\\right)\\right]^{1\/2} \\,. \\label{eq:u2u0b}\n\\end{eqnarray}\nThe adiabatic rarefaction from pressure $p_2$ down to $p_3=p_{3{i}}=p_{3{o}}$ is determined by\n\\begin{eqnarray}\nu_2-u_3&=&-\\frac{2c_2}{\\gamma-1}\\left[1-\\left(\\frac{p_3}{p_2}\\right)^{(\\gamma-1)\/2\\gamma}\\right] . \\label{eq:u2u0c}\n\\end{eqnarray}\nThe system can be solved by noting that the sum of\neqs. (\\ref{eq:u2u0b}) and (\\ref{eq:u2u0c}) equals $u_2-u_0$, fixing\n$\\rho_{3{o}}$ as all other parameters are known from\neqs. 
(\\ref{eq:v0}-\\ref{eq:u2u0a}).\n\nFinally, the Mach number of the new shock and the rarefacted density are given by\n\\begin{eqnarray}\nM_f^2 & = & \\frac{2\\rho_{3{o}}\/\\rho_0}{(\\gamma+1)-(\\gamma-1)\\rho_{3{o}}\/\\rho_0} \\,, \\\\\n\\rho_{3{i}}&=&\\rho_2(p_3\/p_2)^{1\/\\gamma} . \\label{eq:rho3}\n\\end{eqnarray}\n\n\\begin{figure}\n\\includegraphics[width=3.5in]{m0m1}\n\\caption{Contrast ${q}$ (labeled contours) of a CF generated by a\n collision between two trailing \n shocks with Mach numbers $M_0$ and $M_1,$ for $\\gamma=5\/3$ .\n The color map shows the final Mach number $M_f$ normalized by $M_0~M_1.$\\label{fig:m0m1}}\n\\end{figure}\nClosed-form expressions for ${q}$ and $M_f$ are presented in \\S~\\ref{sec:gamma53}.\nFigure \\ref{fig:m0m1} shows the discontinuity contrast $q$ for various\nvalues of $M_0$ and $M_1,$ for $\\gamma=5\/3.$ Typically ${q}\\sim 2,$\nranging from $1.45$ for $M_0=M_1=2$, for example, to\n${q}_{max}=2.653$. The figure colorscale shows the\ndimensionless factor $f\\equiv M_f\/(M_0 M_1)$, ranging between\n$0.75$ and $1$ for $1<\\{M_0,M_1\\}<100$.\n\n\\begin{figure}\n\\includegraphics[width=3.5in]{figRmax}\n\\caption{ The maximal SICF contrast (solid, left axes) as a function of $\\gamma$.\nIt occurs in a collision between a very strong shock $M_0\\to \\infty$ and a trailing shock of finite Mach number $M_1$ (dashed, right axes).\nFor $\\gamma= 5\/3$, ${q}_{max} = 2.65$ with $M_1 = 6.65$.\\label{fig:qmax}\n}\n\\end{figure}\nThe maximal contrast ${q}_{max}$ depends only on the\nadiabatic index, as shown in fig. 
\\ref{fig:qmax}.\nIt occurs for a very strong primary shock $M_0\\to\\infty$ and a finite secondary $M_1$;\nsee \\S~\\ref{sec:gamma53} for details.\nFor $\\gamma=5\/3$ we find ${q}_{max}=2.653$, achieved\nfor $M_0\\gg 1$ and $M_1\\simeq 6.65$.\n\n\\subsection{Planar Hydrodynamic Example of SICF Formation}\n\\label{sec:planar}\n\\begin{figure}\n\\includegraphics[width=3.6in, trim=35 0 0 65, clip=true]{planar}\n\\caption{Shock tube simulation of a contact discontinuity (SICF)\n formed by merging of two shocks. The piston propagates from $x=0$ in the positive\n direction at a supersonic speed, creating the shock between\n $0\\rightarrow 1$. At $t=0.04$ the piston's velocity\n abruptly increases, causing a second shock to propagate between\n $1\\rightarrow 2$. At the position and time where the shocks merge, a contact\n discontinuity forms between regions $3_{o}$ and $3_{i}$. {\\it Top:}\n density (normalized so that $\\rho_0=1$). Red lines mark the\n trajectories of some Lagrangian cell boundaries. The zones here are\n analogous to those in \\S~\\ref{sec:riemann} and\n fig.~\\ref{fig:scheme}. {\\it Bottom:} entropy\n (normalized so that $S_0\\equiv T_0\/\\rho_0^{2\/3}=1$). Note that there\n is no entropy change between zones $2$ and $3_{i}$ because the\n gas rarefies adiabatically between them. The contrast of the SICF,\n ${q}$, is the density jump between zones $3_{o}$ and\n $3_{i}$. \\label{fig:planar}\n}\n\\end{figure}\nTo demonstrate the formation of an SICF, we present in\nfig.~\\ref{fig:planar} results from a 1D planar hydrodynamic code. The\nnumerical scheme of the hydrodynamic calculation is Lagrangian, with\nthe locations and velocities\ndefined at edges of cells and thermodynamic variables defined at the centres of\ncells. The density $\\rho$ and internal energy $e$ are the free thermodynamic\nvariables, setting the pressure $p$, temperature $T$ and the\nentropy $S$ according to an ideal gas equation of state. 
Throughout the
paper we consider a monoatomic gas, with an adiabatic constant of
$\gamma=5/3$ unless stated otherwise. The time propagation is
calculated by an explicit leap-frog scheme. The timesteps here are
adaptive, and set by a standard Courant-Friedrichs-Lewy
condition. Discontinuities are integrated over by employing
first and second order von Neumann artificial viscosity.

The simulation presented in fig.~\ref{fig:planar} was run with 500
cells. Without loss of
generality, we can assume $\rho_0=e_0=T_0=1$. The definition of
temperature here implies an arbitrary
heat capacity that does not enter into the equations of motion. We define
the entropy as $S\equiv T/\rho^{\gamma-1}$ (the exponent of the usual
formal definition), so $S_0=1$ as well.
We use external boundary conditions of a piston propagating into the
material from $X=0$ in the positive direction at Mach number
$M_1=2.35$. At time $t=0.04$ the piston's velocity is increased to
create a second shock propagating in the same direction, with
$M_2=1.76$ with respect to the previously shocked gas. The contact
discontinuity that forms has a density ratio\footnote{The SICF
 strength is calculated in the simulation according to
 ${q}=(S_{3{i}}/S_{3{o}})^{-1/\gamma}$ because in the rarefied zone $3_{i}$, the
 density is not constant, and so harder to measure directly, while
 the entropy is constant throughout the zone.} ${q}\equiv
\rho_{3{i}}/\rho_{3{o}}=1.466$, while the theoretical prediction
according to equations \ref{eq:v}-\ref{eq:rho3} is ${q}=1.459$, in satisfactory
agreement for the resolution used, and for the accuracy needed in
the discussion presented here.
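The contrast values quoted above are straightforward to reproduce numerically. The sketch below (our illustration, not the paper's code) propagates two successive Rankine-Hugoniot jumps and then solves the overtake Riemann problem by matching pressure and velocity across the contact, with a transmitted shock into the unshocked gas and a rarefaction into the doubly shocked gas; it recovers ${q}\simeq 1.45$ for $M_0=M_1=2$ and ${q}\simeq 1.459$ for the piston parameters of the planar simulation. Variable names and the bisection root finder are ours.

```python
import math

def overtake(M0, M1, gamma=5.0/3.0):
    """Contrast q = rho_3i/rho_3o and transmitted Mach number M_f when a
    trailing shock of Mach M1 overtakes a leading shock of Mach M0."""
    gp, gm = gamma + 1.0, gamma - 1.0
    rho0, p0, u0 = 1.0, 1.0/gamma, 0.0   # upstream state, chosen so c0 = 1

    def shocked(rho, p, u, M):
        # Rankine-Hugoniot jump conditions for a shock of Mach M
        c = math.sqrt(gamma*p/rho)
        return (rho*gp*M*M/(gm*M*M + 2.0),
                p*(2.0*gamma*M*M - gm)/gp,
                u + 2.0*c/gp*(M - 1.0/M))

    rho1, p1, u1 = shocked(rho0, p0, u0, M0)   # singly shocked gas
    rho2, p2, u2 = shocked(rho1, p1, u1, M1)   # doubly shocked gas
    c2 = math.sqrt(gamma*p2/rho2)

    # After the collision: a transmitted shock into state 0 and a
    # rarefaction back into state 2, matched by a common pressure p3
    # and velocity u3 at the contact discontinuity.
    def mismatch(p3):
        u_shock = (p3 - p0)*math.sqrt(2.0/(rho0*(gp*p3 + gm*p0)))
        u_raref = u2 + 2.0*c2/gm*(1.0 - (p3/p2)**(gm/(2.0*gamma)))
        return u_shock - u_raref

    a, b = p0*(1.0 + 1e-9), p2           # bracket: p0 < p3 < p2
    fa = mismatch(a)
    for _ in range(200):                 # plain bisection, no SciPy needed
        m = 0.5*(a + b)
        if fa*mismatch(m) <= 0.0:
            b = m
        else:
            a, fa = m, mismatch(m)
    p3 = 0.5*(a + b)

    rho3o = rho0*(gp*p3 + gm*p0)/(gm*p3 + gp*p0)   # newly shocked gas
    rho3i = rho2*(p3/p2)**(1.0/gamma)              # adiabatically rarefied gas
    Mf = math.sqrt((gp*p3/p0 + gm)/(2.0*gamma))
    return rho3i/rho3o, Mf
```

For example, `overtake(2.0, 2.0)` gives ${q}\approx 1.45$, and `overtake(2.35, 1.76)` gives ${q}\approx 1.459$, matching the theoretical prediction quoted for the shock tube test.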
The
same technique has also been performed for larger Mach numbers and
yielded results with comparable accuracy.

\section{Numerical Investigation of Spherical Cold Fronts}
\label{sec:spherical}
\subsection{Driving Shocks}
We explore different ways to drive central shocks through clusters of
galaxies. The first, most intuitive method is external energy
injection at the centre (i.e. explosions), which would correspond, for
example, to mechanical energy released in
AGN bursts. Observations in X-ray and radio indicate a wide variety of
shocks and sound waves \citep{fabian03b,forman07} emitted repeatedly,
perhaps even periodically, from a central AGN.
Many clusters also include hot and diffuse bubbles, most likely inflated
by AGN jets that are decollimated by the ambient cluster gas
\citep[see review by ][and references
therein]{mcnamara07}.
If these bubbles are to counteract the
overcooling problem, the energies associated with them must be
comparable to the cooling rate of the clusters \citep[$\gtrsim 10^{44}{\rm
 erg~sec}^{-1}$; ][]{edge90,david93,markevitch98} over some fraction
of the Hubble time, indicating that the total energy injection is
$\gtrsim 10^{61}{\rm erg}$.
The process of inflation of these bubbles sends
shocks through the ICM, but at the same time the extremely hot,
underdense bubbles alter the radial
structure of the ICM in a non-trivial manner. In \S~\ref{sec:explosions} we
demonstrate (noting the limitations of the spherically symmetric
analysis in that case) that mergers between shocks driven by such central
explosions can produce SICFs.

Shocks can also be driven gravitationally. Mergers of galaxies and subhaloes
change the structure and depth of the potential well, causing sound
waves and shocks to propagate through the haloes. These shocks
typically do not considerably disturb the ICM structure, leaving a
static gaseous halo after they propagate.
We show in
\S~\ref{sec:mergers} that abrupt changes to the gravitational
potential of the halo directly produce shocks. In particular, in the
first passage stage of a merger, two outward propagating shocks are
produced: one when the halo contracts due to the increase of the
gravitational force, and another when the halo abruptly rarefies once
the merging subhalo flies out. The numerical investigation of explosion- and merger-driven
SICFs is performed using a spherical one-dimensional
hydrodynamic code within a static potential well
\citep[a version of the code ``Hydra'' documented in ][stripped of the
dynamic dark matter evolution and self-gravitating gas]{bd03} that,
along with the initial conditions, is described in
\S~\ref{sec:hydra}.

In \S~\ref{sec:virial} we investigate the interaction of a secondary
shock with the virial shock that is always expected at the edges of
clusters. To produce this primary shock, we simulate the
cosmological evolution of a cluster by using a hydrodynamic
scheme that includes a self-gravitating gas that flows within self-gravitating,
dynamic dark matter shells, and starts from spherical cosmological
overdensities at a redshift $z=100$. We show that perturbations can either be
invoked manually, or occur naturally when infalling dark matter has
sufficiently radial orbits so that it interacts with the central core
directly. In this context, we also show that low amplitude
perturbations in the core send sound waves that can steepen into a
shock that interacts with the virial shock.

\subsection{The ``Hydra'' Hydrodynamical Code}
\label{sec:hydra}
The cosmological implications of shock interactions in clusters are tested
using 1D, spherical hydrodynamic simulations performed with
``Hydra'' \citep{bd03}.
The code is run here in two modes: cosmological and static DM (dark
matter) modes.
In \\S~\\ref{sec:staticdm} we\nuse the code in the static DM mode, a simplified\nconfiguration in which the baryons do not self-gravitate, and there\nis no evolution of the dark matter. Rather, baryons start\nin hydrostatic equilibrium within a fixed cluster-like potential well. In\n\\S~\\ref{sec:virial} we use the code in its cosmological mode,\nutilizing the full\ncapabilities of the code to evolve baryons and dark matter\nself-consistently from a cosmological initial perturbation at high\nredshift to $z=0$. Next, we\nsummarize the physical processes implemented in both modes\nof ``Hydra'' and then highlight the difference in\nboundary conditions and initial conditions corresponding\nto each modus operandi.\n\nSimilar to the planar code described in\n\\S~\\ref{sec:planar}, the code uses a Lagrangian scheme. The\npositions of baryonic spherical shells are defined by their boundaries (with the\ninnermost boundary at $r=0$), and the thermodynamic properties are\ndefined at centres of shells. A summary of the fluid equations that\nare solved,\nand the numerical difference scheme is available in \\citet{bd03}.\nThe code (in its cosmological mode) also evolves thin discrete dark\nmatter shells, that propagate\nthrough the baryons, and interact with them gravitationally. The full\ndiscrete force equation for the ``i''th baryonic shell is thus:\n\\begin{equation}\n\\ddot{r_i}=-4\\pi r^2\\frac{\\Delta P_i}{\\Delta\n M_i}-\\frac{GM_i}{(r_i+\\epsilon)^2}+\\frac{j_i^2}{r_i^3},\n\\label{eq:force}\n\\end{equation}\nwith $r_i,M_i,j_i$ the radius of boundary $i$, the mass (baryonic +\nDM) enclosed\nwithin that radius, and the specific angular momentum prescribed to\nthis shell respectively, and $\\Delta P_i,\\Delta M_i$ the pressure\ndifference and baryonic mass difference between\nthe centres of shells $i+1$ and\n$i$. 
$\\epsilon$ is the gravitational smoothing length ($50~{\\rm pc}$ and\n $500~{\\rm pc}$ in the static DM and cosmological modes respectively).\n\nThe dark matter and baryonic shells are assigned\nangular momentum that acts as an outward centrifugal force designed\nto repel shells that are close to the singularity at the centre, in\nanalogy to the angular momentum of a single star, or of\nthe gaseous or stellar disk in realistic systems, keeping them from falling into the centre\nof the potential well in the absence of pressure support. In\nthe cosmological mode the shells are initially expanding with a\nmodified Hubble expansion, and the angular momentum is added as each shell\nturns around, corresponding to the physical generation of angular\nmomentum via tidal torquing around the maximal expansion radius. In\nthe static DM mode, the angular momentum is present from the beginning\nof the simulation, and is taken into account when the hydrostatic\ninitial profile is calculated. The angular momentum prescription of\nthe baryons is determined by the requirement that as gas passes\nthrough the virial radius, its angular momentum will be some fraction\n\\citep[$\\lambda=0.05$,][]{bullock01_j} of the value needed to support\nit on circular motions. As\nexpected, the gas only becomes angular momentum supported near the\ncentre, and when gas cooling is turned on (in the simulations\npresented below, the angular momentum of the baryons is never important, and\nis presented here only for completeness). The dark matter is\nprescribed angular momentum according to the eccentricity of a shell's\ntrajectory\nas it falls in through the virial radius: $j=v_{\\rm vir}r_{\\rm vir}\\sqrt{2(1-\\beta)},$\n\\citep[corresponding to: $v_\\theta^2=v_\\phi^2=(1-\\beta)v_r^2$;][\n\\S~4.2]{bt87}. $v_{\\rm vir},r_{\\rm vir}$ are the virial velocity and\nradius, and $v_r,v_\\theta,v_\\phi$ are the radial and two tangential\nvelocity components of an infalling trajectory. 
\nLow values\nof $\\beta$ indicate large angular momentum, and $\\beta=1$ corresponds\nto purely radial orbits. $\\beta$ is constant for all the dark matter shells in the\nsimulation. Gas cooling can be turned on, by interpolating from a\ntabulated version of \\citet{sutherland93}, with a prescribed metallicity.\n\nRather than use a leap-frog integration scheme, which is sufficient for most explicit\nhydrodynamic schemes, we use here 4th order Runge-Kutta integration\nover the coupled dark matter and baryonic timestep. This allows us to\nderive a timestep criterion for the dark matter shells, by comparing\nchanges in the\nvelocities and radii of the dark matter and baryons between\nthe 1st and 4th order propagation and\nlimit this difference to some fraction of the velocity and radius. The\ntimestep is determined by the combined Courant Friedrichs Levy\ncondition and this requirement. If the criterion is not met, the\ncode reverts to the beginning of the timestep and recalculates the\nnext step with a reduced timestep.\n\nThe numerical integration scheme,\nangular momentum and cooling is described in \\citet{bd03}. In\naddition, we report here on two new ingredients to the calculation:\nadaptive changes to the grid, and 1D convective model.\nBaryonic shells are now split and merged if they become too thick or\nthin respectively. A shell is split if its width is larger\nthan some fraction (typically $0.2$) of its radius, or if it is larger\nthan some fraction (typically $4$) of the shell directly below it and\nabove it, provided that it is larger than a\nminimal value (typically $2~{\\rm kpc}$), and that the shell is not in\nthe angular momentum\nsupported region near the centre of the halo. Shells are merged when the\ncumulative width of two adjacent shells is smaller than some\nvalue (typically $0.1~{\\rm kpc}$). 
Shell merging is helpful in cases
where two shells are pressed
together, decreasing the timestep to unreasonable values (reasonable
timesteps for cosmological simulations are above $10^{-7}~{\rm Gyr}$).
An additional component included in ``Hydra'' for the first time is a 1D mixing
length convection model that acts to transfer energy outwards in regions where
the entropy profile is non-monotonic \citep{spiegel63}. This model
is described in detail in \citet{bd10}, and is used here only in the final
case described in \S~\ref{sec:virial} in its maximal form: bubbles can
rise until they reach the local speed of sound. In this work, the
inclusion of the maximally efficient mixing length theory convection ensures
that the entropy is always monotonically increasing, in an energetically
self-consistent way.

The initial conditions for a cosmological run are derived by requiring an
average mass evolution history for a halo with some final mass \citep{neistein06}. The
overdensity of each shell at the initial time is calculated so that the shell
passes
through its virial radius ($r_{180}$) at the required time \citep[the
procedure is
described in detail in][]{bdn07}. This procedure has the additional
benefit of setting the initial grid in a way that ensures that
shells pass through the virial radius at constant time intervals.
In this procedure, and in the
cosmological simulation, a force corresponding to the cosmological
constant has been added to
eq.~\ref{eq:force}, making the simulation completely consistent with
$\Lambda$CDM \citep{bdn07} (the cosmological parameters used
throughout the paper are:
$\Omega_m=0.3,~\Omega_\Lambda=0.7,~h=0.7$). This has little overall
effect, and essentially no
impact once the shells have collapsed and the overdensities become non-linear.

In the stripped down, static DM mode, there are no dark matter shells,
and the gas does not self-gravitate. Instead, the value $M_i$ of
eq.
\ref{eq:force} is interpolated from a predefined lookup table.
In this mode, a rigid external boundary condition is applied, in
contrast to a vacuum boundary condition in the cosmological mode. The
inner boundary condition in both cases is trivially satisfied because
all terms in eq. \ref{eq:force} are individually zero there.

\begin{figure}
\includegraphics[width=3.6in, trim=30 0 10 0, clip=true]{profile}
\caption{Radial profiles of the cluster initial conditions. {\it Top:}
 density (red, left axis) and total enclosed mass (green, right axis) of
 the cluster. {\it Bottom:} temperature (red, left axis) and entropy
 (green, right axis), as a function of radius (upper x axis) and
 virial fraction (lower x axis). The total mass profile and the temperature
 profile are prescribed, as well as a pivot point in the entropy
 (a value of $10~{\rm keV~cm^2}$ at $r=0.01R_{\rm vir}$). The density
 and entropy profiles are calculated so that the gas is in
 hydrostatic equilibrium. The
 temperature profile is set to exhibit a decrease by a factor
 $\approx 1.5$ between $0.1$ and $0.01R_{\rm vir}$.\label{fig:profile}}
\end{figure}
For the static DM simulations we wish to define initial conditions
which are typical of clusters. The total mass profile is
defined by an NFW profile \citep{nfw97}, with
$M_{\rm vir}=3\times 10^{14}M_\odot$ and concentration set to $c=5.4$ according
to \citet{bullock01_c}. The virial radius is $1730~{\rm kpc}$. A central galaxy (BCG) is
modeled by a Hernquist profile \citep{hernquist90} superimposed on the
NFW profile, with a sharp cutoff at $10~{\rm kpc}$ and scale length
$a=6~{\rm kpc}$, normalized to $M_{BCG}=3\times 10^{11}M_\odot$. We define an
inward decreasing temperature profile \citep{leccardi08} that peaks
at $r_T=0.1~R_{\rm vir}$ at $T(r_T)=T_{\rm
 vir}=2.7\times 10^7~{\rm K}$ and drops inwards
according to $T(r)=T_{\rm vir}(r/r_T)^{0.2}$.
Outwards, it decreases linearly
from $T(r_T)=T_{\rm vir}$ to $T(R_{\rm vir})=0.7~T_{\rm
 vir}$. Once the
entropy of a single point is defined, the
density profile can be constructed from the hydrostatic requirement
without further degeneracy.
We set $S(0.01R_{\rm vir})=10~{\rm keV~cm^2}$ as our pivot point \citep{cavagnolo09}. The
final baryonic mass of the resulting profile in the ICM is $2.5\times 10^{13}M_\odot$,
which is $\approx 8\%$ of the total mass. The
density, total mass, temperature and entropy profiles are plotted in
fig.~\ref{fig:profile}. We note that while the constructed initial
profile reasonably
resembles that of a realistic cluster, the mechanism that is studied
here, of SICFs, is not sensitive to these details; any merging between
shocks, in any profile, should produce them.

\section{Static DM Simulations}
\label{sec:staticdm}
\subsection{Cluster Explosions}
\label{sec:explosions}
\begin{figure}
\includegraphics[width=3.6in, trim=35 0 0 65, clip=true]{explode_b}
\caption{Time evolution of an initially hydrostatic halo showing
 explosion driven shocks from the centre propagating outwards. The
 energy injection parameters are described in the text. {\it Top:}
 entropy color map on the radius-time plane. The red thin lines mark
 the position of every 50th Lagrangian shell boundary. The black
 dots mark
 shocks, defined as artificial viscosity pressure exceeding
 $10^{-11}~{\rm erg~cm^{-3}}$. These shocks are driven by two explosions, at $t=0.1~{\rm Gyr}$ and
 $0.13~{\rm Gyr}$. The entropy of the shocked gas is saturated
 in this colormap, but the
 entropy jump of the SICF is visible at the Lagrangian location of the collision
 between the shocks. {\it Middle:} temperature colormap of the same
 simulation. The temperature increase induced by the shocks, and the
 temperature jump at the SICF (by a factor of $1.3$), are visible. {\it Bottom:} pressure
 colormap.
The pressure across the SICF is smooth, as expected for
 cold fronts.\label{fig:explode}}
\end{figure}
The first mechanism we investigate to produce shocks is that of cosmic
explosions. Many processes can cause abrupt energy injection at the
centres of clusters, among which are AGNs or starbursts at the BCG and
galaxy-galaxy mergers. The discussion in this section is confined to
injection into the baryonic component itself (changes to the
gravitational potential well will be discussed next). As shocks sweep
through matter,
they constantly lose energy to heating the gas, and when the shocks
expand spherically, they also weaken due to the geometrical increase
in the surface of the shock\footnote{\citet{waxman93,kushnir05}
 showed that for power law density profiles and strong shocks, as
 long as the mass diverges with radius, or, $\rho_{\rm gas}\propto
 r^{-w}$ with $w\leq 3$, the shock will weaken}. We
seek here to create shocks that will survive to some considerable
fraction of the halo
radius, requiring that the amount of energy that is injected be
comparable to the total thermal energy of the gas in the cluster. This
energy scale is similar to that required to solve the overcooling
problem, indicating that any solution to the cluster overcooling
problem via strong shocks will produce shocks that expand to a
significant fraction of the halo radius.

Fig.~\ref{fig:explode} describes the time dependent
evolution of an initially hydrostatic cluster atmosphere, as two
explosions are driven through it. The simulation has $1,000$
logarithmically spaced shells, between $0.0005~R_{\rm vir}$ and
$R_{\rm vir}$. Here, and in the following examples, the cooling is
turned off, which is justified in the absence of any feedback
mechanism in the simulation that would counteract the
overcooling.
The explosions are driven by setting a homologous
($v=v_{\rm exp}r/r_{\rm exp}$) velocity profile with
a highly supersonic velocity ($v_{\rm exp}$), sharply cut off beyond some
radius $r_{\rm exp}$. For
the first explosion, these parameters are $v_{\rm exp}=10,000~{\rm
 km~sec}^{-1}$ and $r_{\rm exp}=20~{\rm kpc}$, injecting
$1.7\times 10^{61}~{\rm erg}$ into the core of the cluster. The second
explosion, $3\times 10^7~{\rm yr}$ later, has a velocity of $v_{\rm exp}=20,000~{\rm km~sec}^{-1}$ at $r_{\rm exp}=40~{\rm kpc}$, and
injects $4.7\times 10^{61}~{\rm erg}$. These large values are not atypical of
the energies estimated in radio bubbles \citep{birzan04,mcnamara07}. While no buoyancy is
possible within a spherically symmetric framework, the explosions do
create extremely hot and dilute regions, and drive strong shocks.
The two shocks can be traced in fig.~\ref{fig:explode} by the breaks in the plotted
Lagrangian lines (velocity jumps), by
the temperature jump, and by the regions in the simulation with large
artificial viscosity (black dots). At the location of merging
between the shocks (at $t=0.15~{\rm Gyr}$ and radius $\approx 150~{\rm
 kpc}$), an SICF (contact discontinuity) forms (visible in
the entropy colormap) that propagates with the matter. We have
followed the simulation for $1~{\rm Gyr}$ and, as expected in the
absence of any diffusive mechanisms, the SICF retains its strength.

The energies and the time interval between the shocks have been chosen arbitrarily to
create a shock collision at $\approx 150~{\rm kpc}$. A series of
sharp, frequent and strong central shocks is expected in some
self-consistent radiative and
mechanical AGN feedback models \citep{ciotti07,ciotti09}, and is
indicated in some observations \citep[for example, ][for Perseus and
M87 respectively]{fabian03b,forman05}.
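The quoted injection energies can be checked to order of magnitude: for a homologous profile truncated at $r_{\rm exp}$ and a roughly uniform density $\rho$, the kinetic energy is $E=\int_0^{r_{\rm exp}}\frac{1}{2}\rho v^2\, 4\pi r^2\, dr=(2\pi/5)\,\rho\, v_{\rm exp}^2\, r_{\rm exp}^3$. The uniform density below is an assumed fiducial core value, not taken from the simulation:

```python
import math

KPC = 3.086e21                  # kpc in cm
rho = 1e-25                     # g/cm^3, assumed mean gas density inside r_exp
v_exp = 1.0e9                   # 10,000 km/s in cm/s
r_exp = 20.0*KPC

# Kinetic energy of a truncated homologous velocity profile, uniform density
E = (2.0*math.pi/5.0)*rho*v_exp**2*r_exp**3
print(f"E ~ {E:.1e} erg")       # a few x 10^61 erg, same order as quoted
```

This is only a consistency check; the simulation's actual energy follows from integrating over its (non-uniform) initial density profile.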
Non-spherical,
frequent collisions may occur between the shocks produced by
the two bubbles of a radio AGN, at a position roughly above one
of the bubbles \citep[the generation of two shocks corresponding to observed
bubbles, as well as of
radial sound waves that steepen into a shock, is discussed
in][]{fabian03b}. The shock from the nearby bubble (primary) arrives at that position
first, and
the shock from the opposite bubble (secondary) arrives later. Given our 1D
framework, we do not attempt to
map the parameter
space of possible explosions. Since shocks weaken as they propagate
outwards, weaker shocks which occur within a shorter time interval
will produce SICFs at smaller radii with the same strength as stronger
shocks driven within a larger time interval.

\subsection{Gravitational Perturbations: Merger Induced Shocks}
\label{sec:mergers}
If the core of an otherwise hydrostatic cluster is gravitationally
perturbed, the reaction of the gas to the perturbation often results
in shocks. While explosions, discussed in \S~\ref{sec:explosions},
typically considerably heat the shocked gas
near the centre, gravitational perturbations make a more subtle change to
the gas, while forming strong outward expanding shocks. The core is perturbed
when gas or dark matter
is accreted onto the centre of the cluster, either smoothly (creating
weak perturbations) or by mergers (causing a more violent and abrupt
shock). In this section we describe the shocks that occur as a
result of a large change to the core potential depth of a cluster due
to mergers.
In \\S~\\ref{sec:virial} we show that weak oscillations in\nthe core send sound waves through the gas, that steepen into shocks.\n\n\\begin{figure}\n\\includegraphics[width=3.6in, trim=35 0 0 65, clip=true]{flyby}\n\\caption{Same as fig.~\\ref{fig:explode}, but now the\n potential well is modified by artificially adding a central object\n of mass $3\\times 10^{13}M_\\odot$, which is then\n removed $2\\times 10^7{\\rm yr}$ later (corresponding to a first passage of\n a 1:10 ratio subhalo). Note that the scales are different from fig.~\\ref{fig:explode}. \\label{fig:flyby}\n}\n\\end{figure}\nFig.~\\ref{fig:flyby} shows a merger event of a cluster with the same initial conditions\ndescribed in \\S~\\ref{sec:hydra} and fig.~\\ref{fig:profile} with a\ndense substructure with total mass of \n$3\\times 10^{13}M_\\odot$ (a $10\\!\\!:\\!\\!1$\nmerger ratio). The\nsimulation describes, in a simplified form, how the\nmain halo will react to the first passage of the substructure near its\ncore. During its first pericenter passage, the subcluster will first\ncause the core to contract, and then, as it flies out, allow the core\nto relax again. Here, we model this in a simplistic way by adding, and\nthen removing (after $2\\times 10^7~{\\rm yr}$), a massive object at the centre of the simulated\ncluster. The addition of the mass causes all the cluster's atmosphere to contract,\naffecting the inner region more than the outer. A coherent\nflow inwards begins, that stops as an outward expanding shock passes\nthrough the cluster, readjusting it to a new hydrostatic equilibrium\n(note in fig.~\\ref{fig:flyby} that after the passage of the primary shock the post-shock gas\nis almost at rest). Once the central object is removed, the halo jolts out from the\ninside out (again, because the addition of the central force affects\nthe inner core the most), causing another shock to\noccur. 
The energy for these two shocks is taken from
the orbital energy of the merging subcluster, and would cause it to
spiral in. In the figure both shocks are clearly visible in the
temperature maps, and through the black dots corresponding to strong
artificial viscosity. At the instant of merging between the shocks (at $t=0.044~{\rm Gyr}$ and radius $\approx 25~{\rm
 kpc}$),
an SICF forms, which is seen in the entropy and temperature
colormaps. The time interval for the merger interaction is chosen manually, but
does not seem improbable, noting that an object approaching the centre
at $1000{~\rm km~sec}^{-1}$, roughly the virial velocity, passes
$20~{\rm kpc}$ during that period. Shocks associated with sequential
core-passages of an inspiraling subhalo could also induce large scale
SICFs. While shocks produced by merger
events have been seen in simulations \citep{mccarthy07},
the formation of SICFs in these simulations has not yet been tested,
and is left for future work.

\section{Virial Shocks and SICFs}
\label{sec:virial}
At the edges of galaxy clusters there are virial shocks with typical
Mach numbers $\sim 30-100$, heating the gas from $\sim 10^4~{\rm K}$
to $\sim 10^7~{\rm K}$ by converting kinetic energy into thermal
energy. Virial shocks have probably been observed by Suzaku \citep{george09,hoshino10}.
The rate of expansion of a virial shock is set on average by the
mass flux and velocity of the infalling material
\citep[see for example][]{bertschinger85}. Secondary shocks that originate from
the centre of clusters due, for example, to the mechanisms discussed in
\S~\ref{sec:spherical}
will collide and merge with the virial shock, creating SICFs at the
locations of the virial shock at the corresponding times.
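The heating quoted for the virial shock follows directly from the Rankine-Hugoniot temperature jump; the short check below (ours, for illustration) evaluates it for the two ends of the quoted Mach number range, assuming $\sim 10^4~{\rm K}$ photoionized pre-shock gas:

```python
def temperature_jump(M, g=5.0/3.0):
    """Rankine-Hugoniot temperature ratio T2/T1 across a shock of Mach M."""
    return (2.0*g*M*M - (g - 1.0))*((g - 1.0)*M*M + 2.0)/((g + 1.0)**2*M*M)

T1 = 1e4  # K, pre-shock gas temperature (assumed)
for M in (30.0, 100.0):
    print(f"M = {M:5.1f}: T2 ~ {T1*temperature_jump(M):.1e} K")
```

For $M=30$-$100$ this gives post-shock temperatures of a few $\times 10^6$ to a few $\times 10^7~{\rm K}$, consistent with the $\sim 10^4\to\sim 10^7~{\rm K}$ heating stated above.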
Since
shock mergers with virial shocks require only one additional shock to form
(rather than two in the general cases described in
\S~\ref{sec:staticdm}), and no
particular timing is required, such SICFs are perhaps even more
probable. When the Mach number of the primary (virial) shock is $M_0\gtrsim 3$,
which is always the case,
the strength of the SICF becomes almost independent of the secondary
shock, allowing for a rather narrow range of strengths around
${q}\approx 2$ as long as the secondary shock is sufficiently strong (fig.~\ref{fig:m0m1}).

We simulate the merging of a virial shock with a secondary shock by self-consistently evolving the dark matter and
baryons from an initial perturbation in the cosmological mode described in \S~\ref{sec:hydra}.
These initial conditions provide a reasonable laboratory for gas dynamics
in the ICM, producing correct temperatures and
densities, and tracking the
virialization process of the gas in a self-consistent way. In order to
capture the production of a secondary shock and its interaction with
the virial shock in a 1D simulation, we examine various artificial
schemes which mimic the full 3D dynamics, as discussed below. A more
detailed simulation of cluster evolution would require 3D simulations
that include AGN feedback and merger and smooth accretion
for baryons and dark matter.
The cosmological simulations here have been performed with $2,000$
baryonic shells and $10,000$ dark matter shells, sufficient according
to our convergence tests.

For the average mass accretion histories used here
(\S~\ref{sec:hydra}), the
accretion rate grows with halo mass, even after taking into account
the late time decline in accretion rates \citep{dekel09}, so
shells that fall in later are more massive. When late,
massive dark matter shells fall into the inner core, their
mass becomes comparable to that of the gas enclosed within that shell, causing
it to vibrate stochastically.
These core vibrations drive sound
waves that are emitted from the centre
and steepen to create shocks (the energy for these
shocks is taken from the orbits of the dark matter shells). We can
control the level of stochastic noise by increasing or decreasing the
angular
momentum prescribed to the dark matter. In the first two ``synthetic''
examples shown here, we use $\beta=0.7$ (see \S~\ref{sec:hydra}) to reduce noise in the
evolution, and then manually add perturbations at time $-7~{\rm Gyr}$
($z=0.84$). In the last example, we decrease the dark matter angular
momentum ($\beta=0.991$), and use the numerical stochastic noise
as a proxy for the perturbations that halo cores undergo during their
formation and accretion. We demonstrate that this noise naturally gives rise
to shocks that interact with the virial shock. This interaction seems
to be quasi-periodic, and we suggest a mechanism for driving this behavior. In
this final example, the entropy profile is sometimes non-monotonic,
implying that convection might be important. We
test the effect of convection using our 1D mixing length theory and
show that it does not significantly reduce the strength of the SICFs
that are produced, and does not change the overall evolution significantly.

\subsection{Manual Initiation of Secondary Shocks}
\label{sec:manual_shocks}
\begin{figure*}
\includegraphics[width=7.in, trim=40 0 50 0, clip=true]{cosmo_madd}
\caption{Cosmological mode simulation of the evolution of a
 $M(z=0)=3\times 10^{14}M_\odot$ cluster. The gray lines mark the evolution of
 every 50th Lagrangian shell,
 and the colormaps are entropy ({\it top}) and temperature ({\it
 bottom}). The initial Hubble expansion, turnaround and the virial
 shock are visible.
Initially, at $z=100$, shells expand due to the
 near-Hubble flow. At time $-7~{\rm Gyr}$ a central object with mass
 $3\times 10^{13}M_\odot$ is permanently added, causing a shock to
 propagate outwards.
The collision between this shock and the virial
 shock creates an SICF, visible in the entropy colormap, which
 persists to $z=0$.\label{fig:cosmo_madd}}
\end{figure*}
\begin{figure*}
\includegraphics[width=7.in, trim=40 0 50 0, clip=true]{cosmo_msin}
\caption{A cosmological mode simulation with the same initial
 conditions as in fig.~\ref{fig:cosmo_madd}, but with the core
 perturbed briefly between $-7$ and $-6.75~{\rm Gyr}$ (see
 \S~\ref{sec:manual_shocks} for details). The perturbation
 is a sinusoid in time, with an amplitude of $3\times 10^{13}M_\odot$ and
 a period of $5\times 10^7~{\rm yr}$, followed by a constant left-over
 mass of $5\times 10^{12}M_\odot$. Weak shocks and sound waves are sent
 outwards, steepening into a single shock that collides with the
 virial shock, producing an SICF.\label{fig:cosmo_msin}}
\end{figure*}
Fig.~\ref{fig:cosmo_madd} describes a typical evolution of a cluster
that ends up with mass
$3\times 10^{14}M_\odot$ at $z=0$, and the subsequent interaction of the
virial shock with a
secondary merger induced shock. The merger here is $10\!\!:\!\!1$, with a
central mass of $3\times 10^{13}M_\odot$ instantaneously added at $-7~{\rm
 Gyr}$ (at that time the cluster's virial mass
is $\approx 10^{14}M_\odot$). Unlike the procedure described in
\S~\ref{sec:mergers}, we do
not remove the merged mass, creating only one shock. The Hubble
expansion, turnaround and virial shocks are seen,
as well as the SICF formation at time $\approx -5.5~{\rm Gyr}$ and at
radius $\approx 700~{\rm kpc}$. After the deepening of the gravitational
potential well, the secondary shock mediates the re-establishment of
hydrostatic equilibrium.
Thus, the post-shock gas, and the SICF, are roughly at
rest in the Eulerian laboratory frame of reference.
The post-shock temperatures, particularly below $100~{\rm kpc}$, are
unrealistically high, corresponding, perhaps, to the disturbed morphologies seen in
cluster merger events \citep[the bullet cluster; ][]{tucker98}, although no entropy inversion occurs there. This
is remedied somewhat when the shock is initiated by a series of weaker
shocks and sound waves that steepen into a strong shock further from
the cluster's centre (fig.~\ref{fig:cosmo_msin}), and is completely
alleviated when the shocks are due to persistent stochastic noise near
the core (\S~\ref{sec:natural_shocks}), generating profiles which are
also consistent with quiescent cool core clusters.

We produce a weaker, more local perturbation by engineering a
perturber that sends weak shocks and sound waves that steepen into a
shock.
Fig.~\ref{fig:cosmo_msin} shows an evolution of the same cluster as in
fig.~\ref{fig:cosmo_madd}, but with a central perturber whose mass is
sinusoidally changing between $-7.1~{\rm Gyr}^{-1}$,
 blue dots), and CFs are traced by their large entropy gradients
 ($\partial{\rm ln} S/\partial {\rm ln}r>0.5$, green dots).
 Rarefaction waves can be seen as small motions of the Lagrangian
 shells (illustrated by the dashed magenta curve, manually added
 based on a time series analysis). Their trajectory is approximately
 the reflection of the preceding outgoing compression, time inverted
 about the last secondary-virial shock collision.
 \label{fig:time}}
\end{figure}

\begin{figure}
\includegraphics[width=3.5in, trim =40pt 0pt 50pt 0pt,clip=true ]{final_profiles}
\caption{Thermodynamic profiles of the simulated cluster shown in
 figures \ref{fig:cosmo_ad}-\ref{fig:time} at $z=0$ (solid lines) and of the same
 simulation with maximal convection (dashed lines). {\it Top:}
 entropy (red, left axis) and pressure (green, right
 axis).
The entropy of the convective simulation has been raised by\n $0.5~{\\rm dec}$ to distinguish between the models. {\\it Bottom:} density (red, left axis) and temperature\n (green, right axis). The positions of CFs in the adiabatic, non-convective\n simulations are marked with blue dashed\n vertical lines.\\label{fig:prof}\n }\n\\end{figure}\n\n The calculations presented in Figures\n\\ref{fig:cosmo_ad}-\\ref{fig:prof} are adiabatic. When cooling is\nturned on, and\nin the absence of feedback, this simulation suffers from overcooling,\ncreating an overmassive BCG of $2\\times 10^{12}M_\\odot$ and BCG\naccretion rates (corresponding to star\nformation rates) of $\\gtrsim 100~M_\\odot{\\rm yr}^{-1}$, and the luminosity\nexceeds the $L_x-T$ relation \\citep{edge90,david93,markevitch98}. The excitation of the basic mode, and\nperiodic CFs, are observed in all these simulations. Since the virial shock's\nstrength is non-monotonic, it is not surprising that between SICF\nformation, the post-shock entropy is non-monotonic. This introduces a\nproblem in 1D simulations where convection due to entropy inversion\ncannot naturally occur. We test this by rerunning the adiabatic\nsimulation with 1D convection according to mixing length theory\n\\citep{spiegel63}, with the mixing length coefficient set such that\nthe convection is limited only by the requirement that bubbles never\nexceed the sonic speed. This acts to redistribute the energy and\nentropy between SICFs, but does not change the overall dynamics. The\nfinal profile in this case is plotted in the dashed lines of\nfig.~\\ref{fig:prof}. The strength of the SICF in this case is reduced,\nbut it is still clearly visible, with typical values of ${q}\\approx\n1.5$. It is highly improbable that convection operates in its maximal physical\nefficiency, particularly in the presence of ICM magnetic\nfields. 
\\citet{parrish08b,parrish09} show that in the presence of weak\nmagnetic fields the effective convection is smaller by many orders of\nmagnitude than the heat fluxes carried by conduction, implying that it\nis highly suppressed relative to its mixing-length theoretical value (Ian\nParrish, private communication).\n\nWe note a qualitative similarity between reverberations of the virial accretion shocks\ndiscussed here, and another prominent case of accretion shocks in\nastrophysics: type II core-collapse supernovae. \\citet{burrows07} find that\nstalled accretion shocks around type II cores are unstable to 2D\n(g-mode) reverberations that,\nafter enough cycles, accumulate\nsufficient energy and amplitude to cause\nan outbreak of the stalled shock. While the\nstanding accretion shock instability (SASI) is predominantly 2D, and\nacts on different scales than discussed here, it does draw its\nenergy from the accreted gas, and is perturbed by sound waves\nemitted from the vibrating core, which accumulate into a single\nunstable mode.\n\n\\section{Stability and Observability of Shock Induced Cold Fronts}\n\\label{sec:stabil_observe}\n\\subsection{Stability}\n\\label{sec:stability}\nThe stability of CFs limits the time duration over which they are\ndetectable, and so is important when comparing CF formation models\nwith observations. Various processes can cause a gradual breakup or\nsmearing of the CF. They act in other CF formation models as well.\n\nThermal conduction and diffusion of particles across the CF smears the\ndiscontinuity on a timescale that is set by the thermal velocity and\nthe mean free path of the protons. The Spitzer m.f.p. $\\lambda$ in an\nunmagnetized plasma with typical cluster densities is a few kpc, and\ndepends on the thermal conditions on both sides of the discontinuity\n\\citep{markevitch07}. 
Taking $\\lambda \\sim 10~{\\rm kpc}$ and a thermal\nvelocity $\\bar{v}\\sim 1000~{\\rm km \\, sec}^{-1},$ and assuming that a\nCF is visible if it is sharper than $L_{\\rm obs}\\sim 10~{\\rm kpc},$ we\nget a characteristic timescale for CF dissipation,\n\\begin{equation}\nt\\sim \\frac{L_{\\rm obs}^2}{D}\\sim\\frac{3L_{\\rm\n obs}^2}{\\bar{v}\\lambda}\\sim 10^7{\\rm yr}.\n\\end{equation}\nThis result indicates that in order for shock-induced CFs to be\nobservable, either (i) they are formed frequently \\citep[e.g., by a\nseries of AGN bursts; see][]{ciotti07,ciotti09}; or (ii) magnetic\nfields reduce the m.f.p. considerably, as some evidence suggests\n\\citep[see the discussion in][]{lazarian06}.\n\nHeat flux driven buoyancy instability \\citep[HBI; ][]{parrish08,quataert08,parrish09} tends to\npreferentially align magnetic fields perpendicular to the heat flow in\nregions where the temperature decreases in the direction of gravity.\nThis effect has been argued to reduce radial diffusion within the inward cooling\nregions in the cores of cool core clusters.\nAs suggested in \\citet{parrish08}, the temperature gradient across a\nCF is opposite to the direction of\ngravity, and the HBI is expected to develop quickly there (over $8~{\\rm\n Myr}$ in their example), aligning the magnetic fields parallel to\nthe discontinuity and\ndiminishing radial diffusion. This possibility needs to be addressed\nfurther by detailed numerical magneto-hydrodynamic simulations.\nAlso, the timescale for achieving local thermal equilibrium between\nthe electrons and ions below the virial shocks is long. It is unclear\nwhat the proper diffusion \ncoefficients are in that case, and a particle transport analysis needs\nto be applied. 
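The dissipation-timescale estimate above can be checked numerically. The following sketch (ours, using the fiducial values quoted in the text and standard unit conversions) evaluates $t\sim 3L_{\rm obs}^2/(\bar{v}\lambda)$:

```python
# Order-of-magnitude check of the CF smearing timescale t ~ 3 L_obs^2 / (v_bar * lambda),
# with D ~ v_bar * lambda / 3. Fiducial numbers are those quoted in the text.
KPC_KM = 3.086e16      # kilometres per kiloparsec
YR_S = 3.156e7         # seconds per year

def smearing_time_yr(L_obs_kpc=10.0, v_bar_kms=1000.0, mfp_kpc=10.0):
    """Diffusion time across a front of width L_obs, in years."""
    L = L_obs_kpc * KPC_KM        # km
    lam = mfp_kpc * KPC_KM        # km
    t_s = 3.0 * L**2 / (v_bar_kms * lam)   # seconds
    return t_s / YR_S

print(f"t ~ {smearing_time_yr():.1e} yr")   # ~3e7 yr, i.e. of order 10^7 yr
```

The result, roughly $3\times 10^7$ yr, reproduces the order of magnitude quoted in the equation above.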
These important issues are left for future work.\n\n\\begin{figure}\n\\includegraphics[width=3.3in]{mr}\n\\caption{Fractional decline (contours) in initial CF contrast ${q}$\n induced by a collision with a shock of Mach number $M_1$ arriving from the dense CF side, for $\\gamma=5\/3.$\n The vertical dashed line corresponds to the maximal SICF contrast, ${q}_{max}$.\n \\label{fig:mr}}\n\\end{figure}\nCFs could also be degraded by subsequent shocks that sweep outwards across\nthe CF. This is true regardless of their formation mechanisms.\nHowever, the CF contrast is only slightly diminished by this\neffect. This is shown analytically by recalculating the Riemann\nproblem described in \\S~\\ref{sec:riemann} but starting from a general\ncontact discontinuity with strength \n${q}$ instead of a primary shock. For ${q}=2$ this decrement ranges from $4\\%$ for $M_1<2$ to\n$20\\%$ for $M_1\\gtrsim 10$, as\nshown in fig.~\\ref{fig:mr}. Such passing shocks seed\nRichtmyer-Meshkov instabilities which could cause the CF to break\ndown. Although less efficient than Rayleigh-Taylor instabilities,\nthese instabilities operate regardless of the alignment with gravity.\nThe outcome depends on $M_1$ and on any initial small perturbations in\nthe CF surface. However, the instability is suppressed if the CF\nbecomes sufficiently smoothed.\n\nIt is worth noting that KHI, which could break down CFs formed through\nram pressure stripping and sloshing, does not play an important role\nin SICFs because there is little or no shear velocity across them.\nHowever, the stabilizing alignment of magnetic fields caused by\nsuch shear \\citep{markevitch07,keshet10} is also not expected here.\n\n\\subsection{Observability}\n\\label{sec:observe}\n\\citet{owers09} present high quality Chandra observations of\n$3$ relaxed CFs. They all seem concentric with respect to the cluster\ncentre, and spherical in appearance. 
They interpret these as\nevidence for sloshing, and find spiral characteristics in $2$ of\nthem. In the cold front sample of \\citet{ghizzardi10}, $10$ of the\nrelaxed morphology clusters also host cold fronts. We argue here that\nsome CFs of this type could originate from \nshock mergers. In the SICF model, CFs are spherical, unlike the\ntruncated CFs formed by ram pressure stripping or sloshing, so no\nspecial projection orientation is required. On the other hand, SICFs\ndo not involve metallicity discontinuities, observed in some of these\nCFs. The observed contrasts of these three CFs in \\citet{owers09} are quite uniform, with best\nfit values in the range $2.0-2.1$ (with $\\sim 20\\%$ uncertainty) in\nall three, as expected for SICFs\\footnote{However, shocks sweeping\n over CFs could diminish their contrast towards ${q}\\sim 2$,\nregardless of CF origin. A larger, more complete sample of CFs is\nneeded to determine the origin\/s of the relaxed population.}.\n\nThe following characteristics are specific to SICFs, and could thus be\nused to differentiate between SICFs and CFs formed by other\nmechanisms. These are also predictions for SICFs that are expected to\nform at radii that are not observable today.\n\n{\\bf Morphology.} SICFs are quasi-spherical around the source of the\nshocks. In contrast, for example, a galactic merger defines an orbit plane, and the\ncorresponding perturbation (stripped material or sloshing and centre displacement)\nwill create CFs\nparallel to\nthis plane. In some sloshing\nscenarios \\citep{ascasibar06} the CFs extend considerably above the\nplane, but would never cover all\nviewing angles; an observed\nclosed CF ``ring'' would strongly point towards an SICF. The statistical\nproperties of a large CF sample could thus be used to distinguish\nbetween the different scenarios. \\citet{ghizzardi10} identify 23\napparently relaxed clusters, 10 of which exhibit cold fronts. 
They find a\n{\\it perfect} correlation between central entropy levels and cold fronts, with\nthe 10 lowest central entropy clusters exhibiting cold fronts. This\ncorrelation is also consistent with convection of low entropy gas from the\ncentre \\citep{markevitch07,keshet10}, but would require that all $10$\nCFs are viewed from a favorable viewing angle. We offer here an\nalternative interpretation, in which the low\nentropy and short cooling times imply relatively more active\nAGNs and consequently the formation of shocks on shorter periods\n\\citep{ciotti09}. The SICFs naturally cover $4\\pi$ of the sky,\nalleviating the viewing angle statistical requirement.\n\n{\\bf Amplitude.} SICFs have distinct entropy and density contrasts\nthat depend weakly on shock parameters; $q$ is typically larger than\n$\\sim 1.4$ (assuming $M_1\\geq 2$) and is always smaller than\n$q_{max}=2.65$. This is consistent with observations.\n\n{\\bf Extent.} If cluster oscillations, as described in\n\\S~\\ref{sec:natural_shocks} are excited, SICF radii\nshould be approximately logarithmically spaced.\nAny shock that expands and collides with the virial shock will create\nan SICF at the location of the virial shock, far beyond the core.\nThe external SICFs are\nyounger, and would appear sharper owing to less degradation due to the\nprocesses discussed in \\S~\\ref{sec:stability}. This concentric CF\ndistribution thus resembles tree rings. Deep observations capable of\ndetecting CFs at $r\\sim 1~{\\rm Mpc}$ are predicted to find SICFs\n(fig.~\\ref{fig:prof}). Such distant CFs occur naturally only in the\nSICF model \\citep[observations by Suzaku may be able\nto probe this range with sufficient quality in the near future;\n][]{hoshino10}.\n\n{\\bf Plasma diagnostics.} Shocks are known to modify plasma\nproperties in a non-linear manner, for example by accelerating\nparticles to high energies and amplifying\/generating magnetic\nfields. 
The plasmas on each side of an SICF may thus differ, being\nprocessed either by two shocks or by one, stronger shock. This may\nallow indirect detection of the CF, in particular if the two shocks\nwere strong before the collision. For example, enhanced magnetic\nfields below the CF may be observable as excess synchrotron emission\nfrom radio relics that extend across the CF, in nearby clusters, using\nfuture high-resolution radio telescopes (MWA,LOFAR, SKA).\n\n\\section{Summary and Discussion}\n\\label{sec:summary}\n\nOur study consists of two parts. The first is a general, analytic discussion\nabout cold fronts that form as a result of a merging between a primary and\nsecondary shock propagating in the same direction. This is a novel\nmechanism to create cold fronts discussed here for the first time.\nThe second part\ndescribes the possible relevance of this SICF mechanism in cluster\nenvironments using 1D spherical hydrodynamical simulations.\n\nWe have shown in \\S~\\ref{sec:riemann} that when shocks moving in the\nsame direction merge they generate a CF. The density contrast across\nthe CF is calculated as\na function of the Mach numbers of the two shocks. It is typically\nlarger than $1.4$ (if $M\\gtrsim2$), and is always smaller than\n${q}_{max}=2.65$. We support the analytical calculation by a detailed\ninvestigation of a shock tube planar hydrodynamical simulation.\n\nIn \\S~\\ref{sec:staticdm} and \\S~\\ref{sec:virial}, using a 1D spherical hydrodynamic code, we demonstrate that SICFs in\nclusters are a natural consequence of shocks that are generated at\ncentres of clusters. 
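The single-shock compressions that enter the contrast calculation summarized above follow the standard Rankine-Hugoniot jump conditions. A minimal sketch (textbook gas dynamics, not the paper's full two-shock Riemann calculation that yields the bounds $1.4\lesssim q \leq q_{max}=2.65$):

```python
# Standard Rankine-Hugoniot density compression across a shock of Mach number M
# in an ideal gas with adiabatic index gamma. This is the elementary building
# block of the two-shock Riemann problem; the SICF contrast q itself is the
# paper's result and is not rederived here.
def compression(M, gamma=5.0 / 3.0):
    return (gamma + 1.0) * M**2 / ((gamma - 1.0) * M**2 + 2.0)

print(round(compression(2.0), 2))   # 2.29 (= 16/7) for gamma = 5/3
print(round(compression(1e4), 2))   # 4.0: strong-shock limit (gamma+1)/(gamma-1)
```

The compression saturates at $(\gamma+1)/(\gamma-1)=4$ for $\gamma=5/3$, which is why the contrast between once- and twice-shocked gas stays bounded.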
We entertain two ways to invoke shocks: by\ninjecting large amounts of energy near the centre (corresponding to AGN\noutbursts), and by abruptly changing the potential well of the cluster\n(corresponding to merger events).\n Finally, by simulating cluster evolution from initial cosmological\n perturbations over a Hubble time we show that outgoing shocks that\n merge with the virial shock create very distinctive CFs. These shocks\n can be caused by a critical event (merger or explosion) but can also\n be invoked by stochastic oscillations of the cluster's core, caused\n by accretion of low angular momentum substructure that perturbs the\n core. We show that a reverberation\nmode exists in the haloes of galaxies and clusters that causes\nperiodic merging between the virial shocks and secondary shocks,\nproducing an SICF every cycle.\n The simulated SICF contrast is\nconsistent with the theoretical predictions of \\S~\\ref{sec:riemann}.\nA more thorough investigation of this potentially important mode,\nincluding its stability in 3D, is left for future work.\n\n\nWe then discuss in \\S~\\ref{sec:stability} the\nsurvivability and degradation in time of CFs after they are formed. The\nCF discontinuity is smeared over time by diffusion,\nat a rate that depends on the unknown nature and amplitude of magnetic\nfields.\nCFs are susceptible to heat-flux-driven buoyancy instability\n\\citep[HBI; ][]{parrish08},\nwhich could align the magnetic field tangent to the CF and potentially\nmoderate further diffusion. SICFs, like all other CFs, are subject to\nRichtmyer-Meshkov instabilities from subsequent shocks passing through\nthe cluster. Such collisions also reduce the CF contrast until it reaches\n$q\\sim 2$. Unlike most other CF models, an SICF is not expected to\nsuffer from KHI.\n\nThe predicted properties of SICFs are presented in\n\\S\\ref{sec:observe}, and reproduce some of the CF features discussed\nin \\citet{owers09} and \\citet{ghizzardi10}. 
In particular, we suggest that CFs in relaxed\nclusters, with no evidence of mergers, shear, or chemical\ndiscontinuities, may have formed by shock collisions. We list the\nproperties of SICFs that could distinguish them from CFs formed by other\nmechanisms.\nThe SICF model predicts quasi-spherical CFs which are concentric about\nthe cluster centre, with contrast ${q} \\sim 2,$\nand possibly extending as far out as the virial shock.\nAn observed closed (circular\/oval) CF could only be an SICF.\nIn the specific case of cluster reverberation,\na distinct spacing pattern between CFs is expected.\nIt may be possible to detect them indirectly, for example as\ndiscontinuities superimposed on peripheral radio emission.\n\nShocks originating from the cluster centre naturally occur in feedback\nmodels that are invoked to solve the overcooling problem. They are\nalso formed by mergers of substructures with the BCG. Thus, SICFs\nshould be a natural phenomenon in clusters. Further work is needed to\nassess how common SICFs are with respect to other types of CFs, and to\ncharacterize inner SICFs that could result, for example, from\nmergers between offset AGN shocks. The properties of SICFs\nin 3D will be pursued in future work.\n\n\\section*{Acknowledgements}\nWe thank I. Parrish, Du{\\v s}an Kere{\\v s} and M. Markevitch for\nuseful discussions and the referee, Trevor Ponman, for helpful\nsuggestions. YB acknowledges the \nsupport of an ITC fellowship from the Harvard College Observatory. 
UK\nacknowledges support by NASA through Einstein Postdoctoral Fellowship\ngrant number PF8-90059 awarded by the Chandra X-ray Centre, which is\noperated by the Smithsonian Astrophysical Observatory for NASA under\ncontract NAS8-03060.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{s1}\nA non-well-founded proof is usually defined as a possibly infinite tree of formulas (sequents) that is constructed according to inference rules of a proof system and, in addition, that satisfies a particular condition on infinite branches. A cyclic, or circular, proof can be defined as a finite pointed graph of formulas (sequents) whose unraveling yields a non-well-founded proof. These proofs turn out to be an interesting alternative to traditional proofs for logics with inductive and co-inductive definitions, fixed-point operators and similar features. For example, proof systems allowing cyclic proofs can be defined for the modal $\\mu$-calculus \\cite{AfLe}, the Lambek calculus with iteration \\cite{Kuz} and for Peano arithmetic \\cite{Sim}. In the last case, these proofs can be understood as a formalization of the concept of proof by infinite descent.\n\nStructural proof theory of deductive systems allowing cyclic and non-well-founded proofs seems to be underdeveloped. In \\cite{ForSan}, J.~Fortier and L.~Santocanale considered the case of the $\\mu$-calculus with additive connectives. They present a procedure eliminating all applications of the cut rule from a cyclic proof, resulting in an infinite tree of sequents. However, they do not show that this tree satisfies the guard condition on infinite branches, which is necessary for non-well-founded proofs in the $\\mu$-calculus. Unfortunately, we are not aware of other syntactic cut-elimination results for systems with non-well-founded proofs. \nHere we present a sequent calculus for the Grzegorczyk modal logic allowing non-well-founded proofs and obtain the cut-elimination theorem for it. 
This article is an extended version of the conference paper \\cite{SavSham}.\n\nThe Grzegorczyk modal logic $\\mathsf{Grz}$ is a well-known modal logic \\cite{Maks}, which can be characterized by reflexive partially ordered Kripke frames without infinite ascending chains. This logic is complete w.r.t. the arithmetical semantics, where the modal connective $\\Box$ corresponds to the strong provability operator \\textit{\"... is true and provable\"} in Peano arithmetic. There is a translation from $\\mathsf{Grz}$ into the G\\\"{o}del-L\\\"{o}b provability logic $\\mathsf{GL}$ such that \n$$\\mathsf{Grz} \\vdash A \\Longleftrightarrow \\mathsf{GL} \\vdash A^\\ast,$$ where $A^\\ast$ is obtained from $A$ by replacing all subformulas of the form $\\Box B$ by $B \\wedge \\Box B$.\n\nRecently, a new proof-theoretic presentation for the G\\\"{o}del-L\\\"{o}b provability logic $\\mathsf{GL}$ in the form of a sequent calculus allowing non-well-founded proofs was given in \\cite{Sham, Iemhoff}. \nWe wonder whether cyclic and, more generally, non-well-founded proofs can be fruitfully considered in the case of $\\mathsf{Grz}$.\nWe consider a sequent calculus allowing non-well-founded proofs for the Grzegorczyk modal logic and present the cut-elimination theorem for the given system.\nIn order to avoid nested co-inductive and inductive reasoning, we adopt an approach from denotational semantics of computer languages, where program types are interpreted as ultrametric spaces and fixed-point combinators are encoded using the Banach fixed-point theorem (see \\cite{BaMa}, \\cite{Esc}, \\cite{Breu}). We consider the set of non-well-founded proofs of $\\mathsf{Grz}$ and various sets of operations acting on these proofs as ultrametric spaces and define our cut-elimination operator using the Prie\\ss-Crampe fixed-point theorem (see \\cite{PrCr}), which is a strengthening of Banach's theorem. 
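The ultrametric fixed-point method can be illustrated on a toy example (our own sketch, independent of the paper's construction on $\infty$-proofs): on streams with $d(x,y)=2^{-n}$, $n$ the length of the longest common prefix, a prefix-extending operator is $1/2$-contractive, so by Banach's theorem it has a unique fixed point, obtained as a limit of iterates.

```python
# Toy illustration of the Banach fixed-point method on an ultrametric space.
# Streams are approximated by finite prefixes; d(x, y) = 2^{-n} where n is the
# length of the longest common prefix. (Our sketch, not the paper's construction.)
def dist(x, y):
    n = 0
    for a, b in zip(x, y):
        if a != b:
            break
        n += 1
    return 2.0 ** (-n)

def F(s):
    # Prepend 0 and complement the tail: a 1/2-contractive operator whose
    # unique fixed point is the alternating stream 0, 1, 0, 1, ...
    return [0] + [1 - b for b in s]

x = [1] * 6
for _ in range(6):
    x = F(x)       # each application halves the distance to the fixed point
print(x[:6])       # [0, 1, 0, 1, 0, 1]
```

Starting from an arbitrary prefix, after $k$ iterations the first $k$ symbols agree with the fixed point, mirroring how the cut-elimination operator is approximated fragment by fragment.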
\nIn this paper, we also establish that in our sequent system it is sufficient to consider only unravellings of cyclic proofs instead of arbitrary non-well-founded proofs in order to obtain all provable sequents. As an application of cut-elimination, we obtain the Lyndon interpolation property for the Grzegorczyk modal logic $\\mathsf{Grz}$ proof-theoretically. \n \n\n\n\nRecall that the Craig interpolation property for a logic $\\mathsf{L}$ says that if $A$ implies $B$, then there is an interpolant, that is, a formula $I$ containing only common variables of $A$ and $B$ such that $A$ implies $I$ and $I$ implies $B$. The Lyndon interpolation property is a strengthening of the Craig one that also takes into consideration negative and positive occurrences of the shared propositional variables; that is, the variables occurring in $I$ positively (negatively) must also occur both in $A$ and $B$ positively (negatively).\n\nThough the Grzegorczyk logic has the Lyndon interpolation property \\cite{Maks2}, there has seemingly been no syntactic proof of this result. \nIt is unclear how Lyndon interpolation can be obtained from previously introduced sequent systems for $\\mathsf{Grz}$ \\cite{Avron, BorGen, DyNe} by direct proof-theoretic arguments because these systems contain inference rules in which a polarity\nchange occurs under the passage from the principal formula in the conclusion to its immediate ancestors in the premise. In this article, we give a syntactic proof of Lyndon interpolation for the Grzegorczyk modal logic as an application of our cut-elimination theorem.\n \nThe paper is organised as follows. In Section \\ref{SecPrel} we recall a standard sequent calculus for $\\mathsf{Grz}$. In Section \\ref{SecNWF} we introduce the proof system $\\mathsf{Grz_{\\infty}}$ that allows non-well-founded proofs and prove its equivalence to the standard one. 
In Section \\ref{SecUlt} we recall basic notions of the theory of ultrametric spaces and consider several relevant examples. In Section \\ref{SecAdm} we state the admissibility of several rules for our system that will be used later. In Section \\ref{SecCut} we establish the cut-elimination result for the system $\\mathsf{Grz}_\\infty$ syntactically. In Section \\ref{SecLyn} we prove the Lyndon interpolation property for the logic $\\mathsf{Grz}$. Finally, in Section \\ref{SecCyc} we establish that every provable sequent of $\\mathsf{Grz}_\\infty$ has a cyclic proof.\n\n\n\n\n\\section{Preliminaries}\n\\label{SecPrel}\nIn this section we recall the Grzegorczyk modal logic $\\mathsf{Grz}$ and define an ordinary sequent calculus for it.\n\n\\textit{Formulas} of $\\mathsf{Grz}$, denoted by $A$, $B$, $C$, are built up as follows:\n$$ A ::= \\bot \\,\\,|\\,\\, p \\,\\,|\\,\\, (A \\to A) \\,\\,|\\,\\, \\Box A \\;, $$\nwhere $p$ stands for atomic propositions. \nWe treat other boolean connectives and the modal operator $\\Diamond$ as abbreviations:\n\\begin{gather*}\n\\neg A := A\\to \\bot,\\qquad\\top := \\neg \\bot,\\qquad A\\wedge B := \\neg (A\\to \\neg B),\n\\\\\nA\\vee B := (\\neg A\\to B),\\qquad\\Diamond A := \\neg\\Box \\neg A.\n\\end{gather*}\n\n\n\nThe Hilbert-style axiomatization of $\\mathsf{Grz}$ is given by the following axioms and inference rules:\n\n\\textit{Axioms:}\n\\begin{itemize}\n\\item[(i)] Boolean tautologies;\n\\item[(ii)] $\\Box (A \\rightarrow B) \\rightarrow (\\Box A \\rightarrow \\Box B)$;\n\\item[(iii)] $\\Box A \\rightarrow \\Box \\Box A$;\n\\item[(iv)] $\\Box A \\rightarrow A$;\n\\item[(v)] $\\Box(\\Box(A \\rightarrow \\Box A) \\rightarrow A) \\rightarrow \\Box A$.\n\\end{itemize}\n\n\\textit{Rules:} modus ponens, $A \/ \\Box A$. \\\\\n\n\nNow we define an ordinary sequent calculus for $\\mathsf{Grz}$. A \\textit{sequent} is an expression of the form $\\Gamma \\Rightarrow \\Delta$, where $\\Gamma$ and~$\\Delta$ are finite multisets of formulas. 
For a multiset of formulas $\\Gamma = A_1,\\dotsc, A_n$, we define $\\Box \\Gamma$ to be $\\Box A_1,\\dotsc, \\Box A_n$.\n\n\n\nThe system $\\mathsf{Grz_{Seq}}$, which is a variant of the sequent calculus from \\cite{Avron}, is defined by the following initial sequents and inference rules: \n\\begin{gather*}\n\\AXC{ $\\Gamma, A \\Rightarrow A, \\Delta $ ,}\n\\DisplayProof \\qquad\n\\AXC{ $\\Gamma , \\bot \\Rightarrow \\Delta $ ,}\n\\DisplayProof\\\\\\\\\n\\AXC{$\\Gamma , B \\Rightarrow \\Delta $}\n\\AXC{$\\Gamma \\Rightarrow A,\\Delta $}\n\\LeftLabel{$\\mathsf{\\to_L}$}\n\\BIC{$\\Gamma , A \\to B \\Rightarrow \\Delta$}\n\\DisplayProof\\;,\\qquad\n\\AXC{$\\Gamma , A \\Rightarrow B, \\Delta $}\n\\LeftLabel{$\\mathsf{\\to_R}$}\n\\UIC{$\\Gamma \\Rightarrow A \\to B, \\Delta$}\n\\DisplayProof\\;,\\\\\\\\\n\\AXC{$\\Gamma, B, \\Box B \\Rightarrow \\Delta $}\n\\LeftLabel{$\\mathsf{refl}$}\n\\UIC{$\\Gamma , \\Box B \\Rightarrow \\Delta$}\n\\DisplayProof\\;,\\qquad\n\\AXC{$ \\Box\\Pi, \\Box(A\\to\\Box A) \\Rightarrow A$}\n\\LeftLabel{$\\mathsf{\\Box_{Grz}}$}\n\\UIC{$\\Gamma, \\Box \\Pi \\Rightarrow \\Box A ,\\Delta $}\n\\DisplayProof \\;.\n\\end{gather*}\n\\begin{center}\n\\textbf{Fig. 1.} The system $\\mathsf{Grz_{Seq}}$\n\\end{center}\n\nThe cut rule has the form\n\\begin{gather*}\n\\AXC{$\\Gamma\\Rightarrow A,\\Delta$}\n\\AXC{$\\Gamma,A\\Rightarrow\\Delta$}\n\\LeftLabel{$\\mathsf{cut}$}\n\\RightLabel{ ,}\n\\BIC{$\\Gamma\\Rightarrow\\Delta$}\n\\DisplayProof\n\\end{gather*}\n\nwhere $A$ is called the \\emph{cut formula} of the given inference. \n\n\\begin{lem} \\label{prop}\n$\\mathsf{Grz_{Seq}} + \\mathsf{cut}\\vdash \\Gamma\\Rightarrow\\Delta$ if and only if $\\mathsf{Grz} \\vdash \\bigwedge\\Gamma\\to\\bigvee\\Delta $. \n\\end{lem}\nThis lemma is completely standard, so we omit the proof.\n\n\n\\begin{thm}\\label{cutelimgrz}\nIf $\\mathsf{Grz_{Seq}} + \\mathsf{cut}\\vdash \\Gamma\\Rightarrow\\Delta$, then $\\mathsf{Grz_{Seq}} \\vdash \\Gamma\\Rightarrow\\Delta$. 
\n\n\\end{thm}\n\n\nA syntactic cut-elimination for $\\mathsf{Grz}$ was obtained by M.~Borga and P.~Gentilini in \\cite{BorGen}. In this paper, we will give another proof of this cut-elimination theorem in the next sections. \n\n \n\n\n\n\n\n\n\\section{Non-well-founded proofs}\n\\label{SecNWF}\nIn this section we introduce a sequent calculus for $\\mathsf{Grz}$ allowing non-well-founded proofs and define two translations that connect ordinary and non-well-founded sequent systems.\n\nInference rules and initial sequents of the sequent calculus $\\mathsf{Grz_\\infty}$ have the following form:\n\\begin{gather*}\n\\AXC{ $\\Gamma, p \\Rightarrow p, \\Delta $ ,}\n\\DisplayProof\\qquad\n\\AXC{ $\\Gamma , \\bot \\Rightarrow \\Delta$ ,}\n\\DisplayProof \\\\\\\\\n\\AXC{$\\Gamma , B \\Rightarrow \\Delta $}\n\\AXC{$\\Gamma \\Rightarrow A,\\Delta $}\n\\LeftLabel{$\\mathsf{\\to_L}$}\n\\BIC{$\\Gamma , A \\to B \\Rightarrow \\Delta$}\n\\DisplayProof\\;,\\qquad\n\\AXC{$\\Gamma , A \\Rightarrow B, \\Delta $}\n\\LeftLabel{$\\mathsf{\\to_R}$}\n\\UIC{$\\Gamma \\Rightarrow A \\to B, \\Delta$}\n\\DisplayProof\\;,\\\\\\\\\n\\AXC{$\\Gamma, B, \\Box B \\Rightarrow \\Delta $}\n\\LeftLabel{$\\mathsf{refl}$}\n\\UIC{$\\Gamma , \\Box B \\Rightarrow \\Delta$}\n\\DisplayProof\\;,\\qquad\n\\AXC{$\\Gamma, \\Box \\Pi \\Rightarrow A, \\Delta$}\n\\AXC{$\\Box \\Pi \\Rightarrow A$}\n\\LeftLabel{$\\mathsf{\\Box}$}\n\\BIC{$\\Gamma, \\Box \\Pi \\Rightarrow \\Box A, \\Delta$}\n\\DisplayProof \\;.\n\\end{gather*}\n\\begin{center}\n\\textbf{Fig. 
2.} The system $\\mathsf{Grz}_\\infty$\n\\end{center}\n\nThe system $\\mathsf{Grz}_{\\infty}+\\mathsf{cut}$ is defined by adding the rule ($\\mathsf{cut}$) to the system $\\mathsf{Grz_\\infty}$.\nAn \\emph{$\\infty$--proof} in $\\mathsf{Grz}_\\infty$ ($\\mathsf{Grz}_{\\infty}+\\mathsf{cut}$) is a (possibly infinite) tree whose nodes are marked by\nsequents and whose leaves are marked by initial sequents and that is constructed according to the rules of the sequent calculus. In addition, every infinite branch in an $\\infty$--proof must pass through a right premise of the rule ($\\Box$) infinitely many times. A sequent $\\Gamma \\Rightarrow \\Delta$ is \\emph{provable} in $\\mathsf{Grz}_\\infty$ ($\\mathsf{Grz}_{\\infty}+\\mathsf{cut}$) if there is an $\\infty$--proof in $\\mathsf{Grz}_\\infty$ ($\\mathsf{Grz}_{\\infty}+\\mathsf{cut}$) with the root marked by $\\Gamma \\Rightarrow \\Delta$.\n\n\nThe \\emph{$n$-fragment} of an $\\infty$--proof is a finite tree obtained from the $\\infty$--proof by cutting every branch at the $n$th from the root right premise of the rule ($\\Box$). The $1$-fragment of an $\\infty$--proof is also called its \\emph{main fragment}. We define the \\emph{local height $\\lvert \\pi \\rvert$ of an $\\infty$--proof $\\pi$} as the length of the longest branch in its main fragment. 
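These definitions can be mirrored in a small toy encoding (ours, not part of the paper's formalism): a node is a pair of a rule name and a list of premises, the second premise of a ($\Box$) node is its right premise, and the local height is the length of the longest branch in the main fragment.

```python
# Toy encoding of the main fragment and the local height (our illustration,
# not the paper's formalism): a node is (rule, premises); for the two-premise
# rule "Box" the second premise is the right premise, where the 1-fragment is cut.
def local_height(node):
    rule, premises = node
    if rule == "Box":
        left, _right = premises      # branch cut at the right premise of (Box)
        return 1 + local_height(left)
    if not premises:
        return 0                     # an initial sequent contributes height 0
    return 1 + max(local_height(p) for p in premises)

ax = ("Ax", [])
# Skeleton of an infinity-proof of Box(Box(p -> Box p) -> p) => p; "Rest" is a
# placeholder for the subtree above the right premise, which the cut discards.
proof = ("refl",
         [("ImpL",
           [ax,
            ("Box", [("ImpR", [ax]), ("Rest", [])])])])
print(local_height(proof))   # 4
```

On this skeleton the longest branch of the main fragment has length 4, matching the worked example in the text.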
An $\\infty$--proof consisting only of an initial sequent has height 0.\n\n\n\nFor instance, consider an $\\infty$--proof of the sequent $\\Box(\\Box(p \\rightarrow \\Box p) \\rightarrow p) \\Rightarrow p$: \n\n\\begin{gather*}\n\\AXC{\\textsf{Ax}}\n\\noLine\n\\UIC{$ F, p\\Rightarrow p$}\n\\AXC{\\textsf{Ax}}\n\\noLine\n\\UIC{$ F,p\\Rightarrow \\Box p,p$}\n\\LeftLabel{$\\mathsf{\\to_R}$}\n\\UIC{$ F \\Rightarrow p\\to\\Box p,p$}\n\\AXC{\\textsf{Ax}}\n\\noLine\n\\UIC{$p, F \\Rightarrow p$}\n\\AXC{$\\vdots$}\n\\noLine\n\\UIC{$ F \\Rightarrow p$} \n\\LeftLabel{$\\mathsf{\\Box}$} \n\\BIC{$p, F \\Rightarrow \\Box p$}\n\\LeftLabel{$\\mathsf{\\to_R}$}\n\\UIC{$ F \\Rightarrow p\\to\\Box p$}\n\\LeftLabel{$\\mathsf{\\Box}$} \n\\BIC{$ F \\Rightarrow \\Box(p\\to \\Box p),p$} \n\\LeftLabel{$\\mathsf{\\to_L}$}\n\\BIC{$\\Box(p \\rightarrow \\Box p) \\rightarrow p, F \\Rightarrow p$}\n\\LeftLabel{$\\mathsf{refl}$}\n\\RightLabel{ ,} \n\\UIC{$F \\Rightarrow p $}\n\\DisplayProof\n\\end{gather*}\nwhere $F=\\Box(\\Box(p \\rightarrow \\Box p) \\rightarrow p) $. \nThe local height of this $\\infty$--proof equals 4 and its main fragment has the form\n\\begin{gather*}\n\\AXC{\\textsf{Ax}}\n\\noLine\n\\UIC{$ F, p\\Rightarrow p$}\n\\AXC{\\textsf{Ax}}\n\\noLine\n\\UIC{$ F,p\\Rightarrow \\Box p,p$}\n\\LeftLabel{$\\mathsf{\\to_R}$}\n\\UIC{$ F \\Rightarrow p\\to\\Box p,p$}\n\\AXC{\\qquad \\qquad \\qquad \\qquad}\n\\LeftLabel{$\\mathsf{\\Box}$} \n\\BIC{$ F \\Rightarrow \\Box(p\\to \\Box p),p$} \n\\LeftLabel{$\\mathsf{\\to_L}$}\n\\BIC{$\\Box(p \\rightarrow \\Box p) \\rightarrow p, F \\Rightarrow p$}\n\\LeftLabel{$\\mathsf{refl}$}\n\\RightLabel{ .} \n\\UIC{$F \\Rightarrow p $}\n\\DisplayProof\n\\end{gather*}\nWe denote the set of all $\\infty$-proofs in the system $\\mathsf{Grz}_{\\infty} +\\mathsf{cut} $ by $\\mathcal P$. \nFor $\\pi, \\tau\\in\\mathcal P$, we write $\\pi \\sim_n \\tau$ if the $n$-fragments of these $\\infty$-proofs coincide. 
For any $\\pi, \\tau\\in\\mathcal P$, we also set $\\pi \\sim_0 \\tau$.\n\nNow we define two translations that connect ordinary and non-well-founded sequent calculi for $\\mathsf{Grz}$. \n\n\\begin{lem}\\label{AtoA}\nWe have $\\mathsf{Grz}_{\\infty} \\vdash \\Gamma,A\\Rightarrow A,\\Delta$ for any sequent $\\Gamma \\Rightarrow \\Delta$ and any formula $A$.\n\\end{lem}\n\\begin{proof}\nStandard induction on the structure of $A$.\n\\end{proof}\n\n\\begin{lem}\\label{Grz-schema}\nWe have $\\mathsf{Grz}_{\\infty}\\vdash\\Box(\\Box(A \\rightarrow \\Box A) \\rightarrow A) \\Rightarrow A$ for any formula $A$.\n\\end{lem}\n\\begin{proof}\nConsider an example of $\\infty$--proof for the sequent $\\Box(\\Box(p \\rightarrow \\Box p) \\rightarrow p) \\Rightarrow p$ given above. We transform this example into an $\\infty$--proof for $\\Box(\\Box(A \\rightarrow \\Box A) \\rightarrow A) \\Rightarrow A$ by replacing $p$ with $A$ and adding required $\\infty$--proofs instead of initial sequents using Lemma \\ref{AtoA}. \n\\end{proof}\n\nRecall that an inference rule is called admissible (in a given proof system) if, for any instance of the rule, the conclusion is provable whenever all premises are provable. \n\\begin{lem}\\label{weakening}\nThe rule\n\\begin{gather*}\n\\AXC{$\\Gamma\\Rightarrow\\Delta$}\n\\LeftLabel{$\\mathsf{weak}$}\n\\UIC{$\\Pi,\\Gamma\\Rightarrow\\Delta,\\Sigma$}\n\\DisplayProof\n\\end{gather*}\nis admissible in the systems $\\mathsf{Grz_{Seq}} $ and $\\mathsf{Grz}_{\\infty} +\\mathsf{cut}$.\n\\end{lem}\n\\begin{proof}\nStandard induction on the structure (local height) of a proof of $\\Gamma\\Rightarrow\\Delta$.\n\\end{proof}\n\n\\begin{thm}\\label{seqtoinfcut}\nIf $\\mathsf{Grz_{Seq}}+\\mathsf{cut}\\vdash\\Gamma\\Rightarrow\\Delta$, then $\\mathsf{Grz}_{\\infty}+\\mathsf{cut}\\vdash\\Gamma\\Rightarrow\\Delta$.\n\\end{thm}\n\\begin{proof}\nAssume $\\pi$ is a proof of $\\Gamma\\Rightarrow\\Delta$ in $\\mathsf{Grz_{Seq}}+\\mathsf{cut}$. 
By induction on the size of $\\pi$ we prove $\\mathsf{Grz}_{\\infty}+\\mathsf{cut}\\vdash\\Gamma\\Rightarrow\\Delta$. \n\nIf $\\Gamma \\Rightarrow \\Delta $ is an initial sequent of $\\mathsf{Grz_{Seq}}+\\mathsf{cut}$, then it is provable in $\\mathsf{Grz}_{\\infty}+\\mathsf{cut}$ by Lemma \\ref{AtoA}.\nOtherwise, consider the last application of an inference rule in $\\pi$. \n\nThe only non-trivial case is when the proof $\\pi$ has the form \n\\begin{gather*}\n\\AXC{$\\pi^\\prime$}\n\\noLine\n\\UIC{$\\Box \\Pi,\\Box(A\\to\\Box A)\\Rightarrow A$}\n\\LeftLabel{$\\mathsf{\\Box_{Grz}}$}\n\\RightLabel{ ,}\n\\UIC{$\\Sigma,\\Box\\Pi\\Rightarrow \\Box A, \\Lambda$}\n\\DisplayProof\n\\end{gather*} \nwhere $\\Sigma,\\Box\\Pi = \\Gamma$ and $\\Box A, \\Lambda = \\Delta$. By the induction hypothesis there is an $\\infty$--proof $\\xi$ of $\\Box \\Pi,\\Box(A\\to\\Box A)\\Rightarrow A$ in $\\mathsf{Grz}_{\\infty}+\\mathsf{cut}$.\n\nWe have the following $\\infty$--proof $\\lambda$ of $\\Box \\Pi\\Rightarrow A$ in $\\mathsf{Grz}_{\\infty}+\\mathsf{cut} $:\n\\begin{gather*}\n\\AXC{$\\xi^{\\prime}$}\n\\noLine\n\\UIC{$\\Box \\Pi,\\Box(A\\to\\Box A)\\Rightarrow A,A$}\n\\LeftLabel{$\\mathsf{\\to_R}$}\n\\UIC{$\\Box \\Pi\\Rightarrow G,A$}\n\\AXC{$\\xi$}\n\\noLine\n\\UIC{$\\Box \\Pi,\\Box(A\\to\\Box A)\\Rightarrow A$}\n\\LeftLabel{$\\mathsf{\\to_R}$}\n\\UIC{$\\Box \\Pi\\Rightarrow G$}\n\\LeftLabel{$\\Box$}\n\\BIC{$\\Box \\Pi\\Rightarrow\\Box G,A$}\n\\AXC{$\\theta$}\n\\noLine\n\\UIC{$\\Box\\Pi,\\Box G \\Rightarrow A$}\n\\LeftLabel{$\\mathsf{cut}$}\n\\RightLabel{ ,}\n\\BIC{$\\Box\\Pi\\Rightarrow A$}\n\\DisplayProof\n\\end{gather*}\nwhere $G= \\Box(A \\rightarrow \\Box A) \\rightarrow A$, $\\xi^{\\prime}$ is an $\\infty$--proof of $\\Box \\Pi,\\Box(A\\to\\Box A)\\Rightarrow A,A$ obtained from $\\xi$ by Lemma \\ref{weakening} and $\\theta$ is an $\\infty$--proof of $\\Box\\Pi,\\Box G \\Rightarrow A$, which exists by Lemma \\ref{Grz-schema} and Lemma \\ref{weakening}.\n\nThe required 
$\\infty$--proof for $\\Sigma,\\Box\\Pi\\Rightarrow \\Box A, \\Lambda$ has the form\n\\begin{gather*}\n\\AXC{$\\lambda'$}\n\\noLine\n\\UIC{$\\Sigma,\\Box\\Pi\\Rightarrow A,\\Lambda$}\n\\AXC{$\\lambda$}\n\\noLine\n\\UIC{$\\Box\\Pi\\Rightarrow A$}\n\\LeftLabel{$\\Box$}\n\\RightLabel{ ,}\n\\BIC{$\\Sigma,\\Box\\Pi\\Rightarrow \\Box A, \\Lambda$}\n\\DisplayProof\n\\end{gather*}\nwhere $\\lambda'$ is an $\\infty$-proof of the sequent $\\Sigma,\\Box\\Pi\\Rightarrow A,\\Lambda$ obtained from the $\\infty$-proof $\\lambda$ by Lemma \\ref{weakening}.\n\nThe cases of other inference rules being last in $\\pi$ are straightforward, so we omit them.\n\n\\end{proof}\n\n\n\nFor a sequent $\\Gamma\\Rightarrow\\Delta$, let $Sub(\\Gamma\\Rightarrow\\Delta)$ be the set of all subformulas of the formulas from $\\Gamma \\cup\\Delta$.\nFor a finite set of formulas $\\Lambda$, let $\\Lambda^\\ast$ be the set $\\{\\Box(A\\to\\Box A)\\mid A\\in\\Lambda\\}$.\n\n\\begin{lem} \\label{translation}\nIf $\\mathsf{Grz_\\infty}\\vdash \\Gamma\\Rightarrow\\Delta$, then $\\mathsf{Grz_{Seq}} \\vdash \\Lambda^\\ast,\\Gamma\\Rightarrow\\Delta$ for any finite set of formulas $\\Lambda$.\n\\end{lem}\n\\begin{proof}\nAssume $\\pi$ is an $\\infty$--proof of the sequent $\\Gamma\\Rightarrow\\Delta$ in $\\mathsf{Grz}_\\infty$ and $\\Lambda$ is a finite set of formulas.\nBy induction on the number of elements in the finite set $Sub(\\Gamma\\Rightarrow\\Delta)\\backslash \\Lambda$ with a subinduction on $\\lvert \\pi \\rvert$, we prove $\\mathsf{Grz_{Seq}} \\vdash \\Lambda^\\ast,\\Gamma\\Rightarrow\\Delta$. \n\n\nIf $\\lvert \\pi \\rvert=0$, then $\\Gamma\\Rightarrow\\Delta$ is an initial sequent. Then the sequent $\\Lambda^\\ast,\\Gamma\\Rightarrow\\Delta$ is also an initial sequent, so it is provable in $\\mathsf{Grz_{Seq}}$.\nOtherwise, consider the last application of an inference rule in $\\pi$. \n\nCase 1. 
Suppose that $\\pi$ has the form\n\\begin{gather*}\n\\AXC{$\\pi^\\prime$}\n\\noLine\n\\UIC{$\\Gamma,A\\Rightarrow B,\\Sigma$}\n\\LeftLabel{$\\mathsf{\\to_R}$}\n\\RightLabel{ ,}\n\\UIC{$\\Gamma\\Rightarrow A\\to B,\\Sigma$}\n\\DisplayProof\n\\end{gather*}\nwhere $A\\to B,\\Sigma = \\Delta$.\nNotice that $\\lvert \\pi^\\prime \\rvert < \\lvert \\pi \\rvert $. By the induction hypothesis for $\\pi^\\prime$ and $\\Lambda$, the sequent $\\Lambda^\\ast,\\Gamma,A\\Rightarrow B,\\Sigma$ is provable in $\\mathsf{Grz_{Seq}}$. \nApplying the rule ($\\mathsf{\\to_R}$) to it, we obtain that the sequent $\\Lambda^\\ast,\\Gamma\\Rightarrow\\Delta$ is provable in $\\mathsf{Grz_{Seq}}$.\n\nCase 2. Suppose that $\\pi$ has the form\n\\begin{gather*}\n\\AXC{$\\pi^\\prime$}\n\\noLine\n\\UIC{$\\Sigma, B\\Rightarrow \\Delta$}\n\\AXC{$\\pi^{\\prime\\prime}$}\n\\noLine\n\\UIC{$\\Sigma \\Rightarrow A,\\Delta$}\n\\LeftLabel{$\\mathsf{\\to_L}$}\n\\RightLabel{ ,}\n\\BIC{$\\Sigma, A\\to B\\Rightarrow \\Delta$}\n\\DisplayProof\n\\end{gather*}\nwhere $\\Sigma, A\\to B = \\Gamma$. We see that $\\lvert \\pi^\\prime \\rvert < \\lvert \\pi \\rvert $. By the induction hypothesis for $\\pi^\\prime$ and $\\Lambda$, the sequent $\\Lambda^\\ast,\\Sigma, B\\Rightarrow \\Delta$ is provable in $\\mathsf{Grz_{Seq}}$. Analogously, we have $\\mathsf{Grz_{Seq}} \\vdash \\Lambda^\\ast,\\Sigma \\Rightarrow A,\\Delta$. Applying the rule ($\\mathsf{\\to_L}$), we obtain that the sequent $\\Lambda^\\ast,\\Sigma, A\\to B \\Rightarrow\\Delta$ is provable in $\\mathsf{Grz_{Seq}}$.\n\nCase 3. Suppose that $\\pi$ has the form\n\\begin{gather*}\n\\AXC{$\\pi^\\prime$}\n\\noLine\n\\UIC{$\\Sigma,A,\\Box A\\Rightarrow \\Delta$}\n\\LeftLabel{$\\mathsf{refl}$}\n\\RightLabel{ ,}\n\\UIC{$\\Sigma,\\Box A\\Rightarrow \\Delta$}\n\\DisplayProof\n\\end{gather*}\nwhere $\\Sigma, \\Box A = \\Gamma$. We see that $\\lvert \\pi^\\prime \\rvert < \\lvert \\pi \\rvert $. 
By the induction hypothesis for $\\pi^\\prime$ and $\\Lambda$, the sequent $\\Lambda^\\ast,\\Sigma, A, \\Box A\\Rightarrow \\Delta$ is provable in $\\mathsf{Grz_{Seq}}$. Applying the rule ($\\mathsf{refl}$), we obtain $\\mathsf{Grz_{Seq}} \\vdash \\Lambda^\\ast,\\Sigma, \\Box A\\Rightarrow\\Delta$. \n\nCase 4. Suppose that $\\pi$ has the form\n\\begin{gather*}\n\\AXC{$\\pi^\\prime$}\n\\noLine\n\\UIC{$\\Phi, \\Box \\Pi \\Rightarrow A, \\Sigma$}\n\\AXC{$\\pi^{\\prime\\prime}$}\n\\noLine\n\\UIC{$\\Box \\Pi \\Rightarrow A$}\n\\LeftLabel{$\\mathsf{\\Box}$}\n\\RightLabel{ ,}\n\\BIC{$\\Phi, \\Box \\Pi \\Rightarrow \\Box A, \\Sigma$}\n\\DisplayProof\n\\end{gather*}\nwhere $\\Phi, \\Box \\Pi = \\Gamma$ and $\\Box A, \\Sigma =\\Delta$.\n\nSubcase 4.1: the formula $A$ belongs to $\\Lambda$. We see that $\\lvert \\pi^\\prime \\rvert < \\lvert \\pi \\rvert $. By the induction hypothesis for $\\pi^\\prime$ and $\\Lambda$, the sequent $\\Lambda^\\ast,\\Phi, \\Box \\Pi \\Rightarrow A, \\Sigma$ is provable in $\\mathsf{Grz_{Seq}}$. \nThen we see\n\\begin{gather*}\n\\AXC{$\\mathsf{Ax}$}\n\\noLine\n\\UIC{$\\Lambda^\\ast,\\Box A,\\Phi, \\Box \\Pi \\Rightarrow \\Box A, \\Sigma$}\n\\AXC{$\\Lambda^\\ast,\\Phi, \\Box \\Pi \\Rightarrow A, \\Sigma$}\n\\LeftLabel{$\\mathsf{weak}$}\n\\UIC{$\\Lambda^\\ast,\\Phi, \\Box \\Pi \\Rightarrow A, \\Box A,\\Sigma$}\n\\LeftLabel{$\\mathsf{\\to_L}$}\n\\BIC{$(\\Lambda\\backslash\\{A\\})^\\ast,A\\to\\Box A,\\Box(A\\to\\Box A),\\Phi, \\Box \\Pi \\Rightarrow \\Box A, \\Sigma$}\n\\LeftLabel{$\\mathsf{refl}$}\n\\RightLabel{ ,}\n\\UIC{$(\\Lambda\\backslash\\{A\\})^\\ast,\\Box(A\\to\\Box A),\\Phi, \\Box \\Pi \\Rightarrow \\Box A, \\Sigma$}\n\\DisplayProof\n\\end{gather*}\nwhere the rule ($\\mathsf{weak}$) is admissible by Lemma \\ref{weakening}.\n\nSubcase 4.2: the formula $A$ doesn't belong to $\\Lambda$. 
We have that the number of elements in $Sub(\\Box\\Pi\\Rightarrow A)\\backslash(\\Lambda\\cup \\{A\\})$ is strictly less than the number of elements in $Sub(\\Phi, \\Box \\Pi \\Rightarrow \\Box A, \\Sigma)\\backslash\\Lambda$. Therefore, by the induction hypothesis for $\\pi^{\\prime\\prime}$ and $\\Lambda\\cup \\{A\\}$, the sequent $\\Lambda^\\ast,\\Box(A\\to\\Box A),\\Box \\Pi \\Rightarrow A$ is provable in $\\mathsf{Grz_{Seq}}$. Then we have\n\\begin{gather*}\n\\AXC{$\\Lambda^\\ast,\\Box(A\\to\\Box A),\\Box \\Pi \\Rightarrow A$}\n\\LeftLabel{$\\mathsf{\\Box_{Grz}}$}\n\\RightLabel{ .}\n\\UIC{$\\Lambda^\\ast,\\Phi, \\Box \\Pi \\Rightarrow \\Box A, \\Sigma$}\n\\DisplayProof\n\\end{gather*}\n\\end{proof}\nFrom Lemma \\ref{translation} we immediately obtain the following theorem.\n\\begin{thm}\\label{inftoseq}\nIf $\\mathsf{Grz_\\infty}\\vdash \\Gamma\\Rightarrow\\Delta$, then $\\mathsf{Grz_{Seq}} \\vdash \\Gamma\\Rightarrow\\Delta$.\n\\end{thm}\n\n\n\n\\section{Ultrametric spaces}\n\\label{SecUlt}\nIn this section we recall basic notions of the theory of ultrametric spaces (cf. \\cite{Shor}) and consider several examples concerning $\\infty$-proofs.\n\nAn \\emph{ultrametric space} $(M,d)$ is a metric space that satisfies a stronger version of the triangle inequality: for any $x,y,z\\in M$\n$$d(x,z) \\leqslant \\max \\{d(x,y), d(y,z)\\}.$$ \n\nFor $x\\in M$ and $r\\in [0, + \\infty)$, the set $B_r(x)=\\{ y\\in M \\mid d(x,y) \\leqslant r\\}$ is called the \\emph{closed ball} with \\emph{center $x$} and \\emph{radius $r$}. Recall that a metric space $(M,d)$ is \\emph{complete} if any descending sequence of closed balls, with radii tending to $0$, has a common point. An ultrametric space $(M,d)$ is called \\emph{spherically complete} if an arbitrary descending sequence of closed balls has a common point.\n\nFor example, consider the set $\\mathcal P$ of all $\\infty$-proofs of the system $\\mathsf{Grz}_{\\infty} +\\mathsf{cut} $. 
We can define an ultrametric $d_{\\mathcal P} \\colon {\\mathcal P} \\times {\\mathcal P} \\to [0,1]$ on $\\mathcal P$ by putting\n\\[d_{\\mathcal P}(\\pi,\\tau) = \n\\inf\\{\\frac{1}{2^n} \\mid \\pi \\sim_n \\tau\\}.\n\\]\nWe see that $d_{\\mathcal P}(\\pi , \\tau) \\leqslant 2^{-n}$ if and only if $\\pi \\sim_n \\tau$. Thus, the ultrametric $d_{\\mathcal P}$ can be considered as a measure of similarity between $\\infty$-proofs.\n\n\\begin{prop}\\label{ComplP}\n$(\\mathcal{P},d_{\\mathcal P})$ is a (spherically) complete ultrametric space.\n\\end{prop} \n\n \nConsider the following characterization of spherically complete ultrametric spaces. Let us write $x \\equiv_r y$ if $d(x,y)\\leqslant r$.\nTrivially, the relation $\\equiv_r$ is an equivalence relation for any ultrametric space and any number $r\\geqslant 0$.\n\n\n\n\\begin{prop}\\label{sphcomp}\nAn ultrametric space $(M,d)$ is spherically complete if and only if for any sequence $(x_i)_{i\\in \\mathbb N}$ of elements of $M$, where $x_i\\equiv_{r_i}x_{i+1}$ and \n$r_i \\geqslant r_{i+1}$ for all $i\\in \\mathbb N$, there is a point $x$ of $M$ such that $x \\equiv_{r_i} x_i$ for any $i\\in\\mathbb N$.\n\\end{prop}\n\\begin{proof}\n($\\Rightarrow$) Assume $(M,d)$ is a spherically complete ultrametric space. Consider a sequence $(x_i)_{i\\in \\mathbb N}$ of elements of $M$ such that $x_i\\equiv_{r_i}x_{i+1}$ and $r_i \\geqslant r_{i+1}$ for all $i\\in \\mathbb N$. Then the sequence $(B_{r_i}(x_i))$ is a descending sequence of closed balls, and therefore by spherical completeness has a common point $x$. Trivially, the point $x$ satisfies the desired conditions.\n\n($\\Leftarrow$) Assume there is a descending sequence of closed balls $(B_{r_i}(x_i))$. We have that $x_0 \\equiv_{r_0} x_1 \\equiv_{r_1} \\dots$ and $r_i \\geqslant r_{i+1}$ for all $i\\in \\mathbb N$. 
So there is an element $x\\in M$ such that $x \\equiv_{r_i} x_i$ for all $i\\in \\mathbb N$, which means that $x$ lies in all the balls.\n\\end{proof}\n\nIn an ultrametric space $(M,d)$, a function $f \\colon M\\to M$ is called \\emph{non-expansive} if $d(f(x),f(y)) \\leqslant d(x,y)$ for all $x,y\\in M$.\nFor ultrametric spaces $(M, d_M)$ and $(N, d_N)$, the Cartesian product $M \\times N$ can also be considered as an ultrametric space with the metric $$d_{M \\times N} ((x_1,y_1),(x_2,y_2)) = \\max \\{d_M(x_1,x_2), d_N(y_1,y_2) \\}.$$ \n\n\nLet us consider another example. For $m\\in \\mathbb N$, let $\\mathcal F_m$ denote the set of all non-expansive functions from $\\mathcal P^m$ to $\\mathcal P$. Note that a function $\\mathsf u\\colon \\mathcal P^m\\to\\mathcal P$ is non-expansive if and only if for any tuples $\\vec\\pi$ and $\\vec\\pi'$, and any $n\\in\\mathbb N$ we have \n$$\\pi_1\\sim_n\\pi'_1,\\dotsc,\\pi_m\\sim_n\\pi'_m\\Rightarrow \\mathsf u(\\vec\\pi)\\sim_n\\mathsf u(\\vec\\pi').$$\nNow we introduce an ultrametric for $\\mathcal F_m$. 
For $\\mathsf a,\\mathsf b\\in \\mathcal F_m$, we write \n$\\mathsf a\\sim_{n,k}\\mathsf b$ if $\\mathsf a(\\vec{\\pi})\\sim_n\\mathsf b(\\vec{\\pi})$ for any $\\vec{\\pi}\\in\\mathcal P^m$ and, in addition, $\\mathsf a(\\vec{\\pi}) \\sim_{n+1}\\mathsf b(\\vec{\\pi})$ whenever $\\sum_{i=1}^m\\lvert\\pi_i\\rvert< k$.\\footnote{This definition is inspired by \\cite[Subsection 2.1]{GiMi}.}\nAn ultrametric $l_m$ on $\\mathcal F_m$ is defined by \n\\[l_m(\\mathsf a,\\mathsf b)=\\frac{1}{2}\\inf\\{\\frac{1}{2^n}+\\frac{1}{2^{n+k}}\\mid \\mathsf a\\sim_{n,k}\\mathsf b\\}.\\]\nWe see that $l_m(\\mathsf a,\\mathsf b) \\leqslant 2^{-n-1}+2^{-n-k-1}$ if and only if $\\mathsf a\\sim_{n,k}\\mathsf b$.\n\n\\begin{prop}\\label{SphCom}\n$(\\mathcal F_m, l_m)$ is a spherically complete ultrametric space.\n\\end{prop}\n\\begin{proof}\nAssume we have a sequence\n$\\mathsf a_0\\sim_{n_0,k_0}\\mathsf a_1\\sim_{n_1,k_1}\\dotso $,\nwhere the sequence $r_i=2^{-n_i-1}+2^{-n_i-k_i-1}$ is non-increasing. By Proposition \\ref{sphcomp}, it is sufficient to find a function $\\mathsf a\\in\\mathcal F_m$ such that $\\mathsf a\\sim_{n_i,k_i}\\mathsf a_i$ for all $i\\in \\mathbb N$.\n\n\nSuppose $\\lim_{i\\to\\infty}r_i=0$. Consider a tuple $\\vec{\\pi}\\in\\mathcal P^m $. We have that $\\lim_{i\\to\\infty}n_i=+ \\infty$ and $\\mathsf a_0 (\\vec{\\pi})\\sim_{n_0}\\mathsf a_1 (\\vec{\\pi})\\sim_{n_1}\\dotso$ .\nBy Proposition \\ref{ComplP}, there is an $\\infty$-proof $\\tau$ such that $\\tau \\sim_{n_i} \\mathsf a_i (\\vec{\\pi})$ for all $i\\in \\mathbb N$. We define $\\mathsf a (\\vec{\\pi}) = \\tau$. We need to check that the mapping $\\mathsf a$ is non-expansive. If for tuples $\\vec\\pi$ and $\\vec\\pi'$ we have $\\pi_1\\sim_n\\pi'_1,\\ldots,\\pi_m\\sim_n\\pi'_m$ for some $n\\in\\mathbb N$, then we can choose $i$ such that $n_i>n$. We have $\\mathsf a(\\vec\\pi)\\sim_n \\mathsf a_i(\\vec\\pi)\\sim_n\\mathsf a_i(\\vec\\pi')\\sim_n\\mathsf a(\\vec\\pi')$. 
Therefore $\\mathsf a(\\vec\\pi)\\sim_n \\mathsf a(\\vec\\pi')$ and the mapping $\\mathsf a$ is non-expansive.\n\nIf $\\lim_{i\\to\\infty}r_i>0$, then $\\lim_{i\\to\\infty}n_i= n$ for some $n \\in \\mathbb{N}$. We have two cases: either $\\lim_{i\\to\\infty}k_i= k$ for a number $k \\in \\mathbb{N}$, or $\\lim_{i\\to\\infty}k_i= +\\infty$. In the first case, there is $j\\in\\mathbb N$ such that $(n_i,k_i)=(n_j,k_j)$ for all $i>j$, and we can take $\\mathsf a_j$ as $\\mathsf a$. Here the mapping $\\mathsf a$ is obviously non-expansive.\nIn the second case, for a tuple $\\vec{\\pi}$ we define $\\mathsf a (\\vec{\\pi})$ to be $\\mathsf{a}_j (\\vec{\\pi})$, where $j= \\min \\{ i \\in \\mathbb{N} \\mid n_i= n \\text{ and } \\sum_{s=1}^m\\lvert\\pi_s\\rvert < k_i \\}$. For all tuples $\\vec\\pi$ and $\\vec\\pi'$ we have $\\mathsf a(\\vec\\pi)\\sim_0 \\mathsf a(\\vec\\pi')$. If for tuples $\\vec\\pi$ and $\\vec\\pi'$ and some $n\\geqslant 1$ we have $\\pi_1\\sim_n\\pi'_1,\\ldots,\\pi_m\\sim_n\\pi'_m$, then $\\sum_{s=1}^m\\lvert\\pi_s\\rvert=\\sum_{s=1}^m\\lvert\\pi_s'\\rvert=t$ and $\\mathsf a(\\vec\\pi)=\\mathsf a_j(\\vec\\pi)\\sim_n \\mathsf a_j(\\vec\\pi')=\\mathsf a(\\vec\\pi')$, where $j=\\min \\{ i \\in \\mathbb{N} \\mid n_i= n \\text{ and } t < k_i \\}$. Therefore $\\mathsf a(\\vec\\pi)\\sim_n \\mathsf a(\\vec\\pi')$ and the mapping $\\mathsf a$ is non-expansive.\n\n\n\n\n\n\n\n\\end{proof}\n\n\nIn an ultrametric space $(M,d)$, a function $f\\colon M \\to M$ is called \\emph{(strictly) contractive} if $d(f(x),f(y)) < d(x,y)$ when $x \\neq y$. 
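These metric notions can be prototyped concretely. The following sketch is an informal aside, not part of the formal development: infinite bit-streams stand in for $\infty$-proofs, prefixes of length $n$ for $n$-fragments, and the prefix metric for $d_{\mathcal P}$; the names `dist`, `cons`, and `DEPTH` are our own.

```python
# Toy model (illustration only): points are infinite bit-streams,
# represented as functions from index to bit.  The prefix metric
# d(x, y) = 2^{-n}, where n is the length of the longest common
# prefix, is an ultrametric, in analogy with d_P on infty-proofs.

DEPTH = 32  # streams can only be inspected up to a finite depth

def dist(x, y, depth=DEPTH):
    """Prefix ultrametric, computed up to the cutoff `depth`."""
    for n in range(depth):
        if x(n) != y(n):
            return 2.0 ** (-n)
    return 2.0 ** (-depth)  # indistinguishable at this depth

def cons(bit, s):
    """Prepend a bit to a stream.  This map is strictly contractive,
    since d(cons(b, x), cons(b, y)) = d(x, y) / 2."""
    return lambda i: bit if i == 0 else s(i - 1)

zeros = lambda i: 0
ones = lambda i: 1

# Strong triangle inequality d(x, z) <= max(d(x, y), d(y, z)):
x, y, z = zeros, cons(0, cons(1, zeros)), ones
assert dist(x, z) <= max(dist(x, y), dist(y, z))

# Iterating the strictly contractive map f(s) = cons(0, s) from an
# arbitrary starting stream approximates its unique fixed point
# (the all-zeros stream) to depth n after n steps.
f = lambda s: cons(0, s)
approx = ones
for _ in range(DEPTH):
    approx = f(approx)
assert dist(approx, zeros) <= 2.0 ** (-DEPTH)
```

The last assertion illustrates how strict contractivity yields a unique fixed point by successive approximation; the formal counterpart for spherically complete ultrametric spaces is the fixed-point theorem invoked in the construction of $\mathsf{re}_{\Box B}$ below.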
\n\n\nNotice that an operator $\\mathsf U\\colon\\mathcal F_m\\to \\mathcal F_m$ is strictly contractive if and only if for any $\\mathsf a,\\mathsf b\\in\\mathcal F_m$, and any $n,k\\in\\mathbb N$ we have \n$$\\mathsf a\\sim_{n,k}\\mathsf b\\Rightarrow \\mathsf U(\\mathsf a)\\sim_{n,k+1}\\mathsf U(\\mathsf b).$$\n\n\nNow we state a generalization of Banach's fixed-point theorem for ultrametric spaces that will be used in the next sections.\n\n\n\\begin{thm}[Prie\\ss-Crampe \\cite{PrCr}]\\label{fixpoint}\nLet $(M,d)$ be a non-empty spherically complete ultrametric space. Then every strictly contractive mapping $f\\colon M\\to M$ has a unique fixed point. \n\\end{thm}\n\n\n\n\\section{Admissible rules}\\label{SecAdm}\n\nIn this section, for the system $\\mathsf{Grz}_{\\infty} +\\mathsf{cut}$, we establish the admissibility of auxiliary inference rules, which will be used in the proof of the cut-elimination theorem. \n\nRecall that the set $\\mathcal P$ of all $\\infty$-proofs of the system $\\mathsf{Grz}_{\\infty} +\\mathsf{cut} $ can be considered as an ultrametric space with the metric $d_{\\mathcal P}$.\n\nBy $\\mathcal{P}_n$ we denote the set of all $\\infty$-proofs that do not contain applications of the cut rule in their $n$-fragments. We also set $\\mathcal{P}_0= \\mathcal{P}$.\n\nA mapping $\\mathsf u\\colon\\mathcal P^m\\to \\mathcal P$ is called \\emph{adequate} if for any $n\\in\\mathbb N$ we have $\\mathsf u(\\pi_1,\\ldots,\\pi_m)\\in\\mathcal P_n$ whenever $\\pi_i\\in\\mathcal P_n$ for all $i\\leqslant m$. \n\n\nIn $\\mathsf{Grz}_{\\infty} +\\mathsf{cut}$, we call a single-premise inference rule \\emph{strongly admissible} if there is a non-expansive adequate mapping $\\mathsf{u}\\colon\\mathcal{P} \\to \\mathcal{P}$ that maps any $\\infty$-proof of the premise of the rule to an $\\infty$-proof of the conclusion. 
The mapping $\\mathsf{u}$ must also satisfy one additional condition: $\\lvert \\mathsf{u}(\\pi)\\rvert \\leqslant \\lvert \\pi \\rvert$ for any $\\pi \\in \\mathcal{P}$.\n\nIn the following lemmata, non-expansive mappings are defined in a standard way by induction on the local heights of $\\infty$-proofs for the premises. So we omit further details.\n\n\\begin{lem}\\label{strongweakening}\nFor any finite multisets of formulas $\\Pi$ and $\\Sigma$, the inference rule\n\\begin{gather*}\n\\AXC{$\\Gamma\\Rightarrow\\Delta$}\n\\LeftLabel{$\\mathsf{wk}_{\\Pi, \\Sigma}$}\n\\UIC{$\\Pi,\\Gamma\\Rightarrow\\Delta,\\Sigma$}\n\\DisplayProof\n\\end{gather*}\nis strongly admissible in $\\mathsf{Grz}_{\\infty} +\\mathsf{cut}$.\n\\end{lem}\n\n\n\\begin{lem}\\label{inversion}\nFor any formulas $A$ and $B$, the rules\n\\begin{gather*}\n\\AXC{$\\Gamma , A \\rightarrow B \\Rightarrow \\Delta$}\n\\LeftLabel{$\\mathsf{li}_{A \\to B}$}\n\\UIC{$\\Gamma ,B \\Rightarrow \\Delta$}\n\\DisplayProof\\qquad\n\\AXC{$\\Gamma , A \\rightarrow B \\Rightarrow \\Delta$}\n\\LeftLabel{$\\mathsf{ri}_{A \\to B}$}\n\\UIC{$\\Gamma \\Rightarrow A, \\Delta$}\n\\DisplayProof\\\\\\\\\n\\AXC{$\\Gamma \\Rightarrow A \\rightarrow B, \\Delta$}\n\\LeftLabel{$\\mathsf{i}_{A \\to B}$}\n\\UIC{$\\Gamma ,A \\Rightarrow B, \\Delta$}\n\\DisplayProof\\qquad\n\\AXC{$\\Gamma \\Rightarrow \\bot, \\Delta$}\n\\LeftLabel{$\\mathsf{i}_{\\bot}$}\n\\UIC{$\\Gamma \\Rightarrow \\Delta$}\n\\DisplayProof\\qquad \n\\AXC{$\\Gamma \\Rightarrow \\Box A, \\Delta$}\n\\LeftLabel{$\\mathsf{li}_{\\:\\Box A}$}\n\\UIC{$\\Gamma \\Rightarrow A, \\Delta$}\n\\DisplayProof\n\\end{gather*}\nare strongly admissible in $\\mathsf{Grz}_{\\infty} +\\mathsf{cut}$.\n\\end{lem}\n\n\\begin{lem}\\label{weakcontraction}\nFor any atomic proposition $p$, the rules\n\\begin{gather*}\n\\AXC{$\\Gamma , p,p \\Rightarrow \\Delta$}\n\\LeftLabel{$\\mathsf{acl}_{p}$}\n\\UIC{$\\Gamma ,p \\Rightarrow \\Delta$}\n\\DisplayProof\\qquad\n\\AXC{$\\Gamma \\Rightarrow p,p, 
\\Delta$}\n\\LeftLabel{$\\mathsf{acr}_{p}$}\n\\UIC{$\\Gamma \\Rightarrow p, \\Delta$}\n\\DisplayProof\n\\end{gather*}\nare strongly admissible in $\\mathsf{Grz}_{\\infty} +\\mathsf{cut}$.\n\\end{lem}\n\n\n\n\n\n\\section{Cut elimination}\n\\label{SecCut}\n\n\nIn this section, we construct a continuous function from $\\mathcal{P}$ to $\\mathcal{P}$, which maps any $\\infty$-proof of the system $\\mathsf{Grz}_{\\infty} +\\mathsf{cut}$ to a cut-free $\\infty$-proof of the same sequent. \n\nLet us call a pair of $\\infty$-proofs $(\\pi,\\tau)$ a \\emph{cut pair} if $\\pi$ is an $\\infty$-proof of the sequent $\\Gamma\\Rightarrow\\Delta,A$ and $\\tau$ is an $\\infty$-proof of the sequent $A, \\Gamma\\Rightarrow \\Delta$ for some $\\Gamma, \\Delta$, and $A$. For a cut pair $(\\pi,\\tau)$, we call the sequent $\\Gamma\\Rightarrow \\Delta$ its \\emph{cut result} and the formula $A$ its \\emph{cut formula}.\n\nFor a modal formula $A$, a non-expansive mapping $\\mathsf{u}$ from $\\mathcal P \\times \\mathcal P$ to $\\mathcal P$ is called \\emph{$A$-removing} if it maps every cut pair $(\\pi,\\tau)$ with the cut formula $A$ to an $\\infty$-proof of its cut result.\nBy $\\mathcal R_A$, let us denote the set of all $A$-removing mappings.\n\n\\begin{lem}\\label{Comp R_A}\nFor each formula $A$, the pair $(\\mathcal R_A,l_2)$ is a non-empty spherically complete ultrametric space.\n\n\\end{lem}\n\\begin{proof}\nThe proof of spherical completeness of the space $(\\mathcal R_A,l_2)$ is analogous to the proof of Proposition \\ref{SphCom}.\n\nWe only need to check that the set $\\mathcal R_A$ is non-empty. Consider the mapping $\\mathsf u_{cut}\\colon\\mathcal P^2\\to\\mathcal P$ that is defined as follows. For a cut pair $(\\pi,\\tau)$ with the cut formula $A$, it joins the $\\infty$-proofs $\\pi$ and $\\tau$ with an appropriate instance of the rule $(\\mathsf{cut})$. 
For all other pairs, the mapping $\\mathsf u_{cut}$ returns the first argument.\n\nClearly, $\\mathsf u_{cut}$ is non-expansive and therefore lies in $\\mathcal R_A$.\n\\end{proof}\n\nIn what follows, we use non-expansive adequate mappings $ \\mathsf{wk}_{\\Pi, \\Sigma}$, $\\mathsf{li}_{A\\to B}$, $\\mathsf{ri}_{A\\to B}$, $\\mathsf{i}_{A\\to B}$, $\\mathsf{i}_\\bot$, $\\mathsf{li}_{\\Box A}$, $\\mathsf{acl}_{p}$, $\\mathsf{acr}_{p}$ from Lemma \\ref{strongweakening}, Lemma \\ref{inversion}, and Lemma \\ref{weakcontraction}.\n\n\n\\begin{lem}\\label{repadeq}\nFor any atomic proposition $p$, there exists an adequate $p$-removing mapping $\\mathsf {re}_p$.\n\\end{lem}\n\\begin{proof}\n\nAssume we have two $\\infty$-proofs $\\pi$ and $\\tau$. If the pair $(\\pi,\\tau)$ is not a cut pair or is a cut pair with a cut formula other than $p$, then we put $\\mathsf{re}_{p}(\\pi,\\tau)=\\pi$. \nOtherwise, we define $\\mathsf{re}_{p}(\\pi,\\tau)$ by induction on $\\lvert \\pi\\rvert$. Let the cut result of the pair $(\\pi,\\tau)$ be $\\Gamma\\Rightarrow \\Delta$.\n\nIf $\\lvert \\pi\\rvert=0$, then $\\Gamma\\Rightarrow \\Delta, p$ is an initial sequent. Suppose that $\\Gamma\\Rightarrow \\Delta$ is also an initial sequent. Then $\\mathsf{re}_{p}(\\pi,\\tau)$ is defined as the $\\infty$-proof consisting only of the sequent $\\Gamma\\Rightarrow \\Delta$. If $\\Gamma\\Rightarrow \\Delta$ is not an initial sequent, then $\\Gamma$ has the form $p,\\Phi$, and $\\tau$ is an $\\infty$-proof of the sequent $p,p,\\Phi \\Rightarrow \\Delta$. Applying the non-expansive adequate mapping $\\mathsf{acl}_p$ from Lemma \\ref{weakcontraction}, we put $\\mathsf{re}_{p}(\\pi,\\tau) := \\mathsf{acl}_p (\\tau)$. \n\nNow suppose that $\\lvert \\pi \\rvert >0$. 
We define $\\mathsf{re}_{p}(\\pi,\\tau)$ according to the last application of an inference rule in $\\pi$: \n\\begin{align*}\n\\left(\n\\AXC{$\\pi_0$}\n\\noLine\n\\UIC{$\\Gamma,B\\Rightarrow C,\\Sigma,p$}\n\\LeftLabel{$\\mathsf{\\to_R}$}\n\\UIC{$\\Gamma\\Rightarrow B\\to C,\\Sigma,p$}\n\\DisplayProof\n,\\tau\n\\right)\n&\\longmapsto\n\\AXC{$\\mathsf{re}_p(\\pi_0, \\mathsf{i}_{B \\to C}(\\tau))$}\n\\noLine\n\\UIC{$\\Gamma,B\\Rightarrow C,\\Sigma$}\n\\LeftLabel{$\\mathsf{\\to_R}$}\n\\RightLabel{ ,}\n\\UIC{$\\Gamma\\Rightarrow B\\to C,\\Sigma$}\n\\DisplayProof\n\\\\\\\\\n\\left(\n\\AXC{$\\pi_0$}\n\\noLine\n\\UIC{$\\Sigma, C\\Rightarrow \\Delta, p$}\n\\AXC{$\\pi_1$}\n\\noLine\n\\UIC{$\\Sigma \\Rightarrow B,\\Delta, p$}\n\\LeftLabel{$\\mathsf{\\to_L}$}\n\\BIC{$\\Sigma, B\\to C\\Rightarrow \\Delta, p$}\n\\DisplayProof\n,\\tau\n\\right)\n&\\longmapsto\n\\AXC{$\\mathsf{re}_p (\\pi_0, \\mathsf{li}_{B\\to C} (\\tau))$}\n\\noLine\n\\UIC{$\\Sigma, C\\Rightarrow \\Delta$}\n\\AXC{$\\mathsf{re}_p (\\pi_1, \\mathsf{ri}_{B\\to C} (\\tau))$}\n\\noLine\n\\UIC{$\\Sigma \\Rightarrow B,\\Delta$}\n\\LeftLabel{$\\mathsf{\\to_L}$}\n\\RightLabel{ ,}\n\\BIC{$\\Sigma, B\\to C\\Rightarrow \\Delta$}\n\\DisplayProof\n\\\\\\\\\n\\left(\n\\AXC{$\\pi_0$}\n\\noLine\n\\UIC{$\\Sigma,B,\\Box B\\Rightarrow \\Delta,p$}\n\\LeftLabel{$\\mathsf{refl}$}\n\\UIC{$\\Sigma,\\Box B\\Rightarrow \\Delta,p$}\n\\DisplayProof\n,\\tau\n\\right)\n&\\longmapsto\n\\AXC{$\\mathsf{re}_p(\\pi_0, \\mathsf{wk}_{B, \\emptyset} (\\tau))$}\n\\noLine\n\\UIC{$\\Sigma,B,\\Box B\\Rightarrow \\Delta$}\n\\LeftLabel{$\\mathsf{refl}$}\n\\RightLabel{ ,}\n\\UIC{$\\Sigma,\\Box B\\Rightarrow \\Delta$}\n\\DisplayProof\n\\\\\\\\\n\\left(\n\\AXC{$\\pi_0$}\n\\noLine\n\\UIC{$\\Gamma\\Rightarrow B,\\Delta, p$}\n\\AXC{$\\pi_1$}\n\\noLine\n\\UIC{$\\Gamma,B \\Rightarrow \\Delta, p$}\n\\LeftLabel{$\\mathsf{cut}$}\n\\BIC{$\\Gamma\\Rightarrow \\Delta, p$}\n\\DisplayProof\n,\\tau\n\\right)\n&\\longmapsto\n\\AXC{$\\mathsf{re}_p (\\pi_0, 
\\mathsf{wk}_{\\emptyset,B} (\\tau))$}\n\\noLine\n\\UIC{$\\Gamma\\Rightarrow B,\\Delta$}\n\\AXC{$\\mathsf{re}_p (\\pi_1, \\mathsf{wk}_{B,\\emptyset} (\\tau))$}\n\\noLine\n\\UIC{$\\Gamma, B\\Rightarrow \\Delta$}\n\\LeftLabel{$\\mathsf{cut}$}\n\\RightLabel{ ,}\n\\BIC{$\\Gamma\\Rightarrow \\Delta$}\n\\DisplayProof\n\\\\\\\\\n\\left(\n\\AXC{$\\pi_0$}\n\\noLine\n\\UIC{$\\Phi, \\Box \\Pi \\Rightarrow B, \\Sigma, p$}\n\\AXC{$\\pi_1$}\n\\noLine\n\\UIC{$\\Box \\Pi \\Rightarrow B$}\n\\LeftLabel{$\\mathsf{\\Box}$}\n\\BIC{$\\Phi, \\Box \\Pi \\Rightarrow \\Box B, \\Sigma, p$}\n\\DisplayProof\n,\\tau\\right)\n&\\longmapsto\n\\AXC{$\\mathsf{re}_p(\\pi_0, \\mathsf{li}_{\\: \\Box B}(\\tau))$}\n\\noLine\n\\UIC{$\\Phi, \\Box \\Pi \\Rightarrow B, \\Sigma$}\n\\AXC{$\\pi_1$}\n\\noLine\n\\UIC{$\\Box \\Pi \\Rightarrow B$}\n\\LeftLabel{$\\mathsf{\\Box}$}\n\\RightLabel{ .}\n\\BIC{$\\Phi, \\Box \\Pi \\Rightarrow \\Box B, \\Sigma$}\n\\DisplayProof\n\\end{align*}\n\nThe mapping $\\mathsf{re}_p$ is well defined. \n\nNow we claim that $\\mathsf{re}_p$ is non-expansive. It is sufficient to check that for any pairs $(\\pi,\\tau)$ and $(\\pi^\\prime,\\tau^\\prime)$, and any $n\\in\\mathbb N$ we have \n\\[\\pi\\sim_n\\pi',\\; \\tau\\sim_n\\tau'\\Rightarrow \\mathsf{re}_p(\\pi,\\tau)\\sim_n \\mathsf{re}_p(\\pi^\\prime,\\tau^\\prime).\\]\n\nAssume we have two pairs of $\\infty$-proofs $(\\pi,\\tau)$ and $(\\pi^\\prime,\\tau^\\prime)$ such that $\\pi\\sim_n\\pi'$, $\\tau\\sim_n\\tau'$.\nIf $n=0$, then trivially $ \\mathsf{re}_p(\\pi,\\tau)\\sim_n \\mathsf{re}_p(\\pi^\\prime,\\tau^\\prime)$. If $n>0$, then the main fragments of $\\pi$ and $\\pi^\\prime$ (and of $\\tau$ and $\\tau^\\prime$) are identical.\nSuppose that $(\\pi,\\tau)$ is not a cut pair or is a cut pair with a cut formula other than $p$. Then the same condition holds for the pair $(\\pi^\\prime,\\tau^\\prime)$. 
In this case, we have \n\\[ \\mathsf{re}_p(\\pi,\\tau) = \\pi \\sim_n \\pi^\\prime = \\mathsf{re}_p(\\pi^\\prime,\\tau^\\prime).\\] \n\nOtherwise, $(\\pi,\\tau)$ and $(\\pi^\\prime,\\tau^\\prime)$ \nare cut pairs with the cut formula $p$.\nNow the statement $\\mathsf{re}_p(\\pi,\\tau)\\sim_n \\mathsf{re}_p(\\pi^\\prime,\\tau^\\prime)$ is proved by induction on $\\lvert \\pi\\rvert= \\lvert \\pi^\\prime\\rvert$ with the case analysis according to the definition of $\\mathsf{re}_p$. \n\nIf the last inference in $\\pi$ is an application of the rule $(\\mathsf{\\to_R})$, then $\\pi$ and $\\pi^\\prime$ have the following forms\n\\[\\AXC{$\\pi_0$}\n\\noLine\n\\UIC{$\\Gamma,B\\Rightarrow C,\\Sigma,p$}\n\\LeftLabel{$\\mathsf{\\to_R}$}\n\\UIC{$\\Gamma\\Rightarrow B\\to C,\\Sigma,p$}\n\\DisplayProof\\qquad\n\\AXC{$\\pi^\\prime_0$}\n\\noLine\n\\UIC{$\\Gamma,B\\Rightarrow C,\\Sigma,p$}\n\\LeftLabel{$\\mathsf{\\to_R}$}\n\\RightLabel{ ,}\n\\UIC{$\\Gamma\\Rightarrow B\\to C,\\Sigma,p$}\n\\DisplayProof\n\\]\nwhere $\\pi_0 \\sim_n \\pi^\\prime_0$. Since $\\mathsf{i}_{B \\to C}$ is non-expansive, we have $\\mathsf{i}_{B \\to C}(\\tau) \\sim_n \\mathsf{i}_{B \\to C}(\\tau^\\prime)$. Consequently, by the induction hypothesis for pairs $(\\pi_0, \\mathsf{i}_{B \\to C}(\\tau))$ and $(\\pi^\\prime_0, \\mathsf{i}_{B \\to C}(\\tau^\\prime))$, we have $\\mathsf{re}_p(\\pi_0, \\mathsf{i}_{B \\to C}(\\tau)) \\sim_n \\mathsf{re}_p(\\pi^\\prime_0, \\mathsf{i}_{B \\to C}(\\tau^\\prime))$. Thus we obtain $\\mathsf{re}_p(\\pi,\\tau)\\sim_n \\mathsf{re}_p(\\pi^\\prime,\\tau^\\prime)$.\n\nConsider the case when the last inference in $\\pi$ is an application of the rule $(\\mathsf{\\Box})$. 
Then $\\pi$ and $\\pi^\\prime$ have the following forms\n\\[\\AXC{$\\pi_0$}\n\\noLine\n\\UIC{$\\Phi, \\Box \\Pi \\Rightarrow B, \\Sigma, p$}\n\\AXC{$\\pi_1$}\n\\noLine\n\\UIC{$\\Box \\Pi \\Rightarrow B$}\n\\LeftLabel{$\\mathsf{\\Box}$}\n\\BIC{$\\Phi, \\Box \\Pi \\Rightarrow \\Box B, \\Sigma, p$}\n\\DisplayProof\\qquad\n\\AXC{$\\pi^\\prime_0$}\n\\noLine\n\\UIC{$\\Phi, \\Box \\Pi \\Rightarrow B, \\Sigma, p$}\n\\AXC{$\\pi^\\prime_1$}\n\\noLine\n\\UIC{$\\Box \\Pi \\Rightarrow B$}\n\\LeftLabel{$\\mathsf{\\Box}$}\n\\RightLabel{ ,}\n\\BIC{$\\Phi, \\Box \\Pi \\Rightarrow \\Box B, \\Sigma, p$}\n\\DisplayProof\n\\] \nwhere $\\pi_0 \\sim_n \\pi^\\prime_0$ and $\\pi_1 \\sim_{n-1} \\pi^\\prime_1$. Since $\\mathsf{li}_{\\: \\Box B}$ is non-expansive, we have $\\mathsf{li}_{\\: \\Box B}(\\tau) \\sim_n \\mathsf{li}_{\\: \\Box B}(\\tau^\\prime)$. Consequently, by the induction hypothesis for pairs $(\\pi_0, \\mathsf{li}_{\\: \\Box B}(\\tau))$ and $(\\pi^\\prime_0, \\mathsf{li}_{\\: \\Box B}(\\tau^\\prime))$, we have $\\mathsf{re}_p(\\pi_0, \\mathsf{li}_{\\: \\Box B}(\\tau)) \\sim_n \\mathsf{re}_p(\\pi^\\prime_0, \\mathsf{li}_{\\: \\Box B}(\\tau^\\prime))$. Thus we obtain $\\mathsf{re}_p(\\pi,\\tau)\\sim_n \\mathsf{re}_p(\\pi^\\prime,\\tau^\\prime)$.\n\nAll the other cases are treated similarly, so we omit them. We have that the mapping $\\mathsf{re}_p$ is non-expansive.\nIt remains to check that the mapping $\\mathsf{re}_p$ is adequate.\n \nAssume we have a pair of $\\infty$-proofs $(\\pi,\\tau)$ such that $\\pi \\in \\mathcal{P}_n$ and $\\tau\\in\\mathcal{P}_n $. We claim that $\\mathsf{re}_p(\\pi,\\tau) \\in \\mathcal{P}_n$. If $n=0$, then trivially $ \\mathsf{re}_p(\\pi,\\tau) \\in \\mathcal{P}_0=\\mathcal{P}$. If the pair $(\\pi,\\tau)$ is not a cut pair or is a cut pair with a cut formula other than $p$, then we have $ \\mathsf{re}_p(\\pi,\\tau) = \\pi \\in \\mathcal{P}_n$. \n\nNow suppose $(\\pi,\\tau)$ is a cut pair with the cut formula $p$ and $n>0$. 
We proceed by induction on $\\lvert \\pi\\rvert$ with the case analysis according to the definition of $\\mathsf{re}_p$. Let us consider the cases of inference rules $(\\mathsf{\\to_L})$ and $(\\mathsf{\\Box})$. \n\nIf the last inference in $\\pi$ is an application of the rule $(\\mathsf{\\to_L})$, then $\\pi$ has the following form\n\\[\n\\AXC{$\\pi_0$}\n\\noLine\n\\UIC{$\\Sigma, C\\Rightarrow \\Delta, p$}\n\\AXC{$\\pi_1$}\n\\noLine\n\\UIC{$\\Sigma \\Rightarrow B,\\Delta, p$}\n\\LeftLabel{$\\mathsf{\\to_L}$}\n\\BIC{$\\Sigma, B\\to C\\Rightarrow \\Delta, p$}\n\\DisplayProof\n\\]\nwhere $\\pi_0, \\pi_1 \\in \\mathcal{P}_n$. Since $\\mathsf{li}_{B \\to C}$ and $\\mathsf{ri}_{B \\to C}$ are adequate, we have $\\mathsf{li}_{B \\to C}(\\tau)\\in \\mathcal{P}_n$ and $\\mathsf{ri}_{B \\to C}(\\tau) \\in \\mathcal{P}_n$. Consequently, by the induction hypothesis for the pairs $(\\pi_0, \\mathsf{li}_{B \\to C}(\\tau))$ and $(\\pi_1, \\mathsf{ri}_{B \\to C}(\\tau))$, we have $\\mathsf{re}_p(\\pi_0, \\mathsf{li}_{B \\to C}(\\tau))\\in \\mathcal{P}_n$ and $\\mathsf{re}_p(\\pi_1, \\mathsf{ri}_{B \\to C}(\\tau))\\in \\mathcal{P}_n$. Thus we obtain $\\mathsf{re}_p(\\pi,\\tau)\\in \\mathcal{P}_n$.\n\nConsider the case when the last inference in $\\pi$ is an application of the rule $(\\mathsf{\\Box})$.\nThen $\\pi$ has the form\n\\[\\AXC{$\\pi_0$}\n\\noLine\n\\UIC{$\\Phi, \\Box \\Pi \\Rightarrow B, \\Sigma, p$}\n\\AXC{$\\pi_1$}\n\\noLine\n\\UIC{$\\Box \\Pi \\Rightarrow B$}\n\\LeftLabel{$\\mathsf{\\Box}$}\n\\RightLabel{ ,}\n\\BIC{$\\Phi, \\Box \\Pi \\Rightarrow \\Box B, \\Sigma, p$}\n\\DisplayProof\n\\] \nwhere $\\pi_0 \\in \\mathcal{P}_n$ and $\\pi_1 \\in \\mathcal{P}_{n-1}$. Since $\\mathsf{li}_{\\: \\Box B}$ is adequate, we have $\\mathsf{li}_{\\: \\Box B}(\\tau) \\in \\mathcal{P}_n$. Consequently, by the induction hypothesis for the pair $(\\pi_0, \\mathsf{li}_{\\: \\Box B}(\\tau))$, we have $\\mathsf{re}_p(\\pi_0, \\mathsf{li}_{\\: \\Box B}(\\tau)) \\in \\mathcal{P}_n$. 
Thus we obtain $\\mathsf{re}_p(\\pi,\\tau)\\in \\mathcal{P}_n$.\n\nNotice that the last inference in $\\pi$ differs from an application of the rule $(\\mathsf{cut})$. The remaining cases of inference rules $(\\to_\\mathsf{R})$ and $(\\mathsf{refl})$ are treated in a similar way to the case of $(\\to_\\mathsf{L})$, so we omit them.\n\n\\end{proof}\n\n\\begin{lem}\\label{reboxadeq}\nGiven an adequate $B$-removing mapping $\\mathsf{re}_B$, there exists an adequate $\\Box B$-removing mapping $\\mathsf{re}_{\\Box B}$.\n\\end{lem}\n\\begin{proof}\nAssume we have an adequate $B$-removing mapping $\\mathsf{re}_B$.\nThe required $\\Box B$-removing mapping $\\mathsf{re}_{\\Box B}$ is obtained as the fixed-point of a contractive operator $\\mathsf G_{\\Box B} \\colon \\mathcal R_{\\Box B} \\to \\mathcal R_{\\Box B}$. \n\nFor a mapping $\\mathsf u\\in \\mathcal R_{\\Box B}$ and a pair of $\\infty$-proofs $(\\pi,\\tau)$, the $\\infty$-proof $\\mathsf G_{\\Box B}(\\mathsf u)(\\pi,\\tau)$ is defined as follows. \nIf $(\\pi,\\tau)$ is not a cut pair or a cut pair with \nthe cut formula being not $\\Box B$, then we put $\\mathsf G_{\\Box B}(\\mathsf u)(\\pi,\\tau)=\\pi$. \n\n\n\n\nNow let $(\\pi,\\tau)$ be a cut pair with \nthe cut formula $\\Box B$ and the cut result $\\Gamma\\Rightarrow \\Delta$.\nIf $\\lvert \\pi\\rvert=0$ or $\\lvert \\tau \\rvert=0$, then $\\Gamma\\Rightarrow \\Delta$ is an initial sequent. In this case, we define $\\mathsf G_{\\Box B}(\\mathsf u)(\\pi,\\tau)$ as the $\\infty$-proof consisting only of the sequent $\\Gamma\\Rightarrow \\Delta$. \n\nSuppose that $\\lvert \\pi\\rvert>0$ and $\\lvert \\tau \\rvert>0$. 
We define $\\mathsf G_{\\Box B}(\\mathsf u)(\\pi,\\tau)$ according to the last application of an inference rule in $\\pi$:\\\\\n\\begin{align*}\n\\left(\n\\AXC{$\\pi_0$}\n\\noLine\n\\UIC{$\\Gamma,C\\Rightarrow D,\\Sigma,\\Box B$}\n\\LeftLabel{$\\mathsf{\\to_R}$}\n\\UIC{$\\Gamma\\Rightarrow C\\to D,\\Sigma,\\Box B$}\n\\DisplayProof\n,\\tau\n\\right)\n&\\longmapsto\n\\AXC{$\\mathsf{u}(\\pi_0, \\mathsf{i}_{C \\to D}(\\tau))$}\n\\noLine\n\\UIC{$\\Gamma,C\\Rightarrow D,\\Sigma$}\n\\LeftLabel{$\\mathsf{\\to_R}$}\n\\RightLabel{ ,}\n\\UIC{$\\Gamma\\Rightarrow C\\to D,\\Sigma$}\n\\DisplayProof\n\\\\\\\\\n\\left(\n\\AXC{$\\pi_0$}\n\\noLine\n\\UIC{$\\Sigma, D\\Rightarrow \\Delta, \\Box B$}\n\\AXC{$\\pi_1$}\n\\noLine\n\\UIC{$\\Sigma \\Rightarrow C,\\Delta, \\Box B$}\n\\LeftLabel{$\\mathsf{\\to_L}$}\n\\BIC{$\\Sigma, C\\to D\\Rightarrow \\Delta, \\Box B$}\n\\DisplayProof\n,\\tau\n\\right)\n&\\longmapsto\n\\AXC{$\\mathsf{u} (\\pi_0, \\mathsf{li}_{C\\to D} (\\tau))$}\n\\noLine\n\\UIC{$\\Sigma, D\\Rightarrow \\Delta$}\n\\AXC{$\\mathsf{u} (\\pi_1, \\mathsf{ri}_{C\\to D} (\\tau))$}\n\\noLine\n\\UIC{$\\Sigma \\Rightarrow C,\\Delta$}\n\\LeftLabel{$\\mathsf{\\to_L}$}\n\\RightLabel{ ,}\n\\BIC{$\\Sigma, C\\to D\\Rightarrow \\Delta$}\n\\DisplayProof\n\\end{align*}\n\\begin{align*}\n\\left(\n\\AXC{$\\pi_0$}\n\\noLine\n\\UIC{$\\Sigma,C,\\Box C\\Rightarrow \\Delta,\\Box B$}\n\\LeftLabel{$\\mathsf{refl}$}\n\\UIC{$\\Sigma,\\Box C\\Rightarrow \\Delta,\\Box B$}\n\\DisplayProof\n,\\tau\n\\right)\n&\\longmapsto\n\\AXC{$\\mathsf{u}(\\pi_0, \\mathsf{wk}_{C, \\emptyset} (\\tau))$}\n\\noLine\n\\UIC{$\\Sigma,C,\\Box C\\Rightarrow \\Delta$}\n\\LeftLabel{$\\mathsf{refl}$}\n\\RightLabel{ ,}\n\\UIC{$\\Sigma,\\Box C\\Rightarrow \\Delta$}\n\\DisplayProof\n\\\\\\\\\n\\left(\n\\AXC{$\\pi_0$}\n\\noLine\n\\UIC{$\\Gamma\\Rightarrow C,\\Delta, \\Box B$}\n\\AXC{$\\pi_1$}\n\\noLine\n\\UIC{$\\Gamma,C \\Rightarrow \\Delta, \\Box B$}\n\\LeftLabel{$\\mathsf{cut}$}\n\\BIC{$\\Gamma\\Rightarrow \\Delta, \\Box 
B$}\n\\DisplayProof\n,\\tau\n\\right)\n&\\longmapsto\n\\AXC{$\\mathsf{u} (\\pi_0, \\mathsf{wk}_{\\emptyset,C} (\\tau))$}\n\\noLine\n\\UIC{$\\Gamma\\Rightarrow C,\\Delta$}\n\\AXC{$\\mathsf{u} (\\pi_1, \\mathsf{wk}_{C,\\emptyset} (\\tau))$}\n\\noLine\n\\UIC{$\\Gamma, C\\Rightarrow \\Delta$}\n\\LeftLabel{$\\mathsf{cut}$}\n\\RightLabel{ ,}\n\\BIC{$\\Gamma\\Rightarrow \\Delta$}\n\\DisplayProof\n\\\\\\\\\n\\left(\n\\AXC{$\\pi_0$}\n\\noLine\n\\UIC{$\\Phi, \\Box \\Pi \\Rightarrow C, \\Sigma, \\Box B$}\n\\AXC{$\\pi_1$}\n\\noLine\n\\UIC{$\\Box \\Pi \\Rightarrow C$}\n\\LeftLabel{$\\mathsf{\\Box}$}\n\\BIC{$\\Phi, \\Box \\Pi \\Rightarrow \\Box C, \\Sigma, \\Box B$}\n\\DisplayProof\n,\\tau\\right)\n&\\longmapsto\n\\AXC{$\\mathsf{u}(\\pi_0, \\mathsf{li}_{\\: \\Box C}(\\tau))$}\n\\noLine\n\\UIC{$\\Phi, \\Box \\Pi \\Rightarrow C, \\Sigma$}\n\\AXC{$\\pi_1$}\n\\noLine\n\\UIC{$\\Box \\Pi \\Rightarrow C$}\n\\LeftLabel{$\\mathsf{\\Box}$} \n\\RightLabel{ .}\n\\BIC{$\\Phi, \\Box \\Pi \\Rightarrow \\Box C, \\Sigma$}\n\\DisplayProof\n\\end{align*}\n\nConsider the case when $\\pi$ has the form \n\\[\n\\AXC{$\\pi_0$}\n\\noLine\n\\UIC{$\\Phi, \\Box \\Pi \\Rightarrow B, \\Sigma$}\n\\AXC{$\\pi_1$}\n\\noLine\n\\UIC{$\\Box \\Pi \\Rightarrow B$}\n\\LeftLabel{$\\mathsf{\\Box}$}\n\\RightLabel{ .}\n\\BIC{$\\Phi, \\Box \\Pi \\Rightarrow \\Box B, \\Sigma$}\n\\DisplayProof\n\\]\nWe define $\\mathsf G_{\\Box B}(\\mathsf u)(\\pi,\\tau)$ according to the last application of an inference rule in $\\tau$. \nIf the last inference is an application of the rule $\\mathsf{(refl)}$ with the principal formula being not $\\Box B$ or is an application of the rule $\\mathsf{(\\Box)}$ without the formula $\\Box B$ in the right premise, then $\\mathsf G_{\\Box B}(\\mathsf u)(\\pi,\\tau)$ is defined similarly to the previous cases of $\\mathsf{(refl)}$ and $(\\mathsf{\\Box})$. 
Cases of inference rules $\\mathsf{(\\to_L)}$, $\\mathsf{(\\to_R)}$, and $\\mathsf{(cut)}$ are also completely similar to the previous cases of $\\mathsf{(\\to_L)}$, $\\mathsf{(\\to_R)}$, and $\\mathsf{(cut)}$.\n\n\n\nIf the last application of an inference rule in $\\tau$ is an application of the rule $\\mathsf{(refl)}$ with the principal formula $\\Box B$, then we put\n\\[\\mathsf G_{\\Box B}(\\mathsf u) \\colon\n\\left(\\pi,\n\\AXC{$\\tau_0$}\n\\noLine\n\\UIC{$\\Gamma,B,\\Box B\\Rightarrow \\Delta$}\n\\LeftLabel{$\\mathsf{refl}$}\n\\UnaryInfC{$\\Gamma,\\Box B\\Rightarrow \\Delta$}\n\\DisplayProof\n\\right)\n\\longmapsto\\mathsf{re}_{B}(\\pi_0,\\mathsf{u}(\\mathsf{wk}_{B,\\emptyset}(\\pi),\\tau_0)).\n\\]\n\nIt remains to consider the case when $\\tau$ has the form\n\\[\n\\AxiomC{$\\tau_0$}\n\\noLine\n\\UnaryInfC{$\\Phi^\\prime, \\Box B, \\Box \\Pi^\\prime \\Rightarrow C, \\Sigma^\\prime$}\n\\AxiomC{$\\tau_1$}\n\\noLine\n\\UnaryInfC{$\\Box B,\\Box \\Pi^\\prime \\Rightarrow C$}\n\\LeftLabel{$\\mathsf{\\Box}$}\n\\RightLabel{.}\n\\BinaryInfC{$\\Phi^\\prime, \\Box B, \\Box \\Pi^\\prime \\Rightarrow \\Box C, \\Sigma^\\prime$}\n\\DisplayProof\n\\]\nWe define $\\mathsf G_{\\Box B}(\\mathsf u)(\\pi,\n\\tau)$ as\n\\[\n\\AxiomC{$\\mathsf{u}(\\mathsf{li}_{\\Box C}(\\pi),\\tau_0)$}\n\\noLine\n\\UnaryInfC{$\\Phi^\\prime, \\Box \\Pi^\\prime \\Rightarrow C, \\Sigma^\\prime$}\n\\AxiomC{$\n\\mathsf{u}\n\\left(\\pi',\n\\mathsf{wk}_{\\Box\\Pi\\backslash\\Box\\Pi^\\prime,\\varnothing}(\\tau_1)\n\\right)$}\n\\noLine\n\\UnaryInfC{$\\Box\\Pi\\backslash \\Box\\Pi^\\prime, \\Box\\Pi^\\prime\\Rightarrow C$}\n\\LeftLabel{$\\mathsf{\\Box}$}\n\\RightLabel{ ,}\n\\BinaryInfC{$\\Phi^\\prime, \\Box \\Pi^\\prime \\Rightarrow \\Box C, \\Sigma^\\prime$}\n\\DisplayProof\n\\]\nwhere $\\pi'$ is equal to\n\\[\n\\AxiomC{$\\mathsf{wk}_{\\Box\\Pi^\\prime\\backslash\\Box\\Pi,C}(\\pi_1)$}\n\\noLine\n\\UnaryInfC{$\\Box\\Pi^\\prime\\backslash \\Box\\Pi, \\Box\\Pi\\Rightarrow 
B,C$}\n\\AxiomC{$\\pi_1$}\n\\noLine\n\\UnaryInfC{$\\Box\\Pi\\Rightarrow B$}\n\\LeftLabel{$\\mathsf{\\Box}$}\n\\RightLabel{ .}\n\\BinaryInfC{$\\Box\\Pi^\\prime\\backslash \\Box\\Pi,\\Box\\Pi\\Rightarrow\\Box B, C$}\n\\DisplayProof\n\\]\nNotice that $\\Box\\Pi^\\prime\\backslash \\Box\\Pi,\\Box\\Pi=\\Box\\Pi\\backslash \\Box\\Pi^\\prime,\\Box\\Pi^\\prime$.\n\nNow the operator $\\mathsf {G_{\\Box B}}$ is well-defined.\nBy the case analysis according to the definition of $\\mathsf {G_{\\Box B}}$, we see that $\\mathsf {G_{\\Box B}} (\\mathsf u)$ is non-expansive and belongs to $\\mathcal R_{\\Box B}$ whenever $\\mathsf u \\in \\mathcal R_{\\Box B}$.\n\nWe claim that $\\mathsf {G_{\\Box B}}$ is contractive. It is sufficient to check that for any $\\mathsf u, \\mathsf v\\in \\mathcal R_{\\Box B}$ and any $n,k\\in\\mathbb N$ we have \n\\[\\mathsf u\\sim_{n,k}\\mathsf v \\Rightarrow \\mathsf {G_{\\Box B}(u)}\\sim_{n,k+1}\\mathsf{G_{\\Box B}(v)}.\\]\n\n\nAssume there are two $\\Box B$-removing mappings $\\mathsf u$ and $\\mathsf v$ such that $\\mathsf u\\sim_{n,k}\\mathsf v$. Consider an arbitrary pair of $\\infty$-proofs $(\\pi,\\tau)$. By the case analysis according to the definition of $\\mathsf {G_{\\Box B}}$, we prove that $\\mathsf G_{\\Box B}(\\mathsf u)(\\pi,\\tau)\\sim_{n}\\mathsf G_{\\Box B}(\\mathsf v)(\\pi,\\tau)$. In addition, the same case analysis shows that the index of agreement improves from $k$ to $k+1$, so $\\mathsf {G_{\\Box B}(u)}\\sim_{n,k+1}\\mathsf{G_{\\Box B}(v)}$ and the operator $\\mathsf {G_{\\Box B}}$ is contractive. Being a contractive operator on a non-empty spherically complete ultrametric space, $\\mathsf {G_{\\Box B}}$ has a fixed-point, which gives the required mapping $\\mathsf{re}_{\\Box B}$; its adequacy is verified as in the proof of Lemma \\ref{reabadeq}.\n\\end{proof}\n\nA \\emph{cyclic proof} is obtained from a finite proof tree by connecting each non-axiom leaf, via a back-link, to an identical sequent occurring strictly below it on the same branch. An example of a cyclic proof, with the back-link drawn as an arrow, involves the formula $F=\\Box(\\Box(p \\rightarrow \\Box p) \\rightarrow p)$. \n\nThe notion of cyclic proof determines the same provability\nrelation as the notion of regular $\\infty$-proof. Obviously, each cyclic proof can be\nunravelled into a regular $\\infty$-proof. The converse is also true.\n\\begin{prop}[cf.
\\cite{Sham}, Proposition 3.1]\nAny regular $\\infty$-proof of $\\mathsf{Grz}_\\infty$ can be obtained by unraveling a cyclic proof.\n\\end{prop}\n\n\n\n\nIn the rest of the section we establish that any sequent provable in $\\mathsf{Grz}_\\infty$ has a cyclic proof.\n\n\n\n\\begin{lem}\\label{contraction}\nFor any formula $A$, the rules\n\\begin{gather*}\n\\AXC{$\\Gamma , A,A \\Rightarrow \\Delta$}\n\\LeftLabel{$\\mathsf{cl}_{A}$}\n\\UIC{$\\Gamma ,A \\Rightarrow \\Delta$}\n\\DisplayProof\\qquad\n\\AXC{$\\Gamma \\Rightarrow A,A, \\Delta$}\n\\LeftLabel{$\\mathsf{cr}_{A}$}\n\\UIC{$\\Gamma \\Rightarrow A, \\Delta$}\n\\DisplayProof\n\\end{gather*}\nare admissible in $\\mathsf{Grz}_{\\infty}$.\n\\end{lem}\n\\begin{proof}\nAssume $\\pi$ and $\\tau$ are $\\infty$-proofs of the sequents $\\Gamma , A,A \\Rightarrow \\Delta$ and $\\Gamma \\Rightarrow A,A, \\Delta$ in the system $\\mathsf{Grz}_{\\infty}$.\nLet $\\xi$ be an $\\infty$-proof of the sequent $\\Gamma , A \\Rightarrow A,\\Delta$ in $\\mathsf{Grz}_{\\infty}$, which exists by Lemma \\ref{AtoA}.\n\nThe required $\\infty$-proofs of the sequents $\\Gamma , A \\Rightarrow \\Delta$ and $\\Gamma \\Rightarrow A,\\Delta$ are defined by setting $\n\\mathsf{cl}_{A}(\\pi)=\\mathsf{re}_A(\\xi, \\pi)$ and \n$\n\\mathsf{cr}_{A}(\\tau) =\\mathsf{re}_A(\\tau,\\xi)$, where $\\mathsf{re}_A$ is an $A$-removing mapping from Lemma \\ref{reabadeq}. 
Since the mapping $\\mathsf{re}_A$ is adequate, the $\\infty$-proofs $\\mathsf{cl}_{A}(\\pi)$ and $\\mathsf{cr}_{A}(\\tau)$ do not contain applications of the rule $(\\mathsf{cut})$.\n\\end{proof}\n\nLet $\\mathcal{T}^\\ast$ denote the set of all root-preserving mappings from the set of $\\infty$-proofs of $\\mathsf{Grz}_\\infty$ to itself.\n\n\\begin{lem}\\label{Comp T*}\nThe pair $(\\mathcal T^\\ast, l_1)$ is a non-empty spherically complete ultrametric space.\n\\end{lem}\n\\begin{proof}\nThe proof of spherical completeness of the space $(\\mathcal T^\\ast, l_1)$ is analogous to the proof of Proposition \\ref{SphCom}.\nThe space is obviously non-empty, since the identity function lies in $\\mathcal T^\\ast$.\n\\end{proof}\n\nAn application of the modal rule $(\\Box)$ \n\\[\\AXC{$\\Gamma, \\Box \\Pi \\Rightarrow A, \\Delta$}\n\\AXC{$\\Box \\Pi \\Rightarrow A$}\n\\LeftLabel{$\\mathsf{\\Box}$}\n\\BIC{$\\Gamma, \\Box \\Pi \\Rightarrow \\Box A, \\Delta$}\n\\DisplayProof \n\\]\nis called \\emph{slim} if the multiset $\\Pi$ here is a set. \nAn $\\infty$-proof is called \\emph{slim} if every application of the rule $(\\Box)$ in it is slim.\n\n\\begin{lem}\nIf $\\mathsf{Grz}_\\infty\\vdash \\Gamma \\Rightarrow \\Delta$, then the sequent $\\Gamma \\Rightarrow \\Delta$ has a slim $\\infty$-proof in $\\mathsf{Grz}_\\infty$.\n\\end{lem} \n\\begin{proof}\n\n\nWe construct a mapping $\\mathsf {slim}\\in \\mathcal T^\\ast$ that maps an $\\infty$-proof to a slim $\\infty$-proof of the same sequent. This mapping is defined as the fixed-point of a contractive operator $\\mathsf H\\colon \\mathcal T^\\ast \\to \\mathcal T^\\ast $.\n\nFor a mapping $\\mathsf u\\in \\mathcal T^\\ast$ and an $\\infty$-proof $\\pi$ of $\\mathsf{Grz}_\\infty$, the $\\infty$-proof $\\mathsf H(\\mathsf u)(\\pi)$ is defined as follows.\nIf $\\lvert \\pi\\rvert=0$, then we put $\\mathsf H(\\mathsf u)(\\pi)=\\pi$.
\n\nOtherwise, we define $\\mathsf H(\\mathsf u)(\\pi)$ according to the last application of an inference rule in $\\pi$:\n\\begin{gather*}\n\\AXC{$\\pi_1$}\n\\noLine\n\\UIC{$\\Gamma , B \\Rightarrow \\Delta$}\n\\AXC{$\\pi_2$}\n\\noLine\n\\UIC{$\\Gamma \\Rightarrow A, \\Delta$}\n\\LeftLabel{$\\mathsf{\\rightarrow_L}$}\n\\BIC{$\\Gamma , A \\rightarrow B \\Rightarrow \\Delta$}\n\\DisplayProof \n\\longmapsto\n\\AXC{$\\mathsf u(\\pi_1)$}\n\\noLine\n\\UIC{$\\Gamma , B \\Rightarrow \\Delta$}\n\\AXC{$\\mathsf u(\\pi_2)$}\n\\noLine\n\\UIC{$\\Gamma \\Rightarrow A, \\Delta$}\n\\LeftLabel{$\\mathsf{\\rightarrow_L}$}\n\\RightLabel{ ,}\n\\BIC{$\\Gamma , A \\rightarrow B \\Rightarrow \\Delta$}\n\\DisplayProof \n\\end{gather*}\n\\begin{gather*}\n\\AXC{$\\pi_0$}\n\\noLine\n\\UIC{$\\Gamma, A \\Rightarrow B , \\Delta$}\n\\LeftLabel{$\\mathsf{\\rightarrow_R}$}\n\\UIC{$\\Gamma \\Rightarrow A \\rightarrow B , \\Delta$}\n\\DisplayProof \n\\longmapsto\n\\AXC{$\\mathsf u(\\pi_0)$}\n\\noLine\n\\UIC{$\\Gamma, A \\Rightarrow B , \\Delta$}\n\\LeftLabel{$\\mathsf{\\rightarrow_R}$}\n\\RightLabel{ ,}\n\\UIC{$\\Gamma \\Rightarrow A \\rightarrow B , \\Delta$}\n\\DisplayProof \n\\end{gather*}\n\\begin{gather*}\n\\AXC{$\\pi_0$}\n\\noLine\n\\UIC{$\\Gamma, A, \\Box A \\Rightarrow \\Delta$}\n\\LeftLabel{$\\mathsf{refl}$}\n\\UIC{$\\Gamma , \\Box A\\Rightarrow \\Delta$}\n\\DisplayProof \n\\longmapsto\n\\AXC{$\\mathsf u(\\pi_0)$}\n\\noLine\n\\UIC{$\\Gamma, A, \\Box A \\Rightarrow \\Delta$}\n\\LeftLabel{$\\mathsf{refl}$}\n\\RightLabel{ .}\n\\UIC{$\\Gamma , \\Box A\\Rightarrow \\Delta$}\n\\DisplayProof \n\\end{gather*}\n\nFor a multiset $\\Pi$, we denote its underlying set by $\\Pi^S$. 
The case of the modal rule is as follows:\n\\begin{gather*}\n\\AXC{$\\pi_1$}\n\\noLine\n\\UIC{$\\Gamma, \\Box \\Pi \\Rightarrow A, \\Delta$}\n\\AXC{$\\pi_2$}\n\\noLine\n\\UIC{$\\Box \\Pi \\Rightarrow A$}\n\\LeftLabel{$\\mathsf{\\Box}$}\n\\BIC{$\\Gamma, \\Box \\Pi \\Rightarrow \\Box A, \\Delta$}\n\\DisplayProof \n\\longmapsto\n\\AXC{$\\mathsf u(\\pi_1)$}\n\\noLine\n\\UIC{$\\Gamma, \\Box \\Pi \\Rightarrow A, \\Delta$}\n\\AXC{$\\mathsf {u}(\\pi^\\prime_2)$}\n\\noLine\n\\UIC{$\\Box \\Pi^S \\Rightarrow A$}\n\\LeftLabel{$\\mathsf{\\Box}$}\n\\RightLabel{ ,}\n\\BIC{$\\Gamma, \\Box \\Pi \\Rightarrow \\Box A, \\Delta$}\n\\DisplayProof \n\\end{gather*}\nwhere $\\pi^\\prime_2$ is an $\\infty$-proof of the sequent $\\Box \\Pi^S \\Rightarrow A$ in $\\mathsf{Grz}_\\infty$. This $\\infty$-proof exists by the\nadmissibility of contraction rules obtained in Lemma \\ref{contraction}.\n \nNow it can easily be shown that if $\\mathsf u\\sim_{n,k} \\mathsf v$ for some $\\mathsf u, \\mathsf v\\in \\mathcal T^\\ast$ and $n,k\\in\\mathbb N$, then $\\mathsf {H(u)}\\sim_{n,k+1}\\mathsf{H(v)}$. Therefore the mapping $\\mathsf H$ is a contractive operator on a spherically complete ultrametric space. Thus, it has a fixed-point, which we denote by $\\mathsf {slim}$.\n\nIn a way analogous to the proofs of Lemma \\ref{reboxadeq} and Theorem \\ref{infcuttoinf}, we see that $\\mathsf {slim}(\\pi)$ is a slim $\\infty$-proof for any $\\pi$.\n\\end{proof}\n\n\n\\begin{thm}\nIf $\\mathsf{Grz}_\\infty\\vdash \\Gamma \\Rightarrow \\Delta$, then there is a cyclic proof for the given sequent. \n\\end{thm}\n\\begin{proof}\nLet $\\pi$ be a slim $\\infty$-proof of the sequent $\\Gamma \\Rightarrow \\Delta$ obtained in the previous lemma. Note that all formulas from $\\pi$ are subformulas of the formulas from $\\Gamma \\cup \\Delta$. Consequently, the $\\infty$-proof $\\pi$ contains only finitely many different sequents that occur as right premises of the rule $(\\Box)$ in it.
By $k$, we denote the number of these sequents. \n\nLet $\\xi$ be the $(k+2)$-fragment of the $\\infty$-proof $\\pi$. Consider any branch $a_0, a_1, \\dotsc, a_n$ in $\\xi$ connecting the root with a leaf $a_n$ that is not marked by an initial sequent. This branch contains, by the pigeonhole principle, a pair of different nodes $a$ and $b$ determining coinciding right premises of the rule $(\\Box)$. Assuming that $b$ is further from the root of $\\xi$ than $a$, we cut the branch under consideration at the node $b$ and connect $b$, which has become a leaf, with $a$ by a back-link. Applying\na similar operation to the remaining branches of $\\xi$, we turn the $(k+2)$-fragment of $\\pi$ into a cyclic proof of the sequent $\\Gamma \\Rightarrow \\Delta$. \n\n\n\\end{proof}\n\n\n\\paragraph*{Funding.} The article was prepared within the framework of the Basic\nResearch Program at the National Research University \nHigher School of Economics (HSE) and supported within the framework of a subsidy by the Russian Academic Excellence Project \n'5-100'. This work is also supported by the Russian Foundation for Basic Research, grant 15-01-09218a.\n\n\\paragraph*{Acknowledgements.} The second author heartily thanks his beloved wife Maria Shamkanova for her warm and constant support. He is also indebted to the Lord God Almighty. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nInflation of a cylindrical rubber tube usually generates a prototypical localization instability, known as localized bulging, and such an interesting phenomenon was first recorded in detail and primarily analyzed by \\cite{mallock}. With applications to bulge initiation and propagation in pressure vessels \\citep{jam1984,ijms1988,book2007} and aneurysm formation in human arteries \\citep{new2011,sa2018}, inflation of a cylindrical hyperelastic tube offers a pertinent paradigm to study the mechanism of localized instabilities \\citep{ky1990,ky1991,prsa2021}. 
In addition, recent advances in inflated tubes and localized bulging can shed light on bifurcation in arteries affected by Marfan's syndrome \\citep{mrc2009,mrc2010}, bulge prevention in energy harvesting devices \\citep{re2013}, inflation-driven soft robots \\citep{jin2021}, inflation of nematic elastomers \\citep{he2020}, and bulge formation subject to additional effects of electric actuation \\citep{lu2015}, swelling \\citep{dm2017}, magnetic field \\citep{ijss2018}, and plasticity \\citep{takla2}. \n\nIn general, the evolution of localized bulging is mainly composed of three typical periods, including bulge initiation, growth, and propagation \\citep{wang2019}. Specifically, experimental evidence suggests that the diameter of the bulge will reach a maximum in the propagation stage, and a two-phase deformation can be observed \\citep{ky1990}. On the theoretical side, the pioneering works on bifurcation analysis of inflated membrane tubes and thick tubes were conducted by \\cite{ho79,ho791}. Later, \\cite{fu2008} found that a zero mode, which is usually viewed as another uniform deformation, precisely generates localized bulging, and a bifurcation condition was derived for a membrane tube based on dynamical systems theory. Since then, multiple fundamental problems were investigated, such as solution stability \\citep{pearce2010,fuxie2010,ams2012}, influence of material constitution \\citep{pearce2012}, imperfection sensitivity \\citep{fuxie2012}, and dynamic inflation \\citep{dynamic}. Within the framework of nonlinear elasticity, an explicit bifurcation condition for thick tubes was derived by \\cite{fu2016}, and the accuracy of the membrane assumption was examined. This bifurcation condition paves a convenient way to study localized bulging in rotating cylinders \\citep{wang2017}, fiber-reinforced tubes \\citep{wangfu2018}, and layered tubes \\citep{liu2019,ye2019}. 
Furthermore, a comprehensive understanding of localized bulging can provide useful insight into localized necking \\citep{eml2018}. We mention that the evolution of a bulge can only be handled in the nonlinear regime. To perform post-bifurcation analysis for inflated thick-walled cylinders, \\cite{fead2013} presented a numerical procedure by virtue of nonlinear finite element models. Within the framework of nonlinear elasticity, \\cite{ye2020} carried out a weakly nonlinear analysis to deduce the amplitude equation for a bulge and discovered an interesting parametric domain where necking occurs instead of bulging. Recently, the analysis of localizations was further extended to situations where surface tension is involved \\citep{wang2020,emery2021,emery20211,fu2021}. These works reflect a growing interest in the community in such a paradigmatic localization problem.\n\nIn addition to the investigations mentioned above, some effort was dedicated to establishing a reduced model for inspecting bulge formation and propagation in cylindrical hyperelastic tubes. \\cite{audoly2018} derived a one-dimensional model for the analysis of localized bulging in long tubes making use of nonlinear membrane theory and asymptotic expansion. A fundamental assumption is that localization can be viewed as a long-wavelength periodic mode. Subsequently, \\cite{biggins2020} proposed an energy method by virtue of asymptotic expansions to track the nonlinear evolution of an inhomogeneous solution and further to identify phase separation of typical localizations such as beading, bulging, and necking.\n\nAs mentioned earlier, localized bulging in inflated tubes can be viewed as an analogue to aneurysm formation in arteries \\citep{sh2001,fu2012}. Accordingly, theoretical or numerical analysis was performed to supply mechanical insight into possible pathogenesis behind aneurysms, see \\cite{ijes2014,fu2015,vn2017,vn2018} and the references therein. 
In particular, our previous analysis reveals that an aneurysm is more prone to appear as the innermost layer of an artery stiffens \\citep{ye2019}. This qualitatively explains why the risk of aneurysm formation for a person becomes higher with aging \\citep{fg2015}. \n\nGenerally speaking, two loading conditions, in which either the resultant axial force or the axial length is specified, are adopted in experiments. The former was adopted in \\cite{ky1990,ky1991,wang2019,prsa2021} while the latter was used in \\cite{ijms2006,ijms2008,guo2016,wang2019,prsa2021}. On the one hand, the resultant axial force can be fixed by suspending a dead weight at one end while air comes into the tube from the other end. In this scenario, the curve of pressure versus volume ratio possesses an $N$-shape. Therefore, localized bulging is induced by a limit-point instability \\citep{ijes1971,ijnm2007,wt2018} and the propagation pressure can be predicted by Maxwell's equal-area rule \\citep{jam1984}. On the other hand, the fixed axial length can be achieved by imposing a pre-stretch on the tube and then fixing the total length. In this situation, the pressure-volume ratio curve may be monotonic. The effect of pre-stretch on pressurized thin tubes was examined by \\cite{mao2014,hnijms}. Furthermore, human arteries usually suffer a fixed pre-stretch \\textit{in vivo} \\citep{bmm2014}. In this study, we also employ these two loading approaches.\n\nIt is pointed out that most existing literature on localized bulging is mainly concerned with a homogeneous or piecewise homogeneous material. Yet functionally graded materials (abbreviated by FGMs) with continuously varied physical or mechanical properties have unfolded many unusual functions such as high-temperature resistance and diminishing stress or strain concentration. This new type of material first emerged in the middle of the 1980s by mixing metal and ceramic phases in a manageable way. 
In doing so, it will acquire a desirable material property that varies spatially. Afterwards, the mechanics of FGMs has rapidly taken center stage in solid mechanics \\citep{ke2008,zhong2012,jha2013}. In particular, the popularity of 3D printing techniques substantially facilitates the fabrication of FGMs. Besides, human arteries chiefly consist of intima, media, and adventitia \\citep{je2000,hgo}, which can be viewed as a sandwich structure and further a special graded structure. In most practical applications, such as the Anaconda wave-energy extraction device \\citep{re2013}, we intend to prevent localized bulging or aneurysm formation. Naturally, increasing the stiffness of a homogeneous tube will raise the critical pressure inducing localized bulging since pressure is proportional to the elastic modulus. However, an extremely stiff structure will lose the ability to deform when subjected to a similar load and hence will degrade certain structural or physiological performance, particularly for arteries. A compromise is therefore to stiffen the tube only partially. Then a well-posed question arises. Where is the optimal position to be strengthened, the innermost surface, the outermost surface, or the middle part? In fact, the question has been perfectly solved by nature. It is the intermediate layer that is stiffest, and this is also the optimization result of artery evolution \\citep{je2000,jctr2012}. But why? Previous studies have shown the potential effect of modulus gradient form on the stress distribution in functionally graded rubberlike cylinders and spheres \\citep{mms2011} or pattern transition in growing graded tubular tissues \\citep{liu2020}. It is therefore of fundamental significance to clarify the influence of material inhomogeneity, different modulus gradients, as well as the maximum modulus mismatch, on bulge initiation, growth, and propagation. This motivates the current study. 
Although deformation or bifurcation analysis was carried out by \\cite{bb2009} for inflated and everted circular cylinders composed of a graded Mooney-Rivlin material and by \\cite{chenwq2017} for pressurized graded cylindrical tubes, respectively, localization instability was still not addressed. In this study, we aim at furnishing a thorough analysis of localized bulging in graded hyperelastic tubes under the combined action of internal pressure and axial stretching and unveiling the mechanism behind structure optimization of arteries.\n\nThis paper is organized as follows. In Section 2, we characterize the primary deformation generated by inflation for a general material constitution and a generic modulus gradient and establish all required formulas used in the bifurcation condition within the framework of nonlinear elasticity. Section 3 demonstrates the numerical strategy for identifying bulge initiation and presents a detailed parametric study for the onset of localized bulging. A finite element model is constructed and validated in Section 4. A nonlinear analysis based on finite element simulations is performed in Section 5 and the bulge propagation is investigated by Maxwell's equal-area rule. Finally, some conclusions are given in Section 6.\n\n\\section{Primary deformation and bifurcation condition}\n\\begin{figure}[!htbp]\n\\centering\\includegraphics[scale=0.8]{fig1}\\caption{(Color online) A graded tube with inner radius $A$, outer radius $B$ and length $L$ in its stress-free state.}\\label{fig1}\n\\end{figure}\nAs shown in Figure \\ref{fig1}, a cylindrical tube with inner radius $A$ and outer radius $B$ in its initial state is considered in this study. Since we focus on a graded soft tube, it is assumed that the elastic modulus of the tube is no longer uniformly distributed in the radial direction but is homogeneous in the axial direction. 
When such a graded tube is subjected to an internal pressure $P$ and an axial extension, a primary deformation is first generated, and the inner and outer radii become $a$ and $b$, respectively. Then we define the initial state as the reference configuration and the primary state as the current configuration. Assuming that the tube is composed of an incompressible hyperelastic material, we can use $W(\\mathbf{F})$ to represent the strain-energy function, where $\\mathbf{F}$ is the deformation gradient and is subject to the constraint $\\operatorname{det} \\mathbf{F}=1$, which is the so-called incompressibility condition.\n\nFor the current problem, it is convenient to adopt cylindrical polar coordinates in both the reference and current configurations. Taking the centroid as the origin, the position coordinates of the same material point in the reference and current states are denoted by $(R, \\Theta, Z)$ and $(r, \\theta, z)$, respectively, and the tube occupies $A\\leqslant R\\leqslant B$ and $-L\/2\\leqslant Z\\leqslant L\/2$ in the reference state. Bearing in mind that the primary deformation is axisymmetric, we can write the deformation gradient as\n\\begin{equation}\n\\mathbf{F}=\\lambda_1\\bm e_r\\otimes\\bm e_r+\\lambda_2\\bm e_\\theta\\otimes\\bm e_\\theta+\\lambda_3\\bm e_z\\otimes\\bm e_z,\\label{eq2_1}\n\\end{equation} \nwhere $\\lambda_i$ $(i=1,2,3)$ is the principal stretch in the $i$-direction and the subscripts 1, 2 and 3 denote the $r$-, $\\theta$- and $z$-directions. Furthermore, the common orthonormal basis $\\{\\bm e_r, \\bm e_\\theta, \\bm e_z\\}$ is used in both states. In particular, the first two principal stretches are equal to\n\\begin{equation}\n\\lambda_1=\\dfrac{\\textrm{d} r}{\\textrm{d} R},~~\\lambda_2=\\dfrac{r}{R},\\label{eq2_2}\n\\end{equation}\nand the third one $\\lambda_3$ is a constant since the primary deformation is homogeneous in the $z$-direction. 
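The kinematics above can be illustrated numerically. The following minimal sketch (plain Python; the geometry and stretch values are arbitrary illustrations, not data from this paper) verifies that the isochoric radial map $r(R)=\sqrt{a^2+(R^2-A^2)\/\lambda_z}$, together with $\lambda_1=\mathrm{d}r\/\mathrm{d}R$, $\lambda_2=r\/R$ and $\lambda_3=\lambda_z$, satisfies the incompressibility constraint $\lambda_1\lambda_2\lambda_3=1$ through the thickness:

```python
# Check (illustrative geometry and stretches, not the paper's data) that the
# isochoric radial map r(R) = sqrt(a^2 + (R^2 - A^2)/lambda_z), combined with
# lambda_1 = dr/dR, lambda_2 = r/R and lambda_3 = lambda_z, gives det F = 1.
import math

A, B = 1.0, 1.5          # reference inner/outer radii (assumed values)
lam_a, lam_z = 1.8, 1.2  # hoop stretch at the inner face and axial stretch
a = lam_a * A            # deformed inner radius

def r(R):
    return math.sqrt(a**2 + (R**2 - A**2) / lam_z)

for R in (1.0, 1.2, 1.5):
    h = 1e-6
    lam1 = (r(R + h) - r(R - h)) / (2 * h)  # central difference for dr/dR
    lam2 = r(R) / R
    J = lam1 * lam2 * lam_z                 # det F = lambda_1 lambda_2 lambda_3
    assert abs(J - 1.0) < 1e-6
print("det F = 1 at all sampled radii")
```

The same check passes for any positive choice of $A$, $B$, $\lambda_a$ and $\lambda_z$, since the map is constructed precisely to conserve volume.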
\n\nThe strain-energy function can be written as a function of these principal stretches $W(\\lambda_1,\\lambda_2,\\lambda_3)$. Accordingly, the non-zero components of the Cauchy stress tensor are given by \n\\begin{align}\n\\sigma_{ii}=\\lambda_iW_{,i}-p,~~\\textrm{no summation}, \\label{eq2_3}\n\\end{align}\nwhere $p$ denotes the Lagrange multiplier enforcing the incompressibility condition. Meanwhile, a comma denotes the derivative with respect to the corresponding variable, for instance, $W_{,2}=\\partial W\/\\partial \\lambda_2$. \n\nIt can be readily checked that the primary deformation is governed by the sole equilibrium equation: \n\\begin{align}\n\\dfrac{\\textrm{d} \\sigma_{rr}}{\\textrm{d} r}+\\dfrac{\\sigma_{rr}-\\sigma_{\\theta\\theta}}{r}=0.\\label{eq2_4}\n\\end{align}\n\nIn order to facilitate subsequent analysis, we define two parameters according to the incompressibility condition \n\\begin{align}\n\\lambda=\\lambda_2=\\frac{r}{R},~\\lambda_z=\\lambda_3.\\label{eq2_5}\n\\end{align}\nCorrespondingly, the first principal stretch is determined by $\\lambda_1=1\/(\\lambda \\lambda_z)$. In addition, the following relations can be derived:\n\\begin{align}\n&r^2=\\lambda_z^{-1}(R^2-A^2)+a^2,~~\\theta=\\Theta,~~z=\\lambda_zZ.\\label{eq2_6}\n\\end{align}\n\nNext, we replace $\\lambda_1$ in $W$ and employ a reduced strain-energy function $w$ which is determined by\n\\begin{align}\nw(\\lambda,\\lambda_z)\\equiv W(1\/(\\lambda \\lambda_z),\\lambda,\\lambda_z).\\label{eq2_7}\n\\end{align}\nSubsequently, we find that\n\\begin{align}\nw_{,1}=W_{,2}-\\frac{1}{\\lambda^2\\lambda_z}W_{,1},~~w_{,2}=W_{,3}-\\frac{1}{\\lambda\\lambda_z^2}W_{,1}.\\label{eq2_8}\n\\end{align}\n\nSubstituting the stress components in (\\ref{eq2_3}) into the governing equation (\\ref{eq2_4}) and utilizing (\\ref{eq2_8}) yields\n\\begin{align}\n\\sigma_{rr}=\\int_{r_0}^{r}\\frac{\\lambda w_{,1}}{r}\\textrm{d}r, \\label{eq2_9}\n\\end{align}\nwhere $r_0$ is a constant. 
Actually $w$ is dependent on $R$ for a graded material. It can be deduced from $(\\ref{eq2_5})_1$ and $(\\ref{eq2_6})_1$ that $R=A\\sqrt{\\lambda_z\\lambda_a^2-1}\/\\sqrt{\\lambda_z\\lambda^2-1}$. Then we apply a change of variable $\\lambda=r\/R$ and a new expression for the Cauchy stress $\\sigma_{rr}$ is given by\n\\begin{equation}\n\\sigma_{rr}=\\int_{\\lambda_0}^\\lambda\\frac{w_{,1}}{1-\\lambda^2\\lambda_z}\\textrm{d}\\lambda,\\label{eq2_10}\n\\end{equation}\nwhere $\\lambda_0$ is another constant to be determined later and we have replaced $R$ by $A\\sqrt{\\lambda_z\\lambda_a^2-1}\/\\sqrt{\\lambda_z\\lambda^2-1}$ such that the integrand in (\\ref{eq2_10}) is only a function of $\\lambda$ (see also equation (\\ref{eq2_15})). \n\nBefore we proceed further, it is appropriate to introduce the hoop stretches at the inner and outer surfaces as follows\n\\begin{equation}\n\\lambda_a=\\frac{a}{A},~~\\lambda_b=\\frac{b}{B},\\label{eq2_11}\n\\end{equation}\nand their relation can be obtained from (\\ref{eq2_6})\n\\begin{equation}\n\\lambda_b=\\sqrt{\\frac{B^2-A^2+A^2\\lambda_a^2\\lambda_z}{B^2\\lambda_z}}.\\label{eq2_12}\n\\end{equation}\n\nIt is assumed that the outer surface of the tube is traction-free and the inner surface suffers a pressure $P$. Therefore, we obtain\n\\begin{equation}\n\\sigma_{rr}|_{\\lambda=\\lambda_b}=0,~~\\sigma_{rr}|_{\\lambda=\\lambda_a}=-P.\\label{eq2_13}\n\\end{equation}\nIt then follows from the boundary condition $(\\ref{eq2_13})_{1}$ that the constant of integration $\\lambda_0$ is equal to $\\lambda_b$. In addition, the other one $(\\ref{eq2_13})_{2}$ furnishes\n\\begin{equation}\nP=\\int_{\\lambda_b}^{\\lambda_a}\\frac{w_{,1}}{\\lambda^2\\lambda_z-1}\\textrm{d}\\lambda.\\label{eq2_14}\n\\end{equation}\nActually, the pressure $P$ can be regarded as a function of $\\lambda_a$ and $\\lambda_z$. 
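Once a material model and a grading law are chosen, the pressure integral can be evaluated by elementary quadrature. The sketch below assumes an incompressible neo-Hookean energy with a linearly graded shear modulus $\mu(R)$ (both the grading law and all numerical values are illustrative assumptions, not the paper's data); the integrand is written as $w_{,1}\/(\lambda^2\lambda_z-1)$, which yields a positive pressure for inflation:

```python
# Quadrature sketch of the inflation pressure
#   P = int_{lam_b}^{lam_a} w_,1 / (lam^2 lam_z - 1) d(lam)
# for an incompressible neo-Hookean tube with a radially graded shear
# modulus mu(R).  All parameter values and the grading law are illustrative.
import math

A, B = 1.0, 1.2            # reference radii (assumed)
mu_in, mu_out = 1.0, 2.0   # shear moduli at the inner/outer faces (assumed)
lam_z = 1.0                # fixed axial stretch

def mu(R):                 # linear modulus gradient through the thickness
    return mu_in + (mu_out - mu_in) * (R - A) / (B - A)

def R_of_lam(lam, lam_a):  # invert lam = r/R using incompressibility
    return A * math.sqrt((lam_z * lam_a**2 - 1) / (lam_z * lam**2 - 1))

def w1(lam, lam_a):        # dw/dlam for neo-Hookean: mu(R)(lam - lam^-3 lam_z^-2)
    return mu(R_of_lam(lam, lam_a)) * (lam - lam**-3 * lam_z**-2)

def pressure(lam_a, n=2000):
    lam_b = math.sqrt((B**2 - A**2 + A**2 * lam_a**2 * lam_z) / (B**2 * lam_z))
    h = (lam_a - lam_b) / n
    s = 0.0
    for i in range(n):     # midpoint rule for the lambda-integral
        lam = lam_b + (i + 0.5) * h
        s += w1(lam, lam_a) / (lam**2 * lam_z - 1) * h
    return s

print(pressure(2.0))
```

For a homogeneous tube ($\mu_{\rm in}=\mu_{\rm out}$) the same routine reduces to the classical thick-wall result, and $P$ simply scales with the modulus, in line with the remark in the Introduction.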
Similarly, the resultant axial force $N$ acting on any cross-section also depends on $(\lambda_a,\lambda_z)$ and takes the form\n\begin{align}\nN=2\pi\int_a^b\sigma_{zz}r\textrm{d}r-P\pi a^2=\pi A^2(\lambda_a^2\lambda_z-1)\int_{\lambda_b}^{\lambda_a}\frac{2\lambda_z w_{,2}-\lambda w_{,1}}{(\lambda^2\lambda_z-1)^2}\lambda \textrm{d}\lambda.\label{eq2_15}\n\end{align}\n\nIt should be mentioned that (\ref{eq2_14}) and (\ref{eq2_15}) are valid for both homogeneous and graded materials. For graded tubes, the reduced strain-energy function in these two expressions contains a radius-dependent elastic modulus. Moreover, (\ref{eq2_14}) can be seen as a special case of the corresponding result in \cite{chenwq2017}, where both the inner and outer surfaces are subjected to pressures.\n\nAt this point, the primary deformation induced by inflation has been fully characterized by the above two equations for a graded hyperelastic tube of finite thickness. In practice, there are usually two different end conditions, i.e. either the axial force $N$ or the axial length $\lambda_zL$ is fixed. The former case can be attained by attaching a dead weight at one end and leaving the other end free. Correspondingly, (\ref{eq2_14}) and (\ref{eq2_15}) provide two algebraic equations for the two unknowns $\lambda_a$ and $\lambda_z$, and solving them for a given pressure and axial force yields the solution that characterizes the primary deformation. This loading type has been used in the experimental investigations carried out by \cite{ky1990,ky1991,guo2016,wang2019, prsa2021}. In the other loading condition, the tube is first uniformly stretched and then the two ends are fastened. As a result, the axial length, or the axial stretch $\lambda_z$, is fixed. Furthermore, the axial force $N$ is not controlled and may increase, decrease or remain constant during the inflation process. 
We emphasize that this scenario corresponds to the \emph{in vivo} situation for human arteries \citep{bmm2014}. Such an experimental setup was also employed by \cite{ijms2006,ijms2008,wang2019,prsa2021}. Since $\lambda_z$ is prescribed, the internal pressure $P$ is only a function of $\lambda_a$, and the primary deformation is fully determined by equation (\ref{eq2_14}).\n\nIn an earlier study by \cite{chenwq2017}, various bifurcation behaviors were investigated based on the incremental theory for functionally graded tubes subjected to internal pressure, external pressure, and end compression, and periodic modes in the axial and hoop directions were analyzed. In this study, we consider a graded tube under the combined action of internal pressure and axial extension. In this case, the zero mode may become preferred \citep{fu2008}, resulting in localized bulging in soft cylindrical tubes. On the one hand, if either the resultant axial force $N$ or the prescribed axial stretch $\lambda_z$ is large enough, localized bulging can disappear \citep{liu2019}. On the other hand, if the axial stretch $\lambda_z$ is less than a threshold value, global buckling can replace localized bulging \citep{lin2020,prsa2021}. In this paper, we intend to investigate localized bulging (aneurysm formation) in graded tubes and to further elucidate the influence of the modulus gradient on bulge formation, growth, and propagation. To this end, we concentrate on the situation where localized bulging is preferred as the primary bifurcation.\n\nBearing in mind that the internal pressure $P$ and the resultant axial force $N$ have been expressed as functions of the stretches $\lambda_a$ and $\lambda_z$, it is convenient to resort to an explicit bifurcation condition, namely that the Jacobian of $P$ and $N$ with respect to the variables $\lambda_a$ and $\lambda_z$ vanishes. This concise bifurcation condition was first proposed by \cite{fu2016} in the framework of finite elasticity. 
Later, its applicability to localized bulging in bilayer tubes was verified numerically by FE analysis in the commercial software Abaqus \citep{liu2019,ye2019}. Recently, \cite{yu2021} offered an analytical proof of this bifurcation condition under two relatively weak assumptions: first, that the strain-energy function is isotropic, and second, that the deformations before bulge initiation and in the propagation stage are both axisymmetric. Hence, the proof also supports the validity of this bifurcation condition when applied to graded tubes. Explicitly, the bifurcation condition for localized bulging reads\n\begin{equation}\nJ(P,N)=\frac{\partial P}{\partial\lambda_a}\frac{\partial N}{\partial\lambda_z}-\frac{\partial P}{\partial \lambda_z}\frac{\partial N}{\partial\lambda_a}=0.\label{eq2_16}\n\end{equation}\n\nWith the aid of (\ref{eq2_16}), the first bifurcation points for fixed axial force and fixed axial length can be identified, respectively, by the following equations\n\begin{equation}\n\left\{\n\begin{aligned}\n&\textrm{fixed axial force (loading type I):}~~~\nJ(P,N)=0,~~\nN(\lambda_a,\lambda_z)=N_0,\\&\n\textrm{fixed axial length (loading type II):}~~~\nJ(P,N)=0,~~\n\lambda_z=\lambda_{z0},\n\end{aligned}\n\right.\label{eq2_17}\n\end{equation}\nwhere $N_0$ and $\lambda_{z0}$ are given constants. For convenience, we abbreviate loading type I (II) as LTI (LTII) from now on. Equation $(\ref{eq2_17})_1$ furnishes two critical stretch values $\lambda_a^c$ and $\lambda_z^c$ for LTI at which localized bulging initiates, and equation $(\ref{eq2_17})_2$ determines one critical stretch value $\lambda_a^c$ for LTII. 
Subsequently, the critical pressures $P_{cr}$ for both loading types can be calculated from equation (\ref{eq2_14}).\n\nNext, we suppose that the tube is composed of an incompressible Gent material in all illustrative examples, with the strain-energy function given by\n\begin{equation}\nW=-\frac{\mu(R)}{2}J_m\ln\bigg(1-\frac{\lambda_1^2+\lambda_2^2+\lambda_3^2-3}{J_m}\bigg),\label{eq2_18} \n\end{equation}\nwhere $\mu(R)$ stands for the radius-dependent shear modulus of a graded tube and $J_m$ is a material parameter representing the maximum extensibility. In particular, we set $J_m=97.2$, which is typical for rubber \citep{gent}. Note that this model captures well the bulge initiation, bulge growth and bulge propagation in finite element simulations \citep{liu2019,ye2019}, and these three distinctive stages are commonly seen in experiments \citep{wang2019}. \n\nAs mentioned earlier, we are also concerned with how different modulus gradients affect the onset of localized bulging and the bulge evolution. We emphasize that an accurate modulus distribution in a graded structure is difficult to describe. When a fabricated polydimethylsiloxane (PDMS) material is exposed to UV radiation, Young's modulus may acquire an exponential distribution, resulting in a graded PDMS structure \citep{chen2018}. Generally speaking, the modulus distribution may vary in different situations. A power law was adopted in \cite{bb2009} and a linear function was employed in \cite{chenwq2017}. In our previous study of growth-induced surface instabilities, it was found that different modulus distributions can alter the pattern transition \citep{liu2020}. Moreover, it was shown in \cite{ye2019} that a stiffer inner layer makes the structure less stable. If a sandwich tube has a stiffer or softer core while the two faces share the same shear modulus, a competition among the three layers arises, which may produce some interesting results. 
This intuitively motivates the investigation of a non-monotonic modulus distribution, which can further be used to model human arteries. We therefore employ three modulus functions as follows\n\begin{equation}\n\left\{\n\begin{aligned}\n&\textrm{linear distribution:}~\n\mu(R)=\mu_B\left[(\beta_1-1)\dfrac{R-B}{A-B}+1\right],\\&\n\textrm{exponential distribution:}~\mu(R)=\mu_B\left[(\beta_1-1)\textrm{exp}\left(\zeta\dfrac{A-R}{B-A}\right)+1\right],\\&\n\textrm{sinusoidal distribution:}~\mu(R)=\mu_B\left[(\beta_2-1)\sin\left(\dfrac{R-A}{B-A}\pi\right)+1\right],\n\end{aligned}\n\right.\label{eq2_19}\n\end{equation}\nwhere $\beta_1=\mu(A)\/\mu_B$ signifies the ratio of the shear modulus at the inner surface to that at the outer surface and $\mu_B=\mu(B)$. A ratio $\beta_1>1$ means that the inner surface is stiffer than the outer surface. In the sinusoidal distribution, we define $H=(B+A)\/2$ and let $\beta_2=\mu(H)\/\mu_B$. Moreover, the parameter $\zeta$ in $(\ref{eq2_19})_2$ is referred to as the decay rate. In fact, letting $R=B$ in $(\ref{eq2_19})_2$ yields $\mu(B)=\mu_B\left((\beta_1-1)\textrm{exp}(-\zeta)+1\right)$. In order to satisfy $\mu(B)=\mu_B$ exactly, we would need $\zeta=+\infty$. In our illustrative examples, we specify $\zeta=30$ as an approximation, such that exp$(-30)\approx 9.358\times10^{-14}$. The effect of $\zeta$ is beyond the scope of the current study. We plot these three modulus distributions in Figure \ref{fig2}.\n\n\begin{figure}[!htbp]\n\centering\includegraphics[scale=1.2]{fig2}\caption{Three typical distributions of the shear modulus $\mu$ in the radial direction. The parameters are given by $\beta_1=\beta_2=20$, $A\/B=0.8$, $\zeta=30$.}\label{fig2}\n\end{figure}\n\nIn this section, we have derived the expressions of the pressure $P$ and the resultant axial force $N$ for the primary deformation of a graded soft tube. 
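The boundary values of the three modulus profiles in (\ref{eq2_19}) can be checked directly. The minimal sketch below (plain Python; the parameter values are those of Figure \ref{fig2}) confirms that each profile attains $\mu_B$ and $\beta\mu_B$ at the intended radii, and that the exponential profile misses $\mu(B)=\mu_B$ only by the negligible term $(\beta_1-1)e^{-\zeta}$.

```python
import math

def mu_linear(R, A, B, muB, beta1):
    # (eq. 2_19)_1
    return muB*((beta1 - 1)*(R - B)/(A - B) + 1)

def mu_exponential(R, A, B, muB, beta1, zeta=30.0):
    # (eq. 2_19)_2
    return muB*((beta1 - 1)*math.exp(zeta*(A - R)/(B - A)) + 1)

def mu_sinusoidal(R, A, B, muB, beta2):
    # (eq. 2_19)_3
    return muB*((beta2 - 1)*math.sin((R - A)/(B - A)*math.pi) + 1)

# Parameter values of Figure 2: beta_1 = beta_2 = 20, A/B = 0.8, zeta = 30
A, B, muB, beta = 0.8, 1.0, 1.0, 20.0
```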
Meanwhile, a theoretical framework for determining the bulge initiation has been established for a general material model and a general modulus gradient. In the next section, a detailed parametric analysis will be carried out based on the bifurcation condition (\ref{eq2_16}).\n\n\section{Theoretical analysis of bulge initiation}\nWe have indicated that this study only focuses on the situation where the localized mode is preferred. Accordingly, the prescribed axial force $N_{0}$ and the prescribed axial stretch $\lambda_{z0}$ in (\ref{eq2_17}) should lie in a proper domain. In the succeeding calculations, we specify $N_0=0$ and $\lambda_{z0}=1.5$ and expect that the analysis also sheds light on the qualitative effects of different parameters and modulus gradients for other values of $N_0$ and $\lambda_{z0}$. To facilitate further analysis, we introduce the dimensionless quantities\n\begin{align}\nA^*=\dfrac{A}{B},~~N^*=\dfrac{N}{\mu_B B^2},~~\mu^*=\dfrac{\mu}{\mu_B},~~P^*=\dfrac{P}{\mu_B},~~w^*=\dfrac{w}{\mu_B}.\n\end{align}\nCorrespondingly, the bifurcation condition for localized bulging is rewritten as $J(P^*, N^*)=0$.\n\nIn practice, analytical expressions for the dimensionless pressure $P^*$ and the dimensionless force $N^*$ can be derived for many homogeneous material models, such as the neo-Hookean, Mooney-Rivlin, Ogden, and Gent models. Accordingly, equation (\ref{eq2_17}) contains two sets of nonlinear algebraic equations, and solving them numerically yields the critical stretch $\lambda_a^c$. However, when the shear modulus becomes position-dependent in the Gent model, explicit formulas for $P^*$ and $N^*$ may be difficult to obtain for a complex modulus distribution, for instance, an exponential one. Therefore, we need to employ a numerical integration scheme. 
To construct a general computational framework for identifying the first bifurcation point, we introduce our solution procedure first and then illustrate the bifurcation results.\n\n\subsection{Solution strategy}\nWithout loss of generality, we use the exponential modulus to exhibit the solution procedure, and all symbolic calculations are carried out in \emph{Mathematica} \citep{math2019}. With this choice, exponential functions are involved in the integrands of (\ref{eq2_14}) and (\ref{eq2_15}), so explicit antiderivatives of these two integrals are difficult to obtain. We therefore resort to the built-in command \texttt{NIntegrate} in \emph{Mathematica}. In the case of $N_0=0$, the two unknowns $\lambda_a$ and $\lambda_z$ can be identified simultaneously according to $(\ref{eq2_17})_1$. In order to develop an effective numerical scheme, we define the quantities\n\begin{equation}\n\left\{\n\begin{aligned}\n&P_1(\lambda,\lambda_a,\lambda_z)=\dfrac{w^*_{,1}}{\lambda^2\lambda_z-1},\\&\nN_1(\lambda_a,\lambda_z)=\pi(A^*)^2(\lambda^2_a\lambda_z-1),~~\nN_2(\lambda,\lambda_a,\lambda_z)=\lambda\dfrac{2\lambda_zw^*_{,2}-\lambda w^*_{,1}}{(\lambda^2\lambda_z-1)^2}.\n\end{aligned}\n\right.\label{eq3_2}\n\end{equation}\nBy use of these notations, we rewrite $P^*$ and $N^*$ as\n\begin{align}\nP^*=\int_{\lambda_b}^{\lambda_a}P_1(\lambda,\lambda_a,\lambda_z)\textrm{d}\lambda,~~N^*=N_1(\lambda_a,\lambda_z)\int_{\lambda_b}^{\lambda_a}N_2(\lambda,\lambda_a,\lambda_z)\textrm{d}\lambda.\label{eq3_3}\n\end{align}\n\nThe bifurcation condition $J(P^*,N^*)=0$ describes a curve in the $(\lambda_a,\lambda_z)$-plane. Similarly, the loading condition $N^*=0$ provides another curve in the same plane, so their intersection corresponds to the first bifurcation point where localized bulging occurs. 
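Our calculations use \emph{Mathematica}, but the quadratures in (\ref{eq3_3}) can be reproduced in any environment. The sketch below is a minimal Python stand-in for the Gent material with the exponential gradient $(\ref{eq2_19})_2$; the parameter values ($A^*=0.4$, $\beta_1=30$, $\zeta=30$, $J_m=97.2$, $\lambda_z=1.5$) are the illustrative ones used later, composite Simpson integration replaces \texttt{NIntegrate}, and the integrands are written with the denominator $\lambda^2\lambda_z-1$ so that $P^*>0$ during inflation.

```python
import math

Jm, zeta, Astar, beta1, lz = 97.2, 30.0, 0.4, 30.0, 1.5

def mu_star(R):
    # Exponential gradient (eq. 2_19)_2, normalized by mu_B (B* = 1)
    return (beta1 - 1.0)*math.exp(zeta*(Astar - R)/(1.0 - Astar)) + 1.0

def R_of(lam, lam_a):
    # Reference radius of the material point currently at hoop stretch lam
    return Astar*math.sqrt((lz*lam_a**2 - 1.0)/(lz*lam**2 - 1.0))

def gent_denom(lam):
    # Gent factor 1 - (I1 - 3)/Jm with lam_1 = 1/(lam*lam_z)
    I1 = 1.0/(lam*lz)**2 + lam**2 + lz**2
    return 1.0 - (I1 - 3.0)/Jm

def w1(lam, lam_a):   # w*_{,1} for the graded Gent material
    return mu_star(R_of(lam, lam_a))*(lam - lam**-3*lz**-2)/gent_denom(lam)

def w2(lam, lam_a):   # w*_{,2}
    return mu_star(R_of(lam, lam_a))*(lz - lz**-3*lam**-2)/gent_denom(lam)

def simpson(f, a, b, n=400):
    # Composite Simpson quadrature (n must be even)
    h = (b - a)/n
    return (f(a) + f(b)
            + sum((4 if k % 2 else 2)*f(a + k*h) for k in range(1, n)))*h/3.0

def lam_b(lam_a):     # outer hoop stretch from (eq. 2_12) with B* = 1
    return math.sqrt((1.0 - Astar**2 + Astar**2*lam_a**2*lz)/lz)

def P_star(lam_a):    # dimensionless pressure, first of (eq. 3_3)
    return simpson(lambda l: w1(l, lam_a)/(l**2*lz - 1.0),
                   lam_b(lam_a), lam_a)

def N_star(lam_a):    # dimensionless axial force, second of (eq. 3_3)
    N1 = math.pi*Astar**2*(lam_a**2*lz - 1.0)
    return N1*simpson(lambda l: l*(2.0*lz*w2(l, lam_a) - l*w1(l, lam_a))
                      /(l**2*lz - 1.0)**2, lam_b(lam_a), lam_a)
```

Sweeping $\lambda_a$ and finite-differencing these two functions then reproduces the Jacobian entries needed in (\ref{eq2_16}).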
It follows from equation (\ref{eq2_12}) that $\lambda_b$ is a function of $\lambda_a$ and $\lambda_z$. Applying the Leibniz rule for differentiation of an integral with variable limits, we arrive at \n\begin{equation}\n\left\{\n\begin{aligned}\n&\dfrac{\partial P^*}{\partial \lambda_a}=\int_{\lambda_b}^{\lambda_a}\dfrac{\partial P_1}{\partial \lambda_a}\textrm{d}\lambda+P_1(\lambda,\lambda_a,\lambda_z)\Big |_{\lambda=\lambda_a}-P_1(\lambda,\lambda_a,\lambda_z)\Big |_{\lambda=\lambda_b}\dfrac{\partial \lambda_b}{\partial \lambda_a}, \\&\n\dfrac{\partial P^*}{\partial \lambda_z}=\int_{\lambda_b}^{\lambda_a}\dfrac{\partial P_1}{\partial \lambda_z}\textrm{d}\lambda-P_1(\lambda,\lambda_a,\lambda_z)\Big |_{\lambda=\lambda_b}\dfrac{\partial \lambda_b}{\partial \lambda_z},\\&\n\dfrac{\partial N^*}{\partial \lambda_a}=\dfrac{\partial N_1}{\partial \lambda_a}\int_{\lambda_b}^{\lambda_a}N_2\textrm{d}\lambda+N_1\left(\int_{\lambda_b}^{\lambda_a}\dfrac{\partial N_2}{\partial \lambda_a}\textrm{d}\lambda+N_2(\lambda,\lambda_a,\lambda_z)\Big |_{\lambda=\lambda_a}-N_2(\lambda,\lambda_a,\lambda_z)\Big |_{\lambda=\lambda_b}\dfrac{\partial \lambda_b}{\partial \lambda_a}\right), \\&\n\dfrac{\partial N^*}{\partial \lambda_z}=\dfrac{\partial N_1}{\partial \lambda_z}\int_{\lambda_b}^{\lambda_a}N_2\textrm{d}\lambda+N_1\left(\int_{\lambda_b}^{\lambda_a}\dfrac{\partial N_2}{\partial \lambda_z}\textrm{d}\lambda-N_2(\lambda,\lambda_a,\lambda_z)\Big |_{\lambda=\lambda_b}\dfrac{\partial \lambda_b}{\partial \lambda_z}\right).\n\end{aligned}\n\right.\label{eq3_4}\n\end{equation}\nAlthough the integrals in (\ref{eq3_4}) cannot be evaluated analytically, equation (\ref{eq3_4}) still forms the theoretical foundation of our numerical scheme. 
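The Leibniz-rule expressions in (\ref{eq3_4}) can be validated against direct numerical differentiation of the integral. In the sketch below, the integrand is a deliberately simple toy function (an illustrative assumption, not the physical $P_1$), while the moving lower limit follows (\ref{eq2_12}) with $A^*=0.4$, $B^*=1$.

```python
import math

lz = 1.5

def lam_b(la):
    # Lower limit from (eq. 2_12) with A* = 0.4, B* = 1
    return math.sqrt((1 - 0.16 + 0.16*la**2*lz)/lz)

def P1(lam, la):
    # Toy integrand, smooth in both arguments (illustrative only)
    return lam**2*la + math.sin(lam*la)

def simpson(f, a, b, n=400):
    h = (b - a)/n
    return (f(a) + f(b)
            + sum((4 if k % 2 else 2)*f(a + k*h) for k in range(1, n)))*h/3

def P_star(la):
    return simpson(lambda l: P1(l, la), lam_b(la), la)

la = 1.8
h = 1e-5

# Direct central difference of the integral with respect to lam_a
direct = (P_star(la + h) - P_star(la - h))/(2*h)

# First line of (eq. 3_4): integral term + boundary terms
dP1_dla = lambda l: l**2 + l*math.cos(l*la)       # partial P1 / partial lam_a
dlamb_dla = (lam_b(la + h) - lam_b(la - h))/(2*h)
leibniz = (simpson(dP1_dla, lam_b(la), la)
           + P1(la, la) - P1(lam_b(la), la)*dlamb_dla)
```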
Once all geometric and material parameters are specified, we can numerically determine the implicit function $\lambda_z(\lambda_a)$ defined by $J(P^*, N^*)=0$. \n\nNext, we briefly summarize the solution procedure by specifying $A^*=0.4$, $\beta_1=30$, $\zeta=30$ and $J_m=97.2$. This parameter setting corresponds to a thick graded tube whose inner surface is stiffer. We first assign an initial value $\lambda_z=1.001$ and seek a solution to $J(P^*, N^*)=0$ using Newton's iteration method. In doing so, a solution pair $\{\lambda_z=1.001,\lambda_a=2.026\}$ is obtained. We then adopt a loop algorithm starting from $\lambda_z=1.001$ with a small incremental step. It is well known that the classical Newton's method for solving nonlinear algebraic equations is sensitive to the initial guess; especially for complex problems, a proper initial guess makes the solution approach more efficient. Under the assumption that the curve $J(P^*, N^*)=0$ is smooth, a good initial guess can be found in the neighborhood of the solution of the previous step. In this way, the relation between $\lambda_a$ and $\lambda_z$ can be traced out. We emphasize that this solution procedure can also be applied to the other nonlinear algebraic equation $N^*=0$, which offers a second implicit relation between $\lambda_a$ and $\lambda_z$. For simplicity, further computational details are omitted. In the case of fixed axial length, the value $\lambda_z=1.5$ supplies a horizontal line in the $(\lambda_a,\lambda_z)$-plane. A graphical interpretation of the solution strategy is as follows: determination of the first bifurcation point is equivalent to identifying the intersection of $N^*=0$ and $J(P^*,N^*)=0$, or the counterpart of $\lambda_z=1.5$ and $J(P^*,N^*)=0$, as shown in Figure \ref{fig3}. 
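The warm-start continuation just described can be sketched on a toy curve. Here $g(\lambda_a,\lambda_z)=\lambda_a^2\lambda_z-4=0$ is an illustrative stand-in for $J(P^*,N^*)=0$ (its exact solution $\lambda_a=2/\sqrt{\lambda_z}$ makes the sketch checkable); the structure of the loop is the same as in our actual computations.

```python
import math

def newton(f, x0, tol=1e-12, max_iter=50):
    # 1-D Newton iteration with a central-difference derivative
    x = x0
    for _ in range(max_iter):
        h = 1e-7
        dfx = (f(x + h) - f(x - h))/(2*h)
        x_new = x - f(x)/dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def g(lam_a, lam_z):
    # Toy bifurcation-style curve: g = 0 defines lam_a(lam_z)
    return lam_a**2*lam_z - 4.0

# Continuation in lam_z: warm-start each solve from the previous solution
lam_z_values = [1.001 + 0.01*k for k in range(50)]
branch = []
guess = 2.0                       # initial guess for the first step
for lzv in lam_z_values:
    guess = newton(lambda x: g(x, lzv), guess)
    branch.append((lzv, guess))
```

Because consecutive solutions lie close together on a smooth curve, each Newton solve converges in a few iterations.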
For the prescribed parameters given earlier, we find that $\lambda_z^c=1.182$ and $\lambda_a^c=1.932$ if $N^*=0$, while $\lambda_a^c=1.852$ for $\lambda_z=1.5$.\n\nSo far, we have described the solution strategy for identifying the bulge initiation in graded soft tubes of arbitrary thickness. The framework is based on a numerical scheme and, although we only employ the Gent model, it also supplies a calculation platform for a general material model and a generic modulus distribution. In the following analysis, we shall apply the solution procedure to the three modulus functions in (\ref{eq2_19}), aiming to elucidate the influence of the modulus gradient, as well as the position where the maximum modulus is attained, on the bulge initiation.\n \n\begin{figure}[!htbp]\n\centering\n\subfigure[The resultant axial force is fixed.]{\includegraphics[scale=0.78]{fig3a}{\label{fig3a}}}\hspace{5mm}\n\subfigure[The axial length is fixed.]{\includegraphics[scale=0.78]{fig3b}{\label{fig3b}}}\n\caption{Contour plots of the bifurcation condition and the corresponding loading curves ($N^*=0$ or $\lambda_z=1.5$) when the exponential modulus gradient $(\ref{eq2_19})_2$ is used, and the parameters are given by $A^*=0.4$, $\beta_1=30$, $\zeta=30$ and $J_m=97.2$. The intersections indicate the first bifurcation point resulting in localized bulging.}\label{fig3}\n\end{figure}\n\n\subsection{LTI: Fixed axial force}\nFirst of all, we consider the case of fixed axial force. The aim is to unravel the effect of the different geometrical and material parameters and of the various modulus gradients on the bulge initiation. There are at most four free parameters in the graded Gent model, namely the dimensionless inner radius $A^*$, the modulus ratio $\beta_1$ or $\beta_2$, the extensibility parameter $J_m$, and the decay rate $\zeta$ in $(\ref{eq2_19})_2$. 
As introduced before, the last two parameters are fixed at $J_m=97.2$ and $\zeta=30$ in all illustrative examples, and their influence is beyond the scope of this study. As a result, there are two independent parameters, $A^*$ and $\beta_1$ (or $\beta_2$): the former measures the thickness of the graded tube while the latter quantifies the maximum modulus variation. In addition to these parameters, another important factor is the modulus distribution. Therefore, we shall assign a representative value to $\beta_1$ or $\beta_2$ and exhibit the results for different inner radii, and then repeat this procedure with $A^*$ as the fixed parameter.\n\nApplying the solution strategy outlined in the preceding subsection, we show the dependence of the critical stretch $\lambda_a^c$ on the dimensionless inner radius $A^*$ in Figure \ref{fig4}. The modulus ratios are set to $5$ in Figure \ref{fig4a} and $30$ in Figure \ref{fig4b}, respectively. It can be seen that the critical stretch $\lambda_a^c$ always decreases monotonically with increasing $A^*$. Note that varying $A^*$ is equivalent to altering the thickness of the tube. This implies that localized bulging invariably occurs more easily in a thinner tube, regardless of the modulus distribution. The curves for the sinusoidal function are always higher, leading to a greater critical stretch for a given $A^*$ and hence a more stable structure compared to the other two modulus distributions. We observe that the curves for the sinusoidal function in both figures are almost identical. It is therefore concluded that the modulus ratio $\beta_2$ has a weak influence on the critical bifurcation stretch $\lambda_a^c$. Furthermore, the dashed line is higher than the solid one in Figure \ref{fig4a} but becomes lower in Figure \ref{fig4b}. 
This indicates that a graded tube with the shear modulus decaying exponentially from the inner surface is more stable than the counterpart where the shear modulus decays linearly when $\beta_1=5$; however, if $\beta_1=30$, the reverse holds. Ultimately, as $A^*$ approaches unity, the deviations among all curves almost vanish, so the modulus gradient can be ignored in extremely thin tubes.\n\n\begin{figure}[!htbp]\n\centering\n\subfigure[$\beta_1=\beta_2=5$.]{\includegraphics[scale=0.78]{fig4a}{\label{fig4a}}}\hspace{5mm}\n\subfigure[$\beta_1=\beta_2=30$.]{\includegraphics[scale=0.78]{fig4b}{\label{fig4b}}}\n\caption{The critical stretch $\lambda_a^c$ versus the dimensionless inner radius $A^*$ when the three modulus functions in (\ref{eq2_19}) are adopted. The resultant axial force is given by $N^*=0$, and the other parameters are given by $\zeta=30$ and $J_m=97.2$.}\label{fig4}\n\end{figure}\n\nThe above analysis reveals the complexity of the influence of the modulus gradient on the bifurcation threshold. In the following, the modulus ratios $\beta_1$ and $\beta_2$ are varied to explore the impact of both the modulus ratio and the modulus distribution. Figure \ref{fig5} displays the relations between $\lambda_a^c$ and the modulus ratio for the three modulus gradients with $A^*=0.7$; the right panel highlights the detail of the left one when the modulus ratio is around unity. As seen from Figure \ref{fig5}, the critical stretch approaches a limit value for all modulus functions as $\beta_1$ or $\beta_2$ increases. If the shear modulus has a sinusoidal distribution, the bulge initiation is nearly independent of the modulus ratio $\beta_2$, especially when $\beta_2>1$. Yet the critical stretch is always a decreasing function for the other two modulus distributions. 
Specifically, if the shear modulus grows exponentially from the inner surface ($\beta_1<1$), the bifurcation threshold varies only in a very small range compared to the linearly increasing counterpart, and a linear distribution always delays the occurrence of localized bulging. Nevertheless, when $\beta_1>1$ and the linear function $(\ref{eq2_19})_1$ is selected, the bifurcation threshold decreases rapidly and becomes practically insensitive to the variation of $\beta_1$ as $\beta_1$ approaches 30. On the one hand, it is observed from Figure \ref{fig5} that the three curves connect at $\beta_1=\beta_2=1$, where the graded tube reduces to a homogeneous one. On the other hand, the solid line and the dashed line intersect at another point, whose horizontal coordinate we denote by $\beta_c$; we find $\beta_c=12.78$ for $A^*=0.7$. Accordingly, the exponential distribution retards the bifurcation for $1<\beta_1<\beta_c$ while promoting the bulge initiation for $\beta_1>\beta_c$. In other words, compared to a linearly decaying shear modulus from the inner surface, an exponential counterpart gives rise to a more stable structure when $\beta_1$ is lower than $\beta_c$ and a less stable structure otherwise. \n\n\begin{figure}[!htbp]\n\centering\includegraphics[scale=0.78]{fig5}\caption{The critical stretch $\lambda_a^c$ versus the ratio of modulus when the three modulus functions in (\ref{eq2_19}) are adopted. (a) The modulus ratio ranges from 0.01 to 101 and (b) the modulus ratio ranges from 0.01 to 1. The right figure is a blowup of the left one for small modulus ratios. 
The resultant axial force is given by $N^*=0$, and the other parameters are given by $A^*=0.7$, $\zeta=30$ and $J_m=97.2$.}\label{fig5}\n\end{figure}\n\nTo verify that the main characteristics in Figure \ref{fig5} are not tied to the special choice $A^*=0.7$, we further present the dependence of $\lambda^c_a$ on the modulus ratio for the three modulus functions in Figure \ref{fig6}, now considering a thicker tube with $A^*=0.5$. Likewise, the right panel exhibits a blowup when the modulus ratio is around unity. Figure \ref{fig6} differs from Figure \ref{fig5} only in the range of the vertical axis; apart from this small difference, the two figures are qualitatively the same. A similar conclusion can thus be drawn, and we identify $\beta_c=11.22$ for $A^*=0.5$. \n\nIn summary, if all other conditions remain identical and only the thickness is varied, a greater thickness always delays bulge formation in a pressurized graded tube. However, the effect of the modulus gradient is quite distinctive. Although we only select three prototypical modulus functions, as shown in (\ref{eq2_19}), the bifurcation threshold versus the modulus ratio exhibits various tendencies. When the shear modulus either grows or declines monotonically from the inner surface, the critical stretch $\lambda_a^c$ is a monotonically decreasing function of $\beta_1$ and attains a limit value for sufficiently large $\beta_1$. This implies that the bifurcation threshold remains practically unaffected when the modulus mismatch between the inner and outer surfaces is fairly high. Furthermore, these two curves intersect at $\beta_1=1$ and at $\beta_1=\beta_c>1$, leading to a transition zone $1<\beta_1<\beta_c$ where a graded tube with an exponentially distributed shear modulus is more stable. Yet a non-monotonic sinusoidal distribution of the shear modulus leads to another situation. 
The curve of $\\lambda_a^c$ versus $\\beta_c$ is slowly increasing if $\\beta_2<1$ and becomes approximately a horizontal line once $\\beta_2$ passes the value of unity. Specifically, the critical stretch $\\lambda_a^c$ is practically identical to that of the homogeneous counterpart. As a result, this special graded tube has similar deformation behavior to its homogeneous analogue. We mention that a graded tube with the shear modulus distributed sinusoidally can be viewed as an effectively continuous simulacrum of a sandwich tube. Thus, we note that a sandwich tube with a hard internal core can resist the internal pressure inducing aneurysm formation without influence on the critical stretch. From the viewpoint of a structure optimization, such a composition reduces material cost but increases the critical pressure giving rise to aneurysm formation. Bearing in mind that human arteries are composed of a sandwich structure where the core (intermediate layer) has the largest elastic modulus \\citep{je2000,jctr2012}, our theoretical prediction agrees well with the natural evolution and optimization of human arteries.\n\n\\begin{figure}[!htbp]\n\\centering\\includegraphics[scale=0.78]{fig6}\\caption{The critical stretch $\\lambda_a^c$ versus the ratio of modulus when the three modulus functions in (\\ref{eq2_19}) are adopted. (a) The modulus ratio ranges from 0.01 to 101 and (b) the modulus ratio ranges from 0.01 to 1. The right figure is a blowup of the right figure when the modulus ratio is small. The resultant axial force is given by $N^*=0$, and the other parameters are given by $A^*=0.5$, $\\zeta=30$ and $J_m=97,2$.}\\label{fig6}\n\\end{figure}\n\n\n\\subsection{LTII: Fixed axial length}\nSecondly, we address the other commonly adopted loading condition where the axial length is prescribed by stretching a rubber tube and then fixing two ends. 
A noteworthy feature of LTII is that the curve of pressure against the stretch $\lambda_a$ or the volume ratio $\lambda_a^2\lambda_z$ may lose its N-shape and may even become a monotonically increasing function. Consequently, the limit-point theory fails to identify the bifurcation point, and we resort to the bifurcation condition derived by \cite{fu2016}. \n\nPrevious studies of localized bulging in inflated bilayer tubes have indicated that the effect of material and geometrical parameters on the bifurcation threshold for localized bulging is essentially the same for both loading types \citep{liu2019,ye2019}. Therefore, we anticipate that bifurcation results similar to those in LTI will be observed here. To this end, we specify the modulus ratio and plot the dependence of $\lambda_a^c$ on the dimensionless inner radius $A^*$ in Figure \ref{fig7}, while Figure \ref{fig8} displays $\lambda_a^c$ against the modulus ratio $\beta_1$ or $\beta_2$. As explained earlier, the axial stretch is given by $\lambda_z=1.5$ in all illustrative examples. For comparison, Figure \ref{fig7a} shows the outcomes for the three modulus functions when $\beta_1=\beta_2=5$, while Figure \ref{fig7b} exhibits the counterparts when $\beta_1=\beta_2=30$. As expected, the curve for the sinusoidal function is the highest. In addition, the curve for the linear function is lower than that for the exponential function when $\beta_1=5$ but becomes higher when $\beta_1$ is increased to 30. These features are consistent with those in Figure \ref{fig4}, so we omit more detailed descriptions.\n\nNext we explore the influence of the modulus ratio, depicted in Figure \ref{fig8}. It can be seen that the tendencies and shapes of all curves are consistent with those in Figures \ref{fig5} and \ref{fig6}. Similarly, we determine the second intersection of the solid line and the dashed line, obtaining $\beta_c=13.01$. 
\n\\begin{figure}[!htbp]\n\\centering\n\\subfigure[$\\beta_1=\\beta_2=5$.]{\\includegraphics[scale=0.78]{fig7a}{\\label{fig7a}}}\\hspace{5mm}\n\\subfigure[$\\beta_1=\\beta_2=30$.]{\\includegraphics[scale=0.78]{fig7b}{\\label{fig7b}}}\n\\caption{The critical stretch $\\lambda_a^c$ versus the dimensionless inner radius $A^*$ when the three modulus functions in (\\ref{eq2_19}) are adopted. The axial stretch is given by $\\lambda_z=1.5$, and the other parameters are given by $\\zeta=30$ and $J_m=97,2$.}\\label{fig7}\n\\end{figure}\n\n\\begin{figure}[!htbp]\n\\centering\\includegraphics[scale=0.78]{fig8}\\caption{The critical stretch $\\lambda_a^c$ versus the ratio of modulus when the three modulus functions in (\\ref{eq2_19}) are adopted. (a) The modulus ratio ranges from 0.01 to 101 and (b) the modulus ratio ranges from 0.01 to 1. The right figure is a blowup of the right figure when the modulus ratio is small. The axial stretch is given by $\\lambda_z=1.5$, and the other parameters are given by $A^*=0.7$, $\\zeta=30$ and $J_m=97,2$.}\\label{fig8}\n\\end{figure}\n\nAt present, the influence of the tube thickness, the shear modulus ratio, and the modulus distribution on bulge initiation is revealed by the use of the bifurcation condition $J(P^*, N^*)=0$ and based on the solution procedure established at the beginning of this section. We emphasize that in practical applications not only the bulge formation but also the bulge evolution is of great interest because material rapture often occurs in the growth or propagation stage. However, theoretically characterizing the solution path for bulge growth is extremely difficult. Only weakly nonlinear analysis can be carried out for thick tubes in the framework of finite elasticity \\citep{ye2020}. Unfortunately, the information of weakly nonlinear behavior has limited guidance in the bulge growth stage since the bifurcation nature of localized bulging is subcritical, and experimentally observed amplitude is always finite. 
To acquire more knowledge of bulge growth in inflated graded tubes, we shall establish a finite element model in the commercial software Abaqus in the next section.\n\n\section{Finite element model}\nIn this section, we construct a finite element (FE) model in Abaqus to conduct a post-buckling analysis of localized bulging in a pressurized graded tube. Although the incompressible Gent model is not built into the software, one can write a UHYPER subroutine following the user guide \citep{abaqus}. In our previous studies \citep{liu2019,ye2019}, the homogeneous Gent model was implemented in Abaqus via a UHYPER subroutine. Another important issue is modeling the modulus gradient. In practice, we can discretize the tube into many sub-tubes in the thickness direction, each composed of a homogeneous Gent material whose shear modulus is determined from the given modulus function. This is equivalent to approximating a continuous modulus function by a staircase one; as the mesh size tends to zero, the deformation behavior of the layered tube coincides with that of the original graded one. Based on this idea, pattern transitions induced by volumetric growth in graded structures were investigated using FE analysis in \cite{liu2020}. A more convenient alternative is to make use of the linearly temperature-dependent elastic modulus option in Abaqus \citep{prsa2014,chen2018}. In this approach, the temperature distribution is made to conform exactly to the modulus distribution, and the thermal expansion coefficient must be set to zero. \n\nIn principle, both approaches can be applied in Abaqus to simulate the deformation and instability of a graded tube under the combined action of internal pressure and axial extension. 
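The sub-tube discretization amounts to a staircase approximation of $\mu(R)$. The minimal sketch below (exponential gradient with illustrative parameters matching Figure \ref{fig2}) shows the idea and confirms that the approximation error shrinks as the number of layers grows; it is a conceptual stand-in, not part of the actual Abaqus workflow.

```python
import math

def mu_exp(R, A=0.8, B=1.0, muB=1.0, beta1=20.0, zeta=30.0):
    # Exponential gradient (eq. 2_19)_2 with illustrative parameters
    return muB*((beta1 - 1)*math.exp(zeta*(A - R)/(B - A)) + 1)

def staircase(n, A=0.8, B=1.0):
    # Split [A, B] into n sub-tubes; each takes the modulus at its mid-radius
    edges = [A + (B - A)*k/n for k in range(n + 1)]
    return [(edges[k], edges[k + 1], mu_exp(0.5*(edges[k] + edges[k + 1])))
            for k in range(n)]

def max_error(n, samples=2000, A=0.8, B=1.0):
    # Largest deviation of the staircase modulus from the continuous profile
    layers = staircase(n, A, B)
    err = 0.0
    for i in range(samples):
        R = A + (B - A)*i/(samples - 1)
        for lo, hi, mu in layers:
            if lo <= R <= hi:
                err = max(err, abs(mu - mu_exp(R)))
                break
    return err
```

Because the exponential profile varies fastest near the inner surface, refinement pays off most there; a graded (non-uniform) layering would converge even faster.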
Nevertheless, it should be pointed out that the former method may need a finer mesh to guarantee element quality and thus incurs additional computational cost. We therefore employ the latter method and write the UHYPER subroutine in Fortran for the graded Gent model by creating a connection between the shear modulus and the temperature field. Ultimately, a graded Gent model with an arbitrary modulus function is established in Abaqus, which allows us to carry out FE simulations of localized bulging.\n\nBefore proceeding further, we shall check the validity of the FE model. For that purpose, we calculate the bifurcation threshold using FE analysis and compare the results with the corresponding theoretical predictions. To eliminate the end effects of a graded tube with finite length, we construct all FE models by specifying the length-to-diameter ratio as 30, or equivalently $L\/B=60$. Furthermore, the built-in command ``Buckle\" cannot be applied to solve the eigenvalue problem when the critical mode is zero, so we utilize the module ``Static, Riks\" to perform a fully nonlinear analysis. \n\nTo trigger localized bulging, certain geometrical or physical imperfections need to be introduced into the FE model. In the case of fixed axial force, we refer to the practical end conditions in an inflation experiment where both ends are fastened and the radial movement is confined \\citep{wang2019}. Accordingly, in the FE model the radial displacements at the two ends are restricted, and this can be regarded as a geometrical imperfection. For fixed axial length, the shear modulus at the center $Z=0$ is assigned the value $\\mu(R)-0.001$ if $\\beta_1$ (or $\\beta_2$) is less than 30; otherwise a greater reduction, $\\mu(R)-0.01$, is used. 
In addition, a quadratic 3D hybrid element with reduced integration (C3D20RH) is employed in all subsequent FE simulations.\n\nFor convenience, the FE model is validated by taking the linear modulus function $(\\ref{eq2_19})_1$ as an example, and all comparisons are summarized in Figure \\ref{fig9}. The theoretical solutions are denoted by solid lines while the FE results are represented by red dots. The resultant axial force is specified by $N^*=0$ in Figures \\ref{fig9a} and \\ref{fig9b}, and the axial stretch is fixed by $\\lambda_z=1.5$ in Figures \\ref{fig9c} and \\ref{fig9d}, respectively. It is found that the critical stretch based on FEA agrees well with that from the theoretical model. This confirms the credibility and robustness of the established FE model and of the UHYPER subroutine for the graded Gent model.\n\n\\begin{figure}[!htbp]\n\\centering\n\\subfigure[The modulus ratio is given by $\\beta_1=30$.]{\\includegraphics[scale=0.78]{fig9a}{\\label{fig9a}}}\\hspace{5mm}\n\\subfigure[The inner radius is given by $A^*=0.7$.]{\\includegraphics[scale=0.78]{fig9b}{\\label{fig9b}}}\n\\subfigure[The modulus ratio is given by $\\beta_1=30$.]{\\includegraphics[scale=0.78]{fig9c}{\\label{fig9c}}}\\hspace{5mm}\n\\subfigure[The inner radius is given by $A^*=0.7$.]{\\includegraphics[scale=0.78]{fig9d}{\\label{fig9d}}}\n\\caption{(Color online) Comparisons of the critical stretch $\\lambda_a^c$ between the theoretical predictions and the FE results when the shear modulus varies linearly. The resultant axial force is fixed by $N^*=0$ in the top subfigures while the axial stretch is given by $\\lambda_z=1.5$ in the bottom subfigures. The red dots denote the FE solutions and the solid lines correspond to the theoretical ones.}\\label{fig9}\n\\end{figure}\n\nNow, a robust FE model for simulating localized bulging in pressurized graded tubes has been formulated. In particular, the modulus gradient in the FE model can be arbitrary. 
So the FE model paves the way for a fully nonlinear analysis of bulge evolution. In the next section, both bulge growth and bulge propagation will be investigated for a graded tube.\n\n\\section{Bulge growth and propagation}\nIt is known that a complete process of localized bulging in pressurized tubes comprises three typical stages: bulge initiation, bulge growth, and bulge propagation. When the internal pressure passes a critical value, a bulge profile appears. With increased pressure, the diameter of the bulge grows until it reaches a maximum. We therefore define a parameter $\\lambda_a^m$ denoting the largest hoop stretch at the inner surface, so that $\\lambda_a^m A^*$ gives the maximum inner radius of the bulge. Afterwards, the growing bulge ceases to expand in the radial direction and only dilates in the axial direction at constant pressure, and the instability enters the propagation stage. We emphasize that an ideal propagation stage only exists in an infinitely long tube, so the maximum radius cannot be exactly reached in a tube of finite length.\n\nIn many actual applications, such as soft robots and flexible actuation, a bulge profile is harmful to the normal function of these devices, so the parametric analysis of the onset of localized bulging can supply useful insight into bulge suppression. If localized bulging has emerged, material rupture may appear either in the growth stage or during bulge propagation. In practice, the rupture risk is likely related to the size of a bulge or the magnitude of the circumferential stretch. For clinicians, the diameter of an aneurysm can be used to evaluate the danger of aneurysm rupture \\citep{jb2012}. It is therefore of great significance to estimate the final size a bulge can attain. 
As noted earlier, human arteries are composed of three layers in which the intermediate layer has the highest elastic modulus, and we can approximately take a non-monotonic modulus distribution to qualitatively elucidate the mechanism behind artery evolution. Hence, this section aims to reveal the effect of the modulus ratio and the material gradient on the maximum magnitude of the bulge and to provide further insight into how a normal artery is constructed to resist aneurysm formation while keeping proper elasticity.\n\n\\begin{figure}[!htbp]\n\\centering\\includegraphics[scale=1]{fig10}\\caption{(Color online) The dependence of pressure $P^*$ on volume ratio $v$ for the primary deformation subjected to inflation of a graded tube where the sinusoidal distribution $(\\ref{eq2_19})_3$ is applied. The resultant axial force is fixed by $N^*=0$ and the parameters are given by $A^*=0.7$ and $\\beta_2=30$.}\\label{fig10}\n\\end{figure}\n\nIt is pointed out that the propagation stage is a two-phase deformation and is composed of two uniform states joined by a transition zone. When the resultant axial force is fixed, the curve of pressure versus $v$ or $\\lambda_a$ presents an $N$-shape in which the first stationary point corresponds to the onset of localized bulging. Here $v=\\lambda_a^2\\lambda_z$ denotes the ratio of the volume enclosed by the inflated tube to that of the undeformed one. In this scenario, the pressure under which a bulge starts to propagate can be determined according to Maxwell's equal-area rule \\citep{jam1984}. To clearly illustrate this methodology, we consider a graded tube with sinusoidally varying shear modulus subjected to inflation, with the resultant axial force fixed by $N^*=0$. Figure \\ref{fig10} sketches the relation between the dimensionless pressure $P^*$ and the volume ratio $v$ when the dimensionless inner radius $A^*$ is set to 0.7. The pressure $P_k^*$ marked in the figure indicates the onset of bulge propagation. 
Maxwell's equal-area rule yields\n\\begin{align}\nP^*_k(v_1-v_0)=\\int_{v_0}^{v_1}P^*(v)\\textrm{d}v, \\label{eq5_1}\n\\end{align}\nwhere $v_0$ corresponds to a state before localized bulging and $v_1$ to the counterpart after the bulge has propagated to both ends. Referring to the solution strategy outlined in Section 3.1, we can also program in \\textit{Mathematica} to identify the exact values of $P_k^*$, $v_0$ and $v_1$; the technical details are omitted for brevity. For instance, for the parameters used in plotting Figure \\ref{fig10}, we obtain $P_k^*=3.318$, $v_0=1.575$ and $v_1=192.87$. However, when the axial length is specified, the curve of pressure versus $v$ or $\\lambda_a$ may become monotonic, so Maxwell's equal-area rule is no longer valid. Nevertheless, it is naturally expected that the principal conclusions for fixed axial force offer qualitative guidance for other loading conditions.\n\n\\begin{figure}[!htbp]\n\\centering\\includegraphics[scale=1]{fig11}\\caption{Dependence of $\\lambda_a^m$ on $\\beta_1$ when the linear and exponential functions are employed, or on $\\beta_2$ when the sinusoidal function is applied. The resultant axial force is given by $N^*=0$ and the dimensionless inner radius is prescribed as $A^*=0.7$.}\\label{fig11}\n\\end{figure}\n \nIn the subsequent analysis, we shall elucidate the influence of the modulus ratio as well as the material gradient on the maximum circumferential stretch $\\lambda_a^m$ based on equation (\\ref{eq5_1}). Similar to the parametric analysis for the onset of localized bulging, we plot in Figure \\ref{fig11} the curves for the three modulus functions in (\\ref{eq2_19}) when the modulus ratio $\\beta_1$ or $\\beta_2$ ranges from 1 to 101 and the dimensionless inner radius is given by $A^*=0.7$. This implies that the shear modulus of the inner (or middle) surface is equal to or larger than that of the outer surface. 
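To make the equal-area construction in (\\ref{eq5_1}) concrete, the sketch below locates $P_k^*$, $v_0$ and $v_1$ for a toy $N$-shaped pressure function by bisection; the cubic and its coefficients are purely illustrative and are not the tube's actual $P^*(v)$.

```python
import numpy as np

# Toy N-shaped pressure-volume curve (illustrative only, not the paper's P*(v)):
# local maximum at v = 2 and local minimum at v = 4.
P = np.poly1d([1.0, -9.0, 24.0, 0.0])        # P(v) = v^3 - 9 v^2 + 24 v
P_int = P.integ()                            # antiderivative for the area term

def area_mismatch(Pk):
    """Signed violation of Maxwell's rule Pk*(v1 - v0) = integral of P dv,
    where v0 < v1 are the outer roots of P(v) = Pk."""
    roots = np.sort((P - Pk).roots.real)
    v0, v1 = roots[0], roots[-1]
    return (P_int(v1) - P_int(v0)) - Pk * (v1 - v0)

# Bisect between the local-minimum and local-maximum pressures P(4) and P(2).
lo, hi = P(4.0), P(2.0)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if area_mismatch(mid) > 0:               # upper lobe too big: raise Pk
        lo = mid
    else:
        hi = mid
Pk = 0.5 * (lo + hi)
v0, v1 = np.sort((P - Pk).roots.real)[[0, -1]]
print(f"propagation pressure Pk = {Pk:.4f}, v0 = {v0:.4f}, v1 = {v1:.4f}")
# For this cubic, symmetric about its inflection point, the exact values are
# Pk = 18, v0 = 3 - sqrt(3), v1 = 3 + sqrt(3).
```

The same bisection-on-$P_k^*$ idea underlies a root-finding implementation of (\\ref{eq5_1}) for any $N$-shaped curve with the first stationary point marking bulge onset.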
Remarkably, the trend of each curve is identical to that in Figures \\ref{fig5} and \\ref{fig6}. Similarly, the curve for the sinusoidal distribution lies highest, indicating that a localized bulge can attain a greater radius in this case than for a linearly or an exponentially decaying modulus distribution if the peak value of the shear modulus and the thickness remain the same. Meanwhile, the curve for the sinusoidal distribution increases slightly from $\\beta_2=1$ and becomes virtually flat once $\\beta_2$ is around 10. Consequently, such a special non-monotonic modulus gradient brings a negligible change in the final deformed state. Note that these curves intersect at $\\beta_1=\\beta_2=1$, where the homogeneous counterpart is recovered. We now focus on the linear and exponential distributions, whose only difference is the modulus gradient. It can be seen that both curves are monotonically decreasing for $\\beta_1>1$. In particular, within the domain $1<\\beta_1<\\beta_s$, where $\\beta_s=8.67$, a pressurized graded tube with an exponentially distributed shear modulus can finally undergo a larger bulge than the linear counterpart if the axial force is specified. On the other hand, with increasing $\\beta_1$, the dependence of the final magnitude of a bulge on the modulus ratio becomes weaker and weaker until the modulus ratio has no influence.\n\n\\begin{figure}[!htbp]\n\\centering\\includegraphics[scale=1]{fig12}\\caption{(Color online) Graphical interpretation of the scaled amplitude $\\lambda_a(0)-\\lambda_\\infty$. The white horizontal line marks the axis of the tube.}\\label{fig12}\n\\end{figure}\n\nSo far, we have performed a theoretical analysis of the limiting magnitude of the bulge in a graded tube subjected to combined fixed axial force and internal pressure in light of Maxwell's equal-area rule. 
An interesting discovery is that a sinusoidal modulus function practically does not alter the final size of a bulge compared to the homogeneous counterpart. Bearing in mind that a similar situation takes place for bulge initiation, it is natural to ask what happens in the bulge growth stage for a homogeneous tube and for a graded tube whose shear modulus is distributed sinusoidally. To this end, we apply the FE model established in the previous section and carry out a fully nonlinear analysis. First, we define the amplitude of localized bulging. As illustrated in Figure \\ref{fig12}, the hoop stretch depends not only on the radial coordinate $R$ but also on the axial coordinate $Z$ after bulge initiation. In a perfect system, namely an infinitely long tube, the bulge solution possesses translational invariance. So we can assume that localized bulging, associated with the greatest diameter of the bulge, always appears at the center of the tube $Z=0$. Then the dimensionless radius at the inner surface at $Z=0$ is written as $\\lambda_a(0)A^*$. Furthermore, we denote the corresponding radius in the uniformly inflated region by $\\lambda_\\infty A^*$, where $\\lambda_\\infty$ stands for the hoop stretch at infinity. In a uniform inflation process, $\\lambda_a$ is independent of $Z$, and hence $\\lambda_a=\\lambda_\\infty$. In an illustrative FE example, we identify $\\lambda_\\infty$ in the non-bulging region, as shown in Figure \\ref{fig12}. The scaled amplitude of a bulge is then given by $\\lambda_a(0)-\\lambda_\\infty$. \n\nOn specifying $A^*=0.7$, we show the bifurcation diagrams for a homogeneous tube and a graded one in Figure \\ref{fig13} using FE analysis. The modulus ratio for the graded tube is given by $\\beta_2=30$, while $\\beta_2$ reduces to unity for the homogeneous tube. 
In particular, Figure \\ref{fig13a} displays the results when the resultant axial force is prescribed while Figure \\ref{fig13b} plots the counterparts when the axial length is fixed. From the bifurcation diagram, it is observed that the amplitude of the bulge is trivial during uniform inflation, where $\\lambda_\\infty$ reduces to the circumferential stretch $\\lambda_a$ of the primary deformation. As the inflation continues, especially when $\\lambda_\\infty$ passes the critical value $\\lambda_a^c$, a non-trivial solution corresponding to localized bulging takes place and the scaled amplitude $\\lambda_a(0)-\\lambda_\\infty$ becomes positive. Afterwards, the amplitude increases as $\\lambda_\\infty$ decreases, implying that the existing bulge dilates in both the radial and the axial directions while the non-bulging part shrinks. Remarkably, we again find an extremely good coincidence between the deformation processes of the homogeneous tube and the graded one, no matter whether the resultant axial force $N^*$ or the axial stretch $\\lambda_z$ is fixed. Combining this with the analyses of bulge initiation and of the maximum size of a bulge, it is therefore concluded that a graded tube with sinusoidally distributed shear modulus can sustain a higher internal pressure while retaining the same deformation features. For instance, the deformation paths in Figure \\ref{fig13a} are identical, while the associated critical pressures for the graded tube and the homogeneous one are $P_{cr}^*=5.248$ and $P_{cr}^*=0.271$, respectively. 
\n\n\\begin{figure}[!htbp]\n\\centering\n\\subfigure[The resultant axial force is fixed by $N^*=0$.]{\\includegraphics[scale=0.9]{fig13a}{\\label{fig13a}}}\\hspace{5mm}\n\\subfigure[The axial length is fixed by $\\lambda_z=1.5$.]{\\includegraphics[scale=0.9]{fig13b}{\\label{fig13b}}}\n\\caption{(Color online) Comparisons of the bifurcation diagrams between a graded tube where the shear modulus function is given by $(\\ref{eq2_19})_3$ and a homogeneous counterpart. The parameters are given by $A^*=0.7$ and $\\beta_2=30$. The black solid lines represent the results for the homogeneous tube while the red dashed lines the counterparts for the graded one. The profiles corresponding to the blue dots highlighted in the figure are displayed in Figures \\ref{fig14} and \\ref{fig15}, respectively.}\\label{fig13}\n\\end{figure}\n\nWe also highlight three pairs of $\\{\\lambda_\\infty,\\lambda_a(0)-\\lambda_\\infty\\}$ by blue dots for the graded tube in Figures \\ref{fig13a} and \\ref{fig13b}, respectively. In particular, we select a state before instability and two states after bulge initiation. The corresponding coordinates $\\{\\lambda_\\infty,\\lambda_a(0)-\\lambda_\\infty\\}$ read $\\{1.4,0\\}$, $\\{1.267, 4.004\\}$ and $\\{1.244, 5.373\\}$ from the bottom to the top in Figure \\ref{fig13a} while in Figure \\ref{fig13b} the counterparts are given by $\\{1.4,0\\}$, $\\{1.209, 3.253\\}$ and $\\{1.176, 4.771\\}$. The deformed profiles for the three highlighted points are exhibited in Figure \\ref{fig14}. In the right part, we further plot the cross-sections enclosed in the dashed rectangle to offer a clear view of the stress distribution on the inner surface. The aspect ratio of length to diameter is 30. As introduced in Section 4, the radial displacements at the two ends are restricted, and this setup serves as a geometric imperfection that triggers aneurysm formation at a critical pressure. 
It can be seen that the end effect decays very fast and has no influence on the deformation around the center where localized bulging is expected to appear. As the stretch passes the critical stretch $\\lambda_a^c$, a bulge occurs at the center of the tube and starts to expand in the radial and axial directions. At the same time, the inner or outer diameter in the non-bulging area (outside the dashed rectangle in Figure \\ref{fig14}) suffers a small but continuous drop, which explains why the curve in the amplitude diagram turns back after $\\lambda_\\infty$ passes the critical stretch $\\lambda_a^c$. From the sectional view, it is also found that the stress and strain are both concentrated at the center of the bulge, which may cause potential rupture. \n\n\\begin{figure}[!htbp]\n\\centering\\includegraphics[scale=0.8]{fig14}\\caption{(Color online) Deformed profiles of the three marked points in Figure \\ref{fig13a} for the graded tube when the axial force is fixed by $N^*=0$. (a) to (c) show the entire deformations at different periods while (d) and (e) illustrate the sectional views. The values of $\\lambda_\\infty$ and $\\lambda_a(0)-\\lambda_\\infty$ are listed below the corresponding subfigures. The parameters are given by $A^*=0.7$ and $\\beta_2=30$.}\\label{fig14}\n\\end{figure}\n\nFinally, we plot in Figure \\ref{fig15} the deformed configurations of all blue points in Figure \\ref{fig13b}. The layout is similar to that in Figure \\ref{fig14}. We emphasize that an alternative physical imperfection, in which the shear modulus at the middle cross-section $Z=0$ is reduced to $\\mu(R)-0.01$, is employed in the case of fixed axial length. In doing so, the center of the tube is slightly softer than other positions. Localized bulging then occurs at the center once the internal pressure attains a critical value. Additionally, the cross-sections in the dashed rectangle are shown to clearly depict the stress distribution on the inner surface. 
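As a minimal sketch of how the scaled amplitude $\\lambda_a(0)-\\lambda_\\infty$ could be extracted from a computed inner-radius profile $r(Z)$, the snippet below uses a synthetic bulge profile rather than actual FE output; the Gaussian bump and its parameters are purely illustrative.

```python
import numpy as np

def scaled_amplitude(Z, r_inner, A_star):
    """Scaled bulge amplitude lambda_a(0) - lambda_inf from an inner-radius
    profile r_inner(Z): hoop stretch at the bulge center minus that of the
    far (uniform) region."""
    lam = r_inner / A_star                  # hoop stretch lambda_a(Z)
    lam_center = lam[np.argmin(np.abs(Z))]  # lambda_a(0) at the tube center
    lam_inf = 0.5 * (lam[0] + lam[-1])      # average over the two far ends
    return lam_center - lam_inf

# Synthetic localized-bulge profile: uniform stretch 1.25 plus a Gaussian bump.
A_star = 0.7
Z = np.linspace(-30.0, 30.0, 601)
r = A_star * (1.25 + 4.0 * np.exp(-(Z / 3.0) ** 2))
print(f"scaled amplitude = {scaled_amplitude(Z, r, A_star):.3f}")  # 4.000
```

In practice $\\lambda_\\infty$ would be read off in the non-bulging region sufficiently far from both the bulge and the constrained ends, as done in Figure \\ref{fig12}.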
\n\n\n\\begin{figure}[!htbp]\n\\centering\\includegraphics[scale=0.8]{fig15}\\caption{(Color online) Deformed profiles of the three marked points in Figure \\ref{fig13b} for the graded tube when the axial stretch is fixed by $\\lambda_z=1.5$. (a) to (c) show the entire deformations at different periods while (d) and (e) illustrate the sectional views. The values of $\\lambda_\\infty$ and $\\lambda_a(0)-\\lambda_\\infty$ are listed below the corresponding subfigures. The parameters are given by $A^*=0.7$ and $\\beta_2=30$.}\\label{fig15}\n\\end{figure}\n\n\n\\section{Concluding remarks}\nLocalized bulging or aneurysm formation in inflated graded cylindrical tubes of arbitrary thickness was investigated in detail within the framework of finite elasticity. The primary deformation, in which a tube dilates uniformly, was determined analytically to formulate the expressions for the internal pressure and the resultant axial force. Meanwhile, a concise bifurcation condition in terms of the internal pressure and the resultant axial force was applied, and a semi-analytical solution procedure was proposed to determine the onset of localized bulging. Two typical loading conditions, defined by fixed axial force and fixed axial length, were adopted throughout the paper. In particular, the case of fixed axial force has potential applications in soft pneumatic actuators \\citep{he2020,li2022} while the other end condition corresponds to the \\textit{in vivo} status of human arteries \\citep{bmm2014}. For all illustrative examples, the incompressible Gent material was employed and three archetypal modulus gradients were used, namely a linear, an exponential, and a sinusoidal function. A thorough theoretical analysis was then conducted for the critical stretch $\\lambda_a^c$ at the inner surface. Note that the structure is more stable with a higher $\\lambda_a^c$ in a volume-controlled problem. 
In this sense, a thicker tube is always less susceptible to localized bulging, regardless of the concrete form of the material gradient. Remarkably, it turns out that the critical stretch is sensitive to the specific modulus distribution applied. In particular, a sinusoidal function has almost no influence on $\\lambda_a^c$, regardless of the value of the ratio of the shear modulus at the middle position to that at the outer surface (denoted by $\\beta_2$). Furthermore, a linear or an exponential function reduces $\\lambda_a^c$ as the ratio of the shear modulus at the inner surface to that at the outer surface (denoted by $\\beta_1$) increases. However, a sufficiently large $\\beta_1$ ceases to affect the critical stretch. It is emphasized that a qualitatively similar conclusion holds for both end conditions. The current analysis of bulge initiation implies that not only the modulus mismatch but also the exact position where the maximum modulus is attained is vital to the initiation of localized bulging. Bearing in mind that an accurate measurement of the elastic modulus of human arteries is extremely difficult \\citep{prsa2019} and that we only have a general understanding that the media (intermediate layer) of an artery possesses a larger elastic modulus \\cite{je2000,jctr2012}, the current analysis concerning a sinusoidally distributed shear modulus implies that the deformation is almost insensitive to the practical value of the shear modulus at the middle surface. \n\nTo track the evolution of a bulge, a finite element model incorporating material inhomogeneity was established using Abaqus UHYPER subroutine coding. This model can be used to perform nonlinear analyses of localized bulging in inflated graded tubes with an arbitrary modulus gradient. The robustness of the established FE model was validated by comparing the bulge initiation obtained from finite element analysis with the theoretical predictions when a linear modulus gradient is applied. 
Ultimately, bulge propagation in the case of fixed axial force was studied analytically according to Maxwell's equal-area rule, and the influence of the material gradient, as well as of the modulus ratio, on the maximum magnitude of a bulge was clarified. Combining this with the analysis of the onset of localized bulging, it is found that the material gradient and the modulus ratio have the same effect on the deformed profiles at the critical state and in the propagation stage. Furthermore, we compared the bifurcation diagrams between a graded tube, in which the shear modulus decays sinusoidally from the middle surface to both lateral boundaries, and its homogeneous counterpart based on finite element simulations. Interestingly, these two bifurcation diagrams are nearly identical for both loading types. Therefore, this work provides an answer to one fundamental problem arising from structural optimization, i.e., stiffening the intermediate surface enhances the ability of a structure to resist internal pressure, while neither the critical stretch nor the deformation path varies compared with the homogeneous counterpart. This conclusion is perfectly consistent with the actual situation of human arteries. It is expected that the current analysis will supply useful insight into localization instabilities in graded structures as well as into aneurysm prevention in functional soft devices.\n\n\n\\section*{Acknowledgments}\nThe work was supported by the National Natural Science Foundation of China (Grant Nos 12072227, 12072225, and 12121002). The Abaqus simulations were carried out on TianHe-1 (A) at the National Supercomputer Center in Tianjin, China. We thank Prof. 
Yibin Fu at Keele University for valuable advice and discussion.\n\n\\section{Introduction}\n\nQuantum coherence arises from the superposition principle and is an essential property of quantum systems. Basic features of coherence were well studied in \\cite{glauber1963coherent}, where they were investigated in the context of quantum optics. However, these studies focused only on the detection of quantum coherence, and a method to estimate coherence was not available. A procedure to measure coherence within a quantum information theoretic framework was introduced by Baumgratz et al. in Ref.~\\cite{baumgratz2014quantifying}. This method is used to estimate the coherence of finite dimensional systems and led to some fundamental results in the resource theory of quantum coherence \\cite{winter2016operational,brandao2015reversible,streltsov2017colloquium,chitambar2019quantum,chitambar2015relating}. Subsequently, quantum coherence has been studied in contexts where the system is either in a non-inertial frame of reference \\cite{wang2016irreversible,he2018multipartite} or in contact with an external environment \\cite{Pati2018Banerjee,chandra2019time,radhakrishnan2019dynamics,cao2020fragility}. However, most of these investigations have been carried out on quantum systems with a finite number of degrees of freedom, such as qubits.\n\nQuantum communication requires a quantum description of the interaction and propagation of electromagnetic waves. An infinite number of degrees of freedom with a continuous spectrum is needed to describe the electromagnetic waves. Hence, from both theoretical and experimental perspectives, continuous variable (CV) states \\cite{braunstein2005quantum,adesso2014continuous,weedbrook2012gaussian,wang2007quantum} of infinite dimensional systems are very important. 
Continuous variable systems constitute an extremely powerful resource for quantum information processing. One particular class of continuous variable states is the Gaussian states \\cite{weedbrook2012gaussian,olivares2012quantum,wang2007quantum}, for which the quantum state has a representation in terms of Gaussian functions. To a first approximation, the ground and thermal states of a quantum system are Gaussian states. Similarly, there are dynamical operations that transform Gaussian states into other Gaussian states, and these are referred to as Gaussian operations. Gaussian operations are linear in nature, and any nonlinear operation can be approximated by a Gaussian operation to a satisfactory level of accuracy. For these reasons, the Gaussian states occupy a special place in the study of continuous variable systems. The first quantum resource to be studied extensively in Gaussian states was entanglement. Since the covariance matrix along with the displacement vector completely characterizes a Gaussian state, the entanglement can be quantified using a covariance matrix based approach. To measure the quantum coherence of Gaussian states, a measure based on relative entropy, using the covariance matrix and the displacement vector, was introduced in Ref.~\\cite{xu2016quantifying}. The fundamental postulates of {\\it (i)} positivity, {\\it (ii)} monotonicity and {\\it (iii)} convexity were verified for this relative entropy measure.\n\nIn most investigations on discrete and continuous variable systems, the effect of the external environment on the system is not considered. But in reality quantum systems are exposed to an environment, which acts as a bath and affects the characteristics of the system by constantly interacting with it. To understand the effects of the environment and incorporate them in the characterization of quantum resources, we take an open quantum system approach. 
Here we model\nthe environment as a quantum many-body object which is coupled to the system. Based on the strength of the coupling and the other characteristics of the system and the environment, there is a dynamical change in the quantum resource. Depending on the bath spectrum and whether the system is weakly or strongly coupled to the environment, the system can exhibit Markovian or non-Markovian dynamics \\cite{breuer2016colloquium,de2017dynamics}. A quantum resource can completely disappear and then reappear, a phenomenon known as revival. These features depend not only on the initial properties of the bath and the system but also on how they are coupled to each other. Understanding this dynamical change is essential for the fabrication of quantum devices \\cite{shnirman2003noise,astafiev2006temperature,yoshihara2006decoherence,tong2006signatures,kakuyanagi2007dephasing,seoanez2007dissipation,ribeiro2015non,vyas2020master}. It provides a clear idea of the operational time of quantum devices, i.e., the time within which the quantum operations have to be completed before the quantum resources vanish. The time evolution of entanglement in finite dimensional systems has been investigated in great detail, and unique features like sudden death of entanglement \\cite{yu2004finite,yu2006sudden,yu2009sudden} and entanglement revival \\cite{bellomo2007non,deng2021sudden,xu2010experimental,mazzola2009sudden} due to environmental back action have been observed. For continuous variable systems, investigations on the dynamics of entanglement have also been carried out \\cite{Liu2007Goan,Zhang2007hong,Paz2008dynamics}. An open quantum system study of coherence has been carried out using an atom-field interaction model \\cite{chandra2019time} and a central spin model \\cite{radhakrishnan2019dynamics}, where the dynamics of coherence and its distribution were extensively investigated for different bipartite and tripartite states. 
But both of these investigations were carried out on finite dimensional systems. For continuous variable systems, the quantum coherence dynamics has not been investigated so far. In this work we investigate the dynamics of coherence of single mode Gaussian states, {\\it viz.}, the coherent state, the squeezed state and the displaced squeezed state.\n\nThe plan of the manuscript is as follows: In Sec.~\\ref{states and coherence} we give a brief introduction to continuous variable quantum states and explain the Gaussian states in detail. We also describe the covariance matrix based coherence measure introduced in Ref.~\\cite{xu2016quantifying}. The description of a Gaussian state in contact with a non-Markovian environment is given in Sec.~\\ref{singlemodeNME}. For the single mode coherent states, the dynamics of quantum coherence is discussed in Sec.~\\ref{coherentstates}. Next we study the effect of a non-Markovian environment on squeezed states in Sec.~\\ref{squeezedstates}. An analysis of coherence in the single mode displaced squeezed state is presented in Sec.~\\ref{displacedsqueezedstate}. Finally, in Sec.~\\ref{crossovertransition} we investigate the crossover from Markovian to non-Markovian dynamics. We present our conclusions in Sec.~\\ref{conclusions}.\n\n\n\\section{Gaussian States and Quantification of coherence}\n\\label{states and coherence}\n\n\\subsection{Gaussian states}\nA continuous variable quantum system \\cite{braunstein2005quantum,adesso2014continuous} has an infinite dimensional Hilbert space in which observables have a continuous eigenspectrum. An example of a continuous-variable system is the electromagnetic field, in which the quantized radiation modes are represented by bosonic modes. The tensor product Hilbert space corresponding to these modes is $\\otimes_{k=1}^{n} H_k$, where `$n$' is the number of bosonic modes. 
For a given mode `$k$' of the bosonic field, the operators $a_k^{\\dagger}$ and $a_k$ are the corresponding creation and annihilation operators. Apart from the bosonic field operators, the continuous-variable system can also be described using the quadrature operators $\\{x_k,p_k\\}$. The quadrature field operators can be arranged in the form of a $2n$-dimensional vector ${\\bm \\xi} = \\left(x_1, p_1,\\ldots,x_n, p_n\\right)$, which contains the quadrature pairs of all $n$ modes. In terms of the bosonic field operators, the quadrature operators $\\{x_k,p_k\\}$ are expressed as\n\\begin{align*}\nx_k = (a_k + a_k^{\\dagger}), \\qquad p_k = -i (a_k - a_k^{\\dagger}),\n\\end{align*}\nand they are canonically conjugate to each other. These quadrature field operators behave similarly to the canonically conjugate position and momentum operators of a harmonic oscillator.\n\nIn the present work we study a single-mode continuous variable system. The two quadrature operators corresponding to the single-mode continuous variable system are $\\xi_1=x=(a + a^{\\dagger})$ and $\\xi_2=p=-i (a - a^{\\dagger})$, and the 2D vector ${\\bm \\xi} = \\{\\xi_{1}, \\xi_{2} \\}$. The quadrature operators satisfy the canonical commutation relations $\\left[ \\xi_i, \\xi_j\\right]=2i \\Omega_{ij}$, where $\\Omega_{ij}$ are the elements of the matrix\n\\begin{align}\n\\bm{\\Omega} = \\left(\\begin{array}{cc}\n 0 & 1\\\\\n-1 & 0\\\\\n\\end{array}\n\\right).\n\\end{align}\nIn particular, we study the coherence dynamics of Gaussian states. The Gaussian states \\cite{weedbrook2012gaussian,olivares2012quantum} are defined as the states with a Gaussian Wigner function. 
For a Gaussian state, the Wigner quasiprobability distribution is
\begin{align}
W({\bm \xi}) = \frac{1}{2\pi \sqrt{{\rm det}~\bm{V}}} \exp \left\{ - \frac{1}{2} ({\bm \xi - \bm{\overline \xi} })
\bm{V}^{-1} ({\bm \xi - \bm{\overline \xi} })^{T} \right\}.
\label{wigner}
\end{align}
A Gaussian state is completely characterized by the first and the second moments of the quadrature field operators. Here the
vector of the first moments is ${\overline{\bm \xi}} = (\langle \xi_1 \rangle, \langle \xi_2 \rangle)$ and the covariance matrix
${\bm V}$ has the elements
\begin{align}
V_{ij} = \langle \{ \Delta \xi_i, \Delta \xi_j \} \rangle = {\rm Tr} \left( \{ \Delta \xi_i, \Delta \xi_j \} \rho \right),
\end{align}
where $\Delta \xi_i=\xi_i - \langle \xi_i \rangle$ is the fluctuation operator and
$\{ \Delta \xi_i, \Delta \xi_j \} = (\Delta \xi_i \Delta \xi_j + \Delta \xi_j \Delta \xi_i )/2$. For the single mode quantum system under
consideration, the matrix elements of the covariance matrix are
\begin{eqnarray}
V_{ii} &=& \langle \xi_i^2 \rangle - \langle \xi_i \rangle^2, \\
V_{ij} &=& \frac{1}{2} \langle \xi_i \xi_j + \xi_j \xi_i \rangle - \langle \xi_i \rangle \langle \xi_j \rangle.
\label{covele}
\end{eqnarray}
To ensure the positivity of the density matrix, the covariance matrix has to satisfy the uncertainty relation \cite{simon1994quantum}
$\bm{V} + i \bm{\Omega} \ge 0$.

\subsection{Measurement of Coherence}

The quantum coherence of a quantum system is measured as follows: First we define the set of incoherent states $\mathcal{I}$,
i.e., the states with zero coherence. Next we use a suitable distance measure to find the distance between the
given density matrix and the closest incoherent state. In Ref.~\cite{baumgratz2014quantifying}, the authors used the
relative entropy measure to quantify the amount of coherence in the system.
In general, the relative entropy based distance of a quantum state from a set of reference states is
\begin{align}
\mathcal{D}(\rho) = \min_{\sigma} S(\rho \| \sigma) = \min_{\sigma} {\rm Tr} (\rho \log_{2} \rho - \rho \log_{2} \sigma),
\label{coher1}
\end{align}
where $\rho$ is the given quantum state and $\sigma$ is the reference state. Using this formulation we can find the
coherence in the system by taking the reference states to be the incoherent states. It has been proved that for the relative
entropy of coherence, the closest incoherent state is $\rho_d=\sum_n \rho _{nn} \vert n \rangle \langle n \vert$, which is the
diagonal part of the density matrix. Hence it is not necessary to perform the minimization explicitly to determine the quantum coherence.
The relative entropy measure reads:
\begin{eqnarray}
C(\rho) = S(\rho \| \rho_d) = S(\rho_{d}) - S(\rho),
\label{coher2}
\end{eqnarray}
where $S(\rho) = - {\rm Tr} (\rho \log_{2} \rho)$ is the von Neumann entropy of the system. So far, the relative entropy of coherence is the
most widely used measure. The quantum coherence of a continuous variable system was first investigated in
Ref.~\cite{zhang2016quantifying}, where the authors used a relative entropy
based quantifier of coherence by considering the density matrix $\rho$ in the Fock basis. While this is a valid way
to quantify coherence, it does not give a closed form expression for general Gaussian states. Alternatively, in
Ref.~\cite{xu2016quantifying}, the coherence measure for a one-mode Gaussian state $\rho$ was defined as
\begin{align}
C(\rho) = \inf_{\delta} S(\rho \| \delta).
\end{align}
Here $S(\rho \| \delta)$ is the relative entropy with $\delta$ being an incoherent Gaussian state,
and the infimum runs over all incoherent Gaussian states. Also, it was proved that
a one-mode Gaussian state is incoherent if and only if it is a thermal state.
Hence for a Gaussian state, the minimization
is achieved for thermal states of the form
\begin{eqnarray}
\rho_d = {\overline \rho} = \sum_{n=0}^{\infty} \frac{\overline{n}^n}{(1+\overline{n})^{n+1}} \vert n \rangle \langle n \vert,
\label{thermal}
\end{eqnarray}
where $\overline{n}={\rm Tr} [a^\dagger a {\overline \rho}]$ is the mean occupation number. For the Gaussian states, the entropy of a
given state $\rho$ can be written as
\begin{align}
S(\rho) = \frac{\nu+1}{2} \log_{2} \frac{\nu+1}{2} - \frac{\nu-1}{2} \log_{2} \frac{\nu-1}{2},
\end{align}
where $\nu = \sqrt{{\rm det} {\bm V}}$. Based on these expressions, the relative entropy based coherence measure for the
Gaussian states is
\begin{eqnarray}
C(\rho) &=& S(\rho \| {\overline \rho}) = {\rm Tr} (\rho \log_2 \rho - \rho \log_2 {\overline \rho}) \\
&=& \frac{\nu-1}{2} \log_2 \frac{\nu-1}{2} - \frac{\nu+1}{2} \log_2 \frac{\nu+1}{2} \nonumber \\
&+& (\overline{n}+1) \log_2 (\overline{n}+1) - \overline{n} \log_2 \overline{n}.
\label{coher3}
\end{eqnarray}
Hence the quantum coherence of a Gaussian state is completely determined by the determinant of the covariance matrix
and the mean occupation number $\overline{n}$ associated with the state. Note, however, that this measure can be used to compute the
coherence of Gaussian states only.
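Eq.~(\ref{coher3}) is simple to evaluate numerically. As a sanity check, the sketch below (Python; the helper name and the convention $0\log_2 0 = 0$ are ours, not from the text) computes $C(\rho)$ from $\nu$ and $\overline{n}$. For a pure state $\nu = 1$, so the $\nu$-dependent terms vanish; a coherent state with $\overline{n} = |\alpha|^2 = 1$ then gives $C = 2\log_2 2 = 2$.

```python
import numpy as np

def gaussian_coherence(nu, nbar):
    """Relative-entropy coherence of a one-mode Gaussian state,
    determined by nu = sqrt(det V) and nbar = Tr[a^dag a rho].
    (Illustrative helper, with the convention 0*log2(0) = 0.)"""
    def h(x):
        return x * np.log2(x) if x > 0 else 0.0
    s_thermal = h(nbar + 1) - h(nbar)            # S(rho_bar), thermal entropy
    s_state = h((nu + 1) / 2) - h((nu - 1) / 2)  # S(rho)
    return s_thermal - s_state

# Vacuum is thermal with nbar = 0, hence incoherent:
print(gaussian_coherence(1.0, 0.0))   # -> 0.0
# Coherent state with |alpha| = 1: nu = 1, nbar = 1:
print(gaussian_coherence(1.0, 1.0))   # -> 2.0
```

As expected from Eq.~(\ref{coher3}), the coherence grows monotonically with $\overline{n}$ for fixed purity.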
It is important to note that when the system-environment interaction Hamiltonian
is bilinear in the creation and annihilation operators, the Gaussian nature of the quantum states is preserved
during the time evolution \cite{schumaker1986quantum,olivares2012quantum}.


\section{Single mode Gaussian states in a Non-Markovian environment}
\label{singlemodeNME}
The dynamics of quantum coherence of qubit systems has been investigated both theoretically
\cite{Pati2018Banerjee,chandra2019time,radhakrishnan2019dynamics} and experimentally \cite{cao2020fragility,ding2021experimental}.
On the theoretical side, the dynamics was characterized in atomic systems and spin systems for both Markovian
and non-Markovian environments. Experimentally, in Refs.~\cite{cao2020fragility,ding2021experimental} the dynamics of
coherence, entanglement and mutual information were compared in qubit systems. In the present work we consider a
continuous variable system consisting of a single bosonic mode of frequency $\omega_{0}$ coupled to a general
non-Markovian environment \cite{xiong2010exact,zhang2012general,ali2014exact,ali2015non,xiong2015non}
at finite temperature. The single bosonic mode can be a quantum optical field, a superconducting resonator
or a nano-mechanical oscillator. A structured bosonic
reservoir with a collection of infinite modes of varying frequencies can describe a general non-Markovian environment. The entire
system comprising the single bosonic mode and the reservoir can be described by the Fano-Anderson Hamiltonian
\cite{anderson1961localized,fano1961effects}.
This Hamiltonian was introduced in the context of atomic \cite{fano1961effects}
and condensed matter physics \cite{anderson1961localized} and has been used to study several different models.
The Fano-Anderson Hamiltonian of the system and bath combination is:
\begin{align}
H = \hbar \omega_0 a^{\dagger}a + \hbar \sum_k \omega_k b_k^{\dagger}b_k
 + \hbar \sum_k \left(\mathcal{V}_k a^{\dagger} b_k + \mathcal{V}_k^{\ast} b_k^{\dagger}a \right),
\label{Tot}
\end{align}
where $\omega_{0}$ is the system frequency, $a^{\dag}$ ($a$) is the creation (annihilation) operator
corresponding to the bosonic mode (system) and $b^{\dag}_{k}$ ($b_{k}$) is the creation (annihilation) operator of
the $k^{th}$ mode of the bosonic reservoir with frequency $\omega_{k}$. The factor $\mathcal{V}_{k}$ represents
the coupling strength between the system and the environment.

The dynamics of the single mode system and the environment is solved using the Heisenberg equation of motion
approach. The time evolution of the bosonic field operator and the environment operators reads:
\begin{align}
a(t) = e^{\frac{i H t}{\hbar}} a e^{-\frac{i H t}{\hbar}}, \quad \quad b_k(t) = e^{\frac{i H t}{\hbar}} b_k e^{-\frac{i H t}{\hbar}}.
\end{align}
In the Heisenberg picture, these operators satisfy the following equations of motion,
\begin{eqnarray}
& & \frac{d}{dt} a(t) = -\frac{i}{\hbar} \left[ a(t), H \right] = -i \omega_0 a(t) - i \sum_k \mathcal{V}_k b_k(t), \quad
\label{EOM1} \\
& & \frac{d}{dt} b_k(t) = - \frac{i}{\hbar} \left[ b_k(t), H \right] = -i \omega_k b_k(t) - i \mathcal{V}_k^{\ast} a(t). \quad
\label{EOM2}
\end{eqnarray}
Solving Eq.~(\ref{EOM2}) for $b_{k}$ we get
\begin{align}
b_k(t) = b_k(0) e^{-i\omega_k t} - i \mathcal{V}_k^{\ast} \int_0^t d\tau~a(\tau)~e^{-i\omega_k(t-\tau)}.
\label{EOM3}
\end{align}
Substituting Eq.
(\ref{EOM3}) in (\ref{EOM1}) we arrive at the following quantum Langevin equation \cite{ali2017nonequilibrium}
\begin{align}
{\dot a}(t) + i \omega_0 a(t) + \int_0^t d\tau g(t,\tau) a(\tau) = - i \sum_k \mathcal{V}_k b_k(0) e^{-i\omega_k t}.
\label{langevinequation}
\end{align}
Here the integral kernel $g(t,\tau) = \sum_k |\mathcal{V}_k|^2 e^{-i\omega_k (t-\tau)}$ characterizes the
non-Markovian memory effects between the system and the environment. For a continuous environment
spectrum, $g(t,\tau) = \int_0^{\infty} d\omega J(\omega) e^{-i\omega(t-\tau)}$, where
$J(\omega)=\varrho(\omega) |\mathcal{V}(\omega)|^2$ is the spectral density which characterizes all the
non-Markovian memory of the environment on the system. Here $\varrho(\omega)$
is the density of states of the environment and $\omega$ is the continuously varying frequency of the bath.
Due to the linearity of Eq.~(\ref{langevinequation}), the time evolved bosonic operator $a(t)$ can be expressed
\cite{de2000continuous} in terms of the initial field operators $a(0)$ and $b_{k}(0)$ of the system and the environment as
\begin{align}
a(t) = u(t) a(0) + f(t).
\label{fieldoperators}
\end{align}
The time-dependent coefficient $u(t)$ and noise operator $f(t)$ are determined by the quantum Langevin equation,
and they satisfy the two integrodifferential equations given below:
\begin{eqnarray}
\frac{d}{dt} u(t) &=& - i \omega_0 u(t) - \int_0^t d\tau g(t,\tau) u(\tau),
\label{utime} \\
\frac{d}{dt} f(t) &=& -i \omega_0 f(t) - \int_0^t d\tau g(t,\tau) f(\tau) \nonumber \\
 & & - i \sum_k \mathcal{V}_k b_k(0) e^{-i\omega_k t}.
\label{ftime}
\end{eqnarray}
To determine $u(t)$ we solve Eq.
(\ref{utime}) numerically with the initial condition $u(0) =1$.
The solution for the noise operator, obtained using the initial condition $f(0) =0$, is
\begin{align}
f(t) = - i \sum_k \mathcal{V}_k b_k(0) \int_0^t d\tau e^{-i\omega_k \tau} u(t,\tau).
\label{ft2}
\end{align}
The nonequilibrium thermal fluctuation can be evaluated using the quantity
\begin{align}
\langle f^{\dagger}(t) f(t)\rangle = v(t) = \! \! \int_{0}^{t} \! \! \! d\tau_1 \int_{0}^{t} \! \! \! d\tau_2 u(t, \tau_1)
{\widetilde g}(\tau_1,\tau_2) u^{\ast}(t, \tau_2),
\label{fdf}
\end{align}
where we consider the initial state of the total system to be $\rho_{tot}(0)=\rho_S(0) \otimes \rho_E(0)$.
Here the initial environment state $\rho_E(0) = \exp(-\beta H_E)/{\rm Tr}[\exp(-\beta H_E)]$ is the thermal state
of the Hamiltonian $H_E=\sum_k \hbar \omega_k b_k^{\dagger}b_k$, where $\beta = 1/k_{B} T$ is the inverse
temperature and $k_B$ is the Boltzmann constant.

When the environment has a continuous spectrum, the time correlation function reads:
\begin{align}
{\widetilde g}(\tau_1,\tau_2) = \int_0^{\infty} d\omega J(\omega) {\bar n}(\omega) e^{-i\omega(\tau_1-\tau_2)},
\end{align}
where ${\bar n}(\omega)=1/(e^{\hbar \omega / k_{B} T}-1)$ is the initial particle number distribution of the
bosonic environment. From Eqs.
(\ref{fieldoperators}) to (\ref{fdf}), one can obtain the time-dependent
average values of the system operators as follows:
\begin{align}
\label{avg1}
&\langle a(t) \rangle = u(t) \langle a(0) \rangle, ~~\langle a^{\dagger}(t) \rangle = u^{\ast}(t) \langle a^{\dagger}(0) \rangle, \\
\label{avg2}
&\langle a(t) a(t) \rangle = (u(t))^{2} \langle a(0) a(0) \rangle, \\
\label{avg3}
&\langle a^{\dagger}(t) a^{\dagger}(t)\rangle = (u^{\ast}(t))^{2} \langle a^{\dagger}(0) a^{\dagger}(0) \rangle, \\
\label{avg4}
&\langle a^{\dagger}(t) a(t) \rangle = |u(t)|^{2} \langle a^{\dagger}(0) a(0) \rangle + v(t).
\end{align}
Here we take $\langle f(t)\rangle=\langle f^{\dagger}(t)\rangle=0$ and also
$\langle f(t) f(t) \rangle=\langle f^{\dagger}(t) f^{\dagger}(t)\rangle=0$, since the reservoir is initially in a thermal state
uncorrelated with the system. Using the time-dependent average values in Eqs.~(\ref{avg1})--(\ref{avg4}), we can
evaluate the time evolved first and second moments of the quadrature operators, namely
$\langle \xi_1(t) \rangle$, $\langle \xi_2(t) \rangle$, $\langle \xi_1^2(t) \rangle$,
$\langle \xi_2^2(t) \rangle$, $\langle \xi_1(t) \xi_2(t) \rangle$, and $\langle \xi_2(t) \xi_1(t) \rangle$.
The elements of the time evolved covariance matrix are
\begin{eqnarray}
V_{11} &=& 1+ 2 v(t) + 2 |u(t)|^{2} \, {\rm Cov}(a^{\dagger}(0), a(0)) \nonumber \\
 & & + (u(t))^{2} \, {\rm Var}(a(0)) + (u^{\ast}(t))^{2} \, {\rm Var}(a^{\dagger}(0)), \qquad
\label{V11}
\end{eqnarray}
\begin{eqnarray}
V_{22} &=& 1+ 2 v(t) + 2 |u(t)|^{2} \, {\rm Cov}(a^{\dagger}(0), a(0)) \nonumber \\
 & & - (u(t))^{2} \, {\rm Var}(a(0)) - (u^{\ast}(t))^{2} \, {\rm Var}(a^{\dagger}(0)), \qquad
\label{V22}
\end{eqnarray}
\begin{equation}
V_{12} = i (u^{\ast}(t))^{2} \; {\rm Var}(a^{\dag} (0)) - i (u(t))^{2} \, {\rm Var}(a(0)),
\end{equation}
where ${\rm
Cov}(a,b) = \langle a b \rangle - \langle a \rangle \langle b \rangle$ and ${\rm Var}(a) = {\rm Cov}(a,a)$, and
$V_{12} = V_{21}$ due to the symmetry of the covariance matrix.


Once the initial state and the bath parameters are known, the time evolved covariance matrix elements are completely
determined by the nonequilibrium Green's functions $u(t)$ and $v(t)$. To calculate the Green's function we need to
specify the spectral density $J(\omega)$ of the environment. In our work we consider an Ohmic-type spectral density which
can simulate a large class of thermal baths \cite{leggett1987dynamics}
\begin{align}
J(\omega) = \eta ~\omega \left( \frac{\omega}{\omega_c} \right)^{s-1} ~e^{-\omega/\omega_c},
\label{sd}
\end{align}
where $\eta$ is the coupling strength between the system and the environment and $\omega_{c}$ is the frequency cut-off
of the environmental spectrum. A localized mode is generated when the system-environment coupling approaches a
critical value $\eta_{c} = \omega_{0} /(\omega_{c} \Gamma(s))$, where $\Gamma(s)$ is the Gamma function.
Depending on the value of $s$, the environment is classified as Ohmic for $s=1$,
sub-Ohmic for $s<1$ and super-Ohmic for $s>1$. Throughout our work we use a scaled temperature
$T_{s} = k_{B} T/ \hbar \omega_{0}$, where $\omega_{0}$ is the system frequency. The coherence dynamics
of pure Gaussian states is studied for different environmental conditions and various system-environment couplings.
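For the Ohmic case ($s=1$) the kernel integral has the closed form $g(t-\tau)=\eta\,\omega_c^2/[1+i\omega_c(t-\tau)]^2$, so Eq.~(\ref{utime}) can be integrated by elementary time-stepping. The sketch below (Python; the Euler scheme, grid sizes and the weak-coupling values are illustrative choices, not necessarily the integrator used for the figures) factors out the free rotation $u(t)=e^{-i\omega_0 t}y(t)$ and steps the remaining memory integral:

```python
import numpy as np

# Illustrative parameters, in units of the system frequency w0
w0 = 1.0
wc = 5.0 * w0            # cut-off frequency
eta_c = w0 / wc          # critical coupling eta_c = w0/(wc*Gamma(s)) for s = 1
eta = 0.01 * eta_c       # weak system-environment coupling

dt, nsteps = 0.01, 2000
t = np.arange(nsteps) * dt

def g(tau):
    # Ohmic memory kernel: int_0^inf dw eta*w*exp(-w/wc)*exp(-i*w*tau)
    return eta * wc**2 / (1.0 + 1j * wc * tau)**2

# Substitute u(t) = exp(-i*w0*t) y(t); then
# dy/dt = -int_0^t dtau g(t-tau) exp(i*w0*(t-tau)) y(tau)
y = np.zeros(nsteps, dtype=complex)
y[0] = 1.0               # initial condition u(0) = 1
for n in range(nsteps - 1):
    s = t[n] - t[:n + 1]
    memory = np.sum(g(s) * np.exp(1j * w0 * s) * y[:n + 1]) * dt
    y[n + 1] = y[n] - dt * memory
u = np.exp(-1j * w0 * t) * y

# |u(t)| decays slowly in this weak-coupling regime
print(abs(u[-1]))
```

With $u(t)$ and the corresponding $v(t)$ in hand, the covariance matrix elements of Eqs.~(\ref{V11})--(\ref{V22}) follow directly.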
Since the Hamiltonian under consideration (\ref{Tot}) is bilinear in the creation and annihilation operators,
the Gaussian states preserve their form and remain Gaussian during the evolution.

\begin{figure}
\includegraphics[width=\columnwidth]{figure1.pdf}
\caption{The time evolution of quantum coherence of a coherent state with parameter $\alpha$ in contact with an Ohmic bath is shown for
weakly coupled systems ($\eta = 0.01 \; \eta_{c}$) in (a) at low temperature $T_{s} =1$, (b) at high temperature $T_{s} =20$ and
for the strongly coupled systems ($\eta = 2.0 \; \eta_{c}$) in (c) at low temperature $T_{s} =1$, (d) at high temperature $T_{s} =20$.
We use the cut-off frequency $\omega_{c} = 5.0 \; \omega_{0}$. }
\label{fig1}
\end{figure}


\section{Quantum Coherence dynamics of coherent states in a Non-Markovian environment}
\label{coherentstates}
Glauber \cite{glauber1963coherent} defined a coherent state $| \alpha \rangle$ as the eigenstate of the annihilation operator $a$ with eigenvalue
$\alpha \in \mathbb{C}$. We can generate the coherent state from the ground state $|0 \rangle$ of an oscillator through the action
of the displacement operator $ D(\alpha) = \exp( \alpha a^{\dag} - \alpha^{\ast} a)$, where $a^{\dag}$ and $a$ are the
creation and annihilation operators of the standard harmonic oscillator. In the number basis, the coherent state
can be expressed as
\begin{align}
|\alpha \rangle = \exp\left( - \frac{1}{2} |\alpha|^{2} \right) \sum_{n=0}^{\infty} \frac{\alpha^{n}}{\sqrt{n !}} |n \rangle,
\end{align}
where $|n \rangle$ is the oscillator number basis. The coherent states are Gaussian wavepackets
which do not spread and have minimal uncertainty. Thus, for a free classical mode, the closest quantum analogue is the
coherent state. Hence the coherent state is very useful in quantum optics, especially to study the state of the quantized
electromagnetic field.
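The defining eigenvalue relation $a|\alpha\rangle = \alpha|\alpha\rangle$ can be checked directly from this number-basis expansion in a truncated Fock space (Python; the truncation level and the value of $\alpha$ below are illustrative):

```python
import numpy as np
from math import factorial

N = 60         # Fock-space truncation (large enough for small alpha)
alpha = 1.5    # illustrative real coherent-state parameter

# Number-basis amplitudes of |alpha>
n = np.arange(N)
psi = np.exp(-abs(alpha)**2 / 2) * alpha**n / np.sqrt([float(factorial(k)) for k in n])

# Truncated annihilation operator: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)

print(psi @ psi)        # norm squared, close to 1
print(psi @ (a @ psi))  # <alpha|a|alpha>, close to alpha
```

The residual deviation from $\alpha$ comes only from the Poisson tail cut off by the truncation.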
The time evolution of the quantum coherence of a coherent state subjected to a spectral density of the
form given in Eq.~(\ref{sd}) is reported in this section for the Ohmic ($s=1$), sub-Ohmic ($s<1$) and
super-Ohmic ($s>1$) environments.

\noindent{\it Ohmic bath (s=1):} \\
The time dynamics of quantum coherence of a coherent state for an Ohmic bath with spectral density
$J(\omega) = \eta \omega \exp(- \omega / \omega_{c})$ is shown in Fig.~\ref{fig1}. From the plots in Figs.~\ref{fig1}(a)-(d) we find
that the higher the value of the coherent state parameter `$\alpha$', the higher the initial amount of coherence and also its
value at any given time. In Figs.~\ref{fig1}(a) and \ref{fig1}(b), the time variation of coherence when the system is weakly coupled to the
environment, i.e., $\eta =0.01 \, \eta_{c}$, is shown. In particular, Fig.~\ref{fig1}(a) shows the low temperature limit and Fig.~\ref{fig1}(b)
the high temperature regime of the coherence dynamics. From both these plots we find that quantum coherence decreases
monotonically with time. But it falls faster at higher temperature because, apart from the environmental effects, the
thermal effects also contribute to the system decoherence. The dynamics of coherence when the system is strongly coupled
to the environment, i.e., for $\eta=2.0 \, \eta_{c}$, is shown in Figs.~\ref{fig1}(c) and \ref{fig1}(d). Here we can see that the coherence initially decreases
monotonically and then there is an oscillatory phase. This oscillatory phase is because of the environmental back action
due to the non-Markovian nature of the bath. For lower temperatures, as shown in Fig.~\ref{fig1}(c), the coherence saturates at a
higher value compared to the high temperature limit given in Fig.
\ref{fig1}(d).


\begin{figure}
\includegraphics[width=\columnwidth]{figure2.pdf}
\caption{The coherence dynamics of a coherent mode with parameter $\alpha$ in contact with a sub-Ohmic bath is studied for
weakly coupled systems ($\eta = 0.01 \; \eta_{c}$) in (a) low temperature $T_{s} =1$, (b) high temperature $T_{s} =20$.
For the strongly coupled systems ($\eta = 2.0 \; \eta_{c}$) the plots are (c) at low temperature $T_{s}=1$ and (d) at high temperature $T_{s}=20$.
We use the cut-off frequency $\omega_{c} = 5.0 \; \omega_{0}$. }
\label{fig2}
\end{figure}

\noindent{\it Sub-Ohmic bath (s=1/2):} \\
To investigate the coherence dynamics in the sub-Ohmic region we consider $s=1/2$ in Eq.~(\ref{sd}).
The corresponding spectral density reads $J(\omega) = \eta \sqrt{\omega \, \omega_{c}} \exp(- \omega/ \omega_{c})$.
The results corresponding to this analysis are presented in Fig.~\ref{fig2}. The time variation of
coherence in the weak coupling limit with $\eta =0.01 \, \eta_{c}$ is given in Figs.~\ref{fig2}(a) and (b). In Fig.~\ref{fig2}(a) the
low temperature limit is analyzed and we find that the coherence decreases monotonically with time. At higher
temperatures also, coherence decreases monotonically, but the fall is much faster, as seen in Fig.~\ref{fig2}(b).
This faster fall is because at high temperatures, apart from the dissipation, the thermal fluctuations also cause
decoherence in the system. The coherence dynamics when the system is strongly coupled to the environment, $\eta=2.0 \, \eta_{c}$, is
presented in Figs.~\ref{fig2}(c) and (d), corresponding to the low and high temperature cases respectively. In the
low temperature case illustrated in Fig.~\ref{fig2}(c), we find a revival of coherence due to the non-Markovian effects
of the bath.
Such a coherence revival is not observed in the high temperature limit, even in the strong coupling case.
This is because the temperature affects the environmental back action on the system, a feature which has also
been observed in Ref.~[].

\begin{figure}
\includegraphics[width=\columnwidth]{figure3.pdf}
\caption{A coherent mode in contact with a super-Ohmic bath is studied for weakly interacting systems ($\eta = 0.01 \; \eta_{c}$) for (a) $T_{s}=1$
and (b) $T_{s} = 20$ and also for strongly interacting systems ($\eta = 2.0 \; \eta_{c}$) with (c) $T_{s} =1$ and (d) $T_{s}=20$. The cut-off
frequency used is $\omega_{c} = 5.0 \; \omega_{0}$.}
\label{fig3}
\end{figure}

\noindent{\it Super-Ohmic bath (s=3):} \\
For the super-Ohmic bath we consider $s=3$ to investigate the dynamics of coherence. The spectral density in this case
reads $J(\omega) = \eta \, \omega_{c} (\omega / \omega_{c})^{3} \exp(- \omega/ \omega_{c})$. The coherence evolution
in this case is described through the plots in Figs.~\ref{fig3}(a)-(d), considering different coupling strengths and different
temperatures. In Figs.~\ref{fig3}(a) and (b) we analyze the coherence variation in the weak coupling limit, $\eta =0.01 \, \eta_{c}$.
Here we can see that the coherence decreases monotonically in both the low and high temperature limits. Due to temperature
dependent decoherence effects, the rate of decrease is higher in Fig.~\ref{fig3}(b). The nature of the coherence variation in
the strong coupling limit ($\eta=2.0 \, \eta_{c}$) is examined in Figs.~\ref{fig3}(c) and (d) respectively. In both these cases we see that
the coherence decreases initially and then saturates at a finite value, a phenomenon known as coherence freezing. But the
value at which coherence freezes is different in Figs.~\ref{fig3}(c) and \ref{fig3}(d) and is also dependent on the parameter
$\alpha$.
For a higher $\\alpha$, the coherence freezes at a higher value. The general decrease in coherence\nis due to the environmental effects. For the same environment at different temperatures, the system experiences\ntemperature dependent decoherence effects.\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{figure4.pdf}\n\\caption{A squeezed mode in contact with a Ohmic bath is studied for weakly interacting systems ($\\eta = 0.01 \\; \\eta_{c}$) for (a) $T_{s}=1$\nand $T_{s} =20$ and also for strongly coupled systems ($\\eta = 2.0 \\; \\eta_{c}$) for (a) $T_{s}=1$ and $T_{s} =20$. The cut-off frequency used is\n$\\omega_{c} = 5.0 \\, \\omega_{0}$. }\n\\label{fig4}\n\\end{figure}\n\n\\section{Non-Markovian dynamics of Squeezed states}\n\\label{squeezedstates}\nA squeezed state has a minimum value for the product of the dispersion of the position and momentum operators.\nFor a squeezed state \\cite{walls1983squeezed}, the Hamiltonian consists of terms quadratic in the creation and the\nannihilation operator. The Gaussian unitary corresponding to the one mode squeezing operator is\n\\begin{align*}\nS(r) = \\exp \\left[ r \\left(a^{2} -( a^{\\dag})^{2} \\right) \\right].\n\\end{align*}\nHere we assume $r \\in \\mathbb{R}$ for the sake of convenience. The Bogoliubov transformation of the annihilation and creation\noperator based on the squeezing operator reads:\n\\begin{align}\nS^{\\dag} (r) a S(r) = a \\cosh r - a^{\\dag} \\sinh r,, \\nonumber \\\\\nS^{\\dag}(r) a^{\\dag} S(r) = a^{\\dag} \\cosh r - a \\sinh r, \\nonumber\n\\end{align}\nWhen this squeezing operator is applied to a vaccum state we can generate a squeezed vaccum state\n\\begin{align*}\n|0,r \\rangle = S(r) |0 \\rangle = \\frac{1}{\\sqrt{\\cosh r}} \\sum_{n=0}^{\\infty} \\frac{\\sqrt{(2n)!}}{2^{n} n!} \\tanh^{n} r |2n \\rangle.\n\\end{align*}\nIn this state, the variance of one of the quadrature operator is below the quantum shot noise and hence it is called a\nsqueezed state. 
To compensate the squeezing in one quadrature, there is an antisqueezing in the other quadrature.
The dynamics of quantum coherence of the single mode squeezed state is presented in this section for the spectral
density Eq.~(\ref{sd}), considering the Ohmic, sub-Ohmic and the super-Ohmic conditions.

\noindent{\it Ohmic bath (s=1) :} \\
The Ohmic bath has a spectral density $J(\omega) = \eta \omega \exp(-\omega/\omega_{c})$, and the dynamics of quantum
coherence of a single mode squeezed state is given through the plots in Fig.~\ref{fig4}. When the squeezed single mode
system is weakly coupled ($\eta =0.01 \, \eta_{c}$) to the environment, the coherence evolution is shown in Fig.~\ref{fig4}(a) and
Fig.~\ref{fig4}(b) for the low and the high temperature limits respectively. We do not observe non-Markovian effects in either limit.
But the coherence falls faster in the high temperature limit due to temperature dependent decoherence effects. When
the system is strongly coupled to the environment ($\eta=2.0 \, \eta_{c}$), the coherence evolution is shown in
Figs.~\ref{fig4}(c) and \ref{fig4}(d) respectively. We observe a very small non-Markovian effect in the low temperature
limit. It is absent in the high temperature limit due to the thermal decoherence effects.

\begin{figure}
\includegraphics[width=\columnwidth]{figure5.pdf}
\caption{The transient dynamics of coherence for a single squeezed mode in contact with a sub-Ohmic bath is studied for weakly
interacting systems ($\eta = 0.01 \; \eta_{c}$) for (a) $T_{s} =1$ and (b) $T_{s} =20$ and strongly interacting systems ($\eta = 2.0 \; \eta_{c}$)
(c) $T_{s} =1$ and (d) $T_{s}=20$ and using a cut-off frequency of $\omega_{c} = 5.0 \; \omega_{0}$.
}
\label{fig5}
\end{figure}

\begin{figure}
\includegraphics[width=\columnwidth]{figure6.pdf}
\caption{For a squeezed mode in contact with a super-Ohmic bath, the coherence dynamics is studied for weakly interacting systems
($\eta = 0.01 \; \eta_{c}$) for (a) $T_{s} =1$ and (b) $T_{s} =20$ and strongly coupled systems ($\eta = 2.0 \; \eta_{c}$)
(c) $T_{s}=1$ and (d) $T_{s} =20$. The cut-off frequency used is $\omega_{c} = 5.0 \; \omega_{0}$. }
\label{fig6}
\end{figure}

\begin{figure*}
\includegraphics[scale=0.5]{figure7.pdf}
\caption{The temporal evolution of the quantum coherence $(C)$ as a function of time ($\omega_{0} t$) and coupling strength
($\eta/\eta_{c}$) is given for the displaced squeezed state in contact with an Ohmic bath. The plot is divided into four columns:
(a) $\alpha =0.1, r =0.1$, (b) $\alpha =0.1, r=2.0$, (c) $\alpha = 4.0, r=0.1$ and (d) $\alpha =4.0, r = 2.0$. In each column
there are two plots, one for $T_{s} = 1$ and one for $T_s =20$. The cut-off frequency used is $\omega_{c} = 5.0 \; \omega_{0}$. }
\label{fig7}
\end{figure*}


\noindent{\it Sub-Ohmic bath (s=1/2):} \\
For a sub-Ohmic bath with spectral density $J(\omega) = \eta \sqrt{\omega \, \omega_{c}} \exp(- \omega/ \omega_{c})$,
the coherence dynamics of the single mode squeezed state is shown via the plots in Figs.~\ref{fig5}(a)-(d). In
Figs.~\ref{fig5}(a) and \ref{fig5}(b), the system is analyzed when it is weakly coupled to the bath ($\eta=0.01 \, \eta_{c}$). Here we
see that at high temperature the coherence falls faster due to thermal decoherence. The strong coupling effects
($\eta=2.0 \, \eta_{c}$) are analyzed in Figs.~\ref{fig5}(c) and \ref{fig5}(d) respectively. In the low temperature limit, there is
a small non-Markovian effect, but in the high temperature limit, there is no environmental back action.
Overall,
we find that coherence falls faster with increase in temperature.

\noindent{\it Super-Ohmic bath (s=3):} \\
To study the effects of a super-Ohmic environment, we consider a bath with a spectral density
$J(\omega) = \eta \, \omega_{c} (\omega / \omega_{c})^{3} \exp(- \omega/ \omega_{c})$. The plots in Figs.~\ref{fig6}(a)-(d)
show the dynamics of the quantum coherence of the squeezed single mode state. The change of coherence
with time, when the system is weakly coupled to the bath ($\eta =0.01 \, \eta_{c}$), is shown in Figs.~\ref{fig6}(a) and (b). The
coherence falls monotonically in the weak coupling limit and the rate of fall of coherence increases with increase in
temperature. In the strong coupling limit, i.e., when $\eta=2.0 \, \eta_{c}$, the coherence initially decreases but saturates at a
finite value, a phenomenon known as coherence freezing. The rate of fall of coherence and the saturation value of coherence
are dependent on the temperature, and for higher temperature the coherence saturates at a lower value.
In general we find that the amount of coherence increases with the squeezing parameter $r$, under all the
environmental conditions.


\section{Quantum coherence evolution of displaced squeezed states}
\label{displacedsqueezedstate}

A displaced squeezed state \cite{kim1989properties} has the general form
\begin{align*}
|\alpha, r \rangle = D(\alpha) S(r) |0\rangle,
\end{align*}
where $|0 \rangle$ is the vacuum state and $D(\alpha)$ and $S(r)$ are the displacement and squeezing operators
\begin{align*}
D(\alpha) = \exp(\alpha a^{\dag} - \alpha^{\ast} a); \quad \quad S(r) = \exp\left[\frac{r}{2}\left(a^{2} - (a^{\dag})^{2}\right)\right].
\end{align*}
Here $\alpha \in \mathbb{C}$ and $r \in \mathbb{R}$ are the relevant parameters.
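Since $|\alpha, r\rangle$ is pure ($\nu = 1$) with mean occupation $\overline{n} = |\alpha|^2 + \sinh^2 r$, its initial coherence follows directly from Eq.~(\ref{coher3}); only the $\overline{n}$-dependent terms survive. A short check for one of the parameter sets used below (Python):

```python
import numpy as np

alpha, r = 4.0, 0.1   # one of the parameter sets studied in the figures

# Pure state: nu = sqrt(det V) = 1, so the nu-terms of Eq. (coher3) vanish;
# mean occupation: displacement photons plus squeezing photons
nbar = abs(alpha)**2 + np.sinh(r)**2

# Initial coherence C(0)
C0 = (nbar + 1) * np.log2(nbar + 1) - nbar * np.log2(nbar)
print(C0)
```

This gives the $t = 0$ value from which the decay curves of the following plots start.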
In this state, we investigate the transient dynamics
of quantum coherence for different values of the displacement parameter $\alpha$ and the squeezing parameter $r$.
The single mode displaced squeezed state is examined for the spectral density Eq.~(\ref{sd}), considering the Ohmic,
sub-Ohmic and the super-Ohmic conditions.

\noindent{\it Ohmic bath (s=1) :} \\
The Ohmic bath has a spectral density $J(\omega) = \eta \omega \exp(-\omega/\omega_{c})$, and the coherence dynamics
of the single mode displaced squeezed state for this spectrum is given through the $3D$ plots in Fig.~\ref{fig7},
with the coherence $C(\rho)$ on the vertical axis and the parameters $\omega_{0}t$ and $\eta/\eta_{c}$ along the orthogonal
horizontal axes. The figure is split into four columns labelled from (a) to (d), corresponding to four different displaced
squeezed states. The first row gives the plots in the low temperature limit ($T_{s} = 1$) and the second row the
high temperature regime ($T_{s} = 20$).

In the first column, i.e., Fig.~\ref{fig7}(a), we present the dynamics for the parameter values ($\alpha =0.1, r =0.1$).
Both in the high and low temperature limits, the coherence falls rapidly and monotonically to zero, which can
be referred to as coherence sudden death in analogy with the sudden death of entanglement. In the low temperature
limit the coherence revives and saturates at a finite value, which does not happen when the temperature is high. After
the revival in the low temperature limit, the coherence attains saturation and shows a non-Markovian feature that survives
for a long time. When the squeezing is increased to high values ($\alpha=0.1, r=2.0$), the coherence behavior is shown in Fig.~\ref{fig7}(b).
Here we again find a coherence sudden death in both the low and high temperature limits. When the displacement
values are higher, the coherence is shown in Fig.
\\ref{fig7} (c) and \\ref{fig7} (d) for the values ($\\alpha=4.0, r=0.1$)\nand ($\\alpha=4.0, r=2.0$) respectively. The system shows coherence sudden death, after which there is a coherence\nrevival and saturation in this limit. The system exhibits non-Markovian behavior for these parameters after the revival.\nFrom all the plots, Fig. \\ref{fig7} (a) to (d), we can see that the saturation value is higher when ($\\alpha=4.0, r=0.1$)\nand that it decreases on increasing the squeezing parameter, as can be seen from Fig. \\ref{fig7} (d) corresponding to the parameter values\n($\\alpha=4.0, r=2.0$).\n\n\n\\begin{figure*}\n\\includegraphics[scale=0.5]{figure8.pdf}\n\\caption{The coherence dynamics of a displaced squeezed mode is shown in the figure above as a function of time\n$\\omega_{0}t$ and $\\eta\/\\eta_{c}$ when it is in contact with a sub-Ohmic bath. The plot is divided into four columns\n(a) $\\alpha =0.1, r =0.1$, (b) $\\alpha =0.1, r=2.0$, (c) $\\alpha = 4.0, r=0.1$, and (d) $\\alpha =4.0, r = 2.0$. In each\ncolumn there are two plots for $T_{s} =1$ and $T_{s} =20$. The cut-off frequency used is $\\omega_{c} = 5.0 \\; \\omega_{0}$.}\n\\label{fig8}\n\\end{figure*}\n\n\\begin{figure*}\n\\includegraphics[scale=0.5]{figure9.pdf}\n\\caption{The time evolution of coherence of a displaced squeezed mode in contact with a super-Ohmic bath is shown in\nthe figure above as a function of $\\omega_{0}t$ and $\\eta\/\\eta_{c}$. There are four columns in the plot\n(a) $\\alpha =0.1, r =0.1$, (b) $\\alpha =0.1, r=2.0$, (c) $\\alpha = 4.0, r=0.1$, and (d) $\\alpha =4.0, r = 2.0$.\nIn each column there are two plots for $T_{s} =1$ and $T_{s} =20$. The cut-off frequency used is\n$\\omega_{c} = 5.0 \\; \\omega_{0}$.}\n\\label{fig9}\n\\end{figure*}\n\n\\noindent{\\it Sub-Ohmic bath (s=1\/2):} \\\\\nThe behavior of the single displaced squeezed mode affected by a sub-Ohmic bath is shown through the plots in Fig. 
\\ref{fig8}.\nThe corresponding spectral density is $J(\\omega) = \\eta \\sqrt{\\omega \\, \\omega_{c}} \\exp(- \\omega\/ \\omega_{c})$.\nThe $3D$ plots describe the variation of coherence, with the first row representing the\nlow temperature regime ($T_{s}=1$) and the second row the high temperature regime ($T_{s}=20$).\nWe label the columns from (a) to (d) in the plots.\n\nFor the parameter values ($\\alpha =0.1$, $r=0.1$) and ($\\alpha =0.1$, $r=2.0$), the coherence dynamics\nis given in Fig. \\ref{fig8} (a) and (b) respectively. Here, in the low temperature limit, we observe both coherence\nsudden death and coherence revival, as well as a non-Markovian effect in the coherence after the revival.\nMeanwhile, in the high temperature limit, we observe only coherence sudden death and there is no revival.\nIn the regimes ($\\alpha =4.0$, $r=0.1$) and ($\\alpha =4.0$, $r=2.0$) we again see the coherence decaying\nwithin a short interval of time, as shown through the plots in Fig. \\ref{fig8} (c) and (d) respectively. The coherence sudden\ndeath and the coherence revival are observed in both the low and high temperature limits. The non-Markovian\neffects are also clearly seen in these plots. In the sub-Ohmic limit, coherence revivals in the high temperature\nregime are seen only in the high $\\alpha$ region.\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{figure10.pdf}\n\\caption{The $3D$ plots of $\\frac{{\\rm d} C}{{\\rm d} \\eta_{s}}$ vs $\\eta_{s}$ vs $\\omega_{0} t$ for the displaced squeezed\nstate with parameters $\\alpha = 4.0, r = 0.1$ are given above. In Figs. (a) and (c) we study the low temperature ($T_{s} =1$)\nand the high temperature limit ($T_{s} = 20$) when the state is in contact with an Ohmic bath. The low temperature ($T_{s} =1$)\nand high temperature limit ($T_{s} = 20$) are shown in Figs. (b) and (d) respectively, when the state is exposed to a sub-Ohmic\nbath. 
The cut-off frequency used is $\\omega_{c} = 5.0 \\; \\omega_{0}$.}\n\\label{fig10}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{figure11a.pdf}\n\\includegraphics[width=8.0 cm]{figure11b.pdf}\n\\caption{The dissipation coefficient $\\gamma(t)$ is calculated from the exact solution of $u(t)$.\nWe present here the contour plot of the dissipation coefficient $\\gamma(t)$ and its time derivative\n${\\rm d}\\gamma\/{\\rm d}t$ with varying time and coupling strength. In Figs. (a) and (b) we show\nthe transient behavior of $\\gamma(t)$ for the Ohmic ($s=1$) and sub-Ohmic ($s=1\/2$) bath spectra, respectively.\nWe next plot the time derivative ${\\rm d}\\gamma\/{\\rm d}t$ with varying time and coupling strength\nwhen the system is in contact with an Ohmic bath (c) and a sub-Ohmic bath (d), respectively. The cut-off frequency is\n$\\omega_c = 5.0 \\omega_0$.}\n\\label{fig11}\n\\end{figure}\n\n\\noindent{\\it Super-Ohmic bath (s=3):} \\\\\nThe quantum coherence dynamics of the single displaced squeezed mode, when it is exposed to a super-Ohmic\nenvironment, is given here. The spectral density used for this computation is\n$J(\\omega) = \\eta \\, \\omega_{c} (\\omega \/ \\omega_{c})^{3} \\exp(- \\omega\/ \\omega_{c})$. The\nresults pertaining to this study are given as $3D$ plots in Fig. \\ref{fig9}, with the coherence $C(\\rho)$ along\nthe vertical axis and the parameters $\\omega_{0} t$ and $\\eta\/\\eta_{c}$ along the orthogonal horizontal axes.\nThe first row represents the low temperature regime ($T_{s}=1$) and the second row the high temperature regime\n($T_{s}=20$), and the columns in the figure are labelled from (a) to (d).\n\n\\begin{figure}\n\\includegraphics[width=9.5cm]{figure12.pdf}\n\\caption{The fluctuation coefficient $\\widetilde{\\gamma}(t)$ is calculated from\nthe exact solution of $u(t)$ and $v(t)$. 
We present here the contour plot of the\nfluctuation coefficient $\\widetilde{\\gamma}(t)$ for the Ohmic ($s=1$) spectrum in the\nlow temperature (a) $T_{s}=1$ and in the high temperature (c) $T_{s}=20$ limits,\nrespectively. We next show the contour plot of the fluctuation coefficient $\\widetilde{\\gamma}(t)$\nfor the sub-Ohmic ($s=1\/2$) spectrum in the low temperature (b) $T_{s}=1$ and in the high\ntemperature (d) $T_{s}=20$ limits. The cut-off frequency in all the cases is $\\omega_c = 5.0 \\omega_0$.}\n\\label{fig12}\n\\end{figure}\n\nIn Fig. {\\ref{fig9}} (a) and (b) the coherence dynamics is given for the parameter values ($\\alpha =0.1$, $r=0.1$)\nand ($\\alpha =0.1$, $r=2.0$) respectively. For the parameter values ($\\alpha =4.0$, $r=0.1$) and\n($\\alpha =4.0$, $r=2.0$), the dynamics is given through the plots in Fig. {\\ref{fig9}} (c) and (d). From all the\nplots we observe that the coherence decays monotonically and then attains a saturation value at which the\ncoherence freezes for the rest of the evolution. However, the fall of coherence and the value at which the coherence\nfreezes are temperature dependent: the higher the temperature, the lower the saturation value. This is\nbecause, apart from the loss of coherence due to dissipation, some of the coherence is also\nlost due to thermal decoherence. From Fig. \\ref{fig9} (a) we can see that the initial coherence is low\nwhen both the $\\alpha$ and $r$ values are low. On increasing either the displacement parameter $\\alpha$\nor the squeezing parameter $r$, we find that the system has a higher amount of initial coherence, i.e.,\ncoherence at $t=0$, as shown in Fig. \\ref{fig9} (b) and (c). Further, from these two cases, we also observe\nthat the saturation value is higher when $\\alpha$ is higher. Finally, in Fig. \\ref{fig9} (d) we examine the situation\nwhere both the displacement parameter ($\\alpha$) and the squeezing parameter ($r$) are higher.\nIn this case we find that the initial amount of coherence is comparable to the cases in Fig. 
\\ref{fig9} (b) and (c),\nbut the fall of coherence and the saturation value more closely resemble the case in Fig. \\ref{fig9} (c),\nwhere ($\\alpha =4.0$, $r=0.1$). An increase in either the displacement parameter or the squeezing parameter\nincreases the quantum coherence in the system.\n\n\n\\section{Markovian to non-Markovian crossover}\n\\label{crossovertransition}\nThe present work considers a Gaussian state in contact with an external non-Markovian environment. When we\nconsider an Ohmic or sub-Ohmic bath coupled to the system, we initially observe a Markovian evolution for the coherent\nstate, squeezed state and displaced squeezed state. Depending on whether the coupling is weak or strong, the\ndynamics either remains Markovian or switches over to non-Markovian behavior. In the current section\nwe examine the crossover behavior in the dynamics of a displaced squeezed state.\n\nLet us consider the evolution of a displaced squeezed state with the parameter values $\\alpha = 4.0$ and $r = 0.1$.\nWhen the coupling between the bath and the system is weak, we observe that the dynamics of the system is\ncompletely Markovian, where the system experiences either coherence death or coherence freezing. In\nthis case, the correlation time of the bath is much shorter than the system evolution time. Irreversible loss\nof information occurs because the bath states are restored very quickly, so any information received from\nthe state is lost. For systems which have a stronger coupling with the bath, the initial coherence decay is Markovian,\nand then there is a sudden switch where the dynamics becomes non-Markovian. In the non-Markovian phase there\nis a back flow of information from the environment to the system. To check this behavior, we look at Fig. 
\\ref{fig10},\nthe $3D$ plots of $\\frac{{\\rm d} C}{{\\rm d} \\eta_{s}}$ vs $\\eta_{s}$ vs $\\omega_{0} t$ in both the low temperature\n($T_{s} = 1$) and the high temperature ($T_{s} = 20$) limits. In Fig. \\ref{fig10} (a), the plot shows ${\\rm d} C \/ {\\rm d} \\eta_{s}$\nfor the Ohmic spectrum in the low temperature limit. Here we find that initially the slope is negative and monotonic, implying a Markovian nature.\nIncreasing the coupling strength, we observe that the slope changes from negative to positive, indicating a change from\nMarkovian to non-Markovian behavior. For the Ohmic bath in the low temperature regime this crossover in the\ndynamics is very abrupt, and the sudden change is reminiscent of a phase transition. In the initial phase we observe\n$\\tau_{b} \\ll \\tau_{s}$, where $\\tau_{b}$ is the relaxation time of the bath and $\\tau_{s}$ is the evolution time of the system.\nAfter the crossover (i.e., transition) the time scales are related as $\\tau_{b} \\approx \\tau_{s}$, indicative of a new phase.\nIn the low temperature limit $(T_{s} = 1)$, the sub-Ohmic regime is described in Fig. \\ref{fig10} (b). Here there is a region\nwhere the coherence remains constant before the transition to the non-Markovian evolution. The high temperature\n$(T_{s} = 20)$ cases of the $3D$ plots are shown in Fig. \\ref{fig10} (c) and (d) for the Ohmic and sub-Ohmic baths respectively.\nFrom the plots we can observe that the transition region between the Markovian and non-Markovian regimes is more pronounced.\nAlso, the non-Markovian effect is decreased, indicating a smaller amount of environmental back flow of coherence. This\nreduced back flow is due to the decoherence caused by thermal effects.\n\nThe dynamical behavior of the continuous variable system can also be studied using a quantum master equation.\nThe crossover between Markovian and non-Markovian dynamics can be perceived from the changing behavior of the\ndecay rates in the master equation. 
The total density matrix $\\rho_{tot}$ describing the continuous variable system and\nenvironment has dynamics governed by the quantum evolution operator\n$\\rho_{tot}(t) = \\exp\\left( - \\frac{i}{\\hbar} H t \\right) \\rho_{tot} (0) \\exp\\left( \\frac{i}{\\hbar} H t \\right)$.\nThe initial state of the total system is taken to be $\\rho_{tot}(0) = \\rho_{s} (0) \\otimes \\rho_{E} (0) $,\nwhere $\\rho_{E}(0) = \\exp( - \\beta H_{E}) \/ ({\\rm Tr} \\exp(- \\beta H_{E}) )$ as proposed in\n\\cite{feynman1963Vernon,caldeira1983path}. Tracing over the environmental degrees of freedom we obtain the\nreduced density matrix of the system. The master equation for the reduced density matrix reads:\n\\begin{eqnarray}\n\\frac{{\\rm d} \\rho(t)}{{\\rm d} t} &=& - i \\omega_{0}^{\\prime} (t) [ a^{\\dag} a, \\rho(t) ] \\nonumber \\\\\n & & + \\gamma(t) [ 2 a \\rho(t) a^{\\dag} - a^{\\dag} a \\rho(t) - \\rho(t) a^{\\dag} a ] \\nonumber \\\\\n & & + \\widetilde{\\gamma} (t) [ a \\rho(t) a^{\\dag} + a^{\\dag} \\rho(t) a - a^{\\dag} a \\rho(t)\n - \\rho(t) a a^{\\dag} ] \\qquad\n\\end{eqnarray}\nwhere the coefficients are\n\\begin{align*}\n \\omega_{0}^{\\prime} (t) = {\\rm Im} \\left[ \\frac{\\dot{u}(t)}{u(t)} \\right], \\qquad \\gamma(t) = - {\\rm Re} \\left[ \\frac{\\dot{u}(t)}{u(t)} \\right] \\\\\n \\widetilde{\\gamma}(t) = \\dot{v}(t) - 2 v(t) {\\rm Re} \\left[ \\frac{\\dot{u}(t)}{u(t)} \\right].\n\\end{align*}\nHere $\\omega_{0}^{\\prime} (t)$ is the renormalized frequency of the single mode system and $\\gamma(t)$ and $\\widetilde{\\gamma}(t)$ are the\ndissipation and fluctuation coefficients. In Fig.~\\ref{fig11} (a) and (b), we present contour plots of the dissipation coefficient\n$\\gamma(t)$ varying with time and coupling strength. The Ohmic case is described through the plot\nin Fig.~\\ref{fig11} (a), where in the weak coupling limit $\\eta_{s} \\leq 1$ we find that the dissipation is almost constant\nthroughout the entire evolution. 
In the strong coupling regime $\\eta_{s} \\geq 1$, the dissipation rate $\\gamma(t)$ increases\ninitially and then decreases. This change in the direction of the decay rate with time indicates a backflow of information from\nthe environment to the system. A similar behavior of the dissipation coefficient $\\gamma(t)$ is observed in the sub-Ohmic case,\nas shown in Fig.~\\ref{fig11} (b). The reverse flow of information is more transparent from the contour plot of the slope of $\\gamma(t)$.\nWe show the contour plot of the time derivative of the decay rate (${\\rm d}\\gamma\/{\\rm d}t$) for the Ohmic case in Fig.~\\ref{fig11} (c)\nand the sub-Ohmic case in Fig.~\\ref{fig11} (d). From the plots we observe that the slope of the dissipation coefficient $\\gamma(t)$\ntransitions from positive to negative. This is indicative of a crossover from the Markovian to a non-Markovian regime.\n\nNext, we study the variation of the fluctuation coefficient $\\widetilde{\\gamma}(t)$ in Fig. \\ref{fig12}.\nWe show the contour plot of the fluctuation coefficient for the Ohmic spectrum in the low temperature limit ($T_{s}=1$)\nin Fig.~\\ref{fig12} (a) and for the high temperature limit ($T_{s}=20$) in Fig. \\ref{fig12} (c). In the strong coupling\nregime ($\\eta_{s} \\geq 1$), the fluctuation coefficient $\\widetilde{\\gamma}(t)$ rises initially and then starts decreasing\nwith time. This changing behavior of $\\widetilde{\\gamma}(t)$ indicates the environmental back action in the strong coupling\nregime. The contour plot of the fluctuation coefficient for the sub-Ohmic spectrum in the low temperature limit ($T_{s} = 1$)\nis shown in Fig.~\\ref{fig12} (b), and the high temperature limit ($T_{s} = 20$) in Fig. \\ref{fig12} (d). We observe a\nsimilar dynamical crossover of $\\widetilde{\\gamma}(t)$ for the sub-Ohmic spectral density as well. 
Hence the\nenvironmental back flow of information \\cite{breuer2009measure,liu2011experimental,chruscinski2014degree},\ni.e., the non-Markovian behavior, can be inferred from the dynamical behavior of the dissipation and the fluctuation\ncoefficients. The dynamical crossover observed in the coherence dynamics of a single mode state\nis thus verified by the analysis of the dynamics of the fluctuation and dissipation coefficients of the\nmaster equation of the state.\n\n\n\n\\section{Conclusion}\n\\label{conclusions}\nThe transient dynamics of quantum coherence of a single mode Gaussian state is analyzed in the open quantum\nsystem formalism. For this we consider a non-Markovian environment, which is a collection of infinitely many bosonic modes\nof varying frequencies. The entire analysis is carried out in the finite temperature limit with arbitrary system-bath coupling\nstrength. To measure the quantum coherence we use the relative entropy measure, which estimates the distance to the\nthermal state. Hence the quantum coherence can be completely characterized by the determinant of the covariance\nmatrix and the average of the number operator of the Gaussian state. We solve the quantum Langevin equation for the\nbosonic mode operators to obtain the time evolved covariance matrix elements. The time evolution of the field operators is\ndetermined by the two basic nonequilibrium Green's functions, namely $u(t)$ and $v(t)$.\n\nThe time dynamics of quantum coherence is studied for three distinct experimentally realizable Gaussian states. The\ninvestigations have been carried out under three different environmental spectral densities, {\\it viz.}, Ohmic, sub-Ohmic\nand super-Ohmic. When the system interacts weakly with the environment, the quantum coherence\ndecreases monotonically with time. The coherence decay rate is relatively lower for the super-Ohmic bath when compared\nwith the Ohmic and the sub-Ohmic baths. 
Also the rate of decay increases with the temperature. In general we observe\na coherence death for the Ohmic and the sub-Ohmic baths in the short time limit. For the super-Ohmic bath we observe coherence freezing\nwhich is a saturation of coherence at a finite value. In the strong coupling regime, the coherence decreases initially\nand then starts increasing resulting in a revival of coherence for the Ohmic and the sub-Ohmic baths. This is followed by\na stabilization of coherence with an oscillatory phase. This oscillatory behavior is due to the environmental backaction\nwhich is a feature of non-Markovian dynamics in the system. Hence in our study we observe a transition or crossover\nfrom the Markovian dynamics to the non-Markovian dynamics. The non-Markovian memory effect helps us to restore\nquantum resources in the strong coupling limit. The rate at which the crossover in the dynamics occurs\ndepends on the parameters of the Gaussian state as well as on the environmental parameters. To verify the existence of the crossover\nwe study the dynamical behavior of the quantum system using a master equation approach. Here we compute the dissipation and\nfluctuation coefficients in the master equation of the reduced density matrix. Throughout the evolution, the\ndissipation rate is almost a constant in the weak coupling limit. In the strong coupling limit, the dissipation rate\nincreases initially and then decreases, which implies a Markovian to non-Markovian transition. A plot of the time\nderivative of the decay rate shows that the slope of the dissipation coefficient changes from being positive to negative.\nThis is a clear indication of the crossover from the Markovian to the non-Markovian dynamics. This crossover\nfeature is also verified from the dynamics of the fluctuation coefficient. 
Thus we find that the quantum system has both\nMarkovian and non-Markovian behavior at different times, and there is a dynamical crossover from the Markovian to\nthe non-Markovian regime. This change from the Markovian to the non-Markovian regime is not gradual.\nInstead, it is either a sudden change or there is a time period between the Markovian and non-Markovian regimes\nwhere there is no dynamical change. This raises an important question as to whether this dynamical change can\nbe considered as some kind of phase transition between two different regimes with totally different relaxation time\nscales. While the results seem to point towards such a conclusion, a more detailed study from the point of view\nof quantum phase transitions is needed to confirm and validate this; such a study will form the scope of our future work.\nA similar observation of an abrupt change from Markovian to non-Markovian dynamics has also been made in\nRef. \\cite{pang2018abrupt}, where the authors study a single qubit system in contact with a single qubit environment.\nIn our work we consider a single mode Gaussian state in contact with a structured bosonic reservoir with a collection of\ninfinitely many modes to describe a general non-Markovian environment. The investigations carried out in the present work can be\nexperimentally verified by measuring the quantum coherence of Gaussian systems\n\\cite{bowen2004experimental,furusawa1998unconditional,yonezawa2004demonstration,diguglielmo2007experimental}.\n\n\n\\section*{Acknowledgements}\nMd.~Manirul Ali was supported in part by the Centre for Quantum Science and Technology, Chennai Institute of Technology, India,\nvide funding number CIT\/CQST\/2021\/RD-007.\nChandrashekar Radhakrishnan was supported in part by a seed grant from IIT Madras to the Centre for\nQuantum Information, Communication and Computing.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}