id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2305.02012 | A Perspective on Explainable Artificial Intelligence Methods: SHAP and
LIME | eXplainable artificial intelligence (XAI) methods have emerged to convert the
black box of machine learning (ML) models into a more digestible form. These
methods help to communicate how the model works with the aim of making ML
models more transparent and increasing the trust of end-users into their
output. SHapley Additive exPlanations (SHAP) and Local Interpretable Model
Agnostic Explanation (LIME) are two widely used XAI methods, particularly with
tabular data. In this perspective piece, we discuss the way the explainability
metrics of these two methods are generated and propose a framework for
interpretation of their outputs, highlighting their weaknesses and strengths.
Specifically, we discuss their outcomes in terms of model-dependency and in the
presence of collinearity among the features, relying on a case study from the
biomedical domain (classification of individuals with or without myocardial
infarction). The results indicate that SHAP and LIME are highly affected by the
adopted ML model and feature collinearity, raising a note of caution on their
usage and interpretation. | Ahmed Salih, Zahra Raisi-Estabragh, Ilaria Boscolo Galazzo, Petia Radeva, Steffen E. Petersen, Gloria Menegaz, Karim Lekadir | 2023-05-03T10:04:46Z | http://arxiv.org/abs/2305.02012v3 | # Commentary on explainable artificial intelligence methods: SHAP and LIME
###### Abstract
eXplainable artificial intelligence (XAI) methods have emerged to convert the black box of machine learning models into a more digestible form. These methods help to communicate how the model works with the aim of making machine learning models more transparent and increasing the trust of end-users into their output. SHapley Additive exPlanations (SHAP) and Local Interpretable Model Agnostic Explanation (LIME) are two widely used XAI methods particularly with tabular data. In this commentary piece, we discuss the way the explainability metrics of these two methods are generated and propose a framework for interpretation of their outputs, highlighting their weaknesses and strengths.
_Article history_:
Footnote 1: Corresponding author email: [email protected]
Footnote 2: Zahra Raisi-Estabragh email: [email protected]
Footnote 3: Ilaria Boscolo Galazzo email: [email protected]
Footnote 4: Petia Radeva email: [email protected]
Footnote 5: Steffen E. Petersen email: [email protected]
Footnote 6: Gloria Menegaz email: [email protected]
Footnote 7: Karim Lekadir email: [email protected]
## 1 Introduction
Machine learning methods have shown great success in a variety of domains, e.g., biology [1], medicine [2], economics [3] and education [4]. However, such success is accompanied by complexity in understanding how these models work, why the models make a specific decision, which features/regions most influence the model output, and the degree of certainty the model has in the outcome generated. All of these questions and more are raised by end users, especially when advanced models including deep neural networks are implemented. Accordingly, a new field of research has emerged named eXplainable artificial intelligence (XAI), with the main objective of demystifying 'black box' models into a more comprehensible form [5]. XAI is indispensable to increase model transparency and the trust of end-users in the model outcome [6]. Such additional reassurances are essential for wide implementation of such models, particularly in high-risk fields such as healthcare.
Several approaches have been proposed as XAI methods dealing with a variety of data and model types, aiming to locally and globally explain model outputs, of which SHAP [7] and LIME [8] are the most popular. SHapley Additive exPlanations (SHAP) [7] is an XAI method designed based on game theory. It aims to explain any model by considering each feature (or predictor) in the model as a player and the model outcome as the payoff. SHAP provides local and global explanations, meaning that it is able to explain the role of the features both for all instances and for a specific instance. Local Interpretable Model Agnostic Explanation (LIME) [8] is another XAI method that aims to explain how the model works locally for a specific instance. It approximates any complex model and transforms it into a local interpretable model for a specific instance. Many other approaches and methods have been proposed with similar aims of making machine learning models more interpretable. SHAP and LIME are the two most common XAI methods applied to tabular data. Despite the limitations of SHAP and LIME in terms of uncertainty estimates, generalization, feature dependencies, and inability to infer causality [9], they have substantial value in explaining and interpreting complex machine learning models.
However, does the end user understand how these XAI methods work? And why do they identify specific regions/features as more informative than others? Is it enough for the end user to know that these features/regions are more informative because they increase the model output, without knowing how the XAI method came up with such a result? For example, when SHAP assigns a high or low score to a feature, does the end user know how this score is calculated? SHAP and LIME perform many analyses in the background and solve complex equations to come up with their explanations. In many settings, complex models will be interpreted by non-expert end-users, who may find understanding the workings of XAI methods challenging. It is not expected that end users from different domains understand every minutia of XAI methods, but it is vital that they are aware of the general framework of the XAI method used. While XAI methods aim to unveil the complexity of black box models, they themselves suffer from the same issue, in that their usefulness may be limited by the complexity of their outputs. In this commentary piece, we discuss the SHAP and LIME XAI methods, highlighting their underlying assumptions and whether end users are able to grasp their key concepts appropriately. We also present some notions to increase the understanding of XAI methods and promote their appropriate use by researchers.
### SHAP
SHAP is a post-hoc model-agnostic method that can be applied to any machine learning model [7]. It is based on game theory, which calculates the contribution of each player to the payout. In machine learning models, the players and the payout are substituted by the features and the model outcome, respectively. SHAP calculates a score for each feature in the model, which represents its weight toward the model output. To calculate the scores, it considers all coalitions between the features, covering all cases where all features or any subset of features are in the model. Because the computational complexity of SHAP grows as the number of features increases, an approximate method named Kernel SHAP has been proposed [7].
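To make the workflow concrete, the following is a minimal, illustrative sketch of how Kernel SHAP might be applied with the open-source `shap` Python package; the synthetic data, the classifier and the background-sample size are placeholder assumptions rather than the setup used in this study.

```python
# Minimal sketch (illustrative): global SHAP explanation of a fitted classifier
# using Kernel SHAP. The data and model below are placeholder assumptions.
import shap
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Kernel SHAP approximates Shapley values from coalitions of features;
# a background sample keeps the computation tractable.
background = shap.sample(X_train, 100)
explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = explainer.shap_values(X_test[:50])

# Global view: features ranked by mean absolute SHAP value.
shap.summary_plot(shap_values, X_test[:50])
```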
SHAP has been applied widely in a variety of domains in order to explain the model output, either locally or globally [10; 11; 12; 13]. However, there are some points researcher end-users should be aware of when applying SHAP. Firstly, SHAP is a model-dependent method. This means that the outcome of SHAP depends on the model used, with different machine learning models potentially leading to different explainability scores generated by SHAP. Accordingly, when different models are applied, the top features identified by SHAP may differ between models. To illustrate the model-dependency point, Figure 1 shows SHAP summary plots for four classification models. The plots rank the features in order of importance, for each model, based on their effect toward the model outcome. The four classification models were applied to 1500 subjects (20% test) to classify them as cases (myocardial infarction, MI) or controls (No MI). The plot shows that there is agreement on the top three most informative features among the tested models; however, there is notable variation in the order of the other seven features among the models. For instance, body mass index is the least important feature in DT and LR, while it is the third in the LGBM model and the seventh in the SVC model. The positions of alcohol consumption and waist-hip ratio similarly vary between models. In addition, the last five features have almost zero SHAP scores in the DT model, indicating they do not affect the model output. It is worth mentioning that this variation is observed even though the accuracies of the models are close to each other.
Secondly, another potential pitfall lies in the interpretation of the SHAP scores or values. This is because the assigned score does not represent the same unit as that of the feature. End users should focus on the order of the features, which represents their significance. Third, SHAP is not protected against biased classifiers and might generate unrealistic explanations that do not capture the underlying biases [14]. Finally, SHAP assumes the features are independent, i.e., that there is no correlation between the features used in the model. Such an assumption will affect the score (weight) assigned to each feature in the model. Some features might be assigned a low score despite being significantly associated with the outcome, because they do not improve the model performance due to their collinearity with other features that already improve it. Although some works have tried to deal with the issue of collinearity [15, 16], the proposed methods are either limited to a local explanation [15] or the
Figure 1: SHAP output to explain the four models globally. DT: decision tree; LGBM: light gradient-boosting machine; LR: logistic regression; SVC: support vector machines classifier; ACC: accuracy; MI: myocardial infarction.
explanation is user-dependent [16]. Another approach was proposed to assess the stability of the list of informative features generated by SHAP, particularly when the features are collinear [17]. The method calculates a value named the normalized movement rate (NMR), which assesses how the order of the features (generated by SHAP) is affected when the top features are removed from the model iteratively. The smaller the NMR, the more stable the list of informative features.
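To illustrate the idea (not the exact definition from [17]), the sketch below measures how much a SHAP-based feature ranking moves when the current top feature is removed iteratively; `rank_features` is a hypothetical callback assumed to retrain the model on the remaining features and return them ordered by SHAP importance, and the normalization used in the published NMR is not reproduced here.

```python
# Illustrative sketch only: a stability check in the spirit of NMR [17].
# `rank_features(X, y, feats)` is a hypothetical callback that retrains a model
# on the feature subset `feats` and returns those features ordered by SHAP
# importance. The exact NMR normalization from [17] is not reproduced here.
import numpy as np

def movement_rate(X, y, rank_features, n_rounds=3):
    remaining = list(range(X.shape[1]))
    prev = rank_features(X, y, remaining)
    movements = []
    for _ in range(n_rounds):
        remaining = [f for f in remaining if f != prev[0]]  # drop current top feature
        cur = rank_features(X, y, remaining)
        # average absolute change in rank position of the surviving features
        moves = [abs(cur.index(f) - prev.index(f)) for f in cur]
        movements.append(np.mean(moves))
        prev = cur
    return float(np.mean(movements))  # smaller value => more stable feature list
```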
### LIME
LIME is a model-agnostic local explanation method [8]. It explains the influence of each feature on the outcome for a single subject. In classification models, it also shows the probability that the subject belongs to each class. In addition, it shows the contribution of each feature to each class with a visualized plot.
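As an illustration, the following minimal sketch shows how such a local explanation might be produced with the open-source `lime` package; the synthetic data, classifier, feature names and class names are placeholder assumptions.

```python
# Minimal sketch (illustrative): local explanation of one instance with LIME.
# The data, classifier, feature names and class names are placeholder assumptions.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=[f"feature_{i}" for i in range(X.shape[1])],
    class_names=["Non-MI", "MI"],
    mode="classification",
)
# LIME perturbs the instance, fits a local linear surrogate model, and
# reports its coefficients as feature weights for this single prediction.
exp = explainer.explain_instance(X_test[0], model.predict_proba, num_features=10)
print(exp.as_list())        # (feature condition, local weight) pairs
print(exp.predict_proba)    # predicted class probabilities for this instance
```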
However, LIME converts any model into a local linear model, and then reports the coefficient values, which represent the weights of the features in the model. In other words, if the user applies a model that takes into account non-linearity between the features and the outcome, this might be missing from the explanation generated by LIME, because the non-linearity is lost in the linear model LIME generates. In addition, LIME is also a model-dependent method, meaning the model used will affect the outcome of LIME. Figure 2 shows the output of LIME for the same individual using the same classifiers as described before. The first part of the plot shows the probability that the subject is classified as a control (Non-MI) or with MI for each of the models. The second part (in the middle) shows the weight (coefficient value) of each feature in the local linear model, while the last part shows the actual value of each feature. Moreover, the plot shows each feature's contribution toward each class based on the assigned color. First of all, the predicted probability of the subject being a control (Non-MI) or having MI differs for each of the models; LGBM is the most certain while DT is the least. In addition, the plot shows that the same feature contributes to different classes across the tested models. For example, alcohol consumption contributes to the MI class in LR and SVC, while it contributes to the Non-MI class in DT and LGBM. Body mass index and Townsend deprivation contribute to the MI class in one model, while they contribute to the Non-MI class in the other three models. In addition, the plot shows that the features have similar effect sizes even though four different models were used. This is because LIME generates an approximate local linear model and then reports the weights of the features.
Figure 2: **LIME output to explain the model locally for the same instance using four classifiers. DT: decision tree; LGBM: light gradient-boosting machine; LR: logistic regression; SVC: support vector classifier; ACC: accuracy; MI: myocardial infarction.**
With regard to collinearity, the interpretation of the weights generated by LIME is that a one-unit increase or decrease in a feature leads to a corresponding increase or decrease in the outcome while all other features are held constant. Such an assumption is not realistic with collinear data, where groups of features might change simultaneously. In the example above, most of the features are collinear, such as high cholesterol and body mass index. This is indeed exactly how coefficient values are interpreted in linear models; but because they are generated by LIME, the user might think that they carry more power and meaning than classical coefficient values in machine learning models. Finally, similar to SHAP, LIME can be fooled by biased classifiers, and the generated explanations do not reflect or represent the biases [14].
## 2 Recommendations
SHAP and LIME are two XAI methods that aid understanding of machine learning models in different research fields. They have been implemented in some sensitive domains [18; 19; 20] where misinterpreting the outcomes might be very expensive or critical. Even data scientists who work daily on machine learning and XAI can over-trust the explanations generated by XAI methods, misuse interpretability tools, and fail to accurately understand the visualized output of the XAI methods [21].
It is crucial that SHAP results are presented alongside the output plots with simple language explaining the outcomes and the assumptions behind SHAP, e.g., that features are assumed independent and that the method is model-dependent. Moreover, it would be better for the end user to implement different machine learning models when the features are collinear, in order to compare and contrast the outcome of SHAP from each model. Thereafter, the NMR value [17] would be useful to pick the model that presents the most stable list of informative features generated by SHAP, if the aim is to explain the model globally. On the other hand, if the aim is to explain the model locally for a single instance or sub-group of individuals, then the approximated SHAP value [15] would be the better choice, as this is a modified version of SHAP that takes into account the collinearity among the features. In addition, converting the scores of each feature of the model into a more digestible form, e.g., a coefficient value, would inevitably increase the understanding of the score and ultimately of the method itself. It would be worthwhile for LIME to provide an explanation regarding the linearity of the local model with respect to the model outcome, as users might not be familiar with the concept behind LIME. Users will be more aware of and better understand the outcome when simple language accompanies it. Moreover, the explanation of LIME might differ for another instance even when using the same model. In other words, the interpretation of LIME only applies to one subject and cannot be used or considered as a general interpretation of the whole model.
## 3 Conclusions
In the current commentary, we discussed two XAI methods widely used especially with tabular data. The points highlighted and discussed are significant and critical to consider when XAI methods are implemented in any domain. Considering that end users may not come from a technical background, it is essential that they are aware of these issues in order to use the methods most appropriately. These points are meant to increase the understanding of the outcomes of SHAP and LIME and ultimately produce more meaningful explanations.
## Funding
AS is supported by a British Heart Foundation project grant (PG/21/10619). IBG and GM acknowledge support from Fondazione CariVerona (Bando Ricerca Scientifica di Eccellenza 2018, EDIPO project - reference number 2018.0855.2019). ZRE recognises the National Institute for Health and Care Research (NIHR) Integrated Academic Training programme which supports her Academic Clinical Lectureship post and was also supported by British Heart Foundation Clinical Research Training Fellowship No. FS/17/81/33318. SEP acknowledges support from the National Institute for Health and Care Research (NIHR) Biomedical Research Centre at Barts and has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 825903 (euCanSHare project). This article is supported by the London Medical Imaging and Artificial Intelligence Centre for Value Based Healthcare (AI4VBH), which is funded from the Data to Early Diagnosis and Precision Medicine strand of the government's Industrial Strategy Challenge Fund, managed and delivered by Innovate UK on behalf of UK Research and Innovation (UKRI).
Views expressed are those of the authors and not necessarily those of the AI4VBH Consortium members, the NHS, Innovate UK, or UKRI.
## Disclosure
SEP provides consultancy to Cardiovascular Imaging Inc, Calgary, Alberta, Canada. The remaining authors have no disclosures.
## Statement
The paper is under consideration at Pattern Recognition Letters.
|
2306.04133 | Answering Compositional Queries with Set-Theoretic Embeddings | The need to compactly and robustly represent item-attribute relations arises
in many important tasks, such as faceted browsing and recommendation systems. A
popular machine learning approach for this task denotes that an item has an
attribute by a high dot-product between vectors for the item and attribute -- a
representation that is not only dense, but also tends to correct noisy and
incomplete data. While this method works well for queries retrieving items by a
single attribute (such as \emph{movies that are comedies}), we find that vector
embeddings do not so accurately support compositional queries (such as movies
that are comedies and British but not romances). To address these set-theoretic
compositions, this paper proposes to replace vectors with box embeddings, a
region-based representation that can be thought of as learnable Venn diagrams.
We introduce a new benchmark dataset for compositional queries, and present
experiments and analysis providing insights into the behavior of both. We find
that, while vector and box embeddings are equally suited to single attribute
queries, for compositional queries box embeddings provide substantial
advantages over vectors, particularly at the moderate and larger retrieval set
sizes that are most useful for users' search and browsing. | Shib Dasgupta, Andrew McCallum, Steffen Rendle, Li Zhang | 2023-06-07T04:04:36Z | http://arxiv.org/abs/2306.04133v1 | # Answering Compositional Queries with Set-Theoretic Embeddings
###### Abstract
The need to compactly and robustly represent item-attribute relations arises in many important tasks, such as faceted browsing and recommendation systems. A popular machine learning approach for this task denotes that an item has an attribute by a high dot-product between vectors for the item and attribute--a representation that is not only dense, but also tends to correct noisy and incomplete data. While this method works well for queries retrieving items by a single attribute (such as _movies that are comedies_), we find that vector embeddings do not so accurately support compositional queries (such as _movies that are comedies and british but not romances_). To address these set-theoretic compositions, this paper proposes to replace vectors with box embeddings, a region-based representation that can be thought of as learnable Venn diagrams. We introduce a new benchmark dataset for compositional queries, and present experiments and analysis providing insights into the behavior of both. We find that, while vector and box embeddings are equally suited to single attribute queries, for compositional queries box embeddings provide substantial advantages over vectors, particularly at the moderate and larger retrieval set sizes that are most useful for users' search and browsing.
Machine Learning, ICML
## 1 Introduction
Representing which items correspond to which attributes is a key capability in many applications. It can facilitate tasks such as user browsing, faceted search, and recommendations which are crucial in diverse platforms for helping the user discover useful content.
A popular machine learning approach for this task represents items and attributes by vector embeddings, and indicates that an item has an attribute by a high dot-product among their vectors. An embedding representation has multiple advantages: not only does it compactly represent the full item-attribute co-occurrence matrix, but it also generalizes beyond the training data in ways that tend to correct noisy and incomplete observations. For example, if the set of video attribute data includes a diverse set of user-generated tags, the association of these tags will certainly be incomplete and noisy; however, users' ability to reliably browse by such diverse tags is a valuable amenity. The embedding representation enables a movie to be beneficially labeled with tags beyond the incomplete set hand-applied by users.
A simple use-case for item-attribute relations is querying for items having a single relation, such as _songs in the jazz genre_. The predominant, traditional evaluation of such systems addresses this single-attribute case. However, there is increasing interest in accurately handling richer, multi-attribute queries representing set-theoretic compositional queries, such as _songs that are jazz but not smooth-jazz_ or _movies for children that are animated but not having monsters_. Formally, this corresponds to conjunctions of (possibly negated) attributes, e.g., _jazz \(\wedge\neg\) smooth-jazz_ or _children \(\wedge\) animation \(\wedge\neg\) monster_. While the single attribute case is well studied, there is much less work on how to handle compositional queries in embedding spaces. The combinations of concepts are inherently set-theoretic, which geometrically, we argue, are more naturally represented by regions rather than vector points.
This paper studies set-theoretic compositional attribute queries on items, introduces a new benchmark dataset, analyzes the behavior of vector embeddings on such queries, and proposes to replace vectors with box embeddings (Vilnis & McCallum, 2015), a region-based representation that can be thought of as learnable Venn diagrams, supporting region-based intersection and negation.
Our results and analysis shed light on what makes the compositional task so challenging: For single concept classification, it usually suffices to focus on the "top" results. However, for compositional queries, we are forced to achieve good quality on the tail and understand the boundaries of concepts. For instance, given a query _jazz \(\wedge\neg\) smooth jazz_, it is not sufficient to understand the top _jazz_ and the top
_smooth jazz_ tracks, but the model needs to understand the boundaries of _smooth jazz_ to make judgments about songs.
In this paper, we first describe compositional queries represented in vector embedding spaces. We discuss the prior work on heuristics to combine results based on a probabilistic interpretation of similarity scores, and based on vector space arithmetic (Mikolov et al., 2013), and discuss the weaknesses of the vector-point-based representations. Then we discuss various region-based set theoretic embeddings, settling on the particular advantages of box embeddings (Vilnis and McCallum, 2015). Box embeddings naturally represent regions whose volume is easy to calculate, are closed under intersection, efficiently represent negation for the common case of a small number of negations, and naturally have good generalization properties. Although the box embedding methodology has been introduced previously, this paper is the first to analyze its compositional properties empirically in depth.
A key challenge to empirically studying compositional queries is the lack of existing datasets and benchmarks for this task. In this work, we propose a new benchmark that contains a set of compositional queries with ground truth items, as well as noisy and incomplete data for training1. Our raw data sources are the combination of the widely-studied MovieLens dataset (Harper and Konstan, 2015) and Wikidata2. We use the user-generated tagging data in MovieLens as the noisy and incomplete training data for the relationship between movies and genres. We use Wikidata to construct ground truth semantic labels for the movies. In addition, we carry out statistical analysis on the co-occurrence of labels to form meaningful compositional queries. We employ cross-verification to make sure the ground truth data is reasonably accurate and complete.
Footnote 1: Link to the benchmark: [https://github.com/google-research-datasets/genre2movies](https://github.com/google-research-datasets/genre2movies)
Footnote 2: [https://www.wikidata.org/](https://www.wikidata.org/)
We carry out a systematic empirical study and analysis of both vector and box embedding models on this benchmark. Our study finds that, while vector and box embeddings are equally suited for singleton queries, for compositional queries box embeddings provide significant accuracy gains, particularly when evaluated at the moderate and larger retrieval set sizes that are most useful when users want to explore and browse query results.
## 2 Related Work
In this section, we aim to put our work into the perspective of recent advancements of region-based representations, compositional querying of vector embeddings, and compositional generalization.
### Compositional Queries with Vector Embeddings
It is common in machine learning to represent discrete entities such as items or attributes by vectors (Bengio et al., 2013) and to learn them by fitting the training data. Besides semantic similarity, some have claimed that learned vectors have compositional properties through vector arithmetic, for example in empirical analysis of word2vec (Mikolov et al., 2013) and GLOVE (Pennington et al., 2014), and some theoretical analysis (Levy and Goldberg, 2014; Arora et al., 2018). However, anecdotally, many have found that the compositional behavior of vectors is far from reliable (Rogers et al., 2017). Our paper provides a comprehensive evaluation of vector embeddings on compositional queries, and compares the results to a region-based alternative.
### Region Based Embeddings
Often concepts have different generality, and their conjunction, disjunction and negation imply yet another concept; e.g., 'horror movies' is a broader concept than 'ghost movies'. One of the first efforts to capture this specificity/generality and the asymmetric relation between entities was made by Vilnis and McCallum (2015), where the authors propose to use Gaussian distributions to represent each concept, and KL divergence as a score function. Many subsequent region-based embedding methods have been proposed to solve this problem. Chamberlain et al. (2017) use hyperbolic disks, and Ganea et al. (2018) use hyperbolic cones; however, these are not closed under intersection, nor are their intersections easily computable. Vendrov et al. (2016) and Lai and Hockenmaier (2017) use an axis-aligned cone to represent an entailment relation. Vilnis et al. (2018) extend the work of Lai and Hockenmaier (2017) by adding an upper bound to the cone, resulting in a strictly increased representational capacity. Li et al. (2019) and Dasgupta et al. (2020) propose improved training methods to handle the difficulties inherent in gradient-descent based training. In their work, Dasgupta et al. (2022) use box embeddings to capture word semantics from large text corpora. They demonstrate that the trained box embeddings capture set-theoretic semantics, e.g., 'tongue' \(\cup\) 'body' is similar to 'mouth' but 'tongue' \(\cup\) 'language' is similar to 'dialect'. These compositions are non-trivial for vector-based embeddings. However, in their task, they could not quantify the degree to which these embeddings capture set semantics. In contrast, we develop an evaluation method to measure the degree of effectiveness of box embeddings in capturing the semantics of compositional queries for recommendation systems.
### Compositional Generalization
There have been many recent efforts to understand whether natural language processing systems demonstrate systematic intelligence (Fodor and Pylyshyn, 1988), which is the ability to produce concepts through different combinations of already known concepts. Many benchmark datasets have been proposed toward that end, e.g., SCAN (Lake and Baroni, 2017), COGS (Kim and Linzen, 2020), CFQ (Keysers et al., 2020). All these benchmarks are fundamentally based on creating train/test splits such that the distribution of individual concepts remains the same between training and test, but the distribution of compounds created by those concepts differs as much as possible in the test phase. In this work, we rely on the semantics of the embedding space to answer compositional queries; thus, unlike these benchmarks, we do not train on any compounds. Furthermore, our benchmark here focuses on incomplete and noisy database completion. Sun et al. (2020); Ren et al. (2020); Ren et al. (2020) are some of the recent works that focus on logical queries over knowledge bases (KB).
## 3 Problem Formulation
Let \(I\) be a set of \(m\) _items_ and let \(A\) be a set of \(n\) _attribute_ descriptors of the items. For example, \(I\) could be a set of movies and \(A\) a set of genres or actors. Let \(O\subseteq I\times A\) be a set of item-attribute pairs. Each pair \((i,a)\in O\) represents an assignment of an attribute to an item. For example, it could represent that a particular movie is a _comedy_ movie. Alternatively, \(O\) can be seen as an \(m\times n\) matrix with \(O_{i,a}=1\iff(i,a)\in O\).
**Compositional Query**: We define a _query_ as a logical combination of attributes. We denote the set of all queries as \(Q\). A _singleton_ query consists of a single attribute \(a\in A\). We are especially interested in conjunctive compositional queries of the form \(q_{1}\wedge q_{2}\wedge\cdots\wedge q_{k}\), where each \(q_{i}\) is of the form \(a_{i}\) or \(\neg a_{i}\) for some attribute \(a_{i}\). Some examples of conjunctive queries are _intersection_ queries \(a_{1}\wedge a_{2}\), such as _'comedy and romantic'_; and _difference_ queries, such as _'comedy but not romantic comedy'_. In our evaluation (Section 5), we consider combinations of up to three attributes with at least one positive attribute.
Conjunctive queries are probably the most common and natural queries - they also entail clear semantics. The other main reason for focusing on conjunctive queries is that they convey the main challenges of compositional queries - conjunction refines the query so as to lead to increasingly "sparse" results, and the negation is usually missing from the training data, forcing the model to understand the "boundary" of an attribute. Indeed, as we shall show later, these properties do cause difficulty for models that work well on evaluation metrics for individual attributes.
**Querying with Ground Truth Data**: We are interested in matching items to a query based on the item-attribute assignments \(O\). In the case of complete annotations \(O\), this is a straightforward process. We denote the matching function as \(\mathcal{I}\), which maps a query to a set of items. For a singleton query \(q=a\), the matching function is
\[\mathcal{I}(a)=\{i\in I|(i,a)\in O\}. \tag{1}\]
The matching function for an _intersection_ query is
\[\mathcal{I}(a_{1}\wedge a_{2}) =\{i\in I|(i,a_{1})\in O\wedge(i,a_{2})\in O\}\] \[=\mathcal{I}(a_{1})\cap\mathcal{I}(a_{2}) \tag{2}\]
and for _difference_ queries:
\[\mathcal{I}(a_{1}\wedge\neg a_{2}) =\{i\in I|(i,a_{1})\in O\wedge(i,a_{2})\not\in O\}\] \[=\mathcal{I}(a_{1})\setminus\mathcal{I}(a_{2}) \tag{3}\]
As we can observe from equations 2 and 3, querying with compositional queries is essentially equivalent to set theoretic operations of item-sets for individual attributes. In an ideal scenario, where the ground truth \(O\) is fully observed, the matching process is well defined.
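As a minimal illustration, the matching functions above reduce to ordinary set operations when \(O\) is fully observed; the toy item-attribute pairs below are assumptions for demonstration only.

```python
# A minimal sketch of the matching function I(.) with fully observed O:
# compositional queries reduce to set operations on per-attribute item sets.
# The toy data here is purely illustrative.
O = {("MovieA", "comedy"), ("MovieA", "british"),
     ("MovieB", "comedy"), ("MovieB", "romance")}

def items(attribute):                     # Equation (1)
    return {i for (i, a) in O if a == attribute}

def intersection(a1, a2):                 # Equation (2): a1 AND a2
    return items(a1) & items(a2)

def difference(a1, a2):                   # Equation (3): a1 AND NOT a2
    return items(a1) - items(a2)

print(intersection("comedy", "british"))  # {'MovieA'}
print(difference("comedy", "romance"))    # {'MovieA'}
```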
**Querying with Incomplete and Noisy Data**: In our work, we consider the case where the attribute assignments are only partially observed and noisy. This is a common scenario in real-world applications. Let us denote these noisy incomplete assignments as \(O^{\prime}:I\times A\rightarrow\mathbb{R}\) or equivalently in matrix notation \(O^{\prime}\in\mathbb{R}^{I\times A}\). Note that here assignments are not boolean but can be real valued numbers. Values of zero do not imply that the item does not have the attribute but could just mean that we don't know its assignment. In general, any number \(O^{\prime}_{i,a}\) represents a weak and noisy indicator of the true assignment \(O_{i,a}\) of \(a\) to \(i\). Our work aims at developing techniques to predict the matching function \(\hat{\mathcal{I}}:Q\to 2^{I}\) from the noisy data \(O^{\prime}\). Instead of predicting the membership, we are usually more interested in ranking the items for a query, and we will propose methods that produce membership scores \(\hat{Y}:Q\times I\rightarrow\mathbb{R}\). These scores can be used for ranking the items by decreasing score, i.e., \(i_{1}\) is more likely to match a query \(q\) than \(i_{2}\) if \(\hat{Y}(q,i_{1})>\hat{Y}(q,i_{2})\).
## 4 Method
In this work, we focus on embedding based models for representing attributes and for answering compositional queries. In these models, we embed each attribute and item in a geometric space, e.g. as a vector or a box, and use their geometry, e.g. distance or volume, to capture the semantic relationship. Embedding models are quite common and effective for answering singleton queries. However, answering compositional queries is more challenging. We will present approaches for compositional queries based on vector and box embeddings. While the combination for vector embeddings will be heuristic, box embeddings naturally fit the compositional aspect because of their set-theoretic nature.
### Vector Embeddings
Vector based methods represent items and attributes by embedding vectors, which are learned by fitting some, typically co-occurrence based, learning objective. In this work, we focus on the matrix factorization method for learning a vector based embedding model. As shown by Pennington et al. (2014), the matrix factorization method, by setting the objective values and the weights properly, can be used to produce embeddings that achieve state-of-the-art evaluation results on the word analogy task, which is related to the compositional query task.
In such a method, the incomplete and noisy matrix \(O^{\prime}\) can be factorized as,
\[O^{\prime}\approx U\,V^{T} \tag{4}\]
with \(U\in\mathbb{R}^{I\times d}\) and \(V\in\mathbb{R}^{A\times d}\). Each row vector \(u_{i}\in\mathbb{R}^{d}\) in \(U\) (or \(v_{a}\in\mathbb{R}^{d}\) in \(V\)) is the vector representation of the item \(i\) (respectively attribute \(a\)). We would like their dot product \(\langle u_{i},v_{a}\rangle\) to be close to the observation \(O^{\prime}_{i,a}\), where the closeness is defined through a loss function, with more details described below.
#### 4.1.1 Training
There is a large body of literature on defining the loss function for learning \(U\) and \(V\). The main options are on how the loss is defined for each \((i,a)\) pair and on how much weight is given to each pair. In this paper, we discuss a few broadly used methods.
First, we apply a transformation function \(\Phi:\mathbb{R}\rightarrow\mathbb{R}\) to the dot product to obtain a score, i.e.
\[\hat{Y}(a,i):=\Phi(\langle u_{i},v_{a}\rangle)\,. \tag{5}\]
Here we consider either the identity or the sigmoid function for \(\Phi\). For each choice of \(\Phi\), we define a loss function for measuring how "far" the prediction is from the observation. When \(\Phi\) is the identity function, we use the hinge-loss; and when \(\Phi\) is the sigmoid function, we use the cross-entropy loss.
Since our data contains only positive pairs, we apply the following common negative sampling method to create negative samples: whenever the optimization algorithm encounters a non-zero item-attribute pair \(O^{\prime}_{i,a}\), it also samples a few negatives by randomly changing either the item or the attribute, i.e. by setting \(O^{\prime}_{i^{\prime},a}=0\) and \(O^{\prime}_{i,a^{\prime}}=0\) for a randomly chosen \(i^{\prime}\neq i\) and \(a^{\prime}\neq a\). Then we learn \(U,V\) by minimizing the loss through the stochastic gradient method (Koren et al., 2009).
Intuitively, the loss defined above encourages the embeddings of attributes and items that have non-zero values in \(O^{\prime}\) to come closer in terms of the dot product similarity measure, while pushing apart embeddings of those item-attribute pairs that have a value of zero in \(O^{\prime}\).
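The following is a minimal PyTorch sketch of this training loop, assuming the sigmoid/cross-entropy variant and uniform negative sampling; the dimensions, the optimizer and the number of negatives are illustrative choices, not the exact configuration used in the experiments.

```python
# Minimal sketch (illustrative): matrix-factorization training with sigmoid
# scores, cross-entropy loss, and uniform negative sampling. Sizes and
# hyperparameters are placeholder assumptions.
import torch

n_items, n_attrs, d = 1000, 200, 32
U = torch.nn.Embedding(n_items, d)          # item embeddings u_i
V = torch.nn.Embedding(n_attrs, d)          # attribute embeddings v_a
opt = torch.optim.Adagrad(list(U.parameters()) + list(V.parameters()), lr=0.1)
bce = torch.nn.BCEWithLogitsLoss()          # sigmoid + cross-entropy in one call

def train_step(pos_items, pos_attrs, num_negatives=4):
    # positive pairs (i, a) with target 1
    items, attrs = [pos_items], [pos_attrs]
    targets = [torch.ones_like(pos_items, dtype=torch.float)]
    # negatives: corrupt either the item or the attribute, target 0
    for _ in range(num_negatives):
        items.append(torch.randint(0, n_items, pos_items.shape))
        attrs.append(pos_attrs)
        targets.append(torch.zeros_like(pos_items, dtype=torch.float))
        items.append(pos_items)
        attrs.append(torch.randint(0, n_attrs, pos_attrs.shape))
        targets.append(torch.zeros_like(pos_items, dtype=torch.float))
    i, a, t = torch.cat(items), torch.cat(attrs), torch.cat(targets)
    logits = (U(i) * V(a)).sum(dim=-1)       # dot product <u_i, v_a>
    loss = bce(logits, t)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```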
#### 4.1.2 Singleton Query
The trained embedding model provides a natural way to answer singleton queries, i.e. for \(q=a\), we can directly use the score \(\hat{Y}(a,i)\) as defined by (5). When we want to predict a boolean membership \(\hat{\mathcal{I}}(a)\), the ranked scores can be thresholded
\[\hat{\mathcal{I}}(a)=\{i:\hat{Y}(a,i)>\tau_{a}\}, \tag{6}\]
where \(\tau_{a}\) can be chosen dependent on the evaluation metric.
#### 4.1.3 Compositional Query
Answering compositional queries with vector embedding models is more challenging. The matrix factorization model (Equation 4) does not provide a natural way to answer queries such as intersection queries \(a_{1}\wedge a_{2}\) or difference queries \(a_{1}\wedge\neg a_{2}\). Here, we will discuss two heuristic approaches. One of them is based on score aggregation of individual attribute scores. The other is based on capturing the composition semantics in the embedding space.
**Score Aggregation**: One way to answer compositional queries is to combine the item scores of individual attributes by treating the score \(\hat{Y}(a,i)\) as the probability of item \(i\) satisfying attribute \(a\). This only makes sense when \(\hat{Y}(a,i)\) is defined in the range of \([0,1]\), for example when \(\Phi\) is the sigmoid function. Then naturally one can define:
\[\hat{Y}(\neg a,i) =1-\hat{Y}(a,i)\,,\] \[\hat{Y}(q_{1}\wedge q_{2},i) =\hat{Y}(q_{1},i)\cdot\hat{Y}(q_{2},i)\,.\]
With these formulas, we can obtain a score between any conjunctive query and any item. The formula here implicitly assumes that the attributes are independent of each other. While this serves as a reasonable heuristic, it may fail when the attributes are correlated.
**Embedding Aggregation**: In this approach, we rely on the embedding space semantics to generalize over the compositional semantics. Instead of aggregating scores, we aggregate the underlying embeddings using vector arithmetic, such as summation and subtraction between vectors, to represent their composition. This has been shown to be effective in practice (Mikolov et al., 2013; Pennington et al., 2014) and justified in theory (Levy & Goldberg, 2014; Arora et al., 2018).
Under such vector arithmetic, we use summation for the intersection queries, i.e.

\[\hat{Y}(a_{1}\wedge a_{2},i)=\Phi(\langle u_{i},v_{a_{1}}+v_{a_{2}}\rangle)\,, \tag{7}\]
and subtraction for the difference query,
\[\hat{Y}(a_{1}\wedge\neg a_{2},i)=\Phi(\langle u_{i},v_{a_{1}}-v_{a_{2}}\rangle). \tag{8}\]
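For concreteness, below is a minimal sketch of both heuristics, assuming the item matrix \(U\) and attribute vectors \(v_{a}\) have already been trained; the helper names are illustrative.

```python
# Minimal sketch (illustrative): the two heuristics for compositional queries
# with vector embeddings. U is an [n_items, d] numpy array of item vectors,
# v1 and v2 are trained attribute vectors; names are placeholder assumptions.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def score(U, v):                      # singleton score for all items, Eq. (5)
    return sigmoid(U @ v)

# Score aggregation: treat per-attribute scores as independent probabilities.
def prob_and(U, v1, v2):              # a1 AND a2
    return score(U, v1) * score(U, v2)

def prob_and_not(U, v1, v2):          # a1 AND NOT a2
    return score(U, v1) * (1.0 - score(U, v2))

# Embedding aggregation: compose in the embedding space, Eqs. (7)-(8).
def vec_and(U, v1, v2):
    return sigmoid(U @ (v1 + v2))

def vec_and_not(U, v1, v2):
    return sigmoid(U @ (v1 - v2))

# Items are then ranked by decreasing score for the chosen query.
```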
### Set theoretic Embeddings
We observe that the inherent nature of predicting combinations of queries is set-theoretic. To illustrate further, consider equation 2 and equation 3, which correspond to the conjunction and negation of two queries. Note how the conjunction can be interpreted as the intersection between the item-sets retrieved for the individual queries (and similarly for the negation). However, in the case of vector based embeddings, the choices available to represent these set operations are not natural and do not conform to the set-theoretic axioms. We briefly describe box embeddings and then discuss how they can be used for compositional queries.
#### 4.2.1 Overview of Box Embeddings
Box embeddings were first introduced by Vilnis et al. (2018), where an element \(\mathbf{a}\) of some set \(A\) is represented through a Cartesian product of intervals,
\[\begin{split}\operatorname{Box}(\mathbf{a})&:= \prod_{i=1}^{d}[a_{i}^{-},a_{i}^{+}]\\ &=[a_{1}^{-},a_{1}^{+}]\times\cdots\times[a_{d}^{-},a_{d}^{+}] \subseteq\mathbb{R}^{d}.\end{split} \tag{9}\]
This can be thought of as a \(d\)-dimensional hyper-rectangle in Euclidean space. The volume of a box can be calculated by multiplying the side lengths of the rectangle,
\[|\operatorname{Box}(\mathbf{a})|=\prod_{i=1}^{d}\max(0,a_{i}^{+}-a_{i}^{-}),\]
and when two boxes intersect, their intersection is yet another box,
\[\operatorname{Box}(\mathbf{a})\cap\operatorname{Box}(\mathbf{b})=\prod_{i=1} ^{d}[\max(a_{i}^{-},b_{i}^{-}),\min(a_{i}^{+},b_{i}^{+})].\]
These min and max operations involved in the intersection hinder gradient-based training because they cause large areas of the parameter space with no gradient signal. Dasgupta et al. (2020) proposed GumbelBox (\(\operatorname{Box}_{G}\)) to solve this problem. The corners of the boxes \(\{a_{i}^{\pm}\}\) are replaced with Gumbel random variables \(\{A_{i}^{\pm}\}\), where the probability of any point \(\mathbf{z}\in\mathbb{R}^{d}\) being inside the box \(\operatorname{Box}_{G}(\mathbf{a})\) is given by
\[P(\mathbf{z}\in\operatorname{Box}_{G}(\mathbf{a}))=\prod_{i=1}^{d}P(z_{i}>A_ {i}^{-})P(z_{i}<A_{i}^{+}).\]
This probability can also be thought of as a soft membership function; thus the Gumbel box can also be interpreted as a representation of a fuzzy set (Dasgupta et al., 2022). The Gumbel distribution was chosen as it is min/max stable, so the intersection \(\operatorname{Box}_{G}(\mathbf{a})\cap\operatorname{Box}_{G}(\mathbf{b})\), which is defined as a new box with corners modeled by the random variables \(\{C_{i}^{\pm}\}\) where
\[C_{i}^{-}\coloneqq\max(A_{i}^{-},B_{i}^{-})\text{ and }C_{i}^{+}\coloneqq\min(A_{i}^{+},B_{i }^{+})\]
is actually a Gumbel box as well. These max and min operations over the random variables boil down to a _logsumexp_ over the end points, which is a smooth function. The volume function of the Gumbel box is a smooth function of the parameters as well.
\[|\operatorname{Box}_{G}(\mathbf{a})|=\prod_{i=1}^{d}\text{Softplus}(a_{i}^{+}- a_{i}^{-}),\]
where \(\text{Softplus}(x)=\beta\log(1+\exp(\frac{x}{\beta}))\). For all further discussions, we denote the Gumbel box \(\operatorname{Box}_{G}\) as \(\operatorname{Box}\).
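A minimal numpy sketch of these box operations follows; for brevity it uses hard min/max corners for the intersection (the Gumbel version replaces these with a logsumexp over the corner variables) and the softplus-smoothed volume, with an illustrative temperature \(\beta\).

```python
# Minimal sketch (illustrative): a box is a pair of arrays (lower corners,
# upper corners) in R^d. Hard min/max is used for intersection; the exact
# Gumbel box would use logsumexp over corner variables instead.
import numpy as np

def softplus(x, beta=1.0):
    return beta * np.log1p(np.exp(x / beta))

def volume(lo, hi, beta=1.0):
    # smoothed product of side lengths (hard version: prod(max(0, hi - lo)))
    return np.prod(softplus(hi - lo, beta))

def intersect(box_a, box_b):
    (lo_a, hi_a), (lo_b, hi_b) = box_a, box_b
    return np.maximum(lo_a, lo_b), np.minimum(hi_a, hi_b)

def contain_prob(box_attr, box_item, beta=1.0):
    # P(a | i) = |Box(a) ∩ Box(i)| / |Box(i)|, as in Eqs. (11)-(12)
    inter = intersect(box_attr, box_item)
    return volume(*inter, beta) / volume(*box_item, beta)
```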
#### 4.2.2 Training
In this section, we formulate the training objective through the lens of set theoretic representation learning. Let us consider, for each attribute \(a\in A\), \(\operatorname{Box}(a)\) to be its box representation. Similarly, for each item \(i\in I\), let \(\operatorname{Box}(i)\) be its box representation. We conceptualize each attribute \(a\) as the set of movies that has that attribute as its descriptor. For example, the attribute "Brad Pitt" can be thought of as the set of all the movies that Brad Pitt is associated with.
More formally, given an attribute \(a\) and an item \(i\), if we observe that \(O^{\prime}_{i,a}=1\), then we want the set-representation \(\operatorname{Box}(a)\) to contain the representation \(\operatorname{Box}(i)\). In order to enforce such a training objective, we use the probabilistic interpretation of set containment, i.e., if set \(S\) contains set \(S^{\prime}\), then \(P(S|S^{\prime})=1\). In our formulation, if \(O^{\prime}_{i,a}=1\), then
\[P(a|i)=1 \tag{10}\] \[\frac{|\operatorname{Box}(a)\cap\operatorname{Box}(i)|}{| \operatorname{Box}(i)|}=1 \tag{11}\]
From the above equation, we can see that we are able to calculate this conditional probability from the model parameters. For a negative sample, i.e., \(O^{\prime}_{i,a}=0\) for a pair \((i,a)\), we want the opposite, i.e., \(P(a|i)=0\). We optimize the binary cross entropy loss between \(O^{\prime}_{i,a}\) and \(P(a|i)\):
\[\mathcal{L}_{bce}=-\sum_{(i,a)\in I\times A}\big{[}O^{\prime}_{i,a}\ln P(a|i)+(1-O^{\prime}_{i,a})\ln(1-P(a|i))\big{]}\]
#### 4.2.3 Singleton Query
The item scoring function for a singleton query \(q=a\) can be directly represented by the probabilistic semantics of set
containment (Equation 11). We define the scoring function as
\[\hat{Y}(a,i):=\frac{|\operatorname{Box}(a)\cap\operatorname{Box}(i)|}{| \operatorname{Box}(i)|}. \tag{12}\]
Again, for predicting sets, we can just apply a threshold
\[\hat{\mathcal{I}}(a)=\{i\in I|\hat{Y}(a,i)>\tau\}. \tag{13}\]
#### 4.2.4 Compositional Query
As we have argued before, the vector representation for a logical-and query has been posed as a vector average, which fails to obey many set-theoretic axioms. Box embedding based representations are a natural choice for this set-theoretic task: their intersection is an organic representation of the logical-and composition. More formally, given two attributes \(a_{1}\) and \(a_{2}\), the score of an item for their logical-and can be inferred as,
\[\hat{Y}(a_{1}\wedge a_{2},i)=\frac{|\operatorname{Box}(a_{1})\cap\operatorname {Box}(a_{2})\cap\operatorname{Box}(i)|}{|\operatorname{Box}(i)|} \tag{14}\]
Using inclusion-exclusion, we can express the score for the difference as follows,
\[\hat{Y}(a_{1}\wedge\neg a_{2},i)=\] \[\frac{|\operatorname{Box}(a_{1})\cap\operatorname{Box}(i)|-| \operatorname{Box}(a_{1})\cap\operatorname{Box}(a_{2})\cap\operatorname{ Box}(i)|}{|\operatorname{Box}(i)|} \tag{15}\]
Similarly, the exact score for more complex queries can be computed using the inclusion-exclusion principle.
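A minimal sketch of this compositional scoring is given below, assuming boxes are given as (lower, upper) corner arrays; hard intersections and volumes are used for brevity in place of the Gumbel-smoothed versions.

```python
# Minimal sketch (illustrative): compositional box scores via inclusion-
# exclusion, Eqs. (14)-(15). Boxes are (lower, upper) numpy arrays; hard
# min/max and volumes stand in for the Gumbel-smoothed versions.
import numpy as np

def vol(lo, hi):
    return np.prod(np.maximum(0.0, hi - lo))

def inter(*boxes):
    los = np.max([b[0] for b in boxes], axis=0)
    his = np.min([b[1] for b in boxes], axis=0)
    return los, his

def score_and(box_a1, box_a2, box_i):        # Y(a1 AND a2, i), Eq. (14)
    return vol(*inter(box_a1, box_a2, box_i)) / vol(*box_i)

def score_and_not(box_a1, box_a2, box_i):    # Y(a1 AND NOT a2, i), Eq. (15)
    pos = vol(*inter(box_a1, box_i))
    both = vol(*inter(box_a1, box_a2, box_i))
    return (pos - both) / vol(*box_i)
```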
## 5 Set theoretic Evaluation Benchmark
In the previous sections, we described methodologies for addressing set-theoretic queries. In order to compare these methods, we need to evaluate them against a benchmark of compositional queries. However, to the best of our knowledge, no benchmark has been developed to evaluate models that intend to solve such attribute-based compositional query problems.
We developed an evaluation benchmark based on the broadly used MovieLens dataset (Harper & Konstan, 2015). In the benchmark, we extracted movie-genre annotations \(O\) for the movie domain from Wikidata and assigned genre attributes to the movies. The noisy and incomplete annotations \(O^{\prime}\), from which the embedding models can be trained, were created from user-generated tagging data from MovieLens. We created a set of meaningful compositional queries using the annotation statistics \(O\) and heuristics. The ground truth results for the queries follow from \(O\) and set operations. The overall statistics of the query dataset can be found in Table 1. The datasets for the ground truth \(O\) and for the compositional queries are available at [https://github.com/google-research-datasets/genre2movies](https://github.com/google-research-datasets/genre2movies). In the remainder of this section, we will describe the data generation process in detail.
### Training Data
The MovieLens dataset provides a set of user-annotated tags for each movie. These tags describe a wide variety of attributes associated with the movies, for example, genre, actor, theme, short content descriptions, reviews etc. Since viewers only tag the movies with the few relevant tags that they think are most important, the tags for each movie are far from complete. Adding to that problem, the tags are subjective, which is a source of uncertainty and noise. We use this tag-vs-movie information as the incomplete noisy estimate matrix \(O^{\prime}\), where the set of all movies is the item set \(I\) and the set of tags is the attribute set \(A\). This dataset has \(|I|=19,545\) items, \(|A|=35,169\) attributes and \(||O^{\prime}||_{0}=195,878\) non-zero item-attribute pairs. We also note that the user ratings could be useful to provide semantic groupings of movies, though we do not use them in this paper.
### Evaluation Benchmark
Genres are a common type of query for movies, and composing two different genres is very commonplace, e.g., "children's animation", "children's animation but not monster", etc. We consider movie genres as our queries for comparing the performance of the different methods and proving our hypothesis empirically.
**Genre Extraction**: MovieLens provides some genre annotations, but they are very coarse, with only \(19\) different genres. The user tagging data, on the other hand, is highly diverse but very noisy. So for the ground truth, we use the high quality data source of Wikidata. Given each movie from MovieLens, we query the Wikidata infobox for the genre information. Wikidata has much richer genre descriptions and, when present, is quite accurate. However, it may miss many genres as it often only retains the finest category. To solve this problem, we extract a genre-hierarchy from Wikidata and use it to populate more genre annotations. For
| Query type | | #Queries | mean \(\rho\) |
|---|---|---|---|
| Singleton | \(a\) | 218 | 1.0 |
| Intersection | \(a\wedge b\) | 556 | 0.142 |
| Difference | \(a\wedge\neg b\) | 149 | 0.785 |
| Triple Intersection | \(a\wedge b\wedge c\) | 1604 | 0.054 |
| Triple Difference | \(a\wedge b\wedge\neg c\) | 302 | 0.277 |

Table 1: Statistics of generated queries and assigned movies for the proposed evaluation benchmark. \(\rho\) is defined in (16).
example, if a movie has the genre 'sci-fi', and we know from the genre-hierarchy that \(<\)'sci-fi' _isA_ 'fiction'\(>\), then we add 'fiction' to its annotations as well. We treat this dataset as \(O\). This dataset has \(25,878\) items, \(218\) genre attributes and \(||O||_{0}=83,670\) non-zero pairs. We realise that \(O\) likely still has some noise and some incompleteness, but it is significantly better than the MovieLens genre and tag data. We provide a detailed analysis of the incompleteness of the benchmark in Appendix C.
**Set-theoretic Query Generation**: With the genre annotations for each movie, it is straightforward to create single attribute queries. However, creating compositional queries requires more thought. When we combine two arbitrary genres, most of these combinations will not pan out to be an interesting query; for example, they can come back as a (mostly) empty set, e.g., 'sports' and 'apocalyptic' do not have anything in common, or one is completely contained in the other, e.g., 'sci-fi' is contained in 'fiction'.
These examples suggest that the relative size of the query result may be used to define "interesting queries". Indeed, that is how we construct compositional queries in our benchmark. We use the co-occurrence statistics \(|\mathcal{I}(a\wedge b)|\) of two attributes \(a\) and \(b\) to determine which of the genres to consider when composing a complex query. Intuitively, for two queries \(q_{1},q_{2}\) and their combination \(q=q_{1}\operatorname{op}q_{2}\), we consider \(q\) interesting if it is meaningful, i.e when \(|\mathcal{I}(q)|\) is relatively larger than the size of combining two random sets, and non-trivial, i.e. when \(|\mathcal{I}(q)|\) is smaller than either \(|\mathcal{I}(q_{1})|\) or \(|\mathcal{I}(q_{2})|\).
We apply the above criteria to obtain both pairwise and triplet queries of the form of \(a\wedge b\), \(a\wedge\neg b\), \(a\wedge b\wedge c\), \(a\wedge b\wedge\neg c\). We summarize the statistics of the queries we obtain in Table 1 and give some examples in Table 5 in Appendix A. To illustrate the "difficulty" of each query, we also compute the ratio \(\rho\) between the size of the result set and the minimum size of each attribute involved in the query, i.e
\[\rho(q_{1}\wedge\cdots\wedge q_{k})=\frac{|\mathcal{I}(q_{1}\wedge\cdots \wedge q_{k})|}{\min(|\mathcal{I}(q_{1})|,\cdots,|\mathcal{I}(q_{k})|)}\,. \tag{16}\]
The value of \(\rho\) represents the chance of success if we are to randomly guess one movie from the most restrictive "atom" query (\(a\) or \(\neg a\)), so it gives a sense of the sparsity of the results.
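A minimal sketch of this selection heuristic is shown below; the lift threshold separating "meaningful" from chance-level co-occurrence is an illustrative placeholder, not the benchmark's exact criterion.

```python
# Minimal sketch (illustrative): keep a candidate composition if its result
# set is "meaningful" (larger than expected by chance) and "non-trivial"
# (strictly smaller than each atom). The lift threshold is a placeholder.
def rho(result_set, atom_sets):                            # Eq. (16)
    return len(result_set) / min(len(s) for s in atom_sets)

def is_interesting(items_a, items_b, n_items, min_lift=2.0):
    inter = items_a & items_b
    expected_random = len(items_a) * len(items_b) / n_items  # independence baseline
    meaningful = len(inter) > min_lift * expected_random
    non_trivial = len(inter) < min(len(items_a), len(items_b))
    return meaningful and non_trivial
```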
## 6 Results
In this section, we use our proposed set-theoretic query benchmark to compare the methods of Section 4.
### Methods
We use the movie-vs-attribute matrix, \(O^{\prime}\), obtained from the MovieLens dataset (more details in Section 5.1), for training. We list the baselines as well as the corresponding methods with their training details here:
**Attribute Lookup**: A natural way to generate a list of movies given a query is to perform a lookup in the training matrix \(O^{\prime}\) (see equations 2 and 3 where we substitute \(O\) with \(O^{\prime}\)). We sort the movies based on the number of users that tagged a movie with that attribute. This helps to reduce tagging noise.
**Vector Embedding**: We use the matrix factorization method as described in Section 4.1 to obtain the vector representation of the attributes and movies. We denote the **Score Aggregation** technique described in 4.1.3 as _Vector (Probabilistic)_ and **Embedding Aggregation** as _Vector (Algebraic)_ in the compositional query result (Tables 3 and 4).
**Box Embedding**: We use the set-theoretic embedding method as described in 4.2. We train our method with containment based probabilistic semantics (equation 11). The inference for individual attributes is governed by set containment semantics, movies are ranked by the extent to which they are contained by the query attribute (equation 13). The set composition queries are handled using inclusion-exclusion of intersection scores (refer to equation 15).
### Evaluation Protocol
We evaluate five different types of query tasks: singleton queries (\(q=a\)), intersection (\(q=a\wedge b\)), difference (\(q=a\wedge\neg b\)), triple intersection (\(q=a\wedge b\wedge c\)) and triple difference (\(q=a\wedge b\wedge\neg c\)). For every query in each query task, the methods (see Section 6.1) are queried for a ranked list of movies. We calculate the precision@\(k\) with \(k\in\{1,10,20,50\}\) of the ranked movies w.r.t. the evaluation ground-truth given in our evaluation benchmark (see Section 5.2). We report the mean precision value over all queries in a query task. We did an extensive hyperparameter search (see Appendix B for details). The best hyper-parameter for each method is determined using the precision@1 metric for the singleton queries (\(q=a\)). We use those model checkpoints to assess the performance on all other compositional queries.
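For clarity, below is a minimal sketch of the precision@\(k\) computation used in this protocol; the dictionary-based interface is an illustrative assumption.

```python
# Minimal sketch (illustrative): precision@k of a ranked list of movies against
# the ground-truth item set of a query, averaged over all queries of a task.
def precision_at_k(ranked_items, ground_truth, k):
    top_k = ranked_items[:k]
    return sum(1 for i in top_k if i in ground_truth) / k

def mean_precision_at_k(predictions, ground_truths, k):
    # predictions: {query: ranked list of items}, ground_truths: {query: set of items}
    scores = [precision_at_k(predictions[q], ground_truths[q], k) for q in predictions]
    return sum(scores) / len(scores)
```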
| Methods | P@1 | P@10 | P@20 | P@50 |
|---|---|---|---|---|
| Attribute Lookup | 34.6 | **27.8** | 24.2 | 18.7 |
| Vector | **36.8** | 25.1 | 22.0 | 18.0 |
| Box | **36.8** | 27.6 | **25.1** | **21.1** |

Table 2: Precision (in %, higher the better) for the singleton queries. We select the models based on P@1 performance for each method. We observe that all the methods perform similarly for these types of queries.
### Quantitative Results
#### 6.3.1 Singleton Queries
We report the performance on singleton queries in Table 2. As can be seen, all methods perform very similarly for singleton queries. Especially for the precision@1 metric, the results of the embedding models are very close to each other. The quality of vector embeddings drops more than that of box embeddings at larger cutoffs. This could be an artifact of hyperparameter tuning, where we selected the hyperparameters for precision@1 quality (on the validation set). The lookup baseline also provides good quality, which indicates that sorting the movies for each genre by the frequency with which the tag was assigned to a movie is effective in reducing noise. For movies that are tagged by many users, this is equivalent to a majority vote, which is a straightforward way to resolve tagging disagreement.
#### 6.3.2 Compositional Queries
Table 3 shows the results for queries of pairs of attributes and Table 4 for attribute triples.
**Attribute Lookup**: The lookup baseline performs particularly poorly for intersection queries, where it suffers from the incompleteness of its data \(O^{\prime}\) and its inability to compensate for it through generalization. This effect is especially strong for larger precision cutoffs, where the lookup quality degrades quickly, e.g., the quality for intersection queries drops from \(24.8\%\) at precision@1 to \(4.1\%\) at precision@50. Similar effects can be seen for triple intersections and triple differences.
**Vector Embeddings**: The embedding methods perform noticeably better on intersection queries and higher cutoffs due to their generalization capabilities. Interestingly, the lookup baseline performs much better than vector embeddings on the difference queries, which require an understanding of negation. This holds for both pair and triple queries. We hypothesize that this is due to the vector embeddings' poor modeling of concept boundaries, for example, recognizing when a movie is still of genre \(a\) but no longer of genre \(b\). In particular, the probabilistic treatment of vector embeddings, with its independence assumptions, seems to fail here. The algebraic treatment works better for difference queries. On the other hand, the probabilistic treatment of vector scores works slightly better for intersection queries (especially for precision@1).
**Box Embeddings**: We observe that box embedding methods outperform the vector embeddings and the lookup baseline by a large margin for all compositional queries and cutoffs. Unlike vector embeddings, box embeddings are successful for difference queries, which indicates that box embeddings capture the boundaries of concepts better, an ability that is important for these types of queries. Box embeddings perform well for both pair and triple queries. Unlike the attribute lookup, the quality of box embeddings also degrades more slowly for larger precision cutoffs, meaning that box embeddings provide better results beyond the very top of the result list than the lookup method. The observed superior quality of box embeddings on the compositional query tasks suggests that their built-in set-theoretic semantics are beneficial for such query tasks.
## 7 Conclusion
This paper has presented a study of compositional queries on item-attribute relations (especially with conjunctions and negations). We demonstrated that this set-theoretic problem is non-trivial for embedding models, and argued that it requires not just a modeling of closeness but also a
\begin{table}
\begin{tabular}{l r r r r r r r r} \hline \hline & \multicolumn{6}{c}{Triple Intersection} & \multicolumn{6}{c}{Triple Difference} \\ \cline{2-10} Methods & P@1 & P@10 & P@20 & P@50 & P@1 & P@10 & P@20 & P@50 \\ \hline Attribute Lookup & 12.2 & 3.2 & 1.7 & 0.7 & 33.1 & 17.3 & 13.3 & 8.4 \\ Vector (Probabilistic) & 15.4 & 8.4 & 6.4 & 4.4 & 11.6 & 12.2 & 11.4 & 10.8 \\ Vector (Algebraic) & 10.9 & 7.5 & 6.2 & 4.6 & 20.1 & 18.2 & 16.5 & 13.7 \\ Box & **20.7** & **11.9** & **9.0** & **6.2** & **36.1** & **28.9** & **24.8** & **20.4** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Precision (in %, higher the better) for the triple intersection and difference of genre query.
\begin{table}
\begin{tabular}{l r r r r r r r r} \hline \hline & \multicolumn{6}{c}{Intersection} & \multicolumn{6}{c}{Difference} \\ \cline{2-10} Methods & P@1 & P@10 & P@20 & P@50 & P@1 & P@10 & P@20 & P@50 \\ \hline Attribute Lookup & 24.8 & 11.5 & 7.7 & 4.1 & 44.1 & 36.0 & 31.9 & 28.4 \\ Vector (Probabilistic) & 24.1 & 15.4 & 12.6 & 9.3 & 15.3 & 11.8 & 11.3 & 10.8 \\ Vector (Algebraic) & 19.4 & 12.6 & 11.0 & 8.5 & 33.0 & 33.1 & 32.0 & 28.4 \\ Box & **32.9** & **20.6** & **16.1** & **11.3** & **48.4** & **43.7** & **43.2** & **41.3** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Precision (in %, higher the better) for the intersection and difference of genre query.
modeling of boundaries. We have aimed to show, through discussion of representational capacity, empirical results, and analysis, the advantages of a region-based embedding, such as box embeddings, over traditional vector embeddings. We have also curated and released a new benchmark dataset, and proposed an evaluation protocol for movie retrieval based on compositional genres, giving an opportunity to study query intersections and differences. In future work, building on these results, we hope to extend the advantages of our approach to recommendation systems that provide personalized search.
|
2303.10940 | Ab initio calculation of the spectrum of Feshbach resonances in NaLi +
Na collisions | We present a combined experimental and theoretical study of the spectrum of
magnetically tunable Feshbach resonances in NaLi $(a^3\Sigma^+)$ $+$ Na
collisions. In the accompanying paper, we observe experimentally 8 and 17
resonances occur between $B=0$ and $1400$~G in upper and lower spin-stretched
states, respectively. Here, we perform ab initio calculations of the NaLi $+$
Na interaction potential and describe in detail the coupled-channel scattering
calculations of the Feshbach resonance spectrum. The positions of the
resonances cannot be predicted with realistic uncertainty in the
state-of-the-art ab initio potential, but our calculations yield a typical
number of resonances that is in near-quantitative agreement with experiment. We
show that the main coupling mechanism results from spin-rotation and spin-spin
couplings in combination with the anisotropic atom-molecule interaction. The
calculations furthermore explain the qualitative difference between the numbers
of resonances in either spin state. | Tijs Karman, Marcin Gronowski, Michal Tomza, Juliana J. Park, Hyungmok Son, Yu-Kun Lu, Alan O. Jamison, Wolfgang Ketterle | 2023-03-20T08:34:21Z | http://arxiv.org/abs/2303.10940v2 | # _Ab initio_ calculation of the spectrum of Feshbach resonances in NaLi + Na collisions
###### Abstract
We present a combined experimental and theoretical study of the spectrum of magnetically tunable Feshbach resonances in NaLi (\(a^{3}\Sigma^{+}\)) + Na collisions. In the accompanying paper, we experimentally observe that 8 and 17 resonances occur between \(B=0\) and 1400 G in the upper and lower spin-stretched states, respectively. Here, we perform _ab initio_ calculations of the NaLi \(+\) Na interaction potential and describe in detail the coupled-channel scattering calculations of the Feshbach resonance spectrum. The positions of the resonances cannot be predicted with realistic uncertainty in the state-of-the-art _ab initio_ potential, but our calculations yield a _typical_ number of resonances that is in near-quantitative agreement with experiment. We show that the main coupling mechanism results from spin-rotation and spin-spin couplings in combination with the anisotropic atom-molecule interaction. The calculations furthermore explain the qualitative difference between the numbers of resonances in either spin state.
## I Introduction
Magnetically tunable resonances have become an indispensable tool for controlling the interactions between ultracold atoms [1]. Control over the interactions enables using atoms as quantum simulators to study for example condensed matter physics [2; 3; 4; 5; 6]. Feshbach resonances are also used to associate pairs of ultracold atoms into weakly bound molecules [7]. These weakly bound molecules can be transferred coherently to their absolute ground state using a stimulated Raman adiabatic passage (STIRAP) [8], which has become a common tool for producing ultracold ground-state molecules [9; 10; 11; 12]. Resonances in molecular collisions could similarly enable the control of contact interactions between molecules, or between molecules and atoms. This could enable sympathetic cooling of molecules [13], and the bottom up construction of ultracold polyatomic molecules [14]. Furthermore, scattering resonances are incredibly sensitive to the details of the interaction potential and provide an interesting testing ground for theory [15]. In particular, there is an active debate about the nature of so-called "sticky collisions" between ultracold molecules [16; 17; 18], their role in collisional losses [19], and what simplified statistical models can be used to describe these [16; 17; 20; 21]. Observation of atom-molecule and molecule-molecule resonances could directly test some of these models by measuring the "effective" density of states [18; 22], the probability of short-range loss [21; 23], and by probing the mechanism of this loss.
Collisional loss also represents a major hurdle in the observation of Feshbach resonances. The resonance states correspond to quasi-bound states of the atom-molecule or molecule-molecule collision complex. In the presence of collisional loss, however, these states experience decay and the resonances broaden [21]. In the case of so-called universal loss [24], which is consistent with many experimental observations of molecule-molecule [9; 10; 11; 12; 25; 26; 27; 28; 29] loss rates, the short-range loss becomes complete and the resonances disappear entirely. Universal loss is also observed for some but not all ultracold molecule-atom collisions [30; 31; 32; 33]. For these reasons it is an open question for which systems scattering resonances exist.
Recently, scattering resonances have been observed in NaK+K [29], NaLi+Na [34], and even NaLi+NaLi [35] collisions, but the three systems are very different. The resonances observed in NaK+K could be interpreted as long-range states that are further split into several resonances by hyperfine interactions, and which could tentatively be assigned free-molecule and free-atom quantum numbers [31]. In the case of collisions between triplet NaLi molecules and Na atoms in the spin stretched state, the background loss is much less than universal. This situation, however, could be unique to triplet NaLi, which is the lightest bialkali realized that furthermore has relatively weak interactions with Na in the spin-stretched state. By contrast, NaLi bimolecular collisions exhibit universal background loss. Nevertheless, a single resonance has been observed in NaLi+NaLi collisions, though assignment of the observed resonance in terms of a stable resonance state is complicated [35].
Here, we present a combined experimental and theoretical study of the spectrum of magnetically tunable Feshbach resonances in NaLi (\(a^{3}\Sigma^{+}\)) + Na collisions. In the accompanying paper we show that 8 and 17 resonances
occur between \(B=0\) and 1400 G in the upper and lower spin-stretched states, respectively [36]. We perform _ab initio_ calculations of the NaLi+Na interaction potential, and use these to perform coupled-channel scattering calculations of the Feshbach resonance spectrum. We show that the positions of the resonances cannot be predicted with realistic uncertainty in the state-of-the-art _ab initio_ potential, but that our calculations can nevertheless confirm the expected number of resonances in this magnetic field range, and furthermore explain the qualitative difference between the numbers of resonances in either spin state, and the nature of the coupling mechanism. In this paper, we describe in detail how these calculations were performed, and also present additional supporting calculations of the density of states and the effect of hyperfine interactions on the spectrum of resonances.
## II Interaction potential
Intermolecular interactions in the triatomic NaLi+Na system can be formally decomposed into a pairwise additive two-body part and a pairwise nonadditive three-body part. The two-body part is the sum of interatomic interaction between all atomic pairs obtained neglecting the presence of other atoms. The nonadditive three-body interaction results from the electron density modification and correlation extending beyond atomic pairs and can also be understood as the change of pairwise interactions due to the presence of other atoms.
### Two-body interactions
We first consider the two-body interactions. The interaction between two alkali-metal atoms \(A\) and \(B\) depends on the distance between the atoms and whether the spins are singlet or triplet coupled, which can be represented as
\[\hat{V}_{AB}(r_{AB})=\bar{V}_{AB}(r_{AB})+\hat{s}_{A}\cdot\hat{s}_{B}\Delta V _{AB}(r_{AB}), \tag{1}\]
where \(\bar{V}_{AB}(r)=[3V_{AB}^{(T)}(r)+V_{AB}^{(S)}(r)]/4\) and \(\Delta V_{AB}(r)=V_{AB}^{(T)}(r)-V_{AB}^{(S)}(r)\). The singlet \(V_{AB}^{(S)}\) and triplet \(V_{AB}^{(T)}\) potentials for NaNa and NaLi are taken from Refs. [37; 38; 39]. These are empirical potentials that reproduce the known molecular spectroscopy and atom-atom scattering lengths accurately. The total additive electronic interaction is then computed for fixed molecular bond length, as a function of the molecule-atom center-of-mass separation \(R\) and the Jacobi angle \(\theta\) as
\[\hat{V}^{2b}(R,\theta)=\sum_{A<B}\hat{V}_{AB}(r_{AB}), \tag{2}\]
where, the three interatomic distances \(r_{AB}\) are obtained as a function of \(R\) and \(\theta\) and evaluated with Eq. (1) for each pair. This total additive interaction is represented in a Legendre expansion convenient for scattering calculations
\[\hat{V}^{2b}(R,\theta) =\sum_{L}P_{L}(\cos\theta)\times\Big{[}V_{L}^{(0)}(R)+\hat{s}_{1 }\cdot\hat{s}_{2}V_{L}^{(12)}(R)\] \[+\hat{s}_{1}\cdot\hat{s}_{3}V_{L}^{(13)}(R)+\hat{s}_{2}\cdot\hat {s}_{3}V_{L}^{(23)}(R)\Big{]}, \tag{3}\]
computed by Gauss-Legendre quadrature including terms up to \(L=70\). The required matrix elements in the channel basis are given in the Appendix.
### Nonadditive three-body potential
The nonadditive three-body contribution to the interaction energy in the triatomic \(ABC\) system is computed as
\[V^{3b}(R,\theta)=E_{ABC}-\sum_{A<B}E_{AB}+\sum_{A}E_{A}\,, \tag{4}\]
where the total energies of the trimer \(E_{ABC}\), dimers \(E_{AB}\), and monomers \(E_{A}\) at the trimer geometry \((R,\theta)\) are computed using a trimer basis set.
First, the total energies and resulting three-body term are computed using the coupled cluster method [40] with the full treatment of single and double excitations and estimation of the connected triples contribution non-iteratively by many-body perturbation theory, CCSD(T), [41]. Next, the correction to account for the remaining triple excitations in the coupled-cluster expansion, CCSDT [42], is added. Large Gaussian basis sets are employed. Thus, all total energies in Eq. (4) are calculated using
\[E=E_{\text{apCV5Z}}^{\text{HF}}+\delta E_{\text{CBS(Q,5)}}^{\text{CCSD(T)}}+\delta E_{\text{apCVTZ}}^{\text{CCSDT}}\,, \tag{5}\]
where the Hartree-Fock energy, \(E_{\text{apCV5Z}}^{\text{HF}}\), is calculated in the Douglas-Kroll correlation-consistent polarized core-valence quintuple-\(\zeta\) quality basis sets, aug-cc-pCV5Z-DK [43]. Next, the correlation energy at the CCSD(T) level, \(\delta E_{\text{CBS(Q,5)}}^{\text{CCSD(T)}}\), is extrapolated to the complete basis set (CBS) limit with the two-point formula [44] and the aug-cc-pCVQZ-DK and aug-cc-pCV5Z-DK basis sets. Relativistic effects are included in those calculations with the eXact-2-Component (X2C) Hamiltonian [45]. Finally, the full triples correction \(\delta E_{\text{apCVTZ}}^{\text{CCSDT}}\), defined as the difference between CCSDT and CCSD(T) results, is calculated using the aug-cc-pCVTZ basis sets. We correlate only the three valence electrons in the CCSDT calculations, which in the case of the NaLi+Na system is equivalent to reaching the full configuration interaction (FCI) quality for valence electrons.
The CCSD(T) calculations are performed with the MOLPRO package of ab initio programs [46; 47], while the CCSDT results are obtained with the MRCC2019 software [48; 49]. The Legendre expansion of the nonadditive three-body interaction potential is available in the Supplementary Material.
The nonadditive three-body interaction potential is computed for the spin-stretched quartet state, where the coupled-cluster method can be applied. We subsequently assume the nonadditive part of the interaction is spin-independent and use it for the doublet states, too. This approximation may be justified by the much higher importance of the three-body interaction for the spin-stretched state, where it is larger than the two-body contribution, while deeply bound doublet states are strongly dominated by two-body interactions. Additionally, the decomposition into additive and nonadditive parts is ambiguous for the relevant low-spin states. Finally, we will show later that the details of the doublet states are less important for resonance prediction. We note that the spin-dependence of the two-body interactions is fully accounted for.
### Electron spin-spin interaction
The electron spin-spin interaction can originate either from direct spin-spin magnetic dipole-dipole interaction or indirect interaction in the second-order of perturbation theory mediated by spin-orbit coupling. The total effective Hamiltonian for the electron spin-spin interaction reads as
\[\hat{H}_{SS}=\hat{\mathbf{S}}^{T}\mathbf{D}\hat{\mathbf{S}} \tag{6}\]
where \(\hat{\mathbf{S}}\) represents the spin of the system and \(\mathbf{D}\) is a \(3\times 3\) matrix. In the magnetic axis frame, \(\mathbf{D}\) becomes diagonal, and the effective Hamiltonian can be parameterized by the \(D\) and \(E\) constants [50], as
\[\hat{H}_{SS}=D\left[\hat{S}_{z}^{2}-\frac{1}{3}S(S+1)\right]\ +E\left[\hat{S}_{x}^{2}- \hat{S}_{y}^{2}\right] \tag{7}\]
where \(\hat{S}_{x}\) is the \(x\) component of the spin operator \(\hat{\mathbf{S}}\) with respect to the magnetic axis system, and similarly for \(y\) and \(z\). The magnetic axes depend on the geometry of the NaLi+Na complex. In the case of linear configurations, the \(z\) axis coincides with the NaLi axis. In the global minimum of the potential energy surface, the \(z\) axis is perpendicular to the plane defined by the atoms, whereas the angle between \(y\) and the NaLi axis is about 30 degrees. Preliminary computations with multi-reference configuration interaction methods show that the spin-orbit interaction plays a minor role here, and \(E\) is orders of magnitude smaller than \(D\). Thus, we neglect the spin-orbit interaction and include only the direct two-electron spin-spin magnetic interaction. We use the multi-reference averaged quadratic coupled-cluster (MR-AQCC) [51] electronic wave function as implemented [52] in Orca [53; 54]. We describe scalar relativistic effects by the Douglas-Kroll-Hess (DKH) Hamiltonian [55] and include the picture-change effects [56]. The aug-cc-pCVTZ-DK basis sets are used. The components of the \(\mathbf{D}\) matrix, the coupling coefficient \(D\), and the \(E/D\) ratio are available in the Supplementary Material.
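As a concrete illustration of this parameterization, the following minimal sketch constructs the spin matrices for a given total spin and evaluates Eq. (7) in the magnetic axis frame; the numerical \(D\) value in the example is illustrative only.

```python
import numpy as np

def spin_matrices(S):
    """Spin matrices S_x, S_y, S_z in the |S, m> basis with m = S, S-1, ..., -S."""
    m = np.arange(S, -S - 1, -1)
    Sz = np.diag(m)
    # raising operator: <m+1|S_+|m> = sqrt(S(S+1) - m(m+1))
    Sp = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), 1)
    Sx = (Sp + Sp.T) / 2
    Sy = (Sp - Sp.T) / 2j
    return Sx, Sy, Sz

def h_spin_spin(S, D, E=0.0):
    """Effective spin-spin Hamiltonian of Eq. (7) in the magnetic axis frame."""
    Sx, Sy, Sz = spin_matrices(S)
    eye = np.eye(int(2 * S + 1))
    return D * (Sz @ Sz - S * (S + 1) / 3 * eye) + E * (Sx @ Sx - Sy @ Sy)

# Quartet (S = 3/2) example with axial coupling only: the zero-field splitting
# between the |m_S| = 3/2 and |m_S| = 1/2 levels equals 2D.
print(np.round(np.linalg.eigvalsh(h_spin_spin(1.5, D=0.0189)), 4))
```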
We have found that the order of magnitude of the isotropic coupling coefficient \(D\) is conserved for most intermonomer configurations accessible during low-energy NaLi+Na collisions. In the scattering calculations reported below, we neglected the interaction-induced variation of the spin-spin coupling. This means that the modification of the spin-spin coupling in the NaLi molecule due to its interaction with the Na atom is omitted, but the dependence of the spin-spin coupling on the orientation of the NaLi molecule within the NaLi+Na complex is included.
### Geometry and interpolation
We use Jacobi coordinates to parameterize the geometry of the NaLi+Na complex. The interatomic separation in NaLi is fixed at the vibrationally averaged interatomic distance of 9.139 \(a_{0}\) for the ground state of NaLi (\(a^{3}\Sigma^{+}\)) [15]. We perform the computations for more than 20 values of the atom-molecule separation \(R\). The cosines of the angles, \(x_{i}=\cos(\theta_{i})\) with \(i\in\{0,1,2,\ldots,14\}\), are selected as the roots of the 15th-order Legendre polynomial and are ordered such that \(x_{i}>x_{i+1}\). We independently calculated the nonadditive three-body contribution [\(V^{3b}(R,\theta)\)] and the electron spin-spin interaction coupling [\(D(R,\theta)\)] for each combination of geometric parameters. For each distance, we interpolate the three-body interaction term and the spin-spin interaction coupling constants by Legendre polynomials [57]. We determine the Legendre expansion coefficients using Gauss-Legendre quadrature as
\[V_{L}^{(3b)}(R)=(2L+1)\sum_{i=0}^{n-1}w_{i}P_{L}(x_{i})V^{3b}(R,\theta_{i}), \tag{8}\]
and similar for the spin-spin coupling, and \(w_{i}\) are the quadrature weights. For use in scattering calculations, the radial dependence of the expansion coefficients is interpolated using the reproducing kernel Hilbert space method [58].
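A minimal numerical sketch of this projection is shown below. It uses the standard Gauss-Legendre weights from NumPy, which sum to 2, so the prefactor is \((2L+1)/2\); this is equivalent to Eq. (8) when the \(w_{i}\) there are normalized to unity. The sampled values `f_nodes` stand in for \(V^{3b}(R,\theta_{i})\) at fixed \(R\).

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import eval_legendre

def legendre_expansion(f_nodes: np.ndarray, l_max: int = 14) -> np.ndarray:
    """Project a function sampled at the Gauss-Legendre nodes x_i = cos(theta_i)
    onto Legendre polynomials P_L, returning the expansion coefficients."""
    x, w = leggauss(len(f_nodes))
    return np.array([(2 * L + 1) / 2 * np.sum(w * eval_legendre(L, x) * f_nodes)
                     for L in range(l_max + 1)])

# Self-check with a known expansion: cos^2(theta) = (1/3) P_0 + (2/3) P_2.
x, _ = leggauss(15)
coeffs = legendre_expansion(x ** 2)
assert np.allclose(coeffs[[0, 2]], [1 / 3, 2 / 3]) and abs(coeffs[1]) < 1e-12
```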
### Accuracy
The global minimum of the molecule-atom interaction potential for the quartet state of NaLi+Na with the internuclear distance in NaLi fixed at \(9.139\,a_{0}\) occurs at \(R_{e}=7.48\,a_{0}\) and \(\theta_{e}=117.3\,^{\circ}\) with the well depth of \(D_{e}=812.5\,\mathrm{cm}^{-1}\) (see Fig. 1). At this equilibrium geometry two-body and three-body interactions contribute \(230.7\,\mathrm{cm}^{-1}\) and \(581.8\,\mathrm{cm}^{-1}\) to the binding energy, respectively. The CCSD(T) method reproduced \(555.0\,\mathrm{cm}^{-1}\) (\(95\%\)) of the nonadditive part and the CCSDT correction accounts for \(26.8\,\mathrm{cm}^{-1}\). The final uncertainty of the calculated nonadditive three-body potential is about \(17\,\mathrm{cm}^{-1}\) (\(3\,\%\)) at the global minimum of the potential energy surface and results mainly from the incompleteness of the basis set in the CCSD(T) (\(\sim\)\(5\,\mathrm{cm}^{-1}\)) and CCSDT
(\(\sim\)8 cm\({}^{-1}\)) computations, and the lack of core-electron correlation at the CCSDT level (\(\sim\)3 cm\({}^{-1}\)). We estimate the magnitude of the neglected contribution of core and core-valence correlation beyond CCSD(T) level by running all-electron CCSDT calculations with the virtual one-electron space truncated by 20 % [59].
Our calculations show that the three-body term is larger than the two-body one and predominantly attractive. Consequently, the intermonomer NaLi distance is shorter than the intramolecular one. The interatomic distances in the complex are closer to the classical turning point of diatomic potentials than to their minima. The repulsive part of the two-body potential is less accurate than its long-range part. This introduces additional uncertainty to the overall potential, which can be as high as 16 cm\({}^{-1}\) (an additional uncertainty of \(\sim\)3 cm\({}^{-1}\) of the three-body interaction), estimated from the difference between an experimental two-body potential for Na\({}_{2}\)[60] and a highly accurate theoretical potential for NaLi [15].
The native uncertainty of our state-of-the-art electronic calculation of the three-body term is relatively small, although still too large to predict molecular scattering lengths. Additional uncertainty is introduced by using the rigid rotor approximation to describe NaLi within NaLi+Na. This approximation works very well for van der Waals complexes of deeply-bound molecules, but for high-spin weakly bound alkali-metal systems may result in non-negligible errors [61]. Our calculations show that the global equilibrium geometry for NaLi+Na with relaxed distance in NaLi has isosceles triangular symmetry and internuclear distances smaller by around 20% than in the diatomic molecules due to large three-body forces, the magnitude of which monotonically increases with decreasing atom-molecule separation. To reflect the errors related to the difference in three-body interaction at the global minimum and the minimum within the rigid rotor approximation used, as well as the increased uncertainty of the two-body interactions at these shorter interatomic distances, we used a more conservative estimate of the uncertainty of the three-body term in the remainder of this paper.
## III Scattering Calculations
### Basis set and Hamiltonian
In our coupled-channels calculations the scattering wavefunction is expanded in a basis of fully coupled channel functions of the form
\[|(NL)J(s\ s_{3})S;\mathcal{J}\mathcal{M}\rangle=\sum_{M_{J},M_{S}} \langle JM_{J}SM_{S}|\mathcal{J}\mathcal{M}\rangle\] \[\times|(NL)JM_{J}\rangle|(s\ s_{3})SM_{S}\rangle \tag{9}\]
where \(\langle j_{1}m_{1}j_{2}m_{2}|jm\rangle\) is a Clebsch-Gordan coefficient. The quantum number \(N\) represents the rotational angular momentum of the NaLi molecule, and \(L\) the angular momentum associated with the end-over-end rotation of the atom and molecule about one another, which are Clebsch-Gordan coupled to a total mechanical angular momentum \(J\) with \(B\)-field projection \(M_{J}\). Similarly, \(s\) denotes the molecular electronic spin resultant from coupling atomic spins \(s_{1}\) and \(s_{2}\) within NaLi, whereas \(s_{3}\) is the electronic spin of Na, and \(S\) the total electronic spin with \(B\)-field projection \(M_{S}\). In the coupled basis, \(J\) and \(S\) are subsequently coupled to a total angular momentum \(\mathcal{J}\) with magnetic-field projection \(\mathcal{M}=M_{J}+M_{S}\). Nuclear spin is initially not taken into account, see Sec. III.7. We assume the molecular bond length fixed at the triplet equilibrium position.
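As a small illustration of how such coupled functions are assembled, the sketch below expands a coupled spin state \(|(s\,s_{3})SM_{S}\rangle\) in product states using Clebsch-Gordan coefficients; the specific quantum numbers are chosen only as an example.

```python
from sympy import Rational
from sympy.physics.quantum.cg import CG

# Expand |(s s3) S M_S> in product states |s m_s>|s3 m_s3>, with s = 1
# (triplet NaLi) and s3 = 1/2 (Na); here S = 3/2, M_S = 1/2 as an example.
s, s3 = Rational(1), Rational(1, 2)
S_tot, M_S = Rational(3, 2), Rational(1, 2)
for m_s in (-1, 0, 1):
    m_s3 = M_S - m_s
    if abs(m_s3) <= s3:
        coeff = CG(s, m_s, s3, m_s3, S_tot, M_S).doit()
        print(f"|m_s={m_s}, m_s3={m_s3}>  coefficient: {coeff}")
```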
The total angular momentum projection along the magnetic field axis \(\mathcal{M}=M_{J}+M_{S}\) is strictly conserved. For large enough magnetic field \(M_{S}\) becomes a good quantum number, and therefore also \(M_{J}=\mathcal{M}-M_{S}\) is good. The Na atomic spin is \(s_{3}=1/2\) throughout. Due to the large singlet-triplet splitting in the NaLi molecule, \(s=0\) or 1 is also a good quantum number. For a separated atom and molecule, \(m_{s}\) and \(m_{s_{3}}\) would separately become good quantum numbers, but at chemically relevant distances the exchange splitting between the doublet and quartet interaction potentials is dominant such that \(S=1/2\) and \(3/2\) are good quantum numbers. Hence, we can effectively consider each \(|SM_{S}\rangle\) state separately, with only perturbatively weak couplings between them. For each of these spin channels, there are strong and anisotropic interactions that couple different \(N\) and \(L\) channels, but conserve \(J\) and \(M_{J}\). The initial channel corresponds to \(s\)-wave collisions in the spin-stretched rotational ground state, \(|(NL)JM_{J}\rangle|SM_{S}\rangle=|(00)00\rangle|3/2\ 3/2\rangle\) for the low-field seeking upper spin-stretched state, and \(|(00)00\rangle|3/2,\ -3/2\rangle\) for the high-field seeking lower spin-stretched state, respectively.
In our coupled-channels calculations we include the electronic interaction described in Sec. II,
\[\hat{V}(R,\theta)=\sum_{L}P_{L}(\cos\theta)\times\Big{[}V_{L}^{(0 )}(R)+\hat{s}_{1}\cdot\hat{s}_{2}V_{L}^{(12)}(R)\] \[+\hat{s}_{1}\cdot\hat{s}_{3}V_{L}^{(13)}(R)+\hat{s}_{2}\cdot\hat{ s}_{3}V_{L}^{(23)}(R)+V_{L}^{(3b)}(R)\Big{]}, \tag{10}\]
the Zeeman interaction with the magnetic field,
\[\hat{H}_{\text{Zeeman}}=\mu_{B}g_{e}B(\hat{s}_{1,z}+\hat{s}_{2,z}+\hat{s}_{3,z }), \tag{11}\]
the magnetic dipole-dipole interaction
\[\hat{V}_{\text{magn.dip}}=-\sqrt{30}\frac{(\mu_{B}g_{e}\alpha)^{2}}{R^{3}}\left[\left[\hat{s}\otimes\hat{s}_{3}\right]^{(2)}\otimes C^{(2)}(\hat{R})\right]_{0}^{(0)}, \tag{12}\]
and the spin-rotation
\[\hat{H}_{\text{spin-rotation}}=\gamma_{s}\hat{N}\cdot\hat{s}, \tag{13}\]
and spin-spin couplings
\[\hat{H}_{\rm spin-spin}=\lambda_{s}\sqrt{30}/3\left[\left[\hat{s} \otimes\hat{s}\right]^{(2)}\otimes C^{(2)}(\hat{r}_{\rm NaLi})\right]_{0}^{(0)}, \tag{14}\]
where \([\hat{A}\otimes\hat{B}]_{q}^{(k)}\) indicates a tensor product, see the Appendix. The spin-spin coupling parameter \(\lambda_{s}=0.0189\) cm\({}^{-1}\) is computed here, whereas for the spin-rotation coupling we use the \(\gamma_{s}=0.005\) cm\({}^{-1}\) upper limit estimated in Ref. [15]. The reader is referred to the Appendix for a full description of the Hamiltonian and for matrix elements in the channel basis.
The spin-rotation coupling \(\gamma_{s}\hat{N}\cdot\hat{s}\) and spin-spin coupling \(\lambda_{s}\left[\left(\hat{s}\cdot\hat{r}_{\rm NaLi}\right)\left(\hat{s}\cdot \hat{r}_{\rm NaLi}\right)-\frac{1}{3}\hat{s}^{2}\right]\) play an important role here as these are spin-dependent couplings that are not diagonal in \(S\) and \(M_{S}\). Losses by Zeeman relaxation or transitions to the chemically reactive doublet potential must involve these couplings. It is worth noting that if the electronic interaction were isotropic, _i.e._, independent of the relative orientation of the atom and molecule, these spin-dependent couplings would not be enough to lead to Zeeman relaxation. However, after accounting for the spin-rotation and spin-spin coupling, molecular eigenstates in different Zeeman levels have different rotational-state decompositions, and hence are coupled by the anisotropic part of the electronic interaction. That is, physically, the strong anisotropic electronic interaction can reorient the molecule which effectively flips the electronic spin because the spin is coupled to the molecular axis by spin-rotation and spin-spin coupling. The role of anisotropic interactions implies that transitions occur at short range and cannot be described by simpler models based on isotropic long-range \(R^{-6}\) interactions alone [31].
After having described the atom-molecule interactions, approximately good quantum numbers, and the critical role of various coupling mechanisms, we continue by performing full quantum mechanical coupled-channels calculations. For computational tractability we will start out ignoring hyperfine and vibrational degrees of freedom. By ignoring the vibrational coordinate of the molecule, and fixing the molecular bond length to the triplet NaLi equilibrium bond length, our model cannot directly describe chemical reactions. The only energetically accessible products are to form singlet NaLi or Na\({}_{2}\) molecules. Chemical reactions will occur on the low-spin potential. In our coupled-channels calculations, we model these by imposing an absorbing boundary condition at \(R_{\rm min}=4.5\ a_{0}\), which can be reached on the low-spin potential, but not on the high-spin potential which is highly repulsive at these short distances, see Fig. 1. This choice seems arbitrary but it does not affect the results as long as the boundary condition is imposed in a region where the high-spin potential is highly repulsive and simultaneously the low-spin potential is strongly attractive. Any flux that reaches this region in the low-spin state will continue classically to smaller \(R\), such that it does not matter where exactly in this region one matches to the absorbing boundary condition. We have confirmed this numerically for \(R_{\rm min}\) between 4 and 4.5 \(a_{0}\).
### Cross sections and rate coefficients
We solve the coupled-channels equations numerically using the renormalized Numerov propagator. Using the method of Ref. [62], described in more detail in Ref. [63], we match to reactive boundary conditions at short range and to the usual scattering boundary conditions at long range. Again, the short-range boundary condition is imposed at \(R_{\rm min}=4.5\ a_{0}\), a separation that can be reached only on the low-spin potential and effectively models chemical reactions. This procedure yields an "inelastic"
Figure 1: **Interaction potentials for the NaLi+Na collision complex.** The potentials are shown as a function of \(R\), the center of mass distance between the atom and molecule, and \(\theta\) the Jacobi angle between the orientation of the molecule and the approach of the atom, with \(\theta=0\) corresponding to Na-NaLi and \(\theta=180^{\circ}\) to Na-LiNa. The molecular bond length is fixed at the triplet equilibrium bond length \(r=8.8\ a_{0}\). **(a)** high-spin quartet \(S=3/2\) potential. **(b)** low-spin doublet \(S=1/2\) potential. In principle there are two doublet potentials that can have an avoided crossing. What is shown here is the doublet potential for a pure triplet NaLi molecule, \(s=1\).
\(S\)-matrix, \(\mathbf{S}^{\rm(LR)}\) and a "reactive" \(S\)-matrix \(\mathbf{S}^{\rm(SR)}\). The elements of the inelastic \(S\)-matrix, \(S^{\rm(LR)}_{f,L^{\prime};i,L}\), describe the amplitudes for scattering from an initial state \(i\) and partial wave \(L\), to a final state \(f\) and partial wave \(L^{\prime}\). The elements of the reactive \(S\)-matrix, \(S^{\rm(SR)}_{r;i,L}\) describe scattering from an initial state \(i\) and partial wave \(L\) into a reactive channel \(r\) at short range. The reactive channels are determined by diagonalizing the Hamiltonian excluding radial kinetic energy at the short-range matching point. From the \(S\)-matrices one can determine the elastic cross section
\[\sigma^{\rm elastic}=\frac{\pi}{k^{2}}\sum_{L,L^{\prime}}\left|\delta_{L,L^{ \prime}}-S^{\rm(LR)}_{i,L^{\prime};i,L}\right|^{2}, \tag{15}\]
where \(i\) is the initial state and \(k=\hbar^{-1}\sqrt{2\mu E}\) is the initial wavenumber. From the \(S\)-matrices one can also determine the inelastic rate coefficient
\[k^{\rm(inelastic)}=\frac{\pi}{\mu k}\sum_{L,L^{\prime},f\neq i}\left|S^{\rm( LR)}_{f,L^{\prime};i,L}\right|^{2}, \tag{16}\]
the rate coefficient for loss at short range
\[k^{\rm(short)}=\frac{\pi}{\mu k}\sum_{L,r}\left|S^{\rm(SR)}_{r;i,L}\right|^{2}, \tag{17}\]
and we define a total loss rate coefficient, \(k^{\rm(loss)}=k^{\rm(inelastic)}+k^{\rm(short)}\). Both the elastic cross section and the rate coefficients are energy independent for energies well below the van der Waals energy, which is on the order of 500 \(\mu\)K for NaLi+Na. We computed the cross sections and loss rate coefficients at a single collision energy of 4 \(\mu\)K.
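The following minimal sketch shows how Eqs. (15)-(17) turn S-matrix elements into the observables above. The arrays `s_lr` and `s_sr` are assumed outputs of the propagation (not computed here), indexed by channels given as (state, \(L\)) pairs; everything is in atomic units with \(\hbar=1\).

```python
import numpy as np

def observables(s_lr, s_sr, channels, init, mu, E):
    """Elastic cross section (Eq. 15), inelastic rate (Eq. 16) and short-range
    loss rate (Eq. 17) from the long-range and short-range S-matrices.
    channels is a list of (state, L) pairs labeling the rows/columns of s_lr;
    the columns of s_sr are labeled the same way, its rows by reactive channels."""
    k = np.sqrt(2.0 * mu * E)
    sigma_el = k_inel = k_short = 0.0
    for j, (state_j, L) in enumerate(channels):
        if state_j != init:
            continue                                  # sum only over initial partial waves
        for i, (state_i, Lp) in enumerate(channels):
            if state_i == init:                       # elastic block: |delta_{L,L'} - S|^2
                delta = 1.0 if i == j else 0.0
                sigma_el += abs(delta - s_lr[i, j]) ** 2
            else:                                     # inelastic block
                k_inel += abs(s_lr[i, j]) ** 2
        k_short += np.sum(np.abs(s_sr[:, j]) ** 2)    # flux absorbed at short range
    return (np.pi / k**2 * sigma_el,
            np.pi / (mu * k) * k_inel,
            np.pi / (mu * k) * k_short)
```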
### Convergence
First we consider convergence of the calculation with \(J_{\rm max}\), the highest value of \(J\) included in the basis set. As explained above, we can essentially consider each spin channel \(|SM_{S}\rangle\) independently with only perturbative couplings between them, and within each spin level, we can consider \(J\) and \(M_{J}\) to be good quantum numbers. Spin-rotation coupling has the selection rules \(\Delta J=0\to 1\) and \(|\Delta M_{S}|\leq 1\). Spin-spin coupling has the selection rules \(\Delta J=0\to 2\) and \(|\Delta M_{S}|\leq 2\). Hence, if these spin-dependent couplings act only perturbatively, we expect that cross sections do not change when including functions with \(J=3\) or higher, and that cross sections scale quadratically in the spin-rotation and spin-spin coupling strengths. This is exactly what we observe in Fig. 2(a). Inclusion of functions with \(J=3\) does not affect the cross sections (dots), and the cross sections scale perturbatively with the spin-rotation and spin-spin coupling strengths (crosses). The contribution of the magnetic dipolar interaction between the atomic and molecular magnetic moments is far smaller and is discussed in less detail for this reason, although it is included in the calculation. Finally, we note that the mechanism responsible for the loss rates requires anisotropic electronic interactions, but the anisotropy is not perturbatively weak.
The convergence of the calculation with \(N_{\rm max}\) for fixed \(J_{\rm max}=1\) is shown in Fig. 2(b). The expectation is that the calculation converges when all locally open channels are included, _i.e._, when the excitation energies of the excluded channels are all higher than the depth of the interaction potential [64; 65; 66]. If we estimate the required \(N_{\rm max}\) based on the depth of the spin-stretched potential of 800 cm\({}^{-1}\), one would expect the calculation to converge with \(N_{\rm max}\approx 70\). Instead, we see that the calculation converges at a much higher \(N_{\rm max}=350\). The reason for this appears to be that these higher rotational channels continue to contribute locally open channels in the low-spin state that affect the background scattering length.
As we have seen, converging the scattering calculation also requires including functions with \(J=2\). Scattering calculations for basis sets truncated at \(N_{\rm max}=350\) and \(J_{\rm max}=2\) become prohibitively computationally intensive. Fortunately, the typical number of resonances appears to converge more rapidly with \(N_{\rm max}\), and can be predicted with a much lower truncation of \(N\). Figure 2(c) shows loss rates computed with various \(N_{\rm max}\) up to 30 for fixed \(J_{\rm max}=2\). We will come back to the convergence of the typical density of resonances after discussing the dependence on the interaction potential.
Figure 3 shows the sensitivity of the scattering rates to the interaction potential, parameterized by \(\lambda\). Here, we scale by a factor \(1+\lambda\) the non-additive three-body part of the interaction potential, _i.e._, the part that is computed _ab initio_ and is uncertain up to an estimated 3 % within the rigid rotor approximation for NaLi, and considerably larger when this approximation is relaxed, see Sec. II.5. Figure 3 is computed with \(J_{\rm max}=2\) and \(N_{\rm max}=20\). By modifying the three-body interaction by less than \(10^{-4}\), we find that the resonances are unaffected. By modifying the potential by about \(10^{-3}\), we find that the resonances start to shift such that realistically their positions are completely undetermined. When the scaling of the potential is at the level of several percents, we find that horizontal \(B\)-field independent resonances appear. This occurs when the potential is scaled to support a resonance near zero collision energy for the _initial_ Zeeman level, which is therefore not tuned by the magnetic field. Therefore, with realistic uncertainties in the three-body part of the interaction and the rigid rotor approximation, both the position of magnetically tunable resonances and the background scattering length are completely undetermined but we can still draw probabilistic conclusions about a typical range of values for the scattering length.
Figure 4 shows a similar "\(\lambda\)-scan", but now for \(N_{\rm max}=6\) which artificially reduces the density of resonances somewhat and produces a more sparse figure. In panel 4(a) we have scaled the non-additive three-body part of the interaction potential for both the high-spin and the low-spin potential, as before. In panel 4(b), by
contrast, we have assumed this uncertainty is entirely in the low-spin potential, and we have kept the high-spin potential constant. In this case, we no longer observe horizontal \(B\)-field resonances as these are supported by the initial top Zeeman energy level, which is a high-spin state. We find that several of the resonances are now completely independent of the scaling of the low-spin potential up to \(\lambda=0.1\). This indicates the resonances are supported by the non-reactive high-spin potential. When the scaling of the low-spin potential reaches the percent level, coupling to the low-spin state starts to affect the collision rate, especially for the broader features.
The analysis above shows that the resonances are supported by the high-spin potential, and that their positions are sensitive to \(0.1\)\(\%\) uncertainty in the non-additive three-body part of the interaction potential. This means that the _ab initio_ prediction of the resonance positions is beyond the capability of state-of-the-art theory. In addition to highly accurate atom-molecule interactions, the quantitative prediction of resonance positions would require converged coupled-channels calculations that also fully account for hyperfine, molecular vibrations, and chemical reactions. This is not attempted here, and instead we interpret only the typical number of resonances, their widths, and coupling mechanisms.
Figure 3: **Dependence of loss rates on scaling of three-body interactions.** Loss rates as a function of \(B\)-field and scaling of the three-body interaction by a factor \(1+\lambda\) for \(N_{\rm max}=20\), shown for the upper stretched state. This shows that due to the uncertainty of \(\lambda\) of at least several percent, the background loss and resonance positions are undetermined. Several \(B\)-independent resonances are observed, where the \(\lambda\)-scaling tunes the initial spin-stretched potential such that it supports a resonance, i.e., a bound state near zero energy. Hence, prediction of resonance positions requires knowledge of the interaction potentials to an accuracy that cannot realistically be achieved by _ab initio_ calculations.
Figure 2: **Convergence of the coupled-channels calculations with truncation of the basis set.****(a)** Calculation with fixed \(N_{\rm max}=20\). Cross sections converge with \(J_{\rm max}=2\). When the basis is truncated at \(J_{\rm max}=1\), the cross sections scale quadratically with the spin-rotation coupling constant \(\gamma_{s}\). This is demonstrated by agreement with the crosses, which show one-quarter of the cross section obtained by scaling \(\gamma_{s}\) up by a factor of two. When the basis is truncated at \(J_{\rm max}=2\), spin-spin coupling contributes and the cross section scales perturbatively with both coupling constants (crosses). Spin-spin coupling is typically dominant, but not by a large factor, so both mechanisms contribute. Magnetic dipole-dipole coupling does not play an important role. **(b)** Calculation with fixed \(J_{\rm max}=1\). The scattering cross section can be converged with \(N_{\rm max}\), but requires an impractically large basis set. **(c)** Calculation with fixed \(J_{\rm max}=2\). The scattering cross section is not converged with \(N_{\rm max}\), but the density and typical width of resonances are not strongly dependent on \(N_{\rm max}\).
### Background elastic and inelastic scattering
Next we consider interaction potentials obtained by scaling the non-additive three-body interactions by \(1+\lambda\) with \(\lambda\) between \(-0.1\) and \(+0.1\), on the order of the uncertainty in the interaction potential. We consider each of these Hamiltonians to be a statistically independent realization of the physical NaLi+Na system. By performing scattering calculations for these different realizations, we gather statistics about the number of resonances and their widths.
We first consider the nonresonant background. Figure 5 shows the dependence of the elastic cross section and loss rate coefficient on scaling of the non-additive three-body interaction by \(1+\lambda\) at a fixed magnetic field \(B=1500\) G, \(N_{\rm max}=20\). These calculations are performed for three initial states: the top stretched state, where the molecular electronic spin projection \(m_{s}=1\) and the atomic electronic spin projection \(m_{s_{3}}=1/2\); the bottom stretched state, where \(m_{s}=-1\) and \(m_{s_{3}}=-1/2\); and the non-stretched state, \(m_{s}=1\), \(m_{s_{3}}=-1/2\). The elastic cross sections are nearly identical in the two stretched states, and different but of the same order in the non-stretched initial state. For the loss rate coefficient, the _typical_ behavior is also that the cross sections are similar between the two stretched states, while the differences at fixed \(\lambda\) can be as large as an order of magnitude. For the non-stretched initial state the loss rate coefficient is significantly larger. This can be understood
Figure 4: **Spin-dependence of three-body interactions.** Loss rates are shown as a function of \(B\)-field and scalings of the interactions potential. Calculations are done with \(N_{\rm max}=6\), which artificially reduced the number of resonances compared to Fig. 3. **(a)** The spin-independent three-body interaction is scaled by \(1+\lambda\). In this case, all resonance positions depend on the scaling of the potential, the background loss rate varies strongly, and new \(B\)-independent resonances appear by scaling the initial-state potential such that supports a bound state near zero energy. **(b)** The spin splitting is scaled by \(1+\lambda\), while the spin-stretched potential is kept fixed. Therefore, only the doublet potential is varied. The resonances near \(150\), \(350\), \(1150\), and \(1250\) G are independent of the scaling of the low-spin potentials, i.e., they are completely supported by the non-reactive spin-stretched potential. The feature just above \(1000\) G has a weak dependence on the scaling of the doublet potentials, and several broader features such as that around \(500\) G have a stronger dependence.
Figure 5: **Cross sections and rates as a function of \(\lambda\) scaling.****(a)** The elastic collision cross section obtained by scaling the non-additive three-body interaction by \(1+\lambda\) at a fixed magnetic field \(B=1500\) G. Different colors correspond to different spin states, \(|s\,m_{s}\rangle|s_{3}\,m_{s_{3}}\rangle\), as indicated. **(b)** Loss rate coefficients as a function of \(\lambda\) scaling for fixed \(B=1500\) G.
as follows: For non-stretched states the collision can directly proceed on the reactive potential and lead to loss. For stretched states, the initial potential is nonreactive such that loss processes require a spin flip which is unlikely since it is perturbative in the weak spin-spin and spin-rotation coupling, as we have seen above.
In Figure 6 we show again the effect of scaling the non-additive three-body interaction by \(1+\lambda\), but ordered by increasing cross section and loss rate. We interpret the resulting horizontal axis as the cumulative probability to obtain a specific cross section or rate coefficient [65]. Panel 6(a) shows the elastic cross sections, for which all cumulative probability distributions are similar and agree closely with the expected distribution for an \(R^{-6}\) potential [67], shown as the black markers. This distribution is characterized by a mean scattering length \(\bar{a}=0.47799(2\mu C_{6}/\hbar^{2})^{1/4}\approx 52~{}a_{0}\). The experimentally measured scattering length is \(260~{}a_{0}\), so the corresponding elastic cross section is larger than one might expect.
Panel 6(b) shows the cumulative probability distribution of the loss rate coefficient. The two spin-stretched states show a similar distribution, whereas the typical loss rate is much higher in the non-stretched state, as observed before. Similar probability distributions are obtained with \(N_{\rm max}=10\) and \(20\). In the spin-stretched states, the background loss rate is likely to lie between \(10^{-13}\) and a few \(10^{-12}\) cm\({}^{3}\)/s. Experimentally, a typical background loss rate of \(5\times 10^{-12}\) cm\({}^{3}\)/s is observed, which is on the higher end of this distribution function, but not in disagreement with it.
Finally, in panel 6(c), we consider the background ratio of elastic-to-inelastic collisions, \(\gamma\). We compute this ratio as
\[\gamma=\frac{\sigma^{\rm(elastic)}\langle v\rangle}{k^{\rm(loss)}}, \tag{18}\]
where \(\langle v\rangle\) is the thermally averaged velocity at \(T=2~{}\mu\)K [13], and we take for the elastic cross section \(4\pi a^{2}\), where \(a=260~{}a_{0}\) is the measured scattering length. The resulting ratio of elastic-to-inelastic collisions is likely between \(100\) and \(1000\) in the spin-stretched states, in agreement with the experimental value of \(300\) [13].
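A minimal numerical sketch of this estimate is given below. The thermally averaged velocity is taken as the mean relative thermal speed \(\langle v\rangle=\sqrt{8k_{B}T/\pi\mu}\), and \({}^{23}\)Na\({}^{6}\)Li is assumed for the reduced mass; both choices are assumptions made for illustration, as is the example loss rate.

```python
import numpy as np
from scipy import constants

a0 = constants.physical_constants["Bohr radius"][0]
amu = constants.physical_constants["atomic mass constant"][0]

def elastic_to_inelastic_ratio(k_loss_cm3_s, a_scatt=260 * a0, T=2e-6,
                               m_mol=(22.99 + 6.015) * amu, m_atom=22.99 * amu):
    """gamma = sigma_elastic <v> / k_loss, Eq. (18), with sigma_elastic = 4 pi a^2."""
    mu = m_mol * m_atom / (m_mol + m_atom)                 # atom-molecule reduced mass
    v_mean = np.sqrt(8 * constants.k * T / (np.pi * mu))   # mean thermal relative speed, m/s
    sigma_el = 4 * np.pi * a_scatt ** 2                    # m^2
    return sigma_el * v_mean / (k_loss_cm3_s * 1e-6)       # convert cm^3/s -> m^3/s

# Illustrative loss rate in the middle of the computed distribution:
print(f"gamma ~ {elastic_to_inelastic_ratio(5e-13):.0f}")
```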
### Density of resonances
Next, we compare the typical number of resonances to the density of states of the spin-stretched NaLi+Na collision complex. Figure 7 shows typical magnetic field scans for \(N_{\rm max}=2\), \(10\), and \(30\) for both the top and bottom spin-stretched states. The density of resonances increases with \(N_{\rm max}\) and is higher for the lower spin-stretched state than it is for the upper one.
In order to explain the observed number of resonances, we calculate their density of states. Again, the resonances are supported by the spin-stretched interaction potential, and the magnetically tunable resonances correspond to Zeeman sub-states that are different from
Figure 6: **Probability distribution of collision rates.** Cumulative probability distribution obtained by sorting cross sections and rate coefficients for different \(\lambda\) scaling at fixed \(B=1500\) G [65]. Different colors correspond to different states, \(|sm_{s}\rangle|s_{3}m_{s_{3}}\rangle\), as indicated. **(a)** Distribution of the elastic cross section, **(b)** loss rate coefficient, and **(c)** the ratio of elastic-to-inelastic collisions. Solid (dashed) lines correspond to \(N_{\rm max}=20\) (\(10\)). In panel(a) the black markers indicated the expected distribution of elastic cross sections for a van der Waals potential [67].
the initial state. This means each resonance can be assigned total electron spin \(S\), \(M_{S}\) quantum numbers. Spin is coupled to the spatial degrees of freedom perturbatively through spin-spin and spin-rotation coupling. This means we can assign each state total mechanical angular momentum \(J\), \(M_{J}\), and the spin-rotation and spin-spin selection rules tell us which values of these nearly-good quantum numbers contribute. Spin-rotation coupling is
Figure 7: **Representative magnetic field scans for different basis set truncation.** The figure shows representative magnetic field scans of the collisional loss rate coefficient. Different panels correspond to truncation of the basis set at \(N_{\rm max}=2\), 10, 30, as indicated. The left (right) hand column shows results for the upper (lower) spin-stretched state.
rank-1 in the spin and spatial degrees of freedom and couples to states with \(J=1\) and \(|\Delta M_{S}|\leq 1\). The density of such states is determined by the \(J=1\) density of states on the spin-stretched potential. The total number of bound states below energy \(E\) can be computed quasiclassically using a phase-space integral [18]. We determine the total number of resonances expected as the total number of bound states less than the Zeeman energy shift for \(\Delta M_{S}=1\) below threshold. Spin-spin coupling is rank-2 in the spin and spatial degrees of freedom and couples to \(J=2\) and \(|\Delta M_{S}|\leq 2\) states. We similarly determine the number of resonances by computing the number of \(J=2\) states bound by less than the Zeeman shifts for \(\Delta M_{S}=1\) and 2.
We computed the total numbers of bound states using the phase-space integrals of Ref. [18],
\[N^{\rm(3D)}=(2J+1)\frac{g_{\rm parity}8\pi\sqrt{2}m_{\rm Na}^{2 }m_{\rm Li}}{3h^{3}(2m_{\rm Na}+m_{\rm Li})}\] \[\times\iiint\frac{rR}{\sqrt{\mu R^{2}+\mu_{\rm NaLi}r^{2}}}\left[ E-V(\mathbf{q})\right]^{3/2}\ dr\ dR\ d\theta, \tag{19}\]
as well as the number of bound states for the NaLi vibrational coordinate \(r=r_{e}\) fixed at the equilibrium distance
\[N^{\rm(2D)}=(2J+1)\frac{g_{\rm parity}2\pi m_{\rm Na}^{2}m_{\rm Li }}{h^{2}(2m_{\rm Na}+m_{\rm Li})\sqrt{\mu_{\rm NaLi}}}\] \[\times\iint\frac{rR}{\sqrt{\mu R^{2}+\mu_{\rm NaLi}r^{2}}}\left[ E-V(\mathbf{q})\right]\ dR\ d\theta, \tag{20}\]
where \(\mathbf{q}=\{R,r,\theta\}\) are the Jacobi coordinates, \(V(\mathbf{q})\) is the interaction potential, \(m_{x}\) are the atomic masses, and \(g_{\rm parity}=1/2\) is a factor that accounts for parity conservation. From this we find that we should expect to encounter approximately 14 Feshbach resonances when excluding the vibrational coordinate. This is a useful reference for the scattering calculations, where the vibrational coordinate is fixed. When vibrations are accounted for, the number of resonances increases from 14 to 21. Hence, the NaLi molecular vibrations contribute significantly to the density of states of the complex, but not by orders of magnitude. Consequently, a large fraction of the resonances corresponds to NaLi in the vibrational ground state, justifying freezing the vibrational coordinate in our scattering calculations. Figure 8(a) shows the quasiclassical density of resonances as a function of \(R_{\rm max}\), the upper integration limit used in evaluating Eqs. (19) and (20). This shows that the resonances are supported by atom-molecule distances up to approximately 40 \(a_{0}\), which is considerably shorter than the range of van der Waals interactions \(R_{6}=(2\mu C_{6}/\hbar^{2})^{1/4}\approx 109\ a_{0}\) in the system, suggesting that in some sense the resonances are of short-range nature. Nevertheless, at atom-molecule distances around 40 \(a_{0}\) the electronic interaction is close to its \(R^{-6}\) asymptotic form, and the vast majority of the density of states is hence supported by the van der Waals potential. This was previously also argued for NaK+K collisions [31], and may more generally be true of atom+molecule collisions [68]. Finally, we will see that the coupling mechanism for these resonances involves the anisotropy of the atom+molecule interaction, and hence this cannot be understood in terms of the long-range interaction, which is isotropic for a molecule in its rotational ground state.
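The sketch below is a literal numerical transcription of the frozen-bond-length count of Eq. (20); the potential surface \(V(R,\theta)\) must be supplied by the caller (the _ab initio_ surface itself is not reproduced here), \({}^{23}\)Na and \({}^{6}\)Li masses are assumed, and all quantities are in SI units.

```python
import numpy as np
from scipy import constants

def n_states_2d(V, E, r_e, J=1, R_min=3e-10, R_max=3e-9, nR=600, ntheta=300):
    """Quasiclassical number of J-states below energy E for the NaLi bond frozen
    at r = r_e, following Eq. (20), with V(R, theta) in joules and lengths in meters."""
    amu = constants.physical_constants["atomic mass constant"][0]
    m_Na, m_Li = 22.99 * amu, 6.015 * amu
    mu_NaLi = m_Na * m_Li / (m_Na + m_Li)                 # diatomic reduced mass
    mu = m_Na * (m_Na + m_Li) / (2 * m_Na + m_Li)         # atom-molecule reduced mass
    g_parity, h = 0.5, constants.h
    R = np.linspace(R_min, R_max, nR)
    theta = np.linspace(0.0, np.pi, ntheta)
    Rg, Tg = np.meshgrid(R, theta, indexing="ij")
    allowed = np.clip(E - V(Rg, Tg), 0.0, None)           # classically allowed region only
    integrand = r_e * Rg / np.sqrt(mu * Rg**2 + mu_NaLi * r_e**2) * allowed
    integral = np.sum(integrand) * (R[1] - R[0]) * (theta[1] - theta[0])
    prefactor = (2 * J + 1) * g_parity * 2 * np.pi * m_Na**2 * m_Li / (
        h**2 * (2 * m_Na + m_Li) * np.sqrt(mu_NaLi))
    return prefactor * integral
```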
Because the density of states increases with \(2J+1\), and because for \(J=2\) resonances can occur with both \(|\Delta M_{S}|=1\) and 2, the typical density of resonances for \(J=2\) is a factor five larger than for \(J=1\). Hence we can conclude that most of the resonances observed, approximately 5 out of 6 resonances, can be assigned \(J=2\) and are due to spin-spin coupling.
We also compute the density of states quantum mechanically using the same channel basis as used in the scattering calculations. This is useful for a direct comparison
Figure 8: **Number of resonances.** The plot shows the expected number of resonances between 0 and 1500 G from (**a**) quasiclassical phase-integrals Eqs. (19) and (20) as a function of the maximum molecule-atom center-of-mass separation \(R_{\rm max}\) up to which the integrals are evaluated and (**b**) quantum mechanical calculations of bound states on each adiabatic potential energy curve, as a function of \(N_{\rm max}\) that truncates the channel basis set.
to the scattering calculations. To this end, we first compute adiabatic potential energy curves by diagonalizing the Hamiltonian excluding radial kinetic energy at every grid point. On each adiabatic potential curve, we compute bound state wavefunctions using a sinc-function discrete variable representation [69]. We record the number of bound states below the initial state threshold. We repeat this at both ends of the magnetic field range, and the difference in the number of bound states provides an estimate for the number of resonances. The results are shown in Fig. 8(b). As can be seen, the total number of resonances increases with the \(N_{\rm max}\) truncation of the basis set, and converges around \(N_{\rm max}=30\). At \(N_{\rm max}=30\) the highest adiabatic potentials for the spin-stretched state no longer support bound states, and thus the typical number of resonances converges, although the background scattering length does not converge until much higher \(N_{\rm max}\) due to the stronger interactions in the low-spin states. The total number of resonances is close to the quasi-classical estimate, but not in perfect agreement with it due to the light masses and relatively weak interactions in the spin-stretched state. The total number of resonances matches observations for a typical \(B\)-field scan in the lower spin-stretched state. However, in the upper spin-stretched state we find a similar density of states, although typically a much lower number of resonances is observed.
### Missing resonances in the upper spin-stretched state
One might expect that the higher number of resonances in the lower spin-stretched state comes from quasi-bound states in the Zeeman-excited \(N=0\) states. However, this would be reflected in a higher density of states for the lower stretched state in the calculation above. What happens is not that the total number of states is higher for the lower state, but rather that we do not observe every state as a resonance in the upper spin-stretched state. This is caused by fast decay of some of the resonances to the lower-lying Zeeman levels. This is illustrated in Fig. 9, which shows typical \(B\)-field scans from a calculation that excludes non-initial Zeeman states in \(N=0\). In the case of the lower stretched state, the typical number of resonances is not reduced by omitting the excited Zeeman levels, _i.e._, these do not cause the higher number of resonances. In the case of the upper stretched state, the typical number of resonances is increased by _omitting_ the lower-lying Zeeman states. Some of these resonances were previously not visible due to fast loss to the lower-lying Zeeman states.
Loss to lower-lying Zeeman states with \(N=0\) can occur only via spin-spin coupling. Spin-rotation coupling cannot simultaneously fulfil the parity selection rule and the selection rule \(\Delta J=1\). The argument is as follows. In the rotational ground state \(N=0\), the orbital and total mechanical angular momenta are equal, \(J=L\). For \(J=1\) this means that all levels in the rotational ground state have \(L=1\), and hence odd parity. Since the parity of \(N+L\) is conserved, these inelastic exit channels are inaccessible. Note that spin-rotation coupling can still lead to resonances and chemical loss, since in these cases \(N\) changes.
Figure 10 once more shows typical \(B\)-field scans, but now including \(J=0,1\) states only. This artificially removes the contribution of the spin-spin interaction, which couples to \(J=2\) states. Therefore, no decay to lower Zeeman states with \(N=0\) is possible. This removes the effect discussed previously, where fast inelastic scattering to lower Zeeman states renders fewer resonances observable in the upper spin-stretched state. The qualitative differences between the upper and lower spin-stretched states thus occur only for \(J=2\) resonances; they are attributed to spin-spin coupling and would therefore not occur for collisions between atoms and spin-doublet molecules.
### Hyperfine interactions
Next, we investigate the effect of hyperfine interactions by including nuclear spin in the coupled-channels scattering calculations. To this end, the basis set of Eq. (9) is extended to functions of the form
\[|(NL)J(s\ s_{3})S;\mathcal{J}\mathcal{M}\rangle|i_{1}m_{i_{1}}\rangle|i_{2}m_{i_{2}}\rangle|i_{3}m_{i_{3}}\rangle. \tag{21}\]
The Na nuclear spin is \(i_{1}=i_{3}=3/2\) and the Li nuclear spin is \(i_{2}=1\). Only functions with \(\mathcal{M}+m_{i_{1}}+m_{i_{2}}+m_{i_{3}}\) equal to \(+11/2\) are included for the upper spin-stretched state, and \(-11/2\) for the lower spin-stretched state, respectively. The hyperfine couplings included take the form \(a\ \hat{i}\cdot\hat{s}\) for each atom. We write this dot product as \(\hat{i}_{z}\hat{s}_{z}-\hat{i}_{+1}\hat{s}_{-1}-\hat{i}_{-1}\hat{s}_{+1}\). Matrix elements of the spherical components of the electron spin operators are given in the Appendix. The action of the nuclear spin operators is
\[\hat{i}_{z}|im\rangle =m|im\rangle, \tag{22}\] \[\hat{i}_{+1}|im\rangle =-\sqrt{\frac{1}{2}(i-m)(i+m+1)}\ |i,m+1\rangle,\] \[\hat{i}_{-1}|im\rangle =\sqrt{\frac{1}{2}(i+m)(i-m+1)}\ |i,m-1\rangle. \tag{23}\]
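For illustration, the following snippet constructs explicit matrices for the nuclear-spin operators of Eqs. (22)-(23) in the \(|i,m\rangle\) basis; together with the electron-spin matrix elements of the Appendix they yield the hyperfine term \(a\,\hat{i}\cdot\hat{s}=\hat{i}_{z}\hat{s}_{z}-\hat{i}_{+1}\hat{s}_{-1}-\hat{i}_{-1}\hat{s}_{+1}\). It is a minimal sketch, and the function and variable names are our own.

```python
import numpy as np

def nuclear_spin_spherical_components(i):
    """Matrices of i_z, i_{+1}, i_{-1} in the |i, m> basis with m = -i, ..., +i,
    following Eqs. (22)-(23)."""
    m = np.arange(-i, i + 1)               # magnetic quantum numbers
    dim = len(m)
    i_z = np.diag(m)
    i_p1 = np.zeros((dim, dim))
    i_m1 = np.zeros((dim, dim))
    for k, mk in enumerate(m):
        if k + 1 < dim:                    # <i, m+1 | i_{+1} | i, m>
            i_p1[k + 1, k] = -np.sqrt(0.5 * (i - mk) * (i + mk + 1))
        if k - 1 >= 0:                     # <i, m-1 | i_{-1} | i, m>
            i_m1[k - 1, k] = np.sqrt(0.5 * (i + mk) * (i - mk + 1))
    return i_z, i_p1, i_m1

# Example: the Na nuclear spin i = 3/2 gives 4x4 matrices.
iz, ip1, im1 = nuclear_spin_spherical_components(1.5)
```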
Inclusion of hyperfine structure substantially increases the dimension of the basis set, so we performed calculations for a modest \(N_{\rm max}=10\), shown in Fig. 11. As can be seen, the inclusion of hyperfine structure modifies the spectrum of resonances, but it does not substantially increase the number of resonances, nor does it lead to clearly identifiable multiplet structure on existing resonances.
Figure 9: **Representative magnetic field scans with non-initial Zeeman states removed.** Shown are representative magnetic field scans obtained by excluding the non-initial Zeeman states for \(N=0\) from the calculation. Different panels correspond to truncation of the basis set at \(N_{\rm max}=2\), 10, 30, as indicated. The left (right) hand column shows results for the upper (lower) spin-stretched state. Excluding the lower-lying Zeeman levels for the upper spin-stretched state increases the typical number of resonances, whereas excluding excited Zeeman levels for the lower spin-stretched state does not reduce the typical number of resonances.
Figure 10: **Representative magnetic field scans with the channel basis truncated with \(J_{\rm max}=1\).** Different panels correspond to truncation of the basis set at \(N_{\rm max}=2\), 10, 30, as indicated. The left (right) hand column shows results for the upper (lower) spin-stretched state. Truncation of the basis set with \(J_{\rm max}=1\) has reduced the number of resonances, and the qualitative difference in the number of visible resonances between the two spin states has disappeared.
### Higher magnetic field strengths
Finally, we consider a wider \(B\)-field scan of the collisional loss rate that is not accessible in the experiment. In Fig. 12 we see that a strong resonance feature occurs around 5000 G for the upper spin-stretched state. This feature occurs consistently at the same magnetic field strength and is insensitive to uncertainty of the interaction potential. This is not a Feshbach resonance, but rather results from resonant energy transfer via spin-spin coupling where the energy released by Zeeman relaxation matches the rotational energy associated with excitation from \(N=0\) to \(N=2\). This occurs only in the upper spin-stretched state, as Zeeman relaxation is not possible in the lower state.
A similar feature may be expected around 1650 G and 3300 G, where the Zeeman energy released by a double and a single spin flip, respectively, is resonant with the \(N=0\) to \(N=1\) rotational transition. This, however, is not observed as these rotational states are not coupled by spin-spin nor by spin-rotation coupling.
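A quick back-of-the-envelope check shows that these field values are mutually consistent. Assuming (our reading of the mechanism, not stated explicitly above) that the 5000 G feature releases the Zeeman energy of a double spin flip, and inferring the rotational spacing from the 3300 G single-flip condition \(g_{e}\mu_{B}B=2B_{\rm rot}\) for the \(N=0\to 1\) transition, the numbers line up to within a few percent:

```python
# Consistency check of the quoted resonance fields; B_rot is *inferred* from
# the 3300 G condition rather than taken from spectroscopic data.
mu_B = 1.399624604e-3    # Bohr magneton / h in GHz per gauss
g_e = 2.002319
cm1 = 29.9792458         # 1 cm^-1 in GHz

E_single_3300 = g_e * mu_B * 3300 / cm1   # ~0.308 cm^-1 released by one spin flip
B_rot = E_single_3300 / 2                 # N = 0 -> 1 costs 2*B_rot => B_rot ~ 0.154 cm^-1

E_double_1650 = 2 * g_e * mu_B * 1650 / cm1   # ~0.308 cm^-1, the same N = 0 -> 1 gap
E_double_5000 = 2 * g_e * mu_B * 5000 / cm1   # ~0.93 cm^-1
E_rot_0_to_2 = 6 * B_rot                      # ~0.93 cm^-1, matching the 5000 G feature
print(B_rot, E_double_1650, E_double_5000, E_rot_0_to_2)
```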
### Discussion
A summary of our findings is represented in Fig. 13 which can also be found in the accompanying paper [36]. This figure compares the spectra of Feshbach resonances for the upper and lower spin-stretched states. Markers show four times the loss rate obtained with the spin-spin and spin-rotation couplings halved. Agreement with the solid lines indicates these spin-dependent couplings act perturbatively, and the loss rates scale as the coupling constants squared. Note that this does not apply to the sharp resonances, which are narrower for smaller coupling. The dashed-dotted line indicates the loss rate obtained with the interaction anisotropy turned off. Turning off the interaction anisotropy effectively turns off the dominant loss mechanism, which requires the combination of spin-spin or spin-rotation coupling and the anisotropic atom-molecule interaction. The resulting, much smaller loss rate is entirely due to the magnetic dipole-dipole interaction, and vanishes within our model if the dipole-dipole interaction is also turned off. The role of the anisotropic atom-molecule interaction, however, is
Figure 11: **Including hyperfine interactions in the calculations.** Representative magnetic field scans are obtained with and without hyperfine interactions. Panel **(a)** shows results for the upper spin-stretched state and panel **(b)** the lower spin-stretched state. Including hyperfine interactions increases the number of resonances somewhat, but not significantly, and it does not lead to clearly identifiable multiplet splittings.
Figure 12: **Collisional loss rates at higher magnetic fields.** Plotted are representative magnetic field scans up to higher magnetic fields than are probed experimentally. In the upper spin-stretched state, a strong feature appears which is caused by a resonance between the Zeeman relaxation energy and a rotational excitation. In the lower spin-stretched state, more resonances are typically visible, but the strong feature near 5000 G is missing as Zeeman relaxation is not possible in this spin state.
not perturbative. This can be seen from the disagreement between the solid line, which results from the full calculation, and the dotted line, which is obtained from a calculation in which we halved the strength of the anisotropic interactions and multiplied the resulting loss rate by a factor of four. We also see that changing the strength of the anisotropic short-range interaction affects the resonance positions. This means that the pattern of resonances cannot be explained in terms of the long-range _isotropic_ van der Waals interaction alone, as is argued in Ref. [68], despite the fact that the vast majority of resonances is supported by the long-range \(R^{-6}\) interaction, as argued in Ref. [68] and confirmed by our density-of-states calculations.
In Figure 13 we also witness the qualitative difference in the number of resonances observed in the two maximally stretched spin states. By performing calculations for various \(\lambda\) scalings of the potential, we _typically_ observe around 5 and 10 resonances for the upper and lower spin-stretched state, respectively. This is in qualitative agreement with the experimental observation of 8 and 17 resonances in the upper and lower stretched state, respectively. The difference between theory and experiment is partially explained by the neglect of the vibrational degree of freedom and hyperfine structure, as discussed above. This difference between the two spin states in observed resonance density is not attributed to a difference in density of states, but rather to the decay of resonances to Zeeman-relaxed channels from the upper spin-stretched state, as discussed in Sec. III.6.
Figure 13 also suggests substantial differences in the background loss rate between the upper and lower spin-stretched states, in contrast to the experimental observations where very similar loss rates were found. Indeed, this demonstrates that for a particular realization of the calculation, for specific \(\lambda\) scalings of the potential, such differences can occur, and there is no guarantee that the two states exhibit the same background loss rate despite the losses being determined by the same mechanisms. However, this is not necessarily the _expected_ behavior. In the \(\lambda\)-scans of the ratio of elastic-to-inelastic collisions, shown in Fig. 5, we see that the two stretched spin-states exhibit qualitatively the same behavior, and that for most values of \(\lambda\) the loss rates are similar between the two states, but for specific values of \(\lambda\) there can be large differences. Such large differences can occur where a resonance occurs in one of the two spin states, but they can also reflect differences in the background scattering rates. In Figs. 7, 9, 10 we see several examples where the background loss rate between the two spin states can be either similar, different by a small factor, or different by about an order of magnitude. Where the background scattering rates are different (for a specific \(\lambda\) scaling and basis-set truncation \(N_{\text{max}}\)), these differences are not systematic; it can either be the upper or lower spin-stretched state that experiences the higher background scattering rate. Again, as shown most clearly in Fig. 5, the _expected_ background behavior is similar for the two spin states.
## IV Conclusions
We have performed coupled-channels scattering calculations of Feshbach resonances in spin-polarized NaLi (\(a^{3}\Sigma^{+}\)) + Na collisions based on _ab initio_ interaction potentials calculated in this work. Quantitatively predicting the background scattering length or the positions of resonances is beyond the reach of state-of-the-art theory. However, the calculations do explain experimental observations qualitatively. The background loss is fast in non-stretched spin states, whereas in stretched states the ratio of elastic-to-inelastic collisions can be around 100, in agreement with previous observations of sympathetic cooling. When comparing the upper and lower stretched states we find the _expected_ background loss rate to be similar, also in agreement with experimental observations.
The calculations furthermore capture a series of Feshbach resonances.
Figure 13: **Perturbative analysis of calculated collision rates.** Resonance spectra are displayed for the upper and lower spin-stretched states in orange and blue solid lines, respectively, calculated for \(\lambda=-0.02\) and \(N_{\text{max}}=30\), also shown as Fig. 4(a) in the accompanying paper [36]. Markers show four times the loss rate obtained with the spin-spin and spin-rotation couplings halved, and agreement with the solid lines indicates these spin-dependent couplings act perturbatively. For the upper stretched state, the dashed-dotted line indicates the loss rate obtained with the interaction anisotropy turned off, which is entirely due to the magnetic dipole-dipole interaction. The much smaller loss rate in this case identifies a combination of the anisotropic interaction and the spin-spin and spin-rotation coupling as the dominant loss mechanism. The dotted line is obtained by halving the strength of anisotropic interactions and multiplying the resulting loss rate by a factor four. This completely changes the shape and resonance positions. The disagreement between the dotted and the full line indicates that the anisotropic interactions do not act perturbatively.
We show that the dominant coupling mechanism is a combination of the anisotropic atom-molecule interaction and the spin-spin coupling, and to a lesser extent spin-rotation coupling. The resonance states are supported by relatively short atom-molecule separations, up to 40 \(a_{0}\), where the interaction can be described by its asymptotic \(R^{-6}\) form, as was previously argued for alkali-metal atom-molecule collisions [31, 68]. However, due to the critical role of the _anisotropic_ atom-molecule interaction in the coupling mechanism, the resonance positions depend sensitively on the anisotropic short-range interactions, and the pattern of resonances cannot be described in terms of the long-range \(R^{-6}\) interaction alone.
In the lower spin-stretched state we observe approximately 10 resonances up to 1500 G. In the upper spin-stretched state only around 5 resonances are visible, due to fast decay to lower-lying Zeeman states in \(N=0\). This qualitative difference between the upper and lower spin-stretched states has also been observed experimentally, where the two states support 8 and 17 resonances, respectively. Molecular vibrations and hyperfine interactions, which were excluded in most of the scattering calculations that we performed, are expected to further increase the number of observable resonances. Calculations of the density of states suggest that molecular vibrations increase the number of resonances by 50%, and a scattering calculation including hyperfine interactions in a small basis suggests that these too can increase the number of observed resonances somewhat. Hence, the calculations are in semi-quantitative agreement with the experimental observations.
Our combined experimental and theoretical study has shown that Feshbach resonances and collisional complexes can be well understood on the basis of state-of-the-art first principles calculations. Due to the light elements, highly accurate electronic structure calculations can be performed on this triatomic collision complex, with an uncertainty smaller than 3% of the pairwise nonadditive three-body part of the potential. To make fully quantitative predictions, the accuracy of the electronic structure calculations should be improved by more than one order of magnitude, and the coupled-channels calculations should be converged, including vibration and hyperfine structure. While this is not feasible with present-day methods and computational power, predicting the exact positions of measured Feshbach resonances in NaLi+Na collisions constitutes a perfect testbed and playground for near-future developments of both electronic structure and scattering theory. In the meantime, this work emphasizes that we can still draw statistical conclusions regarding the density and typical width of resonances, and the typical background loss rate, that are in nearly quantitative agreement with experimental observations. Our work showcases how scattering calculations can be used as a "_numerical experiment_" in which the interaction can be scaled at will, specific couplings can be turned off, or exit channels removed, as a versatile tool to identify the dominant coupling mechanisms, the nature of the resonance states, and their relevant decay pathways.
###### Acknowledgements.
M. G. and M. T. were supported by the National Science Centre Poland (Sonata Bis Grant no. 2020/38/E/ST2/00564) and the PL-Grid Infrastructure (computational grant no. PLG/2020/014342). The MIT work was supported by the NSF through the Center for Ultracold Atoms and Grant No. 1506369 and from the Air Force Office of Scientific Research (MURI, Grant No. FA9550-21-1-0069). J. P. and H. S. acknowledge additional support from the Samsung Scholarship.
## Appendix A Matrix elements
In this Appendix we give all required matrix elements in the coupled basis functions of Eq. (9). First we consider the matrix elements of the electronic spin operators required for the Zeeman interaction \(\hat{H}_{\rm Zeeman}=\mu_{B}g_{e}B(\hat{s}_{1,z}+\hat{s}_{2,z}+\hat{s}_{3,z})\). These are
\[\langle(NL)J(s\ s_{3})S; \mathcal{J}\mathcal{M}|\hat{s}_{1,q}|(N^{\prime}L^{\prime})J^{ \prime}(s^{\prime}\ s_{3})S^{\prime};\mathcal{J^{\prime}}\mathcal{M}\rangle=\] \[\delta_{N,N^{\prime}}\delta_{L,L^{\prime}}\delta_{J,J^{\prime}}(- 1)^{2\mathcal{J}-\mathcal{M}+J+2S^{\prime}+s+s^{\prime}+s_{1}+s_{2}+s_{3}+1} \left[\mathcal{J},\mathcal{J^{\prime}},S,S^{\prime},s,s^{\prime}\right]^{1/2}\] \[\times\left(\begin{array}{ccc}\mathcal{J}&1&\mathcal{J^{ \prime}}\\ \mathcal{M}&q&\mathcal{M^{\prime}}\end{array}\right)\left\{\begin{array}{ccc} \mathcal{J}&1&\mathcal{J}\\ S^{\prime}&J&S\end{array}\right\}\left\{\begin{array}{ccc}S&1&S^{\prime}\\ s^{\prime}&s_{3}&s\end{array}\right\}\left\{\begin{array}{ccc}s&1&s^{\prime} \\ s_{1}&s_{2}&s_{1}\end{array}\right\}\sqrt{s_{1}(s_{1}+1)(2s_{1}+1)}, \tag{10}\]
and similarly
\[\langle(NL)J(s\ s_{3})S; \mathcal{J}\mathcal{M}|\hat{s}_{2,q}|(N^{\prime}L^{\prime})J^{ \prime}(s^{\prime}\ s_{3})S^{\prime};\mathcal{J^{\prime}}\mathcal{M}\rangle=\] \[\delta_{N,N^{\prime}}\delta_{L,L^{\prime}}\delta_{J,J^{\prime}}(- 1)^{2\mathcal{J}-\mathcal{M}+J+2S^{\prime}+s_{1}+s_{2}+s_{3}+1}\left[ \mathcal{J},\mathcal{J^{\prime}},S,S^{\prime},s,s^{\prime}\right]^{1/2}\] \[\times\left(\begin{array}{ccc}\mathcal{J}&1&\mathcal{J^{ \prime}}\\ \mathcal{M}&q&\mathcal{M^{\prime}}\end{array}\right)\left\{\begin{array}{ccc} \mathcal{J}&1&\mathcal{J}\\ S^{\prime}&J&S\end{array}\right\}\left\{\begin{array}{ccc}S&1&S^{\prime}\\ s^{\prime}&s_{3}&s\end{array}\right\}\left\{\begin{array}{ccc}s&1&S^{\prime} \\ s_{2}&s_{1}&s_{2}\end{array}\right\}\sqrt{s_{2}(s_{2}+1)(2s_{2}+1)}, \tag{11}\]
and
\[\langle(NL)J(s\ s_{3})S; \mathcal{J}\mathcal{M}|\hat{s}_{3,q}|(N^{\prime}L^{\prime})J^{ \prime}(s^{\prime}\ s_{3})S^{\prime};\mathcal{J^{\prime}}\mathcal{M}\rangle=\] \[\delta_{N,N^{\prime}}\delta_{L,L^{\prime}}\delta_{s,s^{\prime}} \delta_{J,J^{\prime}}(-1)^{2\mathcal{J}-\mathcal{M}+J+2S^{\prime}+s+s_{3}} \left[\mathcal{J},\mathcal{J^{\prime}},S,S^{\prime}\right]^{1/2}\] \[\times\left(\begin{array}{ccc}\mathcal{J}&1&\mathcal{J^{ \prime}}\\ \mathcal{M}&q&\mathcal{M^{\prime}}\end{array}\right)\left\{\begin{array}{ccc} \mathcal{J}&1&\mathcal{J}\\ S^{\prime}&J&S\end{array}\right\}\left\{\begin{array}{ccc}S&1&S^{\prime}\\ s^{\prime}&s_{3}&s\end{array}\right\}\sqrt{s_{3}(s_{3}+1)(2s_{3}+1)}, \tag{12}\]
where the \(q=-1,0,1\) spherical components of the spin operators are given by \(\hat{s}_{0}=\hat{s}_{z}\) and \(\hat{s}_{\pm 1}=\mp(\hat{s}_{x}\pm i\hat{s}_{y})/\sqrt{2}\).
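In practice, matrix elements of this kind are assembled from Wigner \(3j\) and \(6j\) symbols together with \([j_{1},j_{2},\ldots]^{1/2}=\prod_{k}\sqrt{2j_{k}+1}\) prefactors. The snippet below only illustrates how such factors can be evaluated symbolically; it is not a transcription of Eqs. (A1)-(A3), and the particular couplings and phase shown are schematic.

```python
from sympy import S, sqrt, simplify
from sympy.physics.wigner import wigner_3j, wigner_6j

def hat(*js):
    """[j1, j2, ...]^(1/2) = prod_k sqrt(2*j_k + 1)."""
    out = S(1)
    for j in js:
        out *= sqrt(2 * j + 1)
    return out

# Schematic recoupling factor: a 6j symbol relating the total electron spin
# S = 3/2 (built from s = 1 and s_3 = 1/2) to a rank-1 spin operator.
S_tot, s, s3 = S(3) / 2, S(1), S(1) / 2
factor = hat(S_tot, S_tot) * wigner_6j(S_tot, 1, S_tot, s, s3, s)
print(simplify(factor))

# A 3j symbol of the type appearing in the Zeeman matrix elements.
print(wigner_3j(S(3) / 2, 1, S(3) / 2, -S(1) / 2, 0, S(1) / 2))
```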
The remaining operators we consider are all scalar, and diagonal in \(\mathcal{J}\) and \(\mathcal{M}\). For the spin-rotation coupling \(\gamma_{s}\hat{N}\cdot\hat{s}\) with \(\gamma_{s}=0.005\) cm\({}^{-1}\) we need the matrix elements
\[\langle(NL)J(s\ s_{3})S; \mathcal{J}\mathcal{M}|\hat{N}\cdot\hat{s}|(N^{\prime}L^{\prime}) J^{\prime}(s^{\prime}\ s_{3})S^{\prime};\mathcal{J}\mathcal{M}\rangle=\] \[\delta_{N,N^{\prime}}\delta_{L,L^{\prime}}\delta_{S,S^{\prime}}(- 1)^{2J^{\prime}+S+\mathcal{J}+N+L+s+s_{3}+S^{\prime}}\left[J,J^{\prime},S,S^{ \prime}\right]^{1/2}\] \[\times\left\{\begin{array}{ccc}J&1&J^{\prime}\\ N^{\prime}&L&N\end{array}\right\}\left\{\begin{array}{ccc}S&1&S^{\prime}\\ s^{\prime}&s_{3}&s\end{array}\right\}\sqrt{N(N+1)(2N+1)s(s+1)(2s+1)}. \tag{13}\]
The spin-spin coupling is given by \(\lambda_{s}\sqrt{30}/3\left[\left[\hat{s}\otimes\hat{s}\right]^{(2)}\otimes C ^{(2)}(\hat{r}_{\rm NaLi})\right]_{0}^{(0)}\) with \(\lambda_{s}=-0.0189\) cm\({}^{-1}\). Here,
\[\left[A^{(k_{1})}\otimes B^{(k_{2})}\right]_{q}^{(k)}=\sum_{q_{1},q_{2}}A^{(k_ {1})}_{q_{1}}B^{(k_{2})}_{q_{2}}\langle k_{1}q_{1}k_{2}q_{2}|kq\rangle \tag{14}\]
is the rank-\(k\) spherical tensor product and \(C^{(2)}(\hat{R})\) is a tensor of Racah-normalized spherical harmonics depending on the Euler angles of the molecular axis, \(\mathbf{r}_{\rm NaLi}\). For the matrix elements we have
\[\langle(NL)J(s\ s_{3})S; \mathcal{J}\mathcal{M}|\left[\left[\hat{s}\otimes\hat{s}\right]^ {(2)}\otimes C^{(2)}(\hat{r}_{\rm NaLi})\right]_{0}^{(0)}|(N^{\prime}L^{ \prime})J^{\prime}(s^{\prime}\ s_{3})S^{\prime};\mathcal{J}\mathcal{M}\rangle=\] \[\delta_{L,L^{\prime}}\delta_{s,s^{\prime}}(-1)^{2J^{\prime}+S+ \mathcal{J}+N+L+s+s_{3}+S^{\prime}}\left[J,J^{\prime},S,S^{\prime}\right]^{1/2}\] \[\times\left\{\begin{array}{ccc}J&2&J^{\prime}\\ N^{\prime}&L&N\end{array}\right\}\left\{\begin{array}{ccc}S&2&S^{\prime}\\ s^{\prime}&s_{3}&s\end{array}\right\}\left(\begin{array}{ccc}N&2&N^{\prime}\\ 0&0&0\end{array}\right)\frac{\sqrt{s(2s-1)}}{\sqrt{5}\left(\begin{array}{ccc}s& 2&s\\ -s&2&s-2\end{array}\right)}. \tag{15}\]
In order to evaluate the magnetic dipole-dipole interaction \(\hat{V}_{\rm magn.dip}=-\sqrt{30}(\mu_{B}g_{e}\alpha)^{2}R^{-3}\left[[\hat{s}\otimes \hat{s}_{3}]^{(2)}\otimes C^{(2)}(\hat{R})\right]_{0}^{(0)}\), we use the following
\[\langle(NL)J(s\ s_{3})S; \mathcal{J}\mathcal{M}|\left[[\hat{s}\otimes\hat{s}_{3}]^{(2)} \otimes C^{(2)}(\hat{R})\right]_{0}^{(0)}|(N^{\prime}L^{\prime})J^{\prime}(s^{ \prime}\ s_{3})S^{\prime};\mathcal{J}\mathcal{M}\rangle=\] \[\delta_{N,N^{\prime}}\delta_{s,s^{\prime}}(-1)^{J^{\prime}+S+ \mathcal{J}+N+L^{\prime}+J+L}\left[L,L^{\prime},J,J^{\prime},S,S^{\prime} \right]^{1/2}\] \[\times\left\{\begin{array}{ccc}J&2&J^{\prime}\\ L^{\prime}&N&L\end{array}\right\}\left\{\begin{array}{ccc}s&s^{\prime}&1\\ s_{3}&s_{3}&1\\ S&S^{\prime}&2\end{array}\right\}\left(\begin{array}{ccc}L&2&L^{\prime}\\ 0&0&0\end{array}\right)\sqrt{s(s+1)(2s+1)s_{3}(s_{3}+1)(2s_{3}+1)}. \tag{10}\]
Finally, for the electronic interaction we have the following expressions
\[\langle(NL)J(s\ s_{3})S; \mathcal{J}\mathcal{M}|P_{\ell}(\cos\theta)|(N^{\prime}L^{\prime })J^{\prime}(s^{\prime}\ s_{3})S^{\prime};\mathcal{JM}\rangle=\] \[\delta_{J,J^{\prime}}\delta_{s,s^{\prime}}\delta_{S,S^{\prime}}( -1)^{J+S+J+N+L}\left[\mathcal{J},J,J^{\prime},\ell,N,N^{\prime},L,L^{\prime} \right]^{1/2}\] \[\times\left\{\begin{array}{ccc}N&N^{\prime}&\ell\\ L&L^{\prime}&\ell\\ J&J^{\prime}&0\end{array}\right\}\left\{\begin{array}{ccc}\mathcal{J}&0& \mathcal{J}^{\prime}\\ J^{\prime}&S&J\end{array}\right\}\left(\begin{array}{ccc}N&\ell&N^{\prime} \\ 0&0&0\end{array}\right)\left(\begin{array}{ccc}L&\ell&L^{\prime}\\ 0&0&0\end{array}\right), \tag{11}\]
\[\langle(NL)J(s\ s_{3})S; \mathcal{J}\mathcal{M}|P_{\ell}(\cos\theta)\hat{s}_{1}\cdot\hat {s}_{3}|(N^{\prime}L^{\prime})J^{\prime}(s^{\prime}\ s_{3})S^{\prime}; \mathcal{JM}\rangle=\] \[\delta_{J,J^{\prime}}\delta_{S,S^{\prime}}(-1)^{N+L+s_{1}+s_{2}+ s^{\prime}}\left[\mathcal{J},J^{\prime},\ell,N,N^{\prime},L,L^{\prime},S,S^{ \prime},s,s^{\prime},1\right]^{1/2}\] \[\times\sqrt{s_{1}(s_{1}+1)(2s_{1}+1)s_{3}(s_{3}+1)(2s_{3}+1)} \tag{12}\]
and
\[\langle(NL)J(s\ s_{3})S; \mathcal{J}\mathcal{M}|P_{\ell}(\cos\theta)\hat{s}_{2}\cdot\hat{ s}_{3}|(N^{\prime}L^{\prime})J^{\prime}(s^{\prime}\ s_{3})S^{\prime};\mathcal{JM}\rangle=\] \[\delta_{J,J^{\prime}}\delta_{S,S^{\prime}}(-1)^{N+L+s_{1}+s_{2}+ s}\left[\mathcal{J},J,J^{\prime},\ell,N,N^{\prime},L,L^{\prime},S,S^{\prime},s,s^{ \prime},1\right]^{1/2}\] \[\times\sqrt{s_{2}(s_{2}+1)(2s_{2}+1)s_{3}(s_{3}+1)(2s_{3}+1)}. \tag{13}\]
The full expansion of the interaction is given in Eq. (3), and the expansion coefficients are determined as explained in the main text.
|
2308.05959 | Learned Point Cloud Compression for Classification | Deep learning is increasingly being used to perform machine vision tasks such
as classification, object detection, and segmentation on 3D point cloud data.
However, deep learning inference is computationally expensive. The limited
computational capabilities of end devices thus necessitate a codec for
transmitting point cloud data over the network for server-side processing. Such
a codec must be lightweight and capable of achieving high compression ratios
without sacrificing accuracy. Motivated by this, we present a novel point cloud
codec that is highly specialized for the machine task of classification. Our
codec, based on PointNet, achieves a significantly better rate-accuracy
trade-off in comparison to alternative methods. In particular, it achieves a
94% reduction in BD-bitrate over non-specialized codecs on the ModelNet40
dataset. For low-resource end devices, we also propose two lightweight
configurations of our encoder that achieve similar BD-bitrate reductions of 93%
and 92% with 3% and 5% drops in top-1 accuracy, while consuming only 0.470 and
0.048 encoder-side kMACs/point, respectively. Our codec demonstrates the
potential of specialized codecs for machine analysis of point clouds, and
provides a basis for extension to more complex tasks and datasets in the
future. | Mateen Ulhaq, Ivan V. Bajić | 2023-08-11T06:28:19Z | http://arxiv.org/abs/2308.05959v1 | # Learned Point Cloud Compression for Classification
###### Abstract
Deep learning is increasingly being used to perform machine vision tasks such as classification, object detection, and segmentation on 3D point cloud data. However, deep learning inference is computationally expensive. The limited computational capabilities of end devices thus necessitate a codec for transmitting point cloud data over the network for server-side processing. Such a codec must be lightweight and capable of achieving high compression ratios without sacrificing accuracy. Motivated by this, we present a novel point cloud codec that is highly specialized for the machine task of classification. Our codec, based on PointNet, achieves a significantly better rate-accuracy trade-off in comparison to alternative methods. In particular, it achieves a 94% reduction in BD-bitrate over non-specialized codecs on the ModelNet40 dataset. For low-resource end devices, we also propose two lightweight configurations of our encoder that achieve similar BD-bitrate reductions of 93% and 92% with 3% and 5% drops in top-1 accuracy, while consuming only 0.470 and 0.048 encoder-side kMACs/point, respectively. Our codec demonstrates the potential of specialized codecs for machine analysis of point clouds, and provides a basis for extension to more complex tasks and datasets in the future.
Point cloud compression, coding for machines
## I Introduction
Point clouds are used to represent 3D visual data in many applications, including autonomous driving, robotics, and augmented reality. Recent advances in deep learning have led to the development of deep learning-based methods for machine vision tasks on point cloud data. Common tasks include classification, segmentation, object detection, and object tracking. However, current deep learning-based methods often require significant computational resources, which impose hardware requirements. Such significant requirements may not be physically or economically feasible for end devices.
One approach to address the issue of insufficient end-device computational resources is to transmit the point cloud and other sensor data to a server for processing. However, this introduces its own challenges, including the effects of network availability, latency, and bandwidth. In order to reduce network requirements, the end device may compress the point cloud data before transmission. However, network capabilities vary depending on various factors, including end-device location and network congestion. This means that sufficient bandwidth may still not be available to transmit the point cloud data. A hybrid strategy is to perform part of the machine task on the end device itself. This can reduce the amount of data that needs to be transmitted to the server for further processing, without exceeding the computational budget of the end device [1]. This enhances the robustness of the end device to varying network conditions, while potentially improving overall system latency and adaptability [2].
We propose a novel learned point cloud codec for classification. Our learned codec takes a point cloud as input, and outputs a highly compressed representation that is intended solely for machine analysis. To our knowledge, this is the first point cloud codec specialized for machine analysis. Existing codecs for point clouds are designed to reconstruct point clouds intended for human viewing. This means that a significant amount of bits is wasted on encoding information that is not strictly relevant to machine analysis. By partially processing the point cloud before compression, our codec is able to achieve significantly better compression performance, without compromising task accuracy.
In this paper, we present our task-specialized codec architecture in full and lightweight configurations. We evaluate its rate-accuracy (RA) performance on the ModelNet40 dataset [3]. We also investigate how the number of input points (and thus reduced computation) affects the RA performance of our proposed codec. Furthermore, we compare our proposed codec's performance against alternative non-specialized methods. Our code for training and evaluation is available online1.
Footnote 1: [https://github.com/multimedilabst/learned-point-cloud-compression-for-classification](https://github.com/multimedilabst/learned-point-cloud-compression-for-classification)
## II Related work
Point cloud classification models can be organized into groups based on the type of input data they accept. Models such as VoxNet [4] take as input point clouds that have been preprocessed into a voxel grid. Unfortunately, these methods often use 3D convolutions, which require a significant amount of computational resources. Additionally, since most voxels are usually empty, these methods arguably waste a significant amount of computation on empty space. Furthermore, the voxel grid representation is not very compact, and thus requires a significant amount of memory for higher spatial resolutions (e.g., a 32-bit tensor of shape \(1024\times 1024\times 1024\) occupies 32 GB). Models such as OctNet [5] take octrees as input. Octrees offer a more compact representation of the voxelized point cloud by encoding the node occupancy in bitstrings. Large unoccupied regions of space may be represented
via a single "0" node in an octree. Point-based models such as PointNet [6] and PointNet++ [7] directly accept raw point lists \((x_{1},x_{2},\ldots,x_{P})\), where \(x_{i}\in\mathbb{R}^{3}\) represents a point in 3D space and \(P\) is the number of points. Some challenges faced with this input format include designing order-invariant models (due to the lack of a worthwhile canonical ordering of points), as well as devising operations capable of using the metric structure induced by point locality. Despite the challenges, point-based models are able to achieve surprisingly competitive accuracy, and offer the most promise in terms of minimizing computational requirements.
PointNet [6], which our proposed architecture is based on, can be represented as an input permutation-invariant function:
\[f(x_{1},\ldots,x_{n})=(\gamma\circ\pi)(h(x_{1}),\ldots,h(x_{n})),\]
where \(h\) is applied to each point \(x_{i}\) individually, \(\pi\) is a simple permutation-invariant function, and \(\gamma\) may be any function. In the original PointNet architecture, \(h\) is a weight-shared MLP, \(\pi\) is a max pooling function applied across the point dimension, and \(\gamma\) is an MLP.
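A minimal PyTorch sketch of this permutation-invariant form is given below, with \(h\) implemented as a shared pointwise (kernel-size-1) convolutional MLP, \(\pi\) as a max pool over the point dimension, and \(\gamma\) as an ordinary MLP; the channel sizes are illustrative and are not the configurations of Table I.

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    def __init__(self, num_classes=40):
        super().__init__()
        self.h = nn.Sequential(                 # applied identically to every point
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU(),
        )
        self.gamma = nn.Sequential(
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, num_classes),
        )

    def forward(self, x):                       # x: (batch, 3, P)
        features = self.h(x)                    # per-point features, (batch, 1024, P)
        pooled = features.max(dim=2).values     # pi: order-invariant max pooling
        return self.gamma(pooled)               # class logits
```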
In the related field of learned image compression, Balle _et al._[8] proposed a variational autoencoder (VAE) architecture for image compression. Here, the model transforms the input \(\mathbf{x}\) into a latent representation \(\mathbf{y}\), which is then quantized into \(\mathbf{\hat{y}}\) and losslessly compressed using a learned entropy model. The codec is trained end-to-end using the loss function
\[\mathcal{L}=R+\lambda\cdot D(\mathbf{x},\mathbf{\hat{x}}), \tag{1}\]
where \(D(\mathbf{x},\mathbf{\hat{x}})\) is the distortion measure between input \(\mathbf{x}\) and decoded \(\mathbf{\hat{x}}\), and \(R\) is the estimate of the entropy of \(\mathbf{\hat{y}}\). One simple entropy model, known in the literature as an _entropy bottleneck_, makes a "factorized" prior assumption -- that each element within a latent channel is independently and identically distributed. It models a monotonically increasing non-parametric cumulative distribution function using a differentiable MLP. This mechanism has also shown effectiveness in learned codecs in other fields, including learned point cloud compression (PCC), and has been incorporated by a variety of works including [9, 10, 11, 12, 13].
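For reference, the sketch below shows how such a factorized-prior "entropy bottleneck" is typically used to obtain the rate term \(R\) of Eq. (1) from the likelihoods of the (noisily quantized) latent. It follows the CompressAI interface, but the shapes and channel count are illustrative.

```python
import torch
from compressai.entropy_models import EntropyBottleneck

entropy_bottleneck = EntropyBottleneck(channels=1024)
y = torch.randn(8, 1024, 1, 1)                # latent: (batch, channels, 1, 1)
y_hat, y_likelihoods = entropy_bottleneck(y)  # additive-noise "quantization" + likelihoods
rate = -torch.log2(y_likelihoods).sum() / y.size(0)   # estimated bits per sample
```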
We have also based our work on ideas introduced for machine tasks on images. Early works demonstrated the use of standard codecs in compressing the latent features [14]. More recently, approaches such as Video Coding for Machines (VCM) [15] and Coding for Machines (CIM) have gained traction in the research community. For instance, works such as [16, 17] demonstrate the potential bitrate savings of scalable image compression in a multi-task scenario for a machine vision task (e.g., facial landmark detection, or object detection) and human vision (e.g., image reconstruction). In this work, we focus solely on a single machine vision task applied to point cloud data.
## III Proposed codec
### _Input compression_
In Fig. 0(a), we show an abstract representation of an input codec, similar to the "chain" configuration explored by [18] for end-to-end image compression for machines. In this codec, the input point cloud \(\mathbf{x}\) is encoded directly, without any intermediate feature extraction. On the decoder side, the point cloud is then reconstructed as \(\mathbf{\hat{x}}\). Any point cloud compression codec can be used for this purpose, including standard non-learned codecs such as G-PCC [19]. Finally, the reconstructed point cloud \(\mathbf{\hat{x}}\) is fed into a classification model (e.g., PointNet) in order to obtain the class prediction \(\mathbf{\hat{t}}\). This approach provides a baseline for comparison with our proposed codec.
### _Motivation for the proposed codec_
An efficient task-specific codec can be developed using the concept of Information Bottleneck (IB) [20]:
\[\min_{p(\mathbf{\hat{y}}|\mathbf{x})}\quad I(\mathbf{x};\mathbf{\hat{y}})-\beta\cdot I(\mathbf{ \hat{y}};\mathbf{\hat{t}}), \tag{2}\]
where \(I(\cdot;\cdot)\) is the mutual information [21], \(p(\mathbf{\hat{y}}\mid\mathbf{x})\) is the mapping from the input point cloud \(\mathbf{x}\) to the latent representation \(\mathbf{\hat{y}}\), and \(\beta>0\) is the IB Lagrange multiplier [20]. We can think of \(p(\mathbf{\hat{y}}\mid\mathbf{x})\) as feature extraction followed by quantization. Hence, \(\mathbf{\hat{y}}\) is fully determined whenever \(\mathbf{x}\) is given, so \(H(\mathbf{\hat{y}}\mid\mathbf{x})=0\), where \(H(\cdot\mid\cdot)\) is the conditional entropy [21]. Therefore, \(I(\mathbf{x};\mathbf{\hat{y}})=H(\mathbf{\hat{y}})-H(\mathbf{\hat{y}}\mid\mathbf{x})=H(\mathbf{\hat{y }})\).
Furthermore, since decreasing \(-\beta\cdot I(\mathbf{\hat{y}};\mathbf{\hat{t}})\) would improve the accuracy of the task, we can use \(\lambda\cdot D(\mathbf{t},\mathbf{\hat{t}})\) as a proxy for \(-\beta\cdot I(\mathbf{\hat{y}};\mathbf{\hat{t}})\), where \(\mathbf{t}\) is the ground-truth label, \(\mathbf{\hat{t}}\) is the label produced using the compressed latent representation, and \(D(\mathbf{t},\mathbf{\hat{t}})\) is a distortion measure. Therefore, in our case, the IB (2) becomes:
\[\min_{p(\mathbf{\hat{y}}|\mathbf{x})}\quad H(\mathbf{\hat{y}})+\lambda\cdot D(\mathbf{t},\mathbf{ \hat{t}}). \tag{3}\]
It is clear that the form of IB in (3) is analogous to the loss function (1). We make use of this analogy to develop the proposed codec, which is described next.
### _Proposed architecture_
In Fig. 0(b), we show a high-level representation of our proposed codec architecture. Following the terminology of [8], we refer to \(g_{a}\) as the _analysis transform_, and \(g_{s}\) as the _synthesis transform_. In this architecture, the input point cloud \(\mathbf{x}\) is first
Fig. 1: High-level comparison of codec architectures.
encoded into a latent representation \(\mathbf{y}=g_{a}(\mathbf{x})\), which is then quantized as \(\mathbf{\hat{y}}=Q(\mathbf{y})\), and then losslessly compressed using a learned entropy model. Therefore, \(p(\mathbf{\hat{y}}\mid\mathbf{x})\) from the IB is \(Q\circ g_{a}\). Then, the reconstructed latent representation \(\mathbf{\hat{y}}\) is used to predict the classes \(\mathbf{\hat{t}}=g_{s}(\mathbf{\hat{y}})\).
Our proposed architecture, visualized in Fig. 2, is based on the PointNet [6] classification model. The input 3D point cloud containing \(P\) points is represented as a matrix \(\mathbf{x}\in\mathbb{R}^{3\times P}\). The input \(\mathbf{x}\) is fed into a sequence of encoder blocks. Each encoder block consists of a convolutional layer, a batch normalization layer, and a ReLU activation. As described in [6], the early encoder blocks must be applied to each input point independently and identically, in order to avoid learning input permutation-specific dependencies. Therefore, we use pointwise convolutional layers with kernel size 1, which are exactly equivalent to the "shared MLP" described in [6].
Following the sequence of encoder blocks, a max pooling operation is applied along the point dimension to generate an input permutation-invariant feature vector of length \(N\). The resulting feature vector is then multiplied element-wise by a trainable gain vector \(v\in\mathbb{R}^{N}\), which is initialized to \([1,1,\ldots,1]\) before training. Due to the batch normalization layers, the resulting feature vector has small values. Thus, in order to improve training stability and the rate of convergence, the feature vector is multiplied by a constant scalar value of \(10\). The resulting vector is the output of the encoder-side analysis transform, which we label as \(\mathbf{y}\).
Then, we quantize \(\mathbf{y}\) via uniform quantization (specifically, integer rounding) to obtain \(\mathbf{\hat{y}}\). During training, uniform quantization is simulated using additive uniform noise \(\mathcal{U}(-0.5,0.5)\). The quantized vector \(\mathbf{\hat{y}}\) is then losslessly encoded using a fully-factorized learned entropy model introduced in [8].
On the decoder side, the decoded vector \(\mathbf{\hat{y}}\) is fed into an MLP consisting of fully-connected layers interleaved with ReLU activations and batch normalizations. Before the last fully-connected layer, we use a dropout layer that randomly sets 30% of its inputs to zero. The output of the MLP is a vector of logits \(\mathbf{\hat{t}}\in\mathbb{R}^{T}\), where \(T\) is the number of classes.
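The condensed sketch below ties these pieces together: the analysis transform \(g_{a}\) (pointwise encoder blocks, max pooling, the trainable gain vector, and the fixed \(\times 10\) scaling), the entropy bottleneck, and the decoder-side MLP \(g_{s}\). Channel sizes are illustrative and do not reproduce the exact "full", "lite", or "micro" configurations of Table I.

```python
import torch
import torch.nn as nn
from compressai.entropy_models import EntropyBottleneck

class PointCloudClassificationCodec(nn.Module):
    def __init__(self, latent_dim=1024, num_classes=40):
        super().__init__()
        self.g_a = nn.Sequential(                      # analysis transform (encoder blocks)
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, latent_dim, 1), nn.BatchNorm1d(latent_dim), nn.ReLU(),
        )
        self.gain = nn.Parameter(torch.ones(latent_dim))   # trainable gain vector v
        self.entropy_bottleneck = EntropyBottleneck(latent_dim)
        self.g_s = nn.Sequential(                      # synthesis transform (classifier MLP)
            nn.Linear(latent_dim, 512), nn.BatchNorm1d(512), nn.ReLU(),
            nn.Linear(512, 256), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):                              # x: (batch, 3, P)
        y = self.g_a(x).max(dim=2).values * self.gain * 10.0
        y_hat, y_likelihoods = self.entropy_bottleneck(y.unsqueeze(-1).unsqueeze(-1))
        t_hat = self.g_s(y_hat.squeeze(-1).squeeze(-1))
        return t_hat, y_likelihoods
```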
We provide "full", "lite", and "micro" configurations of the proposed codec. For each configuration, Table I lists the layer output channel sizes along with the estimated MAC (multiply-accumulate) counts2. Group convolutional layers are specified in the format "output_size/group". In contrast to [6], we do not use any input or feature transformations in order to simplify the architectures for clearer analysis, as well as to reduce the computational requirements.
Footnote 2: One MAC operation may be considered equivalent to a FLOP (floating-point operation) on most hardware.
### _Lightweight and micro architectures_
In addition to our "full" proposed codec, we also provide a lightweight configuration, which we denote as "lite". In this architecture, the encoder-side layers contain fewer output channels. To further reduce encoder-side computational costs, they also use group convolutions with channel shuffles in between, as is done in ShuffleNet [22]. After training, the gain and batch normalization layers may be fused into the preceding convolutional layer. The "lite" architecture strikes a balance between RA performance and encoder complexity. In fact, the encoder-side transform requires only 0.47k MACs/point, which is significantly less than the 150k MACs/point required by the "full" architecture encoder. For input point clouds consisting of \(P=256\) points, the total MAC count for the "lite" codec is 120k, which is below the corresponding decoder-side MAC count of 160k.
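A minimal sketch of one such "lite" encoder block, a grouped pointwise convolution followed by a ShuffleNet-style channel shuffle, is shown below; the channel and group counts are illustrative, and `out_ch` is assumed divisible by `groups`.

```python
import torch
import torch.nn as nn

class ShuffledEncoderBlock(nn.Module):
    def __init__(self, in_ch, out_ch, groups):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size=1, groups=groups)
        self.bn = nn.BatchNorm1d(out_ch)
        self.groups = groups

    def forward(self, x):                       # x: (batch, in_ch, P)
        x = torch.relu(self.bn(self.conv(x)))
        b, c, p = x.shape
        # channel shuffle: interleave channels so information mixes across groups
        return x.view(b, self.groups, c // self.groups, p).transpose(1, 2).reshape(b, c, p)
```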
Additionally, we examine a "micro" architecture, whose encoder-side transform consists of only a single encoder block with 16 output channels, and a max pooling operation. This codec is useful for analysis and comparison -- and yet, it is also capable of surprisingly competitive RA performance.
## IV Experiments
Our models were trained on the ModelNet40 [3] dataset, which consists of 12311 different 3D object models organized into 40 classes. We used an Adam optimizer with a learning rate of 0.001. Our code was written using the PyTorch, CompressAI [23], and CompressAI Trainer [24] libraries.
The loss function that is minimized during training is:
\[\mathcal{L}=R+\lambda\cdot D(\mathbf{t},\mathbf{\hat{t}}),\]
where the rate \(R=-\log p_{\mathbf{\hat{y}}}(\mathbf{\hat{y}})\) is the log of the likelihoods outputted by the entropy model, and the distortion \(D(\mathbf{t},\mathbf{\hat{t}})\) is the cross-entropy between the one-hot encoded labels \(\mathbf{t}\) and the softmax of the model's prediction \(\mathbf{\hat{t}}\). We trained different
Fig. 2: Proposed codec architecture.
models to operate at different rate points by varying the hyperparameter \(\lambda\in[10,16000]\).
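A sketch of this objective as it would appear in a training step is shown below; the base of the logarithm for the rate term and the variable names are illustrative choices.

```python
import torch
import torch.nn.functional as F

def rd_loss(t_hat, targets, y_likelihoods, lam):
    """Rate-distortion objective: rate R from the entropy model's likelihoods
    plus lambda times the classification cross-entropy D(t, t_hat)."""
    rate = -torch.log2(y_likelihoods).sum() / t_hat.size(0)   # bits per point cloud
    distortion = F.cross_entropy(t_hat, targets)
    return rate + lam * distortion
```

In CompressAI, the entropy bottleneck additionally exposes an auxiliary loss for fitting its quantiles, which is typically minimized with a separate optimizer.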
### _Proposed codec_
For each of the proposed codec architectures, we trained a variation of each codec to accept an input point cloud containing \(P\in\mathcal{P}\) points. We trained eight such variations for each of the values in the set \(\mathcal{P}=\{8,16,32,64,128,256,512,1024\}\). Although each codec is capable of handling a variable number of points, training a separate model for each \(P\) guarantees that each codec is well-optimized for each rate-accuracy trade-off.
### _Input compression codec_
We compare our proposed codec against an "input compression" codec architecture. For this codec, the encoder may be taken from any point cloud codec. We have tested multiple codecs, including TMC13 [25] v14 (an implementation of the G-PCC v2 [19] standard), OctAttention [12], and Draco [26]. On the decoder side is the corresponding point cloud decoder, followed by a PointNet classification model. We trained a PointNet model (without the input and feature transforms) for each \(P\in\mathcal{P}\).
We generated eight separate datasets of \(P\)-point point clouds, where each point cloud was uniformly subsampled from the test dataset. Then, we compressed and decompressed each point cloud from each \(P\)-point dataset at various compression ratios. The compression ratio can be effectively controlled by varying the amount of input scaling, which we denote by \(S\). (The input scaling parameter is directly proportional to the number of bins used during uniform quantization of the input points.) We varied \(S\) over the set \(\mathcal{S}=\{1,2,4,\ldots,256\}\) and \(P\) over \(\mathcal{P}\) to produce \(|\mathcal{P}|\cdot|\mathcal{S}|\) distinct datasets. We evaluated each dataset associated with the pair \((P,S)\in\mathcal{P}\times\mathcal{S}\) on the correspondingly trained PointNet models to obtain a set of rate-accuracy points. Finally, we took the Pareto front of this set to obtain the best rate-accuracy curve achieved by the tested input compression codec.
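The final step, taking the Pareto front over all \((P,S)\) rate-accuracy points, amounts to a simple sweep that keeps only points not dominated in both rate and accuracy, e.g.:

```python
def pareto_front(points):
    """points: iterable of (rate_bits, accuracy) pairs.
    Returns the Pareto-optimal subset, sorted by increasing rate."""
    front, best_acc = [], float("-inf")
    for rate, acc in sorted(points, key=lambda p: (p[0], -p[1])):
        if acc > best_acc:                  # keep only points that improve accuracy
            front.append((rate, acc))
            best_acc = acc
    return front
```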
### _Reconstruction_
In order to visually assess the contents of the machine task-specialized bitstream, we trained a point cloud reconstruction network on top of our trained models. This auxiliary network was trained to minimize the loss function \(\mathcal{L}=D(\mathbf{x},\mathbf{\hat{x}})\), where we used Chamfer distance for \(D\).
We also identify a critical point set for a fixed point cloud. A critical point set is a minimal set of points that generates the exact same latent \(\mathbf{y}\), and correspondingly, the same bitstream. Formally, for any given point cloud \(\mathbf{x}\), let \(\mathbf{x}_{C}\subseteq\mathbf{x}\) denote a (not necessarily unique) critical point set. Then \(g_{a}(\mathbf{x}_{C})=g_{a}(\mathbf{x})=\mathbf{y}\), and \(\mathbf{x}_{C}\) has exactly one valid critical point set \((\mathbf{x}_{C})_{C}\), namely itself. Since \(g_{a}\) contains a max pooling operation, the critical point set is not theoretically unique; however, in practice, it is rare for there to be more than one critical point set. A critical point set may be computed by \(\mathbf{x}_{C}=\bigcup_{1\leq j\leq N}\arg\max_{\mathbf{x}_{i}\in\mathbf{x}}(h(\mathbf{x}_{i}))_{j}\), where \(\{h(\mathbf{x}_{i}):1\leq i\leq P\}\) represents the entire set of generated latent vectors immediately preceding max pooling.
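Given the per-point latents \(h(\mathbf{x}_{i})\) immediately preceding the max pooling, a critical point set can be computed with one argmax per latent channel, as sketched below (tensor names are illustrative):

```python
import torch

def critical_point_set(h_features, points):
    """h_features: (N, P) per-point latents before max pooling;
    points: (P, 3) input point cloud. Returns a critical point set."""
    winners = h_features.argmax(dim=1)       # index of the maximizing point per channel
    return points[torch.unique(winners)]     # union over the N channels
```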
## V Results
Fig. 3 shows the rate-accuracy (RA) curves for the proposed "full", "lite", and "micro" codecs in comparison with the input compression codec. Also included are two baseline accuracies taken from the official PointNet paper [6], for the model with (89.2%) and without (87.1%) the input/feature transforms. Since our compression models were all trained _without_ the input/feature transforms, the lower baseline offers a more direct comparison. In Table II, we list the peak accuracies attained by each codec, as well as the Bjontegaard-Delta (BD) [27] improvements in rates and accuracies relative to the reference input compression codec.
Our "full" proposed codec, which is an extension of PointNet (no transforms), achieves the lower baseline accuracy at \(120\) bits, and an 80% accuracy at \(30\) bits. Our "lite" proposed codec saturates in RA performance at around \(P=512\) input points. At around \(P=256\), the total MAC count of the proposed "lite" encoder is roughly equal to the decoder. As shown by the rate-accuracy curves, the \(P=256\) model does not suffer too significant a drop in rate-accuracy performance. This suggests that our method is capable of achieving a good trade-off between rate, accuracy, and runtime performance.
Similarly, our "micro" codec suffers a further slight drop in rate-accuracy performance, but achieves another significant improvement in runtime performance. The input compression codec is the worst performing codec, and attains the lower baseline accuracy at roughly 2000 bits.
In Fig. 4, we show various point clouds that have been reconstructed from the bitstreams generated by our proposed codecs. For each codec, we include samples reconstructed from bitstreams compressed at different rates. Above each reconstructed point cloud, we show the corresponding reference point cloud, with critical points marked in red.
## VI Discussion
For input point clouds containing \(P=1024\) points, our "full", "lite", and "micro" codec configurations achieve an accuracy of 80% with as few as 30, 40, and 50 bits. For comparison, \(\log_{2}(40)\approx 5.3\) bits are required to losslessly encode uniformly distributed class labels of the 40 classes from ModelNet40. Our codec comes surprisingly close to this theoretical lower bound, despite the fact that our architecture design omits the traditional MLP "classifier" within the encoder. The same pointwise function is applied to all points, and the only operation that "mixes" information between the points is a pooling operation. This suggests that our encoder should possess limited classification abilities, and yet it consumes only a few times more bits than the theoretical lower bound.
To put this into perspective, consider coding for image classification, which is one of the best developed areas in the field of coding for machines. Current state-of-the-art (SOTA) approaches [28, 29, 30] for coding for image classification on ImageNet [31] require upwards of \(0.01\) bits per pixel (bpp) to maintain a reasonable top-1 accuracy. With the typical input image resolution of \(224\times 224\), this works out to be around \(500\) bits. However, the maximum classifier output entropy with \(1000\) classes is only \(\log_{2}(1000)\approx 10\) bits, which is several orders of magnitude lower. Hence, the gap between the current SOTA and theoretical limits on coding for image classification is much higher than what is achieved by our proposed codec for point cloud classification.
To explore why our codec comes so close to the theoretical lower bound, we propose the following arguments. Let \(\mathbf{x}\) represent a possible input point cloud, and let \(\mathbf{\hat{y}}=(Q\circ g_{a})(\mathbf{x})\) be its quantized transformed latent representation. Applying the data processing inequality to the Markov chain \(\mathbf{x}\rightarrow\mathbf{x}\rightarrow\mathbf{\hat{y}}\), we determine that \(I(\mathbf{x};\mathbf{x})\geq I(\mathbf{x};\mathbf{\hat{y}})\). Furthermore, since \(Q\circ g_{a}\) is deterministic, \(H(\mathbf{\hat{y}}\mid\mathbf{x})=0\), and so
\[H(\mathbf{x})=I(\mathbf{x};\mathbf{x})\geq I(\mathbf{x};\mathbf{\hat{y}})=H(\mathbf{\hat{y}})-H(\mathbf{ \hat{y}}\mid\mathbf{x})=H(\mathbf{\hat{y}}).\]
Fig. 4: Reconstructions of a sample airplane 3D model from the ModelNet40 test set for various codecs and bitrates. For each reconstruction, its corresponding reference point cloud is marked with _critical points_ in red.
Fig. 3: Rate-accuracy curves evaluated on the ModelNet40 test set.
This indicates theoretically that the quantized latent representation \(\hat{\mathbf{y}}\) must on average be at least as compressible as the input point cloud \(\mathbf{x}\) that it was derived from.
Since the critical point set \(\mathbf{x}_{C}\subseteq\mathbf{x}\) produces the exact same \(\hat{\mathbf{y}}\) as the original input point cloud \(\mathbf{x}\), we may use the same arguments as above to argue that
\[H(\mathbf{x})\geq H(\mathbf{x}_{C})\geq H(\hat{\mathbf{y}}).\]
This provides us with a potentially tighter bound. In fact, as shown in Fig. 4iii, much of the general shape of the shown sample point cloud can be reconstructed from only \(23\) bits of information. Furthermore, since \(|\mathbf{x}_{C}|\leq N\), there are only at most \(32\) and \(16\) distinct critical points for the "lite" and "micro" codecs, respectively. This suggests part of the reason why our proposed codec achieves such large gains in comparison to input compression.
## VII Conclusion
In this paper, we proposed a new codec for point cloud classification. Our experiments demonstrated that the "full" configuration of the codec achieves stellar rate-accuracy performance, far exceeding the performance of alternative methods. We also presented "lite" and "micro" configurations of the codec whose encoders consume minimal computational resources, and yet achieve comparable gains in rate-accuracy performance.
Our work may be extended to other point cloud tasks, such as segmentation and object detection, or to more complex tasks involving larger models and larger point clouds from real-world datasets. Our work also sets a good starting point for further research into approaches for scalable and multi-task point cloud compression. We hope that our work will help towards achieving the end goal of more capable end devices.
|
2304.05454 | Zero-shot Temporal Relation Extraction with ChatGPT | The goal of temporal relation extraction is to infer the temporal relation
between two events in the document. Supervised models are dominant in this
task. In this work, we investigate ChatGPT's ability on zero-shot temporal
relation extraction. We designed three different prompt techniques to break
down the task and evaluate ChatGPT. Our experiments show that ChatGPT's
performance has a large gap with that of supervised methods and can heavily
rely on the design of prompts. We further demonstrate that ChatGPT can infer
more small relation classes correctly than supervised methods. The current
shortcomings of ChatGPT on temporal relation extraction are also discussed in
this paper. We found that ChatGPT cannot keep consistency during temporal
inference and it fails in actively long-dependency temporal inference. | Chenhan Yuan, Qianqian Xie, Sophia Ananiadou | 2023-04-11T18:59:05Z | http://arxiv.org/abs/2304.05454v1 | # Zero-shot Temporal Relation Extraction with ChatGPT
###### Abstract
The goal of temporal relation extraction is to infer the temporal relation between two events in the document. Supervised models are dominant in this task. In this work, we investigate ChatGPT's ability on zero-shot temporal relation extraction. We designed three different prompt techniques to break down the task and evaluate ChatGPT. Our experiments show that ChatGPT's performance has a large gap with that of supervised methods and can heavily rely on the design of prompts. We further demonstrate that ChatGPT can infer more small relation classes correctly than supervised methods. The current shortcomings of ChatGPT on temporal relation extraction are also discussed in this paper. We found that ChatGPT cannot keep consistency during temporal inference and it fails in actively long-dependency temporal inference.
## 1 Introduction
The temporal relation extraction task aims to extract the temporal relation between either two event triggers in the given document [14]. In this way, a timeline of events in the document can be constructed. It is a crucial task for many downstream NLP tasks, such as natural language understanding [11, 12], storyline construction [13, 14], and temporal question answering [15, 16], etc. Conventionally, recent temporal relation extraction (RE) models are fine-tuned based on pre-trained language models (PLMs), such as BERT and RoBERTa [12, 13]. On top of the PLMs, complex neural networks are applied to classify the temporal relations, such as self-attention [13, 14], graph convolutional networks (GCNs) [12], and policy network [11]. Most well-performed temporal relation extractors are supervised models, that is, they heavily rely on annotated training documents first before extracting temporal relations on the testing set. However, annotating the temporal relations in training documents requires much domain experts' efforts [10, 14], which is a high cost.
Different from supervised learning methods, zero-shot learning (ZSL) [15] aims to train the model that can be generalized to unseen data without annotated training data and has attracted much attention in recent years. Most recently, large language models (LLMs) [1, 12] such as ChatGPT1 have exhibited remarkable ability in zero-shot learning on various natural language processing (NLP) and medical tasks [1], such as information extraction [14], machine translation [15], summarization evaluation [16], and mental health detection [17]. However, the performance of LLMs in detecting temporal relations between events are not explored yet. Therefore, it is an urgent and spontaneous question if the LLM can perform zero-shot temporal relation extraction tasks well given a proper prompt approach, and if LLM can be the new paradigm for temporal RE.
Footnote 1: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt)
In this paper, we explore the performance of ChatGPT on the zero-shot temporal relation extraction, and propose three different prompt strategies to interact with ChatGPT. More specifically, we start with the simple zero-shot prompt that directly requires ChatGPT to infer the temporal relation given the document. Then, we design the event ranking prompt, where ChatGPT is asked to infer the events shown in the given document instead of inferring temporal relations. Finally, we propose the chain-of-thought (CoT) prompt [12] to break down the task into two-stage, which guides ChatGPT to make temporal relation reasoning step by step. Based on our experimental results
and analysis, we have the following findings:
* **Overall Performance.** ChatGPT significantly underperforms advanced supervised methods and even traditional neural network methods such as Bi-LSTM, indicating the challenge of temporal relation detection with ChatGPT without task-specific fine-tuning.
* **Prompts.** Similar to the finding of recent efforts, the CoT prompt can significantly improve ChatGPT's performance compared with other prompts across all datasets, indicating the importance of proper prompt engineering.
* **Limitations.** Compared with supervised methods, ChatGPT performs better on the temporal relation classes that make up only a small proportion of the dataset. However, it is also found to have limitations in long-dependency temporal relation extraction and in keeping its temporal relation inference consistent.
## 2 Related Work
### Temporal Relation Extraction
Several studies have explored the use of temporal information in relation extraction. For example, Han et al. (Han et al., 2019, 2019) incorporated tense information and time temporal interactions into their models. Other researchers have proposed using graph neural networks (GNNs) to encode dependency structures, which are important for temporal relation extraction (Mathur et al., 2021; Schlichtkrull et al., 2018). Wang et al. (Wang et al., 2022) added an attention layer to an R-GCN-based model to focus on document-creation-time (DCT). Man et al. (Man et al., 2022) used a reinforcement learning framework to select optimized sentences for input into neural models, improving performance.
In the clinical domain, Leeuwenberg et al. (Leeuwenberg and Moens, 2017) applied integer linear programming constraints to learn structured temporal relations. Dligach et al. (Dligach et al., 2017) improved performance by using neural networks such as CNN and LSTM as the backbone model. Lin et al. (Lin et al., 2019) utilized pre-trained language models like BERT to learn contextualized embeddings. However, no work has yet explored the feasibility of using LLMs in temporal relation extraction.
### Zero-shot Learning with ChatGPT
Since its launch, ChatGPT has drawn much attention for its strong ability on various NLP tasks. In the clinical domain, Tang et al. explored ChatGPT's ability on zero-shot named entity recognition and relation extraction (Tang et al., 2023). The experiments on the NCBI Disease and BC5CDR Chemical datasets showed that ChatGPT cannot recognize named entities correctly in the clinical domain, as the F1 score drops by around 55.94%-91.58% compared to SOTA supervised methods. ChatGPT's poor clinical NER was also confirmed on the i2b2 dataset (Hu et al., 2023). However, it can achieve comparable performance in clinical relation extraction, as the F1 score only decreases by 4.73%-10.93% (Tang et al., 2023; Agrawal et al., 2022).
For extraction-related tasks, some work evaluated ChatGPT's ability in event extraction, general information extraction, and relation extraction (Tang et al., 2023; Gao et al., 2023; Wei et al., 2023). The evaluation process follows multi-stage interactions/conversations with ChatGPT and guides it to produce the desired answers. The results of these studies showed that, with proper prompting, ChatGPT can achieve performance comparable to supervised methods in zero-shot or few-shot settings of extraction tasks. However, some work also points out that ChatGPT's ability is still limited in specific extraction scenarios, such as extracting from clinical notes with privacy information masked (Tang et al., 2023). Other work reported that ChatGPT performs entity recognition poorly on texts that are not natively digital, such as historical documents (Gonzalez-Gallardo et al., 2023; Borji, 2023).
## 3 ChatGPT for Zero-shot Temporal Relation Extraction
Given an input document, temporal relation extraction aims to identify the temporal relation between any two event triggers in the document, which is modeled as a multi-class classification problem. We propose three different prompt methods to evaluate ChatGPT's performance on temporal relation extraction, as shown in Figure 1.
### Zero-shot Prompt
In this prompt, given the document \(D=\{x_{1},x_{2},x_{3},\cdots,x_{n}\}\), we first label the event triggers with <EVENT></EVENT>. That is
if \(x_{i}\) is an event trigger, we label it as <EVENT>\(x_{i}\)</EVENT>. Then the whole labeled document is sent to ChatGPT and we ask ChatGPT to find the temporal relation between any two events. Note that our goal is to test ChatGPT's zero-shot ability on the temporal relation extraction task. Therefore, we do not provide any examples in our prompts. As shown in Figure 1, we give ChatGPT the whole document, the list of all temporal relations, and the annotations of event triggers. In the end, the input question is designed as "what is the temporal relation between <EVENT>\(x_{i}\)</EVENT> and <EVENT>\(x_{j}\)</EVENT>?". We let ChatGPT answer the question using the temporal relations provided in the list.
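To make the construction concrete, the sketch below assembles one such zero-shot query in Python. The relation list, the prompt wording, and the tagging helper are illustrative assumptions rather than the exact templates used in our experiments; the call to the ChatGPT API itself is left out.

```python
# Illustrative sketch of the zero-shot prompt (assumed wording, not the
# exact template used in the experiments).
RELATIONS = ["BEFORE", "AFTER", "EQUAL", "VAGUE"]  # e.g., the MATRES label set

def mark_events(tokens, event_indices):
    """Wrap every event-trigger token with <EVENT></EVENT> tags."""
    return " ".join(
        f"<EVENT>{tok}</EVENT>" if i in event_indices else tok
        for i, tok in enumerate(tokens)
    )

def zero_shot_prompt(labeled_document, e_i, e_j):
    """Build one zero-shot query for the event-trigger pair (e_i, e_j)."""
    return (
        f"Document: {labeled_document}\n"
        f"Possible temporal relations: {', '.join(RELATIONS)}.\n"
        f"What is the temporal relation between <EVENT>{e_i}</EVENT> "
        f"and <EVENT>{e_j}</EVENT>? Answer with one relation from the list."
    )
```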
### Event Ranking Prompt
Further, we design a new prompt to make the task easier for ChatGPT. Specifically, given the document \(D\), one event trigger \(e_{i}\) in the form <EVENT>\(x_{i}\)</EVENT>, and the temporal relation set \(R\), we require ChatGPT to complete the query \((e_{i},r_{j},?)\)\(\forall r_{j}\in R\). In this way, instead of querying \((e_{i},?,e_{j})\) as in the previous prompt, ChatGPT is required to predict the missing event trigger. As event triggers are already shown in the given text/document, ChatGPT is more likely to infer the event triggers than the unseen temporal relations, which are not explicitly provided in the context. In detail, as shown in Fig. 1, we achieve this by asking a question such as "Which events happened before \(e_{i}\)?". Then, based on ChatGPT's feedback, we form the \(<e_{i},r,e_{j}>\) triplets to perform the evaluation. Note that if the same event pair is detected under different temporal relations (>2), we assign this event pair the "vague" relation, as ChatGPT cannot confidently determine which temporal relation the pair belongs to. We also asked ChatGPT to provide some concise prompts to perform temporal relation extraction tasks. As shown in Fig. 2, the prompts provided by ChatGPT are in line with our event ranking prompt approach.
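A minimal sketch of this querying-and-recovery loop is shown below; `ask_chatgpt` is a placeholder for the chat API call, and the tie-breaking rule for pairs returned under several relations is a simplified version of the heuristic described above.

```python
# Sketch of the event ranking prompt: query one relation at a time, then
# recover (e_i, r, e_j) triplets from the returned event lists.
from collections import defaultdict

RELATIONS = ["BEFORE", "AFTER", "EQUAL", "VAGUE"]

def event_ranking_triplets(document, e_i, ask_chatgpt):
    """ask_chatgpt(question) -> list of event-trigger strings (assumed helper)."""
    answers = {}
    for rel in RELATIONS:
        question = (
            f"Document: {document}\n"
            f"Which events have the temporal relation '{rel}' with "
            f"<EVENT>{e_i}</EVENT>? List only the event triggers."
        )
        answers[rel] = set(ask_chatgpt(question))  # e.g., {"e2", "e5"}

    # Count how many relations each partner event was assigned to.
    hits = defaultdict(list)
    for rel, events in answers.items():
        for e_j in events:
            hits[e_j].append(rel)

    triplets = []
    for e_j, rels in hits.items():
        # Fall back to "vague" when the same pair shows up under several relations.
        rel = rels[0] if len(rels) == 1 else "VAGUE"
        triplets.append((e_i, rel, e_j))
    return triplets
```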
### Chain-of-thought Prompt
We notice that if two event triggers refer to the same event but point to different timestamps within the duration of that event, ChatGPT cannot distinguish them. For example, in Figure 1, "<EVENT e3> started" and "<EVENT e6> ended" refer to the beginning and the end of the season event. ChatGPT assumes that these two event triggers happened at the same time if we directly ask about their temporal relation following the zero-shot prompt.
Figure 1: The proposed prompts.
Figure 2: Prompts generated by ChatGPT.
Therefore, we propose a new chain-of-thought prompt with two steps, which first guides ChatGPT to determine whether two event triggers refer to the same event, and then guides it to infer their temporal relation. Specifically, given the document \(D\) and two event triggers \(e_{1}\) and \(e_{2}\), we first ask ChatGPT to determine if \(e_{1}\) and \(e_{2}\) refer to the same event. If they do not, we further ask ChatGPT to determine the temporal relation between the two event triggers. If they point to the same event, we use a similar prompt but with the extra phrase "in that event". As shown in Fig. 1, we first ask ChatGPT if the two event triggers <EVENT>started</EVENT> and <EVENT>ended</EVENT> refer to the same event. Then, based on ChatGPT's feedback, we iteratively go through the whole temporal relation list to determine which temporal relation holds between the two event triggers. We show the full prompts we designed in Appendix A.
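The two-stage flow can be sketched as follows; the question wording and the yes/no helper `ask_yes_no` are assumptions for illustration, not the exact prompts listed in Appendix A.

```python
# Sketch of the two-stage CoT flow: first check whether the two triggers
# refer to the same event, then probe candidate relations with yes/no questions.
PHRASES = {"BEFORE": "before", "AFTER": "after", "EQUAL": "at the same time as"}

def cot_relation(document, e1, e2, ask_yes_no):
    """ask_yes_no(question) -> bool (assumed helper around the chat API)."""
    same_event = ask_yes_no(
        f"Document: {document}\n"
        f"Do <EVENT>{e1}</EVENT> and <EVENT>{e2}</EVENT> refer to the same event?"
    )
    suffix = " in that event" if same_event else ""
    for rel, phrase in PHRASES.items():
        confirmed = ask_yes_no(
            f"Document: {document}\n"
            f"Did <EVENT>{e1}</EVENT> happen {phrase} "
            f"<EVENT>{e2}</EVENT>{suffix}?"
        )
        if confirmed:
            return rel
    return "VAGUE"  # no relation confirmed -> treat as vague
```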
## 4 Experiments
### Datasets
We use three datasets to evaluate the performance of ChatGPT on zero-shot temporal relation extraction. The statistical details of these datasets are shown in Table 1.
* **TimeBank-Dense** (Cassidy et al., 2014) labels 36 news documents in total, annotating temporal relations between event trigger-event trigger, timex-event trigger, and timex-timex pairs. Following previous work, we only test our model on the event trigger-event trigger relations in the test set.
* **MATRES** (Ning et al., 2018) is a dataset primarily focusing on temporal relations between event triggers within a local context (1 or 2 sentences).
* **TDDiscourse** (Naik et al., 2019) was created to explicitly emphasize global, discourse-level temporal ordering. Based on annotation accuracy, the dataset is split into TDDMan and TDDAuto, where TDDAuto contains many more automatically produced labels and more noise. In this paper, we only evaluate ChatGPT on TDDMan because of budget limitations.
Following the zero-shot setting, we use only the test set of each dataset, since our approaches do not require training ChatGPT. We report the F1 score on each dataset and each temporal relation.
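As a reference for how such per-relation scores can be computed, the sketch below derives precision, recall, and F1 for one relation from gold and predicted event-pair labelings; it is a generic illustration rather than the exact evaluation script.

```python
# Generic sketch of per-relation scoring: precision/recall/F1 for one
# relation, given gold and predicted {(e_i, e_j): relation} mappings.
def per_relation_f1(gold, pred, relation):
    tp = sum(1 for pair, r in pred.items() if r == relation and gold.get(pair) == relation)
    fp = sum(1 for pair, r in pred.items() if r == relation and gold.get(pair) != relation)
    fn = sum(1 for pair, r in gold.items() if r == relation and pred.get(pair) != relation)
    prec = tp / (tp + fp) if (tp + fp) else 0.0
    rec = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
    return prec, rec, f1
```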
### Baseline Models
Since there are no prior zero-shot learning methods for temporal relation extraction, we compare the performance of ChatGPT with the following advanced supervised methods:
* CAEVO (Chambers et al., 2014) a sieve-based architecture that includes multi-level classifiers for intra-sentence temporal relation learning.
* SP+ILP (Ning et al., 2017) a structured learning approach that captures the global temporal features when inferring the relation between two local events.
* Bi-LSTM (Cheng and Miyao, 2017) a Bi-LSTM-based model that encodes the dependency path between two events to classify temporal relation.
* Joint (Han et al., 2019) a joint end-to-end event and temporal relation extraction model that shares the contextualized embeddings of the two sub-tasks.
* Deep (Han et al., 2019) a neural network model that utilizes an SSVM as the scoring function to learn temporal constraints and context embeddings.
* UCGraph (Liu et al., 2021) a graph-based model trained with a mask pre-training mechanism. The model's uncertainty level is used to guide inference during testing.
* TIMERS (Mathur et al., 2021) a graph-based model that leverages three graphs to learn temporal, rhetorical, and syntactic information, respectively.
| Dataset | Train | Dev | Test | Labels |
| --- | --- | --- | --- | --- |
| TB-Dense | 4,032 | 629 | 1,427 | 6 |
| MATRES | 6,336 | – | 837 | 4 |
| TDDMan | 4,000 | 650 | 1,500 | 5 |

Table 1: Statistics of the number of annotated event pairs and different temporal relation labels of the MATRES, TB-Dense and TDDMan datasets.
* SCS-EERE Man et al. (2022) a model with a reinforcement learning-based selector that picks optimized sentences for temporal inference between the two given events.
* RSGT Zhou et al. (2022) a syntactic-and-semantic-based graph model pre-trained on a temporal neighbor prediction task.
* FaithTRE Wang et al. (2022a) a model that applies a Dirichlet prior to estimate the correctness likelihood; temperature scaling is also used to recalibrate the model's confidence after bias mitigation.
* DTRE Wang et al. (2022b) a document creation time (DCT)-aware graph model with a global consistency mechanism for inferring temporal relations.
* MulCo Yao et al. (2022) a joint model that uses BERT to learn contextualized features and a GNN to capture syntactic structures; the two are combined via a multi-level contrastive learning framework.
### Results
In Table 2, we can see that ChatGPT struggles to outperform supervised state-of-the-art models such as SCS-EERE Man et al. (2022) and RSGT Zhou et al. (2022), and even traditional neural network methods such as CAEVO and Bi-LSTM, indicating its ineffectiveness on the temporal relation extraction task in the zero-shot setting. Table 2 also shows the performance of ChatGPT under different prompts on the three datasets. We notice that ChatGPT_ER yields the worst performance on the MATRES and TDDMan datasets. ChatGPT's performance with the event ranking prompt on the TDDMan dataset is especially poor, as most event trigger pairs cannot be detected under this prompt. However, if the event trigger pairs are explicitly fed to ChatGPT, it can partially infer the temporal relations correctly; for example, the zero-shot prompt and CoT prompt improve the F1 score by \(14.8\%\) and \(23.8\%\), respectively. For the TB-Dense dataset, in contrast, ChatGPT_ZS has the worst performance. ChatGPT_CoT achieves the best performance across all datasets; for example, on MATRES it significantly outperforms ChatGPT_ZS and ChatGPT_ER by 27.1% and 33.1%. This illustrates the effectiveness of the CoT prompt's step-by-step guidance in prompting ChatGPT.
Table 3, Table 4 and Table 5 further list the detailed results of ChatGPT with different prompts on the three datasets. On TB-Dense, the performance with the event ranking prompt is much better than on the other datasets. ChatGPT with the zero-shot prompt cannot determine the temporal relation "is included" and therefore yields 0.0 performance on this relation type. The CoT prompt improves the overall performance mainly by detecting "before" and "after" temporal relations significantly better; as these two relations make up a large portion of the whole dataset, the overall performance improves accordingly.
## 5 Discussion
### ChatGPT is slightly better on small temporal relation classes
Imbalanced data is a severe, long-standing problem in the temporal relation extraction task. Because of how frequently different temporal orders of events occur in real life, some temporal relations, such as "simultaneous" and "equal", are very scarce in most temporal relation extraction datasets, and popular NLP data augmentation methods are difficult to apply in the temporal domain. Therefore, most state-of-the-art supervised methods yield much worse performance on these small relation classes.
In Deep Han et al. (2019), the authors reported detailed performance on each temporal class in the MATRES dataset. We therefore specifically compare performance on the small relation classes, i.e., "EQUAL" and "VAGUE", against Deep. As shown in Table 3, the supervised model Deep achieves much better overall performance (\(81.7\) F1 score), due to the contribution of the two majority relations, "before" and "after". Compared to Deep's 0.0 performance on "EQUAL" and "VAGUE", ChatGPT with the event ranking, CoT and zero-shot prompts can correctly extract some of these small-class relations.
### ChatGPT fails at actively extracting long-dependency temporal relations
As shown in Table 2, ChatGPT's performance drops substantially on the TDDMan dataset. We argue that this is mainly because TDDMan focuses on discourse-level temporal relations and ChatGPT fails to extract useful information from long documents. As shown in Table 4, the event ranking prompt yields almost 0.0 on the whole dataset. In practice, we initially input the whole document \(D\) to ChatGPT and ask ChatGPT which event triggers in the document \(D\) happened before the given event trigger \(e_{1}\). That is, ChatGPT should actively search all event triggers in the document and produce answers. However, on the TDDMan dataset, ChatGPT cannot produce a formatted answer, and the outputs are sometimes even random words from the document instead of event triggers. We then test the limit on the size of the input document, i.e., the number of sentences. We find that if we limit the input to at most 8 context sentences around the event triggers, ChatGPT produces formatted answers much more stably instead of randomly repeating parts of the document. However, in this way, most temporal relations extracted by ChatGPT do not match the gold labels in the TDDMan dataset, because the extracted temporal relations only span short dependencies while TDDMan emphasizes long-distance temporal dependencies.
Nevertheless, and surprisingly, if we explicitly ask ChatGPT what the temporal relation between two event triggers labeled in the document is, ChatGPT can answer some of these queries correctly. Note that we do not truncate the document in this case, and ChatGPT can still capture some of the temporal dependencies in the document. Compared to the event ranking prompt, ChatGPT passively receives the two event triggers with the zero-shot and CoT prompts, which may reduce the inference difficulty as more information is given.
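For reference, the context-limiting step mentioned above can be sketched as follows; the naive period-based sentence splitting and the window-growing strategy are illustrative assumptions, not the exact preprocessing we used.

```python
# Illustrative sketch of limiting the input to a window of sentences around
# the two event triggers (naive period-based splitting; the real sentence
# splitter and window placement are assumptions).
def truncate_context(document, trigger_a, trigger_b, max_sentences=8):
    sents = [s.strip() for s in document.split(".") if s.strip()]
    hits = [i for i, s in enumerate(sents) if trigger_a in s or trigger_b in s]
    if not hits:
        return ". ".join(sents[:max_sentences]) + "."
    lo, hi = min(hits), max(hits)
    # Grow the window symmetrically until it spans max_sentences sentences.
    while hi - lo + 1 < max_sentences and (lo > 0 or hi < len(sents) - 1):
        if lo > 0:
            lo -= 1
        if hi < len(sents) - 1 and hi - lo + 1 < max_sentences:
            hi += 1
    return ". ".join(sents[lo:hi + 1]) + "."
```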
### ChatGPT can be improved via multi-stage "yes" or "no" prompts
Intuitively, the most efficient way to query ChatGPT about temporal relations between two events in a given document should be the zero-shot prompt, which directly asks ChatGPT to answer the temporal relation between any two event
| Models | MATRES prec | MATRES recall | MATRES F1 | TDDMan prec | TDDMan recall | TDDMan F1 | TB-Dense prec | TB-Dense recall | TB-Dense F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CAEVO (Chambers et al., 2014) | – | – | – | 32.3 | 10.7 | 16.1 | 49.9 | 46.6 | 48.2 |
| SP+ILP (Ning et al., 2017) | 71.3 | 82.1 | 76.3 | 23.9 | 23.8 | 23.8 | 58.4 | 58.4 | 58.4 |
| Bi-LSTM (Cheng and Miyao, 2017) | 59.5 | 59.5 | 59.5 | 24.9 | 23.8 | 24.3 | 63.9 | 38.9 | 48.4 |
| Joint (Han et al., 2019b) | – | – | 75.5 | 41.0 | 41.1 | 41.1 | – | – | 64.5 |
| Deep (Han et al., 2019a) | 77.4 | 86.4 | 81.7 | – | – | – | 62.7 | 58.9 | 62.5 |
| UCGraph (Liu et al., 2021) | – | – | – | 44.5 | 42.3 | 43.4 | 62.4 | 56.1 | 59.1 |
| TIMERS (Mathur et al., 2021) | 81.1 | 84.6 | 82.3 | 43.7 | 46.7 | 45.5 | 48.1 | 65.2 | 67.8 |
| SCS-EERE (Man et al., 2022) | 78.8 | 88.5 | 83.4 | – | – | 51.1 | – | – | – |
| FaithTRE (Wang et al., 2022a) | – | – | 82.7 | – | – | 52.9 | – | – | – |
| RSGT (Zhou et al., 2022) | 82.2 | 85.8 | 84.0 | – | – | – | 68.7 | 68.7 | 68.7 |
| DTRE (Wang et al., 2022b) | – | – | – | 56.3 | 56.3 | 56.3 | – | – | 70.2 |
| MulCo (Yao et al., 2022) | 88.2 | 88.2 | – | 56.2 | 54.0 | 55.1 | 84.9 | 84.9 | 84.9 |
| ChatGPT_ZS | 26.4 | 24.3 | 25.3 | 17.7 | 13.6 | 15.3 | 23.7 | 14.3 | 17.8 |
| ChatGPT_ER | 21.9 | 17.3 | 19.3 | 3.7 | 0.3 | 0.5 | 37.6 | 35.8 | 36.6 |
| ChatGPT_CoT | 48.0 | 57.7 | 52.4 | 26.8 | 22.3 | 24.3 | 43.4 | 32.2 | 37.0 |

Table 2: The comparison of ChatGPT with various prompt techniques and supervised state-of-the-art models.
| Relation | Zero-shot prec | Zero-shot recall | Zero-shot F1 | CoT prec | CoT recall | CoT F1 | Event ranking prec | Event ranking recall | Event ranking F1 | Deep prec | Deep recall | Deep F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| overall | 26.4 | 24.3 | 25.3 | 48.0 | 57.7 | 52.4 | 21.9 | 17.3 | 19.3 | 77.4 | 86.4 | 81.7 |
| EQUAL | 0.0 | 0.0 | 0.0 | 7.1 | 2.9 | 4.1 | 5.8 | 11.1 | 7.6 | 0.0 | 0.0 | 0.0 |
| VAGUE | 14.3 | 58.7 | 23.1 | 14.4 | 8.1 | 10.4 | 14.6 | 86.2 | 25.0 | 0.0 | 0.0 | 0.0 |
| AFTER | 34.0 | 25.6 | 29.2 | 41.6 | 41.8 | 41.7 | 36.4 | 1.6 | 3.0 | 72.3 | 84.8 | 78.0 |
| BEFORE | 52.5 | 17.8 | 26.6 | 63.1 | 71.6 | 67.1 | 57.0 | 13.0 | 21.1 | 80.1 | 89.6 | 84.6 |

Table 3: The zero-shot performance of ChatGPT with three different prompts on the MATRES dataset.
triggers. If only one event trigger is given, then the prompt suggested by ChatGPT itself, i.e., event ranking, should be used to interact with ChatGPT. However, our extensive experiments show that these two prompt methods produce much worse performance in most cases compared to the CoT prompt. Note that both the zero-shot and CoT prompts provide sufficient information about the event triggers. We argue that another difference is responsible for the performance gap.
Comparing the two prompts, one significant difference is that the CoT prompt only accepts "yes" or "no" answers, while the zero-shot prompt returns a specific temporal relation label. In the zero-shot prompt, ChatGPT is required to select a temporal relation from the given list, which resembles a conventional multi-class classification problem. In the CoT prompt, however, ChatGPT only has to determine whether one specific temporal relation exists (or not) between two event triggers, which simplifies the problem into a binary classification. Further, with the previous question-answer pair as context, ChatGPT has a higher probability of making the correct selection. For example, in Fig. 1, in the first round, ChatGPT already inferred that the relation is not "EQUAL". Then, in the second round, ChatGPT can more confidently predict the temporal relation from _[BEFORE, AFTER, VAGUE]_ rather than "EQUAL".
### ChatGPT's temporal inference is inconsistent even with sufficient context
While testing ChatGPT with the event ranking prompt, we noticed a fatal issue in ChatGPT's temporal relation extraction, namely inconsistent temporal relation inference. Given the same input document, ChatGPT may produce different temporal relations between two event triggers. As shown in the top example of Figure 3, given the document \(D\), if the prompt is "Which event triggers happened _before_\(e_{1}\)?", ChatGPT will give a list of event triggers, e.g., [\(e_{2},e_{5},e_{6}\)]. Now, given the same document \(D\), if the prompt is "Which event triggers happened _after_\(e_{6}\)?", ChatGPT is expected to at least include \(e_{1}\) in the list. However, during the experiments, we noticed that ChatGPT failed in this scenario multiple times, and the failures can be categorized into two cases. The first failure case is that ChatGPT does not include \(e_{1}\) in any list associated with \(e_{6}\). The second case is that ChatGPT includes \(e_{1}\) in a wrong list associated with \(e_{6}\), e.g., "EQUAL", which violates temporal consistency.
| Relation | Zero-shot prec | Zero-shot recall | Zero-shot F1 | CoT prec | CoT recall | CoT F1 | Event ranking prec | Event ranking recall | Event ranking F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| overall | 17.7 | 13.6 | 15.3 | 26.8 | 22.3 | 24.3 | 3.7 | 0.3 | 0.5 |
| is included | 9.5 | 0.7 | 1.3 | 20.9 | 3.1 | 5.4 | 0.0 | 0.0 | 0.0 |
| include | 41.9 | 17.7 | 24.8 | 37.9 | 11.2 | 17.3 | 0.0 | 0.0 | 0.0 |
| after | 14.7 | 9.0 | 11.2 | 33.3 | 4.3 | 7.5 | 0.0 | 0.0 | 0.0 |
| before | 29.7 | 22.9 | 25.9 | 35.1 | 70.8 | 46.9 | 12.5 | 0.7 | 1.4 |
| simultaneous | 3.9 | 39.1 | 7.0 | 0.0 | 0.0 | 0.0 | 11.1 | 2.2 | 3.6 |

Table 4: The zero-shot performance of ChatGPT with three prompts on the TDD-Man dataset.
| Relation | Zero-shot prec | Zero-shot recall | Zero-shot F1 | CoT prec | CoT recall | CoT F1 | Event ranking prec | Event ranking recall | Event ranking F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| overall | 23.7 | 14.3 | 17.8 | 43.4 | 32.2 | 37.0 | 37.6 | 35.8 | 36.6 |
| is included | 0.0 | 0.0 | 0.0 | 10.0 | 1.9 | 3.2 | 6.2 | 3.8 | 4.7 |
| include | 3.3 | 10.7 | 24.8 | 5.5 | 16.1 | 8.2 | 16.7 | 5.4 | 8.1 |
| after | 29.0 | 17.2 | 11.2 | 70.4 | 13.9 | 23.2 | 19.0 | 1.5 | 2.7 |
| before | 40.0 | 9.9 | 25.9 | 35.0 | 75.5 | 47.9 | 31.2 | 25.3 | 27.9 |
| simultaneous | 1.5 | 45.5 | 3.0 | 33.3 | 4.5 | 8.0 | 6.7 | 50.0 | 11.8 |
| vague | 44.6 | 24.0 | 31.2 | 51.2 | 29.6 | 37.5 | 46.0 | 63.5 | 53.4 |

Table 5: The zero-shot performance of ChatGPT with three prompts on the TimeBank-Dense dataset.
A similar problem also occurs with the CoT prompt. As shown in Figure 4, during the experiments, ChatGPT gives various answers instead of "vague" as we specified when it thinks there is no clue for inferring the temporal relation between the two given event triggers. These answers include "Cannot determine.", "I cannot answer that question as it is unclear from the given information.", "Unknown.", etc. We treat all of these answers as "vague" when we evaluate ChatGPT. Intuitively, if ChatGPT cannot answer "yes" or "no" for a specific temporal relation between two event triggers, it should also not be able to answer questions about other temporal relations for the same event triggers. However, ChatGPT may violate its "unknown" statement in the multi-stage prompts. We test whether ChatGPT's inconsistency persists in multi-stage prompts with the following experiment. During the \(i\)-th round, if we ask "Did \(e_{1}\) happen before \(e_{2}\)?" and ChatGPT's answer includes "It is unclear from the given information.", we then ask in round \(i+1\) "Did \(e_{1}\) happen after \(e_{2}\)?". Surprisingly, in most cases (\(84\%\)), ChatGPT classifies the event trigger pair into some other temporal relation class even though it has just claimed that the information is insufficient. Moreover, \(96\%\) of these classification results are incorrect.
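The symmetry check behind this probe can be expressed compactly; the sketch below assumes directed answers for both orders of an event pair and flags violations under relation inversion.

```python
# Sketch of the consistency probe: directed answers for both orders of an
# event pair should agree under relation inversion.
INVERSE = {"BEFORE": "AFTER", "AFTER": "BEFORE", "EQUAL": "EQUAL", "VAGUE": "VAGUE"}

def is_consistent(rel_e1_to_e2, rel_e2_to_e1):
    """True if the 'e1 vs e2' and 'e2 vs e1' answers are mutually consistent."""
    return INVERSE.get(rel_e1_to_e2) == rel_e2_to_e1
```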
## 6 Conclusion
In this work, we comprehensively test ChatGPT's zero-shot ability on temporal relation extraction. We designed three different prompts to evaluate ChatGPT's performance. Our extensive experiments demonstrate that with proper prompting, ChatGPT's performance on zero-shot temporal relation extraction can be significantly improved, highlighting the importance of prompt engineering to better trigger ChatGPT's ability in future work. However, compared to supervised methods, ChatGPT is still far behind. We further discuss our findings from experimental results, including its better performance in small classes than supervised methods and its drawbacks such as failures in long-distance temporal dependency inference and inconsistent temporal relation inference. Our work only takes the initial step of exploring the
Figure 4: ChatGPT’s inconsistency failures examples in the CoT prompt. The top shows ChatGPT does not reply “vague” when unsure and the bottom shows that ChatGPT still infers other inconsistent relations.
Figure 3: ChatGPT’s two temporal inconsistency cases examples in the Event ranking prompt.
LLMs for zero-shot temporal relation extraction. To fill the gap between the performance of LLMs in the zero-shot setting and that of advanced supervised methods, we believe more efforts should be explored in the future, such as the few-shot prompt engineering and investigating the performance of other LLMs such as GPT-4 [1].
|
2305.12201 | GraVAC: Adaptive Compression for Communication-Efficient Distributed DL
Training | Distributed data-parallel (DDP) training improves overall application
throughput as multiple devices train on a subset of data and aggregate updates
to produce a globally shared model. The periodic synchronization at each
iteration incurs considerable overhead, exacerbated by the increasing size and
complexity of state-of-the-art neural networks. Although many gradient
compression techniques propose to reduce communication cost, the ideal
compression factor that leads to maximum speedup or minimum data exchange
remains an open-ended problem since it varies with the quality of compression,
model size and structure, hardware, network topology and bandwidth. We propose
GraVAC, a framework to dynamically adjust compression factor throughout
training by evaluating model progress and assessing gradient information loss
associated with compression. GraVAC works in an online, black-box manner
without any prior assumptions about a model or its hyperparameters, while
achieving the same or better accuracy than dense SGD (i.e., no compression) in
the same number of iterations/epochs. As opposed to using a static compression
factor, GraVAC reduces end-to-end training time for ResNet101, VGG16 and LSTM
by 4.32x, 1.95x and 6.67x respectively. Compared to other adaptive schemes, our
framework provides 1.94x to 5.63x overall speedup. | Sahil Tyagi, Martin Swany | 2023-05-20T14:25:17Z | http://arxiv.org/abs/2305.12201v2 | # GraVAC: Adaptive Compression for Communication-Efficient Distributed DL Training
###### Abstract
Distributed data-parallel (DDP) training improves overall application throughput as multiple devices train on a subset of data and aggregate updates to produce a globally shared model. The periodic synchronization at each iteration incurs considerable overhead, exacerbated by the increasing size and complexity of state-of-the-art neural networks. Although many gradient compression techniques propose to reduce communication cost, the ideal compression factor that leads to maximum speedup or minimum data exchange remains an open-ended problem since it varies with the quality of compression, model size and structure, hardware, network topology and bandwidth. We propose _GraVAC_, a framework to dynamically adjust compression factor throughout training by evaluating model progress and assessing gradient information loss associated with compression. _GraVAC_ works in an online, black-box manner without any prior assumptions about a model or its hyperparameters, while achieving the same or better accuracy than dense SGD (i.e., no compression) in the same number of iterations/epochs. As opposed to using a static compression factor, _GraVAC_ reduces end-to-end training time for ResNet101, VGG16 and LSTM by 4.32\(\times\), 1.95\(\times\) and 6.67\(\times\) respectively. Compared to other adaptive schemes, our framework provides 1.94\(\times\) to 5.63\(\times\) overall speedup.
deep learning, data-parallel training, gradient compression, sparsification, adaptive systems
## I Introduction
Deep Learning (DL) is a supervised machine learning approach that optimizes a loss function over a non-convex surface by comparing model predictions with ground truth. Each training iteration in DL involves forward and backward pass, i.e., generate predictions from input data, assess loss, compute gradients and update model parameters via optimization method like gradient descent. Training is an iterative process, typically involving multiple passes over the entire dataset where each pass is called an _epoch_. DL is also heavily influenced by certain _hyperparameters_ that affect training speed, quality, or both. Commonly used hyperparameters are learning rate, momentum, batch size, weight decay, epochs, activation function, etc.
Distributed data-parallel (DDP) methods further scale training across multiple nodes that train a globally shared model with I.I.D. (independent and identically distributed) data by periodically aggregating locally computed gradients at the end of each iteration. The compute requirements to train DL models double every 3.5 months [1], while the compute gains in chip design for ML accelerators and bandwidth gains in telecommunications networks double every 24 and 18 months [2, 3]. Thus, the infrastructure required to train state-of-the-art models tends to fall behind their compute and networking demands. Since upgrading the network stack in cloud, datacenter and HPC clusters is less frequent than adding new accelerators to pre-existing systems, gradient communication tends to be the major bottleneck in distributed training [4].
Different compression techniques have been proposed in recent years to mitigate this synchronization overhead. However, the optimal compression factor (CF) that minimizes data exchange or end-to-end training time depends on the model itself (i.e., its size, structure and depth), the available network bandwidth, and the compression overhead. Unlike traditional HPC and distributed computing applications that only measure parallel efficiency, DDP training has an additional statistical efficiency associated with it. Although the amount of computation performed in each iteration is the same, some iterations tend to be more crucial than others towards the overall learning of the model. Updates are especially sensitive in early stages and to hyperparameters like the learning rate schedule, momentum and weight decay [5]. It would thus be intuitive to compare the information loss in gradients on account of compression, and use a lower CF when considerably more information is lost and a higher CF when most information is preserved under compression. We can subsequently increase compression as training continues and gradients saturate, and decrease it back during the aforementioned critical stages.
We take into account both the parallel and statistical efficiency aspects of gradient compression in this work: a high CF improves overall throughput (i.e., the number of samples processed per second) by reducing communication cost, but increases information loss in the gradients, resulting in slower or insignificant updates. The two metrics in DDP compression are Pareto-related, as one improves to the detriment of the other. We propose _GraVAC_ (**Gra**dient **V**ariance-based **A**daptive **C**ompression) to dynamically adjust the CF by comparing the information loss from compression against the original gradients computed in backpropagation. _GraVAC_ evaluates different CFs in a given search space and determines the CF that best balances parallel and statistical efficiency in DDP training with compression. We validate our approach over a variety of DL models and directly compare with static CFs on compressors like Top-\(k\)[6], Deep Gradient Compression or DGC [7], Redsync [9] and Random-\(k\)[6].
## II Background and related work
DDP training can be implemented either via MPI-based collectives (AllReduce) [10, 11, 12] or using one or more centralized parameter servers (PS) [13] to accumulate and distribute model updates among workers.
### _Scaling Efficiency of DDP Training_
DL training is an iterative process that involves parameter updates at each step via gradient descent (GD) [14]. Full GD uses the entire training data at every step, making the whole process slow and compute-intensive, while Stochastic GD processes a single sample and does not vectorize multiple samples on fast accelerators. Mini-batch GD is the middle ground between Full and Stochastic GD, where \(b\) samples are randomly drawn from I.I.D. data. Eqn. (1) describes the update rule in mini-batch GD, where parameters \(w\) at the \((i+1)\)-th iteration on \(N\) workers minimize the loss function \(\mathcal{L}(\cdot)\) on input samples \(x_{j}\) of batch size \(b\) from distribution \(\mathcal{X}_{j}\) with learning rate \(\eta\). With weak scaling, we can increase the amount of per-iteration work by adding more workers while keeping the per-worker batch-size \(b\) the same.
\[w_{i+1}=w_{i}-\eta\frac{1}{N}\sum_{n=1}^{n=N}\frac{1}{|b|}\sum_{j\in b}\frac{ \partial}{\partial w_{i}}\mathcal{L}(x_{(j,n)},w_{i}) \tag{1}\]
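As a concrete illustration of Eqn. (1) in a data-parallel setting, the sketch below averages per-worker gradients with an all-reduce before the SGD update; it assumes a torch.distributed process group has already been initialized and is not the actual GraVAC training loop.

```python
# Sketch of one data-parallel step of Eqn. (1): each worker computes
# gradients on its local mini-batch and gradients are averaged with an
# all-reduce before the SGD update. Assumes torch.distributed is initialized.
import torch
import torch.distributed as dist

def ddp_sgd_step(model, loss_fn, x, y, lr):
    model.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    world = dist.get_world_size()
    for p in model.parameters():
        if p.grad is None:
            continue
        dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)  # sum over N workers
        p.grad.div_(world)                             # average
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.add_(p.grad, alpha=-lr)              # w <- w - eta * grad
    return loss.item()
```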
The _ideal_ throughput of a distributed application \(T_{N}\) executed across \(N\) workers is \(N\) times the throughput of a single worker \(T_{1}\). The deviation is measured via the "scaling efficiency" in Eqn. (2a). Assuming negligible IO overhead, the iteration time in dense SGD is bounded by computation and communication time (Eqn. (2b)). It may be possible to overlap communication with computation, but only partially, since the latter is comparatively much lower on modern GPUs and TPUs. Model communication has been shown to be hundreds or even thousands of times more expensive than gradient computation. Thus, frequent synchronization (\(t_{sync}\)) is the bottleneck that halts linear scaling in DDP. Table 1 describes the size, density and convergence target of ResNet101 [15], VGG16 [16] and LSTM [17] with dense SGD communication. Latency is further exacerbated on constrained networks with limited bandwidth, as large volumes of data are exchanged by multiple workers simultaneously.
\[\eta_{scaling}=\frac{T_{N}}{N\cdot T_{1}} \tag{2a}\] \[t_{iter}\approx t_{compute}+t_{sync} \tag{2b}\]
For a DL model with a total of \(M\) parameters, the time cost based on the \(\alpha\)-\(\beta\) communication model (where \(\alpha\) is the latency and \(\beta\) is the inverse of bandwidth) for tree-based allreduce is \((2\alpha\log N+2M\beta\log N)\)[18]. For ring-based allreduce, this becomes \(2(N-1)\alpha+2M\beta(N-1)/N\). Hence, communication cost increases as more workers are added to the mix in distributed training. Fig. 1a shows how overall throughput deviates from the ideal as the cluster size increases. The scaling efficiency is also influenced by the message size, i.e., the total gradients/parameters to be communicated. In dense SGD, we observed scaling to be affected by the tensor-size distributions across the layers of a model as well. For example, LSTM has a better \(\eta_{scaling}\) than ResNet101 despite being a larger model. This is because parameters in LSTM are spread across just 2 layers, compared to 101 in ResNet101.
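The two cost expressions above can be compared directly with a few lines of Python; the latency and bandwidth figures used below are placeholders, not measurements from our cluster.

```python
# Sketch of the alpha-beta cost model for all-reducing M gradient values
# across N workers (alpha: per-message latency, beta: per-element transfer time).
import math

def tree_allreduce_cost(M, N, alpha, beta):
    return 2 * alpha * math.log2(N) + 2 * M * beta * math.log2(N)

def ring_allreduce_cost(M, N, alpha, beta):
    return 2 * (N - 1) * alpha + 2 * M * beta * (N - 1) / N

# Example with placeholder numbers: the ring variant amortizes the
# bandwidth term better than the tree variant as N grows.
print(tree_allreduce_cost(25e6, 32, 1e-5, 4e-9))
print(ring_allreduce_cost(25e6, 32, 1e-5, 4e-9))
```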
### _Gradient Variance in Deep Learning_
Prior work has demonstrated that gradient information can help measure the statistical efficiency of distributed training [19, 20]. There is a strong correlation between changes in the eigenvalues of the second-order Hessian [21] and the variance of the first-order gradients. [22, 23] explore how gradients behave in the early stages of DL training and during certain critical periods, influenced by hyperparameters like the learning rate schedule, gradient clipping and the type of SGD used (e.g., zero, first or second-order moments). Fig. 1b attests to those findings: we plot the variance over the starting iterations and observe how drastically the gradients change before saturating over training.
### _Gradient Compression_
Many lossy compression techniques have been proposed for DDP and federated learning in recent years. Lossy compression incurs a fundamental trade-off between data size and information loss; one can either reduce message size by losing more information, or preserve data quality by keeping the majority of the original bits intact. In the context of DDP, a higher CF reduces communication time at the cost of accuracy degradation or more steps/epochs required for the same convergence. CF is the ratio of the size of the original gradients to the size of the compressed tensors. E.g., keeping 10% of the gradients gives a CF of 10x, while keeping 1% gives 100x. Lossy compression can be broadly classified into _quantization_, _sparsification_ or _low-rank approximations_.
The bit-width of single-precision (32-bit) floats is reduced in gradient quantization. Techniques like automatic mixed
Fig. 1: Communication overhead and early critical period in DDP training.
precision (AMP) [24] reduces gradients to half-precision, resulting in 2x CF. QSGD [25] balances the trade-off between accuracy and quantization precision. 1-bit SGD [26] reduces 32-bit floats to 1-bit and propagates quantization error via error-feedback. Sparsification methods communicate only a fraction of the gradient values along with their indices and set everything else to 0. Top-\(k\) sparsifies by extracting the top \(k\)% values while Random-\(k\) does so randomly with negligible compression overhead. DGC discards gradients below a certain threshold along with using momentum correction and gradient clipping. Methods like Redsync [40] combine quantization and sparsification, but the estimation quality is not accurate [27]. Approaches like PowerSGD [28] and Pufferfish [29] achieve compression via low-rank updates. The former can be viewed as adding regularization in DL, while the latter performs low-rank factorization on fully connected, convolutional and LSTM layers.
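To make the sparsification idea concrete, a minimal Top-\(k\) compressor can be sketched as follows; this is a generic PyTorch illustration, not the layerwise implementation evaluated later.

```python
# Minimal Top-k sparsification sketch in PyTorch: keep the k% largest-
# magnitude entries (values + indices) and zero out the rest on decompression.
import torch

def topk_compress(grad: torch.Tensor, ratio: float = 0.01):
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, indices = torch.topk(flat.abs(), k)
    return flat[indices], indices, grad.shape          # CF ~ 1/ratio

def topk_decompress(values, indices, shape):
    out = torch.zeros(shape, dtype=values.dtype, device=values.device)
    out.view(-1)[indices] = values
    return out
```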
_What should be the ideal CF in Compression-based DDP?_
The ideal CF is one that reduces communication time without trimming so much gradient information that it becomes detrimental to the final model. Compression has its own associated cost, depending on the target CF and the computational complexity of the mechanism itself. These factors affect both the parallel efficiency of distributed training and the statistical inefficiency due to information loss from compression. Fig. 2 demonstrates this aptly: the CF that gives maximum speedup varies for each model and compression technique employed. The models are trained to the Table 1 targets. ResNet101 on Top-\(k\) achieves the most speedup at 100x, while VGG16 and LSTM peak at CFs 1000x and 10x respectively. On the other hand, ResNet101 fails to converge for any CF with Random-\(k\) compression. VGG16 and LSTM converge with 10x and fail with other CFs. Although a typical ML practitioner may not necessarily need to think about a plethora of compression methods, choosing the right CF for any compressor and DL model that minimizes training time, or even converges successfully, presents a non-trivial challenge.
Dynamic compression mechanisms like AdaQS [30] perform quantization using the gradient mean to standard deviation ratio (MSDR). Systems like Accordion [31] and ScaDLES [32] switch between low and high compression based on critical regime identification. We tackle the ideal CF exploration problem in _GraVAC_ in a gradient-driven manner by comparing the variance of prior and post-compression gradients. For clarity, prior-compression gradients refer to the original tensors computed in the backward pass. By measuring the information lost in compression, we dynamically adjust the CF over each iteration. Starting with a low CF initially, we gradually increase compression as training progresses. On encountering sensitive or critical regions, _GraVAC_ switches to a lower CF that least degrades convergence.
## III Design and implementation
In this section, we first describe the trade-off between parallel and statistical efficiency of DDP training _with_ compression. Then we describe the metrics "compression gain" and "compression throughput" to combine the two, and explain _GraVAC_'s adaptive compression algorithm.
### _Parallel Efficiency of Gradient Compression_
The end goal of gradient compression is to improve DDP scaling efficiency. Application scaling is governed by the DDP mechanism (ring-based, tree-based allreduce or parameter servers), communication library used (MPI, NCCL [11], Gloo [10] or RPC) and available bandwidth. Keeping the latter and network infrastructure aside, speedup in any DL model depends on the target CF, quality of estimation and compression overhead. The overall iteration time in Eqn. 2b is adjusted for compression as
\[t_{iter}^{(c)}\approx t_{compute}+t_{sync}^{(c)}+t_{compress}^{(c)}+t_{decompress}^{(c)}\]
where it takes \(t_{compress}^{(c)}\) time to reduce gradients to CF \(c\) such that it reduces communication time to \(t_{sync}^{(c)}\). \(t_{decompress}^{(c)}\) is the time taken to reconstruct the compressed gradients to the same dimension as the original gradients. A viable compressor must have its compression time considerably lower than synchronization time.
The parallel efficiency of a distributed application suffers with more workers due to higher synchronization costs. Improving the network bandwidth alleviates this only to a certain extent. [4] investigates how DDP throughput improves marginally with higher bandwidth: ResNet50 peaks at 75% scale-out on a 25 Gbps network and remains the same even at 100 Gbps. This is because the network transport implementations of current DL frameworks cannot fully utilize the available network bandwidth. Thus, even though cloud providers like GCP offer anywhere from 10-32 Gbps of bandwidth depending on the machine type and VM size, it may not be utilized to its full potential.
Fig. 3 shows how the throughput increases and the communication overhead reduces with compression. The results are relative to CF 10x for each model. We perform layerwise DGC compression over a 32 GPU cluster. System throughput is determined only by compression overhead and communication time, as the compute time in backpropagation stays the same across all CFs. Depending on the compressor used, compression latency may vary with the target CF. For example, it decreases with larger CFs, as Top-\(k\) uses a max-heap and sorts the top \(k\)% elements
Fig. 2: CF with maximal speedup (to reach Table 1 targets) varies for each model and compression technique used. The results are normalized by 10x CF while a speedup of 0.0 implies convergence failure.
in \(O(N+k\log k)\) time. Throughput for ResNet101 and VGG16 saturates at 500x and does not improve thereafter, while LSTM saturates at 1000x (Fig. 3a). Communication savings also diminish at higher CFs due to small message sizes and network saturation (Fig. 3b). Thus, the highest CF may not necessarily correspond to the largest throughput.
### _Statistical Inefficiency of Gradient Compression_
Gradient compression mechanisms rely on _error-feedback_[35, 36], which essentially acts as delayed updates, as commonly noted in asynchronous training. The gradients not selected by the compressor in the current iteration are not discarded, but added to _residual gradients_, which in turn are added to the gradients computed in the next iteration. Residual gradients and error-feedback help preserve important features and are critical to convergence [6, 7, 8]. Applying compression without error-feedback has been shown to achieve lower accuracy in deep learning models [35]. At the same time, residual gradients can sometimes degrade generalization performance due to stale updates.
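A minimal error-feedback wrapper around the Top-\(k\) helpers sketched earlier could look as follows; it illustrates the residual-accumulation idea rather than GraVAC's actual implementation.

```python
# Sketch of error-feedback around the Top-k helpers above: gradient mass
# dropped by the compressor is kept as a residual and re-injected next step.
import torch

class ErrorFeedbackCompressor:
    def __init__(self, ratio=0.01):
        self.ratio = ratio
        self.residual = None

    def step(self, grad):
        if self.residual is None:
            self.residual = torch.zeros_like(grad)
        corrected = grad + self.residual               # g_ef = g_o + residual
        vals, idx, shape = topk_compress(corrected, self.ratio)
        sent = topk_decompress(vals, idx, shape)
        self.residual = corrected - sent               # what was not communicated
        return vals, idx, shape
```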
DDP training with very high CFs can negatively impact training time, convergence quality, or both if the compressed gradients are too sparse or quantized to update the model in any significant way. _It is thus crucial to have an indicator that quantifies information loss between compressed and the original gradients._ We do so by comparing variance between the original and compressed tensors on every iteration and see how it relates to actual model convergence. Denoting the original gradients as _BC_ (_Before-Compression_) and compressed tensors as _AC_ (_After-Compression_), we compare BC and AC tensors in two separate configurations with CFs 10x and 1000x in Fig. 4, 5 and 6. We compare the convergence curves for the two CFs with _Dense SGD_ (i.e., no compression) to see how much accuracy degrades with compression.
_AC_ 10x is nearly identical to its _BC_ counterpart in ResNet101 (Fig. 4a), while there is considerably more information loss between _BC_ and _AC_ 1000x (Fig. 4b). This translates to their convergence curves in Fig. 4c as well, where the 10x and dense SGD runs follow a similar convergence trajectory while 1000x achieves considerably lower accuracy over the same iterations.
VGG16 follows a similar trend with 10x CF. The _BC_ and _AC_ gradient variance (Fig. 5a) is nearly identical, and so are the convergence curves for 10x and Dense SGD (Fig. 5c). We notice a slight deviation between _BC_ and _AC_ at 1000x initially in Fig. 5b, which correlates with the slow convergence in the early iterations for 1000x in Fig. 5c. As the deviation between _BC_ and _AC_ decreases, we see both CFs converge to the same accuracy as Dense SGD in the same number of iterations.
The _AC_ 10x and 1000x gradients lie on similar scales as _BC_ in LSTM, although the higher CF has slightly higher variance (Fig. 6a and 6b). As seen from Fig. 6c, Dense SGD has the lowest perplexity (thus, better model quality), followed by the 10x and 1000x CFs.
To compare the information loss between the original gradients and gradients compressed to CF \(c\), we define a simple metric called _compression gain_. As part of error feedback, we update the gradients such that \(g_{ef}^{(i)}=g_{0}^{(i)}+\text{residual\_gradients}^{(i-1)}\) for \(i\geq 1\). Here, \(g_{0}^{(i)}\) are the original gradients calculated via backpropagation at iteration \(i\), while \(\text{residual\_gradients}^{(i-1)}\) are the left-overs from iteration \((i-1)\) and before, which are added back as part of error-feedback to produce \(g_{ef}^{(i)}\) for the current iteration. With compression operator \(\mathcal{C}\), gradients are compressed as \(g_{c}^{(i)}=\mathcal{C}[g_{ef}^{(i)}]\). Compression gain is then measured as the ratio of the expected variance of the compressed gradients \(g_{c}^{(i)}\) to that of the original gradients modified with error-feedback, i.e., \(g_{ef}^{(i)}\):
\[\text{Compression gain}=\frac{\mathbb{E}[||g_{c}^{(i)}||^{2}]}{\mathbb{E}[||g_{ef}^{(i)}||^{2}]}\]
In prior work, gradient noise has been well studied in the deep learning literature pertaining to the divergence between locally-computed and aggregated gradients in DDP [37, 38, 20]. These works use gradient information to tweak the global batch-size in DDP to optimize job completion time or allocate optimal resources for a job. Instead of looking at local and global gradients, _GraVAC_'s novelty comes from evaluating the noise between the original and compressed tensors. The gradients computed at each iteration can be noisy. Thus, we keep a moving average of the respective variances of the original and compressed gradients. The computation and memory footprint of this approach is low, since the window size in the moving average is finite and only a single-precision float is stored per iteration. Compression gain is bounded between 0 and 1, such that it is low when \(\mathcal{C}\) trims too much information. As models keep training, gradients saturate and higher compression becomes more viable in the later stages of training. Hence, compression gain increases over training as the compressed tensors become more aligned with the original gradients.
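The bookkeeping this requires is small; the sketch below tracks the gain as an exponentially weighted moving average of the per-iteration squared-norm ratio, where the smoothing factor is an illustrative assumption.

```python
# Sketch of tracking compression gain as an EWMA of the per-iteration ratio
# ||g_c||^2 / ||g_ef||^2; the smoothing factor alpha is an assumed value.
class CompressionGain:
    def __init__(self, alpha=0.1):
        self.alpha = alpha
        self.value = None

    def update(self, g_ef, g_c):
        ratio = float((g_c ** 2).sum()) / max(float((g_ef ** 2).sum()), 1e-12)
        self.value = ratio if self.value is None else \
            (1 - self.alpha) * self.value + self.alpha * ratio
        return self.value   # in [0, 1]: close to 1 -> little information lost
```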
We plot the compression gains for the three models when training with fixed CFs 10x and 1000x respectively, shown in Fig. 4d, 5d and 6d. In each model, 10x has a higher compression gain than 1000x, since more information is preserved at the smaller CF. _It should also be apparent that Dense SGD training has a constant gain of 1.0_. For all models, the convergence curve of 10x follows a similar trajectory as Dense SGD. Correspondingly, the compression gain of 10x stays close to 1.0 throughout. In ResNet101, the gain of 1000x is low initially and grows in an oscillating manner, although it remains lower than the gains of 10x and
Fig. 3: Throughput and communication speedup for layerwise DGC compression, normalized by 10x CF.
Dense SGD. The low gains in the first 1000 iterations of CF 1000x correlate with the considerable gap between the _BC_ and _AC_ gradients in Fig. 4b and the lower accuracy in Fig. 4c. VGG16 is more robust to higher CFs (Fig. 5c), as also seen from the high compression gains of CF 1000x in Fig. 5d. For LSTM, the compression gain for 10x stays close to 1.0 and between 0.8-0.9 for 1000x. The proximity of the two CFs to Dense SGD's gain of 1.0 mirrors their perplexity curves in Fig. 6c. From these results we see how compression gain serves as a viable indicator of the statistical efficiency of DDP with compression.
### _Combining System Throughput and Compression Gain_
As described earlier in II-C as well as Fig. 2, choosing a high CF, unintuitively, does not necessarily improve training time and may even degrade final model quality. Thus, to account for both the parallel and statistical efficiency of DDP training _with_ gradient compression, we combine _system throughput_ (T\({}_{system}\)) and _compression gain_ into a single metric called _compression throughput_:
\[\text{T}_{compression}=\text{T}_{system}\times\text{Compression gain}\]
If the CF is high, the system throughput is high as well, but the compression gain is relatively lower, decreasing the resulting T\({}_{compression}\). On the other hand, the compression gain is high for a low CF, but the system throughput is lower due to the relatively higher communication overhead. _With compression throughput, we capture this Pareto relationship between the parallel efficiency (system throughput) and statistical efficiency (compression gain) of gradient compression in DDP._
We build _GraVAC_ as a modular extension on top of PyTorch's [33] DDP module [34] using Python in about 3000 lines of code. A base GravacOptimizer wraps common SGD optimizers implemented in PyTorch by extending the base torch.optim.Optimizer class. The optimizer takes an additional Compressor object that specifies the type of compression technique used. We implement four pre-existing techniques as compressor classes in this paper: Top-_k_, DGC, Redsync and Random-_k_. Compression for the appropriate CF and its gain is computed before the optimizer step function which applies the aggregated gradient updates on model parameters.
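To illustrate how such a wrapper can be structured, the sketch below shows an optimizer that delegates to a pluggable compressor before all-reducing gradients; the class names, interface, and the densified all-reduce are simplifying assumptions for illustration, not the actual GraVAC source.

```python
# Illustrative wrapper structure (names and interface are assumptions, not
# the actual GraVAC code). The sparse payload is densified before the
# all-reduce purely for brevity; a real implementation would exchange the
# compressed payload itself.
import torch
import torch.distributed as dist

class GravacOptimizerSketch:
    def __init__(self, optimizer, compressor, cf=10.0):
        self.opt = optimizer            # any torch.optim.Optimizer
        self.compressor = compressor    # object with compress()/decompress()
        self.cf = cf                    # current compression factor

    def step(self):
        world = dist.get_world_size()
        for group in self.opt.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                payload = self.compressor.compress(p.grad, self.cf)
                dense = self.compressor.decompress(payload, p.grad.shape)
                dist.all_reduce(dense, op=dist.ReduceOp.SUM)
                p.grad.copy_(dense.div_(world))
        self.opt.step()                 # apply the aggregated update
```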
_GraVAC Algorithm:_ Alg. 1 describes _GraVAC_'s approach of using compressor \(\mathcal{C}\) to scale CFs in the exploration space [\(\theta_{min},\theta_{max}\)], where each candidate CF is evaluated for window steps and incremented in step-sizes of \(\theta_{s}\) w.r.t. \(\theta_{min}\). For example, scaling from CF 10x to CF 20x means \(\theta_{s}=20/10=2\)x. The threshold \(\epsilon\) denotes the minimum compression gain required for compressed gradients to be eligible for communication.
Fig. 4: ResNet101: Prior and Post-Compression gradients, test accuracy and compression gain for CFs 10x and 1000x.
Fig. 5: VGG16: Prior and Post-Compression gradients, test accuracy and compression gain for CFs 10x and 1000x.
Fig. 6: LSTM: Prior and Post-Compression gradients, test perplexity (lower is better) and compression gain for CFs 10x and 1000x.
```
 1: Input: θ_min, θ_max, ε, θ_s, ω, window, compressor C
 2: w_0: initial model state; N: total nodes; b: per-worker batch-size;
    residual = 0; T_sys, T_compress = empty()
 3: Train for i = 1, 2, 3, ...                          ▷ training iterations
 4:   g_o(i), t_o = ∇f(x(i), w_i)                       ▷ backpropagation
 5:   g_o(i) = g_o(i) + residual                        ▷ error-feedback
 6:   g_min(i), t_min = C(g_o(i), θ_min)                ▷ compress to CF θ_min
 7:   δ_min = EWMA(‖g_min(i)‖² / ‖g_o(i)‖²)             ▷ θ_min compression gain
 8:   g_c(i), t_c = C(g_min(i), θ_s)                    ▷ compress to CF (θ_s · θ_min)
 9:   δ_c = EWMA(‖g_c(i)‖² / ‖g_o(i)‖²)                 ▷ gain for CF (θ_s · θ_min)
10:   t_compress = t_min + t_c                          ▷ total compression time
11:   if δ_c ≥ ε:
12:     g̃(i), t_s = Aggregate(g_c(i))                  ▷ synchronize g_c(i)
13:     residual = g_o(i) − g_c(i)                      ▷ update residual
14:     t_iter = t_o + t_compress + t_s                 ▷ iteration time
15:     UpdateStep(θ_min, δ_min, t_iter)
16:   elif δ_c < ε and δ_min ≥ ε:
17:     g̃(i), t_s = Aggregate(g_min(i))                ▷ synchronize g_min(i)
18:     residual = g_o(i) − g_min(i)                    ▷ update residuals
19:     t_iter = t_o + t_compress + t_s                 ▷ iteration time
20:     UpdateStep(θ_min, δ_min, t_iter)
21:   else:
22:     g̃(i), t_s = Aggregate(g_o(i))                  ▷ synchronize g_o(i)
23:     residual = 0                                    ▷ no residual gradients
24:     t_iter = t_o + t_s                              ▷ iteration time
25:     UpdateStep(1, 1, t_iter)
26:   w_{i+1} = w_i − η · g̃(i)                         ▷ apply SGD update
27:   θ_s = CheckGraVAC(i, θ_s, δ_min, δ_c)
28: procedure UpdateStep(θ, δ, t_iter):
29:   T_sys = N · b / t_iter                            ▷ system throughput
30:   T_compress[θ] = T_sys · δ                         ▷ compression throughput
31: procedure CheckGraVAC(i, θ_s, δ_min, δ_c):
32:   if i % window == 0:
33:     θ_s = ScalingPolicy(θ_s)                        ▷ compression scale-up
34:     if ω ≥ |δ_min − δ_c| / δ_min:
35:       θ_min = θ_s · θ_min                           ▷ scale-up minimum CF
36:     ct = sort(T_compress.values())                  ▷ sorted compression throughputs
37:     if |ct[-1] / ct[-2]| ≤ ω:
38:       θ_ideal = T_compress.get(ct[-2])              ▷ ideal CF
39:       return θ_ideal / θ_min                        ▷ gives optimal θ_s
40:     else:
41:       return θ_s                                    ▷ else use old scaling factor
```
**Algorithm 1**_GraVAC_'s Adaptive Compression
If neither CF meets the threshold \(\epsilon\), the original uncompressed gradients are synchronized and used to record the system/compression throughput. The CF and compression gain are both 1 in this case, as set in the UpdateStep call at line 25.
Following the SGD update (line 26), we evaluate _GraVAC_ to assess the performance of the CFs evaluated so far. This happens at a frequency determined by window. Here, we adjust \(\theta_{s}\) by a certain factor to scale up compression, determined by the chosen ScalingPolicy. The scaling policy tunes compression only up to the upper bound \(\theta_{max}\). We explore two scaling policies in this paper, described in detail in Section IV-B. After scaling \(\theta_{s}\), we also assess whether the minimum CF, i.e., \(\theta_{min}\), can be scaled up as well. The intuition is that as training progresses, the model gradually starts converging and we can use higher compression even for the minimum CF later on. In addition to the candidate CFs, we thus scale up the minimum CF as well. The transition is made if the current gain \(\delta_{c}\) is within \(\omega\)% of the gain of the previous \(\theta_{min}\) (line 34). Once enough CFs are evaluated, we look at the two largest compression throughputs (line 36) and fetch the corresponding CF if they are within the bound \(\omega\) of each other. This means the compression throughput has saturated; we thus pick the lower CF as \(\theta_{ideal}\) (line 38) and return the appropriate step-size (line 39). If the threshold \(\omega\) is not met, we use \(\theta_{s}\) as is.
_When does compression scale-up?_ As seen from Alg. 1, compression scale-up happens during _GraVAC_'s evaluation phase, where we scale the step-size \(\theta_{s}\) in accordance with a specific scaling policy. At the same time, we escalate the minimum CF \(\theta_{min}\) to the currently evaluated CF if the two compression gains are within \(\omega\)% of each other.
_When does compression scale-down?_ Compression scale-down is determined by \(\epsilon\) (shown via the conditional statements in lines 11-25). If the current CF loses considerably more information in the compressed gradients \(g_{c}^{(i)}\), we use the lower CF \(\theta_{min}\). If the latter fails to meet \(\epsilon\) as well, we send the uncompressed gradients \(g_{o}^{(i)}\) as a last resort.
## IV Evaluation
### _Cluster Setup and Training Hyperparameters_
We evaluate _GraVAC_ on a 32 GPU setup on the Google Cloud Platform (GCP) across 8 VMs. Each VM is a n1-standard-8 machine type with 8 vCPUs, 30 GB system memory and 4 NVIDIA V100 GPUs with 16 GB VRAM each. The machines are configured with PyTorch 1.10.1, CUDA 11.3, CUDA driver 465.19.01 and NCCL 2.10.3.
We evaluate the three models described in Table 1. ResNet101 is trained with per-worker batch size 32, momentum 0.9, weight decay 0.0001 and SGD optimizer with initial learning rate (lr) 0.1 decayed by a factor of 10 at 9K and 14K iterations respectively. VGG16 is also trained with per-worker batch-size 32, weight decay 0.0005, momentum 0.9 and SGD with fixed lr 0.1. Lastly, LSTM is measured with test perplexity (i.e., exponential of test loss) with per-worker batch-size 20, momentum 0.9, weight decay 0.0001 and SGD with fixed lr 0.1. The model is initialized with 1500 embedding dimensions and 2 hidden layers with 35 bptt steps.
We evaluate _GraVAC_ with different scaling policies and look at their convergence curves (i.e., test accuracy/perplexity vs. iterations), the average compression throughput of candidate CFs, and kernel density estimates (KDE) of training iterations using different CFs over the course of training. The KDE gives the distribution over the iterations for all CFs and is plotted on the log scale with a smoothing bandwidth of \(0.1\) passed to the Gaussian KDE.
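For reference, such KDE curves can be reproduced with a short sketch like the following (our own illustration; `iterations_per_cf`, mapping each CF to the iterations trained at that CF, is an assumed logging format):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

def plot_cf_densities(iterations_per_cf):
    """Plot one KDE per CF over the training iterations (bandwidth 0.1)."""
    x_max = max(max(v) for v in iterations_per_cf.values())
    xs = np.linspace(1, x_max, 500)
    for cf, iters in iterations_per_cf.items():
        kde = gaussian_kde(np.asarray(iters, dtype=float), bw_method=0.1)
        plt.plot(xs, kde(xs), label=f"{cf}x")
    plt.yscale("log")                 # densities shown on a log scale
    plt.xlabel("training iteration")
    plt.ylabel("density")
    plt.legend()
    plt.show()
```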
### _GraVAC's Adaptive Compression Policies_
In this section, we look at how _GraVAC_ achieves optimal CF for a given \(\theta_{min}\), \(\theta_{max}\), \(\epsilon\), window, \(\omega\) and stepsize. To see how a model converges and communication costs vary by evaluating different candidate CFs in the search space, we employ an _Exponential_ policy that upscales CFs aggressively, and a relatively smoother _Geometric_ scaling policy that scales CFs as a geometric progression.
#### IV-B1 Exponential scaling policy
In this policy, we implement the ScalingPolicy function from Alg. 1 such that CFs are scaled up in exponents of 2 w.r.t. the first initialized \(\theta_{min}\). On top of DGC, we set \(\theta_{min}\) and \(\theta_{max}\) to 10x and 1000x, window=500 and \(\omega\)=1%. So we scale up by factors of \(2^{1}\), \(2^{2}\), \(2^{4}\), \(2^{8}\) w.r.t. 10x up until 1000x. The candidate CFs thus evaluated in this policy are 10x, 20x, 40x, 160x and 1000x. We run _GraVAC_ in two configurations with different thresholds on compression gain, \(\epsilon\) = 0.7 and 0.9. The lower \(\epsilon\) relaxes the constraint on the gain for higher CFs to be eligible for communication, thus achieving higher compression. A large \(\epsilon\) (i.e., close to 1) allows for compression only if the compressed tensors are highly representative of the original gradients. First, we compare these two thresholds with dense SGD, as the latter demonstrates the ideal convergence scenario. Then, we compare _GraVAC_ with different compression techniques on static CFs and look at final model accuracy, communication savings and overall speedup.
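A minimal sketch of how this policy enumerates candidate CFs (our illustration, assuming only \(\theta_{min}\) and \(\theta_{max}\) as inputs):

```python
def exponential_candidates(theta_min, theta_max):
    """Candidate CFs under the exponential policy: scale theta_min by 2^1, 2^2,
    2^4, 2^8, ... (the exponent doubles each time), capped at theta_max."""
    cfs, exp = [theta_min], 1
    while cfs[-1] < theta_max:
        cfs.append(min(theta_min * 2 ** exp, theta_max))
        exp *= 2
    return cfs

# e.g., exponential_candidates(10, 1000) -> [10, 20, 40, 160, 1000]
```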
_ResNet101:_ Fig. 7 shows how _GraVAC_ achieves the same convergence as dense SGD in the same number of iterations. The low and high \(\epsilon\) reduce overall communication volume by 163\(\times\) and 19\(\times\) over dense SGD. _We measure communication volume as the ratio of cumulative single-precision floats exchanged among workers in GraVAC relative to dense SGD._ Training cycle is slightly more volatile with compression, as seen from the accuracy drop due to lr decay at around 9000-th iteration. The drop is more apparent for \(\epsilon\) = 0.7 as we continue to train with higher CFs on account of the lower threshold. Comparatively, \(\epsilon\) = 0.9 is more robust to hyperparameter tuning like lr decay as we tend to train with a lower CF due to higher threshold. This is corroborated from Fig. 7b which shows distribution of training iterations over the CFs. We equally train with 10x and 1000x for \(\epsilon\) = 0.9, while we mostly train with 1000x for \(\epsilon\) of 0.7. For the compression throughputs of \(\epsilon\) = 0.9 in Fig. 7c, it might seem counterintuitive at first that although \(T_{compression}\) is maximum for 1000x and minimum for 10x, we still evenly train with the two CFs. This is on account of the high threshold and because \(\theta_{min}\) did not scale up and remained at 10x for ResNet101. Thus, whenever
the compression gain of any candidate CF did not meet the threshold, we synchronized gradients compressed at 10x. For \(\epsilon\) of 0.7, compression throughput was maximum for 1000x and we trained at this CF for most iterations as the corresponding gain easily met that threshold.
_VGG16:_ Like ResNet101, VGG16 also converges to the same accuracy as dense SGD within the same iterations, where \(\epsilon\) = 0.7 and 0.9 reduce communication volume by 80\(\times\) and 13.5\(\times\) over dense SGD (Fig. 8). Although \(T_{compression}\) is maximum at 1000x for \(\epsilon\) = 0.9, the corresponding gain was _not_ high enough to meet the threshold. Because of this, we switch back to \(\theta_{min}\) and thus train with 10x for the majority of iterations, as seen from the kernel density estimates in Fig. 8b. However, when \(\epsilon\) was lower, we were able to find the 40x CF to meet that threshold. \(T_{compression}\) corresponding to this CF was the second largest in our exploration space. As candidate CFs are evaluated over the iterations, the model gradually converges and, as a result, compression gain improves even further on larger CFs as training progresses. Ultimately, we arrive at \(\theta_{ideal}\) = 1000x corresponding to the maximum compression throughput (Fig. 8c).
_LSTM:_ Like the models before, _GraVAC_ with either \(\epsilon\) converged in the same iterations as dense SGD training, while reducing the communication volume by 279\(\times\) and 289\(\times\) for \(\epsilon\) of 0.9 and 0.7 respectively. Given the dataset, model and training hyperparameters, we already saw from Fig. 6d that compression gain for LSTM was high for both 10x and 1000x. We observed a similar trend here as compression gain corresponding to 1000x easily satisfied both thresholds and thus, we train with the largest available CF for most iterations (Fig. 9b). Correspondingly, the compression throughput is maximum at this CF as well.
Further, we compare _GraVAC_ with static CFs running on different compression techniques. In particular, we train our models with Top-\(k\), DGC, Redsync and Random-\(k\) at CFs 10x and 1000x. We run each compression technique until its final accuracy/perplexity does not improve any further and report that value, the difference in convergence compared to the dense SGD baseline from Table 1, and the relative training speedup over Top-\(k\) 10x for each model. The results are tabulated in Table II. We do not consider dense SGD training in this comparison since we already established previously how _GraVAC_ is able to achieve the same convergence in the same iterations, and other compression techniques have already been compared to dense SGD in prior works. For ResNet101, the 1000x CF on Redsync, DGC and Top-\(k\) yields considerably higher speedups than 10x Top-\(k\). However, these methods at 1000x CF achieve considerably lower accuracy than Top-\(k\) at 10x. At 1000x, Top-\(k\), DGC and Redsync do not improve beyond 76.4%, 78.6% and 77.4% top-1 test accuracy. Random-\(k\) failed to converge at either CF and accuracy did not improve beyond 20%. Because of _GraVAC_'s adaptive scheme, we converge to 80.2% accuracy while still reducing training time by 4.32\(\times\).
TABLE II: _GraVAC_'s model quality and speedup over static CFs

| Model | Compression | Acc./Ppl. | Diff. | Speedup |
| --- | --- | --- | --- | --- |
| ResNet101 | Top-k 10x | 80.14% | +0.14% | 1× |
| ResNet101 | Top-k 1000x | 76.4% | -3.6% | 3.02× |
| ResNet101 | DGC 10x | 80.4% | +0.4% | 1.23× |
| ResNet101 | DGC 1000x | 78.6% | -1.4% | 5.19× |
| ResNet101 | Redsync 10x | 79.4% | -0.6% | 1.2× |
| ResNet101 | Redsync 1000x | 77.4% | -2.6% | 6.94× |
| ResNet101 | Random-k 10x | - | - | - |
| ResNet101 | Random-k 1000x | - | - | - |
| ResNet101 | _GraVAC_ | **80.2%** | **+0.2%** | **4.32×** |
| VGG16 | Top-k 10x | 91.2% | +1.2% | 1× |
| VGG16 | Top-k 1000x | 90.68% | +0.68% | 3.22× |
| VGG16 | DGC 10x | 90.8% | +0.8% | 0.935× |
| VGG16 | DGC 1000x | 90.4% | +0.4% | 3.35× |
| VGG16 | Redsync 10x | 90.45% | +0.45% | 0.99× |
| VGG16 | Redsync 1000x | 90.3% | +0.3% | 3.6× |
| VGG16 | Random-k 10x | 87.8% | -2.2% | 0.7× |
| VGG16 | Random-k 1000x | - | - | - |
| VGG16 | _GraVAC_ | **90.48%** | **+0.48%** | **1.95×** |
| LSTM | Top-k 10x | 22.0 | +0.0 | 1× |
| LSTM | Top-k 1000x | 26.78 | -4.78 | 3.36× |
| LSTM | DGC 10x | 21.67 | +0.33 | 1.23× |
| LSTM | DGC 1000x | 25.14 | -3.14 | 6.25× |
| LSTM | Redsync 10x | 21.65 | +0.35 | 1.17× |
| LSTM | Redsync 1000x | 24.24 | -2.24 | 6.9× |
| LSTM | Random-k 10x | 24.15 | -2.15 | 1.3× |
| LSTM | Random-k 1000x | - | - | - |
| LSTM | _GraVAC_ | **21.25** | **+0.75** | **6.67×** |
Fig. 8: VGG16: _GraVAC_ with \(\epsilon\) = [0.7, 0.9] and Dense SGD.
Fig. 7: ResNet101:_GraVAC_ with \(\epsilon\) = [0.7, 0.9] and Dense SGD.
For VGG16, we previously observed that the model is already quite robust to high compression (Fig. 5). We see this again here: Top-\(k\), DGC and Redsync at 1000x all cross 90% accuracy with 3.22\(\times\), 3.35\(\times\) and 3.6\(\times\) speedup over Top-\(k\) 10x. Random-\(k\) at 10x also converged, albeit to a lower 87.8% accuracy and with slower convergence. Since _GraVAC_ attains 90.48% test accuracy with 1.95\(\times\) training speedup, other compression schemes were more optimal in this case simply because they used high CFs.
In LSTM, _GraVAC_ obtains the least perplexity of 21.25 while still providing maximum speedup of 6.67\(\times\) over Top-\(k\) 10x. Random-\(k\) 10x converged to 24.15 perplexity and did not improve further, while Random-\(k\) 1000x failed here again. Of all the configurations, only Top-\(k\), DGC and Redsync at 10x CF and _GraVAC_ achieved better perplexity than dense SGD.
Thus, we see how _GraVAC_ is able to train models like ResNet101 and LSTM to high accuracy/perplexity and still reduce training time significantly. Static compression schemes achieve high accuracy at low CF at the cost of high communication overhead, thus providing lower speedup. Large CFs considerably reduce communication, but the final model quality is not on par with _GraVAC_. On the flip side, some over-parameterized models like VGG16 can be robust to compression and still converge successfully at high static CFs.
#### IV-B2 Geometric scaling policy
We also propose a relatively smoother compression policy where ScalingPolicy increments CFs as a geometric progression with common ratio 2. We deploy _GraVAC_ with Redsync on ResNet101 and set \(\theta_{min}\) = 10x, \(\theta_{max}\) = 2000x, \(\epsilon\) = 0.7, window = 2000 steps and \(\omega\) = 1%. Thus, candidate CFs are 10x, 20x, 40x, 80x, 160x, 320x, 640x, 1280x and 2000x. Fig. 10a shows the accuracy curve over the iterations. Compared to dense SGD, _GraVAC_ with geometric scaling converged _while reducing communication volume by 76\(\times\)_. In contrast to exponential scaling, convergence is relatively slower because we evaluate each candidate CF for a larger window size. As a result, gradients get even smaller as _GraVAC_ gradually arrives at larger CFs and compression gain increases beyond \(\epsilon\). Thus, we see similar iteration densities from CF 10x to 640x (Fig. 10b). After the first 7 CFs are evaluated over 2000 steps each, we mostly train with CF 1280x from 16K iterations onward (because 8 \(\times\) 2000 = 16000). We did not scale to 2000x in our evaluation since compression throughput for 1280x and 2000x was 1029.9 and 1035.4, which falls within \(\omega\)'s bound of 1%. _This case highlights the effectiveness of GraVAC such that it does not scale the CF beyond the point where it stops improving the parallel or statistical efficiency of gradient compression_. In this case, _GraVAC_ does not compress beyond 1280x as it corresponds to the maximum compression throughput (and at a lower CF of 1280x compared to 2000x).
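The geometric policy can be sketched analogously (again our illustration, not the reference implementation):

```python
def geometric_candidates(theta_min, theta_max, ratio=2):
    """Candidate CFs as a geometric progression with common ratio 2, capped at theta_max."""
    cfs = [theta_min]
    while cfs[-1] < theta_max:
        cfs.append(min(cfs[-1] * ratio, theta_max))
    return cfs

# e.g., geometric_candidates(10, 2000) -> [10, 20, 40, 80, 160, 320, 640, 1280, 2000]
```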
### _Gains of Multi-level Compression in GraVAC_
Alg. 1 explains how, at each iteration, _GraVAC_ scales compression from the initial \(\theta_{min}\) to the current CF being evaluated (i.e., \(\theta_{c}\)), up to the maximum allowed \(\theta_{max}\). Thus, compressing the original gradients (computed in the backward pass) twice, i.e., once at \(\theta_{min}\) and then again at \(\theta_{c}\), can incur significant overhead, especially on larger models. The latency of a compressor may vary with the size of the tensor to compress as well as the target CF. To reduce the cumulative overhead of compressing the original tensors multiple times, we apply a multi-level compression scheme as follows: given a compressor \(\mathcal{C}\) and a tensor \(\mathcal{X}\) to be compressed to CFs \(\theta_{1}\) and \(\theta_{2}\) such that \(\theta_{2}>\theta_{1}\), rather than compressing \(\mathcal{X}\) at each CF separately as:
\[\mathcal{X}_{1}=\mathcal{C}(\theta_{1},\mathcal{X})\text{ and }\mathcal{X}_{2}= \mathcal{C}(\theta_{2},\mathcal{X})\]
to produce compressed tensors where \(|\mathcal{X}_{2}|<|\mathcal{X}_{1}|<|\mathcal{X}|\). In _GraVAC_, we first compute \(\mathcal{X}_{1}\) and then compress this tensor to \(\theta_{2}^{{}^{\prime}}\) to produce \(\mathcal{X}_{2}^{{}^{\prime}}\):
\[\mathcal{X}_{1}=\mathcal{C}(\theta_{1},\mathcal{X})\implies\mathcal{X}_{2}^{ {}^{\prime}}=\mathcal{C}(\theta_{2}^{{}^{\prime}},\mathcal{X}_{1})\ :\ \theta_{2}^{{}^{\prime}}=\frac{\theta_{2}}{\theta_{1}}\]
The resulting tensor \(\mathcal{X}_{2}^{{}^{\prime}}\) is such that \(\mathcal{X}_{2}^{{}^{\prime}}=\mathcal{X}_{2}\) for \(\theta_{2}^{{}^{\prime}}=\theta_{2}/\theta_{1}\). The appeal of doing so is that the second compression operation is applied on a smaller tensor \(\mathcal{X}_{1}\) instead of \(\mathcal{X}\) again. We tabulate the savings of multi-level compression in Table 3. Let's consider a scaling case of _GraVAC_ where \(\theta_{min}=10\)x and current CF evaluated is 1000x. Then multilevel _GraVAC_ first compresses to 10x and then further compresses the reduced tensors to 100x, i.e., \(\theta_{1}=10\)x and \(\theta_{2}^{{}^{\prime}}=100\)x so that \(\theta_{2}=1000\)x. In direct approach, we first compress original gradients to 10x, then compress the original gradients again to 1000x. From our results, we see that multi-level compression is at least 1.1\(\times\) and up to 1.83\(\times\) faster than directly compressing the original tensors twice.
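A small sketch of the idea, using a simple Top-\(k\) compressor for illustration (our own example, not the paper's implementation; the index bookkeeping maps second-level selections back to original positions):

```python
import torch

def topk_compress(x, cf):
    """Keep the largest-magnitude 1/cf fraction of entries of a flattened tensor."""
    k = max(1, int(x.numel() / cf))
    _, idx = torch.topk(x.abs().flatten(), k)
    return x.flatten()[idx], idx          # selected values and their positions

x = torch.randn(1_000_000)

# Direct approach: compress the full tensor twice.
x1_vals, x1_idx = topk_compress(x, 10)      # theta_1 = 10x
x2_vals, x2_idx = topk_compress(x, 1000)    # theta_2 = 1000x

# Multi-level: compress the already-reduced tensor to theta_2 / theta_1 = 100x.
x2m_vals, x2m_local_idx = topk_compress(x1_vals, 100)
x2m_idx = x1_idx[x2m_local_idx]             # map back to positions in the original x
# For Top-k, (x2m_vals, x2m_idx) selects the same entries as the direct 1000x pass.
```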
### _Comparing_ GraVAC _with Prior Art_
In this section, we compare _GraVAC_ with another adaptive scheme called Accordion [31]. For the three models, we use bounds of Rank-1 and Rank-4 for compression in Accordion, as described in [31] and compare with _GraVAC_ in terms of communication and time savings (i.e., training speedup) to achieve the same test accuracy/perplexity. The savings are normalized by Accordion's performance for each respective model, shown in Table 4. For ResNet101, _GraVAC_ reduces total communication volume by 44.5\(\times\) and reduces training time by 1.94\(\times\) over Accordion. _GraVAC_ speeds up training by 5.63\(\times\) over Accordion for communication-heavy models like VGG16. In LSTM training, _GraVAC_ converges twice as fast by reducing communication volume up to 104.2\(\times\).
Fig. 10: ResNet101: _GraVAC_ with Geometric scaling policy.
Accordion is based on detecting critical regions during training, i.e., when inter-iteration gradients computed in backward pass change significantly and cross a certain user-defined threshold. Accordion switches between 2 compression factors such that it uses the low CF in critical regions and the higher CF otherwise. On the other hand, _GraVAC_ looks at information loss on account of compression (i.e., statistical efficiency) and not just relative gradient change in sensitive regions of training. That is, _GraVAC_ looks at intra-iterations gradients as well (between original and gradients compressed at different CFs). Additionally, _GraVAC_ scales compression across a wider range and carefully inspects intermediary CFs as potential compression candidates. Thus, we obtain higher speedups when training with _GraVAC_.
#### IV-D1 _GraVAC_ vs. Accordion on Random-_k_ Compression
We previously saw in Fig. 1(b) and Table II that ResNet101 failed to converge at any CF with Random-\(k\) compression. In this section, we present a special case of using Random-\(k\) under the hood with both _GraVAC_ and Accordion. Although the compression quality of Random-\(k\) is lower compared to other compressors, we present this as a special case to demonstrate how _GraVAC_ is more dynamic and operates at a finer granularity. We launch _GraVAC_ with Random-\(k\) on \(\theta_{min}\) = 1.5x, \(\theta_{max}\) = 1000x, window = 2000 and \(\epsilon\) = 0.7. The CFs are scaled up via the _geometric scaling policy_. Accordion was also deployed with the same min-max bounds on CF as _GraVAC_, i.e., low CF = 1.5x and high CF = 1000x. The convergence curves comparing _GraVAC_ and Accordion are shown in Fig. 10(a). Unlike static 10x Random-\(k\) compression (Fig. 1(b)) that failed to converge, we were able to achieve 78% top-1 test accuracy for ResNet101 with _GraVAC_. The CFs used for training by _GraVAC_ were 1.5x, 3x, 6x, 12x, 24x and 48x. All candidate CFs beyond this were ignored as they did not meet the required threshold of \(\epsilon\). CF 12x has the highest density, implying most iterations used this CF for training (Fig. 10(b)). Correspondingly, compression throughput is maximum for this CF as well. Compared to dense SGD, we reduced overall communication volume by 18\(\times\).
As for Accordion on Random-_k_, we see in Fig. 10(a) that training saturates at 20% accuracy. This is because Accordion does _not_ consider the efficacy of the compression technique itself, and only switches between a low and high CF if the uncompressed, inter-iteration gradients change beyond a certain measure. With a low CF 1.5x, information loss in Random-_k_ was too high to update ResNet101 in a meaningful way.
## V Conclusion
Gradient noise has previously been used as a scalability indicator for batch and cluster-size scaling in deep learning [19, 20, 37, 38, 39]. Adaptive compression schemes like Accordion [31] switch between two compression levels based on when the inter-iteration gradients change by some margin. _GraVAC_'s key insight is to tweak the compression factor over the course of training while balancing the Pareto-relationship between parallel and statistical efficiency in gradient compression. We use "compression gain" to measure information loss on account of compression and choose a CF appropriately. In our evaluation, we see that _GraVAC_ converges 1.95 to 6.67\(\times\) faster than choosing a static CF, while converging in the same number of iterations as dense SGD. Compared to Accordion, we observed up to 5.63\(\times\) reduction in end-to-end training time.
One should be mindful when training models with _GraVAC_ as it introduces parameters like compression threshold (\(\epsilon\)) and window size that may affect overall training performance. Setting too small a window size may result in poor convergence as all the candidate CFs may be exhausted while the model is still in early training stages and gradients are still volatile. As for \(\epsilon\), choosing a very small threshold may enable high compression but may lead to model degradation by allowing high CF gradients from the beginning that will not update the model in a significant way.
arXiv:2305.04463v1 (2023-05-08), http://arxiv.org/abs/2305.04463v1
###### Abstract
Let \(G\) be a connected graph. A pebbling move is defined as taking two pebbles from one vertex and placing one pebble on an adjacent vertex and throwing away the other pebble. The non-split domination cover pebbling number, \(\psi_{ns}(G)\), of a graph \(G\) is the minimum number of pebbles that must be placed on \(V(G)\) such that after a sequence of pebbling moves, the set of vertices with a pebble forms a non-split dominating set of \(G\), regardless of the initial configuration of pebbles. We discuss some basic results, the NP-completeness of the non-split domination number, and determine \(\psi_{ns}\) for some families of Middle graphs.
Keywords: non-split dominating set, non-split domination cover pebbling number, cover pebbling number, Middle graphs.
2020 AMS Subject Classification: 05C12, 05C25, 05C38, 05C76.
**Nonsplit Domination Cover Pebbling**
**Number for Some Class of Middle Graphs**
A. Lourdusamy\({}^{1}\), I. Dhivviyanandam\({}^{2}\), Lian Mathew\({}^{3}\)
\({}^{1}\)Department of Mathematics,
St. Xavier's College (Autonomous), Palayamkottai
Email: [email protected]; [https://orcid.org/0000-0001-5961-358X](https://orcid.org/0000-0001-5961-358X).
\({}^{2}\) Reg. No : 20211282091003, Department of Mathematics,
St. Xavier's College (Autonomous), Palayamkottai
Affiliated to Manonmaniam Sundaranar University, Abisekapatti-627012, Tamil Nadu, India,
Email: [email protected]; [https://orcid.org/0000-0002-3805-6638](https://orcid.org/0000-0002-3805-6638).
\({}^{3}\)Department of Mathematics
CHRIST(Deemed to be university), Pune- Lavasa Campus, India.
Email: [email protected]; [https://orcid.org/0000-0002-4926-7756](https://orcid.org/0000-0002-4926-7756).
## 1 Introduction
Lagarias and Saks were the first to introduce the concept of pebbling, and F.R.K. Chung [1] used the concept of pebbling to solve a number-theoretic conjecture. Then many others followed suit, including Glenn Hulbert,
who published a survey of pebbling variants [2]. The subject of graph pebbling has seen massive growth after Hulbert's survey. In the past 30 years, many new variants of graph pebbling have been developed, which can be applied to the fields of transportation, computer memory allocation, game theory, and the installation of mobile towers.
Let us denote \(G\)'s vertex and edge sets as \(V(G)\) and \(E(G)\), respectively. Consider a graph with a fixed number of pebbles at each vertex. When two pebbles are removed from a vertex, one pebble is thrown away and the other is placed on an adjacent vertex. This process is known as a pebbling move. The pebbling number of a vertex \(v\) in a graph \(G\) is the smallest number \(\pi(G,v)\) that allows us to move a pebble to \(v\) using a sequence of pebbling moves, regardless of how these pebbles are located on \(G\)'s vertices. The pebbling number, \(\pi(G)\), of a graph \(G\) is the maximum of \(\pi(G,v)\) over all the vertices \(v\) of the graph. Considering the concepts of cover pebbling [6] and non-split domination [5], we develop a new concept, called the non-split domination cover pebbling number of a graph, denoted by \(\psi_{ns}(G)\). In [6], "the cover pebbling number, \(\lambda(G)\), is defined as the minimum number of pebbles required such that given any initial configuration of at least \(\lambda(G)\) pebbles, it is possible to make a series of pebbling moves to place at least one pebble on every vertex of \(G\)", and in [5] the domination cover pebbling number is defined as "the minimum number of pebbles required so that any initial configuration of pebbles can be shifted by a sequence of pebbling moves so that the set of vertices that contain pebbles form a dominating set \(S\) of \(G\)". Kulli, V.R. et al. introduced the non-split domination number in [5]. A dominating set \(D\) of a graph \(G=(V,E)\) is a non-split dominating set if the induced subgraph \(<V-D>\) is connected. The non-split domination number \(\gamma_{ns}(G)\) of \(G\) is the minimum cardinality of a non-split dominating set. Combining the concepts of cover pebbling and non-split domination, we arrive at the definition of the non-split domination cover pebbling number, \(\psi_{ns}(G)\), of a graph \(G\) as the minimum number of pebbles that must be placed on \(V(G)\) such that after a sequence of pebbling moves, the set of vertices with a pebble forms a non-split dominating set of \(G\), regardless of the initial configuration of pebbles. We discuss the basic results and determine \(\psi_{ns}\) for the middle graphs of paths, cycles, wheels, and fans.
## 2 Preliminaries
For graph-theoretic terminologies, the reader can refer to [3, 4].
**Result 1**.: _[5]_
1. _The non-split domination number of a complete graph_ \(K_{n}\) _is_ \(\gamma_{ns}(K_{n})=1\)_._
2. _The non-split domination number of a Wheel graph is_ \(\gamma_{ns}(W_{n})=1\)_._
3. _The non-split domination number of a path is_ \(\gamma_{ns}(P_{n})=n-2\)_._
4. _The non-split domination number of Cycle is_ \(\gamma_{ns}(C_{n})=n-2\)_._
**Result 2**.: _[7] The domination cover pebbling number of the wheel is \(\psi(W_{n})=n-2\)._
**Result 3**.: _[6] The cover pebbling number of the path \(P_{n}\) is \(\gamma(P_{n})=2^{n}-1\)._
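For illustration (the usual stacking argument, added here for reference): if all pebbles lie on one end vertex of \(P_{n}\), placing a pebble on the vertex at distance \(d\) costs \(2^{d}\) pebbles, so covering every vertex requires

\[\sum_{d=0}^{n-1}2^{d}=2^{n}-1\]

pebbles in the worst case, which matches the value of \(\gamma(P_{n})\) in Result 3.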
## 3 Results
**Theorem 1**.: _For a simple connected graph \(G\), \(\psi(G)\leq\psi_{ns}(G)\leq\sigma(G)\)._
**Theorem 2**.: _For a complete graph \(K_{n}\) on \(n\) vertices, the non-split domination cover pebbling number is \(\psi_{ns}(K_{n})=1\)._

**Theorem 3**.: _The non-split domination cover pebbling number of a wheel graph \(W_{n}\) is \(\psi_{ns}(W_{n})=\psi(W_{n})\)._
### NP-Completeness of the Nonsplit Domination Problem
The proof is by reduction from the known NP-complete problem 'domination'. The domination problem asks: 'for a given graph \(G\) and an integer \(k\), does the graph \(G\) contain a dominating set of cardinality at most \(k\)?'.
**Theorem 4**.: _The nonsplit domination problem is NP-complete._
Proof.: Let \(G\) be any graph. Construct \(G^{*}\) as follows: let the graph \(G\) be on the first level and let the path \(P_{2}\) be on the second level. Join each vertex of the graph \(G\) to both vertices of the path \(P_{2}\) (see Figure 1). It is clear that the construction can be done in polynomial time.

Now, let \(G\) be a graph with domination number \(k\). Then, \(k+1\) vertices will form a non-split dominating set for the graph \(G^{*}\) iff \(G\) has a dominating set with \(k\) vertices.

Suppose that the graph \(G\) has a dominating set with cardinality \(k\). Choose any one vertex from the path \(P_{2}\). Then, it is straightforward to see that these \(k+1\) vertices form a dominating set for the graph \(G^{*}\). Moreover, if we remove the \(k+1\) vertices and the edges incident to them, we get a connected graph. Thus, the non-split domination number of \(G^{*}\) is \(k+1\) whenever \(G\) has a dominating set with cardinality \(k\).

Conversely, assume that \(k+1\) vertices form a non-split dominating set for the graph \(G^{*}\). Then, by definition, \(k+1\) vertices form a dominating set for \(G^{*}\), and hence the dominating set for the graph \(G\) has cardinality \(k\). Thus, \(G\) has a dominating set with \(k\) vertices whenever \(G^{*}\) has a non-split dominating set of order \(k+1\).
### Nonsplit Domination Cover Pebbling Number for the Middle Graph of Path
**Theorem 5**.: _The non-split domination cover pebbling number of the middle graph of the path is, \(\psi_{ns}(M(P_{n}))=2^{n+1}-3\)._
Figure 1: Construction of \(G^{*}\) from \(G\)
Proof.: Let \(V(P_{n})=\{x_{1},x_{2},x_{3},\cdots,x_{n}\}\) be the vertices of the path \(P_{n}\) and \(y_{1},y_{2},\cdots,y_{n-1}\) be the added vertices corresponding to the edges \(e_{1},e_{2},\cdots,e_{n-1}\) of \(P_{n}\) to obtain \(M(P_{n})\). Then the total number of vertices is \(2n-1\) and the number of edges is \(3n-4\). Consider the non-split dominating set \(D=\{y_{i}|1\leq i\leq n-1\}\). The vertices of \(D\) dominate all the vertices in \(M(P_{n})\) and the induced subgraph \(<V-D>\) is connected. Placing \(2^{n+1}-4\) pebbles on any one of the end vertices, we cannot place a pebble on every vertex of the non-split dominating set. Hence, the non-split domination cover pebbling number of \(M(P_{n})\) is \(\psi_{ns}(M(P_{n}))\geq 2^{n+1}-3\).
Considering any configuration \(C\) of \(2^{n+1}-3\) pebbles on the vertices of \(M(P_{n})\), we now prove that this number is sufficient to cover the non-split dominating set.
**Case 1:** Let the source vertex be \(x_{1}\).

If we place all the pebbles on \(x_{1}\), then to cover \(x_{n}\) we need \(2^{n}\) pebbles. The next furthest vertex of the non-split dominating set is \(x_{n-1}\), which requires \(2^{n-1}\) pebbles. Likewise, we obtain the series \(2^{n}+2^{n-1}+2^{n-2}+\cdots+2^{2}+2^{0}\) to cover all the vertices of the non-split dominating set. Thus, we used \(\sum_{k=2}^{n}2^{k}+1=2^{n+1}-3\) pebbles.
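For completeness, the closed form used here follows from the finite geometric series (added for illustration):

\[\sum_{k=2}^{n}2^{k}=2^{n+1}-2^{2}=2^{n+1}-4,\qquad\text{so}\qquad\sum_{k=2}^{n}2^{k}+1=2^{n+1}-3.\]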
**Case 2:** Let the source vertex be \(x_{l}\), \(1<l\leq n-1\).
Let us place all the pebbles at \(x_{l}\). If \(l<\lfloor\frac{n}{2}\rfloor\), then we need \(2^{(n-l)+1}-3\) pebbles to cover \(x_{l}\) to \(x_{n}\) of the non-split dominating set. Now to cover the remaining non-split dominating set we need \(2^{l+1}-4\). Thus, we used \(2^{(n-l)+1}+2^{l+1}-7<2^{n+1}-3\) pebbles.
**Case 3:** Let the source vertex be \(y_{1}\).

Let the source vertex be either \(y_{1}\) or \(y_{n-1}\). Suppose all the pebbles are placed on \(y_{1}\). Then we need \(4\) pebbles to cover the adjacent vertices \(x_{1}\) and \(x_{2}\). Then, to cover the remaining vertices of the non-split dominating set, we need \(2^{n}-4\) pebbles. Thus, we used \(2^{n}<2^{n+1}-3\) pebbles. Hence, \(\psi_{ns}(M(P_{n}))=2^{n+1}-3\).
### Nonsplit Domination Cover Pebbling Number for The Middle Graph of Cycle Graphs
**Theorem 6**.: _The non-split domination cover pebbling number of the middle graph of the cycle is,_
\[\psi_{ns}(M(C_{n}))=\begin{cases}2\sum_{k=0}^{\lceil\frac{n}{2}\rceil}2^{k}-8&n \ is\ odd\\ \\ \sum_{k=1}^{\lfloor\frac{n}{2}\rfloor+1}2^{k}-8+\sum_{k=1}^{\lfloor\frac{n}{2 }\rfloor}2^{k}&n\ is\ even.\end{cases}\]
Proof.: Let \(V(C_{n})=\{x_{1},x_{2},x_{3},\cdots,x_{n}\}\) and let \(y_{1},y_{2},\cdots,y_{n}\) be the inserted vertices corresponding to the edges \(e_{1},e_{2},\cdots,e_{n}\) of \(C_{n}\), to construct the middle graph of the cycle, \(M(C_{n})\). Then the total number of vertices is \(2n\) and the number of edges is \(3n\). Let the non-split dominating set be \(D=\{y_{i}\}\cup\{x_{1},x_{2},x_{3},\cdots,x_{j}\}\) where \(1\leq i,j\leq n\) and \(x_{j}\notin N[y_{i}]\). Thus, \(D\) dominates all the vertices of \(M(C_{n})\), and \(<V-D>\) is connected. The total number of vertices in \(D\) is \(n-1\).
**Case 1:** When \(n\) is odd.
Without loss of generality, let \(D=\{y_{n},x_{2},x_{3},\cdots,x_{n-1}\}\). Placing \(2\sum_{k=0}^{\lceil\frac{n}{2}\rceil}2^{k}-7\) pebbles on the source vertex \(x_{1}\), we cannot put one pebble on each of the vertices of \(D\). Hence, \(\psi_{ns}(M(C_{n}))\geq 2\sum_{k=0}^{\lceil\frac{n}{2}\rceil}2^{k}-8\), when \(n\) is odd.

Distributing \(2\sum_{k=0}^{\lceil\frac{n}{2}\rceil}2^{k}-8\) pebbles in any configuration \(C\), we can cover all the vertices of \(D\). Now we prove the sufficient condition for \(M(C_{n})\) when \(n\) is odd.
**Case 1.1:** Let the source vertex be \(x_{1}\).
Using **Theorem 5**, we can cover the vertices of the non-split dominating set \(D\) from \(x_{2}\) to \(x_{\lceil\frac{n}{2}\rceil}\). The total number of pebbles used is \(2^{\lceil\frac{n}{2}\rceil+1}-4\). Similarly, to cover the remaining non-split dominating vertices \(\{y_{n},x_{n-1},\cdots,x_{\lceil\frac{n}{2}\rceil+1}\}\) we use \(2^{\lceil\frac{n}{2}\rceil+1}-6\) pebbles. Hence we have spent in total \(2^{\lceil\frac{n}{2}\rceil+2}-10=2\sum_{k=0}^{\lceil\frac{n}{2}\rceil}2^{k}-8\) pebbles.

**Case 1.2:** Let the source vertex be any one of the vertices \(y_{i}\), \(1\leq i\leq n\).

Without loss of generality, let the source vertex be \(y_{n}\). Using **Theorem 5**, we can cover the vertices of \(D\) from \(x_{2}\) to \(x_{\lceil\frac{n}{2}\rceil}\). The total number of pebbles used is \(2^{\lceil\frac{n}{2}\rceil+1}-4\). Similarly, to cover the remaining non-split dominating vertices \(\{y_{n},x_{n-1},\cdots,x_{\lceil\frac{n}{2}\rceil+1}\}\) we use \(2^{\lceil\frac{n}{2}\rceil}-3\) pebbles. Hence we have spent in total \(3(2^{\lceil\frac{n}{2}\rceil})-7<2\sum_{k=0}^{\lceil\frac{n}{2}\rceil}2^{k}-8\) pebbles. Hence, \(\psi_{ns}(M(C_{n}))\leq 2\sum_{k=0}^{\lceil\frac{n}{2}\rceil}2^{k}-8\), when \(n\) is odd.
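As a quick consistency check (an elementary simplification, added for illustration), the bound for odd \(n\) matches the total used in Case 1.1:

\[2\sum_{k=0}^{\lceil\frac{n}{2}\rceil}2^{k}-8=2\left(2^{\lceil\frac{n}{2}\rceil+1}-1\right)-8=2^{\lceil\frac{n}{2}\rceil+2}-10.\]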
**Case 2:** When \(n\) is even.
Without loss of generality, let \(D=\{y_{n},x_{2},x_{3},\cdots,x_{n-1}\}\). Placing \(\sum_{k=1}^{\lfloor\frac{n}{2}\rfloor+1}2^{k}-7+\sum_{k=1}^{\lfloor\frac{n}{2}\rfloor}2^{k}\) pebbles on the source vertex \(x_{1}\), we cannot put one pebble on each of the vertices of \(D\). Hence, \(\psi_{ns}(M(C_{n}))\geq\sum_{k=1}^{\lfloor\frac{n}{2}\rfloor+1}2^{k}-8+\sum_{k=1}^{\lfloor\frac{n}{2}\rfloor}2^{k}\), when \(n\) is even.

Distributing \(\sum_{k=1}^{\lfloor\frac{n}{2}\rfloor+1}2^{k}-8+\sum_{k=1}^{\lfloor\frac{n}{2}\rfloor}2^{k}\) pebbles in any configuration \(C\), we can cover all the vertices of \(D\). Now we prove the sufficient condition for \(M(C_{n})\) when \(n\) is even.
**Case 2.1:** Let the source vertex be \(x_{1}\).
Using **Theorem 5**, we can cover the vertices of the non-split dominating set \(D\) from \(x_{2}\) to \(x_{\lceil\frac{n}{2}\rceil+1}\). The total number of pebbles used is \(2^{\lceil\frac{n}{2}\rceil+2}-4\). Similarly, to cover the remaining non-split dominating vertices \(\{y_{n},x_{n-1},\cdots,x_{\lceil\frac{n}{2}\rceil+2}\}\) we use \(2^{\lceil\frac{n}{2}\rceil+1}-6\) pebbles. Hence we have spent in total \(3(2^{\lceil\frac{n}{2}\rceil+1})-10=\sum_{k=1}^{\lfloor\frac{n}{2}\rfloor+1}2^{k}-8+\sum_{k=1}^{\lfloor\frac{n}{2}\rfloor}2^{k}\) pebbles.

**Case 2.2:** Let the source vertex be any one of the vertices \(y_{i}\), \(1\leq i\leq n\).

Without loss of generality, let the source vertex be \(y_{n}\). Using **Theorem 5**, we can cover the vertices of \(D\) from \(x_{2}\) to \(x_{\lceil\frac{n}{2}\rceil+1}\). The total number of pebbles used is \(2^{\lceil\frac{n}{2}\rceil+2}-4\). Similarly, to cover the remaining non-split dominating vertices \(\{y_{n},x_{n-1},\cdots,x_{\lceil\frac{n}{2}\rceil+2}\}\) we use \(2^{\lceil\frac{n}{2}\rceil}-3\) pebbles. Hence we have spent in total \(5(2^{\lceil\frac{n}{2}\rceil})-7<\sum_{k=1}^{\lfloor\frac{n}{2}\rfloor+1}2^{k}-8+\sum_{k=1}^{\lfloor\frac{n}{2}\rfloor}2^{k}\) pebbles. Hence, \(\psi_{ns}(M(C_{n}))\leq\sum_{k=1}^{\lfloor\frac{n}{2}\rfloor+1}2^{k}-8+\sum_{k=1}^{\lfloor\frac{n}{2}\rfloor}2^{k}\), when \(n\) is even. Hence proved.
### Non-split Domination Cover Pebbling Number for The Middle Graph of Wheel Graphs
**Theorem 7**.: _The non-split domination cover pebbling number of the middle graph of the wheel is,_
\[\psi_{ns}(M(W_{n}))=\begin{cases}(\lfloor\frac{n}{2}\rfloor)8+6&n \ is\ odd\\ (\lfloor\frac{n}{2}\rfloor)8+10&n\ is\ even.\end{cases}\]
Proof.: Let \(V(W_{n})=\{x_{0},x_{1},x_{2},x_{3},\cdots,x_{n-1}\}\) be the vertices of \(W_{n}\), where \(x_{0}\) is the center vertex of \(W_{n}\). Let \(y_{1},y_{2},\cdots,y_{n-1}\) be the inserted vertices corresponding to the edges \(x_{0}x_{i}\) where \(1\leq i\leq n-1\), and let \(a_{1},a_{2},\cdots,a_{n-2}\) be the inserted vertices on the edges \(x_{j}x_{j+1}\) where \(1\leq j\leq n-2\), while \(a_{n-1}\) lies on \(x_{n-1}x_{1}\). Also, the total number of edges in \(M(W_{n})\) is \(3(n-1)+1\).
**Case 1:** When \(n\) is even. Here \(d(y_{i})=n+3\) (\(1\leq i\leq n-1\))
**Subcase 1.1:** Suppose \(i\) is odd.
Consider the set \(D=\{y_{i},a_{i+1},a_{i+3},\cdots,a_{n-2},a_{i-2},a_{i-4},\cdots,a_{1}\}\) where \(a_{j}\notin N(y_{i})\) and \(j=1,3,\cdots,i-2,i+1,i+3,\cdots,n-1\), which is a minimum non-split dominating set of \(M(W_{n})\), and \(<V-D>\) is connected. If we place \((\lfloor\frac{n}{2}\rfloor)8+9\) pebbles on \(x_{1}\), we cannot cover all the vertices of \(D\). Hence we require \((\lfloor\frac{n}{2}\rfloor)8+10\) pebbles to cover \(D\). Hence \(\psi_{ns}(M(W_{n}))\geq(\lfloor\frac{n}{2}\rfloor)8+10\), when \(n\) is even.

Distributing \((\lfloor\frac{n}{2}\rfloor)8+10\) pebbles in any configuration \(C\), we can cover all the vertices of \(D\). Now we prove the sufficient condition for \(M(W_{n})\) when \(n\) is even.
**Subcase 1.2:** Let the source vertex be \(a_{k}\), where \(a_{k}\notin D\).

Without loss of generality, let the source vertex be \(a_{1}\). Since there will be \(\lfloor\frac{n}{2}\rfloor-3\) vertices at distance \(3\) and \(2\) vertices at distance \(2\) from the source vertex, we need \((\lfloor\frac{n}{2}\rfloor)8+6<(\lfloor\frac{n}{2}\rfloor)8+10\) pebbles.

**Subcase 1.3:** Let the source vertex be \(x_{0}\).

Let us place all the pebbles on \(x_{0}\). Since all the dominating vertices are at distance \(2\) from the center except \(y_{i}\), we need \(2+(\lfloor\frac{n}{2}\rfloor)4\) pebbles to cover the non-split dominating set. Thus, \(\psi_{ns}(M(W_{n}))=(\lfloor\frac{n}{2}\rfloor)8+10\), when \(n\) is even and \(i\) is odd.
**Case 1.2:** When \(i\) is even.
Consider the dominating set \(D=\{y_{i},a_{i+1},a_{i+3},\cdots,a_{n-1},a_{i-2},a_{i-4},\cdots,a_{2}\}\) where \(a_{j}\notin N[y_{i}]\) and \(j=2,4,\cdots,i-2,i+1,i+3,\cdots,n-1\), which is the minimum dominating set, and \(<V-D>\) is connected. Now, to prove the non-split domination cover pebbling number of \(M(W_{n})\) when \(n\) is even and \(i\) is even, we can follow the same method as in **Case 1, Subcases 1.1, 1.2 and 1.3**.
**Case 2:** When \(n\) is odd.
Here \(d(y_{i})=n+3\) (\(1\leq i\leq n-1\))
**Subcase 2.1:** Suppose \(i\) is odd.
Consider the set \(D=\{y_{i},a_{i+1},a_{i+3},\cdots,a_{n-1},a_{i-2},a_{i-4},\cdots,a_{1}\}\) where \(a_{j}\notin N(y_{i})\) and \(j=1,3,\cdots,i-2,i+1,i+3,\cdots,n-1\), which is a minimum non-split dominating set of \(M(W_{n})\), and \(<V-D>\) is connected. If we place \((\lfloor\frac{n}{2}\rfloor)8+5\) pebbles on \(x_{1}\), we cannot cover all the vertices of \(D\). Hence we require \((\lfloor\frac{n}{2}\rfloor)8+6\) pebbles to cover \(D\). Hence \(\psi_{ns}(M(W_{n}))\geq(\lfloor\frac{n}{2}\rfloor)8+6\), when \(n\) is odd.

Distributing \((\lfloor\frac{n}{2}\rfloor)8+6\) pebbles in any configuration \(C\), we can cover all the vertices of \(D\). Now we prove the sufficient condition for \(M(W_{n})\) when \(n\) is odd.
**Subcase 2.2:** Let the source vertex be \(a_{k}\), where \(a_{k}\notin D\).

Without loss of generality, let the source vertex be \(x_{1}\). Since there will be \(\lfloor\frac{n}{2}\rfloor-2\) vertices at distance \(3\), one vertex at distance \(2\), and one vertex adjacent to the source vertex, we require \((\lfloor\frac{n}{2}\rfloor)8+6\) pebbles.
**Subcase 2.3:** Let the source vertex be \(x_{0}\).

Let us place all the pebbles on \(x_{0}\). Since all the dominating vertices are at distance \(2\) from the center except \(y_{i}\), we need \(2+(\lfloor\frac{n}{2}\rfloor)4\) pebbles to cover the non-split dominating set.

**Subcase 2.4:** Let the source vertex be any one of the vertices \(y_{i}\), where \(i\) is odd.

Let us place all the pebbles on \(y_{1}\). Since all the dominating vertices are at distance \(2\) from \(y_{1}\), we need \(1+(\lfloor\frac{n}{2}\rfloor)4\) pebbles to cover the non-split dominating set. Thus, \(\psi_{ns}(M(W_{n}))=(\lfloor\frac{n}{2}\rfloor)8+6\), when \(n\) is odd and \(i\) is odd.
**Case 2.2:** When \(i\) is even.
Consider the dominating set \(D=\{y_{i},a_{i+1},a_{i+3},\cdots,a_{n-1},a_{i-2},a_{i-4},\cdots,a_{2}\}\) where \(a_{j}\notin N[y_{i}]\) and \(j=2,4,\cdots,i-2,i+1,i+3,\cdots,n-1\), which is the minimum dominating set, and \(<V-D>\) is connected. Now, to prove the non-split domination cover pebbling number of \(M(W_{n})\) when \(n\) is odd and \(i\) is even, we can follow the same method as in **Case 2, Subcases 2.1, 2.2, 2.3 and 2.4**. Thus proved. Hence, the non-split domination cover pebbling number of \(M(W_{n})\) is \((\lfloor\frac{n}{2}\rfloor)8+6\) when \(n\) is odd.
### Non-split Domination Cover Pebbling Number for The Middle Graph of Fan Graphs
**Theorem 8**.: _The non-split domination cover pebbling number of the middle graph of the fan is,_
\(\psi_{ns}(M(F_{n}))=\begin{cases}(\lceil\frac{n}{2}\rceil-1)8+6&\quad n\ is\ odd\\ \\ (\lfloor\frac{n}{2}\rfloor-2)8+6&\quad n\ is\ even.\end{cases}\)
Proof.: It follows from Theorem 7.
arXiv:2307.05104v1 (2023-07-11), http://arxiv.org/abs/2307.05104v1

# A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI

Udo Schlegel, Daniel A. Keim
###### Abstract
Explainable Artificial Intelligence (XAI) has gained significant attention recently as the demand for transparency and interpretability of machine learning models has increased. In particular, XAI for time series data has become increasingly important in finance, healthcare, and climate science. However, evaluating the quality of explanations, such as attributions provided by XAI techniques, remains challenging. This paper provides an in-depth analysis of using perturbations to evaluate attributions extracted from time series models. A perturbation analysis involves systematically modifying the input data and evaluating the impact on the attributions generated by the XAI method. We apply this approach to several state-of-the-art XAI techniques and evaluate their performance on three time series classification datasets. Our results demonstrate that the perturbation analysis approach can effectively evaluate the quality of attributions and provide insights into the strengths and limitations of XAI techniques. Such an approach can guide the selection of XAI methods for time series data, e.g., focusing on return time rather than precision, and facilitate the development of more reliable and interpretable machine learning models for time series analysis.
Keywords: Explainable AI · XAI Evaluation · XAI for Time Series.
## 1 Introduction
Artificial intelligence (AI) has become an integral part of our daily lives, from the personalized advertisement we receive on social media to conversational AI (chatbots) answering questions of users and customers using deep neural networks. However, as the complexity of deep neural network models increases, so does the difficulty in understanding how they get to their decisions [7]. A lack of interpretability can lead to severe consequences in critical domains such as finance, healthcare, and transportation, including financial losses, medical errors, and even loss of life by providing wrong decisions if complex models are deployed [16]. One promising approach to addressing such issues is through the usage of explainable artificial intelligence (XAI), which seeks to provide insights into the inner workings of complex models and the factors that drive their decision-making [7]. One particular area of interest is time series data, which is
characterized by the sequential nature of its observations and the interdependencies between them, as more sensors generate a massive amount of data and more tasks are tackled by complex models [25].
In recent years, a growing body of research has focused on developing XAI techniques tailored explicitly for time series data [25]. These techniques often rely on the concept of attributions, which aim to identify the contributions of individual features and time points to the overall prediction made by a model [25]. By providing insights into which parts of the input data are most relevant to the output, attributions can help users understand the reasoning behind the model's decision-making process [18]. However, the evaluation of such attributions is not trivial [19]. To address the challenge of evaluating the quality of explanations for time series data, perturbation analysis has emerged as a promising evaluation technique [17, 22]. Perturbation analysis involves systematically modifying the input data and assessing the impact on the attributions generated by XAI methods [19]. By perturbing the input data, it is possible to evaluate the robustness of the explanations provided by XAI methods [25]. However, the effectiveness of perturbation analysis for evaluating the quality of attributions for time series data has not been extensively studied [19].
In this paper, we apply attribution techniques from various fields to a convolution neural network trained on time series classification data to evaluate and inspect the generated attributions in detail using perturbations, which involves systematically altering the input data and observing the effect on the model's output. We investigate the performance of attribution techniques compared to each other based on the perturbation analysis result and explore the perturbation changes based on these attributions to gain insights into the model. Through such an analysis, we can identify spurious correlations and shortcuts in the complex models and thus enable developers to potentially improve models by debugging datasets. We show that our convolution neural network trained on time series classification learned certain shortcuts to achieve state-of-the-art performances. Based on these experiments and results, we provide guidelines for the application of attribution techniques for time series classification and release our evaluation framework to investigate other attribution techniques.
Thus, we contribute: (1) an in-depth analysis of attribution techniques on time series classification for deep learning models using a perturbation analysis, (2) insights into convolution neural networks trained on time series based on the generated attributions, (3) guidelines and a framework for applying attribution techniques for time series models with a perturbation analysis card for reporting. We first look into related work, and then we introduce the perturbation analysis methodology and the experiment setup we use for our deep dive. Here we also propose perturbation analysis cards as a guideline to report the results of an evaluation. Next, we present our results and discuss the impact of our conclusions for attribution techniques applied to time series. Lastly, in future work, we motivate new measures for the evaluation of attributions on time series data.
Results and the source code of the experiments are available online at:
[https://github.com/visual-xai-for-time-series/time-series-xai-perturbation-analysis](https://github.com/visual-xai-for-time-series/time-series-xai-perturbation-analysis)
## 2 Related Work
Explainable AI (XAI) has accelerated in the last few years through several surveys [7, 1] and techniques, e.g., LIME [15] and SHAP [12]. Attributions are especially prevalent in the image domain, as heatmap explanations are easy for users to understand [10]. Some theoretical works dig into the background of why models learn certain shortcuts to solve tasks [6] and thus enable further explanations for decisions. However, evaluating explanations is still a slowly growing area with limited work toward benchmarking different techniques against each other [8]. Further, shortcuts or spurious correlations are not trivial to detect in explanations and need an in-depth analysis to be identified [29].
Some works started to collect possible evaluation techniques [14] and categorized these into five measurements: mental model, explanation usefulness and satisfaction, user trust and reliance, human-AI task performance, and computational measures. The first few measures focus on evaluating with or in cooperation with humans and are thus heavily influenced by human factors. The computational measures exclude human factors and focus on purely automatic evaluation of explanations. In this work, we inspect the computational measures and, more precisely, the explainer fidelity of the attribution technique on the model to show how the attributions fit the model.
XAI for time series classification (TSC), on the one hand, incorporates previously proposed explanation techniques from other fields and introduces the time dependence into some of the techniques [25]. Theissler et al. [25] categorize possible explanations for TSC into time point, subsequence, and instance explanations. All these operate on a different level of the time series and are thus unique in their explanation and evaluation. In this work, we tackle time point explanations and, to be more precise, attributions to highlight and explore shortcuts and spurious correlations. As Schlegel et al. [17] and others [22, 13, 25] demonstrated, attributions techniques such as LIME [15], SHAP [12], LRP [4], GradCAM [21], Integrated Gradients [24], and more [20], produce working attributions on time series to extract explanations from a model. However, in most cases, only purely computational measures are applied to the attributions, which are not further inspected, e.g., Mercier et al. [13] to gain deeper insights.
Schlegel et al. [17] started by using a perturbation analysis on attribution techniques applied to TSC using various perturbation functions to highlight that techniques for images and text are also working on time series. Based on such preliminary experiments, they enhanced their approach with additional perturbation functions to showcase deeper insights into the fidelity evaluation [19]. Mercier et al. [13] enhanced these perturbations with further measures from the image domain, such as (in)fidelity and sensitivity [27]. Simic et al. [22] extended the proposed methods by Schlegel et al. [19] with out-of-distribution detecting functions and gave guidelines for the selection of attribution techniques and the size of the window for the perturbation. Turbe et al. [26] enhance previous approaches with another metric to improve the comparison of the attribution techniques and the ability to demonstrate their fidelity towards the model. However, all of these approaches do not look into the attributions and the produced
values to investigate further into the techniques behind the attributions and the models. Thus, an in-depth analysis is needed to investigate the attributions generated for time series classification models.
## 3 Perturbation Analysis
We use the perturbation analysis approach by Schlegel et al. [19] to generate attributions, verify, and compare them using the proposed perturbation function strategies [17, 22]. We extend the comparison by calculating the Euclidean and cosine distance between the original and the perturbed time series instance and the Euclidean and cosine distance between the original attributions of the dataset and the attributions of the perturbed instances of the dataset. Collecting these results can help us have a more in-depth analysis of the attribution techniques and reveal relations between attributions and models. However, we first need to establish the general perturbation analysis.
Let \(D=(X,Y)\) be a time series classification dataset with \(X\) as the time series samples and \(Y\) as the time series labels. \(X=\{ts_{1},ts_{2},...,ts_{n}\}\) contains \(n\) time series samples with \(m\) time points for each sample represented as \(ts=\{tp_{1},tp_{2},...,tp_{m}\}\), where \(tp_{i}\) is the value for the \(i\)th time point of \(ts\). \(Y=\{l_{1},l_{2},...,l_{n}\}\) contains \(n\) labels, one for each time series sample. Let \(M(ts,\theta)=y^{\prime}\) be a time series classification model which predicts a label \(y^{\prime}\) based on a time series input \(ts\) and has the parameters \(\theta\). Let \(A(X,M,\theta)\) be an XAI technique for generating attributions for the time series data. The original attributions for \(X\) generated by \(A\) can be represented as \(A(X,M,\theta)=\{a_{1},a_{2},...,a_{m}\}\), where \(a_{i}\) is the attribution score for the \(i\)th time point of \(X\), \(M\) the time series classification model for which the attributions are calculated, and \(\theta\) the parameters of the attribution technique.
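As an illustration of how such attributions can be generated in practice, the following sketch uses Integrated Gradients from Captum on a stand-in 1D-CNN (the model architecture, data, and chosen technique are placeholders; any of the attribution techniques mentioned above can be substituted):

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# A stand-in 1D-CNN time series classifier (not the architecture used in the paper).
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 2),
)
model.eval()

X_batch = torch.randn(8, 1, 100)       # 8 univariate series with m = 100 time points
y_batch = torch.randint(0, 2, (8,))    # labels l_1, ..., l_n

ig = IntegratedGradients(model)
attributions = ig.attribute(X_batch, target=y_batch)  # one score a_i per time point
```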
To perform perturbation analysis, we introduce a perturbation function \(g\) that modifies \(X\) in a controlled manner. Specifically, we define a perturbed time series dataset \(X^{\prime}\) as:
\[X^{\prime}=g(X,A,\xi) \tag{1}\]
Our perturbation function \(g\) modifies the dataset \(X\) based on the attributions \(A\) and a threshold \(\xi\). The value used for the modification can be changed and depends on the selected function \(g\), e.g., exchanging values with zero. The threshold \(\xi\) can be set by hand or by some other function, e.g., using the 90th percentile of the attributions, so that time points whose attributions (e.g., the \(i\)th element \(a_{i}\)) lie above the threshold are modified to the previously set value, e.g., zero.
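A minimal sketch of such a perturbation function \(g\) (our illustration; \(X\) and \(A\) are assumed to be NumPy arrays of shape \((n,m)\), \(\xi\) is derived from a per-sample percentile, and the replacement value is zero):

```python
import numpy as np

def perturb(X, A, percentile=90, value=0.0):
    """Sketch of g(X, A, xi): set time points whose attribution exceeds the
    per-sample percentile threshold xi to a fixed value (here zero)."""
    X_prime = X.copy()
    for i in range(len(X)):
        xi = np.percentile(A[i], percentile)
        X_prime[i, A[i] > xi] = value
    return X_prime
```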
The original \(X\) and the perturbed dataset \(X^{\prime}\) get predicted with the model \(M\) to get \(M(X)=Y^{\prime}\) and \(M(X^{\prime})=Y^{\prime\prime}\). Based on Schlegel et al. [19], we incorporate a quality metric \(qm\), e.g., accuracy, to compare the performance of the model \(M\) with the original \(X\) and the perturbed dataset \(X^{\prime}\). For the time series classification, we assume that the \(qm\) decreases after the original data changes, and thus the labels are not fitting anymore [17]. We further assume
a waiting attribution technique decreases the performance more heavily as the most relevant parts of the input data get perturbed [8]. Thus, we assume:
\[qm(Y^{\prime},Y)\leq qm(Y^{\prime\prime},Y) \tag{2}\]
However, in some cases, the scores are very similar [17, 13], and a deeper investigation into the attributions is necessary to find similarities or dissimilarities in the relevances of the techniques. Thus, we do not only compare the quality metrics but also focus on the distances between the original \(X\) and the perturbed \(X^{\prime}\) datasets. We apply the Euclidean and cosine distances to the datasets as these are common distance functions for time series [2] to collect the changes of the perturbation function \(g\). We define the Euclidean distance as:
\[Euc(X,X^{\prime})=\sqrt{\sum_{i=1}^{n}(ts_{i}-ts^{\prime}_{i})^{2}} \tag{3}\]
where \(X=ts_{1},ts_{2},...,ts_{n}\) and \(X^{\prime}=ts^{\prime}_{1},ts^{\prime}_{2},...,ts^{\prime}_{n}\) are the two time series being compared. And we define the cosine distance as:
\[Cos(X,X^{\prime})=1-\frac{\sum_{i=1}^{n}ts_{i}\times ts^{\prime}_{i}}{\sqrt{ \sum_{i=1}^{n}ts^{2}_{i}}\times\sqrt{\sum_{i=1}^{n}ts^{\prime 2}_{i}}} \tag{4}\]
with \(X\) and \(X^{\prime}\) as above. These distances enable us to compare the attributions not only on a performance level but also on a raw level, directly on the data.
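As a minimal sketch, equations (3) and (4) translate directly into NumPy; the same two functions can equally be applied to attribution vectors instead of raw samples.

```python
import numpy as np

def euclidean_distance(ts, ts_perturbed):
    # Equation (3): square root of the summed squared differences
    return np.sqrt(np.sum((ts - ts_perturbed) ** 2))

def cosine_distance(ts, ts_perturbed):
    # Equation (4): one minus the cosine similarity of the two vectors
    num = np.sum(ts * ts_perturbed)
    denom = np.sqrt(np.sum(ts ** 2)) * np.sqrt(np.sum(ts_perturbed ** 2))
    return 1.0 - num / denom
```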
## 4 Experiments with Perturbation Analysis
For our analysis, we look into the time series whose predictions changed and those that did not change during the perturbation analysis. We especially want to understand the attribution distributions in order to investigate which attribution techniques produce fitting explanations, with high fidelity [14], for the models. Fitting explanations, under our assumptions, are produced by techniques that change the prediction of more samples in a perturbation analysis [17, 19, 13]. A general measure and metric for evaluating explanations is essential, but another factor is the attributions themselves, as these can also hide information or present spurious correlations [29]; e.g., the question arises of how the attributions are distributed across the techniques.
Figure 1: Starting from a time series \(ts\), we use a selected attribution technique \(A\) to get attributions. Based on the attributions, we use a selected perturbation function \(g\) to set highly relevant time points, e.g., to zero. Further information in Schlegel et al. [19].
To answer such questions and others, we use the changes from \(Y^{\prime}\) (the prediction on the original data) to \(Y^{\prime\prime}\) (the prediction on the perturbed data) to look into the samples that changed their prediction and those that did not. We especially want to know when a change in the prediction happens, e.g., after how many removals based on the attributions and the perturbation strategy. Thus, we look at the prediction changes from one class to the other; e.g., in a binary classification with the assumption from above, a prediction changing from one class to the other demonstrates that the attributions highlight time points relevant to the model. We therefore slowly perturb more and more values of the time series until there is a change in prediction. We use the percentile values (99, 98, ..., 25) as thresholds for the perturbation and record when the change happens. Further, we collect the skewness of the attributions of the changed and unchanged predictions. Such an exploration of the attribution distributions enables us to inspect patterns inside the attributions generated by the different techniques, and the distributions of the skewness provide another factor for comparing the attribution techniques. Lastly, we collect not only the skewness but also the Euclidean and cosine distances of the original sample to the perturbed instance, separately for changed and unchanged predictions. All these collected statistics and properties can help us identify findings, insights, and correlations in the attribution techniques, as we collect as much data from our perturbation analysis as possible.
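A minimal sketch of this incremental procedure; `predict` is a placeholder for a callable returning the class label of a single sample, and `perturb` can be a perturbation function such as the one sketched earlier.

```python
def first_flip(predict, ts, attributions, perturb, percentiles=range(99, 24, -1)):
    """Lower the attribution-percentile threshold until the prediction flips.

    Returns the percentile at which the class change happened together with
    the perturbed sample, or (None, None) if the prediction never changes.
    """
    original_label = predict(ts)
    for p in percentiles:
        ts_perturbed = perturb(ts, attributions, percentile=p)
        if predict(ts_perturbed) != original_label:
            return p, ts_perturbed
    return None, None
```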
### Summary
Overall, we have the following dimensions to experiment on: a) attribution techniques and b) perturbation strategies. We collect and analyze the following properties: a) the mean of the raw data samples of the changed and unchanged predictions; b) the skewness of the attributions of the changed and unchanged predictions after the perturbation; c) the new class distributions of the changed and unchanged predictions after the perturbation; d) the number of relevant attributions that need to be perturbed to push an instance to another class prediction. Figure 2 presents the collected properties using a perturbation analysis card with various statistics aggregated and visualized for easier analysis. We created these perturbation cards for all the experiments.
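A sketch of how these properties could be gathered into one JSON-serializable record per experiment; the field names are illustrative and not the exact schema of the JSON data in our repository.

```python
import json
import numpy as np
from scipy.stats import skew

def card_record(dataset, technique, strategy, changed, unchanged, flip_percentiles, attributions):
    """Aggregate the statistics reported on one perturbation analysis card."""
    return {
        "dataset": dataset,
        "attribution_technique": technique,
        "perturbation_strategy": strategy,
        "n_changed": len(changed),
        "n_unchanged": len(unchanged),
        "raw_mean_changed": np.mean(changed, axis=0).tolist() if len(changed) else [],
        "raw_mean_unchanged": np.mean(unchanged, axis=0).tolist() if len(unchanged) else [],
        "attribution_skew": [float(skew(a)) for a in attributions],
        "flip_percentiles": [int(p) for p in flip_percentiles if p is not None],
    }

# json.dumps(card_record("FordA", "Saliency", "zero", changed, unchanged, flips, attrs))
```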
### Hypotheses
After establishing our experiment setup, we generated hypotheses about the results of the experiment on the basis of other work. Based on the preliminary experiments by Schlegel et al. [17], we hypothesize that SHAP or SHAP derivatives will lead to the best results for the TSC task. Based on the results of Simic et al. [22], we further look into the other attributions and double-check the results of Schlegel et al. [17] as well as the SHAP results, even though SHAP results are less consistent [22]. Also based on Simic et al. [22], we look into different perturbation strategies, as we hypothesize that using one strategy is not enough to find a suitable attribution technique. Based on Geirhos et al. [6], we also want to check whether there are patterns in the data that the attributions mark as relevant, in order to find shortcuts the model learned to classify the data, e.g., certain maximum or minimum values separating one class from the other in a binary classification problem.
### Perturbation Analysis Card
The perturbation analysis card is our proposed approach to reporting the results of our perturbation analysis strategies and techniques. Figure 2 shows such a perturbation analysis card with meta information (S), bar charts about the switch from one class to another (C), bar charts for the distribution of distances (D), statistics about the attributions (A), and a line plot visualization about the raw time series data (R).
Starting at the top, Figure 2 (S), a short introduction presents a description of the dataset, the attribution technique, and the perturbation strategy. Right under the description, a stacked vertical bar chart gives a quick impression of how good or bad the overall perturbation was: a good perturbation with an attribution technique shows mostly blue in this bar chart, while a bad perturbation shows mostly orange. Next to it, the exact numbers of changed and unchanged samples are given, so that comparable numbers complement the quick glance when looking at other cards.
Figure 2 (C) gives a detailed view of the perturbation and the changes it causes. The bar chart on the left visualizes the classes of the changed and unchanged predictions. For the changed predictions, the visualization also presents the classes before and after the perturbation. Such a visualization can help to identify spurious correlations, as a model could, for instance, learn one feature of one class for the prediction. The bar chart on the right of (C) shows the number of perturbed values needed to change the prediction. The fewer changes needed, the better the attribution identifies relevant values.
In Figure 2 (D), the histograms of the distances between the perturbed and the original instances are shown. The Euclidean distances on top of (D) and the cosine distances on the bottom can help to find clusters of needed changes for the perturbation of the samples by revealing a trend towards a certain distance range. The distances can also be used to compare the individual attribution techniques against each other. A smaller distance range, together with a lower number of perturbed values, indicates a more focused technique.
Figure 2 (A) visualizes further statistical information about the attributions. The plot on top of (A) shows the skewness of the attributions of the samples of the dataset. On the bottom, the means of the attributions are visualized. Through these, a general trend of the changed and unchanged samples and their attributions can be seen. Outliers are especially interesting as a starting point for deeper analysis with other methods and visualizations.
Lastly, in Figure 2 (R), the time-point-wise means of the changed and unchanged samples can be inspected. For every time point in the time series, the mean over the respective subset (changed or unchanged) of the whole dataset is calculated and visualized. Thus, in the case of, e.g., FordA, with a standardization of the dataset, the sample means slowly converge to zero. The visualization makes it possible to spot large differences between the changed and unchanged samples.
## 5 Results and Discussion
Our current experiment setup revolves around an in-depth analysis of the attributions of seven attribution techniques (Saliency, IntegratedGradients, DeepLift, Occlusion, GradientShap, DeepLiftShap, KernelShap) based on the implementations in Captum 1. We incorporate 16 perturbation strategies: two based on Simic et al. [22], six based on Schlegel et al. [19], and eight extensions we describe later. We implemented nine single time point perturbations (zero, mean, inverse, dataset mean, dataset max, dataset min, OOD high, OOD low, random between min and max) and seven subsequence perturbations (zero, subsequence mean, dataset mean, inverse, OOD high, OOD low, random between min and max). The subsequence length is fixed to ten percent of the length of the data.
Footnote 1: Captum is a Pytorch-based XAI module for Python: [https://captum.ai/](https://captum.ai/)
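A minimal sketch (not our exact experiment code) of how a subset of these attributions can be generated with Captum for a trained classifier; `model`, `batch` (shape `(n, 1, length)`), and `target` are placeholders, and the remaining techniques follow the same `.attribute()` pattern.

```python
from captum.attr import Saliency, IntegratedGradients, Occlusion, KernelShap

def compute_attributions(model, batch, target):
    """Attributions per technique; GradientShap, DeepLift, and DeepLiftShap
    are used analogously and omitted here only for brevity."""
    model.eval()
    return {
        "Saliency": Saliency(model).attribute(batch, target=target),
        "IntegratedGradients": IntegratedGradients(model).attribute(batch, target=target),
        "Occlusion": Occlusion(model).attribute(
            batch, sliding_window_shapes=(1, 10), target=target),
        "KernelShap": KernelShap(model).attribute(
            batch, target=target, n_samples=200),
    }

# attrs = compute_attributions(trained_model, x_batch, y_batch)
```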
We focus on the UCR benchmark datasets [5] and take three of the most extensive datasets (FordA, FordB, ElectricDevices) to investigate data characteristics. However, our approach can be applied to any time series classification dataset. The FordA and FordB are sensor data with a length of 500 and provide
Figure 2: Perturbation analysis card for the FordA dataset: the top description (S) contains general statistics for the card, starting with the dataset, the attribution technique, and the perturbation strategy. Beneath are the statistics for the amount of changed and unchanged sample predictions encoded as a progress bar and with numbers. The plots in (C) give a more detailed insight into the class changes after the perturbation. The left plot presents the amount of changed and unchanged samples for each class and also visualizes the class change for changed predictions. The right plot shows the number of perturbed values when a change in prediction happens. In (D), the distances of the original to the perturbed instance are shown. The top presents the Euclidean distance between the pairs, and the bottom shows the cosine distance. (A) presents the skew (top) and mean (bottom) of the attributions for the changed and unchanged sample predictions. In (R), the mean of every value at a specific time point is visualized for the changed and unchanged samples.
an anomaly detection binary classification task. FordA has 3601 samples in the training set and 1320 in the test set. FordB has 3636 samples in the training set and 810 in the test set. The ElectricDevices dataset is shorter, with only 96 time points. However, the dataset has 8926 training samples and 7711 test samples.
We investigate two convolutional neural network architectures. The first architecture tackles the FordA and FordB datasets. The model consists of three 1D convolutional layers with a kernel size of three, increasing the channels from one to 10 to 50 to 100. A max-pooling of three after the convolutional layers decreases the size again, and ReLU activation functions activate the neurons. Afterward, a fully connected layer with 100 neurons and a ReLU activation further processes the feature maps from the convolutional layers. Lastly, another fully connected layer with two neurons and a softmax activation on top classifies the data. We train the model with a batch size of 120 and the Adam optimizer [11]. The second architecture is trained on the ElectricDevices data and introduces a residual connection from the input to the fully connected layers. For this residual addition, the original input gets downsampled using a 1D convolution with kernel size seven right before the fully connected layers.
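A minimal PyTorch sketch consistent with this description; the placement of pooling after every convolution and the use of `nn.LazyLinear` to infer the flattened feature size are choices of this sketch rather than a verbatim copy of our training code, and `nn.CrossEntropyLoss` would typically be applied to the pre-softmax logits during training.

```python
import torch
import torch.nn as nn

class FordCNN(nn.Module):
    """Sketch of the convolutional classifier described for FordA/FordB."""

    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 10, kernel_size=3), nn.MaxPool1d(3), nn.ReLU(),
            nn.Conv1d(10, 50, kernel_size=3), nn.MaxPool1d(3), nn.ReLU(),
            nn.Conv1d(50, 100, kernel_size=3), nn.MaxPool1d(3), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(100), nn.ReLU(),       # input size inferred from the feature maps
            nn.Linear(100, n_classes), nn.Softmax(dim=1),
        )

    def forward(self, x):                        # x: (batch, 1, length)
        return self.classifier(self.features(x))

# probabilities = FordCNN()(torch.randn(8, 1, 500))   # -> shape (8, 2)
```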
We train our models on the datasets for 500 epochs using the cross-entropy loss. Our models achieve an accuracy of 0.99 (training) and 0.89 (test) on FordA, 0.99 (training) and 0.70 (test) on FordB, and 0.94 (training) and 0.64 (test) on ElectricDevices, in all cases demonstrating overfitting to the training data. As Ismail Fawaz et al. [9] showed, even our simple models are not too far from the state of the art achieved with other, more sophisticated models. However, as we want to analyze our models, we look into the attributions of the training data, and thus the overfitting is a welcome bonus for investigating spurious correlations and shortcuts [6].
**Results -** We start with the FordA dataset; next, we present the FordB results, and lastly, the ElectricDevices dataset. FordA demonstrates interesting results regarding the attribution techniques and the perturbation strategies. The best working strategy is setting the perturbed value to an out-of-distribution low [26] on a subsequence [19], as shown in Figure 6. In particular, the Saliency method [23] achieves the best result regarding the flipping of predictions by flipping 2088 of 3601 samples, as also seen in Figure 2. However, the KernelSHAP method [12] comes close with 2049 flips, just 39 fewer. Also, as seen in the plot on the right of Figure 2, the out-of-distribution low perturbation strategy changes the class quite late, only after a lot of perturbed values. Such an effect is unwanted in many cases, as the model is effectively subjected to an adversarial attack outside the distribution of the original data. In some cases, we can use such a method to test the model on data shifts, as, for example, the attributions can shift heavily. However, for our focus on the model itself, such an adversarial attack is interesting but does not reveal the internal decision-making for the dataset we are interested in.
However, we also notice that the perturbation strategy heavily influences which method works best. If we switch, for example, to a perturbation to zero, Occlusion [28] becomes the winner in Figure 6. Such a change in the best working technique demonstrates that a perturbation analysis with just one strategy is not enough to compare attribution techniques; we need multiple strategies to decide on one technique. However, we can also take a deeper look into the attributions themselves. Focusing on the skewness of the attributions and their distributions, as seen in Figure 3, we can already see some trends toward techniques that enable an easier inspection of the method and of how well the method performs in the perturbation analysis. KernelSHAP in Figure 3, in particular, demonstrates a clear pattern with two nearly non-overlapping distributions. Such a distribution can help us to decide on one attribution technique.
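The skewness values underlying Figure 3 can be obtained with a few lines; this sketch assumes `attrs` is an array with one attribution vector per row and `flipped` a boolean mask marking samples whose prediction changed.

```python
from scipy.stats import skew

def skewness_by_outcome(attrs, flipped):
    """Per-sample attribution skewness, split into changed and unchanged groups."""
    per_sample = skew(attrs, axis=1)              # one skewness value per sample
    return per_sample[flipped], per_sample[~flipped]

# skew_changed, skew_unchanged = skewness_by_outcome(attrs, flipped)
# Plotting both groups as histograms yields distributions like those in Figure 3.
```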
The model for the FordB dataset is a bit worse than for the FordA dataset, which leads, in most cases, to a worse performance in the perturbation analysis [22]. However, again KernelSHAP and Saliency generate well-working attributions for the change in prediction with the perturbation-to-zero strategy. For this dataset, KernelSHAP changes 3472 of 3636 samples, as seen in Figure 7. Especially interesting is the distribution of the skewness of the attributions. A more detailed analysis of these two peaks could lead to further insights into the model and the attributions, but such an analysis needs another visualization, e.g., projecting the attributions into a scatter plot. However, if we further inspect the corresponding perturbation analysis card in Figure 4, we can see in the plot on the right that the change only happens after a lot of values are removed from the original sample. Such a result is also observable in the other perturbation
Figure 3: Skewness distribution of the attribution techniques on the FordA dataset. Clear differences in the distributions of the attributions are visible. Further, the differences between changed and unchanged sample predictions and their attributions in the distributions are observable and show two different patterns in the techniques.
cards for the other techniques. In our study, we have identified a possible shortcut [6] that our model has learned from the training data. We speculate that the shortcut consists of a certain range or of specific time points which need to lie in a certain range of values to be classified as one class or the other, and if we destroy this property, we change the prediction. So, our model learns a static version or range for one class and classifies everything else into the other class. Such a model has more in common with an outlier detector than with the desired classifier. Thus, we identified a shortcut that allows the model to classify the data without using all available features [6].
The ElectricDevices dataset is harder for the model, as we no longer have a binary classification problem but seven classes the model needs to separate. As before, not even the state-of-the-art performance is particularly high [9], which leads to worse attributions and a more diverse perturbation analysis result. Again, KernelSHAP performs best, with a change of 8906 of 8926 samples when the values are perturbed to the global max, as seen in the perturbation card of Figure 5. However, IntegratedGradients [24] also works well, but only with another perturbation strategy, namely changing the perturbed value to the global mean of the dataset. The dataset demonstrates quite nicely that the attribution techniques need different perturbation strategies to reveal the model's internal decision-making. Some of the techniques focus on different features the model learned, as the ranking of the best-performing attribution techniques based on the perturbation analysis changes from strategy to strategy for this dataset. Additionally, when we delve into the labels of the changed and
Figure 4: Perturbation analysis card for the FordB dataset. The visualizations show a very distinct pattern. For the means of the raw samples (R), the few unchanged samples compose quite a diverse pattern, while the changed ones go to zero based on their standardization. However, the plot on the top (C) with the orange marker presents a pattern we do not want to have in a perturbation analysis, as it shows that we need to perturb a lot of data to achieve a change in prediction.
unchanged predictions, we notice that the various attribution methods alter different labels in the perturbation. For example, KernelSHAP seems to modify every class besides seven, whereas Saliency influences classes other than five and six. However, unlike for the previous FordB dataset, we do not see an unwanted perturbation pattern in the visualization of the amount of perturbed values. This suggests that the attribution techniques are more suitable for this dataset and model than for the FordB model.
**Summary -** As we have seen in our results (Figure 6, Figure 7, Figure 8), KernelSHAP performs quite well but takes a lot of time to calculate the attributions. Due to the sampling-based approach of KernelSHAP, the attributions are not deterministic and can vary between computational runs. Further, in many cases, Saliency (or the vanilla gradient multiplied by the input) works surprisingly well and is only sometimes improved by additional extensions on top, such as IntegratedGradients. Thus, Saliency provides a promising basis for future experiments and for techniques built on top of it. If the attribution (explanation) is time-critical, Saliency is hence a well-suited method; if it is not time-critical, KernelSHAP provides the best-working attributions based on our experiments. Our collected data holds even more insights and findings accessible through the proposed perturbation analysis cards, which we look forward to analyzing and publishing with the code. The published source code can be used as a framework to experiment on more datasets, and the perturbation analysis cards can be used to report the results. The GitHub repository can be explored for more perturbation analysis cards and the JSON data of the collected results of our experiments.
Figure 5: Perturbation analysis card for the ElectricDevices dataset. The skewness distribution is quite interesting as it nearly presents a Gaussian distribution, with the mean being more sparse and quite focused on only three large pillars.
## 6 Conclusion and Future Work
After reviewing related work, we presented an in-depth analysis of perturbation strategies for attributions on time series. With this analysis, we dug into a CNN trained on time series classification to investigate attributions, perturbation strategies, and shortcuts the network learned. We presented our results in perturbation analysis cards to enable users to analyze the results in detail by inspecting the aggregated data in visualizations and to compare them easily with other techniques based on the provided cards. We identified SHAP as a suitable method to generate working attributions in all datasets we experimented on. Other, gradient-based methods also work quite well but do not perform as well as, e.g., KernelSHAP. However, depending on the perturbation strategy, the best working attribution technique can change quite drastically for some techniques. We advise not focusing on a single strategy but using multiple strategies, aggregating their results, and looking at the distribution of the skewness to enhance comparability. In our experiments, we also found a shortcut or spurious correlation for the FordB dataset, through which our model learned to classify one class and to classify everything else as the other class.
**Future work -** We want to extend the experiment to other attribution techniques and compare the results with the already collected experiment results. We also want to compare the attributions in more detail by, e.g., aggregating the attributions and comparing them on a higher level to find matching patterns. Different trends and subsequences are further patterns to analyze to gain knowledge about the attribution techniques. With such an approach, we also want to include the _local Lipschitz estimate_ [3] to rank consistent attributions higher. Lastly, we want to extend the _Perturbation Effect Size_ [22] and use our gained knowledge to combine perturbation strategies, switching predictions, and distances into a measure that evaluates attributions on time series classification models more robustly and fully automatically, making it easier for users to decide which attributions to use as explanations. We also want to further enhance our perturbation analysis cards to be more easily readable and more comfortable for non-experts, so they can gain insights at a single glance.
#### Acknowledgements
This work has been partially supported by the Federal Ministry of Education and Research (BMBF) in VIKING (13N16242).
|
2308.15592 | Non-local Interactions are Essential Elements for Dark Matter Halo
Stability: A Cross-Model Study | This paper introduces a comprehensive methodology for examining the stability
of dark matter (DM) halos, emphasizing the necessity for non-local
inter-particle interactions, whether they are fundamental or effective in
nature, to maintain halo stability. We highlight the inadequacy of vanilla cold
collision-less DM models in forecasting a stable halo without considering a
"non-local" interaction in the halo's effective free energy, which could
potentially arise from factors like baryonic feedback, self-interactions, or
the intrinsic quantum characteristics of dark particles. The stability
prerequisite necessitates significant effective interactions between any two
points within the halo, regardless of their distance from the center. The
methodology proposed herein offers a systematic framework to scrutinize the
stability of various DM models and refine their parameter spaces. We deduce
that DM halos within a model, where the deviation from the standard cold
collision-less framework is confined to regions near the halo center, are
unlikely to exhibit stability in their outer sectors. In our study, we
demonstrate that the issue of instability within DM halos cannot be addressed
adequately using perturbative quantum effects. This issue is less pronounced
for fermionic DM but suffers from a higher degree of severity when considering
bosonic DM. We find that halos made of bosons with notable quantum effects have
sharp edges, while those made of fermions show more diffuse boundaries
extending toward infinity. We also explore the broadest form of the effective
free-energy around a chosen mass profile. | Ahmad Borzou | 2023-08-29T19:38:10Z | http://arxiv.org/abs/2308.15592v1 | # Non-local Interactions are Essential Elements for Dark Matter Halo Stability: A Cross-Model Study
###### Abstract
This paper introduces a comprehensive methodology for examining the stability of dark matter (DM) halos, emphasizing the necessity for non-local inter-particle interactions, whether they are fundamental or effective in nature, to maintain halo stability. We highlight the inadequacy of vanilla cold collision-less DM models in forecasting a stable halo without considering a "non-local" interaction in the halo's effective free energy, which could potentially arise from factors like baryonic feedback, self-interactions, or the intrinsic quantum characteristics of dark particles. The stability prerequisite necessitates significant effective interactions between any two points within the halo, regardless of their distance from the center. The methodology proposed herein offers a systematic framework to scrutinize the stability of various DM models and refine their parameter spaces. We deduce that DM halos within a model, where the deviation from the standard cold collision-less framework is confined to regions near the halo center, are unlikely to exhibit stability in their outer sectors. In our study, we demonstrate that the issue of instability within DM halos cannot be addressed adequately using perturbative quantum effects. This issue is less pronounced for fermionic DM but suffers from a higher degree of severity when considering bosonic DM. We find that halos made of bosons with notable quantum effects have sharp edges, while those made of fermions show more diffuse boundaries extending toward infinity. To present the potentials of the cross-model approach, we explore the broadest form of the effective free-energy around a chosen mass profile. Next, as a show case study, we employ a model where the deviation from the standard cold collision-less DM model is represented by a two-body interaction in the effective free-energy to show how to use observations to investigate universal classes of DM models.
## I Introduction
The \(\Lambda\)-CDM cosmological model, which characterizes dark matter (DM) as a cold and collision-less gas interacting with visible matter solely via gravity, effectively accounts for several observed phenomena. These phenomena include the cosmic microwave background's (CMB) correlation function [1], supernova redshift surveys [2], and the clustering of galaxies [3]. Furthermore, the model aligns with the observation that smaller structures develop earlier in cosmic history [1; 4]. Apart from a recent discrepancy in the Hubble constant value [5], the \(\Lambda\)-CDM model successfully interprets large cosmic scales.
DM simulations that start from early universe initial density perturbations predict the formation of DM halos with dense and steep profiles towards the center, creating a cusp. However, this prediction contradicts observations from rotation curves of stars near galactic centers and gases in the outskirts, as well as stellar velocity dispersion data. These observations suggest a more constant mass density, or a core, at the center of galaxies [6; 7; 8; 9; 10].
This inconsistency between DM simulations and observational data can be reconciled by considering baryonic effects on dark matter. However, unlike DM N-body simulations, these effects are not fundamentally modeled. Instead, aspects like star formation and viscosity are directly integrated into the simulations using numerous free parameters, which are then adjusted to align with observations [11]. The wide parameter selection flexibility in the current versions is suboptimal, although incorporating visible matter effects remains critical.
While N-body simulations may explain observed DM mass profiles by incorporating baryonic feedback, their numerous free parameters potentially allow the explanation of non-physical phenomena as well. To evaluate N-body simulations and other alternate DM paradigms, it is necessary to examine their predictions against observations not used in parameter tuning. One such observation is the stability of halos.
We aim to initiate a systematic investigation of the stability of DM halos in this article. The stability of DM halos can be probed by predicting the position dependent halo stability in a given DM scenario and comparing it with observational data. Furthermore, studying the stability of DM halos can enable cross-model comparisons and classifications of DM scenarios into universal categories, which can facilitate a "coarse-grained-assessment" of DM models in light of the current observational limitations and the abundance of theoretical possibilities.
A key requirement for long-term stability is the satisfaction of the Vlasov-Poisson equation, which ensures that the net force on DM in the halo is zero, leading to dynamical stability [12; 13]. However, fluctuations in an N-body system are inevitable. If the system is not at the minimum of effective free energy, these fluctuations can rapidly increase, destabilizing the halo. This means that dynamical stability does not necessarily guarantee "thermodynamic" stability, and a dynamically stable halo could still experience gravothermal catastrophe or collapse [14; 15]. Nevertheless, "thermodynamic" stability does imply dynamic stability [16].
To investigate "thermodynamic" stability, Landau damping and violent relaxation should be considered to find a solution minimizing the N-body system's free energy. While this approach is robust, it increases the complexity of the calculations and depends on the DM model. Existing studies on fermionic DM "thermodynamic" stability can be found in [17; 18; 19]. For some DM models, more attention has been devoted to Vlasov-Poisson dynamical stability, with less focus on long-term stability.
Given the complexity and model-dependence of the current approach to "thermodynamic" stability, this article employs Landau's field-theoretic method to investigate long-term stability states of halos. This approach avoids dealing with the specifics of the DM model, instead focusing on the collective system's symmetries. Intriguingly, a broad spectrum of seemingly different models fall under one universality class, complying with the same statistical equations determined almost exclusively by the symmetries of the N-body system, barring highly entangled quantum systems where topology plays a role. This approach allows us to investigate a wide array of DM models and their free parameters within a single study. The unique aspect of our approach is its capacity to transfer results between different DM models.
In this article, we demonstrate that no self-gravitating classical model of DM can predict a stable halo unless a "non-local" interaction between mass densities is included in the effective free energy of the halo. This interaction could be collective, for example due to baryonic feedback or self-interactions, or resulting from the quantum nature of dark particles, whether fermionic or bosonic. This stability condition demands substantial interactions between any two locations in the halo, even if both are far from the center. Therefore, if a DM model's deviation from the standard cold, collision-less scenario is confined to regions near the halo's center, the halo will still be unstable. Consequently, models like the cold, collision-less DM with baryonic feedback, where visible matter is located at the center, are unlikely to predict pressure-supported stable outer halo regions.
To showcase the potential of the field theoretic approach to studying DM halos, we expand the most general form of effective free-energy around an arbitrary mass profile. We then choose a model whose deviation from the standard cold, collision-less model is a two-body interaction in the effective free-energy. We demonstrate that even with a "non-local" interaction, the halo may still be unstable for certain forms of the interaction. Moreover, using the showcase, we demonstrate that fluctuations of DM mass density around an empirically determined mass profile are contingent on the universal class of DM scenarios. As such, it becomes possible to restrict their parameter space. Importantly, any reduction in the parameter space of a particular universal class extends to all DM models within that category.
The structure of the article is as follows. In section II, we establish the effective free energy of a simple, cold, collision-less DM halo, demonstrating its inherent instability. In section III, we introduce DM interactions into the effective free-energy equation, elucidating the necessity of non-local interactions to stabilize the halo. Proceeding to section IV, we formulate the effective free-energy of a halo incorporating non-interactive DM quantum effects. The equivalence between this quantum model and a classic interactive DM model, in the context of effective free-energy, is demonstrated. Further, we explore a particular model where quantum effects can be analyzed via the perturbation method. In section V, we design the most encompassing model of DM perturbations, illustrating its potential to investigate universal classes of DM perturbation models. Finally, in section VI, we draw our conclusions.
## II Non-interactive classic DM
We begin this section by deriving the effective free energy functional of mass density for a halo of cold collisionless DM, neglecting the effects of baryons. We consider a small volume element \(\Delta V\) at a position \(x\) in the halo. The number of particles in this volume element is \(N(x)=\frac{\rho(x)}{m}\Delta V\), where \(\rho(x)\) is the mass density and \(m\) is the particle mass. We assume that the system is in a steady state, so that the probability of the volume's state is equal to
\[\mathcal{P}(x)=\exp\left(\beta\left(\mu(x)-m\phi(x)\right)\sum_{ \varepsilon}n_{\varepsilon}-\beta\sum_{\varepsilon}n_{\varepsilon}\varepsilon \right), \tag{1}\]
where \(\mu(x)\) is the chemical potential at the volume, \(n_{\varepsilon}\) is the occupation number of \(\varepsilon\) energy level, \(\beta\) is the inverse of the temperature of DM, and the gravitational potential is
\[\phi(r)=-4\pi G\Big{(}\frac{1}{r}\int_{0}^{r}\rho(r^{\prime})r^{ \prime 2}dr^{\prime}+\int_{r}^{R}\rho(r^{\prime})r^{\prime}dr^{\prime}\Big{)}. \tag{2}\]
Due to the absence of correlations between \(\Delta V\) volumes in the vanilla collision-less model, the probability of the state of the halo can be found by multiplying the probabilities of all \(\Delta V\) volumes. After a sum over all the possible halo states with their respective weights, the halo's partition function reads
\[\mathcal{Z} = \prod_{x}\Bigg{(}\sum_{N(x)}\exp\Big{(} \tag{3}\] \[\beta N\left(\mu-m\phi\right)-N\operatorname{Ln}\left(\frac{ \lambda^{3}\rho}{m}\right)+N\Big{)}\Bigg{)},\]
where \(\lambda=\sqrt{(2\pi m)^{-1}\beta}\), the summation over \(\varepsilon\) has been carried out, and both Stirling's approximation and \(\rho=mN/\delta V\) have been utilized. In this equation, we have used Landau's approach of rearranging the summation over the halo states to keep only the summation over the parameter of interest. If we choose the infinitesimal volume to be arbitrarily small, we can approximate the integral as \(\int d^{3}x\simeq\sum_{x}\delta V\) and, after defining \(D\rho\equiv\prod_{x}\int d\rho(x)\), the halo's partition function can be expressed as follows
\[\mathcal{Z} = \int D\rho\exp\Bigg{(}\] \[-\int d^{3}x\,\frac{\rho}{m}\Big{[}\text{Ln}\{\frac{\rho}{m} \lambda^{3}\}-1-\beta\mu+\beta m\phi\Big{]}\Bigg{)}.\]
Therefore, the effective free energy functional of DM mass density of the halo takes the following form [20; 21]
\[\beta F_{\text{\tiny Simple}} = \int d^{3}x\frac{\rho}{m}\Big{[}\text{Ln}\{\frac{\rho}{m}\lambda ^{3}\}-1-\beta\mu+\beta m\phi\Big{]}. \tag{5}\]
A DM halo in a state of stability is positioned at the nadir of the effective free energy curve. This premise implies the initial constraint on any proposed DM model of halos. Specifically, it requires that the first functional derivative of the system is zero at the halo's mean density
\[\frac{\delta\beta F_{\text{\tiny Simple}}}{\delta\rho(q)}\Big{|}_{\langle\rho \rangle}\simeq 0. \tag{6}\]
In an attempt to calculate the functional derivative, we make use of the following equations as expounded in appendix A
\[\frac{\delta}{\delta\rho(\vec{r}\,)}=\frac{\delta}{4\pi r^{2} \delta\rho(r)},\] \[\frac{\delta\phi(\vec{r}_{1})}{\delta\rho(\vec{r}_{2})}=-\frac{G }{r_{{}_{>}}},\] \[\int d^{3}x\,\rho(\vec{x})\frac{\delta\phi(\vec{x})}{\delta\rho( \vec{x}\,^{\prime})}=\phi(\vec{x}\,^{\prime}), \tag{7}\]
where \(r_{{}_{>}}\) denotes the larger of \(r_{1}\) and \(r_{2}\) and we presume a spherical symmetry.
By applying \(\frac{\delta}{\delta\rho(q)}\) to the right side of equation (5) and utilizing (7) to solve the integrals, the first functional derivative of the effective free-energy can be written as follows
\[\frac{\delta\beta F_{\text{\tiny Simple}}}{\delta\rho(q)}=\frac{4\pi q^{2}}{ m}\Big{[}\text{Ln}\{\frac{\rho}{m}\lambda^{3}\}-\beta\mu+2\beta m\phi\Big{]}. \tag{8}\]
The chemical potential \(\mu(r)\), which remains unestablished by observations, can be tailored such that the first functional derivative of the effective free energy is null. Therefore, presuming that DM follows a simple system statistics and using the semi-equality mentioned above, we can express the chemical potential as follows
\[\mu_{\text{\tiny Simple}}\simeq\beta^{-1}\text{Ln}\{\frac{\rho}{m}\lambda^{ 3}\}+2m\phi. \tag{9}\]
As has been previously demonstrated, the second functional derivative of either entropy or free energy is necessary to analyze the stability of gravitational systems [22; 15]. Given that our study reorganizes the sum in the partition function to define the effective free energy, the second functional derivative of the latter should analogously shed light on the stability of halos. Indeed, equation (9) represents a necessary but not sufficient condition for a stable halo. The rationale behind this is that the first functional derivative is also null at a maximum or extremum of an effective free-energy. In order for the halo to be at the minimum of the effective free-energy and thus be stable, the second functional derivative of the effective free-energy needs to be positive at any pair of arbitrary locations within the halo. However, the second functional derivative of the effective free-energy in equation (5) does not meet this requirement. It is expressed as follows
\[\frac{\delta^{2}\beta F_{\text{\tiny Simple}}}{\delta\rho(\vec{r}_{2})\delta \rho(\vec{r}_{1})}=\frac{\delta(r_{2}-r_{1})}{4\pi r_{2}^{2}m\rho}-\frac{2 \beta G}{r_{{}_{>}}}. \tag{10}\]
It is evident that the second functional derivative of the simple cold collision-less DM model, as described in equation (10), is negative when \(r_{1}\neq r_{2}\). This implies that equation (9) corresponds to the maximum, rather than a minimum, of the effective free-energy as described in equation (5). This suggests that the halo, while momentarily static, will eventually either condense towards a higher mass density profile or explode, thereby disappearing. Given that the effective free-energy equates to the negative logarithm of the probability of the mass density profile, the direction of evolution is random if a halo's chemical potential corresponds to equation (9). We can demonstrate that if fluctuations cause the halo's chemical potential to exceed the value given in equation (9), the halo's mass density escalates indefinitely towards higher values. Conversely, if the halo's chemical potential falls below the value given in equation (9), the halo's mass density diminishes indefinitely towards lower mass density.
## III Interactive classic DM
This section aims to delve into the interactions between DM particles that may rectify the instability previously noted in the simple cold collision-less DM model. Specifically, we seek interactions that ensure the second functional derivative of the effective free-energy remains positive for any selected pairs of locations.
In general, the inter-particle interactions in the Hamiltonian can originate from either collective or fundamental forces. The interaction Hamiltonian can be represented as follows
\[H_{I}=\frac{1}{2!}\sum_{ij}U_{2ij}+\frac{1}{3!}\sum_{ijk}U_{ijk}+\cdots, \tag{11}\]
where the indices denote all the DM particles within the halo. Applying the identity \(\sum_{i}=\int d^{3}x\,\frac{\rho}{m}\), we can reformulate the equation in a continuum form
\[H_{I} = \frac{1}{2!}\int d^{3}x_{1}d^{3}x_{2}U_{2}(\vec{x}_{1},\vec{x}_{2}) \rho(\vec{x}_{1})\rho(\vec{x}_{2}) \tag{12}\] \[+ \frac{1}{3!}\int d^{3}x_{1}d^{3}x_{2}d^{3}x_{3}U_{3}(\vec{x}_{1}, \vec{x}_{2},\vec{x}_{3})\rho(\vec{x}_{1})\rho(\vec{x}_{2})\rho(\vec{x}_{3})\] \[+ \cdots\,.\]
In this scenario, the partition function for the halo in the presence of classical interactions is given as
\[\mathcal{Z} = \sum_{N}\int d^{3N}q\exp\left(-\beta\left(\mu-m\phi\right)N-\beta H_{I}\right) \tag{13}\] \[\times\int d^{3N}p\exp\left(-\beta H_{\mbox{\tiny Simple}}\right),\]
Here, the factorization is feasible because \(H_{\mbox{\tiny Simple}}\) depends solely on particle momenta, whereas \(H_{I}\) and \(\phi\) are functions of particle positions. Given that \(H_{I}\) can be entirely expressed in terms of \(\rho(x)\) and considering that
\[\rho(\vec{x})\equiv m\sum_{i=1}^{N}\delta(\vec{x}-\vec{q}_{i})\]
remains a function of the particle positions \(\vec{q}_{i}\), the enumeration of energy states (\(\sum_{N}\int d^{3N}q\)) equates to that given by \(\int D\rho\).
Upon resummation, the partition function can be traditionally written as [20]
\[\mathcal{Z}=\int D\,\rho\exp\Bigg{(}-\beta\Big{(}F_{\mbox{\tiny Simple}}+F_{I}\Big{)}\Bigg{)}. \tag{14}\]
The effective free energy here is partitioned into the simple term and the interaction term. In its most generic form, the latter can be expanded to
\[F_{I}=\sum_{n=2}^{\infty}\frac{1}{n!}\int\left(\prod_{a=1}^{n}d^{3}x_{a}\rho( \vec{x}_{a})\right)U_{n}(\vec{x}_{1}\cdots\vec{x}_{n}). \tag{15}\]
The modification of the effective free-energy alters equation (6), which leads to the effective chemical potential, under the assumption of halo steadiness, being
\[\mu\simeq\mu_{\mbox{\tiny Simple}}+\frac{\delta\beta F_{I}}{\delta\rho(\vec{ x})}\Big{|}_{\langle\rho\rangle}. \tag{16}\]
Additionally, the second derivative of the effective free-energy becomes
\[\frac{\delta^{2}\beta F}{\delta\rho(\vec{r}_{2})\delta\rho(\vec{ r}_{1})} = \frac{\delta(r_{2}-r_{1})}{4\pi r_{2}^{2}m\rho}-\frac{2\beta G}{r _{>}} \tag{17}\] \[+ \beta U_{2}(\vec{x}_{1},\vec{x}_{2})+\cdots\,.\]
From the above, it's clear that by merely adjusting the two-body interaction, we can make the second functional derivative of the effective free-energy positive, irrespective of the chosen \(\vec{x}_{1}\) and \(\vec{x}_{2}\). This approach can potentially explain the long-term stability of halos. However, the two-body interaction, \(U_{2}(\vec{x}_{1},\vec{x}_{2})\), necessary for long-term stability must be non-local due to the existence of \(2\beta G/r_{>}\) and cannot be proportional to the Dirac delta function. This means that the interaction should remain positive and non-zero when the interacting particles' positions vary. Given that gravity, already accounted for, is the only known force capable of operating over galactic distances, \(U_{2}(\vec{x}_{1},\vec{x}_{2})\) must either have an unusual nature, such as emerging effectively from quantum effects, or be collectively produced by other phenomena like baryonic feedback.
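As an illustration outside the original derivation, the sign structure of equations (10) and (17) can be checked numerically by discretizing the second functional derivative on a radial grid; the profile, constants, and units below are arbitrary placeholders, and \(U_{2}=2G/r_{{}_{>}}\) is the particular non-local choice that exactly cancels the gravitational term.

```python
import numpy as np

# Radial grid and a placeholder mass profile (arbitrary units)
r = np.linspace(0.1, 1.0, 50)
dr = r[1] - r[0]
rho = 1.0 / (r * (1.0 + r) ** 2)           # illustrative NFW-like shape
beta = G = m = 1.0

r_greater = np.maximum.outer(r, r)          # r_> for every pair (r_1, r_2)

# Discretized eq. (10): delta-function term on the diagonal minus the gravity term
H_simple = np.diag(1.0 / (4 * np.pi * r**2 * m * rho * dr)) - 2 * beta * G / r_greater

# Eq. (17) with the non-local two-body choice U_2(x_1, x_2) = 2 G / r_>
H_interacting = H_simple + beta * (2 * G / r_greater)

print(np.linalg.eigvalsh(H_simple).min())       # negative: an unstable direction exists
print(np.linalg.eigvalsh(H_interacting).min())  # positive: the perturbed halo is stable
```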
Since most baryons in galaxies are located at the center, an intriguing research direction could involve exploring whether baryonic feedback can generate an effective two-body interaction that remains non-zero even between two points that are both distanced from the center.
## IV Quantum DM
Considering DM models with appreciable quantum effects, we examine the alterations to the partition function of a non-interacting DM system. The formal partition function of such a system is expressed as
\[\mathcal{Z}=\int d^{3}x_{1}\cdots d^{3}x_{N}\sum_{E,N}e^{-\beta(E-\mu N)}| \Psi_{E}(\vec{x}_{1}\cdots\vec{x}_{N})|^{2}, \tag{18}\]
with \(\Psi\) symbolizing the quantum state of the halo, which is detailed as
\[\Psi_{E}(\vec{x}_{1}\cdots\vec{x}_{N})=\left(N!\right)^{-\frac{1}{2}}\sum_{p} p\left[u_{\epsilon_{1}}(\vec{x}_{1})\cdots u_{\epsilon_{N}}(\vec{x}_{N})\right]. \tag{19}\]
Here, \(p\) signifies the permutation operator, with \(\sum_{p}\) denoting a sum over all possible permutations, and \(u_{\epsilon_{i}}(\vec{x}_{i})\) is the wave function of the \(i^{\rm th}\) particle, satisfying the Schrödinger equation.
In scenarios where DM particles are non-interacting, the total energy of the halo equates to the aggregation of the energies of the individual particles, represented as \(E=\sum_{i=1}^{N}\epsilon_{i}\). Consequently, the partition function can be rewritten as
\[\mathcal{Z} = \int d^{3}x_{1}\cdots d^{3}x_{N}\sum_{N}e^{\beta\mu N}\frac{1}{N! }\sum_{p}\Bigg{[} \tag{20}\] \[\left(\sum_{\epsilon_{1}}u_{\epsilon_{1}}^{*}(p\vec{r}_{1})u_{ \epsilon_{1}}(\vec{r}_{1})e^{-\beta\epsilon_{1}}\right)\] \[\times\cdots\times\] \[\left(\sum_{\epsilon_{N}}u_{\epsilon_{N}}^{*}(p\vec{r}_{N})u_{ \epsilon_{N}}(\vec{r}_{N})e^{-\beta\epsilon_{N}}\right)\Bigg{]}.\]
After introducing
\[f\left(p\vec{r_{i}},\,\vec{r}_{i}|\beta\right)\equiv\frac{\lambda^{3}}{V}\sum_{ \epsilon}u_{\epsilon}^{*}(p\vec{r_{i}})u_{\epsilon}(\vec{r}_{i})e^{-\beta \epsilon}, \tag{21}\]
and applying the properties of natural logarithm and Stirling's approximation of the logarithm of \(N!\), the partition function assumes the following form
\[\mathcal{Z} = \int d^{3}x_{1}\cdots d^{3}x_{N}\sum_{N}e^{\beta\mu N}\exp\Bigg{(}-N\mathrm{Ln}\left(\frac{\lambda^{3}N}{V}\right)+N+\mathrm{Ln}\Big{(}\sum_{p}\left[f\left(p\vec{r}_{1},\vec{r}_{1}|\beta\right)\cdots f\left(p\vec{r}_{N},\vec{r}_{N}|\beta\right)\right]\Big{)}\Bigg{)}. \tag{22}\]
Using the demonstration in [21], the resultant partition function can be expressed as a combination of the typical cold collision-less DM equation (5) and a corrective term due to quantum effects:
\[F=F_{{}_{\mathrm{Simple}}}+F_{\mathrm{Q.M.}}\] \[\beta F_{\mathrm{Q.M.}}=-\beta\mathrm{Ln}\Bigg{(}\sum_{p}\left[f \left(p\vec{r}_{1},\vec{r}_{1}|\beta\right)\cdots f\left(p\vec{r}_{N},\vec{r} _{N}|\beta\right)\right]\Bigg{)}. \tag{23}\]
Upon evaluating this effective free-energy and comparing it with equation (15), it becomes clear that, in this context, a quantum description of DM is equivalent to a classical model of DM that includes specific types of interactions. A similar equivalence of statistical models for non-gravitational quantum and classical interactive systems was introduced by Uhlenbeck and Gropper in the 1930s [23].
**When \(f\left(\vec{r},\vec{r}\right)=1\) & \(f\left(\vec{r}_{1},\vec{r}_{2}\right)\ll 1\) for large \(|\vec{r}_{1}-\vec{r}_{2}|\):**
In this subsection, we consider a DM halo that exhibits weak entanglement. In this case, the function \(f\left(\vec{r}_{1},\vec{r}_{2}\right)\), which characterizes the degree of quantum effects, only holds significant value when the locations \(\vec{r}_{1}\) and \(\vec{r}_{2}\) are close. Given this scenario, we can express the quantum modification to the effective free-energy, as depicted in equation (23), in an expanded form
\[F_{\mathrm{Q.M.}} = -\mathrm{Ln}\Bigg{(}1\pm\frac{1}{2}\sum_{ij}|f\left(\vec{r}_{i}, \vec{r}_{j}\right)|^{2}+\ldots\Bigg{)}, \tag{24}\] \[\simeq \mp\frac{1}{2}\sum_{ij}|f\left(\vec{r}_{i},\vec{r}_{j}\right)|^{2},\]
where each term inside the logarithm represents the number of permutations, and we have neglected higher-order terms. In the terms with a double sign, the upper and lower signs correspond to bosons and fermions, respectively. Upon employing the relation \(\sum_{i}=\int d^{3}x\rho/m\), the above equation can be transformed as
\[F_{\mathrm{Q.M.}}=\frac{\mp 1}{2m^{2}}\int d^{3}x_{1}d^{3}x_{2}|f \left(\vec{x}_{1},\vec{x}_{2}\right)|^{2}\rho(\vec{x}_{1})\rho(\vec{x}_{2}). \tag{25}\]
Consequently, the additional term in the second derivative of the effective free-energy assumes the form
\[\frac{\delta^{2}\beta F}{\delta\rho(\vec{r}_{2})\delta\rho(\vec{r }_{1})}=\frac{\delta(\vec{r}_{2}-\vec{r}_{1})}{m\rho}-\frac{2\beta G}{r_{{}_{> }}}\mp\frac{1}{m^{2}}|f\left(\vec{x}_{1},\vec{x}_{2}\right)|^{2}. \tag{26}\]
From the above discussion, we infer that in a DM halo where quantum effects are not strong, a bosonic DM tends to exacerbate the instability problem by rendering the second derivative of the effective free energy more negative. On the other hand, a fermionic DM has the potential to alleviate the instability for closely situated location pairs. Nevertheless, in the event that \(\vec{x}_{1}\) and \(\vec{x}_{2}\) are considerably distant, the corrective term loses significance, and the halo reverts to an unstable state.
By employing equation (26), we can deduce that halos composed of bosons, where quantum effects are substantial, exhibit sharply defined edges. In contrast, halos consisting of fermions with considerable quantum influences feature edges that are less sharply delineated and extend more diffusely toward infinity. The underlying reason for this behavior is that as we venture further into the halo's outer regions, we eventually reach a distance where quantum effects can be adequately addressed using perturbation methods. Given that the value of \(r_{{}_{>}}\) is relatively large at these distances, the influence of gravitational instability becomes minimal. Nonetheless, the instability inherent to bosons, as indicated in equation (26), persists. This confines the bosonic halo to regions where quantum effects cannot be treated perturbatively and sharply cuts the outer region.
## V Most general model of DM perturbations
The enduring mystery surrounding the nature of DM does not preclude us from leveraging the observed (at least quasi-)stability of DM halos to our advantage. For one, stability constraints imply that when we expand the effective free-energy of any given DM model around a mass profile \(\rho_{{}_{O}}\), as established by experiments, only the leading terms significantly contribute. Furthermore, these expansion coefficients are determinable based on the symmetries of the density perturbations around \(\rho_{{}_{O}}\) in the halo.
Drawing a parallel to other statistical systems [24], it can be suggested that the symmetry of these fluctuations, not
the specific principles of the DM model, dictates the effective free-energy of mass density perturbations in the halo. Consequently, a broad and seemingly disparate collection of DM models may actually fall under the same universality class, and hence, predict similar fluctuations around the chosen background mass density \(\rho_{{}_{O}}\).
To gain deeper insights, we undertake an expansion of the most general effective free-energy of a DM halo around \(\rho_{{}_{O}}\), as given by
\[\beta F[\varphi]=\sum_{n=1}^{\infty}\int d^{3}x_{1}\cdots d^{3}x_{n}\,c^{(n)} \varphi_{1}\cdots\varphi_{n}, \tag{27}\]
where
\[c^{(n)}(\vec{x}_{1},\cdots,\vec{x}_{n})\equiv\frac{\delta^{n} \beta F}{\delta\rho(\vec{x}_{1})\cdots\rho(\vec{x}_{n})}\Big{|}_{\rho_{{}_{O} }}\rho_{{}_{O}}(\vec{x}_{1})\cdots\rho_{{}_{O}}(\vec{x}_{n}),\] \[\varphi_{i}\equiv\frac{\rho(\vec{x}_{i})-\rho_{{}_{O}}(\vec{x}_{ i})}{\rho_{{}_{O}}(\vec{x}_{i})}, \tag{28}\]
where a constant is absorbed by the normalization factor. It's important to note that the first functional derivative of the effective free-energy at \(\rho_{{}_{O}}\) isn't set to zero given that the choice of \(\rho_{{}_{O}}\) isn't necessarily equal to the average mass density \(\langle\rho\rangle\), and retains a degree of arbitrariness.
Having established this, we can proceed to study potential predictions of DM models for mass density fluctuations \(\varphi(x)\) by systematically varying the \(c^{(n)}\) coefficients. This essentially amounts to transitioning from one universality class to another. Observational data can then be used to evaluate these classes of DM models based on the predictions they make. For instance, the n-body correlation between mass densities, i.e. \(\langle\varphi(\vec{x}_{1})\cdots\varphi(\vec{x}_{n})\rangle\), across the halo is directly linked to the selection of \(c^{(n)}\) and can be tested using observations.
In our previous work, we demonstrated how two-body correlations \(\langle\varphi(\vec{x}_{1})\varphi(\vec{x}_{2})\rangle\), as derived from observations, can help in refining the parameter space of DM models [21]. Continuing along this vein, we show here how the average \(\langle\varphi(\vec{x})\rangle\) - constructible in a similar manner from observational data - along with stability constraints can aid in constraining the parameters of DM models.
### A DM Model with a Two-Body Interaction: A Showcase
In the present study, we strive to illustrate a straightforward extension to the classic model of cold, collisionless DM. This extension incorporates two-body interactions into the effective free-energy of the dark matter halo
\[F=F_{{}_{\rm simple}}+\frac{1}{2}\int d^{3}x_{1}d^{3}x_{2}\,\rho( \vec{x}_{1})\rho(\vec{x}_{2})U_{2}(\vec{x}_{1},\vec{x}_{2}). \tag{29}\]
Once we have the complete form of \(F\), equation (28) can be utilized to compute the coefficients that describe the mass density fluctuation effective free-energy represented in equation (27). The coefficients are given by the following expressions
\[c^{(1)}=\frac{\rho_{{}_{O}}(\vec{x})}{m}\Biggl{(}{\rm Ln}\left( \frac{\rho_{{}_{O}}(\vec{x})}{m}\lambda^{3}\right)-\beta\mu(\vec{x})+2\beta m \phi\Biggr{)}\] \[+\int d^{3}x^{\prime}U_{2}(\vec{x},\vec{x}\,^{\prime})\rho_{{}_{O }}(\vec{x^{\prime}})\rho_{{}_{O}}(\vec{x}),\] \[c^{(2)}=\Biggl{(}\frac{\delta(\vec{x}_{2}-\vec{x}_{1})}{m\rho_{{} _{O}}(\vec{x}_{1})}-\frac{2\beta G}{r_{{}_{>}}}+\beta U_{2}(\vec{x}_{1},\vec{x }_{2})\Biggr{)}\rho_{{}_{O}}(\vec{x}_{1})\rho_{{}_{O}}(\vec{x}_{2}),\] \[c^{(3)}=-\frac{1}{m}\delta(\vec{x}_{2}-\vec{x}_{1})\delta(\vec {x}_{3}-\vec{x}_{2})\rho_{{}_{O}}(\vec{x}_{3}),\] \[c^{(4)}=\frac{2}{m}\delta(\vec{x}_{2}-\vec{x}_{1})\delta(\vec{ x}_{3}-\vec{x}_{2})\delta(\vec{x}_{4}-\vec{x}_{3})\rho_{{}_{O}}(\vec{x}_{4}). \tag{30}\]
A comprehensive analysis of possible \(U_{2}(\vec{x}_{1},\vec{x}_{2})\) interaction terms and their observational implications is beyond the scope of the present work. In this paper, we simplify the showcase by selecting the interaction such that \(-\frac{2\beta G}{r_{{}_{>}}}+\beta U_{2}(\vec{x}_{1},\vec{x}_{2})\simeq 0\). It seems unlikely that this interaction originates from a fundamental force. Instead, it may be more plausible to consider that the interaction is mediated by phenomena that become significant towards the center of a halo. Regardless of its origin, we are using this interaction as a toy model for the purpose of this presentation, offering a simplified way to explore and understand the system in question. Consequently, the effective free-energy of our chosen showcase model is
\[\beta F=\int d^{3}x\,\frac{\rho_{{}_{O}}(\vec{x})}{m}g[\varphi],\] \[g[\varphi]\equiv h(x)\varphi(x)+\varphi^{2}(x)-\varphi^{3}(x)+ 2\varphi^{4}(x),\] \[h(\vec{x})\equiv\frac{m}{\rho_{{}_{O}}(\vec{x})}c^{(1)},\]
where the term \(h(\vec{x})\) depends on the specific halo under investigation, as the chemical potential \(\mu\) is influenced by the halo's environment and other characteristics. To emphasize the potency of the statistical field theory approach, we assume that the chemical potential and the temperature of a halo are determined through observations and that these observations suggest \(h(r)=r^{-3}\). In other words, direct observational measurements of the chemical potential and the temperature remain a challenge at the moment; therefore, we have to make certain assumptions to account for this limitation. To that end, we have translated our assumptions about the chemical potential and the temperature into a specific form for the \(h(r)\) term. With this assumption, we can plot the functional \(g[\varphi]\) as a function of the mass density fluctuations \(\varphi\), as shown in figure 1. For the sake of argument, we would like to see how the mean of the mass density fluctuations can be utilized to evaluate the model defined in equation (29).
In figure 1, assuming that the radius of the halo is scaled to one, the curve labeled \(r=1\) belongs to the edge of the halo. It shows that at this distance from the center the minimum of \(g[\varphi]\) is at \(\varphi=0\), so the average of the fluctuations tends to zero. The curve representing a distance of 0.9 has its minimum slightly below zero. This means that if we measured the fluctuations at that distance, the average would have a net negative value. As is evident from the rest of the curves, this general trend persists: as we move toward the center, the average of the perturbations deviates further from zero, and the width of the well-shaped part of the curves increases as the distance to the center decreases. Finally, at a distance equal to 10% of the halo's radius, the curve labeled \(r=0.1\) shows no minimum, i.e. the width of the well has become substantially large, indicating the instability of the toy model's halo in that region. In other words, from figure 1 we note that the minimum of the functional \(g[\varphi]\), representing where \(\langle\varphi\rangle\) is located, shifts from a significantly negative value at the halo's center to zero in the outer region of the halo. Therefore, by measuring the DM mass density as a function of distance from the center of a halo and subtracting it from the selected mass profile, for example NFW or Burkert, we can construct a phenomenological \(\langle\varphi\rangle\) that can be used to test the predictions of a given class of DM models, consequently restricting the parameter space of all DM models belonging to that class.
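The procedure just described can be sketched numerically. Assuming the toy form \(g[\varphi]=h\varphi+\varphi^{2}-\varphi^{3}+2\varphi^{4}\) with the choice \(h(r)=r^{-3}\) made above, the minimal Python snippet below locates the minimum of \(g[\varphi]\) at several scaled radii, which serves as the model's prediction for \(\langle\varphi\rangle(r)\); the bounded search window for \(\varphi\) and the list of radii are illustrative assumptions, not values taken from the analysis.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def g(phi, h):
    """Toy functional g[phi] = h*phi + phi**2 - phi**3 + 2*phi**4 of the showcase model."""
    return h * phi + phi**2 - phi**3 + 2.0 * phi**4

def mean_fluctuation(r, phi_window=(-10.0, 1.0)):
    """Location and value of the minimum of g[phi] at scaled radius r,
    with the assumed observational input h(r) = r**-3; the window is illustrative."""
    res = minimize_scalar(g, bounds=phi_window, args=(r**-3,), method="bounded")
    return res.x, res.fun

# Illustrative radii (halo radius scaled to one).
for r in (1.0, 0.9, 0.7, 0.5, 0.3, 0.1):
    phi_min, g_min = mean_fluctuation(r)
    print(f"r = {r:4.2f}   <phi> ~ {phi_min:+.3f}   g_min = {g_min:+.3e}")
```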
## VI Conclusion
We presented a robust methodology to investigate the stability of DM halos. Our emphasis was on the critical role of non-local inter-particle interactions, be they fundamental or effective, in sustaining the stability of these halos. We underscored the shortfalls of conventional cold collision-less DM models in predicting stable halos without taking into account a "non-local" interaction within the effective free energy of the halo. These interactions might stem from elements like baryonic feedback, self-interactions, or the inherent quantum features of dark particles. Stability, as we concluded, required substantial effective interactions between any two points inside the halo, independent of their distance from the center.
The methodology we proposed serves as a systematic framework for scrutinizing classes of DM models and for refining their parameter spaces. Based on our investigation, we inferred that DM halos, where the divergence from the standard cold collision-less framework was restricted to areas near the halo center, were not expected to maintain stability in their outer regions.
We showed that the problem of instability within DM halos could not be sufficiently resolved by resorting to perturbative quantum effects. This problem was not as severe for fermionic dark matter, yet it was considerably more pronounced in the case of bosonic DM.
In showcasing the potential of this cross-model approach, we delved into the most encompassing form of the effective free-energy around a selected mass profile.
We used a model in which the deviation from the standard cold collision-less DM model was characterized by a two-body interaction within the effective free-energy. We demonstrated how to utilize observational data to examine different classes of DM models.
|
2308.05781 | Conserved spin operator of Dirac's theory in spatially flat FLRW spacetimes | New conserved spin and orbital angular momentum operators of Dirac's theory on spatially flat FLRW spacetimes are proposed, generalizing recent results concerning the role of Pryce's spin operator in the flat case [I. I. Cotaescu, Eur. Phys. J. C (2022) 82:1073]. These operators split the conserved total angular momentum, generating the new spin and orbital symmetries that form the rotations of the isometry groups. The new spin operator is defined and studied in active mode with the help of a suitable spectral representation giving its Fourier transform. Moreover, the operator of fermion polarization is defined in the same manner. The orbital angular momentum is derived in passive mode using a new method, inspired by Wigner's theory of induced representations, but working properly only for global rotations. In this approach the quantization is performed, finding that the one-particle spin and orbital angular momentum operators have the same form in any FLRW spacetime regardless of their concrete geometries given by various scale factors. | Ion I. Cotaescu | 2023-08-10T15:02:52Z | http://arxiv.org/abs/2308.05781v1 | # Conserved spin operator of Dirac's theory in spatially flat FLRW spacetimes
###### Abstract
New conserved spin and orbital angular momentum operators of Dirac's theory on spatially flat FLRW spacetimes are proposed, generalizing recent results concerning the role of Pryce's spin operator in the flat case [I. I. Cotaescu, Eur. Phys. J. C (2022) 82:1073]. These operators split the conserved total angular momentum, generating the new spin and orbital symmetries that form the rotations of the isometry groups. The new spin operator is defined and studied in active mode with the help of a suitable spectral representation giving its Fourier transform. Moreover, the operator of fermion polarization is defined in the same manner. The orbital angular momentum is derived in passive mode using a new method, inspired by Wigner's theory of induced representations, but working properly only for global rotations. In this approach the quantization is performed, finding that the one-particle spin and orbital angular momentum operators have the same form in any FLRW spacetime, regardless of their concrete geometries given by various scale factors.
## 1 Introduction
The spin operator is defined naturally in non-relativistic quantum mechanics, where it is conserved, commuting with the Hamiltonian operator. In contrast, the would-be spin operator of Dirac's theory in relativistic quantum mechanics (RQM), whose components are the rotation generators of the \(SL(2,\mathbb{C})\) group, commutes with the Dirac Hamiltonian neither in flat Minkowski spacetime nor in spatially flat Friedmann-Lemaitre-Robertson-Walker (FLRW) manifolds. For this reason many authors studied the possibility of defining new conserved spin and orbital angular momentum operators splitting properly the conserved operator of total angular momentum.
This problem was solved in Minkowski spacetime by Pryce, who proposed the desired conserved spin operator in momentum representation (MR), giving its Fourier transform, a long time ago. As this result was written in the active mode, in which the operators act on the mode spinors of MR, it attracted little attention, being ignored for more than seven decades. Recently, by using a new spectral representation of the Fourier operators, we re-defined this operator in configuration representation (CR), showing that this is just the spin operator we need in Dirac's theory on Minkowski spacetime [2]. Moreover, in the same manner we defined the operator of fermion polarization in the most general case when the direction of spin projections depends on momentum. In other respects, for avoiding the difficulties in manipulating operators in active mode we proposed the method of associated operators in passive mode, relating any operator acting on the Dirac field in CR to a pair of operators acting directly on the wave spinors in MR [3]. In this framework we studied the entire operator algebra of Dirac's theory in special relativity, performing then the quantization giving the conserved one-particle operators as well as the operators having oscillating terms as those producing zitterbewegung [3].
In the present paper we would like to extend this study to any spatially flat FLRW spacetime, restricting ourselves only to the conserved operators generating the spin and orbital symmetries, i.e. the new conserved spin and orbital angular momentum operators we have to study here. For this purpose we use both the spectral representations in active mode and the method of associated operators in passive mode. In Minkowski spacetime these methods are based on the special structure of the mode spinors constructed according to Wigner's theory of induced representations [4, 5, 6, 7, 8]. As this theory cannot be applied in the case of FLRW spacetimes, where we do not have boost transformations, we are forced to use an alternative approach, which however works properly only for global rotations. In this manner we derive the associated isometry generators, pointing out the associated spin and orbital angular momentum operators we need for applying the Bogolyubov method of quantization [9]. Finally we find that the conserved one-particle operators of quantum field theory (QFT) have the same forms in any FLRW spacetime, being independent of time and implicitly of the scale factors. In this paper we use exclusively conformal frames with conformal coordinates and the diagonal tetrad gauge.
We start in the next section presenting the Lagrangian theory of Dirac's free field on FLRW spacetimes, defining the new spin and orbital symmetries generated by the conserved spin and orbital angular momentum operators. The third section is devoted to the structure of the mode spinors in MR, separating the time modulation functions governing their time evolution from the spin parts, constructed by using a new method inspired by Wigner's one applied in the flat case. We specify that these mode spinors are defined in rest frames only if we set the rest frame vacuum we proposed recently [10]. In the next section we adopt the active mode in which the operators act on the mode spinors in CR. Consequently, for deriving the action of the components of the new spin operator we must resort to our spectral representation of their Fourier transforms, finding how these depend on the time modulation functions. However, in spite
of this apparent time dependence the spin operator is conserved via Noether's theorem. Moreover, in Minkowski spacetime the Fourier transforms of the spin components become just those proposed by Pryce in MR [1].
Observing then the difficulties in deriving the above results, we abandon the active mode, developing in Sec. 5 the mentioned method of associated operators in passive mode. This gives us the opportunity of pointing out that the covariant representation of the Dirac field in CR is equivalent to a direct product of representations carried by the wave spinors in MR, just as in Wigner's theory in Minkowski spacetime. Hereby we derive the operators associated to the isometry generators, preparing the quantization procedure performed in Sec. 6, where we derive the corresponding one-particle operators of QFT, which have the same expressions in all the FLRW spacetimes including the Minkowski one. Finally we present our concluding remarks. Some technical details related to the \(SL(2,\mathbb{C})\) group are presented in an Appendix.
## 2 Covariant Dirac field on spatially flat FLRW spacetimes
The Dirac free field on \((1+3)\)-dimensional local Minkowskian spacetimes \(M\) may be defined in frames \(\{x;e\}\) formed by local charts, \(\{x\}\), and orthogonal frames defined by the tetrads, \(e\). The coordinates \(x^{\mu}\) are labeled by natural indices, \(\alpha,...\mu,\nu,...=0,1,2,3\) while the vector fields \(e_{\hat{\alpha}}=e_{\hat{\alpha}}^{\mu}\partial_{\mu}\) defining the local orthogonal frames, and the 1-forms \(\omega^{\hat{\alpha}}=\hat{e}_{\mu}^{\hat{\alpha}}dx^{\mu}\) of the dual coframes, are labeled by local indices, \(\hat{\mu},\hat{\nu},...=0,1,2,3\). The natural indices are raised or lowered by the metric tensor of \(M\), \(g_{\mu\nu}=\eta_{\hat{\alpha}\hat{\beta}}e_{\mu}^{\hat{\alpha}}\hat{e}_{\nu}^{\hat{\beta}}\), while for the local ones we have to use the Minkowski metric \(\eta=\mathrm{diag}(1,-1,-1,-1)\). In an arbitrary frame \(\{x;e\}\) of \(M\) the tetrad-gauge invariant action of the Dirac field \(\psi:M\rightarrow\mathcal{V}_{D}\), of mass \(m\), minimally coupled to the background gravity, reads,
\[\mathcal{S}[e,\psi]=\int\,d^{4}x\sqrt{g}\left\{\frac{i}{2}[\overline{\psi} \gamma^{\hat{\alpha}}\nabla_{\hat{\alpha}}\psi-(\overline{\nabla_{\hat{\alpha }}\psi})\gamma^{\hat{\alpha}}\psi]-m\overline{\psi}\psi\right\} \tag{1}\]
where \(\bar{\psi}=\psi^{+}\gamma^{0}\) is the Dirac adjoint of \(\psi\) and \(g=|\det(g_{\mu\nu})|\). The field \(\psi\) takes values in the space of Dirac spinors \(\mathcal{V}_{D}\) which carries the Dirac representation \(\rho_{D}=(\frac{1}{2},0)\oplus(0,\frac{1}{2})\) of the \(SL(2,\mathbb{C})\) group. In this representation one can define the Dirac matrices \(\gamma^{\hat{\alpha}}\) (with local indices) which are Dirac self-adjoint, \(\overline{\gamma^{\hat{\mu}}}=\gamma^{0}\gamma^{\hat{\mu}^{+}}\gamma^{0}= \gamma^{\hat{\mu}}\), and satisfy the anti-commutation rules (A.1). These matrices may extend the \(sl(2,\mathbb{C})\) Lie algebra to a \(su(2,2)\) one such that \(\rho_{D}\) is in fact much larger than a representation of the \(SL(2,\mathbb{C})\) group. The gauge invariant field theories use covariant derivatives of the form
\[\nabla_{\hat{\alpha}}=e_{\hat{\alpha}}^{\mu}\nabla_{\mu}=e_{\hat{\alpha}}+\frac{i}{2}\hat{\Gamma}_{\hat{\alpha}\hat{\beta}}^{\hat{\gamma}}\,s^{\hat{\beta}\,\cdot}_{\cdot\,\hat{\gamma}}\,, \tag{2}\]
where \(s^{\hat{\alpha}\hat{\beta}}\) are the \(SL(2,\mathbb{C})\) generators (A.2) while \(\hat{\Gamma}_{\hat{\mu}\hat{\nu}}^{\hat{\sigma}}=e_{\hat{\mu}}^{\alpha}e_{\hat{\nu}}^{\beta}(\hat{e}_{\gamma}^{\hat{\sigma}}\Gamma_{\alpha\beta}^{\gamma}-\hat{e}_{\beta,\alpha}^{\hat{\sigma}})\) are the connection components in local frames expressed in terms of tetrads
and Christoffel symbols, \(\Gamma^{\gamma}_{\alpha\beta}\). Note that these connections are known as the spin connections and denoted often by \(\Omega\). The covariant derivatives of the Dirac operator \(E_{x}=i\gamma^{\hat{\alpha}}\nabla_{\hat{\alpha}}-m\) guarantee that the Dirac equation \(E_{x}\psi=0\) is _gauge invariant_ in the sense that this does not change its form when we change the positions of local frames performing gauge transformations. The particular solutions of the Dirac equation form a vector space that may be equipped with the relativistic scalar product [10]
\[\langle\psi,\psi^{\prime}\rangle_{D}=\int_{\Sigma}d\sigma_{\mu}\sqrt{g}\,e^{ \mu}_{\hat{\alpha}}\,\bar{\psi}(x)\gamma^{\hat{\alpha}}\psi^{\prime}(x)\,, \tag{3}\]
whose integral is performed on a space-like section \(\Sigma\subset M\).
In what follows we study the Dirac free field on \((1+3)\)-dimensional spatially flat FLRW spacetimes \(M(a)\) having scale factors \(a\). We work exclusively in conformal frames \(\{x_{c};e\}\) formed by conformal charts, \(\{t_{c},\vec{x}_{c}\}\), and orthogonal frames defined by tetrads, \(e\). The conformal coordinates are the conformal time \(t_{c}\) and the co-moving Cartesian space coordinates, \(x^{i}_{c}\)\((i,j,k...=1,2,3)\), giving the line element
\[ds^{2}=g_{\mu\nu}(x_{c})dx^{\mu}_{c}dx^{\nu}_{c}=a(t_{c})^{2}(dt^{2}_{c}-d\vec {x}_{c}\cdot d\vec{x}_{c})\,. \tag{4}\]
Here we adopt the diagonal tetrad-gauge in which the local frames are defined by
\[e_{0} = \frac{1}{a(t_{c})}\,\partial_{t_{c}}\,,\qquad\omega^{0}=a(t_{c}) dt_{c}\,, \tag{5}\] \[e_{i} = \frac{1}{a(t_{c})}\,\partial_{i}\,,\qquad\ \ \omega^{i}=a(t_{c})dx^{i}\,, \tag{6}\]
such that we preserve the global \(SO(3)\) symmetry allowing us to use systematically the \(SO(3)\) vectors. In these frames the Dirac equation can be put in Hamiltonian form, \(i\partial_{t_{c}}\psi=H_{c}\psi\), with the help of the Hamiltonian operator
\[H_{c}(x_{c})=ma(t_{c})\gamma^{0}-i\gamma^{0}\gamma^{i}\partial_{x^{i}_{c}}- \frac{3i}{2}\frac{\dot{a}(t_{c})}{a(t_{c})}\,,\quad\dot{a}(t_{c})=\frac{da(t_ {c})}{dt_{c}}\,. \tag{7}\]
while the scalar product takes the form
\[\langle\psi,\psi^{\prime}\rangle_{D}=\int_{\Sigma}d^{3}x_{c}\,a(t_{c})^{3}\bar{\psi}(x_{c})\gamma^{0}\psi^{\prime}(x_{c})=\int_{\Sigma}d^{3}x_{c}\,a(t_{c})^{3}\psi^{+}(x_{c})\psi^{\prime}(x_{c})\,, \tag{8}\]
where the integration is performed over a flat space section, \(\Sigma=\mathbb{R}^{3}\).
In the spacetimes \(M(a)\) the action (1) is invariant at least under _global_ isometries, \((R,a):x^{i}_{c}\to x^{i\,\prime}_{c}=R^{i}_{\cdot\,j}x^{j}_{c}+a^{i}\), formed by rotations \(R\in SO(3)\) and three-dimensional translations, \(\vec{a}\in T(3)\). The isometry group \(E(3)=T(3)\circledS SO(3)\) is a semidirect product where \(T(3)\) is the invariant subgroup. The Dirac field transforms under isometries according to the _covariant_ representation
\(T\,:\,(r,\vec{a})\to T_{r,\vec{a}}\) of the group \(\tilde{E}(3)=T(3)\circledS SU(2)\), which is the universal covering group of the isometry group \(E(3)\) [8]. We have thus the global transformations
\[(T_{r,\vec{a}}\psi)(t_{c},\vec{x}_{c})=r\psi\left(t_{c},R(\hat{r})^{-1}(\vec{x} _{c}-\vec{a})\right)\,,\quad\forall r\in\rho_{D}[SU(2)]\,,\,\vec{a}\in T(3). \tag{9}\]
generated by the basis-generators of the corresponding representation of the algebra \({\rm Lie}(T)\) that read
\[P^{i}=\left.i\frac{\partial T_{1,a}}{\partial a^{i}}\right|_{a=0}\,,\quad J_{i }=\left.i\frac{\partial T_{r(\theta),0}}{\partial\theta^{i}}\right|_{\theta=0}\,, \tag{10}\]
where \(r(\theta)\) is defined in Eq. (A.8). We obtain thus the momentum components, \(P^{i}=-i\partial_{x_{c}^{i}}\), and those of the total angular momentum,
\[J_{i} = \frac{1}{2}\,\varepsilon_{ijk}J_{jk}=-i\varepsilon_{ijk}\underline {x}^{j}\partial_{x_{c}^{k}}+s_{i}\,, \tag{11}\]
where the components \(\underline{x}^{i}\) of the position (or coordinate) vector-operator \(\underline{\vec{x}}\) act as \((\underline{x}^{i}\psi)(x_{c})=x_{c}^{i}\psi(x_{c})\) while the reducible matrices \(s_{i}\) are given by Eq. (A.7). The operators \(P^{i}\) and \(J_{i}\) are the standard basis-generators of the algebra \({\rm Lie}(T)\). As the action (1) is invariant under isometries the scalar product (8) is also invariant,
\[\langle T_{\lambda,a}\psi,T_{\lambda,a}\psi^{\prime}\rangle_{D}=\langle\psi, \psi^{\prime}\rangle_{D}\,. \tag{12}\]
The generators \(X\in{\rm Lie}(T)\) are self-adjoint operators, \(\langle\psi,X\psi^{\prime}\rangle_{D}=\langle X\psi,\psi^{\prime}\rangle_{D}\), conserved via Noether's theorem. Therefore, we may conclude that in this framework the covariant representation \(T\) behaves as a _unitary_ one with respect to the relativistic scalar product (8).
The subgroup \(SU(2)\subset\tilde{E}(3)\) is of special interest, such that we denote by \(T_{r,0}=T_{\hat{r}}^{r}\) the restriction of the covariant representation \(T\) to this subgroup. The basis-generators of the representation \(T^{r}\) are the components of the total angular momentum operator, \(\vec{J}=\underline{\vec{x}}\wedge\vec{P}+\vec{s}\), defined by Eq. (11), which is formed by the orbital term \(\underline{\vec{x}}\wedge\vec{P}\) and the spin matrix \(\vec{s}\in sl(2,\mathbb{C})\). As these operators are not conserved separately via Noether's theorem, we must look for a new conserved spin operator \(\vec{S}\) associated to a suitable new position operator, \(\vec{X}=\underline{\vec{x}}+\delta\vec{X}\), allowing the splitting
\[\vec{J}=\underline{\vec{x}}\wedge\vec{P}+\vec{s}=\vec{L}+\vec{S}\,,\qquad\vec{ L}=\vec{X}\wedge\vec{P}\,, \tag{13}\]
which imposes that the correction \(\delta\vec{X}\) satisfies \(\delta\vec{X}\wedge\vec{P}=\vec{s}-\vec{S}\). This new splitting gives rise to a pair of new \(su(2)\sim so(3)\) symmetries, namely the _orbital_ symmetry generated by \(\{L_{1},L_{2},L_{3}\}\) and the _spin_ one generated by \(\{S_{1},S_{2},S_{3}\}\).
The plane wave solutions of the Dirac equation depend on _arbitrary_ Pauli spinors \(\xi=\{\xi_{\sigma}|\sigma=\pm\frac{1}{2}\}\) determining the fermion polarization. These spinors form similar bases in both the spaces \({\cal V}_{P}\) of Pauli spinors carrying the irreducible representations \((\frac{1}{2},0)\) and \((0,\frac{1}{2})\) of \(\rho_{D}\). The basis of polarization spinors can be changed at any time, \(\xi\to\hat{r}\xi\), by applying a \(SU(2)\) rotation \(\hat{r}\). For this reason, when we study this symmetry it is convenient to denote the Dirac field
by \(\psi_{\xi}\) instead of \(\psi\) pointing out explicitly its dependence on the polarization spinors. We define now the transformations of spin symmetry with the help of a representation \(T^{s}:\hat{r}\to T^{s}_{\hat{r}}\) of the group \(SU(2)\) acting as,
\[\left(T^{s}_{\hat{r}(\theta)}\psi_{\xi}\right)(x_{c})=\psi_{\hat{r}(\theta)\xi} (x_{c})\,, \tag{14}\]
where \(\hat{r}(\theta)\) are the rotations (A.8) with Cayley-Klein parameters. Then the components of the spin operator can be defined as the generators of this representation, [2]
\[S_{i}=\left.i\frac{\partial T^{s}_{\hat{r}(\theta)}}{\partial\theta^{i}} \right|_{\theta^{i}=0}\;\;\Rightarrow\;\;S_{i}\psi_{\xi}=\psi_{\hat{s}_{i}\xi}\,, \tag{15}\]
whose action is obvious. Similarly we define the orbital representation \(T^{o}:\hat{r}\to T^{o}_{\hat{r}}\) as
\[\left(T^{o}_{\hat{r}(\theta)}\psi_{\xi}\right)(t_{c},\vec{x}_{c})=r(\theta) \psi_{\hat{r}(\theta)^{-1}\xi}\left(t_{c},R[\hat{r}(\theta)]^{-1}\vec{x}_{c} \right)\,, \tag{16}\]
for accomplishing the factorization \(T^{r}=T^{o}\otimes T^{s}\). The basis-generators of the orbital representation,
\[L_{i}=\left.i\frac{\partial T^{o}_{\hat{r}(\theta)}}{\partial\theta^{i}} \right|_{\theta^{i}=0}\,, \tag{17}\]
are the components of the new conserved orbital angular momentum operator. In what follows we shall pay special attention to the new operators \(\vec{S}\) and \(\vec{L}\).
## 3 Mode spinors in momentum representation
The Dirac equation can be solved formally on \(M(a)\) in conformal frames \(\{x_{c};e\}\) allowing solutions of the general form [10],
\[\psi(x_{c}) = \psi^{(+)}(x_{c})+\psi^{(-)}(x_{c}) \tag{18}\] \[= \int d^{3}p\sum_{\sigma}[U_{\vec{p},\sigma}(x_{c})\alpha(\vec{p},\sigma)+V_{\vec{p},\sigma}(x_{c})\beta^{*}(\vec{p},\sigma)]\,,\]
expressed in terms of particle, \(\alpha\), and antiparticle, \(\beta\), wave spinors and mode spinors \(U_{\vec{p},\sigma}\) and \(V_{\vec{p},\sigma}\), of positive and respectively negative frequencies. These spinors satisfy the eigenvalue problems \(P^{i}U_{\vec{p},\sigma}(t_{c},\vec{x}_{c})=p^{i}U_{\vec{p},\sigma}(t_{c},\vec{x}_{c})\) and \(P^{i}V_{\vec{p},\sigma}(t_{c},\vec{x}_{c})=-p^{i}V_{\vec{p},\sigma}(t_{c},\vec{x}_{c})\) and form an orthonormal basis, being related through the charge conjugation,
\[V_{\vec{p},\sigma}(t_{c},\vec{x}_{c})=U^{c}_{\vec{p},\sigma}(t_{c},\vec{x}_{c })=CU^{*}_{\vec{p},\sigma}(t_{c},\vec{x}_{c})\,,\quad C=i\gamma^{2}\,, \tag{19}\]
satisfying the orthogonality relations
\[\langle U_{\vec{p},\sigma},U_{\vec{p}^{\,\prime},\sigma^{\prime}}\rangle = \langle V_{\vec{p},\sigma},V_{\vec{p}^{\,\prime},\sigma^{\prime}} \rangle=\delta_{\sigma\sigma^{\prime}}\delta^{3}(\vec{p}-\vec{p}^{\,\prime}) \tag{20}\] \[\langle U_{\vec{p},\sigma},V_{\vec{p}^{\,\prime},\sigma^{\prime}}\rangle = \langle V_{\vec{p},\sigma},U_{\vec{p}^{\,\prime},\sigma^{\prime}} \rangle=0\,, \tag{21}\]
with respect to the relativistic scalar product (8). Moreover, this basis is supposed to be complete accomplishing the condition [10]
\[\int d^{3}p\sum_{\sigma}\left[U_{\vec{p},\,\sigma}(t_{c},\vec{x}_{c})U_{\vec{p}, \sigma}^{+}(t_{c},\vec{x}_{c}^{\,\prime})+V_{\vec{p},\sigma}(t_{c},\vec{x}_{c} )V_{\vec{p},\sigma}^{+}(t_{c},\vec{x}_{c}^{\,\prime})\right]\]
\[=a(t_{c})^{-3}\delta^{3}(\vec{x}_{c}-\vec{x}_{c}^{\,\prime})\,. \tag{22}\]
The space of the solutions of Dirac's equation \({\cal F}=\{\psi|E_{x_{c}}\psi=0\}={\cal F}^{+}\oplus{\cal F}^{-}\) is formed by the subspaces of solutions of positive, \({\cal F}^{+}\), and negative, \({\cal F}^{-}\), frequencies which are orthogonal with respect to the scalar product (8).
In RQM the physical meaning of the free field \(\psi\) is encapsulated in its wave spinors
\[\alpha=\left(\begin{array}{c}\alpha_{\frac{1}{2}}\\ \alpha_{-\frac{1}{2}}\end{array}\right)\in\tilde{\cal F}^{+}\,,\quad\beta= \left(\begin{array}{c}\beta_{\frac{1}{2}}\\ \beta_{-\frac{1}{2}}\end{array}\right)\in\tilde{\cal F}^{-}\,, \tag{23}\]
whose values can be obtained applying the inversion formulas
\[\alpha_{\sigma}(\vec{p})=\langle U_{\vec{p},\sigma},\psi\rangle_{D}\,,\quad \beta_{\sigma}(\vec{p})=\langle\psi,V_{\vec{p},\sigma}\rangle_{D}\,, \tag{24}\]
resulted from Eqs. (20) and (21). We assume now that the spaces \(\tilde{\cal F}^{+}\sim\tilde{\cal F}^{-}\sim{\cal L}^{2}(\Omega_{\hat{p}},d^{ 3}p,{\cal V}_{P})\) are equipped with the same scalar product,
\[\langle\alpha,\alpha^{\prime}\rangle=\int d^{3}p\,\alpha^{+}(\vec{p})\alpha^ {\prime}(\vec{p})=\int d^{3}p\sum_{\sigma}\alpha_{\sigma}^{*}(\vec{p})\alpha _{\sigma}^{\prime}(\vec{p})\,, \tag{25}\]
and similarly for the spinors \(\beta\). Then after using Eqs. (20) and (21) we obtain the important identity
\[\langle\psi,\psi^{\prime}\rangle_{D}=\langle\alpha,\alpha^{\prime}\rangle+ \langle\beta,\beta^{\prime}\rangle\,, \tag{26}\]
showing that the spaces \({\cal F}\) and \(\tilde{\cal F}=\tilde{\cal F}^{+}\oplus\tilde{\cal F}^{-}\) related through the expansion (18) are _isometric_.
The general form of the mode spinors can be studied in any manifold \(M(a)\), exploiting the Dirac equation in MR. As here we cannot apply the Wigner method of constructing mode spinors in Minkowski spacetime, we must look for an alternative approach inspired by the identity (A.12). For our further purposes, it is convenient to separate from the beginning the orbital part from the spin terms assuming that the mode spinors have the general form
\[U_{\vec{p},\sigma}(t_{c},\vec{x}_{c}) = [2\pi a(t_{c})]^{-\frac{3}{2}}e^{i\vec{p}\cdot\vec{x}_{c}}{\cal U }_{p}(t_{c})d(\vec{p})u_{\sigma}(\vec{p}) \tag{27}\] \[V_{\vec{p},\sigma}(t_{c},\vec{x}_{c}) = [2\pi a(t_{c})]^{-\frac{3}{2}}e^{-i\vec{p}\cdot\vec{x}_{c}}{\cal V }_{p}(t_{c})d(\vec{p})v_{\sigma}(\vec{p}) \tag{28}\]
where we introduce the diagonal matrix-functions \({\cal U}_{p}(t_{c})\) and \({\cal V}_{p}(t_{c})\) which depend only on \(t_{c}\) and \(p=|\vec{p}\,|\), determining the time modulation of the fundamental spinors.
The spin part is separated with the help of the Hermitian singular matrix
\[d(\vec{p})=d^{+}(\vec{p})=1+\frac{\gamma^{0}\vec{\gamma}\cdot\vec{p}}{p}\,, \tag{29}\]
acting on the Dirac spinors that in the standard representation of the Dirac matrices (with diagonal \(\gamma^{0}\)) have the form [10, 11]
\[u_{\sigma}(\vec{p})=\left(\begin{array}{c}\xi_{\sigma}(\vec{p})\\ 0\end{array}\right)\,,\quad v_{\sigma}(\vec{p})=Cu_{\sigma}^{*}(\vec{p})=\left( \begin{array}{c}0\\ -\eta_{\sigma}(\vec{p})\end{array}\right)\,, \tag{30}\]
depending on the Pauli spinors, \(\xi_{\sigma}(\vec{p})\) and \(\eta_{\sigma}(\vec{p})=i\sigma_{2}\xi_{\sigma}^{*}(\vec{p})\), which are normalized, \(\xi_{\sigma}^{+}(\vec{p})\xi_{\sigma^{\prime}}(\vec{p})=\eta_{\sigma}^{+}(\vec{p})\eta_{\sigma^{\prime}}(\vec{p})=\delta_{\sigma\sigma^{\prime}}\), and satisfy the completeness condition,
\[\sum_{\sigma}\xi_{\sigma}(\vec{p})\xi_{\sigma}^{+}(\vec{p})=\sum_{\sigma}\eta _{\sigma}(\vec{p})\eta_{\sigma}^{+}(\vec{p})={\bf 1}_{2\times 2}\,. \tag{31}\]
The form of these spinors depends on the direction of the spin projection which, in general, may depend on \(\vec{p}\), as in the case of the helicity basis. Here we say that a polarization depending on momentum is a _peculiar_ polarization. Otherwise, a polarization independent of momentum will be referred to as a _common_ polarization. The corresponding Dirac spinors have the completeness properties [11]
\[\sum_{\sigma}u_{\sigma}(\vec{p})u_{\sigma}^{+}(\vec{p})=\frac{1+ \gamma^{0}}{2}\equiv\pi_{+}\,, \tag{32}\] \[\sum_{\sigma}v_{\sigma}(\vec{p})v_{\sigma}^{+}(\vec{p})=\frac{1- \gamma^{0}}{2}\equiv\pi_{-}\,, \tag{33}\]
laying out the Hermitian matrices \(\pi_{\pm}=(\pi_{\pm})^{+}\) that form a complete system of projection operators as \(\pi_{+}\pi_{-}=0\) and \(\pi_{+}+\pi_{-}=1\in\rho_{D}\). All the above auxiliary quantities will be useful in the further calculations, having simple calculation rules as, for example, \(d(\vec{p})^{2}=2d(\vec{p}),\,\,\,d(\vec{p})d(-\vec{p})=0\), etc.
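As a cross-check of these rules, the short sketch below (an illustration, not part of the derivation) constructs the \(\gamma\)-matrices in the standard representation with diagonal \(\gamma^{0}\), builds \(d(\vec{p})\) of Eq. (29) and the projectors \(\pi_{\pm}\) of Eqs. (32)-(33), and verifies the stated identities numerically for an arbitrary test momentum.

```python
import numpy as np

# Pauli matrices and gamma matrices in the standard (Dirac) representation.
s0, z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
gamma0 = np.block([[s0, z2], [z2, -s0]])
gammas = [np.block([[z2, s], [-s, z2]]) for s in (sx, sy, sz)]

def d(p):
    """Hermitian singular matrix d(p) = 1 + gamma^0 (gamma . p)/p of Eq. (29)."""
    phat = np.asarray(p, dtype=float) / np.linalg.norm(p)
    return np.eye(4) + gamma0 @ sum(c * g for c, g in zip(phat, gammas))

pi_plus, pi_minus = (np.eye(4) + gamma0) / 2, (np.eye(4) - gamma0) / 2
p = np.array([0.3, -1.2, 0.7])                     # arbitrary test momentum

assert np.allclose(d(p) @ d(p), 2 * d(p))          # d(p)^2 = 2 d(p)
assert np.allclose(d(p) @ d(-p), np.zeros((4, 4))) # d(p) d(-p) = 0
assert np.allclose(d(p), d(p).conj().T)            # d(p) is Hermitian
assert np.allclose(pi_plus @ pi_minus, np.zeros((4, 4)))
assert np.allclose(pi_plus + pi_minus, np.eye(4))  # complete system of projectors
print("Calculation rules of Eqs. (29), (32)-(33) verified for p =", p)
```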
The principal pieces are the diagonal matrix-functions determining the time modulation of the mode spinors which can be represented as
\[{\cal U}_{p}(t_{c}) = \pi_{+}u^{+}(t_{c},p)+\pi_{-}u^{-}(t_{c},p)\,, \tag{34}\] \[{\cal V}_{p}(t_{c}) = \pi_{+}v^{+}(t_{c},p)+\pi_{-}v^{-}(t_{c},p)\,, \tag{35}\]
in terms of the time modulation functions \(u^{\pm}(t_{c},p)\) and \(v^{\pm}(t_{c},p)\) whose differential equations in the general case of \(m\neq 0\) may be obtained by substituting Eqs. (27) and (28) in Dirac's equation. Then, after a few manipulations, we find the system of first order differential equations [12]
\[\left[i\partial_{t_{c}}\mp m\,a(t_{c})\right]u^{\pm}(t_{c},p) = p\,u^{\mp}(t_{c},p)\,,\] \[\left[i\partial_{t_{c}}\mp m\,a(t_{c})\right]v^{\pm}(t_{c},p) = -p\,v^{\mp}(t_{c},p)\,, \tag{36}\]
which govern the time modulation of the free Dirac field on any spatially flat FLRW manifold. The solutions of this system depend on integration constants that must be selected according to the charge conjugation condition (19) which requires to have \({\cal V}_{p}={\cal U}_{p}^{c}=C{\cal U}_{p}^{*}C^{-1}=\gamma^{5}{\cal U}_{p}^{ *}\gamma^{5}\) leading to the mandatory condition
\[v^{\pm}(t_{c},p)=\left[u^{\mp}(t_{c},p)\right]^{*}\,. \tag{37}\]
The remaining normalization constants can be restricted as we have the prime integrals of the system (36), \(\partial_{t_{c}}(|u^{+}(t_{c},p)|^{2}+|u^{-}(t_{c},p)|^{2})=\partial_{t_{c}}(|v^{+}(t_{c},p)|^{2}+|v^{-}(t_{c},p)|^{2})=0\), allowing us to impose the normalization conditions
\[|u^{+}(t_{c},p)|^{2}+|u^{-}(t_{c},p)|^{2}=|v^{+}(t_{c},p)|^{2}+|v^{-}(t_{c},p )|^{2}=1\,, \tag{38}\]
which guarantee Eqs. (20) and (21) to be accomplished. Hereby we find the calculation rules \({\rm Tr}({\cal U}_{p}{\cal U}_{p}^{*})={\rm Tr}({\cal V}_{p}{\cal V}_{p}^{*})=2\) resulted from Eqs. (38) as \({\rm Tr}(\pi_{\pm})=2\).
Gathering all the above elements we obtain the intuitive forms of the mode spinors in conformal frames and standard representation of the \(\gamma\)-matrices [10],
\[U_{\vec{p},\sigma}(t_{c},\vec{x}_{c}) = \frac{e^{i\vec{p}\cdot\vec{x}_{c}}}{[2\pi a(t_{c})]^{\frac{3}{2}} }\left(\begin{array}{c}u^{+}(t_{c},p)\,\xi_{\sigma}(\vec{p})\\ u^{-}(t_{c},p)\,\frac{\vec{\sigma}\cdot\vec{p}}{p}\,\xi_{\sigma}(\vec{p})\end{array} \right)\,, \tag{39}\] \[V_{\vec{p},\sigma}(t_{c},\vec{x}_{c}) = -\frac{e^{i\vec{p}\cdot\vec{x}_{c}}}{[2\pi a(t_{c})]^{\frac{3}{2} }}\left(\begin{array}{c}v^{+}(t_{c},p)\,\frac{\vec{\sigma}\cdot\vec{p}}{p} \,\eta_{\sigma}(\vec{p})\\ v^{-}(t_{c},p)\,\eta_{\sigma}(\vec{p})\end{array}\right)\,. \tag{40}\]
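To make this construction concrete, the sketch below integrates the system (36) for an assumed scale factor and assumed normalized initial data (both purely illustrative), checks that the prime integral (38) is preserved, and then verifies that the spin parts of the mode spinors (39)-(40), built with helicity spinors for \(\vec{p}\) along the third axis and related through (37), remain orthonormal; the momentum-space delta functions of Eqs. (20)-(21) come from the spatial integral and are not part of this finite check.

```python
import numpy as np
from scipy.integrate import solve_ivp

sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

m, p = 1.0, 0.7                        # illustrative mass and momentum modulus
a = lambda tc: 1.0 + 0.1 * tc          # assumed (illustrative) scale factor

def rhs(tc, y):
    """System (36): [i d/dt_c -/+ m a(t_c)] u^{+/-} = p u^{-/+}."""
    up, um = y
    return [-1j * (m * a(tc) * up + p * um), -1j * (-m * a(tc) * um + p * up)]

y0 = [np.sqrt(0.8) + 0j, 1j * np.sqrt(0.2)]        # assumed data with |u+|^2+|u-|^2 = 1
sol = solve_ivp(rhs, (0.0, 10.0), y0, rtol=1e-10, atol=1e-12)
up, um = sol.y[:, -1]
print("prime integral |u+|^2 + |u-|^2 =", abs(up)**2 + abs(um)**2)   # stays ~ 1

# Spin parts of Eqs. (39)-(40) for p along the third axis, with helicity spinors.
vp, vm = um.conjugate(), up.conjugate()            # charge conjugation condition (37)
xi = {+1: np.array([1, 0], dtype=complex), -1: np.array([0, 1], dtype=complex)}
eta = {s: 1j * sy @ xi[s].conj() for s in (+1, -1)}
U = {s: np.concatenate([up * xi[s], um * sz @ xi[s]]) for s in (+1, -1)}
V = {s: -np.concatenate([vp * (-sz) @ eta[s], vm * eta[s]]) for s in (+1, -1)}  # at -p

for s in (+1, -1):
    for t in (+1, -1):
        assert np.isclose(np.vdot(U[s], U[t]), 1.0 * (s == t), atol=1e-6)
        assert np.isclose(np.vdot(U[s], V[t]), 0.0, atol=1e-6)
print("Orthonormality of the spin parts of the mode spinors verified")
```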
A special problem comes from the fact that these spinors are not defined in the rest frame where \(\vec{p}=0\) because of the matrix \(\frac{\vec{\sigma}\cdot\vec{p}}{p}\) which is undefined in this limit. For this reason we proposed the _rest frame vacuum_ defined by the following supplementary conditions [10]
\[u^{-}(t_{c},0)=v^{+}(t_{c},0)=0\,,\quad|u^{+}(t_{c},0)|=|v^{-}(t_{c},0)|=1\,. \tag{41}\]
In this vacuum the mode spinors in the rest frame are well-defined,
\[U_{0,\sigma}(t_{c},\vec{x}) = [2\pi a(t_{c})]^{-\frac{3}{2}}e^{-im\,t(t_{c})}u_{\sigma}(0)\,, \tag{42}\] \[V_{0,\sigma}(t_{c},\vec{x}) = [2\pi a(t_{c})]^{-\frac{3}{2}}e^{im\,t(t_{c})}v_{\sigma}(0)\,, \tag{43}\]
depending on the cosmic time
\[t(t_{c})=\int_{t_{c\,0}}^{t_{c}}dt_{c}^{\prime}a(t_{c}^{\prime})\,, \tag{44}\]
the rest energy of special relativity, \(E_{0}=m\), and the rest frame spinors \(u_{\sigma}(0)\) and \(v_{\sigma}(0)\). Note that in Minkowski spacetime the time modulation functions (A.11) satisfy naturally the conditions (41) defining the rest frame vacuum. In contrast, on the de Sitter expanding universe one prefers the _adiabatic_ vacuum [11], which however cannot be defined in FLRW spacetimes with finite big bang times. In our opinion, the de Sitter adiabatic vacuum can be seen as the partner of the rest frame one in the process of cosmological particle creation under inflation [12].
## 4 Operators in active mode
In linear algebra, a linear operator may act changing the basis in the active mode or transforming only the coefficients without affecting the basis if we adopt the passive mode. Let us start with the operators in active mode.
The simplest operators of Dirac's theory on \(M(a)\) are the multiplicative and differential operators in CR. The differential operators are \(4\times 4\) matrices depending on space derivatives, \(f(i\partial_{i})\in\rho_{D}\), whose action on the mode spinors
\[\left[f(i\partial_{i})\psi\right](x) = \int d^{3}p\sum_{\sigma}\left[f(p^{i})U_{\vec{p},\sigma}(x)\alpha _{\sigma}(\vec{p})+f(-p^{i})V_{\vec{p},\sigma}(x)\beta_{\sigma}^{*}(\vec{p})\right] \tag{45}\]
is given by the momentum-dependent matrices \(f(p^{i})\) called the Fourier transforms of the operators \(f(i\partial_{i})\). The principal differential operators are the translation generators \(P_{i}=i\partial_{i}\) and the time-dependent Hamiltonian operator (7) whose Fourier transform
\[H_{c}(t_{c},\vec{p})=ma(t_{c})\gamma^{0}+\gamma^{0}\vec{\gamma}\cdot\vec{p}- \frac{3i}{2}\frac{\dot{a}(t_{c})}{a(t_{c})}\,, \tag{46}\]
allows us to write the formal actions
\[\left(H_{c}U_{\vec{p},\sigma}\right)(x_{c})=H_{c}(t_{c},\vec{p})U_{\vec{p},\sigma}(x_{c})\,,\quad\left(H_{c}V_{\vec{p},\sigma}\right)(x_{c})=H_{c}(t_{c},-\vec{p})\,V_{\vec{p},\sigma}(x_{c})\,, \tag{47}\]
which will be useful for identifying conserved quantities.
Most interesting are the integral operators whose kernels may depend on time. Here we restrict ourselves to the equal-time operators acting in conformal frames as
\[(A\psi)(t_{c},\vec{x}_{c})=\int d^{3}x_{c}^{\prime}\mathfrak{A}(t_{c},\vec{x}_ {c},\vec{x}_{c}^{\,\prime})\psi(t_{c},\vec{x}_{c}^{\,\prime})\,, \tag{48}\]
preserving the conformal time \(t_{c}\). The multiplication takes over this property,
\[A=A_{1}A_{2} \Rightarrow \mathfrak{A}(t_{c},\vec{x}_{c},\vec{x}_{c}^{\,\prime})=\int d^{3 }x_{c}^{\prime\prime}\,\mathfrak{A}_{1}(t,\vec{x}_{c},\vec{x}_{c}^{\,\prime \prime})\mathfrak{A}_{2}(t_{c},\vec{x}_{c}^{\,\prime\prime},\vec{x}_{c}^{\, \prime})\,, \tag{49}\]
such that the set of equal-time operators forms an algebra \(E[t_{c}]\subset\mathrm{Aut}(\mathcal{F})\) at any fixed time \(t_{c}\). A special subalgebra \(F[t_{c}]\subset E[t_{c}]\) is formed by operators with local kernels, \(\mathfrak{A}(t_{c},\vec{x}_{c},\vec{x}_{c}^{\,\prime})=\mathfrak{A}(t_{c}, \vec{x}_{c}-\vec{x}_{c}^{\,\prime})\), allowing three-dimensional Fourier representations,
\[\mathfrak{A}(t_{c},\vec{x}_{c})=\int d^{3}p\,\frac{e^{i\vec{p}\cdot\vec{x}_{c }}}{(2\pi)^{3}}A(t_{c},\vec{p})\,, \tag{50}\]
depending on the matrices \(A(t_{c},\vec{p})\in\rho_{D}\) which we call here the Fourier transforms of the operators \(A\). Then the action (48) on a field (18) can be written as
\[(A\psi)(t_{c},\vec{x}_{c})=\int d^{3}x_{c}^{\prime}\,\mathfrak{A} (t,\vec{x}_{c}-\vec{x}_{c}^{\,\prime})\psi(t_{c},\vec{x}_{c}^{\,\prime})\] \[\quad=\int d^{3}p\sum_{\sigma}\left[A(t_{c},\vec{p})U_{\vec{p}, \sigma}(t_{c},\vec{x})\alpha_{\sigma}(\vec{p})+A(t_{c},-\vec{p})V_{\vec{p}, \sigma}(t,\vec{x})\beta_{\sigma}^{*}(\vec{p})\right]\,. \tag{51}\]
Remarkably, the operator multiplication, \(A=A_{1}A_{2}\), in the algebra \(F[t_{c}]\) leads to the convolution of the corresponding kernels \(\mathfrak{A}=\mathfrak{A}_{1}*\mathfrak{A}_{2}\) defined as
\[\mathfrak{A}(t_{c},\vec{x}_{c}-\vec{x}_{c}^{\,\prime})=\int d^{3}x_{c}^{\prime\prime}\,\mathfrak{A}_{1}(t_{c},\vec{x}_{c}-\vec{x}_{c}^{\,\prime\prime})\mathfrak{A}_{2}(t_{c},\vec{x}_{c}^{\,\prime\prime}-\vec{x}_{c}^{\,\prime})\,, \tag{52}\]
and, consequently, to the multiplication of their Fourier transforms, \(A(t_{c},\vec{p})=A_{1}(t_{c},\vec{p})A_{2}(t_{c},\vec{p})\). One obtains thus the new algebra \(\bar{F}[t_{c}]\) in MR formed by the Fourier transforms of the Fourier operators in which the identity is \(I(\vec{p})=1\in\rho_{D}\). Obviously, the operator \(A\in F[t_{c}]\) is invertible if its Fourier transform is invertible in \(\bar{F}[t_{c}]\).
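This kernel-convolution/Fourier-multiplication correspondence can be illustrated with a reduced one-dimensional scalar analogue of Eq. (52); the periodic grid and the two sample kernels below are assumptions of the illustration only.

```python
import numpy as np

# One-dimensional periodic analogue: kernels depend only on the coordinate difference.
N, L = 256, 20.0
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N
A1 = np.exp(-(x - L / 2) ** 2)                           # sample kernel A_1(x)
A2 = np.exp(-2 * (x - L / 2) ** 2) * np.cos(x - L / 2)   # sample kernel A_2(x)

# Kernel of the product operator: periodic convolution, the analogue of Eq. (52).
idx = np.arange(N)
A_conv = np.array([np.sum(A1 * A2[(k - idx) % N]) * dx for k in range(N)])

# The same kernel obtained from the product of the Fourier transforms.
A_fft = np.fft.ifft(np.fft.fft(A1) * np.fft.fft(A2)).real * dx

print("max |convolution - inverse FFT of product| =", np.abs(A_conv - A_fft).max())
```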
We say that a Fourier operator \(A\) is reducible if its Fourier transform satisfies
\[\left.\langle V_{\vec{p},\sigma},A(t_{c},\vec{p})U_{\vec{p},\sigma}\rangle_{D} \right|_{t_{c}}=\left.\langle U_{\vec{p},\sigma},A(t_{c},-\vec{p})V_{\vec{p}, \sigma}\rangle_{D}\right|_{t_{c}}=0\,, \tag{53}\]
at any instant \(t_{c}\). These operators have simple expectation values
\[\left.\langle\psi,A\psi\rangle_{D}\right|_{t_{c}}=\left.\langle U_{\vec{p}, \sigma},A(t_{c},\vec{p})U_{\vec{p},\sigma}\rangle_{D}\right|_{t_{c}}+\left. \langle V_{\vec{p},\sigma},A(t_{c},-\vec{p})V_{\vec{p},\sigma}\rangle_{D} \right|_{t_{c}} \tag{54}\]
that become conserved quantities when the Fourier transform \(A(t_{c},\vec{p})\) accomplishes the condition
\[\frac{d}{dt_{c}}\left.\langle\psi,A\psi\rangle_{D}\right|_{t_{c}}=0\;\;\; \Rightarrow\;\;\;i\partial_{t_{c}}A(t_{c},\vec{p})=[H_{c}(t_{c},\vec{p}),A(t _{c},\vec{p})]\, \tag{55}\]
allowing us to identify the conserved operators directly without resorting to Noether's theorem.
In this framework we may define the transformations of the spin symmetry (14) as Fourier operators constructed with the help of the spectral representations we proposed recently. We assume that the operators \(T_{\hat{r}}^{s}\) are Fourier operators whose kernels allow the spectral representation
\[\mathfrak{T}_{\hat{r}}^{s}(t_{c},\vec{x}_{c}-\vec{x}_{c}^{\;\prime}) = \int d^{3}p\,a(t_{c})^{3}\sum_{\sigma,\sigma^{\prime}}\Big{[}U_{ \vec{p},\xi_{\sigma}}(t_{c},\vec{x}_{c})D_{\sigma\sigma^{\prime}}(\hat{r},\vec {p})U^{+}_{\vec{p},\xi_{\sigma^{\prime}}}(t_{c},\vec{x}_{c}^{\;\prime}) \tag{56}\] \[\qquad\qquad\qquad+V_{\vec{p},\eta_{\sigma}}(t_{c},\vec{x}_{c})D ^{*}_{\sigma\sigma^{\prime}}(\hat{r},\vec{p})V^{+}_{\vec{p},\eta_{\sigma^{ \prime}}}(t_{c},\vec{x}_{c}^{\;\prime})\Big{]}\]
where
\[D_{\sigma\sigma^{\prime}}(\hat{r},\vec{p})=\xi_{\sigma}^{+}(\vec{p})\hat{r}\xi _{\sigma^{\prime}}(\vec{p}) \tag{57}\]
are the usual matrix elements of the fundamental representation of the little group \(SU(2)\) in the basis of polarization spinors \(\{\xi\}\). The rotations \(\hat{r}\in SU(2)\) transform this basis and implicitly the mode spinors as
\[\hat{r}\,\xi_{\sigma}(\vec{p}) = \sum_{\sigma^{\prime}}\xi_{\sigma^{\prime}}(\vec{p})D_{\sigma^{ \prime}\sigma}(\hat{r},\vec{p})\Rightarrow U_{\vec{p},\hat{r}\xi_{\sigma}}(x_{ c})=\sum_{\sigma^{\prime}}U_{\vec{p},\xi_{\sigma^{\prime}}}(x_{c})D_{\sigma^{ \prime}\sigma}(\hat{r},\vec{p})\,, \tag{58}\] \[\hat{r}\,\eta_{\sigma}(\vec{p}) = \sum_{\sigma^{\prime}}\eta_{\sigma^{\prime}}(\vec{p})D^{*}_{\sigma ^{\prime}\sigma}(\hat{r},\vec{p})\Rightarrow V_{\vec{p},\hat{r}\eta_{\sigma}}(x_{ c})=\sum_{\sigma^{\prime}}V_{\vec{p},\eta_{\sigma^{\prime}}}(x_{c})D^{*}_{ \sigma^{\prime}\sigma}(\hat{r},\vec{p})\,. \tag{59}\]
Therefore, we obtain the desired action of the operators of the spin symmetry
\[[T_{\hat{r}}^{s}\psi_{\xi}](t_{c},\vec{x}_{c})=\int d^{3}x_{c}^{\prime}\mathfrak {T}_{\hat{r}}^{s}(t_{c},\vec{x}_{c}-\vec{x}_{c}^{\;\prime})\psi_{\xi}(t_{c}, \vec{x}_{c}^{\;\prime})=\psi_{\hat{r}\xi}(t_{c},\vec{x}_{c})\,, \tag{60}\]
in accordance with the definition (14). The next step is to substitute the mode spinors (27) and (28) in the integral (56) for deriving the Fourier transforms
\[T_{\hat{r}}^{s}(t_{c},\vec{p})={\cal U}_{p}(t_{c})d(\vec{p})\,r\,\pi_{+}d(\vec{ p})\,{\cal U}_{p}^{*}(t_{c})+{\cal V}_{p}(t_{c})d(-\vec{p})\,r\,\pi_{-}d(- \vec{p})\,{\cal V}_{p}^{*}(t_{c})\,, \tag{61}\]
of the operators \(T^{s}_{\hat{r}}\) in terms of projection operators, (32) and (33), and rotations (A.5).
Hereby it results that the components of the spin operator (15) are Fourier operators whose Fourier transforms read
\[S_{i}(t_{c},\vec{p})={\cal U}_{p}(t_{c})d(\vec{p})\,s_{i}\,\pi_{+}d(\vec{p}){ \cal U}^{*}_{p}(t_{c})+{\cal V}_{p}(t_{c})d(-\vec{p})\,s_{i}\,\pi_{-}d(-\vec{p} ){\cal V}^{*}_{p}(t_{c})\,, \tag{62}\]
where \(s_{i}\) are the diagonal matrices (A.7) which commute with \(\pi_{\pm}\). Furthermore, taking into account that the time modulation functions satisfy the conditions (37) and (38), we obtain the definitive form of these Fourier transforms as
\[S_{i}(t_{c},\vec{p}) = \left[|u^{+}(t_{c},p)|^{2}-|u^{-}(t_{c},p)|^{2}\right]s_{i}+\frac{ 2|u^{-}(t_{c},p)|^{2}}{p^{2}}p^{i}\vec{p}\cdot\vec{s} \tag{63}\] \[+ \frac{i}{p}\left[u^{+}(t_{c},p)u^{-}(t_{c},p)^{*}\pi_{+}+u^{+}(t _{c},p)^{*}u^{-}(t_{c},p)\pi_{-}\right]\epsilon_{ijk}p^{j}\gamma^{k}\,.\]
If, in addition, we set the rest frame vacuum imposing the conditions (41), then we obtain the correct limit in rest frames
\[\lim_{\vec{p}\to 0}S_{i}(t_{c},\vec{p})=s_{i}\,. \tag{64}\]
In the particular case of the Minkowski spacetime, after substituting the time modulation functions (A.11) in Eq. (63), we obtain the components of Pryce's spin operator [1],
\[S_{i}(\vec{p})_{\rm Pryce}=\frac{m}{E(p)}s_{i}+\frac{p^{i}(\vec{s}\cdot\vec{p} )}{E(p)(E(p)+m)}+\frac{i}{2E(p)}\epsilon_{ijk}p^{j}\gamma^{k}\,, \tag{65}\]
which are the Fourier transforms of the components of the conserved spin operator of Dirac's theory in special relativity.
The components of the spin operator \(\vec{S}\) act in CR through their Fourier transforms (63) according to the general rule (51). The action of these operators on the field \(\psi\) can be derived as in Eq. (51) applying the operators (62) on the mode spinors (27) and (28) and using the obvious identities
\[\pi_{+}d(\pm\vec{p}){\cal U}^{*}_{p}(t_{c}){\cal U}_{p}(t_{c})d( \pm\vec{p})\pi_{+} = \pi_{+}\,, \tag{66}\] \[\pi_{-}d(\pm\vec{p}){\cal V}^{*}_{p}(t_{c}){\cal V}_{p}(t_{c})d( \pm\vec{p})\pi_{-} = \pi_{-}\,, \tag{67}\]
resulted from the condition (38). We obtain thus
\[(S_{i}U_{\vec{p},\xi_{\sigma}})(x_{c}) = S_{i}(t_{c},\vec{p})U_{\vec{p},\xi_{\sigma}}(x_{c})=U_{\vec{p},\hat{s}_{i}\xi_{\sigma}}(x_{c})\,, \tag{68}\] \[(S_{i}V_{\vec{p},\eta_{\sigma}})(x_{c}) = S_{i}(t_{c},-\vec{p})V_{\vec{p},\eta_{\sigma}}(x_{c})=V_{\vec{p},\hat{s}_{i}\eta_{\sigma}}(x_{c})\,, \tag{69}\]
recovering the desired action (15). Moreover, we verify that the operators \(S_{i}\) are conserved as their Fourier transforms (63) satisfy the condition (55) in spite of their complicated dependence on time. In addition, these operators are translation invariant obeying \(\left[P^{j},S_{i}\right]=0\) and have the expected algebraic properties
\[\left[S_{i}(t_{c},\vec{p}),S_{j}(t_{c},\vec{p})\right]=i\epsilon_ {ijk}S_{k}(t_{c},\vec{p}) \Rightarrow \left[S_{i},S_{j}\right]=i\epsilon_{ijk}S_{k}\,, \tag{70}\] \[\vec{S}^{\,2}(t_{c},\vec{p})=\frac{3}{4}\cdot 1\in\rho_{D} \Rightarrow \vec{S}^{\,2}=\frac{3}{4}\cdot 1\in\rho_{D}\,, \tag{71}\]
showing that they generate a fundamental representation of the \(SU(2)\) group.
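These algebraic properties can be checked numerically in the Minkowski limit, where the Fourier transforms reduce to Pryce's expressions (65). The sketch below builds them for an arbitrary test momentum and mass and verifies Eqs. (70) and (71); the numerical values are arbitrary choices of the illustration.

```python
import numpy as np

s0, z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
gamma0 = np.block([[s0, z2], [z2, -s0]])
gammas = [np.block([[z2, s], [-s, z2]]) for s in sig]
spin = [np.block([[s, z2], [z2, s]]) / 2 for s in sig]      # reducible matrices s_i

def pryce_spin(p, m):
    """Fourier transforms (65) of the conserved spin components (Minkowski case)."""
    p = np.asarray(p, dtype=float)
    E = np.sqrt(m**2 + p @ p)
    sp = sum(c * s for c, s in zip(p, spin))                # s . p
    S = []
    for i in range(3):
        j, k = (i + 1) % 3, (i + 2) % 3
        eps_term = p[j] * gammas[k] - p[k] * gammas[j]      # eps_{ijk} p^j gamma^k
        S.append(m / E * spin[i] + p[i] * sp / (E * (E + m)) + 1j / (2 * E) * eps_term)
    return S

S = pryce_spin([0.4, -0.9, 1.3], m=0.8)                    # arbitrary test values
for i in range(3):
    j, k = (i + 1) % 3, (i + 2) % 3
    assert np.allclose(S[i] @ S[j] - S[j] @ S[i], 1j * S[k])        # Eq. (70)
assert np.allclose(sum(Si @ Si for Si in S), 0.75 * np.eye(4))      # Eq. (71)
print("su(2) algebra and S^2 = 3/4 verified for the Pryce spin operator")
```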
With the above elements we may define the operator of fermion polarization for any peculiar polarization given by a pair of related spinors \(\xi_{\sigma}(\vec{p})\) and \(\eta_{\sigma}(\vec{p})\) assumed to satisfy the eigenvalues problems
\[\hat{s}_{i}n^{i}(\vec{p})\xi_{\sigma}(\vec{p})=\sigma\,\xi_{\sigma}(\vec{p}) \Rightarrow\hat{s}_{i}n^{i}(\vec{p})\eta_{\sigma}(\vec{p})=-\sigma\,\eta_{ \sigma}(\vec{p}), \tag{72}\]
where the unit vector \(\vec{n}(\vec{p})\) gives the peculiar direction with respect to which the peculiar polarization is measured. We define the polarization operator as the Fourier operator \(W_{s}\) having the Fourier transform
\[W_{s}(t_{c},\vec{p})={\cal U}_{p}(t_{c})d(\vec{p})\,w(\vec{p})\,\pi_{+}d(\vec{p }){\cal U}_{p}^{*}(t_{c})+{\cal V}_{p}(t_{c})d(-\vec{p})\,w(-\vec{p})\,\pi_{-} d(-\vec{p}){\cal V}_{p}^{*}(t_{c})\, \tag{73}\]
where \(w(\vec{p})=s_{i}n^{i}(\vec{p})\). This operator is conserved and translation invariant acting as
\[(W_{s}U_{\vec{p},\xi_{\sigma}(\vec{p})})(x)=W_{s}(\vec{p})U_{\vec {p},\xi_{\sigma}(\vec{p})}(x) = U_{\vec{p},w(\vec{p})\xi_{\sigma}(\vec{p})}(x)=\sigma U_{\vec{p},\xi_{\sigma}(\vec{p})}(x)\,, \tag{74}\] \[(W_{s}V_{\vec{p},\eta_{\sigma}(\vec{p})})(x)=W_{s}(-\vec{p})V_{ \vec{p},\eta_{\sigma}(\vec{p})}(x) = V_{\vec{p},w(\vec{p})\eta_{\sigma}(\vec{p})}(x)=-\sigma V_{\vec{p},\eta_{\sigma}(\vec{p})}(x)\,. \tag{75}\]
on the mode spinors depending on the polarization spinors satisfying Eqs. (72).
The above calculations in active mode are tedious and less intuitive because of the complicated Fourier transforms of the principal operators. This explains why the Pryce spin operator, proposed a long time ago, was ignored for more than seven decades. The solution we proposed recently is to consider the passive mode in which we focus on the operators acting on the wave spinors (23).
## 5 Operators in passive mode
In the passive mode we relate the operators acting on the free field (18) in CR to corresponding operators acting on the wave spinors (23), which we call the _associated_ operators. We associate thus to each operator \(A:{\cal F}\to{\cal F}\) the pair of operators, \(\tilde{A}:\tilde{\cal F}^{+}\to\tilde{\cal F}\) and \(\tilde{A}^{c}:\tilde{\cal F}^{-}\to\tilde{\cal F}\), obeying
\[(A\psi)(x)=\int d^{3}p\sum_{\sigma}\Big{[}U_{\vec{p},\sigma}(x)(\tilde{A} \alpha)_{\sigma}(\vec{p})+V_{\vec{p},\sigma}(x)(\tilde{A}^{c}\beta)_{\sigma}^{ *}(\vec{p})\Big{]}\, \tag{76}\]
such that the brackets of \(A\) for two different fields, \(\psi\) and \(\psi^{\prime}\), can be calculated as
\[\langle\psi,A\psi^{\prime}\rangle_{D}=\langle\alpha,\tilde{A}\alpha^{\prime} \rangle+\langle\beta,\tilde{A}^{c\,+}\beta^{\prime}\rangle\,. \tag{77}\]
Hereby we deduce that if \(A=A^{+}\) is Hermitian with respect to the Dirac scalar product (3) then the associated operators are Hermitian with respect to the scalar product (25), \(\tilde{A}=\tilde{A}^{+}\) and \(\tilde{A}^{c}=\tilde{A}^{c\,+}\). For simplicity we denote the Hermitian conjugation of the operators acting on \({\cal F}\) and \(\tilde{\cal F}\) with the same symbol but bearing in mind that the scalar products of these spaces are different. Note that, the operators \(A\in E[t_{c}]\) and their associated operators, \((\tilde{A},\tilde{A}^{c})\), may
depend on time such that we must be careful considering the entire algebra we manipulate as frozen at a fixed time \(t_{c}\).
The new operators \(\tilde{A}\) and \(\tilde{A}^{c}\) are well-defined as their action can be derived by applying the inversion formulas (24) at any given instant \(t_{c}\). We find thus that \(\tilde{A}\) and \(\tilde{A}^{c}\) are integral operators that may depend on time acting as
\[\left.(\tilde{A}\alpha)_{\sigma}(\vec{p})\right|_{t_{c}} = \int d^{3}p^{\prime}\sum_{\sigma^{\prime}}\left.\langle U_{\vec{p},\sigma},AU_{\vec{p}^{\,\prime},\sigma^{\prime}}\rangle_{D}\right|_{t_{c}}\alpha_{\sigma^{\prime}}(\vec{p}^{\,\prime})+\int d^{3}p^{\prime}\sum_{\sigma^{\prime}}\left.\langle U_{\vec{p},\sigma},AV_{\vec{p}^{\,\prime},\sigma^{\prime}}\rangle_{D}\right|_{t_{c}}\beta_{\sigma^{\prime}}^{*}(\vec{p}^{\,\prime})\,, \tag{78}\] \[\left.(\tilde{A}^{c}\beta)_{\sigma}(\vec{p})\right|_{t_{c}} = \int d^{3}p^{\prime}\sum_{\sigma^{\prime}}\left.\langle U_{\vec{p}^{\,\prime},\sigma^{\prime}},AV_{\vec{p},\sigma}\rangle_{D}\right|_{t_{c}}\alpha_{\sigma^{\prime}}^{*}(\vec{p}^{\,\prime})+\int d^{3}p^{\prime}\sum_{\sigma^{\prime}}\left.\langle V_{\vec{p}^{\,\prime},\sigma^{\prime}},AV_{\vec{p},\sigma}\rangle_{D}\right|_{t_{c}}\beta_{\sigma^{\prime}}(\vec{p}^{\,\prime})\,, \tag{79}\]
through kernels which are the matrix elements of the operator \(A\) in the basis of mode spinors.
In what follows we restrict ourselves to reducible operators which do not mix the particle and antiparticle subspaces, complying with the condition (53) at any time. In the particular case of reducible Fourier operators, \(A\in F[t_{c}]\), having time-dependent Fourier transforms \(A(t_{c},\vec{p})\), the non-vanishing matrix elements can be calculated more easily as
\[\left.\langle U_{\vec{p},\sigma},AU_{\vec{p}^{\,\prime},\sigma^{ \prime}}\rangle_{D}\right|_{t_{c}}=\left.\langle U_{\vec{p},\sigma},A(t_{c}, \vec{p}^{\,\prime})U_{\vec{p}^{\,\prime},\sigma^{\prime}}\rangle_{D}\right|_{ t_{c}}\] \[\qquad=\delta^{3}(\vec{p}-\vec{p}^{\,\prime})\hat{u}_{\sigma}^{+} (\vec{p})\pi_{+}d(\vec{p}){\cal U}_{p}^{*}(t_{c})A(t_{c},\vec{p}){\cal U}_{p}(t _{c})d(\vec{p})\pi_{+}\,\hat{u}_{\sigma^{\prime}}(\vec{p})\,, \tag{80}\] \[\left.\langle V_{\vec{p},\sigma},AV_{\vec{p}^{\,\prime},\sigma^{ \prime}}\rangle_{D}\right|_{t_{c}}=\left.\langle V_{\vec{p},\sigma},A(t_{c},- \vec{p}^{\,\prime})V_{\vec{p}^{\,\prime},\sigma^{\prime}}\rangle_{D}\right|_{ t_{c}}\] \[\qquad=\delta^{3}(\vec{p}-\vec{p}^{\,\prime})\hat{v}_{\sigma}^{+} (\vec{p})\pi_{-}d(\vec{p}){\cal V}_{p}^{*}(t_{c})A(t_{c},-\vec{p}){\cal V}_{p} (t_{c})d(\vec{p})\pi_{-}\,\hat{v}_{\sigma^{\prime}}(\vec{p})\,, \tag{81}\]
observing that in this case the associated operators are simple \(2\times 2\) matrix-operators acting separately on the spaces \(\tilde{\cal F}^{+}\) and \(\tilde{\cal F}^{-}\). We observe that when \(A(t_{c},\vec{p})\) commutes with \({\cal U}_{p}(t_{c})\), \({\cal V}_{p}(t_{c})\), \(d(\vec{p})\) and \(\pi_{\pm}\) then we must use directly the identities (66) and (67). Applying first this rule to the components of our conserved spin operator we obtain the associated operators
\[S_{i} \Rightarrow \tilde{S}_{i}=-\tilde{S}_{i}^{c}=\frac{1}{2}\Sigma_{i}(\vec{p})\,, \tag{82}\]
where the \(2\times 2\) matrices \(\Sigma_{i}(\vec{p})\) have the matrix elements
\[\Sigma_{i\,\sigma\sigma^{\prime}}(\vec{p})=2\hat{u}_{\sigma}^{+}(\vec{p})s_{i} \hat{u}_{\sigma^{\prime}}(\vec{p})=\xi_{\sigma}^{+}(\vec{p})\sigma_{i}\,\xi_{ \sigma^{\prime}}(\vec{p})\,, \tag{83}\]
depending on the polarization spinors and having the same algebraic properties as the Pauli matrices. A similar procedure gives the simple associated operators of the polarization operator (73),
\[W_{s}\ \Rightarrow \tilde{W}_{s}=-\tilde{W}_{s}^{c}=\frac{1}{2}\sigma_{3}\,, \tag{84}\]
according to the definition (72) of the polarization spinors.
It is interesting now to see how the spin operator is related to the generators of the \(E(3)\) isometries of the spacetimes \(M(a)\). As the covariant representation \(T\) defined by Eq. (9) is reducible this may be associated to a pair of representations whose operators \(\tilde{T}\in\mbox{Aut}(\tilde{\cal F}^{+})\) and \(\tilde{T}^{c}\in\mbox{Aut}(\tilde{\cal F}^{-})\) are related to \(T\) as,
\[(T_{r,\vec{a}}\,\psi)(x_{c})\] \[\qquad\qquad=\int d^{3}p\sum_{\sigma}\left[U_{\vec{p},\sigma}(x_{ c})(\tilde{T}_{r,\vec{a}}\,\alpha)_{\sigma}(\vec{p})+V_{\vec{p},\sigma}(x_{c})( \tilde{T}^{c}_{r,\vec{a}}\,\beta)^{*}_{\sigma}(\vec{p})\right]\,. \tag{85}\]
In other respects, by using the identity \((R\vec{x})\cdot\vec{p}=\vec{x}\cdot(R^{-1}\vec{p})\) we expand Eq. (9) changing the integration variable as
\[(T_{r,\vec{a}}\psi)(t_{c},\vec{x}_{c})=r\psi\left(t_{c},R(\hat{r })^{-1}(\vec{x}_{c}-\vec{a})\right)\] \[\qquad\qquad=\int d^{3}p\sum_{\sigma}\left[rU^{\prime}_{\vec{p}, \sigma}(x_{c})\alpha_{\sigma}(\vec{p}\,^{\prime})e^{i\vec{a}\cdot\vec{p}}+rV ^{\prime}_{\vec{p},\sigma}(x_{c})\beta^{*}_{\sigma}(\vec{p}\,^{\prime})e^{- i\vec{a}\cdot\vec{p}}\right]\,, \tag{86}\]
where the new mode spinors
\[U^{\prime}_{\vec{p},\sigma}(x_{c}) = [2\pi a(t_{c})]^{-\frac{3}{2}}e^{i\vec{p}\cdot\vec{x}_{c}}{\cal U }_{p}(t_{c})d(\vec{p}\,^{\prime})u_{\sigma}(\vec{p}\,^{\prime})\,, \tag{87}\] \[V^{\prime}_{\vec{p},\sigma}(x_{c}) = [2\pi a(t_{c})]^{-\frac{3}{2}}e^{-i\vec{p}\cdot\vec{x}_{c}}{\cal V }_{p}(t_{c})d(\vec{p}\,^{\prime})v_{\sigma}(\vec{p}\,^{\prime})\,, \tag{88}\]
depend on the transformed momentum
\[\vec{p}\,^{\prime}=R(\hat{r})^{-1}\vec{p}\ \ \Rightarrow\ \ |\vec{p}\,^{\prime}|=|\vec{p}|\,. \tag{89}\]
Observing then that, according to Eq. (A.4), we have \(rd(\vec{p}\,^{\prime})=d(\vec{p})r\) we deduce that \(\tilde{T}_{r,\vec{a}}\simeq\tilde{T}^{c}_{r,\vec{a}}\) acting alike on the subspaces \(\tilde{\cal F}^{+}\) and \(\tilde{\cal F}^{-}\) as,
\[(\tilde{T}_{r,\vec{a}}\,\alpha)_{\sigma}(\vec{p})=e^{i\vec{a}\cdot\vec{p}}\sum _{\sigma^{\prime}}D_{\sigma\sigma^{\prime}}(\hat{r},\vec{p})\alpha_{\sigma^{ \prime}}(\vec{p}\,^{\prime})\,, \tag{90}\]
and similarly for \(\beta\), where \(D(\hat{r},\vec{p})\) is just the matrix (57).
The representation \(\tilde{T}_{r,\vec{a}}\) is unitary with respect to the scalar product (25), \(\langle\tilde{T}_{\lambda,a}\alpha,\tilde{T}_{\lambda,a}\alpha^{\prime} \rangle=\langle\alpha,\alpha^{\prime}\rangle\). As the covariant representations are unitary with respect to the scalar product (8) which can be decomposed as in Eq. (26) we conclude that the expansion (18) establishes the unitary equivalence, \(T=\tilde{T}\oplus\tilde{T}\). Consequently, the self-adjoint generators \(\tilde{X}\in\mbox{Lie}(\tilde{T})\) defined as
\[\tilde{P}^{i}=-\left.i\frac{\partial\tilde{T}_{1,\vec{a}}}{\partial a^{i}} \right|_{\vec{a}=0}\,,\quad\tilde{J}_{i}\,=\left.i\frac{\partial\tilde{T}_{r (\theta),0}}{\partial\theta^{i}}\right|_{\theta=0}\,, \tag{91}\]
are related to the corresponding ones, \(X\in\mbox{Lie}(T)\), such that
\[(X\psi)(x_{c}) = \int d^{3}p\sum_{\sigma}\left[U_{\vec{p},\sigma}(x_{c})(\tilde{X}\,\alpha)_{\sigma}(\vec{p})-V_{\vec{p},\sigma}(x_{c})(\tilde{X}\,\beta)^{*}_{\sigma}(\vec{p})\right]\,, \tag{92}\]
as we deduce by differentiating Eq. (85) with respect to the corresponding group parameter \(\zeta\in(\theta^{i},a^{i})\) at \(\zeta=0\). We find thus that the isometry generators are reducible on \(\tilde{\cal F}\), obeying \(\tilde{X}^{c}=-\tilde{X}\) as a consequence of the fact that \(\tilde{T}^{c}\simeq\tilde{T}\).
The associated Abelian generators are trivial being diagonal in momentum basis,
\[P^{i}\;\;\;\Rightarrow\;\;\tilde{P}^{i}=-\tilde{P}^{c\,i}=p^{i}\,. \tag{93}\]
For rotations we use the Cayley-Klein parameters as in Eq. (A.8) recovering the natural splitting (13),
\[J_{i}=L_{i}+S_{i}\;\;\;\;\;\Rightarrow\;\;\;\;\;\tilde{J}_{i}=-\tilde{J}_{i}^{ c}=\tilde{L}_{i}+\tilde{S}_{i}\,, \tag{94}\]
laying out the components of the spin operator (82) and intuitive components of the orbital angular momentum operator,
\[L_{i}\;\;\;\Rightarrow\;\;\;\tilde{L}_{i}=-\tilde{L}_{i}^{c}=-i\epsilon_{ijk} p^{j}\tilde{\partial}_{k}\,, \tag{95}\]
where \(\tilde{\partial}_{k}\) are the _covariant_ derivatives [2],
\[\tilde{\partial}_{i}=\partial_{p^{i}}1_{2\times 2}+\Omega_{i}(\vec{p})\,, \tag{96}\]
defined such that \(\tilde{\partial}_{i}[\xi_{\sigma}(\vec{p})\alpha_{\sigma}(\vec{p})]=\xi_{\sigma}(\vec{p})\tilde{\partial}_{i}\alpha_{\sigma}(\vec{p})\). Therefore, the connections,
\[\Omega_{i\,\sigma\sigma^{\prime}}(\vec{p})=\xi_{\sigma}^{+}(\vec{p})\left[ \partial_{p^{i}}\xi_{\sigma^{\prime}}(\vec{p})\right]=\left\{\eta_{\sigma}^{+ }(\vec{p})\left[\partial_{p^{i}}\eta_{\sigma^{\prime}}(\vec{p})\right]\right\} ^{*}\,, \tag{97}\]
are anti-Hermitian, \(\Omega_{i\,\sigma\sigma^{\prime}}(\vec{p})=-\Omega_{i\,\sigma^{\prime}\sigma}^{*}(\vec{p})\), such that the operators \(i\tilde{\partial}_{i}\) are Hermitian. The principal property of the covariant derivatives is to commute with the spin components, \([\tilde{\partial}_{i},\tilde{S}_{j}]=0\), thanks to the connections \(\Omega_{i}(\vec{p})\). This becomes trivial in the case of common polarization, when \(\Omega_{i}=0\) and \(\tilde{S}_{i}\) are independent of \(\vec{p}\). Note that we proposed these derivatives for the first time in Dirac's theory in Minkowski spacetime [2].
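For the helicity basis, a typical peculiar polarization with \(\vec{n}(\vec{p})=\vec{p}/p\), the sketch below constructs \(\xi_{\sigma}(\vec{p})\) explicitly (with an assumed phase convention), checks the eigenvalue problem (72), verifies that the matrices \(\Sigma_{i}(\vec{p})\) of Eq. (83) obey the Pauli algebra, and evaluates the connections (97) by central finite differences, confirming their anti-Hermiticity; the test momentum is an arbitrary choice.

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def xi(p):
    """Helicity spinors as columns of a 2x2 unitary matrix; the phases are a choice."""
    p = np.asarray(p, dtype=float)
    pn = np.linalg.norm(p)
    th, ph = np.arccos(p[2] / pn), np.arctan2(p[1], p[0])
    x_plus = np.array([np.cos(th / 2), np.exp(1j * ph) * np.sin(th / 2)])
    x_minus = np.array([-np.exp(-1j * ph) * np.sin(th / 2), np.cos(th / 2)])
    return np.stack([x_plus, x_minus], axis=1)

p = np.array([0.6, -0.4, 1.1])                       # arbitrary test momentum
X = xi(p)
sp_hat = sum(c * s for c, s in zip(p / np.linalg.norm(p), sig))

# Eigenvalue problem (72) with n(p) = p/|p|: (sigma.p_hat)/2 has eigenvalues +-1/2.
assert np.allclose(sp_hat @ X, X @ np.diag([1.0, -1.0]))

# Matrices Sigma_i(p) of Eq. (83) obey the Pauli algebra.
Sigma = [X.conj().T @ s @ X for s in sig]
for i in range(3):
    j, k = (i + 1) % 3, (i + 2) % 3
    assert np.allclose(Sigma[i] @ Sigma[j], 1j * Sigma[k])
    assert np.allclose(Sigma[i] @ Sigma[i], np.eye(2))

# Connections (97) by central differences: anti-Hermitian up to O(h^2).
h = 1e-5
for i in range(3):
    dp = np.zeros(3)
    dp[i] = h
    Omega_i = X.conj().T @ (xi(p + dp) - xi(p - dp)) / (2 * h)
    assert np.allclose(Omega_i, -Omega_i.conj().T, atol=1e-6)
print("Helicity polarization: Eqs. (72), (83) and anti-Hermitian connections (97) verified")
```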
The sets of conserved operators \(\{\tilde{L}_{1},\tilde{L}_{2},\tilde{L}_{3}\}\) and \(\{\tilde{S}_{1},\tilde{S}_{2},\tilde{S}_{3}\}\) generate the representations \(\tilde{T}^{o}\) and \(\tilde{T}^{s}\) of the associated factorization
\[T^{r}=T^{o}\otimes T^{s}\;\;\;\Rightarrow\;\;\;\tilde{T}^{r}=\tilde{T}^{o} \otimes\tilde{T}^{s}\,, \tag{98}\]
of the \(SU(2)\) restriction \(\tilde{T}^{r}\equiv\left.\tilde{T}\right|_{SU(2)}\) of the representation \(\tilde{T}\). Thus we have found the generators of the associated orbital representation studying the isometry generators without resorting to new spectral representations. In this manner we cannot come back to the active mode in CR but we have all we need for performing the quantization.
## 6 Quantization
The association between the operators acting in CR and MR allows us to derive at any time the expectation values of operators defined in MR according to the general rule (77). We may apply thus the Bogolyubov method [9] for quantizing
the isometry generators of the massive Dirac fermions of arbitrary polarization. According to this method, we replace first the wave spinors in MR with field operators, \((\alpha,\alpha^{*})\to(\mathfrak{a},\mathfrak{a}^{\dagger})\) and \((\beta,\beta^{*})\to(\mathfrak{b},\mathfrak{b}^{\dagger})\), satisfying canonical anticommutation relations, among which the non-vanishing ones are
\[\left\{\mathfrak{a}_{\sigma}(\vec{p}),\mathfrak{a}^{\dagger}_{\sigma^{\prime}}( \vec{p}^{\,\prime})\right\}=\left\{\mathfrak{b}_{\sigma}(\vec{p}),\mathfrak{b }^{\dagger}_{\sigma^{\prime}}(\vec{p}^{\,\prime})\right\}=\delta_{\sigma\sigma^ {\prime}}\delta^{3}(\vec{p}-\vec{p}^{\,\prime})\,. \tag{99}\]
The Dirac free field becomes thus the field operator
\[\psi(x)=\int d^{3}p\sum_{\sigma}\left[U_{\vec{p},\sigma}(x)\mathfrak{a}_{ \sigma}(\vec{p})+V_{\vec{p},\sigma}(x)\mathfrak{b}^{\dagger}_{\sigma}(\vec{p}) \right]\,, \tag{100}\]
denoted by the same symbol but now acting on the Fock state space, equipped with the scalar product \(\langle\ \mid\ \rangle\) and a normalized vacuum state \(|0\rangle\) satisfying
\[\mathfrak{a}_{\sigma}(\vec{p})|0\rangle=\mathfrak{b}_{\sigma}(\vec{p})|0 \rangle=0\,,\quad\langle 0|\mathfrak{a}^{\dagger}_{\sigma}(\vec{p})=\langle 0| \mathfrak{b}^{\dagger}_{\sigma}(\vec{p})=0\,. \tag{101}\]
The sectors with different numbers of particles have to be constructed by applying the standard method, which yields generalized momentum bases of various polarizations.
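For instance, the simplest such states are the one-particle ones, built directly with the creation operators,
\[\mathfrak{a}^{\dagger}_{\sigma}(\vec{p})|0\rangle\,,\qquad\mathfrak{b}^{\dagger}_{\sigma}(\vec{p})|0\rangle\,,\]
whose scalar products, \(\langle 0|\mathfrak{a}_{\sigma}(\vec{p})\,\mathfrak{a}^{\dagger}_{\sigma^{\prime}}(\vec{p}^{\,\prime})|0\rangle=\delta_{\sigma\sigma^{\prime}}\delta^{3}(\vec{p}-\vec{p}^{\,\prime})\), follow directly from Eqs. (99) and (101).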
Through quantization the expectation value of any time-dependent operator \(A(t)\) of RQM becomes a one-particle operator
\[A(t_{c})\ \to\ \mathsf{A}=\,:\langle\psi,A(t_{c})\psi\rangle_{D}: \mid_{t_{c}=t_{c\,0}}\,, \tag{102}\]
calculated respecting the normal ordering of the operator products [13, 14] at the initial time \(t_{c\,0}\). This procedure allows us to write down any one-particle operator \(\mathsf{A}\) directly in terms of operators \((\tilde{A},\tilde{A}^{c})\) associated to the operator \(A=A(t_{c})|_{t_{c\,0}}\). We consider here only the reducible operators for which we obtain the general formula
\[\mathsf{A}=\int d^{3}p\left[\mathfrak{a}^{\dagger}(\vec{p})(\tilde{A} \mathfrak{a})(\vec{p})-\mathfrak{b}^{\dagger}(\vec{p})(\tilde{A}^{c\,+} \mathfrak{b})(\vec{p})\right]\,, \tag{103}\]
written with the compact notation
\[\mathfrak{a}^{\dagger}(\vec{p})(\tilde{A}\mathfrak{a})(\vec{p})\equiv\sum_{ \sigma}\mathfrak{a}^{\dagger}_{\sigma}(\vec{p})(\tilde{A}\mathfrak{a})_{\sigma }(\vec{p})\,, \tag{104}\]
and similarly for the second term. We specify that the bracket (102) is calculated according to Eq. (77) in which the last term changes its sign after introducing the normal ordering of the operator products.
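Concretely, the sign change is just the fermionic normal-ordering prescription applied to the antiparticle bilinears, for instance
\[:\mathfrak{b}_{\sigma}(\vec{p})\,\mathfrak{b}^{\dagger}_{\sigma^{\prime}}(\vec{p}^{\,\prime}):\,=\,-\,\mathfrak{b}^{\dagger}_{\sigma^{\prime}}(\vec{p}^{\,\prime})\,\mathfrak{b}_{\sigma}(\vec{p})\,,\]
which at the same time discards the divergent vacuum contribution proportional to \(\delta^{3}(0)\).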
Given an arbitrary operator \(A\in\mathrm{Aut}(\mathcal{F})\) and its Hermitian conjugated \(A^{+}\) we define the adjoint operator of \(\mathsf{A}\),
\[A^{+}(t_{c})\ \to\ \mathsf{A}^{\dagger}=\,:\langle\psi,A(t_{c})^{+}\psi\rangle_{D}:\mid_{t_{c\,0}}=\,:\langle A(t_{c})\psi,\psi\rangle_{D}:\mid_{t_{c\,0}}\,, \tag{105}\]
complying with the standard definition \(\langle\alpha|\mathsf{A}^{\dagger}\beta\rangle=\langle\mathsf{A}\alpha|\beta\rangle\) on the Fock space. In what follows we meet only self-adjoint one-particle operators as all their
corresponding operators of RQM are reducible and Hermitian with respect to the scalar products of the spaces in which they act. We obtain thus an operator algebra formed by fields and self-adjoint one-particle operators which have the obvious properties
\[[\mathsf{A},\psi(x)]=-(A\psi)(x)\,,\qquad[\mathsf{A},\mathsf{B}]=:\left\langle \psi,[A,B]\psi\right\rangle_{D}:\,, \tag{106}\]
preserving the structure of the Lie algebras but without inheriting other algebraic properties of their parent operators from RQM, as the product of two one-particle operators is no longer an operator of the same type. Therefore, we must restrict ourselves to the Lie algebras of symmetry generators and to unitary transformations whose actions reduce to sums of successive commutations.
The simplest one-particle operator is the electric charge
\[\mathsf{Q}=:\left\langle\psi,\psi\right\rangle_{D}:=\int d^{3}p\,\left[ \mathsf{a}^{\dagger}(\vec{p})\mathsf{a}(\vec{p})-\mathsf{b}^{\dagger}(\vec{p} )\mathsf{b}(\vec{p})\right]\,, \tag{107}\]
related to the internal gauge symmetry \(U(1)_{\rm em}\)[13]. Other diagonal operators in momentum basis are the momentum components
\[\mathsf{P}^{i} = :\left\langle\psi,P^{i}\psi\right\rangle_{D}:=\int d^{3}p\,p^{i} \left[\mathsf{a}^{\dagger}(\vec{p})\mathsf{a}(\vec{p})+\mathsf{b}^{\dagger}( \vec{p})\mathsf{b}(\vec{p})\right]\,, \tag{108}\]
as well as our new operator of fermion polarization,
\[\mathsf{W}_{s}=:\left\langle\psi,W_{s}\psi\right\rangle_{D}:=\frac{1}{2}\int d ^{3}p\left[\mathsf{a}^{\dagger}(\vec{p})\sigma_{3}\mathsf{a}(\vec{p})+\mathsf{ b}^{\dagger}(\vec{p})\sigma_{3}\mathsf{b}(\vec{p})\right]\,, \tag{109}\]
which enters the incomplete set of commuting operators \(\{\mathsf{P}^{1},\mathsf{P}^{2},\mathsf{P}^{3},\mathsf{W}_{s},\mathsf{Q}\}\) determining the momentum bases of the Fock state space up to an integration constant that has to be fixed by setting the vacuum, as in the case of our rest-frame vacuum defined by Eqs. (41). In Minkowski spacetime we do not have this inconvenience, as there exists a conserved energy operator that completes this set, commuting with all the other operators. In contrast, in the de Sitter expanding universe we have a conserved energy operator, but this does not commute with the momentum components. Thus the problem of setting the vacuum arises in all the FLRW spacetimes apart from the Minkowski one.
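As a quick check, using only the anticommutators (99) and the vacuum conditions (101), the one-particle states are eigenstates of these diagonal operators,
\[\mathsf{Q}\,\mathfrak{a}^{\dagger}_{\sigma}(\vec{p})|0\rangle=\mathfrak{a}^{\dagger}_{\sigma}(\vec{p})|0\rangle\,,\qquad\mathsf{Q}\,\mathfrak{b}^{\dagger}_{\sigma}(\vec{p})|0\rangle=-\,\mathfrak{b}^{\dagger}_{\sigma}(\vec{p})|0\rangle\,,\qquad\mathsf{P}^{i}\,\mathfrak{a}^{\dagger}_{\sigma}(\vec{p})|0\rangle=p^{i}\,\mathfrak{a}^{\dagger}_{\sigma}(\vec{p})|0\rangle\,,\]
so that particles and antiparticles carry opposite charges but momenta of the same sign.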
Furthermore, applying the general rule (103) to the rotation generators we find first the splitting of the total angular momentum
\[\mathsf{J}_{i}=:\left\langle\psi,J_{i}\psi\right\rangle_{D}:=\mathsf{L}_{i}+ \mathsf{S}_{i}\,, \tag{110}\]
where the components of the orbital angular momentum, \(\mathsf{L}_{i}\), and spin operator, \(\mathsf{S}_{i}\), can be written as
\[\mathsf{L}_{i} = -\frac{i}{2}\int d^{3}p\,\epsilon_{ijk}p^{j}\left[\mathsf{a}^{\dagger}(\vec{p})\overset{\leftrightarrow}{\partial}_{k}\mathsf{a}(\vec{p})+\mathsf{b}^{\dagger}(\vec{p})\overset{\leftrightarrow}{\partial}_{k}\mathsf{b}(\vec{p})\right]\,, \tag{111}\]
\[\mathsf{S}_{i} = \frac{1}{2}\int d^{3}p\left[\mathsf{a}^{\dagger}(\vec{p})\Sigma_{i}(\vec{p})\mathsf{a}(\vec{p})+\mathsf{b}^{\dagger}(\vec{p})\Sigma_{i}(\vec{p})\mathsf{b}(\vec{p})\right]\,, \tag{112}\]
according to Eqs. (95) and (82), using the special notation
\[\alpha^{+}\stackrel{\leftrightarrow}{\partial}_{i}\beta=\alpha^{+}(\partial_{p^{i}}\beta)-(\partial_{p^{i}}\alpha^{+})\beta+2\alpha^{+}\Omega_{i}(\vec{p})\beta\,, \tag{113}\]
inspired by Green's theorem, which points out explicitly that the \({\sf L}_{i}\) are self-adjoint operators. The components \({\sf L}_{i}\) and \({\sf S}_{i}\) form the bases of two independent unitary representations of the \(su(2)\sim so(3)\) algebra, \([{\sf L}_{i},{\sf S}_{j}]=0\), generating the orbital and spin symmetries, respectively. Moreover, these operators are conserved, while the commutation relations
\[\left[{\sf L}_{i},{\sf P}^{j}\right]=i\epsilon_{ijk}{\sf P}^{k}\,,\qquad\left[ {\sf S}_{i},{\sf P}^{j}\right]=0\,, \tag{114}\]
show that only the spin operator is, in addition, invariant under space translations.
We have thus obtained all the one-particle operators of QFT coming from the conserved operators of RQM. All these operators have similar forms in any spacetime \(M(a)\), including the Minkowski one, being independent of the scale factor. However, this universality is limited, as the evolution of the time-dependent operators is determined by the time modulation functions satisfying the system (36). For example, if we apply the above procedures to the Hamiltonian (7) at an arbitrary instant \(t_{c}\) we obtain the time-dependent operator
\[{\sf H}_{c}(t_{c})=\,:\langle\psi,H_{c}(t_{c})\psi\rangle_{D}\,:\mid_{t_{c}}= \int d^{3}p\,\tilde{H}_{c}(t_{c},p)\left[{\sf a}^{\dagger}(\vec{p}){\sf a}( \vec{p})+{\sf b}^{\dagger}(\vec{p}){\sf b}(\vec{p})\right]\,, \tag{115}\]
where the quantity
\[\tilde{H}_{c}(t_{c},p) = m\,a(t_{c})\left(|u^{+}(t_{c},p)|^{2}-|u^{-}(t_{c},p)|^{2}\right)+p\left(u^{+}(t_{c},p)u^{-}(t_{c},p)^{*}+u^{+}(t_{c},p)^{*}u^{-}(t_{c},p)\right)-\frac{3i}{2}\frac{\dot{a}(t_{c})}{a(t_{c})}\,, \tag{116}\]
plays the role of an energy, reducing to the special relativistic energy \(E(p)\) in the flat limit, when \(a\to 1\), \(\dot{a}\to 0\) and the time modulation functions take the form (A.11). In other respects, the form of the orbital angular momentum (111) suggests that the operator of the initial position, related to the conserved spin as in Eq. (13), could have the same form as in the flat case [2, 3]
\[{\sf X}^{i}=\frac{i}{2}\int d^{3}p\left[{\sf a}^{\dagger}(\vec{p})\stackrel{{ \leftrightarrow}}{{\partial}}_{i}{\sf a}(\vec{p})-{\sf b}^{\dagger}(\vec{p}) \stackrel{{\leftrightarrow}}{{\partial}}_{i}{\sf b}(\vec{p}) \right]\,. \tag{117}\]
For studying the time evolution of this operator we need to calculate commutators such as \([{\sf H}_{c}(t_{c}),{\sf X}^{i}]\), which depend on the derivatives of the time modulation functions, \(\partial_{p}u^{\pm}(t_{c},p)\). We may therefore conclude that the time evolution of the principal observables must be studied separately in each particular case.
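As a quick consistency check of the flat limit invoked above, inserting the Minkowski time modulation functions (A.11) into Eq. (116) with \(a=1\) and \(\dot{a}=0\) gives
\[m\left(|u^{+}_{M}|^{2}-|u^{-}_{M}|^{2}\right)+p\left(u^{+}_{M}u^{-\,*}_{M}+u^{+\,*}_{M}u^{-}_{M}\right)=m\,\frac{m}{E(p)}+p\,\frac{p}{E(p)}=E(p)\,,\]
recovering indeed the special relativistic energy.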
## Appendix A \(SL(2,\mathbb{C})\) transformations
The Dirac field \(\psi:M\to{\cal V}_{D}\) takes values in the space \({\cal V}_{D}={\cal V}_{P}\oplus{\cal V}_{P}\) of the Dirac representation \(\rho_{D}=(\frac{1}{2},0)\oplus(0,\frac{1}{2})\) of the \(SL(2,\mathbb{C})\) group where one defines the
Dirac \(\gamma\)-matrices with local indices and the Hermitian form \(\bar{\psi}\psi\) with the Dirac adjoint \(\overline{\psi}=\psi^{+}\gamma^{0}\) of \(\psi\). These matrices satisfy the anticommutation rules
\[\{\gamma^{\hat{\mu}},\gamma^{\hat{\nu}}\}=2\eta^{\hat{\mu}\hat{\nu}}\] (A.1)
giving rise to the \(SL(2,\mathbb{C})\) generators
\[s^{\hat{\mu}\hat{\nu}}=\frac{i}{4}\left[\gamma^{\hat{\mu}},\gamma^{\hat{\nu}} \right]=\overline{s^{\hat{\mu}\hat{\nu}}}\in\rho_{D}[sl(2,\mathbb{C})]\] (A.2)
which are Dirac self-adjoint such that the transformations
\[\lambda(\hat{\omega})=\exp\left(-\frac{i}{2}\hat{\omega}^{\hat{\alpha}\hat{ \beta}}s_{\hat{\alpha}\hat{\beta}}\right)\in\rho_{D}[SL(2,\mathbb{C})]\,,\] (A.3)
having real-valued parameters, \(\hat{\omega}^{\hat{\alpha}\hat{\beta}}=-\hat{\omega}^{\hat{\beta}\hat{\alpha}}\), leave the Hermitian form invariant as \(\overline{\lambda(\hat{\omega})}=\lambda^{-1}(\hat{\omega})=\lambda(-\hat{ \omega})\). The corresponding Lorentz transformations, \(\Lambda^{\hat{\mu}}_{\cdot\hat{\nu}}(\hat{\omega})\equiv\Lambda^{\hat{\mu}}_{ \cdot\hat{\nu}}[\lambda(\hat{\omega})]=\delta^{\hat{\mu}}_{\hat{\nu}}+\hat{ \omega}^{\hat{\mu}}_{\cdot\hat{\nu}}+\frac{1}{2}\,\hat{\omega}^{\hat{\mu}}_{ \cdot\hat{\alpha}}\hat{\omega}^{\hat{\alpha}}_{\cdot\hat{\nu}}+\cdots\) satisfy the identities
\[\lambda^{-1}(\hat{\omega})\gamma^{\hat{\alpha}}\lambda(\hat{\omega})=\Lambda (\hat{\omega})^{\hat{\alpha}}_{\cdot\hat{\beta}}\gamma^{\hat{\beta}}\,,\] (A.4)
which encapsulate the canonical homomorphism [8].
In the chiral representation of the Dirac matrices (with diagonal \(\gamma^{5}\)) the transformations \(\lambda(\hat{\omega})\) generated by the matrices \(s^{\mu\nu}\) are reducible to the subspaces of Pauli spinors \({\cal V}_{P}\) carrying the irreducible representations \((\frac{1}{2},0)\) and \((0,\frac{1}{2})\) of \(\rho_{D}\)[8, 7]. We denote by
\[r=\text{diag}(\hat{r},\hat{r})\in\rho_{D}\left[SU(2)\right]\] (A.5)
the transformations we call here simply rotations, and by
\[l=\text{diag}(\hat{l},\hat{l}^{-1})\in\rho_{D}\left[SL(2,\mathbb{C})/SU(2)\right]\] (A.6)
the Lorentz boosts. For rotations we use the generators
\[s_{i}=\frac{1}{2}\epsilon_{ijk}s^{jk}=\text{diag}(\hat{s}_{i},\hat{s}_{i})\,, \quad\hat{s}_{i}=\frac{1}{2}\sigma_{i}\,,\] (A.7)
and Cayley-Klein parameters \(\theta^{i}=\frac{1}{2}\epsilon_{ijk}\hat{\omega}^{jk}\) such that
\[r(\theta)=\text{diag}(\hat{r}(\theta),\hat{r}(\theta))\,,\quad\quad\hat{r}( \theta)=e^{-i\theta^{i}\hat{s}_{i}}=e^{-\frac{i}{2}\theta^{i}\sigma_{i}}\,,\] (A.8)
where \(\sigma_{i}\) are the Pauli matrices. Similarly, we chose the parameters \(\tau^{i}=\hat{\omega}^{0i}\) and the generators \(s_{0i}=-s^{0i}=\text{diag}(i\hat{s}_{i},-i\hat{s}_{i})\) for the Lorentz boosts, \(l(\tau)=\text{diag}(\hat{l}(\tau),\hat{l}^{-1}(\tau))\) where \(\hat{l}(\tau)=e^{\tau^{i}\hat{s}_{i}}=e^{\frac{1}{2}\tau^{i}\sigma_{i}}\). The corresponding transformations of the group \(L_{+}^{\uparrow}\) will be denoted as \(R(\hat{r})=\Lambda(r)\) and \(L(\hat{l})=\Lambda(l)\).
Particularly, the boosts (A.6) of Wigner's method of constructing the covariant Dirac field in Minkowski spacetime have the parameters \(\tau^{i}=-\frac{p^{i}}{p}\text{tanh}^{-1}\frac{p}{E(p)}\) taking the form [7]
\[l_{\vec{p}}=\frac{E(p)+m+\gamma^{0}\vec{\gamma}\cdot\vec{p}}{\sqrt{2m(E(p)+m)} }=l_{\vec{p}}^{+}\,,\quad l_{\vec{p}}^{-1}=l_{-\vec{p}}=\bar{l}_{\vec{p}}\,,\] (A.9)
where \(E(p)=\sqrt{m^{2}+\vec{p}^{2}}\) is the energy in special relativity. These boosts which determine the form of the mode spinors in Minkowski spacetime can be related as
\[\sqrt{\frac{m}{E(p)}}l_{\vec{p}}\,e^{-iE(p)t}=u_{M}^{+}(t,p)+\frac{\gamma^{0} \vec{\gamma}\cdot\vec{p}}{p}u_{M}^{-}(t,p)\] (A.10)
to the time modulation functions in this manifold,
\[u_{M}^{\pm}(t,p)=\sqrt{\frac{E(p)\pm m}{2E(p)}}e^{-iE(p)t}\,.\] (A.11)
Hereby we deduce the identity
\[\left(u_{M}^{+}(t,p)+\frac{\gamma^{0}\vec{\gamma}\cdot\vec{p}}{p}u_{M}^{-}(t, p)\right)\pi_{+}=\left(u_{M}^{+}(t,p)\pi_{+}+u_{M}^{-}(t,p)\pi_{-}\right) \left(1+\frac{\gamma^{0}\vec{\gamma}\cdot\vec{p}}{p}\right)\pi_{+}\] (A.12)
that inspires our method of constructing Dirac mode spinors in any FLRW spacetime with the help of the matrix (29) revealed here.
|
2304.04986 | Deep learning of experimental electrochemistry for battery cathodes
across diverse compositions | Artificial intelligence (AI) has emerged as a tool for discovering and
optimizing novel battery materials. However, the adoption of AI in battery
cathode representation and discovery is still limited due to the complexity of
optimizing multiple performance properties and the scarcity of high-fidelity
data. In this study, we present a machine-learning model (DRXNet) for battery
informatics and demonstrate the application in the discovery and optimization
of disordered rocksalt (DRX) cathode materials. We have compiled the
electrochemistry data of DRX cathodes over the past five years, resulting in a
dataset of more than 19,000 discharge voltage profiles on diverse chemistries
spanning 14 different metal species. Learning from this extensive dataset, our
DRXNet model can automatically capture critical features in the cycling curves
of DRX cathodes under various conditions. Illustratively, the model gives
rational predictions of the discharge capacity for diverse compositions in the
Li--Mn--O--F chemical space as well as for high-entropy systems. As a universal
model trained on diverse chemistries, our approach offers a data-driven
solution to facilitate the rapid identification of novel cathode materials,
accelerating the development of next-generation batteries for carbon
neutralization. | Peichen Zhong, Bowen Deng, Tanjin He, Zhengyan Lun, Gerbrand Ceder | 2023-04-11T05:09:48Z | http://arxiv.org/abs/2304.04986v3 | # Deep learning of experimental electrochemistry for battery cathodes across diverse compositions
###### Abstract
Artificial intelligence (AI) has emerged as a powerful tool in the discovery and optimization of novel battery materials. However, the adoption of AI in battery cathode representation and discovery is still limited due to the complexity of optimizing multiple performance properties and the scarcity of high-fidelity data. In this study, we present a comprehensive machine-learning model (DRXNet) for battery informatics and demonstrate the application in discovery and optimization of disordered rocksalt (DRX) cathode materials. We have compiled the electrochemistry data of DRX cathodes over the past five years, resulting in a dataset of more than 30,000 discharge voltage profiles with 14 different metal species. Learning from this extensive dataset, our DRXNet model can automatically capture critical features in the cycling curves of DRX cathodes under various conditions. Illustratively, the model gives rational predictions of the discharge capacity for diverse compositions in the Li-Mn-O-F chemical space and high-entropy systems. As a universal model trained on diverse chemistries, our approach offers a data-driven solution to facilitate the rapid identification of novel cathode materials, accelerating the development of next-generation batteries for carbon neutralization.
## I Introduction
The pursuit of carbon neutrality has become a global imperative in the face of climate change, driving the transition to renewable energy sources and the widespread adoption of electric vehicles [1, 2, 3]. High-performance battery cathode materials with large energy density, high-rate performance, and long cycle life are central to these advancements. The development of new cathode materials is essential to meeting the increasing demand for energy storage and advancing the electrification of transportation systems [4].
Artificial intelligence (AI) has emerged as a powerful tool in the discovery and optimization of novel battery materials [5]. By leveraging vast amounts of experimental and computational data, AI-assisted techniques can accelerate the battery design process by identifying promising candidates within large chemical spaces [6], uncovering hidden structure-property relationships via machine-learned atomistic modeling [7], predicting battery remaining lifespan [8, 9, 10], and optimizing the fast charge/discharge protocol [11]. These efforts significantly reduce the time and cost required for conventional trial-and-error approaches. Most recently, the battery data genome initiative has been proposed to use AI assistance to accelerate the discovery and optimization of battery materials [12].
Despite these advancements, current machine-learning efforts in battery research primarily focus on predicting the lifespan within one battery system in a rather simple chemical space, such as NMC (Ni-Mn-Co). Exploratory machine learning for representing chemical effects in a more complicated multi-dimensional chemical space remains underdeveloped due to the challenges associated with simultaneously optimizing multiple electrochemical properties (e.g., rate capability, cyclability, and various test voltage windows) and capturing the complex chemistry among different transition metal (TM) species [13]. Moreover, the scarcity of high-fidelity data further hinders the progress of AI in the battery field.
Specifically, disordered rocksalt (DRX) materials emerge as promising cathode materials that make use of earth-abundant precursors and can enable scaling of Li-ion energy storage to several TWh/year production [14]. Owing to the nearly unlimited compositional design space and considerably more complex structure-property relationship of DRX cathodes compared with conventional layered cathodes (Fig. 1A), their rational design requires the extensive involvement of advanced characterization techniques (e.g., pair-distribution function analysis [15], spherical-aberration-corrected transmission electron microscopy [16], solid-state nuclear magnetic resonance spectroscopy [17]) as well as complicated computational tools (e.g., high-dimensional cluster expansion and Monte Carlo simulation [18, 19]) under a conventional frame of investigation. Data-driven methods offer alternative means of compositional design and optimization of high-dimensional DRX cathodes without having to fully construct their structure-property relationships.
In light of these challenges, we developed DRXNet, an exploratory machine-learning model for the discovery and optimization of battery cathode materials. DRXNet uses composition, test current density, working voltage
window, and cycle number as inputs to predict entire discharge voltage profiles. By training and testing on over 30,000 experimental discharge voltage profiles of DRX materials comprising various metal species, we show that the model accurately captures the cathode electrochemistry under different test conditions. Notably, DRXNet captures cycled discharge capacity in diverse Li-Mn-O-F compositions and makes rational predictions for several high-entropy systems. As a universal model trained on diverse chemistries, DRXNet offers a data-driven solution to facilitate the rapid identification of novel cathode materials with improved energy-storage capabilities and paves the way for the development of next-generation batteries that can power a carbon-neutral future.
## II Results
### DRX Battery Test Dataset
A prototype binary DRX cathode (Li\({}_{1+x}\)M'\({}_{a}\)M"\({}_{b}\)O\({}_{2-y}\)F\({}_{y}\)) is composed of three primary compositional parameters: (1) the redox-active species M'; (2) the inert high-valent TM M", which compensates for the Li excess and stabilizes disordered structures [22]; (3) fluorine, which enhances the cyclability and accommodates more Li excess without losing TM redox by reducing the anion valence [23]. As shown in Fig. 1A, the discharge-voltage profile presents a negative slope of voltage against capacity. This profile shape is tied to various factors such as the DRX composition, applied
Figure 1: **Introduction to discharge voltage profiles and the collected experimental DRX-TD dataset.** (A) The voltage profile illustrates the relationship between capacity (\(Q\)) and voltage (\(V\)). The derivative quantity \(dQ/dV\) peaks at the redox potential of the TM, where a pronounced peak indicates a flat plateau in the voltage profile. (B) The elemental distribution of collected experimental electrochemistry data. In total, the dataset contains 19,259 discharge profiles collected from DRX oxides and 11,604 discharge profiles from oxyfluorides. The color-coded boxes indicate the number of discharge profiles (cycles) on compounds that contain that specific element. The number within each elemental box represents the count of individual experiments conducted. (C) A histogram of the number of cycles (\(N_{\text{cycle}}\)) and current density (rate) for all the individual electrochemical tests.
current density rate, and degradation that may have occurred in various cycles. In experiments, the capacity \(Q\) is measured by determining the cumulative charge transfer using galvanostatic tests. Another relevant quantity of the voltage profile, \(dQ/dV\), is obtained by taking the derivative of \(Q\) with respect to \(V\). The \(dQ/dV\) curve is a crucial physical quantity for analyzing characteristic redox potentials from different TMs.
Unlike conventional NMC-based layered cathodes, DRX materials exhibit much more diverse electrochemical performance due to their significantly larger chemical space and the more complex structure-property relationship involving not only long-range ordering but also short-range ordering [24]. For instance, Mg doping in Mn-based oxyfluoride DRX can increase the discharge capacity while retaining similar voltage-profile shapes [25]; Cr doping in Li\({}_{1.29}\)Mn\({}_{0.4}\)Ti\({}_{0.4}\)O\({}_{2.0}\) results in comparable low-rate capacity but significantly improves the high-rate performance due to the non-topotactic TM migration [26]. These non-linear effects arising from compositional changes make both material design and machine-learning modeling challenging, thereby necessitating a comprehensive, high-fidelity dataset to address such issues.
We have compiled the electrochemical test data related to DRX compounds by mining electronic experimental notebooks in our research over the past five years to construct the DRX Test Dataset (DRX-TD). The dataset contains not only the successful materials characterized by galvanostatic charge/discharge tests in several papers [24; 25; 26; 27; 28; 29; 30; 31; 32], but also less well-performing DRX compounds. This endeavor yielded a comprehensive dataset containing 30,000 discharge profiles across 16 different elements (14 metal species) from lab experiments and published literature (see Methods). An individual electrochemical
Figure 2: **Model design of DRXNet**: An end-to-end pipeline that maps \(Q_{i}=\mathcal{F}(V_{i}|\mathcal{O})\), which consists of the electrochemical condition network \(\mathcal{O}\) and the state prediction network \(\mathcal{F}\). (A) The electrochemical condition network encodes the DRX composition, current density rate, and cycle information. The three encoded vectors are synthesized through gated-MLPs with soft-attention to obtain the condition vector \(\vec{X}_{\mathcal{O}}\)[20]. (B) The state prediction is approximated as a forward deep neural network that takes the voltage state \(V_{i}\) and cycling voltage window \(V_{\text{low}},V_{\text{high}}\) as inputs. The encoded condition vector \(\vec{X}_{\mathcal{O}}\) is element-wise added in the hidden layer of \(\mathcal{F}\). The circled symbols are all element-wise operations. (C) The message-passing graph neural network (GNN) is used for compositional encoding of DRX, adapted from Roost model [21].
test is defined as a group of \(N_{\text{cycle}}\) discharge profiles with a fixed current density rate, where \(N_{\text{cycle}}\) is the number of cycles conducted in such a test, corresponding to the results obtained from one coin-cell in experiments. The distribution of elements in the DRX-TD is shown in Fig. 1B, where the number in each element's box represents the number of times that element appears in a compound for which an electrochemical test is present. The box's color indicates the number of times that element appears in a discharge profile. Comprising 19,259 discharge profiles of DRX oxides and 11,604 discharge profiles of oxyfluorides, the dataset offers extensive coverage of major redox-active TMs. Figure 1C displays the histograms of the \(N_{\text{cycle}}\) and the loading current rates. Most of the electrochemical tests were conducted at a low current rate (20 mA/g).
Building upon the DRX-TD, 100 points were uniformly sampled from the values of \(V\) and \(Q\) for each discharge profile, resulting in a voltage series \(V=[V_{1},V_{2},...,V_{i},...]\) and a capacity series \(\mathbf{Q}=[Q_{1},Q_{2},...,Q_{i},...]\). The \(dQ/dV\) curve was then calculated based on \(\mathbf{V}\). As \(dQ/dV\) is a more intrinsic property for battery materials, including this value in the modeling allows for a more representative analysis of the electrochemical performance of DRX compounds under various conditions (see Methods).
### DRXNet architecture
DRXNet aims to draw a connection between chemistry and cathode performance by establishing a mapping between \(\mathbf{V}\) and \(\mathbf{Q}\) for arbitrary cathode compositions under various test conditions. This idea can be conceptualized as identifying a function \(\mathcal{F}\) that maps cathode parameters and the voltage state \(V_{i}\) to produce the capacity state \(Q_{i}\) as an output. The function \(\mathcal{F}\) is conditionally defined by the parameters \(\mathcal{O}\), which consider the electrode composition, current rate, and cycle number
\[Q_{i}=\mathcal{F}(V_{i}|\mathcal{O}). \tag{1}\]
With this intuition, we designed DRXNet with two main components, as shown in Fig. 2A and Fig. 2B: (1) An electrochemical condition network that generates a feature vector \(\vec{X}_{\mathcal{O}}\) based on the compound composition and additional features of the electrochemical test information; (2) A state prediction network to approximate the discharge state of the cathode as a function of the voltage state, \(Q_{i}=\mathcal{F}(V_{i}|\mathcal{O})\), given the electrochemical conditional encoding of \(\mathcal{O}\). For instance, Algorithm 1 demonstrates how DRXNet predicts the first-cycle discharge profile of Li\({}_{1.2}\)Mn\({}_{0.4}\)Ti\({}_{0.4}\)O\({}_{2}\) at a current rate of 20 mA/g between 1.5 and 4.8 V.
```
Condition inputs:
  X_O1 = X_comp + sigma_f1(X_comp || X_rate) * f1(X_comp || X_rate)
  X_ON = X_O1 + sigma_f2(X_O1 || X_cycle) * f2(X_O1 || X_cycle) * W_n (N - 1)
Inputs:
  V = [1.5, ..., V_i, ..., 4.8]        -> N series
for i = 1 to N do
  compute Q_i = F(V_i | X_O)           -> F is a neural network
end for
Outputs:
  Q = [Q_1, ..., Q_i, ..., Q_N]        -> N series
```
**Algorithm 1** The workflow of DRXNet with an example of Li\({}_{1.2}\)Mn\({}_{0.4}\)Ti\({}_{0.4}\)O\({}_{2}\)
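In code, this inference loop could be sketched as follows; `encode_conditions` and `state_network` are hypothetical stand-ins for the condition network \(\mathcal{O}\) and the state network \(\mathcal{F}\), not the names used in our implementation:

```python
import torch

def predict_profile(encode_conditions, state_network, composition, rate, cycle,
                    v_low=1.5, v_high=4.8, n_points=100):
    """Sketch of Algorithm 1: predict one discharge profile Q(V) for a given cycle."""
    x_cond = encode_conditions(composition, rate, cycle)        # condition vector X_O
    v_series = torch.linspace(v_low, v_high, n_points)          # voltage states V_i
    q_series = torch.stack([state_network(v_i, v_low, v_high, x_cond)
                            for v_i in v_series])               # Q_i = F(V_i | O)
    return v_series, q_series
```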
Initially, three condition inputs (composition, rate, cycle) are encoded to represent \(\mathcal{O}\). We use Roost, a graph neural network model proposed by Goodall and Lee [21], for compositional encoding. Roost takes elements as graph nodes and updates the correlation between elements through weighted message passing based on each element's fractional concentration (Fig. 2C). The nodes are initialized with elemental embedded vectors \(\vec{h}s\) (\(s\): specie) from mat2vec to capture as much prior chemical information as possible through text mining of previously published literature [33]. Moreover, we consider only the cation species as independent nodes in Roost, treating the anion-species information (fluorine) as a mean-field background, i.e., \(\vec{h}^{\prime}_{\text{Li}}=\vec{h}_{\text{Li}}+c_{\text{F}}\cdot\vec{h}_{ \text{F}}\), where \(c_{\text{F}}\) is the fractional concentration of fluorine and \(\vec{h}_{\text{Li/F}}\) is the embedded vector of Li/F. Rate and cycle information is encoded using multi-layer perceptrons (MLPs).
Because the rate and cycle properties are intrinsically affected by the composition, we used gated-MLPs with soft attention for electrochemical condition encoding via a hierarchical network structure [20]. The \(\vec{X}_{\mathcal{O}_{1}}=\vec{X}_{\text{comp}}+\sigma_{f_{1}}(\vec{X}_{\text{ comp}}||\vec{X}_{\text{rate}})\cdot f_{1}(\vec{X}_{\text{comp}}||\vec{X}_{\text{rate}})\) is a rate-informed feature vector, where \(\sigma_{f}\) and \(f\) represent MLPs with different activation functions and \(||\) denotes the concatenation operation. In addition, the cycle-informed vector \(\vec{X}_{\mathcal{O}_{N}}=\vec{X}_{\mathcal{O}_{1}}+\sigma_{f_{2}}(\vec{X}_{ \mathcal{O}_{1}}||\vec{X}_{\text{cycle}})\cdot f_{2}(\vec{X}_{\mathcal{O}_{1} }||\vec{X}_{\text{cycle}})\cdot\mathbf{W}_{n}(N-1)\) is linearly dependent on the cycle number with a trainable weight \(\mathbf{W}_{n}\). As such, the feature vector \(\vec{X}_{\mathcal{O}_{1}}\) is used to represent the 1st cycle and \(\vec{X}_{\mathcal{O}_{N}}\) is used to represent the \(N\)-th cycle, respectively.
Lastly, we used several MLPs to construct the state prediction network \(\mathcal{F}\), as shown in Fig. 2B. \(\mathcal{F}\) takes the voltage state \(V_{i}\) and working window \(V_{\text{low}},V_{\text{high}}\) as inputs, and the \(\vec{X}_{\mathcal{O}}\) is element-wise added to the hidden layer of \(\mathcal{F}\) to inform \(\mathcal{F}\) of conditions \(\mathcal{O}\) (see Methods). As such, the state prediction network \(\mathcal{F}\) is constructed as a simple function mapping from the voltage state \(V_{i}\) to the capacity \(Q_{i}\). In addition, \((dQ/dV)_{i}\) is obtained by auto-differentiation of \(\mathcal{F}\).
### Applicability domain
In this section, we explore the scope of DRXNet's applicability in the realm of composition space. Determination of the applicability domain in battery machine-learning models can be challenging due to the unavailability of a sufficient test dataset, as generating new data necessitates the synthesis of new solid-state materials or conducting battery cycling tests for weeks to months [13; 34]. For example, simply separating the sequence of voltage and capacity signals \(\{V_{i},Q_{i}\}\) into training and test sets can result in data-leakage issues and a failure to represent the expected error in real applications. To evaluate the expressibility and generalization of DRXNet, we designed several experiments by partitioning the dataset based on compositions. The electrochemical tests with
Figure 3: **Error analysis of DRXNet in compositional space**: (A)–(B) Models are trained on DRX compositions with only two TM species (denoted as 2TM). The models are tested on predicting the delivered capacity between 2.0 and 4.4 V for DRXs composed of three metal species (denoted as 3TM) and higher components (denoted HE for high entropy). (C)–(D) Models are trained on a 2TM dataset along with the first cycles of 3TM and HE as corrected models. The corrected models are tested for subsequent cycles on 3TM and HE to assess the prediction error. (E)–(F) Models are trained on a 3TM dataset and tested on 3TM and HE dataset.
no more than two metal species (2TM, excluding Li) were designated as the training set, whereas the tests with three metal species (3TM) and higher numbers of TM components (high-entropy, HE) were assigned as test sets. For each test, an ensemble of five independent models was trained to enhance the overall prediction accuracy and robustness and to quantify the model variance. Predictions were generated by averaging the predicted capacities of each DRXNet model.
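For reference, the ensemble averaging and the model variance \(\sigma_{Q}\) used below can be sketched as follows, assuming each model returns its predicted capacity curve as a NumPy array (function names are ours):

```python
import numpy as np

def ensemble_capacity(models, predict_fn):
    """Average the capacity curves of an ensemble and report their spread (sigma_Q)."""
    preds = np.stack([predict_fn(model) for model in models])  # (n_models, n_points)
    q_mean = preds.mean(axis=0)    # ensemble prediction
    sigma_q = preds.std(axis=0)    # model variance proxy, sigma_Q
    return q_mean, sigma_q
```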
In Fig. 3A and Fig. 3B, we show the performance of the DRXNet models trained on the 2TM dataset and tested on the 3TM and HE datasets. Mean absolute errors (MAEs) of 0.1/0.13 V for the average voltage (\(\bar{V}=\sum_{i}V_{i}\Delta Q_{i}/\sum_{i}\Delta Q_{i}\)) and 23.38/29.97 mAh/g for the capacity were obtained for the 3TM/HE test datasets, respectively, by comparing the prediction to the experimental Q-V curve within the voltage range of 2.0 - 4.4 V. As a baseline, the mean absolute deviation (MAD) of the average voltage is 0.16/0.21 V for 3TM/HE, and the MAD of the discharge capacity is 36.59/38.54 mAh/g for 3TM/HE, which serve as references for these comparisons. We found that large prediction errors already occurred at the first cycle and propagated into the subsequent cycles. Notably, a systematic underestimation of capacity is observed for
Figure 4: **Model variance analysis of DRXNet in compositional space**: The prediction error of discharge capacity between 2.0 and 4.4 V (\(y\)-axis) _vs._ cycle number (\(x\)-axis). The model variance is represented by \(\sigma_{Q}\), a standard deviation of the ensemble of the models’ prediction, which is plotted as scaled colored dots. (A)–(B): Predictions on 3TM/HE using models trained on the 2TM dataset. (C)–(D): Predictions on 3TM/HE using models trained on both the 2TM dataset and the first cycles of the 3TM/HE dataset. (E)–(F): Predictions on 3TM/HE using models trained on the 3TM dataset.
HE (Fig. 3B), which can be rationalized by the fact that 2TM represents low-entropy DRX and cannot capture the improved performance arising from the novel high-entropy physics [32].
For practical applications, there are two approaches to improving the model's accuracy and enhancing its predictive capabilities: (1) new data points can be continuously collected as experiments progress, enabling on-the-fly training with in-situ data to improve predictive performance in data-scarce chemical space; and (2) incorporating a diverse range of information from chemical space and test conditions to deliver well-pretrained models. Regarding (1), we further tested this concept by evaluating the improvement when DRXNet was trained on a dataset containing all 2TM data and was provided with the first cycle data from 3TM/HE materials. The knowledge of just the first cycle data resulted in a reduction of the capacity MAE from 23.38/29.97 mAh/g to 14.84/17.58 mAh/g for 3TM/HE. The enhanced performance achieved by explicitly training with the first cycle indicates that the model can better generalize cycling performance, even when experiments for a specific composition are not extensively sampled. This capability has the potential to significantly reduce the month-long timeframe typically required for electrochemical testing. In regard to (2), we present the results for models trained on the 3TM dataset, where Fig. 3E displays the training errors (6.0 mAh/g), and the test error on the HE data is 19.63 mAh/g, reduced by 10 mAh/g from those trained on 2TM data.
To rationalize the improvement and assess the expressibility for extrapolation in untrained domains, we examined the prediction error and model variance as a function of cycle number, as shown in Fig. 4. The standard deviation of the prediction from the ensemble of five DRXNet models (\(\sigma_{Q}\)) was used to represent the model variance as an approximation of how uncertain the predictions are. The 2TM model exhibited moderate model variance for 3TM predictions and high model variance for HE predictions (\(>\) 50 mAh/g) as shown in Fig. 4A and B. Training the model with first-cycle data led to a substantial decrease in both the prediction error and model variance for the initial few cycles, although the model variance increased subsequently with the cycle number for untrained domains (Fig. 4C and D).
It is important to note that the models trained on 3TM data show a significantly reduced prediction error and model variance for the HE prediction compared to those obtained when training the 2TM model (Fig. 4F). This finding suggests that the 2TM dataset is not adequate for extracting relevant information and generalizing to other compositions. The scaling to high-component electrodes necessitates capturing more than 2TM correlations or interactions in training the graph neural network.
Figure 5: **Illustration of synergistic predictions of discharge capacity in Li–Mn–O–F DRX systems**: (A) Design principle from theory, where the Li/Mn/F content jointly determines the performance of battery materials (see Ref. [31]). (B) Prediction of discharge capacity for the 1st and 30th cycle in Li–Mn–O–F chemical space between 1.5 – 4.8 V at a current density rate of 20 mA/g, with the blue stars indicating the compositions included in the training set.
Failure to do so may lead to systematic prediction errors, as demonstrated in Fig. 3B. When the model is able to acquire sufficient chemical domain knowledge (e.g., 3TM-model), it becomes feasible to extrapolate the electrochemical properties of high-component electrodes.
In the following sections, we will present several examples to illustrate how DRXNet learns the underlying cathode chemistry and assists in the design of new materials. The models used for these applications are pre-trained on all discharge profiles.
### Synergistic predictions in Li-Mn-O-F
Manganese is an attractive earth-abundant, non-precious TM for next-generation cathode design [27]. Lun _et al._[31] proposed three primary design degrees of freedom for Mn-based DRX (Fig. 5A): (1) the Li-excess content, which controls the presence of a percolating network creating facile Li diffusion; (2) the Mn content, as low amounts of Mn can lead to severe oxygen redox and poor cyclability; and (3) the fluorine content, which lowers the total cation valence and provides greater freedom to optimize the Li and Mn content. Fluorine modifies cation short-range order through the strong Li-F attraction and lowers the initial capacity [25; 35]. These theoretical principles are highly correlated and exert non-linear effects on performance, making it challenging to predict.
We used DRXNet to predict the discharge capacity between 1.5 and 4.8 V at a current rate of 20 mA/g for the 1st and 30th cycles as a function of Li and F content, which is illustrated in Fig. 5B. The amount and valence of Mn follow directly from the Li and F content. The critical feature of fluorine that has been extensively characterized experimentally is well captured by our model. A higher F content (\(y\) in O\({}_{2-y}\)F\({}_{y}\)) results in a lower discharge capacity for the 1st cycle but a higher capacity for the 30th cycle. In particular, Li\({}_{1.333}\)Mn\({}_{0.667}\)O\({}_{2}\) is predicted to have the highest capacity (\(>320\) mAh/g) for the first cycle but the lowest capacity for the 30th cycle. This prediction is consistent with the understanding that the capacity originates from oxygen redox, as the valence of Mn is 4+. Such a large amount of O-redox leads to rapid capacity fade and aligns with the experimental observations of disordered Li\({}_{2}\)MnO\({}_{3}\) reported in Ref. [36].
To rationalize the extrapolation ability of DRXNet, we plot the compositions in the training dataset using blue stars in Fig. 5B. It is evident that despite the sparse distribution of training points across the composition map, DRXNet delivers accurate predictions that align with the experimental observations beyond the scope of the training points. As DRXNet is trained on various compositions beyond the Li-Mn-O-F chemical space, the ability to extrapolate to other domains can be attributed to the transfer learning from other F- and non-F-containing compounds. The example in this section demonstrates how practitioners can generalize the design principles from a data-driven perspective purely starting from experiments.
### Exploratory search for high-entropy cathodes
High-entropy DRXs are composed of many species and present a vast chemical space to explore for battery materials discovery. In this section, we present two case studies of predicted high-entropy DRXs: Li\({}_{1.2}\)Mn\({}_{0.1}\)Mg\({}_{0.1}\)Cr\({}_{0.3}\)Ti\({}_{0.2}\)Nb\({}_{0.1}\)O\({}_{1.8}\)F\({}_{0.2}\) (HE-1) and Li\({}_{1.2}\)Mn\({}_{0.1}\)Mg\({}_{0.1}\)Cr\({}_{0.15}\)V\({}_{0.15}\)Ti\({}_{0.2}\)Nb\({}_{0.1}\)O\({}_{1.8}\)F\({}_{0.2}\) (HE-2). The predicted discharge profiles under various current densities are shown in Fig. 6A and B. A more comprehensive map of other compositions is included in the Supplementary Information.
Figure 6: Predicted discharge profiles of two high-entropy DRX materials with various current density rates (from 20 mA/g to 1000 mA/g). (A) Li\({}_{1.2}\)Mn\({}_{0.1}\)Mg\({}_{0.1}\)Cr\({}_{0.3}\)Ti\({}_{0.2}\)Nb\({}_{0.1}\)O\({}_{1.8}\)F\({}_{0.2}\) (HE-1) and (B) Li\({}_{1.2}\)Mn\({}_{0.1}\)Mg\({}_{0.1}\)Cr\({}_{0.15}\)V\({}_{0.15}\)Ti\({}_{0.2}\)Nb\({}_{0.1}\)O\({}_{1.8}\)F\({}_{0.2}\) (HE-2). The inset displays the cycled discharge capacity at a current density rate of 20 mA/g.
For HE-1, DRXNet predicts a discharge capacity of 276 mAh/g at a low current rate of 20 mA/g. The discharge voltage profile shows a clear transition from a flat curve to a sloped curve at around 3 V, which has been widely observed in Mn-redox and/or Cr-redox-based DRXs [26, 32, 37]. A capacity of 196 mAh/g is predicted at 1000 mA/g, retaining 71% of that delivered at a slow rate of 20 mA/g. From previous studies, multi-elemental substitution (i.e., the high-entropy strategy) frustrates unfavorable SRO that leads to sluggish kinetics; Cr incorporation and Cr\({}^{6+}\) migration at high voltage upon delithiation open up a better-extended 0-TM network for Li transport, both of which can improve the Li diffusion kinetics [26, 32]. DRXNet clearly learns those benefits and extrapolates rationally to electrochemistry predictions for the high-entropy compositions.
As a comparison, the partial substitution of Cr\({}^{3+}\) by V\({}^{3+}\) in HE-1, yielding HE-2, is expected to change the shape of the voltage curves dramatically due to the low potential of V\({}^{5+}\)/V\({}^{3+}\) reduction, which is also predicted using DRXNet as shown in Fig. 6B. It is clearly demonstrated that with V\({}^{3+}\) incorporation, a nearly constant slope can be observed down to the low-voltage region, which is characteristic of V-based DRX cathodes reported previously [38, 39]. Nevertheless, V\({}^{5+}\) has a migration mechanism similar to that of Cr for enhancing Li transport and is likely to be beneficial for the rate capability [38]. Consistently, although a slightly lower discharge capacity is predicted (266 mAh/g) for HE-2, it retains 171 mAh/g capacity at 1000 mA/g (64% of the capacity at 20 mA/g), which is better than the majority of the DRX cathodes reported to date.
In terms of cyclability, the inset plots show the predicted discharge capacity within 20 cycles for both materials. A more dramatic capacity drop is predicted over the first 5 cycles, which slows down upon further cycling. This result is in full agreement with experimental findings, which indicate that some irreversibility occurs in the initial cycles, arising from processes such as surface carbonate decomposition [40] or cathode-electrolyte interface formation [41]. These examples illustrate how practitioners can effectively use DRXNet to navigate the extensive chemical space of high-entropy DRXs and identify promising candidates for cathode design and optimization.
## III Discussion
The pursuit of carbon neutralization by optimizing the discovery and application of energy-storage materials using AI has long captivated materials scientists. Numerous efforts have been made in this area, and the Battery Data Genome has been proposed as a potential breakthrough along with the fast development of AI technologies [9, 10, 11, 12]. In this spirit, we have proposed a deep learning approach for representing and learning battery electrochemistry from experimental data. We developed a machine-learning model (DRXNet) trained on over 30,000 experimental discharge voltage profiles, encompassing diverse compositional chemistry in DRX cathodes. This was achieved through a novel model design consisting of an electrochemical condition network (\(\mathcal{O}\)) and a state prediction network (\(\mathcal{F}\)).
The design of the two networks promotes modularity in the architecture, streamlining the optimization and interpretation of each network individually and their learned features. For instance, the hierarchical network structure in \(\mathcal{O}\) presents feature vectors for both the first and the \(N\)-th cycle (see Eq.(7)). Analyzing the differences between these two vectors can reveal insights into the material's cyclability fingerprint. Moreover, as highlighted in model training, the loss function is designed for multi-task learning (see Eq.(13)). The 1st and \(N\)-th cycle capacities are trained simultaneously in each update. This loss function including two contrastive terms enhances learning efficiency, as each component can focus on specific aspects of the problem (i.e., \(\ell(Q^{1})\) for composition and rate, \(\ell(Q^{N})\) for composition, rate, and cycle), leading to improved physical meaning of the network rather than minimizing the average training error.
In addition, the electrochemical condition network design provides flexibility in terms of model application. We recognize that most training datasets are derived from our own experimental results, which do not encompass critical testing parameters such as particle size, electrolyte type, electrode fabrication methods, etc. These factors have been coarsely integrated into the compositional model in our dataset. In principle, researchers can choose to include any factors to design the electrochemical feature vector, depending on the specific problem they are addressing. Given the vast amount and complexity of these properties, a synthetic data collection approach is necessary. Data-mining techniques, such as text mining and figure mining, can automatically retrieve valuable experimental information from decades of published literature [42, 43]. This has the potential to enhance the model's generalizability and incorporate extensive prior domain knowledge in electrochemical fields.
We would also like to discuss the depth and transferability of DRXNet's predictive capabilities for exploration, especially for the state prediction network \(\mathcal{F}\). We further tested the electrochemical performance of HE-2 by varying the voltage window and cycling rate, which are the parameters that typically require multiple individual electrochemical tests in experiments. Figure 7A displays the discharge profiles between 2.0 - 4.4 V, with two additional rates tested (10 mA/g for a low rate and 10\({}^{4}\) mA/g for an extremely high rate). These conditions are infrequently incorporated in our training data. The low rate exhibits a discharge profile very similar to 20 mA/g, which is entirely consistent with experimental findings, as the discharge process at a low rate exhibits a reduced overpotential. The 10 A/g rate discharge profile demonstrates a sharp drop in voltage, reasonably indicating poor performance at an extremely high rate. On
the other hand, Figure 7B presents the discharge profiles between 2.0 - 4.0 V, which start to show some unphysical predictions. Small offsets appear at the beginning of discharge for the low rate profiles, resulting in a non-zero discharge capacity at 4.0 V. We attribute this discrepancy to (1) the connection between voltage state \(V_{i}\) and window \([V_{\text{low}},V_{\text{high}}]\) being achieved by linear combinations in the hidden layer (Fig. 2B), which is a data-driven encoding and requires training; and (2) a limited number of experiments being conducted with \(V_{\text{high}}\) lower than 4.0 V, which leads to data scarcity in that voltage range. These two reasons may rationalize why the predictions for 2.0 - 4.4 V show better accuracy (no offsets) while the ones for 2.0 - 4.0 V deviate more.
Based on the tests, our primary conclusion is that DRXNet exhibits a reasonable ability to learn chemical interactions and generalize to test conditions included in the dataset among different chemical compositions. However, for test conditions that the model has not encountered (e.g., experiments with \(V_{\text{high}}<4.0\) V), discrepancies or unphysical profiles may arise (e.g., non-zero capacity at the beginning). This highlights the data scarcity issue, which arises from human bias and outliers in experimental setups or poorly performing systems, as researchers may discontinue their discovery efforts when faced with unfavorable results [44]. In the future, automated labs can address this scarcity issue by enabling more extensive exploration of the experimental space (e.g., various voltage windows to find the optimal trade-off between energy density and cyclability, along with a combination of different current density rates), even for "failed" experiments [45]. This approach can result in a more comprehensive dataset for building machine-learning models and understanding the electrochemical properties of battery materials.
In conclusion, DRXNet represents a significant step forward in the development of machine-learning models for battery materials research. By continuously refining the model and incorporating additional data and parameters, we anticipate that such a machine-learning framework will play an increasingly critical role in the discovery and optimization of next-generation battery materials.
## IV Methods
### Data collection
We collected coin-cell electrochemical test data from our lab starting in 2016 and converted them into a digital format (.json). Each .json file contains information on one individual electrochemical test, including the electrode composition, electrode mass (g), active mass (g), test current rate (mA/g), low and high voltage values of the working window (V), and charge/discharge profiles of \(N_{\text{cycle}}\) collected cycles.
For the in-house battery tests, the CR2032 coin cells were assembled using commercial 1 M LiPF\({}_{6}\) in an ethylene carbonate and dimethyl carbonate solution (volume ratio 1:1) as the electrolyte, glass microfiber filters (Whatman) as separators, and Li-metal foil (FMC) as the anode. The coin cells were tested on an Arbin battery cycler at room temperature. The cathode consisted of a mixture of active material (DRX), Super C65 carbon black, and polytetrafluoroethylene (PTFE). The capacity signal, collected in units of Ah from the Arbin battery cycler, was normalized to mAh/g using the mass of the active material (active mass). The data from the failed tests (e.g., Arbin cycler breakdown, electrolyte failure, strong signal fluctuations...) were removed from the dataset (see Supplementary Information for examples).
To enhance the generalization and expressibility of DRXNet, we expanded the dataset by figure mining published voltage profiles in related systems not covered by our lab tests (see Supplementary Information for details), which was accomplished using the WebPlotDigitizer [46].
We used the UnivariateSpline method to denoise the
Figure 7: Predicted discharge profiles with various current density rates (from 10 mA/g to 10 A/g) of Li\({}_{1.2}\)Mn\({}_{0.1}\)Mg\({}_{0.1}\)Cr\({}_{0.15}\)V\({}_{0.15}\)Ti\({}_{0.2}\)Nb\({}_{0.1}\)O\({}_{1.8}\)F\({}_{0.2}\) (HE-2) between (a) 2.0 – 4.4 V and (b) 2.0 – 4.0 V. The inset displays the cycled discharge capacity at a current density rate of 20 mA/g.
experimental profile and compute the \(dQ/dV\) curve. One hundred points were uniformly sampled to form the voltage series \(\mathbf{V}=[V_{0},V_{1},...,V_{i},...]\) for each discharge profile, and the capacity series and \(dQ/dV\) series were calculated accordingly from \(\mathbf{V}\).
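A rough sketch of this processing step is given below; the smoothing factor is a placeholder, and the measured voltages are assumed to be strictly increasing after sorting:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def smooth_profile(v_raw, q_raw, n_points=100, smoothing=1.0):
    """Denoise Q(V) with a univariate spline, then sample 100 points and dQ/dV."""
    order = np.argsort(v_raw)
    spline = UnivariateSpline(v_raw[order], q_raw[order], s=smoothing)

    v_series = np.linspace(v_raw.min(), v_raw.max(), n_points)
    q_series = spline(v_series)
    dq_dv = spline.derivative()(v_series)   # analytic derivative of the fitted spline
    return v_series, q_series, dq_dv
```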
### Model design
#### ii.2.1 Preliminaries
We define a linear layer with trainable weight \(\mathbf{W}\) and bias \(\mathbf{b}\) as
\[L(\vec{X})=\vec{X}\mathbf{W}+\mathbf{b}. \tag{2}\]
A multi-layer perceptron (MLP) is denoted as
\[\phi(\vec{X})=\sigma\left(L(\vec{X})\right)=\sigma\circ L(\vec{X}), \tag{3}\]
where \(\sigma\) is a non-linear activation function.
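In PyTorch-like pseudocode, these building blocks correspond to the following generic sketch (not the exact released implementation):

```python
import torch.nn as nn

class MLP(nn.Module):
    """One linear layer followed by a non-linear activation, cf. Eqs. (2)-(3)."""

    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.linear = nn.Linear(dim_in, dim_out)  # L(x) = x W + b
        self.activation = nn.SiLU()               # sigma (SiLU plays the role of sigma_g below)

    def forward(self, x):
        return self.activation(self.linear(x))    # phi(x) = sigma(L(x))
```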
#### ii.2.2 Compositional encoding
For elemental information, each element is first embedded into a 200-dimensional vector using mat2vec [33]. The Roost (Representation Learning from Stoichiometry) model, a graph neural network (GNN), is used for compositional encoding [21], with message passing as follows:
\[\begin{split}\vec{h}_{i}^{t+1}&=\vec{h}_{i}^{t}+ \sum_{j,m}a_{i,j}^{t,m}\cdot\sigma_{g}\circ L_{c}\left(\vec{h}_{i}^{t}||\vec{h }_{j}^{t}\right),\\ a_{i,j}^{t,m}&=\frac{w_{j}\exp(e_{i,j}^{t,m})}{ \sum_{k}w_{k}\exp(e_{i,k}^{t,m})},\ e_{i,k}^{t,m}=\sigma_{g}\circ L_{a}\left( \vec{h}_{i}^{t}||\vec{h}_{j}^{t}\right).\end{split} \tag{4}\]
In these equations, \(\vec{h}_{i}^{t}\) represents the \(t\)-th hidden layer for the \(i\)-th element; \(||\) denotes the concatenation operation; and the soft-attention coefficient \(a_{i,j}^{t,m}\) describes the interaction between elements \(i\) and \(j\), with \(m\) as the index of multi-head attention. \(L_{c}\) and \(L_{a}\) denote the linear layer for the core and attention layer, respectively. The fractional concentration \(w_{j}\) of element \(j\) depends on the specific compound (e.g., \(w_{j}=0.6/0.2/0.2\) for Li/Mn/Ti in Li\({}_{1.2}\)Mn\({}_{0.4}\)Ti\({}_{0.4}\)O\({}_{2.0}\)). \(\sigma_{g}\) is the SiLu activation function. After \(n\) graph convolution layers, the encoded composition vector \(\vec{X}_{\text{comp}}\) is obtained by average pooling over the elements with weighted attention
\[\vec{X}_{\text{comp}}=\text{Pooling}\left[\frac{w_{i}\exp\left(\sigma_{g} \circ L_{a}(\vec{h}_{i}^{n})\right)}{\sum_{k}\exp\left(\sigma_{g}\circ L_{a}( \vec{h}_{i}^{n})\right)}\cdot\left(\sigma_{g}\circ L_{c}(\vec{h}_{i}^{n}) \right)\right] \tag{5}\]
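The full message passing is implemented in Roost [21]; as a simplified, single-head sketch of the final pooling step in Eq. (5), with our own layer names:

```python
import torch
import torch.nn as nn

class WeightedAttentionPooling(nn.Module):
    """Pool element embeddings into one composition vector, cf. Eq. (5)."""

    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, 1), nn.SiLU())    # sigma_g o L_a
        self.core = nn.Sequential(nn.Linear(dim, dim), nn.SiLU())  # sigma_g o L_c

    def forward(self, h, w):
        # h: (n_elements, dim) final node embeddings; w: (n_elements,) fractional concentrations
        scores = torch.exp(self.gate(h).squeeze(-1))               # exp(sigma_g(L_a(h_i)))
        attn = w * scores / scores.sum()                           # attention weights as in Eq. (5)
        return (attn.unsqueeze(-1) * self.core(h)).sum(dim=0)      # pooled X_comp
```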
#### ii.2.3 Electrochemical condition encoding
The electrochemical test primarily involves two pieces of information: the current density rate and cycle number. We use MLPs to encode the rate and cycle number:
\[\vec{X}_{\text{rate}}=\sigma_{g}\circ L(\text{rate}),\ \vec{X}_{\text{ cycle}}=\sigma_{g}\circ L(\text{cycle}). \tag{6}\]
As the actual rate and cycle performance are strongly correlated with cathode materials, the relationship between the composition, rate, and cycle is synthesized using gated-MLPs with soft attention[20]:
\[\begin{split}\vec{X}_{\mathcal{O}_{1}}&=\vec{X}_{ \text{comp}}+\sigma_{f_{1}}(\vec{X}_{\text{comp}}||\vec{X}_{\text{rate}})\cdot f _{1}(\vec{X}_{\text{comp}}||\vec{X}_{\text{rate}})\\ \vec{X}_{\mathcal{O}_{N}}&=\vec{X}_{\mathcal{O}_{1 }}+\sigma_{f_{2}}(\vec{X}_{\mathcal{O}_{1}}||\vec{X}_{\text{cycle}})\cdot f_{ 2}(\vec{X}_{\mathcal{O}_{1}}||\vec{X}_{\text{cycle}})\cdot\mathbf{W}_{n}(N-1),\end{split} \tag{7}\]
where \(\sigma_{f}=\sigma_{s}\circ B\circ L\) is an MLP, \(\sigma_{s}\) is the Sigmoid activation function, and \(f=\sigma_{g}\circ B\circ L\) is an MLP with SiLu activation function \(\sigma_{g}\). The BatchNormalization layer \(B\) is added before the activation function. In this equation, \(\vec{X}_{\mathcal{O}_{1}}\) is a feature vector jointly determined by the composition and rate information, which is used to predict the first cycle property. \(\vec{X}_{\mathcal{O}_{N}}\) is a feature vector jointly determined by the composition, rate, and cycle information, which is used to predict the \(N\)-th cycle property. The difference between \(\vec{X}_{\mathcal{O}_{1}}\) and \(\vec{X}_{\mathcal{O}_{N}}\) is linearly dependent on the number of cycles with a trainable weight \(\mathbf{W}_{n}\), allowing the model to learn cycle performance contrastively.
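A minimal sketch of this gated mixing, with assumed layer sizes and names, is:

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """x <- x + sigma_f(x || y) * f(x || y), the gated-MLP mixing of Eq. (7)."""

    def __init__(self, dim_x, dim_y):
        super().__init__()
        self.sigma_f = nn.Sequential(nn.Linear(dim_x + dim_y, dim_x),
                                     nn.BatchNorm1d(dim_x), nn.Sigmoid())
        self.f = nn.Sequential(nn.Linear(dim_x + dim_y, dim_x),
                               nn.BatchNorm1d(dim_x), nn.SiLU())

    def forward(self, x, y):
        xy = torch.cat([x, y], dim=-1)            # concatenation x || y
        return x + self.sigma_f(xy) * self.f(xy)
```

The rate-informed vector \(\vec{X}_{\mathcal{O}_{1}}\) is then the fusion of \(\vec{X}_{\text{comp}}\) and \(\vec{X}_{\text{rate}}\), while the cycle-informed vector \(\vec{X}_{\mathcal{O}_{N}}\) adds a second gated term scaled by the trainable weight \(\mathbf{W}_{n}\) times \((N-1)\), as in Eq. (7).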
#### ii.2.4 State prediction network
The state prediction network (\(\mathcal{F}\)) takes the inputs of voltage state (\(V_{i}\)) and outputs the discharge-capacity state (\(Q_{i}\))
\[Q_{i}=\mathcal{F}\left(V_{i}|\mathcal{O}\right). \tag{8}\]
In practice, the voltage profile is measured within the applied voltage window \([V_{\text{low}},V_{\text{high}}]\). To accommodate the voltage window in the discharge state prediction, the first layer in \(\mathcal{F}\) is encoded via an MLP:
\[\vec{Z}_{i}^{0}=\sigma_{\mathcal{F}}\left([V_{\text{low}},V_{\text{high}}]^{T} \,\mathbf{W}_{1}+[V_{i}]^{T}\mathbf{W}_{2}\right), \tag{9}\]
where \(\sigma_{\mathcal{F}}(\cdot)\) is the Softplus activation function and \(\mathbf{W}_{1}\), \(\mathbf{W}_{2}\) are trainable weights. We used a ResNet-like structure to incorporate the test-condition information from \(\vec{X}_{\mathcal{O}}\) [47]:
\[\begin{split}\vec{Z}_{i}^{1}&=\sigma_{\mathcal{F}} \circ L_{0}\left(\vec{Z}_{i}^{0}+\vec{X}_{\mathcal{O}_{1}}\right)\\ \vec{Z}_{i}^{N}&=\sigma_{\mathcal{F}}\circ L_{0}\left( \vec{Z}_{i}^{0}+\vec{X}_{\mathcal{O}_{N}}\right)\end{split} \tag{10}\]
The state of capacity is obtained by
\[\begin{split} Q_{i}^{1}&=\sigma_{\mathcal{F}}\circ L_{ 2}\circ\sigma_{\mathcal{F}}\circ L_{1}(\vec{Z}_{i}^{1})\\ Q_{i}^{N}&=\sigma_{\mathcal{F}}\circ L_{2}\circ \sigma_{\mathcal{F}}\circ L_{1}(\vec{Z}_{i}^{N})\end{split} \tag{11}\]
where \(Q_{i}^{1}\) is the capacity for the first cycle and \(Q_{i}^{N}\) is the capacity for the \(N\)-th cycle. Because the discharge capacity is always positive, \(\sigma_{\mathcal{F}}\) is added to constrain the predicted capacity to be positive and accelerate the training process. \(dQ/dV\) for the redox potential can be obtained via PyTorch auto-differentiation [48]
\[\frac{dQ}{dV}\bigg{|}_{i}=\text{AutoDiff}(Q_{i},V_{i}). \tag{12}\]
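A compact PyTorch sketch of the state prediction network and of obtaining \(dQ/dV\) by automatic differentiation (Eqs. 9-12) is shown below; the hidden width, the presence of bias terms, and the dummy conditioning vector are assumptions, and only the \(N\)-th-cycle branch is written out (the first-cycle branch is analogous).

```python
import torch
import torch.nn as nn

softplus = nn.Softplus()

class StateNet(nn.Module):
    """Sketch of F: discharge-capacity state Q_i from voltage state V_i, conditioned on X_O."""
    def __init__(self, dim):
        super().__init__()
        self.w1 = nn.Linear(2, dim)   # encodes the voltage window [V_low, V_high]
        self.w2 = nn.Linear(1, dim)   # encodes the voltage point V_i
        self.l0 = nn.Linear(dim, dim)
        self.l1 = nn.Linear(dim, dim)
        self.l2 = nn.Linear(dim, 1)

    def forward(self, v, v_window, x_cond):
        z0 = softplus(self.w1(v_window) + self.w2(v))    # Eq. (9)
        z = softplus(self.l0(z0 + x_cond))               # Eq. (10): ResNet-like conditioning on X_O
        return softplus(self.l2(softplus(self.l1(z))))   # Eq. (11): positive capacity

net = StateNet(dim=64)
v = torch.linspace(1.5, 4.8, 100).unsqueeze(-1).requires_grad_(True)
window = torch.tensor([[1.5, 4.8]]).expand(100, 2)
q = net(v, window, x_cond=torch.zeros(100, 64))
# Eq. (12): dQ/dV via autograd; keep the graph so dQ/dV can enter the training loss
dq_dv = torch.autograd.grad(q.sum(), v, create_graph=True)[0]
```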
### Model training
The model is trained to minimize the sum of multi-task losses for the capacity of the first cycle, the \(n\)-th cycle, and \(dQ/dV\):
\[\mathcal{L}=w_{Q}\ell(Q_{i}^{N})+w_{dQ}\ell(\frac{dQ^{N}}{dV_{i}})+w_{Q_{1}} \ell(Q_{i}^{1})+\mathcal{R}. \tag{13}\]
The MSE loss function is used for \(\ell(Q_{i}^{N})\) and \(\ell(\frac{dQ^{N}}{dV_{i}})\), whereas the MAE loss function is employed for the first cycle as a contrastive term \(\ell(Q_{i}^{1})\). The weights for \(Q_{i}^{N}\), \(dQ/dV\), and \(Q_{i}^{1}\) are set to \(w_{Q}=1\), \(w_{dQ}=1\), and \(w_{Q_{1}}=5\). The term \(\mathcal{R}\) represents regularization, which consists of two parts: (1) an \(\ell_{2}\)-norm regularization of the network's parameters \(||\mathbf{\theta}||_{2}\) and (2) a smoothing term \(||dQ/d\mathbf{c}||_{2}\) to avoid large, unphysical performance fluctuations (\(\mathbf{c}\) denotes the fractional concentration of elements). The weight of regularization is \(10^{-4}\).
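The corresponding multi-task objective can be assembled as in the sketch below; the per-term reduction and the precise form of the two penalty terms are assumptions, while the weighting follows Eq. (13).

```python
import torch
import torch.nn.functional as F

def multitask_loss(q_n, q_n_true, dqdv, dqdv_true, q_1, q_1_true,
                   params, dq_dc, w_q=1.0, w_dq=1.0, w_q1=5.0, reg=1e-4):
    """Loss of Eq. (13): MSE for Q_N and dQ/dV, MAE for the first-cycle contrastive term."""
    loss = (w_q * F.mse_loss(q_n, q_n_true)
            + w_dq * F.mse_loss(dqdv, dqdv_true)
            + w_q1 * F.l1_loss(q_1, q_1_true))
    l2_term = sum(p.pow(2).sum() for p in params)   # parameter-norm regularization
    smooth_term = dq_dc.pow(2).sum()                # smoothing of dQ/dc over compositions
    return loss + reg * (l2_term + smooth_term)
```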
To make predictions, an ensemble of five independent models was trained. Each model was trained with a batch size of 1024 for 30 epochs. The Adam optimizer was used with an initial learning rate of \(10^{-3}\). The ExponentialLR scheduler was used to adjust the learning rate with a decay of 0.9 per epoch.
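The corresponding training setup can be sketched as follows, where the placeholder module stands in for the full model assembled from the encoders and state network described above:

```python
import torch
import torch.nn as nn

# placeholder module standing in for the full model (composition, condition, and state networks)
models = [nn.Linear(8, 1) for _ in range(5)]   # five-member ensemble
for model in models:
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.9)   # decay of 0.9 per epoch
    for epoch in range(30):
        # iterate over mini-batches of size 1024, evaluate the loss of Eq. (13),
        # call loss.backward() and opt.step(), then step the scheduler once per epoch
        sched.step()
```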
## V Acknowledgments
This work was primarily supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC0205CH11231 (Materials Project program KC23MP). The data collection in this work was supported by the Assistant Secretary for Energy Efficiency and Renewable Energy, Vehicle Technologies Office, under the Advanced Battery Materials Research (BMR) Program, of the US Department of Energy (DOE) under contract No. DE-AC0205CH11231. The computational modeling in this work was supported by the computational resources provided by the Extreme Science and Engineering Discovery Environment (XSEDE), supported by National Science Foundation grant number ACI1053575; the National Energy Research Scientific Computing Center (NERSC); and the Lawrencium computational cluster resource provided by the IT Division at the Lawrence Berkeley National Laboratory. The authors thank Huiwen Ji, Jianping Huang, and Zijian Cai for their help in experimental data collection and Yifan Chen for valuable discussions.
## VI Availability
The models will be released after review or upon reasonable request.
## VII Competing interests
The authors declare no competing interests.
|
2305.02006 | Orbital Fulde-Ferrell-Larkin-Ovchinnikov state in an Ising
superconductor | The critical field behavior of a layered Ising superconductor with finite
number of layers is studied. Under in-plane magnetic fields, the
finite-momentum superconductivity dubbed as the orbital Fulde-
Ferrell-Larkin-Ovchinnikov state is found in the regime of low field and high
temperature. Our theory agrees well with the experimental results in Nature
619, 46 (2023). | Noah F. Q. Yuan | 2023-05-03T09:56:00Z | http://arxiv.org/abs/2305.02006v2 | # Orbital Fulde-Ferrell-Larkin-Ovchinnikov state in an Ising superconductor
###### Abstract
The critical field behavior of a layered Ising superconductor with intermediate number of layers is studied. Under in-plane magnetic fields, the finite-momentum superconductivity dubbed as the orbital Fulde-Ferrell-Larkin-Ovchinnikov state is found in the regime of low field and high temperature. Our theory agrees well with the experimental results in arXiv:2211.07745. We also predict the upper critical field behavior in the regime of high field and low temperature.
The dimensionality of a superconductor is usually derived from its critical field behavior, which reflects the spatial profile of the order parameter under external magnetic fields. When the superconductor is three-dimensional (3D), Abrikosov vortices [1] will be formed under fields above the lower critical field, where the magnitude of the order parameter is highly non-uniform and the phase has an integer winding around a vortex core. As a result, the upper critical field of a 3D superconductor is linear in temperature, and the critical exponent is independent of the field direction.
On the contrary, in a two-dimensional (2D) superconductor the order parameter is mostly uniform in magnitude and Abrikosov vortices cannot be formed under in-plane fields. Since the characteristic size of the Abrikosov vortex core is the coherence length \(\xi\), the criterion of 2D superconductivity is roughly \(d<\xi\), where \(d\) is the thickness. Detailed calculations further reveal the critical thickness for 2D superconductivity \(d<d_{c}\approx 1.8\xi\)[2]. In a 2D superconductor, near the zero-field critical temperature, the out-of-plane upper critical field is still linear in temperature, while the in-plane one is square-root in temperature [2; 3], as verified in experiments [4; 5; 6; 7].
The above arguments on dimensionality are based on continuum models of superconductors. Over the past several decades, layered superconductors have been discovered [2; 7; 8; 9], where each layer is an atomically thin 2D superconductor, and different layers are weakly coupled by Josephson coupling [10; 11; 12; 13; 14]. For a layered superconductor with an infinite number of layers, a dimensional crossover can be realized by an in-plane magnetic field [2; 9; 15; 16; 17]. When the in-plane field is weak compared with the interlayer Josephson coupling, the layered superconductor can be treated as 3D with an upper critical field linear in temperature. As the field increases so that the interlayer coupling is relatively negligible, the layered superconductor behaves like a 2D superconductor with an upturn in the upper critical field. Such a dimensional crossover has been experimentally observed in several layered superconductors [16; 17; 18; 19].
Recently, it was experimentally found [20] that a layered Ising superconductor NbSe\({}_{2}\) with intermediate thickness can have unconventional critical field behavior. The upper critical field is square-root in temperature and hence 2D at low fields. As the field increases, an additional phase transition associated with a tricritical point is found instead of a dimensional crossover from 3D to 2D. These results are inconsistent with the critical field behavior of the layered superconductor with infinite layers [2; 15; 16; 17], but show similarities to bilayer superconductors [21; 22] instead. In Refs. [21; 22], it is found that in a bilayer superconductor linked by Josephson coupling, unconventional superconducting states similar to the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) states [23; 24] can be realized under in-plane magnetic fields mainly through the orbital effect.
In this manuscript, we study layered superconductors with an intermediate number of layers under in-plane fields, and try to generalize the FFLO-like states of Refs. [21; 22] to multilayer cases.
It is well known that two depairing mechanisms are introduced by an external magnetic field, namely the orbital effect and the Zeeman effect.
Figure 1: Orbital FFLO phases in \(N\)-layer superconductors. a) Upper critical field as a function of temperature. With the increasing zero-field critical temperature, even number \(N\) changes from 2 to 20, labeled by different colors. b) Upper critical field versus Cooper pair momentum. Colors denote \(N\), the same as a). Dashed lines denote the fitting from the analytical formula Eq. (6). As \(N\) increases, upper critical field approaches the envelope \(B=B_{0}(T_{c}-T)/T_{0}\) in a) and optimal Cooper pair momentum approaches the envelope \(q/q_{0}=-\frac{1}{2}B/B_{0}\) in b). Here \(B_{0},T_{0},q_{0}\) are defined in Eq. (2).
At weak fields, the Zeeman effect can be neglected [16; 17; 25]. In the following we first consider the orbital effect of the magnetic field in the low field and high temperature regime, and then introduce the Ising paramagnetic limiting field in the high field and low temperature regime.
_Model_--For the orbital effect, we consider the \(N\)-layer Lawrence-Doniach (LD) model with the following free energy density per area [21; 15; 22]
\[f = \sum_{l=1}^{N}\left\{\alpha|\psi_{l}|^{2}+\frac{\beta}{2}|\psi_{ l}|^{4}+\frac{|(\nabla_{\parallel}-2ie\mathbf{A}_{l})\psi_{l}|^{2}}{2m}\right\}\] \[- J\sum_{l=1}^{N-1}(\psi_{l}^{*}\psi_{l+1}+\psi_{l+1}^{*}\psi_{l}) \exp\left(2ie\int_{ld}^{(l+1)d}A_{z}dz\right),\]
where \(m>0\) is the electron mass, \(e\) the electron charge, and \(J>0\) is the Josephson coupling between nearest neighbor layers. In the gradient terms, \(\nabla_{\parallel}=(\partial_{x},\partial_{y})\) is the in-plane gradient operator, \(\mathbf{A}_{l}\) is the in-plane vector potential on layer \(l\), and \(A_{z}\) is the out-of-plane component of vector potential. As a function of temperature \(T\), \(\alpha=\alpha_{0}(T-T_{c1})\) where \(T_{c1}\) is the critical temperature of the single layer. On the contrary, \(\beta>0\) can be treated as a positive constant independent of temperature.
We first briefly review the salient physics of the LD model with infinite number of layers \(N\rightarrow\infty\), which can be well described by the anisotropic Ginzburg-Landau model of a 3D superconductor. At zero field, the critical temperature is \(T_{c}\equiv T_{c1}+2J/\alpha_{0}\). Near \(T_{c}\), the in-plane upper critical field is linear in temperature \(T\)[2], implying the 3D nature of the infinite-layer LD model.
Now we turn to a finite number of layers. We employ the 2D ansatz \(\psi_{l}=\Delta_{l}e^{i\mathbf{q}\cdot\mathbf{r}}\), whose magnitude is in-plane uniform, \(|\psi_{l}(\mathbf{r})|\equiv\Delta_{l}\), and whose phase is characterized by the Cooper pair momentum \(\mathbf{q}\). Correspondingly, we choose the gauge \(A_{z}=0\), \(\mathbf{A}_{l}=[l-\frac{1}{2}(N+1)]d\mathbf{B}\times\hat{\mathbf{z}}\). Hence the free energy is \(f=f(\Delta,\mathbf{q},\hat{\mathbf{B}})\) with vector \(\Delta=\{\Delta_{l}\}\). To describe the critical behavior, we calculate the Hessian matrix \(H_{ll^{\prime}}\equiv\partial^{2}f/\partial\Delta_{l}\partial\Delta_{l^{\prime}}\) at \(\Delta=0\). The upper critical field of the second-order superconducting phase transition is then determined as follows: find the minimal eigenvalue \(h\) of \(H\), work out the optimal momentum \(\mathbf{q}_{0}\) that minimizes \(h=h(\mathbf{q})\), and finally solve \(h(\mathbf{q}_{0},\mathbf{B},T)=0\) to obtain the upper critical field \(B_{c2}(T)\) as a function of temperature \(T\).
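This procedure amounts to a small numerical computation. The sketch below uses the reduced units introduced in Eq. (2) below (temperature measured from \(T_{c1}\) in units of \(T_{0}\), field in units of \(B_{0}\), momentum in units of \(q_{0}\)), in which the quadratic part of Eq. (1) yields, up to an overall prefactor, diagonal Hessian entries \(t+(q-x_{l}b)^{2}\) with \(x_{l}=l-\frac{1}{2}(N+1)\) and off-diagonal entries \(-1\); the brute-force grid scan is purely illustrative.

```python
import numpy as np

def min_eigenvalue(t, b, q, N):
    """Smallest Hessian eigenvalue of the N-layer LD model in reduced units:
    t = (T - T_c1)/T_0, b = B/B_0, q in units of q_0."""
    x = np.arange(1, N + 1) - 0.5 * (N + 1)          # layer positions about the center
    H = np.diag(t + (q - x * b) ** 2)                # on-site terms
    H -= np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)   # Josephson coupling
    return np.linalg.eigvalsh(H)[0]

def upper_critical_field(t, N, qs=np.linspace(-6, 6, 301), bs=np.linspace(0, 8, 401)):
    """Largest reduced field at which the eigenvalue, minimized over q, touches zero."""
    for b in bs[::-1]:                               # scan downwards from high field
        if min(min_eigenvalue(t, b, q, N) for q in qs) <= 0:
            return b
    return 0.0

# sanity check: at b = 0, instability sets in at t = 2 cos(pi/(N+1)), i.e. at T_cN of Eq. (3)
print(upper_critical_field(t=-0.5, N=6))
```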
_Results_--The upper critical field and optimal Cooper pair momentum are numerically calculated and plotted in Fig. 1a and b respectively, normalized by the three units of temperature, field and momentum respectively
\[T_{0}=\frac{J}{\alpha_{0}},\quad B_{0}=\frac{\Phi_{0}q_{0}}{2\pi d},\quad q_{0 }=\sqrt{2mJ}. \tag{2}\]
These units are all related to Josephson coupling \(J\).
At zero field, superconductivity occurs at temperatures below the layer-dependent critical temperature
\[T_{cN}=T_{c1}+2T_{0}\cos\left(\frac{\pi}{N+1}\right), \tag{3}\]
as plotted in Fig. 2a. The 3D bulk critical temperature is recovered \(T_{cN}\to T_{c}\) at \(N\rightarrow\infty\). Similar results can be found in Ref. [26], while the mechanism is interlayer Cooper pairs instead of interlayer Josephson coupling. The zero-field critical temperature data of few-layer NbSe\({}_{2}\) can be found in Ref. [6], which agree well with Eq. (3) as shown in the inset of Fig. 2a.
As shown in Fig. 1b, at low fields, the Cooper pair momentum remains zero, and the layered superconductor behaves as a 2D superconductor with square-root temperature-dependence of the upper critical field as shown in Fig. 1a. When \(N\rightarrow\infty\), the upper critical field approaches the envelope \(B\to B_{0}(T_{c}-T)/T_{0}\), and behaves more like 3D superconductivity as expected.
When the field further increases, the optimal Cooper pair momentum becomes finite as shown in Fig. 1b, which applies to all even numbers of \(N\geq 2\). This corresponds to a phase transition with the tricritical point
\[B_{N}^{*}\approx 1.6B_{0}\sin\left(\frac{\pi}{N+1}\right),\;T_{N}^{*}\approx T_{c} -1.6T_{0}\sin\left(\frac{\pi}{N+1}\right). \tag{4}\]
Figure 2: a) Zero-field critical temperature \(T_{cN}\) as a function of layer number \(N\). Dots are from numerical simulations as in Fig. 1a and the black line is Eq. (3). Inset: Fittings (lines) of experimental data (markers) in Ref. [6] by Eq. (3). b) Tricritical field \(B_{N}^{*}\) as a function of layer number \(N\). Dots are from numerical simulations as in Fig. 1b and the black line is Eq. (4). The dashed line is \(\sqrt{2}B_{0}\sin(\pi/N)\).
When \(B<B_{N}^{*}\), the Cooper pair momentum is zero, \(q=0\); when \(B>B_{N}^{*}\), \(q\neq 0\); and at the tricritical point \((T^{*},B^{*})\), the two types of superconducting phases \(q=0\) and \(q\neq 0\) coexist with the normal phase. The numerical tricritical field \(B_{N}^{*}\) together with Eq. (4) is plotted for different \(N\) in Fig. 2b. In our finite-layer model, inversion symmetry is found to hold for the free energy, under which layer \(l\) with momentum \(\mathbf{q}\) maps to layer \(N+1-l\) with momentum \(-\mathbf{q}\). Thus solutions with \(\pm\mathbf{q}\) are degenerate in calculating the upper critical field, and in Fig. 1b we only plot the non-positive branch.
Superconducting phases with finite-momentum Cooper pairs are not rare, at least in theoretical studies. In 1964, Fulde and Ferrell [23] and Larkin and Ovchinnikov [24] proposed that, due to the Zeeman effect, a magnetic field can drive Cooper pairs to finite momentum. Since the energy saved by finite-momentum Cooper pairs per unit field is small, conventional FFLO states are expected at low temperatures and high fields, which turn out to be difficult to detect experimentally. However, in our theory, the finite-momentum Cooper pairs are boosted by the magnetic field via the orbital effect, and can survive at relatively low fields and high temperatures. We may dub such states the orbital Fulde-Ferrell-Larkin-Ovchinnikov states. When \(N\to\infty\), \(B_{N}^{*}\to 0\), \(T_{N}^{*}\to T_{c}\), and the orbital FFLO states vanish as expected.
The phase transition of orbital FFLO states will induce a kink in the upper critical field at the tricritical point, as elaborated below.
_Analysis_--To figure out the origin of the orbital FFLO states, we check the spatial distribution of order parameter along the out-of-plane direction. It is found that near phase transitions, Cooper pairs tend to nucleate on boundary layers \(l=1\) and \(l=N\) due to the open boundary condition [27]. Thus the critical behavior of the multilayer superconductor is governed by a pair of boundary modes, namely \(\psi_{l+}(\mathbf{r})=e^{i\mathbf{q}\cdot\mathbf{r}}\psi_{+}f_{l}\) near layer \(l=N\) and \(\psi_{l-}(\mathbf{r})=e^{i\mathbf{q}\cdot\mathbf{r}}\psi_{-}f_{N+1-l}\) near layer \(l=1\), where \(f_{l}\sim l^{2}\) describes the power-law nucleation pattern. One then calculates the 2 by 2 Hessian matrix of the boundary modes \(\mathcal{H}_{\tau\tau^{\prime}}=\partial^{2}f/\partial\psi_{\tau}\partial\psi_ {\tau^{\prime}}\) with \(\tau,\tau^{\prime}=\pm\), to determine the critical behavior of the multilayer superconductor.
Under inversion symmetry \(\psi_{+}\leftrightarrow\psi_{-},\mathbf{q}\to-\mathbf{q}\), the free energy is invariant. Introducing the Pauli matrices \(\mathbf{\tau}\) acting on the basis \((\psi_{+},\psi_{-})^{\mathrm{T}}\), we find the Hessian matrix would satisfy the symmetry constraint \(\tau_{x}\mathcal{H}(\mathbf{q},\mathbf{B})\tau_{x}=\mathcal{H}(-\mathbf{q},\mathbf{B})\). Up to the second order in \(q\) and \(B\) we have
\[\mathcal{H}_{0}(\mathbf{q},\mathbf{B})=a+b(\hat{\mathbf{z}}\times\mathbf{B})\cdot\mathbf{q}\tau_{z }+cq^{2}-\mathcal{J}\tau_{x} \tag{5}\]
where \(a=r(T-T_{a})+\chi B^{2}\), and \(r,\chi,b,c,\mathcal{J},T_{a}\) are phenomenological parameters. For the stability of the free energy, we require \(r,\chi,c\) to be positive. The \(\mathbf{q},\mathbf{B}\) coupling term is due to the orbital effect, while the magnetic energy term \(\chi B^{2}\) can be induced by both orbital and Zeeman effects. The effective Josephson coupling \(\mathcal{J}\) between two boundary modes cannot be neglected, as the localization of the boundary modes is power-law (\(f_{l}\sim l^{2}\)) instead of exponential.
By minimizing Eq. (5) one can obtain the optimal Cooper pair momentum as a function of field
\[\mathbf{q}=\frac{b}{2c}(\mathbf{B}\times\hat{\mathbf{z}})\mathrm{Re}\sqrt{1-\left(\frac{B ^{*}}{B}\right)^{4}} \tag{6}\]
with the tricritical field \(B^{*}=\sqrt{2c|\mathcal{J}|}/b\).
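For reference, Eq. (6) can be evaluated directly, for instance as in the short numpy sketch below, where the real part simply sets \(q=0\) below the tricritical field:

```python
import numpy as np

def optimal_q(B, b_2c, B_star):
    """Magnitude of the optimal Cooper-pair momentum of Eq. (6); b_2c denotes the ratio b/(2c)."""
    B = np.asarray(B, dtype=float)
    arg = 1.0 - (B_star / np.maximum(B, 1e-12)) ** 4     # Re sqrt(.) vanishes when this is negative
    return b_2c * B * np.sqrt(np.clip(arg, 0.0, None))
```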
The field-dependence of optimal Cooper pair momentum is plotted in Fig. 1b (dashed lines), which agrees well with numerical simulations (dots). Correspondingly, the upper critical field as a function of temperature is plotted in Fig. 3 (orange and blue lines), which shows a kink at the tricritical point, and agrees well with experimental data from Ref. [20] (black dots).
Symmetry principles also allow us to write down higher order terms. For example, when the layer has in-plane threefold rotation symmetry and vertical mirror symmetry such as in the transition metal dichalcogenides (TMD), higher order terms can have the following form
\[\mathcal{H}_{1}(\mathbf{q},\mathbf{B})=\lambda_{1}\mathrm{Re}(q_{+}^{2}B_{+}^{4})+ \lambda_{2}\mathrm{Re}(q_{+}^{4}B_{+}^{2}) \tag{7}\]
where \(\lambda_{1,2}\) are phenomenological parameters, and the total Hessian matrix is \(\mathcal{H}=\mathcal{H}_{0}+\mathcal{H}_{1}\). These higher order terms lead to an anisotropy of the field-dependent critical temperature in the orbital FFLO phase with \(q\neq 0\), while the field-dependent critical temperature in the conventional superconducting phase is isotropic since \(q=0\).
Figure 3: Orbital FFLO phases in the effective model. Black dots are experimental data for the upper critical field from Ref. [20], while orange and blue lines are analytical results according to Eq. (5). Three phases, the conventional pairing state \(q=0\), the orbital FFLO state \(q\neq 0\), and the normal phase, coexist at the tricritical point (\(T^{*}=0.87T_{c},B^{*}\)).
To be concrete, we can calculate the anisotropic part of the field-dependent critical temperature by plugging Eq. (6) into Eq. (7)
\[\Delta T_{c}(B,\varphi)=\lambda(B)\theta(B-B^{*})\cos(6\varphi), \tag{8}\]
where \(\mathbf{B}=B(\cos\varphi,\sin\varphi)\) is written in the polar coordinate, \(\theta\) is the Heaviside theta function, and \(\lambda(B)=(-\lambda_{1}\tau^{2}+\lambda_{2}\tau^{4}\zeta)\zeta B^{6}\), with \(\tau=b/(2c),\zeta=1-(B^{*}/B)^{4}\). The leading order anisotropy of the orbital FFLO states in TMD is sixfold as shown in the experiments [20].
The degeneracy between two boundary modes \(\psi_{\pm}\) will be lifted by the quartic order coupling between them
\[f_{4}=\beta_{1}(|\psi_{+}|^{4}+|\psi_{-}|^{4})+\beta_{2}|\psi_{+}|^{2}|\psi_{-} |^{2} \tag{9}\]
with coefficients \(\beta_{1,2}\), and \(\beta_{1}>0\) for the stability of the free energy. When \(\beta_{2}>2\beta_{1}\) the equilibrium state will break the inversion symmetry and choose one of the boundary modes of \(\psi_{\pm}\). When \(\beta_{2}<2\beta_{1}\) the equilibrium state will preserve the inversion symmetry to form a symmetric combination of \(\psi_{\pm}\), while the in-plane translation symmetry along \(\mathbf{q}\propto\mathbf{B}\times\mathbf{z}\) direction is spontaneously broken. In the decoupling limit \(\mathcal{J}\to 0\), the equilibrium state will preserve the inversion symmetry to fully compensate the orbital effect, hence we expect \(\beta_{2}<2\beta_{1}\) generally holds and the orbital FFLO state would be a giant Josephson-type vortex formed by two boundary modes.
Away from the upper critical field region, the order parameter would be determined by the nonlinear Ginzburg-Landau equations of the finite-layer LD model. It would be demanding to solve them even numerically. Nonetheless, in this work we argue the possible spatial configuration from the theoretical results and experimental measurements on vortices [20]. We expect that, away from the upper critical field the orbital FFLO states would be mainly in the bulk, forming Josephson-type vortices in terms of nearest layers. Inspired by the two-component ansatz of the orbital FFLO state, we envision that the general order parameter configuration of orbital FFLO states can be written in the multi-component form \(\Psi_{l,n}=(\psi_{l},\dots,\psi_{l+n})^{\rm T}=\sum_{k=0}^{n}\Phi_{k}e^{i\mathbf{ q}_{k}\cdot\mathbf{r}}\) with multi-momenta \(\{\mathbf{q}_{k}\}\). The distance between Josephson-type vortices is expected to be controlled by the magnetic field [1, 22]. Near the upper critical field, the Josephson-type vortices could be closely packed, so that the bulk supercurrent is compensated by nearest Josephson-type vortices, leaving the giant Josephson-type vortex formed by two boundary modes as discussed previously.
_Paramagnetic limit field of Ising superconductivity_--When \(T\to T_{c1}\), we find the upper critical field always diverges, independent of the layer number \(N\), as shown in Fig. 1a. This divergence is the same as that in the LD model with infinite layers (where the divergence implies the dimensional crossover from 3D to 2D), as the layered superconductor is essentially in the 2D limit at low temperatures and high fields. To cure the divergence of the upper critical field at low temperatures, we need to take into account the Zeeman effect and the Ising spin-orbit coupling (SOC). When all individual layers are considered decoupled in the 2D limit, the layered Ising superconductor is equivalent to a monolayer Ising superconductor. Suppose the effective Ising SOC energy is \(\Delta_{so}\), then the Ising limit is \(B_{so}=\Delta_{so}/\mu\) with magnetic moment \(\mu\), and the in-plane upper critical field of the monolayer Ising superconductor is determined by
\[\log\left(\frac{T}{T_{c1}}\right)=\frac{2B^{2}}{B^{2}+B_{so}^{2}}F\left(\frac {\mu\sqrt{B^{2}+B_{so}^{2}}}{\pi T}\right) \tag{10}\]
where \(F(x)=\frac{1}{2}\mathrm{Re}\left\{\psi(\frac{1}{2})-\psi\left[\frac{1}{2}(1+ ix)\right]\right\}\) is defined by the digamma function \(\psi(x)\), and \(T_{c1}\) is the monolayer critical temperature at zero field.
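Equation (10) is an implicit equation for the upper critical field; it can be solved numerically as in the sketch below, where the digamma function of a complex argument is taken from mpmath and \(\mu=1\) (fields measured in temperature units) is an illustrative choice of units.

```python
import numpy as np
from mpmath import digamma
from scipy.optimize import brentq

def F_pair_breaking(x):
    """F(x) = (1/2) Re[ psi(1/2) - psi((1 + i x)/2) ]."""
    return 0.5 * float((digamma(0.5) - digamma(0.5 * (1 + 1j * x))).real)

def gap_equation(B, T, Tc1, Bso, mu=1.0):
    """Difference between the two sides of Eq. (10); its root in B gives B_c2(T)."""
    Beff = np.hypot(B, Bso)
    return np.log(T / Tc1) - 2.0 * B**2 / Beff**2 * F_pair_breaking(mu * Beff / (np.pi * T))

def in_plane_bc2(T, Tc1, Bso, mu=1.0, Bmax=200.0):
    # enlarge Bmax if the bracket does not contain a sign change
    return brentq(gap_equation, 1e-9, Bmax, args=(T, Tc1, Bso, mu))
```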
Near zero temperature, the upper critical field is still divergent. In other words, at any given field \(B\), superconductivity survives with nonzero critical temperature
\[\frac{T_{c}(B)}{T_{c1}}=\left(\frac{B_{P}}{\sqrt{2(B^{2}+B_{so}^{2})}}\right) ^{B^{2}/B_{so}^{2}}, \tag{11}\]
where \(B_{P}=1.25T_{c}/\mu\) is the Pauli limit [28, 29].
In practice, when the critical temperature is too low to be detected by laboratory apparatus, superconductivity is considered destroyed by the magnetic field. The following formula would be a good choice for the practical value of zero-temperature in-plane upper critical field of the monolayer Ising superconductor
\[B_{c2}(T=0)\approx B_{so}+2B_{P}. \tag{12}\]
When the Ising limit is far beyond the Pauli limit, \(B_{c2}(0)\) would be determined mainly by the Ising limit instead of the geometric mean between Ising and Pauli limits [6].
In experiments, one may find that the superconducting gap continuously decreases to values smaller than the precision of measurements, as shown in the bilayer and trilayer NbSe\({}_{2}\) samples [30].
To summarize, we generalize the orbital FFLO states theoretically derived in the bilayer superconductors [21, 22] to multilayers in the high temperature low field regime, and further study the 2D limit in the low temperature high field regime. Our theory can be applied in NbSe\({}_{2}\) layers [6, 20, 30].
_Acknowledgement_--The author thanks Puhua Wan, Jianting Ye and Chao-Xing Liu for inspiring discussions. The author thanks Puhua Wan and Jianting Ye for sharing the experimental data. The author acknowledges the National Natural Science Foundation of China (Grant. No. 12174021) for the financial support. |
2305.12215 | Introduction to Loop Quantum Gravity: Rovelli's lectures on LQG | These notes are a transcript of Carlo Rovelli's lectures on Loop Quantum
Gravity, given in Marseille in 2018, which (at present) can be entirely found
on YouTube. I transcribed them in LaTeX in early 2020 as an exercise to get
ready for my Ph.D. in LQG at Western University. This transcript is meant to be
a (hopefully helpful) integration for the video version. I reported the order
of the topics and the chronological structure exactly as presented by Rovelli
throughout the course, primarily to facilitate the comparison. Each Section
corresponds to a different Lecture. The parts written in textit are my
additions. Sometimes in the text, I report references, which specify precisely
the minute and the second of the corresponding video on YouTube, to very short
historical digressions or excursus made during the lectures by Rovelli that I
have not explicitly transcribed in these notes. Where appropriate, I took some
figures from the book "Covariant Loop Quantum Gravity - An elementary
introduction to Quantum Gravity and Spinfoam Theory" by Carlo Rovelli and
Francesca Vidotto, to which I always refer by the term "the book" in the
following. For what concerns the equations, where possible, I tried to write
down the "correct" versions present within the book. Finally, I thank Carlo
Rovelli himself for reviewing these notes. I apologize in advance for any
errors, and I wish everyone a lot of fun! | Pietropaolo Frisoni | 2023-05-20T15:44:40Z | http://arxiv.org/abs/2305.12215v1 | # Introduction to Loop Quantum Gravity
###### Abstract
These notes are a transcript of Carlo Rovelli's lectures on Loop Quantum Gravity, given in Marseille in 2018, which (at present) can be entirely found on YouTube. I transcribed them in LaTeX in early 2020 as an exercise to get ready for my Ph.D. in LQG at Western University. This transcript is meant to be a (hopefully helpful) integration for the video version. I reported the order of the topics and the chronological structure exactly as presented by Rovelli throughout the course, primarily to facilitate the comparison. Each Section corresponds to a different Lecture. The parts written in _texti_ are my additions. Sometimes in the text, I report references, which specify precisely the minute and the second of the corresponding video on YouTube, to very short historical digressions or excursus made during the lectures by Rovelli that I have not explicitly transcribed in these notes. Where appropriate, I took some figures from the book "Covariant Loop Quantum Gravity - An elementary introduction to Quantum Gravity and Spinfoam Theory" by Carlo Rovelli and Francesca Vidotto, to which I always refer by the term "the book" in the following. For what concerns the equations, where possible, I tried to write down the "correct" versions present within the book. Finally, I thank Carlo Rovelli himself for reviewing these notes. I apologize in advance for any errors, and I wish everyone a lot of fun!
###### Contents
* 1 The empirical basis of quantum gravity
* 2 Space
* 2.1 Concepts of Space
* 3 \(SU(2)\) group
* 3.1 Mathematics of \(SU(2)\)
* 3.2 Left Invariant operator
* 3.3 Representation theory
* 3.4 Peter-Weyl theorem
* 4 |
2304.05440 | PixelRNN: In-pixel Recurrent Neural Networks for End-to-end-optimized
Perception with Neural Sensors | Conventional image sensors digitize high-resolution images at fast frame
rates, producing a large amount of data that needs to be transmitted off the
sensor for further processing. This is challenging for perception systems
operating on edge devices, because communication is power inefficient and
induces latency. Fueled by innovations in stacked image sensor fabrication,
emerging sensor-processors offer programmability and minimal processing
capabilities directly on the sensor. We exploit these capabilities by
developing an efficient recurrent neural network architecture, PixelRNN, that
encodes spatio-temporal features on the sensor using purely binary operations.
PixelRNN reduces the amount of data to be transmitted off the sensor by a
factor of 64x compared to conventional systems while offering competitive
accuracy for hand gesture recognition and lip reading tasks. We experimentally
validate PixelRNN using a prototype implementation on the SCAMP-5
sensor-processor platform. | Haley M. So, Laurie Bose, Piotr Dudek, Gordon Wetzstein | 2023-04-11T18:16:47Z | http://arxiv.org/abs/2304.05440v1 | PixelRNN: In-pixel Recurrent Neural Networks for End-to-end-optimized Perception with Neural Sensors
###### Abstract
Conventional image sensors digitize high-resolution images at fast frame rates, producing a large amount of data that needs to be transmitted off the sensor for further processing. This is challenging for perception systems operating on edge devices, because communication is power inefficient and induces latency. Fueled by innovations in stacked image sensor fabrication, emerging sensor-processors offer programmability and minimal processing capabilities directly on the sensor. We exploit these capabilities by developing an efficient recurrent neural network architecture, PixelRNN, that encodes spatio-temporal features on the sensor using purely binary operations. PixelRNN reduces the amount of data to be transmitted off the sensor by a factor of \(64\times\) compared to conventional systems while offering competitive accuracy for hand gesture recognition and lip reading tasks. We experimentally validate PixelRNN using a prototype implementation on the SCAMP-5 sensor-processor platform.
## 1 Introduction
Increasingly, cameras on edge devices are being used for enabling computer vision perception tasks rather than for capturing images that look beautiful to the human eye. Applications include various tasks in virtual and augmented reality displays, wearable computing systems, drones, robotics, and the internet of things, among many others. For such edge devices, low-power operation is crucial, making it challenging to deploy large neural network architectures which traditionally leverage modern graphics processing units for inference.
A plethora of approaches have been developed in the "TinyML" community to address these challenges. Broadly speaking, these efforts focus on developing smaller [25] or more efficient network architectures, often by pruning or quantizing larger models [10]. Platforms like TensorFlow Lite Micro enable application developers to deploy their models directly to power-efficient microcontrollers which process data closer to the sensor. Specialized artificial intelligence (AI) accelerators, such as Movidius' Myriad vision processing unit, further reduce the power consumption. While these approaches can optimize the processing component of a perception system, they do not reduce the large amount of digitized sensor data that needs to be transmitted to the processor in the first place, via power-hungry interfaces such as MIPI-CSI, and stored in the memory. This omission is highly significant as data transmission and memory access are among the biggest power sinks in imaging systems [20]. This raises the question of how to design perception systems where sensing, data communication, and processing components are optimized end to end.
Efficient perception systems could be designed such that important task-specific image and video features are encoded directly on the imaging sensor using in-pixel processing, resulting in the sensor's output being significantly reduced to only these sparse features. This form of in-pixel feature encoding mechanism could significantly reduce the required bandwidth, thus reducing power consumption of data communication, memory management, and downstream processing. Event sensors [19] and emerging focal-plane sensor-processors [53] are promising hardware platforms for such perception systems because they can directly extract either temporal information or spatial features, respectively, on the sensor. These features can be transmitted off the sensor using low-power parallel communication interfaces supporting low bandwidths.
Our work is motivated by the limitations of existing feature extraction methods demonstrated on these emerging sensor platforms. Rather than extracting simple temporal gradients [19] or spatial-only features via convolutional neural networks (CNNs) [7, 6], we propose in-pixel recurrent neural networks (RNNs) that efficiently extract spatio-temporal features on sensor-processors for bandwidth-efficient perception systems. RNNs are state-of-the-art network architectures for processing sequences, such as video in computer vision tasks [31]. Inspired by the emerging paradigm of neural sensors [39], our in-pixel RNN framework, dubbed PixelRNN, comprises a light-weight in-pixel spatio-temporal feature encoder.
This in-pixel network is jointly optimized with a task-specific downstream network. We demonstrate that our architecture outperforms event sensors and CNN-based sensor-processors on perception tasks, including hand gesture recognition and lip reading, while drastically reducing the required bandwidth compared to traditional sensor-based approaches. Moreover, we demonstrate that PixelRNN offers better performance and lower memory requirements than larger RNN architectures in the low-precision settings of in-pixel processing.
Our work's contributions include
* the design and implementation of in-pixel recurrent neural networks for sensor-processors, enabling bandwidth-efficient perception on edge devices;
* the demonstration that our on-sensor spatio-temporal feature encoding maintains high performance while significantly reducing sensor-to-processor communication bandwidth with several tasks, including hand gesture recognition and lip reading;
* the experimental demonstration of the benefits of in-pixel RNNs using a prototype implementation on the SCAMP-5 sensor-processor.
## 2 Related Work
Performing feature extraction on power and memory constrained computing systems requires the union of multiple fields: machine learning, specialized hardware, and network compression techniques.
Machine Learning on the Edge.Edge computing devices are often subject to severe power and memory constraints, leading to various avenues of research and development. On the hardware side, approaches include custom application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other energy efficient AI accelerators. However, this does not address the issue of data transmission from imaging sensors, which is one of the main sources of power consumption [20]. To circumvent the memory constraints, network compression techniques are introduced. They fall into roughly five categories [10]: 1. parameter reduction by pruning redundancy [4, 46, 26, 57]; 2. low-rank parameter factorization [15, 28, 49]; 3. carefully designing structured convolutional filters [16, 48, 52]; 4. creating smaller models [22, 1, 8]; 5. parameter quantization [13, 42, 27, 50, 35, 58, 30]. In this work, we move compute directly onto the sensor and also apply ideas and techniques mentioned above.
Beyond Frame-based Sensing.Event-based cameras have been gaining popularity [19] as the readout is asynchronous and often sparse, triggered by pixel value changes above a certain threshold. However, these sensors are not programmable and do data compression with a simple fixed function. Another emerging class of sensors include focal plane sensor-processors, also known as pixel processor arrays. Along with supporting traditional sensing capabilities, these sensors have a processing element embedded into each pixel. While conventional vision systems have separate hardware for sensing and computing, sensor-processors perform both tasks "in pixel," enabling efficient, low-latency and low-power computation. Recently, sensor-processors with some programmability have emerged [9, 40, 37, 43, 55, 18]. Further advances in 3D fabrication techniques, including wafer-level hybrid bonding and stacked CMOS image sensors, set the stage for rapid development of increasingly more capable programmable sensors.
In-pixel Perception.In the past few years, there has been a surge of advances in neural networks for vision tasks as well as an increasing desire to perform tasks on constrained mobile and wearable computing systems. Sensor-processors are a natural fit for such systems as they can perform sophisticated visual computational tasks at a significantly lower power than traditional hardware. Some early chips [36, 44] were based on implementing convolution kernels in a recurrent dynamical "Cellular Neural Network" model [12, 45]. In 2019, Bose et al. created "A Camera that CNNs" - one of the first works to implement a deep convolutional neural network on the sensor [6]. Since then, there have been a number of other works in CNNs on programmable sensors [47, 14, 51, 21, 7, 32, 34, 33]. These works extract features in the spatial domain, but miss a huge opportunity in failing to exploit temporal information. Purely CNN-based approaches do not utilize or capitalize on the temporal redundancy or information of the sequence of frames. Our work introduces light-weight extraction of spatio-temporal features, better utilizing the structure of the visual data, all while maintaining low bandwidth and high accuracy.
## 3 In-pixel Recurrent Neural Networks
Emerging sensor-processors with in-pixel processing enable the development of end-to-end-optimized on-sensor and downstream networks off-sensor. In this section, we describe a new on-sensor recurrent spatio-temporal feature encoder that significantly improves upon existing temporal- or spatial-only feature encoders for video processing, as shown in the next section. The proposed pipeline is illustrated in Figure 1.
### In-Pixel CNN-based Feature Encoder
Convolutional neural networks are among the most common network architectures in computer vision. They are
written as
\[\text{CNN}\left(\mathbf{x}\right)=\left(\phi_{n-1}\circ\phi_{n-2}\circ\ldots\circ\phi_{0}\right)\left(\mathbf{x}\right),\qquad\phi_{i}:\mathbf{x}_{i}\mapsto\psi\left(\mathbf{w}_{i}*\mathbf{x}_{i}+\mathbf{b}_{i}\right), \tag{1}\]
where \(\mathbf{w}_{i}*\mathbf{x}_{i}:\mathbb{N}^{N_{i}\times M_{i}\times C_{i}} \mapsto\mathbb{N}^{N_{i+1}\times M_{i+1}\times C_{i+1}}\) describes the multi-channel convolution of CNN layer \(i\) and \(\mathbf{b}_{i}\) is a vector containing bias values. Here, the input image \(\mathbf{x}_{i}\) has \(C_{i}\) channels and a resolution of \(N_{i}\times M_{i}\) pixel and the output of layer \(i\) is further processed by the nonlinear activation function \(\psi\).
The SCAMP-5 system used in this work lacks native multiplication operations at the pixel level. Due to this limitation, works storing network weights \(\mathbf{w}_{i}\) in pixel typically restrict themselves to using binary, \(\{-1,1\}\), or ternary \(\{-1,0,1\}\) values. This reduces all multiplications to sums or differences, which are highly efficient native operations.
### In-pixel Spatio-temporal Feature Encoding
Recurrent neural networks (RNNs) are state-of-the-art network architectures for video processing. Whereas a CNN only considers each image in isolation, an RNN extracts spatio-temporal features to process video sequences more effectively. Network architectures for sensor-processors must satisfy two key criteria. First, they should be small and use low-precision weights. Second, they should comprise largely of local operations as the processors embedded within each pixel can only communicate with their direct neighbors (e.g., [9]).
To satisfy these unique constraints, we devise an RNN architecture that combines ideas from convolutional gated recurrent units (GRUs) [2] and minimal gated units [56]. The resulting simple yet effective PixelRNN architecture is written as
\[\mathbf{f}_{t} =\psi_{f}\left(\mathbf{w}_{f}*\text{CNN}\left(\mathbf{x}_{t} \right)+\mathbf{u}_{f}*\mathbf{h}_{t-1}\right), \tag{2}\] \[\mathbf{h}_{t} =\mathbf{f}_{t}\odot\mathbf{h}_{t-1},\] (3) \[\mathbf{o}_{t} =\psi_{o}\left(\mathbf{w}_{o}*\text{CNN}\left(\mathbf{x}_{t} \right)+\mathbf{u}_{o}*\mathbf{h}_{t-1}\right), \tag{4}\]
where \(\mathbf{w}_{f}\), \(\mathbf{u}_{f}\), \(\mathbf{w}_{o}\), \(\mathbf{u}_{o}\) are small convolution kernels and \(\psi_{f}\) is either the sign (when working with binary constraints) or the sigmoid function (when working with full precision). We include an optional nonlinear activation function \(\psi_{o}\) and an output layer \(\mathbf{o}_{t}\) representing the values that are actually transmitted off sensor to the downstream network running on a separate processor. For low-bandwidth operation, the output layer \(o\) is only computed, and values transmitted off the sensor, every \(K\) frames. The output layer can optionally be omitted, in which case the hidden state \(\mathbf{h}_{t}\) is streamed off the sensor every \(K\) frames.
PixelRNN uses what is commonly known as a "forget gate", \(\mathbf{f}_{t}\), and a hidden state \(\mathbf{h}_{t}\), which are updated at each time step \(t\) from the input \(\mathbf{x}_{t}\). RNNs use forget gates to implement a "memory" mechanism that discards redundant spatial features over time. PixelRNN's forget gate is also motivated by this intuition, but our update mechanism in Eq. 3 is tailored to working with binary constraints using values \(\{-1,1\}\). In this case, Eq. 3 flips the sign of \(\mathbf{f}_{t}\) in an element-wise manner rather than decaying over time. This mechanism works very well in practice when \(\mathbf{h}_{t}\) is re-initialized to all ones every 16 time steps.
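A minimal PyTorch sketch of this recurrence in the binary setting is given below; the kernel size, the single-channel hidden state, and the use of full-precision convolution weights (weight binarization is discussed in the next subsection) are simplifications made for illustration.

```python
import torch
import torch.nn as nn

def binarize(x):
    # map activations to {-1, 1}; stands in for the sign non-linearity psi_f
    return (x >= 0).float() * 2 - 1

class PixelRNNCell(nn.Module):
    """Sketch of the PixelRNN recurrence of Eqs. (2)-(4) with psi_o set to the identity."""
    def __init__(self, channels=1, k=5):
        super().__init__()
        p = k // 2
        self.wf = nn.Conv2d(channels, channels, k, padding=p, bias=False)
        self.uf = nn.Conv2d(channels, channels, k, padding=p, bias=False)
        self.wo = nn.Conv2d(channels, channels, k, padding=p, bias=False)
        self.uo = nn.Conv2d(channels, channels, k, padding=p, bias=False)

    def forward(self, cnn_features, h):
        f = binarize(self.wf(cnn_features) + self.uf(h))   # forget gate, Eq. (2)
        h = f * h                                          # sign-flip update of the hidden state, Eq. (3)
        o = self.wo(cnn_features) + self.uo(h)             # output layer, Eq. (4)
        return o, h

# the hidden state is re-initialized to all ones and the output is read out every 16 frames
cell = PixelRNNCell()
h = torch.ones(1, 1, 64, 64)
for frame_features in binarize(torch.randn(16, 1, 1, 64, 64)):
    o, h = cell(frame_features, h)
```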
PixelRNN's architecture includes alternative spatial- or temporal-only feature extractors as special cases. For example, it is intuitive to see that it models a conventional CNN by omitting the recurrent units. We specifically write out the output gate in our definition to make it intuitive how PixelRNN also approximates a difference camera as a special case, which effectively implements a temporal-only feature encoder.
Figure 1: The perception pipeline of PixelRNN can be broken down into an on-sensor encoder and a task-specific decoder. On the left is the camera equipped with a sensor–processor, which offers processing and memory at the pixel level. The captured light is directly processed by a CNN that extracts spatial features, which are further processed by a convolutional recurrent neural network with built-in memory and temporal feature extraction. Here we show our PixelRNN variant on the right, \(\star\) being the convolution operator, \(\odot\) element-wise multiplication, and \(\psi\) a nonlinear function. Instead of sending out full \(256\times 256\) values at every time step, our encoder compresses by \(64\times\). While we show this pipeline for a lip reading task, the decoder can be designed for any perception task.
In this case, \(\mathbf{h}_{t}=\mathbf{x}_{t}\), \(\mathbf{w}_{o}=1\), \(\mathbf{u}_{o}=-1\), and \(\psi_{o}\left(x\right)=\begin{cases}-1&\text{for }x<-\delta\\ 0&\text{for }-\delta\leq x\leq\delta\\ 1&\text{for }\delta<x\end{cases}\) for some threshold \(\delta\). The image formation model of event cameras [19] is asynchronous and a difference camera represents only a crude approximation, but it serves as a pedagogically useful temporal-only encoder in the context of this discussion.
### Learning Quantized In-pixel Parameters
PixelRNN uses binary weights to reduce all multiplications to sums. To learn these parameters efficiently, we parameterize each of these values \(w\) using a continuous value \(\tilde{w}\) and a quantization function \(q\) such that
\[w=q\left(\tilde{w}\right),\quad q:\mathbb{R}\rightarrow\mathcal{Q}. \tag{5}\]
Here, \(q\) maps a continuous value to the closest discrete value in the feasible set \(\mathcal{Q}\), i.e., \(\{-1,1\}\).
One can employ surrogate gradient methods [3, 54], continuous relaxation of categorical variables using Gumbel-Softmax [29, 38], or other approaches to approximately differentiate through \(q\). For the binary weights we use, \(w=q(\tilde{w})=\text{sign}(\tilde{w})\), and we found approximating the gradient of the sign function with the derivative of \(\text{tanh}(mx)\) produced very good results:
\[\frac{\partial q}{\partial\tilde{w}}\approx m\cdot(1-\tanh^{2}(m\tilde{w})) \tag{6}\]
where \(m>0\) controls the steepness of the \(\tanh\) function, which is used as a differentiable proxy for \(q\approx\tanh(m\tilde{w})\) in the backward pass. The larger \(m\) is, the more it resembles the sign function and the more the gradient resembles the delta function.
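One way to realize this surrogate in PyTorch is a custom autograd function, sketched below with an illustrative steepness \(m\):

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """q(w) = sign(w) in the forward pass; backward uses the tanh-based surrogate of Eq. (6)."""
    m = 4.0   # steepness of the tanh proxy (illustrative value)

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        m = BinarizeSTE.m
        return grad_out * m * (1.0 - torch.tanh(m * w) ** 2)

# binary weights in the forward pass, gradients flow back to the latent full-precision weights
w_tilde = torch.randn(16, 1, 5, 5, requires_grad=True)
w_bin = BinarizeSTE.apply(w_tilde)
w_bin.sum().backward()   # placeholder objective to exercise the surrogate gradient
```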
### Implementation Details
We implement our per-frame CNN feature encoder, testing both 1 or 2 layers and process images at a resolution of \(64\times 64\) pixels by downsampling the raw sensor images before feeding them into the CNN. In all experiments, our PixelRNN transmits data off the sensor every 16 frames. Thus, we achieve a reduction in bandwidth by a factor of 64\(\times\) compared to the raw data. In all of our experiments, we set the function \(\psi_{o}\) to the identity function.
Additional implementation details are found in the supplement and source code will be made public for enhanced reproducibility.
## 4 Experiments
Evaluating Feature Encoders.As discussed in the previous section, RNNs require a CNN-based feature encoder as part of their architecture. In-pixel CNNs have been described in prior work [7, 32, 34], albeit not in the context of video processing with RNNs.
Table 1 summarizes the simulated performance of re-implementations of various CNN architectures on image classification using the MNIST and CIFAR-10 datasets, and on hand gesture recognition from individual images.
Bose et al. [6, 7] describe a 2-layer CNN with ternary weights. The two works have the same architecture but differ drastically in the sensor-processor implementation. Liu et al. [32, 34] describe 1- and 3-layer CNNs with binary weights using a different architecture than Bose while relying on similar sensor-processor implementation concepts. Our feature encoder is a binary 2-layer variant of Bose et al.'s CNN. Each layer has 16 kernels of size \(5\times 5\) and is followed by a non-linearity and \(4\times 4\) max-pooling. The 16 resulting \(16\times 16\) channels are then concatenated into a single \(64\times 64\) image to serve as the input to the next convolutional layer or to the PixelRNN. All of these CNNs perform roughly on par, with some performing better at some tasks than others. Ours strikes a good balance between accuracy and model size. We do not claim this CNN to be a contribution of our work, but include this brief comparison for completeness.
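The sketch below mirrors this encoder in PyTorch; the padding, the binarization of intermediate features, and the channel-to-plane tiling order are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryCNNEncoder(nn.Module):
    """Two conv layers with 16 binary 5x5 kernels each, ReLU, 4x4 max-pooling, and
    re-tiling of the 16 feature maps of 16x16 into a single 64x64 plane."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, 5, padding=2, bias=False)
        self.conv2 = nn.Conv2d(1, 16, 5, padding=2, bias=False)

    @staticmethod
    def _tile(x):
        # (B, 16, 16, 16) -> (B, 1, 64, 64): lay the 16 channels out on a 4x4 grid
        b = x.size(0)
        return x.view(b, 4, 4, 16, 16).permute(0, 1, 3, 2, 4).reshape(b, 1, 64, 64)

    def _layer(self, conv, x):
        w = torch.sign(conv.weight)                    # binary weights in the forward pass
        x = F.max_pool2d(F.relu(F.conv2d(x, w, padding=2)), 4)   # 64x64 -> 16x16 per channel
        x = (x > 0).float() * 2 - 1                    # binarize features to {-1, 1}
        return self._tile(x)

    def forward(self, x):                              # x: (B, 1, 64, 64)
        return self._layer(self.conv2, self._layer(self.conv1, x))
```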
Baseline Architectures.We use several baselines for our analysis. The RAW camera mode simply outputs the input frame at every time step and represents the naive imaging approach. The simulated difference camera represents a simple temporal-only feature encoder. We also include several RNN architectures, including long short-term memory (LSTM), gated recurrent unit (GRU), minimal gated unit (MGU), a simple RNN (SRNN), and our PixelRNN. Moreover, we evaluated each of the RNN architectures using 1-layer and 2-layer CNN feature encoders. The output of all RNNs is read from the sensor only once every 16 time steps. All baselines represent options for in-pixel processing and their respective output is streamed off the sensor. In all cases, a single fully-connected network layer processes this output on a downstream processor to compute the final classification scores.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Model Name & MNIST & CIFAR-10 & Hand & \# Model & Size \\ & & & Gesture & Params & (MB) \\ \hline Bose [7, 6] & 95.0\% & 39.8\% & 43.4\% & 257 & 0.05 \\ Liu 2020 [32] & 80.0\% & 32.5\% & 57.4\% & 258 & 0.03 \\ Liu 2022 [34] & **95.1\%** & 32.6\% & 60.2\% & \(2,374\) & 0.30 \\ Our CNN & 90.9\% & **43.1\%** & **68.1\%** & 802 & 0.10 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Comparing CNN Feature Encoders. We simulated different in-pixel CNNs using image classification on MNIST, CIFAR-10, and the Cambridge hand gesture recognition based on different implementations. All CNN architectures perform roughly on par with our binary 2-layer CNN encoder striking a good balance between accuracy and model size. The model size is computed by multiplying the number of model parameters by the quantization of the values.**
This fully-connected layer is trained end to end for each of the baselines. While this fully-connected layer is simple, it is effective and could be customized for a specific task.
Additional details on these baselines, including formulas and training details, are listed in the supplement. Table 2 shows an overview of these, listing the number of model parameters and the readout bandwidth for each of them.
Datasets.For the hand gesture recognition task, we use the Cambridge Hand Gesture Recognition dataset. This dataset consists of 900 video clips of 9 gesture classes; each class contains 100 videos. For the lip reading task, we use the Tulips1 dataset. This dataset is a small Audiovisual database of 12 subjects saying the first 4 digits in English; it was introduced in [41].
Accuracy vs. Memory.In Figure S1, we evaluate the accuracy of several baseline architectures on two tasks: hand gesture recognition (left) and lip reading (right). We compare the baselines described above, each with 1- and 2-layer CNN encoders and binary or full 32-bit floating point precision. For the full-precision networks, PixelRNN achieves an accuracy comparable to the best models, but it provides one of the lowest memory footprints. Comparing the networks with binary weights, PixelRNN also offers the best accuracy with a memory footprint comparable to the next best method. Surprisingly, larger architectures, such as GRUs and LSTMs, do not perform well when used with binary weights. This can be explained by the increasing difficulty of reliably training increasingly large networks with binary parameter constraints. Leaner networks, such as SRNN and PixelRNN, can be trained more robustly and reliably in these settings. Note that the memory plotted on the x-axis represents all intermediate features, per pixel, that need to be stored during a single forward pass through the RNN. We do not count the network parameters in this plot, because they do not dominate the memory requirements and can be shared among the pixels.
Constraints of the Experimental Platform.Our hardware platform, SCAMP-5, provides an equivalent to 32 bytes of memory per pixel for storing intermediate features. This limit is illustrated as dashed vertical lines in Figure S1, indicating that only low-precision RNN networks are feasible on this platform. The available memory is computed as follows. SCAMP-5 has 6 analog registers per pixel, each we
\begin{table}
\begin{tabular}{c c c} \hline \hline Model Name & \# Model Parameters & Readout Bandwidth \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Overview of the baseline architectures**, listing the number of model parameters and the readout bandwidth for each baseline.
assume is equivalent to 1 byte: 2 registers store the model weights, 2 are reserved for computation, leaving 2 bytes per pixels for the intermediate features. SCAMP-5 consists of an array of \(256\times 256\) pixel processors, however our approach operates on a smaller effective image size of \(64\times 64\). This allows us to consider a single "pixel" to comprise of a block of \(4\times 4\) pixel elements, increasing the effective bytes per pixel to 32 bytes.
Accuracy vs. Bandwidth. We select a readout bandwidth of 4,096 values (every 16 frames) based on the available bandwidth of our hardware platform, the SCAMP-5 sensor. In Figure 3 we evaluate the effect of further reducing this bandwidth on the accuracy for the PixelRNN architecture. Bandwidth is controlled using a max-pooling layer operating at differing sizes from \(1\times 1\) through \(8\times 8\) and then at multiples of 8 up to \(64\times 64\) before inputting the intensity images to PixelRNN. The resulting output bandwidths range from 1 to 4,096 values. We ran each experiment ten times and the best performance of each is plotted for hand gesture recognition and lip reading. We observe that the bandwidth could be further reduced to about 1,000 values every 16 frames without significantly degrading the accuracy on these datasets. However, decreasing the bandwidth beyond this also reduces the accuracy.
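For reference, the short calculation below reproduces the bandwidth figures quoted here; it is an illustrative script rather than part of the SCAMP-5 toolchain, and simply counts the values left in a \(64\times 64\) map after max-pooling of a given size.

```python
# Values read out every 16 frames for a 64x64 feature map after max-pooling
# with different window sizes (only sizes that divide 64 are listed).
feature_size = 64
for pool in [1, 2, 4, 8, 16, 32, 64]:
    out = feature_size // pool
    print(f"pool {pool:2d}x{pool:<2d} -> {out * out:4d} values per readout")
```

A pooling window of \(1\times 1\) gives the full 4,096 values, while \(2\times 2\) pooling already reduces the readout to 1,024 values, close to the 1,000-value regime discussed above.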
## 5 Experimental Prototype
### Pixel-Level Programmable Sensors
SCAMP-5 [9] is one of the emerging programmable sensors representative of the class of focal-plane sensor-processors (FPSP). Unlike conventional image sensors, each of the \(256\times 256\) pixels is equipped with an arithmetic logic unit, 6 local analog and 13 digital memory registers, control and I/O circuitry, and access to certain registers of the four neighboring pixels. SCAMP-5 operates in single-instruction multiple-data (SIMD) mode with capabilities to address individual pixels, patterns of pixels, or the whole array. It features mixed-mode operation execution that allows for low-power compute prior to A/D conversion. Most importantly, SCAMP-5 is programmable.
### Implementation of PixelRNN on SCAMP-5
The pipeline for our prototype feature extractor is shown in Figure 4. Because of the memory architecture on SCAMP-5, performing multiple convolutions and different updates of gates and states requires us to split the focal plane into 16 parallel processors with a smaller effective image size. The input image is binarized, downsampled to \(64\times 64\), and duplicated out to a \(4\times 4\) grid of parallel processor elements (PE) of size \(64\times 64\). Each PE performs a convolution with a \(5\times 5\) kernel, yielding 16 feature maps per convolutional layer. The 16 \(64\times 64\) features undergo a ReLU activation, max-pooling and binarization to 16 \(16\times 16\) maps. These are then concatenated to create a single \(64\times 64\) input to the RNN or another convolutional layer if desired. This process makes use of the image transformation methods for SCAMP-5 introduced by [5]. Our RNN uses both the output of the CNN and the hidden state to update the hidden state and compute an output every 16 time steps. The RNN gates are calculated via convolution and element-wise multiplication. To suit the SCAMP-5 architecture, we limited operations to addition, XOR, and negation, and trained a binary version of PixelRNN, binarizing weights and features to -1 and 1. Instead of multiplications, we now just need addition and subtraction.
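A minimal NumPy sketch of this front-end data flow is given below. It mirrors only the sequence of operations (binarize, \(5\times 5\) convolutions, ReLU, max-pooling, re-binarization, concatenation on the \(4\times 4\) PE grid); the kernels and the binarization threshold are random placeholders rather than trained SCAMP-5 weights, and no analog effects are modelled.

```python
import numpy as np

def binarize(x):
    # Map to {-1, +1}, matching the binary weights/features used on-sensor.
    return np.where(x >= 0, 1.0, -1.0)

def conv2d_same(img, kern):
    # Simple 'same'-size convolution via zero padding (a stand-in for the
    # shift-and-add convolution performed on the focal plane).
    kh, kw = kern.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(img)
    for i in range(kh):
        for j in range(kw):
            out += kern[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def maxpool(img, s):
    h, w = img.shape
    return img.reshape(h // s, s, w // s, s).max(axis=(1, 3))

rng = np.random.default_rng(0)
frame = binarize(rng.standard_normal((64, 64)))        # binarized input frame
kernels = binarize(rng.standard_normal((16, 5, 5)))    # 16 binary 5x5 kernels (placeholders)

feats = []
for k in kernels:
    f = np.maximum(conv2d_same(frame, k), 0.0)         # ReLU
    feats.append(binarize(maxpool(f, 4) - 0.5))        # pool 64x64 -> 16x16, re-binarize (threshold is a placeholder)
feats = np.array(feats)                                # 16 maps of 16x16

# Concatenate the 16 maps into a single 64x64 plane, as done on the 4x4 PE grid.
rnn_input = np.block([[feats[4 * r + c] for c in range(4)] for r in range(4)])
print(rnn_input.shape)   # (64, 64)
```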
Memory Allocation. SCAMP-5's analog and digital registers are limited in number and present different challenges. Analog registers cannot hold values for long periods before decaying. The decay is exacerbated if one moves information from pixel to pixel, such as in shifting an image. We found that using analog registers with a routine that refreshes their content to a set of quantized values, inspired by [7], helped circumvent some of these challenges. This allowed the storage of binary weights for convolutions and the hidden state for prolonged periods of time. The remaining memory registers were used for performing computations and storing intermediate feature maps.
Convolution Operation. A single pixel cannot hold all weights of a single kernel, so the weights are spread across a single analog register plane as shown in Figure 4. To perform a convolution, SCAMP-5 iterates through all 25
Figure 3: **Bandwidth Analysis. We can control the bandwidth of data read off the sensor using increasingly larger max-pooling layers before inputting the intensity images to PixelRNN at the cost of decreased accuracy.**
weights in the \(5\times 5\) kernel, each time multiplying it with the whole image and adding to a running sum. The image is then shifted, the next weight fills the register plane, and the process continues until the feature is computed. We include a detailed diagram in the supplement and more information can be found in [7].
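This shift-and-accumulate scheme can be emulated off-sensor as follows; the sketch reproduces the data flow only, with the `shift` routine standing in for pixel-to-pixel data movement, and register decay and noise ignored.

```python
import numpy as np

def shift(img, dy, dx):
    # Shift the whole image plane, filling vacated pixels with zeros,
    # analogous to moving data between neighbouring pixels on SCAMP-5.
    out = np.zeros_like(img)
    h, w = img.shape
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        img[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out

def scamp_style_conv(img, kern):
    # Iterate over all 25 weights of the 5x5 kernel; each weight multiplies
    # the appropriately shifted image and is added to a running sum.
    acc = np.zeros_like(img)
    k = kern.shape[0] // 2
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            acc += kern[dy + k, dx + k] * shift(img, dy, dx)
    return acc

rng = np.random.default_rng(1)
img = np.where(rng.random((64, 64)) > 0.5, 1.0, -1.0)   # binary image
kern = np.where(rng.random((5, 5)) > 0.5, 1.0, -1.0)    # binary kernel (placeholder)
out = scamp_style_conv(img, kern)
print(out.shape)  # (64, 64)
```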
RNN Operation. Figure 5 shows the layout of the sequence of operations in the RNN. Each pixel contains 6 analog registers, named A, B, C, D, E, and F. We refer to register plane A as all the A registers across the entire image sensor. In Figure 5, the \(256\times 256\) pixels are split into a \(4\times 4\) grid of larger processor elements of size \(64\times 64\). In register plane A, we take the output from the CNN and the previous hidden state and duplicate it to two other PEs in plane A. Register plane B holds the corresponding weights \(\textbf{w}_{f}\), \(\textbf{u}_{f}\), \(\textbf{w}_{o}\), \(\textbf{u}_{o}\) for the convolution operators needed. 4 convolutions are simultaneously run on one register plane. The outputs in plane B are shifted and added. A binarization is applied to get \(\textbf{f}_{t}\). This is then used to update a hidden state via element-wise multiplication every time step. Every 16 time
Figure 4: This pipeline shows the sequence of operations from left to right. The input image is downsampled, duplicated, and binarized. Stored convolutional weights perform 16 convolutions, to produce 16 feature maps in the \(4\times 4\) grid of processor elements. A ReLU activation is applied, followed by max-pooling, downsampling, and binarization. This can either be fed to another CNN layer or to the input of the RNN. The RNN takes in the output of the CNN and the previous hidden state \(\textbf{h}_{t-1}\). The hidden state \(\textbf{h}_{t}\) is updated every timestep. The output \(\textbf{o}_{t}\) is read out every 16 frames, yielding 64\(\times\) decrease in bandwidth.
Figure 5: To implement the PixelRNN on SCAMP-5, the image plane is split into a \(4\times 4\) grid of processor elements shown above. Two analog register planes are used, Register planes A and B. Above, we show the sequence of operations from left to right. The input from the CNN and the previous hidden state are duplicated in A. These 4 PEs are convolved \(*\) with the corresponding gate weights stored in plane B. The resulting convolutions in the second column are then added to compute the output \(\textbf{o}_{t}\) and the forget gate \(\textbf{f}_{t}\). Note that an in-place binarization is applied to \(\textbf{f}_{t}\). The hidden state \(\textbf{h}_{t}\) is updated via an element-wise multiplication \(\odot\) of \(\textbf{h}_{t-1}\) and \(\textbf{f}_{t}\).
steps, SCAMP-5 outputs the \(64\times 64\) image corresponding to the output gate \(\mathbf{o}_{t}\). Our spatio-temporal feature encoder distills the salient information while giving a 64\(\times\) decrease in bandwidth.
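One plausible reading of this update rule, consistent with the description above and with Figure 5, is sketched below; the gate equations, binary weights, and initial hidden state are illustrative stand-ins rather than the trained model, and only the sequence of operations is meant to match.

```python
import numpy as np

def binarize(x):
    return np.where(x >= 0, 1.0, -1.0)

def conv2d_same(img, kern):
    kh, kw = kern.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(img)
    for i in range(kh):
        for j in range(kw):
            out += kern[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

class SimplePixelRNN:
    """Gate updates as read off the text and Figure 5: f_t is a binarized
    convolutional gate, the hidden state is updated by element-wise
    multiplication, and the output gate o_t is read out every 16 steps."""
    def __init__(self, rng, size=64, ksize=5):
        b = lambda shape: np.where(rng.random(shape) > 0.5, 1.0, -1.0)
        self.w_f, self.u_f = b((ksize, ksize)), b((ksize, ksize))
        self.w_o, self.u_o = b((ksize, ksize)), b((ksize, ksize))
        self.h = np.ones((size, size))

    def step(self, x):
        f = binarize(conv2d_same(x, self.w_f) + conv2d_same(self.h, self.u_f))
        o = conv2d_same(x, self.w_o) + conv2d_same(self.h, self.u_o)
        self.h = self.h * f          # element-wise update of the hidden state
        return o

rng = np.random.default_rng(2)
rnn = SimplePixelRNN(rng)
for t in range(16):                   # one readout window of 16 frames
    x = np.where(rng.random((64, 64)) > 0.5, 1.0, -1.0)
    o = rnn.step(x)
print(o.shape)                        # only this 64x64 map is read off the sensor
```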
Accounting for Analog Uncertainty. As with all analog compute, a certain amount of noise should be expected. As such, we treat each of the SCAMP-5's analog registers as containing values split across equally spaced discrete intervals. During convolutions, the binary image and binary weights are XNOR-ed. Depending on the result, we either add or subtract an analog value approximately equal to \(10\). As the analog registers have an approximate range of values \(-128\) to \(127\), the interval cannot be increased without risking saturation during convolutions. However, there is uncertainty when it comes to the precision and uniformity of the intervals. Along with decay, this uncertainty, spatial non-uniformity, and noise affect the operations that follow. In the RNN, these effects accumulate over 16 frames, leading to a significant amount of noise. To account for these effects, we trained binary models in simulation with varying amounts of added Gaussian noise in the CNN and the RNN prior to quantization of the features. We also fine-tuned the off-sensor layer on the training set outputs from SCAMP-5.
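The noise injection used during training can be emulated in a few lines; the sketch below adds Gaussian noise to pre-quantization features, with a noise scale that is a placeholder rather than a measured SCAMP-5 value.

```python
import numpy as np

def noisy_binarize(x, sigma, rng, training=True):
    # Add Gaussian noise before quantization, mimicking analog register decay
    # and non-uniformity; at test time the features are binarized directly.
    if training:
        x = x + rng.normal(scale=sigma, size=x.shape)
    return np.where(x >= 0, 1.0, -1.0)

rng = np.random.default_rng(3)
features = rng.standard_normal((16, 16, 16))          # placeholder feature maps
train_feats = noisy_binarize(features, sigma=0.3, rng=rng, training=True)
test_feats = noisy_binarize(features, sigma=0.0, rng=rng, training=False)
print((train_feats != test_feats).mean())             # fraction of flipped bits
```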
Assessment. To test our prototype, we uploaded the video datasets to SCAMP-5 in sequence and saved out the outputs every 16 frames (see supplement for additional details). In our current implementation, it takes roughly 95 ms to run a single frame through the on-sensor encoder. The \(64\times 64\) output region then goes through the off-sensor linear layer decoder. We evaluate the performance using the models trained with and without noise. The results shown in Table 3 highlight the benefits of training with noise, as well as the difficulty that comes with working with analog registers. We see that even running the same train set through SCAMP-5 two separate times does not result in the same performance. Without the noise-trained model, we reached 61.1% on the hand gesture recognition test set. Performance improved to 73.3% when we used the weights from training with noise. Similarly, the performance on lip reading was boosted to 70.0% when using a model trained with noise. While added noise during training helps, the noise characteristics of SCAMP-5 are much more complex. Such issues may be mitigated in future sensor-processors with sufficient digital registers to avoid having to rely upon the use of analog computation. While limited by noise, we demonstrated the first in-pixel spatio-temporal encoder for bandwidth compression.
## 6 Discussion
In the traditional computer vision pipeline, full-frame images are extracted from the camera and are fed into machine learning models for different perception tasks. While completely viable in systems not limited by compute, memory, or power, many edge devices do not offer this luxury. For systems like AR/VR devices, robotics, and wearables, low-power operation is crucial, and even more so if the system requires multiple cameras. The community has already been working on creating smaller, more efficient models, as well as specialized accelerators. However, the communication between camera and processor, which consumes nearly 25% of power in these systems [20], has not been optimized. In this work, we demonstrate how running a simple in-pixel spatio-temporal feature extractor can decrease the bandwidth, and hence the power associated with readout, by 64\(\times\). Even with highly quantized weights and signals and a very simple decoder, we still maintain good performance on hand gesture recognition and lip reading. We studied different RNN architectures and presented PixelRNN, which performs well in highly quantized settings; we studied just how small we could make the bandwidth before affecting performance, as shown in Figure 3; and we implemented a physical prototype with one of the emerging sensors, SCAMP-5, that is paving the way for future sensors.
Limitations and Future Work. One of the biggest challenges of working with SCAMP-5 is accounting for the analog noise, but the platform offers great flexibility to program the data movement between pixels to implement prototypes. While SCAMP-5 offers many exciting capabilities, it is still limited in memory and compute as all circuitry needs to fit in a single pixel. Until recently, adding circuitry or memory to image sensors compromised the fill factor, which worsens the imaging performance and limits achievable image
\begin{table}
\begin{tabular}{c c c c} \hline \hline & Train Set & Train Set & Test Set \\ & Accuracy run 1 & Accuracy run 2 & Accuracy \\ \hline \hline
**Hand Gesture Recognition** & & & \\ Noise-free Model & 100.0\% & 64.0\% & 61.1\% \\ Model trained with noise & 95.3\% & 70.8\% & 73.3\% \\ \hline
**Lip Reading** & & & \\ Noise-free Model & 100.0\% & 78.9\% & 50.0\% \\ Model trained with noise & 98.5\% & 84.85\% & 70.0\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Experimental Results.** We run the training sets through the SCAMP-5 implementation twice. The first outputs are used to fine-tune the off-sensor linear layer decoder. In theory, the train set accuracy of the second run should be close to that of the first. With the noise accumulated through the analog compute, however, the SCAMP-5 implementation is not deterministic. Adding Gaussian noise during training increases the test-set performance.
resolution. With the new developments in stacked CMOS image sensors, future sensors will be able to host much more compute and memory on the sensor plane, allowing us to design more expressive models and to apply tools like architecture search to optimize where compute happens in a system [17]. Until then, we are limited to light-weight in-pixel models. Noise related to analog compute can also be mitigated by switching over to digital compute. Our work is not only applicable to SCAMP-5 but to all future focal-plane processors.
Conclusion. Emerging image sensors offer programmability and compute directly in the pixels. Our work is the first to demonstrate how to capitalize on these capabilities using efficient RNN architectures, decreasing the bandwidth of data that needs to be read off the sensor, as well as stored and processed by downstream application processors, by a factor of 64\(\times\). We believe our work paves the way for other inference tasks of future artificial intelligence-driven sensor-processors.
## Acknowledgements
This project was in part supported by the National Science Foundation and Samsung.
|
2308.14698 | Circumventing the odd particle-number sign problem in the shell model
Monte Carlo | The shell model Monte Carlo (SMMC) method is a powerful method for
calculating exactly (up to statistical errors) thermal observables and
statistical properties of atomic nuclei. However, its application has been
limited by a sign problem at low temperatures that arises from the projection
onto odd particle number even for good-sign interactions. Here, we develop a
technique - the partition function extrapolation method (PFEM) - to extract the
ground-state energy of an odd-mass nucleus from the excitation partition
function calculated at temperatures at which this sign problem is moderate. We
validate the PFEM in heavy even-mass nuclei and systematically calculate
ground-state energies for isotopic chains of heavy odd-mass nuclei. The PFEM
can be extended to other finite-size quantum many-body systems. | Y. Alhassid, P. Fanto, C. Özen | 2023-08-28T16:46:51Z | http://arxiv.org/abs/2308.14698v1 | # Circumventing the odd particle-number sign problem in the shell model Monte Carlo
###### Abstract
The shell model Monte Carlo (SMMC) method is a powerful method for calculating exactly (up to statistical errors) thermal observables and statistical properties of atomic nuclei. However, its application has been limited by a sign problem at low temperatures that arises from the projection onto odd particle number even for good-sign interactions. Here, we develop a technique - the partition function extrapolation method (PFEM) - to extract the ground-state energy of an odd-mass nucleus from the excitation partition function calculated at temperatures at which this sign problem is moderate. We validate the PFEM in heavy even-mass nuclei and systematically calculate ground-state energies for isotopic chains of heavy odd-mass nuclei. The PFEM can be extended to other finite-size quantum many-body systems.
_Introduction_ - The shell model Monte Carlo (SMMC) method [1; 2; 3; 4; 5; 6] can calculate exactly (up to statistical errors) finite-temperature observables within the configuration-interaction (CI) shell model framework in model spaces that are far beyond the reach of conventional diagonalization methods. The SMMC has been applied to study nuclear state and level densities in nuclei as heavy as the lanthanides [7; 8; 9; 10; 11; 12]. Similar auxiliary-field Monte Carlo (AFMC) methods have been applied to other quantum many-body systems such as cold atoms [13; 14; 15].
Quantum Monte Carlo calculations for fermions are often limited by the Monte Carlo sign problem, which leads to large statistical errors. Calculations of state and level densities [7; 8; 9; 10; 11; 12] have been carried out using interactions that include the dominant pairing and multipole-multipole components of effective nuclear interactions [16], which have no sign problem in the grand-canonical ensemble. However, the projection onto fixed numbers of protons and neutrons is necessary for describing finite-size nuclei. While the projection on an even number of particles (both protons and neutrons) preserves the good sign of the interaction, the projection onto odd particle number introduces a sign problem at low temperatures, which we shall refer to as the odd-particle sign problem.
Although this odd-particle sign problem is moderate at not too low temperatures, the statistical fluctuations become too large at low temperatures for the ground-state energies in odd particle-number systems to be directly determined. The ground-state energy is a crucial quantity because it is necessary for determining the excitation energy in level density calculations. In Ref. [17], imaginary-time Green's functions of neighboring even-number systems were used to determine ground-state energies of odd-mass nuclei. However, calculating these Green's functions in heavy nuclei is computationally expensive. In Ref. [10], SMMC calculations at higher temperatures were combined with experimental results to extract the ground-state energy of odd-mass nuclei.
Here, we introduce a self-contained method - the partition function extrapolation method (PFEM) - that determines the ground-state energy from the SMMC partition function at temperatures above the onset of the sign problem. This method consists of expressing the excitation partition function via a parameterized model for the state density and fitting the parameters of this model to obtain the ground-state energy. We validate this method by applying it to even-even nuclei, in which the ground-state energy can be determined directly from SMMC calculations at low temperatures. We then apply the PFEM systematically to calculate the ground-state energies of odd-mass samarium and neodymium isotopes.
Beyond the SMMC, the PFEM is also useful for determining ground-state energies in the static-path plus random-phase approximation (SPA+RPA), which has recently been applied to calculate state densities of heavy nuclei [18]. The SPA+RPA breaks down at low but nonzero temperature, requiring the use of an extrapolation method to determine the ground-state energy. In Ref. [18], a preliminary version of the PFEM was applied to extract the SPA+RPA ground-state energies of samarium isotopes \({}^{148-155}\)Sm.
_Method_ - In the SMMC method, the thermal expectation value of an observable \(\hat{O}\) is given by [6]
\[\langle\hat{O}\rangle=\frac{\mathrm{Tr}\left(\hat{O}e^{-\beta\hat{H}}\right) }{\mathrm{Tr}\left(e^{-\beta\hat{H}}\right)}=\frac{\int D[\sigma]W_{\sigma} \Phi_{\sigma}\langle\hat{O}\rangle_{\sigma}}{\int D[\sigma]W_{\sigma}\Phi_{ \sigma}}\,, \tag{1}\]
where \(\hat{H}\) is the nuclear Hamiltonian, \(\sigma\) are a set of time-dependent auxiliary fields, \(W_{\sigma}=G_{\sigma}|\mathrm{Tr}_{A}\hat{U}_{\sigma}|\) is a positive-definite weight function (\(G_{\sigma}\) is a Gaussian weight and \(\hat{U}_{\sigma}\) is the propagator of non-interacting nucleons moving in external auxiliary fields \(\sigma\)), \(\Phi_{\sigma}=\mathrm{Tr}_{A}\hat{U}_{\sigma}/|\mathrm{Tr}_{A}\hat{U}_{ \sigma}|\) is the Monte Carlo sign, and \(\langle\hat{O}\rangle_{\sigma}=\mathrm{Tr}_{A}\left(\hat{O}\hat{U}_{\sigma} \right)/\mathrm{Tr}_{A}\hat{U}_{\sigma}\). Here \(\mathrm{Tr}_{A}\) represents the canonical
ensemble trace with respect to fixed numbers of protons and neutrons and is calculated with projection methods; see Ref. [6] and works cited therein. In the SMMC method, the Metropolis-Hastings algorithm is used to sample uncorrelated field configurations \(\sigma_{k}\), and observables are estimated by averages over the samples, i.e., \(\langle\hat{O}\rangle\approx\sum_{k}\Phi_{\sigma_{k}}\langle\hat{O}\rangle_{ \sigma_{k}}/\sum_{k}\Phi_{\sigma_{k}}\). For odd number of protons or neutrons, the average sign \(\langle\Phi_{\sigma}\rangle\) decays rapidly as the temperature is lowered, leading to large statistical errors on the Monte Carlo estimates of observables at low temperatures.
The goal of the PFEM is to determine the ground-state energy \(E_{0}\), given the SMMC thermal energy estimates and their associated errors at higher temperatures. The thermal energy \(E(\beta)\) at inverse temperature \(\beta\) can be calculated in the SMMC as the expectation value of the Hamiltonian \(\hat{H}\). Measuring the energies relative to a reference energy \(E_{\rm ref}\), the corresponding excitation partition function is given by
\[Z^{\prime}(\beta;E_{\rm ref})=Z(\beta)e^{\beta E_{\rm ref}}\,, \tag{2}\]
where the partition function \(Z(\beta)={\rm Tr}\,e^{-\beta\hat{H}}\) can be obtained by the integral relation \(\ln Z(\beta)=\ln Z(0)-\int_{0}^{\beta}d\beta^{\prime}E(\beta^{\prime})\) (\(\ln Z(0)\) depends only on the single-particle model space dimension and numbers of valence nucleons). The excitation partition function for an arbitrary reference energy \(E_{\rm ref}\) can be related to the excitation partition function for the ground state energy \(E_{0}\) by
\[\ln Z^{\prime}(\beta;E_{\rm ref})=\ln Z^{\prime}(\beta;E_{0})-\beta(E_{0}-E_{ \rm ref})\,. \tag{3}\]
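For illustration, \(\ln Z^{\prime}(\beta;E_{\rm ref})\) in Eq. (2) can be tabulated from the SMMC thermal energies with a cumulative trapezoidal integration of the relation above; in the sketch below the energies are toy values rather than SMMC output, and \(\ln Z(0)\) is left as a placeholder constant since it depends only on the model space.

```python
import numpy as np

def ln_Zprime(betas, energies, E_ref, lnZ0=0.0):
    """Tabulate ln Z'(beta; E_ref) = ln Z(0) - int_0^beta E(b') db' + beta * E_ref
    on the given beta grid (betas[0] must be 0), via cumulative trapezoids.
    lnZ0 stands in for the model-space constant ln Z(0)."""
    betas = np.asarray(betas, dtype=float)
    energies = np.asarray(energies, dtype=float)
    integral = np.concatenate(([0.0], np.cumsum(
        0.5 * (energies[1:] + energies[:-1]) * np.diff(betas))))
    return lnZ0 - integral + betas * E_ref

# Toy thermal energies (placeholders, not SMMC output), decreasing with beta.
betas = np.linspace(0.0, 5.0, 51)            # MeV^-1
E_ref = -295.0                               # MeV, an arbitrary reference energy
energies = E_ref + 10.0 / (1.0 + betas)
print(ln_Zprime(betas, energies, E_ref)[-5:])
```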
We express \(Z^{\prime}(\beta;E_{0})\) in Eq. (3) as the Laplace transform of the state density \(\rho(E_{x})\)
\[Z^{\prime}(\beta;E_{0})=\int_{0}^{\infty}dE_{x}\,\rho(E_{x})e^{-\beta E_{x}}\,, \tag{4}\]
where \(E_{x}=E-E_{0}\) is the excitation energy. In the PFEM, we use a parameterized model for \(\rho(E_{x})\) in Eq. (3) and fit the parameters of the model, together with the ground state energy \(E_{0}\), to the SMMC results for \(\ln Z^{\prime}(\beta;E_{\rm ref})\). The SMMC results can be calculated for not too low temperatures for which the sign problem is moderate. In particular, to describe odd-mass nuclei, we use the back-shifted Bethe formula (BBF) [19]
\[\rho_{\rm BBF}(E_{x})=\frac{\sqrt{\pi}}{12a^{1/4}}\frac{e^{2\sqrt{a(E_{x}- \Delta)}}}{(E_{x}-\Delta)^{5/4}}\,, \tag{5}\]
where \(a\) is the single-particle level density parameter and \(\Delta\) is the back-shift parameter. Inserting Eq. (5) into Eq. (4) and then inserting that result into Eq. (3) expresses the SMMC excitation partition function in terms of the three parameters (\(a,\Delta,E_{0}\)). We can then determine these parameters by a \(\chi^{2}\) fit.
In practice, we carry out this fit in two steps. In the first step, we apply the saddle-point approximation to the integral in Eq. (4), in which we use for \(\rho(E_{x})\) the BBF in Eq. (5), and obtain
\[\ln Z^{\prime}(\beta;E_{\rm ref})\approx\frac{a}{\beta}+\ln\left(\frac{\pi \beta}{6a}\right)-\beta S\,, \tag{6}\]
where \(S=E_{0}-E_{\rm ref}+\Delta\). We fit Eq. (6) to the SMMC data at moderate temperatures to obtain fitted values of \((a,S)\). In the second step, keeping the values of \((a,S)\) fixed, we fit the full expression (3) to the SMMC data at low temperatures, where we again use the BBF in Eq. (5) for \(\rho(E_{x})\) and carry out the integral in Eq. (4) numerically. This is a one-parameter \(\chi^{2}\) fit of \(E_{0}\); the back-shift parameter \(\Delta\) is determined by \(\Delta=S-(E_{0}-E_{\rm ref})\).
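A compact sketch of this two-step fit is given below, using SciPy least-squares fits. The "SMMC" data points are synthetic placeholders generated from the model itself, the \(\beta\) windows are only indicative of moderate and low temperatures, and the Laplace transform in Eq. (4) is evaluated by direct numerical quadrature assuming a negative back-shift \(\Delta\).

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

def rho_bbf(Ex, a, Delta):
    # Back-shifted Bethe formula, Eq. (5); defined for Ex > Delta.
    u = Ex - Delta
    return np.sqrt(np.pi) / (12.0 * a**0.25) * np.exp(2.0 * np.sqrt(a * u)) / u**1.25

def lnZp_saddle(beta, a, S):
    # Saddle-point expression, Eq. (6), used in the first step of the fit.
    return a / beta + np.log(np.pi * beta / (6.0 * a)) - beta * S

def lnZp_full(beta, E0, a, S, E_ref):
    # Eq. (3) with the Laplace transform (4) of the BBF done numerically;
    # Delta = S - (E0 - E_ref), assumed negative (the usual odd-mass case).
    Delta = S - (E0 - E_ref)
    out = []
    for b in np.atleast_1d(beta):
        Z, _ = quad(lambda Ex: rho_bbf(Ex, a, Delta) * np.exp(-b * Ex),
                    max(Delta, 0.0) + 1e-9, 60.0)
        out.append(np.log(Z) - b * (E0 - E_ref))
    return np.array(out)

# Synthetic "SMMC" data generated from the model itself (placeholders only).
E_ref = -296.0
a_true, S_true, E0_true = 18.0, 0.3, -295.4
beta_mid = np.linspace(1.7, 3.3, 9)     # moderate temperatures for step 1
beta_low = np.linspace(2.5, 5.0, 9)     # lower temperatures for step 2
lnZ_mid = lnZp_saddle(beta_mid, a_true, S_true)
lnZ_low = lnZp_full(beta_low, E0_true, a_true, S_true, E_ref)

# Step 1: fit (a, S) with the saddle-point expression.
(a_fit, S_fit), _ = curve_fit(lnZp_saddle, beta_mid, lnZ_mid, p0=[15.0, 1.0],
                              bounds=([1.0, -10.0], [100.0, 10.0]))

# Step 2: one-parameter fit of E0 with (a, S) held fixed.
(E0_fit,), _ = curve_fit(lambda b, E0: lnZp_full(b, E0, a_fit, S_fit, E_ref),
                         beta_low, lnZ_low, p0=[E_ref + 0.5])
print(a_fit, S_fit, E0_fit, "Delta =", S_fit - (E0_fit - E_ref))
```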
For positive values of \(\Delta\), the BBF (5) is not defined for \(E_{x}\leq\Delta\). In such cases, we replace the BBF in Eq. (4) with the Gilbert-Cameron composite formula [20]
\[\rho_{\rm comp}(E_{x})=\begin{cases}\frac{1}{T_{1}}e^{(E_{x}-E_{1})/T_{1}}&E_{ x}<E_{M}\\ \rho_{\rm BBF}(E_{x})&E_{x}>E_{M}\end{cases}\,, \tag{7}\]
where \(E_{M}\) is a matching energy and \((E_{1},T_{1})\) are defined by the conditions that the state density and its first derivative be continuous at \(E_{M}\). In this case, the second step of the PFEM is a two-parameter fit of the values \((E_{0},E_{M})\). For the odd-mass isotopes studied here, we find that \(\Delta\) is negative in the BBF. However, the composite formula was applied to the SPA+RPA in Ref. [18].
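The matching parameters \((E_{1},T_{1})\) follow from the two continuity conditions in closed form: \(1/T_{1}\) equals the logarithmic derivative of \(\rho_{\rm BBF}\) at \(E_{M}\), and \(E_{1}\) is then fixed by matching the density itself. A small sketch, with placeholder parameter values:

```python
import numpy as np

def rho_bbf(Ex, a, Delta):
    u = Ex - Delta
    return np.sqrt(np.pi) / (12.0 * a**0.25) * np.exp(2.0 * np.sqrt(a * u)) / u**1.25

def composite_parameters(EM, a, Delta):
    """T1 and E1 fixed by continuity of the state density and of its first
    derivative at the matching energy EM, as required by Eq. (7)."""
    u = EM - Delta
    dlnrho = np.sqrt(a / u) - 1.25 / u          # d ln(rho_BBF)/dEx at EM
    T1 = 1.0 / dlnrho
    E1 = EM - T1 * np.log(T1 * rho_bbf(EM, a, Delta))
    return T1, E1

def rho_composite(Ex, EM, a, Delta):
    T1, E1 = composite_parameters(EM, a, Delta)
    return np.where(Ex < EM,
                    np.exp((Ex - E1) / T1) / T1,
                    rho_bbf(np.maximum(Ex, Delta + 1e-9), a, Delta))

# Placeholder parameters, only indicative of the scales in heavy nuclei.
a, Delta, EM = 18.0, 0.5, 3.0
T1, E1 = composite_parameters(EM, a, Delta)
print(f"T1 = {T1:.3f} MeV, E1 = {E1:.3f} MeV")
print(rho_composite(np.array([EM - 1e-6, EM + 1e-6]), EM, a, Delta))  # continuous at EM
```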
_Validation_ - To assess the accuracy of the PFEM, we applied it to calculate the ground-state energies of even-mass samarium isotopes \({}^{148,150,152,154}\)Sm. For these nuclei, there is no sign problem and we can calculate \(E_{0}\) from the thermal SMMC energies at large \(\beta\) values. We used the model space and effective pairing plus multipole-multipole interaction described in Ref. [7]. For \({}^{152,154}\)Sm, the back-shift \(\Delta\) is negative, as is typical for odd-mass nuclei, and we use the BBF (5) in the second step of the PFEM. For \({}^{148,150}\)Sm, \(\Delta\) is positive and we use instead the composite formula (7) in the second step.
As a typical result, we show in Fig. 1 the logarithm of the excitation partition function (2) of \({}^{154}\)Sm calculated in the SMMC, compared with the fits from the saddle-point formula (6) and the full BBF (5) used in Eq. (4). The saddle-point formula is fitted to data in the moderate temperature range \(0.3\leq T\leq 0.6\) MeV, while the full BBF is fit to low-temperature data \(T\leq 0.4\) MeV.
In Table 1, we compare the PFEM ground-state energies with those extracted from SMMC energies at high \(\beta\) values. We estimated the errors on the PFEM ground-state energies differently for the BBF and composite formula approaches. For the BBF, the error bar on \(E_{0}\) reflects the points at which the \(\chi^{2}\) per degree of freedom changes by one from its minimum value obtained at the fitted \(E_{0}\) point. This change is asymmetric, and this asymmetry reflects the constraint that \(\Delta\) be negative. For the composite formula approach applied to
\({}^{148,150}\)Sm, we instead calculated the \(\chi^{2}\) per degree of freedom for multiple \(E_{M}\) values and fit a parabola to these results. We then calculate the error bars on \(E_{M}\) and \(E_{0}\) by finding the points at which this parabola predicts the \(\chi^{2}\) per degree of freedom to increase by one. This latter approach results in significantly higher error bars, limiting the composite fit approach. For practical applications, the limitation is not significant as the BBF can be applied to nearly all odd-mass isotopes.
For the more deformed isotopes \({}^{152,154}\)Sm, we calculated the reference SMMC ground-state energies \(E_{0}\) in Table 1 by fitting the thermal SMMC energies at high \(\beta\) to a rotational model \(E(T)\approx E_{0}+T\)[7]. In contrast, for \({}^{148,150}\)Sm, we determined \(E_{0}\) by averaging the SMMC energies in the range \(\beta\approx 8-15\) MeV\({}^{-1}\). We find excellent agreement between the PFEM results for \({}^{152,154}\)Sm, for which we used the BBF in the second step of the fit, and the reference SMMC values. For \({}^{148,150}\)Sm, for which we used the composite formula in the second step of the fit, we find that the PFEM ground-state energies are systematically lower than the reference SMMC values by roughly \(\sim 300\) keV.
Finally, we note that, for odd-mass nuclei, we cannot calculate reliably the SMMC excitation partition function at low temperatures, and it is useful to examine how this temperature restriction affects the PFEM results. Table 1 shows the values \(E_{0}^{\rm PFEM,\,r}\) obtained for the ground-state energies using the restricted temperature range \(\beta\approx 3.5-5\) MeV\({}^{-1}\) in the second step of the fit. Restricting the temperature range shifts the BBF fit to somewhat higher values, as shown in the results for \({}^{152,154}\)Sm. In contrast, the temperature restriction does not affect much the results of the composite formula fit. In both the BBF and composite formula cases, the temperature restriction increases the size of the error bars. This result emphasizes the importance of extending the odd-mass SMMC calculations to as large \(\beta\) values as possible.
_Application to odd-mass lanthanides_ - Having validated the PFEM for even-mass nuclei, we applied this method to calculate the ground-state energies of odd-mass neodymium and samarium isotopes. We used the BBF in the second step of the fit in each case. Fig. 2 shows a representative result for \({}^{153}\)Sm. As shown in this figure, the sign problem limits the SMMC results (blue
Figure 1: The logarithm of the excitation partition function \(\ln Z^{\prime}\) as a function of temperature \(T\) for \({}^{154}\)Sm. The SMMC results (blue squares) are compared with the saddle point fit results (orange line) and BBF fit results (black dashed line).
Figure 2: The logarithm of the excitation partition function \(\ln Z^{\prime}\) as a function of temperature \(T\) for \({}^{153}\)Sm. Colors and symbols are as in Fig. 1.
\begin{table}
\begin{tabular}{c c c c} \hline \hline & \(E_{0}^{\rm SMMC}\) (MeV) & \(E_{0}^{\rm PFEM}\) (MeV) & \(E_{0}^{\rm PFEM,\,r}\) (MeV) \\ \hline \({}^{154}\)Sm & -295.45 \(\pm\).01 & -295.43 (+0.08,\(-\)0.03) & -295.21 (+.11, \(-\).11) \\ \({}^{152}\)Sm & -275.85 \(\pm\).02 & -275.77 (+.05,\(-\)0.02) & -275.55 (+.10,\(-\).11) \\ \({}^{150}\)Sm & -255.77 \(\pm\).02 & -255.99 \(\pm\).14 & -256.08 \(\pm\).47 \\ \({}^{148}\)Sm & -235.66 \(\pm\).02 & -235.95 \(\pm\).27 & -235.92 \(\pm\).40 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The ground-state energies \(E_{0}^{\rm PFEM}\) from the PFEM compared with the ground-state energies \(E_{0}^{\rm SMMC}\) obtained from SMMC energies at high \(\beta\) values. For \({}^{152,154}\)Sm, we obtain \(E_{0}^{\rm SMMC}\) by a fit to a rotor model, while for \({}^{148,150}\)Sm we average the energies for \(\beta\approx 8-15\) MeV\({}^{-1}\). In the PFEM, we use the BBF in the second step for \({}^{152,154}\)Sm and the composite formula in the second step for \({}^{148,150}\)Sm. The value \(E_{0}^{\rm PFEM}\) is obtained using \(\beta\approx 2.5-15\) MeV\({}^{-1}\) in the second step of the fit, while \(E_{0}^{\rm PFEM,\,r}\) is obtained with \(\beta\) restricted to the range \(\beta\approx 3.5-5\) MeV\({}^{-1}\) typical of an odd-mass nucleus.
squares) to \(\beta\) values below \(\beta\approx 4-5\) MeV\({}^{-1}\). The solid line describes the first step of fitting the saddle-point formula to the SSMC data in a moderate temperature range \(0.3\leq T\leq 0.6\) MeV to determine \(a\) and \(S\). The dashed line is the BBF fit in the low-temperature range \(0.2\leq T\leq 0.4\) MeV (where SMMC data exists) to determine the ground-state energy \(E_{0}\).
In Table 2, we compare the PFEM results to those obtained from the method of Ref. [10], which combines the SMMC results with experimental data to extract the average excitation energy as a function of temperature. Overall, the ground-state energies of the two methods are in good agreement. The PFEM \(E_{0}\) values tend to be somewhat lower than those from the method of Ref. [10]. Also, the PFEM gives larger error bars, especially for the more neutron-rich neodymium and samarium isotopes. The main advantage of the PFEM is that it does not rely on any experimental data and uses only the SMMC calculations.
_Conclusions and Outlook_ - We have developed the partition function extrapolation method (PFEM) to extract the ground-state energy from SMMC calculations in odd-mass nuclei, which are limited at low temperatures by a sign problem resulting from the projection onto odd particle number. The PFEM consists of applying a parameterized model for the state density to fit the excitation partition function (2), which we obtain from SMMC calculations. In the PFEM we primarily apply the BBF state density (5) since the back-shift parameter \(\Delta\) is usually negative for odd-mass nuclei, but in cases in which \(\Delta\) is positive we use instead the composite formula (7).
We validated the PFEM method in even-even samarium isotopes, for which ground-state energies can be obtained directly from the SMMC thermal energies at low temperatures. We then applied the PFEM to calculate the ground-state energies for odd-mass neodymium and samarium isotopes and found excellent agreement with the ground-state energies obtained with the method of Ref. [10], which combined the SMMC results with experimental data. A main advantage of the PFEM is that it requires no additional information beyond the SMMC results. Moreover, the PFEM is computationally efficient, in contrast to the Green's function method of Ref. [17].
The PFEM is also useful in the context of many-body methods other than the SMMC. For example, in Ref. [18], a preliminary version of the PFEM was used to estimate ground-state energies in the static-path plus random-phase approximation (SPA+RPA).
Finally, the PFEM can be extended to AFMC studies in many-body systems other than nuclei, provided that a reliable parameterized model for the state density can be found. A possible application would be to determine the energy staggering pairing gap in strongly interacting cold atomic two-species Fermi gases [14; 15]. This gap is defined for \(N\) particles by \(\Delta_{E}=[2E(N/2+1,N/2)-E(N/2+1,N/2+1)-E(N/2,N/2)]/2\), where \(E(N_{\uparrow},N_{\downarrow})\) is the ground-state energy for \(N_{\uparrow}\) spin-up and \(N_{\downarrow}\) spin-down particles. For \(N_{\uparrow}=N_{\downarrow}\) there is no sign problem but the spin-imbalanced system with \(N_{\uparrow}\neq N_{\downarrow}\) has a sign problem at low temperatures.
_Acknowledgments_ - This work was supported in part by the U.S. DOE grant No. DE-SC0019521, and by the U.S. DOE NNSA Stewardship Science Graduate Fellowship under cooperative agreement No. NA-0003960. The calculations used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231. We thank the Yale Center for Research Computing for guidance and use of the research computing infrastructure.
|
2306.12745 | Stability of piecewise flat Ricci flow | The stability of a recently developed piecewise flat Ricci flow is
investigated, using a linear stability analysis and numerical simulations, and
a class of piecewise flat approximations of smooth manifolds is adapted to
avoid an inherent numerical instability. These adaptations have also been used
in a related paper to show the convergence of the piecewise flat Ricci flow to
known smooth Ricci flow solutions for a variety of manifolds. | Rory Conboye | 2023-06-22T08:56:40Z | http://arxiv.org/abs/2306.12745v1 | # Stability of piecewise flat Ricci flow
###### Abstract
The stability of a recently developed piecewise flat Ricci flow is investigated, using a linear stability analysis and numerical simulations, and a class of piecewise flat approximations of smooth manifolds is adapted to avoid an inherent numerical instability. These adaptations have also been used in a related paper to show the convergence of the piecewise flat Ricci flow to known smooth Ricci flow solutions for a variety of manifolds.
Keywords: Numerical Ricci flow, piecewise linear, geometric flow, linear stability
Mathematics Subject Classification: 53C44, 57Q15, 57R12, 65D18
## 1 Introduction
The Ricci flow is a uniformizing flow on manifolds, evolving the metric to reduce the strength of the Ricci curvature. It was initially developed by Richard Hamilton to help prove the Thurston geometrization conjecture [1], and it remains an important tool for analysing the interplay between geometry and topology. Recently, its use has expanded further, with numerical evolutions finding applications in facial recognition [2], cancer detection [3], and space-time physics [4, 5, 6].
Piecewise flat manifolds are formed by joining flat Euclidean segments in a consistent manner. To allow for a natural refinement, the piecewise flat manifolds considered here are formed from a mesh of cube-like blocks, with each block composed of six flat tetrahedra. In this case, the topology is determined by the simplicial graph, and the geometry by a discrete set of edge-lengths. This conveniently leads to a piecewise flat Ricci flow as a rate of change of edge-lengths, as introduced in [7], with the rate of change determined by an approximation of the smooth Ricci curvature at the edges. However, despite the curvature approximations and some analytic solutions of the piecewise flat Ricci flow converging to their smooth counterparts as the mesh is refined [7], numerical evolutions have led to an exponential growth of errors in the edge-lengths.
This instability is analysed here, and a method for suppressing the instability proposed, using blocks that are internally flat instead of being composed of flat tetrahedra. This is
implemented by introducing a constraint on the length of an edge in the interior of each block. A linear stability analysis and numerical simulations are then used to demonstrate the initial instability and the effectiveness of its suppression. Computations using these adapted piecewise flat manifolds can already be seen in [8], where they have successfully been used in the piecewise flat Ricci flow of manifolds with a variety of different properties, showing convergence to known Ricci flow solutions and behaviour.
The remainder of the paper begins with an introduction to the piecewise flat manifolds used in this paper, and the piecewise flat Ricci flow, summarizing the main results from [7]. The numerical instabilities are described in section 3, along with a motivation and explanation of the suppression. A linear stability analysis in section 4 shows the exponential growth to arise from the numerical errors in the length measurements, with numerical simulations matching the growth rates. In section 5, the same linear stability analysis no longer indicates an exponential growth when the suppressing method is used, with numerical simulations also showing stable behaviour.
## 2 Background
### Piecewise flat manifolds and triangulations
In three dimensions, the most simple of piecewise flat manifolds are formed by joining Euclidean tetrahedra together, with the triangular faces between neighbouring tetrahedra identified. The resulting graph encodes the topology of the manifold, with the geometry completely determined by the set of edge-lengths. Piecewise flat approximations of smooth manifolds can be constructed by first setting up a tetrahedral graph on the smooth manifold, using geodesic segments as edges. A piecewise flat manifold can then be defined using the same graph, with the edge-lengths determined by the lengths of the corresponding geodesic segments on the smooth manifold. Such a piecewise flat approximation is known as a triangulation of the smooth manifold.
In order to test for convergence to smooth curvature and Ricci flow, a set of triangulations which can be scaled in some regular way must be used. Three such triangulation-types were defined in [8], using building blocks which can be tiled to form a complete tetrahedral graph. These building blocks are defined below, in terms of a set of coordinates \(x\), \(y\), and \(z\), with each block covering a unit of volume. Diagrams of each block are also shown in figure 1.
1. The _cubic_ block forms a coordinate cube composed of six tetrahedra, with three independent edges along the coordinate directions, three independent face-diagonals and a
Figure 1: The three different block types, with the six tetrahedra of the cubic block on the far left, and a slight separation of the three diamond shapes forming the diamond block.
single internal body-diagonal. The tetrahedra are specified so that the face-diagonals on opposite sides are in the same directions.
2. The _skew_ block has the same structure as the cubic block, but with its vertices skewed in the \(x\) and \(z\) directions, with \(v_{x}=(1,-1/3,0)\) and \(v_{z}=(-1/3,-2/9,1)\).
3. The _diamond_ block is constructed from a set of four tetrahedra around each coordinate axis, with edges in the outer rings formed by the remaining coordinate directions.
These blocks can be used to triangulate manifolds with three-torus (\(T^{3}\)) topologies, using a cuboid-type grid of blocks to cover the fundamental domain, identifying the triangles, edges and vertices on opposite sides. The resulting triangulations have computational domains that are compact without boundary. Manifolds with other topologies can also be triangulated using these blocks but require slightly more complicated tetrahedral graphs, see for example the Nil manifold in [8]. Since the stability should depend on the local structure of the piecewise flat manifold, only \(T^{3}\) topology triangulations will be considered here.
### Piecewise flat curvature and Ricci Flow
While any neighbouring pair of tetrahedra in a piecewise flat manifold still forms a Euclidean space, a natural measure of curvature arises from the sum of the dihedral angles \(\theta_{t}\) around an edge \(\ell\), with the difference from \(2\pi\) radians known as a deficit angle,
\[\epsilon_{\ell}:=2\pi-\sum_{t}\theta_{t}\,. \tag{2.1}\]
Triangulations of smooth manifolds are deemed good approximations if the deficit angles are uniformly small. The resolution of a piecewise flat approximation can then be increased by having a higher concentration of tetrahedra, or a finer grid of the blocks defined above.
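For concreteness, the sketch below computes dihedral angles by projecting the two opposite vertices of each tetrahedron onto the plane orthogonal to the shared edge, and sums them to form the deficit angle of Eq. (2.1); the example configuration of four right-angled tetrahedra is flat, so the deficit angle should vanish.

```python
import numpy as np

def dihedral_angle(p0, p1, p2, p3):
    """Interior dihedral angle of the tetrahedron (p0, p1, p2, p3) at the edge
    p0-p1, from the projections of p2 and p3 onto the plane orthogonal to it."""
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    u = p1 - p0
    u = u / np.linalg.norm(u)
    v2 = (p2 - p0) - np.dot(p2 - p0, u) * u
    v3 = (p3 - p0) - np.dot(p3 - p0, u) * u
    cos = np.dot(v2, v3) / (np.linalg.norm(v2) * np.linalg.norm(v3))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def deficit_angle(tetrahedra):
    """2*pi minus the sum of dihedral angles at the shared edge; each
    tetrahedron is passed as (p0, p1, p2, p3) with p0, p1 the edge ends."""
    return 2.0 * np.pi - sum(dihedral_angle(*t) for t in tetrahedra)

# Edge (0,0,0)-(0,0,1) surrounded by four tetrahedra tiling a flat neighbourhood.
e0, e1 = (0, 0, 0), (0, 0, 1)
corners = [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)]
tets = [(e0, e1, corners[i], corners[(i + 1) % 4]) for i in range(4)]
print(deficit_angle(tets))   # ~0 for this flat configuration
```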
Conceptually, the deficit angles correspond to surface integrals of the sectional curvature orthogonal to each edge. However, a number of examples show that a single deficit angle does not carry enough information to approximate the smooth curvature directly, see section 5.1 of [7] for example. Instead, this correspondence is used to construct volume integrals of both the scalar curvature at each vertex and the sectional curvature orthogonal to each edge, which can then be used to give the Ricci curvature along the edges.
Volumes \(V_{v}\) associated with each vertex \(v\) are defined to form a dual tessellation of the piecewise flat manifold, with barycentric duals used here. Edge-volumes \(V_{\ell}\) are then defined
Figure 2: The deficit angle at an edge, and a cross section of the region around an edge showing the vertex and edge volumes.
as a union of the volumes \(V_{v}\) for the vertices on either end, capped by surfaces orthogonal to the edges at each vertex, as shown in figure 2. From [7], piecewise flat approximations of the scalar curvature at each vertex \(v\), sectional curvature orthogonal to each edge \(\ell\), and the Ricci curvature along each edge \(\ell\), are given by the expressions:
\[R_{v} =\frac{1}{V_{v}}\sum_{i}|\ell_{i}|\,\epsilon_{i}\,, \tag{2.2a}\] \[K^{\perp}_{\ell} =\frac{1}{V_{\ell}}\left(|\ell|\,\epsilon_{\ell}+\sum_{i}\frac{1} {2}|\ell_{i}|\cos^{2}(\theta_{i})\epsilon_{i}\right),\] (2.2b) \[Rc_{\ell} =\frac{1}{4}(R_{v_{1}}+R_{v_{2}})-K^{\perp}_{\ell}\,, \tag{2.2c}\]
with the indices \(i\) labelling the edges intersecting the volumes \(V_{v}\) and \(V_{\ell}\), \(\theta_{i}\) representing the angle between the edge \(\ell_{i}\) and \(\ell\), and \(v_{1}\) and \(v_{2}\) indicating the vertices bounding \(\ell\). Computations for a number of manifolds have shown these expressions to converge to their corresponding smooth values [7]. Similar constructions have been developed for the extrinsic curvature [9], with numerical computations successfully converging to their smooth values.
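A direct transcription of the expressions (2.2) is given below, assuming the geometric ingredients (edge-lengths, deficit angles, vertex and edge volumes, and angles between edges) have already been computed for the triangulation; the numerical values used are placeholders rather than data from a real triangulation.

```python
import numpy as np

def scalar_curvature_at_vertex(V_v, lengths, deficits):
    # Eq. (2.2a): R_v = (1/V_v) * sum_i |l_i| eps_i over edges meeting the vertex.
    return np.dot(lengths, deficits) / V_v

def sectional_curvature_at_edge(V_l, l_len, l_eps, lengths, deficits, angles):
    # Eq. (2.2b): K_perp = (1/V_l) [ |l| eps_l + sum_i (1/2)|l_i| cos^2(theta_i) eps_i ].
    return (l_len * l_eps +
            0.5 * np.sum(lengths * np.cos(angles) ** 2 * deficits)) / V_l

def ricci_curvature_along_edge(R_v1, R_v2, K_perp):
    # Eq. (2.2c): Rc_l = (R_v1 + R_v2)/4 - K_perp.
    return 0.25 * (R_v1 + R_v2) - K_perp

# Placeholder geometric data for a single edge and its two end vertices.
R_v1 = scalar_curvature_at_vertex(1.0, np.array([1.0, 1.0, np.sqrt(2)]),
                                  np.array([0.01, -0.02, 0.005]))
R_v2 = scalar_curvature_at_vertex(1.0, np.array([1.0, np.sqrt(2), np.sqrt(3)]),
                                  np.array([0.00, 0.01, -0.01]))
K = sectional_curvature_at_edge(V_l=1.8, l_len=1.0, l_eps=0.01,
                                lengths=np.array([1.0, np.sqrt(2)]),
                                deficits=np.array([0.005, -0.01]),
                                angles=np.array([np.pi / 3, np.pi / 4]))
print(ricci_curvature_along_edge(R_v1, R_v2, K))
```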
The Ricci flow of a smooth manifold changes the metric \(g\) due to the Ricci curvature \(Rc\),
\[\frac{dg}{dt}=-2Rc\,. \tag{2.3}\]
The resulting change in the length of a geodesic segment can be given solely by the Ricci curvature along and tangent to it, as shown in section 6.3 of [7]. Since the edge-lengths of a triangulation correspond to the lengths of these geodesic segments, a piecewise flat approximation of the smooth Ricci flow can be given by a fractional change in the edge-lengths,
\[\frac{1}{|\ell|}\frac{d|\ell|}{dt}=-Rc_{\ell} \tag{2.4}\]
The equation above has been shown to converge to known smooth Ricci flow solutions as the resolution is increased, using analytic computations for symmetric manifolds in [7], and numerical evolutions for a variety of other manifolds in [8]. This approach has also been used by Alsing, Miller and Yau [10], but with a different edge volume \(V_{\ell}\), which works when the triangulations are adapted to the spherical symmetry of the manifolds studied there.
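Given a routine returning \(Rc_{\ell}\) for every edge, Eq. (2.4) is a coupled system of ordinary differential equations for the edge-lengths; a minimal forward-Euler sketch is shown below, with a stand-in curvature function so that a flat triangulation remains stationary, as expected.

```python
import numpy as np

def ricci_flow_step(lengths, ricci_of_lengths, dt):
    """One forward-Euler step of d|l|/dt = -Rc_l |l| for all edges at once.
    'ricci_of_lengths' maps the current edge-length array to Rc_l per edge."""
    rc = ricci_of_lengths(lengths)
    return lengths * (1.0 - dt * rc)

def evolve(lengths0, ricci_of_lengths, dt, n_steps):
    lengths = np.array(lengths0, dtype=float)
    for _ in range(n_steps):
        lengths = ricci_flow_step(lengths, ricci_of_lengths, dt)
    return lengths

# Stand-in curvature function: a flat triangulation has Rc_l = 0 for all edges,
# so the edge-lengths remain fixed under the flow.
flat_ricci = lambda lengths: np.zeros_like(lengths)
l0 = np.array([1.0, 1.0, 1.0, np.sqrt(2), np.sqrt(2), np.sqrt(2), np.sqrt(3)])
print(evolve(l0, flat_ricci, dt=1e-3, n_steps=100))   # unchanged, as expected
```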
## 3 Instability and suppression
### Instability
Despite the close approximation of the piecewise flat Ricci curvature \(Rc_{\ell}\) to its corresponding smooth values [7], initial numerical evolutions of the piecewise flat Ricci flow resulted in an exponential growth of the face-diagonals for both the cubic and skew triangulation types, even for manifolds that are initially flat. Since the deficit angles should all be zero for a triangulation of a flat manifold, this growth must arise from numerical errors.
The top two graphs in figure 3 show how far the face-diagonal edge-lengths deviate from a flat triangulation. These deviations start at the level of the numerical precision and grow
exponentially from there. The growth is invariant to the step size, has occurred for every grid size tested and for both the normalized and non-normalized piecewise flat Ricci flow equations, and soon dominates any evolution. The growth rates also _increase_ when the scale of the edge-lengths is reduced, countering any improved precision from an increase in resolution. This has led to the proposition below.
**Proposition 1**.: _The piecewise flat Ricci flow is exponentially unstable when directly applied to cubic and skew type triangulations._
This proposition is proved in section 4.2 using a linear stability analysis of all cubic and skew triangulations of a three-torus. Section 4.3 then shows the growth rates for a number of numerical simulations to be in close agreement with the results of the linear stability analysis.
### Suppressing instability
While it is clear that the evolution shown on the top row of figure 3 does not correspond to smooth Ricci flow, it is also inherently non-smooth in nature, particularly with the growth rates increasing as the resolution of a triangulation is increased. This suggests an extra freedom in the cubic and skew triangulations that does not arise in smooth manifolds.
In both the linearized equations in section 4.2, and the numerical simulations in section 4.3, all of the face-diagonal edges can be seen to grow at the same growth rate, with the other types of edges remaining unchanged. With only the face-diagonals growing, each face can still be viewed as a flat parallelogram. The exterior of each block can also still be embedded in three dimensional Euclidean space as a parallelepiped, with the change in the face-diagonals acting similar to a change of coordinates. This is shown in the left two images of figure 4.
Figure 3: Exponential growth of the errors in the face-diagonals is shown on the top row, for both cubic and skew triangulations of a flat three-torus. The bottom shows the suppression of this growth when the body-diagonals are adjusted to give blocks with flat interiors.
The distance between the bottom left and top right vertices of the parallelepiped in figure 4 must clearly grow if it is to remain embedded in Euclidean space, but the corresponding body-diagonals remain unchanged. This produces a growing deficit angle around the body-diagonals, shown on the right of figure 4, which then drives the growth of the face-diagonals. The addition of the body diagonal to each block can also be interpreted as producing an over-determined system, with seven edge-lengths associated with each vertex or block, while there are only six metric components at each point of a smooth manifold. However, this interpretation also suggests a solution.
The flat segments of a piecewise flat manifold do not necessarily have to be tetrahedra; these are just the simplest type of segment. If each block of the cubic and skew triangulations is instead treated as flat, the mechanism that drives the exponential growth of the face-diagonals will be broken. This has led to the following:
**Proposition 2**.: _The exponential instability is suppressed for the piecewise flat Ricci flow of cubic and skew type piecewise flat manifolds with flat blocks as the piecewise flat segments._
In practice the body-diagonals are retained since it is easier to compute dihedral angles and volumes with a tetrahedral graph. Their lengths are then continually re-defined to give zero deficit angles around them, and hence a flat interior for each block, as shown in figure 5. This results in a set of constraint equations to determine the lengths of the body-diagonals at each step of an evolution, circumventing the over-determined nature of the tetrahedral triangulations.
Proposition 2 is proved for a number of cubic and skew grids of \(T^{3}\) manifolds in section 5.1, and numerical simulations show the suppression of the instability in section 5.2. The use of flat blocks has also given stable evolutions for all of the computations in [8], giving remarkably close approximations to their corresponding smooth Ricci flows.
Figure 4: The effect of the face-diagonal growth on the exterior of a cubic block, with the resulting deficit angle arising from an unchanged body-diagonal shown on the right.
Figure 5: The deficit angle \(\epsilon\) at the body-diagonal is shown, along with the perturbation \(\delta\) of the body-diagonal that makes this deficit angle zero, giving a flat interior for the block.
## 4 Initial instability
Since it is the linear terms in a set of differential equations that lead to exponential growth, the linear stability of the piecewise flat Ricci flow was tested for cubic and skew triangulations of flat \(T^{3}\) manifolds.
### Linear stability analysis
A linear stability analysis uses the linear terms of a perturbation away from an equilibrium to test for the stability of that equilibrium.
**Definition 3**.: For a system of differential equations \(\frac{dx_{i}}{dt}=f_{i}(x_{j})\):
1. A stationary solution \(x_{i}=x_{i}^{0}\) is a solution that does not change with \(t\), i.e. \(\frac{dx_{i}^{0}}{dt}=0\).
2. Linearized equations at \(x_{i}^{0}\) are the linear terms in a series expansion of \(f_{i}(x_{j}^{0}+\delta_{j})\) about \(\delta_{j}=0\). The zero order terms vanish since they correspond to a stationary solution, resulting in the equations \(\frac{d\delta_{i}}{dt}=a_{ij}\,\delta_{j}\) with real numbers \(a_{ij}\).
3. The system is linearly unstable at \(x_{i}^{0}\) if the coefficient matrix \(A=a_{ij}\) for the linearized equations has any eigenvalues with positive real parts. Solutions of the linearized equations consist of linear combinations of exponential functions, with the eigenvalues of \(A\) giving the growth rates.
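In practice, the coefficient matrix \(A\) in Definition 3 can also be estimated numerically, by finite differences of the flow about a stationary solution, and its eigenvalues then diagnose linear (in)stability. A generic sketch, with a placeholder flow function rather than the piecewise flat Ricci flow itself:

```python
import numpy as np

def linearized_matrix(flow, x0, h=1e-6):
    """Finite-difference estimate of A = df_i/dx_j at a stationary point x0,
    where dx/dt = flow(x)."""
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    A = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = h
        A[:, j] = (flow(x0 + dx) - flow(x0 - dx)) / (2.0 * h)
    return A

def max_growth_rate(A):
    # The system is linearly unstable if any eigenvalue has positive real part.
    return np.max(np.linalg.eigvals(A).real)

# Placeholder flow with one unstable direction, to show the diagnostic.
flow = lambda x: np.array([2.0 * x[0] - x[1], -3.0 * x[1]])
A = linearized_matrix(flow, np.zeros(2))
print(A, max_growth_rate(A))   # largest real eigenvalue 2 > 0: unstable
```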
Euclidean metrics provide stationary solutions for the smooth Ricci flow, having zero Ricci curvature, and triangulations of flat Euclidean manifolds are stationary for the piecewise flat Ricci flow, with zero deficit angles and therefore zero piecewise flat Ricci curvature. The edge-lengths for cubic and skew triangulations of flat Euclidean space, with unit volume blocks, are given in table 1 below.
Any global rescaling of the edge-lengths in table 1 will also give a stationary solution of the piecewise flat Ricci flow. However, the linearized equations and the eigenvalues of the coefficient matrix are not invariant to this rescaling. The effect of globally rescaling the triangulation blocks is therefore given below.
**Lemma 4** (Scale factor).: _If a triangulation of a flat manifold has a coefficient matrix \(A\), then the coefficient matrix for a rescaling of all of the edges by a factor of \(c\) will be \(\frac{1}{c^{2}}A\)._
Proof.: From (2.2), the piecewise flat Ricci curvature for an edge \(\ell_{i}\) can be written as the sum
\[Rc_{i}=\sum_{k}b_{ik}\frac{\left|\ell_{k}\right|\epsilon_{k}}{V_{k}}, \tag{4.1}\]
\begin{table}
\begin{tabular}{l|c c c|c c c|c} & \(\ell_{x}\) & \(\ell_{y}\) & \(\ell_{z}\) & \(\ell_{yz}\) & \(\ell_{zx}\) & \(\ell_{xy}\) & \(\ell_{xyz}\) \\ \hline Cubic & \(1\) & \(1\) & \(1\) & \(\sqrt{2}\) & \(\sqrt{2}\) & \(\sqrt{2}\) & \(\sqrt{3}\) \\ Skew & \(\frac{1}{3}\sqrt{10}\) & \(1\) & \(\frac{1}{9}\sqrt{94}\) & \(\frac{1}{9}\sqrt{139}\) & \(\frac{1}{9}\sqrt{142}\) & \(\frac{1}{3}\sqrt{13}\) & \(\frac{1}{9}\sqrt{133}\) \\ \end{tabular}
\end{table}
Table 1: The edge-lengths for flat triangulations with unit volume blocks.
for some coefficients \(b_{ik}\). The series expansion of \(Rc_{i}\) for a perturbation \(\delta_{j}\) of some edge \(\ell_{j}\) is given by the series expansions of the individual terms appearing on the right-hand side above. The zero-order terms for the deficit angles \(\epsilon_{k}\) will always be zero, since these correspond to a triangulation of Euclidean space. Hence, the linear terms in the expansion of \(Rc_{i}\) must be given by the linear terms from the deficit angles, and the zero-order terms, or non-perturbed values, for the remaining variables.
For a global rescaling of all the edge-lengths of a triangulation by a factor of \(c\), the volumes are clearly scaled by \(c^{3}\). The coefficients \(b_{ik}\) can be seen in (2.2) to be either constant or depend on the angles between edges, and therefore not depend on the scaling. This gives the relations
\[|\ell_{k}^{c}|=c\,|\ell_{k}|,\qquad V_{k}^{c}=c^{3}\,V_{k},\qquad b_{ik}^{c}=b_ {ik}, \tag{4.2}\]
with the superscript \(c\) representing the rescaled terms. The deficit angles depend on the relative lengths of the edges, and since the perturbation \(\delta_{j}\) is the only length that is _not_ rescaled by \(c\), the deficit angle would be the same as if only \(\delta_{j}\) was rescaled, but by a factor of \(1/c\). An expansion of the deficit angle \(\epsilon_{k}^{c}(\delta_{j})\) for the rescaled blocks is therefore given by the equation
\[\epsilon_{k}^{c}(\delta_{j})\ =\ \epsilon_{kj}^{c}\,\delta_{j}+O(\delta_{j}^{2 })\ =\ \frac{1}{c}\,\epsilon_{kj}\,\delta_{j}+O(\delta_{j}^{2})\, \tag{4.3}\]
with \(\epsilon_{kj}^{c}\) and \(\epsilon_{kj}\) representing the first order coefficients.
Using the piecewise flat Ricci flow equation (2.4), the linear coefficients \(a_{ij}^{c}\) for the rescaled triangulation can now be given in terms of the linear coefficient \(a_{ij}\) for the original triangulation,
\[a_{ij}^{c}\ =\ -|\ell_{i}^{c}|\,\sum_{k}b_{ik}^{c}\frac{|\ell_{k}^{c}|\, \epsilon_{kj}^{c}}{V_{k}^{c}}\ =\ -\,(c\,|\ell_{i}|)\sum_{k}b_{ik}\frac{(c\,|\ell_{k}|)\, \left(\frac{1}{c}\epsilon_{kj}\right)}{c^{3}\,V_{k}}\ =\ \frac{1}{c^{2}}\,a_{ij}. \tag{4.4}\]
The coefficient matrix \(A\), and hence its eigenvalues, are therefore scaled by a factor of \(1/c^{2}\) when all the edge-lengths of a triangulation are rescaled by \(c\).
### Proof of proposition 1
To calculate the linearized equations for the piecewise flat Ricci flow, a number of properties of both the graph structure and the linearization itself are taken advantage of.
* It is only necessary to determine the linearized equations for a single set of three face-diagonals due to the symmetry of the grids. The equations for all of the other face-diagonals will have the same coefficients, with an appropriate translation of indices.
* A \(3\times 3\times 3\) grid of cubic or skew blocks provides all of the edges required to determine the piecewise flat Ricci curvature for the edges in the central block. This can be seen from (2.2), where the piecewise flat Ricci curvature \(Rc_{\ell}\) depends only on the deficit angles at edges \(\ell_{j}\) that have a vertex in common with \(\ell\), and these depend only on the lengths of edges in tetrahedra containing the edge \(\ell_{j}\).
* Series expansions of the piecewise flat Ricci curvature need only be computed for a single perturbation variable at a time, with the linear terms for each perturbation computed separately and then added together to give the complete linearized equation. This avoids the need to compute series expansions of multiple variables simultaneously.
Symbolic manipulations in _Mathematica_ were used to calculate the linearized equations for both the cubic and skew triangulations, with the code and results available in the Zenodo repository at [https://doi.org/10.5281/zenodo.8067524](https://doi.org/10.5281/zenodo.8067524).
**Theorem 5** (Linear instability of cubic triangulations).: _The piecewise flat Ricci flow of any cubic triangulation of a flat \(T^{3}\) manifold is linearly unstable, with perturbations growing exponentially at a rate of at least \(12/c^{2}\) for cubic blocks with volume \(c^{3}\)._
Proof.: To begin, the linearized equations for the piecewise flat Ricci flow about a flat cubic triangulation with unit volume blocks are calculated. Each face-diagonal in a \(3\times 3\times 3\) grid of cubic blocks is perturbed in turn, with the linearized equations at a set of three face-diagonals in the central block computed for each perturbation, and the contributions from all of these perturbations then added together. The resulting linearized equation at the \(xy\)-face-diagonal takes the form:
\[\begin{split}\frac{d}{dt}\,\delta_{xy}(0,0,0)\ =\ &-4\,\delta_{xy}(0,0,0)-\delta_{xy}(-1,-1,0)-\delta_{xy}(1,1,0)\\ &+\frac{3}{2}\,\big(\delta_{xy}(-1,0,0)+\delta_{xy}(0,-1,0)+\delta_{xy}(0,1,0)+\delta_{xy}(1,0,0)\big)\\ &+2\,\big(\delta_{xy}(0,0,-1)+\delta_{xy}(0,0,1)\big)\\ &-\frac{1}{2}\,\big(\delta_{yz}(0,-1,-1)+\delta_{yz}(1,1,0)\big)+\delta_{yz}(0,-1,0)+\delta_{yz}(1,1,-1)\\ &+\frac{3}{2}\,\big(\delta_{yz}(0,0,0)+\delta_{yz}(1,0,-1)\big)\\ &-\frac{1}{2}\,\big(\delta_{zx}(-1,0,-1)+\delta_{zx}(1,1,0)\big)+\delta_{zx}(-1,0,0)+\delta_{zx}(1,1,-1)\\ &+\frac{3}{2}\,\big(\delta_{zx}(0,0,0)+\delta_{zx}(0,1,-1)\big),\end{split} \tag{4.5}\]
with the coordinates in parentheses indicating the location of the perturbed edge in the triangulation grid, with respect to the central block. Due to the symmetries in the cubic lattice, the linearized equations for the other two face-diagonals are given by a permutation of the \(\{xy,yz,zx\}\) subscripts and a similar permutation of the grid coordinates. The linearized equation for _any_ face-diagonal in a \(T^{3}\) grid of cubic blocks can then be given by a discrete linear transformation of the grid coordinates.
The set of coefficients in (4.5) are the same for the linearized equations at all of the face-diagonals in the triangulation, for any size grid, so the set of elements in each row of the coefficient matrix \(A\) will also be the same. These elements sum to \(12\), which must be an eigenvalue of \(A\) with a corresponding eigenvector consisting of all ones. From lemma 4, the coefficient matrix for a triangulation with blocks of volume \(c^{3}\) will have an eigenvalue of \(12/c^{2}\). Any solution to the set of linearized equations must then contain an exponential term with a growth rate of \(12/c^{2}\), leading to an exponential growth for the perturbations in all of the face-diagonals of at least this rate.
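While the paper's symbolic computations are done in _Mathematica_, the row-sum argument above is easy to check numerically. The following minimal NumPy sketch uses a random matrix (not the actual coefficient matrix \(A\)) to illustrate that any matrix with constant row sums has that sum as an eigenvalue, with the all-ones vector as eigenvector:

```python
import numpy as np

# Illustrative check only: a random matrix adjusted to have constant row
# sums of 12, standing in for the coefficient matrix A of the linearized
# flow (3 face-diagonals per block in a 3x3x3 grid gives n = 81).
rng = np.random.default_rng(1)
n = 81
A = rng.normal(size=(n, n))
A += (12 - A.sum(axis=1))[:, None] / n   # force every row to sum to 12

ones = np.ones(n)
print(np.allclose(A @ ones, 12 * ones))               # True
print(np.any(np.isclose(np.linalg.eigvals(A), 12)))   # 12 is an eigenvalue
```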
**Lemma 6** (Linear instability of skew triangulations).: _The piecewise flat Ricci flow of any skew triangulation of a flat \(T^{3}\) manifold is linearly unstable, with perturbations growing exponentially at a rate of at least \(0.966/c^{2}\) for skew blocks with volume \(c^{3}\)._
Proof.: As with the cubic triangulations in lemma 5, the linearized equations about a flat skew triangulation with unit volume blocks are first computed. Unlike the cubic case, the skew blocks do not have the same symmetries as the cubic blocks, so the linearized equations for each of the three types of face-diagonals, \(\ell_{xy}\), \(\ell_{yz}\) and \(\ell_{zx}\) must be found separately. The linearized equations are not displayed here, but can be found in the Zenodo repository, [https://doi.org/10.5281/zenodo.8067524](https://doi.org/10.5281/zenodo.8067524). As with the cubic case, the linearized equations for all of the face-diagonals in a \(T^{3}\) grid of skew blocks can be given by a discrete transformation of the grid coordinates for each of the three face-diagonals in a single block.
From these equations, it can be seen that the sums of the coefficients are not the same for each face-diagonal, so the vector of all ones is not an eigenvector for the coefficient matrix \(A\) as it was for the cubic triangulations. However, by ordering the indices of the face-diagonals \(\ell_{i}\) according to their edge-type, a similar approach can be used. The indices for an \(n\)-block triangulation are defined so that \(i\in\{1,...,n\}\) for the \(yz\)-diagonals, \(i\in\{n+1,...,2n\}\) for the \(zx\)-diagonals and \(i\in\{2n+1,...,3n\}\) for the \(xy\)-diagonals. Defining a vector \(v\) so that
\[v_{i}=\left\{\begin{array}{ccc}p&\text{if}&1\leq i\leq n\\ q&\text{if}&n+1\leq i\leq 2n\\ r&\text{if}&2n+1\leq i\leq 3n\end{array}\right.\quad\text{for }p,q,r\in\mathbb{R}, \tag{4.6}\]
the \(i\)-th component of the product of the matrix \(A\) with \(v\) is
\[(A\,v)_{i}\ =\ \sum_{j}a_{ij}\,v_{j}\ =\ \left(\sum_{j=1}^{n}a_{ij}\right)p+\left(\sum_{j=n+1}^{2n}a_{ij}\right)q+\left(\sum_{j=2n+1}^{3n}a_{ij}\right)r. \tag{4.7}\]
Since there will only be three different values for the elements of the resulting vector, one for each type of face-diagonal, the information in this product can be reduced to the \(3\times 3\) matrix product below,
\[\left(\begin{array}{ccc}0.308&0.311&0.282\\ 0.410&0.415&0.376\\ 0.266&0.269&0.244\end{array}\right)\left(\begin{array}{c}p\\ q\\ r\end{array}\right), \tag{4.8}\]
with the matrix elements obtained by summing the appropriate coefficients in the linearized equations. This matrix has a maximum eigenvalue of approximately \(0.966\), with a corresponding eigenvector of \((0.532,0.710,0.461)\). From (4.7), the matrix \(A\) must also have this eigenvalue, with eigenvector \(v\) from (4.6) where \(p=0.532\), \(q=0.710\) and \(r=0.461\).
From lemma 4, the coefficient matrix for a triangulation with blocks of volume \(c^{3}\) will have an eigenvalue of approximately \(0.966/c^{2}\). Any solution to the set of linearized equations must then contain an exponential term with a growth rate of approximately \(0.966/c^{2}\), leading to an exponential growth for the perturbations in all of the face-diagonals of at least this rate.
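The reduced \(3\times 3\) eigenvalue computation in (4.8) can be cross-checked independently of _Mathematica_; a minimal NumPy sketch is shown below, reproducing the eigenvalue of approximately \(0.966\) and the eigenvector \((0.532,0.710,0.461)\) up to normalization and sign:

```python
import numpy as np

# Reduced 3x3 matrix from (4.8), built from the sums of the coefficients in
# the linearized equations for the three types of face-diagonal.
M = np.array([[0.308, 0.311, 0.282],
              [0.410, 0.415, 0.376],
              [0.266, 0.269, 0.244]])
vals, vecs = np.linalg.eig(M)
i = np.argmax(vals.real)
v = vecs[:, i].real
print(vals[i].real)            # ~0.966
print(v / np.linalg.norm(v))   # ~(0.532, 0.710, 0.461), up to sign
```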
The linearized equations in the proofs of lemmas 5 and 6 have also been used to construct the coefficient matrices for \(3\times 3\times 3\) and \(3\times 3\times 4\) grids of blocks using _Mathematica_. This
was done for both the cubic and skew blocks, with the edge-lengths from table 1, and for a rescaling of these edges by a factor of \(1/3\). The eigenvalues for each matrix were then computed, again using _Mathematica_, with the largest real parts matching the eigenvalues in lemmas 5 and 6, as shown in table 2.
_Remark 7_.: Due to the effect of the scaling factor \(c\) in lemma 4, the instabilities are more severe when the grid resolutions are increased. This is the opposite of the piecewise flat approximations, which should converge to their corresponding smooth values as the resolution is increased.
_Remark 8_.: The instability for the skew triangulations is an order of magnitude less than for the cubic triangulations for the same block volumes. It can also be noted that the cubic blocks are only borderline Delaunay for a flat manifold, with the circumcentres of all tetrahedra in a single block coinciding at the centre of that block. The skew blocks were initially used because they form more strongly Delaunay triangulations, where Voronoi dual volumes can be used with more confidence. For a flat diamond block, the circumcentres of all of the tetrahedra coincide with their barycentres, making them as strongly Delaunay as possible, which may offer a clue to explain the original lack of instability in the diamond triangulations.
### Numerical simulations
Simulations have been run for the piecewise flat Ricci flow of \(3\times 3\times 3\), \(4\times 4\times 4\) and \(5\times 5\times 5\) grids of both cubic and skew blocks. The base edge-lengths were taken from table 1, scaled by a factor of \(1/3\) for the skew triangulations, with lemmas 5 and 6 indicating that these should give exponential growth rates on the order of \(10\). The edge-lengths were then approximated as double-precision floating point numbers, and each perturbed by a random number from a normal distribution with standard deviation of \(10^{-15}\), the level of numerical precision. Evolutions were performed using an Euler method with \(100\) steps of size \(0.01\), and deviations in the face-diagonal edge-lengths were fitted to a linear combination of exponential functions,
\[\begin{split}f_{cubic}(t)\ &=\ a_{1}e^{k_{1}t}+a_{2}e^{k_{2}t}+a_{3}e^{k_{3}t}+c\,,\\ f_{skew}(t)\ &=\ a_{1}e^{k_{1}t}+b\,t+c\,.\end{split} \tag{4.9}\]
The number of terms was chosen to include all of the positive eigenvalues for the \(3\times 3\times 3\) grid triangulations shown in table 2. A linear term was also added for the skew function, as
\begin{table}
\begin{tabular}{c|c c|c c} & \multicolumn{2}{c}{\(c=1\)} & \multicolumn{2}{c}{\(c=1/3\)} \\ & \(3\times 3\times 3\) & \(3\times 3\times 4\) & \(3\times 3\times 3\) & \(3\times 3\times 4\) \\ \hline Cubic & 12, 6, 2.739, 0 & 12, 8, 6, 4.514 & 108, 54, 24.65, 0 & 108, 72, 54, 40.63 \\ Skew & 0.966, 0. & 0.966, 0. & 8.697, 0. & 8.697, 0. \\ \end{tabular}
\end{table}
Table 2: The largest of the real parts of the eigenvalues for both cubic and skew triangulations, with two different grid sizes and two different scales \(c\). The non-integer values are approximated to three decimal places.
the non-face-diagonal edges showed a consistent linear growth of about \(10^{-15}\). Results of the growth rates and \(R\)-squared values for the best-fit functions are presented in table 3, with sample graphs of the fitted functions and their corresponding data shown in figure 6.
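To make the fitting step concrete, the sketch below shows how deviations can be fitted to the cubic function in (4.9) using SciPy's `curve_fit`. The data here is a noise-free placeholder generated from the model itself (the actual simulation data starts at the \(10^{-15}\) precision level), so the tool and constants are illustrative rather than the exact pipeline used to produce table 3.

```python
import numpy as np
from scipy.optimize import curve_fit

def f_cubic(t, a1, k1, a2, k2, a3, k3, c):
    # Three exponential terms plus a constant, as in (4.9)
    return a1 * np.exp(k1 * t) + a2 * np.exp(k2 * t) + a3 * np.exp(k3 * t) + c

t = np.arange(100) * 0.01                        # 100 Euler steps of size 0.01
true_params = [1.0, 12.0, 2.0, 6.0, 3.0, 2.739, 1.0]
data = f_cubic(t, *true_params)                  # placeholder deviations

popt, _ = curve_fit(f_cubic, t, data, p0=[1, 10, 1, 5, 1, 2, 0], maxfev=50000)
print(popt[[1, 3, 5]])   # fitted growth rates, close to 12, 6 and 2.739
```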
The close agreement between the growth rates in table 3 and the eigenvalues in table 2, along with the extremely high \(R\)-squared values, demonstrates how effective the linearized equations are in approximating the evolution. The low interquartile ranges also show consistent behaviour across all edges, with surprisingly comparable growth rates over the different grid sizes, considering that a higher number of distinct eigenvalues should be expected for larger grids. In particular, the simulations support the hypothesis that the eigenvalues in lemmas 5 and 6 represent the largest growth rates for any cubic or skew type triangulations.
## 5 Suppression of Exponential Instability
With the body-diagonals re-defined to give flat interiors for each block, the linear stability analysis and numerical simulations are performed again, with the exponential growth suppressed in both.
\begin{table}
\begin{tabular}{l l r|c c|c c|c c} & \multicolumn{6}{c}{\(3\times 3\times 3\)} & \multicolumn{3}{c}{\(4\times 4\times 4\)} & \multicolumn{3}{c}{\(5\times 5\times 5\)} \\ & \multicolumn{2}{c|}{Lin. App.} & \multicolumn{1}{c|}{Median} & \multicolumn{1}{c|}{IQR} & \multicolumn{1}{c|}{Median} & \multicolumn{1}{c|}{IQR} & \multicolumn{1}{c}{Median} & \multicolumn{1}{c}{IQR} \\ \cline{2-9} Cubic & \(k_{1}\) & 12 & 11.998 & 0.001 & 11.997 & 0.007 & 11.97 & 0.04 \\ & \(k_{2}\) & 6 & 6.04 & 0.01 & 6.04 & 0.08 & 6.2 & 0.5 \\ & \(k_{3}\) & 2.739 & 2.86 & 0.04 & 2.9 & 0.2 & 3.8 & 1.9 \\ & \(R^{2}\) & & 0.99998 & \(10^{-6}\) & 0.99994 & \(10^{-4}\) & 0.99930 & \(10^{-3}\) \\ Skew & \(k\) & 8.697 & 8.339 & 0.004 & 8.336 & 0.01 & 8.337 & 0.01 \\ & \(R^{2}\) & & 0.999999 & \(10^{-7}\) & 0.999998 & \(10^{-6}\) & 0.999999 & \(10^{-6}\) \\ \end{tabular}
\end{table}
Table 3: The median growth rates and \(R\)-squared values with their interquartile ranges (IQR) for each triangulation. The values for the cubic triangulations are in extremely close agreement with the corresponding eigenvalues for the \(3\times 3\times 3\) grid in table 2, and the skew parameters in close agreement.
Figure 6: The best-fit graphs with the _lowest_\(R\)-squared values (0.99998 and 0.999999 respectively) for deviations in the edge-lengths of the \(3\times 3\times 3\) grid triangulations, closely agreeing with the evolution values represented by the points.
### Linear instability suppressed
The linearized equations are calculated here by taking advantage of the same properties outlined at the beginning of section 4.2, but with some additional steps to ensure that each block is flat after any perturbations.
* Once a face-diagonal \(\ell_{j}\) is perturbed by an arbitrary amount \(\delta_{j}\), only the blocks on either side of that face are affected. For the body-diagonals of each of these blocks, the deficit angle can be found in terms of \(\delta_{j}\) and the length \(b\) of the body-diagonal itself.
* Setting the deficit angle to zero, the length \(b\) which gives a flat block can be found in terms of \(\delta_{j}\). Since only linear terms in \(\delta_{j}\) will impact the linearized equations, \(b\) need only be found as a linear approximation in \(\delta_{j}\).
* A grid of size \(4\times 4\times 4\) is now required to determine the piecewise flat Ricci flow for the edges in a central block, due to the changes in body-diagonals.
The linearized equations are first used to show that the approaches in lemmas 5 and 6 no longer imply a linear instability once flat blocks are used, and then to show that the linear instability is actually suppressed for a number of different grid sizes.
**Theorem 9**.: _When the body-diagonals are re-defined to give flat blocks, summing rows of the linear coefficient matrix no longer implies a linear instability of the piecewise flat Ricci flow._
Proof.: Each face-diagonal \(\ell_{j}\) in a \(4\times 4\times 4\) grid of blocks is perturbed away from the flat values in table 1 by an arbitrary amount \(\delta_{j}\), with the body-diagonals on either side of that face re-defined in terms of \(\delta_{j}\) to give a zero deficit angle. The linear impact of each perturbation \(\delta_{j}\) on a face-diagonal \(\ell_{i}\) in a central block can then be calculated using the piecewise flat Ricci flow equations (2.4), and the separate terms summed to give the linearized equation for \(\ell_{i}\).
The resulting equation is shown below for an \(xy\)-face-diagonal of the cubic triangulation, with the coordinates indicating the relative locations of the face-diagonals on the grid,
\[\begin{split}\frac{d}{dt}\,\delta_{xy}(0,0,0)\ =\ &-5\,\delta_{xy}(0,0,0)-\delta_{xy}(-1,-1,0)-\delta_{xy}(1,1,0)\\ &-1/4\,\big(\delta_{xy}(-1,0,1)+\delta_{xy}(0,-1,1)+\delta_{xy}(0,1,-1)+\delta_{xy}(1,0,-1)\big)\\ &+5/4\,\big(\delta_{xy}(-1,0,0)+\delta_{xy}(0,-1,0)+\delta_{xy}(0,1,0)+\delta_{xy}(1,0,0)\big)\\ &+3/2\,\big(\delta_{xy}(0,0,-1)+\delta_{xy}(0,0,1)\big)\\ &-1/2\,\big(\delta_{yz}(0,-1,-1)+\delta_{yz}(0,0,-1)+\delta_{yz}(1,0,0)+\delta_{yz}(1,1,0)\big)\\ &-1/4\,\big(\delta_{yz}(-1,0,0)+\delta_{yz}(0,1,-1)+\delta_{yz}(1,-1,0)+\delta_{yz}(2,0,-1)\big)\\ &+3/4\,\big(\delta_{yz}(0,-1,0)+\delta_{yz}(0,0,0)+\delta_{yz}(1,0,-1)+\delta_{yz}(1,1,-1)\big)\\ &-1/2\,\big(\delta_{zx}(-1,0,-1)+\delta_{zx}(0,0,-1)+\delta_{zx}(0,1,0)+\delta_{zx}(1,1,0)\big)\\ &-1/4\,\big(\delta_{zx}(-1,1,0)+\delta_{zx}(0,-1,0)+\delta_{zx}(0,2,-1)+\delta_{zx}(1,0,-1)\big)\\ &+3/4\,\big(\delta_{zx}(-1,0,0)+\delta_{zx}(0,0,0)+\delta_{zx}(0,1,-1)+\delta_{zx}(1,1,-1)\big).\end{split} \tag{5.1}\]
As in the proof of lemma 5, the linearized equations for the other two types of face-diagonal in the cubic triangulation are simply given by a permutation of the \(\{xy,yz,zx\}\) subscripts and of the grid coordinates. The linearized equations for a skew triangulation are not displayed here, but can be found in the Zenodo repository at [https://doi.org/10.5281/zenodo.8067524](https://doi.org/10.5281/zenodo.8067524). The linearized equations for all of the face-diagonals in any \(T^{3}\) grid of cubic or skew blocks can be found through a discrete transformation of the grid coordinates.
The coefficients in equation (5.1) can be seen to sum to zero, with the coefficients of the linearized equations for the face-diagonals in the skew triangulation also summing to zero. This implies that the vector of all ones is an eigenvector of the coefficient matrix \(A\), with an eigenvalue of zero, for all grid-sizes of both the cubic and skew triangulations. While this approach gave positive eigenvalues in lemmas 5 and 6, proving the equations there to be linearly unstable, it does not do so when flat blocks are used instead of just flat tetrahedra.
_Remark 10_.: For many constant row-sum matrices, the eigenvalue given by the row-sum can be proved to be the largest eigenvalue. For example, if all of the coefficients except the first in equation 5.1 were positive, the Gershgorin circle theorem would be sufficient to prove this. Recent progress has also been made on eigenvalue bounds for more general constant row-sum matrices [11, 12]. Unfortunately, a proof that the same is true in this case has not been found, but direct computation of the eigenvalues below and the numerical simulations in section 5.2 suggest that it is true.
For particular grid sizes, the linearized equations were used to construct the coefficient matrices directly, using the mathematical software _Mathematica_, and the set of eigenvalues computed for each. The eigenvalue with the largest real part is given for each piecewise flat manifold in table 4. For the cubic blocks these were computed exactly, but numerical methods were required to compute the eigenvalues for the skew blocks.
Clearly, there cannot be any exponential growth terms for the linearized equations in the cubic triangulations since the largest real part of the eigenvalues is zero. The largest values for the skew triangulation are _practically_ zero, being equivalent to zero for the numerical precision of the computations. While an eigenvalue of zero does not imply linear stability, it is consistent with the linearization of the smooth Ricci flow near a flat manifold, which also contains zero eigenvalues [13]. Also, since the piecewise flat curvatures depend on local regions of a piecewise flat manifold, it is deemed unlikely that the instability would re-emerge in larger grids.
\begin{table}
\begin{tabular}{l|c c|c c|} & \multicolumn{2}{c|}{\(c=1\)} & \multicolumn{2}{c}{\(c=1/3\)} \\ & \(3\times 3\times 3\) & \(3\times 3\times 4\) & \(3\times 3\times 3\) & \(3\times 3\times 4\) \\ \hline Cubic & 0 & 0 & 0 & 0 \\ Skew & \(3.0\times 10^{-15}\) & \(3.3\times 10^{-15}\) & \(2.0\times 10^{-14}\) & \(3.7\times 10^{-14}\) \\ \end{tabular}
\end{table}
Table 4: The eigenvalues with the largest real part, showing a suppression of the linear instability when flat blocks are used.
### Numerical simulations with instability suppressed
Numerical simulations of the piecewise flat Ricci flow have also been run with blocks that are effectively flat. In order to compare with the simulations in section 4.3, the same triangulations and edge-length perturbations were used, and evolved using the same Euler method with 100 steps of size 0.01, but with the body-diagonal edge-lengths adapted at the beginning of each step. This was done by first adding a perturbation variable \(\delta_{j}\) to the length of each body-diagonal \(\ell_{j}\), computing the deficit angle around \(\ell_{j}\) as a function of \(\delta_{j}\), and then setting this equal to zero and solving for \(\delta_{j}\). Since the deficit angles should already be small for a good triangulation, a linear approximation in \(\delta_{j}\) about zero is used for the deficit angle, giving a unique solution for \(\delta_{j}\) when set equal to zero. To show that the exponential growth is suppressed, the median and maximum changes in edge-lengths for all triangulations are shown in table 5, with the deviations of the edge-lengths for the blocks with the largest changes shown in figure 7. These same edges were used for the graphs in figure 3.
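A minimal sketch of the body-diagonal adaptation performed at each step is given below. It assumes a helper `deficit_angle(lengths, b)` that returns the deficit angle around a body-diagonal of length `b` from the block's current edge-lengths (such a function is part of the _Mathematica_ code in the repository and is not reproduced here); the linear approximation about the current length is solved for the perturbation giving a zero deficit angle.

```python
def flatten_body_diagonal(lengths, b0, deficit_angle, h=1e-6):
    """Return an adapted body-diagonal length giving a (linearly) flat block.

    lengths: edge-lengths of the block, b0: current body-diagonal length,
    deficit_angle: assumed helper computing the deficit angle around the
    body-diagonal. A centred finite difference stands in for the symbolic
    linearization used in the paper.
    """
    eps0 = deficit_angle(lengths, b0)
    d_eps = (deficit_angle(lengths, b0 + h) - deficit_angle(lengths, b0 - h)) / (2 * h)
    delta = -eps0 / d_eps   # zero of the linear approximation eps0 + d_eps * delta
    return b0 + delta
```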
Adapting the body-diagonals has clearly suppressed the exponential growth for both the cubic and skew simulations. For the cubic triangulations, most of the edge-lengths do not change, and those that do change only during the first quarter of time steps, and by an extremely small amount, less than the largest of the initial perturbations. While the growth rates are non-zero throughout the simulation, as seen in figure 7, when combined with the step size of the Euler method the resulting changes are below the numerical precision. The set of edge-lengths then becomes stationary once the rates of change drop below this threshold.
For the skew triangulations, the numerical precision is lower as it depends on the lengths of the edges, so the oscillations in the lower-right graph in figure 7 continue to produce a linear growth. Linear functions have therefore been fitted to the data for each edge in the triangulation, with the results in table 5 showing consistent, extremely small rates of change across all edges, also agreeing with the rates of the background linear growth for the unsuppressed simulations in section 4.3. While this linear growth does not directly correspond
\begin{table}
\begin{tabular}{l l|c c c} & & \(3\times 3\times 3\) & \(4\times 4\times 4\) & \(5\times 5\times 5\) \\ \hline Cubic & Median change & 0 & 0 & 0 \\ & Max. change & \(1.7\times 10^{-15}\) & \(3.1\times 10^{-15}\) & \(2.2\times 10^{-15}\) \\ & Initial pert. & \(2.4\times 10^{-15}\) & \(3.1\times 10^{-15}\) & \(3.3\times 10^{-15}\) \\ Skew & Median change & \(4.6\times 10^{-15}\) & \(4.6\times 10^{-15}\) & \(4.6\times 10^{-15}\) \\ & Max. change & \(6.7\times 10^{-15}\) & \(8.0\times 10^{-15}\) & \(8.1\times 10^{-15}\) \\ & Initial pert. & \(2.9\times 10^{-15}\) & \(3.1\times 10^{-15}\) & \(4.0\times 10^{-15}\) \\ & Median slope & \(4.5\times 10^{-15}\) & \(4.6\times 10^{-15}\) & \(4.5\times 10^{-15}\) \\ & IQR of slope & \(7.5\times 10^{-16}\) & \(7.8\times 10^{-16}\) & \(7.6\times 10^{-16}\) \\ \end{tabular}
\end{table}
Table 5: The median and maximum edge-length changes for each triangulation, along with the maximum values of the initial random perturbations. The median and interquartile ranges (IQR) for the best-fit linear functions in the skew triangulations are also shown. All values are close to the numerical precision, showing a suppression of the exponential growth.
to smooth Ricci flow, it does not conflict with it either. The consistency of the rates of change leads to a global change in the scale but will not produce any curvature, so the piecewise flat manifold remains in a stationary state, growing but remaining flat. The low rate means the effect will not be noticeable at regular scales except over extremely long time frames, but the effect can still be avoided by using the normalized Ricci flow, which preserves the global volume.
## 6 Conclusion
The stability of the piecewise flat Ricci flow has been demonstrated for cubic and skew type triangulations that have been adapted so that the interior of each block is effectively flat. In practice, the six tetrahedra in each block are kept, with the length of each interior body-diagonal re-defined to give a zero deficit angle, and therefore a flat interior for each block. This makes the internal angles easier to compute, since the internal geometry of a tetrahedron is entirely determined by the lengths of its edges, and ensures that the edge-lengths alone determine the geometry of each piecewise flat manifold.
A linear stability analysis has verified the exponential instability of the face-diagonal edges seen in numerical simulations for cubic and skew triangulations of flat Euclidean space. The linear coefficient matrices have a constant positive value for the sum of the elements in each row, which must therefore be an eigenvalue, and coincides with the largest growth rate seen in the numerical simulations. Once the triangulations are adapted, the row-sums of the linear coefficient matrices are all zero, and the numerical simulations are stable. As with related types of matrices, it seems reasonable to expect the constant row-sums to give the largest real eigenvalue for each linear coefficient matrix, which would be zero here, in agreement with
Figure 7: Deviations of the face-diagonal edge-lengths from their pre-perturbed values, for blocks with the largest edge-length changes, and the corresponding rates of change from the adapted piecewise flat Ricci flow, showing a clear suppression of the exponential instability.
the smooth Ricci flow [13]. A proof for this has not been found, but direct computation of the eigenvalues for specific triangulations, and numerical simulations for others, support the hypothesis.
While this paper only shows that the adapted triangulations of a flat manifold are stationary for the piecewise flat Ricci flow of [7], this flow is seen to successfully approximate the smooth Ricci flow in [8] for adapted cubic and skew type triangulations of a variety of different manifolds. Also, despite re-defining the body-diagonals to have zero deficit angles, computations of the piecewise flat Ricci curvature in [8] are just as accurate at these body-diagonals as they are at the other edges.
**Data Availability:** The _Mathematica_ notebooks used for the computations and numerical simulations, and the data generated by these, are available in the Zenodo repository at [https://doi.org/10.5281/zenodo.8067524](https://doi.org/10.5281/zenodo.8067524).
## Acknowledgements
I'd like to thank Robert Sheehan and Chris Mitchell for many helpful discussions which greatly benefited this work.
|
2301.08962 | Leveraging Spatial and Temporal Correlations for Network Traffic
Compression | The deployment of modern network applications is increasing the network size
and traffic volumes at an unprecedented pace. Storing network-related
information (e.g., traffic traces) is key to enable efficient network
management. However, this task is becoming more challenging due to the
ever-increasing data transmission rates and traffic volumes. In this paper, we
present a novel method for network traffic compression that exploits spatial
and temporal patterns naturally present in network traffic. We consider a
realistic scenario where traffic measurements are performed at multiple links
of a network topology using tools like SNMP or NetFlow. Such measurements can
be seen as multiple time series that exhibit spatial and temporal correlations
induced by the network topology, routing or user behavior. Our method leverages
graph learning methods to effectively exploit both types of correlations for
traffic compression. The experimental results show that our solution is able to
outperform GZIP, the \textit{de facto} traffic compression method, improving by
50\%-65\% the compression ratio on three real-world networks. | Paul Almasan, Krzysztof Rusek, Shihan Xiao, Xiang Shi, Xiangle Cheng, Albert Cabellos-Aparicio, Pere Barlet-Ros | 2023-01-21T15:27:04Z | http://arxiv.org/abs/2301.08962v1 | # Leveraging Spatial and Temporal Correlations for Network Traffic Compression
###### Abstract
The deployment of modern network applications is increasing the network size and traffic volumes at an unprecedented pace. Storing network-related information (e.g., traffic traces) is key to enable efficient network management. However, this task is becoming more challenging due to the ever-increasing data transmission rates and traffic volumes. In this paper, we present a novel method for network traffic compression that exploits spatial and temporal patterns naturally present in network traffic. We consider a realistic scenario where traffic measurements are performed at multiple links of a network topology using tools like SNMP or NetFlow. Such measurements can be seen as multiple time series that exhibit spatial and temporal correlations induced by the network topology, routing or user behavior. Our method leverages graph learning methods to effectively exploit both types of correlations for traffic compression. The experimental results show that our solution is able to outperform GZIP, the _de facto_ traffic compression method, improving by 50%-65% the compression ratio on three real-world networks.
Graph Neural Network, Recurrent Neural Network, Spatio-Temporal Correlations, Compression
## I Introduction
In the last years, modern networks have seen a considerable growth in network traffic [1] and connected devices [2]. The deployment of modern applications (e.g., vehicular networks, IoT, virtual reality, video streaming, Industry 4.0) and continuous improvements in network technology (e.g., link speeds) are accentuating this trend even further. Storing network traffic information (e.g., packet traces, link-level traffic measurements, flow-level measurements) is important for network operators to perform network management tasks, such as network planning, traffic engineering, traffic classification, anomaly detection or network forensics, among others. The emerging Network Digital Twin paradigm (NDT) will also require storage and analysis of vast amounts of network traffic data [3, 4].
As a consequence, the efficient storage of network traffic is becoming more challenging than ever. Network traffic traces from Internet Service Providers (ISP), backbone or data center networks can easily occupy hundreds of terabytes per day [5] or petabytes in the case of mobile networks [6]. For example, only a 24-hour trace from a single 10 Gbps link can result in 108 terabytes of data in the worst case. Storing traces from real-world networks can be difficult as they have in the order of hundreds of links [7]. In addition, such traces can contain thousands of concurrent flows per second [5]. Even storing aggregated flow-level information (e.g., NetFlow) can require hundreds of terabytes of disk storage per day [8, 9].
Traditionally, network traffic traces are compressed using GZIP [10], a popular lossless method for compressing files regardless of their format (e.g., text, csv file). Network operators typically collect traffic traces in PCAP format [11] and they simply compress them with GZIP or similar tools. However, GZIP is a generic compression tool, resulting in sub-optimal compression performance when applied to network traffic data.
Past works showed that network traffic traces are far from being purely random, meaning that they intrinsically have some underlying structure [1, 12, 13, 14, 15]. In particular, traffic traces are known to present spatial and temporal patterns that could potentially be exploited to increase current compression ratios. In this work, we seek to understand if recent advancements in neural network (NN) architectures could effectively be used to leverage such correlations to achieve better compression ratios than traditional tools such as GZIP.
We consider a traffic compression scenario where multiple link-level traffic measurements are performed over time for a network topology using standard measurement tools, such as SNMP or NetFlow. These measurements can be seen as multiple time series (i.e., one per link) that exhibit spatio-temporal correlations. Temporal correlations result from user behavior and seasonality in network traffic (e.g., day/night or workday/weekend patterns). Spatial correlations between links are mostly induced by the network topology and routing, among other reasons (e.g., correlations in traffic demands or resulting from protocol behavior).
In this paper we present a neural traffic compression method that exploits the spatio-temporal correlations naturally present in network traffic. Our compressor contains two main modules: a _predictor_ that is implemented using neural networks (i.e., Recurrent and Spatio-Temporal Graph Neural Networks) and an _encoder_. The main role of the predictor is to exploit the spatial and temporal correlations between the network links to accurately estimate, from past observations, the distribution of the data to be compressed. The encoder is implemented using arithmetic coding (AC) [16], a popular lossless compression method. Based on the predicted distributions, AC decides how to better encode the traffic information. The proposed solution also implements a _decoder_ for decompression, which inverts
the process to recover the original traffic data.
To showcase the compression capabilities of our method, we first evaluate it on synthetically-generated traffic with different degrees of temporal and spatial correlation. The results with synthetic data show that our proposed solution can improve GZIP's compression ratios by \(\geq\)35%, even in scenarios with weak correlation. Next, we evaluate our compression method with real-world datasets that cover several months of traffic from three real-world networks. Experimental results show that our method can reduce the size of compressed files by 50%-65% compared to GZIP and by a factor between 2.6x and 4.2x with respect to the original file.
## II Background
In this paper, we consider the compression scenario defined by a network topology with link-level traffic measurements. These measurements indicate the traffic volume over time going through each link. They can be obtained from the real-world network using network monitoring tools such as SNMP or NetFlow. Link-level measurements are performed periodically and stored in time bins (e.g., bins of 5 minutes), resulting in a sequence of accumulated traffic values that we want to store efficiently on disk1. Figure 1 shows an overview of the compression scenario.
Footnote 1: We chose this scenario for its relevance and simplicity, but note that the same principles apply for example to flow-level measurements (e.g., NetFlow), where flows (instead of link-level measurements) can be seen as multiple time series exhibiting spatio-temporal patterns.
When compressing network traffic measurements, network administrators typically follow a simplistic approach based on well-known compression software such as GZIP [10]. However, such methods are generic, meaning that they were designed to compress multiple kinds of information (e.g., images, text). This results in low compression ratios when used with network traffic traces. Equation 1 shows how to compute the compression ratio (CR) of a file.
\[CR=\frac{Uncompressed\_size}{Compressed\_size} \tag{1}\]
### _Exploiting temporal and spatial correlations_
In our work, we leverage ML to exploit the network traffic characteristics and achieve high compression ratios. Specifically, link-level traffic measurements are time series data, meaning that the traffic values can be seen as a sequence indexed by time. A time series can typically be described by its seasonality and trend. Seasonality refers to a pattern repeated in time at a certain frequency (e.g., day/night). The trend indicates the long-term tendency of the time series to increase, decrease or remain stable. Figure 2 shows the daily seasonality present in link-level traffic measurements during \(\approx\)1 month on two real-world datasets used in our experiments (see Section IV-A). In addition, the network topology and routing introduce spatial correlations in the link-level traffic measurements. This means that links sharing paths are going to have a similar traffic behavior, which we believe can be exploited to improve the compression ratios.
To showcase the presence of spatial correlations, we compute the Pearson correlation coefficient between each pair of links in two real-world topologies. This coefficient indicates how strongly two sets of data or vectors are correlated. The following equation shows how to compute the Pearson correlation:
\[r=\frac{\sum_{i=1}^{n}(x_{i}-\overline{x})(y_{i}-\overline{y})}{\sqrt{\sum_{i=1}^{n}(x_{i}-\overline{x})^{2}}\sqrt{\sum_{i=1}^{n}(y_{i}-\overline{y})^{2}}} \tag{2}\]
where \(\overline{x}\) and \(\overline{y}\) are the means of the vectors x and y respectively. The resulting value is contained between the range [-1, +1], where -1 indicates negative correlation. This can happen when the traffic increases in one link but decreases in the other link. A value of 0 indicates no correlation and
Fig. 1: Overview of the network traffic compression scenario. Link-level traffic measurements are extracted from the real-world network and stored in the disk.
Fig. 3: Pearson correlation between links for the Abilene (left) and Geant (right) datasets. The darker colors indicate high spatial correlation between links. This means that the traffic values between links have a positive correlation (red) or a negative one (blue). Best viewed in color.
Fig. 2: Two link-level network traffic measurements during \(\approx\)1 month from two real-world datasets. Notice the y-axis is in logarithmic scale. The figures indicate that the resulting time series from the measurements have temporal patterns.
values close to +1 indicate positive correlation (i.e., the traffic increases in both links in similar proportions). Figure 3 shows the Pearson correlation for the real-world Abilene (left) and Geant (right) datasets [17]. The darker the color in the figure, the higher the correlation between links. The figures indicate that indeed there is spatial correlation between links. We believe that both temporal and spatial correlations can be exploited to achieve higher compression ratios than generic methods like GZIP.
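As an illustration of this analysis, the per-link Pearson coefficients can be obtained directly with NumPy; the sketch below uses toy data in place of the real link measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy link-level measurements: two spatially correlated links (e.g., links
# sharing paths) plus one independent link, 1000 time bins each.
base = rng.normal(size=1000)
traffic = np.stack([base,
                    0.8 * base + 0.2 * rng.normal(size=1000),
                    rng.normal(size=1000)])
corr = np.corrcoef(traffic)   # [n_links, n_links] Pearson coefficients
print(np.round(corr, 2))      # off-diagonal values near +1 reveal correlation
```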
### _Arithmetic Coding_
Our method leverages arithmetic coding (AC) [16] to compress the sequences of traffic values. This is a lossless method that compresses a stream of symbols (e.g., text characters) into a single number in the interval [0, 1). To do this, AC assigns fewer bits to frequent symbols and more bits to less frequent symbols. In contrast to other popular compression methods such as Huffman coding [18], AC achieves better compression ratios and can work in an online fashion. In addition, the AC compression algorithm works with probability distributions, making it a good fit with ML technologies.
Figure 4 shows the procedure to code a short text sequence using AC. Initially, the AC takes as input the set of possible symbols and a probability distribution. For simplicity, in this example the distribution remains static but a predictive model can be used to update the distribution after coding each symbol. Initially, the range [0, 1) is divided into segments proportionally to the symbol probability distribution. Then, the AC selects the segment [0, 0.2) corresponding to the first input symbol 'A' from the text sequence. Afterwards, this segment is divided into segments following the same proportions of the probability distribution. This process is repeated recursively for each symbol until the End-of-Data symbol is met. Finally, a decimal value from within the End-of-Data segment is picked as the tag for the entire text sequence.
The decoding part follows a symmetric procedure to the coding part. To recover the original text sequence, the algorithm takes as input the tag, the set of possible symbols and the probability distribution. Similarly, the range [0, 1) is divided into segments proportionally to the probability distribution and the segment that includes the codeword is selected. The symbol that corresponds to the segment represents the first symbol from the original text sequence. Then, the process starts again, decoding the original sequence one symbol at a time. This is a recursive process that finishes when the End-of-Data symbol is met. Figure 5 shows an example of decoding the tag and recovering the original text message.
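To make these two procedures concrete, the sketch below implements a toy floating-point version of the coder and decoder for the example above. The symbol probabilities are illustrative, and float precision limits this version to short sequences; practical implementations, such as the one used in this work, rely on integer renormalization instead:

```python
import numpy as np

def ac_encode(symbols, probs):
    """Encode a symbol sequence into a single tag in [0, 1)."""
    syms = list(probs)
    cdf = np.concatenate([[0.0], np.cumsum([probs[s] for s in syms])])
    low, high = 0.0, 1.0
    for s in symbols:
        i = syms.index(s)
        span = high - low
        low, high = low + span * cdf[i], low + span * cdf[i + 1]
    return (low + high) / 2      # any value inside the final segment works

def ac_decode(tag, n_symbols, probs):
    """Recover the original sequence from the tag."""
    syms = list(probs)
    cdf = np.concatenate([[0.0], np.cumsum([probs[s] for s in syms])])
    low, high = 0.0, 1.0
    out = []
    for _ in range(n_symbols):
        span = high - low
        i = int(np.searchsorted(cdf, (tag - low) / span, side="right")) - 1
        out.append(syms[i])
        low, high = low + span * cdf[i], low + span * cdf[i + 1]
    return out

probs = {"A": 0.2, "B": 0.3, "C": 0.3, "<EOD>": 0.2}   # illustrative values
tag = ac_encode(list("AABC") + ["<EOD>"], probs)
print(tag, ac_decode(tag, 5, probs))                   # recovers A A B C <EOD>
```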
The compression performance of the AC is determined by the quality of the probability distribution. Consider a scenario where a predictive model dynamically computes the probability distribution for each symbol in a sequence. As an example, suppose the AC is at time bin \(t\) and we want to compress the value at _t+1_. Then, the AC can use a predictive model that takes as input the past \(k\) symbols and predicts the probability distribution for the next symbol at _t+1_. If the model is accurate, AC will assign fewer bits to encode the symbol, resulting in close-to-optimal compression performance. On the other hand, if the model is not accurate, the probabilities will not correspond to the real symbol, which results in poor compression or can even increase the final file size. In this paper we want to leverage ML to build an accurate predictor to compress the sequences of network traffic measurements.
### _Notation and problem statement_
Formally, link-level traffic measurements are represented as a matrix \(\mathbf{X}\in\mathbb{N}^{w\times l}\), where \(w\) represents the sliding window length and \(l\) the number of links. Each traffic measurement is a random vector \(\mathbf{x}_{t}\in\mathbb{N}^{l}\). For the arithmetic encoder we need a one-step forecast distribution \(p(\mathbf{x}_{t}|\mathbf{x}_{<t})\) to capture temporal dependence. As the arithmetic encoder operates on streams of symbols, we further factorize the distribution with the chain rule to capture spatial dependence:
\[p(\mathbf{x}_{t}|\mathbf{x}_{<t})=\prod_{l}p(x_{l}|\mathbf{x}_{t,<l},\mathbf{x}_{<t}). \tag{3}\]
Here we assume the stationary model \(p(x_{l}|\mathbf{x}_{t,<l},\mathbf{x}_{<t})\) and mask out the unknown traffic values. Note that there is no natural order for the auto-regressive model; the only requirement is that the order must be the same during compression and decompression.
Fig. 4: Given a finite set with all possible symbols and a probability distribution, the sequence “AABC” is encoded into a single decimal value. The process starts by dividing the range [0, 1) proportionally to the input distribution. Then, the process picks the segment that corresponds to the first symbol from the original sequence for further division. This process is repeated recursively until all symbols have been encoded.
Fig. 5: Given a tag, a set of symbols and a probability distribution, the decoding process starts by dividing the range [0, 1) proportionally to the distribution. The segment that contains the tag is selected and its corresponding symbol is decoded as the first symbol from the original text sequence. Then, the segment is divided proportionally, starting a recursive process that finishes with the End-of-Data symbol.
## III Design
In this section we present a method for network traffic compression based on Neural Networks (NN). This method compresses link-level traffic measurements that evolve over time. These measurements are performed periodically and aggregated in time bins. For example, the bins can span 5 minutes, indicating the traffic that passed through a link during this period of time.
We consider two link-level compression scenarios. In the first one, we want to compress link-level traffic measurements from a single link (e.g., access link). This is a common practice in small or medium size networks where internal traffic is smaller and not considered to be of interest in many cases (e.g., enterprise, campus networks). With a single link, we can only exploit temporal correlations as no other links are considered. The second scenario represents a more general use-case, where we want to simultaneously compress the traffic from multiple links of a network topology (i.e., network-wide compression [19]). This situation will be more common in large networks, such as those of Internet Service Providers, which can have a global view of the network topology. In this case, apart from the temporal correlations of the first scenario, the routing and the topology will also introduce spatial correlations that we can exploit for compression purposes.
Our compression method takes as input the link-level traffic values and outputs a floating point number for each link. This value corresponds to the tag of the AC and it represents the entire compressed sequence (see Section II-B). In the network-wide scenario (e.g., ISP), our proposed method takes as input the network topology in addition to the traffic values. The proposed compression method uses a sliding window to iterate over all traffic measurements. In other words, it processes the traffic values within the sliding window to output a probability distribution. This distribution is used to actually encode the traffic measurements immediately after the sliding window. The process is repeated until the window has iterated over all values.
The proposed compression method is composed of two main blocks: the predictor and the encoder/decoder. The predictor leverages an NN model to predict the probability distribution of the next time bin after the sliding window (see Section III-A). The encoder/decoder is in charge of actually compressing the sequences of traffic values (see Section III-B). The better the predictions of the NN model, the better the compression ratio of our method. Figure 7 shows an overview of the compressor module, with its inputs and outputs.
### _Predictor_
The predictor is implemented using a Recurrent Neural Network (RNN) in its simplest form. This implementation is used when compressing link-level traffic measurements of a single link. Specifically, the RNN processes the link-level sequence of traffic values and afterwards a Multi-Layer Perceptron (MLP) takes the resulting hidden states and outputs the parameters of a probability distribution (e.g., Normal, Laplace). The probability distribution is then used by the AC to code the real link measurements. Notice that the RNN architecture only exploits temporal correlations.
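A minimal sketch of this single-link predictor in TensorFlow is shown below. Layer sizes, the window length and the Laplace parameterization are illustrative assumptions, not the exact configuration used in the evaluation:

```python
import tensorflow as tf

window = 4   # past time bins fed to the model (assumed)

inputs = tf.keras.Input(shape=(window, 1))            # past traffic of one link
h = tf.keras.layers.GRU(64)(inputs)                   # temporal encoding
h = tf.keras.layers.Dense(64, activation="relu")(h)
loc = tf.keras.layers.Dense(1)(h)                     # mean of the next value
scale = tf.keras.layers.Dense(1, activation="softplus")(h)   # positive scale
model = tf.keras.Model(inputs, [loc, scale])

def laplace_nll(y_true, loc, scale):
    # Negative log-likelihood of a Laplace distribution, used as training loss
    return tf.reduce_mean(tf.math.log(2.0 * scale) + tf.abs(y_true - loc) / scale)
```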
In the network-wide scenario, we implement the predictor using a Spatio-Temporal Graph Neural Network (ST-GNN) [20]. Inspired by the message passing neural network [21], the proposed ST-GNN uses a message passing step for each time bin to exploit the spatial and temporal correlations. This step consists of exchanging information between neighboring links and it is necessary to propagate the link-level
Fig. 6: Initially, the per-link feature vectors are initialized at _t=0_ for all links with the corresponding traffic values and padded with zeros. These vectors are then processed by the MLP, resulting in a new hidden state vector. For each link in the topology, the hidden states of the neighboring links are aggregated (e.g., using a sum) and concatenated with the actual link hidden state. The resulting hidden state is processed by the RNN, which outputs a final hidden state for the current time bin. This state is then used to initialize the same link feature vector in time bin _t=1_. The same process is repeated and it finishes after iterating over all bins within the time window. Finally, a different MLP (denoted R) takes the resulting hidden state from the last time bin and outputs the mean and standard deviation of a probability distribution. This distribution is used by the AC to code the actual traffic values of link _L1_ at time bin t=2.
Fig. 7: Our method takes as input the traffic values from within the sliding window and outputs a tag per link, which represents the per-link compressed sequence. In the network-wide scenario, the network topology is also part of the input data. Suppose the window finishes at time bin \(t\); the predictor then computes the probability distribution of the traffic values at _t+1_. These are used to encode the real values from _t+1_. Afterwards, the sliding window is shifted by one time bin and the process starts again until the end of the sequence.
information across the topology. The ST-GNN takes as input the traffic measurements and the network topology and outputs a per-link probability distribution. The ST-GNN enables to exploit both spatial and temporal correlations naturally present in network traffic traces (see Section II-A).
Figure 6 shows an overview of the internals of the ST-GNN architecture. For simplicity, we only show the steps of predicting the probability distribution in a single link _L1_ using a time window of size 2. Initially, at time bin _t=0_ the links' feature vector are initialized with the traffic values and padded with 0. Then, the feature vector is processed by a MLP and the output vector is sent to all the neighboring links. At the same time, the current link _L1_ receives the hidden states from the neighboring links, aggregates them using a sum and concatenates the actual link traffic value. The resulting hidden state is then processed by the RNN, which outputs a final hidden state for the present time bin. This hidden state is used in the next time bin _t=1_ to initialize the feature vector. The process is repeated for all time bins within the sliding window. In the last bin, a different MLP (denoted R in the figure) is used to process the final hidden states and to output the mean and standard deviation of the traffic value probability distribution (e.g., Normal, Laplace). This probability distribution is used by the AC to compress the traffic value of link _L1_ at time bin _t=2_.
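The sketch below illustrates one message-passing step of such an architecture, assuming `adjacency` lists the indices of the neighboring links (links sharing a vertex) for every link. Layer sizes and the exact feature layout are illustrative; the actual implementation may differ in its details:

```python
import tensorflow as tf

hidden_dim = 64
msg_mlp = tf.keras.layers.Dense(hidden_dim, activation="relu")
update_rnn = tf.keras.layers.GRUCell(hidden_dim)
readout = tf.keras.Sequential([
    tf.keras.layers.Dense(hidden_dim, activation="relu"),
    tf.keras.layers.Dense(2),   # mean and (log-)scale of the link distribution
])

def message_passing_step(link_features, link_states, adjacency):
    # link_features: [n_links, feat] traffic value + mask for the current bin
    # link_states:   [n_links, hidden_dim] hidden states from the previous bin
    msgs = msg_mlp(tf.concat([link_features, link_states], axis=-1))
    # Sum the messages of the neighboring links for every link
    aggregated = tf.stack(
        [tf.reduce_sum(tf.gather(msgs, n), axis=0) for n in adjacency])
    rnn_in = tf.concat([link_features, aggregated], axis=-1)
    new_states, _ = update_rnn(rnn_in, [link_states])
    return new_states

# After the last bin of the window, readout(link_states) yields the per-link
# distribution parameters consumed by the arithmetic coder.
```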
### _Encoder/Decoder_
The encoder/decoder is responsible for effectively compressing/decompressing the actual sequences of traffic values. Specifically, it takes the link-level probability distributions from the predictor and compresses/decompresses the sequences of traffic values. We implement the encoder/decoder using arithmetic coding (AC) [16]. The AC compresses each link-level traffic sequence into a single floating point number (i.e., one decimal number per link). When decoding, the method works symmetrically to the encoder (see Section II-B). The more accurate the NN-based predictor, the higher the compression ratios, as fewer bits will be used for compressing the traffic sequences.
### _Mask_
The ST-GNN model uses a mask to exploit the spatial correlations between links, enabling the model to learn the conditional distribution \(p(x_{l}|\textbf{x}_{t,<l},\textbf{x}_{<t})\) from Equation 3. Specifically, the mask is used to gradually incorporate the already compressed/decompressed link traffic values of a time bin into the prediction. By masking the known link traffic values, our model learns to predict the conditional probability distribution. As an example, consider a topology with 2 links and a time window of size 2. This means we know all the link traffic values for time bins _t=0_ and _t=1_. Then, the ST-GNN uses the known traffic values to predict the probability distributions for both links at _t=2_. From all the distributions, a single one is picked and the corresponding link traffic value is compressed using AC. The link is then marked as known for bin _t=2_ using the mask and the ST-GNN uses the updated link features to predict the probability distribution for the missing link. This prediction is conditioned on the known traffic value, thus exploiting the spatial correlation between links.
During training, the mask of unknown links is created randomly. This means that for each sliding window we associate a random mask over the links to indicate which link traffic values are known. In the compression/decompression phase, the mask starts by marking all the traffic values as unknown. Then, the predictor compresses/decompresses the traffic values in order, incorporating them into the prediction by changing the mask. Figure 8 illustrates how the link-level features are initialized and how the loss is computed for a single link. In particular, for each link and time bin, the input link features are the traffic measurements and the mask.
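A minimal sketch of how the random training masks and the per-link, per-bin input features can be built for one sliding window is shown below; shapes and the masking convention are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_features(window, next_values):
    # window: [w, n_links] past traffic; next_values: [n_links] bin to predict
    w, n_links = window.shape
    known = rng.random(n_links) < rng.random()            # random mask over links
    values = np.vstack([window, np.where(known, next_values, 0.0)])
    mask = np.vstack([np.ones((w, n_links)), known[None, :].astype(float)])
    # Per-link, per-bin feature pair: (traffic value, known/unknown flag)
    features = np.stack([values, mask], axis=-1)          # [w+1, n_links, 2]
    return features, known   # the loss is computed on the masked-out links
```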
### _Compression_
Our proposal uses the link-level traffic values from the sliding window to predict the probability distributions of the masked links in the next time bin. In particular, if the time window finishes at time bin \(t\), it uses the previous \(k\) traffic values up to \(t\) to predict the probability distributions for time bin _t+1_. These distributions are then used by the AC to code the actual values from time bin _t+1_. Figure 9 shows an overview of the compression process for a single link. In the network-wide compression scenario, the predictor is implemented with an ST-GNN that takes as additional input features the neighboring links' hidden states. When compressing a single link, the predictor is implemented with an RNN that
has only information from the past traffic values independently for each link.
Algorithm 1 shows in detail how the compression procedure works. To simplify the pseudocode, we compress the traffic values in the last time bin from a sliding window. The algorithm takes as input the ordered sequence of time windows and starts iterating over them (line 3). The first traffic values from the first time window are compressed using uniform probabilities (line 5 to line 10). Then, the algorithm encodes the traffic values from the last position of the time window (line 11). To do this, a loop iterates over each link, compressing one value at a time (line 13). For each link, the algorithm leverages the NN-based model to compute the quantized probability distributions (lines 14 and 15). The model can be implemented with a RNN or a ST-GNN, depending on the compression scenario. In the network-wide scenario, the algorithm uses a heuristic to determine the link order (line 17), incorporating them into the prediction. After encoding the selected link, the link-level features and the mask are updated in line 19. The process starts again and is repeated until all links have been encoded.
The heuristic to select the link in the network-wide scenario is based on an increasing order of standard deviation (line 17). We experimentally observed this heuristic helps the ST-GNN decrease the uncertainty in the predictions. In other words, leaving the links where the GNN model is more uncertain to the end helps the GNN make better predictions. This is because the ST-GNN will have more links with known traffic values, reducing the model's uncertainty over the links with higher standard deviation.
The compression process results in a tag for each link, encoding the link's traffic sequence. This tag is stored in a single file on disk. When decompressing, the tag is used by the decoder to recover the original traffic sequence without losing information. Notice that the compression process is performed in a streaming fashion, differing from traditional methods like GZIP that are static. In other words, our method can compress the traffic values as they come, without the need of storing the measurements in a buffer before compressing.
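Since the full pseudocode of Algorithm 1 is not reproduced here, the sketch below outlines the per-bin encoding loop it describes, including the standard-deviation ordering heuristic. The `predictor` and the arithmetic coder `ac` are assumed helpers (e.g., the ST-GNN of Section III-A and an AC implementation); this is a simplified illustration, not the exact algorithm:

```python
import numpy as np

def encode_time_bin(window, next_values, predictor, ac, topology):
    # window: [w, n_links] known traffic; next_values: [n_links] bin to encode
    n_links = next_values.shape[0]
    known = np.zeros(n_links, dtype=bool)     # mask of already-encoded links
    revealed = np.zeros(n_links)              # their (now known) traffic values
    for _ in range(n_links):
        # Conditional quantized distributions for the still-unknown links
        loc, scale = predictor(window, revealed, known, topology)
        # Heuristic: encode first the link with the smallest predicted std
        candidates = np.where(~known)[0]
        link = candidates[np.argmin(scale[candidates])]
        ac.encode(next_values[link], loc[link], scale[link])
        # Mark the link as known so the next prediction is conditioned on it
        known[link] = True
        revealed[link] = next_values[link]
```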
### _Decompression_
For decompression, the model reads the tag from the compressed files and uses the same NN-based model to recover the original link sequences bin by bin. The first elements within the first sliding window are decompressed using uniform probability distributions. Then, when the entire sliding window is decompressed, the predictor uses the recovered values to compute the probability distributions for the links. Similarly, in the network-wide compression scenario the algorithm leverages the same heuristic to select the order in which to decode the traffic values for each time bin. Figure 10 shows a general overview of the decompression phase. Notice that for each time bin, the predictor receives the same input information as in the compression phase. The decoder also receives the same information but in this case its operations are inverted to decode (see Section II-B). The algorithm to decompress is
Fig. 8: Graphical example showing how link features are initialized for each time bin. Consider a time window of size 2 that finishes at time bin \(t\). At time bin _t+1_, we have already compressed 2 traffic values, from links _L1_ and _L2_, and we want to compress the value in _L3_. The feature vectors assigned to each link for each time bin are initialized with the known values and the mask. This information is processed by the predictor, which outputs a quantized probability distribution for the missing link value at _t+1_. The loss is computed for the same link and backpropagated all the way to the link input features.
Fig. 10: Overview of the decompression process for a single link. The process is similar to the compression, but now the arithmetic coding uses the decoder to recover the original sequence of symbols. Notice that for each time bin the predictor receives the same input information as in the compression phase.
Fig. 9: Overview of the compression process for a single link. The predictor processes the information from the sliding window. In the network-wide scenario, the predictor incorporates information from the links whose traffic is known at _t+1_ to predict the conditional probability distribution \(p(x_{l}|\boldsymbol{x}_{t,<l},\boldsymbol{x}_{<t})\). The output is a single decimal value that encodes the link’s traffic sequence.
the same as Algorithm 1 but replacing the coding operations from _encode(\(\cdot\))_ by their inverse decoding operations.
## IV Experimental Results
### _Methodology_
We evaluated the compression performance of our method with respect to GZIP, the _de facto_ compression method of network traffic traces. In the first experiment, we generated synthetic datasets with different intensities of spatial and temporal correlations on the NSFNet topology [22] with 42 directional links. We synthetically created signals with different correlations (see Section IV-C) and extracted 1,000 samples using a window size of 5 time bins, including the labels. In other words, the NN-based models leverage the link-level traffic values of 4 time bins to compress the values within the 5th bin. During training, we created 50 different random masks for each time window. We experimentally observed that the higher the number of different masks, the better the accuracy of the ST-GNN model, but at the cost of increased training time. This is because increasing the number of different masks enriches the training process, as there is more data to train the NN on.
In the second experiment, we evaluated the compression capabilities of the NN-based models on three real-world datasets. The first two datasets are the Abilene and Geant datasets from [17]. The Abilene dataset corresponds to a topology with 30 directional links and a total of 41,741 samples after data cleaning and using a time window of size 5. This dataset contains the link-level traffic measurements (in bytes) during 6 months in intervals of 5 minutes. The Geant dataset corresponds to a topology of 72 directional links and a total of 6,063 samples after data cleaning for the same window size. The third dataset is more recent and it was obtained from in-house link-level traffic measurements in a campus network. The dataset contains 12 months of per-link network traffic measurements in intervals of 5 minutes (from December 2020 until December of 2021). The topology contains 16 directional links and a total of 102,799 samples with window size of 5.
All the experiments were performed on _off-the-shelf_ hardware. In particular, we used a single machine with an AMD Ryzen 9 5950X 16-Core Processor with one GeForce GTX 1080 Ti GPU for training the models. We trained all NN-based models using 70% of the samples for training and 30% for evaluation. For each time bin, we created 40, 40 and 20 unique random masks for the Abilene, Geant and Campus network datasets, respectively. The loss function used was the negative log-likelihood of a Laplace distribution. To work with a finite set of probabilities, we quantized the Laplace distribution. After training, we chose the model with the lowest evaluation error and compressed the entire datasets.
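As a concrete illustration of this loss, the sketch below evaluates the negative log-likelihood of a Laplace distribution quantized into bins; the bin width and the clipping constant are illustrative choices, not values taken from the paper.

```python
import numpy as np

def laplace_cdf(x, loc, scale):
    # Closed-form CDF of the Laplace distribution.
    z = (x - loc) / scale
    return np.where(z < 0, 0.5 * np.exp(z), 1.0 - 0.5 * np.exp(-z))

def quantized_laplace_nll(target, loc, scale, bin_width=1.0, eps=1e-12):
    """NLL of `target` under a Laplace distribution quantized into bins of
    width `bin_width`; the probability of a bin is the CDF difference at
    its edges (illustrative sketch)."""
    upper = laplace_cdf(target + bin_width / 2.0, loc, scale)
    lower = laplace_cdf(target - bin_width / 2.0, loc, scale)
    prob = np.clip(upper - lower, eps, 1.0)   # finite, strictly positive probabilities
    return -np.mean(np.log(prob))

# Example: the predictor outputs (loc, scale) for a masked link value.
loss = quantized_laplace_nll(np.array([120.0]), np.array([118.5]), np.array([4.0]))
```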
### _Implementation_
We implemented the ST-GNN and the RNN using Tensorflow 2.8 [23]. The RNN was implemented using the Gated Recurrent Unit architecture [24]. The scripts to pre-process the datasets were written in Python 3.8 and we used the NetworkX [25] and NumPy [26] libraries for graph-related operations. We leveraged an open-source implementation of the arithmetic coding for Python [27] to implement the encoder. In the synthetic experiment, we used the Statsmodels Python library [28] to implement the auto-regressive model.
### _Synthetic data generation_
In our compression scenario, we assumed the network topology is an input to the model (only in a network-wide scenario). The only remaining variables that have an impact on the link-level traffic measurements are the source-destination flows. There is one flow for each pair of source-destination nodes within a network topology. All flows follow the shortest path routing policy to reach the destination node. Consequently, each link will be traversed by a subset of all flows.
We assigned to each flow a periodic signal derived from a _sine_ wave. This wave is scaled by 40 to obtain values in the order of hundreds when aggregated on each link, and it is shifted to contain only positive values. In addition, we add random noise to the signal, randomly shift its phase so that it starts at different values, and randomly change its periodicity. If all flows originate from the same signal, the links are highly correlated in space, as their values increase/decrease proportionally for each time bin. In our experiment, we consider 4 degrees of spatial correlation: 0%, 30%, 60% and 100%, indicating the percentage of flows that have the same signal characteristics.
Temporal correlations are present in stationary time series with periodic patterns. The _sinusoidal_ signal naturally meets these requirements, resulting in a time series with high temporal correlation. To decrease the temporal correlation, we added extra noise to the flow signal generated by auto-regressive (AR) models. To control the intensity of the temporal correlation in the experiment, we varied the percentage of flows to which this noise is added. Specifically, we consider 4 degrees of temporal correlation: 0%, 30%, 60% and 100%, indicating the percentage of flows that conserve the original signal.
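A minimal sketch of how one flow signal could be generated along these lines is shown below; the noise amplitudes and the AR(1) coefficient are illustrative assumptions, not the exact values used to build the datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_signal(n_bins, shared=True, add_ar_noise=False, base_period=50, phi=0.8):
    """Sketch of a synthetic flow: a scaled, shifted sine wave with random
    phase/periodicity, small random noise and optional AR(1) noise
    (illustrative amplitudes and AR coefficient)."""
    t = np.arange(n_bins)
    if shared:                       # flows sharing the same signal characteristics
        period, phase = base_period, 0.0
    else:
        period = base_period * rng.uniform(0.5, 1.5)
        phase = rng.uniform(0.0, 2.0 * np.pi)
    signal = 40.0 * (np.sin(2.0 * np.pi * t / period + phase) + 1.0)
    signal += rng.normal(0.0, 1.0, n_bins)            # small random noise
    if add_ar_noise:                                  # lowers temporal correlation
        ar = np.zeros(n_bins)
        for i in range(1, n_bins):
            ar[i] = phi * ar[i - 1] + rng.normal(0.0, 5.0)
        signal += ar
    return np.clip(signal, 0.0, None)

# A link-level series is the sum of the flows routed through that link.
link_traffic = flow_signal(1000) + flow_signal(1000, shared=False)
```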
In Figure 11 (left) we show how the spatial correlation increases in our synthetic datasets. Specifically, we grouped all the datasets by the intensity of spatial correlation. Then, we computed the Pearson correlation for each pair of links
Fig. 11: Boxplots of the Pearson correlation in the synthetic datasets (left). The higher the spatial correlation, the higher the Pearson correlation coefficients between links. On the right, we show that the higher the temporal correlation, the higher the percentage of stationary link-level time series in the synthetic datasets.
following Equation II-A. The results indicate that the higher the intensity of the spatial correlation (x-axis), the higher the Pearson correlation (y-axis). Figure 11 (right) shows how the temporal correlation present in the synthetic data increases with the temporal correlation coefficient (x-axis). Similarly, we grouped all datasets by temporal intensity. Then, we performed the Augmented Dickey-Fuller (ADF) test [29] for each link-level traffic sequence, counting the number of sequences that are stationary (i.e., the mean and variance of the time series do not change in time). In particular, if the ADF test returned a _p-value_ smaller than or equal to 0.05, we rejected the null hypothesis (H0) and considered the time series to be stationary.
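Both checks map directly onto standard library calls; the sketch below, assuming `link_traffic` is an array of shape (n_links, n_time_bins), uses NumPy for the pairwise Pearson coefficients and the Statsmodels `adfuller` test for stationarity.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def pairwise_pearson(link_traffic):
    """Pearson correlation coefficient for every pair of links.
    `link_traffic` has shape (n_links, n_time_bins)."""
    corr = np.corrcoef(link_traffic)
    iu = np.triu_indices_from(corr, k=1)     # one coefficient per link pair
    return corr[iu]

def fraction_stationary(link_traffic, alpha=0.05):
    """Fraction of link-level series for which the ADF test rejects the
    unit-root null hypothesis (p-value <= alpha), i.e., stationary series."""
    pvalues = [adfuller(series)[1] for series in link_traffic]
    return float(np.mean(np.array(pvalues) <= alpha))
```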
### _Evaluation on synthetic data_
In total, there are 16 experiments that correspond to all possible combinations of spatial and temporal correlation intensities. For each of these experiments, we consider both network-wide and independent link-level compression scenarios (see Section III). In total, we trained 32 models from which 16 of them were based on the ST-GNN model and the other 16 on RNNs.
Figure 12 (left) shows the percentages of compression ratio improvement for the ST-GNN with respect to the GZIP baseline in the network-wide scenario. The results indicate that the ST-GNN outperforms GZIP by a large margin in all experiments. Notice that the scenario with maximum spatial and temporal correlation contains a small number of link-level traffic values that are repeated frequently. GZIP uses Huffman coding [18] as the underlying algorithm, which can effectively exploit the repeated traffic values. This explains why the compression ratio improvement is the lowest for this particular scenario. In addition, the figure indicates the expected performance of the ST-GNN when evaluated in real-world scenarios. In particular, the intensity of the temporal and spatial correlations of a real-world dataset could point to the expected performance with respect to GZIP.
To showcase the capabilities of our method to exploit spatial and temporal correlations simultaneously, we compare it with the RNN-based model. Recall that the RNN compresses one link only. Therefore, we apply the RNN model for each link in the topology, exploiting temporal correlations solely. For the sake of fairness, we maintained the same hidden state sizes in both ST-GNN and RNN models.
Figure 12 (right) shows the performance improvement of the ST-GNN models with respect to the RNN-based models. The ST-GNN model outperforms the RNN in all correlation scenarios, but it has outstanding performance in scenarios with high spatial correlation. The results indicate that our model has the flexibility to exploit both spatial and temporal correlations. Notice that in the case of 0% of spatial correlation and maximum temporal correlation, the improvement of the GNN model is \(\approx\)1%, indicating that they perform similarly when there is high temporal correlation. This is expected as the ST-GNN model also incorporates a RNN (see Figure 6).
### _Compressing real-world data_
In this experiment we evaluated the compression performance of our method on real-world link-level traffic measurements. To do this, we compressed three real-world datasets (see Section IV-A). We compared the compression performance against three baselines: Static AC, Adaptive AC and GZIP. The Static AC and Adaptive AC baselines are similar to our method but the probability distribution is computed without using ML. In particular, Static AC iterates over the entire dataset and computes the probability distribution for the link-level traffic measurements. Then, it compresses/decompresses the entire dataset using the same static distribution for each AC coding step. The Adaptive AC baseline computes the new distribution using the values within the sliding window. These baselines are intended to show the benefit of using ML to implement the predictor model. Finally, we apply GZIP to compress the entire dataset.
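The two non-ML baselines differ only in which values feed the empirical distribution; a minimal sketch is given below, where the additive smoothing is our own assumption to keep every symbol probability nonzero.

```python
import numpy as np

def empirical_distribution(values, n_symbols):
    """Empirical symbol distribution with additive smoothing (assumed)."""
    counts = np.bincount(values, minlength=n_symbols).astype(float)
    return (counts + 1.0) / (counts.sum() + n_symbols)

# Static AC: one distribution estimated once over the entire dataset.
# static_probs = empirical_distribution(all_quantized_values, n_symbols)

# Adaptive AC: the distribution is re-estimated from the sliding window
# before coding each new time bin.
# adaptive_probs = empirical_distribution(window_quantized_values, n_symbols)
```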
Figure 13 shows the compression ratios for the three real-world datasets in the link-level scenario. In particular, each
Fig. 12: Compression ratio improvement for the ST-GNN model with respect to GZIP (left) and RNN (right). Notice that in the scenario with higher spatial and temporal correlations there are a few traffic values that are highly repeated in the dataset, which GZIP’s underlying algorithm exploits effectively.
Fig. 13: Compression ratios for the single-link scenario.
baseline was applied to compress each link individually and we compared the resulting compressed links with their original file size (i.e., one file per link). The results indicate a remarkable performance of our compression method for all datasets, outperforming GZIP by a large margin. In addition, the figure indicates a clear advantage of using an adaptive ML-based predictor to exploit temporal correlations present within the sliding window.
Figure 14 shows the experimental results of compressing the same datasets in the network-wide scenario. Recall that our compression method is implemented using the ST-GNN, which leverages the traffic values from all links to dynamically compute the probability distributions. In this scenario, Static AC computes the probability distribution using the entire dataset (i.e., including all links) and Adaptive AC updates the distribution by including the values from all links within the sliding window.
The experimental results indicate that our proposed method achieves the highest compression ratios in all three datasets. Particularly, it outperforms Static AC by \(\approx\)47%, \(\approx\)41% and \(\approx\)29% for the Geant, Campus network and Abilene datasets respectively. In addition, the performance improvement with respect to GZIP is of \(\approx\)62%, \(\approx\)53% and \(\approx\)50% for the same datasets respectively. These results showcase the benefit of leveraging ML to exploit spatial and temporal correlations for traffic compression.
### _Cost_
In this section we discuss the cost of using our method for online compression. Specifically, our method compresses the traffic measurements in a streaming fashion (see Section III). This means that it can compress the link traffic measurements as they come from the network monitoring platform. Conversely, GZIP needs to wait to have the entire dataset to apply the compression algorithm. Alternatively, GZIP could compress all links at once on each time bin independently. We did this and the experimental results indicate that GZIP achieves a compression ratio of \(\approx\)0.94, \(\approx\)0.4 and \(\approx\)0.54 for the Geant, Campus network and Abilene datasets. In other words, the compressed data occupies more space than the original data, contrasting with the results from Figure 14.
We computed the average cost of compressing one time bin using the ST-GNN and RNN models to evaluate their suitability for real-world online traffic compression. In addition, we computed the size of storing the model's weights into a file. Table I shows the average cost of compressing one time bin for all real-world datasets, indicating that our method is capable of online compression. In other words, when it receives the aggregated traffic of, for example, the last 5 minutes, our method can effectively compress the values in the order of seconds. Finally, Table I also shows how much memory is required to store the trained weights of the neural network. The results indicate that the model is lightweight and achieves high compression ratios with a negligible model size.
## V Related Work
The most popular existing method for network traffic compression is GZIP. However, the networking community has investigated different approaches for compressing network traffic. The work of [30] proposes a lossy method to approximate the real network traffic by capturing the most relevant traffic features. In [31] they propose to exploit the traffic redundancies at the packet level to reduce the transmitted traffic. In [32] they propose an architecture to implement the LZ77 compression algorithm [33] on a FPGA. The work of [34] describes a solution for on-the-fly storage, indexing and querying of network flow data. A more recent work leverages the P4 language [35] and generalized deduplication to implement a solution that operates at line-speed.
The compression method presented in this paper has similarities with the problem of traffic prediction. The works of [36, 37] propose the use of RNNs to predict the network traffic. A more recent work proposes to use simulated annealing and an Autoregressive Integrated Moving Average model [38] to predict the network traffic. In [39] they use a graph-based ML algorithm to predict the link-level traffic loads in backbone networks. The work from [14] leverages inter-flow correlations and intra-flow dependencies to predict the traffic matrix using RNNs. Finally, the work from [40] proposes to use a spatio-temporal convolutional neural network with attention mechanisms to predict wireless traffic.
Despite the similarities with traffic prediction, it is important to remark that traffic compression has some particularities. First, our compression method considers only the probability distribution for each link, instead of the exact traffic values. This is due to the requirements of the arithmetic coding part. Second, we only work with the values from the next time bin, whereas in traffic prediction the work horizon is typically larger (e.g., predict the traffic for the next couple of hours). Finally, when decompressing information we do not have access to the first elements of the time-window, forcing the use of simple methods that do not depend on the compressed information (e.g., uniform probability).
Fig. 14: Compression ratios for the network-wide scenarios. Notice that in this experiment we are compressing the entire dataset.
## VI Conclusion
Existing methods for network traffic compression are generic, resulting in low compression ratios. This limitation becomes even more critical when compressing traffic in an online scenario. In our work, we proposed the use of ML and arithmetic coding to compress link-level traffic measurements. Specifically, we presented a method that exploits the spatial and temporal correlations intrinsic in the traffic measurements. The experimental results show that it can effectively compress real-world traffic traces, with an improvement of \(\geq\)50% in compression ratio for real-world datasets with respect to GZIP.
## VII Acknowledgment
This publication is part of the Spanish I+D+i project TRAINER-A (ref. PID2020-118011GB-C21), funded by MCIN/ AEI/10.13039/501100011033. This work is also partially funded by the Catalan Institution for Research and Advanced Studies (ICREA) and the Secretariat for Universities and Research of the Ministry of Business and Knowledge of the Government of Catalonia and the European Social Fund. This work was also supported by the Polish Ministry of Science and Higher Education with the subvention funds of the Faculty of Computer Science, Electronics and Telecommunications of AGH University and by the PL-Grid Infrastructure.
|
2307.05110 | Gate voltage induced injection and shift currents in AA- and AB-stacked
bilayer graphene | Generating photogalvanic effects in centrosymmetric materials can provide new
opportunities for developing passive photodetectors and energy harvesting
devices. In this work, we investigate the photogalvanic effects in
centrosymmetric two-dimensional materials, AA- and AB-stacked bilayer graphene,
by applying an external gate voltage to break the symmetry. Using a
tight-binding model to describe the electronic states, the injection
coefficients for circular photogalvanic effects and shift conductivities for
linear photogalvanic effects are calculated for both materials with light
wavelengths ranging from THz to visible. We find that gate voltage induced
photogalvanic effects can be very significant for AB-stacked bilayer graphene,
with generating a maximal dc current in the order of mA for a 1 $\mu$m wide
sample illuminated by a light intensity of 0.1 GW/cm$^2$, which is determined
by the optical transition around the band gap and van Hove singularity points.
Although such effects in AA-stacked bilayer graphene are about two orders of
magnitude smaller than those in AB-stacked bilayer graphene, the spectrum is
interestingly limited in a very narrow photon energy window, which is
associated with the interlayer coupling strength. A detailed analysis of the
light polarization dependence is also performed. The gate voltage and chemical
potential can be used to effectively control the photogalvanic effects. | Ze Zheng, Kainan Chang, Jin Luo Cheng | 2023-07-11T08:37:12Z | http://arxiv.org/abs/2307.05110v1 | # Gate voltage induced injection and shift currents in AA- and AB-stacked bilayer graphene
###### Abstract
Generating photogalvanic effects in centrosymmetric materials can provide new opportunities for developing passive photodetectors and energy harvesting devices. In this work, we investigate the photogalvanic effects in centrosymmetric two-dimensional materials, AA- and AB-stacked bilayer graphene, by applying an external gate voltage to break the symmetry. Using a tight-binding model to describe the electronic states, the injection coefficients for circular photogalvanic effects and shift conductivities for linear photogalvanic effects are calculated for both materials with light wavelengths ranging from THz to visible. We find that gate voltage induced photogalvanic effects can be very significant for AB-stacked bilayer graphene, with generating a maximal dc current in the order of mA for a 1 \(\mu\)m wide sample illuminated by a light intensity of 0.1 GW/cm\({}^{2}\), which is determined by the optical transition around the band gap and van Hove singularity points. Although such effects in AA-stacked bilayer graphene are about two orders of magnitude smaller than those in AB-stacked bilayer graphene, the spectrum is interestingly limited in a very narrow photon energy window, which is associated with the interlayer coupling strength. A detailed analysis of the light polarization dependence is also performed. The gate voltage and chemical potential can be used to effectively control the photogalvanic effects.
Introduction
Photogalvanic effects are nonlinear optical responses that generate direct currents in homogeneous materials, and such a passive process is considered as a direct and powerful photoelectric conversion method [1; 2; 3]. The widely discussed photogalvanic effects can be induced by the one-color injection current and shift current, which are second order nonlinear optical processes occurring in noncentrosymmetric materials, or the two-color coherent current injection processes, which are third (for "1+2" process) [4] or fifth (for "2+3" process) [5] order nonlinear optical processes and are not sensitive to the inversion symmetry of materials. According to the response to the light polarization, second order photogalvanic effects are also phenomenologically divided into circularly polarized photogalvanic effect and linearly polarized photogalvanic effect, where the latter is light phase insensitive and can be used for solar energy harvest without forming p-n junctions to surpass the Shockley-Queisser limit [6; 7; 8]. One of the research topics in this field is to find materials with significant photogalvanic effects at a specific frequency range, and several studies have been conducted on various new materials, including 2D materials [9; 10; 11; 12; 13], Dirac or Weyl semimetals [1; 14; 15], ferroelectric materials [16; 17; 18; 19], and so on.
As the first two-dimensional material, graphene is a potential candidate for realizing new functionality in optoelectronic devices due to its superior optical and electronic properties, which exceed those of many traditional bulk materials. However, because of its centrosymmetric crystal structure, one-color injection and shift currents vanish in many few-layer graphene stacks as well as their nanostructures, while two-color coherent control has been well studied in both theory [20; 21; 4; 22] and experiment [23; 24]. It is still meaningful to generate one-color injection and shift currents in centrosymmetric graphene-based structures, in order to utilize their extraordinary physical properties. The generation of a second order response can be realized by forming an asymmetric interface or edge [25], applying an external electric field [26], forming surface curvature [27], considering the spatial variation of the light field [28], and stacking graphene layers into an asymmetric structure [29]. Wei _et al._[9] studied the gate field induced injection and shift currents in zigzag graphene nanoribbons, and found that the subband and edge states determine the generated currents, with an effective modulation of their amplitudes by the ribbon width and the static field strength. Xiong _et al._[30] investigated the light polarization dependence of the in-plane shift current in an AB-stacked
bilayer graphene (AB-BG) under an applied gate voltage, and their results clearly illustrated a sizeable photocurrent at a given light frequency; however, neither the spectra of the shift conductivity nor those of the injection current were presented. By stacking two layers of monolayer graphene with a relative rotation to form a twisted bilayer graphene, a large shift current can be produced due to a huge density of states when the flat band is formed at magic angles [12; 13; 31]. Surprisingly, whether the gate voltage can generate a photogalvanic effect in AA-stacked bilayer graphene (AA-BG) is still not clear.
In this paper, we systematically study the spectra of the injection coefficients and shift conductivities of AA-BG and AB-BG under an applied gate voltage that breaks the inversion symmetry, as well as their dependence on the gate voltage and chemical potential. Their electronic states are described by a widely adopted tight-binding model formed by the carbon \(2p_{z}\) orbitals [26; 32], and the expressions for the injection coefficient and shift conductivity are employed from Ref. [33]. Our results confirm the feasibility of generating photogalvanic effects in AA-BG and AB-BG. In particular, the response of AA-BG is confined to a very narrow spectral region, while a maximal current in the order of mA can be generated in AB-BG for a 1 \(\mu\)m wide sample at a light intensity of 0.1 GW/cm\({}^{2}\).
This paper is organized as follows. In Sec. II we introduce the tight-binding models for the AA-BG and AB-BG under applying a gate voltage, and give the expressions for the injection coefficient and shift conductivity. In Sec. III we present the spectra of injection coefficient and shift conductivity for AA-BG and AB-BG, and discuss the effects of the gate voltage and chemical potential. We conclude in Sec. IV.
## II Models
### Hamiltonian
We consider the tight-binding Hamiltonian for the AA-BG and AB-BG, whose crystal structures are illustrated in Fig. 1 (a) and (b), respectively. These two structures have the same primitive lattice vectors \(\mathbf{a}_{1}=a_{0}\left(\frac{1}{2}\hat{\mathbf{x}}+\frac{\sqrt{3}}{2}\hat{\mathbf{y}}\right)\) and \(\mathbf{a}_{2}=a_{0}\left(-\frac{1}{2}\hat{\mathbf{x}}+\frac{\sqrt{3}}{2}\hat{\mathbf{y}}\right)\) with the lattice constant \(a_{0}=2.46\) A. The atomic positions in the unit cell are taken as \(\mathbf{\tau}_{A}=\mathbf{0}\), \(\mathbf{\tau}_{B}=(\mathbf{a}_{1}+\mathbf{a}_{2})/3\), \(\mathbf{\tau}_{A^{\prime}}=c\hat{\mathbf{z}}\), and \(\mathbf{\tau}_{B^{\prime}}=\mathbf{\tau}_{B}+c\hat{\mathbf{z}}\) for AA-BG, and \(\mathbf{\tau}_{A}=\mathbf{0}\), \(\mathbf{\tau}_{B}=(\mathbf{a}_{1}+\mathbf{a}_{2})/3\), \(\mathbf{\tau}_{A^{\prime}}=\mathbf{\tau}_{B}+c\hat{\mathbf{z}}\), and \(\mathbf{\tau}_{B^{\prime}}=2\mathbf{\tau}_{B}+c\hat{\mathbf{z}}\) for AB-BG, where \(c=3.35\) A is the interlayer distance.
The primitive reciprocal lattice vectors are \(\mathbf{b}_{1}=\frac{2\pi}{a_{0}}\left(\hat{\mathbf{x}}+\frac{1}{\sqrt{3}}\hat{\mathbf{y}}\right)\) and \(\mathbf{b}_{2}=\frac{2\pi}{a_{0}}\left(-\hat{\mathbf{x}}+\frac{1}{\sqrt{3}}\hat{\mathbf{y}}\right)\). The electronic states are described by a tight-binding model employing carbon \(2p_{z}\) orbitals. The unperturbed Hamiltonian [32] for AA-BG is
\[H_{\mathbf{k}}^{\rm AA}=\left(\begin{array}{cccc}-\Delta&\gamma_{0}g_{\mathbf{k}}& \gamma_{1}&\gamma_{3}g_{\mathbf{k}}\\ \gamma_{0}g_{\mathbf{k}}^{*}&-\Delta&\gamma_{3}g_{\mathbf{k}}^{*}&\gamma_{1}\\ \gamma_{1}&\gamma_{3}g_{\mathbf{k}}&\Delta&\gamma_{0}g_{\mathbf{k}}\\ \gamma_{3}g_{\mathbf{k}}^{*}&\gamma_{1}&\gamma_{0}g_{\mathbf{k}}^{*}&\Delta\end{array} \right)\,. \tag{1}\]
Here \(\mathbf{k}\) is the electron wavevector, and \(g_{\mathbf{k}}=1+e^{-i\mathbf{k}\cdot\mathbf{a}_{1}}+e^{-i\mathbf{k}\cdot\mathbf{a}_{2}}\). The hopping parameters are illustrated in Fig. 1 (a) with \(\gamma_{0}=2.569\) eV, \(\gamma_{1}=0.361\) eV, and \(\gamma_{3}=-0.032\) eV. The on-site energies \(\pm\Delta\) are induced by a gate voltage. The Hamiltonian for AB-BG is given from Ref. [26] as
\[H_{\mathbf{k}}^{\rm AB}=\left(\begin{array}{cccc}-\Delta-\frac{\Delta^{\prime}} {2}&\gamma_{0}^{\prime}g_{\mathbf{k}}&\gamma_{4}^{\prime}g_{\mathbf{k}}&\gamma_{3}^{ \prime}g_{\mathbf{k}}^{*}\\ \gamma_{0}^{\prime}g_{\mathbf{k}}^{*}&-\Delta+\frac{\Delta^{\prime}}{2}&\gamma_{1 }^{\prime}&\gamma_{4}^{\prime}g_{\mathbf{k}}\\ \gamma_{4}^{\prime}g_{\mathbf{k}}^{*}&\gamma_{1}^{\prime}&\Delta+\frac{\Delta^{ \prime}}{2}&\gamma_{0}^{\prime}g_{\mathbf{k}}\\ \gamma_{3}^{\prime}g_{\mathbf{k}}&\gamma_{4}^{\prime}g_{\mathbf{k}}^{*}&\gamma_{0}^{ \prime}g_{\mathbf{k}}^{*}&\Delta-\frac{\Delta^{\prime}}{2}\end{array}\right)\,, \tag{2}\]
where the hopping parameters (see Fig. 1 (b)) are \(\gamma_{0}^{\prime}=-3.16\) eV, \(\gamma_{1}^{\prime}=0.381\) eV, \(\gamma_{3}^{\prime}=-0.38\) eV, and \(\gamma_{4}^{\prime}=0.14\) eV. The on-site potential difference \(\Delta^{\prime}=0.022\) eV is induced by the asymmetric environment of A, B atoms in the crystal structure.
Figure 1: Crystal structures and tight-binding hopping parameters for (a) AA-BG and (b) AB-BG.
The eigenstates \(C_{n\mathbf{k}}\) and eigenenergies \(\epsilon_{n\mathbf{k}}\) at the \(n\)th band are obtained by diagonalizing the Hamiltonian through
\[H_{\mathbf{k}}C_{n\mathbf{k}}=\epsilon_{n\mathbf{k}}C_{n\mathbf{k}}\,. \tag{3}\]
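To make this step concrete, the following minimal NumPy sketch transcribes the AA-BG Hamiltonian of Eq. (1) and diagonalizes it; the chosen \(\mathbf{k}\) point and gate potential are only illustrative.

```python
import numpy as np

a0 = 2.46                                   # lattice constant (angstrom)
a1 = a0 * np.array([0.5, np.sqrt(3) / 2])   # primitive lattice vectors
a2 = a0 * np.array([-0.5, np.sqrt(3) / 2])
g0, g1, g3 = 2.569, 0.361, -0.032           # hopping parameters (eV)

def h_aa(k, delta):
    """Tight-binding Hamiltonian of AA-BG at wavevector k, Eq. (1)."""
    g = 1.0 + np.exp(-1j * k @ a1) + np.exp(-1j * k @ a2)
    gc = np.conj(g)
    return np.array([[-delta,  g0 * g,  g1,      g3 * g],
                     [g0 * gc, -delta,  g3 * gc, g1],
                     [g1,      g3 * g,  delta,   g0 * g],
                     [g3 * gc, g1,      g0 * gc, delta]])

# Band energies and eigenstates at one (illustrative) k point, Delta = 0.4 eV.
k = np.array([0.1, 0.2]) * 2 * np.pi / a0
energies, states = np.linalg.eigh(h_aa(k, delta=0.4))
```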
The calculation of the optical responses involves the position operator \(\widetilde{\mathbf{r}}_{\mathbf{k}}\) and velocity operator \(\widetilde{\mathbf{v}}_{\mathbf{k}}\), which are
\[\widetilde{\mathbf{r}}_{\mathbf{k}}=i\mathbf{\nabla}_{\mathbf{k}}+\left(\begin{array}{cccc} \mathbf{\tau}_{A}&0&0&0\\ 0&\mathbf{\tau}_{B}&0&0\\ 0&0&\mathbf{\tau}_{A^{\prime}}&0\\ 0&0&0&\mathbf{\tau}_{B^{\prime}}\end{array}\right)\,,\quad\widetilde{\mathbf{v}}_{\bm {k}}=\frac{1}{i\hbar}[\widetilde{\mathbf{r}}_{\mathbf{k}},H_{\mathbf{k}}]\,, \tag{4}\]
respectively. The matrix elements of the position operator give the Berry connections \(\mathbf{\xi}_{nm\mathbf{k}}\) by
\[\mathbf{\xi}_{nm\mathbf{k}}=C_{n\mathbf{k}}^{\dagger}\widetilde{\mathbf{r}}_{\mathbf{k}}C_{m\mathbf{ k}}\,, \tag{5}\]
and those of the velocity operator are calculated as \(\mathbf{v}_{nm\mathbf{k}}=C_{n\mathbf{k}}^{\dagger}\widetilde{\mathbf{v}}_{\mathbf{k}}C_{m\mathbf{k}}\). Due to the derivative with respect to the wavevector \(\mathbf{k}\), a direct calculation of \(\mathbf{\xi}_{nm\mathbf{k}}\) from Eq. (5) requires that the wavefunction \(C_{n\mathbf{k}}\) is a smooth function of \(\mathbf{k}\). However, this becomes quite difficult in numerical calculations because of the arbitrary phase of a numerical wavefunction. In practice, the off-diagonal terms of \(\mathbf{\xi}_{nm\mathbf{k}}\) can also be calculated from the velocity operator as
\[\mathbf{r}_{nm\mathbf{k}}=\begin{cases}\mathbf{\xi}_{nm\mathbf{k}}=\frac{\mathbf{v}_{nm\mathbf{k}}}{i \omega_{nm\mathbf{k}}}&(n\neq m)\\ 0&(n=m)\end{cases}\,, \tag{6}\]
with \(\hbar\omega_{nm\mathbf{k}}=\epsilon_{n\mathbf{k}}-\epsilon_{m\mathbf{k}}\). The diagonal terms \(\xi^{a}_{nn\mathbf{k}}\) usually appear in the generalized derivative of \((r^{c}_{\mathbf{k}})_{;nm\mathbf{k}^{a}}=\frac{\partial r^{c}_{nm\mathbf{k}}}{\partial k^ {a}}-i(\xi^{a}_{nn\mathbf{k}}-\xi^{a}_{mm\mathbf{k}})r^{c}_{nm\mathbf{k}}\), which is calculated alternatively [9] by
\[(r^{c}_{\mathbf{k}})_{;nm\mathbf{k}^{a}}= \frac{-ir^{c}_{nm\mathbf{k}}\mathcal{V}^{a}_{mn\mathbf{k}}+\hbar M^{ca}_{ nm\mathbf{k}}+i[r^{a}_{\mathbf{k}},v^{c}_{\mathbf{k}}]_{nm}}{i\omega_{nm\mathbf{k}}}\,, \tag{7}\]
with \(\mathcal{V}^{a}_{mn\mathbf{k}}=v^{a}_{mm\mathbf{k}}-v^{a}_{nn\mathbf{k}}=\frac{\partial \omega_{mn\mathbf{k}}}{\partial k^{a}}\) and
\[M^{ca}_{nm\mathbf{k}}=C_{n\mathbf{k}}^{\dagger}\frac{1}{i\hbar}[\widetilde{r}^{a}_{ \mathbf{k}},\widetilde{v}^{c}_{\mathbf{k}}]C_{m\mathbf{k}}\,, \tag{8}\]
where the Roman letters \(a,c\) indicate the Cartesian directions \(x,y,z\). Note that the electron wavevector has only the in-plane components \(x,y\); the derivative \(\frac{\partial}{\partial k^{z}}\) thus gives zero and \((r^{a}_{\mathbf{k}})_{;nm\mathbf{k}^{z}}=-i(\xi^{z}_{nn\mathbf{k}}-\xi^{z}_{mm\mathbf{k}})r^{a}_{nm\mathbf{k}}\).
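A short sketch of this prescription is given below: given the velocity matrix elements and band energies at one \(\mathbf{k}\) point, the off-diagonal Berry connections follow from Eq. (6); the degeneracy tolerance is an assumption added to avoid division by zero.

```python
import numpy as np

def position_matrix(v_nm, energies, hbar=1.0, degeneracy_tol=1e-9):
    """Off-diagonal position (Berry connection) matrix elements
    r_nm = v_nm / (i * omega_nm), Eq. (6), for one Cartesian component
    and one k point; diagonal and (nearly) degenerate entries are set to zero."""
    e_diff = energies[:, None] - energies[None, :]      # epsilon_n - epsilon_m
    omega_nm = e_diff / hbar
    r_nm = np.zeros_like(v_nm, dtype=complex)
    mask = np.abs(e_diff) > degeneracy_tol
    r_nm[mask] = v_nm[mask] / (1j * omega_nm[mask])
    return r_nm
```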
### Injection and shift currents
We focus on the injection and shift currents induced by a laser pulse centered at frequency \(\omega\), for which the electric field is \(\mathbf{E}(t)=\mathbf{E}_{0}(t)e^{-i\omega t}+c.c.\) and \(\mathbf{E}_{0}(t)\) is a slowly varying envelope function. The static response currents can be written as
\[\mathbf{J}_{0}(t)=\mathbf{J}_{\rm inj}(t)+\mathbf{J}_{\rm sh}(t)\,. \tag{9}\]
Here the first term \(\mathbf{J}_{\rm inj}(t)\) is a one-color injection current satisfying
\[\frac{dJ_{\rm inj}^{a}(t)}{dt}=2i\eta^{abc}(\omega)E_{0}^{b}(t)[E_{0}^{c}(t)]^ {*}\,, \tag{10}\]
with the injection coefficient \(\eta^{abc}(\omega)\) given by
\[\eta^{abc}(\omega)=\frac{2e^{3}\pi}{\hbar^{2}}\int\frac{d\mathbf{k}}{4\pi^{2}} \sum_{nm}\mathcal{V}_{mn\mathbf{k}}^{a}f_{nm\mathbf{k}}{\rm Im}[r_{mn\mathbf{k}}^{c}r_{nm \mathbf{k}}^{b}]\delta(\omega_{mn\mathbf{k}}-\omega)\,. \tag{11}\]
Here \(f_{nm\mathbf{k}}=f_{n\mathbf{k}}-f_{m\mathbf{k}}\) is the population difference with the Fermi-Dirac distribution \(f_{n\mathbf{k}}=[1+e^{(\epsilon_{n\mathbf{k}}-\mu)/k_{B}T}]^{-1}\) for a given chemical potential \(\mu\) and temperature \(T\). The second term \(\mathbf{J}_{\rm sh}(t)\) in Eq. (9) is a shift current written as
\[J_{\rm sh}^{a}(t)=2\sigma^{abc}(\omega)E_{0}^{b}(t)[E_{0}^{c}(t)]^{*}\,, \tag{12}\]
with the shift conductivity \(\sigma^{abc}(\omega)\) given by
\[\sigma^{abc}(\omega)=-\frac{i\pi e^{3}}{\hbar^{2}}\int\frac{d\mathbf{k}}{4\pi^{2}} \sum_{nm}f_{nm\mathbf{k}}\left[r_{mn\mathbf{k}}^{b}\left(r_{\mathbf{k}}^{c}\right)_{;nmk^{ a}}+r_{mn\mathbf{k}}^{c}\left(r_{\mathbf{k}}^{b}\right)_{;nmk^{a}}\right]\delta(\omega_{mn \mathbf{k}}-\omega)\,. \tag{13}\]
Further discussion of photocurrents starts with a symmetry analysis on the tensors \(\eta^{abc}(\omega)\) and \(\sigma^{abc}(\omega)\). The presence of time-reversal symmetry gives \(\mathbf{r}_{nm\mathbf{k}}=\mathbf{r}_{mn(-\mathbf{k})}=[\mathbf{r}_{nm(-\mathbf{k})}]^{*}\), \(\mathbf{v}_{nm\mathbf{k}}=-\mathbf{v}_{mn(-\mathbf{k})}=-[\mathbf{v}_{nm(-\mathbf{k})}]^{*}\), \(\epsilon_{n\mathbf{k}}=\epsilon_{n(-\mathbf{k})}\), and \(\left(r_{\mathbf{k}}^{b}\right)_{;nm\mathbf{k}^{a}}=-\left(r_{-\mathbf{k}}^{b}\right)_{;mn\mathbf{k}^{a}}=-[\left(r_{\mathbf{k}}^{b}\right)_{;nm\mathbf{k}^{a}}]^{*}\). Thus from Eqs. (11) and (13), we obtain \(\eta^{abc}=[\eta^{abc}]^{*}\) and \(\sigma^{abc}=[\sigma^{abc}]^{*}\), which are both real numbers. At finite gate voltage, the crystal point group of AB-BG is \(C_{3v}\), whose symmetry is lower than that of AA-BG with crystal point group \(C_{6v}\). Thus we can check the symmetry properties of AB-BG first, and then refine them to AA-BG. Combining the point group and the time-reversal symmetry, the nonzero tensor components satisfy \(\eta^{xzx}=\eta^{yzy}=-\eta^{xxz}=-\eta^{yyz}\), \(\sigma^{xzx}=\sigma^{yzy}=\sigma^{xxz}=\sigma^{yyz}\), \(\sigma^{zxx}=\sigma^{zyy}\), \(\sigma^{zzz}\), and \(\sigma^{yyy}=-\sigma^{yxx}=-\sigma^{xxy}=-\sigma^{xyx}\). Then the injection current becomes
\[\frac{dJ_{\rm inj}^{a}(t)}{dt}=4\eta^{xzx}(\omega){\rm Im}\{E_{0}^{a}(t)[E_{0} ^{z}(t)]^{*}\}(1-\delta_{a,z})\,, \tag{14}\]
and the shift current is
\[J^{x}_{\rm sh}(t) =4\sigma^{xzx}(\omega){\rm Re}\left\{E^{z}_{0}(t)[E^{x}_{0}(t)]^{*} \right\}-4\sigma^{yyy}(\omega){\rm Re}\left\{E^{x}_{0}(t)[E^{y}_{0}(t)]^{*} \right\}\,, \tag{15a}\] \[J^{y}_{\rm sh}(t) =4\sigma^{zxx}(\omega){\rm Re}\left\{E^{z}_{0}(t)[E^{y}_{0}(t)]^{* }\right\}+2\sigma^{yyy}(\omega)\left[|E^{y}_{0}(t)|^{2}-|E^{x}_{0}(t)|^{2} \right]\,,\] (15b) \[J^{z}_{\rm sh}(t) =2\sigma^{zxx}(\omega)\left[|E^{x}_{0}(t)|^{2}+|E^{y}_{0}(t)|^{2} \right]+2\sigma^{zzz}(\omega)|E^{z}_{0}(t)|^{2}\,. \tag{15c}\]
For AA-BG, the results are similar except that the \(\sigma^{yyy}\) component disappears due to the extra crystal symmetry.
The injection current in AA-BG or AB-BG requires an elliptically polarized light incident obliquely, and its \(z\)-component vanishes due to the lack of freely moving electrons along this quantum confined direction. The \(z\)-component of shift current in AA-BG or AB-BG, induced by the charge shift between the two layers under the light excitation, can be always generated. Such shift current can lead to charge accumulation between these two layers, which can further induce a gate voltage in this system, as discussed by Gao _et al._[34]. The in-plane components of the shift current in AA-BG can be generated only for an elliptically polarized light incident obliquely, while those in AB-BG have no such limit.
## III Results
### Analytical results for AA-BG
The Hamiltonian for the AA-BG can be analytically diagonalized. The eigenstates are
\[C_{n\mathbf{k}}=\frac{\sqrt{1-\alpha_{n}\mathcal{N}_{\beta_{n}\mathbf{k}}}}{2\sqrt{2}} \left(\begin{array}{c}-\hat{g}_{\mathbf{k}}\\ -\beta_{n}\\ \beta_{n}\hat{g}_{\mathbf{k}}\\ 1\end{array}\right)+\frac{\alpha_{n}\sqrt{1+\alpha_{n}\mathcal{N}_{\beta_{n} \mathbf{k}}}}{2\sqrt{2}}\left(\begin{array}{c}\hat{g}_{\mathbf{k}}\\ \beta_{n}\\ \beta_{n}\hat{g}_{\mathbf{k}}\\ 1\end{array}\right)\,, \tag{16}\]
with \(\hat{g}_{\mathbf{k}}=g_{\mathbf{k}}/|g_{\mathbf{k}}|\) and
\[\mathcal{N}_{\beta_{n}\mathbf{k}}=\frac{\gamma_{3}|g_{\mathbf{k}}|+\beta_{n}\gamma_{1 }}{\sqrt{\Delta^{2}+(\gamma_{3}|g_{\mathbf{k}}|+\beta_{n}\gamma_{1})^{2}}}\,. \tag{17}\]
Here \(n=1,2,3,4\) denotes the band index with \(\alpha_{n}=-1,-1,+1,+1\) and \(\beta_{n}=-1,+1,-1,+1\), respectively. The associated eigenenergies are
\[\epsilon_{n\mathbf{k}}=\beta_{n}\gamma_{0}|g_{\mathbf{k}}|+\alpha_{n}\sqrt{\Delta^{2} +(\gamma_{3}|g_{\mathbf{k}}|+\beta_{n}\gamma_{1})^{2}}\,. \tag{18}\]
With the analytic wavefunctions in Eq. (16), Berry connections \(\mathbf{\xi}_{nm\mathbf{k}}\) can be calculated directly from Eq. (5), as listed in Appendix A, where the relations between all components are also presented. There exist selection rules for \(r^{z}_{nm\mathbf{k}}\) as
\[r^{z}_{13\mathbf{k}}=r^{z}_{31\mathbf{k}}=\frac{c\mathcal{N}_{-1\mathbf{k}}}{2}\,,\quad r^{z }_{24\mathbf{k}}=r^{z}_{42\mathbf{k}}=\frac{c\mathcal{N}_{+1\mathbf{k}}}{2}\,. \tag{19}\]
Therefore, \(r^{z}_{nm\mathbf{k}}\) is nonzero only for the band pair \((n,m)=(1,3)\) or \((2,4)\). The injection coefficient becomes
\[\eta^{xzx}(\omega)=\frac{e^{3}}{2\pi\hbar^{2}}\int d\mathbf{k}\left\{f_{13\mathbf{k}}\mathcal{V}^{x}_{31\mathbf{k}}\text{Im}[r^{x}_{31\mathbf{k}}r^{z}_{13\mathbf{k}}]\delta(\omega_{31\mathbf{k}}-\omega)+f_{24\mathbf{k}}\mathcal{V}^{x}_{42\mathbf{k}}\text{Im}[r^{x}_{42\mathbf{k}}r^{z}_{24\mathbf{k}}]\delta(\omega_{42\mathbf{k}}-\omega)\right\}\,. \tag{20}\]
The intraband Berry connections are obtained as
\[\mathbf{\xi}_{nn\mathbf{k}}=\frac{1}{2}\left[g^{*}_{\mathbf{k}}(i\mathbf{\nabla}_{\mathbf{k}})g_{ \mathbf{k}}+\frac{a_{0}}{\sqrt{3}}\hat{\mathbf{y}}\right]+\frac{1}{2}c\hat{\mathbf{z}} \left(1+\alpha_{n}\sqrt{1-\mathcal{N}^{2}_{\beta_{n}\mathbf{k}}}\right)\,, \tag{21}\]
The matrix elements for \(\xi^{x/y}_{nn\mathbf{k}}\) are independent of the band index \(n\), thus \((r^{a}_{\mathbf{k}})_{;nm\mathbf{k}^{b}}=\frac{\partial r^{a}_{nm\mathbf{k}}}{\partial k^{ b}}\) for \(b=x,y\) and \((r^{a}_{\mathbf{k}})_{;nm\mathbf{k}^{z}}=-i(\xi^{z}_{nn\mathbf{k}}-\xi^{z}_{mm\mathbf{k}})r^{a} _{nm\mathbf{k}}\). The shift conductivities become
\[\sigma^{xzx}(\omega)= -i\frac{e^{3}}{4\pi\hbar^{2}}\int d\mathbf{k}\left[f_{13\mathbf{k}}\left(r^{z}_{31\mathbf{k}}\frac{\partial r^{x}_{13\mathbf{k}}}{\partial k_{x}}+r^{x}_{31\mathbf{k}}\frac{\partial r^{z}_{13\mathbf{k}}}{\partial k_{x}}\right)\delta(\omega_{31\mathbf{k}}-\omega)\right.\] \[\left.+f_{24\mathbf{k}}\left(r^{z}_{42\mathbf{k}}\frac{\partial r^{x}_{24\mathbf{k}}}{\partial k_{x}}+r^{x}_{42\mathbf{k}}\frac{\partial r^{z}_{24\mathbf{k}}}{\partial k_{x}}\right)\delta(\omega_{42\mathbf{k}}-\omega)\right]\,, \tag{22a}\] \[\sigma^{zzz}(\omega)= \frac{e^{3}}{2\pi\hbar^{2}}\int d\mathbf{k}\left[f_{13\mathbf{k}}|r^{z}_{31\mathbf{k}}|^{2}(\xi^{z}_{33\mathbf{k}}-\xi^{z}_{11\mathbf{k}})\delta(\omega_{31\mathbf{k}}-\omega)\right.\] \[\left.+f_{24\mathbf{k}}|r^{z}_{42\mathbf{k}}|^{2}(\xi^{z}_{44\mathbf{k}}-\xi^{z}_{22\mathbf{k}})\delta(\omega_{42\mathbf{k}}-\omega)\right]\,,\] (22b) \[\sigma^{zxx}(\omega)= \frac{e^{3}}{2\pi\hbar^{2}}\int d\mathbf{k}\sum_{nm}f_{nm\mathbf{k}}|r^{x}_{mn\mathbf{k}}|^{2}(\xi^{z}_{mm\mathbf{k}}-\xi^{z}_{nn\mathbf{k}})\delta(\omega_{mn\mathbf{k}}-\omega)\,. \tag{22c}\]
It can be seen that the coefficients \(\eta^{xzx}\), \(\sigma^{xzx}\), and \(\sigma^{zzz}\) are induced by the transitions only from the band 1 to 3 or from the band 2 to 4, while \(\sigma^{zxx}\) has no such limit. These coefficients can be further simplified with the analytical expressions of all these quantities, which can be obtained under the linear dispersion approximation around the Dirac points, as shown in Appendix B.
Figure 2 (a) shows the band structure of AA-BG for \(\Delta=0\) and 0.4 eV. When a gate voltage is applied, the interlayer coupling shifts the energies of the Dirac cones of each layer, while the electronic states at zero energy remain degenerate. The bands 1 and 3 (or 2
and 4) are approximately parallel to each other, and their energy differences are in the range of \(2\sqrt{\Delta^{2}+(\gamma_{1}+3\gamma_{3})^{2}}\leq\hbar\omega_{42\mathbf{k}}\leq 2 \sqrt{\Delta^{2}+\gamma_{1}^{2}}\leq\hbar\omega_{31\mathbf{k}}\leq 2\sqrt{\Delta^{2}+( \gamma_{1}-3\gamma_{3})^{2}}\) due to \(0\leq|g_{\mathbf{k}}|\leq 3\), where the middle value is obtained at the Dirac points and the other two values are obtained at the M points. Figure 2 (b) gives the joint density of states (JDOS) \(\mathcal{J}_{31}(\omega)\) and \(\mathcal{J}_{42}(\omega)\) for related two pairs of bands, which are defined as
\[\mathcal{J}_{nm}(\omega)=\int d\mathbf{k}\delta(\hbar\omega_{nm\mathbf{k}}-\hbar\omega)\,. \tag{23}\]
These two JDOS are strongly localized in energy, regardless of whether the gate voltage is applied. For \(\Delta=0.4\) eV, \(\mathcal{J}_{42}(\omega)\) is nonzero in the energy range of \([0.95,1.08]\) eV and \(\mathcal{J}_{31}(\omega)\) is nonzero in the energy range of \([1.08,1.21]\) eV.
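For reference, the JDOS of Eq. (23) can be evaluated on a uniform \(\mathbf{k}\) grid with a Gaussian-broadened \(\delta\) function, as in the following minimal sketch; the broadening value and grid are illustrative choices here.

```python
import numpy as np

def jdos(e_n, e_m, photon_energies, bz_area, gamma=0.01):
    """JDOS of Eq. (23) between two bands on a uniform k grid.

    `e_n`, `e_m` are flattened band energies (eV) on the grid, `bz_area`
    is the Brillouin-zone area and `gamma` an illustrative Gaussian
    broadening (eV) that replaces the delta function."""
    dk = bz_area / e_n.size                              # area element per k point
    diff = (e_n - e_m)[None, :] - photon_energies[:, None]
    delta = np.exp(-(diff / gamma) ** 2) / (np.sqrt(np.pi) * gamma)
    return delta.sum(axis=1) * dk
```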
### Band structure of AB-BG
The Hamiltonian in Eq. (2) for AB-BG can also be analytically diagonalized, as shown in Appendix C, but the expressions for the eigenenergies are too complicated to provide meaningful physical insight; thus we discuss the band structure based on numerical calculation. This work focuses on the electronic transitions around the Dirac points; for convenience, the wavevectors are expressed as \(\mathbf{k}=\bar{k}\frac{2\pi}{a_{0}}(\hat{\mathbf{x}}\cos\theta+\hat{\mathbf{y}}\sin\theta)+\mathbf{K}\) with \(\theta=2n\pi/3\) along the K-M directions, and \(\theta=(2n+1)\pi/3\) along the K-\(\Gamma\) directions. Figure 3 (a) gives the band structure for AB-BG at gate voltages \(\Delta=0\) and \(0.4\) eV. At \(\Delta=0\), in each Dirac cone, the two middle bands are degenerate at the Dirac points with \(\bar{k}=0\) and at three other \(\mathbf{k}\) points
Figure 2: Band structure (a) and JDOS (b) for AA-BG at \(\Delta=0\) (dashed curves) and \(\Delta=0.4\) eV (solid curves).
on the K-M paths with \(\bar{k}=-\frac{\gamma_{1}^{\prime}\gamma_{3}^{\prime}}{\sqrt{3}\pi\gamma_{0}^{ \prime\,2}}\sim 0.003\) (see details in Appendix C). Meanwhile, the energy differences, \(\hbar\omega_{31\mathbf{k}}\) and \(\hbar\omega_{42\mathbf{k}}\), have minima at the Dirac points. For nonzero gate voltage, the degeneracy at these points is lifted. The eigenenergies at the Dirac points are \(\pm\Delta-\frac{\Delta^{\prime}}{2}\), \(\pm\sqrt{\Delta^{2}+\gamma_{1}^{2}}+\frac{\Delta^{\prime}}{2}\), and the middle two bands around the Dirac points have the Mexican hat shape [35]. At \(\Delta=0.4\) eV, the energy difference \(\hbar\omega_{32\mathbf{k}}\) shows a minimum with increasing \(\bar{k}\) for each \(\theta\), as shown in the \(\mathbf{k}\)-resolved energy difference in the inset, where the three-fold rotational symmetry can be clearly seen around this Dirac point. Along the K-M
directions, the minima of \(\hbar\omega_{32\mathbf{k}}\) appear around \(\bar{k}=0.027\) to give the band gap of \(E_{g}=0.28\) eV; and along the K-\(\Gamma\) directions, the minima appear around \(\bar{k}=0.023\), which have an energy \(E_{1}=0.4\) eV, higher than the band gap, and give a van Hove singularity (VHS). Similar results can be found for \(\hbar\omega_{42\mathbf{k}}\), and another VHS appears with energy \(E_{2}=0.97\) eV; however, \(\hbar\omega_{31\mathbf{k}}\) shows a minimum at the Dirac points but no VHS appears. Figure 3 (b) gives the JDOS \(\mathcal{J}_{31}(\omega)\), \(\mathcal{J}_{32}(\omega)\), \(\mathcal{J}_{41}(\omega)\), and \(\mathcal{J}_{42}(\omega)\) at \(\Delta=0\) and \(0.4\) eV. The gate voltage changes these JDOS significantly around the band edge. \(\mathcal{J}_{32}(\omega)\) and \(\mathcal{J}_{42}(\omega)\) have divergences at the VHS points with energies \(E_{1}\) and \(E_{2}\), respectively; and \(\mathcal{J}_{31}(\omega)\) has a peak located at \(E_{3}\sim 0.97\) eV around the band edge, which is induced by the nearly parallel bands (1, 3) around the Dirac points.
The VHS points do not appear for all gate voltages. Figure 3 (c) exhibits the \(\Delta\) dependence of the \(\bar{k}\) value for the minimal energy of \(\hbar\omega_{32\mathbf{k}}\) and \(\hbar\omega_{42\mathbf{k}}\) for \(\theta\) along the K-M and K-\(\Gamma\) directions, respectively. Along the K-M directions, \(\hbar\omega_{32\mathbf{k}}\) has a minimum value at nonzero \(\bar{k}\) for all \(\Delta\), which gives the band gap \(E_{g}\) of the system; while along the K-\(\Gamma\) directions, the minimum energy \(E_{1}\) moves to a nonzero \(\bar{k}\) only for \(\Delta\geq 0.023\) eV, where a VHS appears as well. Note that the JDOS \(\mathcal{J}_{32}(\omega)\) shows a maximum at the band edge when there is no VHS, i.e., for \(\Delta<0.023\) eV. However, the minima of \(\hbar\omega_{42\mathbf{k}}\) along the K-M and K-\(\Gamma\) directions are located away from the Dirac points only for \(\Delta\geq 0.174\) eV, where a VHS appears as well. For \(\Delta<0.174\) eV, \(\mathcal{J}_{42}(\omega)\) also shows a maximum at the band edge between the bands 4 and 2, and this energy is still denoted as \(E_{2}\); the maximum of \(\mathcal{J}_{31}(\omega)\) is also located at the band edge between bands 3 and 1, and this energy is still denoted as \(E_{3}\). The gate voltage dependences of these energies \(E_{g}\), \(E_{1}\), \(E_{2}\), and \(E_{3}\) are shown in Fig. 3 (d).
### Injection coefficients and shift conductivities at \(\Delta=0.4\) eV
In this section we present the numerical results for the injection coefficient \(\eta^{xzx}(\omega)\) and the shift conductivities \(\sigma^{yyy}(\omega)\), \(\sigma^{xzx}(\omega)\), \(\sigma^{zxx}(\omega)\), and \(\sigma^{zzz}(\omega)\). The parameters are chosen as \(T=300\) K, \(\mu=0\), and \(\Delta=0.4\) eV. During the numerical calculation, the Brillouin zone is divided into a \(3000\times 3000\) homogeneous grid. The \(\delta\) functions in Eqs. (11) and (13) are approximated by a Gaussian function as \(\delta(\omega)=\frac{\hbar}{\sqrt{\pi}\Gamma}e^{-(\hbar\omega)^{2}/\Gamma^{2}}\) with the Gaussian broadening \(\Gamma=10\) meV.
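A minimal sketch of how Eq. (11) is evaluated on such a grid is given below; the arrays of matrix elements are assumed to have been precomputed from the expressions of Sec. II, natural units are used, and only a single band pair is shown.

```python
import numpy as np

def eta_abc_band_pair(omega, V_a_mn, r_c_mn, r_b_nm, f_nm, w_mn, dk_area,
                      gamma=0.01, e=1.0, hbar=1.0):
    """Contribution of one band pair (n, m) to the injection coefficient of
    Eq. (11) on a uniform k grid.

    All k-resolved arrays (velocity difference V^a_mn, position matrix
    elements r^c_mn and r^b_nm, population difference f_nm and transition
    frequency w_mn) are assumed to be precomputed; the delta function is
    replaced by a normalized Gaussian of width `gamma`."""
    delta = np.exp(-((w_mn - omega) / gamma) ** 2) / (np.sqrt(np.pi) * gamma)
    integrand = V_a_mn * f_nm * np.imag(r_c_mn * r_b_nm) * delta
    prefactor = 2.0 * np.pi * e**3 / hbar**2 / (4.0 * np.pi**2)
    return prefactor * np.sum(integrand) * dk_area
```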
Figure 4 (a) shows the injection coefficient spectra for AA-BG and AB-BG. For the injection in AA-BG, the spectrum is just a peak located in a very narrow energy range
1.069 eV\(<\hbar\omega<\) 1.087 eV with an absolute value of about 0.067 \(\rm{A\cdot s^{-1}\cdot m/V^{2}}\). From the analytic results shown in Eq. (17), the spectra include two contributions at different photon energy regions: one is from the optical transition between the bands (1, 3) for photon energy \(\hbar\omega>2\sqrt{\Delta^{2}+\gamma_{1}^{2}}\) or 1.078 eV\(<\hbar\omega<\)1.087 eV, and the other is between the bands (2, 4) for \(\hbar\omega<2\sqrt{\Delta^{2}+\gamma_{1}^{2}}\) or 1.069 eV\(<\hbar\omega<\)1.078 eV; both magnitudes are nearly proportional to \(|\hbar\omega-2\sqrt{\Delta^{2}+\gamma_{1}^{2}}|\). These two contributions merge into a single peak just because the \(\delta\) function is numerically broadened with \(\Gamma=10\) meV, which is even larger than each energy region. The injection coefficient \(\eta^{xzx}\) in AB-BG starts at photon energies higher than the gap, i.e., \(\hbar\omega>0.28\) eV, and reaches its maximum value of 25 \(\rm{A\cdot s^{-1}\cdot m/V^{2}}\) in amplitude at \(\hbar\omega=0.45\) eV, which is slightly larger than the first VHS energy \(E_{1}\) of the JDOS; the energy difference arises from the zero electron velocity at this VHS. Considering the thickness of the bilayer graphene as \(2c=6.7\) Å, the effective bulk injection coefficient is
\(3.7\times 10^{10}\)\(\mu\rm A\cdot s^{-1}V^{-2}\), which is nearly 50 times larger than that in bulk GaAs [36]. After this peak, the amplitude of the injection coefficient decreases as the photon energy increases, except for a small peak located around the JDOS peak at the higher energy \(E_{2}\) or \(E_{3}\). It can be seen that the injection coefficient for AB-BG is about two orders of magnitude larger than that for AA-BG. To give a direct impression of these values, we estimate how large the injection current can be in AB-BG. Based on Eq. (14), when the laser is a 45\({}^{\circ}\) obliquely incident \(p\)-polarized light with a photon energy of 0.45 eV, a light intensity of \(I=0.1\) GW/cm\({}^{2}\), and a pulse duration of \(\tau=1\) ps, the generated injection current is \(2\eta^{xzx}\frac{I}{2c\epsilon_{0}}W\tau\sim 9\) mA for an electrode with a width \(W=1\)\(\mu\)m.
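This estimate can be reproduced with a few lines of arithmetic; in the sketch below we assume that \(c\) in the intensity-to-field conversion \(|E_{0}|^{2}=I/(2c\epsilon_{0})\) denotes the vacuum speed of light.

```python
# Order-of-magnitude check of the injection-current estimate quoted above.
eta = 25.0                      # A s^-1 m / V^2, peak value at 0.45 eV
intensity = 0.1e9 * 1e4         # 0.1 GW/cm^2 expressed in W/m^2
c_light, eps0 = 2.998e8, 8.854e-12
E0_sq = intensity / (2 * c_light * eps0)   # |E_0|^2 in V^2/m^2 (assumed relation)
width, tau = 1e-6, 1e-12        # electrode width (m) and pulse duration (s)

current = 2 * eta * E0_sq * width * tau
print(f"injection current ~ {current * 1e3:.1f} mA")   # about 9 mA
```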
Then we turn to the shift conductivities, as shown in Figs. 4 (b-d). Figure 4 (c) gives the shift conductivity for AA-BG. It can be seen that the component \(\sigma^{zzz}\) is about one order of magnitude larger than \(\sigma^{xzx}\), and at least two orders of magnitude larger than \(\sigma^{zxx}\). Both \(\sigma^{zzz}\) and \(\sigma^{xzx}\) have nonzero values only in the very narrow energy regions, similar to the injection coefficient. These results are consistent with the analytic results shown in Eqs. (18-19). Interestingly, \(\sigma^{zxx}\) includes the contributions from the band 1 to 3 and from the band 2 to 4 but with opposite signs. For AB-BG shown in Figs. 4 (b) and (d), all nonzero components start from the band edge \(\hbar\omega\geq E_{g}\). Different from the injection coefficients, the shift conductivities at the band edge are nonzero, and show prominent peaks. In particular, \(\sigma^{yyy}\) shows a large value of about \(6\times 10^{-13}\) \(\rm A\cdot m/V^{2}\) at the band edge and it drops quickly with increasing photon energy. The effective bulk shift conductivity is 896 \(\mu\rm A/V^{2}\), which is several times larger than in GeSe (200 \(\mu\rm A/V^{2}\)) [36]. Besides, the component \(\sigma^{zzz}\) is at least one order of magnitude smaller than the other nonzero components, totally different from the case of AA-BG, where it is the largest one. The spectra of \(\sigma^{xzx}\) and \(\sigma^{zxx}\) have similar amplitudes around a few \(10^{-14}\) \(\rm A\cdot m/V^{2}\), which is a few tens of times smaller than the peak of \(\sigma^{yyy}\); they also show some fine structures around those characteristic energies \(E_{1}\), \(E_{2}\), and \(E_{3}\). We repeat the above estimation for the shift current using the same parameters but \(\hbar\omega=0.3\) eV, and then obtain a generated shift current of \(2\sigma^{yyy}\frac{I}{2c\epsilon_{0}}W\sim 0.23\) mA.
### Effects of Gate voltage
Figure 5 gives the gate voltage dependence of the injection coefficients and shift conductivities for AA-BG and AB-BG at zero chemical potential. Note that the negative gate
voltage leads to opposite coefficients, which are consistent with the results by Xiong _et al._[30], thus only positive gate voltages are shown here.
Figures 5 (a) and (b) show the spectra of \(\eta^{xzx}\) and \(\sigma^{zzz}\) for AA-BG, respectively. As indicated in the previous section, both spectra for different gate voltages are nonzero in a very narrow photon energy region. With the increase of the gate voltage, the region moves to larger energy and the values of both spectra increase, as indicated by the \(\propto\Delta\) prefactors in the analytic expressions of Appendix B. Figure 5 (c) gives the injection coefficient \(\eta^{xzx}\) for AB-BG. At each gate voltage, the injection coefficient shows two peaks located at photon energies slightly larger than \(E_{1}\) and \(E_{2}\), which have been discussed in the previous section. As the gate voltage \(\Delta\) varies, the peak amplitude reaches a maximum at \(\Delta\sim 0.2\) eV. The shift conductivities \(\sigma^{xzx}\), \(\sigma^{yyy}\) and \(\sigma^{zxx}\) for AB-BG are plotted in Figs. 5 (d-f). They show some similar characteristics: (1) The spectra are located at about the band gap, similar to the case of \(\Delta=0.4\) eV, and their amplitudes increase with the decrease of \(\Delta\); \(\sigma^{xzx}\) and \(\sigma^{zxx}\) increase much faster than \(\sigma^{yyy}\). (2) There exist sign changes of the shift conductivities.
### Effects of Chemical potential
The chemical potential \(\mu\) dependence of the injection coefficients and shift conductivities at \(\Delta=0.4\) eV is depicted in Fig. 6 with the same layout as Fig. 5. For AA-BG in Figs. 6 (a) and (b), they show a very similar asymmetric dependence on the chemical potential: with the increase of the chemical potential, the values of all coefficients increase and the locations shift to higher or lower photon energies depending on the sign of the chemical potential. For positive chemical potential, the transitions between bands (1, 3) are suppressed by Pauli blocking, while new extra transitions between bands (2, 4) appear due to the additional free electrons in band 2. The extra transitions require lower photon energy and red shift the spectra, and they also correspond to a larger JDOS, leading to larger coefficients. A similar analysis applies for negative chemical potential, but with the roles of the band pairs (1, 3) and (2, 4) interchanged.
In AB-BG, the chemical potential \(\mu\) has different effects, as shown in Figs. 6 (c-f). Due to the existence of the band gap, the spectra are hardly changed when the chemical potential lies in the gap. When \(\mu\) is above the conduction band edge or below the valence band edge,
the main peak of \(\eta^{xzx}\) around 0.5 eV is gradually reduced due to Pauli blocking, and new transitions between the bands (1, 2) or (3, 4) appear, giving additional injection contributions with opposite signs. Similar results are obtained for the shift conductivities.
## IV Conclusion
In this paper we have studied the gate voltage induced injection current and shift current in AA- and AB-stacked bilayer graphene. The gate voltage plays a crucial role in breaking the inversion symmetry of bilayer graphene to induce photogalvanic effects, and at the same time it effectively changes the band structure of AB-BG by opening gaps located along the K-M directions and inducing additional VHS located along the K-\(\Gamma\) directions. In AA-BG, the injection and shift currents are mainly induced by optical transitions between two pairs of nearly parallel bands; the coefficient spectra are located in a very narrow photon energy region of about 20 meV. In AB-BG, the optical transitions can occur between any possible band pairs, and the structure of the spectra is strongly determined by the band gap and the VHS energies. For both structures, the injection and shift currents can be generated in the presence of an oblique \(p\)-polarized light component, while the in-plane shift currents in AB-BG can also be generated by normally incident light. The out-of-plane shift current finally results in a static electric polarization between the layers. The stacking order has significant effects on both currents. The injection coefficient for AA-BG is about two orders of magnitude smaller than that for AB-BG, while the shift conductivities are mostly of the same order of magnitude. All these coefficients can be effectively modulated by the gate voltage and the chemical potential. Our results suggest that gate voltage controlled bilayer graphene can be used to realize tunable optoelectronic detectors working in the mid-infrared.
###### Acknowledgements.
This work has been supported by National Natural Science Foundation of China Grant No. 12034003, 12004379, and 62250065. J.L.C. acknowledges the support from Talent Program of CIOMP.
## Appendix A Berry connections of AA-BG
The general expression for the Berry connection of AA-BG is
\[\mathbf{\xi}_{nm\mathbf{k}} =\left(\sqrt{1-\alpha_{n}\mathcal{N}_{\beta_{n}\mathbf{k}}}\sqrt{1- \alpha_{m}\mathcal{N}_{\beta_{m}\mathbf{k}}}+\alpha_{n}\alpha_{m}\sqrt{1+\alpha_{n} \mathcal{N}_{\beta_{n}\mathbf{k}}}\sqrt{1+\alpha_{m}\mathcal{N}_{\beta_{m}\mathbf{k}}} \right)\times\] \[\times\frac{1+\beta_{n}\beta_{m}}{8}\left[\hat{g}_{\mathbf{k}}^{*}(i \mathbf{\nabla}_{\mathbf{k}}\hat{g}_{\mathbf{k}})+\hat{\mathbf{y}}d\right]\] \[+\left(\alpha_{m}\sqrt{1-\alpha_{n}\mathcal{N}_{\beta_{n}\mathbf{k}}} \sqrt{1+\alpha_{m}\mathcal{N}_{\beta_{m}\mathbf{k}}}+\alpha_{n}\sqrt{1+\alpha_{n} \mathcal{N}_{\beta_{n}\mathbf{k}}}\sqrt{1-\alpha_{m}\mathcal{N}_{\beta_{m}\mathbf{k}}}\right)\] \[\times\frac{\beta_{n}\beta_{m}-1}{8}\left[\hat{g}_{\mathbf{k}}^{*}(i \mathbf{\nabla}_{\mathbf{k}}\hat{g}_{\mathbf{k}})-\hat{\mathbf{y}}d\right]\] \[+\frac{i\delta_{\beta_{n}\beta_{m}}}{2}\left(\sqrt{1-\alpha_{n} \mathcal{N}_{\beta_{n}\mathbf{k}}}\mathbf{\nabla}_{\mathbf{k}}\sqrt{1-\alpha_{m}\mathcal{N }_{\beta_{m}\mathbf{k}}}+\alpha_{n}\alpha_{m}\sqrt{1+\alpha_{n}\mathcal{N}_{\beta_ {n}\mathbf{k}}}\mathbf{\nabla}_{\mathbf{k}}\sqrt{1+\alpha_{m}\mathcal{N}_{\beta_{m}\mathbf{k}} }\right)\] \[+\left(\sqrt{1-\alpha_{n}\mathcal{N}_{\beta_{n}\mathbf{k}}}+\alpha_{n} \sqrt{1+\alpha_{n}\mathcal{N}_{\beta_{n}\mathbf{k}}}\right)\left(\sqrt{1-\alpha_ {m}\mathcal{N}_{\beta_{m}\mathbf{k}}}+\alpha_{m}\sqrt{1+\alpha_{m}\mathcal{N}_{ \beta_{m}\mathbf{k}}}\right)\] \[\times\frac{1+\beta_{n}\beta_{m}}{8}c\hat{\mathbf{z}} \tag{10}\]
with \(d=a_{0}/\sqrt{3}\). Here we give the \(x\)-components between different bands as
\[r_{13\mathbf{k}}^{x} =-r_{31\mathbf{k}}^{x}=-\frac{i}{2}\frac{\frac{\partial\mathcal{N}_{ -1\mathbf{k}}}{\partial k_{x}}}{\sqrt{1-\mathcal{N}_{-1\mathbf{k}}^{2}}}=-\frac{i}{2} \frac{\gamma_{3}}{|\Delta|}(1-\mathcal{N}_{-1\mathbf{k}}^{2})\frac{\partial|g_{\bm {k}}|}{\partial k_{x}}\,, \tag{11a}\] \[r_{24\mathbf{k}}^{x} =-r_{42\mathbf{k}}^{x}=-\frac{i}{2}\frac{\frac{\partial\mathcal{N}_{ +1\mathbf{k}}}{\partial k_{x}}}{\sqrt{1-\mathcal{N}_{+1\mathbf{k}}^{2}}}=-\frac{i}{2} \frac{\gamma_{3}}{|\Delta|}(1-\mathcal{N}_{+1\mathbf{k}}^{2})\frac{\partial|g_{\bm {k}}|}{\partial k_{x}}\,,\] (11b) \[r_{12\mathbf{k}}^{x} =r_{21\mathbf{k}}^{x}=-r_{34\mathbf{k}}^{x}=-r_{43\mathbf{k}}^{x}\] \[=\frac{1}{4}\left[\sqrt{1+\mathcal{N}_{-1\mathbf{k}}}\sqrt{1-\mathcal{ N}_{+1\mathbf{k}}}+\sqrt{1-\mathcal{N}_{-1\mathbf{k}}}\sqrt{1+\mathcal{N}_{+1\mathbf{k}}} \right]\left[\hat{g}_{\mathbf{k}}^{*}\left(i\frac{\partial\hat{g}_{\mathbf{k}}}{ \partial k_{x}}\right)\right]\,,\] (11c) \[r_{32\mathbf{k}}^{x} =-r_{23\mathbf{k}}^{x}=r_{14\mathbf{k}}^{x}=-r_{41\mathbf{k}}^{x}\] \[=\frac{1}{4}\left[\sqrt{1+\mathcal{N}_{-1\mathbf{k}}}\sqrt{1-\mathcal{ N}_{+1\mathbf{k}}}-\sqrt{1-\mathcal{N}_{-1\mathbf{k}}}\sqrt{1+\mathcal{N}_{+1\mathbf{k}}} \right]\left[\hat{g}_{\mathbf{k}}^{*}\left(i\frac{\partial\hat{g}_{\mathbf{k}}}{ \partial k_{x}}\right)\right]\,. \tag{11d}\]
Combining with other quantities in Eqs. (19) and (21), the injection coefficients and the shift conductivities can be evaluated. For the latter use, we also need
\[\mathcal{V}_{21\mathbf{k}} =\frac{2\gamma_{3}}{\hbar}\mathcal{N}_{-1\mathbf{k}}\frac{\partial|g_{ \mathbf{k}}|}{\partial k_{x}}\,, \tag{12}\] \[\mathcal{V}_{43\mathbf{k}} =\frac{2\gamma_{3}}{\hbar}\mathcal{N}_{+1\mathbf{k}}\frac{\partial|g_{ \mathbf{k}}|}{\partial k_{x}}\,. \tag{13}\]
Appendix B Analytical expressions of \(\eta^{zxx}\), \(\sigma^{zxx}\), and \(\sigma^{zzz}\) in AA-BG under the linear dispersion approximation
Here we give the analytic results for \(\eta^{zxx}\) in Eq. (20), \(\sigma^{zxx}\) in Eq. (22a) and \(\sigma^{zzz}\) in Eq. (22b) under the linear dispersion approximation around the Dirac points. The term of \(\sigma^{zxx}\) is not discussed due to its very small magnitude, as shown in Fig. 4 (c).
The integrands of \(\eta^{xzx}\), \(\sigma^{xzx}\), and \(\sigma^{zzz}\) are functions of \(|g_{\mathbf{k}}|\), \(\frac{\partial|g_{\mathbf{k}}|}{\partial k_{x}}\), and \(\frac{\partial^{2}|g_{\mathbf{k}}|}{\partial k_{x}^{2}}\), where all terms involving \(|g_{\mathbf{k}}|\) can be simplified by using the properties of the \(\delta\) function. The function \(\delta(\hbar\omega_{nm\mathbf{k}}-\hbar\omega)\) is nonzero only for \(|g_{\mathbf{k}}|=G_{nm}\) with
\[\gamma_{3}G_{31}=\gamma_{1}-\sqrt{\left(\frac{\hbar\omega}{2} \right)^{2}-\Delta^{2}}\,,\text{ for }\hbar\omega\geq 2\sqrt{\Delta^{2}+\gamma_{1}^{2}}\,, \tag{50}\] \[\gamma_{3}G_{42}=\sqrt{\left(\frac{\hbar\omega}{2}\right)^{2}- \Delta^{2}}-\gamma_{1}\,,\text{ for }\hbar\omega\leq 2\sqrt{\Delta^{2}+\gamma_{1}^{2}}\,. \tag{51}\]
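As a quick consistency check (ours, added for clarity), both transition branches close at the same threshold frequency \(\hbar\omega_{c}=2\sqrt{\Delta^{2}+\gamma_{1}^{2}}\): for \(\gamma_{1}>0\),

\[\sqrt{\left(\frac{\hbar\omega_{c}}{2}\right)^{2}-\Delta^{2}}=\gamma_{1}\quad\Longrightarrow\quad\gamma_{3}G_{31}\big|_{\omega=\omega_{c}}=\gamma_{3}G_{42}\big|_{\omega=\omega_{c}}=0\,,\]

so the (3,1) transitions contribute only above \(\hbar\omega_{c}\) and the (4,2) transitions only below it, consistent with the step functions \(\theta(\hbar\omega-2\sqrt{\Delta^{2}+\gamma_{1}^{2}})\) and \(\theta(2\sqrt{\Delta^{2}+\gamma_{1}^{2}}-\hbar\omega)\) appearing in the \(\delta\)-function expressions below.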
Further we get
\[\left.\left(\mathcal{N}_{-1\mathbf{k}}\right)\right|_{|g_{\mathbf{k}}|=G_{31}}=-\left. \left(\mathcal{N}_{+1\mathbf{k}}\right)\right|_{|g_{\mathbf{k}}|=G_{42}}=-\sqrt{1- \left(\frac{2\Delta}{\hbar\omega}\right)^{2}}\,. \tag{52}\]
1. By substituting the expressions of \(\mathcal{V}_{nm\mathbf{k}}^{x}\), \(r_{31\mathbf{k}}^{x}\), \(r_{13\mathbf{k}}^{z}\), \(r_{42\mathbf{k}}^{x}\), and \(r_{24\mathbf{k}}^{z}\), \(\eta^{xzx}\) becomes \[\eta^{zxx}= \frac{e^{3}}{2\pi\hbar^{2}}\int d\mathbf{k}\left(\frac{c\gamma_{3}^{2 }}{2\hbar|\Delta|}\right)\left\{f_{12\mathbf{k}}\mathcal{N}_{-1\mathbf{k}}^{2}(1- \mathcal{N}_{-1\mathbf{k}}^{2})\left(\frac{\partial|g_{\mathbf{k}}|}{\partial k_{x}} \right)^{2}\delta(\omega_{31\mathbf{k}}-\omega)\right.\] \[\left.+f_{34\mathbf{k}}\mathcal{N}_{+1\mathbf{k}}^{2}(1-\mathcal{N}_{+1\bm {k}}^{2})\left(\frac{\partial|g_{\mathbf{k}}|}{\partial k_{x}}\right)^{2}\delta( \omega_{42\mathbf{k}}-\omega)\right\}\] \[= \frac{e^{3}c|\Delta|}{\pi\hbar^{2}(\hbar\omega)^{2}}\left[1- \left(\frac{2\Delta}{\hbar\omega}\right)^{2}\right]\left\{f_{13\mathbf{k}}||_{g_{ \mathbf{k}}|=G_{31}}\mathcal{F}_{31}(\omega)+f_{24\mathbf{k}}||_{g_{\mathbf{k}}|=G_{42}} \mathcal{F}_{42}(\omega)\right\}\,,\] (53) with \[\mathcal{F}_{nm}(\omega)=\int d\mathbf{k}\left(\gamma_{3}\frac{\partial|g_{\mathbf{k}} |}{\partial k_{x}}\right)^{2}\delta(\hbar\omega_{nm\mathbf{k}}-\hbar\omega)\,.\] (54)
2. To get the result for \(\sigma^{xzx}\), we use \[\frac{\partial\mathcal{N}_{-1\mathbf{k}}}{\partial k_{x}}=(1-\mathcal{N}_{-1\mathbf{k} }^{2})^{3/2}\frac{\gamma_{3}}{|\Delta|}\frac{\partial|g_{\mathbf{k}}|}{\partial k _{x}}\] (55) to get \[r_{31\mathbf{k}}^{z}\frac{\partial r_{13\mathbf{k}}^{x}}{\partial k_{x}}+r _{31\mathbf{k}}^{x}\frac{\partial r_{13\mathbf{k}}^{z}}{\partial k_{x}}= \frac{ic}{4}(1+\mathcal{N}_{-1\mathbf{k}}^{2})(1-\mathcal{N}_{-1\mathbf{k} }^{2})^{3/2}\left(\frac{\gamma_{3}}{|\Delta|}\frac{\partial|g_{\mathbf{k}}|}{ \partial k_{x}}\right)^{2}\] \[-\frac{ic}{4}\mathcal{N}_{-1\mathbf{k}}(1-\mathcal{N}_{-1\mathbf{k}}^{2}) \frac{\gamma_{3}}{|\Delta|}\frac{\partial^{2}|g_{\mathbf{k}}|}{\partial k_{x}^{2}}\,.\] (56)
Similar expressions can be obtained for terms involving \(\mathbf{r}_{32\mathbf{k}}\). Then we get
\[\sigma^{zzx}= \frac{e^{3}c}{4\pi\hbar(\hbar\omega)^{2}}\left\{\left[2-\left(\frac{ 2\Delta}{\hbar\omega}\right)^{2}\right]\frac{2|\Delta|}{\hbar\omega}\left[f_{13 \mathbf{k}}||_{g_{\mathbf{k}}|=G_{31}}\mathcal{F}_{31}(\omega)+f_{24\mathbf{k}}||_{g_{\bm {k}}|=G_{42}}\mathcal{F}_{42}(\omega)\right]\right.\] \[\left.-|\Delta|\sqrt{1-\left(\frac{2\Delta}{\hbar\omega}\right)^{ 2}}\left[f_{13\mathbf{k}}||_{g_{\mathbf{k}}|=G_{31}}\mathcal{Q}_{31}(\omega)-f_{24\mathbf{ k}}||_{g_{\mathbf{k}}|=G_{42}}\mathcal{Q}_{42}(\omega)\right]\right\}\,.\] (100) with \[\mathcal{Q}_{nm}(\omega)=\int d\mathbf{k}\gamma_{3}\frac{\partial^{2}|g_{\mathbf{k}} |}{\partial k_{x}^{2}}\delta(\hbar\omega_{nm\mathbf{k}}-\hbar\omega)\,. \tag{101}\]
3. The term of \(\sigma^{zzz}(\omega)\) becomes \[\sigma^{zzz}(\omega)= \frac{e^{3}}{2\pi\hbar^{2}}\int d\mathbf{k}\left\{f_{13\mathbf{k}}\frac{ c^{2}}{4}\mathcal{N}_{-1\mathbf{k}}^{2}c\sqrt{1-\mathcal{N}_{-1\mathbf{k}}^{2}}\delta( \omega_{31\mathbf{k}}-\omega)\right.\] \[\left.+f_{24\mathbf{k}}\frac{c^{2}}{4}\mathcal{N}_{+1\mathbf{k}}^{2}c \sqrt{1-\mathcal{N}_{+1\mathbf{k}}^{2}}\delta(\omega_{42\mathbf{k}}-\omega)\right\}\] \[= \frac{e^{3}c^{3}}{4\pi\hbar}\frac{|\Delta|}{\hbar\omega}\left[1- \left(\frac{2\Delta}{\hbar\omega}\right)^{2}\right]\left[f_{13\mathbf{k}}||_{g_{ \mathbf{k}}|=G_{31}}\mathcal{J}_{31}(\omega)+f_{24\mathbf{k}}||_{g_{\mathbf{k}}|=G_{42}} \mathcal{J}_{42}(\omega)\right]\,.\] (102) with \[\mathcal{J}_{nm}(\omega)=\int d\mathbf{k}\delta(\hbar\omega_{nm\mathbf{k}}-\hbar\omega )\,.\] (103)
When the optical transition occurs just around the Dirac points \(\mathbf{K}\), we can approximate \(|g_{\mathbf{k}+\mathbf{K}}|=\sqrt{3}a_{0}k/2\), then the \(\delta\) functions can be worked out as
\[\delta(2\sqrt{\Delta^{2}+(\gamma_{3}|g_{\mathbf{k}}|-\gamma_{1})^{2}} -\hbar\omega) =\frac{\delta\left(k-2G_{31}/(\sqrt{3}a_{0})\right)}{\sqrt{3}a_{ 0}|\gamma_{3}|\sqrt{1-\left(\frac{2\Delta}{\hbar\omega}\right)^{2}}}\theta( \hbar\omega-2\sqrt{\Delta^{2}+\gamma_{1}^{2}})\,, \tag{104}\] \[\delta(2\sqrt{\Delta^{2}+(\gamma_{3}|g_{\mathbf{k}}|+\gamma_{1})^{2}} -\hbar\omega) =\frac{\delta\left(k-2G_{42}/(\sqrt{3}a_{0})\right)}{\sqrt{3}a_{ 0}|\gamma_{3}|\sqrt{1-\left(\frac{2\Delta}{\hbar\omega}\right)^{2}}}\theta(2 \sqrt{\Delta^{2}+\gamma_{1}^{2}}-\hbar\omega)\,. \tag{105}\]
Then we get
\[\begin{pmatrix}\mathcal{J}_{31}(\omega)\\ \mathcal{J}_{42}(\omega)\end{pmatrix}=\frac{8\pi}{3a_{0}^{2}\gamma_{3}^{2}\sqrt{1- \left(\frac{2\Delta}{\hbar\omega}\right)^{2}}}\left|\gamma_{1}-\sqrt{\left( \frac{\hbar\omega}{2}\right)^{2}-\Delta^{2}}\right|\begin{pmatrix}\theta(\hbar \omega-2\sqrt{\Delta^{2}+\gamma_{1}^{2}})\\ \theta(2\sqrt{\Delta^{2}+\gamma_{1}^{2}}-\hbar\omega)\end{pmatrix}\,, \tag{14}\] \[\begin{pmatrix}\mathcal{F}_{31}(\omega)\\ \mathcal{F}_{42}(\omega)\end{pmatrix}=\frac{3a_{0}^{2}\gamma_{3}^{2}}{8} \begin{pmatrix}\mathcal{J}_{31}(\omega)\\ \mathcal{J}_{42}(\omega)\end{pmatrix}\,,\] (15) \[\begin{pmatrix}\mathcal{Q}_{31}(\omega)\\ \mathcal{Q}_{42}(\omega)\end{pmatrix}=-\frac{\pi}{\sqrt{1-\left(\frac{2 \Delta}{\hbar\omega}\right)^{2}}}\begin{pmatrix}\theta(\hbar\omega-2\sqrt{ \Delta^{2}+\gamma_{1}^{2}})\\ \theta(2\sqrt{\Delta^{2}+\gamma_{1}^{2}}-\hbar\omega)\end{pmatrix}\,, \tag{16}\]
where two Dirac points have been counted in the integration. In this approximation, the expressions for \(\eta^{xxx}\), \(\sigma^{xxx}\), and \(\sigma^{zzz}\) become
\[\eta^{xxx}(\omega)= \frac{e^{3}c|\Delta|\sqrt{1-\left(\frac{2\Delta}{\hbar\omega} \right)^{2}}}{\hbar^{2}(\hbar\omega)^{2}}\left|\gamma_{1}-\sqrt{\left(\frac{ \hbar\omega}{2}\right)^{2}-\Delta^{2}}\right|\left(\mathcal{M}_{31}(\omega)+ \mathcal{M}_{42}(\omega)\right)\,, \tag{17}\] \[\sigma^{xxx}(\omega)= \frac{e^{3}c|\Delta|\left(\hbar^{2}\omega^{2}-2\Delta^{2}\right) }{2\hbar(\hbar\omega)^{4}\sqrt{1-\left(\frac{2\Delta}{\hbar\omega}\right)^{2} }}\left|\sqrt{1-\left(\frac{2\Delta}{\hbar\omega}\right)^{2}}-\frac{2\gamma_{ 1}}{\hbar\omega}\right|\left(\mathcal{M}_{31}(\omega)+\mathcal{M}_{42}(\omega)\right)\] \[-\frac{ce^{3}|\Delta|}{4\hbar(\hbar\omega)^{2}}\left(\mathcal{M}_ {31}(\omega)-\mathcal{M}_{42}(\omega)\right)\,,\] (18) \[\sigma^{zzz}(\omega)= \frac{e^{3}c^{3}|\Delta|\sqrt{1-\left(\frac{2\Delta}{\hbar \omega}\right)^{2}}}{3\hbar(a_{0}\gamma_{3})^{2}}\left|\sqrt{1-\left(\frac{2 \Delta}{\hbar\omega}\right)^{2}}-\frac{2\gamma_{1}}{\hbar\omega}\right|\left( \mathcal{M}_{31}(\omega)+\mathcal{M}_{42}(\omega)\right)\,, \tag{19}\]
respectively, with
\[\begin{pmatrix}\mathcal{M}_{31}(\omega)\\ \mathcal{M}_{42}(\omega)\end{pmatrix}=\begin{pmatrix}f_{13\boldsymbol{k}}|_{| g_{\boldsymbol{k}}|=G_{31}}\theta(\hbar\omega-2\sqrt{\Delta^{2}+\gamma_{1}^{2}}) \\ f_{24\boldsymbol{k}}|_{|g_{\boldsymbol{k}}|=G_{42}}\theta(2\sqrt{\Delta^{2}+ \gamma_{1}^{2}}-\hbar\omega)\end{pmatrix}\,. \tag{20}\]
Through the Taylor expansion, the above expressions around frequency \(2\sqrt{\Delta^{2}+\gamma_{1}^{2}}\) can be
approximated as
\[\eta^{zzx}(\omega)\approx \frac{ce^{3}|\Delta|\left|2\sqrt{\gamma_{1}^{2}+\Delta^{2}}-\hbar \omega\right|}{8\hbar^{2}(\gamma_{1}^{2}+\Delta^{2})}\left(\mathcal{M}_{31}( \omega)+\mathcal{M}_{42}(\omega)\right)\,, \tag{100}\] \[\sigma^{zzz}(\omega)\approx \frac{ce^{3}|\Delta|(2\gamma_{1}^{2}+\Delta^{2})\left|2\sqrt{ \gamma_{1}^{2}+\Delta^{2}}-\hbar\omega\right|}{32\hbar\gamma_{1}^{2}\sqrt{ \gamma_{1}^{2}+\Delta^{2}}^{3}}\left(\mathcal{M}_{31}(\omega)+\mathcal{M}_{42 }(\omega)\right)\] \[-\frac{ce^{3}|\Delta|}{16\hbar(\gamma_{1}^{2}+\Delta^{2})}\left( \mathcal{M}_{31}(\omega)-\mathcal{M}_{42}(\omega)\right)\,,\] (101) \[\sigma^{zzz}(\omega)\approx \frac{ce^{3}|\Delta|\left|2\sqrt{\gamma_{1}^{2}+\Delta^{2}}- \hbar\omega\right|}{6\hbar a_{0}^{2}\gamma_{3}^{2}(\gamma_{1}^{2}+\Delta^{2}) }\left(\mathcal{M}_{31}(\omega)+\mathcal{M}_{42}(\omega)\right)\,. \tag{102}\]
## Appendix C Eigenenergies of AB-BG
The eigenenergies \(\epsilon\) satisfy the equation
\[|H_{\mathbf{k}}^{\rm AB}-\epsilon|=0\,, \tag{103}\]
or
\[\epsilon^{4}+x_{2}\epsilon^{2}+x_{1}\epsilon+x_{0}=0\,, \tag{104}\]
with
\[x_{2}= -{\gamma_{1}^{\prime}}^{2}-\left(2{\gamma_{0}^{\prime}}^{2}+{ \gamma_{3}^{\prime}}^{2}+2{\gamma_{4}^{\prime}}^{2}\right)|g_{\mathbf{k}}|^{2}-2 \left[\Delta^{2}+\left(\frac{\Delta^{\prime}}{2}\right)^{2}\right]\,, \tag{105}\] \[x_{1}= -4{\gamma_{0}^{\prime}}{\gamma_{4}^{\prime}}\left({\gamma_{1}^{ \prime}}|g_{\mathbf{k}}|^{2}+{\gamma_{3}^{\prime}}^{2}{\rm Re}\left[g_{\mathbf{k}}^{3} \right]\right)+\Delta^{\prime}\left({\gamma_{3}^{\prime}}^{2}|g_{\mathbf{k}}|^{2}-{ \gamma_{1}^{\prime}}^{2}\right)\,,\] (106) \[x_{0}= \left({\gamma_{0}^{\prime}}^{2}-{\gamma_{4}^{\prime}}^{2}\right) ^{2}|g_{\mathbf{k}}|^{4}-2{\gamma_{3}^{\prime}}\left[{\gamma_{1}^{\prime}}\left({ \gamma_{0}^{\prime}}^{2}+{\gamma_{4}^{\prime}}^{2}\right)-{\gamma_{0}^{\prime }}{\gamma_{4}^{\prime}}\Delta^{\prime}\right]{\rm Re}[g_{\mathbf{k}}^{3}]\] \[+\left[\Delta^{2}-\left(\frac{\Delta^{\prime}}{2}\right)^{2} \right]\left[{\gamma_{1}^{\prime}}^{2}+\Delta^{2}-\left(\frac{\Delta^{\prime}} {2}\right)^{2}\right]\,. \tag{107}\]
Then the analytic expressions of the eigenenergies are
\[\epsilon_{n\mathbf{k}}=\frac{1}{2}\left[\alpha_{n}\sqrt{-2x_{2}-\beta_{n}\frac{2x _{1}}{\sqrt{y}}-y}+\beta_{n}\sqrt{y}\right]\,,\quad\text{ for }n=1,2,3,4\,. \tag{108}\]
with
\[y = \frac{1}{6}\left[4^{\frac{1}{3}}\left(y_{1}+\sqrt{y_{1}^{2}-4y_{2}^{ 3}}\right)^{\frac{1}{3}}+\frac{4^{\frac{2}{3}}y_{2}}{\left(y_{1}+\sqrt{y_{1}^{2 }-4y_{2}^{3}}\right)^{\frac{1}{3}}}-4x_{2}\right]\,, \tag{100}\] \[y_{1} = 2x_{2}^{3}+27x_{1}^{2}-72x_{2}x_{0}\,,\] (101) \[y_{2} = x_{2}^{2}+12x_{0}\,. \tag{102}\]
At the Dirac points with \(g_{\mathbf{k}}=0\), the four eigenenergies are \(\pm\Delta-\frac{\Delta^{\prime}}{2}\), \(\pm\sqrt{\Delta^{2}+\gamma_{1}^{2}}+\frac{\Delta^{\prime}}{2}\).
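As a short check of these values (ours, added for clarity), setting \(g_{\mathbf{k}}=0\) in the coefficients above gives \(x_{1}=-\Delta^{\prime}{\gamma_{1}^{\prime}}^{2}\), and the quartic factorizes as

\[\epsilon^{4}+x_{2}\epsilon^{2}+x_{1}\epsilon+x_{0}=\left[\left(\epsilon+\frac{\Delta^{\prime}}{2}\right)^{2}-\Delta^{2}\right]\left[\left(\epsilon-\frac{\Delta^{\prime}}{2}\right)^{2}-\Delta^{2}-{\gamma_{1}^{\prime}}^{2}\right]\,,\]

whose roots are exactly \(\pm\Delta-\frac{\Delta^{\prime}}{2}\) and \(\pm\sqrt{\Delta^{2}+{\gamma_{1}^{\prime}}^{2}}+\frac{\Delta^{\prime}}{2}\) (with \(\gamma_{1}\) in the line above understood as \(\gamma_{1}^{\prime}\)).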
In general, the electron-hole symmetry for AB-BG is broken due to the nonzero values of \(\gamma_{4}^{\prime}\) and \(\Delta^{\prime}\). However, we find that \(\gamma_{4}^{\prime}\) and \(\Delta^{\prime}\) have negligible effects on the optical transition between the bands (2, 3). Setting \(\gamma_{4}^{\prime}=0\) and \(\Delta^{\prime}=0\), the eigenvalues become
\[\epsilon_{n\mathbf{k}}=\alpha_{n}\frac{1}{\sqrt{2}}\sqrt{z_{1}+\alpha_{n}\beta_{n }\sqrt{z_{2}}}\,, \tag{103}\]
with
\[z_{1} = {\gamma_{1}^{\prime}}^{2}+2\Delta^{2}+\left(2{\gamma_{0}^{ \prime}}^{2}+{\gamma_{3}^{\prime}}^{2}\right)|g_{\mathbf{k}}|^{2}\,, \tag{104}\] \[z_{2} = 4{\gamma_{0}^{\prime}}^{2}\left[{\gamma_{3}^{\prime}}^{2}|g_{ \mathbf{k}}|^{4}+2\gamma_{1}^{\prime}\gamma_{3}^{\prime}\text{Re}[g_{\mathbf{k}}^{3}] +({\gamma_{1}^{\prime}}^{2}+4\Delta^{2})|g_{\mathbf{k}}|^{2}\right]+\left({\gamma_ {3}^{\prime}}^{2}|g_{\mathbf{k}}|^{2}-{\gamma_{1}^{\prime}}^{2}\right)^{2}\,. \tag{105}\]
Obviously, the electronic states become electron-hole symmetric. Using Eq. (103), we can analytically discuss the band gap \(E_{g}\) and the VHS for \(\mathcal{J}_{32}\). Around the Dirac point \(\mathbf{K}\), the approximation \(g_{\mathbf{k}+\mathbf{K}}=-re^{i\theta}\) can be adopted for \(\mathbf{k}=\frac{2r}{\sqrt{3}a_{0}}(\cos\theta\mathbf{\hat{x}}+\sin\theta\mathbf{\hat{y}})\). For zero \(\Delta\), the zero energy of \(\epsilon_{3\mathbf{k}}\) can be directly found from Eq. (103) at \(r=0\) or \(r=r_{0}=-\frac{\gamma_{1}^{\prime}\gamma_{3}^{\prime}}{{\gamma_{0}^{\prime}}^{2}}\) and \(\theta=(2n+1)\pi/3\). Therefore, there exist in total four degenerate zero-energy points in one Dirac cone at \(\Delta=0\); one is at this Dirac point, and the other three are located along the K-M directions. Furthermore, for small \(r\), \(\epsilon_{3\mathbf{k}}\) can be approximated by
\[\epsilon_{3\mathbf{k}}^{2}=\Delta^{2}+c_{2}r^{2}+c_{3}\cos(3\theta)r^{3}+c_{4}r^{ 4}\,, \tag{106}\]
with
\[c_{2} = {\gamma_{3}^{\prime}}^{2}-\frac{4{\gamma_{0}^{\prime}}^{2}\Delta ^{2}}{{\gamma_{1}^{\prime}}^{2}}\,, \tag{107}\] \[c_{3} = -\frac{2{\gamma_{0}^{\prime}}^{2}\gamma_{3}^{\prime}}{\gamma_{1}^ {\prime}}\,,\] (108) \[c_{4} = \frac{{\gamma_{0}^{\prime}}^{2}}{{\gamma_{1}^{\prime}}^{2}}\left[ {\gamma_{0}^{\prime}}^{2}-2{\gamma_{3}^{\prime}}^{2}+\frac{4\Delta^{2}(2{ \gamma_{0}^{\prime}}^{2}-{\gamma_{3}^{\prime}}^{2})}{{\gamma_{1}^{\prime}}^{2 }}+\frac{16{\gamma_{0}^{\prime}}^{2}\Delta^{4}}{{\gamma_{1}^{\prime}}^{4}} \right]\,. \tag{109}\]
From Eq. (106) the band structure around the Dirac points has following features:
1. For nonzero \(\Delta\), the energy \(\epsilon_{3\mathbf{k}}\) at the Dirac point \(\mathbf{K}\) is an extreme, and it is a local minimum (maximum) as \(c_{2}>0\) (\(c_{2}<0\)), which corresponds to \(|\Delta|<\Delta_{c}\) (\(|\Delta|>\Delta_{c}\)) with \(\Delta_{c}=|\gamma_{3}^{\prime}\gamma_{1}^{\prime}/(2\gamma_{0}^{\prime})|=0.0229\) eV.
2. We first look at the case \(|\Delta|>\Delta_{c}\) (\(c_{2}<0\)). For a fixed \(\theta\), \(\epsilon_{3\mathbf{k}}\) around the Dirac point \(\mathbf{K}\) has one more local minimum located at \(r=r_{e}(\cos 3\theta)\) with \[r_{e}(\cos 3\theta)=\frac{-3c_{3}\cos 3\theta+\sqrt{9c_{3}^{2}\cos^{2}3 \theta-32c_{2}c_{4}}}{8c_{4}}\,.\] (106) When \(r\) is fixed and \(\theta\) varies, \(\epsilon_{3\mathbf{k}}\) has local maxima as \(\cos 3\theta=1\) and local minima as \(\cos 3\theta=-1\). When both \(r\) and \(\theta\) are considered, there exists a minimum at \(r=r_{e}(-1)\) and \(\theta=(2n+1)\pi/3\) (along the K-\(\Gamma\) directions for integer \(n\)), and a VHS point at \(r=r_{e}(1)\) and \(\theta=2n\pi/3\) (along the K-M directions).
3. For the case \(|\Delta|<\Delta_{c}\) (\(c_{2}>0\)), \(\epsilon_{3\mathbf{k}}\) has no VHS point around the Dirac points but the minimum along K-\(\Gamma\) directions still exists.
4. Similar analysis can be applied to study the JDOS \(\mathcal{J}_{42}=\mathcal{J}_{31}\). After ignoring \(\gamma_{4}^{\prime}\) and \(\Delta^{\prime}\), \(\epsilon_{4\mathbf{k}}-\epsilon_{2\mathbf{k}}\) has a local minimum at the \(\mathbf{K}\) point, and there is no VHS in \(\mathcal{J}_{42}\). Therefore, \(\gamma_{4}^{\prime}\) and \(\Delta^{\prime}\) play a key role in forming a VHS in \(\mathcal{J}_{42}\).
|
2303.01859 | POPGym: Benchmarking Partially Observable Reinforcement Learning | Real world applications of Reinforcement Learning (RL) are often partially
observable, thus requiring memory. Despite this, partial observability is still
largely ignored by contemporary RL benchmarks and libraries. We introduce
Partially Observable Process Gym (POPGym), a two-part library containing (1) a
diverse collection of 15 partially observable environments, each with multiple
difficulties and (2) implementations of 13 memory model baselines -- the most
in a single RL library. Existing partially observable benchmarks tend to fixate
on 3D visual navigation, which is computationally expensive and only one type
of POMDP. In contrast, POPGym environments are diverse, produce smaller
observations, use less memory, and often converge within two hours of training
on a consumer-grade GPU. We implement our high-level memory API and memory
baselines on top of the popular RLlib framework, providing plug-and-play
compatibility with various training algorithms, exploration strategies, and
distributed training paradigms. Using POPGym, we execute the largest comparison
across RL memory models to date. POPGym is available at
https://github.com/proroklab/popgym. | Steven Morad, Ryan Kortvelesy, Matteo Bettini, Stephan Liwicki, Amanda Prorok | 2023-03-03T11:25:33Z | http://arxiv.org/abs/2303.01859v1 | # POPGym: Benchmarking Partially Observable Reinforcement Learning
###### Abstract
Real world applications of Reinforcement Learning (RL) are often partially observable, thus requiring memory. Despite this, partial observability is still largely ignored by contemporary RL benchmarks and libraries. We introduce Partially Observable Process Gym (POPGym), a two-part library containing (1) a diverse collection of 15 partially observable environments, each with multiple difficulties and (2) implementations of 13 memory model baselines - the most in a single RL library. Existing partially observable benchmarks tend to fixate on 3D visual navigation, which is computationally expensive and only one type of POMDP. In contrast, POPGym environments are diverse, produce smaller observations, use less memory, and often converge within two hours of training on a consumer-grade GPU. We implement our high-level memory API and memory baselines on top of the popular RLlib framework, providing plug-and-play compatibility with various training algorithms, exploration strategies, and distributed training paradigms. Using POPGym, we execute the largest comparison across RL memory models to date. POPGym is available at [https://github.com/proroklab/popgym](https://github.com/proroklab/popgym).
## 1 Introduction
Datasets like MNIST (Lecun et al., 1998) have driven advances in Machine Learning (ML) as much as new architectural designs (Levine et al., 2020). Comprehensive datasets are paramount in assessing the progress of learning algorithms and highlighting shortcomings of current methodologies. This is evident in the context of RL, where the absence of fast and comprehensive benchmarks resulted in a reproducibility crisis (Henderson et al., 2018). Large collections of diverse environments, like the Arcade Learning Environment, OpenAI Gym, ProcGen, and DMLab provide a reliable measure of progress in deep RL. These fundamental benchmarks play a role in RL equivalent to that of MNIST in supervised learning (SL).
The vast majority of today's RL benchmarks are designed around Markov Decision Processes (MDPs). In MDPs, agents observe a _Markov state_, which contains all necessary information to solve the task at hand. When the observations are Markov states, the Markov property is satisfied, and traditional RL approaches guarantee convergence to an optimal policy (Sutton and Barto, 2018, Chapter 3). But in many RL applications, observations are ambiguous, incomplete, or noisy - any of which makes the MDP _partially observable_ (POMDP) (Kaelbling et al., 1998), breaking the Markov property and all convergence guarantees. Furthermore, Ghosh et al. (2021) find that policies trained under the ideal MDP framework cannot generalize to real-world conditions when deployed, with epistemic uncertainty turning real-world MDPs into POMDPs. By introducing _memory_ (referred to as sequence to sequence models in SL), we can summarize the observations1 therefore restoring policy convergence guarantees for POMDPs (Sutton and Barto, 2018, Chapter 17.3).
Footnote 1: Strictly speaking, the agent actions are also required to guarantee convergence. We consider the previous action as part of the current observation.
Despite the importance of memory in RL, most of today's comprehensive benchmarks are fully observable or near-fully observable. Existing partially observable benchmarks are often navigation-based - representing only spatial POMDPs, and ignoring applications like policymaking, disease diagnosis, teaching, and ecology (Cassandra, 1998). The state of memory-based models in RL libraries is even more dire - we are not aware of any RL libraries that
provide more than three or four distinct memory baselines. In nearly all cases, these memory models are limited to frame stacking and LSTM.
To date, there are no popular RL libraries that provide a diverse selection of memory models. Of the few existing POMDP benchmarks, even fewer are comprehensive and diverse. As a consequence, there are no large-scale studies comparing memory models in RL. We propose to fill these three gaps with our proposed POPGym.
### Contributions
POPGym is a collection of 15 partially observable gym environments (Figure 1) and 13 memory baselines. All environments come with at least three difficulty settings and randomly generate levels to prevent overfitting. The POPGym environments use low-dimensional observations, making them fast and memory efficient. Many of our baseline models converge in under two hours of training on a single consumer-grade GPU ( Table 1, Figure 2). The POPGym memory baselines utilize a simple API built on top of the popular RLlib library (Liang et al., 2018), seamlessly integrating memory models with an assortment of RL algorithms, sampling, exploration strategies, logging frameworks, and distributed training methodologies. Utilizing POPGym and its memory baselines, we execute a large-scale evaluation, analyzing the capabilities of memory models on a wide range of tasks. To summarize, we contribute:
1. A comprehensive collection of diverse POMDP tasks.
2. The largest collection of memory baseline implementations in an RL library.
3. A large-scale, principled comparison across memory models.
## 2 Related Work
There are many existing RL benchmarks, which we categorize as fully (or near-fully) observable and partially observable. In near-fully observable environments, large portions of the Markov state are visible in an observation, though some information may be missing. We limit our literature review to _comprehensive_ benchmarks (those that contain a wide set of tasks), as environment diversity is essential for the accurate evaluation of RL agents (Cobbe et al., 2020).
### Fully and Near-Fully Observable Benchmarks
The Arcade Learning Environment (ALE) (Bellemare et al., 2013) wraps Atari 2600 ROMs in a Python interface. Most of the Atari games, such as Space Invaders or Missile Command are fully observable (Cobbe et al., 2020). Some games like asteroids require velocity observations, but models can recover full observability by stacking four consecutive observations (Mnih et al., 2015), an approach that does not scale for longer timespans. Even seemingly partially-observable multi-room games like Montezuma's Revenge are made near-fully observable by displaying the player's score and inventory (Burda et al., 2022).
OpenAI Gym (Brockman et al., 2016) came after ALE, implementing classic fully observable RL benchmarks like CartPole and MountainCar. Their Gym API found use in many other environments, including our proposed benchmark.
Figure 1: Renders from select POPGym environments.
Cobbe et al. (2020) find that randomly generated environments are critical to training general agents, showing policies will overfit to specific levels otherwise. They propose ProcGen: 16 procedurally generated environments with pixel-space observations. Most environments are fully or near-fully observable, although a few environments provide a partially observable mode, effectively turning them into 2D area coverage (navigation) tasks. ProcGen motivates POPGym's use of random level generation.
### Partially Observable Benchmarks
When enumerating partially observable benchmarks, we find many are based on 3D first-person navigation. DeepMind Lab (Beattie et al., 2016) (DMLab) is a 3D first-person view navigation simulator based on the Quake 3 physics engine. It implements various tasks such as collecting fruits, maze exploration, and laser tag. VizDoom (Kempka et al., 2016) is another 3D navigation simulator based on the PC game Doom. It gives the agent weapons and adds computer-controlled characters that can shoot at the player. Miniworld (Chevalier-Boisvert, 2018) is a third 3D first-person view navigation simulator that is easier to install than DMLab or VizDoom. MiniGrid (Chevalier-Boisvert et al., 2018) and GridVerse (Baisero and Katt, 2021) are 2D navigation simulators with a first-person view. Unlike the previously mentioned 3D simulators, agents converge on gridworld environments much faster due to the smaller observation space. This makes it a popular benchmark for memory models.
There are few POMDP libraries that provide tasks beyond navigation. Behaviour suite (BSuite) evaluates agents on a variety of axes, one of which is memory (Osband et al., 2020), but they only provide two POMDPs. Similar to our benchmark, (Zheng and Tellex, 2020) provide classic POMDPs with low-dimensional observation spaces. But their tasks are solvable without neural networks and are not difficult enough for modern deep RL. Ni et al. (2022) provide 21 environments, most of which are a special case of POMDP known as _latent MDPs_(Kwon et al., 2021), where a specific MDP is chosen from a set of possible MDPs at the beginning of an episode. (Morad et al., 2022) provides three POMDPs, which is insufficient for a benchmark.
We briefly mention the Starcraft (Samvelyan et al., 2019) and VMAS (Bettini et al., 2022) benchmarks because multi-agent environments are intrinsically partially observable, but we focus specifically on single-agent POMDPs.
### Shortcomings of Current Benchmarks
Popular fully observable benchmarks use pixel-based observation spaces, adding a layer of complexity that takes an order of magnitude longer to train when compared against state-based observation counterparts (Seita, 2020). In fully observable environments, visually pleasing results are worth a few extra hours training. This dogma persists into partial observability, where environments often take 10x longer to converge than their fully observable counterparts. Popular benchmarks using 3D graphics take hundreds of billions of timesteps (Parisotto et al., 2020) and multiple weeks (Morad et al., 2021) on a GPU to train a single policy to convergence. Until sample efficiency in partially observable RL improves, we must forgo pixel-based observations or continue to struggle with reproducibility.
Many partially observable tasks with pixel-based observation spaces are based on some form of navigation (Ramani, 2019). Although navigation can be a partially observable task, wall following behavior in perfect mazes guarantees complete area coverage without the need for memory. When mazes are imperfect (i.e. contain cycles), deterministic wall following can get stuck in infinite loops. However, RL policies often have some amount of stochasticity that can break out of these loops. Kadian et al. (2020) and Morad et al. (2021) inadvertently show that memory-free navigation agents learn wall following strategies2 that are surprisingly effective in imperfect real-world mazes. We confirm these findings with our experiments, showing that memory-free agents are competitive with memory-endowed agents in certain navigation benchmarks.
Footnote 2: [https://en.wikipedia.org/wiki/Maze-solving_algorithm#Wall_follower](https://en.wikipedia.org/wiki/Maze-solving_algorithm#Wall_follower)
All other (imperfect) mazes can be fully explored by storing no more than two past locations (observations) in memory (Blum and Kozen, 1978). Navigation-based tasks like area coverage, moving to a coordinate, or searching for items can be reduced to the maze exploration task. We do not claim
that navigation tasks are easy, but rather that it is important to have a variety of tasks to ensure we evaluate all facets of memory, such as _memory capacity_, that navigation tasks might miss.
### Existing Memory Baselines
The state of memory models in RL is even more bleak than the benchmarks. Most libraries provide frame stacking and a single type of RNN. OpenAI Baselines (Dhariwal et al., 2017), Stable-Baselines3 (Raffin et al., 2021), and CleanRL (Huang et al., 2021) provide implementations of PPO with frame stacking and an LSTM. Ray RLlib (Liang et al., 2018) currently implements frame stacking, LSTM, and a transformer for some algorithms. Ni et al. (2022) implement LSTM, GRUs, and two model-based memory models. Yang & Nguyen (2021) provides recurrent versions of the DDPG, TD3, and SAC RL algorithms, which utilize GRUs and LSTM. Zheng & Tellex (2020) implement multiple classical POMDP solvers, but these do not use neural networks, preventing their application to more complex tasks. There is currently no go-to library for users who want to compare or apply non-standard memory models.
### A Brief Summary on Memory
When designing a library of memory models, it is important to select competitive models. Ni et al. (2022) show that sequence to sequence models from SL are competitive or better than RL-specific memory methods while being more straightforward to implement, so we focus specifically on sequence to sequence models (called memory throughout the paper). Although a strict categorization of memory is elusive, most methods are based on RNNs, attention, or convolution.
RNNs (Elman, 1990) take an input and hidden state, feed them through a network, and produce a corresponding output and hidden state. RNNs depend on the previous state and must be executed sequentially, resulting in slow training but fast inference when compared with other methods.
Attention-based methods (Vaswani et al., 2017) have supplanted RNNs in many applications of SL, but traditional transformers have quadratically-scaling memory requirements, preventing them from running on long episodes in RL. Recent linear attention formulations (Schlag et al., 2021; Katharopoulos et al., 2020) claim to produce transformer-level performance in linear time and space, potentially enabling widespread use of attention in RL.
Like attention, convolutional methods are computationally efficient (Bai et al., 2018), lending themselves well to RL. They are less common than recurrent or attention-based methods in SL, and there is little literature on their use in RL.
## 3 POPGym Environments
All of our environments bound the cumulative episodic reward in \([-1,1]\). In some cases (e.g. repeating previous observations) an optimal policy would receive a cumulative reward of one in expectation. In other environments (e.g. playing battleship with randomly placed ships), an optimal policy has an expected episodic cumulative reward of less than one.
We tag our proposed environments as _diagnostic_, _control_, _noisy_, _game_, and _navigation_. Each tag is designed to represent a different class of POMDP, and each environment has at least three distinct difficulty settings, creating the most diverse POMDP benchmark thus far. Our proposed environments are all _overcomplete_ POMDPs, meaning our environments have more unique latent Markov states than unique observations (Sharan et al., 2017; Jin et al., 2020).
**Diagnostic** environments probe model capabilities with respect to the duration of memories, forgetting, and compression and recall. They are designed to quickly summarize the strengths and weaknesses of a specific model.
**Control** environments are control RL environments made partially observable by removing part of the observation. Solving these tasks only requires short-term memory.
**Noisy** environments require the memory model to estimate the true underlying state by computing an expectation over many observations. These are especially useful for real-world robotics tasks.
**Game** environments provide a more natural and thorough evaluation of memory through card and board games. They stress test memory capacity, duration, and higher-level reasoning.
**Navigation** environments are common in other benchmarks, and we include a few to ensure our benchmark is comprehensive. More than anything, our navigation environments examine how memory fares over very long sequences.
### Environment Descriptions
1. **Repeat First (Diagnostic):** At the first timestep, the agent receives one of four values and a remember indicator. Then it randomly receives one of the four values at each successive timestep without the remember indicator. The agent receives a reward for outputting (remembering) the first value.
2. **Repeat Previous (Diagnostic):** Like Repeat First, observations contain one of four values. The agent is rewarded for outputting the observation from some constant \(k\) timesteps ago, i.e. observation \(o_{t-k}\) at time \(t\). (A minimal sketch of this task appears after this list.)
3. **Autoencode (Diagnostic):** During the WATCH phase, a deck of cards is shuffled and played in sequence to the agent with the watch indicator set. The watch indicator is unset at the last card in the sequence, where the agent must then output the sequence of cards in order. This tests whether the agent can encode a series of observations into a latent state, then decode the latent state one observation at a time.
4. **Stateless Cartpole (Control):** The cartpole environment from Barto et al. (1983), but with the angular and linear positions removed from the observation. The agent must integrate to compute positions from velocity.
5. **Stateless Pendulum (Control):** The swing-up pendulum (Doya, 1995), with the angular position information removed.
6. **Noisy Stateless Cartpole (Control, Noisy):** Stateless Cartpole (Env. 4) with Gaussian noise.
7. **Noisy Stateless Pendulum (Control, Noisy):** Stateless Pendulum (Env. 5) with Gaussian noise.
8. **Multiarmed Bandit (Noisy, Diagnostic):** The multiarmed bandit problem (Slivkins et al., 2019; Lattimore & Szepesvari, 2020) posed as an episodic task. Every episode, bandits are randomly initialized. Over the episode, the player must trade off exploration and exploitation, remembering which bandits pay best. Each bandit has some probability of paying out a positive reward, otherwise paying out a negative reward. Note that unlike the traditional multiarmed bandit task where the bandits are fixed once initialized, these bandits reset every episode, forcing the agent to learn a policy that can adapt between episodes.
9. **Higher Lower (Game, Noisy):** Based on the card game higher-lower, the agent must guess if the next card is of higher or lower rank than the previous card. The next card is then flipped face-up and becomes the previous card. Using memory, the agent can utilize card counting strategies to predict the expected value of the next card, improving the return.
10. **Count Recall (Game, Diagnostic, Noisy):** Each turn, the agent receives a next value and query value. The agent must answer the query with the number of occurrences of a specific value. In other words, the agent must store running counts of each unique observed value, and report a specific count back, based on the query value. This tests whether the agent can learn a compressed structured memory representation, such that it can continuously update portions of memory over a long sequence.
11. **Concentration (Game):** A deck of cards is shuffled and spread out face down. The player flips two cards at a time face up, receiving a reward if the flipped cards match. The agent must remember the value and position of previously flipped cards to improve the rate of successful matches.
12. **Battleship (Game):** A partially observable version of Battleship, where the agent has no access to the board and must derive its own internal representation. Observations contain either HIT or MISS and the position of the last salvo fired. The player receives a positive reward for striking a ship, zero reward for hitting water, and negative reward for firing on a specific tile more than once.
13. **Mine Sweeper (Game):** The computer game Mine Sweeper, but like our Battleship implementation, the agent does not have access to the board. Each observation contains the position and number of adjacent mines to the last square "clicked" by the agent. Clicking on a mined square will end the game and produce a negative reward. The agent must remember where it has already searched and must integrate information from nearby tiles to narrow down the location of mines. Once the agent has selected all non-mined squares, the game ends.
14. **Labyrinth Explore (Navigation):** The player is placed in a discrete, 2D procedurally-generated maze, receiving a reward for each previously unreached tile it reaches. The player can only observe adjacent tiles. The agent also receives a small negative reward at each timestep, encouraging the agent to reach all squares quickly and end the episode.
15. **Labyrinth Escape (Navigation):** The player must escape the procedurally-generated maze, using the same observation space as Labyrinth Explore. This is a sparse reward setting, where the player receives a positive reward only after solving the maze.
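To make the diagnostic tasks above concrete, the following is a minimal sketch of the Repeat Previous task (item 2). It is our own illustration, not the actual POPGym code: the class name, the gym-style reset/step interface, and the reward scaling are all assumptions chosen so that the episodic return stays in \([-1,1]\).

```python
import random


class RepeatPreviousSketch:
    """Agent must output the value it observed k timesteps ago."""

    def __init__(self, k=4, num_values=4, episode_length=52):
        self.k = k
        self.num_values = num_values
        self.episode_length = episode_length

    def reset(self):
        # Pre-generate the whole observation sequence for this episode.
        self.values = [random.randrange(self.num_values)
                       for _ in range(self.episode_length)]
        self.t = 0
        return self.values[0]

    def step(self, action):
        # The reward only applies once at least k observations have been seen.
        if self.t >= self.k:
            correct = (action == self.values[self.t - self.k])
            # Scale so that the cumulative episodic reward is bounded in [-1, 1].
            reward = (1.0 if correct else -1.0) / (self.episode_length - self.k)
        else:
            reward = 0.0
        self.t += 1
        done = self.t >= self.episode_length
        obs = self.values[self.t] if not done else 0
        return obs, reward, done, {}
```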
## 4 POPGym Baselines
Our memory model API relies on an abstract memory model class, only requiring users to implement memory_forward and initial_state methods. Our memory API builds on top of RLlib, exposing various algorithms, exploration methods, logging, distributed training, and more.
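To make this more concrete, here is a hedged sketch of what such an interface could look like in PyTorch. The method names `memory_forward` and `initial_state` come from the description above, but the signatures, class names, and the GRU example are our assumptions rather than the actual POPGym/RLlib code.

```python
import torch
import torch.nn as nn


class BaseMemory(nn.Module):
    """Abstract memory model: subclasses implement two methods."""

    def initial_state(self, batch_size: int) -> torch.Tensor:
        """Recurrent state used at the start of an episode."""
        raise NotImplementedError

    def memory_forward(self, x: torch.Tensor, state: torch.Tensor):
        """Map a [batch, time, features] window and a state to features and a new state."""
        raise NotImplementedError


class GRUMemory(BaseMemory):
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.hidden_size = hidden_size
        self.gru = nn.GRU(input_size, hidden_size, batch_first=True)

    def initial_state(self, batch_size: int) -> torch.Tensor:
        return torch.zeros(1, batch_size, self.hidden_size)

    def memory_forward(self, x, state):
        features, new_state = self.gru(x, state)
        return features, new_state
```

In such a design, swapping the memory baseline only changes how `memory_forward` updates its recurrent state; the surrounding RL algorithm, exploration strategy, and training loop stay the same.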
We collect well-known memory models from SL domains and wrap them in this API, enabling their use on RL tasks. We rewrite models where the existing implementation is slow, unreadable, not amenable to our API, or not written in Pytorch. Some of these sequence models have yet to be applied in the context of reinforcement learning.
1. **MLP:** An MLP that cannot remember anything. This serves to form a lower bound for memory performance, as well and ensuring memory models are actively using memory, rather than just leveraging their higher parameter counts.
2. **Positional MLP (PosMLP):** An MLP that can access the current episodic timestep. The current timestep is fed into the positional encoding from Vaswani et al. (2017), which is summed with the incoming features. PosMLP enables agents to learn time-dependent policies (those which evolve over the course of an episode) without explicitly using memory.
3. **Elman Networks:** The original RNN, from Elman (1990). Elman networks sum the recurrent state and input, passing the resulting vector through a linear layer and activation function to produce the next hidden state. Elman networks are not used much in SL nowadays due to vanishing and exploding gradients.
4. **Long Short-Term Memory (LSTM):** Hochreiter & Schmidhuber (1997) designed LSTM to address the vanishing and exploding gradient problems present in earlier RNNs like the Elman Network. LSTM utilizes a constant error carousel to handle longer dependencies and gating to ensure recurrent state stability during training. It has two recurrent states termed the hidden and cell states.
5. **Gated Recurrent Unit (GRU):** The GRU is a simplification of LSTM, using only a single recurrent state. The GRU appears to have similar performance to LSTM in many applications while using fewer parameters (Chung et al., 2014).
| Environment | Colab FPS | Laptop FPS |
| --- | --- | --- |
| Repeat First | 23,895 | 155,201 |
| Repeat Previous | 50,349 | 136,392 |
| Autoencode | 121,756 | 251,997 |
| Stateless Cartpole | 73,622 | 218,446 |
| Stateless Pendulum | 8,168 | 26,358 |
| Noisy Stateless Cartpole | 6,269 | 66,891 |
| Noisy Stateless Pendulum | 6,808 | 20,090 |
| Multiarmed Bandit | 48,751 | 469,325 |
| Battleship | 117,158 | 235,402 |
| Concentration | 47,515 | 157,217 |
| Higher Lower | 24,312 | 76,903 |
| Count Recall | 16,799 | 53,779 |
| Mine Sweeper | 8,434 | 32,003 |
| Labyrinth Escape | 1,399 | 41,122 |
| Labyrinth Explore | 1,374 | 30,611 |

Table 1: Frames per second (FPS) of our environments, computed on the Google Colab free tier and a Macbook Air (2020) laptop.
6. **Independently Recurrent Networks (IndRNN):** Stacking LSTM and GRU cells tends to provide few benefits when compared with traditional deep neural networks. IndRNNs update the recurrent state using elementwise connections rather than a dense layer, enabling much deeper RNNs and handling longer dependencies than the LSTM and GRU (Li et al., 2018). In our experiments, we utilize a 2-layer IndRNN.
7. **Differentiable Neural Computers (DNC):** Graves et al. (2016) introduce a new type of recurrent model using external memory. The DNC utilizes an RNN as a memory controller, reading and writing to external storage in a differentiable manner.
8. **Fast Autoregressive Transformers (FART):** Unlike the traditional attention matrix whose size scales with the number of inputs, FART computes a fixed-size attention matrix at each timestep, taking the cumulative elementwise sum over successive timesteps (Katharopoulos et al., 2020). FART maintains two recurrent states, one for the running attention matrix and one for a normalization term, which helps mitigate large values and exploding gradients as the attention grows over time. The original paper omits a positional encoding, but we find it necessary for our benchmark.
9. **Fast Weight Programmers (FWP):** The theory behind FART and FWP is different, but the implementation is relatively similar. FWP also maintains a running cumulative sum. Unlike FART, FWP normalizes the key and query vectors to sum to one, requiring only a single recurrent state and keeping the attention matrix of reasonable scale (Schlag et al., 2021). Unlike the original paper, we add a positional encoding to FWP.
10. **Frame Stacking (Fr.Stack):** Mnih et al. (2015) implemented frame stacking to solve Atari games. Frame stacking is the concatenation of \(k\) observations along the feature dimension. Frame stacking is not strictly convolutional, but is implemented similarly to other convolutional methods. Frame stacking is known to work very well in RL, but the number of parameters scales with the receptive field, preventing it from learning long-term dependencies. (A minimal sketch of frame stacking appears after this list.)
11. **Temporal Convolutional Networks (TCN):** TCNs slide 1D convolutional filters over the temporal dimension. On long sequences, they are faster and use less memory than RNNs. TCNs avoid the vanishing gradient problem present in RNNs because the gradient feeds through each sequence element individually, rather than propagating through the entire sequence (Bai et al., 2018).
12. **Legendre Memory Units (LMU):** LMUs are a mixture of convolution and RNNs. They apply Legendre polynomials across a sliding temporal window, feeding the results into an RNN hidden state (Voelker et al., 2019). LMUs can handle temporal dependencies spanning up to 100K timesteps.
13. **Diagonal State Space Models (S4D):** S4D treats memory as a controls problem. It learns a linear time invariant (LTI) state space model for the recurrent state. S4D applies a Vandermonde matrix to the sequence of inputs, which we can represent using either convolution or a recurrence. Computing the result convolutionally makes it very fast. In SL, S4D was able to solve the challenging 16,000 timestep Path-X task, demonstrating significant capacity for long-term dependencies (Gu et al., 2022).
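As referenced in item 10 above, frame stacking can be sketched in a few lines of PyTorch. This is an illustration of the idea, not the library's implementation; the zero padding before the episode start is an assumption.

```python
import torch


def stack_frames(obs_seq: torch.Tensor, k: int) -> torch.Tensor:
    """Concatenate the last k observations along the feature dimension.

    obs_seq has shape [batch, time, features]; the result has shape
    [batch, time, k * features], zero-padded before the episode start.
    """
    batch, time, feat = obs_seq.shape
    padded = torch.cat([obs_seq.new_zeros(batch, k - 1, feat), obs_seq], dim=1)
    frames = [padded[:, i:i + time, :] for i in range(k)]  # k time-shifted views
    return torch.cat(frames, dim=-1)
```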
Figure 2: Performance characteristics for POPGym memory baselines on random inputs. We use a recurrent state size of 256, a batch size of 64, and an episode length of 1024. We compute CPU statistics on a 3GHz Xeon Gold and GPU statistics on a 2080Ti, reporting the mean and 95% confidence interval over 10 trials. Train times correspond to a full batch while inference times are per-element (i.e. the latency to compute a single action). Note that GPU Train Time has logarithmic scale.
## 5 Experiments
Our memory framework hooks into RLlib, providing integration with IMPALA, DQN, and countless other algorithms. Due to computational constraints, we only execute our study on Proximal Policy Optimization (PPO) (Schulman et al., 2017). We tend to use conservative hyperparameters to aid in reproducibility - this entails large batch sizes, low learning rates, and many minibatch passes over every epoch. We run three trials of each model over three difficulties for each environment, resulting in over 1700 trials. We utilize the _max-mean episodic reward_ (MMER) in many plots. We compute MMER by finding the mean episodic reward for each epoch, then taking the maximum over all epochs, resulting in a single MMER value for each trial. We present the full experimental parameters in Appendix A and detailed results for each environment and model in Appendix B. We provide a summary over models and tasks in Figure 3. Figure 2 reports model throughput and Table 1 provides environment throughput.
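For clarity, the MMER of a single trial can be computed as follows (an illustrative snippet, not the evaluation code used in the paper):

```python
import numpy as np


def mmer(episode_rewards_per_epoch):
    """episode_rewards_per_epoch: one array of episodic returns per epoch.

    MMER is the maximum over epochs of the per-epoch mean episodic reward.
    """
    return max(np.mean(rewards) for rewards in episode_rewards_per_epoch)


# Example: three epochs of episodic returns -> MMER is 0.4 (from the second epoch).
print(mmer([np.array([-0.2, 0.1]), np.array([0.3, 0.5]), np.array([0.2, 0.4])]))
```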
## 6 Discussion
In the following paragraphs, we pose some questions and findings made from the results of our large-scale study.
**Supervised learning is a bad proxy for RL.** Supervised learning experiments show that IndRNN, LMU, FART, S4D, DNC, and TCN surpass LSTM and GRUs by a wide margin (Li et al., 2018; Voelker et al., 2019; Katharopoulos et al., 2020; Gu et al., 2022; Graves et al., 2016; Bai et al., 2018). S4D is unstable and often crashed due to exploding weights, suggesting it is not suitable for RL out of the box and that further tuning may be required. Although linear attention methods like FWP and FART show significant improvements across a plethora of supervised learning tasks, they were some of the worst contenders in RL. Classical RNNs outperformed modern memory methods, even though RNNs have been thoroughly supplanted in SL (Figure 3). The underlying cause of the disconnect between RL and SL performance is unclear and warrants further investigation.
**Use GRUs for performance and Elman nets for efficiency.** Within traditional RNNs, there seems little reason to use LSTM, as GRUs are more efficient and perform better. Elman networks are largely ignored in modern SL and RL due to vanishing or exploding gradients, but these issues did not impact our training. We find Elman networks perform on-par with LSTM while exhibiting some of the best parameter and memory efficiency out of any model (Figure 2). Future work could investigate why Elman networks work so well in RL given their limitations, and distill these properties into memory models suited specifically for RL.
**Are maze navigation tasks sufficient for benchmarking memory?** Existing POMDP benchmarks focus primarily on navigation tasks. In our experiments, we show that the MLP received the highest score on almost all navigation tasks, beating all memory models (Figure 4). This is in line with our hypothesis from subsection 2.3, and raises doubts concerning previous models evaluated solely on navigation tasks. Does a novel memory method outperform baselines because of a better memory architecture, or just because it has more trainable parameters? Future work can bypass this scrutiny by including a diverse set of tasks beyond navigation, and by modifying simple navigation tasks to better leverage memory (e.g. positive reward for correctly answering "how many rooms are there in the house?").

Figure 3: (Left) A summary comparison of baselines aggregated over all environments. We normalize the MMER such that 0 denotes the worst trial and 1 denotes the best trial for a specific environment. We report the interquartile range (box), median (horizontal line), and mean (dot) normalized MMER over all trials. (Right) Single value scores for each model, produced by averaging the MMER over all POPGym environments. We also provide scores with navigation (Labyrinth) environments excluded; the reasoning is provided in the discussion section.
**Positional MLPs are an important baseline.** Masked control tasks turn MDPs into POMDPs by hiding the velocity or position portions of classic control problems, and are probably the second most popular type of POMDP in literature after navigation. The positional MLP performed notably better than the MLP, nearly solving the Stateless Cartpole masked control task on easy (Figure 4). This is entirely unexpected, as providing the current timestep to an MLP is insufficient to compute the position and underlying Markov state. Outside of masked control, the positional MLP regularly outperformed the MLP (Figure 3). Stateless policies that evolve over time could be an interesting topic for future work, and should be a standard baseline in future memory comparisons.
**Is PPO enough?** Memory models do not noticeably outperform the MLP in many game environments, such as Autoencode or Battleship, indicating that the memory is minimally effective in these tasks (Figure 4). All thirteen models converge to nearly the same reward, suggesting this could be due to issues with PPO rather than the memory models themselves. Future work could focus on designing new algorithms to solve these tasks. In parallel, research institutions with ample compute could ablate POPGym across other algorithms, such as Recurrent Replay Distributed DQN (Kapturowski et al., 2019).
## 7 Conclusion
We presented the POPGym benchmark, a collection of POMDPs and memory baselines designed to standardize RL in partially observable environments. We discovered a notable disconnect between memory performance in supervised and reinforcement learning, with older RNNs surpassing linear transformers and modern memory models. According to our experiments, the GRU is the best general-purpose memory model, with Elman networks providing the best tradeoff between performance and efficiency. We revealed shortcomings in prior benchmarks focused on control and navigation POMDPs, emphasizing the importance of numerous and diverse POMDPs for evaluating memory. There is still a great deal of work to be done towards solving POMDPs, and we hope POPGym provides some measure of progress along the way.
Figure 4: Selected results used in the discussion section. We standardize the MMER from \([-1,1]\) to \([0,1]\) for readability. The colored bars denote the mean and the black lines denote the 95% bootstrapped confidence interval. Full results across all environments are in Appendix B
## 8 Acknowledgements
Steven Morad and Stephan Liwicki gratefully acknowledge the support of Toshiba Europe Ltd. Ryan Kortevlesy was supported by Nokia Bell Labs through their donation for the Centre of Mobile, Wearable Systems and Augmented Intelligence to the University of Cambridge. |
2310.01961 | Soda: An Object-Oriented Functional Language for Specifying
Human-Centered Problems | We present Soda (Symbolic Objective Descriptive Analysis), a language that
helps to treat qualities and quantities in a natural way and greatly simplifies
the task of checking their correctness. We present key properties for the
language motivated by the design of a descriptive language to encode complex
requirements on computer systems, and we explain how these key properties must
be addressed to model these requirements with simple definitions. We give an
overview of a tool that helps to describe problems in an easy way that we
consider more transparent and less error-prone. | Julian Alfredo Mendez | 2023-10-03T11:12:51Z | http://arxiv.org/abs/2310.01961v1 | # Soda: An Object-Oriented Functional Language for Specifying Human-Centered Problems+
###### Abstract
We present Soda (Symbolic Objective Descriptive Analysis), a language that helps to treat qualities and quantities in a natural way and greatly simplifies the task of checking their correctness. We present key properties for the language motivated by the design of a descriptive language to encode complex requirements on computer systems, and we explain how these key properties must be addressed to model these requirements with simple definitions. We give an overview of a tool that helps to describe problems in an easy way that we consider more transparent and less error-prone.
Keywords: Responsible artificial intelligence · Functional languages · Object-oriented languages · Human-centered programming languages
## 1 Introduction
Understanding how artificial intelligence (AI) agents work can be challenging because AI algorithms are complex and their reasoning opaque. Although transparency is often seen as a requirement, realistically, it might not always be possible, for example, due to privacy or security concerns, whereas the need to ensure that a system operates within moral bounds remains. At the same time, validation and verification procedures highly depend on the specific contextual interpretations that have been employed to ground abstract principles (e.g., fairness or privacy) into the concrete functionalities of an agent [1].
Verification can be a difficult or unfeasible task, and even when achieved, its specifications can be difficult to understand. Verification is only as strong as the assumptions that underlie the specification, which means that specifying assumptions and analyzing these specifications is crucial for verification [7]. Given that AI mostly operates in environments that are at best partially known by the system designer, and that properties are often discussed at a high level of abstraction by stakeholders without a background in formal languages, specification languages need to be easily understood by different stakeholders.
From a technical point of view, unit tests are crucial in ensuring the quality of a piece of software [36]. Test-driven development (TDD), which is a software development process in which developers create test cases together with the main development,
has been shown to be more productive than other more conventional development techniques [12]. We propose going one step further by introducing theorems together with the code, which in turn becomes more reliable.
Our contribution is to present Soda 1 (Symbolic Objective Descriptive Analysis), a new descriptive language based on widely adopted concepts such as modeling with classes and functional definitions. In this context, by descriptive language, we mean a language based on descriptions rather than processes, and that seems closer to a specification language rather than an implementation language. The set of basic constructs suffices to model complex requirements, but is small enough to be immediately understood by a larger group of stakeholders.
Footnote 1: [https://julianmendez.github.io/soda](https://julianmendez.github.io/soda)
In this paper, we address the following research question which is composed of two parts:
**RQ**: Can we design a **descriptive language** to encode requirements on AI systems such that:
* **RQ.1:** the language or a fragment is **formally verifiable**, and
* **RQ.2:** it is easily integrated into **state-of-the-art technology**?
In the following sections, we show our approach to this question.
## 2 Language Description
In this section, we present key properties for the language motivated by **RQ**, and we explain how these key properties must be addressed to model requirements with simple definitions.
### Key Properties
One of the key properties of the language is that it should be used to formalize requirements in an intuitive way, and one of the most effective ways to do it is to use types. We want the language to be _statically typed_, because static typing simplifies type checking, which in turn helps prevent formalization errors. To avoid errors caused by side effects [21], we make variables _immutable_, which are closer to their standard mathematical interpretation.
We want to relate qualities and quantities to describe complex conditions. We expect to use the very same language to model a problem containing hierarchies, like a resource access monitoring agent, and a problem containing measures, like a price monitoring agent. To do this, the language has to be _expressive_ and _general_.
Standard computer languages include a considerable number of reserved words and basic types, which usually hinder understanding. More reserved words usually bring more combinations and nuances to their use, and to simplify its comprehension, it is convenient for the language to have a _small_ set of constructs.
Alongside the aforementioned properties, we want the language to be used to evaluate effectively whether properties hold. For that, the language should be _easy to prototype_, and its prototypes should be _human-level efficient_, which means efficient enough for its expected use.
### Main Constructs
The main constructs are presented to ensure the requirements. We have chosen constructs that look similar to those used in popular programming languages.
Since Soda is _statically typed_, it needs a type definition construct. Let us name it ':' (colon). The syntax is \(x:A\), meaning that \(x\) is of type \(A\).
Due to _immutability_, we want to define constants and functions, but not variables in the computer science sense. In the following, we mention variables in the mathematical sense when referring to lambda expressions. To define a constant or a function, we need a construct. Let us name it '=' (equals sign). The notation is
\[f(x_{1}:A_{1})\ldots(x_{n}:A_{n}):A=e\]
where \(f\) is the _function name_, each \(x_{i}\)\((1\leq i\leq n)\) is a parameter of type \(A_{i}\), and \(e\) is an expression of type \(A\). A function \(f\) without parameters is called a _constant_. A function can be called using named parameters with the ':=' symbol. For example, \(f(x:Int)(y:Int):Int\) can be invoked as \(f(x:=0)(y:=1)\).
Most current programming languages include lambda expressions, which are anonymous functions based on lambda calculus [4]. We have included lambda expressions in Soda because they are widely adopted. The notation
\[\texttt{lambda}\ x\longrightarrow f(x)\]
corresponds to \((\lambda x).f(x)\) or \(\lambda x\to f(x)\) in the literature. Since we work with typed lambda calculus, the type needs to be specified when it cannot be inferred, and we denote it by lambda \((x:A)\longrightarrow f(x)\).
Since the language is _expressive_ and _general_, it includes standard operations from mathematics, such as '+', '-', '*', '/' for arithmetic, and 'not', 'and', 'or' for logic, with the usual meaning. Logic functions are evaluated with lazy evaluation. Therefore, when \(a\) and \(b\) is evaluated, \(a\) is evaluated first, and if \(a\) is already false, \(b\) is not evaluated; analogously for or, if the first value is already true, the second is not evaluated. Lazy evaluation can also be used to compute functions that would otherwise be undefined, by placing defined conditions on the left to guard against undefined expressions on the right. If the computations have no side effects, the result of computing with or without lazy evaluation is exactly the same, but the time needed is not.
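As a small illustration of this guarding pattern (sketched here in Python purely for familiarity, since the mechanism is the same short-circuit evaluation described above, and the function name is ours), the defined condition on the left prevents the possibly undefined expression on the right from being evaluated:

```python
def first_is_positive(xs):
    # 'and' short-circuits: when the list is empty, the left operand is
    # False and xs[0] is never evaluated, so no error can occur.
    return len(xs) > 0 and xs[0] > 0

assert first_is_positive([3, 1]) is True
assert first_is_positive([]) is False
```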
Note that the language can define recursive functions over finite structures, as mainstream purely functional programming languages do. This gives an expressive power that is enough to model human-understandable constraints.
To define piecewise functions we have 'if-then-else' structures, with the notation
\[\texttt{if}\ b\ \ \texttt{then}\ e_{1}\ \ \texttt{else}\ e_{2}\]
where \(b\) is a Boolean expression, and \(e_{1}\) and \(e_{2}\) are expressions of the same type. The interpretation is standard, and the result is \(e_{1}\) if \(b\) is true, and \(e_{2}\) otherwise.
The pattern matching construct, called 'match-case', has the format:
\[\texttt{match}\ x\ \ \texttt{case}\ p_{1}\Longrightarrow e_{1}\ \ \ldots\ \ \texttt{case}\ p_{n} \Longrightarrow e_{n}\]
where \(x\) is a variable to match, \(p_{i}\) are patterns, and \(e_{i}\) are expressions, for \(1\leq i\leq n\). The type of the match structure is the most specific supertype of the \(e_{i}\) expressions. The \(p_{i}\) patterns could be of different types, but they should be constructors that possibly contain construction variables. In fact, we use pattern matching with extractors for object deconstruction, which can also be used for type checking. Although the if-then-else notation could be defined as a pattern matching structure, we decided to keep it because it is more concise and more universally recognizable.
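To make the intent of match-case concrete for readers of mainstream languages, the following sketch shows an analogous construct in Python 3.10+ (a hypothetical analogue for illustration only, not Soda code, and the class and function names are ours): each case names a constructor and binds its construction variables, and the runtime deconstructs the object accordingly.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Circle:
    radius: float

@dataclass(frozen=True)
class Rectangle:
    width: float
    height: float

def area(shape) -> float:
    # Each case matches a constructor and extracts its fields,
    # mirroring the match-case construct with extractors described above.
    match shape:
        case Circle(radius=r):
            return 3.141592653589793 * r * r
        case Rectangle(width=w, height=h):
            return w * h
        case _:
            raise ValueError("unknown shape")

assert area(Rectangle(width=2.0, height=3.0)) == 6.0
```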
We find it relevant to highlight that the constructs presented in this section are meant to encourage small functions, which improves readability [21], and to require the use of function names in intermediate computations, which creates accurate documentation for specifications.
### Types and Classes
Humans classify concepts into categories using features that help them describe and reason about those concepts. In Soda, we use types and classes to model objects that have attributes. They help us model ethical values like privacy and fairness, especially in relation to regulations.
There are differing views on whether it is convenient to have classes instantiated as objects (as in Java [13]), or whether it is better to have modules from which functions are imported (as in Haskell [34]). We compromise between these two options: the objects created with Soda classes are immutable, and classes work as namespaces for modules or specify how to retrieve attributes from objects.
We distinguish between a type and a class as is usual in the literature [2]: an object has a _type_, and the type describes what the object can do, but not how; in contrast, a _class_ provides the implementation for an object. As most designers and programmers are familiar with object-oriented programming, we choose to build classes with a construct called 'class'. We adhere to the Open-Closed Principle, where "software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification" [22]. Classes can be extended with the 'extends' construct
\[\texttt{class}\ A\ \ \texttt{extends}\ B_{1}\ \ldots\ B_{n}\qquad d_{1}\ \ \ldots\ \ d_{m}\qquad\texttt{end}\]
where class \(A\) extends classes \(B_{1},\ldots,B_{n}\), and \(d_{1},\ldots,d_{m}\) are constant or function definitions.
It is also possible to define _interfaces_, which are types that contain only declarations of constants and functions, without specifying what they contain. These declarations are in a block with the word 'abstract' as follows
\[\texttt{class}\ A\qquad\texttt{abstract}\ \ f_{1}\ \ldots\ f_{n}\qquad\texttt{end}\]
where each \(f_{i}\) is a constant or function declaration.
Each class has a default type constructor, which is named the same as the class with an extra underscore ('_') as suffix. The abstract elements in a class need to be given as parameters to instantiate a class. For example,
\[\texttt{class Pair}\quad\texttt{abstract}\quad\texttt{fst : Int}\quad\texttt{snd : Int}\quad\texttt{end}\]
can be instantiated with Pair_ (1) (2).
Since we want to be able to apply design patterns and the language is statically typed and object-oriented, we need a way to refer to the instance itself, and the construct is 'this'.
We allow type parameters for classes and functions to have polymorphism. To denote the type parameter, we use square brackets '[]'. In class declarations we specify that the parameter is of type Type. For example, a parameterized pair could be defined as
\[\texttt{class Pair [A : Type] [B : Type]}\quad\texttt{abstract}\quad\texttt{fst : A}\quad\texttt{snd : B}\quad\texttt{end}\]
and then instantiated as Pair_ [Int] [Int] (1) (2).
In addition, it is possible to declare upper and lower type bounds. An upper type bound is denoted with the word 'subtype' or '<:' as found in the literature, and a lower type bound with the word 'supertype' or '>:'. For example
class \(C\) [\(A\) subtype \(B\)] extends \(D\) indicates the declaration of a class \(C\) parameterized with a type \(A\), which is a subtype of \(B\), where \(C\) itself extends \(D\). Subtyping in this language follows the Liskov Substitution Principle [20]. To organize the classes, we group them in _packages_, so that a package is just a collection of classes. We can declare that a class belongs to a package using the word 'package'. In addition, the word 'import' brings in classes from other packages.
The language can be translated to other languages by including so-called directives, which are specific to the target languages. The word 'directive' marks a piece of code that is considered only in specific translations and ignored in others.
## 3 Discussion and Implementation
### Integration with Scala
To prototype the specification, we translate it into Scala [27] code. We use the type checking and type inference provided by Scala. Each class is translated into a Scala trait, which is open for extension; each constant definition is translated to a lazy value (lazy val), and each function definition to a def, as is each abstract constant or function declared with abstract. The default type constructor is a case class that extends the original trait. Scala case classes provide constructors and extractors for pattern matching and are not extensible. The if-then-else and match-case are very similar to what is provided in Scala. The supporting data types are the same as those provided by Scala, such as String, Int, Double, Option, and Seq.
The prototype can be run on the Java Virtual Machine (JVM), which is multiplatform and optimized for concurrent execution, and it can therefore use JVM libraries, so that, for example, a monitor can communicate with an AI agent interface. Although libraries can produce side effects, this can be easily controlled by the import commands, which define exactly the classes that are being used. On the one hand, a purely functional specification will not include any reference to those libraries. On the other hand, it is possible to connect an agent employing the corresponding JVM libraries. This dual use of the JVM brings the right amount of flexibility needed for practical use, without losing control over critical parts.
The translation to Scala covers all the nuances of the language. For example, type parameters, written [A : Type] in Soda, are written [A] in Scala, <A> in Kotlin [17] and Java, and (A : Type) in Lean.
### Integration with Lean
Lean [18] is a theorem prover and programming language based on the calculus of constructions with inductive types. Part of Soda can be translated into Lean to prove correctness of Soda snippets. The types in Lean are not the same as those in Scala, and the JVM libraries cannot be used, but some core purely functional pieces of code in Soda can be proven correct by using Lean.
A class definition in Soda defines three things that in Lean must be defined separately: a _type_, a _namespace_, and a _constructor_. Every Soda class defines a namespace (namespace in Lean). Some classes contain parametric internal values declared with abstract in Soda; these classes are translated into a Lean class, which includes a default constructor, named as the class with a trailing underscore, and the fields, all placed after the Lean where. Lean already provides the extractors needed for pattern matching. The default equality given in Soda is obtained in Lean by deriving (deriving) from DecidableEq (or from BEq).
As with Scala, a function definition in Soda is translated to a Lean def, if-then-else in Soda is identical in Lean, and a match-case structure in Soda becomes a match-with-\(\Rightarrow\) structure in Lean. The structure
\[\texttt{match}\ x\ \ \texttt{case}\ p_{1}\Longrightarrow e_{1}\ \ \ldots\ \ \texttt{case}\ p_{n} \Longrightarrow e_{n}\]
is translated as
\[\texttt{match}\,x\texttt{with}\ \mid\,p_{1}\Rightarrow e_{1}\ \ \ldots\ \mid\,p_{n}\Rightarrow e_{n}\]
Package management (package and import in Soda) and self-instances (this in Soda) are not supported at the moment, and neither are the subtype and supertype type bounds.
Some basic types in Lean are stricter than in Soda or Scala, and there is not always a direct mapping from Soda to Lean. However, it is possible to create a specific mapping with directive, which allows adding a mapping for a type, and including Lean theorems with their proofs.
### Undefined States and Termination
Soda does not use exceptions. This is because they correspond to an imperative feature, as they are used to interrupt the evaluation of a function and perform a jump (_goto_) to the point where they are caught, assuming that they are caught at all.
This also implies that designs in this language need to consider edge cases carefully and manage them properly when building objects. If exceptions are thrown from JVM objects, they can be caught by underlying types in Scala (e.g. Try), and then managed as objects.
Aside from the possibility of integrating JVM libraries, the language itself is highly expressive. There is no limit on self-recursion, which brings advantages and disadvantages. We provide an annotation to force tail recursion (@tailrec) to avoid stack overflow. There are two prominent functions that can be easily defined in this language. The first function is _range_, which generates the first natural numbers up to a given bound. The second function is _fold_, which applies a cumulative operation over a sequence. These two functions suffice to define the most common operations on sequences. Since both functions always terminate, they are a convenient substitute for full recursion, preventing possible infinite recursion.
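As a rough illustration of why range and fold suffice (sketched in Python with an explicit loop rather than in Soda, so the names and style below are ours and not part of the language), common sequence operations reduce to a single terminating fold over a finite range:

```python
def fold(sequence, initial, combine):
    # Cumulative operation over a finite sequence; it always terminates
    # because each element is visited exactly once.
    accumulator = initial
    for element in sequence:
        accumulator = combine(accumulator, element)
    return accumulator

def natural_range(n):
    # The first n natural numbers: 0, 1, ..., n - 1.
    return list(range(n))

total = fold(natural_range(5), 0, lambda acc, x: acc + x)          # 0+1+2+3+4
length = fold(natural_range(5), 0, lambda acc, _x: acc + 1)
reversed_list = fold(natural_range(5), [], lambda acc, x: [x] + acc)

assert total == 10 and length == 5 and reversed_list == [4, 3, 2, 1, 0]
```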
### Related Work
As mentioned above, the language aims to be a highly readable formal language for humans. In the design of the language, we consider the good properties of some programming languages. The main features we look for are:
* the specification is intended to be read and understood by a human (which is the most relevant point);
* everything defined is done only once, in one place (to avoid confusion due to partial definitions);
* objects are immutable (because one of the key properties is immutability);
* classes cannot be modified, but they can be extended (because of the Open-Closed Principle [22]).
One of the properties that we adopted was the _functional notation_. For that, we evaluated three categories: the Lisp style, the ML (Meta Language) [23] style, and the Haskell style. In the Lisp category we can mention Clojure [14], which has a large community and can be integrated with the JVM. In the Haskell category we can mention Haskell, which is a de-facto standard in the functional programming community. In the same category, we also have Agda [26] and Idris [3], which can be used as proof assistants. In the ML category we can mention OCaml [19], which is a very efficient implementation of ML, Coq [32], which is a proof assistant with a very reduced core, and Lean [18], which is an efficient proof assistant. We wanted to avoid the excessive use of parentheses as in the Lisp style because it is less readable for non-experts. We wanted to avoid the definitions of partial functions, which is the standard notation in the Haskell style, in order to reduce undefined functions. We considered the ML style to be the most appropriate for the language, which induces definitions of total functions with a moderate number of parentheses.
Another property we considered was _readability_. For that, we looked at Python [28, 33] and Prolog [5, 35]. Python is a language that is popular among scientists without a computer science background and is directed at a broad range of ages. For example, some young students start with Scratch [24] and then transition to Python or JavaScript [9], while a minimalist dialect of Lisp [31] has been used to teach university courses.
Prolog is also widely accepted in the scientific community. For example, 2APL (A Practical Agent Programming Language) [6] is a programming language for multi-agent systems consisting of individual agents that may share and access external environments. 2APL integrates the declarative and imperative styles by connecting declarative beliefs and goals with events and plans. As later developments based on 2APL, we can mention N-2APL (Norm-Aware Agent Programming Language), 2OPL (Organisation Programming Language), and a framework for norm-aware multi-agent systems [8] that integrates these programming languages. Although Prolog is a logic language and its interpreter operates in a different way, the pieces of code created in Prolog are similar to those in purely functional languages.
Last but not least, we sought a good _functional object-oriented integration_. This is provided by Scala, which has a thriving community with both purely functional and object-oriented backgrounds. Scala also provides an advanced type inference system and can compile to JVM bytecode.
Table 1 contains a summary of the properties we searched for. We tested the efficiency of the programming languages mentioned above on a computer with an Intel Core i5-8350U CPU (1.70 GHz) with 8 cores, running on Linux 6.2.0 from the Ubuntu 22.04 LTS distribution. For each programming language, we wrote a piece of code that was either a built-in loop (as in Clojure, Prolog, and Python) or a tail-recursive function. We tried different powers of 10, and indicated the largest number of repetitions fitting in 30 seconds. This value is not meant to be a global benchmark to compare the languages or their implementations, but rather to show that Soda is well-positioned in terms of efficiency.
Employing formal proofs instead of only empirical evidence (like unit tests) gives stronger reliability guarantees for multi-agent systems. This has been addressed in a development [16] where a verification framework for the GOAL agent programming language [15] has been formalized in Isabelle/HOL [25], which is a proof assistant based on higher-order logic. We also became interested in Scallina [10, 11], which is a tool for translating from Coq to Scala. As for languages for verification, we can also mention Dafny [30] and F* [29].
## 4 Conclusion
We present Soda, a language used to model constraints in AI systems. Our main motivation for Soda is to model complex requirements that need to be easily understood by humans. Furthermore, it can be used to model and prototype other types of constraints. In addition, Soda can be efficiently prototyped on optimized, multiplatform, state-of-the-art technology, like the JVM, and some pieces of code can be verified in Lean.
We give an overview of a tool that helps to describe problems in a way that we consider more transparent and less error-prone. Although writing descriptions in this style could require more effort than with standard imperative languages, the effort to fully comprehend those descriptions is significantly smaller, thanks to the reduced number of constructs. This language is also conducive to writing better designs, since each function explains a piece of code and works as running documentation. In addition, the computer provides extra verification by checking and inferring the types.
In future work, we would like to expand the Lean translator to handle more nuances of Soda. Since understandability is a key feature of Soda, we would like to conduct a case study in which stakeholders can read descriptions and corroborate the readability of the language.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline lang. & version & A & B & C & D & E & F \\ \hline Agda & 2.6.3 & Yes & Yes & No & Yes & No & \(10^{9}\) \\ Clojure & 1.11.1 & Yes & No & No & No & Yes & \(10^{9}\) \\ Coq & 8.16.1 & Yes & Yes & No & Yes & No & \(10^{7}\) \\ Haskell & 8.8.4 & Yes & Yes & No & Yes & No & \(10^{8}\) \\ Idris2 & 0.6.0 & Yes & Yes & No & Yes & No & \(10^{10}\) \\ Lean & 4.0.0 & Yes & Yes & No & Yes & No & \(10^{8}\) \\ OCaml & 4.14.1 & Yes & No & Yes & Yes & No & \(10^{9}\) \\ Prolog & 8.4.2 & No & No & No & No & No & \(10^{8}\) \\ Python & 3.10.6 & No & No & Yes & No & No & \(10^{8}\) \\ Scala & 3.3.1 & Yes & No & Yes & Yes & Yes & \(10^{10}\) \\ Soda & 0.19.0 & Yes & Yes & Yes & Yes & Yes & \(10^{10}\) \\ \hline \hline \end{tabular} Columns: **A.** dominantly or only functional // **B.** purely functional in its core syntax // **C.** full object-oriented notation // **D.** statically typed // **E.** JVM integration // **F.** number of repetitions in 30 s
\end{table}
Table 1: Properties of related languages. |
2304.03031 | Evidentiality-aware Retrieval for Overcoming Abstractiveness in
Open-Domain Question Answering | The long-standing goal of dense retrievers in abtractive open-domain question
answering (ODQA) tasks is to learn to capture evidence passages among relevant
passages for any given query, such that the reader produce factually correct
outputs from evidence passages. One of the key challenge is the insufficient
amount of training data with the supervision of the answerability of the
passages. Recent studies rely on iterative pipelines to annotate answerability
using signals from the reader, but their high computational costs hamper
practical applications. In this paper, we instead focus on a data-centric
approach and propose Evidentiality-Aware Dense Passage Retrieval (EADPR), which
leverages synthetic distractor samples to learn to discriminate evidence
passages from distractors. We conduct extensive experiments to validate the
effectiveness of our proposed method on multiple abstractive ODQA tasks. | Yongho Song, Dahyun Lee, Myungha Jang, Seung-won Hwang, Kyungjae Lee, Dongha Lee, Jinyeong Yeo | 2023-04-06T12:42:37Z | http://arxiv.org/abs/2304.03031v6 | # Revisiting Dense Retrieval with Unanswerable Counterfactuals
###### Abstract
The retriever-reader framework is popular for open-domain question answering (ODQA), where a retriever samples for the reader a set of relevant candidate passages from a large corpus. A key assumption behind this method is that high relevance scores from the retriever likely indicate high answerability from the reader, which implies a high probability that the retrieved passages contain answers to a given question. In this work, we empirically dispel this belief and observe that recent dense retrieval models based on DPR often rank unanswerable counterfactual passages higher than their answerable original passages. To address such answer-unawareness in dense retrievers, we seek to use counterfactual samples as additional training resources to better synchronize the relevance measurement of DPR with the answerability of question-passage pairs. Specifically, we present counterfactually-**P**ivoting **C**ontrastive **L**earning (PiCL), a novel representation learning approach for passage retrieval that leverages counterfactual samples as pivots between positive and negative samples in their learned embedding space. We incorporate PiCL into the retriever training to show the effectiveness of PiCL on ODQA benchmarks and the robustness of the learned models.
## 1 Introduction
Open-domain question answering (ODQA) Chen and Yih (2020) is a task that finds the answers to natural language questions from a large collection of documents. A common approach to ODQA tasks is a two-stage retriever-reader framework Chen et al. (2017), where a retriever roughly selects relevant candidate passages from which a reader extracts the answers to a question. For the first-stage retrieval, recent studies leverage dense passage retriever (DPR) Karpukhin et al. (2020), which computes relevance scores based on the similarity between learned representations of questions and passages. Generally, it is assumed that high relevance scores from the retriever naturally lead to high answerability from the reader, which implies that the top retrieved passages are more likely to contain the correct answer to a given question. Based on such naive belief, many research efforts for ODQA have focused on improving the ranking performance of the retrievers to further enhance the performance of the subsequent reader Karpukhin et al. (2020); Xiong et al. (2021); Qu et al. (2021).
In this work, our counterfactual simulation dispels this myth. Specifically, we present a simple counterfactual sampling strategy that removes the answer span of a question from its positive passage. We observe that neural retrieval models based on DPR often rank such unanswerable counterfactual passages similarly to or higher than their answerable original passages. We hypothesize that dense retrievers are not answer-aware, which poses the asynchronicity between their relevance scores and answerability as a new challenge of the current retriever-reader framework.
Motivated by this, we seek to repurpose the counterfactual passages as additional training resources to better synchronize the relevance measurement of DPR with the answerability of retrieved passages. A straightforward implementation is to consider these counterfactual samples as hard negatives to supplement in-batch negatives Zhan et al. (2021). However, the semantic overlap between the counterfactual and original passages makes such naive application ineffective, as training a retriever with minimally different negative samples (_i.e._, answer-removed counterfactuals) potentially hurts the ability to capture semantic relevance of positive question-passage pairs.
To overcome this challenge, we propose counterfactually-**P**ivoting **C**ontrastive **L**earning (PiCL), a novel contrastive representation learning scheme for improving DPR. By applying PiCL, our goal is to make DPR aware of both relevance and answerability in its embedding space, such that higher relevance leads to higher answerability, and vice versa. For that, the key idea of PiCL is to use the counterfactual samples not only as hard negatives but also as pseudo positives, which function as pivots between the original positive and negative passages. To effectively learn with such multiple views, PiCL aims to optimize the following three objectives for learning embeddings of question and original/counterfactual passages: (1) a modified DPR loss of mapping the positive passage closer to the question than the negatives (including counterfactuals) in a batch, (2) a counterfactuals-as-hard-negatives loss of mapping the original positive passage closer to the question than the counterfactuals, and (3) a counterfactuals-as-pseudo-positives loss of mapping the counterfactuals closer to the question than the in-batch negative passages.
Our contributions are twofold. First, we demonstrate that the neural retrieval models suffer from the unawareness of answerability, which can be a bottleneck of the retriever-reader pipeline for ODQA. Second, we design a novel training framework named PiCL of learning to synchronize the relevance and answerability. Our extensive experiments validate the effectiveness of PiCL in diverse ODQA scenarios, showing that our approach is orthogonal to well-studied prior techniques on DPR.
## 2 Preliminaries and Related work
### Dense Passage Retrieval for ODQA
Open-domain question answering is a knowledge-intensive task that aims at answering factoid questions given a large corpus of texts Chen and Yih (2020). We particularly focus on a setting in which the corpus \(C\) is a collection of \(M\) passages \(p\), _i.e._\(C=\{p_{i}\}_{i=1}^{M}\). For each question-passage pair \((q_{i},p_{i}^{+})\) where \(q_{i}\) is **answerable** from \(p_{i}^{+}\), the passage \(p_{i}^{+}\) can be viewed as a concatenation of non-answer spans \(s_{l}\) and \(s_{r}\) and an answer span \(a_{i}\), _i.e._\(p_{i}^{+}=[s_{l};a_{i};s_{r}]\). While \(s_{l}\) and \(s_{r}\) provide some relevant contexts to the question, the **answerability** of \(q_{i}\) from \(p_{i}\) is determined by the presence of answer span \(a_{i}\), which contains not only the exact match for the gold answer but also key evidence to the question. Given a question \(q_{i}\), the reader finds the corresponding answer span \(a_{i}\) from an answerable passage \(p_{i}^{+}\) in the corpus \(C\).
Due to the large search space in the corpus \(C\), a first-stage retriever is commonly used in ODQA to find subsets of relevant passages to questions for the expensive reader. The predominant approach to passage retrieval is DPR Karpukhin et al. (2020), which leverages the efficient dual-encoder architecture denoted as \([f_{q},f_{p}]\) to encode questions and passages into a learned embedding space. For a question-answer passage pair \((q_{i},p_{i}^{+})\) and a set of \(N\) negative passages \(p_{j}^{-}\), DPR is trained to maximize the similarity between the question \(q_{i}\) and its answer passage \(p_{i}^{+}\):
\[\mathcal{L}(q_{i},p_{i}^{+},\{p_{j}^{-}\}_{j=1}^{N})=\\ -\log\frac{e^{\langle q_{i},p_{i}^{+}\rangle}}{e^{\langle q_{i},p_{i}^{+}\rangle}+\sum_{j=1}^{N}e^{\langle q_{i},p_{j}^{-}\rangle}} \tag{1}\]
where \(\langle q_{i},p_{i}\rangle\) is a function that computes the relevance score between a question \(q_{i}\) and a passage \(p_{i}\) as dot product between the question embedding \(f_{q}(q_{i})\) and the passage embedding \(f_{p}(p_{i})\):
\[\langle q_{i},p_{i}\rangle=f_{q}(q_{i})\cdot f_{p}(p_{i}) \tag{2}\]
At runtime, the retriever indexes all passages based on the similarity metric and performs efficient search using approximate nearest neighbor search libraries Johnson and Douze (2019).
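To make the objective concrete, the following PyTorch-style sketch (our own illustration rather than the reference DPR implementation) computes Equation 1 with in-batch negatives, where the \(i\)-th passage in a batch is the positive for the \(i\)-th question and a negative for all others:

```python
import torch
import torch.nn.functional as F

def in_batch_dpr_loss(question_embs: torch.Tensor,
                      passage_embs: torch.Tensor) -> torch.Tensor:
    """question_embs, passage_embs: [batch_size, dim] tensors, where row i
    of the passages is the positive p_i^+ for row i of the questions."""
    # Dot-product relevance scores <q_i, p_j> for every pair in the batch (Eq. 2).
    scores = question_embs @ passage_embs.T              # [batch, batch]
    # The diagonal entries are the positive pairs; the softmax cross-entropy
    # over each row is exactly the negative log-likelihood of Eq. 1.
    targets = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, targets)
```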
To further enhance the discriminative power of DPR, more recent studies on dense retrieval adopt model-centric approaches Xiong et al. (2021); Qu et al. (2021); Izacard and Grave (2021), which involve iterative training pipelines that exploit multiple fine-tuned encoders. For example, state-of-the-art models including ANCE Xiong et al. (2021) and rocketQA Qu et al. (2021) focus largely on combining multiple dual- and cross-encoders into sophisticated frameworks for hard negative sampling. While effective, as presented in Table 1, using these complex model-centric approaches requires a significant amount of computing resources,
\begin{table}
\begin{tabular}{l l c c c} \hline \hline Retriever & Min. GPU & Batch Size & KD Teacher & Index Refresh \\ \hline DPR / DPR+PiCL & 1\(\times\)GTX & 32 & - & - \\ ANCE & **4\(\times\)V100** & 32 & - & **10K batches** \\ RocketQA & 2\(\times\)V100 & **1024** & Cross Encoder (CE) & 2\(\times\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Resource comparison of various dense retrieval approaches, based on Hofstätter et al. (2021).
which hamper their deployment in various scenarios (Lindgren et al., 2021; Du et al., 2022; Gao et al., 2022). In this work, we instead focus on a data-centric approach, which aims to maximally leverage given data sources without complicating the training process, to improve the ranking performance and robustness of DPR. We later show that our data-centric approach based on counterfactual augmentation is both effective as alternatives to DPR training and orthogonal to existing model-centric approaches.
### Counterfactual Learning for Text Data
Counterfactual inference has been applied to representation learning to obtain fair representations (Kusner et al., 2017) in various domains such as image classification (Goyal et al., 2019) and vision-language tasks (Liang et al., 2020; Niu et al., 2021). The key idea behind counterfactual learning is to train a model that is invariant to specific aspects of the input data (Johansson et al., 2016; Kusner et al., 2017).
Existing work on counterfactual learning for natural language expands upon this idea and aims to learn robust representations of text data by capturing causal features while mitigating spurious correlations (Choi et al., 2020; Liang et al., 2020; Choi et al., 2022; Tian et al., 2022). A general approach is to apply contrastive learning to differentiate factual (or positive) samples from counterfactual samples, which are minimally dissimilar but of different labels. Such an approach usually comes with dedicated masking strategies to minimize causal associations in counterfactual samples, and applying counterfactual learning with synthetic data has been shown to yield robust representations.
However, few studies have delved into the effect of counterfactual learning on retrieval tasks (Choi et al., 2020). In this work, we reinterpret causal signals in ODQA tasks as answer spans and seek to apply counterfactual contrastive learning for DPR.
## 3 Relevance-Answerability Asynchronicity on DPR
In this section, we first present the definition of counterfactual samples as unanswerable variants of the given passages. We then conduct a counterfactual analysis on DPR where retrievers compare the relevance scores between an answerable passage and its counterfactual counterpart. We show that dense retrievers are incapable of discriminating counterfactual samples from the original data and thus unaware of their answerability.
### Unanswerable Counterfactual Samples for Passage Retrieval
Following previous studies (Liang et al., 2020; Choi et al., 2022), we synthesize counterfactual samples by removing causal signals from the original sample. Based on our assumption that answer spans serve as causal signals in ODQA, we remove the answer span to a question in the original passage to create its counterfactual sample. Formally, we define counterfactual samples for passage retrieval as follows:
**Definition 1**.: Let \((q,p)\) denote a question-answer passage pair, where \(p=[s_{l};a;s_{r}]\) contains an answer span \(a\) to the question \(q\) and non-answer spans \(s_{l}\) and \(s_{r}\). We define a **counterfactual sample**\(p^{*}\) as an unanswerable variant of \(p\) with minimal changes such that \(p^{*}=[s_{l};s_{r}]\).
Intuitively, the counterfactual sample is an unanswerable variant of the original answerable passage that loses features pertinent to answering the question (_i.e._, \(a\)). In practice, we set \(a\) to be the evidence sentence that contains the exact match of the answer to the question. Figure 1 provides the overview of our counterfactual sampling process.
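A minimal sketch of this sampling step is given below (our simplified illustration; the actual sentence segmentation and matching used by the authors may differ). The evidence sentence containing the exact answer string is dropped and the remaining spans are concatenated, yielding \(p^{*}=[s_{l};s_{r}]\):

```python
import re

def make_counterfactual(passage: str, answer: str) -> str:
    """Remove the first sentence containing an exact match of the answer,
    producing an unanswerable variant of the passage (Definition 1)."""
    # Naive sentence segmentation on '.', '!' or '?' followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", passage)
    kept, removed = [], False
    for sentence in sentences:
        if not removed and answer in sentence:
            removed = True        # drop the evidence sentence a
            continue
        kept.append(sentence)
    return " ".join(kept)

passage = "Paris is the capital of France. It lies on the banks of the Seine."
print(make_counterfactual(passage, "capital of France"))
# -> "It lies on the banks of the Seine."
```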
### Measuring Answer-awareness
Our experiment aims to re-examine the belief that dense retrievers are **answer-aware**, or that the relevance score \(\langle q,p\rangle\) between \(q\) and \(p\) is synchronous with the answerability of \(q\) from \(p\). Given that counterfactual samples lack key features pertinent to the question, it is generally assumed that dense retrievers rank positive passages higher than their counterfactual samples. To assess the answer-awareness of dense retrievers, we check whether such an assumption holds true.
Figure 1: Illustration of a counterfactual sample.
Specifically, we observe answerability-relevance mismatch, where answerable passages \(p\) are given lower relevance scores than their counterfactual counterparts \(p^{*}\).
**Definition 2**.: Given a question-passage pair \((q,p)\) where \(p\) contains answer \(a\) to \(q\), let \(p^{*}\) be the counterfactual sample of \(p\) such that \(a\notin p^{*}\), and \(\langle q,p\rangle\) be the scoring function for relevance between \(q\) and \(p\). **Answerability-relevance mismatch** between the question-passage pair \((q,p)\) occurs if the relevance score \(\langle q,p\rangle\) between \(q\) and \(p\) is lower than (or equal to) the score \(\langle q,p^{*}\rangle\) between \(q\) and \(p^{*}\), _i.e._, \(\langle q,p\rangle\leq\langle q,p^{*}\rangle\).
Under the counterfactual simulation setting, we further introduce **Answer-Awareness Rate** to quantify the rate at which the model predictions on relevance scores \(\langle q,p\rangle\) are synchronous with the answerability of \(q\) from \(p\). Given \(T\) question-answer passage pairs \(\{(q_{i},p_{i})\}_{i=1}^{T}\), answer-awareness rate counts the number of cases where answerability-relevance mismatch does not occur.
**Definition 3**.: Given a set of \(T\) triplets \(\mathcal{T}=\{(q_{i},p_{i},p_{i}^{*})|p_{i},p_{i}^{*}\in C\}_{i=1}^{T}\), where \(p_{i}^{*}\) is the counterfactual sample of \(p_{i}\), let \(\mathds{1}_{\langle q_{i},p_{i}\rangle\leq\langle q_{i},p_{i}^{*}\rangle}\) be a binary indicator of whether answerability-relevance mismatch occurs for a triplet \((q_{i},p_{i},p_{i}^{*})\), _i.e._, \(\langle q_{i},p_{i}\rangle\leq\langle q_{i},p_{i}^{*}\rangle\). **Answer-Awareness Rate**, or **AAR**, is measured as the proportion of \((q_{i},p_{i},p_{i}^{*})\) whose relevance scores \(\langle q_{i},p_{i}\rangle\) between questions and original passages are higher than scores \(\langle q_{i},p_{i}^{*}\rangle\) between their counterfactual counterparts \(p^{*}\).
\[\text{AAR}=1-\sum_{i=1}^{T}\mathds{1}_{\langle q_{i},p_{i}\rangle\leq\langle q _{i},p_{i}^{*}\rangle}/T \tag{3}\]
To validate the assumption on answer-awareness of dense retrievers, we observe how much AARs differ from their theoretical upper bound, which supposedly amounts to \(100\%\) provided that dense retrievers always measure relevance scores such that \(\langle q,p\rangle>\langle q,p^{*}\rangle\).
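Concretely, AAR is just the fraction of triplets whose original passage scores strictly higher than its counterfactual. A hedged sketch (assuming the question, positive, and counterfactual embeddings have already been computed row-wise; the function name is ours) could look as follows:

```python
import torch

def answer_awareness_rate(q_embs: torch.Tensor,
                          pos_embs: torch.Tensor,
                          cf_embs: torch.Tensor) -> float:
    """All tensors have shape [T, dim]; row i corresponds to (q_i, p_i, p_i*).
    Returns the proportion of triplets with <q_i, p_i> > <q_i, p_i*> (Eq. 3)."""
    pos_scores = (q_embs * pos_embs).sum(dim=-1)    # <q_i, p_i>
    cf_scores = (q_embs * cf_embs).sum(dim=-1)      # <q_i, p_i*>
    mismatch = (pos_scores <= cf_scores).float()    # indicator from Definition 2
    return 1.0 - mismatch.mean().item()
```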
### Preliminary Experiments
We conduct our preliminary experiment on Natural Questions (NQ) (Kwiatkowski et al., 2019), a commonly used ODQA benchmark. We follow the settings from Karpukhin et al. (2020) and adopt NQ test set used for DPR, which provides 3,610 factoid questions and around 21 million passages segmented from English Wikipedia dump for evaluation. To measure AAR, we use the 1,382 questions from NQ test set whose gold \((q,p,p^{*})\) triplets are given.1 We then track whether the relevance score \(\langle q,p\rangle\) between each question \(q\) and its positive passage \(p\) is higher than the score \(\langle q,p^{*}\rangle\) between the counterfactual passage \(p^{*}\).
Footnote 1: Both NQ test set and golden passage information are made available by Karpukhin et al. (2020) at [https://github.com/facebookresearch/DPR](https://github.com/facebookresearch/DPR)
Figure 2 shows both AAR and the performance on passage retrieval and QA tasks. We first observe that AAR of a vanilla DPR significantly falls behind the theoretical upper bound, which contradicts the assumption that retrievers constantly rank positive passages higher than counterfactual passages. We also see that AAR of dense retrievers show positive correlations with the downstream performance on retrieval and QA tasks. A considerable increase in QA performance following the increase in AAR further suggests that answer-aware retrievers improve the effectiveness of retriever-reader frameworks on the downstream QA task even with a small gain in the retrieval performance.
**Effect of Hard Negatives.** One possible solution to improve AAR of retrievers is to adopt better
Figure 2: AAR experiments on Natural Questions. The radius around the mark indicates the minimum GPU memory usage for training (reported in Table 1).
negative mining strategies. To assess the effect of hard negatives, we compare the AAR of \(\text{DPR}_{\text{inbatch}}\) and \(\text{DPR}_{\text{BM25}}\), which are trained with in-batch negatives and hard negatives from a BM25 index, respectively. We also compare the AAR of DPR with those of ANCE and RocketQA, which adopt sophisticated negative sampling strategies for retriever training. From Figure 2, we observe that \(\text{DPR}_{\text{BM25}}\) shows better AAR than \(\text{DPR}_{\text{inbatch}}\), and that ANCE and RocketQA achieve better AAR than DPR. However, these approaches require a large computational cost since they depend on extremely large batches and negative mining strategies that involve multi-stage training of different encoders (see Table 1).
**Effect of Sample Size.** Another possible solution is to increase the number of training samples, _i.e._ labeled question-passage pairs. To examine the correlation between the size of the train set and AAR of the resulting DPR model, we randomly sample 15,000 and 30,000 question-passage pairs from NQ train set and additionally train two DPR models with randomly sampled data. Figure 2 shows that AAR of DPR increases when trained on more question-passage pairs. Despite such correlation, there is a practical limit to the size of the collected dataset, which makes any such approach infeasible.
Later in this paper we present our data-centric approach based on counterfactual augmentation, PiCL. Note that in Figure 2, DPR+PiCL consistently shows better AAR than DPR trained under the same settings, _i.e._ trained with the same negative sampling strategies and datasets of the same size. We also present an extensive **analysis on answer-awareness** in **Appendix A.2**, including a detailed look into the similarity scores and possible factors behind answer-unawareness.
## 4 Synchronizing Relevance and Answerability on DPR
Our analysis on answer-awareness shows that dense retrievers are not robust to unanswerable counterfactual passages, resulting in a performance bottleneck of a retriever-reader. In this section, we present a novel contrastive learning approach that repurposes counterfactual samples to train a robust, answer-aware retriever. The idea is to redefine counterfactual samples as pivots between positive and negative passages (Section 4.1) and learn the relative similarity between questions, positives, negatives, and pivoting samples (Section 4.2).
### Counterfactual Samples as Pivots
We have reinterpreted causal signals in dense retrieval as answer spans in ODQA and synthesized unanswerable counterfactual samples in Section 3.1. Our goal is to incorporate counterfactual contrastive learning into DPR training such that the learned representation is conditioned on the answer span (_i.e._, positive) and invariant to answer-irrelevant features (_i.e._, negative). For that, we aim to design a method where the model learns the relative positions of counterfactual samples in the embedding space learned by DPR.
Consider a question-passage pair \((q_{i},p_{i}^{+})\), its corresponding counterfactual sample \(p_{i}^{*}\), and a set of \(N\) negative passages \(\{p_{j}^{-}\}_{j=1}^{N}\). A DPR-based retriever learns question and passage representations such that the following inequality between positive passages \(p_{i}^{+}\) and negative passages \(p_{j}^{-}\) holds:
\[\langle q_{i},p_{i}^{+}\rangle>\langle q_{i},p_{j}^{-}\rangle \tag{4}\]
To specify the relative positions of counterfactual passage on the embedding space, one must take into account their relevance between positive passages (_i.e._ source passages) and negative passages. As shown in Figure 3, we view our counterfactual samples from two different perspectives, as hard negatives and as pseudo-positives.
**Counterfactuals as Hard Negatives.** Given that counterfactual samples lack relevant features to the question, \(p_{i}^{*}\) is a counterfactually _negative_ sample to an anchor question \(q_{i}\) while the original passage \(p_{i}^{+}\) serves as the _positive_. Thus the embedding similarity \(\langle q_{i},p_{i}^{*}\rangle\) between \(q_{i}\) and \(p_{i}^{*}\) is upper bounded by \(\langle q_{i},p_{i}^{+}\rangle\):
\[\langle q_{i},p_{i}^{+}\rangle>\langle q_{i},p_{i}^{*}\rangle \tag{5}\]
Learning to discriminate \(p_{i}^{*}\) from \(p_{i}^{+}\) would minimize the mutual information between representations of questions \(q_{i}\) and answer-irrelevant spans in \(p_{i}^{+}\), strengthening causal effects of answer spans.
**Counterfactuals as Pseudo Positives.** Since a counterfactual passage \(p_{i}^{*}\) is a minimally different
Figure 3: Illustration of the counterfactual samples as pivots in the embedding space.
sample that retains most of the semantics in \(p_{i}^{+}\), \(p_{i}^{*}\) can be seen as a _pseudo-positive_ example in passage retrieval. Semantic relevance between \(q_{i}\) and \(p_{i}^{*}\) distinguishes \(p_{i}^{*}\) from other _negatives_ \(p_{j}^{-}\), which provide noisy contexts with respect to \(q_{i}\). Thus, the following holds for all \(p_{j}^{-}\):
\[\langle q_{i},p_{i}^{*}\rangle>\langle q_{i},p_{j}^{-}\rangle \tag{6}\]
Learning to discriminate \(p_{i}^{*}\) from \(p_{j}^{-}\) imposes a constraint on the encoders such that embeddings for questions and passages are computed based on their semantic alignment.
From Equations 5 and 6, we can derive that the embedding similarity \(\langle q_{i},p_{i}^{*}\rangle\) between questions and counterfactual samples is bounded by \(\langle q_{i},p_{i}^{+}\rangle\) and \(\langle q_{i},p_{j}^{-}\rangle\). Essentially, counterfactual samples can be re-formulated as **pivots** between positive and negative samples in the embedding space. Since both inequality constraints in Equations 5 and 6 satisfy Equation 4, our definition of counterfactual samples as pivots is in line with the objective of DPR (Equation 1).
### Counterfactually-Pivoting Contrastive Learning
Based on the ideas discussed in Section 4.1, we propose counterfactually-**P**ivoting **C**ontrastive **L**earning (**PiCL**), a training scheme for dense retrieval that yields a robust, answer-aware retriever. Specifically, we introduce additional counterfactual contrastive loss terms into the objective function of DPR (Equation 1) to leverage counterfactual samples as pivots between positives and negatives. Consider a question-positive passage pair \(\{q_{i},p_{i}^{+}\}\), a set of \(N\) negatives \(\{p_{j}^{-}\}_{j\neq i}^{N}\), and their corresponding counterfactual samples \(p_{i}^{*}\) and \(\{p_{j}^{*}\}_{j\neq i}^{N}\). We define the following loss terms as the key components of PiCL.
**Modified Dense Passage Retrieval** loss, \(\mathcal{L}_{\text{dpr}}\), is a slight modification of the loss term in DPR where the counterfactual counterpart \(p_{i}^{*}\) to the positive passage \(p_{i}^{+}\) is added as a negative. To alleviate any interference from counterfactual passages as negatives, we add a balancing coefficient \(\lambda<1\) before the similarity term \(e^{\langle q_{i},p_{i}^{*}\rangle}\).
\[\mathcal{L}_{\text{dpr}}(q_{i},p_{i}^{+},p_{i}^{*},\{p_{j}^{-}\}_{j\neq i}^{N})=-\log\frac{e^{\langle q_{i},p_{i}^{+}\rangle}}{e^{\langle q_{i},p_{i}^{+}\rangle}+\sum_{j\neq i}^{N}e^{\langle q_{i},p_{j}^{-}\rangle}+\lambda e^{\langle q_{i},p_{i}^{*}\rangle}} \tag{7}\]
**Counterfactuals as Hard Negatives** loss, \(\mathcal{L}_{\text{chn}}\), is optimized to maximize the similarity between \(q_{i}\) and \(p_{i}^{+}\) while minimizing the similarity between \(q_{i}\) and \(p_{i}^{*}\). It imposes the key constraint on \(q_{i}\), \(p_{i}^{+}\), and \(p_{i}^{*}\) from Equation 5 to discriminate answer passages from non-answer counterfactual passages:
\[\mathcal{L}_{\text{chn}}(q_{i},p_{i}^{+},p_{i}^{*})=-\log\frac{e^{\langle q_{i},p_{i}^{+}\rangle}}{e^{\langle q_{i},p_{i}^{+}\rangle}+e^{\langle q_{i},p_{i}^{*}\rangle}} \tag{8}\]
**Counterfactuals as Pseudo Positives** loss, \(\mathcal{L}_{\text{cpp}}\), is optimized to maximize the relative similarity between \(q_{i}\) and \(p_{i}^{*}\) with respect to negative passages \(p_{j}^{-}\) and \(p_{j}^{*}\) in the given batch. The key difference between \(\mathcal{L}_{\text{dpr}}\) and \(\mathcal{L}_{\text{cpp}}\) is that the counterfactual passage is used as a positive in \(\mathcal{L}_{\text{cpp}}\) to retain semantic relevance in the learned embeddings.
\[\mathcal{L}_{\text{cpp}}(q_{i},p_{i}^{*},\{p_{j}^{-},p_{j}^{*}\}_{j\neq i}^{N})=-\log\frac{e^{\langle q_{i},p_{i}^{*}\rangle}}{e^{\langle q_{i},p_{i}^{*}\rangle}+\sum_{j\neq i}^{N}\bigl(e^{\langle q_{i},p_{j}^{-}\rangle}+e^{\langle q_{i},p_{j}^{*}\rangle}\bigr)} \tag{9}\]
The final loss function \(\mathcal{L}\) is a weighted sum of all three loss functions \(\mathcal{L}_{\text{dpr}}\), \(\mathcal{L}_{\text{chn}}\), and \(\mathcal{L}_{\text{cpp}}\):
\[\mathcal{L}=\mathcal{L}_{\text{dpr}}+\tau_{1}\mathcal{L}_{\text{chn}}+\tau_{2} \mathcal{L}_{\text{cpp}} \tag{10}\]
where \(\tau_{1},\tau_{2}\) are hyperparameters that determine the importance of the terms.
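Putting Equations 7-10 together, the following PyTorch-style sketch (our illustration, not the authors' released code) computes the PiCL objective for a batch in which the other rows act as in-batch negatives; `lam`, `tau1`, and `tau2` stand for \(\lambda\), \(\tau_{1}\), and \(\tau_{2}\), and their default values here are assumptions.

```python
import torch

def picl_loss(q: torch.Tensor, pos: torch.Tensor, cf: torch.Tensor,
              lam: float = 0.5, tau1: float = 1.0, tau2: float = 1.0) -> torch.Tensor:
    """q, pos, cf: [B, dim] tensors; row i holds (q_i, p_i^+, p_i^*).
    The positives and counterfactuals of other rows serve as in-batch negatives."""
    B = q.size(0)
    s_pos = q @ pos.T                                  # s_pos[i, j] = <q_i, p_j^+>
    s_cf = q @ cf.T                                    # s_cf[i, j]  = <q_i, p_j^*>
    eye = torch.eye(B, dtype=torch.bool, device=q.device)
    diag_pos = s_pos.diagonal()                        # <q_i, p_i^+>
    diag_cf = s_cf.diagonal()                          # <q_i, p_i^*>

    # Eq. 7: positive vs. in-batch negatives plus the own, down-weighted counterfactual.
    neg_sum = torch.exp(s_pos).masked_fill(eye, 0).sum(dim=1)
    l_dpr = -(diag_pos - torch.log(torch.exp(diag_pos) + neg_sum
                                   + lam * torch.exp(diag_cf)))

    # Eq. 8: the counterfactual acts as a hard negative for its own question.
    l_chn = -(diag_pos - torch.logsumexp(torch.stack([diag_pos, diag_cf], dim=1), dim=1))

    # Eq. 9: the counterfactual acts as a pseudo positive against all in-batch negatives.
    neg_all = neg_sum + torch.exp(s_cf).masked_fill(eye, 0).sum(dim=1)
    l_cpp = -(diag_cf - torch.log(torch.exp(diag_cf) + neg_all))

    # Eq. 10: weighted sum of the three terms, averaged over the batch.
    return (l_dpr + tau1 * l_chn + tau2 * l_cpp).mean()
```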
## 5 Experiments
Having seen that answer-awareness of a dense retriever is closely associated with the model performance, we assess PiCL on common ODQA benchmarks to show that our learning approach improves the effectiveness of the retriever on passage retrieval and question answering.
### Experimental Settings
**Dataset.** We follow the settings in Section 3.3 and evaluate our approach on **Natural Questions (NQ)**Kwiatkowski et al. (2019). It is built upon web documents such as Wikipedia articles and human-annotated question-answer pairs, most of which are collected from real-world search queries Kwiatkowski et al. (2019). We follow prior work on dense retrieval Xiong et al. (2021); Qu et al. (2021); Ren et al. (2021) and use for training 58,812 factoid questions each paired with a set of positive passages. For inference, we use the preprocessed data from Karpukhin et al. (2020), which include 3,610 factoid questions and a corpus of about 21M Wikipedia passages preprocessed through the pipeline proposed in Chen et al. (2017).
**Baselines and Implementation Details.** We consider DPR Karpukhin et al. (2020) as the backbone architecture for all retrievers implemented in this section. Our focus is to assess how applying PiCL, _i.e._ enhancing answer-awareness, affects the performance of the backbone DPR. We also include ANCE Xiong et al. (2021) and RocketQA Qu et al. (2021), which use dedicated negative sampling pipelines to improve the performance of DPR. To show the orthogonality of PiCL to such negative mining approaches, we implement PiCL models under different negative sampling strategies (_e.g._, sampling from top retrieval results of BM25 and a fine-tuned DPR) and compare them with the baselines. Note that due to limited computational resources, we do not reproduce the performance of ANCE and RocketQA. See Appendix B for more details on the implementation.
### Main Experimental Results
**Retrieval Performance.** Table 2 compares the performance of PiCL models with the baselines on the NQ benchmark. We observe that DPR+PiCL models yield consistent performance gains over vanilla DPR under all tested conditions. Similar to DPR, DPR+PiCL shows stronger performance when trained with hard negatives from the top retrieval results of a fine-tuned DPR, which suggests that adding informative negative samples further boosts the discriminative power of an answer-aware retriever.2 Meanwhile, we see a consistent increase in the retrieval performance when trained with model-centric approaches, _i.e._, ANCE and RocketQA, which rely on substantially large batches and compute-intensive negative mining techniques (Table 1). The performance gain on DPR+PiCL from additional hard negatives implies that PiCL can be further improved when orthogonally applied to such model-centric approaches. We find a similar trend on the TriviaQA benchmark in Table 8 of the Appendix, indicating that PiCL is generalizable.
Footnote 2: We obtain hard negatives from the dataset from Karpukhin et al. (2020) available on [https://github.com/facebookresearch/DPR](https://github.com/facebookresearch/DPR)
**End-to-End QA Performance.** A key assumption underlying our work is that synchronizing a
\begin{table}
\begin{tabular}{l c c c|c c c c} \hline \hline \multirow{2}{*}{**Retriever**} & \multicolumn{3}{c}{**Training Resource**} & \multicolumn{4}{c}{**Natural Questions**} \\ \cline{2-8} & PLM & \#N & Hard Neg & Top-1 & Top-5 & Top-20 & Top-100 \\ \hline BM25\({}^{\dagger}\) & - & - & - & - & - & 59.1 & 73.7 \\ DPR\({}^{\dagger}\) & BERT\({}_{\text{base}}\) & 127 & - & - & 55.8 & 73.0 & 83.1 \\ ANCE\({}^{\dagger}\) & RoBERTa\({}_{\text{base}}\) & 31+32 & ANN & - & - & 81.9 & 87.5 \\ RocketQA\({}^{\dagger}_{\text{toppl}}\) & BERT\({}_{\text{base}}\) & 1023+1024 & Cross Batch & - & - & - & 86.0 \\ RocketQA\({}^{\dagger}\) & ERNIE\({}_{\text{base}}\) & 1023+1024 & Cross Encoder & - & 74.0 & 82.7 & 88.5 \\ \hline \hline DPR\({}^{*}\) & BERT\({}_{\text{base}}\) & 127 & - & 31.77 & 58.12 & 74.76 & 84.07 \\ DPR\({}^{*}\) & BERT\({}_{\text{base}}\) & 127+128 & BM25 & 46.62 & 68.56 & 79.70 & 86.34 \\ \hline DPR+PiCL & BERT\({}_{\text{base}}\) & 127 & - & 35.35 & 61.55 & 76.81 & 85.87 \\ DPR+PiCL & BERT\({}_{\text{base}}\) & 127+128 & BM25 & **48.64** & **69.75** & **80.11** & **86.81** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Retrieval results on Natural Questions benchmark. The best and second results are in **Bold** and underline, respectively. PLMs: pre-trained LMs used for retriever initialization, #N: the number of negative samples, computed as in-batch negatives + hard negatives, \({}^{*}\) denotes reproduced results in our environment setting where #N=127+128. \({}^{\dagger}\) indicates reported results from Qu et al. (2021). We discuss the effect of PLMs on PiCL in Appendix A.4
\begin{table}
\begin{tabular}{l l l c c c} \hline \hline \multirow{2}{*}{\#N} & \multirow{2}{*}{**Reader**} & \multirow{2}{*}{**Retriever**} & \multicolumn{3}{c}{**Exact Match (EM) score**} \\ \cline{4-6} & & & Top-5 passages & Top-20 passages & Top-100 passages \\ \hline \multirow{2}{*}{127} & \multirow{2}{*}{DPR reader} & DPR & 31.83 & 36.87 & 37.45 \\ & & DPR+PiCL & 34.27 (+2.44) & 38.86 (+1.99) & 39.06 (+1.61) \\ \hline \multirow{2}{*}{127} & FiD\({}_{\text{base}}\) (T5) & DPR & 31.99 & 39.11 & 43.82 \\ & & DPR+PiCL & 34.27 (+2.28) & 41.47 (+2.36) & 44.85 (+1.03) \\ \hline \multirow{2}{*}{127+128} & FiD\({}_{\text{base}}\) (T5) & DPR & 38.31 & 43.13 & 45.37 \\ & & DPR+PiCL & **40.22** (+1.91) & **44.32** (+1.19) & **47.65** (+2.28) \\ \hline \hline \end{tabular}
\end{table}
Table 3: End-to-end QA performance of retriever-reader pipelines on Natural Questions benchmark. #N : number of negative samples used to train the retriever, computed as in-batch negatives (+ BM25 negatives), Top-\(k\): top-\(k\) retrieved passages for reader inference. Note that we reuse the checkpoints of DPR reader and FiD\({}_{\text{base}}\) (_i.e._ T5-base implementation of FiD) from Karpukhin et al. (2020) and Izacard and Grave (2021). Best scores are in **Bold**.
retriever with the reader improves end-to-end QA performance of the retriever-reader pipeline, as the retriever is more likely to provide informative evidence. To validate this assumption, we evaluate the downstream performance of the subsequent reader when paired with DPR+PiCL. Specifically, we re-use two reader models, an extractive reader from Karpukhin et al. (2020) and a Fusion-in-Decoder (FiD) from Izacard and Grave (2021), and switch different retrievers to sample top-\(k\) passages for reader inference. We then compute Exact Match (EM) scores, or the proportion of questions where answer predictions from the reader are correct. Table 3 reports end-to-end QA performance of the retriever-reader pipelines. Overall, applying PiCL on DPR consistently improves the QA performance of the retriever-reader under various settings. Particularly, we see that using a FiD instead of a DPR reader for DPR+PiCL further boosts the QA performance, suggesting that advanced readers benefit from the informative samples from PiCL models.
### Analysis on Dense Passage Retrieval
**Negative Samples.** Figure 3(a) shows the retrieval accuracy of DPR and DPR+PiCL trained with different amounts of in-batch and random negative samples from the corpus. We see a consistent increase in retrieval accuracy for DPR+PiCL, which is in line with the finding that the performance gain from PiCL over a vanilla DPR is independent of the number of negatives used for training. Particularly, DPR+PiCL trained with fewer negatives achieves performance comparable to or better than a vanilla DPR trained with more negatives. One possible reason for such efficiency is that using counterfactual samples for PiCL leads to the effect of data augmentation as the model learns to discriminate positive passages from hard negative passages.
**Answer Awareness.** One key assumption underlying our approach is that applying PiCL on DPR results in a more answer-aware retriever. To provide a fine-grained analysis on answer-awareness, we use DPR+PiCL trained with in-batch negatives to measure similarity scores \(\langle q,p^{+}\rangle\) and \(\langle q,p^{*}\rangle\) for the gold \((q,p^{+},p^{*})\) triplets from Section 3.3. Figure 3(b) compares the average of the scores \(\langle q,p^{+}\rangle\) and \(\langle q,p^{*}\rangle\) from DPR and DPR+PiCL. We observe a notable difference between the average \(\langle q,p^{+}\rangle\) and \(\langle q,p^{*}\rangle\) for DPR+PiCL and a relatively minor difference for DPR. In DPR+PiCL, the average \(\langle q,p^{+}\rangle\) is significantly larger than \(\langle q,p^{*}\rangle\), which indicates that DPR+PiCL learns to distinguish between answerable and counterfactual passages.
**Counterfactual Loss.** To study the efficacy of counterfactual samples as pivots, we conduct an ablation study on the counterfactual contrastive learning objective in Equation 10. Specifically, we implement three PiCL baselines with the following modifications to the objective function: 1) \(\mathcal{L}_{\text{dpr}}+\mathcal{L}_{\text{cpp}}\), 2) \(\mathcal{L}_{\text{dpr}}+\mathcal{L}_{\text{chn}}\), and 3) \(\mathcal{L}_{\text{cpp}}+\mathcal{L}_{\text{chn}}\). Table 4 compares all baselines with the original PiCL and the vanilla DPR. None of the three PiCL baselines improves much over a vanilla DPR when \(\mathcal{L}_{\text{dpr}}\), \(\mathcal{L}_{\text{cpp}}\), or \(\mathcal{L}_{\text{chn}}\) is missing. In contrast, the original PiCL method outperforms the vanilla DPR, suggesting that using counterfactual samples as pivots is crucial in PiCL.
See Appendix A for detailed analysis on answer-awareness and generalizability of PiCL.
## 6 Conclusion
This work aims to re-examine the assumption that dense retrievers assign high relevance scores on answerable passages to questions. We first conduct a counterfactual simulation and observe that dense retrievers based on DPR often fail to rank answerable passages higher than their counterfactual counterparts. Based on this observation, we present PiCL, which repurposes counterfactual samples as pivots between positives and negatives for answer-aware DPR. Our experiments show that applying PiCL
| **Retriever** | Top-1 | Top-5 | Top-20 | Top-100 |
| --- | --- | --- | --- | --- |
| \(\mathcal{L}_{\text{dpr}}\) | 32.08 | 59.56 | 76.54 | 85.73 |
| \(+\mathcal{L}_{\text{cpp}}\) | 28.14 | 56.65 | 75.12 | 85.54 |
| \(+\mathcal{L}_{\text{cnn}}\) | 30.80 | 58.59 | 75.98 | 84.85 |
| \(+\mathcal{L}_{\text{cpp}}+\mathcal{L}_{\text{cnn}}\) (full PiCL) | 33.99 | 61.88 | 77.70 | 85.84 |

Table 4: Ablation studies on PiCL objective functions (top-\(k\) retrieval accuracy on Natural Questions).
Figure 4: (a) Retrieval performance of DPR and DPR+PiCL with varying numbers of negatives. (b) Similarity scores between queries and positive or counterfactual passages for DPR and DPR+PiCL.
on DPR not only enhances the model performance on the ODQA benchmark but also improves its robustness to unanswerable, counterfactual passages.
### Limitations
Below we summarize some limitations of our work and discuss potential directions to improve it: (i) Our definition of causal signals in answerable passages has been limited to answer sentences that contain exact matches of gold answers. While simple and efficient, our counterfactual sampling strategy leaves room for improvement, and more elaborate construction methods could yield better counterfactual samples and further enhance the performance of PiCL. (ii) We observe that the AAR measurements in Section 3 are not well calibrated with the downstream performance of the retriever, which limits the practical usefulness of AAR as an indicator of model performance. In future work, we aim to refine the definition of AAR so that it serves as a formal evaluation metric for ODQA.
### Broader Impact and Ethics Statement
Our work re-examines the answer-awareness of the dense retrievers and seeks to mitigate undesired model biases to false positives, or non-answer contexts in candidate passages, via counterfactual contrastive learning. While we have focused solely on the effectiveness of our approach on open-domain question answering, we believe that the concept of counterfactually-pivoting samples can be further explored in other representation learning tasks such as response retrieval for dialogue systems.
Meanwhile, our work shares the typical risks towards misinformation from common dense retrieval models [22, 17] as our implementation follows the common design based on dual encoders. Our work takes a step towards minimizing such risks from the retriever, but we note that there is still much work needed from the community to ensure the faithfulness of dense retrievers, particularly in specialized domains with insufficient data.
|
2305.03318 | Steady states of the Parker instability: the effects of rotation | We model the Parker instability in vertically stratified isothermal gas using
non-ideal MHD three-dimensional simulations. Rotation, especially differential,
more strongly and diversely affects the nonlinear state than the linear stage
(where we confirm the most important conclusions of analytical models), and
stronger than any linear analyses predict. Steady state magnetic fields are
stronger and cosmic ray energy density higher than in comparable nonrotating
systems. Transient gas outflows induced by the nonlinear instability persist
longer, of order 2 Gyr, with rotation. Stratification combined with
(differential) rotation drives helical flows, leading to mean-field dynamo.
Consequently, the nonlinear state becomes oscillatory (while both the linear
instability and the dynamo are non-oscillatory). The horizontal magnetic field
near the midplane reverses its direction propagating to higher altitudes as the
reversed field spreads buoyantly. The spatial pattern of the large-scale
magnetic field may explain the alternating magnetic field directions in the
halo of the edge-on galaxy NGC 4631. Our model is unique in producing a
large-scale magnetic structure similar to such observation. Furthermore, our
simulations show that the mean kinetic helicity of the magnetically driven
flows has the sign opposite to that in the conventional non-magnetic flows.
This has profound consequences for the nature of the dynamo action and
large-scale magnetic field structure in the coronae of spiral galaxies which
remain to be systematically explored and understood. We show that the energy
density of cosmic rays and magnetic field strength are not correlated at scales
of order a kiloparsec. | Devika Tharakkal, Anvar Shukurov, Frederick A. Gent, Graeme R. Sarson, Andrew Snodin | 2023-05-05T06:57:29Z | http://arxiv.org/abs/2305.03318v1 | # Steady states of the Parker instability: the effects of rotation
###### Abstract
We model the Parker instability in vertically stratified isothermal gas using non-ideal MHD three-dimensional simulations. Rotation, especially differential, more strongly and diversely affects the nonlinear state than the linear stage (where we confirm the most important conclusions of analytical models), and stronger than any linear analyses predict. Steady state magnetic fields are stronger and cosmic ray energy density higher than in comparable nonrotating systems. Transient gas outflows induced by the nonlinear instability persist longer, of order 2 Gyr, with rotation. Stratification combined with (differential) rotation drives helical flows, leading to mean-field dynamo. Consequently, the nonlinear state becomes oscillatory (while both the linear instability and the dynamo are non-oscillatory). The horizontal magnetic field near the midplane reverses its direction propagating to higher altitudes as the reversed field spreads buoyantly. The spatial pattern of the large-scale magnetic field may explain the alternating magnetic field directions in the halo of the edge-on galaxy NGC 4631. Our model is unique in producing a large-scale magnetic structure similar to such observation. Furthermore, our simulations show that the mean kinetic helicity of the magnetically driven flows has the sign opposite to that in the conventional non-magnetic flows. This has profound consequences for the nature of the dynamo action and large-scale magnetic field structure in the coronae of spiral galaxies which remain to be systematically explored and understood. We show that the energy density of cosmic rays and magnetic field strength are not correlated at scales of order a kiloparsec.
keywords: instabilities - magnetic fields - MHD - cosmic rays - ISM: structure - galaxies: magnetic fields
## 1 Introduction
The Parker instability is a magnetic Rayleigh-Taylor or magnetic buoyancy instability modified by cosmic rays that carry negligible weight but exert significant pressure. The instability is an important element of the large-scale dynamics of the interstellar medium (ISM) as it affects the vertical distributions of the gas, magnetic fields and cosmic rays and can drive gas outflows, thereby affecting the star formation. In our previous work (Tharakkal et al., 2022), we explored the development of the instability, with a focus on its nonlinear saturation, in a non-rotating disc with imposed unstable distributions of the gas, magnetic field and cosmic rays. Among the essentially nonlinear features of the instability are a transient gas outflow in the weakly nonlinear stage and a strong redistribution of magnetic fields, cosmic rays and thermal gas, resulting in a thinner thermal gas disc and very large scale heights and low energy densities of the magnetic field and cosmic rays. In this paper, we address the effect of rotation on the Parker instability.
Rotation is known to reduce the growth rate of the weak perturbations but it does not suppress the instability completely (Zweibel and Kulsrud, 1975; Foglizzo and Tagger, 1994, 1995; Matsuzaki et al., 1998; Kowal et al., 2003). However, rotation introduces a fundamentally new feature to the system: under the action of the Coriolis force, the gas flows produced by the instability become helical and can drive mean-field dynamo action that generates a magnetic field at a large scale comparable to that of the initial unstable configuration. Hanasz (1997), Hanasz and Lesch (1997, 1998) and Thelen (2000) simulate numerically the mean-field dynamo action driven by the magnetic buoyancy with and without cosmic rays, while Moss et al. (1999) present an analytical formulation. A striking feature of the nonlinear evolution of a rotating system, noticed by Machida et al. (2013) in their simulations of the galactic dynamo using ideal magnetohydrodynamics (MHD), is the possibility of quasi-periodic magnetic field reversals at the time scale of 1.5 Gyr, both near the disc midplane and at large altitudes. This appears to be an essentially nonlinear effect that relies on rotation since the linear instability does not develop oscillatory solutions and the nonlinear states are not oscillatory without rotation (Tharakkal et al., 2022). Foglizzo and Tagger (1994, their Section 7.1) find that the Parker instability can be oscillatory in a certain range of the azimuthal wave numbers. Machida et al. (2013) relate the reversals to the magnetic flux conservation, but we note that the _large-scale_ magnetic flux is not conserved when the mean-field dynamo is active. Our simulations of the nonlinear Parker instability in a rotating system suggest a different, more subtle explanation that relies on the correlations between magnetic and velocity fluctuations not dissimilar to those arising from the \(\alpha\)-effect that drives the mean-field dynamo action (see below). Large-scale magnetic fields whose horizontal direction alternates with height emerge in the simulations of mean-field dynamo action by Hanasz et al. (2004). This spatial pattern may be related to the field reversals near the midplane.
We explore the effects of rotation on the Parker instability in a numerical model similar to that of Tharakkal et al. (2022), quantifying both its linear and nonlinear stages and identifying the roles of the Coriolis force and the velocity shear of the differential rotation. We consider the instability in a local rectangular box with parameters similar to those of the Solar neighbourhood of the Milky Way. The structure of this paper is as follows. Section 2 describes briefly the numerical model, and in Section 3 we consider the linear stage of the instability. Section 4 presents a detailed comparison of the distributions of the thermal and non-thermal components of the system in the nonlinear, saturated stage of the instability and how they change when the rotational speed and shear rate vary. In Section 5, we clarify the mechanism of the magnetic field reversal, and Section 8 discusses the effects of rotation on the systematic vertical flows. The mean-field dynamo action of the motions induced by the instability is our subject in Section 6, where we discuss the kinetic and magnetic helicities.
## 2 Basic equations and the numerical model
We use a model very similar to that of Tharakkal et al. (2022), with the only difference being that we now consider rotating systems, with either a solid-body or differential rotation. We consider the frame rotating at the angular velocity of the centre of the domain with the \(z\)-axis aligned with the gravitational acceleration and the angular velocity \(\boldsymbol{\Omega}\), the \(y\)-axis directed along the azimuth and the \(x\)-axis parallel to the radial direction of the local cylindrical frame. Vector \(x\)-components are occasionally referred to as radial, while \(y\)-components are called azimuthal.
The non-ideal MHD equations are formulated for the gas density \(\rho\), its velocity \(\boldsymbol{U}\), total pressure \(P\) (which includes the thermal, magnetic and cosmic-ray contributions), magnetic field \(\boldsymbol{B}\) and its vector potential \(\boldsymbol{A}\), and the energy density of cosmic rays \(\epsilon_{\rm cr}\). The initial conditions represent an unstable magneto-hydrostatic equilibrium, and the corresponding distributions \(\rho_{0}\), \(\boldsymbol{B}_{0}\) and \(\epsilon_{\rm cr,0}\) in \(z\) are maintained throughout the simulation as a background state. We solve for the deviations from them, denoted \(\rho^{\prime}\) for the density, \(\boldsymbol{u}\) for the velocity, \(\boldsymbol{P}^{\prime}\) for the total pressure, \(\boldsymbol{b}\) for the magnetic field and \(\boldsymbol{a}\) for its vector potential, and \(\epsilon^{\prime}_{\rm cr}\) and \(\boldsymbol{F}^{\prime}\) for the cosmic-ray energy density and flux. Cosmic rays are described in the fluid approximation with non-Fickian diffusion, so we have separate equations for their energy density and flux. The governing equations are solved numerically in a rectangular shearing box of the size \(4\times 4\times 3.5\,\rm kpc^{3}\) along the \(x\), \(y\) and \(z\) axes, respectively, with the mid-plane at \(z=0\) and \(|z|\leq 1.75\,\rm kpc\). The boundary conditions are periodic in \(x\), sliding-periodic in \(y\) and allow for a free exchange of matter through the top and bottom of the domain as specified in detail by Tharakkal et al. (2022).
The total velocity is given by \(\boldsymbol{U}=\boldsymbol{U}_{0}+\boldsymbol{u}\), where \(\boldsymbol{U}_{0}=Sx\hat{\boldsymbol{y}}\) is the mean rotation velocity in the rotating frame with the shear rate \(S=r\,{\rm d}\Omega/{\rm d}r\), and \(\boldsymbol{u}\) is the deviation from this, associated with the instability. For a solid-body rotation, \(S=0\), we have \(\boldsymbol{U}_{0}=0\). Both \(S\) and \(\Omega\) are assumed to be independent of \(z\) and \(S<0\) for realistic galactic rotation profiles. We neglect the vertical gradient of \(\Omega\) and \(S\); for its observed magnitude of order \(-15\) to \(-25\,\rm km\,s^{-1}\,kpc^{-1}\) (Section 10.2.3 of Shukurov & Subramanian, 2021, and references therein), \(\Omega\) and \(S\) only vary by about \(10\)–\(15\) per cent within \(|z|\leq 1.5\,\rm kpc\).
The presence of rotation only affects the momentum and induction equations, so equations (1), (4)-(6), (9) and (10) for the mass conservation and cosmic rays of Tharakkal et al. (2022) still apply and only the momentum and induction equations are augmented with terms containing \(\Omega\) and \(S\):
\[\frac{{\rm D}\boldsymbol{u}}{{\rm D}t}= -\frac{\nabla P}{\rho}+\boldsymbol{g}+\frac{(\nabla\times\boldsymbol{B})\times\boldsymbol{B}}{4\pi\rho}-Su_{x}\hat{\boldsymbol{y}}-2\boldsymbol{\Omega}\times\boldsymbol{u}+\nabla\cdot\boldsymbol{\tau}\,, \tag{1}\] \[\frac{\partial\boldsymbol{a}}{\partial t}= \boldsymbol{u}\times(\nabla\times\boldsymbol{A})-Sa_{y}\hat{\boldsymbol{x}}-Sx\frac{\partial\boldsymbol{a}}{\partial y}-\eta\nabla\times(\nabla\times\boldsymbol{a})\,, \tag{2}\]
where \({\rm D}/{\rm D}t=\partial/\partial t+(\boldsymbol{U}_{0}+\boldsymbol{u})\cdot\nabla\) is the Lagrangian derivative, \(\boldsymbol{g}\) is the gravitational acceleration and \(\boldsymbol{\tau}\) is the viscous stress tensor. The Kepler gauge for the vector potential, as described by Oishi & Mac Low (2011) (see also Brandenburg et al., 1995), is appropriate for this shearing box framework.
We use the gravity field \(\boldsymbol{g}=-g(z)\hat{\boldsymbol{z}}\) obtained by Kuijken & Gilmore (1989) for the Solar vicinity of the Milky Way and consider an isothermal gas with the sound speed \(c_{\rm s}=18\,\rm km\,s^{-1}\) and temperature \(T=3.2\times 10^{4}\,\rm K\). In the background state (identified with the subscript zero, this is also the initial state), both the magnetic and cosmic ray pressures are adopted to be half the thermal pressure, \(P_{\rm m,0}/P_{\rm th,0}=P_{\rm cr,0}/P_{\rm th,0}=0.5\), where \(P_{\rm th,0}=c_{\rm s}^{2}\rho_{0}(0)\), \(P_{\rm m,0}=B_{0}^{2}(0)/(8\pi)\) and \(P_{\rm cr,0}=\epsilon_{\rm cr,0}(0)/3\) are the thermal, magnetic and cosmic ray pressures, respectively, and \(B_{0}(0)=5\,\rm\mu G\). The gas viscosity \(\nu\) (entering the viscous stress tensor \(\boldsymbol{\tau}\)) and magnetic diffusivity \(\eta\) are chosen as \(\nu=0.1\,\rm kpc\,km\,s^{-1}\) and \(\eta=0.03\,\rm kpc\,km\,s^{-1}\), respectively, to be somewhat smaller than the turbulent values in the ISM (see Tharakkal et al., 2022, for further details and justification).
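For reference, these background values fix the midplane gas density and cosmic ray energy density. The quick consistency check below infers them in CGS units from the quoted \(B_{0}(0)\), \(c_{\rm s}\) and pressure ratios; the resulting numbers are illustrative only, since the midplane density is not quoted explicitly in the text.

```python
import math

B0 = 5e-6        # G, midplane magnetic field strength
c_s = 18e5       # cm/s, isothermal sound speed (18 km/s)
m_H = 1.67e-24   # g, hydrogen atom mass

P_m0 = B0**2 / (8 * math.pi)   # magnetic pressure at z = 0, erg/cm^3
P_th0 = P_m0 / 0.5             # thermal pressure, from P_m0 / P_th0 = 0.5
rho0 = P_th0 / c_s**2          # midplane density, from P_th0 = c_s^2 rho_0(0)
eps_cr0 = 3 * 0.5 * P_th0      # cosmic ray energy density, from P_cr0 = eps_cr0 / 3

print(f"P_m0    = {P_m0:.2e} erg/cm^3")
print(f"rho_0   = {rho0:.2e} g/cm^3 (~{rho0 / m_H:.2f} H atoms per cm^3)")
print(f"eps_cr0 = {eps_cr0:.2e} erg/cm^3")
```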
Table 1 presents the simulation runs discussed in this paper. The value of \(\Omega\) near the Sun is close to \(30\,\rm km\,s^{-1}\,\rm kpc^{-1}\) (referred to as the nominal value hereafter), and \(S=-\boldsymbol{\Omega}\) when the rotational speed is independent of the galactocentric distance (a flat rotation curve), \(|\boldsymbol{\Omega}\times\boldsymbol{r}|=\rm const\). Model \(\verb
of the most rapidly growing mode with those obtained in a range of analytical and numerical models. In this section, we focus on the modifications of the exponentially growing perturbations caused by the rotation and velocity shear.
Figures 2a,b show the evolution (in both the linear and nonlinear stages) of the root-mean-square (r.m.s.) magnitudes of the perturbations in the magnetic field and velocity, while Panels (c) and (d) show how the total magnetic field strength \(B_{\rm r.m.s.}\) and the mean cosmic ray energy density \(\epsilon_{\rm cr}\) at \(z=0\), respectively, evolve in the models of Table 1. As expected (Shu, 1974; Zweibel & Kulsrud, 1975; Foglizzo & Tagger, 1994, 1995; Hanasz & Lesch, 1997), the instability growth rate \(\Gamma\) (given in Table 1) decreases systematically with the angular velocity. The stretching of the magnetic lines along the radial (\(x\)) direction by the Coriolis force enhances the magnetic tension thus opposing the instability while the differential rotation shears the perturbations to reduce the radial wavelength also suppressing the instability (Foglizzo & Tagger, 1994).
The spatial structure of the unstable modes is illustrated in Fig. 3, which presents the two-dimensional power spectra of the perturbations affected by the solid-body (c-d) and differential (e-f) rotation and compares them with the non-rotating case (a-b). The spectra of the velocity and magnetic field perturbations are identical when \(\Omega=0\) but noticeable differences develop in rotating systems. In agreement with the analysis of Shu (1974), the dominant azimuthal wave number \(k_{y}\) decreases under the influence of rotation. The solid-body rotation leads to wider spectra in the radial and azimuthal wave numbers, consistent with the weaker variation of the instability growth rate with \(k_{y}\) in a rotating system (Fig. 1 of Foglizzo & Tagger, 1994). Since the Coriolis force couples the radial and azimuthal motions, the spectra in \(k_{x}\) and \(k_{y}\) are more similar to each other than in the case \(\Omega=0\). However, the velocity shear strongly reduces the range of \(k_{y}\) while the perturbations have significantly larger radial wave numbers \(k_{x}\) than in the cases \(\Omega=0\) and \(S=0\).
## 4 The saturated state
Figure 2 also shows that the nonlinear development of the instability and its statistically steady state are strongly affected by the rotation and velocity shear. Solid-body rotation has little effect on the magnitude of the magnetic field perturbations at \(t\gtrsim 1\) Gyr, presented with the solid and dash-dotted curves in Panel (a), but reduces the velocity perturbations shown in Panel (b). Understandably, the velocity shear enhances both (the dotted curves) by stretching the radial magnetic fields which, in turn, affect the motions. The case of faster rotation and correspondingly stronger shear confirms this tendency (dashed curves).
Panels (c) and (d) of Fig. 2, which show the total magnetic field strength and cosmic ray energy density at \(z=0\), suggest that the structure of the magnetic field is changed profoundly by rotation and, especially, by the velocity shear. For example, the magnitude of the magnetic field perturbations in Model \(\Omega\)30S, shown with the dotted curve in Panel (a), is less than twice as large as at \(\Omega=0\) (solid curve), but the total magnetic field at \(z=0\) shown in Panel (c) is almost an order of magnitude stronger since the perturbation is better localised near \(z=0\) (see below). The instability still removes both the magnetic field and cosmic rays from the system as in the case \(\Omega=0\), but at a much lower efficiency that depends on both the angular velocity and the rotational shear.
As compared to the case \(\Omega=0\), the system retains a stronger magnetic field under the solid-body rotation but fewer cosmic rays, as shown with the solid and dash-dotted curves in Fig. 2(c,d). Figure 4 clarifies the details of the changes effected by rotation and velocity
Figure 1: The evolution of the gas density and magnetic field in Model \(\Omega\)30S is illustrated for its three significant epochs: **(a)** the linear stage, **(b)** beginning of the magnetic field reversal in the early nonlinear stage and **(c)** the advanced nonlinear state (the specific simulation times are indicated for each frame). Selections of magnetic lines are shown (with colour representing the local magnetic field strength in \(\mu\)G) in the \((x,y,z)\)-space at the time indicated to the left of each frame. The horizontal average of the azimuthal magnetic field \(\langle B_{y}\rangle_{\rm h}\) in \(\mu\)G is shown with colour on the vertical \((z,t)\)-plane as it evolves continuously (rather than at discrete times used for the magnetic lines). The gas density distribution is shown with colour on the vertical \((x,z)\)-planes (in \({\rm g\,cm^{-3}}\)) for each time.
shear, presenting the varying vertical profiles of the gas density, magnetic fields and cosmic rays in Models \(\Omega\)00N, \(\Omega\)30N and \(\Omega\)30S. Both solid-body and differential rotations reduce the gas scale height in the saturated state. The comparison of Panels (b-c) and (e-f) shows that the solid-body rotation leads to narrower distributions (smaller scale heights) of both magnetic field and cosmic rays about the midplane. Moreover, as we discuss below, the gas flow becomes helical in a rotating system (see Section 6), supporting the mean-field dynamo action. As a result, a large-scale radial magnetic field \(B_{\rm x}\), clearly visible in Fig. 5(d,f), emerges in a rotating system.
The velocity shear changes the nonlinear state qualitatively. Firstly, the scale heights of \(B\) and \(\epsilon_{\rm cr}\) near the midplane are even smaller at \(t=0.6\)-\(0.9\,\)Gyr in Panels (h) and (i) than at the comparable times in Panels (e) and (f). Secondly, and more importantly, the vertical profile of the magnetic field strength evolves to become more complicated at \(t=1.6\,\)Gyr in Panel (h), and the cosmic ray distribution reflects this change. The energy density of cosmic rays in Model \(\Omega\)30S, \(\langle\epsilon_{\rm cr}\rangle_{\rm h}(0)=0.2\epsilon_{\rm cr0}\) at \(t=1.6\,\)Gyr (Fig. 4i), is ten times larger than in Model \(\Omega\)00N. Differential rotation helps to confine cosmic rays because it drives dynamo action generating a strong horizontal magnetic field, and this slows down the escape of cosmic rays as they spread over larger distances guided by the magnetic field.
The change in the vertical profile of \(\langle B\rangle_{\rm h}\) in Model \(\Omega\)30S at \(t=1.6\,\)Gyr reflects the reversal of the horizontal magnetic field near the midplane discussed and explained in Section 5.
## 5 Magnetic field reversal
The reversal of the magnetic field in the nonlinear stage of the instability has been noticed earlier by a few authors (see Section 1) but our simulations identify it as a generic feature of the Parker and magnetic buoyancy instabilities in rotating systems. This process is illustrated in Fig. 5 which shows how the evolution of the large-scale
Figure 3: The two-dimensional power spectra of \(u_{z}\) (left column, in the units of \(\rm kpc^{2}\,km^{2}\,s^{-2}\)) and \(b_{z}\) (right column, in \(\rm kpc^{2}\,\mu G^{2}\)), averaged over \(|z|<1.75\,\)kpc, in Models \(\Omega\)00N (a–b), \(\Omega\)30N (c–d) and \(\Omega\)30S (e–f) at \(t=0.3\,\)Gyr (the linear stage of the instability).
Figure 2: The evolution of the root-mean-square magnitudes at the midplane \(z=0\) of **(a)** the magnetic field perturbation \(|\boldsymbol{b}|\), normalised to \(B_{0}(0)\) (the strength of the background magnetic field at \(z=0\)), and **(b)** gas speed in the Models \(\Omega\)00N (solid, no rotation), \(\Omega\)30N (dash-dotted, solid-body rotation at the nominal \(\Omega\)), \(\Omega\)30S (dotted, differential rotation at the nominal \(\Omega\) and \(S\)) and \(\Omega\)60S (dashed, doubled \(\Omega\) and \(S\)). Similarly, panels **(c)** and **(d)** show the horizontally averaged total magnetic and cosmic ray energy densities at \(z=0\) for those models, normalised to the respective midplane values in the background state, \(\langle B\rangle_{\rm xy}(0)/B_{0}(0)\) and \(\langle\epsilon_{\rm cr}\rangle_{\rm xy}(0)/\epsilon_{\rm cr0}(0)\), respectively.
horizontal magnetic field components \(\langle B_{x}\rangle_{\rm h}\) and \(\langle B_{y}\rangle_{\rm h}\) depends on rotation and the velocity shear.
Figure 5a shows again (see also Tharakkal et al. 2022a, for details) that, in a non-rotating system, the azimuthal magnetic field \(\langle B_{y}\rangle_{\rm h}\) decreases with time in strength and its scale height increases, while the radial field \(\langle B_{x}\rangle_{\rm h}\) shown in Fig. 5b is much weaker and varies along \(z\) without any systematic pattern. Solid-body rotation causes two major changes: the azimuthal field strength (Fig. 5c) first decreases faster than without rotation but then starts growing and, at late times, is stronger than for \(\Omega=0\). The field direction remains the same as that of the imposed field, \(\langle B_{y}\rangle_{\rm h}>0\). Meanwhile, the radial field (Fig. 5d) is, at late times, comparable in strength to \(\langle B_{y}\rangle_{\rm h}\), well-ordered and is predominantly negative, \(\langle B_{x}\rangle_{\rm h}<0\). This change is a result of the mean-field \(\alpha^{2}\)-dynamo action driven by the mean helicity of the gas flow as discussed in Section 6.
The differential rotation of Model \(\Omega\)30S (Fig. 5e,f) changes the evolution even more dramatically: it drives the more efficient \(\alpha\omega\)-dynamo with stronger \(\langle B_{x}\rangle_{\rm h}\) and, remarkably, exhibits a reversal of the large-scale horizontal magnetic field. The reversal starts in the weakly nonlinear phase at \(t=0.5\) Gyr with a rather abrupt emergence of a relatively strong positive radial magnetic field near the midplane, \(\langle B_{x}\rangle_{\rm h}>0\). The velocity shear with \(S<0\) stretches the positive radial field into a negative azimuthal magnetic field, so that \(\langle B_{y}\rangle_{\rm h}\) starts decreasing and reverses at \(t=1.6\) Gyr (Fig. 5e). The total horizontal magnetic field strength \(\left(\langle B_{x}\rangle_{\rm h}^{2}+\langle B_{y}\rangle_{\rm h}^{2}\right)^{1/2}\) decreases to a minimum before increasing again, as \(\langle B_{y}\rangle_{\rm h}\) decreases to zero and then reemerges with the opposite direction. These changes in the large-scale magnetic field structure start near the midplane and spread to larger altitudes because of the magnetic buoyancy.
### The mechanism of the reversal
To understand the process that leads to the reversal of the large-scale azimuthal magnetic field, we consider individual terms in the induction equation written for the deviation from the imposed magnetic field,
\[\frac{\partial\mathbf{b}}{\partial t}=-(\mathbf{U}\cdot\nabla)\mathbf{B}+(\mathbf{B}\cdot \nabla)\mathbf{U}-\mathbf{B}\nabla\cdot\mathbf{U}+\eta\nabla^{2}\mathbf{b}. \tag{3}\]
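In practice, the diagnostic presented below (Fig. 6) amounts to evaluating each term of equation (3) on the simulation grid and averaging it near the midplane. A minimal finite-difference sketch, with toy arrays standing in for the simulation snapshots and assuming a uniform, periodic grid for simplicity:

```python
import numpy as np

def grad(f, dx, axis):
    """Centred finite difference along one axis (periodic for simplicity)."""
    return (np.roll(f, -1, axis=axis) - np.roll(f, 1, axis=axis)) / (2 * dx)

def induction_terms(U, B, dx):
    """Advection, stretching and compression terms of eq. (3), component-wise.

    U, B: arrays of shape (3, nx, ny, nz); dx: uniform grid spacing.
    """
    advection = np.zeros_like(B)
    stretching = np.zeros_like(B)
    for i in range(3):
        for j in range(3):
            advection[i] -= U[j] * grad(B[i], dx, j)    # -(U . grad) B
            stretching[i] += B[j] * grad(U[i], dx, j)   # (B . grad) U
    div_U = sum(grad(U[j], dx, j) for j in range(3))
    compression = -B * div_U                            # -B (div U)
    return advection, stretching, compression

# Toy fields; in the analysis the arrays are snapshots of U and B on the grid.
rng = np.random.default_rng(1)
U = rng.normal(size=(3, 16, 16, 8))
B = rng.normal(size=(3, 16, 16, 8))
adv, stretch, comp = induction_terms(U, B, dx=0.03)
print(stretch[0].mean(axis=(0, 1)))   # <(B . grad)U_x> averaged over (x, y), vs z
```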
Figure 6 shows, for Model \(\Omega\)30S, the evolution of the mean radial and azimuthal components of the first three terms on the right-hand side of this equation, which represent the advection, stretching and compression of the corresponding magnetic field components near the midplane. The stretching terms \((\mathbf{B}\cdot\nabla)U_{x}\) and \((\mathbf{B}\cdot\nabla)U_{y}\) clearly dominate, producing a mean radial field \(\langle B_{x}\rangle_{\rm h}>0\) during the weakly nonlinear stage, \(0.6\lesssim t\lesssim 0.8\) Gyr, which decreases only slowly at later times (because of diffusion and buoyancy) while being gradually stretched by the differential rotation \(S<0\) into a negative azimuthal field \(\langle B_{y}\rangle_{\rm h}\), eventually leading to the reversal of the initially positive \(\langle B_{y}\rangle_{\rm h}\). This picture is very different from that for Model \(\Omega\)00N, where the stretching terms in both components rapidly vanish after a negative excursion during the early nonlinear phase (see Figs 5a,b and 7). Under the solid-body rotation, a positive radial field does emerge near \(z=0\) in the early nonlinear stage but, without the velocity shear, this does not lead to the reversal of the azimuthal field (Fig. 5c,d).
We have analyzed various parts of the averaged stretching term \(\langle(\mathbf{B}\cdot\nabla)U_{x}\rangle_{\rm h}\) in the \(x\)-component of equation (3) to understand which of them produces a positive radial component of the mean field. We
Figure 4: The evolution of the vertical profiles of the horizontally averaged and normalised gas density \(\langle\rho\rangle_{\rm h}/\rho_{0}(0)\) (left-hand column), magnetic field strength \(\langle B\rangle_{\rm h}/B_{0}(0)\) (middle) and cosmic ray energy density \(\langle\epsilon_{\rm cr}\rangle_{\rm h}/\epsilon_{\rm cr0}(0)\) (right-hand column). First row: Model \(\Omega\)00N (no rotation), second row: Model \(\Omega\)30N (nominal solid-body rotation), third row: Model \(\Omega\)30S (nominal rotation and shear). The times corresponding to the line styles are given in the legend of each row. Note that the direction of the mean azimuthal magnetic field \(\langle B_{y}\rangle_{\rm h}\) has reversed within a certain distance of the midplane at the later times, \(t=1.6\) and \(2.6\) Gyr.
note that \(\langle U_{x}\rangle_{\rm h}=0\) and then \(\langle(\mathbf{B}\cdot\nabla)U_{x}\rangle_{\rm h}=((\mathbf{b}\cdot\nabla)u_{x})_{\rm h}\). Thus,
\[\langle(\mathbf{B}\cdot\nabla)U_{x}\rangle_{\rm h}=\left(b_{x}\frac{\partial u_{x}}{ \partial x}\right)_{\rm h}+\left(b_{y}\frac{\partial u_{x}}{\partial y}\right)_ {\rm h}+\left(b_{z}\frac{\partial u_{x}}{\partial z}\right)_{\rm h}. \tag{4}\]
Figure 5: The evolution of the horizontally averaged magnetic field components, \(\langle B_{y}\rangle_{\rm h}\) (left-hand column) and \(\langle B_{x}\rangle_{\rm h}\) (right-hand column) in Models \(\Omega\)00N (a–b), \(\Omega\)30N (c–d) and \(\Omega\)30S (e–f). For \(\Omega\)30S the mean azimuthal field \(\langle B_{y}\rangle_{\rm h}\) decreases after \(t=0.6\) Gyr, and undergoes a reversal in sign at \(t\approx 1.6\) Gyr, with the reversal then spreading to higher altitudes. Meanwhile, the mean radial field \(\langle B_{x}\rangle_{\rm h}\) becomes positive and relatively strong near \(z=0\) rather abruptly at \(t\approx 0.5\) Gyr and then also spreads away from the midplane.
Figure 6: The evolution of the three terms on the right-hand side of the induction equation (3) volume-averaged near the midplane (\(|z|<0.4\,\)kpc): **(a)** the radial (\(x\)) and **(b)** the azimuthal (\(y\)) components of the stretching term \((\mathbf{B}\cdot\nabla)\mathbf{U}\) (solid), advection \(-(\mathbf{U}\cdot\nabla)\mathbf{B}\) (dotted) and compression \(-\mathbf{B}(\nabla\cdot\mathbf{U})\) (dash-dotted), in Model \(\Omega\)30S.
Figure 7: As in Fig. 6, but for Model \(\Omega\)00N.
Figure 8 shows that the first two terms on the right-hand side of this equation are less significant than the third term, and that \(\langle b_{z}\,\partial u_{x}/\partial z\rangle_{\rm h}>0\) at \(|z|\lesssim 0.2\,\)kpc. The term \(\langle b_{x}\,\partial u_{x}/\partial x\rangle_{\rm h}\) also contributes to the generation of a positive \(\langle B_{x}\rangle_{\rm h}\) at all \(z\).
The positive correlation between \(b_{z}\) and \(\partial u_{x}/\partial z\), the main driver in the generation of the positive \(\langle B_{x}\rangle_{\rm h}\), arises because of: (i) the Coriolis force; and (ii) the emergence of a local minimum of \(\langle B_{y}\rangle_{\rm h}\) at the midplane produced by the buoyancy. To demonstrate this, we express \(u_{x}\) using the \(y\)-component of the momentum equation (1) with \(S=-\Omega\), differentiate the result with respect to \(z\), multiply it by \(b_{z}\) and average to obtain
\[\rho\Omega\left(b_{z}\frac{\partial u_{x}}{\partial z}\right)_{ \rm h} =\frac{1}{4\pi}\left(b_{z}^{2}\frac{\partial^{2}B_{y}}{\partial z ^{2}}\right)_{\rm h}+\frac{1}{8\pi}\left(\frac{\partial b_{z}^{2}}{\partial z }\frac{\partial B_{y}}{\partial z}\right)_{\rm h}\] \[+\left(b_{z}\,\frac{\partial\Psi}{\partial z}-b_{z}\,\frac{ \partial\rho}{\partial z}\Omega u_{x}\right)_{\rm h}\,, \tag{5}\]
where we have neglected the fluctuations in \(\rho\) when averaging on the left-hand side (which is justifiable since the random gas speed is subsonic) and \(\Psi\) combines all other terms:
\[\Psi=-\rho\frac{\mathrm{D}u_{y}}{\mathrm{D}t}-\frac{\partial P}{\partial y}- \frac{1}{8\pi}\frac{\partial b^{2}}{\partial y}+\frac{1}{4\pi}\left(b_{x}\, \frac{\partial b_{y}}{\partial x}+b_{y}\,\frac{\partial b_{y}}{\partial y} \right)\,, \tag{6}\]
where we neglect the viscosity (represented by the viscous stress tensor \(\tau\)) and \(b^{2}=b_{x}^{2}+b_{y}^{2}+b_{z}^{2}\). Figures 9a,b show vertical profiles of \(\langle B_{y}\rangle_{\rm h}\) in Models \(\Omega\)30N, where no reversal occurs, and \(\Omega\)30S, while Fig. 9c clarifies the form of various terms in equation (5). The positive correlation \(\langle b_{z}\,\partial u_{x}/\partial z\rangle_{\rm h}\) emerges because of the first term on the right-hand side as soon as magnetic buoyancy produces a local minimum of \(\langle B_{y}\rangle_{\rm h}\) at \(z=0\) (see Fig. 9b), so that \(\partial^{2}B_{y}/\partial z^{2}\) is systematically positive at \(z=0\). Such a minimum does not develop in the case of solid-body rotation (Fig. 9a) where no reversal of \(\langle B_{y}\rangle_{\rm h}\) happens. As shown in Fig. 9c, the second and third terms in equation (5) are smaller in magnitude than the first term near \(z=0\) and partially compensate each other. The correlation \(\langle b_{z}\,\partial u_{x}/\partial z\rangle_{\rm h}\) is dominant and positive near \(z=0\), driving a reversal in the large-scale magnetic field near the midplane which then spreads to larger \(|z|\) as shown in Fig. 6c because of the magnetic buoyancy. We stress that the minimum of \(\langle B_{y}\rangle_{\rm h}\) at \(z=0\) can only arise at the nonlinear stage of the instability, because only then do the fluctuations \(b_{y}\) not average to zero.
We have verified that the reversal is not sensitive to the direction of the imposed magnetic field \(B_{0}(z)\hat{\boldsymbol{y}}\); i.e., it occurs in exactly the same manner for \(B_{0}(z)>0\) and \(B_{0}(z)<0\). Our simulations extend to 4 Gyr in duration (see Fig. 5). This is already a significant fraction of the galactic lifetime; therefore, we did not extend them further to find out if further reversals would occur at later times. However, periodic reversals occur in a similar model where the unstable magnetic field is generated by an imposed _mean-field dynamo action_ (Y. Qazi et al., 2022, in preparation). It appears that the emergence of the local minimum of \(\langle B_{y}\rangle_{\rm h}\) at \(z=0\) and its ensuing reversal is related to the mean-field dynamo action (which our imposed field emulates). The dynamo is driven by the mean helicity of the gas flow, and both Models \(\Omega\)30N and \(\Omega\)30S support this mechanism (as discussed below). However, the dynamo in Model \(\Omega\)30N, which has solid-body rotation (so is an \(\alpha^{2}\)-dynamo), is too weak, whereas the differential rotation of Model \(\Omega\)30S enhances the dynamo enough (making it an \(\alpha\omega\)-dynamo) to produce the reversal. In the next section, we compute and discuss the mean helicity of the gas flow and other evidence for the mean-field dynamo action in Model \(\Omega\)30S.
Figure 8: The vertical variation of the horizontally averaged stretching terms of equation (4) in Model \(\Omega\)30S.
## 6 Helicity and Dynamo action
In Models \(\Omega\)30N, \(\Omega\)30S and \(\Omega\)60S, the Coriolis force causes the gas motions to become helical, and the resulting \(\alpha\)-effect produces a large-scale radial magnetic field \(\langle B_{x}\rangle_{\rm h}\) (e.g., Sect. 7.1 of Shukurov & Subramanian, 2021). Differential rotation (in Models \(\Omega\)30S and \(\Omega\)60S) enhances the dynamo significantly, and we have discovered that this leads to a reversal in the azimuthal magnetic field direction discussed in Section 5. Both types of the turbulent dynamo (\(\alpha^{2}\) dynamo in \(\Omega\)30N and \(\alpha\omega\) in \(\Omega\)30S and \(\Omega\)60S) are driven by the mean kinetic helicity of the gas flow \(\chi_{\rm k}=\langle\bar{\mathbf{u}}\cdot(\nabla\times\bar{\mathbf{u}})\rangle_{\rm h}\), while the current helicity of the magnetic fluctuations \(\chi_{\rm m}=\langle\bar{\mathbf{b}}\cdot(\nabla\times\bar{\mathbf{b}})\rangle_{\rm h}\) opposes the dynamo instability, leading to a reduction of the \(\alpha\)-coefficient until a steady state is achieved (e.g., Sect. 7.11 of Shukurov & Subramanian, 2021). Here angle brackets denote the horizontal averaging used in our discussion, and \(\bar{\mathbf{u}}\) and \(\bar{\mathbf{b}}\) are understood as the deviations from the horizontal averages \(\langle\mathbf{U}\rangle_{\rm h}\) and \(\langle\mathbf{B}\rangle_{\rm h}\), such that
\[\mathbf{B}=\langle\mathbf{B}\rangle_{\rm h}+\bar{\mathbf{b}}\,,\quad\mathbf{U}=\langle\mathbf{U} \rangle_{\rm h}+\bar{\mathbf{u}}\,,\quad\langle\bar{\mathbf{b}}\rangle_{\rm h}=0\,, \quad\langle\bar{\mathbf{u}}\rangle_{\rm h}=0\,. \tag{7}\]
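Numerically, both helicities are horizontal averages of the pointwise scalar product of a fluctuation field with its curl. A minimal sketch (finite differences on a periodic toy grid, with placeholder arrays standing in for simulation snapshots); the corresponding \(\alpha\)-coefficients discussed below follow from these profiles.

```python
import numpy as np

def grad(f, dx, axis):
    return (np.roll(f, -1, axis=axis) - np.roll(f, 1, axis=axis)) / (2 * dx)

def curl(V, dx):
    """Curl of a vector field V of shape (3, nx, ny, nz)."""
    cx = grad(V[2], dx, 1) - grad(V[1], dx, 2)
    cy = grad(V[0], dx, 2) - grad(V[2], dx, 0)
    cz = grad(V[1], dx, 0) - grad(V[0], dx, 1)
    return np.stack([cx, cy, cz])

def helicity_profile(V, dx):
    """Horizontally averaged helicity <v . (curl v)>_h as a function of z."""
    v = V - V.mean(axis=(1, 2), keepdims=True)   # deviation from the horizontal average
    return np.sum(v * curl(v, dx), axis=0).mean(axis=(0, 1))

rng = np.random.default_rng(2)
u = rng.normal(size=(3, 16, 16, 8))      # toy velocity field
b = rng.normal(size=(3, 16, 16, 8))      # toy magnetic field
chi_k = helicity_profile(u, dx=0.03)     # kinetic helicity profile chi_k(z)
chi_m = helicity_profile(b, dx=0.03)     # current helicity profile chi_m(z)
```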
Figure 10 shows the evolution of the kinetic and current helicities and their variation with \(z\) obtained using the horizontal averages. As expected, both quantities have odd symmetry in \(z\) (e.g., Sect. 11.3.1 of Shukurov & Subramanian, 2021). Both are weak throughout the linear phase when the instability-driven perturbations are still weak, but increase significantly in magnitude during the early nonlinear phase at about \(t=0.5\,\mathrm{Gyr}\). The kinetic helicity reaches its maximum magnitude \(|\chi_{\rm k}|=|\langle\bar{\mathbf{u}}\cdot(\nabla\times\bar{\mathbf{u}})\rangle_{\rm h}|=851\,\mathrm{km}^{2}\,\mathrm{s}^{-2}\,\mathrm{kpc}^{-1}\) near the upper and lower boundaries, \(z=\pm 1.6\,\mathrm{kpc}\), during the transitional phase at \(t=0.6\,\mathrm{Gyr}\). At a later time, \(t=1.9\,\mathrm{Gyr}\), the kinetic helicity reduces to a maximum of \(|\chi_{\rm k}|=340\,\mathrm{km}^{2}\,\mathrm{s}^{-2}\,\mathrm{kpc}^{-1}\) at \(|z|=1.6\,\mathrm{kpc}\). At early stages of the evolution, the current helicity has local extrema close to the midplane, where the magnetic field is stronger, \(|\chi_{\rm m}|=|\langle\bar{\mathbf{b}}\cdot(\nabla\times\bar{\mathbf{b}})\rangle_{\rm h}|=89\,\mathrm{\mu G}^{2}\,\mathrm{kpc}^{-1}\) at \(t=0.6\,\mathrm{Gyr}\), \(|z|=0.1\,\mathrm{kpc}\). The extrema move away from the midplane in the nonlinear stage, to reach \(|\chi_{\rm m}|=7\,\mathrm{\mu G}^{2}\,\mathrm{kpc}^{-1}\) at \(t=1.2\,\mathrm{Gyr}\), \(|z|=0.5\,\mathrm{kpc}\) and \(|\chi_{\rm m}|=5\,\mathrm{\mu G}^{2}\,\mathrm{kpc}^{-1}\) at \(t=3\,\mathrm{Gyr}\), \(|z|=1\,\mathrm{kpc}\).
The vertical profiles of both kinetic and current helicities evolve in a rather complicated manner, with \(\chi_{\rm k}<0\) at \(z>0\) close to the midplane (although the magnitude is small), and \(\chi_{\rm k}>0\) at larger \(z\) in the case of pure magnetic buoyancy (dotted curve in Fig. 11 representing \(t=0.7\,\mathrm{Gyr}\)). In Model \(\Omega\)30S, \(\chi_{\rm k}<0\) at \(z>0\) close to the midplane just before \(t=0.7\,\mathrm{Gyr}\). Negative \(\chi_{\rm k}\) at \(z>0\) is expected from the action of the Coriolis force on the ascending and descending volume elements (Sect. 7.1 of Shukurov & Subramanian, 2021). However, \(\chi_{\rm k}>0\), as it occurs at larger \(z\) for all models presented in Fig. 11, is unexpected (see below for a discussion).
The \(\alpha\)-coefficient of the nonlinear mean-field dynamo is related to the kinetic and current helicities as (Sect. 7.11.2 of Shukurov & Subramanian, 2021)
\[\alpha=\alpha_{\rm k}+\alpha_{\rm m}\,, \tag{8}\]
where, in terms of the horizontal averages,
\[\alpha_{\rm k}=-\tfrac{1}{3}\tau_{0}\langle\bar{\mathbf{u}}\cdot(\nabla\times\bar{\mathbf{u}})\rangle_{\rm h}\,,\qquad\alpha_{\rm m}=\tfrac{1}{3}\tau_{0}\frac{\langle\bar{\mathbf{b}}\cdot(\nabla\times\bar{\mathbf{b}})\rangle_{\rm h}}{4\pi\rho}\,, \tag{9}\]
and \(\tau_{0}\) is the characteristic (correlation) time of the random flow.
Figure 11: The spatial distribution of the mean kinetic helicity \(\chi_{\rm k}\) at \(t=0.7\,\mathrm{Gyr}\) for four imposed (initial) states specified by the parameters \(\beta_{\rm m0}\) and \(\beta_{\rm cr0}\) defined in equation (12) and given in the legend. Among the models shown in this figure, cosmic rays are present only in Model \(\Omega\)30S where \((\beta_{\rm m0},\beta_{\rm cr0})=(0.5,0.5)\) (dash-dotted: this is a vertical cross-section of the distribution in Fig. 10a).
The relevant time scale \(\tau_{0}\) differs from the time scale of the linear instability \(2\pi/(u_{0}k_{y})\), where \(u_{0}\) and \(k_{y}\) are the characteristic speed and azimuthal wave number of the most unstable mode shown in Figs 2b and 3e–f, respectively. Instead, \(\tau_{0}\) is determined by nonlinear effects and has to be measured separately. We calculate the correlation time using the time autocorrelation function \(C(\tau)\) of \(u_{z}\) (the vertical velocity \(u_{z}\) is a representative component since it is directly related to the instability),
\[\tau_{0}=\int_{0}^{\infty}C(\tau)\;\mathrm{d}\tau\,, \tag{10}\]
with the normalized autocorrelation function calculated as
\[C(\tau)=\frac{1}{T\left\langle\tilde{u}_{z}^{2}\right\rangle_{\mathrm{h}}} \left\langle\int_{0}^{T}\tilde{u}_{z}(t,\mathbf{x})\tilde{u}_{z}(t+\tau,\mathbf{x}) \;\mathrm{d}t\right\rangle_{\mathrm{h}}\,, \tag{11}\]
where \(T\) is the duration of the time series used to compute \(C(\tau)\). For a given \(z\), the integral in equation (11) is calculated for each \((x,y)\) and the result is averaged over \((x,y)\). Thus defined, the autocorrelation function and the corresponding correlation time depend on \(z\).
Figure 12 shows the time autocorrelation of \(u_{z}\) at three values of \(z\), and the form \(C(\tau)=\exp(-\tau/\tau_{0})\) provides a good fit, with the fitted values of \(\tau_{0}\) given in the legend: they vary between \(18\,\mathrm{Myr}\) at \(z=0\) and \(40\,\mathrm{Myr}\) at \(z=1.5\,\mathrm{kpc}\). We use the fitted \(C(\tau)\) to estimate \(\tau_{0}\) as this provides a more accurate result than the direct integration as in the definition (10).
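The estimate of \(\tau_{0}\) can be reproduced as in the sketch below for a single time series (a minimal version of equations 10 and 11, assuming an evenly sampled series; in the analysis the procedure is applied per \((x,y)\) point and then averaged horizontally). The toy signal and its assumed 20 Myr memory are used only to check that the fit recovers a known correlation time.

```python
import numpy as np
from scipy.optimize import curve_fit

def autocorrelation(u, dt):
    """Normalised time autocorrelation C(tau) of a time series u(t)."""
    u = u - u.mean()
    n = len(u)
    c = np.correlate(u, u, mode="full")[n - 1:] / (np.arange(n, 0, -1) * u.var())
    return np.arange(n) * dt, c

def correlation_time(u, dt, nlags=200):
    """Fit C(tau) = exp(-tau/tau0) over the first lags and return tau0."""
    tau, c = autocorrelation(u, dt)
    (tau0,), _ = curve_fit(lambda t, t0: np.exp(-t / t0), tau[:nlags], c[:nlags], p0=[10 * dt])
    return tau0

# Toy series with a known ~20 Myr memory (an Ornstein-Uhlenbeck-like signal).
rng = np.random.default_rng(3)
dt, n, tau_true = 1.0, 4000, 20.0          # time step and memory, in Myr
u = np.zeros(n)
for i in range(1, n):
    u[i] = u[i - 1] * (1 - dt / tau_true) + rng.normal() * np.sqrt(dt)
print(f"fitted tau0 ~ {correlation_time(u, dt):.1f} Myr")
```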
We use \(\tau_{0}=30\,\mathrm{Myr}\) in equation (9), and the results are shown in Fig. 13. The values of \(\alpha_{\rm k}\) largest in magnitude, \(|\alpha_{\rm k}|\approx 7\,\mathrm{km\,s^{-1}}\), are reached during the transition phase around \(t=0.6\,\mathrm{Gyr}\) near \(|z|=1.5\,\mathrm{kpc}\), whereas \(|\alpha_{\rm m}|\) is at its maximum, around \(3\,\mathrm{km\,s^{-1}}\), during the nonlinear phase at \(t=3.6\,\mathrm{Gyr}\).
The spatial structure of \(\alpha_{\mathrm{k}}\) is relatively simple during the early nonlinear phase but becomes more complicated later. Closer to the midplane and at later stages of the evolution, \(\alpha_{\mathrm{k}}>0\) at \(z>0\) (and \(\alpha_{\mathrm{k}}<0\) at \(z<0\)) as expected, and the region where \(\alpha_{\mathrm{k}}\) is predominantly positive (albeit small in magnitude) extends to larger \(|z|\) with time (see Fig. 14 representing vertical sections of Fig. 13a).
As expected, the sign of the current helicity is opposite to that of \(\alpha_{\mathrm{k}}\) at almost all \(z\) and \(t\), so that the back-reaction of the magnetic field on the flow weakens the dynamo action leading to a (statistically) steady state at \(t\gtrsim 3\,\mathrm{Gyr}\).
The negative sign of \(\alpha_{\mathrm{k}}\) at \(z>0\) (corresponding to the positive kinetic helicity \(\chi_{\mathrm{k}}\)) appears to be a specific feature of a system driven by magnetic buoyancy or another magnetically driven instability such as the magneto-rotational instability (MRI). Hanasz and Lesch (1998) argue, using a model of reconnecting magnetic flux ropes, that negative \(\alpha_{\mathrm{k}}\) at \(z>0\) can occur in magnetic buoyancy-driven mean-field dynamos. In his analysis of the mean electromotive force produced by the magnetic buoyancy instability in its linear stage, Thelen (2000a, his Fig. 4) finds \(\alpha<0\) in the unstable region of the northern hemisphere in spherical geometry (corresponding to \(z>0\) in our case), although the "anomalous" sign of \(\alpha_{\mathrm{k}}\) remained unnoticed (Thelen 2000b). However, Brandenburg and Schmitt (1998) find \(\alpha_{\mathrm{k}}>0\) at \(z>0\) in their analysis of the \(\alpha\)-effect due to magnetic buoyancy. Brandenburg and Sokoloff (2002) find \(\alpha_{\mathrm{k}}<0\) at \(z>0\) in simulations of the MRI-driven dynamos (their Section 2 and \(\alpha_{\mathrm{yr}}\) in Figs 5, 7, 9 and 11). Kinetic helicity (and the corresponding \(\alpha_{\mathrm{k}}\)) of this 'anomalous' sign is also found in the simulations of MRI-driven dynamos of P. Dhang et al. (2023, in preparation) (K. Subramanian 2022, private communication). The origin and properties of the kinetic helicity of random flows driven by magnetic buoyancy and MRI deserves further attention. Our results indicate not only that the kinetic helicity has the anomalous sign but also that it can change in space and time.
The current helicity (Fig. 10b) and the corresponding contribution to the \(\alpha\)-effect (Fig. 13b) have the opposite signs to, and closely follow both the spatial distribution and evolution of, \(\chi_{\mathrm{k}}\) and \(\alpha_{\mathrm{k}}\) respectively (although the magnetic quantities have smoother spatial distributions than the corresponding kinetic ones). This confirms that the action of the Lorentz force on the flow weakens the dynamo action as expressed by equation (8). Together with the removal of the large-scale magnetic field by the Parker instability, this leads to the eventual evolution of the system to the statistically steady state.
Although the gas flows that become helical are driven by the instability, no simple and obvious relation of the mean helicity to the parameters that control the strength of the instability is apparent. Figure 11 shows how the vertical profile of the kinetic helicity \(\chi_{\mathrm{k}}\)
Figure 14: The variation of the normalised \(\alpha_{\rm k}\) with \(z\) in the early (\(t=0.7\,\mathrm{Gyr}\), solid) and late (\(t=2.6\,\mathrm{Gyr}\), dotted; \(t=3.6\,\mathrm{Gyr}\), dashed) nonlinear stages in Model \(\Omega\)30S.
changes with the magnetic and cosmic ray pressures in the initial (imposed) state, specified in terms of their ratios to the thermal pressure at \(z=0\),
\[\beta_{\rm m0}=\frac{B_{0}(0)^{2}}{8\pi c_{\rm s}^{2}\rho_{0}(0)}\quad\mbox{and}\quad\beta_{\rm cr0}=\frac{(\gamma_{\rm cr}-1)\epsilon_{\rm cr0}(0)}{c_{\rm s}^{2}\rho_{0}(0)}\,, \tag{12}\]
where \(\gamma_{\rm cr}=4/3\). To avoid complications associated with the cosmic rays in the system behaviour, only one model of the four illustrated in Fig. 11 contains cosmic rays (Model \(\Omega\)30S discussed elsewhere in the text). The midplane strengths of the imposed magnetic field \(B_{0}(0)\) corresponding to \(\beta_{\rm m0}=0.5,1\) and \(1.5\) are \(5\), \(7\) and \(9\,\mu\)G, respectively. When \((\beta_{\rm m0},\beta_{\rm cr0})=(0.5,0)\), the magnetic field is too weak to be unstable and the system remains in the state of magneto-hydrostatic equilibrium, \(\chi_{\rm k}=0\). Adding cosmic rays, \((\beta_{\rm m0},\beta_{\rm cr0})=(0.5,0.5)\) (Model \(\Omega\)30S), destabilises the system, producing the helical flows discussed above. Adding magnetic rather than cosmic ray pressure, \((\beta_{\rm m0},\beta_{\rm cr0})=(1,0)\), also makes the system unstable, and the resulting mean helicity at larger \(|z|\) is greater than for \((\beta_{\rm m0},\beta_{\rm cr0})=(0.5,0.5)\). A still stronger magnetic field, \((\beta_{\rm m0},\beta_{\rm cr0})=(1.5,0)\), leads to \(\chi_{\rm k}\) comparable to the previous two cases at \(|z|\lesssim 1\,{\rm kpc}\), except near the midplane. Altogether, it is difficult to identify a clear pattern in how the magnitude and spatial distribution of the mean helicity of the gas flow driven by the Parker instability depend on the imposed magnetic and cosmic ray pressures; this invites further analysis, both analytical and numerical.
The dimensionless measure of the mean-field dynamo activity in a differentially rotating gas layer is provided by the dynamo number (Section 11.2 of Shukurov & Subramanian 2021)
\[D=\frac{\alpha Sh^{3}}{\beta^{2}}\,, \tag{13}\]
where \(h\) is the layer scale height, \(S\) is the velocity shear rate (\(S=-\Omega\) in our case), \(\alpha\) is given in equation (8) and
\[\beta=\tfrac{1}{3}\tau_{0}\langle\bar{\mathbf{u}}^{2}\rangle_{\rm h}+\eta \tag{14}\]
is the total magnetic diffusivity. The first term in this expression is the turbulent diffusivity and \(\eta\) is the explicit magnetic diffusivity from equation (2) or (3). As we use the horizontal averages in these relations, \(D\) is a function of \(z\) and varies with time together with \(h\), \(\alpha\) and \(\beta\); thus defined, \(D\) might be better called the local dynamo number, a measure of the dynamo efficiency at a given \(z\) and \(t\). In Model \(\Omega\)30S, \(\eta=0.03\,{\rm kpc}\,{\rm km}\,{\rm s}^{-1}\) while the turbulent diffusivity varies, at \(t=1\,{\rm Gyr}\), from \(0.03\,{\rm kpc}\,{\rm km}\,{\rm s}^{-1}\) at \(z=0\) to \(0.5\,{\rm kpc}\,{\rm km}\,{\rm s}^{-1}\) at \(z=1\,{\rm kpc}\) (a nominal turbulent diffusivity in the ISM, where turbulence is mainly driven by supernovae, is \(1\,{\rm kpc}\,{\rm km}\,{\rm s}^{-1}\)). The dynamo amplifies a large-scale magnetic field provided \(|D|>D_{\rm c}\), where \(D_{\rm c}\) is a certain critical dynamo number (see below).
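The dimensional bookkeeping behind these estimates is summarised in the short calculation below; the specific values of \(\alpha\), \(h\) and \(\langle\bar{\mathbf{u}}^{2}\rangle_{\rm h}\) are assumed for illustration only, chosen to be broadly within the ranges quoted in the text.

```python
# Local dynamo number D = alpha * S * h^3 / beta^2 (dimensionless).
# Units: alpha [km/s], S [km/s/kpc], h [kpc], beta [kpc km/s].
alpha = -3.0    # km/s; anomalous sign at z > 0 (see Section 6), assumed value
S = -30.0       # km/s/kpc; flat rotation curve, S = -Omega
h = 0.5         # kpc; assumed scale height
tau0 = 0.03     # in kpc/(km/s), i.e. about 30 Myr
u2 = 25.0       # km^2/s^2; assumed mean squared fluctuation velocity
eta = 0.03      # kpc km/s; explicit magnetic diffusivity

beta = tau0 * u2 / 3 + eta       # equation (14): turbulent plus explicit diffusivity
D = alpha * S * h**3 / beta**2   # equation (13)
print(f"beta = {beta:.2f} kpc km/s,  D = {D:.0f}")
```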
Figure 15 shows how the dynamo number varies with \(t\) and \(z\). During the transient phase, \(\langle\bar{\mathbf{u}}^{2}\rangle_{\rm h}\) is relatively low while \(|\alpha|\) is at its maximum. The resulting dynamo number is as large as \(|D|\simeq 10^{4}\). As the system evolves into the nonlinear state, the turbulent diffusivity increases and the dynamo number reduces in magnitude. At \(t=0.6\,{\rm Gyr}\), \(D\) varies from \(4\) near the midplane to \(6\times 10^{3}\) at \(z=1\,{\rm kpc}\). At later times, \(D\) reduces further in magnitude and is larger near the midplane than aloft: at \(t=0.9\,{\rm Gyr}\), \(D=300\) near the midplane and \(9\) at \(z=1\,{\rm kpc}\).
As shown by Ruzmaikin et al. (1980), the \(\alpha\omega\)-dynamo in flat geometry generates oscillatory magnetic fields for \(D>0\), quadrupolar for \(D\gtrsim 180\) and dipolar for \(D\gtrsim 550\). The behaviour of the large-scale magnetic field in Model \(\Omega\)30S is consistent with these results: it is quadrupolar and oscillatory.
## 7 Relative distributions of cosmic rays and magnetic field
Similar to our analysis in Tharakkal et al. (2022a), we present in Table 2 the Pearson cross-correlation coefficient between the fluctuations in the energy densities of the different components in Model \(\Omega\)30S at \(z=0.5\) and \(1\,{\rm kpc}\) for the late nonlinear stage at \(t=2.6\,{\rm Gyr}\), the fluctuations being defined as
\[\epsilon^{\prime}_{\rm m}=\frac{B^{2}-\left\langle B^{2}\right\rangle_{\rm h}}{8\pi}\,,\quad\epsilon^{\prime}_{\rm cr}=\epsilon_{\rm cr}-\left\langle\epsilon_{\rm cr}\right\rangle_{\rm h}\,, \tag{15}\] \[\epsilon^{\prime}_{\rm h}=c_{\rm s}^{2}\left(\rho-\left\langle\rho\right\rangle_{\rm h}\right)\,,\quad\epsilon^{\prime}_{\rm k}=\tfrac{1}{2}\rho\bar{\mathbf{u}}^{2}-\left\langle\tfrac{1}{2}\rho\bar{\mathbf{u}}^{2}\right\rangle_{\rm h}\,.\]
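The entries of Table 2 are ordinary Pearson coefficients between pairs of these fluctuation fields evaluated on a horizontal plane at fixed \(z\). A minimal sketch, with toy arrays standing in for the simulation slices:

```python
import numpy as np

def pearson_r(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation coefficient between two fluctuation fields."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# Toy horizontal slices at a fixed z, standing in for eps'_m and eps'_cr;
# the second field is constructed to be anti-correlated with the first.
rng = np.random.default_rng(4)
eps_m = rng.normal(size=(64, 64))
eps_cr = -0.5 * eps_m + rng.normal(size=(64, 64))
print(f"r(eps_m, eps_cr) = {pearson_r(eps_m, eps_cr):+.2f}")
```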
The only significant entry in the table is the anti-correlation between the magnetic and cosmic ray energy fluctuations at \(z=1\,{\rm kpc}\) where their contribution to the total pressure is noticeable (see Section 8). There are no signs of energy equipartition between cosmic rays and magnetic fields at kiloparsec scales; nor are there indications of equipartition at the turbulent scales, for either cosmic ray protons (Seta et al. 2018) or electrons (Tharakkal et al. 2022b).
## 8 Vertical flows and force balance
Rotation affects significantly the vertical gas flow driven by the instability. As discussed by Tharakkal et al. (2022a) (and also seen in Model \(\Omega\)00N), a systematic gas outflow is transient without rotation and only occurs during the early nonlinear stage. Figure 16 shows the horizontally averaged vertical velocity \(\langle u_{z}\rangle_{\rm h}\) in Models \(\Omega\)30N (solid-body rotation) and \(\Omega\)30S (differential rotation). In both cases, systematic vertical flows occur at \(|z|\gtrsim 1\,{\rm kpc}\). The solid-body rotation (Fig. 16a) changes the structure of the flow little in
|  | \(\epsilon^{\prime}_{\rm h}\) | \(\epsilon^{\prime}_{\rm cr}\) | \(\epsilon^{\prime}_{\rm m}\) | \(\epsilon^{\prime}_{\rm k}\) |
| --- | --- | --- | --- | --- |
| \(\epsilon^{\prime}_{\rm h}\) | 1, 1 | 0.2, -0.03 | -0.02, -0.2 | -0.14, 0.12 |
| \(\epsilon^{\prime}_{\rm cr}\) |  | 1, 1 | -0.4, -0.8 | 0.2, 0.05 |
| \(\epsilon^{\prime}_{\rm m}\) |  |  | 1, 1 | -0.29, -0.1 |
| \(\epsilon^{\prime}_{\rm k}\) |  |  |  | 1, 1 |

Table 2: The cross-correlation coefficient \(r\) of the fluctuations in various energy densities in the statistically steady state of Model \(\Omega\)30S at \(t=2.6\,{\rm Gyr}\), presented as \(a\), \(b\), where \(a\) and \(b\) refer to \(z=0.5\) and \(1\,{\rm kpc}\), respectively.
Figure 15: The evolution and vertical variation of the dynamo number of equation (13) in Model \(\Omega\)30S.
comparison with the non-rotating system, with a transient outflow during the early nonlinear stage and a weak inflow at later times. In Model \(\Omega\)30N, the maximum outflow speed is \(|\langle u_{z}\rangle_{\rm h}|=9\,{\rm km\,s^{-1}}\) at \(t=0.7\,{\rm Gyr}\), followed by an inflow at the speed \(|\langle u_{z}\rangle_{\rm h}|=7\,{\rm km\,s^{-1}}\) at \(t>1.4\,{\rm Gyr}\). However, differential rotation not only dramatically changes the magnetic field structure and evolution (Fig. 5), but also supports a prolonged period of a systematic gas outflow at \(0.6\lesssim t\lesssim 3\,{\rm Gyr}\), which eventually evolves into a weak gas inflow at large \(|z|\) (Fig. 16b). The maximum outflow speed in Model \(\Omega\)30S is \(|\langle u_{z}\rangle_{\rm h}|=7\,{\rm km\,s^{-1}}\) at \(t=0.6\,{\rm Gyr}\) and large \(|z|\), while the later inflow speed is \(|\langle u_{z}\rangle_{\rm h}|=1\,{\rm km\,s^{-1}}\) at \(t\gtrsim 3\,{\rm Gyr}\).
The pattern of the vertical flows shown in Fig. 16b is not dissimilar to the structure of the magnetic field shown in Fig. 5e-f and the dynamo number (Fig. 15) -- especially at later stages, \(t\gtrsim 3\,{\rm Gyr}\) -- suggesting that the magnetic field contributes noticeably to the vertical flow in Model \(\Omega\)30S.
To understand what drives the vertical flows, we present in Fig. 17 the vertical forces acting during various evolutionary stages of Model \(\Omega\)30S. It is instructive to compare them with those in non-rotating systems discussed by Tharakkal et al. (2022a). Without rotation, as in Model \(\Omega\)00N (see also Fig. 12 of Tharakkal et al. 2022a), both magnetic and cosmic ray pressures are reduced significantly as the system evolves into the nonlinear state, and the vertical gas flows are driven by the thermal pressure gradient. This changes in Model \(\Omega\)30S, where the magnetic field and, to a lesser extent, cosmic rays make a stronger contribution to the force balance. Moreover, the gravity force and the thermal pressure gradient balance each other almost completely in the nonlinear state, so that the weaker magnetic and cosmic ray pressures appear to be capable of controlling the vertical velocity pattern, especially at \(|z|\gtrsim 0.5\,{\rm kpc}\). This is illustrated in Fig. 18, which shows that the vertical variations of the net vertical force per unit mass are indeed similar in detail to those of the magnetic pressure gradient.
The magnetic and cosmic ray pressure gradients are weak because both non-thermal components of the simulated ISM are much less stratified than the thermal gas. However, their energy densities are large and they dominate over the thermal gas at \(|z|\gtrsim 0.5\)-1 kpc. Figure 19 shows the vertical profiles of the horizontally averaged ratios of the magnetic and cosmic ray pressures to the thermal pressure, \(\beta_{\rm m}\) and \(\beta_{\rm cr}\) respectively, defined as in equation (12) but for the evolving quantities. Although each non-thermal pressure component is subdominant near the midplane at all stages of the evolution, each of them exceeds the thermal pressure at larger altitudes as soon as the instability becomes nonlinear, \(t\gtrsim 0.6\,{\rm Gyr}\). It is useful to compare Fig. 19 with Fig. 18 of Tharakkal et al. (2022a): rotation somewhat reduces the magnitudes of \(\beta_{\rm m}\) and \(\beta_{\rm cr}\) at large \(|z|\) but leads to the dominance of the non-thermal pressure components at smaller values of \(|z|\) than in a non-rotating system, and to a larger contribution from cosmic rays.
## 9 Discussion and Conclusions
Differential rotation affects the nonlinear state of the Parker instability more strongly than its linear properties. Without rotation, the system loses most of its magnetic field and cosmic rays as it evolves towards the steady state. A solid-body rotation does not change the nonlinear state significantly. However, differential rotation allows the system to retain both the magnetic field and cosmic rays better. The reason is the dynamo action (present also under solid-body rotation but significantly enhanced by differential rotation), which produces a strong (about 2-3 \(\mu\)G) large-scale magnetic field both near the midplane and at large altitudes. As a result, cosmic rays (governed by anisotropic diffusion) spend a longer time within the system.
The systematic vertical gas flows are also affected by the rotation, which prolongs the transient outflow at a speed \(|\langle u_{z}\rangle_{\rm h}|=7\,{\rm km\,s^{-1}}\) to the time interval \(0.6\lesssim t\lesssim 3\,{\rm Gyr}\). It appears that the magnetic field contributes significantly to driving the outflow. Meanwhile, cosmic rays do not play any significant role in driving the outflow at the scales explored here, \(|z|\lesssim 1.5\,{\rm kpc}\): because of the large diffusivity of cosmic rays, the vertical gradient of their pressure is very small.
Another dramatic effect of the dynamo action is that it leads to a reversal of the large-scale magnetic field, in what appears to be a sign of its nonlinear oscillations. Neither the Parker instability nor the dynamo is oscillatory by itself. We have identified the rather subtle mechanism of the reversal and argue that it is an essentially nonlinear phenomenon.
The reversal of the large-scale magnetic field is also reflected in its spatial distribution. The reversal starts near the midplane and then the reversed magnetic field spreads to larger altitudes (see Fig. 5e-f). As a result, the direction of the large-scale magnetic field reverses along \(z\) at any given time. An arguably similar pattern of regions with the sign of the Faraday depth alternating along the direction perpendicular to the disc plane is observed in the edge-on galaxy NGC 4631 (Mora-Partiarroyo et al., 2019). The comparison of Figs 5e-f and 5c-d shows that the Parker instability in a dynamo-active system can produce rather complicated magnetic field structures. Our use of horizontal averages in Fig. 5 and elsewhere in the text conceals strong localised vertical magnetic fields typical of the magnetic buoyancy (see, e.g., Fig. 1), also observed in NGC 4631. Because of the low gas density at kpc-scale distances from the galactic midplane, observations of the Faraday rotation produced there are difficult; the observations of Mora-Partiarroyo et al. (2019) are the first of this kind, and future observations should show how widespread such complex patterns are. Further observational and theoretical studies of large-scale magnetic fields outside the discs of spiral galaxies promise new, unexpected
Figure 16: The evolution and variation with \(z\) of the horizontally averaged vertical velocity \(\langle u_{z}\rangle_{\rm h}\) in Models **(a)**\(\Omega\)30N and **(b)**\(\Omega\)30S.
insights into the dynamics of the interstellar gas and its magnetic fields.
An unusual feature of our results, which needs further effort to be understood, is that the mean kinetic helicity of the flows driven by the Parker and magnetic buoyancy instabilities is positive in the upper half-space, \(z>0\), and thus has the sign opposite to that in conventional stratified, rotating, non-magnetised systems. We note that positive kinetic helicity also occurs in some earlier studies of the mean-field dynamo action and \(\alpha\)-effect in magnetically-driven systems. However, this remarkable circumstance, which can have profound -- and poorly understood -- consequences for our understanding of the nature of large-scale magnetic fields outside galactic discs, has attracted relatively little attention.
## Acknowledgements
We are grateful to Axel Brandenburg and Kandaswamy Subramanian for useful discussions. G.R.S. would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme 'Frontiers in dynamo theory: from the Earth to the stars', where work on this paper was undertaken. This work was supported by EPSRC grant no. EP/R014604/1.
## Data availability
The raw data for this work were obtained from numerical simulations using the open-source PENCIL-CODE available at [https://github.com/pencil-code/pencil-code.git](https://github.com/pencil-code/pencil-code.git). The derived data used for the analysis are available on request from Devika Tharakkal.
Figure 17: The vertical profiles of the horizontally averaged vertical forces in Model \(\Omega\)30S normalised to the maximum magnitude of the gravitational force (dashed, repeated in all panels for reference): thermal (solid), cosmic ray (dotted) and magnetic (dash-dotted) pressure gradients. The contribution of the magnetic tension is much weaker, so it is not shown. Each panel represents a different evolutionary stage: **(a)**\(t=0.3\,\mathrm{Gyr}\) (linear instability), **(b)**\(0.6\,\mathrm{Gyr}\) (transitional); **(c)**\(1.6\,\mathrm{Gyr}\) (nonlinear state when the magnetic field has just reversed near \(z=0\)) and **(d)**\(3.6\,\mathrm{Gyr}\) (late nonlinear stage).
Figure 18: **(a)** The total vertical force per unit mass and **(b)** the resulting vertical velocity at times \(t=0.6\) (solid), \(1.6\) (dotted), \(2.6\) (dashed) and \(3.6\,\mathrm{Gyr}\) (dash-dotted).
# Understanding the Complexity and Its Impact on Testing in ML-Enabled Systems

Junming Cao, Bihuan Chen, Longjie Hu, Jie Gao, Kaifeng Huang, Xin Peng

2023-01-10. http://arxiv.org/abs/2301.03837v1
###### Abstract
Machine learning (ML) enabled systems are emerging with recent breakthroughs in ML. A model-centric view is widely taken by the literature to focus only on the analysis of ML models. However, only a small body of work takes a system view that looks at how ML components work with the system and how they affect software engineering for ML-enabled systems. In this paper, we adopt this system view, and conduct a case study on Rasa 3.0, an industrial dialogue system that has been widely adopted by various companies around the world. Our goal is to characterize the complexity of such a large-scale ML-enabled system and to understand the impact of the complexity on testing. Our study reveals practical implications for software engineering for ML-enabled systems.
## I Introduction
The recent advances in machine learning (ML) have attracted an increasing interest in applying ML across a breadth of business domains, e.g., self-driving cars, virtual assistants, robotics, and health care. According to the Global AI Adoption Index by IBM [34], 35% of companies around the world have deployed AI in their business, while 42% of companies are exploring AI. Such a trend has caused the emergence of ML-enabled systems which are composed of ML and non-ML components. ML components are often important, but usually only a part of many components in ML-enabled systems [38].
The previous research on software engineering for machine learning often takes a model-centric view that focuses only on the analysis of ML models [38, 55]. For example, many advances have been made for DL model testing (e.g., [4, 19, 25, 40, 59, 68, 72, 82, 84]), verification (e.g., [57, 58, 66, 73, 9, 43]) and debugging (e.g., [43, 49, 54, 71]). Only a small body of work takes a holistic system view, e.g., architectural design [64, 80], technical debt [63, 70], ML component entanglement [83, 78, 53], feature interaction [1, 2, 8], and model interactions in Apollo [60]. However, the lack of system-level understanding of ML-enabled systems may hide problems in engineering ML-enabled systems and hinder practical solutions.
In this paper, we adopt this system view, and conduct a case study on Rasa 3.0 [11] to characterize the complexity of such a large-scale ML-enabled system as well as to understand the impact of the complexity on testing. Rasa is a task-oriented industrial dialogue system that has been widely used by various companies around the world. Therefore, we believe Rasa is a good representative of real-world ML-enabled systems.
We first investigate the complexity of Rasa at three levels. At the system level, we explore how ML components are adopted across the modules in Rasa. We find that there are 23 ML models in 15 ML components across 6 modules. At the interaction level, we analyze how ML components interact with other components in Rasa. We find that there are 43 interaction patterns and 230 interaction instances across 4 major categories and 8 inner categories. At the component level, we investigate what kinds of code the ML components are composed of. We find that 57.1% of the code inside components is data processing code, and there are 8 composition patterns between data processing code and model usage code.
We then explore the impact of the complexity on testing from two perspectives. From the testing practice perspective, we analyze the characteristics of test cases and how well they cope with the complexity. We find that the test coverage of component interactions is low because of the complexity from the huge configuration space and from hidden component interactions. From the mutation testing perspective, we study the bug-finding capability of test cases and test data (i.e., the data for testing models) and how well they cope with the complexity. We find that there may be many potential bugs in data processing code that can only be detected by test cases, due to the complexity from data processing code. The capability of test data to kill mutants is limited because of the complexity from the huge configuration space.
Based on our case study, we highlight practical implications to improve software engineering for ML-enabled systems. For example, the configuration space of ML-enabled systems should be tested adequately, and configuration suggestions should be provided to developers. A general taxonomy of data processing code should be constructed, and then the maintaining and testing tools for it can be developed. More integration-level test cases should be created to cover component interactions. Test cases and test data should be used in combination to detect both non-ML specific and ML-specific bugs.
In summary, this paper makes the following contributions.
* We conduct an in-depth case study on Rasa to characterize its complexity and the impact of its complexity on testing.
* We highlight practical implications to improve software engineering for ML-enabled systems.
## II Background and Study Design
We present the architecture of a typical task-oriented dialogue systems, an overview of Rasa, and our study design.
### _Architecture of a Typical Task-Oriented Dialogue System_
A task-oriented dialogue system (TDS) aims to assist users in performing specific tasks, such as restaurant booking and flight booking [15]. A pipeline-based TDS consists of four parts, i.e., natural language understanding (NLU), dialogue state tracking (DST), dialogue policy (Policy) and natural language generation (NLG) [86]. NLU parses a user utterance into a structured semantic representation, including intent and slot-values. The intent is a high-level classification of the user utterance, such as Inform and Request. Slot-values are task-specific entities that are mentioned in the user utterance, such as restaurant price range and location preference. After tokenization and featurization of the user utterance, NLU applies classification models to recognize intent, and named entity extraction models to extract slot-values. DST takes the entire history of the conversation, including both user utterances with predicted intents and slot-values and system responses, to estimate the current dialogue state, which is usually formulated as a sequential prediction task [77]. Dialogue state is typically the probability distribution of user intent and slot-values till the current timestamp. Given the estimated dialogue state, Policy generates the next system action, such as Query Database and Utter Question. As Policy determines a series of actions sequentially, sequential models such as Recurrent Neural Network (RNN) are applied. For actions that require a response, NLG converts the action into a natural language utterance, which is often considered as a sequence generation task [76].
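As an illustration of the structured semantic representation produced by NLU, the following minimal sketch encodes the intent and slot-values of a restaurant-booking utterance; the class and field names are our own illustrative choices, not part of any particular TDS framework.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class NLUResult:
    """Structured semantic representation of one user utterance."""
    intent: str                                           # e.g., "Inform" or "Request"
    confidence: float                                     # classifier confidence for the intent
    slots: Dict[str, str] = field(default_factory=dict)   # task-specific entities (slot-values)

# "I'm looking for a moderately priced restaurant in the centre."
parsed = NLUResult(
    intent="Inform",
    confidence=0.93,
    slots={"price_range": "moderate", "area": "centre"},
)
```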
### _An Overview of Rasa 3.0_
Rasa is a popular open-source ML-enabled TDS, which is fully implemented with Python and used by many well-known companies in customer service for real users, including Adobe, Airbus, and N26 [11]. An architecture overview of Rasa 3.0 is shown in Fig. 1. Each module consists of one single component or multiple semantically similar components. Apart from the modules in a typical TDS, Rasa proposes the Selector module to select candidate intents and responses for FAQ questions [16]. We present some concepts in Rasa 3.0 to ease our presentation.
**Components in Rasa.** There are two types of components in Rasa. We define _ML components_ as components that are implemented with ML models, and _rule-based components_ as components that are implemented with rule-based code logic. General utils code in Rasa is not considered in this paper, such as command line and database access code.
**Configuration File and Component Graph in Rasa.** As there are multiple available components in each module, developers need to choose components that are actually used in the Rasa pipeline with a _configuration file_ to build a chatbot. Parameters of each component are specified in the configuration file (e.g., the ML model used by a component and the hyperparameters of an ML model). Rasa applies Dask [6] to compile a configuration file into a component graph. Each node in the component graph denotes a component, and the edges connected to it denote upstream and downstream components with input and output data dependencies. Execution of components obeys the topological order specified by the edges. These components interact with each other through fields in shared Message class instances. An upstream component stores its outputs in Message instances, and a downstream component retrieves them for further processing.
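To make the component graph and Message-based data exchange concrete, the following simplified sketch mimics the idea with two stub components executed in topological order; the class names and the `requires`/`provides` fields are illustrative simplifications and do not reproduce Rasa's actual interfaces or its Dask-based execution.

```python
from typing import Any, Dict, List

class Message:
    """Shared data container: upstream components write fields, downstream ones read them."""
    def __init__(self, text: str):
        self.data: Dict[str, Any] = {"text": text}

class WhitespaceTokenizerStub:
    requires, provides = [], ["tokens"]
    def process(self, msg: Message) -> None:
        msg.data["tokens"] = msg.data["text"].split()

class CountVectorsFeaturizerStub:
    requires, provides = ["tokens"], ["features"]
    def process(self, msg: Message) -> None:
        tokens = msg.data["tokens"]
        msg.data["features"] = {t: tokens.count(t) for t in set(tokens)}  # toy bag of words

def run_pipeline(components: List[Any], msg: Message) -> Message:
    # assume `components` is already sorted topologically by their data dependencies
    for comp in components:
        missing = [r for r in comp.requires if r not in msg.data]
        if missing:
            raise ValueError(f"{type(comp).__name__} is missing inputs: {missing}")
        comp.process(msg)
    return msg

msg = run_pipeline([WhitespaceTokenizerStub(), CountVectorsFeaturizerStub()],
                   Message("book a table for two"))
```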
**ML Stages in Rasa.** Unlike Apollo, which uses trained model files from external systems and therefore only contains the inference stage of ML models [60], Rasa contains all of the training, evaluation and inference stages of ML models. Given a configuration file, Rasa separately compiles it into a training component graph and an inference component graph. In the training stage, the trainable upstream components are first trained, and then process the training data used by downstream components. In the evaluation stage, only the performance metrics of _IntentClassifier_, _EntityExtractor_ and _Policy_ are reported, as there is no ground truth for evaluation data in other modules.
### _Study Design_
Our goal is to understand the complexity and its impact on testing in Rasa. To achieve this goal, we propose five RQs.
* **RQ1 System Complexity Analysis**: how are ML components adopted across the modules in Rasa?
* **RQ2 Interaction Complexity Analysis**: how do ML components interact with other components in Rasa?
* **RQ3 Component Complexity Analysis**: what kinds of code are the ML components composed of?
* **RQ4 Testing Practice Analysis**: what are the characteristics of test cases, and how well do they cope with the complexity?
* **RQ5 Mutation Testing Analysis**: how strong is the bug-finding capability of test cases and test data (i.e., the data for testing models), and how well do they cope with the complexity?
**RQ1** aims to identify ML components in Rasa and broadly view them from the perspective of dependent libraries and ML models. **RQ2** aims to summarize a comprehensive taxonomy of component interaction patterns. **RQ3** aims to inspect the source code inside every component to characterize the statistics and composition patterns of different code types, including data processing code, model usage code, etc. Our findings from **RQ1**, **RQ2** and **RQ3** could reveal how the complexity originates and manifests in real world large-scale ML-enabled systems, which provide both practitioners and researchers with insights to overcome the complexities involved in implementing, maintaining, debugging and testing such complex systems.
**RQ4** aims to quantitatively assess Rasa's test cases from code coverage, test case statistics (i.e., granularity levels, oracle types, and ML stages), and component interaction coverage perspectives. **RQ5** aims to generate mutants (i.e., artificial bugs) and check whether these mutants can be killed (i.e., detected) by test cases. Further, for the survived mutants, we train Rasa with 3 default configuration files on _MultiWOZ_ [15], a widely used multi-domain TDS dataset. We calculate the statistical significance of the difference between the performance metrics from the mutated Rasa code and those from pipelines trained with clean code. Our findings from **RQ4** and **RQ5** evaluate the testing practice in Rasa, and
shed light on automated test generation, bug localization and bug repairing techniques for complex ML-enabled systems.
## III RQ1: System Complexity Analysis
### _Methodology_
To answer **RQ1**, we identified ML and rule-based components in Rasa and characterized them through a detailed examination of Rasa's source code and documentation. We excluded DST and NLG as they are fully implemented with rule-based code logic in Rasa without ML components.
All the modules we identified are listed in Table I, except for a special module, _Shared_, as it contains general data processing code and ML model definition code (e.g., Transformer) but does not contain any independent components. We will include it in the last three RQs. Specifically, for each component, we recursively tracked methods within it to manually extract the model or rule definition code. We examined implementation details of APIs in ML libraries by reading the documentation and source code of external libraries, including ML model type and number of candidate models.
In particular, we analyzed whether ML components are implemented by using external direct libraries or indirect libraries, whether the components can be trained (notice that not only some of ML components can be trained, but also some rule-based components can be trained as long as they update internal parameters when processing training data), whether Rasa implements components with its own code and provides built-in model and rule definition code, and the lines of code (LoC) of each component excluding blank lines, code comments and import statements.
### _Results_
The results are summarized in Table I. Components shown in gray color are ML components, and others are rule-based components. There are 6 modules in total, including 15 ML and 14 rule-based components. These components contain 23 ML models and are implemented with 7 directly dependent external ML libraries and 3 indirectly dependent external ML libraries. In particular, all ML components in _Tokenizer_ and _Featurizer_ are not trainable because pre-trained language models are applied. All components in _Policy_ are implemented in Rasa's own code, because there are no ready-to-use Policy models provided by existing libraries. There are a total of 5348 LoC in ML components and 2980 LoC in rule-based components. In addition, the general module _Shared_ contains 5375 LoC, which is not listed in the table.
Notably, we find that classical machine learning models (e.g., Support Vector Machine and Conditional Random Field) together with deep learning models (e.g., Convolutional Neural Networks and Transformer) play an important role in Rasa. This is different from the previous study [60] on Apollo, which is focused on deep learning models.
Next, we introduce components used in each module.
**Tokenizer.** Tokenizer splits the user utterance into tokens with component-specific split symbols (e.g., whitespace and punctuation). (1) _SpacyTokenizer_ provides the richest token information, including splitting tokens with rules, lemmatizing tokens with a look-up table, and performing part-of-speech tagging with a multi-layer perceptron (MLP). (2) _JiebaTokenizer_ is the only component that tokenizes non-English sentences, using a Hidden Markov Model (HMM) [20]. (3) _MitieTokenizer_ and _WhitespaceTokenizer_ tokenize text with predefined rules.
**Featurizer.** As shown in Fig. 1, Featurizer converts tokens into features for downstream module inference. (1) _ConveRTFeaturizer_ loads TFHub's [27] pre-trained ConveRT (Conversational Representations from Transformers) TensorFlow model to featurize tokens [29]. (2) _LanguageModelFeaturizer_ loads
Fig. 1: The Modules and Workflow of Rasa
pre-trained language models from Hugging Face Transformers [22], including BERT [17], GPT [31], XLNet [79], Roberta [46], XLM [41] and GPT2 [56]. (3) _MitieFeaturizer_ combines Canonical Correlation Analysis (CCA) features and word morphology features. (4) _SpacyFeaturizer_ applies HashEmbedCNN or Roberta to convert tokens to features, depending on the pre-trained Spacy pipeline specified in the configuration file. (5) _CountVectorsFeaturizer_, _LexicalSyntacticFeaturizer_ and _RegexFeaturizer_ create sparse features with n-grams, a sliding window and regex patterns, respectively.
**IntentClassifier.** IntentClassifier generates a predicted intent list ordered by confidence scores based on tokens and features from upstream modules. (1) _DIETClassifier_ implements Dual Intent and Entity Transformer (DIET) to perform intent classification and entity recognition simultaneously, and is therefore included in both **IntentClassifier** and **EntityExtractor** modules. (2) _MitieIntentClassifier_ and _SklearnIntentClassifier_ apply a multi-class Support Vector Machine (SVM) [65] with a sparse linear kernel using Mitie and Scikit-learn, respectively. (3) _KeywordIntentClassifier_ classifies user intent with keywords extracted from training data. (4) _FallbackClassifier_ is a post-processing component that checks the results of other IntentClassifier components such as _DIETClassifier_. It identifies a user utterance with the intent nlu_fallback if the confidence scores are not greater than the threshold, or the score difference of the two highest-ranked intents is less than the ambiguity_threshold.
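The fallback check of _FallbackClassifier_ can be summarised by the following sketch; it restates the documented behaviour in simplified form, and the default values of the two thresholds are placeholders rather than Rasa's defaults.

```python
def needs_fallback(ranked_intents, threshold=0.7, ambiguity_threshold=0.1):
    """ranked_intents: list of (intent_name, confidence) sorted by confidence, descending.

    Returns True if the utterance should be labelled `nlu_fallback`: either the top
    confidence does not exceed `threshold`, or the two highest-ranked intents are too
    close to each other (difference below `ambiguity_threshold`).
    """
    top_name, top_conf = ranked_intents[0]
    if top_conf <= threshold:
        return True
    if len(ranked_intents) > 1 and top_conf - ranked_intents[1][1] < ambiguity_threshold:
        return True
    return False

assert needs_fallback([("inform", 0.55), ("request", 0.30)])      # low confidence
assert needs_fallback([("inform", 0.81), ("request", 0.78)])      # ambiguous ranking
assert not needs_fallback([("inform", 0.95), ("request", 0.02)])  # confident prediction
```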
**EntityExtractor.** EntityExtractor extracts entities such as the restaurant's location and price. (1) _DIETClassifier_ also serves as an EntityExtractor. (2) _CRFEntityExtractor_, _MitieEntityExtractor_ and _SpacyEntityExtractor_ utilize a conditional random field (CRF) model, a multi-class linear SVM, and an MLP to predict entities, respectively. (3) _DucklingEntityExtractor_ and _RegexEntityExtractor_ extract entities using a duckling server [23] and regex patterns. (4) _EntitySynonymMapper_ is a post-processing component to convert synonymous entity values into the same value. As Fig. 1 shows, the value of the "price" entity, "moderate", is converted to "mid" by _EntitySynonymMapper_.
**Selector.**_ResponseSelector_ aims to directly select the response from a set of candidate responses, which is also known as response selection task in the literature [16]. It embeds user inputs and candidate responses in the same vector space, using the same neural network architecture as _DIETClassifier_.
**Policy.** Policy decides the action the system takes at each conversation turn based on the dialogue state. (1) _TEDPolicy_ proposes a Transformer Embedding Dialogue (TED) model to embed dialogue states and system actions into a single semantic vector space, and selects the action with the maximum similarity score to the current dialogue state [75]. (2) _MemoizationPolicy_, _AugmentedMemoizationPolicy_ and _RulePolicy_ match the current conversation history with examples in the training data and predefined rules to predict system actions. (3) _UnexpecTEDIntentPolicy_ decides how likely the intent predicted by IntentClassifier is given the current dialogue state, and follows the same model architecture as _TEDPolicy_. (4) _PolicyEnsemble_ is a post-processing component to select the proper system action from the output actions of different policies.
\begin{table}
\begin{tabular}{l|l l l l l l l l} \hline \hline
**Module** & **Component** & **Direct Lib.** & **Indirect Lib.** & **Model Type** & **No. Model** & **Trainable** & **Rasa Imp.** & **LoC** \\ \hline \multirow{4}{*}{Tokenizer} & JiebaTokenizer & Jieba & N/A & HMM & 1 & False & False & 85 \\ & SpacyTokenizer & Spacy & Thine & MLP & 1 & False & False & 39 \\ & MitieTokenizer & Mitie & N/A & N/A & N/A & False & False & 43 \\ & WhitespaceTokenizer & N/A & N/A & N/A & N/A & False & True & 52 \\ \hline \multirow{4}{*}{Feartizer} & ConvRTFeaturizer & TensorFlow & N/A & Transformer & 1 & False & False & 269 \\ & LanguageModelFeaturizer & Transformers & TensorFlow & Transformer & 6 & False & False & 378 \\ & MitieFeaturizer & Mitie & Dlib & CCA & 1 & False & False & 98 \\ & SpacyFeaturizer & Spacy & Thine & CNN & 2 & False & False & 66 \\ \cline{2-8} & CountVectorsFeaturizer & Scikit-learn & N/A & N/A & N/A & True & False & 520 \\ & LexicalSyntacticFeaturizer & N/A & N/A & N/A & N/A & True & True & 319 \\ & RegezFeaturizer & N/A & N/A & N/A & N/A & True & True & 151 \\ \hline \multirow{4}{*}{IntentClassifier} & DIETClassifier & TensorFlow & N/A & Transformer & 1 & True & True & 1217 \\ & MitieIntentClassifier & Mitie & Dlib & SVM & 1 & True & False & 89 \\ & SklearnIntentClassifier & Scikit-learn & N/A & SVM & 1 & True & False & 173 \\ & FullbackClassifier & N/A & N/A & N/A & N/A & False & True & 91 \\ & KeywordIntentClassifier & N/A & N/A & N/A & N/A & True & True & 132 \\ \hline \multirow{4}{*}{EntityExtractor} & DIETClassifier & TensorFlow & N/A & Transformer & 1 & True & True & 1217 \\ & CRFEntityExtractor & Scikit-learn & N/A & CRF & 1 & True & False & 438 \\ & MitieEntityExtractor & Mitie & Dlib & SVM & 1 & True & False & 164 \\ & SpacyEntityExtractor & Spacy & Thine & MLP & 1 & False & False & 52 \\ & bucklingEntityExtractor & N/A & N/A & N/A & N/A & False & False & 134 \\ & RegezEntityExtractor & N/A & N/A & N/A & N/A & True & True & 124 \\ & EntitySynonymMapper & N/A & N/A & N/A & N/A & True & True & 102 \\ \hline Selector & ResponseSelector & TensorFlow & N/A & Transformer & 2 & True & True & 560 \\ \hline \multirow{4}{*}{Policy} & TEDPolicy & TensorFlow & N/A & Transformer+CRF & 1 & True & True & 1262 \\ & UnexpecTEDIntentPolicy & TensorFlow & N/A & Transformer+CRF & 1 & True & True & 458 \\ & MemoizationPolicy & N/A & N/A & N/A & N/A & True & True & 207 \\ & AugmentedMemoizationPolicy & N/A & N/A & N/A & N/A & True & True & 65 \\ & RulePolicy & N/A & N/A & N/A & N/A & True & True & 818 \\ & PolicyEnsemble & N/A & N/A & N/A & N/A & False & True & 150 \\ \hline \hline \end{tabular}
\end{table} TABLE I: System Complexity Analysis of Rasa
### _Implications_
The system complexity of Rasa poses challenges for developers using Rasa (i.e., application developers) and developers creating Rasa (i.e., system developers).
**Complexity from ML supply chain.** Rasa depends on 10 external ML libraries directly or indirectly. Less than 100 (0.03%) projects out of 355392 projects using TensorFlow on GitHub depend on 10 or more DL libraries [69]. It could be inferred that relying on 10 or more ML libraries is also uncommon. For application developers, it is difficult to understand the implementation details of components that rely on external ML libraries, not to mention selecting proper components and parameters. For example, due to the lack of documentation of _MitieFeaturizer_ in Rasa, application developers need to inspect Mitie's source code to learn that it implements CCA using Dlib APIs. For system developers, vulnerabilities [81] and dependency bugs [32] may arise because of outdated or incompatible library versions. Therefore, future work should provide support for the management of components and corresponding dependent ML libraries for ML-enabled systems, similar to traditional software component analysis [26].
**Complexity from configurations.** It could be extremely complex to configure Rasa with 29 components and hundreds of parameters, making it easy to misconfigure and thus affect functionality and performance. This kind of misconfiguration is similar to what happens in traditional configurable software systems [74]. Additionally, finding optimal configurations for application developers' specific TDS scenarios is difficult, also known as configuration debt [63]. Although AutoML has been extensively studied to select appropriate ML models and parameters for specific tasks, they all focus on selecting a single ML model without considering the combination of multiple ML models and rules [28]. Another challenge is to detect potential bugs by testing a huge set of configuration settings. Existing studies on ML model testing mainly focus on testing a single ML model with predefined hyperparameters [82].
## IV RQ2: Interaction Complexity Analysis
### _Methodology_
The workflow in Fig. 1 only shows the general flow among different modules. The details of interactions among components are still uncovered. We consider the interaction among two or more components, with at least one ML component. An interaction pattern contains a module placeholder, which could be instantiated with components in the module to generate interaction instances. For example, pattern (PolicyEnsemble, [Policy]) could be instantiated as (PolicyEnsemble, TEDPolicy) or (PolicyEnsemble, (TEDPolicy, RulePolicy)). To answer **RQ2**, we conducted a qualitative and quantitative analysis of the component interaction patterns and instances of components.
**Step 1: Extract interaction patterns.** The interaction can be divided into two categories: inter-module interaction and intra-module interaction. (1) Inter-module interaction: the interaction between two adjacent modules (e.g., Featurizer with Tokenizer) was considered. We analyzed the usage of Message in the component code because components use the Message class to transfer data. Specifically, for each component, we extracted all interaction patterns between it and its upstream and downstream components. We also considered the interaction between post-processing components (i.e., _FallbackClassifier_, _EntitySynonymMapper_ and _PolicyEnsemble_) and other components in their residing modules as inter-module interaction. (2) Intra-module interaction: we identified the interaction patterns for components in each module.
**Step 2: Generate interaction instances.** For each inter-module interaction pattern, we instantiated the module placeholder with every component in the module. For every intra-module interaction, we extracted the Cartesian product of all components in each module as interaction instances. We then filtered out component instances that do not contain ML components, or do not meet the constraints specified in Rasa documentation. For example, _CRFEntityExtractor_ could not use features of _SparseFeaturizer_ other than _RegexFeaturizer_.
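A minimal sketch of Step 2 is shown below: module placeholders in a pattern are expanded into concrete components via a Cartesian product, and instances without ML components or violating documented constraints are filtered out. The module dictionary is abbreviated and the helper names are our own.

```python
from itertools import product

MODULES = {
    "Tokenizer":  ["JiebaTokenizer", "SpacyTokenizer", "MitieTokenizer", "WhitespaceTokenizer"],
    "Featurizer": ["ConveRTFeaturizer", "LanguageModelFeaturizer", "MitieFeaturizer",
                   "SpacyFeaturizer", "CountVectorsFeaturizer",
                   "LexicalSyntacticFeaturizer", "RegexFeaturizer"],
}

def instantiate_pattern(pattern, modules=MODULES):
    """Expand module placeholders such as '[Featurizer]' into concrete interaction instances."""
    slots = []
    for item in pattern:
        if item.startswith("[") and item.endswith("]"):
            slots.append(modules[item[1:-1]])   # placeholder -> all components of that module
        else:
            slots.append([item])                # already a concrete component
    return list(product(*slots))

def keep_instance(instance, ml_components, violates_constraints):
    """Discard instances with no ML component or violating documented usage constraints."""
    return any(c in ml_components for c in instance) and not violates_constraints(instance)

instances = instantiate_pattern(("CRFEntityExtractor", "[Featurizer]"))
```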
**Step 3: Summarize the interaction taxonomy.** For generated component patterns and instances, we analyzed their semantics and summarized a component interaction taxonomy.
### _Results_
The component interaction taxonomy is shown in Fig. 2. It is divided into 4 high-level categories (i.e., _Inter-Module_, _Intra-Module_, _Component Instantiation_ and _Component Inheritance_) and 8 inner categories. Note that only _Inter-Module_ interactions contain components with direct data dependency, while other categories contain components with indirect interactions (e.g., two featurizers are used together). The number of interaction patterns and interaction instances in each category is listed as _pattern_count/instance_count_ in Fig. 2. There are a total of 43 interaction patterns and 230 interaction instances. Almost all categories include both ML-to-ML and ML-to-rule-based component interactions. In comparison, previous work on Apollo [60] also presented 4 of the 8 inner categories, but did not provide a taxonomy and quantitative analysis.
**Inter-Module**. Components from multiple modules interact through data transfers. In particular, _Output Selection_ means that the downstream component selects the proper output from multiple upstream outputs based on configurable criteria, e.g., _PolicyEnsemble_ with policies. _Output Refinement_ denotes that
Fig. 2: Taxonomy of Component Interactions
the downstream component complements the imperfect outputs of upstream components with rules, e.g., _EntitySynonymMapper_ with entity extractors. _Confidence Checking_ means that the downstream component checks reliability of the output from upstream components using ML models (e.g., _UnexpecTEDIntentPolicy_ with intent classifiers) or rules (e.g., _FallbackClassifier_ with IntentClassifiers). If the outputs are marked as not reliable, fallback behaviors such as the fall_back system action are triggered. _Usage Constraints_ defines components that should or should not be used together under certain circumstances. For example, _SpacyTokenizer_ is required by _CountVectorsFeaturizer_ when applying use_lemma option and _LexicalSyntacticFeaturizer_ when applying pos_tag option. _Data Dependency_ includes the rest of inter-module interaction patterns that do not fall into any of the above categories, which are relatively "trivial" interactions with no specific semantics.
**Intra-Module**. The interaction mode of components within a module differs from **Inter-Module**. These components interact indirectly when used together. _Priority Order_ means that the outputs of components within a module are selected according to priority order, e.g., the priority order of policies. _Usage Constraints_ is similar to _Usage Constraints_ in the inter-module category. For example, only one component in any of _Tokenizer_, _IntentClassifier_ and _EntityExtractor_ should be used in each configuration file, otherwise outputs of additional components will be overwritten. _Functionality Equivalence_ includes all intra-module interaction patterns that do not belong to any of the above categories, which are relatively "trivial" interactions involving components used together with no specific semantics.
**Component Instantiation**. Rasa supports creating multiple instances of a component within a configuration setting. For example, multiple _CountVectorFeaturizers_ instances with different ngram settings, and multiple _LanguageModelFeaturizer_ instances with different language models could be used together. We did not count this category of interaction patterns and instances, since developers could specify infinite instances of a component within a configuration setting.
**Component Inheritance**. The class inheritance mechanism allows ML models to be shared among components. For example, ML model definition class in _UnexpecTEDIntentPolicy_ is a subclass of the ML model definition class in _TEDPolicy_.
### _Implications_
**Lack of specifications for interactions.** The outputs of ML components for specific inputs are not guaranteed due to the stochastic nature of ML models [8]. Thus, it is more difficult to formulate interaction semantics in ML-enabled systems than in traditional systems. When testing samples are predicted wrongly, it is challenging to localize the exact faulty component. Moreover, even if the faulty component has been fixed and its performance has been improved, the overall performance of the entire system may degrade [78]. Therefore, training and evaluation should be extended from component-level to system-level to consider interactions among components. In summary, we need to pay more attention to addressing the challenges caused by the lack of specifications in bug localization and repair for ML-enabled systems.
**Hidden interactions**. It is non-trivial to identify all interactions even for system developers of Rasa. For example, the _Data Dependency_ interaction between _RegexFeaturizer_ and _CRFEntityExtractor_ is not documented and can only be identified from source code. Application developers may misuse components and get confused by the poor performance of the system without understanding the hidden interactions, especially for interaction categories like _Usage Constraints_, _Output Selection_ and _Priority Order_. Techniques like data flow analysis can be explored to automatically reveal component interactions in ML-enabled systems [62].
Furthermore, our results on component interaction complexity could be helpful to guide developers to build a better ML-enabled system. For example, developers can follow interaction patterns _Output Selection_ and _Output Refinement_ to improve the outputs of components at system level, as well as utilizing _Confidence Checking_ to detect cases that ML models can not handle, and then triggering fallback rules, which is very important in safety-critical systems like self-driving systems [60].
## V RQ3: Component Complexity Analysis
### _Methodology_
To answer **RQ3**, we classified categories of code snippets in each component and explored their composition patterns.
**Step 1: Label code snippets.** We segmented each source code file into code snippets according to semantic meaning, and then classified them into 6 categories: (1) model definition, the definition code of ML models; (2) rule definition, the definition code of rules in rule-based components; (3) model usage, the usage code of ML models; (4) rule usage, the usage code of rules; (5) data pre-processing, the input data processing code before model or rule usages; (6) data post-processing, the output data processing code after model or rule usages. Two of the authors labeled code snippets independently, and the third author was involved to resolve disagreements. The Cohen's Kappa coefficient of the two authors reached 0.830.
**Step 2: Summarize composition patterns of code snippets.** Based on labeled code snippets, we summarized the composition patterns of data processing code, and model or rule usage code in each component.
### _Results_
The statistics of different code categories are shown in Table II. We only considered the LoC of labeled code snippets, while
\begin{table}
\begin{tabular}{c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Module**} & \multicolumn{2}{c}{**Data**} & \multicolumn{2}{c}{**Model**} & \multicolumn{2}{c}{**Rule**} \\ \cline{2-7} & Prc. & Post. & Usage & Definition & Usage & Definition \\ \hline Tokenizer & 8 & 80 & 27 & 0 & 25 & 25 \\ Featurizer & 390 & 323 & 92 & 0 & 162 & 119 \\ IntentClassifier & 441 & 131 & 113 & 298 & 3 & 69 \\ EntityExtractor & 746 & 311 & 120 & 298 & 24 & 30 \\ Selector & 48 & 55 & 9 & 16 & 0 & 0 \\ Policy & 1332 & 540 & 64 & 543 & 167 & 283 \\ Shared & 996 & 314 & 112 & 1673 & 0 & 43 \\ Total & 3961 & 1754 & 537 & 2828 & 381 & 569 \\ \hline \hline \end{tabular}
\end{table} TABLE II: LoC of Different Code Categories
ignoring general utils code such as class initialization. Data processing code contributes a total of 5715 (57.1%) LoC, while model usage&definition code and rule usage&definition code contribute 3365 (33.5%) and 950 (9.4%) LoC, respectively. 1673 (59.2%) of the 2828 LoC of model definition code is in _Shared_ module, which shows that the reuse of model definition code between different components is quite common. There is no model definition code in _Tokenizer_ and _Featurizer_, because ML components are all built on top of external ML libraries.
We classified data pre-processing and data post-processing categories into more specific types, due to the dominant proportion of data processing code in Rasa. Specifically, _Validation_ code intends to validate the input or output data of components. _Format Transformation_ code transforms data format, such as constructing vectors from Python arrays and reshaping vectors. _Component Input/Output Filter_ code filters data that does not meet the specified criteria, such as the absence of certain attributes. _Data Scale/Padding/Encoding/Decoding_ code changes the value of data, while _Data Split/Shuffle/Balance/Batch/Rank_ code changes the organization of data for better training and inference of components. We provide the complete codebook and statistics of data processing types at our website [7].
Moreover, we find that composition patterns of code snippets include sequential code composition pattern and various non-sequential composition patterns. In a typical sequential composition pattern, data is first pre-processed, and then processed by model or rule usage code, and finally post-processed. The non-sequential code composition patterns are summarized in Fig. 3. The black box is a data processing code snippet, the red box is a model or rule usage code snippet, and the green diamond means to select one or multiple downstream code snippets. The first 5 patterns consist of multiple model or rule usages in one component. The last 3 patterns consist of a single model or rule usage with multiple possible data processing snippets, decided by configurations or input data.
### _Implications_
**Data processing.** Data processing code is scattered at different granularity levels, unlike the well-documented and structured code of ML models and rules. In detail, data processing code includes data processing components (e.g., _Policy Ensemble_), general data processing classes and functions in _Shared_ module, and specific data processing snippets in components entangled with model or rule usages. On the one hand, it could become troublesome to identify and understand the semantics of all data processing code for application developers. A specific example is that data pre-processing code also exists in model definition class of TransformerRasaModel, including _Format Conversion_ and _Data Batch_ code. It could be explicitly helpful to automatically extract and analyze the semantics of data processing code with techniques like program analysis [61]. On the other hand, it would be challenging to maintain and test data processing code for system developers, possibly resulting in severe consequences with ML development paradigm shift from model-centric to data-centric [45]. In general, building a taxonomy of data processing code would be helpful for the maintaining and testing of data processing code.
**Code composition patterns.** These non-sequential composition patterns could introduce additional dynamic complexity for ML-enabled systems, e.g., it is too expensive to capture all possible run-time compositions of code snippets with static analysis. Although dynamic testing is widely adopted to complement the limitations of static analysis in traditional software [24], most existing testing techniques tailored for ML only target at ML model level [82]. It would be beneficial to extend them to include data processing code and composition patterns.
## VI RQ4: Testing Practice Analysis
### _Methodology_
To answer **RQ4**, test cases were inspected in three steps.
**Step 1: Label test cases.** We manually labeled the granularity level, oracle type and ML stage of each test case. There are three different granularity levels of test cases: (1) Method-level: testing single or multiple methods; (2) Component-level: testing the complete process of a component in the training or inference stage; (3) Integration-level: testing the current component with upstream components. There are four test oracle types: (1) Given input-output pairs: the input and expected output data are given; (2) Component-specific constraints: the constraints must be satisfied according to the implementation of a component, e.g., the sum of the confidence scores of the intent list generated by classifiers should equal 1; (3) Differential executions: outputs of executions under different settings should change or remain the same, e.g., given the same input, the outputs of an original ML model and its loaded version from disk should remain the same; (4) Exception: whether or not the test case throws exceptions for certain inputs and configurations. Finally, the ML stages covered by test cases include training, inference and evaluation stages. To test the training stage of a component, test cases must first train it and check whether any test oracle is violated. Note that there may exist several oracle types and ML stages but only one granularity level for each test case in
Fig. 3: Non-Sequential Code Composition Patterns within Components
Rasa. Two of the authors labeled test cases independently, and the third author was involved to resolve disagreements. The Cohen's Kappa coefficient of granularity level, test oracle and ML stage is 0.907, 0.908, and 0.854, respectively.
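As an illustration of the differential executions oracle from Step 1, the pytest-style sketch below checks that a trained component and its persisted-and-reloaded copy produce identical outputs; `make_component`, `training_data` and `eval_utterances` are hypothetical fixtures, and the `train`/`predict` interface is a generic stand-in rather than Rasa's API.

```python
import pickle

def test_persisted_component_is_equivalent(tmp_path, make_component, training_data, eval_utterances):
    """Differential-execution oracle: original vs. persisted-and-loaded component must agree."""
    component = make_component()
    component.train(training_data)

    model_file = tmp_path / "component.pkl"
    model_file.write_bytes(pickle.dumps(component))
    loaded = pickle.loads(model_file.read_bytes())

    for utterance in eval_utterances:
        assert component.predict(utterance) == loaded.predict(utterance)
```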
**Step 2: Collect code coverage of test cases.** We collected the statement coverage and branch coverage of code via _pytest-cov_, because Rasa uses _pytest_ to run test cases.
**Step 3: Collect interaction pattern coverage of test cases.** We injected logging statements into the methods of every component, and then executed test cases to collect the co-executed component sets of each test case. All component interaction instances, except _Component Inheritance_ instances and some _Usage Constraints_ instances whose components cannot be used together, were matched against these component sets. The matched interaction patterns and instances were considered covered by test cases.
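The matching of co-executed component sets against interaction instances can be sketched as follows; the data structures are illustrative, and in our study the execution log is produced by the injected logging statements.

```python
from collections import defaultdict

executed_log = defaultdict(set)   # test case id -> set of component names executed by that test

def log_execution(test_id: str, component: str) -> None:
    """Called from the logging statements injected into every component method."""
    executed_log[test_id].add(component)

def covered_instances(interaction_instances, executed_log):
    """An instance (a tuple of component names) is covered if some test case executed all of them."""
    covered = set()
    for test_id, components in executed_log.items():
        for instance in interaction_instances:
            if set(instance) <= components:
                covered.add(instance)
    return covered
```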
### _Results_
The code coverage and labeled statistics of test cases are shown in Table III. (1) The total statement coverage and branch coverage of code reach 93.2% and 92.0%, which are much higher than 21.5% and 13.3% in Apollo [60]. (2) The coverage of _Selector_ is only 68.3% and 66.4%, because _Selector_ has two candidate ML models while only one of them was tested. (3) There are 240 (52.0%) method-level, 156 (33.8%) component-level and 65 (14.1%) integration-level test cases. (4) There are no integration-level test cases in _Policy_, because _Policy_ was tested with given intents and entities input from developers without depending on the NLU modules. (5) Inference and training stages have similar test case quantities. (6) Only test cases in the _Shared_ module cover the evaluation stage, because the _Shared_ module provides the evaluation code for all components. (7) There are 312 (67.7%), 123 (26.3%), 15 (3.3%), and 49 (10.6%) test cases with given input-output pairs, component-specific constraints, differential executions and exception test oracles.
As Table IV shows, the test coverage of interactions is relatively low, i.e., 18 (48.6%) of 37 patterns and 30 (15.1%) of 199 instances are covered. This is because only integration-level tests cover component interactions. In particular, _Confidence Checking_ and _Output Selection_ are not covered.
### _Implications_
**Low test coverage of component interactions.** It is difficult to achieve a high test coverage of component interactions, due to the complexity caused by the huge configuration space and hidden interactions. The only test cases that cover component interactions (i.e., integration-level test cases) contribute no more than 15% of the test cases. Yet, integration-level test cases can cover and kill more mutants than component-level and method-level test cases, as many mutants do not manifest in non-integration-level test cases [42]. Therefore, it is crucial to generate integration-level test cases for ML-enabled systems.
**Limited test oracle types.** It is challenging and time-consuming to write test cases with given input-output pairs and component-specific constraints oracles, due to the complexity brought by the lack of specifications for interactions. As a result, test oracles that do not require such specifications, that is, differential executions and exceptions, have been widely utilized to tackle the oracle problem in test case generation techniques for traditional software, such as differential testing [21], fuzzing [44] and search-based testing [50]. Besides, we find that test cases with these two oracles have a capability to kill mutants similar to that of the component-specific constraints oracle (see **RQ5**). In spite of this, only 13.9% of the test cases in Rasa are written with differential executions and exception test oracles, implying that there is considerable room to apply these two test oracle types in test case generation techniques for ML-enabled systems.
## VII RQ5: Mutation Testing Analysis
### _Methodology_
To answer **RQ5**, we performed an analysis of mutation testing [37]. It applies mutators to generate versions of faulty code, i.e., mutants. For every mutant, test cases were executed to collect the testing results to decide whether the mutant was killed. As Rasa contains both ML components and rule-based components, we considered both mutators for traditional software (i.e., syntactic mutators) and ML-specific mutators. As Table V shows, we used 9 syntactic mutators from Jia et al. [36] and 11 ML specific mutators from DeepCrime [33].
We list steps of mutation analysis in the following.
**Step 1: Generate mutants.** We generated syntactic mutants using _mutmut_ [51]. We used two groups of syntactic mutators, i.e., _Logic_ and _Value_, which mutate the logic flow and variable
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Category** & **Sub-Category** & **Cov. Patterns** & **Cov. Instances** \\ \hline \multirow{4}{*}{Inter-module} & Data Dependency & 9/25 & 17/95 \\ & Confidence Checking & 0/2 & 0/50 \\ & Output Selection & 0/1 & 0/18 \\ & Output Refinement & 1/1 & 1/4 \\ & Usage Constraints & 3/3 & 3/3 \\ \hline \multirow{2}{*}{Intra-module} & Functionality Equivalence & 2/2 & 3/18 \\ & Priority Order & 1/1 & 4/7 \\ & Usage Constraints & 2/2 & 2/4 \\ \hline Total & & 18/37 & 30/199 \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Test Coverage of Component Interactions
\begin{table}
\begin{tabular}{c|c c c c c c|c c c c c|c c} \hline \hline
**Module** & **Total** & \multicolumn{3}{c}{**Test Case Type**} & \multicolumn{3}{c}{**Test Case Stage**} & \multicolumn{3}{c}{**Oracle Type**} & \multicolumn{3}{c}{**Code Coverage**} \\ \cline{2-13} & Meth. & Comp. & Integ. & Infer. & Train & Eval. & I-O & C-S & Diff. & Exception & Stat. Cov. & Bran. Cov. \\ \hline Tokenizer & 27 & 7 & 20 & 0 & 25 & 14 & 0 & 24 & 1 & 0 & 3 & 97.4\% & 96.8\% \\ Featurizer & 62 & 13 & 14 & 35 & 56 & 40 & 0 & 46 & 5 & 3 & 8 & 95.7\% & 94.9\% \\ IntentClassifier & 36 & 7 & 15 & 14 & 29 & 30 & 0 & 18 & 11 & 7 & 1 & 92.5\% & 89.5\% \\ EntityExtractor & 41 & 6 & 19 & 16 & 34 & 31 & 0 & 18 & 14 & 8 & 1 & 92.3\% & 90.1\% \\ Selector & 13 & 4 & 6 & 3 & 9 & 12 & 0 & 6 & 5 & 1 & 1 & 68.3\% & 66.4\% \\ Policy & 165 & 77 & 88 & 0 & 105 & 127 & 0 & 117 & 51 & 0 & 20 & 95.7\% & 94.5\% \\ Shared & 138 & 129 & 2 & 7 & 90 & 84 & 0 & 89 & 42 & 2 & 16 & 92.3\% & 91.4\% \\ Total & 461 & 240 & 156 & 65 & 331 & 317 & 47 & 312 & 123 & 15 & 49 & 93.2\% & 92.0\% \\ \hline \hline \end{tabular}
\end{table} TABLE III: Code Coverage and Labeled Statistics of Test Cases
value. Besides, we generated ML-specific mutants with DeepCrime [33]. We used 4 of 8 mutation groups in DeepCrime (_Activation_, _Regularization_, _Weights_ and _Optimization_). Among the other groups, mutators in the _Training Data_ and _Validation_ groups are not the focus of this paper; the _Hyperparameters_ group is not included, as hyperparameters in Rasa are specified with configuration files by developers; and the _Loss Function_ group is not applicable, as the loss functions in Rasa are all implemented from scratch, while the mutators provided by DeepCrime only replace the Keras loss function API with another one. In addition, we only generated mutants for the 6 labeled code categories in **RQ3**, excluding general utils code. We generated no more than 30 mutants for every Python class to reduce potential bias. We also only modified one AST node for every mutant.
**Step 2: Perform mutation testing analysis with test cases.** For every mutant, only test cases that cover the mutated line were collected (from the test coverage data in **RQ4**) and executed to save running time. If any test case fails on a mutant, the mutant is considered killed by the test case. Otherwise, the mutant is considered survived. A test case could fail with three symptoms: (1) an assertion fails; (2) an execution or runtime error manifests; and (3) the test case times out. The maximum time for a test case to run is 10 times its running time on the original clean code version. Besides, test cases were executed 3 times for every mutant to avoid flaky tests. We found that all test case statuses remained the same across the three runs.
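The decision procedure of Step 2 can be sketched as follows; `run_test` abstracts away the actual pytest invocation and is assumed to return one of four outcome labels.

```python
def mutant_status(covering_tests, run_test, baseline_runtime, repetitions=3):
    """Decide whether a mutant is killed by the test cases covering the mutated line.

    `run_test(test_id, timeout)` is assumed to execute one test case against the mutated
    code and return "passed", "assertion_failed", "error" or "timeout".
    `baseline_runtime[test_id]` is the running time of the test case on the clean code.
    """
    for test_id in covering_tests:
        timeout = 10 * baseline_runtime[test_id]                  # 10x the clean-code running time
        outcomes = {run_test(test_id, timeout) for _ in range(repetitions)}  # guard against flakiness
        if outcomes & {"assertion_failed", "error", "timeout"}:
            return "killed"
    return "survived"
```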
**Step 3: Perform mutation testing analysis with test data.** For the mutants that survived Step 2, we assessed their impact using the 3 default configuration files and the restaurant domain data in _Multiwoz_ [15], a widely used multi-domain dataset for evaluating the performance of TDS. Given a configuration file, only the components specified in it are included in the Rasa pipeline, so the mutated nodes of some survived mutants from Step 2 are never executed because they are not _impacted_ by that configuration. Due to the stochastic nature of machine learning programs, we trained both the mutated program and the original program 5 times, each with a random 80/20 train/test data split, and decided whether the performance metrics of the two versions differ statistically significantly with a non-negligible and non-small effect size. We followed the same formula as [33, 35] to decide whether a mutant is killed by the test data, with a significance threshold of 0.05 and an effect size threshold of 0.5. We adopted the F1 scores of _IntentClassifier_, _EntityExtractor_ and _Policy_ as performance metrics, i.e., if the F1 score of any of the three modules differs statistically between the two code versions, the mutant is marked as killed by the test data.
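The kill decision against test data combines a significance test with an effect-size requirement. The snippet below is one concrete way to implement such a decision; the exact statistical formula used in [33, 35] may differ, so this is only an illustrative sketch. The F1 lists hold the 5 training runs of each code version.

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return 0.0 if pooled == 0 else abs(a.mean() - b.mean()) / pooled

def differs(f1_orig, f1_mut, alpha=0.05, min_effect=0.5):
    """Statistically significant difference with a non-small effect size."""
    _, p = stats.mannwhitneyu(f1_orig, f1_mut, alternative="two-sided")
    return p < alpha and cohens_d(f1_orig, f1_mut) >= min_effect

def killed_by_test_data(f1_orig, f1_mut):
    """f1_*: dicts mapping module name to the list of 5 F1 scores for that code version."""
    return any(differs(f1_orig[m], f1_mut[m])
               for m in ("IntentClassifier", "EntityExtractor", "Policy"))
```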
### _Results_
The mutation testing results for each mutator are shown in Table V. In total, 1447 mutants were generated; 1106 (76.4%) were killed by test cases, 341 (23.6%) survived, 146 (10.1%) impacted the configured pipelines, and 22 (1.5%) were killed by test data. Only 146 of the 341 survived mutants impact the 3 default Rasa pipelines, which shows that the huge configuration space is challenging to test adequately. 81.3% of syntactic mutants and 20.0% of ML-specific mutants are killed by test cases, while 4.4% of syntactic mutants and 24.4% of ML-specific mutants among the impacted mutants are killed by test data. This shows that test cases are much more effective than test data at detecting syntactic mutants, and slightly less effective at detecting ML-specific mutants. The syntactic and ML-specific mutants killed by test data degrade the F1 scores of _IntentClassifier_, _EntityExtractor_ and _Policy_ by 20.8%, 0.8%, 3.6% and 11.1%, 13.4%, 5.7% on average, respectively.
The mutation testing results w.r.t. the location of mutants are shown in Table VI. 224 (61.5%) of the 364 mutants in model definition code, and 896 (82.8%) of the 1082 mutants in the other code categories, are killed by test cases. In particular, few mutants in code categories other than model definition are impacted and killed by test data, which implies that test data is only effective at killing mutants in model definition code.
We investigated the capability of different categories of test cases to detect mutants, by calculating the ratio of the _strong test case number_ to the _all test case number_, and the ratio of _killed mutants_ to _covered mutants_ for each category. We define a _strong test
\begin{table}
\begin{tabular}{c|c|c|c c|c c} \hline \multirow{2}{*}{**Mutation Group**} & \multirow{2}{*}{**Operator**} & \multirow{2}{*}{**Total**} & \multicolumn{2}{c|}{**Test Case**} & \multicolumn{2}{c}{**Test Data**} \\ \cline{4-7} & & & Killed & Survival & Impacted & Killed \\ \hline \multirow{6}{*}{Logic} & ArOR & 109 & 86 & 23 & 10 & 1 \\ & ComOR & 109 & 88 & 21 & 6 & 0 \\ & LogOR & 145 & 112 & 33 & 14 & 0 \\ & AsOR & 20 & 19 & 1 & 0 & 0 \\ & MemOR & 32 & 30 & 2 & 0 & 0 \\ & KVR & 12 & 7 & 5 & 1 & 0 \\ \hline \multirow{3}{*}{Value} & BVR & 64 & 32 & 32 & 9 & 0 \\ & NVR & 224 & 180 & 64 & 18 & 2 \\ & AsVR & 582 & 525 & 67 & 10 & 0 \\ \hline \multirow{3}{*}{Activation} & ARCH & 22 & 3 & 19 & 18 & 6 \\ & ARM & 2 & 0 & 2 & 1 & 0 \\ & AAL & 22 & 11 & 11 & 11 & 2 \\ \hline \multirow{3}{*}{Regularization} & RAW & 6 & 0 & 6 & 3 & 3 \\ & RCW & 10 & 0 & 10 & 10 & 0 \\ & RRW & 5 & 0 & 5 & 5 & 0 \\ \hline \multirow{3}{*}{Weights} & WCI & 24 & 10 & 14 & 13 & 1 \\ & WAB & 4 & 0 & 4 & 3 & 1 \\ & WRB & 3 & 1 & 2 & 2 & 0 \\ \hline \multirow{2}{*}{Optimization} & OCH & 24 & 2 & 22 & 9 & 6 \\ & OCG & 8 & 0 & 8 & 3 & 0 \\ \hline \multicolumn{2}{c|}{Total} & 1447 & 1106 & 341 & 146 & 22 \\ \hline \end{tabular}
\end{table} TABLE V: Mutation Testing Results
\begin{table}
\begin{tabular}{c|c|c c|c c} \hline \multirow{2}{*}{**Location**} & \multirow{2}{*}{**Total**} & \multicolumn{2}{c|}{**Test Case Result**} & \multicolumn{2}{c}{**Test Data Result**} \\ \cline{3-6} & & Killed & Survival & Impacted & Killed \\ \hline Data Prep. & 385 & 326 & 59 & 23 & 0 \\ Data Post. & 271 & 222 & 49 & 5 & 0 \\ Model Usage & 307 & 243 & 64 & 4 & 0 \\ Model Def. & 364 & 224 & 140 & 99 & 22 \\ Rule Usage & 4 & 4 & 0 & 0 & 0 \\ Rule Def. & 115 & 101 & 14 & 4 & 0 \\ \hline \end{tabular}
\end{table} TABLE VI: Mutant Location Results
\begin{table}
\begin{tabular}{c|c|c c c c} \hline **Category** & **Type** & **Test Num.** & **Strong Test Num.** & **Covered** & **Killed** \\ \hline \multirow{3}{*}{Granularity} & Method & 240 & 59 & 947 & 635 \\ & Component & 156 & 31 & 1121 & 709 \\ & Integration & 65 & 29 & 903 & 613 \\ \hline \multirow{3}{*}{Stage} & Infer. & 331 & 86 & 1358 & 995 \\ & Training & 317 & 75 & 1184 & 847 \\ & Evaluation & 47 & 11 & 772 & 476 \\ \hline \multirow{4}{*}{Oracle Type} & I-O & 312 & 98 & 1298 & 956 \\ & C-S & 123 & 19 & 1103 & 707 \\ & Diff & 15 & 3 & 625 & 338 \\ & Exception & 49 & 6 & 686 & 352 \\ \hline \end{tabular}
\end{table} TABLE VII: Test Case Mutation Results
_case_ as a test case that kills at least 75% of the mutants it covers. As Table VII shows, test cases at the integration level have the highest ratio of strong test cases (44.6%) and the highest ratio of killed mutants (67.9%) among the three granularity levels. Test cases with a given input-output test oracle have the highest ratio of strong test cases (31.4%) and the highest ratio of killed mutants (73.7%) among the four oracle types, while test cases with the other three oracle types have similar ratios.
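The per-category ratios in Table VII can be recomputed from raw per-test-case mutation records with a few lines; the record layout below is only illustrative and not the authors' actual data format.

```python
def category_summary(tests, strong_threshold=0.75):
    """tests: list of (covered_mutant_ids, killed_mutant_ids) pairs, one per test case."""
    strong = sum(1 for covered, killed in tests
                 if covered and len(killed) / len(covered) >= strong_threshold)
    covered_all = set().union(*(c for c, _ in tests))
    killed_all = set().union(*(k for _, k in tests))
    return {"strong_test_ratio": strong / len(tests),
            "kill_ratio": len(killed_all) / len(covered_all)}
```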
### _Implications_
**Non-ML specific bugs and test cases in ML-enabled systems.** The complexity of data processing code makes non-ML specific bugs easy to introduce. Compared with test data, test cases are more effective at detecting syntactic mutants, i.e., non-ML specific bugs. Moreover, it is notoriously difficult for developers to analyze, localize and fix bugs in ML programs based on test data alone, which is why interpreting [85], debugging [3] and repairing [67] techniques have been developed for ML models. It is easier for developers to localize and fix bugs from failed test cases by analyzing the violated test oracles. Thus, we argue that non-ML specific bugs and test cases in ML-enabled systems deserve more attention. Although Rasa has a rich set of test cases that achieve high code coverage, the kill ratio of mutants remains to be improved (76.4%), especially for ML-specific mutants (29.8%). The applicability and limitations of existing test case generation, selection and quality assurance techniques in ML-enabled systems are worth exploring [18, 39].
**Challenges of test data to kill mutants.** Existing research on mutation testing for ML programs only evaluates mutants with test data to decide whether they can be killed [30, 33, 35, 48]. However, the capability of test data to kill mutants in large-scale ML-enabled systems is limited for two reasons. First, due to the complexity introduced by configurations, only part of the mutants will impact the components of an actually configured system. Second, the amount and distribution of training data and test data substantially affect the results. For example, when we trained the clean code version and the mutated versions with 75% of the original training data, the number of killed mutants changed from 22 to 83, which means that some bugs may only manifest under specific training data settings. Therefore, system developers should evaluate and test ML-enabled systems under more of the possible configurations and data settings that application developers may use, in order to detect potential bugs.
## VIII Threats to Validity
First, our study is a case study on Rasa, a widely used industrial task-oriented dialogue system. It is not clear whether our results generalize to other ML-enabled systems; however, we believe it is a good start for taking a system view of ML-enabled systems. Second, our study involves a large amount of manual analysis of Rasa source code and documentation, which may introduce bias. To reduce it, two of the authors conducted the manual analysis separately, and a third author was involved to resolve disagreements. Third, the mutators that we adopted may not simulate real-world bugs. To mitigate this, we used mutators from DeepCrime [33], which are summarized from real-world ML bugs.
## IX Related Work
**Study of ML-Enabled Systems**. While much attention has been paid to ML models, less has been paid to system-level analysis [38]. Peng et al. [60] investigated the integration of ML models in Apollo by analyzing how ML models interact with the system and what the current testing effort looks like. Nahar et al. [52] explored collaboration challenges between data scientists and software engineers through interviews. Amershi et al. [5] and Bernardi et al. [10] reported challenges and practices of MLOps (from model requirements to model monitoring) at Microsoft and Booking.com. Although these works still take a model-centric view, they emphasize that models can be entangled in complex ways that cause non-monotonic errors [5] and that model quality improvement does not necessarily translate into system value gain [10]. Further, Yokoyama [80] developed an architectural pattern to separate ML and non-ML components, while Serban and Visser [64] surveyed architectural challenges for ML-enabled systems. Sculley et al. [63] identified ML-specific technical debt in ML-enabled systems, while Tang et al. [70] derived new kinds of debt from real-world code refactorings. In addition, some attempts have been made to address the problem of ML component entanglement [5], e.g., performing metamorphic testing on a system with two ML components [83], troubleshooting failures in a system with three ML components by human intellect [53], and decomposing errors in a system with two or three ML components [78]. These studies explore the interaction among models, but only on simple systems. Moreover, Abdessalem et al. [1, 2] studied feature interaction failures in self-driving systems, and proposed testing and repairing approaches to automatically detect and fix them. Apel et al. [8] also discussed feature interactions in ML-enabled systems and suggested strategies to cope with them.
The main difference from the previous work is that we take a large-scale complex ML-enabled system, explore its complexity at three levels, and analyze the impact of its complexity on testing. The closest work is Peng et al.'s [60], but we report a deeper complexity analysis and also conduct a testing impact analysis.
**Mutation Testing for DL Models**. Jia et al. [36] applied syntactic mutators designed for traditional programs to DL models. DeepMutation [48] and DeepMutation++ [30] defined DL-specific mutators. DeepCrime [33] derived DL-specific mutators from real DL bugs. Jahangirova and Tonella [35] evaluated syntactic and DL-specific mutators. These studies focus on model-level mutation, whereas we target system-level mutation.
**Testing for Dialogue Systems**. Bozic and Wotawa [13] proposed a security testing approach for chatbots to prevent cross-site scripting and SQL injection. Bozic et al. [12] tested a hotel booking chatbot via planning. Bozic and Wotawa [14] introduced a metamorphic testing approach for chatbots. Similarly, Liu et al. [47] used semantic metamorphic relations to test the NLU module in dialogue systems. Despite these efforts, little attention has been paid to system-level testing of dialogue systems.
## X Conclusion
We present a comprehensive study of Rasa that characterizes its complexity at three levels and the impact of this complexity on testing from two perspectives. Furthermore, we highlight practical implications for improving software engineering for ML-enabled systems. All study data and source code used in this paper are available at [https://rasasystemcomplexity.github.io/](https://rasasystemcomplexity.github.io/).
|
2306.16523 | New energy spectra in neutrino and photon detectors to reveal hidden
dark matter signals | Neutral particles capable of travelling cosmic distances from a source to
detectors on Earth are limited to photons and neutrinos. Examination of the
Dark Matter annihilation/decay spectra for these particles reveals the presence
of continuum spectra (e.g. due to fragmentation and W or Z decay) and peaks
(due to direct annihilations/decays). However, when one explores extensions of
the Standard Model (BSM), unexplored spectra emerge that differ significantly
from those of the Standard Model (SM) for both neutrinos and photons. In this
paper, we argue for the inclusion of important spectra that include peaks as
well as previously largely unexplored entities such as boxes and combinations
of box, peak and continuum decay spectra. | Wim Beenakker, Sascha Caron, Jochem Kip, Roberto Ruiz de Austri, Zhongyi Zhang | 2023-06-28T19:38:34Z | http://arxiv.org/abs/2306.16523v3 | # New energy spectra in neutrino and photon detectors to reveal hidden dark matter signals
###### Abstract
Neutral particles capable of travelling cosmic distances from a source to detectors on Earth are limited to photons and neutrinos. Examination of the Dark Matter annihilation/decay spectra for these particles reveals the presence of continuum spectra (e.g. due to fragmentation and W or Z decay) and peaks (due to direct annihilations/decays). However, when one explores extensions of the Standard Model (BSM), unexplored spectra emerge that differ significantly from those of the Standard Model (SM) for both neutrinos and photons. In this paper, we argue for the inclusion of important spectra that include peaks as well as previously unexplored entities such as boxes and combinations of box, peak and continuum decay spectra.
## 1 Introduction
The search for Dark Matter (DM) by indirect detection is the subject of many studies. A large number of experiments have investigated the cosmic antiproton, positron, photon and neutrino spectra. Notable experiments include, but are not limited to, AMS-02 [1], Fermi-LAT [2], IceCube [3], ANTARES [4], H.E.S.S. [5], Pierre Auger [6], and VERITAS [7], which have been measuring charged cosmic rays, gamma rays, and neutrinos for decades. With the upcoming construction of new experiments such as KM3NeT [8], the CTA observatory [9], and GRAND [10], the sensitivity to potential neutral particles arising from DM annihilation or decay will increase significantly.
Historically, the particle spectra investigated for annihilating or decaying DM have focussed on processes involving two Standard Model (SM) final-state particles, e.g. DM DM \(\to b\bar{b}\), DM DM \(\to\tau^{+}\tau^{-}\), etc., which subsequently undergo decay, fragmentation and hadronization to produce antiproton, positron, photon, and neutrino spectra. These spectra are produced by well-understood mechanisms, leading to detailed analyses that can place strong limits on the properties of DM. Some notable examples are the potential AMS-02 antiproton excess and the Fermi-LAT gamma-ray excess, both of which have been explained by
DM annihilation with a DM mass in the \(\mathcal{O}(100)\) GeV region [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]. However, if one considers possible extensions beyond the standard model (BSM), other, still largely unexplored spectra are possible, which may differ considerably from the known standard spectra.
In this paper, we describe a new type of neutrino and photon spectrum in a largely model-independent way. It combines a well-defined peak, a box, and the neutrino/photon spectra produced by the decay, fragmentation and hadronization of SM particles. A clearly defined peak has, of course, been searched for by IceCube [28], ANTARES [29] and Fermi-LAT [30] among others, as it is easily obtained from DM particles directly annihilating or decaying into the relevant final state, resulting in a comparatively clean signal. However, a box shape, and especially the combination of all three features, leads to a significantly different spectrum. These types of spectra have been largely overlooked in both experimental and phenomenological research. To facilitate searches for them, we provide a code that generates a user-defined non-standard neutrino or photon spectrum from the appropriate parameters.
This paper is structured as follows. First, we detail the physics of the non-standard spectra and provide the relevant expressions of the kinematics. Next, some example BSM models are given that can produce non-standard neutrino or photon spectra. Then we detail the working of the sampling code and its verification, after which we provide some elementary parameter sets that capture the most important features of the spectra in order to facilitate experimental searches. We end with our conclusions.
## 2 Theoretical background
There are only two types of neutral particles that can travel cosmic distances from a source to detectors on or near Earth, namely photons and neutrinos. In the following subsections, the kinematics of the spectrum of a particle will be discussed as model-independently as possible. Both neutral particles can be described by the same kinematics, since they are both massless or have negligible mass. The main differences, of course, lie in the possible models that can produce such spectra. However, no such assumptions will be made in the following subsections.
Moreover, both DM annihilation and DM decay can produce cosmic rays. The kinematics for DM decay is identical to DM annihilation, with the only difference being that the initial energy for (non-relativistic) DM decay is \(M_{\rm DM}\), while for DM annihilation it is \(2M_{\rm DM}\). Thus, to obtain the kinematic expressions for the DM decay from those for the DM annihilation, one simply has to replace \(M_{\rm DM}\) by \(\frac{1}{2}M_{\rm DM}\)
In the following subsections, we assume the standard scenario that two DM particles annihilate to neutrinos in order to simplify the discussion.
### The box
The simplest non-standard spectrum arises from two DM particles annihilating into two BSM particles, \(X\), which subsequently decay into neutrinos:
\[{\rm DM\,DM}\to XX\,,\qquad X\rightarrow\nu\nu\,, \tag{1}\]
where \(\nu\) is any SM neutrino. More complicated decays of \(X\) are of course possible; these will be discussed in section 2.3. Depending on the model, the \(XX\)-pair may instead be an \(X\overline{X}\)-pair, and any \(\nu\) may well be an \(\overline{\nu}\). This kind of DM annihilation leads to a 'box' shape in the neutrino spectrum. In the rest frame of the particle \(X\), both neutrinos produced by its decay have a well-defined energy of \(M_{X}/2\), where \(M_{X}\) is the mass of \(X\). However, the neutrinos must be Lorentz boosted into the center-of-mass frame of the annihilating DM particles, since they are evaluated in the rest frame of \(X\), which differs from the center-of-mass frame. Assuming that the DM has zero momentum at annihilation, using spherical coordinates in the rest frame of \(X\), and choosing the motion of \(X\) (and thus the boost) to be along the \(z\) direction, the energy of the neutrinos becomes:
\[E_{\rm box}=\frac{M_{X}}{2}\left(\cosh\eta+\sinh\eta\,\cos\theta\right),\qquad\eta=\cosh^{-1}\left(\frac{M_{\rm DM}}{M_{X}}\right). \tag{2}\]
Here \(M_{\rm DM}\) is the DM mass. The \(\cos(\theta)\) term is due to the spherical coordinates in the rest frame of \(X\). To obtain a uniform distribution of points on a sphere, i.e. an isotropic decay, \(\cos(\theta)\) must be sampled uniformly
between \(-1\) and \(+1\), as opposed to sampling \(\theta\) itself uniformly1. This uniform distribution in \(\cos(\theta)\) gives the neutrino an equal probability of having any energy within the bounds set by \(\cos(\theta)=-1\) and \(+1\), thereby resulting in a flat 'box' shape.
Footnote 1: See [http://corsimon.github.io/articles/uniformdistn-on-sphere/](http://corsimon.github.io/articles/uniformdistn-on-sphere/) for an in-depth explanation.
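A minimal Monte Carlo sketch of Eq. (2) for the pure box case (DM DM \(\to XX\), \(X\to\nu\nu\)), using the isotropic sampling in \(\cos\theta\) described above; the mass values and function names below are our own illustrative choices.

```python
import numpy as np

def sample_box(m_dm, m_x, n_samples=100_000, seed=0):
    """Neutrino energies from DM DM -> X X with X -> nu nu, for DM at rest (Eq. (2))."""
    rng = np.random.default_rng(seed)
    eta = np.arccosh(m_dm / m_x)                      # rapidity of X in the CM frame
    cos_theta = rng.uniform(-1.0, 1.0, n_samples)     # isotropic decay in the X rest frame
    return 0.5 * m_x * (np.cosh(eta) + np.sinh(eta) * cos_theta)

energies = sample_box(m_dm=1000.0, m_x=200.0)  # GeV; a histogram of `energies` is flat
```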
### The peak
Another process is the annihilation of two DM particles into a neutrino and a BSM particle \(X\):
\[\text{DMDM}\to\nu X\,. \tag{3}\]
The neutrino, which comes directly from DM annihilation, forms a clearly defined peak in the spectrum with an energy of
\[E_{\text{peak}}=\frac{4M_{\text{DM}}^{2}-M_{X}^{2}}{4M_{\text{DM}}}\,. \tag{4}\]
### Alternative decay modes for particle \(X\)
In most realistic BSM models, a possible particle \(X\) does not have a 100% branching ratio into \(\nu\nu\), but can have several different decay modes, e.g. \(\text{BR}[X\to\nu Z]=0.5\) and \(\text{BR}[X\to W^{+}e^{-}]=0.5\). The specific decay modes depend, of course, on the details and parameters of the chosen BSM model. Here, the assumption is explicitly made that \(X\) can only decay into SM particles. The SM particles2 will undergo fragmentation, hadronization, and decay, thereby also producing neutrinos or radiating off photons. We label the neutrinos coming from the various stages of DM annihilation as follows: the neutrinos that form the peak are called primary neutrinos, those in the box are called secondary neutrinos, and all neutrinos that come from the fragmentation and hadronization of an SM particle are called standard neutrinos.
Footnote 2: Naturally except for neutrinos and \(e^{\pm}\).
Notably, it follows from the kinematics above that for \(\mathrm{DM\,DM}\to\nu X\) the energy of the peak is equal to the width of the box if \(M_{\mathrm{SM}}=0\). In general, \(E_{\mathrm{peak}}=\Delta E_{\mathrm{box}}/(1-(M_{\mathrm{SM}}/M_{X})^{2})\). So for boxes with the same width, the peak is at the same energy, regardless of the energy range of the box. Put another way, the position of the peak directly gives the width of the box, up to the specific decay mode of \(X\).
Of course, the masses of the DM and \(X\) particles can also be expressed in terms of the maximum and minimum energies of the box:
\[M_{X}=\sqrt{E_{\mathrm{box,max}}E_{\mathrm{box,min}}}+\sqrt{E_{ \mathrm{box,max}}E_{\mathrm{box,min}}+M_{\mathrm{SM}}^{2}}\,, \tag{7}\] \[M_{\mathrm{DM}}=\begin{cases}\frac{E_{\mathrm{box,max}}M_{X}^{2 }}{M_{X}^{2}-M_{\mathrm{SM}}^{2}}&\mathrm{for}&\mathrm{DMDM}\to\nu X\,,\\ \frac{(E_{\mathrm{box,max}}+E_{\mathrm{box,min}})M_{X}^{2}}{M_{X}^{2}-M_{SM}^{ 2}}&\mathrm{for}&\mathrm{DMDM}\to XX\,.\end{cases}\]
For this expression, the decay mode of \(X\) must be fixed to know \(M_{\mathrm{SM}}^{2}\).
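As a quick consistency check of Eq. (7), one can generate the box edges from assumed masses and then invert them. The edge expressions below are our own restatement, obtained by boosting the \(X\) rest-frame neutrino energy as in Eq. (2), and the mass values (with the SM particle taken to be a \(Z\) boson) are purely illustrative; the \(\mathrm{DM\,DM}\to XX\) case is assumed.

```python
import numpy as np

m_dm, m_x, m_sm = 1000.0, 300.0, 91.2   # GeV; illustrative values

# Box edges for DM DM -> X X followed by X -> nu SM, using the boost of Eq. (2).
e_star = (m_x**2 - m_sm**2) / (2.0 * m_x)        # neutrino energy in the X rest frame
eta = np.arccosh(m_dm / m_x)
e_min = e_star * (np.cosh(eta) - np.sinh(eta))
e_max = e_star * (np.cosh(eta) + np.sinh(eta))

# Invert with Eq. (7): the X and DM masses are recovered from the box edges.
m_x_rec = np.sqrt(e_max * e_min) + np.sqrt(e_max * e_min + m_sm**2)
m_dm_rec = (e_max + e_min) * m_x_rec**2 / (m_x_rec**2 - m_sm**2)
assert np.isclose(m_x_rec, m_x) and np.isclose(m_dm_rec, m_dm)
```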
The standard neutrinos produced by the hadronization and fragmentation of the SM particle can best be determined with programs such as Pythia[31] and are not analytically determinable. However, pre-computed spectra of \(\mathrm{DMDM}\to\) SM SM can be used to sample these neutrinos. For \(\mathrm{DMDM}\to\) SM SM the energy of the SM particle is simply \(E_{\mathrm{SM}}=M_{\mathrm{DM}}\). For \(X\to\) SM\({}_{1}\) SM\({}_{2}\), i.e. two different SM particles, the energy of particle SM\({}_{1}\) and SM\({}_{2}\) is respectively given by:
\[E_{\mathrm{SM}_{1}}=\frac{M_{X}^{2}+M_{\mathrm{SM}_{1}}^{2}-M_{ \mathrm{SM}_{2}}^{2}}{2M_{X}}\,, E_{\mathrm{SM}_{2}}=\frac{M_{X}^{2}-M_{\mathrm{SM}_{1}}^{2}+M_{ \mathrm{SM}_{2}}^{2}}{2M_{X}}\,. \tag{8}\]
Thus, to obtain the correct neutrino spectrum of SM\({}_{1}\), the spectrum of \(\mathrm{DM\,DM}\to\mathrm{SM}_{1}\,\mathrm{SM}_{1}\) is used with \(M_{\mathrm{DM}}=E_{\mathrm{SM}_{1}}\), and similarly for the neutrino spectrum of SM\({}_{2}\). Since this neutrino spectrum of SM\({}_{1/2}\) is given in the rest frame of \(X\), any neutrino sampled from it must be Lorentz boosted into the CM frame of the annihilating DM particles, exactly like the neutrinos coming directly from the decay of \(X\). Additionally, when the pre-computed spectra of \(\mathrm{DM\,DM}\to\mathrm{SM}_{1}\,\mathrm{SM}_{1}\) are used to determine the spectrum of a single particle SM\({}_{1}\), the spectrum d\(N\)/d\(E\) is overestimated by a factor of 2, which must be compensated for. Furthermore, any correlations between the annihilation products of the pre-computed spectra are assumed to be negligible [32, 33].
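Equation (8) can be checked with a few lines; the particle masses used here are standard values and the function name is our own.

```python
def daughter_energies(m_x, m_sm1, m_sm2):
    """Rest-frame energies of the two decay products of X (Eq. (8))."""
    e1 = (m_x**2 + m_sm1**2 - m_sm2**2) / (2.0 * m_x)
    e2 = (m_x**2 - m_sm1**2 + m_sm2**2) / (2.0 * m_x)
    return e1, e2

# Example: X -> W e with M_X = 300 GeV; the two energies sum to M_X as expected.
e_w, e_e = daughter_energies(300.0, 80.4, 0.000511)
assert abs((e_w + e_e) - 300.0) < 1e-9
```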
The average number of primary and secondary neutrinos per collision can be straightforwardly determined via counting:
\begin{tabular}{c c c} Process & \(N_{\mathrm{prim}}\) & \(N_{\mathrm{sec}}\) \\ \hline \(\mathrm{DMDM}\to\nu X\) & 1 & \(\mathrm{BR}[X\to\nu\mathrm{SM}]\) \\ \(\mathrm{DMDM}\to XX\) & 0 & \(2\cdot\mathrm{BR}[X\to\nu\mathrm{SM}]\) \\ \end{tabular}
Table 1: The average number of primary (peak) and secondary (box) neutrinos per DM annihilation.
Note that when \(X\) decays into \(\nu\nu\), the number of secondary neutrinos is doubled and the number of standard neutrinos is zero for that decay mode. The number of standard neutrinos is determined by the integral of their spectrum, \(dN/dE\) and therefore varies from case to case.
## 3 Model examples
The only requirement to obtain these spectra is a DM particle capable of either self-annihilation or decay into a mediating particle \(X\) that couples to neutrinos and/or photons. Of course, there are some limitations on the possible interactions and decay modes of the DM and \(X\) particles. For example, if \(X\) is a spin \(\frac{1}{2}\) particle, then the decay mode \(X\to\nu\nu\) is forbidden by conservation of angular momentum, making a pure box shape impossible. Since the shape of the neutrino spectrum is determined by its kinematics, the spin of the DM or the \(X\)-particle is irrelevant when it comes to the position of the peak or the box; only the possible couplings are affected.
Two examples of candidates for a particle \(X\) coupling to neutrinos are: \(Z^{\prime}\) bosons, introduced in a \(U_{L_{\mu}-L_{\tau}}(1)\)[34, 35, 36, 37] gauge extension of the SM, and unstable heavy neutrinos such as those in an inverse seesaw mechanism [38, 39, 40]. A heavy neutrino \(\nu_{H}\) and DM particle \(\phi_{\rm DM}\) could, for example, produce a non-standard muon neutrino spectrum via the decay chain \(\phi_{\rm DM}\phi_{\rm DM}\to\nu_{\mu}\nu_{H}\) with \(\nu_{H}\to\nu_{\mu}Z\), where the \(Z\) boson gives a continuous neutrino spectrum. The number of potential models can also be much larger; additional gauge groups, different neutrino mass mechanisms or Higgs mechanisms specific to neutrinos could all produce peak or box shapes. Of course, the number of different DM candidates is very large. These include particles that are added by hand, such as complex scalar DM, or those that arise as a consequence of the theory itself, e.g. neutralinos in supersymmetry.
Note that a single DM species annihilating with itself cannot produce a neutrino spectrum consisting only of primary and secondary neutrinos, i.e. \({\rm DM\,DM}\to\nu X\) with \(X\to\nu\nu\): the initial state contains an even number of fermions, while the final state would contain an odd number, which is of course impossible.3 For photons there are no such constraints. A process such as \({\rm DM\,DM}\to\nu\nu X\) with \(X\to\nu\nu\) is possible, but in that case no peak is formed, because the energies of the primary neutrinos and of \(X\) are not uniquely determined by a two-body phase space. However, a spectrum with a peak and a box is easily obtained via the decay of fermionic DM, for example a heavy neutrino decaying into a \(\nu\) and a scalar, which subsequently decays into two \(\nu\)'s.
Footnote 3: Two different DM particles annihilating would circumvent such a constraint, e.g. a sneutrino and a heavy neutrino as DM candidates could produce such a spectrum.
As for models that might produce non-standard photon spectra, a plethora [41, 42, 43, 44, 45] of models were proposed [46] to explain the 2015 750 GeV di-photon excess in ATLAS and CMS. It should be noted that if \(X\) is electrically charged, as is very possible for a particle coupling to photons, the possible energy range of the spectrum is limited by constraints on the mass of \(X\), e.g. from LEP searches. This is in contrast to a particle \(X\) that couples to neutrinos via the weak force, which can more easily evade experimental searches. Notably, \(X\) does not have to be electrically charged to couple to photons: the neutral pion \(\pi^{0}\), for example, decays into two photons. A non-standard photon spectrum could thus be produced by a pion-like BSM particle \(\Pi^{0}\) and a DM particle \(\phi_{\rm DM}\) via \(\phi_{\rm DM}\phi_{\rm DM}\to\Pi^{0}\Pi^{0}\) and \(\Pi^{0}\to\gamma\gamma\).
## 4 The sampling code
### Sampling procedure
The sampling of the neutrinos is done by sampling a primary, secondary or standard neutrino according to the probability \(N_{\rm prim}/N\), \(N_{\rm sec}/N\) and \(N_{\rm stand}/N\), respectively, where \(N=N_{\rm prim}+N_{\rm sec}+N_{\rm stand}\). The sampling of the primary neutrino is trivial because its energy is fixed. The secondary neutrinos are sampled with a uniform distribution, where the limits are given by Eq. (5). The standard neutrinos are sampled from precomputed spectra and subsequently Lorentz boosted into the CM frame of the annihilating DM particles as described previously. Sampling of the precomputed spectrum is performed by first determining the cumulative distribution function of the spectrum, which is obtained by numerical integration. A sampling is then performed using the standard inverse transform sampling. We use the precomputed spectra from [47, 48, 49].
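The two sampling ingredients described above (choosing which neutrino type to draw, and inverting the cumulative distribution of a tabulated spectrum) can be sketched as follows. This assumes the precomputed spectrum is available as \((E,\,\mathrm{d}N/\mathrm{d}E)\) arrays; the interpolation details of the released code may differ, so treat this as illustrative only.

```python
import numpy as np

def choose_component(n_prim, n_sec, n_stand, rng):
    """Pick a primary, secondary or standard neutrino with probabilities N_i / N."""
    total = n_prim + n_sec + n_stand
    return rng.choice(["primary", "secondary", "standard"],
                      p=[n_prim / total, n_sec / total, n_stand / total])

def make_spectrum_sampler(energies, dn_de):
    """Inverse-transform sampler for a tabulated dN/dE spectrum (standard neutrinos)."""
    weights = dn_de * np.gradient(energies)   # approximate dN contribution per tabulated point
    cdf = np.cumsum(weights)
    cdf /= cdf[-1]                            # normalized cumulative distribution
    def sample(n, rng):
        return np.interp(rng.uniform(0.0, 1.0, n), cdf, energies)
    return sample
```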
### Verification of spectra
In order to verify the accuracy of this sampler, we perform cross-checks against multiple spectra computed with MadGraph5 v3.1.1 [51] and Pythia 8.309 across a range of masses and decay modes. We deem a simple visual inspection of the spectra sufficient to validate the fidelity of the sampler. Figure 1 shows four example spectra, with the corresponding process indicated in each plot. The top-left and bottom two plots were generated with a simplified model in which \(X\) is a spin-0 or spin-1/2 particle, respectively, while the spectrum in the top-right plot is taken from [50]. In the two simplified-model spectra, \(X\) decays with a 100% branching ratio into a single decay mode, while the spectrum in the top right has a more complicated set of decay modes, \({\rm BR}[X\to\nu h]={\rm BR}[X\to\nu Z]=1/4\) and \({\rm BR}[X\to W^{\pm}e^{\mp}]={\rm BR}[X\to W^{\pm}\mu^{\mp}]={\rm BR}[X\to W^{\pm}\tau^{\mp}]=1/6\), which shows that the sampler can indeed handle more complicated decay modes.
In Figure 2, the two lower spectra of Figure 1 are plotted on a logarithmic scale, showing that the sampler also performs accurately at low energies. There is some erratic behaviour at \(E\approx 10^{-4}\) GeV, but this is simply an artefact of the finite number of Monte Carlo samples.
It should be noted that the normalization of the sampler and the spectra produced by MadGraph and Pythia are different, as the former only samples a spectrum while the latter fully computes an annihilation process. However, this difference can be easily remedied by counting the average number of neutrinos per event, which then directly provides the relative normalization factor. The spectra calculated by MadGraph and Pythia are the sum of 100,000 iterations, while the sampler's spectra are the result of 1,000,000 samples, which are then normalized.
Remarkably, although the spectra shown are \(\mathcal{O}(1\text{-}10)\) TeV, there is no inherent scale for these spectra, since both \(M_{\mathrm{DM}}\) and \(M_{X}\) are a priori unconstrained. Similarly, the detectability of any spectrum depends
Figure 1: Comparison of the spectra between MadGraph/Pythia (orange) and the sampling code (blue). The upper left and lower two spectra were generated with simplified models, while the upper right spectrum is an example point from a full model [50]. The lower two plots show the same model, but evaluated at different energies.
Figure 2: A comparison between the spectra generated by MadGraph/Pythia (orange) and the sampling code (blue) for two different spectra. The spectra in these plots are the same as the bottom two in Figure 1, but plotted logarithmically.
strongly on its annihilation cross-section \(\langle\sigma v\rangle\), which is of course model specific and which we do not comment on here.
### Required input
The code can sample \(\text{DMDM}\rightarrow\nu/\gamma X\), \(\text{DMDM}\to XX\), \(\text{DM}\rightarrow\nu/\gamma X\), or \(\text{DM}\to XX\) in which \(X\) can have any decay products and branching ratios into SM particles. The required input is as follows:
* How the spectra is produced: DM decay (1) or DM annihilation (2)
* The particle type: \(\nu_{e}\), \(\nu_{\mu}\), \(\nu_{\tau}\), or \(\gamma\).
* Which process: \(\text{DMDM}\rightarrow\nu/\gamma X\) (1) or \(\text{DMDM}\to XX\) (2).
* The DM mass \(M_{\text{DM}}\) in GeV.
* The \(X\) mass \(M_{X}\) in GeV.
* The decay modes of \(X\) with the format being 'BR daughter1 daughter2'. The total branching ratio must sum to 1.
* The number of samples from the spectrum.
* The path to the csv file where the points are saved. When no input is given, no save is made. The keyword 'plot' or 'logplot' can be entered to plot the sampled data linearly or logarithmically.
Acceptable daughter particles of the \(X\) decay are vl, e, mu, tau, h, z, w, ga, u, d, s, c, b, t, and g. The neutrino type is only important for the production of standard neutrinos; in particular, the spectrum of tau neutrinos can differ significantly from the spectra of electron and muon neutrinos when all other parameters are identical. The input can be provided manually or via an input file that is passed as an argument.
It is noteworthy that the masses of the DM and \(X\) particles are not constrained, while the interpolated spectra used for the daughter particles are tabulated up to 10,000 GeV; the sampler therefore cannot capture the spectra of daughter particles with energies greater than 10,000 GeV.
One disadvantage of these non-standard spectra is, of course, the enlargement of the parameter space. In the typical indirect search for DM, with a fixed channel, only the mass of the DM particle is important. In these non-standard spectra, however, the dimensionality of the parameter space is at least 2, namely the DM mass and the mass of the BSM particle \(X\).
## 5 Conclusion
In this study, we have presented a novel, largely model-independent class of neutrino and photon spectra. We have developed a user-configurable Monte Carlo sampler to efficiently sample these spectra, which can be found in this github repository. While previous experimental searches have focused on neutrino and photon spectra with lines (peaks) and continuous spectra, more complex spectra have been largely overlooked. It is important to note that the spectra we have discussed are not constrained in mass, allowing for arbitrarily high or low energy ranges. Furthermore, the two-dimensional parameter space, comprising the dark matter mass and the mass of the BSM particle X, increases the complexity compared to the typical one-dimensional parameter space. This sampler may prove valuable not only for conventional high-energy astroparticle searches, but also for low-energy searches.
## Acknowledgements
R. RdA is supported by PID2020-113644GB-I00 from the Spanish Ministerio de Ciencia e Innovacion. |
2307.06419 | The Acquisition of Semantic Relationships between words | The study of semantic relationships has revealed a close connection between
these relationships and the morphological characteristics of a language.
Morphology, as a subfield of linguistics, investigates the internal structure
and formation of words. By delving into the relationship between semantic
relationships and language morphology, we can gain deeper insights into how the
underlying structure of words contributes to the interpretation and
comprehension of language. This paper explores the dynamic interplay between
semantic relationships and the morphological aspects of different languages, by
examining the intricate relationship between language morphology and semantic
relationships, valuable insights can be gained regarding how the structure of
words influences language comprehension. | Mohamed Naamane | 2023-07-12T19:18:55Z | http://arxiv.org/abs/2307.06419v1 | # Natural Language Processing:
###### Abstract
The study of semantic relationships has revealed a close connection between these relationships and the morphological characteristics of a language. Morphology, as a subfield of linguistics, investigates the internal structure and formation of words. By delving into the relationship between semantic relationships and language morphology, we can gain deeper insights into how the underlying structure of words contributes to the interpretation and comprehension of language. This paper explores the dynamic interplay between semantic relationships and the morphological aspects of different languages, by examining the intricate relationship between language morphology and semantic relationships, valuable insights can be gained regarding how the structure of words influences language comprehension.
## 1 Introduction
The fundamental premise of this algorithm rests upon the extraction of a value that effectively captures the intrinsic relationship between two given words. This value serves as a pivotal seed in the discovery of analogous relationships with other words. It is noteworthy that this approach to numerical representation is not a recent development; rather, it finds its historical roots in the practices of ancient Arab civilizations.
## 2 Abjad numerals
The assignment of numerical values to individual English letters follows a prescribed methodology:
DB["a"]=1
DB["b"]=2
DB["c"]=3
DB["d"]=4
DB["e"]=5
DB["f"]=6
DB["g"]=7
DB["h"]=8
DB["i"]=9
DB["j"]=10
DB["k"]=20
DB["l"]=30
DB["m"]=40
DB["n"]=50
DB["o"]=60
DB["p"]=70
DB["q"]=80
DB["r"]=90
DB["s"]=100
[MISSING_PAGE_POST]
### Semantic Relationships
Extracting a value that represents the relationship between two given words, denoted x and y, entails converting these words into series of numbers based on the assigned numerical values of their constituent letters.
\[S(x)=\sum_{i=0}^{n-1}S\!\left(\sum_{j=0}^{n-1}S(x_{i})\right)\]
As an illustration, consider the letter "t", whose value is 200. The corresponding series of numbers for a word containing only the letter "t" would be [2, 0, 0], where each element is a single digit. To ensure that the two series have equal size, zeros are appended to the shorter series until both series attain equal length.
The sum of the products of corresponding elements of the two series, divided by the number of elements and then by 10, serves as a quantitative measure of the relationship between the two words. This value, referred to as the seed, can then be used to identify analogous relationships with other words:
\[\mu=\frac{1}{n}\sum_{i=1}^{n}A_{i}\,B_{i}\,,\qquad R(x,y)=\frac{\mu}{10}\,,\qquad\mathrm{seed}=R(x,y)\,,\]
where \(A\) and \(B\) are the series of numbers for x and y, respectively, and \(n\) is their common length.
To explore additional words that share a relationship similar to the one established between x and y, we compute a value that signifies the strength of the relationship between the seed relation and another word, denoted z. This value serves as a metric for the degree of association, allowing words with relationships analogous to the initial pair to be identified:
\[\mathrm{per}(x,z)=\big[\,\big|\,\mathrm{seed}\times R(x,z)\times 100\,\big|\,\big]\]
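The computation described by these formulas can be sketched directly in Python. The letter values follow Section 2 (with t = 200, as in the earlier illustration; letters after "t" are omitted here). The wre library may implement details such as rounding differently, so this is only an illustrative sketch.

```python
VALUES = {**{c: i + 1 for i, c in enumerate("abcdefghi")},          # a=1 ... i=9
          **{c: 10 * (i + 1) for i, c in enumerate("jklmnopqrs")},  # j=10 ... s=100
          "t": 200}                                                  # letters after 't' omitted

def digit_series(word):
    """Concatenate the decimal digits of each letter's value, e.g. 't' -> [2, 0, 0]."""
    return [int(d) for ch in word.lower() for d in str(VALUES[ch])]

def relation(x, y):
    a, b = digit_series(x), digit_series(y)
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))   # pad the shorter series with zeros
    mu = sum(p * q for p, q in zip(a, b)) / n               # mean of element-wise products
    return mu / 10.0

seed = relation("brain", "think")
score = abs(round(seed * relation("brain", "imaging") * 100))   # percentage-style score
```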
The following is an illustrative example of applying this algorithm to examine the relationship between the word pair ['brain', 'think'] within a small English corpus. The analysis shows that, after the word "Brain" itself, the word "imaging" exhibits the highest degree of correlation, with a 237% relevance score:
# git clone -b master https://github.com/mrmednmn/wre.git
from wre.lang.arabic import Arabic
from wre.lang.english import English
from wre.wrelation import WordRelation
lang = English()
word_relation = WordRelation(lang)
word_pair = ["brain", "think"]
# Get the relation between the 2 words as a seed; the word pair should have a relation.
seed = word_relation.get_relation(word_pair[0], word_pair[1])
# Input file to get similar relations from.
data = open("kns.txt", "r", encoding=lang.encoding).read().split()
total_words = len(data)
with open("out.txt", "w", encoding=lang.encoding) as out_file:
    progress_message = ""
    for index, word in enumerate(data):
        progress_percentage = (index + 1) / total_words * 100
        progress_message = f"Processing word {index + 1} of {total_words} ({progress_percentage:.2f}%)"
        print(progress_message, end="\r", flush=True)
        # Get the relation between the seed word and the current word in the input data.
        relation = word_relation.get_relation(word_pair[0], word)
        # Check if the current input word has a relation similar to the seed word pair.
        if word_relation.has_relation(seed, relation):
            # Get the difference between the seed and the current relation.
            diff = word_relation.evaluate(seed, relation)
            response = str(word_pair) + " is related to " + word + " per = " + str(diff) + "%"
            out_file.write(response + "\n")
    clear_message = " " * len(progress_message)
    print(clear_message, end="\r")
### 4. Results
['brain', 'think'] is related to university per = 33%
['brain', 'think'] is related to book per = 139%
['brain', 'think'] is related to wave per = 77%
['brain', 'think'] is related to ai per = 155%
['brain', 'think'] is related to technology per = 55%
[MISSING_PAGE_POST]
['brain', 'think'] is related to romantic per = 51%
['brain', 'think'] is related to people per = 72%
['brain', 'think'] is related to Cognition per = 98%
['brain', 'think'] is related to Neurology per = 99%
['brain', 'think'] is related to Intelligence per = 47%
['brain', 'think'] is related to Memory per = 46%
['brain', 'think'] is related to Cognitive per = 98%
['brain', 'think'] is related to functions per = 61%
['brain', 'think'] is related to Neuroplasticity per = 6%
['brain', 'think'] is related to Synapse per = 8%
['brain', 'think'] is related to Neural per = 15%
['brain', 'think'] is related to networks per = 9%
['brain', 'think'] is related to Neurotransmitters per = 5%
['brain', 'think'] is related to Cognitive per = 98%
['brain', 'think'] is related to development per = 51%
['brain', 'think'] is related to Brain per = 360%
['brain', 'think'] is related to structure per = 2%
['brain', 'think'] is related to Brain per = 360%
['brain', 'think'] is related to activity per = 61%
['brain', 'think'] is related to Neural per = 15%
['brain', 'think'] is related to pathways per = 12%
['brain', 'think'] is related to Cognitive per = 98%
['brain', 'think'] is related to processes per = 50%
['brain', 'think'] is related to Mental per = 27%
['brain', 'think'] is related to processes per = 50%
['brain', 'think'] is related to Brain per = 360%
['brain', 'think'] is related to health per = 108%
['brain', 'think'] is related to Brain per = 360%
['brain', 'think'] is related to functions per = 61%
['brain', 'think'] is related to Neurological per = 9%
['brain', 'think'] is related to disorders per = 98%
['brain', 'think'] is related to Brain per = 360%
['brain', 'think'] is related to imaging per = 237%
['brain', 'think'] is related to Neurodegeneration per = 6%
### Conclusion
In conclusion, the investigation into the relationship between language morphology and semantic associations has shed light on the potential for extracting valuable insights from the numerical representation of words. By employing a methodology that assigns numerical values to letters and then generates series of numbers, we have established a means of quantifying the strength of relationships between words. This approach, exemplified by the computation of average products and ratios, enables the inference of analogous relationships and the discovery of words that share similar semantic connections.
As we continue to explore the complex relationship between language morphology and semantic associations, further research endeavors should focus on refining the computational techniques employed, expanding the linguistic datasets utilized, and investigating the generalizability of these findings across diverse languages and linguistic contexts. By doing so, we can continue to advance our understanding of language systems and unlock new avenues for linguistic analysis and application.
|
2303.03096 | Spontaneous Collapse of the Wavefunction: A Testable Proposal Motivated
by Discrete Physics | A modified form of quantum mechanics which includes a new mechanism for
wavefunction collapse is proposed. The collapse provides a solution to the
quantum measurement problem. This modified quantum mechanics is shown to arise
naturally from a fully discrete physics, where all physical quantities are
discrete rather than continuous. We compare the theory to the spontaneous
collapse theories of Ghirardi, Rimini, Weber and Pearle, and argue that the new
theory lends itself well to a realist interpretation of the wavefunction. | Martin J. Leckey, Adrian P. Flitney | 2023-03-03T04:56:09Z | http://arxiv.org/abs/2303.03096v1 | # Spontaneous Collapse of the Wavefunction:
###### Abstract
A modified form of quantum mechanics which includes a new mechanism for wavefunction collapse is proposed. The collapse provides a solution to the quantum measurement problem. This modified quantum mechanics is shown to arise naturally from a fully discrete physics, where all physical quantities are discrete rather than continuous. We compare the theory to the spontaneous collapse theories of Ghirardi, Rimini, Weber and Pearle, and argue that the new theory lends itself well to a realist interpretation of the wavefunction.
## 1 Introduction
Glory is like a circle in the water,
Which never ceaseth to enlarge itself
Till by broad spreading it disperse to nought.
Shakespeare, _Henry VI, part 1_
We make a proposal for a modified quantum mechanics that includes an outline of a solution to the 'measurement problem.' The theory is shown to arise naturally from a fully discrete physics, where all physical quantities, including the magnitude of the wavefunction, are discrete. Since normal wavefunction evolution leads to an expansion or spreading with time, eventually the wavefunction will grow to such an extent that the average magnitude of the wavefunction threatens to fall below the minimum magnitude representable in the discrete physics under consideration, thus threatening to disappear altogether. Our key idea is that, since the wavefunction does not vanish, the existence of a minimum magnitude provides a reason, independent of any consideration of measurement, to suppose that the wavefunction will 'collapse', suddenly
reducing in size, when it reaches a certain critical volume in configuration space. How this form of collapse leads to a solution to the measurement problem will be explained in what follows.
In nature, there would appear to be no truly isolated systems, that is, systems with no interaction at all with their environment. According to standard interpretations of quantum mechanics, this means that the universe is most accurately represented by a single wavefunction. In the model discussed here, not only do wavefunctions spontaneously collapse, but there are also many separate wavefunctions in the universe, some representing as little as a single particle. This picture permits a realist interpretation of the wavefunction in which the wavefunction is not merely a mathematical tool for predicting outcomes of experiments or observations, but is also a physical feature of the universe in its own right. Furthermore, since the theory makes novel empirical predictions, it is in principle testable.
The theory has some features in common with two closely related theories of wavefunction collapse that have been proposed with the intention of providing a solution to the measurement problem. These are the theory of Ghirardi, Rimini, and Weber (1986, 1988) (GRW); and the Continuous Spontaneous Localization (CSL) theory (Ghirardi et al. 1990). In the following section, we give a brief account of the measurement problem. An introduction to existing collapse theories is presented in Sect. 3. In Sect. 4 we discuss discrete physics and demonstrate how discretization leads naturally to a consideration of wavefunction collapse. Section 5 introduces our modified quantum mechanics which we show can lead to a solution to the measurement problem (Sects. 6-7). The dynamics of the new model are further explored in Sects. 8-9. Comparison between the new theory and existing spontaneous collapse models is given in Sect. 10. Section 11 explores wavefunction realism and Sect. 12 discusses observational constraints on the theory, followed by concluding remarks. The new theory gives rise to novel observational predictions. Confirmation of these predictions could in principle empirically differentiate this collapse theory from both standard quantum theory and other collapse theories. Further details about novel predictions of the theory are given in appendices A and B.
## 2 Measurement Problem
In the orthodox (Dirac-von Neumann) formulation of quantum mechanics (Dirac 1930; von Neumann 1955), the time-evolution of a system is described by the (deterministic) Schrodinger equation. However, when a 'measurement' takes place, the normal evolution of the state is suspended and the system changes indeterministically, abruptly 'collapsing', resulting in a determinate measurement outcome. This gives rise to several problems that can be grouped together under the label 'the measurement problem': What selects a preferred basis for physical quantities in nature; why are interference effects not observed for macroscopic objects; what process constitutes a 'measurement'; and--most importantly--why do measurements have discrete outcomes (Schlosshauer 2007)? The orthodox formulation is unacceptable if we assume that measurements are physical interactions and we seek a fundamental theory that can provide an account of these interactions without reference to an ill-defined concept such as 'measurement.'
The orthodox theory is adequate only in that it provides a means of determining the results of experiments but not if we wish to have a realist description of the physical world.
When a quantum system interacts with the environment, quantum coherence is delocalized into the entangled environment-system state and we effectively lose the ability to observe it (Zeh 1970; Zurek 1981). This process, known as decoherence, is effectively irreversible and helps explain the non-observation of macroscopic quantum interference effects, as well as often providing a preferred basis for physical quantities. However, it does not explain how measurements have unique outcomes (Schlosshauer 2007, pp. 57-60).
One way of viewing the measurement problem is to say that the following three properties are incompatible (McQueen 2015):
1. A measurement always give rise to a single determinate outcome.
2. The wavefunction provides a complete description of all the physical properties of a system.
3. The wavefunction always evolves according to the Schrodinger equation.
Various possible solutions to the measurement problem have been proposed. These can be loosely placed into three groups depending on which of the arms of the above trilemma are discarded. Relative state/many worlds interpretations give up the first, hidden variable theories opt out of the second, while physical collapse models, such as our own, modify the third. We will briefly discuss each of these in turn.
The relative state interpretation pioneered by Everett (1957) maintains that there is a total quantum state for the entire universe and that Schrodinger evolution is maintained at all times. Quantum measurement does not produce a particular outcome but causes the macroscopic world to branch into a multitude of copies that 'coexist' in some sense, either relative to the state of the other parts of the system (Everett's original proposal), to a particular branch of a constantly splitting universe (the _many worlds_ interpretation championed by DeWitt and Graham (1973) among others), or to a particular mind where each mind enters into an entangled state correlated with the state that they observe (the _many minds_ interpretation; see, for example, Lockwood (1996)).
The interpretation of Rovelli (1996, 2016, 2021), which he calls relational quantum mechanics, has much in common with relative state interpretations. In Rovelli's theory, the properties of the system are not absolute but relative to the interacting system: "...there is no reality except in _relations_ between physical systems" (Rovelli 2016, p. 115). Another theory with some similarities is that of Barad (2007) which she calls agential realism.
The hope with hidden variable theories is that 'hidden' determinate values of some physical quantities exist (for example, position) and that the quantum state is a way of describing a probability distribution over the values of each of those quantities. Some underlying law for the hidden variables then generates the dynamics of the system. In this sense, the values of the 'hidden' physical quantities already exist before a measurement takes place, thus circumventing the measurement problem. It has been known for some time that hidden variable theories are necessarily nonlocal (Bell 1964). The best
known such theory is the de Broglie-Bohm pilot wave theory (Bohm 1952). In this model, a system of particles is described both by the wavefunction, evolving in the usual manner according to the Schrodinger equation, and by the actual positions of the particles which evolve according to a 'guiding equation', relating the velocities of the particles to the wavefunction. The reason for the indeterminacy of quantum mechanics is that we do not know the exact initial position of the particle(s); measurement simply determines the position of the particle(s) and so is not mysterious.
Physical collapse theories modify the unitary dynamics of the wavefunction to make wavefunction collapse a physical process. The Schrodinger equation then becomes an approximation, applicable at the microscopic level but breaking down at larger scales. In principle, this difference is testable since it will produce small deviations from the predictions of orthodox quantum mechanics (see Sect. 12). Such collapse must be stochastic so that it conforms to the Born rule for probabilities. Any collapse theory must cause superpositions to collapse to probabilistic mixtures in a time sufficient small to avoid the observation of macroscopic superpositions and without having an appreciable effect on states that are resistant to decoherence.
Wigner (1967) championed the idea that human consciousness triggers a collapse of the wavefunction. That is, conscious states resist being forced into superpositions. Modern theories of consciousness-induced collapse, where superpositions of conscious states are dynamically suppressed, are explored by Okon et al. (2020) and Chalmers and McQueen (2022).
Gravitational effects may induce collapse in macroscopic superpositions of different mass density states (Diosi 1989), an idea championed by Roger Penrose (1996, 2014). When the structure of spacetime evolves into superpositions over a certain threshold, these superpositions collapse into a definite structure. Another suggestion is that absorption or emission of photons may trigger collapse to a photon number state and thereby induce localization of macroscopic objects (Simpson 2011).
Spontaneous collapse could contribute to a solution of a number of cosmological puzzles, such as the appearance of inhomogeneity in the early universe from an initial homogeneous state, the black hole information paradox and the origin of the second law of thermodynamics (Okon and Sudarsky 2017).
The first mathematically detailed spontaneous collapse model to be discussed in the literature was the theory of Ghirardi, Rimini and Weber (Ghirardi et al. 1986, 1988) though later efforts concentrated on the similar Continuous Spontaneous Localization model (Ghirardi et al. 1990). Details of the GRW model are presented in the next section. Comparison between our model and GRW/CSL models follows in Sect. 10.
## 3 Existing spontaneous collapse theories
The general idea behind the GRW model is that every particle in the universe is subject, at random times, to approximate spatial localizations. The effect of a localization is to cause the wavefunction representing the particle to collapse instantaneously. The following discussion is based on that in Bell (1987, pp. 201-209). Suppose that, before
collapse, a particle is part of a system represented by a wavefunction of \(N\) particles:
\[\Psi({\bf x}_{1},{\bf x}_{2},\ldots,{\bf x}_{N},t). \tag{1}\]
The effect on the wavefunction of a localization of particle \(k\), where \(k\) is randomly chosen from the \(N\) particles, is produced by multiplying the wavefunction by a single-particle 'jump factor' \(j({\bf x}^{\prime}-{\bf x}_{k})\), where \({\bf x}^{\prime}\) is a specific value chosen according to the probability density given by Eq. (4) below. The wavefunction is at the same time normalized to unity. That is, the new wavefunction is
\[\Psi^{\prime}({\bf x}_{1},\ldots,{\bf x}_{N},t)=\frac{j({\bf x}^{\prime}-{\bf x }_{k})\Psi({\bf x}_{1},\ldots,{\bf x}_{N},t)}{\parallel j({\bf x}^{\prime}-{ \bf x}_{k})\Psi({\bf x}_{1},\ldots,{\bf x}_{N},t)\parallel}. \tag{2}\]
GRW suggest that jump factor \(j({\bf x}^{\prime}-{\bf x}_{k})\) is a Gaussian of the form
\[j({\bf x}^{\prime}-{\bf x}_{k})=\left(\frac{\alpha}{\pi}\right)^{3/4}\exp \left(\frac{-\alpha({\bf x}^{\prime}-{\bf x}_{k})^{2}}{2}\right), \tag{3}\]
where \(\alpha\) is a new constant of nature. The effect of multiplying by the Gaussian is to approximately localize the particle within a radius of \(1/\sqrt{\alpha}\). The probability density of the Gaussian being centred at point \({\bf x}^{\prime}\) is taken to be
\[P({\bf x}^{\prime},t)=\parallel\Psi^{\prime}({\bf x}_{1},\ldots,{\bf x}_{N},t )\parallel^{2}=\int d{\bf x}_{1}\ldots d{\bf x}_{N}\,|\Psi^{\prime}({\bf x}_{1 },\ldots,{\bf x}_{N},t)|^{2}. \tag{4}\]
This ensures that the probability of collapse is greatest where the magnitude of the wavefunction is greatest, in close agreement with the probabilistic predictions of standard quantum mechanics. Localizations for each particle occur at random times with mean frequency \(\lambda\), and between each localization the wavefunction evolves according to the Schrodinger equation. By choosing appropriate values for \(\alpha\) and \(\lambda\), the theory attempts to make collapses for a microscopic system very infrequent while collapses for a macroscopic system occur frequently, making superpositions of such systems unobservable. The values normally chosen are \(1/\sqrt{\alpha}=10^{-7}\,\mathrm{m}\) and \(\lambda=10^{-16}\,\mathrm{s}^{-1}\), meaning that, for a single particle, a collapse would occur on average every few hundred million years, while for a macroscopic collection of, for example, \(10^{23}\) particles, a collapse would occur every \(10^{-7}\) seconds.
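As a rough numerical illustration of Eqs. (2)–(4) and of these parameter choices, the sketch below applies a single GRW 'hit' to a discretized one-dimensional toy wavefunction and prints the mean waiting times implied by \(\lambda\) for one particle and for \(10^{23}\) particles. The grid, the two-packet wavefunction and the use of one spatial dimension are illustrative assumptions, not part of the GRW model itself.

```python
import numpy as np

# Illustrative 1D grid (metres); the GRW model proper is defined in three dimensions.
x = np.linspace(-1e-6, 1e-6, 4001)
dx = x[1] - x[0]

# Toy single-particle wavefunction: a superposition of two separated packets.
psi = np.exp(-((x + 4e-7) ** 2) / (2 * (5e-8) ** 2)) \
    + np.exp(-((x - 4e-7) ** 2) / (2 * (5e-8) ** 2))
psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # normalize to unity

alpha = 1.0 / (1e-7) ** 2   # localization constant: 1/sqrt(alpha) = 1e-7 m
lam = 1e-16                 # mean hit frequency per particle (s^-1)

def grw_hit(psi, x, dx, alpha, rng):
    """One GRW localization: pick a centre x' with probability given by the
    post-hit norm (Eq. 4), multiply by the jump factor (1D analogue of Eq. 3)
    and renormalize (Eq. 2)."""
    jump = lambda xc: (alpha / np.pi) ** 0.25 * np.exp(-alpha * (x - xc) ** 2 / 2)
    weights = np.array([np.sum(np.abs(jump(xc) * psi) ** 2) * dx for xc in x])
    weights /= weights.sum()
    xc = rng.choice(x, p=weights)
    new = jump(xc) * psi
    return new / np.sqrt(np.sum(np.abs(new) ** 2) * dx), xc

rng = np.random.default_rng(0)
psi_after, centre = grw_hit(psi, x, dx, alpha, rng)
print(f"hit centred near x' = {centre:.2e} m")

year = 3.15e7  # seconds per year
print(f"single particle : one hit per {1 / lam / year:.1e} yr")
print(f"1e23 particles  : one hit per {1 / (lam * 1e23):.1e} s")
```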
Since collapses will be common for macroscopic systems, they will be prevented from entering into superpositions of macroscopically separate locations, thus providing a solution to the measurement problem for macroscopic systems while maintaining normal continuous (Schrodinger) evolution for microscopic systems.
A quantum mechanical state can be represented in many mathematically equivalent ways, the representation depending on which observables are used to define the basis states of the representation. The theory of GRW gives a privileged place to the position observable, since the localizations occur in position space rather than momentum space, or the spaces of other observables.
One problem with the GRW model is that, for wavefunctions of systems containing identical particles, the existing symmetry of the wavefunction is destroyed when the wavefunction collapses. The symmetry (or antisymmetry) of the wavefunction with
respect to the exchange of identical particles is a requirement of standard quantum mechanics. The problem arises in the GRW model since, when a collapse occurs, the localization is focussed on a single particle, with only small effects on other identical particles that may be represented by the wavefunction.1
Footnote 1: Dove and Squires (1995) have developed versions of the original GRW model that preserve the symmetry character of the wavefunction.
In part due to the problems with the GRW model, continuous spontaneous localization models were subsequently developed (Ghirardi et al. 1990). These theories preserve the symmetry character of the wavefunction. We will not describe these theories here, since the new theory of collapse described in the current work more closely resembles the GRW theory, and CSL does not differ significantly from GRW on the issues that will be central to this paper. Similarly, we will not consider relativistic versions of the GRW model (Tumulka 2006).
## 4 Discrete Physics
Discrete physics is characterised by the quantities representing the state of a system being discrete valued and finite in number (Zuse 1970; Feynman 1982; Minsky 1982; Fredkin 1990).2 There is speculation that the whole of physical reality is digital: this is the approach taken by those modelling physics using cellular automata (Wolfram 1986, 2002; 't Hooft 2016). In a cellular automaton, the state of a physical system is taken to be represented by a certain finite number of discrete magnitudes, and these quantities are defined only at points on a spatial lattice with a finite separation. The magnitudes are updated at discrete time intervals according to set rules, representing the fundamental laws of physics. This approach is sometimes taken as merely a means of modelling continuous physical phenomena, but if it turned out that all state-variables currently taken to be continuous were in fact discrete, then an approach like this would give a more realistic picture of nature than an approach based on continuous quantities.
Footnote 2: A collection of articles about physics as computation, published in 2013, contains a number of articles that posit a discrete physics (Zenil 2013).
Aspects of string theory and other quantum theories of gravity have led many authors to suggest that there may be a minimum length scale in the universe, at approximately the Planck length, \(10^{-35}\)m (see, for example, Witten (1996); Baez (2003)). To many, such as 't Hooft (1997, 2016, 2018), this suggests that space-time may be discrete rather than continuous. Smolin (2001, 2006) also proposes that discrete space (and time) may arise from quantum theories of gravity known as loop quantum gravity. Beane et al. (2014) have considered possible observational consequences of a discrete space-time.
We will adopt the following principle: discretize physics not with the aim of approximating continuous reality, but with the aim of considering the consequences of reality actually being discrete. Unlike 't Hooft (2016), we will not be assuming that the underlying discrete physics is classical, local, or linear, and unlike Wolfram (2002), we will not be assuming that it is reversible. In fact, as we shall see, the discrete physics we end up with is non-local, non-linear, and irreversible. One of the advantages of the theory that we introduce in Sect. 5 is that it follows naturally from the discretization of
quantum mechanics. Accordingly, we will take as our starting point a discrete-valued wavefunction.
Below we describe one way wavefunctions could be represented in a discrete physics. The discretization of the wavefunction will be carried out in the position representation.
Consider the wavefunction in configuration space for a system of \(N\) interacting particles, neglecting spin. In standard quantum mechanics, the wavefunction is continuous in magnitude and defined at all points in configuration space. The wavefunction can be written as a product of a magnitude and phase factor as follows:
\[\Psi({\bf x}_{1},{\bf x}_{2},\ldots,{\bf x}_{N},t)=f({\bf x}_{1},{\bf x}_{2}, \ldots,{\bf x}_{N},t)\,e^{i\theta({\bf x}_{1},{\bf x}_{2},\ldots,{\bf x}_{N},t )}, \tag{5}\]
where \(f\) and \(\theta\) are real-valued functions. The wavefunction is assumed to be normalized to unity in the usual manner. In a discrete physics, both position space and configuration space are divided into cells of small finite size. Consider dividing configuration space into cells of volume \(a_{1}^{3}\ldots a_{N}^{3}\), where \(a_{k}\) is the length characteristic of the \(k\)th particle represented by the wavefunction. Here the possibility has been left open that the length of the sides of the cell in configuration space may depend on the mass or energy of the particle to which they correspond. If the length turns out to be independent of the particle and its energy then the volume of a single cell will be \(a^{3N}\). We favour the option that \(a_{k}\) is proportional to the mean de Broglie wavelength of that particle since this wavelength roughly characterizes the rate of change of the wavefunction with distance. Further discussion of this point is beyond the scope of this paper but more details are given in Leckey (1998). We define a wavefunction that is both spatially discrete, with one value per cell in configuration space, and discrete valued:
\[f({\bf x}_{1},{\bf x}_{2},\ldots,{\bf x}_{N},t) = n_{f}({\bf x}_{1},{\bf x}_{2},\ldots,{\bf x}_{N},t)\,f_{0} \tag{6}\] \[\theta({\bf x}_{1},{\bf x}_{2},\ldots,{\bf x}_{N},t) = n_{\theta}({\bf x}_{1},{\bf x}_{2},\ldots,{\bf x}_{N},t)\,\theta _{0}\,, \tag{7}\]
where \(n_{f}\) and \(n_{\theta}\) are functions that yield natural numbers, \(f_{0}\) is the base magnitude of the discrete wavefunction and \(\theta_{0}\) is the base phase angle.
Define the relative volume of the wavefunction, \(v\) as the \(3N\)-dimensional volume in configuration space, \(V\) for which the wavefunction is non-zero, divided by the corresponding volume of a single cell in \(3N\) dimensions:
\[v=\frac{V}{a_{1}^{3}\ldots a_{N}^{3}}. \tag{8}\]
The discrete wavefunction has a clear boundary, unlike the case in orthodox quantum mechanics, where all wavefunctions extend over the entire universe: a continuous wavefunction vanishes over a finite region only where the potential is infinite, which is not physical.
If we have \(N\) spin-half particles, the full wavefunction will be given by multiplying the spatial wavefunction by a spin vector of \(2^{N}\) components, so that the total wavefunction is given by \(2^{N}\,v\) complex values. We argue that the relative volume is a possible measure of the _complexity_ of the wavefunction, since the number of complex values required to specify a wavefunction of \(N\) particles is proportional to \(v\).
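To make the relative volume concrete, the following sketch (a toy one-dimensional, single-particle example; the cell size, \(f_{0}\) and the Gaussian width are assumed values) realizes Eqs. (6) and (8) by rounding the magnitude of a wavefunction to integer multiples of \(f_{0}\) and counting the cells that remain non-zero.

```python
import numpy as np

# Assumed illustrative values; none of these numbers is fixed by the theory.
a = 1e-9          # cell size in position space (m)
f0 = 1e2          # base magnitude of the discrete wavefunction (units m^-1/2 in 1D)
sigma = 5e-8      # width of a Gaussian single-particle wavefunction (m)

x = np.arange(-1e-6, 1e-6, a)                      # one value per cell
f = (2 * np.pi * sigma ** 2) ** -0.25 * np.exp(-x ** 2 / (4 * sigma ** 2))

# Eq. (6): magnitude in integer multiples of f0 (the phase is ignored here).
n_f = np.round(f / f0).astype(int)
f_discrete = n_f * f0

# Eq. (8): relative volume = number of cells with non-zero magnitude.
v = np.count_nonzero(f_discrete)
print("relative volume v =", v)
print("norm of the discretized wavefunction ~", np.sum(f_discrete ** 2) * a)
```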
## 5 Critical Complexity Quantum Mechanics
We introduce a modified quantum mechanics that not only provides a new approach to solving the measurement problem by providing a mechanism for wavefunction collapse but admits a realist interpretation of wavefunctions that represent less than the entire universe. One strength of the theory is that it follows naturally from a fully discrete physics. We will show that the collapse of the wavefunction is well motivated when considering a discrete physics, and is not a mere ad hoc modification to linear quantum mechanics.
In standard quantum mechanics, when we write a wavefunction of a single particle or small group of particles, this is an approximation that we obtain by neglecting the (small) interaction with the environment. Under many interpretations of quantum mechanics, including the orthodox one, wavefunctions are regarded as merely convenient constructions useful for making calculations. Under this anti-realist interpretation of wavefunctions, under no circumstances will the wavefunction be regarded as corresponding to what exists in nature.
The theory that we put forward here differs considerably from standard quantum mechanics in providing for the existence of wavefunctions that represent a finite number of particles, much smaller than the total number of particles in the universe, and in allowing a realist interpretation of these wavefunctions and of wavefunction collapse. Furthermore, under this theory the number of particles a wavefunction represents can change with time, either through the wavefunction splitting into two or more smaller wavefunctions, or by combining with one or more other wavefunctions. From this point of view it is perhaps natural that a collapse or split of a wavefunction be triggered when it reaches some critical size, or complexity, for some measure of the complexity of the wavefunction. This takes up a suggestion by Leggett (1984, p. 598) that there may be "corrections to linear quantum mechanics which are functions, in some sense or other, of the _degree of complexity_ of the physical system described."
We propose the measure of complexity of a wavefunction (for the moment neglecting spin) is given by its relative volume in configuration space, as defined in the previous section. Configuration space has \(3N\) dimensions for a system of \(N\) particles so that the dimensionality of a 'volume' in this space will vary with the number of particles represented by the wavefunction. However, the relative volume is the number of cells occupied by the wavefunction and so is a dimensionless quantity. A fuller discussion of the relation between relative volume and complexity, and a possible link between relative volume, entropy and the arrow of time, is given in Leckey (1998, 2016).
We assume that the wavefunction of a quantum system will collapse, or split, when it reaches a certain critical relative volume in configuration space. What a 'collapse or split' of the wavefunction amounts to is explained in the following section. We label this modified quantum mechanics, with its altered dynamics tied to the critical complexity of wavefunctions, Critical Complexity Quantum Mechanics (CCQM). The collapse or split is a non-linear mechanism akin to that invoked in standard quantum mechanics by the process of measurement. However, in CCQM the collapse is triggered by a well-defined physical mechanism independent of the presence of observation or measurement, thus providing a mechanism for solving the measurement problem.
## 6 Wavefunction collapse in CCQM
We propose that there exists a critical relative volume in configuration space such that, when the wavefunction reaches this volume, it will undergo a non-linear collapse, resulting in the relative volume reducing below the critical relative volume.
The existence of a critical volume is clearly motivated in the case where the wavefunction is in reality discrete valued. The relative volume of the wavefunction in configuration space determines the average wavefunction magnitude because the wavefunction remains normalized at all times. Thus, if a wavefunction represents the state of a large number of particles, some of them having probability distributions spreading out in position space, then, as the relative volume increases with time, the wavefunction magnitude would eventually fall everywhere below \(f_{0}\). Hence, the discrete wavefunction would become zero everywhere in configuration space, meaning that the system would effectively cease to exist!3
Footnote 3: If we were to assume that the entire wavefunction were to be renormalized every time a cell magnitude fell below \(f_{0}\), then rather than disappearing altogether, the wavefunction would vanish in localized regions instead, leading to the local fragmentation of the wavefunction. Local deviations from the predictions of quantum mechanics would occur, of a kind that we do not observe, motivating the type of collapse discussed in this section.
Minsky (1982, p. 537) discusses the problem of how to represent spherically propagating waves in position space in a discrete physics, and recognises that the tendency of the magnitude to everywhere fall below the threshold magnitude represents a problem. He does not, however, suggest a solution. It can be seen that a similar problem arises for discrete physics in the case of a wavefunction spreading in a \(3N\) dimensional configuration space.
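A simple estimate shows how this disappearance problem sets a natural scale. For a roughly uniform discrete wavefunction occupying \(v\) cells of volume \(a^{3N}\), normalization forces the typical magnitude to be about \(1/\sqrt{v\,a^{3N}}\), which drops below \(f_{0}\) once \(v\) exceeds roughly \(1/(f_{0}^{2}a^{3N})\). The values of \(f_{0}\) and \(a\) in the sketch below are assumed purely for illustration.

```python
# Uniform-magnitude estimate of the relative volume at which a discrete
# wavefunction would vanish; f0 and a are assumed illustrative values.
a = 1e-9      # cell side (m)
f0 = 1e2      # minimum magnitude (units m^-3/2 for a single particle, N = 1)
N = 1

cell_volume = a ** (3 * N)
# Normalization gives |psi| ~ 1/sqrt(v * cell_volume); this equals f0 at v_max.
v_max = 1.0 / (f0 ** 2 * cell_volume)
print(f"limiting relative volume for a uniform wavefunction: {v_max:.1e} cells")
print(f"corresponding volume in position space: {v_max * cell_volume:.1e} m^3")
```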
We suggest that a natural solution to the problem of spreading wavefunctions in discrete quantum mechanics is for the wavefunction to 'collapse' to some extent when it reaches some critical relative volume, instantaneously (or virtually instantaneously) reducing in volume. Alternatively, the wavefunction could split into two or more smaller wavefunctions; this possibility will be discussed in the next section. We have supposed that the wavefunction will be normalized to unity after every collapse, so the average magnitude per cell will rise when a collapse occurs. The wavefunction would then resume spreading until it again reached the critical volume, at which time it would collapse again, and so on. It thus appears that some degree of spontaneous localization of wavefunctions is a natural consequence of a discrete physics.
For this solution to be viable, the collapse must not cause deviations from standard quantum mechanics that are in conflict with the results of experiments that have already been carried out. Consider the wavefunction of a single particle in free space. In reality, single-particle wavefunctions may be rare, but suppose for the moment that weakly interacting particles are in reality represented for much of the time by wavefunctions of only one or a small number of particles. (This supposition is discussed in more detail in the next section.) We can use these wavefunctions to put some constraints on the size of the critical relative volume, \(v_{c}\). We know, from interference experiments, that a wavefunction of a free particle can spread over large volumes in position space before two parts of that wavefunction interact at a detector to produce an interference pattern.
If the critical volume \(V_{c}\) is not large enough then such wavefunctions would collapse before reaching the detector, thus destroying the interference effects and leading to conflict with experimental results. This puts a lower bound on the value of the critical volume and hence on the critical relative volume \(v_{c}\).
The volume that the single-particle wavefunction reduces to after collapse must also be sufficiently large otherwise the collapse would have observable consequences. Some of these consequences are considered in Sec. 12.
It should be noted that an assumption is being made that the limitation on the relative volume will apply to photons as well as particles with mass. While there are certain difficulties associated with assigning a wavefunction to a photon, it has been demonstrated that this can be done as long as the wavefunction is interpreted slightly differently than is usual in elementary quantum mechanics. Methods of defining a photon wavefunction are given by Bialynicki-Birula (1994, 2020) and Sipe (1995). According to these definitions, the probability interpretation and normalisation condition for the photon wavefunction differ slightly from the usual ones, but not in a way that significantly affects the arguments in this paper. The full treatment of photons is beyond the scope of this paper. Since photons are relativistic particles, they must be treated within an extension of CCQM to relativistic quantum theory.
## 7 Solution to the measurement problem
The type of collapse proposed in the previous section for a few-particle wavefunction is not itself a solution to the measurement problem. We have not yet discussed localizations to small regions, such as spots forming on photographic plates, that take place when measurements occur on many particle systems.
Consider an \(N\)-particle system in which each of the particles interacts relatively strongly with every other particle in the system. In our model, such a system would be represented by a single wavefunction. Suppose that the position probability distribution of each particle in the \(N\)-particle interacting system covers a volume of \(V_{s}\) in position space. Then the corresponding relative volume of the wavefunction in configuration space will be of the order \(v_{s}^{N}\), where \(v_{s}=V_{s}/a^{3}\), since the wavefunction will be spread over a distance of order \(V_{s}^{1/3}\) in each orthogonal direction in the \(3N\) dimensional space. Thus the relative volume will tend to increase exponentially with \(N\). Here the assumption has been made that the length of the cell-side, \(a\), is the same for each particle, which is a reasonable approximation for an order of magnitude calculation.
As a result, as \(N\) grows, the relative volume of the wavefunction will grow very rapidly, and will readily reach the critical relative volume \(v_{c}\). For an interacting, many particle system, a small spread in volume of the probability distribution for each particle in position space will contribute a large amount to the relative volume of the wavefunction in configuration space. Before the position probability distribution of each particle spreads to a macroscopic volume, the critical relative volume for the wavefunction will be reached, bringing about a collapse, thereby restricting each particle to a smaller volume in position space.
Thus, many-particle interacting systems will have particles within them that tend to remain localized in position space, consistent with our observations. Furthermore,
collapse of the wavefunction in this manner will prevent superpositions of large numbers of interacting particles spreading over macroscopic volumes in position space. In this way, the collapse mechanism will prevent the occurrence of macroscopically distinguishable superpositions. Instead, the wavefunction will collapse and give rise to a determinate outcome to the measurement. Conversely, a system of a single particle, or a small number of interacting particles, can spread over larger volumes in position space before the wavefunction representing the system reaches the critical relative volume and collapses, permitting the interference effects that we observe. Thus, we claim that the CCQM theory of wavefunction collapse explains why superposition effects sometimes occur, even over large distances, and yet we never see superposition effects for macroscopic objects. This provides a solution to the measurement problem.
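The exponential scaling that drives this argument is easy to exhibit numerically. The sketch below takes assumed (purely illustrative) values for the cells occupied per particle, \(v_{s}\), and for the critical relative volume \(v_{c}\), and shows how sharply the transition from freely spreading to collapsing systems sets in as \(N\) grows.

```python
import numpy as np

v_s = 1e3     # assumed cells occupied per particle in position space
v_c = 1e23    # assumed critical relative volume

# Relative volume in configuration space grows roughly as v_s**N.
for N in (1, 2, 5, 8, 10):
    v = v_s ** N
    status = "collapses" if v > v_c else "spreads freely"
    print(f"N = {N:2d}: v ~ {v:.0e}  -> {status}")

# Smallest particle number at which the critical relative volume is exceeded.
N_crit = int(np.ceil(np.log(v_c) / np.log(v_s)))
print("critical particle number ~", N_crit)
```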
## 8 A new non-linear dynamics
The modified quantum mechanics of CCQM must involve further changes to linear quantum mechanics other than simply introducing wavefunction collapses. The model must also involve alterations to the dynamics of linear quantum mechanics that have the effect that the number of particles represented by a single wavefunction remains finite and changes with time.
In this modified dynamics, a system described by a single, spreading wavefunction, when it comes into contact with another wavefunction, is postulated to evolve as follows. Suppose that the position probability distributions of the particles in the first wavefunction evolve, due to the spreading of the wavefunction, in such a way that they increasingly overlap with those of particles represented by the second wavefunction. Then there is a certain probability that these systems will combine into a single wavefunction, and this probability is related to the strength of interaction between the particles in those systems. This strength of interaction would be determined by the types of particles involved and, under most circumstances, by the expectation value of their separation in position space. It is natural to suppose that the greater the strength of interaction, the greater the probability of the systems combining. The simplest way for this to occur would be for the two wavefunctions \(\Psi_{1}\) and \(\Psi_{2}\) to be replaced by their symmetrized product wavefunction:
\[\Psi_{12}(\mathbf{x}_{1},\ldots,\mathbf{x}_{N},t)=S\Psi_{1}(\mathbf{x}_{1}, \ldots,\mathbf{x}_{j},t)\,\Psi_{2}(\mathbf{x}_{j+1},\ldots,\mathbf{x}_{N},t). \tag{9}\]
where \(S\) is the symmetrization operator which ensures that the combined wavefunction is symmetric with respect to the exchange of identical bosons between the wavefunctions, and antisymmetric with respect to the exchange of identical fermions.4 This wavefunction would continue to evolve and to combine with other wavefunctions until
the critical relative volume was reached, at which time the wavefunction would collapse.
When a wavefunction reaches the critical relative volume, we assume that there is some probability that the wavefunction splits into two or more separate wavefunctions. Each sub-system in the original wavefunction has some probability of becoming represented by a separate wavefunction, with the probability of becoming separated increasing the more weakly the particle or particles in the sub-system interact with the remainder of the system. If the wavefunction does not split then it must collapse by localizing in configuration space, so bringing the relative volume below the critical value.
## 9 Proposed collapse process
We have proposed that when the wavefunction of a system reaches a certain large relative volume, there is some probability that the wavefunction will break up into separate wavefunctions, but if this does not occur then the wavefunction must 'collapse' by localizing in configuration space, reducing in volume to some fraction, \(F,\,0<F<1\) of the original volume. For simplicity, we suggest a fraction of one-half. The actual fraction is not important as long as the volume of a 'free' wave remains large after the collapse. We will now make a tentative suggestion about the form this collapse might take by adapting a simple system of collapse from the GRW model. We label this model the 'jump' model of collapse. In this model, when a collapse occurs, the original wavefunction \(\Psi(x,t)\), of \(N\) particles, is multiplied by a 'jump factor' \(j(x^{\prime}-x)\) where \(x\equiv({\bf x}_{1},\ldots,{\bf x}_{N})\) and \(x^{\prime}\) is the centre of the collapse in configuration space. As in the GRW model, the jump factor \(j(x)\) is a Gaussian, but here a (symmetrized) Gaussian in configuration space rather than a Gaussian in position space; \(j(x)\) is a symmetrized product of single-particle Gaussians:
\[j({\bf x}_{1},\ldots,{\bf x}_{N})=S\,j({\bf x}_{1})\ldots j({\bf x}_{N}), \tag{10}\]
where
\[j({\bf x}_{i})=\left(\frac{\epsilon}{\pi}\right)^{3/4}\,\exp\left(-\frac{ \epsilon{\bf x}_{i}^{2}}{2}\right)\,, \tag{11}\]
and \(S\) refers to the symmetrization operator mentioned above. The value of \(\epsilon\) will vary in each particular case, determined by the value required to reduce the relative volume of the wavefunction by the fraction \(F\), the form of the wavefunction before collapse and the values of the constants \(v_{c},f_{0}\), and \(a_{k}\). We are not considering in detail experimental constraints in this paper, so we will not attempt to put rigorous bounds on the constants of the theory; however, some observational constraints on the theory will be discussed in Sec. 12. We assume the probability distribution of the collapse centre and renormalization of the wavefunction after collapse would be determined in the same way as in the GRW model, so as to closely preserve the statistics of the wavefunction. The important point to note is that the value of \(\epsilon\) will be much smaller for systems of a few particles than for systems of a large number of particles, as discussed in Sect. 7.
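The 'jump' collapse of Eqs. (10)–(11) can be illustrated on a toy two-particle, one-dimensional system. In the sketch below the cell size, threshold, Gaussian widths and the value of \(\epsilon\) are assumed for illustration, and the collapse centre is simply fixed at the origin rather than being drawn from the GRW-style probability distribution; the point is only to show the jump factor reducing the relative volume of a discrete wavefunction.

```python
import numpy as np

# Toy two-particle wavefunction in 1D, so configuration space is two-dimensional.
a = 1e-8        # assumed cell size (m)
f0 = 1e5        # assumed magnitude threshold (units m^-1 for two 1D particles)
x = np.arange(-2e-6, 2e-6, a)
X1, X2 = np.meshgrid(x, x, indexing="ij")

sigma = 3e-7
psi = np.exp(-(X1 ** 2 + X2 ** 2) / (4 * sigma ** 2))
psi = 0.5 * (psi + psi.T)                          # symmetrize (identical bosons)
psi /= np.sqrt(np.sum(psi ** 2) * a * a)
psi[psi < f0] = 0.0                                # cells below threshold vanish

relative_volume = lambda p: np.count_nonzero(p)
v_before = relative_volume(psi)

# Symmetrized product of single-particle Gaussians (Eqs. 10-11), centred at the
# origin; eps is an assumed illustrative value, not derived from v_c here.
eps = 1.0 / (2e-7) ** 2
j = np.exp(-eps * (X1 ** 2 + X2 ** 2) / 2)
j = 0.5 * (j + j.T)

psi_new = j * psi
psi_new /= np.sqrt(np.sum(psi_new ** 2) * a * a)
psi_new[psi_new < f0] = 0.0

print("relative volume before the jump:", v_before)
print("relative volume after the jump :", relative_volume(psi_new))
```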
Unlike the GRW model, the model of collapse in CCQM preserves the symmetry of the wavefunction. Although the wavefunction will be antisymmetric under exchange
of any two identical fermions within the wavefunction, there clearly will not be any symmetry under exchange of particles represented by different wavefunctions. This is an immediate consequence of allowing wavefunctions smaller than the wavefunction of the entire universe. However, as pointed out by French and Taylor (1978, p. 570) this will not give rise to observable consequences as long as the separate wavefunctions do not overlap in physical space; in other words, as long as there are no regions in physical space where both have non-zero magnitude. Conversely, if separate wavefunctions do overlap significantly, then there may be observable consequences of this lack of exchange symmetry.
## 10 Advantages of CCQM over GRW/CSL
One criticism that can be levelled at the GRW/CSL theories is that they are ad hoc--the only motivation for the modification of linear quantum mechanics is to produce a solution to the measurement problem. CCQM, on the other hand, is independently motivated, as we have argued above, on the basis of a fully discrete physics. CCQM also has the potential to satisfy the 'rule of simulation' proposed by Feynman (1982), who suggests as a heuristic for discovering physical laws that we only accept laws that could be simulated on finite digital computers. For a further discussion, see Leckey (1998, p. 59).
The GRW model produces collapse by multiplying the wavefunction by a Gaussian in the three spatial coordinates of one of the particles. Since the tails of a Gaussian extend to infinity, this only ever approximately localizes the particle. If we accept that a particle is localized within some region only if the wavefunction goes to zero outside that region, then GRW does not achieve such localization. Indeed, if we consider the pointer of a measuring instrument that is in a superposition of possible outcomes prior to a measurement, then a GRW collapse does not bring about a reduction to a proper mixture of states but merely the same pre-measurement superposition with all but one component dramatically reduced. This is the so-called tails problem; McQueen (2015) identifies four variants of this problem in dynamical collapse models. Given that physical collapse theories take a realist view towards the wavefunction, we should not ignore the dynamical evolution of the tails. Since CCQM is based on a discrete physics where wavefunction amplitudes become equal to zero once they fall below a certain threshold, particles can be localized within a finite region and the tails of the wavefunction can be eliminated.
A major point of difference between the theories relates to the treatment of photons. The original GRW theory proposes that 'all constituents of any system' (Ghirardi et al. 1986, p. 480) are subject to localizations at the same rate, but it has been suggested that the localization of photons to the small volume given by the 'hitting' Gaussian (Eq. (3)) would produce an effect on the spectrum of the cosmic microwave background radiation in conflict with observations (Squires 1992). Subsequent versions of the GRW/CSL theory altered this proposal by making the rate of collapse proportional to the rest mass of the particle (Ghirardi et al. 1995), so that photons are never subject to direct collapse events. We suggest that it is an advantage of CCQM that the critical volume criterion for collapse can be applied equally to all particles, including free
photons. In Sect. 9, the assumption has been made that the hitting Gaussian would be extremely large for few-particle systems such as photons or neutrinos, so altering the momentum distribution very little, but by a potentially measurable amount. It is a virtue of the theory that it leads to precise empirical predictions for astronomical photon observations that could potentially distinguish CCQM from not only standard quantum mechanics, but from other collapse theories. Pearle (2018) gives a modified relativistic version of CSL that gives rise to some collapses for photons, distinct from that of CCQM. The appendices give specific predictions for CCQM for the resolution of distant stars and for the spectrum of the cosmic microwave background radiation.
Another feature of CCQM is that, due to the fact that the relative volume grows exponentially with the number of particles, there is a rapid transition from the 'quantum realm', where the position probability distributions of particles are free to spread in position space, to the 'classical realm' where their capacity to spread is limited. In GRW/CSL on the other hand, this transition is linear in the number of particles in a system, because the rate of localization is linear in the number of particles. Due to this difference, the transition between the two realms could take place within the realm of microscopic systems in CCQM, whereas the transition must take place within the realm of macroscopic systems in GRW/CSL.
## 11 CCQM and wavefunction realism
As noted in the introduction, according to standard quantum mechanics, as long as there is a non-zero strength of interaction between them, particles must be represented by a single wavefunction, whose arguments are the positions of all the particles. It would seem to follow that there can only be a single wavefunction for the whole universe, and there is nothing in GRW/CSL that alters this situation. This is to be contrasted with the case of CCQM, which represents the universe by many wavefunctions, each of which represents a limited number of particles.
According to the standard interpretation of a wavefunction of \(N\) particles, the quantity \(|\Psi({\bf x}_{1},\ldots,{\bf x}_{N},t)|^{2}\) is simply a probability density: \(|\Psi({\bf x}_{1},\ldots,{\bf x}_{N},t)|^{2}\,d{\bf x}_{1}\ldots d{\bf x}_{N}\) is the probability of finding, on simultaneous measurement of the positions of each of the \(N\) particles at time \(t\), particle 1 within volume \(d{\bf x}_{1}\) of \({\bf x}_{1}\), particle 2 within \(d{\bf x}_{2}\) of \({\bf x}_{2}\), and so on. This interpretation is adequate if one is only interested in predicting the results of experiments, but can the wavefunction be given a more direct realist interpretation? CCQM allows for the wavefunction itself to be interpreted realistically as a complex valued wave (or field) in a \(3N\) dimensional configuration space5.
Footnote 5: Taking spin into account, there are \(2N\) complex magnitudes at each point in configuration space: a vector field in configuration space.
This realist interpretation of the wavefunction is consistent with the interpretation suggested by Bell (1990) for the wavefunction in the GRW model: that the modifications to quantum mechanics introduced by the GRW model remove the concept of measurement as a primitive and allow the magnitude of the (continuous) wavefunction squared to be interpreted as the 'density of stuff' of which the world is made; and this is a density in a \(3N\) dimensional configuration space.
Another quantity of interest is the distribution of each particle \(k\) in three-dimensional position space, given that this particle is represented in an \(N\)-particle wavefunction. According to the standard probabilistic interpretation of the wavefunction, the quantity
\[g({\bf x}_{k})=\int|\Psi({\bf x}_{1},\ldots,{\bf x}_{N},t)|^{2}d{\bf x}_{1}\ldots d {\bf x}_{k-1}d{\bf x}_{k+1}\ldots d{\bf x}_{N}\,, \tag{12}\]
is the probability density of particle \(k\) in position space: \(g({\bf x}_{k})d{\bf x}_{k}\) is the probability of finding, upon measurement of position, particle \(k\) within volume \(d{\bf x}_{k}\) of \({\bf x}_{k}\) in position space. We suggest that as well as having this probabilistic interpretation, in CCQM the quantity \(g({\bf x}_{k})\) can be interpreted realistically as the density of the particle \(k\) in position space: it is a projection of the wavefunction from configuration space into position space. In the case of a discrete wavefunction, the volume in position space of a single particle \(k\) can be defined as the volume for which the quantity \(g({\bf x}_{k})\) is non-zero. (In a discrete physics, integrals are understood to be sums over the relevant range of the parameter(s).)
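In a discrete physics the projection of Eq. (12) is simply a sum over the cells of the remaining coordinates. The toy two-particle, one-dimensional sketch below (cell size, threshold and widths are assumed values) computes \(g(x_{1})\) for a correlated discrete wavefunction and reports the length in position space over which particle 1 has non-zero density.

```python
import numpy as np

# Toy discrete two-particle wavefunction in 1D with assumed cell size and threshold.
a = 1e-8
f0 = 1e5
x = np.arange(-2e-6, 2e-6, a)
X1, X2 = np.meshgrid(x, x, indexing="ij")

# A correlated (entangled) Gaussian pair.
sigma, corr = 3e-7, 1e-7
psi = np.exp(-(X1 + X2) ** 2 / (4 * sigma ** 2) - (X1 - X2) ** 2 / (4 * corr ** 2))
psi /= np.sqrt(np.sum(psi ** 2) * a * a)
psi[psi < f0] = 0.0

# Discrete analogue of Eq. (12): sum |psi|^2 over the cells of particle 2.
g = np.sum(psi ** 2, axis=1) * a

extent = np.count_nonzero(g) * a
print(f"particle-1 density is non-zero over {extent:.2e} m of position space")
print("integral of g over x1 ~", np.sum(g) * a)
```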
As discussed in Sect. 7, macroscopic systems of strongly interacting particles are described by separate wavefunctions, each confined to a localized region, so that \(g({\bf x}_{k})\) is well localized in position space for the particles in these systems. This corresponds to the macroscopic world as we are aware of it, thus providing an account of the world compatible with our experience.
Ghirardi (1996) and Ghirardi et al. (1995) reject the density of stuff interpretation of Bell (1990), and instead adopt an interpretation for CSL that involves the average mass density in three-dimensional position space. It is easy to see why the interpretation of \(g({\bf x}_{k})\) as the density of a particle in position space will not give a picture of reality compatible with experience in the case of CSL for, in that model, if we presume that there is one wavefunction for the whole universe and that this wavefunction is appropriately symmetrized under the exchange of identical particles, then every identical particle in the universe must have exactly the same position probability distribution. In other words, the distribution \(g({\bf x}_{k})\) of each identical particle must extend over the entire universe. This is not just because the wavefunction is continuous in the GRW/CSL model and so can never fall to zero magnitude. Even if the wavefunction were discrete, the distribution \(g({\bf x}_{k})\) of each particle of the same type as particle \(k\) would extend over the entire universe (other than regions in which there are no particles of this type) due to the presumed symmetry character of the wavefunction. This is why the supporters of GRW/CSL must adopt a particle-number density or mass density interpretation of the wavefunction in order to obtain a picture of reality compatible with our experience. In CCQM, identical particles only symmetrize within a single wavefunction which represents a finite number of particles extended over a limited volume of space. Hence, the position probability distribution will be confined within this volume, so we are able to adopt the density of stuff interpretation of Bell. It is an advantage of CCQM that it allows for a more straightforward realist interpretation of the wavefunction than is possible in the GRW/CSL models.
## 12 Observational constraints on the parameters
A number of new physical quantities are introduced in CCQM. Discretization of the wavefunction entails the introduction of a minimum magnitude, \(f_{0}\), and a minimum phase, \(\theta_{0}\). In CCQM, a cell size, \(a\), that may be a function of physical parameters such as the de Broglie wavelength of the system, and a critical relative volume, \(v_{c}\), are introduced to govern the collapse or splitting of the wavefunction. An additional parameter is required to determine the fraction by which the wavefunction reduces in size when a collapse occurs.
The CSL and GRW models also contain new physical parameters whose values were initially chosen to provide a solution to the measurement problem, that is, to avoid collapse in microscopic systems and produce collapse in macroscopic ones.
Recent experimental work has delineated the parameter space of these models. Constraints on the parameter space of the CSL model have been obtained from observations of ultracold microcantilevers (Vinante et al., 2017, 2020); gravitational waves (Carlesso et al., 2016); matter-wave interference (Arndt and Hornberger, 2014); and neutron stars (Tilloy and Stacey, 2019). Spontaneous collapse models need to accommodate quantum teleportation over large distances--the record set in 2017 is 1400 km (Ren et al., 2017)--and entanglement of 'macroscopic' objects of up to \(10^{12}\) atoms (Kotler et al., 2021; Lepinay et al., 2021). In addition, spontaneous collapse gives rise to energy non-conservation (Pearle, 2000), which is in principle observable as excess thermal noise (Adler and Vinante, 2018; Bahrami, 2018; Mishra et al., 2018). A good summary of the parameter constraints is provided by Toros and Bassi (2018). Although these constraints are directed at particular models of wavefunction collapse, they also provide limits on the parameters of CCQM.
Photons do not undergo collapse directly in the more recent GRW/CSL models but only indirectly through entanglement with matter that undergoes collapse. However, in CCQM photons can collapse so additional constraints on the parameters will be provided by observation of distant stars since the photons from such stars will undergo (multiple) collapse(s) on their path to Earth. A collapse of a photon wavefunction to a small volume causes a large spread in the distribution of its momentum at the point the collapse occurs, in accordance with Heisenberg's Uncertainty Principle. Consequently, if the collapse volume were too small, it would lead to a measurable broadening of the image of the star.
In CCQM, a small effect on the cosmic microwave background would also be expected due to the collapse of photon wavefunctions since the era of decoupling in the early universe. Constraints on the collapse parameters due to the broadening of star images and the effect on the cosmic microwave background are given in appendices A and B.
## 13 Conclusion
The outlines of a new modified form of quantum mechanics, Critical Complexity Quantum Mechanics, have been presented. The theory embraces significant departures from orthodox quantum mechanics. In orthodox quantum mechanics, we often write down
wavefunctions representing a single or finite number of particles but these are approximations. The assumption is that the interaction of the system with the environment is small enough that the wavefunction of the system can be treated as separate from the environment and the rest of the Universe. In the model presented here, wavefunctions strictly represent finite numbers of particles, supporting a realist interpretation of the wavefunction. In our model, there is a critical relative volume which provides for an upper limit to the volume of wavefunctions in configuration space. Rules for the merging and splitting of wavefunctions are introduced that do not exist in orthodox quantum mechanics. The existence of a critical volume follows naturally from a fully discrete physics, where all physical quantities take only discrete values.
The theory provides a solution to the measurement problem, and one that does not suffer from the issues with tails that trouble other collapse models. The solution relies on the fact that where there are many particles that strongly interact there will be many particles per wavefunction. The relative volume grows exponentially with the number of particles represented, quickly reaching the critical volume, so collapse will happen more readily for such systems.
Some advantages of the new model of collapse are that it does not suffer from the tails problem or the problems with symmetrization that afflict other collapse models. We suggest that CCQM provides an intuitively satisfactory image of quantum reality, compatible with Bell's 'density of stuff' interpretation of the wavefunction.
A fuller development of the theory, including the dynamics of merging, splitting and collapsing of wavefunctions, is required in order to provide detailed empirical predictions.
A central aim of the scientific endeavour is to provide an accurate representation of nature. Given that goal, it is well worth the price of introducing new laws governing the merging, splitting and collapsing of wavefunctions in order to obtain a satisfactory realist image of the quantum realm. These new laws generate testable predictions, so further articulation and testing of those predictions is worthwhile. The appendices outline two possible novel observations.
## Appendix A -- Light from a distant star
In order to put some numerical constraints on the theory, consider two situations where a one particle wavefunction may arise. First, suppose that a photon emitted from a star and travelling in interstellar space can be represented by a single-particle wavefunction. According to CCQM, this photon will undergo a number of spontaneous collapses en route to our detectors. On each collapse to a volume of width \(\Delta x\), the photon will get a small 'kick' of transverse momentum due to Heisenberg's Uncertainty Principle:
\[\Delta p=\frac{h}{4\pi\Delta x}. \tag{13}\]
Following Messiah (1967, pp. 158-159) we assume that after emission from an excited atom the photon wavefunction occupies a spherical shell of constant thickness \(t\).6
Messiah gives an approximate thickness of \(c\tau\), where \(\tau\) is the lifetime of the atomic excited state that emitted the photon. We could also consider \(t\) to be the coherence length of the photon. For the moment, the exact value of \(t\) is not important; we consider it further in Appendix B. We will assume that each collapse reduces the volume of the wavefunction by a factor of 2. The first collapse changes the volume occupied by the wavefunction from a spherical shell to a hemispherical one. Subsequent collapses reduce the solid angle subtended by the shell by a factor of 2. After a number of collapses, the volume occupied by the wavefunction will be approximately a circular disk of thickness \(t\). The first collapse occurs when the photon reaches a distance \(r_{c}\) from the star, when the wavefunction occupies a volume equal to the critical volume:
\[V_{c} =4\pi\,r_{c}^{2}\,t \tag{14}\] \[\text{or}\;\;r_{c} =\sqrt{\frac{V_{c}}{4\pi t}}\,.\]
Footnote 6: The \(\tau\) is the time-dependent part of the wavefunction.
The next collapse occurs when the volume occupied by the wavefunction again reaches \(V_{c}\), that is, when the shell reaches a distance from the star of \(\sqrt{2}\,r_{c}\) and so on. If the distance from the star to the observer is \(d\) and the number of wavefunction collapses in the photon's journey to our instruments is \(n\), then
\[d =(\sqrt{2})^{n-1}\,r_{c}\,, \tag{15}\] \[\Rightarrow r_{c} =d\,2^{(1-n)/2}\,.\]
The solid angle subtended by a spherical surface at its centre is given by
\[\Omega=\int\int\sin\theta^{\prime}d\theta^{\prime}d\phi. \tag{16}\]
For a cone with apex angle \(2\theta\) this becomes
\[\Omega_{\theta} =\int_{0}^{2\pi}\int_{0}^{\theta}\sin\theta^{\prime}d\theta^{ \prime}d\phi \tag{17}\] \[=2\pi(1-\cos\theta)\] \[=4\pi\sin^{2}\left(\frac{\theta}{2}\right).\]
For large distances from the star, the half-angle subtended by the photon wavefunction shell is small, so we can make the approximation \(\cos(\theta/2)\cong 1\) implying \(\sin\theta\cong 2\sin(\theta/2)\). In addition, note that each time the solid angle halves, \(\sin(\theta/2)\) goes down by a factor of \(\sqrt{2}\) (starting from \(\sin(\theta/2)=1/2\)) while the radius of the shell goes up by a factor \(\sqrt{2}\). The result is that the transverse spread of the photon
wavefunction, \(\Delta x=r\sin\theta\), becomes constant. That is, for the \(n\)th collapse
\[\Delta x=(\sqrt{2})^{n-1}\,r_{c}\sin\theta\cong(\sqrt{2})^{n-1}\left(2\left( \frac{1}{\sqrt{2}}\right)^{n+1}\right)r_{c}=r_{c} \tag{18}\]
\[\Rightarrow\Delta p\cong\frac{h}{4\pi r_{c}}.\]
Hence each collapse, apart from the first one which changes the volume occupied by the wavefunction from a spherical shell to a hemispherical one, confines the photon in the transverse direction to an approximately circular shell of radius \(r_{c}\) and thickness \(t\), and each collapse contributes the same amount to the transverse momentum of the photon through Heisenberg's Uncertainty Principle. For the moment, since we are only after an approximate figure, we shall ignore the fact that the earlier collapses contribute a slightly lower momentum kick since the wavefunction shape is more curved and so \(\Delta x\) is smaller. A more exact treatment that takes into account the different \(\Delta p\) in the early collapses produces no significant divergence from the following results.
The angular deviation of the photon from a straight line path can be written as
\[\Delta\phi=\frac{\Delta p_{total}}{p}=\frac{\lambda\Delta p_{total}}{h}. \tag{19}\]
Since there are \(n-1\) collapses that contribute to an increase in the transverse momentum of the photon, using (18) gives
\[\Delta\phi\cong\frac{(n-1)\lambda}{4\pi r_{c}}\,, \tag{20}\]
and substituting for \(r_{c}\) from (15):
\[\Delta\phi \cong\frac{(n-1)\,\lambda 2^{\frac{n-5}{2}}}{\pi d} \tag{21}\] \[\Rightarrow\log(n-1)+\frac{n-5}{2}\log 2 =\log\frac{\pi d\Delta\phi}{\lambda}\,.\]
For starlight, the best resolution so far obtained is 2 milli-arc-seconds or \(9.7\times 10^{-9}\) radians from the Very Large Telescope Interferometer for the red giant star \(\pi^{1}\) Gruis which is at a distance of 530 light-years (Paladini et al., 2018). Since this is a red giant, assume \(\lambda=6\times 10^{-7}\)m making the right hand side of (21) equal to \(\log\left(2.5\times 10^{17}\right)\). This resolution sets an upper limit for the number of collapses, \(n\). If there are more collapses, the additional transverse momentum the photon obtains will cause an excessive deviation of the starlight and consequent blurring of the details observed. A numerical solution to (21) gives
\[n\leq 107 \tag{22}\]
Using the maximum value of \(n\) in (15) gives a minimum value for \(r_{c}\):
\[r_{c}\geq\frac{d}{2^{53}}=544\,\mathrm{m}. \tag{23}\]
The value of \(r_{c}\) is both the distance from the star to the point of the first wavefunction collapse and the radius of the (approximate) circular disk in which the photon wavefunction is confined. The constraint on the radius can be converted to a constraint on the critical volume, \(V_{c}\), once a value for the thickness of the photon wavefunction has been decided (see Equation (34a)).
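The arithmetic of this appendix is easily reproduced. The sketch below solves Eq. (21) for the maximum number of collapses \(n\), then evaluates \(r_{c}\) from Eq. (15) and the critical-volume bound of Eq. (34a), using the observational figures quoted above; the light-year conversion and rounding can shift the final numbers by a few per cent relative to those in the text.

```python
import numpy as np

d = 530 * 9.461e15     # distance to pi^1 Gruis (m)
dphi = 9.7e-9          # best achieved angular resolution (rad)
lam = 6e-7             # assumed red-giant wavelength (m)
t_tilde = 3e7          # thickness-to-wavelength ratio adopted in Appendix B

# Right-hand side of Eq. (21), then solve for n by bisection.
rhs = np.log10(np.pi * d * dphi / lam)
f = lambda n: np.log10(n - 1) + (n - 5) / 2 * np.log10(2) - rhs
lo, hi = 2.0, 500.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
n_max = int(np.floor(0.5 * (lo + hi)))
print("maximum number of collapses n ~", n_max)

# Eq. (15): radius of the first collapse; Eq. (34a): critical-volume bound.
r_c = d * 2 ** ((1 - n_max) / 2)
V_c = 4 * np.pi * r_c ** 2 * t_tilde * lam
print(f"r_c >= {r_c:.0f} m")
print(f"V_c >= {V_c:.1e} m^3")
```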
## Appendix B -- Cosmic microwave background
Apart from small inhomogeneities of around one part in 100,000 (Smoot et al. 1992), the cosmic microwave background (CMB) is an almost perfect blackbody spectrum with a temperature of \(T=2.725\)K (Fixsen et al. 2011). The frequency spectrum follows Planck's Law, which can be written in terms of a volume energy density per unit frequency interval (Griffiths 2005, p. 244):
\[U_{E}(\nu)=\frac{8\pi h\nu^{3}}{c^{3}}\,\frac{1}{\exp(h\nu/k_{B}T)-1}\,. \tag{24}\]
Dividing by the energy per photon, \(h\nu\) gives the photon number density per unit frequency interval
\[N(\nu)=\frac{8\pi\nu^{2}}{c^{3}}\,\frac{1}{\exp(u)-1}\,, \tag{25}\]
where \(u=h\nu/k_{B}T\) is a dimensionless variable with \(u\cong 2.82\) representing the peak of the volume energy density per unit frequency curve. For the exponential part of (25), consider, for small \(\varepsilon>0\):
\[\exp(x(1+\varepsilon)) =1+x(1+\varepsilon)+\frac{x^{2}(1+\varepsilon)^{2}}{2!}+\ldots \tag{26}\] \[=\left(1+x+\frac{x^{2}}{2!}+\ldots\right)+\varepsilon\,x\left(1+ x+\frac{x^{2}}{2!}+\ldots\right)+O(\varepsilon^{2})\] \[\approx e^{x}(1+\varepsilon\,x)\,,\]
where the last line is obtained by dropping terms of \(O(\varepsilon^{2})\). Hence,
\[\frac{1}{\exp(x(1+\varepsilon))-1} \approx\frac{1}{(e^{x}-1)+\varepsilon\,x\,e^{x}} \tag{27}\] \[\approx\frac{1}{e^{x}-1}\left(1-\frac{\varepsilon\,x\,e^{x}}{e^{ x}-1}\right)\,,\]
where we have used \((1+d)^{-1}\approx 1-d\) for small \(d\). Hence, following Pearle (2018) Equation (3.34), we can write the photon number density (25) for a slightly altered temperature \(T^{\prime}=T(1-\delta)\), where \(\delta<<1\), as:
\[\hat{N}(\nu) \approx\frac{8\pi\nu^{2}}{c^{3}}\,\frac{1}{\exp(u(1+\delta))-1} \tag{28}\] \[\approx N(\nu)\left[1-u\,\delta\frac{\exp(u)}{\exp(u)-1}\right]\,.\]
The collapse of the photon wavefunction will introduce a slight increase in photon energy, and hence frequency, due to Heisenberg's Uncertainty relation, as detailed in the previous section:
\[\Delta p =\frac{h}{4\pi r_{c}}\,, \tag{29}\] \[\Rightarrow\Delta\nu =\frac{c\Delta p}{h}=\frac{c}{4\pi r_{c}}\,.\]
Replacing \(\nu\) by \(\nu+\Delta\nu\) and dropping terms of \(O(\Delta\nu^{2})\) the photon number density becomes
\[\tilde{N}(\nu) \approx\frac{8\pi(\nu^{2}+2\nu\Delta\nu)}{c^{3}}\,\frac{1}{\exp (u(1+\frac{\Delta\nu}{\nu}))-1} \tag{30}\] \[\approx N(\nu)\left(1+\frac{2\Delta\nu}{\nu}\right)\left[1-u\frac{ \Delta\nu}{\nu}\,\frac{\exp(u)}{\exp(u)-1}\right]\] \[\approx N(\nu)\left[1-\frac{\Delta\nu}{\nu}\left(1-2\frac{\exp(u)-1}{u \,\exp(u)}\right)u\,\frac{\exp(u)}{\exp(u)-1}\right]\,.\]
By comparison with (28), the term
\[\frac{\Delta\nu}{\nu}\left(1-2\frac{\exp(u)-1}{u\,\exp(u)}\right)\,, \tag{31}\]
acts like a fluctuation (drop) in temperature. Substituting for \(\Delta\nu\) from (29) and writing the result in terms of the wavelength, we can write the temperature fluctuation as
\[\delta=\frac{\lambda}{4\pi r_{c}}\left(1-2\frac{\exp(u)-1}{u\,\exp(u)}\right)\,. \tag{32}\]
Now consider two models: (A) where the discrete cell size, \(a\), is fixed, or (B) where it is proportional to the wavelength of the particle, that is, \(a=\tilde{a}\lambda\), where \(\tilde{a}\) is a constant of CCQM. The CMB spectrum is known to some precision for approximately a factor of 10 on either side of the peak of the volume energy density per unit frequency (Mather et al. 1999), so fix the relative temperature fluctuation, \(\delta\), to be the maximum tolerable at a wavelength ten times that of the peak (that is, at \(\lambda\cong 19\,\mathrm{mm}\), or \(u\cong 0.282\)):
\[\delta <10^{-5} \tag{33}\] \[\Rightarrow r_{c} >\frac{\lambda}{4\pi\,\delta}\left(1-2\frac{\exp(u)-1}{u\,\exp(u )}\right)=112\mathrm{m}\,.\]
This is smaller than the value obtained in Appendix A. However, due to the longer photon wavelength and consequent greater value of \(t\), it leads to a stronger constraint on the critical volume. As a first approximation, we will take the thickness of the wavefunction to be proportional to the wavelength of the photon: \(t=\tilde{t}\lambda\). To fix a value of \(\tilde{t}\), consider a typical visible light excited state with a lifetime of \(\tau=10^{-8}\) sec. The full width at half maximum of the Gaussian wavepacket will be approximately
3 m. The width over which the photon wavefunction falls to the threshold magnitude permitted in a discrete physics will be several times this value. We will estimate the width of a typical visible light photon wavefunction (\(\lambda=500\) nm) to be five times the full width at half maximum, that is, \(t=15\) m, giving a value of \(\tilde{t}=3\times 10^{7}\). The limits on the radius of the photon wavefunction shell obtained in (23) and (33) can now be converted to limits on the critical volume:
\[V_{c} >4\pi\,r_{c}^{2}\,\tilde{t}\,\lambda=6.7\times 10^{7}\text{m}^{3} (\text{starlight})\,, \tag{34a}\] \[V_{c} >9.0\times 10^{10}\text{m}^{3} (\text{CMB})\,, \tag{34b}\]
for the deviation of starlight and the CMB, respectively. The constraint on \(V_{c}\) from the observation of the cosmic microwave background is three orders of magnitude stronger than that obtained from the deviation of starlight from a distant star.
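A parallel check for the CMB side of the constraint is sketched below: it evaluates the bound of Eq. (33) at \(u\cong 0.282\) and converts it into the critical-volume bound of Eq. (34b), using the thickness ratio \(\tilde{t}=3\times 10^{7}\) estimated above.

```python
import numpy as np

u = 0.282           # one-tenth of the peak frequency of the energy density
lam = 0.019         # corresponding wavelength (m)
delta_max = 1e-5    # maximum tolerable relative temperature fluctuation
t_tilde = 3e7       # assumed thickness-to-wavelength ratio

# Spectral factor of Eqs. (32)-(33); its negative sign signals a temperature drop.
spectral = 1 - 2 * (np.exp(u) - 1) / (u * np.exp(u))

# Eq. (33): lower bound on the collapse radius (using the magnitude of delta).
r_c = lam / (4 * np.pi * delta_max) * abs(spectral)
print(f"r_c >= {r_c:.0f} m")

# Eq. (34b): corresponding lower bound on the critical volume.
V_c = 4 * np.pi * r_c ** 2 * t_tilde * lam
print(f"V_c >= {V_c:.1e} m^3")
```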
The constraint on the critical volume is comparable with the volume occupied by a two-photon entangled pair used in large scale quantum teleportation experiments. In the experiment of Ren et al. (2017), an entangled pair of photons is generated on a satellite and distributed to two ground stations. With a beam divergence of 10 \(\mu\)rad and a maximum separation of the ground stations of 2400 km, the volume occupied by the two-photon wavefunction is approximately \(1.4\times 10^{8}\) m\({}^{3}\).
Now, consider model (B). Since the wavefunction is of a single particle we can relate the critical volume in position space, \(V_{c}\) to the critical relative volume in configuration space, \(v_{c}\) by
\[V_{c}=a^{3}v_{c}=\lambda^{3}\tilde{a}^{3}v_{c}\,. \tag{35}\]
Using (14), we can write the temperature fluctuation in this model as
\[\delta_{var} =\frac{\lambda}{4\pi}\sqrt{\frac{4\pi\,\tilde{t}\lambda}{\lambda^{3}\,\tilde{a}^{3}v_{c}}}\left(1-2\frac{\exp(u)-1}{u\,\exp(u)}\right) \tag{36}\] \[=k\left(1-2\frac{\exp(u)-1}{u\,\exp(u)}\right)\,,\]
where
\[k=\sqrt{\frac{\tilde{t}}{4\,\pi\,\tilde{a}^{3}v_{c}}} \tag{37}\]
is a constant dependent only on the parameters of CCQM. Using the value of \(V_{c}\) from (34b) and a wavelength of \(\lambda=19\)mm,
\[\tilde{a}^{3}v_{c}=\frac{V_{c}}{\lambda^{3}}=1.3\times 10^{16}\,, \tag{38}\]
and hence \(k=1.3\times 10^{-5}\). The frequency of the CMB radiation and its temperature both increase in proportion to \(1+z\), so the parameter \(u\), which depends on their ratio, is independent of the cosmological epoch. Hence, the temperature fluctuation induced by photon wavefunction collapse in this model is also independent of epoch.
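A minimal numerical sketch, taking the CMB bound of Eq. (34b) as input, propagates these numbers through Eqs. (36)-(38) and reproduces the quoted values of \(\tilde{a}^{3}v_{c}\) and \(k\):

```python
import numpy as np

V_c     = 9.0e10   # m^3, CMB bound of Eq. (34b)
lam     = 19e-3    # m
t_tilde = 3e7
u       = 0.282

a3_vc = V_c / lam**3                                              # Eq. (38)
k     = np.sqrt(t_tilde / (4.0 * np.pi * a3_vc))                  # Eq. (37)
delta = k * (1.0 - 2.0 * (np.exp(u) - 1.0) / (u * np.exp(u)))     # Eq. (36)

print(f"a~^3 v_c = {a3_vc:.2e}")   # ~1.3e16
print(f"k        = {k:.2e}")       # ~1.3e-5
print(f"delta    = {delta:.2e}")   # magnitude ~1e-5, at the tolerated limit
```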
The divergence of the spectrum from the Planck black body law differs between the variable and fixed cell size models, as indicated in Figure 1. With a fixed cell size, there is a divergence at low frequencies (long wavelengths), while the spectrum for the variable cell size case shows a systematic bias, the CMB temperature being higher at frequencies below that of the peak energy density and lower at frequencies above it. The spectrum for the fixed cell size case and that for standard quantum mechanics become increasingly close as the CMB temperature rises. The differences between the predictions of these models become unimportant at earlier epochs, where we expect most collapses to have occurred. However, for the variable cell size model, the divergence of the CMB profile from that predicted by standard quantum mechanics is independent of epoch.

Figure 1: Photon number density per unit frequency versus frequency for the Cosmic Microwave Background spectrum at the present epoch for standard quantum mechanics (blue) and CCQM with a fixed cell size (red) or variable cell size (green). The differences between standard quantum mechanics and CCQM have been magnified by a factor of 5000 in order to be visible in the figure. Even at this magnification, above the peak in the photon number density (\(10^{11}\)Hz), the difference between standard quantum mechanics and CCQM with a fixed cell size is not visible.
The collapse of the photon wavefunction and the consequent increase in photon energy moves photons from one frequency interval to a higher interval. In the variable cell size case, below the peak number density (\(u\cong 2.82\)), there is an increase in the photon number density, while the situation is reversed above the peak. In the fixed cell size case, the change in the photon number density increases with decreasing frequency, leading to an increasing discrepancy with the Planck black body spectrum of standard quantum mechanics in the low frequency range. Above the peak photon number density, the curves are indistinguishable.
The above analysis applies to a single collapse. However, since the decoupling epoch around 370,000 years after the Big Bang there would have been many collapses, so it is necessary to sum the change in the photon number spectrum over all such collapses. Using (14) to substitute for \(r_{c}\) in (33) gives
\[\delta=\frac{\lambda^{3/2}}{4\,\pi}\sqrt{\frac{4\pi\,\tilde{t}}{V_{c}}}\left(1-2\frac{\exp(u)-1}{u\exp(u)}\right) \tag{39}\]
For earlier epochs, the wavelengths of the CMB are reduced by a factor of \(z+1\), where a redshift of \(z=0\) represents the current epoch. Thus, for the fixed cell size case (39) can be written as a function of \(z\) as
\[\delta(z)=-3.0\,\sqrt{\frac{1}{V_{c}(z+1)^{3}}}\,, \tag{40}\]
for a frequency of one-tenth that of the peak energy density per unit frequency (\(u\cong 0.282\)). Decoupling occurred at a redshift of approximately \(z=1100\). The total deviation from the Planck black body spectrum will be the sum of \(\delta(z)\) over all the redshifts \(z\) at which a collapse occurred. Since \(\delta(z)\) scales as \((z+1)^{-1.5}\), the contributions from earlier times become increasingly small and are negligible close to the decoupling era. Limiting \(\sum\delta(z)\) to \(10^{-5}\), the constraint on the critical volume becomes
\[V_{c}>9.0\times 10^{10}\sum_{z}(z+1)^{-3}\,. \tag{41}\]
Most of the collapses will occur in the early universe, and these will make only a small contribution to the sum in Eq. (41), so this constraint will be weaker than the one obtained in the variable cell size model, where the deviation from the Planck spectrum generated by a collapse is independent of the CMB temperature. Although the fixed cell size model gives rise to smaller deviations from the Planck spectrum, it presents other problems. In particular, the cell size must be much smaller than the shortest particle wavelength that we have observed, in order for noticeable effects not to be apparent in other experiments.
Although the calculations presented above are only approximate, they predict patterns of deviation from the Planck spectrum that could be observed in future experiments.
|
2303.04921 | The resolution to the problem of consistent large transverse momentum in
TMDs | Parametrizing TMD parton densities and fragmentation functions in ways that
consistently match their large transverse momentum behavior in standard
collinear factorization has remained notoriously difficult. We show how the
problem is solved in a recently introduced set of steps for combining
perturbative and nonperturbative transverse momentum in TMD factorization.
Called a ``bottom-up'' approach in a previous article, here we call it a
``hadron structure oriented'' (HSO) approach to emphasize its focus on
preserving a connection to the TMD parton model interpretation. We show that
the associated consistency constraints improve considerably the agreement
between parametrizations of TMD functions and their large-$k_T$ behavior, as
calculated in collinear factorization. The procedure discussed herein will be
important for guiding future extractions of TMD parton densities and
fragmentation functions and for testing TMD factorization and universality. We
illustrate the procedure with an application to semi-inclusive deep inelastic
scattering (SIDIS) structure functions at an input scale $Q_0$, and we show
that there is improved consistency between different methods of calculating at
moderate transverse momentum. We end with a discussion of plans for future
phenomenological applications. | J. O. Gonzalez-Hernandez, T. Rainaldi, T. C. Rogers | 2023-03-08T22:30:23Z | http://arxiv.org/abs/2303.04921v1 | # The resolution to the problem of consistent large transverse momentum in TMDs
###### Abstract
Parametrizing TMD parton densities and fragmentation functions in ways that consistently match their large transverse momentum behavior in standard collinear factorization has remained notoriously difficult. We show how the problem is solved in a recently introduced set of steps for combining perturbative and nonperturbative transverse momentum in TMD factorization. Called a "bottom-up" approach in a previous article, here we call it a "hadron structure oriented" (HSO) approach to emphasize its focus on preserving a connection to the TMD parton model interpretation. We show that the associated consistency constraints improve considerably the agreement between parametrizations of TMD functions and their large-\(k_{T}\) behavior, as calculated in collinear factorization. The procedure discussed herein will be important for guiding future extractions of TMD parton densities and fragmentation functions and for testing TMD factorization and universality. We illustrate the procedure with an application to semi-inclusive deep inelastic scattering (SIDIS) structure functions at an input scale \(Q_{0}\), and we show that there is improved consistency between different methods of calculating at moderate transverse momentum. We end with a discussion of plans for future phenomenological applications.
+
Footnote †: preprint: JLAB-THY-23-3771
## I Introduction
Transverse momentum dependent (TMD) parton distribution functions (pdfs) and/or fragmentation functions (ffs), together with the TMD factorization theorems [1; 2; 3], have acquired a wide range of applications in hadronic, nuclear and high energy phenomenology [4; 5] over the past few decades. They are useful both for studying the role of intrinsic or nonperturbative effects in hadrons [6; 7] and for predicting transverse momentum distributions in cross sections after evolution to high energies. In the former case, they play an important role in testing, and thus refining, the partonic constituent interpretation of hadron structure. However, separating truly nonperturbative or intrinsic transverse momentum effects from the perturbatively generated transverse momentum that is calculable with collinear factorization has remained a difficult challenge. It is a problem that limits the predictive power of TMD factorization and creates ambiguity about the interpretation of phenomenologically extracted nonperturbative objects. This is especially the case at lower invariant energies, near the lower boundary of what may be considered an appropriate hard scale.
To see the issues clearly, recall that one normally categorizes contributions to a TMD cross section, such as semi-inclusive deep inelastic scattering (SIDIS), according to its transverse momentum regions. On one hand, the small transverse momentum regions are associated with nonperturbative effects in hadronic bound states. There, purely nonperturbative parton model descriptions are often quite successful phenomenologically, especially for moderate \(Q\). On the other hand, in the perturbative tail region of large \(q_{\rm T}\), where \(q_{\rm T}\approx Q\), calculations can be performed in fixed order perturbation theory with collinear factorization with \(Q\) as a hard scale. One example of this way of cataloging physically distinct regions can be seen in the treatment of SIDIS in Ref. [8], following the theoretical work in Ref. [9], where Fig. 17 shows two separate fits for the small transverse momentum nonperturbative peak and the large transverse momentum perturbative tail. Reference [8] attributes the behavior in each region to a different underlying physical mechanism, namely a nonperturbative peak and a perturbative tail at small and large transverse momentum respectively. There one reads that "the two exponential functions in our parameterisation \(F_{1}\) can be attributed to two completely different underlying physics mechanisms that overlap in the region \(P_{hT}\simeq 1.0\,({\rm GeV/c})^{2}\)."
Individual TMD pdfs and ffs can be viewed in an analogous way. When the transverse momentum in an individual TMD pdf is comparable to the renormalization scale \(\mu\), \(k_{\rm T}\approx\mu\approx\sqrt{\zeta}\), it is straightforward to calculate the TMD pdf directly from its operator definition at a fixed, low order in collinear factorization. This provides a very useful consistency check in phenomenological implementations. Namely, the parametrizations of TMD pdfs and ffs that are used in phenomenology must, within perturbative or power-suppressed errors, match their expressions as obtained from fixed order collinear factorization in the large transverse momentum (\(k_{\rm T}\approx\mu\)) limit as \(\mu\to\infty\).
However, most implementations of TMD phenomenology from the past decade find tension between the extracted TMD functions and their large transverse momentum limits as calculated in fixed order collinear factorization. Consider, for instance, the far right panel in Fig. 6 of [10]. The pale blue dot-dashed curve is the cross section calculation performed with TMD pdfs and ffs (the so-called "\(W\) term" or "TMD term"). This is to be compared with the dashed green curve (the "asymptotic" term), which represents the large transverse momentum asymptote of the cross section, calculated theoretically in collinear factorization. In principle, consistency demands that the TMD term and the asymptotic term approximately overlap in a range of \(\Lambda_{\rm QCD}\ll q_{\rm T}\ll Q\). As the figure illustrates, this is not the case, at least for calculations done with standard parametrizations of collinear and TMD functions. It is only at the extremely high energies, shown in the far left plot, that a region starts to emerge where the asymptotic and TMD terms (very roughly) begin to overlap at intermediate transverse momentum. While the exact details of the mismatch depend on the specifics of the implementation, the trend appears to be quite general [11; 12; 13; 14], and it applies to other processes where TMD factorization is often used1. The overall picture suggests that elements are still missing from the standard way that TMD factorization gets implemented at a practical level.
Footnote 1: A successful implementation of the matching, which predates modern TMD factorization theorems, was presented in [15].
A separate issue is that, for transverse momentum comparable to the hard scale (\(q_{\rm T}\approx Q\)), the small \(q_{\rm T}\ll Q\) approximation fails and a so-called "\(Y\)-term" is needed in order to get an accurate cross section calculation. However, the consistency problems alluded to above appear already at the level of the \(q_{\rm T}\ll Q\) contribution. In past papers, this small-\(q_{\rm T}\) contribution has sometimes been called the "\(W\)-term," and it is the contribution that involves TMD correlation functions. It, and the TMD correlation functions from which it is composed, is the main focus of this paper. Throughout this paper, we will call it the "TMD term" to emphasize its connection to TMD pdfs and ffs.
In this paper, we will show how to recover consistency between the TMD term and the large-\(q_{\rm T}\) asymptote by using an approach recently introduced by two of us [16]. In the process, we will diagnose some of the complications that, in the past, have been responsible for a mismatch. One problem arises from the way one imposes constraints of the form
\[f_{i/p}(x)\approx\int{\rm d}^{2}\mathbf{k}_{\rm T}\,f_{i/p}(x,\mathbf{k}_{\rm T})\,, \tag{1}\]
where here there is an "\(\approx\)" rather than a strict equality because such integrals are generally ultraviolet (UV) divergent and are only satisfied literally in a strict parton model interpretation where the pdf is a literal probability density. To maintain a partonic interpretation, one hopes to preserve an approximate version of Eq. (1) as accurately as possible. For a given parametrization of \(f_{i/p}(x)\), the parameters in a model of the nonperturbative transverse momentum in \(f_{i/p}(x,\mathbf{k}_{\rm T})\) are constrained by Eq. (1). Now, in standard procedures for implementing the Collins-Soper-Sterman (CSS) formalism and similar approaches to TMD factorization, the nonperturbative transverse momentum dependence is contained within transverse coordinate space functions that are usually labeled \(g_{i/p}(x,\mathbf{b}_{\rm T})\) (and \(g_{K}(\mathbf{b}_{\rm T})\) for the Collins-Soper (CS) kernel). To our knowledge, however, constraints corresponding to Eq. (1) are never directly imposed upon the \(g_{i/p}(x,\mathbf{b}_{\rm T})\) functions in phenomenological applications that use the \(g\)-function approach. As explained in Ref. [16], this will in general produce mismatches between the models of nonperturbative transverse momentum and the collinear functions \(f_{i/p}(x)\) that are used to describe the perturbative tails. We will see with explicit examples in this paper that the effects of the mismatch can propagate in transverse momentum space and spoil the matching at intermediate regions of transverse momentum. Although we will mainly use standard \(\overline{\rm MS}\) collinear pdfs and ffs for the parts of calculations that require collinear factorization, we will sometimes find it convenient in intermediate steps to work with collinear pdfs and ffs defined as the transverse momentum integrals of TMD pdfs and ffs with UV cutoffs,
\[f^{c}(x;\mu)\equiv\pi\int_{0}^{\mu^{2}}{\rm d}k_{\rm T}^{2}\,f_{i/p}(x,\mathbf{k}_ {\rm T};\mu;\zeta)\,, \tag{2}\]
where \(\mu\) is the usual auxiliary mass parameter associated with \(\overline{\rm MS}\) renormalization and \(\zeta\) is the CS scale. The "\(c\)" superscript on the left-hand side stands for "cutoff scheme." As will be explained in the text, the cutoff-defined and \(\overline{\rm MS}\) pdfs and ffs can be translated into one another at large \(\mu\) via relatively simple perturbative correction terms, so
the choice of which one to use is ultimately largely a matter of convenience. However, the explicit expressions for Eq. (2) do have the advantage of a natural and direct connection to a TMD parton model interpretation.
A coherent treatment of the issues discussed above will be necessary in order for a meaningful analysis of future SIDIS data in terms of TMD parton correlation functions to be possible, and for the interpretability of, for example, forthcoming results from the CEBAF 12 GeV program or a 24 GeV upgrade [17], as well as for a future electron-ion collider (EIC). In Ref. [16] we called the treatment a "bottom-up" approach to distinguish it from more conventional treatments whose starting points were tailored to very high energies. In this paper we will instead call it the "hadron structure oriented" (HSO) approach to emphasize the central role of the nonperturbative input and the focus on preserving a partonic interpretation.
In this paper, we will set up the calculation of the TMD term for SIDIS using the HSO approach of [16], and we will analyze in detail the transition to the large \(q_{\rm T}\) asymptotic term. We will show how imposing the integral relation in Eq. (2), ensuring a smooth transition between nonperturbative TMD behavior at small transverse momentum and the large transverse momentum tails, and several other adjustments to the conventional treatment fix the problems outlined above. Specifically, we will show how to ensure that nonperturbative TMD pdf and ff parameterizations remain reasonably consistent with their expected large transverse momentum behavior, especially near the input scale. This work complements other efforts to address similar problems; for example, [18; 19] impose continuity and smoothness conditions on \(g\)-functions directly in coordinate space.
The structure of the paper is as follows: In Sec. II, we summarize the basic setup of SIDIS following the HSO organization of TMD factorization from [16]. We also explain the notation to be used throughout the paper. In Sec. III, we write down the general parametrizations of the TMD pdfs and ffs that we will use for calculations, and in Sec. IV we show how to specialize to specific models of the very small transverse momentum behavior, using Gaussian and spectator-motivated models for illustration. In Sec. V, we explain the calculation of the large transverse momentum asymptotic term in the HSO approach. In Sec. VI, we present sample calculations of the TMD term in SIDIS, with both the Gaussian and spectator-inspired models for illustration. After analyzing how the conventional approach to TMD phenomenology leads to the complications discussed above, we show how they are solved in the HSO approach. We end in Sec. VII by discussing future plans for implementing phenomenological treatments in the HSO approach.
## II Semi-Inclusive DIS
We will adopt standard conventions for expressing SIDIS cross sections in the current fragmentation region, and our labels for the kinematical variables are mostly consistent with those of [20]. A lepton with momentum \(l\) scatters off a hadron target with momentum \(p\), and the momentum of the recoiling lepton is \(l^{\prime}\). The final state contains a measured hadron with momentum \(P_{\rm B}\) and is inclusive in all other final states \(X\):
\[l+p\to l^{\prime}+P_{\rm B}+X \tag{3}\]
Throughout this paper, we will use the usual Lorentz invariant kinematical variables,
\[q^{2} =-Q^{2}\,, x_{\rm bj} =\frac{Q^{2}}{2p\cdot q}\,, z_{\rm h} =\frac{P_{B}\cdot p}{p\cdot q}\,, \tag{4}\]
where \(q\equiv(l-l^{\prime})\) is the momentum of the exchanged photon. Except where specified, we will work in the Breit frame, with the proton moving in the plus light-cone direction (see figure 1). We will drop all power-suppressed target and final state kinematical mass corrections so that Breit frame momentum fractions are
\[x_{\rm N}\equiv-\frac{p^{+}}{q^{+}}\approx x_{\rm bj}\,, \tag{5}\]
where the "\(\approx\)" is a reminder that this identification only holds up to target power suppressed target mass corrections.
For characterizing regions of transverse momentum, we will use the variable
\[\mathbf{q}_{\rm T}\equiv-\frac{\mathbf{P}_{\rm B\rm T}}{z_{\rm N}}\,. \tag{6}\]
Here, \(\mathbf{q}_{\rm T}\) is the transverse momentum of the virtual photon in a frame, which we call the "hadron frame," where the target and final state hadrons are exactly back-to-back, and
\[z_{\rm N}\equiv\frac{P_{\rm B}^{-}}{q^{-}}\,. \tag{7}\]
More details concerning the basic kinematical setup that we use may be found in [20]. In this paper, we will work in a strictly leading power approach, where \(x_{\rm bj}\approx x_{\rm N}\) and \(z_{\rm N}\approx z_{\rm h}\). To simplify notation, therefore, we will drop the subscripts on \(x\) and \(z\) from here forward.
Describing the cross section accurately over the full range of \(q_{\rm T}\) requires that one merge the treatment tailored to the \(q_{\rm T}/Q\ll 1\) region (the TMD term) with the collinear factorization treatment appropriate to the \(q_{\rm T}\approx Q\) region. Both calculations must agree approximately in the intermediate \(\Lambda_{\rm QCD}\ll q_{\rm T}\ll Q\) region. It is the treatment of the \(q_{\rm T}\ll Q\) region that involves TMD pdfs and ffs, and it is this contribution that we will focus on in this paper. In the small \(q_{\rm T}\) limit, and neglecting kinematical hadron mass corrections, \(z_{\rm N}\approx z\).
The usual TMD-factorization expression for the hadronic tensor is
\[W^{\mu\nu}(x,Q,z,\mathbf{P}_{\rm BT})\] \[=\sum_{j}H_{j}^{\mu\nu}\int{\rm d}^{2}\mathbf{k}_{\rm 1T}\ {\rm d}^{2} \mathbf{k}_{\rm 2T}\ f_{j/p}(x,\mathbf{k}_{\rm 1T};\mu_{Q},Q^{2})D_{h/j}(z,z\mathbf{k}_{ \rm 2T};\mu_{Q},Q^{2})\delta^{(2)}(\mathbf{q}_{\rm T}+\mathbf{k}_{\rm 1T}-\mathbf{k}_{\rm 2T})\] \[=\sum_{j}H_{j}^{\mu\nu}\int\frac{{\rm d}^{2}\mathbf{b}_{\rm T}}{(2 \pi)^{2}}\ e^{-i\mathbf{q}_{\rm T}\cdot\mathbf{b}_{\rm T}}\ \tilde{f}_{j/p}(x,\mathbf{b}_{\rm T};\mu_{Q},Q^{2})\ \tilde{D}_{h/j}(z,\mathbf{b}_{\rm T};\mu_{Q},Q^{2})\] \[=\sum_{j}H_{j}^{\mu\nu}\left[f_{j/p},D_{h/j}\right]\,, \tag{8}\]
where the sum is over all quark and antiquark flavors, and each line is a different way that SIDIS routinely gets presented in the literature. The functions \(f_{j/p}(x,\mathbf{k}_{\rm 1T};\mu_{Q},Q^{2})\) and \(D_{h/j}(z,z\mathbf{k}_{\rm 2T};\mu_{Q},Q^{2})\) are the TMD pdfs and ffs respectively, with their usual operator definitions [3]. Within the approximations that define TMD factorization in the current region, the longitudinal momentum fractions of the incoming and struck partons are fixed to \(x\) and \(z\). The momentum variables \(\mathbf{k}_{\rm 1T}\) and \(\mathbf{k}_{\rm 2T}\) are the transverse momenta of the struck and final state partons in the _hadron_ frame, and we have fixed the auxiliary renormalization and light-cone scales \(\mu\) and \(\sqrt{\zeta}\) in Eq. (2) equal to \(\mu_{Q}\) and \(Q\) respectively. (Ultimately, we will set \(\mu_{Q}=Q\), but for organizational purposes we will keep the symbols separate for now.) \(H_{j}^{\mu\nu}\) is a known hard coefficient. In transverse coordinate space, the TMD pdfs and ffs are
\[\tilde{f}_{j/p}(x,\mathbf{b}_{\rm T};\mu,\zeta) =\int{\rm d}^{2}\mathbf{k}_{\rm 1T}\,e^{-i\mathbf{k}_{\rm 1T}\cdot\mathbf{b}_{ \rm T}}f_{j/p}(x,\mathbf{k}_{\rm 1T};\mu,\zeta)\,,\] \[\tilde{D}_{q/j}(z,\mathbf{b}_{\rm T};\mu,\zeta) =\int{\rm d}^{2}\mathbf{k}_{\rm 2T}\,e^{i\mathbf{k}_{\rm 2T}\cdot\mathbf{b}_{ \rm T}}D_{q/j}(z,z\mathbf{k}_{\rm 2T};\mu,\zeta)\,, \tag{9}\]
and we have used these in the transverse coordinate space representation of \(W^{\mu\nu}\) on the third line of Eq. (8), which is the standard form for implementing evolution. On the last line of Eq. (8), we have used the common bracket notation for abbreviating the transverse convolution integrals. The hard factor is
\[H_{j}^{\mu\nu}=\frac{z}{2}{\rm Tr}[\gamma^{\nu}\gamma^{+}\gamma^{\mu}\gamma^{- }]|H|_{j}^{2} \tag{10}\]
Figure 1: Schematic of a SIDIS event as observed in the Breit frame. The dashed green lines represent the unobserved particles after the collision.
where the last factor (see, for instance [21]) reads
\[|H|_{j}^{2}=e_{j}^{2}\left\{1+\frac{C_{F}\alpha_{s}(\mu)}{4\pi}\left[-16+\frac{ \pi^{2}}{3}+6\ln\left(\frac{Q^{2}}{\mu^{2}}\right)-2\ln^{2}\left(\frac{Q^{2}}{ \mu^{2}}\right)\right]+O\left(\alpha_{s}^{2}(\mu)\right)\right\}\,. \tag{11}\]
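For reference, the hard factor of Eq. (11) is simple to code directly; in the minimal sketch below, the quark charge and the value of \(\alpha_{s}(\mu)\) are illustrative inputs rather than values used elsewhere in the paper.

```python
import numpy as np

CF = 4.0 / 3.0

def hard_factor_sq(Q, mu, alpha_s, e_j):
    """|H|_j^2 at O(alpha_s), Eq. (11)."""
    L = np.log(Q**2 / mu**2)
    return e_j**2 * (1.0 + (CF * alpha_s) / (4.0 * np.pi)
                     * (-16.0 + np.pi**2 / 3.0 + 6.0 * L - 2.0 * L**2))

# Example with mu = Q (the choice eventually adopted in the text) and the up-quark charge;
# alpha_s = 0.25 is an illustrative value only
print(hard_factor_sq(Q=4.0, mu=4.0, alpha_s=0.25, e_j=2.0/3.0))
```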
Projection tensors applied to Eq. (8) give the usual unpolarized quark structure functions of SIDIS,
\[F_{1,2}\left(x,Q,z,\mathbf{P}_{\rm BT}\right)={\rm P}_{1,2}^{\mu\nu}\,W_{\mu\nu}(p, q,z,\mathbf{P}_{BT}), \tag{12}\]
where, still dropping kinematical hadron mass corrections,
\[{\rm P}_{1}^{\mu\nu} = -\frac{1}{2}\left[g^{\mu\nu}-4x^{2}\frac{p^{\mu}p^{\nu}}{Q^{2}} \right]\,, \tag{13a}\] \[{\rm P}_{2}^{\mu\nu} = -x\left[g^{\mu\nu}-12x^{2}\frac{p^{\mu}p^{\nu}}{Q^{2}}\right]\,, \tag{13b}\]
and
\[{\rm P}_{1}^{\mu\nu}H_{j,\mu\nu} = H_{1}=2z|H|_{j}^{2}\,,\] \[{\rm P}_{2}^{\mu\nu}H_{j,\mu\nu} = H_{2}=4zx|H|_{j}^{2}\,. \tag{14}\]
Reference [16] substantially reorganized the more standard ways of expressing the TMD factorization expression for \(W^{\mu\nu}\), as summarized by the sequence of steps in Sec. VI of that paper. Doing so required a high degree of specificity about exactly which versions of pdfs and ffs and their parametrizations were being discussed in a given context, and this led us to introduce a rather elaborate system of notation. For conciseness, we will drop most of that notation in this paper and instead indicate in the text which version of a symbol is being used whenever such distinctions become necessary. When we calculate Eq. (8), we will mostly be interested in using the final underlined \(\tilde{f}_{j/p}\), \(\tilde{D}_{h/j}\) in Eq. (61) of [16], although for the input scale calculations in this paper the difference between the underlined and "input" distributions is negligible. Any perturbatively calculable quantities will be maintained through order \(\alpha_{s}\), so results are all \(O\left(\alpha_{s}\right)\). Any collinear pdfs or ffs should be assumed to be defined in the \(\overline{\rm MS}\) renormalization scheme unless otherwise specified. Power suppressed errors will be expressed as \(O\left(m^{2}/Q^{2}\right)\) where \(m\) symbolizes any small mass scale like \(\Lambda_{\rm QCD}\) or a hadron mass.
To implement evolution, we rewrite Eq. (8) in a form where each TMD function is expressed in terms of evolution from an input scale \(Q_{0}\). Thus, we use (the SIDIS version of) Eq. (65) from [16],
\[W^{\mu\nu}(x,Q,z,\mathbf{P}_{\rm BT}) = \sum_{j}H_{j}^{\mu\nu}\int\frac{{\rm d}^{2}\mathbf{b}_{\rm T}}{(2\pi )^{2}}\ e^{-i\mathbf{q}_{\rm T}\cdot\mathbf{b}_{\rm T}}\ \tilde{f}_{j/p}(x,\mathbf{b}_{\rm T};\mu_{Q_{0}},Q_{0}^{2})\ \tilde{D}_{h/j}(z,\mathbf{b}_{\rm T};\mu_{Q_{0}},Q_{0}^{2}) \tag{15}\] \[\times \exp\left\{\tilde{K}(b_{\rm T};\mu_{Q_{0}})\ln\left(\frac{Q^{2}} {Q_{0}^{2}}\right)+\int_{\mu_{Q_{0}}}^{\mu_{Q}}\frac{{\rm d}\mu^{\prime}}{\mu^ {\prime}}\bigg{[}2\gamma(\alpha_{s}(\mu^{\prime});1)-\ln\frac{Q^{2}}{{\mu^{ \prime}}^{2}}\gamma_{K}(\alpha_{s}(\mu^{\prime}))\bigg{]}\right\}\,.\]
\(Q_{0}\) should be understood to be the lowest value of \(Q\) for which factorization techniques are considered reasonable, which in practice is usually between around 1 GeV and 4 GeV for SIDIS. An important observation underlying the HSO approach of Ref. [16] is that individual correlation functions, \(f_{j/p}(x,\mathbf{k}_{\rm T};\mu_{Q_{0}},Q_{0}^{2})\) or \(\tilde{D}_{h/j}(z,\mathbf{b}_{\rm T};\mu_{Q_{0}},Q_{0}^{2})\), have unambiguous transverse momentum dependence for all \(k_{\rm T}\), including all \(k_{\rm T}>Q_{0}\), which follows from their operator definitions. Once these input functions have been determined, evolving them to larger \(Q\) is only a matter of substituting them into Eq. (15) (after transforming into coordinate space). This can be used to simplify the organization of phenomenological implementations because one may focus attention on the nonperturbative momentum space treatment of hadron structure at \(Q\) near the initial input scale \(Q_{0}\). The only input that is then necessary to obtain the TMDs at any other higher scale is the evolution kernel.
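Since the input functions are azimuthally symmetric, the transform to coordinate space in Eq. (9) reduces to a Hankel transform, \(\tilde{f}(b_{\rm T})=2\pi\int_{0}^{\infty}{\rm d}k_{\rm T}\,k_{\rm T}J_{0}(k_{\rm T}b_{\rm T})f(k_{\rm T})\). A minimal numerical sketch, checked against the known transform of a Gaussian core (an illustrative mass value is assumed), is

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def bspace(f_kT, bT, kmax=50.0):
    """Hankel-transform form of Eq. (9) for an azimuthally symmetric TMD:
    f~(bT) = 2*pi * int_0^inf dkT kT J0(kT*bT) f(kT)."""
    val, _ = quad(lambda kT: kT * j0(kT * bT) * f_kT(kT), 0.0, kmax, limit=200)
    return 2.0 * np.pi * val

# Check against the known transform of a Gaussian core, exp(-bT^2 MF^2 / 4)
MF = 0.5   # GeV, illustrative
gauss = lambda kT: np.exp(-kT**2 / MF**2) / (np.pi * MF**2)
for bT in (0.5, 1.0, 2.0):   # GeV^-1
    print(bT, bspace(gauss, bT), np.exp(-bT**2 * MF**2 / 4.0))
```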
In this paper, we will be mostly interested in the behavior of the input TMD pdfs and ffs, in which case the evolution factor does not enter. In places where we do need the evolution factor, we will use the same parametrization for the CS kernel from Sec. VII-A from Ref. [16] since it reproduces the correct \(O\left(\alpha_{s}\right)\) perturbative behavior while also capturing minimal basic expectations for the nonperturbative behavior. Thus, the input scale parametrization of the kernel that we will use is
\[\tilde{K}_{\rm input}(b_{\rm T};\mu_{Q_{0}})=\frac{2\alpha_{s}(\mu_{Q_{0}})C_{F}} {\pi}\left[K_{0}(b_{\rm T}m_{K})+\ln\left(\frac{m_{K}}{\mu_{Q_{0}}}\right)\right] \tag{16}\]
so the full (underlined, in the notation of Ref. [16]) kernel is
\[\tilde{K}(b_{\rm T};\mu_{Q_{0}})=\frac{2\alpha_{s}(\mu_{\overline{Q}_{0}})C_{F}}{ \pi}\left[K_{0}(b_{\rm T}m_{K})+\ln\left(\frac{m_{K}}{\mu_{\overline{Q}_{0}}} \right)\right]-\int_{\mu_{\overline{Q}_{0}}}^{\mu_{Q_{0}}}\frac{\mathrm{d}\mu^{ \prime}}{\mu^{\prime}}\gamma_{K}(\alpha_{s}(\mu^{\prime}))\,. \tag{17}\]
The nonperturbative model parameter in \(\tilde{K}(b_{\rm T};\mu_{Q_{0}})\) is \(m_{K}\). The bar on top of \(\overline{Q}_{0}\) and \(\mu_{\overline{Q}_{0}}\) is the symbol introduced in [16] to indicate that this is a scale that is fixed to \(Q_{0}\) at large \(b_{\rm T}\), but which transitions to \(\sim 1/b_{\rm T}\) behavior as \(b_{\rm T}\to 0\). The role of the "scale transformation function", \(\overline{Q}_{0}\), is analogous to that of \(b_{*}\) in the usual CSS treatment, and its exact choice is, in principle, arbitrary. We will continue to use the choice for \(\overline{Q}_{0}\) from Ref. [16]. We provide the expression in Appendix A of this paper. We remark that it is possible to consider other types of nonperturbative behavior for the CS kernel within the approach of Ref. [16], including recent calculations in lattice QCD (see for instance Refs. [22; 23; 24; 25]).
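A direct transcription of Eq. (16), with \(m_{K}\) and \(\alpha_{s}(\mu_{Q_{0}})\) supplied as illustrative inputs, is

```python
import numpy as np
from scipy.special import k0  # modified Bessel function K_0

CF = 4.0 / 3.0

def K_input(bT, mu_Q0, alpha_s, m_K):
    """Input-scale Collins-Soper kernel parametrization, Eq. (16)."""
    return (2.0 * alpha_s * CF / np.pi) * (k0(bT * m_K) + np.log(m_K / mu_Q0))

# Illustrative values only; m_K is the nonperturbative mass parameter of the model
print(K_input(bT=1.0, mu_Q0=4.0, alpha_s=0.25, m_K=0.1))
```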
## III TMD parton density & fragmentation functions
For constructing parametrizations of the quark and antiquark TMD pdfs and ffs, we repeat the steps in Sec.VI of Ref. [16]. We continue to use the additive structure from the examples in Ref. [16] to interpolate between a nonperturbative core and the perturbative tail. The first terms transition into the fixed \(O\left(\alpha_{s}(\mu)\right)\) tail calculation of the TMD at large \(k_{\rm T}\), while the last term is a non-perturbative "core" that describes the peak at very small \(k_{\rm T}\). The core term is further constrained by an integral relation analogous to Eq. (2), which determines its overall normalization factor \(C_{h/j}\).
Thus, for the input quark ff
\[D_{{\rm input},h/j}(z,z\mathbf{k}_{\rm T};\mu_{Q_{0}},Q_{0}^{2}) = \frac{1}{2\pi z^{2}}\frac{1}{k_{\rm T}^{2}+m_{D_{h,j}}^{2}}\left[ A_{h/j}^{D}(z;\mu_{Q_{0}})+B_{h/j}^{D}(z;\mu_{Q_{0}})\ln\frac{Q_{0}^{2}}{k_{\rm T }^{2}+m_{D_{h,j}}^{2}}\right] \tag{18}\] \[+ \frac{1}{2\pi z^{2}}\frac{1}{k_{\rm T}^{2}+m_{D_{h,g}}^{2}}A_{h/j }^{D,g}(z;\mu_{Q_{0}})\] \[+ C_{h/j}^{D}\,D_{{\rm core},h/j}(z,z\mathbf{k}_{\rm T};Q_{0}^{2})\,,\]
where \(D_{{\rm core},h/j}(z,z\mathbf{k}_{\rm T};Q_{0}^{2})\) is a parametrization of the peak of the TMD ff to be specified later. To compactify notation, we have dropped the \((n,d_{r})\) superscripts that were used in [16], but we have included a hadron label \(h\) and \(j\in u,d,s,c,\ldots\) labels for parton flavors and anti-flavors. \(A^{D}\), \(B^{D}\), and \(C^{D}\) are abbreviations for the following expressions,
\[A_{h/j}^{D}(z;\mu_{Q_{0}}) \equiv \sum_{jj^{\prime}}\delta_{j^{\prime}j}\frac{\alpha_{s}(\mu_{Q_{0} })}{\pi}\left\{\left[(P_{jj^{\prime}}\otimes d_{h/j^{\prime}})(z;\mu_{Q_{0}}) \right]\right.-\frac{3C_{F}}{2}d_{h/j^{\prime}}(z;\mu_{Q_{0}})\right\}\,, \tag{19}\] \[B_{h/j}^{D}(z;\mu_{Q_{0}}) \equiv \sum_{jj^{\prime}}\delta_{j^{\prime}j}\frac{\alpha_{s}(\mu_{Q_{0 }})C_{F}}{\pi}d_{h/j^{\prime}}(z;\mu_{Q_{0}})\,,\] (20) \[A_{h/j}^{D,g}(z;\mu_{Q_{0}}) \equiv \frac{\alpha_{s}(\mu_{Q_{0}})}{\pi}\left[(P_{gj}\otimes d_{h/g}) (z;\mu_{Q_{0}})\right]\,,\] (21) \[C_{h/j}^{D}\equiv\frac{1}{N_{h/j}^{D}}\Bigg{[}d_{h/j}(z;\mu_{Q_{ 0}})-A_{h/j}^{D}(z;\mu_{Q_{0}})\ln\left(\frac{\mu_{Q_{0}}}{m_{D_{h,j}}}\right) -B_{h/j}^{D}(z;\mu_{Q_{0}})\ln\left(\frac{\mu_{Q_{0}}}{m_{D_{h,j}}}\right)\ln \left(\frac{Q_{0}^{2}}{\mu_{Q_{0}}m_{D_{h,j}}}\right)\,,\] (22) \[-A_{h/j}^{D,g}(z;\mu_{Q_{0}})\ln\left(\frac{\mu_{Q_{0}}}{m_{D_{h,g }}}\right)+\frac{\alpha_{s}(\mu_{Q_{0}})}{2\pi}\left\{\sum_{jj^{\prime}}\delta_ {j^{\prime}j}[\mathcal{C}_{\Delta}^{j^{\prime}/j}\otimes d_{h/j^{\prime}}](z; \mu_{Q_{0}})+[\mathcal{C}_{\Delta}^{g/j}\otimes d_{h/g}](z;\mu_{Q_{0}})\right\} \Bigg{]}\,.\]
where
\[P_{qq}(z) =P_{\bar{q}q}(z)=C_{F}\left[\frac{1+z^{2}}{(1-z)_{+}}+\frac{3}{2} \delta\left(1-z\right)\right]\,, \tag{23}\] \[P_{gq}(z) =C_{F}\frac{1+(1-z)^{2}}{z}\,,\] (24) \[\mathcal{C}_{\Delta}^{q/q}(z) =2P_{qq}(z)\ln z+C_{F}(1-z)-C_{F}\frac{\pi^{2}}{12}\delta(1-z)\,,\] (25) \[\mathcal{C}_{\Delta}^{g/q}(z) =2P_{gq}(z)\ln z+C_{F}z\,,\] (26) \[N_{h/j}^{D} \equiv 2\pi\,z^{2}\int_{0}^{\infty}dk_{\rm T}k_{\rm T}\,D_{{\rm core},h/ j}(z,z\mathbf{k}_{\rm T};Q_{0}^{2})\,. \tag{27}\]
For the TMD pdfs, the expressions are similar,
\[f_{{\rm int},i/p}(x,\mathbf{k}_{\rm T};\mu_{Q_{0}},Q_{0}^{2}) =\frac{1}{2\pi}\frac{1}{k_{\rm T}^{2}+m_{f_{i,p}}^{2}}\left[A_{i/ p}^{f}(x;\mu_{Q_{0}})+B_{i/p}^{f}(x;\mu_{Q_{0}})\ln\frac{Q_{0}^{2}}{k_{\rm T }^{2}+m_{f_{i,p}}^{2}}\right]\] \[+\frac{1}{2\pi}\frac{1}{k_{\rm T}^{2}+m_{f_{g,p}}^{2}}A_{i/p}^{f,g}(x;\mu_{Q_{0}})\] \[+C_{i/p}^{f}\,f_{{\rm core},i/p}(x,\mathbf{k}_{\rm T};Q_{0}^{2})\,, \tag{28}\]
with the corresponding abbreviations
\[A_{i/p}^{f}(x;\mu_{Q_{0}}) \equiv\sum_{ii^{\prime}}\delta_{i^{\prime}i}\frac{\alpha_{s}(\mu_ {Q_{0}})}{\pi}\left\{\left[(P_{i^{\prime}i}\otimes f_{i^{\prime}/p})(x;\mu_{Q _{0}})\right]-\frac{3C_{F}}{2}f_{i^{\prime}/p}(x;\mu_{Q_{0}})\right\}\,, \tag{29}\] \[B_{i/p}^{f}(x;\mu_{Q_{0}}) \equiv\sum_{i^{\prime}i}\delta_{i^{\prime}i}\frac{\alpha_{s}(\mu _{Q_{0}})C_{F}}{\pi}f_{i^{\prime}/p}(x;\mu_{Q_{0}})\,,\] (30) \[A_{i/p}^{f,g}(x;\mu_{Q_{0}}) \equiv\frac{\alpha_{s}(\mu_{Q_{0}})}{\pi}\left[(P_{ig}\otimes f_ {g/p})(x;\mu_{Q_{0}})\right]\,,\] (31) \[C_{i/p}^{f}\equiv\frac{1}{N_{i/p}^{f}}\Bigg{[}f_{i/p}(x;\mu_{Q_ {0}})-A_{i/p}^{f}(x;\mu_{Q_{0}})\ln\left(\frac{\mu_{Q_{0}}}{m_{f_{i,p}}} \right)-B_{i/p}^{f}(x;\mu_{Q_{0}})\ln\left(\frac{\mu_{Q_{0}}}{m_{f_{i,p}}} \right)\ln\left(\frac{Q_{0}^{2}}{\mu_{Q_{0}}m_{f_{i,p}}}\right)\,,\] \[-A_{i/p}^{f,g}(x;\mu_{Q_{0}})\ln\left(\frac{\mu_{Q_{0}}}{m_{f_{g,p }}}\right)+\frac{\alpha_{s}(\mu_{Q_{0}})}{2\pi}\left\{\sum_{ii^{\prime}}\delta _{i^{\prime}i}[\mathcal{C}_{\Delta}^{i/i^{\prime}}\otimes f_{i^{\prime}/p}](x ;\mu_{Q_{0}})+[\mathcal{C}_{\Delta}^{i/g}\otimes f_{g/p}](x;\mu_{Q_{0}}) \right\}\Bigg{]}\,. \tag{32}\]
where
\[P_{ig}(x) =T_{F}\left[x^{2}+(1-x)^{2}\right]\,, \tag{33}\] \[\mathcal{C}_{\Delta}^{i/i}(x) =C_{F}(1-x)-C_{F}\frac{\pi^{2}}{12}\delta(1-x)\,,\] (34) \[\mathcal{C}_{\Delta}^{g/p}(x) =2T_{F}x(1-x)\,,\] (35) \[N_{i/p}^{f} \equiv 2\pi\,\int_{0}^{\infty}dk_{\rm T}k_{\rm T}\,f_{{\rm core},i/p}(x, \mathbf{k}_{\rm T};Q_{0}^{2}) \tag{36}\]
In Eq. (28), \(f_{{\rm core},i/p}(x,\mathbf{k}_{\rm T};Q_{0}^{2})\) parametrizes the core peak of the TMD pdf. (We remind the reader that it is to be understood that all explicit perturbative parts in this paper are calculated to lowest order in \(\alpha_{s}\).)
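To make the additive structure concrete, the sketch below evaluates Eq. (28) at fixed \(x\); the coefficients \(A\), \(B\), \(A^{g}\), and \(C\) are placeholder numbers that, in a real calculation, would be obtained from collinear pdfs through Eqs. (29)-(32) and from the core normalization.

```python
import numpy as np

def f_input(kT, A, B, Ag, C, f_core, m_f, m_fg, Q0):
    """Additive input-scale TMD pdf of Eq. (28) at fixed x.
    A, B, Ag, C stand for the x-dependent coefficients of Eqs. (29)-(32),
    evaluated externally; f_core(kT) is the chosen core model."""
    tail  = (A + B * np.log(Q0**2 / (kT**2 + m_f**2))) / (2.0 * np.pi * (kT**2 + m_f**2))
    tailg = Ag / (2.0 * np.pi * (kT**2 + m_fg**2))
    return tail + tailg + C * f_core(kT)

# Illustrative, hypothetical numbers (not from a fit), with the Gaussian core of Eq. (43)
MF = 0.5
core = lambda kT: np.exp(-kT**2 / MF**2) / (np.pi * MF**2)
print(f_input(kT=1.0, A=0.05, B=0.08, Ag=0.02, C=0.6,
              f_core=core, m_f=MF, m_fg=MF, Q0=4.0))
```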
To extend the TMD pdf and ff parametrizations above to account for the \(b_{\rm T}\ll 1/Q_{0}\) region, we transform to transverse coordinate space and use Eq. (92) of [16] and its analog for the TMD pdf,
\[\tilde{D}_{h/j}(z,\mathbf{b}_{\rm T};\mu_{Q_{0}},Q_{0}^{2}) =\tilde{D}_{{\rm int},h/j}(z,\mathbf{b}_{\rm T};\mu_{\overline{Q}_{0}},\overline{Q}_{0}^{2})E(\overline{Q}_{0}/Q_{0},b_{\rm T})\,, \tag{37}\] \[\tilde{f}_{i/p}(x,\mathbf{b}_{\rm T};\mu_{Q_{0}},Q_{0}^{2}) =\tilde{f}_{{\rm int},i/p}(x,\mathbf{b}_{\rm T};\mu_{\overline{Q}_{0}},\overline{Q}_{0}^{2})E(\overline{Q}_{0}/Q_{0},b_{\rm T})\,. \tag{38}\]
with an evolution factor
\[E(\overline{Q}_{0}/Q_{0},b_{\rm T})\equiv\exp\left\{\int_{\mu_{ \overline{Q}_{0}}}^{\mu_{Q_{0}}}\frac{d\mu^{\prime}}{\mu^{\prime}}\left[\gamma( \alpha_{s}(\mu^{\prime});1)-\ln\frac{Q_{0}}{\mu^{\prime}}\gamma_{K}(\alpha_{s} (\mu^{\prime}))\right]+\ln\frac{Q_{0}}{\overline{Q}_{0}}\tilde{K}_{\rm input}( b_{\rm T};\mu_{\overline{Q}_{0}})\right\}\,. \tag{39}\]
Once the numerical values of parameters in \(\tilde{D}_{h/j}(z,b_{\rm T};\mu_{Q_{0}},Q_{0}^{2})\) and \(\tilde{f}_{i/p}(x,b_{\rm T};\mu_{Q_{0}},Q_{0}^{2})\) are determined and fixed as above, the TMD term at any other larger scale \(Q\) is found straightforwardly by substituting these into Eq. (15).
The scale \(\overline{Q}_{0}\) is designed to be approximately \(Q_{0}\) for \(Q\approx Q_{0}\), where the only important range of \(b_{\rm T}\) is \(b_{\rm T}\gtrsim 1/Q_{0}\), and the left and right sides of Eqs. (37)-(38) are nearly equal. For large \(Q\) (\(Q\gg Q_{0}\)), the UV \(b_{\rm T}\ll 1/Q_{0}\) region starts to become important and cannot be ignored. There, \(\overline{Q}_{0}\) smoothly transitions into a \(\sim 1/b_{\rm T}\) behavior such that RG improvement is implemented in the \(\mathbf{b}_{\rm T}\to\mathbf{0}_{\rm T}\) limit. The left sides of Eqs. (37)-(38) are the parametrizations that we labeled with underlines in Eq. (60) of Ref. [16], while the "input" functions on the right sides are to be used for phenomenological fitting for \(Q\approx Q_{0}\). By construction, the left and right sides of Eqs. (37)-(38), as well as \(Q_{0}\) and \(\overline{Q}_{0}\), differ negligibly in the range of \(b_{\rm T}\) relevant to \(Q\approx Q_{0}\) phenomenology - recall the discussion in Sec. V of [16].
For the example implementations we will perform in Sec. VI.4, we will use the approximation
\[E(\overline{Q}_{0}/Q_{0},b_{\rm T})\approx 1\,, \tag{40}\]
and set \(\overline{Q}_{0}\to Q_{0}\), since for this paper our main focus is on the \(Q\approx Q_{0}\) region and the construction of satisfactory parametrizations for \(\tilde{D}_{\rm{input},h/j}(z,b_{\rm T};\mu_{Q_{0}},Q_{0}^{2})\) and \(\tilde{f}_{\rm{input},i/p}(x,b_{\rm T};\mu_{Q_{0}},Q_{0}^{2})\). At the end of Sec. VI.4, we will restore the \(\overline{Q}_{0}\) treatment and confirm that its effect is negligible at \(Q\approx Q_{0}\).
It can be seen by inspection that the input parametrizations defined in Eq. (18) and Eq. (28) are constrained to match the perturbative large-\(k_{\rm T}\) collinear factorization approximations for the TMD pdfs and ffs,
\[D^{\rm pert}_{{\rm input},h/j}(z,z\mathbf{k}_{\rm T};\mu_{Q_{0}},Q_{0}^{2}) =\frac{1}{2\pi z^{2}}\frac{1}{k_{\rm T}^{2}}\left[A_{h/j}^{D}(z;\mu_{Q_{0}})+B_{h/j}^{D}(z;\mu_{Q_{0}})\ln\frac{Q_{0}^{2}}{k_{\rm T}^{2}}\right]+\frac{1}{2\pi z^{2}}\frac{1}{k_{\rm T}^{2}}A_{h/j}^{D,g}(z;\mu_{Q_{0}})\,, \tag{41}\] \[f^{\rm pert}_{{\rm input},i/p}(x,\mathbf{k}_{\rm T};\mu_{Q_{0}},Q_{0}^{2}) =\frac{1}{2\pi}\frac{1}{k_{\rm T}^{2}}\left[A_{i/p}^{f}(x;\mu_{Q_{0}})+B_{i/p}^{f}(x;\mu_{Q_{0}})\ln\frac{Q_{0}^{2}}{k_{\rm T}^{2}}\right]+\frac{1}{2\pi}\frac{1}{k_{\rm T}^{2}}A_{i/p}^{f,g}(x;\mu_{Q_{0}})\,, \tag{42}\]
which are good approximations to the true TMD correlation functions when \(k_{\rm T}\approx Q_{0}\) and \(Q_{0}\gg m\). Equations (41) and (42) are calculable entirely within leading power collinear factorization. The same expressions apply at any value of \(Q\), but for this paper we are especially interested in \(Q\) near the input scale.
## IV Gaussian versus scalar diquark models
The model parametrizations of the last section are still quite general. The only choices that have been made so far are to use an additive structure to interpolate to the order-\(\alpha_{s}\) perturbative tail at \(k_{\rm T}\approx Q_{0}\) and the choice of the parametrization of the CS kernel in Eq. (17). Further assumptions are necessary before these parametrizations can become useful.
Most of the effort in nonperturbative modeling enters in the choices for the functional forms for \(D_{{\rm core},h/j}(z,z\mathbf{k}_{\rm T};Q_{0}^{2})\) and \(f_{{\rm core},i/p}(x,\mathbf{k}_{\rm T};Q_{0}^{2})\) that describe the very small \(k_{\rm T}\approx 0_{\rm T}\) behavior. However, many approaches to modeling or parametrizing this region of nonperturbative TMDs already exist [26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47], and one may defer to them at this stage in the parametrization construction. The only way these previously existing models need to be modified is by including the interpolation to the order \(\alpha_{s}\) large-\(k_{\rm T}\) behavior, and by imposing integral relations analogous to Eq. (2). All that remains is to adjust \(D_{{\rm core},h/j}(z,z\mathbf{k}_{\rm T};Q_{0}^{2})\) and \(f_{{\rm core},i/p}(x,\mathbf{k}_{\rm T};Q_{0}^{2})\) so as to recover (at least approximately) existing model parametrizations in the \(k_{\rm T}\approx 0\) region. The parameters \(m_{D_{h,j}},m_{D_{h,g}},m_{f_{i,p}},m_{f_{g,p}}\) control the transition between the small-\(k_{\rm T}\) model and the large-\(k_{\rm T}\) perturbative tail.
For the purposes of this article, we will focus on two of the most commonly used models in phenomenology that are simple to implement. The first is the Gaussian model of TMDs (see, for example, Refs. [48; 49; 50]), which is often found to successfully describe data at lower \(Q\). It prescribes the functional forms
\[f^{\rm Gauss}_{{\rm core},i/p}(x,\mathbf{k}_{\rm T};Q_{0}^{2})=\frac{e^{-k_{\rm T}^{2}/M_{\rm F}^{2}}}{\pi M_{\rm F}^{2}}\,,\qquad D^{\rm Gauss}_{{\rm core},h/j}(z,z\mathbf{k}_{\rm T};Q_{0}^{2})=\frac{e^{-z^{2}\,k_{\rm T}^{2}/M_{\rm D}^{2}}}{\pi M_{\rm D}^{2}}\,. \tag{43}\]
The second model that we will consider is inspired by the popular spectator diquark model [51; 28]. For it, we adopt
the functional forms
\[f^{\rm Spect}_{{\rm core},i/p}(x,\mathbf{k}_{\rm T};Q_{0}^{2}) =\frac{6M_{\rm 0F}^{6}}{\pi\left(2M_{\rm F}^{2}+M_{\rm 0F}^{2} \right)}\,\frac{M_{\rm F}^{2}+k_{\rm T}^{2}}{\left(M_{\rm 0F}^{2}+k_{\rm T}^{2} \right)^{4}}\,. \tag{44}\] \[D^{\rm Spect}_{{\rm core},h/j}(z,z\mathbf{k}_{\rm T};Q_{0}^{2}) =\frac{2M_{\rm 0D}^{4}}{\pi\left(M_{\rm D}^{2}+M_{\rm 0D}^{2} \right)}\,\frac{M_{\rm D}^{2}+k_{\rm T}^{2}z^{2}}{\left(M_{\rm 0D}^{2}+k_{\rm T}^{2}z^{ 2}\right)^{3}}\,, \tag{45}\]
The overall factors in Eqs. (43)-(45) are chosen so that \(N_{h/j}^{D}=N_{i/p}^{f}=1\) in both models (recall Eq. (27) and Eq. (36)).
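This normalization is easy to verify numerically; the sketch below integrates both core models with illustrative mass values and confirms \(N_{h/j}^{D}=N_{i/p}^{f}=1\).

```python
import numpy as np
from scipy.integrate import quad

z        = 0.5
MF, M0F  = 0.5, 0.2        # GeV, illustrative
MD, M0D  = 0.3, 0.2 * z    # GeV; M0D = z * 0.2 GeV, matching the later choice M0F = M0D/z

f_gauss = lambda kT: np.exp(-kT**2 / MF**2) / (np.pi * MF**2)
D_gauss = lambda kT: np.exp(-(z*kT)**2 / MD**2) / (np.pi * MD**2)
f_spect = lambda kT: (6*M0F**6 / (np.pi*(2*MF**2 + M0F**2))
                      * (MF**2 + kT**2) / (M0F**2 + kT**2)**4)
D_spect = lambda kT: (2*M0D**4 / (np.pi*(MD**2 + M0D**2))
                      * (MD**2 + (z*kT)**2) / (M0D**2 + (z*kT)**2)**3)

norm_f = lambda f: 2*np.pi        * quad(lambda kT: kT*f(kT), 0, np.inf)[0]   # Eq. (36)
norm_D = lambda D: 2*np.pi * z**2 * quad(lambda kT: kT*D(kT), 0, np.inf)[0]   # Eq. (27)

print(norm_f(f_gauss), norm_D(D_gauss), norm_f(f_spect), norm_D(D_spect))   # all 1
```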
In later sections, it will often be convenient to work with collinear pdfs and ffs defined as the cutoff transverse momentum integrals of TMD pdfs and ffs. Hence, we define
\[f^{c}_{i/p}(x;\mu_{Q}) \equiv 2\pi\int_{0}^{\mu_{Q}}{\rm d}k_{\rm T}\,k_{\rm T}f_{i/p}(x, \mathbf{k}_{\rm T};\mu_{Q},Q^{2})\,, \tag{46}\] \[d^{c}_{h/j}(z;\mu_{Q}) \equiv 2\pi z^{2}\int_{0}^{\mu_{Q}}{\rm d}k_{\rm T}\,k_{\rm T}D_{h/j}( z,z\mathbf{k}_{\rm T};\mu_{Q},Q^{2})\,, \tag{47}\]
where the \(c\) superscript stands for "cutoff." The cutoff definitions could be defined more generally with an upper limit \(\mu_{f}\) different from \(\mu_{Q}\), but we will keep these scales equal for the present paper. The cutoff and \(\overline{\rm MS}\)-renormalized definitions are equal up to a scheme change and \(m^{2}/\mu_{Q}^{2}\)-suppressed corrections.
With our parametrizations of TMD pdfs and ffs in the previous section, the integrals are
\[f^{c}_{{\rm int},i/p}(x;\mu_{Q_{0}})= 2\pi\int_{0}^{\mu_{Q_{0}}}{\rm d}k_{\rm T}\,k_{\rm T}f_{{\rm int },i/p}(x,\mathbf{k}_{\rm T};\mu_{Q_{0}},Q_{0}^{2})=\] \[C^{f}_{i/p}\,f^{c}_{{\rm core},i/p}(x;\mu_{Q_{0}})+\frac{1}{2}A^ {f,g}_{i/p}(x;\mu_{Q_{0}})\ln\left(1+\frac{\mu_{Q_{0}}^{2}}{m_{f_{g,p}}^{2}}\right)\] \[+\frac{1}{2}A^{f}_{i/p}(x;\mu_{Q_{0}})\ln\left(1+\frac{\mu_{Q_{0} }^{2}}{m_{f_{i,p}}^{2}}\right)+\frac{1}{4}B^{f}_{i/p}(x;\mu_{Q_{0}})\left[\ln ^{2}\left(\frac{m_{f_{i,p}}^{2}}{Q_{0}^{2}}\right)-\ln^{2}\left(\frac{\mu_{Q _{0}}^{2}+m_{f_{i,p}}^{2}}{Q_{0}^{2}}\right)\right]\] \[=f_{i/p}(x;\mu_{Q_{0}})+O\left(\alpha_{s}(\mu_{0})^{2},\frac{m^{ 2}}{Q_{0}^{2}}\right)\,, \tag{48}\]
and
\[d^{c}_{{\rm int},h/j}(z;\mu_{Q_{0}})= 2\pi z^{2}\int_{0}^{\mu_{Q_{0}}}{\rm d}k_{\rm T}\,k_{\rm T}D_{{\rm int},h/j}(z,z\mathbf{k}_{\rm T};\mu_{Q_{0}},Q_{0}^{2})=\] \[C^{D}_{h/j}\,d^{c}_{{\rm core},h/j}(z;\mu_{Q_{0}})+\frac{1}{2}A^{D,g}_{h/j}(z;\mu_{Q_{0}})\ln\left(1+\frac{\mu_{Q_{0}}^{2}}{m_{D_{h,g}}^{2}}\right)\] \[+\frac{1}{2}A^{D}_{h/j}(z;\mu_{Q_{0}})\ln\left(1+\frac{\mu_{Q_{0}}^{2}}{m_{D_{h,j}}^{2}}\right)+\frac{1}{4}B^{D}_{h/j}(z;\mu_{Q_{0}})\left[\ln^{2}\left(\frac{m_{D_{h,j}}^{2}}{Q_{0}^{2}}\right)-\ln^{2}\left(\frac{\mu_{Q_{0}}^{2}+m_{D_{h,j}}^{2}}{Q_{0}^{2}}\right)\right]\] \[=d_{h/j}(z;\mu_{Q_{0}})+O\left(\alpha_{s}(\mu_{0})^{2},\frac{m^{2}}{Q_{0}^{2}}\right)\,, \tag{49}\]
with
\[f^{c,{\rm Gauss}}_{{\rm core},i/p}(x;\mu_{Q_{0}},Q_{0}^{2})=1-e^{-\mu_{Q_{0}}^ {2}/M_{\rm F}^{2}}\,,\qquad d^{c,{\rm Gauss}}_{{\rm core},h/j}(z;\mu_{Q_{0}},Q_ {0}^{2})=1-e^{-z^{2}\,\mu_{Q_{0}}^{2}/M_{\rm D}^{2}}\,, \tag{50}\]
in the case of the Gaussian model, and
\[f^{c,{\rm Spect}}_{{\rm core},i/p}(x;\mu_{Q_{0}},Q_{0}^{2}) =1-\frac{M_{\rm 0F}^{6}\left(2M_{\rm F}^{2}+M_{\rm 0F}^{2}+3\mu_{Q_{0}}^{2} \right)}{\left(2M_{\rm F}^{2}+M_{\rm 0F}^{2}\right)\left(M_{\rm 0F}^{2}+\mu_{Q_{0}}^{2} \right)^{3}}\,, \tag{51}\] \[d^{c,{\rm Spect}}_{{\rm core},h/j}(z;\mu_{Q_{0}},Q_{0}^{2}) =1-\frac{M_{\rm 0D}^{4}\left(M_{\rm D}^{2}+M_{\rm 0D}^{2}+2\mu_{Q_{0}}^{2}z^{ 2}\right)}{\left(M_{\rm D}^{2}+M_{\rm 0D}^{2}\right)\left(M_{\rm 0D}^{2}+\mu_{Q_{0}}^{2}z^{ 2}\right)^{2}}\,. \tag{52}\]
in the case of the spectator model. Note that Eqs. (50)-(52) are all 1 up to (at most) \(m^{2}/\mu_{Q_{0}}^{2}\)-suppressed errors.
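The closed forms in Eqs. (51)-(52) can likewise be checked against direct numerical integration of Eqs. (44)-(45); a minimal sketch with illustrative masses is

```python
import numpy as np
from scipy.integrate import quad

z, mu   = 0.5, 4.0       # illustrative values; mu plays the role of mu_{Q_0}
MF, M0F = 0.5, 0.2       # GeV, illustrative
MD, M0D = 0.3, 0.2 * z   # GeV, following the later choice M0F = M0D/z = 0.2 GeV

f_spect = lambda kT: (6*M0F**6 / (np.pi*(2*MF**2 + M0F**2))
                      * (MF**2 + kT**2) / (M0F**2 + kT**2)**4)
D_spect = lambda kT: (2*M0D**4 / (np.pi*(MD**2 + M0D**2))
                      * (MD**2 + (z*kT)**2) / (M0D**2 + (z*kT)**2)**3)

# Direct cutoff integrals of the core terms, as in Eqs. (46)-(47)
fc_num = 2*np.pi        * quad(lambda kT: kT*f_spect(kT), 0, mu)[0]
dc_num = 2*np.pi * z**2 * quad(lambda kT: kT*D_spect(kT), 0, mu)[0]

# Closed forms, Eqs. (51)-(52)
fc_ana = 1 - M0F**6*(2*MF**2 + M0F**2 + 3*mu**2) / ((2*MF**2 + M0F**2)*(M0F**2 + mu**2)**3)
dc_ana = 1 - M0D**4*(MD**2 + M0D**2 + 2*mu**2*z**2) / ((MD**2 + M0D**2)*(M0D**2 + mu**2*z**2)**2)

print(fc_num, fc_ana)   # should agree
print(dc_num, dc_ana)
```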
The expressions in Eqs. (48)-(49) follow directly by substituting Eq. (18) and Eq. (28) into Eqs. (46)-(47). By substituting the expressions in Eq. (22) and Eq. (32) for \(C_{h/j}^{D}\) and \(C_{i/p}^{f}\), it is straightforward to verify that the collinear pdfs and ffs of Eq. (48) and Eq. (49) are equal to the standard \(\overline{\rm MS}\,f(x;\mu_{Q_{0}})\) and \(d(z;\mu_{Q_{0}})\) respectively in the limit that \(O\left(m^{2}/Q_{0}^{2}\right)\) and \(O\left(\alpha_{s}(\mu_{Q_{0}})^{2}\right)\) errors are negligible.
To further simplify later numerical examples and reduce the number of free parameters, in the spectator-like models we will fix \(M_{0{\rm F}}=M_{0{\rm D}}/z=0.2\,{\rm GeV}\). We will also assume that model masses have no parton flavor dependence, and that the \(M_{F,D}\) of the core distributions is the same as the \(m_{f,D}\) in the tail terms. That is, for both models we will assume for now
\[m_{f_{i,p}}=m_{f_{g,p}}=M_{F}\,, \tag{53}\] \[m_{D_{h,j}}=m_{D_{h,g}}=M_{D}\,. \tag{54}\]
In general, the parameters in Eqs. (53)-(54) could have different numerical values in the Gaussian and the spectator-like models, but we will keep the same labels in both to simplify notation.
It should be emphasized that nothing in the setup of Sec. III relies on the use of any _particular_ nonperturbative model. Indeed, one of the motivating advantages of the HSO approach is that the momentum space nonperturbative model of the \(k_{\rm T}\approx 0\) region becomes easily interchangeable, as demonstrated by our switching between the Gaussian and spectator diquark models above.
## V The large transverse momentum asymptote
In this section, we will enumerate the steps for extracting the large-\(q_{\rm T}\) asymptote of Eq. (15). These steps will be particularly relevant to phenomenological treatments of the \(Q\approx Q_{0}\) region. Our path here differs from that of more standard presentations in that we start with the _small_ transverse momentum part of the TMD term and extract the large \(q_{\rm T}\approx Q\) behavior, in contrast to the more usual steps that start with large-\(q_{\rm T}\) calculations of the cross section in collinear perturbation theory and extract the \(q_{\rm T}\to 0\) asymptote. Both approaches must give the same result up to \(O\left(m^{2}/Q^{2}\right)\) and \(O\left(\alpha_{s}(Q)^{n+1}\right)\) corrections.
In all steps below, we will assume we are analyzing the TMD term in a regime where \(q_{\rm T}\) is comparable to \(Q\) and \(Q\) approaches infinity. To be specific, we take \(q_{\rm T}=\eta Q\) where \(\eta\) is a fixed, order unity constant and we let \(m^{2}/Q^{2}\to 0\). It will be convenient to first express the transverse momentum convolution integral on the second line of Eq. (8) in the following way,
\[[f,D]=\int{\rm d}^{2}\mathbf{k}_{\rm T}\,f(x,\mathbf{k}_{\rm T}-\mathbf{q}_{\rm T}/2;\mu_{ Q};Q^{2})D(z,z\left(\mathbf{k}_{\rm T}+\mathbf{q}_{\rm T}/2\right);\mu_{Q};Q^{2})\,, \tag{55}\]
where flavor subscripts are dropped. If we first consider the region of the integrand where
\[\mathbf{k}_{\rm T}=\mathbf{q}_{\rm T}/2+O\left(m\right)\,, \tag{56}\]
then
\[f(x,\mathbf{k}_{\rm T}-\mathbf{q}_{\rm T}/2;\mu_{Q};Q^{2})D(z,z\left(\bm {k}_{\rm T}+\mathbf{q}_{\rm T}/2\right);\mu_{Q};Q^{2})\] \[\qquad=f(x,\mathbf{k}_{\rm T}-\mathbf{q}_{\rm T}/2;\mu_{Q};Q^{2})D^{ \rm pert}(z,z\mathbf{q}_{\rm T};\mu_{Q};Q^{2})+O\left(\frac{m^{2}}{q_{\rm T}^{2}} \right)\,. \tag{57}\]
If we specialize to the order-\(\alpha_{s}\) case, then the collinear perturbative expression from Eq. (41), appropriate to the \(k_{\rm T}\approx Q\) region, may be used for \(D^{\rm pert}(z,z\mathbf{q}_{\rm T};\mu_{Q};Q^{2})\). At order-\(\alpha_{s}^{n}\), higher order versions may be used. Likewise, if we consider the region where
\[\mathbf{k}_{\rm T}=-\mathbf{q}_{\rm T}/2+O\left(m\right)\,, \tag{58}\]
then
\[f(x,\mathbf{k}_{\rm T}-\mathbf{q}_{\rm T}/2;\mu_{Q};Q^{2})D(z,z\left(\bm {k}_{\rm T}+\mathbf{q}_{\rm T}/2\right);\mu_{Q};Q^{2})\] \[\qquad=f^{\rm pert}(x,-\mathbf{q}_{\rm T};\mu_{Q};Q^{2})D(z,z\left( \mathbf{k}_{\rm T}+\mathbf{q}_{\rm T}/2\right);\mu_{Q};Q^{2})+O\left(\frac{m^{2}}{q_{ \rm T}^{2}}\right)\,, \tag{59}\]
where \(f^{\rm pert}(x,-\mathbf{q}_{\rm T};\mu_{Q};Q^{2})\) is the \(n^{\rm th}\)-order perturbative expression appropriate to \(k_{\rm T}\approx Q\). Again, when we specialize to the order-\(\alpha_{s}\) treatment, the perturbative expression in Eq. (42) can be used, and at order-\(\alpha_{s}^{n}\), the higher order versions of these expressions may be used.
Having the expansions in Eq. (57) and Eq. (59) on hand motivates us to rewrite Eq. (55) in the form
\[[f,D] = D(z,z\mathbf{q}_{\rm T};\mu_{Q};Q^{2})\left(2\pi\int_{0}^{\mu_{Q}}{ \rm d}k_{\rm T}\,k_{\rm T}f(x,\mathbf{k}_{\rm T};\mu_{Q};Q^{2})\right) \tag{60}\] \[+ f(x,-\mathbf{q}_{\rm T};\mu_{Q};Q^{2})\left(2\pi\int_{0}^{\mu_{Q}}{ \rm d}k_{\rm T}\,k_{\rm T}D(z,z\mathbf{k}_{\rm T};\mu_{Q};Q^{2})\right)\] \[+ \int{\rm d}^{2}\mathbf{k}_{\rm T}\,\left\{f(x,\mathbf{k}_{\rm T}-\mathbf{q}_{ \rm T}/2;\mu_{Q};Q^{2})D(z,z\left(\mathbf{k}_{\rm T}+\mathbf{q}_{\rm T}/2\right);\mu_{ Q};Q^{2})\right.\] \[\qquad\qquad\left.-D(z,z\mathbf{q}_{\rm T};\mu_{Q};Q^{2})f(x,\mathbf{k}_{ \rm T}-\mathbf{q}_{\rm T}/2;\mu_{Q};Q^{2})\Theta(\mu_{Q}-|\mathbf{k}_{\rm T}-\mathbf{q}_{ \rm T}/2|)\right.\] \[\qquad\qquad\left.-D(z,z(\mathbf{k}_{\rm T}+\mathbf{q}_{\rm T}/2);\mu_{Q}; Q^{2})f(x,-\mathbf{q}_{\rm T};\mu_{Q};Q^{2})\Theta(\mu_{Q}-|\mathbf{k}_{\rm T}+\mathbf{q}_{ \rm T}/2|)\right\}\,,\]
where we have simply added and subtracted the first two lines from the exact Eq. (55) to get the integral on the last three lines. On the first two lines of Eq. (60), we may replace \(f(x,\mathbf{q}_{\rm T};\mu_{Q};Q^{2})\) and \(D(z,z\mathbf{q}_{\rm T};\mu_{Q};Q^{2})\) by their perturbative collinear approximations from Eq. (57) and Eq. (59). Since they are evaluated at \(\mathbf{q}_{\rm T}\approx Q\), this only introduces power-suppressed errors. We may also identify the cutoff integrals on the first two lines with the cutoff definitions of the collinear pdfs and ffs in Eqs. (46)-(47). The integrand of the last three lines is suppressed by \(O\left(m^{2}/q_{\rm T}^{2}\right)\) in regions where \(\mathbf{k}_{\rm T}=\pm\mathbf{q}_{\rm T}/2+O\left(m\right)\). Therefore, we may restrict our consideration of its behavior to regions where
\[|\mathbf{k}_{\rm 1T}| = |\mathbf{k}_{\rm T}-\mathbf{q}_{\rm T}/2|\sim q_{\rm T}\,, \tag{61}\] \[|\mathbf{k}_{\rm 2T}| = |\mathbf{k}_{\rm T}+\mathbf{q}_{\rm T}/2|\sim q_{\rm T}\,, \tag{62}\]
i.e., where both \(k_{\rm 1T}\) and \(k_{\rm 2T}\) are an order unity fraction of \(q_{\rm T}\). Then, all TMD pdfs and ffs in the integrand of the last three lines of Eq. (60) can be expanded in powers of \(m^{2}/q_{\rm T}^{2}\) and replaced by their perturbative approximations, with only power suppressed corrections. We thus have
\[[f,D] = D^{\rm pert}(z,z\mathbf{q}_{\rm T};\mu_{Q};Q^{2})f^{c}(x;\mu_{Q})+ \frac{1}{z^{2}}f^{\rm pert}(x,-\mathbf{q}_{\rm T};\mu_{Q};Q^{2})d^{c}(z;\mu_{Q}) \tag{63}\] \[+ \int{\rm d}^{2}\mathbf{k}_{\rm T}\,\left\{f^{\rm pert}(x,\mathbf{k}_{\rm T }-\mathbf{q}_{\rm T}/2;\mu_{Q};Q^{2})D^{\rm pert}(z,z\left(\mathbf{k}_{\rm T}+\mathbf{q}_{ \rm T}/2\right);\mu_{Q};Q^{2})\right.\] \[\qquad\qquad\left.-D^{\rm pert}(z,z\mathbf{q}_{\rm T};\mu_{Q};Q^{2})f ^{\rm pert}(x,\mathbf{k}_{\rm T}-\mathbf{q}_{\rm T}/2;\mu_{Q};Q^{2})\Theta(\mu_{Q}-| \mathbf{k}_{\rm T}-\mathbf{q}_{\rm T}/2|)\right.\] \[\qquad\qquad\left.-D^{\rm pert}(z,z(\mathbf{k}_{\rm T}+\mathbf{q}_{\rm T}/2 );\mu_{Q};Q^{2})f^{\rm pert}(x,-\mathbf{q}_{\rm T};\mu_{Q};Q^{2})\Theta(\mu_{Q}-| \mathbf{k}_{\rm T}+\mathbf{q}_{\rm T}/2|)\right\}+O\left(\frac{m^{2}}{q_{\rm T}^{2}}\right)\] \[= [f,D]_{\rm ASY}+O\left(\frac{m^{2}}{q_{\rm T}^{2}}\right)\]
Dropping the \(O\left(m^{2}/q_{\rm T}^{2}\right)\) errors gives the asymptotic term that we sought. We will denote this "asymptotic" approximation by \(\left[f,D\right]_{\rm ASY}\), as indicated on the last line. It is calculable entirely within collinear perturbation theory, and it is an increasingly accurate approximation of the full cross section as \(q_{\rm T}\propto Q\) and \(Q\to\infty\). The derivation above of Eq. (63) applies at any order of \(\alpha_{s}\), although for this paper we will be mostly interested in \(O\left(\alpha_{s}\right)\) expressions.
Notice that it is the _cutoff_ definitions, Eqs. (46)-(47), for the collinear functions, and not the usual \(\overline{\rm MS}\) definitions, that appear on the first line of Eq. (63). One recovers the full asymptotic term for the cross section by substituting this into Eq. (8).
To specialize to the \(O\left(\alpha_{s}\right)\) case at an input scale \(Q=Q_{0}\), with the parametrizations in Eqs. (18)-(28), one substitutes the expressions from Eqs. (41)-(42). Equations (48) and (49) are to be used for the \(f^{c}(x;\mu_{Q_{0}})\) and \(d^{c}(z;\mu_{Q})\) on the first line of Eq. (63). If we drop \(O\left(\alpha_{s}^{2}\right)\) and \(O\left(m^{2}/Q^{2}\right)\) errors, the first line then exactly matches the more standard form of the \(O\left(\alpha_{s}\right)\) asymptotic term (see, e.g., Ref. [11]).
The integral that starts on the second line of Eq. (63) is only non-zero at \(O\left(\alpha_{s}^{2}\right)\) or higher, so it may be dropped in a strictly \(O\left(\alpha_{s}\right)\) treatment. However, there are several advantages to retaining it. One is simply that it guarantees that, for \(Q=Q_{0}\), we recover the exact asymptotic \(k_{\rm T}\to Q_{0}\), \(m/Q_{0}\to 0\) limit of the order-\(\alpha_{s}^{n}\) TMD-term. Another is that it ensures cutoff-invariance through the lowest non-trivial order. Recall that the cutoff-defined pdfs and ffs can in general use a cutoff \(\mu_{f}\) that differs from \(\mu_{Q}\). In Eq. (63), \(\mu_{f}\) dependence would appear in \(f^{c}\), \(d^{c}\), and the \(\Theta\) functions in the integral of the last three lines. Dependence on \(\mu_{f}\) enters the standard asymptotic term at order \(\alpha_{s}^{2}\), but keeping the third term in Eq. (63) ensures that \(\mu_{f}\) dependence enters \(\left[f,D\right]_{\rm ASY}\) only at order \(\alpha_{s}^{3}\).
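For orientation, the first line of Eq. (63) is straightforward to evaluate once the \(A\) and \(B\) coefficients and the cutoff collinear functions are known; the sketch below uses placeholder numbers purely to illustrate the structure.

```python
import numpy as np

def asy_first_line(qT, z, Q0, Af, Bf, Agf, AD, BD, AgD, fc, dc):
    """First line of Eq. (63): the O(alpha_s) asymptotic term of the bracket [f, D].
    The A/B coefficients stand for those of Eqs. (19)-(21) and (29)-(31); fc and dc
    stand for the cutoff collinear functions of Eqs. (46)-(47)."""
    L = np.log(Q0**2 / qT**2)
    D_pert = (AD + BD * L + AgD) / (2 * np.pi * z**2 * qT**2)   # Eq. (41)
    f_pert = (Af + Bf * L + Agf) / (2 * np.pi * qT**2)          # Eq. (42)
    return D_pert * fc + f_pert * dc / z**2

# Illustrative, hypothetical inputs (a real calculation obtains these from
# collinear pdfs/ffs via the convolutions of Sec. III)
print(asy_first_line(qT=2.0, z=0.5, Q0=4.0,
                     Af=0.05, Bf=0.08, Agf=0.02,
                     AD=0.04, BD=0.06, AgD=0.01,
                     fc=0.5, dc=0.4))
```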
## VI Example input scale treatment
Now we turn to demonstrating how the HSO treatment described in Secs. (II)-(IV) works in practice with explicit numerical implementations. Our purpose here is to compare the HSO treatment described thus far with the conventional steps for constructing phenomenological parametrizations, and to illustrate the improvements that are gained from using the former.
In Sec. VI.1 below, we will summarize the basic formulas and in Sec. VI.2 we will review the usual decomposition of a transverse momentum dependent cross section into a TMD term, an asymptotic term, and a \(Y\)-term. In Sec. VI.3, we will review the conventional style of implementing TMD factorization and show examples of the complications that can arise, some of which were already mentioned in the introduction, and in Sec. VI.4 we show how these are solved within the HSO approach.
In our calculations, we focus on the TMD pdfs and ffs parametrized at an initial scale \(Q=Q_{0}\), a scenario previously addressed in [10]. Estimating the lowest \(Q_{0}\) for which TMD factorization remains valid is rather non-trivial [16], and we leave it as an open question. For purposes of illustration, we will try two values in Secs. VI.3 and VI.4 below, from the relatively low (and reasonable) \(Q_{0}=4.0\) GeV to the (far too conservative) \(Q_{0}=20.0\) GeV, to demonstrate how the procedure works for both a small and a large choice of \(Q_{0}\).
### Basic setup
The standard expression for the SIDIS differential cross section in terms of the structure functions \(F_{1}\) and \(F_{2}\) is
\[\frac{\mathrm{d}\sigma}{\mathrm{d}x\ \mathrm{d}y\ \mathrm{d}z\ \mathrm{d}q_{\mathrm{T}}^{2}} =\frac{\pi^{2}\alpha_{\mathrm{em}}^{2}z}{Q^{2}\,x\,y}\left[F_{1} \,x\,y^{2}+F_{2}\,(1-y)\right]\,, \tag{64}\]
where the \(F\) structure functions are the usual ones obtained by contracting the projectors in Eq. (13) with the hadronic tensor. In the small-\(q_{\mathrm{T}}\) approximation, the structure functions are expressed in terms of TMD pdfs and ffs,
\[F=F^{\mathrm{TMD}}+O\left(m/Q,q_{\mathrm{T}}/Q\right)\,, \tag{65}\]
\[F_{1}^{\mathrm{TMD}}\equiv 2\,z\,\sum_{j}\lvert H\rvert_{j}^{2}\left[f_{j/p},D_ {h/j}\right]\,,\qquad F_{2}^{\mathrm{TMD}}\equiv 4\,z\,x\,\sum_{j}\lvert H \rvert_{j}^{2}\left[f_{j/p},D_{h/j}\right]\,, \tag{66}\]
where the "TMD" superscript denotes the small-\(q_{\mathrm{T}}\) approximation. Compare Eq. (66) with Eq. (8) for the hadronic tensor. We will use the \(O(\alpha_{s})\) hard factor \(\lvert H\rvert_{j}^{2}\) from Eq. (11) in any calculations below. Calculating Eq. (66) in a specific phenomenological implementation involves making choices about how to parametrize the TMD functions \(f_{i/p}\) and \(D_{h/j}\), including choices about nonperturbative models and/or calculations at the input scale, the order of precision in perturbative parts, and any other approximations or assumptions used in the construction of a specific set of parametrizations.
### Combining large (\(F^{\mathrm{FO}}\)) and small (\(F^{\mathrm{TMD}}\)) transverse momentum calculations
Before we contrast the \(F^{\mathrm{TMD}}\) calculations in the conventional and HSO styles, let us review the usual steps for merging calculations done with TMDs with purely collinear factorization calculations designed for the \(q_{\mathrm{T}}\approx Q\) region.
In the region where \(q_{\mathrm{T}}\approx Q\), the approximations in Eq. (66) fail. However, this is the region where fixed-order collinear factorization calculations, which use ordinary collinear pdfs and ffs, are most reliable. We express the large-\(q_{\mathrm{T}}\) fixed-order collinear approximation to the structure functions as
\[F=F^{\mathrm{FO}}+O\left(m/q_{\mathrm{T}}\right)\,,\qquad F^{\mathrm{FO}}=\, \sum_{i,j}d_{B/i}\otimes\hat{F}_{ij}\otimes f_{j/p}\,, \tag{67}\]
where the indices \(i,j\) run over parton flavors, and the FO superscript stands for "fixed-order." A choice must be made for the UV scheme that defines the collinear functions \(f_{i/p}\) and \(D_{h/j}\). The most common is renormalization in the \(\overline{\mathrm{MS}}\) scheme. The \(\hat{F}_{ij}\) are the partonic versions of the structure functions, and they have been calculated up to at least \(O(\alpha_{s}^{2})\)[52; 53; 54]. In our calculations, we will use \(O(\alpha_{s})\) results [9; 11; 55].
Following standard conventions, we will use the phrase "fixed order cross section" as a shorthand for Eq. (64) calculated with the large-\(q_{\rm T}\) approximation in Eq. (67).2 While \(F^{\rm TMD}\) gives an accurate treatment of the \(q_{\rm T}\approx m\) region, and \(F^{\rm FO}\) provides an accurate treatment of the \(q_{\rm T}\approx Q\) region, what is ultimately needed is a factorized expression with only \(O\left(m^{2}/Q^{2}\right)\)-suppressed errors point-by-point in \(q_{\rm T}\). To construct it systematically, one starts by writing the structure functions in the TMD (low-\(q_{\rm T}\)) approximation with the error term made explicit,
Footnote 2: Note that the asymptotic term of Sec. V is also calculated in fixed order perturbation theory. However, in the terminology of this section “fixed order term” applies specifically to calculations done using the non-asymptotic Eq. (67).
\[F=F^{\rm TMD}+\left[F-F^{\rm TMD}\right]\,. \tag{68}\]
The error term in brackets is only unsuppressed when \(q_{\rm T}\) is large relative to \(m\). Thus, it can be calculated in collinear factorization with only \(m^{2}/q_{\rm T}^{2}\)-suppressed errors. Since the error term itself is \(O\left(q_{\rm T}^{2}/Q^{2}\right)\), the result is that the overall error is \(m^{2}/Q^{2}\)-suppressed point-by-point in \(q_{\rm T}\). We therefore define
\[\lim_{m/q_{\rm T}\to 0}F^{\rm TMD}=F^{\rm ASY} \tag{69}\]
to be the \(q_{\rm T}\sim Q\), \(Q\to\infty\) asymptote of the TMD approximation, as it is calculated in fixed order collinear factorization. The "\(\sim\)" means the ratio \(q_{\rm T}^{2}/Q^{2}\) is to be held fixed as \(Q\to\infty\). Applied to Eq. (68), the structure function becomes
\[F=F^{\rm TMD}+\left[F^{\rm FO}-F^{\rm ASY}\right]+O\left(m^{2}/Q^{2}\right)\,. \tag{70}\]
The asymptotic term is constructed to accurately describe the \(m\ll q_{\rm T}\ll Q\) region - both \(q_{\rm T}\ll Q\) and \(m\ll q_{\rm T}\) approximations have been applied simultaneously. For this paper, this is simply Eq. (63) applied to structure functions.
A minor subtlety is that the exact form of the asymptotic term \(F^{\rm ASY}\) depends on the details of how collinear pdfs and ffs are defined and on how higher order corrections in the perturbative expansion are truncated. If, in an \(O\left(\alpha_{s}^{n}\right)\) calculation, for example, the cutoff-defined pdfs and ffs of Eq. (63) are replaced by their corresponding \(\overline{\rm MS}\) definitions, then the resulting asymptotic terms will generally differ by \(O\left(m^{2}/Q^{2}\right)\)-suppressed and \(O\left(\alpha_{s}^{n+1}\right)\)-suppressed amounts. Furthermore, while \(F^{\rm ASY}\) is in principle equal to the low-\(q_{\rm T}\) limit of \(F^{\rm FO}\) as \(Q\to\infty\), generally this is only exactly true in calculations at the working order of perturbation theory. In calculations at a fixed \(Q\), the two asymptotic terms will typically differ by higher-order \(\alpha_{s}\) and power-suppressed terms. In other words, if \(F^{\rm ASY}\) is calculated to \(O\left(\alpha_{s}^{n}\right)\) with the cutoff scheme for pdfs and ffs, and \(F^{\rm FO,r}\) is calculated to the same order in some other scheme \(r\), then one will generally find
\[\left[\lim_{q_{\rm T}/Q\to 0}F^{\rm FO,r}\right]^{O\left(\alpha_{s}^{n}\right)}- \left[F^{\rm ASY}\,\right]^{O\left(\alpha_{s}^{n}\right)}=O\left(\alpha_{s}^{ n+1},m^{2}/Q^{2}\right)\,. \tag{71}\]
That is, there is a family of valid schemes for defining the exact asymptotic term at a given order, though some schemes can be preferable to others in the context of minimizing errors. Indeed, it is the first term in Eq. (71), with \(r=\overline{\rm MS}\), that represents the most common approach used in the past for calculating the asymptotic term. We will call the asymptotic term calculated using Eq. (63) \(F^{\rm ASY}_{\rm HSO}\).
Together, the last two terms in Eq. (70) are often called the "\(Y\)-term," and the structure function is written as
\[F=F^{\rm TMD}+Y\,+O\left(m/Q\right)\,. \tag{72}\]
This form emphasizes the role of \(Y\) as a large-\(q_{\rm T}\) correction to calculations done with TMD pdfs and ffs. Of course, the precise value of the \(Y\)-term contribution depends on the specific version of the asymptotic term.
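Schematically, the matching of Eqs. (70)-(72) can be organized as in the short sketch below. The three callables stand in for whichever specific TMD, fixed-order, and asymptotic calculations are adopted; nothing here depends on their internal details, and the names are our own placeholders.

```python
# Schematic sketch of Eqs. (70)-(72): the full structure function is the TMD
# term plus a Y-term correction, Y = F_FO - F_ASY, accurate up to O(m^2/Q^2)
# errors point-by-point in qT.  The input callables are placeholders for the
# specific implementations being used.
def y_term(F_FO, F_ASY, qT, **kin):
    return F_FO(qT, **kin) - F_ASY(qT, **kin)

def matched_F(F_TMD, F_FO, F_ASY, qT, **kin):
    return F_TMD(qT, **kin) + y_term(F_FO, F_ASY, qT, **kin)
```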
In conventional treatments, the fixed order term is calculated with collinear functions in the \(\overline{\rm MS}\) scheme. The specific version of the asymptotic structure functions used is the first term in Eq. (71), so that
\[F^{\rm FO}_{\rm ST}= F^{\rm FO,\overline{\rm MS}},\qquad F^{\rm ASY}_{\rm ST}=\lim_{q_{ \rm T}/Q\to 0}F^{\rm FO,\overline{\rm MS}}\,, \tag{73}\]
with "ST" subscripts to indicate "standard." We will call a calculation of the asymptotic term done in the style of Sec. V \(F^{\rm ASY}_{\rm HSO}\) to distinguish it from Eq. (73). Since \(F^{\rm ASY}_{\rm HSO}\) is calculated with cutoff definitions for the collinear pdfs and ffs, this suggests that the cutoff definitions might be preferred as well for calculating \(F^{\rm FO}\). However, switching between the \(\overline{\rm MS}\) and cutoff schemes in \(F^{\rm FO}\) only produces power suppressed and perturbative errors beyond the
working order in \(\alpha_{s}\). Therefore, one may consistently interchange cutoff and \(\overline{\rm MS}\) definitions, and we will use \(F_{\rm ST}^{\rm FO}\) for our calculation of the fixed order structure function. We will see in later sections that the effect of switching between the two is small relative to the overall improvements from using the HSO approach. An interesting question for the future is whether calculations of \(F^{\rm FO}\) can be improved by switching to a cutoff scheme for the collinear functions, but we leave this to future work.
### The TMD term in the conventional treatment
The usual approach to applying TMD factorization to phenomenology has been reviewed in many places, so we will not repeat the details here. Readers are referred to, for example, Refs. [56; 57; 3] and references therein. The standard expression used in calculations follows from making the following replacement in Eqs. (66):
\[\left[f_{j/p},D_{h/j}\right] \to\int\frac{{\rm d}^{2}\mathbf{b}_{\rm T}}{(2\pi)^{2}}\ e^{-i\mathbf{q}_ {\rm T}\cdot\mathbf{b}_{\rm T}}\ \tilde{f}_{j/p}^{\rm OPE}(x,\mathbf{b}_{*};\mu_{b_{*}},\mu_{b_{*}}^{2})\ \tilde{D}_{h/j}^{\rm OPE}(z,\mathbf{b}_{*};\mu_{b_{*}},\mu_{b_{*}}^{2})\] \[\times\exp\left\{2\int_{\mu_{b_{*}}}^{\mu_{Q}}\frac{d\mu^{\prime} }{\mu^{\prime}}\left[\gamma(\alpha_{s}(\mu^{\prime});1)-\ln\frac{Q}{\mu^{ \prime}}\gamma_{K}(\alpha_{s}(\mu^{\prime}))\right]+\ln\frac{Q^{2}}{\mu_{b_{*} }^{2}}\tilde{K}(b_{*};\mu_{b_{*}})\right\}\] \[\times\exp\left\{-g_{j/p}(x,b_{\rm T})-g_{h/j}(z,b_{\rm T})-g_{K }(b_{\rm T})\ln\left(\frac{Q^{2}}{Q_{0}^{2}}\right)\right\}\,. \tag{74}\]
The \(\tilde{f}_{j/p}^{\rm OPE}\) and \(\tilde{D}_{h/j}^{\rm OPE}\) on the first line are the TMD pdfs and ffs in \(b_{\rm T}\)-space, expanded and truncated in an operator product expansion. The \(\gamma\), \(\gamma_{K}\) and \(\tilde{K}\) are the usual evolution kernels. The "\(\mathbf{b}_{*}\)" method has been used to regulate \(\tilde{f}_{j/p}^{\rm OPE}\), \(\tilde{D}_{h/j}^{\rm OPE}\), and \(\tilde{K}\) at large \(b_{\rm T}\). (See reviews of the \(\mathbf{b}_{*}\) method in Sec. IXA of [16] and in Sec. VIII of [58].) The most common choice for a functional form for \(\mathbf{b}_{*}\) is
\[\mathbf{b}_{*}(b_{\rm T})=\frac{\mathbf{b}_{\rm T}}{\sqrt{1+b_{\rm T}^{2}/b_{\rm max}^ {2}}}\,, \tag{75}\]
where \(b_{\rm max}\) is a transverse size scale that demarcates a separation between large and small transverse size regions. In principle, both the functional form of Eq. (75) and the value of \(b_{\rm max}\) are completely arbitrary, but a small \(b_{\rm max}\) justifies the use of the OPE on the first line of Eq. (74); the error term in the approximation in Eq. (74) is suppressed by powers of \(m\,b_{\rm max}\). All of the nonperturbative transverse momentum dependence is contained in the \(b_{\rm T}\)-space functions \(g_{j/p}\), \(g_{h/j}\), and \(g_{K}\), whose definitions in terms of the more fundamental correlation functions are
\[-g_{j/p}(x,b_{\rm T})\equiv\ln\left(\frac{\tilde{f}_{j/p}(x,\mathbf{b}_{\rm T}; \mu_{Q_{0}},Q_{0}^{2})}{\tilde{f}_{j/p}(x,\mathbf{b}_{*};\mu_{Q_{0}},Q_{0}^{2})} \right)\,,\qquad-g_{h/j}(z,b_{\rm T})\equiv\ln\left(\frac{\tilde{D}_{h/j}(z, \mathbf{b}_{\rm T};\mu_{Q_{0}},Q_{0}^{2})}{\tilde{D}_{h/j}(z,\mathbf{b}_{*};\mu_{Q_{0} },Q_{0}^{2})}\right)\,, \tag{76}\]
and
\[g_{K}(b_{\rm T})\equiv\tilde{K}(b_{*};\mu)-\tilde{K}(b_{\rm T};\mu)\,. \tag{77}\]
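For reference, a small numerical sketch of two ingredients of Eq. (74) is given below: the \(b_{*}\) function of Eq. (75) and the reduction of the azimuthally symmetric two-dimensional Fourier transform to a Hankel (Bessel \(J_{0}\)) transform. The \(b_{\rm T}\)-space integrand is left as a user-supplied placeholder, and the Gaussian used in the self-check is only a convenient test case with a known analytic transform, not a fitted parametrization.

```python
# Sketch of the qT-space evaluation of Eq. (74).  For an azimuthally symmetric
# bT-space integrand W(bT) (the full product of OPE factors, evolution
# exponentials and g-functions in Eq. (74), supplied by the user), the 2D
# Fourier transform reduces to a Hankel transform:
#   [f,D](qT) = (1/2pi) * int_0^inf dbT bT J0(qT bT) W(bT).
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

def b_star(bT, bmax):
    """The b*(bT) prescription of Eq. (75)."""
    return bT / np.sqrt(1.0 + bT**2 / bmax**2)

def fD_qT(W_bspace, qT, bT_upper=30.0):
    """Numerically transform a bT-space integrand to qT space."""
    integrand = lambda bT: bT * j0(qT * bT) * W_bspace(bT) / (2.0 * np.pi)
    val, _ = quad(integrand, 0.0, bT_upper, limit=200)
    return val

if __name__ == "__main__":
    # Self-check with a Gaussian test integrand, whose transform is known:
    # W(bT) = exp(-bT^2 M^2/4)  ->  [f,D](qT) = exp(-qT^2/M^2)/(pi M^2).
    M, qT = 0.5, 0.3
    W = lambda bT: np.exp(-bT**2 * M**2 / 4.0)
    print(fD_qT(W, qT), np.exp(-qT**2 / M**2) / (np.pi * M**2))
```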
Conventional methods replace each of the \(g\)-functions, \(g_{j/p}\), \(g_{h/j}\), and \(g_{K}\), by an ansatz, with parameters to be fitted from measurements. The simplest and most common choices (e.g. [59; 60; 61]) are based on simple power laws like
\[g_{j/p}(x,b_{\rm T})=\frac{1}{4}\,M_{F}^{2}\,b_{\rm T}^{2}\,,\qquad g_{h/j}(z,b_ {\rm T})=\frac{1}{4\,z^{2}}M_{D}^{2}\,b_{\rm T}^{2} \tag{78}\]
for the input nonperturbative functions, where \(M_{F}\) and \(M_{D}\) are fit parameters. For the CS kernel, common parametrizations are
\[g_{K}(b_{\rm T})=\frac{1}{2}M_{K}^{2}b_{\rm T}^{2}\qquad{\rm or}\qquad g_{K}(b _{\rm T})=\frac{g_{2}}{2\,M_{K}^{2}}\ln\left(1+M_{K}^{2}b_{\rm T}^{2}\right)\,, \tag{79}\]
where \(M_{K}\) and \(g_{2}\) are fit parameters. The first of these functional forms is common in typical applications, but it conflicts with the expectation that evolution is slow at moderate \(Q\)[62; 63]. As a result, it was suggested in Ref. [56] that \(g_{K}(b_{\rm T})\) should exhibit very nearly constant behavior at large \(b_{\rm T}\), a behavior closely modeled by a logarithmic function. More complex fit parametrization ansatzes for all the g-functions have been introduced more recently (see
for instance Refs. [64; 65]), but the general approach of taking combinations of simple functional forms that reduce to power law behavior at small \(b_{\rm T}\) is similar to the above.
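Written out as code, the conventional ansatzes amount to nothing more than the simple functions of \(b_{\rm T}\) below, with \(M_{F}\), \(M_{D}\), \(M_{K}\), and \(g_{2}\) as fit parameters; this sketch is included only to make the structure of Eqs. (78)-(79) explicit.

```python
# The power-law ansatzes of Eq. (78) and the two common gK choices of Eq. (79).
import numpy as np

def g_pdf(x, bT, M_F):
    """g_{j/p}(x, bT) = (1/4) M_F^2 bT^2, Eq. (78)."""
    return 0.25 * M_F**2 * bT**2

def g_ff(z, bT, M_D):
    """g_{h/j}(z, bT) = M_D^2 bT^2 / (4 z^2), Eq. (78)."""
    return 0.25 * M_D**2 * bT**2 / z**2

def g_K_quadratic(bT, M_K):
    """First form in Eq. (79)."""
    return 0.5 * M_K**2 * bT**2

def g_K_log(bT, M_K, g2):
    """Second form in Eq. (79): quadratic at small bT, slow logarithmic growth at large bT."""
    return g2 / (2.0 * M_K**2) * np.log(1.0 + M_K**2 * bT**2)
```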
Note that, in the \(b_{*}\)-approach, before any truncation approximations are made, the product of TMD correlation functions must satisfy
\[\frac{\rm d}{{\rm d}b_{\rm max}}\left[f_{j/p},D_{h/j}\right]=O\left(mb_{\rm max }\right)\,. \tag{80}\]
That is, dependence on \(b_{\rm max}\) or on the form of \(\mathbf{b}_{*}(b_{\rm T})\) must be a negligible power correction for reasonably small \(b_{\rm max}\).3 In calculations at a specific order in \(\alpha_{s}\), violations of Eq. (80) may enter only through neglected higher orders in \(\alpha_{s}\). A significant violation of Eq. (80) in a TMD parametrization may indicate either that higher orders need to be included, or that \(b_{\rm max}\) has been chosen to be too large. Checking whether the right side of Eq. (80) is indeed negligible is thus a useful diagnostic tool.
Footnote 3: The power-suppressed errors on the right side of Eq. (80) will typically be \(m^{2}b_{\rm max}^{2}\), but the precise power of the suppression is not important for our present discussion.
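A crude way to apply this diagnostic numerically is sketched below: given any implementation of the parametrized convolution as a function of \(b_{\rm max}\), one can estimate the left side of Eq. (80) by a finite difference. The callable `fD_of_bmax` is a hypothetical wrapper around whichever calculation of Eq. (74) is being tested.

```python
# Finite-difference diagnostic for Eq. (80): estimate the sensitivity of a
# parametrized TMD convolution to the (in principle arbitrary) cutoff b_max.
def bmax_sensitivity(fD_of_bmax, bmax, rel_step=0.05):
    """Dimensionless measure  b_max * d[f,D]/db_max / [f,D].

    Values of order one or larger signal a violation of the approximate
    b_max independence required by Eq. (80).
    """
    h = rel_step * bmax
    derivative = (fD_of_bmax(bmax + h) - fD_of_bmax(bmax - h)) / (2.0 * h)
    return bmax * derivative / fD_of_bmax(bmax)
```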
We will label structure functions calculated in the conventional approach by \(F_{\rm ST}^{\rm TMD}\), with "ST" for "standard," and we will use this notation regardless of which specific model is used for the \(g\)-functions. What makes an approach "conventional" in the sense that we mean in this paper is that it imposes no additional constraints on the \(g\)-functions to ensure consistent matching with collinear factorization. Specifically, the ansatzes of traditional approaches do not explicitly enforce the integral connection between collinear and TMD pdfs and ffs in Eq. (2), or guarantee a smooth interpolation to the large-\(k_{\rm T}\) collinear factorization region.
In the following numerical examples, we will use CTEQ6.6 pdfs [66] (central values) and MAPFF1.0 ffs for \(\pi^{+}\) [67] (average over replicas), implemented in LHAPDF6 [68]. We postpone a more detailed analysis that includes the uncertainty associated with the chosen LHAPDF6 sets to a later publication. For the purpose of this paper, we effectively assume "complete knowledge" of the collinear pdfs and ffs in the \(\overline{\rm MS}\) scheme, stressing that our main points, and the logic behind the HSO approach, are not affected by such choices. The left-hand panels of Fig. 2 show the differential SIDIS cross section for \(Q_{0}=4.0\) GeV within the different approximations discussed in Sec. VI.1 and Sec. VI.2, including the \(F_{\rm ST}^{\rm TMD}\) (the TMD approximation), the \(F_{\rm ST}^{\rm FO}\) (\(q_{\rm T}\approx Q\) approximation), and the \(F_{\rm ST}^{\rm ASY}\) (asymptotic term) calculations. We use \(x=0.1\), \(z=0.3\) and \(y=0.5\), which are kinematics accessible to both the COMPASS experiment [8] and the EIC [69]. To emphasize alternately the large-\(q_{\rm T}\) and small-\(q_{\rm T}\) regions, we have plotted the curves on a logarithmic scale in the upper left panel and a linear scale in the lower left panel. We take the \(g\)-functions to be parametrized as in Eq. (78), and the RG scale is \(\mu_{Q_{0}}=Q_{0}\). The curves are the TMD (solid red line), fixed order (dot-dashed black line) and asymptotic (dashed blue line) terms. Despite the small values used for the mass parameters, \(M_{\rm F}=M_{\rm D}/z=0.1\,\)GeV, the asymptotic term is nowhere close to overlapping with either the TMD or the fixed order term in the range of \(q_{\rm T}\) between \(0\) and \(4\,\)GeV. This is a violation of the consistency requirement that, with a sufficiently large input scale \(Q_{0}\), there must be a region \(\Lambda_{\rm QCD}\ll q_{\rm T}\ll Q_{0}\) where the asymptotic term is simultaneously a good approximation of both the TMD and the \(q_{\rm T}\approx Q_{0}\) fixed order cross sections. This is a complication that arises frequently in the conventional methodology, and it is one that we alluded to in Sec. I. Among the reasons for the mismatch is a failure to impose the integral relation in Eq. (2) directly upon the \(g\)-functions in Eq. (78).
One might suspect that the mismatch is a consequence of the input scale \(Q_{0}\) being too small. To test this, we also consider the same computation, using the same nonperturbative mass scales, but now with an unreasonably large input scale of \(Q_{0}=20\,\)GeV. The result is shown in the right-hand panels of Fig. 2. Again, the upper panel is on a logarithmic scale, while the lower panel uses the linear scale to emphasize the region of smaller \(q_{\rm T}\). The agreement between the asymptotic and TMD terms improves, but even here there is a startlingly large mismatch between the three calculations in the region where \(q_{\rm T}\) is small but comparable to \(Q_{0}\). Even for \(Q_{0}\approx 20\) GeV, there is no region of \(q_{\rm T}\) where the three curves overlap simultaneously to a satisfactory degree. This point is made especially clear in the linear scale plots.
Note that this complication is independent of evolution or the question of how many orders of logarithms of \(Q/q_{\rm T}\) should be resummed. If the connection to collinear factorization is to be consistent, there must be a region where \(q_{\rm T}\) is a fixed fraction of \(Q\) and all three calculations merge in the limit as \(Q\to\infty\). Moreover, for any \(Q\) where we expect TMD factorization to be valid, the TMD and asymptotic terms should at least approximately match one another when \(q_{\rm T}\) is comparable to \(Q\). It is a contradiction, then, if this fails at the input scale. Note that the mismatches, both quantitative and qualitative, between the TMD terms and their expected asymptotic behavior are especially visible in the lower panels where the curves are plotted with linear axes.
For generating the plots in Fig. 2, it was necessary to fix the mass scales \(M_{F}\) and \(M_{D}\) in Eq. (78). The observed trends are quite general, however, and to demonstrate this we show the same \(Q_{0}=4.0\) GeV calculation in the left-hand panel of Fig. 3, but now with bands representing ranges of typically-sized nonperturbative mass scales,
\[0.1\ {\rm GeV}\leq M_{F}\leq 0.4\ {\rm GeV}\,, \tag{81}\] \[0.1\ {\rm GeV}\leq M_{D}/z\leq 0.3\ {\rm GeV}\,. \tag{82}\]
The value of \(b_{\rm max}\) for this plot remains fixed at \(1.0\ {\rm GeV}^{-1}\). Even with the freedom to adjust these nonperturbative parameters, it is clear that it is not possible to achieve reasonable agreement between the TMD term and the asymptotic term, even in regions where \(q_{\rm T}\) is comparable to \(Q_{0}\). The TMD bands do touch the asymptotic curve at around \(q_{\rm T}\approx 0.5\) GeV, but the two curves have very different qualitative shapes for all \(M_{D}\) and \(M_{F}\). For larger \(q_{\rm T}\), there is no approximate agreement between the asymptotic and TMD terms, regardless of \(M_{F}\) and \(M_{D}\). Indeed, the TMD band departs from the asymptotic term at around \(q_{\rm T}\approx 1.2\) GeV.
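The bands of this kind can be reproduced by a simple parameter scan of the sort sketched below; `tmd_term` is a hypothetical callable standing for whichever implementation of the TMD term is being used, and the grid density is arbitrary.

```python
# Sketch of a pointwise envelope over the mass-parameter ranges of
# Eqs. (81)-(82).  Returns lower and upper band edges at each qT value.
import numpy as np

def tmd_envelope(tmd_term, qT_values, z,
                 MF_range=(0.1, 0.4), MD_over_z_range=(0.1, 0.3), n=5):
    MF_grid = np.linspace(*MF_range, n)
    MD_grid = z * np.linspace(*MD_over_z_range, n)
    curves = np.array([[tmd_term(qT, MF, MD) for qT in qT_values]
                       for MF in MF_grid for MD in MD_grid])
    return curves.min(axis=0), curves.max(axis=0)
```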
Another way to see the problems with the conventional treatment here is to observe that the approximate \(b_{\rm max}\)-independence of Eq. (80) is very badly violated with typical values of \(b_{\rm max}\), as shown by the right-hand panel in Fig. 3, which displays the TMD term with bands for \(b_{\rm max}\) variations from the very small value of \(0.1\ {\rm GeV}^{-1}\) up to a maximum typical value of \(b_{\rm max}=1.5\ {\rm GeV}^{-1}\) used in phenomenological applications. The bands are computed with fixed mass scales of \(M_{F}=M_{D}/z=0.25\) GeV. The orders-of-magnitude variation badly contradicts the original \(b_{\rm max}\)-independence that exists before the OPE approximations. It implies that the \(M_{F}\) and \(M_{D}\) parameters must be given their own \(b_{\rm max}\)-dependence to (at least approximately) cancel the explicit \(b_{\rm max}\)-dependence seen in the figure. However, the far more modest \(M_{F}\) and \(M_{D}\) dependence seen in the left-hand panel shows that this cannot be made to work with typical model parametrizations of the \(g\)-functions and reasonable nonperturbative values for \(M_{F}\) and \(M_{D}\).
As a consequence of the strong \(b_{\rm max}\) sensitivity, practical phenomenological applications will often effectively promote \(b_{\rm max}\) to the status of an extra nonperturbative parameter, as opposed to treating it as an entirely arbitrary cutoff. That is, attempts to approximately preserve Eq. (80) are effectively abandoned. But the result is that the large transverse momentum behavior becomes sensitive to parameters that should in principle be restricted to describing only the nonperturbative small transverse momentum region. The predictive power that is gained from collinear factorization and the OPE is then compromised. This is a problem that has been well known for some time [18].
The above observations illustrate that nonperturbative transverse momentum dependence in the conventional methodology has an unacceptably large impact on the large transverse momentum region, in a way that violates consistency with collinear factorization.
### In a hadron structure oriented approach
Next, we contrast the conventional approach of the preceding subsection with the HSO steps from Ref. [16] and Secs. (II)-(V) of this paper.
It should be emphasized that the two "approaches" being contrasted here refer only to specific phenomenological implementations and not to the basic theoretical setup. The fundamental TMD factorization theorem and the evolution equations are always the standard ones, and they are never modified. What distinguishes the HSO approach to phenomenological implementations from the conventional one is that the former imposes constraints on the input TMD parametrizations that guarantee consistency with collinear factorization in the appropriate limits. To see what this means more clearly, it may be helpful to recall that it is straightforward (though unnecessary) to use the \(\mathbf{b}_{*}\) method to rewrite the HSO expression in Eq. (15) in terms of the \(g\)-functions defined in Eq. (76), but with the explicit HSO parametrizations for \(\tilde{f}_{j/p}(x,\mathbf{b}_{\rm T};\mu_{Q_{0}},Q_{0}^{2})\) and \(\tilde{D}_{h/j}(z,\mathbf{b}_{\rm T};\mu_{Q_{0}},Q_{0}^{2})\). The final form of the evolved TMD pdfs and ffs is exactly the same. The full set of steps for translating the HSO approach into the conventional one may be found in Sec. IX of [16]. Cast in this way, the HSO approach is identical to the conventional one except that it imposes additional and important consistency conditions directly on the \(g\)-functions. In the treatment in this paper, this amounts to using Eq. (17), Eq. (18) and Eq. (28) (or, more generally, any other set of parametrizations that arise from the steps in Ref. [16]) inside Eqs. (37)-(38) instead of the conventionally unconstrained ansatzes like Eqs. (78)-(79).
We have focused on the kinematics of the \(Q\approx Q_{0}\) region, since the lowest acceptable values of \(Q\) are where one typically expects nonperturbative hadron structure effects to be most pronounced, and thus it is where nonperturbative versions of relations like Eq. (1) and Eq. (2) become especially important.
The steps for calculating the TMD term in the HSO approach were reviewed in Secs. (II)-(IV). If we specialize to the additive structure in Sec. III for the TMD parametrizations, then the HSO approach amounts to simply calculating
Eq. (15) with the parametrizations in Eq. (18) and Eq. (28). That is, we use
\[\left[f_{j/p},D_{h/j}\right] \rightarrow\int\frac{\mathrm{d}^{2}\mathbf{b}_{\mathrm{T}}}{(2\pi)^{2}} \ e^{-i\mathbf{q}_{\mathrm{T}}\cdot\mathbf{b}_{\mathrm{T}}}\ \tilde{f}_{j/p}(x,\mathbf{b}_{\mathrm{T}};\mu_{Q_{0}},\mu_{Q_{0}}^{2})\ \tilde{D}_{h/j}(z,\mathbf{b}_{\mathrm{T}};\mu_{Q_{0}},\mu_{Q_{0}}^{2})\] \[\times\exp\left\{\tilde{K}(b_{\mathrm{T}};\mu_{Q_{0}})\ln\left( \frac{Q^{2}}{Q_{0}^{2}}\right)+\int_{\mu_{Q_{0}}}^{\mu_{Q}}\frac{\mathrm{d}\mu ^{\prime}}{\mu^{\prime}}\biggl{[}2\gamma(\alpha_{s}(\mu^{\prime});1)-\ln\frac{ Q^{2}}{{\mu^{\prime}}^{2}}\gamma_{K}(\alpha_{s}(\mu^{\prime}))\biggr{]}\right\}\,, \tag{83}\]
Figure 2: SIDIS differential cross section (absolute value) in the standard approach, within different approximations for the structure functions: \(F_{\mathrm{ST}}^{\mathrm{TMD}}\) (solid red line), \(F_{\mathrm{ST}}^{\mathrm{ASY}}\) (dashed blue line) and \(F_{\mathrm{ST}}^{\mathrm{FO}}\) (dot-dashed black line). The chosen kinematics roughly correspond to regions accessible by the COMPASS experiment and the EIC. The TMD term is calculated with the quadratic model for the g-functions of Eq. (78), at fixed values for the small-mass parameters \(M_{\mathrm{F}}=M_{\mathrm{D}}/z=0.1\,\mathrm{GeV}\), and we have used the \(b_{*}\) prescription of Eq. (75) with \(b_{\mathrm{max}}=1.0\,\mathrm{GeV}^{-1}\). We consider the cross section at two values of the input scale \(Q_{0}\), and no TMD evolution is performed. Left: The cross section is shown for \(Q_{0}=4.0\,\mathrm{GeV}\). Right: The cross section is shown for \(Q_{0}=20.0\,\mathrm{GeV}\). For visibility, the bottom panels show the same curves as the top, but with a vertical linear scale and a reduced range of \(q_{\mathrm{T}}\). Note that, despite the small values of the mass parameters, the three approximations never overlap in the intermediate region of transverse momentum, \(m\ll q_{\mathrm{T}}\ll Q\).
with Eqs. (66). In the replacement, the \(\tilde{D}_{h/j}\) and \(\tilde{f}_{j/p}\) are now to be understood to be the \(b_{\rm T}\)-space versions of the parametrizations from Eq. (18) and Eq. (28) substituted into Eqs. (37)-(39). Explicit expressions for the input \(b_{\rm T}\)-space TMD functions are listed in Appendix B. We denote the resulting structure functions by \(F_{\rm HSO}^{\rm TMD}\). These are the underlined correlation functions from [16]4, or, if we restrict \(Q\approx Q_{0}\) and use the approximation in Eq. (40), they are just the \(b_{\rm T}\)-space input functions themselves. With \(O\left(\alpha_{s}\right)\) perturbative coefficients, their structure is
Footnote 4: Actually, these symbols refer to a class of models for the TMD pdfs and ffs since at this stage we still need to specify the exact form of the nonperturbative transverse momentum dependence in \(f_{\rm core,i/p}\) and \(D_{\rm core,h/j}\). We will use the same notation for all calculations that use this general approach.
\[F_{\rm HSO}^{\rm TMD}\sim\,\left(|H|_{j}^{2}\right)^{O\left(\alpha_{s}\right)} \,\left[f_{j/p},D_{h/j}\right]\,, \tag{84}\]
where \(\left(|H|_{j}^{2}\right)^{O\left(\alpha_{s}\right)}\) is the hard coefficient in Eq. (11), with kinematic factors and sums over flavors.
For the asymptotic term, we start from \(F_{\rm HSO}^{\rm TMD}\) and use the \(m\ll q_{\rm T}\ll Q\) approximation in Eq. (63) in place of \([f,D]\), so that the asymptotic structure functions are
\[F_{\rm HSO}^{\rm ASY}\sim\,\left(|H|_{j}^{2}\right)^{O\left(\alpha_{s}\right)} \,\left[f_{j/p},D_{h/j}\right]_{\rm ASY}\,. \tag{85}\]
For calculating the \(O\left(\alpha_{s}\right)\) fixed order structure function in \(q_{\rm T}\approx Q_{0}\) collinear factorization (see, for example, Ref. [54]), we use
\[\frac{\left(|H|_{j}^{2}\right)^{O\left(\alpha_{s}\right)}}{e_{j}^{2}}\,F_{\rm ST }^{\rm FO}=F_{\rm ST}^{\rm FO}+O\left(\alpha_{s}(\mu_{Q_{0}})^{2}\right)\,, \tag{86}\]
Figure 3: Variation of the TMD cross section (absolute value), in the standard approach, with respect to the small-mass parameters of Eq. (78) (left), and \(b_{\rm max}\) (right). In both cases, we have chosen the same kinematics as in the left panel of Fig. 2, and we have set \(Q=Q_{0}=4.0\,\)GeV, and no TMD evolution is performed. Left: the red band shows the envelope for the TMD term obtained by varying the model masses \(M_{\rm F}\) and \(M_{\rm D}\). Note the large variation of the band in the region where the asymptotic term (dashed blue line) and the fixed order term (dot-dashed black line) start to overlap, which results from the unconstrained behavior of the TMD term at large \(q_{\rm T}\). At very large values of \(q_{\rm T}\), the TMD and asymptotic terms are not consistent. Right: Envelope showing the variation of the TMD term (blue band) with respect to the value of \(b_{\rm max}\), at fixed values of the model masses. (Note that the edges of the envelope are not necessarily the curves associated with the extrema of the chosen range for \(b_{\rm max}\)). The strong \(b_{\rm max}\) dependence results from the lack of constraints on the models for the g-functions in our example. This dependence persists even in the region \(q_{\rm T}\sim Q\), where the OPE should in principle determine the behaviour of the TMD cross section.
Figure 4: SIDIS differential cross section (absolute value) in the HSO approach, comparing different approximations for the structure functions: \(F_{\rm HSO}^{\rm TMD}\) (solid red line), \(F_{\rm HSO}^{\rm ASY}\) (dashed blue line) and \(F^{\rm FO}\) as in Eq. (86) (dot-dashed black line). For comparison, the same kinematics have been used as in Fig. 2. The TMD term is calculated with the Gaussian models of Eqs. (43)–(50), with appropriate constraints as in Eq. (18) and Eq. (28). These models essentially determine the g-functions, similar to Eq. (78) in the standard approach, but with the correct treatment of the large-\(k_{\rm T}\) behavior and the implementation of integral relations. To allow for a meaningful comparison, we use the same values for the small-mass parameters \(M_{\rm F}=M_{\rm D}/z=0.1\,{\rm GeV}\) as in Fig. 2. The masses appearing in Eq. (18) are set to \(m_{D_{h,j}}=m_{D_{h,g}}=M_{\rm D}\), and those in Eq. (28) to \(m_{f_{i,p}}=m_{f_{g,p}}=M_{\rm F}\). We compute the cross section at the same two values for the input scale \(Q_{0}\) considered in Fig. 2. Left: The cross section for \(Q_{0}=4.0\,{\rm GeV}\). Right: The cross section for \(Q_{0}=20.0\,{\rm GeV}\). Note the improvement in the consistency of the three terms, even at \(Q=Q_{0}=4.0\,{\rm GeV}\) in the left panels, with respect to the standard approach shown in Fig. 2. As larger \(Q_{0}\) are considered (e.g., with the larger scale \(Q_{0}=20\,{\rm GeV}\) above) the three curves begin to converge in the \(m\ll q_{\rm T}\ll Q_{0}\) region.
where \(F_{\rm ST}^{\rm FO}\) are the \(\overline{\rm MS}\) structure functions of Eq. (67). Keeping the overall factor multiplying \(F_{\rm ST}^{\rm FO}\) does not formally change the treatment at the \(O\left(\alpha_{s}\right)\) level, but it improves the agreement with the asymptotic term of Sec. V in the \(m\ll q_{\rm T}\ll Q_{0}\) limit.
We show numerical examples of \(F_{\rm HSO}^{\rm TMD}\), \(F_{\rm HSO}^{\rm ASY}\) and \(F^{\rm FO}\) in Fig. 4, calculated using the Gaussian models of Eq. (43) in Eq. (18) and Eq. (28). The kinematics are the same as in Fig. 2, and the nonperturbative parameters take the values
\[M_{\rm F}=M_{\rm D}/z=0.1\,{\rm GeV}\,, \tag{87}\]
so that our treatment of the nonperturbative contribution is comparable to the conventional treatment in Fig. 2. Aside from the transition to a tail region, the Gaussian model mimics the power-law behavior of \(g\)-functions in Eq. (76) with Eq. (78) for the conventional approach. As in Fig. 2, we show the case of a lower input \(Q_{0}=4.0\) GeV in the left panels of Fig. 4, and a large \(Q_{0}=20.0\) GeV in the right panels. The upper two panels show the plots on a logarithmic scale to magnify the improvements at large transverse momentum. To magnify the effect of the improvement on the small transverse momentum region, we have replotted the same graphs on linear vertical axes and over a smaller \(q_{\rm T}\) range in the lower two panels. The qualitative and quantitative improvements of the HSO over the conventional approach are especially visible on the linear axes. For these calculations we have used the approximation \(\overline{Q}_{0}\to Q_{0}\) in Eqs. (37)-(38) because this allows us to utilize the analytic expressions for the TMD pdf and ff parametrizations. We confirm in Fig. 5, however, that the effect of the evolution factor is negligible at the input scale. This is by design; the evolution factor is only relevant for evolving to \(Q\) well above the input scale.
Comparing Fig. 4 with Fig. 2 confirms that, in terms of maintaining consistency with the collinear factorization region, there is a very substantial improvement with the HSO approach as compared with the conventional approach. For \(Q_{0}=4.0\) GeV, the TMD and asymptotic terms match nearly exactly for all \(q_{\rm T}\gtrsim 0.5\) GeV up to \(Q_{0}\). There is also a region around \(q_{\rm T}\approx 0.5\) GeV where all three calculations smoothly overlap. Notice also that the region of overlap becomes better defined when going from the left panel (low input scale) to the right panel (high input scale)
Figure 5: Comparison of the two versions of the HSO approach discussed in the text, at the two values of the input scale considered, \(Q_{0}=4.0\,{\rm GeV}\) and \(Q_{0}=20.0\,{\rm GeV}\). The red solid lines show the TMD term calculated directly with the input functions of Eq. (18) and Eq. (28), as it was done in our examples in Fig. 2 and Fig. 6. The blue dashed lines show the TMD term in the HSO approach but with renormalization group (RG) improvement (the underlined version of the functions from Ref. [16]) applied at very small \(b_{\rm T}\), implemented in Eqs. (37)–(39), with the transition function \(\overline{Q}_{0}\) of Appendix A. In the HSO approach, for \(Q\approx Q_{0}\), these RG improvements affect only the large \(q_{\rm T}\) region of the cross section. For our examples in this article, even for \(q_{\rm T}/Q_{0}\approx 1.5\), differences are not significant.
of Fig. 4. And, with the larger \(Q_{0}\), the agreement between the TMD and asymptotic terms is nearly exact over the whole visible range of \(q_{\rm T}\). Thus, the HSO plots exhibit the expected trends when choosing larger or smaller values of \(Q_{0}\). Of course, the calculations with \(Q_{0}\) as large as 20 GeV are not physically sensible, but they confirm that the two ways of computing the mid-\(q_{\rm T}\) behavior (with asymptotic and TMD terms) are compatible and consistent in the limit of a fixed \(q_{\rm T}/Q_{0}\) ratio and large \(Q_{0}\).
From Fig. 3, it is clear that in order to correct the large-\(q_{\rm T}\) behavior of the TMD-term in the conventional methodology to recover the asymptotic term, one would need to make further adjustments to the nonperturbative, non-tail part of the parametrization. But it would have to be done in a way that allows nonperturbative transverse momentum parameter dependence to propagate to unacceptably large \(q_{\rm T}\). That could be through both explicit nonperturbative parameters like \(M_{F}\) and \(M_{D}\) and through the residual dependence on \(b_{\rm max}\). In order to reduce the \(b_{\rm max}\) dependence at large \(q_{\rm T}\) to acceptable levels while forcing the TMD and asymptotic terms to converge in Fig. 3, one would have to allow dramatic dependence on nonperturbative parameters that affects the behavior at unacceptably large transverse momentum. To illustrate that the HSO approach addresses this problem, we plot the HSO structure functions, again at the input scale, \(Q=Q_{0}=4.0\,\)GeV, but now with both the Gaussian models of Eq. (43) and the spectator diquark models of Eq. (44), and with the same ranges of values of the nonperturbative mass parameters as were used in the conventional treatment. In the HSO approach, there is no \(b_{*}\) or \(b_{\rm max}\), and the TMD and asymptotic terms converge toward one another automatically. The results are shown in Fig. 6, with red bands showing the effect of adjusting the nonperturbative mass parameters in the range of Eqs. (81)-(82), and with the Gaussian model in the left-hand panel and the spectator diquark model in the right-hand panel. In each case, we also display the HSO asymptotic (dashed blue line) and fixed order terms (dot-dashed black line).5 To see the improvement brought about by the HSO approach, these plots should be compared with the analogous plot in Fig. 3 of the conventional treatment.
Footnote 5: Since \(F_{\rm HSO}^{\rm ASY}\) is calculated with cutoff collinear functions, it also depends on the values of the mass parameters and should in principle also appear as bands in Fig. 6. However, the variations are negligibly small for the ranges of the mass parameters considered here, so for visibility we show only central lines instead.
The small-\(q_{\rm T}\) regions in both of the cases shown in Fig. 6 exhibit the behavior of their respective nonperturbative models. As \(q_{\rm T}\) grows, the red bands around the TMD curves converge around the asymptotic term, until the TMD and asymptotic curves are indistinguishable, independently of the nonperturbative model or the values used for \(M_{D}\) and \(M_{F}\). This illustrates how the HSO approach enforces a smooth transition to a region that is insensitive to the values of the nonperturbative transverse momentum parameters. Even with the spectator model on the right, where the TMD curves come with visible bands close to the zero node, the curves still match the general shape of the asymptotic term down to \(q_{\rm T}\approx 0.5\,\)GeV. The HSO approach ensures this type of behavior.
With the Gaussian model in the left-hand panel of Fig. 6, the bands show that agreement between the TMD term and the asymptotic term in the region of \(q_{\rm T}\approx 0.5\) to \(\approx 1.0\) GeV requires that the mass parameters be kept rather small. For the spectator model, the right-hand plot shows that there is more flexibility to adjust the nonperturbative parameters without spoiling approximate agreement with the asymptotic term at mid \(q_{\rm T}\).
In Fig. 4 and Fig. 6, we also plotted the \(q_{\rm T}\approx Q\) fixed order curves to show their approximate overlap with the asymptotic and TMD terms in a region of mid \(q_{\rm T}\). In these calculations, we used \(\overline{\rm MS}\) pdfs and ffs. As mentioned in the discussion after Eq. (73), it may turn out to be preferable to use the cutoff definitions for the collinear functions to match what is done with the asymptotic term. For the purposes of this paper, however, the difference between the two is small enough to ignore, as can be seen in Fig. 7 where we plot the ratios of the collinear pdfs and ffs defined with the cutoff scheme and the \(\overline{\rm MS}\) scheme. For the ranges of \(x\) and \(z\) that we have considered in this paper, the difference between the schemes is \(\lesssim 10\%\), which is comparable to the spread between the asymptotic and fixed order curves in Fig. 4. It is perhaps interesting that the switch from the \(\overline{\rm MS}\) to the cutoff pdfs tends to move the fixed order curve closer to the asymptotic curve. However, we leave the question of whether switching to all cutoff definitions can improve the treatment to future work.
The above style of analysis can be applied directly to the individual TMD correlation functions instead of the full structure functions, and this may be a preferred way to organize the discussion in contexts where understanding the role of hadron structure is the primary goal. In particular, given a nonperturbative treatment of the small \(k_{\rm T}\) region of a TMD pdf or ff, we may confirm that the TMD function matches its order \(\alpha_{s}^{n}\) tail at \(k_{\rm T}\approx Q_{0}\). An example is shown in Fig. 8 for the Gaussian core model. The bands show the effect of varying the mass parameters as in the left panel of Fig. 6, calculated as in Eq. (18) and Eq. (28). The correlation functions are the TMD pdf of up-quarks in a proton (left panel), and the TMD ff of up-quarks into \(\pi^{+}\) (right panel). (These are exactly the functions used in the cross section of Fig. 6.) The dot-dashed lines are the corresponding perturbative calculations in Eq. (41) and Eq. (42). These are the "asymptotic terms," analogous to the dashed curves in Fig. 4, but corresponding to the separate TMD correlation functions. The plots show that, regardless of the nonperturbative treatment of small
\(k_{\rm T}\), the TMD correlation functions treated in this way are always consistent with their \(k_{\rm T}\approx Q_{0}\) behavior, found in collinear factorization, starting at around \(k_{\rm T}=1.0\,\mathrm{GeV}\). The analogous plots for other flavors exhibit similar trends.
## VII Conclusion
Let us conclude by summarizing the primary results of the last section: We have shown how to implement TMD factorization to calculate unpolarized SIDIS cross sections at an input scale \(Q_{0}\) in a way that centers the role of nonperturbative calculations of hadron structure, and we have shown how this leads to a dramatic improvement in the consistency between TMD and collinear factorization, particularly near the input scale \(Q_{0}\). Our approach, which we have called a "hadron structure oriented" approach in this paper, and which is based upon the setup in [16], imposes additional constraints beyond what is standard in the more conventional style of implementing TMD factorization, reviewed above in Sec. VI.3. These extra constraints are designed especially to preserve a TMD parton model interpretation (in the sense of preserving Eq. (1)) for small transverse momentum behavior while ensuring a consistent transition to collinear factorization at \(q_{\rm T}\approx Q_{0}\) and \(Q_{0}\gg\Lambda_{\rm QCD}\). We have emphasized throughout that it is straightforward to swap the parametrization of the nonperturbative core of a TMD pdf or ff in the HSO approach, so that any preferred model or nonperturbative technique for describing the small transverse momentum region may easily be incorporated into future implementations. We highlighted this modular feature of the HSO approach by exchanging a Gaussian model for a spectator diquark model in Fig. 6; replacing one description of the nonperturbative core by another leaves the \(q_{\rm T}\approx Q_{0}\) region of the TMD term unaffected and consistent with large-\(q_{\rm T}\) collinear factorization.
Of course, there are still other open questions with regard to the domain of applicability of TMD factorization to processes like SIDIS. For example, a definitive lowest value for \(Q\) (for each \(x\) and \(z\)) in SIDIS below which TMD factorization techniques absolutely cease to be useful remains to be determined. It is likely that a sharp transition does not exist. A related question is that of how high \(q_{\rm T}\) may become before the TMD term alone is no longer sufficient, and the description must transition into a \(q_{\rm T}\approx Q\) region where TMD factorization fails and one must rely entirely on fixed order collinear factorization. (This is the issue of the "\(Y\)-term" alluded to in the introduction.)
Figure 6: Variation of the TMD cross section (absolute value), in the HSO approach, with respect to small-mass parameters (red bands). The same kinematics as in the left panel of Fig. 3 have been used, so that \(Q=Q_{0}=4.0\,\mathrm{GeV}\) and no TMD evolution is performed. For visibility, we display only the central lines of the corresponding cross sections with the \(F_{\rm HSO}^{\rm ASY}\) (dashed blue line) and \(F^{\rm FO}\) as in Eq. (86) (dot-dashed black line) approximations, since their variation with the small masses is very mild. Left: calculation with the Gaussian ansatz of Eq. (43), obtained by varying the model masses \(M_{\rm F}\) and \(M_{\rm D}\); this is the constrained version of the quadratic models for the g-functions of Eq. (78), in the standard approach. Right: implementation of the spectator model Eqs. (44)–(45) in the HSO approach. In each case, the HSO approach ensures the consistency of the initial models for TMDs and collinear factorization calculations. Note that our prescription can be readily applied to any other model.
Below some numerical value of \(Q\), it is no longer meaningful to separate a cross section into distinct large (\(q_{\rm T}\approx Q\)) and small (\(q_{\rm T}\approx\Lambda_{\rm QCD}\)) transverse momentum regions. These should probably be viewed as open empirical questions, to be confronted by future experimental tests. But posing them in a clear way requires unambiguous and internally consistent steps like those we have described here and in [16] with the HSO approach.
A separate phenomenological issue is that one generally finds tension between data for large transverse momentum in processes like SIDIS and Drell-Yan scattering and calculations performed with existing collinear pdf and ff fits [53; 54; 70]. This suggests that it will be important for future phenomenological efforts to fit TMD and collinear functions simultaneously in a full TMD factorization context. Of course, for this to be meaningful the nonperturbative parts need to be combined with collinear factorization in a consistent procedure, and this is what the HSO approach is meant to provide.
Extending the treatment in this paper of SIDIS to other processes like Drell-Yan scattering is straightforward. Moreover, order \(\alpha_{s}^{2}\) and even \(\alpha_{s}^{3}\) versions of the parametrizations are obtainable from straightforward, albeit somewhat cumbersome, translations of existing results. It will ultimately be necessary as well to formulate the spin and azimuthal dependent observables in TMD factorization in a manner analogous to what we have done here for the unpolarized case. There, interesting subtleties arise from matches and mismatches between small and large transverse regions of the TMD pdfs and ffs [71; 72; 73]. In addition, there exist other QCD formalisms that invoke the notion of a TMD or unintegrated parton density and find complications with preserving relationships like Eq. (1), see for example Refs. [74; 75] and the discussion in Refs. [76; 77; 78]. We hope that our work might provide some input in resolving these problems. Finally, it bears mentioning that the HSO approach that we advocate here is entirely compatible with other frameworks for setting up TMD factorization and/or transverse momentum resummation methods, including soft-collinear effective theory based approaches [79; 80; 81; 82; 83; 84].
For our next steps, we plan to perform explicit phenomenological extractions within the HSO approach discussed here. It has the advantage of placing us in a position to systematically analyze the contributions from any nonperturbative models (e.g., the spectator model) for the small transverse momentum region separately from the large
Figure 7: Ratio of the cutoff definitions of both collinear pdfs (dashed) and ffs (dot-dashed) and their \(\overline{\rm MS}\) counterparts. The results are shown for two values of the hard scale \(Q\in\{4,20\}\) GeV. Only the flavors that contribute most to the cross section under consideration have been shown to facilitate the readability of the plot. The cutoff has been chosen to match the hard scale, i.e. \(k_{c}=Q\) with the choice of the Gaussian model for TMDs for the “core” parametrization of both pdfs and ffs. Notice that any other choices for the “core” model or the nonperturbative mass parameters would only affect the result with power suppressed contributions which, if neglected, make the difference between the two schemes perturbatively calculable as in the last term in Eq. (22) (ff) and Eq. (32) (pdf). The dashed black lines correspond to the choices of \(x=0.1\) and \(z=0.3\) made in our computations throughout the paper. Replacing the two definitions thus accounts for a difference of order \(\sim 3\%\) (\(Q=4\) GeV) to \(\sim 2\%\) (\(Q=20\) GeV) for the pdf case and \(\sim 10\%\) (\(Q=4\) GeV) to \(\sim 5\%\) (\(Q=20\) GeV) for the ff case.
transverse momentum perturbative tails. Such analyses can then be related directly to specific regions of observable transverse momentum in experimental data, in the spirit of, for example, the discussion of Fig. 17 in [8]. Ultimately, one hopes to infer, from the extracted correlation functions, information about the underlying nonperturbative physics. To see an example of where this will be useful, consider Ref. [85], which describes a treatment of intrinsic transverse momentum in a field theoretic chiral constituent quark model where the chiral symmetry breaking scale is large relative to the constituent quark mass. The HSO approach discussed in this paper is ideally suited for connecting this and similar descriptions to SIDIS data in the context of a complete TMD factorization treatment. Notice in particular that the additive model we constructed in Secs. (III)-(IV) aligns naturally with the Gaussian-plus-tail type of description in Ref. [85]. More generally, adopting an HSO approach enables us to begin to ask more specific and detailed phenomenological questions about the adequacy of specific theories of nonperturbative small transverse momentum behavior.
The elements necessary for these and other studies designed to identify separate perturbative and nonperturbative structures are in place now, and extensions to higher orders in \(\alpha_{s}\) are straightforward, given existing results in the literature.
###### Acknowledgements.
We thank Fatma Aslan, Mariaelena Boglione, Nobuo Sato, and Andrea Simonelli for useful conversations. J.O. Gonzalez-Hernandez acknowledges funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 824093. T. Rogers and T. Rainaldi were supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under Award Number DE-SC0018106. This work was also supported by the DOE Contract No. DE-AC05-06OR23177, under which Jefferson Science Associates, LLC operates Jefferson Lab.
Figure 8: Comparison of the TMD functions in the HSO approach and their large-\(k_{\rm T}\) behaviour predicted by pQCD. The bands show the variation of the TMD pdf of Eq. (28) (left) and the TMD ff of Eq. (18) (right), with respect to mass parameters, using the Gaussian ansatzes of Eq. (43). The ranges of masses indicated in the labels are the same as those used to obtain the red band in the left panel of Fig. 6. The dot-dashed lines show the pQCD calculation of Eq. (42) for the TMD pdf (left), and that of Eq. (41) for the TMD ff (right). The correct behaviour for the models has been imposed from the outset in Eq. (41) and Eq. (42), through the \(A\) and \(B\) coefficients. This is indeed a necessary condition for the agreement of the TMD cross section and the asymptotic term in the left panel of Fig. 6.
## Appendix A Scale transformation function
The scale transition function in Eq. (39) is in principle entirely arbitrary (see the discussion in Sec. V of [16]), provided it has the general feature that it transitions from \(\sim 1/b_{\rm T}\) behavior to \(Q_{0}\) at a \(b_{\rm T}\) slightly below \(1/Q_{0}\). This ensures, by construction, that we avoid modifying the input scale treatment of Eq. (15) in the \(Q\approx Q_{0}\) region. In this paper, namely in Fig. 5, we have adopted the same choice as in Appendix C of [16],
\[\overline{Q}_{0}(b_{\rm T},a)=Q_{0}\left[1-\left(1-\frac{C_{1}}{Q_{0}b_{\rm T }}\right)e^{-b_{\rm T}^{2}a^{2}}\right]\,. \tag{101}\]
The constant \(C_{1}\) has the usual numerical value of \(C_{1}=2e^{-\gamma_{E}}\approx 1.123\). The specific value of \(a\) used in Fig. 5 is \(a=Q_{0}\).
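For completeness, the transition function above is trivial to implement; the sketch below uses the stated value \(C_{1}=2e^{-\gamma_{E}}\) and defaults to the choice \(a=Q_{0}\) used in Fig. 5.

```python
# The scale-transition function Q0_bar(bT, a) defined above, with
# C1 = 2*exp(-gamma_E) and the default choice a = Q0 used in Fig. 5.
import numpy as np

C1 = 2.0 * np.exp(-np.euler_gamma)  # approximately 1.123

def Q0_bar(bT, Q0, a=None):
    if a is None:
        a = Q0
    # ~ C1/bT behavior at small bT, smoothly approaching Q0 at large bT
    return Q0 * (1.0 - (1.0 - C1 / (Q0 * bT)) * np.exp(-bT**2 * a**2))
```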
## Appendix B TMD parametrization in \(b_{\rm T}\) space at the input scale
Here we list the \(b_{\rm T}\)-space versions of Eq. (18) and Eq. (28)
\[z^{2}\tilde{D}_{\rm{input},h/j}(z,\mathbf{b}_{\rm T};\mu_{Q_{0}},Q_{0 }^{2}) =K_{0}\left(b_{\rm T}m_{D_{h,j}}\right)\left[A_{h/j}^{D}(z;\mu_{Q _{0}})+B_{h/j}^{D}(z;\mu_{Q_{0}})\ln\left(\frac{b_{\rm T}Q_{0}^{2}e^{\gamma_{E }}}{2m_{D_{h,j}}}\right)\right]\] \[+K_{0}\left(b_{\rm T}m_{D_{h,j}}\right)A_{h/j}^{D,g}(z;\mu_{Q_{0}})\] \[+C_{h/j}^{D}\,z^{2}\tilde{D}_{\rm{core},h/j}(z,\mathbf{b}_{\rm T};Q_{0 }^{2})\,, \tag{102}\]
\[\tilde{f}_{\rm{input},i/p}(x,\mathbf{b}_{\rm T};\mu_{Q_{0}},Q_{0}^{2}) =K_{0}\left(b_{\rm T}m_{f_{i,p}}\right)\left[A_{i/p}^{f}(x;\mu_{Q _{0}})+B_{i/p}^{f}(x;\mu_{Q_{0}})\ln\left(\frac{b_{\rm T}Q_{0}^{2}e^{\gamma_{E }}}{2m_{f_{i,p}}}\right)\right]\] \[+K_{0}\left(b_{\rm T}m_{f_{g},r}\right)A_{g/p}^{f}(x;\mu_{Q_{0}})\] \[+C_{i/p}^{f}\,\tilde{f}_{\rm{core},i/p}(x,\mathbf{b}_{\rm T};Q_{0}^{2 })\,, \tag{103}\]
where
\[\tilde{f}_{\rm{core},i/p}(x,\mathbf{b}_{\rm T};Q_{0}^{2}) =\int{\rm d}^{2}\,\mathbf{k}_{\rm T}e^{-i\mathbf{k}_{\rm T}\cdot\mathbf{b}_{ \rm T}}f_{\rm{core},i/p}(x,\mathbf{k}_{\rm T};Q_{0}^{2})\,, \tag{104}\] \[\tilde{D}_{\rm{core},h/j}(x,\mathbf{b}_{\rm T};Q_{0}^{2}) =\int{\rm d}^{2}\,\mathbf{k}_{\rm T}e^{i\mathbf{k}_{\rm T}\cdot\mathbf{b}_{\rm T }}D_{\rm{core},h/j}(x,z\mathbf{k}_{\rm T};Q_{0}^{2})\,, \tag{105}\]
which for the Gaussian and spectator model that we use read
\[\tilde{f}_{\rm core,i/p}^{\rm Gauss}(x,\mathbf{b}_{\rm T};Q_{0}^{2}) =e^{-\frac{b_{\rm T}^{2}M_{\rm F}^{2}}{4}}\,, \tag{106}\] \[z^{2}\tilde{D}_{\rm core,h/j}^{\rm Gauss}(z,\mathbf{b}_{\rm T};Q_{0}^{2}) =e^{-\frac{b_{\rm T}^{2}M_{\rm D}^{2}}{4z^{2}}}\,, \tag{107}\] \[\tilde{f}_{\rm core,i/p}^{\rm Spect}(x,\mathbf{b}_{\rm T};Q_{0}^{2}) =\frac{M_{\rm 0F}^{2}\left(b_{\rm T}M_{\rm 0F}\right)^{2}}{4\left(2M_{\rm F}^{2}+M_{\rm 0F}^{2}\right)}\left(6K_{2}\left(b_{\rm T}M_{\rm 0F}\right)+\frac{\left(M_{\rm F}^{2}-M_{\rm 0F}^{2}\right)\left(b_{\rm T}M_{\rm 0F}\right)}{M_{\rm 0F}^{2}}K_{3}\left(b_{\rm T}M_{\rm 0F}\right)\right)\,, \tag{108}\] \[z^{2}\tilde{D}_{\rm core,h/j}^{\rm Spect}(z,\mathbf{b}_{\rm T};Q_{0}^{2}) =\frac{M_{\rm 0D}^{2}\left(b_{\rm T}M_{\rm 0D}\right)}{2z^{2}\left(M_{\rm D}^{2}+M_{\rm 0D}^{2}\right)}\left(4zK_{1}\left(\frac{b_{\rm T}M_{\rm 0D}}{z}\right)+\frac{\left(M_{\rm D}^{2}-M_{\rm 0D}^{2}\right)\left(b_{\rm T}M_{\rm 0D}\right)}{M_{\rm 0D}^{2}}K_{2}\left(\frac{b_{\rm T}M_{\rm 0D}}{z}\right)\right)\,. \tag{109}\]
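As a consistency check on the Gaussian "core" expressions above, the sketch below assumes that the corresponding \(k_{\rm T}\)-space core is a normalized Gaussian, \(f_{\rm core}\sim e^{-k_{\rm T}^{2}/M_{F}^{2}}/(\pi M_{F}^{2})\) (and similarly for the ff with width \(M_{D}/z\)), and verifies numerically that its two-dimensional Fourier transform, in the convention of the transforms listed above, reproduces the quoted \(b_{\rm T}\)-space form. The specific normalization is our assumption for illustration, not a statement about the fitted models.

```python
# Numerical cross-check of the Gaussian core parametrizations in bT space.
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

def f_core_gauss_bspace(bT, M_F):
    # Gaussian core pdf in bT space, as quoted above
    return np.exp(-bT**2 * M_F**2 / 4.0)

def f_core_gauss_kspace(kT, M_F):
    # assumed kT-space core: a unit-normalized 2D Gaussian of width M_F
    return np.exp(-kT**2 / M_F**2) / (np.pi * M_F**2)

def to_bspace(f_kspace, bT, kmax=50.0):
    # int d^2kT exp(-i kT.bT) f(kT) = 2*pi * int dk k J0(k bT) f(k)
    integrand = lambda k: 2.0 * np.pi * k * j0(k * bT) * f_kspace(k)
    val, _ = quad(integrand, 0.0, kmax, limit=200)
    return val

if __name__ == "__main__":
    M_F, bT = 0.3, 2.0
    print(to_bspace(lambda k: f_core_gauss_kspace(k, M_F), bT),
          f_core_gauss_bspace(bT, M_F))
```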
# A new five-dimensional vacuum-defect wormhole space-time

Faizuddin Ahmed (arXiv:2308.11938, http://arxiv.org/abs/2308.11938v1)
###### Abstract
We introduce a novel extension of the Klinkhamer vacuum-defect model by incorporating a fifth spatial coordinate, resulting in a comprehensive five-dimensional wormhole space-time. This extension preserves its status as a vacuum solution to the field equations in five dimensions. We delve into the behavior of the geodesic equations in the proximity of this wormhole, shedding light on its intriguing properties.
**keywords:** Higher-dimensional gravity; Modified theories of gravity; exact solutions
**PACS numbers:** 04.50.-h; 04.50.Kd; 04.20.Jb
## 1 Introduction
Einstein's four-dimensional theory of general relativity was extended to five dimensions by Kaluza and Klein [1; 2] in an effort to unify gravitation and electromagnetism. This innovative concept proposed that electromagnetism could be incorporated by introducing an additional, compactified dimension within the framework of four-dimensional space-time. In this model, the formerly spatial fifth dimension became an integral part of the unified theory.
The five-dimensional model employs a metric tensor denoted as \(g_{AB}\), with a corresponding line element given by \(dS^{2}=g_{AB}\,dx^{A}\,dx^{B}\), where the indices \(A,B\) run through the values \(0,1,2,3,4\) to represent time, space, and the extra dimension. This framework encompasses a four-dimensional manifold characterized by a line element \(ds^{2}=g_{\mu\nu}\,dx^{\mu}\,dx^{\nu}\), where the indices \(\mu,\nu\) take on the values \(0,1,2,3\) to denote the four-dimensional space-time. Within this context, the null-geodesics in the five-dimensional space-time, given by \(dS^{2}=0\), describe the motion of massive objects in the four-dimensional space-time, where \(ds^{2}<0\) [3; 4; 5]. This connection between null-geodesics and massive object motion necessitates the introduction of a relation for the fifth coordinate, such as \(x^{4}=\chi=\chi(s)\), involving the proper time in the four-dimensional space-time, to account for the embedding of the four-dimensional manifold into the five-dimensional one. When the four-dimensional component \(g_{\mu\nu}\) within the five-dimensional space-time is explicitly independent of the fifth dimension, specifically \(g_{\mu\nu}=g_{\mu\nu}(x^{\alpha})\) and \(\partial g_{\mu\nu}/\partial\chi=0\), the fifth coordinate assumes the role of particle mass, and this formulation adheres to the weak equivalence principle [5; 6]. This also implies that the fifth force is absent, leaving the accelerations exactly as they are in the four-dimensional theory. Null geodesics in five-dimensional manifolds have been studied in [3].
The vacuum Einstein field equations for space-time plus an extra dimension are given in terms of the Ricci tensor by \(R_{AB}=0\) (where \(A,B=0,1,2,3,4\)). This is the simplest type of higher-dimensional field equations in general relativity. This condition is relevant to the space-time-matter (STM) theory, but can be applied to some other scenarios. There are a number of known solutions that embed four-dimensional manifolds with or without cosmological constant and/or spherically symmetric character [7], whereby the extended version is known to agree with observation [7; 8; 9]. A few examples are static wormhole solutions within the framework of five-dimensional Kaluza-Klein gravity in the presence of a massless ghost four-dimensional scalar field [10], five-dimensional Einstein equations with four-dimensional de Sitter-like expansion [11], five-dimensional Godel-type space-time [12; 13], FRW cosmological model [14], cosmological implications of nonseparable five-dimensional vacuum field equations [15], spherically symmetric dyonic wormhole-like space-time in five-dimensional Kaluza-Klein theory [16], axisymmetric regular multiwormhole [17] and stationary solutions [18; 19] in five-dimensional general relativity theory, wormholes in \((4+1)\) gravity [20], a regular vacuum solution in 5D gravity [21], classical and quantized aspects of dynamics in five-dimensional relativity [22], and five-dimensional black holes [23].
In searching for a new five-dimensional solution to vacuum field equations, we must keep in mind a known vacuum solution in four-dimensional theory. Our investigation is based on a four-dimensional vacuum-defect wormhole metric presented in [24] given by the following line-element (see also related work [25])
\[ds^{2}|_{\mbox{vacuum-defect}}^{4D}=-dt^{2}+\left(1+\frac{b^{2}}{\xi^{2}} \right)^{-1}d\xi^{2}+(\xi^{2}+b^{2})\,(d\theta^{2}+\sin^{2}\theta\,d\phi^{2}), \tag{1}\]
where \(-\infty<(t,\xi)<+\infty\) and other coordinates are in the usual ranges. This four-dimensional wormhole metric is not only Ricci flat, \(R_{\mu\nu}=0\) but also the Riemann flat, \(R_{\mu\nu\alpha\beta}=0\).
Our primary objective in this study is to construct a novel five-dimensional vacuum-defect wormhole model, which serves as a natural extension or generalization of the existing vacuum-defect wormhole (1). Particular interest given around a solution which are not only the Ricci flat, \(R_{AB}\) but also Riemann flat, \(R_{ABCD}=0\) in five-dimensions. Notably, we focus on maintaining the vacuum nature of the five-dimensional solution by excluding any incorporation of a mass parameter, \(M\), in contrast to the approach taken in the work [26], where a higher-dimensional extension of the vacuum-defect wormhole was achieved by introducing a mass parameter \(M\) linked to the standard \(4D\) Schwarzschild metric.
## 2 Analysis of five-dimensional vacuum-defect wormhole
As stated above, we aim to extend the four-dimensional Klinkhamer-vacuum defect wormhole (1) to the higher-dimensional theory, specifically to a five-dimensional manifold. Therefore, the line-element describing this five-dimensional wormhole metric in the chart \((t,\xi,\theta,\phi,\chi)\) is given by \((c=1=G=\hbar)\)
\[ds^{2}|_{\text{vacuum-defect}}^{SD}=ds^{2}|_{\text{vacuum-defect}}^{4D}+F( \xi,\theta)\,d\chi^{2}, \tag{2}\]
where the four-dimensional part \(ds^{2}|_{\text{vacuum-defect}}^{4D}\) is given by (1), \(\chi\) is the fifth (extra) coordinate, which is spatial with range \(\chi\in(0,\infty)\), and the function \(F(\xi,\theta)=D(\xi)\,H(\theta)\), where \(D(\xi)\) and \(H(\theta)\) are unknown functions.
For the five-dimensional metric (2), the diagonal components of the Einstein tensor satisfy \(G_{AA}\neq 0\), and there is an additional non-zero off-diagonal component given by
\[G_{\theta\xi}=G_{\xi\theta}=\frac{H^{\prime}}{4H}\,\Big{(}\frac{2\,\xi}{\xi^{ 2}+b^{2}}-\frac{D^{\prime}}{D}\Big{)}, \tag{3}\]
where \(H^{\prime}\) and \(D^{\prime}\) denote the derivatives with respect to \(\theta\) and \(\xi\), respectively.
Solving the vacuum field equations and imposing first \(G_{\theta\xi}=0\), from equation (3) we obtain
\[D(\xi)=c_{1}\,(\xi^{2}+b^{2}), \tag{4}\]
where \(c_{1}>0\) is an arbitrary constant, chosen equal to unity here for simplicity.
With this function \(D(\xi)\), the five-dimensional line-element (2) now becomes
\[ds^{2}=-dt^{2}+\Big{(}1+\frac{b^{2}}{\xi^{2}}\Big{)}^{-1}\,d\xi^{2}+(\xi^{2}+b ^{2})\,\Big{(}d\theta^{2}+\sin^{2}\theta\,d\phi^{2}+H(\theta)\,d\chi^{2}\Big{)}, \tag{5}\]
Still, the space-time (5) is a non-vacuum solution of the field equations. The non-zero components of the Einstein tensor \(G_{AB}\) for the metric (5) are
\[G_{tt}=R/2\quad,\quad G_{\xi\xi}=-\frac{\xi^{2}\,R}{2}\quad,\quad G_{\theta \theta}=1+\frac{H^{\prime}}{2\,H}\,\cot\theta\quad,\quad G_{\phi\phi}=\frac{ \sin^{2}\theta}{4\,H}\,\Big{(}-H^{\prime 2}+4\,H^{2}+2\,HH^{\prime\prime}\Big{)}. \tag{6}\]
where \(R\) is the Ricci scalar given by
\[R=\frac{-8\,H^{2}+H^{\prime 2}-2\,H\,(H^{\prime}\,\cot\theta+H^{\prime\prime}) }{2\,(\xi^{2}+b^{2})\,H^{2}}. \tag{7}\]
For the vacuum field equations, the Einstein tensor \(G_{AB}=0\) and the Ricci scalar \(R=0\). From equations (6) and (7), we obtain
\[1+\frac{H^{\prime}}{2\,H}\,\cot\theta=0\quad,\quad H^{\prime 2}=4\,H^{2}+2\, HH^{\prime\prime}\quad,\quad H^{\prime 2}=8\,H^{2}+2\,H\,(H^{\prime}\,\cot\theta+H^{ \prime\prime}). \tag{8}\]
Simplification of the above equations yields the function \(H(\theta)\) in the following form
\[H(\theta)=c_{2}\,\cos^{2}\theta, \tag{9}\]
where \(c_{2}>0\) is a constant, chosen equal to unity here for simplicity.
Therefore, the final form of the five-dimensional vacuum wormhole space-time (5), using the function (9), is the following
\[ds^{2}|_{\text{vacuum-defect}}^{5D} = ds^{2}|_{\text{vacuum-defect}}^{4D}+(\xi^{2}+b^{2})\,\cos^{2} \theta\,d\chi^{2} \tag{10}\] \[= -dt^{2}+\Big{(}1+\frac{b^{2}}{\xi^{2}}\Big{)}^{-1}\,d\xi^{2}+(\xi^ {2}+b^{2})\,\Big{(}d\theta^{2}+\sin^{2}\theta\,d\phi^{2}+\cos^{2}\theta\,d \chi^{2}\Big{)}.\]
We see that the four-dimensional part of the metric tensor \(g_{AB}\) does not depend on the extra coordinate \(\chi\), and hence \(\partial g_{\mu\nu}/\partial\chi=0\), where \(g_{\mu\nu}\) is the metric tensor of the four-dimensional manifold given by (1). Thus, the metric \(C_{5}\) of (10) is in fact in pure-canonical form. One can calculate the Kretschmann scalar curvature for this five-dimensional vacuum-defect wormhole space-time (10) and it will be zero, \(\mathcal{K}=R^{ABCD}\,R_{ABCD}=0\). In addition, this metric is not only Ricci-flat, \(R_{AB}=0\), but also Riemann-flat, \(R_{ABCD}=0\). Hence, there is a set of coordinates in which the metric \(C_{5}\) of (10) becomes the Minkowski space \(M_{5}\) in five dimensions:
\[ds^{2}|_{\mbox{Minkowski}}^{5D}=-dT^{2}+dX^{2}+dY^{2}+dZ^{2}+dW^{2} \tag{11}\]
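The flatness claims above are easy to verify mechanically. The following Python/SymPy sketch (ours, not part of the original paper; the coordinate and symbol names are our own choices) computes the Christoffel symbols and the Riemann tensor of the metric (10) directly from its components and confirms that every component vanishes.

```python
# Illustrative symbolic check that metric (10) is Riemann-flat (not from the paper).
import sympy as sp

t, xi, th, ph, chi, b = sp.symbols('t xi theta phi chi b', positive=True)
x = [t, xi, th, ph, chi]
g = sp.diag(-1,
            xi**2 / (xi**2 + b**2),
            xi**2 + b**2,
            (xi**2 + b**2) * sp.sin(th)**2,
            (xi**2 + b**2) * sp.cos(th)**2)
ginv = g.inv()
n = 5

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gamma = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, c], x[bb]) + sp.diff(g[d, bb], x[c])
                                         - sp.diff(g[bb, c], x[d])) for d in range(n)) / 2)
           for c in range(n)] for bb in range(n)] for a in range(n)]

# Riemann tensor R^a_{bcd}
def riem(a, bb, c, d):
    expr = sp.diff(Gamma[a][bb][d], x[c]) - sp.diff(Gamma[a][bb][c], x[d])
    expr += sum(Gamma[a][c][e] * Gamma[e][bb][d] - Gamma[a][d][e] * Gamma[e][bb][c]
                for e in range(n))
    return sp.simplify(expr)

nonzero = [(a, bb, c, d) for a in range(n) for bb in range(n)
           for c in range(n) for d in range(n) if riem(a, bb, c, d) != 0]
print("non-vanishing Riemann components:", nonzero)   # expected: []
```

Contracting the same components gives \(R_{AB}=0\) as well, consistent with the vacuum character of (10).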
The precise transformations between \((T,X,Y,Z,W)\) and \((t,\xi,\theta,\phi,\chi)\) are complicated. However, in four-dimensional cases such as for the metric (1), we have the following transformation
\[X=\sqrt{\xi^{2}+b^{2}}\,\sin\theta\,\cos\phi,\quad Y=\sqrt{\xi^{2}+b^{2}}\, \sin\theta\,\sin\phi,\quad Z=\sqrt{\xi^{2}+b^{2}}\,\cos\theta \tag{12}\]
that gives us the Minkowski space \(M_{4}\) as
\[ds^{2}|_{\mbox{Minkowski}}^{4D}=-dT^{2}+dX^{2}+dY^{2}+dZ^{2}. \tag{13}\]
We see that this new five-dimensional wormhole space-time admits three constants of motion associated with three Killing vectors, denoted \(X_{t}=\partial_{t}\), \(X_{\phi}=\partial_{\phi}\), and \(X_{\chi}=\partial_{\chi}\). The norm of the last Killing vector is given by
\[X_{\chi}^{2}=|\partial_{\chi}\bullet\partial_{\chi}|=g_{\chi\chi}=(\xi^{2}+b^{ 2})\,\cos^{2}\theta>0 \tag{14}\]
is spacelike and finite at \(\xi=0\) as well as on the axis of rotation \(\theta=0,\pi\). Furthermore, we also see that on a \(\chi=const\) hypersurface, the wormhole space-time (10) reduces to the vacuum-defect metric (1). That means the four-dimensional Klinkhamer-defect wormhole is embedded in a five-dimensional space \(M_{5}\) with the metric given by (10).
Now, we study the geodesic motion of test particles around this new five-dimensional vacuum wormhole space-time (10). The geodesic equations are given by
\[\ddot{t} =0, \tag{15}\] \[\ddot{\xi} =\frac{b^{2}}{\xi\,(b^{2}+\xi^{2})}\,\dot{\xi}^{2}+\frac{\xi^{2} +b^{2}}{\xi}\,\dot{\theta}^{2}+\frac{(\xi^{2}+b^{2})\,\sin^{2}\theta}{\xi}\, \dot{\phi}^{2}+\frac{(\xi^{2}+b^{2})\,\cos^{2}\theta}{\xi}\,\dot{\chi}^{2},\] (16) \[\ddot{\theta} =\frac{2\,\xi}{\xi^{2}+b^{2}}\,\dot{\xi}\,\dot{\theta}+\frac{1}{ 2}\,\sin 2\theta\,(\dot{\phi}^{2}-\dot{\chi}^{2}),\] (17) \[\ddot{\phi} =-\,\frac{2\,\xi}{\xi^{2}+b^{2}}\,\dot{\xi}\,\dot{\phi}-2\,\cot \theta\,\dot{\theta}\,\dot{\phi},\] (18) \[\ddot{\chi} =2\,\Big{(}-\frac{\xi}{\xi^{2}+b^{2}}\,\dot{\xi}+\tan\theta\, \dot{\theta}\Big{)}\,\dot{\chi}. \tag{19}\]
where a dot represents the derivative with respect to the affine parameter \(\tau\).
A first integration of the last two equations yields
\[\dot{\phi}(\tau)=\frac{c_{3}}{(\xi^{2}+b^{2})\,\sin^{2}\theta},\quad\dot{\chi} (\tau)=\frac{c_{4}}{(\xi^{2}+b^{2})\,\cos^{2}\theta}, \tag{20}\]
where \(c_{3},c_{4}\) are arbitrary positive constants. Substituting Eq. (20) into the equations (16)-(17), one obtains second-order equations for \(\xi,\theta\) whose solutions are somewhat complicated.
On the other hand, in the particular case \(\theta=const\neq\pi/2\), from the geodesic equations (16)-(19) we obtain
\[\dot{\phi}(\tau)=\dot{\chi}(\tau)=\frac{c_{0}}{\xi^{2}+b^{2}},\quad\ddot{\xi}= \frac{b^{2}\,\dot{\xi}^{2}+c^{2}}{\xi\,(\xi^{2}+b^{2})}. \tag{21}\]
where \(c_{0}\) is an arbitrary constant.
The null-path in five dimensions with \(ds^{2}|_{\mbox{vacuum-defect}}^{5D}=0\) in the pure-canonical metric (10) corresponds to the time-like path in four dimensions with \(ds^{2}|_{\mbox{vacuum-defect}}^{4D}=-(\xi^{2}+b^{2})\,\cos^{2}\theta\,d\chi^{2}<0\) of a massive object [3, 4, 5]. This means that a particle in the four-dimensional metric behaves photon-like in the five-dimensional metric, or, in other words, all objects in the four-dimensional metric travel along null paths in the five-dimensional metric. It is interesting to note here that in the equatorial plane defined by \(\theta=\pi/2\), both the four-dimensional metric (1) and the five-dimensional metric (10) show the same geometrical structure
\[ds^{2}|_{\theta=\pi/2}=-dt^{2}+\left(1+\frac{b^{2}}{\xi^{2}}\right)^{-1}d\xi^{ 2}+(\xi^{2}+b^{2})\,d\phi^{2}. \tag{22}\]
## 3 Conclusions
This newly developed five-dimensional vacuum-defect wormhole metric serves as an extension of the established four-dimensional Klinkhamer-vacuum defect model. It is worth highlighting that, upon scrutinizing the geometric characteristics of both vacuum-defect wormhole models, as given by equations (1) and (10), a striking similarity becomes evident, particularly when restricting to the equatorial plane defined by \(\theta=\pi/2\). Furthermore, an intriguing observation arises when investigating our wormhole model on surfaces where \(\chi\) remains constant. One can see that the well-established Klinkhamer-vacuum defect metric is embedded in the five-dimensional wormhole metric.
|
2305.06260 | Convolution of periodic multiplicative functions and the divisor problem | We study a certain class of arithmetic functions that appeared in Klurman's
classification of $\pm 1$ multiplicative functions with bounded partial sums,
c.f., Comp. Math. 153 (8), 2017, pp. 1622-1657. These functions are periodic
and $1$-pretentious. We prove that if $f_1$ and $f_2$ belong to this class,
then $\sum_{n\leq x}(f_1\ast f_2)(n)=\Omega(x^{1/4})$. This confirms a
conjecture by the first author. As a byproduct of our proof, we studied the
correlation between $\Delta(x)$ and $\Delta(\theta x)$, where $\theta$ is a
fixed real number. We prove that there is a non-trivial correlation when
$\theta$ is rational, and a decorrelation when $\theta$ is irrational.
Moreover, if $\theta$ has a finite irrationality measure, then we can make
this decorrelation quantitative in terms of this measure. | Marco Aymone, Gopal Maiti, Olivier Ramaré, Priyamvad Srivastav | 2023-05-10T15:46:30Z | http://arxiv.org/abs/2305.06260v3 | # Convolution of periodic multiplicative functions and the divisor problem
###### Abstract.
We study a certain class of arithmetic functions that appeared in Klurman's classification of \(\pm 1\) multiplicative functions with bounded partial sums, cf. Comp. Math. 153 (8), 2017, pp. 1622-1657. These functions are periodic and 1-pretentious. We prove that if \(f_{1}\) and \(f_{2}\) belong to this class, then \(\sum_{n\leq x}(f_{1}*f_{2})(n)=\Omega(x^{1/4})\). This confirms a conjecture by the first author made in Bull. Braz. Math. Soc. 53 (4), 1317-1329, 2022. As a byproduct of our proof, we studied the correlation
\[\lim_{X\to\infty}\frac{1}{X^{3/2}}\int_{0}^{X}\Delta(x)\Delta(\theta x)dx,\]
where \(\Delta(x)\) is the error term in the classical Dirichlet divisor problem. We prove that this limit is positive and non-trivial when \(\theta\) is rational, and \(0\) when \(\theta\) is irrational. Moreover, if \(\theta\) has a finite degree of irrationality, then we can make this decorrelation quantitative in terms of this degree.
## 1. Introduction
### Main result and background
A question posed by Erdos in [6], known as the Erdos discrepancy problem, asks whether for all arithmetic functions \(f:\mathds{N}\to\{-1,1\}\) we have that the discrepancy
\[\sup_{x,d}\left|\sum_{n\leq x}f(nd)\right|=\infty. \tag{1}\]
When in addition \(f\) is assumed to be completely multiplicative, then this reduces to whether \(f\) has unbounded partial sums.
In 2015, Tao [14] proved that (1) holds for all \(f:\mathds{N}\to\{-1,1\}\), and a key point of his proof is that it suffices to establish (1) in the class of completely multiplicative functions \(f\) taking values in the complex unit circle.
When \(f:\mathds{N}\to\{-1,1\}\) is assumed to be only multiplicative, then \(f\) does not necessarily have unbounded partial sums. For example, \(f(n)=(-1)^{n+1}\) is multiplicative and clearly has bounded partial sums. In this case, \(f(2^{k})=-1\) for all positive integers \(k\). It was observed by Coons [5] that this rigidity on powers of \(2\) is actually necessary under suitable conditions on the values that \(f\) takes at the remaining primes. Later, in the same paper [14], Tao gave a partial classification of multiplicative functions taking values \(\pm 1\) with bounded partial sums: they must satisfy the previous rigidity condition on powers of \(2\) and they must be \(1\)-pretentious (for more on pretentious Number Theory we refer the reader to [7] by Granville and Soundararajan), that is,
\[\sum_{p}\frac{1-f(p)}{p}<\infty.\]
Later, Klurman [9] proved that the only multiplicative functions \(f\) taking \(\pm 1\) values and with bounded partial sums are the periodic multiplicative functions with sum \(0\) inside each period, and thus, closing this problem for \(\pm 1\) multiplicative functions.
Building upon the referred work of Klurman, the first author proved in [1] that if we allow values outside the unit disk, a \(M\)-periodic multiplicative function \(f\) with bounded partial sums such that \(f(M)\neq 0\) satisfies
1. For some prime \(q|M\), \(\sum_{k=0}^{\infty}\frac{f(q^{k})}{q^{k}}=0\).
2. For each \(p^{a}\|M\), \(f(p^{k})=f(p^{a})\) for all \(k\geq a\).
3. For each \(\gcd(p,M)=1\), \(f(p^{k})=1\), for all \(k\geq 1\).
Conversely, if \(f:\mathds{N}\to\mathds{C}\) is multiplicative and the three conditions above are satisfied, then \(f\) has period \(M\) and has bounded partial sums. Therefore, the three conditions above give examples of multiplicative functions with values possibly outside the unit disk and with bounded partial sums, regardless of whether \(f(M)\) is zero or not.
_Remark 1.1_.: It is interesting to observe that when it is assumed that \(|f|\leq 1\), the only way to achieve condition i. is with \(q=2\) and \(f(2^{k})=-1\) for all \(k\geq 1\).
_Remark 1.2_.: What makes the difference between a multiplicative function \(f\) satisfying i-ii-iii from a non-principal Dirichlet character \(\chi\) is that \(\chi\) neither satisfies i. nor iii.
Here we are interested in the convolution \(f_{1}*f_{2}\) for \(f_{1}\) and \(f_{2}\) satisfying i-ii-iii above. It was proved in [1] that
\[\sum_{n\leq x}(f_{1}*f_{2})(n)\ll x^{\alpha+\epsilon},\]
where \(\alpha\) is the infimum over the exponents \(a>0\) such that \(\Delta(x)\ll x^{a}\), where \(\Delta(x)\) is the classical error term in the Dirichlet divisor problem defined by:
\[\sum_{n\leq x}\tau(n)=x\log x+(2\gamma-1)x+\Delta(x).\]
It was conjectured in [1] that the partial sums of \(f_{1}*f_{2}\) obey the same \(\Omega\)-bound as \(\Delta(x)\), that is, \(\sum_{n\leq x}(f_{1}*f_{2})(n)=\Omega(x^{1/4})\). Here we establish this conjecture.
**Theorem 1.1**.: _Let \(f_{1}\) and \(f_{2}\) be periodic multiplicative functions satisfying i-ii-iii above. Then \(\sum_{n\leq x}(f_{1}*f_{2})(n)=\Omega(x^{1/4})\)._
_Example 1.1_.: The results from [1] give that for each prime \(q\) there exists a unique \(q\)-periodic multiplicative function \(f\) with bounded partial sums and such that \(f(q)\neq 0\). In the case \(q=2\), the corresponding function is \(f(n)=(-1)^{n+1}\). Therefore, in this particular case we have that \(\sum_{n\leq x}(f*f)(n)=\Omega(x^{1/4})\). In particular, this establishes the conjecture in a case not covered by Proposition 3.1 of [1].
Our proof relies on two ingredients. The second one is a study of a family of quadratic forms, and is explained in Section 5. The first ingredient is a generalization of a result of Tong [15]; it allows us to prove the next theorem.
**Theorem 1.2**.: _When \(a\) and \(b\) are non negative integers, we have_
\[\lim_{X\to\infty}\frac{1}{X^{3/2}}\int_{1}^{X}\Delta(x/a)\Delta(x/b)dx=\frac{ \tau(cd)}{6\pi^{2}\sqrt{\lambda}cd}\frac{\zeta(3/2)^{4}}{\zeta(3)}\prod_{p^{k }\|cd}\frac{1-\frac{k-1}{(k+1)p^{3/2}}}{1+1/p^{3/2}}\]
_where \(\lambda=\gcd(a,b)\), \(c=a/\lambda\) and \(d=b/\lambda\). Furthermore, when \(\theta>0\) is irrational, we have_
\[\lim_{X\to\infty}\frac{1}{X^{3/2}}\int_{1}^{X}\Delta(x)\Delta(\theta x)dx=0.\]
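As a rough numerical illustration of Theorem 1.2 (not contained in the paper), one can sieve the divisor function, evaluate \(\Delta\) directly and compare the empirical normalized correlation with the closed formula; the cut-off \(X\), the unit quadrature step and the test ratios below are our own choices, and since convergence is slow only the order of magnitude is meaningful.

```python
# Rough numerical illustration of Theorem 1.2 (ours, illustrative only).
import numpy as np
from math import gcd
from mpmath import zeta
from sympy import factorint

X = 300_000
tau = np.zeros(X + 1, dtype=np.int64)          # divisor-function sieve
for q in range(1, X + 1):
    tau[q::q] += 1
T = np.cumsum(tau)                             # T[n] = sum_{m <= n} tau(m)

def Delta(x):                                  # error term, vectorized
    n = x.astype(np.int64)
    return T[n] - x * np.log(x) - (2 * np.euler_gamma - 1) * x

def corr(a, b):                                # ~ (1/X^{3/2}) * int^X Delta(x/a) Delta(x/b) dx
    xs = np.arange(2.0 * max(a, b), float(X), 1.0)
    return float(np.sum(Delta(xs / a) * Delta(xs / b)) / X**1.5)

def theory(a, b):                              # right-hand side of Theorem 1.2
    lam = gcd(a, b); c, d = a // lam, b // lam
    tau_cd, prod = 1, 1.0
    for p, k in factorint(c * d).items():
        tau_cd *= k + 1
        prod *= (1 - (k - 1) / ((k + 1) * p**1.5)) / (1 + p**-1.5)
    return (tau_cd * float(zeta(1.5))**4 * prod
            / (6 * np.pi**2 * np.sqrt(lam) * c * d * float(zeta(3))))

print(corr(1, 1), theory(1, 1))      # Tong's constant, approximately 0.654
print(corr(1, 2), theory(1, 2))
print(corr(1, np.sqrt(2)))           # irrational ratio: the limit is 0, although decay is slow at this X
```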
### The proof in the large
To prove Theorem 1.1, our starting point is the following formula from [1]:
\[\sum_{n\leq x}(f_{1}*f_{2})(n)=\sum_{n|M_{1}M_{2}}(f_{1}*f_{2}*\mu*\mu)(n) \Delta(x/n), \tag{2}\]
where \(\mu\) is the Mobius function. Therefore, the partial sums of \(f_{1}*f_{2}\) can be written as a finite linear combination of the quantities \((\Delta(x/n))_{n}\). Apart from the fact that \(\Delta(x)=\Omega(x^{1/4})\), we cannot, at least by a direct argument, prevent a conspiracy among the large values of \((\Delta(x/n))_{n}\) that would yield a cancellation among a linear combination of them.
To circumvent this, our approach is inspired by an elegant result of Tong [15]:
\[\int_{1}^{X}\Delta(x)^{2}dx=\frac{1+o(1)}{6\pi^{2}}\sum_{n=1}^{\infty}\frac{\tau(n)^{2}}{n^{3/2}}\,X^{3/2}.\]
By (2), the limit
\[\lim_{X\to\infty}\frac{1}{X^{3/2}}\int_{1}^{X}\left|\sum_{n\leq x}(f_{1}*f_{2} )(n)\right|^{2}dx\]
can be expressed as a quadratic form with matrix \((c_{a,b})_{a,b|M_{1}M_{2}}\) where \(c_{a,b}\) is the correlation
\[c_{a,b}:=\lim_{X\to\infty}\frac{1}{X^{3/2}}\int_{1}^{X}\Delta(x/a)\Delta(x/b)dx.\]
As it turns out, these correlations do not vanish and are computed in Theorem 1.2. With that in hand, the correlation term \(c_{a,b}\) can be expressed as
\[\frac{1}{\sqrt{\gcd(a,b)}}\varphi\left(\frac{\operatorname{lcm}(a,b)}{\gcd(a, b)}\right), \tag{3}\]
for some multiplicative function \(\varphi\).
This matrix entanglement is hard to analyze directly. In Section 5 we explore sufficient conditions for a matrix of the form (3) to be positive definite. When this happens, it ensures the claimed \(\Omega\)-bound. Thanks to the Selberg diagonalization
process, we show that when \(\varphi\) is completely multiplicative and satisfies some further conditions, this matrix is positive definite. The main proof somehow reduces to this case; we indeed find a way to conjugate our original matrix to reach a matrix related to a completely multiplicative function. With standard linear algebra of Hermitian matrices we conclude that our matrix \((c_{a,b})_{a,b|M_{1}M_{2}}\) is positive definite.
### Byproduct study
Motivated by Nyman's reformulation of the Riemann hypothesis [13], in recent papers [2, 3, 4] by Balazard, Duarte and Martin, the correlation
\[A(\theta):=\int_{0}^{\infty}\{x\}\{\theta x\}\frac{dx}{x^{2}}\]
has been thoroughly studied. Here \(\theta>0\) is any real number and \(\{x\}\) stands for the fractional part of \(x\). Several analytic properties for the function \(A(\theta)\) have been shown.
Motivated by this, we studied the "_divisor_" analogue
\[I(\theta)=\lim_{X\to\infty}\frac{1}{X^{3/2}}\int_{0}^{X}\Delta(x)\Delta( \theta x)dx.\]
As stated in Theorem 1.2, when \(\theta=p/q\) is a rational number, the limit above is described by a positive multiplicative function depending on \(p\) and \(q\). However, and somewhat surprisingly, when \(\theta\) is irrational, this correlation vanishes. The next proposition establishes that this vanishing is indeed very strong, except possibly at points \(\theta\) that are extremely well approximated by rationals (Liouville numbers).
**Proposition 1.1**.: _Let \(\theta>0\) be an irrational number with degree of irrationality \(\eta\), that is, for each \(\epsilon>0\) there is a constant \(C>0\) such that the inequality_
\[|n-m\theta|\geq\frac{C}{m^{\eta+\epsilon}}\]
_is violated only for a finite number of positive integers \(n\) and \(m\). Then, for every positive \(\epsilon\), we have_
\[\int_{0}^{X}\Delta(x)\Delta(\theta x)dx=O(X^{3/2-1/(18\eta)+\epsilon}).\]
_In the other cases of irrationals \(\theta\), the integral above is \(o(X^{3/2})\)._
This shows that we have decorrelation among the values \(\Delta(x)\) and \(\Delta(\theta x)\) when \(\theta\) is irrational, and moreover, this gives that the function \(I(\theta)\) is discontinuous everywhere.
We mention that a similar decorrelation also has been obtained by Ivic and Zhai in [8]. In this paper they show decorrelation between \(\Delta(x)\) and \(\Delta_{k}(x)\), where \(\Delta_{k}(x)\) is the error term related to the \(k\)-fold divisor function, and \(k=3\) or \(4\).
## 2. Notation
### Asymptotic notation
We employ both Vinogradov's notation \(f\ll g\) and the Landau notation \(f=O(g)\) whenever there exists a constant \(C>0\) such that \(|f(x)|\leq C|g(x)|\) for all \(x\) in a set of parameters. When not specified, this set of parameters is \(x\in(a,\infty)\) for sufficiently large \(a>0\). We employ \(f=o(g)\) when \(\lim_{x\to a}\frac{f(x)}{g(x)}=0\). In this case \(a\) can be a complex number or \(\pm\infty\). Finally, \(f=\Omega(g)\) when \(\limsup_{x\to a}\frac{|f(x)|}{g(x)}>0\), where \(a\) is as in the previous notation.
### Number-theoretic notation
Here \(p\) stands for a generic prime number. We sometimes denote the least common multiple between \(a,b\) as \(\operatorname{lcm}(a,b)\). The greatest common divisor is denoted by \(\gcd(a,b)\). The symbol \(*\) stands for Dirichlet convolution between two arithmetic functions: \((f*g)(n)=\sum_{d|n}f(d)g(n/d)\).
## 3. Multiplicative auxiliaries
Our first task is to evaluate \(\sum_{n\geq 1}\tau(cn)\tau(dn)/n^{3/2}\) for coprime positive integers \(c\) and \(d\).
**Lemma 3.1**.: _Let \(c\) be a fixed positive integer and \(f(n)\) be a multiplicative function with \(f(c)\neq 0\). Then \(n\mapsto\frac{f(cn)}{f(c)}\) is multiplicative._
Proof.: For positive integers \(u,v\), we have
\[f(u)f(v)=f(\gcd(u,v))f(\operatorname{lcm}(u,v)).\]
Let \(u=cn\), \(v=cm\) with \(\gcd(n,m)=1\). Then \(f(cn)f(cm)=f(c)f(cnm)\). Therefore, we obtain
\[\frac{f(cm)}{f(c)}\frac{f(cn)}{f(c)}=\frac{f(cnm)}{f(c)}.\]
**Lemma 3.2**.: _Let \(c,d\) be two fixed positive integers with \(\gcd(c,d)=1\). Then_
\[\sum_{n=1}^{\infty}\frac{\tau(cn)\tau(dn)}{n^{s}}=\tau(cd)\frac{\zeta(s)^{4}}{ \zeta(2s)}\prod_{p^{k}\|cd}\left(1+p^{-s}\right)^{-1}\left(1-\frac{(k-1)}{(k+1 )}p^{-s}\right).\]
The quantity we compute appears in several places, for instance in [11] by Lee & Lee or in [12, Theorem 2.4] by Munsch & Shparlinski.
Proof.: Note that \(\frac{\tau(cn)}{\tau(c)}\) is a multiplicative function in the variable \(n\), and so is \(\frac{\tau(cn)\tau(dn)}{\tau(c)\tau(d)}\). Therefore, for \(\Re(s)>1\) we have the following Euler factorization
\[\sum_{n=1}^{\infty}\frac{\tau(cn)\tau(dn)}{\tau(c)\tau(d)n^{s}}=\prod_{p\nmid cd}\left(1+\sum_{\ell=1}^{\infty}\frac{\tau(p^{\ell})^{2}}{p^{\ell s}}\right) \prod_{p\mid cd}\left(1+\sum_{\ell=1}^{\infty}\frac{\tau(cp^{\ell})\tau(dp^{ \ell})}{\tau(c)\tau(d)p^{\ell s}}\right).\]
For \(|x|<1\), we know that
\[\sum_{\ell=0}^{\infty}(\ell+1)x^{\ell}=\frac{1}{(1-x)^{2}},\hskip 28.452756pt \sum_{\ell=0}^{\infty}(\ell+1)^{2}x^{\ell}=\frac{(1+x)}{(1-x)^{3}},\]
from which we also derive that
\[\sum_{\ell=0}^{\infty}\ell(\ell+1)x^{\ell}=\frac{2x}{(1-x)^{3}}.\]
Now,
\[\prod_{p\nmid cd}\left(1+\sum_{\ell=1}^{\infty}\frac{\tau(p^{\ell}) ^{2}}{p^{\ell s}}\right) =\prod_{p}\left(1+\sum_{\ell=1}^{\infty}\frac{(\ell+1)^{2}}{p^{ \ell s}}\right)\prod_{p\mid cd}\left(1+\sum_{\ell=1}^{\infty}\frac{(\ell+1)^{2 }}{p^{\ell s}}\right)^{-1}\] \[=\prod_{p}\frac{(1+p^{-s})}{(1-p^{-s})^{3}}\prod_{p\mid cd}\frac{ (1-p^{-s})^{3}}{(1+p^{-s})}\] \[=\frac{\zeta(s)^{4}}{\zeta(2s)}\prod_{p\mid cd}\frac{(1-p^{-s})^{ 3}}{(1+p^{-s})}.\]
If \(\gcd(c,d)=1\), we have
\[\prod_{p|cd}\left(1+\sum_{\ell=1}^{\infty}\frac{\tau(cp^{\ell})\,\tau(dp^{\ell})}{\tau(c)\tau(d)p^{\ell s}}\right) =\prod_{p^{k}\|cd}\left(1+\sum_{\ell=1}^{\infty}\frac{(k+1+\ell) (\ell+1)}{(k+1)p^{\ell s}}\right)\] \[=\prod_{p^{k}\|cd}\left(1+\sum_{\ell=1}^{\infty}\frac{(\ell+1)}{p ^{\ell s}}+\frac{1}{k+1}\sum_{\ell=1}^{\infty}\frac{\ell(\ell+1)}{p^{\ell s}}\right)\] \[=\prod_{p^{k}\|cd}\left(1-p^{-s}\right)^{-3}\left(1-\tfrac{(k-1)} {(k+1)}p^{-s}\right).\]
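The identity of Lemma 3.2 can also be tested numerically. The Python sketch below (our own illustration; the choices \(c=4\), \(d=3\), \(s=2\) and the truncation point are arbitrary) compares a truncation of the Dirichlet series with the closed-form right-hand side.

```python
# Numerical sanity check of Lemma 3.2 (illustrative only).
import numpy as np
from sympy import factorint
from mpmath import zeta

c, d, s, N = 4, 3, 2.0, 200_000                 # gcd(c, d) = 1 is required
M = max(c, d) * N
tau = np.zeros(M + 1, dtype=np.int64)           # tau(m) for m <= M
for q in range(1, M + 1):
    tau[q::q] += 1

n = np.arange(1, N + 1)
lhs = float(np.sum(tau[c * n] * tau[d * n] / n**s))

rhs = int(tau[c * d]) * float(zeta(s))**4 / float(zeta(2 * s))
for p, k in factorint(c * d).items():           # Euler factors at p^k || cd
    rhs *= (1 - (k - 1) / ((k + 1) * p**s)) / (1 + p**-s)
print(lhs, rhs)                                  # the two values should agree closely
```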
## 4. Proof of Theorem 1.2 and of Proposition 1.1
We continue the proof with the following lemma.
**Lemma 4.1**.: _Let \(a,b\) be positive integers, \(\lambda=\gcd(a,b)\), \(c=a/\lambda\) and \(d=b/\lambda\). Then_
\[\lim_{X\to\infty}\frac{1}{X^{3/2}}\int_{1}^{X}\Delta(x/a)\Delta(x/b)dx=\frac{ 1}{6\pi^{2}\sqrt{\lambda}cd}\sum_{n=1}^{\infty}\frac{\tau(cn)\tau(dn)}{n^{3/2}}.\]
Proof.: Let \(N>0\) be a parameter and let \(\epsilon>0\) be a small number that may change from line to line. We proceed with Voronoi's formula for \(\Delta(x)\) in the following form (see [10])
\[\Delta(x)=\frac{x^{1/4}}{\pi\sqrt{2}}\sum_{n\leq N}\frac{\tau(n)}{n^{3/4}} \cos(\sqrt{nx}-\pi/4)+R_{N}(x)\]
where, for every positive \(\epsilon\), we have
\[R_{N}(x)\ll x^{\epsilon}+\frac{x^{1/2+\epsilon}}{N^{1/2}}.\]
We select \(N\) at the end. With this formula we have that in the range \(1\leq x\leq X\),
\[\Delta(x/a)=\frac{(x/a)^{1/4}}{\pi\sqrt{2}}\sum_{n\leq N}\frac{\tau(n)}{n^{3/4 }}\cos(\sqrt{nx/a}-\pi/4)+R_{N}(x/a)=U_{N}(x/a)+R_{N}(x/a)\]
say.
Now,
\[\int_{1}^{X}\Delta(x/a)\Delta(x/b)dx =\int_{1}^{X}U_{N}(x/a)U_{N}(x/b)dx+\int_{1}^{X}U_{N}(x/a)R_{N}(x/b)dx\] \[+\int_{1}^{X}U_{N}(x/b)R_{N}(x/a)dx+\int_{1}^{X}R_{N}(x/a)R_{N}(x/ b)dx\] \[=\int_{1}^{X}U_{N}(x/a)U_{N}(x/b)dx+O\left(X^{1+1/4+\epsilon}+ \frac{X^{1+3/4+\epsilon}}{\sqrt{N}}\right),\]
where we used the Cauchy-Schwarz inequality in the last equality. Let now \(\lambda=\gcd(a,b)\), \(c=a/\lambda\) and \(d=b/\lambda\). By making the change of variable \(u=x/\lambda\), we reach
\[\int_{1}^{X}U_{N}(x/a)U_{N}(x/b)dx=\lambda\int_{1}^{X/\lambda}U_{ N}(x/c)U_{N}(x/d)dx\] \[=\frac{\lambda}{2\pi^{2}(cd)^{1/4}}\sum_{n,m\leq N}\frac{\tau(n) \tau(m)}{(nm)^{3/4}}\int_{1}^{X/\lambda}x^{1/2}\cos(\sqrt{nx/c}-\pi/4)\cos( \sqrt{mx/d}-\pi/4)\] \[=\frac{\lambda}{\pi^{2}(cd)^{1/4}}\sum_{n,m\leq N}\frac{\tau(n) \tau(m)}{(nm)^{3/4}}\int_{1}^{(X/\lambda)^{1/2}}u^{2}\cos(u\sqrt{n/c}-\pi/4) \cos(u\sqrt{m/d}-\pi/4)du,\]
where in the last equality above we made a change of variable \(u=\sqrt{x}\). We claim now that the main contribution comes when \(n/c=m/d\). Since \(c\) and \(d\) are coprime, this implies that \(m=dk\) and \(n=ck\). Therefore the sum over these \(n\) and \(m\) can be written as
\[\frac{\lambda}{\pi^{2}cd}\sum_{k=1}^{\infty}\frac{\tau(ck)\tau(dk)}{k^{3/2}} \int_{1}^{(X/\lambda)^{1/2}}u^{2}\cos^{2}(u\sqrt{k}-\pi/4)\,du+O\left(\frac {X^{3/2+\epsilon}}{\sqrt{N}}\right). \tag{4}\]
We recall now that \(\cos^{2}(v)=\frac{1+\cos(2v)}{2}\), and hence the integral above is
\[\int_{1}^{X^{1/2}/\lambda^{1/2}}x^{2}\cos^{2}(\sqrt{n}x-\pi/4)dx=\frac{X^{3/2} }{6\lambda^{3/2}}+O(X), \tag{5}\]
where the big-oh term is uniform in \(n\). Now we will show that the sum over those \(n\) and \(m\) such that \(n/c\neq m/d\) will be \(o(X^{3/2})\). With this the proof will be complete by combining (4) and (5).
We recall the identity \(2\cos(u)\cos(v)=\cos(u-v)+\cos(u+v)\). Thus, for \(\sqrt{n/c}\neq\sqrt{m/d}\), we find that
\[\int_{1}^{X^{1/2}/\lambda^{1/2}}x^{2}\cos(\sqrt{n/c}x-\pi/4)\cos( \sqrt{m/d}x-\pi/4)dx\] \[=\int_{1}^{X^{1/2}/\lambda^{1/2}}x^{2}\cos((\sqrt{n/c}-\sqrt{m/d} )x)dx+\int_{1}^{X^{1/2}/\lambda^{1/2}}x^{2}\sin((\sqrt{n/c}+\sqrt{m/d})x)dx\] \[\ll\frac{X}{\sqrt{n/c}-\sqrt{m/d}}\ll\frac{\sqrt{n/c}+\sqrt{m/d}}{ nd-mc}X.\]
Let \(\mathds{1}_{P}(n)\) be the indicator that \(n\) has property \(P\). We find that
\[\sum_{\begin{subarray}{c}n,m\leq N\\ nd-mc\neq 0\end{subarray}}\frac{\tau(n)\tau(m)}{(nm)^{3/4}}\int_{1}^{X/ \lambda}x^{1/2}\cos(\sqrt{nx/c}-\pi/4)\cos(\sqrt{mx/d}-\pi/4)dx\] \[\ll XN^{\epsilon}\sum_{\begin{subarray}{c}n,m\leq N\\ nd-mc\neq 0\end{subarray}}\frac{\sqrt{n/c}+\sqrt{m/d}}{(nm)^{3/4}|nd-mc|}\] \[=XN^{\epsilon}\sum_{\begin{subarray}{c}n,m\leq N\\ nd-mc\neq 0\end{subarray}}\frac{\sqrt{n/c}+\sqrt{m/d}}{(nm)^{3/4}|nd-mc|}\sum_{ \begin{subarray}{c}k=-N\max(c,d)\\ k\neq 0\end{subarray}}^{N\max(c,d)}\mathds{1}_{nd-mc=k}.\]
On calling this sum \(S\), we readily continue with
\[S \ll XN^{\epsilon}\sum_{k=1}^{N\max(c,d)}\frac{1}{k}\sum_{m\leq N} \frac{\sqrt{m}+\sqrt{k}}{((k+mc)m)^{3/4}}\] \[\ll XN^{\epsilon}\left(O(\log N)^{2}+\sum_{k\leq N}\frac{1}{ \sqrt{k}}\sum_{m\leq N}\frac{1}{(m^{2}+mk)^{3/4}}\right)\] \[\ll XN^{\epsilon}\left(O(\log N)^{2}+\sum_{k\leq N}\frac{1}{ \sqrt{k}}\left(\sum_{k\leq m\leq N}\frac{1}{m^{3/2}}+\frac{1}{k^{3/4}}\sum_{m \leq k}\frac{1}{m^{3/4}}\right)\right)\] \[\ll XN^{\epsilon}(\log N)^{2}.\]
Finally, by selecting \(N=X^{2}\), we arrive at
\[\int_{1}^{X}\Delta(x/a)\Delta(x/b)dx=\frac{1}{6\pi^{2}\sqrt{\lambda}cd}\left( \sum_{n=1}^{\infty}\frac{\tau(cn)\tau(dn)}{n^{3/2}}\right)X^{3/2}+O(X^{3/2-1/4 +\epsilon}),\]
where the main contribution in the \(O\)-term above comes from the usage of Cauchy-Schwarz in the beginning of the proof.
The proof is complete.
Now we deviate from the main line and prove Proposition 1.1.
Proof of Proposition 1.1.: By the proof of Lemma 4.1 we have that
\[\eqalign{I_{\theta}(X):=&\int_{0}^{X}\Delta(x)\Delta(\theta x)dx\cr=&{1\over\pi^ {2}}\sum_{n,m\leq N}{\tau(n)\tau(m)\over(nm)^{3/4}}\int_{0}^{X^{1/2}}x^{2}\cos( x\sqrt{n}-\pi/4)\cos(x\sqrt{m\theta}-\pi/4)dx\cr&+O\left(X^{1+1/4+\epsilon}+{X^{1+3/4+ \epsilon}\over\sqrt{N}}\right).}\]
Now, by appealing to the identity \(2\cos(u)\cos(v)=\cos(u-v)+\cos(u+v)\), we arrive at
\[I_{\theta}(X)={1\over 2\pi^{2}}\sum_{n,m\leq N}{\tau(n)\tau(m)\over(nm)^{3/4}} \int_{0}^{X^{1/2}}x^{2}\cos(x(\sqrt{n}-\sqrt{m\theta}))dx+O\left(X^{1+1/4}+{X^ {1+3/4+\epsilon}\over\sqrt{N}}\right).\]
Calling the sum above \(S_{\theta}(X)\) and setting \(a_{n,m}:=\sqrt{n}-\sqrt{m\theta}\), we obtain that
\[S_{\theta}(X)=X^{3/2}\sum_{n,m\leq N}{\tau(n)\tau(m)\over(nm)^{3/4}}\Lambda(a_ {n,m}\sqrt{X}),\]
where \(\Lambda(0):=1/3\) and for \(u\neq 0\)
\[\Lambda(u):={\sin(u)\over u}+2{\cos(u)\over u^{2}}-2{\sin(u)\over u^{3}}.\]
A careful inspection shows that \(\Lambda\) is continuous and for large \(|u|\), \(\Lambda(u)\ll|u|^{-1}\).
Now, for a large parameter \(T\) to be chosen later, we split
\[S_{\theta}(X)=X^{3/2}\sum_{n,m\leq N\atop|a_{n,m}\sqrt{X}|\leq T}{\tau(n) \tau(m)\over(nm)^{3/4}}\Lambda(a_{n,m}\sqrt{X})+X^{3/2}\sum_{n,m\leq N\atop|a_{ n,m}\sqrt{X}|>T}{\tau(n)\tau(m)\over(nm)^{3/4}}\Lambda(a_{n,m}\sqrt{X}).\]
We call the first sum in the right hand side above _diagonal_ contribution and the second sum the _non-diagonal_ contribution. We select \(T=X^{1/2-\delta}\) and \(N=X^{1/2+\delta}\), for some small \(\delta>0\).
_The diagonal contribution_. We have that
\[D(X) =X^{3/2}\sum_{\begin{subarray}{c}n,m\leq N\\ |a_{n,m}\sqrt{X}|\leq T\end{subarray}}\frac{\tau(n)\tau(m)}{(nm)^{3/4}}\Lambda(a_ {n,m}\sqrt{X}) \tag{7}\] \[\ll X^{3/2}N^{\epsilon}\sum_{m\leq N}\frac{1}{m^{3/4}}\sum_{ \begin{subarray}{c}n=m\theta-\frac{2\sqrt{m\theta}}{X^{\delta}}+\frac{1}{X^{2 \delta}}\end{subarray}}^{m\theta+\frac{2\sqrt{m\theta}}{X^{\delta}}+\frac{1}{X ^{2\delta}}}\frac{|\Lambda(a_{n,m}\sqrt{X})|}{n^{3/4}}. \tag{6}\]
We split the inner sum above according to whether \(\frac{2\sqrt{m\theta}}{X^{\delta}}+\frac{1}{X^{2\delta}}\) is below or above \(1\). In the case where this quantity is greater than or equal to \(1\), we have that \(m\geq((2\theta)^{-1}+o(1))X^{2\delta}\), and hence
\[D(X)\ll X^{3/2}N^{\epsilon}\sum_{((2\theta)^{-1}+o(1))X^{2\delta} \leq m\leq N}\frac{1}{m^{3/4}}\sum_{n=m\theta-\frac{2\sqrt{m\theta}}{X^{\delta }}+\frac{1}{X^{2\delta}}}^{m\theta+\frac{2\sqrt{m\theta}}{X^{\delta}}+\frac{1 }{X^{2\delta}}}\frac{|\Lambda(a_{n,m}\sqrt{X})|}{n^{3/4}}\] \[\ll X^{3/2}N^{\epsilon}\sum_{((2\theta)^{-1}+o(1))X^{2\delta} \leq m\leq N}\frac{1}{m^{3/4}}\cdot\frac{1}{m^{3/4}}\frac{\sqrt{m}}{X^{\delta}}\] \[\ll X^{3/2-\delta}N^{\epsilon}.\]
In the case that \(\frac{2\sqrt{m\theta}}{X^{\delta}}+\frac{1}{X^{2\delta}}\leq 1\), we have that \(m\leq((2\theta)^{-1}+o(1))X^{2\delta}\), and now the Diophantine properties of \(\theta\) come into play. If the degree of irrationality of \(\theta\) is \(\eta\), we have that for each \(\epsilon\) there is a constant \(C>0\) such that the inequality
\[|n-m\theta|\geq\frac{C}{m^{\eta+\epsilon}}\]
is violated only for a finite number of positive integers \(n\) and \(m\). In our case, this allows us to lower bound \(|a_{n,m}\sqrt{X}|\) for all but a finite number of \(n\) and \(m\) such that \(1\leq m\ll X^{2\delta}\) and \(1/2\leq\sqrt{n}/\sqrt{m\theta}\leq 2\):
\[|a_{n,m}\sqrt{X}|\cdot\frac{\sqrt{n}+\sqrt{m\theta}}{\sqrt{n}+ \sqrt{m\theta}} =\sqrt{X}\frac{|n-m\theta|}{\sqrt{n}+\sqrt{m\theta}}\] \[\geq\frac{\sqrt{X}}{m^{\eta+\epsilon}(\sqrt{n}+\sqrt{m\theta})}\] \[\gg X^{1/2-(2\eta+1)\delta-\epsilon}.\]
Observe that the diagonal contribution from those exceptional \(n\) and \(m\) will be at most \(O(X)\). With these estimates on hand and recalling that \(\Lambda(u)\ll|u|^{-1}\), we obtain
\[\eqalign{&X^{3/2}N^{\epsilon}\sum_{m\leq((2\theta)^{-1}+o(1))X^{2\delta}}{1 \over m^{3/4}}\sum_{n=m\theta-{2\sqrt{m\theta}\over X^{\delta}}+{1\over X^{2 \delta}}}^{m\theta+{2\sqrt{m\theta}\over X^{\delta}}+{1\over X^{2\delta}}}{| \Lambda(a_{n,m}\sqrt{X})|\over n^{3/4}}\cr&\ll X^{3/2}N^{\epsilon}\sum_{m\leq((2 \theta)^{-1}+o(1))X^{2\delta}}{1\over m^{3/2}}\cdot{1\over X^{1/2-(2\eta+1) \delta-\epsilon}}+O(X)\cr&\ll X^{1+(2\eta+1)\delta+\epsilon}.}\]
Therefore, the diagonal contribution is at most
\[D(X)\ll X^{1+(2\eta+1)\delta+\epsilon}+X^{3/2-\delta+\epsilon}.\]
_The non-diagonal contribution_. Now, we reach
\[\eqalign{X^{3/2}\sum_{n,m\leq N\atop|a_{n,m}\sqrt{X}|>T}{\tau(n)\tau(m)\over( nm)^{3/4}}\Lambda(a_{n,m}\sqrt{X})&\ll{X^{3/2}N^{1/2+\epsilon}\over T}\cr&=X^{3/2+1/4+( \delta+\epsilon)/2+\epsilon\delta-1/2+\delta}\cr&=X^{1+1/4+3\delta/2+ \epsilon/2+\epsilon\delta}.}\]
We choose \(\delta={1\over 3(2\eta+1)}\) and obtain
\[I_{\theta}(X)\ll X^{3/2-1/(18\eta)+\epsilon}.\]
The proof of the first part of Proposition 1.1 is complete.
Now we assume that \(\theta\) is a Liouville number, _i.e._, \(\theta\) does not have a finite degree of irrationality. We see that the non-diagonal argument does not depend on the Diophantine properties of \(\theta\). Let \(\eta>0\) be a large fixed number and \(t>0\) a small number that will tend to \(0\). For \(D(X)\) as in (6), by repeating verbatim the estimates
above we have that
\[D(X)\ll X^{3/2}\sum_{m\leq((2\theta)^{-1}+o(1))X^{2\delta}}\frac{\tau(m)}{m^{3/4} }\sum_{n=m\theta-\frac{2\sqrt{m\theta}}{X^{\delta}}+\frac{1}{X^{2\delta}}}^{m \theta+\frac{2\sqrt{m\theta}}{X^{\delta}}+\frac{1}{X^{2\delta}}}\frac{\tau(n)| \Lambda(a_{n,m}\sqrt{X})|}{n^{3/4}}+O(X^{3/2-\delta}N^{\epsilon}).\]
Let \(\|x\|\) be the distance from \(x\) to the nearest integer. We split the sum over \(m\) above into two sums: One over those \(m\) such that \(\|m\theta\|>tm^{-\eta}\) and the other over \(m\) such that \(\|m\theta\|\leq tm^{-\eta}\).
Repeating the argument above for non-Liouville numbers, we have that the contribution over those \(m\) such that \(\|m\theta\|>tm^{-\eta}\) is \(O(t^{-1}X^{1+\delta(2\eta+1)})\). Therefore
\[D(X)\ll X^{3/2}\sum_{\begin{subarray}{c}m=1\\ \|m\theta\|\leq tm^{-\eta}\end{subarray}}^{\infty}\frac{1}{m^{3/2-\epsilon}}+ O(t^{-1}X^{1+\delta(2\eta+1)}+X^{3/2-\delta+\epsilon}).\]
Combining all these estimates, we see that
\[\limsup_{X\to\infty}\frac{1}{X^{3/2}}\left|\int_{0}^{X}\Delta(x)\Delta(\theta x )dx\right|\ll\sum_{\begin{subarray}{c}m=1\\ \|m\theta\|\leq tm^{-\eta}\end{subarray}}^{\infty}\frac{1}{m^{3/2-\epsilon}}.\]
Since the upper bound above holds for all \(t>0\), we have that as \(t\to 0^{+}\), the sum above converges to \(0\), thus implying that the \(\limsup\) is \(0\). The proof is complete.
Proof of Theorem 1.2.: On combining Lemma 4.1 together with Lemma 3.2, we get the first part of Theorem 1.2. The second part is a trivial consequence of Proposition 1.1.
## 5. Quadratic forms auxiliaries
The main proof will lead to considering the quadratic form attached to a matrix of the form
\[M_{S,\varphi}=\bigg{(}\frac{1}{\sqrt{\gcd(a,b)}}\varphi\bigg{(}\frac{\operatorname {lcm}(a,b)}{\gcd(a,b)}\bigg{)}\bigg{)}_{a,b\in S} \tag{8}\]
where \(S\) is some finite set of integers while \(\varphi\) is a _non-negative multiplicative function such that \(\varphi(p^{k})\leq 1\)_. So we stray somewhat from the main line and investigate this
situation. Our initial aim is to find conditions under which the associated quadratic form is positive definite, but we shall finally restrict our scope. GCD-matrices have received quite some attention, but it seems the matrices occurring in (8) have not been explored. We obtain results in two specific contexts.
_Completely multiplicative case._ Here is our first result.
**Lemma 5.1**.: _When \(\varphi\) is completely multiplicative, the matrix \(M_{S,\varphi}\) is non-negative. When \(p^{1/4}\varphi(p)\in(0,1)\) and \(S\) is divisor closed, this matrix is positive definite. The determinant in that case is given by the formula_
\[\det\biggl{(}\frac{1}{\sqrt{\gcd(a,b)}}\varphi\biggl{(}\frac{\operatorname{ lcm}(a,b)}{\gcd(a,b)}\biggr{)}\biggr{)}_{a,b\in S}=\prod_{d\in S}\varphi(d)^{2}( \mu*\psi)(d),\]
_where \(\psi\) is the completely multiplicative function given by \(\psi(p)=1/(\sqrt{p}\varphi(p)^{2})\)._
By _divisor closed_, we mean that every divisor of an element of \(S\) also belongs to \(S\).
Proof.: We write
\[\frac{1}{\sqrt{\gcd(a,b)}}\varphi\biggl{(}\frac{\operatorname{lcm}(a,b)}{\gcd(a,b)}\biggr{)}=\frac{1}{\sqrt{\gcd(a,b)}}\,\frac{\varphi(a)\varphi(b)}{\varphi(\gcd(a,b))^{2}}=\varphi(a)\varphi(b)\psi(\gcd(a,b))\]
where \(\psi(n)=1/(\varphi(n)^{2}\sqrt{n})\) is another non-negative multiplicative function. We introduce the auxiliary function \(h=\mu*\psi\). Notice that this function is multiplicative and non-negative, as \(\psi(p)\geq 1\). We use Selberg's diagonalization process to write
\[\sum_{a,b\in S}\frac{1}{\sqrt{\gcd(a,b)}}\varphi\biggl{(}\frac{ \operatorname{lcm}(a,b)}{\gcd(a,b)}\biggr{)}x_{a}x_{b} =\sum_{a,b\in S}\psi(\gcd(a,b))\varphi(a)x_{a}\varphi(b)x_{b}\] \[=\sum_{a,b\in S}\sum_{d|(a,b)}h(d)\varphi(a)x_{a}\varphi(b)x_{b}\] \[=\sum_{d}h(d)\biggl{(}\sum_{\begin{subarray}{c}a\in S\\ d|a\end{subarray}}\varphi(a)x_{a}\biggr{)}^{2}\]
from which the non-negativity follows readily. When \(\varphi\) verifies the more stringent condition that \(p^{1/4}\varphi(p)\in(0,1)\), we know that both \(\varphi\) and \(h\) are strictly positive.
Let us define \(y_{d}=\sum_{\begin{subarray}{c}a\in S\\ d|a\end{subarray}}\varphi(a)x_{a}\). The variable \(d\) varies in the set \(D\) of divisors of \(S\). We assume that \(S\) is divisor closed, so that \(D=S\). We can readily invert the triangular system giving the \(y_{d}\)'s as functions of the \(x_{a}\)'s into
\[\varphi(a)x_{a}=\sum_{a|b}\mu(b/a)y_{b}\]
Indeed, the fact that the mentioned system is triangular ensures that a solution \(y\) is unique if it exists. We next verify that the proposed expression is indeed a solution by:
\[\sum_{\begin{subarray}{c}a\in S\\ d|a\end{subarray}}\varphi(a)x_{a}=\sum_{\begin{subarray}{c}a\in S\\ d|a\end{subarray}}\sum_{\begin{subarray}{c}a|b\\ d|b\end{subarray}}\mu(b/a)y_{b}=\sum_{\begin{subarray}{c}b\in S\\ d|b\end{subarray}}y_{b}\sum_{d|a|b}\mu(b/a)=y_{d}\]
as the last inner sum vanishes when \(d\neq b\). We have thus written the quadratic form as a linear combination of squares of independent linear forms. In a more pedestrian manner, if our quadratic form vanishes, then all the \(y_{d}\)'s vanish, hence so do the \(x_{a}\)'s.
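As a concrete illustration of Lemma 5.1 (not part of the paper), the Python sketch below builds \(M_{S,\varphi}\) for the divisor-closed set \(S\) of divisors of \(12\) and a completely multiplicative \(\varphi\) with \(p^{1/4}\varphi(p)=0.9<1\); it then checks positive definiteness and the determinant formula. All numerical choices here are ours.

```python
# Illustrative check of Lemma 5.1 on a small divisor-closed set.
import numpy as np
from sympy import divisors, factorint
from math import gcd, prod

S = divisors(12)                               # divisor-closed: {1, 2, 3, 4, 6, 12}
phi_p = {p: 0.9 * p**-0.25 for p in (2, 3)}    # p^{1/4} phi(p) = 0.9, inside (0, 1)

def phi(n):                                    # completely multiplicative extension
    return prod(phi_p[p]**k for p, k in factorint(n).items()) if n > 1 else 1.0

def psi(n):
    return 1.0 / (phi(n)**2 * np.sqrt(n))

def mobius(n):
    f = factorint(n)
    return 0 if any(k > 1 for k in f.values()) else (-1)**len(f)

M = np.array([[phi(a * b // gcd(a, b)**2) / np.sqrt(gcd(a, b)) for b in S] for a in S])

print(np.all(np.linalg.eigvalsh(M) > 0))       # positive definite: True
det_formula = prod(phi(dd)**2 * sum(mobius(dd // e) * psi(e) for e in divisors(dd))
                   for dd in S)
print(np.isclose(np.linalg.det(M), det_formula, rtol=1e-9, atol=0))   # True
```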
Here is a corollary.
**Lemma 5.2**.: _When the set \(S\) contains solely squarefree integers, the matrix \(M_{S,\varphi}\) is non-negative._
Proof.: Simply apply Lemma 5.1 to the completely multiplicative function \(\varphi^{\prime}\) that has the same values at the primes as \(\varphi\).
_An additive-like situation._ Let us restrict our attention to the case
\[S=\{1,p,p^{2},\cdots,p^{K}\}.\]
In that case, the matrix has the form
\[\mathcal{M}_{\varphi,K}=\left(\frac{1}{p^{\min(i,j)/2}}\varphi(p^{\max(i,j)- \min(i,j)})\right)_{i,j\leq K}.\]
We have not been able to get general results like Lemma 5.2 in that case. We may however work out a criterion that is simple to verify in our case. We first recall the following theorem of Frobenius.
**Lemma 5.3**.: _A hermitian complex valued matrix \(M=(m_{i,j})_{i,j\leq K}\) defines a positive definite form if and only if all its principal minors \(\det(m_{i,j})_{i,j\leq k}\) for \(k\leq K\) are positive._
So in our case, here is the list of conditions to verify, where we have set \(q=1/\sqrt{p}\):
* \(q-\varphi(p)^{2}>0\).
* \((q-\varphi(p^{2}))(\varphi(p^{2})-2\varphi(p)^{2}+q)>0\).
* \(q^{2}\varphi(p)^{4}-2q\varphi(p^{3})\varphi(p)^{3}+(4q^{2}\varphi(p^{2})-2q \varphi(p^{2})^{2}+\varphi(p^{3})^{2}-3q^{3})\varphi(p)^{2}+\) \(2(2q-\varphi(p^{2}))\varphi(p^{3})\varphi(p^{2})\varphi(p)+\varphi(p^{2})^{4}- 2q^{2}\varphi(p^{2})^{2}-q\varphi(p^{3})^{2}+q^{4}>0\).
The first condition is equivalent to the condition \(p^{1/4}\varphi(p)<1\) that we have already met in Lemma 5.1. We conclude with the next lemma.
**Lemma 5.4**.: _Recall that \(\varphi(1)=1\), that \(\varphi(p)\in(0,p^{-1/4})\). We have_
1. _The matrix_ \(\mathcal{M}_{\varphi,1}\) _is positive definite._
2. _The matrix_ \(\mathcal{M}_{\varphi,2}\) _is positive definite if and only if_ \(\varphi(p^{2})-2\varphi(p)^{2}+1/\sqrt{p}>0\)_._
3. _The matrix_ \(\mathcal{M}_{\varphi,3}\) _is positive definite if and only if_ \(\mathcal{M}_{\varphi,2}\) _is positive definite and if_ \[q^{2}\varphi(p)^{4}-2q\varphi(p^{3})\varphi(p)^{3}+(4q^{2} \varphi(p^{2})-2q\varphi(p^{2})^{2}+\varphi(p^{3})^{2}-3q^{3})\varphi(p)^{2}\] \[+2(2q-\varphi(p^{2}))\varphi(p^{3})\varphi(p^{2})\varphi(p)+ \varphi(p^{2})^{4}-2q^{2}\varphi(p^{2})^{2}-q\varphi(p^{3})^{2}+q^{4}>0\] _where_ \(q=1/\sqrt{p}\)_._
4. _The matrix_ \(\mathcal{M}_{\varphi,K}\) _is positive definite if and only if_ \(\mathcal{M}_{\varphi,K-1}\) _is positive definite and if_ \(\det\mathcal{M}_{\varphi,K}>0\)_._
Here is another situation where we are able to conclude.
**Lemma 5.5**.: _If for every \(i\leq K\), we have_
\[\sum_{1\leq\ell\leq i-1}\varphi(p^{\ell})p^{\ell/2}+\sum_{1\leq\ell\leq K-i} \varphi(p^{\ell})<1,\]
_then the matrix \(\mathcal{M}_{\varphi,K}\) is definite positive._
Proof.: By the Gershgorin Disk Theorem (see the book [16] by Varga), we know that each eigenvalue of \(\mathcal{M}_{\varphi,K}\) lies inside one of the Gershgorin disks. As these eigenvalues are real numbers, the disks reduce to segments. They are the intervals of center \(1/p^{i/2}\) and radius \(\sum_{j\neq i}\frac{1}{p^{\min(i,j)/2}}\varphi(p^{\max(i,j)-\min(i,j)})\). When this radius is strictly less than the center, we are sure that each eigenvalue is positive. Massaging this condition a bit gives the one stated in the lemma, hence completing the proof.
_A tensor product-like situation._ Lemma 5.2 is enough to solve our main problem when \(M_{1}\) and \(M_{2}\) are coprime squarefree integers. We need to go somewhat further. Let \(S\) be a divisor closed set. We consider the quadratic form
\[\sum_{a,b\in S}\varphi\bigg{(}\frac{\operatorname{lcm}(a,b)}{\gcd(a,b)}\bigg{)} x_{a}x_{b} \tag{9}\]
where the variables \(x_{a}\)'s are also multiplicatively split, i.e.
\[x_{a}=\prod_{p^{k}\|a}x_{p^{k}}. \tag{10}\]
Let \(S(p)\) be the subset of \(S\) consisting of \(1\) and of the powers of \(p\) contained in \(S\). We extend \(S\) so that it contains every product of integers taken from any collection of distinct sets \(S(p)\)*. We then find that
Footnote *: This is not automatically the case, as the example \(S=\{1,2,3,5,6,10\}\) shows, since \(30\) does not belong to \(S\)
\[\sum_{a,b\in S}\frac{1}{\sqrt{\gcd(a,b)}}\varphi\bigg{(}\frac{\operatorname{ lcm}(a,b)}{\gcd(a,b)}\bigg{)}x_{a}x_{b}=\prod_{p\in S}\bigg{(}\sum_{p^{k},p^{ \ell}\in S(p)}\frac{\varphi\big{(}p^{\max(k,\ell)-\min(k,\ell)}\big{)}}{p^{\min (k,\ell)/2}}x_{p^{k}}x_{p^{\ell}}\bigg{)}. \tag{11}\]
We check this identity simply by expanding the right-hand side and seeing that every summand from the left-hand side appears exactly once. Then Lemmas 5.4 and 5.5 apply.
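The factorization (11) is a purely formal consequence of multiplicativity and of the splitting (10), and can be illustrated numerically. In the Python sketch below (a toy example of ours), \(S\) is the set of divisors of \(12\), the variables \(x_{a}\) are multiplicatively split with random values at prime powers, and \(\varphi\) is an arbitrary multiplicative function.

```python
# Toy numerical check of the factorization (11) (illustrative only).
import numpy as np
from sympy import divisors, factorint
from math import gcd, prod

S = divisors(12)                               # {1, 2, 3, 4, 6, 12}
Sp = {2: [1, 2, 4], 3: [1, 3]}                 # the sets S(2) and S(3)
rng = np.random.default_rng(0)
xv = {q: (1.0 if q == 1 else rng.uniform(0.1, 1.0)) for q in (1, 2, 3, 4)}

def x(a):                                      # multiplicatively split variables, cf. (10)
    return prod(xv[p**k] for p, k in factorint(a).items()) if a > 1 else 1.0

def phi(n):                                    # any multiplicative phi will do here
    return prod((k + 1) / p**k for p, k in factorint(n).items()) if n > 1 else 1.0

lhs = sum(phi(a * b // gcd(a, b)**2) / np.sqrt(gcd(a, b)) * x(a) * x(b)
          for a in S for b in S)
rhs = prod(sum(phi(max(q, r) // min(q, r)) / np.sqrt(min(q, r)) * xv[q] * xv[r]
               for q in Sp[p] for r in Sp[p]) for p in Sp)
print(np.isclose(lhs, rhs))                    # True
```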
## 6. Proof of the main result
Proof.: By [1, Theorem 1.4], we have
\[S(x)=\sum_{n\leq x}(f_{1}*f_{2})(n)=\sum_{a|M_{1}M_{2}}g(a)\Delta(x/a)\]
where \(g=f_{1}*f_{2}*\mu*\mu\). We infer from this formula that
\[\eqalign{&\int_{1}^{X}|S(x)|^{2}dx=\sum_{a,b|M_{1}M_{2}}g(a)g(b)\int_{1}^{X} \Delta(x/a)\Delta(x/b)dx\cr&={(1+o(1))\over 6\pi^{2}}X^{3/2}\sum_{a,b|M_{1}M_{2}}g(a)g (b){\gcd(a,b)^{3/2}\over ab}\sum_{n=1}^{\infty}{\tau(an/\gcd(a,b))\tau(bn/\gcd( a,b))\over n^{3/2}}}\]
by Lemma 4.1. We next use Lemma 3.2 to infer that
\[\lim_{X\to\infty}{1\over X^{3/2}}\int_{1}^{X}|S(x)|^{2}dx={\zeta(3/2)^{4}\over 6\pi^{2}\zeta(3)}\sum_{a,b|M_{1}M_{2}}g(a)g(b){1\over\sqrt{\gcd(a,b)}}\varphi\biggl{(}{{\rm lcm}(a,b)\over\gcd(a,b)}\biggr{)}\]
where \(\varphi\) is multiplicative and at prime powers:
\[\eqalign{\varphi(p^{k})&={(k+1)\over p^{k}}{1\over 1+p^{-3/2}}\biggl{(}1-{(k-1) \over(k+1)p^{3/2}}\biggr{)}\cr&={1\over p^{k}(1+p^{-3/2})}\biggl{(}(k+1)-(k-1) p^{-3/2}\biggr{)}\cr&={1\over p^{k}(1+p^{-3/2})}\biggl{(}k(1-p^{-3/2})+(1+p^{-3/2}) \biggr{)}\cr&={k\beta(p)+1\over p^{k}},}\]
where
\[\beta(p)={1-p^{-3/2}\over 1+p^{-3/2}}.\]
Now, we can write
\[{1\over\sqrt{\gcd(a,b)}}\varphi\left({{\rm lcm}(a,b)\over\gcd(a,b)}\right)={1 \over(ab)^{1/4}}\left({{\rm lcm}(a,b)\over\gcd(a,b)}\right)^{1/4}\varphi\left( {{\rm lcm}(a,b)\over\gcd(a,b)}\right).\]
Since the terms \(a^{-1/4}\) and \(b^{-1/4}\) can be absorbed into the variables \(g(a)\) and \(g(b)\) of the quadratic form, it is enough to consider the quantity
\[\varphi^{*}\left({{\rm lcm}(a,b)\over\gcd(a,b)}\right),\quad{\rm where}\;\; \varphi^{*}(n)=n^{1/4}\varphi(n).\]
We note that, at the prime power \(p^{k}\), we have
\[\varphi^{*}(p^{k})=p^{k/4}\varphi(p^{k})={k\beta(p)+1\over p^{3k/4}}.\]
Due to (11) and the discussion before it, we now restrict to the prime power case, that is, we look to matrices of the form
\[\mathcal{M}_{K}=\big{(}\varphi^{*}(p^{|i-j|})\big{)}_{i,j\leq K}.\]
As we are dealing with a given prime \(p\), we shorten \(\beta(p)\) in \(\beta\).
Since \(\varphi^{*}\) is not completely multiplicative, it is not clear how to handle the matrix \(\mathcal{M}_{K}\) directly. Our aim will thus be to transform it into another matrix which is, in some way, associated with a completely multiplicative function. So, let us consider
\[\mathcal{A}_{K}=\mathcal{U}_{K}^{\top}\mathcal{M}_{K}\,\mathcal{U}_{K},\]
where,
\[\mathcal{U}_{K}(i,j)=\begin{cases}\dfrac{\mu(p^{|i-j|})}{p^{3(|i-j|)/4}},& \text{when }i\geq j\,\,\,\text{or}\,\,\,(i,j)=(K-1,K),\\ 0,&\text{otherwise}.\end{cases} \tag{14}\]
Simply speaking, \(\mathcal{U}_{K}\) is \(1\) on the diagonal and \(-p^{-3/4}\) on all \((i+1,i)\) as well as \((K-1,K)\). Also
\[\det(\mathcal{U}_{K})=1-p^{-3/2}.\]
We now calculate the entries of the matrix \(\mathcal{A}_{K}\). We have the following:
**Proposition 6.1**.: _The matrix \(\mathcal{A}_{K}\) above is given by:_
\[\mathcal{A}_{K}(i,j)=\beta(1-p^{-3/2})\cdot\begin{cases}p^{-3|i-j|/4},&\text{ when }\,1\leq i,j\leq K-1\,\,\text{or }i=j=K,\\ 0,&\text{otherwise}.\end{cases}\]
We begin with the following lemma:
**Lemma 6.1**.: _We have_
\[\varphi^{*}(p^{m})-p^{-3/4}\varphi^{*}(p^{|m-1|})=p^{-3m/4}\beta,\quad\text{ for all }\,m\geq 0.\]
Proof.: First, assume \(m\geq 1\). We have
\[\varphi^{*}(p^{m})-p^{-3/4}\varphi^{*}(p^{m-1})=\frac{m\beta+1}{p^{3m/4}}-p^{-3/4 }\frac{(m-1)\beta+1}{p^{3(m-1)/4}}=p^{-3m/4}\beta.\]
When \(m=0\), we have
\[1-p^{-3/4}\varphi^{*}(p)=1-p^{-3/2}(\beta+1)=1-\frac{2p^{-3/2}}{1+p^{-3/2}}=\beta.\]
Now, we shall proceed with the proof of the Proposition 6.1.
Proof of Proposition 6.1.: Let us first assume that \(1\leq i,j\leq K-1\). We have
\[\begin{split}\mathcal{A}_{K}(i,j)&=\sum_{k_{1},k_{2 }}\mathcal{U}_{K}^{\top}(i,k_{1})\mathcal{M}_{K}(k_{1},k_{2})\,\mathcal{U}_{K }(k_{2},j)\\ &=\sum_{\begin{subarray}{c}k_{1}-i\in\{0,1\}\\ k_{2}-j\in\{0,1\}\end{subarray}}\frac{\mu(p^{k_{1}-i})}{p^{3(k_{1}-i)/4}} \frac{\mu(p^{k_{2}-j})}{p^{3(k_{2}-j)/4}}\varphi^{*}(p^{|k_{1}-k_{2}|})\\ &=\bigg{(}\varphi^{*}(p^{|i-j|})\big{(}1+p^{-3/2}\big{)}-\frac{ \varphi^{*}(p^{|i-j+1|})+\varphi^{*}(p^{|i-j-1|})}{p^{3/4}}\bigg{)}.\end{split} \tag{15}\]
Here, we do not have the contribution coming from \(\mathcal{U}_{K}(K-1,K)\) or \(\mathcal{U}_{K}^{\top}(K,K-1)\) as we have assumed \(i,j\leq K-1\). This assumption is necessary because we are considering the values \(k_{1}=i+1\) and \(k_{2}=j+1\) (both of which should remain \(\leq K\)).
First, let us consider the case \(i\geq j\). Letting \(i-j=m\geq 0\), (15) becomes
\[\begin{split}\mathcal{A}_{K}(i+m,i)&=\varphi^{*}(p ^{m})-p^{-3/4}\varphi^{*}(p^{|m-1|})-p^{-3/4}\big{(}\varphi^{*}(p^{m+1})-p^{-3 /4}\varphi^{*}(p^{m})\big{)}\\ &=p^{-3m/4}\beta-p^{-3/4}p^{-3(m+1)/4}\beta\\ &=\beta(1-p^{-3/2})p^{-3m/4}.\end{split}\]
Similarly, for \(j\geq i\), we will obtain the same expression in terms of \(m=j-i\). This proves Proposition 6.1 for \(1\leq i,j\leq K-1\).
Next, we consider the case when one of \(i\) or \(j\) equals \(K\).
**Claim**: \(\mathcal{A}_{K}(i,K)=\mathcal{A}_{K}(K,j)=0\), for all \(1\leq i,j\leq K-1\).
We revert to the first line of the expression (15). Letting \(m=K-i\geq 1\), we obtain
\[\eqalign{{\mathcal{A}}_{K}(i,K)&=\sum_{\begin{subarray}{c}k_{1}\in\{i,i+1\}\\ k_{2}\in\{K-1,K\}\end{subarray}}{\mu(p^{k_{1}-i})\over p^{3(k_{1}-i)/4}}{\mu(p^ {K-k_{2}})\over p^{3(K-k_{2})/4}}\varphi^{*}(p^{|k_{1}-k_{2}|})\cr&=-p^{-3/4} \varphi^{*}(p^{m-1})+p^{-3/2}\varphi^{*}(p^{|m-2|})+\varphi^{*}(p^{m})-p^{-3/4}\varphi^{*}(p^{m-1})\cr&=-p^{-3/4}\bigl{(}\varphi^{*}(p^{m-1})-p^{-3/4}\varphi^{*}(p^{|m-2|}) \bigr{)}+\varphi^{*}(p^{m})-p^{-3/4}\varphi^{*}(p^{m-1})\cr&=-p^{-3/4}p^{-3(m-1 )/4}\beta+p^{-3m/4}\beta=0.\cr}\]
It similarly follows that \({\mathcal{A}}_{K}(K,j)=0\) for \(1\leq j\leq K-1\), proving the claim.
Next, we see that
\[\eqalign{{\mathcal{A}}_{K}(K,K)&=\sum_{k_{1},k_{2}\in\{K-1,K\}}{\mu(p^{K-k_{1} })\over p^{3(K-k_{1})/4}}{\mu(p^{K-k_{2}})\over p^{3(K-k_{2})/4}}\,\varphi^{*} (p^{|k_{1}-k_{2}|})\cr&=1-p^{-3/4}\varphi^{*}(p)-p^{-3/4}\bigl{(}\varphi^{*}(p )-p^{-3/4}\bigr{)}\cr&=\beta-p^{-3/4}\bigl{(}p^{-3/4}\beta\bigr{)}=\beta(1-p^{ -3/2}).\cr}\]
This completes the proof of Proposition 6.1.
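A quick numerical confirmation of Proposition 6.1 is possible as well. The Python sketch below (illustrative only; the choices \(p=2\) and \(K=6\) are ours) forms \(\mathcal{M}_{K}\) and \(\mathcal{U}_{K}\), verifies that \(\mathcal{U}_{K}^{\top}\mathcal{M}_{K}\mathcal{U}_{K}\) has the structure stated in the proposition, and checks that \(\mathcal{M}_{K}\) is positive definite.

```python
# Numerical illustration of Proposition 6.1 (not from the paper).
import numpy as np

p, K = 2.0, 6
q = p**-0.75                                   # p^{-3/4}
beta = (1 - p**-1.5) / (1 + p**-1.5)

def phi_star(m):                               # phi*(p^m) = (m beta + 1) p^{-3m/4}
    return (m * beta + 1) * q**m

I = list(range(1, K + 1))
M = np.array([[phi_star(abs(i - j)) for j in I] for i in I])

U = np.zeros((K, K))
for a, i in enumerate(I):
    for b, j in enumerate(I):
        if i == j:
            U[a, b] = 1.0
        elif i == j + 1 or (i, j) == (K - 1, K):
            U[a, b] = -q                       # mu(p)/p^{3/4}; mu(p^{>=2}) = 0 elsewhere

A = U.T @ M @ U

A_expected = np.zeros((K, K))
for a, i in enumerate(I):
    for b, j in enumerate(I):
        if (i <= K - 1 and j <= K - 1) or (i == j == K):
            A_expected[a, b] = beta * (1 - p**-1.5) * q**abs(i - j)

print(np.allclose(A, A_expected))              # True
print(np.all(np.linalg.eigvalsh(M) > 0))       # M_K is positive definite: True
```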
Now, since \(n\mapsto n^{-3/4}\) is completely multiplicative, by repeating almost verbatim the proof of Lemma 5.1, we obtain that for some \(c>0\), the matrix
\[{\mathcal{A}}_{K}=c\left(\left({{\rm lcm}(a,b)\over{\rm gcd}(a,b)}\right)^{-3 /4}\right)_{a,b\in\{1,\ldots,p^{K}\}}\]
is positive definite for all \(K\). Moreover, since \({\mathcal{A}}_{K}={\mathcal{U}}_{K}^{\top}{\mathcal{M}}_{K}{\mathcal{U}}_{K}\), we have
\[\det({\mathcal{A}}_{K})=\det({\mathcal{U}}_{K})^{2}\,\det({\mathcal{M}}_{K})= (1-p^{-3/2})^{2}\,\det({\mathcal{M}}_{K}).\]
This proves that \(\det({\mathcal{M}}_{K})>0\) and by induction over \(K\) in Lemma 5.4, \({\mathcal{M}}_{K}\) is positive definite for all \(K\).
The factorization (11) completes the proof of Theorem 1.1.
**Acknowledgements.** This project started while the first author was a visiting Professor at Aix-Marseille Universite, and he is thankful for their kind hospitality. The first author is funded by CNPq grant PDE no. 400010/2022-4 (200121/2022-7)
and by CNPq grant Universal no. 403037/2021-2. The second and third authors are supported by the joint FWF-ANR project Arithrand: FWF: I 4945-N and ANR-20-CE91-0006. The main bulk of this paper was built while the fourth author was a guest of the Aix-Marseille Universite I2M laboratory. We thank this institution for its support.
|
2304.01626 | On trialities and their absolute geometries | We introduce the notion of moving absolute geometry of a geometry with
triality and show that, in the classical case where the triality is of type
$(I_\sigma)$ and the absolute geometry is a generalized hexagon, the moving
absolute geometry also gives interesting flag-transitive geometries with
Buekenhout diagram with parameters $(d_p, g, d_L) = (5, 3, 6)$ for the groups
$G_2(k)$ and $^3D_4(k)$, for any integer $k \geq 2$. We also classify the
classical absolute geometries for geometries with trialities but no dualities
coming from maps of Class III with automorphism group $L_2(q^3)$, where $q$ is
a power of a prime. We then investigate the moving absolute geometries for
these geometries, illustrating their interest in this case. | Dimitri Leemans, Klara Stokes, Philippe Tranchida | 2023-04-04T08:30:44Z | http://arxiv.org/abs/2304.01626v1 | # On trialities and their absolute geometries
###### Abstract.
We introduce the notion of moving absolute geometry of a geometry with triality and show that, in the classical case where the triality is of type \((I_{\sigma})\) and the absolute geometry is a generalized hexagon, the moving absolute geometry also gives interesting flag-transitive geometries with Buekenhout diagram
Key words and phrases:Incidence geometry, triality, absolute geometry 2020 Mathematics Subject Classification: 51A10,51E24,20C33 This research was made possible thanks to an Action de Recherche Concertee grant from the Communaute Francaise Wallonie-Bruxelles. \({}^{1}\)See [http://neo-classical-physics.info/uploads/3/4/3/6/34363841/study-analytical_kinematics](http://neo-classical-physics.info/uploads/3/4/3/6/34363841/study-analytical_kinematics). pdf for an english translation of [15].
where \(f\) is the cardinality of the underlying field: \(f=k\) for \(G\cong G_{2}(k)\) and \(f=k^{3}\) for \(G\cong{}^{3}D_{4}(k)\). Moreover, we prove that their Buekenhout diagrams are as follows.
We also revisit the geometries with trialities constructed in [11]. We show that their absolute geometries are basically unions of paths of length two and we compute the moving absolute geometries of some of them, finding more interesting geometries. The absolute geometries of thin geometries, or more generally of geometries with rather small residues of rank 2, will often be very poorly connected (see Theorem 3.13 for an example of such behaviour). To get richer geometries from a triality in that context, the moving absolute geometry is then a good candidate.
## 2. Preliminaries
### Incidence and Coset Geometries
At their core, most of the geometric objects of interest to mathematicians are composed of elements together with some relation between them. This very general notion is made precise by the notion of an incidence system, or an incidence geometry.
A triple \(\Gamma=(X,*,\tau)\) is called an _incidence system_ over \(I\) if
1. \(X\) is a set whose elements are called the _elements_ of \(\Gamma\)
2. \(*\) is a symmetric and reflexive relation on \(X\). It is called the _incidence relation_ of \(\Gamma\).
3. \(\tau\) is a map from \(X\) to \(I\), called the _type map_ of \(\Gamma\), such that distinct elements \(x,y\in X\) with \(x*y\) satisfy \(\tau(x)\neq\tau(y)\). Elements of \(\tau^{-1}(i)\) are called the elements of type \(i\).
The _rank_ of \(\Gamma\) is the cardinality of the type set \(I\). A _flag_ in an incidence system \(\Gamma\) over \(I\) is a set of pairwise incident elements. The type of a flag \(F\) is \(\tau(F)\), that is, the set of types of the elements of \(F\). A _chamber_ is a flag of type \(I\). An incidence system \(\Gamma\) is an _incidence geometry_ if all its maximal flags are chambers.
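To fix ideas, here is a minimal sketch encoding a small incidence system in Python and testing the flag and chamber conditions; the element names and the helper functions are ad hoc choices made only for this illustration and are not taken from the paper.

```python
# A minimal, informal encoding of an incidence system (X, *, tau).
from itertools import combinations

X = ["p1", "p2", "l1", "pi1"]                       # elements
tau = {"p1": "point", "p2": "point", "l1": "line", "pi1": "plane"}
I = set(tau.values())                               # type set, rank 3
star = {frozenset(e) for e in [("p1", "l1"), ("p2", "l1"),
                               ("p1", "pi1"), ("p2", "pi1"), ("l1", "pi1")]}

def incident(x, y):
    return x == y or frozenset((x, y)) in star      # * is reflexive and symmetric

def is_flag(F):
    return all(incident(x, y) for x, y in combinations(F, 2))

def is_chamber(F):
    return is_flag(F) and {tau[x] for x in F} == I

print(is_flag({"p1", "l1", "pi1"}), is_chamber({"p1", "l1", "pi1"}))  # True True
print(is_flag({"p1", "p2"}))   # False: two distinct points are never incident
```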
Let \(F\) be a flag of \(\Gamma\). An element \(x\in X\) is _incident_ to \(F\) if \(x*y\) for all \(y\in F\). The _residue_ of \(\Gamma\) with respect to \(F\), denoted by \(\Gamma_{F}\), is the incidence system formed by all the elements of \(\Gamma\) incident to \(F\) but not in \(F\). The _rank_ of a residue is equal to \(\operatorname{rank}(\Gamma)\) - \(|F|\).
The _incidence graph_ of \(\Gamma\) is a graph with vertex set \(X\) and where two elements \(x\) and \(y\) are connected by an edge if and only if \(x*y\). Whenever we talk about the distance between two elements \(x\) and \(y\) of a geometry \(\Gamma\), we mean the distance in the incidence graph of \(\Gamma\) and simply denote it by \(d_{\Gamma}(x,y)\), or even \(d(x,y)\) if the context allows.
Let \(\Gamma=\Gamma(X,*,\tau)\) be an incidence geometry over the type set \(I\). A correlation of \(\Gamma\) is a bijection \(\phi\) of \(X\) respecting the incidence relation \(*\) and such that, for every \(x,y\in X\), if \(\tau(x)=\tau(y)\) then \(\tau(\phi(x))=\tau(\phi(y))\). If, moreover, \(\phi\) fixes the types of every element (i.e \(\tau(\phi(x))=\tau(x)\) for all \(x\in X\)), then \(\phi\) is said to be an automorphism of \(\Gamma\). The _type_ of a correlation \(\phi\) is the permutation it induces on the type set \(I\). A correlation of type \((i,j)\) is called a duality if it has order 2. A correlation of type \((i,j,k)\) is called a triality if it has order 3. The group of all correlations of \(\Gamma\) is denoted by \(\operatorname{Cor}(\Gamma)\) and the automorphism group of \(\Gamma\) is denoted by \(\operatorname{Aut}(\Gamma)\). Remark that \(\operatorname{Aut}(\Gamma)\) is a normal subgroup of \(\operatorname{Cor}(\Gamma)\) since it is the kernel of the action of \(\operatorname{Cor}(\Gamma)\) on \(I\).
Incidence geometries can be obtained from a group \(G\) together with a set \((G_{i})_{i\in I}\) of subgroups of \(G\).
The _coset geometry_\(\Gamma(G,(G_{i})_{i\in I})\) is the incidence geometry over the type set \(I\) where:
1. The elements of type \(i\in I\) are right cosets of the form \(G_{i}\cdot g\), \(g\in G\).
2. The incidence relation is given by non empty intersection. More precisely, the element \(G_{i}\cdot g\) is incident to the element \(G_{j}\cdot k\) if and only if \(i\neq j\) and \(G_{i}\cdot g\cap G_{j}\cdot k\neq\emptyset\).
A lot of properties of incidence geometries, such as connectedness, residual connectedness, residues, flag-transitivity, and so on, can be translated to group theoretical properties of \((G,(G_{i})_{i\in I})\) (see [5] for a more detailed exposition).
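As a toy computational illustration of the coset-geometry construction, the sketch below builds the coset geometry of \(G=\operatorname{Sym}(3)\) with two subgroups generated by transpositions; the group, the subgroups and the helper names are choices made only for this example. The resulting geometry is the ordinary triangle: three points, three lines and six chambers.

```python
from itertools import permutations

def compose(p, q):                      # permutation product: apply q first, then p
    return tuple(p[q[i]] for i in range(len(q)))

G = [tuple(p) for p in permutations(range(3))]          # Sym(3), 6 elements
G_a = [(0, 1, 2), (1, 0, 2)]                             # <(0 1)>
G_b = [(0, 1, 2), (0, 2, 1)]                             # <(1 2)>

def right_cosets(H, G):
    seen, cosets = set(), []
    for g in G:
        coset = frozenset(compose(h, g) for h in H)      # the right coset H.g
        if coset not in seen:
            seen.add(coset)
            cosets.append(coset)
    return cosets

points = right_cosets(G_a, G)            # elements of type "a"
lines  = right_cosets(G_b, G)            # elements of type "b"

# Incidence: two cosets of different types are incident iff they intersect.
incident = [(i, j) for i, p in enumerate(points)
                   for j, l in enumerate(lines) if p & l]

print(len(points), len(lines), len(incident))   # 3 points, 3 lines, 6 chambers
```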
Francis Buekenhout introduced in [4] a new diagram associated to \(\Gamma\). His idea was to associate to each rank two residue a set of three integers giving information on its incidence graph. Let \(\Gamma\) be a rank 2 geometry. We can consider \(\Gamma\) to have type set \(I=\{P,L\}\), standing for points and lines. The _point-diameter_, denoted by \(d_{P}(\Gamma)=d_{P}\), is the largest integer \(k\) such that there exists a point \(p\in P\) and an element \(x\in\Gamma\) such that \(d(p,x)=k\). Similarly the _line-diameter_, denoted by \(d_{L}(\Gamma)=d_{L}\), is the largest integer \(k\) such that there exists a line \(l\in L\) and an element \(x\in\Gamma\) such that \(d(l,x)=k\). Finally, the _gonality_ of \(\Gamma\), denoted by \(g(\Gamma)=g\), is half the length of the smallest circuit in the incidence graph of \(\Gamma\).
If a rank 2 geometry \(\Gamma\) has \(d_{P}=d_{L}=g=n\) for some natural number \(n\), we say that it is a _generalized \(n\)-gon_. Generalized 2-gons are also called generalized digons. They are in some sense trivial geometries since all points are incident to all lines. Their incidence graphs are complete bipartite graphs. Generalized 3-gons are projective planes.
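These parameters can be computed mechanically from the incidence graph by breadth-first search. The sketch below does so for the Fano plane, the projective plane of order 2, and recovers \(d_{P}=g=d_{L}=3\); the encoding of points as nonzero vectors of \(\mathbb{F}_{2}^{3}\) and the helper names are assumptions of this sketch only.

```python
from collections import deque

points = list(range(1, 8))                       # nonzero vectors of F_2^3, as integers
lines = {frozenset({a, b, a ^ b}) for a in points for b in points if a != b}

vertices = [("P", p) for p in points] + [("L", l) for l in lines]
def neighbours(v):
    kind, x = v
    if kind == "P":
        return [("L", l) for l in lines if x in l]
    return [("P", p) for p in x]

def distances(v):                                # BFS in the incidence graph
    dist, queue = {v: 0}, deque([v])
    while queue:
        u = queue.popleft()
        for w in neighbours(u):
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

d_P = max(max(distances(("P", p)).values()) for p in points)
d_L = max(max(distances(("L", l)).values()) for l in lines)

def girth():                                     # length of a shortest circuit
    best = None
    for v in vertices:
        dist, parent, queue = {v: 0}, {v: None}, deque([v])
        while queue:
            u = queue.popleft()
            for w in neighbours(u):
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    queue.append(w)
                elif parent[u] != w:             # non-tree edge closes a circuit
                    c = dist[u] + dist[w] + 1
                    best = c if best is None else min(best, c)
    return best

print(d_P, d_L, girth() // 2)                    # expected: 3 3 3
```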
Let \(\Gamma\) be a geometry over \(I\). The _Buekenhout diagram_ (or diagram for short) \(D\) for \(\Gamma\) is a graph whose vertex set is \(I\). Each edge \(\{i,j\}\) is labeled with a collection \(D_{ij}\) of rank 2 geometries. We say that \(\Gamma\) belongs to \(D\) if every residue of rank 2 of type \(\{i,j\}\) of \(\Gamma\) is one of those listed in \(D_{ij}\), for every pair \(i\neq j\in I\). In most cases, we use conventions to turn a diagram \(D\) into a labeled graph. The most common convention is to not draw an edge between two vertices \(i\) and \(j\) if all residues of type \(\{i,j\}\) are generalized digons, and to label the edge \(\{i,j\}\) by a natural integer \(n\) if all residues of type \(\{i,j\}\) are generalized \(n\)-gons. It is also common to omit the label when \(n=3\). If the edge \(\{i,j\}\) is labeled by a triple \((d_{ij},g_{ij},d_{ji})\) it means that every residue of type \(\{i,j\}\) has \(d_{P}=d_{ij},g=g_{ij},d_{L}=d_{ji}\). We can also add information to the vertices of a diagram. We can label the vertex \(i\) with the number \(n_{i}\) of elements of type \(i\) in \(\Gamma\). Moreover, if for all flags \(F\) of co-type \(i\), we have that \(|\Gamma_{F}|=s_{i}+1\), we will also label the vertex \(i\) with the integer \(s_{i}\).
Let \(\Gamma\) be an incidence geometry and let \(\phi\) be a correlation of \(\Gamma\). The action of this correlation \(\phi\) on \(\Gamma\) will induce a new geometry, called the absolute geometry of \(\Gamma\) with respect to \(\phi\).
The _absolute geometry_ of \(\Gamma\) with respect to \(\phi\) is the incidence geometry \(\Gamma_{\phi}=(X_{\phi},*_{\phi},\tau_{\phi})\) over \(J\) where
1. The set \(J\) is the collection of all \(\phi\)-orbits \(K\) on \(I\) for which there exist invariant flags of type \(K\);
2. The set \(X_{\phi}\) is the set of all non empty \(\phi\)-invariant flags of \(\Gamma\);
3. The relation \(*_{\phi}\) is determined by \(F*_{\phi}G\) if and only if \(F\cup G\) is a flag of \(\Gamma\);
4. The function \(\tau_{\phi}\colon X_{\phi}\to J\) is the map assigning to a minimal \(\phi\)-invariant flag \(F\) the set of \(\phi\)-orbits in \(\tau(F)\).
This concept of absolute geometry motivated Tits in [16] to define generalized polygons.
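As a toy illustration of this definition (far simpler than the trialities considered later), the sketch below computes the absolute geometry of the standard polarity of the Fano plane: the correlation exchanges a point \(v\), seen as a nonzero vector of \(\mathbb{F}_{2}^{3}\), with the line of vectors orthogonal to it, and the invariant flags are the incident pairs \(\{v,v^{\perp}\}\). The coordinates and names are choices made for this example only.

```python
points = list(range(1, 8))                                    # vectors as 3-bit integers
def dot(a, b):                                                # dot product over F_2
    return bin(a & b).count("1") % 2

lines = {v: frozenset(w for w in points if dot(v, w) == 0) for v in points}

phi = {("P", v): ("L", lines[v]) for v in points}             # point -> orthogonal line
phi.update({("L", lines[v]): ("P", v) for v in points})       # line -> point
assert all(phi[phi[x]] == x for x in phi)                     # phi has order 2: a duality

# Invariant flags: the pairs {v, v^perp} that are actually flags, i.e. v on v^perp.
absolute_flags = [(v, lines[v]) for v in points if v in lines[v]]
print(sorted(v for v, l in absolute_flags))                   # [3, 5, 6]
```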
### Maps
A _map_\(\mathcal{M}\) is a 2-cell embedding of a graph into a closed surface. In other words, a map is composed of a set \(V=V(\mathcal{M})\) of vertices, a set \(E=E(\mathcal{M})\) of edges and finally a set \(F=F(\mathcal{M})\) of faces, which are the simply connected components obtained by cutting the surface along \(V\cup E\). A _flag_\(F\) of a map \(\mathcal{M}\) is a triple \(\{v,e,f\}\) with \(v\in V,e\in E,f\in F\) and such that each element is incident to the two others.
An _automorphism_ of a map \(\mathcal{M}\) is a permutation of its elements preserving the three sets \(V,E\) and \(F\) and sending incident pairs to incident pairs and non-incident pairs to non-incident
pairs. The group of all automorphisms of a map \(\mathcal{M}\) is denoted by \(\mathrm{Aut}(\mathcal{M})\). A map is said to be _reflexible_ if \(\mathrm{Aut}(\mathcal{M})\) has only one orbit on the set of flags of \(\mathcal{M}\).
One can always define three operations on the set of flags of a map \(\mathcal{M}\). Let \(F=\{v,e,f\}\) be a flag. Then there is exactly one flag \(F_{0}\) of \(\mathcal{M}\) that coincides with \(F\) on the elements \(e\) and \(f\) but has a different vertex. Similarly there is a unique flag \(F_{1}\), respectively \(F_{2}\), coinciding with \(F\) except for \(e\), respectively \(f\). The three operations, denoted by \(\rho_{0},\rho_{1}\) and \(\rho_{2}\), send each flag to its unique \(i\)-adjacent flag, \(i=0,1,2\). It is easily seen that \(\rho_{0}\) and \(\rho_{2}\) always commute. In other words, there is always an action of the Coxeter group \(C=\langle\rho_{0},\rho_{1},\rho_{2}\mid\rho_{0}^{2}=\rho_{1}^{2}=\rho_{2}^{2}= (\rho_{0}\rho_{2})^{2}=e\rangle\cong V_{4}*C_{2}\) on the set of flags of a map \(\mathcal{M}\). If the map \(\mathcal{M}\) is reflexible, its automorphism group \(\mathrm{Aut}(\mathcal{M})\) is the quotient of \(C\) by the stabilizer of a flag of \(\mathcal{M}\). Conversely, given a finite quotient of \(C\), it is possible to reconstruct a map \(\mathcal{M}\) from the action of \(C\) on its flags. This gives a correspondence between finite quotients of \(C\) acting on sets and maps \(\mathcal{M}\) on closed surfaces. This correspondence is functorial, in the sense that it sends \(C\)-equivariant maps to morphisms of maps, and vice-versa. A reflexible map then corresponds to an epimorphism of \(C\) to a finite group \(G\).
Given a base map \(\mathcal{M}\), there are some operations that one can apply to \(\mathcal{M}\) to obtain new maps. One of them is called the _dual operator_\(D\) and comes from the classical notion of duality, on polytopes for example. The dual map \(D(\mathcal{M})\) is obtained from \(\mathcal{M}\) by switching the roles of vertices and faces. From the group theoretic perspective, the operator \(D\) then simply exchanges \(\rho_{0}\) and \(\rho_{2}\). Another such operator is the _Petrie dual operator_\(P\). A _Petrie path_ is a "left-right" path in \(\mathcal{M}\). This means that the path turns once left at a vertex and next time it turns right, and so on, until it comes back to the starting vertex. The operator \(P\) fixes the vertices and the edges of the map \(\mathcal{M}\) but replaces the faces. The map \(P(\mathcal{M})\) is obtained by deleting the faces of \(\mathcal{M}\) and then, for every Petrie path, gluing a disk with boundary corresponding to the Petrie path. This corresponds to fixing \(\rho_{1}\) and \(\rho_{2}\) and sending \(\rho_{0}\) to \(\rho_{0}\rho_{2}\). One can then also consider compositions of these two operators \(D\) and \(P\). Wilson showed that these two operators and their compositions form a group \(\Sigma\cong S_{3}\)[18]. The operators \(D\circ P\) and \(P\circ D\) are thus of order \(3\). We will refer to these two operators of order \(3\) as _trialities_ of the map \(\mathcal{M}\) and we will say that the map \(\mathcal{M}\) has trialities if \(\mathcal{M}\) is isomorphic to \(D\circ P(\mathcal{M})\) and \(P\circ D(\mathcal{M})\).
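The action of \(D\) and \(P\) on a generating triple \((\rho_{0},\rho_{1},\rho_{2})\) can be checked mechanically. The short sketch below takes an arbitrary pair of commuting involutions in \(\operatorname{Sym}(4)\) — chosen purely for illustration, this is not one of the maps discussed here — and verifies that \(D\circ P\) has order \(3\) on the triple.

```python
def mul(p, q):                       # permutation product: apply q first, then p
    return tuple(p[q[i]] for i in range(len(q)))

rho0 = (1, 0, 2, 3)                  # the transposition (0 1)
rho1 = (0, 2, 1, 3)                  # the transposition (1 2)
rho2 = (0, 1, 3, 2)                  # the transposition (2 3); commutes with rho0

def D(t):                            # duality: swap rho0 and rho2
    r0, r1, r2 = t
    return (r2, r1, r0)

def P(t):                            # Petrie duality: rho0 -> rho0*rho2
    r0, r1, r2 = t
    return (mul(r0, r2), r1, r2)

def T(t):                            # one of the two order-3 operators
    return D(P(t))

triple = (rho0, rho1, rho2)
print(T(triple) == triple)                     # False: D∘P is not the identity
print(T(T(T(triple))) == triple)               # True:  (D∘P)^3 is the identity
```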
Jones and Thornton showed that the group \(\Sigma\) is the outer automorphism group \(\mathrm{Out}(C)=\mathrm{Aut}(C)/\mathrm{Inn}(C)\cong S_{3}\) of the group \(C\) defined above [10]. The action of the six operators in \(\Sigma\) on the generators of \(C\) is shown in Figure 1.
Figure 1. Wilson’s operations on the monodromy group of a map.
For a given map \(\mathcal{M}\), we can thus consider the set of \(6\) maps obtained from \(\mathcal{M}\) by applying the operators of \(\Sigma\). Some of these maps may be isomorphic to \(\mathcal{M}\) and some of them may not be. Wilson divided regular maps into four classes. A map \(\mathcal{M}\) is of Class I if \(\Sigma(\mathcal{M})\) is formed of \(6\) non-isomorphic maps. It is of Class II if \(\Sigma(\mathcal{M})\) splits into three pairs of isomorphic maps, of Class III if it splits into two triples of isomorphic maps and of Class IV if all \(6\) maps are pairwise isomorphic.
Jones and Thornton showed that every finite map has a finite reflexible cover of Class IV [10]. Richter, Siran and Wang proved that there is a Class IV map of every even valency [13]. This was later extended to odd valency \(\geq 5\) by Fraser, Jeans and Siran [7]. The kaleidoscopic maps due to Archdeacon, Conder and Siran are also, by definition, of Class IV [2].
The first sporadic example of a map of Class III was constructed by Wilson in 1979 [18]. It seemed to him at the time that maps of Class III are rare. As reported by Jones and Poulton, Conder found more sporadic examples of Class III maps in a computer search in 2006, but over 30 years passed before Jones and Poulton [9] produced an infinite family of reflexible maps of Class III with automorphism group \(\mathrm{L}_{2}(2^{3n})\), for \(n\) a positive integer, thereby extending Wilson's first example. In the same article Jones and Poulton also constructed maps of Class III as covers of other maps of Class III as well as parallel products. In a recent paper Abrams and Ellis-Monaghan also constructed non-reflexible maps of Class III [1]. Finally, Leemans and Stokes also constructed [11] an infinite family of reflexible maps of Class III directly from the simple groups \(L_{2}(q^{3})\) with \(q=p^{n}\). The trialities of these maps come from the existence of the Frobenius automorphism.
### Quadric of type \(D_{4}\) in \(\mathbf{PG}(7,\mathbb{F})\)
Let \(\mathbf{Q}\) be a hyperbolic quadratic set in a projective space \(\mathbf{P}=\mathbf{PG}(7,\mathbb{F})\) of dimension \(7\) over a field \(\mathbb{F}\). The maximal subspaces of \(\mathbf{Q}\) are of dimension \(3\), as the index of \(\mathbf{Q}\) is \(4\). If we want to be more concrete, we can choose a set of homogeneous coordinates \(\{X_{0},X_{1},\cdots,X_{7}\}\) for \(\mathbf{P}\) and fix the quadric \(\mathbf{Q}\) to have equation
\[X_{0}X_{4}+X_{1}X_{5}+X_{2}X_{6}+X_{3}X_{7}=0 \tag{1}\]
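As a quick sanity check on these coordinates, one can enumerate the points of \(\mathbf{Q}\) over the smallest field \(\mathbb{F}=\mathbb{F}_{2}\), where every projective point has a unique nonzero representative; the count agrees with the classical value \((q+1)(q^{2}+1)(q^{3}+1)\) for a hyperbolic quadric of \(\mathbf{PG}(7,q)\), here \(135\) for \(q=2\). The snippet below is only an illustration of equation (1).

```python
from itertools import product

def on_quadric(x):
    # the hyperbolic quadratic form of equation (1), computed over F_2
    return (x[0]*x[4] + x[1]*x[5] + x[2]*x[6] + x[3]*x[7]) % 2 == 0

points = [x for x in product(range(2), repeat=8) if any(x) and on_quadric(x)]
print(len(points))               # 135
print((2+1)*(2**2+1)*(2**3+1))   # the classical count (q+1)(q^2+1)(q^3+1) for q=2
```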
We can define an equivalence relation on the set \(M\) of maximal subspaces of \(\mathbf{Q}\) as follows: for two subspaces \(m,n\in M\), we set \(m\equiv n\) if \(m\cap n\) is of odd dimension. This relation is obviously reflexive and symmetric, and it can be shown to be also transitive. Moreover, there are exactly two equivalence classes of maximal subspaces, denoted by \(M_{1}\) and \(M_{2}\). If we suppose \(\mathbf{Q}\) to have equation (1), representatives for \(M_{1}\) and \(M_{2}\) can be taken to be the \(3\)-spaces obtained by setting \(X_{0}=X_{2}=X_{4}=X_{6}=0\) and \(X_{1}=X_{3}=X_{5}=X_{7}=0\) respectively. We call the points of \(\mathbf{Q}\) the \(0\)-points, the elements of \(M_{1}\) as \(1\)-points and the elements of \(M_{2}\) the \(2\)-points.
We can then define a geometry \(\Gamma\) of rank four on \(\{0,1,2,3\}\). The lines of \(\mathbf{Q}\) are the elements of type \(3\). The \(i\)-points (where \(i=0,1,2\)) are the elements of type \(i\). Incidence is given by symmetrized inclusion whenever it makes sense, and for a \(1\)-point \(m_{1}\) and a \(2\)-point \(m_{2}\), we set \(m_{1}*m_{2}\) if they intersect in a plane. The geometry \(\Gamma\) thus obtained can then be shown to have Buekenhout diagram \(D_{4}\), see Figure 2.3.
This geometry \(\Gamma\) admits trialities that permute the \(i\)-points. Let \(\alpha\) be such a triality. We can then consider the absolute geometry \(\Gamma_{\alpha}\) of \(\Gamma\) with respect to \(\alpha\). We will call the points and the lines of \(\Gamma_{\alpha}\) the _absolute points_ and _absolute lines_. In [16], Tits showed that, as long as there exists a cycle of absolute lines, the absolute geometry \(\Gamma_{\alpha}\) is a generalized hexagon.
Let \(\sigma\) be an automorphism of the field \(\mathbb{F}\) such that \(\sigma^{3}=\mathrm{Id}\). Tits classified the projective type of trialities \(\tau\) of a projective plane \(\pi\) over \(\mathbb{F}\) into four categories, denoted by \((I_{\sigma}),(II),(III)\) and \((III)^{\pm}\) (see [16], section 2). Surprisingly, there is a strong relation between trialities \(\tau\) of \(\pi\) and trialities \(\alpha\) of \(\mathbf{Q}\), that we briefly sketch here (for more details, see [16], section 4).
We adopt the convention that upper case letters will designate points and lower case letters will designate lines. Also, if \(P\) and \(Q\) are two points, the line going through them will be designated by \((PQ)\). Suppose \(P\) is a point of \(\mathbf{Q}\) and let \(P^{\alpha}\) and \(P^{\alpha^{2}}\) be the associated incident 3-spaces. We define \(P\omega\) to be the intersection \(P^{\alpha}\cap P^{\alpha^{2}}\). If \(P\) is not an absolute point, then \(P\omega\) is a point. Instead, if \(P\) is an absolute point, \(P\omega\) is then a plane, and we will sometimes refer to the plane \(P\omega\) as the plane associated to \(P\). Any plane that is the associated plane of some absolute point \(P\) will be called a _special plane_. Similarly, any point of \(\mathbf{Q}\) contained in a special plane is called a _special point_. Notice that the plane \(P\omega\) is spanned by any two absolute lines through \(P\). This is true because all absolute points in \(P^{\alpha}\cup P^{\alpha^{2}}\) are contained in \(P\omega\).
Let \(Q\) be a point of \(\mathbf{Q}\) which is not special. Since \(Q^{\alpha}\) and \(Q\omega\) are incident, it follows that \(Q^{\alpha^{2}}\) and \((Q\omega)^{\alpha}\) are also incident. We define \(Q\pi_{1}\) to be the plane \(Q^{\alpha^{2}}\cap(Q\omega)^{\alpha}\) and \(Q\pi_{2}\) to be the plane \(Q^{\alpha}\cap(Q\omega)^{\alpha^{2}}\).
Under these notations, the triality \(\alpha\) induces a map from the lines of \(\mathbf{Q}\) passing through \(Q\omega\) and contained in \(Q^{\alpha^{2}}\) to the lines contained in \((Q\omega)^{\alpha}\) and passing through \(Q\). We can then obtain a collineation \(\alpha_{Q}\) from \(Q\pi_{1}\) to itself as follows:
\[\alpha_{Q}(P)=Q\pi_{1}\cap(P\cdot Q\omega)^{\alpha}\]
for any \(P\in Q\pi_{1}\) and where \((P\cdot Q\omega)\) designates the line through \(P\) and \(Q\omega\).
Using the classification of trialities of a projective plane mentioned above, we can then say that a triality \(\alpha\) of \(\mathbf{Q}\) is of type \((I_{\sigma}),(II)\) or \((III)\) if there exists a non special point \(Q\) of \(\mathbf{Q}\) such that \(\alpha_{Q}\) is of type \((I_{\sigma}),(II)\) or \((III)\), respectively. Tits showed (see [16], section 5) that all trialities \(\alpha\) of type \((I_{\sigma})\) with \(\sigma\neq\mathrm{Id}\) are projectively equivalent and that all other trialities \(\alpha\) are projectively equivalent to either a triality of type \((I_{\mathrm{Id}})\) or a triality of type \((II)\). In this paper we will only consider the case where \(\alpha\) is of type \((I_{\sigma})\). While this is in many senses the more general case, it could be interesting to figure out what happens in the case where \(\alpha\) is of type \((II)\).
We finish this section by a few useful tools and properties of \(\mathbf{Q}\).
As in [17, Section 2.4.6], let \(V\) be an eight-dimensional vector space over \(\mathbb{F}\). The fact that the points and the two types of 3-spaces of \(\mathbf{Q}\) play the same role can be expressed by the existence of a trilinear form \(\mathcal{T}\colon V\times V\times V\to\mathbb{F}\). This form \(\mathcal{T}\) is characterized by the fact that two points \((X,Y)\) of \(\mathbf{Q}\) represent a 0-point and a 1-point of \(\mathbf{Q}\) that are incident if and only if \(\mathcal{T}(X,Y,Z)\) is identically zero as a function of \(Z\). The same is true for any permutation of the letters \(X,Y\) and \(Z\). This trilinear form has the following explicit description:
Figure 2. \(D_{4}\) diagram.
\[\begin{split}\mathcal{T}(X,Y,Z)=&\begin{vmatrix}X_{0}&X_{1}&X_{2}\\ Y_{0}&Y_{1}&Y_{2}\\ Z_{0}&Z_{1}&Z_{2}\end{vmatrix}+\begin{vmatrix}X_{4}&X_{5}&X_{6}\\ Y_{4}&Y_{5}&Y_{6}\\ Z_{4}&Z_{5}&Z_{6}\end{vmatrix}\\ &+X_{3}(Z_{0}Y_{4}+Z_{1}Y_{5}+Z_{2}Y_{6})+X_{7}(Y_{0}Z_{4}+Y_{1}Z_{5}+Y_{2}Z_{6})\\ &+Y_{3}(X_{0}Z_{4}+X_{1}Z_{5}+X_{2}Z_{6})+Y_{7}(Z_{0}X_{4}+Z_{1}X_{5}+Z_{2}X_{6})\\ &+Z_{3}(Y_{0}X_{4}+Y_{1}X_{5}+Y_{2}X_{6})+Z_{7}(X_{0}Y_{4}+X_{1}Y_{5}+X_{2}Y_{6})\\ &-X_{3}Y_{3}Z_{3}-X_{7}Y_{7}Z_{7}\end{split} \tag{2}\]
When the triality is of type \((I_{\text{Id}})\), it is well known that the absolute points are exactly the intersection of \(\mathbf{Q}\) with the hyperplane of equation \(X_{3}+X_{7}=0\). We can thus substitute \(X_{7}\) for \(X_{3}\) and work with the parabolic quadric \(\mathbf{Q}^{\prime}\) in \(\mathbf{P}^{\prime}=\mathbf{PG}(6,\mathbb{F})\) of equation
\[X_{0}X_{4}+X_{1}X_{5}+X_{2}X_{6}=X_{3}^{2}\]
It can then be shown (see [17, Section 2.4.13] for example) that the absolute lines of \(\Gamma_{\alpha}\) have Grassmann coordinates satisfying the following six linear equations:
\[\begin{split} X_{12}&=X_{34},\qquad\qquad\qquad X_{54} =X_{32},\qquad\qquad\qquad X_{20}=X_{35},\\ X_{65}&=X_{30},\qquad\qquad\qquad X_{01}=X_{36}, \qquad\qquad\qquad X_{46}=X_{31},\end{split} \tag{3}\]
and conversely, every line on \(\mathbf{Q}\) whose Grassmann coordinates satisfy (3) is an absolute line of \(\Gamma_{\alpha}\). These equations will be used later to verify the existence of some lines of a new rank two geometry we will introduce in the next section as the moving absolute geometry of \(\Gamma\).
Quadrics \(\mathbf{Q}\) have an important property, called the _all-or-one property_. This means that if \(l\) is a line of \(\mathbf{Q}\) and \(P\) is a point of \(\mathbf{Q}\) not contained in \(l\), then there is either a unique line of \(\mathbf{Q}\) passing through \(P\) and intersecting \(l\) or all lines passing through \(P\) and intersecting \(l\) are in \(\mathbf{Q}\). This will often be useful in the proofs of the next section.
There is a natural group acting on both the absolute and the moving absolute geometry in this context. It is the group \(G\) of collineations of \(\mathbf{P}\) that preserves the triality \(\alpha\). If \(\alpha\) is a triality of type \((I_{\sigma})\), as we will assume from now on, Tits showed that if \(\sigma\) is the identity, then the group \(G\) is of type \(G_{2}\) and if \(\sigma\) is not the identity then it is a twisted Lie group of type \(D_{4}\), noted by \({}^{3}D_{4}\) (see [16, Section 8]).
## 3. Moving absolute geometries
Let \(\Gamma\) be a geometry with diagram \(D_{4}\) (see figure 2.3) and let \(\alpha\) be a triality of \(\Gamma\). The triality \(\alpha\) must fix the central vertex of the diagram, and we will call an element that belongs to that fixed type _a line_. We also choose one of the remaining types of \(\Gamma\) and call elements belonging to that type _points_. In this situation, we can define another type of absolute geometry for \(\Gamma\) which has the same vertex set as the classical absolute geometry but considers lines that are moved by the triality instead of fixed lines. We will call such lines _moving lines_, in contrast with the lines fixed by the triality which we will call _absolute lines_.
The _moving absolute geometry_ of \(\Gamma\) with respect to \(\alpha\) is the point-line geometry \(\text{M}\Gamma_{\alpha}\) over \(J=\{P,L\}\) where
1. The points, called _absolute points_, are the points \(p\in\Gamma\) such that \(p,\alpha(p)\) and \(\alpha^{2}(p)\) form a flag of \(\Gamma\); each point has type \(P\);
2. The lines are the lines of \(\Gamma\) that are not fixed by \(\alpha\) and that contain at least two absolute points; each line has type \(L\);
3. Incidence is given by the incidence in \(\Gamma\), that is, a point \(p\) is incident to a line \(l\) if \(p\) and \(l\) are incident in \(\Gamma\).
Remark that the definition of \(\mathrm{M\Gamma}_{\alpha}\) does not depend, up to isomorphism, on the choice of points in \(\Gamma\). Indeed, suppose that we decide that the points of \(\Gamma\) are now the elements of type \(\alpha(P)\) instead. If \(p\) and \(p^{\prime}\) are on a line \(l\) in \(\Gamma\), then \(\alpha(p)\) and \(\alpha(p^{\prime})\) are incident to the line \(\alpha(l)\). Therefore, both the lines and the incidence of \(\mathrm{M\Gamma}_{\alpha}\) do not depend on the choice of points in \(\Gamma\).
Whenever the triality \(\alpha\) that we are working with is clear from context, we simplify the notation and talk about \(\mathrm{M\Gamma}\) instead of \(\mathrm{M\Gamma}_{\alpha}\).
We now examine the moving absolute geometry of a quadric of type \(D_{4}\) in a \(7\)-dimensional projective space.
### Quadrics of type \(D_{4}\)
Recall that \(\mathbf{Q}\) is a quadric in a \(7\)-dimensional projective space over a field \(\mathbb{F}\) and that \(\sigma\) is an automorphism of order \(1\) or \(3\) of \(\mathbb{F}\) such that \(\alpha\) is a triality of type \((I_{\sigma})\) (see section 2.3). Let \(\mathbb{K}\) be the subfield of \(\mathbb{F}\) fixed by \(\sigma\). We will denote by \(f\) and \(k\) respectively the cardinalities of \(\mathbb{F}\) and \(\mathbb{K}\). Note that \(f=k\) if \(\sigma\) is the identity and \(f=k^{3}\) if \(\sigma\) is not the identity.
The main goal of this section is to prove that the moving absolute geometry \(\mathrm{M\Gamma}\) in these settings is a geometry on two types with \(d_{P}=5,g=3\) and \(d_{L}=6\). The first result rules out the existence of some lines in \(\mathbf{Q}\); its proof follows the ideas of the proof of [17, Theorem 2.4.4]. Since the classical absolute geometry \(\Gamma_{\alpha}\) is a generalized hexagon, there exist hexagons in \(\mathbf{Q}\) whose vertices are all absolute points and whose lines are all absolute lines. We will call such a hexagon an _absolute hexagon_.
**Lemma 3.1**.: _Let \(\mathbf{H}\) be an absolute hexagon. The line joining two opposite vertices of \(\mathbf{H}\) is never a line of the quadric \(\mathbf{Q}\)._
Proof.: Let \(P\) be a vertex of \(\mathbf{H}\) and let \(l\) and \(m\) be the two absolute lines of \(\mathbf{H}\) meeting at \(P\). Then the plane \(P\omega\) is spanned by \(l\) and \(m\). Since the choice of \(P\) was arbitrary, we conclude that the line between any two vertices at distance two from each other in \(\mathbf{H}\) is always a line of \(\mathbf{Q}\), and thus also a line of \(\mathrm{M\Gamma}\) since it cannot be an absolute line. See Figure 3, where examples of such lines are drawn in blue dashed lines. Now suppose that the line between a vertex \(P\) of \(\mathbf{H}\) and its opposite vertex \(Q\) is a line of the quadric \(\mathbf{Q}\). Using the all-or-one property of \(\mathbf{Q}\), we conclude that all the lines between \(P\) and another point of \(\mathbf{H}\) are in \(\mathbf{Q}\). Moreover, if we let \(S\) be the space spanned by the \(4\) lines of \(\mathbf{H}\) not containing \(P\), then using the all-or-one property again, we can conclude that all the lines between \(P\) and \(S\) are in \(\mathbf{Q}\). Hence the whole hexagon \(\mathbf{H}\) must be contained in a subspace \(U\) of \(\mathbf{Q}\). Then \(U\) must have dimension \(3\) or \(2\). If \(U\) has dimension \(3\), we can assume that \(U=P^{\alpha}\). But we could have followed the same reasoning starting with \(Q\), concluding that \(U=Q^{\alpha}\), a contradiction. If \(U\) has dimension \(2\), it means the whole hexagon \(\mathbf{H}\) is contained in a plane but then there must be \(3\) absolute lines forming a triangle, which is not possible since the absolute geometry is a generalized hexagon.
We now investigate how many absolute points a line of \(\mathrm{M\Gamma}\) contains.
**Lemma 3.2**.: _If \(l\) is a moving line containing at least two absolute points, it contains exactly \(k+1\) absolute points._
Proof.: Let \(P\) be an absolute point and let \(P\omega\) be its associated plane. By [16, §8.2.3], the number of absolute lines going through \(P\) and inside of \(P\omega\) is equal to \(k+1\). There are no other absolute points in \(P\omega\) than the ones contained in those \(k+1\) lines. This can be shown by a counting argument using [16, §8.2.4 and §8.2.6].
Now, let \(l\) be a moving line containing at least two absolute points \(P_{1}\) and \(P_{2}\). Then, there exists an absolute hexagon \(\mathbf{H}\) such that \(P_{1}\) and \(P_{2}\) are vertices of \(\mathbf{H}\). We claim that \(P_{1}\) and
\(P_{2}\) are neither adjacent nor opposite in \(\mathbf{H}\). Indeed, they cannot be adjacent, else \(l\) would be an absolute line, and Lemma 3.1 tells us that they cannot be opposite.
Let thus \(P\) be the unique vertex of \(\mathbf{H}\) at distance one from both \(P_{1}\) and \(P_{2}\). Then \(l\) is inside the plane \(P\omega\) and \(l\) does not contain \(P\). Hence the absolute points of \(l\) are in one to one correspondence with the absolute lines going through \(P\).
As an immediate corollary, we have that all lines in \(M\Gamma\) contain exactly \(k+1\) absolute points. In fact, \(\mathrm{M}\Gamma\) is flag-transitive.
**Lemma 3.3**.: _The moving absolute geometry \(M\Gamma\) is flag-transitive._
Proof.: Take two flags \((P_{1},l_{1})\) and \((P_{2},l_{2})\) of \(\mathrm{M}\Gamma\). As we showed in Lemma 3.2, a moving line can always be placed in an absolute hexagon \(\mathbf{H}\) as a line between two vertices at distance \(2\). By [16, Theorem 6.2.5.], we know that the group \(G\) of collineations of \(\mathbf{P}\) preserving \(\mathbf{Q}\) and \(\alpha\) acts transitively on triples of absolute points \((Q,P,Q^{\prime})\) such that \((PQ)\) and \((PQ^{\prime})\) are absolute lines. So there exist pairs of absolute points \((Q_{1},Q^{\prime}_{1})\) and \((Q_{2},Q^{\prime}_{2})\) such that \((P_{i}Q_{i})=l_{i}\) (with \(i=1,2\)) and a collineation mapping \((P_{1},Q^{\prime}_{1},Q_{1})\) to \((P_{2},Q^{\prime}_{2},Q_{2})\) and hence also \(l_{1}\) to \(l_{2}\). Therefore the group \(G\) acts transitively on the flags of \(\mathrm{M}\Gamma\).
We now compute the number of lines of \(\mathrm{M}\Gamma\) passing through a given absolute point \(P\).
**Lemma 3.4**.: _Let \(P\) be an absolute point. There are \((k+1)f^{2}\) moving lines passing through \(P\)._
Proof.: Let \(x\) be the number of moving lines through \(P\) and let \(y\) be the number of absolute vertices at distance \(2\) from \(P\) in the absolute geometry \(\Gamma\). We first claim that \(kx=y\). Indeed, if \(Q\) is an absolute point at distance \(2\) from \(P\), then there exists an absolute point \(R\) such that \((PR)\) and \((RQ)\) are absolute lines. Hence all three points are in \(R\omega\) and the line \((PQ)\) is a line of \(\mathrm{M}\Gamma\). But any other point \(Q^{\prime}\) on \((PQ)\) would yield the same line since \((PQ^{\prime})=(PQ)\). By Lemma 3.2 there are \(k+1\) points on any line of \(\mathrm{M}\Gamma\). Hence this proves the claim.
It now suffices to find out what \(y\) is. There are \(k+1\) absolute lines containing \(P\). Each of these lines contains \(f+1\) absolute points. So we have \((k+1)f\) neighbours of \(P\) in \(\Gamma\). Through each of these neighbours pass \(k\) new absolute lines which all contain \(f\) new absolute points. We thus counted \(k(k+1)f^{2}\) potential absolute points at distance \(2\) from \(P\) in \(\Gamma\). Since the girth of \(\Gamma\) is \(6\), we could not possibly have counted points twice, so that \(y=k(k+1)f^{2}\) and \(x=(k+1)f^{2}\).
Figure 3. Some lines of \(\mathrm{M}\Gamma\) spanned by points of an absolute hexagon.
Using flag-transitivity together with the previous results, we can now count the number of lines of \(\mathrm{M}\Gamma\).
**Lemma 3.5**.: _[_16_, §8.2.4]_ _There are \((k^{2}f^{2}+kf+1)(f+1)\) points in \(\mathrm{M}\Gamma\)._
**Lemma 3.6**.: _There are \((k^{2}f^{2}+kf+1)(f+1)f^{2}\) lines in \(\mathrm{M}\Gamma\)._
Proof.: By Lemma 3.5, there are \((k^{2}f^{2}+kf+1)(f+1)\) absolute points. The number of lines of \(\mathrm{M}\Gamma\) can be computed by multiplying the number of points by the number of lines per point (that is \((k+1)f^{2}\) by Lemma 3.4) and dividing by the number of points per line (that is \((k+1)\) by Lemma 3.2).
**Lemma 3.7**.: _Every line of \(\mathrm{M}\Gamma\) is contained in exactly one special plane and every special plane contains \(f^{2}\) lines of \(\mathrm{M}\Gamma\)._
Proof.: Let \(P\) be an absolute point and \(P\omega\) be its associated plane. We claim that all lines of \(P\omega\), except for those containing \(P\), are lines of \(\mathrm{M}\Gamma\). Indeed, a line \(l\) of \(P\omega\) that does not contain \(P\) must intersect every line of \(P\omega\) containing \(P\). This means that \(l\) contains \(k+1\) absolute points and \(l\) cannot be an absolute line, else we would have found a triangle of absolute lines in \(\Gamma\).
This gives us a way to associate to every absolute point \(f^{2}\) lines of \(\mathrm{M}\Gamma\). Clearly, we can count every line of \(\mathrm{M}\Gamma\) in this way. Moreover, since we know that, by Lemma 3.6, there are \(f^{2}\) times more lines than points in \(\mathrm{M}\Gamma\), we can deduce that we never count a line twice in this way, which proves the lemma.
At this point, we remark that although it may seem natural to think that the triality \(\alpha\) acts on the set of lines of \(\mathrm{M}\Gamma\), that is actually not the case. Indeed, if \(l\) is a line of \(\mathrm{M}\Gamma\), then \(\alpha(l)\) is still a line but it never contains more than one absolute point. To see this, let \(P\omega\) be the unique special plane containing \(l\) and suppose that \(\alpha(l)\) also contains two absolute points. Then \(\alpha(l)\) is also contained in a unique special plane \(R\omega\) for some absolute point \(R\). The line \(\alpha^{2}(l)\) must then be \((PQ)\). But \(Q\) is a point of \(l\), since \(l=\alpha^{2}(\alpha(l))\). Therefore \((PQ)\) is a line of \(P\omega\) containing \(P\) and \(Q\), and must then be an absolute line, a contradiction.
We now compute an upper bound for the number of lines at distance \(4\) or less from a given line \(l\) of \(\mathrm{M}\Gamma\). This will be used later to show the existence of two lines \(l\) and \(l^{\prime}\) of \(\mathrm{M}\Gamma\) such that the distance between \(l\) and \(l^{\prime}\) is six in the incidence graph of \(\mathrm{M}\Gamma\).
**Lemma 3.8**.: _Let \(l\) be a moving line and let \(C=(k+1)f^{2}\) be the number of moving lines through a point. There are at most \(1+(k+1)(C-1)+k(k+1)(C-1)^{2}\) moving lines at distance \(4\) or less from \(l\) in the incidence graph of \(\mathrm{M}\Gamma\)._
Proof.: Denote by \(A(l,k)\) the number of elements of \(\mathrm{M}\Gamma\) at distance exactly \(k\) from \(l\) in the incidence graph of \(\mathrm{M}\Gamma\). If \(k\) is odd, these elements will all be points and if \(k\) is even they will be lines. Hence, we need to compute the number \(A(l,0)+A(l,2)+A(l,4)\). Obviously, we have that \(A(l,0)=1\). It is also clear that \(A(l,2)=(k+1)(C-1)\). Indeed, there are exactly \((k+1)\) absolute points on \(l\) and there are \(C\) lines through each of them, one of them being \(l\) every time. After that it becomes more difficult to get precise numbers as there are triangles in \(\mathrm{M}\Gamma\), but we can ignore the triangles and still get upper bounds. Let us estimate \(A(l,3)\). We have \(A(l,2)\) lines at distance two from \(l\) and each of these lines yields \(k\) absolute points at distance \(3\) from \(l\). This gives us \(k(k+1)(C-1)\) potential points. Hence, \(A(l,3)\leq k(k+1)(C-1)\). We proceed the same way for computing \(A(l,4)\). Through each one of the points of \(A(l,3)\) passes
\(C\) moving lines. That gives us \(k(k+1)(C-1)^{2}\) potential lines in \(A(l,4)\), which is what we need to conclude the proof.
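This bound will be compared, in the proof of Lemma 3.10 below, with the total number of lines given by Lemma 3.6. The following few lines evaluate both quantities in the twisted case \(f=k^{3}\) for small values of \(k\); only \(k=2\) fails the comparison, which is why that case is treated separately in the proof below.

```python
# A: total number of lines of the moving absolute geometry (Lemma 3.6).
# B: upper bound of Lemma 3.8 on the lines at distance at most 4 from a fixed line.
for k in range(2, 7):
    f = k ** 3
    A = (k**2 * f**2 + k * f + 1) * (f + 1) * f**2
    C = (k + 1) * f**2
    B = 1 + (k + 1) * (C - 1) + k * (k + 1) * (C - 1) ** 2
    print(k, A, B, A > B)
# Among these values, only k = 2 gives A <= B.
```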
We need two more lemmas before proving the main theorem, Theorem 3.11. The first is used to show that certain configurations of points are not far from each other in \(\mathrm{M}\Gamma\), and the second is the key ingredient in the proof that \(d_{L}=6\) in \(\mathrm{M}\Gamma\).
**Lemma 3.9**.: _Let \(l\) be an absolute line and \(P_{1}\) and \(P_{2}\) be vertices on \(l\). There exists an absolute vertex \(P\) not on \(l\) such that \((PP_{1})\) and \((PP_{2})\) are moving lines in \(\mathbf{Q}\)._
Proof.: The line \(l\) must contain at least one other point \(P_{3}\), which is then an absolute point. Let \(\mathbf{H}\) be a hexagon containing \(P_{1}\) and \(P_{3}\). Since \(P_{1}\) and \(P_{3}\) are adjacent in \(\mathbf{H}\), there is a unique vertex \(P\neq P_{1}\) of \(\mathbf{H}\) adjacent to \(P_{3}\). But then \(P_{1},P_{2},P_{3}\) and \(P\) all lie in the plane \(P_{3}\omega\), and the lines \((PP_{1})\) and \((PP_{2})\) cannot be absolute lines, else there would be an absolute triangle.
**Lemma 3.10**.: _There exist two moving lines \(l\) and \(l^{\prime}\) that are at distance \(6\) or more in the incidence graph of \(\mathrm{M}\Gamma\)._
Proof.: In the case of \(f=k^{3}\), this follows from Lemma 3.8 except for \(f=8\) and \(k=2\), in which case we can use Magma [3] or some other program to verify the claim. Indeed, by Lemma 3.6, we know that there are a total of \(A=(k^{2}f^{2}+kf+1)(f+1)f^{2}\) lines in \(\mathrm{M}\Gamma\). We also know, by Lemma 3.8, that if we choose a line \(l\) as a base line, there are at most \(B=1+(k+1)(C-1)+k(k+1)(C-1)^{2}\), where \(C=(k+1)f^{2}\), lines at distance \(4\) or less from \(l\). Consider \(A\) and \(B\) as polynomials in \(f\). When \(f=k^{3}\), the leading term for \(A\) is of order \(5\) while the leading term for \(B\) is of order \(4\). A straightforward analysis then shows that \(A>B\) as soon as \(k\geq 3\). See Lemma 3.12 and the comments thereafter for a way to construct \(\mathrm{M}\Gamma\) on a computer, and thus verify that the statement holds in the small cases too.
On the other hand, if \(\sigma\) is the identity, we need to proceed differently since in that case \(B>A\) except maybe when \(f=k\) is small enough. We will explicitly show that there exists a pair of lines at distance \(6\). In this case, it is known that the absolute points are exactly the intersection of \(\mathbf{Q}\) with the hyperplane of equation \(X_{3}+X_{7}=0\). Therefore, by substituting \(X_{7}\) by \(-X_{3}\), we can work in a projective space of dimension \(6\) instead and use Grassmann coordinates to characterize the absolute lines by the set of equations (3) described in the previous section. Let \(e_{i}\) be the points having all homogeneous coordinates equal to \(0\) except for the \(i^{th}\) one that can be set to \(1\). Using the trilinear form \(\mathcal{T}\) with equation (2), we can check that all the \(e_{i}\), except for \(i=3\) or \(7\), are absolute points. Using the equations (3), we also check that the path \(e_{0}-e_{5}-e_{2}-e_{4}-e_{1}-e_{6}-e_{0}\) forms a hexagon of absolute lines and points. We claim that the lines \(l=(e_{5}e_{6})\) and \(l^{\prime}=(e_{1}e_{2})\) are lines of \(\mathrm{M}\Gamma\) at distance \(6\) in the incidence graph of \(\mathrm{M}\Gamma\). In other words, we have to show that there is no line of \(\mathrm{M}\Gamma\) connecting a point of \(l\) to a point of \(l^{\prime}\). Easy computations show that any line connecting \(l\) to \(l^{\prime}\) satisfies the set of equations (3). This means that any such line is either an absolute line or is not a line of the quadric \(\mathbf{Q}\).
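Part of this explicit configuration can be verified mechanically: that the sides of the path \(e_{0}-e_{5}-e_{2}-e_{4}-e_{1}-e_{6}-e_{0}\) lie on the parabolic quadric and satisfy the relations (3), while the two diagonals \(l=(e_{5}e_{6})\) and \(l^{\prime}=(e_{1}e_{2})\) lie on the quadric but violate (3). The sketch below works with integer coordinates and an explicit sign convention for Grassmann coordinates, both of which are assumptions of this sketch; since every tested quantity is \(0\) or \(\pm 1\), the conclusions are insensitive to these choices and hold over any field.

```python
def e(i):
    v = [0] * 7
    v[i] = 1
    return v

def q(x):                # the quadratic form X0X4 + X1X5 + X2X6 - X3^2
    return x[0]*x[4] + x[1]*x[5] + x[2]*x[6] - x[3]**2

def b(x, y):             # associated bilinear form: b = 0 means the whole line lies on Q'
    return (x[0]*y[4] + x[4]*y[0] + x[1]*y[5] + x[5]*y[1]
            + x[2]*y[6] + x[6]*y[2] - 2*x[3]*y[3])

def grassmann(x, y):     # X_{ij} = x_i y_j - x_j y_i (a convention of this sketch)
    return {(i, j): x[i]*y[j] - x[j]*y[i] for i in range(7) for j in range(7)}

def satisfies_eq3(X):    # the six linear relations (3)
    pairs = [((1, 2), (3, 4)), ((5, 4), (3, 2)), ((2, 0), (3, 5)),
             ((6, 5), (3, 0)), ((0, 1), (3, 6)), ((4, 6), (3, 1))]
    return all(X[p] == X[r] for p, r in pairs)

hexagon = [0, 5, 2, 4, 1, 6, 0]
for i, j in zip(hexagon, hexagon[1:]):            # the six sides of the hexagon
    x, y = e(i), e(j)
    on_quadric = q(x) == 0 and q(y) == 0 and b(x, y) == 0
    print((i, j), on_quadric, satisfies_eq3(grassmann(x, y)))    # True True each time

for i, j in [(5, 6), (1, 2)]:                     # the lines l and l' of the proof
    x, y = e(i), e(j)
    on_quadric = q(x) == 0 and q(y) == 0 and b(x, y) == 0
    print((i, j), on_quadric, satisfies_eq3(grassmann(x, y)))    # True False each time
```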
We are now ready to prove the main theorem of this section.
**Theorem 3.11**.: _Let \(\mathbf{Q}\) be a quadric in a \(7\)-dimensional projective space \(\mathbf{P}\) over a finite field \(\mathbb{F}\) of cardinality \(f\) and let \(\Gamma\) be its associated geometry with diagram \(D_{4}\). Let \(\sigma\) be an automorphism of \(\mathbb{F}\) of order \(o(\sigma)=1\) or \(3\) and let \(\alpha\) be a triality of \(\Gamma\) of type \((I_{\sigma})\). The moving absolute geometry \(M\Gamma_{\alpha}\) is a flag-transitive geometry whose Buekenhout diagram has parameters \((d_{P},g,d_{L})=(5,3,6)\), with \((k^{2}f^{2}+kf+1)(f+1)\) points, \((k^{2}f^{2}+kf+1)(f+1)f^{2}\) lines, \(k+1\) points on each line and \((k+1)f^{2}\) lines through each point, where \(f=k^{o(\sigma)}\)._
_Moreover, the group \(G\) acts flag-transitively on \(\mathrm{M}\Gamma_{\alpha}\) where \(G\) is \(G_{2}(k)\) when \(o(\sigma)=1\) and \({}^{3}D_{4}(k)\) when \(o(\sigma)=3\)._
Proof.: By Lemma 3.3, \(\mathrm{M}\Gamma\) is flag-transitive. We already showed that the number of points, lines, points per line and lines per point are as indicated in the statement (see Lemmas 3.5, 3.6, 3.2, 3.4 respectively). It remains thus to show that \(d_{P}=5,g=3\) and \(d_{L}=6\). Let \(P_{1}\) and \(P_{2}\) be two absolute points. Then, there exists an absolute hexagon \(\mathbf{H}\) such that \(P_{1}\) and \(P_{2}\) are vertices of \(\mathbf{H}\). Fix this hexagon once and for all. We already showed the existence of the dashed lines in Figure 3. Therefore, there are triangles in \(\mathrm{M}\Gamma\). As all the lines are lines of a projective space, no two lines have two points in common and the gonality \(g\) of \(M\Gamma\) must then be \(3\).
We claim that \(P_{1}\) and \(P_{2}\) are at distance at most \(4\) in the incidence graph of \(\mathrm{M}\Gamma\). Suppose first that \(P_{1}\) and \(P_{2}\) are adjacent in \(\mathbf{H}\). Then, the distance between \(P_{1}\) and \(P_{2}\) is \(4\) by Lemma 3.9. If they are opposite vertices of \(\mathbf{H}\), their distance in the incidence graph is also \(4\). Indeed, let \(l\) be a line of \(\mathbf{H}\) at distance \(3\) from both \(P_{1}\) and \(P_{2}\). Then, there is a point \(P\) on \(l\) which is not a vertex of \(\mathbf{H}\), since even in the smallest case of \(f=2\) there are \(3\) points per line. Since every point on an absolute line is an absolute point, \(P\) is an absolute point. Moreover, the lines \(P_{1}P\) and \(PP_{2}\) are lines of \(\mathbf{Q}\) since any \(3\) consecutive vertices of a hexagon are in the plane spanned by the two lines emanating from the middle vertex. The lines \(P_{1}P\) and \(PP_{2}\) are moving lines, else we would find triangles in the absolute geometry. Such a path of length \(2\) between two absolute points is illustrated by the dotted lines in Figure 3.
This shows that the maximal distance between two absolute vertices in the incidence graph of \(M\Gamma\) is \(4\). This implies that the maximal distance between a vertex \(P\) and a moving line \(l\) is \(3\) or \(5\).
It cannot be \(3\). Indeed, take a point \(P\) to be a vertex of \(\mathbf{H}\) and \(l\) to be the line between \(P_{1}\) and \(P_{2}\) where \(P_{1}\) is a neighbor of \(P\) in \(\mathbf{H}\) and \(P_{2}\) is the opposite vertex of \(P\) in \(\mathbf{H}\). Then, by the all-or-one property, the only line joining \(P\) to a point of \(l\) is \(PP_{1}\) (since \(PP_{2}\) cannot be a line of \(\mathbf{Q}\)). Therefore, there is no line of \(\mathrm{M}\Gamma\) connecting \(P\) to a point of \(l\) and thus their distance is strictly greater than \(3\). This proves that \(d_{P}=5\).
It only remains to show that \(d_{L}=6\). Since \(d_{P}=5\), \(d_{L}\) can only be \(5\) or \(6\). Lemma 3.10 shows the existence of two lines at distance \(6\) from each other. Hence \(d_{L}=6\).
We end this section by showing how to construct the moving absolute geometries \(\mathrm{M}\Gamma\) associated to the quadric \(\mathbf{Q}\) using a computer. The group \(G\) is the group of collineations preserving the triality \(\alpha\). Depending on whether \(\sigma\) is the identity or not, the group \(G\) is isomorphic to \(G_{2}(q)\) or to \({}^{3}D_{4}(q)\) (see [16, section 6] for more details).
**Lemma 3.12**.: _Let \(H\) be the stabilizer in \(G\) of an absolute point \(P\). Then \(H\) acts transitively on the \(f^{2}\) lines of \(\mathrm{M}\Gamma\) contained in \(P\omega\)._
Proof.: We already mentioned in Lemma 3.3 that \(G\) acts transitively on triples of absolute points \((Q,Q^{\prime},Q^{\prime\prime})\) such that \((Q^{\prime},Q)\) and \((Q^{\prime\prime},Q^{\prime})\) are absolute lines. Then, if we fix an absolute point \(P\) and two absolute lines \(l_{1}\) and \(l_{2}\) containing \(P\), we immediately deduce that \(H\) still acts transitively on those of the above triples with \(Q^{\prime}=P\), \(Q\in l_{1}\) and \(Q^{\prime\prime}\in l_{2}\). It suffices to notice that any of the \(f^{2}\) lines of \(\mathrm{M}\Gamma\) contained in \(P\omega\) can be written as \((QQ^{\prime\prime})\) for some \(Q\in l_{1}\) and \(Q^{\prime\prime}\in l_{2}\).
The stabiliser in \(G\) of an absolute point \(P\) is the same whether we consider the classical absolute or our moving absolute geometry. Therefore, it is usually very easy to find such a stabilizer for any known representation of \(G\). By Lemma 3.12 we know that the stabilizer \(K^{\prime}\) of a line of \(\mathrm{M}\Gamma\) in \(P\omega\) is a subgroup of index \(f^{2}\) of \(H\). We can thus find \(K^{\prime}\) by looking at the list of maximal subgroups of \(H\). Let \(l\) be a line of \(\mathrm{M}\Gamma\) incident to \(P\) so that \(\{P,l\}\) is a flag. Then, since \(G\) acts flag-transitively on \(\mathrm{M}\Gamma\), the stabilizer of \(l\) must be a conjugate of \(K^{\prime}\). We can
then look for a suitable conjugate of \(K^{\prime}\), namely a conjugate \(K\) of \(K^{\prime}\) such that \(H\cap K\) has the right cardinality (i.e. \(\frac{|H|}{|H\cap K|}=(k+1)f^{2}\)). The moving absolute geometry \(\mathrm{M}\Gamma\) is then obtained as the coset geometry \(\Gamma(G,\{H,K\})\).
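For concreteness, the following few lines print, for small \(k\), the orders and indices involved in this search: the order of \(G\), the order of the point stabilizer \(H\) (obtained as \(|G|\) divided by the number of absolute points from Lemma 3.5), the index \(f^{2}\) of \(K^{\prime}\) in \(H\), and the target value \((k+1)f^{2}\). The order formulas for \(G_{2}(k)\) and \({}^{3}D_{4}(k)\) are the standard ones from the literature on finite simple groups and are quoted here as an external assumption; the helper name `parameters` is ad hoc.

```python
def parameters(k, twisted):
    f = k ** 3 if twisted else k
    n_points = (k**2 * f**2 + k * f + 1) * (f + 1)          # Lemma 3.5
    if twisted:                                              # |3D4(k)|, standard formula
        order_G = k**12 * (k**8 + k**4 + 1) * (k**6 - 1) * (k**2 - 1)
    else:                                                    # |G2(k)|, standard formula
        order_G = k**6 * (k**6 - 1) * (k**2 - 1)
    assert order_G % n_points == 0
    return order_G, order_G // n_points, f**2, (k + 1) * f**2

for k in (2, 3, 4):
    for twisted in (False, True):
        name = ("3D4(%d)" if twisted else "G2(%d)") % k
        oG, oH, index_Kprime, target = parameters(k, twisted)
        print(name, "|G| =", oG, "|H| =", oH,
              "[H:K'] =", index_Kprime, "(k+1)f^2 =", target)
```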
### Maps of class III
In their recent article [11], Leemans and Stokes showed how to construct reflexible maps having trialities but no dualities (i.e., maps of Class III), using the simple groups \(L_{2}(q^{3})\) with \(q=p^{n}\), for a prime number \(p\).
From one of these maps they also show how to construct an incidence geometry \(\Delta\) having as elements the vertices, edges, faces and Petrie paths of the map. A face and a Petrie path are considered incident if they share an edge, and the rest of the incidence relation is given by symmetrized inclusion. They then show that, since \(\Delta\) is constructed from a Class III map, it admits trialities but no dualities in the sense of incidence geometry as well. We recall the precise construction of such geometries.
Let \(G=L_{2}(q^{3})\) and let \(\rho_{0},\rho_{1},\rho_{2}\) be three involutions generating \(G\) and let \(\alpha\in\mathrm{Out}(G)\) be an outer automorphism of order \(3\) such that
1. \(\alpha\) cyclically permutes \(\rho_{0},\rho_{2}\) and \(\rho_{0}\rho_{2}\),
2. \(\alpha\) fixes \(\rho_{1}\),
3. \(\rho_{0}\) and \(\rho_{2}\) commute,
4. \(\langle\rho_{0},\rho_{1},\rho_{2}\rangle=G\), and
5. there is no element of \(\mathrm{Aut}(L_{2}(q^{3}))\) that swaps \(\rho_{0}\) and \(\rho_{2}\) and fixes \(\rho_{1}\).
For any choices of \(q\),\(\rho_{0},\rho_{1},\rho_{2}\) and \(\alpha\) satisfying the above conditions Leemans and Stokes construct a coset geometry \(\Delta=\Gamma(G;\{G_{0},G_{1},G_{2},G_{3}\})\) where
* \(G_{0}=\langle\rho_{0},\rho_{1}\rangle\)
* \(G_{1}=\langle\rho_{0},\rho_{2}\rangle\)
* \(G_{2}=\langle\rho_{1},\rho_{2}\rangle\)
* \(G_{3}=\langle\rho_{1},\rho_{0}\rho_{2}\rangle\)
We start by analysing the classical absolute geometry of \(\Delta\).
**Theorem 3.13**.: _Let \(\Delta=\Gamma(G;\{G_{0},G_{1},G_{2},G_{3}\})\) be as above and let \(\Delta_{\alpha}\) be its absolute with respect to the triality \(\alpha\). Then \(\Delta_{\alpha}\) is a graph which is the disjoint union of \(\frac{|L_{2}(q)|}{2}\) paths of length \(2\)._
Proof.: We first show that \(\Delta_{\alpha}\) is a union of paths of length \(2\) and then we will compute the number of edges of \(\Delta_{\alpha}\).
Suppose there is at least one edge \(e\) fixed by \(\alpha\) and label its endpoints by \(v_{1}\) and \(v_{2}\). First notice that \(v_{1}\) and \(v_{2}\) are absolute points. Indeed, since \(\alpha\) fixes \(e\) and preserves incidence, we get that \(v_{i}*e\) implies that \(\alpha(v_{i})*e\) and \(\alpha^{2}(v_{i})*e\) for \(i=1,2\). Label \(\alpha(v_{i})\) by \(F_{i}\) and \(\alpha^{2}(v_{i})\) by \(P_{i}\).
Figure 4 shows a local picture around the fixed edge \(e\). This picture is to be understood as being drawn on the underlying surface of the original map of class III.
Among all edges coming out of \(v_{1}\) or \(v_{2}\) there is exactly one edge that is also incident to a triple \(v_{i},F_{i},P_{i}\). We label that edge by \(e^{\prime}\). It is easy to see that \(e^{\prime}\) must also be fixed by \(\alpha\). We thus showed that fixed edges appear in pairs. Label the other endpoint of \(e^{\prime}\) by \(v_{3}\). It remains to show that no other edges among all the edges coming out of \(v_{1}\),\(v_{2}\) and \(v_{3}\) can be fixed by \(\alpha\).
Let \(F_{3}\) be the face containing \(e^{\prime}\) which is not \(F_{1}\). Then, we notice that the set of all the edges around a vertex \(v_{i}\) must be sent by \(\alpha\) to the set of edges of \(F_{i}\). This is because if an edge \(x\) is incident to a vertex \(v_{i}\), then \(\alpha(x)\) must be incident to \(\alpha(v_{i})=F_{i}\). In the case of \(v_{1}\) this already proves the claim, since the only edges of \(F_{1}\) incident to \(v_{1}\) are \(e\) and \(e^{\prime}\). For \(v_{2}\), there remains one edge \(x\neq e\) incident to both \(v_{2}\) and \(F_{2}\). But \(x\) is incident to \(P_{1}\) so \(\alpha(x)\) must be incident
to \(\alpha(P_{1})=v_{1}\) and thus cannot be fixed. The case of \(v_{3}\) is identical to the one of \(v_{2}\). There is only one edge \(y\neq e^{\prime}\) which is incident to both \(v_{3}\) and \(F_{3}\). But \(y\) is incident to \(P_{1}\) again, so \(\alpha(y)\) must be incident to \(v_{1}\) and thus cannot be fixed. This concludes the proof that \(\Delta_{\alpha}\) is a union of paths of length \(2\).
We will now show that the number of edges of \(\Delta_{\alpha}\) is equal to \(|L_{2}(q)|\). To do so we will show that there is a one-to-one correspondence between edges of \(\Delta_{\alpha}\) and fixed points of the action of \(\alpha\) on \(L_{2}(q^{3})\). The automorphism \(\alpha\) being a field automorphism, it fixes elementwise a subfield subgroup of \(L_{2}(q^{3})\) isomorphic to \(L_{2}(q)\). Here we use the fact that \(\Delta\) is a coset geometry. Since \(\rho_{0}\) and \(\rho_{2}\) commute, the maximal parabolic subgroup is \(G_{1}=\{Id_{G},\rho_{0},\rho_{2},\rho_{0}\rho_{2}\}=\{Id_{G},\rho_{0},\alpha( \rho_{0}),\alpha^{2}(\rho_{0})\}\). The edges of \(\Delta\) are thus of the form \(G_{1}\cdot x=\{x,\rho_{0}x,\alpha(\rho_{0})x,\alpha^{2}(\rho_{0})x\}\) for some \(x\in G\). Suppose such a \(G_{1}\cdot x\) is fixed by \(\alpha\). Then \(G_{1}\cdot x\) must be a union of \(\alpha\)-orbits. Since the orbits of \(\alpha\) on \(G\) have length \(1\) or \(3\), \(G_{1}\cdot x\) can only be made of \(4\) fixed points or \(1\) fixed point and an orbit of size \(3\). The former case is easily seen to be impossible as it would imply that \(G_{1}\) is also made of \(4\) fixed points and thus \(\alpha\) should be the identity. Therefore, every fixed \(G_{1}\cdot x\) contains exactly \(1\) fixed point of \(\alpha\). And of course, if \(x\) is fixed by \(\alpha\), then \(G_{1}\cdot x\) is also fixed as \(x\in G_{1}\cdot x\cap\alpha(G_{1}\cdot x)\).
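The counting step can be checked directly in the smallest case \(q=2\). The self-contained sketch below builds \(\mathrm{PSL}(2,8)\) in its natural action on the \(9\) points of the projective line over \(\mathbb{F}_{8}\) — the explicit model and all names are choices of this sketch, not of the Magma code mentioned later — and counts the elements commuting with the permutation induced by the Frobenius map \(x\mapsto x^{2}\). The answer is \(6=|L_{2}(2)|\), giving \(6\) fixed edges and hence \(3\) paths of length \(2\), in agreement with the theorem and with the first example listed below.

```python
from itertools import product

def f8_mul(a, b):                           # multiplication in F_8 = F_2[x]/(x^3+x+1)
    r = 0
    for i in range(3):
        if (b >> i) & 1:
            r ^= a << i
    for i in (5, 4, 3):                     # reduce modulo x^3 + x + 1
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

def f8_inv(a):
    return next(c for c in range(1, 8) if f8_mul(a, c) == 1)

INF = "inf"
PTS = [INF] + list(range(8))                # the 9 points of PG(1,8)

def moebius(a, b, c, d, x):                 # x -> (ax+b)/(cx+d)
    if x == INF:
        return INF if c == 0 else f8_mul(a, f8_inv(c))
    num = f8_mul(a, x) ^ b
    den = f8_mul(c, x) ^ d
    return INF if den == 0 else f8_mul(num, f8_inv(den))

group = set()
for a, b, c, d in product(range(8), repeat=4):
    if f8_mul(a, d) ^ f8_mul(b, c):                          # invertible matrix
        group.add(tuple(moebius(a, b, c, d, x) for x in PTS))
print(len(group))                           # 504 = |PSL(2,8)|

frob = tuple(INF if x == INF else f8_mul(x, x) for x in PTS)  # x -> x^2 on PG(1,8)
def compose(p, q):                          # permutations of PTS given as image tuples
    index = {pt: i for i, pt in enumerate(PTS)}
    return tuple(p[index[q[i]]] for i in range(9))

fixed = [g for g in group if compose(frob, g) == compose(g, frob)]
print(len(fixed))                           # 6 = |PSL(2,2)|
```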
This entirely characterises the absolute geometries for the incidence geometries obtained from Class III maps constructed using \(L_{2}(q^{3})\) in [11]. Remark that in this case the absolute geometry is independent of the choices of generators \(\rho_{0},\rho_{1}\) and \(\rho_{2}\) and of the triality \(\alpha\).
The core of the issue here is that the residues of \(\Delta\) are too small for the absolute geometry to be interesting. The moving absolute geometry is not as constrained by the size of the residues, as we will show with some examples below.
We have written a Magma program that computes the moving absolute geometry \(M\Delta_{\alpha}\) from \(L_{2}(q^{3})\) together with a generating set \(\{\rho_{0},\rho_{1},\rho_{2}\}\). The algorithm used by the program follows the five steps described below:
1. Compute the coset geometry \(\Gamma(G,\{G_{0},G_{1},G_{2},G_{3}\})\).
2. Find a triality \(\alpha\in\operatorname{Aut}(G)\): The group \(G=L_{2}(q^{3})\) is given as a permutation group over \(q^{3}+1\) points. The group \(G\) is thus a subgroup of \(\operatorname{Sym}(q^{3}+1)\). Construct the normaliser \(C=N_{\operatorname{Sym}(q^{3}+1)}(G)\) of \(G\) in \(\operatorname{Sym}(q^{3}+1)\). Elements of \(C\) can be seen as automorphisms of \(G\) via their action by conjugation. Since \(\operatorname{Aut}(G)\) is an extension of order \(3\) of \(G\), as long as \(C\neq G\) we know that \(C=\operatorname{Aut}(G)\). Look for an element of order \(3\) in \(C\setminus G\) that centralizes \(\rho_{1}\) and permutes \(\rho_{0},\rho_{2}\) and \(\rho_{0}\rho_{2}\). That will be the triality \(\alpha\).
Figure 4. Local picture around a fixed edge. The dashed edges and their endpoints correspond to elements of the absolute geometry.
3. Compute the absolute points: for each coset \(G_{0}\cdot x\), check if \(G_{0}\cdot x\cap(G_{0}\cdot x)^{\alpha}\) is empty or not. Keep the ones for which the intersection is not empty. Note that this is sufficient: since \(\alpha\) is a triality, \(v*\alpha(v)\) implies, after applying \(\alpha\), that \(\alpha(v)*\alpha^{2}(v)\) and that \(\alpha^{2}(v)=\alpha^{-1}(v)\) is incident to \(v\) as well.
4. Compute the edges: given a coset \(G_{1}\cdot x\), check if \(\alpha x\alpha^{-1}x^{-1}\) is in \(G_{1}\) or not. Keep the ones for which \(\alpha x\alpha^{-1}x^{-1}\) is not in \(G_{1}\). Indeed, \((G_{1}\cdot x)^{\alpha}=\alpha G_{1}\cdot x\alpha^{-1}=\alpha G_{1}\cdot( \alpha^{-1}\alpha)x\alpha^{-1}=G_{1}\cdot\alpha x\alpha^{-1}\).
5. Match each moving edge with its endpoints and create a graph.
We end this article by mentioning a few examples of the moving absolute geometries computed by the above algorithm. Remark that the moving absolute geometry depends not only on the cardinality \(q\) of the underlying field \(\mathbb{F}\) but also on the choice of the generating set \(\{\rho_{0},\rho_{1},\rho_{2}\}\) and of the triality \(\alpha\). We also adopt the convention that if \(M\Delta_{\alpha}\) contains isolated vertices, we remove them.
1. For \(G=L_{2}(2^{3})\) there is only one choice, up to conjugation, of generating set \(\{\rho_{0},\rho_{1},\rho_{2}\}\). We know that \(\Delta_{\alpha}\) is always a disjoint union of \(3\) paths of length \(2\). In this case \(M\Delta_{\alpha}\) is a prism with triangular base. In Figure 5 we show \(M\Delta_{\alpha}\) and the dashed lines show how \(\Delta_{\alpha}\) attaches to \(M\Delta_{\alpha}\) inside of \(\Delta\).
2. For \(G=L_{2}(4^{3})\), there is a moving absolute geometry \(M\Delta_{\alpha}\) which is a disjoint union of \(15\) edges and \(12\) pentagons. It has \(90\) vertices of degree \(3\) and \(75\) edges, its girth is equal to \(5\), diameter equal to \(7\) and its automorphism group is isomorphic to \(2\times A_{5}\).
3. For \(G=L_{2}(5^{3})\) the following moving absolute geometries \(M\Delta_{\alpha}\) appear: 1. A graph with \(60\) vertices, with girth equal to \(3\), diameter equal to \(8\) and automorphism group isomorphic to \(2\times A_{5}\). This graph is vertex transitive. 2. A graph with \(30\) vertices of degree \(4\) and \(60\) edges, which is arc-transitive and has Buekenhout diagram with parameters \((d_{P},g,d_{L})=(7,5,8)\). Its automorphism group is \(2\times\operatorname{Sym}(5)\).
Figure 5. The two absolute geometries for \(G=L_{2}(8)\). The classical absolute consists of the vertices and the dashed edges and the moving absolute geometry of the vertices and the full edges.
4. For \(G=L_{2}(7^{3})\), the following moving absolute geometries \(M\Delta_{\alpha}\) appear: 1. A graph on \(84\) vertices of degree \(4\) which admits a perfect matching. 2. A graph on \(168\) vertices of degree \(4\). It is connected, has girth equal to \(3\), diameter equal to \(9\) and automorphism group isomorphic to \(L_{2}(7)\).
5. For \(G=L_{2}(9^{3})\), there is a moving absolute geometry \(M\Delta_{\alpha}\) which is a graph on \(180\) vertices of degree \(4\) and \(360\) edges. It has Buekenhout diagram:
|
2306.00427 | Out-of-distribution forgetting: vulnerability of continual learning to
intra-class distribution shift | Continual learning (CL) is an important technique to allow artificial neural
networks to work in open environments. CL enables a system to learn new tasks
without severe interference to its performance on old tasks, i.e., overcome the
problems of catastrophic forgetting. In joint learning, it is well known that
the out-of-distribution (OOD) problem caused by intentional attacks or
environmental perturbations will severely impair the ability of networks to
generalize. In this work, we reported a special form of catastrophic forgetting
raised by the OOD problem in continual learning settings, and we named it
out-of-distribution forgetting (OODF). In continual image classification tasks,
we found that for a given category, introducing an intra-class distribution
shift significantly impaired the recognition accuracy of CL methods for that
category during subsequent learning. Interestingly, this phenomenon is special
for CL as the same level of distribution shift had only negligible effects in
the joint learning scenario. We verified that CL methods without dedicating
subnetworks for individual tasks are all vulnerable to OODF. Moreover, OODF
does not depend on any specific way of shifting the distribution, suggesting it
is a risk for CL in a wide range of circumstances. Taken together, our work
identified an under-attended risk during CL, highlighting the importance of
developing approaches that can overcome OODF. Code available:
\url{https://github.com/Hiroid/OODF} | Liangxuan Guo, Yang Chen, Shan Yu | 2023-06-01T08:07:58Z | http://arxiv.org/abs/2306.00427v2 | Out-of-distribution forgetting: vulnerability of continual learning to intra-class distribution shift
###### Abstract
Continual learning (CL) is an important technique to allow artificial neural networks to work in open environments. CL enables a system to learn new tasks without severe interference to its performance on old tasks, i.e., overcome the problems of catastrophic forgetting. In joint learning, it is well known that the out-of-distribution (OOD) problem caused by intentional attacks or environmental perturbations will severely impair the ability of networks to generalize. In this work, we reported a special form of catastrophic forgetting raised by the OOD problem in continual learning settings, and we named it out-of-distribution forgetting (OODF). In continual image classification tasks, we found that for a given category, introducing an intra-class distribution shift significantly impaired the recognition accuracy of CL methods for that category during subsequent learning. Interestingly, this phenomenon is special for CL as the same level of distribution shift had only negligible effects in the joint learning scenario. We verified that CL methods without dedicating subnetworks for individual tasks are all vulnerable to OODF. Moreover, OODF does not depend on any specific way of shifting the distribution, suggesting it is a risk for CL in a wide range of circumstances. Taken together, our work identified an under-attended risk during CL, highlighting the importance of developing approaches that can overcome OODF.
## 1 Introduction
Learning models based on artificial neural networks [15] have achieved significant success in various challenging tasks ranging from image recognition [8] to playing the game of Go [35]. However, these models usually suffer from catastrophic forgetting (CF) [4, 21, 26] in open environments. That is, when tasks need to be learned sequentially, the model's performance on old tasks will be severely impaired after learning new tasks due to the interference between previously and newly learned knowledge. It is a significant limitation preventing an artificial intelligence system from effectively working in open, dynamic environments. To overcome the problem of CF, various continual learning (CL) methods have been proposed for deep neural networks, including strategies based on parameter regularization, memory replay, and parameter isolation [13, 27, 30]. By enabling a system to learn new tasks and maintain its performance on old tasks, CL has made significant progress in incremental image recognition and other computer vision tasks [19, 41].
Are the current CL strategies enough for coping with the problem of CF in more complex situations? Maybe not. One concern is the noise tolerance of CL strategies. In a recent review [14], the authors raised the possibility that continual learning machines may not perform well if there is a large distribution shift between the data encountered in the inference phase and those in the training phase. In the present work, surprisingly, we find that even an intra-class distribution shift negligible to human observers will introduce severe performance impairments for current CL methods.
Specifically, we named this phenomenon out-of-distribution forgetting (OODF) (Fig. 1). This phenomenon is special for CL, as the effect only takes place after subsequently learning other categories, which is different from the well-studied OOD problem in the setting of joint learning [33]. We believe OODF is an important yet under-attended problem for developing as well as evaluating future CL methods because:
* OODF is present for various CL strategies and settings. We verified its existence in both regularization-based and memory-based CL strategies on different tasks with different network structures.
* OODF is difficult to detect. It does not take effect immediately after training on the distribution-shifted data. Instead, as a specific form of CF, it only manifests when the model continues to learn other new tasks. Thus, a continual learning machine affected by OODF may appear capable of performing certain tasks when in fact it will fail. In addition, it only affects the class contaminated with the distribution-shifted data, without influencing other tasks.
* OODF can be triggered by various conditions leading to distribution-shifted data. We find that OODF severely impairs the CL performance, regardless of the approach causing the shift (local or global perturbation), as well as the reason behind it (deliberately designed attack or accident).
Our work identified OODF as a specific form of CF barely covered in previous studies, which is an important issue to consider for improving the security and robustness of the CL methods towards their application in practical circumstances.
## 2 Related Works
### Continual learning
In recent years, various algorithms have been proposed to overcome the CF problem in CL tasks. These algorithms can be categorized into a few strategies: parameter regularization strategy, memory replay strategy, and parameter isolation (also known as architecture-based) strategy [3, 14, 20]. Here, we briefly introduce the CL methods used for OODF evaluation in the present work.
**Regularization-based methods.** These methods modify the loss function with an additional term that protects the weights important for old tasks from changing. The **Orthogonal Weights Modification (OWM)**[42] algorithm is a strong regularization-based method in the class-incremental learning scenario. When learning a new task, the model weights are modified only in the direction orthogonal to the input space of the old tasks. **Adaptive Orthogonal Projection (AOP)**[6] is an improved version of such orthogonal-projection-based systems (e.g., OWM).
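To make the orthogonal-projection idea concrete, the following is a schematic NumPy sketch, not the exact recursion of the OWM or AOP papers; the regularizer `alpha` and the layer convention \(y=Wx\) are assumptions. The projector is shrunk by every input direction seen in previous tasks, and gradients are projected before being applied.

```python
import numpy as np

class OrthogonalProjector:
    """Schematic sketch: project weight gradients onto the subspace
    orthogonal to the inputs observed in previous tasks."""

    def __init__(self, dim, alpha=1e-3):
        self.P = np.eye(dim)   # current projector for one layer
        self.alpha = alpha

    def observe(self, x):
        """Shrink the projector so it (approximately) annihilates input x."""
        x = x.reshape(-1, 1)
        Px = self.P @ x
        self.P -= Px @ Px.T / (self.alpha + float(x.T @ Px))

    def project(self, grad):
        """Project a gradient of shape (out_dim, in_dim) for a layer y = W x."""
        return grad @ self.P
```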
**Memory-based methods** jointly train new tasks together with a subset of the previous training data stored in a memory buffer. **Experience Replay (ER)**[29] is a straightforward method that uses an exemplar set (_i.e_., a memory buffer) to store previous samples. During learning, samples from the exemplar set are used together with the current samples to train the agent; ER updates the memory buffer with a random strategy. **Dark Experience Replay++ (DER++)**[1] adds extra terms that match the network's logits sampled throughout the optimization trajectory, combined with a memory buffer, thus promoting consistency with its past. **Incremental Classifier and Representation Learning (iCaRL)**[27] decomposes the system into three components: representation, exemplar set (_memory buffer_), and classification. It performs classification with a nearest-mean-of-exemplars rule, memory updates by prioritized exemplar selection, and representation learning using knowledge distillation and prototype rehearsal. **Deep Generative Replay (DGR)**[34] is a framework with a cooperative dual-model architecture consisting of a generator that produces previous-task samples and a solver that interleaves them with samples from the new task during learning. **Greedy Sampler and Dumb Learner (GDumb)**[25] continually learns tasks by keeping the category diversity of the exemplar set balanced. The method does not train a model during the learning phase; instead, it trains one from scratch using only the exemplar set whenever a test query arrives.
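As an illustration of the memory-replay idea, here is a minimal sketch of an exemplar set with reservoir sampling; at each training step a sample drawn from it would simply be concatenated with the current batch. The buffer capacity and the update rule are illustrative assumptions, not the exact implementations of the cited methods.

```python
import random

class ReplayBuffer:
    """Minimal exemplar set with reservoir sampling."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []   # list of (x, y) pairs
        self.seen = 0

    def update(self, x, y):
        # Reservoir sampling: every example seen so far is kept with equal probability.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k):
        """Draw a replay mini-batch to mix with the current task's batch."""
        return random.sample(self.data, min(k, len(self.data)))
```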
**Parameter-isolation-based methods**, such as **Continual Neural Dirichlet Process Mixture (CN-DPM)**[16], allocate an independent expert module for each task and expand the neural network according to a Bayesian non-parametric framework.
This work focuses on the class-incremental scenario [11, 40], as it reflects realistic settings in which CL models need to identify all classes (_i.e_., categories) without task IDs. Meanwhile, we allow models to train on each task offline (as opposed to online CL [20]), ensuring better performance on standard CL and thus a stronger baseline for the subsequent OODF experiments.
Figure 1: Illustration of out-of-distribution forgetting. There are two continual learning scenarios: the top row is a standard continual learning paradigm, while the bottom row is a continual learning paradigm with an intra-class distribution shift on task 1. At time 1 in the OODF paradigm, although generalization on task 1 is as good as in the standard CL setting, the protection provided by CL methods mainly covers the out-of-distribution samples of task 1, leading to severe deficits on task 1 after learning task 2.
### Security Concerns of Neural Networks
The first concern is the **OOD** problem in neural networks. In the non-CL paradigm, if there is a shift between the training and testing datasets, for instance due to corruption or perturbation, the network's generalization ability degrades significantly [31, 33]. Hendrycks and Dietterich [9] established rigorous benchmarks for robustness evaluation on common perturbations (noise, blur, weather _etc._), not worst-case adversarial perturbations. Techniques like AugMix [10] have been proposed to improve OOD robustness.
The second concern is the security of a well-trained neural network in the face of deliberate attacks [24, 36]. **Adversarial attacks** exploit the existence of adversarial samples at the inference stage, designing inputs for well-trained neural networks that are easily recognized by humans but misidentified by the networks [28]. **Data poisoning**[12] manipulates the training stage by injecting specially designed samples into the training sets, resulting in an immediate drop in generalization at the inference stage. **Backdoor attacks**[18] also take control of the training stage by implanting a trigger into the model; they aim to mislead the model's outputs at test time but rely on replacing benign test samples with triggered ones.
The areas mentioned above investigate the security problem across different stages, purposes, and means. However, the models studied there are mainly static, trained with a non-sequential, joint procedure, and relatively few studies have been devoted to the security or robustness risks unique to CL.
### Several Concerns of Continual Learning
The first concern is security in CL. Security of neural networks and CL have been studied largely in parallel until recently. Guo _et al_. [7] propose the GREV method to attack the A-GEM method with adversarial samples and disseminate misinformation in the memory buffer. Umer _et al_. [37, 38, 39] show that it is possible to attack CL methods (EWC [13], SI [43], online EWC [32], and DGR [34]) via data poisoning. These attacks need to modify both training samples and labels to give a misleading supervision signal. Li and Ditzler [17] attack several parameter regularization strategies in the task-incremental scenario by injecting poisoned adversarial samples into the non-target tasks that follow the target task.
The second concern is the real-world application of CL. Recent literature has pointed out that a continual agent will encounter out-of-distribution samples in open-world situations, and thus may not be safe to use. Caccia _et al_. [2] treat learning new tasks as an OOD problem (relative to the learned old tasks), which is quite different from OODF by definition. Mundt _et al_. [22, 23] conducted experiments on _reverse continual learning_. They first trained the model on the entire dataset, then retrained it on a core set and compared the difference in performance. A well-chosen core set better represents the entire dataset and is associated with a lower performance drop. It was concluded that introducing OOD samples into the core set does not have a significant effect on CL. However, this was not an OOD problem, since the model had access to the whole dataset at the beginning of reverse CL.
Instead of focusing on specific CL strategies or the scenario of adversary attack, here we address a more general issue related to the security and OOD robustness of CL. That is, there is a previously unnoticed form of CF: the OODF that can severely affect CL models' performance. In the following sections, we will show several important properties of the OODF: it is present for various CL strategies and settings; it is difficult to detect due to its delayed effects; and it can be triggered by various conditions.
## 3 Out-of-distribution Forgetting
The data used for training and later testing in each CL task is usually assumed to be sampled from the same distribution. However, this is hardly the case in the real world. The data distribution may change, intentionally or accidentally, at various times between the learning stage and the inference stage. **NOTE** that we are not discussing the distribution shifts between the previous task (_e.g_., task 1) and the later task (_e.g_., task 2). We focus on the intra-class distribution shift in a specific task (_e.g_., only in task 1 at different time steps).
Previous research on CL barely considers this situation in dynamic circumstances, nor measures how such a shift in data distribution would influence the performance of the designed CL algorithm. In this section, we will show the influence of OODF, _i.e_., the catastrophic forgetting caused by the distribution shift in data between the training and inference phases, on artificial neural networks with mainstream CL algorithms. Firstly, we will introduce the learning paradigm and experimental procedure of the CL task considering OODF. Next, we will evaluate OODF on various mainstream CL strategies and compare with joint learning. Finally, we will analyze the key factors that determine the extent of the influence of OODF.
### Standard CL Paradigm
Here we take the supervised image classification problem as an example to illustrate the paradigm of CL. For experiments, we use the class-incremental scenario. In the CL task, a total of \(K\) tasks need to be learned, and the dataset for each task is defined as \(D^{t}=\{x_{i}^{t},y_{i}^{t}\}_{i=1}^{n_{t}},t=1,2,\ldots,K\)
where \(t\) is the task index. The dataset for the \(t^{\text{th}}\) task has \(n_{t}\) pairs of labeled data. Data \(\{x_{i}^{t}\}_{i=1}^{n_{t}}\) and labels \(\{y_{i}^{t}\}_{i=1}^{n_{t}}\) are sampled from distributions \(P\left(x^{t}\right)\) and \(P\left(y^{t}\right)\), respectively. In CL, the artificial neural network \(f_{\theta_{t}}:X^{t}\to Y^{t}\) must learn the tasks one at a time. In the \(t^{\text{th}}\) task, the neural network has to optimize its parameters \(\theta_{t}\) according to \(D^{t}\). It usually has no or very limited access to the previous datasets \(D^{t-1},D^{t-2},\ldots,D^{1}\), but needs to maintain its performance on all learned classes. In the inference phase, the testing datasets \(\{D_{test}^{t}\}_{t=1}^{K}\) are sampled from the same distributions as the training datasets.
### OODF Paradigm
In an open and dynamic circumstance, assuming that training and testing datasets are sampled from the same distribution is not always practical. To evaluate the influence of distribution shift in data, we adjust the standard CL paradigm accordingly. If a distribution shift takes place in the training dataset of the \(S^{\text{th}}\) task (_i.e_., an intra-class shift), the data \(D^{S}\) will be directly replaced by the shifted data \(\widehat{D}_{train}^{S}=\{\widehat{x}_{i},y_{i}\}_{i=1}^{\widehat{n}}\cup\{x_{i},y_{i}\}_{i=\widehat{n}+1}^{n}\). \(\widehat{D}_{train}^{S}\) contains \(\widehat{n}\) shifted data pairs \(\{\widehat{x}_{i},y_{i}\}_{i=1}^{\widehat{n}}\) and \(n-\widehat{n}\) original data pairs \(\{x_{i},y_{i}\}_{i=\widehat{n}+1}^{n}\). The percentage \(r=\widehat{n}/n\) of shifted samples \(\widehat{x}\) is used to measure the occurrence frequency (_i.e_., ratio) of feature shifting in the training dataset. The training procedure causing OODF is described in Algorithm 1. In the inference phase, the neural network is tested on \(D_{test}^{S}\) sampled from the same distribution as \(D^{S}\). Except for this minor modification, the rest of the learning procedure is the same as in standard CL.
### Introducing the Distribution Shift
In the experiments, we constructed distribution-shifted data \(\widehat{x}\) by adding to the non-shifted data \(x\) a new feature sampled from a distinct distribution \(P^{{}^{\prime}}(x)\). There were no changes in the labels. In practice, we simply chose a small pixel block at a fixed location in the image and set it to a constant value. The feature position is denoted by the index \(p\). The position of the other pixels, which remain the same, is denoted by \(q\), _i.e_., \(q=\neg p\). The strength of the shifted pixels is controlled by the parameter \(\epsilon\):
\[\hat{x}[q]=x[q]\qquad\hat{x}[p]=\epsilon \tag{1}\]
Figures 2(a) and 2(b) demonstrate the feature-shifting operation on two image examples from the MNIST and CIFAR-10 datasets. In each panel, the left image is the original one, while the right is the corresponding image with a pixel-wise modification. The shifted pixels, highlighted by the red square at the bottom-right corner, are subtle and easily overlooked by humans. The above operation is not necessarily an attack on the CL machine, though OODF can easily be exploited for intentional sabotage. In reality, many conditions can cause such distribution shifts, _e.g_., slight defects in the sensory equipment or random, noisy perturbations. These unpredictable and hardly detectable defects or perturbations can easily cause a feature shift in training samples.
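For concreteness, the feature shift of Eq. (1) can be implemented in a few lines of NumPy; the block position and size below (a 2x2 patch in the bottom-right corner of a 28x28 grayscale image) are illustrative assumptions matching the MNIST setting described later.

```python
import numpy as np

def shift_sample(x, p=(slice(26, 28), slice(26, 28)), eps=32):
    """Apply Eq. (1): copy the image and set the pixel block at position p to eps."""
    x_hat = x.copy()
    x_hat[p] = eps        # hat{x}[p] = eps; all other pixels keep their values
    return x_hat

def shift_class(images, r=0.9, **kwargs):
    """Shift a fraction r of the samples of one class; labels stay unchanged."""
    out = images.copy()
    n_shift = int(r * len(images))
    for i in range(n_shift):
        out[i] = shift_sample(images[i], **kwargs)
    return out
```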
We also consider another form of distribution shift by using FGSM [5] to construct adversarial samples, shown in Fig. 2(c). We regard shift diversity as a factor of OODF and discuss it in Sec. 3.6.
```
Require: Datasets \(\{D_{train}^{t}\}_{t=1}^{K}\), \(\{D_{test}^{t}\}_{t=1}^{K}\), task ID \(S\), occlusion strength \(\epsilon\), position \(p\), percentage \(r\), classifier with initial parameters \(f_{\theta_{0}}\), loss function \(l_{t}(\cdot)\), continual learning method \(CL\).
Require: \(\widehat{n}_{S}=rn_{S}\)
1: \(\widehat{D}_{train}^{S}\Leftarrow\{\widehat{x}_{i}^{S},y_{i}^{S}\}_{i=1}^{\widehat{n}_{S}}\cup\{x_{i}^{S},y_{i}^{S}\}_{i=\widehat{n}_{S}+1}^{n_{S}}\)
2: for \(t=1\) to \(K\), using \(CL\) do
3:   if \(t\neq S\) then
4:     get \(\{x^{t}\},\{y^{t}\}\) from \(D_{train}^{t}\)
5:   else
6:     get \(\{x^{t}\},\{y^{t}\}\) from \(\widehat{D}_{train}^{S}\)
7:   end if
8:   \(\theta_{t}\Leftarrow\underset{\theta}{\arg\min}\,l_{t}(f_{\theta}(x^{t}),y^{t};\theta_{t-1})\)
9: end for
10: return \(f_{\theta_{K}}\)
```
**Algorithm 1** Continual Learning on a Distribution-Shifted Dataset
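A minimal Python rendering of Algorithm 1 looks as follows; `shift_fn` applies the intra-class shift of Sec. 3.3 and `cl_update` stands for one task's worth of training with whichever CL method is being tested (both are placeholders, not implementations of specific methods).

```python
def continual_train_with_shift(train_sets, S, shift_fn, r, model, cl_update):
    """Train tasks 1..K in sequence; only task S uses the shifted dataset."""
    for t, (x, y) in enumerate(train_sets, start=1):
        if t == S:
            x = shift_fn(x, r)          # replace D^S by the shifted dataset
        model = cl_update(model, x, y)  # theta_t <- argmin l_t(f_theta(x), y; theta_{t-1})
    return model
```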
### Experiment Settings
To evaluate OODF, we tested the influence of intra-class shift on all three mainstream learning strategies in classic CL tasks. The choices of the algorithm and corresponding network structure and dataset in each experiment are
Figure 2: Distribution Shift. The red rectangle marks the pixels that were modified in (a) and (b). Panel (c) will be discussed in a later section.
listed in Tab. 1. In all experiments, either the original code or a popular re-implementation [20] of the CL algorithms was used for evaluation. All the code was checked on standard CL tasks without data distribution shift. We note that we are not aiming to evaluate the performance of different CL methods or to compare the performance degradation caused by distribution shifts across these methods. Instead, the purpose here is to examine the extent of OODF in CL models.
**Shift SplitMNIST-10 Task** The MNIST dataset was divided into \(10\) tasks. In each task, the neural network was trained to learn only one class of handwritten digits. Each class included approximately 6,000 samples in the training set and approximately 1,000 in the testing set. The images were not pre-processed before training. The training order of the tasks was from \(0,1,\cdots\) to \(9\). We took the digit \(3\) (task index \(S=4\)) as an example to illustrate the influence of the intra-class distribution shift on CL. The task choice is arbitrary and can be replaced by other digits. The shifted training samples accounted for a percentage \(r=90\%\). The shifted feature is a four-pixel square located at the bottom-right corner of the image, as highlighted by the red box in Fig. 2(a). The strength \(\epsilon\) was set to \(64\) in experiments with memory replay strategies and \(32\) for parameter regularization strategies.
**Shift SplitCIFAR-10&100 Task** This task is similar to the shifted SplitMNIST-10 task. The CIFAR-10 dataset was divided into \(10\) tasks according to the category to be sequentially learned by the neural network. Each category included 6000 samples in the training set and 1000 in the testing set. The RGB images were normalized before training. We added a shifted feature (a pixel square at the bottom-right corner) to training samples of task \(S=2\). The percentage of shifted samples is \(r=50\%\) for memory-based methods and \(r=90\%\) for others. The square size is \(1\times 1\) and \(2\times 2\), respectively. The strength is \(\epsilon=255\) for each RGB channel. In the same way, the CIFAR-100 dataset was divided into \(100\) tasks. To account for stochasticity, all results are collected over 5 independent trials and presented in the bars and tables. Details of the training and distribution shifts are listed in the supplementary material.
### Properties of OODF
#### 3.5.1 Delayed effect
As a new form of catastrophic forgetting, OODF also has a **delayed effect**. We first take the experiment of OWM on SplitMNIST-10 as an example. The experiment was conducted with a control group and a shift group. In the control group, the experiment was performed following the standard CL paradigm for comparison. In the shift group, shifted features were added to the task \(S=4\) and the experiment was performed following the OODF testing paradigm. The results are demonstrated in Fig. 3(a). In each group, the \(4^{\text{th}}\) task was first tested immediately after the end of that task (this time point is denoted as \(t_{3}\)) and then at the end of the experiment on the original testing dataset (this time point is denoted as \(t_{9}\)). Both the control group and the shift group performed well at \(t_{3}\), with accuracies of \(99.54\pm 0.16\%\) and \(92.85\pm 0.76\%\), respectively. As the experiment continued, the performance on the \(4^{\text{th}}\) task in the control group remained high at \(89.33\pm 0.67\%\), indicating that the CL algorithm functioned normally and protected the previous knowledge well. In comparison, the performance in the shift group dropped dramatically to \(51.90\pm 2.36\%\).
We conducted similar experiments on different CL strategies, network structures, and datasets. The results are demonstrated in Tab. 2, 3 and 4. In all experiments, the performance in the shift group at \(t=S\) was comparable to the control group but dropped dramatically at the end of learning \(t=K\). These results show that distribution shifts in the data can severely degrade the function of regularization-based and memory-based CL methods, but not parameter-isolation-based methods.
#### 3.5.2 Targeting
This section investigates the incidence of OODF on learned tasks. Figure 3(b) shows the performance of all tasks except the one recognizing digit \(3\) in the SplitMNIST-10 task trained with the OWM algorithm. In both the control and shift
\begin{table}
\begin{tabular}{c|c|c} \hline \hline & Backbone & CL methods \\ \hline \multirow{2}{*}{Split MNIST-10} & 784-800-10 & OWM \\ & 784-400-400-10 & iCaRL, DGR, ER \\ \hline \multirow{2}{*}{Split CIFAR-10 / CIFAR-100} & 3 CNN with 3 FC & OWM, AOP \\ & ResNet18 & iCaRL, ER, DER++, GDumb, CN-DPM \\ \hline \hline \end{tabular}
\end{table}
Table 1: Network backbone of CL methods.
Figure 3: Properties of out-of-distribution forgetting.
groups, the accuracy was tested at the end of the experiment. The testing accuracies of the tasks in the shift group were almost identical to those in the control group when there was no distribution shift in the training data. The result indicates that the distribution shift in a specific task causes only slight spillover to the rest. Similarly, we further examined the remaining experiments with different CL settings and algorithms. Figure 4 illustrates the average accuracy of the tasks without data distribution shift in the control and shift groups. The minor difference verifies that OODF only affects the target task with data distribution shift in the learning sequence.
#### 3.5.3 Continual Detrimental
Is the above phenomenon specific to CL? Or is it just a form of data poisoning that works for all learning systems? To answer this question, we conducted joint learning experiments in the same setting as above for OODF evaluation,
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \multirow{2}{*}{CIFAR-100} & & \multicolumn{3}{c}{Mem.} \\ \cline{3-5} & & iCaRL & ER & GDumb \\ \hline \multirow{2}{*}{\(t=S\)} & Control & \(94.6\pm 2.5\) & \(87.3\pm 5.3\) & \(96.2\pm 2.7\) \\ & Shift & \(91.8\pm 2.8\) & \(94.5\pm 1.3\) & \(94.0\pm 2.9\) \\ \hline \multirow{2}{*}{\(t=K\)} & Control & \(47.4\pm 11.0\) & \(23.5\pm 7.6\) & \(11.4\pm 5.3\) \\ & Shift & \(\mathbf{28.0\pm 12.4}\) & \(\mathbf{5.5\pm 3.1}\) & \(\mathbf{7.0\pm 4.6}\) \\ \hline \end{tabular}
\end{table}
Table 4: Out-of-distribution forgetting on CIFAR100. Test Acc.(%) of task \(S\) at two time steps \(t=S\) and \(t=K\). We only report the results of these three methods in the table due to compatibility issues (_e.g._, CN-DPM for 100 class-incremental) or intractable testing performance (_e.g._, Reg. based methods and DER++) for other methods.
Figure 4: Comparison of non-target tasks’ accuracies between standard and shift experiments. The results were obtained by averaging the accuracies for all tasks except for task S after the whole CL learning procedure was completed. The left (right) bars for each figure are the results for the control (shift) group.
Figure 5: Comparison between joint learning and CL under the same distribution shift with various network structures tested on SplitMNIST-10. In each figure, the light pink bar on the leftmost indicates joint training without shifts, the pink bar nearby indicates joint learning with shifts, and the green bars indicate different CL methods.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \multirow{2}{*}{MNIST} & & Reg. & \multicolumn{3}{c}{Mem.} \\ \cline{3-6} & & OWM & iCaRL & DGR & ER \\ \hline \multirow{2}{*}{\(t=S\)} & Control & \(99.54\pm 0.16\) & \(99.76\pm 0.13\) & \(99.08\pm 0.32\) & \(99.30\pm 0.39\) \\ & Shift & \(92.85\pm 0.76\) & \(98.67\pm 0.40\) & \(94.57\pm 1.37\) & \(97.76\pm 1.24\) \\ \hline \multirow{2}{*}{\(t=K\)} & Control & \(89.33\pm 0.67\) & \(84.63\pm 2.54\) & \(83.40\pm 3.58\) & \(90.20\pm 1.22\) \\ & Shift & \(\mathbf{51.90\pm 2.36}\) & \(\mathbf{59.57\pm 6.34}\) & \(\mathbf{67.43\pm 5.94}\) & \(\mathbf{72.15\pm 5.97}\) \\ \hline \end{tabular}
\end{table}
Table 2: Out-of-distribution forgetting on MNIST. Test Acc.(%) of task \(S\) at two time steps \(t=S\) and \(t=K\).
including the same dataset \(D_{train}\) and network structures.
\[D_{train}=(\bigcup_{t=1,t\neq S}^{K}D_{train}^{t})\cup\hat{D}_{train}^{S} \tag{2}\]
In Fig. 5 we show the CL dependency of OODF by evaluating the distribution-shifted task in two different learning paradigms. The green bars in each figure represent the testing accuracy of the task with distribution-shifted data in continual learning, while the pink bars represent the corresponding accuracy of the task learned jointly with and without the same shifted samples. These results indicate that, over a large range of shifts, the distribution shift is more detrimental to CL than to joint learning.
### Analysis
#### 3.6.1 Occlusion strength
The intra-class distribution shift depends on the occlusion strength \(\epsilon\), the intra-class percentage \(r\), and the number of shifted pixels. Based on the SplitMNIST-10 task and the OWM algorithm in Sec. 3.5.1, we evaluated the effect of these three factors on the test accuracy of the target task.
In Fig. 6(a), we evaluate the final performance on digit \(3\) when giving different occlusion strength levels ranging from \(\epsilon=4\) to \(128\), listed on the X-axis. We can see that the test accuracy dropped quickly even at a low \(\epsilon\) value, \(\epsilon=16\) for example. This indicates that even an occlusion with small strength will lead to OODF. Fig. 6(b) shows that the test accuracy stays on a plateau at low percentage levels and only drops when the percentage reaches a high level, suggesting that a high percentage level is needed to cause significant OODF. We further report the results using the number of shifted pixels as different strengths in Tab. 5, which shows a similar trend, indicating that the larger the number of shifted pixels, the more significant the OODF.
#### 3.6.2 Various conditions of shift
We further examined whether OODF depends on specific types of distribution shifts. To this end, we replaced the explicit distribution shift (occlusion) in Sec. 3.3 with an implicit one, adversarial samples (Fig. 2(c)), and kept the other settings the same. The results show a trend consistent with the occlusion condition. We tested digit \(3\) at \(t_{3}\) and \(t_{9}\); its accuracy dropped from \(94.28\pm 0.62\%\) to \(22.78\pm 2.47\%\), compared to \(92.85\pm 0.76\%\) to \(51.90\pm 2.36\%\) for occlusion.
#### 3.6.3 Mechanism of OODF
We hypothesized that frequently occurring shifts can serve as informative features for classification. This compromises the mechanism designed to protect the intrinsic features for the learned class, leading to severe CF in subsequent learning, especially when the less-protected features overlap with the features in new classes.
Specifically, we take the results in Sec. 3.6.2 for analysis. The in-process test accuracy of digit 3 from \(t_{0}\) to \(t_{9}\) is shown in Tab. 6, and the performance drops significantly at \(t_{5}\) and \(t_{8}\). Let \(\mathcal{D}_{3}\) be the distribution of clean digit 3 and \(\hat{\mathcal{D}}_{3}\) be the distribution of shifted digit 3. Assume \(\mathcal{D}_{3}^{\prime}=\mathcal{D}_{3}-(\mathcal{D}_{3}\cap\hat{\mathcal{D}}_{3})\) and \(\hat{\mathcal{D}}_{3}^{\prime}=\hat{\mathcal{D}}_{3}-(\mathcal{D}_{3}\cap\hat{\mathcal{D}}_{3})\). We make the following conjecture: (i) in the learning process of the OODF scenario, the features of \(\mathcal{D}_{3}\cap\hat{\mathcal{D}}_{3}\) are protected, while features of \(\hat{\mathcal{D}}_{3}\) overlap with those of subsequent tasks; (ii) performance on clean 3 mainly depends on \(\mathcal{D}_{3}^{\prime}\) rather than on \(\mathcal{D}_{3}\cap\hat{\mathcal{D}}_{3}\). Taken together, OODF happens on digit 3. Indeed, the accuracy on digit 3 drops significantly after subsequently learning 5 and 8.
To test the hypothesis, we constructed a 3-layer binary classification MLP that distinguishes \(\mathcal{D}_{3}\) and \(\hat{\mathcal{D}}_{3}\) as well as possible (Fig. 7, row 1, column 1). Passing any other digit through this MLP, we take the \(\mathbb{R}^{2}\) output vector and construct a feature map. Consistent with the hypothesis, we found that 5 and 8 overlap more with clean 3 than the other digits do (Fig. 7, row 1, column 3).
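A sketch of such a probe is given below; the hidden sizes and the 28x28 input are assumptions for the MNIST case. After the probe is trained to separate clean 3s from shifted 3s, its two logits serve as the 2-d space into which the other digits are projected.

```python
import torch
import torch.nn as nn

# 3-layer binary classifier separating clean digit 3 from shifted digit 3.
probe = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 100), nn.ReLU(),
    nn.Linear(100, 100), nn.ReLU(),
    nn.Linear(100, 2),               # logits: (clean 3, shifted 3)
)

def feature_map(images):
    """Project a batch of images into the probe's 2-d output space."""
    with torch.no_grad():
        return probe(images)         # shape (batch, 2)
```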
These results revealed why parameter-isolation-based methods are not sensitive to OODF (Tab. 3). The feature-space representations in these methods vary from task to task and, more importantly, are independent of each other. In contrast, regularization-based and memory-based methods share a common representation space across all tasks, which causes interference. Despite the robustness of
\begin{table}
\begin{tabular}{c|c c c c c c c} \hline \hline Task-10 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline Control & \(99.54\%\) & \(98.58\%\) & \(92.40\%\) & \(92.40\%\) & \(92.05\%\) & \(90.02\%\) & \(89.34\%\) \\ Shift & \(94.28\%\) & \(88.62\%\) & \(\mathbf{50.64\%}\) & \(41.21\%\) & \(37.91\%\) & \(\mathbf{24.80\%}\) & \(22.28\%\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Test accuracy of the digit 3 after learning each task.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Number & 1 & 4 & 9 & 16 \\ \hline Acc. & \(56.34\%\) & \(51.90\%\) & \(44.55\%\) & \(39.60\%\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Impact of strength: the number of shifted pixels. OWM on MNIST.
Figure 6: Influences of distribution shift factors
the parameter-isolation-based methods towards OODF, they may not be suited to dealing with a large number of CL tasks due to the increasing structural complexity. Our results thus highlight the need to develop regularization-based and memory-based approaches that are more robust to OODF.
## 4 Conclusion
In this work, we identify a new phenomenon of catastrophic forgetting, named out-of-distribution forgetting, and demonstrate how it can significantly affect the robustness of CL. Although OODF is described here in the image classification task under the class-incremental scenario, it is straightforward to extend it to other CL tasks in computer vision or natural language processing. OODF reveals the vulnerability of current CL methods in dealing with intra-class distribution shifts, which could be introduced intentionally or by unnoticed perturbations. This is well conceivable in both attack and accidental scenarios.
More generally, our work suggests that the catastrophic forgetting problem in CL is more complex than we previously recognized, and it is likely that other forms of CF exist that cannot be dealt with by the majority of current CL approaches. Thus, it is of theoretical and practical importance to investigate the issue of CF more comprehensively, which will guide the development of more robust CL approaches that can work in complex environments.
|
2305.19426 | ScoNe: Benchmarking Negation Reasoning in Language Models With
Fine-Tuning and In-Context Learning | A number of recent benchmarks seek to assess how well models handle natural
language negation. However, these benchmarks lack the controlled example
paradigms that would allow us to infer whether a model had learned how negation
morphemes semantically scope. To fill these analytical gaps, we present the
Scoped Negation NLI (ScoNe-NLI) benchmark, which contains contrast sets of six
examples with up to two negations where either zero, one, or both negative
morphemes affect the NLI label. We use ScoNe-NLI to assess fine-tuning and
in-context learning strategies. We find that RoBERTa and DeBERTa models solve
ScoNe-NLI after many shot fine-tuning. For in-context learning, we test
InstructGPT models and find that most prompt strategies are not successful,
including those using step-by-step reasoning. To better understand this result,
we extend ScoNe with ScoNe-NLG, a sentence completion test set that embeds
negation reasoning in short narratives. Here, InstructGPT is successful, which
reveals the model can correctly reason about negation, but struggles to do so
on prompt-adapted NLI examples outside of its core pretraining regime. | Jingyuan Selena She, Christopher Potts, Samuel R. Bowman, Atticus Geiger | 2023-05-30T21:43:11Z | http://arxiv.org/abs/2305.19426v1 | # ScoNe: Benchmarking Negation Reasoning in Language Models
###### Abstract
A number of recent benchmarks seek to assess how well models handle natural language negation. However, these benchmarks lack the controlled example paradigms that would allow us to infer whether a model had learned how negation morphemes semantically scope. To fill these analytical gaps, we present the **Sco**ped **Ne**gation NLI (ScoNe-NLI) benchmark, which contains contrast sets of six examples with up to two negations where either zero, one, or both negative morphemes affect the NLI label. We use ScoNe-NLI to assess fine-tuning and in-context learning strategies. We find that RoBERTa and DeBERTa models solve ScoNe-NLI after many shot fine-tuning. For in-context learning, we test InstructGPT models and find that most prompt strategies are not successful, including those using step-by-step reasoning. To better understand this result, we extend ScoNe with ScoNe-NLG, a sentence completion test set that embeds negation reasoning in short narratives. Here, InstructGPT is successful, which reveals the model can correctly reason about negation, but struggles to do so on prompt-adapted NLI examples outside of its core pretraining regime.
## 1 Introduction
Negation is a ubiquitous but complex linguistic phenomenon that poses a significant challenge for NLP systems. A diverse array of benchmarks focused on negation have appeared in recent years, many of which contain families of contrasting examples that provide a local view of the model decision boundary Gardner et al. (2020). For instance, Cooper et al. (1996), McCoy and Linzen (2018), Wang et al. (2019), Ettinger (2020), Hartmann et al. (2021), and Kassner and Schutze (2020) all conduct evaluations with minimal pairs of examples that are identical except for a negative morpheme. These examples reveal whether the presence of negation has a causal impact on model predictions.
However, negation is not simply present or absent in a sentence. Rather, negation morphemes are semantic operators that take scope in complex ways, as we see in clear contrasts like _the person who was at the talk wasn't happy_ and _the person who wasn't at the talk was happy_. The recent CondaQA benchmark of Ravichander et al. (2022) includes minimal pairs aimed at determining whether models are sensitive to these differences in scope.
With the current paper, we seek to provide an even fuller picture of the complexities of negation and semantic scope. We introduce the English-language **Sco**ped **Ne**gation Natural Language Inference Benchmark (ScoNe-NLI). ScoNe-NLI extends the negated portion of the Monotonicity NLI dataset Geiger et al. (2020) such that each of the 1,202 examples is now a contrast set with six examples in which zero, one, or two negations are present and each negation may or may not have a semantic scope such that the NLI label is impacted by its presence. These six conditions offer a rich picture of how negation affects NLI reasoning, and they allow us to determine whether models are truly able to handle nested negation and scope or whether they have found simplistic solutions.
We evaluate models on ScoNe-NLI using many-shot fine-tuning as well as a wide range of in-context learning strategies. For fine-tuning approaches, we find that RoBERTa and DeBERTa models both solve ScoNe-NLI. For in-context learning, we evaluate the latest InstructGPT model with a variety of prompt strategies. We find that these models perform well on sections of ScoNe-NLI where the negation morphemes can simply be ignored, but they systematically fail in conditions where exactly one negative morpheme has semantic scope such that its presence changes the NLI label. In other words, these models fail to learn in context how negation actually takes scope.
To better understand this result, we introduce a sentence completion test set (ScoNe-NLG) containing examples that seem better aligned with what we can infer about the training data used for InstructGPT models. In each ScoNe-NLG example, negation reasoning is needed to provide a coherent ending to an incomplete narrative (see Figure 0(b)). ScoNe-NLG contains minimal triplets of examples where negation is absent, present with relevant scope, or present without relevant scope. InstructGPT is successful on ScoNe-NLG. When considered alongside our negative result for ScoNe-NLI, this finding seems to show that these models _can_ learn in-context about how negation takes scope, but only when the examples are hand-tailored to be aligned with the training data and aligned with known strengths of these models. Thus, when used together, ScoNe-NLI and ScoNe-NLG serve as a clear diagnostic for exploring useful prompting strategies and assessing the capacity of language models to reason about negation and scope.
## 2 A Brief Review of Negation in NLI Benchmarks
A diverse array of benchmarks and diagnostic experiments have included negation reasoning in recent years (Nairn et al., 2006; McCoy and Linzen, 2018; Wang et al., 2019; Ettinger, 2020; Hartmann et al., 2021; Kassner and Schutze, 2020; Ravichander et al., 2022).
Hossain et al. (2022) analyze a variety of natural language understanding benchmarks and find that negation is underrepresented, and that when negation is present it often has no impact on the example label. Hossain et al. (2020) address this issue by manually adding negation to the premise-hypothesis pairs in MNLI (Williams et al., 2018), SNLI (Bowman et al., 2015), and RTE (Dagan et al., 2007; Cooper et al., 1996).
Yanaka et al. (2019) introduce the crowdsourced MED dataset, which has many NLI examples where negation generates inferences. Monotonicity NLI (MoNLI; Geiger et al. 2020) consists of modified SNLI sentences that have gold labels impacted by lexical entailments in affirmative contexts (PMoNLI) and lexical entailments reversed by a negation (NMoNLI). BERT fine-tuned on SNLI and MNLI fails to generalize to both of these datasets, but succeeds with further fine-tuning on MED/MoNLI. Some automatically generated NLI datasets also include negation reasoning (Geiger et al., 2019; Richardson et al., 2020; Yanaka et al., 2019, 2021).
## 3 ScoNe-NLI
ScoNe-NLI is an extension of MoNLI (Geiger et al., 2020). MoNLI was generated by randomly selecting a sentence from SNLI and replacing a noun with a hypernym (more general term) or
\begin{table}
\begin{tabular}{l p{0.3\linewidth} c p{0.3\linewidth} r} \hline \hline Split & Premise & Rel. & Hypothesis & Examples \\ \hline No negation & The cowboy fell off a horse at the competition & \(\sqsupset\) & The cowboy fell off a racehorse at the competition & 1,202 \\ One Not Scoped & The cowboy did not fear anything, until he fell off a horse at the competition & \(\sqsupset\) & The cowboy did not fear anything, until he fell off a racehorse at the competition & 1,202 \\ Two Not Scoped & The cowboy, who was not very old, was not proud that he fell off a horse at the competition & \(\sqsupset\) & The cowboy, who was not very old, was not proud that he fell off a racehorse at the competition & 1,202 \\ Two Scoped & There is no way that the cowboy did not fall off a horse at the competition & \(\sqsupset\) & There is no way that the cowboy did not fall off a racehorse at the competition & 1,202 \\ One Scoped & The cowboy did not fall off a horse at the competition & \(\sqsupset\) & The cowboy did not fall off a racehorse at the competition & 1,202 \\ One Scoped, One Not Scoped & The cowboy did not fall off a horse, but the competition was not too important & \(\sqsupset\) & The cowboy did not fall off a racehorse, but the competition was not too important & 1,202 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Two contrast sets from the ScoNe Benchmark
hyponym (less general term). The original and edited sentences are then used to form two premise-hypothesis pairs, one with the label _entailment_ and the other with the label _neutral_. In about half of the examples, this replacement is in an affirmative context with no negation (PMoNLI). In the other half, it is under the scope of a single negation (NMoNLI).
The authors generated ScoNe-NLI by using each example of NMoNLI to create a contrast set of six examples where gold labels are impacted by the scope of zero, one, or two negations, as in Table 1.
To succeed across all sections of ScoNe, models need to attend to the presence of negation as well as the way it scopes semantically. Table 1(a) shows an actual example of how ScoNe extends MoNLI. We use the train-test split of MoNLI where substituted lexical items are disjoint across training and testing data. Appendix C provides further details.
**Fine-Tuning on ScoNe-NLI** We used publicly available weights on HuggingFace for the DeBERTa-v3-base models already fine-tuned on MNLI, Fever-NLI, and Adversarial-NLI Laurer et al. (2022); He et al. (2021). Appendix B contains comparable results for the RoBERTa model Liu et al. (2019). Fine-tuning results are in Table 2.
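For reference, evaluating such a checkpoint on a ScoNe-NLI pair with the `transformers` library looks roughly like the snippet below. The checkpoint name is an assumption standing in for any DeBERTa-v3-base model fine-tuned on MNLI, Fever-NLI, and ANLI, and any 3-way prediction other than entailment is collapsed to neutral to match the 2-way labels.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli"  # assumed checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "The cowboy did not fall off a horse at the competition"
hypothesis = "The cowboy did not fall off a racehorse at the competition"

inputs = tok(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
label = model.config.id2label[int(logits.argmax(dim=-1))]
prediction = "entailment" if label.lower() == "entailment" else "neutral"
print(prediction)
```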
Fine-tuning on existing NLI datasets is insufficient for good performance on ScoNe-NLI: DeBERTa-v3-base fine-tuned on existing NLI datasets, even those that focus on negation, systematically fails. Thus, it seems that ScoNe-NLI captures novel aspects of negation reasoning.
In contrast, fine-tuning on MoNLI and ScoNe-NLI training data results in near perfect performance on ScoNe-NLI test data. This shows that DeBERTa can learn negation reasoning and generalize to new lexical items.
**In-context Learning on ScoNe-NLI** We evaluated InstructGPT using OpenAI's API with _text-davinci-002_ and _text-davinci-003_ engines and a temperature of 0.0 Brown et al. (2020). We ask InstructGPT to infer NLI labels given the premise and hypothesis using prompts. All prompts are constructed such that if the response contains "yes" (case-insensitive), then the label _entailment_ is predicted, else the label _neutral_ is predicted. We use six prompts (Table 3). For each prompt, we implemented both zero-shot and few-shot inference experiments. Appendix E provides the full prompts.
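The label-extraction rule can be written as a small helper. The API call below uses the legacy OpenAI completions client as an assumption; any function returning the model's text completion for a prompt could take its place.

```python
import openai

def predict_label(prompt: str, engine: str = "text-davinci-003") -> str:
    response = openai.Completion.create(
        engine=engine, prompt=prompt, temperature=0.0, max_tokens=64
    )
    text = response["choices"][0]["text"]
    # "yes" anywhere in the response (case-insensitive) -> entailment, else neutral
    return "entailment" if "yes" in text.lower() else "neutral"
```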
**InstructGPT makes systematic errors similar to a baseline that ignores negation entirely.** The best results are for the few-shot reasoning prompt with _davinci-003_. While its overall accuracy of 82% may initially appear to be a success, further analysis reveals otherwise. InstructGPT succeeds only on the sections of ScoNe-NLI where zero or two negations take scope, namely, no negation (99%), one not scoped (97%), two not scoped
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Fine-tuning Datasets} & No & One & Two & Two & One & One Scoped, \\ & Negation & Not Scoped & Not Scoped & Scoped & Scoped & One Not Scoped \\ \hline MAF-NLI & 82.0 & 86.0 & 81.5 & 91.0 & 5.0 & 5.0 \\ MAF-NLI + MoNLI Geiger et al. (2020) & 96.2 & 87.5 & 99.5 & 8.9 & 100.0 & 100.0 \\ MAF-NLI + MED Yanaka et al. (2020) & 84.8 & 83.5 & 82.0 & 58.9 & 99.5 & 97.0 \\ MAF-NLI + Neg-NLI Hossain et al. (2020) & 91.3 & 88.5 & 83.0 & 70.4 & 37.0 & 29.0 \\ MAF-NLI + MoNLI + ScoNe-NLI & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 \\ \hline \hline \end{tabular}
\end{table}
Table 2: DeBERTa fine-tuning results on ScoNe-NLI. MAF-NLI stands for fine-tuning on MNLI, ANLI, and Fever-NLI.
\begin{table}
\begin{tabular}{l l} \hline \hline Conditional Q & Is it true that if **Premise**, then **Hypothesis**? \\ \hline Hypothesis Q & Assume that **Premise**. Is it then definitely true that **Hypothesis**? Answer yes or no. \\ \hline Conditional Truth & If **Premise**, then **Hypothesis**. Is this true? \\ \hline Brown et al. & P: **Premise**\(\backslash\)n Q: **Hypothesis**\(\backslash\)n Yes, No, or Maybe? \\ \hline Structured & P: **Premise**\(\backslash\)n H: **Hypothesis**\(\backslash\)n L: \\ \hline Reasoning & Logical and commonsense reasoning exam.\(\backslash\)n Explain your reasoning in detail, then answer with Yes or No. Your answers should follow this 4-line format:\(\backslash\)n Premise: \textless{}a tricky logical statement about the world\textgreater{}.\(\backslash\)n Question: \textless{}question requiring logical deduction\textgreater{}.\(\backslash\)n Reasoning: \textless{}an explanation of what you understand about the possible scenarios\textgreater{}\(\backslash\)n Answer: \textless{}Yes or No\textgreater{}.\(\backslash\)n Premise: **Premise**\(\backslash\)n Question: **Hypothesis**\(\backslash\)n Reasoning: Let's think logically step by step. The premise basically tells us that \\ \hline \hline \end{tabular}
\end{table}
Table 3: Prompts used to adapt a 2-way NLI example (**Premise**, **Hypothesis**). Newlines are indicated with \(\backslash\)n. Full prompts with few-shot variants are in Appendix E.
(98%), and two scoped (89%). InstructGPT performs much worse on sections where exactly one negation takes scope, namely one scoped (69%), one scoped/one not (48%). An idealized baseline entirely ignoring the presence of negation (last row of Table 4) succeeds and fails on the same sections, indicating a systematic flaw in InstructGPT.
## 4 ScoNe-NLG
InstructGPT fails to reason about negation when given NLI examples that must be adapted to natural language generation (NLG) with prompts. We hypothesized that InstructGPT may correctly reason about negation when evaluated on examples hand tailored to its pretraining objective, because there is no need for prompt engineering Liu et al. (2021); Wei et al. (2022); Kojima et al. (2022).
**Dataset**: ScoNe-NLG is a natural language generation dataset that contains 74 contrasting triplets of examples: half-completed naturalistic narratives that have different coherent completions depending on the presence and scope of a negation. InstructGPT fails on the sections of ScoNe-NLI where exactly one negation takes scope, so we opt for contrast sets with three examples that require knowledge of a lexical entailment in an affirmative context without negation, an affirmative context with non-scoping negation, and a negative context with scoping negation, respectively. See Table 0(b).
**In-context Learning on ScoNe-NLG**: We used InstructGPT to complete the partial sentence inputs with the _text-davinci-003_ engine (temperature of 0.0). In the zero-shot setting, the prompt consists of the ScoNe-NLG example. In the few-shot setting, four demonstrations from ScoNe-NLG are given: one with no negation, two with scoping negation, and one with non-scoping negation. See Appendix E.13 for the complete prompts.
To evaluate, the authors went through the responses by hand and determined whether the generated text is coherent and compatible with the initial narrative. The authors agreed on these annotations for 216/222 of the zero-shot responses with a Fleiss kappa of 0.84 and 220/222 of the few-shot responses with a Fleiss kappa of 0.91. These agreement rates are so high that we evaluate InstructGPT only for the cases where the annotators agree. Here, InstructGPT is successful but not perfect, achieving 95% and 92% accuracy in the few and zero-shot settings, respectively. We do not observe the systematic failures seen on ScoNe-NLI.
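The reported agreement can be computed with standard tooling. The snippet below is a sketch assuming the annotations are stored as one row per response and one column per annotator, with 1 for a completion judged coherent and 0 otherwise; the data shown are placeholders, not the actual annotations.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

ratings = np.array([
    [1, 1],
    [1, 0],
    [0, 0],
])  # placeholder data, shape (n_responses, n_annotators)

table, _ = aggregate_raters(ratings)   # per-response counts for each category
print(fleiss_kappa(table))
```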
\begin{table}
\begin{tabular}{l l l l l l l l l} \hline \hline & & No & One & Two & Two & One & One Scoped, & \\ & & Negation & Not Scoped & Not Scoped & Scoped & Scoped & One Not Scoped & Overall \\ \hline \multirow{6}{*}{Zero-shot} & Structured & 0.50 & 0.50 & 0.50 & 0.50 & 0.50 & 0.50 & 0.50 \\ & Brown et al. & 0.74 & 0.70 & 0.74 & 0.55 & 0.44 & 0.45 & 0.60 \\ & Conditional Q & 0.79 & 0.84 & 0.80 & 0.50 & 0.52 & 0.44 & 0.65 \\ & Conditional Truth & 0.98 & 0.86 & 0.80 & 0.43 & 0.66 & 0.47 & 0.70 \\ & Hypothesis Q & 0.69 & 0.90 & 0.70 & 0.51 & 0.62 & 0.42 & 0.64 \\ & Reasoning & 0.90 & 0.88 & 0.94 & 0.72 & 0.52 & 0.46 & 0.73 \\ \hline \multirow{6}{*}{Few-shot} & Structured & 0.50 & 0.50 & 0.50 & 0.50 & 0.50 & **0.50** & 0.50 \\ & Brown et al. & 0.86 & 0.66 & 0.80 & 0.83 & 0.36 & 0.28 & 0.63 \\ & Conditional Q & 0.92 & 0.85 & 0.90 & 0.62 & 0.34 & 0.34 & 0.66 \\ & Conditional Truth & 0.94 & 0.90 & 0.94 & 0.64 & 0.36 & 0.37 & 0.69 \\ & Hypothesis Q & 0.98 & 0.96 & 0.94 & 0.83 & 0.51 & 0.40 & 0.77 \\ & Reasoning & **0.99** & **0.97** & **0.98** & **0.89** & **0.69** & 0.43 & **0.82** \\ \hline \hline \end{tabular}
\end{table}
Table 4: In-context learning results on ScoNe-NLI for InstructGPT (_davinci-003_ engine; see Appendix F for corresponding results for _davinci-002_, which are uniformly lower). Zero-shot results are given in the first group of rows, with the best results in that condition underlined. Few-shot results are given in the second group, with the best results for this condition (and overall) in bold. The bottom row specifies a simple, idealized Ignore-Negation baseline that makes predictions as if negations were absent. The baseline shows that the seemingly solid Overall results of these models are driven largely by conditions for which negation can be ignored. Conversely, models are often at or below chance where negation is critical in some way.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & No & One & One Not & \\ & Negation & Scoped & Scoped & Overall \\ \hline Zero-shot & 0.99 & 0.90 & 0.88 & 0.92 \\ Few-shot & 0.93 & 1.00 & 0.93 & 0.95 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results for ScoNe-NLG using davinci-003. The three conditions correspond to those of ScoNe and test the essential scope-taking properties of negation.
## 5 Future Work on Interpretability
ScoNe is based on naturalistic examples, but it also has a controlled structure that offers valuable opportunities to move beyond simple behavioral testing and more deeply understand _how_ models solve tasks related to lexical entailment and negation.
The theory of causal abstraction provides a framework for interpretability Geiger et al. (2023), where a neural model can be understood to implement the intermediate variables and internal structure of a program or algorithm Geiger et al. (2021, 2022); Wu et al. (2022); Huang et al. (2022); Geiger et al. (2023). In fact, the MoNLI dataset and the technique of interchange interventions (which is the primary technique in causal abstraction analysis) were jointly introduced in Geiger et al. (2020), where interchange interventions were used to investigate whether a BERT model implements a simple, human-interpretable algorithm that can perfectly label MoNLI using a variable representing lexical entailment and a variable representing the presence of negation.
With ScoNe, we can ask even deeper interpretability questions of this form. To encourage future work in this direction, we present a range of algorithmic solutions in Figure 1. Two of these solutions solve ScoNe and could perhaps explain neural models that learn the task perfectly, and two others implement flawed heuristics that could explain neural models with poor task performance.
Figures 1(a) and 1(b) present two intuitive and correct algorithms that solve ScoNe, but they have distinct intermediate variables and internal structure. The first computes two Booleans representing whether each negation scopes, and the second computes a count of how many negations scope.
The remaining two panels of Figure 1 show flawed heuristics: the first ignores negation entirely, which we discussed in Section 3 as a hypothesis about how models fail at our task, and the second counts the number of negations present but ignores scope.
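As a schematic rendering (not the authors' code), the second correct algorithm and the ignore-negation heuristic can be written as follows over the 2-way MoNLI/ScoNe label set.

```python
def count_scoped_solution(lexrel, scoped_negations):
    """Each scoped negation flips the label; unscoped negations are ignored.
    lexrel is the label the example would receive with no negation at all."""
    if scoped_negations % 2 == 0:
        return lexrel
    return "neutral" if lexrel == "entailment" else "entailment"

def ignore_negation_heuristic(lexrel, scoped_negations):
    """Flawed baseline: predict as if no negation were present."""
    return lexrel
```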
Using the toolkit of causal abstraction, we can assess models not only behaviorally, but also evaluate whether they implement an interpretable algorithm. The results of Geiger et al. (2023) begin to show how such analyses could be extended to in-context learning with LLMs, as in Section 4.
## 6 Conclusion
We introduced ScoNe, a benchmark for fine-tuning and in-context learning experiments on negation. ScoNe is challenging for NLI models fine-tuned on other datasets, even those designed for negation reasoning, but a modest amount of fine-tuning on ScoNe leads to success. For in-context learning, we find that InstructGPT models fail dramatically on ScoNe. However, we also introduce ScoNe-NLG, which uses more narrative-like examples to probe models' capacity to handle negation, and show that InstructGPT is successful with zero-shot and few-shot prompts for this task. These results show that ScoNe supports fine-grained assessments of whether models can reason accurately about natural language negation, and our discussion in Section 5 suggests that ScoNe can be a powerful tool for discovering _how_ models reason semantically.
Figure 1: Four human-interpretable algorithms for ScoNe-NLI. The first two solve the task perfectly, and the other two implement flawed heuristics that a model might learn to implement. The function get-lexrel retrieves the relation between the aligned words in the premise and hypothesis, count-scoped counts scoped negations, count-neg counts negations regardless of scope, and get-first returns true if the first negation scopes, while get-second returns true if there is a second negation and it scopes.
### Limitations
We are releasing ScoNe as a diagnostic tool for conducting controlled scientific experiments. This is our primary intended use, and we advise against uncritical use of ScoNe for real-world applications, as we have not audited the dataset for such purposes.
As a diagnostic tool, ScoNe's primary limitation is its focus on English. Cross-linguistically, we find many strategies for expressing negation. The English-language strategy of using mostly adverbial modifiers for sentential negation is not the only one by any means, and we would expect to see quite different results for languages in which negation is expressed, for example, with verbal suffixes. This highlights the value of potential future efforts extending ScoNe to other languages.
By the same token, we acknowledge that many linguistic phenomena interact with negation even internal to English. ScoNe restricts to negation in the context of lexical entailment, and mostly uses "not" as the negative morpheme. This excludes a wide range of negation morphemes and negation strategies that ultimately need to be brought into the picture.
Finally, we note that there may be undesirable biases in ScoNe that could interact with biases in the models. ScoNe is in part derived from SNLI, which is known to contain gaps, social biases, and artifacts [20, 19, 17, 16], and ScoNe may inherit some of these.
|
2304.01140 | MORe DWR: Space-time goal-oriented error control for incremental POD-based ROM | In this work, the dual-weighted residual (DWR) method is applied to obtain a certified incremental proper orthogonal decomposition (POD) based reduced order model. A novel approach called MORe DWR (Model Order Reduction with Dual-Weighted Residual error estimates) is being introduced. It marries tensor-product space-time reduced-order modeling with time slabbing and an incremental POD basis generation with goal-oriented error control based on dual-weighted residual estimates. The error in the goal functional is being estimated during the simulation and the POD basis is being updated if the estimate exceeds a given threshold. This allows an adaptive enrichment of the POD basis in case of unforeseen changes in the solution behavior which is of high interest in many real-world applications. Consequently, the offline phase can be skipped, the reduced-order model is being solved directly with the POD basis extracted from the solution on the first time slab and -- if necessary -- the POD basis is being enriched on-the-fly during the simulation with high-fidelity finite element solutions. Therefore, the full-order model solves can be reduced to a minimum, which is demonstrated on numerical tests for the heat equation and elastodynamics. | Hendrik Fischer, Julian Roth, Thomas Wick, Ludovic Chamoin, Amelie Fau | 2023-04-03T17:06:08Z | http://arxiv.org/abs/2304.01140v1 | # MORe DWR: Space-time goal-oriented error control for incremental POD-based ROM
###### Abstract
In this work, the dual-weighted residual (DWR) method is applied to obtain a certified incremental proper orthogonal decomposition (POD) based reduced order model. A novel approach called MORe DWR (Model Order Reduction with Dual-Weighted Residual error estimates) is being introduced. It marries tensor-product space-time reduced-order modeling with time slabbing and an incremental POD basis generation with goal-oriented error control based on dual-weighted residual estimates. The error in the goal functional is being estimated during the simulation and the POD basis is being updated if the estimate exceeds a given threshold. This allows an adaptive enrichment of the POD basis in case of unforeseen changes in the solution behavior, which is of high interest in many real-world applications. Consequently, the offline phase can be skipped, the reduced-order model is being solved directly with the POD basis extracted from the solution on the first time slab and -- if necessary -- the POD basis is being enriched on-the-fly during the simulation with high-fidelity finite element solutions. Therefore, the full-order model solves can be reduced to a minimum, which is demonstrated on numerical tests for the heat equation and elastodynamics.
## 1 Introduction
Model order reduction (MOR) by means of the proper orthogonal decomposition (POD) has been applied for cheap surrogate modeling to a plethora of partial differential equations (PDEs) [70, 46, 52, 6, 75, 13, 42, 12, 1, 76, 33]. Therein, the dynamics is projected onto a set of POD modes that constitute an approximate basis for the solution manifold to reduce the cost of running expensive high-fidelity simulations. This proper orthogonal decomposition based reduced-order modeling (POD-ROM) is a truth approximation because it yields a compressed representation of an a priori known solution trajectory. To avoid the necessity of these expensive high-fidelity simulations beforehand, we use error estimates to only locally perform high-fidelity calculations.
The dual-weighted residual method is used in this work to switch between ROM and high-fidelity computations. The space-time dual-weighted residual (DWR) method is an extension of the DWR
method for stationary problems introduced in [10; 11; 8], which is based on seminal prior work of Johnson and co-workers [30]. A recent overview on the usage with adaptive predictive multiscale modeling was published by Oden [55]. The space-time DWR method has been applied to parabolic PDEs by Schmich (Besier) and Vexler [68], Schmich (Besier) [67] and Besier and Rannacher [16] and in the authors' own works [73; 63]. Moreover, it has been applied to hyperbolic PDEs in the dissertation of Rademacher [59] and to the wave equation by Bangerth et al. [7]. Since the theory for the error estimation is formulated in spatio-temporal function spaces and requires space-time variational formulations, we employ a space-time finite element method (FEM) discretization; see for instance [48]. Space-time finite elements for the heat equation have been studied in [68; 66] and for the elastodynamics equation in [41; 7]. Similar space-time FEM implementations can be found in FEniCS in [51] and in NGSolve in [50; 58].
In recent years, space-time formulations have been applied to model order reduction [21; 43; 72; 25], including a windowed space-time approach for more efficiency [69]. Additional applications of space-time model order reduction include optimal control [81] and classical error estimates and hyper-reduction estimates using discrete empirical interpolation [14]. A lot of research on DWR error estimates for hyper-reduction with reduced quadrature rules has been done by Yano [79; 71]. Another reduced-order modeling approach employing goal-oriented error estimates has been proposed by Meyer and Matthies [53], where the estimates have been used to remove POD basis vectors that are not relevant for the accurate computation of the quantity of interest. Finally, related methods include the proper generalized decomposition (PGD) [20] and hierarchical model (HiMod) reduction [57; 9; 56], which uses estimates for the POD in the transverse direction of the dynamics.
In this work, we propose a different methodology for POD-ROM computations in which only a small portion of the solution trajectory is being computed with the expensive full-order-model (FOM) and the reduced-order-model (ROM) is being updated on-the-fly when the error estimates exceed a prescribed tolerance. This is being accomplished by combining POD-ROM with the incremental POD and space-time dual-weighted residual error estimates. We work out the algorithmic details resulting in a final newly proposed algorithm for incremental ROM. The incremental POD method relies on additive rank-b updates of the singular value decomposition [17; 18] and has successfully been applied to the incremental model order reduction of fluid flows [45]. As an overall framework, we employ a space-time setting. More concretely, we rely on the tensor-product space-time FEM implementation from [63] based on the FEM library deal.II [2; 3]. The final algorithm is implemented and demonstrated with various settings that include parabolic problems (heat equation) and second-order hyperbolic problems (elastodynamics). The main objective is to show the decrease in computational cost by keeping the accuracy of the numerical solutions. Moreover, the error estimator and the goal functional are compared in terms of effectivities.
The outline of this paper is as follows: In Section 2, we formulate the problem for the heat equation and elastodynamics and discretize them with tensor-product space-time finite elements. Next, in Section 3 we recapitulate POD-based reduced-order modeling and depict its extension to tensor-product space-time POD-ROM. Then, in Section 4 the theories for the space-time error estimates and the
incremental model order reduction are elucidated. In Section 5, numerical tests in 1+1D, 2+1D and 3+1D are being conducted for the heat equation and elastodynamics. Finally, our findings are summarized in Section 6.
## 2 Problem formulation and discretization
### Model problem formulation
Let \(\tilde{d}\in\mathbb{N}\) with \(\tilde{d}\) depending on whether the problem is vector- or scalar-valued, i.e. for the heat equation we have \(\tilde{d}=1\), whereas for elastodynamics in \(u\)-formulation (where \(u\) denotes the displacements) we have \(\tilde{d}=d\) and for the \((u,v)\)-formulation (where \(u\) is as before and \(v\) denotes the velocity), we have \(\tilde{d}=2d\), where \(d\in\{1,2,3\}\) is the spatial dimension. In the problem description, \(I:=(0,T)\) denotes the temporal domain and \(\Omega\subset\mathbb{R}^{d}\) a sufficiently smooth spatial domain. Here, the spatial boundary is split into a Dirichlet boundary \(\Gamma_{D}\subseteq\partial\Omega\) and a Neumann boundary \(\Gamma_{N}\subsetneq\partial\Omega\) with \(\Gamma_{D}\cap\Gamma_{N}=\emptyset\). We consider the abstract time-dependent problem: Find \(u:\bar{\Omega}\times\bar{I}\to\mathbb{R}^{\tilde{d}}\) such that
\[\begin{split}\partial_{t}u+\mathcal{A}(u)&=f\qquad \text{ in }\Omega\times I,\\ u&=u_{D}\qquad\text{ on }\Gamma_{D}\times I,\\ \mathcal{B}(u)&=g_{N}\qquad\text{ on }\Gamma_{N}\times I,\\ u&=u^{0}\qquad\text{ in }\Omega\times\{0\},\end{split} \tag{1}\]
with possibly nonlinear spatial operator \(\mathcal{A}\), boundary operator \(\mathcal{B}\) and sufficiently regular right-hand side \(f\). Choosing a suitable continuous spatial function space \(V:=V(\Omega)\), a continuous temporal functional space \(X:=X(I,\cdot)\) and time-dependent Sobolev space \(X(I,V(\Omega))\) mapping from \(I\) into \(V(\Omega)\), we can define the continuous spatio-temporal variational formulation as: Find \(u\in u_{D}+X(I,V(\Omega))\) such that
\[\begin{split} A(u)(\varphi)&:=(\!(\partial_{t}u, \varphi)\!)+(\!(\mathcal{A}(u),\varphi)\!)+(u(0),\varphi(0))\\ &=(\!(f,\varphi)\!)+\langle\!\langle g_{N}-\mathcal{B}(u), \varphi\rangle\!\rangle_{\Gamma_{N}}+(u^{0},\varphi(0))=:F(\varphi)\qquad \forall\varphi\in X(I,V(\Omega)),\end{split}\]
where we use the notation
\[\begin{split}(f,g)&:=(f,g)_{L^{2}(\Omega)}:=\int_{\Omega}f\cdot g\ \mathrm{d}x,\qquad(\!(f,g)\!):=(f,g)_{L^{2}(I,L^{2}(\Omega))}:=\int_{I}(f,g)\ \mathrm{d}t,\\ \langle f,g\rangle&:=\langle f,g\rangle_{L^{2}(\Gamma)}:=\int_{\Gamma}f\cdot g\ \mathrm{d}s,\qquad\langle\!\langle f,g\rangle\!\rangle:=\langle f,g\rangle_{L^{2}(I,L^{2}(\Gamma))}:=\int_{I}\langle f,g\rangle\ \mathrm{d}t.\end{split}\]
In this notation, \(f\cdot g\) represents the Euclidean inner product if \(f\) and \(g\) are scalar- or vector-valued and it stands for the Frobenius inner product if \(f\) and \(g\) are matrices. We notice that some partial differential equations (PDE) which fall into this framework are the heat equation and more generally parabolic problems. With a bit of abuse of notation, elastodynamics formulated as a first-order-in-time system can also be written in the above form, which we however make precise below for the sake of mathematical rigor.
#### 2.1.1 Heat equation
The strong formulation of the heat equation reads: Find the temperature \(u:\bar{\Omega}\times\bar{I}\to\mathbb{R}\) such that
\[\partial_{t}u-\Delta_{x}u=f\qquad\quad\text{in }\Omega\times I,\]
with \(\mathcal{A}(u):=-\Delta_{x}u\) in (1). The initial and boundary conditions are given by
\[u =u^{0}\qquad\text{on }\Omega\times\{0\},\] \[u =0\qquad\text{on }\partial\Omega\times I.\]
We thus arrive at the continuous variational formulation:
**Formulation 2.1** (Continuous variational formulation of the heat equation).:
_Find \(u\in X(I,V(\Omega)):=\{v\in L^{2}(I,H^{1}_{0}(\Omega))\mid\partial_{t}v\in L^ {2}(I,(H^{1}_{0}(\Omega))^{*})\}\) such that_
\[A(u)(\varphi):=(\!(\partial_{t}u,\varphi)\!)+(\!(\nabla_{x}u,\nabla_{x} \varphi)\!)+(u(0),\varphi(0))=(\!(f,\varphi)\!)+(u^{0},\varphi(0))=:F(\varphi )\qquad\forall\varphi\in X(I,V(\Omega)).\]
For this variational formulation, we use \(u_{0}\in L^{2}(\Omega)\) and \(f\in L^{2}(I,H^{1}_{0}(\Omega)^{*})\)[78]. Here \(H^{1}_{0}(\Omega)^{*}\) denotes the dual space of \(H^{1}_{0}(\Omega)\).
#### 2.1.2 Elastodynamics equation
The strong formulation of linear elastodynamics in three spatial dimensions reads: Find the displacement \(u:\bar{\Omega}\times\bar{I}\to\mathbb{R}^{d}\) such that
\[\partial_{tt}u-\nabla_{x}\cdot\sigma(u)=0\qquad\quad\text{in }\Omega\times I,\]
with
\[\sigma(u) =2\mu E(u)+\lambda\operatorname{tr}(E(u))\mathbbm{1}_{d\times d}, \text{(stress tensor)}\] \[E(u) =\frac{1}{2}(\nabla_{x}u+(\nabla_{x}u)^{T}), \text{(linearized strain tensor)}\]
where \(\mathbbm{1}_{d\times d}\in\mathbb{R}^{d\times d}\) is the identity matrix and the Lame parameters are \(\mu>0\) and \(\lambda>-\frac{2}{3}\mu\). The initial conditions are given by
\[u =u^{0}\qquad\text{on }\Omega\times\{0\},\] \[\partial_{t}u =v^{0}\qquad\text{on }\Omega\times\{0\}.\]
As boundary conditions, we prescribe
\[u =0\qquad\text{on }\Gamma_{D}\times I,\] \[\mathcal{B}(u)=\sigma(u)\cdot n =g_{N}\qquad\text{on }\Gamma_{N}\times I.\]
We convert this into a first-order system in time and solve for displacement \(u:\bar{\Omega}\times\bar{I}\to\mathbb{R}^{d}\) and velocity \(v:\bar{\Omega}\times\bar{I}\to\mathbb{R}^{d}\) such that
\[\partial_{t}v-\nabla_{x}\cdot\sigma(u) =f\qquad\quad\text{in }\Omega\times I,\] \[\partial_{t}u-v =0\qquad\quad\text{in }\Omega\times I,\]
with \(\mathcal{A}(u,v):=-\nabla_{x}\cdot\sigma(u)-v\) in (1). We still have the same initial and boundary conditions with the only difference that we now have
\[v =v^{0}\qquad\text{ on }\Omega\times\{0\},\] \[v =0\qquad\text{ on }\Gamma_{D}\times I.\]
For the variational formulation, we use \(u_{0}\in H^{1}_{\Gamma_{D},0}(\Omega)^{d}\), which is the space of weakly differentiable functions that vanish on \(\Gamma_{D}\), \(v_{0}\in L^{2}(\Omega)^{d},g_{N}\in L^{2}(I,L^{2}(\Gamma_{N})^{d})\) and the function spaces
\[X(I,V^{u}(\Omega)) :=\{v\in L^{2}(I,H^{1}_{\Gamma_{D},0}(\Omega)^{d})\mid\partial_{t }v\in L^{2}(I,L^{2}(\Omega)^{d}),\partial_{t}^{2}v\in L^{2}\left(I,(H^{1}_{ \Gamma_{D},0}(\Omega)^{d})^{*}\right)\},\] \[X(I,V^{v}(\Omega)) :=\{v\in L^{2}(I,L^{2}(\Omega)^{d})\mid\partial_{t}v\in L^{2} \left(I,(H^{1}_{\Gamma_{D},0}(\Omega)^{d})^{*}\right)\},\] \[X(I,V(\Omega)) :=X(I,V^{u}(\Omega))\times X(I,V^{v}(\Omega)).\]
We thus solve the continuous variational formulation:
**Formulation 2.2** (Continuous variational formulation of the elastodynamics equation).:
_Find \(U=(u,v)\in X(I,V(\Omega))\) such that_
\[A(U)(\Phi)=F(\Phi)\qquad\forall\Phi=(\varphi^{u},\varphi^{v})\in X(I,V( \Omega)),\]
_where_
\[A(U)(\Phi) :=(\!(\partial_{t}v,\varphi^{u})\!)+(\!(\sigma(u),\nabla_{x} \varphi^{u})\!)+(v(0),\varphi^{u}(0))+(\!(\partial_{t}u,\varphi^{v})\!)-(\!(v,\varphi^{v})\!)+(u(0),\varphi^{v}(0)),\] \[F(\Phi) :=(v^{0},\varphi^{u}(0))+\langle\!\langle g_{N},\varphi^{u} \rangle\!\rangle_{\Gamma_{N}}+(u^{0},\varphi^{v}(0)).\]
### Tensor-product space-time FEM discretization
We follow our recent work on space-time adaptivity for the Navier-Stokes equations [63] and use tensor-product space-time finite elements (FEM) with discontinuous finite elements in time (dG) and continuous finite elements in space (cG). Using the tensor-product of the temporal and spatial basis functions is a special case of the broad class of space-time finite element methods [48]. We will now explain tensor-product space-time FEM using the example of the heat equation, where the function spaces can be found in [68] and the slabwise tensor-product space-time implementation is being outlined in [73]. We assume that the spatial mesh remains fixed, which simplifies the analysis and the implementation. Furthermore, we outline the extension of this methodology to elastodynamics.
#### 2.2.1 Discretization in time
Let \(\mathcal{T}_{k}:=\{I_{m}:=(t_{m-1},t_{m})\mid 1\leq m\leq M\}\) be a partitioning of time, i.e. \(\bar{I}=[0,T]=\bigcup_{m=1}^{M}\bar{I}_{m}\). We now introduce broken continuous level function spaces
\[\tilde{X}(\mathcal{T}_{k},V(\Omega)):=\{v\in L^{2}(I,L^{2}(\Omega))\mid v \big{|}_{I_{m}}\in X(I_{m},V(\Omega))\quad\forall I_{m}\in\mathcal{T}_{k}\}\]
for the heat equation and
\[\tilde{X}(\mathcal{T}_{k},V^{u}(\Omega)) :=\{v\in L^{2}(I,L^{2}(\Omega)^{3})\mid v\big{|}_{I_{m}}\in X(I_{m}, V^{u}(\Omega))\quad\forall I_{m}\in\mathcal{T}_{k}\},\] \[\tilde{X}(\mathcal{T}_{k},V^{v}(\Omega)) :=\{v\in L^{2}(I,L^{2}(\Omega)^{3})\mid v\big{|}_{I_{m}}\in X(I_{ m},V^{v}(\Omega))\quad\forall I_{m}\in\mathcal{T}_{k}\},\] \[\tilde{X}(\mathcal{T}_{k},V(\Omega)) :=\tilde{X}(\mathcal{T}_{k},V^{u}(\Omega))\times\tilde{X}( \mathcal{T}_{k},V^{v}(\Omega))\]
for the elastodynamics equation. These broken function spaces [24] are required, since we want to perform a conforming discontinuous Galerkin discretization in time and thus need to allow for discontinuities between time intervals/temporal elements. Due to these discontinuities, we define the limits of \(f\) at time \(t_{m}\) from above and from below for a function \(f\) as
\[f_{m}^{\pm}:=\lim_{\epsilon\searrow 0}f(t_{m}\pm\epsilon),\]
and the jump of the function value of \(f\) at time \(t_{m}\) as
\[[f]_{m}:=f_{m}^{+}-f_{m}^{-}.\]
The function spaces enable us to include discontinuities in the variational formulations:
**Formulation 2.3** (Time-discontinuous variational formulation of the heat equation).:
_Find \(u\in\tilde{X}(\mathcal{T}_{k},V(\Omega))\) such that_
\[\tilde{A}(u)(\varphi)=\tilde{F}(\varphi)\qquad\forall\varphi\in\tilde{X}( \mathcal{T}_{k},V(\Omega)),\]
_where_
\[\tilde{A}(u)(\varphi) :=\sum_{m=1}^{M}\int_{I_{m}}(\partial_{t}u,\varphi)+(\nabla_{x}u, \nabla_{x}\varphi)\ \mathrm{d}t+\sum_{m=1}^{M-1}([u]_{m},\varphi_{m}^{+})+(u_{0}^{+}, \varphi_{0}^{+}),\] \[\tilde{F}(\varphi) :=(\!(f,\varphi)\!)+(u^{0},\varphi_{0}^{+}).\]
**Formulation 2.4** (Time-discontinuous variational formulation of the elastodynamics equation).:
_Find \(U=(u,v)\in\tilde{X}(\mathcal{T}_{k},V(\Omega))\) such that_
\[\tilde{A}(U)(\Phi)=\tilde{F}(\Phi)\qquad\forall\Phi=(\varphi^{u},\varphi^{v}) \in\tilde{X}(\mathcal{T}_{k},V(\Omega)),\]
_where_
\[\tilde{A}(U)(\Phi) :=\sum_{m=1}^{M}\int_{I_{m}}(\partial_{t}v,\varphi^{u})+(\sigma( u),\nabla_{x}\varphi^{u})+(\partial_{t}u,\varphi^{v})-(v,\varphi^{v})\ \mathrm{d}t\] \[\qquad+\sum_{m=1}^{M-1}([v]_{m},\varphi_{m}^{u,+})+([u]_{m}, \varphi_{m}^{v,+})+(v_{0}^{+},\varphi_{0}^{u,+})+(u_{0}^{+},\varphi_{0}^{v,+}),\] \[\tilde{F}(\Phi) :=(v^{0},\varphi_{0}^{u,+})+\langle\!\langle g_{N},\varphi^{u} \rangle\!\rangle_{\Gamma_{N}}+(u^{0},\varphi_{0}^{v,+}).\]
We have the inclusions \(X(I,\cdot)\subset\tilde{X}(\mathcal{T}_{k},\cdot)\), since for continuous functions the jump terms vanish, and thus the variational Formulation 2.3 and Formulation 2.4 are consistent.
Next, we define the semi-discrete space for the heat equation as
\[X_{k}^{\mathrm{dG}(r)}(\mathcal{T}_{k},V(\Omega)) :=\Big{\{}v_{k}\in L^{2}(I,L^{2}(\Omega))\,\Big{|}\,v_{k}\big{|}_{I _{m}}\in P_{r}(I_{m},H^{1}_{0}(\Omega))\Big{\}}\subset\tilde{X}(\mathcal{T}_{k},V(\Omega))\]
and for the elastodynamics equation as
\[X_{k}^{\mathrm{dG}(r)}(\mathcal{T}_{k},V^{u}(\Omega)) :=\Big{\{}v_{k}\in L^{2}(I,L^{2}(\Omega)^{3})\,\Big{|}\,v_{k} \big{|}_{I_{m}}\in P_{r}(I_{m},H^{1}_{\Gamma D,0}(\Omega)^{3})\Big{\}}\subset \tilde{X}(\mathcal{T}_{k},V^{u}(\Omega)),\] \[X_{k}^{\mathrm{dG}(r)}(\mathcal{T}_{k},V^{v}(\Omega)) :=X_{k}^{\mathrm{dG}(r)}(\mathcal{T}_{k},V^{u}(\Omega)),\] \[X_{k}^{\mathrm{dG}(r)}(\mathcal{T}_{k},V(\Omega)) :=X_{k}^{\mathrm{dG}(r)}(\mathcal{T}_{k},V^{u}(\Omega))\times X_{ k}^{\mathrm{dG}(r)}(\mathcal{T}_{k},V^{v}(\Omega)),\]
where the space-time function spaces \(\tilde{X}(\mathcal{T}_{k},\cdot)\) have been discretized in time with the discontinuous Galerkin method of order \(r\in\mathbb{N}_{0}\) (\(\mathrm{dG}(r)\)). Typical choices in our work for the temporal degree are \(r=1\) and \(r=2\). Here, \(P_{r}(I_{m},Y)\) is the space of polynomials of order \(r\), which map from the time interval \(I_{m}\) into the space \(Y\). The \(\mathrm{dG}(r)\) time discretization for the case \(r=1\) is illustrated in Figure 1.
The locations of the temporal degrees of freedom (DoFs) are defined by quadrature rules. Due to the discontinuity of the temporal discretization, various quadrature rules can be chosen, the most common being Gauss-Lobatto, Gauss-Legendre and Gauss-Radau. In Figure 1 the locations of the temporal degrees of freedom are chosen at the ends of the time intervals, which corresponds to Gauss-Lobatto quadrature. In Section 5, we use Gauss-Legendre and Gauss-Lobatto quadrature in time to demonstrate the versatility of our method concerning the choice of the temporal quadrature formula.
It has been derived in [23] (see also the classical textbooks [60, 29]) that the \(\mathrm{dG}(0)\) time-discretization is a variant of the backward Euler scheme. Higher-order schemes are derived as well and it was established that \(dG(r_{p})\) discretizations, where \(r_{p}\in\mathbb{N}_{0}\) is the polynomial degree, are generically implicit and \(A\)-stable.
#### 2.2.2 Discretization in space
For the spatial discretization of the variational formulation, we use a fixed mesh \(\mathcal{T}_{h}\), which consists of intervals in one dimension and of quadrilateral (2D) or hexahedral (3D) elements in higher dimensions. We can then use element-wise polynomial functions of up to order \(s\in\mathbb{N}\) as our spatial function space, i.e.,
\[V_{h}^{s}:=V_{h}^{s}(\mathcal{T}_{h}):=\Big{\{}v\in C(\bar{ \Omega})\Big{|}v\big{|}_{K}\in\mathcal{Q}_{s}(K)\quad\forall K\in\mathcal{T}_ {h}\Big{\}}\]
Figure 1: \(\mathrm{dG}(1)\) time discretization
and for the elastodynamics equation
\[V_{h}^{s,u}:=V_{h}^{s,u}(\mathcal{T}_{h}):=\Big{\{}v\in C(\bar{ \Omega})^{d}\Big{|}v\big{|}_{K}\in(\mathcal{Q}_{s}(K))^{d}\quad\forall K\in \mathcal{T}_{h}\Big{\}}=:V_{h}^{s,v}(\mathcal{T}_{h})=:V_{h}^{s,v},\]
where \(\mathcal{Q}_{s}(K)\) is being constructed by mapping tensor-product polynomials of degree \(s\) from the master element \((0,1)^{d}\) to the element \(K\). The fully discrete function space for the heat equation is then given by
\[X_{k}^{\mathrm{dG}(r)}(\mathcal{T}_{k},V_{h}^{s}):=\Big{\{}v_{ kh}\in L^{2}(I,L^{2}(\Omega))\,\Big{|}\,v_{kh}\big{|}_{I_{m}}\in P_{r}(I_{m},V_{h} ^{s})\quad\forall I_{m}\in\mathcal{T}_{k}\Big{\}}\]
and for the elastodynamics equation
\[X_{k}^{\mathrm{dG}(r)}(\mathcal{T}_{k},V_{h}^{s}) :=\Big{\{}v_{kh}\in L^{2}(I,L^{2}(\Omega)^{2d})\,\Big{|}\,v_{kh} \big{|}_{I_{m}}\in P_{r}(I_{m},V_{h}^{s})\quad\forall I_{m}\in\mathcal{T}_{k} \Big{\}}\,,\] \[V_{h}^{s} :=V_{h}^{s,u}\times V_{h}^{s,v}.\]
Thus, the fully discrete variational formulation reads for the heat equation:
Find \(u_{kh}\in X_{k}^{\mathrm{dG}(r)}(\mathcal{T}_{k},V_{h}^{s})\) such that
\[\tilde{A}(u_{kh})(\varphi_{kh})=\tilde{F}(\varphi_{kh})\quad \forall\varphi_{kh}\in X_{k}^{\mathrm{dG}(r)}(\mathcal{T}_{k},V_{h}^{s}).\]
Moreover, the fully discrete variational formulation for the elastodynamics equation reads:
Find \(U_{kh}:=(u_{kh},v_{kh})\in X_{k}^{\mathrm{dG}(r)}(\mathcal{T}_{k},V_{h}^{s})\) such that
\[\tilde{A}(U_{kh})(\Phi_{kh})=\tilde{F}(\Phi_{kh})\quad\forall \Phi_{kh}=(\varphi_{kh}^{u},\varphi_{kh}^{v})\in X_{k}^{\mathrm{dG}(r)}( \mathcal{T}_{k},V_{h}^{s}).\]
#### 2.2.3 Slabwise discretization
Finally, we want to remark that the fully discrete variational formulations do not need to be solved on the entire space-time cylinder \(\Omega\times I\), but can also be solved sequentially on space-time slabs
\[S_{l}^{n}:=\Omega\times\left(\bigcup_{m=l}^{n}I_{m}\right),\]
where \(1\leq l\leq n\leq M\), see also [73][Remark 2.1]. As mentioned previously, we can then get the space-time FEM basis on \(S_{l}^{n}\) by taking the tensor-product of the spatial and the temporal finite element basis functions. This simplifies the finite element discretization of the abstract time-dependent problem (1), since the main prerequisite is a FEM code for the stationary problem \(\mathcal{A}(u)=f\) in \(\Omega\). Furthermore, tensor-product space-time FEM allows for larger flexibility in the choice of temporal discretization, since changing the temporal degree of the space-time discretization can be performed simply by changing the polynomial degree of the temporal finite elements. Due to the tensor-product structure of the space-time FE basis, it is straightforward how proper orthogonal decomposition (POD) based reduced-order modeling can be performed, since on an abstract level only the spatial finite element basis needs to be replaced by the spatial POD basis.
For the heat equation on the space-time slab \(S_{l}^{n}\) with \(n-l+1\) time intervals, we arrive at the linear equation system
\[\begin{pmatrix}A&&&\mathbf{0}\\ B&A&&&\\ &B&A&&\\ &&\ddots&\ddots\\ \mathbf{0}&&&B&A\end{pmatrix}\begin{pmatrix}U_{l}\\ U_{l+1}\\ U_{l+2}\\ \vdots\\ U_{n}\end{pmatrix}=\begin{pmatrix}F_{l}-BU_{l-1}\\ F_{l+1}\\ F_{l+2}\\ \vdots\\ F_{n}\end{pmatrix} \tag{2}\]
or in brevity
\[A_{S_{l}^{n}}U_{S_{l}^{n}}=F_{S_{l}^{n}} \tag{3}\]
with
\[A =C_{k}\otimes M_{h}+M_{k}\otimes K_{h},\] \[B =-D_{k}\otimes M_{h},\]
where we use the spatial matrices
\[M_{h} =\left\{(\varphi_{h}^{(j)},\varphi_{h}^{(i)})\right\}_{i,j=1}^{ \#\mathrm{DoFs}(\mathcal{T}_{h})},\] \[K_{h} =\left\{(\nabla_{x}\varphi_{h}^{(j)},\nabla_{x}\varphi_{h}^{(i)} )\right\}_{i,j=1}^{\#\mathrm{DoFs}(\mathcal{T}_{h})}\]
and the temporal matrices
\[M_{k} =\left\{\int_{I_{m}}\varphi_{k}^{(j)}\cdot\varphi_{k}^{(i)}\ \mathrm{d}t\right\}_{i,j=1}^{\#\mathrm{DoFs}(I_{m})},\] \[C_{k} =\left\{\int_{I_{m}}\partial_{t}\varphi_{k}^{(j)}\cdot\varphi_{k}^{(i)}\ \mathrm{d}t+\varphi_{k,m-1}^{(j),+}\cdot\varphi_{k,m-1}^{(i),+}\right\}_{i,j=1}^{\#\mathrm{DoFs}(I_{m})},\] \[D_{k} =\left\{\varphi_{k,m-1}^{(j),-}\cdot\varphi_{k,m-1}^{(i),+}\right\}_{i,j=1}^{\#\mathrm{DoFs}(I_{m})}.\]
Note that \(U_{l},\ldots,U_{n}\) are space-time vectors themselves, where \(U_{m}\in\mathbb{R}^{\#\mathrm{DoFs}(I_{m})\cdot\#\mathrm{DoFs}(\mathcal{T}_{h })}\) with \(m=l,\ldots,n\) is the coefficient vector of the solution \(u_{kh}\) on the time interval \(I_{m}\), i.e., for the \(\mathrm{dG}(r)\) method in time with temporal quadrature points \(t_{1},\ldots,t_{r+1}\) we have
\[U_{m}=\begin{pmatrix}U_{m}(t_{1})\\ \vdots\\ U_{m}(t_{r+1})\end{pmatrix},\quad m=1,\ldots,M,\]
where \(M\) is the total number of time intervals. In particular, if we use space-time slabs that contain only one temporal element, then we only need to solve the linear system
\[AU_{m}=F_{m}-BU_{m-1}\]
for each time slab \(S_{m}:=S_{m}^{m}=\mathcal{T}_{h}\times I_{m}\). For efficiency reasons, in the remainder of this paper, we only consider such slabs of size one.
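As a concrete illustration of this slabwise tensor-product structure, the following NumPy sketch (ours, not the deal.II implementation used in the paper) assembles \(A\) and \(B\) for the heat equation with 1D P1 elements in space and dG(1) in time with temporal nodes at the interval endpoints, and marches slab by slab. The dG(1) temporal matrices follow from a short computation with the definitions above; the mesh, step size and initial condition are illustrative choices.

```python
import numpy as np

# Sketch: slab-by-slab solution of A U_m = F_m - B U_{m-1} for the heat equation, cf. (2)-(3).
n, k, T = 50, 0.01, 1.0                    # spatial DoFs, time-step size, end time (illustrative)
h = 1.0 / (n + 1)
M_h = h / 6.0 * (np.diag(4.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
K_h = 1.0 / h * (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))

# dG(1) temporal matrices on one interval of length k (temporal nodes at the interval endpoints):
M_k = k / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])     # temporal mass matrix
C_k = np.array([[0.5, 0.5], [-0.5, 0.5]])              # time derivative plus jump term at t_{m-1}^+
D_k = np.array([[0.0, 1.0], [0.0, 0.0]])               # coupling to the previous slab

A = np.kron(C_k, M_h) + np.kron(M_k, K_h)
B = -np.kron(D_k, M_h)

x = np.linspace(h, 1.0 - h, n)
u_prev = np.kron(np.array([0.0, 1.0]), np.sin(np.pi * x))   # only the value at t_{m-1}^- enters via D_k
f = np.zeros(2 * n)                                         # homogeneous right-hand side
for m in range(round(T / k)):
    U_m = np.linalg.solve(A, f - B @ u_prev)                # one slab of size one
    u_prev = U_m                                            # last temporal block of U_m is u(t_m^-)
print("max |u(T)| =", np.abs(U_m[n:]).max())
```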
For the elastodynamics equation, the space-time FEM linear system can be derived similarly. The linear system and time-stepping formulations for dG(1) and dG(2) with Gauss-Lobatto quadrature in time can be found in A.
**Remark 2.5**.: _Although the linear systems for the heat equation in this section and for the elastodynamics equation in A have been presented as the tensor product of spatial matrices, tensor-product space-time FEM can be applied to a much larger class of problems. For instance, it is not always possible to decompose a space-time linear system into this tensor-product structure when the PDE contains coefficients that depend on space and time. Nevertheless, our implementation of tensor-product space-time FEM is general enough to also deal with these kinds of problems, since it does not rely on a tensor-product of the linear system but only on the tensor-product structure of the finite element basis._
## 3 Reduced-order modeling
### Pod-Rom
The increase in computational power in the last decades has made it possible to exploit high-performance computing for large-scale numerical simulations. Nevertheless, in some scenarios, e.g. for multiphysics problems, high-performance computing can be computationally expensive, in particular also having a large carbon footprint and enormous energy consumption. These circumstances motivate the application of model order reduction (MOR) techniques on the premise of a large computational speedup to satisfy these demands. In this work, we mainly deal with projection-based reduced basis methods (RBM) [38, 40, 15, 49, 37, 65, 54] since this methodology aims at efficient treatments by providing both an approximate solution procedure and efficient error estimates [38]. Here, the critical observation is that instead of using projection spaces with general approximation properties (e.g. finite element method) problem-specific approximation spaces are chosen and then can be used for the discretization of the original problem [64]. Based on these spaces and the assumption that the solution evolves smoothly in a low-dimensional solution manifold (equivalent to a small Kolmogorov N-width [44, 13, 40]), a reduced-order model (ROM) can be constructed that represents with sufficient accuracy the physical problem of interest using a significantly smaller number of degrees of freedom [64].
In order to construct the reduced spaces, the solution manifold is empirically explored by means of solutions of the full-order model as developed in Section 2.2. Then, a proper orthogonal decomposition (POD) is conducted on these snapshots of the high-fidelity solution to obtain the reduced basis functions [49, 13, 47, 61, 70, 77, 22, 36, 19, 5]. The following Theorem 3.1 states that the POD basis is optimal in a least-squares sense. The proof is provided by Gubisch and Volkwein in [35].
**Theorem 3.1** (POD basis).: _Let \(Y=[Y_{1},\ldots,Y_{q}]:=[U_{1}(t_{1}),\ldots,U_{1}(t_{r+1}),U_{2}(t_{1}),\ldots,U_{M}(t_{r+1})]\in\mathbb{R}^{n\times q}\) with \(q=M\cdot\#\text{DoFs}(I_{m})\), \(n=\#\text{DoFs}(\mathcal{T}_{h})\) and rank \(d\leq\min(n,q)\) be the snapshot matrix with a (spatial) column vector for each temporal degree of freedom. Moreover, let \(Y=\Psi\Sigma\Phi^{T}\) be its singular value decomposition with \(\Sigma=\text{diag}(\sigma_{1},\ldots,\sigma_{d})\in\mathbb{R}^{d\times d}\) and orthogonal matrices \(\Psi=[\psi_{1},\ldots,\psi_{d}]\in\mathbb{R}^{n\times d}\), \(\Phi=[\phi_{1},\ldots,\phi_{d}]\in\mathbb{R}^{q\times d}\). Then for \(1\leq N\leq d\) the optimization problem_
\[\min_{\tilde{\psi}_{1},\ldots,\tilde{\psi}_{N}\in\mathbb{R}^{n}}\sum_{j=1}^{q}\left\|Y_{j}-\sum_{i=1}^{N}\left(Y_{j},\tilde{\psi}_{i}\right)_{\mathbb{R}^{n}}\tilde{\psi}_{i}\right\|_{\mathbb{R}^{n}}^{2}\quad\text{s.t.}\quad(\tilde{\psi}_{i},\tilde{\psi}_{j})_{\mathbb{R}^{n}}=\delta_{ij}\ \forall 1\leq i,j\leq N\qquad(\mathbb{P}^{N})\]
_with \(\{\tilde{\psi}_{i}\}_{i=1}^{N}\subset\mathbb{R}^{n}\) is solved by the left-singular vectors \(\{\psi_{i}\}_{i=1}^{N}\subset\mathbb{R}^{n}\), and it holds that_
\[\sum_{j=1}^{q}\left\|Y_{j}-\sum_{i=1}^{N}\left(Y_{j},\psi_{i} \right)_{\mathbb{R}^{n}}\psi_{i}\right\|_{\mathbb{R}^{n}}^{2}=\sum_{i=N+1}^{ d}\sigma_{i}^{2}=\sum_{i=N+1}^{d}\lambda_{i}. \tag{4}\]
Thus, the decay rate of the singular values plays an essential role in the feasibility of the POD approach. If the sum of the squared truncated singular values is sufficiently small for a relatively small \(N\), we can utilize a linear combination of a few basis functions \(\psi_{i}\) for a good approximation of elements \(Y_{j}\) living in the high-dimensional FE space. Although the error of an obtained rank-\(N\) approximation can be determined by Equation (4), this does not yield an intuitive measure for rank determination. Thus, a widely used criterion to determine the quality of the POD basis heuristically refers to its retained energy or information content \(\varepsilon(N)\), cf. [34, 35, 49]. The latter is defined by
\[\varepsilon(N)=\frac{\sum_{i=1}^{N}\sigma_{i}^{2}}{\sum_{i=1}^{d} \sigma_{i}^{2}}=\frac{\sum_{i=1}^{N}\sigma_{i}^{2}}{\sum_{i=1}^{q}||U_{i}||^{2}}. \tag{5}\]
Next, the construction of the POD basis is presented. In Algorithm 1, we introduce different approaches depending on the row-to-column ratio of the snapshot matrix. For this, we partly rely on the work of Gräßle et al. in [13][Chap. 2].
```
Input: Snapshots \(\{Y_{j}\}_{j=1}^{q}\subset\mathbb{R}^{n}\) and energy threshold \(\varepsilon\in[0,1]\).
Output: POD basis \(\{\boldsymbol{\psi}_{i}\}_{i=1}^{N}\subset\mathbb{R}^{n}\) and eigenvalues \(\{\lambda_{i}\}_{i=1}^{N}\).
1: Set \(Y=[Y_{1},\ldots,Y_{q}]\in\mathbb{R}^{n\times q}\).
2: if \(n\approx q\) then
3: Compute singular value decomposition \([\Psi,\Sigma,\Phi]=\text{SVD}(Y)\).
4: Compute \(N=\min\left\{N\in\mathbb{N}\ |\ \varepsilon(N)\geq\varepsilon,\ 1\leq N\leq d\right\}\).
5: Set \(\lambda_{i}=\Sigma_{ii}^{2}\) and \(\boldsymbol{\psi}_{i}=\Psi_{\cdot,i}\in\mathbb{R}^{n}\) for \(1\leq i\leq N\).
6: else if \(n\ll q\) then
7: Compute eigenvalue decomposition \([\Psi,\Lambda]=\text{Eig}(YY^{T})\), where \(YY^{T}\in\mathbb{R}^{n\times n}\).
8: Compute \(N=\min\left\{N\in\mathbb{N}\ |\ \varepsilon(N)\geq\varepsilon,\ 1\leq N\leq d\right\}\).
9: Set \(\lambda_{i}=\Lambda_{ii}\) and \(\boldsymbol{\psi}_{i}=\Psi_{\cdot,i}\in\mathbb{R}^{n}\) for \(1\leq i\leq N\).
10: else if \(q\ll n\) then
11: Compute eigenvalue decomposition \([\Phi,\Lambda]=\text{Eig}(Y^{T}Y)\), where \(Y^{T}Y\in\mathbb{R}^{q\times q}\).
12: Compute \(N=\min\left\{N\in\mathbb{N}\ |\ \varepsilon(N)\geq\varepsilon,\ 1\leq N\leq d\right\}\).
13: Set \(\lambda_{i}=\Lambda_{ii}\) and \(\boldsymbol{\psi}_{i}=Y\Phi_{\cdot,i}/\sqrt{\lambda_{i}}\in\mathbb{R}^{n}\) for \(1\leq i\leq N\).
```
**Algorithm 1** POD basis generation in \(\mathbb{R}^{n}\)
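A compact NumPy sketch of Algorithm 1, simplified to the plain SVD branch and the \(q\ll n\) eigenvalue branch, might read as follows (the retained eigenvalues are assumed to be positive).

```python
import numpy as np

def pod_basis(Y, eps):
    """Sketch of Algorithm 1: POD basis of the snapshot matrix Y (columns are snapshots)
    retaining at least the energy fraction `eps`, cf. (5)."""
    n, q = Y.shape
    if q < n:  # few snapshots: eigendecomposition of the small q x q correlation matrix
        lam, Phi = np.linalg.eigh(Y.T @ Y)
        lam, Phi = np.clip(lam[::-1], 0.0, None), Phi[:, ::-1]   # sort descending
        N = int(np.searchsorted(np.cumsum(lam) / lam.sum(), eps) + 1)
        Psi = Y @ Phi[:, :N] / np.sqrt(lam[:N])
    else:      # otherwise work with the (thin) SVD of Y directly
        Psi_full, sig, _ = np.linalg.svd(Y, full_matrices=False)
        lam = sig**2
        N = int(np.searchsorted(np.cumsum(lam) / lam.sum(), eps) + 1)
        Psi = Psi_full[:, :N]
    return Psi, lam[:N]

# usage: Z_N, eigvals = pod_basis(snapshot_matrix, eps=0.9999)
```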
### Tensor-product space-time POD-ROM
In order to reduce the space-time full-order system (2) of Section 2.2 the general spatial FEM space \(V_{h}\) is replaced by a problem-specific low-dimensional space \(V_{N}=\text{span}\{\varphi_{N}^{1},\ldots,\varphi_{N}^{N}\}\) obtained by means of POD. This yields the reduced variational formulation: Find \(u_{N}\in\tilde{X}(\mathcal{T}_{k},V_{N})\) such that
\[\tilde{A}(u_{N})(\varphi)=\tilde{F}(\varphi)\qquad\forall\varphi \in\tilde{X}(\mathcal{T}_{k},V_{N}).\]
The reduced basis matrix can be formed by the concatenation of the reduced basis vectors, viz.
\[Z_{N}=\begin{bmatrix}\varphi_{N}^{1}&\ldots&\varphi_{N}^{N} \end{bmatrix}\in\mathbb{R}^{\#\text{DoFs}(\mathcal{T}_{k})\times N}. \tag{6}\]
Subsequently, the slabwise discretization for the space-time slab \(S_{l}^{n}\) with \(n-l+1\) time intervals is obtained in analogy to the full-order model of Section 2.2.3. In the case of the heat equation, we utilize the linear equation system described in (2) and reduce the given matrices in an affine manner. Thus, we arrive at
\[\begin{pmatrix}A_{N}&&&&\mathbf{0}\\ B_{N}&A_{N}&&\\ &B_{N}&A_{N}&&\\ &&\ddots&\ddots&\\ \mathbf{0}&&&B_{N}&A_{N}\end{pmatrix}\begin{pmatrix}U_{N_{l}}\\ U_{N_{l+1}}\\ U_{N_{l+2}}\\ \vdots\\ U_{N_{n}}\end{pmatrix}=\begin{pmatrix}F_{N_{l}}-B_{N}U_{N_{l-1}}\\ F_{N_{l+1}}\\ F_{N_{l+2}}\\ \vdots\\ F_{N_{n}}\end{pmatrix} \tag{7}\]
or in brevity
\[A_{N}U_{N,S_{l}^{n}}=F_{N,S_{l}^{n}} \tag{8}\]
with the reduced components
\[A_{N} =Z_{N}^{T}AZ_{N}, \tag{9a}\] \[B_{N} =Z_{N}^{T}BZ_{N},\] (9b) \[F_{N_{i}} =Z_{N}^{T}F_{i},\quad l\leq i\leq n. \tag{9c}\]
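In code, the reduction (9) and the reduced slab solve (7)-(8) can be sketched as below; we assume that the spatial POD basis \(Z\) is applied blockwise to each of the \(r+1\) temporal degrees of freedom of a slab (a sketch, not the paper's implementation).

```python
import numpy as np

def reduce_slab_system(A, B, Z, n_time_dofs):
    """Sketch of (9): Galerkin projection of the slab matrices onto the spatial POD basis Z (n x N),
    extended to the space-time slab by applying it to every temporal degree of freedom."""
    Z_slab = np.kron(np.eye(n_time_dofs), Z)
    return Z_slab.T @ A @ Z_slab, Z_slab.T @ B @ Z_slab, Z_slab

def solve_reduced_slab(A_N, B_N, Z_slab, F_m, U_N_prev):
    """One reduced slab solve, cf. (7)-(8); the full-order field is recovered as Z_slab @ U_N."""
    F_N = Z_slab.T @ F_m
    return np.linalg.solve(A_N, F_N - B_N @ U_N_prev)
```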
## 4 A posteriori error-estimator certified reduced-order modeling
For further analysis, we consider homogeneous Dirichlet boundary conditions to simplify the presentation, i.e. \(u_{D}=0\). Let a goal functional \(J:\tilde{X}(\mathcal{T}_{k},V(\Omega))\rightarrow\mathbb{R}\) of the form
\[J(u)=\int_{0}^{T}J_{1}(u(t))\ \mathrm{d}t+J_{2}(u(T)), \tag{10}\]
be given, which represents some physical quantity of interest (QoI). Here, \(T\) denotes the end time as before. Now, we want to reduce the difference between the quantity of interest of a fine solution \(u^{\text{fine}}\) and a coarse solution \(u^{\text{coarse}}\), i.e.,
\[J(u^{\text{fine}})-J(u^{\text{coarse}}) \tag{11}\]
subject to the constraint that the variational formulation of the time-dependent problem (1) is being satisfied. Possible choices for the fine and the coarse solution could be \(u^{\text{fine}}:=u\in X(I,V(\Omega)),u^{\text{coarse}}:=u_{k}\in X^{\text{dG}(r )}_{k}(\mathcal{T}_{k},V(\Omega))\) to control the error caused by the temporal discretization or \(u^{\text{fine}}:=u_{k}\in X^{\text{dG}(r)}_{k}(\mathcal{T}_{k},V(\Omega))\), \(u^{\text{coarse}}:=u_{kh}\in X^{\text{dG}(r)}_{k}(\mathcal{T}_{k},V_{h})\), with \(V_{h}:=V_{h}^{s}\) for the heat equation and \(V_{h}:=V_{h}^{s}=V_{h}^{s,u}\times V_{h}^{s,v}\) for the elastodynamics equation, to control the error caused by the spatial discretization. For more information on space-time error control, we refer the interested reader to [67, 73, 63] and for general information on spatial error control to [10, 11, 8, 26]. As an extension, in this work we restrict ourselves to the control of the error introduced by reduced-order modeling and thus we consider the full-order-model (FOM) solution \(u^{\text{fine}}:=u^{\text{FOM}}_{kh}\in X^{\text{dG}(r)}_{k}(\mathcal{T}_{k}, V^{\text{FOM}}_{h})\) as the fine solution, and the reduced-order-model (ROM) solution \(u^{\text{coarse}}:=u^{\text{ROM}}_{kh}\in X^{\text{dG}(r)}_{k}(\mathcal{T}_{k},V^{\text{ROM}}_{h})\) as the coarse solution, with \(V^{\text{ROM}}_{h}\subset V^{\text{FOM}}_{h}=:V_{h}\). First efforts of incorporating the dual-weighted residual (DWR) method in reduced-order modeling have been undertaken by Meyer and Matthies [53], where after computing some snapshots and creating the reduced basis, they used the DWR error estimator to determine which basis vectors have the largest error contribution and only use them for the reduced-order model. This can be thought of as a goal-oriented adaptive coarsening of the reduced basis. In this work, we focus on another objective, namely the enrichment of the reduced basis depending on the temporal evolution of the quantities of interest. This can be thought of as a goal-oriented adaptive refinement1 of the reduced basis, which we propose to accurately and efficiently compute the solution over the whole temporal domain.
Footnote 1: In principle coarsening would also be possible, but is not the objective in this work. For coarsening, we would need to follow the work of Meyer and Matthies [53].
### Space-time dual-weighted residual method
For the constrained optimization problem (11), we define the Lagrange functional for the fine problem as
\[\mathcal{L}_{\text{fine}}:X^{\text{dG}(r)}_{k}(\mathcal{T}_{k}, V^{\text{FOM}}_{h})\times X^{\text{dG}(r)}_{k}(\mathcal{T}_{k},V^{ \text{FOM}}_{h})\rightarrow\mathbb{R},\] \[(u^{\text{fine}},z^{\text{fine}})\mapsto J(u^{\text{fine}})- \tilde{A}(u^{\text{fine}})(z^{\text{fine}})+\tilde{F}(z^{\text{fine}}),\]
and for the coarse problem as
\[\mathcal{L}_{\text{coarse}}:X^{\text{dG}(r)}_{k}(\mathcal{T}_{k},V^{\text{ROM}}_{h})\times X^{\text{dG}(r)}_{k}(\mathcal{T}_{k},V^{\text{ ROM}}_{h})\rightarrow\mathbb{R},\] \[(u^{\text{coarse}},z^{\text{coarse}})\mapsto J(u^{\text{coarse }})-\tilde{A}(u^{\text{coarse}})(z^{\text{coarse}})+\tilde{F}(z^{\text{coarse}}).\]
The stationary points \((u^{\text{fine}},z^{\text{fine}})\) and \((u^{\text{coarse}},z^{\text{coarse}})\) of the Lagrange functionals \(\mathcal{L}_{\text{fine}}\) and \(\mathcal{L}_{\text{coarse}}\) need to satisfy the Karush-Kuhn-Tucker first-order optimality conditions. Firstly, these stationary points are solutions to the equations
\[\mathcal{L}^{\prime}_{\text{fine},z}(u^{\text{fine}},z^{\text{ fine}})(\delta z^{\text{fine}})=0\quad\forall\delta z^{\text{fine}}\in X^{ \text{dG}(r)}_{k}(\mathcal{T}_{k},V^{\text{FOM}}_{h}),\] \[\mathcal{L}^{\prime}_{\text{coarse},z}(u^{\text{coarse}},z^{ \text{coarse}})(\delta z^{\text{coarse}})=0\quad\forall\delta z^{\text{ coarse}}\in X^{\text{dG}(r)}_{k}(\mathcal{T}_{k},V^{\text{ROM}}_{h}).\]
We call these equations the primal problems and their solutions \(u^{\text{fine}}\) and \(u^{\text{coarse}}\) the primal solutions. Secondly, the stationary points must also satisfy the equations
\[\mathcal{L}^{\prime}_{\text{fine},u}(u^{\text{fine}},z^{\text{fine} })(\delta u^{\text{fine}})=0 \quad\forall\delta u^{\text{fine}}\in X^{\text{dG}(r)}_{k}(\mathcal{T}_{k}, V^{\text{FOM}}_{h}),\] \[\mathcal{L}^{\prime}_{\text{coarse},u}(u^{\text{coarse}},z^{ \text{coarse}})(\delta u^{\text{coarse}})=0 \quad\forall\delta u^{\text{coarse}}\in X^{\text{dG}(r)}_{k}( \mathcal{T}_{k},V^{\text{ROM}}_{h}).\]
These equations are called the adjoint or dual problems and their solutions \(z^{\text{fine}}\) and \(z^{\text{coarse}}\) are the adjoint solutions.
#### 4.1.1 Primal problem
Taking the Gateaux derivatives of the Lagrange functionals \(\mathcal{L}_{\text{fine}}\) and \(\mathcal{L}_{\text{coarse}}\) with respect to the adjoint solution \(z\), we arrive at the primal problem. Since the variational formulation of the PDE is linear in the test functions, we get
\[\mathcal{L}^{\prime}_{\text{fine},z}(u^{\text{fine}},z^{\text{fine }})(\delta z^{\text{fine}})=-\tilde{A}(u^{\text{fine}})(\delta z^{\text{fine} })+\tilde{F}(\delta z^{\text{fine}})=0 \quad\forall\delta z^{\text{fine}}\in X^{\text{dG}(r)}_{k}(\mathcal{T}_{k}, V^{\text{FOM}}_{h}),\] \[\mathcal{L}^{\prime}_{\text{coarse},z}(u^{\text{coarse}},z^{ \text{coarse}})(\delta z^{\text{coarse}})=-\tilde{A}(u^{\text{coarse}})( \delta z^{\text{coarse}})+\tilde{F}(\delta z^{\text{coarse}})=0 \quad\forall\delta z^{\text{coarse}}\in X^{\text{dG}(r)}_{k}(\mathcal{T}_{k}, V^{\text{ROM}}_{h}).\]
We observe that the primal solution can be obtained by solving the original problem, e.g. the heat or the elastodynamics equation, forward in time.
#### 4.1.2 Adjoint problem
Taking the Gateaux derivatives of the Lagrange functionals \(\mathcal{L}_{\text{fine}}\) and \(\mathcal{L}_{\text{coarse}}\) with respect to the primal solution \(u\), we get
\[\mathcal{L}^{\prime}_{\text{fine},u}(u^{\text{fine}},z^{\text{fine }})(\delta u^{\text{fine}})=J^{\prime}_{u}(u^{\text{fine}})(\delta u^{\text{ fine}})-\tilde{A}^{\prime}_{u}(u^{\text{fine}})(\delta u^{\text{fine}},z^{ \text{fine}})=0\] \[\forall\delta u^{\text{fine}}\in X^{\text{dG}(r)}_{k}(\mathcal{T }_{k},V^{\text{FOM}}_{h}),\] \[\mathcal{L}^{\prime}_{\text{coarse},u}(u^{\text{coarse}},z^{ \text{coarse}})(\delta u^{\text{coarse}})=J^{\prime}_{u}(u^{\text{coarse}})( \delta u^{\text{coarse}})-\tilde{A}^{\prime}_{u}(u^{\text{coarse}})(\delta u^ {\text{coarse}},z^{\text{coarse}})=0\] \[\forall\delta u^{\text{coarse}}\in X^{\text{dG}(r)}_{k}(\mathcal{T }_{k},V^{\text{ROM}}_{h}).\]
Hence, to obtain the adjoint solution, we need to solve an additional equation, the adjoint problem
\[\tilde{A}^{\prime}_{u}(u)(\delta u,z)=J^{\prime}_{u}(u)(\delta u). \tag{12}\]
Note that even for nonlinear PDEs and goal functionals the adjoint problem is linear since the semilinear form in the variational formulation of the PDE is linear in the test functions, however the primal solution enters as it is well-known [11].
**Remark 4.1**.: _For linear PDEs, like the heat or the elastodynamics equation, the left-hand side of the adjoint problem (12) simplifies to_
\[\tilde{A}^{\prime}_{u}(u)(\delta u,z)=\tilde{A}(\delta u)(z).\]
_For linear goal functionals, like the mean-value functional, the right-hand side of the adjoint problem (12) reduces to_
\[J^{\prime}_{u}(u)(\delta u)=J(\delta u).\]
_In particular for a linear problem, i.e. linear PDE and goal functional, we have the adjoint problem_
\[\tilde{A}(\delta u)(z)=J(\delta u), \tag{13}\]
_which does not depend on the primal solution \(u\) anymore._
By Remark (4.1), the adjoint problem for the heat equation reads
\[\tilde{A}(\delta u)(z)=J^{\prime}_{u}(u)(\delta u)\] \[\Leftrightarrow\sum_{m=1}^{M}\int_{I_{m}}(\partial_{t}\delta u,z )+(\nabla_{x}\delta u,\nabla_{x}z)\ \mathrm{d}t+\sum_{m=1}^{M-1}([\delta u]_{m},z_{m}^{+})+(\delta u_{0}^{+},z_{0 }^{+})=J^{\prime}_{u}(u)(\delta u).\]
We now use integration by parts in time to move the time derivative from the test function \(\delta u\) to the adjoint solution \(z\) and we get
\[\sum_{m=1}^{M}\int_{I_{m}}(\delta u,-\partial_{t}z)+(\nabla_{x}\delta u,\nabla _{x}z)\ \mathrm{d}t-\sum_{m=1}^{M-1}(\delta u_{m}^{-},[z]_{m})+(\delta u_{M}^{-},z_{M }^{-})=J^{\prime}_{u}(u)(\delta u).\]
For the elastodynamics equation the adjoint problem can be derived in a similar fashion as
\[\sum_{m=1}^{M}\int_{I_{m}}(\delta v,-\partial_{t}z^{u})+(\sigma( \delta u),\nabla_{x}z^{u})+(\delta u,-\partial_{t}z^{v})-(\delta v,z^{v})\ \mathrm{d}t\] \[\qquad-\sum_{m=1}^{M-1}\left((\delta v_{m}^{-},[z^{u}]_{m})+( \delta u_{m}^{-},[z^{v}]_{m})\right)+(\delta v_{M}^{-},z_{M}^{u,-})+(\delta u_ {M}^{-},z_{M}^{v,-})=J^{\prime}_{U}(U)(\delta U).\]
We notice that the adjoint problem now runs backward in time.
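On the fully discrete level and for a linear goal functional, solving the adjoint problem backward in time corresponds to traversing the transpose of the block system (2) from the last slab to the first. A schematic NumPy sketch for slabs of size one (an algebraic illustration, not the paper's implementation) reads:

```python
import numpy as np

def solve_discrete_adjoint(A, B, J_rhs):
    """Sketch: algebraic adjoint of the block-bidiagonal system (2) for slabs of size one.
    J_rhs[m] is the discretized (linear) goal-functional derivative on slab m. The transposed
    system is block upper-bidiagonal, hence solved backward in time:
        A^T Z_m = J_rhs[m] - B^T Z_{m+1},   with Z_{M+1} = 0."""
    M = len(J_rhs)
    Z = [None] * M
    z_next = np.zeros_like(J_rhs[-1])
    for m in reversed(range(M)):
        Z[m] = np.linalg.solve(A.T, J_rhs[m] - B.T @ z_next)
        z_next = Z[m]
    return Z
```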
#### 4.1.3 Error identity and temporal localization for linear problems
For the sake of simplicity, we assume that we are dealing with a linear PDE and goal functional. Then we have the error identity
\[J(u^{\mathrm{fine}})-J(u^{\mathrm{coarse}})=-\tilde{A}(u^{\mathrm{coarse}})(z ^{\mathrm{fine}})+\tilde{F}(z^{\mathrm{fine}})=:\eta. \tag{14}\]
The proof relies on both the linearity of the goal functional and the PDE, and the definition of the adjoint and primal problems:
\[J(u^{\mathrm{fine}})-J(u^{\mathrm{coarse}})=J(u^{\mathrm{fine}}-u^{\mathrm{ coarse}})=\tilde{A}(u^{\mathrm{fine}}-u^{\mathrm{coarse}})(z^{\mathrm{fine}})=- \tilde{A}(u^{\mathrm{coarse}})(z^{\mathrm{fine}})+\tilde{F}(z^{\mathrm{fine}}).\]
In the DWR literature for spatial and temporal discretization error control this kind of error identity (14) would be useless, because for most applications \(z^{\mathrm{fine}}\) is the analytical solution which is not known a priori and replacing it by \(z^{\mathrm{coarse}}\) yields bad error estimates. Thus, for FEM discretization error control the dual weights \(z^{\mathrm{fine}}-z^{\mathrm{coarse}}\) are being used, which can be approximated by post-processing of the dual solution. However, in our case \(z^{\mathrm{fine}}:=z^{\mathrm{FOM}}_{kh}\in X^{\mathrm{dG}(r)}_{k}(\mathcal{T} _{k},V^{\mathrm{FOM}}_{h})\) is the full-order-model dual solution, which is computable but comes with an expense. Moreover, in our numerical experiments we will observe that using a reduced-order-model dual solution \(z^{\mathrm{coarse}}:=z^{\mathrm{ROM}}_{kh}\in X^{\mathrm{dG}(r)}_{k}( \mathcal{T}_{k},\tilde{V}^{\mathrm{ROM}}_{h})\) still produces excellent error estimates for our problems if the dual reduced basis is sufficiently large.
We point out that the dual spatial reduced-order-model function space \(\tilde{V}_{h}^{\text{ROM}}\) needs to differ from the primal spatial reduced-order-model function space \(V_{h}^{\text{ROM}}\) if we want to capture the dynamics of the dual problem and want to have a non-zero error estimator.
To localize the error in time, we just need to assemble the primal residual (14) slabwise. In particular, to localize the error to each time interval \(I_{m}\), we simply need to assemble the primal residual on each time interval separately. More concretely, for the heat equation the error on the time interval can be computed from the primal linear equation system, the coarse primal solution and the fine dual solution by
\[\left.\eta\right|_{I_{m}}=\sum_{i=1}^{\#\text{DoFs}(I_{m})}\left\{(Z_{m}^{ \text{fine}})^{T}\left(-AU_{m}^{\text{coarse}}+F_{m}-BU_{m-1}^{\text{coarse}} \right)\right\}_{i}. \tag{15}\]
The error estimator on the time interval \(I_{m}\) for elastodynamics can be derived analogously by using the linear system (27) of the primal problem.
To test whether we need to use the fine dual solution for our error estimates or whether we can replace it with a coarse dual solution, we use the effectivity index as a measure of the quality of our error estimator. The effectivity index is the ratio of the estimated and the true errors, i.e.
\[\text{I}_{\text{eff}}:=\left|\frac{\eta}{J(u^{\text{fine}})-J(u^{\text{coarse }})}\right|. \tag{16}\]
We desire \(\text{I}_{\text{eff}}\approx 1\), since then the error estimator can reliably predict the reduced-order-modeling error and we also observe this in the numerical tests in Section 5.
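The localized estimator (15) and the effectivity index (16) translate almost literally into code; the following sketch assumes the fine dual solution on the slab is available as a coefficient vector.

```python
import numpy as np

def slab_error_estimate(A, B, F_m, U_coarse_m, U_coarse_prev, Z_fine_m):
    """Sketch of (15): dual-weighted primal residual on one time slab."""
    residual = -A @ U_coarse_m + F_m - B @ U_coarse_prev
    return float(Z_fine_m @ residual)   # sum over all space-time DoFs of the slab

def effectivity_index(eta, J_fine, J_coarse):
    """Sketch of (16): ratio of the estimated and the true error in the goal functional."""
    return abs(eta / (J_fine - J_coarse))
```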
#### 4.1.4 Space-time dual-weighted residual method for nonlinear problems
For nonlinear problems, like the heat equation with nonlinear goal functional in Section 5.2, we do not have an error identity anymore as in (14) for the linear case. Based on the proof in [11][Proposition 2.3], we have the following error representation formula.
**Theorem 4.2** (Error representation for nonlinear problems).: \[J(u^{\text{fine}})-J(u^{\text{coarse}})=-\tilde{A}(u^{\text{coarse}})(z^{ \text{fine}})+\tilde{F}(z^{\text{fine}})+R,\]
_with the quadratic remainder term_
\[R=\int_{0}^{1} \left[\tilde{A}_{uu}^{\prime\prime}(u^{\text{coarse}}+s(u^{ \text{fine}}-u^{\text{coarse}}))(u^{\text{fine}}-u^{\text{coarse}},u^{ \text{fine}}-u^{\text{coarse}},z^{\text{fine}})\right.\] \[\left.-J_{uu}^{\prime\prime}(u^{\text{coarse}}+s(u^{\text{fine}} -u^{\text{coarse}}))(u^{\text{fine}}-u^{\text{coarse}},u^{\text{fine}}-u^{ \text{coarse}})\right]\cdot s\ \text{d}s.\]
Proof.: In the following we will show that \(R=J(u^{\text{fine}})-J(u^{\text{coarse}})+\tilde{A}(u^{\text{coarse}})(z^{ \text{fine}})-\tilde{F}(z^{\text{fine}})\) holds. For abbreviation, we use the notation \(u:=u^{\text{fine}}\), \(\tilde{u}:=u^{\text{coarse}}\) and \(z:=z^{\text{fine}}\). Then, using integration by parts we get
\[R =\int_{0}^{1}\left[\tilde{A}_{uu}^{\prime\prime}(\tilde{u}+s(u- \tilde{u}))(u-\tilde{u},u-\tilde{u},z)-J_{uu}^{\prime\prime}(\tilde{u}+s(u- \tilde{u}))(u-\tilde{u},u-\tilde{u})\right]\cdot s\ \text{d}s\] \[=-\int_{0}^{1}\left[\tilde{A}_{u}^{\prime}(\tilde{u}+s(u-\tilde{u }))(u-\tilde{u},z)-J_{u}^{\prime}(\tilde{u}+s(u-\tilde{u}))(u-\tilde{u}) \right]\cdot 1\ \text{d}s+\left[\tilde{A}_{u}^{\prime}(u)(u-\tilde{u},z)-J_{u}^{\prime}(u )(u-\tilde{u})\right]\cdot 1-0.\]
We observe that \(\tilde{A}^{\prime}_{u}(u)(u-\tilde{u},z)-\tilde{J}^{\prime}_{u}(u)(u-\tilde{u})=0\), since \(z:=z^{\text{fine}}\) is the solution of the fine dual problem. Thus, by the fundamental theorem of calculus and \(\tilde{A}(u)(z)=\tilde{F}(z)\), we have
\[R=-\left[\tilde{A}(u)(z)-J(u)-\tilde{A}(\tilde{u})(z)+J(\tilde{u})\right]=J(u)- J(\tilde{u})+\tilde{A}(\tilde{u})(z)-\tilde{F}(z).\]
This completes the proof.
To make the error estimator computable, we neglect the quadratic remainder term and arrive at the same primal error estimator (14) as for linear problems
\[\eta:=-\tilde{A}(u^{\text{coarse}})(z^{\text{fine}})+\tilde{F}(z^{\text{fine}}).\]
Similarly as before, we replace the full-order dual solution \(z^{\text{fine}}\) in the error estimator with a reduced-order-model dual solution \(z^{\text{coarse}}\). Note that due to these approximations, the effectivity index for nonlinear problems is expected not to be close to 1. Clearly, for highly nonlinear problems (e.g., quasi-linear or fully nonlinear) and nonlinear goal functionals, both estimator parts are necessary as demonstrated in [28][Figure 4] and [27][Sec. 6.5]. However, in our numerical tests, we see that the estimated error still yields a reasonable approximation to the true error.
### Error estimator based ROM updates
In this section, we present our novel approach of a goal-oriented incremental reduced-order model. In the MORe DWR method, we marry a reduced-order model with a DWR-based error estimator and an incremental version of the POD algorithm. The MORe DWR method addresses the problems that occur when a reduced-order model has to deal with solution behavior that is not already captured and incorporated during basis generation. In general, this yields an increasing error between full- and reduced-order solutions. Thus, the presented approach aims to detect changes in solution behavior, or more precisely, differences in the evaluated quantities of interest by means of the full or reduced model during the temporal evolution. If the error increases to intolerable heights, the method allows an adaptive on-the-fly basis enrichment with snapshots of the new behavior. Hence, the reduced model can be incrementally modified until the error is sufficiently small.
In more detail, we rely on the space-time reduced-order model presented in Section 3.2 and apply our findings on error control of Section 4.1. The use of an error estimate rather than an analytical error bound entails practical advantages since its application is more versatile and we can use the method even if no error bounds are known. Further, an incremental basis generation is mandatory for the method to reduce computational operations and thus to be fast. The incremental SVD satisfies these requirements and allows an update only requiring the prior SVD and the new snapshots. The incremental SVD is presented in Section 4.2.1. In this context, we also introduce the incremental POD as a trimmed version of the incremental SVD. Subsequently, the overall MORe DWR framework is depicted in Section 4.2.2. Here, all the ingredients are assembled and the final algorithm is presented.
In summary, our novel approach dispenses with a computationally heavy offline phase and directly solves the reduced model. Full-order solves are only required for the basis enrichment and are kept to a minimum. Moreover, the reduced evaluation of the quantity of interest can be certified.
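Schematically, the adaptive strategy described above could be organized as in the following Python sketch; all helper callables are hypothetical placeholders for the ingredients of this section, and the precise algorithm is worked out in the remainder of Section 4.2.

```python
def more_dwr_loop(slabs, tol, fom_solve, rom_solve, estimate_error, ipod_update, basis):
    """Sketch of the MORe DWR idea: solve each slab with the ROM, estimate the goal-functional
    error, and enrich the POD basis on-the-fly with full-order snapshots whenever the estimate
    exceeds the tolerance. All callables are hypothetical placeholders."""
    solution = []
    for slab in slabs:
        u_rom = rom_solve(slab, basis)
        eta = estimate_error(slab, u_rom, basis)
        if abs(eta) > tol:                      # reduced solution not accurate enough on this slab
            u_fom = fom_solve(slab)             # expensive high-fidelity solve, only when needed
            basis = ipod_update(basis, u_fom)   # incremental POD enrichment (cf. Section 4.2.1)
            u_rom = rom_solve(slab, basis)      # re-solve the slab with the enriched basis
        solution.append(u_rom)
    return solution, basis
```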
#### 4.2.1 Incremental Proper Orthogonal Decomposition
This section aims to derive an algorithm that updates an already existing truncated SVD (tSVD) or solely its left-singular (POD) vectors according to modifications of the snapshot matrix without recomputing the whole tSVD or requiring access to the snapshot matrix. This methodology can then be used to update the POD incrementally by appending additional snapshots to the snapshot matrix. For this purpose, we rely on the general approach of an additive rank-b modification of the SVD, mainly developed by [17, 18] and applied to the model-order reduction of fluid flows in [45]. Although this approach provides a variety of possible modifications, e.g. resizing of the matrix, modification of individual values, or exchanging rows and columns, we are merely interested in the updates of columns, i.e. adding columns to the matrix, and thus restrict the presentation to this case. The following steps are based on [45][Section 2.2].
We start with a given snapshot matrix \(Y\in\mathbb{R}^{\#\text{DoFs}(\mathcal{T}_{h})\times\tilde{m}}\) that includes \(\tilde{m}>0\) snapshots. Usually, \(\tilde{m}\) is equal or connected to the number of already computed time steps. Further, we have the rank-\(N\) tSVD \(USV^{\text{T}}\) of the matrix \(Y\). Additionally, let \(b\in\mathbb{N}\) newly computed snapshots \(\{U_{1},\ldots,U_{b}\}\) be stored in the bunch matrix
\[B=\begin{bmatrix}u_{1}&\ldots&u_{b}\end{bmatrix}\in\mathbb{R}^{ \#\text{DoFs}(\mathcal{T}_{h})\times b}. \tag{17}\]
We now aim to compute the tSVD that is updated by the information contained in the bunch matrix \(B\) according to
\[\tilde{U}\tilde{S}\tilde{V}^{T}=\tilde{Y}=\begin{bmatrix}Y&B\end{bmatrix}\]
without explicitly reassembling \(Y\) or \(\tilde{Y}\), for performance and memory reasons, which was the original motivation of Brand's work on the incremental SVD, cf. [17, 18].
Therefore, we write the column update as an additive operation given as
\[\begin{bmatrix}Y&B\end{bmatrix}=\begin{bmatrix}Y&0_{\#\text{DoFs}( \mathcal{T}_{h})\times b}\end{bmatrix}+B\begin{bmatrix}0_{b\times\tilde{m}}&I _{b\times b}\end{bmatrix} \tag{18}\]
to apply the additive rank-b modification to the SVD according to [45] and obtain the rank-\(\tilde{N}\) tSVD of \(\tilde{Y}\) with \(\tilde{N}\leq N+b\) and
\[\tilde{V} =\begin{bmatrix}V&0\\ 0&I\end{bmatrix}V^{\prime}(:,1:\tilde{N}) \tag{19}\] \[\tilde{S} =S^{\prime}(1:\tilde{N},1:\tilde{N})\] (20) \[\tilde{U} =\begin{bmatrix}U&Q_{\text{B}}\end{bmatrix}U^{\prime}(:,1:\tilde{ N})\,, \tag{21}\]
where \(F=U^{\prime}S^{\prime}{V^{\prime}}^{\text{T}}\in\mathbb{R}^{(N+b)\times(N+b)}\) is the SVD of
\[F=\begin{bmatrix}\Sigma&U^{\text{T}}B\\ 0&R_{\text{B}}\end{bmatrix} \tag{22}\]
and \(Q_{B}\in\mathbb{R}^{\#\text{DoFs}(\mathcal{T}_{h})\times b}\) and \(R_{B}\in\mathbb{R}^{b\times b}\) are given by the QR decomposition
\[Q_{\text{B}}R_{\text{B}}=(I-UU^{\text{T}})B\in\mathbb{R}^{\#\text{DoFs}( \mathcal{T}_{h})\times b}. \tag{23}\]
For the POD basis update, we identify \(U\) and \(\tilde{U}\) with the previous and updated versions of the reduced basis matrix \(Z_{N}\) containing the POD vectors, respectively. We also neglect the update of the right-singular vectors in (19), since it provides no benefit for the reduced-order model and only causes additional computational effort, cf. Theorem 3.1. The singular values are needed for the rank determination but come at no additional computational cost. In conclusion, (20)-(22) serve as the basis for the on-the-fly or incrementally computed POD (iPOD) in this paper.
An additional technical observation: for bunch matrices with small column rank \(b\), the iPOD algorithm is invoked frequently, and the algebraic subspace rotations involved may not preserve orthogonality numerically, cf. [18, 32, 4, 31, 80]. Hence, a numerically induced loss of orthogonality of the POD basis vectors can occur. To deal with this problem, an additional orthonormalization of \(\begin{bmatrix}U&Q_{\text{B}}\end{bmatrix}\) is recommended. Algorithm 2 outlines the implementation of an incremental POD update. Here, \(Z_{N}\) and \(\Sigma=[\sigma_{1},\ldots,\sigma_{N}]\in\mathbb{R}^{N}\) denote the reduced basis matrix of (6) and its respective singular values. In addition, the bunch matrix \(B\) introduced in (17), containing \(b\) snapshots, is used as an input. The information content captured by the reduced basis is determined by the energy threshold \(\varepsilon\).
```
Input: Reduced basis matrix \(Z_{N}\in\mathbb{R}^{\#\text{DoFs}(\mathcal{T}_{h})\times N}\), singular value vector \(\Sigma=[\sigma_{1},\ldots,\sigma_{N}]\in\mathbb{R}^{N}\), bunch matrix \(B\in\mathbb{R}^{\#\text{DoFs}(\mathcal{T}_{h})\times b}\), and energy threshold \(\varepsilon\in[0,1]\).
Output: Updated reduced basis matrix \(Z_{N}\in\mathbb{R}^{\#\text{DoFs}(\mathcal{T}_{h})\times\tilde{N}}\), singular value vector \(\Sigma=[\sigma_{1},\ldots,\sigma_{\tilde{N}}]\in\mathbb{R}^{\tilde{N}}\)
1:\(H=Z_{N}^{T}B\)
2:\(P=B-Z_{N}H\)
3:\([Q_{P},\,R_{P}]=\text{QR}(P)\)
4:\(Q=[Z_{N}\,\,Q_{P}]\)
5:\(F=\begin{bmatrix}\text{diag}(\Sigma)&H\\ 0&R_{P}\end{bmatrix}\)
6:if Q not orthogonal then
7:\([Q,\,R]=\text{QR}(Q)\)
8:\(F=RF\)
9:\([U^{\prime},\Sigma^{\prime}]=\text{SVD}(F)\)
10:\(\tilde{N}=\min\left\{N\in\mathbb{N}\ \middle|\ \varepsilon(N)\geq \varepsilon,\ \ 1\leq N\leq d\right\}\)
11:\(\Sigma=\text{diag}(\Sigma^{\prime})(1:\tilde{N})\)
12:\(Z_{N}=QU^{\prime}(:,1:\tilde{N})\)
```
**Algorithm 2** Incremental POD update
Note that checking whether orthogonality is preserved can be computationally expensive. Thus, we resort to a heuristic and only verify whether the first and last columns of the matrix are orthogonal.
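To make the update concrete, the following is a minimal NumPy sketch of Algorithm 2. It is not the implementation used in this work (which is only described as being based on NumPy and SciPy in Section 5); the function name `ipod_update`, the orthogonality tolerance, and the interpretation of the energy criterion as the relative cumulative energy of the singular values are assumptions made for illustration.

```python
import numpy as np

def ipod_update(Z, sigma, B, energy_threshold=1.0 - 1e-8):
    """Sketch of the incremental POD update of Algorithm 2.

    Z     : (n_dofs, N) current reduced basis (left-singular vectors)
    sigma : (N,) current singular values
    B     : (n_dofs, b) bunch matrix of new snapshots
    """
    H = Z.T @ B                                   # step 1: project snapshots onto basis
    P = B - Z @ H                                 # step 2: residual part of the snapshots
    Q_P, R_P = np.linalg.qr(P)                    # step 3
    Q = np.hstack([Z, Q_P])                       # step 4

    # Step 5: small core matrix F, cf. (22).
    F = np.block([[np.diag(sigma), H],
                  [np.zeros((R_P.shape[0], sigma.size)), R_P]])

    # Steps 6-8: heuristic orthogonality check (first vs. last column only).
    if abs(Q[:, 0] @ Q[:, -1]) > 1e-10:
        Q, R = np.linalg.qr(Q)
        F = R @ F

    # Steps 9-12: SVD of the small matrix and energy-based rank truncation
    # (assumed criterion: smallest rank with cumulative relative energy >= threshold).
    U_prime, S_prime, _ = np.linalg.svd(F)
    energy = np.cumsum(S_prime**2) / np.sum(S_prime**2)
    N_new = int(np.searchsorted(energy, energy_threshold)) + 1

    return Q @ U_prime[:, :N_new], S_prime[:N_new]
```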
#### 4.2.2 Goal-oriented certified incremental ROM
In this section, we assemble the space-time ROM presented in Section 3.2 and the incremental POD of Section 4.2.1 with the findings on goal-oriented error control of Section 4.1. This yields an adaptive goal-oriented incremental reduced-order model. First, in addition to the slab definition, we introduce the parent-slab notion as a further decomposition of the space-time domain. A parent-slab unifies several slabs that are consecutive in time and is defined as
\[P_{k}^{r}=\{S_{l}^{n}\mid l\geq k\;\wedge\;n\leq r\}.\]
Now, our approach is designed to work without any prior knowledge or exploration of the solution manifold while also attempting to minimize the full-order operations. Thus, we aim to solve the reduced-order model parent-slab-wise and, if necessary, adaptively enrich the reduced basis by means of the iPOD with full-order solutions of the parent-slab until the reduced basis is good enough to meet a given estimated error tolerance for the chosen cost functional. For this, we identify the fine and coarse solutions introduced in the DWR method with the finite element and reduced basis solutions, respectively, and estimate the error on each slab of the parent-slab. The full-order solution used for the basis enrichment is computed on the slab where the error is the largest. We remark that both the primal and dual full-order solutions are computed on this slab and are used to enrich the primal and dual bases. So, for each enrichment two full-order solves are conducted. After having finished this iterative process on a parent-slab, the obtained basis is transferred to the succeeding parent-slab and is used as a starting point to solve the reduced-order model, where the whole procedure is repeated. If the solution behavior on the next parent-slab only differs slightly from the already observed behavior, the reduced basis at hand should be able to reproduce most of the behavior. Thus, few basis updates would be sufficient such that a fast computation of the reduced solution can be expected. However, if the solution behavior changes drastically, the error estimate will detect this and further refinements of the basis will be conducted to ensure that the solution meets the error tolerance. We observe that this procedure is perfectly compatible with the adaptive basis selection based on DWR estimates presented by Meyer and Matthies in [53] to reduce the dimension of the reduced space. Thus, if incorporated, it would be possible to either enrich or shrink the reduced basis as required by the problem at hand.
The resulting approach is outlined in Algorithm 3. For the sake of simplicity, we decompose the space-time cylinder into \(K\) parent-slabs of fixed length \(L\) and enumerate them with respect to time, viz. \(P_{1},P_{2},\ldots,P_{K}\). In order to identify the affiliation of a slab to a parent-slab \(P_{k}\), the slabs it contains are denoted by \(S_{P_{k}}^{1},S_{P_{k}}^{2},\ldots,S_{P_{k}}^{L}\) with \(1\leq k\leq K\). The discretized primal systems for each slab \(S_{P_{k}}^{j}\) are expressed in (3) and (8) for the full- and reduced-order models, respectively. For the dual problem
\[A^{\prime}Z_{S_{P_{k}}^{l}} =J_{S_{P_{k}}^{l}}\quad\text{and} \tag{24}\] \[A_{N}^{\prime}Z_{N,S_{P_{k}}^{l}} =J_{N,S_{P_{k}}^{l}} \tag{25}\]
denote the discretized full- and reduced-order systems of the adjoint problem (12). Further, the evaluation of the error estimator (15) on slab \(S_{P_{k}}^{l}\) is given by \(\eta_{N,S_{P_{k}}^{l}}\left(U_{N,S_{P_{k}}^{l}},Z_{N,S_{P_{k}}^{l}}\right)\). Note that the reduced primal and dual solutions are deployed to enable an evaluation independent of the full-order system size and thus a fast error evaluation. Lastly, the incremental POD (Algorithm 2) is referred to by the abbreviation iPOD; it takes the reduced basis, the new snapshots bundled in the bunch matrix, and the singular values as input and returns the updated POD basis.
**Input:** Initial condition \(U_{0}:=U(t_{0})\), primal and dual reduced basis matrices \(Z_{N}^{p}\) and \(Z_{N}^{d}\), energy threshold \(\varepsilon\in[0,1]\) and error tolerance \(\text{tol}>0\).
**Output:** Primal and dual reduced basis matrices \(Z_{N}^{p}\) and \(Z_{N}^{d}\) and reduced primal solutions \(U_{N,I_{m}}\) for all \(1\leq m\leq M\).
```
1: for \(k=1,2,\ldots,K\) do
2: while \(\eta_{max}>tol\) do
3: for \(l=1,2,\ldots,L\) do
4: Solve reduced primal system (8): \(A_{N}U_{N,S_{P_{k}}^{l}}=F_{N,S_{P_{k}}^{l}}\)
5: for \(l=L,L-1,\ldots,1\) do
6: Solve reduced dual system (25): \(A_{N}^{\prime}Z_{N,S_{P_{k}}^{l}}=J_{N,S_{P_{k}}^{l}}\)
7: for \(l=1,2,\ldots,L\) do
8: Compute error estimate: \(\eta_{N,S_{P_{k}}^{l}}\left(U_{N,S_{P_{k}}^{l}},Z_{N,S_{P_{k}}^{l}}\right)\)
9:\(\eta_{max}=\max\limits_{1\leq l\leq L}\left|\eta_{N,S_{P_{k}}^{l}}\right|\)
10: if \(\eta_{max}>tol\) then
11:\(l_{max}=\underset{1\leq l\leq L}{\arg\max}\left|\eta_{N,S_{P_{k}}^{l}}\right|\)
12: Solve primal full-order system (3): \(AU_{S_{P_{k}}^{l_{max}}}=F_{S_{P_{k}}^{l_{max}}}\)
13: Update primal reduced basis: \(Z_{N}^{p}=\text{iPOD}(Z_{N}^{p},[U_{S_{P_{k}}^{l_{max}}}(t_{1}),\ldots,U_{S_{ P_{k}}^{l_{max}}}(t_{r+1})],\Sigma)\)
14: Solve dual full-order system (24): \(A^{\prime}Z_{S_{P_{k}}^{l_{max}}}=J_{S_{P_{k}}^{l_{max}}}\)
15: Update dual reduced basis: \(Z_{N}^{d}=\text{iPOD}(Z_{N}^{d},[Z_{S_{P_{k}}^{l_{max}}}(t_{1}),\ldots,Z_{S_{ P_{k}}^{l_{max}}}(t_{r+1})],\Sigma)\)
16: Update reduced system components and error estimator w.r.t. (9)
17: ------ Validation loop ------ \(\triangleright\) This is an optional validation mechanism of the model.
18: for \(k=1,2,\ldots,K\) do
19: for \(l=1,2,\ldots,L\) do
20: Solve primal reduced system: \(A_{N}U_{N,S_{P_{k}}^{l}}=F_{N,S_{P_{k}}^{l}}\)
21: for \(k=K,K-1,\ldots,1\) do
22: for \(l=L,L-1,\ldots,1\) do
23: Solve dual reduced system: \(A_{N}^{\prime}Z_{N,S_{P_{k}}^{l}}=J_{N,S_{P_{k}}^{l}}\)
24: for \(k=1,2,\ldots,K\) do
25: for \(l=1,2,\ldots,L\) do
26: Compute slab estimate: \(\eta_{N,S_{P_{k}}^{l}}(U_{N,S_{P_{k}}^{l}},Z_{N,S_{P_{k}}^{l}})\)
```
**Algorithm 3** Incremental ROM
In addition to the previously mentioned steps, we add an optional validation loop whose purpose depends on the application. Specifically, it consists in recomputing the whole reduced solutions with the final reduced basis and evaluating its error again. If the generated reduced basis is meant to be reused,
the additional validation of its accuracy ensures that the reduced basis is well suited to approximate the solution for the whole time domain. This is mainly the case in an optimization process or if the MORe DWR method is used for manifold exploration. However, if the only purpose is a one-time evaluation of a quantity of interest, the validation can be neglected for performance reasons.
Furthermore, we note that, similar to the plain POD approximation error in (4), the physical interpretation of the error estimate is not intuitive. Therefore, we consider a relative measure of the approximation quality. However, the full-order solutions are not available to normalize the error, so we resort to
\[J\left(U_{S_{P_{k}}^{l}}\right)\approx J\left(U_{N,S_{P_{k}}^{l}}\right)+ \eta_{N,S_{P_{k}}^{l}}.\]
This results in the relative error estimator \(\eta_{N,S_{P_{k}}^{l}}^{rel}\) on slab \(S_{P_{k}}^{l}\) defined by
\[\eta_{N,S_{P_{k}}^{l}}^{rel}=\frac{\eta_{N,S_{P_{k}}^{l}}}{J\left(u_{S_{P_{k}}^ {l}}\right)}\approx\frac{\eta_{N,S_{P_{k}}^{l}}}{J\left(u_{N,S_{P_{k}}^{l}} \right)+\eta_{N,S_{P_{k}}^{l}}}. \tag{26}\]
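For illustration, the following Python sketch condenses the inner loop of Algorithm 3 on a single parent-slab and combines it with the relative estimator (26). All solver and estimator routines are passed in as callables and are placeholders whose names are not taken from the paper; `ipod_update` refers to the sketch of Algorithm 2 above, and the full-order solvers are assumed to return the slab snapshots as a bunch matrix.

```python
def enrich_on_parent_slab(slabs, Z_p, sig_p, Z_d, sig_d, tol,
                          solve_rom_primal, solve_rom_dual,
                          solve_fom_primal, solve_fom_dual,
                          eta_slab, goal_functional):
    """Adaptive basis enrichment on one parent-slab (sketch of Algorithm 3, lines 2-16)."""
    while True:
        # Lines 3-6: reduced primal sweep forward, reduced dual sweep backward.
        u_rom = [solve_rom_primal(Z_p, s) for s in slabs]
        z_rom = list(reversed([solve_rom_dual(Z_d, s) for s in reversed(slabs)]))

        # Lines 7-9: slab-wise error estimates, made relative as in (26).
        eta = []
        for u, z, s in zip(u_rom, z_rom, slabs):
            e = eta_slab(u, z, s)
            eta.append(e / (goal_functional(u, s) + e))
        l_max = max(range(len(slabs)), key=lambda l: abs(eta[l]))

        if abs(eta[l_max]) <= tol:
            return Z_p, sig_p, Z_d, sig_d, u_rom

        # Lines 12-15: two full-order solves on the worst slab, then basis enrichment.
        Z_p, sig_p = ipod_update(Z_p, sig_p, solve_fom_primal(slabs[l_max]))
        Z_d, sig_d = ipod_update(Z_d, sig_d, solve_fom_dual(slabs[l_max]))
        # Line 16: re-project the reduced system components and the error
        # estimator onto the enriched bases before the next pass.
```

The bases returned for one parent-slab are then reused as the starting point on the next one, exactly as described above.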
## 5 Numerical tests
In order to demonstrate our methodology, we perform numerical tests on three different problem configurations. For the first two numerical tests, we perform computations for the heat equation in 1+1D and 2+1D. For the former, we use a linear goal functional and for the latter, we use a nonlinear goal functional. To demonstrate the flexibility of our temporal discretization, we use Gauss-Legendre quadrature points in time for the heat equation and a \(\mathrm{dG}(1)\) time discretization. As the third problem configuration, we consider a 3+1D cantilever beam as a benchmark problem from elastodynamics. For this problem, we use Gauss-Lobatto quadrature points in time, which are the support points for conventional time-stepping schemes, and we use a \(\mathrm{dG}(2)\) time discretization.
All our computations have been performed on a personal computer with an Intel i5-7600K CPU @ 3.80GHz \(\times\) 4 and 16GB of RAM. The space-time FEM codes have been written in deal.II [2, 3] and the reduced-order modeling has been performed with NumPy [39] and SciPy [74]. The data between the codes is exchanged via the hard disk.
### 1+1D Heat equation
For our first numerical test, we construct a 1+1D heat equation problem; see Formulation 2.1. We consider the spatial domain \(\Omega=(0,1)\) and the temporal domain \(I=(0,4)\). We use a single moving heat source that changes its temperature after each second and moves through the spatial domain with a heating interval width of \(0.1\) from \(x=0.1\) to \(x=0.9\) and then back to \(x=0.1\). For this, we use the right-hand side function
\[f(t,x):=\begin{cases}0.2&t\in(0,1),\,-0.05\leq x-0.4t-0.1\leq 0.05,\\ -0.5&t\in(1,2),\,-0.05\leq x-0.4t-0.1\leq 0.05,\\ 1.0&t\in(2,3),\,-0.05\leq x+0.4(t-2)-0.9\leq 0.05,\\ -0.75&t\in(3,4),\,-0.05\leq x+0.4(t-2)-0.9\leq 0.05.\end{cases}\]
We use a zero initial condition, homogeneous Dirichlet boundary conditions, and the time-averaged mean value goal functional \(J(u):=\frac{1}{4}\int_{0}^{4}\int_{0}^{\frac{1}{2}}u(t,x)\;\mathrm{d}x\;\mathrm{d}t\). We point out that the goal functional does not have support on the entire spatial domain, but only on its lower half \((0,\frac{1}{2})\subsetneq\Omega\).
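For reference, the moving source above can be transcribed directly into a small Python function; the handling of the interval endpoints is an implementation detail and assumed not to matter numerically.

```python
def heat_source_1d(t, x):
    """Right-hand side f(t, x) of the 1+1D example: a moving source of width 0.1."""
    if 0.0 < t < 1.0 and -0.05 <= x - 0.4 * t - 0.1 <= 0.05:
        return 0.2
    if 1.0 < t < 2.0 and -0.05 <= x - 0.4 * t - 0.1 <= 0.05:
        return -0.5
    if 2.0 < t < 3.0 and -0.05 <= x + 0.4 * (t - 2.0) - 0.9 <= 0.05:
        return 1.0
    if 3.0 < t < 4.0 and -0.05 <= x + 0.4 * (t - 2.0) - 0.9 <= 0.05:
        return -0.75
    return 0.0
```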
For the reduced-order model, we choose that the primal and dual reduced bases have to preserve \(\varepsilon=1-10^{-8}\) of the information. As previously stated, we resort to the relative error estimate \(\eta_{N,S_{P_{k}}^{l}}^{rel}\) and allow errors up to a tolerance of \(1\%\). Figure 3 shows the temporal evolution of the goal functional; for the time-averaged cost functional, the reduced-order solution yields a relative error of \(\eta_{max}=0.1210\%\).
We compare the temporal error estimate with the exact temporal error on each slab in Figure 4. The general tendencies of both curves are similar. The exact error is on average more than one order of magnitude smaller than the error tolerance of \(1\%\) (indicated by a green dashed line). The error estimate exceeds the tolerance for a short moment after \(t=3\,s\). Such an overestimation can cause the execution of unnecessary full-order solves. Nonetheless, an overestimation is less harmful to the approximation quality since the exact error still meets the tolerance.
Figure 4: Temporal evolution of the time interval-wise error estimator compared to the true error for the 1+1D heat equation.
Figure 3: Temporal evolution of cost functional for the 1+1D heat equation.
Table 1 gives an overview of simulation results for different error tolerances between \(0.1\%\) and \(10\%\). The listed characteristics are: the relative error, computational speedup, the total number of FOM solves, POD basis sizes for the primal and dual problem, prediction capability of the error estimator, and the effectivity index from (16). Here, the number of FOM solves sums up all primal and dual solves, and the basis sizes are shown in the pattern primal/dual. The prediction capability is visualized by means of a confusion matrix. The prediction on each slab is assigned to one of the four cases:
\[\text{error}>\text{tol}\wedge\text{estimate}<\text{tol}\quad\mid \quad\text{error}<\text{tol}\wedge\text{estimate}>\text{tol}\quad\mid\] \[\text{error}>\text{tol}\wedge\text{estimate}>\text{tol}\quad\mid \quad\text{error}<\text{tol}\wedge\text{estimate}<\text{tol}.\]
We note that the four possible scenarios are sorted according to the severity of the consequences of their occurrence. The first two cases indicate mispredictions of the estimator. Here, the first case is the least desirable since the error estimator underestimates the true error, which can lead to a reduced basis that is too small. The second case is less critical since the true error is overestimated by the error estimator, which can cause the reduced basis to be slightly larger than necessary. The last two cases are less harmful since the estimate correctly predicts the error. However, the third case is also not optimal, since it shows that after the incremental basis enrichment, in the validation loop, there are still slabs on which the error tolerance is exceeded. Therefore, we expect that for an efficient method (almost) all slabs fall in the last category, where the error tolerance is met and the error estimate is also below the tolerance.
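To make the bookkeeping explicit, a small helper that tallies slabs into these four categories from slab-wise true and estimated errors could look as follows; the function name and the handling of the boundary case error = tol are assumptions.

```python
def prediction_counts(true_errors, estimates, tol):
    """Counts for the 'Prediction' column, in the order listed above."""
    counts = [0, 0, 0, 0]
    for err, est in zip(true_errors, estimates):
        err, est = abs(err), abs(est)
        if err > tol and est < tol:
            counts[0] += 1   # underestimation: the most severe case
        elif err < tol and est > tol:
            counts[1] += 1   # overestimation: possibly unnecessary FOM solves
        elif err > tol and est > tol:
            counts[2] += 1   # correctly flagged, but tolerance still violated
        else:
            counts[3] += 1   # desired case: both below the tolerance
    return counts
```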
We observe that with a rise in the tolerance the relative error as well as the speedup increase. Note that the relative error is almost an order of magnitude smaller than the tolerance, which aligns with the results of Figure 4. The difference in magnitude can be explained by the fact that the tolerance has to be met slab-wise while the relative error is evaluated over the whole time domain. The speedup is explained by the decreasing number of FOM solves and smaller POD bases for both the primal and dual problem w.r.t. the given tolerance. Furthermore, the estimator correctly predicts the relation of the error to the tolerance in approximately \(98-99\%\) of the cases, with most of the incorrect predictions being overestimations. Similarly, for the effectivity index, a slight worsening can be seen with rising tolerance, since replacing the full-order dual solution in the error estimator with the reduced-order dual solution introduces additional errors.
Finally, we demonstrate the incremental nature of our MORe DWR approach in Figure 5. In this
\begin{table}
\begin{tabular}{|l||l|l|l|l|l|l|l|} \hline Tolerance & Relative error & Speedup & FOM solves & Basis size & Prediction & Effectivity \\ \hline \(0.1\%\) & \(0.0130\%\) & \(10.9\) & \(68\) & \(39\mid 36\) & \(0\mid 38\mid\)\(0\mid 5082\) & \(1.0065\) \\ \(1\%\) & \(0.1210\%\) & \(12.2\) & \(40\) & \(25\mid 22\) & \(0\mid 31\mid\)\(0\mid 5089\) & \(1.0071\) \\ \(2\%\) & \(0.3370\%\) & \(13.2\) & \(38\) & \(24\mid 21\) & \(0\mid 41\mid\)\(0\mid 5079\) & \(1.0074\) \\ \(5\%\) & \(1.2019\%\) & \(15.2\) & \(32\) & \(21\mid 18\) & \(28\mid 48\mid 18\mid 5026\) & \(1.0125\) \\ \(10\%\) & \(1.7645\%\) & \(18.6\) & \(30\) & \(20\mid 17\) & \(0\mid 73\mid\)\(0\mid 5047\) & \(1.0404\) \\ \hline \end{tabular}
\end{table}
Table 1: Performance of MORe DWR for the 1+1D heat equation depending on the tolerance in the goal functional.
context, we illustrate the on-the-fly basis generation by plotting the primal and dual reduced basis size over the time domain and compare its evolution for the tolerances of 1% and 10%. The results indicate a steeper and more granular increase of both the primal and dual basis size if the tolerance is smaller. Nevertheless, we observe a steady basis size for all bases and tolerances after around 2 seconds. If we take the movement of the heat source into account, this is exactly the time the source needs to travel once through the spatial domain. Thus, after this, no new information is added to the system that would trigger a further basis enrichment.
### 2+1D Heat equation
In the second numerical experiment, we test MORe DWR on a 2+1D heat equation problem. We consider the spatial domain \(\Omega=(0,1)^{2}\) and the temporal domain \(I=(0,10)\). We create a moving heat source of oscillating temperature that rotates around the midpoint of the spatial domain \(\Omega\) as shown in Figure 6. For this, we use the right-hand side function
\[f(t,x):=\begin{cases}\sin(4\pi t)&\text{if }(x_{1}-p_{1})^{2}+(x_{2}-p_{2})^{2} <r^{2},\\ 0&\text{else},\end{cases}\]
with \(x=(x_{1},x_{2})\), midpoint \(p=(p_{1},p_{2})=(\frac{1}{2}+\frac{1}{4}\cos(2\pi t),\frac{1}{2}+\frac{1}{4} \sin(2\pi t))\) and radius of the trajectory \(r=0.125\). In addition, a zero initial condition and homogeneous Dirichlet boundary conditions are applied. In contrast to the goal functional in Section 5.1, we test the method for a nonlinear cost functional \(J(u):=\frac{1}{10}\int_{0}^{10}\int_{\Omega}u(t,x)^{2}\ \mathrm{d}x\ \mathrm{d}t\).
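As for the 1+1D case, the rotating, oscillating source can be transcribed directly; the function name and the scalar (non-vectorized) evaluation are illustrative choices only.

```python
import numpy as np

def heat_source_2d(t, x1, x2, r=0.125):
    """Rotating, oscillating heat source of the 2+1D example."""
    p1 = 0.5 + 0.25 * np.cos(2.0 * np.pi * t)
    p2 = 0.5 + 0.25 * np.sin(2.0 * np.pi * t)
    inside = (x1 - p1) ** 2 + (x2 - p2) ** 2 < r ** 2
    return np.sin(4.0 * np.pi * t) if inside else 0.0
```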
Figure 5: Temporal evolution of the reduced basis size for a relative error tolerance of 1% (left) and 10% (right) for the 1+1D heat equation.
For the reduced-order model, we choose that the primal and dual reduced bases have to preserve \(\varepsilon=1-10^{-8}\) of the information. Similar to the previous one-dimensional scenario, we resort to the relative error estimate \(\eta_{N,S_{P_{k}}}^{rel}\) and allow errors up to a tolerance of \(1\%\). The full-order model is characterized by \(n=4,225\) and \(q=4,096\) DoFs in space and time, respectively. This gives us a total of \(n\cdot q=17,305,600\) space-time degrees of freedom. Further, the temporal domain is split up into \(M=2,048\) time slabs. For the incremental ROM, we choose a total amount of \(K=128\) parent-slabs on which the slabs are evenly distributed, i.e. \(L=16\).
Firstly, in Figure 7 we compare the time trajectories of the goal functional restricted to each time slab for the full-order space-time solution \(u_{h}\) and the reduced-order space-time solution \(u_{N}\). It illustrates that both trajectories are not distinguishable from each other although the solution behavior is constantly changing. Furthermore, good approximation quality can also be observed when regarding the time-averaged cost functional. We obtain \(J(u_{h})=6.4578\cdot 10^{-5}\) and \(J(u_{N})=6.4577\cdot 10^{-5}\) yielding a relative error of \(\eta_{max}=0.0016\%\). This implies that the incremental ROM can replicate nonlinear cost functionals within a given tolerance.
Figure 6: Full-order solution snapshots for the 2+1D heat equation.
In Figure 8, the exact temporal errors and their estimation on each slab are compared. Further, for illustration we indicate the error tolerance of 1% in this plot. The results show that both the exact and estimated errors meet the given error tolerance on all slabs. Overall, the estimate shows a similar trajectory to the exact error. However, we can observe spikes in the exact error that are not completely covered by the estimation. Nevertheless, these deflections remain without consequences.
Table 2 presents simulation results for a range of error tolerances. The quantities we consider are the following: the relative error, computational speedup, the total number of FOM solves, POD basis sizes for the primal and dual problem, prediction capability of the error estimator, and the effectivity
Figure 8: Temporal evolution of the time interval-wise relative error estimator compared to the true error for the 2+1D heat equation.
Figure 7: Temporal evolution of cost functional for 2+1D heat equation.
index. For definitions of these quantities, we refer to Section 5.1. We can observe that with a rise in the tolerance the relative error as well as the speedup increase. Again, the relative error is much smaller than the tolerance. Note that in contrast to the 1D linear scenario the relaxation of the error tolerance has a greater impact on the speedup. This can be explained by the evolution of the number of FOM solves and the POD bases w.r.t. the given tolerance. Furthermore, the estimator correctly predicts the relation of the error to the tolerance in approximately \(94-99\%\) of the cases, with most of the incorrect predictions being overestimations. An exception exists for \(\text{tol}=10\%\), where a drop of \(5\%\) in the prediction capability can be observed, indicating that the dual basis is too small to accurately estimate the error. An adapted tolerance for the information content of the dual basis could counteract that problem. Nevertheless, the obtained reduced cost functional still meets the error tolerance. The largest difference from the linear case lies in the effectivity index. We observe that in contrast to the linear case, the effectivity indices show larger fluctuations around \(1\), which is expected due to the nonlinear cost functional. However, the effectivity indices are still in an acceptable range yielding good results. Finally, we observe that for a large tolerance of \(10\%\) we have a few mispredictions, i.e. on \(79\) slabs the true error is greater than the tolerance while the estimated error is smaller than the tolerance, and on \(28\) slabs the error estimator is greater than the tolerance while the true error is smaller than the tolerance. Additionally, for this tolerance we have \(17\) slabs on which both true and estimated errors are larger than the tolerance. This decay of MORe DWR performance can be explained by the replacement of the fine dual solution \(z^{\text{fine}}\) in the DWR error estimator by the coarse dual solution \(z^{\text{coarse}}\). If we make the POD bases for the primal and dual problems too small, then this approximation might cause additional errors and lead to a worse performance of our method.
Lastly, Figure 9 sketches the incremental nature of the MORe DWR approach. The on-the-fly basis generation is shown by plotting the primal and dual reduced basis size over the time domain and comparing its evolution for the tolerances of \(1\%\) and \(10\%\). The results indicate a steep increase of both the primal and dual basis sizes in the first second of the simulation that reflects one round trip of the oscillating heat source through the spatial domain. Again, for a more restrictive tolerance, the reduced basis size grows faster. After the first round trip of the heat source, the basis size remains almost unchanged with only one basis enlargement for the tolerance of \(1\%\) at around \(t=4\) s. This is grounded in the periodic behavior of the chosen numerical experiment, which does not add any further information to the system. Thus, few or no further basis enrichments have to be performed to meet the given error tolerance.
\begin{table}
\begin{tabular}{|l||l|l|l|l|l|l|l|l|} \hline Tolerance & Relative error & Speedup & FOM solves & Basis size & Prediction & Effectivity \\ \hline \hline \(0.1\%\) & \(0.0019\%\) & \(7.7\) & \(150\) & \(92\mid 78\) & \(0\mid 35\mid 0\mid 2013\) & \(0.7524\) \\ \(1\%\) & \(0.0017\%\) & \(27.5\) & \(80\) & \(55\mid 44\) & \(0\mid 1\mid 0\mid 2047\) & \(0.2771\) \\ \(2\%\) & \(0.0628\%\) & \(29.6\) & \(66\) & \(47\mid 36\) & \(0\mid 9\mid 0\mid 2039\) & \(3.9181\) \\ \(5\%\) & \(0.9162\%\) & \(44.8\) & \(44\) & \(33\mid 25\) & \(0\mid 1\mid 0\mid 2047\) & \(1.2254\) \\ \(10\%\) & \(0.9243\%\) & \(50.0\) & \(38\) & \(31\mid 23\) & \(79\mid 28\mid 17\mid 1924\) & \(1.5474\) \\ \hline \end{tabular}
\end{table}
Table 2: Incremental reduced-order modeling summary for the 2+1D heat equation depending on the tolerance in the goal functional.
### 3+1D Elastodynamics equation
In the third numerical experiment, we choose Formulation 2.2 and investigate the method on a 3+1D elastodynamics problem. We consider a rectangular beam spanning the spatial domain \(\Omega=(0,6)\times(0,1)\times(0,1)\) and the temporal domain \(I=(0,40)\). We induce an oscillation in the vertical direction by prescribing a traction \(g(t)\) acting on the upper boundary of the beam \(\Gamma_{\text{up}}=(0,6)\times(0,1)\times\{x_{3}=1\}\). In the first part of the experiment the beam is lifted up by means of the acting force, as shown in Figure 10. Thereafter, the force is slowly removed such that the beam begins to swing.
For this, we use
\[g(t):=\begin{cases}f_{\max}\frac{t}{t_{1}}&x_{3}=1\;\wedge\;t\leq t_{1},\\ f_{\max}\left(1-\frac{t-t_{1}}{t_{2}-t_{1}}\right)&x_{3}=1\;\wedge\;t_{1}<t\leq t _{2},\\ 0&\text{else},\end{cases}\]
with maximal acting force \(f_{\max}=0.5\) and \(t_{1}=5\) and \(t_{2}=6\) being the time points until the force increases or decreases, respectively. Together with the beam being clamped at the boundary \(\Gamma_{\text{clamped}}=\{x_{1}=0\}\times(0,1)\times(0,1)\) this yields the boundary conditions in Section 2.1.2
\[u=0 \text{in }I\times\Gamma_{\text{clamped}},\] \[v=0 \text{in }I\times\Gamma_{\text{clamped}},\] \[\sigma(u)\cdot n=0 \text{in }I\times\partial\Omega\setminus(\Gamma_{\text{clamped}} \cup\Gamma_{\text{up}}),\] \[\sigma(u)\cdot n=g(t) \text{in }I\times\Gamma_{up}.\]
Furthermore, the homogeneous initial conditions are given by
\[u(0)=0 \text{in }\Omega,\] \[v(0)=0 \text{in }\Omega.\]
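The time profile of the traction can be transcribed as follows; it is applied only on \(\Gamma_{\text{up}}\), so the spatial condition \(x_{3}=1\) is left to the boundary-term assembly, and the default values are taken from the text.

```python
def traction_magnitude(t, f_max=0.5, t1=5.0, t2=6.0):
    """Time profile g(t) of the traction on the upper boundary of the beam."""
    if t <= t1:
        return f_max * t / t1                        # linear ramp-up
    if t1 < t <= t2:
        return f_max * (1.0 - (t - t1) / (t2 - t1))  # linear ramp-down
    return 0.0                                       # beam swings freely afterwards
```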
Figure 9: Temporal evolution of the reduced basis size for a relative error tolerance of 1% (left) and 10% (right) for the 2+1D heat equation.
We choose the time-averaged stress acting on the clamped boundary \(\Gamma_{\text{clamped}}\), denoted by \(J(u):=\frac{1}{40}\int_{0}^{40}\int_{\Omega}\sigma(u(t,x))\cdot n\ \text{d}x\ \text{d}t\), as the cost functional. For the reduced-order model, we decide that the primal and dual reduced bases have to preserve \(\varepsilon=1-10^{-11}\) of the information. Again, we resort to the relative error estimate \(\eta_{N,S_{P_{k}}^{l}}^{rel}\) and allow errors up to a tolerance of \(1\%\). The full-order model is characterized by \(n=702\) and \(q=4,800\) DoFs in space and time, respectively. This gives us a total of \(n\cdot q=561,600\) space-time degrees of freedom. Further, the temporal domain is split up into \(M=1,600\) time slabs. For the incremental ROM, we choose a total of \(K=80\) parent-slabs on which the slabs are evenly distributed, i.e. \(L=20\).
We compare the time trajectories of the goal functional restricted to each time slab for the full-order space-time solution \(u_{h}\) and the reduced-order space-time solution \(u_{N}\) in Figure 11. The results show that both trajectories are indistinguishable from each other and the oscillating behavior can be mimicked by the reduced cost functional. Furthermore, the good approximation quality can also be observed when regarding the time-averaged cost functional. We obtain \(J(u_{h})=-3.1114\) and \(J(u_{N})=-3.1115\) yielding a relative error of \(\eta_{max}=0.0035\%\) which is smaller than the desired error tolerance of \(1\%\).
Figure 11: Temporal evolution of goal functional of the 3+1D elastodynamics equation.
Figure 10: Full-order solution snapshot at t = 5.75 for the elastodynamics equation.
In Figure 12, we plot the exact temporal errors and their estimation on each slab for comparison. For illustration purposes, we indicate the error tolerance of 1% in this plot. The results show that both quantities are on average of the same order of magnitude, while the standard deviation of the true error appears to be larger. Thus, there exist spikes in the error that are not captured by the error estimation. Most of the time this has no consequence, but on one slab the error tolerance is exceeded slightly.
In order to investigate the violation of the error tolerance, we present in Table 3 simulation results for a range of error tolerances. To examine the error estimation, we show the relative error, the prediction measures and the effectivity indices. Further, the computational speedup, the total number of FOM solves and the POD basis sizes for the primal and dual problems are displayed. For definitions of these quantities, we refer to Section 5.1. We observe that while most of the mispredictions are underestimations of the error, there are only a few of them. In addition, the effectivity indices are close to the optimum of 1 and the relative errors meet the tolerance in all scenarios. However, a small decay in the effectivity indices can be recognized for the reduced-order models with larger tolerance. Additionally, when reviewing the performance measurements, we can identify differences from the previous heat problems. The resulting speedups as well as the FOM solves are nearly constant for all tolerances. Only for a tolerance of 10% do we see further improvements in speedups and basis size reduction. We also observe that the total number of FOM solves and the size of the POD bases are not monotonically decreasing w.r.t. the error tolerance like in the heat equation setting. A reason for this behavior lies in the behavior of the error itself. Using a smaller tolerance can lead to more reduced basis enrichment early on. Larger tolerances lead to a smaller initial reduced basis and hence to errors later on due to the small basis size, which is compensated by enlarging the basis at a later stage.
Figure 12: Temporal evolution of the time interval-wise relative error estimator compared to the true error of the 3+1D elastodynamics equation.
Finally, the incremental nature of our MORe DWR approach is depicted in Figure 13. We provide insight into the on-the-fly basis generation by plotting the primal and dual reduced basis size over the time domain and comparing its evolution for the tolerances of 1% and 10%. Similar to the previous scenarios, we observe a steep increase in both the primal and dual basis sizes at the beginning of the simulation. In addition, we see further changes in the reduced basis sizes in the second half of the simulation. We see again that a tighter tolerance yields a larger primal reduced basis, i.e. more refinements of the reduced basis have been performed. Another peculiarity of this numerical experiment is that for a tolerance of 1%, the reduced dual basis shrinks slightly at \(t\approx 10\) s, due to an iPOD update with a snapshot that carries a lot of information.
## 6 Conclusion and outlook
In this work, we proposed a novel incremental POD-ROM method with on-the-fly basis enrichment based on space-time DWR error estimates for linear PDEs, namely the heat equation and elastodynamics, and linear and nonlinear goal functionals. The methodology is applicable to a wide class of problems, as its efficiency has been demonstrated in Section 5. The effectivity indices for linear problems are almost exactly one, which makes the error estimates reliable in practice, and for nonlinear goal functionals we also obtained good effectivity indices. For nonlinear PDEs and goal functionals possibly
\begin{table}
\begin{tabular}{|l||l|l|l|l|l|l|} \hline Tolerance & Relative error & Speedup & FOM solves & Basis size & Prediction & Effectivity \\ \hline
0.1\% & 0.0042\% & 11.2 & 44 & 98 \(\mid\) 79 & 26 \(\mid\) 0 \(\mid\) 1 \(\mid\) 1573 & 1.0029 \\
1\% & 0.0035\% & 12.1 & 46 & 107 \(\mid\) 82 & 1 \(\mid\) 0 \(\mid\) 0 \(\mid\) 1599 & 1.0001 \\
2\% & 0.0001\% & 10.6 & 46 & 113 \(\mid\) 82 & 0 \(\mid\) 0 \(\mid\) 0 \(\mid\) 1600 & 0.9953 \\
5\% & 0.0040\% & 12.6 & 48 & 101 \(\mid\) 86 & 0 \(\mid\) 0 \(\mid\) 0 \(\mid\) 1600 & 0.9999 \\
10\% & 0.0200\% & 14.9 & 38 & 89 \(\mid\) 84 & 44 \(\mid\) 1 \(\mid\) 0 \(\mid\) 1555 & 0.9801 \\ \hline \end{tabular}
\end{table}
Table 3: Incremental reduced-order modeling summary for the 3+1D elastodynamics equation depending on the tolerance in the goal functional.
Figure 13: Temporal evolution of the reduced basis size for a relative error tolerance of 1% (left) and 10% (right) for the 3+1D elastodynamics equation.
full DWR is needed. Additionally, we achieved speedups of up to 50, while the error between the FOM and the ROM solution remained within our prescribed tolerance. Consequently, e.g. the expensive high-fidelity computations in the offline stage of the reduced basis method could be replaced by our incremental POD method. An interesting aspect for future work would be the extension of this method to dynamic, adaptive spatial meshes to further speed up the computations.
## Acknowledgements
The authors acknowledge the funding of the German Research Foundation (DFG) within the framework of the International Research Training Group on Computational Mechanics Techniques in High Dimensions GRK 2657 under Grant Number 433082294. In addition, we thank Hendrik Geisler (Leibniz University Hannover, GRK 2657) for fruitful discussions and comments. The support of the French-German University through the French-German Doctoral college "Sophisticated Numerical and Testing Approaches" (CDFA-DFDK 19-04) is also acknowledged.
## Appendix A Space-time linear system and dG(r) time-stepping formulation for elastodynamics
The space-time discretization of the elastodynamics equation on a slab with a single temporal element and a dG(\(r\)) in time discretization is discussed in this appendix. Using the fully discrete variational formulation 2.4 of elastodynamics, we arrive at the linear equation system
\[\left[C_{k}\otimes M_{h}+M_{k}\otimes K_{h}+D_{k}^{1}\otimes M_{h}\right]U_{m} =F_{m}+\left[D_{k}^{2}\otimes M_{h}\right]U_{m-1} \tag{27}\]
where we use the spatial matrices
\[M_{h} =\left\{(\varphi_{h}^{v,(j)},\varphi_{h}^{u,(i)})+(\varphi_{h}^{ u,(j)},\varphi_{h}^{v,(i)})\right\}_{i,j=1}^{\#\text{DoFs}(\mathcal{T}_{h})},\] \[K_{h} =\left\{(\sigma(\varphi_{h}^{u,(j)}),\nabla_{x}\varphi_{h}^{u,(i )})+(\varphi_{h}^{v,(j)},\varphi_{h}^{v,(i)})\right\}_{i,j=1}^{\#\text{DoFs}( \mathcal{T}_{h})}\]
and the temporal matrices
\[M_{k} =\left\{\int_{I_{m}}\varphi_{k}^{(j)}\cdot\varphi_{k}^{(i)}\ \text{d}t\right\}_{i,j=1}^{\#\text{DoFs}(I_{m})},\] \[C_{k} =\left\{\int_{I_{m}}\partial_{t}\varphi_{k}^{(j)}\cdot\varphi_{k} ^{(i)}\ \text{d}t\right\}_{i,j=1}^{\#\text{DoFs}(I_{m})},\] \[D_{k}^{1} =\begin{pmatrix}1&0&\cdots&0\\ 0&0&&\\ \vdots&&\ddots&\\ 0&&&0\end{pmatrix},\qquad D_{k}^{2}=\begin{pmatrix}0&\cdots&0&1\\ &&0&0\\ &\iddots&&\vdots\\ 0&&&0\end{pmatrix}.\]
Here, the solution vector \(U_{m}\) for the \(\mathrm{dG}(r)\) method in time with temporal quadrature points \(t_{1},\ldots,t_{r+1}\) is given by
\[U_{m}=\begin{pmatrix}U_{m}(t_{1})\\ \vdots\\ U_{m}(t_{r+1})\end{pmatrix}=\begin{pmatrix}u_{m}(t_{1})\\ v_{m}(t_{1})\\ \vdots\\ u_{m}(t_{r+1})\\ v_{m}(t_{r+1})\end{pmatrix}.\]
To derive the \(\mathrm{dG}(r)\) time-stepping formulation, we now only need to evaluate the temporal matrices \(M_{k}\) and \(C_{k}\) by integrating over \((0,k)\), where \(k:=t_{m}-t_{m-1}\) is the time step size, and by plugging in the \(\mathrm{dG}-Q^{r}\) basis functions on \((0,k)\), which coincide with the \(Q^{r}\) basis functions since we only have one single element and use Gauss-Lobatto quadrature.
### \(\mathrm{dG}(1)\) formulation of elastodynamics
By inserting the basis functions \(\varphi_{k}^{(1)}=1-\frac{t}{k},\varphi_{k}^{(2)}=\frac{t}{k}\) into the temporal matrices \(M_{k}\) and \(C_{k}\) we get
\[\int_{0}^{k}\varphi_{k}^{(1)}\cdot\varphi_{k}^{(1)}\ \mathrm{d}t =\int_{0}^{k}\left(1-\frac{t}{k}\right)^{2}\ \mathrm{d}t=\int_{0}^{k}1-\frac{2t}{k}+\frac{t^{2}}{k^{2}}\ \mathrm{d}t=\frac{k}{3}=\int_{0}^{k}\varphi_{k}^{(2)}\cdot \varphi_{k}^{(2)}\ \mathrm{d}t,\] \[\int_{0}^{k}\varphi_{k}^{(1)}\cdot\varphi_{k}^{(2)}\ \mathrm{d}t =\int_{0}^{k}\varphi_{k}^{(2)}\cdot\varphi_{k}^{(1)}\ \mathrm{d}t=\int_{0}^{k}\left(1-\frac{t}{k}\right)\cdot\frac{t}{k}\ \mathrm{d}t=\int_{0}^{k}\frac{t}{k}-\frac{t^{2}}{k^{2}}\ \mathrm{d}t=\frac{k}{6},\]
as well as
\[\int_{0}^{k}\partial_{t}\varphi_{k}^{(1)}\cdot\varphi_{k}^{(2)}\ \mathrm{d}t =\int_{0}^{k}\partial_{t}\left(1-\frac{t}{k}\right)\cdot\frac{t}{k} \ \mathrm{d}t=\int_{0}^{k}-\frac{t}{k^{2}}\ \mathrm{d}t=-\frac{1}{2}=\int_{0}^{k} \partial_{t}\varphi_{k}^{(1)}\cdot\varphi_{k}^{(1)}\ \mathrm{d}t,\] \[\int_{0}^{k}\partial_{t}\varphi_{k}^{(2)}\cdot\varphi_{k}^{(2)}\ \mathrm{d}t =\int_{0}^{k}\partial_{t}\left(\frac{t}{k}\right)\cdot\frac{t}{k} \ \mathrm{d}t=\int_{0}^{k}\frac{t}{k^{2}}\ \mathrm{d}t=\frac{1}{2}=\int_{0}^{k} \partial_{t}\varphi_{k}^{(2)}\cdot\varphi_{k}^{(1)}\ \mathrm{d}t.\]
Consequently, the \(\mathrm{dG}(1)\) time-stepping formulation for elastodynamics reads
\[\left[\frac{1}{2}\begin{pmatrix}1&1\\ -1&1\end{pmatrix}\otimes M_{h}+\frac{k}{6}\begin{pmatrix}2&1\\ 1&2\end{pmatrix}\otimes K_{h}\right]\begin{pmatrix}U_{m}(t_{m-1})\\ U_{m}(t_{m})\end{pmatrix}=\begin{pmatrix}F_{m}(t_{m-1})+U_{m-1}(t_{m-1})M_{h}\\ F_{m}(t_{m})\end{pmatrix},\]
where we use the fact that the temporal quadrature points for \(\mathrm{dG}(1)\) are \(t_{m-1}\) and \(t_{m}\).
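As an illustration of how such a slab system can be assembled, the following dense NumPy sketch forms the \(2\times 2\) block system above via Kronecker products and solves it. The function name, the dense direct solve and the calling convention are simplifying assumptions; the actual computations in this work use deal.II with sparse data structures.

```python
import numpy as np

def dg1_step(M_h, K_h, U_prev, F_rhs, k):
    """One dG(1) time step: solve for (U_m(t_{m-1}), U_m(t_m)) on a single slab."""
    n = M_h.shape[0]
    A_t = 0.5 * np.array([[1.0, 1.0], [-1.0, 1.0]])       # C_k + D_k^1
    M_t = (k / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])  # M_k
    A = np.kron(A_t, M_h) + np.kron(M_t, K_h)             # temporal (x) spatial blocks

    # Right-hand side: the jump term couples to the previous slab's end value.
    rhs = np.concatenate([F_rhs[0] + M_h @ U_prev, F_rhs[1]])
    U = np.linalg.solve(A, rhs)
    return U[:n], U[n:]
```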
### \(\mathrm{dG}(2)\) formulation of elastodynamics
Repeating the procedure from Section A.1 with quadratic basis functions, we arrive at the \(\mathrm{dG}(2)\) time-stepping formulation for elastodynamics
\[\left[\frac{1}{6}\begin{pmatrix}3&4&-1\\ -4&0&4\\ 1&-4&3\end{pmatrix}\otimes M_{h}+\frac{k}{30}\begin{pmatrix}4&2&-1\\ 2&16&2\\ -1&2&4\end{pmatrix}\otimes K_{h}\right]\begin{pmatrix}U_{m}(t_{m-1})\\ U_{m}(t_{m-\frac{1}{2}})\\ U_{m}(t_{m})\end{pmatrix}=\begin{pmatrix}F_{m}(t_{m-1})+U_{m-1}(t_{m-1})M_{h}\\ F_{m}(t_{m-\frac{1}{2}})\\ F_{m}(t_{m})\end{pmatrix},\]
where we use the fact that the temporal quadrature points for \(\mathrm{dG}(2)\) are \(t_{m-1}\), \(t_{m-\frac{1}{2}}:=t_{m-1}+\frac{k}{2}\) and \(t_{m}\).
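The temporal matrices entering both formulations can be verified symbolically. The short SymPy script below is not part of the toolchain described in Section 5 and is included here only as a check; it reproduces the temporal blocks of the \(\mathrm{dG}(1)\) and \(\mathrm{dG}(2)\) systems above.

```python
import sympy as sp

t, k = sp.symbols("t k", positive=True)

def temporal_matrices(basis):
    """M_k[i, j] = int phi_j phi_i dt and C_k[i, j] = int phi_j' phi_i dt over (0, k)."""
    n = len(basis)
    M = sp.Matrix(n, n, lambda i, j: sp.simplify(sp.integrate(basis[j] * basis[i], (t, 0, k))))
    C = sp.Matrix(n, n, lambda i, j: sp.simplify(sp.integrate(sp.diff(basis[j], t) * basis[i], (t, 0, k))))
    return M, C

s = t / k
M1, C1 = temporal_matrices([1 - s, s])                       # dG(1): nodes 0, k
M2, C2 = temporal_matrices([(1 - s) * (1 - 2 * s),
                            4 * s * (1 - s),
                            s * (2 * s - 1)])                # dG(2): nodes 0, k/2, k

print(C1 + sp.diag(1, 0))     # -> 1/2 * [[1, 1], [-1, 1]]
print(M1)                     # -> k/6 * [[2, 1], [1, 2]]
print(C2 + sp.diag(1, 0, 0))  # -> 1/6 * [[3, 4, -1], [-4, 0, 4], [1, -4, 3]]
print(M2)                     # -> k/30 * [[4, 2, -1], [2, 16, 2], [-1, 2, 4]]
```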
**Remark A.1**.: _The \(\mathrm{dG}(1)\) and \(\mathrm{dG}(2)\) formulations can also be found in Section 7.1 and Section 7.2 in [62] for an ODE model._ |
2305.09635 | Asymmetric node placement in fiber-based quantum networks | Restrictions imposed by existing infrastructure can make it hard to ensure an
even spacing between the nodes of future fiber-based quantum networks. We here
investigate the negative effects of asymmetric node placement by considering
separately the placement of midpoint stations required for heralded
entanglement generation, as well as of processing-node quantum repeaters in a
chain. For midpoint stations, we describe the effect asymmetry has on the time
required to perform one entangling attempt, the success probability of such
attempts, and the fidelity of the entangled states created. This includes
accounting for the effects of chromatic dispersion on photon
indistinguishability. For quantum-repeater chains we numerically investigate
how uneven spacing between repeater nodes leads to bottlenecks, thereby
increasing both the waiting time and the time states are stored in noisy
quantum memory. We find that while the time required to perform one entangling
attempt may increase linearly with the midpoint's asymmetry, the success
probability and fidelity of heralded entanglement generation and the
distribution time and error rate for repeater chains all have vanishing first
derivatives with respect to the amount of asymmetry. This suggests resilience
of quantum-network performance against small amounts of asymmetry. | Guus Avis, Robert Knegjens, Anders S. Sørensen, Stephanie Wehner | 2023-05-16T17:40:37Z | http://arxiv.org/abs/2305.09635v3 | # Asymmetric node placement in fiber-based quantum networks
###### Abstract
Restrictions imposed by existing infrastructure can make it hard to ensure an even spacing between the nodes of future fiber-based quantum networks. We here investigate the negative effects of asymmetric node placement by considering separately the placement of midpoint stations required for heralded entanglement generation, as well as of processing-node quantum repeaters in a chain. For midpoint stations, we describe the effect asymmetry has on the time required to perform one entangling attempt, the success probability of such attempts, and the fidelity of the entangled states created. This includes accounting for the effects of chromatic dispersion on photon indistinguishability. For quantum-repeater chains we numerically investigate how uneven spacing between repeater nodes leads to bottlenecks, thereby increasing both the waiting time and the time states are stored in noisy quantum memory. We find that while the time required to perform one entangling attempt may increase linearly with the midpoint's asymmetry, the success probability and fidelity of heralded entanglement generation and the distribution time and error rate for repeater chains all have vanishing first derivatives with respect to the amount of asymmetry. This suggests resilience of quantum-network performance against small amounts of asymmetry.
## I Introduction
The quantum internet promises the distribution of quantum entanglement between any two points on the planet [1]. Entanglement can be a valuable resource that enables a variety of applications in domains such as quantum cryptography [2; 3; 4; 5], distributed quantum computing [6; 7; 8] and quantum sensing [9; 10; 11]. A major outstanding challenge towards the construction of large-scale ground-based quantum networks is the fact that entangling rates over optical fiber decline exponentially with the length of the fiber due to attenuation losses. While classical amplification strategies are not viable, a special class of devices called quantum repeaters can be used to reach high entangling rates over long optical fibers [12; 13]. By using quantum repeaters as intermediate nodes a fiber is divided into segments over which entanglement can be distributed more efficiently than over the full fiber length. Although various proposals for quantum repeaters exist [14; 15], only proof-of-concept experiments have been realized so far [16; 17].
In order to build a quantum network, decisions need to be made on where its nodes are positioned and how they are connected. These nodes include the end nodes of the network, quantum repeaters and potentially midpoint stations as required by some entanglement-generation protocols (as depicted in Figure 1) [18; 19]. For the midpoint stations, and for quantum repeaters running at least some specific types of protocols (such as the one investigated in this paper), optimal network performance requires the nodes to be positioned _symmetrically_. That is, with an internode spacing that is the same between all neighboring nodes. To understand why symmetric placement of repeater nodes can be favourable to the performance, consider the following. For quantum repeaters in a chain, the end-to-end capacity for generating entanglement is equal to the minimal capacity over all pairs of neighboring nodes in the chain [20]. This minimal capacity is maximized by a symmetric placement of repeater nodes, and hence a symmetric placement optimizes the end-to-end capacity. We note however that there exist specific repeater protocols that do perform best, according to specific performance metrics, under asymmetric node placement [21; 22]. The suboptimal capacity of such a node placement suggests that improvements on the protocols could perhaps result in a symmetric placement being optimal again. This is demonstrated by Ref. [23], where it is shown that the advantage found in Ref. [22] vanishes when one further optimizes the protocol.
Symmetric placement of nodes in a quantum network may not always be possible. For instance, if a quantum network is built using existing infrastructure this restricts the freedom in choosing the locations of the nodes [24]. Therefore, in this paper, we address the question of how asymmetric node placement affects the performance of a quantum network. We do so by investigating two separate aspects of asymmetric quantum networks: first, we consider asymmetric placement of midpoint stations and examine how entanglement generation between two neighboring nodes is affected (Section II). We identify three independent effects, namely on the cycle time of
entanglement generation (Section II.1, see the beginning of Section II for a definition), on the success probability of entanglement generation and the fidelity of generated entanglement through the introduction of imbalanced losses (Section II.2), and on the photon indistinguishability through chromatic dispersion (Section II.3). Second, we consider asymmetric placement of quantum repeaters in a chain (Section III). Here, we focus specifically on processing-node repeaters executing a SWAP-ASAP protocol (as explained in Section III and studied in, e.g., Refs. [25; 26]). Notably, the results presented in this paper indicate robustness against small amounts of asymmetry. For asymmetry in the placement of the midpoint station, we find that both the success probability and fidelity have a vanishing first derivative with respect to how asymmetrically the midpoint is positioned (granted that the photons are shaped such that the effects of chromatic dispersion are negligible). However, the cycle time increases linearly with the asymmetry in case the time required to exchange signals between neighboring nodes is the limiting factor (it may be independent of asymmetry if this is not the case). Similarly, we find that both the entangling rate and error rate in a SWAP-ASAP repeater chain have vanishing first derivatives with respect to how asymmetrically the repeater nodes are positioned. This robustness suggests that, when designing a quantum network, nodes do not need to be placed exactly symmetrically. It furthermore suggests that the effects of constraints on node locations imposed by existing infrastructure on network performance may not be too severe.
## II Asymmetry in midpoint placement
Two popular methods for the creation of entanglement between neighboring nodes in a quantum network are single-click heralded entanglement generation [27; 18; 28] and double-click heralded entanglement generation (also known as the Barrett-Kok protocol) [19; 29; 30; 31; 32]. In both of these protocols, time is slotted. In each time slot, the nodes perform a single attempt at entanglement generation. Such an attempt consists of both nodes sending a photon entangled with a local qubit to a midpoint station, where the photons are interfered and measured. The midpoint then sends a message to the end nodes containing the measurement outcome. Depending on the measurement outcome, the attempt is declared either a success or a failure. The probability that it is declared a success is called the success probability and denoted by \(P_{\text{succ}}\). The duration of each time slot (i.e., the time required to perform one attempt) is called the cycle time and denoted by \(T_{\text{cycle}}\). The (average) rate at which successes occur is then given by
\[R=\frac{P_{\text{succ}}}{T_{\text{cycle}}}. \tag{1}\]
After a successful attempt a state \(\rho\) is shared by the two neighboring nodes. Ideally, the state \(\rho\) is some pure maximally-entangled target state \(|\phi\rangle\!\langle\phi|\). However, due to noise, \(\rho\) will instead be a mixed state with fidelity
\[F=\left\langle\phi|\rho|\phi\right\rangle. \tag{2}\]
We will use the success probability \(P_{\text{succ}}\), the cycle time \(T_{\text{cycle}}\) and the fidelity \(F\) as performance metrics for heralded entanglement generation.
In this section we study the effect of displacing the midpoint station from the exact center between the nodes (as illustrated in Figure 1) on our performance metrics. We do so by separately examining the effect on the cycle time, the effect that the resulting imbalanced losses have on the success probability and the fidelity, and the effect on the photon indistinguishability (which in turn affects primarily the fidelity but also the success probability). In order to do so we first need a method for quantifying how far the midpoint has been displaced. To that end, let the fiber distance between the midpoint station and the left-hand (right-hand) node be denoted \(L_{\text{left}}\) (\(L_{\text{right}}\)). Then we define
\[\begin{split}\Delta L&=L_{\text{left}}-L_{\text{ right}},\\ L_{\text{tot}}&=L_{\text{left}}+L_{\text{right}}. \end{split} \tag{3}\]
The parameter \(\Delta L\) is then a measure of the amount of asymmetry, as shown in Figure 1. As we will show below, the effects of asymmetric midpoint placement on the cycle time, success probability and fidelity are all quantified by \(|\Delta L|\).
### Cycle time
First we consider the effect of asymmetric midpoint placement on the cycle time of the entanglement-generation protocol between neighboring nodes. During each cycle both nodes need to emit entangled photons that reach the midpoint station simultaneously. Then
Figure 1: Symmetric and asymmetric positioning of a midpoint station for heralded entanglement generation. The magnitude of the parameter \(\Delta L\) is a measure for how large the asymmetry is. \(\Delta L\) and \(L_{\text{tot}}\) are defined in Equation (3).
the midpoint station sends a message with the measurement result back to each of the nodes. Assuming both the entangled photons and the messages travel with the same velocity \(c\), the cycle time at least includes the communication time between the midpoint station and the node that is furthest away. That is, \(T_{\text{cycle}}\geq\frac{2}{c}\max(L_{\text{left}},L_{\text{right}})\). This can be rewritten as
\[T_{\text{cycle}}\geq\frac{1}{c}(L_{\text{tot}}+|\Delta L|). \tag{4}\]
When the cycle time is limited only by the speed-of-light communication delay the cycle time will be exactly equal to the right-hand side of the equation. However, we note that in practice the cycle time is often much longer (see, e.g., Refs [28, 32]), for example due to local operations or the limited rate at which entangled photons can be emitted. In that regime, the cycle time may be independent of \(\Delta L\) until the asymmetry becomes so large that the communication delays are again the limiting factor.
### Imbalanced losses
As attenuation loss in fiber scales exponentially with the length of the fiber, having a midpoint station that is off center will result in an imbalance between the losses encountered by the photons. To be more precise, let \(P_{0}\) be the probability that when a node attempts photon emission, this photon is emitted successfully, couples successfully to fiber, and is then successfully detected at a detector at the end of the fiber, given that the fiber has length zero. Then, the probability that photon emission at the left node leads to photon detection at the midpoint station is given by
\[P_{\text{left}}=P_{0}e^{-\frac{L_{\text{left}}}{L_{\text{att}}}}, \tag{5}\]
where \(L_{\text{att}}\) is the attenuation coefficient of the fiber. The same equation holds for \(P_{\text{right}}\), but with \(L_{\text{left}}\) replaced by \(L_{\text{right}}\). In an asymmetric setup we will have \(P_{\text{left}}\neq P_{\text{right}}\), which is what we mean by imbalanced losses. This can affect both the success probability \(P_{\text{succ}}\) and the fidelity \(F\) of heralded entanglement generation.
For both the single- and double-click protocol, expressions for \(P_{\text{succ}}\) and \(F\) in terms of, among other parameters, \(P_{\text{left}}\) and \(P_{\text{right}}\) can be found in Ref. [33]. In order to make the effect of imbalanced losses explicit in these expressions we introduce the parameters
\[\begin{split} P_{\text{tot}}&\equiv P_{\text{left }}P_{\text{right}}=P_{0}^{2}e^{-\frac{L_{\text{tot}}}{L_{\text{att}}}},\\ P_{\text{sum}}&\equiv P_{\text{left}}+P_{\text{ right}}=2\sqrt{P_{\text{tot}}}\cosh\left(\frac{|\Delta L|}{2L_{\text{att}}} \right).\end{split} \tag{6}\]
Nontrivially, we find that for both protocols (to leading order, as discussed below) we can eliminate \(P_{\text{left}}\) and \(P_{\text{right}}\) completely from the expressions for \(P_{\text{succ}}\) and \(F\) in favour of \(P_{\text{tot}}\) and \(P_{\text{sum}}\). The effect of imbalanced losses is then captured entirely by the dependence of \(P_{\text{sum}}\) on \(\Delta L\). We discuss the resulting expressions and their implications for the single- and double-click protocol separately below.
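As a simple numerical illustration of Equations (4)-(6), the sketch below computes the cycle-time lower bound and the loss parameters for a displaced midpoint. The function name and the default values for \(P_{0}\), \(L_{\text{att}}\) and \(c\) are illustrative assumptions and not taken from the text.

```python
import numpy as np

def midpoint_asymmetry(L_tot, dL, P0=1.0, L_att=22.0, c=2.0e5):
    """Cycle-time lower bound (4) and loss parameters (5)-(6); lengths in km, c in km/s."""
    L_left = 0.5 * (L_tot + dL)
    L_right = 0.5 * (L_tot - dL)
    T_cycle_min = (L_tot + abs(dL)) / c
    P_left = P0 * np.exp(-L_left / L_att)
    P_right = P0 * np.exp(-L_right / L_att)
    P_tot = P_left * P_right    # equals P0**2 * exp(-L_tot / L_att), independent of dL
    P_sum = P_left + P_right    # equals 2 * sqrt(P_tot) * cosh(abs(dL) / (2 * L_att))
    return T_cycle_min, P_tot, P_sum
```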
In the double-click protocol, both nodes emit a photon. The mode that the photon is emitted in (e.g., its polarization) is entangled with the state of the emitter, and entanglement between the emitters is heralded when both photons are detected in different modes at the midpoint station after interfering on a beam splitter. By eliminating \(P_{\text{left}}\) and \(P_{\text{right}}\) as described above we find (see Appendix A)
\[\begin{split} P_{\text{succ, 2click}}=& d_{1}+2p_{\text{ dc}}P_{\text{sum}}+\mathcal{O}(p_{\text{dc}}^{2}),\\ F_{\text{2click}}=& d_{2}-d_{3}p_{\text{dc}}P_{ \text{sum}}+\mathcal{O}(p_{\text{dc}}^{2}).\end{split} \tag{7}\]
The parameters \(d_{1}\), \(d_{2}\), and \(d_{3}\) have no direct dependence on \(\Delta L\) and are given by
\[\begin{split} d_{1}=&\frac{1}{2}P_{\text{tot}}-p_{ \text{dc}}\left(4+r-\frac{1}{2}(2-r)(1+V)\right)P_{\text{tot}},\\ d_{2}=&\left(\frac{1}{2}q_{\text{em}}(1+V)+\frac {1}{4}(1-q_{\text{em}})\right)(1+8p_{\text{dc}})\\ &-\frac{1}{2}(2-r)q_{\text{em}}p_{\text{dc}}(1+V)^{2},\\ d_{3}=&\frac{q_{\text{em}}}{P_{\text{tot}}}(2V+1). \end{split} \tag{8}\]
Here, \(p_{\text{dc}}\) denotes the detector dark-count probability. The notation \(\mathcal{O}(x^{n})\) represents any terms that are of order \(n\) in the parameter \(x\). As \(p_{\text{dc}}\) is typically small, we have only included leading-order terms in Equations (7) (the full expressions can be found in Appendix A). \(V\) denotes the indistinguishability of the photons, which can itself depend on the asymmetry through the effect of chromatic dispersion as discussed in Section II.3. It is assumed that the state shared between a node's emitter and the photon it emits is given by \(\frac{1}{3}(4F-1)\ket{\psi}\!\bra{\psi}+\frac{1}{3}(1-F)\mathds{1}\), which has fidelity \(F\) to the state \(\ket{\psi}=\frac{1}{\sqrt{2}}(\ket{00}+\ket{11})\) and where \(F=F_{\text{em, left}}\) (\(F=F_{\text{em, right}}\)) for the left (right) node. We then have
\[q_{\text{em}}=\frac{1}{9}(4F_{\text{em, left}}-1)(4F_{\text{em, right}}-1). \tag{9}\]
Finally,
\[r=\begin{cases}1&\text{for non-photon-number-resolving detectors},\\ 2&\text{for photon-number-resolving detectors}.\end{cases} \tag{10}\]
We see that when the dark-count probability is zero, the double-click protocol is not affected by imbalanced losses at all. This is explained by the fact that the probability of both photons surviving their respective fiber segments is equal to the probability of a single photon surviving the full fiber length \(L_{\text{tot}}\), which is not affected by asymmetry. The reason why the protocol
is affected in the presence of dark counts is that as the photon arrival probability on the longer leg becomes of the same order as the dark-count probability, the probability of falsely heralding successful entanglement becomes large. This results both in an increased rate and a reduced fidelity.
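To make Equations (7)–(10) concrete, the sketch below evaluates the leading-order success probability and fidelity of the double-click protocol. It is a direct transcription of the expressions above; the parameter values in the example call are placeholders chosen only for illustration.

```python
import numpy as np

def double_click_leading_order(P_tot, P_sum, p_dc, V, F_em_left, F_em_right,
                               number_resolving=False):
    """Leading-order P_succ and F of the double-click protocol (Eqs. (7)-(10)).

    Terms of order p_dc**2 and higher are neglected, as in the main text.
    """
    r = 2 if number_resolving else 1
    q_em = (4 * F_em_left - 1) * (4 * F_em_right - 1) / 9            # Eq. (9)
    d1 = 0.5 * P_tot - p_dc * (4 + r - 0.5 * (2 - r) * (1 + V)) * P_tot
    d2 = ((0.5 * q_em * (1 + V) + 0.25 * (1 - q_em)) * (1 + 8 * p_dc)
          - 0.5 * (2 - r) * q_em * p_dc * (1 + V) ** 2)
    d3 = q_em / P_tot * (2 * V + 1)
    P_succ = d1 + 2 * p_dc * P_sum                                   # Eq. (7)
    F = d2 - d3 * p_dc * P_sum
    return P_succ, F

# Placeholder example: symmetric 100 km link, ideal emitters and photons.
L_att = 22
P_left = P_right = np.exp(-50 / L_att)
print(double_click_leading_order(P_tot=P_left * P_right,
                                 P_sum=P_left + P_right,
                                 p_dc=3e-4, V=1.0,
                                 F_em_left=1.0, F_em_right=1.0))
```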
In the single-click protocol, both nodes also perform photon emission and send those photons to the midpoint station. However, before emission starts, the emitter is prepared in an unbalanced superposition of a bright state from which photons can be emitted and a dark state from which emission is impossible. The amplitude of the bright state is parameterized by the bright-state parameter \(\alpha\). As a result, after emission, the state shared by the emitter and the photon takes the form
\[\sqrt{1-\alpha}\ket{\mathrm{dark}}\ket{0}+\sqrt{\alpha}\ket{\mathrm{bright}} \ket{1}, \tag{11}\]
where \(\ket{0}\) (\(\ket{1}\)) indicates the absence (presence) of the photon. An attempt is then considered a success in case only one photon is detected at the midpoint station (as opposed to two for the double-click protocol), creating an entangled state that is a superposition of the left-node emitter being in the bright state but the right-node emitter in the dark state and vice versa. However, in case both emitters are in the bright state but one of the emitted photons is lost, a success is heralded without the creation of an entangled state. Therefore, even when the only imperfection in the system is fiber attenuation, the created entangled state is never pure. The fidelity of the created entangled state will depend on the choice of \(\alpha\); when \(\alpha\) is small, the relative probability that both nodes are found in the bright state is suppressed resulting in a good fidelity. However, using a small \(\alpha\) also results in a small success probability. In case the midpoint is placed symmetrically and there are no imperfections but losses, for \(\alpha\ll 1\), the success probability and fidelity can be approximated as \(P_{\mathrm{succ}}\approx 2\alpha\sqrt{P_{\mathrm{tot}}}\) and \(F\approx 1-\alpha\). See, e.g., Ref. [34] for a further discussion of this effect. Thus, choosing the value of \(\alpha\) is a matter of trading off success probability and fidelity. In an asymmetric setup it has been found that in case one wants to optimize the fidelity, the equation \(\alpha_{\mathrm{left}}P_{\mathrm{left}}\approx\alpha_{\mathrm{right}}P_{ \mathrm{right}}\) should be satisfied [28]. Therefore, we here assume the bright-state parameters are always chosen such that
\[\alpha_{\mathrm{left}}P_{\mathrm{left}}=\alpha_{\mathrm{right}}P_{\mathrm{ right}}\equiv q, \tag{12}\]
where \(q\) parameterizes the remaining degree of freedom. As the bright-state parameter needs to be small in order to get a good fidelity, we will here present a result that is not only leading order in the dark-count probability but also in the parameter \(q\). Eliminating \(\alpha_{\mathrm{left}}\) and \(\alpha_{\mathrm{right}}\) in favour of \(q\) and \(P_{\mathrm{left}}\) and \(P_{\mathrm{right}}\) in favour of \(P_{\mathrm{tot}}\) and \(P_{\mathrm{sum}}\) we find (see Appendix A)
\[\begin{split} P_{\mathrm{succ,\,\,1click}}=& 2q+2p_{\mathrm{dc}}+\mathcal{O}(q^{2},p_{ \mathrm{dc}}^{2},qp_{\mathrm{dc}}),\\ F_{\mathrm{1click}}=& s_{1}-s_{2}P_{\mathrm{sum}}+ \mathcal{O}(q^{2},p_{\mathrm{dc}}^{2},qp_{\mathrm{dc}}).\end{split} \tag{13}\]
Here, the parameters \(s_{1}\) and \(s_{2}\) are defined by
\[\begin{split} s_{1}=&\frac{1}{2}(1+\sqrt{V})\frac{q }{q+p_{\mathrm{dc}}}\Bigg{(}1+q-(1+r)p_{\mathrm{dc}}\\ &+\frac{q}{q+p_{\mathrm{dc}}}\left[rp_{\mathrm{dc}}-\frac{1}{4}(2- r)(1+V)q\right]\Bigg{)},\\ s_{2}=&\frac{1}{2}(1+\sqrt{V})\frac{q}{q+p_{\mathrm{ dc}}}\frac{1}{P_{\mathrm{tot}}}\left(\frac{1}{2}q-p_{\mathrm{dc}} \right).\end{split} \tag{14}\]
Note that the success probability of the single-click scheme is not affected when \(\Delta L\) is increased, as long as the bright-state parameters are chosen to keep \(q\) constant. This behaviour is not a consequence of the leading-order expansion. It is shown in Appendix A that the exact expression for the success probability has no direct dependence on the asymmetry either.
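Analogously, the sketch below evaluates the leading-order single-click expressions of Equations (13) and (14). The parameter values in the example call are placeholders, loosely matching the regime of Figure 2.

```python
import numpy as np

def single_click_leading_order(P_tot, P_sum, q, p_dc, V,
                               number_resolving=False):
    """Leading-order P_succ and F of the single-click protocol (Eqs. (13)-(14)).

    Assumes the bright-state parameters satisfy
    alpha_left * P_left = alpha_right * P_right = q (Eq. (12)), and neglects
    terms of order q**2, p_dc**2 and q * p_dc.
    """
    r = 2 if number_resolving else 1
    pref = 0.5 * (1 + np.sqrt(V)) * q / (q + p_dc)
    s1 = pref * (1 + q - (1 + r) * p_dc
                 + q / (q + p_dc) * (r * p_dc - 0.25 * (2 - r) * (1 + V) * q))
    s2 = pref / P_tot * (0.5 * q - p_dc)
    P_succ = 2 * q + 2 * p_dc
    F = s1 - s2 * P_sum
    return P_succ, F

# Placeholder example: 100 km total fiber with 30 km asymmetry.
L_att, L_tot, dL = 22, 100, 30
P_left = np.exp(-(L_tot + dL) / 2 / L_att)
P_right = np.exp(-(L_tot - dL) / 2 / L_att)
print(single_click_leading_order(P_tot=P_left * P_right, P_sum=P_left + P_right,
                                 q=4e-3, p_dc=3e-4, V=1.0))
```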
The success probability and fidelity as a function of the asymmetry are shown in Figure 2 for both protocols. We see that in both cases imbalanced losses do not reduce the success probability. Additionally the fidelity falls in a similar way for both cases, with a vanishing first derivative at \(\Delta L=0\). The reason for this is that the hyperbolic cosine to which \(P_{\mathrm{sum}}\) is proportional (see Equations (6)) has a vanishing first derivative at zero. As a result, the success probability and fidelity are resilient against small amounts of asymmetry. For instance, for the parameters considered in Figure 2, we see that the fidelity is still above 99% of the value it attains for symmetric midpoint placement at \(\Delta L=30\) km.
### Photon indistinguishability
Light waves traveling through optical fiber are subject to chromatic dispersion, meaning that different frequency components travel at different velocities. As a result, when performing heralded entanglement generation, the photons that arrive at the midpoint station are shaped differently than the photons that are emitted by the nodes. A key requirement for the creation of an entangled state through the interference and measurement of the photons is that the photons are indistinguishable, i.e., their wave packets need to be identical and arrive at the midpoint simultaneously. Although chromatic dispersion always results in photon deformation, the indistinguishability will not be affected if both photons are subjected to the same amount of dispersion. This is the consequence of a phenomenon known as dispersion cancellation [35; 36]. The situation is different in case the midpoint station is placed asymmetrically. If the photons travel through fibers of different lengths they will undergo different amounts of dispersion and hence be deformed differently.
A wave packet \(\phi\) in a one-dimensional medium emitted
at time \(t=t_{0}\) and location \(x=0\) takes the form
\[\phi(x,t)=\int d\omega\phi(\omega)e^{i\omega(t-t_{0})-i\beta(\omega)x}. \tag{15}\]
Here, \(\beta(\omega)\) is the wave number corresponding to a monochromatic wave with angular frequency \(\omega\), which is determined by the medium the wave travels in. Now, let \(\phi_{l}\) (\(\phi_{r}\)) be the wave packet of the photon emitted by the left (right) node. The indistinguishability \(V\) between these photons at the midpoint station (i.e., at \(x=L_{\text{left}}\) for \(\phi_{l}\) and at \(x=L_{\text{right}}\) for \(\phi_{r}\)) is then defined by
\[V=|\mu|^{2}, \tag{16}\]
where \(\mu\) is given by
\[\begin{split}\mu&=\int dt\phi_{l}(L_{\text{left}},t )\phi_{r}^{*}(L_{\text{right}},t)\\ &=\int d\omega\phi_{l}(\omega)\phi_{r}^{*}(\omega)e^{i\beta( \omega)\Delta L+i\omega\Delta t}.\end{split} \tag{17}\]
Here, we have \(\Delta t=t_{l}-t_{r}\), where \(t_{l}\) (\(t_{r}\)) is the time of emission of the photon at the left (right) node. As discussed in Section II.2, the indistinguishability \(V\) affects both the success probability and the fidelity of the single- and double-click protocols.
We assume that the wave packets have a central frequency that is close to some frequency \(\omega_{0}\). It is then useful to Taylor expand the wave number of the fiber as [37]
\[\beta(\omega)\approx\beta_{0}+\beta_{1}(\omega-\omega_{0})+\frac{1}{2}\beta_ {2}(\omega-\omega_{0})^{2}+\frac{1}{6}\beta_{3}(\omega-\omega_{0})^{3}. \tag{18}\]
Here, \(\beta_{0}=1/v_{p}\) and \(\beta_{1}=1/v_{g}\), where \(v_{p}\) and \(v_{g}\) are the phase and group velocity in the fiber respectively. \(\beta_{2}\) is the Group-Velocity Dispersion (GVD) parameter and \(\beta_{3}\) the Third-Order Dispersion (TOD) parameter. As the \(\beta_{0}\) contribution will only alter the global phase of \(\mu\), it does not affect the indistinguishability and can effectively be dropped from the expression. Furthermore, we assume \(\Delta t=-\beta_{1}\Delta L+\delta t\), where \(\delta t\) is the alignment mismatch (for \(\delta t=0\), both emissions are timed such that the photons arrive at the midpoint station exactly at the same time). Then, using \(\Delta\omega\equiv\omega-\omega_{0}\), we can effectively write
\[\begin{split}\mu=&\int\phi_{l}(\omega_{0}+\Delta\omega)\phi_{r}^{*}(\omega_{0}+\Delta\omega)\\ &\times e^{i\Delta L(\frac{1}{2}\beta_{2}\Delta\omega^{2}+\frac{1}{6}\beta_{3}\Delta\omega^{3})+i\delta t\Delta\omega}d\Delta\omega.\end{split} \tag{19}\]
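For arbitrary spectral amplitudes, Equation (19) can be evaluated numerically. The sketch below does this by quadrature over the real and imaginary parts of the integrand; the Gaussian spectra and the dispersion parameters used in the example are placeholders (indicative values for standard single-mode fiber), not a reproduction of the numerics used later in this paper.

```python
import numpy as np
from scipy.integrate import quad

def indistinguishability(phi_l, phi_r, delta_L, beta2, beta3, delta_t, cutoff):
    """Numerically evaluate V = |mu|^2 from Eq. (19).

    phi_l, phi_r: spectral amplitudes as functions of Delta_omega = omega - omega_0.
    cutoff: integration range [-cutoff, cutoff] for Delta_omega (rad/s).
    """
    def integrand(dw):
        phase = delta_L * (0.5 * beta2 * dw**2 + beta3 * dw**3 / 6) + delta_t * dw
        return phi_l(dw) * np.conj(phi_r(dw)) * np.exp(1j * phase)

    re, _ = quad(lambda dw: integrand(dw).real, -cutoff, cutoff, limit=200)
    im, _ = quad(lambda dw: integrand(dw).imag, -cutoff, cutoff, limit=200)
    return abs(re + 1j * im) ** 2

# Placeholder example: identical Gaussian spectra with sigma = 2*pi * 1 GHz,
# Delta_L = 40 km and typical single-mode-fiber dispersion at 1550 nm.
sigma = 2 * np.pi * 1e9                       # rad/s
gauss = lambda dw: (2 * np.pi * sigma**2) ** (-0.25) * np.exp(-dw**2 / (4 * sigma**2))
print(indistinguishability(gauss, gauss, delta_L=40e3,
                           beta2=-21.7e-27, beta3=0.127e-39,
                           delta_t=0.0, cutoff=10 * sigma))
```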
The value of \(V\) and how much it is degraded by chromatic dispersion depends on the exact shapes of the photons, i.e., on \(\phi_{l}\) and \(\phi_{r}\). In general, we expect the photons to be affected by chromatic dispersion less if their spread in frequency is small, as frequency components that are far apart also travel at velocities that are far apart. Below, we derive expressions for \(V\) in case of two specific wave-packet shapes, namely Gaussian and Lorentzian. (Attenuated) laser pulses are often approximated as Gaussian, and approximate Gaussian photons can, e.g., be produced using cavity quantum electrodynamics [38] or spontaneous four-wave mixing [39]. We here take Gaussian wave packets as a generic example of a pulse which is well localised in time and frequency, allowing us to obtain analytical results. On the other hand, Lorentzian photons are created through the radiative decay of a two-level system. In practice, photons will rarely be exactly Gaussian or Lorentzian as they interact with other components in the system such as filters and cavities. Yet, we can think of the two types of photons as two extremes in how spread out their frequency distributions
Figure 2: Leading-order results (presented in Equations (7) and (13)) for the probability that an entanglement-generation attempt is heralded as a success and the fidelity of entangled states created upon a heralded success of both the single-click and double-click protocol as a function of the difference in length between the two fibers connecting the midpoint station (\(\Delta L\), defined in Equation (3)). This figure has been created using the parameters \(L_{\text{tot}}=100\) km, \(p_{\text{dc}}=3\cdot 10^{-4}\), \(q=4\cdot 10^{-3}\) and \(L_{\text{att}}\approx 22\) km. Apart from attenuation losses and dark counts no imperfections have been included.
are, and therefore how sensitive they are to chromatic dispersion. It was noted in Ref. [40] that a Gaussian wave packet, for a fixed value of the time-distribution standard deviation, has a frequency-distribution standard deviation that is as small as allowed by the Heisenberg uncertainty principle. From this the authors concluded that Gaussian photons offer the best protection against alignment mismatch \(\delta t\). Here, it leads us to expect that Gaussian photons are also well protected against chromatic dispersion. Lorentzian photons, on the other hand, have frequency distributions with very long tails, with \(|\phi_{l/r}(\omega)|^{2}\) only going to zero as \(\frac{1}{\omega^{2}}\). They are therefore expected to be much more susceptible to the effects of chromatic dispersion.
#### ii.2.1 Gaussian photons
The wave packets of two Gaussian photons with frequency mismatch \(\delta\omega\) can be written as
\[\phi_{l/r}(\omega)=\frac{1}{\sqrt[4]{2\pi\sigma^{2}}}e^{-\frac{1}{4\sigma^{2} }(\omega-\omega_{0}\pm\frac{1}{2}\delta\omega)^{2}}. \tag{20}\]
The probability distributions \(|\phi_{l/r}(\omega)|^{2}\) are Gaussian with standard deviation \(\sigma\). When there is no TOD, the indistinguishability can be calculated exactly, giving
\[V|_{\beta_{3}=0}=\frac{\exp\left(-2\left(\frac{\delta\omega}{\sigma}\right)^{ 2}-\frac{(\delta t\sigma)^{2}}{1+\Delta L^{2}\beta_{2}^{2}\sigma^{4}}\right)} {\sqrt{1+\Delta L^{2}\beta_{2}^{2}\sigma^{4}}}. \tag{21}\]
We derive this result in Appendix B. A similar expression has been derived under the more restrictive assumption \(\delta t=\delta\omega=\beta_{3}=0\) in Ref. [41], with which ours is consistent. In case the photon indistinguishability is close to one, \(1-V|_{\beta_{3}=0}\ll 1\), it is well-approximated by the leading-order expansion
\[V|_{\beta_{3}=0}\approx 1-2\left(\frac{\delta\omega}{\sigma}\right)^{2}-( \delta t\sigma)^{2}-\frac{1}{2}\Delta L^{2}\beta_{2}^{2}\sigma^{4} \tag{22}\]
Finding an exact solution to Equation (19) when the TOD is nonzero is difficult, but a leading-order result can be readily found to yield
\[\begin{split} V=& V|_{\beta_{3}=0}\left(1-\Delta L \beta_{3}\delta t\sigma^{4}\right)\\ &+\mathcal{O}\left(\Delta L^{2}\beta_{3}^{2}\sigma^{6},\Delta L ^{3}\beta_{2}^{2}\beta_{3}\delta t\sigma^{8},\Delta L\beta_{3}\delta t^{3} \sigma^{6}\right).\end{split} \tag{23}\]
This result as well is derived in Appendix B. Note that, to first order, the TOD does not affect the indistinguishability in case \(\delta t=0\). If the alignment mismatch is itself small, \(|\delta t\sigma|\ll 1\), we can expect the TOD to have only a very small effect on the indistinguishability.
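The closed-form Gaussian results are straightforward to evaluate; the following minimal sketch implements Equations (21) and (23). The parameter values in the example call are placeholders only.

```python
import numpy as np

def V_gaussian(delta_L, beta2, sigma, delta_t=0.0, delta_omega=0.0, beta3=0.0):
    """Indistinguishability of two Gaussian photons (Eqs. (21) and (23)).

    The beta3 (TOD) contribution is included only to leading order.
    """
    disp = 1 + (delta_L * beta2 * sigma**2) ** 2
    V0 = np.exp(-2 * (delta_omega / sigma) ** 2
                - (delta_t * sigma) ** 2 / disp) / np.sqrt(disp)   # Eq. (21)
    return V0 * (1 - delta_L * beta3 * delta_t * sigma**4)         # Eq. (23)

# Placeholder example: photons with a 100 ps temporal standard deviation
# (sigma = 1 / (sqrt(2) * 100 ps)) and 40 km of asymmetry.
sigma = 1 / (np.sqrt(2) * 100e-12)
print(V_gaussian(delta_L=40e3, beta2=-21.7e-27, sigma=sigma))
```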
#### ii.2.2 Lorentzian photons
For two Lorentzian wave packets with frequency mismatch \(\delta\omega\) we can write
\[\phi_{l/r}(\omega)=\sqrt{\frac{2\tau}{\pi}}\frac{1}{1-2i\tau(\omega-\omega_{0 }\pm\frac{1}{2}\delta\omega)}. \tag{24}\]
While the corresponding frequency distributions are Lorentzian functions with \(\frac{1}{\tau}\) as full width at half maximum, the time distributions of these photons are one-sided exponentials with standard deviation \(\tau\). We are not aware of an analytical method for determining the indistinguishability for Lorentzian photons in full generality. One method to evaluate the indistinguishability is numerical integration as done in Refs. [42; 43]. Instead we make the simplifying assumptions that the photons arrive at the same time (\(\delta t=0\)), they have the same central frequency (\(\delta\omega=0\)), and there is no TOD (\(\beta_{3}=0\)). The indistinguishability then becomes exactly solvable, giving (see Appendix B for a derivation)
\[V|_{\delta t=\delta\omega=\beta_{3}=0}=1-\frac{2\sqrt{2}}{\sqrt{\pi}}(C+S)+ \frac{4}{\pi}(C^{2}+S^{2}). \tag{25}\]
Here, \(C\) and \(S\) are Fresnel integrals defined by \(S=\int_{0}^{x}\sin(t^{2})dt\) and \(C=\int_{0}^{x}\cos(t^{2})dt\) with \(x=\sqrt{\frac{1}{2}|\Delta L\beta_{2}|\tau^{-2}}\). To linear order, \(C=x\) and \(S=0\), and therefore when the effect of dispersion is small we can use the approximation
\[V|_{\delta t=\delta\omega=\beta_{3}=0}=1-\frac{2}{\sqrt{\pi}}\frac{\sqrt{| \Delta L\beta_{2}|}}{\tau}+\mathcal{O}(|\Delta L\beta_{2}|\tau^{-2}). \tag{26}\]
We stress that the assumption \(\delta t=\delta\omega=\beta_{3}=0\) is not generally expected to hold in a real experiment; it is introduced solely to make the problem analytically more tractable. However, by comparing to results obtained through numerical integration we find that the assumption \(\beta_{3}=0\) does not greatly affect the result in conditions typical of single-mode fiber (see the discussion below and Figure 4). Therefore, while the above equations may not be able to capture the effects of \(\delta t\) and \(\delta\omega\), they do accurately capture the effects of asymmetry in the placement of the midpoint station, which is the focus of this section. Furthermore, we note that it may sometimes already be desirable to use frequency conversion to convert photons to frequencies that incur relatively lower attenuation losses in fiber. This opens up the possibility of correcting any frequency mismatch and bringing \(\delta\omega\) close to zero [44].
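A corresponding sketch for the Lorentzian case, evaluating Equation (25), is given below. The Fresnel-type integrals are computed by direct quadrature rather than via `scipy.special.fresnel` to avoid normalization-convention mismatches; the example parameters are placeholders.

```python
import numpy as np
from scipy.integrate import quad

def V_lorentzian(delta_L, beta2, tau):
    """Indistinguishability of two Lorentzian photons (Eq. (25)).

    Assumes delta_t = delta_omega = beta3 = 0, as in the derivation above.
    """
    x = np.sqrt(0.5 * abs(delta_L * beta2)) / tau
    # Fresnel-type integrals with the convention used in the text:
    # C = int_0^x cos(t^2) dt, S = int_0^x sin(t^2) dt.
    C, _ = quad(lambda t: np.cos(t**2), 0, x)
    S, _ = quad(lambda t: np.sin(t**2), 0, x)
    return 1 - 2 * np.sqrt(2 / np.pi) * (C + S) + 4 / np.pi * (C**2 + S**2)

# Placeholder example: 1 ns Lorentzian photon, 40 km asymmetry and a GVD
# typical for single-mode fiber at 1550 nm; the drop in V is of order 1e-2.
print(V_lorentzian(delta_L=40e3, beta2=-21.7e-27, tau=1e-9))
```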
#### ii.2.3 Requirements for indistinguishable photons
The results above describe how the indistinguishability \(V\) is diminished through the effect of chromatic dispersion in the case of asymmetric midpoint placement. From these results, it becomes clear that how
badly \(V\) is reduced depends on the characteristics of the photon. In particular, for Gaussian photons it depends on the parameter \(\sigma\), while for Lorentzian photons it depends on the parameter \(\tau\). As expected, for both photons the effect of dispersion is increased as the width of the frequency distribution is increased, or equivalently, as the length of the time distribution is decreased. In Figure 3 we investigate how much indistinguishability is lost as a function of the length of the photon wave packet, assuming the photons are otherwise perfectly indistinguishable.
We here make the simplifying assumption that there is no TOD, thereby enabling the use of the exact analytical results obtained above. This simplification is motivated by the fact that comparing our analytical results in case the TOD is zero with results obtained through numerical integration for a typical value of the TOD in single-mode optical fiber suggests that the TOD has only a negligible effect in this case, as shown in Figure 4.
Unsurprisingly we see in Figure 3 that Lorentzian photons with their long tails in frequency are affected (much) worse by chromatic dispersion than Gaussian photons with the same length. However, even for Lorentzian photons we see that (for standard single-mode fiber and \(\Delta L=40\) km) the decrease in \(V\) is only of the order \(10^{-2}\) when the length of the wave packet is of the order of nanoseconds.
In case the photon length and \(\Delta L\) are such that the decrease in photon indistinguishability can be significant, it is clear that it is better if the photons are closer in shape to a Gaussian than a Lorentzian. This strengthens the case for Gaussian photons made in Ref. [40], where it was found that Gaussian photons protect favourably against alignment mismatch. However, some sources naturally emit photons that are more Lorentzian than Gaussian. Potentially, photon-shaping techniques could be used to convert such photons to a more Gaussian waveform [54, 55, 56, 57, 58]. A simpler solution could be to send Lorentzian photons through a filter to remove the long tails of their frequency distribution. While this would introduce extra losses, the spread in frequency could be greatly reduced, resulting in a much more Gaussian photon.
Lastly we point out that there are various methods for reducing the drop in indistinguishability in case of asymmetric midpoint placement irrespective of photon shape. The telecom C-band (1530 nm - 1565 nm) is the
Figure 4: A comparison between our analytical results (the lines) for the photon indistinguishability \(V\) assuming the TOD is zero (Equations (21) and (26)) and results obtained through numerical integration assuming a nonzero TOD (the markers). The temporal photon length, as measured by the standard deviation of \(|\phi(x=0,t)|^{2}\), is used as the x axis. The results assume \(\Delta L=40\) km, a GVD of \(\beta_{2}\approx-21.7\) ps\({}^{2}\)/km and a TOD parameter of \(\beta_{3}\approx 0.127\) ps\({}^{3}\)/km (corresponding to a dispersion coefficient of 17 ps/(nm km) and a dispersion slope of 0.056 ps/(nm\({}^{2}\) km), which are typical values for single-mode optical fiber at 1550 nm [45]). Other sources of noise are not included, that is, \(\delta t=\delta\omega=0\). Error bars for the numerical results are smaller than the marker size.
Figure 3: Loss in the indistinguishability \(V\) as a function of the temporal photon length, as measured by the standard deviation of the time distribution \(|\phi(x=0,t)|^{2}\). We include results for both Gaussian and Lorentzian photons (Equations (23) and (26)), for which the standard deviations are given by \(\frac{1}{\sqrt{2}\sigma}\) and \(\tau\) respectively. The results assume \(\Delta L=40\) km and a GVD of \(\beta_{2}\approx-21.7\)ps\({}^{2}\)/km (corresponding to a dispersion coefficient of 17 ps/(nm km), which is a typical value for single-mode optical fiber at 1550 nm [45]). The TOD parameter has been set to \(\beta_{3}=0\). Other sources of noise are not included. That is, \(\delta t=\delta\omega=0\). The lengths of photons emitted by some specific sources have been indicated in the figure. QD1, QD2, QD3: quantum-dot sources from Refs. [46, 47] and [48] respectively. NV: nitrogen-vacancy centers in diamond [44, 49, 50]. (Some types of trapped ions, such as Ba\({}^{+}\)[51] and Sr\({}^{+}\)[31] emit photons at a length close to the NV one.) SPDC: frequency-multiplexed spontaneous parametric down-conversion sources that interface with atomic quantum memories [52, 53]. Ca\({}^{+}\): trapped calcium ions [32] (lifetime estimated in [33]).
band conventionally used to transmit signals as it minimizes attenuation losses (a typical value of 0.275 dB/km in standard single-mode fiber [45]). In contrast the telecom O-band (1260 nm - 1360 nm) incurs much heavier attenuation losses (typically 0.5 dB/km [45]), but as it is centered around the zero-dispersion wavelength (1310 nm) of standard single-mode optical fiber it minimizes dispersive effects. By using the O-band instead of the C-band one can lessen the effects of chromatic dispersion at the cost of incurring extra losses. This strategy is utilized in e.g., Ref. [59]. An investigation in Ref. [41] based on Gaussian photons suggests that using the O-band may only be worth it for photons shorter than approximately 100 picoseconds. A second potential solution is the use of dispersion-shifted fiber. Such fiber has its zero-dispersion wavelength in the telecom C-band and provides simultaneously small dispersion and small attenuation loss [60]. However, such fiber is not widely deployed [41; 61] and hence not suitable when using existing fiber infrastructure to build a quantum network [24]. Finally, one can use dispersion-compensating modules to reduce the effects of chromatic dispersion at the cost of incurring extra losses [61].
## III Asymmetry in repeater chains
Now we turn our attention away from the placement of midpoint stations and instead consider the placement of repeater nodes in a quantum-repeater chain. First, in Section III.1, we discuss the specific type of quantum-repeater chains we consider here. Then, we pose two research questions about asymmetry in such repeater chains in Section III.2. These questions are made more precise in Sections III.3, III.4, and III.5. This allows us to address the research questions using numerical simulations in Section III.6. Finally, we reflect on the numerical results in Section III.7.
### SWAP-ASAP repeaters with parallel entanglement generation
While there exist many types of quantum repeaters [14; 15], we here focus on one specific type, namely the processing-node quantum repeater. Such quantum repeaters are capable of generating and storing entanglement with neighboring nodes and of executing quantum gates. These gates allow processing nodes to perform deterministic entanglement swapping, which is an operation such that if one qubit is entangled with some qubit A and the other qubit is entangled with some qubit B, performing entanglement swapping on those two qubits will result in qubits A and B being entangled [62]. Various proposed repeater platforms are processing nodes, such as trapped ions [63; 64; 65; 66], color centers in diamond [67; 28; 68] and neutral atoms [17; 69].
We here assume that each repeater has exactly two qubits, each of which can be used in parallel to perform heralded entanglement generation (as discussed in Section II) with a different neighboring node. (Note that there exist also proposed repeater systems that can only generate entanglement with one neighbouring node at a time [23; 28; 33].) A chain of such repeaters can then create end-to-end entanglement by combining heralded entanglement generation and entanglement swapping. How these are combined exactly, and what additional operations are performed, is dictated by the protocol that the repeaters execute. Examples of additional operations that could be included are discarding entangled states when they have been stored in memory for too long [23; 26; 70; 71] and entanglement distillation [72; 73; 74], both of which can help mitigate the effects of noise. Optimizing repeater protocols is by no means an easy matter, and what protocols perform well depends both on the specific hardware used and the performance metric employed [26; 75; 76; 77; 21; 78]. Here, we consider the SWAP-ASAP protocol, in which no additional operations are included. In the SWAP-ASAP protocol, each pair of neighboring nodes performs entanglement generation whenever this is possible. That is, whenever at each node the qubit that is reserved for entanglement generation along that specific link is free. As soon as both qubits at a repeater node are entangled it performs entanglement swapping (thereby freeing both qubits up again). We have chosen to study this protocol as it is relatively simple both to understand and to study numerically. Moreover, it has been found that the SWAP-ASAP protocol outperforms schemes that include entanglement distillation for near-term hardware quality, as measured both by the fidelity of end-to-end entangled states and the generation duration [25]. Additionally, for the case when entanglement swapping is deterministic and entanglement is never discarded, it was found that the SWAP-ASAP protocol results in an optimal generation duration [78; 26]. Throughout the rest of this paper, it will be assumed that quantum repeaters can generate entanglement with two neighbours in parallel and that they execute the SWAP-ASAP protocol.
### Research questions
Asymmetric node placement will result in some fiber links between repeaters being shorter while others are longer. As attenuation losses grow exponentially with the fiber length, the longer links generate entanglement at a slower rate, and the shorter links at a faster rate. In other words, the entangling rates along the chain become uneven due to asymmetric repeater placement. The slower links could then potentially become bottlenecks. This is expected to increase not only the amount of time required to distribute end-to-end entanglement, but also the amount of time entangled states need to be stored in the SWAP-ASAP quantum repeaters until entanglement
swapping takes place. The result of this would be an increased amount of noise due to memory decoherence.
The above observation motivates posing the following research question: what is the effect of uneven entangling rates in a SWAP-ASAP repeater chain in which repeaters can generate entanglement with both neighboring nodes simultaneously, as caused by the asymmetric distribution of the repeater nodes, on the performance of that chain? A particularly simple method that could perhaps be used to mitigate any negative effects of asymmetric node placement is what we here refer to as the "extended-fiber method". In this method, spooled fiber is used at the repeater nodes to make the shorter links as long as the longer ones, thereby effectively making the repeater chain symmetric again. However, rather than making the bottlenecks faster, this method just makes the faster links slower. It seems perhaps unlikely that such a strategy can lead to any improvement. Therefore, we pose a second research question: is the extended-fiber method effective at improving the performance of asymmetric SWAP-ASAP repeater chains in which repeaters can generate entanglement with both neighboring nodes simultaneously? In order to address these questions, they need to be made more precise. To that end, we first quantify how well a repeater chain performs in Section III.3. Then, we quantify how asymmetrical a repeater chain is and how we can systematically vary the amount of asymmetry in Section III.4. Finally, we introduce a simplified model for repeater chains in Section III.5.
### Quantifying repeater performance
We quantify the performance of a repeater chain in terms of how capable it is at supporting Quantum Key Distribution (QKD). Specifically, we consider the rate at which a secret key can be obtained when executing an entanglement-based implementation of the BB84 protocol [2, 79] in the asymptotic limit. The end nodes realize this protocol by keeping entangled quantum states stored in memory until they learn that all required entanglement swaps have been performed and hence end-to-end entanglement has been created. At that time, they each measure their qubit in either the Pauli X or Z basis. The corresponding asymptotic secret-key rate is given by [80]
\[\text{SKR}=\frac{1}{T}\max(1-2h(Q),0). \tag{27}\]
Here, \(T\) is the generation duration, i.e., the average time required to distribute an end-to-end entangled state, \(Q\) is the Quantum-Bit Error Rate (QBER), and \(h(x)=-x\log_{2}(x)-(1-x)\log_{2}(1-x)\) is the binary entropy function. The QBER is defined as the probability that, if both end nodes measure their qubits in the same basis, the parity between the outcomes is different than would be expected for the maximally-entangled target state. Therefore, the QBER can be considered a measure for the amount of noise. Note that, in general, the QBER can take a different value for measurements in the X basis than in the Z basis. However, as we will be using a depolarizing noise model (see Section III.5 below), the two values will coincide. Our choice for the secret-key rate as performance metric is motivated not only by the fact that it has a clear operational interpretation, but also because it combines information about how quickly and how noisily entanglement is distributed into a single convenient number. While the secret-key rate is the primary performance metric considered here, the generation duration and QBER from which the secret-key rate is calculated can help provide a more detailed understanding of a repeater chain's performance.
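For reference, the asymptotic secret-key rate of Equation (27) can be computed as in the sketch below; the numbers in the example call are placeholders, not simulation results.

```python
import numpy as np

def binary_entropy(x):
    """Binary entropy h(x); h(0) = h(1) = 0 by convention."""
    if x <= 0 or x >= 1:
        return 0.0
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def secret_key_rate(generation_duration, qber):
    """Asymptotic BB84 secret-key rate of Eq. (27), in bits per second."""
    return max(1 - 2 * binary_entropy(qber), 0.0) / generation_duration

# Placeholder example: one end-to-end entangled state per 0.1 s with 3% QBER.
print(secret_key_rate(generation_duration=0.1, qber=0.03))
```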
### Quantifying chain asymmetry
Now, we first discuss how asymmetry in a repeater chain can be quantified. We then use that to introduce a specific method for placing repeaters in a chain in such a way that the amount of asymmetry can be varied. Let \(\mathcal{R}\) be the set of all repeater nodes in the chain of interest. Then, for every \(n\in\mathcal{R}\), the _node asymmetry parameter_ is defined by
\[\mathcal{A}_{n}=\frac{\left|L_{\text{left of }n}-L_{\text{right of }n}\right|}{L_{\text{left of }n}+L_{\text{right of }n}} \tag{28}\]
and the _node asymmetry sign_ is defined by
\[S_{n}=\text{Sgn}\left(L_{\text{left of }n}-L_{\text{right of }n}\right). \tag{29}\]
Here, \(L_{\text{left of }n}\) (\(L_{\text{right of }n}\)) is the fiber distance between node \(n\) and its neighboring node to the left (right) and \(\text{Sgn}\) is the sign function. We note that \(\mathcal{A}_{n}\) is equivalent to \(\Delta L/L_{\text{tot}}\) and \(S_{n}\) to \(\text{Sgn}(\Delta L)\), where \(\Delta L\) and \(L_{\text{tot}}\) are defined for that specific node as in Equation (3). While \(\Delta L\) proved convenient to describe the effects of asymmetry in the placement of midpoint stations, we find the node asymmetry parameter more convenient in the context of repeater chains. This is because the value of \(L_{\text{tot}}\) can vary between different nodes in the chain, making it hard to understand just how asymmetrically a node is placed between its neighboring nodes from only \(\Delta L\). The node asymmetry parameters and node asymmetry signs of all repeater nodes together provide a complete parameterization of the locations of the nodes in the chain. Now, we define the _chain asymmetry parameter_\(\mathcal{A}_{\text{chain}}\) to be the average value of \(\mathcal{A}_{n}\) over all repeaters,
\[\mathcal{A}_{\text{chain}}=\frac{1}{\left|\mathcal{R}\right|}\sum_{n\in \mathcal{R}}\mathcal{A}_{n}. \tag{30}\]
While the node asymmetry parameter \(\mathcal{A}_{n}\) quantifies how asymmetrically one specific node is placed between
its neighboring nodes, the chain asymmetry parameter \(\mathcal{A}_{\rm chain}\) aims to capture how asymmetric the chain is as a whole.
We aim to address the research questions posed in Section III.2 by investigating how the repeater-chain performance varies as a function of \(\mathcal{A}_{\rm chain}\). However, for a repeater chain with a given total length and given number of nodes, there are many different possible repeater placements for which the chain asymmetry parameter takes the same value. Therefore, in order to avoid ambiguity, we here introduce a specific class of repeater chains for which the parameter \(\mathcal{A}_{\rm chain}\) (together with the total length and number of nodes) uniquely defines the locations of all the repeaters. These are repeater chains for which \(\mathcal{A}_{n}\) is the same for every repeater in the chain and \(S_{n}\) alternates between nodes (such that no two neighboring repeaters have the same sign). It then holds that \(\mathcal{A}_{\rm chain}=\mathcal{A}_{n}\) for all \(n\in\mathcal{R}\). See Figure 5 for an example of what such a repeater chain looks like for different values of \(\mathcal{A}_{\rm chain}\). Our reason for choosing this class of chains is that the chains are relatively regular and easy to understand, while at the same time increasing \(\mathcal{A}_{\rm chain}\) clearly increases the disparity between long and short links, allowing us to study the effect of different entangling rates between different nodes as we set out to do.
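One possible way to lay out a chain of this class in code is sketched below: link lengths alternate between \((1+\mathcal{A})L_{0}\) and \((1-\mathcal{A})L_{0}\), where \(L_{0}\) is the link length of the corresponding symmetric chain. Under the assumption of an even number of links, this reproduces \(\mathcal{A}_{n}=\mathcal{A}_{\text{chain}}\) at every repeater with alternating signs while preserving the total length. This construction is our own reading of the definitions above, not code taken from the paper's repository.

```python
import numpy as np

def chain_node_positions(total_length, num_repeaters, chain_asymmetry):
    """Positions of end nodes and repeaters for the chain class of Figure 5.

    Link lengths alternate between L0 * (1 + A) and L0 * (1 - A), where L0 is
    the symmetric-chain link length, so every repeater has node asymmetry
    parameter A and the asymmetry signs alternate. Assumes an even number of
    links so that the total chain length is preserved.
    """
    num_links = num_repeaters + 1
    assert num_links % 2 == 0, "construction assumes an even number of links"
    L0 = total_length / num_links
    signs = np.array([+1 if i % 2 == 0 else -1 for i in range(num_links)])
    link_lengths = L0 * (1 + chain_asymmetry * signs)
    return np.concatenate(([0.0], np.cumsum(link_lengths)))

# Example: 1000 km chain with 19 repeaters (21 nodes) and A_chain = 0.2,
# corresponding to every other node being displaced by 10 km.
print(chain_node_positions(1000, 19, 0.2))
```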
### Model for Repeater Chain
We consider a simplified model for the repeater nodes as well as for heralded entanglement generation between neighboring nodes. In this model, the midpoint stations studied in Section II are abstracted away, such that we can focus on the placement of the repeater nodes only. We then take the cycle time for performing one attempt at generating an entangled state between two neighboring nodes to be given by
\[T_{\rm cycle}=\frac{L}{c}, \tag{31}\]
where \(L\) is the distance between the neighboring nodes and \(c\) is again the speed of light in fiber (here taken to be 200,000 km/s). We note that this is equivalent to the cycle time when entanglement between neighboring nodes is generated using a symmetrically placed midpoint (see Equation (4)). We model the success probability of each attempt as
\[P_{\rm succ}=e^{-\frac{L}{L_{\rm att}}}, \tag{32}\]
where \(L_{\rm att}\approx 22\) km is the attenuation length corresponding to attenuation losses of 0.2 dB/km. This model has been chosen both for simplicity and for not being overly specific to one particular protocol for heralded entanglement generation. It reflects the exponential scaling of the success probability common to both the double-click and single-click protocols, and also to protocols based on the direct transmission of photons between neighboring nodes [81, 82, 16] (assuming dark counts do not contribute significantly). Therefore it is expected to adequately capture, at least on a qualitative level, how uneven entangling rates arise due to asymmetric node placement in repeater chains based on heralded entanglement generation.
We model the states created by heralded entanglement generation to be noiseless. More precisely, whenever an attempt is successful, a pure Bell state \(\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)\) is created. We consider the repeater nodes to be largely perfect devices at which entanglement swapping can be performed noiselessly and deterministically. The only imperfection modeled at both repeater nodes and end nodes is that while qubits are stored in quantum memory, they undergo memory decoherence. For simplicity, we model memory decoherence as depolarizing noise according to
\[\rho\to e^{-\frac{t}{T_{\rm coh}}}\,\rho+\left(1-e^{-\frac{t}{T_{\rm coh}}}\right)\frac{\mathds{1}}{2}. \tag{33}\]
Here \(t\) is the storage time and \(T_{\rm coh}\) the coherence time, which we take to be one second here (as demonstrated with nitrogen-vacancy centers in Ref. [83]). Given these assumptions, noise in end-to-end entangled states produced by the repeater chain has only two sources. The first of these is repeaters storing entangled states in quantum memory until entanglement swapping takes place. The second is end nodes storing entangled states until all entanglement swaps have been completed and the measurements required by the BB84 protocol are performed. These are exactly the sources of noise that may be affected by uneven entangling rates in a repeater chain.
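The simplified link and memory model of this section is easy to express in code. Below is a minimal transcription of Equations (31)–(33); it is not the NetSquid simulation used for the results in Section III.6, only a sketch of the model assumptions.

```python
import numpy as np

C_FIBER = 200_000.0   # speed of light in fiber [km/s]
L_ATT = 22.0          # attenuation length [km], corresponding to ~0.2 dB/km

def cycle_time(L):
    """Duration of one entanglement-generation attempt, Eq. (31) [s]."""
    return L / C_FIBER

def success_probability(L):
    """Success probability of one attempt over a link of length L, Eq. (32)."""
    return np.exp(-L / L_ATT)

def depolarize(rho, t, T_coh=1.0):
    """Depolarizing memory decoherence of Eq. (33) applied to a single-qubit state."""
    p = np.exp(-t / T_coh)
    return p * rho + (1 - p) * np.eye(2) / 2

# Placeholder example: a 60 km link, and a qubit stored for 50 ms.
rho_plus = 0.5 * np.ones((2, 2))           # |+><+|
print(cycle_time(60), success_probability(60))
print(depolarize(rho_plus, t=0.05))
```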
### Numerical results
Now, we are ready to address the research questions outlined in Section III.2. For concreteness, we consider
Figure 5: Locations of nodes in a chain with 7 repeaters for which \(\mathcal{A}_{n}=\mathcal{A}_{\rm chain}\) is the same for all nodes and \(S_{n}\) alternates between nodes (see definitions in Equations (28), (29) and (30)). The chain asymmetry parameter \(\mathcal{A}_{\rm chain}\) then quantifies the amount of asymmetry.
a repeater chain with a length of 1000 km that contains 21 nodes (including two end nodes). The nodes are thus, in the symmetric case, spaced 50 km apart. A distance of 1000 km can be thought of as a typical pan-continental one, corresponding to, e.g., roughly the distance between Paris and Berlin. In order to estimate the values of the generation duration and the QBER, we employ numerical simulations using the quantum-network simulator NetSquid [25]. These simulations are based on the code introduced in Ref. [33] and make use of a number of open-source libraries [84, 85, 86, 87, 88, 89, 90]. All simulation code and data can be found in our repository [91]. After the generation duration and QBER have been estimated, an estimate for the secret-key rate is computed using Equation (27). The simulations are performed for different values of the chain asymmetry parameter \(\mathcal{A}_{\text{chain}}\), and both for asymmetric chains and chains that have been symmetrized again using the extended-fiber method. The results of these simulations are shown in Figure 6. It can be directly seen that using the extended-fiber method does not improve the performance of the repeater chain, but instead reduces it significantly. Simulation results demonstrating that the same conclusion holds for other numbers of repeaters, other chain lengths and other coherence times can be found in our repository [91]. This suggests that the question whether the extended-fiber method can be used to mitigate the adverse effects of uneven entangling rates due to asymmetric repeater placement must be answered in the negative.
It can be observed that the performance of the repeater chain exhibits some resilience against small amounts of asymmetry. At \(\mathcal{A}_{\text{chain}}=0.1\) the secret-key rate has only fallen by about 10%, and at \(\mathcal{A}_{\text{chain}}=0.2\) by about 50%. For the specific repeater chain we consider, this corresponds to all the even nodes in the chain being displaced by 5 km and 10 km respectively as compared to their position in a symmetric chain, while the odd nodes remain in place (see also Figure 5). This resilience seems to be a consequence of the fact that both the generation duration and the QBER have a vanishing first derivative at \(\mathcal{A}_{\text{chain}}=0\) in Figure 6. We note furthermore that the first derivatives not only appear to vanish for the parameters considered in Figure 6, but also for different numbers of nodes, chain lengths and coherence times, as demonstrated by data that can be found in our repository [91].
### Reflection on numerical results
It may be surprising that the first derivatives in Figure 6 appear to vanish. After all, when \(\mathcal{A}_{n}\) is nonzero, the resultant longer links may be expected to form bottlenecks. However, we need to take into account that while the longer links become slower at generating entanglement, the shorter links become faster. It would appear that for small values of \(\mathcal{A}_{n}\) the negative effect of the slower links is mostly compensated by the positive effect of the faster links. To foster an intuitive understanding, let us introduce the following hand-waving argument that reinforces the interpretation that first-order effects on the fast and slow links cancel each other. Consider a single repeater
Figure 6: Effect of the chain asymmetry parameter \(\mathcal{A}_{\text{chain}}\) in a repeater chain of the type illustrated in Figure 5 on the asymptotic secret-key rate of entanglement-based BB84. Additionally, the QBER and average entanglement-generation duration are shown, from which the secret-key rate is derived according to Equation (27). When using the “extended-fiber method”, spooled fiber is deployed to make all links in the network equally long, resulting in an effectively symmetric network with an increased total fiber length. The total length of the repeater chain considered here is 1000 km and it contains 21 nodes (including 2 end nodes). Depolarizing memory decoherence (see Equation (33)) with a coherence time of \(T_{\text{coh}}=1\)s is the only source of noise included. Error bars represent the standard error in the estimates and are smaller than the marker size. Each data point is based on 20,000 simulated end-to-end entangled states.
node \(n\in\mathcal{R}\). This repeater is connected to its two neighbors by fibers of lengths \(\frac{1}{2}L_{\text{tot}}(1\pm\mathcal{A}_{n})\), where \(L_{\text{tot}}\) is the sum of the two lengths. Therefore, from combining Equations (1), (31) and (32), we find that the average rates at which entanglement is generated with the two different neighbors are
\[\begin{split} R_{\pm}&=\frac{c\exp\left(-\frac{L_{ \text{tot}}}{2L_{\text{att}}}(1\pm\mathcal{A}_{n})\right)}{L_{\text{tot}}(1\pm \mathcal{A}_{n})}\\ &=\frac{c\text{e}^{-\frac{L_{\text{tot}}}{2L_{\text{att}}}}}{L_{ \text{tot}}}\left(1\mp(\frac{L_{\text{tot}}}{2L_{\text{att}}}+1)\mathcal{A}_{n }\right)+\mathcal{O}(\mathcal{A}_{n}^{2}).\end{split} \tag{34}\]
Initially, entanglement generation is continuously attempted with both neighbors simultaneously. This can be thought of as entanglement being created on one side with rate \(R_{+}\) and with rate \(R_{-}\) on the other side, resulting in a "total rate" at which entanglement is produced at this node of
\[R_{\text{sum}}=R_{+}+R_{-}=2\frac{c\text{e}^{-\frac{L_{\text{tot}}}{2L_{\text {att}}}}}{L_{\text{tot}}}+\mathcal{O}(\mathcal{A}_{n}^{2}). \tag{35}\]
Abusively treating the times required to generate entanglement on either side as being exponentially distributed (while they are really geometrically distributed), we then find that the average time until the first entangled state is created is \(1/R_{\text{sum}}\). This time is invariant with respect to the node asymmetry parameter at first order.
Before entanglement swapping can take place, the second entangled state still needs to be generated. Now, entangling attempts are only made on one side, and therefore the "total rate" is no longer \(R_{\text{sum}}\) but only \(R_{+}\) or \(R_{-}\), depending on with which of the two neighbors entanglement has been established already. The probability that the longer link is generated first (once more treating the times required to generate entanglement as being exponentially distributed) is given by \(R_{+}/R_{\text{sum}}\), in which case it on average still takes a time \(1/R_{-}\) to generate the second entangled state. Similarly, with probability \(R_{-}/R_{\text{sum}}\) it still takes a time \(1/R_{+}\). Therefore, the average time until entanglement is swapped at repeater \(n\) is
\[\begin{split} T_{\text{swap}}&=\frac{1}{R_{\text{sum}}}\left(1+\frac{R_{+}}{R_{-}}+\frac{R_{-}}{R_{+}}\right)\\ &=\frac{3}{2}\frac{L_{\text{tot}}}{c}\,\text{e}^{\frac{L_{\text{tot}}}{2L_{\text{att}}}}+\mathcal{O}(\mathcal{A}_{n}^{2}),\end{split} \tag{36}\]
which is just the well-known "three-over-two" approximation for symmetric repeaters [92; 93]. Furthermore, the average time during which the first entangled state is stored in quantum memory is then given by \(T_{\text{swap}}-1/R_{\text{sum}}\), which also does not contain any linear terms in \(\mathcal{A}_{n}\). This is consistent with the fact that not only the generation duration of the repeater chain appears to be independent of the chain asymmetry parameter to linear order, but also the QBER.
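The back-of-the-envelope argument above can be checked numerically, as in the sketch below, which evaluates the exact rate expressions and the resulting average swap time from Equations (34)–(36) as a function of \(\mathcal{A}_{n}\) (still treating generation times as exponentially distributed, as in the argument).

```python
import numpy as np

def swap_time(L_tot, A_n, L_att=22.0, c=200_000.0):
    """Average time until entanglement swapping at an isolated repeater.

    Uses the exponential-waiting-time approximation of Eqs. (34)-(36):
    T_swap = (1 + R_+/R_- + R_-/R_+) / R_sum.
    """
    L_plus = 0.5 * L_tot * (1 + A_n)     # longer link
    L_minus = 0.5 * L_tot * (1 - A_n)    # shorter link
    R_plus = c * np.exp(-L_plus / L_att) / L_plus
    R_minus = c * np.exp(-L_minus / L_att) / L_minus
    R_sum = R_plus + R_minus
    return (1 + R_plus / R_minus + R_minus / R_plus) / R_sum

# The first derivative at A_n = 0 vanishes: small asymmetry barely changes T_swap.
for A in (0.0, 0.05, 0.1, 0.2):
    print(A, swap_time(L_tot=100.0, A_n=A))
```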
While the above argument can help understand why the performance of the repeater chain studied here has a vanishing first derivative with respect to the asymmetry parameter, we stress that it is not a complete or accurate treatment. For one, we have approximated geometrically-distributed random variables as being exponentially distributed. Moreover, we neglected the fact that in order to calculate the QBER we would need to calculate the expected value of the exponential function occurring in Equation (33), which is not the same as the exponential function evaluated at the expected value. But perhaps most importantly, the different repeaters cannot be considered in isolation. After the repeater has performed entanglement swapping, it can only start entanglement generation again with neighbors that have themselves also performed entanglement swapping (otherwise their qubit is still occupied). This complex interdependence is one of the main reasons why we have turned to numerical simulations here.
## IV Conclusion
We have investigated how the asymmetric placement of nodes in a quantum network can affect network performance. Specifically, we have studied the effect of asymmetric midpoint placement on heralded entanglement generation and of asymmetric repeater placement on SWAP-ASAP repeater chains in which repeaters can generate entanglement with both neighboring nodes in parallel. In both cases we have observed a remarkable resilience against small amounts of asymmetry, even though performance can be expected to degrade significantly as asymmetry is increased further. While for the midpoint placement the cycle time will be directly affected when asymmetry is introduced, the success probability and fidelity have a vanishing first derivative. Similarly, for repeater chains, both the generation duration and QBER appear to have vanishing first derivatives with respect to asymmetry in repeater placement. Whether the first derivatives also vanish for repeater chains in which parallel entanglement generation is not possible remains an open question. The same is true for repeater chains that do not execute a SWAP-ASAP protocol but instead, for example, execute a protocol that includes entanglement distillation.
We have also observed that asymmetry in midpoint placement can significantly affect the indistinguishability of photons used in heralded entanglement generation because of chromatic dispersion. Chromatic dispersion can potentially cause a bad fidelity even for small amounts of asymmetry. The size of the effect, however, depends on the temporal length of the photons, and we have found that as long as the photons are long enough (on the order of nanoseconds) the effect of chromatic
dispersion can be negligible even for large asymmetries (percent level for an asymmetry of 40 km, see Figure 3). We have furthermore found that Gaussian wave packets are much more resilient against chromatic dispersion than Lorentzian wave packets, which have long tails in their frequency distribution. By making the shape of a wave packet more Gaussian than Lorentzian (e.g., by filtering out long tails), the effects of chromatic dispersion can be mitigated.
From all this, we conclude that while asymmetry degrades quantum-network performance and should therefore be avoided where possible, small amounts of asymmetry are not expected to have a large effect. This may alleviate some of the pressure in selecting the perfect locations for nodes in a quantum network, and makes it more plausible that existing fiber infrastructure can provide fertile ground for a future quantum internet.
## V Code availability
The code that was used to perform the simulations and generate the plots in this paper has been made available at [https://gitlab.com/GuusAvis/reproduction-code-for-asymmetric-node-placement-in-fiber-based-quantum-networks](https://gitlab.com/GuusAvis/reproduction-code-for-asymmetric-node-placement-in-fiber-based-quantum-networks) [91].
## Acknowledgements
We thank Francisco Ferreira da Silva, Janice van Dam, Arian Stolk and Kian van der Enden for useful discussions. We thank Arian Stolk for critical reading of the manuscript. This work was supported by the QIA-Phase 1 project which has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No. 101102140, NWO Zwaartekracht QSC 024.003.037 and Danmarks Grundforskningsfond (DNRF 139, Hy-Q Center for Hybrid Quantum Networks).
|
2302.13812 | Adapting Pre-trained Language Models for Quantum Natural Language Processing | The emerging classical-quantum transfer learning paradigm has brought a decent performance to quantum computational models in many tasks, such as computer vision, by enabling a combination of quantum models and classical pre-trained neural networks. However, using quantum computing with pre-trained models has yet to be explored in natural language processing (NLP). Due to the high linearity constraints of the underlying quantum computing infrastructures, existing Quantum NLP models are limited in performance on real tasks. We fill this gap by pre-training a sentence state with complex-valued BERT-like architecture, and adapting it to the classical-quantum transfer learning scheme for sentence classification. On quantum simulation experiments, the pre-trained representation can bring 50\% to 60\% increases to the capacity of end-to-end quantum models. | Qiuchi Li, Benyou Wang, Yudong Zhu, Christina Lioma, Qun Liu | 2023-02-24T14:59:02Z | http://arxiv.org/abs/2302.13812v1 |
# Adapting Pre-trained Language Models for Quantum Natural Language Processing
###### Abstract
The emerging classical-quantum transfer learning paradigm has brought a decent performance to quantum computational models in many tasks, such as computer vision, by enabling a combination of quantum models and classical pre-trained neural networks. However, using quantum computing with pre-trained models has yet to be explored in natural language processing (NLP). Due to the high linearity constraints of the underlying quantum computing infrastructures, existing Quantum NLP models are limited in performance on real tasks. We fill this gap by pre-training a sentence state with complex-valued BERT-like architecture, and adapting it to the classical-quantum transfer learning scheme for sentence classification. On quantum simulation experiments, the pre-trained representation can bring 50% to 60% increases to the capacity of end-to-end quantum models.
## 1 Introduction
Quantum computing combines quantum mechanics and computer science. The concepts of superposition and entanglement bring inherent parallelism between _qubits_, the basic computational element, which endow enormous computational power to quantum devices. Classical-quantum transfer learning (Mari et al., 2020) has emerged as an appealing quantum machine learning technique. As shown in Fig. 1, in a classical-quantum transfer learning pipeline, the pre-trained input features are encoded to a multi-qubit state, transformed and measured in a quantum circuit. The output probabilities are projected to the task label space. The losses are backpropagated to update the parameters in the pipeline with classical algorithms. This transfer learning scheme combines the representation power of state-of-the-art (SOTA) pre-trained models and the computational power of quantum computing, yielding decent accuracy on various image classification tasks (Lloyd et al., 2020; Mogalapalli et al., 2021; Mari et al., 2020; Oh et al., 2020).
However, combining pre-trained models and quantum computing remains unexplored in NLP, where large-scale pre-trained models have dramatically improved language representation (Devlin et al., 2019; Radford and Sutskever, 2018). Current Quantum NLP (QNLP) models (Zeng and Coecke, 2016; Coecke et al., 2020; Meichanetzidis et al., 2020; Lorenz et al., 2021; Lloyd et al., 2020; Kartsaklis et al., 2021) mainly construct quantum circuits from a certain kind of tensor network that aggregates word vectors to sentence representations (Coecke et al., 2020), and the parameters in the network are randomly initialized and learned end-to-end. Since a quantum circuit can be seen as a linear model in the feature space (Schuld and Petruccione, 2021), these models are highly restricted in scalability and effectiveness. One attempt to resolve this issue is hybrid classical-quantum models (Li et al., 2022), where certain layers of the model are implemented on a quantum device, and the intermediate results are sent to classical models for non-linear operations. However, the frequent switching between classical and quantum processing units significantly slows training and inference, and limits the applicability of the model.
We posit that the classical-quantum transfer learning paradigm is a sound fit for QNLP models, especially in the current noisy intermediate-scale quantum (NISQ) era. Pre-trained language features can lead to strong performance in downstream natural language understanding tasks Devlin et al. (2019), even with simple models. Furthermore, it is crucial to introduce robust quantum encodings to mitigate the errors caused by noisy quantum devices, and pre-trained language models are a promising approach to this aim due to their high robustness Qiu et al. (2020). Finally, the pre-training mechanism can contribute to scalable QNLP models by avoiding the tensor product of all token vectors and using fixed-dimensional representations for arbitrarily long sentences.
We are motivated to pre-train language representations compatible with quantum computing models. Due to the crucial role of complex numbers in quantum computing, we build a complex-valued pre-trained language model (PLM) for classical-quantum transfer learning. Complex-valued neural networks (NNs) have been long studied (Georgiou and Koutsoperas, 1992; Nitta, 2002; Hirose, 2011; Trabelsi et al., 2018; Xiang et al., 2020; Yang et al., 2020) with various NN building blocks including RNN (Wisdom et al., 2016), CNN (Guberman, 2016) and Transformers (Yang et al., 2020; Wang et al., 2020; Tay et al., 2019; Zhang et al., 2021), and have shown advantages in enhanced representation capacity (Wisdom et al., 2016; Arjovsky et al., 2016; Trabelsi et al., 2018; Wang et al., 2020; Trouillon et al., 2016), faster learning speed (Arjovsky et al., 2016; Danihelka et al., 2016), and increased model robustness (Danihelka et al., 2016; Yeats et al., 2021; Xiang et al., 2020). Despite these advances, complex numbers are not used in pre-trained language models. It remains unknown whether complex-valued NN building blocks can be integrated into high-quality pre-trained models.
To adapt the complex-valued pre-trained models to QNLP, we impose numerical constraints on the network components. For feature encoding, we unit normalize the hidden vectors of the [CLS] token so that the sentence representation can be mapped to a quantum state throughout the network. We also re-implement the next sentence prediction (NSP) head to simulate variational measurement. At fine-tuning, we train an authentic task-related variational measurement process by parameterizing the involved unitary transformation. Despite the imposed numerical constraints, the quantum-compatible pre-trained language model performs on par with a real-valued BERT of comparable size. More importantly, the pre-trained sentence state encoding brings remarkable performance gains to end-to-end quantum models on various classification datasets, with a relative accuracy improvement of around 50% to 60%.
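As a rough illustration of what mapping the [CLS] representation to a quantum state entails, the sketch below L2-normalizes a complex-valued hidden vector so that its squared amplitudes sum to one. This is our own minimal example of amplitude-style normalization, not the model's actual implementation; the vector dimension and values are placeholders.

```python
import numpy as np

def to_sentence_state(cls_hidden):
    """Map a complex-valued [CLS] hidden vector to a valid quantum state
    (unit L2 norm), so that its squared amplitudes form a probability distribution."""
    vec = np.asarray(cls_hidden, dtype=complex)
    return vec / np.linalg.norm(vec)

# Placeholder example: a random 8-dimensional complex hidden vector (3 qubits).
rng = np.random.default_rng(0)
state = to_sentence_state(rng.normal(size=8) + 1j * rng.normal(size=8))
print(np.sum(np.abs(state) ** 2))   # 1.0
```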
**We contribute** the first approach to introduce classical-quantum transfer learning for QNLP models, and pre-train language representations compatible with quantum computing models. Apart from the remarkably improved performance, our model is more scalable than existing QNLP models and can handle longer sentences.
## 2 Related work
### Complex-valued Neural Networks
Complex values have been used in various NNs across domains (Arjovsky et al., 2016; Danihelka et al., 2016; Wisdom et al., 2016; Trouillon et al., 2016; Hirose, 2011; Trabelsi et al., 2018; Xiang et al., 2020; Yang et al., 2020; Guberman, 2016; Wang et al., 2019). Arjovsky et al. (2016); Wisdom et al. (2016); Wolter & Yao (2018) studied complex numbers in recurrent NNs. Arjovsky et al. (2016) systematically studied variants of CNNs with complex-valued inputs and weights, which led Trabelsi et al. (2018) to build a complex-valued NN that achieved SOTA performance in audio-related tasks. Wang et al. (2019); Yang et al. (2020); Zhang et al. (2021); Tay et al. (2019) obtained promising results on sequence-to-sequence (seq2seq) tasks with a complex-valued Transformer. Complex-valued NNs have also been used in privacy detection (Xiang et al., 2020) and knowledge graph completion (Trouillon et al., 2016; Trouillon and Nickel, 2017).

Figure 1: Classical-quantum transfer learning pipeline (Mari et al., 2020).
Apart from effectiveness gains (Wisdom et al., 2016; Arjovsky et al., 2016; Trabelsi et al., 2018; Wang et al., 2020; Trouillon et al., 2016), complex-valued NNs also contribute to faster learning (Arjovsky et al., 2016; Danihelka et al., 2016) and increased model robustness (Danihelka et al., 2016; Yeats et al., 2021; Xiang et al., 2020). However, these properties have been found only in end-to-end tasks. The impact of complex values on pre-trained models remains unexplored.
### Quantum Natural Language Processing (QNLP)
QNLP aims to build NLP models compatible with quantum hardware. Current QNLP models are limited in their architecture and application (Coecke et al., 2020; Meichanetzidis et al., 2020; Lorenz et al., 2021; Lloyd et al., 2020); they are based on a compositional model (Coecke et al., 2010), which represents words as tensors in spaces dependent on their grammatical roles, and performs tensor operations to encode syntactic relations. Sentence representations are built by bottom-up aggregating individual word tensors. This process is translated to quantum circuits, followed by a quantum measurement component to produce classification labels. Preliminary studies have successfully implemented and simulated QNLP models for sentence classification (Meichanetzidis et al., 2020; Lorenz et al., 2021). By quantum simulation in the feedforward pass and performing backpropagation on a classical computer, the model can learn from data and outperform random labeling.
Like other quantum machine learning models (Lloyd et al., 2020; Jerbi et al., 2021), QNLP models encode words to different _qubits_, and design the quantum circuit routine (a.k.a. _ansatz_) to derive the sentence representation before feeding it to a measurement layer. In its mathematical form, the model encodes each word to a unit complex vector and has an all-linear structure up to the measurement outcome. Therefore, such models have low capacity and suffer from the scalability issue. Recently, a hybrid classical-quantum scheme (Li et al., 2022) was introduced to alleviate these limitations. In this quantum self-attention neural network (QSANN), a quantum process with parameterized unitary rotations and Pauli measurements is introduced to compute the query, key and value vectors, which are then sent to a classical computer to perform non-linear attention. Due to the introduced non-linearity, QSANN beats the QNLP model in Lloyd et al. (2020). However, running the network requires switching between quantum and classical hardware at each self-attention layer, which is too inefficient to be practical. We therefore posit that the classical-quantum transfer learning paradigm (Mari et al., 2020) is a more promising solution for alleviating the low non-linearity issue of QNLP models.
## 3 Background
**Complex number.** A complex number \(z\) is written as \(z=a+bi\) in the rectangular form or \(z=re^{i\theta}=r(\cos\theta+i\sin\theta)\) in the polar form, where \(a\), \(b\) are the real and imaginary parts of \(z\), written as \(\mathfrak{Re}(z)\) and \(\mathfrak{Im}(z)\). \(r=|z|\in[0,+\infty)\) and \(\theta\in[-\pi,\pi)\) are the _modulus_ (or _amplitude_) and _argument_ (or _phase_). The conjugate of a complex number \(z=a+bi\) is \(\overline{z}=a-bi\), which extends to vectors and matrices. The Hermitian of a complex matrix \(A\) is its conjugate transpose, written as \(A^{H}=\overline{A^{T}}\). \(A\) is a Hermitian matrix when \(A=A^{H}\). A unitary complex matrix \(U\) satisfies \(U^{H}U=I\). We use the typical complex-valued fully-connected (dense) layer to illustrate complex additions and multiplications. A complex dense layer is parameterized by a complex matrix \(\mathbf{W}=\mathbf{A}+i\mathbf{B}\) and a complex bias \(\mathbf{b}=\mathbf{c}+i\mathbf{d}\). For a complex input vector \(\mathbf{X}=\mathbf{x}+i\mathbf{y}\), the output is a complex-valued multiplication of the weight and the input vector, plus the bias value:
\[\mathbf{z}=\mathbf{W}\mathbf{X}+\mathbf{b}=(\mathbf{A}\mathbf{x}-\mathbf{B}\mathbf{y}+\mathbf{c})+i(\mathbf{B}\mathbf{x}+ \mathbf{A}\mathbf{y}+\mathbf{d}) \tag{1}\]
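For concreteness, a minimal PyTorch sketch of such a complex dense layer is given below, storing the real and imaginary parts as separate real tensors. The class name and the initialisation scale are illustrative assumptions, not the actual QBERT implementation.

```
import torch
import torch.nn as nn

class ComplexDense(nn.Module):
    """Complex dense layer z = WX + b with W = A + iB and b = c + id (Eq. 1)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.A = nn.Parameter(0.01 * torch.randn(d_out, d_in))
        self.B = nn.Parameter(0.01 * torch.randn(d_out, d_in))
        self.c = nn.Parameter(torch.zeros(d_out))
        self.d = nn.Parameter(torch.zeros(d_out))

    def forward(self, x, y):
        # input X = x + iy; real part (Ax - By + c), imaginary part (Bx + Ay + d)
        real = x @ self.A.T - y @ self.B.T + self.c
        imag = x @ self.B.T + y @ self.A.T + self.d
        return real, imag

layer = ComplexDense(8, 4)
re_out, im_out = layer(torch.randn(2, 8), torch.randn(2, 8))
```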
The **mean and variance** of a set of complex numbers \(\{z_{j}\}_{j=1}^{n}\) are given below:
\[\begin{split}\bar{\mathbf{z}}&=\frac{\sum_{j=1}^{n}z_{j}}{n}\\ \sigma_{z}^{2}&=\frac{\sum_{j=1}^{n}(z_{j}-\bar{\mathbf{z}})(\overline{z_{j}-\bar{\mathbf{z}}})}{n}.\end{split} \tag{2}\]
**Quantum Computing.** We present the basic concepts of quantum computing; see (Nielsen and Chuang, 2010) for more. The basic computing unit is a _qubit_, the quantum analog of a classical bit. A qubit describes a state \(\ket{\psi}\) in a 2-dim Hilbert space. The _basis states_ \(\ket{0}\) and \(\ket{1}\) are orthonormal vectors that form the basis of the space. A general state \(\ket{\psi}\) is a _superposition_ of the basis states, i.e. \(\ket{\psi}=\alpha\ket{0}+\beta\ket{1}\), where \(\alpha\), \(\beta\) are complex numbers with \(|\alpha|^{2}+|\beta|^{2}=1\). One can apply _measurement_ to a qubit to check its probabilities of outcomes 0 and 1 by Born's rule (Born, 1926), i.e., \(P(i)=|\langle\psi|i\rangle|^{2}\), so \(P(0)=|\alpha|^{2}\) and \(P(1)=|\beta|^{2}\) for the state above. The probabilities of all outcomes always sum to 1. For multiple qubits, their joint space is the tensor product of each qubit space, hence of dimension \(2^{n}\) for \(n\) qubits, and the basis states are denoted by \(\{|a_{1}a_{2}...a_{n}\rangle,a_{i}\in\{0,1\}\}\). A state transformation is mathematically a unitary map or a complex unitary matrix \(U\), such that \(\ket{\psi^{\prime}}=U\ket{\psi}\) for state \(\ket{\psi}\). Quantum circuits are physical implementations of quantum computing models. The basic units of quantum circuits are quantum gates, which are unitary maps that play similar roles to logic gates in classical computers. The combination of different quantum gates allows us to implement any unitary transformation before the final measurement step.
**Classical BERT.** Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) takes as input a concatenation of two segments (sequences of tokens). The inputs are passed through an embedding layer that adds positional embedding, token embedding and segment embedding. The embeddings are then fed into a stack of \(N\) transformer layers, and each layer has a multi-head attention module to enact token-level interactions. The last hidden units of each sequence are used to perform Mask Language Model (MLM) and Next Sentence Prediction (NSP). The MLM objective \(\mathcal{L}_{MLM}\) is a cross-entropy loss on predicting the randomly masked tokens in the sequence, while the NSP loss \(\mathcal{L}_{NSP}\) produces binary cross-entropy loss on predicting whether the two segments follow each other in the original text. The overall objective of BERT is \(\mathcal{L}_{BERT}=\mathcal{L}_{MLM}+\mathcal{L}_{NSP}\). BERT is pre-trained on large text corpora, e.g., BOOKCORPUS and the English WIKIPEDIA (Devlin et al., 2019). The model is fine-tuned on different text classification and natural language inference datasets, such as the famous GLUE benchmark (Wang et al., 2018). The effectiveness on these datasets indicates the performance of the pre-trained model.
## 4 Our quantum-compatible pre-trained language model
### Overall Architecture
A typical quantum circuit for classification is composed of a feature encoding module and a variational measurement component. The feature encoding unit encodes an input data point \(x\), such as a sentence, to an \(n\)-qubit state \(\ket{\psi(x)}\). It applies a unitary transformation \(U_{\phi}(x)\) to the \(n\)-qubit basis state, i.e. \(\ket{\psi(x)}=U_{\phi}(x)\ket{00...0}\). Next, the encoded state \(\ket{\psi(x)}\) is fed to a variational measurement unit, which consists of a task-related unitary transformation \(V(\theta)\) and a measurable observable along the basis states \(\{\ket{e_{j}}\}\) of the \(n\)-qubit system. The output probabilities on all basis states \(p_{j}=|\bra{e_{j}}V(\theta)\ket{\psi(x)}|^{2}\) are aggregated as the circuit output. They are further projected onto a low-dimensional space to produce the task label.

Figure 2: Mapping the classical-quantum transfer learning scheme to our complex-valued PLM. The MLM prediction head for the PLM and the linear projection layer of the quantum model are omitted.
We establish that the above schema can be structurally mapped to a pre-trained language model, as shown in Fig. 2. The [CLS] token vector is usually viewed as the sequence representation. The multi-layer Transformer encoder encodes its input to a quantum state \(\ket{\mathbf{\psi}_{[CLS]}(\mathbf{x})}\), playing the role of quantum feature encoding in a quantum circuit. Essentially, the unitary transformation \(U_{\phi}(x)\) is parameterized such that the transformed state \(U_{\phi}(x)\ket{00...0}=\ket{\psi_{[CLS]}(x)}\). Furthermore, the NSP prediction head that maps [CLS] token to classification label is analogous to the variational measurement component that directs \(\ket{\mathbf{\psi}_{[CLS]}(\mathbf{x})}\) to the classification label.
Since complex values are vital to quantum models, we build a pre-trained LM with complex values, namely QBERT, to support the mapping above. We follow the multi-layer bidirectional Transformer architecture of classical BERT, and adjust the implementation of each network component to support complex representations. For feature encoding, we unit normalize the hidden vectors of the [CLS] token so that the sentence representation corresponds to a quantum state at each layer of the network. We also re-implement the NSP prediction head to simulate variational measurement in both the pre-training and fine-tuning phases.
### QBERT building blocks
**Embedding layer.** In classical BERT, token embeddings, segment embeddings and position embeddings are summed up at a per-token level. They are each extended to the complex domain in QBERT. Essentially, the complex-valued token embeddings, segment embeddings and position embeddings are summed up as the output of the embedding layer for each token.
**Multi-head attention.** A complex transformer layer has a multi-head attention component at its core. It computes query-key affinity scores, and linearly combines them with value vectors to produce a contextual vector for each element. The attention scores for one head are computed by
\[\text{ComplexAttention}(Q,K,V)=f(\frac{QK^{H}}{\sqrt{d_{k}}})V, \tag{3}\]
where \(f(\cdot)\) is a softmax-like activation function applied on the query-key complex inner (i.e. Hermitian) products. Since the inputs to this function are complex matrices, we extend the real softmax function to the complex domain.
A straightforward approach to this aim is to apply softmax to real and imaginary parts of the inner product separately. Suppose the Hermitian product is denoted by \(\{\sigma(q,k)\}\) for a pair of query-key elements \((q,k)\), the formula of this _split activation function_ is given by
\[f_{\text{split}}(q,k)=\frac{e^{\mathfrak{Re}(\sigma(q,k))}}{\sum_{k^{\prime }}e^{\mathfrak{Re}(\sigma(q,k^{\prime}))}}+i\frac{e^{\mathfrak{Im}(\sigma(q,k) )}}{\sum_{k^{\prime}}e^{\mathfrak{Im}(\sigma(q,k^{\prime}))}}, \tag{4}\]
where the summation iterates over all key elements \(k^{\prime}\) in the sequence. The split softmax function normalizes both the real and imaginary parts of the affinity scores to sum up to 1 for each query. When the scores are taken to linearly combine the value vectors \(\{v^{\prime}\}\), the summation can be decomposed into
\[\begin{split} h=&\sum_{k^{\prime}}\big(\mathfrak{Re}(f_{\text{split}}(q,k^{\prime}))+i\,\mathfrak{Im}(f_{\text{split}}(q,k^{\prime}))\big)\big(\mathfrak{Re}(v^{\prime})+i\,\mathfrak{Im}(v^{\prime})\big)\\ =&\sum_{k^{\prime}}\big(\mathfrak{Re}(f_{\text{split}}(q,k^{\prime}))\mathfrak{Re}(v^{\prime})-\mathfrak{Im}(f_{\text{split}}(q,k^{\prime}))\mathfrak{Im}(v^{\prime})\big)\\ &+i\sum_{k^{\prime}}\big(\mathfrak{Re}(f_{\text{split}}(q,k^{\prime}))\mathfrak{Im}(v^{\prime})+\mathfrak{Im}(f_{\text{split}}(q,k^{\prime}))\mathfrak{Re}(v^{\prime})\big),\end{split}\]
and a negative sign exists in the real part of the summation due to the \(i^{2}=-1\). However, an ideal weighted combination should be a convex combination of the value vectors with all non-negative
weights. This motivates us to modify the normalization function \(f\) so that real-valued affinity scores are produced from the complex inputs, because they indicate a convex combination in both the real and imaginary channels. We propose to apply softmax normalization to the modulus of the complex numbers, as shown in the equation below.
\[f_{\text{mod}}(q,k)=\frac{e^{|\sigma(q,k)|}}{\sum_{k^{\prime}}e^{|\sigma(q,k^{ \prime})|}} \tag{5}\]
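The following sketch illustrates one attention head with this modulus-based normalization; the tensor shapes and the omission of multi-head bookkeeping are simplifying assumptions.

```
import torch

def complex_mod_attention(q_re, q_im, k_re, k_im, v_re, v_im):
    # Hermitian query-key product sigma(q,k) = q k^H, split into real/imaginary parts
    sig_re = q_re @ k_re.transpose(-2, -1) + q_im @ k_im.transpose(-2, -1)
    sig_im = q_im @ k_re.transpose(-2, -1) - q_re @ k_im.transpose(-2, -1)
    d_k = q_re.shape[-1]
    # Eq. (5): softmax over the modulus gives real, non-negative weights summing to 1
    weights = torch.softmax(torch.sqrt(sig_re**2 + sig_im**2) / d_k**0.5, dim=-1)
    # the same convex combination is applied to both channels of the values
    return weights @ v_re, weights @ v_im

q = torch.randn(1, 5, 16)
out_re, out_im = complex_mod_attention(q, q, q, q, q, q)
```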
**Feed-forward network.** The main component of the feed-forward network is the fully-connected layer. We employ Eq. 1, Sec. 3 for the implementation of a complex fully-connected layer. For an input vector \(\mathbf{X}\in\mathbb{C}^{d_{i}}\), a fully-connected layer projects it to a \(d_{o}\)-dim vector \(\mathbf{z}\in\mathbb{C}^{d_{o}}\), with weight matrix \(\mathbf{W}\in\mathbb{C}^{d_{o}\times d_{i}}\) and a bias term \(\mathbf{b}\in\mathbb{C}^{d_{o}}\).
**Prediction heads.** In order to be consistent with the quantum classification model, we re-implement the NSP head to simulate a variational measurement, which consists of unitary training and measurement. Due to the computational cost of training a unitary matrix, we replace the unitary transformation with a dense layer followed by unit-normalization in the pre-training phase (Chen et al., 2021). For the measurement, two pure states are used for the NSP task, and the squared Hermitian product between the input state and each pure state is computed and linearly re-scaled to discrete probabilities. As per Lorenz et al. (2021), the probabilities are used to compute the binary cross-entropy loss against the true binary label. We further remove the non-linear activation function from the NSP prediction head.
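A minimal sketch of this measurement-style head is given below; the dense projection that precedes it is omitted, and the particular rescaling of the squared overlaps into probabilities is an assumption.

```
import torch
import torch.nn as nn

class VariationalMeasurementHead(nn.Module):
    """Squared Hermitian overlap of the unit-normalised [CLS] state with two
    trainable pure states, rescaled to discrete probabilities."""
    def __init__(self, dim, n_classes=2):
        super().__init__()
        self.s_re = nn.Parameter(torch.randn(n_classes, dim))
        self.s_im = nn.Parameter(torch.randn(n_classes, dim))

    def forward(self, z_re, z_im):
        # unit-normalise the [CLS] vector so it represents a legal quantum state
        norm = torch.sqrt((z_re**2 + z_im**2).sum(-1, keepdim=True))
        z_re, z_im = z_re / norm, z_im / norm
        s_norm = torch.sqrt((self.s_re**2 + self.s_im**2).sum(-1, keepdim=True))
        s_re, s_im = self.s_re / s_norm, self.s_im / s_norm
        ov_re = z_re @ s_re.T + z_im @ s_im.T      # Re <s|z>
        ov_im = z_im @ s_re.T - z_re @ s_im.T      # Im <s|z>
        overlap = ov_re**2 + ov_im**2              # Born-rule-style squared overlap
        return overlap / overlap.sum(-1, keepdim=True)

head = VariationalMeasurementHead(16)
probs = head(torch.randn(4, 16), torch.randn(4, 16))   # rows sum to 1
```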
For the MLM head, a complex-valued feed-forward network is used to project the encoder-output tokens to MLM logits, which are then converted to real values by taking the moduli of the complex numbers.
**Activation function.** Classical BERT typically adopts Rectified Linear Unit (ReLU) (Nair and Hinton, 2010) or the Gaussian Error Linear Unit (GeLU) (Hendrycks and Gimpel, 2016) as the activation function for hidden units. To extend it to the complex domain, we simply employ **split-GeLU**, which activates the real and imaginary parts of the input with a GeLU function:
\[\text{split-GeLU}(z)=\text{GeLU}(\mathfrak{Re}(z))+i\text{GeLU}(\mathfrak{ Im}(z)) \tag{6}\]
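In code, split-GeLU is a one-line channel-wise operation, sketched here with torch's complex dtype for brevity (an assumption about the representation, not the actual implementation).

```
import torch
import torch.nn.functional as F

def split_gelu(z: torch.Tensor) -> torch.Tensor:
    # Eq. (6): GeLU applied independently to the real and imaginary channels
    return torch.complex(F.gelu(z.real), F.gelu(z.imag))

out = split_gelu(torch.randn(2, 4, dtype=torch.cfloat))
```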
**Layer normalization.** For a real vector, standard layer normalization rescales the elements to zero mean and unit variance, and applies an affine transformation to the rescaled values. Similarly, one can directly compute the mean \(\bar{z}\) and variance \(\sigma_{z}^{2}\) for a set of complex numbers \(\mathbf{z}=\{z_{j}\}_{j=1}^{n}\) by Eq. 2. The complex layer normalization function becomes
\[\text{complex-LN}(z)=\frac{z-\bar{z}}{\sigma_{z}}\times a+b. \tag{7}\]
Complex-LN is slightly different from applying layer normalization separately to the real and imaginary channels. Both normalize the mean value of the inputs to zero, but they bring different variances to the normalized values, and hence lead to different outputs. To ensure the [CLS] token is a legal quantum state, we unit-normalize the hidden vector of the [CLS] token and apply complex-LN to the remaining tokens.
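A sketch of complex-LN using the complex mean and variance of Eq. (2) is shown below; the affine parameters are taken to be real here, which is an assumption.

```
import torch
import torch.nn as nn

class ComplexLayerNorm(nn.Module):
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.a = nn.Parameter(torch.ones(dim))
        self.b = nn.Parameter(torch.zeros(dim))
        self.eps = eps

    def forward(self, z):
        # complex mean and (real, non-negative) variance of Eq. (2), per feature vector
        mean = z.mean(dim=-1, keepdim=True)
        var = ((z - mean) * (z - mean).conj()).mean(dim=-1, keepdim=True).real
        return (z - mean) / torch.sqrt(var + self.eps) * self.a + self.b   # Eq. (7)

ln = ComplexLayerNorm(8)
out = ln(torch.randn(2, 8, dtype=torch.cfloat))
```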
### Network Training
**Optimization.** Most recent works (Li et al., 2019; Yang et al., 2020; Tay et al., 2019) implement complex-valued NNs with double-sized real networks, and apply classical backpropagation to update their real and imaginary parts simultaneously. However, because of non-holomorphic functions (Hirose, 2003), this can yield wrong gradients of complex weights, and the effectiveness metrics of a complex-valued NN may not reflect its true performance. Therefore, we use the Wirtinger Calculus (Kreutz-Delgado, 2009) to update complex-valued parameters, which explicitly computes the gradient with respect to each complex weight. We are the first to adapt AdamW (Loshchilov and Hutter, 2017), the most popular optimizer, for complex weights. AdamW computes the second raw moment (i.e., uncentered variance) of the gradient by averaging the squared gradients in the real case. In the complex domain, however, the variance should be computed by _multiplying the gradient with
its conjugate._ We modify AdamW accordingly to fix this problem and get the correct second raw moment for complex gradients. We highlight the difference between the real and complex AdamW optimizers in Alg.1.
```
1: Given \(\alpha=0.001\), \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), \(\epsilon=10^{-8}\), \(\lambda\in\mathbb{R}\)
2: Initialize time step \(t\leftarrow 0\), parameter \(\boldsymbol{\theta}_{t=0}\in\mathbb{R}^{n}\) (real case) or \(\boldsymbol{\theta}_{t=0}\in\mathbb{C}^{n}\) (complex case), first moment \(\textbf{m}_{t=0}\leftarrow\textbf{0}\), second moment \(\textbf{v}_{t=0}\leftarrow\textbf{0}\), schedule multiplier \(\eta_{t=0}\in\mathbb{R}\)
3: while stopping criterion is not met do
4:   \(t\leftarrow t+1\)
5:   \(\nabla f_{t}(\boldsymbol{\theta}_{t-1})\leftarrow\text{SelectBatch}(\boldsymbol{\theta}_{t-1})\)
6:   \(\textbf{g}_{t}\leftarrow\nabla f_{t}(\boldsymbol{\theta}_{t-1})+\lambda\boldsymbol{\theta}_{t-1}\)
7:   \(\textbf{m}_{t}\leftarrow\beta_{1}\textbf{m}_{t-1}+(1-\beta_{1})\textbf{g}_{t}\)
8:   \(\textbf{v}_{t}\leftarrow\beta_{2}\textbf{v}_{t-1}+(1-\beta_{2})\,\textbf{g}_{t}\odot\textbf{g}_{t}\)            (real AdamW)
9:   \(\textbf{v}_{t}\leftarrow\beta_{2}\textbf{v}_{t-1}+(1-\beta_{2})\,\textbf{g}_{t}\odot\overline{\textbf{g}_{t}}\)   (complex AdamW)
10:  \(\hat{\textbf{m}}_{t}\leftarrow\textbf{m}_{t}/(1-\beta_{1}^{t})\)
11:  \(\hat{\textbf{v}}_{t}\leftarrow\textbf{v}_{t}/(1-\beta_{2}^{t})\)
12:  \(\eta_{t}\leftarrow\text{SetScheduleMultiplier}(t)\)
13:  \(\boldsymbol{\theta}_{t}\leftarrow\boldsymbol{\theta}_{t-1}-\eta_{t}(\alpha\hat{\textbf{m}}_{t}/(\sqrt{\hat{\textbf{v}}_{t}}+\epsilon)+\lambda\boldsymbol{\theta}_{t-1})\)
14: end while
15: Return \(\boldsymbol{\theta}_{t}\)
```
**Algorithm 1** AdamW for real numbers and AdamW for complex numbers; the two variants differ only in the second-moment update (line 8 versus line 9).
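The essential change, line 8 versus line 9 above, can be sketched as follows; this fragment is illustrative and not the full CAdamW implementation.

```
import torch

def second_moment_update(v, g, beta2=0.999):
    # real AdamW averages g*g; complex AdamW must average g*conj(g),
    # which is real and non-negative, so sqrt(v) in the update stays well defined
    if torch.is_complex(g):
        return beta2 * v + (1 - beta2) * (g * g.conj()).real
    return beta2 * v + (1 - beta2) * g * g

v = torch.zeros(3)
g = torch.randn(3, dtype=torch.cfloat)
v = second_moment_update(v, g)
```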
**Weight Initialization**. By default, we initialize the real and imaginary parts of complex weights from a normal distribution with zero mean and variance 0.01.
### Measurement as Classification
With the pre-trained quantum encoding \(\ket{\psi_{[CLS]}(\boldsymbol{x})}\), we train a task-related variational measurement for text classification. The state is passed to an authentic unitary layer, parameterized by a complex-valued square matrix \(W\) as follows:
\[H=\frac{W+W^{H}}{2},\ \ U=e^{iH}, \tag{8}\]
where \(e^{(\cdot)}\) stands for matrix exponential. The resulting \(U\) is guaranteed to be a square unitary matrix, supporting a trainable unitary transformation of [CLS] state \(\ket{\psi^{\prime}_{[CLS]}(\boldsymbol{x})}=U\ket{\psi_{[CLS]}(\boldsymbol{x })}\). Finally, \(\ket{\psi^{\prime}_{[CLS]}(\boldsymbol{x})}\) is measured along each basis state, and the output probability vector is linearly transformed to produce the target class labels. The variational measurement head is trained with the previous transformer layers in a fine-tuning task.
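A sketch of this trainable unitary transformation is given below; it assumes that torch.matrix_exp accepts complex input, as it does in recent PyTorch releases, and the initialisation scale is illustrative.

```
import torch
import torch.nn as nn

class UnitaryLayer(nn.Module):
    """Eq. (8): H = (W + W^H)/2 is Hermitian, so U = exp(iH) is unitary."""
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(dim, dim, dtype=torch.cfloat))

    def forward(self, psi):
        H = 0.5 * (self.W + self.W.conj().T)
        U = torch.matrix_exp(1j * H)
        return psi @ U.T          # |psi'> = U |psi>, norm preserved

layer = UnitaryLayer(8)
psi = torch.randn(8, dtype=torch.cfloat)
psi = psi / psi.norm()
psi_rot = layer(psi)              # still (approximately) unit norm
```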
At this stage, the model operates in a hybrid classical-quantum fashion: the multi-layer Transformer encoder is executed on a classical computer to obtain the encoded state \(\ket{\psi_{[CLS]}(\boldsymbol{x})}\). The state is then passed to a quantum device to compute the classification probabilities for each class. Finally, we compute the cross-entropy loss against the true class labels, and the network weights are updated accordingly with the **CAdamW** optimizer on a classical computer. Compared to QSANN (Li et al., 2022), this model only requires switching once between quantum and classical hardware, so it is far more practical for hybrid classical-quantum training.
## 5 Experiment
We pre-train QBERT on BOOKCORPUS and English WIKIPEDIA and evaluate it on the GLUE benchmark, following standard practice. To examine its effectiveness, we compare it with the classical BERT (a.k.a. **BERT-base**) and a complex-valued BERT (a.k.a. **CVBERT-base**). **CVBERT-base** has the same settings as _QBERT-base_, except that the [CLS] token is layer normalized by _complex-LN_, and the NSP head is not replaced by quantum-compatible structures in pre-training and fine-tuning. We plot their learning curves during pre-training, and compute their effectiveness on the GLUE datasets. Following established practice, **GLUE scores** are calculated by averaging the performance values on all GLUE datasets (Wang et al., 2018).
To ensure a fair comparison, we align the parameter number of all three models. The models have a 12-layer Transformer structure with 12 heads in each layer. **BERT-base** has a model dimension \(d_{model}=768\) and a hidden size of \(d_{hidden}=3072\). Since a complex-valued NN has twice the number of parameters as its real-valued counterpart, the complex-valued models **CVBERT-base** and **QBERT-base** have halved hidden dimension \(d_{hidden}=1536\) and same model dimension \(d_{model}=768\). In this way, a 768-dim unit complex vector is pre-trained as the quantum encoding \(\ket{\psi_{[CLS]}(\mathbf{x})}\). This means that the quantum classification model can be implemented as a 10-qubit circuit2. We further remove the query projection and output projection layers \(W^{Q}\), \(W^{O}\) from all its transformer layers, and tie the input embedding lookup table with the MLM projection matrix. As shown in Tab. 1, the real, complex and quantum-compatible BERT models are comparable in size.
Footnote 2: Since the dimensionality is not a power of 2, the capacity of qubits is not fully exploited. We aim at conducting a fair comparison with the other BERT models.
To empirically check the gain in representation capacity brought about by the classical-quantum transfer learning paradigm, we compare the fine-tuning performance of **QBERT-base** with **DisCoCat** (Lorenz et al., 2021), the architecture for QNLP models. However, due to the scalability issue, **DisCoCat** can only work at a low dimensionality to be able to handle long sentences in the datasets. We follow the original setting in Lorenz et al. (2021), which uses at most 3 qubits to represent each token. We also implement two other end-to-end _quantum-like_3 models to simulate the performance of quantum models at a comparable scale to **QBERT-base**. The **QCLS-end2end** model embeds each word to a complex-valued vector and normalizes the average of the word vectors as the sequence encoding. The sequence state is passed to a variational measurement to produce classification labels. The **QCLS-transformer** model has a similar structure to **QBERT-base** but is directly trained on the text classification dataset with randomly initialized complex-valued token embeddings. We try \(N\in\{3,6,12\}\) transformer layers in the **QCLS-transformer** architecture. In the absence of the source code of **QSANN** (Li et al., 2022), the performance of **QCLS-transformer** can serve as an estimate of **QSANN**, since they have similar attention mechanisms.
Footnote 3: They are not strict quantum models, since the sentence encoding in **QBERT-base** and attention components in the **QCLS-transformers** must be implemented on a classical computer. However, they are good estimates of how a quantum model of comparable size to **QBERT-base** performs on text classification.
**DisCoCat** is implemented with lambeq Kartsaklis et al. (2021), an open-source, modular, extensible high-level Python library for experimental Quantum Natural Language Processing (QNLP). We apply **DisCoCat** to SST and CoLA, the text classification datasets in GLUE, and both training and prediction are simulated classically, with CAdamW and a batch size of 32. See App. B for the source code. The remaining models are pre-trained on 8 Tesla V100 GPU cards, at a batch size of 512 and an initial learning rate of 1e-4. Fine-tuning models are executed on a single Tesla V100 card, at a batch size of 128 and an initial learning rate of 1e-3. All models are implemented in PyTorch 1.9.0.
**Overall Result**. Fig. 3 presents the learning curves of the three BERT models in pre-training, while Tab. 1 compares the accuracy of all above-mentioned models on the GLUE datasets. **QBERT-base** has a learning curve close to those of **CVBERT-base** and **BERT-base**, with small relative drops of 4.5% and 4.9% with respect to the two models, indicating that the quantum-compatible constraints on the network implementation do little harm to performance. More importantly, remarkable gaps in GLUE scores appear between **QBERT-base** and all end-to-end quantum classification models. By GLUE score, **QBERT-base** outperforms **QCLS-end2end** by 61.8% and **QCLS-transformer** with a minimum gap of 48.2%. Due to its low dimensionality, **DisCoCat** is even inferior to the QCLS models by a large margin, and it is unfair to compare it with **QBERT-base**. However, the margin between **QBERT-base** and the QCLS models does indicate the enormous benefit to the representation capacity of QNLP models brought about by the pre-trained quantum encoding.

Table 1: Results of **QBERT-base** on GLUE in comparison to classical BERT models and end-to-end QNLP models. The evaluation metric for each dataset is shown in parentheses below the dataset name. Performance values on the development set are reported. The relative differences between **QBERT-base** and each model are reported in parentheses in the last column.
**CAdamW vs. RAdamW**. We compare the complex-valued AdamW optimizer with the original AdamW optimizer (RAdamW) for real numbers. For a fair comparison, we use the two optimizers to pre-train the same complex-valued language model. As shown in Fig. 4, CAdamW converges faster than RAdamW and to a lower loss in pre-training. Therefore, the improved AdamW optimizer is indeed superior to RAdamW for training complex-valued models in practice.
**Quantum Simulation**. The experimental results are acquired by classical simulation. However, as demonstrated in App. A, the fine-tuning network can be converted to a quantum circuit implemented with the qiskit4 toolbox, and the two networks have identical behaviours. This implies that the reported values in the table can be obtained by hybrid classical-quantum training. The authentic classical-quantum hybrid simulation of QBERT is left for future work.
Footnote 4: [https://qiskit.org/](https://qiskit.org/)
## 6 Conclusion
We have presented the first classical-quantum transfer learning scheme for quantum natural language processing (QNLP). By carefully designing the pre-trained model architecture, we leverage strong pre-trained language models to enhance the capacity of QNLP models. Empirical evaluation suggests that a 50% to 60% relative improvement in effectiveness can be brought about by the pre-trained quantum language encoding. Besides the main finding, we proposed the first AdamW optimizer for training complex-valued BERT models, and we believe it will be beneficial for training other complex-valued neural networks.
Figure 4: Learning curves of real and complex Adam optimizers to train the same complex-valued model.
Figure 3: Learning curves of real, complex and quantum BERT models.
The applicability of our model is limited in that current quantum technology cannot support authentic classical-quantum hybrid training for fine-tuning QBERT on downstream NLP tasks. However, we believe that the classical-quantum transfer mechanism and pre-trained models are necessary for scalable QNLP models, and this work has made the crucial first step by demonstrating the enormous potential of pre-trained quantum encodings. We expect that future advances in quantum technologies will make our approach feasible on real quantum computers.
|
2303.15192 | Mean first passage time of active fluctuating membrane with stochastic
resetting | We study the mean first passage time of a one-dimensional active fluctuating
membrane that is stochastically returned to the same flat initial condition at
a finite rate. We start with a Fokker Planck equation to describe the evolution
of the membrane coupled with an Ornstein-Uhlenbeck type of active noise. Using
the method of characteristics, we solve the equation and obtain the joint
distribution of the membrane height and active noise. In order to obtain the
mean first-passage time (MFPT), we further obtain a relation between the MFPT
and a propagator that includes stochastic resetting. The derived relation is
then used to calculate it analytically. Our studies show that the MFPT
increases with a larger resetting rate and decreases with a smaller rate, i.e.,
there is an optimal resetting rate. We compare the results in terms of MFPT of
the membrane with active and thermal noises for different membrane properties.
The optimal resetting rate is much smaller with active noise compared to
thermal. When the resetting rate is much lower than the optimal rate, we
demonstrate how the MFPT scales with resetting rates, distance to the target,
and the properties of the membranes. | Tapas Singha | 2023-03-27T13:27:25Z | http://arxiv.org/abs/2303.15192v2 | # Mean first passage time of active fluctuating membrane with stochastic resetting
###### Abstract
We study the mean first passage time of a one-dimensional active fluctuating membrane that is stochastically returned to the same flat initial condition at a finite rate. We start with a Fokker-Planck equation to describe the evolution of the membrane coupled with an Ornstein-Uhlenbeck type of active noise. Using the method of characteristics, we solve the equation and obtain the joint distribution of the membrane height and active noise. In order to obtain the mean first-passage time (MFPT), we further obtain a relation between the MFPT and a propagator that includes stochastic resetting. The derived relation is then used to calculate it analytically. Our studies show that the MFPT increases with a larger resetting rate and decreases with a smaller rate, i.e., there is an optimal resetting rate. We compare the results in terms of MFPT of the membrane with active and thermal noises for different membrane properties. The optimal resetting rate is much smaller with active noise compared to thermal. When the resetting rate is much lower than the optimal rate, we demonstrate how the MFPT scales with resetting rates, distance to the target, and the properties of the membranes.
## I Introduction
A biological membrane comprises a variety of proteins and ion channels that interact with the intra- and extra-cellular environments. The proteins and ion channels consume energy via the hydrolysis of adenosine triphosphate (ATP) and exert force on the membrane[1]. On average, the exerted active force can be either zero or nonzero, depending on the exact problem. The interplay of this force and the mechanical properties of the membrane, such as bending rigidity and tension, determines the shape of the membrane and the movement of a cell.
In order to perform a specific function, a cell often needs to reach a certain target. When it moves with an average velocity, calculating the time it will take to reach a target seems straightforward. We consider a scenario in which a cell finds its static single target only through membrane fluctuations. In this case, it is interesting to understand the time a membrane takes to reach its target for the first time. As the membrane fluctuates either towards or away from the target, the reaching time is a stochastic variable, and in a statistical sense, the mean first-passage time may be a sensible measure for the process.
The mean first-passage time (MFPT) is relevant in a variety of physical, chemical, and biological processes ranging from species extinctions in ecology[2; 3] to molecular processes. A simple example of MFPT is the average time a forager takes to find food or other resources for the first time[4]. In biology, the immune cells search for cells with antigens. Therefore, it is relevant in a range of length scales and becomes more important as the need to understand complex systems increases.
Because the underlying stochastic dynamics of a fluctuating system can cause the MFPT to be infinitely long, depending on the system details, several search strategies have been proposed over the last two decades[5; 6; 7]. One of the most important strategies is stochastic resetting, in which the searcher is returned to its initial state at a fixed rate. This may enhance the likelihood of finding the target because the searcher re-explores its surroundings, which reduces the likelihood of wandering far away from the target. Stochastic resetting was first proposed for a single Brownian particle[8; 9; 10]; subsequently, numerous problems have been studied in other fields, such as population expansion in fluctuating environments[11] and search algorithms in computer science[12]. Several variations of resetting[13] have also been investigated, including resetting with finite velocity[14], which may be useful in real life.
A passive membrane driven by thermal noise satisfies the fluctuation-dissipation theorem. A biological membrane, on the other hand, transports ions through its embedded ion pumps, which consume ATP and cause local force (or active noise) on the membrane. Together with thermal noise, this active noise enhances the amplitude of membrane fluctuations, which can be experimentally quantified by the effective temperature[1; 15]. Unlike a passive membrane, a membrane that actively participates in biological processes does not satisfy the fluctuation-dissipation theorem.

Figure 1: Schematic diagram of fluctuating active (curvy, solid black line) and passive (curvy, dashed blue line) membranes and their target (shown by a solid green circle). The membrane is reset to its initial height \(h=h_{0}\) (shown by a straight-dashed red line) at a finite rate.
Though the passive surface growth with stochastic resetting has been studied[16], the dynamics of a membrane with active fluctuations have yet to be investigated. Importantly, the MFPT, which is fascinating in the context of searching, remains elusive. In this work, we answer the question: how long does it take for a membrane to reach its single point target when a membrane is reset to its flat initial height with a finite rate? We illustrate the scenario in the schematic diagram shown in Fig. (1).
To this end, we first derive the propagator for the coupled dynamical equations for an active membrane. Further, we derive a general relation between the MFPT and a propagator under resetting. Using the relation and the derived propagator for the height, we obtain the MFPT. Our studies reveal that there exists an optimal resetting rate at which MFPT becomes minimum. We demonstrate how, for different membrane properties, MFPT scales with resetting rates and target heights, and how an optimal resetting rate arises. Finally, we compare the MFPT for active and passive membranes.
The resetting of a membrane may be related to the type of motility in which a part of the cell membrane gets detached from its cortex and fluctuates, known as a bleb. After that, the cortical actin filaments reach the detached membrane and bring the membrane back to its initial position[17; 18]. The membrane may grow again and find its target; however, the underlying dynamics remain unclear.
This work is structured as follows: The model is presented in Section II. The derivation of the single-height distribution is described in Section III. Section IV depicts the connection between the mean first-passage time and the propagator. In the following section, we calculate the height-height correlations for different cases. Section VI presents the MFPT for a variety of cases in one spatial dimension. Finally, Section VII presents discussion and conclusions.
## II Model
We are interested in understanding how the symmetric membrane fluctuations eventually drive the membrane to reach a target. We consider a membrane with the symmetric distribution of all membrane components (such as lipids and proteins) and the same environment on both sides of the membrane that may ensure symmetric membrane fluctuations and, thereby zero spontaneous curvature. Here we first consider the free energy for a symmetric membrane [19] expressed as
\[\mathcal{F}[h]=\int dx\left\{\frac{\nu}{2}(\nabla h)^{2}+\frac{\kappa}{2}( \nabla^{2}h)^{2}\right\} \tag{1}\]
where \(\nu\) and \(\kappa\) are the tension and the bending rigidity of the membrane, respectively. We study a simple dynamical equation for the height field \(h(x,t)\) in one spatial dimension, written as \(\Gamma\partial h/\partial t=-\delta\mathcal{F}[h]/\delta h+\eta(x,t)+av(x,t)\), where \(\eta(x,t)\) is the thermal noise with zero average and \(\langle\eta(x,t)\eta(x^{\prime},t^{\prime})\rangle=2D\delta(x-x^{\prime})\delta(t-t^{\prime})\). The coefficient \(D\) is the strength of the thermal noise. From here onwards, we rescale time \(t\) by \(\Gamma\) and continue to denote it by \(t\). The last term \(v(x,t)\) arises from the fluctuating force of ion pumps/channels and the proteins that affect the membrane dynamics.
\[\frac{\partial v(x,t)}{\partial t}=-\frac{v}{\tau_{a}}+\mu(x,t) \tag{2}\]
where \(\mu(x,t)\) is the Gaussian noise with zero average and \(\langle\mu(x,t)\mu(x^{\prime},t^{\prime})\rangle=2D_{a}\delta(x-x^{\prime}) \delta(t-t^{\prime})\), and \(\tau_{a}\) is the relaxation time of an active noise field \(v\). The strength of the active fluctuations \(D_{a}\) depends on the available ATP concentration, and the density of the protein pumps [20] which are taken to be constant in space and time. As there are no spatial derivatives or other functions of space in Eq. (2), \(v_{i}\) acts locally on the membrane \(h_{i}\). In contrast to thermal noise, active noise \(v_{i}\) has a temporal correlation with a correlation time \(\tau_{a}\) resulting in membrane height \(h_{i}\) being kicked for a much longer period of time than that of thermal noise. To put it another way, the active noise correlation effectively takes the form of the thermal noise correlation in the limit \(\tau_{a}\to 0\).
When a membrane is surrounded by fluid, the motion at one point of the membrane affects the motion at the other point via the fluid medium, which is known as hydrodynamic interaction. However, in one spatial dimension, the effect of the hydrodynamic interactions on the relaxation dynamics is marginal[21; 22]. We, therefore, neglect the hydrodynamic interactions and write the dynamical equation for membrane height as
\[\frac{\partial h(x,t)}{\partial t}=\nu\,\frac{\partial^{2}h}{\partial x^{2}} -\kappa\,\frac{\partial^{4}h}{\partial x^{4}}+a\,v(x,t)+\eta(x,t). \tag{3}\]
In the above equation, bending rigidity \(\kappa\) and tension \(\nu\) determine a length scale \(\ell_{c}=\sqrt{\kappa/\nu}\). As a result, when \(\ell>\ell_{c}\), \(\nu\) dominates membrane dynamics; otherwise, \(\kappa\) dominates. We study the two limits of the membrane properties, namely tension-less active membrane (TLAM) (\(\nu=0\)) in which dynamics are governed by bending rigidity \(\kappa\), and tension-dominated active membrane (TDAM) (\(\kappa=0\)) in which dynamics are governed by tension \(\nu\). To study active dynamics, we neglect the thermal noise \(\eta\) compared with active noise \(v\), and we study the membrane dynamics separately with thermal noise.
The growth of the height fluctuations may vary over time. For instance, the initial height fluctuation dynamics differ from the late-time dynamics. The mean-squared width, which is a statistical measure of height fluctuations, grows with \(t^{\beta}\) for \(t\ll L^{z}\), where \(\beta\) and \(z\) are the
growth and the dynamic exponents, respectively, and \(L\) represents the size of the system. For \(t\gg L^{z}\), the system reaches its steady state, and mean-squared width no longer depends on time \(t\), rather on \(L^{\chi}\) where \(\chi\) is the roughness exponent of the membrane [23]. The dynamic exponent \(z\) is determined by the properties of the membrane; for example, when tension dominates membrane relaxation (\(\nu\neq 0\) and \(\kappa=0\)) the dynamic exponent \(z=2\) and when bending rigidity dominates membrane relaxation (\(\kappa\neq 0\) and \(\nu=0\)), \(z=4\). For the active system, the timescale of interest is expressed as \(t\ll\tau_{a}\ll L^{z}\) for \(L\to\infty\).
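To make the setup concrete, a minimal numerical sketch of the discretised coupled equations (2) and (3) with Poissonian resetting is given below; it is not taken from the analysis of this work, and all parameter values are illustrative.

```
import numpy as np

L, dt, steps = 128, 1e-3, 20000
nu, kappa, a = 1.0, 0.0, 1.0          # tension-dominated case
D, Da, tau_a = 0.0, 1.0, 50.0         # active noise only
r, h0 = 0.01, 0.0                     # resetting rate and flat initial height

rng = np.random.default_rng(0)
h = np.full(L, h0)
v = np.zeros(L)

def lap(f):
    # periodic discrete Laplacian
    return np.roll(f, 1) + np.roll(f, -1) - 2 * f

for _ in range(steps):
    if rng.random() < r * dt:          # stochastic resetting to the flat profile
        h[:] = h0
    force = nu * lap(h) - kappa * lap(lap(h))
    h += dt * (force + a * v) + np.sqrt(2 * D * dt) * rng.standard_normal(L)
    v += -dt * v / tau_a + np.sqrt(2 * Da * dt) * rng.standard_normal(L)

print("mean-squared height:", np.mean(h**2))
```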
## III Height distribution
We first aim to derive the propagator for the coupled equations given in Eqs. (2), and (3). The discretized versions of Eqs. (2) and (3) are written as
\[\frac{\partial}{\partial t}\begin{bmatrix}h_{i}(t)\\ v_{i}(t)\end{bmatrix}=\begin{bmatrix}-\sum_{j}\Lambda_{ij}&a\\ 0&-\Lambda^{\prime}\end{bmatrix}\begin{bmatrix}h_{j}(t)\\ v_{i}(t)\end{bmatrix}+\begin{bmatrix}\eta_{i}(t)\\ \mu_{i}(t)\end{bmatrix} \tag{4}\]
where \(\Lambda_{ij}=-(\nu\Delta_{ij}-\kappa\Delta_{ij}^{2})\), \(\Lambda^{\prime}=1/\tau_{a}\), and \(\sum_{j}\) in the above equation includes the nearest neighbors. Further, we write the Fokker-Planck equation for the joint distribution \(\mathbf{W}(\{h\},\{v\},t|h^{0},v^{0},t_{0})\) as
\[\frac{\partial\mathbf{W}}{\partial t}=[-\sum_{i,j=1}^{N}\frac{ \partial}{\partial h_{i}}(-\Lambda_{ij}h_{j}+a\,\delta_{ij}\,v_{j})+\sum_{i,j =1}^{N}\frac{\partial}{\partial v_{i}}(\Lambda^{\prime}_{ij}v_{j})\] \[+\sum_{i,j=1}^{N}\frac{\partial^{2}}{\partial h_{i}\partial h_{j }}D_{ij}+\sum_{i,j=1}^{N}\frac{\partial^{2}}{\partial v_{i}\partial v_{j}}D_{ ij}^{a}]\ \mathbf{W}(\tilde{h},\tilde{v},t) \tag{5}\]
where \(N\) is the total number of sites. The initial conditions read \(\mathbf{W}(\tilde{h},\tilde{v},t_{0}|h^{0},v^{0},t_{0})=\delta(\tilde{h}- \tilde{h}^{0})\ \delta(\tilde{v}-\tilde{v}^{0})\) where \(\tilde{h}\equiv\{h_{i}\}\) and \(\tilde{v}\equiv\{v_{i}\}\). Using the method of characteristics, we solve the above equation (as detailed in _Appendix-A_). Further, we integrate the distribution of initial noise field \(\tilde{v}^{0}\) as \(\int d\tilde{v}^{0}\,\delta(\tilde{v}^{0})\,\mathbf{W}(\tilde{h},t|\tilde{v}^ {0})\). We also integrate out the active noise field \(\tilde{v}\) and then obtain a marginal distribution for \(\tilde{h}\) which reads as
\[W(\tilde{h},t|0,0) =\frac{\exp{[-\frac{1}{2}\tilde{h}^{T}\mathbf{M}_{1}^{-1}\tilde{ h}]}}{(2\pi)^{L/2}\sqrt{\det\mathbf{M}_{1}}} \tag{6}\]
where covariance matrix \(\mathbf{M}_{1}\) is expressed as
\[\mathbf{M}_{1}=\left[2D_{\text{tot}}f(2\Lambda)+a^{\prime 2}(f(2\Lambda^{ \prime})-2f(\Lambda+\Lambda^{\prime}))\right]\delta_{ij} \tag{7}\]
in which \(f(\theta)=(1-e^{-\theta\,t})/\theta\), \(D_{\text{tot}}=D+a^{\prime 2}/2\), and \(a^{\prime 2}=2a^{2}D_{a}/(\Lambda^{\prime}-\Lambda)^{2}\). The above matrix is obtained with the non-stationary active noise. Considering \(a=0\) in Eq. (6), we obtain the propagator for the passive system expressed as \(W_{\text{passive}}(\tilde{h},t|0,0)=\sqrt{\frac{1}{(2\pi)^{L}}\det{\left(\frac{ \Lambda}{D\,(1-e^{-2\Lambda t})}\right)}}\,\exp{\left[-\frac{1}{2}\tilde{h}^{T }\frac{\Lambda}{D(1-e^{-2\Lambda t})}\tilde{h}\,\right]}\), which is consistent with the propagator for the Ornstein-Uhlenbeck type of particle in a harmonic potential [24].
As we define the position of the target as being at \(0\), we integrate out heights from all the sites except \(x=0\) because a single height at that point may only reach the target as overhangs are not considered. For a homogeneous system, we obtain the marginal distribution
\[W(\mathbf{0},t|h_{0},0) =\frac{1}{\sqrt{2\pi\langle h^{2}(t)\rangle}}\exp{\left[-\frac{1 }{2}\frac{h_{0}^{2}}{\langle h^{2}(t)\rangle}\right]}. \tag{8}\]
The obtained propagator in the above equation describes how a single height distribution evolves with time starting from a reference height \(h_{0}\).
## IV Mean first passage time with resetting
We next present a relation between the first-passage time and survival probability, and then investigate how the mean first-passage time is related to a propagator.
### Survival probability
Let us consider a particle driven by stochastic noise that starts at position \(h=h_{0}\) and reaches the target (\(h=0\)) for the first time at time \(T\), known as the first-passage time. The survival probability \(S(h_{0},T)\), defined as the likelihood that the searcher has not reached the target up to time \(T\), can be related to the first-passage time distribution \(F(h_{0},T)\). With this, \(F(h_{0},T)\) can be expressed as the difference between the survival probabilities at two consecutive discrete times, \(F(h_{0},T)=S(h_{0},T-1)-S(h_{0},T)\)[25]. In the continuum limit \(\Delta T\to 0\), \(F(h_{0},T)\Delta T=S(h_{0},T-\Delta T)-S(h_{0},T)\) yields
\[F(h_{0},T)=-\frac{\partial S(h_{0},T)}{\partial T}. \tag{9}\]
We next study the MFPT of a stochastic variable in one spatial dimension under stochastic resetting.
### MFPT with Resetting
We consider that the initial separation between a point on the membrane and a target is \(h_{0}\), and that the membrane takes time \(T\) to reach the target for the first time (see Fig. 1). Resetting is the immediate return of the membrane to its flat initial height \(h_{0}\) at a rate \(r\). The survival probability with resetting is denoted as \(S_{r}(h_{0},T)\) and the corresponding first-passage distribution as \(F_{r}(h_{0},T)\). The first-passage distribution and the survival probability with resetting are related in the following way [26]:
\[F_{r}(h_{0},T)=-\frac{\partial S_{r}(h_{0},T)}{\partial T}. \tag{10}\]
Therefore, the mean first-passage time with resetting is defined as \(\langle T\rangle=\int_{0}^{\infty}dT\,TF_{r}(h_{0},T).\) Using Eq. (10) in the definition of the MFPT, we get
\[\langle T\rangle=\int_{0}^{\infty}dT\,S_{r}(h_{0},T). \tag{11}\]
Now, we consider the Laplace transform (LT) of survival probability, which reads as
\[\widetilde{S}_{r}(h_{0},s)=\int_{0}^{\infty}dT\,e^{-s\,T}\,S_{r}(h_{0},T). \tag{12}\]
When \(T\to\infty\), we consider that the searcher finds the target with probability 1, which results in \(S_{r}(h_{0},\infty)=0\). Substituting \(s=0\) in Eq. (12) and equating with Eq. (11), we obtain
\[\langle T\rangle=\widetilde{S}_{r}(h_{0},0), \tag{13}\]
where \(S_{r}(h_{0},\infty)=0\) is considered. Following the backward Kolmogorov equation for first-passage time distribution, we write
\[F_{r}(h_{0},T)=e^{-rT}F(h_{0},T)+\int_{0}^{T} dt^{\prime}S_{r}(h_{0},T-t^{\prime})\,r\] \[\times e^{-rt^{\prime}}F(h_{0},t^{\prime}), \tag{14}\]
where \(F_{r}(h_{0},T)\) is the first-passage time distribution with resetting. The first term on the right-hand side of the above equation indicates that there has been no resetting in time \(T\), while the second term denotes that the last resetting took place at a time \(t^{\prime}\).
Using Eq. (10), we multiply \(e^{-sT}\) on both sides of the above equation and finally integrate over \(T\). This leads to
\[\widetilde{S}_{r}(h_{0},s)=\frac{1-\widetilde{F}(h_{0},r+s)}{s+r \widetilde{F}(h_{0},r+s)} \tag{15}\]
where \(\widetilde{F}(h_{0},r+s)\) is the LT of the first-passage time distribution _without_ resetting. In the above equation, we use \(S_{r}(h_{0},0)=1\), i.e., initially (\(t=0\)) the survival probability is 1. Substituting \(s=0\) in the above equation, and using Eq. (13), we obtain
\[\langle T\rangle=\frac{1}{r}\left(\frac{1}{\widetilde{F}(h_{0},r) }-1\right). \tag{16}\]
Following the renewal equation, we write the relation between the first-passage time distribution and the propagator as [3; 27]
\[W(\mathbf{0},T|h_{0})=\int_{0}^{T}dt^{\prime}\,F(h_{0},t^{\prime})\,W(\mathbf{0},T-t^{\prime}|\mathbf{0})+\delta_{h_{0},0}\,\delta(T) \tag{17}\]
\[\widetilde{F}(h_{0},s)=\frac{\widetilde{W}(\mathbf{0},s|h_{0})}{ \widetilde{W}(\mathbf{0},s|\mathbf{0})} \tag{18}\]
where \(h_{0}\neq 0\), and \(s\) is the Laplace conjugate to time \(T\). Substituting the above equation in Eq. (16), we obtain
\[\langle T\rangle=\frac{1}{r}\left(\frac{\widetilde{W}(\mathbf{0},r |\mathbf{0})}{\widetilde{W}(\mathbf{0},r|h_{0})}-1\right). \tag{19}\]
Despite the fact that we discuss the case of a point on the membrane, the obtained relation is valid for any stochastic variable.
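As an illustration of Eq. (19), the sketch below evaluates the Laplace transforms by quadrature for the simple case of a free Brownian particle, whose propagator at the target is Gaussian with variance \(2Dt\); the result can be compared with the known closed form \((e^{h_{0}\sqrt{r/D}}-1)/r\) for diffusion under resetting[8; 9; 10]. The numerical scheme is an illustrative assumption, not part of the derivation above.

```
import numpy as np
from scipy.integrate import quad

D, h0, r = 1.0, 1.0, 0.5

def W(t, x):
    # free-diffusion propagator evaluated at the target
    return np.exp(-x**2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)

def laplace_W(s, x):
    # substitute t = u**2 so the integrand is smooth near the origin
    f = lambda u: 2.0 * u * np.exp(-s * u**2) * W(u**2, x)
    val, _ = quad(f, 0.0, np.inf, limit=200)
    return val

T_eq19 = (laplace_W(r, 0.0) / laplace_W(r, h0) - 1.0) / r      # Eq. (19)
T_closed = (np.exp(h0 * np.sqrt(r / D)) - 1.0) / r             # known Brownian result
print(T_eq19, T_closed)
```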
## V Height-height correlation
The auto-correlation of active proteins from Eq.(2) can be written as
\[\langle v_{i}(t)v_{j}(t^{\prime})\rangle =e^{-\Lambda^{\prime}(t+t^{\prime})}\int_{t_{0}}^{t}ds\int_{t_{0}} ^{t^{\prime}}ds^{\prime}e^{\Lambda^{\prime}(s+s^{\prime})}\langle\mu_{i}(s) \mu_{j}(s^{\prime})\rangle \tag{20}\]
where \(\langle v_{i}(t)\rangle=0\), and \(t_{0}\) is the initial time. We consider that the active proteins have already reached their steady state as they relax faster than the membrane, which is achieved by setting \(t_{0}\to-\infty\). Carrying out the integration in the above equation, we obtain
\[\langle v_{i}(t)v_{j}(t^{\prime})\rangle = D_{a}\,\frac{e^{-\Lambda^{\prime}|t-t^{\prime}|}}{\Lambda^{ \prime}}\delta_{ij}. \tag{21}\]
The matrix \(\mathbf{M}_{1}\) (given in Eq. (7)) derived from the Fokker-Planck equation and the correlation \(\langle h_{i}(t)h_{j}(t)\rangle\) obtained from the coupled Langevin equations (given in Eqs. (2),(3)) are found to coincide:
\[\langle h_{i}(t)h_{j}(t)\rangle=M_{1}^{ij}\ \delta_{ij}. \tag{22}\]
The above relation is obtained with both non-stationary and stationary-state active noise. With non-stationary active noise, the relation is shown in Appendix-A, and with stationary-state active noise, the height correlation is obtained as
\[\langle h_{i}^{2}(t)\rangle=2\left(D+D_{a}^{\prime}\right)f(2 \Lambda)-2D_{a}^{\prime}\,f(\Lambda+\Lambda^{\prime}), \tag{23}\]
where \(D_{a}^{\prime}=\frac{a^{2}D}{\Lambda^{\prime}(\Lambda^{\prime}-\Lambda)}.\) When \(\Lambda\) is considered as the spring constant of a harmonic well, the above equation is consistent with the displacement correlation of a particle in a harmonic well with an Ornstein-Uhlenbeck type of active noise [28]. For \(\Lambda\to 0\), Eq. (23) becomes equivalent to the displacement correlation of the active Ornstein-Uhlenbeck type of noise. Further, considering \(\Lambda^{\prime}=1/\tau_{a}\), the above correlation can be obtained as follows: \(\lim_{\Lambda\to 0}\langle h_{i}^{2}(t)\rangle=2t\left(D+(a\tau_{a})^{2}D_{a} \left[1-\frac{\tau_{a}}{t}(1-e^{-t/\tau_{a}})\right]\right)\) which agrees with ref.[29]. In the passive case (\(a=0\)), the above correlation becomes \(2Dt\), which is the mean-square displacement of a Brownian particle in one dimension.
In order to investigate the MFPT of fluctuating membranes, we first employ the relation between the MFPT and the propagator given in Eq. (19). We next aim to obtain the propagator with active or thermal noise (as
given in Eq. (8)), which requires the single point height-height correlation. To this end, we begin with the Fourier transform of Eq.(3) which reads as
\[\frac{\partial h(q,t)}{\partial t}=-\frac{1}{\tau_{q}}h(q,t)+\eta(q,t)+a\,v(q,t) \tag{24}\]
where \(\tau_{q}=(\nu q^{2}+\kappa q^{4})^{-1}\) is the membrane relaxation time. Using the flat initial condition, we get
\[h(q,t)=e^{-t/\tau_{q}}\int_{0}^{t}dt^{\prime}\,e^{t^{\prime}/\tau_{q}}\,\left[ \eta(q,t^{\prime})+a\,v(q,t^{\prime})\right]. \tag{25}\]
We also obtain the auto-correlation of the stationary state active noises as
\[\langle v(q,t)v(q^{\prime},t^{\prime})\rangle=D_{a}\tau_{a}\,(2\pi)\delta(q+q^ {\prime})\,e^{-|t^{\prime}-t|/\tau_{a}}. \tag{26}\]
The inverse Fourier transform of the height-height correlation can be written as
\[\langle h(x,t)^{2}\rangle =\int_{-\infty}^{\infty}\frac{dq}{(2\pi)}\int_{-\infty}^{\infty} \frac{dq^{\prime}}{(2\pi)}\,e^{i(q+q^{\prime})x}\,\langle h(q,t)h(q^{\prime},t)\rangle \tag{27}\]
in which the correlation can be calculated either with active or thermal noises.
### Tension-dominated membrane (\(\kappa=0\))
#### Active
Using the active noise correlation given in Eq. (26) and considering the Fourier transform of the height given in Eq. (25), we write the height-height correlation as
\[\langle h(x,t)^{2}\rangle_{\rm active}=a^{2}\,D_{a}\,\tau_{a}\int _{-\infty}^{\infty}\!\frac{dq}{(2\pi)}e^{-2\nu tq^{2}}\int_{0}^{t}du\int_{0}^ {t}du^{\prime}\] \[\times e^{\nu(u+u^{\prime})q^{2}}e^{-|u-u^{\prime}|/\tau_{a}}. \tag{28}\]
We perform integration over \(q\) and obtain
\[\langle h(x,t)^{2}\rangle_{\rm active} =\frac{a^{2}\,D_{a}\,\tau_{a}}{2\sqrt{\pi\nu}}\,\int_{0}^{t}du \int_{0}^{t}du^{\prime}\,\frac{e^{-|u-u^{\prime}|/\tau_{a}}}{\sqrt{2t-(u+u^{ \prime})}}. \tag{29}\]
Next, we carry out the integration over \(u\) and \(u^{\prime}\) which results in
\[\langle h(x,t)^{2}\rangle_{\rm active}=\frac{a^{2}D_{a}\tau_{a}^{3/2}}{\sqrt{\nu}}\left(\left[\sqrt{\frac{2t\tau_{a}}{\pi}}-\frac{\tau_{a}}{2}\,{\rm Erf}\!\left(\sqrt{\frac{t}{\tau_{a}}}\right)\right]+\frac{\tau_{a}}{2}e^{-\frac{2t}{\tau_{a}}}\left[{\rm Erf}\!\left(\sqrt{\frac{t}{\tau_{a}}}\right)-{\rm Erf}\!\left(\sqrt{\frac{2t}{\tau_{a}}}\right)\right]\right). \tag{30}\]
In this study, we are interested in the regime in which active noise drives the membrane. As we see above, depending on time \(t\) and \(\tau_{a}\), the active noise correlation given in Eq. (30) has two distinct regimes. For \(t\ll\tau_{a}\), the above equation is simplified to
\[\langle h(x,t)^{2}\rangle_{\rm active}\simeq\frac{4}{3}(\sqrt{2}-1)\,\frac{a^ {2}\,D_{a}\tau_{a}}{\sqrt{\pi\nu}}\,t^{3/2} \tag{31}\]
and for \(t\gg\tau_{a}\)
\[\langle h(x,t)^{2}\rangle_{\rm active}\simeq a^{2}\,D_{a}\tau_{a}^{2}\,\sqrt {\frac{2t}{\pi\nu}}. \tag{32}\]
#### Passive
Similarly, we next derive the auto-correlation with thermal noise (see Appendix B) which reads as
\[\langle h(x,t)^{2}\rangle_{\rm passive}=D\sqrt{\frac{2t}{\pi\nu}}. \tag{33}\]
In the limit \(t\gg\tau_{a}\), the auto-correlation with active noise exhibits passive-like behavior, because the active noise becomes uncorrelated when the interval between two successive observations is larger than the correlation time \(\tau_{a}\). As a result, the many uncorrelated kicks of the noise \(v_{i}\) change the membrane height \(h_{i}\) in a random manner. Thus, in the long-time limit, this uncorrelated active noise becomes additive to the thermal noise, and according to the central limit theorem, the sum of these uncorrelated noises leads to a Gaussian distribution with increased variance.
Since we study the passive case separately, for active noise we restrict ourselves to the regime \(t\ll\tau_{a}\).
### Tension-less membrane (\(\nu=0\))
#### v.2.1 Active
When the membrane tension is very low compared to the bending rigidity, the membrane relaxation is dominated by \(\kappa\). We investigate how the height-height correlations scale in time for a tension-less active membrane (TLAM). We begin with the height-height correlation expressed as
\[\langle h(x,t)^{2}\rangle_{\rm active} =a^{2}\,D_{a}\,\tau_{a}\int_{-\infty}^{\infty}\frac{dq}{(2\pi)} \!\!\int_{0}^{t}du\int_{0}^{t}du^{\prime}e^{\kappa(u+u^{\prime}-2t)q^{4}}\] \[\times e^{-|u-u^{\prime}|/\tau_{a}}. \tag{34}\]
Integrating over \(q\), we obtain
\[\langle h(x,t)^{2}\rangle_{\rm active} =\Gamma\left(\frac{5}{4}\right)\frac{a^{2}D_{a}\,\tau_{a}}{\pi \kappa^{1/4}}\,\int_{0}^{t}du\int_{0}^{t}du^{\prime}\,\frac{e^{-|u-u^{\prime}|/ \tau_{a}}}{[2t-(u+u^{\prime})]^{\frac{1}{4}}}. \tag{35}\]
Next, integrating over \(u\) and \(u^{\prime}\) and considering \(t\ll\tau_{a}\), we obtain
\[\langle h(x,t)^{2}\rangle_{\rm active} \simeq(2^{3/4}-1)\frac{8\,\Gamma(\frac{1}{4})}{21}\frac{a^{2}\,D_{a }\,\tau_{a}}{\pi\kappa^{1/4}}\,\,t^{7/4}. \tag{36}\]
We now carry out the numerical integrations given in Eqs. (28) and (34) and compare the results with the analytical expressions of Eqs. (31) and (36), respectively, for \(t\ll\tau_{a}\), as shown in Fig. 2.
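The following minimal Python sketch (not the paper's code; all parameter values are illustrative) performs the same type of comparison: the double integrals of Eqs. (29) and (35) are evaluated with `scipy` and compared with the \(t\ll\tau_{a}\) asymptotes of Eqs. (31) and (36).

```python
# Minimal numerical sketch (illustrative parameters, not from the paper)
# comparing Eqs. (29) and (35) with their short-time asymptotes (31) and (36).
import numpy as np
from scipy.integrate import dblquad
from scipy.special import gamma

a, D_a, tau_a, nu, kappa = 1.0, 1.0, 500.0, 1.0, 1.0  # illustrative values

def corr_tension(t):
    # Eq. (29)
    pre = a**2 * D_a * tau_a / (2.0 * np.sqrt(np.pi * nu))
    f = lambda up, u: np.exp(-abs(u - up) / tau_a) / np.sqrt(2*t - u - up)
    return pre * dblquad(f, 0.0, t, lambda u: 0.0, lambda u: t)[0]

def corr_rigidity(t):
    # Eq. (35)
    pre = gamma(1.25) * a**2 * D_a * tau_a / (np.pi * kappa**0.25)
    f = lambda up, u: np.exp(-abs(u - up) / tau_a) / (2*t - u - up)**0.25
    return pre * dblquad(f, 0.0, t, lambda u: 0.0, lambda u: t)[0]

def asym_31(t):
    return (4/3) * (np.sqrt(2) - 1) * a**2 * D_a * tau_a * t**1.5 / np.sqrt(np.pi * nu)

def asym_36(t):
    return (2**0.75 - 1) * (8 * gamma(0.25) / 21) * a**2 * D_a * tau_a * t**1.75 / (np.pi * kappa**0.25)

for t in [0.5, 2.0, 10.0]:  # t << tau_a
    print(t, corr_tension(t), asym_31(t), corr_rigidity(t), asym_36(t))
```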
#### v.2.2 Passive
We also obtain the height-height correlation for the passive membrane as
\[\langle h(x,t)^{2}\rangle_{\rm passive} = \frac{D}{3\pi^{3/4}}\,\left[\frac{(2t)^{3}}{\pi\kappa}\right]^{1/4}. \tag{37}\]
With active noise, the height-height correlation (\(\sim t^{7/4}\)) grows much faster than with thermal noise (\(\sim t^{3/4}\)); on the other hand, the scaling with \(\kappa\) remains the same (\(\sim\kappa^{-1/4}\)) for both noises. We obtain similar results for tension-dominated membranes, where the correlation \(\langle h^{2}\rangle\sim\nu^{-1/2}\) for both noises, whereas the time dependence of the correlation is significantly affected by the active noise, as given in Eqs. (31) and (33).
## VI Mean first-passage time
In this section, we derive the MFPT for the tension-dominated active membrane (TDAM) via the Laplace transform (LT) of the propagator.
### MFPT: Tension-dominated active membrane
As the diagonal matrix \({\bf M}_{1}\) is exactly the same as \(\langle h_{i}^{2}(t)\rangle\), we substitute Eq. (31) in Eq. (8), and obtain
\[W_{\rm active}(h_{0},t)=\frac{1}{\sqrt{\pi}}\frac{1}{\alpha_{t}}\exp\left(- \frac{h_{0}^{2}}{\alpha_{t}^{2}}\right) \tag{38}\]
where \(\alpha_{t}^{-1}=\sqrt{3\sqrt{\pi\nu}}/\left(\sqrt{8a^{2}D_{a}\tau_{a}(\sqrt{2 }-1)}\ t^{3/4}\right).\) The Laplace transform of Eq. (38) yields
\[\widetilde{W}_{\rm active}(u_{a},r)=\frac{1}{\sqrt{\pi}\alpha}\frac{1}{r^{1/4 }}\,\int_{0}^{\infty}\frac{d\bar{t}}{\bar{t}^{3/4}}\exp\left(-\bar{t}-\frac{u_ {a}^{2}}{\bar{t}^{3/2}}\right) \tag{39}\]
where \(\alpha=\alpha_{t}/t^{3/4}\), \(\bar{t}=r\,t\) and \(u_{a}=r^{3/4}\,h_{0}/\alpha\). We substitute the above equation in Eq. (19), and obtain
\[\langle T_{\nu}\rangle_{\rm active}=\frac{1}{r}\left(\frac{I_{a}(0)}{I_{a}(u_ {a})}-1\right) \tag{40}\]
where
\[I_{a}(u_{a})=\int_{0}^{\infty}\frac{d\bar{t}}{\bar{t}^{3/4}}\exp\left(-\bar{t }-\frac{u_{a}^{2}}{\bar{t}^{3/2}}\right)\!. \tag{41}\]
Figure 2: (a) The integration given in Eq.(28) for tension-dominated membrane fluctuations is numerically solved for \(\tau_{a}=50\) and \(500\) (shown by the dotted-black and dashed-green lines, respectively), and the results are compared to the analytical findings given in Eq. (31) (shown by the solid-red line). (b) Similarly, the integration given in Eq.(34) for rigidity-dominated membrane fluctuations is numerically solved (shown by the dotted-black and dashed-green lines, respectively), and the results are then compared with Eq. (36) (shown by the dashed-red line).
Figure 3: The variation of the mean first-passage time \(\langle T\rangle\) with resetting rate \(r\) for a tension-dominated membrane (\(\nu\neq 0\) and \(\kappa=0\)) and fixed target height \(h_{0}\). (a) For the active membrane, we plot the MFPT obtained analytically in Eq. (40), where \(u_{a}=r^{3/4}h_{0}\,\sqrt{(3\sqrt{\pi\nu})/(8a^{2}\,D_{a}\tau_{a}(\sqrt{2}-1))}\). (b) For the passive membrane, we plot the MFPT obtained in Eq. (92), where \(u_{p}=r^{1/4}h_{0}/(2D\sqrt{2/(\pi\nu)})^{1/2}\). Here \(u_{a}\) and \(u_{p}\) are the effective coupling variables for the active (left panel) and passive (right panel) systems, respectively. In contrast to the passive membrane, where the effective coupling is \(u_{p}>1\), the active membrane has an optimal resetting rate for \(u_{a}<1\).
For \(u_{a}\to 0\),
\[I_{a}(u_{a})\simeq I_{a}(0)(1-b_{0}|u_{a}|^{1/3}+b_{1}|u_{a}|^{5/3}+b_{2}|u_{a}|^{ 2}) \tag{42}\]
where \(I_{a}(0)=\Gamma(1/4)\), and \(b_{0}=4\,\Gamma(5/6)/\Gamma(1/4)\), \(b_{1}=\frac{2^{2/3}}{\sqrt{3}\pi}\Gamma(-\frac{5}{5})\), and \(b_{2}=\frac{2}{\pi\,3^{3/4}}\Gamma(-\frac{5}{12})\Gamma(\frac{11}{12})\).
We are interested in reducing the MFPT, which indeed occurs for a small resetting rate \(r\). Thus, the MFPT simplifies to
\[\langle T_{\nu}\rangle_{\rm active}\simeq\frac{b_{0}}{r^{3/4}}\left(\frac{h_{ 0}}{\alpha}\right)^{1/3}\;\;\mbox{for}\;\;r\ll\left(\frac{\alpha}{h_{0}}\right) ^{4/3} \tag{43}\]
The above scaling suggests that \(\langle T_{\nu}\rangle_{\rm active}\sim\left(a^{2}D_{a}\tau_{a}/\sqrt{\nu} \right)^{-1/6}.\)
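As an illustration of Eqs. (40)-(43) (a sketch with arbitrarily chosen parameters, not taken from the paper), the MFPT can be evaluated numerically from the integral \(I_{a}(u_{a})\), the optimal resetting rate located on a grid, and the small-\(r\) behavior compared with Eq. (43):

```python
# Illustrative evaluation of Eqs. (40)-(41) and check of the small-r limit Eq. (43).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Illustrative parameters; alpha follows the definition below Eq. (38).
a_c, D_a, tau_a, nu, h0 = 1.0, 1.0, 50.0, 1.0, 5.0
alpha = np.sqrt(8*a_c**2*D_a*tau_a*(np.sqrt(2) - 1)/(3*np.sqrt(np.pi*nu)))

def I_a(u):
    # Eq. (41); I_a(0) = Gamma(1/4)
    if u == 0.0:
        return gamma(0.25)
    f = lambda t: t**(-0.75)*np.exp(-t - u**2/t**1.5)
    return quad(f, 0.0, np.inf)[0]

def mfpt(r):
    # Eq. (40) with u_a = r^{3/4} h_0 / alpha
    u = r**0.75*h0/alpha
    return (I_a(0.0)/I_a(u) - 1.0)/r

rs = np.logspace(-3, 2, 80)
Ts = np.array([mfpt(r) for r in rs])
print("optimal resetting rate (grid estimate):", rs[np.argmin(Ts)])

# Small-r comparison with Eq. (43), b0 = 4*Gamma(5/6)/Gamma(1/4)
b0 = 4*gamma(5/6)/gamma(0.25)
r = 1e-5
print(mfpt(r), b0*r**(-0.75)*(h0/alpha)**(1/3))
```

The two printed numbers in the last line approach each other as \(r\to 0\), while the grid scan exhibits the minimum of \(\langle T\rangle\) at an intermediate resetting rate.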
### MFPT: Tension-less active membrane
We next investigate the MFPT in which the membrane relaxation is dominated by \(\kappa\). We first write the propagator as
\[W_{\rm active}(h_{0},t)=\frac{1}{\sqrt{\pi}\beta_{t}}\exp\left(-\frac{h_{0}^{ 2}}{\beta_{t}^{2}}\right) \tag{44}\]
where \(\beta_{t}=\beta\,t^{7/8}\) and \(\beta=\sqrt{\frac{16}{21}(2^{3/4}-1)\,\Gamma(1/4)\frac{a^{2}D_{a}\tau_{a} }{\pi\kappa^{1/4}}}\). We consider the LT of the above equation, which is expressed as
\[\widetilde{W}_{\rm active}(y_{a},r)=\frac{1}{\sqrt{\pi}\beta}\frac{1}{r^{1/8} }\,\int_{0}^{\infty}\frac{d\bar{t}}{\bar{t}^{7/8}}\exp\left(-\bar{t}-\frac{y_ {a}^{2}}{\bar{t}^{\frac{7}{4}}}\right) \tag{45}\]
where \(y_{a}=r^{7/8}\,h_{0}/\beta\) and \(\widetilde{W}_{\rm active}(0,r)=\frac{1}{\sqrt{\pi}\beta}\Gamma(1/8)/r^{1/8}\). Substituting the above equation in Eq. (19), we obtain
\[\langle T_{\kappa}\rangle_{\rm active}=\frac{1}{r}\left(\frac{J_{a}(0)}{J_{a} (y_{a})}-1\right) \tag{46}\]
where
\[J_{a}(y_{a})=\int_{0}^{\infty}\frac{d\bar{t}}{\bar{t}^{7/8}}\exp\left(-\bar{t }-\frac{y_{a}^{2}}{\bar{t}^{7/4}}\right) \tag{47}\]
where \(J_{a}(0)=\Gamma(1/8)\). For \(y_{a}\ll 1\), \(J_{a}(y_{a})\) can be expressed as
\[J_{a}(y_{a})\simeq J_{a}(0)(1-c_{0}|y_{a}|^{1/7}+c_{1}|y_{a}|^{9/7}-c_{2}|y_{a }|^{2}) \tag{48}\]
where \(c_{0}=1.111\), \(c_{1}=0.294\), and \(c_{2}=0.310\). Keeping only the leading order, we obtain
\[\langle T_{\kappa}\rangle_{\rm active}\simeq\frac{c_{0}}{r^{7/8}}\left(\frac{ h_{0}}{\beta}\right)^{1/7}\;\;\mbox{for}\;\;r\ll\left(\frac{\beta}{h_{0}} \right)^{8/7} \tag{49}\]
Our study suggests that the MFPT is lowest at an intermediate value of \(r\) for all cases, as shown in Figs. (3) and (4). As \(r\) deviates from this value, the MFPT increases. This is because, when \(r\) is very large, the membrane is reset so frequently that it barely moves and cannot reach the target, whereas when \(r\) is very small, stochastic fluctuations may carry the membrane far away from the target, resulting in a significantly longer MFPT.
The MFPT scales as \(\langle T_{\kappa}\rangle_{\rm active}\sim\kappa^{1/56}\) for a rigidity-dominated active membrane, whereas it scales as \(\langle T_{\nu}\rangle_{\rm active}\sim\nu^{1/12}\) for a tension-dominated active membrane, indicating that the effect of \(\nu\) on MFPT is much stronger than the rigidity \(\kappa\).
On the other hand, active noise significantly reduces the effect of \(\kappa\) (or \(\nu\)) on the MFPT compared with thermal noise. For a rigidity-dominated membrane, the MFPT with active noise scales as \(\langle T_{\kappa}\rangle_{\rm active}\sim\kappa^{1/56}\), whereas with thermal noise it scales as \(\langle T_{\kappa}\rangle_{\rm passive}\sim\kappa^{5/24}\). Similarly, for a tension-dominated membrane, the MFPT with active noise scales as \(\langle T_{\nu}\rangle_{\rm active}\sim\nu^{1/12}\), whereas with thermal noise it scales as \(\langle T_{\nu}\rangle_{\rm passive}\sim\nu^{1/2}\).
## VII Discussion and conclusions
We study the MFPT for one-dimensional membranes under stochastic resetting with active and passive noises. Starting with the coupled equations for the membrane heights and active noises, we write a Fokker-Planck equation for the joint distribution and then solve it using the method of characteristics. The explicit solution of the Fokker-Planck equation for the joint distribution shows how the single-height distribution depends on the single-point height-height correlation (\(\langle h^{2}\rangle\)).
Our study shows that the height-height correlation with active noise grows much faster than that with thermal noise for both the tension-dominated and tension-less membranes (Table 1). Across the membrane properties, the tension-less membrane shows a larger exponent of time than the tension-dominated membrane, as shown in Table 1.
We derive a general relation between MFPT and a propagator with resetting (given in Eq. (19)). Using the relation and the obtained propagators for heights, we next analytically obtain the MFPT under stochastic resetting with active (or thermal) noise.
Finally, we demonstrate how \(\langle T\rangle\) scales with the resetting rate \(r\) and the target height \(h_{0}\), and how it differs between active and passive systems. Our study reveals that, starting from a very small resetting rate, \(\langle T\rangle\) decreases with increasing \(r\), whereas starting from a very high resetting rate, the MFPT decreases with decreasing \(r\). Since \(\langle T\rangle\) decreases as \(r\) approaches intermediate values from either extreme, there is an optimal resetting rate at which the MFPT is minimized (shown in Figs. (3) and (4)). This is explained in the following way: in the absence of resetting, the fluctuating membrane may move so far away from the target that the first-passage time may be infinitely long. With a small resetting rate, the interface is reset to its initial height,
which may reduce the possibility of moving the membrane far away from the target, as resetting brings it back to its initial level. With very high \(r\), on the other hand, the membrane is reset so frequently that it moves too little to reach the target, causing the MFPT to grow. This indicates that at the intermediate resetting rate (the optimal resetting rate), MFPT becomes minimal.
With active dynamics, the optimal resetting rate occurs at a smaller resetting rate than for passive systems, as shown in Figs. (3) and (4). In terms of scaling with the target height \(h_{0}\), a tension-dominated passive membrane has the largest exponent of \(h_{0}\) (\(\sim h_{0}^{2}\)), whereas a tension-less active membrane has the lowest, \(h_{0}^{1/7}\) (see Table 1). Our work reveals that, with active noise, the scaling exponents of \(\langle T\rangle\) on \(\nu\) or \(\kappa\) are smaller by an order of magnitude compared with the passive system (Table 1).
This work may also serve as a benchmark for future studies with real-life applications, such as fluctuating membranes of finite size, nonzero resetting time, or resetting with a finite velocity. Within this framework, the calculation of the MFPT can be readily generalized to a fluctuating membrane in higher dimensions.
## Acknowledgment
TS would like to thank Shamik Gupta for several valuable discussions. The author thanks Mustansir Barma, Carles Blanch-Mercader, and Pierre Sens for useful discussions. TS also acknowledges the support provided by a grant from ITMO Cancer, PSCI.
## Appendix A
### Derivation of the Propagator
#### Fokker-Planck equation
We consider \(\mathbf{p}^{T}=\{p_{i}\}\) and \(\mathbf{q}^{T}=\{q_{i}\}\), which are the Fourier variables corresponding to \(\tilde{h}=\{h_{i}\}\) and \(\tilde{v}=\{v_{i}\}\), respectively. Let us write the Fourier transform in the following way
\[\widehat{\mathbf{W}}(\mathbf{p}^{T},\mathbf{q}^{T},t|\psi^{0})=\int d\tilde{ h}\int d\tilde{v}\ e^{-i(\mathbf{p}^{T}\tilde{h}+\mathbf{q}^{T}\tilde{v})} \mathbf{W}(\tilde{h},\tilde{v},t|\psi^{0}) \tag{50}\]
where \((\mathbf{p}^{T})_{1\times L}\), \((\mathbf{q}^{T})_{1\times L}\), and \((h)_{L\times 1}\) and \((v)_{L\times 1}\) matrices. Taking into account the initial conditions, we obtain \(\widehat{\mathbf{W}}_{0}(\mathbf{p}^{0T},\mathbf{q}^{0T},0|\psi^{0})=\int d \tilde{h}\int d\tilde{v}\ e^{-i(\mathbf{p}^{0T}\tilde{h}+\mathbf{q}^{0T} \tilde{v})}\ \delta(\tilde{h}-\tilde{h}^{0})\,\delta(\tilde{v}-\tilde{v}^{0})=e^{-i \mathbf{p}^{0T}\tilde{h}^{0}_{0}-i\mathbf{q}^{0T}\tilde{v}^{0}}.\) We now obtain a Fokker-Planck equation in Fourier space as
\[\frac{\partial\widehat{\mathbf{W}}}{\partial t} = -\sum_{i,j}\Lambda_{ij}\,p_{i}\ \frac{\partial\widehat{\mathbf{W}}}{\partial p_{j}}+a\sum_{i}p_{i}\frac{\partial\widehat{\mathbf{W}}}{\partial q_{i}}\] \[- \sum_{ij}\Lambda^{\prime}_{ij}q_{i}\ \frac{\partial\widehat{\mathbf{W}}}{\partial q_{j}}-\sum_{ij}D_{ij}p_{i}p_{j}\widehat{\mathbf{W}}-\sum_{ij}D^{a}_{ij}q_{i}q_{j}\widehat{\mathbf{W}}.\]
### Method of characteristics
We employ the method of characteristics and get
\[\frac{d\widehat{\mathbf{W}}}{dt}=\frac{\partial\widehat{\mathbf{W}}}{\partial t }+\sum_{i}\left(\frac{\partial\widehat{\mathbf{W}}}{\partial p_{i}}\frac{dp_{ i}}{dt}+\frac{\partial\widehat{\mathbf{W}}}{\partial q_{i}}\frac{dq_{i}}{dt} \right). \tag{51}\]
\begin{table}
\begin{tabular}{|l c c c c|} \hline \hline \multicolumn{5}{|c|}{Correlation and Mean first-passage time} \\ \hline Membranes & \(\langle h^{2}(x,t)\rangle\sim\) & Eqs. & \(\lim_{r\to 0}\langle T\rangle\sim\) & Eqs. \\ \hline Tension-dominated active & \(D_{a}\ \nu^{-1/2}\ t^{3/4}\) & (31) & \(r^{-3/4}\,h_{0}^{1/3}\ \nu^{1/12}\,D_{a}^{-1/6}\) & (43) \\ Tension-less active & \(D_{a}\ \kappa^{-1/4}\,t^{7/4}\) & (36) & \(r^{-7/8}\,h_{0}^{1/7}\ \kappa^{1/56}\,D_{a}^{-1/14}\) & (49) \\ Tension-dominated passive & \(D\,\nu^{-1/2}\ t^{1/2}\) & (33) & \(r^{-1/2}\,h_{0}^{2}\,\nu^{1/2}\ D^{-1}\) & (95) \\ Tension-less passive & \(D\,\kappa^{-1/4}\,t^{3/4}\) & (37) & \(r^{-3/8}\,h_{0}^{5/3}\ \kappa^{5/24}\,D^{-5/6}\) & (101) \\ \hline \end{tabular}
\end{table}
Table 1: Single point height-height correlation and mean first-passage times with active and passive noises. For tension-dominated passive membrane (\(\nu\neq 0\)), MFPT has the largest power \(h_{0}^{2}\), whereas tension-less active membrane has the lowest power \(h_{0}^{1/7}\). We show the dependence of MFPT on the membrane properties \(\nu\), \(\kappa\), target height \(h_{0}\), and resetting rate \(r\) for \(r\to 0\). Our study reveals that active noise, compared with the thermal, significantly reduces the dependence on the membrane properties.
Along the characteristic line, we have
\[\frac{dp_{i}}{dt} = \sum_{j}\Lambda_{ij}p_{j}, \tag{52}\] \[\frac{dq_{i}}{dt} = \sum_{j}\Lambda^{\prime}_{ij}q_{j}-a\,p_{i}. \tag{53}\]
Solving the above equations, we obtain
\[\mathbf{p}(t)=e^{t\Lambda}\,\mathbf{p^{0}} \tag{54}\]
and
\[\mathbf{q}(t)=e^{t\Lambda^{\prime}}\,\mathbf{q^{0}}+a\left(\frac{e^{\Lambda t }-e^{\Lambda^{\prime}t}}{\Lambda^{\prime}-\Lambda}\right)\mathbf{p}^{0}. \tag{55}\]
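A quick symbolic check of these solutions (added for illustration; each Fourier mode decouples, so \(\Lambda\) and \(\Lambda^{\prime}\) are treated as scalars here) can be done with `sympy`:

```python
# Symbolic verification that Eqs. (54)-(55) solve the characteristic equations (52)-(53).
import sympy as sp

t, a = sp.symbols('t a', positive=True)
Lam, Lamp = sp.symbols('Lambda Lambda_prime', positive=True)
p0, q0 = sp.symbols('p0 q0')

p = sp.exp(Lam*t) * p0                                                      # Eq. (54)
q = sp.exp(Lamp*t)*q0 + a*(sp.exp(Lam*t) - sp.exp(Lamp*t))/(Lamp - Lam)*p0  # Eq. (55)

print(sp.simplify(sp.diff(p, t) - Lam*p))            # -> 0, i.e. Eq. (52)
print(sp.simplify(sp.diff(q, t) - (Lamp*q - a*p)))   # -> 0, i.e. Eq. (53)
```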
In Eqs. (54) and (55), \(\mathbf{p}(t)\) and \(\mathbf{q}(t)\) evolve along the characteristic line. The total time derivative of the FPE is then obtained as
\[\frac{d\widehat{\mathbf{W}}}{dt}=-\frac{1}{2}\sum_{ij}2D_{ij}\ p_{i}\,p_{j} \widehat{\mathbf{W}}-\frac{1}{2}\sum_{ij}2D_{ij}^{a}\,q_{i}\,q_{j}\widehat{ \mathbf{W}} \tag{56}\]
\[\frac{d(\log\widehat{\mathbf{W}})}{dt}=-\frac{1}{2}\left(\mathbf{p}^{T}2D\, \mathbf{p}+\mathbf{q}^{T}2D_{a}\,\mathbf{q}\right). \tag{57}\]
Let us calculate the argument in the above equation, \(\mathbf{p}^{T}2D\mathbf{p}+\mathbf{q}^{T}2D_{a}\mathbf{q}\), in terms of \(\mathbf{p}^{0}\) and \(\mathbf{q}^{0}\). Using \(\mathbf{q}\) from Eq. (55), we obtain
\[\mathbf{q}^{T}2D_{a}\mathbf{q} =(\mathbf{p^{0}})^{T}Q_{1}\mathbf{p}^{0}+(\mathbf{q^{0}})^{T}Q_{ 2}\mathbf{q^{0}}+2(\mathbf{p^{0}})^{T}Q_{3}\mathbf{q^{0}} \tag{58}\]
where
\[(Q_{1})_{L\times L}=a^{2}\left[2D_{a}\,\left(\frac{e^{\Lambda t}-e^{\Lambda^ {\prime}t}}{\Lambda^{\prime}-\Lambda}\right)^{2}\right]_{L\times L}, \tag{59}\]
\[(Q_{2})_{L\times L}=(2D_{a}\,e^{2\Lambda^{\prime}t})_{L\times L}, \tag{60}\]
and
\[(Q_{3})_{L\times L}=a\left[2D_{a}\left(\frac{e^{\Lambda t}-e^{\Lambda^{\prime }t}}{\Lambda^{\prime}-\Lambda}\right)e^{\Lambda^{\prime}t}\right]_{L\times L}. \tag{61}\]
Similarly Eq. (54) leads to
\[\mathbf{p}^{T}2D\mathbf{p}=\mathbf{p^{0}}^{T}(e^{\Lambda t}2De^{\Lambda t}) \ \mathbf{p^{0}} \tag{62}\]
Adding Eqs. (58) and (62), we obtain
\[\mathbf{p}^{T}2D\mathbf{p}+\mathbf{q}^{T}2D_{a}\mathbf{q} =(\mathbf{p^{0}})^{T} (Q_{1}+Q^{\prime}_{1})\,\mathbf{p}^{0}+(\mathbf{q^{0}})^{T}Q_{ 2}\mathbf{q^{0}}\] \[+2(\mathbf{p^{0}})^{T}Q_{3}\,\mathbf{q^{0}}, \tag{63}\]
where \(Q^{\prime}_{1}=e^{\Lambda t}\,2D\,e^{\Lambda t}\) from Eq. (62).
We now substitute Eq. (63) in Eq. (57), and obtain
\[\log\left(\frac{\widehat{\mathbf{W}}}{\widehat{\mathbf{W}}_{\mathbf{0}}} \right) =-\frac{1}{2}\left[(\mathbf{p^{0}})^{T}R_{1}\,\mathbf{p}^{0}+( \mathbf{q^{0}})^{T}R_{2}\mathbf{q^{0}}+2(\mathbf{p^{0}})^{T}R_{3}\,\mathbf{q^ {0}}\right] \tag{64}\]
where \(\widehat{\mathbf{W}}_{\mathbf{0}}\) is the Fourier transform of the initial height configuration and velocity profiles. In Eq.(64), we have
\[R_{1} = \int_{0}^{t}dt^{\prime}\,(Q_{1}(t^{\prime})+Q^{\prime}_{1}(t^{ \prime})), \tag{65}\] \[R_{2} = \int_{0}^{t}dt^{\prime}Q_{2}(t^{\prime}), \tag{66}\] \[R_{3} = \int_{0}^{t}dt^{\prime}Q_{3}(t^{\prime}). \tag{67}\]
Using \(\widehat{\mathbf{W}}_{\mathbf{0}}\) (as shown above) and Eq.(64), we obtain
\[\widehat{\mathbf{W}} = \exp{(-\frac{1}{2}[\mathbf{p^{0}}^{T}R_{1}\mathbf{p^{0}}+ \mathbf{q^{0}}^{T}R_{2}\mathbf{q^{0}}+2\mathbf{p^{0}}^{T}R_{3}\mathbf{q^{0}}])} \tag{68}\] \[\times \exp{(-i(\mathbf{p^{0}}^{T}\tilde{h}^{0}+\mathbf{q^{0}}^{T}\tilde {v}^{0}))}.\]
Below, we express \(\mathbf{p^{0}}^{T}\), \(\mathbf{q^{0}}^{T}\), \(\mathbf{p^{0}}\), and \(\mathbf{q^{0}}\) in terms of \(\mathbf{p}^{T}\) and \(\mathbf{q^{T}}\). Therefore, we write the arguments in the above equations as \(\mathbf{p^{0}}^{T}R_{1}\mathbf{p^{0}}=\mathbf{p}^{T}(e^{-\Lambda t}R_{1}e^{- \Lambda t})\,\mathbf{p}\), and
\[\mathbf{q^{0}}^{T}R_{2}\mathbf{q^{0}} = \mathbf{q}^{T}(e^{-2\Lambda^{\prime}t}R_{2})\mathbf{q}+\mathbf{p} ^{T}a^{2}e^{-2(\Lambda^{\prime}+\Lambda)t}R_{2}f^{2}\mathbf{p} \tag{69}\] \[-2\mathbf{p}^{T}\left(ae^{-2\Lambda^{\prime}t}R_{2}fe^{-\Lambda t }\right)\mathbf{q}.\]
The other term can be obtained as
\[2\mathbf{p^{0}}^{T}R_{3}\mathbf{q^{0}} = 2\mathbf{p}^{T}(e^{-(\Lambda+\Lambda^{\prime})t}R_{3})\mathbf{q} \tag{70}\] \[+2\mathbf{p}^{T}(-ae^{-(2\Lambda+\Lambda^{\prime})t}R_{3}f) \mathbf{p}.\]
Combining the above terms, we get
\[\exp{\left[-\frac{1}{2}\left(\mathbf{p^{0}}^{T}R_{1}\mathbf{p^{0} }+\mathbf{q^{0}}^{T}R_{2}\mathbf{q^{0}}+2\mathbf{p^{0}}^{T}R_{3}\mathbf{q^{0}} \right)\right]} \tag{71}\] \[= \exp{\left[-\frac{1}{2}(\mathbf{p}^{T}\mathbf{M}_{1}\mathbf{p}+ \mathbf{q}^{T}\mathbf{M}_{2}\mathbf{q}+2\,\mathbf{p}^{T}\mathbf{M}_{3}\mathbf{q })\right]}.\]
The contribution to \(\mathbf{p}^{T}\mathbf{p}\) comes not only from \(\mathbf{p^{0}}^{T}R_{1}\mathbf{p^{0}}\) but also from \(\mathbf{q^{0}}^{T}R_{2}\mathbf{q^{0}}\) and \(\mathbf{p^{0}}^{T}R_{3}\mathbf{q^{0}}\). Therefore, we have three terms in \(\mathbf{M}_{1}\).
\[\mathbf{M}_{1} = (e^{-\Lambda t}R_{1}e^{-\Lambda t}+a^{2}\,e^{-2(\Lambda+\Lambda^{ \prime})t}\,f^{2}(t)R_{2} \tag{72}\] \[-2\,a\,e^{-(2\Lambda+\Lambda^{\prime})t}f(t)\,R_{3})_{L\times L}.\]
Let us now evaluate the integrations for \(R_{1}\), \(R_{2}\) and \(R_{3}\), which are given in Eqs. (65)-(67). Thus, we now have
\[R_{1}=(2D+\frac{2a^{2}D_{a}}{(\Lambda^{\prime}-\Lambda)^{2}})(\frac{ e^{2\Lambda t}-1}{2\Lambda})+\frac{2a^{2}D_{a}}{(\Lambda^{\prime}-\Lambda)^{2}}(\frac{e^{2 \Lambda^{\prime}t}-1}{2\Lambda^{\prime}})\] \[-4\frac{a^{2}D_{a}}{(\Lambda^{\prime}-\Lambda)^{2}}\frac{(e^{( \Lambda+\Lambda^{\prime})t}-1)}{\Lambda^{\prime}+\Lambda}, \tag{73}\]
\[R_{2} = 2D_{a}\,\left(\frac{e^{2\Lambda^{\prime}t}-1}{2\Lambda^{\prime}}\right) \tag{74}\]
and
\[R_{3} = a\frac{2D_{a}}{\Lambda^{\prime}-\Lambda}\left(\frac{e^{(\Lambda+\Lambda^{ \prime})t}-1}{(\Lambda+\Lambda^{\prime})}-\frac{e^{2\Lambda^{\prime}t}-1}{2 \Lambda^{\prime}}\right) \tag{75}\]
Substituting the expressions for \(R_{1}\), \(R_{2}\) and \(R_{3}\) in Eq. (72), we obtain
\[\mathbf{M}_{1}\,\delta_{ij}=2D_{\mathrm{tot}}\left(\frac{1-e^{-2 \Lambda t}}{2\Lambda}\right)+ \frac{a^{2}2D_{a}}{(\Lambda^{\prime}-\Lambda)^{2}}\{\frac{2\left(e^{-( \Lambda+\Lambda^{\prime})t}-1\right)}{(\Lambda^{\prime}+\Lambda)}\] \[+\frac{\left(1-e^{-2\Lambda^{\prime}t}\right)}{2\Lambda^{\prime }}\} \tag{76}\]
where \(D_{\mathrm{tot}}=D+\frac{a^{2}D_{a}}{(\Lambda^{\prime}-\Lambda)^{2}}\). Similarly, we obtain
\[\mathbf{M}_{2}\,\delta_{ij}=2D_{a}\left(\frac{1-e^{-2\Lambda^{ \prime}t}}{2\Lambda^{\prime}}\right) \tag{77}\]
and
\[\mathbf{M}_{3}\,\delta_{ij}=a\frac{2D_{a}}{(\Lambda^{\prime}- \Lambda)}\left(\frac{1-e^{-(\Lambda+\Lambda^{\prime})t}}{\Lambda+\Lambda^{ \prime}}-\frac{1-e^{-2\Lambda^{\prime}t}}{2\Lambda^{\prime}}\right). \tag{78}\]
### Propagator
Let us write \(\widehat{\mathbf{W}}[\mathbf{p},\mathbf{q},t|\psi^{0}]\) in terms of \(\mathbf{M}_{1}\), \(\mathbf{M}_{2}\) and \(\mathbf{M}_{3}\) as
\[\widehat{\mathbf{W}}[\mathbf{p},\mathbf{q},t|\psi_{0}]=\exp \left[-i\left(\mathbf{p}^{T}e^{-\Lambda t}\tilde{h}^{0}+\mathbf{q}^{T}e^{- \Lambda^{\prime}t}\tilde{v}^{0}\right)\right] \exp\left(-i\left(-a\,\mathbf{p}^{T}f(t)\,e^{-(\Lambda^{\prime}+ \Lambda)t}\tilde{v}^{0}\right)\right)\] \[\times\exp\left(-\frac{1}{2}(\mathbf{p}^{T}\,\mathbf{M}_{1} \mathbf{p}+\mathbf{q}^{T}\mathbf{M}_{2}\mathbf{q}+2\,\mathbf{p}^{T}\mathbf{M} _{3}\mathbf{q})\right) \tag{79}\]
We now consider the inverse Fourier transform and integrate over \(p\) and \(q\). Thus, we obtain
\[\mathbf{W}[\tilde{h},\tilde{v},t|\psi_{0}]=\int\frac{d\mathbf{p}} {(2\pi)^{L}}\int\frac{d\mathbf{q}}{(2\pi)^{L}}\;e^{i\left(\mathbf{p}^{T} \tilde{h}+\mathbf{q}^{T}\tilde{v}\right)}\;\widehat{\mathbf{W}}[\mathbf{p}, \mathbf{q},t|\psi^{0}]\] \[=\frac{1/(2\pi)^{L}}{\sqrt{\det\left(\mathbf{M}_{1}\mathbf{M}_{2} \right)}}\exp\left[-\frac{1}{2}\left(\Delta\tilde{h}\,\mathbf{M}_{1}^{-1} \Delta\tilde{h}\right)-\frac{1}{2}\left(\Delta\tilde{v}^{T}\mathbf{M}_{2}^{-1} \Delta\tilde{v}\right)-\frac{1}{2}\Delta\tilde{h}^{T}\left(\frac{\mathbf{M}_{ 2}}{\mathbf{M}_{3}^{2}}\right)^{-1}\Delta\tilde{h}+\Delta\tilde{h}^{T}\left( \frac{\mathbf{M}_{2}}{\mathbf{M}_{3}}\right)^{-1}\Delta\tilde{v}\right] \tag{80}\]
where \(\Delta\tilde{h}=\tilde{h}-e^{-\Lambda t}\,h^{0}\), and \(\Delta\tilde{v}=\tilde{v}-e^{-\Lambda^{\prime}t}\,v^{0}\). Integrating over \(\tilde{v}\), we obtain the marginal distribution
\[\mathbf{W}(\tilde{h},t|h_{0}) =\int d\tilde{v}\int dv_{0}\ \mathbf{W}(\tilde{h},\tilde{v},t|\psi_{0})\] \[=\frac{\exp\left[-\frac{1}{2}(\tilde{h}-e^{-\Lambda t}h^{0})^{T} \mathbf{M}_{1}^{-1}(\tilde{h}-e^{-\Lambda t}h^{0})\right]}{(2\pi)^{L/2}\sqrt{ \det\mathbf{M}_{1}}}. \tag{81}\]
### Height-height correlation in terms of operator
Let us start with the non-stationary active noise. Setting \(t_{0}=0\), we obtain
\[\langle v_{i}(t)v_{j}(t^{\prime})\rangle =e^{-\Lambda^{\prime}(t+t^{\prime})}\int_{t_{0}}^{t}ds\int_{t_{0} }^{t^{\prime}}ds^{\prime}\,e^{\Lambda^{\prime}(s+s^{\prime})}\langle\Gamma_{i}(s) \Gamma_{j}(s^{\prime})\rangle\] \[=\delta_{ij}\,D_{a}\,e^{-\Lambda^{\prime}(t+t^{\prime})}\left( \frac{e^{2\Lambda^{\prime}t}-1}{\Lambda^{\prime}}\right)\ \ \mathrm{when}\ t^{\prime}>t. \tag{82}\]
Taking into account the other part, i.e., \(t>t^{\prime}\), we obtain
\[\langle v_{i}(t)v_{j}(t^{\prime})\rangle =\frac{D_{a}}{\Lambda^{\prime}}\left(e^{-\Lambda^{\prime}|t-t^{ \prime}|}-e^{-\Lambda^{\prime}(t+t^{\prime})}\right) \tag{83}\]
and
\[\langle\eta_{i}(t)\eta_{j}(t^{\prime})\rangle=2D\delta_{ij}\delta(t-t^{\prime }). \tag{84}\]
We first study the height-height correlation when the velocity-velocity correlation is non-stationary (i.e., \(t_{0}=0\)). We obtain
\[\langle h_{i}(t)h_{j}(t)\rangle=e^{-2\Lambda t}(\int_{0}^{t}dt^{ \prime}e^{\Lambda t^{\prime}}\int_{0}^{t}dt^{\prime\prime}e^{\Lambda t^{\prime \prime}}\langle\eta_{i}(t^{\prime})\eta_{j}(t^{\prime\prime})\rangle\] \[+a^{2}\,\int_{0}^{t}dt^{\prime}e^{\Lambda t^{\prime}}\int_{0}^{t} dt^{\prime\prime}e^{\Lambda t^{\prime\prime}}\langle v_{i}(t^{\prime})v_{j}(t^{ \prime\prime})\rangle)\] \[=\delta_{ij}\,D_{\mathrm{tot}}\left(\frac{1-e^{-2\Lambda t}}{ \Lambda}\right)+\delta_{ij}\,\frac{2\,a^{2}D_{a}}{(\Lambda^{\prime}-\Lambda)^{2 }}\{\frac{2\left(e^{-(\Lambda+\Lambda^{\prime})t}-1\right)}{(\Lambda^{\prime}+ \Lambda)}\] \[+\frac{\left(1-e^{-2\Lambda^{\prime}t}\right)}{2\Lambda^{\prime}}\} \tag{85}\] \[=\sum_{ij}(M_{1})_{ij}\,\delta_{ij}. \tag{86}\]
The covariance matrix obtained from the Fokker-Planck equation and the height-height correlation obtained from the coupled Langevin equations are thus the same. The relation in the above equation is derived for the non-stationary active noise; it also holds for the stationary-state active noise.
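This correspondence can also be checked by a direct simulation. The sketch below (illustrative parameters, not from the paper) integrates a single decoupled mode of the coupled Langevin equations with an Euler-Maruyama scheme, starting from \(h=v=0\) as in the non-stationary case, and compares the sampled variance with Eq. (85):

```python
# Euler-Maruyama check of the single-mode height variance against Eq. (85):
#   dh/dt = -Lambda*h + eta + a*v,   dv/dt = -Lambda'*v + Gamma_a,
# with <eta eta> = 2D delta(t-t'), <Gamma_a Gamma_a> = 2D_a delta(t-t'), h(0)=v(0)=0.
import numpy as np

rng = np.random.default_rng(0)
Lam, Lamp, D, Da, a = 1.0, 0.2, 0.5, 1.0, 1.0   # illustrative values
dt, nsteps, nsamples = 1e-3, 5000, 20000

h = np.zeros(nsamples)
v = np.zeros(nsamples)

def var_eq85(t):
    Dtot = D + a**2*Da/(Lamp - Lam)**2
    return (Dtot*(1 - np.exp(-2*Lam*t))/Lam
            + 2*a**2*Da/(Lamp - Lam)**2 * (2*(np.exp(-(Lam + Lamp)*t) - 1)/(Lam + Lamp)
                                           + (1 - np.exp(-2*Lamp*t))/(2*Lamp)))

for n in range(1, nsteps + 1):
    eta = rng.normal(0.0, np.sqrt(2*D*dt), nsamples)
    gam = rng.normal(0.0, np.sqrt(2*Da*dt), nsamples)
    h, v = h + (-Lam*h + a*v)*dt + eta, v - Lamp*v*dt + gam
    if n % 1000 == 0:
        t = n*dt
        print(f"t={t:.1f}  simulated <h^2>={np.mean(h**2):.4f}  Eq.(85)={var_eq85(t):.4f}")
```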
## Appendix B
### MFPT: Tension-dominated passive membrane
The height-height correlation can be written as
\[\langle h(x,t)^{2}\rangle_{\rm passive}=\int\frac{dq}{(2\pi)}\int\frac{dq^{ \prime}}{(2\pi)}\;e^{i(q+q^{\prime})x}\,\langle h(q,t)h(q^{\prime},t)\rangle. \tag{87}\]
In the above equation, we consider the membrane fluctuations driven by spatially and temporally uncorrelated thermal noise. The Fourier transform of the correlation can be expressed as \(\langle\eta(q,t)\eta(q^{\prime},t^{\prime})\rangle=2D(2\pi)\delta(q+q^{\prime })\delta(t-t^{\prime})\). Using this, we obtain
\[\langle h(x,t)^{2}\rangle_{\rm passive}=2D\,\int_{-\infty}^{\infty}\frac{dq}{( 2\pi)}\;\int_{0}^{t}dt^{\prime}\;e^{-2\nu t^{\prime}q^{2}}. \tag{88}\]
We carry out the integration over \(q\) and \(t^{\prime}\) in the above equation and obtain
\[\langle h(x,t)^{2}\rangle_{\rm passive}=D\sqrt{\frac{2t}{\pi\nu}}. \tag{89}\]
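As a quick symbolic check of this integration (added here for illustration), the \(q\) and \(t^{\prime}\) integrals of Eq. (88) can be carried out with `sympy`:

```python
# Symbolic check of Eqs. (88)-(89).
import sympy as sp

q, tp, t, D, nu = sp.symbols('q tprime t D nu', positive=True)
integrand = 2*D*sp.exp(-2*nu*tp*q**2)/(2*sp.pi)
inner = sp.integrate(integrand, (q, -sp.oo, sp.oo))   # Gaussian integral over q
result = sp.integrate(inner, (tp, 0, t))              # integral over t'
print(sp.simplify(result / (D*sp.sqrt(2*t/(sp.pi*nu)))))   # expected: 1
```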
Substituting Eq. (89) in Eq. (8), we obtain
\[W_{\rm passive}(h_{0},t)=\frac{1}{\sqrt{\pi}\,\alpha_{T}(t)}\,e^{-\frac{h_{ 0}^{2}}{\alpha_{T}^{2}(t)}} \tag{90}\]
where \(\alpha_{T}^{2}(t)=2D\sqrt{\frac{2t}{\pi\nu}}.\) We consider the LT of the propagator for passive membrane dynamics with thermal noise. The LT of the above equation can be obtained as
\[\widetilde{W}_{\rm passive}(u_{p},r)=\frac{1}{\sqrt{\pi}\,\alpha_{T}}\frac{1 }{r^{3/4}}\int_{0}^{\infty}\frac{d\bar{t}}{\bar{t}^{1/4}}\exp\left(-\bar{t}- \frac{u_{p}^{2}}{\sqrt{\bar{t}}}\right) \tag{91}\]
where \(\bar{t}=rt,\) \(\alpha_{T}^{2}=2D\sqrt{2/(\pi\nu)}\) (so that \(\alpha_{T}(t)=\alpha_{T}\,t^{1/4}\)), and \(u_{p}=r^{1/4}h_{0}/\alpha_{T}.\) Therefore, our MFPT can be written as
\[\langle T_{\nu}\rangle_{\rm passive}=\frac{1}{r}\left(\frac{I_{p}(0)}{I_{p}(u _{p})}-1\right) \tag{92}\]
where
\[I_{p}(u_{p})=\int_{0}^{\infty}\frac{d\bar{t}}{\bar{t}^{1/4}}\exp\left(-\bar{t }-\frac{u_{p}^{2}}{\sqrt{\bar{t}}}\right)\!. \tag{93}\]
We carry out the integration for \(u_{p}=0\) and obtain \(I_{p}(0)=\Gamma(\frac{3}{4}).\) From the above equation, we obtain
\[I_{p}(u_{p})\simeq I_{p}(0)\left(1-|u_{p}|^{2}\frac{\Gamma(\frac{1}{4})}{ \Gamma(\frac{3}{4})}\right) \tag{94}\]
for \(u_{p}\ll 1.\) A simpler form for the MFPT can be expressed as
\[\langle T_{\nu}\rangle_{\rm passive}\simeq\frac{\Gamma(\frac{1}{4})}{\Gamma( \frac{3}{4})}\frac{1}{r^{1/2}}\;\;\left(\frac{h_{0}}{\alpha_{T}}\right)^{2}\; \;{\rm for}\;\;r\ll(\alpha_{T}/h_{0})^{4}\,. \tag{95}\]
The MFPT \(\langle T_{\nu}\rangle_{\rm passive}\to\infty\) as \(u_{p}\to\infty\). The MFPT of a passive membrane scales as \(\langle T_{\nu}\rangle_{\rm passive}\sim\nu^{1/2}.\)
### MFPT: Tension-less passive membrane
Similarly, with the thermal noise, we derive the height-height correlation for the rigidity-dominated membrane as
\[\langle h(x,t)^{2}\rangle_{\rm passive}=\frac{D}{3\pi^{3/4}}\;\left[\frac{(2t )^{3}}{\pi\kappa}\right]^{1/4} \tag{96}\]
where we consider \(\nu=0.\) Substituting the above equation in Eq. (8), we write the propagator as
\[W_{\rm passive}(h_{0},t)=\frac{1}{\sqrt{\pi}}\frac{1}{\beta_{T}(t)}\exp\left( -\frac{h_{0}^{2}}{\beta_{T}(t)^{2}}\right) \tag{97}\]
where \(\beta_{T}(t)=\beta_{T}\;t^{3/8}\) with \(\beta_{T}=\sqrt{\frac{2D}{3\pi^{3/4}}}\;\left[\frac{8}{\pi\kappa}\right]^{1/8}.\) The Laplace transform of the above propagator can be written as
\[\widetilde{W}_{\rm passive}(y_{p},r)=\frac{1}{\sqrt{\pi}}\frac{1}{\beta_{T} \,r^{5/8}}J_{p}(y_{p}) \tag{98}\]
where \(y_{p}=r^{3/8}h_{0}/\beta_{T},\) and
\[J_{p}(y_{p})=\int_{0}^{\infty}\frac{d\bar{t}}{\bar{t}^{3/8}}\,e^{-\bar{t}}\, \exp\left(-\frac{y_{p}^{2}}{\bar{t}^{3/4}}\right)\!. \tag{99}\]
Using the definition of the Gamma function, we obtain \(\widetilde{W}_{\rm passive}(0,r)=\frac{1}{\sqrt{\pi}}\Gamma\left(\frac{5}{8} \right)/(\beta_{T}\,r^{5/8})\) and
\[\langle T_{\kappa}\rangle_{\rm passive}=\frac{1}{r}\left(\frac{J_{p}(0)}{J_{p }(y_{p})}-1\right). \tag{100}\]
Considering \(y_{p}\ll 1,\) we obtain \(\langle T_{\kappa}\rangle\simeq\frac{1}{r}\;\left[(1-c_{0}|y_{p}|^{5/3}+c_{1} |y_{p}|^{2})^{-1}-1\right]\) where \(c_{0}=4.656,\) and \(c_{1}=6.077.\) For \(r\ll(\beta_{T}/h_{0})^{8/3}\), we obtain
\[\langle T_{\kappa}\rangle_{\rm passive}\simeq\frac{c_{0}}{r^{3/8}}\;\left( \frac{h_{0}}{\beta_{T}}\right)^{5/3}. \tag{101}\]
The above equation suggests that for the passive system, the MFPT strongly depends on the rigidity \(\kappa\) as \(\langle T_{\kappa}\rangle_{\rm passive}\sim\kappa^{5/24}.\) |
2301.03189 | Information Scrambling and Entanglement Dynamics of Complex Brownian
Sachdev-Ye-Kitaev Models | In this work, we study the information scrambling and the entanglement
dynamics in the complex Brownian Sachdev-Ye-Kitaev (cBSYK) models, focusing on
their dependence on the charge density $n$. We first derive the effective
theory for scramblons in a single cBSYK model, which gives closed-form
expressions for the late-time OTOC and operator size. In particular, the result
for OTOC is consistent with numerical observations in [1]. We then study the
entanglement dynamics in cBSYK chains. We derive the density dependence of the
entanglement velocity for both R\'enyi entropies and the Von Neumann entropy,
with a comparison to the butterfly velocity. We further consider adding
repeated measurements and derive the effective theory of the measurement
induced transition which shows $U(2)_L\otimes U(2)_R$ symmetry for
non-interacting models. | Pengfei Zhang | 2023-01-09T07:37:09Z | http://arxiv.org/abs/2301.03189v2 | # Information Scrambling and Entanglement Dynamics of Complex Brownian Sachdev-Ye-Kitaev Models
###### Abstract
In this work, we study the information scrambling and the entanglement dynamics in the complex Brownian Sachdev-Ye-Kitaev (cBSYK) models, focusing on their dependence on the charge density \(n\). We first derive the effective theory for scramblons in a single cBSYK model, which gives closed-form expressions for the late-time OTOC and operator size. In particular, the result for OTOC is consistent with numerical observations in [1]. We then study the entanglement dynamics in cBSYK chains. We derive the density dependence of the entanglement velocity for both Renyi entropies and the Von Neumann entropy, with a comparison to the butterfly velocity. We further consider adding repeated measurements and derive the effective theory of the measurement induced transition which shows \(U(2)_{L}\otimes U(2)_{R}\) symmetry for non-interacting models.
###### Contents
* 1 Introduction
* 2 Quantum Information Scrambling at Late Time
* 2.1 Model and two-point functions
* 2.2 Wightman function with sources
* 2.3 Scramblon diagrams and OTOC
* 2.4 Late-time operator size distribution
* 3 Entanglement dynamics and transitions
* 3.1 Model and set-up
* 3.2 Density dependence of entanglement velocity
* 3.3 Measurements and entanglement transitions
* 4 Discussions
## 1 Introduction
Understanding the dynamics of quantum information is of vital importance for revealing the universal picture of quantum many-body systems in both high-energy physics and condensed matter physics. For example, motivated by gravity calculations, the out-of-time-order correlator (OTOC) \(\langle W^{\dagger}(t)V^{\dagger}(0)W(t)V(0)\rangle\) was introduced to describe quantum information scrambling in generic quantum systems [2, 3, 4, 5]. In the early-time regime \(t\ll t_{s}\), it shows an exponential deviation \(1-\#e^{\varkappa t}/N\) in systems with large Hilbert space dimensions, which defines the quantum Lyapunov exponent \(\varkappa\). It has been proved that the quantum Lyapunov exponent has an upper bound \(\varkappa\leqslant 2\pi/\beta\)[6], which is saturated by holographic systems dual to semi-classical black holes. The study of information scrambling largely benefits from the Sachdev-Ye-Kitaev (SYK) models [7, 8, 9, 10], which describe randomly interacting Majorana fermions. The SYK model can be solved using the \(1/N\) expansion. The early-time OTOC in the SYK model is known to show maximally chaotic behavior in the low-temperature limit [7, 9, 10], while its late-time behavior \(t\gtrsim t_{s}\) is described by an emergent bulk scattering with a complex phase shift [11, 12]. Moreover, a series of works study the entanglement of the SYK model, where the Renyi entropies are computed exactly to the leading order of \(1/N\) using perturbation theory or numerical simulations [13, 14, 15, 16, 17, 18, 19, 20], which is closely related to replica wormholes in gravity [21, 22, 23]. In special limits, results for the Von Neumann entropy can also be obtained by performing an analytical continuation [24, 25, 26, 27].
Time scales in quantum information dynamics play an important role in understanding "Planckian" transport in strongly correlated materials [28, 29, 30, 31, 32, 33, 34]. Although the original SYK model is defined for Majorana fermions with all-to-all interactions, a number of generalizations have been proposed to study quantum transport, including models with Brownian couplings [35, 36], charge conservation [37, 38, 39, 40, 41], or in higher dimensions [42, 43, 44, 45, 46, 47]. As an example, bounds for the Lyapunov exponent in systems with charge conservation were proposed in [41], motivated by the calculation in complex SYK models. In particular, the Lyapunov exponent and butterfly velocity are computed exactly in the complex Brownian SYK (cBSYK) model. Later, there were attempts to understand the information scrambling in the cBSYK model beyond the early-time regime [1]. By generalizing the mapping from the Majorana Brownian SYK model to an effective bosonic model in [36], the authors were able to perform numerical simulations of the OTOC in systems with finite but large \(N\), where a data collapse of the OTOC for different charge densities \(n\) has been observed.
Motivated by these developments, here we push forward the understanding of the quantum information dynamics in the cBSYK model by studying its information scrambling and entanglement dynamics. In section 2, we derive closed-form expressions for the late-time OTOC, which is consistent with numerical observations. We also introduce the idea of finite density operator size and compute its distribution function in the late-time regime. In section 3, we study the entanglement dynamics of cBSYK chains, including the density dependence of the entanglement velocity, and the effective theory for the measurement induced transitions. We conclude our paper in section 4, with discussions for a few future directions.
## 2 Quantum Information Scrambling at Late Time
In this section, we study the information scrambling of the cBSYK model, focusing on its dependence on charge density \(n\). Adapting the methodology developed in [48, 11, 49], we derive analytical results for the late-time OTOC and operator size distribution by summing up contributions with multiple scramblons. In particular, our results explain the \(n\) dependence of OTOC observed in recent numerics [1].
### Model and two-point functions
The complex Brownian SYK model with \(q\)-body interactions (cBSYK\({}_{q}\)) is described by the Hamiltonian:
\[H(t)=\sum_{i_{1}<i_{2}...<i_{\frac{q}{2}}}\sum_{j_{1}<j_{2}...<j_{ \frac{q}{2}}}J_{i_{1}i_{2}...i_{\frac{q}{2}}j_{1}j_{2}...j_{\frac{q}{2}}}(t)c^{ \dagger}_{i_{1}}c^{\dagger}_{i_{2}}...c^{\dagger}_{i_{\frac{q}{2}}}c_{j_{1}}c _{j_{2}}...c_{j_{\frac{q}{2}}}. \tag{1}\]
Here the random interaction parameters \(J_{i_{1}i_{2}...i_{\frac{q}{2}}j_{1}j_{2}...j_{\frac{q}{2}}}(t)\) are independent Brownian variables with
\[\overline{J_{i_{1}i_{2}...i_{\frac{q}{2}}j_{1}j_{2}...j_{\frac{q}{ 2}}}(t)}=0,\qquad\overline{J_{i_{1}i_{2}...i_{\frac{q}{2}}j_{1}j_{2}...j_{ \frac{q}{2}}}(t)J_{i_{1}i_{2}...i_{\frac{q}{2}}j_{1}j_{2}...j_{\frac{q}{2}}}( t^{\prime})^{*}}=\frac{J\delta(t-t^{\prime})}{\frac{q}{2}!(\frac{q}{2}-1)!N^{q-1}}. \tag{2}\]
For the main part of this paper, we assume \(q\geqslant 4\), so that the system is many-body chaotic.
The time-dependent Hamiltonian (1) has no energy conservation. Consequently, its steady states are at infinite temperature. Taking \(U(1)\) charge conservation into account, the steady-state density matrix reads \(\rho=\frac{1}{\mathcal{Z}}e^{-\mu Q}\) in the grand canonical ensemble, with total charge \(Q=\sum_{i}c^{\dagger}_{i}c_{i}\). The density of fermions is related to \(\mu\) by
\[n=\frac{\langle Q\rangle}{N}=\frac{1}{e^{\mu}+1}\in[0,1]. \tag{3}\]
We employ the Keldysh path-integral approach with partition function \(\mathcal{Z}=\mathrm{tr}\left(U(T)\rho U(T)^{\dagger}\right)\) and \(U(T)=\mathcal{T}e^{-i\int_{-T/2}^{T/2}H(t^{\prime})dt^{\prime}}\). The path-integral contour includes two branches \((u,d)\), which correspond to the forward/backward evolution \((U,U^{\dagger})\). A pictorial representation for \(T\to\infty\) reads
[sketch of the Keldysh contour with forward (\(u\)) and backward (\(d\)) branches] (4)
The fermion fields \(\psi^{s}_{i}\) and \(\overline{\psi}^{s}_{i}\) on the Keldysh contour are labeled by branch indices \(s\in\{u,d\}\). We first consider the fermion two-point functions \(G^{ss^{\prime}}(t)=\langle\psi^{s}_{i}(t)\overline{\psi}^{s^{\prime}}_{i}(0)\rangle\). In the UV limit \(t\to 0^{\pm}\), \(G^{ss^{\prime}}(t)\) can be determined by the density as
\[G^{ud}(0)=-n,\qquad G^{du}(0)=1-n,\qquad G^{uu}(0^{\pm})=G^{dd}(0^{\mp})= \frac{1}{2}-n\pm\frac{1}{2}. \tag{5}\]
To the leading order in \(1/N\), the self-energy receives contributions from melon diagrams. The Schwinger-Dyson equation then reads
\[\begin{pmatrix}\partial_{t}-\Sigma^{uu}&-\Sigma^{ud}\\ -\Sigma^{du}&-\partial_{t}-\Sigma^{dd}\end{pmatrix}\circ\begin{pmatrix}G^{uu}& G^{ud}\\ G^{du}&G^{dd}\end{pmatrix}=I, \tag{6}\]
where \(\Sigma^{ss^{\prime}}\) is given by the melon diagrams.
Working out the details, we find
\[\begin{split}\Sigma^{uu}(t)&=-J\delta(t)G^{uu}(t)^{ \frac{q}{2}}(-G^{uu}(-t))^{\frac{q}{2}-1}=-\frac{\Gamma}{2}(1-2n)\delta(t)= \Sigma^{dd}(t),\\ \Sigma^{ud}(t)&=-J\delta(t)G^{ud}(t)^{\frac{q}{2}}(- G^{du}(-t))^{\frac{q}{2}-1}=-\Gamma n\delta(t),\\ \Sigma^{du}(t)&=J\delta(t)G^{du}(t)^{\frac{q}{2}}(- G^{ud}(-t))^{\frac{q}{2}-1}=\Gamma(1-n)\delta(t),\end{split} \tag{7}\]
with the decay rate of quasi-particles \(\Gamma=J\left(n(1-n)\right)^{\frac{q}{2}-1}\)[41]. This gives
\[G(t)=\begin{pmatrix}\frac{1}{2}-n+\frac{1}{2}\mathrm{sgn}(t)&-n\\ 1-n&\frac{1}{2}-n-\frac{1}{2}\mathrm{sgn}(t)\end{pmatrix}e^{-\frac{\Gamma|t|} {2}}. \tag{8}\]
Comparing to results in [41], (8) contains no running phase \(e^{i\mu t}\), since our evolution Hamiltonian (1) does not include the chemical potential term. It is also useful to work out the retarded and advanced Green's function as
\[G^{\mathrm{R}}(t)=-i\theta(t)\mathrm{tr}\left[\rho\{c_{i}(t),c_{i}^{\dagger}(0 )\}\right]=-i\theta(t)(G^{du}(t)-G^{ud}(t))=-i\theta(t)e^{-\Gamma t/2}=G^{ \mathrm{A}}(-t)^{*}. \tag{9}\]
The density dependence enters only through the decay rate \(\Gamma\). The result indicates that the spectral function \(A(\omega)=-2\mathrm{Im}G^{\mathrm{R}}(\omega)\) is Lorentzian, with a single peak near \(\omega=0\).
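As a quick check, Fourier transforming Eq. (9) gives
\[G^{\mathrm{R}}(\omega)=-i\int_{0}^{\infty}dt\,e^{i\omega t}\,e^{-\Gamma t/2}=\frac{1}{\omega+i\Gamma/2},\qquad A(\omega)=-2\,\mathrm{Im}\,G^{\mathrm{R}}(\omega)=\frac{\Gamma}{\omega^{2}+\Gamma^{2}/4},\]
which is indeed a Lorentzian of width \(\Gamma\) centered at \(\omega=0\).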
### Wightman function with sources
Now we turn to the study of OTO-correlations. Our final goal is to derive analytical expressions for the out-of-time-order correlators:
\[\begin{split}\mathrm{OTOC}_{1}(t_{1},t_{2};t_{3},t_{4})& =-\frac{1}{N^{2}}\sum_{ij}\mathrm{tr}\left[\sqrt{\rho}c_{i}(t_{1} )c_{j}(t_{3})\sqrt{\rho}c_{i}^{\dagger}(t_{2})c_{j}^{\dagger}(t_{4})\right], \\ \mathrm{OTOC}_{2}(t_{1},t_{2};t_{3},t_{4})&=- \frac{1}{N^{2}}\sum_{ij}\mathrm{tr}\left[\sqrt{\rho}c_{i}^{\dagger}(t_{1})c_{j }(t_{3})\sqrt{\rho}c_{i}(t_{2})c_{j}^{\dagger}(t_{4})\right].\end{split} \tag{10}\]
with \(t_{1}\approx t_{2}\gg t_{3}\approx t_{4}\). Here we choose the convention to equally split the density matrix \(\rho\), while results with other conventions can be computed straightforwardly using our results since \(\rho\) commutes with the Hamiltonian. The OTOC can be understood as probing the perturbation distribution excited by operators in the past \((t_{3},t_{4})\) using operators in the future \((t_{1},t_{2})\)[50]. A trick proposed in [11] is to replace the actual source with a mean-field perturbation and solve the Wightman function to determine the perturbation generated by a pair of operators, which gives an effective theory of scramblons. OTOCs are then given by combining two Wightman functions.
We first study the non-linear equations for Wightman functions on the double Keldysh contour in this subsection. Compared to the traditional Keldysh contour (4), it contains four branches, including two forward evolutions and two backward evolutions. We introduce a "world" label \(w\in\{1,2\}\) in addition to \(s\in\{u,d\}\), as indicated in the pictorial representation:
[sketch of the double Keldysh contour: four branches labeled by \(s\in\{u,d\}\) and the world index \(w\in\{1,2\}\)] (11)
We can introduce Green's functions \(G^{ss^{\prime}}_{ww^{\prime}}(t)=\langle\psi^{sw}_{i}(t)\overline{\psi}^{s^{ \prime}w^{\prime}}_{i}(0)\rangle\). For \(w=w^{\prime}\), \(G^{ss^{\prime}}_{ww}(t)\) match the single Keldysh result (8) due to the unitarity. For \(w\neq w^{\prime}\), the unitarity implies \(G^{ss^{\prime}}_{ww^{\prime}}(t)\) is independent of \(s\) and \(s^{\prime}\). Explicitly, we have
\[G^{ss^{\prime}}_{21}(t)\equiv G^{\mathrm{W},0}_{21}(t)=\sqrt{n(1-n)}e^{- \frac{\Gamma|t|}{2}}\equiv G(t),\qquad G^{ss^{\prime}}_{12}(t)\equiv G^{ \mathrm{W},0}_{12}(t)=-G(t). \tag{12}\]
Now we add mean-field sources to probe the perturbation created by fermion operators. We choose a source that does not affect single-world observables, such as \(G^{ss^{\prime}}_{ww}(t)\):
\[\begin{split}\delta S=& s_{2}\sum_{i}(\overline{ \psi}^{u2}_{i}(t_{0})-\overline{\psi}^{d2}_{i}(t_{0}))(\psi^{u1}_{i}(t_{0})- \psi^{d1}_{i}(t_{0}))\\ &+s_{1}\sum_{i}(\psi^{u2}_{i}(t_{0})-\psi^{d2}_{i}(t_{0}))( \overline{\psi}^{u1}_{i}(t_{0})-\overline{\psi}^{d1}_{i}(t_{0})).\end{split} \tag{13}\]
With the source term, the Wightman Green's functions \(G^{\mathrm{W}}_{12/21}(t,t^{\prime})\) show explicit dependence of two time variables, which satisfies the Schwinger-Dyson equation [50]
\[\begin{split}\int dt_{3}dt_{4}\ G^{\mathrm{R}}(t_{13})\Sigma^{ \mathrm{W}}_{12}(t_{3},t_{4})G^{\mathrm{A}}(t_{42})&=G^{\mathrm{ W}}_{12}(t_{1},t_{2}),\\ \int dt_{3}dt_{4}\ G^{\mathrm{R}}(t_{13})\Sigma^{\mathrm{W}}_{21} (t_{3},t_{4})G^{\mathrm{A}}(t_{42})&=G^{\mathrm{W}}_{21}(t_{1},t_ {2}).\end{split} \tag{14}\]
Here we have introduced \(t_{ij}=t_{i}-t_{j}\) for conciseness. The self-energies receive contributions from source terms:
\[\begin{split}\Sigma^{\mathrm{W}}_{21}(t,t^{\prime})& =J\delta(t-t^{\prime})G^{\mathrm{W}}_{21}(t,t)^{\frac{q}{2}}(-G^{ \mathrm{W}}_{12}(t,t))^{\frac{q}{2}-1}-s_{2}\delta(t-t_{0})\delta(t^{\prime}-t _{0}),\\ \Sigma^{\mathrm{W}}_{12}(t,t^{\prime})&=J\delta(t-t^{ \prime})G^{\mathrm{W}}_{12}(t,t)^{\frac{q}{2}}(-G^{\mathrm{W}}_{21}(t,t))^{ \frac{q}{2}-1}+s_{1}\delta(t-t_{0})\delta(t^{\prime}-t_{0}).\end{split} \tag{15}\]
Using the explicit form in (9), we can invert the retarded and advanced Green's functions and obtain the differential equations
\[\left(\partial_{t_{1}}+\frac{\Gamma}{2}\right)\left(\partial_{t_{2}}+\frac{ \Gamma}{2}\right)G^{\mathrm{W}}_{w\overline{w}}(t_{1},t_{2})=\Sigma^{\mathrm{ W}}_{w\overline{w}}(t_{1},t_{2}). \tag{16}\]
Here we have introduced \(\overline{w}\neq w\). The right-hand side of these equations only contains delta functions, which separate the solution into different regions as in the Majorana Brownian SYK case [11]. Since the equations (and initial conditions) are symmetric with respect to \(t_{1}\) and \(t_{2}\), we assume
\[\begin{cases}G^{\mathrm{W}}_{w\overline{w}}=e^{-\frac{\Gamma}{2}t_{1}}f_{w \overline{w}}(t_{2})+e^{-\frac{\Gamma}{2}t_{2}}g_{w\overline{w}}(t_{1}),&(t_ {1},t_{2})\in A,\\ G^{\mathrm{W}}_{w\overline{w}}=e^{-\frac{\Gamma}{2}t_{2}}f_{w\overline{w}}(t_{ 1})+e^{-\frac{\Gamma}{2}t_{1}}g_{w\overline{w}}(t_{2}),&(t_{1},t_{2})\in B,\\ G^{\mathrm{W}}_{w\overline{w}}=(-1)^{w}\sqrt{n(1-n)}e^{-\frac{\Gamma}{2}|t_{12} |},&(t_{1},t_{2})\in C\cup D,\\ G^{\mathrm{W}}_{w\overline{w}}(t_{0}^{+},t_{0}^{+})=G^{\mathrm{W}}_{w \overline{w}}(t_{0}^{-},t_{0}^{-})-(-1)^{w}s_{w}.&\end{cases} \tag{17}\]
For the third line, we use the fact that the source (13) can be neglected due to the cancellation between \(u\) and \(d\) branches for either \(t_{1}<t_{0}\) or \(t_{2}<t_{0}\). For the fourth line, we integrate the equation (16) over a small square surrounding \((t_{0},t_{0})\) and use the continuum condition without the source term \(G^{\rm W}_{w\overline{w}}(t_{0}^{+},t_{0}^{-})=G^{\rm W}_{w\overline{w}}(t_{0}^ {-},t_{0}^{+})=G^{\rm W}_{w\overline{w}}(t_{0}^{-},t_{0}^{-})\).
To determine the solution of \(f_{w\overline{w}}\) and \(g_{w\overline{w}}\), we match the boundary condition near the \(AC\) and \(AB\) boundary. Near the \(AC\) boundary we have
\[(\partial_{t_{2}}+\frac{\Gamma}{2})f_{w\overline{w}}(t_{2})=0,\qquad f_{w \overline{w}}=ae^{-\frac{\Gamma t_{2}}{2}}. \tag{18}\]
We could always fix \(a=0\) using the redundancy of \((f_{w\overline{w}}(t),g_{w\overline{w}}(t))\to(f_{w\overline{w}}(t)-ae^{- \frac{\Gamma t}{2}},g_{w\overline{w}}(t)+ae^{-\frac{\Gamma t}{2}})\). The boundary condition near \(AB\) gives
\[\left(\frac{d}{dt}+\Gamma\right)z_{w\overline{w}}=\Gamma z_{w\overline{w}}^{ \frac{q}{2}}z_{\overline{w}w}^{\frac{q}{2}-1},\qquad g_{w\overline{w}}(t)=(- 1)^{w}\sqrt{n(1-n)}e^{\frac{\Gamma}{2}t}z_{w\overline{w}}(t). \tag{19}\]
The solution can be parametrized as
\[z_{12}=\frac{C}{(1+ze^{\varkappa(t-t_{0})})^{\frac{1}{q-2}}},\qquad z_{21}= \frac{C^{-1}}{(1+ze^{\varkappa(t-t_{0})})^{\frac{1}{q-2}}}. \tag{20}\]
Here \(\varkappa=(q-2)\Gamma\) is the quantum Lyapunov exponent [41]. The initial condition gives
\[1-\frac{s_{2}}{\sqrt{n(1-n)}}=\frac{C^{-1}}{(1+z)^{\frac{1}{q-2}}},\qquad 1- \frac{s_{1}}{\sqrt{n(1-n)}}=\frac{C}{(1+z)^{\frac{1}{q-2}}}. \tag{21}\]
For small \(s_{1}\) and \(s_{2}\), we expand \(\delta C=C-1\) and \(z\) as
\[\delta C=\frac{s_{2}-s_{1}}{\sqrt{n(1-n)}},\qquad z=\frac{q-2}{2}\frac{s_{2}+s _{1}}{\sqrt{n(1-n)}}. \tag{22}\]
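As a consistency check (a short sketch added here, for the representative value \(q=4\)), one can verify symbolically that the parametrization of Eq. (20) indeed solves the saturation equation (19):

```python
# Symbolic check that Eq. (20) solves Eq. (19) for q = 4, with varkappa = (q-2)*Gamma.
import sympy as sp

t, t0, G, z, C = sp.symbols('t t0 Gamma z C', positive=True)
q = 4
vk = (q - 2)*G
S = 1 + z*sp.exp(vk*(t - t0))

z12 = C*S**sp.Rational(-1, q - 2)       # Eq. (20)
z21 = S**sp.Rational(-1, q - 2)/C

lhs = sp.diff(z12, t) + G*z12
rhs = G*z12**(q//2)*z21**(q//2 - 1)     # Gamma * z12^{q/2} * z21^{q/2-1}
print(sp.simplify(lhs - rhs))           # -> 0
```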
We are interested in probing the distribution of scramblon perturbations. As a result, the trick is to take \(s_{i}=p_{i}e^{\varkappa t_{0}}\) with \(t_{0}\to-\infty\). The final result is
\[G^{\rm W}_{w\overline{w}}(t_{1},t_{2})=\frac{G^{\rm W,0}_{w\overline{w}}(t_{ 12})}{(1+\widetilde{z}e^{\varkappa\frac{t_{1}+t_{2}-t_{12}}{2}})^{\frac{1}{q-2} }},\qquad\widetilde{z}=\frac{q-2}{2}\frac{p_{2}+p_{1}}{\sqrt{n(1-n)}}. \tag{23}\]
Note that the result is symmetric under \(p_{1}\leftrightarrow p_{2}\) and \(w\leftrightarrow\overline{w}\), which suggests the OTO-correlations are symmetric for particles and holes. This leads to \(\rm OTOC_{1}=\rm OTOC_{2}=\rm OTOC\) in (10). As a result, one may set one of \(p_{i}\) to be zero (we set \(p_{1}=0\) and \(p_{2}=p\)), and derive scramblon data using a single source term. This will be explained in the next subsection.
### Scramblon diagrams and OTOC
In the late-time regime, information scrambling is mediated by collective modes named scramblons [10]. In this subsection, we utilize results derived in the last subsection to extract the effective theory of scramblons in the cBSYK model. The OTOC can then be obtained by computing scramblon diagrams.
We begin by examining \(G^{\rm W}\) in the effective theory of scramblons. The result (23) is derived to the leading order of \(1/N\) with \(-\log p_{2}\ll\varkappa t\ll\log N\), which can be identified with diagrams:
\[G^{\rm W}_{21}(t_{1},t_{2})=\raisebox{-14.226378pt}{\includegraphics[width=14.22637 8pt]{fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/ fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/ fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/ fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/ fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/
fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/figfig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/figfig/fig/fig/fig/fig/fig/figfig/fig/fig/fig/fig/fig/fig/fig/fig/figfig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/figfig/fig/fig/fig/fig/figfig/fig/fig/fig/fig/fig/fig/figfig/fig/fig/fig/figfig/fig/fig/figfig/fig/figfig/fig/fig/fig/fig/fig/fig/figfig/fig/fig/fig/fig/fig/fig/figfig/fig/fig/figfig/fig/fig/figfig/fig/fig/fig/fig/figfig/fig/fig/figfig/fig/fig/fig/figfig/fig/fig/figfig/fig/fig/figfig/fig/fig/figfig/fig/figfig/fig/fig/fig/fig/figfig/fig/figfig/fig/figfig/figfig/fig/figfig/fig/figfig/fig/figfig/fig/figfig/fig/figfig/fig/figfig/figfig/figfig/figfig/fig/figfig/figfig/figfig/figfig/fig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/fig/figfigfig/figfig/figfig/figfig/figfig/figfig/figfig/fig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/fig/figfig/figfigfig/fig/figfig/figfig/figfigfig/figfig/figfig/figfig/figfig/figfig/figfigfig/figfig/figfig/figfig/figfig/figfigfig/figfig/fig/figfigfig/figfig/figfig/figfig/figfigfig/figfig/figfig/figfig/figfig/figfigfig/figfig/figfig/figfigfig/figfig/figfig/figfigfig/figfig/figfig/figfigfig/figfig/figfigfig/figfig/figfigfig/figfig/figfigfig/figfigfigfig/figfig/figfigfig/figfigfig/figfigfig/figfigfig/figfig/figfigfig/figfig/figfigfig/figfigfig/figfigfig/figfig/figfigfig/figfigfig/figfigfig/figfig/figfigfig/figfigfig/figfig/figfigfig/figfigfig/figfigfig/figfigfig/figfig/figfigfig/figfig/figfigfigfig/figfigfig/figfigfig/figfigfig/f
igfigfig/figfigfig/figfigfig/figfigfigfig/figfigfig/figfig
This leads to several equivalent expressions for the OTOC:
\[\begin{split}\text{OTOC}(t_{1},t_{2};t_{3},t_{4})&= \int dy_{\text{A}}dy_{\text{R}}\ h(y_{\text{A}},t_{12})h(y_{\text{R}},t_{34})e^ {-\lambda y_{\text{A}}y_{\text{R}}}\\ &=\int dy_{\text{A}}\ h(y_{\text{A}},t_{12})f(\lambda y_{\text{A} },t_{34})=\int dy_{\text{R}}\ f(\lambda y_{\text{R}},t_{12})h(y_{\text{R}},t_{34 }).\end{split} \tag{32}\]
In the first line, the result can be understood as follows: each pair of operators creates perturbations in the future/past with distribution \(h(y_{\text{R/A}},t)\), which interact through an Euclidean action \(S_{\text{eff}}=\lambda y_{\text{A}}y_{\text{R}}\). In the second line, we integrate out \(y_{\text{R}}\) (or \(y_{\text{A}}\)), and the pair of operators in the past (future) serves as a probe of perturbations in the future (past), with a probe function \(f(\lambda y_{\text{A/R}},t)\). Finally, performing the remaining integral explicitly, we find
\[\text{OTOC}(t_{1},t_{2};t_{3},t_{4})=G(t_{12})G(t_{34})\left[\frac{e^{\frac{ \kappa}{2}(|t_{12}|+|t_{34}|)}}{\lambda}\right]^{\frac{1}{q-2}}U\left(\frac{1} {q-2},1,\frac{e^{\frac{\kappa}{2}(|t_{12}|+|t_{34}|)}}{\lambda}\right). \tag{33}\]
Here we have introduced \(\lambda=\frac{e^{\frac{t_{1}+t_{2}-t_{3}-t_{4}}{2}}}{C}\). The result shows that the charge density dependence of \(\frac{\text{OTOC}(t_{1},t_{2};t_{3},t_{4})}{G(t_{12})G(t_{34})}\) only comes from \(\lambda\), consistent with the numerical observations in [1]. This extends previous discussions on information scrambling in the cBSYK model to the late-time regime [41].
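As a purely illustrative cross-check (not part of the analysis above; the values of \(q\), \(\varkappa\) and the times below are arbitrary), Eq. (33) can be evaluated with a standard special-function library. The sketch confirms the expected limits: the normalized OTOC stays close to one while the scramblon propagator \(\lambda\) is small, and is suppressed once \(\lambda\) becomes large.

```python
# Sketch only: evaluates Eq. (33) with made-up parameters; not code from the paper.
import numpy as np
from scipy.special import hyperu

q, varkappa = 4.0, 1.0                 # hypothetical values
a = 1.0 / (q - 2.0)

def otoc_normalized(t12, t34, lam):
    """OTOC(t1,t2;t3,t4) / [G(t12) G(t34)] as written in Eq. (33)."""
    z = np.exp(0.5 * varkappa * (abs(t12) + abs(t34))) / lam
    return z**a * hyperu(a, 1.0, z)

for lam in [1e-3, 1e-1, 1e1, 1e3]:
    print(f"lambda = {lam:8.1e}   OTOC/(G G) = {otoc_normalized(1.0, 1.0, lam):.4f}")
# small lambda: the ratio is close to 1; large lambda: the ratio is suppressed.
```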
### Late-time operator size distribution
Finally, we consider the operator size distribution of the cBSYK model. The operator size is defined unambiguously at infinite temperature in the full Hilbert space. In this subsection, we first explain our proposal for extending the definition to systems with charge conservation in the grand canonical ensemble with \(\mu\neq 0\), which is an analog of the finite-temperature size for systems with energy conservation [48, 51, 32]. We then derive a concrete formula for the late-time operator size using scramblon diagrams.
We begin with the \(\mu=0\) case. To be concrete, we consider the Heisenberg evolution \(c_{i}(t)=U(t)^{\dagger}c_{i}(0)U(t)\). The definition of the operator size is basis dependent. Here we choose the local orthonormal basis \(\{O_{j}^{a}\}=\{I,c_{j}+c_{j}^{\dagger},i(c_{j}-c_{j}^{\dagger}),2c_{j}^{\dagger}c_{j}-1\}\), and define their operator size as \(\{n^{a}\}=\{0,1,1,2\}\). The operator basis for the total system is given by a tensor product of local basis \(O_{1}^{a_{1}}O_{2}^{a_{2}}...O_{N}^{a_{N}}\) with an operator size \(S=\sum_{j}n^{a_{j}}\). This definition matches the convention for Majorana fermions [52]. The operator size distribution \(P(S)\) of \(c_{i}(t)\) is then defined by
\[c_{i}(t)=\sum_{\{a_{n}\}}c_{a_{1}a_{2}...a_{N}}O_{1}^{a_{1}}O_{2}^{a_{2}}...O_{N}^{a_{N}},\qquad P(S)=2\sum_{\{a_{n}\}}|c_{a_{1}a_{2}...a_{N}}|^{2}\delta_{\sum_{j}n^{a_{j}},S}. \tag{34}\]
In the late-time regime, we take the continuum limit of the operator size by introducing \(s\equiv S/N\in[0,2]\) and \(\mathcal{P}(s)\equiv NP(sN)\). It is straightforward to show \(\int ds\ \mathcal{P}(s)=\sum_{S}P(S)=2\langle c_{j}^{\dagger}c_{j}\rangle=1\).
To compute the operator size distribution, we use the trick of introducing an auxiliary system with \(N\) complex fermions \(\xi_{j}\). We first prepare the initial state as a maximally entangled state between \(c_{j}\) and \(\xi_{j}\). In the occupation basis, we have
\[|I\rangle=\prod_{j}\otimes\frac{1}{\sqrt{2}}(|01\rangle_{j}+|10\rangle_{j}). \tag{35}\]
The state satisfies \((c_{j}-\xi_{j})|I\rangle=(c_{j}^{\dagger}+\xi_{j}^{\dagger})|I\rangle=0\). This motivates us to measure the perturbation of \(|I\rangle\) using operator
\[Q_{S}=\frac{1}{2}\sum_{j}\left[(c_{j}^{\dagger}-\xi_{j}^{\dagger})(c_{j}-\xi_{j })+(c_{j}+\xi_{j})(c_{j}^{\dagger}+\xi_{j}^{\dagger})\right]. \tag{36}\]
It is straightforward to show that
\[Q_{S}|I\rangle=0,\quad Q_{S}c_{j}|I\rangle=c_{j}|I\rangle,\quad Q_{S}c_{j}^{ \dagger}|I\rangle=c_{j}^{\dagger}|I\rangle,\quad Q_{S}(2c_{j}^{\dagger}c_{j}-1 )|I\rangle=2(2c_{j}^{\dagger}c_{j}-1)|I\rangle. \tag{37}\]
This shows that the eigenvalue of \(Q_{S}\) matches the definition of operator size. As a result, the generating function of the operator size distribution can be expressed as
\[\mathcal{S}(\nu)=\int ds\ \mathcal{P}(s)e^{-\nu s}=2\langle I|c_{j}^{\dagger}(t )e^{-\frac{\nu Q_{S}}{N}}c_{j}(t)|I\rangle. \tag{38}\]
Now we generalize the above arguments to finite \(\mu\). We first consider applying \(\sqrt{\rho_{c}}\equiv\sqrt{\rho}\otimes I\) to \(|I\rangle\), which leads to a bias in the Hilbert space
\[|I_{\mu}\rangle=\sqrt{\rho_{c}}|I\rangle=\frac{1}{\sqrt{\mathcal{Z}}}\prod_{j }\otimes\frac{1}{\sqrt{2}}(|01\rangle_{j}+e^{-\frac{\mu}{2}}|10\rangle_{j}). \tag{39}\]
Tracing out the auxiliary system leads to the correct density matrix \(\mathrm{tr}_{\xi}(|I_{\mu}\rangle\langle I_{\mu}|)=\rho\). To probe deviations from \(|I_{\mu}\rangle\), we ask which operator annihilates the state. We take a symmetric convention with
\[(e^{\frac{\mu}{4}}c_{j}-e^{-\frac{\mu}{4}}\xi_{j})|I_{\mu}\rangle=(\rho_{c}^{ \frac{1}{4}}c_{j}\rho_{c}^{-\frac{1}{4}}-\rho_{\xi}^{\frac{1}{4}}\xi_{j}\rho_{ \xi}^{-\frac{1}{4}})|I_{\mu}\rangle=\rho_{c}^{\frac{1}{4}}\rho_{\xi}^{\frac{1 }{4}}(c_{j}-\xi_{j})|I\rangle=0 \tag{40}\]
Here we have introduced \(\rho_{\xi}\equiv I\otimes e^{\mu\sum_{j}\xi_{j}^{\dagger}\xi_{j}}/\mathcal{Z}\) and used \(\rho_{\xi}|I\rangle=\rho_{c}|I\rangle\). Similarly, we have \((e^{-\frac{\mu}{4}}c_{j}^{\dagger}+e^{\frac{\mu}{4}}\xi_{j}^{\dagger})|I_{\mu }\rangle=0\). As in the \(\mu=0\) case, we introduce the positive semi-definite operator
\[Q_{S\mu}\equiv\frac{\cosh\frac{\mu}{2}}{2}\sum_{j}\left[(e^{\frac{\mu}{4}}c_{ j}^{\dagger}-e^{-\frac{\mu}{4}}\xi_{j}^{\dagger})(e^{\frac{\mu}{4}}c_{j}-e^{- \frac{\mu}{4}}\xi_{j})+(e^{-\frac{\mu}{4}}c_{j}+e^{\frac{\mu}{4}}\xi_{j})(e^{- \frac{\mu}{4}}c_{j}^{\dagger}+e^{\frac{\mu}{4}}\xi_{j}^{\dagger})\right], \tag{41}\]
and define the finite density operator size of \(c_{j}(t)\) by the generating function
\[\begin{split}\mathcal{S}_{\mu}(\nu)&=\int ds\ \mathcal{P}_{\mu}(s)e^{-\nu s }\equiv\frac{\langle I|c_{j}^{\dagger}(t)\rho_{c}^{\frac{1}{4}}\rho_{\xi}^{ \frac{1}{4}}e^{-\frac{\nu Q_{S\mu}}{N}}\rho_{c}^{\frac{1}{4}}\rho_{\xi}^{ \frac{1}{4}}c_{j}(t)|I\rangle}{\langle I|c_{j}^{\dagger}(t)\rho_{c}^{\frac{1} {2}}\rho_{\xi}^{\frac{1}{4}}c_{j}(t)|I\rangle}\\ &=2\cosh\frac{\mu}{2}\langle I|c_{j}^{\dagger}(t)\rho_{c}^{\frac{ 1}{2}}\rho_{\xi}^{\frac{1}{4}}e^{-\frac{\nu Q_{S\mu}}{N}}\rho_{c}^{\frac{1}{ 4}}\rho_{\xi}^{\frac{1}{4}}c_{j}(t)|I\rangle.\end{split} \tag{42}\]
In the late-time regime with \(\varkappa t\sim\log N\), only contributions from scramblon diagrams are important due to the suppression of \(1/N\). As in [48], the result can be derived by arguments similar to that of the OTOC: We imagine the insertion of \(c_{j}\) and \(c_{j}^{\dagger}\) creates the perturbation in the future, whose strength is described by the distribution function \(h^{\mathrm{R}}(y,0)\). This perturbation is probed by the size operator \(Q_{S\mu}\) in the past, which has a probe function
\[Q_{S\mu}\approx N\left(1-2\cosh\left(\frac{\mu}{2}\right)f(\lambda y,0)\right) =N\left(1-\frac{1}{(1+\lambda y)^{\frac{1}{q-2}}}\right),\qquad\lambda=e^{ \varkappa t}/C. \tag{43}\]
Here we take the expectation value over \(|I_{\mu}\rangle\) for terms without OTO-correlations. This leads to the result
\[\mathcal{S}_{\mu}(\nu)=2\cosh\frac{\mu}{2}\int dy\ h(y,0)e^{-\nu[1-(1+\lambda y) ^{-\frac{1}{q-2}}]}=\int dy\ \frac{y^{\frac{1}{q-2}-1}}{\Gamma(\frac{1}{q-2})}e^{-y}e^{-\nu[1-(1+\lambda y )^{-\frac{1}{q-2}}]}. \tag{44}\]
Performing the inverse Laplace transform, the distribution \(\mathcal{P}(s)\) is given by
\[\mathcal{P}_{\mu}(s)=|y^{\prime}(s)|\,\frac{y(s)^{\frac{1}{q-2}-1}e^{-y(s)}}{\Gamma(\frac{1}{q-2})},\qquad s=1-(1+\lambda y)^{-\frac{1}{q-2}}\in[0,1]. \tag{45}\]
In the early-time regime with \(\lambda\ll 1\), the operator size grows exponentially with exponent \(\varkappa\). In the long-time limit, \(c_{j}(t)\) approaches a maximally scrambled operator, which leads to \(\mathcal{P}(s)=\delta(s-1)\), since the standard deviation of the operator size can be neglected to leading order in \(1/N\). Moreover, as for the OTOC, the density dependence only comes from the propagator \(\lambda\) of scramblons 1. This generalizes previous results for the late-time operator size distribution of Majorana SYK models [48].
Footnote 1: This statement depends on the overall coefficient in the definition of \(Q_{S\mu}\). One may change the definition by an overall factor, which leads to a rescale of \(s\).
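To make the distribution concrete, the sketch below (illustrative only; \(q\) and the scramblon propagator \(\lambda\) take made-up values) evaluates \(\mathcal{P}_{\mu}(s)\) of Eq. (45) through the change of variables \(s(y)\) and checks numerically that it integrates to one, as required by Eq. (44) at \(\nu=0\).

```python
# Sketch only, not code from the paper: late-time size distribution of Eq. (45).
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

q   = 4.0        # hypothetical value
lam = 10.0       # scramblon propagator e^{varkappa t}/C at the chosen time (made up)
a   = 1.0 / (q - 2.0)

def y_of_s(s):
    return ((1.0 - s) ** (-(q - 2.0)) - 1.0) / lam

def dy_ds(s):
    return (q - 2.0) * (1.0 - s) ** (-(q - 1.0)) / lam

def P(s):
    y = y_of_s(s)
    return dy_ds(s) * y ** (a - 1.0) * np.exp(-y) / gamma(a)

norm, _ = quad(P, 0.0, 1.0)
mean, _ = quad(lambda s: s * P(s), 0.0, 1.0)
print(f"normalization = {norm:.6f} (should be 1), mean of s = {mean:.4f}")
```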
## 3 Entanglement dynamics and transitions
In this section, we consider the entanglement dynamics of complex Brownian SYK chains. We first consider the unitary evolutions and study the charge dependence of the entanglement velocity for both the Von Neumann entropy and the \(m\)-th Renyi entropy. We then add repeated weak measurements, where measurement induced phase transitions [53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66] exist for both interacting and non-interacting models. We derive the effective theory for the transition, with a comparison to its Majorana counterparts [67, 68, 69, 70].
### Model and set-up
We extend the cBSYK model (1) to 1-D chains by introducing multiple copies and adding Brownian hopping terms [71]. The Hamiltonian reads
\[\begin{split} H(t)=&\sum_{x}\sum_{i_{1}<i_{2}...<i_{\frac{q}{2}}}\sum_{j_{1}<j_{2}...<j_{\frac{q}{2}}}J^{x}_{i_{1}i_{2}...i_{\frac{q}{2}}j_{1}j_{2}...j_{\frac{q}{2}}}(t)c^{\dagger}_{i_{1},x}c^{\dagger}_{i_{2},x}...c^{\dagger}_{i_{\frac{q}{2}},x}c_{j_{1},x}c_{j_{2},x}...c_{j_{\frac{q}{2}},x}\\ &+\sum_{x}\left[\sum_{ij}V^{x}_{ij}(t)c^{\dagger}_{i,x+1}c_{j,x}+\text{H.C.}\right].\end{split} \tag{46}\]
Here we take the periodic boundary condition with \(x\in\{1,2,...,L\}\). Random parameters on different sites are labeled by \(x\) and are thus independent. The random hopping strengths \(V^{x}_{ij}(t)\) are Brownian variables with
\[\overline{V^{x}_{ij}(t)}=0,\qquad\overline{V^{x}_{ij}(t)V^{x}_{ij}(t^{\prime} )^{\star}}=\frac{V}{2N}\delta(t-t^{\prime}). \tag{47}\]
Following the discussion in section 2.1, we find that the Green's functions on the Keldysh contour \(G_{x}^{ss^{\prime}}(t)=\langle\psi_{i}^{s}(t)\overline{\psi}_{i}^{s^{\prime}}(0)\rangle\) are still given by (8), with decay rate \(\Gamma=J\left(n(1-n)\right)^{\frac{q}{2}-1}+V\).
In this section, we are interested in the dynamics of the entanglement entropy for pure states. We focus on the setup with an auxiliary fermion chain \(\xi_{i,x}\)[71, 16], which is a direct analog of the gravity calculation [72]. The system is prepared in
\[|\psi_{0}\rangle=\prod_{x}\otimes|I_{\mu}\rangle_{x}, \tag{48}\]
which contains no entanglement between different sites \(x\). The system is then evolved under the Hamiltonian (46), where the entanglement between different sites builds up. We then choose the first \(L_{A}\) sites \(x\in[1,2,...,L_{A}]\) as the subsystem \(A\), including both \(c_{i,x}\) and \(\xi_{i,x}\) fermions. The reduced density matrix \(\rho_{A}\) reads
\[\rho_{A}=\mathrm{tr}_{\overline{A}}\ U(t)|\psi_{0}\rangle\langle\psi_{0}|U(t)^{\dagger}\,. \tag{49}\]
[The pictorial contour representation of \(\rho_{A}\) drawn at this point is not reproduced here.]
Here we draw a pictorial representation, which is understood as a path-integral over the corresponding contour [16, 15]. We have omitted the density matrix \(\rho\) for simplicity. Dotted lines represent the interaction between A and \(\overline{A}\) in the unitary evolution. We are interested in the entropy of \(\rho_{A}\). The \(m\)-th Renyi entropy and the Von Neumann entropy are defined as
\[S_{A}^{(m)}=-\frac{1}{m-1}\log\mathrm{tr}_{A}(\rho_{A}^{m}),\qquad S_{A}=\lim_{m\to 1}\ S_{A}^{(m)}=-\mathrm{tr}_{A}(\rho_{A}\log\rho_{A}). \tag{50}\]
As an example, for the second and the fourth Renyi entropy, we have
\[\mathrm{tr}_{A}(\rho_{A}^{2})=\cdots\,,\qquad\mathrm{tr}_{A}(\rho_{A}^{4})=\cdots\,. \tag{51}\]
[The diagrammatic contour representations of \(\mathrm{tr}_{A}(\rho_{A}^{m})\) and the associated path-integral expression, Eqs. (51)-(52), are not reproduced here.]
### Density dependence of entanglement velocity
In this subsection, we are interested in the entanglement velocity, in particular its density dependence, of the cBSYK chain. The entanglement velocity \(v_{E}^{(m)}\) is defined as the slope of entropy in the early-time regime \(S_{A}^{(m)}\approx 2v_{E}^{(m)}s_{0}^{(m)}t\), where \(s_{0}^{(m)}=-N(m-1)^{-1}\log(n^{m}+(1-n)^{m})\) is the maximal entropy density2. We focus on small hopping strength \(V\), where \(v_{E}^{(m)}\) can be obtained by a perturbative calculation. Our discussion primarily follows [25], which focuses on static system-bath couplings with Hermitian operators.
Footnote 2: Here the factor of 2 is due to the existence of two boundaries in our setup.
We begin with the Renyi entropy with \(m>1\). For \(V=0\), the contour (51) reduces to \(m\) copies of the traditional Keldysh contour (4). As a result, the Green's functions are diagonal in the world index, \(G_{ww^{\prime},x}^{ss^{\prime}}(t)=G^{ss^{\prime}}(t)\delta_{ww^{\prime}}\), with \(G^{ss^{\prime}}(t)\) given by (8). Since there is no coupling between different sites, we have \(S_{A}^{(m)}=0\) for \(V=0\). Now we add the effect of \(V\) perturbatively. The leading-order contribution comes from evaluating the \(V\) term in (52) using the Green's functions at \(V=0\). Due to unitarity, the only contribution is from boundary terms with \(x=L_{A}\) and \(x=L\):
\[S_{A}^{(m)}(t)=-\frac{mN}{m-1}V\int_{0}^{t}dt^{\prime}\ \sum_{s}G^{ss}(t^{\prime},t^{ \prime})G^{ss}(t^{\prime},t^{\prime})=\frac{2mN}{m-1}Vn(1-n)t. \tag{53}\]
More generally, in models where nearest-neighbor sites are coupled through a \(p\)-body Brownian term \(\sim(c_{x+1}^{\dagger}c_{x})^{p/2}\), we expect \(v_{E}^{(m)}s_{0}^{(m)}\sim V[n(1-n)]^{\frac{p}{2}}\).
It is also interesting to compare \(v_{E}^{(m)}s_{0}^{(m)}\) with butterfly velocity \(v_{B}\). To determine \(v_{B}\), we generalize the discussion in subsection 2.2 to the SYK chain case. (19) now becomes
\[\left(\frac{d}{dt}+\Gamma_{J}+\Gamma_{V}\right)z_{w\overline{w},x}=\Gamma_{J}z_{w\overline{w},x}^{\frac{q}{2}-1}z_{\overline{w}w,x}^{\frac{q}{2}-1}+\frac{\Gamma_{V}}{2}(z_{w\overline{w},x+1}^{\frac{p}{2}-1}z_{\overline{w}w,x}^{\frac{p}{2}-1}+z_{w\overline{w},x-1}^{\frac{p}{2}-1}z_{\overline{w}w,x}^{\frac{p}{2}-1}). \tag{54}\]
Here \(\Gamma_{J}=J\left(n(1-n)\right)^{\frac{q}{2}-1}\) and \(\Gamma_{V}=V\left(n(1-n)\right)^{\frac{p}{2}-1}\). Now we consider the linearization of the above equation, \(z_{\overline{w}w,x}=1-\delta z_{\overline{w}w,x}\) with \(z_{\overline{w}w,x}=z_{w\overline{w},x}\), which gives
\[\frac{d}{dt}\delta z_{w\overline{w},x}=(q-2)\Gamma_{J}\delta z_{w\overline{w},x}+\frac{p}{2}\frac{\Gamma_{V}}{2}(\delta z_{w\overline{w},x+1}+\delta z_{w \overline{w},x-1})+(p-4)\frac{\Gamma_{V}}{2}\delta z_{w\overline{w},x}. \tag{55}\]
To leading order in small \(V\), an initial perturbation near \(x=0\) spreads out as
\[\delta z_{w\overline{w},x}(t)\approx\int\frac{dk}{2\pi}e^{(q-2)\Gamma_{J}t-p\Gamma_{V}tk^{2}/2}e^{ikx}\sim e^{(q-2)\Gamma_{J}t-\frac{x^{2}}{2p\Gamma_{V}t}}. \tag{56}\]
This gives \(v_{B}=\sqrt{2(q-2)p\Gamma_{J}\Gamma_{V}}\sim\sqrt{JV}(n(1-n))^{\frac{p+q}{4}-1}\). We compare \(v_{E}^{(m)}\) with \(v_{B}s_{0}^{(m)}\). At small density \(n\approx 0\) or high density \(n\approx 1\), we find
\[v_{B}s_{0}^{(m)}\sim(VJ)^{1/2}\frac{mN}{m-1}(n(1-n))^{\frac{p+q}{4}}. \tag{57}\]
For small \(V\), we thus find \(v_{B}\gg v_{E}^{(m)}\). The result shows that \(v_{B}\) matches the entanglement velocity \(v_{E}\) only for \(p=q\) and \(V\sim J\).
Now we consider the entanglement velocity for the Von Neumann entropy \(S_{A}\). In the limit of \(m\to 1\), it is found that additional loop diagrams contribute [25], canceling the divergence of (53). For \(m=2\) and \(m=4\), the diagrams read
\[\delta S_{A}^{(2)}\ =\ \text{[diagram not reproduced]} \tag{59}\]
Here the solid black lines represent the fermion Green's function \(G^{s\overline{s}}(t)\). There is also a diagram given by taking the charge conjugation of (59). As the next step, one needs to take the disorder average over Brownian variables, which leads to contractions between \(V_{ij}\). An important observation is that if we neglect one of the Green's functions (for example, the Green's function on the blue contour with \(w=1\)), the rest of the diagram, after summation over \(m\), can be represented by an auxiliary Schwinger-Dyson equation3
Footnote 3: Here we add a factor \((-1)^{m-1}\). This is due to the gauge transformation of the fermion modes \(c_{d}\to-c_{d}\) on \((m-1)\) red contours if we want to set the Green's functions equal to \(G^{ss^{\prime}}\) in (8). Please see [67] for the details.
\[\begin{split}\sum_{m}(m-1)(-1)^{m-1}\delta S_{A}^{(m)}& =4N\int_{0}^{t}\int_{0}^{t}dt_{1}dt_{2}\ G^{du}(t_{12})S_{\text{ aug}}^{ud}(t_{21})\\ &\approx 4Nt\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}G^{du}( \omega)S_{\text{aug}}^{ud}(\omega),\end{split} \tag{60}\]
where diagrammatically we have
\[S_{\text{aug}}^{s\overline{s}}\ =\ \text{[diagrammatic series not reproduced]} \tag{61}\]
This leads to
\[\begin{split} G_{\text{aug}}(\omega)^{-1}=\begin{pmatrix}0&G^{ud}( \omega)\\ G^{du}(\omega)&0\end{pmatrix}^{-1}-\begin{pmatrix}0&\Sigma_{\text{aug}}^{ud} \\ \Sigma_{\text{aug}}^{du}&0\end{pmatrix},\\ \Sigma_{\text{aug}}^{s\overline{s}}=\int\frac{d\omega}{2\pi}\ \frac{V}{2}G_{\text{aug}}^{s \overline{s}}(\omega),\quad S_{\text{aug}}^{s\overline{s}}(\omega)=\Sigma_{ \text{aug}}^{s\overline{s}}+\Sigma_{\text{aug}}^{s\overline{s}}G^{\overline{ s}s}(\omega)S_{\text{aug}}^{s\overline{s}}(\omega).\end{split} \tag{62}\]
In other words, we use \(G^{s\overline{s}}\) in (8) as \(G_{0}\), and add the effect of \(V\) perturbatively. The solution of \(\Sigma^{s\overline{s}}_{\rm aug}\) takes the form of
\[\Sigma^{ud}_{\rm aug}=-n\sigma,\qquad\Sigma^{du}_{\rm aug}=(1-n)\sigma, \qquad\mbox{with:}\ \ \frac{\Gamma V}{2}\sqrt{\frac{1}{\Gamma^{2}+4(1-n)n\Gamma\sigma}}=\sigma. \tag{63}\]
This gives
\[\begin{split}\sum_{m}(m-1)&(-1)^{m-1}\delta S^{(m)}_ {A}=-8Nt\frac{n(1-n)}{V}\sigma^{2}\\ &\approx Nt\left\{-2n(1-n)V+4[n(1-n)]^{2}\frac{V^{2}}{\Gamma}-12 [n(1-n)]^{3}\frac{V^{3}}{\Gamma^{2}}\right\}+O(V^{4}).\end{split} \tag{64}\]
Since diagrammatically \(\delta S^{(m)}_{A}\propto V^{m}\), we have \((m-1)\delta S^{(m)}_{A}=-NC(m)[Vn(1-n)/\Gamma]^{m}\Gamma\). In particular, \(C(1)=2\). Unfortunately, we don't find a way to determine the analytical continuation of \(C(m)\) near \(m=1\) unambiguously. Combining (64) with (53), we find
\[S_{A}(t)=2NVtn(1-n)\log\left(\frac{\Gamma e^{1-C^{\prime}(1)/2}}{Vn(1-n)} \right). \tag{65}\]
Comparing (65) and (53), we find an additional factor of \(-\log\left[V(1-n)n/\Gamma\right]\). We expect this is general for systems with large local Hilbert space dimensions. The enhancement of \(\log V/\Gamma\) has also been obtained in [25] for couplings with Hermitian operators. We attribute the enhancement of \(\log n\) at small density to the different behavior of the maximal entropy:
\[s_{0}\approx-Nn\log n,\qquad s_{0}^{(m>1)}\approx\frac{Nmn}{m-1}. \tag{66}\]
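As a numerical sanity check of the algebra leading to Eq. (64) (not taken from the paper; \(\Gamma\), \(V\) and \(n\) are arbitrary test values with \(V\ll\Gamma\)), one can solve the self-consistency condition of Eq. (63) and compare \(-8n(1-n)\sigma^{2}/V\) with the truncated small-\(V\) series:

```python
# Sketch only: root-find Eq. (63) and compare with the expansion in Eq. (64).
import numpy as np
from scipy.optimize import brentq

Gamma, V, n = 1.0, 0.05, 0.3          # made-up parameters
u = n * (1.0 - n)

def self_consistency(sigma):
    return 0.5 * Gamma * V / np.sqrt(Gamma**2 + 4.0 * u * Gamma * sigma) - sigma

sigma  = brentq(self_consistency, 1e-12, 10.0 * V)
exact  = -8.0 * u * sigma**2 / V
series = -2.0 * u * V + 4.0 * u**2 * V**2 / Gamma - 12.0 * u**3 * V**3 / Gamma**2
print(f"sigma = {sigma:.6e}")
print(f"closed form {exact:.6e}  vs  truncated series {series:.6e}")
```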
### Measurements and entanglement transitions
In this subsection, we consider adding repeated weak measurements to the cBSYK chain (46). The evolution of the system then becomes non-unitary, which can exhibit measurement induced entanglement phase transitions. We derive the effective theory for the transition, with a comparison to its Majorana counterpart [68, 69].
We first introduce measurements into the cBSYK chain. We consider weak measurements with respect to operator \(O\), which is described by Kraus operators:
\[K^{0}_{O}=1-\gamma_{O}O^{\dagger}O+O(\gamma^{2}),\ \ \ \ \ K^{1}_{O}=\sqrt{2 \gamma_{O}}O. \tag{67}\]
Here we assume \(\gamma\ll 1\). In this section, we focus on the second Renyi entropy, and perform forced measurement by post-selection of outcome \(0\). For \(\gamma_{O}=\zeta_{O}\delta t\), the evolution of \(\rho\otimes\rho\) due to the measurement then takes the form of imaginary-time evolutions [69]
\[\rho(t+\delta t)\otimes\rho(t+\delta t)\propto e^{-h_{I}\delta t}\rho(t)e^{-h_ {I}\delta t}\otimes e^{-h_{I}\delta t}\rho(t)e^{-h_{I}\delta t}, \tag{68}\]
with \(h_{I}=\zeta_{O}O^{\dagger}O\). Adding contributions from measurements with different \(O\) and contributions from the unitary part, the total evolution is governed by the non-Hermitian Hamiltonian [67]:
\[H_{\rm tot}(t)=H(t)-iH_{I},\qquad H_{I}=\sum_{O}\zeta_{O}O^{\dagger}O. \tag{69}\]
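The structure of Eqs. (67)-(68) is easy to verify explicitly for a single fermionic mode. The minimal sketch below (purely illustrative; the value of \(\gamma\) and the test density matrix are arbitrary) checks that the two Kraus operators form a channel up to \(O(\gamma^{2})\) and that the forced, no-click evolution agrees with the imaginary-time form up to the same order.

```python
# Sketch only: single-mode check of the Kraus operators of Eq. (67) with O = n.
import numpy as np

gamma = 1e-3
n_op  = np.diag([0.0, 1.0])            # occupation number on the basis {|0>, |1>}
K0    = np.eye(2) - gamma * n_op       # 1 - gamma O^dagger O   (n^2 = n)
K1    = np.sqrt(2.0 * gamma) * n_op    # sqrt(2 gamma) O

completeness = K0.conj().T @ K0 + K1.conj().T @ K1
print("deviation from identity:", np.max(np.abs(completeness - np.eye(2))))  # O(gamma^2)

rho   = np.array([[0.5, 0.4], [0.4, 0.5]])          # generic test density matrix
exp_h = np.diag(np.exp(-gamma * np.diag(n_op)))     # e^{-h_I dt} with h_I dt = gamma n
print("no-click vs imaginary-time evolution:",
      np.max(np.abs(K0 @ rho @ K0.conj().T - exp_h @ rho @ exp_h)))          # O(gamma^2)
```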
For the complex fermion model, it is natural to measure with respect to the fermion density operator \(n_{i,x}=c_{i,x}^{\dagger}c_{i,x}\). However, choosing \(\{O\}=\{n_{i,x}\}\) with \(\zeta_{O}=\zeta\) leads to \(H_{I}=\zeta Q\), which commutes with \(H(t)\). As a result, the steady state always has zero density \(n\) for any \(\zeta>0\). To construct a model with a non-trivial entanglement transition, we instead take \(\{O\}=\{n_{i\in\text{odd},x},1-n_{i\in\text{even},x}\}\) and \(\zeta_{O}=\zeta\); as a result, we have
\[H_{I}=\zeta\sum_{i,x}(-1)^{x-1}c_{i,x}^{\dagger}c_{i,x}+\text{const}. \tag{70}\]
With the repeated weak measurements, we consider the same protocol as described in subsection 3.1. The Renyi entropies can still be computed as in (51). The only difference is the replacement:
\[\text{tr}\log\left(\eta_{s}\delta_{ww^{\prime}}^{ss^{\prime}}\partial_{t}- \Sigma_{ww^{\prime},x}^{ss^{\prime}}\right)\ \to\ \text{tr}\log\left(\delta_{ww^{\prime}}^{ss^{\prime}}[\eta_{s}\partial_{t}- \zeta(-1)^{x}]-\Sigma_{ww^{\prime},x}^{ss^{\prime}}\right). \tag{71}\]
As pointed out in [68, 69], the transitions for the second Renyi entropy can be understood as a topological defect of the replicated Keldysh contour, which contains two disconnected copies of the traditional Keldysh contour (4). A volume-law/area-law entangled phase corresponds to a non-vanishing/vanishing correlation between the \(u\) and \(d\) branches. Following this idea, we first study the Green's functions for the steady state on the single Keldysh contour with \(w=1\). The Schwinger-Dyson equation reads
\[\begin{pmatrix}\partial_{t}\mp\zeta-\Sigma_{e/o}^{uu}&-\Sigma_{e/o}^{ud}\\ -\Sigma_{e/o}^{du}&-\partial_{t}\mp\zeta-\Sigma_{e/o}^{dd}\end{pmatrix}\circ \begin{pmatrix}G_{e/o}^{uu}&G_{e/o}^{ud}\\ G_{e/o}^{du}&G_{e/o}^{dd}\end{pmatrix}=I, \tag{72}\]
Here \(G_{e/o}^{ss^{\prime}}\) is the Green's function for even/odd sites. We focus on the non-interacting limit with \(J=0\), and comment on the interaction effects at the end. For \(\zeta<V\), the solution takes the form of
\[G_{e/o}(\omega)=\begin{pmatrix}-i\omega\mp\frac{\zeta}{2}&\frac{V}{2\phi} \sqrt{1-\frac{\zeta^{2}}{V^{2}}}\\ -\frac{V\phi}{2}\sqrt{1-\frac{\zeta^{2}}{V^{2}}}&i\omega\mp\frac{\zeta}{2} \end{pmatrix}^{-1}, \tag{73}\]
or in the time domain:
\[G_{e/o}(t)=\frac{1}{2}\begin{pmatrix}\text{sgn}(t)\mp\frac{\zeta}{V}&-\phi^{- 1}\sqrt{1-\frac{\zeta^{2}}{V^{2}}}\\ \phi\sqrt{1-\frac{\zeta^{2}}{V^{2}}}&-\text{sgn}(t)\mp\frac{\zeta}{V}\end{pmatrix} e^{-\frac{V}{2}|t|}. \tag{74}\]
Interestingly, there is a free parameter \(\phi\), which is not fixed by (72). This is an analog of the density \(n\) for the unitary evolution case, which needs to be determined by the initial density matrix \(\rho=e^{-\mu Q}/\mathcal{Z}\). The solution of (8) indicates we have \(\phi=e^{\mu/2}\). We have checked that this matches the numerical results obtained by using methods elaborated in [16, 67]. This solution contains non-trivial correlation between \(u\) and \(d\) branches, and consequently describes a critical phase with logarithmic entanglement entropy. The correlation vanishes as
\(\sqrt{1-\zeta^{2}/V^{2}}\) near \(\zeta=V\), indicating a mean-field transition into an area-law entangled phase. The solution for \(\zeta\geqslant V\) reads
\[G_{e/o}(\omega)=\begin{pmatrix}-i\omega\mp(\zeta-\frac{V}{2})&0\\ 0&i\omega\mp(\zeta-\frac{V}{2})\end{pmatrix}^{-1}, \tag{75}\]
which gives
\[G_{e/o}(t)=\frac{1}{2}\begin{pmatrix}\text{sgn}(t)\mp 1&0\\ 0&-\text{sgn}(t)\mp 1\end{pmatrix}e^{-\frac{2\zeta-V}{2}|t|}. \tag{76}\]
This solution indicates all particles are in even sites while all odd sites are empty. The solution contains no additional parameters, which indicates the steady state for the area-law entangled phase is independent of the initial density matrix \(\rho\).
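The pair of expressions (73)-(74) can be cross-checked directly. The sketch below (not part of the derivation; \(V\), \(\zeta\) and \(\phi\) take hypothetical values, and the Fourier convention \(G(\omega)=\int dt\,e^{i\omega t}G(t)\) is assumed) transforms the time-domain Green's function of Eq. (74) numerically and compares it with the matrix form of Eq. (73).

```python
# Sketch only: numerical Fourier transform of Eq. (74) vs the matrix form of Eq. (73)
# (even sites, i.e. the upper sign), for made-up parameters with zeta < V.
import numpy as np
from scipy.integrate import quad

V, zeta, phi = 1.0, 0.4, 1.3
root = np.sqrt(1.0 - zeta**2 / V**2)

def G_time(t):
    return 0.5 * np.array([[np.sign(t) - zeta / V, -root / phi],
                           [phi * root,            -np.sign(t) - zeta / V]]) * np.exp(-0.5 * V * abs(t))

def G_freq_from_time(w):
    out = np.zeros((2, 2), dtype=complex)
    for i in range(2):
        for j in range(2):
            f = lambda t: G_time(t)[i, j] * np.exp(1j * w * t)
            for a, b in [(-60.0, 0.0), (0.0, 60.0)]:        # split at the kink of |t|
                out[i, j] += quad(lambda t: f(t).real, a, b, limit=200)[0]
                out[i, j] += 1j * quad(lambda t: f(t).imag, a, b, limit=200)[0]
    return out

def G_freq_direct(w):
    M = np.array([[-1j * w - 0.5 * zeta,   V / (2.0 * phi) * root],
                  [-0.5 * V * phi * root,  1j * w - 0.5 * zeta]])
    return np.linalg.inv(M)

for w in [0.0, 0.5, 2.0]:
    print(w, np.max(np.abs(G_freq_from_time(w) - G_freq_direct(w))))  # ~1e-8 or smaller
```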
We are interested in the effective theory for the transition. We start from the area law phase, and consider the fluctuation of \(G^{ss^{\prime}}_{ww^{\prime},x}(t,t)\) on the replicated Keldysh contour with two worlds. The saddle-point solution of \(G^{ss^{\prime}}_{ww^{\prime},x}(t,t)\) is given by two copies of (76) as \(G^{ss^{\prime},0}_{ww^{\prime},x}(t,t)=\delta_{ww^{\prime}}G^{ss^{\prime}}_{x \in\text{even/odd}}(0)\). For fluctuations
\[G^{ss^{\prime}}_{ww^{\prime},x}(t,t)=G^{ss^{\prime},0}_{ww^{\prime},x}(t,t)+ \delta g^{ss^{\prime}}_{ww^{\prime},x}(t), \tag{77}\]
expanding the \(G-\Sigma\) action to the quadratic order gives
\[S=\phi^{2}\sum_{ww^{\prime}}\int\frac{d\omega}{2\pi}\frac{dk}{2\pi}\ (\delta g^{ud}_{ww^{\prime},e},\delta g^{ud}_{ww^{\prime},o})^{*}.\begin{pmatrix} 2\zeta-V-i\omega&-V\cos(k)\\ -V\cos(k)&2\zeta-V+i\omega\end{pmatrix}.(\delta g^{ud}_{ww^{\prime},e},\delta g ^{ud}_{ww^{\prime},o})^{T}. \tag{78}\]
Here we focus on correlations between \(u\) and \(d\) branches with \(-\delta g^{du}_{ww^{\prime},x}=\phi^{2}(\delta g^{ud}_{w^{\prime}w,x})^{*}\), which is relevant for the entanglement transition. We introduce the symmetric and anti-symmetric components \(\delta g^{s\overline{s}}_{ww^{\prime},s/a}=\frac{1}{\sqrt{2}}(\delta g^{s \overline{s}}_{ww^{\prime},e}\pm\delta g^{s\overline{s}}_{ww^{\prime},o})\), and expand for small \(\omega\) and \(k\). This leads to
\[S=\phi^{2}\sum_{ww^{\prime}}\int\frac{d\omega}{2\pi}\frac{dk}{2\pi}\ (\delta g^{ud}_{ww^{\prime},s},\delta g^{ud}_{ww^{\prime},a})^{*}.\begin{pmatrix} 2\zeta-2V+Vk^{2}/2&-i\omega\\ -i\omega&2\zeta\end{pmatrix}.(\delta g^{ud}_{ww^{\prime},s},\delta g^{ud}_{ww^ {\prime},a})^{T}. \tag{79}\]
Now integrating out the anti-symmetric component, we find
\[S=\phi^{2}\sum_{ww^{\prime}}\int\frac{d\omega}{2\pi}\frac{dk}{2\pi}\left(2 \zeta-2V+\frac{Vk^{2}}{2}+\frac{\omega^{2}}{2\zeta}\right)|\delta g^{ud}_{ww^ {\prime},s}(\omega,k)|^{2}. \tag{80}\]
For \(\zeta-V>0\), excitations \(\delta g^{ud}_{ww^{\prime},s}\) have positive energy, while for \(\zeta-V<0\), they tend to condense, which leads to a finite correlation between \(u\) and \(d\). To make the condensate value finite, we need to introduce the quartic term. The symmetry of fields \(\delta g^{ud}_{ww^{\prime},s}\) becomes explicit if we treat \(\delta g^{ud}_{ww^{\prime},s}\) as a matrix field \(\boldsymbol{\varphi}\) with indices \(ww^{\prime}\). (80) now becomes
\[S=\phi^{2}\int dxdt\left\{\frac{1}{2\zeta}\text{tr}(\partial_{t}\boldsymbol{ \varphi}^{\dagger}\partial_{t}\boldsymbol{\varphi})+\frac{V}{2}\text{tr}( \nabla\boldsymbol{\varphi}^{\dagger}\nabla\boldsymbol{\varphi})+(2\zeta-2V) \text{tr}(\boldsymbol{\varphi}^{\dagger}\boldsymbol{\varphi})\right\}. \tag{81}\]
This action is invariant under \(U(2)_{L}\otimes U(2)_{R}\), where the matrix field transforms according to \(\boldsymbol{\varphi}\to u_{L}\boldsymbol{\varphi}u_{R}^{\dagger}\). This can be traced back to the \(U(2)\otimes U(2)\) invariance of the replicated non-interacting Hamiltonian, similar to the \(O(2)\otimes O(2)\) symmetry in the Majorana case [68]. As a
result, there are two natural quartic terms that are consistent with the symmetry:
\[\begin{split} S_{\text{full}}=\int dxdt\Big{\{}&\frac{ \phi^{2}}{2\zeta}\text{tr}(\partial_{t}\mathbf{\varphi}^{\dagger}\partial_{t}\mathbf{ \varphi})+\frac{V\phi^{2}}{2}\text{tr}(\nabla\mathbf{\varphi}^{\dagger}\nabla\mathbf{ \varphi})+(2\zeta-2V)\phi^{2}\text{tr}(\mathbf{\varphi}^{\dagger}\mathbf{\varphi})\\ &+\lambda_{1}\phi^{4}\text{tr}(\mathbf{\varphi}^{\dagger}\mathbf{\varphi})^ {2}+\lambda_{2}\phi^{4}\text{tr}(\mathbf{\varphi}^{\dagger}\mathbf{\varphi}\mathbf{\varphi} ^{\dagger}\mathbf{\varphi})\Big{\}}.\end{split} \tag{82}\]
We expect \(\lambda_{2}>0\), which determines the properties of the symmetry-breaking phase. To see this, we consider the special case with \(\mathbf{\varphi}=\varphi_{1}\mathbf{I}+\varphi_{2}\mathbf{\sigma}_{x}\). We have
\[\text{tr}(\mathbf{\varphi}^{\dagger}\mathbf{\varphi})^{2}=4(\varphi_{1}^{2}+\varphi_{2}^{2})^{2},\qquad\text{tr}(\mathbf{\varphi}^{\dagger}\mathbf{\varphi}\mathbf{\varphi}^{\dagger}\mathbf{\varphi})=2(\varphi_{1}^{4}+3\varphi_{1}^{2}\varphi_{2}^{2}+\varphi_{2}^{4}). \tag{83}\]
As a result, for \(\lambda_{2}>0\), the repulsion between \(\varphi_{1}\) and \(\varphi_{2}\) is larger than the repulsion within each species, and the energy favors \(\varphi_{1}\neq 0\), \(\varphi_{2}=0\) or \(\varphi_{1}=0\), \(\varphi_{2}\neq 0\), depending on the boundary condition. In particular, for two copies of the traditional Keldysh contour, \(u\) and \(d\) branches are paired up within each world, and we have \(\varphi_{1}\neq 0\) and \(\varphi_{2}=0\). The residual symmetry group is then given by \(u_{L}=u_{R}\), and there are Goldstone modes living on \(U(2)_{L}\otimes U(2)_{R}/U(2)_{+}\). This is different from the Majorana case, where Goldstone modes live on \(O(2)_{L}\otimes O(2)_{R}/O(2)_{+}\sim O(2)\). However, we need to point out that in the large-\(N\) limit, only the classical configuration with the lowest energy contributes to the entanglement entropy. Even for the cBSYK chain, such a configuration only contains rotations between \(\varphi_{1}\) and \(\varphi_{2}\), which can be parametrized by \(u_{R}=I\) and \(u_{L}=e^{i\theta\sigma_{y}}\). Consequently, the entanglement entropy shows the same critical behavior as in Majorana SYK models to leading order in the \(1/N\) expansion. We expect the signature of charge conservation to show up at the next order in \(1/N\).
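The role of \(\lambda_{2}>0\) can also be seen in a small numerical experiment. The sketch below (illustrative only; the mass term and couplings are made up, and the configuration is restricted to real, spatially uniform \(\boldsymbol{\varphi}=\varphi_{1}\mathbf{I}+\varphi_{2}\boldsymbol{\sigma}_{x}\)) minimizes the resulting potential and finds that one of the two components indeed relaxes to zero.

```python
# Sketch only: uniform-configuration toy version of the potential built from
# the mass and quartic terms of Eq. (82), evaluated via the traces of Eq. (83).
import numpy as np
from scipy.optimize import minimize

I2 = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
r, lam1, lam2 = -1.0, 0.2, 0.3          # hypothetical mass term (2*zeta - 2*V < 0) and couplings

def energy(p):
    varphi = p[0] * I2 + p[1] * sx      # real configuration, so varphi^dagger = varphi^T
    m = varphi.T @ varphi
    return r * np.trace(m) + lam1 * np.trace(m) ** 2 + lam2 * np.trace(m @ m)

best = min((minimize(energy, x0) for x0 in ([0.6, 0.6], [1.0, 0.1], [0.1, 1.0])),
           key=lambda res: res.fun)
print("minimizing (varphi_1, varphi_2):", np.round(best.x, 3))
# for lambda_2 > 0 one of the two components relaxes to (numerically) zero
```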
Finally, we comment on the interaction effects. Perturbatively, the interaction \(J\) contributes a term
\[\delta S=\sum_{ww^{\prime}}\int dxdt\ \lambda_{q}\phi^{\frac{q}{2}}|\varphi_{ww^{ \prime}}|^{q}. \tag{84}\]
For \(q\geqslant 4\), this term breaks \(U(2)_{L/R}\) to \(Z_{2}\times U(1)\times U(1)\). In this case, no Goldstone modes are excited in the computation of the entanglement entropy, which leads to a volume-law entangled phase.
## 4 Discussions
In this work, we studied the density dependence of the late-time information scrambling and entanglement dynamics in the cBSYK models. Using the Wightman function on a perturbed background, we derived the effective theory between fermions and scramblons for a single-site cBSYK model, which gives analytical results for the late-time OTOC and operator size distribution to leading order in \(1/N\). We then computed the entanglement velocity of the cBSYK chain using perturbative calculations and derived the effective theory for measurement induced transitions in terms of the matrix field \(\mathbf{\varphi}\).
There are many interesting generalizations of the current work. Firstly, it is interesting to consider the late-time information scrambling of cBSYK chains [73]. This requires a complete solution of (54) with a generalization of (23). Secondly, to leading order in \(1/N\), the entanglement entropy for the cBSYK chain with repeated measurements shows the same scaling as its Majorana counterpart. Recently, there has been a proposal [74] to consider a charge-weighted
version of the entanglement entropy. Similar ideas may be useful for measurement induced transitions with charge conservation. Finally, it is interesting to study the information scrambling and entanglement dynamics in models with non-Abelian symmetries.
## Acknowledgment
We thank Xiao Chen, Yingfei Gu, Shao-Kai Jian, and Chunxiao Liu for helpful discussions on related topics.
|
2307.07061 | PRyMordial: The First Three Minutes, Within and Beyond the Standard
Model | In this work we present PRyMordial: A package dedicated to efficient
computations of observables in the Early Universe with the focus on the
cosmological era of Big Bang Nucleosynthesis (BBN). The code offers fast and
precise evaluation of BBN light-element abundances together with the effective
number of relativistic degrees of freedom, including non-instantaneous
decoupling effects. PRyMordial is suitable for state-of-the-art analyses in the
Standard Model as well as for general investigations into New Physics active
during BBN. After reviewing the physics implemented in PRyMordial, we provide a
short guide on how to use the code for applications in the Standard Model and
beyond. The package is written in Python, but more advanced users can
optionally take advantage of the open-source community for Julia. PRyMordial is
publicly available on GitHub. | Anne-Katherine Burns, Tim M. P. Tait, Mauro Valli | 2023-07-13T20:59:01Z | http://arxiv.org/abs/2307.07061v2 | # PRyMordial: The First Three Minutes, Within and Beyond the Standard Model
###### Abstract
In this work we present PRyMordial: A package dedicated to efficient computations of observables in the Early Universe with the focus on the cosmological era of Big Bang Nucleosynthesis (BBN). The code offers fast and precise evaluation of BBN light-element abundances together with the effective number of relativistic degrees of freedom, including non-instantaneous decoupling effects. PRyMordial is suitable for state-of-the-art analyses in the Standard Model as well as for general investigations into New Physics active during BBN. After reviewing the physics implemented in PRyMordial, we provide a short guide on how to use the code for applications in the Standard Model and beyond. The package is written in Python, but more advanced users can optionally take advantage of the open-source community for Julia. PRyMordial is publicly available on GitHub.
Big Bang Nucleosynthesis -- Early Universe -- New Physics
###### Contents
* 1 Introduction
* 2 Physics in PRyMordial
* 2.1 Thermodynamics with Non-instantaneous Decoupling
* 2.2 Neutron Freeze Out beyond the Born Approximation
* 2.3 BBN Thermonuclear Reactions
* 3 How to Use PRyMordial
* 3.1 Structure of the Code and Hello, World!
* 3.2 SM examples: the PDG Plot and Monte Carlo Analysis
* 3.3 NP examples: New Interacting Sectors and BBN
* 4 Outlook
* A How to Install PRyMordial
* B Nuclear Processes in PRyMordial
## 1 Introduction
The snapshot of the Universe approximately three minutes after the Big Bang [1] can be regarded as one of the most remarkable predictions of the Standard Model (SM) of Particle Physics in conjunction with the (so-called) concordance model of Cosmology, \(\Lambda\)CDM.
While a theory for the origin of chemical elements based on an epoch of high-energy densities and pressures was already formulated by Alpher, Bethe, and Gamow more than seventy years ago [2], the discovery of the quasi-black body spectrum of the Cosmic Microwave Background (CMB) [3; 4] paved the road for the modern formulation of the theory of Big Bang Nucleosynthesis (BBN) [5]. Indeed, thanks to the CMB, we know today that the SM particle species were in a thermal state during an epoch dominated by radiation. Extrapolating this cosmological picture back in time when the Universe was not yet transparent to light, within the standard lore of Cosmology and of Particle Physics we can accurately predict [6; 7; 8; 9; 10; 11]:
* The evolution of the number of relativistic degrees of freedom until recombination, \(N_{\rm eff}\);
* The cosmological abundance of light nuclides synthesized from protons and neutrons, as a function of the number density of baryons relative to photons, \(\eta_{B}\equiv n_{B}/n_{\gamma}\).
Regarding _1)_, given the current knowledge of neutrino oscillations [12], \(N_{\rm eff}\) is predicted in the SM via solving a set of integro-differential equations for the neutrino density matrix at finite temperature [13], yielding \(N_{\rm eff}^{\rm SM}=3.044\) with an error estimated to be below the level of per mil [14; 15; 16].
Concerning _2)_, a detailed analysis of CMB anisotropies in temperature and polarization currently constrains \(\eta_{B}\) with 1% accuracy or better [17], anchoring the primordial asymmetry between baryons
and anti-baryons to be \(\mathcal{O}(10^{-10})\)[18]. Assuming no large asymmetry in the lepton sector as well, see e.g. [19], standard BBN turns into an extremely predictive theory, often dubbed "parameter free".
On the observational side, multi-wavelength astronomical campaigns have been able to provide rich spectroscopic information about emission and absorption lines of gas clouds in metal-poor extragalactic environments, see e.g. [20; 21; 22; 23], bringing us today to a percent-level determination of the abundance of primordial deuterium and helium-4. Given the predictions of the standard theory and the precision of those measurements, together with the strong constraints on the thermal history provided by the CMB [24; 25], the study of the Early Universe around the BBN epoch offers unique insight on New Physics (NP) [26; 27; 28; 29; 30; 31; 32; 33; 34; 35]. Looking at the exciting prospects of next-gen CMB experiments [36; 37; 38], and at the expected future sensitivity in the field of observational astronomy [39; 40], it is therefore very timely to have tools at our disposal that allow for numerically efficient, yet precise computations that test the SM in the Early Universe, and that are flexible enough to broadly explore NP scenarios.
A few packages have already been developed to accurately investigate the BBN era. A publicly available version of the historical code of Ref. [41] (whose most up-to-date version is currently adopted by the PDG [42]) is described in [43]. At the same time, publicly released codes dedicated to state-of-the-art BBN analyses are also available; in particular:
* PArthENoPE[44; 45; 46] is a code originally written in FORTRAN 77 that in its latest re-incarnation also enjoys a graphical user interface; it offers a very efficient evaluation of BBN light-element abundances based on fitting formulae worked out for both weak rates and nuclear cross sections.
* PRIMAT[47; 48] is an user-friendly Mathematica package containing all the inputs and ingredients for an ab-initio computation of neutron freeze-out and of weak rates; moreover, it has tabulated the largest nuclear network at hand in order to track the abundance of heavy nuclides as well.
Both codes include a few built-in options to account for the study of some specific NP scenarios. AlterBBN[49; 50] is a C++ open-source software developed for broad investigation of Physics Beyond the SM (BSM) in the BBN era. However, while allowing for fast numerical evaluations, AlterBBN does not implement the same level of detail and accuracy in its computation of light primordial abundances compared to PArthENoPE or PRIMAT. In fact, these two packages may currently represent the best tools to perform precision cosmological analyses [24; 51].
While powerful and flexible, these public codes nevertheless suffer from a few limitations and/or missing features. A precision tool for Cosmology, able to handle BSM Particle Physics should:
* Allow for the evaluation of the physics of the thermal bath in a fast but precise way, following, e.g., the approach highlighted in [52; 53; 34], and implemented in the standalone code NUDEC_BSM;
* Interconnect a first-principle computation of the thermal background with an ab-initio precise calculation of the neutron-to-proton (\(n\leftrightarrow p\)) conversion, as the one implemented in PRIMAT[47];
* Render easily accessible exploration of the impact of the input parameters characterizing the BBN era and the uncertainties in the set of thermonuclear rates on the basis of more model-dependent/more data-driven approaches available in literature, see [54; 55; 35];
* Adopt a user-friendly, modern programming language compatible with numerical efficiency of the computations, while smoothly interfacing with standard libraries for statistically advanced analyses like Monte Carlo (MC) ones, see e.g. [56; 57].
In this work, we introduce PRyMordial: A new public tool for the community of Particle Physics and Cosmology that precisely aims at filling in the above gaps for precision studies on the physics of the Early Universe both within and beyond the SM. The package is written and runs entirely with Python 3. Moreover, for the most advanced users, the resolution of the set of stiff differential equations for the BBN nuclear-reaction network can be further optimized with the optional switch to some routines of the SciML kit [58], the open-source software for scientific machine learning in Julia.
Our article is organized as follows: In section 2 we present all the key ingredients of the physics implemented in PRyMordial; In section 3 we discuss in detail how PRyMordial is structured and we provide several examples on the usage of the code; In section 4 we comment on future directions for further development of PRyMordial along with possible interesting applications. We finally collect in Appendix A a set of instructions for the installation of the package and its dependencies.
## 2 Physics in PRyMordial
In this section we present the key equations implemented in PRyMordial, which serve both as a reference for the physics contained in the code and as a guideline for its use (see section 3). We organize the presentation in three distinct topics: the thermodynamics of the plasma; the weak rates for \(n\leftrightarrow p\) conversion; and the set of thermonuclear rates for the key reactions responsible for the non-zero primordial abundance of deuterium, helium-3 and -4, and lithium-7.
### Thermodynamics with Non-instantaneous Decoupling
The description of the thermal background during the BBN era in \(\Lambda\)CDM follows from an isotropic, homogeneous Universe modelled by the Einstein field equation:
\[H^{2}\equiv\left(\frac{d\log a}{dt}\right)^{2}=\frac{8\pi}{3M_{\rm Pl}^{2}} \,\rho_{\rm tot}\, \tag{1}\]
where \(H\) is the Hubble rate of space-time expansion, \(a\) the scale factor of the Friedmann-Lemaitre-Robertson-Walker metric, \(\rho_{\rm tot}\) the total energy density present in the Universe, and \(M_{\rm Pl}\equiv 1/\sqrt{G_{\rm N}}\), with \(G_{\rm N}\) the Newton gravitational constant.
Within an axiomatic characterization of the Early Universe provided by _local thermodynamic equilibrium_[59, 60], SM species are described according to the spin-statistics theorem and the temperature \(T_{\gamma}\) of the thermal bath (provided chemical potentials \(\mu\) can be neglected, i.e., \(\mu/T_{\gamma}\ll 1\)). Standard BBN takes place during radiation domination, and thus features contributions to \(\rho_{\rm tot}\) largely from relativistic species, i.e. \(\rho_{\rm tot}\simeq\rho_{\rm rad}\propto T_{\gamma}^{4}\). This observation dramatically simplifies the investigation of BBN, allowing one to decouple the study of the thermal background from the nucleon dynamics. Indeed, after the QCD crossover takes place [61] protons and neutrons are already non-relativistic, i.e. they are highly Boltzmann-suppressed well before the MeV scale temperatures characteristic of the BBN era.
Hence, for temperatures \(T_{\gamma}\lesssim{\cal O}(10)\) MeV, one can accurately describe \(\rho_{\rm tot}\) in the SM as a sum of just three contributions:
\[\rho_{\gamma}=\frac{\pi^{2}}{15}\,T_{\gamma}^{4}\,,\qquad\rho_{e^{\pm}}=\frac{2}{\pi^{2}}T_{\gamma}^{4}\,\int_{x_{e}}^{\infty}d\tilde{x}\,\frac{\tilde{x}^{2}\sqrt{\tilde{x}^{2}-x_{e}^{2}}}{\exp(\tilde{x})+1}\,,\qquad\rho_{\nu,{\rm tot}}=3\times\frac{7\pi^{2}}{120}\,T_{\nu}^{4}\, \tag{2}\]
where \(x_{e}\equiv m_{e}/T_{\gamma}\) and we distinguish the temperature of the electron-positron-photon system, \(T_{\gamma}\), from that of neutrinos, \(T_{\nu}\).1 Indeed, while the initial condition \(T_{\nu}=T_{\gamma}\) must hold at early times for the two systems to be in thermal (more precisely, in chemical and kinetic) equilibrium, around the MeV scale neutrinos are expected to freeze out from the thermal bath as weakly-interacting relativistic species [63]. Neglecting tiny departures from a Fermi-Dirac distribution in \(\nu\) phase space, one can study the evolution of the two systems according to the momentum-integrated Boltzmann equations:
Footnote 1: While \(T_{e}=T_{\gamma}\) follows from \(e^{\pm}\) being tightly coupled to photons via fast QED processes, the approximation underlying \(T_{\nu}\), namely \(T_{\nu_{e}}\simeq T_{\nu_{\mu}}\simeq T_{\nu_{\tau}}\), can be motivated by the active flavor mixing of \(\nu\) oscillations at \(T_{\gamma}\) of a few MeV [62].
\[(\rho^{\prime}_{\gamma}+\rho^{\prime}_{e^{\pm}})\,\frac{dT_{\gamma}}{dt} = -4H\,\rho_{\gamma}-3H(\rho_{e^{\pm}}+p_{e^{\pm}})+\delta C_{e^{\pm }}\, \tag{3}\] \[\rho^{\prime}_{\nu,\rm tot}\,\frac{dT_{\nu}}{dt} = -4H\,\rho_{\nu,\rm tot}+\delta C_{\nu}\,\]
with \({}^{\prime}\equiv d/dT\), \(p\) the pressure density (equal to \(\rho/3\) for a relativistic species), \(\delta C\) the (momentum-integrated) collision term, and where we have conveniently traded energy densities for temperatures in light of Eq. (2). Due to energy-momentum conservation, the sum over all \(\delta C\)s must vanish, so that one recovers the continuity equation for the total energy density of the Universe:
\[\frac{d\rho_{\rm tot}}{dt}+3H(\rho_{\rm tot}+p_{\rm tot})=0. \tag{4}\]
In the SM, where Eq. (3) holds, such a constraint implies: \(\delta C_{\nu}=-\delta C_{e^{\pm}}\). The collision term \(\delta C_{\nu}\) has been evaluated in [63] under Maxwell-Boltzmann approximation, nicely refined in [52; 53] taking into account relativistic corrections as well as finite mass effects from \(m_{e}\neq 0\), and more recently recomputed independently in [34]. Including finite temperature QED corrections to the electromagnetic plasma [64], one can solve the system of coupled differential equations in Eq. (3), to find \(T_{\gamma}(t)\), \(T_{\nu}(t)\), and, as a byproduct, \(T_{\nu}(T_{\gamma})\).2 Such a treatment naturally includes non-instantaneous decoupling effects, and allows one to perform a numerically fast, but accurate prediction of the effective number of relativistic degrees of freedom from first principles, yielding (in the SM) at \(T_{\gamma}\ll\) MeV:
Footnote 2: In the current version of PRyMordial we adopt the computation of \(\delta C_{\nu}\) as well as the next-to-leading (NLO) QED corrections to the electromagnetic pressure of the plasma directly from the numerical results tabulated in NUDEC_BSM [53].
\[N_{\rm eff}\equiv\frac{8}{7}\left(\frac{11}{4}\right)^{4/3}\left(\frac{\rho_{ \rm rad}-\rho_{\gamma}}{\rho_{\gamma}}\right)=3.044\, \tag{5}\]
while also opening up novel explorations of BSM physics in the Early Universe [33; 34; 53].3
Footnote 3: Eq. (3) can be easily generalized to include new sectors. This contrasts with typical existing BBN codes which compute the thermodynamic background by interpolating the tabulated result of the (numerically intensive) integro-differential Boltzmann equation, solved for the neutrino phase-space density in the SM.
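For orientation, the toy evaluation below (a sketch, not code extracted from PRyMordial) shows how the definition in Eq. (5) gives exactly \(N_{\rm eff}=3\) in the instantaneous-decoupling limit \(T_{\nu}/T_{\gamma}=(4/11)^{1/3}\); the non-instantaneous treatment described above is what shifts this value to 3.044.

```python
# Sketch only: Eq. (5) with three thermal neutrino species and photons.
def N_eff(Tnu_over_Tgamma, n_nu=3):
    rho_nu_over_rho_gamma = n_nu * (7.0 / 8.0) * Tnu_over_Tgamma**4
    return (8.0 / 7.0) * (11.0 / 4.0) ** (4.0 / 3.0) * rho_nu_over_rho_gamma

print(N_eff((4.0 / 11.0) ** (1.0 / 3.0)))   # -> 3.0 in the instantaneous-decoupling limit
```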
Based on these results, one can also easily evaluate the relic density of neutrinos (neglecting phase space spectral distortions). From the CMB we know the photon temperature today is \(T_{\gamma,0}=0.2348\) meV; plugging this value into the solution of Eq. (3) yields the temperature \(T_{\nu,0}=0.1682\,\)meV, corresponding to the cosmological abundance of SM neutrinos:
\[\Omega_{\nu}^{\rm(rel)}h^{2} = \left(\frac{7\pi^{2}}{120}\,T_{\nu,0}^{4}\right)\Big{/}\left( \frac{3}{8\pi}\frac{M_{\rm Pl}^{2}H_{0}^{2}}{h^{2}}\right)=5.70\times 10^{-6}\, \tag{6}\] \[\Omega_{\nu}^{\rm(nr)}h^{2} = \left(\frac{3}{2}\frac{\zeta(3)}{\pi^{2}}T_{\nu,0}^{3}\,\sum_{i} m_{\nu_{i}}\right)\Bigg{/}\left(\frac{3}{8\pi}\frac{M_{\rm Pl}^{2}H_{0}^{2}}{h^{2}} \right)=\sum_{i}\frac{m_{\nu_{i}}}{93.03\,{\rm eV}}\,\]
which reproduces the relic neutrino abundance computed, e.g., in Ref. [65] to the per mil level.
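The numbers quoted in Eq. (6) can be reproduced with a few lines of Python. The sketch below is an independent cross-check (not extracted from PRyMordial), with standard values of the physical constants inserted by hand.

```python
# Sketch only: numerical evaluation of Eq. (6).
import numpy as np

hbar    = 6.582119569e-16            # eV s
M_Pl    = 1.220890e28                # Planck mass 1/sqrt(G_N), in eV
H0_h    = 100.0 * 3.240779e-20       # (100 km/s/Mpc) in 1/s
H0_h_eV = H0_h * hbar                # same, in eV
T_nu0   = 1.682e-4                   # eV (i.e. 0.1682 meV, as quoted above)
zeta3   = 1.2020569

rho_crit_h2 = 3.0 / (8.0 * np.pi) * M_Pl**2 * H0_h_eV**2      # critical density / h^2, eV^4

Omega_rel_h2 = (7.0 * np.pi**2 / 120.0) * T_nu0**4 / rho_crit_h2
m_per_Omega  = rho_crit_h2 / (1.5 * zeta3 / np.pi**2 * T_nu0**3)

print(f"Omega_nu^(rel) h^2          = {Omega_rel_h2:.2e}")    # ~5.7e-6
print(f"sum m_nu per unit Omega h^2 = {m_per_Omega:.1f} eV")  # ~93 eV
```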
In order to obtain \(T_{\gamma}(t)\) and \(T_{\nu}(t)\) from Eq. (3), we have made use of both Eq. (1) and Eq. (2). At this point, to complete the study of the thermodynamic background, we must extract the scale factor \(a\) as a function of time \(t\) and temperature \(T_{\gamma}\). This can be accomplished by applying (again) the notion of local thermodynamic equilibrium, which allows one to introduce the entropy density for each species \(i\) as: \(s_{i}=(\rho_{i}+p_{i}-\mu_{i}\,n_{i})/T_{i}\), where \(n_{i}\) is the number density of the species with associated chemical potential \(\mu_{i}\).
For negligible chemical potentials, the total entropy density of the Universe \(s_{tot}\) per comoving volume must be conserved as a consequence of energy-momentum conservation, Eq. (4). Then, during radiation domination \(s_{tot}\) roughly scales as \(T_{\gamma}^{3}\), underlying the approximate relation \(a\propto 1/T_{\gamma}\). Nevertheless, even under the assumption of \(\mu_{i}/T_{i}\ll 1\), the entropy of each species is generally not separately conserved due to heat exchanges related to the interactions with other species. The Boltzmann equation for \(s_{i}\) generally follows (see, e.g., the discussion in Refs. [47; 66]):
\[\frac{ds_{i}}{dt}+3Hs_{i}=\frac{\delta C_{i}}{T_{i}}-\frac{\mu_{i}}{T_{i}} \left(\frac{dn_{i}}{dt}+3Hn_{i}\right)\, \tag{7}\]
where the first collision term (divided by the temperature) is the one appearing in the Boltzmann equation for the density \(\rho_{i}\), while the second collision term has been rewritten using the Boltzmann equation for the number density \(n_{i}\).4 In the SM, in the limit5\(\mu_{e}/T_{\gamma}\ll 1\), we use Eq. (7) for the electromagnetic bath to pin down the relation between \(a\) and \(T_{\gamma}\); with \(\bar{s}_{\rm pl}\equiv(s_{\gamma}+s_{e^{\pm}})/T_{\gamma}^{3}\), we get:
Footnote 4: Notice that in absence of interactions for the species \(i\), entropy conservation can be guaranteed either by a negligible chemical potential, \(\mu_{i}\ll T_{i}\) or by number density conservation per comoving volume, \(d(n_{i}a^{3})/dt=0\).
Footnote 5: \(\mu_{e}/T_{\gamma}\ll 1\) is justified in the SM by \(\eta_{B}\sim\mathcal{O}(10^{-10})\) and the condition of electric charge neutrality in the Early Universe.
\[\frac{1}{(T_{\gamma}a)^{3}}\frac{d\left(\bar{s}_{\rm pl}T_{\gamma}^{3}a^{3} \right)}{d\ln a}=-\frac{\delta C_{\nu}}{HT_{\gamma}^{4}}\equiv-\mathcal{N}_{ \nu}\ \ \Leftrightarrow\ \ a(T_{\gamma})=a_{0}\exp\left(-\int_{T_{\gamma,0}}^{T_{\gamma}}\frac{dT}{ T}\frac{3\bar{s}_{\rm pl}+T\,\bar{s}_{\rm pl}^{\prime}}{3\bar{s}_{\rm pl}+ \mathcal{N}_{\nu}}\right). \tag{8}\]
Knowing all the thermodynamical quantities as a function of \(T_{\gamma}\) in the integrand above, Eq. (8) allows one to extract \(a(T_{\gamma})\) up to the scale-factor value of today, \(a_{0}\), customarily defined as \(1\). Note that for \(T_{\gamma}\lesssim m_{e}\) one has \(\bar{s}_{\rm pl}^{\,\prime}=0\), and taking the limit \(\mathcal{N}_{\nu}\to 0\), the expected scaling set by \(d(s_{\gamma}a^{3})/dt=0\) is easily recovered. The solution in Eq. (8) precisely tracks the relation between the scale factor and \(T_{\gamma}\) in the case of non-instantaneous decoupling of neutrinos. While in the SM these effects are tiny (since \(\mathcal{N}_{\nu}/3\ll\bar{s}_{\rm pl}\)), they could become non-negligible in a BSM scenario.
It is worth noting that given \(T_{\gamma}(t)\) from the solution of Eq. (3) and \(a(T_{\gamma})\) from Eq. (8), one obtains \(a(t)\) as a byproduct, which allows to assess the evolution of the number density of baryons in \(t\) or \(T_{\gamma}\) during the BBN era, since by definition: \(n_{B}\propto 1/a^{3}\).
### Neutron Freeze Out beyond the Born Approximation
Shortly after hadrons form, neutrons and protons are non-relativistic species that do not contribute appreciably to the total energy budget stored in the thermal bath. Nevertheless, their abundance is eventually responsible for the tiny fraction of light primordial elements relative to hydrogen which are observable today in pristine astrophysical environments.
According to local thermodynamic equilibrium, the relative number density of nucleons is initially given by the Maxwell-Boltzmann distribution:
\[\left(\frac{n_{\rm n}}{n_{\rm p}}\right)\Big{|}_{T_{\gamma}\,\gg\,{\rm MeV}}= \left(\frac{m_{\rm n}}{m_{\rm p}}\right)^{3/2}\exp\left(-\frac{\cal Q}{T_{ \gamma}}-\frac{\mu_{\cal Q}}{T_{\nu}}\right)\,, \tag{9}\]
where \({\cal Q}=m_{\rm n}-m_{\rm p}\), \(\mu_{\cal Q}=\mu_{\rm n}-\mu_{\rm p}\), \(m_{\rm n,p}\) and \(\mu_{\rm n,p}\) are the mass and chemical potential of neutrons and protons. For clarity, we have used \(T_{\nu}=T_{\gamma}\) (valid for temperatures well above MeV) in the \({\cal Q}\) term, but retain \(T_{\nu}\) explicitly in the \(\mu_{\cal Q}\) term. Assuming \(\mu_{\rm n}\simeq\mu_{\rm p}\) (e.g. a negligible contribution from lepton chemical potentials), Eq. (9) implies that at equilibrium \(n_{\rm n}\simeq n_{\rm p}\). Indeed, fast electroweak processes efficiently convert \(n\leftrightarrow p\):
\[\Gamma_{\rm n\,\to\,p} \equiv \Gamma(n\,e^{+}\,\to\,p\,\bar{\nu})+\Gamma(n\,\bar{\nu}\,\to\,p\,e ^{-})+\Gamma(n\,\to\,p\,e^{-}\,\bar{\nu})\gg H\,,\] \[\Gamma_{\rm p\,\to\,n} \equiv \Gamma(p\,e^{-}\,\to\,n\,\bar{\nu})+\Gamma(p\,\bar{\nu}\,\to\,n\,e ^{+})+\Gamma(p\,e^{-}\,\bar{\nu}\,\to\,n)\gg H\,,\]
and govern the Boltzmann equations for the nucleon yields \(Y_{\rm n,p}\equiv n_{\rm n,p}\,/\,n_{B}=n_{\rm n,p}\,/\,(n_{\rm n}+n_{\rm p})\):
\[\frac{dY_{\rm n}}{dt} = \Gamma_{\rm p\,\to\,n}\,Y_{\rm p}-\Gamma_{\rm n\,\to\,p}\,Y_{\rm n }\, \tag{10}\] \[\frac{dY_{\rm p}}{dt} = \Gamma_{\rm n\,\to\,p}\,Y_{\rm n}-\Gamma_{\rm p\,\to\,n}\,Y_{\rm p }\,\]
whose equilibrium solutions: \(Y_{\rm n}=1-Y_{\rm p}=\Gamma_{\rm p\,\to\,n}/(\Gamma_{\rm p\,\to\,n}+\Gamma_{ \rm n\,\to\,p})\simeq 1/2\), are in agreement with Eq. (9). These reactions guarantee chemical equilibrium among the involved species, implying \(\mu_{\cal Q}\simeq-\mu_{\nu}\). Eq. (9) thus demonstrates that a primordial non-zero lepton asymmetry in the neutrino sector [67, 68] can impact the initial conditions for BBN by altering the neutron-to-proton ratio, with notable cosmological consequences [35, 69].
At temperatures close to neutrino decoupling, \(n\leftrightarrow p\) conversion falls out of equilibrium, freezing out the neutron-to-proton ratio to \(\sim 1/6\) (in the SM), up to finite neutron lifetime effects [59, 60]. The weak rates for neutron freeze out require the evaluation of an involved multi-dimensional phase-space integral: e.g. for \(n\,e^{+}\to p\,\bar{\nu}\) (and similarly for the others) [70]:
\[Y_{\rm n}\,\Gamma(n\,e^{+}\,\to\,p\,\bar{\nu})=\frac{16\pi^{4}}{n_{B}}\int d \Pi_{\rm n}d\Pi_{e}d\Pi_{\rm p}d\Pi_{\nu}\,\delta^{(4)}(P_{\rm n}+P_{e}-P_{\rm p }-P_{\nu})\,|{\cal M}|^{2}\,f_{\rm n}f_{e}(1-f_{\rm p})(1-f_{\nu}), \tag{11}\]
where \(d\Pi_{i}\) and \(P_{i}\) are the Lorentz-invariant phase-space element and 4-momentum of the particle \(i\), \(f_{i}\) is the relativistic thermal distribution of the species \(i\) in the rest frame of the thermal bath, and \({\cal M}\) is the full matrix element of the process summed over initial and final spins. The latter can be computed from the weak effective theory for \(\beta\) decay [71]:
\[{\cal L}_{\rm F}=-\frac{2G_{\rm F}}{\sqrt{2}}\,V_{\rm ud}\,\,\bar{\nu}(x)\, \gamma_{\mu}\,e_{L}(x)\,\left\{\,\bar{n}(x)\gamma^{\mu}(1-g_{\rm A}\,\gamma_{5 })p(x)+\frac{\kappa}{2m_{\rm N}}\partial_{\nu}\left[\bar{n}(x)\,\sigma^{\mu \nu}\,p(x)\right]\,\right\}+h.c.\,, \tag{12}\]
where \(G_{\rm F}\) is the Fermi constant [42], \(V_{\rm ud}\) corresponds to the Cabibbo angle [72], \(g_{\rm A}\) and \(\kappa\) are the axial-current and weak-magnetism constant of the nucleon of mass \(m_{\rm N}\)[73], and \(\sigma_{\mu\nu}\equiv i\,(\gamma_{\mu}\gamma_{\nu}-\gamma_{\nu}\gamma_{\mu})/2\). The computation of \(|{\cal M}|^{2}\) can be found in detail in Appendix B of Ref. [47] (see also [70, 74]).
While expressions like Eq. (11) can be reduced to a five-dimensional integral in phase space by exploiting the symmetries of the problem, a dramatic simplification is obtained in the limit of infinite nucleon-mass at fixed \({\cal Q}\)[70, 74]. This is the so-called Born approximation, in which the kinetic
energy of the 'infinitely' heavy neutrons and protons may be neglected, leading to the simplification: \(|{\cal M}|^{2}=32\,G_{\rm F}^{2}V_{\rm ud}^{2}(1+3g_{\rm A}^{2})E_{e}E_{\nu}E_{ \rm p}E_{\rm n}\,\). In that limit the \(n\leftrightarrow p\) rates read:
\[\Gamma_{\rm n\to p}^{\infty} = \widetilde{G}_{\rm F}^{2}\int_{0}^{\infty}dE_{e}\,E_{e}\,\sqrt{E_ {e}^{2}-m_{e}^{2}}\,(E_{\nu}^{-})^{2}\left[f_{\nu}(E_{\nu}^{-})f_{e}(-E_{e})+f _{\nu}(-E_{\nu}^{-})f_{e}(E_{e})\right]\, \tag{13}\] \[\Gamma_{\rm p\to n}^{\infty} = \widetilde{G}_{\rm F}^{2}\int_{0}^{\infty}dE_{e}\,E_{e}\,\sqrt{E_ {e}^{2}-m_{e}^{2}}\,(E_{\nu}^{+})^{2}\left[f_{\nu}(E_{\nu}^{+})f_{e}(-E_{e})+f _{\nu}(-E_{\nu}^{+})f_{e}(E_{e})\right]\,\]
where \(\widetilde{G}_{\rm F}\equiv G_{\rm F}V_{\rm ud}\sqrt{(1+3g_{\rm A}^{2})/(2 \pi^{3})}\) and \(E_{\nu}^{\pm}=E_{e}\pm{\cal Q}\). Eq. (13) yields rates that generally depend on both background temperatures and chemical potentials (i.e. \(T_{\gamma},T_{\nu}\) and \(\mu_{\nu}\)). For \(T_{\nu}=T_{\gamma}\) (and negligible chemical potentials) detailed balance follows as: \(\Gamma_{\rm p\to n}^{\infty}/\Gamma_{\rm n\to p}^{\infty}=\exp(-{\cal Q}/T_{ \gamma})\). The dimensionful factor \(\widetilde{G}_{\rm F}\) depends on \(V_{\rm ud}\), \(g_{\rm A}\), and \(G_{\rm F}\), whose value is precisely determined by the muon lifetime. However, this factor is often more conveniently extracted from neutron decay in the vacuum, since in the SM:
\[\tau_{\rm n}^{-1}=\widetilde{G}_{\rm F}^{2}\,m_{e}^{5}\,{\cal F}_{\rm n}\, \tag{14}\]
where \({\cal F}_{\rm n}\) incorporates a phase-space statistical factor for the neutron decay at zero temperature [75] plus electroweak radiative corrections [76]. For a precise calculation of \({\cal F}_{\rm n}\), see the very recent reassessment in Ref. [77] and references therein. This approach allows one to trade the combination \(V_{ud}^{2}(1+3g_{A}^{2})\) for the measured \(\tau_{\rm n}\).6 Using Eq. (14), in PRyMordial one can choose to adopt either a normalization of the weak rates based on the determination of the neutron lifetime, or one involving the knowledge of the modified Fermi constant \(\widetilde{G}_{\rm F}\).
Footnote 6: Any treatment must confront both the _neutron lifetime puzzle_, i.e. the tension between “bottle” [78] and “beam” [79] measurements of \(\tau_{\rm n}\), see, e.g., [80]; and the _Cabibbo angle anomaly_[81], i.e. the extraction of \(V_{\rm ud}\) from super-allowed \(\beta\) decays and \(V_{\rm us}\) from semi-leptonic decays versus unitarity in the Cabibbo-Kobayashi-Maskawa matrix [72].
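As a rough numerical illustration of Eq. (13) (this is not the implementation used in PRyMordial), the two Born rates can be transcribed directly with SciPy. The constants below are indicative values, zero chemical potentials are assumed, and the integration effectively starts at \(E_{e}=m_{e}\), where the electron momentum vanishes:

```
# Illustrative transcription of the Born rates of Eq. (13); indicative constants.
import numpy as np
from scipy.integrate import quad

me, Q = 0.511, 1.293                      # electron mass, n-p mass gap [MeV]
GF, Vud, gA = 1.1664e-11, 0.974, 1.27     # G_F [MeV^-2], V_ud, axial coupling
GFt2 = (GF * Vud)**2 * (1.0 + 3.0 * gA**2) / (2.0 * np.pi**3)

def f(E, T):
    # Fermi-Dirac occupation with zero chemical potential; f(-E) = 1 - f(E)
    return 1.0 / (1.0 + np.exp(E / T))

def born_rate(T_g, T_nu, sign):
    # sign = -1 reproduces Gamma_{n->p}, sign = +1 reproduces Gamma_{p->n}
    def integrand(Ee):
        Enu = Ee + sign * Q
        return Ee * np.sqrt(Ee**2 - me**2) * Enu**2 * (
            f(Enu, T_nu) * f(-Ee, T_g) + f(-Enu, T_nu) * f(Ee, T_g))
    return GFt2 * quad(integrand, me, 50.0)[0]

T = 1.0  # MeV
print(born_rate(T, T, +1) / born_rate(T, T, -1), np.exp(-Q / T))
# per the text, the two numbers agree when T_nu = T_gamma
```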
In the SM the Born approximation predicts a neutron freeze-out temperature slightly below 1 MeV. At smaller temperatures, the neutron-to-proton ratio is still affected by \(\beta\) decay until the Universe cools sufficiently to preclude photo-dissociation of deuterium: for a binding energy \(B_{\rm D}=2.2\) MeV, this happens at temperatures around \(B_{\rm D}/\log(1/\eta_{B})\sim\!0.1\) MeV [59, 60]. At that point, virtually all of the neutrons experience two-body nuclear reactions, ultimately resulting in their binding in helium-4, the most stable light element. As a result, the uncertainty on the Born-level theory prediction for helium-4 is only a few % (see Table 5 in [47]).
That said, the present percent-level inference of primordial helium-4 and deuterium [42] and the sub-percent target of future observational campaigns [39] demand the following refinements to Eq. (13):
* QED radiative corrections (in the vacuum) to the \(n\leftrightarrow p\) amplitudes of order \({\cal O}(\alpha_{\rm em})\) via virtual- and real-photon emission [82, 83, 84, 85] must be computed;
* Finite nucleon-mass effects and non-zero weak magnetism, which induce relative shifts in the weak rates of \(\Delta\Gamma/\Gamma\sim{\cal O}(10^{-2})\)[70, 74], must be taken into account;
* Finite-temperature effects [84, 86] must be evaluated for sub-percent accuracy.
PRyMordial implements all of these corrections, following the treatment in PRIMAT (see Appendix B of [47]), where particular care was taken to attempt to combine several existing state-of-the-art recipes for electroweak rates beyond the Born approximation.
It is worth noticing that in the context of the SM, the corrections to the Born rates due to the incomplete neutrino decoupling are only marginal [87, 88]. Nevertheless, NP could dramatically alter \(T_{\nu}(T_{\gamma})\), \(a(T_{\gamma})\) and \(a(t)\), and the departure from the standard value for the weak rates can impact the final BBN abundances in a non-negligible way [31, 33]. As a result, the approach undertaken in subsection 2.1 is particularly useful not only for the study of neutrino decoupling, but also for a careful assessment of the neutron-to-proton ratio in BSM scenarios.
### BBN Thermonuclear Reactions
Local thermodynamical equilibrium implies that at temperatures above neutron decoupling, a nuclear species \(i\) of atomic number \(Z_{i}\), mass number \(A_{i}\), spin \(s_{i}\), and binding energy \(B_{i}\) follows a Boltzmann distribution with internal degrees of freedom: \(g_{i}=2s_{i}+1\); mass: \(m_{i}=Z_{i}m_{\rm p}+(A_{i}-Z_{i})m_{\rm n}-B_{i}\,\); and chemical potential: \(\mu_{i}=Z_{i}\mu_{\rm p}+(A_{i}-Z_{i})\mu_{\rm n}\). In terms of the yield \(Y_{i}\equiv n_{i}/n_{B}\), this equilibrium distribution reads:
\[Y_{i}\big{|}_{T_{\gamma}\gtrsim{\rm MeV}}=g_{i}\,2^{(3A_{i}-5)/2}\,\pi^{(1-A_ {i})/2}\,\left(\zeta(3)\,\eta_{B}\right)^{A_{i}-1}\left(\frac{m_{i}\,T_{\gamma }^{A_{i}-1}}{m_{\rm p}^{Z_{i}}m_{\rm n}^{A_{i}-Z_{i}}}\right)^{3/2}Y_{\rm p}^{ Z_{i}}Y_{\rm n}^{A_{i}-Z_{i}}\exp\left(B_{i}/T_{\gamma}\right)\,, \tag{15}\]
where we made use of the relation: \(n_{B}/\eta_{B}=3\,\zeta(3)\,T_{\gamma}^{3}/(2\pi^{2})\). This expression holds for the nucleons (\(A_{\rm N}=1\), \(B_{\rm N}=0\)) themselves, and is consistent with Eq. (9). Importantly, it offers another handle on the estimate for the start of nucleosynthesis as the time at which the relative abundance of neutrons after freeze out becomes comparable to that of deuterium, as dictated by Eq. (15), again pointing to a temperature of about 0.1 MeV.
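For instance, evaluating Eq. (15) for deuterium (\(A_{\rm D}=2\), \(Z_{\rm D}=1\), \(g_{\rm D}=3\)) with indicative post-freeze-out yields gives a quick feeling for when the neutron and deuterium abundances become comparable; the following sketch is illustrative only and not part of the package:

```
# Illustrative use of Eq. (15): NSE deuterium yield relative to free neutrons.
# Assumed inputs: eta_B ~ 6.1e-10 and Y_p ~ 6/7, Y_n ~ 1/7 after freeze out.
import numpy as np

zeta3, etaB = 1.2020569, 6.1e-10
mp, mn, BD = 938.272, 939.565, 2.2246      # masses and binding energy [MeV]
mD, gD = mp + mn - BD, 3                   # deuterium: A = 2, Z = 1, spin 1

def YD_eq(T, Yp=6./7., Yn=1./7.):
    pref = gD * np.sqrt(2.0 / np.pi) * zeta3 * etaB
    return pref * (mD * T / (mp * mn))**1.5 * Yp * Yn * np.exp(BD / T)

for T in (0.5, 0.2, 0.1, 0.07):
    print(f"T = {T:4.2f} MeV :  Y_D / Y_n = {YD_eq(T) / (1./7.):.2e}")
# The ratio rises steeply and becomes O(1) slightly below 0.1 MeV, in line
# with the estimate B_D / log(1/eta_B) ~ 0.1 MeV quoted in the text.
```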
Starting from the initial conditions, abundances are determined by a network of Boltzmann equations that generalize Eq. (10) (see, e.g., Refs. [89, 90]) to include the relevant nuclei:
\[\frac{dY_{i}}{dt}=\sum_{R}{\cal S}_{i,R}\left[\,\Gamma^{(R)}_{\ldots\to i\, \ldots}\times\prod_{j}\left(\frac{Y_{j}^{{\cal S}_{j,R}}}{{\cal S}_{j,R!}} \right)-\Gamma^{(R)}_{i\,\ldots\to\ldots}\times\prod_{k}\left(\frac{Y_{k}^{{ \cal S}_{k,R}}}{{\cal S}_{k,R!}}\right)\,\right]\,, \tag{16}\]
where the sum \(R\) is performed over all reactions involving the nuclear species \(i\); \({\cal S}_{i,R}\) is the stoichiometric coefficient for the species \(i\) in the nuclear reaction \(R\); and the products \(j\) and \(k\) run over all of the initial and final states of the reaction with nuclear rate \(\Gamma^{(R)}_{\ldots\to i\,\ldots}\) or \(\Gamma^{(R)}_{i\,\ldots\to\ldots}\).
Given the range of energies characterizing the BBN era, the nuclear reaction rates of interest can be measured in the laboratory, and are often tabulated as \(\widetilde{\Gamma}_{i\ldots l\to j\ldots m}\equiv N_{A}^{{\cal S}_{i}\ldots{ \cal S}_{l}-1}\langle\sigma_{i\ldots l\to j\ldots m}\,v\rangle\)[91], where \(N_{A}\) is Avogadro's number (typically expressed in units of mol\({}^{-1}\)), and the velocity averaged cross section is obtained by weighting the appropriate cross section by the Maxwell-Boltzmann velocity distribution for the non-relativistic species (see e.g. Ref. [92] for a detailed description). By definition, for a given number-density rate \(\langle\sigma^{(R)}_{i\,\ldots\to\ldots}v\rangle\), the corresponding abundance rate \(\Gamma^{(R)}_{i\,\ldots\to\ldots}\) is:
\[\Gamma_{i\ldots l\to j\ldots m}=n_{B}^{{\cal S}_{i}\ldots{\cal S}_{l}-1} \langle\sigma_{i\ldots l\to j\ldots m}\,v\rangle=(n_{B}/N_{A})^{{\cal S}_{i} \ldots{\cal S}_{l}-1}\,\widetilde{\Gamma}_{i\ldots l\to j\ldots m}. \tag{17}\]
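As a simple illustration of Eq. (17) for a reaction with two incoming nuclei, the conversion from a tabulated rate to an abundance rate can be coded as follows; the numerical inputs are placeholders, not values from any compilation:

```
# Minimal sketch of Eq. (17); the tabulated rate and n_B are placeholder values.
N_A = 6.02214076e23                     # Avogadro's number [mol^-1]

def abundance_rate(NA_sigmav, n_B, n_initial=2):
    # NA_sigmav in cm^3 mol^-1 s^-1, n_B in cm^-3, n_initial = number of
    # incoming nuclei (sum of the initial-state stoichiometric coefficients)
    return (n_B / N_A)**(n_initial - 1) * NA_sigmav

print(abundance_rate(1.0e4, 1.0e19))    # hypothetical two-body example [s^-1]
```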
A priori, Eq. (16) includes the rates of both forward and reverse reactions in the evolution of the abundance of the nuclear species \(i\). Nevertheless detailed balance implies
\[\left(\frac{Y_{j}^{{\cal S}_{j}}\ldots Y_{m}^{{\cal S}_{m}}}{Y_{i}^{{\cal S}_{ i}}\ldots Y_{l}^{{\cal S}_{l}}}\right)\bigg{|}_{T_{\gamma}\gtrsim{\rm MeV}}=\ \frac{\langle\sigma_{i\ldots l\to j\ldots m}v\rangle/({\cal S}_{i}!\ldots{ \cal S}_{l}!)}{\langle\sigma_{j\ldots m\to i\ldots l}v\rangle/({\cal S}_{j}! \ldots{\cal S}_{m}!)}\, \tag{18}\]
since local thermodynamical equilibrium ensures that the forward and reverse reactions should balance. Thus, it is easy to evaluate the reverse reaction rates given the forward ones. It is customary to parameterize the relationship as:
\[\langle\sigma_{j\ldots m\to i\ldots l}v\rangle=\alpha_{R}\;T_{9}^{\beta_{R}}\, \exp(\gamma_{R}/T_{9})\,\langle\sigma_{i\ldots l\to j\ldots m}v\rangle\;\;,\; \text{with:}\;T_{9}\equiv T_{\gamma}/(10^{9}\,\text{K})\,, \tag{19}\]
where the constants \(\alpha_{R}\), \(\beta_{R}\), and \(\gamma_{R}\) for a given process \(R\) can be obtained, e.g., from the up-to-date nuclear database of Ref. [93] via Eq. (15).
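A minimal sketch of how Eq. (19) is used in practice; the coefficients below are placeholders, not entries of the PRyMordial database:

```
# Reverse rate from the tabulated forward one via Eq. (19); placeholder values.
import numpy as np

def reverse_rate(forward_sigmav, T9, alpha_R, beta_R, gamma_R):
    # T9 = T_gamma / (10^9 K)
    return alpha_R * T9**beta_R * np.exp(gamma_R / T9) * forward_sigmav

print(reverse_rate(1.0e4, 1.0, alpha_R=1.6, beta_R=1.5, gamma_R=-25.8))
```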
PRyMordial solves the general system of equations Eq. (16) following the strategy of Ref. [47], which breaks nucleosynthesis into three steps:
* We analyze \(n\leftrightarrow p\) conversion by solving Eq. (10) from an initial temperature of \(\mathcal{O}(10)\) MeV (and initial conditions from Eq. (9)) down to standard neutron freeze out, around MeV;
* We use the values of \(Y_{\text{n,p}}\) obtained from _1)_ together with Eq. (15) and evolve with a network comprised of the 18 key thermonuclear rates for the abundance of \(n,p\) together with all of the nuclides up to \(A=8\) and \(Z=5\)7 down to the temperature where deuterium photo-dissociation becomes inefficient, around 0.1 MeV; Footnote 7: In the current version of PRyMordial we include up to boron-8 in the nuclear chains, which is sufficient for an accurate prediction of lithium-7, likely the heaviest nuclide of interest when confronting BBN with observations [94]. For this purpose, the largest implemented set of thermonuclear rates comprises 63 reactions, see Appendix B.
* We further evolve the network with the full set of thermonuclear processes and with initial conditions given by the nuclide yields obtained in step _2)_, evolving the abundances of the aforementioned nuclides down to \(\mathcal{O}(\text{keV})\) (i.e., well below \(e^{\pm}\) annihilation), when BBN is over.
The output of Step _3)_ is the abundances of the light elements originating from BBN. To compare with data, it is customary to quote helium-4 in terms of the primordial mass fraction:8
Footnote 8: Notice that this definition differs at the sub-percent level from the helium mass fraction adopted in the context of the CMB [24]: \(Y_{P}^{\text{CMB}}\equiv(m_{\text{\tiny He}}/4)\,Y_{P}/[(m_{\text{\tiny He}}/4 )Y_{P}+m_{\text{\tiny H}}\,(1-Y_{P})]\), with \(m_{\text{\tiny H},4\text{\tiny He}}\) the atomic mass of hydrogen and helium.
\[Y_{P}\equiv 4\times Y_{\text{\tiny 4}\text{\tiny He}}\simeq\rho_{\text{\tiny 4 }\text{\tiny He}}/\rho_{B}\,. \tag{20}\]
The other primordial elements under the lamppost of astrophysical observations are deuterium, helium-3 and lithium-7 (see, e.g., [11] for a recent report on the status of these measurements), which are usually quoted in terms of the relative number densities with respect to hydrogen:
\[i/\text{H}\equiv Y_{i}/Y_{\text{p}}=n_{i}/n_{\text{H}}\;\;,\;\text{where}\;i= \text{D},\,^{3}\text{He},\,^{7}\text{Li}\,. \tag{21}\]
Notice that the final yield of primordial helium-3 receives a contribution from unstable species such as tritium; likewise, the final amount of lithium-7 includes the decay of beryllium-7.
The literature contains several publicly accessible compilations of the thermonuclear rates relevant for BBN. It is important to note that there are several different parameterizations of these rates adopted in BBN studies, and they differ not only with respect to the theoretical approach, but also with respect to the measured nuclear reaction data included in fitting them. To highlight a few of the more important approaches:
* The NACRE II database [95] collects an extended evaluation of reaction rates of charged-particle induced reactions on target nuclides with mass number \(A<16\), adopting the so-called potential model [91] to describe nuclear cross sections in the energy range of interest.
* PRIMAT tabulates an extensive catalogue (comprising more than 400 reactions), characterized by several nuclear cross sections evaluated via refined statistical analyses within \(R\)-matrix theory [96, 97, 98, 99] or computed using dedicated numerical tools, e.g., the TALYS code [100].
* PArthENoPE implements semi-analytic expressions resulting from polynomial fits to nuclear data including theory modeling of screening and thermal effects [92, 101]; data-oriented analyses relevant for BBN rates can be also found in the work of Refs. [102, 103].
If one limits the scope to precise predictions of the helium-4 and deuterium abundances, the relevant portion of the nuclear network simplifies considerably, contracting to \(\mathcal{O}(10)\) processes [104]. Thus, PRyMordial offers the option of restricting the BBN analysis to a small network of 12 key reactions [105], implemented according to two different sets of thermonuclear rates: the first is largely based on the NACRE II compilation, whereas the second is based on the tabulated rates in PRIMAT. These two sets differ marginally in their predictions for helium-4, but lead to relevant differences in the prediction for deuterium, as discussed at length in Ref. [54], after the important measurement carried out by the LUNA collaboration [106].9 For the most precise prediction of lithium-7, PRyMordial offers the possibility to solve a nuclear network including the 51 additional reactions listed in Appendix B, by adopting part of the network in Ref. [100] included in the PRIMAT database.
Footnote 9: This fact has been more quantitatively acknowledged in Ref. [35] which used a beta version of PRyMordial.
PRyMordial handles uncertainties on the tabulated thermonuclear rates \(\widetilde{\Gamma}^{(R)}\) by providing (for each forward10 nuclear reaction) a set of median values, \(\langle\widetilde{\Gamma}^{(R)}\rangle\) together with an uncertainty factor \(\Delta\widetilde{\Gamma}^{(R)}\), corresponding to a sample of temperatures. Following the method outlined in Refs. [107, 108], to perform a MC analysis with PRyMordial one should treat the provided thermonuclear rates as log-normal distributed, implying that for each nuclear process \(R\) a random realization of the thermonuclear rate will be:
Footnote 10: The corresponding reverse reactions are obtained via Eq. (19) from the interpolated forward rates.
\[\log\widetilde{\Gamma}^{(R)}=\log\langle\widetilde{\Gamma}^{(R)}\rangle+p^{(R )}\log\Delta\widetilde{\Gamma}^{(R)}\, \tag{22}\]
where \(p^{(R)}\) is a temperature-independent coefficient following a normal distribution [109]. Hence, in order to properly take into account the uncertainties of the thermonuclear rates in a MC analysis of BBN, one should independently vary the nuisance parameters \(p^{(R)}\) for all the reactions \(R\) included in the study, see, e.g., the work carried out in Ref. [35] and the MC examples presented in section 3.
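In code, the sampling of Eq. (22) amounts to drawing one standard-normal nuisance parameter per reaction and rescaling the tabulated median rate accordingly; a minimal sketch with placeholder numbers (not the PRyMordial tables):

```
# Log-normal sampling of a thermonuclear rate as in Eq. (22); placeholder values.
import numpy as np

rng = np.random.default_rng(0)
median_rate = np.array([2.0e1, 3.0e4, 8.0e5])   # <Gamma^(R)> on a grid of T9
uncert_fact = np.array([1.05, 1.08, 1.12])      # Delta Gamma^(R) on the same grid

p_R = rng.normal()                              # one nuisance parameter per reaction
sampled_rate = np.exp(np.log(median_rate) + p_R * np.log(uncert_fact))
print(sampled_rate)
```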
## 3 How to Use PRyMordial
In this section we provide some example code that demonstrates the use of PRyMordial. We start by detailing the modules of the code including their inputs and key parameters. We show how to implement a state-of-the-art analysis of the BBN era within the SM. Finally, we provide a concise description on how to use the code for the study of NP, and discuss how to implement and analyze generic BSM scenarios.
### Structure of the Code and Hello, World!
PRyMordial is a numerical tool dedicated to the efficient and accurate evaluation, in the SM and beyond, of all the key observables related to the BBN era discussed in section 2, namely:
* The number of effective relativistic degrees of freedom, \(N_{\rm eff}\), Eq. (5) ;
* The cosmic neutrino abundance today, \(\Omega_{\nu}h^{2}\), Eq. (6) ;
* The helium-4 mass fraction (both for BBN and CMB), \(Y_{P}\), Eq. (20) ;
* The relative number density of deuterium, helium-3 and lithium-7, Eq. (21).
In contrast to other BBN codes available, PRyMordial begins by computing the thermal background from first principles. As a byproduct of the determination of \(N_{\rm eff}\) and \(\Omega_{\nu}h^{2}\), the relationship between time, scale factor and temperature of relativistic species is determined precisely, including effects from non-instantaneous decoupling within and beyond the Standard Model.
Next, PRyMordial evaluates the weak rates for neutron freeze out via a state-of-the-art implementation that includes nucleon finite-mass effects, one-loop QED corrections and finite-temperature effects. While the latter are typically negligible within current observational precision and can be conveniently stored between runs, the remainder are generally recomputed for each iteration of a generic BBN analysis.
Finally, PRyMordial solves a network of nuclide reactions for their yields within three different physical regimes: _i)_ a high-temperature era in which one can restrict the study to nucleons with an initial temperature of \({\cal O}(10)\) MeV and a final temperature close to neutrino decoupling; _ii)_ a mid-temperature era from \({\cal O}(1)\) MeV down to \({\cal O}(0.1)\) MeV, during which photo-dissociation of nuclear bound states is relevant; _iii)_ and a low temperature era starting at \({\cal O}(0.1)\) MeV during which PRyMordial follows all of the nuclear species of interest, which ends at a temperature well below \(e^{\pm}\) heating of the thermal bath, i.e. down to \({\cal O}(1)\) keV. Local thermal equilibrium sets the initial nuclide abundances and detailed balance determines all of the reverse reactions included in the chosen set of nuclear reactions. These three regimes are matched such that the solution for each one provides the initial conditions for the successive period.
Figure 1: PRyMordial _in a nutshell: Schematic of the modules making it up and their inter-relations._
PRyMordial is a Python package with optional dependencies which allow more advanced users to speed up execution by exploiting the Julia programming language. The recommended libraries and general requirements are tabulated in Appendix A. As highlighted in Figure 1, PRyMordial is organized in five primary modules:
* PRyM_init.py is an initialization module where physical constants and Boolean flags for user-controlled options are defined; in particular, three main blocks for input parameters are found:
* Fundamental constants, masses (in natural units), initialized according to the PDG [42];11 Footnote 11: For the electroweak sector we adopt \(\{\alpha_{\rm em},G_{\rm F},M_{\rm Z}\}\) as inputs and derive the rest via tree-level relations.
* Additional parameters needed for the evaluation of the \(n\leftrightarrow p\) rates beyond the Born level;
* Cosmological inputs including the CMB temperature and the abundance of baryonic matter [24].
Boolean flags allow the user to switch on/off the following options:
* verbose_flag: Allows the user to run the code with all of the internal messages enabled;
* numba_flag: If True, speeds up some numerical integrations, if the Numba library is installed;
* numdiff_flag: If True, performs numerical derivatives using Numdifftools library;
* aTid_flag: Controls the inclusion of incomplete-decoupling effects in the determination of the scale factor as a function of time and temperature;
* compute_bckg_flag: If True, recomputes the thermodynamical background as presented in subsection 2.1 (via save_bckg_flag the outcome can be stored in a file for future runs);
* NP_thermo_flag: If True, includes the contribution(s) of new (interacting) species to the dynamics of the thermal bath (by default, one must also provide a NP temperature);
* NP_nu_flag: If True, includes new species thermalized with the neutrino bath;
* NP_e_flag: If True, includes new species thermalized with the plasma;
* compute_nTOp_flag: If True, recomputes weak rates beyond Born as discussed in subsection 2.2 (via save_nTOp_flag the outcome can be stored in a file for future runs);
* nTOpBorn_flag: If True, adopts the Born approximation for the neutron freeze out;
* compute_nTOp_thermal_flag: If True, recomputes thermal corrections to \(n\leftrightarrow p\) rates via Vegas (since this is numerically intensive, we recommend save_nTOp_thermal_flag = True);
* tau_nflag: If True, uses the neutron lifetime to normalize the weak rates, see subsection 2.2;
* NP_nTOp_flag: If True, includes NP affecting \(n\leftrightarrow p\) rates in units of the Born rates;
* smallnet_flag: If True, restricts the nuclear network to the set of 12 key nuclear processes collected in Table 1 of Appendix B;
* nacreii_flag: If True, the key nuclear rates adopted in PRyMordial will be mostly based on NACRE II compilation rather than those of PRIMAT, see subsection 2.3;
* NP_nuclear_flag: If True, shifts the nuclear rates due to NP in units of the standard ones;
* julia_flag: If True, solves all of the systems of ordinary differential equations using routines in the SciML kit [58] developed for the Julia programming language; the optional dependencies described in Appendix A are then required.
This module also loads the tabulated nuclear rates (as well as the coefficients of Eq. (19)).
* PRyM_thermo.py is the module where all of the thermodynamical quantities for the species contributing to the expansion of the Universe during radiation domination are defined, together with all the collision terms that enter in Eq. (3) and Eq. (7).
* A module where the actual computation of the \(n\leftrightarrow p\) rates is performed from scratch, or by loading pre-stored rates from a file.
* A module where the nuclear network of Eq. (16) is set up and solved, involving the nuclear rates loaded by PRyM_init.py. The Boolean flag smallnet_flag controls whether PRyMordial sets up and solves the smaller network of 12 key reactions or the full set of 63 nuclear processes.
* PRyM_main.py is the main module, which calls the other modules to solve for the thermodynamical background, compute \(N_{\rm eff}\) and the cosmic neutrino abundance, and solve for the nuclide yields. It contains the Python class PRyMclass(), designated to return all the cosmological observables implemented in the package.
* PRyM_jl_sys.py is an optional module which allows the user to solve all of the systems of ordinary differential equations in PRyM_main.py by taking advantage of the numerically efficient routines that are part of the SciML kit [58] developed in Julia. In some cases, this significantly speeds up the execution time of the code (to a degree depending on both the adopted precision of the computation and the specific choice of differential-equation solver).
After downloading PRyMordial, the code can be used immediately. To run a Hello, World!-style example, the user would enter the package folder, start an interactive Python session, and type:
```
# Hello, World! of PRyMordial
import PRyM.PRyM_main as PRyMmain
res = PRyMmain.PRyMclass().PRyMresults()
```
which executes a BBN computation and fills the array res with the values of:
\[\left[\begin{array}{c}N_{\rm eff},\Omega_{\nu}h^{2}\times 10^{6}\ ({\rm rel }),\sum m_{\nu}/\Omega_{\nu}h^{2}[{\rm eV}],Y_{P}^{({\rm CMB})},Y_{P}^{({\rm BBN })},D/H\times 10^{5},^{3}{\rm He/H}\times 10^{5},^{7}{\rm Li/H}\times 10^{10} \end{array}\right]\]
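For convenience, individual entries of res can be read off by position following the ordering above (the variable names here are illustrative, not part of the package; the indices 4 and 5 are the same ones used in the slicing examples later in this section):

```
# Illustrative indexing of the output array; ordering as listed above.
Neff   = res[0]   # effective number of relativistic degrees of freedom
YP_BBN = res[4]   # helium-4 mass fraction (BBN definition)
DoH    = res[5]   # D/H x 10^5
print(Neff, YP_BBN, DoH)
```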
Located in the same folder are:
* a folder PRyM in which all of the modules described above reside;
* a folder PRyMrates in which all the essential thermal, weak and nuclear rates are present, and where new evaluations of them can be stored;
* a script named runPRyM_julia.py that provides the user with a simple example of how to use the package, with execution-time benchmarking in both standard and Julia modes.
In the following subsections we present more sophisticated examples illustrating some of PRyMordial's capabilities.
### SM examples: the PDG Plot and Monte Carlo Analysis
In an interactive session in Python, any default value in PRyM_init.py can be changed using the syntax:
```
import PRyM.PRyM_init as PRyMini
# New assignment x for parameter X
PRyMini.X = x
```
This includes the Boolean flags listed in the previous subsection. Hence - to perform a run with: _i)_ the computation of the thermal background from scratch, including non-instantaneous decoupling effects; _ii)_ the ab-initio evaluation of the weak rates for neutron freeze out; and _iii)_ the inclusion of key nuclear processes based on the tabulated rates of the NACRE II compilation - one should type:
```
import PRyM.PRyM_init as PRyMini
# Include incomplete decoupling in a(T)
PRyMini.aTid_flag = True
# Recompute the background from scratch
PRyMini.compute_bckg_flag = True
# Save the background in PRyMrates/thermo
PRyMini.save_bckg_flag = True
# Recompute n <--> p rates from scratch
PRyMini.compute_nTOp_flag = True
# Save n <--> p rates in PRyMrates/nTOp
PRyMini.save_nTOp_flag = True
# Include only key rates in nuclear network
PRyMini.smallnet_flag = True
# NACRE II compilation for key rates
PRyMini.nacreii_flag = True
# Compute PRyMordial observables
import PRyM.PRyM_main as PRyMmain
res = PRyMmain.PRyMclass().PRyMresults()
```
The array res is assigned the same values as in the Hello, World! example, above. This code also stores the results for the thermal background and \(n\leftrightarrow p\) rates for future runs. Consequently, a subsequent call with the same setup can be made faster:
```
import PRyM.PRyM_init as PRyMini
# No need to recompute background since stored
PRyMini.compute_bckg_flag = False
PRyMini.save_bckg_flag = False
# No need to recompute n <--> p rates as well
PRyMini.compute_nTOp_flag = False
PRyMini.save_nTOp_flag = False
# Compute PRyMordial observables: now faster!
import PRyM.PRyM_main as PRyMmain
res = PRyMmain.PRyMclass().PRyMresults()
```

Figure 2: _Primordial abundances of helium-4, deuterium, helium-3, and lithium-7 as predicted by_ PRyMordial _within the SM, as a function of the cosmic baryon density. Central predictions are shown without theory uncertainties (i.e. using the nominal nuclear rates for the largest set implemented in the package with the NACRE II compilation for the key processes) and at the central values of all of the inputs. Measurements of light-element abundances (orange) as well as the CMB constraint on the baryon-to-photon ratio (cyan) follow from Figure 24.1 of the PDG [42]._
While it may be necessary in general to recompute the thermal background and/or the rates for neutron freeze out, there are cases for which storing the outcome of these computations can be computationally advantageous. An example is the classic PDG review BBN plot of the primordial abundances as a function of the baryon-to-photon ratio \(\eta_{B}\)[42]. Once thermal background and weak rates have been stored, the behaviour of the abundances in the PDG Figure 24.1 can be reproduced with PRyMordial:
```
# PDG plot
npoints = 50
import numpy as np
etabvec = np.logspace(-10, -9, npoints)
# Initialization of array of observables
YP_vec, DoH_vec, He3oH_vec, Li7oH_vec = np.zeros((4, npoints))
for i in range(npoints):
    # Update value of baryon-to-photon ratio and store new obs
    PRyMini.eta0b = etabvec[i]
    YP_vec[i], DoH_vec[i], He3oH_vec[i], Li7oH_vec[i] = PRyMmain.PRyMclass().PRyMresults()[4:8]
```
The outcome of this code is illustrated in Figure 2, which adopts the largest nuclear network for the most accurate prediction of the relative abundance of lithium-7. It is worth noting that the BBN prediction for deuterium matches observations of quasar absorption systems, and is also in line with the cosmological abundance of baryons independently determined from the CMB (without a BBN prior). As pointed out in Ref. [54] and further scrutinized in Ref. [35], this test of concordance would fail if the PRIMAT rates were to be adopted, i.e. nacreii_flag=False.
To perform a Monte Carlo analysis of the SM predictions taking into account uncertainties (similar to the one presented in Ref. [35]):
```
# SM MC run
num_it = 10000  # number of iterations
import numpy as np
Yp_vec, YDoH_vec, YHe3oH_vec, YLi7oH_vec = np.zeros((4, num_it))
# Import PRyM modules
import PRyM.PRyM_init as PRyMini
import PRyM.PRyM_main as PRyMmain
# Baryon eta from Planck 18 (no BBN prior)
mean_eta0b = PRyMini.eta0b
std_Omegabh2 = 2*1.e-4
std_eta0b = PRyMini.Omegabh2_to_eta0b*std_Omegabh2
# Neutron lifetime from PDG 2023
mean_tau_n = PRyMini.tau_n
std_tau_n = 0.5
# Compute primordial abundances at each iteration
def ComputeAbundances(i):
    # Settings to speed up the SM MC
    PRyMini.compute_bckg_flag = False
    PRyMini.compute_nTOp_flag = False
    # Large network for nuclear rates
    PRyMini.smallnet_flag = False
    # Gaussian prior on baryon-to-photon ratio
    PRyMini.eta0b = np.random.normal(mean_eta0b, std_eta0b)
    # Gaussian prior on neutron lifetime
    PRyMini.tau_n = np.random.normal(mean_tau_n, std_tau_n)
    # Log-normal prior on nuclear rates
    PRyMini.p_npdg, PRyMini.p_dpHe3g, PRyMini.p_ddHe3n, \
    ...  # for the sake of brevity not listing all 63 processes
    PRyMini.p_ppndp, PRyMini.p_Li7taann = np.random.normal(0, 1, PRyMini.num_reactions)
    # NACRE II compilation for key rates
    PRyMini.nacreii_flag = True
    PRyMini.ReloadKeyRates()
    return PRyMmain.PRyMclass().PRyMresults()[4:8]
# Parallelizing w/ joblib + multiprocessing
from joblib import Parallel, delayed
import multiprocessing
num_cpu = int(multiprocessing.cpu_count())
FinalAbundances = Parallel(n_jobs = num_cpu)(delayed(ComputeAbundances)((i))
                                             for i in range(num_it))
Yp_vec, YDoH_vec, YHe3oH_vec, YLi7oH_vec = np.array(FinalAbundances).transpose()
```

Figure 3: _1D probability distributions (and 2D joint 68% and 95% probability regions) for the light primordial abundances predicted in the SM with_ PRyMordial_. Predictions are obtained using a Gaussian prior for the neutron lifetime \(\tau_{n}=878.4\pm 0.5\) s (comprising the eight best measurements from ultra-cold neutron experiments combined in Ref. [42]), and the cosmic baryon density, \(\Omega_{B}h^{2}=0.02230\pm 0.00020\) (from Table 5 of Ref. [24] for the analysis with an uninformative \(Y_{P}\) prior). The large network of nuclear reactions has been used, implying an additional 63 nuisance parameters varied with a log-normal distribution. Two different sets of key nuclear rates have been considered on the basis of the Boolean flag_ nacreii_flag_, and the statistics of the marginalized distributions for each case is presented.

The output maps out the probability distributions, shown in Figure 3, where the light elements at the end of the BBN era are predicted within the SM via a MC analysis that involves: _i)_ a cosmological prior on the cosmic baryon abundance; _ii)_ a particle-physics measurement prior on the neutron lifetime; and _iii)_ a dedicated treatment of the uncertainties in the rates of the nuclear processes. Figure 3 displays the "deuterium anomaly" present for the PRIMAT compilation of the key nuclear rates, and further shows that it is completely washed out when one employs the NACRE II database12.
Footnote 12: The results in Figure 3 slightly differ from Ref. [35] due to an update on the Gaussian prior for the neutron lifetime and the different choice for the cosmological baryon abundance adopted in that study.
Figure 3 suggests that the "primordial lithium problem" stands out as statistically significant, regardless of the approach undertaken for the nuclear network. However, the up-to-date analysis of the lithium problem in Ref. [94] points out that the predicted primordial abundance of lithium-7 could be depleted via stellar (and cosmic-ray) nucleosynthesis. Given this argument, the observational inference of Figure 2 and Figure 3, in which the observations lie below the theoretical prediction for primordial lithium-7, is consistent with a resolution of this long-standing puzzle.
### NP examples: New Interacting Sectors and BBN
PRyMordial allows the user to perform state-of-the-art analyses for Physics beyond the SM in the Early Universe. A few options already built-in to the current release include:
* additional relativistic degrees of freedom contributing to the expansion rate of the Universe in the form of a shift of \(N_{\rm eff}\), see Eq. (5);
* a non-zero chemical potential for neutrinos, influencing both the cosmological expansion rate as well as the equilibrium distributions in the weak processes for neutron-to-proton conversion;
* Boolean flags specific to the study of new species interacting with the plasma and/or neutrino bath, as well as flags implementing a new entire sector with temperature \(T_{\rm NP}\neq T_{\gamma,\nu}\,\);
* a Boolean flag and a dedicated parameter encoding NP effects as a phenomenological modification of \(n\leftrightarrow p\) conversion rates (in units of the Born rates);
* a set of parameters that allow one to similarly investigate NP effects in the nuclear processes as a simple shift in terms of the median rate of each process.
The first two have been extensively investigated in Ref. [35], and thus we focus here on the others. The following code demonstrates how to implement an electrophilic species in thermal equilibrium with the SM bath during BBN:
```
import PRyM.PRyM_init as PRyMini
# Electrophilic
PRyMini.NP_e_flag = True
import numpy as np
from scipy.integrate import quad
# Scalar with mass mX = 5 MeV
gX = 1; mX = 5.
def rho_NP(T_NP):
    if T_NP < mX/30.:
        return 0.
    else:
        res_int = quad(lambda E: E**2*(E**2-(mX/T_NP)**2)**0.5/(np.exp(E)-1.),
                       mX/T_NP, 100., epsrel=1e-9, epsabs=1e-12)[0]
        return gX/(2*np.pi**2)*T_NP**4*res_int
def p_NP(T_NP):
    if T_NP < mX/30.:
        return 0.
    else:
        res_int = quad(lambda E: (E**2-(mX/T_NP)**2)**1.5/(np.exp(E)-1.),
                       mX/T_NP, 100., epsabs=1e-9, epsrel=1e-12)[0]
        return gX/(6*np.pi**2)*T_NP**4*res_int
def drho_NP_dT(T_NP):
    if T_NP < mX/30.:
        return 0.
    else:
        res_int = quad(lambda E: 0.25*E**3*(E**2-(mX/T_NP)**2)**0.5*np.sinh(E/2.0)**-2,
                       mX/T_NP, 100., epsabs=1e-9, epsrel=1e-12)[0]
        return gX/(2*np.pi**2)*T_NP**3*res_int
import PRyM.PRyM_main as PRyMmain
res = PRyMmain.PRyMclass(rho_NP,p_NP,drho_NP_dT).PRyMresults()
```
One can similarly evaluate a neutrinophilic species thermalized with the SM bath by replacing the Boolean flag at the top of the script with: PRyMini.NP_nu_flag = True.
In Figure 4 we present the results for NP scenarios of this type, reproducing the qualitative features already well-discussed, e.g., in Ref. [31]. In particular, we observe three primary NP effects: _i)_ a change in the cosmological expansion rate, affecting the time-temperature relation; _ii)_ an impact on the evolution of the neutrino-to-photon temperature ratio, relevant for both neutrino and neutron
Figure 4: _Investigation of the cosmological impact at the end of the BBN era from a new relativistic species \(X\) with degrees of freedom corresponding to a real / complex scalar (light / dark-blue lines), a real massive vector (magenta), or a Majorana / Dirac fermion (red / green); \(X\) is assumed to be in thermal equilibrium with either the electron-positron-photon plasma (left panels) or with the SM neutrino thermal bath (right panels). The orange bands represent the observational constraints at the 2\(\sigma\) level from Refs. [24; 42]. Predictions with PRyMordial are obtained at nominal inputs and rates._
decoupling; and _iii)_ additional entropy released in the plasma, altering the number of baryons for a given baryon-to-photon ratio. Note that in Figure 4 we use the set of nuclear reactions from PRIMAT (nacreii_flag = False), and as a result a neutrinophilic species around \(\sim 10\) MeV in mass appears to be favored by current observations of primordial \(D/H\), while remaining compatible with the other cosmological NP probes based on helium-4 and \(N_{\rm eff}\).
In contrast to the previous scripts, this code calls PRyMclass() with three functions (of temperature) as arguments: the contribution of the new species to the energy density, its pressure, and the temperature derivative of its energy density. More generally, one can include a new interacting sector with its own temperature \(T_{\rm NP}\) and non-trivial collision term \(\delta C_{\rm NP}\) along the lines of the recent work in Ref. [34]. In PRyMordial one may study such "dark sectors" consistently by generalizing the set of equations in Eq. (3) to follow \(T_{\rm NP}\) together with \(T_{\gamma,\nu}\), and solving for the entropy density involved in Eq. (8) taking into account the effect of the NP. To do this, one switches on the Boolean flag NP_thermo_flag and codes all of the relevant contributions to the energy density, its derivative (which can optionally be evaluated numerically via Numdifftools), pressure and collision term for the NP sector, and passes them to PRyMresults.
One can also study NP resulting in changes to the weak rates for neutron freeze out and/or any of the implemented thermonuclear rates. To modify the weak rates, one sets the Boolean flag NP_nTOp_flag = True and changes the parameter NP_delta_nTOp from its default of zero. Analogously, for the nuclear rates one switches on the flag NP_nuclear_flag and modifies the value of NP_delta_R with R being the reaction of interest.
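For instance, a relative shift of one of the key rates could be coded along the following lines. The attribute name NP_delta_npdg is our assumed example of the "NP_delta_R" pattern described above (with npdg labelling \(n+p\to D+\gamma\), as in the nuisance parameters p_npdg used elsewhere in this section), and should be checked against PRyM_init.py:

```
# Hedged sketch: +5% NP shift of the n+p -> D+gamma rate, in units of the
# standard one. NP_delta_npdg is an assumed attribute name (pattern NP_delta_R).
import PRyM.PRyM_init as PRyMini
import PRyM.PRyM_main as PRyMmain

PRyMini.NP_nuclear_flag = True
PRyMini.NP_delta_npdg = 0.05      # assumed name; relative shift of the rate
res = PRyMmain.PRyMclass().PRyMresults()
```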
As an example, we consider NP which results in a small change to the \(n\leftrightarrow p\) conversion rates. We perform a Bayesian fit to \(Y_{P}\) and \(D/H\) (as quoted by the PDG [42]), allowing \(\tau_{\rm n}\), \(\Omega_{B}h^{2}\), and the key nuclear rates to vary within their uncertainties (in line with the SM MC analysis of the previous subsection):
```
# Bayesian analysis w/ PRyMordial and emcee
import emcee
# BBN measurements from PDG 2023
YP_ave = 0.245; YP_std = 0.003
DoH_ave = 2.547; DoH_std = 0.025
# Mean and standard deviation on neutron lifetime [s]
mean_tau_n = PRyMini.tau_n
std_tau_n = 0.5
# Mean and standard deviation on cosmic baryonic abundance
mean_Omegabh2 = PRyMini.Omegabh2
std_Omegabh2 = 2*1.e-4
# Test statistic for the fit
def log_L(theta):
    delta_nTOp = theta
    PRyMini.NP_delta_nTOp = delta_nTOp
    # Gaussian extraction of neutron lifetime
    PRyMini.tau_n = np.random.normal(mean_tau_n, std_tau_n)
    # Gaussian extraction of cosmic baryonic abundance
    PRyMini.Omegabh2 = np.random.normal(mean_Omegabh2, std_Omegabh2)
    # IMPORTANT: Assign etab after updating Omegab (or directly vary etab)
    PRyMini.eta0b = PRyMini.Omegabh2_to_eta0b*PRyMini.Omegabh2
    # Gaussian weights for log-normal nuclear rates
    PRyMini.p_npdg, PRyMini.p_dpHe3g, PRyMini.p_ddHe3n, PRyMini.p_ddtp, \
    PRyMini.p_tqag, PRyMini.p_tdan, PRyMini.p_t_aLi7g, PRyMini.p_He3ntp, \
    PRyMini.p_He3dap, PRyMini.p_He3aBe7g, PRyMini.p_Be7nLi7p, PRyMini.p_Li7paa = np.random.normal(0, 1, 12)
    YPth, DoHth = PRyMmain.PRyMclass().PRyMresults()[4:6]
    m2LogL = (YPth-YP_ave)**2/(YP_std**2) + (DoHth-DoH_ave)**2/(DoH_std**2)
    return -0.5*m2LogL
def log_prior(theta):
    delta_nTOp = theta
    if -0.3 < delta_nTOp < 0.3:
        return 0.0
    return -np.inf
def log_prob(theta):
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    ll = log_L(theta)
    return lp + ll
if __name__ == '__main__':
    # Total number of steps x walker
    nsteps = 2100
    # Guess on burn-in steps
    discsteps = int(nsteps/3.)
    nwalkers = 6
    ndim = 1
    pos = np.array([0.]) + [1e-2]*np.random.randn(nwalkers, ndim)
    def RunMCMCserial(i):
        import PRyM.PRyM_init as PRyMini
        PRyMini.smallnet_flag = True
        PRyMini.nacreii_flag = True
        PRyMini.ReloadKeyRates()
        PRyMini.NP_nTOp_flag = True
        sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
        sampler.run_mcmc(pos, nsteps, progress = True)
        all_samples = sampler.get_chain(discard=discsteps, flat=True)
        return all_samples
    from joblib import Parallel, delayed
    import multiprocessing
    import time
    num_cpu = int(multiprocessing.cpu_count())
    start = time.time()
    FinalRes = Parallel(n_jobs = num_cpu)(delayed(RunMCMCserial)((i))
                                          for i in range(num_cpu))
    my_samples_1, my_samples_2, my_samples_3, my_samples_4, \
    my_samples_5, my_samples_6, my_samples_7, my_samples_8 = FinalRes
    # Collecting the samples all together
    final_samples = np.concatenate((my_samples_1, my_samples_2, my_samples_3, my_samples_4,
                                    my_samples_5, my_samples_6, my_samples_7, my_samples_8))
```
This code can be simply generalized to modify any of the other nuclear reactions.
Figure 5 shows the resulting 2D joint (68% and 95%) probability regions for NP_delta_nTOp correlated with the measurements of primordial helium-4 and deuterium. To perform the statistical analysis, we adopt the emcee package [56]. For the sake of computational efficiency, we restrict the analysis to the network of 12 key reactions (with nacreii_flag = True), as is sufficient given the focus on helium-4 and deuterium. Figure 5 indicates that BBN is consistent with NP in the \(n\leftrightarrow p\) conversion rates at the level of at most a few percent relative to the standard Born rates. The tight correlation with \(Y_{P}\) illustrates the importance of neutron freeze out in determining the primordial helium-4 abundance.
## 4 Outlook
In this work we have presented PRyMordial: A new tool to explore the physics of BBN in great detail, with an unprecedented eye toward applications for physics beyond the SM. The package also allows for fast, user-friendly precision analyses of the BBN era within the SM of Particle Physics, reaching the same level of accuracy as the state-of-the-art codes already publicly available.
In section 2 we have provided in some detail a review of the BBN era, highlighting the physics implemented in the code. The main novelties in PRyMordial are that it is:
Figure 5: _Constraint on a relative change of the weak \(n\leftrightarrow p\) conversion rates from NP, based on a Bayesian fit performed with PRyMordial with the use of the emcee\([56]\)package. Gaussian priors on the neutron lifetime and the cosmic baryon abundance are assumed (as for Figure 3) and flags smallnet_flag and nacreii_flag are both switched on. Helium-4, deuterium measurements correspond to the recommended values from the PDG [42]._
* A package entirely written in Python, easy to install, run and modify, efficient in the evaluation of the key quantities for the study of BBN; moreover, an optional dependence on Julia allows the user to make the code run even faster;
* A computation of the thermal background based on the Boltzmann equations governing the evolution of the relativistic species present at that time. This allows for an accurate prediction of \(N_{\rm eff}\) from first principles and opens up new avenues for the study of BSM Physics;
* A fast and accurate evaluation of the weak rates including QED, nucleon-finite mass and thermal corrections for a prediction of the neutron-to-proton ratio that confronts the precision of current and next-generation measurements;
* A BBN code that easily allows exploration of uncertainties and changes in all of the input parameters and most importantly, includes by default different treatments for the nuclear rates in order to give to the user a better handle on the overall theoretical systematics.
In section 3 we describe the structure of the code and provide several examples of its usage within the Standard Model and for a few interesting scenarios of NP.
There are many directions that can be pursued in the future to make PRyMordial an even more compelling and flexible tool for the community. One important aspect we plan to expand upon is the characterization of the thermal background. At the moment, only a single common temperature for neutrinos is considered and no evolution equation for primordial chemical potentials is given by default. All of these can be easily implemented along the lines of Ref. [53].
Also relevant for precision studies would be an approach to efficiently include effects from phase-space spectral distortions of relativistic species. In this regard, we plan to further enrich the physics in PRyMordial with a dedicated framework for neutrino decoupling that includes effects from oscillations at non-zero lepton chemical potentials, see Ref. [110].
It would be a very interesting (though formidable) task to further improve the current next-to-leading order computation of neutron freeze out in the Early Universe, filling in the gaps of some of the approximations undertaken in the literature (see Appendix B of [111] as well as the improvements brought by the recent effective-field-theory study at zero temperature of Ref. [77]). We would also eventually like to include higher-order QED corrections such as the ones available in Refs. [64] and [112], as well as the NLO QED corrections to \(e^{+}e^{-}\leftrightarrow\nu\bar{\nu}\) matrix elements recently examined in Ref. [113].
Finally, in the future we would like to enlarge the nuclear network beyond the 63 nuclear reactions currently implemented, which encode all of the processes involving nuclides up to boron-8 in atomic and mass number (needed for an accurate prediction of lithium-7 in the Standard Model).
With the public release of PRyMordial we hope to provide to the community an important new tool to address fundamental questions about the Early Universe, whose study remains central to further progress in our understanding of Nature. In the wise words of a giant of our time [1]:
_"[Human beings] are not content to comfort themselves with tales of gods and giants, or to confine their thoughts to the daily affairs of life; they also build telescopes and satellites and accelerators, and sit at their desks for endless hours working out the meaning of the data they gather. The effort to understand the universe is one of the very few things that lifts human life a little above the level of force, and gives it some of the grace of tragedy."_
**Note about referencing:** PRyMordial makes use of previous work in the literature. When using it, please make certain to appropriately reference the original literature as well as PRyMordial itself.
## Appendix A How to Install PRyMordial
PRyMordial is publicly released on GitHub and can be installed by cloning the project repository into a directory of choice. To enable the optional Julia dependencies, one can then type in a Python session:
```
import julia
julia.install()
import diffeqpy
diffeqpy.install()
```
At this point the user will be able to exploit the SciML routines developed in Julia to solve the nuclear-reaction network in PRyMordial, speeding up the execution time by a factor of two or more, and with the possibility of cherry-picking from the large collection of differential-equation solvers built into the package; see the documentation at [https://docs.sciml.ai/DiffEqDocs/stable/solvers/ode_solve](https://docs.sciml.ai/DiffEqDocs/stable/solvers/ode_solve).
To use the SciML routines, the user must set the flag PRyM_init.julia_flag = True. In some systems, the very first call of PRyMmain.PRyMclass().PRyMresults() might need to be in Python mode and therefore requires initially PRyM_init.julia_flag = False. Also, the first call in Julia mode will inevitably be slow, since it will compile PRyM_jl_sys.py. As a concise example of the dedicated script runPRyM_julia.py coming with the present release, here below is how things should work in the Julia mode:
```
import PRyM.PRyM_init as PRyMini
import PRyM.PRyM_main as PRyMmain
# Initialization call in Python:
PRyMini.julia_flag = False
res = PRyMmain.PRyMclass().PRyMresults()
# First call in Julia will be slow:
PRyMini.julia_flag = True
res = PRyMmain.PRyMclass().PRyMresults()
# From here on, any call will be fast!
```
## Appendix B Nuclear Processes in PRyMordial
In this appendix we collect the 12 key reactions necessary to accurately predict helium-4 and deuterium, see Table 1, as well as the 51 additional reactions comprising the full set recommended for a more robust prediction of lithium-7, Table 2. For the general aspects of the evaluation of the nuclear rates in the Early Universe as well as the theoretical and statistical details behind the compilation of the nuclear rates present in PRyMordial, we refer the interested reader to Ref. [92] and Refs. [107; 108].
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Nuclear Reaction** & Ref. & Ref. & **Nuclear Reaction** & Ref. & Ref. \\ \hline n+p \(\rightarrow\) D+\(\gamma\) & [114] & [114] & D+p \(\rightarrow\)\({}^{3}\)He+\(\gamma\) & [106] & [106] \\ D+D \(\rightarrow\)\({}^{3}\)He+n & [98] & [115] & D+D \(\rightarrow\)\({}^{3}\)H+p & [98] & [115] \\ \({}^{3}\)H+p \(\rightarrow\)\({}^{4}\)He+\(\gamma\) & [92] & [92] & \({}^{3}\)H+D \(\rightarrow\)\({}^{4}\)He+n & [96] & [115] \\ \({}^{3}\)H+\({}^{4}\)He \(\rightarrow\)\({}^{7}\)Li+\(\gamma\) & [96] & [115] & \({}^{3}\)He+n \(\rightarrow\)\({}^{3}\)H+p & [96] & [102] \\ \({}^{3}\)He+D \(\rightarrow\)\({}^{4}\)He+p & [96] & [115] & \({}^{3}\)He+\({}^{4}\)He \(\rightarrow\)\({}^{7}\)Be+\(\gamma\) & [98] & [115] \\ \({}^{7}\)Be+n \(\rightarrow\)\({}^{7}\)Li+p & [96] & [103] & \({}^{7}\)Li+p \(\rightarrow\)\({}^{4}\)He+\({}^{4}\)He & [96] & [115] \\ \hline \end{tabular}
\end{table}
Table 1: _The key nuclear reactions adopted in PRyMordial, with corresponding references. The red (blue) column refers to the option nacreii_flag = True (False), see subsection 2.3 for further details. Notice that the compilation of the blue column is present also in the code PRIMAT[47]._
## Acknowledgments
We are grateful to Cara Giovanetti and Federico Bianchini for providing us with valuable feedback on the present release after \(\beta\)-testing PRyMordial. We acknowledge Kevork Abazajian, Kim Berghaus, Federico Bianchini, Miguel Escudero, Rouven Essig, Cara Giovanetti, Seyda Ipek, Mariangela Lisanti, Hongwan Liu, and Jessie Shelton for discussions. We are indebted to all of the authors of AlterBBN, NUDEC_BSM, PArthENoPE, and PRIMAT for making their codes publicly accessible: the present work and the release of PRyMordial greatly benefited from the open-source community.
M.V. is supported in part by the Simons Foundation under the Simons Bridge for Postdoctoral Fellowships at SCGP and YITP, award number 815892. T.M.P.T. is supported in part by the U.S. National Science Foundation under Grant PHY-2210283. This work was performed in part at Aspen Center for Physics, which is supported by National Science Foundation grant PHY-2210452.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Nuclear Reaction** & Ref. & **Nuclear Reaction** & Ref. \\ \hline \({}^{7}\)Li+p \(\to\)\({}^{4}\)He+\({}^{4}\)He+\(\gamma\) & [115] & \({}^{7}\)Be+n \(\to\)\({}^{4}\)He+\({}^{4}\)He & [116] \\ \({}^{7}\)Be+D \(\to\)\({}^{4}\)He+\({}^{4}\)He+p & [117] & D+\({}^{4}\)He \(\to\)\({}^{6}\)Li+\(\gamma\) & [118] \\ \({}^{6}\)Li+p \(\to\)\({}^{7}\)Be+\(\gamma\) & [115] & \({}^{6}\)Li+p \(\to\)\({}^{3}\)He+\({}^{4}\)He & [115] \\ \({}^{8}\)B+n \(\to\)\({}^{4}\)He+\({}^{4}\)He+p & [119] & \({}^{6}\)Li+\({}^{3}\)He \(\to\)\({}^{4}\)He+\({}^{4}\)He+p & [119] \\ \({}^{6}\)Li+\({}^{3}\)H \(\to\)\({}^{4}\)He+\({}^{4}\)He+n & [119] & \({}^{6}\)Li+\({}^{3}\)H \(\to\)\({}^{8}\)Li+p & [119] \\ \({}^{7}\)Li+\({}^{3}\)He \(\to\)\({}^{6}\)Li+\({}^{4}\)He & [119] & \({}^{8}\)Li+\({}^{3}\)He \(\to\)\({}^{7}\)Li+\({}^{4}\)He & [119] \\ \({}^{7}\)Be+\({}^{3}\)H \(\to\)\({}^{6}\)Li+\({}^{4}\)He & [119] & \({}^{8}\)B+\({}^{3}\)H \(\to\)\({}^{7}\)Be+\({}^{4}\)He & [119] \\ \({}^{8}\)B+n \(\to\)\({}^{6}\)Li+\({}^{3}\)He & [119] & \({}^{8}\)B+n \(\to\)\({}^{7}\)Be+D & [119] \\ \({}^{6}\)Li+\({}^{3}\)H \(\to\)\({}^{7}\)Li+D & [119] & \({}^{6}\)Li+\({}^{3}\)He \(\to\)\({}^{7}\)Be+D & [119] \\ \({}^{7}\)Li+\({}^{3}\)He \(\to\)\({}^{4}\)He+\({}^{4}\)He+D & [119] & \({}^{8}\)Li+\({}^{3}\)He \(\to\)\({}^{4}\)He+\({}^{4}\)He+\({}^{3}\)H & [119] \\ \({}^{7}\)Be+\({}^{3}\)H \(\to\)\({}^{4}\)He+\({}^{4}\)He+D & [119] & \({}^{7}\)Be+\({}^{3}\)H \(\to\)\({}^{7}\)Li+\({}^{3}\)He & [119] \\ \({}^{8}\)B+D \(\to\)\({}^{7}\)Be+\({}^{3}\)He & [119] & \({}^{8}\)B+\({}^{3}\)H \(\to\)\({}^{4}\)He+\({}^{4}\)He+\({}^{3}\)He & [119] \\ \({}^{7}\)Be+\({}^{3}\)He \(\to\) p+p+\({}^{4}\)He+\({}^{4}\)He & [119] & D+D \(\to\)\({}^{4}\)He+\(\gamma\) & [115] \\ \({}^{3}\)He+\({}^{3}\)He \(\to\)\({}^{4}\)He+p+p & [115] & \({}^{7}\)Be+p \(\to\)\({}^{8}\)B+\(\gamma\) & [115] \\ \({}^{7}\)Li+D \(\to\)\({}^{4}\)He+\({}^{4}\)He+n & [120] & D+n \(\to\)\({}^{3}\)H+\(\gamma\) & [121] \\ \({}^{3}\)H+\({}^{3}\)H \(\to\)\({}^{4}\)He+n+n & [121] & \({}^{3}\)He+n \(\to\)\({}^{4}\)He+\(\gamma\) & [90] \\ \({}^{3}\)He+\({}^{3}\)H \(\to\)\({}^{4}\)He+D & [117] & \({}^{3}\)He+\({}^{3}\)H \(\to\)\({}^{4}\)He+n+p & [117] \\ \({}^{7}\)Li+\({}^{3}\)H \(\to\)\({}^{4}\)He+\({}^{4}\)He+n+n & [117], [122] & \({}^{7}\)Li+\({}^{3}\)He \(\to\)\({}^{4}\)He+\({}^{4}\)He+n+p & [117], [122] \\ \({}^{8}\)Li+D \(\to\)\({}^{7}\)Li+\({}^{3}\)H & [123] & \({}^{7}\)Be+\({}^{3}\)H \(\to\)\({}^{4}\)He+\({}^{4}\)He+n+p & [117], [122] \\ \({}^{7}\)Be+\({}^{3}\)He \(\to\)\({}^{4}\)He+\({}^{4}\)He+p+p & [117], [122] & \({}^{6}\)Li+n \(\to\)\({}^{3}\)H+\({}^{4}\)He & [117] \\ \({}^{3}\)He+\({}^{3}\)H \(\to\)\({}^{6}\)Li+\(\gamma\) & [124] & \({}^{4}\)He+n+p \(\to\)\({}^{6}\)Li+\(\gamma\) & [117] \\ \({}^{6}\)Li+n \(\to\)\({}^{7}\)Li+\(\gamma\) & [122] & \({}^{6}\)Li+D \(\to\)\({}^{7}\)Li+p & [122] \\ \({}^{6}\)Li+D \(\to\)\({}^{7}\)Be+n & [122] & \({}^{7}\)Li+n \(\to\)\({}^{8}\)Li+\(\gamma\) & [122], [125] \\ \({}^{7}\)Li+D \(\to\)\({}^{8}\)Li+p & [122] & \({}^{8}\)Li+p \(\to\)\({}^{4}\)He+\({}^{4}\)He+n & [126] \\ \({}^{4}\)He+n+n \(\to\)\({}^{6}\)He+\(\gamma\) & [127] & p+p+n \(\to\) D+p & [117] \\ \({}^{7}\)Li+\({}^{3}\)H \(\to\)\({}^{4}\)He+\({}^{4}\)He+n+n & [117], [122] & & \\ \hline \end{tabular}
\end{table}
Table 2: _Nuclear processes beyond the key ones implemented in the package PRyMordial, with related references. Those processes are particularly needed for a precise prediction of the primordial abundance of lithium-7. Notice that the compilation above is part of the larger one present in the code PRIMAT [47]._ |
2304.11353 | Transition System Representation of Boolean Control Networks | First, the topological structure of a transition system is studied. Then, two
types of transition system (TS) representations of Boolean networks (BNs) and
Boolean control networks (BCNs) are investigated. The first kind of
representation is state-based, which converts a BCN into a TS with either
distinct control or non-distinct control. The second representation is
output-based, which is also called the simulation of the original BCN. Some
applications are also studied. | Daizhan Cheng, Xiao Zhang, Zhengping Ji | 2023-04-22T09:24:53Z | http://arxiv.org/abs/2304.11353v1 | # Transition System Representation of Boolean Control Networks
###### Abstract
First, the topological structure of a transition system is studied. Then, two types of transition system (TS) representations of Boolean networks (BNs) and Boolean control networks (BCNs) are investigated. The first kind of representation is state-based, which converts a BCN into a TS with either distinct control or non-distinct control. The second representation is output-based, which is also called the simulation of the original BCN. Some applications are also studied.
## I Introduction
Since Kauffman proposed the BN to model genetic networks [17], the study of BNs and BCNs has become a hot topic in the biological community as well as in the control community. Consider a BN,
\[\begin{cases}X_{1}(t+1)=f_{1}(X_{1}(t),X_{2}(t),\cdots,X_{n}(t)),\\ X_{2}(t+1)=f_{2}(X_{1}(t),X_{2}(t),\cdots,X_{n}(t)),\\ \vdots,\\ X_{n}(t+1)=f_{n}(X_{1}(t),X_{2}(t),\cdots,X_{n}(t)),\end{cases} \tag{1}\]
where \(X_{i}\in\mathcal{D}:=\{0,1\}\), \(f_{i}:\mathcal{D}^{n}\rightarrow\mathcal{D}\), \(i\in[1,n]\).
It is clear that every trajectory eventually converges to an attractor (either a fixed point or a cycle), because the network consists of finitely many nodes and each node can take only two values \(\mathcal{D}=\{0,1\}\). Thus, the attractors, together with their basins of attraction, form the entire topological structure of a BN. Therefore, finding both fixed points and cycles of a given BN is a first-priority problem in the study of BNs. Many early works considered this problem by providing methods for certain classes of BNs [8, 13, 9, 12], to name a few.
In the last two decades, the semi-tensor product (STP) of matrices has been used to transform BN (or BCN) into an algebraic discrete-time dynamical (control) system. The STP approach to BN (BCN) has proven to be very efficient. Many theoretical results have been obtained.
The basic step for the STP approach can be described in a nutshell as follows: construct the so-called vector form of \(X_{i}\) as
\[x_{i}:=\begin{bmatrix}X_{i}\\ 1-X_{i}\end{bmatrix}\in\Delta,\quad i\in[1,n],\]
where \(\Delta:=\Delta_{2}\), and \(\Delta_{k}:=\mathrm{Col}(I_{k})\) is the column set of the identity matrix \(I_{k}\).
Using the vector form for \(X_{i}\), the system (1) can be expressed in its algebraic state space representation (ASSR) as [3]
\[x(t+1)=Mx(t), \tag{2}\]
where \(x(t)=\ltimes_{i=1}^{n}x_{i}(t)\in\Delta_{2^{n}}\) and \(M=M_{1}*M_{2}*\cdots*M_{n}\in\mathcal{L}_{2^{n}\times 2^{n}}\), with \(M_{i}\) being the structure matrix of \(f_{i}\) and \(*\) the Khatri-Rao product of matrices [5].
The general formula for calculating the number of attractors is given by [2] as
**Theorem I.1**: _[_2_]_ _Consider the Boolean network (1) with its ASSR (2). Then_
\[\begin{cases}N_{1}=\mathrm{trace}(M),\\ N_{s}=\frac{\mathrm{trace}(M^{s})-\sum\limits_{k\in\mathcal{P}(s)}kN_{k}}{s}, \quad 2\leq s\leq n,\end{cases} \tag{3}\]
_where \(N_{s}\) is the number of cycles of length \(s\), in particular, \(N_{1}\) is the number of fixed points considered as cycles of length \(1\). Note that \(\mathcal{P}(s)\) is the set of the proper factors of \(s\), including \(1\) and excluding \(s\)._
**Remark I.2**: _The proof of Theorem I.1 is based on the following three observations:_
1. _If_ \(x_{i}\in\Delta_{2^{n}}\) _is on a cycle of length_ \(s\)_, then_ \(x_{i}\) _is a fixed point of_ \(M^{s}\)_._
2. _If_ \(x_{i}\in\Delta_{2^{n}}\) _is on a cycle of length_ \(t\) _and_ \(t|s\)_, then_ \(x_{i}\) _is also a fixed point of_ \(M^{s}\)_._
3. _If_ \(x_{i}\in\Delta_{2^{n}}\) _is a fixed point of_ \(M^{s}\)_, it must be either of type (i) or of type (ii)._
Consider a \(k\)-valued logical network. Assume it is expressed as in (1), except that now \(X_{i}\in\mathcal{D}_{k}=\{1,\cdots,k\}\), \(i\in[1,n]\). Let \(j\sim\delta_{k}^{j}\), \(j\in[1,k]\), and
\[x_{i}=\delta_{k}^{j}\quad\text{when}\quad X_{i}=j,\quad i\in[1,n],\ j\in[1,k],\]
is the vector form of \(X_{i}\). Then its ASSR is still expressed as in (2), but with \(M\in\mathcal{L}_{k^{n}\times k^{n}}\).
Taking into account the observations of Remark I.2, the following result is also obvious.
**Corollary I.3**: _[_20_]_ _Consider the \(k\)-valued logical network (1) with its ASSR (2). Then the formula (3) remains true._
The cycles of BCNs have also been studied in several papers, e.g. [25, 19]. However, there is no formula similar to (3) for calculating all control attractors.
A transition system (TS) is a more general finite-valued network. A BN can be seen as an autonomous TS and a BCN as a TS. The TS itself is a very useful framework for representing finite automata (FA) [10, 15, 16]. In particular, it provides a fundamental framework for hybrid
systems [22, 21]. The STP approach to FAs has also been developed [23, 24, 7].
To the best of our knowledge, the topological structure of a TS, that is, the structure of its attractors, has not been clearly revealed. Whether formula (3) can be applied to TSs is still an open problem.
In this paper, we first consider the topological structure of autonomous TSs. Two types of cycles are considered: simple cycles and compound cycles, and the number of attractors of each length is calculated. We then consider the transformation of a BCN into an autonomous TS; using the transformed TS, the attractors of the BCN can be calculated. Finally, some applications of such transformations are examined.
The remainder of this paper is structured as follows:
Section II considers the topological structure of TSs, and the formula of fixed points and cycles for networks is extended to TSs. Section III investigates the state-based representation of BCNs with some direct applications. Section IV discusses the output-based representation of BCNs, called the simulation of BCNs. Furthermore, the formula for the simulation dynamics is obtained. The output robust controls are also investigated. Section V is a brief conclusion.
The notations used in the text are listed below:
* \(\mathrm{Col}(A)\) (\(\mathrm{Row}(A)\)): the set of columns (rows) of \(A\).
* \(\mathcal{D}_{k}:=\{1,2,3,\cdots,k\}\).
* \(\delta^{i}_{n}\): the \(i\)-th column of the identity matrix \(I_{n}\).
* \(\Delta_{n}:=\{\delta^{i}_{n}\,|\,i=1,2,\cdots,n\}\).
* \(\mathcal{L}_{m\times n}\): the set of \(m\times n\) logical matrices, that is, \(\mathrm{Col}(\mathcal{L})\subset\Delta_{m}\).
* \(\mathcal{B}_{m\times n}\): the set of \(m\times n\) Boolean matrices, that is, \([\mathcal{B}]_{i,j}\in\mathcal{D}:=\{0,1\}\).
* \(\delta_{m}[i_{1},i_{2},\cdots,i_{n}]:=[\delta^{i_{1}}_{m},\delta^{i_{2}}_{m},\cdots,\delta^{i_{n}}_{m}]\in\mathcal{L}_{m\times n}\).
* \(A+_{\mathcal{B}}B\): the Boolean addition of \(A,B\in\mathcal{B}_{m\times n}\), that is, \([A+_{\mathcal{B}}B]_{i,j}=[A]_{i,j}\vee[B]_{i,j}\); \(\sum\limits_{\mathcal{B}}\) denotes the Boolean sum.
* \(A\times_{\mathcal{B}}B\): the Boolean product of \(A,B\). For \(A\in\mathcal{B}_{n\times n}\), \(A^{(s)}:=\underbrace{A\times_{\mathcal{B}}A\times_{\mathcal{B}}\cdots\times_{ \mathcal{B}}A}_{s}\).
## II Transition Systems
**Definition II.1**: _[_1_]_ _A TS can be described by \(T=(\mathcal{X},\mathcal{U},\Sigma,\mathcal{O},h)\), where_
* \(\mathcal{X}:=\{X_{1},X_{2},\cdots,X_{n}\}\) _is a finite state set and_ \(X_{i}\in\mathcal{D}_{2}\) _for Boolean TS (or_ \(X_{i}\in\mathcal{D}_{k}\) _for_ \(k\)_-valued TS);_
* \(\mathcal{U}:=\{U_{1},U_{2},\cdots,U_{m}\}\) _is a finite input set and_ \(U_{j}\in\mathcal{D}_{2}\) _for Boolean TS (or_ \(U_{j}\in\mathcal{D}_{k}\) _for_ \(k\)_-valued TS);_
* \(\Sigma:\mathcal{X}\times\mathcal{U}\to 2^{\mathcal{X}}\) _is the state transition mapping;_
* \(\mathcal{O}:=\{O_{1},O_{2},\cdots,O_{p}\}\) _is the observing set;_
* \(h:\mathcal{X}\to\mathcal{O}\) _is the observation mapping._
_If \(|\Sigma(X,U)|\leq 1\), for \(\forall X\in\mathcal{X}\) and \(\forall U\in\mathcal{U}\), then \(T\) is called a deterministic TS, otherwise, it is called a non-deterministic TS._
_Then the dynamics of a TS can be expressed as_
\[\begin{cases}X(t+1)=\Sigma(U(t),X(t)),\\ Y(t)=h(X(t)).\end{cases} \tag{4}\]
For a TS where \(|\mathcal{X}|=n\), \(|\mathcal{U}|=m\), and \(|\mathcal{O}|=p\), a subset \(X(t)\in 2^{\mathcal{X}}\) can be expressed in vector form as \(x(t)=(x_{1}(t),x_{2}(t),\cdots,x_{n}(t))^{\mathrm{T}}\in\mathcal{B}^{n}\), which is a Boolean vector, where
\[x_{i}(t)=\begin{cases}1,&X_{i}\in X(t),\\ 0,&X_{i}\not\in X(t),\quad i\in[1,n].\end{cases}\]
Using vector form expressions, similar to BN (or \(k\)-valued network), the ASSR of a transition system can be expressed as
\[\begin{cases}x(t+1)=Lu(t)x(t),\\ y(t)=Hx(t),\end{cases} \tag{5}\]
where \(L\in\mathcal{B}_{n\times nm}\) is a Boolean matrix, \(H\in\mathcal{L}_{p\times n}\) is a logical matrix.
It is also clear that the ASSR of an autonomous TS is
\[\begin{cases}x(t+1)=Mx(t),\\ y(t)=Hx(t),\end{cases} \tag{6}\]
where, \(M\in\mathcal{B}_{n\times n}\) is a Boolean matrix.
**Example II.2**: _[_1_]_ _Consider a TS as \(T=(\mathcal{X},\mathcal{U},\Sigma,\mathcal{O},h)\) (Ref. to Fig. 1), where_
* \(\mathcal{X}=\{x_{1},x_{2},x_{3},x_{4}\}\)_._
* \(\mathcal{U}=\{u_{1},u_{2}\}\)_._
* \(\Sigma(x_{1},u_{1})=\{x_{2},x_{3}\},\)__\(\Sigma(x_{2},u_{1})=\{x_{2},x_{3}\},\)__\(\Sigma(x_{2},u_{2})=\{x_{4}\},\)__\(\Sigma(x_{3},u_{2})=\{x_{2},x_{3}\},\)__\(\Sigma(x_{4},u_{1})=\{x_{2},x_{4}\}.\)__
* \(\mathcal{O}=\{O_{1},O_{2},O_{3}\}.\)__
* \(h(x_{1})=O_{1},\quad h(x_{2})=h(x_{4})=O_{2},\quad h(x_{3})=O_{3}.\)__
Let
\[\begin{array}{ll}x_{i}=\delta^{i}_{4},&i=1,2,3,4;\\ u_{j}=\delta^{j}_{2},&j=1,2;\\ y_{k}=\delta^{k}_{3},&k=1,2,3.\end{array}\]
Then the ASSR of \(T\) is
\[\begin{cases}x(t+1)=Lu(t)x(t),\\ y(t)=Hx(t),\end{cases} \tag{7}\]
Fig. 1: TS of Example II.2
where,
\[L=\begin{bmatrix}0&0&0&0&0&0&0&0\\ 1&1&0&1&0&0&1&0\\ 1&1&0&0&0&0&1&0\\ 0&0&0&1&0&1&0&0\end{bmatrix},\]
\(H=\delta_{3}[1,2,3,2]\).
**Definition II.3** ([1]): _Consider the transition system (4)._
1. _A set_ \(X(t,U,X(0)):=\{X(0),X(1),\cdots,X(t)\}\) _is called a trajectory starting from_ \(X(0)\) _and driven by_ \(U=\{U(0),U(1),\cdots,U(t-1)\}\)_, where_ \(X(\tau)\in\mathcal{X}\)_,_ \(\tau\in[0,t]\)_, and_ \[X(\tau+1)\in\Sigma(U(\tau),X(\tau)),\quad\tau\in[0,t-1].\]
2. _A trajectory_ \(\{X(\tau),X(\tau+1),\cdots,X(\tau+\ell-1)\}\) _is called a general cycle (GC) of length_ \(\ell\) _if_ \(X(\tau+\ell)=X(\tau)\)_._
3. _A cycle of length_ \(1\) _is called a fixed point._
4. _A cycle is called a simple cycle (SC) if_ \(X(\tau)\)_,_ \(X(\tau+1)\)_,_ \(\cdots\)_,_ \(X(\tau+\ell-1)\) _are distinct (fixed points are also considered SCs). Otherwise, it is called a compound cycle (CC)._
5. _A cycle_ \(C_{P}\) _is called a (non-trivial) power cycle (PC), if_ \(C\) _is a cycle and_ \(C_{P}=\underbrace{C\ C\ \cdots\ C}_{k}\)_,_ \(k\geq 2\)_. Obviously, a PC must be a CC._
**Remark II.4**: _The cycle of an autonomous TS can be defined as a simplified version of Definition II.3. We omit it and assume that the cycle of an autonomous TS is also properly defined._
**Example II.5**: _Consider an autonomous TS, denoted by \(T\), which has a transition graph as shown in Fig. 2. Its ASSR is easily obtained as_
\[x(t+1)=\begin{bmatrix}1&1\\ 1&0\end{bmatrix}x(t). \tag{8}\]
_Then_
1. \(1\) _is a fixed point._
2. \((1,2)\) _is an SC._
3. \((1,2,1,2,1,2)\) _is a PC._
4. \((1,2,1,1,2,1,1,1,2,\cdots,\underbrace{1,\cdots,1}_{s},2),\quad s=1,2,\cdots\)__ _are CCs of length_ \(\frac{s(s+3)}{2}\)_, which can be arbitrarily long._
_Consider a partition as:_
\[\text{GC }=\text{ SC }\bigcup\text{ CC}.\]
_Following the argument in Remark I.2, it is easy to obtain the following result:_
**Proposition II.6**: _Consider an autonomous TS with its ASSR (6). Then formula (3) is applicable to calculate its numbers of CCs of arbitrary length \(s>0\)._
_As shown in Example II.5, the length \(s>0\) of a CC could be arbitrarily large, so applying formula (3) to calculate the numbers of all its CCs with arbitrary length \(s>0\) is impossible. The problem can be solved by observing the following facts:_
* _An SC is a cycle of length_ \(s\leqslant n\)_;_
* _Any CC can be viewed as a recursive concatenation or insertion of SCs. For example, consider a trajectory of the form_ \(C:=(y,\cdots,\xi,x_{1},x_{2},\cdots,x_{\ell-1},\xi,\cdots,y)\) _where_ \(x_{1},x_{2},\cdots,x_{\ell-1}\) _are distinct; then_ \((\xi,x_{1},x_{2},\cdots,x_{\ell-1})\) _is an SC (of length at most_ \(n\)_). Replacing the subsequence_ \(\xi,x_{1},x_{2},\cdots,x_{\ell-1},\xi\) _in_ \(C\) _by_ \(\xi\)_, and repeatedly extracting such sub-cycles from the remaining trajectory, we end up partitioning the trajectory into SCs, which shows that every CC is constructed by recursively inserting SCs into an SC._
_Therefore, we only have to compute all SCs using the formula (3) and the algorithm for finding attractors of BNs (or \(k\)-valued networks) [2]. Then the SCs are sufficient to describe the topological structure of the autonomous TS._
We consider a simple example.
**Example II.7**: _Consider a TS, its ASSR is_
\[x(t+1)=\begin{bmatrix}0&0&1&0\\ 1&0&0&0\\ 1&1&0&0\\ 0&0&1&1\end{bmatrix}x(t):=Mx(t). \tag{9}\]
_Using the formula (3), it is easy to calculate that_
\[N_{1}=1,\quad N_{2}=1,\quad N_{3}=1,\quad N_{4}=0.\]
_The corresponding attractors are:_
\[(4),\quad(1,3),\quad(1,2,3).\]
_They are all SCs._
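For concreteness, formula (3) can be evaluated numerically in a few lines. The sketch below (ours, with illustrative function names) applies it to the matrix of Eq. (9) using ordinary integer matrix powers and reproduces the counts above.

```python
import numpy as np

def cycle_counts(M, s_max):
    """Numbers N_s of cycles of length s = 1..s_max, from formula (3):
    N_1 = trace(M);  N_s = (trace(M^s) - sum_{k in P(s)} k*N_k) / s."""
    M = np.asarray(M, dtype=np.int64)
    N, P = {}, np.eye(M.shape[0], dtype=np.int64)
    for s in range(1, s_max + 1):
        P = P @ M                                        # ordinary integer power M^s
        lower = sum(k * N[k] for k in range(1, s) if s % k == 0)
        N[s] = (int(np.trace(P)) - lower) // s
    return N

# Matrix of Eq. (9); the result should be {1: 1, 2: 1, 3: 1, 4: 0}, as found above
M = [[0, 0, 1, 0],
     [1, 0, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 1, 1]]
print(cycle_counts(M, 4))
```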
## III TS Representation of BCNs
### _Conversion of BCNs to Autonomous TSs_
Consider a BCN as
\[\begin{cases}X_{1}(t+1)=f_{1}(X_{1}(t),\cdots,X_{n}(t),U_{1}(t),\cdots,U_{m}( t)),\\ X_{2}(t+1)=f_{2}(X_{1}(t),\cdots,X_{n}(t),U_{1}(t),\cdots,U_{m}(t)),\\ \vdots\\ X_{n}(t+1)=f_{n}(X_{1}(t),\cdots,X_{n}(t),U_{1}(t),\cdots,U_{m}(t)),\end{cases} \tag{10}\]
_where \(X_{i},U_{j}\in\mathcal{D}\), \(f_{i}:\mathcal{D}^{n+m}\rightarrow\mathcal{D}\), \(i\in[1,n]\), \(j\in[1,m]\)._
_The ASSR of (10) is_
\[x(t+1)=Lu(t)x(t), \tag{11}\]
_where \(x(t)=\ltimes_{i=1}^{n}x_{i}(t)\in\Delta_{2^{n}}\), \(u(t)=\ltimes_{j=1}^{m}u_{j}(t)\in\Delta_{2^{m}}\), \(L\in\mathcal{L}_{2^{n}\times 2^{m+n}}\)._
Fig. 2: Transition System \(T\)
**Definition III.1**: _Consider system (10)._
1. _If there exists a sequence_ \[u(\tau),u(\tau+1),\cdots,u(\tau+s-1)\] _such that_ \[x(\tau)\xrightarrow{u(\tau)}x(\tau+1)\xrightarrow{u(\tau+1)}x(\tau+2)\cdots x( \tau+s-1)\] \[\xrightarrow{u(\tau+s-1)}x(\tau+s).\] _Then_ \(x(\tau),x(\tau+1),\cdots,x(\tau+s)\) _is called a control trajectory with undistinguished control of length_ \(s\)_._ _The sequence of state-control pairs_ \((x(\tau),u(\tau)),(x(\tau+1),u(\tau+1)),\cdots,(x(\tau+s),u(\tau+s))\) _is called a control trajectory with distinguished control of length_ \(s\)_._
2. _A control trajectory_ \(x(\tau),x(\tau+1),\cdots,x(\tau+s)\) _is called a control cycle of length_ \(s\)_, if_ \(x(\tau)=x(\tau+s)\)_. The simple (or power, compound, etc.) control cycle can be defined similarly._
3. _A control cycle of length_ \(1\) _is called a control fixed point._
**Definition III.2**: _Consider BCN (10)._
1. _It is converted into an autonomous TS with undistinguished control, where the ASSR of the TS is_ \[x(t+1)=Mx(t),\] (12) _where_ \[M=\sum\nolimits_{\mathcal{B},\,i=1}^{2^{m}}\left(L\delta^{i}_{2^{m}}\right),\] (13) _here_ \(\sum\limits_{\mathcal{B}}\) _is the Boolean sum._
2. _It is converted into an autonomous TS with distinguished control, where the ASSR of the TS is_ \[w(t+1)=\Xi w(t),\] (14) \[\text{where }w(t)=u(t)x(t),\,\Xi=\underbrace{[L^{\text{T}}\ L^{\text{T}} \cdots\ L^{\text{T}}]}_{2^{m}}^{\text{T}}.\]
Using Proposition II.6 to the converted autonomous TS yields the following result, which can be used to calculate control cycles of BCNs.
**Corollary III.3**: _(i) Applying the formula (3) to the converted autonomous TS with undistinguished control (12), the number of cycles of a BCN with undistinguished control of different lengths \(s\) can be calculated._
2. _Applying the formula (_3_) to the converted autonomous TS with distinguished control (_14_), the number of cycles of a BCN with distinguished control of different lengths_ \(s\) _can be calculated._
**Remark III.4**: _With some mild revision, the above results can be naturally extended to \(k\)-valued control networks. Similarly, they are also applicable to a general TS, when it is converted into an autonomous TS. The following example shows this._
**Example III.5**: _Recall Example II.2._
1. _A straightforward computation shows that its converted autonomous TS with undistinguished control is determined by its ASSR as_ \[x(t+1)=M_{I}x(t),\] _where the transition matrix is_ \[M_{I}=\begin{bmatrix}0&0&0&0\\ 1&1&1&1\\ 1&1&1&0\\ 0&1&0&1\end{bmatrix}.\] (15)
2. _The ASSR of its converted autonomous TS with distinguished control is_ \[z(t+1)=\Xi z(t),\] _where_ \[\Xi=\begin{bmatrix}0&0&0&0&0&0&0&0\\ 1&1&0&1&0&0&1&0\\ 1&1&0&0&0&0&1&0\\ 0&0&0&1&0&1&0&0\\ 0&0&0&0&0&0&0&0\\ 1&1&0&1&0&0&1&0\\ 1&1&0&0&0&0&1&0\\ 0&0&0&1&0&1&0&0\end{bmatrix}.\] (16)
3. _Using the formula (_3_), the numbers of CCs with undistinguished control are:_
 1. \(N_{1}=3\): \[X_{\text{fixed point}}=\{2,3,4\},\] where \(i\) stands for \(x_{i}\), \(i=1,2,3,4\).
 2. \(N_{2}=2\): \[C_{1}^{2}=(2,3);\quad C_{2}^{2}=(2,4).\]
 3. \(N_{3}=4\): \[\begin{array}{ll}C_{1}^{3}=(2,2,3);&C_{2}^{3}=(2,3,3);\\ C_{3}^{3}=(2,2,4);&C_{4}^{3}=(2,4,4).\end{array}\]
 4. \(N_{4}=7\): \[\begin{array}{ll}C_{1}^{4}=(2,2,2,3);&C_{2}^{4}=(2,2,3,3);\\ C_{3}^{4}=(2,3,3,3);&C_{4}^{4}=(2,2,2,4);\\ C_{5}^{4}=(2,2,4,4);&C_{6}^{4}=(2,4,4,4);\\ C_{7}^{4}=(2,3,2,4).\end{array}\]
 5. \(N_{5}=16\): \[\begin{array}{ll}C_{1}^{5}=(2,2,2,2,3);&C_{2}^{5}=(2,2,2,3,3);\\ C_{3}^{5}=(2,2,3,3,3);&C_{4}^{5}=(2,3,3,3,3);\\ C_{5}^{5}=(2,3,2,2,3);&C_{6}^{5}=(2,3,2,3,3);\\ C_{7}^{5}=(2,2,2,2,4);&C_{8}^{5}=(2,2,2,4,4);\\ C_{9}^{5}=(2,2,4,4,4);&C_{10}^{5}=(2,4,4,4,4);\\ C_{11}^{5}=(2,4,2,2,4);&C_{12}^{5}=(2,4,2,4,4);\\ C_{13}^{5}=(2,3,2,2,4);&C_{14}^{5}=(2,3,2,4,2);\\ C_{15}^{5}=(2,3,3,2,4);&C_{16}^{5}=(2,4,4,2,3).\end{array}\]
 \[\vdots\]
 Finally, the set of SCs is \[SC=\{\{2\},\{3\},\{4\},\{2,3\},\{2,4\}\}.\]
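The two conversions of Definition III.2 amount to simple block operations on \(L\). A minimal Python sketch for the TS of Example II.2 is given below (the variable names are ours); it reproduces the matrices of Eqs. (15) and (16).

```python
import numpy as np

# ASSR matrix L of Example II.2 (Eq. 7); columns are grouped into one block per control value
L = np.array([[0, 0, 0, 0, 0, 0, 0, 0],
              [1, 1, 0, 1, 0, 0, 1, 0],
              [1, 1, 0, 0, 0, 0, 1, 0],
              [0, 0, 0, 1, 0, 1, 0, 0]])
n_states, n_controls = 4, 2

# Undistinguished control (Eqs. 12-13): Boolean sum of the blocks L * delta_i
blocks = [L[:, i * n_states:(i + 1) * n_states] for i in range(n_controls)]
M_I = (np.sum(blocks, axis=0) > 0).astype(int)

# Distinguished control (Eq. 14): stack L vertically n_controls times
Xi = np.vstack([L] * n_controls)

print(M_I)   # matrix of Eq. (15)
print(Xi)    # matrix of Eq. (16)
```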
### _Some Applications_
* Reachability of TSs: Consider an autonomous TS (in ASSR form): \[x(t+1)=Mx(t),\] (17) where \(x(t)\in\Delta_{n}\), \(M\in\mathcal{B}_{n\times n}\). **Definition III.6**: \(x_{j}\) is reachable from \(x_{i}\), if the trajectory \(x(t,x_{0})\), starting from \(x_{0}=x_{i}\), can reach \(x_{j}\) at finite time \(t_{0}\), i.e., \(x(t_{0},x_{0})=x_{j}\). Define the reachable matrix \(\mathcal{C}\) as \[\mathcal{C}=\sum\nolimits_{\mathcal{B},\,s=1}^{n}M^{(s)}.\] (18) Then the following result is well known.
**Proposition III.7** ([3]): _Assume (17) is the converted autonomous TS of a BCN \(\Sigma\). Then for (17) \(x_{j}\) is reachable from \(x_{i}\), if and only if, for BCN \(\Sigma\): \(x_{j}\) is reachable from \(x_{i}\)._
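Equation (18) is straightforward to evaluate with Boolean matrix arithmetic. As a sketch (ours, with illustrative names), the reachability matrix of the TS of Example II.7 can be computed as follows.

```python
import numpy as np

def reachability_matrix(M):
    """Reachability matrix C of Eq. (18): Boolean sum of the Boolean powers M^(s), s = 1..n."""
    M = (np.asarray(M) > 0).astype(int)
    n = M.shape[0]
    C, P = np.zeros_like(M), np.eye(n, dtype=int)
    for _ in range(n):
        P = ((P @ M) > 0).astype(int)   # Boolean product, giving M^(s)
        C = C | P                       # Boolean addition
    return C

# TS of Example II.7: C[j-1, i-1] = 1 iff x_j is reachable from x_i;
# every state is reachable from states 1-3, while only x_4 is reachable from x_4
M = [[0, 0, 1, 0],
     [1, 0, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 1, 1]]
print(reachability_matrix(M))
```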
* Decoupling: **Definition III.8**: _(i) A subset \(Z\subset\Delta_{n}\) is called an attractor of (17), if \(x(t)\in Z\) implies \(x(t+1)\in Z\) for all \(t\in\mathbb{N}\). (ii) If \(Z=\{x_{0}\}\) is an attractor, then \(Z\) (or \(x_{0}\)) is called a fixed point._
The following result is obvious:
**Proposition III.9**: _(i) Suppose (after a coordinate change if necessary) \(Z=\{x_{1},x_{2},\cdots,x_{r}\}\). Then \(Z\) is an attractor, if and only if \(M\) has the following block upper triangle form:_
\[M=\begin{bmatrix}M_{1}&M_{2}\\ 0&M_{3}\end{bmatrix} \tag{19}\]
_where \(M_{1}\in\mathcal{B}_{r\times r}\)._
* _Suppose (after a possible coordinate change)_ \(Z_{i}=\{x_{1}^{i},x_{2}^{i},\cdots,x_{r_{i}}^{i}\}\)_,_ \(i=1,2,\cdots,s\) _are pairwise disjoint sets of states. Then_ \(Z_{i}\)_,_ \(i=1,2,\cdots,s\) _are attractor sets if and only if_ \(M\) _has the form (_19_), where_ \(M_{1}\) _is a block diagonal matrix such that_ \[M_{1}=\begin{bmatrix}M_{1}^{1}&0&\cdots&0\\ 0&M_{1}^{2}&\cdots&0\\ \vdots&&\\ 0&0&\cdots&M_{1}^{s}\end{bmatrix},\] _where_ \(M_{1}^{i}\in\mathcal{B}_{r_{i}\times r_{i}}\)_,_ \(i\in[1,s]\)_._
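In practice, checking Definition III.8 for a candidate set \(Z\) only requires inspecting the columns of \(M\). A minimal Python sketch (ours) is:

```python
import numpy as np

def is_attractor_set(M, Z):
    """Definition III.8 (i): Z (a set of state indices, 1-based) is an attractor set of
    x(t+1) = M x(t) iff every successor of a state in Z remains in Z."""
    M = (np.asarray(M) > 0).astype(int)
    Z0 = {z - 1 for z in Z}
    successors = {int(i) for j in Z0 for i in np.nonzero(M[:, j])[0]}
    return successors <= Z0

# For the TS of Example II.7, {4} is an attractor set (its fixed point), while {1, 3} is not
M = [[0, 0, 1, 0],
     [1, 0, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 1, 1]]
print(is_attractor_set(M, {4}), is_attractor_set(M, {1, 3}))
```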
## IV Simulation of BNs and BCNs
### _Output-Based Simulation_
Consider a logical control network or a deterministic TS (in ASSR form), denoted by \(\Sigma\), and defined by
\[\begin{cases}x(t+1)=Mu(t)x(t),\\ y(t)=Hx(t),\end{cases} \tag{20}\]
where \(x(t)\in\Delta_{n}\), \(u(t)\in\Delta_{m}\), \(M\in\mathcal{L}_{n\times mn}\), \(H\in\mathcal{L}_{p\times n}\).
The following definition is based on [1] with a mild modification of the formulation.
**Definition IV.1**: _Consider control network \(\Sigma\) (20)._
* _Two states_ \(x_{i}\) _and_ \(x_{j}\) _are said to be (output) equivalent, denoted by_ \(x_{i}\sim x_{j}\)_, if_ \(Hx_{i}=Hx_{j}\)_._
* _The quotient system, denoted by_ \(\Sigma/\sim\) _is called a simulation of_ \(\Sigma\)_._
Denote by \(\bar{x}\) the equivalence class of \(x\); \(con(\bar{x}):=\{x\mid x\sim\bar{x}\}\); \(\mathcal{O}(x)\) the output trajectory for the state trajectory starting from \(x\). Then we have
**Proposition IV.2**: _[_1_]_
\[\mathcal{O}_{\Sigma}(con(\bar{x}))\subset\mathcal{O}_{\Sigma/\sim}(\bar{x}). \tag{21}\]
From the set controllability point of view [11, 6], the following is a straightforward result [14].
**Proposition IV.3**: _The simulation \(\Sigma/\sim\) of \(\Sigma\) is a transition system, and its dynamics is_
\[\begin{cases}\bar{x}(t+1)=\left[H\times_{\mathcal{B}}M\times_{\mathcal{B}}(I_ {m}\otimes H^{T})\right]u(t)\bar{x}(t),\\ y(t)=HH^{T}\bar{x}(t).\end{cases} \tag{22}\]
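As a sketch of Eq. (22) (ours, with the Boolean products implemented explicitly and illustrative variable names), the simulation of the TS of Example II.2 can be computed as follows.

```python
import numpy as np

def bmul(A, B):
    """Boolean matrix product x_B."""
    return ((np.asarray(A).astype(int) @ np.asarray(B).astype(int)) > 0).astype(int)

# TS of Example II.2: L (4 x 8, Eq. 7) and H = delta_3[1, 2, 3, 2]
L = np.array([[0, 0, 0, 0, 0, 0, 0, 0],
              [1, 1, 0, 1, 0, 0, 1, 0],
              [1, 1, 0, 0, 0, 0, 1, 0],
              [0, 0, 0, 1, 0, 1, 0, 0]])
H = np.zeros((3, 4), dtype=int)
for j, o in enumerate([1, 2, 3, 2]):
    H[o - 1, j] = 1

# Simulation dynamics of Eq. (22): H x_B L x_B (I_2 (x) H^T), a 3 x 6 Boolean matrix
barM = bmul(bmul(H, L), np.kron(np.eye(2, dtype=int), H.T))
print(barM)
```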
### _Output-Robust Network and Control_
Consider a network (i.e., a deterministic TS):
\[\begin{cases}x(t+1)=Mx(t),\\ y(t)=Hx(t),\end{cases} \tag{23}\]
where \(M\in\mathcal{L}_{n\times n}\), \(H\in\mathcal{L}_{p\times n}\). Assume (23) is a system with possible disturbances, described by
\[M=\begin{cases}M_{0},&\xi=\emptyset\ \text{(no disturbance)},\\ L\xi,&L\in\mathcal{L}_{n\times ns},\ \xi\in\Delta_{s}.\end{cases} \tag{24}\]
Then we have two models: when \(M=M_{0}\) it is called the nominated model; and when \(M=L\xi\) it is called the disturbed model. For the nominated model, denoted by \(\Sigma_{0}\), we can construct its simulation system \(\Sigma_{0}/\sim\). For the disturbed model, we can first get its transition system representation (TSR), denoted by \(\Sigma_{\xi}\), and then construct its simulation system \(\Sigma_{\xi}/\sim\).
**Definition IV.4**: _System (23) is said to be output robust, if \(\Sigma_{0}/\sim\) and \(\Sigma_{\xi}/\sim\) are bi-simulated, that is to say, they generate identical output dynamics calculated from (22)._
**Example IV.5**: _Consider a Boolean network \(\Sigma\), which has its nominated model as_
\[\begin{cases}x_{1}(t+1)=\neg x_{1}(t),\\ x_{2}(t+1)=x_{1}(t)\,\bar{\vee}\,x_{3}(t),\\ x_{3}(t+1)=[x_{1}(t)\,\bar{\vee}\,x_{2}(t)]\lor x_{3}(t),\\ y(t)=[x_{1}(t)\leftrightarrow x_{2}(t)]\leftrightarrow\neg x_{3}(t);\end{cases} \tag{25}\]
_and its disturbed model as_
\[\begin{cases}x_{1}(t+1)=(\neg\xi(t))\wedge x_{1}(t),\\ x_{2}(t+1)=[\xi(t)\vee\neg x_{1}(t)]\,\bar{\vee}\,x_{3}(t),\\ x_{3}(t+1)=[x_{1}(t)\,\bar{\vee}\,x_{2}(t)]\lor x_{3}(t),\\ y(t)=[x_{1}(t)\leftrightarrow x_{2}(t)]\leftrightarrow\neg x_{3}(t).\end{cases} \tag{26}\]
_It is easy to have its ASSR as in (23), where_
\[\begin{array}{l}M_{0}=\delta_{8}[7,6,7,5,1,3,1,4],\\ L=\delta_{8}[7,6,7,5,7,5,7,6,1,4,1,3,7,5,7,6],\\ H=\delta_{2}[2,1,1,2,1,2,2,1].\end{array}\]
The transition representation of its disturbed model becomes \(x(t+1)=Tx(t)\), where
\[T=\begin{bmatrix}1&0&1&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&1&0&0&0&0\\ 0&1&0&0&0&0&0&0\\ 0&0&0&1&0&1&0&0\\ 0&1&0&0&0&0&0&1\\ 1&0&1&0&1&0&1&0\\ 0&0&0&0&0&0&0&0\end{bmatrix}.\]
Using Proposition IV.3, it is easy to calculate that the (output-based) simulations of these two models are the same as
\[\bar{x}(t+1)=\begin{bmatrix}0&1\\ 1&1\end{bmatrix}\bar{x}(t).\]
Hence, system \(\Sigma\) is output robust.
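This conclusion can be verified numerically: applying the autonomous specialisation of Eq. (22), \(\bar{M}=H\times_{\mathcal{B}}M\times_{\mathcal{B}}H^{T}\), to both models gives the same \(2\times 2\) matrix. A minimal Python sketch (ours, with illustrative helper names) follows.

```python
import numpy as np

def bmul(A, B):
    """Boolean matrix product x_B."""
    return ((np.asarray(A).astype(int) @ np.asarray(B).astype(int)) > 0).astype(int)

def delta(rows, idx):
    """Logical matrix delta_rows[i_1, ..., i_n]."""
    D = np.zeros((rows, len(idx)), dtype=int)
    for j, i in enumerate(idx):
        D[i - 1, j] = 1
    return D

H  = delta(2, [2, 1, 1, 2, 1, 2, 2, 1])
M0 = delta(8, [7, 6, 7, 5, 1, 3, 1, 4])        # nominated model
T  = np.array([[1, 0, 1, 0, 0, 0, 0, 0],        # disturbed model (transition representation)
               [0, 0, 0, 0, 0, 0, 0, 0],
               [0, 0, 0, 1, 0, 0, 0, 0],
               [0, 1, 0, 0, 0, 0, 0, 0],
               [0, 0, 0, 1, 0, 1, 0, 0],
               [0, 1, 0, 0, 0, 0, 0, 1],
               [1, 0, 1, 0, 1, 0, 1, 0],
               [0, 0, 0, 0, 0, 0, 0, 0]])

print(bmul(bmul(H, M0), H.T))   # simulation of the nominated model
print(bmul(bmul(H, T),  H.T))   # simulation of the disturbed model; both give [[0,1],[1,1]]
```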
Consider a control network (20) where \(u(t)\in\Delta_{\ell}\), \(M\in\mathcal{L}_{n\times n\ell}\) satisfying (24). Assume there exists a state feedback control
\[u(t)=Gx(t), \tag{27}\]
where \(G\in\mathcal{L}_{\ell\times n}\), such that the closed-loop system is output robust, then \(u(t)\) is called an output robust control, which solves the output robust control problem.
**Example IV.6**: _Consider a Boolean control network \(\Sigma\) with nominated model as_
\[\begin{cases}x_{1}(t+1)=(\neg x_{1}(t))\vee(\neg u(t)),\\ x_{2}(t+1)=x_{1}(t)\bar{\vee}x_{3}(t),\\ x_{3}(t+1)=[x_{1}(t)\bar{\vee}x_{2}(t)]\lor x_{3}(t),\\ y(t)=[x_{1}(t)\leftrightarrow x_{2}(t)]\leftrightarrow\neg x_{3}(t);\end{cases} \tag{28}\]
_and its disturbed model as_
\[\begin{cases}x_{1}(t+1)=(\neg\xi(t))\wedge x_{1}(t)\wedge u(t),\\ x_{2}(t+1)=[\xi(t)\vee\neg x_{1}(t)]\,\bar{\vee}x_{3}(t),\\ x_{3}(t+1)=[x_{1}(t)\bar{\vee}x_{2}(t)]\lor x_{3}(t),\\ y(t)=[x_{1}(t)\leftrightarrow x_{2}(t)]\leftrightarrow\neg x_{3}(t).\end{cases} \tag{29}\]
_It is easy to see that if we choose_
\[u(t)=x_{1}(t), \tag{30}\]
_then the closed-loop system becomes (25)-(26), which is output robust. Thus, the state-feedback control (30) solves the output robust problem of \(\Sigma\)._
Output robust control solves the disturbance decoupling problem without the regularity assumption [4].
## V Conclusion
This paper investigated the transition system representation of BCNs. The main contribution consists of three parts: (i) The topology of TSs was considered, and the formula for calculating fixed points and cycles of BNs was extended to TSs. (ii) Two types of state-based TS representations of BCNs, namely representations with either distinct or non-distinct controls, were proposed. (iii) An output-based TS representation, also called simulation, was studied, and its dynamic equation was obtained. Output robustness and output robust control were also studied. The technique proposed in this paper is applicable to any finite-valued network; in fact, some of the examples in this paper are not BNs or BCNs, but the technique used for them is exactly the same as the one for BNs and BCNs.
Some related problems, such as finding the output robust controls, etc., are left for further study.
|
2306.07348 | Oblique rings from migrating exomoons: A possible origin for long-period
exoplanets with enlarged radii | Context. The extremely low density of several long-period exoplanets in
mature systems is still unexplained -- with HIP 41378 f being archetypical of
this category. It has been proposed that such planets could actually have
normal densities but be surrounded by a ring observed approximately face on,
mimicking the transit depth of a puffy planet. This would imply that the
equator of the planet is nearly perpendicular to its orbit plane, which is at
odds with the formation process of gas giants. Yet, in the context of the Solar
System planets, it has been shown that after gigayears of evolution, the tidal
migration of a moon can naturally lead to a very tilted planet with a ring.
Aims. As exomoons are expected to be ubiquitous around giant exoplanets, this
mechanism may be responsible for the anomalous radii of some observed
exoplanets. In preparation for the future discoveries of the PLATO mission, we
present a simple method for checking the plausibility of this mechanism for a
given exoplanet.
Methods. Analytical formulas give the probability density function of the
relevant precession harmonics of the planet. For each harmonic, simple criteria
set the moon mass and other properties required for the mechanism to operate.
Results. We applied this methodology to HIP 41378 f, and we show that in
order to reproduce the observed configuration, a hypothetical former moon
should have had a moon-to-planet mass ratio of a few times 1e-4 (i.e. roughly
the mass of our Moon) and have migrated over a distance of a few planet's radii
on a gigayear timescale. These orders of magnitude match the properties of
moons expected to exist around gaseous exoplanets.
Conclusions. We conclude that the migration of a former moon is a viable
formation pathway for the proposed ring and tilt of HIP 41378 f. This example
strengthens the ring hypothesis and motivates its application to other targets. | Melaine Saillenfest, Sophia Sulis, Paul Charpentier, Alexandre Santerne | 2023-06-12T18:15:09Z | http://arxiv.org/abs/2306.07348v1 | Oblique rings from migrating exomoons: A possible origin for long-period exoplanets with enlarged radii
###### Abstract
Context:The extremely low density of several long-period exoplanets in mature systems is still unexplained - with HIP 41378 f being archetypical of this category. It has been proposed that such planets could actually have normal densities but be surrounded by a ring observed approximately face on, mimicking the transit depth of a puffy planet. This configuration would imply that the equator of the planet is nearly perpendicular to its orbit plane, which is at odds with the formation process of gas giants. Yet, in the context of the Solar System planets, it has recently been shown that after gigayears of evolution, the tidal migration of a moon can naturally lead to a very tilted planet with a ring.
Aims:As exomoons are expected to be ubiquitous around giant exoplanets, this mechanism may be responsible for the anomalous radii of some observed exoplanets. In preparation for the future discoveries of the _PLATO_ mission, we present a simple method for checking the plausibility of this mechanism for a given exoplanet.
Methods:Analytical formulas give the probability density function of the relevant precession harmonics of the planet. For each harmonic, simple criteria set the moon mass and other properties required for the mechanism to operate.
Results:We applied this methodology to HIP 41378 f, and we show that in order to reproduce the observed configuration, a hypothetical former moon should have had a moon-to-planet mass ratio of a few times \(10^{-4}\) (i.e. roughly the mass of our Moon) and have migrated over a distance of a few planet's radii on a gigayear timescale. These orders of magnitude match the properties of moons expected to exist around gaseous exoplanets.
Conclusions:We conclude that the migration of a former moon is a viable formation pathway for the proposed ring and tilt of HIP 41378 f. This example strengthens the ring hypothesis and motivates its application to other promising targets.
## 1 Introduction
The so-called super-puff exoplanets have moderate masses (typically \(\lesssim 15~{}M_{\oplus}\)) but surprisingly large radii (\(\gtrsim 6~{}R_{\oplus}\)), giving them extremely low bulk densities (\(\lesssim 0.3~{}\mathrm{g\,cm^{-3}}\); see e.g. Lee & Chiang 2016). Although relatively rare, super-puffs form a growing class of exoplanets. Among the puffest exoplanets with the longest orbital periods, we can cite the iconic HIP 41378 f, Kepler-87 c, Kepler-79 d, Kepler-177 c, and Kepler-51 b, c, and d. Super-puffs must be distinguished from inflated hot Jupiters, which show a correlation between stellar irradiation and radius inflation (see e.g. Laughlin et al. 2011; Lopez & Fortney 2016). This correlation indicates that hot Jupiters have extended atmospheres connected in some way to their close proximity to the star (see e.g. Burrows et al. 2000; Chabrier & Baraffe 2007; Batygin et al. 2011; Grunblatt et al. 2016). A similar conclusion can be reached for short-period sub-Neptunes (Pu & Valencia 2017; Millohland 2019), but not for distant super-puffs, because they have much cooler equilibrium temperatures and undergo negligible star-planet tidal dissipation.
Initiated by the preprint of Santerne et al. (2019), the low density of exoplanet HIP 41378 f, in particular, immediately raised much discussion. HIP 41378 f is mature (\(2.1^{+0.4}_{-0.3}\) Gyr; Lund et al. 2019) and has a long period (542 days) and low equilibrium temperature (300 K). Its low density (\(0.09\pm 0.02~{}\mathrm{g\,cm^{-3}}\)) puts this planet among the puffiest exoplanets known to date. Even though other super-puffs are known, most of them are likely young and/or have shorter periods (Lee & Chiang 2016). Instead of a radius inflation, Akinsanmi et al. (2020) propose that HIP 41378 f could be a standard Neptune-sized planet surrounded by an inclined opaque ring that would mimic the transit depth of an inflated planet. As no significant distortion is visible in the transit ingress and egress of HIP 41378 f, the hypothetical ring should be optically thick and seen roughly face on. This configuration would imply that the obliquity of the planet1 is nearly 90\({}^{\circ}\).
Footnote 1: Not to be confused with the stellar obliquity (i.e. the angle between the spin axis of the star and the orbit pole of a given planet). Throughout this article, the term obliquity is exclusively used for the planetary obliquity (i.e. the angle between the spin axis of the planet and its orbit pole).
The ring hypothesis was investigated by Piro & Vissapragada (2020) for other super-puff exoplanets. Good candidates are Kepler-87 c, Kepler-79 d, and Kepler-177 c, even though their moderate temperatures - as that of HIP 41378 f - do not allow for water ice to exist around them. Therefore, unlike Saturn's ring, their rings would need to be composed of porous rocky particles.
According to the results of Piro and Vissapragada (2020), HIP 41378 f is currently the best candidate for a ring. Its long period would protect a ring against destructive irradiation levels and a strong warp due to the stellar torque; it also results in negligible star-planet tidal dissipation, which means that no particular mechanism would be required for the planet to maintain a large obliquity2. The low eccentricity of HIP 41378 f also guarantees a small level of orbital perturbations for the ring particles.
Footnote 2: High-obliquity equilibrium states also exist for short-period planets (Millholland and Laughlin, 2019; Millholland and Spalding, 2020); however, because of tidal despinning and obliquity damping, their obliquity needs to be continuously forced through dynamical interactions involving several planets (see also Su and Lai, 2022, 2020).
In order to determine the planets' atmospheric properties and test the ring hypothesis, near-infrared transmission spectra have been acquired for Kepler-51 b and d (Libby-Roberts et al., 2020), Kepler-79 d (Chachan et al., 2020), and HIP 41378 f (Alam et al., 2022). These spectra ended up being featureless, ruling out clear, low-metallicity atmospheres. The ring hypothesis is therefore not contradicted for these planets, but flat spectra can also be produced by high-altitude hazes or high-metallicity atmospheres. In fact, convincing atmospheric models have been put forward for Kepler-51 b and d, as well as Kepler-79 d (see also Wang and Dai, 2019; Gao and Zhang, 2020; Ohno and Tanaka, 2021). Interestingly, these models of extended atmospheres appear to be inapplicable to HIP 41378 f as it is too massive (\(M=12\pm 3~{}M_{\oplus}\)), too cold, and too old.
The question of the possible physical composition of HIP 41378 f was explicitly tackled by Belkovski et al. (2022). The authors show that photoevaporation is not nearly enough to explain the extreme density disparity between planet f and other planets in the system. Moreover, the observed mass and radius of HIP 41378 f would require an envelope-to-core mass fraction larger than 75% together with a high entropy (e.g. produced by recent collisions). Such a massive envelope is unlikely from the perspective of planetary formation, as it would require runaway gas accretion to have started precisely during the dissipation of the gas disc, and planet HIP 41378 f may not be massive enough anyway to have triggered runaway accretion.
Hence, the ring hypothesis appears to be favoured for HIP 41378 f, and it may apply as well to a restricted number of other observed super-puffs. Tidal rings are confined below the Roche limit, very close to their host planets. As such, they are strongly coupled to the centrifugal bulge of the planets, and they directly materialise their equatorial planes. In order to produce a substantial increase in a planet's transit depth (i.e. a very noticeable super-puff), its ring must be oriented roughly in the sky plane. This means that the planet's spin axis must point roughly along the observer's direction; its obliquity is therefore \(\approx 90^{\circ}\) as proposed by Akinsanmi et al. (2020). Such an exotic configuration may seem questionable from a formation point of view. Because of the angular momentum acquired during gas accretion, gaseous planets are expected to form with low obliquities. The obliquities of the Solar System giant planets are therefore interpreted as strong tracers of their dynamical evolution, and much effort is put into understanding their origin (see e.g. Tremaine, 1991; Ward and Hamilton, 2004; Hamilton and Ward, 2004; Boue et al., 2009; Boue and Laskar, 2010; Morbidelli et al., 2012; Vokrouhlicky and Nesvorny, 2015; Rogoszinski and Hamilton, 2020, 2021; Saillenfest et al., 2020, 2021; Salmon and Canup, 2022; Rufu and Canup, 2022; Wisdom et al., 2022). In this context, the ring hypothesis for super-puffs would greatly benefit from an underlying mechanism that may be responsible for their unusual configuration. The existence of such a mechanism would not certify whether a given planet does possess a ring or not, but it would show whether known dynamical processes are able to (or are even likely to) produce the proposed configuration.
In the Solar System, a substantial tidal migration of moons has recently been observed to be at play around gaseous planets (see Lainey et al., 2009, 2017, 2020; Jacobson, 2022) - even though it involves mechanisms of energy dissipation that are vastly different from those responsible for the well-known rapid migration of our Moon (see e.g. Farhat et al., 2022). These results have strong implications for the orbital dynamics of moons around gaseous planets, but also for the gigayear-timescale dynamics of planetary spin axes. Indeed, moons affect the spin-axis precession rate of planets in a way that is intimately related to their distance (see e.g. Boue and Laskar, 2006). The migration of a moon is therefore accompanied by a variation in the planet's spin-axis precession rate. In turn, this variation can drive the planet into a so-called secular spin-orbit resonance, that is, a resonance between the planet's spin-axis precession and one harmonic of its orbital nodal precession. As a matter of fact, such resonances abound in multi-planetary systems. Provided that a planet has a substantially massive migrating moon, it may therefore be guaranteed to encounter one of these resonances sooner or later during its evolution. Once captured in resonance, the still ongoing migration of the moon produces a gradual tilting of the planet's spin axis (unless, as for the Earth, resonances are so numerous that they overlap massively; see Laskar and Robutel, 1993; Neron de Surgy and Laskar, 1997). This phenomenon is probably responsible for the \(27^{\circ}\) obliquity of Saturn (Saillenfest et al., 2021, 2021), and it is predicted to happen to Jupiter in the future (Saillenfest et al., 2020). It may also have played a role in the tilting of Uranus (Saillenfest et al., 2022).
When the planet's obliquity reaches \(\varepsilon\gtrsim 70^{\circ}\), however, regular moons are known to be unstable in some range of distance (Tremaine et al., 2009). Interestingly, the migration of a single moon makes the system converge to this unstable zone, putting a dramatic end to the tilting process (see Saillenfest and Lari, 2021; Saillenfest et al., 2022). At this point, the moon may be ejected or be destroyed below the planet's Roche limit, eventually forming a tidal disc of debris. In the latter case, the final state of the system is a ringed planet with very high obliquity. This final state recalls the exotic configuration proposed for super-puff exoplanets. It would therefore be valuable to determine whether this mechanism could apply to them and provide a plausible dynamical background to the ring hypothesis.
In this article, we aim to present a generic methodology to assess whether the migrating-moon mechanism can realistically produce a tilted ring around a given exoplanet. Even though the number of known distant super-puffs is small today, the future _PLATO_ mission (Rauer et al., 2014, 2016) will considerably increase our knowledge of the population of long-period exoplanets - including their masses through an intensive radial-velocity follow-up. In this context, we need efficient methods for a routine characterisation of the newly discovered planets and identification of the most interesting targets for follow up. For this reason, we design our methodology to be applicable even if the minimum amount of information about the planetary system is available (masses, periods, and sky-plane inclinations).
The article is organised as follows. In Sect. 2, we recall the basics of the tilting mechanism. In Sect. 3, we compute the probability density function of the dominant orbital precession frequencies of a planet, and we present an example of application to the super-puff exoplanet HIP 41378 f. From these results, we
estimate in Sect. 4 the mass and migration rate that a moon around this planet would need in order to trigger the full tilting mechanism. In Sect. 5, we check that the resonance is large enough to enable an adiabatic capture and tilting, and we illustrate this mechanism with numerical simulations. We then discuss our results in Sect. 6 and conclude in Sect. 7.
## 2 Basic mechanism
As shown by Saillenfest & Lari (2021), the tilting of a planet from a low obliquity \(\varepsilon\) up to \(\varepsilon\approx 90^{\circ}\) can be achieved on a gigayear timescale via the tidal migration of a moon. This process occurs through the adiabatic drift of the system along the centre of a secular spin-orbit resonance. In this section, we recall the physical quantities involved and the conditions required to trigger this process.
We write \(I\) the orbital inclination of the planet and \(\Omega\) its longitude of ascending node. We decompose the inclination dynamics of the planet in a quasi-periodic series truncated to \(N\) terms:
\[\zeta=\sin\frac{I}{2}\exp(i\,\Omega)=\sum_{j=1}^{N}S_{j}\exp[i\,\phi_{j}(t)]\,, \tag{1}\]
where \(S_{j}\) is a positive real constant, and \(\phi_{j}(t)=\nu_{j}\,t+\phi_{j}^{(0)}\) evolves linearly over time \(t\) with frequency \(\nu_{j}\). Resonance capture from a low obliquity is possible only for resonances with a harmonic having a negative frequency \(\nu_{j}\) such that \(|\nu_{j}|\geqslant p\), where
\[p=\frac{3}{2}\frac{\mathcal{G}M_{\star}}{a^{3}(1-e^{2})^{3/2}}\frac{J_{2}}{ \omega\lambda} \tag{2}\]
is the characteristic spin-axis precession rate of the planet. In this expression, \(\mathcal{G}\) is the gravitational constant, \(M_{\star}\) is the mass of the star, \(a\) and \(e\) are the semi-major axis and eccentricity of the planet on its orbit around the star, \(J_{2}\) is the second zonal gravity coefficient of the planet, \(\omega\) is its spin rate, and \(\lambda\) is its normalised polar moment of inertia. The parameters \(J_{2}\) and \(\lambda\) must be defined through the same normalising radius \(R\) (which is generally chosen as the equatorial radius of the planet).
The influence of a regular moon on the long-term spin-axis dynamics of the planet can be quantified by its non-dimensional 'mass parameter' \(\eta\) defined by
\[\eta=\frac{1}{2}\frac{m}{M}\frac{r_{\rm M}^{2}}{J_{2}R^{2}}\,, \tag{3}\]
where \(m\) is the mass of the moon, \(M\) is the mass of the planet, and \(r_{\rm M}\) is the following characteristic length3:
Footnote 3: \(r_{\rm M}\) is called ‘mid-point radius’ by Saillenfest & Lari (2021). It is sometimes defined as the Laplace radius in other publications, either with or without the leading factor 2.
\[r_{\rm M}^{5}=2\frac{M}{M_{\star}}J_{2}R^{2}\ a^{3}(1-e^{2})^{3/2}\,. \tag{4}\]
Under the hypothesis that the moon's mass ratio \(m/M\) is small (which does not necessarily imply that \(\eta\) is small), Saillenfest & Lari (2021) show that all resonances with a nodal harmonic having a negative frequency \(\nu_{j}\) verifying
\[p\leqslant|\nu_{j}|\leqslant p\frac{\eta}{2} \tag{5}\]
can allow the planet's obliquity to grow from \(\varepsilon=0^{\circ}\) to \(\varepsilon=90^{\circ}\). This condition is illustrated in Fig. 1. Knowing the harmonics \(\nu_{j}\) of the planet's orbital precession, Eq. (5) allows one to compute the minimum mass required for the moon to produce the tilting. As resonances converge to an unstable region, the moon is ultimately lost at the end of the tilting process (see Fig. 1).
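In practice, Eqs. (2)-(5) translate into a simple numerical recipe. The following Python sketch (ours; the function names are illustrative, and the commented call at the end uses assumed round numbers only) computes the characteristic precession rate, the length \(r_{\rm M}\), and the smallest moon-to-planet mass ratio compatible with a given harmonic \(\nu_{j}\).

```python
G = 6.674e-11   # gravitational constant, SI units (all quantities below are in SI units)

def precession_rate_p(M_star, a, e, J2, omega, lam):
    """Characteristic spin-axis precession rate p of the planet (Eq. 2)."""
    return 1.5 * G * M_star / (a**3 * (1.0 - e**2)**1.5) * J2 / (omega * lam)

def midpoint_radius(M, M_star, J2, R, a, e):
    """Characteristic length r_M (Eq. 4)."""
    return (2.0 * (M / M_star) * J2 * R**2 * a**3 * (1.0 - e**2)**1.5)**0.2

def min_moon_mass_ratio(nu, p, M, M_star, J2, R, a, e):
    """Smallest moon-to-planet mass ratio m/M allowing the full tilting through the
    harmonic nu, from Eqs. (3)-(5): the tilting requires p <= |nu| <= p*eta/2."""
    if abs(nu) < p:
        return None                         # left inequality of Eq. (5) cannot be satisfied
    eta_min = 2.0 * abs(nu) / p             # right inequality of Eq. (5)
    rM = midpoint_radius(M, M_star, J2, R, a, e)
    return 2.0 * eta_min * J2 * R**2 / rM**2   # Eq. (3) inverted for m/M

# Purely illustrative call (assumed round numbers, NOT the parameters of any real planet):
# ratio = min_moon_mass_ratio(nu=-1e-12, p=2e-13, M=8e25, M_star=2.3e30,
#                             J2=0.01, R=2.4e7, a=2.0e11, e=0.03)
```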
When Eq. (5) is verified, the adiabatic capture and tilting of the planet within a given resonance requires an adequate hierarchy of timescales. First, we introduce the timescale \(\tau\) of secular oscillations of the moon around its equilibrium 'Laplace plane' (see Tremaine et al., 2009) as \(\tau=2\pi/\kappa\), where
\[\kappa^{2}=\frac{9}{4}\frac{M_{\star}}{M}\frac{r_{\rm M}^{3}}{a^{3}(1-e^{2})^ {3}}\frac{\mathcal{G}M_{\star}}{a^{3}}\,. \tag{6}\]
An adiabatic capture in resonance requires that \(\tau\) is much shorter than the spin-axis precession timescale of the planet \(T=2\pi/p\); this condition is generally well verified in practice.
Then, a given observed planet may have been adiabatically tilted via a resonance only if the timescale \(T_{\rm lib}\) of libration inside the resonance is much smaller than the age of the system. For a given secular spin-orbit resonance, the value of \(T_{\rm lib}\) near the resonance centre can be computed as \(T_{\rm lib}=2\pi/\mu\), where
\[\mu^{2}=(p^{\prime})^{2}\left(\frac{\beta^{2}}{\sin^{2}\varepsilon_{0}}+\beta \sin\varepsilon_{0}\right)\,. \tag{7}\]
In this expression, \(\varepsilon_{0}\) is the planet's obliquity at the resonance centre and \(p^{\prime}\) is a modified version of \(p\) that takes into account the presence of the planet's moon (see Saillenfest & Lari, 2021). We define the non-dimensional variables \(\gamma=\rho_{1}/p^{\prime}\) and \(\beta=\rho_{2}/p^{\prime}\), where
\[\begin{split}\rho_{1}&=-\left(\nu_{k}-2\sum_{j=1}^ {N}\nu_{j}S_{j}^{2}\right)\,,\\ \rho_{2}&=-S_{k}\left(2\nu_{k}+\nu_{k}S_{k}^{2}-2 \sum_{j=1}^{N}\nu_{j}S_{j}^{2}\right)\,,\end{split} \tag{8}\]
and \(k\) is the index in Eq. (1) of the considered resonance. \(T_{\rm lib}\) depends on the distance of the moon through \(p^{\prime}\) and \(\varepsilon_{0}\). However, an upper bound for \(T_{\rm lib}\) is obtained at the time of resonance capture, for which \(\gamma^{2/3}+\beta^{2/3}=1\) (see Henrard & Murigande, 1987; Saillenfest et al., 2019). In this case, \(p^{\prime}\) is equal to
Figure 1: Level curves of the planet’s spin-axis precession rate (adapted from Saillenfest et al., 2022 for a moon mass-parameter \(\eta=20\)). If the planet is trapped in a secular spin-orbit resonance, the system evolves along one of these curves as the moon migrates. The condition in Eq. (5) corresponds to the pink region; the left and right inequalities are the green and red curves, respectively. In the blue area, the moon is unstable.
\(p^{\prime}=(\rho_{1}^{2/3}+\rho_{2}^{2/3})^{3/2}\), and the planet's obliquity at the centre of the resonance is
\[\cos\varepsilon_{0}=\gamma-\gamma^{1/3}+\sqrt{\gamma^{2}+\gamma^{2/3}-\gamma^{4/ 3}}\,. \tag{9}\]
Thanks to these expressions, we can compute \(T_{\rm lib}\) from Eq. (7) as a mere function of the planet's orbital dynamics in Eq. (1).
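For a given orbital series (Eq. 1) and resonant harmonic, this upper bound can be evaluated directly. A minimal sketch (ours) is given below; it assumes \(\rho_{1},\rho_{2}>0\), as is the case for the retrograde nodal harmonics of interest here.

```python
import numpy as np

def libration_timescale(nu, S, k):
    """Upper bound of the libration timescale T_lib (Eqs. 7-9), evaluated at the moment of
    resonance capture (gamma^(2/3) + beta^(2/3) = 1), from the orbital series of Eq. (1).
    nu, S: frequencies and amplitudes of the series; k: index of the resonant harmonic.
    Output in the same time unit as 1/nu."""
    nu, S = np.asarray(nu, dtype=float), np.asarray(S, dtype=float)
    sig = 2.0 * np.sum(nu * S**2)
    rho1 = -(nu[k] - sig)                                    # Eq. (8)
    rho2 = -S[k] * (2.0 * nu[k] + nu[k] * S[k]**2 - sig)     # Eq. (8)
    p_prime = (rho1**(2/3) + rho2**(2/3))**1.5               # value at resonance capture
    gamma, beta = rho1 / p_prime, rho2 / p_prime
    cos_e0 = gamma - gamma**(1/3) + np.sqrt(gamma**2 + gamma**(2/3) - gamma**(4/3))  # Eq. (9)
    sin_e0 = np.sqrt(1.0 - cos_e0**2)
    mu = p_prime * np.sqrt(beta**2 / sin_e0**2 + beta * sin_e0)                      # Eq. (7)
    return 2.0 * np.pi / mu
```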
## 3 Orbital precession modes of the planet
To apply this mechanism to a given planet, we need to know its orbital precession spectrum, which depends on planet-planet mutual interactions. However, the masses and orbital elements of exoplanets are generally not well known. For given parameters and their uncertainties, the most simple way to explore the variety of possible long-term orbital solutions is to use the Lagrange-Laplace system (see e.g. Murray & Dermott, 1999).
### The Lagrange-Laplace proper modes
The Lagrange-Laplace system is a secular theory at second order in eccentricity and inclination. As such, it assumes that all eccentricities and inclinations are small and it neglects the long-term influence of mean-motion resonances. Small mutual inclinations are indeed strongly favoured in multi-planetary systems in which most planets are observed to transit their star. This is the case of HIP 41378, around which the transits of five planets are observed (Vanderburg et al., 2016). Eccentricities are also expected to be small in multi-planetary systems for stability reasons. Moreover, according to the statistical distribution of multi-planetary systems (Xie et al., 2016) and to theoretical arguments about chaotic diffusion (which leads to the statistical equipartition of angular momentum deficit; see Laskar & Petit, 2017), planets having small mutual inclinations tend to have small eccentricities, and vice versa. Hence, the use of the Lagrange-Laplace theory is generally justified in this regard for multi-planetary systems. Neglecting the long-term effect of mean-motion resonances may seem more questionable, as many pairs of exoplanets are observed to be close to important resonances (see e.g. Fabrycky et al., 2014). Yet, the strongest mean-motion resonances in planetary systems - and those enabling smooth captures - are of eccentricity type. As such, they mainly affect eccentricities. Here, instead, we are only interested in the inclination degree of freedom of the planets because it is by far the main driver of their long-term spin-axis dynamics. The planets' eccentricity dynamics only enter into play at order three and beyond (see Sailenfest et al., 2019), so mean-motion resonances can safely be ignored in this analysis.
As above, we describe the nodal precession and inclination dynamics of a planet \(k\) in the planetary system through the complex variable
\[\zeta_{k}=\sin\frac{I_{k}}{2}\exp(i\,\Omega_{k})\,, \tag{10}\]
where \(I_{k}\) is the orbital inclination of planet \(k\) and \(\Omega_{k}\) is its longitude of ascending node. The Lagrange-Laplace system gives the linear equation of motion
\[\frac{\mathrm{d}\zeta}{\mathrm{d}t}=iB\,\zeta\,, \tag{11}\]
in which \(\zeta\) is the vector containing the \(\zeta_{k}\) variable of all planets and \(B\) is a constant matrix that only depends on the masses and semi-major axes of the planets (see e.g. Laskar & Robutel, 1995). The solution of this equation for a given planet \(k\) has the form of a quasi-periodic series as in Eq. (1):
\[\zeta_{k}(t)=\sum_{j=1}^{N_{p}}S_{j}\exp\left[i\left(\nu_{j}t+\phi_{j}^{(0)} \right)\right]\,, \tag{12}\]
where the number of terms \(N\) is equal to the number of planets \(N_{p}\) in the system. Equation (12) is a linear combination of proper modes whose frequencies \(\nu_{j}\) are the eigenvalues of the matrix \(B\). As \(B\) only depends on the masses and semi-major axes of the planets, this is also the case of the frequencies \(\nu_{j}\). Because of the conservation of total angular momentum, one of the frequencies \(\nu_{j}\) is identically equal to zero; the related constant term in Eq. (12) gives the orientation of the system's invariant plane.
Thanks to the fast computation of the solution of the Lagrange-Laplace system (which amounts to a mere matrix inversion), millions of trials can be performed at virtually no cost. In order to explore the distribution of possible values for the frequencies \(\nu_{j}\), the first step is to draw the masses and semi-major axes of the \(N_{p}\) planets from their respective statistical distributions - which represent our knowledge of their values. A similar approach was followed by Becker & Adams (2016) in their study of the compact multi-planetary systems observed by _Kepler_. Each sequence of masses and semi-major axes for the \(N_{p}\) planets represent a possible realisation of the planetary system. In case the mass of a given planet has not been measured, a broad distribution of mass can be adopted (e.g. a uniform distribution in a given interval, or a law drawn for an assumed mass-radius relationship; see below). From a large number of realisations of the planetary system, a histogram for each frequency \(\nu_{j}\) can be computed. These histograms define the possible locations of secular spin-orbit resonances given our current knowledge of the planetary system.
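As an illustration of this procedure for a single realisation, the matrix \(B\) can be filled with the classical Lagrange-Laplace coefficients (written below in the standard form of Murray & Dermott 1999) and diagonalised numerically. The sketch is ours, and the numerical values at the end are purely illustrative placeholders, not the measured parameters of any system.

```python
import numpy as np
from scipy.integrate import quad

def laplace_b32_1(alpha):
    """Laplace coefficient b_{3/2}^{(1)}(alpha)."""
    f = lambda psi: np.cos(psi) / (1.0 - 2.0 * alpha * np.cos(psi) + alpha**2)**1.5
    return quad(f, 0.0, 2.0 * np.pi)[0] / np.pi

def nodal_proper_modes(masses, a, m_star):
    """Eigenfrequencies of the Lagrange-Laplace inclination matrix B of Eq. (11),
    written in the classical form (e.g. Murray & Dermott 1999).
    masses, m_star in solar masses; a in au; output in rad/yr (one eigenvalue is ~0)."""
    a = np.asarray(a, dtype=float)
    n = 2.0 * np.pi * np.sqrt(m_star / a**3)        # Keplerian mean motions
    N = len(masses)
    B = np.zeros((N, N))
    for j in range(N):
        for k in range(N):
            if j == k:
                continue
            alpha = min(a[j] / a[k], a[k] / a[j])
            abar = alpha if a[k] > a[j] else 1.0     # external vs internal perturber
            c = 0.25 * n[j] * masses[k] / (m_star + masses[j]) * alpha * abar * laplace_b32_1(alpha)
            B[j, k] = c
            B[j, j] -= c
    return np.sort(np.linalg.eigvals(B).real)

# Purely illustrative placeholder values (NOT the measured parameters of Table 1):
m_earth = 3.0e-6
masses = np.array([7.0, 4.0, 7.0, 13.0, 12.0, 12.0]) * m_earth
a = np.array([0.13, 0.20, 0.31, 0.88, 1.06, 1.37])
print(np.degrees(nodal_proper_modes(masses, a, m_star=1.16)) * 3600.0)   # arcsec/yr
```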
In practice, the largest values of the Lagrange-Laplace matrix \(B\) in Eq. (11) are often located along its diagonal (meaning that the planetary system is only weakly coupled); this implies that each planet \(k\) has its own dominant proper mode, which appears in Eq. (12) as the term with largest amplitude. The frequency of the dominant proper mode of planet \(k\) is usually noted \(s_{k}\). In the context of the Lagrange-Laplace approximation, the quasi-periodic series in Eq. (12) contains exactly \(N=N_{p}\) terms and the frequencies \(\nu_{j}\) are each equal to one of the \(s_{k}\). More generally, the orbital evolution of any planet in a stable system can be written as in Eq. (12), but where \(N\) tends to infinity and each harmonic \(\nu_{j}\) is a linear combination of the fundamental frequencies of the system (see Sect. 2). The first few strongest harmonics of the series are however proper modes given by the Lagrange-Laplace approximation; hence, the analysis presented here can be thought of as the dominant component of a more general theory.
While building the histogram for each proper mode \(s_{k}\), a complication may arise. Indeed, if the masses and semi-major axes of the planets have large uncertainties, the distributions of the various frequencies may overlap. In this case, identifying each eigenvalue \(\nu_{j}\) of matrix \(B\) as the correct proper mode \(s_{k}\) requires some caution. As the hierarchy of proper modes depends on the planetary system considered, a specific identification process is required. As an example, we subsequently present the case of the HIP 41378 system.
### Application to the HIP 41378 system
HIP 41378 is a bright F-type star4, which harbours at least five planets called b, c, d, e, and f (Vanderburg et al., 2016). Dynamical analysis reveals that planets b and c are slightly off the 2:1 mean-motion resonance, similarly to many _Kepler_ planets (see e.g. Fabrycky et al., 2014). A tentative detection of a sixth planet - planet g - is reported in the preprint of Santerne et al. (2019) close to the 2:1 mean-motion resonance with planet c. As of today, only planets b, c, and f have been observed during successive transits (see Vanderburg et al., 2016; Becker et al., 2019; Bryant et al., 2021; Alam et al., 2022) and unambiguously detected in radial velocity (Santerne et al., 2019). Therefore, only planets b, c, and f have secured periods and masses.
Footnote 4: Also known as K2-93 and EPIC 211311380.
Two transits of planet d have been observed by the _K2_ mission but they are separated by a three-year observation gap, leading to a discrete set of possible periods (Becker et al., 2019). From stability considerations, and thanks to additional observations by _TESS_, this discrete set is further reduced to only two likely values (278 and 371 days; see Berardo et al., 2019; Lund et al., 2019; Grouffal et al., 2022). In contrast, only one transit of planet e has been observed so far, so its period suffers from large uncertainties. The best period estimate for planet e is \(260^{+160}_{-40}\) days (Lund et al., 2019). The period of \(369\pm 10\) days obtained by Santerne et al. (2019) is compatible with this estimate, and it results in a mass of \(12\pm 5\)\(M_{\oplus}\) for planet e. The mass of planet d, however, is unknown.
As explained in Sect. 1, planet HIP 41378 f is a paradigmatic case of distant super-puff. Its period is about 542 days, and it has a radius of \(9.2\pm 0.1\)\(R_{\oplus}\) and mass \(12\pm 3\)\(M_{\oplus}\), giving it a bulk density of \(0.09\pm 0.02\)\(\mathrm{g\,cm^{-3}}\) (Santerne et al., 2019). Under the ring hypothesis, current data suggest a planet with radius \(R=3.7\pm 0.3\)\(R_{\oplus}\) surrounded by a ring with radius \(2.6\pm 0.2\)\(R\) and inclination \(25\pm 4\) deg from the sky plane (Akinsanmi et al., 2020). This new planetary radius yields a bulk planet density of \(1.2\pm 0.4\)\(\mathrm{g\,cm^{-3}}\), similar to that of Uranus. The hypothetical equatorial ring provides an indirect measure of the obliquity of the planet, namely5 \(\varepsilon=92\pm 7\) deg.
Footnote 5: The ring obtained by Akinsanmi et al. (2020) is inclined by \(i_{t}=25\pm 4\)° from the sky plane and rotated by \(\theta=95\pm 1\)° from the transit direction. The spin-orbit obliquity \(\varepsilon\) of the planet is given by \(\cos\varepsilon=\cos I\cos i_{t}+\sin I\sin i_{t}\cos\theta\), where \(I=89.97\pm 0.01\)° is the orbital inclination of HIP 41378 f.
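As a quick check on the bulk densities quoted above, the short Python snippet below (a minimal sketch; the Earth mass and radius values are standard constants not quoted in the text) recovers both the anomalously low super-puff density and the ring-hypothesis density from the masses and radii given above.

```python
import numpy as np

M_EARTH_G, R_EARTH_CM = 5.972e27, 6.371e8          # Earth mass (g) and radius (cm)

def bulk_density(mass_earths, radius_earths):
    m = mass_earths * M_EARTH_G
    r = radius_earths * R_EARTH_CM
    return m / (4.0 / 3.0 * np.pi * r**3)           # g cm^-3

print(bulk_density(12.0, 9.2))   # ~0.09 g cm^-3: the anomalously low super-puff value
print(bulk_density(12.0, 3.7))   # ~1.3 g cm^-3: close to the ~1.2 g cm^-3 ring-hypothesis value
```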
In order to compute the orbital precession modes of HIP 41378 f, our choice of prior for the masses and semi-major axes of the planets must reflect our partial knowledge of the HIP 41378 system. We sort the planets by increasing orbital periods such that the indexes \(k=(1,2,3,4,5,6)\) correspond to the planets (b, c, g, d, e, f). We assume all masses and semi-major axes to have Gaussian distributions centred on the best-fit values of Santerne et al. (2019) given in Table 1. Planet d needs a specific treatment: even though its period has tentatively been confirmed by Grouffal et al. (2022), it has still not been detected by the radial velocity method, so its mass is highly uncertain. We choose to remain as agnostic as possible as regards its mass, and draw it from a Gaussian fit to the mass-radius distribution of all known exoplanets having a radius between 3 and 4 \(R_{\oplus}\) and a mass measurement. From the Nasa Exoplanet Archive6 on date 2022-11-23, we obtain a central mass value of 12.7 \(M_{\oplus}\) and a standard deviation of 6.0 \(M_{\oplus}\). The high tail of this distribution may not be compatible with radial velocity measurements; yet, this broad interval gives us confidence that the actual mass of planet d is contained in our analysis. The low tail of the distribution (from which we cut the portion \(<0.1\)\(M_{\oplus}\)) corresponds to cases in which planet d barely exists at all. The system may also contain additional massive planets that have not been discovered yet. Hence, we stress that the analysis below represents our current knowledge of the system and it may need to be revisited in the future.
Footnote 6: [https://exoplanetarchive.ipac.caltech.edu](https://exoplanetarchive.ipac.caltech.edu)
For the HIP 41378 system as considered in Table 1, a look at the diagonal and off-diagonal values in the Lagrange-Laplace matrix \(B\) reveals a peculiar hierarchical configuration. The system is composed of two weakly coupled subsystems: _i)_ the inner subsystem (planets 1-2-3) is characterised by planets 1 and 3 interacting with each other and affecting the motion of the low-mass planet 2; and _ii)_ the outer subsystem (planets 4-5-6) is made of the two strongly coupled planets 4 and 5, interacting as a whole with planet 6.
This peculiar hierarchy can be visualised by solving the Lagrange-Laplace system a first time using reasonable values for the parameters. The exact values of the parameters do not matter for now; this first step only serves as a guide to identify the frequencies and choose an adequate naming convention. Figure 2 shows an example obtained from the nominal masses and semi-major axes of the planets. We name the proper frequencies according to their qualitative role in the dynamics: \(s_{1}\) is the precession frequency of planets 1 and 3 about their total angular momentum vector; \(s_{2}\) is the precession frequency of the low-mass planet 2 under the action of planets 1 and 3; \(s_{3}\) is the slow rigid precession of the inner and outer subsystems (planets 1-2-3 and 4-5-6); \(s_{4}\) is the precession frequency of planets 4 and 5 about their total angular momentum vector; \(s_{5}\) is identically zero; \(s_{6}\) is the precession frequency of planet 6 and planets 4-5 about their total angular momentum vector. We stress that all precession modes actually appear in the dynamics of all planets (see Eq. 12), but this qualitative description gives us a good idea of the relative importance of each term in the orbital evolution of each planet.
In order to compute the probability density function of each frequency \(s_{j}\) given our current knowledge of the planetary system, we drew \(10^{6}\) realisations of the star's mass and planets' masses and semi-major axes. For each of these realisations, we computed the eigenvalues of the Lagrange-Laplace matrix \(B\) and identified them to the frequencies \(s_{j}\) according to their qualitative role described above. In practice, this identification can be made by choosing fictitious initial conditions \(\zeta_{k}(t=0)\) designed to magnify the specific term we are looking for. For instance, the frequency \(s_{3}\) would appear as strongly dominant for all planets if we set \(\zeta_{k}(t=0)=0\) for \(k=\{1,2,3\}\) and \(\zeta_{k}(t=0)=\sqrt{2}/2\) for \(k=\{4,5,6\}\). Then, one may identify \(s_{2}\) as the dominant term in the solution of planet 2 by setting \(\zeta_{2}(t=0)=\sqrt{2}/2\) and \(\zeta_{k}(t=0)=0\) for \(k\neq 2\), etc. This way, all frequencies can be correctly identified one by one. Moreover, we remind the reader that the frequencies \(s_{j}\) only depend on the masses and semi-major axes of the planets, so they do not depend on the fictitious initial conditions chosen here, and they are not plagued with our ignorance of the actual orientations of the planets' orbital planes.
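For illustration, the sketch below reproduces this Monte Carlo procedure in Python. It uses the classical Laplace-Lagrange inclination matrix (e.g. Murray & Dermott 1999) as a stand-in for Eq. (11), adopts the Gaussian priors of Table 1, assumes a stellar mass of about \(1.16\,M_{\odot}\) (a value not quoted in this section), and draws far fewer realisations than the \(10^{6}\) used here; the identification of the eigenvalues through fictitious initial conditions is omitted, and the frequencies are simply sorted.

```python
import numpy as np
from scipy.integrate import quad

def laplace_coeff(s, j, alpha):
    # Laplace coefficient b_s^(j)(alpha) = (1/pi) int_0^{2pi} cos(j psi) / (1 - 2 alpha cos psi + alpha^2)^s dpsi
    f = lambda psi: np.cos(j * psi) / (1.0 - 2.0 * alpha * np.cos(psi) + alpha**2) ** s
    val, _ = quad(f, 0.0, 2.0 * np.pi)
    return val / np.pi

def inclination_matrix(masses, smas, mstar):
    """Classical Laplace-Lagrange nodal matrix (rad/yr); masses in Msun, semi-major axes in au."""
    npl = len(masses)
    n_mean = 2.0 * np.pi * np.sqrt(mstar / smas**3)     # mean motions from Kepler's third law (rad/yr)
    B = np.zeros((npl, npl))
    for j in range(npl):
        for k in range(npl):
            if j == k:
                continue
            if smas[j] < smas[k]:                        # perturber k is exterior
                alpha, alpha_bar = smas[j] / smas[k], smas[j] / smas[k]
            else:                                        # perturber k is interior
                alpha, alpha_bar = smas[k] / smas[j], 1.0
            term = 0.25 * n_mean[j] * masses[k] / (mstar + masses[j]) \
                   * alpha * alpha_bar * laplace_coeff(1.5, 1, alpha)
            B[j, k] = term
            B[j, j] -= term
    return B

# Gaussian priors from Table 1 (planets b, c, g, d, e, f); the stellar mass is an assumed value.
MEARTH = 3.0e-6                                  # Earth mass in solar masses (approximate)
m_mu = np.array([6.89, 4.4, 7.0, 12.7, 12.0, 12.0]) * MEARTH
m_sd = np.array([0.88, 1.1, 1.5,  6.0,  5.0,  3.0]) * MEARTH
a_mu = np.array([0.1283, 0.2061, 0.3227, 0.88, 1.06, 1.37])
a_sd = np.array([0.0015, 0.0024, 0.0036, 0.01, 0.03, 0.02])
M_STAR = 1.16                                    # Msun (assumption, for illustration only)

rng = np.random.default_rng(1)
to_arcsec = np.degrees(1.0) * 3600.0             # rad/yr -> arcsec/yr
samples = []
for _ in range(1000):                            # 10^6 realisations in the text; reduced here
    m = np.clip(rng.normal(m_mu, m_sd), 0.1 * MEARTH, None)   # cut the low-mass tail
    a = rng.normal(a_mu, a_sd)
    eig = np.sort(np.linalg.eigvals(inclination_matrix(m, a, M_STAR)).real)
    samples.append(eig * to_arcsec)              # six nodal frequencies, one of which is ~0
samples = np.array(samples)
print("median sorted frequencies (arcsec/yr):", np.round(np.median(samples, axis=0), 1))
```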
Figure 3 shows the frequency distribution for each inclination proper mode obtained from our \(10^{6}\) realisations of the system. Frequency \(s_{4}\) has a broad distribution due to the large uncertainties in the masses of planets d and e. Frequency \(s_{3}\), on the contrary, is very peaked, which means that the hierarchy of the two subsystems is a robust property of the HIP 41378 system - unless it contains additional massive planets yet to be discovered. In order to quantify the relative importance of each parameter in the value of each frequency, a correlation analysis can be performed on our large sample of realisations. Here, the small spread in frequency \(s_{3}\) is found to be essentially due to the uncertainty in the mass of planet d (see Appendix A).
As illustrated in Fig. 2, frequency \(s_{3}\) is expected to have a strong contribution to the motion of all planets. Next to it, the dominant inclination proper mode of planet f has frequency \(s_{6}\). This frequency would produce a strong (if not the strongest) secular spin-orbit resonance for this planet. Figure 3 shows that despite observational uncertainties, frequency \(s_{6}\) has a relatively peaked distribution. Its most probable value is \(-136\)\(\arcsec\) year\({}^{-1}\), with \(68.3\%\) occurrences within \([-181,-97]\)\(\arcsec\) year\({}^{-1}\), \(95.4\%\) occurrences within \([-241,-65]\)\(\arcsec\) year\({}^{-1}\), and \(99.7\%\) occurrences within \([-405,-35]\)\(\arcsec\) year\({}^{-1}\). As shown in Appendix A, the value of \(s_{6}\) is essentially set by the mass of the perturbing planet e, with a Spearman correlation coefficient \(\rho_{S}\approx-0.8\). The value of \(s_{6}\) is only weakly correlated (\(|\rho_{S}|\lesssim 0.3\)) with the parameters of planet f itself. This low correlation allows us to investigate different values for the frequency \(s_{6}\) independently of the mass and semi-major axis of planet f (that we fix, from now on and in the rest of the article, to their nominal values in Table 1).
## 4 Properties of the hypothetical former moon
Knowing the dominant harmonics in the orbital precession of a planet, Eq. (5) gives the conditions required to tilt the planet and form a ring through the tidal migration and disruption of a moon. In addition to the mass and orbital elements of the planet, Eq. (5) depends on the planet's normalising radius \(R\), its oblateness coefficient \(J_{2}\), and the product \(\omega\lambda\). For a given super-puff exoplanet, we may assume that the planet's anomalous density is entirely due to the existence of a ring; therefore, the value of \(R\) can be chosen so as to produce a conventional bulk density (e.g. that of Uranus or Neptune). In the specific case of HIP 41378 f, Akinsanmi et al. (2020) show that, under the ring hypothesis, its true radius would be \(3.7^{+0.3}_{-0.2}\)\(R_{\oplus}\). Hence, we adopt the value \(R=3.7\)\(R_{\oplus}\) below as our normalising radius.
For given values of the parameters \(J_{2}\) and \(\omega\lambda\), Eq. (5) provides a direct relation between the frequency \(\nu_{j}\) of the resonance and the minimum mass \(m_{\rm min}\) of the former moon. Even though \(J_{2}\) and \(\omega\lambda\) are completely unknown for exoplanets, we know that they are related, and in first approximation \(J_{2}\propto\omega^{2}\) (planets spinning faster are more flattened; see e.g. Chandrasekhar, 1969). For a given moon mass \(m\), the condition \(|\nu_{j}|\leqslant p/2\) in Eq. (5) corresponds to a power law \(J_{2}\propto\omega^{5/2}\). Because of the coincidental near match between these two exponents (2 and \(5/2\)), our total ignorance of \(J_{2}\) and \(\omega\lambda\) does not affect much our estimate of \(m_{\rm min}\): we may just set \(J_{2}\) and \(\omega\lambda\) to realistic values (e.g. obtained from the Solar System planets) and be assured to obtain relevant results - unless the planet has a particularly exotic internal structure which violates \(J_{2}\propto\omega^{2}\).
Table 1: Parameters of planets in the HIP 41378 system used in this article.

| \(k\) | name | \(M_{k}\) (\(M_{\oplus}\)) | \(P_{k}\) (day) | \(a_{k}\) (au) | \(I_{k}\) (\({}^{\circ}\)) |
| --- | --- | --- | --- | --- | --- |
| 1 | b | \(6.89\pm 0.88\) | \(15.57208\pm 0.00002\) | \(0.1283\pm 0.0015\) | \(88.75\pm 0.13\) |
| 2 | c | \(4.4\pm 1.1\) | \(31.70603\pm 0.00006\) | \(0.2061\pm 0.0024\) | \(88.48\pm 0.07\) |
| 3 | g | \(7.0\pm 1.5\) | \(62.06\pm 0.32\) | \(0.3227\pm 0.0036\) | \(88\) |
| 4 | d | \(12.7\pm 6.0\) | \(278.3618\pm 0.0005\) | \(0.88\pm 0.01\) | \(89.80\pm 0.02\) |
| 5 | e | \(12\pm 5\) | \(369\pm 10\) | \(1.06\pm 0.03\) | \(89.84\pm 0.07\) |
| 6 | f | \(12\pm 3\) | \(542.0798\pm 0.0002\) | \(1.37\pm 0.02\) | \(89.97\pm 0.01\) |
Figure 2: Example of inclination evolution of the six planets in the HIP 41378 system. In this example, the Lagrange-Laplace equation is solved using the nominal masses and semi-major axes of all planets given in Table 1. The initial conditions \(\zeta_{k}\) are set from the nominal inclinations \(I_{k}\) (all assumed to be \(I_{k}\leqslant 90^{\circ}\)) and random longitudes of node \(\Omega_{k}\) in a 0.2\(\arcsec\)-wide interval; this choice is commented on in Sect. 5.
This property is verified in Appendix B in the case of planet HIP 41378 f. As the mass and radius proposed by Akinsanmi et al. (2020) for HIP 41378 f are relatively close to those of Uranus, we choose to apply Eq. (5) using the parameters \(J_{2}\) and \(\omega\lambda\) of Uranus (see e.g. Yoder 1995).
Independently of the resonance considered, Eq. (5) can be fulfilled only if the mass parameter \(\eta\) of the moon is \(\eta\geq 2\). Using the \(J_{2}\) value of Uranus, this condition translates into \(m/M\geq 1.2\times 10^{-4}\). This is the absolute minimum mass that the former moon of HIP 41378 f must have had. Beyond this floor, the minimum mass \(m_{\rm min}\) needed to tilt the planet is proportional to the frequency \(\nu_{j}\) of the resonance considered. The top horizontal axis in Fig. 3 shows the values of \(m_{\rm min}\) computed from Eq. (5) using the parameters \(J_{2}\) and \(\omega\lambda\) of Uranus (the ticks start at \(1.2\times 10^{-4}\) and go from right to left).
The characteristic spin-axis precession rate of HIP 41378 f computed from Eq. (2) is \(p\approx 25\arcsec\) year\({}^{-1}\). According to the left inequality in Eq. (5), this value almost certainly rules out a resonance with frequency \(s_{3}\), because frequency \(s_{3}\) sharply peaks at \(s_{3}=-15.7\arcsec\) year\({}^{-1}\) (see Fig. 3). The fact that \(p>|s_{3}|\) means that the \(s_{3}\) resonance is located in the green portion of Fig. 1; therefore no capture from a low obliquity is possible in this resonance whatever the mass of the moon. Frequency \(s_{6}\), on the contrary, is the closest resonance reachable by HIP 41378 f. This resonance is expected to be strong for planet f, if not the strongest (see Sect. 3). Figure 3 shows that a capture and full tilting within the \(s_{6}\) resonance requires a moon with minimum mass ratio ranging between about \(2\times 10^{-4}\) and \(10\times 10^{-4}\). This corresponds to an absolute mass ranging roughly between Triton's mass and the mass of our Moon, respectively. More precisely, when the parameters \(J_{2}\) and \(\omega\lambda\) of Uranus are assumed for HIP 41378 f, the value of frequency \(s_{6}=-136^{+101}_{-269}\) '' year\({}^{-1}\) obtained in Sect. 3 translates into a minimum moon mass \(m_{\rm min}/M=6^{+13}_{-5}\times 10^{-4}\) (\(3\sigma\) uncertainty).
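To make this mass range concrete, the few lines below convert the quoted mass ratios into absolute masses using the nominal planet mass of \(12\,M_{\oplus}\) and compare them to Triton and the Moon; the satellite masses are standard values assumed here, not quoted in the text.

```python
M_EARTH_KG = 5.972e24
M_PLANET = 12.0 * M_EARTH_KG                 # nominal mass of HIP 41378 f
M_TRITON, M_MOON = 2.14e22, 7.35e22          # kg, standard values (assumed)

for ratio in (2e-4, 10e-4):
    m_sat = ratio * M_PLANET
    print(f"m/M = {ratio:.0e}: {m_sat:.2e} kg "
          f"({m_sat / M_TRITON:.2f} Triton masses, {m_sat / M_MOON:.2f} lunar masses)")
```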
This mass range seems realistic when viewed in the context of the regular moons of the Solar System giant planets. For comparison, the moon-to-planet mass ratio of Titan is \(2\times 10^{-4}\), and the summed masses of the largest moons of Jupiter and Uranus yield ratios of about \(2\times 10^{-4}\) and \(1\times 10^{-4}\), respectively. This similarity among planets motivated the work of Canup & Ward (2006), who found that the formation mechanism of moons around the Solar System giant planets may naturally lead to a common mass scaling, with final mass ratios of a few times \(10^{-4}\). Yet, these results do not rule out the existence of larger moons, either because of differing external conditions during their formation, or because of different formation processes (see e.g. the discussion by Sailenfest et al. 2022).
In order to fully incline the planet starting from a low obliquity, the distance that the migrating moon needs to cover depends on the resonance considered, but Fig. 1 shows that one can expect in general a migration from \(a_{\rm m}\approx 0.5\,r_{\rm M}\) to \(1\,r_{\rm M}\). Using the \(J_{2}\) value of Uranus, Eq. (4) gives a characteristic length \(r_{\rm M}\approx 11\,R\) for planet HIP 41378 f, which implies that the moon would need to migrate from roughly 5 to 10 \(R\). Given that \(r_{\rm M}\) is proportional to \(J_{2}^{1/5}\), other realistic values of \(J_{2}\) may change these distances by a small amount (see discussion in Appendix B).
The HIP 41378 system is \(2.1^{+0.4}_{-0.3}\) Gyr old (Lund et al. 2019). As the whole tilting mechanism must have been completed before today, the required migration range for the moon can be translated into a minimum migration rate. In the case of HIP 41378 f, we obtain a migration rate of about 6 cm year\({}^{-1}\) on average. This velocity is comparable to the Moon's migration rate from the Earth (Williams & Boggs 2016), and about half the migration rates of Ganymede from Jupiter (Lainey et al. 2009) or Titan from Saturn (Lainey et al. 2020). In order to power this migration through tidal dissipation within the planet, classical formulas with constant parameters (see e.g. Efroimsky & Lainey 2007) imply that the planet's dissipation coefficient needs to be higher than \(k_{2}/Q\approx 3\times 10^{-5}\) for a moon mass \(m/M=2\times 10^{-4}\), and higher than \(k_{2}/Q\approx 6\times 10^{-6}\) for a moon mass \(m/M=10^{-3}\). For comparison, the value measured for Jupiter's satellite Io is \(k_{2}/Q=(1.102\pm 0.203)\times 10^{-5}\) (Lainey et al. 2009), and the value measured globally for Saturn's main satellites is \(k_{2}/Q=(1.59\pm 0.74)\times 10^{-4}\) (Lainey et al. 2017) with a large spread for individual moons extending to much higher values (see Lainey et al. 2020; Jacobson 2022).
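As a quick sanity check on the order of magnitude quoted above, the few lines below (a minimal sketch using the values adopted in the text and the standard Earth radius) recover the average migration rate from the 5-10 planetary-radii migration range and the 2.1 Gyr age of the system.

```python
R_EARTH_CM = 6.371e8                      # Earth radius in cm
R_PLANET = 3.7 * R_EARTH_CM               # adopted normalising radius of HIP 41378 f
delta_a = (10.0 - 5.0) * R_PLANET         # moon migrates from about 5 R to 10 R
AGE_YR = 2.1e9                            # age of the system in years
print(delta_a / AGE_YR, "cm per year")    # ~5.6 cm/yr, consistent with the ~6 cm/yr quoted above
```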
## 5 Adiabatic resonance capture
The analysis above shows that when assuming realistic values for the unknown parameters \(J_{2}\) and \(\omega\lambda\), the constraints obtained for the planet HIP 41378 f and its hypothetical former moon match well the properties expected for giant planets and moons (i.e. distance, mass, migration rate, and tidal dissipation), at least when viewed in the context of the Solar System. Yet, in order for a planet to be captured and adiabatically tilted within a given resonance, this resonance must be large enough. The width of secular spin-orbit resonances scales as the square root of the amplitude of the term in the orbital series (see Eq. 1). The \(s_{6}\) term is expected to be among the dominant terms for planet HIP 41378 f, but its amplitude may still be small, depending on the mutual inclinations between the planets' orbital planes. In order to compute the mutual inclinations of the planets, we need their orbital inclinations \(I_{k}\) and longitudes of ascending nodes \(\Omega_{k}\).
As shown in Table 1, the orbital inclinations \(I_{k}\) of transiting planets with respect to the sky plane are tightly constrained from observations, apart from the mirror degeneracy with respect to \(90^{\circ}\).
Figure 3: Probability density of the inclination proper modes of the HIP 41378 system. Histograms are built from \(10^{6}\) realisations of the Lagrange-Laplace system with the mass and semi-major axis uncertainties in Table 1. The histogram for frequency \(s_{4}\) has a long tail extending beyond the left border of the figure (with 99.7% occurrences above \(-1850\) ′′ year\({}^{-1}\) and 95.4% above \(-1000\) ′′ year\({}^{-1}\)). The histogram for frequency \(s_{3}\) peaks above the top border of the figure. Frequency \(s_{5}\) is identically equal to zero from the conservation of angular momentum. The upper axis shows the minimum moon mass needed for HIP 41378 f to be fully tilted through a resonance with a given frequency value (see Sect. 4).
As for the longitudes of nodes \(\Omega_{k}\) in the sky plane, they are not constrained from transit photometry, but we know that their values are likely to be close to each other. Indeed, for a given set of orbital inclinations \(I_{k}\), mutual inclinations between the planets' orbital planes are minimum if their longitudes of node \(\Omega_{k}\) are equal. As a general rule, low mutual inclinations minimise the planets' orbital excitation, and a low orbital excitation is expected in multi-planetary systems for stability reasons.
In systems observed by the transit method, low mutual inclinations are expected also because they maximise the probability of observing several transiting planets. Gravitational interactions produce a precession of the planets' orbital planes, possibly making some of them evolve in and out of transit configuration (see e.g. Becker & Adams, 2016). Using the Lagrange-Laplace theory, it is straightforward to compute the fraction of time that a planet spends in and out of transit configuration (see e.g. Fig. 2). In the HIP 41378 system as described in Table 1, only the innermost planet may possibly transit 100% of the time, even if we set all the \(\Omega_{k}\) values of the planets to be equal. Due to orbital precession, the probability to observe five transiting planets (as today) is 30% at best, and the probability to observe six is lower than 5%. As such, the HIP 41378 system would not be classified as 'continually mutually transiting' (Becker & Adams, 2016, 2017).
The level of orbital excitation of a planetary system can be quantified as a function of the dispersion of their longitudes of ascending node \(\Omega_{k}\) in the sky plane. As shown in Appendix C, allowing for just a few degrees dispersion in \(\Omega_{k}\) can increase the amplitude \(S_{j}\) of several modes in Eq. (12) by orders of magnitude, drastically reducing transit probabilities. In the HIP 41378 system, the level of dispersion of the planets' longitudes of node \(\Omega_{k}\) is therefore likely to be very small, perhaps less than \(1^{\circ}\), but their actual values are unknown.
Here, we are interested in the possibility for a planet to be captured in secular spin-orbit resonance from a low initial obliquity. In this context, the larger the resonance, the easier the capture (see e.g. Saillenfest et al., 2020); hence, we actually just need a lower bound for the resonance widths, that is, a lower bound for the amplitudes \(S_{j}\) in Eq. (12). If we show that the resonance capture operates flawlessly for this lower bound, then we can be assured that it will operate as well or even better for the true amplitudes \(S_{j}\). To this aim, we consider that: _i)_ the orbital inclinations of all planets with respect to the sky plane lie on the same side of \(90^{\circ}\), and _ii)_ all planets have exactly the same longitude of ascending node \(\Omega_{k}\) in the sky plane. When applied to the HIP 41378 system, this idealised configuration gives the solution shown in Table 2 for planet f.
In order to produce a resonance capture, the migration of the moon must be slow compared to the oscillations of the resonance angle, so that the parameter change is close to the adiabatic regime (see e.g. Su & Lai, 2020). For a given resonance, the oscillation frequency near the resonance centre can be computed through Eq. (7); the frequency scales as the square root of the amplitude \(S_{j}\). When applying Eq. (7) to HIP 41378 f by considering the orbital series in Table 2, one finds that the libration period of the \(s_{6}\) resonance angle when the separatrix appears is \(T_{\rm lib}\approx 547\,000\) years. This value is much smaller than the age of the system (\(2.1^{+0.4}_{-0.3}\) Gyr; see Lund et al., 2019). Therefore, even when considering the minimum possible width of the resonance, the available time span is more than enough for the planet to oscillate many times within the \(s_{6}\) resonance, allowing an adiabatic drift to occur within this resonance.
This point can be verified by performing a numerical integration of the coupled equations of motion of the planet's spin axis and the orbit of its moon. We used the same setting as Sailenfest et al. (2022): we integrated the secular equations of Correia et al. (2011) expanded at quadrupole order, and forced the orbital evolution of the planet with the quasi-periodic series in Table 2. A typical example of evolution is displayed in Fig. 4. In this example, the mass of the moon is \(m/M=7\times 10^{-4}\) (i.e. about the mass of Jupiter's moon Europa), and we made the moon migrate outwards at a constant rate, chosen to emulate a tidal parameter \(k_{2}/Q\approx 10^{-5}\). For such a tidal parameter, the moon is expected to migrate from a distance \(a_{\rm m}=5\ R\) to a distance \(a_{\rm m}=10\ R\) in about 1.2 Gyr. The planet was initialised with an obliquity of 0.05 rad and a random precession phase. The eccentricity of the moon and its inclination with respect to its local Laplace plane were both initialised to \(10^{-4}\), with random argument of pericentre and longitude of ascending node. As expected, Fig. 4 shows that the adiabatic capture and tilting in resonance \(s_{6}\) is guaranteed on a gigayear timescale. Due to the large separation between timescales, the obliquity oscillations of the planet inside the resonance are not even noticeable in the figure, but they build up in the curve width.
When the system reaches the unstable region, the eccentricity of the moon increases rapidly, which produces chaotic jumps in the planet's obliquity. Indeed, near the border of the unstable region, the timescale for the moon's eccentricity to be multiplied by 100 is a few times the characteristic timescale \(\tau\) defined in Eq. (6). Here, one obtains \(\tau\approx 100\) yr, which means that the eccentricity increase is extremely fast compared to the planet's spin-axis precession timescale (\(T\approx 52\,000\) yr), to the oscillations of the planet inside the resonance (\(T_{\rm lib}\approx 547\,000\) yr), or to the tidal eccentricity damping of the moon (whose timescale is a few millions of years; see e.g. Murray & Dermott, 1999).
The simulation in Fig. 4 is stopped when the moon's pericentre goes below the Roche limit of the planet. At this point, the moon is expected to be disrupted into pieces which would rapidly reorganise into an equatorial disc confined inside the Roche limit (see e.g. Canup, 2010; Hyodo et al., 2017). As the moon is lost, the planet is suddenly released from any kind of spin-orbit coupling, and its obliquity remains permanently frozen. In the example shown in Fig. 4, the final obliquity of the planet is about \(77^{\circ}\). This value is roughly compatible at \(2\sigma\) with the obliquity \(\varepsilon=92\pm 7^{\circ}\) proposed by Akinsanmi et al. (2020). However, we stress that the final obliquity of the planet is the result of a chaotic phase; its value strongly depends on initial conditions, on the mass of the moon, and on the widths of nearby secular spin-orbit resonances (Saillenfest et al., 2022). More massive moons and larger resonances increase the obliquity excitation of the planet during the chaotic phase.
Table 2: Solution for the long-term inclination dynamics of planet HIP 41378 f given by the Lagrange-Laplace system.

| \(j\) | identification | \(\nu_{j}\) (\({}^{\prime\prime}\) yr\({}^{-1}\)) | \(S_{j}\times 10^{7}\) | \(\phi_{j}^{(0)}\) (\({}^{\circ}\)) |
| --- | --- | --- | --- | --- |
| 1 | \(s_{5}\) | 0.000 | 7046055 | 0.0 |
| 2 | \(s_{3}\) | \(-15.600\) | 18545 | 0.0 |
| 3 | \(s_{6}\) | \(-144.623\) | 5682 | 0.0 |
| 4 | \(s_{1}\) | \(-170.310\) | 972 | 180.0 |
| 5 | \(s_{4}\) | \(-477.109\) | 26 | 180.0 |
| 6 | \(s_{2}\) | \(-477.679\) | 5 | 180.0 |
Due to chaos, obliquity values larger than 90\({}^{\circ}\) can be reached, but the detailed exploration of possible outcomes would require a precise knowledge of the orbital dynamics of the planet. Without this knowledge, we can only conclude that the obliquity of the planet ends up within the hatched blue region in Fig. 4, that is, between about\({}^{7}\) 70\({}^{\circ}\) and 110\({}^{\circ}\).
Footnote 7: The closed-form expression for the border of the unstable region is \(\cos^{2}\varepsilon=(51+25\sqrt{3})/726\); see Saillenfest and Lari (2021).
## 6 Discussions
### Refining the tilting mechanism
Under the ring hypothesis, we have presented a proof of concept for producing the unusual configuration proposed for super-puff exoplanets through the tidal migration of a former moon. We have considered the effect of a single massive moon on the planet's spin axis dynamics. This does not mean that the planet only had one moon - we expect it to possibly have many - but that this big moon gathered most of the mass of the satellite system, similarly to Titan around Saturn. Now that this big moon is lost, the remaining moons (either pre-existing or formed in the debris ring) are expected to be very small and undetectable with current facilities.
The presence of several pre-existing big moons, such as the Galilean satellites around Jupiter, would complicate the picture outlined here. Through their mutual gravitational perturbations, several massive moons could either inhibit or facilitate the tilting process (see Saillenfest et al., 2022). The exploration of this more complicated scenario is beyond the scope of this article. More generally, additional work can refine the scenario proposed here for a given target exoplanet, including the efficiency of ring formation, the distribution of possible final obliquities, and the combined effect of several massive moons. However, this level of detail would require an in-depth knowledge of the orbital dynamics of the planetary system.
In the case of HIP 41378 f, confirmed periods and masses are still missing for planet d, planet e, and the candidate planet g. The analysis presented here reflects our current understanding of the system, and some results may change in case of substantial modifications in the system's hierarchy. Our correlation analysis shows that planets d and g are only weakly coupled with the frequency \(s_{6}\) of the resonance involved. A mass measurement for these planets would therefore not alter the picture outlined above much. However, substantial changes could be produced if future observations reveal a substantially different mass or period for planet e, or if the system contains an additional outer planet; the calculations presented here should therefore be updated. In this respect, the simplicity of the analytical formulas involved is a great advantage.
### The true nature of super-puffs
Future characterisation of super-puff exoplanets is fundamental to assess the actual nature of their anomalously large radii. Unfortunately, due to the nearly face-on configuration of the proposed ring, an unambiguous detection of the ring by transit photometry or by the Rossiter-McLaughlin effect would be challenging with current instruments (Akinsanmi et al., 2020). Spectroscopic observations are much more promising. Even though the spectra of several super-puffs have been revealed to be featureless in the near infrared (Libby-Roberts et al., 2020; Chachan et al., 2020; Alam et al., 2022), rings are expected to be transparent in the far infrared, which would strongly reduce the transit depth of the planet. As noted by Alam et al. (2022), mid-infrared observations by the _JWST_ would be enough to break the degeneracy between high-altitude hazes, a high-metallicity atmosphere, or the ring hypothesis. The nominal _JWST_ mission offers only two opportunities to observe a transit of HIP 41378 f: October 2025 and March 2027. Considering their high scientific value, these opportunities should not be missed. In addition, the high cadence and high photometric resolution of the future _PLATO_ mission may allow small distortions in the transit light curve to be detected (due to the non-zero inclination of the ring with respect to the sky plane and/or to a possible thin inner gap in the ring; see Akinsanmi et al., 2020).
Figure 4: Example of tidal evolution of the planet HIP 41378 f and a hypothetical former moon. The mass of the moon is chosen to be \(m/M=7\times 10^{-4}\). The moon migrates away at a constant rate emulating a tidal parameter \(k_{2}/Q=10^{-5}\). The trajectory of the system is shown in black; it goes from the leftmost to the rightmost point in about 1.3 Gyr. The available resonances are shown in pink, with their separatrices in red; they are labelled with the frequencies \(s_{j}\) of the corresponding modes (see Sect. 3). In this example, the resonances have the minimum possible widths according to the planets’ orbital elements in Table 1. In the hatched blue region, the moon is unstable (same as Fig. 1). The top axis shows the moon distance in units of the planetary radius.
### The rarity of enlarged planets
Due to the generic nature of the mechanism presented here, one may wonder why we do not observe many distant exoplanets with anomalously large radii. This rarity can be explained by several factors. First, the transit and radial-velocity methods are strongly biased towards the detection of short-period exoplanets (Perryman, 2018). In this regard, the detection of HIP 41378 f with a period of 542 days is already an exception (the transit probability is 0.5%). In turn, the long-period planets observed in direct imaging are strongly biased towards young systems, which cannot have gone through the gigayear adiabatic tilting process described here. As of today, this leaves us with only a handful of exoplanet detections for which this mechanism may have played a role.
The second rarity factor is geometric: a strong radius enhancement able to cast suspicion requires a roughly face-on ring. The mechanism proposed here produces a final planetary obliquity more or less equal to 90deg, which is a necessary condition for observing a transiting face-on ring, but is not sufficient: the precession phase \(\psi\) of the planet must also have an adequate value. For a ring with typical radius 2.5 \(R\), the increase in transit depth leading to underestimating the planet density by a factor \(q>10\) requires a precession phase within \(4\)deg of the exact face-on configuration (see e.g. Zuluaga et al., 2015). As shown in Fig. 5, this occurs about 45% of the time. This fraction is lowered if we consider the ring to have an inner optically thin gap similar to Saturn's ring.
Finally, even though the mechanism described in this article is generic, not all giant planets are expected to reach the final instability phase in only a few gigayears. Depending on the initial configuration of their moons and the geometry of the available resonances, the planet's obliquity may only have time to increase by a few tens of degrees during its lifetime. In the Solar System, which is 4.5 Gyr old, only Uranus may have completed the final stage today (Saillenfest et al., 2022). In contrast, Jupiter is only starting the tilting phase\({}^{8}\) (Saillenfest et al., 2020), while Saturn is seemingly halfway in (Saillenfest et al., 2021a,b) - and it may have recently been ejected from resonance (see Wisdom et al., 2022).
Footnote 8: As Jupiter possesses four massive moons interacting with each other, its tilting process is somewhat different from what is presented here. Jupiter may never be able to reach an obliquity close to 90deg even if it was given infinite time.
Hence, even though many exoplanets are probably affected by this mechanism, the conjunction of observational biases, ring geometry, and the long timescales at play drastically reduces the probability of detecting targets as exquisite as HIP 41378 f. In this regard, the future _PLATO_ mission is particularly promising, as its observing strategy is tailored to long-period planets, and it will be accompanied by an intensive radial-velocity follow-up to get accurate planet masses and detect possible non-transiting companions. Hopefully, the _PLATO_ discoveries will enable us to estimate the fraction of the exoplanet population that may have gone through the mechanism described in this article.
## 7 Conclusion
The apparently enlarged radius of some long-period exoplanets may be due to the presence of a ring observed roughly face on (Piro & Vissapragada, 2020; Akinsanmi et al., 2020). Despite their unconventional configuration, such hypothetical rings and the nearly 90deg obliquity of their host planets can be the natural end state of former migrating moons. This mechanism involves the capture of the planet in secular spin-orbit resonance as the moon migrates away on a gigayear timescale. The planet is then gradually tilted until the moon is destabilised and may be disrupted into a debris disc.
For a given exoplanet, the plausibility of this formation mechanism can be assessed through simple analytical calculations. First, we need to determine the list of secular spin-orbit resonances that may tilt the planet. The frequencies \(\nu_{j}\) of the main orbital precession harmonics of the planet can be obtained through the Lagrange-Laplace theory; in this theory, orbital frequencies are the eigenvalues of a matrix which depends only on the masses and spacings of the planets contained in the system. The probability density function of each frequency can be built from numerous realisations of the system (e.g. \(10^{6}\) or more) which are sampled according to our uncertainties on the parameters. Simple correlation analysis can then quantify the influence of each planet on the frequency values.
Then, for each frequency \(\nu_{j}\), the simple formula in Eq. (5) gives the minimum mass of a moon that the planet must have in order to trigger an adequate secular spin-orbit resonance. This formula depends on the unknown parameters \(J_{2}\) and \(\omega\lambda\) of the planet, but thanks to the approximate relation \(J_{2}\propto\omega^{2}\), this lack of knowledge only weakly affects the final result. The moon-to-planet mass ratio obtained is the first plausibility check of this dynamical mechanism. Moons with mass ratio \(m/M\sim 10^{-4}\) or smaller are expected to be ubiquitous around gaseous planets (see e.g. Canup and Ward, 2006). Substantially larger moons cannot be categorically ruled out, but they would require non-generic formation pathways such as captures or giant impacts, and are therefore much less likely (see e.g. Kipping, 2014).
A second consistency check is provided by the age of the planetary system considered. The Laplace radius of the planet (see Eq. 4) sets the distance over which the moon needs to migrate to fully tilt the planet.
Figure 5: Probability of underestimating the density of a transiting exoplanet by a factor \(q\) due to the presence of an opaque ring with outer radius 2.5 \(R\) and no inner gap. The enhanced transit depth due to the ring is supposed to be fully misinterpreted as an enlarged planetary radius. The red curve is obtained by computing the ring inclination required to divide the measured planet density by a factor \(q\), and by assuming that the inclination of the ring (or equivalently, its precession angle \(\psi\); see text) is uniformly distributed between 0 and 2\(\pi\). The three planet pictures show the approximate geometries corresponding to the factor \(q\) on the abscissa. The probability goes from 1 at \(q=1\) (exact edge-on configuration) to 0 at \(q=2.5^{3}\) (exact face-on configuration). In case it possesses a ring, planet HIP 41378 f would have \(q\approx 13\) (Akinsanmi et al., 2020).
The migration range obtained must have been covered by the moon in a timespan shorter than the age of the system. As the migration of moons is powered by tidal dissipation inside the planet, the required distance and migration timescale can be translated into a tidal parameter \(k_{2}/Q\) for the planet. Expected values are of the order of \(10^{-5}\) to \(10^{-4}\) from a Solar System perspective (Lainey et al., 2009, 2017).
The last plausibility check is the consistency of timescales between the age of the planetary system and the hypothesis of an adiabatic capture into resonance. An adiabatic capture requires the libration period inside the resonance to be much shorter than the age of the system, such that many oscillations of the resonance angle may possibly have occurred during the tilting of the planet. The characteristic libration frequency is given in Eq. (7); it depends on the frequencies \(\nu_{j}\) obtained above, but also on the amplitudes \(S_{j}\) of the corresponding harmonics in the planet's orbital precession spectrum. Using the Lagrange-Laplace theory, the computation of these amplitudes requires knowledge of the inclinations \(I_{k}\) and longitudes of nodes \(\Omega_{k}\) of the planets (e.g. measured in the sky plane). As the libration frequency scales as the square root of the amplitude of the resonant term, only a lower bound for \(S_{j}\) is actually needed. This lower bound can be obtained even if the longitudes of nodes \(\Omega_{k}\) of the planets are unknown, allowing one to compute a maximum value for the libration period inside the resonance considered.
We applied this methodology to the planet HIP 41378 f, and found that all consistency checks are fulfilled. In order to tilt the planet through an adequate resonance, the hypothetical exomoon must have had a moon-to-planet mass ratio \(m/M\) ranging from about \(2\times 10^{-4}\) to \(10\times 10^{-4}\), that is, a mass comparable to that of Neptune's moon Triton, Jupiter's moon Europa, or to that of our own Moon. Even though such small exomoons are very hard to detect due to the weakness of their observational signals (Kipping, 2014), we expect them to be ubiquitous around giant exoplanets. Provided that the exomoon was initially formed at a distance of about 3 to 10 planetary radii (similarly to Jupiter's moons Io or Europa), its outward migration leads to a guaranteed capture of HIP 41378 f in a secular spin-orbit resonance. The migration timescale required for the moon is found to be in line with what is observed in the Solar System, with a corresponding tidal dissipation factor \(k_{2}/Q\) larger than \(2\times 10^{-5}\) (for the smallest possible moon) or larger than about \(6\times 10^{-6}\) (for a bigger moon). Finally, the libration timescale inside the resonance is found to be orders of magnitude smaller than the age of the system (\(2.1^{+0.4}_{-0.3}\) Gyr; see Lund et al., 2019), allowing for the whole tilting mechanism to have possibly occurred.
All these requirements are confirmed by an example of fully coupled numerical integration of the planet's spin axis and the moon's orbit. The planet's spin axis is gradually tilted until its obliquity \(\varepsilon\) reaches values in the interval \([70^{\circ},110^{\circ}]\), and its moon becomes unstable (Tremaine et al., 2009; Saillenfest and Lari, 2021). Due to the short instability timescale of the exomoon (\(\tau\approx 100\) yr in the case of HIP 41378 f), its eccentricity increase is likely to cause catastrophic events, such as collision chains between small inner moons or a tidal disruption of the moon itself when its pericentre goes below the planet's Roche limit (see e.g. Canup, 2010; Hyodo et al., 2017; Wisdom et al., 2022). Hence, we argue that the dynamical mechanism described here, which may be responsible for the tilting of planet HIP 41378 f to an obliquity \(\varepsilon\approx 90^{\circ}\), can also naturally provide the material for its hypothetical ring.
We stress, however, that even though this dynamical mechanism is physically realistic for HIP 41378 f, this does not imply that it necessarily happened. Planet HIP 41378 f may have had too small and/or too distant moons for the mechanism to operate, and the anomalous transit depth and flat spectrum of this planet may still be due to a particularly tenuous atmosphere covered with high-altitude hazes (Chachan et al., 2020; Alam et al., 2022; Belkovski et al., 2022). Yet, our analysis does lend further support to the high-obliquity ring hypothesis, by showing that such an unusual configuration is not only feasible from a physical point of view, but even expected for some fraction of exoplanets resembling HIP 41378 f - that is, for old and distant exoplanets in multi-planetary systems. As detailed above, checking the plausibility of this mechanism only requires a limited knowledge of the planetary system considered, and this methodology can be applied to other super-puff exoplanets, and in particular to the potential future discoveries of _PLATO_.
###### Acknowledgements.
The authors thank the anonymous referee for her/his inspiring comments. This work was supported by the Programme National de Planétologie (PNP) of CNRS/INSU, co-funded by CNES.
|
2307.08352 | Zero-th Order Algorithm for Softmax Attention Optimization | Large language models (LLMs) have brought about significant transformations
in human society. Among the crucial computations in LLMs, the softmax unit
holds great importance. It helps the model generate a probability
distribution on potential subsequent words or phrases, considering a series of
input words. By utilizing this distribution, the model selects the most
probable next word or phrase, based on the assigned probabilities. The softmax
unit assumes a vital function in LLM training as it facilitates learning from
data through the adjustment of neural network weights and biases.
With the development of the size of LLMs, computing the gradient becomes
expensive. However, the Zero-th Order method can approximately compute the gradient
with only forward passes. In this paper, we present a Zero-th Order algorithm
specifically tailored for Softmax optimization. We demonstrate the convergence
of our algorithm, highlighting its effectiveness in efficiently computing
gradients for large-scale LLMs. By leveraging the Zeroth-Order method, our work
contributes to the advancement of optimization techniques in the context of
complex language models. | Yichuan Deng, Zhihang Li, Sridhar Mahadevan, Zhao Song | 2023-07-17T09:43:50Z | http://arxiv.org/abs/2307.08352v1 | # Zero-th Order Algorithm for Softmax Attention Optimization
###### Abstract
Large language models (LLMs) have brought about significant transformations in human society. Among the crucial computations in LLMs, the softmax unit holds great importance. It helps the model generate a probability distribution on potential subsequent words or phrases, considering a series of input words. By utilizing this distribution, the model selects the most probable next word or phrase, based on the assigned probabilities. The softmax unit assumes a vital function in LLM training as it facilitates learning from data through the adjustment of neural network weights and biases.
As the size of LLMs grows, computing the gradient becomes expensive. However, the Zero-th Order method can approximately compute the gradient with only forward passes. In this paper, we present a Zero-th Order algorithm specifically tailored for Softmax optimization. We demonstrate the convergence of our algorithm, highlighting its effectiveness in efficiently computing gradients for large-scale LLMs. By leveraging the Zeroth-Order method, our work contributes to the advancement of optimization techniques in the context of complex language models.
## 1 Introduction
In the last few years, the field of natural language processing has witnessed explosive growth in large language models (LLMs). A series of breakthrough neural network models have rapidly advanced the capabilities of LLMs, including Transformer [13], GPT-1 [14], BERT [15], GPT-2 [16], GPT-3 [17], PaLM [18], OPT [20]. Each iteration incorporates architectural innovations and larger datasets to push the boundaries of what is possible with self-supervised learning on text. The conversational chatbot ChatGPT [13] created by OpenAI in 2022 brought LLMs into the public spotlight by showcasing their potential for remarkably human-like interaction. Riding this wave, OpenAI recently unveiled an even more powerful LLM called GPT-4 [15]. While technical details remain scarce, initial evaluations suggest GPT-4 significantly outperforms its predecessor ChatGPT [20]. Fine-tuned LLMs have proven adept at real-world natural language tasks including machine translation [14], sentiment analysis [21], language modeling [22], and even creative writing [13, 15]. The rapid progress shows the power of scale and self-supervision in language models.
_Attention mechanism_ is a crucial component of large language models (LLMs) like GPT-3, enabling them to focus on relevant parts of the input text [13, 14, 15, 16, 17, 18, 19, 20]. The attention matrix represents correlations between tokens, with entries quantifying the relevance of each token to others. This allows selective focus on pertinent input when generating output, rather than weighing all tokens equally. Attention is inspired by how humans pay differing amounts of attention to various input stimuli. In LLMs, attention is commonly implemented via soft weighting using the softmax function. The attention computation proceeds as follows [1, 1, 21, 18],
**Definition 1.1** (Static Attention Computation).: _Let \(Q,K,V\in\mathbb{R}^{n\times d}\) be three matrices, we define two matrices_
\[A :=\;\exp(QK^{\top})\in\mathbb{R}^{n\times n}\] \[D :=\;\operatorname{diag}(A\mathbf{1}_{n})\in\mathbb{R}^{n\times n}\]
_Obviously, \(A\) is square and \(D\) is a diagonal matrix. Based on these, we define_
\[\mathsf{Att}(Q,K,V):=D^{-1}AV\]
_Here \(\mathbf{1}_{n}\in\mathbb{R}^{n}\) is a length-\(n\) vector where all the entries are ones._
In the provided definition, the query tokens, represented by the matrix \(Q\in\mathbb{R}^{n\times d}\), are commonly derived from the decoder's preceding hidden state. As for the key tokens and values, we utilize matrices \(K\in\mathbb{R}^{n\times d}\) and \(V\in\mathbb{R}^{n\times d}\) respectively. The attention matrix \(A\) is computed as follows: we take the dot product between each query vector \(q_{i}\) and key vector \(k_{j}\) to obtain the relevance scores, and then apply the softmax function to normalize these scores into attention weights \(A_{i,j}\). Specifically,
\[A_{i,j}=\operatorname{softmax}(q_{i}^{\top}K^{\top})_{j}=\frac{\exp(q_{i}^{\top}k_{j})}{\sum_{l=1}^{n}\exp(q_{i}^{\top}k_{l})}\]
So each entry \(A_{i,j}\) reflects how much attention should be placed on the \(j^{\text{th}}\) key when interpreting the \(i^{\text{th}}\) query token. This enables the model to concentrate on relevant parts of the keys for each query.
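As an illustration, a minimal numpy sketch of the attention computation of Definition 1.1 is given below; no numerical-stability tricks (such as subtracting the row maximum before exponentiating) are applied, in order to stay close to the definition.

```python
import numpy as np

def attention(Q, K, V):
    """Att(Q, K, V) = D^{-1} A V with A = exp(Q K^T) and D = diag(A 1_n), as in Definition 1.1."""
    A = np.exp(Q @ K.T)                          # n x n unnormalised attention matrix
    D_inv = 1.0 / A.sum(axis=1, keepdims=True)   # applying D^{-1} row-normalises A
    return (D_inv * A) @ V                       # each row of D^{-1} A sums to 1

rng = np.random.default_rng(0)
n, d = 4, 3
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
print(attention(Q, K, V).shape)                  # (4, 3)
```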
Motivated by the exponential function used in attention, some work has explored hyperbolic regression problems, for example \(f(x)=\exp(Ax),\cosh(Ax),\sinh(Ax)\) [1, 13], formally defined as follows,
**Definition 1.2** (Hyperbolic Regression [11]).: _Let \(A\in\mathbb{R}^{n\times d}\) and \(b\in\mathbb{R}^{n}\) be a matrix and a vector, we define the objective function of hyperbolic regression problem as_
\[\min_{x\in\mathbb{R}^{d}}\|f(x)-b\|_{2}^{2}.\]
_In this case, the function \(f(x)\) can take the form of either \(\exp(Ax)\), \(\cosh(Ax)\), or \(\sinh(Ax)\)._
Very recently, [10] considered the normalization factor, and defined the following Softmax regression problem,
**Definition 1.3** (Softmax Regression, [10]).: _Let \(A\in\mathbb{R}^{n\times d}\) and \(b\in\mathbb{R}^{n}\) be a matrix and a vector, we define the objective function of softmax regression problem as_
\[\min_{x\in\mathbb{R}^{d}}\|\langle\exp(Ax),\mathbf{1}_{n}\rangle^{-1}\exp(Ax) -b\|_{2}^{2}.\]
In the practice of LLMs, the number of parameters to be trained is very large (e.g. ChatGPT has 1.5B parameters [1]), so training can be extremely slow. A traditional way to avoid this is to use _Zero-th Order_ methods. A widely-used zero-th order method is the following simultaneous perturbation stochastic approximation (SPSA) [13, 14] algorithm.
**Definition 1.4** (Simultaneous Perturbation Stochastic Approximation (SPSA) [13]).: _Let \(L(x)\) be a loss function. For a point \(x_{0}\in\mathbb{R}^{d}\), we define the Simultaneous Perturbation Stochastic Approximation (SPSA) of \(L(x)\) on \(x_{0}\) as a vector \(\widehat{g}(x_{0})\in\mathbb{R}^{d}\) such that_
\[\widehat{g}(x_{0})_{i}:=\frac{1}{2\epsilon\cdot p_{i}}(L(x_{0}+\epsilon\cdot p )-L(x_{0}-\epsilon\cdot p)),\ \ \forall i\in[d],\]
_where \(p\in\mathbb{R}^{d}\sim\mathcal{N}(0,I_{d})\) is the perturbation vector and \(\epsilon>0\) is the perturbation scale._
In SPSA, the gradient is approximated using only loss function evaluations, rather than back-propagation. Specifically, random perturbations are added to the parameters, and the loss is evaluated twice - once with positive perturbations, and once with negative. The gradient is estimated as the difference in losses divided by the perturbation size. This allows gradient estimation without explicit differentiation, enabling efficient training of massive models [12]. While not as accurate as true gradients, SPSA gradients are much cheaper to obtain.
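A minimal Python sketch of the SPSA estimator of Definition 1.4 is given below; it works for any loss function \(L\) that can be evaluated in a forward pass, and the toy loss at the end is only for illustration.

```python
import numpy as np

def spsa_gradient(L, x0, eps=1e-3, rng=None):
    """SPSA estimate of the gradient of L at x0 (Definition 1.4): two loss evaluations, no backprop."""
    rng = np.random.default_rng() if rng is None else rng
    p = rng.standard_normal(x0.shape)             # perturbation vector p ~ N(0, I_d)
    diff = L(x0 + eps * p) - L(x0 - eps * p)      # scalar finite difference of the loss
    return diff / (2.0 * eps * p)                 # coordinate-wise division by p_i

# toy check: L(x) = ||x - 1||_2^2 has true gradient 2 (x - 1)
L_toy = lambda x: np.sum((x - 1.0) ** 2)
print(spsa_gradient(L_toy, np.zeros(5), rng=np.random.default_rng(0)))
```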
### Our main result
In this work, we consider the following loss function:
**Definition 1.5** (Our Softmax Loss Function).: _For a vector \(x\in\mathbb{R}^{d}\), we define the softmax loss function_
\[L_{\exp}(x):=\sum_{j=1}^{n}L_{\exp,j}(x),\ \ \ \ L(x):=\sum_{j=1}^{n}L_{\exp, \mathrm{reg},j}(x)\]
_where_
\[L_{\exp,j}(x):= 0.5\|\langle\exp(A_{j}x),\mathbf{1}_{n}\rangle^{-1}\exp(A_{j}x) -b_{j}\|_{2}^{2}\] \[L_{\exp,\mathrm{reg},j}(x):= 0.5\|\langle\exp(A_{j}x),\mathbf{1}_{n}\rangle^{-1}\exp(A_{j}x) -b_{j}\|_{2}^{2}+0.5\|WA_{j}x\|_{2}^{2}\]
\(A_{j}\in\mathbb{R}^{n\times d}\)_, \(b_{j}\in\mathbb{R}^{n}\). For a certain batch \(\mathcal{B}\subseteq[n]\) of data points, we define_
\[L_{\exp}(x;\mathcal{B}):=\sum_{j\in\mathcal{B}}L_{\exp,j}(x).\]
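A small numpy sketch of the per-example and batched losses in Definition 1.5 is given below; the diagonal matrix \(W\) is represented by its diagonal vector \(w\), and the helper names are ours rather than the paper's.

```python
import numpy as np

def L_exp_j(x, A_j, b_j):
    """0.5 * || <exp(A_j x), 1_n>^{-1} exp(A_j x) - b_j ||_2^2 (Definition 1.5)."""
    u = np.exp(A_j @ x)
    return 0.5 * np.sum((u / u.sum() - b_j) ** 2)

def L_exp_reg_j(x, A_j, b_j, w):
    """Adds the regulariser 0.5 * || W A_j x ||_2^2 with W = diag(w)."""
    return L_exp_j(x, A_j, b_j) + 0.5 * np.sum((w * (A_j @ x)) ** 2)

def L_exp_batch(x, A_list, b_list, batch):
    """L_exp(x; B) = sum of L_exp,j(x) over the indices j in the batch."""
    return sum(L_exp_j(x, A_list[j], b_list[j]) for j in batch)
```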
Motivated by experiments applying SPSA to LLMs [16], we look for an underlying theoretical explanation of the performance of SPSA on large models. We show the following.
**Theorem 1.6** (Informal version of Theorem 5.6).: _Let \(A_{j}\in\mathbb{R}^{n\times d}\) and let \(b_{j}\in\mathbb{R}^{n}\) satisfy \(\|b_{j}\|_{1}\leq 1\) for all \(j\in[n]\). Let \(R\geq 4\), \(\|A_{j}\|\leq R\), \(\|x\|_{2}\leq R\), and let \(M:=\exp(O(R^{2}+\log n))\). Let \(W=\operatorname{diag}(w)\), where \(\min_{i}w_{i}^{2}\geq\mu/\sigma_{\min}(A_{j})\) for all \(j\in[n]\). Let \(|\mathcal{B}|=B\) and let \(\kappa(A)=\max_{j\in[n]}\kappa(A_{j})\). Let \(T=O(M\cdot(1+d^{1.5}\cdot\kappa^{2}(A)/k)\cdot\mu^{-2}B^{-1}\log((L(x_{0})-L^{*})/\epsilon))\). Let \(x_{0}\) denote the initial point of SGD, and let \(L^{*}=\min_{x}L(x)\). Then SGD based on the zero-th order method applied to the multiple softmax loss function converges to the optimum up to an additive error \(\epsilon\) in \(T\) iterations._
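For concreteness, a hedged sketch of the zeroth-order SGD loop analysed in this theorem is given below; it reuses the `spsa_gradient` and `L_exp_batch` helpers sketched above, and the step size and batch sampling are illustrative choices rather than the constants appearing in the theorem.

```python
import numpy as np

def zeroth_order_sgd(x0, A_list, b_list, batch_size, steps, lr, eps=1e-3, seed=0):
    """SGD on L_exp where each stochastic gradient is the SPSA estimate on a random mini-batch."""
    rng = np.random.default_rng(seed)
    n = len(A_list)
    x = x0.copy()
    for _ in range(steps):
        batch = rng.choice(n, size=batch_size, replace=False)
        loss = lambda z: L_exp_batch(z, A_list, b_list, batch)   # forward passes only
        x = x - lr * spsa_gradient(loss, x, eps=eps, rng=rng)
    return x
```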
### Related work
Attention Theory. Much research has examined the theory behind attention computation in large language models.
A recent work [13] conducted distributed fine-tuning with low bandwidth, using the idea of shared randomness.
## 2 Preliminary
In this section, we state preliminaries for the whole paper. In Section 2.1, we define the notations to be used in the paper. In Section 2.2 we provide the definitions of stable rank and effective rank. In Section 2.3 we state some basic algebraic facts. In Section 2.4 we provide some basic tools for matrix norm bounds. In Section 2.5 we provide some basic tools for positive semidefinite (PSD) matrices. In Section 2.6 we give the definitions to be used in our paper. In Section 2.7 we give some basic definitions regarding a function's properties. In Section 2.8 we provide the definition of Simultaneous Perturbation Stochastic Approximation (SPSA). In Section 2.9 we state some results from previous work to be used in our paper.
### Notations
In this paper, we use \(\mathbb{R}\) to denote real numbers, \(\mathbb{R}_{\geq 0}\) to denote non-negative real numbers.
Given a vector \(x\in\mathbb{R}^{d}\), we use \(b=\exp(x)\in\mathbb{R}^{d}\) to denote the vector such that \(b_{i}=\exp(x_{i})\) for \(i\in[d]\).
Given \(x\in\mathbb{R}^{n}\), its \(\ell_{2}\)-norm is denoted as \(\|x\|_{2}:=(\sum_{i=1}^{n}x_{i}^{2})^{1/2}\).
Given \(A\in\mathbb{R}^{n\times k}\), its spectral norm can be denote as \(\|A\|\), i.e.\(\|A\|:=\sup_{x\in\mathbb{R}^{k}}\|Ax\|_{2}/\|x\|_{2}\).
Given \(A\), its largest singular value is denoted as \(\sigma_{\max}(A)\), its smallest singular value is denoted as \(\sigma_{\min}(A)\).
Given \(x\in\mathbb{R}^{n}\), we use \(\|x\|_{\infty}\) to denote \(\max_{i\in[n]}|x_{i}|\).
Given \(x\in\mathbb{R}^{n},y\in\mathbb{R}^{n}\), we use \(c=x\circ y\) to generate a vector \(c\in\mathbb{R}^{n}\) where \(c_{i}=x_{i}y_{i}\) for \(i\in[n]\).
Given \(x\in\mathbb{R}^{n}\), we use \(A=\operatorname{diag}(x)\in\mathbb{R}^{n\times n}\) to denote a diagonal matrix where \(A_{i,i}=x_{i}\) for \(i\in[n]\).
We use \(a=\mathbf{1}_{d}\in\mathbb{R}^{d}\) to denote the all-ones vector, i.e., \(a_{i}=1\) for \(i\in[d]\).
Given \(A,B\in\mathbb{R}^{d\times d}\), we say \(A\succeq B\) if \(x^{\top}Ax\geq x^{\top}Bx\) for \(\forall x\in\mathbb{R}^{d}\).
We define \(\cosh(x)=\frac{1}{2}(\exp(x)+\exp(-x))\) and \(\sinh(x)=\frac{1}{2}(\exp(x)-\exp(-x))\).
Given \(A\in\mathbb{R}^{n\times d}\), we define the number of non-zero entries of \(A\) to be \(\operatorname{nnz}(A)\), i.e., \(\operatorname{nnz}(A):=|\{(i,j)\in[n]\times[d]\ |\ A_{i,j}\neq 0\}|\).
Given diagonal matrix \(D\in\mathbb{R}^{n\times n}\), we say \(D\) is a \(k\)-sparse diagonal matrix where \(k:=|\{i\in[n]\ |\ D_{i,i}\neq 0\}|\).
Given function \(f\), we use \(\widetilde{O}(f)\) to denote \(f\cdot\operatorname{poly}(\log f)\).
### Stable Rank and Effective Rank
**Definition 2.1** (Stable rank [14]).: _Let \(A\in\mathbb{R}^{n\times d}\). We use_
\[\operatorname{srank}(A):=\frac{\|A\|_{F}^{2}}{\|A\|^{2}}\]
_to denote the stable rank of \(A\)._
**Definition 2.2** (effective rank).: _Let \(A\in\mathbb{R}^{d\times d}\), we use_
\[\operatorname{erank}(A):=\frac{\operatorname{tr}[A]}{\|A\|}\]
_to denote the effective rank of \(A\)._
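As an illustration (not part of the cited works), a minimal NumPy sketch that evaluates both quantities for a given matrix; the function names and the example matrices are ours.

```python
import numpy as np

def stable_rank(A):
    # srank(A) = ||A||_F^2 / ||A||^2 (Definition 2.1); ||A|| is the largest singular value
    s = np.linalg.svd(A, compute_uv=False)
    return np.sum(s ** 2) / s[0] ** 2

def effective_rank(A):
    # erank(A) = tr[A] / ||A|| (Definition 2.2); intended for square (e.g., PSD) matrices
    s = np.linalg.svd(A, compute_uv=False)
    return np.trace(A) / s[0]

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
H = A.T @ A                      # a PSD matrix, as in the Hessian analysis later
print(stable_rank(A), effective_rank(H))
```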
### Basic Algebras
**Fact 2.3**.:
* _Let_ \(X\in\mathbb{R}^{k\times k}\)_,_ \(a\in\mathbb{R}^{k}\)_, then_ \[a^{\top}Xa=\sum_{i=1}^{k}\sum_{j=1}^{k}a_{i}X_{i,j}a_{j}=\sum_{i=1}^{k}a_{i}X_{i,i}a_{i}+\sum_{i\neq j}a_{i}X_{i,j}a_{j}.\]
### Tools for Matrix Inequality
**Fact 2.4**.: _Let \(A,B\in\mathbb{R}^{n\times d}\), then_
* \(\|A\|_{F}\leq\sqrt{\operatorname{rank}(A)}\cdot\|A\|\)__
* \(\operatorname{rank}(A+B)\leq\operatorname{rank}(A)+\operatorname{rank}(B)\)__
* \(\|A^{\top}\|=\|A\|\)__
* \(\|A\|\geq\|B\|-\|A-B\|\)__
* \(\|A+B\|\leq\|A\|+\|B\|\)__
* \(\|A\cdot B\|\leq\|A\|\cdot\|B\|\)__
* _Let_ \(a\in\mathbb{R}\)_, if_ \(A\preceq a\cdot B\)_, then_ \(\|A\|\leq a\cdot\|B\|\)__
* _Let_ \(a\in\mathbb{R}\)_, then_ \(\|a\cdot A\|\leq|a|\cdot\|A\|\)__
* _Let_ \(x\in\mathbb{R}^{d}\)_, we have_ \(\|Ax\|_{2}\leq\|A\|\cdot\|x\|_{2}\)_._
* _Let_ \(x,y\in\mathbb{R}^{d}\)_, then_ \(\|xy^{\top}\|\leq\|x\|_{2}\|y\|_{2}\)__
### Tools for PSD
**Fact 2.5**.: _Let \(x,y\in\mathbb{R}^{d}\), We have:_
* \(xy^{\top}+yx^{\top}\preceq xx^{\top}+yy^{\top}\)__
**Fact 2.6**.: _Let \(\{a_{i}\}_{i\in[n]}\subseteq\mathbb{R}^{d}\) be a set of vectors, then we have_
* _Part 1._ \(a_{i}a_{j}^{\top}+a_{j}a_{i}^{\top}\preceq a_{i}a_{i}^{\top}+a_{j}a_{j}^{\top}\)__
* _Part 2._ \(\sum_{i=1}^{n}\sum_{j>i}^{n}a_{i}a_{j}^{\top}+a_{j}a_{i}^{\top}\preceq(n-1) \sum_{i=1}^{n}a_{i}a_{i}^{\top}\)__
* _Part 3._ \(\sum_{i=1}^{n}\sum_{j=1}^{n}a_{i}a_{j}^{\top}\preceq n\cdot\sum_{i=1}^{n}a_{i }a_{i}^{\top}\)__
Proof.: **Proof of Part 1** It trivially follows from Fact 2.5
**Proof of Part 2.** We have
\[\sum_{i=1}^{n}\sum_{j>i}^{n}a_{i}a_{j}^{\top}+a_{j}a_{i}^{\top} \preceq \sum_{i=1}^{n}\sum_{j>i}^{n}(a_{i}a_{i}^{\top}+a_{j}a_{j}^{\top})\] \[= (n-1)\cdot\sum_{i=1}^{n}a_{i}a_{i}^{\top}\]
where the first step follows from Part 1.
**Proof of Part 3.**
\[\sum_{i=1}^{n}\sum_{j=1}^{n}a_{i}a_{j}^{\top} =\ \sum_{i=1}^{n}a_{i}a_{i}^{\top}+\sum_{i\neq j}a_{i}a_{j}^{\top}\] \[=\ \sum_{i=1}^{n}a_{i}a_{i}^{\top}+\sum_{i=1}^{n}\sum_{j>i}^{n}a_{i} a_{j}^{\top}+a_{j}a_{i}^{\top}\] \[\preceq\ n\sum_{i=1}^{n}a_{i}a_{i}^{\top},\]
where the first step follows from separating the diagonal and off-diagonal terms, the second step follows from rewriting the off-diagonal sum using symmetric pairs, and the last step follows from Part 2.
Thus we complete the proof.
### Basic Definitions
**Definition 2.7** (Regularization Term).: _Let \(A_{j}\in\mathbb{R}^{n\times d}\), \(w\in\mathbb{R}^{n}\), \(W=\operatorname{diag}(w)\). We define \(L_{\operatorname{reg},j}:\mathbb{R}^{d}\to\mathbb{R}\) as follows_
\[L_{\operatorname{reg},j}(x):=0.5\|WA_{j}x\|_{2}^{2}\]
**Definition 2.8** (Our Softmax Loss Function).: _Let \(x\in\mathbb{R}^{d}\), we define the softmax loss function as follows_
\[L_{\operatorname{exp}}(x):= \sum_{j=1}^{n}L_{\operatorname{exp},j}(x)\] \[L(x):= \sum_{j=1}^{n}L_{\operatorname{exp},\operatorname{reg},j}(x)\]
_where_
\[L_{\operatorname{exp},j}(x):= 0.5\|\langle\exp(A_{j}x),\mathbf{1}_{n}\rangle^{-1}\exp(A_{j}x)- b_{j}\|_{2}^{2}\] \[L_{\operatorname{exp},\operatorname{reg},j}(x):= 0.5\|\langle\exp(A_{j}x),\mathbf{1}_{n}\rangle^{-1}\exp(A_{j}x)- b_{j}\|_{2}^{2}+0.5\|WA_{j}x\|_{2}^{2}\]
\(A_{j}\in\mathbb{R}^{n\times d}\)_, \(b_{j}\in\mathbb{R}^{n}\). For a certain batch \(\mathcal{B}\subseteq[n]\) of data points, we define_
\[L_{\operatorname{exp}}(x;\mathcal{B}):=\sum_{j\in\mathcal{B}}L_{\operatorname{ exp},j}(x)\]
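For concreteness, the following minimal NumPy sketch (our own illustration; the variable names are not from the paper) evaluates one term \(L_{\exp,j}(x)\) and the batch loss \(L_{\exp}(x;\mathcal{B})\).

```python
import numpy as np

def softmax_loss_term(A_j, b_j, x):
    # L_exp,j(x) = 0.5 * || <exp(A_j x), 1_n>^{-1} exp(A_j x) - b_j ||_2^2
    u = np.exp(A_j @ x)
    f = u / u.sum()
    return 0.5 * np.sum((f - b_j) ** 2)

def batch_softmax_loss(A_list, b_list, x, batch):
    # L_exp(x; B) = sum of the per-example terms over the batch indices
    return sum(softmax_loss_term(A_list[j], b_list[j], x) for j in batch)
```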
**Lemma 2.9** ([6]).: _Let \(L_{\operatorname{exp},\operatorname{reg},j}(x)\) be as defined in Definitions 2.7 and 2.8, then we have_
\[\nabla^{2}L_{\operatorname{exp},\operatorname{reg},j}(x)\succeq\mu\cdot I_{d}\]
_where \(\mu>0\) is a constant._
**Lemma 2.10** (Decomposition of gradient, [6]).: _Given_
* \(L_{\operatorname{exp},j}(x)\) _follows from Definition_ 2.8_._
* \(f_{j}(x)\) _follows from Definition_ 2.11_._
* \(c_{j}(x)\) _follows from Definition_ 2.12_._
_Then it holds_
* _Part 1._
\[\nabla L_{\exp,j}(x)=A_{j}^{\top}\cdot G_{j}(x).\]
* _Part 2. For_ \(\mathcal{B}\subseteq[n]\)__ \[\nabla L_{\exp}(x;\mathcal{B})=\sum_{j\in\mathcal{B}}\nabla L_{\exp,j}(x)\]
* _Part 3._ \[\nabla L_{\exp}(x)=\sum_{j\in[n]}\nabla L_{\exp,j}(x)\]
**Definition 2.11**.: _We define \(f_{j}(x)\) as follows_
\[f_{j}(x):=\langle\exp(A_{j}x),\mathbf{1}_{n}\rangle^{-1}\cdot\exp(A_{j}x).\]
**Definition 2.12**.: _Let \(b_{j}\in\mathbb{R}^{n}\)._
_We define \(c_{j}(x)\) as_
\[c_{j}(x):=f_{j}(x)-b_{j}\]
**Definition 2.13** ([4]).: _We define \(B_{j}(x)\) as follows_
\[B_{j}(x):= \ \langle 3f_{j}(x)-2b_{j},f_{j}(x)\rangle f_{j}(x)f_{j}(x)^{\top}\] \[\ +(b_{j}\circ f_{j}(x))f_{j}(x)^{\top}+f_{j}(x)(b_{j}\circ f_{j}( x))^{\top}\] \[\ +\langle f_{j}(x)-b_{j},f_{j}(x)\rangle\cdot\operatorname{ diag}(f_{j}(x))\] \[\ +\operatorname{diag}((2f_{j}(x)-b_{j})\circ f_{j}(x))\]
**Definition 2.14**.: _Given_
* _Let_ \(f_{j}(x)\) _be defined as Definition_ 2.11_._
* _Let_ \(c_{j}(x)\) _be defined as Definition_ 2.12_._
_We define \(G_{j}:\mathbb{R}^{d}\to\mathbb{R}^{k}\) as follows_
\[G_{j}(x):=-\underbrace{f_{j}(x)}_{k\times 1}\underbrace{c_{j}(x)^{\top}}_{1 \times k}\underbrace{f_{j}(x)}_{k\times 1}+\underbrace{\operatorname{diag}(f_{j}(x))}_{k \times k}\underbrace{c_{j}(x)}_{k\times 1}\]
For convenience, we define the following.
**Definition 2.15**.: _Given_
* \(f_{j}(x)\) _follows from Definition_ 2.11_._
* \(b_{j}\in\mathbb{R}^{k}\)__
_We define \(G_{j,1}:\mathbb{R}^{d}\to\mathbb{R}^{k}\) and \(G_{j,2}:\mathbb{R}^{d}\to\mathbb{R}^{k}\)_
* \(G_{j,1}:=f_{j}(x)(f_{j}(x)-b_{j})^{\top}f_{j}(x)\)__
* \(G_{j,2}:=\operatorname{diag}(f_{j}(x))(f_{j}(x)-b_{j})\)__
_Then it is obvious that \(G_{j}(x)=-G_{j,1}(x)+G_{j,2}(x)\) (see Definition 2.14)._
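The following NumPy sketch (our own illustration, not the authors' code) evaluates \(f_{j}\), \(c_{j}\) and \(G_{j}\), and checks numerically that \(A_{j}^{\top}G_{j}(x)\) matches a finite-difference estimate of \(\nabla L_{\exp,j}(x)\), as stated in Lemma 2.10.

```python
import numpy as np

def softmax_quantities(A_j, b_j, x):
    u = np.exp(A_j @ x)
    f = u / u.sum()              # f_j(x), Definition 2.11
    c = f - b_j                  # c_j(x), Definition 2.12
    G1 = (c @ f) * f             # G_{j,1}(x) = f_j (f_j - b_j)^T f_j
    G2 = f * c                   # G_{j,2}(x) = diag(f_j)(f_j - b_j)
    return f, c, -G1 + G2        # G_j(x) = -G_{j,1}(x) + G_{j,2}(x)

def loss_term(A_j, b_j, x):
    u = np.exp(A_j @ x)
    f = u / u.sum()
    return 0.5 * np.sum((f - b_j) ** 2)

rng = np.random.default_rng(1)
n, d = 6, 4
A_j = rng.standard_normal((n, d))
b_j = rng.random(n); b_j /= b_j.sum()
x = rng.standard_normal(d)

_, _, G = softmax_quantities(A_j, b_j, x)
analytic = A_j.T @ G             # Lemma 2.10, Part 1
eps = 1e-6
numeric = np.array([(loss_term(A_j, b_j, x + eps * e) - loss_term(A_j, b_j, x - eps * e)) / (2 * eps)
                    for e in np.eye(d)])
print(np.max(np.abs(analytic - numeric)))   # should be close to zero
```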
**Lemma 2.16**.: _Let \(B_{j}(x)\) be defined as Definition 2.13 and \(L_{\exp}(x)\) be defined as Definition 2.8, then we have_
\[\nabla^{2}L_{\exp}(x)=\sum_{j=1}^{n}A_{j}^{\top}B_{j}(x)A_{j}\]
Proof.: It trivially follows from Lemma 5.10 of [10].
### Definition of General Properties
**Definition 2.17** (\(l\)-Smooth).: _We say a differentiable function \(L(x):\mathbb{R}^{d}\to\mathbb{R}\) is \(l\)-smooth if_
\[\|\nabla L(x)-\nabla L(y)\|_{2}\leq l\cdot\|x-y\|_{2},\ \ \forall x,y\in \mathbb{R}^{d}.\]
**Definition 2.18** (Strong Convexity).: _We say a continuously differentiable function \(f:\mathbb{R}^{d}\to\mathbb{R}\) is \(\mu\)-strongly convex if there exists a positive number \(\mu\) such that_
\[f(y)\geq f(x)+\nabla f(x)^{\top}(y-x)+\frac{1}{2}\mu\|y-x\|_{2}^{2},\ \ \forall x,y\in\mathbb{R}^{d}.\]
_Equivalently, if the function is twice differentiable, then_
\[f(x)\text{ is }\mu-\text{strongly convex}\iff\nabla^{2}f(x)\succeq\mu I.\]
**Definition 2.19** (Polyak-Lojasiewicz Inequality).: _We say a function \(L(x):\mathbb{R}^{d}\to\mathbb{R}\) satisfies \(\mu\)-Polyak-Lojasiewicz (PL) inequality if for all \(x\in\mathbb{R}^{d}\), it holds that_
\[\frac{1}{2}\|\nabla L(x)\|^{2}\geq\mu(L(x)-L^{*}),\]
_where \(L^{*}:=\min_{x\in\mathbb{R}^{d}}L(x)\)._
We have the following existing lemma connecting strong convexity and the PL inequality.
**Lemma 2.20** ([11]).: _If a function \(L(x)\) is \(\mu\)-strongly convex, then it is \(\mu\)-PL._
### Simultaneous Perturbation Stochastic Approximation (SPSA)
**Definition 2.21** (Simultaneous Perturbation Stochastic Approximation (SPSA) [14]).: _Let \(L(x)\) be a loss function. For a point \(x_{0}\in\mathbb{R}^{d}\), we define the Simultaneous Perturbation Stochastic Approximation (SPSA) of \(L(x)\) on \(x_{0}\) as a vector \(\widehat{g}(x_{0})\in\mathbb{R}^{d}\) such that_
\[\widehat{g}(x_{0}):=\frac{L(x_{0}+\epsilon\cdot p)-L(x_{0}-\epsilon\cdot p)}{2\epsilon}\cdot p,\]
_where \(p\in\mathbb{R}^{d}\sim\mathcal{N}(0,I_{d})\) is the perturbation vector and \(\epsilon>0\) is the perturbation scale._
**Remark 2.22** (\(k\)-SPSA).: _The \(k\)-SPSA gradient estimate averages \(\widehat{g}(x)\) over \(k\) randomly sampled perturbations \(p\)._
**Lemma 2.23** ([10]).: _The gradient estimate \(\widehat{g}(x)\) is almost unbiased, i.e., as \(\epsilon\to 0\),_
\[\widehat{g}(x)=pp^{\top}\nabla L(x)\]
_with probability \(1\), and hence \(\mathbb{E}_{p}[\widehat{g}(x)\mid x]=\nabla L(x)\)._
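A minimal sketch of the SPSA estimate and its \(k\)-sample average (Definition 2.21 and Remark 2.22); this is our own illustration under assumed naming, not the original implementation.

```python
import numpy as np

def spsa_gradient(L, x, eps=1e-3, k=1, rng=None):
    """k-SPSA estimate: average of ((L(x + eps*p) - L(x - eps*p)) / (2*eps)) * p
    over k Gaussian perturbations p ~ N(0, I_d)."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x)
    for _ in range(k):
        p = rng.standard_normal(x.shape)
        g += (L(x + eps * p) - L(x - eps * p)) / (2 * eps) * p
    return g / k
```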
### Previous Results
**Lemma 2.24** (Lemma 2 in [11]).: _Let \(L\) be defined as Definition 2.8, then we have_
\[\mathbb{E}[\|\widehat{g}(x,\mathcal{B})\|^{2}]=\frac{d+k-1}{k}\cdot\mathbb{E} [\|\nabla L(x,\mathcal{B})\|^{2}],\]
_where \(k\) is the parameter for \(k\)-SPSA._
**Definition 2.25** (Gradient Covariance).: _The covariance of the SGD gradient estimate on a minibatch \(\mathcal{B}\) of size \(B\) is defined as_
\[\Sigma(x):=B\cdot(\mathbb{E}[\nabla L(x;\mathcal{B})\nabla L(x;\mathcal{B})^{\top}]-\nabla L(x)\nabla L(x)^{\top}).\]
**Lemma 2.26** (Lemma 5 in [11]).: _Let \(z\in\mathbb{R}^{n}\) with \(z_{i}\sim\mathcal{N}(0,1)\) i.i.d. Then it holds that_
\[\mathbb{E}[\widehat{g}(x,\mathcal{B})\widehat{g}(x,\mathcal{B})^{\top}]\] \[= (1+\frac{1}{n})\cdot(\nabla L(x)\nabla L(x)^{\top}+\frac{1}{B}\Sigma(x))+\frac{1}{n}I\cdot(\|\nabla L(x)\|^{2}+\frac{1}{B}\operatorname{tr}(\Sigma(x))).\]
## 3 Analysis for Softmax Function
In this section, we provide the analysis for the softmax loss function. In Section 3.1 we prove that the softmax loss function is smooth. In Section 3.2 we state some useful lemmas from previous work. In Section 3.3 we prove that \(G_{j,1}\) is smooth. In Section 3.4 we prove that \(G_{j,2}\) is smooth. In Section 3.5 we find the upper bound of the effective rank of \(H\) by upper bounding the stable rank of \(B(x)\). In Section 3.6 we state the inequality between the stable rank and the effective rank.
### Softmax Loss is Smooth
We have the following lemma
**Lemma 3.1**.: _Given_
* \(A\in\mathbb{R}^{n\times d}\)__
* \(R\geq 4\)__
* \(x,y\in\mathbb{R}^{d}\) _satisfy_ \(\|A(x-y)\|_{\infty}<0.01\)__
* \(\|A\|\leq R\)__
* _Let_ \(R_{f}:=n^{1.5}\exp(5R^{2})\)__
* _Let_ \(W=\operatorname{diag}(w)\)_, where_ \(w_{i}^{2}\leq\frac{1}{\sigma_{\max}(A)}\)
_Then the softmax loss function (Definition 2.8) \(L_{\exp,\operatorname{reg},j}(x)\) is \(l\)-smooth (Definition 2.17), where_
\[l=8RR_{f}.\]
Proof.: Let \(x,y\in\mathbb{R}^{d}\) be two arbitrary points. By Lemma 3.5 and Lemma 3.6 we have
\[\|\nabla L_{\exp,j}(x)-\nabla L_{\exp,j}(y)\|_{2}\] \[\leq \|A_{j}\|_{2}\cdot(\|G_{1}(x)-G_{1}(y)\|_{2}+\|G_{2}(x)-G_{2}(y) \|_{2})\] \[\leq 8RR_{f}\cdot\|x-y\|_{2}\]
where the first step follows from Lemma 2.10, Definition 2.15 and the triangle inequality, and the second step follows from \(\|A_{j}\|\leq R\), Lemma 3.5 and Lemma 3.6.
Trivially,
\[\nabla L_{\text{reg},j}(x)=A_{j}^{\top}W^{2}A_{j}x.\]
Thus we have
\[\|\nabla L_{\text{reg},j}(x)-\nabla L_{\text{reg},j}(y)\|_{2}\leq\|A_{j}^{\top}W^{2}A_{j}\|\cdot\|x-y\|_{2}\leq\|x-y\|_{2},\]
where the first step follows from the definition of the spectral norm, and the second step follows from \(w_{i}^{2}\leq\frac{1}{\sigma_{\max}(A)}\).
Adding the two bounds together gives \(l=8RR_{f}+1\); since \(8RR_{f}\gg 1\), we simply take \(l=8RR_{f}\), which completes the proof.
### Tools from previous work
**Lemma 3.2** (Lemma 5.2 in [1]).: _Let \(f_{j}:\mathbb{R}^{d}\to\mathbb{R}\) follows from Definition 2.11, then for \(\forall x\in\mathbb{R}^{d}\), it holds_
* \(\|f_{j}(x)\|_{2}\leq\|f_{j}(x)\|_{1}\leq 1\)_._
* \(0\preceq f_{j}(x)f_{j}(x)^{\top}\preceq I_{n}\)_._
* _Let_ \(b\in\mathbb{R}^{d}\)_,_ \(0\preceq(b\circ f_{j}(x))(b\circ f_{j}(x))^{\top}\preceq\|b\|_{\infty}^{2}f_{j }(x)f_{j}(x)^{\top}\preceq\|b\|_{\infty}^{2}I_{n}\)__
* _Let_ \(b\in\mathbb{R}^{d}\)_,_ \(\operatorname{diag}(b\circ b)\preceq\|b\|_{\infty}^{2}I_{n}\)__
* \(0\preceq\operatorname{diag}(f_{j}(x))\preceq\|f_{j}(x)\|_{\infty}I_{n}\preceq \|f_{j}(x)\|_{2}I_{n}\)_._
* \(0\preceq\operatorname{diag}(f_{j}(x)\circ f_{j}(x))\preceq\|f_{j}(x)\|_{ \infty}^{2}I_{n}\preceq\|f_{j}(x)\|_{2}I_{n}\)_._
**Fact 3.3** (Lemma 7.2 in [1]).: _If the following conditions hold_
* _Let_ \(A\in\mathbb{R}^{n\times d}\)__
* _Let_ \(R\geq 4\)__
* _Let_ \(x,y\in\mathbb{R}^{d}\) _satisfy_ \(\|A(x-y)\|_{\infty}<0.01\)__
* \(\|A\|\leq R\)__
* _Let_ \(R_{f}:=n^{1.5}\exp(5R^{2})\)__
_We have_
* _Part 0._ \(\|\exp(Ax)\|_{2}\leq\sqrt{n}\exp(R^{2})\)__
* _Part 1._ \(\|\exp(Ax)-\exp(Ay)\|_{2}\leq 2\sqrt{n}R\exp(R^{2})\cdot\|x-y\|_{2}\)__
* _Part 2._ \(\|f_{j}(x)-f_{j}(y)\|_{2}\leq R_{f}\cdot\|x-y\|_{2}\)__
**Lemma 3.4** ([16]).: _If the following conditions hold_
* \(\|A\|\leq R\)__
* \(\|x\|_{2}\leq R\)__
* _Let_ \(\beta\) _be a lower bound on_ \(\langle\exp(Ax),\mathbf{1}_{n}\rangle\)__
_Then we have_
\[\beta\geq\exp(-R^{2})\]
### Smoothness for function \(G_{j,1}\)
**Lemma 3.5**.: _We define_
\[G_{j,1}(x):=f_{j}(x)(f_{j}(x)-b_{j})^{\top}f_{j}(x).\]
_Then we have_
\[\|G_{j,1}(x)-G_{j,1}(y)\|_{2}\leq 5R_{f}\cdot\|x-y\|_{2}.\]
Proof.: Since \(f_{j}(x),b_{j}\in\mathbb{R}\), thus we have
\[G_{1}(x)=(f_{j}(x))^{3}-(f_{j}(x))^{2}b_{j}.\]
Then by Fact 3.3, we have
\[\|G_{1}(x)-G_{1}(y)\|_{2}\] \[\leq \|(f_{j}(x))^{3}-(f_{j}(y))^{3}\|_{2}+\|(f_{j}(x))^{2}-(f_{j}(y)) ^{2}\|_{2}\] \[\leq 5\|f_{j}(x)-f_{j}(y)\|_{2}\] \[\leq 5R_{f}\|x-y\|_{2}\]
where the first step follows from triangle inequality and Fact 3.3, the second step follows from Lemma 3.2, the last step follows from Fact 3.3.
### Smoothness for function \(G_{j,2}\)
**Lemma 3.6**.: _We define_
\[G_{j,2}(x):=\mathrm{diag}(f_{j}(x))(f_{j}(x)-b_{j}).\]
_Then we have_
\[\|G_{j,2}(x)-G_{j,2}(y)\|_{2}\leq 3R_{f}\cdot\|x-y\|_{2}.\]
Proof.: We have
\[\|G_{2}(x)-G_{2}(y)\|_{2}\] \[\leq\|(f_{j}(x))^{2}-(f_{j}(y))^{2}\|_{2}+\|f_{j}(x)-f_{j}(y)\|_{2}\] \[\leq 3\|f_{j}(x)-f_{j}(y)\|_{2}\] \[\leq 3R_{f}\|x-y\|_{2}\]
where the first step follows from \(f_{j}(x)\in\mathbb{R}\) and Fact 3.3, the second step follows from Lemma 3.2, the last step follows from Fact 3.3.
### Effective Rank Bound for \(H\)
**Lemma 3.7** (Upper Bound Stable Rank of \(B_{j}\)).: _Let \(B_{j}\) be defined as in Lemma 2.16, then we have_
\[\operatorname{srank}(B_{j})=(\frac{\|B_{j}\|_{F}}{\|B_{j}\|})^{2} \leq 2d+2,\]
_where \(\operatorname{srank}\) is defined as Definition 2.1._
Proof.: Firstly, we have
\[\frac{\|B_{j}\|_{F}}{\|B_{j}\|} \leq\frac{\sqrt{\operatorname{rank}(B_{j})}\|B_{j}\|}{\|B_{j}\|}\] \[=\sqrt{\operatorname{rank}(B_{j})}\]
where the first step follows from Fact 2.4, the second step follows from simple algebra.
Secondly, by applying Lemma 5.15 of [14], we can show that \(B_{j}\) is composed of several rank-1 matrices and diagonal matrices:
\[B_{\operatorname{rank},j}(x) :=\;\underbrace{\langle 3f_{j}(x)-2b_{j},f_{j}(x)\rangle f_{j}(x)f_{j}(x) ^{\top}}_{:=B_{\operatorname{rank},j}^{1}(x)}+\underbrace{(b_{j}\circ f_{j}(x ))f_{j}(x)^{\top}+f_{j}(x)(b_{j}\circ f_{j}(x))^{\top}}_{:=B_{\operatorname{ rank},j}^{2}(x)}\] \[B_{\operatorname{diag},j}(x) :=\;\underbrace{\langle f_{j}(x)-b_{j},f_{j}(x)\rangle\cdot \operatorname{diag}(f_{j}(x))}_{:=B_{\operatorname{diag},j}^{1}(x)}+ \underbrace{\operatorname{diag}((2f_{j}(x)-b_{j})\circ f_{j}(x))}_{:=B_{ \operatorname{diag},j}^{2}(x)}\]
Thus, we can bound \(\operatorname{rank}(B_{j})\) as follows
\[\operatorname{rank}(B_{j}) =\;\operatorname{rank}(B_{\operatorname{rank},j}+B_{\operatorname {diag},j})\] \[\leq\;\operatorname{rank}(B_{\operatorname{rank},j})+\operatorname {rank}(B_{\operatorname{diag},j})\] \[=\;\operatorname{rank}(B_{\operatorname{rank},j}^{1}+B_{ \operatorname{rank},j}^{2})+\operatorname{rank}(B_{\operatorname{diag},j}^{1} +B_{\operatorname{diag},j}^{2})\] \[\leq\;\operatorname{rank}(B_{\operatorname{rank},j}^{1})+ \operatorname{rank}(B_{\operatorname{rank},j}^{2})+\operatorname{rank}(B_{ \operatorname{diag},j}^{1})+\operatorname{rank}(B_{\operatorname{diag},j}^{2})\] \[\leq\;1+1+d+d\] \[=\;2d+2\]
where the first step follows from decomposing \(B_{j}\), the second step follows from Fact 2.4, the third step follows from decomposing \(B_{\operatorname{rank},j}\) and \(B_{\operatorname{diag},j}\), the fourth step follows from Fact 2.4, the fifth step follows from \(\operatorname{rank}(B_{\operatorname{rank},j}^{1})=\operatorname{rank}(B_{\operatorname{rank},j}^{2})=1\) and \(\operatorname{rank}(B_{\operatorname{diag},*})=\operatorname{nnz}(B_{\operatorname{diag},*})\leq d\) for \(B_{\operatorname{diag},*}\in\mathbb{R}^{d\times d}\), and the last step follows from simple algebra.
Thus, we obtain the bound for \((\frac{\|B_{j}\|_{F}}{\|B_{j}\|})^{2}\):
\[\frac{\|B_{j}\|_{F}}{\|B_{j}\|}\leq\sqrt{2d+2}\Longrightarrow(\frac{\|B_{j}\|_{F }}{\|B_{j}\|})^{2}\leq 2d+2\]
### The connection between effective rank and stable rank
The following lemma provides an upper bound for the effective rank of \(H\) in terms of the stable rank of \(B\).
**Lemma 3.8**.: _Let \(A\in\mathbb{R}^{n\times d},B\in\mathbb{R}^{n\times n}\) be two matrices. If the following conditions hold_
* \(\|B\|_{F}/\|B\|\leq r\)__
* _Let_ \(H=A^{\top}BA\)__
_Then,_
\[\operatorname{erank}(H)\leq\operatorname{rank}(A)\cdot r\cdot\kappa^{2}(A),\]
_where \(\operatorname{erank}\) is defined as in Definition 2.2. In particular, assuming \(n\geq d\), so that \(\operatorname{rank}(A)\leq d\), we get_
\[\operatorname{erank}(H)\leq dr\cdot\kappa^{2}(A).\]
Proof.: We have
\[\operatorname{tr}[H]= \operatorname{tr}[A^{\top}BA]\] \[= \operatorname{tr}[AA^{\top}B]\] \[\leq \|AA^{\top}\|_{F}\cdot\|B\|_{F}\] \[\leq \|A\|_{F}^{2}\cdot\|B\|_{F}\] \[\leq \operatorname{rank}(A)\cdot\sigma_{\max}^{2}(A)\cdot\|B\|_{F}, \tag{1}\]
where the first step follows from the definition of \(H\), the second step follows from the cyclic rule of the matrix trace, the third and fourth steps follow from the Cauchy-Schwarz inequality, and the last step follows from Fact 2.4.
Now we provide a lower bound for the spectral norm \(\|H\|\). We have
\[\|H\|=\|A^{\top}BA\|\geq\sigma_{\min}^{2}(A)\cdot\|B\|. \tag{2}\]
Thus by Eq. (1) and Eq. (2) we have,
\[\operatorname{tr}[H]/\|H\|\leq\frac{\operatorname{rank}(A)\cdot\sigma_{\max}^{ 2}(A)\cdot\|B\|_{F}}{\sigma_{\min}^{2}(A)\cdot\|B\|}\leq\operatorname{rank}( A)\cdot r\cdot\kappa^{2}(A).\]
Thus we complete the proof.
## 4 Loss Analysis for Gradient Descent
In this section, we provide the analysis for the loss in each iteration of gradient descent. In Section 4.1, we define how the parameters are updated using the zeroth-order gradient estimate. In Section 4.2, we analyze the decrease of the loss per iteration.
### Gradient Step
**Definition 4.1** (GD step).: _The gradient descent step based on the zero-th order method is defined as_
\[x_{t+1}\gets x_{t}-\eta\cdot\widehat{g}(x_{t}),\]
_where \(\widehat{g}(x_{t})\) is defined as Definition 2.21._
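A minimal sketch of this update rule combined with the \(k\)-SPSA estimate (our own illustration; minibatch sampling and the concrete loss function are assumptions supplied by the caller).

```python
import numpy as np

def zeroth_order_gd(L, x0, eta, num_iters, eps=1e-3, k=1, seed=0):
    # x_{t+1} <- x_t - eta * g_hat(x_t), with g_hat the k-SPSA estimate of grad L
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(num_iters):
        g = np.zeros_like(x)
        for _ in range(k):
            p = rng.standard_normal(x.shape)
            g += (L(x + eps * p) - L(x - eps * p)) / (2 * eps) * p
        x -= eta * g / k
    return x
```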
### Loss Decrease
We have the following convergence lemma.
**Lemma 4.2** (Convergence Rate).: _Let \(x_{t+1}\gets x_{t}-\eta\widehat{g}(x_{t})\), where \(\widehat{g}(x_{t})\) is computed with respect to the batch \(\mathcal{B}\). Consider \(L_{\exp}(x)\) as defined in Definition 2.8, then there exists a parameter_
\[\gamma=\frac{d^{2}\cdot\sqrt{2d+2}\cdot\kappa^{2}(A)+d-2}{k(d+2)}+1\]
_such that the expected loss decrease can be bounded as_
\[\mathbb{E}[L(x_{t+1})|x_{t}]-L(x_{t})\leq-\eta\|\nabla L(x_{t})\|^{2}+\frac{1} {2}\eta^{2}\ell\cdot\gamma\cdot\mathbb{E}[\|\nabla L(x;\mathcal{B})\|^{2}]\]
Proof.: By Taylor's theorem with remainder, we have that
\[L(x_{t+1})= L(x_{t})+\nabla L(x_{t})^{\top}(x_{t+1}-x_{t})\] \[+\int_{0}^{1}\lambda(x_{t+1}-x_{t})^{\top}\nabla^{2}L(\lambda x_{t+1}+(1-\lambda)x_{t})(x_{t+1}-x_{t})d\lambda. \tag{3}\]
Next, we bound the length of one update step:
\[\|x_{t+1}-x_{t}\| =\eta\cdot\|\widehat{g}(x;\mathcal{B})\|\] \[\leq\eta\sqrt{d}\cdot\frac{1}{kB}\sum_{i=1}^{k}\sum_{j=1}^{B}|z_{i }^{\top}\nabla L_{\exp,j}(x)|\] \[\leq\eta dG_{\max}(x_{t}),\]
where \(G_{\max}(x_{t}):=\max_{j\in[n]}\|\nabla L_{\exp,j}(x_{t})\|_{2}\). The first step follows from the definition of the GD step, the second step follows from the way we compute \(\widehat{g}\) (\(k\)-SPSA in Remark 2.22), and the third step follows from \(|z_{i}^{\top}\nabla L_{\exp,j}(x)|\leq G_{\max}(x_{t})\) and \(\sqrt{d}\leq d\).
Thus we have
\[\|\lambda x_{t+1}+(1-\lambda)x_{t}-x_{t}\|\leq\eta dG_{\max}(x_{t}). \tag{4}\]
This follows from simple algebra.
We define
\[H_{\lambda}(x_{t}):=\nabla^{2}L(\lambda x_{t+1}+(1-\lambda)x_{t}). \tag{5}\]
Then we have
\[L(x_{t+1})\leq L(x_{t})+\nabla L(x_{t})^{\top}(x_{t+1}-x_{t})+\frac{1}{2}(x_{t+1}-x_{t})^{\top}H_{\lambda}(x_{t})(x_{t+1}-x_{t})\]
\[=L(x_{t})-\eta\nabla L(x_{t})^{\top}\widehat{g}(x_{t};\mathcal{B})+\frac{1}{2}\eta^{2}\widehat{g}(x_{t};\mathcal{B})^{\top}H_{\lambda}(x_{t})\widehat{g}(x_{t};\mathcal{B}).\]
where step 1 follows from Eqs.(3), (4) and (5), step 2 follows from the way we update \(x_{t}\).
We have
\[\mathbb{E}[L(x_{t+1})|x_{t}] \leq L(x_{t})-\eta\|\nabla L(x_{t})\|^{2}+\frac{\eta^{2}}{2}\langle H_{\lambda}(x_{t}),\mathbb{E}[\widehat{g}(x;\mathcal{B})\widehat{g}(x;\mathcal{B})^{\top}]\rangle\] \[= L(x_{t})-\eta\|\nabla L(x_{t})\|^{2}+\frac{\eta^{2}}{2}\cdot\frac{d}{k(d+2)}\cdot(\|\nabla L(x_{t})\|^{2}+\frac{1}{B}\operatorname{tr}[\Sigma(x_{t})])\cdot\operatorname{tr}[H_{\lambda}(x_{t})]\] \[+\frac{\eta^{2}}{2}(1+\frac{d-2}{k(d+2)})(\nabla L(x_{t})^{\top}H_{\lambda}(x_{t})\nabla L(x_{t})+\frac{1}{B}\langle\Sigma(x_{t}),H_{\lambda}(x_{t})\rangle).\]
where step 1 follows from taking conditional expectation with respect to \(x_{t}\), step 2 follows from Lemma 2.26.
We have
* Part 1. \(\|H_{\lambda}(x_{t})\|\leq 8RR_{f}=l\), by Lemma 3.1;
* Part 2. \(\operatorname{tr}[H_{\lambda}(x_{t})]/\|H_{\lambda}(x_{t})\|\leq d\cdot\sqrt{2 d+2}\cdot\kappa^{2}(A)=r\), by Lemma 3.7 and Lemma 3.8.
Thus
\[\operatorname{tr}[H_{\lambda}(x_{t})]\leq 8RR_{f}\cdot d\cdot\sqrt{2d+2} \cdot\kappa^{2}(A)=lr. \tag{6}\]
This follows from combining **Part 1** and **Part 2**.
Then we have
\[\mathbb{E}[L(x_{t+1})|x_{t}] \leq L(x_{t})-\eta\|\nabla L(x_{t})\|^{2}+\frac{\eta^{2}l}{2}\cdot( \frac{dr+d-2}{k(d+2)}+1)\cdot(\|\nabla L(x_{t})\|^{2}+\frac{1}{B}\operatorname {tr}[\Sigma(x_{t})])\] \[= L(x_{t})-\eta\|\nabla L(x_{t})\|^{2}+\frac{\eta^{2}l}{2}\cdot( \frac{dr+d-2}{k(d+2)}+1)\cdot\mathbb{E}[\|\nabla L(x_{t};\mathcal{B})\|^{2}].\]
where step 1 follows from Eq. (6), step 2 follows from definition of \(\Sigma(x)\) (Definition 2.25).
Defining
\[\gamma =\frac{dr+d-2}{k(d+2)}+1\] \[=\frac{d^{2}\cdot\sqrt{2d+2}\cdot\kappa^{2}(A)+d-2}{k(d+2)}+1.\]
This completes the proof.
We also have the following result.
**Corollary 4.3**.: _By Lemma 4.2 and Lemma 2.24, choosing \(\eta=\eta_{0}/\gamma\), where \(\eta_{0}\) is the learning rate used in traditional SGD, we have_
\[\mathbb{E}[L(x_{t+1})|x_{t}]-L(x_{t})\leq\frac{1}{\gamma}\cdot(- \eta_{0}\|\nabla L(x_{t})\|^{2}+\frac{1}{2}\eta_{0}^{2}\ell\cdot\mathbb{E}[\| \nabla L(x;\mathcal{B})\|^{2}]).\]
## 5 Convergence Analysis
In this section, we provide the analysis for the convergence of our algorithm. Throughout this section, we use \(L^{*}:=\min_{x\in\mathbb{R}^{d}}L(x)\) to denote the global minimum of \(L(x)\). In Section 5.1, we prove that \(L_{\mathrm{exp,reg}}\) is strongly convex and thus satisfies the PL inequality. In Section 5.2, we upper bound the trace of the covariance matrix under certain assumptions. In Section 5.3, we state an existing result for traditional SGD. In Section 5.4, we provide our main result: we show that our algorithm has a convergence guarantee for the softmax loss function.
### Softmax Loss is Strongly Convex
We have the following lemma
**Lemma 5.1**.: _Let \(L_{\mathrm{exp,reg},j}(x)\) be defined as Definition 2.8, then there exists a parameter \(\mu\) such that it is \(\mu\)-strongly convex (Definition 2.18). And by Lemma 2.20, it is also \(\mu\)-PL._
Proof.: By the definition of strongly convex, we know that if a function \(f(x)\) is strongly convex, then
\[\nabla^{2}f(x)\succeq\mu I\]
where \(\mu\) is a positive constant.
Thus, by applying Lemma 2.9, \(L_{\mathrm{exp,reg},j}\) is strongly convex with parameter \(\mu\).
### Upper Bound Covariance
**Lemma 5.2**.: _Let \(\Sigma(x)\) be defined as in Definition 2.25. If_
\[\mathrm{tr}[\sum_{j\in[n]}A_{j}^{\top}G_{j}(x)G_{j}(x)^{\top}A_{j}]\leq\epsilon_{0}^{-1}\frac{n}{B^{2}}\alpha(L(x)-L^{*}) \tag{7}\]
_Then we have_
\[\mathrm{tr}[\Sigma(x)]\leq\alpha\cdot(L(x)-L^{*}),\]
_for all \(x\in\mathbb{R}^{d}\)._
Proof.: We have
\[\mathrm{tr}[\Sigma(x)] \leq |\,\mathrm{tr}[\Sigma(x)]|\] \[= |\,\mathrm{tr}[\nabla L(x;\mathcal{B})\nabla L(x;\mathcal{B})^{ \top}-\mathbb{E}[\nabla L(x;\mathcal{B})\nabla L(x;\mathcal{B})^{\top}]]|\] \[\leq \epsilon_{0}\cdot\frac{B^{2}}{n}\,\mathrm{tr}[\sum_{j\in[n]}A_{j }^{\top}G_{j}(x)G_{j}(x)^{\top}A_{j}]\] \[\leq \alpha\cdot(L(x)-L^{*})\]
where step 1 follows from simple algebra, step 2 follows from the definition of \(\Sigma(x)\), step 3 follows from Assumption 5.3 and Lemma 5.4, step 4 follows from Eq.(7).
**Assumption 5.3**.: _Let \(\epsilon_{0}=1/4\). We assume the following balanced distribution, for all \(\mathcal{B}\subset[n]\)_
* \(\sum_{j\in\mathcal{B}}A_{j}^{\top}G_{j}(x)G_{j}(x)^{\top}A_{j}\approx(1\pm \epsilon_{0})\frac{B}{n}\sum_{j\in[n]}A_{j}^{\top}G_{j}(x)G_{j}(x)^{\top}A_{j}\)__
* \(\sum_{j_{1}\neq j_{2}\in\mathcal{B}}A_{j_{1}}^{\top}G_{j_{1}}(x)G_{j_{2}}(x)^{ \top}A_{j_{2}}\approx(1\pm\epsilon_{0})B(B-1)\cdot\frac{1}{n}(\sum_{j\in[n]}A_{j }^{\top}G_{j}(x)G_{j}(x)^{\top}A_{j})\)
**Lemma 5.4**.: _Given_
* \(L(x;\mathcal{B})\) _follows from Definition_ 2.8__
* \(G_{j}(x)\) _follows from Definition_ 2.14_, for_\(\forall j\in[n]\)__
_Then we can show_
* _Part 1._ \[\nabla L(x;\mathcal{B})\nabla L(x;\mathcal{B})^{\top}=\sum_{j\in\mathcal{B}}A_{j}^{\top}G_{j}(x)G_{j}(x)^{\top}A_{j}+\sum_{j_{1}\neq j_{2}\in\mathcal{B}}A_{j_{1}}^{\top}G_{j_{1}}(x)G_{j_{2}}(x)^{\top}A_{j_{2}}\]
* _Part 2._ \[\mathbb{E}[\sum_{j\in\mathcal{B}}A_{j}^{\top}G_{j}(x)G_{j}(x)^{\top}A_{j}]=\frac{B}{n}\sum_{j\in[n]}A_{j}^{\top}G_{j}(x)G_{j}(x)^{\top}A_{j}\]
* _Part 3._ \[\mathbb{E}[\sum_{j_{1}\neq j_{2}\in\mathcal{B}}A_{j_{1}}^{\top}G_{j _{1}}(x)G_{j_{2}}(x)A_{j_{2}}] =B(B-1)\cdot(\frac{1}{n}\sum_{j\in[n]}A_{j}^{\top}G_{j}(x))(\frac {1}{n}\sum_{j\in[n]}G_{j}(x)^{\top}A_{j})\] \[\preceq B(B-1)\frac{1}{n}\sum_{j\in[n]}A_{j}^{\top}G_{j}(x)G_{j}(x )A_{j}\]
Proof.: **Proof of Part 1.** By applying Lemma 2.10, we have
\[\nabla L(x;\mathcal{B})\nabla L(x;\mathcal{B})^{\top} =\sum_{j_{1}\in\mathcal{B}}\nabla A_{j_{1}}^{\top}G_{j_{1}}(x)( \sum_{j_{2}\in\mathcal{B}}\nabla A_{j_{2}}^{\top}G_{j_{2}}(x))^{\top}\] \[=\sum_{j_{1},j_{2}\in\mathcal{B}}A_{j_{1}}^{\top}G_{j_{1}}(x)G_{j _{2}}(x)^{\top}A_{j_{2}}\]
where step 1 follows from Lemma 2.10, step 2 follows from simple algebra.
Then, by applying Fact 2.3 we have
\[\sum_{j_{1},j_{2}\in\mathcal{B}}A_{j_{1}}^{\top}G_{j_{1}}(x)G_{j_{2}}(x)^{\top }A_{j_{2}} =\sum_{j\in\mathcal{B}}A_{j}^{\top}G_{j}(x)G_{j}(x)^{\top}A_{j}+ \sum_{j_{1}\neq j_{2}\in\mathcal{B}}A_{j_{1}}^{\top}G_{j_{1}}(x)G_{j_{2}}(x)^{ \top}A_{j_{2}}\]
This completes the proof of Part 1.
**Proof of Part 2** We have
\[\mathbb{E}[\sum_{j\in\mathcal{B}}A_{j}^{\top}G_{j}(x)G_{j}(x)A_{j }]=\frac{B}{n}\sum_{j\in[n]}A_{j}^{\top}G_{j}(x)G_{j}(x)A_{j}\]
This follows from taking the expectation over the random minibatch \(\mathcal{B}\).
**Proof of Part 3** We have
\[\mathbb{E}[\sum_{j_{1}\neq j_{2}\in\mathcal{B}}A_{j_{1}}^{\top}G_ {j_{1}}(x)G_{j_{2}}(x)A_{j_{2}}] =B(B-1)\cdot(\frac{1}{n}\sum_{j\in[n]}A_{j}^{\top}G_{j}(x))(\frac {1}{n}\sum_{j\in[n]}G_{j}(x)^{\top}A_{j})\] \[\preceq B(B-1)\frac{1}{n}\sum_{j\in[n]}A_{j}^{\top}G_{j}(x)G_{j}(x)A_{j}\]
where step 1 follows from expectation, step 2 follows from Fact 2.6.
### Previous Results on SGD
**Lemma 5.5** (Lemma 4 in [31]).: _Assume a loss function satisfies_
* \(\mu\)_-PL (Definition_ 2.19_);_
* _It holds that_ \(\operatorname{tr}[\Sigma(x)]\leq\alpha\cdot(L(x)-L^{*})\)_;_
* \(l\)_-smooth;_
* _Its Hessian_ \(H\) _satisfies_ \(\operatorname{erank}(H)\leq r\)_._
_Then after_
\[O((\frac{l}{\mu}+\frac{l\alpha}{\mu^{2}B})\cdot\log\frac{L(x_{0})-L^{*}}{\epsilon})\]
_iterations of SGD (with the real gradient), it holds that_
\[\mathbb{E}[L(x_{t})]\leq L^{*}+\epsilon.\]
### Global Convergence of the Zero-th Order Algorithm
In this section, we provide the following global convergence theorem.
**Theorem 5.6** (Global convergence, formal version of Theorem 1.6).: _Given_
* _Let_ \(A_{j}\in\mathbb{R}^{n\times d}\)_,_ \(b_{j}\in\mathbb{R}^{n}\) _satisfies_ \(\|b_{j}\|_{1}\leq 1\) _for_ \(\forall j\in[n]\)__
* _Let_ \(R\geq 4\)_,_ \(\|A_{j}\|\leq R\)_,_ \(\|x\|_{2}\leq R\)__
* _Let_ \(W=\operatorname{diag}(w)\)_, where_ \(\min_{i}w_{i}^{2}\geq\mu/\sigma_{\min}(A_{j})\) _for all_ \(j\in[n]\)__
* _batch size_ \(|\mathcal{B}|=B\)__
* _Let_ \(\kappa(A)=\max_{j\in[n]}\kappa(A_{j})\)__
* _Let_ \(x_{0}\) _denote the initial point_
* _Let_ \(L(x)\) _be defined as Definition_ 2.8__
* _Let_ \(L^{*}=\min_{x}L(x)\)__
* _Let_ \(M:=\exp(O(R^{2}+\log n))\)__
* _Let_ \[t=O(M\cdot(1+d^{1.5}\cdot\kappa^{2}(A)/k)\cdot\mu^{-2}B^{-1}\log((L(x_{0})-L^{ *})/\epsilon)).\]
_We perform GD algorithm based on zero-th order (Definition 4.1) gradient estimate on it. Then after \(t\) iterations, we have_
\[\mathbb{E}[L(x_{t})]\leq L^{*}+\epsilon.\]
Proof.: Using Corollary 4.3, we obtain
\[\mathbb{E}[L(x_{t+1})|x_{t}]-L(x_{t})\leq\frac{1}{\gamma}\cdot[-\eta_{0}\|\nabla L (x_{t})\|^{2}+\frac{1}{2}\eta_{0}^{2}\ell\cdot\mathbb{E}[\|\nabla L(x;\mathcal{B })\|^{2}]],\]
where \(\eta_{0}\) is the learning rate used in traditional SGD. Note that
\[\mathbb{E}[\|\nabla L(x_{t};\mathcal{B})\|^{2}]=\|\nabla L(x_{t})\|^{2}+\frac{ 1}{B}\operatorname{tr}[\Sigma(x_{t})].\]
This follows from the definition of \(\Sigma(x)\) (Definition 2.25).
By selecting \(\eta_{0}\leq\frac{1}{l}\), we have
\[\mathbb{E}[L(x_{t+1})|x_{t}]-L(x_{t})\leq\frac{1}{\gamma}\cdot(-\frac{\eta_{0 }}{2}\|\nabla L(x_{t})\|^{2}+\frac{\eta_{0}^{2}l}{2B}\operatorname{tr}[ \Sigma(x_{t})]).\]
By Lemma 5.1 and Lemma 5.2, we have
\[\mathbb{E}[L(x_{t+1})|x_{t}]-L(x_{t})\leq\frac{1}{\gamma}(-\eta_{0}\mu+\frac{ \eta_{0}^{2}l\alpha}{2B})\cdot(\mathbb{E}[L(x_{t})]-L^{*}).\]
Thus by simple algebra, we obtain
\[\mathbb{E}[L(x_{t+1})]-L^{*}\leq(1-\frac{1}{\gamma}(\eta_{0}\mu-\frac{\eta_{0 }^{2}l\alpha}{2B}))\cdot(\mathbb{E}[L(x_{t})]-L^{*}).\]
Now by choosing \(\eta_{0}=\min\{\frac{1}{l},\frac{\mu B}{l\alpha}\}\), we have
\[\mathbb{E}[L(x_{t+1})]-L^{*}\leq(1-\frac{1}{\gamma}\cdot\min\{\frac{\mu}{2l}, \frac{\mu^{2}B}{2l\alpha}\})(\mathbb{E}[L(x_{t})]-L^{*}).\]
Now, to make \(\mathbb{E}[L(x_{t})]-L^{*}\leq\epsilon\), we need
\[t=\gamma\max\{\frac{2l}{\mu},\frac{2l\alpha}{\mu^{2}B}\}\log\frac{L(x_{0})-L^{ *}}{\epsilon}\]
iterations.
Plugging \(\gamma\) and \(l\), we get
\[t =16RR_{f}\cdot(\frac{d^{2}\cdot\sqrt{2d+2}\cdot\kappa^{2}(A)+d-2 }{k(d+2)}+1)\cdot\max\{\frac{1}{\mu},\frac{\alpha}{\mu^{2}B}\}\log\frac{L(x_{0 })-L^{*}}{\epsilon}\] \[=16R\beta^{-2}n^{1.5}\exp(3R^{2})\cdot(\frac{d^{2}\cdot\sqrt{2d+2 }\cdot\kappa^{2}(A)+d-2}{k(d+2)}+1)\cdot\max\{\frac{1}{\mu},\frac{\alpha}{\mu^ {2}B}\}\log\frac{L(x_{0})-L^{*}}{\epsilon}\] \[=16Rn^{1.5}\exp(5R^{2})\cdot(\frac{d^{2}\cdot\sqrt{2d+2}\cdot \kappa^{2}(A)+d-2}{k(d+2)}+1)\cdot\max\{\frac{1}{\mu},\frac{\alpha}{\mu^{2}B} \}\log\frac{L(x_{0})-L^{*}}{\epsilon}\] \[=O(n^{1.5}\exp(30R^{2})\cdot(\frac{d^{2}\cdot\sqrt{2d+2}\cdot \kappa^{2}(A)+d-2}{k(d+2)}+1)\cdot\mu^{-2}B^{-1}\log\frac{L(x_{0})-L^{*}}{ \epsilon})\] \[=O(M\cdot(1+d^{1.5}\cdot\kappa^{2}(A)/k)\cdot\mu^{-2}B^{-1}\log ((L(x_{0})-L^{*})/\epsilon)),\]
where step 1 follows from plugging in \(\gamma\) and \(l\), step 2 follows from plugging in \(R_{f}\) (Fact 3.3), step 3 follows from plugging in \(\beta\) (Lemma 3.4), step 4 follows from \(R\geq 4\) and choosing \(\alpha\) to be a large constant, and step 5 follows from the definition of \(M\).
Thus we complete the proof. |
2306.09365 | Fault Detection in Induction Motors using Functional Dimensionality
Reduction Methods | The implementation of strategies for fault detection and diagnosis on
rotating electrical machines is crucial for the reliability and safety of
modern industrial systems. The contribution of this work is a methodology that
combines the conventional strategy of Motor Current Signature Analysis with
functional dimensionality reduction methods, namely Functional Principal
Components Analysis and Functional Diffusion Maps, for detecting and
classifying fault conditions in induction motors. The results obtained from the
proposed scheme are very encouraging, revealing a potential use in the future
not only for real-time detection of the presence of a fault in an induction
motor, but also in the identification of a greater number of types of faults
present through an offline analysis. | María Barroso, José M. Bossio, Carlos M. Alaíz, Ángela Fernández | 2023-06-14T06:46:58Z | http://arxiv.org/abs/2306.09365v1 | # Fault Detection in Induction Motors using Functional Dimensionality Reduction Methods
###### Abstract
The implementation of strategies for fault detection and diagnosis on rotating electrical machines is crucial for the reliability and safety of modern industrial systems. The contribution of this work is a methodology that combines conventional strategy of Motor Current Signature Analysis with functional dimensionality reduction methods, namely Functional Principal Components Analysis and Functional Diffusion Maps, for detecting and classifying fault conditions in induction motors. The results obtained from the proposed scheme are very encouraging, revealing a potential use in the future not only for real-time detection of the presence of a fault in an induction motor, but also in the identification of a greater number of types of faults present through an offline analysis.
keywords: Induction Motors, Fault Detection, Functional Data, Dimensionality Reduction, FPCA, FDM.
## 1 Introduction
The implementation of strategies for incipient fault detection and diagnosis on Rotating Electrical Machines (REM) is very important for the reliability and safety of modern industrial systems. Its execution allows planning interruptions of continuous production processes in scheduled stops, thus reducing
maintenance time and associated economic losses [1; 2].
The diagnosis of faults present in a REM comprises the detection, identification and isolation of an anomaly, which can be achieved by using the information obtained on the operating condition of the equipment or drive [3]. As a result, it is possible to consider fault diagnosis as a pattern recognition problem with respect to the condition of a REM [4]. To effectively diagnose faults in a REM, it is essential to distinguish between failures originating from the machine itself, whether electrical or mechanical, and those corresponding to the associated load [5].
In recent decades, with the advancement of communication technologies and the inclusion of control devices in REM, non-invasive fault detection and diagnosis techniques based on the use of electrical variables have been studied more than those that use acoustic emissions, lubrication analysis, thermography and vibrations. The latter have been the most widely used techniques for some time; among the most common analysis methods are the Fast Fourier Transform (FFT) in the frequency domain, and wavelet analysis and empirical mode decomposition in the time-frequency domain [6].
The techniques based on electrical variables have focused mainly on methods built on Motor Current Signature Analysis (MCSA), instantaneous power analysis, and Park's vector analysis, among others [1]. In this way it is possible to detect a large number of failures in induction motors, which are associated with the presence of certain components of the frequency spectrum. It should be noted that, when performing these types of analyses, the presence of an expert in the area is required to carry out the task according to the information contained in the processed signals.
Its increasing use is due to the fact that the monitoring of the electrical variables to be analyzed is carried out without modifying the state or structure of the electrical machine [3]. In particular, the sensors are placed on a control panel, thus avoiding problems with difficult access to the equipment to be analyzed and even reducing the risks for the operator in dangerous environments.
In an ideal scenario, this whole process should be carried out automatically.
Although machine learning methods fit nicely for this purpose, we should take into account that it is very difficult to obtain a labeled sample in this context. Thus, supervised models should be discarded. As an alternative, dimensionality reduction and clustering methods could help to analyze and group the available electrical motor signals.
It is worth mentioning that electrical variables are functions, and hence they can be studied from a functional perspective using Functional Data Analysis (FDA; [7]) techniques. The novelty of this article resides precisely in applying functional dimensionality reduction methods, namely Functional Principal Component Analysis (FPCA; [8]) and Functional Diffusion Maps (FDM; [9]), to detect and identify faults due to broken bars and low-frequency load oscillations in induction motors (IMs). Moreover, a complete methodology is proposed, covering from data processing to fault detection and fault diagnosis.
The rest of the paper is organized as follows. In Section 2 the state of the art in fault detection applying machine learning techniques is presented. Section 3 briefly reviews the theory under the functional dimensionality reduction techniques used in this work. Section 4 explains the data collection and the applied experimental methodology, and shows the obtained results. Finally, Section 5 presents some conclusions of this work, as well as possible research lines to further extend it.
## 2 Machine learning and induction motor fault detection
In order to automate fault detection and diagnosis, significant progress has been made in the use of data processing, specifically data mining based on Artificial Intelligence methods. This has been achieved through the implementation and integration of different machine learning techniques and computational statistics, as shown in recent literature [5; 6; 10; 11; 12; 13; 14]. These advancements enable non-experts in the field to analyze systems without detailed knowledge of the model being studied, resulting in simpler diagnoses [3].
In [11], promising diagnostic techniques based on machine learning are presented, with a focus on their attributes. In [12], an analysis of the advantages and disadvantages of various intelligent diagnostic techniques used in REM is presented, including decision trees, support vector machines, principal component analysis, and genetic algorithms. This analysis was carried out based on the most common mechanical and electrical failures observed in these machines.
As explained in Section 1, we will focus on unsupervised machine learning methods, specifically on dimensionality reduction techniques. Regarding this field, in [14] Principal Component Analysis (PCA) is identified as one of the most promising Machine Learning techniques and is highlighted as a method that provides interesting results. Its use allows the identification of the most significant failure characteristics and the extraction of underlying patterns, all while reducing the data dimension.
Several case studies have focused on the use of PCA together with various Machine Learning techniques for diagnosing REM failures, with particular emphasis on the diagnosis of broken bars. Such is the case of [15], where an advanced signal processing method based on wavelet analysis, PCA, and multi-layer neural networks is presented. This technique enables the extraction of suitable characteristics, reduces the correlation between the extracted features, and determines the magnitude of a failure due to broken bars in an IM.
On the other hand, [16] provides a comparative analysis of three methods for detecting broken bars in an induction motor, based on electrical signal analysis, particularly MCSA, MSCSA and PCA. Additionally, [17] presents a method for the detection of broken bars through the use of PCA on the three stator currents, later used in the calculation of the \(\mathcal{Q}\) statistic that determines the presence or absence of the fault.
As mentioned before, due to their nature, electrical variables can be studied from a functional perspective using FDA techniques which, to the best of our knowledge, no previous study has applied to the problem at hand.
## 3 Functional dimensionality reduction methods
Dimensionality reduction methods are statistical techniques where high-dimensional data are represented in lower dimensional spaces, for example by capturing most of the variance of the original data in the new space, like PCA does, or by reflecting lower dimensional underlying manifolds where the original data lay, as manifold learning techniques intend [18].
When data are functions, they live in infinite-dimensional spaces, and hence finding low-dimensional representations becomes essential. Finding reliable representations of low-dimensional data, specifically in two or three dimensions, is beneficial in real-world problems for visualization, description and general exploration purposes [19]. Moreover, these representations can be used as feature vectors in supervised machine learning algorithms which require multivariate inputs [20].
The most popular technique is FPCA. However, non-linear dimensionality reduction methods such as FDM and Isomap [21] have been gaining popularity in recent years and have outperformed FPCA in some data applications.
In the next subsections, we will briefly introduce FDA, and present the theoretical framework for FPCA and FDM.
### Functional data analysis
Functional Data Analysis [7] studies samples of functions \(x_{1}(t),\ldots,x_{N}(t)\), where \(t\in\mathcal{J}\), \(\mathcal{J}\) is a compact interval, and each \(x_{i}(t)\) is the observation of an independent functional variable \(X_{i}\) identically distributed as \(X\). It is usual to assume that the functional variable \(X\) is a second order stochastic process, \(\mathrm{E}[X^{2}]<\infty\), and takes values in the Hilbert space of square integrable functions \(L^{2}([a,b])\) defined on the closed interval \([a,b]\subset\mathbb{R}\). Square integrable functions form a vector space and we can define the inner product of two functions by \(\langle x,y\rangle=\int_{a}^{b}x(t)y(t)dt\). The inner product allows to introduce the notion of distance between functions by the \(L^{2}\)-norm \(\|x\|^{2}=\langle x,x\rangle=\int_{a}^{b}x^{2}(t)dt\). Therefore, in \(L^{2}\) space, the distance between two functions can be calculated as the norm of their difference, which is expressed as \(\|x-y\|\).
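As a small illustration (assuming a common sampling grid, which is not required by the general theory), the inner product and the \(L^{2}\) distance between two discretized functions can be approximated by quadrature:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 201)           # common evaluation grid on [a, b] = [0, 1]
x = np.sin(2 * np.pi * t)
y = np.cos(2 * np.pi * t)

# trapezoidal quadrature weights
w = np.full_like(t, t[1] - t[0])
w[0] /= 2; w[-1] /= 2

inner = np.sum(w * x * y)                 # <x, y> = int_a^b x(t) y(t) dt
dist = np.sqrt(np.sum(w * (x - y) ** 2))  # ||x - y|| in L^2([a, b])
print(inner, dist)
```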
### Functional PCA
Functional Principal Component Analysis [8] is a linear functional dimensionality reduction method that generalizes multivariate Principal Component Analysis [22] to the case where data are functions. In FPCA, the infinite-dimensional random functions are projected onto the lower dimensional subspace generated by the eigenfunctions of the covariance operator.
Let \(x_{1}(t),\dots,x_{N}(t)\) be the realizations of a stochastic processes over a compact domain. The sample variability is characterized by the spectral decomposition of the sample covariance operator, \(\hat{\Gamma}y(t)=\int_{a}^{b}\hat{\gamma}(t,s)y(s)ds\) where \(\hat{\gamma}(s,t)=N^{-1}\sum_{i=1}^{N}x_{i}(s)x_{i}(t)\) is the sample covariance function. The directions \(\xi_{l}\) of the FPCA projection into an \(L\)-dimensional subspace are chosen such that they maximized the variance of the projection; more specifically they are the solution of the following problem:
\[\max_{\xi_{l}}\widehat{\mathrm{Var}}[\langle\xi_{l},X\rangle]\;\mathrm{s.t.} \;\langle\xi_{l},\xi_{k}\rangle=\delta_{lk},\;k\leq l,\;l=1,...,L.\]
The above expression can be simplified by using the sample covariance operator defined as
\[\max_{\xi_{l}}\langle\hat{\Gamma}\xi_{l},\xi_{l}\rangle\;\mathrm{s.t.}\; \langle\xi_{l},\xi_{k}\rangle=\delta_{lk},\;k\leq l,\;l=1,...,L.\]
The solutions of this problem are obtained by solving the eigenequation
\[\hat{\Gamma}\xi_{l}(t)=\lambda_{l}\xi_{l}(t),\;t\in[a,b], \tag{1}\]
where \(\lambda_{1}\geq\lambda_{2}\geq\dots\geq 0\) are the eigenvalues and \(\xi_{1},\xi_{2}\dots\) are the eigenfunctions, which form an orthonormal basis for the space of functions being analyzed. Hence, \(\hat{x}_{i}=\sum_{l=1}^{L}\hat{\theta}_{li}\xi_{l}\), with \(\hat{\theta}_{li}=\langle\xi_{l},x_{i}\rangle\) is a good approximation of \(x_{i}\) for a relevant choice of \(L\).
To apply this method, there are two possible strategies to approach the eigenanalysis problem (1): discretizing the functions or expressing the functions in a known basis. In both cases, we convert the continuous functional eigenanalysis problem into an approximately equivalent matrix eigenanalysis task. The whole
procedure of the first strategy, which will be the one used in the experiments, is shown in Algorithm 1. Here, FPCA is equivalent to a standard multivariate PCA with the metric defined by the quadrature weight matrix. For more information about the other strategy, please refer to [23].
```
0: X - Functional data matrix \(L\) - Embedding dimension \(\{t_{j}\}_{j=1}^{M}\) - Quadrature points
0:\(\{\tilde{\xi}_{l}\}_{l=1}^{L}\) - Discretized principal component functions \(\{\hat{\theta}_{l}\}_{l=1}^{L}\) - Principal component scores
1: Compute sample covariance matrix \(\hat{\Sigma}=N^{-1}\mathrm{X}^{\top}\mathrm{X}\), where \(N\) is the number of functional data.
2: Compute weight matrix \(\mathrm{W}\) from quadrature weights using some numeric integration rule.
3: Obtain eigenvalues \(\{\lambda_{l}\}_{l=1}^{L}\) and eigenvectors \(\{\mathrm{u}_{l}\}_{l=1}^{L}\) of \(\mathrm{W}^{1/2}\hat{\Sigma}\mathrm{W}^{1/2}\) satisfying \[\mathrm{W}^{1/2}\hat{\Sigma}\mathrm{W}^{1/2}\mathrm{u}_{l}=\lambda_{l}\mathrm{ u}_{l}\ \mathrm{s.t.}\ \mathrm{u}_{l}^{\top}\mathrm{u}_{k}=\delta_{lk},\ l,k=1, \ldots,L.\]
4: Calculate discretized principal component functions \(\tilde{\xi}_{l}=\mathrm{W}^{-1/2}\mathrm{u}_{l}\).
5: Calculate principal components scores \(\hat{\theta}_{l}=\mathrm{X}\mathrm{W}\tilde{\xi}_{l}\).
```
**Algorithm 1** FPCA over discretized functions.
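A minimal NumPy sketch of Algorithm 1 (our own illustration; it assumes curves observed on a common grid with trapezoidal quadrature weights, and the curves are centered here since the sample covariance in Section 3.2 is written for mean-zero curves):

```python
import numpy as np

def fpca_discretized(X, t, L=2):
    """X: (N, M) matrix of discretized curves; t: (M,) grid of quadrature points."""
    N, M = X.shape
    Xc = X - X.mean(axis=0)                       # center the sample
    w = np.zeros(M)                               # trapezoidal quadrature weights
    dt = np.diff(t)
    w[:-1] += dt / 2
    w[1:] += dt / 2
    Sigma = Xc.T @ Xc / N                         # sample covariance matrix
    sqw = np.sqrt(w)
    S = sqw[:, None] * Sigma * sqw[None, :]       # W^{1/2} Sigma W^{1/2}
    vals, vecs = np.linalg.eigh(S)
    idx = np.argsort(vals)[::-1][:L]              # keep the L largest eigenvalues
    U = vecs[:, idx]
    xi = U / sqw[:, None]                         # discretized eigenfunctions xi_l = W^{-1/2} u_l
    scores = Xc @ (w[:, None] * xi)               # scores theta_l = X W xi_l
    return xi, scores, vals[idx]
```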
### Functional DM
Functional Diffusion Maps [9] is a nonlinear dimensionality reduction algorithm applied to functional data that extend multivariate Diffusion Maps (DM, [24; 25]) to the functional domain. FDM seeks to identify low-dimensional representations of \(L^{2}\) functions on nonlinear functional manifolds after defining a diffusion process on a normalized graph based on pairwise similarities between functional data.
In more detail, let \(\mathcal{X}=x_{1}(t),...,x_{N}(t)\) be the realizations of a stochastic process over a compact domain. In this context, \(\mathcal{X}\) is assumed to lie on a functional manifold \(\mathcal{M}\subset L^{2}([a,b])\). To identify the underlying manifold, the initial stage of FDM involves building a weighted graph \(\mathrm{G}=(\mathcal{X},\mathrm{K})\), where the graph vertices are functions \(x_{i}(t)\) and the weights \(k_{ij}\) are obtained from a symmetric and pointwise positive \(N\times N\) kernel matrix \(\mathrm{K}\). This kernel matrix defines a local measure of similarity within a certain neighborhood, so that outside the neighborhood the function quickly goes to zero. The standard kernel used to compute the similarity between functional data is the Gaussian kernel, \(k_{ij}=\exp\frac{-\|x_{i}(t)-x_{j}(t)\|_{L^{2}}^{2}}{2\sigma^{2}}\), where the size of the local neighborhood considered is determined by the hyperparameter \(\sigma\). Alternatively, the Laplacian kernel can be used, defined as \(k_{ij}=\exp\frac{-\|x_{i}(t)-x_{j}(t)\|_{L^{1}}}{\sigma^{2}}\). These types of kernels define a connected graph.
Once the matrix \(\mathrm{K}\) is obtained, the connected graph is normalized using a density parameter \(\alpha\in[0,1]\). This results in a new graph, denoted as \(\mathrm{G^{\prime}}=(\mathcal{X},\mathrm{K}^{(\alpha)})\), where the entries of \(\mathrm{K}^{(\alpha)}\) are \(k_{ij}^{(\alpha)}=\frac{k_{ij}}{d_{i}^{\alpha}d_{j}^{\alpha}}\). Here, \(d_{i}=\sum_{j=1}^{N}k_{ij}\) is the degree of the graph and the power \(d_{i}^{\alpha}\) approximates the density of each vertex.
Now, we can create a Markov chain on the normalized graph whose transition matrix \(\mathrm{P}\) is defined by \(p_{ij}=\frac{k_{ij}^{(\alpha)}}{d_{i}^{(\alpha)}}\), where \(d_{i}^{(\alpha)}=\sum_{j}k_{ij}^{(\alpha)}\). The transition matrix \(\mathrm{P}\) provides the probabilities of arriving from node \(i\) to node \(j\) in a single step. By taking powers of the \(\mathrm{P}\) matrix, we can increase the number of steps taken in the random walk. This defines a _diffusion process_ that reveals the global geometric structure of \(\mathcal{X}\) at different scales.
Now we are ready to define a diffusion distance \(\mathrm{D_{T}}\) based on the geometrical structure of the obtained diffusion process,
\[\mathrm{D}_{T}^{2}(x_{i}(t),x_{j}(t))=\|p_{i\cdot}^{T}-p_{j\cdot}^{T}\|_{ \mathrm{L}^{2}(\frac{1}{\pi})}^{2}=\sum_{k}\frac{\left(p_{ik}^{T}-p_{jk}^{T} \right)^{2}}{\pi_{k}}.\]
Here, \(\pi\) denotes the stationary distribution of the Markov chain defined by \(\mathrm{P}\). This metric measures the similarities between data as the connectivity or probability of transition between them. Therefore, \(T\) plays the role of a scale parameter and \(\mathrm{D}_{T}^{2}(x_{i}(t),x_{j}(t))\) will be small if there exist many paths of length
\(T\) that connect \(x_{i}(t)\) and \(x_{j}(t)\).
Spectral analysis of the Markov chain allows us to consider an alternative formulation of the diffusion distance that uses eigenvalues and eigenvectors of P. As detailed in [24], even though P is not symmetric, it makes sense to perform its spectral decomposition using its left and right eigenvectors, such that \(p_{ij}=\sum_{l\geq 0}\lambda_{l}(\psi_{l})_{i}(\varphi_{l})_{j}\).
The eigenvalue \(\lambda_{0}=1\) of P is discarded since \(\psi_{0}\) is a vector with all its values equal to one. The other eigenvalues \(\lambda_{1},\lambda_{2},\ldots,\lambda_{N-1}\) tend to 0 and satisfy \(\lambda_{l}<1\) for all \(l\geq 1\). Thus, the diffusion distance can be approximated by the first \(L\) eigenvalues and eigenvectors using the new representation of P, \(\mathrm{D}_{T}^{2}(x_{i}(t),x_{j}(t))\approx\sum_{l=1}^{L}\lambda_{l}^{2T} \left((\psi_{l})_{i}-(\psi_{l})_{j}\right)^{2}.\)
Finally, the diffusion map is given by
\[\Psi_{T}(x_{i}(t))=\begin{pmatrix}\lambda_{1}^{T}\psi_{1}(x_{i}(t))\\ \vdots\\ \lambda_{L}^{T}\psi_{L}(x_{i}(t))\end{pmatrix}, \tag{2}\]
satisfying that the diffusion distance on the original space can be approximated by the Euclidean distance of the \(\Psi_{T}\) projections in \(\mathbb{R}^{L}\):
\[\mathrm{D}_{T}^{2}(x_{i}(t),x_{j}(t))\approx\sum_{l=1}^{L}\lambda_{l}^{2T} \left((\psi_{l})_{i}-(\psi_{l})_{j}\right)^{2}=\|\Psi(x_{i}(t))-\Psi(x_{j}(t)) \|^{2}.\]
The complete procedure of the method is presented in Algorithm 2.
```
0:\(L\) - Embedding dimension \(T\) - Steps in random walk \(\alpha\) - Density parameter \(\mathcal{K}\) - Kernel operator \(\mathcal{K}:L^{2}([a,b])\times L^{2}([a,b])\rightarrow\mathbb{R}\) \(\mathcal{X}=\{x_{1}(t),\ldots,x_{N}(t)\}\) - Functional dataset
0:\(\Psi_{T}(\mathcal{X})\) - Embedded functional data
1: Construct \(\mathrm{G}=(\mathcal{X},\mathrm{K})\), where \(\mathrm{K}\) is a positive and symmetric matrix with \(k_{ij}=\mathcal{K}(x_{i}(t),x_{j}(t))\).
2: Compute density of each vertex: \(d_{i}^{\alpha}=\left(\sum_{j=1}^{N}k_{ij}\right)^{\alpha}.\)
3: Construct \(\mathrm{G}^{\prime}=(\mathcal{X},\mathrm{K}^{(\alpha)})\) normalized by \(\alpha\) with \(k_{ij}^{(\alpha)}=\frac{k_{ij}}{d_{i}^{\alpha}d_{j}^{\alpha}}\).
4: Define transition matrix \(\mathrm{P}\) with \(p_{ij}=\frac{k_{ij}^{(\alpha)}}{d_{i}^{(\alpha)}}\), where \(d_{i}^{(\alpha)}=\sum_{j}k_{ij}^{(\alpha)}\).
5: Obtain eigenvalues \(\{\lambda_{l}\}_{l\geq 0}\) and right eigenvectors \(\{\psi_{l}\}_{l\geq 0}\) of \(\mathrm{P}\) such that \[\begin{cases}1&=\lambda_{0}>\lambda_{1}\geq\ldots\\ \mathrm{P}\psi_{l}&=\lambda_{l}\psi_{l}.\end{cases}\]
6: Calculate diffusion maps \[\Psi_{T}(x_{i}(t))=\begin{pmatrix}\lambda_{1}^{T}\psi_{1}(x_{i}(t))\\ \vdots\\ \lambda_{L}^{T}\psi_{L}(x_{i}(t))\end{pmatrix},\ \forall x_{i}(t)\in\mathcal{X}.\]
```
**Algorithm 2** FDM.
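A minimal NumPy sketch of Algorithm 2 (our own illustration; it assumes curves discretized on a common grid, a Gaussian kernel on the quadrature-approximated \(L^{2}\) distance, and takes only the real part of the eigendecomposition of the non-symmetric transition matrix):

```python
import numpy as np

def fdm(X, t, L=2, T=1, alpha=1.0, sigma=1.0):
    """X: (N, M) matrix of discretized curves; t: (M,) grid."""
    w = np.zeros(len(t)); dt = np.diff(t)
    w[:-1] += dt / 2; w[1:] += dt / 2                 # trapezoidal quadrature weights
    diff = X[:, None, :] - X[None, :, :]
    D2 = np.einsum('ijm,m->ij', diff ** 2, w)         # pairwise squared L2 distances
    K = np.exp(-D2 / (2 * sigma ** 2))                # Gaussian kernel graph
    d = K.sum(axis=1)
    K_alpha = K / np.outer(d ** alpha, d ** alpha)    # density normalization
    P = K_alpha / K_alpha.sum(axis=1, keepdims=True)  # Markov transition matrix
    vals, vecs = np.linalg.eig(P)                     # right eigenpairs of P
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # drop the trivial eigenpair (lambda_0 = 1) and keep the next L coordinates
    return (vals[1:L + 1] ** T) * vecs[:, 1:L + 1]
```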
oscillating torque reference of adjustable amplitude and frequency was added to the reference of said torque control, thus being able to represent the failure due to a low-frequency oscillating load. Specifically, at a frequency of 1 Hz and 2 Hz, with a percentage of the nominal load torque of the order of 2 % and 3 %, depending on each case. In addition, the torque control allowed to adapt its operation according to the different load values in the analyzed IM, particularly with values of 0, 20, 40, 60 and 80 % load.
The condition of a healthy motor and those with interrupted or broken bars were represented in the motor under test due to the possibility of using different types of rotors, in particular a healthy rotor and three rotors with broken bars (1, 2 and 3 continuous bars). In the tests carried out, two phase currents and two line voltages were measured, using AC current probes, and isolated voltage probes with \(10:1\) attenuation, respectively. Each measurement has a length of 128 000 samples and a sampling frequency of 8000 samples/second. More technical data and parameters of the IM for laboratory experimental results are shown in Table 1.
The recording and storage of electrical variables were carried out using a digital recorder with 4 channels, with data storage capacity through an internal memory in the recorder itself. In Figure 1, a schematic diagram of the laboratory assembly is shown (test bench and measurement equipment).
The analyzed dataset is made up of 10 measurements done with the IM without anomalies (Healthy Motor, HM) per load percentage. Each measurement was repeated twice per experimental setup, so a total of 50 HM measurements were compiled. To these we add 10 measurements for the motor with 1 broken bar (1BB), 10 measurements with 2 broken bars (2BB), and 10 measurements with 3 broken bars (3BB), again with two measurements for each percentage of load in the three cases; 10 measurements were also collected with load oscillation at 1 Hz with an amplitude of 2 % of the nominal load torque (Sinusoidal Oscillation Signal, SS_1), and 10 with 3 % of the load torque, with two measurements for each load percentage in both cases; and finally, 20 measurements with load oscillation at 2 Hz with 2 % and 3 % of the nominal load torque respectively (SS_2), two per load percentage in each case, were obtained. In this way, a total of 120 current measurements are available for data analysis using the proposed functional dimensionality reduction techniques.

| Rated power | Rated voltage | Frequency | Rated current | Rated speed | Power factor | Rotor inertia | N. rotor bars |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 5.5 kW | 380 V | 50 Hz | 11.1 A | 1470 rpm | 0.85 | 0.02 kgm\({}^{2}\) | 40 |

Table 1: Technical data and parameters of the IM for laboratory experimental results.
From these current signals, together with the two line voltage signals mentioned above, instantaneous active power (IAP) signals are obtained for the cases of motors with faults, as in [26]. Therefore, 70 IAP measurements are also available.
To summarize, Table 2 shows the labels used for each type of induction motor diagnostic throughout our experiments and their frequencies.
### Experimental methodology
This section presents the experimental framework developed in this research, which uses functional dimensionality reduction methods to detect and diagnose faults in induction motors.
The current and instantaneous active power data will be analyzed, including raw signals, their derivatives, and their Fourier transform. The latter are referred to as signatures in the literature and so here. Functional Principal Component Analysis and Functional Diffusion Maps will be applied to each type of
Figure 1: Schematic diagram of the test bench.
data, and the results obtained by each technique will be compared. Finally, a scheme for detecting and diagnosing failures in IMs will be proposed based on the obtained results.
#### 4.2.1 Data preprocessing
First of all, we align the data using the first \(x\)-axis cut-off point. Then, the measurement corresponding to one broken bar with a load of 20 % is dropped, as it was identified as an outlier by expert knowledge. This leaves a total of 119 current signals and 69 IAP signals. We scale the data to the range \([-1,1]\); by normalizing the signals in this way, it is possible to compare them independently of the associated load percentage. Since the data are periodic, we only consider the first 750 samples as a representative segment. Thus, we deal with the curves in the first 9.3625 ms.
Next, we estimate the derivatives of both current and IAP signals by finite differences [27] and the current and active power signatures by applying the Fast Fourier Transform (FFT; [28]) method to the normalized data.
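For illustration, the following minimal Python/NumPy sketch reproduces these steps on a single signal. The min–max scaling to \([-1,1]\), the use of the 8000 samples/second rate for the spacing of the finite differences, and the synthetic 50 Hz test waveform are assumptions made only for the example; it is not the exact implementation used in this work.

```python
import numpy as np

def preprocess(signal, n_steps=750):
    """Scale a 1-D signal to [-1, 1] and keep only the first n_steps samples."""
    s = 2.0 * (signal - signal.min()) / (signal.max() - signal.min()) - 1.0
    return s[:n_steps]

def derivative(signal, fs=8000.0):
    """Estimate the first derivative by finite differences."""
    return np.gradient(signal, 1.0 / fs)

def signature(signal, fs=8000.0):
    """Magnitude spectrum ("signature") of the signal via the FFT."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    return freqs, spectrum

# toy usage with a synthetic 50 Hz current-like waveform
fs = 8000.0
t = np.arange(0, 1.0, 1.0 / fs)
current = np.sin(2 * np.pi * 50 * t) + 0.01 * np.random.randn(t.size)
x = preprocess(current)
dx = derivative(x)
freqs, spec = signature(x)
```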
In Figure 2, examples of current data and instantaneous active power data obtained from motors with faults due to broken bars and motors with low-frequency load oscillations are shown. Specifically, these data were obtained
\begin{table}
\begin{tabular}{l l c c} \hline \hline IM Condition & Tag & N. current meas. & N. IAP meas. \\ \hline Healthy Motor & HM & 50 & - \\ \hline Induction Motor with 1 Broken Bar & 1BB & 10 & 10 \\ Induction Motor with 2 Broken Bars & 2BB & 10 & 10 \\ Induction Motor with 3 Broken Bars & 3BB & 10 & 10 \\ \hline Sinusoidal Signal load at 1 Hz, 1 mV & SS\_1\_A & 10 & 10 \\ Sinusoidal Signal load at 1 Hz, 1.5 mV & SS\_1\_B & 10 & 10 \\ Sinusoidal Signal load at 2 Hz, 1 mV & SS\_2\_A & 10 & 10 \\ Sinusoidal Signal load at 2 Hz, 1.5 mV & SS\_2\_B & 10 & 10 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Current and instantaneous active power data information.
from the motor with two broken bars and from the motor with 1 Hz, 1.5 mV load oscillations, with a load of 80 %. The figure shows the preprocessed signals, as well as their derivatives and FFTs. In addition, for the current data, signals obtained from the healthy motor at the same load percentage are also shown.
By visually analyzing the current data, we observe differences between data from faulty motors and data from the healthy motor. These differences are clearer in the case of FFTs, which show peaks around the fundamental frequency, equal to 50 Hz, when motors have faults.
Analyzing the IAP data, we barely find differences in the preprocessed signals and derivatives that allow us to distinguish the type of motor fault. However, considerable magnitude peaks are observed at low frequencies in the FFTs of the sinusoidal signals, which do not appear in the FFTs of motors with broken bars.
It is worth noting that, when healthy motors are visually discriminated from faulty motors using the current FFTs, and motors with low-frequency load oscillation faults are discriminated using the FFTs of the IAP, it is not necessary to resort to analyzing the instantaneous reactive power (IRP).
#### 4.2.2 Experimental configuration
FDM requires an initial analysis to identify suitable parameters that reveal patterns of interest or clusters in the data. Table 3 shows the hyperparameters obtained after a visual analysis of different parameter configurations. Based on the results, we can conclude that the Gaussian kernel gives better outcomes for current data, while the Laplacian kernel appears to be more appropriate for signatures.
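As an illustration of how the kernels of Table 3 enter the FDM computation, a minimal sketch of the Gaussian and Laplacian affinities on discretized curves is given below. The use of the \(L^{2}\) distance between curves and the density (\(\alpha\)) normalization in the style of Coifman and Lafon are assumptions about the exact formulation adopted here, made only for illustration.

```python
import numpy as np

def pairwise_l2(curves, dt=1.0):
    """Approximate L2 distances between discretized curves (rows of `curves`)."""
    diff = curves[:, None, :] - curves[None, :, :]
    return np.sqrt(np.sum(diff ** 2, axis=-1) * dt)

def gaussian_kernel(dist, sigma):
    return np.exp(-dist ** 2 / (2.0 * sigma ** 2))

def laplacian_kernel(dist, sigma):
    return np.exp(-dist / sigma)

def normalize_alpha(K, alpha):
    """Density normalization commonly used in diffusion maps."""
    d = K.sum(axis=1)
    return K / np.outer(d ** alpha, d ** alpha)

# toy usage with random curves and the (sigma, alpha) values of Table 3 for current signals
curves = np.random.randn(10, 750)
D = pairwise_l2(curves)
K = normalize_alpha(gaussian_kernel(D, sigma=0.035), alpha=1.0)
```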
### Experimental results
In these experiments, the embeddings obtained from FPCA and FDM are analyzed for both current and instantaneous active power data in the time and frequency domains. Initially, we will apply the functional dimensionality reduction
Figure 2: Current and IAP data from motors with faults due to two broken bars and motors with faults due to \(1\,\mathrm{Hz}\), \(1.5\,\mathrm{mV}\) load oscillations, using load at \(80\,\mathrm{\char 37}\). Current data from the healthy motor are also shown in the left panel.
techniques discussed previously to the raw current signal and its derivative. Following this, we will apply these methods to the current signatures, and the same analysis will be repeated for the instantaneous active power.
#### 4.3.1 Analysis of the current signal and its derivative
The goal of the first experiment is to detect motors with faults using dimension reduction methods over the current signals and their derivatives. Figure 3 shows the scatterplots of FPCA and FDM scores over them.
While the FPCA embedding over the current signals fails for data with high load percentages, the FPCA embedding obtained for the current signal derivatives groups data from the healthy motor and data from the motors with faults into separable clusters. Therefore, there is more discriminatory information in the growth of the current data than in the amplitude. The influence of load on the embeddings can also be observed, as the scores are grouped by load instead of by type of failure. Even so, it is worth highlighting the scores corresponding to motors with 3 broken bars, which appear at the embedding edges.
Similar analysis can be applied to the FDM embeddings, particularly to the one obtained using current derivatives, which groups data from the healthy motor in a distinct concentric circle, well separated from the rest. On the other hand, the embedding over the current signals shows an improvement compared to the FPCA embedding for this type of data, as it avoids mixing data from
\begin{table}
\begin{tabular}{l l l c c} \hline \hline Data & Type & Kernel & \(\sigma\) & \(\alpha\) \\ \hline Current & Signal & Gaussian & 0.035 & 1.0 \\ & Derivative & Gaussian & 10.0 & 0.0 \\ & Signature & Laplacian & 100.0 & 1.0 \\ \hline IAP & Signal & Gaussian & 0.1 & 0.5 \\ & Derivative & Gaussian & 5.0 & 0.0 \\ & Signature & Laplacian & 38.0 & 0.25 \\ \hline \hline \end{tabular}
\end{table}
Table 3: FDM hyperparameters.
faulty motors with data from the healthy one.
In summary, we have obtained an embedding that concentrates all the scores from the healthy motor in a single region and separates them from those of the faulty motors. This is a novel contribution to the state of the art in induction motor fault detection, as it uses the current signal instead of its signature and is obtained in an automated way. It suggests that, by means of an unsupervised analysis, it is possible to identify data coming from faulty motors. However, accurately identifying the type of malfunction remains a challenging task that requires additional information. In the subsequent experiments, we will therefore only use data from faulty motors in order to distinguish the type of fault.
#### 4.3.2 Analysis of the current signature
In the second experiment, we will verify if the proposed dimensionality reduction techniques allow us to determine the type of fault present in the motors.
Figure 3: FPCA and FDM embedding over the current signals and their derivatives.
We will examine whether these techniques can replace the common visual analysis of the current signature in induction motor fault detection literature [29]. Figure 4 shows the scatterplots of FPCA and FDM scores over the current signatures.
Both the FPCA and FDM embeddings separate data from motors with low-frequency load oscillations into three groups: motors with load oscillations at \(2\,\mathrm{Hz}\); motors with load oscillations at \(1\,\mathrm{Hz}/1\,\mathrm{mV}\); and motors with load oscillations at \(1\,\mathrm{Hz}/1.5\,\mathrm{mV}\). However, signals from motors with broken bars appear scattered. Therefore, neither embedding makes it possible to reliably separate faults due to broken bars from faults due to low-frequency load oscillations.
In conclusion, dimensionality reduction methods applied to the current signature are not sufficient to diagnose the type of failure in induction motors. More information is needed, so we resort to the instantaneous active power. As in the previous experiments, we will analyze the power signal, its derivative and, finally, its signature.
#### 4.3.3 Analysis of the instantaneous active power signal and its derivative
In the third experiment, the instantaneous active power signals and their derivatives are analyzed. Figure 5 shows the scatterplots of FPCA and FDM scores over the IAP signals and their derivatives.
None of the obtained embeddings allows diagnosing the type of fault. However,
Figure 4: FPCA and FDM embedding over the current signatures.
we can highlight the embedding obtained by FPCA on the IAP signal derivatives, as it is the one that separates the data the most: data from motors with broken-bar faults are grouped at the bottom, data from faulty motors with load oscillations at \(2\,\mathrm{Hz}\) in the middle, and data from faulty motors with load oscillations at \(1\,\mathrm{Hz}\) at the extremes. Nevertheless, the clusters appear very close and even overlap.
#### 4.3.4 Analysis of the instantaneous active power signature
In this last experiment, we will try to diagnose the type of motor fault by analyzing the IAP signatures. The FPCA and FDM embeddings obtained are displayed in Figure 6.
FPCA allows differentiating motors with failures due to broken bars from motors with failures due to load oscillations, also distinguishing oscillations at \(1\,\mathrm{Hz}\) from those at \(2\,\mathrm{Hz}\), since they appear well separated in the embedding. However, detecting
Figure 5: FPCA and FDM embedding over the IAP signals and their derivatives.
the number of broken bars remains a challenge, as these motors appear grouped together in the embedding. Another important contribution is that, by using the active power signature, the obtained embedding gives much less importance to the motor load percentage than the rest of the embeddings. By comparison, the FDM embedding also keeps the above classes separated, but the separation distance between classes is smaller.
### Detection and diagnosis algorithm
Based on the previous results, it can be concluded that functional dimensionality reduction methods perform well on electrical signals from induction motors, which enables the proposal of an algorithm for detecting and diagnosing induction motor faults. Figure 7 depicts the proposed fault detection and diagnosis algorithm.
First of all, current signals are collected from an induction motor, along with line voltage signals, which facilitate the computation of instantaneous active
Figure 6: FPCA and FDM embedding over the IAP signatures.
Figure 7: Unsupervised detection and diagnosis algorithm.
power data. Then, a preprocessing step is performed, consisting of aligning the data, scaling them to the range \([-1,1]\), and keeping only the first \(750\) samples as a representative segment. Subsequently, FPCA is applied to the time-domain derivative of a single stator current in order to distinguish healthy motors from motors with faults. Next, if the motor exhibits faults, FPCA is applied in the frequency domain, specifically to the frequency spectrum of the instantaneous active power signal, to distinguish between motors with broken-bar faults and those with low-frequency load oscillation faults, as well as their various subtypes.
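A minimal sketch of this two-stage scheme is given below. FPCA is approximated here by ordinary PCA applied to curves discretized on a common, equally spaced grid (a standard approximation in that setting); the function names and the placeholder selection of faulty motors are hypothetical and only illustrate the flow of the algorithm, not its actual implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

def fpca_scores(curves, n_components=2):
    """Approximate FPCA by PCA on curves sampled on a common, equally spaced grid."""
    return PCA(n_components=n_components).fit_transform(curves)

def detect_and_diagnose(current_derivatives, iap_signatures):
    """Two-stage scheme: (1) detect faulty motors from the current derivative,
    (2) diagnose the fault type from the IAP signature of the faulty motors.
    The cluster separation itself is left abstract here."""
    # Stage 1: embedding of the time-domain current derivatives
    stage1 = fpca_scores(current_derivatives)
    # ... separate the healthy-motor cluster from the rest (visually or with a
    # clustering algorithm); the indices of faulty motors are assumed known below.
    faulty_idx = np.arange(len(iap_signatures))  # placeholder
    # Stage 2: embedding of the IAP signatures of the faulty motors
    stage2 = fpca_scores(iap_signatures[faulty_idx])
    return stage1, stage2

# toy usage with random data of the same shapes as in this study
stage1, stage2 = detect_and_diagnose(np.random.randn(119, 750),
                                     np.random.randn(69, 376))
```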
## 5 Conclusions
Implementing strategies for the online early detection and diagnosis of faults in rotating electrical machines is crucial for ensuring the reliability and safety of modern industrial systems. It allows interruptions of continuous production processes to be planned as scheduled stops, thus reducing maintenance time and the associated economic losses [1; 2]. The diagnosis of faults in an REM comprises the detection, identification and isolation of an anomaly, carried out using the information obtained on the operating state of the equipment or drive [3]. As a result, fault diagnosis can be regarded as a pattern recognition problem with respect to the condition of an REM [4].
The proposed unsupervised scheme consists of using functional dimensionality reduction techniques, specifically FPCA and FDM, to detect and identify faults due to broken bars and low-frequency load oscillations in induction motors. Analyses of a single stator current and of the instantaneous active power of the IM are carried out, using both the raw data and their derivatives, as well as their signatures.
The results obtained with the proposed scheme are very encouraging, revealing its potential not only for real-time detection of the presence of a fault in an IM, but also for the identification of a larger number of fault types through offline analysis. Both FPCA and FDM give very similar results, although FPCA is more competitive for these data. We have
seen that FPCA on the derivative of the current signal makes it possible to distinguish motors with faults from healthy motors, and that applying this technique to the instantaneous active power signature makes it possible to diagnose the type of motor failure: broken bars or low-frequency load oscillations.
## Acknowledgements
The authors acknowledge financial support from the European Regional Development Fund and the Spanish State Research Agency of the Ministry of Economy, Industry, and Competitiveness under the project PID2019-106827GB-I00 / AEI / 10.13039/501100011033. They also thank the UAM-ADIC Chair for Data Science and Machine Learning.
|
2307.11603 | Cascaded multitask U-Net using topological loss for vessel segmentation
and centerline extraction | Vessel segmentation and centerline extraction are two crucial preliminary
tasks for many computer-aided diagnosis tools dealing with vascular diseases.
Recently, deep-learning based methods have been widely applied to these tasks.
However, classic deep-learning approaches struggle to capture the complex
geometry and specific topology of vascular networks, which is of the utmost
importance in most applications. To overcome these limitations, the clDice
loss, a topological loss that focuses on the vessel centerlines, has been
recently proposed. This loss requires computing, with a proposed soft-skeleton
algorithm, the skeletons of both the ground truth and the predicted
segmentation. However, the soft-skeleton algorithm provides suboptimal results
on 3D images, which makes the clDice hardly suitable on 3D images. In this
paper, we propose to replace the soft-skeleton algorithm by a U-Net which
computes the vascular skeleton directly from the segmentation. We show that our
method provides more accurate skeletons than the soft-skeleton algorithm. We
then build upon this network a cascaded U-Net trained with the clDice loss to
embed topological constraints during the segmentation. The resulting model is
able to predict both the vessel segmentation and centerlines with a more
accurate topology. | Pierre Rougé, Nicolas Passat, Odyssée Merveille | 2023-07-21T14:12:28Z | http://arxiv.org/abs/2307.11603v2 | # Cascaded multitask U-Net using topological loss for vessel segmentation and centerline extraction+
###### Abstract
Vessel segmentation and centerline extraction are two crucial preliminary tasks for many computer-aided diagnosis tools dealing with vascular diseases. Recently, deep-learning based methods have been widely applied to these tasks. However, classic deep-learning approaches struggle to capture the complex geometry and specific topology of vascular networks, which is of the utmost importance in most applications. To overcome these limitations, the clDice loss, a topological loss that focuses on the vessel centerlines, has been recently proposed. This loss requires computing, with a proposed soft-skeleton algorithm, the skeletons of both the ground truth and the predicted segmentation. However, the soft-skeleton algorithm provides suboptimal results on 3D images, which makes the clDice hardly suitable on 3D images. In this paper, we propose to replace the soft-skeleton algorithm by a U-Net which computes the vascular skeleton directly from the segmentation. We show that our method provides more accurate skeletons than the soft-skeleton algorithm. We then build upon this network a cascaded U-Net trained with the clDice loss to embed topological constraints during the segmentation. The resulting model is able to predict both the vessel segmentation and centerlines with a more accurate topology.
Keywords:vessel segmentation, U-Net, topological loss, deep-learning.
## 1 Introduction
Vascular diseases include all the alterations that occur in vessels, such as stenosis, aneurysm, thrombosis or embolism. Consequences of these diseases, such as stroke, are an important cause of death and disability worldwide. The accurate vessel segmentation from angiographic images, actively investigated over the last
30 years [5, 8], is an important step for the diagnosis and treatment of vascular diseases.
In the last decade, deep-learning has allowed significant progress in medical imaging [15] and especially in segmentation. In particular, it is increasingly applied to the segmentation of vascular structures [9]. In this context, one of the first approaches dealing with 3D angiographic images was proposed by [4] who used a multi pathway convolutional network processing each of the three orthogonal planes to segment the hepatic vessels in 3D. The same year, [6] and [14] used for the first time architectures building upon 2D and 3D U-Net, respectively, to segment the brain vessels. From this time on, efforts were geared towards designing networks dedicated to the segmentation of curvilinear structures. Notably, [10] proposed a convolutional network with two attention modules to encode both spatial and channel relationship. This method was tested on six different imaging modalities and nine datasets (2D and 3D). In the meantime, [17] designed an architecture aiming to perform simultaneously vessel segmentation, centerline prediction and bifurcation detection in angiographic images. Recently, [2] tackled the problem of data annotation and proposed a weakly-supervised deep-learning framework. The annotated patches were obtained using a classifier discriminating between vessel and non-vessel patches and K-means algorithm.
Despite these efforts, automatic segmentation of vascular networks remains a challenging topic, especially due to the complex topological and geometrical properties of vessels, and their sparseness in the images. By contrast to many anatomical structures, vessels do not constitute a compact volume at a specific position and scale. They are organized as a multiscale network (from large vessels to thin ones, close to the resolution of the image) in the whole image. This represents a challenge for deep-learning methods, especially when it is required to predict topologically correct results which are mandatory for certain tasks, for instance blood flow modeling [7].
To overcome these difficulties, [16] recently proposed a novel metric specifically designed to evaluate the quality of tubular structure segmentation. This metric, named clDice (for "centerline Dice"), mainly relies on the skeleton of the tubular structures instead of their whole volume, therefore focusing on topological information. To use this new metric as a loss function, it is necessary to compute the skeleton of the predicted segmentation in a differentiable manner. Therefore, the authors proposed a differentiable soft-skeleton algorithm. They experimentally showed that using the clDice loss with this soft-skeleton algorithm provides better and more connected segmentation of 2D tubular structures, for instance on retinal images, and on 3D tubular structures on the Vessap dataset [18]. However, this is a dataset of mouse brain vascular networks acquired at a very high resolution and with a research protocol. In practice, Magnetic Resonance (MR) and X-ray Computed Tomography (CT) images of vascular networks are much noisier and have a much lower resolution. In particular, to the best of our knowledge, the clDice has not been successfully applied to 3D images of human vascular networks.
Our work is inspired by [12] who proposed to perform the skeletonization via a multitask architecture with shared features fusion blocks which jointly perform centerline extraction and vessel segmentation. In a preliminary work, we extended this work to 3D images; however, the results were not convincing (see Appendix), as the number of parameters to train is very high compared to the amount of annotated data available in 3D. Moreover, the skeleton is computed based on the initial image and benefits only indirectly from the segmentation, which complicates the skeletonization task. Based on the above discussion, we propose to use a U-Net [13] to learn the skeletonization operation. We can then use this network as a differentiable skeletonization algorithm to train another segmentation network using the clDice loss. More precisely, we build upon this skeletonization network a cascaded U-Net architecture that aims to predict both the vascular volumes and their centerlines, and we train our architecture using the clDice loss to better take into account topological constraints. We evaluate our method against two standard U-Nets, trained respectively with the Dice loss and the clDice using the soft-skeletonization.
In this context, our main contributions are the following:
* we evaluate the performance of the clDice loss for 3D vascular segmentation of the human brain;
* we propose an efficient way of performing the skeletonization operation to compute the clDice;
* we propose a cascaded multitask U-Net architecture to jointly learn vessel and centerline segmentation.
The remainder of the article is organized as follows. In Section 2, we describe our methodological contribution. In Section 3, we present the experiments to assess the relevance of our method. In Section 4, we discuss our results and conclude.
## 2 Method
### clDice loss and soft-skeleton algorithm
As introduced by [16], the clDice derives from two metrics called _topology precision_ (\(T_{prec}\)) and _topology sensitivity_ (\(T_{sens}\)) in reference to the usual precision and sensitivity metrics. These metrics are defined as follows:
\[T_{prec}(C_{P},S_{G})=\frac{|C_{P}\cap S_{G}|}{|C_{P}|}\qquad T_{sens}(C_{G},S_{P})=\frac{|C_{G}\cap S_{P}|}{|C_{G}|} \tag{1}\]
where \(C_{P}\), \(C_{G}\), \(S_{P}\) and \(S_{G}\) are the predicted and ground truth centerlines and segmentations, respectively. The clDice is defined as the harmonic mean of \(T_{prec}\) and \(T_{sens}\):
\[clDice(S_{P},S_{G},C_{P},C_{G})=2\cdot\frac{T_{prec}(C_{P},S_{G})\cdot T_{sens }(C_{G},S_{P})}{T_{prec}(C_{P},S_{G})+T_{sens}(C_{G},S_{P})} \tag{2}\]
Most of the methods to extract a skeleton are not differentiable. Therefore, [16] proposed a differentiable soft-skeleton algorithm to use the clDice for training a neural network. This algorithm uses min and max filters to perform dilation
and erosion on the predicted segmentation. The induced iterative algorithm requires the setting of a parameter \(k\) (number of iterations) that depends on the dataset. Preliminary experiments (see Section 3.2) showed that the results from this soft-skeletonization are not sufficiently accurate for 3D vascular networks, in particular regarding its topology.
For these reasons, we chose to replace this algorithm by a U-Net trained for skeletonization purpose. We show that this solution provides more accurate skeletons.
### Model architecture
Our models are based on the U-Net architecture [13]. We use the U-Net presented in Figure 1 for the skeletonization task and as the backbone of our cascaded U-Net architecture. (Note that the same U-Net will also be used as baseline in the experiments.) This model is a standard U-Net with a depth of 4, using 2-stride convolution for down-sampling, instance normalization and leakyReLU activation function. This U-Net was trained with a Dice loss.
Our cascaded U-Net architecture is presented in Figure 2. It is composed of a first U-Net taking as input a Magnetic Resonance Angiography (MRA) [11] image and performing the segmentation task supervised by a Dice loss. This part of the architecture will be referred to as the _segmentation network_. The output of this network is concatenated with the MRA image and fed to a second U-Net performing the skeletonization task, also supervised by a Dice loss. This part of the architecture will be referred to as the _skeletonization network_. Similarly to [12], the training of these two networks is also constrained by the clDice loss. The cascaded U-Net final loss is then defined by:
\[Loss(S_{P},S_{G},C_{P},C_{G})=Dice(S_{P},S_{G})+\lambda_{1} \cdot Dice(C_{P},C_{G})\\ +\lambda_{2}\cdot clDice(S_{P},S_{G},C_{P},C_{G}) \tag{3}\]
with \(\lambda_{1},\lambda_{2}\in\mathbb{R}\) two weight parameters.
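For illustration, a minimal PyTorch sketch of the soft clDice of Eq. (2) and of the combined loss of Eq. (3) is given below. The soft Dice definition and the convention of turning each score into a loss as \(1-\text{score}\) are standard assumptions made for this sketch, not necessarily the exact implementation used in this work.

```python
import torch

def soft_dice(pred, target, eps=1e-6):
    """Soft Dice score between two soft masks."""
    inter = (pred * target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

def soft_cldice(s_p, s_g, c_p, c_g, eps=1e-6):
    """Soft clDice (Eq. 2) from soft segmentations (s_*) and soft skeletons (c_*)."""
    tprec = (c_p * s_g).sum() / (c_p.sum() + eps)   # topology precision, Eq. (1)
    tsens = (c_g * s_p).sum() / (c_g.sum() + eps)   # topology sensitivity, Eq. (1)
    return 2.0 * tprec * tsens / (tprec + tsens + eps)

def cascaded_loss(s_p, s_g, c_p, c_g, lambda1=0.25, lambda2=0.25):
    """Total loss of Eq. (3); each score is turned into a loss term as 1 - score."""
    return ((1 - soft_dice(s_p, s_g))
            + lambda1 * (1 - soft_dice(c_p, c_g))
            + lambda2 * (1 - soft_cldice(s_p, s_g, c_p, c_g)))

# toy usage on random soft masks
s_p, s_g = torch.rand(1, 1, 16, 16, 16), torch.rand(1, 1, 16, 16, 16)
c_p, c_g = torch.rand(1, 1, 16, 16, 16), torch.rand(1, 1, 16, 16, 16)
loss = cascaded_loss(s_p, s_g, c_p, c_g)
```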
Figure 1: The baseline U-Net architecture used in the proposed approach (see Section 2.2). The output of this network is either the vascular segmentation (as shown here) or the vascular skeleton, depending on the chosen task.
### Training configuration
As a preprocessing step, all the MRA volumes are first normalized by Z-score. Then, during the training, for each batch, we produce 2 patches of size \(192\times 192\times 64\) each randomly located in a randomly selected MRA volume. One epoch consists of 250 batches.
We used the same data augmentation strategy for all trained networks, applied to the patches and implemented in the Python package batchgenerators [3] (a minimal sketch of some of these intensity augmentations is given after the list):
* Rotation: applied around each axis \((x,y,z)\) with a probability of 0.2. The angles of rotation are drawn from a uniform distribution \(U(-30,30)\).
* Scaling: applied with a probability of 0.2. The scaling factor is drawn from \(U(0.7,1.4)\).
* Gaussian noise: applied with a probability of 0.1. The variance is drawn from \(U(0.0,0.1)\).
* Gaussian blur: applied with a probability of 0.1. The width of the Gaussian kernel is drawn from \(U(0.5,1.0)\).
* Brightness: modifies the voxel intensities by a multiplicative factor with a probability of 0.15. The multiplicative factor is drawn from \(U(0.75,1.25)\).
* Contrast: modifies the voxel intensities by a multiplicative factor and clips them to the original value range, with a probability of 0.15. The multiplicative factor is drawn from \(U(0.75,1.25)\).
* Low-resolution simulation: downsamples the image with nearest-neighbour interpolation, then upsamples it to its original size with cubic interpolation, with a probability of 0.125. The downsampling factor is drawn from \(U(0.5,1.0)\).
* Gamma transform: the input is normalized to \([0,1]\); then the voxel intensity \(i\) is transformed, with a probability of 0.1, as follows: \(i_{new}=i_{old}^{\gamma}\), with \(\gamma\sim U(0.7,1.5)\).
Figure 2: Architecture of the proposed cascaded U-Net (see Section 2.2).
* Mirroring: patches are mirrored along each axis.
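A minimal NumPy sketch of three of the intensity augmentations above (brightness, contrast, gamma) is given below; the actual experiments rely on the batchgenerators implementations, and the rescaling back to the original intensity range after the gamma transform is an assumption made for this sketch.

```python
import numpy as np

rng = np.random.default_rng()

def augment_intensity(patch):
    """Apply a subset of the intensity augmentations described above to a patch."""
    out = patch.copy()
    # Brightness: multiplicative factor in U(0.75, 1.25), probability 0.15
    if rng.random() < 0.15:
        out = out * rng.uniform(0.75, 1.25)
    # Contrast: multiplicative factor, then clip to the original range, probability 0.15
    if rng.random() < 0.15:
        lo, hi = out.min(), out.max()
        out = np.clip(out * rng.uniform(0.75, 1.25), lo, hi)
    # Gamma: normalize to [0, 1], apply i ** gamma with gamma in U(0.7, 1.5), probability 0.1
    if rng.random() < 0.1:
        lo, hi = out.min(), out.max()
        norm = (out - lo) / (hi - lo + 1e-8)
        out = norm ** rng.uniform(0.7, 1.5) * (hi - lo) + lo
    return out

patch = rng.normal(size=(192, 192, 64)).astype(np.float32)
augmented = augment_intensity(patch)
```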
Sliding windows with 25% overlap were used to reconstruct the full volume at inference time. A Gaussian kernel was applied on overlapping parts to reduce the weight of the voxels far from the center of the patches. We used stochastic gradient descent with Nesterov momentum with an initial learning rate set to 0.01. A linear learning rate decay was applied so that the learning rate reached 0 at the last epoch.
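For illustration, the optimizer and the linear learning-rate decay can be sketched in PyTorch as follows; the momentum value is not specified in the text and is an assumption, and the stand-in model only makes the snippet self-contained.

```python
import torch

model = torch.nn.Conv3d(1, 1, kernel_size=3)   # stand-in for the U-Net
n_epochs = 750
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.99, nesterov=True)   # momentum value assumed
# linear decay: the learning rate goes from 0.01 at the first epoch to ~0 at the last one
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda epoch: max(0.0, 1.0 - epoch / n_epochs))

for epoch in range(n_epochs):
    # ... one epoch = 250 batches of two 192 x 192 x 64 patches ...
    scheduler.step()
```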
For the cascaded multitask U-Net, we chose to first pretrain the two U-Nets separately before fine-tuning the network following the scheme described in Section 2.2. We initialized the weights of the segmentation network with the weights of the baseline U-Net trained with the Dice loss, and we pretrained the skeletonization network using as input the MRA image and the segmentation ground truth. Baseline models were trained for 750 epochs; for the cascaded multitask U-Net, the two networks were first pretrained for 750 epochs and then fine-tuned for 500 epochs.
## 3 Evaluation
In this section, we present the setup and results of the conducted experiments to evaluate our skeletonization network and cascaded multitask U-Net. All the results presented for deep-learning models were obtained through a 5-fold cross-validation.
### Dataset and metrics
**Dataset:** For this study, we used a publicly available dataset3[1] containing 34 time-of-flight (TOF) MRA volumes of the brain. All the volumes present a voxel resolution of \(0.513\times 0.513\times 0.800\) mm\({}^{3}\) and a size of \(448\times 448\times 128\). For all the volumes, the associated vessel segmentation ground truth was available. To produce the skeleton ground truth, we used the skeletonize function from the python package scikit-image.
Footnote 3: [https://public.kitware.com/Wiki/TubeTK/Data](https://public.kitware.com/Wiki/TubeTK/Data)
**Metrics:** In the evaluation, we adopted the following metrics: clDice (see Section 2.1), Precision (\(Prec\)), Sensitivity (\(Sens\)) and Dice similarity coefficient (\(DSC\)), defined by:
\[Prec=\frac{TP}{TP+FP}\qquad Sens=\frac{TP}{TP+FN}\qquad DSC=\frac{2\cdot TP}{2 \cdot TP+FP+FN} \tag{4}\]
with \(TP\): True Positives, \(FP\): False Positives, \(FN\): False Negatives.
Beyond these quantitative metrics, we also evaluated the topological quality of our segmentation by computing the following topological scores: mean absolute error of the first two Betti numbers \(\beta_{0}\) (number of connected components) and \(\beta_{1}\) (number of tunnels), and mean absolute error of the Euler characteristics \(\chi=\sum_{i}(-1)^{i}\beta_{i}\) with \(\beta_{i}\) the successive Betti numbers. Before computing the
metrics, we applied a post-processing on all the results to remove the connected components with a volume lower than 20 voxels.
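A minimal sketch of this post-processing and of two of the topological measures (\(\beta_{0}\) and \(\chi\)) using scikit-image is given below; \(\beta_{1}\) additionally requires the second Betti number and is omitted here, and the availability of skimage.measure.euler_number assumes a recent scikit-image version.

```python
import numpy as np
from skimage import measure, morphology

def topology_stats(binary_volume):
    """Post-process a 3-D mask and compute beta_0 and the Euler characteristic."""
    # remove connected components smaller than 20 voxels (post-processing step)
    mask = morphology.remove_small_objects(binary_volume.astype(bool), min_size=20)
    _, beta0 = measure.label(mask, return_num=True)   # beta_0: connected components
    chi = measure.euler_number(mask)                  # chi = beta_0 - beta_1 + beta_2
    return beta0, chi

volume = np.zeros((32, 32, 32), dtype=bool)
volume[4:10, 4:10, 4:10] = True        # a single small solid blob
print(topology_stats(volume))          # expected: (1, 1)
```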
### Skeletonization using U-Net
To evaluate our skeletonization network, we compared it to the soft-skeleton algorithm introduced in [16] for different values of the parameter \(k\). We computed, for both methods, the mean time required to perform the skeletonization per patch as well as the topological metrics introduced in Section 3.1. These results, presented in Table 1, show that our method produces better skeletons in regard to the topological metrics with a reasonable computation time. Qualitatively, the skeletons produced by the soft-skeleton algorithm present many disconnections and a thickness of several voxels compared to our skeletons which are thus closer to the ground truth skeletons (see Figure 4 in Appendix).
Based on this experiment, we chose \(k=3\) to train the baseline U-Net with clDice using the soft-skeleton algorithm in future experiments.
### Cascaded Multitask U-Net
Hyperparameters optimizationThe goal of the cascaded multitask U-Net is to improve the results of the segmentation network thanks to the clDice loss. As stated in Section 2.2, we have to tune the hyperparameters \(\lambda_{1}\) and \(\lambda_{2}\) in order to handle the trade-off between the skeletonization loss and the clDice loss. For this purpose, we evaluated the combinations of hyperparameters for \(\lambda_{1}\in\{0,0.25,0.5,0.75,1.0\}\) and \(\lambda_{2}\in\{0.25,0.5,0.75,1.0\}\). We also tested two training configurations: the first one where the skeletonization network weights are frozen; the second where they are updated during the cascaded U-Net training. In both configurations, the skeletonization network is first pre-trained.
Based on these experiments, we found that the best cascaded multitask U-Net training policy consists of freezing the weights of the skeletonization network and setting the loss weights to \((\lambda_{1},\lambda_{2})=(0.25,0.25)\).
Validation and comparisonsWe compared our network with two baselines:
\begin{table}
\begin{tabular}{l c c c c} \hline Model & Time (ms) & \(\chi\) error \(\downarrow\) & \(\beta_{0}\) error \(\downarrow\) & \(\beta_{1}\) error \(\downarrow\) \\ \hline soft-skel. alg. \(k\)=1 & **2.6**\(\pm\) 0.3 & 3.507 \(\pm\) 0.622 & 11.072 \(\pm\) 5.336 & 0.961 \(\pm\) 0.022 \\ soft-skel. alg. \(k\)=2 & 4.0 \(\pm\) 0.4 & 3.600 \(\pm\) 0.647 & 11.531 \(\pm\) 5.565 & 0.959 \(\pm\) 0.024 \\ soft-skel. alg. \(k\)=3 & 5.5 \(\pm\) 0.5 & 3.625 \(\pm\) 0.653 & 11.661 \(\pm\) 5.634 & 0.958 \(\pm\) 0.023 \\ soft-skel. alg. \(k\)=4 & 8.3 \(\pm\) 1.4 & 3.631 \(\pm\) 0.652 & 11.693 \(\pm\) 5.647 & 0.958 \(\pm\) 0.023 \\ soft-skel. alg. \(k\)=5 & 8.9 \(\pm\) 1.0 & 3.633 \(\pm\) 0.653 & 11.700 \(\pm\) 5.650 & 0.958 \(\pm\) 0.023 \\ \hline Skeletonization network & 12.4 \(\pm\) 5.6 & **1.602**\(\pm\) 0.209 & **3.197**\(\pm\) 1.565 & **0.757**\(\pm\) 0.067 \\ p-value (w.r.t. soft-skel. alg. \(k\)=3) & & \(p<0.001\) & \(p<0.001\) & \(p<0.001\) \\ \hline \end{tabular}
\end{table}
Table 1: Results of our skeletonization network (6th row) vs. the soft-skeleton algorithm for different values of \(k\) (1st to 5th row).
* a U-Net trained with a Dice loss;
* a U-Net trained with the combination of Dice loss and clDice loss but with the soft-skeleton algorithm. The combination of Dice and clDice losses is defined as follows: \[Loss(S_{P},S_{G})=(1-\alpha)\cdot Dice(S_{P},S_{G})+\alpha\cdot clDice(S_{P},S_{G})\] (5) The hyperparameter \(\alpha\) was optimized the same way as for our approach in the range \(\{0.1,0.2,0.3,0.4,0.5\}\). It was finally set to \(\alpha=0.1\).
Table 2 presents the results of the different baselines and of our method. The two baselines, U-Net (Dice) and U-Net (Dice+clDice), give similar results, as shown by the high p-values (\(p>0.05\)). Comparatively, our method presents a lower precision but a higher sensitivity, leading overall to similar DSC and clDice. We attribute this difference to the capacity of our method to detect more connected vessels, at the cost of some false positives. In terms of topological metrics, our method generally performs significantly better than the baselines. The \(\beta_{0}\) error of our approach is better than that of the baselines, but not significantly (\(p=0.59\)), probably due to its high variance. More experiments should be conducted on larger datasets to confirm these results.
Beyond this quantitative analysis, it is important to observe the results from a qualitative point of view. In particular, the good reconnection behavior induced by the clDice loss and the cascaded U-Net is illustrated in Figure 3. We can see that the two methods using the clDice loss better preserve the connectivity of small vessels in the distal parts of the vascular network, and that our approach reconnects even more vessels than the U-Net (Dice+clDice).
## 4 Conclusion
In this paper, we proposed to use a U-Net to learn the skeletonization operation required to compute the clDice loss. We showed that this method provides more topologically-accurate skeletons than the originally used soft-skeleton algorithm. We then proposed a cascaded multitask U-Net to learn a vascular segmentation with a topological constraint enforced by the clDice loss, computed with our skeletonization approach. We showed that this constraint improved the topology
\begin{table}
\begin{tabular}{l c c c} \hline Model & U-Net (Dice) & U-Net (Dice+clDice) & Cascaded U-Net \\ \hline \(DSC\uparrow\) & 0.751 \(\pm\) 0.017 & **0.752**\(\pm\) 0.015 (\(p=0.43\)) & 0.750 \(\pm\) 0.017 (\(p=0.71\)) \\ \(clDice\uparrow\) & 0.843 \(\pm\) 0.018 & 0.844 \(\pm\) 0.016 (\(p=0.52\)) & 0.843 \(\pm\) 0.017 (\(p=0.55\)) \\ \(Prec\uparrow\) & **0.774**\(\pm\) 0.030 & 0.772 \(\pm\) 0.029 (\(p=0.56\)) & 0.757 \(\pm\) 0.030 (\(p<0.001\)) \\ \(Sens\uparrow\) & 0.732 \(\pm\) 0.043 & 0.735 \(\pm\) 0.036 (\(p=0.37\)) & **0.746**\(\pm\) 0.040 (\(p<0.001\)) \\ \(\chi\) error \(\downarrow\) & 0.495 \(\pm\) 0.187 & 0.518 \(\pm\) 0.166 (\(p=0.34\)) & **0.379**\(\pm\) 0.186 (\(p<0.001\)) \\ \(\beta_{0}\) error \(\downarrow\) & 1.338 \(\pm\) 1.103 & 1.424 \(\pm\) 1.224 (\(p=0.22\)) & **1.263**\(\pm\) 0.874 (\(p=0.59\)) \\ \(\beta_{1}\) error \(\downarrow\) & 0.205 \(\pm\) 0.138 & 0.212 \(\pm\) 0.137 (\(p=0.72\)) & **0.170**\(\pm\) 0.104 (\(p=0.02\)) \\ \hline \end{tabular}
\end{table}
Table 2: Results of the Cascaded Multitask U-Net: mean \(\pm\) standard deviation values (p-value). The p-values are computed with respect to the U-Net (Dice) results.
of the vascular segmentations in 3D TOF MRA. Future work includes validating this approach on larger datasets and other organ vascular networks.
Figure 3: Segmentation results obtained with the three compared approaches. The red arrows show some vessels extremities where the topology is better preserved by our approach. |
2304.08602 | Crossing Roads of Federated Learning and Smart Grids: Overview,
Challenges, and Perspectives | Consumer's privacy is a main concern in Smart Grids (SGs) due to the
sensitivity of energy data, particularly when used to train machine learning
models for different services. These data-driven models often require huge
amounts of data to achieve acceptable performance leading in most cases to
risks of privacy leakage. By pushing the training to the edge, Federated
Learning (FL) offers a good compromise between privacy preservation and the
predictive performance of these models. The current paper presents an overview
of FL applications in SGs while discussing their advantages and drawbacks,
mainly in load forecasting, electric vehicles, fault diagnoses, load
disaggregation and renewable energies. In addition, an analysis of main design
trends and possible taxonomies is provided considering data partitioning, the
communication topology, and security mechanisms. Towards the end, an overview
of main challenges facing this technology and potential future directions is
presented. | Hafsa Bousbiat, Roumaysa Bousselidj, Yassine Himeur, Abbes Amira, Faycal Bensaali, Fodil Fadli, Wathiq Mansoor, Wilfried Elmenreich | 2023-04-17T20:41:43Z | http://arxiv.org/abs/2304.08602v1 | # Crossing Roads of Federated Learning and Smart Grids: Overview, Challenges, and Perspectives
###### Abstract
Consumer's privacy is a main concern in Smart Grids (\(\mathrm{SG}\)s) due to the sensitivity of energy data, particularly when used to train machine learning models for different services. These data-driven models often require huge amounts of data to achieve acceptable performance leading in most cases to risks of privacy leakage. By pushing the training to the edge, Federated Learning (FL) offers a good compromise between privacy preservation and the predictive performance of these models. The current paper presents an overview of FL applications in \(\mathrm{SG}\)s while discussing their advantages and drawbacks, mainly in load forecasting, electric vehicles, fault diagnoses, load disaggregation and renewable energies. In addition, an analysis of main design trends and possible taxonomies is provided considering data partitioning, the communication topology, and security mechanisms. Towards the end, an overview of main challenges facing this technology and potential future directions is presented.
the amount of data to transmit over the network and hence improves the efficiency of the \(\mathrm{SG}\) by reducing network congestion and improving response times [12]. Additionally, by training models on data from multiple sources, \(\mathrm{FL}\) can help create models more representative of the entire system and better adapt to changing conditions [13].
In pursuit of this goal, significant emphasis has been placed on \(\mathrm{SG}\) frameworks based on \(\mathrm{FL}\). Recently, a growing number of research articles have highlighted the crucial need for a review article that can enhance the research topic by accomplishing the following tasks: (i) identifying gaps in \(\mathrm{FL}\)-based \(\mathrm{SG}\) research; (ii) evaluating the quality and validity of research studies and recognizing their strengths and limitations; (iii) critiquing methodologies, providing analysis, and proposing alternative or improved methods; (iv) facilitating the improvement of research design by offering constructive feedback and suggestions; and (v) sharing research findings, and deriving recommendations.
### Paper's Contributions
To date, several studies separately investigating either \(\mathrm{SG}\) services or \(\mathrm{FL}\) already exist in the literature [14, 15, 16]. Nonetheless, no review article has been found that discusses the use of \(\mathrm{FL}\) in \(\mathrm{SG}\)s. Consequently, very little is known about existing state-of-the-art frameworks coupling the two fields, with no detailed analysis of the advantages and limitations of this technology. The current manuscript addresses this gap by closely examining the applications of \(\mathrm{FL}\) in \(\mathrm{SG}\) setups. The main contributions of the current paper are summarised as follows:
* A presentation of \(\mathrm{FL}\) fundamental concepts, including: (i) the definition of \(\mathrm{FL}\), (ii) categories of \(\mathrm{FL}\), (iii) communication topologies of \(\mathrm{FL}\), and (iv) evaluation of \(\mathrm{FL}\).
* A discussion of the degrees of freedom and research trends of \(\mathrm{FL}\)-based frameworks for \(\mathrm{SG}\), analyzing their advantages and drawbacks.
* A comprehensive literature review of existing \(\mathrm{FL}\)-based frameworks for the different services related to the management of \(\mathrm{SG}\) with a comparative analysis of the achieved results for each service.
* A discussion of implementation challenges by summarizing the lessons learned, critical open research questions, and future research direction.
### Methodology
The literature search considered four search databases: Scopus, IEEE, the ACM library, and Google Scholar, with the following query: _"federated learning"_ AND _("energy"_ OR _"smart grid"_ OR _"energy forecasting"_ OR _"load disaggregation"_ OR _"thermal comfort control"_). The search methodology resulted in 286 contributions, refined to 120 at the end, as explained in Figure 1. Duplicates were first eliminated, followed by a screening process based on the title, the abstract, and, later on, the full text. A final set of 120 contributions was considered. Figure 1 further illustrates the word cloud resulting from the analysis of the keywords. It reveals three main topics: _Energy applications_, _Security and privacy_, and the _Modeling approach_. Energy applications include load forecasting, demand response, energy efficiency, renewable energy, load disaggregation, and smart contracts. Regarding security, blockchain is one of the most frequently used keywords, along with data privacy, security, and poisoning attacks. The third topic includes keywords such as \(\mathrm{AI}\), machine learning, LSTM, deep neural networks, and convolutional neural networks. The remainder of the paper discusses related work based on an extended set of these topics.
### Organization
The remainder of the current manuscript proceeds as follows: Section 2 introduces basic concepts related to \(\mathrm{FL}\). Section 3 describes the main design dilemmas and research trends of the application of \(\mathrm{FL}\) in \(\mathrm{SG}\). Section 4 discusses existing \(\mathrm{FL}\)-based frameworks for the energy services in \(\mathrm{SG}\). Section 5 highlights this technology's primary challenges with possible future directions to address them discussed in Section 6. Finally, a conclusion of the current study is presented in Section 7.
## 2 Background
### _\(\mathrm{FL}\) Paradigm_
\(\mathrm{FL}\) is an \(\mathrm{ML}\) strategy that relies on training algorithms across various decentralized edge servers or devices holding local data patterns without the need to exchange them. This technique is applied in contrast to traditional centralized \(\mathrm{ML}\) schemes, in which all local data repositories are uploaded to a central server. Accordingly, \(\mathrm{FL}\) allows various actors to build a robust \(\mathrm{ML}\) algorithm without sharing data to overcome crucial challenges such as privacy preservation, right of access to sensitive data, and access to heterogeneous data. For this purpose, \(\mathrm{FL}\) involves the following four main learning steps [17, 18]:
1. **Global model initialisation**: The global model is randomly initialized at the beginning of the training process.
2. **Client selection**: Due to constraints on the communication network, only a subset of randomly selected nodes contribute to the training at each round. In some cases, the selection further considers network reliability (e.g., [19]) to avoid computational delay.
3. **Local training**: Each selected client trains the model on its local dataset \(\mathcal{D}\) for a certain number of epochs \(\mathcal{N}\). The updated model is then uploaded to the central node in the case of a centralized topology or broadcast to all the other nodes in the case of a decentralized topology.
4. **Model aggregation**: Upon reception of the locally trained models, an updated global model is built as a weighted average of the received models, where the weights are proportional to the amount of data held by each client. The new global model is sent back to the different nodes.
Steps 2 to 4 are repeated until convergence or until a maximum number of rounds is reached. The previously described aggregation algorithm is commonly referred to as the FedAvg algorithm [20]. Unlike centrally trained models, evaluating \(\mathrm{FL}\) frameworks requires examining additional aspects due to the distributed nature of the training process, which is sensitive to several attacks. Several evaluation frameworks exist in the literature [21, 22]. Considering the characteristics of \(\mathrm{SG}\)s, we provide an extended list of these evaluation aspects in Table 1. Figure 2 summarizes the learning steps adopted in \(\mathrm{FL}\) frameworks.
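For illustration, a minimal PyTorch sketch of one FedAvg round (client selection, local training, and data-size-weighted aggregation) is given below; the client representation, the 50% sampling rate and the placeholder local training routine are assumptions made only for the example.

```python
import copy
import random
import torch

def fedavg_round(global_model, clients, local_epochs=1):
    """One FedAvg round. `clients` is a list of (dataset_size, local_train_fn) pairs,
    where local_train_fn(model, epochs) updates a copy of the model in place."""
    selected = random.sample(clients, k=max(1, len(clients) // 2))  # client selection
    states, sizes = [], []
    for n_samples, local_train in selected:
        local_model = copy.deepcopy(global_model)
        local_train(local_model, local_epochs)          # local training
        states.append(local_model.state_dict())
        sizes.append(float(n_samples))
    total = sum(sizes)
    new_state = {}                                      # aggregation: weighted average
    for key in states[0]:
        new_state[key] = sum((w / total) * s[key].float() for w, s in zip(sizes, states))
    global_model.load_state_dict(new_state)
    return global_model

# toy usage with three clients holding different amounts of data
def make_client(n_samples):
    def train(model, epochs):
        pass                                            # placeholder local update
    return (n_samples, train)

model = torch.nn.Linear(4, 1)
fedavg_round(model, [make_client(100), make_client(50), make_client(25)])
```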
### Categories of \(\mathrm{FL}\)
Three different categories of \(\mathrm{FL}\) are identified [18, 24, 25, 26, 27] based on the data partitioning approach: (i) horizontal \(\mathrm{FL}\), (ii) vertical \(\mathrm{FL}\), and (iii) transfer \(\mathrm{FL}\). For clarity purposes, this section briefly discusses the three categories. _Horizontal Federated Learning (HFL)_, also referred to as sample-based \(\mathrm{FL}\)[24], is used in scenarios where local
Figure 1: Literature review process and keywords cloud analysis
Figure 2: Overview of learning steps in \(\mathrm{FL}\) (_image source [23]_)
datasets \(\mathcal{D}\) have the same feature space but different ID space [18, 26]. More precisely, \(\mathrm{HFL}\) is applicable in cases where the local datasets \(\mathcal{D}\) have the same structure but for different data samples. _Vertical Federated Learning (VFL)_, also referred to as feature-based \(\mathrm{FL}\)[24], is applicable in scenarios where the local datasets \(\mathcal{D}\) have different feature space with an overlapping ID space [18, 26]. As such, \(\mathrm{VFL}\) is applicable in cases where the local datasets \(\mathcal{D}\) have partial features and an overlapping set of data samples. In contrast with \(\mathrm{HFL}\), \(\mathrm{VFL}\) can turn out to be more complicated as it requires entity alignment [27]. The latter consists of finding records from two databases representing the same real-object [28]. Figure 3 illustrates the previous two approaches. _Transfer Federated Learning (TFL)_ is applicable in scenarios where local datasets \(\mathcal{D}\) have different feature space and/or different ID space [18, 26]. It can further be sub-categorized into: (1) Instance-based \(\mathrm{TFL}\), (2) Feature-based \(\mathrm{TFL}\), and (3) Parameter-based \(\mathrm{TFL}\).
### _Topologies of \(\mathrm{FL}\)_
The communication architecture between distant clients in \(\mathrm{FL}\) is a crucial aspect influencing performance. Considering related work, three main communication topologies can be identified: centralized \(\mathrm{FL}\), decentralized \(\mathrm{FL}\), and hierarchical \(\mathrm{FL}\). Figure 4 compares the first two topologies and classical learning. The hierarchical topology is not represented as it can be considered a variant of centralized learning with an extra communication hop. _Centralised FL_, also referred to as the client-server architecture, corresponds to a scenario where \(\mathcal{N}\) data owners (i.e., clients) locally train the same model using different local datasets \(\mathcal{D}\), with a central node aggregating these models. This architecture assumes that the clients are honest participants, while the server is honest but curious [24]. _Decentralised FL_, also referred to as the Peer-to-Peer (P2P) architecture, corresponds to a scenario where \(\mathcal{N}\) data owners train the same model using different local datasets \(\mathcal{D}\), with direct communication between these nodes to construct an updated model after each iteration. The P2P topology is thus a serverless topology where the learning does not require a central node to
\begin{table}
\begin{tabular}{p{56.9pt} p{113.8pt} p{113.8pt}} \hline \hline
**Aspect** & **Description** & **Measures** \\ \hline
**Convergence** & Evaluates the resources required to achieve acceptable performance & Number of communication rounds, training time \\ \hline
**Communication efficiency** & Evaluates the amount of data that needs to be transmitted between devices during the communication rounds & Amount of data transmitted, transmission time \\ \hline
**Data privacy** & Ensures that sensitive data is not compromised or leaked & Testing the model against possible attacks \\ \hline
**Model generalization** & Evaluates the model's performance in different scenarios & Model's performance on both independently and identically distributed (IID) and non-IID data \\ \hline
**Robustness** & Evaluates the system's performance under different parameters and conditions & Ablation and hyper-parameter studies \\ \hline \hline
\end{tabular}
\end{table} TABLE I: Evaluation of FL frameworks
Fig. 3: Data partitioning in FL for EV and CS
process [29, 30]. Furthermore, the FL paradigm is referred to as _Gossip learning_[31] when this architecture is adopted.
## 3 Overview of FL frameworks in \(\mathrm{SGs}\)
This section describes the main research trends and characteristics of FL frameworks in \(\mathrm{SGs}\). Five aspects are considered, including three that already emerged from the word cloud in Figure 1, that is _energy services_, _security and privacy_, and _modeling approach_. Two additional aspects are further investigated, related to the FL paradigm, that is, the _data partitioning_, and the _network topology_. Figure 5 illustrates key variants that emerged during the review process for each considered aspect and the relationship between them. The first four aspects are discussed in the following subsections. However, the application of FL for different energy services is presented in Section 4.
### Data Partitioning
The majority of the reviewed contributions show strong evidence for HFL. More precisely, 98% of the reviewed FL frameworks for \(\mathrm{SG}\)s adopt horizontal partitioning. Considering the characteristics of \(\mathrm{SG}\)s, this trend can be justified by the ownership of energy data by a single energy service provider (ESP). HFL is often combined with transfer learning. One principal subcategory, the parameter-based one, is mainly adopted, with little consideration given to instance-based and feature-based TFL. Parameter-based TFL consists of: (1) fine-tuning the global model after each training round at the client level, also referred to as _personalization_, or (2) initializing the global model with a model pre-trained on another domain [2]. The first variant was primarily adopted in several works [32, 33] to address the data's non-IIDness and enhance the performance of local models. Despite the success of this technique, little attention was dedicated to studying the computational delay that this
Figure 4: Communication Topologies of FL
Figure 5: Research trends and taxonomies of FL contributions in \(\mathrm{SGs}\)
technique may induce, considering the heterogeneity of the hardware characteristics of edge devices. The second variant was only adopted in reference [34], where the experimental setup demonstrated a strong need for fine-tuning.
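As a rough illustration of the personalization variant, the sketch below fine-tunes a received global model for a few local epochs on the client's private data; PyTorch, the two-layer network, the 24 input features, and the synthetic load data are all hypothetical choices made for illustration, not details taken from the cited works.

```python
import torch
from torch import nn

def personalize(global_state, local_loader, epochs=3, lr=1e-3):
    """Client-side personalization: start from the global weights and fine-tune locally."""
    model = nn.Sequential(nn.Linear(24, 32), nn.ReLU(), nn.Linear(32, 1))
    model.load_state_dict(global_state)                 # weights received from the server
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in local_loader:                        # client's private load data
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model                                         # personalized model stays on the client

# Hypothetical usage with synthetic data standing in for a household's load history.
xs = torch.randn(128, 24)
ys = xs.mean(dim=1, keepdim=True)
loader = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(xs, ys), batch_size=16)
base = nn.Sequential(nn.Linear(24, 32), nn.ReLU(), nn.Linear(32, 1))
personalized = personalize(base.state_dict(), loader)
```

The extra local epochs are exactly where the computational delay discussed above can arise on resource-constrained edge devices.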
An exception to the previous trend can be observed in the case of power prediction for EVs and CSs, as suggested in references [35, 36], where VFL was prominent. The primary purpose of adopting VFL is to obtain a complete record of each car driver through the combination of features recorded in the EV system and features recorded at the CS level. The development of secured VFL is still in its infancy, and the majority of the security mechanisms applied are only valid with simple ML models [27], basically linear models [18]. Consequently, this scheme is rarely used in SGs. In references [35, 36], it was combined with cryptographic alignment to mitigate the previous issue. However, it is expected that this counter-measure would increase the computational complexity.
### Model Characteristics
The _supervised learning_ paradigm was the most commonly used among the reviewed contributions, with eighty-one (81) contributions in total, that is 72%, adopting this paradigm, mainly with deep neural networks for energy modeling. While adopting this learning paradigm is viable in the case of some energy services, such as load forecasting and power generation prediction, the availability of labeled data remains a huge obstacle for other services, such as anomaly detection and load disaggregation. Even if feasible in laboratory and simulation setups, labeling is nearly impossible in real smart grid setups. A viable alternative, in this case, is the adoption of semi-supervised algorithms. This was examined in [37], showing competitive results for fault detection. Nonetheless, the experimental setup included only tests on a simulated dataset. Thus, further experiments are required before a clear conclusion can be established about the effect of this paradigm on the performance.
In contrast, _reinforcement and unsupervised learning_ received less attention. Reinforcement Learning (RL) was adopted in four (04) contributions [34, 38, 39, 40] for three (03) different services, namely low-voltage control in distribution networks, demand response in distribution networks, and energy management in EV networks. All these contributions demonstrated results comparable to supervised approaches. The only contribution suggesting an unsupervised learning approach, leveraging the k-means algorithm, was presented in reference [41] for the case of extracting power consumption patterns from load profiles.
### Communication Topology
A closer inspection of energy-related work reveals different and heterogeneous topologies in \(\mathrm{SG}\) derived from the two most popular \(\mathrm{FL}\) topologies presented earlier. Figure 6 illustrates the main topologies identified in the studies included in our literature search.
The client-server topology was the most commonly used in \(\mathrm{SG}\) setups with its two variants: (1) standard client-server and (2) clustered client-server. Studies have adopted the first topology as a typical architecture. However, it could lead to convergence issues due to the heterogeneous load profiles exhibited by different clients [42, 43, 44]. The clustered variant was thus suggested to address this limitation by using several central nodes, optionally communicating with each other, each responsible for aggregating only a subset of locally trained models to mitigate the effect of non-IIDness of the energy data.
Only a few FL studies adopt a peer-to-peer topology to overcome the single point of failure of the other alternatives by allowing each node to act as a client and a server simultaneously. For example, this architecture was adopted in [45], where the authors consider clients to be prosumers (i.e., producers and consumers of energy, for example a house with a PV system) to extend energy trading platforms with the concept of Peer-To-Peer (P2P) energy sharing. This topology is particularly interesting for small energy communities. Nonetheless, it depends on the hardware resources available at the edge since the aggregation is performed locally. Furthermore, it requires a high communication bandwidth since each client broadcasts its local model to the rest of the clients.
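For intuition, the following minimal sketch shows a simplified gossip-style averaging round in a serverless topology, where each node mixes its model with those of its neighbours; the ring neighbourhood, the number of nodes, and the NumPy weight vectors are assumptions made purely for illustration.

```python
import numpy as np

def gossip_round(models, neighbours):
    """One decentralized averaging round: every node averages its weights with the
    weights received from its neighbours, without any central server."""
    return [
        np.mean([models[i]] + [models[j] for j in neighbours[i]], axis=0)
        for i in range(len(models))
    ]

# Five hypothetical prosumer nodes arranged in a ring.
rng = np.random.default_rng(1)
models = [rng.normal(size=4) for _ in range(5)]
ring = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
for _ in range(20):
    models = gossip_round(models, ring)
# After enough rounds, all nodes hold approximately the same averaged model.
```

The bandwidth cost mentioned above comes from the fact that every node must exchange its full model with each neighbour at every round.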
A major challenge for the previous two topologies is the non-IIDness of energy data due to its dependency on the consumer's behavior [46], particularly in the residential sector. To deal with this problem, the edge computing layer can be leveraged. The hierarchical topology was thus adopted in several frameworks. This topology introduces an extra communication hop in the standard client-server topology, where an intermediate aggregation is performed. The reason behind this aggregation is to group consumers according to the similarity of their energy consumption data [42], which allows for mitigating the convergence issues. Despite adopting this topology, a consensus on the best clustering techniques/criteria is still not established. The conducted review revealed the following criteria:
* _Localisation_ is the most popular criterion. It was adopted in eight (8) frameworks out of twelve (12) total energy-related hierarchical FL frameworks. The intuition behind adopting longitude and latitude as clustering criteria is that buildings/clients in the same close regions will have the same weather conditions, resulting in similar usage patterns and, thus, similar energy consumption.
* _Attributes_:
  1. _Attributes of the clients_ are also a viable clustering criterion, the intuition being that clients sharing socio-economic features (e.g., employee/retired or family/students) are likely to exhibit similar load profiles [47], leading to faster convergence.
  2. _Attributes of the building_ were adopted in reference [46], including building type, facing direction, region, rental units, and heating type. Building attributes define the consumer's interaction with the major loads composing the power load, such as the heating system.
* _Load profiles_ were suggested as a clustering criterion in reference [47], where the authors argued that the direct use of the power curve allows overcoming convergence issues and is more efficient compared to other clustering criteria. More precisely, the authors suggest using the low-frequency energy data collected by the ESP for billing purposes to perform the clustering, while the locally stored high-frequency data is used for the training.
* _Parameters of local models_ leveraging unsupervised clustering was adopted in reference [42] where the clustering is based on the hyper-parameters of local neural networks trained on edge devices. Although the proposition showed high potential in accurately clustering the clients, it is limited by its high dependency on the capabilities of the client devices for performing the local training.
* _Membership value_ allows evaluating the membership of each client to each cluster considering different training rounds. This approach results in dynamic clustering at each training round considering the potential change in consumption, particularly applied in the case of predicting the energy consumption of charging stations [48].
All the previously defined topologies have a direct influence on the aggregation algorithm and, thus, on the convergence and the expected performance. The choice of the clustering technique remains highly dependent on the task modeled in the SG, and, in general, it remains unclear which clustering approach/criterion is best. However, comparing clustering based on (1) load profiles and (2) attributes of the clients for the task of load forecasting, the authors of [47] demonstrated that load profiles provide the best results, which was further confirmed in reference [49].
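To illustrate how load-profile clustering could feed a hierarchical aggregation, the sketch below groups clients by k-means over their load curves and then averages models within each cluster before a final, size-weighted global average; the scikit-learn call, the cluster count, and the synthetic load curves and models are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_clients, horizon, dim = 30, 48, 4
load_profiles = rng.normal(size=(n_clients, horizon))   # e.g., half-hourly billing data
local_models = [rng.normal(size=dim) for _ in range(n_clients)]

# Edge level: cluster clients exhibiting similar consumption patterns.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(load_profiles)

# Intermediate aggregation: one averaged model per cluster.
cluster_models = {
    c: np.mean([m for m, l in zip(local_models, labels) if l == c], axis=0)
    for c in np.unique(labels)
}

# Cloud level: weight each cluster model by the number of clients it represents.
sizes = {c: int(np.sum(labels == c)) for c in cluster_models}
global_model = sum(cluster_models[c] * sizes[c] for c in cluster_models) / n_clients
```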
### Security Mechanism
Despite the privacy-preserving character of FL, recent contributions demonstrated that it suffers from several security issues [50, 51]. As the energy data is subject to strict regulations [52], this aspect becomes of paramount importance for the different actors of the power grid. Various security mechanisms were adopted in related work, as illustrated in Figure 3. Authentication is the simplest security mechanism among the analyzed frameworks, implemented commonly through a third-party server [53, 54]. An attribute access control mechanism was suggested in reference [55]. These suggestions introduce a new hop to existing topologies, a trusted third-party server, inducing an extra step where the clients are granted access to avoid malicious injections. The authenticated clients are assumed to be fully trusted.
Despite their complexity, cryptography algorithms were leveraged in several contributions to protect the weights of local models upon transfer through the network. Two main trends of encryption schemes can be identified in related work: (1) the encrypt/decrypt scheme and (2) homomorphic encryption. The first involves using a private/public key pair to encrypt the model before exchanging it through the network, implemented with different approaches including a KDC server [56], Paillier encryption [15], and hashing and RSA cryptography [35]. Alternatively, homomorphic encryption allows performing calculations directly on the encrypted model, as suggested in references [57, 58, 59]. While the first alternative is robust against False Data Injection (FDI), the second is more optimized and prevents computational delays. Therefore, adopting one of these two techniques depends highly on the requirements of the service offered. For example, homomorphic encryption would be more suitable for short-term residential load forecasting due to time constraints.
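As a rough sketch of what homomorphic aggregation means in practice, the snippet below averages encrypted weights on the server without ever decrypting them. It assumes the third-party python-paillier (`phe`) package is available; the key length and toy weight values are illustrative assumptions.

```python
from phe import paillier  # python-paillier, assumed installed

# Key generation; in practice the key pair would be managed outside the aggregator.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Three hypothetical clients each encrypt one model weight before upload.
client_weights = [0.12, -0.05, 0.31]
encrypted = [public_key.encrypt(w) for w in client_weights]

# Server side: additions and scalar multiplications operate on ciphertexts,
# so the aggregator never sees the individual weights.
encrypted_avg = sum(encrypted[1:], encrypted[0]) * (1 / len(encrypted))

# Only the key holder can decrypt the aggregated result.
print(private_key.decrypt(encrypted_avg))  # approximately the mean of the client weights
```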
Figure 6: Communication topologies in SG-FL frameworks
Differential privacy is another technique adopted to protect the model weights by introducing a small amount of noise and adopting aggregation algorithms that are robust in the face of potential attacks [60, 61, 62, 63, 64, 55]. For example, the authors of [55] suggest that adopting a differential privacy algorithm can effectively mitigate the risk of user load data being maliciously reconstructed through adversarial networks. Nonetheless, the experimental setup was limited, and a potential enhancement would involve investigating the effect of more robust aggregation algorithms. A main drawback of this technique is its high dependency on the amount of noise added to deliver an acceptable level of security, which could negatively impact the performance, as demonstrated in reference [63] for the case of load disaggregation.
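A minimal sketch of the idea follows, assuming the common recipe of clipping the local update and adding Gaussian noise before upload; the clipping norm and noise multiplier are arbitrary example values, not settings reported by the cited works.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=0.5, rng=None):
    """Clip the local update to a maximum L2 norm, then add Gaussian noise.
    Only the noisy update leaves the client, limiting what can be inferred from it."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

raw_update = np.array([0.8, -1.6, 0.2])       # hypothetical local weight delta
noisy_update = privatize_update(raw_update)    # uploaded instead of the raw delta
```

The larger the noise multiplier, the stronger the protection and the larger the potential loss in accuracy, which is precisely the trade-off noted above.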
Blockchain technology was extensively adopted in the case of EV, with differences in the proposed frameworks. A common suggestion was integrating the aggregator in a blockchain network [65, 36, 66]. For example, the authors of [65] replace the aggregator with a blockchain network, and the EV fleet is used both as a consumer and a supplier of electrical energy. A primary limitation can be perceived when the number of blocks increases, as the system's efficiency will be negatively affected due to huge memory requirements, slow transactions, and mining speed. An alternative would be integrating both the EV fleet and the aggregator in the blockchain network, as suggested in reference [67]. In this case, EV clients are deployed to act as miners, which implies the need for more resources on edge devices, and the computing resources on the EV may not be sufficient. A more generic FL framework for SGs leveraging blockchain technology was suggested in reference [45] to enhance the security of the plethora of energy services. Alliance chain, a variant of blockchain technology, was also adopted in reference [68] as a general FL framework for SGs. Other security mechanisms were adopted in related work, such as the detection of injection attacks through outlier detection [69] and security control as suggested in [70]. Furthermore, a combination of several previously presented security mechanisms was suggested in reference [59].
Despite their benefits in protecting users' data and the models' weights, all the previously presented security mechanisms add a computational burden on the edge devices, and a compromise between the security level, the available resources, and the performance is undoubtedly required in real SG environments.
## 4 Applications of FL in energy services
Adopting FL for training ML models in energy scholarship has recently received significant attention. The conducted literature search revealed five major services, namely: (1) generic frameworks for SG, (2) load forecasting, (3) Renewable Energy (RE) production, (4) FDI and anomaly detection, and (5) Non-Intrusive Load Monitoring (NILM). The application of FL in NILM was already reviewed in reference [23], and thus it is only briefly discussed, along with other services receiving less attention, in the last subsection.
### Generic Frameworks
The new SG eco-system gathers ESPs and Energy Data Owners (EDOs) in a distributed setup. The energy data is thus often stored in isolated islands, which prevents unlocking the full potential of data-driven approaches. The relevance of FL in addressing this issue was highlighted in references [71, 76], in combination with digital twins enabling real-time simulation and decision-making in the case of [71]. The latter framework suggests a full exploitation of available sensing technologies with federated learning for real-time power grid management. A similar study was presented in reference [73], investigating the applicability of the FL approach in the case of smart buildings to evaluate the convergence time. Moreover, a theoretical contribution in reference [77] recommends an inclusive FL involving different grid operators and customers to achieve systems robust against cyber-attacks. Overall, three main research streams for general applications in SG can be identified: (1) performance of the FL framework and its topology, (2) edge resource allocation, and (3) security aspects. Table 2 illustrates a representative set of generic frameworks for SG. It can be observed that all frameworks yield competitive results, with a minimum accuracy of 90% and a maximum MAE of 0.03 recorded. A common limitation, however, was the lack of evaluation of the communication overhead and the processing resources required on edge devices.
A generic FL framework was proposed in reference [78] for the metering infrastructure, suggesting two aggregation schemes: a 2-tier scheme and a 3-tier scheme, the latter reflecting a hierarchical aggregation scheme. The main advantage of the latter is the grouping of the clients based on their geo-localization, validated considering the case of NILM. It revealed performance equivalent to centralized alternatives with significant network load improvements. The impact of the aggregation algorithm was further discussed in reference [79], where an aggregation algorithm considering data characteristics was used to calculate the weight distribution and meet the needs of each node. Another generic implementation of FL in SGs was suggested in reference [15]. The authors investigated both VFL and HFL, tested in the case of predicting the consumption patterns of consumers. Two different models were implemented for the vertical framework: a linear model and gradient-boosting decision trees. The experimental results revealed that the first is more suitable when a trusted third-party server is required, while the second is more suitable when the main focus is on the accuracy of the generated predictions. The authors of [34] formulated an optimization problem to contribute to solving FL tasks (e.g., load forecasting) requested by the ESP. The optimization problem aims at maximizing the profit of both EDO and ESP through two payoff functions. The first offers incentives from the ESP to convince EDOs to participate in the FL task, and the second represents the gain that the ESP will achieve after paying the incentives. A main limitation of this framework is that it requires the deployment of edge aggregators grouping nearby EDOs and induces processing delays to solve the
optimization problem before executing any task, which may constitute an obstacle to real-time processing.
The heterogeneity of edge devices in SG, with different computational capabilities, highlights the importance of resource allocation for their efficient exploitation. It follows that FL frameworks, mainly running on these devices, should consider this constraint for better computational efficiency. This aspect was considered in several related works [71, 72, 80]. The characteristics of the task at hand were considered in [72] as decision criteria to choose the appropriate offloading strategy. These criteria include the data size and the resources required for computing the task at the edge devices, leading to either local, hybrid, or distant execution. This framework thus provides optimal exploitation of resources yet may leak privacy with remote and hybrid execution. The previous idea was extended in reference [71] with a dynamic allocation performed at all levels of a hierarchical FL framework. The dynamic nature of run-times was considered in reference [80], leveraging a collaborative adaptive approach to adjust threat detection thresholds, adaptive security management, and adaptive models. The suggested method also includes an explainable AI module providing decision support. In the same direction, the authors of [19] consider the problem of communication reliability, addressed through timers limiting each step of the FL framework calculation, allowing to overcome link failures and local training delays.
The third research stream (i.e., security) considers different mechanisms, including blockchain, differential privacy, and encryption [81]. Blockchain networks were adopted in both [59, 68]. Another FL framework adopting blockchain was suggested in reference [82] with a distilled knowledge loss overcoming the single point of failure related to the central node. The evaluation showed good accuracy yet revealed extra communication overhead and sensitivity to intrusion attacks. Alliance chain, a particular case of blockchain, was adopted in reference [68] for safe storage and data usage by power equipment. A key advantage of using an alliance chain is the registration process forcing each piece of power equipment to register before enabling its participation in the learning process. Differential privacy was adopted in reference [60] through a communication protocol to reach a trade-off between resource consumption, user utility, and local differential privacy. This mechanism allows adding noise before uploading the local models, where the amount of noise depends on the user class ranging from sensitive to regular users, leading to a more robust learning process against intrusion attacks. In the same direction, a differentially private model aggregation scheme was proposed in
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Work Year & Model & Characteristics & Security & Dataset & Metrics & Results & Main Limitations \\ \hline
[34] & 2022 DRL & \(\mathrm{HFL}\) RL & \(\mathrm{hier.}\) & 2-tier aggregation scheme & Private dataset & Acc. & 0.99 & Potential privacy leakage from the aggregator \\ \hline
[71] & 2022 DNN & \(\mathrm{HFL}\) S & \(\mathrm{hier.}\) & - & Private dataset & Carbon reduction & 0.49 & Gradient delay due to computational complexity \\ \hline
[45] & 2022 CNN & \(\mathrm{HFL}\) S & \(\mathrm{P2P}\) & Blockchain & Data from & RMSE & 0.03 & Limited evaluation of the energy trading and sharing system \\ \hline
[38] & 2022 RL & \(\mathrm{HFL}\) RL & \(\mathrm{client}\)-server & Private dataset & Reward & -10 & Limited evaluation \\ \hline
[72] & 2021 GRU & \(\mathrm{HFL}\) S & \(\mathrm{client}\)-server & - & Private dataset & Processing delay & 5 & Lack of evaluation on real-data \\ \hline
[73] & 2021 Deep & \(\mathrm{HFL}\) S & \(\mathrm{client}\)-server & - & Quarnot’s & MAE & 0.02 & The system can easily be tracked through the reporter \\ \hline
[74] & 2021 ML & \(\mathrm{HFL}\) S & \(\mathrm{client}\)-server & Disturbing mechanism & Private dataset & - & Limited experimental setup \\ \hline
[59] & 2021 DNN & \(\mathrm{HFL}\) S & & Blockchain & MNIST & Acc. & 0.98 & 2D transform induces extra computations \\ \hline
[75] & 2021 & \(\mathrm{HFL}\) & \(\mathrm{client}\)-server & Group signatures & Private dataset & Time & 10.92 & Induces a heavy computational load over the SG \\ \hline
[61] & 2021 ATT-BLSTM & \(\mathrm{HFL}\) S & \(\mathrm{client}\)-server & DF-privacy & Dataport & MAPE & 0.29 & Poor results compared to non-DP schemes \\ \hline
[15] & 2021 XGBoost & \(\mathrm{VFL}\) S & \(\mathrm{client}\)-server & Paillier- encryption & Data from & MSE & 0.2 & Limited evaluation \\ \hline
[68] & 2021 & \(\mathrm{HFL}\) S & \(\mathrm{client}\)-server & Alliance chain & MNIST & Acc. & 0.90 & Requires long training time \\ \hline \hline \end{tabular}
\end{table}
Table 2: Representative FL-based contributions for general applications in SGs
reference [61], combined with an attentive model to detect malicious models. With the same goal of securing FL in SGs, the authors of [83] suggest adding an intermediate encryption level between the edge devices and the cloud servers. The method reinforces the security of the model parameters by encrypting the process, relying on secure fusion methods that include four techniques: (1) differential privacy, (2) secure multiparty computing, (3) homomorphic encryption, and (4) functional encryption. Alternatively, group-based signatures were suggested in reference [75] to protect the identity of grid operators and consumers.
### Residential Load Forecasting
Load forecasting is the task of predicting future load demand/generation based on historical load data. It is crucial for efficiently managing power grids [84]. Data-driven techniques have been at the core of the load forecasting scholarship [85] in recent years. Centralized approaches raise privacy concerns since load data holds sensitive information about appliance usage and occupancy [86]. Furthermore, insufficient data for training is a major obstacle. FL is a viable solution to overcome these issues [87]. Table 3 summarises the main FL-based load forecasting contributions, indicating that horizontal FL is a common design choice for all the existing contributions, with the majority of them following a supervised learning paradigm under a client-server topology. Nonetheless, some exceptions to this trend can be observed (e.g., [39, 88]). These contributions can be grouped according to two main themes: (1) addressing the non-IIDness of the data and (2) enhancing the security of FL frameworks.
The non-IIDness problem originates from the different behaviors of clients, which are reflected in their load profiles and lead to convergence issues. It can be addressed using different techniques, such as clustering, personalization of the global model, and client selection strategies. Other techniques include considering an augmented set of features as input to the model (e.g., weekly information [47]) to improve the forecasting accuracy. In this regard, the aggregation techniques were also investigated in reference [92], considering two algorithms, FedSGD and FedAvg. The study concluded that FedAvg, which performs several local gradient-descent steps at the client level, provides better forecasting results and requires fewer training rounds to achieve convergence.
To address the data silos problem in power grids, several contributions adopted clustering algorithms. Different clustering features were found to have different effects on the performance. These features include client attributes [90, 47], the characteristics of the buildings [93], and load profiles [47, 49], with evidence of the superiority of the last approach for load forecasting [47, 49]. Yet, leveraging the buildings' characteristics [93] revealed better transferability. Global model personalization is another well-acknowledged strategy to address the non-IID nature of data in FL. It was adopted in two different contributions [32, 33] and demonstrated significant enhancement, but it remains highly dependent on the resources available at the client level.
Both [95] and [54] propose selecting a random subset of clients to contribute to the training at each epoch, to enable sampling from a more homogeneous data distribution. The experimental setup presented in reference [95] highlighted that using only a subset of clients allows for building powerful forecasting models. A similar framework was presented in [88], leveraging a peer-to-peer topology with a gradient selection strategy. This framework induces a larger communication overhead than all the previously presented approaches, since it requires each client to broadcast its local model. Thus, the authors suggested custom broadcast frequencies for each client, leading to performance equivalent to centralized learning.
While some scholars [94] argue that design choices such as average pooling already add an extra security layer to standard FL frameworks, these frameworks still suffer from several security limitations. For example, an analysis of their sensitivity to poisoning attacks was presented in reference [89], concluding that they are vulnerable. Outlier detection approaches [69] or unsupervised clustering [96] can be used prior to aggregation to detect malicious updates. The encryption of the model's weights was highlighted in [58, 47]; the difference between these contributions lies in the assumptions made about the trustworthiness of the central server. Assuming that the latter is trusted but curious adds more privacy but can lead to performance loss, as suggested in related work [97]. Little attention was given to this aspect in reference [58]. Several scholars also considered differential privacy [74, 55, 62], where it is used to perturb the weights of the model. In this case, the privacy budget is expected to have minor effects on the overall performance [62], potentially enhanced with attribute-based access control [55]. Despite the security enhancement these mechanisms provide, they require extra computational and communication resources, leading to delays in short-term forecasting. Finding the right trade-off between the security level and the required performance remains an open research question.
### Renewable Energy
RE sources are crucial in the new power grid for eco-friendly energy production. They represent the best alternative to achieve smart and sustainable energy production. However, the output of wind and PV systems is not controllable, which requires careful planning based on accurate prediction of power generation and consumption. Several scholars focused on improving the performance of predicting power generation through several existing techniques. The use of Generative Adversarial Networks (GANs) was suggested in reference [98], combined with the least squares error to solve the training instability of these models. The model demonstrated promising results in the case of renewable scenario generation due to its powerful generative modeling ability, where the generated scenarios almost perfectly reproduce the characteristics of the real data while maintaining its diversity. However, a main limitation
of this contribution is that it assumes all edge devices have the same computational resources. The authors of [48] focus on the learning scheme, suggesting attributing a membership value to each solar power generation station. This membership value is calculated based on the centroids of each cluster, considering the features of the local stations. The suggested FL process thus allows the grouping of similar stations, leading to more accurate predictions. A more general FL framework, considering both energy consumers and producers, was suggested in reference [99] to predict power consumption and generation. The framework is beneficial for detecting load swings and load curtailment.
Security aspects of the FL framework are also crucial when considering the prediction of power production, since the latter involves power trading between different grid operators. To address this aspect, the use of a key distribution center was suggested in reference [56], allowing local clients to encrypt the weights of the model before uploading them to the central server. This framework adds an extra security layer but requires more communication and computational resources at the edge devices. Considering the case of predicting wind power generation, an FDI attack can be modeled through scaling attacks applied to the input
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Work Year Model & Characteristics & Security & Dataset & Metrics & Results & Main Limitations \\ \hline
[89] & 2021 LSTM & HFL S & client-server & - & Private & MAPE & 0.13 & Investigated only two noise additive attacks \\ \hline
[32] & 2021 LSTM & HFL S & client-server & - & DATAPORT & RMSE & 0.49 & No protection against in- \\ & & TFL & server & & MAPE & 0.33 & trusion attacks \\ \hline
[87] & 2021 LR & HFL S & client-server & - & Private & MAPE & 0.11 & Limited experimental setup \\ \hline
[49] & 2021 LSTM & HFL S & client-server & - & Australian & RMSE & 0.12 & Potential privacy-leakage \\ & & & server & & dataset & MAPE & 0.44 & \\ & & & & MAE & 0.07 & \\ \hline
[90] & 2021 LSTM & HFL S & client-server & - & Hue & Huber & 0.007 & Limited experimental setup \\ \hline
[54] & 2021 DNN & HFL S & client-server & - & ASHARAE & RMSE & 1.13 & Assumes that the third party server is trusted \\ \hline
[88] & 2021 LSTM & HFL S & P2P & - & DATAPORT & Acc. & 0.99 & Requires heavy computations on edge devices \\ & & TFL & & & & & & \\ \hline
[42] & 2022 LSTM & HFL S & hier. & - & Private & RMSE & 0.41 & Dependent on edge resources \\ & & & & & data & & & \\ \hline
[69] & 2021 LSTM & HFL S & hier. & Outlier detection & DATAPORT & RMSE & 0.55 & Suffers from challenges of security robustness and resource optimisation \\ \hline
[91] & 2021 LSTM & HFL S & hier. & - & British & RMSE & 0.01 & Potential privacy leakage during inference \\ \hline
[47] & 2021 LSTM & HFL S & client-server & - & Data from & RMSE & 0.12 & Requires hyper- \\ & & & server & & UK & & parameters tuning and heavy computations on edge devices. \\ \hline
[33] & 2022 - & HFL S & client-server & - & Irish CER & RMSE & 0.82 & High communication and computation burden \\ \hline
[92] & 2022 LSTM & HFL S & client-server & - & Hydro & RMSE & 0.61 & Requires several training \\ & & & server & & Lodon & MAPE & 0.14 & epochs on edge devices \\ \hline
[93] & 2022 ANN & HFL S & client-server & Encryption of the weights & Data & RMSE & 0.03 & Not applicable to free- \\ & & & server & & Genome & MAPE & 0.15 & based algorithms such as decision trees \\ \hline
[58] & 2022 LSTM & HFL S & client-server & Encryption of the weights & Data from & MSE & 0.33 & A potential privacy leakage between the edge and the energy relief \\ \hline
[62] & 2022 LSTM & HFL S & client-server & DF-privacy & IEEE Volt-age Feeder & QL & 0.032 & No evaluation of potential information leakage \\ \hline
[94] & 2022 CNN & HFL S & client-server & Security & Data from & RMSE & 0.006 & Only evaluated forecast- \\ & & & server & control & Tomsk & MAE & 0.002 & ing performance \\ & & & mechanism & & MAPE & 0.08 & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Representative FL-based contributions for load forecasting
data (e.g., the wind speed), as suggested by [70]. The previous suggestion, tested in different regions, demonstrated enhanced transferability performance.
Despite the promising results of the previously presented contributions, fault and anomaly detection remains a prominent challenge for adopting renewable energies in distributed power grids [100, 101]. These power sources require continuous monitoring and maintenance to prevent damage [102]. Nonetheless, a major obstacle to deploying fault detection systems is unbalanced data [103]. This problem becomes more relevant when considering FL, as local models are trained with smaller portions of unbalanced data. It was mitigated in the case of blade icing in wind turbines by balancing the extracted features in the latent space, protected through homomorphic encryption [57]. Table 4 summarizes the main FL-based RE frameworks. Although these frameworks yield good performance, the table shows that most assume perfect communication conditions, and little attention was given to evaluating other aspects of federated frameworks.
### Electrical Vehicles
The adoption of EVs is gaining increasing momentum. However, power control and energy management for this type of vehicle are still in their infancy, and many problems are yet to be addressed. AI and communication technologies are both key tools for efficient power prediction. Table 5 portrays representative FL-based contributions for electrical vehicle studies. The best-reported results for each contribution reveal a maximum RMSE of 6.5 and a minimum accuracy of 90%, which indicates that using FL does not lead to a significant deterioration in the performance. However, this remains subject to the considered experimental design (e.g., whether the data is IID).
Considering EV demand learning (EDL), three different FL topologies were suggested in reference [104]: (1) centralized EDL (CEDL) for scenarios where charging stations have limited hardware, (2) federated EDL (FEDL) to protect the data collection process, and (3) clustering-based EDL, where CSs are grouped into \(K\) clusters to minimize the cost of biased predictions, with CEDL or FEDL applied on each cluster independently. The latter demonstrated the best results considering both RMSE and communication overhead. The effect of clustering was further investigated in reference [105] based on the similarity of historical energy demand and geo-localization, revealing that the first clustering criterion showed superior performance. An unsupervised clustering based on the clients' attributes was suggested in reference [106] through a two-scale regression model. The comparison between all these clustering techniques in the case of EVs remains an open research question, and further research is needed in this regard.
Two main concepts emerged to enhance energy trading in EV networks: Vehicle-to-Grid (V2G) and Vehicle-to-Vehicle (V2V). The first concept (i.e., V2G) involves exploiting the power available in EVs to feed the power grid. The combination of this concept with FL-based algorithms was first proposed in reference [65] to predict EV power consumption for the next period, leading to a straightforward estimation of the power that can be supplied by the EV given the remaining power and the grid's request. This concept was further explored in reference [107], where an adaptive and cost-friendly privacy preservation mechanism for wireless charging FL in V2G systems was proposed, with an RL mechanism for local learning. The second concept (i.e., V2V) allows direct energy exchange between EVs. It was considered in a two-step learning paradigm in reference [40] to find the optimal charging/discharging variant among V2V, V2G, and G2V through price negotiation. The consideration of V2V
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline Title & Year & Model & Features & Learning & Topology & Security & Dataset & Metrics & Main Limitations \\ \hline
[98] & 2022 & least & HFL & S & client- & - & Dataset & MAE & 0.17 & Assumes optimal communication and the same re- \\ & & square & & & server & & from & RMSE & 0.24 & nication and the same resources for all edge devices \\ & & GANs & & & & & NREL & & & \\ \hline
[48] & 2022 & FCN & HFL & S & clustered & - & AMS solar energy & Acc. & 0.70 & The clustering was the main \\ & & & & & & & server & & & \\ \hline
[99] & 2022 & NN & HFL & S & client- & - & DataPort & RMSE & 1.98 & Lack of comparison with \\ & & & & & server & & & centralised-baseline & \\ \hline
[56] & 2022 & LSTM & HFL & S & client- & Cryptography & Readings & RMSE & 0.60 & No evaluation of the encryption- \\ & & & & & server & & by & MAE & 0.33 & tion’s effect on the overall \\ \hline
[70] & 2022 & CNN & TFL & S & client- & Security & Dataset & RMSE & 0.02 & The image processing technique induces heavy computation on edge devices \\ & & & & & server & & & from & MAE & 0.007 & nique induces heavy computations on edge devices \\ & & & & & & & RPA & 0.64 & nations on edge devices \\ \hline
[57] & 2022 & CNN, & HFL & S & client- & Homomorphic Data & F1 & 0.93 & Requires heavy computation resources due to the \\ & & LSTM & & & server & & & SCADA & MCC & 0.50 & sliding window approach \\ \hline \hline \end{tabular}
\end{table}
Table 4: Representative \(\mathrm{FL}\)-based contributions for \(\mathrm{RE}\).
revealed an increase of 6.1% in self-sufficiency compared to state-of-the-art methods. The concept of smart contracting was also suggested in references [108, 109] to solve the problem of competing CSs, revealing that a 24% enhancement can be achieved in power demand prediction, with 48% and 36% enhancements in utilities and social welfare, respectively, over traditional economic models [108].
VFL was exploited in references [35, 36] to complete the client's feature set and generate charging recommendations. Both contributions suggest training two models in a synchronized manner, where the EV and the CS wait for each other at each epoch before updating the models. An encryption alignment technique was adopted to find the intersection between both local datasets. However, the framework suggested in reference [36] additionally protects the aggregator inside a blockchain to secure and optimize the calculations. Thus, the latter offers a higher level of security.
The security of FL-based frameworks remains critical and is subject to several possible improvements. The authors of [53] proposed a security-enhanced mutual authentication FL-based energy management framework for EV infrastructure. In the proposed method, cryptographic authentication is applied to prevent unauthorized access. A trust value and a reward are used to exclude low-prediction or suspect CSs and to convince CSs to contribute to the training. Blockchain is another promising technology that can be leveraged to address some of the issues of FL. Replacing the central node with a blockchain network where a consensus committee is established to update the global model is a viable suggestion, as proposed in reference [66]. Alternatively, [67] suggests integrating the Virtual Power Plant (VPP) aggregator, EV fleets, and a group of miners into the blockchain network. This last contribution also suggests adopting an FL-QLMS algorithm to select a qualified set of models for aggregation, resulting in a more accurate global model. Following this approach, the blockchain network is only involved after the first aggregation occurs at the VPP level.
### False Data Injection and Anomaly Detection
Despite the fact that FL frameworks allow protecting data confidentiality, their security remains an open research topic [51] due to their vulnerability to several attacks originating from the distributed nature of the learning process. False data injection attacks are the most widespread attacks, where a client can alter the global model by submitting malicious weight updates. Table 6 illustrates a representative set of FL-based frameworks for FDI detection. The best-recorded performance of these frameworks reveals F1-score values between 0.87 and 0.99, highlighting good detection performance on different datasets. However, most of these frameworks assume the availability of annotated data on edge devices, which can be hard to obtain in real setups.
A demonstration of such an attack in SG setups was presented in reference [114], also leveraging a side-channel attack to build a dataset similar to that of the benign clients. The study resulted in countermeasures serving as recommendations for future implementations. These recommendations include: (1) the necessity to protect physical access to the hardware, (2) adding randomization to the model, and (3) the necessity to use FDI detection techniques. The third countermeasure was central to recent research contributions [115, 116].
An FL-based approach for intrusion detection in the metering infrastructure was suggested in references [115, 116], where the conducted studies demonstrated acceptable results. Both contributions were evaluated on the same dataset and yielded equivalent results. This aspect was further explored for Photovoltaic (PV) power prediction [37, 114], leveraging techniques such as customization and asynchronous learning to enhance the overall system. Alternatively, the authors of [117] propose a custom aggregation scheme based on the geometric median and Weiszfeld's algorithm [118], allowing them to mitigate the effect of noisy, irregular gradients and to reduce the overall communication overhead. The latter was also a main concern in reference [119]. However, the authors addressed this problem through a compressed system log, reducing the transmission delay and increasing privacy protection.
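For intuition on such robust aggregation, the sketch below computes the geometric median of client updates with a plain Weiszfeld iteration, so that a few outlying (possibly malicious) updates pull the aggregate far less than a simple mean would; the tolerances and the toy updates are illustrative assumptions rather than settings from [117].

```python
import numpy as np

def geometric_median(updates, max_iter=100, tol=1e-6, eps=1e-12):
    """Weiszfeld iteration: re-weight points by the inverse of their distance to the
    current estimate; the fixed point is the geometric median of the updates."""
    points = np.asarray(updates, dtype=float)
    estimate = points.mean(axis=0)
    for _ in range(max_iter):
        distances = np.linalg.norm(points - estimate, axis=1)
        weights = 1.0 / np.maximum(distances, eps)
        new_estimate = (weights[:, None] * points).sum(axis=0) / weights.sum()
        if np.linalg.norm(new_estimate - estimate) < tol:
            break
        estimate = new_estimate
    return estimate

# Nine benign updates near [1, 1] and one malicious outlier.
updates = [np.array([1.0, 1.0]) + 0.05 * np.random.default_rng(i).normal(size=2) for i in range(9)]
updates.append(np.array([50.0, -50.0]))
print(geometric_median(updates))   # stays close to [1, 1], unlike the plain mean
```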
Other malicious activities in the power grid include abnormal behavior. To detect it, an FL-based approach with LSTM models was used in reference [120] for smart buildings. This framework follows a multi-task learning paradigm, leading to a reduction of the processing delay. Energy theft, a particular case of abnormal behavior, was investigated in reference [64], where each detection station gathers data from different households, and the learning process is further protected with differential privacy during local training. The evaluation of this method demonstrated that it has a very low communication overhead while outperforming state-of-the-art models even with differential privacy.
### Other Applications
The application of FL is not restricted to the aforementioned use cases. Other use cases have also adopted this learning paradigm, albeit with less attention. These energy services include: load disaggregation, thermal comfort control, pattern identification, energy recommendation, demand response, and smart energy trading and contracting.
Energy disaggregation (or NILM) is the set of approaches aiming to identify the individual loads of operating devices from the aggregate consumption measured by a smart meter [122, 123]. NILM scholarship received significant attention in the past decade, marked by several turning points. The adoption of deep models to solve the problem of load disaggregation in 2015 [124] was a major one, and a tremendous number of contributions followed [125]. The common practice in these approaches is the centralized training of the models, where it is assumed that the data is gathered at a central point. Nonetheless, such a practice can violate consumers' privacy, as information about their daily routines [126, 127] will be exposed. FL was adopted in a few contributions [128, 129, 130, 131, 132, 63] to address this issue. Most of these contributions reveal good results with IID data but reported a significant decrease in performance with non-IID
data. The only contribution revealing promising results was presented in reference [133], where a comparison between locally-trained, centrally-trained, and federated models was performed.
Applying FL to thermal comfort control received attention from the authors of [134]. They suggested using FL in two different applications related to smart buildings, namely thermal comfort modeling and short-term forecasting. The experiments proved the efficiency of FL, showing a high prediction accuracy. Moreover, the authors assessed the effect of the personalization step, demonstrating that it adapts the trained model enough to fit all participants. Thermal comfort prediction also received interest in [135], where FL is used to solve both the over-fitting and privacy problems. The FedAvg algorithm is used after being endowed with a branch selection protocol.
As providing customized services to householders is the ultimate objective of retailers, identifying household electricity consumption patterns becomes crucial. However, retailers' privacy represents a major concern when dealing with problems requiring data sharing. The authors of [136] suggested using FL for consumer profile identification in a privacy-preserving way. In this framework, the training stage is accelerated by utilizing asynchronous stochastic gradient descent with delay compensation (ASGD-DC) to achieve the global model update. ASGD-DC helps optimize the global parameter update and improve the model's performance. The privacy-preserving nature further motivated the use of FL in [137] to propose an electricity consumer profile identification method through a three-step classification algorithm. The evaluation phase showed that using PCA-based feature extraction enhanced the identification performance, indicating that the federated model performs comparably whether trained on the balanced or the unbalanced dataset.
Energy recommendation is another application where FL bears great potential. The study presented in reference [5] discusses the synergy between big data and FL to analyze
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline Title & Year & Model & Features & Learning & Topology & Security & Dataset & Metrics & Results & Main Limitations \\ \hline
[104] & 2019 & DL & HFL & S & client- & - & Dundee, & RMSE & 5.87 & Clustering only considered location \\ \hline
[65] & 2020 & MLP & HFL & S & client- & - & Dundee, & UK & & \\ \hline
[65] & 2020 & MLP & HFL & S & client- & - & Dundee, & RMSE & 5.87 & Clustering only considered location \\ & & & & server & & & & & & \\ \hline
[65] & 2020 & MLP & HFL & S & client- & - & Dundee, & RMSE & 5.87 & Clustering only considered location \\ & & & & server & & & & & & \\ \hline
[10] & 2020 & S & HFL & - & client- & - & - & MLE\({}^{*}\) & 3 & Assumes IID data \\ \hline
[79] & 2021 & CNN & HFL & S & client- & - & Trusted & MNIST & Acc. & 0.97 & Considers a small number of nodes \\ & & & & server & & & & & & \\ \hline
[53] & 2021 & HFL & & hier. & Third- & & CIFAR-10 & Acc. & 0.90 & Provides poor results with few number of adversarial clients \\ \hline
[35] & 2021 & VFL & S & P2P & Hashing & Private & AUC & 0.94 & Considered an evaluation dataset without the user’s behavior \\ \hline
[106] & 2021 & NN & HFL & S & client- & - & Private & MCRPS & 0.56 & No personalisation technique considered for individual drivers \\ \hline
[111] & 2021 & LSTM & HFL & S & client- & - & private & MAE & Limited set of features were considered during validation \\ \hline
[108] & 2022 & DNN & HFL & S & clustered & - & Dundee, & RMSE & 6.5 & Clustering only considered location \\ & & & & server & & & & & \\ \hline
[112] & 2022 & RL & HFL & S & client- & - & Private & Reward & -2.05 & Enhancement only observed on the long term \\ \hline
[67] & 2022 & HFL & S & client- & - & Blockchain & Private & RMSE & 5.2 & Dependent on computational resources of the edge devices \\ \hline
[113] & 2022 & CNN & HFL & S & client- & - & DATAPORT & Acc. & 0.97 & Requires extra computational resources on REFIT & Prec. & 0.75 & edge devices \\ & & & & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 5: Representative FL-based contributions for EV and CS
collected data and provide timely recommendations to the user. The project's ultimate objective is to change user habits by leveraging the concept of micro-moments. FL showed a prediction accuracy comparable to centralized learning models. Additionally, it outperforms those models in privacy preservation and time performance, revealing that FL scales up to large environments.
Demand response is a mechanism to balance energy production and consumption by having customers reduce their electricity usage during times of peak demand. As indicated in [138], demand response can be addressed using FL. In particular, demand response requires customers to change their energy consumption behavior to lower critical-peak demand by opting for off-peak energy consumption or changing the energy source [139, 140]. In this regard, [39] focused on regulating electricity production and demand. FL is used for distributed learning to tackle the privacy problem. The authors adopted a deep RL algorithm to handle uncertainty issues. Furthermore, non-convex power flow constraints were handled by transforming the neural network parameter updates into a sequence of semi-definite programs. The proposed algorithm reduces both the peak load and the user's daily cost. Moreover, the algorithm's convergence was shown to be good, as it converges to the solution of the centralized algorithm.
Energy sharing enables energy exchange between consumer and prosumer communities in return for future benefits [45]. In [45], the authors suggest an autonomous smart contracting system for SG leveraging a blockchain-based FL scheme to estimate the power demand of consumers and the power produced by prosumers. The P2P sharing of energy allows for efficient exploitation of RE in microgrids. In the same context of energy sharing, [141] discussed the challenges facing prosumer communities and preventing them from achieving collective goals. For this purpose, the authors developed a framework based on FL, whose distributed nature leads to better collaboration within the prosumer community. The main objective of this work is to balance the usage of different energy sources (PV and V2G). FL guarantees privacy protection and avoids over-fitting in the energy production/consumption prediction process. Likewise, the collaboration within the prosumer community was considered in [142], aiming at optimizing energy consumption in large-scale grids. The authors suggested exploiting FL to allow agents to collaboratively find optimization solutions. A hybrid (synchronous/asynchronous) global parameter update is used to reduce communication costs. More precisely, agents are grouped into clusters, where the inter-cluster update is asynchronous while the intra-cluster update is synchronous.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline Title & Year & Model & Features & Learning & Topology & Security & Dataset & Metrics & Results & Main Limitations \\ \hline
[119] & 2022 & LSTM & HFL & S & client- & - & Private & Acc. & 0.93 & Potential privacy leakage on edge servers \\ \hline
[115] & 2022 & FCN & HFL & S & client- & - & NSL- & Acc. & 0.99 & Suffres from security issues \\ & & & & server & & KDD & & & \\ \hline
[116] & 2021 & DNN & HFL & S & client- & & NSL- & Prec. & 0.99 & Provides merely acceptable \\ & & & & server & & KDD & Rec. & 0.98 & results in the case of classes \\ & & & & & & F1 & 0.99 & with few samples \\ \hline
[121] & 2022 & LSTM & HFL & S & client- & - & Hourly & Acc. & 0.89 & Suppose physical access to \\ & & & & server & & electricity & & & capture power trace \\ & & & & & load, & & & & \\ & & & & & Toronto & & & & \\ \hline
[120] & 2021 & LSTM & HFL & S & client- & - & Sensors & Prec. & 0.89 & Suffres from security issues \\ & & & & server & & Event & Rec. & 0.79 & \\ & & & & & Log & F1 & 0.87 & \\ \hline
[114] & 2022 & CNN & HFL & S & P2P & - & Private & Acc. & 0.99 & Convergence issues and \\ & & & & & dataset & & & & communication overhead \\ & & & & & & & & with increasing number of \\ \hline
[37] & 2022 & LSTM- & HFL & S & client- & - & Private & Acc. & 0.97 & Only considered the case of \\ & based & & & server & & dataset & Prec. & 0.95 & PVs \\ & & & & & & Rec. & 0.95 & \\ & & & & & & F1 & 0.96 & \\ \hline
[64] & 2022 & TCN- & HFL & S & client- & DF-privacy & SGCC & Acc. & 0.92 & The third party server \\ & & based & & server & & dataset & & &duce a delay in the computations \\ \hline
[117] & 2022 & GAN & HFL & SS & client- & Robust aggregation & Simulated & Acc. & 0.97 & Requires heavy computations on the edge devices \\ & & & & server & & scheme & dataset & 0.99 & \\ \hline \hline \end{tabular}
\end{table}
Table 6: Representative FL-based contributions for FDI detection in SG
## 5 Open Challenges
Despite the extensive research efforts to adapt different variants of \(\mathrm{FL}\) to the requirements of \(\mathrm{SG}\), adopting this learning paradigm in real setups remains subject to several challenges. Different technical issues still await additional investigations and research, such as privacy and security concerns and incentive mechanism design. In this section, we present some challenges and promising research directions to overcome these problems, improve the performance of \(\mathrm{FL}\) models and widen their adoption in \(\mathrm{SG}\). Table 7 summarizes these challenges.
The first challenge is the heterogeneity of edge devices and their limited hardware in some scenarios. This challenge opposes the main goal of SG, which is to deliver near real-time services. In particular, when training tasks are initiated on devices with different computational capabilities, a delay can occur since the aggregation process cannot start before receiving the updated weights from all the clients. With this in mind, potential solutions could consist in using offloading strategies with a smart selection of the clients based on their hardware characteristics. Furthermore, compression techniques can be leveraged to address the problem of computational time. Resource management is thus a main issue in distributed learning. In addition, this problem imposes compatibility issues, so a standardization effort is required. Using advanced strategies for the dynamic allocation of resources can also introduce a considerable computational delay at the beginning of each training task. Furthermore, failures could occur during the training and induce the failure of the whole process. Further research on this topic is thus required, with appropriate evaluation protocols that consider all the previously mentioned scenarios to deliver a clearer idea about the robustness of FL in the case of SG.
The training bottleneck and convergence time are also main challenges for FL in SG, mainly caused by the heterogeneity of data across devices and the distributed nature of the learning process. More precisely, it is widely acknowledged that different energy clients exhibit different consumption patterns, directly reflected in their energy curves, thus hindering learning convergence, especially with a high number of local training epochs. One possible solution to this issue is clustering the clients (i.e., hierarchical FL). Nonetheless, there is little evidence in the reviewed literature about the best clustering criteria for different energy applications. Only a few studies provided a comparative evaluation of different clustering criteria, and only for the case of load forecasting. Yet, the evaluation was limited since the comparison only considered the performance and the convergence time, disregarding other aspects. For example, the model updates transmitted at every training epoch can divulge clients' private data. In this regard, an adversary can infer the training data through the weight updates transferred at each iteration. Moreover, multiple malicious attacks may threaten the models learned by the servers. They can poison the models or delay/prevent their convergence.
Simultaneously ensuring security protection and privacy preservation in FL while maintaining high model accuracy and low computational cost is a critical issue [143]. Specifically, while ensuring rigorous privacy assurances is of utmost importance, the mechanisms deployed to preserve clients' data privacy need to reasonably maintain the accuracy of the learned models. Along the same line, privacy-preserving techniques must not significantly increase the computational overhead of the training phase or introduce an excessive overhead to the network [144, 145]. In addition, while protection strategies against poisoning and Byzantine
| Challenge | Aspect of FL | Source | Possible solutions |
| --- | --- | --- | --- |
| Limited and heterogeneous edge devices | Topology | Heterogeneity of hardware used by different grid operators | Smart selection of clients / Model compression techniques / Adaptive resource allocation |
| Training bottleneck and convergence time | ML training | Non-identically distributed data / Communication overhead | Clustering of clients / Robust aggregation algorithms |
| Inference attacks | Security & privacy | Inference of the training data used from the FL updates | Secure computations (e.g., homomorphic encryption) / Differential privacy / Third-party server |
| Poisoning attacks | Security & privacy | Data poisoning / model poisoning | FDI detection before aggregation / Blockchain |
| Backdoor attacks | Security & privacy | Introduction of a backdoor by a set of devices to target the labeling of a single task | Differential privacy / Norm thresholding of updates |
| Evaluation protocols | Overall FL scheme | No standardized FL evaluation framework | Standardization of evaluation protocols / Development of new evaluation protocols for FL in SG |

Table 7: Challenges facing FL in SGs
attacks need the FL server to analyze individual client-supplied updates [146], privacy-enhancing methodologies based on secure Multiparty Computation (MPC) introduce significant overhead and conceal the individual updates from the FL servers [147]. This is because they usually rely on heavy encryption to combine local updates before employing them in the global model. They therefore preclude the servers from computing weight statistics and accuracy metrics on individual updates, making it impossible to detect and discard malicious updates [148]. Finally, including all the clients in each training round is clearly not feasible in real scenarios, as it would overwhelm the communication infrastructure. FL approaches therefore generally rely on client selection strategies, several of which were described earlier in this review.
## 6 Future Directions
Several technical issues in FL, such as privacy and security concerns and incentive mechanism design, still await further investigation and research efforts. Developing techniques for FL on non-IID data is one of the most critical research directions: FL algorithms typically assume that the data at each location is IID, whereas in many real-world settings the data is non-IID due to factors such as location or demographics, so FL techniques that can handle non-IID data are needed.
To overcome these challenges, the following future directions can be considered:
### Model Pruning
Model pruning can reduce the size and complexity of DL models, making them more suitable for edge devices. In FL, participating devices often have limited resources, making it challenging to train large models. By pruning the model, researchers can remove redundant parameters, giving the model more freedom to adapt to the different computation and communication capabilities of the clients in the FL process [149, 150].
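As a concrete illustration, the sketch below applies magnitude-based pruning to a client's local weights before they are uploaded; the dictionary-of-arrays model format and the 50% sparsity level are our own illustrative assumptions, not choices prescribed in the cited works.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude entries of each weight array.

    `weights` maps layer name -> np.ndarray; `sparsity` is the fraction of
    entries to zero out (an illustrative value, not taken from the survey).
    """
    pruned = {}
    for name, w in weights.items():
        flat = np.abs(w).ravel()
        k = int(sparsity * flat.size)
        # threshold below which entries are dropped
        thresh = np.partition(flat, k)[k] if k > 0 else -np.inf
        pruned[name] = w * (np.abs(w) >= thresh)
    return pruned

# Toy two-layer model pruned before being sent to the FL server
model = {"fc1": np.random.randn(16, 8), "fc2": np.random.randn(8, 1)}
sparse_model = magnitude_prune(model, sparsity=0.5)
```

Only the surviving (nonzero) entries would then need to be transmitted, which is where the communication savings come from.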
### Secure multiparty computation
Secure multiparty computation (MPC) allows multiple parties to jointly compute the result of a function over their private inputs without revealing these inputs to one another. In the context of FL, secure MPC can be used to protect the privacy of user updates, but it also increases communication volume and computational complexity, so finding a balance between privacy protection and communication cost is a vital research area [151]. As a cryptographic technique, MPC enables distributed computing systems to securely aggregate local models that contain sensitive information. However, while MPC can prevent the direct leakage of local updates, the server may still be able to indirectly obtain local information in some cases, or even reveal the true training images through methods such as Deep Leakage from Gradients (DLG) [152].
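To make the aggregation idea concrete, the toy sketch below shows additive secret sharing, one building block behind MPC-style secure aggregation: each client splits its update into random shares so that the server only ever sees sums in which individual contributions are hidden. Real protocols operate over finite fields with authenticated channels; this real-valued version is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def additive_shares(update, n_parties):
    """Split a client update into n_parties random shares that sum to it."""
    shares = [rng.normal(size=update.shape) for _ in range(n_parties - 1)]
    shares.append(update - sum(shares))
    return shares

# Three clients, each holding a toy 4-dimensional local update
updates = [rng.normal(size=4) for _ in range(3)]
all_shares = [additive_shares(u, n_parties=3) for u in updates]
# The aggregator only receives per-slot sums, never an individual client's update
slot_sums = [sum(all_shares[c][p] for c in range(3)) for p in range(3)]
aggregate = sum(slot_sums)
assert np.allclose(aggregate, sum(updates))
```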
### Asynchronous online FL
Although FedAvg is the most popular method for optimizing FL, it assumes unrealistic device homogeneity. To overcome this issue, asynchronous online FL, a technique that addresses the challenges of data and equipment heterogeneity in FL on distributed edge devices, can be used [153]. This approach handles issues such as device load, lag, or withdrawal, and some studies have proposed active device selection to address device heterogeneity. Moreover, while standard FL assumes that clients have offline access to statistically generated data samples, the study in reference [154] breaks away from this assumption and explores FL in uncertain environments, where clients' local loss functions arrive in an online streaming manner without statistical assumptions; a collective regret metric is used as the performance measure. Another proposed method for handling incomplete local updates involves scaling the aggregation coefficients, but the effectiveness of this approach has not yet been demonstrated. Alternatively, [155] discusses minimizing the training latency of a wireless FL system while maintaining client data privacy. A client scheduling scheme is used to reduce the number of training rounds and time intervals by jointly considering the significance of each client's local updates and delay issues. The problem is formulated as a multi-armed bandit program, and an online scheduling scheme based on the \(\epsilon\)-greedy algorithm is proposed to achieve a tradeoff between exploration and exploitation.
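A minimal sketch of such an \(\epsilon\)-greedy scheduler is shown below, treating each client as a bandit arm whose estimated utility might combine update significance and a delay penalty; the utility values and the interface are hypothetical and are not taken from [155].

```python
import random

def epsilon_greedy_select(avg_utility, n_select, eps=0.1):
    """Select clients for the next round: explore w.p. eps, otherwise exploit.

    `avg_utility` maps client id -> running estimate of how useful that
    client's updates have been; both the values and this interface are
    illustrative assumptions.
    """
    clients = list(avg_utility)
    ranked = sorted(clients, key=avg_utility.get, reverse=True)
    chosen = set()
    while len(chosen) < n_select:
        if random.random() < eps:
            chosen.add(random.choice(clients))                       # explore
        else:
            chosen.add(next(c for c in ranked if c not in chosen))   # exploit
    return list(chosen)

utilities = {"meter_1": 0.8, "meter_2": 0.3, "meter_3": 0.6, "meter_4": 0.1}
print(epsilon_greedy_select(utilities, n_select=2))
```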
### Interpretability and explainability
Interpretability and explainability are other important features in FL-based SG frameworks. Typically, explainable VFL improves the performance and security of AI systems that deal with low-quality data [156, 157]. This can include a credibility assessment strategy, a federated counterfactual explanation, and an importance rate metric [158]. In this regard, FedexA, an FL-based anomaly detection solution for edge-based industrial control systems, was introduced in reference [159]; it uses XAI for interpretability, allowing experts to make quick decisions and trust the model more. [160] proposes a lifecycle dashboard to address the need for explainability in FL-based systems by considering the requirements and perspectives of SG stakeholders, visualizing information from the FL server, and remaining generic enough to be applied to different use cases and industries. In summary, the potential impact of FL on the SG is very promising: it can (i) help break through the inherent information-exchange barriers and (ii) allow all SG parties to trustingly collaborate on energy data mining with an enhanced level of privacy protection.
## 7 Conclusion
This study contributes to the literature with a detailed overview of FL applications in SG considering different energy services. To the best of our knowledge, this work is
the first to analyze the development of \(\mathrm{FL}\) in \(\mathrm{SG}\) through a detailed analysis of design trends across different aspects, together with a discussion of representative contributions for popular energy services. Furthermore, the review highlighted the need for a holistic approach to \(\mathrm{SG}\) management and the potential of \(\mathrm{FL}\) to facilitate efficient and secure collaboration among the different actors in the energy sector. Only a few of the open technical questions have been answered so far, and \(\mathrm{FL}\) is expected to remain an active research area throughout the next decade.
|
2305.18738 | Generating Behaviorally Diverse Policies with Latent Diffusion Models | Recent progress in Quality Diversity Reinforcement Learning (QD-RL) has
enabled learning a collection of behaviorally diverse, high performing
policies. However, these methods typically involve storing thousands of
policies, which results in high space-complexity and poor scaling to additional
behaviors. Condensing the archive into a single model while retaining the
performance and coverage of the original collection of policies has proved
challenging. In this work, we propose using diffusion models to distill the
archive into a single generative model over policy parameters. We show that our
method achieves a compression ratio of 13x while recovering 98% of the original
rewards and 89% of the original coverage. Further, the conditioning mechanism
of diffusion models allows for flexibly selecting and sequencing behaviors,
including using language. Project website:
https://sites.google.com/view/policydiffusion/home | Shashank Hegde, Sumeet Batra, K. R. Zentner, Gaurav S. Sukhatme | 2023-05-30T04:22:37Z | http://arxiv.org/abs/2305.18738v2 | # Generating Behaviorally Diverse Policies with Latent Diffusion Models
###### Abstract
Recent progress in Quality Diversity Reinforcement Learning (QD-RL) has enabled learning a collection of behaviorally diverse, high performing policies. However, these methods typically involve storing thousands of policies, which results in high space-complexity and poor scaling to additional behaviors. Condensing the archive into a single model while retaining the performance and coverage of the original collection of policies has proved challenging. In this work, we propose using diffusion models to distill the archive into a single generative model over policy parameters. We show that our method achieves a compression ratio of 13x while recovering 98% of the original rewards and 89% of the original coverage. Further, the conditioning mechanism of diffusion models allows for flexibly selecting and sequencing behaviors, including using language.
Project website: [https://sites.google.com/view/policydiffusion/home](https://sites.google.com/view/policydiffusion/home).
## 1 Introduction
Quality Diversity (QD) is an emerging field in which collections of high performing, behaviorally diverse solutions are trained. QD methods perform what is often referred to as illumination or divergent search, in that they attempt to illuminate the search space rather than optimizing towards a single point. QD algorithms have shown success in learning robot controllers capable of adapting to damage, solving hard exploration problems, and generating diverse scenarios in the procedural content generation (PCG) domain [5][7][2]. The foundational method, MAP-Elites [20], maintains an archive of solutions where each cell in the archive corresponds to a solution with a score given by the task objective \(f\), and behavior specified by measure functions \(m_{1}...m_{k}\), which map to a low dimensional behavior space. The measure functions \(m_{1}...m_{k}\) specify which cell each solution belongs to in the \(k\) dimensional archive. New solutions are evolved using evolutionary methods and inserted into the archive only if they outperform existing ones with the same behavior.
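For readers unfamiliar with the method, the archive insertion rule at the core of MAP-Elites fits in a few lines; the sketch below uses a toy objective and random behavior descriptors purely for illustration.

```python
import numpy as np

def insert(archive, theta, fitness, measures, lower, upper, resolution):
    """MAP-Elites insertion rule: keep only the best solution per cell."""
    # Discretize the measure vector into a k-dimensional cell index
    idx = ((measures - lower) / (upper - lower) * resolution).astype(int)
    cell = tuple(np.clip(idx, 0, resolution - 1))
    if cell not in archive or fitness > archive[cell][0]:
        archive[cell] = (fitness, theta)

archive = {}
lower, upper, resolution = np.array([0.0, 0.0]), np.array([1.0, 1.0]), 10
for _ in range(1000):
    theta = np.random.randn(8)          # toy solution parameters
    measures = np.random.rand(2)        # toy 2-D behavior descriptor m(theta)
    fitness = -np.sum(theta ** 2)       # toy objective f(theta)
    insert(archive, theta, fitness, measures, lower, upper, resolution)
print(f"{len(archive)} of {resolution**2} cells filled")
```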
A promising subclass of methods (Quality Diversity Reinforcement Learning (QD-RL)) combines the optimization capabilities of RL with the illumination capabilities of QD, to find high performing and diverse solutions. In the robotics domain where the environment and the objective-measure functions \(f\) and \(\mathbf{m}\) are assumed to be non-differentiable, RL can be leveraged to estimate the gradients of \(f\) and/or \(\mathbf{m}\) and provide a powerful performance improvement operator on the solutions in the
archive. QD-RL methods that combine QD with on-policy and off-policy RL algorithms have shown promising results on a variety of locomotion tasks and are capable of finding a plethora of high-quality gaits [21][27][22][1]. One of several drawbacks of existing QD-RL methods is that they must maintain a collection of hundreds, if not thousands of policies, in order to cover the behavior space, which leads to poor space-complexity and difficulty in real-world deployment. Map-Elites-based QD methods show poor scaling properties and suffer from the curse of dimensionality, quite literally in that as the dimensionality \(k\) of the archive increases, the number of solutions one needs to store increases exponentially. Prior methods have attempted to scale Map-Elites to higher dimensional problems by using Centroidal Voronoi Tessellations to divide the archive into a small number of evenly spaced geometric regions. However, these methods require recomputing the Voronoi cells periodically, resulting in worse runtime performance, and try to keep the number of niches small in order to effectively exploit local competition. In order to smoothly interpolate between behaviors of different solutions with a discrete archive, one must upsample the archive resolution to (tens of) thousands of policies, often resulting in a level of discretization higher than the actual occurrence of distinct behaviors, while further worsening the space and time complexity of the algorithm.
An alluring idea is to distill the archive into a single, expressive model that completely covers the behavior space and maintains high performance. A single model representing the archive reduces space-complexity and potentially allows for smooth interpolation in the behavior space, making it easier to deploy and use in downstream tasks. Prior methods have shown promising results in distilling the archive into a single policy [8], or by learning a generative model via adversarial training over the policies in the archive [15]. We wish to improve on these methods by maintaining, or even improving, the overall performance of the original archive during the distillation phase, and scale generative models to be able to represent policies parameterized by deep neural networks rather than a low dimensional 1D vector of parameters.
To this end, we utilize the powerful generative and conditioning mechanisms of diffusion models to distill the archive into a single, expressive model that can generate a policy with any behavior from the behavior space mapped by the original archive. This generative process can be conditioned on the desired behavior measures, and even language descriptions of the desired behavior. Diffusion models have shown great success in computer vision in image quality and diversity [14][6]. Latent diffusion models accelerate training by compressing the image dataset into a compact, expressive latent space and training a diffusion model on this lower dimensional space [24]. They proceed in two stages, first by compressing imperceptible, high-frequency details via a learned dimensionality reduction, and then by learning the semantic details of the images via the actual diffusion model itself. Similarly, here we show that one can compress a collection of policies parameterized by deep neural networks into a lower dimensional space by using a variational auto encoder (VAE), and then learn the semantic or behavioral details of the policy distribution via latent diffusion. Our experiments show evidence of the manifold hypothesis or the elite hypervolume [31], that all high performing policies lie on a low-dimensional manifold. We summarize our contributions below.
1. We compress an archive of policies parameterized by deep neural networks and trained via a state of the art QD-RL method PPGA into a single, expressive model while maintaining performance of the policies in the original dataset.
2. We use the iterative conditioning mechanism of diffusion models to reconstruct policies with precise locations in measure space, and demonstrate how language conditioning can be used to flexibly generate policies with different behaviors.
3. We showcase our model's ability to sequentially compose completely different behaviors together, and additionally show that language conditioning can be used to dramatically improve the performance and consistency of sequential behavior composition.
## 2 Related Work
Quality DiversityQD Optimization attempts to illuminate the search space with high performing solutions. The optimization problem is formulated as follows. Given an objective \(f(\cdot)\) to maximize and \(k\) measure functions \(\textbf{m}=<m_{1}(\cdot)...m_{k}(\cdot)>\) that map a solution \(\theta_{i}\) to a low dimensional behavior space, the QD problem is to find the highest performing solution \(\theta_{i}\) for every value of **m**. Since **m** is a continuous variable, estimating a good solution for every point in behavior space requires infinite memory and is intractable. The QD problem is usually relaxed by discretizing **m** into a finite number of cells \(M\), represented as a \(k\)-dimensional archive \(\mathcal{A}\). The optimization
problem then becomes \(\textbf{max}\sum_{i=0}^{M}f(\theta_{i})\), where \(\theta_{i}\) is a solution whose measures \(\textbf{m}(\theta_{i})\) fall into cell \(i\). Differentiable Quality Diversity (DQD) [9] considers the problem where the objective and measure functions are differentiable, which provides gradients \(\nabla f(\cdot)\) and \(\nabla\textbf{m}(\cdot)\). Quality Diversity Reinforcement Learning (QD-RL) considers a subclass of problems that can be framed as sequential decision making tasks with exploitable Markov Decision Process (MDP) structure. Instead of optimizing for a single optimal policy, the goal is to find a collection of high performing policies that are diverse with respect to embedding functions **m** that encode behavior information in a low-dimensional space. QD-RL algorithms vary in implementation, and leverage recent works in both on-policy and off-policy RL [21, 29, 28, 22]. Proximal Policy Gradient Arborescence (PPGA) [1], on which we build here, is a state of the art QD-RL method that combines on-policy reinforcement learning with DQD. It maintains a current search policy corresponding to some policy \(\pi_{\theta_{\mu}}\) in the archive. The objective and measure gradients \(f\) and **m** are estimated for this policy and used to branch off solutions into nearby cells. The information on which branched policies most improved the archive, where policies that land in new cells are favored, is used to derive a first-order gradient estimate of maximum archive improvement. On-policy RL is used to walk the search policy towards those promising new regions of the archive that have yet to be explored. PPGA has produced state of the art results on challenging locomotion tasks. A particularly nice property of this method is that the first-order approximation of the gradient w.r.t. archive improvement improves with higher archive resolution. Since training diffusion models requires large datasets, upsampling the archive resolution in PPGA generally results in better performance and allows us to produce more data for the diffusion model.
Archive DistillationArchive distillation is the process by which a collection of solutions is distilled into a single model. This is particularly useful in the QD-RL domain, since having a single policy with full coverage of the behavior space, the ability to interpolate between points in the behavior space, and compose different behaviors to produce new behaviors, makes the model more versatile, memory efficient, and easily deployable in downstream tasks. Prior works predict that a form of the manifold hypothesis (Elite Hypervolume), exists because policies that map to the same low-dimensional behavioral space, despite occupying different niches, may share certain traits. Follow-up works attempt to either find such low-dimensional representations or illuminate them by searching over the manifold directly [11, 23]. Contemporary work in the QD-RL domain have shown success in archive distillation on difficult RL problems. [8] jointly produces an archive using a state of the art QD-RL method Policy Gradient Assisted Map Elites [21] and distills the archive into a single behavior-conditioned model. [19] uses a variant of Map Elites to produce a collection of policies that perform well in uncertain environments, and distills these policies into a single Decision Transformer [3]. Prior methods have also applied Generative Adversarial Networks to generate a diverse collection of policies for a robot ball-throwing task [15]. Here, we aim to improve on generative models applied in the QD-RL domain by scaling the representational capacity of our model to collections of deep neural networks, while simultaneously maintaining the performance and diversity of the original archive.
**Diffusion.** Diffusion models have become state of the art in image generation. Denoising Diffusion Probabilistic Models (DDPM) are a class of generative models that iteratively denoise noise sampled from an isotropic Gaussian. The iterative denoising process is a powerful mechanism that has been shown to produce state of the art results on computer vision benchmarks [6]. Numerous methods have made improvements on DDPMs that address some of the shortcomings of these models compared to other generative methods. [6] shows that classifier guidance can be applied at test time to improve the quality of the generated samples. [25] showed that, by relaxing the Markov assumption in the forward diffusion process, one can significantly improve inference time by downsampling the number of diffusion steps while maintaining most of the sample quality. Multiple refined methods for sampling from a diffusion process have been proposed, including [26, 18] and [16]. However, here we are not particularly concerned with sampling efficiency, and thus use the method proposed in [25]. [24] showed that diffusion can be performed on the latent space of a pretrained Variational Autoencoder.
Graph HypernetworksHypernetworks are models capable of estimating the weights of a secondary network [12]. When conditioned on task identities, these can achieve continual learning by rehearsing task specific weight realizations [32]. Graph hypernetworks were originally introduced for architecture search in image classification [17] and have been shown to be trainable with RL to estimate variable architecture policies for locomotion and manipulation [13].
## 3 Method
Policy CompressionFollowing [24], we compress the archive \(\mathcal{A}\) into a lower dimensional space using a variational autoencoder (VAE). A policy consists of \(l\) layers, each containing a weight matrix \(W_{i}\) and bias vector \(b_{i}\), \(1\leq i\leq l\). In the encoder \(\mathcal{E}\), the features of each \(W_{i}\) and \(b_{i}\) are extracted using a convolutional neural network and fully connected network, respectively. These features are concatenated together and fed into a final fully connected layer, that produces an intermediate latent representation \(z\in\mathbb{R}^{h\times w\times c}\). The decoder \(\mathcal{D}\) contains a conditional graph hypernetwork and an observation normalized decoder, \(d_{n}\), which takes in the latent code \(z\) and produces the reconstructed policy \(\pi^{\prime}_{i}=\mathcal{D}(z_{i})\). The conditional graph hypernetwork estimates the policy network's parameters while \(d_{n}\) estimates the observation normalizing mean and variance parameters. Our conditional graph hypernetwork is based on the implementation in [13]. While the original graph hypernetwork is meant to estimate the parameters of variable architectures (represented as graphs), we freeze the input architecture graph \(g\). This is set to be the architecture graph of all the networks in the archive, which in our case is represented as \(\{0,128,128,a\}\). Here 0 indicates the input node, the following 128's represent the hidden layer nodes and \(a\) is the output node and equals the action space dimension. Further, we add a latent encoder \(e_{z}\), and with the graph encoder \(e_{s}\), a concatenated encoding is fed to the gated graph network [33]. This mechanism lets us condition the parameter estimation on the latent \(z\). Together, \(\mathcal{E}\) and \(\mathcal{D}\) form the VAE architecture, which optimizes the objective
\[L_{VAE}=L_{rec}(\pi^{\prime}(a|s),\pi(a|s))+D_{KL}(\mathcal{D}_{\phi}(z|\pi)|| \mathcal{N}(0,I)) \tag{1}\]
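A minimal sketch of how Eq. (1) might be computed is given below; we assume the reconstruction term \(L_{rec}\) is a mean-squared action-matching loss on a shared batch of states and that the encoder outputs a diagonal Gaussian (mean and log-variance), which are common choices rather than details stated here. The default coefficient on the KL term matches the \(10^{-6}\) value discussed in the KL ablation of Section 4.

```python
import torch
import torch.nn.functional as F

def vae_loss(actions_rec, actions_orig, mu, logvar, beta=1e-6):
    """Sketch of Eq. (1): action-matching reconstruction + KL to N(0, I).

    `actions_rec`/`actions_orig` are actions of the reconstructed and original
    policies on the same batch of states; `mu`/`logvar` parameterize the
    encoder's diagonal Gaussian. MSE as L_rec is our assumption.
    """
    rec = F.mse_loss(actions_rec, actions_orig)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl
```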
Policy DiffusionWe hypothesize that the denoising process can be used to produce high quality policies given a dataset of behaviorally diverse policies parameterized by deep neural networks. We describe how the DDPM formulation can be applied to such datasets. Diffusion models consist of a forward process \(q\) that iteratively applies noise \(\epsilon_{t}\) to a sample \(x_{0}\) from the original data distribution \(x_{0}\sim q(\textbf{x})\). The noise \(\epsilon_{t}\) at each timestep is sampled according to a variance schedule \(\{\beta_{t}\}_{t=1}^{T}\)
\[\epsilon_{t}=q(x_{t}|x_{t-1})=\mathcal{N}(x_{T};x_{t-1}\sqrt{1-\beta_{t}}, \beta_{t}\textbf{I}) \tag{2}\]
making the forward process Markovian. The reverse or generative process \(p(x_{T})\) reverts noise from an isotropic Gaussian into a sample \(x_{0}\) in \(q(\textbf{x})\) and contains a similar Markov structure.
\[p_{\theta}(x_{0})=p(x_{T})\prod_{t=1}^{T}p_{\theta}(x_{t-1}|x_{t}),\;\;\;p_{ \theta}(x_{t-1}|x_{t})=\mathcal{N}(x_{t-1};\mu_{\theta}(x_{t},t),\Sigma_{ \theta}(x_{t},t)) \tag{3}\]
Here, \(x_{0}\) is a latent code \(z\) representing some policy \(\pi_{\theta}(a|s)\), rather than the policy parameters itself. Thus, the diffusion model instead learns to capture the distribution of the much lower dimensional \(z\), analogous to the Elite Hypervolume hypothesized in [31].
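Composing Eq. (2) over timesteps gives the standard closed-form forward sample \(x_{t}=\sqrt{\bar{\alpha}_{t}}\,x_{0}+\sqrt{1-\bar{\alpha}_{t}}\,\epsilon\) used during training. The sketch below applies it to a batch of latent policy codes; the linear variance schedule is an illustrative choice, not necessarily the one used in the experiments.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # linear schedule, illustrative only
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

def q_sample(z0, t, eps):
    """Closed-form forward process: sample z_t ~ q(z_t | z_0)."""
    a = alpha_bars[t].sqrt().view(-1, 1)
    s = (1.0 - alpha_bars[t]).sqrt().view(-1, 1)
    return a * z0 + s * eps

z0 = torch.randn(4, 32)                      # a batch of latent policy codes
t = torch.randint(0, T, (4,))
eps = torch.randn_like(z0)
zt = q_sample(z0, t, eps)                    # noised latents fed to the denoiser
```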
Figure 1: **Structure of our model as an encoder (left) and decoder (right). During encoding, policies are split into layers and encoded separately. The encodings are concatenated together and fed into a final layer to produce a latent representation. The conditional diffusion model samples a latent code \(z\) from the latent representation. During decoding, a graph hypernetwork jointly decodes the weight and bias parameters from \(z\) and the policy network architecture graph \(g\), while normalization parameters are directly decoded from \(z\).**
[14] makes a connection between the reverse process and Langevin Dynamics, where \(p_{\theta}(x_{t-1}|x_{t})\) is the learned gradient of the data density. When \(x\) represents neural network parameters, the diffusion model learns the gradient of the policy-distribution score function and iteratively refines the noisy policy parameters \(x_{t}\) towards this distribution. When conditioning the policy \(x\) to match a specific behavior **m** i.e. \(p_{\theta}(x_{t-1}|x_{t},\textbf{m})\), this gradient can be thought of as the gradient of the maximum a posteriori (MAP) objective over the distribution of policies that exhibit behavior **m** with respect to policy parameters \(x_{t}\). Thus, our diffusion formulation draws inspiration from Bayesian Methods, where \(p(x_{T},\textbf{m})\prod_{t=1}^{T}p_{\theta}(x_{t-1}|x_{t},\textbf{m})\) resembles the iterative training process of a neural network \(x\) towards the mode of the posterior distribution over high-performing policies with behavior **m**.
Training ProcedureWe follow the training procedure in [24]. We first train an autoencoder according to the objective in Eq. 1. A random batch of policies and their observation normalizers \((\pi_{\theta},\eta)\) are sampled from the archive and fed into the encoder \(\mathcal{E}\) to produce latents \(\textbf{z}=\mathcal{E}(\pi_{\theta},\eta)\). The decoder \(\mathcal{D}\) then reconstructs the policies and their respective observation normalizers from the latents \((\pi^{\prime}_{\theta},\eta^{\prime})=\mathcal{D}(\textbf{z})\). To simplify training, on some tasks we normalize the archive dataset by subtracting out the per-parameter mean and dividing by the per-parameter variance. This results in an autoencoder over parameter residuals relative to the original per-parameter mean and variance, which we keep for decoding. For training the latent diffusion model, we sample a batch of policies and their respective obs normalizers, measures, and text labels \((\pi_{\phi},\eta,\textbf{m},\textbf{y})\). The policies and measures are first encoded into latent vectors and measure embeddings respectively \(\textbf{z}=\mathcal{E}(\pi_{\theta},\eta)\), \(\tau_{\psi_{m}}(\textbf{m})\), where \(\tau_{\psi_{m}}\) is a trainable encoder. These are subsequently fed into the diffusion model, where the latents are conditioned on the measure embeddings using the cross-attention mechanism. We uniformly sample **t** from \(\{1,...,T\}\) for the batch and regress the predicted noise vectors according to the latent diffusion training objective
\[L_{LDM}:=\mathbb{E}_{\mathcal{E}(\pi_{\phi}),\epsilon\sim\mathcal{N}(0,1),t} \left[||\epsilon-\epsilon_{\theta}(z_{t},t,\tau_{\psi_{m}}(\textbf{m})||_{2}^ {2}\right] \tag{4}\]
In the case of language-conditioned diffusion, the measures are replaced with text labels that are encoded using a Flan-T5-Small encoder ([4]), which is fine-tuned end-to-end using the loss in Equation 4 to produce text embeddings \(\epsilon_{\theta}(z_{t},t,\tau_{\psi_{y}}(y))\) that condition the diffusion process.
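Putting these pieces together, one training step for the objective in Eq. (4) could look like the following sketch; the module names (`encoder`, `denoiser`, `measure_encoder`) and their call signatures are stand-ins for \(\mathcal{E}\), the denoising network, and \(\tau_{\psi_{m}}\), not the released implementation.

```python
import torch
import torch.nn.functional as F

def ldm_training_step(encoder, denoiser, measure_encoder, policies, measures,
                      alpha_bars, T=1000):
    """One optimization step for Eq. (4): regress the injected noise."""
    z0 = encoder(policies)                           # latent codes of the policies
    cond = measure_encoder(measures)                 # embeddings used for cross-attention
    t = torch.randint(0, T, (z0.shape[0],), device=z0.device)
    eps = torch.randn_like(z0)
    a = alpha_bars[t].sqrt().view(-1, 1)
    s = (1.0 - alpha_bars[t]).sqrt().view(-1, 1)
    zt = a * z0 + s * eps                            # forward-noised latents
    eps_hat = denoiser(zt, t, cond)                  # predicted noise
    return F.mse_loss(eps_hat, eps)                  # L_LDM
```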
## 4 Experiments
In our experiments, we wish to analyze our model's performance on the following: 1. archive compression while maintaining original performance, 2. measure and language conditioning to produce policies with specific behaviors, and 3. sequential behavior composition to produce new behaviors. Since PPGA was evaluated on the Brax [10] environments Humanoid, Walker2D, Halfcheetah, and Ant, we evaluate our model on the same four environments. For each environment, the reward function encourages stable forward locomotion while minimizing energy consumption, and the observation and action spaces are continuous. The dimensions for the (observation, action) spaces are: Humanoid (227, 17); Walker2d (17, 6); Halfcheetah (18, 6); Ant (87, 8). Every policy in the archive has two hidden layers of 128 neurons each, followed by an output layer matching the action space dimension. While there are recent works that perform archive distillation on these tasks [8; 19], they produce a very different datasets of policies using different underlying QD-RL methods. The Quality Diversity Transformer, for example, uses evolutionary strategies with explicit optimization towards policies with low-variance behaviors, whereas PPGA uses first-order gradient approximations and makes no such explicit optimization towards low behavior variance. As any comparison of distillation methods is relative to the archive being distilled, we are unable to make any direct comparison to these methods.
**Performance and Accuracy Experiments.** We evaluate our model's ability to reconstruct the performance and behavior of policies in the archive to high precision. Following [19], we first downsample our trained archives for each task into 50 equally-spaced geometric regions using Centroidal Voronoi Tessellation Map-Elites (CVT-ME) [30]. Each region has a policy \(\pi_{\theta}\) from the original archive and a corresponding behavior \(<m_{1},...,m_{k}>\) for a \(k\)-dimensional archive that lies in the center of that region. These policies' behaviors are used as conditions to produce 50 measure-conditioned policies \(\pi_{\theta_{1}},...,\pi_{\theta_{50}}\). Each policy is then rolled out 50 times, and the objective and measure functions \(f(\pi_{\theta_{i}}),\textbf{m}(\pi_{\theta_{i}})\) are computed as the average over 50 episodes. These values
are then used to compute the reward ratio, which is the average performance of the generated policies over the original ones: \(r=\frac{\sum_{i=1}^{50}f(\pi_{\theta_{i}}^{\prime})}{\sum_{i=1}^{50}f(\pi_{ \theta_{i}})}\).
The reward ratio alone can be misleading if all the generated policies have high performance but incorrect measures w.r.t. the measure conditions. For example, generating 50 best-performing policies with the same measures despite sampling different measure conditions would lead to a large reward ratio but no diversity. Thus, we also track the Jensen-Shannon (JS) Divergence between the distribution of measures produced by the generated policies and the original policies. We refer to this as the measure divergence. We report the JS divergence instead of the KL divergence because there is no clear preference between the KL divergence directions, and because some experiments produce policies with near-zero measure distribution overlap, for which the JS divergence is upper bounded by \(\ln(2)\), and the KL divergence is arbitrarily large. We perform this evaluation once every 10 epochs over the course of training and present the results in Fig. 2. Humanoid, Walker2D, and Halfcheetah achieve a reward ratio near \(\sim 1.0\) while reaching a measure divergence of \(10^{-2}\). Ant achieves a reward ratio of \(\sim 0.75\) with a measure divergence of \(\sim 0.1\). We expect a higher measure divergence on Ant given that it is a 4-legged locomotor and thus has twice as many measures as the other environments.
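For reference, the measure divergence can be estimated from rollout samples as in the sketch below; the shared histogram binning is our own choice, since the discretization of the measure distributions is not specified here.

```python
import numpy as np

def js_divergence(samples_p, samples_q, bins=20):
    """Jensen-Shannon divergence between two empirical measure distributions.

    Inputs are 1-D arrays of measure values; the shared binning is an
    illustrative assumption.
    """
    lo = min(samples_p.min(), samples_q.min())
    hi = max(samples_p.max(), samples_q.max())
    p, _ = np.histogram(samples_p, bins=bins, range=(lo, hi))
    q, _ = np.histogram(samples_q, bins=bins, range=(lo, hi))
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)   # upper bounded by ln(2)

# Toy check: fully disjoint distributions give approximately ln(2)
print(js_divergence(np.random.rand(1000), np.random.rand(1000) + 10.0))
```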
Archive ReconstructionAt test time, we analyze the ability of the latent diffusion model to reconstruct the entire original archive. We take the measure vector \(<m_{1},...,m_{k}>\) corresponding to cell \(c_{i}\) and use it as a condition to produce policy \(\pi_{\theta_{i}}^{\prime}\forall c_{i}\in\mathcal{A}\). If the resolution of archive \(\mathcal{A}\) is \(d^{k}\), where \(d\) is the discretization level and \(k\) is the number of measures, this gives us a collection of \(d^{k}\) policies. These are rolled out to compute \(f\) and \(\mathbf{m}\), and inserted into a new, empty archive \(\mathcal{A}^{\prime}\) according to the standard insertion rules in QD optimization, where only the best solution for a cell \(i\) is stored when two solutions map to the same location in behavior space. The reconstructed archive will thus have some number of unique solutions such that \(|\mathcal{A}^{\prime}|\leq d^{k}\) with a QD Score of \(\sum_{i=0}^{|\mathcal{A}^{\prime}|}f(\pi_{\theta_{i}}^{\prime})\). To make an informative comparison between the reconstructed and original archives, we plot their cumulative distribution functions (CDF) (Fig. 4). These not only encapsulate coverage and QD-score information, but also tell us how the policies are distributed with respect to the objective.
On all tasks, the policy distribution of the reconstructed archive closely matches that of the original one. On Halfcheetah, we are able to almost exactly reproduce the original archive. On all other tasks, we lose some density in the low performing regions of the archive, but consistently match policy density in the higher performing regions. Fig. 3 tells a strikingly similar story, in that our model first fills the central and often higher performing regions of the archive before moving on to the fringe regions to improve diversity. Both results seem to suggest that the diffusion model first learns common features corresponding to high performance across all policies, and then proceeds to learn aspects of individual policies that make them behaviorally unique. Table 1 provides a quantitative view of the CDF plots. We report the QD-scores and coverage on both the original and reconstructed archives for all tasks. Following [19], we report the Mean Error in Measure (MEM), which is the average \(L_{2}\) distance between all generated policies' and original policies' measures in archives \(\mathcal{A}^{\prime}\) and \(\mathcal{A}\), respectively: MEM \(=\mathbb{E}\big{[}||\mathbf{m}(\pi_{\theta_{i}})-\mathbf{m}(\pi_{\theta_{i}} ^{\prime})||_{2}^{2}\big{]}\).
Figure 2: **Performance of our method on four benchmark environments.** Higher reward ratios correspond to better performing policies. Lower JS divergence corresponds to more precise measure reconstruction. See Section 4 for a description of the experimental method and limits of these performance metrics.
KL Coefficient AblationA KL penalty (Eq. 1) is used to regularize the latent space. [24] used a relatively small penalty coefficient of \(10^{-6}\) to prevent information loss due to overly compacting the latent space. We wish to similarly quantify the information density of our dataset and the effects of stronger latent space regularization on VAE model performance. Fig. 5 shows the VAE reward ratio and JS Divergence for larger values of the KL coefficient. Overall, our findings are in line with [24] - stronger regularization results in a loss of information, thus reducing our VAE's ability to reproduce policies from the original dataset. For all other experiments, we fix the KL coefficient to \(10^{-6}\).
GHN Size AblationWe examine the effect of model size on our model's ability to reproduce the archive. We chose three different values for the number of neurons in the hidden layers of the hypernetwork in the decoder and keep the diffusion model size fixed. The results are shown in Table 2 for the Humanoid environment. The QD Ratio indicates the QD score of the reconstructed archive over the original archive's QD score. Compression ratio is calculated as the number of parameters the decoder plus the number of parameters in the diffusion model, divided by the sum of parameters of all policies in the original archive. In general, we find that the MEM decreases and QD ratio increases with larger model size, at the expense of compression ratio. Nonetheless, even the largest diffusion model (43.7 million parameters) achieves a compression ratio of 8 to 1, while reproducing 94% of the original archive with low measure error. In situations where model size is not a significant constraint, picking the largest model may be the best option, as it nearly recovers the original archive's performance and covers all relevant parts of the behavior space covered by the original dataset.
| Task | QD Score (\(\times 10^{7}\)) | Coverage (%) | Reconstructed QD Score (\(\times 10^{7}\)) | Reconstructed Coverage (%) | MEM |
| --- | --- | --- | --- | --- | --- |
| Humanoid | 8.08 | 90.79 | 7.4 ± 0.11 | 84.35 ± 1.01 | 0.237 ± 0.03 |
| Walker2D | 3.12 | 85.68 | 3.00 ± 0.02 | 83.65 ± 0.14 | 0.269 ± 0.07 |
| Halfcheetah | 11.4 | 96.94 | 11.1 ± 0.01 | 94.55 ± 0.11 | 0.15 ± 0.01 |
| Ant | 3.44 | 71.03 | 2.54 ± 0.07 | 52.66 ± 1.46 | 0.57 ± 0.08 |

Table 1: **Latent Diffusion QD Metrics.** The QD Score and Coverage columns are calculated by rolling out the policies in the original archive training dataset. The Reconstructed QD Score and Reconstructed Coverage are calculated by rolling out the policies that were generated by our model, conditioned on the measures from the original archive.
Figure 4: **CDF of policies for all tasks.** For each task, the y axis represents the percentage of the archive grid that has at least the corresponding amount of returns on rollout (shown on the x axis).
Figure 3: **Heatmap for humanoid as training proceeds.** Coverage and average reward increases rapidly in early epochs to approximately match the original archive. Reward ratios and JS divergence continues to improve slowly with more epochs, as shown in Figure 2.
**Sequential Behavior Composition.** To test our model's ability to successfully compose an arbitrary chain of generated behavior policies without triggering the termination criteria within a single episode, we design an experiment where the episode, which naturally terminates at \(T=1000\), is divided into four time intervals of 250 timesteps each. For each time interval, we randomly sample a measure condition **m** and use our conditional diffusion model combined with the decoder to produce an agent that exhibits behavior **m**, \(\pi_{\phi}(a|s,\textbf{m})=\mathcal{D}(\epsilon_{\theta}(z_{T},T,\textbf{m})),\ z_{T}\sim\mathcal{N}(0,1)\). An experiment is successful if the episode terminates no earlier than \(t=800\), implying that our model has produced all four behavior policies and successfully composed at least three of them together. We consider the trajectory length for one experiment to be the average trajectory length over 50 parallel environments, and we repeat this experiment 10 times. Across these repetitions, the success rate is 80%.
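The evaluation protocol can be summarized by the sketch below, in which a new behavior policy is generated every 250 steps; `sample_measure`, `generate_policy`, and `env` are placeholders for the measure sampler, the conditional model plus decoder, and a Brax-style environment, with interfaces chosen only for illustration.

```python
def run_sequenced_episode(sample_measure, generate_policy, env,
                          horizon=1000, segment=250):
    """Switch to a freshly generated behavior policy every `segment` steps."""
    obs = env.reset()
    t = 0
    while t < horizon:
        policy = generate_policy(sample_measure())   # condition on a new behavior
        for _ in range(segment):
            obs, reward, done, info = env.step(policy(obs))
            t += 1
            if done:                                 # early termination (e.g., a fall)
                return t
    return t                                         # success corresponds to t >= 800
```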
**Language Conditioned Sequential Behavior Composition.** To evaluate how well our language conditioned diffusion model is able to produce specific behaviors, we repeat the above protocol with encoded text labels in place of **m**. We run three related experiments. First, we uniformly sample 100 sequences of four text labels from the 128 available text labels, and perform the evaluation described above, reaching a success rate of 38%. Second, we sample sequences of text labels, filtering out those that contain the word "fall", which indicates unsuccessful policies. This allows us to select better policies that don't fall over on their own, which increases the success rate of sequences to 59%. Finally, we sample sequences using only text labels that contain the term "quickly," which indicates policies that move forward quickly. This raises the success rate of sequencing four different behaviors to 79%, more than double the success rate of sampling text labels uniformly. The text labels of one such successful episode, overlaid on a heatmap of the archive from which the LDM was trained, along with a sequence of rendered frames from the same episode, are shown in Figure 7. The improved success rate of sequences of filtered labels demonstrates one approach to using text conditioning to extract more specific behaviors from an archive than can be found using measure conditioning alone. This approach is similar to the use of "quality tags" in some diffusion models.
## 5 Limitations
Scaling with measure dimension warrants further investigation (e.g., on Ant, we see a high MEM). Further experimentation with diffusion hyperparameters might produce better reconstruction of policies. The language descriptions we use are currently limited and could be expanded to include a much larger corpus of descriptions. Language can sometimes underdetermine the desired behavior, so some descriptions can lead to undesirable outcomes; for example, Figure 6 shows higher variance in the reconstructed measures when conditioning on language. Training the diffusion model requires us to first construct the policy archive using a QD algorithm. An interesting alternative would be to train this model in an online regime, bypassing the archive construction step completely.
## 6 Conclusion
We proposed a method that uses diffusion models to distill the archive into a single generative model over policy parameters, achieving a compression ratio of 13x while recovering 98% of the original reward and maintaining 89% of the original coverage. Our models can flexibly generate specific behaviors using measures or text, and these behaviors can be sequenced together with surprising consistency. A pleasantly surprising find was additional evidence supporting the Elite Hypervolume
Figure 6: **Visualization of the measure values resulting from a sequence of four randomly selected desired measures (left) and text labels (right). The measure values from a policy sequence (Section 4) are shown on the left as a function of time. These were run 10 times and the corresponding error plots are shown. The close match in measure values between the Desired values, which are used for conditioning, and the Reconstructed values shows the effectiveness of our conditioning. On the right, the measure values from a policy sequence as described in Section 4 are shown as a function of time. The text labels used to produce the policy sequence are shown at the bottom of the figure. Large changes in measure values show that text conditioning is able to produce and sequence highly diverse policies. The desired behavior in both cases is changed four times throughout the episode.**
Figure 7: **Temporal behavior sequencing from text labels. A humanoid (left) controlled by a policy sequence beginning with “slide forward on your right foot while kicking with your left foot” (top left), then “run forward on left foot while dragging right foot”, then “quickly shuffle forward on your left foot”, and finally “wildly hop forward on left foot while lifting your right foot up” (bottom right). Heatmap of the archive (right) showing the sequence of text labels overlaid on the measure space.**
hypothesis proposed in [31]. From our training results and heatmap evolution over time, we see that the diffusion model first learns the general structure of what comprises a "good" policy across behavior space, and then proceeds to branch out to different areas of behavior space, implying a learning of what makes each policy behaviorally unique. Finally, we look forward to exploring the connections this work has to other subfields, such as Meta-RL and Bayesian Learning.
|
2301.12084 | Automated Arrangements of Multi-Part Music for Sets of Monophonic
Instruments | Arranging music for a different set of instruments that it was originally
written for is traditionally a tedious and time-consuming process, performed by
experts with intricate knowledge of the specific instruments and involving
significant experimentation. In this paper we study the problem of automating
music arrangements for music pieces written for monophonic instruments or
voices. We designed and implemented an algorithm that can always produce a
music arrangement when feasible by transposing the music piece to a different
scale, permuting the assigned parts to instruments/voices, and transposing
individual parts by one or more octaves. We also published open source software
written in Python that processes MusicXML files and allows musicians to
experiment with music arrangements. It is our hope that our software can serve
as a platform for future extensions that will include music reductions and
inclusion of polyphonic instruments. | Matthew Mccloskey, Gabrielle Curcio, Amulya Badineni, Kevin Mcgrath, Dimitris Papamichail | 2023-01-28T04:13:45Z | http://arxiv.org/abs/2301.12084v1 | # Automated Arrangements of Multi-Part Music for Sets of Monophonic Instruments
###### Abstract.
Arranging music for a different set of instruments that it was originally written for is traditionally a tedious and time-consuming process, performed by experts with intricate knowledge of the specific instruments and involving significant experimentation. In this paper we study the problem of automating music arrangements for music pieces written for monophonic instruments or voices. We designed and implemented an algorithm that can always produce a music arrangement when feasible by transposing the music piece to a different scale, permuting the assigned parts to instruments/voices, and transposing individual parts by one or more octaves. We also published open source software written in Python that processes MusicXML files and allows musicians to experiment with music arrangements. It is our hope that our software can serve as a platform for future extensions that will include music reductions and inclusion of polyphonic instruments.
music arrangement, music algorithms
Despite its obvious benefits, we are not aware of any published algorithm or widely available software that, in the general case, automates the arrangement of a given music piece for a set of instruments different from the one it was originally written for. Working toward filling that need, we designed and implemented an algorithm that arranges music written for monophonic instruments and guarantees a successful outcome whenever an arrangement is possible without score reduction. Our recursive backtracking algorithm exhaustively examines all feasible assignments of parts to available instruments and all possible transpositions of the piece, including independent octave transpositions of individual parts, to determine a successful arrangement that minimally affects the musicality of the piece.
## 2. Methods
### Definitions
For the purposes of our research, a music piece is written in a chromatic scale and notes are separated by the interval of a semitone. We will assume that all notes fall within a total range of 88 semitones, the notes of a traditional piano, from A0 to C8. We will assign an integer to each note in the range, such that all notes can be represented by an integer from 1 to 88. For our discussion, a monophonic instrument is one that can only play one pitch at a time, such as the flute, the oboe, or a voice. Polyphonic instruments can play multiple notes simultaneously, such as the piano, guitar, or harp. A polyphonic instrument can always play a monophonic part within its range.
For our study an input music piece will consist of \(n\) parts, each being assigned to a single monophonic instrument or voice. Such parts are presented in the sheet music representation of the piece in an equal number of staves each. Our
Figure 1. Approximate sounding ranges of instruments and voices. Figure reproduced with permission from Dr. Brian Blood (dolmetch.com)
algorithm preserves the rhythm, rhythmic values of notes and rests, as well as bar lines of the music piece. Clefs, key signatures and accidentals are adjusted based on the scale of the transposed music and the instruments/voices that parts are assigned to. Our algorithm does not control for instrument timbre that may be expected in any part of the music; similarly, the thickness of the piece is not being necessarily maintained.
We will assume that an input music piece is originally written for \(n\) instruments \(I_{1},I_{2},\cdots,I_{n}\), each assigned to play a part \(P_{i}\) of the piece, with \(1\leq i\leq n\). We seek to arrange the music for \(n\) output instruments \(O_{1},O_{2},\cdots,O_{n}\). The range of each part \(i\) is an integer interval \(R_{i}=\llbracket a_{i},b_{i}\rrbracket\), where \(a_{i}\) is the integer value corresponding to the lowest frequency note and \(b_{i}\) to the highest frequency note played by instrument \(I_{i}\) in part \(P_{i}\), \(1\leq i\leq n\). Likewise, the playing range of each output instrument \(O_{i}\) will be denoted by \(OR_{i}\), \(1\leq i\leq n\), indicating the integer interval of values corresponding to the notes the instrument is able to play. Approximate ranges for a set of instruments and voices can be seen in Figure 1.
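To make the notation concrete, the sketch below maps note names to the integer scale defined above (A0 as 1, C8 as 88) and tests whether a transposed part range \(R_{i}\) fits inside an instrument range \(OR_{i}\); the helper names and the quoted alto-saxophone range are illustrative rather than taken from the published software.

```python
# A0 maps to 1 and C8 to 88, matching the 88-key range described above.
NOTE_OFFSETS = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
                "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def note_to_int(name, octave):
    """Map a pitch such as ('A', 0) -> 1 or ('C', 8) -> 88."""
    value = 12 * (octave + 1) + NOTE_OFFSETS[name] - 20
    if not 1 <= value <= 88:
        raise ValueError("note outside the 88-key range")
    return value

def fits(part_range, instrument_range, shift=0):
    """True if the part, transposed by `shift` semitones, stays in range."""
    (lo, hi), (ilo, ihi) = part_range, instrument_range
    return ilo <= lo + shift and hi + shift <= ihi

assert note_to_int("A", 0) == 1 and note_to_int("C", 8) == 88
# E.g., a part spanning G3..C5 against an alto saxophone sounding range of roughly Db3..Ab5
print(fits((note_to_int("G", 3), note_to_int("C", 5)),
           (note_to_int("C#", 3), note_to_int("G#", 5))))
```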
### Monophonic instrument set arrangement algorithm
Our Monophonic Music Arrangement (MMA) algorithm performs a nearly comprehensive search of possible permutations of parts. The music is transposed to all twelve keys, and the algorithm runs on each key, unless a solution has already been found that results in fewer sharps/flats over all keys for each part; this prevents the "ideal" transposition from having an unnecessarily complex key signature. Other than that, the search is fully comprehensive. For each part, the algorithm finds all possible transpositions of that part in the source piece that can be played by at least one available instrument. All permutations of these possible transpositions are then examined. If all parts can be played by at least one instrument, the algorithm then checks whether a valid set of part assignments exists; this check is performed by a recursive function that is memoized to improve performance. If a transposed key yields valid permutations, the transposition with the least total deviation from the original composition is selected. Once all twelve keys have been checked, all permutations are tried using the selected transposition, unless there is no selected transposition, in which case the algorithm fails. For the permutations that are valid in the given transposition, the best arrangement is selected based on how closely the average pitch of each part matches the median pitch of the instrument's range.
The MMA algorithm implementation consists of four main functions, described in pseudocode below.
### Implementation
The MMA algorithm was implemented in Python utilizing the Music21 library and the MuseScore software. Our program requires two input files and produces a single output file with the music arrangement. The required input files consist of the original piece of music in MusicXML format and a TOML file listing the instrument set to arrange for, where an assigned value of \(k\) to an instrument indicates \(k\) parts should be arranged for that instrument. An example of a TOML file with an input instrument set consisting of one clarinet, two tenor saxophones, and two alto saxophones is
Figure 2. Examples of input instrument set and instrument information files
shown in Figure 2(a). Metadata about each instrument, consisting of its key in notation and a reasonable note range, is defined in a separate TOML file which is loaded separately by the program and is populated with common music instruments. An example of an entry for the alto saxophone in the instrument metadata file is shown in Figure 2(b).
During execution our program checks whether the number of input instruments matches the number of parts in the piece, and then attempts to arrange for the given instruments as previously described. If arrangements are found, the best arrangement based on the criteria described in section 2.2 is output as a MusicXML file. If no feasible arrangement is found, or if the number of instruments does not match, then an error message is displayed and no output file is produced.
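A minimal sketch of this input-handling step is shown below; it assumes a flat instrument-to-count TOML layout as in Figure 2(a), and the key names in the comment are illustrative rather than copied from the released repository.

```python
import tomllib                      # standard library in Python 3.11+
from music21 import converter

def load_inputs(score_path, instruments_path):
    """Parse the MusicXML score and the requested instrument set."""
    score = converter.parse(score_path)
    with open(instruments_path, "rb") as f:
        requested = tomllib.load(f)   # e.g. {"clarinet": 1, "alto saxophone": 2, ...}
    n_requested = sum(requested.values())
    n_parts = len(score.parts)
    if n_requested != n_parts:
        raise SystemExit(f"piece has {n_parts} parts but {n_requested} instruments were requested")
    return score, requested
```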
```
procedure FindBestChoice(stream, parts)
    bestChoice ← null
    bestSharps ← ∞
    for semitones from -6 through 5 do
        sharps ← the total number of sharps/flats that would appear in the key signature for each part
        if sharps ≤ bestSharps then
            thisBestChoice ← element from RunTransposed(stream, parts, semitones) with the least deviation
            if thisBestChoice ≠ null and either sharps < bestSharps or deviation of thisBestChoice < deviation of bestChoice then
                bestChoice ← thisBestChoice
                bestSharps ← sharps
            end if
        end if
    end for
    return bestChoice
end procedure
```
**Algorithm 3** Find Best Choice
```
procedure MMA(stream, parts)
    bestChoice ← FindBestChoice(stream, parts)
    if bestChoice = null then
        return null
    end if
    transpose each part by the resulting transposition
    bestFit ← ∞
    for each permutation of newParts do
        if all parts are valid in the given permutation then
            fit ← the total absolute difference between the average pitches and the median pitch of each part
            if fit < bestFit then
                bestFit ← fit
                bestPermutation ← this permutation
            end if
        end if
    end for
    return bestPermutation
end procedure
```
**Algorithm 4** MMA Algorithm
## 3. Results
We tested our software on a variety of music pieces written for monophonic instruments. In Figure 3 we show three measures, starting at measure 16, of the _Puttin' on the Ritz_ song by Irving Berlin. Part (a) shows the input score composed of four monophonic parts. Part (b) displays the arranged piece for saxophone quartet, consisting of a soprano, alto, tenor, and baritone saxophones. Similarly, in Figure 4 we display three measures of _Carol of the Bells_, as arranged and performed by the Pentatonix voice group, starting at measure 18 of the piece.
Complete input/output files for three test cases of our software, including the _Puttin' on the Ritz_ and _Carol of the Bells_ above, can be examined at: [https://owd.tcnj.edu/~papamicd/music/mma/examples/](https://owd.tcnj.edu/~papamicd/music/mma/examples/)
The repository for this project can be found at: [https://github.com/spazzylemons/music-arrangement/](https://github.com/spazzylemons/music-arrangement/)
## 4. Conclusions and Future Work
Our monophonic music arrangement algorithm and its software implementation create a platform for automating music arrangements with minimal user input. Although currently basic in its functionality, it can be readily extended in a number of different directions. For accommodating arrangements for a smaller set of instruments than the number of parts in the music, score reduction techniques can be applied to eliminate certain parts or at least reduce the number of simultaneous notes that are played throughout the piece, while maintaining faithfulness to the original. To allow for the inclusion of polyphonic instruments in the arrangements, further work is required in analyzing and decomposing polyphonic parts into monophonic ones and inversely, while adhering to constraints related to fingerings and other instrument and player restrictions.

Figure 3. Three measures from an arrangement of ’Puttin’ on the Ritz’ from piano to saxophone quartet

Figure 4. Three measures from an arrangement of ’Carol of the Bells’ from voices to saxophone quartet
## Acknowledgments
The authors acknowledge use of the ELSA high performance computing cluster at The College of New Jersey for conducting the research reported in this paper. This cluster is funded in part by the National Science Foundation under grant numbers OAC-1826915 and OAC-1828163.
|
2308.11086 | Pushing coarse-grained models beyond the continuum limit using equation
learning | Mathematical modelling of biological population dynamics often involves
proposing high fidelity discrete agent-based models that capture stochasticity
and individual-level processes. These models are often considered in
conjunction with an approximate coarse-grained differential equation that
captures population-level features only. These coarse-grained models are only
accurate in certain asymptotic parameter regimes, such as enforcing that the
time scale of individual motility far exceeds the time scale of birth/death
processes. When these coarse-grained models are accurate, the discrete model
still abides by conservation laws at the microscopic level, which implies that
there is some macroscopic conservation law that can describe the macroscopic
dynamics. In this work, we introduce an equation learning framework to find
accurate coarse-grained models when standard continuum limit approaches are
inaccurate. We demonstrate our approach using a discrete mechanical model of
epithelial tissues, considering a series of four case studies that consider
problems with and without free boundaries, and with and without proliferation,
illustrating how we can learn macroscopic equations describing mechanical
relaxation, cell proliferation, and the equation governing the dynamics of the
free boundary of the tissue. While our presentation focuses on this biological
application, our approach is more broadly applicable across a range of
scenarios where discrete models are approximated by approximate continuum-limit
descriptions. All code and data to reproduce this work are available at
https://github.com/DanielVandH/StepwiseEQL.jl. | Daniel J. VandenHeuvel, Pascal R. Buenzli, Matthew J. Simpson | 2023-08-21T23:49:03Z | http://arxiv.org/abs/2308.11086v3 | # Pushing coarse-grained models beyond the continuum limit using equation learning
###### Abstract
Mathematical modelling of biological population dynamics often involves proposing high fidelity discrete agent-based models that capture stochasticity and individual-level processes. These models are often considered in conjunction with an approximate coarse-grained differential equation that captures population-level features only. These coarse-grained models are only accurate in certain asymptotic parameter regimes, such as enforcing that the time scale of individual motility far exceeds the time scale of birth/death processes. When these coarse-grained models are accurate, the discrete model still abides by conservation laws at the microscopic level, which implies that there is some macroscopic conservation law that can describe the macroscopic dynamics. In this work, we introduce an equation learning framework to find accurate coarse-grained models when standard continuum limit approaches are inaccurate. We demonstrate our approach using a discrete mechanical model of epithelial tissues, considering a series of four case studies that illustrate how we can learn macroscopic equations describing mechanical relaxation, cell proliferation, and the equation governing the dynamics of the free boundary of the tissue. While our presentation focuses on this biological application, our approach is more broadly applicable across a range of scenarios where discrete models are approximated by approximate continuum-limit descriptions. All code and data to reproduce this work are available at [https://github.com/DanielVandH/StepwiseEQL.jl](https://github.com/DanielVandH/StepwiseEQL.jl).
## 1 Introduction
Mathematical models of population dynamics are often constructed by considering both discrete and continuous descriptions, allowing for both microscopic and macroscopic details to be considered [1]. This approach has been applied to several kinds of discrete models, including Potts models [2, 3, 4, 5], exclusion processes [6, 7, 8, 9], mechanical models of epithelial tissues [10, 11, 12, 13, 14, 15, 16, 17], and a variety of other types of individual-based models [18, 19, 20, 21, 22, 23]. Continuum models are useful for describing collective behaviour, especially because the computational requirement of discrete models increases with the size of the population, and this can become computationally prohibitive for large populations, which is particularly problematic for parameter inference [24]. In contrast, the computational requirement to solve a continuous model is independent of the population size, and generally requires less computational overhead than working with a discrete approach only [15]. Continuum models are typically obtained by coarse-graining the discrete model, using Taylor series expansions to obtain continuous partial differential equation (PDE) models that govern the population densities on a continuum or macroscopic scale [10, 25, 26, 11].
One challenge with using coarse-grained continuum limit models is that while the solution of these models can match averaged data from the corresponding discrete model for certain choices of parameters
[10, 17, 27], the solution of the continuous model can be a very poor approximation for other parameter choices [13, 15, 28, 29]. More generally, coarse-grained models are typically only valid in certain asymptotic parameter regimes [27, 29, 30]. For example, suppose we have a discrete space, discrete time, agent-based model that incorporates random motion and random proliferation. Random motion involves stepping a distance \(\Delta\) with probability \(P_{\mathrm{m}}\in[0,1]\) per time step of duration \(\tau\). The stochastic proliferation process involves undergoing proliferation with probability \(P_{\mathrm{p}}\in[0,1]\) per unit time step. The continuum limit description of this kind of discrete process can be written as [29]
\[\frac{\partial q}{\partial t}=\frac{\partial}{\partial x}\left(D(q)\frac{ \partial q}{\partial x}\right)+R(q), \tag{1}\]
where \(q\) is the macroscopic density of individuals, \(D(q)\) is the nonlinear diffusivity that describes the effects of individual migration, and \(R(q)\) is a source term that describes the effects of the birth process in the discrete model [29]. Standard approaches to derive (1) require \(D(q)=\mathcal{O}(P_{\mathrm{m}}\Delta^{2}/\tau)\) and \(R(q)=\mathcal{O}(P_{\mathrm{p}}/\tau)\) in the limit that \(\Delta\to 0\) and \(\tau\to 0\). To obtain a well-defined continuum limit such that the diffusion and source terms are both present in the macroscopic model, some restrictions on the parameters in the discrete model are required [29, 30]. Typically, this is achieved by taking the limit as \(\Delta\to 0\) and \(\tau\to 0\) jointly such that the ratio \(\Delta^{2}/\tau\) remains finite, implying that \(P_{\mathrm{p}}=\mathcal{O}(\tau)\) so that both the diffusion and source terms in (1) are \(\mathcal{O}(1)\). In practice, this means that the time scale of individual migration events has to be much faster than the time scale of individual proliferation events, otherwise the continuum limit description is not well defined [29, 30]. If this restriction is not enforced, then the solution of the continuum limit model does not always predict the averaged behaviour of the discrete model [29].
Regardless of whether choices of parameters in a discrete model obey the asymptotic restrictions imposed by coarse-graining, the discrete model still obeys a conservation principle, which implies that there is some alternative macroscopic conservation description that will describe population-level features of interest [31, 32]. Equation learning is a means of determining appropriate continuum models outside of the usual continuum limit asymptotic regimes. Equation learning has been used in several applications for model discovery. In the context of PDEs, a typical approach is to write \(\partial q/\partial t=\mathcal{N}(q,\mathcal{D},\boldsymbol{\theta})\), where \(q\) is the population density, \(\mathcal{N}\) is some nonlinear function parametrised by \(\boldsymbol{\theta}\), \(\mathcal{D}\) is a collection of differential operators, and \(\boldsymbol{\theta}\) are parameters to be estimated [33]. This formulation was first introduced by Rudy et al. [33], who extended previous work in learning ordinary differential equations (ODEs) proposed by Brunton et al. [34]. Equation learning methods developed for the purpose of learning biological models have also been a key interest [35, 36]. Lagergren et al. [37] introduce a biologically-informed neural network framework that uses equation learning guided by biological constraints, imposing a specific conservation PDE rather than a general nonlinear function \(\mathcal{N}\), to discover a model describing data from simple _in vitro_ experiments that described the invasion of populations of motile and proliferative prostate cancer cells. VandenHeuvel et al. [38] extend the work of Lagergren et al. [37], incorporating uncertainty quantification into the equation learning procedure through a bootstrapping approach. Nardini et al. [28] use discrete data from agent-based models to learn associated continuum ODE models, allowing for a user-provided library of functions together with sparse regression methods to be combined to give a parsimonious differential equation model describing population densities. Regression methods have also been used as an alternative to equation learning for this purpose [39].
These previous approaches to equation learning consider various methods to estimate the parameters \(\boldsymbol{\theta}\), such as sparse regression or nonlinear optimisation [33, 34, 35, 28, 37, 38], representing \(\mathcal{N}\) as a library of functions [33, 34, 35], neural networks [37], or in the form of a conservation law with individual components to be learned [37, 38]. In this work, we introduce a _stepwise equation learning_ framework, inspired from stepwise regression [40], for estimating \(\boldsymbol{\theta}\) from averaged discrete data with a given \(\mathcal{N}\) representing a proposed form for the continuum model description, incorporating or removing terms one at a time until a parsimonious continuum model is obtained whose solution fits the data well and no further improvements can be made. Our approach is advantageous for several reasons. Firstly, it is computationally efficient and parallelisable within a single iteration, taking only seconds to construct the problem and learn an accurate macroscopic model for most problems, allowing for exploration of results with different discrete parameters and different
forms of \(\mathcal{N}\) for a given data set. Secondly, the approach is modular, with different mechanistic features easily incorporated. This approach enables extensive computational experimentation by exploring the impact of including or excluding putative terms in the continuum model without any great increase in computational overhead. Lastly, it is easy to examine the results from our procedure, allowing for ease of diagnosing and correcting reasons for obtaining poorly fitting models, and explaining what components of the continuum model are the most influential.
To illustrate our procedure, we consider a discrete, individual-based model of epithelial tissue [10, 17]. Epithelial tissues are biological tissue composed of cells, organised in a monolayer, and are present in many parts of the body and interact with other cells [41], lining surfaces such as the skin and the intestine [42]. They are important in a variety of contexts, such as wound healing [43, 44] and cancer [45, 46]. Many models have been developed for studying their dynamics, considering both discrete and continuum modelling [10, 11, 12, 13, 14, 15, 16], with most models given in the form of a nonlinear reaction-diffusion equation with a moving boundary, using a nonlinear diffusivity term to incorporate mechanical relaxation and a source term to model cell proliferation [12, 13, 16]. These continuum limit models too are only accurate in certain parameter regimes, becoming inaccurate if the rate of mechanical relaxation is slow relative to the rate of proliferation [13, 15, 29]. To apply our stepwise equation learning procedure, we let the nonlinear function \(\mathcal{N}\) be given in the form of a conservation law together with equations describing the free boundary. We demonstrate this approach using a series of four biologically-motivated case studies, with each case study building on those before it. The first two case studies are used to demonstrate how our approach can learn known continuum limits, while the latter two case studies show how we can learn improved continuum limit models in parameter regimes where these known continuum limits are no longer accurate. We implement our approach in the Julia language [47], and all code is available on GitHub at [https://github.com/DanielVandH/StepwiseEQL.jl](https://github.com/DanielVandH/StepwiseEQL.jl).
## 2 Mathematical model
Following Murray et al. and Baker et al. [10, 16], we suppose that we have a set of nodes \(x_{1},\ldots,x_{n}(t)\) describing \(n\) cell boundaries at a time \(t\). The interval \((x_{i}(t),x_{i+1}(t))\) represents the \(i\)th cell for \(i=1,\ldots,n-1\), where we fix \(x_{1}=0\) and \(x_{1}<x_{2}(t)<\cdots<x_{n}(t)\). The number of nodes, \(n\), may increase over time due to cell proliferation. We model the mechanical interaction between cells by treating them as springs, as indicated in Figure 1, so that each node \(i\) experiences forces \(F_{i,i\pm 1}\) from nodes \(i\pm 1\), respectively, except at the boundaries where there is only one neighbouring force. We further assume that each of these springs has the same mechanical properties, and that the viscous force from the surrounding medium is given by \(\eta\mathrm{d}x_{i}(t)/\mathrm{d}t\) with drag coefficient \(\eta\). Lastly, assuming we are in a viscous medium so that the motion is overdamped, we can model the dynamics of each individual node \(x_{i}(t)\), fixing \(x_{1}=0\), by [16]
\[\eta\frac{\mathrm{d}x_{i}(t)}{\mathrm{d}t} =F_{i,i-1}+F_{i,i+1},\quad i=2,\ldots,n-1, \tag{2}\] \[\eta\frac{\mathrm{d}x_{n}(t)}{\mathrm{d}t} =F_{n,n-1}, \tag{3}\]
where
\[F_{i,i\pm 1}=F\left(\left|x_{i}(t)-x_{i\pm 1}(t)\right|\right)\frac{x_{i}(t)-x_{i \pm 1}(t)}{\left|x_{i}(t)-x_{i\pm 1}(t)\right|} \tag{4}\]
is the interaction force that the \(i\)th node experiences from nodes \(i\pm 1\) (Figure 1). In Case Studies 1 and 3 (see Section 3, below), we hold \(x_{n}(t)=L\) constant and discard (3). Throughout this work, we use linear Hookean springs so that \(F(\ell_{i})=k(s-\ell_{i})\), \(\ell_{i}>0\), where \(\ell_{i}(t)=x_{i+1}(t)-x_{i}(t)\) is the length of the \(i\)th cell, \(k>0\) is the spring constant, and \(s\geq 0\) is the resting spring length [10]; we discuss other force laws in Appendix D.
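As a concrete illustration of (2)–(4) with the Hookean force law, the following sketch integrates the node positions for a free right boundary using parameter values from the case studies below; it is an illustrative stand-in rather than the EpithelialDynamics1D.jl implementation used in this work.

```python
# Illustrative sketch of the overdamped spring model (2)-(4) with F(l) = k(s - l).
import numpy as np
from scipy.integrate import solve_ivp

k, s, eta = 50.0, 0.2, 1.0                  # spring constant, resting length, drag coefficient

def F(ell):
    return k * (s - ell)                    # Hookean force law

def rhs(t, x):
    ell = np.diff(x)                        # cell lengths l_i = x_{i+1} - x_i
    dxdt = np.zeros_like(x)
    # interior nodes: eta dx_i/dt = F(l_{i-1}) - F(l_i); the left boundary x_1 = 0 stays fixed
    dxdt[1:-1] = (F(ell[:-1]) - F(ell[1:])) / eta
    # free right boundary, equation (3): eta dx_n/dt = F(l_{n-1})
    dxdt[-1] = F(ell[-1]) / eta
    return dxdt

x0 = np.linspace(0.0, 5.0, 60)              # 60 equally spaced nodes in [0, 5]
sol = solve_ivp(rhs, (0.0, 100.0), x0, t_eval=np.linspace(0.0, 100.0, 201))
```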
Figure 1: Discrete model and schematics for the four case studies. (a) A fixed boundary problem with \(x_{1}=0\) and \(x_{n}=L\) fixed. (b) A free boundary problem with \(x_{1}=0\) and \(x_{n}(t)=L(t)\), shown in red, free. (c) Proliferation schematic, showing a cell \((x_{i}(t),x_{i+1}(t))\) dividing into \((x_{i}(t+\Delta t),x_{i+1}(t+\Delta t))\) and \((x_{i+1}(t+\Delta t),x_{i+2}(t+\Delta t))\) following a proliferation event, where \(x_{i+1}(t+\Delta t)=(x_{i}(t)+x_{i+1}(t))/2\). (d)–(g) show schematics for the four case studies considered in the paper, where the first row in each panel is a representation of the initial configuration of cells at \(t=0\) and the second row a representation at a later time \(t>0\).

The dynamics governed by (2)-(3) describe a system in which cells mechanically relax. Following previous work [10, 14, 16], we introduce a stochastic mechanism that allows the cells to undergo proliferation, assuming only one cell can divide at a time over a given interval \([t,t+\Delta t)\) for some small duration \(\Delta t\). We let the probability that the \(i\)th cell proliferates be given by \(G_{i}\Delta t\), where \(G_{i}=G(\ell_{i})\) for some length-dependent proliferation law \(G(\ell_{i})>0\). As represented in Figure 1(c), when the \(i\)th cell proliferates, the cell divides into two equally-sized daughter cells, and the boundary between the new daughter cells is placed at the midpoint of the original cell. Throughout this work, we use a logistic proliferation law \(G(\ell_{i})=\beta[1-1/(K\ell_{i})]\) with \(\ell_{i}>1/K\), where \(\beta\) is the intrinsic proliferation rate and \(K\) is the carrying capacity density; we consider other proliferation laws in Appendix D. The solution of equations (2)-(3) and the proliferation mechanism are implemented in the Julia package EpithelialDynamics1D.jl; in this implementation, if \(G(\ell_{i})<0\) we set \(G(\ell_{i})=0\) to be consistent with the fact that we interpret \(G(\ell_{i})\) as a probability. We emphasise that, without proliferation, we need only solve (2)-(3) once for a given initial condition in order to obtain the expected behaviour of the discrete model, because the discrete model is deterministic in the absence of proliferation. In contrast, incorporating proliferation means that we need to consider several identically-prepared realisations of the same stochastic discrete model to estimate the expected behaviour of the discrete model for a given initial condition.
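A single proliferation step can be sketched as follows. The event-selection scheme (one draw to decide whether any division occurs, then a choice of the dividing cell in proportion to \(G(\ell_{i})\Delta t\)) is an assumption made for brevity and need not match the scheme used in EpithelialDynamics1D.jl; it presumes \(\Delta t\) is small enough that the total division probability is below one.

```python
# Sketch of one proliferation step with the truncated logistic law G(l) = beta * (1 - 1/(K*l)).
import numpy as np

rng = np.random.default_rng(1)
beta, K, dt = 0.15, 15.0, 1e-2

def G(ell):
    return np.maximum(beta * (1.0 - 1.0 / (K * ell)), 0.0)   # set G = 0 when it would be negative

def proliferation_step(x):
    ell = np.diff(x)                           # current cell lengths
    probs = G(ell) * dt                        # per-cell division probabilities over [t, t + dt)
    total = probs.sum()
    if total > 0.0 and rng.uniform() < total:  # at most one division per step
        i = rng.choice(len(ell), p=probs / total)
        x = np.insert(x, i + 1, 0.5 * (x[i] + x[i + 1]))     # split cell i at its midpoint
    return x

x = np.linspace(0.0, 5.0, 60)
x = proliferation_step(x)
```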
In practice, macroscopic models of cell populations are described in terms of cell densities rather than keeping track of the position of individual cell boundaries. The density of the \(i\)th cell \((x_{i}(t),x_{i+1}(t))\) is \(1/\ell_{i}(t)\). For an interior node \(x_{i}(t)\), we obtain a density \(q_{i}(t)\) by taking the inverse of the average length of the cells left and right of \(x_{i}(t)\), giving
\[q_{i}(t)=\frac{2}{x_{i+1}(t)-x_{i-1}(t)},\quad i=2,\ldots,n-1, \tag{5}\]
as in Baker et al. [16]. At boundary nodes, we use
\[q_{1}(t)=\frac{2}{x_{2}(t)}-\frac{2}{x_{3}(t)},\quad q_{n}(t)=\frac{2}{x_{n}(t )-x_{n-1}(t)}-\frac{2}{x_{n}(t)-x_{n-2}(t)}, \tag{6}\]
derived by linear extrapolation of (5) to the boundary. The densities in (6) ensure that the slope of the density curves at the boundaries, \(\partial q/\partial x\), match those in the continuum limit. We discuss the derivation of (6) in Appendix A. In the continuum limit where the number of cells is large and mechanical relaxation is fast, the densities evolve according to the moving boundary problem [16, 10]
\[\begin{array}{rcll}\frac{\partial q}{\partial t}&=&\frac{ \partial}{\partial x}\left(D(q)\frac{\partial q}{\partial x}\right)+R(q)&0<x< L(t),\,t>0,\\ \frac{\partial q}{\partial x}&=&0&x=0,\,t>0,\\ \frac{\partial q}{\partial x}&=&H(q)&x=L(t),\,t>0,\\ q\frac{\mathrm{d}L}{\mathrm{d}t}&=&-D(q)\frac{\partial q}{\partial x}&x=L(t), \,t>0,\end{array} \tag{7}\]
where \(q(x,t)\) is the density at position \(x\) and time \(t\), \(D(q)=-1/(\eta q^{2})F^{\prime}(1/q)\), \(R(q)=qG(1/q)\), \(H(q)=-2qF(1/q)/[\eta D(q)]\), and \(L(t)=x_{n}(t)\) is the leading edge position with \(L(0)=x_{n}(0)\). The quantity \(1/q\) in these equations can be interpreted as a continuous function related to the length of the individual cells. The initial condition \(q(x,0)=q_{0}(x)\) is a linear interpolant of the discrete densities \(q_{i}(t)\) of the cells at \(t=0\). Similar to the discussion of (1), for this continuum limit to be valid so that both \(D(q)\) and \(R(q)\) play a role in the continuum model, constraints must be imposed on the discrete parameters. As discussed by Murphy et al. [15], we require that the time scale of mechanical relaxation is sufficiently fast relative to the time scale of proliferation. In practice this means that for a given choice of \(\beta\) we must have \(k/\eta\) sufficiently large for the solution of the continuum model to match averaged data from the discrete model. We note that, with our choices of \(F\) and \(G\), the functions in (7) are given by
\[D(q)=\frac{k}{\eta q^{2}},\quad R(q)=\beta q\left(1-\frac{q}{K}\right),\quad H (q)=2q^{2}(1-qs). \tag{8}\]
For fixed boundary problems we take \(H(q)=0\) and \(\mathrm{d}L/\mathrm{d}t=0\). In Appendix B, we describe how to solve (7) numerically, as well as how to solve the corresponding problem with fixed boundaries numerically.
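For orientation, a simple method-of-lines discretisation of the fixed-boundary version of (7)–(8) is sketched below; it is not the scheme described in Appendix B, the initial condition is purely illustrative, and the boundary fluxes are treated only to first order.

```python
# Method-of-lines sketch for dq/dt = d/dx( D(q) dq/dx ) + R(q) on 0 < x < 30 with no-flux ends,
# using D(q) = k/(eta q^2) and R(q) = beta q (1 - q/K) from (8).
import numpy as np
from scipy.integrate import solve_ivp

k, eta, beta, K = 50.0, 1.0, 0.15, 15.0
L, N = 30.0, 301
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]

D = lambda q: k / (eta * q**2)
R = lambda q: beta * q * (1.0 - q / K)

def rhs(t, q):
    flux = np.zeros(N + 1)                       # fluxes at interfaces; zero at both boundaries
    q_mid = 0.5 * (q[1:] + q[:-1])
    flux[1:-1] = -D(q_mid) * np.diff(q) / dx     # -D(q) dq/dx at interior interfaces
    return -np.diff(flux) / dx + R(q)

q0 = 3.0 + 2.0 * np.cos(2.0 * np.pi * x / L)     # illustrative initial density profile
sol = solve_ivp(rhs, (0.0, 50.0), q0, t_eval=np.linspace(0.0, 50.0, 51), method="LSODA")
```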
## 3 Continuum-discrete comparison
We now consider four biologically-motivated case studies to illustrate the performance of the continuum limit description (7). These case studies are represented schematically in Figure 1(d)--(g). Case Studies 1 and 3, shown in Figure 1(d) and Figure 1(f), are fixed boundary problems, where we see cells relax mechanically towards a steady state where each cell has equal length. Case Studies 2 and 4 are free boundary problems, where the right-most cell boundary moves in the positive \(x\)-direction while all cells relax towards a steady state where the length of each cell is given by resting spring length \(s\). Case Studies 1 and 2 have \(\beta=0\) so that there is no cell proliferation and the number of cells remains fixed during the simulations, whereas Case Studies 3 and 4 have \(\beta>0\) so that the number of cells increases during the discrete simulations. To explore these problems, we first consider cases where the continuum limit model is accurate, using the data shown in Figure 2, where we show space-time diagrams in Figure 2(a)--(d) and a set of averaged density profiles for each problem in Figure 2(e)--(h). Case Studies 1 and 3 initially place 30 nodes in \(0\leq x\leq 5\) and 30 nodes in \(25\leq x\leq 30\), or equivalently \(n=60\) with 28 cells in \(0\leq x\leq 5\) and 28 cells in \(25\leq x\leq 30\), spacing the nodes uniformly within each subinterval. Case Studies 2 and 4 initially place 60 equally spaced nodes in \(0\leq x\leq 5\).

Figure 2: Space-time diagrams and densities for the four example problems from Figure 1 considered throughout the paper. In (e)–(h), the solid curves are the discrete densities (5) and the dashed curves are solutions to the continuum limit problem (7), and curves are given by black, red, blue, green, orange, and purple in the order of increasing time. The times shown are (e) \(t=0,1,2,3,4,5\); (f) \(t=0,5,10,25,50,100\); (g) \(t=0,1,5,10,20,50\); and (h) \(t=0,5,10,20,50,100\). In each problem, the force law uses \(k=50\), \(s=1/5\), and \(\eta=1\), and where there is proliferation we use \(\Delta t=10^{-2}\), \(K=15\), and \(\beta=0.15\). The proliferation results show data averaged over 100 stochastic realisations. The arrows show the direction of increasing time.
The problems shown in Figure 2 use parameter values such that the solution of the continuum limit (7) is a good match to the averaged discrete density profiles. In particular, all problems use \(k=50\), \(\eta=1\), \(s=1/5\) and, for Case Studies 3 and 4, \(\Delta t=10^{-2}\), \(K=15\), and \(\beta=0.15\). The accuracy of the continuum limit is clearly evident in Figure 2(e)--(h) where, in each case, the solution of the continuum limit model is visually indistinguishable from averaged data from the discrete model. With proliferation, however, the continuum limit can be inaccurate when \(k/\eta\) is not too much larger than \(\beta\), and we use Case Studies 3 and 4 to explore this.
Figure 3 shows further continuum-discrete comparisons for Case Studies 3 and 4 in which the continuum limit solutions are no longer accurate, where we have slowed the mechanical relaxation by taking \(k=1/5\). In both cases, the solution of the continuum limit model lags behind the averaged data from the discrete model.
Figure 3: Examples of inaccurate continuum limits for (a) Case Study 3 and (b) Case Study 4, where both case studies use the same parameters as in Figure 2 except with \(k=1/5\) rather than \(k=50\). The solid curves are the discrete densities (5) and the dashed curves are solutions to the continuum limit problem (7). The arrows show the direction of increasing time. The density profiles are plotted in black, red, blue, green, orange, and purple for the respective times (a) \(t=0,1,10,25,40,75\) and (b) \(t=0,5,25,50,100,250\).
We are interested in developing an equation learning method for learning an improved continuum model for problems like those in Figure 3, allowing us to extend beyond the parameter regime where the continuum limit (7) is accurate. We demonstrate this in Case Studies 1-4 in Section 4 where we develop such a method.
## 4 Learning accurate continuum limit models
In this section we introduce our method for equation learning and demonstrate the method using the four case studies from Figures 1-3. Since the equation learning procedure is modular, adding these components into an existing problem is straightforward. All Julia code to reproduce these results is available at [https://github.com/DanielVandH/StepwiseEQL.jl](https://github.com/DanielVandH/StepwiseEQL.jl).
### Case Study 1
Case Study 1 involves mechanical relaxation only, so that \(R(q)=H(q)=0\) in (8), and the only function to learn is \(D(q)\).
Our equation learning approach starts by assuming that \(D(q)\) is a linear combination of \(d\)_basis coefficients_\(\{\theta_{1},\dots,\theta_{d}\}\) and \(d\)_basis functions_\(\{\varphi_{1},\dots,\varphi_{d}\}\), meaning \(D(q)\) can be represented as
\[D(q)=\sum_{i=1}^{d}\theta_{i}\varphi_{i}(q). \tag{9}\]
These basis functions could be any univariate function of \(q\), for example the basis could be \(\{\varphi_{1},\varphi_{2},\varphi_{3}\}=\{1/q,1/q^{2},1/q^{3}\}\) with \(d=3\). In this work, we impose the constraint that \(D(q)\geq 0\) for \(q_{\min}\leq q\leq q_{\max}\), where \(q_{\min}\) and \(q_{\max}\) are the minimum and maximum densities observed in the discrete simulations, respectively. This constraint enforces the condition that the nonlinear diffusivity function is positive over the density interval of interest. While it is possible to work with some choices of nonlinear diffusivity functions for which \(D(q)<0\) for some interval of density [48, 49, 50], we wish to avoid the possibility of having negative nonlinear diffusivity functions and our results support this approach.
The aim is to estimate \(\mathbf{\theta}=(\theta_{1},\dots,\theta_{d})^{\mathsf{T}}\) in (9). We use ideas similar to the basis function approach from VandenHeuvel et al. [38], using (9) to construct a matrix problem for \(\mathbf{\theta}\). In particular, let us take the PDE (7), with \(R(q)=0\) and \(H(q)=0\), and expand the spatial derivative term so that we can isolate the \(\theta_{k}\) terms,
\[\frac{\partial q_{ij}}{\partial t}=\sum_{k=1}^{d}\left\{\frac{\mathrm{d} \varphi_{k}(q_{ij})}{\mathrm{d}q}\left(\frac{\partial q_{ij}}{\partial x} \right)^{2}+\varphi_{k}(q_{ij})\frac{\partial^{2}q_{ij}}{\partial x^{2}} \right\}\theta_{k}, \tag{10}\]
where we let \(q_{ij}\) denote the discrete density at position \(x_{ij}=x_{i}(t_{j})\) and time \(t_{j}\). We note that while \(q_{ij}\) is discrete, we assume it can be approximated by a smooth function, allowing us to define these derivatives \(\partial q_{ij}/\partial t\), \(\partial q_{ij}/\partial x\), and \(\partial^{2}q_{ij}/\partial x^{2}\) in (10); this assumption is appropriate since, as shown in Figure 2, these discrete densities can be well approximated by smooth functions. These derivatives are estimated using finite differences, as described in Appendix C. We save the solution to the discrete problems (2)-(3) at \(M\) times \(0=t_{1}<t_{2}<\dots<t_{M}\) so that \(i\in\{1,\dots,n\}\) and \(j\in\{2,\dots,M\}\), where \(n=60\) is the number of nodes and we do not deal with data at \(j=1\) since the PDE does not apply at \(t=0\). We can therefore convert (10) into a rectangular matrix problem \(\mathbf{A}\mathbf{\theta}=\mathbf{b}\), where the \(r\)th row in \(\mathbf{A}\), \(r=1,\dots,n(M-1)\), corresponding to the point \((x_{ij},t_{j})\) is given by \(\mathbf{a}_{ij}\in\mathbb{R}^{1\times d}\), where
\[\mathbf{a}_{ij}=\left[\frac{\mathrm{d}\varphi_{1}(q_{ij})}{\mathrm{d}q}\left( \frac{\partial q_{ij}}{\partial x}\right)^{2}+\varphi_{1}(q_{ij})\frac{ \partial^{2}q_{ij}}{\partial x^{2}},\ \ \cdots,\ \ \frac{\mathrm{d}\varphi_{d}(q_{ij})}{ \mathrm{d}q}\left(\frac{\partial q_{ij}}{\partial x}\right)^{2}+\varphi_{d}(q_{ ij})\frac{\partial^{2}q_{ij}}{\partial x^{2}}\right], \tag{11}\]
with each element of \(\mathbf{a}_{ij}\) corresponding to the contribution of the associated basis function in (10). Thus,
we obtain the system
\[\mathbf{A}=\begin{bmatrix}\mathbf{a}_{12}\\ \mathbf{a}_{22}\\ \vdots\\ \mathbf{a}_{nM}\end{bmatrix}\in\mathbb{R}^{n(M-1)\times d}\quad\text{and}\quad \mathbf{b}=\begin{bmatrix}\partial q_{12}/\partial t\\ \partial q_{22}/\partial t\\ \vdots\\ \partial q_{nM}/\partial t\end{bmatrix}\in\mathbb{R}^{n(M-1)\times 1}. \tag{12}\]
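A minimal sketch of assembling this system is given below, assuming the densities and their space and time derivatives have already been estimated at the retained space-time points and flattened into one-dimensional arrays; the basis \(\{1/q,1/q^{2},1/q^{3}\}\) anticipates (16).

```python
# Sketch of assembling the least-squares system (11)-(12) for the diffusion-only problem.
import numpy as np

# Basis functions phi_i(q) = q^{-i} and their derivatives dphi_i/dq.
phis  = [lambda q: 1.0 / q, lambda q: 1.0 / q**2, lambda q: 1.0 / q**3]
dphis = [lambda q: -1.0 / q**2, lambda q: -2.0 / q**3, lambda q: -3.0 / q**4]

def build_system(q, qx, qxx, qt):
    """q, dq/dx, d2q/dx2 and dq/dt evaluated at each retained point (x_ij, t_j), as flat arrays."""
    A = np.column_stack([dphi(q) * qx**2 + phi(q) * qxx for phi, dphi in zip(phis, dphis)])
    return A, qt

# With all coefficients active, the least-squares estimate would be:
# theta, *_ = np.linalg.lstsq(A, b, rcond=None)
```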
The solution of \(\mathbf{A}\boldsymbol{\theta}=\mathbf{b}\), given by \(\boldsymbol{\theta}=(\mathbf{A}^{\mathsf{T}}\mathbf{A})^{-1}\mathbf{A}^{ \mathsf{T}}\mathbf{b}\), is obtained by minimising the residual \(\|\mathbf{A}\boldsymbol{\theta}-\mathbf{b}\|_{2}^{2}\), which keeps all terms present in the learned model. We expect, however, just as in (8), that \(\boldsymbol{\theta}\) is sparse so that \(D(q)\) has very few terms, which makes the interpretation of these terms feasible [33, 34]. There are several ways that we could solve \(\mathbf{A}\boldsymbol{\theta}=\mathbf{b}\) to obtain a sparse vector, such as with sparse regression [33], but in this work we take a _stepwise equation learning_ approach inspired by stepwise regression [40] as this helps with both the exposition and modularity of our approach. For this approach, we first let \(\mathcal{I}=\{1,\ldots,d\}\) be the set of basis function indices. We let \(\mathcal{A}_{k}\) denote the set of _active coefficients_ at the \(k\)th iteration, meaning the indices of non-zero values in \(\boldsymbol{\theta}\), starting with \(\mathcal{A}_{1}=\mathcal{I}\). The set of indices of zero values in \(\boldsymbol{\theta}\), \(\mathcal{I}_{k}=\mathcal{I}\setminus\mathcal{A}_{k}\), is called the set of _inactive coefficients_. To obtain the next set, \(\mathcal{A}_{k+1}\), from a current set \(\mathcal{A}_{k}\), we apply the following steps:
1. Let the vector \(\boldsymbol{\theta}_{\mathcal{A}}\) denote the solution to \(\mathbf{A}\boldsymbol{\theta}=\mathbf{b}\) subject to the constraint that each inactive coefficient \(\theta_{i}\) is zero, meaning \(\theta_{i}=0\) for \(i\in\mathcal{I}\setminus\mathcal{A}\) for a given set of active coefficients \(\mathcal{A}\). We compute \(\boldsymbol{\theta}_{\mathcal{A}}\) by solving the reduced problem in which the inactive columns of \(\mathbf{A}\) are discarded. The vector with \(\mathcal{A}=\mathcal{A}_{k}\) at step \(k\) is denoted \(\boldsymbol{\theta}_{k}\). With this definition, we compute the sets \[\mathcal{M}_{k}^{+}=\left\{\boldsymbol{\theta}_{\mathcal{A}_{k}\cup\{i\}}:i \notin\mathcal{A}_{k}\right\}\quad\text{and}\quad\mathcal{M}_{k}^{-}=\left\{ \boldsymbol{\theta}_{\mathcal{A}_{k}\setminus\{i\}}:i\in\mathcal{A}_{k}\right\}.\] (13) \(\mathcal{M}_{k}^{+}\) is the set of all coefficient vectors \(\boldsymbol{\theta}\) obtained by making each inactive coefficient at step \(k\) active, one at a time. \(\mathcal{M}_{k}^{-}\) is similar, except we make each active coefficient at step \(k\) inactive, one at a time. We then define \(\mathcal{M}_{k}=\{\boldsymbol{\theta}_{k}\}\cup\mathcal{M}_{k}^{+}\cup \mathcal{M}_{k}^{-}\), so that \(\mathcal{M}_{k}\) is the set of all coefficient vectors obtained by activating or deactivating coefficients one at a time, or retaining the current vector \(\boldsymbol{\theta}_{k}\).
2. Choose one of the vectors in \(\mathcal{M}_{k}\) by defining a loss function \(\mathcal{L}(\boldsymbol{\theta})\): \[\underbrace{\mathcal{L}(\boldsymbol{\theta})}_{\text{loss}}=\underbrace{\log \left[\frac{1}{n(M-1)}\sum_{j=2}^{M}\sum_{i=1}^{n}\left(\frac{q_{ij}-q(x_{ij},t _{j};\boldsymbol{\theta})}{q_{ij}}\right)^{2}\right]}_{\text{goodness of fit}}+ \underbrace{\|\boldsymbol{\theta}\|_{0}}_{\text{model complexity}},\] (14) where \(q(x,t;\boldsymbol{\theta})\) is the solution of the PDE (7) with \(R(q)=H(q)=0\) and \(D(q)\) uses the coefficients \(\boldsymbol{\theta}\) in (9), \(q(x_{ij},t_{j};\boldsymbol{\theta})\) is the linear interpolant of the PDE data at \(t=t_{j}\) evaluated at \(x=x_{ij}\), and \(\|\boldsymbol{\theta}\|_{0}\) is the number of non-zero terms in \(\boldsymbol{\theta}\). This loss function balances the goodness of fit with model complexity. If, for some \(\boldsymbol{\theta}\), \(D(q)<0\) within \(q_{\min}\leq q\leq q_{\max}\), which we check by evaluating \(D(q)\) at \(n_{c}=100\) equally spaced points in \(q_{\min}\leq q\leq q_{\max}\), we set \(\mathcal{L}(\boldsymbol{\theta})=\infty\). With this loss function, we compute the next coefficient vector \[\boldsymbol{\theta}_{k+1}=\operatorname*{argmin}_{\boldsymbol{\theta}\in \mathcal{M}_{k}}\mathcal{L}(\boldsymbol{\theta}).\] (15) If \(\boldsymbol{\theta}_{k+1}=\mathbf{0}\), so that there are no active coefficients, we instead take the vector that attains the second-smallest loss so that a model with no terms cannot be selected.
3. If \(\boldsymbol{\theta}_{k+1}=\boldsymbol{\theta}_{k}\), then there are no more local improvements to be made and so the procedure stops. Otherwise, we recompute \(\mathcal{A}_{k+1}\) and \(\mathcal{I}_{k+1}\) from \(\boldsymbol{\theta}_{k+1}\) and continue iterating.
The second step prevents empty models from being returned, allowing the algorithm to more easily find an optimal model when starting with no active coefficients. We note that Nardini et al. [28] consider a loss based on the regression error, \(\|\mathbf{A}\mathbf{\theta}-\mathbf{b}\|_{2}^{2}\), that has been useful for a range of previously-considered problems [33, 28, 34]. We do not consider the regression error in this work as we find that it typically leads to poorer estimates for \(\mathbf{\theta}\) compared to controlling the density errors as we do in (15).
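The selection loop itself can be sketched as follows. For brevity the sketch scores each candidate with the regression residual plus the model size, whereas the loss (14) used in this work instead requires solving the PDE for each candidate vector; the add/remove/stop logic mirrors steps 1–3 above.

```python
# Schematic stepwise selection; the loss here is a stand-in for (14).
import numpy as np

def solve_subset(A, b, active):
    """Least-squares solution with only the active columns of A allowed to be nonzero."""
    theta = np.zeros(A.shape[1])
    idx = sorted(active)
    if idx:
        theta[idx], *_ = np.linalg.lstsq(A[:, idx], b, rcond=None)
    return theta

def loss(A, b, theta):
    if np.count_nonzero(theta) == 0:
        return np.inf                      # never select a model with no terms
    return np.log(np.mean((A @ theta - b) ** 2)) + np.count_nonzero(theta)

def stepwise(A, b, active=None):
    d = A.shape[1]
    active = set(range(d)) if active is None else set(active)
    theta = solve_subset(A, b, active)
    while True:
        candidates = [(active, theta)]     # retaining the current vector is always a candidate
        for i in range(d):                 # toggle each coefficient on/off, one at a time
            trial = active - {i} if i in active else active | {i}
            candidates.append((trial, solve_subset(A, b, trial)))
        best_active, best_theta = min(candidates, key=lambda c: loss(A, b, c[1]))
        if best_active == active:          # no local improvement remains
            return theta
        active, theta = best_active, best_theta
```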
Let us now apply the procedure to our data from Figure 2, where we know that the continuum limit with \(D(q)=50/q^{2}\) is accurate. We use the basis functions \(\varphi_{i}=1/q^{i}\) for \(i=1,2,3\) so that
\[D(q)=\frac{\theta_{1}}{q}+\frac{\theta_{2}}{q^{2}}+\frac{\theta_{3}}{q^{3}}, \tag{16}\]
and we expect to learn \(\mathbf{\theta}=(0,50,0)^{\mathsf{T}}\). We save the solution to the discrete model at \(M=50\) equally spaced time points between \(t_{1}=0\) and \(t_{M}=5\). With this setup, and starting with all coefficients initially active so that \(\mathcal{A}_{1}=\{1,2,3\}\), we obtain the results in Table 1. The first iterate gives us \(\mathbf{\theta}_{1}\) such that \(D(q)<0\) for some range of \(q\) as we show in Figure 4(a), and so we assign \(\mathcal{L}(\mathbf{\theta}_{1})=\infty\). To get to the next step, we remove \(\theta_{1}\), \(\theta_{2}\), and \(\theta_{3}\) one at a time and compute the loss for each resulting vector, and we find that removing \(\theta_{3}\) leads to a vector that gives the least loss out of those considered. We thus find \(\mathcal{A}_{2}=\{1,2\}\) and \(\mathbf{\theta}_{2}=(-1.46,47.11,0)^{\mathsf{T}}\). Continuing, we find that out of the choice of removing \(\theta_{1}\) or \(\theta_{2}\), or putting \(\theta_{3}\) back into the model, removing \(\theta_{1}\) decreases the loss by the greatest amount, giving \(\mathcal{A}_{3}=\{2\}\). Finally, we find that there are no more improvements to be made, and so the algorithm stops at \(\mathbf{\theta}_{3}=(0,43.52,0)^{\mathsf{T}}\), which is close to the continuum limit. Comparing the densities from the solution of the learned PDE with \(\mathbf{\theta}=\mathbf{\theta}_{3}\) with the discrete densities in Figure 5(a), we see that the curves are nearly visually indistinguishable near the center, but there are some visually discernible discrepancies near the boundaries. We show the form of \(D(q)\) at each iteration in Figure 4(a), where we observe that the first iterate captures only the higher densities, the second iterate captures the complete range of densities, and the third iterate removes a single term which gives no noticeable difference.
\begin{table}
\begin{tabular}{|r|r r r|r|} \hline
**Step** & \(\theta_{1}\) & \(\theta_{2}\) & \(\theta_{3}\) & **Loss** \\ \hline
1 & -5.97 & 70.73 & **-27.06** & \(\infty\) \\
2 & **-1.46** & 47.11 & 0.00 & -4.33 \\
3 & 0.00 & 43.52 & 0.00 & -5.18 \\ \hline \end{tabular}
\end{table}
Table 1: Stepwise equation learning results for the density data for Case Study 1 using the basis expansion (16), saving the results at \(M=50\) equally spaced times between \(t_{1}=0\) and \(t_{M}=5\) and starting with all coefficients active, \(\mathcal{A}_{1}=\{1,2,3\}\). Coefficients highlighted in blue show the coefficient chosen to be removed or added at the corresponding step.
Figure 4: Progression of \(D(q)\) over each iterate for Case Study 1. (a) Progression from the results in Table 1 (dashed curves). (b) As in (a), except with the results from Table 2 using matrix pruning.
To improve our learned model we introduce _matrix pruning_, inspired from the data thresholding approach in VandenHeuvel et al. [38], to improve the estimates for \(\mathbf{\theta}\). Visual inspection of the space-time diagram in Figure 2(a) shows that the most significant density changes occur at early time and near locations where \(q\) changes in the initial condition, and a significant portion of the space-time diagram involves regions where \(q\) is almost constant. These regions where \(q\) has minimal change are problematic as points which lead to a higher residual are overshadowed, affecting the least squares problem and consequently degrading the estimates for \(\mathbf{\theta}\) significantly, and so it is useful to only include important points in the construction of \(\mathbf{A}\). To resolve this issue, we choose to only include points if the associated density falls between the \(10\%\) and \(90\%\) quantiles for the complete set of densities, which we refer to as _density quantiles_; more details on this pruning procedure are given in Appendix C. When we apply this pruning and reconstruct \(\mathbf{A}\), we obtain the improved results in Table 2 and associated densities in Figure 5(b). Compared with Table 1, we see that the coefficient estimates for \(\mathbf{\theta}\) all lead to improved losses, and our final model now has \(\mathbf{\theta}=(0,49.83,0)^{\mathsf{T}}\), which is much closer to the continuum limit, as we see in Figure 5(b) where the solution curves are now visually indistinguishable everywhere. Moreover, we show in Figure 4(b) how \(D(q)\) is updated at each iteration, where we see that the learned nonlinear diffusivity functions are barely different from the expected continuum limit result. These results demonstrate the importance of only including the most important points in \(\mathbf{A}\).
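A small sketch of this pruning step, applied to the rows of \(\mathbf{A}\) and \(\mathbf{b}\) before solving, is given below; the quantile levels are passed as arguments so that other choices, such as the \(35\%\) and \(65\%\) quantiles used later, can be explored.

```python
# Keep only rows whose associated density lies between the chosen density quantiles.
import numpy as np

def prune_by_density(A, b, q, lower=0.10, upper=0.90):
    lo, hi = np.quantile(q, [lower, upper])
    keep = (q >= lo) & (q <= hi)
    return A[keep], b[keep]
```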
### Case Study 2
Case Study 2 extends Case Study 1 by allowing the right-most cell boundary to move. We do not consider proliferation, giving \(R(q)=0\) in (8).
The equation learning procedure for this case study is similar to Case Study 1, namely we expand \(D(q)\) as in (9) and constrain \(D(q)\geq 0\). In addition to learning \(D(q)\), we need to learn \(H(q)\) and the evolution equation describing the free boundary. In (7), this evolution equation is given by a conservation statement, \(q\mathrm{d}L/\mathrm{d}t=-D(q)\partial q/\partial x\) with \(q=q(L(t),t)\). Here we treat this moving boundary condition more generally by introducing a function \(E(q)\) so that
\[q\frac{\mathrm{d}L}{\mathrm{d}t}=-E(q)\frac{\partial q}{\partial x} \tag{17}\]
at \(x=L(t)\) for \(t>0\). While (17) could lead to local loss of conservation at the moving boundary, our approach is to allow for the possibility that coefficients in \(D(q)\) and \(E(q)\) differ and to explore the extent to which this is true, or otherwise, according to our equation learning procedure. We constrain \(E(q)\geq 0\) so that (17) makes sense for our problem and we expand \(D(q)\), \(H(q)\), and \(E(q)\) as follows
\[D(q)=\sum_{i=1}^{d}\theta_{i}^{d}\varphi_{i}^{d}(q),\quad H(q)=\sum_{i=1}^{h} \theta_{i}^{h}\varphi_{i}^{h}(q),\quad E(q)=\sum_{i=1}^{e}\theta_{i}^{e} \varphi_{i}^{e}(q). \tag{18}\]
The matrix system for \(\mathbf{\theta}^{d}=(\theta_{1}^{d},\ldots,\theta_{d}^{d})^{\mathsf{T}}\) is the same as it was in Case Study 1 in (12), which we now write as \(\mathbf{A}^{d}\mathbf{\theta}^{d}=\mathbf{b}^{d}\) with \(\mathbf{A}^{d}\in\mathbb{R}^{n(M-1)\times d}\) and \(\mathbf{b}^{d}\in\mathbb{R}^{n(M-1)\times 1}\) given by \(\mathbf{A}\) and \(\mathbf{b}\) in (12), and we can construct two other independent matrix systems for \(\mathbf{\theta}^{h}=(\theta_{1}^{h},\ldots,\theta_{h}^{h})^{\mathsf{T}}\) and \(\mathbf{\theta}^{e}=(\theta_{1}^{e},\ldots,\theta_{e}^{e})^{\mathsf{T}}\). To construct these
\begin{table}
\begin{tabular}{|r|r r r|r|} \hline
**Step** & \(\theta_{1}\) & \(\theta_{2}\) & \(\theta_{3}\) & **Loss** \\ \hline
1 & **-1.45** & 42.48 & 13.76 & -4.19 \\
2 & 0.00 & 37.79 & **19.69** & -5.46 \\
3 & 0.00 & 49.83 & 0.00 & -7.97 \\ \hline \end{tabular}
\end{table}
Table 2: Improved results for Case Study 1 from Table 1, now using matrix pruning so that densities outside of the \(10\%\) and \(90\%\) density quantiles are not included. Coefficients highlighted in blue show the coefficient chosen to be removed or added at the corresponding step.
matrix systems, for a given boundary point \((x_{nj},t_{j})\) we write
\[\frac{\partial q_{nj}}{\partial x}=\sum_{k=1}^{h}\theta_{k}^{h}\varphi_{k}^{h}(q_{ nj}),\quad q_{nj}\frac{\mathrm{d}L_{j}}{\mathrm{d}t}=-\frac{\partial q_{nj}}{ \partial x}\sum_{k=1}^{e}\theta_{k}^{e}\varphi_{k}^{e}(q_{nj}), \tag{19}\]
where \(L_{j}=x_{nj}\) is the position of the leading edge at \(t=t_{j}\). In (19) we assume that \(L_{j}\) can be approximated by a smooth function so that \(\mathrm{d}L_{j}/\mathrm{d}t\) can be defined. With (19) we have \(\mathbf{A}^{h}\mathbf{\theta}^{h}=\mathbf{b}^{h}\) and \(\mathbf{A}^{e}\mathbf{\theta}^{e}=\mathbf{b}^{e}\), where
\[\mathbf{A}^{h}=\begin{bmatrix}\varphi_{1}^{h}(q_{n2})&\cdots&\varphi_{h}^{h}( q_{n2})\\ \vdots&\ddots&\vdots\\ \varphi_{1}^{h}(q_{nM})&\cdots&\varphi_{h}^{h}(q_{nM})\end{bmatrix},\quad \mathbf{b}^{h}=\begin{bmatrix}\frac{\partial q_{n2}}{\partial x}\\ \vdots\\ \frac{\partial q_{nM}}{\partial x}\end{bmatrix} \tag{20}\]
Figure 5: Stepwise equation learning results for Case Study 1. (a) Comparisons of the discrete density profiles (solid curves) with those learned from PDEs obtained from the results in Table 1 (dashed curves). (b) As in (a), except with the results from Table 2 using matrix pruning so that densities outside of the 10% and 90% density quantiles are not included. (c) Comparisons of the learned \(D(q)\) from Table 1 without pruning, Table 2 with pruning, and the continuum limit from (8). In (a)–(b), the arrows show the direction of increasing time, and the density profiles shown are at times \(t=0,1,2,3,4,5\) in black, red, blue, green, orange, and purple, respectively.
with \(\mathbf{A}^{h}\in\mathbb{R}^{(M-1)\times h}\) and \(\mathbf{b}^{h}\in\mathbb{R}^{(M-1)\times 1}\), and
\[\mathbf{A}^{e}=\begin{bmatrix}\varphi_{1}^{e}(q_{n2})\frac{\partial q_{n2}}{ \partial x}&\cdots&\varphi_{e}^{e}(q_{n2})\frac{\partial q_{n2}}{\partial x} \\ \vdots&\ddots&\vdots\\ \varphi_{1}^{e}(q_{nM})\frac{\partial q_{nM}}{\partial x}&\cdots&\varphi_{e} ^{e}(q_{nM})\frac{\partial q_{nM}}{\partial x}\end{bmatrix},\quad\mathbf{b}^{e }=-\begin{bmatrix}q_{n2}\frac{\mathrm{d}L_{2}}{\mathrm{d}t}\\ \vdots\\ q_{nM}\frac{\mathrm{d}L_{M}}{\mathrm{d}t}\end{bmatrix} \tag{21}\]
with \(\mathbf{A}^{e}\in\mathbb{R}^{(M-1)\times e}\) and \(\mathbf{b}^{e}\in\mathbb{R}^{(M-1)\times 1}\). Then, writing
\[\mathbf{A}=\mathrm{diag}(\mathbf{A}^{d},\mathbf{A}^{h},\mathbf{A}^{e})\in \mathbb{R}^{(n+2)(M-1)\times(d+h+e)},\quad\mathbf{b}=\begin{bmatrix}\mathbf{b }^{d}\\ \mathbf{b}^{h}\\ \mathbf{b}^{e}\end{bmatrix}\in\mathbb{R}^{(n+2)(M-1)\times 1}, \tag{22}\]
we obtain
\[\mathbf{A}\boldsymbol{\theta}=\mathbf{b},\quad\boldsymbol{\theta}=\begin{bmatrix} \boldsymbol{\theta}^{d}\\ \boldsymbol{\theta}^{h}\\ \boldsymbol{\theta}^{e}\end{bmatrix}\in\mathbb{R}^{(d+h+e)\times 1}. \tag{23}\]
The solution of \(\mathbf{A}\boldsymbol{\theta}=\mathbf{b}\) is the combined solution of the individual linear systems as \(\mathbf{A}\) is block diagonal. Estimates for \(\boldsymbol{\theta}^{d}\), \(\boldsymbol{\theta}^{h}\), and \(\boldsymbol{\theta}^{e}\) are independent, which demonstrates the modularity of our approach, where these additional features, in particular the leading edge, are just an extra independent component of our procedure in addition to the procedure for estimating \(D(q)\).
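A sketch of assembling (22)–(23) is given below; because \(\mathbf{A}\) is block diagonal, solving the combined least-squares problem is equivalent to solving the three subproblems independently.

```python
# Assemble the block-diagonal system (22)-(23) from the three independent subsystems.
import numpy as np
from scipy.linalg import block_diag

def assemble(Ad, bd, Ah, bh, Ae, be):
    A = block_diag(Ad, Ah, Ae)
    b = np.concatenate([bd, bh, be])
    return A, b
```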
In addition to the new matrix system \(\mathbf{A}\boldsymbol{\theta}=\mathbf{b}\) in (23), we augment the loss function (14) to incorporate information about the location of the moving boundary. Letting \(L(t;\boldsymbol{\theta})\) denote the leading edge from the solution of the PDE (7) with parameters \(\boldsymbol{\theta}\), the loss function is
\[\underbrace{\mathcal{L}(\boldsymbol{\theta})}_{\text{loss}} =\log\left[\frac{1}{n(M-1)}\sum_{j=2}^{M}\sum_{i=1}^{n}\left( \frac{q_{ij}-q\left(x_{ij},t_{j};\boldsymbol{\theta}\right)}{q_{ij}}\right)^{ 2}\right]\] \[+\underbrace{\log\left[\frac{1}{M-1}\sum_{j=2}^{M}\left(\frac{L_ {j}-L\left(t_{j};\boldsymbol{\theta}\right)}{L_{j}}\right)^{2}\right]}_{ \text{leading edge goodness of fit}}+\underbrace{\|\boldsymbol{\theta}\|_{0}}_{ \text{model complexity}}. \tag{24}\]
Let us now apply our stepwise equation learning procedure with (23) and (24). We consider the data from Figure 2, where we know in advance that the continuum limit with \(D(q)=50/q^{2}\), \(H(q)=2q^{2}-0.4q^{3}\), and \(E(q)=50/q^{2}\) is accurate. The expansions we use for \(D(q)\), \(H(q)\), and \(E(q)\) are given by
\[D(q) = \frac{\theta_{1}^{d}}{q}+\frac{\theta_{2}^{d}}{q^{2}}+\frac{ \theta_{3}^{d}}{q^{3}},\] \[H(q) = \theta_{1}^{h}q+\theta_{2}^{h}q^{2}+\theta_{3}^{h}q^{3}+\theta_ {4}^{h}q^{4}+\theta_{5}^{h}q^{5}, \tag{25}\] \[E(q) = \frac{\theta_{1}^{e}}{q}+\frac{\theta_{2}^{e}}{q^{2}}+\frac{ \theta_{3}^{e}}{q^{3}}.\]
With these expansions, we expect to learn \(\boldsymbol{\theta}^{d}=(0,50,0)^{\mathsf{T}}\), \(\boldsymbol{\theta}^{h}=(0,2,-0.4,0,0)^{\mathsf{T}}\), and \(\boldsymbol{\theta}^{e}=(0,50,0)^{\mathsf{T}}\). We initially consider saving the solution at \(M=1000\) equally spaced times between \(t_{1}=0\) and \(t_{M}=100\), and using matrix pruning so that only points whose densities fall within the \(35\%\) and \(65\%\) density quantiles are included. The results with this configuration are shown in Table 3, where we see that we are only able to learn \(H(q)=E(q)=0\) and \(D(q)=25.06/q^{3}\). This outcome highlights the importance of choosing an appropriate time interval, since Figure 2(b) indicates that mechanical relaxation takes place over a relative short interval which means that working with data in \(0<t\leq 100\) can lead to a poor outcome.
We proceed by restricting our data collection to \(0\leq t\leq 15\), now saving the solution at \(M=200\) equally spaced times between \(t_{1}=0\) and \(t_{M}=15\). Keeping the same quantiles for the matrix pruning, the new results are shown in Table 4 and Figure 6. We see that the densities and leading edges are accurate for small time, but the learned mechanisms do not extrapolate as well for \(t\geq 15\), for example \(L(t)\) in Figure 6(b) does not match the discrete data. To address this issue, we can further limit the information that we include in our matrices, looking to only include boundary points where \(\mathrm{d}L/\mathrm{d}t\) is neither too large nor too small. We implement this by excluding all points \((x_{nj},t_{j})\) from the construction of \((\mathbf{A}^{e},\mathbf{b}^{e})\) in (21) such that \(\mathrm{d}L_{j}/\mathrm{d}t\) is outside of the \(10\%\) or \(90\%\) quantiles of the vector \((\mathrm{d}L_{2}/\mathrm{d}t,\ldots,\mathrm{d}L_{M}/\mathrm{d}t)\), called the _velocity quantiles_.
Implementing thresholding on \(\mathrm{d}L/\mathrm{d}t\) leads to the results presented in Figure 7. We see that the learned densities and leading edges are both visually indistinguishable from the discrete data. Since \(H(q)\) and \(E(q)\) are only ever evaluated at \(x=L(t)\), and \(q(L(t),t)\approx 5\) for \(t>0\), we see that \(H(q)\) and \(E(q)\) only match the continuum limit at \(q\approx 5\), which means that our learned continuum limit model conserves mass and is consistent with the traditional coarse-grained continuum limit, as expected. We discuss in the Appendix D how we can enforce \(D(q)=E(q)\) to guarantee conservation mass from the outset, however our approach in Figure 7 is more general in the sense that our learned continuum limit without making any _a priori_ assumptions about the form of \(E(q)\).
### Case Study 3
Case Study 3 is identical to Case Study 1 except that we incorporate cell proliferation. This case is more complicated than with mechanical relaxation only, as we have to consider how we combine the repeated realisations to capture the average density data as well. For this work, we average over each realisation at each time using linear interpolants as described in Appendix C. This averaging procedure gives \(n_{k}\) points \(\bar{x}_{ij}\) between \(x=0\) and \(x=30\) at each time \(t_{j}\), \(j=1,\ldots,M\), with corresponding density value \(\bar{q}_{ij}\). The quantities \(\bar{x}_{ij}\) and \(\bar{q}_{ij}\) play the same role as \(x_{ij}\) and \(q_{ij}\) in the previous case studies.
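The averaging step can be sketched as follows: at a given saved time, each realisation's node densities are interpolated onto \(n_{k}\) common knots with a piecewise-linear interpolant and then averaged pointwise. The knot interval \([0,30]\) matches this case study; the function below is an illustration rather than the Appendix C implementation.

```python
# Average density profiles across realisations at a single saved time t_j.
import numpy as np

def average_realisations(positions, densities, n_knots=50, xmin=0.0, xmax=30.0):
    """positions[r] and densities[r] hold the node positions and node densities (5)-(6)
    of realisation r at the chosen time."""
    knots = np.linspace(xmin, xmax, n_knots)
    profiles = [np.interp(knots, x, q) for x, q in zip(positions, densities)]
    return knots, np.mean(profiles, axis=0)
```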
To apply equation learning we note there is no moving boundary, giving \(H(q)=0\) in (8). We proceed by expanding \(D(q)\) and \(R(q)\) as follows
\[D(q)=\sum_{i=1}^{d}\theta_{i}^{d}\varphi_{i}^{d}(q),\quad R(q)=\sum_{i=1}^{r} \theta_{i}^{r}\varphi_{i}^{r}(q), \tag{26}\]
\begin{table}
\begin{tabular}{|r|r r r|r r r r r r|r r r|} \hline
**Step** & \(\theta_{1}^{d}\) & \(\theta_{2}^{d}\) & \(\theta_{3}^{d}\) & \(\theta_{1}^{h}\) & \(\theta_{2}^{h}\) & \(\theta_{3}^{h}\) & \(\theta_{4}^{h}\) & \(\theta_{5}^{h}\) & \(\theta_{1}^{e}\) & \(\theta_{2}^{e}\) & \(\theta_{3}^{e}\) & **Loss** \\ \hline
1 & 0.00 & 0.00 & 0.00 & 0.00 & **0.00** & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & -3.37 \\
2 & 0.00 & 0.00 & 0.00 & 0.00 & -0.03 & 0.00 & 0.00 & 0.00 & **0.00** & 0.00 & 0.00 & -2.37 \\
3 & 0.00 & **0.00** & 0.00 & 0.00 & -0.03 & 0.00 & 0.00 & 0.00 & 8.74 & 0.00 & 0.00 & -3.68 \\
4 & 0.00 & 47.38 & 0.00 & **0.00** & -0.03 & 0.00 & 0.00 & 0.00 & 8.74 & 0.00 & 0.00 & -4.02 \\
5 & 0.00 & 47.38 & 0.00 & 8.41 & -1.69 & 0.00 & 0.00 & 0.00 & 8.74 & 0.00 & 0.00 & -8.14 \\ \hline \end{tabular}
\end{table}
Table 4: Stepwise equation learning results for Case Study 2, using the basis expansions (25), saving the results at \(M=200\) equally spaced times between \(t_{1}=0\) and \(t_{M}=15\), pruning so that densities outside of the \(35\%\) and \(65\%\) density quantiles are not included, and starting with all terms inactive. Coefficients highlighted in blue show the coefficient chosen to be removed or added at the corresponding step.
\begin{table}
\begin{tabular}{|r|r r r|r r r r r r|r|r|} \hline
**Step** & \(\theta_{1}^{d}\) & \(\theta_{2}^{d}\) & \(\theta_{3}^{d}\) & \(\theta_{1}^{h}\) & \(\theta_{2}^{h}\) & \(\theta_{3}^{h}\) & \(\theta_{4}^{h}\) & \(\theta_{5}^{h}\) & \(\theta_{1}^{e}\) & \(\theta_{2}^{e}\) & \(\theta_{3}^{e}\) & **Loss** \\ \hline
1 & 0.00 & 0.00 & **0.00** & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & -1.40 \\
2 & 0.00 & 0.00 & 25.06 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & -0.40 \\ \hline \end{tabular}
\end{table}
Table 3: Stepwise equation learning results for Case Study 2, using the basis expansions (25), saving the results at \(M=1000\) equally spaced times between \(t_{1}=0\) and \(t_{M}=100\), pruning so that densities outside of the \(35\%\) and \(65\%\) density quantiles are not included, and starting with all terms inactive. Coefficients highlighted in blue show the coefficient chosen to be removed or added at the corresponding step.
Figure 6: Stepwise equation learning results from Table 4 for Case Study 2. (a) Comparisons of the discrete density profiles (solid curves) with those learned from PDEs obtained from the results in Table 4 (dashed curves), plotted at the times \(t=0,5,10,25,50,100\) in black, red, blue, green, orange, and purple, respectively. The arrow shows the direction of increasing time. (b) As in (a), except comparing the leading edges. (c)–(e) are comparisons of the learned forms of \(D(q)\), \(H(q)\), and \(E(q)\) with the forms from the continuum limit (8).
Figure 7: Stepwise equation learning results from Table 4 for Case Study 2, except also using matrix pruning on \((\mathbf{A}_{3},\mathbf{b}_{3})\) so points where \(\mathrm{d}L_{j}/\mathrm{d}t\) falls outside of the \(10\%\) and \(90\%\) velocity quantiles are excluded, giving \(\theta_{1}^{e}=9.42\) rather than \(8.74\). (a) Comparisons of the discrete density profiles (solid curves) with those from the learned PDE (dashed curves), plotted at the times \(t=0,5,10,25,50,100\) in black, red, blue, green, orange, and purple, respectively. The arrow shows the direction of increasing time. (b) As in (a), except comparing the leading edges. (c)–(e) are comparisons of the learned forms of \(D(q)\), \(H(q)\), and \(E(q)\) with the forms from the continuum limit (8).
with the aim of estimating \(\mathbf{\theta}^{d}=(\theta_{1}^{d},\ldots,\theta_{d}^{d})^{\mathsf{T}}\) and \(\mathbf{\theta}^{r}=(\theta_{1}^{r},\ldots,\theta_{r}^{r})^{\mathsf{T}}\), again constraining \(D(q)\geq 0\). We expand the PDE from (10), as in Section 4(a), and the only difference is the additional term \(\sum_{m=1}^{r}\varphi_{m}^{r}(\bar{q}_{ij})\theta_{m}^{r}\) for each point \((\bar{x}_{ij},t_{j})\). Thus, we have the same matrix as in Section 4(a), denoted \(\mathbf{A}^{d}\in\mathbb{R}^{n_{k}(M-1)\times d}\), and a new matrix \(\mathbf{A}^{r}\in\mathbb{R}^{n_{k}(M-1)\times r}\) whose row corresponding to the point \((\bar{x}_{ij},t_{j})\) is given by
\[\mathbf{a}_{ij}^{r}=\begin{bmatrix}\varphi_{1}^{r}(\bar{q}_{ij})&\cdots& \varphi_{r}^{r}(\bar{q}_{ij})\end{bmatrix}\in\mathbb{R}^{1\times r}, \tag{27}\]
so that the coefficient matrix \(\mathbf{A}\) is now
\[\mathbf{A}=\begin{bmatrix}\mathbf{A}^{d}&\mathbf{A}^{r}\end{bmatrix}\in \mathbb{R}^{n_{k}(M-1)\times(d+r)}. \tag{28}\]
The corresponding entry for the point \((\bar{x}_{ij},t_{j})\) in \(\mathbf{b}\in\mathbb{R}^{n_{k}(M-1)\times 1}\) is \(\partial\bar{q}_{ij}/\partial t\). Notice that this additional term in the PDE adds an extra block to the matrix without requiring a significant coupling with the existing equations from the simpler problem without proliferation. Thus, we estimate our coefficient vectors using the system
\[\mathbf{A}\mathbf{\theta}=\mathbf{b},\quad\mathbf{\theta}=\begin{bmatrix}\mathbf{\theta}^ {d}\\ \mathbf{\theta}^{r}\end{bmatrix}\in\mathbb{R}^{(d+r)\times 1}. \tag{29}\]
We can take exactly the same stepwise procedure as in Section 4(a), except now the loss function (14) uses \(n_{k}\), \(\bar{q}_{ij}\), and \(\bar{x}_{ij}\) rather than \(n\), \(q_{ij}\), and \(x_{ij}\), respectively.
#### 4.3.1 Accurate continuum limit
Let us now apply these ideas to our data from Figure 2, where we know that the continuum limit with \(D(q)=50/q^{2}\) and \(R(q)=0.15q-0.01q^{2}\) is accurate. The expansions we use for \(D(q)\) and \(R(q)\) are given by
\[D(q)=\frac{\theta_{1}^{d}}{q}+\frac{\theta_{2}^{d}}{q^{2}}+\frac{\theta_{3}^{ d}}{q^{3}},\quad R(q)=\theta_{1}^{r}q+\theta_{2}^{r}q^{2}+\theta_{3}^{r}q^{3}+ \theta_{4}^{r}q^{4}+\theta_{5}^{r}q^{5}, \tag{30}\]
and we expect to learn \(\mathbf{\theta}^{d}=(0,50,0)^{\mathsf{T}}\) and \(\mathbf{\theta}^{r}=(0.15,-0.01,0,0,0)^{\mathsf{T}}\). We average over 1000 identically-prepared realisations, saving the solutions at \(M=501\) equally spaced times between \(t_{1}=0\) and \(t_{M}=50\) with \(n_{k}=50\) knots for averaging. We also use matrix pruning so that we only include points whose densities fall within the 10% and 90% density quantiles, as done in Section 4(a). The results we obtain are shown in Table 5, starting with all coefficients active.
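The matrix pruning by density quantiles mentioned above can be illustrated with the small helper below; the interface is an assumption for illustration, not the authors' implementation.

```python
import numpy as np

def prune_by_density(A, b, q, lo=0.10, hi=0.90):
    """Keep only the rows of (A, b) whose associated density q lies within the
    chosen density quantiles (here the 10% and 90% quantiles)."""
    q_lo, q_hi = np.quantile(q, [lo, hi])
    keep = (q >= q_lo) & (q <= q_hi)
    return A[keep], b[keep]
```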
Table 5 shows that we find \(\mathbf{\theta}^{d}=(0,52.97,0)^{\mathsf{T}}\) and \(\mathbf{\theta}^{r}=(0.15,-0.010,0,0,0)^{\mathsf{T}}\), which are both very close to the continuum limit. Figure 8 visualises these results, showing that the PDE solutions with the learned \(D(q)\) and \(R(q)\) match the discrete densities, and that the learned mechanisms are visually indistinguishable from the continuum limit functions (8), as shown in Figure 8(b)-(c).
\begin{table}
\begin{tabular}{|r|r r r|r r r r|r r|} \hline
**Step** & \(\theta_{1}^{d}\) & \(\theta_{2}^{d}\) & \(\theta_{3}^{d}\) & \(\theta_{1}^{r}\) & \(\theta_{2}^{r}\) & \(\theta_{3}^{r}\) (\(\times 10^{-4}\)) & \(\theta_{4}^{r}\) (\(\times 10^{-5}\)) & \(\theta_{5}^{r}\) (\(\times 10^{-7}\)) & **Loss** \\ \hline
1 & -11.66 & 147.43 & **-191.51** & 0.13 & -0.00 & -0.00 & 5.83 & \(-11.30\) & \(\infty\) \\
2 & -2.24 & 60.86 & 0.00 & 0.13 & -0.00 & **-5.72** & 2.62 & \(-3.49\) & -0.71 \\
3 & **-2.25** & 60.90 & 0.00 & 0.14 & -0.01 & 0.00 & \(-1.25\) & 5.95 & -1.92 \\
4 & 0.00 & 52.95 & 0.00 & 0.14 & -0.01 & 0.00 & **-1.36** & 6.49 & -3.35 \\
5 & 0.00 & 53.02 & 0.00 & 0.15 & -0.01 & 0.00 & 0.00 & **0.32** & -4.98 \\
6 & 0.00 & 52.97 & 0.00 & 0.15 & -0.01 & 0.00 & 0.00 & 0.00 & -5.70 \\ \hline \end{tabular}
\end{table}
Table 5: Stepwise equation learning results for Case Study 3 where the continuum limit is accurate, using the basis expansions (30), saving the results at \(M=501\) equally spaced times between \(t_{1}=0\) and \(t_{M}=50\), averaging across 1000 realisations with \(n_{k}=50\) knots, pruning so that densities outside of the 10% and 90% density quantiles are not included, and starting with all diffusion and reaction coefficients active. Coefficients highlighted in blue show the coefficient chosen to be removed or added at the corresponding step.
#### 4.3.2 Inaccurate continuum limit
We now extend the problem so that the continuum limit is no longer accurate, taking \(k=1/5\) to be consistent with Figure 3(a). Using the same basis expansions in (30), we save the solution at \(M=751\) equally spaced times between \(t_{1}=0\) and \(t_{M}=75\), averaging over 1000 realisations with \(n_{k}=200\). We find that we need to use the 25% and 75% density quantiles rather than the 10% and 90% density quantiles, as in the previous example, to obtain results in this case. With this configuration, the results we find are shown in Table 6 and Figure 9.
Results in Table 6 show \(\boldsymbol{\theta}^{d}=(0,0.12,0)^{\mathsf{T}}\), which is reasonably close to the continuum limit with \((0,0.2,0)^{\mathsf{T}}\). The reaction vector, for which the continuum limit is \((0.15,-0.01,0,0,0)^{\mathsf{T}}\) so that \(R(q)\) is a quadratic, is now given by \(\boldsymbol{\theta}^{r}=(0.16,-0.02,7.49\times 10^{-4},-1.69\times 10^{-5},0)^{ \mathsf{T}}\), meaning the learned \(R(q)\) is a quartic. Figure 9 compares the averaged discrete densities with the solution of the learned continuum limit model. Figure 9(c) compares the learned source term with the continuum limit. While both terms are visually indistinguishable at small densities, we see that the two source terms differ at high densities, with the learned carrying capacity density, where \(R(q)=0\), reduced relative to the continuum limit. This is consistent with previous results [15].
Figure 8: Stepwise equation learning results for Case Study 3, where the continuum limit is accurate. (a) Comparisons of the discrete density profiles (solid curves) with those learned from PDEs obtained from the results in Table 5 (dashed curves), plotted at the times \(t=0,1,5,10,20,50\) in black, red, blue, green, orange, and purple, respectively. The arrow shows the direction of increasing time. (b)–(c) are comparisons of \(D(q)\) and \(R(q)\) with the forms from the continuum limit (8).
### Case Study 4
Case Study 4 is identical to Case Study 2 except that we now introduce proliferation into the discrete model. First, as in Case Study 3 and as described in Appendix C, we average our data across each realisation from our discrete model. This averaging provides us with points \(\bar{x}_{ij}\) between \(x=0\) and \(x=\bar{L}_{j}\) at each time \(t_{j}\)
\begin{table}
\begin{tabular}{|r|r r r|r r r r r|r|} \hline
**Step** & \(\theta_{1}^{d}\) & \(\theta_{2}^{d}\) & \(\theta_{3}^{d}\) & \(\theta_{1}^{r}\) & \(\theta_{2}^{r}\) & \(\theta_{3}^{r}\) (\(\times 10^{-4}\)) & \(\theta_{4}^{r}\) (\(\times 10^{-5}\)) & \(\theta_{5}^{r}\) & **Loss** \\ \hline
1 & 0.00 & 0.00 & 0.00 & **0.00** & 0.00 & 0.00 & 0.00 & 0.00 & -0.33 \\
2 & 0.00 & 0.00 & 0.00 & 0.02 & **0.00** & 0.00 & 0.00 & 0.00 & 0.51 \\
3 & 0.00 & **0.00** & 0.00 & 0.11 & -0.01 & 0.00 & 0.00 & 0.00 & 0.20 \\
4 & 0.00 & 0.11 & 0.00 & 0.11 & -0.01 & **0.00** & 0.00 & 0.00 & -0.04 \\
5 & 0.00 & 0.12 & 0.00 & 0.13 & -0.01 & 1.59 & **0.00** & 0.00 & -0.46 \\
6 & 0.00 & 0.12 & 0.00 & 0.16 & -0.02 & 7.49 & \(-1.69\) & 0.00 & -1.13 \\ \hline \end{tabular}
\end{table}
Table 6: Stepwise equation learning results for Case Study 3, where the continuum limit is inaccurate, using the basis expansions (30), saving the results at \(M=751\) equally spaced times between \(t_{1}=0\) and \(t_{M}=75\), averaging across 1000 realisations with \(n_{k}=200\) knots, pruning so that densities outside of the 25% and 75% density quantiles are not included, and starting with all diffusion and reaction coefficients inactive. Coefficients highlighted in blue show the coefficient chosen to be removed or added at the corresponding step.
Figure 9: Stepwise equation learning results for Case Study 3 where the continuum limit is inaccurate. (a) Comparisons of the discrete density profiles (solid curves) with those learned from PDEs obtained from the results in Table 6 (dashed curves), plotted at the times \(t=0,1,10,25,40,75\) in black, red, blue, green, orange, and purple, respectively. The arrow shows the direction of increasing time. (b)–(c) are comparisons of \(D(q)\) and \(R(q)\) with the forms from the continuum limit (8).
\(j=1,\ldots,M\), where \(\bar{L}_{j}\) is the average leading edge at \(t=t_{j}\), with corresponding density values \(\bar{q}_{ij}\), where \(i=1,\ldots,n_{k}\) and \(n_{k}\) is the number of knots to use for averaging. We expand the functions \(D(q)\), \(R(q)\), \(H(q)\), and \(E(q)\) as
\[D(q)=\sum_{i=1}^{d}\theta_{i}^{d}\varphi_{i}^{d}(q),\ R(q)=\sum_{i=1}^{r} \theta_{i}^{r}\varphi_{i}^{r}(q),\ H(q)=\sum_{i=1}^{h}\theta_{i}^{h}\varphi_{i }^{h}(q),\ E(q)=\sum_{i=1}^{e}\theta_{i}^{e}\varphi_{i}^{e}(q), \tag{31}\]
again restricting \(D(q),E(q)\geq 0\). The function \(E(q)\) is used in the moving boundary condition in (7), as in (17). The matrix \(\mathbf{A}\) and vector \(\mathbf{b}\) are given by
\[\mathbf{A}=\mathrm{diag}(\mathbf{A}^{dr},\mathbf{A}^{h},\mathbf{A}^{e})\in \mathbb{R}^{n_{k}(M-1)\times(d+r+h+e)},\quad\mathbf{b}=\begin{bmatrix}\mathbf{ b}^{dr}\\ \mathbf{b}^{h}\\ \mathbf{b}^{e}\end{bmatrix}\in\mathbb{R}^{n_{k}(M-1)}, \tag{32}\]
where \(\mathbf{A}^{dr}=[\mathbf{A}^{d}\ \mathbf{A}^{r}]\) as defined in (28), \(\mathbf{A}^{h}\) and \(\mathbf{A}^{e}\) are the matrices from (20) and (21), respectively, and similarly for \(\mathbf{b}^{dr}=\partial\mathbf{q}/\partial t\), \(\mathbf{b}^{h}\), and \(\mathbf{b}^{e}\) from (12), (20), and (21), respectively. Thus,
\[\mathbf{A}\boldsymbol{\theta}=\mathbf{b},\quad\boldsymbol{\theta}=\begin{bmatrix} \boldsymbol{\theta}^{d}\\ \boldsymbol{\theta}^{r}\\ \boldsymbol{\theta}^{h}\\ \boldsymbol{\theta}^{e}\end{bmatrix}\in\mathbb{R}^{(d+r+h+e)\times 1}. \tag{33}\]
Similar to Case Study 2, the coefficients for each mechanism are independent, except for \(\boldsymbol{\theta}^{d}\) and \(\boldsymbol{\theta}^{r}\). The loss function we use is the loss function from (24).
For this problem, it is difficult to learn all mechanisms simultaneously, especially as mechanical relaxation and proliferation occur on different time scales: mechanical relaxation dominates the early part of the simulation, whereas both proliferation and mechanical relaxation play a role at later times. This means that \(D(q)\) and \(R(q)\) cannot be estimated over the entire time range as was done in Case Study 3. To address this, we take a sequential learning approach, learning the four mechanisms over four distinct time intervals \(I^{d}\), \(I^{e}\), \(I^{h}\), and \(I^{r}\):
1. Fix \(R(q)=H(q)=E(q)=0\) and learn \(\boldsymbol{\theta}^{d}\) over \(t\in I^{d}\), solving \(\mathbf{A}^{d}\boldsymbol{\theta}^{d}=\mathbf{b}^{dr}\).
2. Fix \(R(q)=H(q)=0\) and \(\boldsymbol{\theta}^{d}\) and learn \(\boldsymbol{\theta}^{e}\) over \(t\in I^{e}\), solving \(\mathbf{A}^{e}\boldsymbol{\theta}^{e}=\mathbf{b}^{e}\).
3. Fix \(R(q)=0\), \(\boldsymbol{\theta}^{d}\), and \(\boldsymbol{\theta}^{e}\) and learn \(\boldsymbol{\theta}^{h}\) over \(t\in I^{h}\), solving \(\mathbf{A}^{h}\boldsymbol{\theta}^{h}=\mathbf{b}^{h}\).
4. Fix \(\boldsymbol{\theta}^{d}\), \(\boldsymbol{\theta}^{e}\), and \(\boldsymbol{\theta}^{h}\) and learn \(\boldsymbol{\theta}^{r}\) over \(t\in I^{r}\), solving \(\mathbf{A}^{r}\boldsymbol{\theta}^{r}=\mathbf{b}^{dr}-\mathbf{A}^{d} \boldsymbol{\theta}^{d}\).
In these steps, solving the system \(\mathbf{A}\boldsymbol{\theta}=\mathbf{b}\) means to apply our stepwise procedure to this system; for these problems, we start each procedure with no active coefficients. The modularity of our approach makes this sequential learning approach straightforward to implement. For these steps, the interval \(I^{d}\) must be over sufficiently small times so that proliferation does not dominate, noting that fixing \(R(q)=0\) will not allow us to identify any proliferation effects when estimating the parameters. This is less relevant for \(I^{h}\) and \(I^{e}\) as the estimates of \(\boldsymbol{\theta}^{h}\) and \(\boldsymbol{\theta}^{e}\) impact the moving boundary only.
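A hypothetical outline of this sequential procedure is sketched below; the `blocks` and `stepwise` callables stand in for the block assembly of (32) restricted to a time interval and for the stepwise selection procedure, respectively, and are assumed to be supplied by the user.

```python
def learn_case_study_4(blocks, stepwise, I_d, I_e, I_h, I_r):
    """blocks[name](interval) -> (A, b): assemble one block of (32) over a time interval.
    stepwise(A, b) -> theta: run the stepwise procedure, started with no active terms."""
    A_d, b_dr = blocks["diffusion"](I_d)      # step 1: R = H = E = 0, learn theta_d
    theta_d = stepwise(A_d, b_dr)
    A_e, b_e = blocks["leading_edge"](I_e)    # step 2: R = H = 0, theta_d fixed, learn theta_e
    theta_e = stepwise(A_e, b_e)
    A_h, b_h = blocks["boundary"](I_h)        # step 3: R = 0, theta_d and theta_e fixed, learn theta_h
    theta_h = stepwise(A_h, b_h)
    A_d2, b_dr2 = blocks["diffusion"](I_r)    # step 4: learn theta_r from the residual
    A_r, _ = blocks["reaction"](I_r)
    theta_r = stepwise(A_r, b_dr2 - A_d2 @ theta_d)
    return theta_d, theta_e, theta_h, theta_r
```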
### Accurate continuum limit
We apply this procedure to data from Figure 2, where the continuum limit is accurate with \(D(q)=50/q^{2}\), \(R(q)=0.15q-0.01q^{2}\), \(H(q)=2q^{2}-0.4q^{3}\), and \(E(q)=50/q^{2}\). The expansions we use are
\[\begin{array}{rcl}D(q)&=&\frac{\theta_{1}^{d}}{q}+\frac{\theta_{2}^{d}}{q^{2}}+\frac{\theta_{3}^{d}}{q^{3}},\\ R(q)&=&\theta_{1}^{r}q+\theta_{2}^{r}q^{2}+\theta_{3}^{r}q^{3}+\theta_{4}^{r}q^{4}+\theta_{5}^{r}q^{5},\\ H(q)&=&\theta_{1}^{h}q+\theta_{2}^{h}q^{2}+\theta_{3}^{h}q^{3}+\theta_{4}^{h}q^{4}+\theta_{5}^{h}q^{5},\\ E(q)&=&\frac{\theta_{1}^{e}}{q}+\frac{\theta_{2}^{e}}{q^{2}}+\frac{\theta_{3}^{e}}{q^{3}}.\end{array} \tag{34}\]
With (34), we expect to learn \(\mathbf{\theta}^{d}=(0,50,0)^{\mathsf{T}}\), \(\mathbf{\theta}^{r}=(0.15,-0.01,0,0,0)^{\mathsf{T}}\), \(\mathbf{\theta}^{h}=(0,2,-0.4,0,0)^{\mathsf{T}}\), and \(\mathbf{\theta}^{e}=(0,50,0)^{\mathsf{T}}\). We average the data over 1000 realisations. For saving the solution, the time intervals we use are \(I^{d}=[0,0.1]\), \(I^{e}=[0,5]\), \(I^{h}=[5,10]\), and \(I^{r}=[10,50]\), with 25, 50, 100, and 250 time points inside each time interval for saving. For interpolating the solution to obtain the averages, we use \(n_{k}=25\), \(n_{k}=50\), \(n_{k}=100\), and \(n_{k}=50\) over \(I^{d}\), \(I^{e}\), \(I^{h}\), and \(I^{r}\), respectively.
To now learn the mechanisms, we apply the sequential procedure described for learning them one at a time. For each problem, we apply pruning so that points outside of the 10% and 90% density quantiles or the 20% and 80% velocity quantiles are not included. We find that \(\mathbf{\theta}^{d}=(0,49.60,0)^{\mathsf{T}}\), \(\mathbf{\theta}^{e}=(0,49.70,0)^{\mathsf{T}}\), \(\mathbf{\theta}^{h}=(-0.0084,0,0,-0.0011,0)^{\mathsf{T}}\), and \(\mathbf{\theta}^{r}=(0.15,-0.010,0,0,0)^{\mathsf{T}}\). The results with all these learned mechanisms are shown in Figure 10. We see from the comparisons in Figure 10(a)-(b) that the PDE results from the learned mechanisms are nearly indistinguishable from the discrete densities. Similar to Case Study 2, \(H(q)\) only matches the continuum limit at \(q(L(t),t)\). Note also that the solutions in Figure 10(a) go up to \(t=100\), despite the stepwise procedure considering only times up to \(t=50\).
Figure 10: Stepwise equation learning results for Case Study 4 when the continuum limit is accurate, using the learned mechanisms with \(\mathbf{\theta}^{d}=(0,49.60,0)^{\mathsf{T}}\), \(\mathbf{\theta}^{e}=(0,49.70,0)^{\mathsf{T}}\), \(\mathbf{\theta}^{h}=(-0.0084,0,0,-0.0011,0)^{\mathsf{T}}\), and \(\mathbf{\theta}^{r}=(0.15,-0.010,0,0,0)^{\mathsf{T}}\). (a) Comparisons of the discrete density profiles (solid curves) with those learned from PDEs with the given \(\mathbf{\theta}^{d}\), \(\mathbf{\theta}^{e}\), \(\mathbf{\theta}^{h}\), and \(\mathbf{\theta}^{r}\) (dashed curves), plotted at the times \(t=0,5,10,25,50,100\) in black, red, blue, green, orange, and purple, respectively. The arrow shows the direction of increasing time. (b) As in (a), except comparing the leading edges. (c)–(f) are comparisons of the learned forms of \(D(q)\), \(R(q)\), \(H(q)\), and \(E(q)\) with the forms from the continuum limit (8).
### Inaccurate continuum limit
We now consider data from Figure 3(b) where the continuum limit is inaccurate. Here, \(k=1/5\) and the continuum limit vectors are \(\mathbf{\theta}^{d}=(0,0.2,0)^{\mathsf{T}}\), \(\mathbf{\theta}^{r}=(0.15,-0.01,0,0,0)^{\mathsf{T}}\), \(\mathbf{\theta}^{h}=(0,2,-0.4,0,0)^{\mathsf{T}}\), and \(\mathbf{\theta}^{e}=(0,0.2,0)^{\mathsf{T}}\). Using the same procedures and expansions as in Figure 10, we average the data over \(1000\) realisations. The time intervals we use are \(I^{d}=[0,2]\), \(I^{e}=[2,10]\), \(I^{h}=[10,20]\), and \(I^{r}=[20,50]\), using \(20\) time points for \(I^{d}\) and \(200\) time points for \(I^{e}\), \(I^{h}\), and \(I^{r}\). We use \(n_{k}=50\) knots for averaging the solution over \(I^{d}\), and \(n_{k}=100\) knots for averaging the solution over \(I^{e}\), \(I^{h}\), and \(I^{r}\).
To apply the equation learning procedure we prune all matrices so that points outside of the \(40\%\) and \(60\%\) temporal quantiles are eliminated, where the _temporal quantiles_ are the quantiles of \(\partial q/\partial t\) from the averaged discrete data, and similarly for points outside of the \(40\%\) and \(60\%\) velocity quantiles. We find \(\mathbf{\theta}^{d}=(0,0.21,0)^{\mathsf{T}}\), \(\mathbf{\theta}^{e}=(0,0.23,0)^{\mathsf{T}}\), \(\mathbf{\theta}^{h}=(-0.15,0,0,-0.0079,0)^{\mathsf{T}}\), and \(\mathbf{\theta}^{r}=(0.11,-0.0067,0,0,0)^{\mathsf{T}}\). Interestingly, here we learn \(R(q)\) is quadratic with coefficients that differ from the continuum limit. The results with all these learned mechanisms are shown in Figure 11. We see from the comparisons in Figure 11 that the PDE results from the learned mechanisms are visually indistinguishable from the discrete densities. Moreover, as in Figure 10, the learned \(H(q)\) and \(E(q)\) match the continuum results at \(q(L(t),t)\) which confirms that the learned continuum limit conserves mass, as expected. Note also that the solutions in Figure 11(a) go up to \(t=250\), despite the stepwise procedure considering only times up to \(t=50\), demonstrating the extrapolation power of our method.
## 5 Conclusion and discussion
In this work, we presented a stepwise equation learning framework for learning continuum descriptions of discrete models describing population biology phenomena. Our approach provides accurate continuum approximations when standard coarse-grained approximations are inaccurate. The framework is simple to implement, efficient, and modular, allowing for additional components to be added into a model with minimal changes required to accommodate them into an existing procedure. In contrast to other approaches, like neural networks [37] or linear regression approaches [39], results from our procedure are interpretable in terms of the underlying discrete process. The coefficients incorporated or removed at each stage of our procedure give a sense of the influence each model term contributes to the model, giving a greater interpretation of the results, highlighting an advantage of the stepwise approach over traditional sparse regression methods [28, 33, 34]. We demonstrated our approach using a series of four biologically-motivated case studies that incrementally build on each other, studying a discrete individual-based mechanical free boundary model of epithelial cells [10, 11, 15, 16]. In the first two case studies, we demonstrated that we can easily rediscover the continuum limit models derived by Baker et al. [16], including the equations describing the evolution of the free boundary. The last two case studies demonstrate that, when the coarse-grained models are inaccurate, our approach can learn an accurate continuum approximation. The last case study was the most complicated, with four mechanisms needing to be learned, but the modularity of our approach made it simple to apply a sequential procedure to learning the mechanisms, applying the procedure to each mechanism in sequence. Our procedure was able to recover terms that conserved mass, despite not enforcing conservation of mass explicitly. The procedure as we have described does have some limitations, such as assuming that the mechanisms are linear combinations of basis functions, which could be handled more generally by instead using nonlinear least squares [38]. The procedure may also be sensitive to the quality of the data points included in the matrices, and thus to the pruning parameters. In E, we discuss a parameter sensitivity study that investigates this in greater detail, revealing that the choice of pruning parameters is crucial for obtaining accurate continuum approximations.
There are many avenues for future work based on our approach. Firstly, two-dimensional extensions of our discrete model could be considered [51, 52], which would follow the same approach except the continuum problems would have to be solved using a more detailed numerical approximation [53, 54, 55]. Another avenue for exploration would be to consider applying the discrete model on a curved interface which is more realistic than considering an epithelial sheet on a flat substrate [56, 57]. Working with heterogeneous populations
of cells, where parameters in the discrete model can vary between individuals in the population, is also another interesting option for future exploration [14]. Uncertainty quantification could also be considered using bootstrapping [38] or Bayesian inference [58]. Allowing for uncertainty quantification would also allow for noisy data sets to be modelled, unlike the idealised, noise-free data used in this work. We emphasise that, regardless of the approach taken for future work, we believe that our flexible stepwise learning framework can form the basis of these potential future studies.
|
2306.16245 | Successive Cancellation Automorphism List Decoding of Polar Codes | The discovery of suitable automorphisms of polar codes gained a lot of
attention by applying them in Automorphism Ensemble Decoding (AED) to improve
the error-correction performance, especially for short block lengths. This
paper introduces Successive Cancellation Automorphism List (SCAL) decoding of
polar codes as a novel application of automorphisms in advanced Successive
Cancellation List (SCL) decoding. Initialized with L permutations sampled from
the automorphism group, a superposition of different noise realizations and
path splitting takes place inside the decoder. In this way, the SCAL decoder
automatically adapts to the channel conditions and outperforms the
error-correction performance of conventional SCL decoding and AED. For a polar
code of length 128, SCAL performs near Maximum Likelihood (ML) decoding with
L=8, in contrast to M=16 needed decoder cores in AED. Application-Specific
Integrated Circuit (ASIC) implementations in a 12 nm technology show that
high-throughput, pipelined SCAL decoders outperform AED in terms of energy
efficiency and power density, and SCL decoders additionally in area efficiency. | Lucas Johannsen, Claus Kestel, Marvin Geiselhart, Timo Vogt, Stephan ten Brink, Norbert Wehn | 2023-06-28T14:14:48Z | http://arxiv.org/abs/2306.16245v1 | # Successive Cancellation Automorphism List Decoding of Polar Codes
###### Abstract
The discovery of suitable automorphisms of polar codes gained a lot of attention by applying them in Automorphism Ensemble Decoding (AED) to improve the error-correction performance, especially for short block lengths. This paper introduces Successive Cancellation Automorphism List (SCAL) decoding of polar codes as a novel application of automorphisms in advanced Successive Cancellation List (SCL) decoding. Initialized with L permutations sampled from the automorphism group, a superposition of different noise realizations and path splitting takes place inside the decoder. In this way, the SCAL decoder automatically adapts to the channel conditions and outperforms the error-correction performance of conventional SCL decoding and AED. For a polar code of length 128, SCAL performs near Maximum Likelihood (ML) decoding with L=8, in contrast to the M=16 decoder cores needed in AED. Application-Specific Integrated Circuit (ASIC) implementations in a 12 nm technology show that high-throughput, pipelined SCAL decoders outperform AED in terms of energy efficiency and power density, and SCL decoders additionally in area efficiency.
Polar Code, Automorphisms, Successive Cancellation List Decoding, 12 nm FinFET, ASIC Implementation
## I Introduction
In the enhanced mobile broadband (eMBB) scenarios of the 5G New Radio (NR) standard, polar codes [1] were selected as error-correction codes for the control channels [2]. Polar codes achieve the capacity of binary memoryless channels under low-complexity Successive Cancellation (SC) decoding at infinite code length. However, SC decoding suffers from poor error-correction performance in the practical block length regime. More advanced decoding algorithms, with Successive Cancellation List (SCL) decoding [3] being the most prominent one, were developed to overcome this limitation at the price of higher decoding complexity and implementation costs. However, SCL decoding approaches Maximum Likelihood (ML) decoding performance with sufficiently large list size \(L\).
Recently, Automorphism Ensemble Decoding (AED) [4] for polar codes [5, 6, 7] gained attention as a new ML-approaching algorithm. In AED, an ensemble of \(M\) low-complexity decoders (e. g., SC) operates on different permutations from the code's automorphism group in parallel. The most probable code word is selected as output from the \(M\) constituent decoders, which improves the overall error-correction performance.
In [8], near-ML performance was achieved for Reed-Muller (RM) codes by running SCL decoders on permutations of the channel Log-Likelihood Ratios (LLRs) corresponding to stage-shuffled factor graphs and combining the different lists in each decoding step. This approach was generalized in [9] by initializing the \(L\) decoding paths by permutations randomly sampled from the full automorphism group of the RM code.
The automorphism group of polar codes and its properties were investigated in [4, 5, 6, 7]. Through the discovery of suitable automorphisms for polar codes, more advanced polar decoding algorithms, e. g., SCL decoding, can also be leveraged. The new contributions of this work are summarized as follows:
* We propose Successive Cancellation Automorphism List (SCAL) decoding of polar codes as a novel method to beneficially utilize permutations selected from the code's automorphism group in the SCL decoding algorithm.
* We analyze the evolution of these permutations during SCAL decoding and provide simulation results to show the capability of channel adaption and the improved error-correction performance, respectively.
* We present implementation results of the proposed SCAL decoder architecture in a 12 nm FinFET technology and compare them with state-of-the-art, high-throughput AED and SCL decoders.
The remainder of this paper is structured in four sections: Section II describes the fundamentals of polar codes and the relevant decoding strategies. The new SCAL decoding algorithm is presented in Section III. Results with respect to an analysis of the evolution of the input permutations, the error-correction performance of SCAL decoding and its hardware implementation costs are provided in Section IV. Section V concludes this work.
## II Background
### _Polar Codes_
A polar code \(\mathcal{P}(N,K)\)[1] is a linear block code with code length \(N=2^{n}\) and \(K\) information bits. The information set \(\mathcal{I}\) defines the row indices to obtain the generator matrix \(\mathbf{G}\) as the rows of \(\mathbf{G}_{N}=\mathbf{G}_{2}^{\otimes n}\), where \(\mathbf{G}_{2}=\left[\begin{smallmatrix}1&0\\ 1&1\end{smallmatrix}\right]\) denotes
the polarization kernel and \(\otimes n\) denotes the \(n\)-th Kronecker power. The codeword can be obtained from an input vector \(\mathbf{u}\) as \(\mathbf{x}=\mathbf{u}\mathbf{G}_{N}\), where \(\mathbf{u}\) contains \(K\) information bits at the positions of \(\mathcal{I}\) and \(N-K\) frozen bits (set to \(0\)) at the positions of \(\mathcal{F}=\mathcal{I}^{C}\). Polar codes can also be seen as monomial codes. In particular, most practical polar codes are decreasing monomial codes, i. e., all sub-channels in \(\mathcal{I}\) obey the partial order according to [10], and are thus called decreasing polar codes. They are completely defined by the minimal information set \(\mathcal{I}_{\min}\)[5].
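As a brief, self-contained illustration (not taken from the paper), the encoding \(\mathbf{x}=\mathbf{u}\mathbf{G}_{N}\) with \(\mathbf{G}_{N}=\mathbf{G}_{2}^{\otimes n}\) can be written as:

```python
import numpy as np

def polar_encode(info_bits, info_set, n):
    """Encode K information bits placed at the positions in info_set; all other
    (frozen) positions of u are set to 0."""
    N = 2 ** n
    G2 = np.array([[1, 0], [1, 1]], dtype=int)
    G_N = np.array([[1]], dtype=int)
    for _ in range(n):                  # G_N = n-th Kronecker power of G_2
        G_N = np.kron(G_N, G2)
    u = np.zeros(N, dtype=int)
    u[sorted(info_set)] = info_bits
    return (u @ G_N) % 2                # codeword x of length N

# toy example: an (8, 4) code with information set {3, 5, 6, 7}
x = polar_encode([1, 0, 1, 1], {3, 5, 6, 7}, n=3)
```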
### _Successive Cancellation Based Decoding_
Polar code decoding can be represented as traversal of a balanced binary tree, the Polar Factor Tree (PFT) [11]. Input to the root node are the channel LLRs, defined as \(\operatorname{LLR}(y_{i})=\log(\operatorname{P}(y_{i}|x_{i}=0)/\operatorname {P}(y_{i}|x_{i}=1))\), calculated from the \(N\) received channel values \(\mathbf{y}\) with \(i\in[0,N)\). A node \(v\) in layer \(s\) receives a vector \(\mathbf{\alpha}^{v}\) of \(N_{v}=2^{s}\) LLRs from its parent node in layer \(s+1\). The min-sum formulations of \(f\)- and \(g\)-functions [12] are used to obtain the messages passed to the left and the right child nodes, \(\mathbf{\alpha}^{l}\) and \(\mathbf{\alpha}^{r}\), respectively. The partial-sum vector \(\mathbf{\beta}^{v}\) is returned to the parent node and calculated by the \(h\)-function combining \(\mathbf{\beta}^{l}\) and \(\mathbf{\beta}^{r}\). Thus, SC-based decoding causes a depth first traversal of the PFT with an inherent priority to the left child. In the \(N\) leaf nodes, \(\mathbf{\beta}^{v}\) is set to the value of the estimated bit \(\hat{u}_{i}\) as
\[\hat{u}_{i}=\begin{cases}0,&\text{if }i\in\mathcal{F}\text{ or }\alpha_{i}^{v}\geq 0\\ 1,&\text{otherwise}.\end{cases} \tag{1}\]
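For concreteness, a small sketch of the min-sum \(f\)- and \(g\)-updates, the partial-sum combination \(h\), and the leaf decision (1) is given below; the array conventions (splitting \(\mathbf{\alpha}^{v}\) into its first and second halves before the calls) are simplified assumptions.

```python
import numpy as np

def f_min_sum(a1, a2):
    """LLRs passed to the left child: sign(a1)*sign(a2)*min(|a1|, |a2|)."""
    return np.sign(a1) * np.sign(a2) * np.minimum(np.abs(a1), np.abs(a2))

def g_func(a1, a2, beta_l):
    """LLRs passed to the right child, conditioned on the left partial sums."""
    return a2 + (1 - 2 * beta_l) * a1

def h_func(beta_l, beta_r):
    """Partial sums returned to the parent node."""
    return np.concatenate([(beta_l + beta_r) % 2, beta_r])

def leaf_decision(alpha_i, frozen):
    """Bit estimate at a leaf node, cf. (1)."""
    return 0 if frozen or alpha_i >= 0 else 1
```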
### _Successive Cancellation List Decoding_
In SCL decoding [3], lists of \(L\) vectors are passed among the nodes of the PFT instead of only passing vectors \(\mathbf{\alpha}\) and \(\mathbf{\beta}\). Therefore, \(l\in[0,L)\) is introduced to index the \(L\) concurrent paths, e. g., the LLRs of a node \(v\) are denoted as \(\mathbf{\alpha}^{v,l}\). For an information bit estimation in layer \(s=0\) of the PFT, the decoding path splits, i. e., both possible values, \(0\) and \(1\), are considered. Consequently, an information bit estimation doubles the number of decoding paths. Unreliable paths must be rejected by sorting, when the number of paths exceeds \(L\). For this purpose, every bit estimation updates a Path Metric (PM) to rate the reliability of each path. These PMs are initialized with 0 in LLR-based SCL decoding [13]. The PM of a path with index \(p\in[0,2L)\) proceeding input path \(l\in[0,L)\) is updated for the \(i\)-th bit estimation by
\[\text{PM}_{i}^{p}=\begin{cases}\text{PM}_{i-1}^{l}+\left|\alpha_{i}^{v,l} \right|,&\text{if }\beta_{i}^{v,l}\neq\text{HDD}(\alpha_{i}^{v,l})\\ \text{PM}_{i-1}^{l},&\text{otherwise},\end{cases} \tag{2}\]
with Hard Decision Decoding (HDD) on \(\mathbf{\alpha}^{v}\) defined as
\[\text{HDD}(\alpha_{i}^{v,l})=\begin{cases}0&\text{if }\alpha_{i}^{v,l}\geq 0\\ 1&\text{otherwise}.\end{cases} \tag{3}\]
Consequently, the PM is a cost function and, thus, the smallest PM values belong to the most probable paths, which survive the sorting step. The most probable path is selected as the output of the decoder after the last bit decision.
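A compact sketch of the path-metric update (2)–(3) and the subsequent _Sort & Select_ step is given below; in an actual decoder each path carries its own leaf LLR \(\alpha_{i}^{v,l}\), which is represented here by storing one LLR per path.

```python
def pm_update(pm, alpha_i, bit):
    """Penalise a path when its chosen bit disagrees with the hard decision (3)."""
    hard = 0 if alpha_i >= 0 else 1
    return pm + abs(alpha_i) if bit != hard else pm

def split_and_select(paths, L):
    """paths: list of (bits, pm, alpha_i) tuples.  Each information bit splits every
    path into two candidates; only the L paths with the smallest PM survive."""
    expanded = [(bits + [b], pm_update(pm, a, b)) for bits, pm, a in paths for b in (0, 1)]
    return sorted(expanded, key=lambda p: p[1])[:L]
```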
### _Automorphisms for Polar Code Decoding_
Automorphisms are permutations that map every code word onto another code word. It was shown that the group of affine automorphisms of decreasing polar codes equals the Block Lower-Triangular Affine (BLTA) group [6] defined as
\[\mathbf{z}^{\prime}=\mathbf{A}\mathbf{z}+\mathbf{b}, \tag{4}\]
where \(\mathbf{A}\) is an invertible, block lower triangular binary matrix, \(\mathbf{b}\) an arbitrary binary vector, and \(\mathbf{z},\mathbf{z}^{\prime}\) are the binary representations of the bit indices before and after the permutation, respectively [4]. In [5], an algorithm to determine the block profile of \(\mathbf{A}\) for a given \(\mathcal{I}\) is provided. To select proper permutations \(\pi_{m}\) from the automorphism group, a greedy selection method was proposed in [14] to pick \(M\) permutations from different partitions of the automorphisms, i. e., the Equivalence Classes (ECs) [7].
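To make the action of a BLTA automorphism on the code-bit indices explicit, the sketch below (assuming a little-endian binary representation of the indices) converts a pair \((\mathbf{A},\mathbf{b})\) from (4) into an index permutation that can be applied directly to a length-\(N\) LLR vector.

```python
import numpy as np

def blta_permutation(A, b):
    """A: invertible block lower-triangular n x n binary matrix; b: length-n binary vector.
    Returns perm such that permuted_llr = llr[perm]."""
    n = len(b)
    weights = 1 << np.arange(n)                  # little-endian bit weights
    perm = np.empty(2 ** n, dtype=np.int64)
    for idx in range(2 ** n):
        z = (idx >> np.arange(n)) & 1            # binary representation of idx
        z_new = (A @ z + b) % 2                  # z' = A z + b over GF(2)
        perm[idx] = int(z_new @ weights)
    return perm
```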
## III Successive Cancellation Automorphism List Decoding
Permuting the received channel data according to automorphisms corresponds to different noise realizations. Since there may be permutations that are easier to decode, the probability of finding the correct code word is increased in AED [4]. Simultaneously, AED benefits from the locality of the \(M\) independent, constituent ensemble decoder cores resulting in decreased hardware implementation costs compared to SCL decoders [14].
SCL decoding, on the contrary, splits the decoding paths \(\zeta\) whenever an information bit is decoded to overcome error propagation [3]. In a _Sort & Select_ step, a message exchange takes place to proceed the \(L\) most probable paths and prevent the exponential increase of the number of candidate paths.
To benefit from both approaches of error reduction, AED with constituent SCL decoders is an obvious solution [4]. However, the message exchange is then limited to the constituent SCL decoder instances and, with one permutation per decoder core, the potential of automorphisms is not fully exploited. Furthermore, the selection of the most probable candidate in AED is already inherent in SCL decoding.
In [8, 9], near-ML performance was achieved for Reed-Muller codes by running SCL decoders on permutations of the channel LLRs and combining the different lists in each decoding step. Similarly, SCL decoding of Polar codes can benefit from the usage of automorphisms, which represent different noise realizations of the received channel data. For this purpose, an SCL-\(L\) decoder is initialized with \(L\) permutations \(\pi_{l},l\in[0,L)\) as
\[\mathbf{\alpha}^{0,l}=\pi_{l}\left(\operatorname{LLR}(\mathbf{y})\right), \tag{5}\]
instead of only working on the original channel values corresponding to the identity permutation \(\pi_{0}\left(\operatorname{LLR}(\mathbf{y})\right)\). The permutations are sampled from the automorphism ECs of the code by a greedy selection method as described in [14]. Consequently, the list of candidates is filled from the start of SCL decoding and does not only start expanding with path splitting in the first nodes containing information bits.
While the PMs of all candidates (i. e., decoding paths \(\zeta_{l}\)) are already updated in the left-most frozen nodes, path splitting is superposed in non-frozen nodes. Thus, the different noise realizations of the permutations compete against each other whenever the expanded list of candidates is sorted and pruned to \(L\) in the sorters of the non-frozen nodes. This message exchange in the _Sort & Select_ steps provides a gain compared to the independent decoding of the permutations in AED. Thus, SCAL has the ability to adapt to the channel conditions automatically, since either the candidates generated by path splitting or those stemming from the permutations preferentially persist.
Finally, the SCAL decoder outputs the most reliable candidate, i. e., the path with the smallest PM, which is found as the first entry (index \(0\)) in the final sorted list of candidates. Depending on its origin \(o(\boldsymbol{\hat{x}_{0}})\), the reverse permutation \(\pi_{o(\boldsymbol{\hat{x}_{0}})}^{-1}\) of the corresponding input permutation is applied as
\[\boldsymbol{\hat{x}}=\pi_{o(\boldsymbol{\hat{x}_{0}})}^{-1}\left(\boldsymbol{ \beta}^{0,0}\right) \tag{6}\]
to obtain the decoded code word.
The architecture of an SCAL decoder is shown in Fig. 1, indicating the parallel decoding of paths \(\zeta_{l}\) until leaf nodes split the decoding paths. The decoding of each path \(\zeta\) produces multiple candidates. Based on the corresponding PMs, the _Sort & Select_ step consolidates the \(L\) most reliable paths by the message exchange of the incoming candidate paths. It is noteworthy that SCAL decoding can adopt the known, state-of-the-art optimizations for complexity reduction of SCL decoding without error-correction performance degradation.
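A high-level sketch of the resulting decoding flow is shown below. The routine `scl_decode_list`, which runs list decoding from the \(L\) pre-initialised paths of Fig. 1 and returns the surviving candidates together with their PMs and originating permutation index, is an assumed placeholder.

```python
import numpy as np

def scal_decode(llr, perms, scl_decode_list):
    """llr: channel LLRs; perms: list of L index permutations (NumPy arrays)
    sampled from the automorphism ECs of the code."""
    init_paths = [llr[p] for p in perms]          # (5): L permuted noise realisations
    candidates = scl_decode_list(init_paths)      # [(codeword, pm, origin l), ...]
    best_cw, _, l = min(candidates, key=lambda c: c[1])
    inv = np.argsort(perms[l])                    # inverse of the winning permutation
    return best_cw[inv]                           # (6): map back to the original bit order
```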
## IV Results
The results presented in this section are based on the polar code \(\mathcal{P}(128,60)\) with \(\mathcal{I}_{\min}=\{27\}\), which exhibits a block profile \(\boldsymbol{s}=(3,4)\) of its BLTA group and is therefore well suited for AED [14] and SCAL decoding. The permutation selection follows [14] to pick one automorphism from \(L\) ECs. Since the same code was used in [14], this selection enables a direct comparison to AED.
The Fast Simplified SCAL (Fast-SSCAL) and Fast Simplified Successive-Cancellation List (Fast-SSCL) decoders use optimized nodes according to [15, 16] and a quantization of 6 bit for LLRs and 8 bit for PMs, for both Frame Error Rate (FER) simulations and hardware implementations.
Our simulation model applies Binary Phase Shift Keying (BPSK) modulation and uses an Additive White Gaussian Noise (AWGN) channel. Up to \(10^{8}\) pseudo-random blocks are simulated, with a limit of \(10^{3}\) erroneously decoded blocks.
### _Analysis of Automorphism Evolution_
To evaluate the novel SCAL decoding algorithm, the evolution of the permutations is analyzed. The permutations can be seen as different noise realizations, and the ones which are easier to decode prevail. Thus, the mean number of different permutations within the list of surviving candidates decreases during SCAL decoding. This effect is shown in Fig. 2 for the \(\mathcal{P}(128,60)\) under SCAL-\(8\) decoding at different Signal-to-Noise-Ratio (SNR) points. The figure also shows that the permutations dominate classical candidate generation in SCL decoding with increasing SNR.
Unlike in SCL decoding, the final code word candidates (after undoing the permutation) are not necessarily different in SCAL decoding. Fig. 3 shows the average number of unique code word candidates and the number of different permutations of the surviving paths before the final _Sort & Select_ stage. For low SNR, a large list diversity can be observed, as the number of unique code word candidates is almost identical to the maximum number of paths \(L\). However, these candidates stem from only a small number of permutations, indicating that the SCL branching dominates candidate generation in this regime. Conversely, at high SNR, the decoder only rarely branches and, thus, behaves more like AED where almost all \(L\) initial permutations survive until the final selection. However, this results in a low list diversity, as all decoding paths converge to the same (correct) code word candidate most of the time.
Fig. 1: SCAL decoding architecture
Fig. 2: Evolution of the number of different permutations \(\pi_{l}\) vs. the bit indices during SCAL-\(8\) decoding of \(\mathcal{P}(128,60)\)
### _Error-Correction Performance_
The plot in Fig. 4 shows the FERs of SCAL decoding, SCL decoding and AED for \(L/M\in\{2,4,8,16\}\). The upper and lower bounds for ML decoding performance are also given, which are observed by performing SCL decoding with \(L=128\). Whenever the decoded codeword is closer to the received channel values than the correct codeword, an ML decoder would fail. Thus, counting these cases provides the upper ML bound. Similarly, the lower bound is derived with the additional condition that the final list of candidates contains the correct codeword. In Fig. 4, both bounds coincide.
It can be seen that SCAL-\(2\) provides a gain of 0.1 dB over AED and SCL decoding. At an FER of \(10^{-5}\), SCAL-\(4\) has a performance gain of 0.22 dB and 0.16 dB compared to SCL-\(4\) and AED-\(4\) and a gap of 0.074 dB and 0.11 dB to SCL-\(8\) and AED-\(8\), respectively. At the same FER, SCAL-\(8\) outperforms SCL-\(16\) and AED-\(16\) by 0.024 dB and 0.015 dB, respectively, and the gap to the ML bound is 0.06 dB. To summarize, SCAL decoding outperforms the error-correction performance of AED and SCL decoding significantly for all list sizes/degrees of parallelism.
### _Hardware Implementation_
In addition to the improved error-correction performance, also the implementation costs of the proposed decoders have to be considered. Thus, we implemented different AED, SCL and SCAL decoders. All decoder implementations are unfrolled and fully pipelined architectures for a target frequency of 500 MHz. Synthesis was executed with _Design Compiler_ and Placement and Routing (PAR) with _IC-Compiler_, both from _Synopsys_, in a 12 nm FinFET technology from Global Foundries under worst case Process, Voltage and Temperature (PVT) conditions (125 \({}^{\circ}\)C, 0.72 V) for timing and nominal case PVT (25 \({}^{\circ}\)C, 0.8 V) for power. Power values stem from post-PAR netlist simulations with back-annotated wiring data and test data at \(E_{b}/N_{0}=4\) dB.
In fully pipelined architectures, the coded throughput is calculated as \(T_{c}=f\cdot N\), with clock frequency \(f\). Metrics for comparison are area efficiency \(\mu_{A}=T_{c}/A\), energy efficiency \(\mu_{E}=P/T_{c}\), with \(A\) being the area and \(P\) the power of the implementation, and power density \(P/A=\mu_{E}\cdot\mu_{A}\).
#### Iv-C1 SCAL decoding vs. AED
The PAR results for SCAL decoder and AED implementations are presented in Table I. For a fair comparison of the implementation costs of SCAL and AE decoders, the architectures having comparable error-correction performance are selected. Thus, SCAL-\(4\) is compared with AED-\(8\), and SCAL-\(8\) with AED-\(16\). It can be seen, that the SCAL decoders have a better energy efficiency than their AED counterparts. The area efficiency of SCAL-\(4\) is \(1.32\times\) better than \(\mu_{\text{A}}\) of AED-\(8\). Comparing \(\mu_{\text{A}}\) of SCAL-\(8\) and AED-\(16\), the SCAL decoder is \(0.90\times\) worse. All presented SCAL decoders have a better power density than AED. In terms of latency, AED is always at an advantage, since its latency is defined by the latency of the SC decoder cores.
#### Iv-C2 SCAL decoding vs. SCL decoding
The comparison of SCAL and SCL decoder implementations aims for showing the overhead caused by generating and processing \(L\) permutations of the input LLRs in the SCAL decoder. Thus, two decoders with equal \(L\) are compared, respectively. The threshold parameters for the optimized Single Parity-Check (SPC) nodes [16] are set to preserve the error-correction performance of each decoder, i. e., \(S_{\text{SPC}}=\{2,2\}\) and \(k_{\text{SPC}}=\{2,3\}\) for SCAL-\(\{4,8\}\), and \(S_{\text{SPC}}=\{3,4\}\) and
Fig. 4: FER vs. SNR for SCL, AED and SCAL decoding of \(\mathcal{P}(128,60)\)
Fig. 3: Average number of unique final code word candidates and average number of permutations vs. SNR for SCAL decoding of \(\mathcal{P}(128,60)\)
\(k_{\text{SPC}}=\{2,3\}\) for SCL-\(\{4,8\}\), respectively. The corresponding results are given in Table II.
Fig. 5 shows the two layouts of the SCL-8 decoder (5a) and the SCAL-8 decoder (5b) with equal area scaling. Whenever possible, equal coloring is used for the computational kernels in both implementations. The most conspicuous difference is the size of the cells highlighted in lime-green, which correspond to the first \(g\)-function of the decoder in the root node. Since in the SCL decoder, only one input vector needs to be processed (in \(L\) variants), in SCAL decoding the input to this \(g\)-function is a list of \(L\) LLR vectors. The corresponding delay-line memory (colored in black for all stages) is also \(\times L\) greater. Thus, costs in area efficiencies are \(\mu_{\text{A, SCAL-4}}\approx\mu_{\text{A, SCL-4}}\times 0.78\) and \(\mu_{\text{A, SCAL-8}}\approx\mu_{\text{A, SCL-8}}\times 0.70\), and in energy efficiencies \(\mu_{\text{E, SCAL-4}}\approx\mu_{\text{E, SCL-4}}\times 1.21\) and \(\mu_{\text{E, SCAL-8}}\approx\mu_{\text{E, SCL-8}}\times 1.27\), respectively. However, the usage of the permutations brings a gain in the error-correction performance (section IV-B) for which reason SCAL decoders outperform comparable SCL decoders.
## V Conclusion
In this paper, SCAL decoding is presented as a novel method to benefit from using automorphisms of polar codes together with SCL decoding. The proposed SCAL decoding algorithm outperforms AED and SCL decoding with respect to the error-correction performance. Regarding the hardware implementation costs, SCAL decoders benefit from the possible reduction of the list size and therefore compete with AED and SCL decoder implementations with comparable error correction.
|
2305.14102 | A Deep Matched Filter For R-Peak Detection in Ear-ECG | The Ear-ECG provides a continuous Lead I electrocardiogram (ECG) by measuring
the potential difference related to heart activity using electrodes that can be
embedded within earphones. The significant increase in wearability and comfort
afforded by Ear-ECG is often accompanied by a corresponding degradation in
signal quality - a common obstacle that is shared by most wearable
technologies. We aim to resolve this issue by introducing a Deep Matched Filter
(Deep-MF) for the highly accurate detection of R-peaks in wearable ECG, thus
enhancing the utility of Ear-ECG in real-world scenarios. The Deep-MF consists
of an encoder stage (trained as part of an encoder-decoder module to reproduce
ground truth ECG), and an R-peak classifier stage. Through its operation as a
Matched Filter, the encoder searches for matches with an ECG template pattern
in the input signal, prior to filtering the matches with the subsequent
convolutional layers and selecting peaks corresponding to true ECG matches. The
so condensed latent representation of R-peak information is then fed into a
simple R-peak classifier, of which the output provides precise R-peak
locations. The proposed Deep Matched Filter is evaluated using
leave-one-subject-out cross validation over 36 subjects with an age range of
18-75, with the Deep-MF outperforming existing algorithms for R-peak detection
in noisy ECG. The Deep-MF achieves a median R-peak recall of 94.9\%, a median
precision of 91.2\% and an (AUC) value of 0.97. Furthermore, we demonstrate
that the Deep Matched Filter algorithm not only retains the initialised ECG
kernel structure during the training process, but also amplifies portions of
the ECG which it deems most valuable. Overall, the Deep Matched Filter serves
as a valuable step forward for the real-world functionality of Ear-ECG and,
through its explainable operation, the acceptance of deep learning models in
e-health. | Harry J. Davies, Ghena Hammour, Marek Zylinski, Amir Nassibi, Danilo P. Mandic | 2023-05-23T14:24:14Z | http://arxiv.org/abs/2305.14102v1 | # A Deep Matched Filter For R-Peak Detection in Ear-ECG
###### Abstract
The Ear-ECG provides a continuous Lead I electrocardiogram (ECG) by measuring the potential difference related to heart activity through the use of electrodes that can be embedded within earphones. The significant increase in wearability and comfort afforded by Ear-ECG is often accompanied by a corresponding degradation in signal quality - a common obstacle that is shared by the majority of wearable technologies. We aim to resolve this issue by introducing a Deep Matched Filter (Deep-MF) for the highly accurate detection of R-peaks in wearable ECG, thus enhancing the utility of Ear-ECG in real-world scenarios. The Deep-MF consists of an encoder stage (trained as part of an encoder-decoder module to reproduce ground truth ECG), and an R-peak classifier stage. Through its operation as a Matched Filter, the encoder section searches for matches with an ECG template pattern in the input signal, prior to filtering the matches with the subsequent convolutional layers and selecting peaks corresponding to true ECG matches. The so condensed latent representation of R-peak information is then fed into a simple R-peak classifier, of which the output provides precise R-peak locations. The proposed Deep Matched Filter is evaluated using leave-one-subject-out cross validation over 36 subjects with an age range of 18-75, with the Deep-MF outperforming existing algorithms for R-peak detection in noisy ECG. The proposed Deep-MF is benchmarked against a ground truth ECG in the form of either chest-ECG or arm-ECG, and both R-peak recall and R-peak precision are calculated. The Deep-MF achieves a median R-peak recall of 94.9% and a median precision of 91.2% across subjects when evaluated with leave-one-subject-out cross validation. Moreover, when evaluated across a range of thresholds, the Deep-MF achieves an area under the curve (AUC) value of 0.97. The interpretability of Deep-MF as a Matched Filter is further strengthened by the analysis of its response to partial initialisation with an ECG template. We demonstrate that the Deep Matched Filter algorithm not only retains the initialised ECG kernel structure during the training process, but also amplifies portions of the ECG which it deems most valuable - namely the P wave, and each aspect of the QRS complex. Overall, the Deep Matched Filter serves as a valuable step forward for the real-world functionality of Ear-ECG and, through its explainable operation, the acceptance of deep learning models in e-health.
## I Introduction
Recent advancements in Hearables serve to disrupt the e-health market through the provision of continuous monitoring of mental state and vital signs from the ear [1]. Of the different Hearables' sensing modalities, one of the most notable is the Ear-ECG, which provides continuous lead I electrocardiogram (ECG), the measurement of the electrical activity of the heart, through the potential difference between two in-ear electrodes on separate sides of the head [2]. The precise heart rate information from ECG can be used to monitor stress through heart rate variability metrics [3][4][5] and the detection of irregular heart rhythms (arrhythmia) [6]. However, with the immense gain in comfort and wearability afforded by an in-ear sensor compared to electrodes on the chest comes a drop in signal to noise ratio. The potential difference across the heart is often as much as 2 orders of magnitude lower from the ear than it is at the chest [7]. Moreover, the Ear-ECG commonly contains other signals comparable in amplitude, such as electrical activity generated by eye movements, known as electrooculography (EOG) [8], and electrical signals generated by neuronal activity in the brain, known as electroencephalography (EEG) [9][10][11]. In order to best exploit the benefits of Ear-ECG, algorithms need to be able to detect the presence of the ECG waveform across a challenging range of signal qualities, and correctly distinguish the peaks in ECG (R-peaks) from peaks that may occur due to artefacts or other electrical activity.
With this in mind, it is straightforward to assume that a matched filter [12], the process of shifting a template across a signal to enhance the detection of the pattern contained within the template, would perform well in the scenario of detecting R-peaks in wearable ECG. This has been demonstrated previously through the combination of matched filter and Hilbert transform [13], which was shown to outperform the commonly used Pan-Tompkins algorithm [14] for r-peak detection. Recent work on the interpretability of convolutional neural networks (CNNs) has demonstrated that at a fundamental pattern recognition level, a CNN performs in the same way as a matched filter, by performing convolution between a learned template kernel and an input signal or image and exploiting the correlation between the two [15]. This was further verified through the MNIST handwriting data set, in which trained kernels converged to resemble different numbers [16]. Given the clear benefits of using matched filtering to
Fig. 1: The Ear-ECG earpiece. Left: The placement of one of the ear electrodes within the ear canal. Right: A labelled prototype Ear-ECG device, consisting of a foam ear-plug, a cloth electrode and an ear-hook to stabilise the ear-piece within the ear canal.
detect R-peaks in noisy ECG, and the theoretical link between CNNs and matched filtering, it is hypothesised that a learned convolutional matched filter could be leveraged to provide superior results for R-peak detection, whilst remaining fully interpretable in its operation.
To this end, we implement a deep convolutional neural network based matched filter for the efficient and accurate detection of R-peaks in Ear-ECG with poor signal to noise ratio. The trained model, whilst demonstrating exceptional performance over existing methods, has the benefit of full interpretability through the lens of matched filters, with kernel weights that exploit and amplify aspects of the ECG pattern.
## II Methods
### _Hardware and Data_
Simultaneous Ear-ECG and either arm-ECG or chest-ECG (resembling lead I) was measured from 36 subjects, with an age range of 18-75. There was a minimum of 2 minutes of data recorded from each subject, with the majority of subjects having 5 minutes of ECG data. Recordings took place when subjects were still or during sleep to minimise the impact of motion artefacts, but it should be noted that motion artefacts were still present in the data, albeit rare, and not excluded from our analysis. In 34 of the subjects, the Ear-ECG was recorded with two earpieces across the head with a ground electrode placed on the forehead. In 2 of the subjects the Ear-ECG signal was from a single ear electrode which was referenced to the contra-lateral mastoid. The Ear-ECG earpiece, shown in Fig. 1, consisted of a foam earpiece with a cloth electrode, and electrode gel was used to reduce the impedance between the electrodes and the skin of the ear canal. The recordings were performed under the IC ethics committee approval JRCO 20IC6414. All subjects gave full informed consent.
The Ear-ECG was down-sampled from 500Hz to 250Hz, and pre-filtered with three separate configurations to provide 3 input channels to the model. The first channel was a band-pass filter between 1Hz and 4Hz which aimed to reduce higher frequency noise whilst preserving the crucial information in the ECG. The second channel was a band-pass filter between 1 and 5Hz, which removed high frequency noise and the QRS complex from the ECG, but retained information on the P and T waves. The third channel was high-pass filtered with a cut-off frequency of 1Hz. This preserved all of the higher frequency detail present in the ECG, but also retained high frequency noise such as electrical interference at 50Hz. To segment the data, a sliding window with a length of two seconds (500 samples) was implemented with a shift of 0.4 seconds (100 samples). This resulted in a total of 26564 segments across all subjects. Two seconds was chosen as the segment length so that inputs would always have an ECG waveform, and usually have upwards of two ECG waveforms.
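A sketch of this pre-processing pipeline is given below; the Butterworth filter order and the use of zero-phase filtering are assumptions, as the text specifies only the cut-off frequencies, the window length and the window shift.

```python
import numpy as np
from scipy.signal import butter, filtfilt, decimate

FS = 250  # Hz, after downsampling from 500 Hz

def _filter(x, Wn, btype, order=4, fs=FS):
    b, a = butter(order, Wn, btype=btype, fs=fs)
    return filtfilt(b, a, x)

def preprocess(ear_ecg_500hz):
    x = decimate(ear_ecg_500hz, 2)                       # 500 Hz -> 250 Hz
    chans = np.stack([_filter(x, [1, 45], "bandpass"),   # channel 1
                      _filter(x, [1, 5], "bandpass"),    # channel 2
                      _filter(x, 1, "highpass")])        # channel 3
    # 2 s windows (500 samples) with a 0.4 s (100 sample) shift
    starts = range(0, chans.shape[1] - 500 + 1, 100)
    return np.stack([chans[:, s:s + 500] for s in starts])  # (segments, 3, 500)
```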
### _Deep Matched Filter Model_
The deep matched filter architecture, developed in PyTorch [17] and shown in Fig. 2, consists of two main parts. Firstly, an encoder-decoder module, which aims to extract shared
Fig. 2: An overview of the proposed Deep Matched Filter (deep-MF) architecture. The three input channels to the model (top left, blue), including Ear-ECG band-pass filtered between 1 and 45Hz, Ear-ECG band-pass filtered between 1 and 5Hz, and Ear-ECG high-pass filtered with a cut-off frequency of 1Hz. These channels are inputs to an encoder module which serves as a matched filter (middle, blue). The encoder, constructed of 1D convolutional layers, consists of a matched filter layer with kernels of length 200 (0.8 seconds) which serve to detect ECG patterns in the input, and three subsequent “refinement” layers with kernels of length 50, to determine which matches are true. A subsection of the weights of the matched filter layer are initialised with a shifted ECG template (top right, grey). The encoder is accompanied by a decoder (middle, red), consisting of 1D transpose convolutional layers, which upsample the output of the encoder into an output which resembles an ECG waveform (bottom left, red). The decoder is essential for the training of the encoder. The final module is the R-peak classifier (right, purple), which takes the output of the encoder and uses it to predict the position of the R-peak. The R-peak classifier consists of a single 1D convolution layer, and a linear layer.
information between the input and a training reference by condensing the information from the input that is most predictive of the output into a latent representation [18]. In this case, the encoder-decoder was trained with arm-ECG as a reference and aims to encode the shared information between the Ear-ECG and the arm-ECG, before decoding this information into a waveform resembling that of the arm-ECG. The encoder, whilst similar in organisation to that of a denoiser, behaved as a matched filter by simply encoding the r-peak location from the original Ear-ECG and no corresponding morphological information. It then used this encoded R-peak location and pasted a learned ECG pattern in the same position. The decoder was thus bypassed, with a simple CNN based classifier which used the latent representation to predict the R-peak location.
The encoder, highlighted in blue in Fig. 2, consisted of 4 one-dimensional convolutional layers. In the first layer, there were 6 kernels associated with each input channel, to form 6 output channels. In all subsequent layers there were 6 kernels associated with each of the 6 new inputs. In the first layer, a kernel size of 200 was chosen, corresponding to 0.8 seconds and representing a duration slightly longer than that of a full ECG segment for its use as a matched filter template. Moreover, the 6 kernels corresponding to the first band-pass filtered input channel (1 to 45Hz) were initialised with a shifted ECG template which is highlighted in grey in Fig. 2. The subsequent layers in the encoder had a kernel size of 50, chosen to encompass the width of the resulting "match" peak from convolution between the input and the input layer. These layers served as refinement layers for the output of the first layer, in essence helping the model to increase the precision of the matched filter by deciding which matches were valid and which matches were not. The first 3 layers had a ReLU activation function and a dropout of 50%, and the fourth layer had a Sigmoid activation function and a dropout of 50%. The Sigmoid activation function was important for ensuring stability of the model during training, due to the bounded output property.
The decoder, highlighted in red in Fig.2, contained 4 transpose convolutional layers which mirrored the one-dimensional convolutional layers of the encoder. In contrast to the encoder layers, there was no dropout applied and only a single Sigmoid activation function was applied between the first and second decoder layers. Moreover, there was a single output corresponding to a 2 second ECG trace. The encoder-decoder structure was trained to minimise mean squared error between the output and the reference ECG waveform. Importantly however, due to the encoder operating as a matched filter and the fact that there were only slight differences in the morphology of ECG waveform across subjects, the model minimised error by detecting only the location of the ECG in the input, and upsampling this into a generic ECG waveform. Despite this, the training paradigm of using a decoder to replicate a full ECG waveform was necessary, as when the same structure was trained to replicate just the R-peaks it often failed to converge.
Given that the latent representation contained information on the location of the ECG in the input, a simple classifier, highlighted in purple in Fig. 2, was then trained to take in the latent variables and output R-peak locations. This classifier consisted of a single layer 1D convolution, followed by a Sigmoid activation function and flattening, before finally being passed to a linear layer. For the training of this second model, the latent variables were extracted by passing the training inputs through the trained encoder. These latent variables were then used as inputs to the R-peak classifier, which was trained against the corresponding R-peak locations from the reference ECG. The R-peak locations were calculated from the ECG reference using the MATLAB (ver. 2022b) function findpeaks, and both the location of the R-peak and the two neighbouring values were assigned a value of 1. Extending the window of the R-peak location from 1 to 3 in the training reference gave the model slightly more lenience in the shift of a peak, and without this the model had a tendency to suppress peaks. The output of the classifier was trained to minimise mean squared error against the corresponding array of 1s and 0s.
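The classifier and its training target can be sketched as follows. The convolution kernel size, the latent length (500 samples for a 2 s window at an assumed 250 Hz), and the use of SciPy's find_peaks as a stand-in for MATLAB's findpeaks are assumptions made only for illustration.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import find_peaks  # stand-in for MATLAB's findpeaks

class RPeakClassifier(nn.Module):
    """Sketch of the R-peak classifier: one 1D convolution over the latent
    representation, a Sigmoid, flattening, and a linear layer producing one
    value per sample of the 2 s window."""
    def __init__(self, channels=6, latent_len=500, out_len=500):
        super().__init__()
        self.conv = nn.Conv1d(channels, 1, kernel_size=25, padding=12)  # kernel size assumed
        self.linear = nn.Linear(latent_len, out_len)

    def forward(self, z):
        h = torch.sigmoid(self.conv(z))
        return self.linear(torch.flatten(h, start_dim=1))

def rpeak_targets(reference_ecg, n_samples):
    """Build the training target: 1 at each reference R-peak and its two
    neighbouring samples (a 3-sample tolerance window), 0 elsewhere."""
    target = np.zeros(n_samples)
    peaks, _ = find_peaks(reference_ecg)  # peak-detection parameters omitted here
    for p in peaks:
        target[max(p - 1, 0):min(p + 2, n_samples)] = 1.0
    return target
```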
Fig. 3: The signal pathway through the proposed Deep Matched Filter (Deep-MF) of an example test input Ear-ECG trace (blue). The input firstly passes through the matched filter layer, resulting in the Layer 1 Output, with 3 potential matches. This initial output is then passed through the subsequent “refinement” layers, until a singular peak is present in the latent representation. In the matched filter and refinement stages, the true peak is highlighted with a shaded red box. This latent space is then passed through the R-peak classification phase, resulting in predicted R-peak location (purple). For the purposes of comparison, the ground truth ECG is displayed below in black.
Finally, averaging was performed on the 2 second output of the deep matched filter, with a shift of 0.4 seconds. In a real-world setting it would be practical to implement the model with a rolling output, rather than waiting for each new 2 second window to pass. Moreover, if the ECG in the input were at the boundaries of the input window and not a full waveform, it would be cropped with respect to the matched filter (an issue that padding would not solve) and would thus be more difficult to detect. A rolling window circumvents this issue by ensuring that every ECG waveform in the input is, at some point, close to the center of the input.
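The overlap-averaging of successive 2 second outputs can be expressed as in the following sketch, where the 0.4 second shift corresponds to the hop between windows; the conversion of these durations to sample counts depends on the sampling rate and is left to the caller.

```python
import numpy as np

def rolling_average(window_outputs, win_len, hop):
    """Average overlapping window outputs (e.g. 2 s windows with a 0.4 s hop)
    into one continuous trace. `window_outputs` is a list of 1-D arrays of
    length `win_len`, in temporal order; `hop` is the shift in samples."""
    total_len = hop * (len(window_outputs) - 1) + win_len
    acc = np.zeros(total_len)
    counts = np.zeros(total_len)
    for i, out in enumerate(window_outputs):
        start = i * hop
        acc[start:start + win_len] += out
        counts[start:start + win_len] += 1
    return acc / np.maximum(counts, 1)  # average where windows overlap
```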
The encoder-decoder model was trained for 10 epochs and the R-peak classifier was trained for 15 epochs, with both numbers of epochs chosen deliberately to limit over-fitting. Both were trained with a batch size of 10 segments, and both the encoder-decoder and the R-peak classifier models were trained using leave-one-subject-out cross validation.
The path that the input signal takes through the combined model is highlighted in Fig. 3. Observe that in this test example, where multiple peaks are present in the input but only one is a true R-peak, the output of the ECG template "matched filtering" layer results in 3 strong peaks. These peaks are then sifted through by the subsequent refinement layers to produce a single peak in the latent representation - a process we refer to as "refinement". This peak in the latent representation is then used by the R-peak classifier to determine the true R-peak location.
### _Model Evaluation_
The deep matched filter (Deep-MF) was evaluated against two separate models, namely a standard matched filter (MF) and the matched filter Hilbert transform algorithm (MF-HT) [13]. Both MF and MF-HT were implemented using the input channel that was band-pass filtered between 1 and 45Hz. For the outputs of the Deep-MF and MF, R-peaks were determined using the MATLAB function findpeaks, with a maximum peak width of 25 samples and a minimum peak distance of 12 samples. The determined R-peaks were then compared to the true R-peaks, which were also calculated using findpeaks on the reference ECG signal. If the predicted peak was within 40ms of the true R-peak, it was considered a match.
Fig. 4: Test results for the proposed Deep Matched Filter for R-peak detection in noisy Ear-ECG. (a) An example of Ear-ECG with a poor signal-to-noise ratio (blue) and the corresponding output R-peak locations of the Deep-MF (purple). Below is the ground truth Arm-ECG (black) with predicted R-peak locations overlaid in purple. Note that in the input there are several peaks which are stronger than the true R-peaks, particularly between 79 and 80 seconds. The Deep-MF correctly rejects these peaks and predicts the true R-peaks in the output. (b) Boxplots of R-peak Recall and Precision, across all subjects as a result of leave-one-subject-out cross-validation. The results of the proposed Deep-MF filter (purple) are compared to the Matched Filter Hilbert Transform (MF-HT) (red) and the standard Matched-Filter (blue). In terms of R-peak recall, the percentage of peaks in the ground truth correctly identified by the model, the proposed Deep-MF achieves a median of 94.9%, compared with the MF-HT and MF where the respective median recalls are 83.4% and 62.3%. In terms of precision, the percentage of the peaks predicted by the model that are correct, the proposed Deep-MF achieves a median of 91.2%, compared with the MF-HT and MF which achieved the respective median precisions of 79.5% and 67%.
Fig. 5: Precision-recall curves for R-peak detection, for the proposed Deep Matched Filter (Deep-MF) and the standard matched filter (MF). The Deep-MF (purple) achieves an area under the curve (AUC) value of 0.97, compared with the MF (blue) which achieves an AUC of 0.64.
This condition was also applied to the output of the MF-HT. The proposed Deep-MF and the MF were both evaluated in terms of R-peak recall (the proportion of the R-peaks in the reference signal that were correctly identified) and R-peak precision (the proportion of predicted R-peaks which were true R-peaks). An area under the curve (AUC) value corresponding to a precision-recall curve was calculated for both the proposed Deep-MF and the standard MF, by varying the minimum peak height threshold of the findpeaks function. This precision-recall curve was generated with the median precision and recall values across all subjects. For the MF-HT algorithm, it was not possible to vary sensitivity in this way, and thus its implementation was compared against the Deep-MF and the standard MF with fixed threshold parameters that produced a good balance of recall and precision, 0.11 in the case of Deep-MF and 0.90 in the case of MF. Note that the large difference in threshold between the Deep-MF and the standard MF stemmed from the fact that the outputs of the Deep-MF were scaled and thus lower in amplitude than those of the standard MF. The Deep-MF, standard MF and MF-HT were all compared again through performance in recall and precision, in the form of boxplots of these values across all 36 subjects.
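The 40 ms matching criterion and the resulting recall and precision can be sketched as follows. The greedy nearest-peak matching and the sampling-rate argument are illustrative assumptions; the original evaluation was performed with MATLAB's findpeaks rather than this Python code.

```python
def match_peaks(pred_peaks, true_peaks, fs, tolerance_s=0.04):
    """Count predicted R-peaks falling within 40 ms of an unmatched reference
    R-peak, then return (recall, precision). Peak positions are sample indices."""
    tol = tolerance_s * fs
    matched = set()
    tp = 0
    for p in pred_peaks:
        # nearest reference peak that has not been matched yet
        candidates = [(abs(p - t), j) for j, t in enumerate(true_peaks) if j not in matched]
        if candidates:
            dist, j = min(candidates)
            if dist <= tol:
                tp += 1
                matched.add(j)
    recall = tp / len(true_peaks) if true_peaks else 0.0
    precision = tp / len(pred_peaks) if pred_peaks else 0.0
    return recall, precision
```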
The effects of initialisation with an ECG template (shown in grey in Fig. 2) were also evaluated and contrasted against random initialisation, both in terms of performance and interpretability. To evaluate the performance impact of initialisation, the mean absolute test error of the encoder-decoder model was calculated at regular intervals during training for both random initialisation and ECG template initialisation. Similarly, AUC values for R-peak recall-precision curves were calculated for both random initialisation and ECG template initialisation. For the purposes of interpretability, the kernel weights post training with ECG template initialisation were examined visually and compared to the initialised values, with a focus on the P, Q, R, S, and T portions of the ECG to determine which aspects of the ECG were valuable to the model for detecting ECG in the input.
## III Results and Discussion
The deep matched filter achieved a median R-peak detection recall of 94.9% in the Ear-ECG of unseen subjects, with an interquartile range of 60.1% to 99.3%. The Deep-MF had a corresponding median precision of 91.2% with an interquartile range of 68.6% to 98.2%. The high recall and precision of the Deep-MF are reinforced by an example Deep-MF model output shown in Fig. 4(a), alongside the ground truth ECG and the input Ear-ECG. It can be observed in this example that even in a scenario with a poor signal-to-noise ratio, in which it is difficult to visually identify which peaks in the input belong to ECG, the Deep-MF correctly identifies the true peaks and excludes the incorrect ones. On the same subject pool, the MF-HT achieves a median recall of 83.4% with an IQR of 62.3% to 97.2%. The MF-HT has a corresponding median precision of 79.5% (IQR 28.6% to 96.8%). These results are comparable to the original implementation of MF-HT on noisy ECG by Chanwimalueang et al. [13], in which the algorithm achieved a recall of 83.1% and a precision of 86.8%.
The results of the Deep-MF and MF-HT are compared with the standard MF, which had a median recall of 62.3% (IQR 27.3% to 85.9%) and a median precision of 67.1% (IQR 47.9% to 84.5%). The full results for the comparison of recall and precision between the Deep-MF, MF-HT and standard MF are shown in boxplots in Fig. 4(b), with the Deep-MF plotted in purple, the MF-HT in red, and the standard MF in blue. Whilst the overall distribution of recall was comparable between the Deep-MF and MF-HT, with both models having a similar interquartile range, the Deep-MF performed far better in terms of precision. This is likely due to the advantage of the Deep-MF having multiple refinement layers, in which false peaks could be discarded. It is important to note that the MF-HT relies on manual input to select a matched filter template from the input signal, which explains the improvements in recall over the standard MF, in which a fixed template was used across all subjects. Moreover, the MF-HT also has further conditions on determining which peaks are true R-peaks, based on a balance between the correlation between the template and the input and a deviation from the mean RR interval. These conditions explain why the median precision of the MF-HT was also higher than that of the standard MF.
For both the Deep-MF and standard MF, recall and precision were evaluated across the full range of peak sensitivity threshold values to produce precision-recall curves taken from the median results across all subjects, as shown in Fig. 5. The Deep-MF achieves an area under the curve value of 0.97, compared to the standard MF which achieves an AUC of 0.64. Observe that precision values never drop below 0.1, owing to the two fixed conditions of this findpeaks implementation, namely the maximum peak width and the minimum peak distance.
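As an illustration of how the precision-recall AUC could be computed from such a threshold sweep, the following sketch integrates median (recall, precision) points with the trapezoidal rule; score_fn is a placeholder for the per-threshold evaluation described above and is not part of the original pipeline.

```python
import numpy as np

def pr_auc(thresholds, score_fn):
    """Trace a precision-recall curve by sweeping the minimum peak-height
    threshold and integrate it with the trapezoidal rule. `score_fn(t)` should
    return (recall, precision) at threshold t, e.g. the median over subjects."""
    points = sorted(score_fn(t) for t in thresholds)  # sorted by recall
    recalls = [r for r, _ in points]
    precisions = [p for _, p in points]
    return np.trapz(precisions, recalls)
```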
It is important to note that the Deep-MF model is higher in complexity than the MF-HT and the standard MF, which provides a more significant barrier to practical implementation. However, on the landscape of deep learning implementations, the Deep-MF is a relatively small model, with an implementation consisting of only 6 total layers, 5 of which are convolutional layers. Moreover, the total number of convolutional kernels is only 162, making the Deep-MF very computationally cheap to implement. As is the case with the standard MF and MF-HT, the Deep-MF can therefore operate in quasi real-time, with a rolling window of the previous 2 seconds of input data. Furthermore, the increase in model complexity of the Deep-MF is justified by vast improvements in performance, with a median increase in recall of 11.6% and a median increase in precision of 11.8% when compared to the MF-HT.
## IV Interpretability
A major barrier to the widespread adoption of deep learning techniques in digital health is the notion that the models themselves are a black box that offer no interpretability to explain the predictions they make. In this paper, we have shown that this particular model for R-peak detection is fully interpretable as a multi-layer matched filter, a tool for detecting an overlap in patterns between an input signal and a template
of interest. A key aspect of this argument is the partial initialisation of the input layer of the Deep-MF to the pattern which it is trying to detect, namely the electrocardiogram. For rigour, it is important to also examine the same kernels after the training process, since, if the network were to completely discard the ECG template, it could be assumed that the model did not find the information useful for minimisation of error and thus was not searching for ECG "matches" in the input signal.
In terms of model accuracy, the effects of partial initialisation of the input layer of the Deep-MF with the same ECG template as used in the standard matched filter were small. It is highlighted in Fig. 6(a) that initialisation with an ECG template provided a minor improvement in the mean squared error of the decoder output, and in Fig. 6(b) it is shown that this corresponded to a slight increase in precision-recall AUC from 0.967 to 0.973. Note that the precision-recall curves plotted in Fig. 6(b) are zoomed in along the recall axis to accentuate the difference between the random initialisation and the template initialisation. Whilst these improvements in performance are marginal, they do suggest that initialising the Deep-MF with an ECG provided the model with useful information that it did not otherwise learn from the training process.
When examining the effect of training on the initialised weights, as shown with two examples in Fig. 7, it is clear that the network holds on to aspects of the ECG templates as it deems them useful in minimising error. Notably, in all kernels initialised with an ECG template, the R-peak information from the template was retained by the network and exploited with an increase in the weights at this location. Moreover, the network goes further than the original template ECG and exaggerates aspects such as the P wave, and the Q and S parts of the QRS complex during the training process, showing that these aspects of the ECG are useful in distinguishing true R-peaks from other peaks in the input signal. This can be seen in Fig. 7, with kernel 1 seeing an amplification in weights around the P wave and Q, and kernel 5 seeing an amplification of the R and S components of the ECG.
## V Conclusion
We have introduced a novel Deep Matched Filter framework for the detection of R-peaks in wearable-ECG. The proposed Deep Matched Filter (Deep-MF) has been evaluated on the
Fig. 6: The effects on model performance of the random initialisation of kernel weights (black), against partial initialisation of the input “Matched Filter” layer with an electrocardiogram template (purple). (a) The \(Log_{10}\) of the test error convergence of the encoder-decoder module during the training process, with a dotted line representing the mean test error of the last half of the final training epoch. (b) Precision-Recall curves for R-peak detection of the Deep-MF, shown to have an area under the curve (AUC) value of 0.967 with random initialisation and an AUC value of 0.973 after initialisation with an ECG template.
Fig. 7: The effects of training on kernels initialised with an electrocardiogram template. The dashed trace in black represents the initialised ECG kernel weights before the training process, and the purple trace represents the kernel weights after the training process. Labelling highlights that in different kernels, different aspects of the ECG are amplified during the training process. In the \(1^{\text{st}}\) kernel (top), there is an enhancement of the P wave and of the Q portion of the QRS complex. In the \(5^{\text{th}}\) kernel there is an amplification of the S portion of the QRS complex. In all kernels there is an amplification of the R-peak that was provided by the ECG template.
Ear-ECG of 36 subjects, and has shown a marked improvement over existing matched filter based algorithms, both in terms of recall and precision. In parallel with demonstrating the proficiency of the Deep-MF at R-peak detection in scenarios with poor signal to noise ratio, it has been illustrated that this encoder-based model behaves precisely as a learned matched filter. It serves to detect ECG segments in the input, followed by several refinement layers which distinguish the true ECG matches from the false matches. This has been reinforced through partial initialisation of the model with an ECG template, whereby through the training process the model amplifies physically relevant aspects of the ECG. The proposed Deep Matched Filter has been shown to greatly improve the practical utility of the Ear-ECG signal, whilst being transparent in its operation. It is our hope that physically grounded models such as the Deep Matched filter may help to accelerate wide scale adoption of interpretable artificial intelligence in healthcare.
## Acknowledgment
This work was supported by the USSOCOM MARVELS grant and the Dementia Research Institute at Imperial College London.
|
2306.04899 | Multi-level Protein Representation Learning for Blind Mutational Effect
Prediction | Directed evolution plays an indispensable role in protein engineering that
revises existing protein sequences to attain new or enhanced functions.
Accurately predicting the effects of protein variants necessitates an in-depth
understanding of protein structure and function. Although large self-supervised
language models have demonstrated remarkable performance in zero-shot inference
using only protein sequences, these models inherently do not interpret the
spatial characteristics of protein structures, which are crucial for
comprehending protein folding stability and internal molecular interactions.
This paper introduces a novel pre-training framework that cascades sequential
and geometric analyzers for protein primary and tertiary structures. It guides
mutational directions toward desired traits by simulating natural selection on
wild-type proteins and evaluates the effects of variants based on their fitness
to perform the function. We assess the proposed approach using a public
database and two new databases for a variety of variant effect prediction
tasks, which encompass a diverse set of proteins and assays from different
taxa. The prediction results achieve state-of-the-art performance over other
zero-shot learning methods for both single-site mutations and deep mutations. | Yang Tan, Bingxin Zhou, Yuanhong Jiang, Yu Guang Wang, Liang Hong | 2023-06-08T03:00:50Z | http://arxiv.org/abs/2306.04899v1 | # Multi-level Protein Representation Learning for Blind Mutational Effect Prediction
###### Abstract
Directed evolution plays an indispensable role in protein engineering that revises existing protein sequences to attain new or enhanced functions. Accurately predicting the effects of protein variants necessitates an in-depth understanding of protein structure and function. Although large self-supervised language models have demonstrated remarkable performance in zero-shot inference using only protein sequences, these models inherently do not interpret the spatial characteristics of protein structures, which are crucial for comprehending protein folding stability and internal molecular interactions. This paper introduces a novel pre-training framework that cascades sequential and geometric analyzers for protein primary and tertiary structures. It guides mutational directions toward desired traits by simulating natural selection on wild-type proteins and evaluates the effects of variants based on their fitness to perform the function. We assess the proposed approach using a public database and two new databases for a variety of variant effect prediction tasks, which encompass a diverse set of proteins and assays from different taxa. The prediction results achieve state-of-the-art performance over other zero-shot learning methods for both single-site mutations and deep mutations.
## 1 Introduction
The analysis of protein sequence-function relationships provides valuable insights for enzyme engineering to develop new or enhanced functions. Predicting the effects of point mutations in proteins allows researchers to dissect how changes in the amino acid (AA) sequence can impact the protein's structure, stability, function, and interactions with other molecules [44]. While direct projection to protein functionality may encompass numerous uncharacterized molecular interactions, the advent of high-throughput experimental techniques has enabled the measurement of sequence-function mappings, thereby expanding the range of observable biochemical functions [6; 27; 33; 40].
Currently, hundreds of well-studied proteins have documented tens of thousands of mutational effects on their functions, such as ParD-ParE complexes for binding preferences [1], ubiquitin for thermodynamic stability [41], and green fluorescent proteins for fluorescence [42], to name but a few. Systematic exploration of sequence variants offers copious evidence for characterizing the evolutionary space of mutants. However, this approach heavily depends on the domain-specific knowledge of individual proteins and their functionalities, so generalizing these specifically-designed mapping rules to a vast array of protein families poses a significant challenge.
Deep learning methods have been applied to expedite protein research, particularly in bridging protein sequences to functions. Analogous to how language models analyze the semantics and syntax of human language, these methods interpret protein sequences as raw text and utilize self-attention mechanisms [3, 29, 39] and/or autoregressive inference methods [23, 31] to reveal hidden long-range sequential dependencies. In addition, multiple sequence alignments (MSAs) have been widely applied in predicting protein sequence [5, 12, 26, 35] or structure [2, 16] to augment the contextual information gleaned from sets of relevant sequences. While language models reveal sophisticated projections from protein sequence to functionality, inferring hundreds to thousands of uncharacterized molecular interactions demands considerable input samples and computationally intensive propagation modules. Alternatively, the structure of a protein, governed by its AA sequence, not only guides molecular interactions but also determines its functionality. Studies have derived the latent representation of AAs' local environment based on protein topology or geometry [15, 49, 55]. They assume that an accurate description of a protein's microenvironment is essential for determining its properties or functionality. Given that core mutations often induce functional defects through subtle disruptions to structure or dynamics [41], incorporating protein topology or geometry into the learning process can offer valuable insights into stabilizing protein functioning.
Both sequence and structure-oriented deep learning approaches have contributed to significant scientific discoveries [22, 23, 47]. However, they exhibit limitations when implemented individually. Structure encoders fail to capture long-range sequential connections for an AA beyond its contact region, and they overlook any correlations that do not conform to the'structure-function determination' heuristic. On the other hand, sequential encoders struggle to capture the spatial molecular interactions of long-range elements and require an excessive number of protein instances to implicitly unravel the deep connection between protein topology and functionality. Although sequence-based learning might or might not find a better solution than human beings, language models demand significantly more resources for data processing and model training. In natural language processing, large models claim to consume more than \(10^{12}\) documents to achieve quantitative changes in inference [36, 46].
We believe incorporating the intermediate state of protein structures can facilitate the discovery of an efficient and effective trajectory for mapping protein sequences to functionalities. To this end, we introduce P\({}^{13}\)LG, a framework designed to assimilate the semantics and topology of **P**roteins from their primary (**1**) and tertiary (**3**) structure with **L**anguage and **G**raph models. The developed model extends the generalization and robustness of self-supervised protein language models while maintaining low computational costs, thereby facilitating self-supervised training and task-specific customization. A funnel-shaped learning pipeline, as depicted in Figure 1, is designed due to the limited availability of crystallized protein structures compared to observed protein sequences.
Figure 1: An illustration of P\({}^{13}\)LG that extracts the semantics and topology of a protein by learning its primary and tertiary structures. The hidden representation can be decoded for variants effect prediction that recognizes the impact of mutating a few sites of a protein on its functionality.
Initially, the linguistic embedding establishes the semantic and grammatical rules in AA chains by inspecting over \(60\) million protein sequences [21]. Then, the topological embedding encodes the microenvironment of AAs, supplementing sequential relationships with spatial connections. Since a geometry can be placed or observed from different angles and positions in space, we represent proteins' topology by graphs and enhance the model's robustness and efficiency with a rotation and translation equivariant graph representation learning scheme. Consequently, the trained model is capable of interpreting the characterization of proteins in dimensions closely related to their functionality.
The developed model for directed evolution fulfills practical requirements in enzyme engineering from three perspectives. (i) The model gains interpretability by simulating natural selection. During training, random perturbations are assigned to AA types to encourage the model to recover advantageous protein sequences found in nature. (ii) The trained model provides robust and meaningful approximations to the joint distribution of the complete AA chain, enhancing the _epistatic effect_[17; 42] in deep mutations by considering the nonlinear combinatorial effects of AA sites. (iii) The model deploys self-supervised learning during training to eliminate the need for further supervision on downstream tasks. This zero-shot scenario is desirable due to the scarcity of experimental results as well as the 'cold-start' situation common in many wet lab experiments.
The pre-trained P\({}^{13}\)LG demonstrates its feasibility across a broad range of variant effect prediction benchmarks. These benchmarks include a general deep mutational effect prediction benchmark, **ProteinGym**[31], which comprises over \(80\) proteins of varying assays and taxa. In addition, we have prepared two niche single-site mutation benchmarks. They measure thermodynamic stability using \(\Delta\)Tm and \(\Delta\Delta\)G values and \(2,967\) mutants across \(90\) protein-condition combinations. These two databases supplement the existing publicly available benchmarks with assay-specific deep mutational scanning (DMS) records, which facilitate the establishment of well-defined evaluation criteria for future methods that are specifically designed and assessed based on protein stability.
## 2 Zero-shot Multi-level Protein Representation Learning
Labeled data are usually scarce in biomolecule research, which demands designing a general model for predicting variant effects on unknown proteins and protein functions. Given a three-dimensional protein backbone structure, this study utilizes a self-supervised learning model that recovers AA types from noisy local environment observations. It simulates nature's iterative selection of well-functioning mutated proteins from harmful random mutations.
### Multi-level Protein Representation
**Protein Primary Structure (Noised).** For a given observation with an AA type \(\tilde{\mathbf{v}}\), it is assumed that this observation is randomly perturbed. The model then learns a revised state \(\mathbf{v}\) that is less likely to be eliminated by natural selection due to unfavorable properties such as instability or inability to fold. Formally, we define the perturbed observation by a Bernoulli distribution as follows:
\[\mathbf{\pi}(\tilde{\mathbf{v}}\mid\mathbf{v})=p\mathbf{\Theta}(\pi_{1},\pi_{2},\ldots,\pi_{20 })+(1-p)\delta(\tilde{\mathbf{v}}-\mathbf{v}), \tag{1}\]
where an AA in a protein chain has a chance of \(p\) to mutate to one of \(20\) AAs following the _replacement distribution_\(\mathbf{\Theta}(\cdot)\) and \((1-p)\) of remaining unchanged. We consider \(p\) as a tunable parameter and define \(\mathbf{\Theta}(\cdot)\) based on the frequency of AA types observed in wild-type proteins in the training dataset.
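A minimal sketch of sampling from Eq. (1) is given below; the amino-acid alphabet and the background frequencies are placeholders for the replacement distribution \(\mathbf{\Theta}(\cdot)\), which in the paper is estimated from AA frequencies in the wild-type training proteins.

```python
import numpy as np

def perturb_sequence(seq, p, aa_freqs, alphabet):
    """Sample a perturbed sequence following Eq. (1): each residue mutates with
    probability p to an AA drawn from the replacement distribution (here the
    background AA frequencies), and stays unchanged with probability 1 - p."""
    noisy = list(seq)
    for i in range(len(seq)):
        if np.random.rand() < p:
            noisy[i] = np.random.choice(alphabet, p=aa_freqs)
    return "".join(noisy)

# Illustrative usage (alphabet and frequencies are assumptions):
# alphabet = list("ACDEFGHIKLMNPQRSTVWY")
# noisy = perturb_sequence("MKTAYIAK", p=0.1, aa_freqs=[1 / 20] * 20, alphabet=alphabet)
```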
**Protein Tertiary Structure.** The geometry of a protein is described by \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{W}_{V},\mathbf{W}_{E},\mathbf{X}_{V})\), a residue graph constructed by \(k\)-nearest neighbor (\(k\)NN) search. Each node \(v_{i}\in\mathcal{V}\) represents an AA in the protein connected to up to \(k\) other nodes in the graph that are the closest in Euclidean distance within a contact region of \(30\) Å. Node attributes \(\mathbf{W}_{V}\) are hidden semantic embeddings of AA types, and edge attributes \(\mathbf{W}_{E}\in\mathbb{R}^{33}\) feature relationships of connected nodes based on inter-atomic distances, local N-C positions, and sequential position encoding. Additionally, \(\mathbf{X}_{V}\) records the 3D coordinates of AAs in Euclidean space, which plays a crucial role in the subsequent topological embedding stage to preserve roto-translation equivariance.
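The residue-graph construction can be sketched as follows. The use of a single coordinate per residue, the default \(k=20\) (the value used in the experiments), and the dense pairwise-distance computation are illustrative assumptions, and the 33-dimensional edge featurisation is omitted.

```python
import torch

def knn_residue_graph(coords, k=20, cutoff=30.0):
    """Build the kNN residue graph: connect each residue to up to k nearest
    residues whose distance lies within the 30 Å contact region.
    `coords` is an (n, 3) tensor of residue coordinates (n > k assumed);
    returns an edge index of shape (2, n_edges) in PyTorch-Geometric style."""
    dists = torch.cdist(coords, coords)                    # (n, n) pairwise distances
    dists.fill_diagonal_(float("inf"))                     # exclude self-loops
    knn_d, knn_idx = dists.topk(k, dim=1, largest=False)   # k closest neighbours per node
    src, dst = [], []
    for i in range(coords.size(0)):
        for d, j in zip(knn_d[i], knn_idx[i]):
            if d < cutoff:                                 # keep edges inside the contact region
                src.append(i)
                dst.append(int(j))
    return torch.tensor([src, dst], dtype=torch.long)
```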
### Semantic Encoding of Protein Sequence
Although it is generally believed by the community that a protein's sequence determines its biological function via the folded structure, adhering strictly to this singular pathway risks overlooking other
unobserved yet potentially influential inter-atomic communications impacting protein fitness. In line with this reasoning, our proposed model, P\({}^{13}\)LG, begins by extracting pairwise relationships for residues through an analysis of proteins' primary structure from \(\tilde{\mathbf{V}}\) and embeds them into hidden representations \(\mathbf{W}_{V}\) for residues. At each update, the information and representations of the noisy AA sequence are encoded from the noisy input via an _Evolutionary Scale Modeling_ ESM-2 [21]2. This approach employs a BERT-style masked language modeling (MLM) objective that predicts the identity of randomly selected AAs in a protein sequence by observing their context within the remainder of the sequence. Note that during training, the sequence embedding operates in every epoch as AA types are subject to independent random perturbations. For alternative encoding strategies, please refer to the discussion in Appendix B.
Footnote 2: Official implementation released at [https://github.com/facebookresearch/esm](https://github.com/facebookresearch/esm).
### Topological Encoding of Protein Structure
Proteins are structured in 3D space, which requires the geometric encoder to possess roto-translation equivariance with respect to node positions as well as permutation invariance with respect to node attributes. This design is vital to avoid the implementation of costly data augmentation strategies. We employ _Equivariant Graph Neural Networks_ (EGNN) [43] to acquire the hidden representations of node properties \(\mathbf{W}_{V}^{l+1}=\left\{\mathbf{w}_{v_{1}}^{l+1},\dots,\mathbf{w}_{v_{n}}^{l+1}\right\}\) and node coordinates \(\mathbf{X}_{\text{pos}}^{l+1}=\left\{\mathbf{x}_{v_{1}}^{l+1},\dots,\mathbf{x}_{v_{n}}^{l+1}\right\}\) at the \(l+1\)th layer by
\[\mathbf{m}_{ij} =\phi_{e}\left(\mathbf{w}_{v_{i}}^{l},\mathbf{w}_{v_{j}}^{l},\left\|\mathbf{x} _{v_{i}}^{l}-\mathbf{x}_{v_{j}}^{l}\right\|^{2},\mathbf{w}_{e_{ij}}\right), \tag{2}\] \[\mathbf{x}_{v_{i}}^{l+1} =\mathbf{x}_{v_{i}}^{l}+\frac{1}{n}\sum_{j\neq i}\left(\mathbf{x}_{v_{i}}^ {l}-\mathbf{x}_{v_{j}}^{l}\right)\phi_{x}\left(\mathbf{m}_{ij}\right),\] \[\mathbf{w}_{v_{i}}^{l+1} =\phi_{v}\big{(}\mathbf{w}_{i}^{l},\sum_{j\neq i}\mathbf{m}_{ij}\big{)}.\]
In these equations, \(\mathbf{w}_{e_{ij}}\) represents the input edge attribute on \(\mathcal{V}_{ij}\), which is not updated by the network. The propagation rules \(\phi_{e},\phi_{x}\) and \(\phi_{v}\) are defined by differentiable functions, _e.g._, multi-layer perceptrons (MLPs). The final hidden representation on nodes \(\mathbf{W}_{V}^{L}\) embeds the microenvironment and local topology of AAs, and it will be carried on by readout layers for label predictions.
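A simplified PyTorch sketch of one such layer, following Eq. (2) but not reproducing the official EGNN implementation, is given below; the hidden sizes, activations, and the normalisation by the number of nodes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EGNNLayer(nn.Module):
    """Sketch of one EGNN layer in the spirit of Eq. (2): messages are built from
    endpoint features, squared distance, and edge attributes; coordinates are
    updated along relative position vectors (roto-translation equivariant);
    node features are updated from aggregated messages."""
    def __init__(self, dim, edge_dim):
        super().__init__()
        self.phi_e = nn.Sequential(nn.Linear(2 * dim + 1 + edge_dim, dim), nn.SiLU())
        self.phi_x = nn.Linear(dim, 1)
        self.phi_v = nn.Sequential(nn.Linear(2 * dim, dim), nn.SiLU())

    def forward(self, h, x, edge_index, edge_attr):
        src, dst = edge_index                                # edges j -> i
        d2 = ((x[dst] - x[src]) ** 2).sum(-1, keepdim=True)  # squared distances
        m = self.phi_e(torch.cat([h[dst], h[src], d2, edge_attr], dim=-1))
        # equivariant coordinate update along relative positions
        coord_upd = torch.zeros_like(x).index_add_(0, dst, (x[dst] - x[src]) * self.phi_x(m))
        x = x + coord_upd / h.size(0)
        # node feature update from aggregated incoming messages
        agg = torch.zeros_like(h).index_add_(0, dst, m)
        h = self.phi_v(torch.cat([h, agg], dim=-1))
        return h, x
```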
## 3 Blind Variant Effect Prediction
Our method is specifically designed for protein engineering that is trained with a self-supervised learning scheme. The model's capability extends to blind variant effect prediction on an unknown protein, and it can generate the joint distribution for all AA sites as one of the 20 possible types, conditioned on their spatial and sequential neighbors. This process accounts for the epistatic effect and concurrently returns all AA sites in a sequence. Below we detail the workflow for training the zero-shot model and scoring the mutational effect of a specific mutant.
### Model Pipeline
**Training.** The fundamental model architecture cascades a frozen sequence encoding module and a trainable tertiary structure encoder. Initially, a protein language model encodes pairwise hidden relationships of AAs by analyzing the input protein sequence and produces a vector representation \(\mathbf{w}_{v_{i}}\in\mathbf{W}_{V}\) for an arbitrary AA, where \(\mathbf{W}_{V}=\text{LM}_{\text{frozen}}(\tilde{\mathbf{V}})\) with \(\tilde{\mathbf{V}}\) being the perturbed initial AA-type encoding. The language model \(\text{LM}_{\text{frozen}}(\cdot)\), ESM-2 [21] for instance, has been pre-trained on a massive protein sequence database (_e.g._, **UniRef50** [48]) to understand the semantic and grammatical rules of wild-type proteins. It conceals high-dimensional AA-level long-short-range interactions that may or may not have been investigated and explained by scientists. Next, we represent proteins by \(k\)NN graphs with model-encoded node attributes, handcrafted edge attributes, and 3D positions of the corresponding AAs. This representation is embedded using a stack of \(L\) EGNN [43] layers to yield \(\mathbf{W}_{V}^{L}=\text{EGNN}(\mathcal{G})\). This process extracts the geometric and topological embedding for protein graphs with AAs represented by \(\mathbf{w}_{v_{i}}\). During the pre-training phase for protein sequence recovery, the output layer \(\phi(\cdot)\) provides the probability of AA types on each residue, _i.e._, \(\mathbf{Y}=\phi(\mathbf{W}_{V}^{L})\in\mathbb{R}^{n\times 20}\)
for a protein comprising \(n\) AAs. The model's learnable parameters are refined by minimizing the cross-entropy of the recovered AAs with respect to the ground-truth AAs in wild-type proteins.
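Putting the pieces together, one pre-training step could be sketched as below. Here, embed_sequence is a placeholder wrapper around the frozen language model (not the actual ESM API), perturb_sequence refers to the sketch given earlier, and the perturbation rate and graph attribute names are assumptions rather than settings taken from the paper.

```python
import torch
import torch.nn as nn

def training_step(graph, embed_sequence, egnn_layers, readout, optimizer,
                  aa_freqs, alphabet, p=0.1):
    """One self-supervised pre-training step: perturb the sequence, embed it
    with the frozen language model, propagate through EGNN layers, and recover
    the wild-type AA types with a cross-entropy loss."""
    noisy_seq = perturb_sequence(graph.seq, p, aa_freqs, alphabet)
    with torch.no_grad():                         # the language model stays frozen
        h = embed_sequence(noisy_seq)             # (n_residues, d) per-residue embeddings
    x = graph.coords                              # (n_residues, 3) coordinates
    for layer in egnn_layers:
        h, x = layer(h, x, graph.edge_index, graph.edge_attr)
    logits = readout(h)                           # (n_residues, 20) AA-type logits
    loss = nn.functional.cross_entropy(logits, graph.wildtype_aa)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```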
**Inference.** For a given mutant, its fitness score is derived from the joint distribution of the altered AA types on the associated nodes, which provides a preliminary assessment based on natural observations. We consider the AA type in the wild-type protein as a reference state and compare it with the predicted probability of AAs at the mutated site. Formally, for a mutant with mutated sites \(\mathcal{T}\) (\(|\mathcal{T}|\geq 1\)), we define its fitness score by the corresponding _log-odds-ratio_, _i.e._, \(\sum_{t\in\mathcal{T}}\log p(\mathbf{y}_{t})-\log p(\mathbf{v}_{t})\), where \(\mathbf{y}_{t}\) and \(\mathbf{v}_{t}\) denote the mutated and the wild-type AA at site \(t\), respectively.
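The log-odds scoring of a mutant can then be written compactly as in the sketch below, where log_probs holds the model's per-residue log-probabilities over the 20 AA types and the tuple layout of the mutation list is an assumption for illustration.

```python
def mutant_fitness(log_probs, mutations):
    """Score a (possibly multi-site) mutant by its log-odds ratio:
    sum over mutated sites of log p(mutant AA) - log p(wild-type AA).
    `log_probs`: (n_residues, 20) array; `mutations`: list of
    (site, wildtype_index, mutant_index) tuples."""
    return sum(log_probs[t, y] - log_probs[t, v] for t, v, y in mutations)
```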
### Evaluation Metrics
It is critical to evaluate the developed model's effectiveness using quantitative, computable measurements before proceeding to wet lab validations. Within the scope of mutational effect prediction, each raw protein in the database maintains dozens to tens of thousands of mutants with varying depths of mutation sites. Considering that protein functions are sensitive to the external environment and experimental methods, the absolute values measured by individual labs are typically not directly comparable. Consequently, we evaluate the performance of pre-trained models on a diverse set of proteins and protein functions using two quantitative measurements for ordinal and categorical data.
**Spearman's \(\rho\) Correlation.** Spearman's correlation is commonly applied in mutational effect prediction tasks to measure the strength and direction of the monotonic relationship between two ranked sequences, _i.e._, experimentally evaluated mutants and model-inferred mutants. This non-parametric rank measure is robust to outliers and asymmetry in mutational scores, does not assume any specific distribution of mutational scores, and captures non-linear correlations between the two sequences. The scale of \(\rho\) ranges from \(-1\) to \(1\), which indicates whether the predicted sequence is negatively or positively related to the ground truth. Ideally, a result close to \(1\) is preferred.
**True Positive Rate.** The true positive rate (TPR), also known as recall, is a key performance measure for the proportion of actual positives that are correctly identified. In the context of directed evolution tasks, this refers to the proportion of top beneficial mutations that are accurately predicted as most beneficial. A high TPR for the predicted results indicates that the trained model is likely to provide reliable mutational recommendations for wet labs. The following section will test TPR for baseline models on each of the proteins at \(5\%\), \(25\%\), and \(50\%\). For instance, TPR at \(5\%\) defines the top \(5\%\) mutants (in terms of the highest ground-truth score) as 'positive samples' and measures the proportion of these samples that are also ranked in the top \(5\%\) by the model.
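Both metrics can be computed as in the following sketch; the top-fraction definition mirrors the TPR at 5%/25%/50% described above, and the use of SciPy for the rank correlation is simply one convenient implementation choice.

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate(pred_scores, true_scores, top_fraction=0.05):
    """Spearman's rho between predicted and measured fitness, plus the TPR at a
    given fraction: the share of the top ground-truth mutants that also appear
    among the model's top-ranked predictions."""
    rho, _ = spearmanr(pred_scores, true_scores)
    n_top = max(1, int(len(true_scores) * top_fraction))
    true_top = set(np.argsort(true_scores)[-n_top:])
    pred_top = set(np.argsort(pred_scores)[-n_top:])
    tpr = len(true_top & pred_top) / n_top
    return rho, tpr
```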
## 4 Numerical Experiments
We validate the efficacy of P\({}^{13}\)LG on zero-shot mutational effect prediction tasks on \(186\) diverse proteins. The performance is compared with other SOTA models of varying scales (_i.e._, number of parameters). The implementations ([https://anonymous.4open.science/r/plg-1B02](https://anonymous.4open.science/r/plg-1B02)) are programmed with PyTorch-Geometric (ver 2.2.0) and PyTorch (ver 1.12.1) and executed on an NVIDIA(r) Tesla A100 GPU with \(6,912\) CUDA cores and \(80\)GB HBM2 installed on an HPC cluster.
### Experimental Protocol
**Training Setup.** We train P\({}^{13}\)LG on a non-redundant subset of **CATH v4.3.0** [32] domains, which contains \(30,948\) experimental protein structures with less than 40% sequence identity. We further remove \(\sim 6\%\) of proteins that exceed \(2,000\) AAs in length. Each protein domain is transformed into a \(k\)NN graph following Section 2, with node features extracted by a frozen ESM2-433 [21] prefix model. Protein topology is inferred by a \(6\)-layer EGNN [43] with the hidden dimension tuned from \(\{512,768,1280\}\). Adam [18] is used for backpropagation with the learning rate set to \(0.0001\). To avoid training instability or CUDA out-of-memory errors, we limit the maximum input to \(8,192\) AA tokens per batch, constituting approximately \(32\) residue graphs.
**Baseline Methods.** We undertake an extensive comparison with self-supervised SOTA baseline models on fitness prediction of mutational effects. These methods utilize protein sequences
and/or structures for learning. Sequence models employ position embedding strategies such as autoregression (Tranception [31], RITA [10], and ProGen2 [29]), masked language modeling (ESM-1b [39], ESM-1v [26], and ESM2 [21]), and a combination of both (ProtTrans [5]). As our model acquires structural encoding, we also compare with ESM-IF1 [12], which combines masked language modeling objectives with GVP [14]. Since **ProteinGym** exhibits diverse protein types and assays, we further include additional baselines that utilize MSA for model training (DeepSequence [38], WaveNet [45], MSA-Transformer [35], SiteIndep, and EVmutation [11]).
**Benchmark Datasets.** We conduct a comprehensive comparison of diverse mutation effect predictors in different regimes. Following [31, 38], we prioritize experimentally-measured properties that possess a monotonic relationship with protein fitness, such as protein stability and relative activity. For protein stability, we generate \(90\) experimentally-measured sets of protein-condition combination assays from **ProThermDB**3, containing \(2,967\) single-site mutants in environments with different pH levels, where \(60\) of them are measured by \(\Delta\)Tm (the change of melting temperature) and the remaining \(30\) assays by \(\Delta\Delta\)G (the change in the change in Gibbs free energy). The two datasets are named according to their scoring metrics: **DTm** and **DDG**, respectively. See Appendix A for additional descriptions. We also examine the fitness prediction of the proteins in **ProteinGym**, which constitutes \(86\)4 DMS assays of different taxa (_e.g._, prokaryotes, humans, other eukaryotes, viruses).
Footnote 3: Retrieved from [https://web.iitm.ac.in/bioinfo2/prothermdb/index.html](https://web.iitm.ac.in/bioinfo2/prothermdb/index.html).
Footnote 4: We exclude AOA14QD2TI_ZIKV_Sourisseau_growth_2019, the longest protein of over \(3,000\) AAs in **ProteinGym** because it fails to be folded by AlphaFold2.
### Variant Effect Prediction
Our model has demonstrated exceptional predictive performance compared to other SOTA models in forecasting the stability of protein mutation sequences in both **DTm** and **DDG**. P\({}^{13}\)LG learns residue graphs with \(k=20\) and deploys \(1,280\) hidden neurons in each EGNN layer. Table 1 evaluates \(100\) protein assays using TPR at \(5\%\), \(25\%\), and \(50\%\), wherein P\({}^{13}\)LG consistently outperforms competitors of varying model sizes. To further examine how our model efficiently achieves top performance relative to other large models, Figure 2 visualizes Spearman's correlation from predictions of pre-trained models at different model scales. Our model occupies the most desirable upper-left corner
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**version**} & \multicolumn{3}{c}{**TPR \(\uparrow\) (DTm)**} & \multicolumn{3}{c}{**TPR \(\uparrow\) (DDG)**} \\ \cline{3-8} & & 5\% & 25\% & 50\% & 5\% & 25\% & 50\% \\ \hline \multirow{8}{*}{ProGen2} & \multirow{3}{*}{\begin{tabular}{c} one \\ medium \\ large \\ \end{tabular} } & 0.033 & 0.286 & 0.537 & 0.000 & 0.339 & 0.515 \\ & & 0.117 & 0.367 & 0.582 & 0.072 & 0.443 & 0.615 \\ & & 0.212 & 0.362 & 0.585 & **0.231** & 0.408 & 0.621 \\ & & large & 0.132 & 0.323 & 0.557 & 0.117 & 0.320 & 0.597 \\ & & 0.178 & 0.333 & 0.589 & 0.206 & 0.451 & **0.644** \\ & & 0.118 & 0.353 & 0.578 & 0.144 & 0.383 & 0.603 \\ \hline \multirow{2}{*}{Transception} & \multirow{2}{*}{\begin{tabular}{c} medium \\ large \\ \end{tabular} } & 0.188 & 0.359 & 0.564 & 0.083 & 0.367 & 0.527 \\ & & 0.149 & 0.371 & 0.586 & 0.072 & 0.395 & 0.540 \\ \hline \multirow{4}{*}{PortTrans} & \multirow{4}{*}{\begin{tabular}{c} bert \\ 5\_xl\_unifered \\ \end{tabular} } & 0.131 & 0.364 & 0.586 & 0.122 & 0.424 & 0.635 \\ & & bert,bf & 0.168 & 0.336 & 0.579 & 0.136 & 0.423 & 0.589 \\ & & 0.184 & 0.412 & 0.593 & 0.147 & 0.425 & 0.640 \\ & & 0.136 & 0.350 & 0.587 & 0.106 & 0.419 & 0.610 \\ \hline ESM-1v & - & 0.216 & 0.386 & 0.602 & **0.231** & 0.451 & 0.622 \\ \hline ESM-1b & - & 0.151 & 0.402 & 0.606 & 0.211 & 0.424 & 0.642 \\ \hline ESM-if1 & - & 0.188 & **0.418** & **0.656** & **0.258** & **0.469** & 0.641 \\ \hline \multirow{4}{*}{ESM-2} & \multirow{4}{*}{
\begin{tabular}{c} t30 \\ 33 \\ \end{tabular} } & 0.139 & 0.397 & 0.598 & 0.172 & **0.453** & **0.646** \\ & & 0.239 & 0.407 & 0.601 & 0.181 & 0.438 & 0.637 \\ \cline{1-1} & & 0.152 & 0.408 & **0.634** & 0.169 & 0.405 & 0.641 \\ \cline{1-1} & & 0.232 & **0.430** & 0.607 & 0.189 & 0.400 & 0.606 \\ \hline P\({}^{13}\)LG & k20\_h1280 & **0.304** & **0.419** & **0.642** & **0.267** & **0.454** & **0.676** \\ \hline \hline \end{tabular}
* The top three are highlighted by **First**, **Second**, **Third**.
\end{table}
Table 1: Variant Effect Prediction on **DTm** and **DDG**.
spot, where it reaches top-rank correlation with minimal computational cost, or equivalently, the smallest number of parameters to learn.
In addition to single-site predictions, we also test P\({}^{13}\)LG's performance on deep mutations using \(86\) protein assays in **ProteinGym** and compare its ranking with \(27\) baselines. For the Spearman's correlation scores reported in Table 2, we reproduce ESM series (including MSA-Transformer) and ProtTrans, and retrieve scores for the remaining methods from **ProteinGym**'s repository 5. Our method consistently predicts the most aligned ranks, regardless of mutational depth or the predicted taxon. Notably, we include 6 additional MSA-based models, which require fewer parameters but significantly longer inference times due to the need to query and process supplementary information in MSA for the target protein. Consequently, MSA-based methods achieve the second-best overall performance on **ProteinGym**, following closely behind P\({}^{13}\)LG.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline & **Model** & **Version** & \begin{tabular}{c} **\# Params** \\ (million) \\ \end{tabular} & \begin{tabular}{c} \(\rho\) **(by depth)** \(\uparrow\) \\ \end{tabular} & \multicolumn{4}{c}{\(\rho\) **(by taxon)**\(\uparrow\)} \\ \cline{3-11} & & & & Single & Double & All & Prokaryote & Human & Eukaryote & Virus \\ \hline \multirow{6}{*}{**ProteinGym**} & \multirow{4}{*}{\begin{tabular}{c} Stridberg \\ EVmutation \\ \end{tabular} } & \multirow{4}{*}{\begin{tabular}{c} Stridberg \\ EVmutation \\ \end{tabular} } & - & - & 0.378 & 0.322 & 0.378 & 0.343 & 0.375 & 0.401 & **0.406** \\ & & - & - & **0.423** & **0.401** & **0.423** & 0.499 & 0.396 & 0.429 & 0.381 \\ & & & - & 0.399 & 0.344 & 0.400 & 0.492 & 0.373 & 0.442 & 0.321 \\ & & & DeepSequence & - & - & **0.411** & 0.357 & **0.415** & 0.497 & 0.396 & **0.461** & 0.332 \\ \cline{3-11} & & & msa1 & 100 & 0.310 & 0.232 & 0.308 & 0.292 & 0.302 & 0.392 & 0.278 \\ & & & msa1b & 100 & 0.291 & 0.275 & 0.290 & 0.268 & 0.282 & 0.365 & 0.279 \\ \hline \multirow{6}{*}{**ProteinGym**} & \multirow{4}{*}{\begin{tabular}{c} RITA \\ large \\ \end{tabular} } & small & 85 & 0.324 & 0.211 & 0.329 & 0.311 & 0.314 & 0.330 & 0.372 \\ & & medium & 300 & 0.372 & 0.237 & 0.377 & 0.356 & 0.370 & 0.399 & 0.398 \\ & & large & 680 & 0.372 & 0.227 & 0.383 & 0.353 & 0.380 & 0.404 & 0.405 \\ & & & xlarge & 1,200 & 0.385 & 0.234 & 0.389 & 0.405 & 0.364 & 0.393 & **0.407** \\ \hline \multirow{6}{*}{**ProteinGym**} & \multirow{4}{*}{\begin{tabular}{c} ProGen2 \\ large \\ \end{tabular} } & small & 151 & 0.346 & 0.249 & 0.352 & 0.364 & 0.376 & 0.396 & 0.273 \\ & & medium & 764 & 0.394 & 0.274 & 0.395 & 0.434 & 0.393 & 0.411 & 0.346 \\ & & base & 764 & 0.389 & 0.323 & 0.394 & 0.426 & 0.396 & 0.427 & 0.335 \\ & & large & 2,700 & 0.396 & 0.333 & 0.396 & 0.431 & 0.396 & 0.436 & 0.336 \\ & & & xlarge & 6,400 & 0.404 & 0.358 & 0.404 & 0.480 & 0.349 & 0.452 & 0.383 \\ \hline \multirow{6}{*}{**ProteinGym**} & \multirow{4}{*}{\begin{tabular}{c} PortTrans \\ \$5\_xl\_uniref50 \\ \end{tabular} } & bert & 420 & 0.339 & 0.279 & 0.336 & 0.403 & 0.300 & 0.345 & 0.317 \\ & & bert,bfd & 420 & 0.311 & 0.336 & 0.308 & 0.471 & 0.328 & 0.338 & 0.087 \\ & & & t5\_xl\_uniref50 & 3,000 & 0.384 & 0.284 & 0.378 & 0.485 & 0.375 & 0.369 & 0.277 \\ & & t5\_xl\_bfd & 3,000 & 0.355 & 0.356 & 0.351 & 0.490 & 0.399 & 0.349 & 0.131 \\ \hline \multicolumn{2}{c}{Transcription} & large & 700 & 0.399 & **0.398** & 0.406 & 0.447 & 0.369 & 0.426 & **0.407** \\ \hline ESM-1v & - & 650 & 0.376 & 0.290 & 0.372 & 0.496 & 0.409 & 0.398 & 0.233 \\ \hline ESM-1b & - & 650 & 0.371 & 0.325 & 0.366 & **0.507** & 0.416 & 0.360 & 0.150 \\ \hline ESM-if1 & - & 142 & 0.359 & 0.279 & 0.368 & 0.445 & 0.358 & 0.339 & 0.322 \\ \hline \multirow{6}{*}{**Event-2**} & \multirow{4}{*}{
\begin{tabular}{c} IS30 \\ 333 \\ \end{tabular} } & 150 & 0.345 & 0.296 & 0.344 & 0.437 & **0.419** & 0.401 & 0.405 \\ & & & 0.33 & 0.650 & 0.392 & 0.317 & 0.389 & 0.515 & **0.433** & **0.454** & 0.155 \\ \cline{1-1} & & t36 & 3,000 & 0.384 & 0.261 & 0.383 & 0.495 & 0.419 & 0.429 & 0.195 \\ \cline{1-1} & & t48 & 15,000 & 0.394 & 0.313 & 0.391 & 0.457 & 0.402 & 0.442 & 0.251 \\ \hline \multicolumn{2}{c}{P\({}^{13}\)LG} & k20\_n512 & 148 & **0.424** & **0.395** & **0.426** & **0.516** & **0.425** & **0.480** & 0.297 \\ \hline \hline \end{tabular}
\(\dagger\) The top three are highlighted by **First**, **Second**, **Third**.
\end{table}
Table 2: Variant Effect Prediction on **ProteinGym**.
Figure 2: Number of parameters versus Spearman’s \(\rho\) correlation on **DTm** and **DDG**.
### Ablation Study
This section evaluates the prediction performance on **ProteinGym**, measured by Spearman's correlation, for different modular designs of P\({}^{13}\)LG. The results are visualized in Figure 3 with additional details supplemented in Appendix F. In this section, we aggregate the inference results for each primary criterion over diverse secondary arguments. For instance, in the top orange box of Figure 3(a), we report all ablation results that utilize \(6\) EGNN layers for graph convolution, regardless of the different scales of ESM-2 or the definitions of node attributes. For all modules investigated in this section, we separately discuss their influence on predicting mutational effects when modifying a single site or an arbitrary number of sites. These two cases are marked respectively as 'single' and 'all' on the y-axis.
**Inclusion of Roto-Translation Equivariance.** We assess the effect of incorporating rotation and translation equivariance during protein geometric and topological encoding. Three types of graph convolutions are compared, including GCN [19], GAT [51], and EGNN [43]. The first two are classic non-equivariant graph convolutional methods, while the last one, which we apply in the main algorithm, preserves roto-translation equivariance. We fix the number of EGNN layers to \(6\) and examine the performance of the other two methods with either \(4\) or \(6\) layers. We find that integrating equivariance when embedding protein geometry significantly improves prediction performance.
**Sequence Encoding.** We next investigate the benefits of defining data-driven node attributes for protein representation learning. We compare the performance of models trained on two sets of graph inputs: the first set defines its AA node attributes through trained ESM2 [21], while the second set uses one-hot encoded AA types for each node. A clear advantage of using hidden representations by prefix models over hardcoded attributes is evident from the results presented in Figure 3(b).
**Depth of EGNN.** Although graph neural networks can extract topological information from geometric inputs, it is vital to select an appropriate number of layers for the module to deliver the most expressive node representation without encountering the oversmoothing problem. We investigate a wide range of choices for the EGNN layers among \(\{6,12,18\}\). As reported in Figure 3(c), embedding graph topology with deeper networks does not lead to performance improvements. A moderate choice of \(6\) EGNN layers is sufficient for our learning task.
**Scale of ESM.** We also evaluate our models on different choices of language embedding dimensions to study the trade-off between the computational cost and input richness. Various scales of prefix models, including \(\{8,150,650,3000\}\) millions of parameters, have been applied to produce different sequential embeddings with \(\{320,640,1280,2560\}\) dimensions, respectively. Figure 3(d) reveals a clear preference for ESM-2-t33, which employs \(650\) million parameters to achieve optimal model performance with the best stability. Notably, a higher dimension and richer semantic expression do not always yield better performance. In fact, performance degradation is observed when using the t36 version of the prefix model with 3 billion parameters.
## 5 Related Work
**Protein Primary Structure Embedding.** Self-supervised protein language models play the predominant role in the training of large quantities of protein sequences for protein representation learning. These methodologies draw parallels between protein sequences and natural language, encoding amino
Figure 3: Ablation Study on **ProteinGym**, evaluated by Spearman’s correlation on single-site and deep mutations.
acid tokens using the Transformer model [50] to extract pairwise relationships among tokens. These methods typically pre-train on extensive protein sequence databases to autoregressively recover protein sequences [23, 31]. Alternatively, masked language modeling objectives develop attention patterns that correspond to the residue-residue contact map of the protein [21, 26, 34, 39, 52]. Other methods start from a multiple sequence alignment, summarizing the evolutionary patterns in target proteins [7, 35, 38]. Both aligned and non-aligned methods result in a strong capacity for discovering the hidden protein space, but this often comes at the expense of excessive training input or the use of substantial learning resources. This trade-off underlines the need for efficient and cost-effective approaches in self-supervised protein modeling.
**Protein Tertiary Structure Embedding.** Protein structures directly dictate protein functions and are essential to _de novo_ protein design, which is a critical challenge in bioengineering. The remarkable success of accurate protein folding by AlphaFold2 [16] and the subsequent enrichment of the structure-aware protein repository have motivated a series of research initiatives focused on learning protein geometry. Recent efforts have been made to encode geometric information of proteins [9, 14, 54] for topology-sensitive tasks such as molecule binding [13, 20, 28], protein interface analysis [24, 37], and protein properties prediction [53].
**Variant Effect Prediction.** Variant effect predictions quantify the fitness of a mutant protein in comparison to its wild-type counterpart. For well-studied proteins, it is feasible to fit a supervised model on the hidden representation of amino acid local environments to infer fitness scores [8, 22, 25, 55]. However, in many cases, labeled data is either scarce or inaccessible. To overcome this, zero-shot methods have been developed to infer the fitness of a mutation from the evolutionary landscape of the original protein using sequence alignments [11, 38, 45]. Alternatively, hybrid models [31, 35] utilize retrieval or attention mechanisms to extract patterns from Multiple Sequence Alignments (MSAs) in conjunction with protein language or structure models.
## 6 Conclusion and Discussion
The development of dependable computational methodologies for protein engineering is a crucial facet of _in silico_ protein design. Accurately assessing the fitness of protein mutants not only supports cost-effective experimental validations but also guides the modification of proteins to enhance existing or introduce new functions. Most recent deep learning solutions employ a common strategy that involves establishing a hidden protein representation and masking potential mutation sites for amino acid generation. Previous research has primarily focused on extracting protein representations from either their sequential or structural modalities, with many treating the prediction of mutational effects merely as a secondary task following inverse folding or _de novo_ protein design. These approaches often overlook the importance of comprehending various levels of protein structures that are critical for determining protein function. Furthermore, they seldom implement model designs tailored specifically for mutation tasks. In this work, we introduce P\({}^{13}\)LG, a denoising framework that effectively cascades protein primary and tertiary structure embedding for the specific task of predicting mutational effects. This framework first employs a prefix protein language model to decode sequence representation and identify residue-wise intercommunications. This is subsequently enhanced by a roto-translation equivariant graph neural network, which encodes geometric representations for amino acid microenvironments. We have extensively validated the efficacy of P\({}^{13}\)LG across various protein function assays and taxa, including two thermal stability databases that were prepared by ourselves. Our approach consistently demonstrates substantial promise for protein engineering applications, particularly in facilitating the design of mutation sequences with improved thermal stability.
Broader ImpactThe intersection of deep learning and structural biology, as showcased in this study, has the potential to transform our approach to protein engineering challenges, paving the way for sustainable and efficient solutions. Algorithms, such as P\({}^{13}\)LG, are primarily designed to enhance enzymes to support initiatives in drug discovery, biofuel production, and other relevant industries. However, given the pervasive presence of proteins across numerous scenarios and organisms, it is feasible that these methods could be employed to modify dangerous species, such as harmful viruses. Therefore, it is crucial to regulate the use of these deep learning methods, akin to the oversight required for any other powerful tools. Interestingly, our model demonstrates suboptimal performance
when applied to such categories of proteins (refer to Table 2), suggesting an inherent limitation in its potential for misuse.
LimitationThe consumption of training resources for AI-driven protein engineering techniques has surged considerably in recent years. For instance, ESM-IF1, which is another geometric model that utilizes structural information of proteins, necessitates months of processing time and hundreds of machines to integrate sequence and topological data. Owing to these computational cost constraints, our approach does not train on such an extensive corpus from scratch. Instead, we harness publicly-available language models to extract hidden representations for amino acids. Nevertheless, training and inference in such an integrated model require geometric information from proteins in addition to sequential data. The current data repositories are rich with protein structures experimentally solved by biologists and supplemented by high-quality protein geometries from contemporary techniques such as AlphaFold2, and they are adequate for training our model. However, it is plausible that a revised protein could have an excessively long sequence that lacks a crystallized structure and cannot be folded by computational tools. An example of such a limitation is evident in our experiment, where a protein with an extended length was removed from the **ProteinGym** database.
|
2305.04311 | Egglog Python: A Pythonic Library for E-graphs | E-graphs have emerged as a versatile data structure with applications in
synthesis, optimization, and verification through techniques such as equality
saturation. This paper introduces Python bindings for the experimental egglog
library (previously called egg-smol), which aims to bring the benefits of
e-graphs to the Python ecosystem. The bindings offer a high-level, Pythonic API
providing an accessible and familiar interface for Python users. By integrating
e-graph techniques with Python, we hope to enable collaboration and innovation
across various domains in the scientific computing and machine learning
communities. We discuss the advantages of using Python bindings for both Python
and existing egg-smol users, as well as possible future directions for
development. | Saul Shanabrook | 2023-05-07T15:35:17Z | http://arxiv.org/abs/2305.04311v2 | # Egg-smol Python: A Pythonic Library for E-graphs
###### Abstract.
E-graphs have emerged as a versatile data structure with applications in synthesis, optimization, and verification through techniques such as equality saturation. This paper introduces Python bindings for the experimental egg-smol library, which aims to bring the benefits of e-graphs to the Python ecosystem. The bindings offer a high-level, Pythonic API providing an accessible and familiar interface for Python users. By integrating e-graph techniques with Python, we hope to enable collaboration and innovation across various domains in the scientific computing and machine learning communities. We discuss the advantages of using Python bindings for both Python and existing egg-smol users, as well as possible future directions for development.
Key words and phrases: E-graphs
aids in static analysis and code completion, as it helps disambiguate which methods are allowed based on the type. Also, in the s-expression language, everything is called as a function, which eliminates the need for typed variables. In Python, we allow classes to define the same method name, but have it translated into two different egg-smol functions. This requires knowing what class each expression has before it can be sent to egg-smol, so we know which methods to use.
## 3. Advantages for Python Users
The Python bindings for the egg-smol library provide several benefits for Python users. First, the use of built-in structures familiar to Python users makes the library more accessible and easy to use. Also, by relying on the Rust library, the Python bindings take advantage of the speed and performance that Rust offers, while allowing users to benefit from the latest innovations in the e-graph world without having to reinvent existing solutions.
Compared to the bindings for the egg library, specifically the snake-egg API, the Python bindings for the egg-smol library are more opinionated. In the snake-egg API, users could bring any Python object they wanted to map to e-graphs. In contrast, the egg-smol bindings are more restrictive, only allowing definitions through the existing function and class wrappers.
This restriction offers several advantages, especially when considering the goal of enabling rewrites between different downstream libraries. Although these libraries may define different domains, sorts, and functions, they are represented using the same meta-structure. This uniformity makes it possible to write rewrites between them, providing a significant benefit for the community as a whole. By offering a consistent and flexible interface, the egg-smol bindings can help create a more unified ecosystem for e-graph-based optimization and rewriting in Python.
## 4. Comparison for existing egg-smol Users
The Python interface for the egg-smol library differs in several ways from the existing s-expression interface. Some aspects of the language may be more verbose in Python, such as declaring variables or adding rewrites. On the other hand, Python's operator overloading allows for more succinct mathematical expressions for custom operations.
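As an illustration of the kind of succinctness operator overloading buys, the toy classes below build a small expression tree in plain Python. This is only a sketch of the general idea and does not use the egg-smol bindings' actual decorators or classes.

```python
# Toy sketch of operator overloading for expression building (plain Python,
# not the actual egg-smol Python API): Num(2) * Num(3) + Num(4) produces an
# expression tree instead of an integer, which rewrite rules could then match.
from __future__ import annotations
from dataclasses import dataclass

@dataclass(frozen=True)
class Num:
    value: int | None = None          # None marks an operator node
    op: str | None = None
    args: tuple["Num", ...] = ()

    def __mul__(self, other: "Num") -> "Num":
        return Num(op="*", args=(self, other))

    def __add__(self, other: "Num") -> "Num":
        return Num(op="+", args=(self, other))

expr = Num(2) * Num(3) + Num(4)
print(expr)   # a nested Num(...) tree rather than the integer 10
```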
By reusing existing Python constructs and leveraging Python's class and function structures, the Python interface provides static type checking for free, with support from existing tooling like MyPy or PyLance in Visual Studio Code. As far as static type checkers are concerned, the type of decorated functions and classes is the same as the underlying object, ensuring type checking remains consistent even though the runtime representation might differ.
Figure 1. High-Level Python API

Figure 2. Text API

Static type checking is supported not only for creating expressions but also for writing rules. Rewrites must have the same type for the left and right-hand sides, which can be verified statically by Python type checkers. See Figure 3 for an example of the static error raised if you try to replace the term Num(i) * Num(j), which is of type Num, with the term i * j, which is of the built-in integer type. Providing type checking on these rewrites is also the reason for the more verbose fluent syntax rewrite(lhs).to(rhs) instead of a more succinct syntax rewrite(lhs, rhs). The first option allows for testing that lhs and rhs are the same type, whereas the second does not due to restrictions of Python's type annotations.
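To make the reasoning behind the fluent syntax concrete, here is a minimal sketch in plain Python of how a generic builder lets a type checker require matching types on both sides of a rewrite. The class and function names are illustrative stand-ins, not the actual egg-smol bindings.

```python
# Why rewrite(lhs).to(rhs) is statically checkable while rewrite(lhs, rhs) is
# not: the fluent helper fixes a single type variable T for both sides, so
# MyPy/PyLance flag a rewrite whose right-hand side has a different type.
# Names here are hypothetical, not the real egg-smol API.
from __future__ import annotations
from typing import Generic, TypeVar

T = TypeVar("T")

class RewriteBuilder(Generic[T]):
    """Illustrative stand-in for the fluent rewrite(...).to(...) helper."""
    def __init__(self, lhs: T) -> None:
        self.lhs = lhs

    def to(self, rhs: T) -> tuple[T, T]:
        # Both sides share the single type variable T, so a checker can verify them.
        return (self.lhs, rhs)

def rewrite(lhs: T) -> RewriteBuilder[T]:
    return RewriteBuilder(lhs)

good = rewrite("Num(i) * Num(j)").to("Num(i * j)")   # both sides are str: accepted
# bad = rewrite("Num(i) * Num(j)").to(42)            # str vs int: flagged by the type checker
```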
Auto-completion also helps users find relevant methods for rewrites, similar to how type-checking aids in code correctness. Furthermore, the Python interface allows for interactive notebook environments, such as Jupyter, to experiment with e-graph expressions and test rewrite rules on the fly.
Overall, the Python interface exposes e-graphs to a wider audience by appealing to the large user base of the Python language and providing additional guardrails for analysis and editor integration.
## 5. Future Work
The current work represents an exciting exploration of the e-graphs space with Python bindings for the egg-smol library. However, it is yet untested in production use cases and relies on egg-smol, which is labeled as experimental and subject to change. There are several potential directions for future work that could aid in its adoption among Python library authors:
### Support for Embedding Existing Python Types
For Python authors dealing with existing Python objects, it would be beneficial to explore how these could be embedded in the e-graph as leaf nodes and have rules written for them. This could potentially be achieved without any modifications upstream to egg-smol by creating a new sort for Python objects and a function to execute arbitrary Python code as a string, given some Python objects.
### Simplifying the API for User Exposure
If the expressions and e-graphs were to be exposed to users, we would need to develop a more straightforward API for the pipeline of taking an existing expression, running some replacements on it, and outputting that back to a native Python object. This would require adding hooks to convert the output automatically to a Python object when certain methods are called.
### Prototype with an Upstream Library
It would be helpful to find an upstream library that already uses an internal expression system with rewriting and prototype how this library could be used in it, as well as what improvements would be needed beforehand. Since the Ibis library is already experimenting with e-graphs, this could be a possible option.
### Exporting and Importing E-graph Descriptions
Currently, it is not possible to write an e-graph description in Python and then use it from the s-expression language. Adding some form of export should be possible, either by emitting the s-expression language from Python or using some other machine-readable representation, such as JSON. Ideally, this could be a bi-directional transformation, allowing for the conversion between text formats and Python source.
### Relying on Python State Management and Modules
We could rely more heavily on Python state management and modules for encapsulation. Currently, all definitions are bound to a specific e-graph instance at runtime. However, when writing a reusable library, authors might want to separate definitions between files or modules and import several of them together into an e-graph. This would make distributing and combining different modules easier.
### Interactive Visualization in Jupyter Notebooks
By introspecting the internal state of an egg-smol e-graph, we could create visual and interactive views of it in Jupyter notebooks. Other libraries, like Dask and Ibis, use these visualizations as educational aids to help users understand how the library processes their expressions. Developing this type of tool could allow users to step through the e-graph's execution to understand how rules are being executed and serve as an educational tool for new users to grasp the concept of e-graphs.
## 6. Conclusion
There is a growing interest in utilizing e-graphs within the Python ecosystem, and this work presents an opportunity to open up e-graphs to a wider audience. In addition to solving concrete expression optimization problems in Python data science libraries, the egg-smol Python bindings could help make the Python data science community as a whole more resilient and connected by providing library authors with a common language to express their domains and translate between them.
Even for users who are not currently using Python, the Python bindings present an attractive authoring experience due to the extensive tooling that already exists for the Python language. By leveraging these tools and the popularity of Python, we can foster greater adoption of e-graph techniques and facilitate collaboration and innovation across various domains.
## Acknowledgments
Thank you to the Recurse Center for providing the community and support to do this work. In particular, thank you to Sean Aubin for your feedback on this proposal, which was invaluable in the editing process. None of this would be possible without all of the work on the underlying Rust libraries by Max Willsey and others. Thank you for fielding my questions about egg-smol and for being open to collaboration on it, even at such a young stage.
|
2310.20011 | Competition between transient oscillations and early stochasticity in
exponentially growing populations | It has been recently shown that the exponential growth rate of a population
of bacterial cells starting from a single cell shows transient oscillations due
to early synchronized bursts of division. These oscillations are enhanced by
cell size regulation and contain information about single-cell growth
statistics. Here, we report a phase transition in these oscillations as a
function of growth rate variability. Below the transition point, these
oscillations become asymptotically deterministic and can be measured
experimentally, while above the transition point, the stochasticity in
population growth dominates the oscillations and masks all the information
about single cell growth statistics. The analytically calculated transition
point, which roughly corresponds to $13\%$ variability in single-cell growth
rate, falls within physiologically relevant parameters. Additionally, we show
that the oscillations can stochastically emerge even when the initial state
contains multiple cells with out-of-phase division cycles. We show that the
amplitude and the phase of these oscillations are stochastic and would vary
across repeated measurements with the same initial conditions. We provide
analytic expressions as well as numerical estimates for the typical oscillation
amplitude and the number of generations before the amplitude falls below a
given measurement threshold for E. coli in multiple growth conditions. | Yaïr Hein, Farshid Jafarpour | 2023-10-30T20:56:29Z | http://arxiv.org/abs/2310.20011v3 | Competition between transient oscillations and early stochasticity in exponentially growing populations
###### Abstract
It has been recently shown that the exponential growth rate of a population of bacterial cells starting from a single cell will show transient oscillations due to early synchronized bursts of division. These oscillations are enhanced by cell size regulation and contain information about single-cell growth statistics. Here, we report a phase transition in these oscillations as a function of growth rate variability. Below the transition point, these oscillations become asymptotically deterministic and can be measured experimentally, while above the transition point, the stochasticity in population growth dominates the oscillations and masks all the information about single cell growth statistics. The analytically calculated transition point that roughly corresponds to 13% variability in single-cell growth rate falls within physiologically relevant parameters. Additionally, we show that the oscillations can stochastically emerge even when the initial state contains multiple cells with out-of-phase division cycles. We show that the amplitude and the phase of these oscillations are stochastic and would vary across repeated measurements with the same initial conditions. We provide analytic expressions as well as numerical estimates for the typical oscillation amplitude and the number of generations before the amplitude falls below a given measurement threshold for _E. coli_ in multiple growth conditions.
## I Introduction
The relationship between bacterial populations and single-cell statistics has been extensively researched, particularly for cells in unrestricted growth [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25]. In the presence of sufficient nutrients, bacterial populations grow exponentially at a constant rate that can be linked to the single-cell statistics [3; 4; 5; 6; 7; 9; 12; 14; 15; 19; 24]. However, when the population starts from a single cell, it takes some time before the bacterial cell count grows perfectly exponentially, even if all individual cells are in balanced growth, meaning that their physiological states are adapted to the growth medium. This transient period is characterized by slowly decaying oscillations [1; 10; 11; 13; 18; 20; 25; 26]. These oscillations are due to the fact that the descendants of a cell grow together and divide more or less at the same time, resulting in synchronized bursts of division in the population. As time passes, the stochasticity in cell growth slowly de-synchronizes the timing of cell division, which dampens the oscillations [1; 25]. Note that these oscillations only show up in population observables that are affected by divisions, such as total cell count, cell count growth rate, or cell mass distribution. The total cell mass grows smoothly, which is why optical density measurements would not reveal the oscillations [25].
To illustrate how transient behaviour manifests, we focus on the cell count growth rate \(\Lambda(t)\), defined as the fraction of cells in the population that divide per unit time \(\frac{1}{N(t)}\frac{dN(t)}{dt}\). This population observable, along with many others, converges to a certain asymptotic constant value after an infinite amount of time. At large but finite times, however, these population observables exhibit transient behaviour. On one hand, the ensemble averages of intensive observables oscillate around their asymptotic value, with an exponentially decaying amplitude. The time scale for this oscillation decay has been derived for several single-cell division models and it usually scales with the variability of single-cell growth rates [1; 10; 11; 13; 18; 20; 25; 26]. On the other hand, populations made up of randomly fluctuating cells must exhibit random collective fluctuations themselves. Assuming that \(N(t)\) cells roughly fluctuate independently, the amplitude of the collective fluctuations scales with \(\sqrt{N(t)}\). The relative amplitude of such fluctuations, therefore, scales as \(1/\sqrt{N(t)}\), which decays exponentially at a rate of half the population growth rate. Population observables thus exhibit two competing types of transient behaviour: oscillations and fluctuations.
There is a transition point at which the transient oscillations and fluctuations are equal. Since the oscillations and fluctuations decrease in magnitude at different rates, the transient population behaviour will ultimately be dominated by only one of these effects: the one with the lowest decay rate. We can, therefore, formulate a condition necessary for the oscillations to dominate the transient population behaviour. In Ref. [1], the oscillation decay rate was estimated to be \(r\approx 29\sigma_{\kappa}^{2}/\bar{\kappa}\), where \(\sigma_{\kappa}\) and \(\bar{\kappa}\) are the single-cell growth rate standard deviation and mean, respectively. The asymptotic population growth rate is roughly identical to the single-cell growth rate \(\bar{\kappa}\), so the fluctuations decay at a rate of \(\bar{\kappa}/2\). We can therefore say that oscillations dominate the transient behavior only if
\[\sigma_{\kappa}\leq 0.13\,\bar{\kappa}. \tag{1}\]
This transition point lies right within the range of physiological values for \(\sigma_{\kappa}\) and \(\bar{\kappa}\)[27].
The aforementioned transition point was defined for a model in which growth rate correlations were neglected. Population behaviour depends on how cell division times are distributed and how they are correlated between mother and daughter cells [5; 7]. A good way to model these correlations is by treating growth rate variability and division size variability as two separate sources of noise, each with its own mother-daughter cell correlations Ref. [27]. In Ref. [25] the authors use such a description of single-cell behaviour to quantitatively predict the oscillatory population behaviour. They assume that cell growth rate along a lineage is a continuous random process. The per-cell growth rate variability and correlations then emerge from the noise and auto-correlation of the underlying growth process. All of their results have, however, only been derived for ensemble averaged populations, which means they only considered observables corresponding to averages over repeated experiments with the same initial conditions.
In this paper, we adopt the model from Ref. [25], but consider the behaviour of single population experiments instead of just the ensemble average, which causes some interesting effects that are often ignored in other literature. Unlike in the ensemble average population, we observe a phase transition between fluctuation-dominated and oscillation-dominated transient regimes. We determine a generalized transition point for this phase transition, of which Eq. (1) is a special case. Another interesting phenomenon absent in the ensemble average is that the amplitude of the transient oscillations is random among repeated experiments, set by stochasticity in the early population. We analyze the distribution of these transient oscillation amplitudes through simulations and theory. Interestingly, the typical amplitude of a single population is higher than that of an ensemble-averaged population, due to the net effects of spontaneous cell synchronization. Our predictions are in great qualitative agreement with oscillations studied in a multistage model used to describe cancer cell populations [11]. At last, we consider tables of realistic parameters based on data from Ref. [27] and answer just how many generations it typically takes for the oscillations to fall below a certain threshold.
## II Model
Suppose we have a population that starts from a single cell. We assume this cell is already adapted to the growth medium. We want to keep track of the number of cells \(N(t)\). For this, we use a physiologically relevant model of growth and division, which accounts for growth fluctuations, their correlations, and cell size regulation, based on the model from Ref. [25]. The effect of asymmetric division is ignored, since its effect on oscillations is relatively small for physiologically relevant values [1; 28; 29; 30].
Consider the following model. At each point in time, there are a total of \(N(t)\) cells in the population, each of which can be labeled by an index \(j\in\{1,\ldots,N(t)\}\). Every cell \(j\) can be characterized by two variables, its cell mass \(m_{j}(t)\) (or equivalently cell size, volume, area, etc.) and growth state \(\mathbf{x}_{j}(t)\). The cell mass grows exponentially in time, meaning it satisfies
\[\frac{d}{dt}m_{j}(t)=\lambda(\mathbf{x}_{j}(t))m_{j}(t) \tag{2}\]
where the growth rate \(\lambda(\mathbf{x})\) is a function of the growth state \(\mathbf{x}\), which is an abstract vector that represents the internal state of the cell. The growth state also changes stochastically in time, with each cell's growth state fluctuating independently of the other cell's. For a single cell, we denote the probability of finding a cell in growth state \(\mathbf{x}\) at time \(t\) is given by \(p(t,\mathbf{x})\). This probability evolves according to
\[\frac{d}{dt}p(t,\mathbf{x})=\mathcal{K}p(t,\mathbf{x}) \tag{3}\]
where \(\mathcal{K}\) is some differential operator. We also assume that at any moment cells have a chance of dividing. The rate of such a division event may be any function of cell mass \(m\) and the masses at previous births and divisions, as long as the distribution of cell mass at division goes to a steady state within a reasonable amount of time. Upon division, the mother cell is removed and two new daughter cells are formed with the same growth state but half the mass of the mother cell at division.
When concrete examples are needed, we will assume \(\lambda_{t}=\lambda(\mathbf{x}(t))\) to be an Ornstein-Uhlenbeck process, which is characterized by three parameters. These are the mean \(\bar{\lambda}=\langle\lambda_{t}\rangle\), variance \(\sigma_{\lambda}^{2}=\mathrm{Var}(\lambda_{t})\) and correlation time \(\tau_{cor}\) which sets the correlation decay time via \(\mathrm{Cov}(\lambda_{t},\lambda_{s})/\sigma_{\lambda}^{2}=e^{-|t-s|/\tau_{cor}}\). There is a direct correspondence between average division time along a lineage and mean growth rate set by \(\tau_{div}=\ln(2)/\bar{\lambda}\).
In our simulations, we assume a division process such that the steady-state birth mass distribution \(m_{b}\) has a log-normal distribution with the coefficient of variation \(\mathrm{CV}_{m_{b}}\) and a typical value \(\bar{m}\) that satisfies \(\ln\bar{m}=\langle\ln m_{b}\rangle\). We will describe all cell masses in units of \(\bar{m}\). This model of division is entirely consistent with many cell size regulation models [1; 5; 27; 31].
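To make the model concrete, here is a minimal simulation sketch assuming an Ornstein-Uhlenbeck growth rate and one particular division rule: a sizer-like rule in which a cell divides once its mass exceeds twice a log-normally drawn target birth mass. This is only one member of the class of division processes allowed above, and the parameter values are illustrative; it is not necessarily the simulation used to generate the figures in this paper.

```python
# Minimal agent-based sketch of the growth-division model: each cell grows
# exponentially (Eq. (2)) with an Ornstein-Uhlenbeck growth rate and divides
# once its mass exceeds 2x a log-normally drawn target birth mass.
import numpy as np

rng = np.random.default_rng(1)
lam_bar, sigma_lam, tau_cor = 1.0, 0.05, 0.2   # mean growth rate, its std, correlation time
cv_mb = 0.10                                   # variability of the target birth mass
dt, t_max = 0.001, 8.0                         # time step and total simulated time

def draw_division_mass():
    # target division mass = 2 x (log-normally distributed target birth mass)
    return 2.0 * np.exp(rng.normal(0.0, cv_mb))

# one entry per cell: [mass, growth rate, target division mass] (masses in units of the typical birth mass)
cells = [[1.0, lam_bar, draw_division_mass()]]
t, times, counts = 0.0, [], []
while t < t_max:
    new_cells = []
    for m, lam, m_div in cells:
        # Ornstein-Uhlenbeck update of the single-cell growth rate
        lam += (lam_bar - lam) * dt / tau_cor + sigma_lam * np.sqrt(2 * dt / tau_cor) * rng.normal()
        m *= np.exp(lam * dt)                  # exponential mass growth
        if m >= m_div:                         # division: two daughters with half the mass
            new_cells.append([m / 2, lam, draw_division_mass()])
            new_cells.append([m / 2, lam, draw_division_mass()])
        else:
            new_cells.append([m, lam, m_div])
    cells, t = new_cells, t + dt
    times.append(t)
    counts.append(len(cells))

print("final cell count:", counts[-1], "(~", round(np.log2(counts[-1]), 1), "generations )")
```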
## III Ensemble averaged oscillations in population growth rate
If one could repeat the same experiment with the same initial conditions and take the average over all cell count trajectories, one would obtain some ensemble average
\(\langle N(t)\rangle\). For this function, we define the ensemble population growth rate
\[\Lambda_{\text{ens}}(t)=\frac{1}{\langle N(t)\rangle}\frac{\mathrm{d}\langle N(t) \rangle}{\mathrm{d}t}, \tag{4}\]
This effective growth rate is known to converge to an asymptotic population growth rate \(\Lambda_{\infty}\) at large times [5; 7; 9]. This limit holds true when the cell mass and growth state distribution of the population have reached a steady state, in which case the cell masses have completely de-synchronized.
Let us explain why a non-steady-state population exhibits oscillations in population growth rate. When the average cell mass exceeds the steady-state average, we have an increase in cells close to division, which momentarily raises \(\Lambda_{\text{ens}}(t)\) with respect to the steady-state value \(\Lambda_{\infty}\). Following an increase in divisions, the abundance of newborn cells causes the average cell mass to drop below the steady state, thereby decreasing \(\Lambda_{\text{ens}}(t)\). When the division rate \(\Lambda_{\text{ens}}(t)\) is low, the average cell mass builds up again and the cycle repeats. This process causes wave-like behaviour in the cell mass distribution and the cell division rate \(\Lambda_{\text{ens}}(t)\). The average effect of cells growing at different rates de-synchronizes the population. This dampens the mass synchronicity waves at rates proportional to their oscillation speed, and so rapidly dampens out any higher-order mass synchronicity oscillations. What is left are just the first-order oscillations corresponding to full cell cycles. The transient regime is thus well described by just the first-order oscillations [25]
\[\Lambda_{\text{ens}}(t)\approx\Lambda_{\infty}\left(1+A_{\text{ens}}e^{-rt} \cos(\Omega t+\phi_{\text{ens}})\right), \tag{5}\]
Here, the amplitude pre-factor \(A_{\text{ens}}\) and phase \(\phi_{\text{ens}}\) are constants that depend on the division process, growth process as well as the population's initial conditions. An explicit expression for \(A_{\text{ens}}\) is given in Section V. The asymptotic growth rate \(\Lambda_{\infty}\), oscillation decay rate \(r\), and oscillation speed \(\Omega\), however, are time-scales that are uniquely determined by the model of cell growth. In Ref. [25] these timescales were given in the case where \(\lambda_{t}\) is an Ornstein-Uhlenbeck process. Here, we provide a more general expression.
Assuming a growth rate process \(\lambda(\mathbf{x})\) where \(\mathbf{x}\) evolves according to Eq. (3), we find that \(\Lambda_{\infty}\) is the leading eigenvalue of the operator \(\mathcal{K}+\lambda(\mathbf{x})\). Let \(\mu\) be the leading eigenvalue by real part of the operator \(\mathcal{K}+[1+i2\pi/\ln(2)]\lambda(\mathbf{x})\). Then we have that \(r=\Lambda_{\infty}-\mathrm{Re}\mu\) and \(\Omega=\mathrm{Im}\mu\). These expressions reduce to forms that are easier to interpret if we assume \(\lambda_{t}\) is Ornstein-Uhlenbeck or assume small growth rate variability \(\sigma_{\lambda}\ll\bar{\lambda}\). For any growth rate process, one can define an auto-correlation function
\[\rho(t):=\mathrm{Cov}(\lambda_{t},\lambda_{0})/\sigma_{\lambda}^{2}. \tag{6}\]
A key variable that links the population time-scales to the growth process is the integrated correlation
\[D:=\sigma_{\lambda}^{2}\int_{0}^{\infty}ds\rho(s) \tag{7}\]
For an Ornstein-Uhlenbeck process, this reduces to \(D=\sigma_{\lambda}^{2}\tau_{cor}\). The time-scales are now given by
* \(\Lambda_{\infty}=\bar{\lambda}+D\)
* \(r=\frac{4\pi^{2}}{\ln(2)^{2}}D\)
* \(\Omega=\frac{2\pi}{\ln(2)}(\bar{\lambda}+2D)\)
## IV Oscillation-Fluctuation Phase Transition
In this section, we consider experiments of single populations and discuss the most important characteristic absent in ensemble population descriptions: transient fluctuations. Depending on the parameter regime, the transient behavior is dominated either by fluctuations or by deterministic oscillations of the form in Eq. (5). For a single population with cell count trajectory \(N(t)\), we define the cell count growth rate as
\[\Lambda(t):=\frac{1}{N(t)}\frac{\mathrm{d}N(t)}{\mathrm{d}t} \tag{8}\]
Suppose that the \(N(t)\) cells in a population grow independently. The fluctuations in the number of cells that divide per unit time \(dN(t)/dt\) now scale with \(\sqrt{N(t)}\). Consequently, the fluctuations in the cell count growth rate \(\Lambda(t)\) scale with \(1/\sqrt{N(t)}\), which evolves as \(e^{-\frac{\Lambda_{\infty}}{2}t}\). This introduces a timescale of \(\Lambda_{\infty}/2\) in addition to the oscillation decay rate \(r\). We define the relative oscillation decay rate \(\epsilon\) as the ratio of the oscillation decay rate to the fluctuation decay rate
\[\epsilon:=\frac{r}{\Lambda_{\infty}/2}\approx 164D/\bar{\lambda} \tag{9}\]
If \(\epsilon<1\), then the oscillations decay more slowly than the background fluctuations. In this regime, the ratio of fluctuations to oscillations in \(\Lambda(t)\) will always go to zero eventually. When this happens, the dynamics of \(\Lambda(t)\) become deterministic in nature. Conversely, if \(\epsilon\geq 1\), then all oscillations will eventually fall below the background fluctuation level. In this case, the trajectory of \(\Lambda(t)\) is dominated by noise and local oscillations caused by spontaneous cell mass synchronicity. In Appendix D we show how, in the limit of vanishing growth rate correlations, the condition \(\epsilon<1\) is equivalent to Eq. (1).
In Fig. 1 we show examples of the trajectories of \(\Lambda(t)\) for \(\epsilon=0.5\) and \(\epsilon=1.3\), as well as the log of the deviation of \(\Lambda(t)\) from its asymptotic value \(\Lambda_{\infty}\). In log deviation space, the exponential decay rates of transient effects appear as slopes. Which transient effect
dominates in the large time limit only depends on the theoretical ratio of the slopes set by \(\epsilon\). One can think of the point at which the oscillations clearly separate from the background noise as the crossover between the two lines. At the transition, as \(\epsilon\) goes to 1, this separation time diverges.
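As a quick numeric check of these expressions (the time scales listed at the end of Section III and the relative decay rate in Eq. (9)), the short script below evaluates \(D\), \(\Lambda_{\infty}\), \(r\), \(\Omega\), and \(\epsilon\) for one set of experimentally derived parameters, the TSB values reported later in Table 1; it reproduces \(r\approx 0.018\) and \(\epsilon\approx 0.63\). The parameter values are simply copied from that table, in the same units.

```python
# Numeric check of the population time scales and the relative decay rate
# epsilon for one growth condition (the TSB parameters of Table 1).
import numpy as np

lam_bar, sigma_lam, tau_cor = 0.057, 0.0060, 6.1

D = sigma_lam**2 * tau_cor                            # integrated growth-rate correlation
Lambda_inf = lam_bar + D                              # asymptotic population growth rate
r = (4 * np.pi**2 / np.log(2)**2) * D                 # oscillation decay rate
Omega = (2 * np.pi / np.log(2)) * (lam_bar + 2 * D)   # oscillation speed
eps = r / (Lambda_inf / 2)                            # relative oscillation decay rate, Eq. (9)

print(f"D = {D:.5f}, Lambda_inf = {Lambda_inf:.4f}")
print(f"r = {r:.4f}, Omega = {Omega:.3f}, epsilon = {eps:.2f}")   # r ~ 0.018, epsilon ~ 0.63
```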
## V Oscillation amplitude and phase vary across repeated identical experiments
In this section, we assume that \(\epsilon<1\) and discuss the effects of fluctuations on the oscillations in single-population experiments. In Fig. 2 we see the trajectory of \(\Lambda(t)\) for two populations with the same parameters and initial conditions. In all simulations, we observe transient oscillations of the form
\[\Lambda(t)\approx\Lambda_{\infty}\left(1+Ae^{-rt}\cos(\Omega t+\phi)\right). \tag{10}\]
Although the time scales \(\Lambda_{\infty}\), \(r\), and \(\Omega\) are the same as for the ensemble's oscillations, the amplitude \(A\) and phase \(\phi\) are now random variables that are set by the population's early history, which is in close agreement with observations from Ref. [11]. In Fig. 3 we show the spread in the instantaneous amplitude (see Appendix C for a definition) after 12 generations. Interestingly, the ensemble average amplitude \(A_{\text{ens}}\) from Ref. [25] visibly underestimates the typical amplitude of a single population experiment.
### The typical amplitude is larger than the amplitude of the ensemble average
In this section, we specialize our model a little to investigate how the amplitude prefactor \(A\) in Eq. (10) is distributed and depends on the model's parameters. We assume that the growth process \(\lambda_{t}\) is Ornstein-Uhlenbeck with mean \(\bar{\lambda}\), variance \(\sigma_{\lambda}^{2}\), correlation time \(\tau_{cor}\), and that the initial growth rate \(\lambda_{0}\) is close to \(\bar{\lambda}\). We also assume that the cell size at division admits some steady-state distribution with a coefficient of variation of \(\text{CV}_{m_{b}}\). Based on the theory from Ref. [25], the ensemble average amplitude of a population starting from a single cell is now approximately given by
\[A_{\text{ens}}=2e^{-41\text{CV}_{m_{b}}^{2}+123\sigma_{\lambda}^{2}\tau_{cor}^{2}}, \tag{11}\]
To summarize Eq. (11), there are two major factors that affect the ensemble average amplitude. Firstly, growth rate correlations increase this amplitude, as they cause some delay in the time it takes for cells to fluctuate independently and de-synchronize their masses. Secondly, variability in division mass decreases the amplitude, as this causes a local spread in cell division timings among cells with synchronized mass, thereby slightly flattening oscillations in \(\Lambda(t)\).
The transient oscillation amplitude differs per population, even for populations with the same initial conditions. For any population with sustained oscillations, there will be some random prefactor \(A\) that describes the transient oscillations as in Eq. (10). To investigate the dependency of this amplitude on initial conditions and parameters, we consider a 'typical' amplitude pre-factor \(\bar{A}\), defined as the root mean square
\[\bar{A}:=\sqrt{\mathbb{E}\left[A^{2}\right]} \tag{12}\]
where the average is taken over independent stochastic outcomes of the same initial population. In Appendix C we show that this typical amplitude is closely approximated by
\[\bar{A}=2e^{-41CV_{m_{b}}^{2}+123\sigma_{\lambda}^{2}\tau_{cor}^{2}}\sqrt{ \frac{m_{0}^{-\epsilon}}{2^{1-\epsilon}-1}}. \tag{13}\]
In Fig. 3 (a) and (b) we see how the typical amplitude pre-factor \(\bar{A}\) more closely resembles the amplitude in typical experiments than \(A_{\text{ens}}\). The right-most factor in Eq. (13) is strictly greater than one, and it quantifies the net increase in the typical amplitude pre-factor with respect to the ensemble average due to stochasticity in the early population. Note that \(\bar{A}\) diverges at the oscillation-fluctuation phase transition as \(\epsilon\) goes to 1. This does not mean that we observe arbitrarily large oscillations, since an increase in \(\epsilon\) also means that the oscillations decay faster. The instantaneous oscillation amplitude at a fixed point in time therefore still goes to a constant as \(\epsilon\) goes to 1.
### The amplitude of a population starting from multiple cells
For a general population starting from \(N(0)\) cells with cell masses \(m_{1},\dots m_{N(0)}\), the oscillation amplitude pre-factor needs a slight modification
\[A_{\text{ens}}=2e^{-41\text{CV}_{m_{b}}^{2}+123\sigma_{\lambda}^{2}\tau_{cor} ^{2}}|\Psi(0)|. \tag{14}\]
Here \(\Psi(t)\) is the population's cell mass synchronicity defined as
\[\Psi(t):=\frac{1}{M(t)}\sum_{j=1}^{N(t)}m_{j}^{1+i\frac{2\pi}{\ln(2)}},\quad M (t):=\sum_{j=1}^{N(t)}m_{j} \tag{15}\]
The absolute value \(|\Psi(t)|\) ranges between 0 and 1 and measures the degree to which cell masses are synchronized. When \(|\Psi(t)|=1\), all cells must have the same mass. In the special case of one cell, this always holds, hence Eq. (14) reduces to Eq. (11). When all cells are out of phase, we get \(\Psi(t)=0\). In this case, there is full destructive interference in the first-order ensemble average oscillations of the future trajectory of \(\Lambda_{\text{ens}}(t)\).
Naturally, we have \(\Psi(t)=0\) when cell masses are distributed according to the steady-state population mass distribution [25]. Whenever we talk about the instantaneous amplitude, we actually mean a re-scaled \(|\Psi(t)|\). The general form of the typical amplitude is
\[\bar{A}=2e^{-41\mathrm{CV}_{m_{b}}^{2}+123\sigma_{\lambda}^{2}\tau_{\mathrm{ corr}}^{2}}\sqrt{|\Psi(0)|^{2}+J(0)}. \tag{16}\]
Here \(J(0)\) quantifies the net increase in transient oscillation amplitude with respect to the ensemble average due to early population stochasticity. A full formula is derived in the appendix, but it is closely approximated by
\[J(0)\approx\frac{1}{M(0)}\left(\frac{1}{2^{1-\epsilon}-1}-1\right) \tag{17}\]
The larger the initial population, the smaller \(J(0)\) and the smaller the effect of early population stochasticity. The strength of this effect also monotonically increases with growth rate variability. Interestingly, \(J(0)\) is always a positive finite value, so \(\bar{A}\) is strictly positive, even when the initial population is completely out of phase. This is demonstrated in Fig. 3 (c) and (d), where we provide the trajectories of instantaneous amplitudes as well as their distributions for a population starting from three out-of-phase cells. Note how in Fig. 3 (c), the small population immediately spontaneously synchronizes its cell masses, which then leads to sustained oscillations in the transient regime.
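The synchronicity order parameter of Eq. (15) is straightforward to evaluate directly. As a small check, the snippet below confirms that identical masses give \(|\Psi|=1\), while the three out-of-phase masses used in Fig. 3 (\(m/\bar{m}=1,1.19,1.56\)) give \(|\Psi(0)|\approx 0\).

```python
# Evaluating the cell-mass synchronicity Psi of Eq. (15) for a list of cell
# masses (in units of the typical birth mass).
import numpy as np

def synchronicity(masses):
    m = np.asarray(masses, dtype=float)
    phases = m ** (1 + 2j * np.pi / np.log(2))
    return phases.sum() / m.sum()

print(abs(synchronicity([1.0, 1.0, 1.0])))      # 1.0: fully synchronized masses
print(abs(synchronicity([1.0, 1.19, 1.56])))    # ~0: the out-of-phase start used in Fig. 3(c)-(d)
```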
## VI How long do the oscillations last?
In this section, we consider how long it takes for the oscillation amplitude to fall below a certain threshold \(A_{cut}\). The typical number of generations \(g_{cut}\) to reach this threshold roughly satisfies \(\bar{A}\exp[-r\,g_{cut}\tau_{\mathrm{div}}]=A_{cut}\). We can rewrite this as
\[g_{cut}=\frac{\bar{\lambda}}{r\ln(2)}\left[\ln(\bar{A})-\ln(A_{cut})\right]. \tag{18}\]
Figure 1: Population growth rate as a function of time for two parameter values on both sides of the transition. (a) growth rate above the transition is stochastic and approaches steady exponential growth, while (c) the growth rate below the transition approaches a decaying oscillation around the exponential growth. (b) and (d) are the same plots in logscale (with asymptotic values subtracted, \(\ln_{>0}[(\Lambda(t)-\Lambda_{\infty})/\bar{\lambda}]\)) showing the exponential decay of transient dynamics. In (b), above the transition, oscillations decay faster than the fluctuations, while in (d) the oscillations decay slower than the fluctuations and become asymptotically deterministic. Simulation parameters: \(\sigma_{\lambda}^{2}\tau_{\mathrm{cor}}/\bar{\lambda}=0.008\) for (a) and (b), and \(\sigma_{\lambda}^{2}\tau_{\mathrm{cor}}/\bar{\lambda}=0.003\) for (c) and (d).

Let us estimate what this amounts to for some experimentally obtained parameters, given a population that starts from a single cell with mass \(\bar{m}\) at \(t=0\). In Ref. [27], single lineage data is collected for _E. coli_ in various growth media. We are interested in the mean and coefficient of variation of the elongation rate and cell length at division, as well as the elongation rate's mother-daughter correlations. We assume cell length is proportional to cell mass and that the reported elongation rates are effective cell-cycle averages of some underlying Ornstein-Uhlenbeck growth rate process. Using theory from Ref. [25], we can deduce the set of parameters \(\bar{\lambda}\), \(\sigma_{\lambda}\) and \(\tau_{\text{cor}}\) of the supposed underlying growth rate process; see Appendix D for the relationships used. In Table 1 we report these values, derived from the experiments obtained over the seven different growth media. We also calculate the oscillation decay rate \(r\), the order parameter \(\epsilon\), and the typical amplitude \(\bar{A}\). At last, we report the typical number of generations a population needs for the relative amplitude to fall below a cutoff of \(0.1\). Note the large spread in this number of generations across growth media. The growth medium thus has a large effect on whether the oscillations are observable or not.
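As a worked example of Eqs. (13) and (18), the snippet below reproduces the TSB column of Table 1 for a population started from a single cell of typical mass (\(m_{0}=1\) in units of \(\bar{m}\)); it gives \(\bar{A}\approx 1.3\) and \(g_{cut}\approx 12\), consistent with the tabulated values up to rounding.

```python
# Worked example: typical amplitude (Eq. (13)) and number of generations to
# reach a 10% amplitude cutoff (Eq. (18)) for the TSB parameters of Table 1.
import numpy as np

lam_bar, sigma_lam, tau_cor, cv_mb = 0.057, 0.0060, 6.1, 0.17
A_cut, m0 = 0.1, 1.0

D = sigma_lam**2 * tau_cor
r = (4 * np.pi**2 / np.log(2)**2) * D
eps = 2 * r / (lam_bar + D)

A_bar = 2 * np.exp(-41 * cv_mb**2 + 123 * sigma_lam**2 * tau_cor**2) \
        * np.sqrt(m0**(-eps) / (2**(1 - eps) - 1))
g_cut = lam_bar / (r * np.log(2)) * (np.log(A_bar) - np.log(A_cut))

print(f"A_bar = {A_bar:.2f}, g_cut = {g_cut:.1f}")   # ~1.3 and ~12
```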
## VII Discussions and future directions
In 1950, well before the existence of single-cell technology, microbiologists resorted to developing experimental methods to synchronize populations of growing bacteria to be able to probe single cells [32]. A celebrated example is what was widely known as the "baby machine" that screened cells of the same size to produce a synchronized population [33]. This method was famously used to discover the multi-fork DNA replication in _E. coli_ [34]. These experiments inspired many early attempts to theoretically model the dynamics of synchronized growing populations [20; 35; 36; 21]. These attempts use the McKendrick-von Foerster formalism to analyze models akin to Bellman-Harris [37], where cells grow for a period of time drawn independently from a division time distribution and then divide. Such models are sometimes referred to as the timer model or independent generation time model. These models predict that variation in generation times of cells (about 20% variability [27]) desynchronizes the population (Note that 20% is well above the 13% threshold in Eq. (1), so no one would have expected to get a synchronized population just by starting it from a single cell).
With the recent invention of single-cell microfluidics technology such as the "mother machine" [38] and subsequent theoretical developments in cell size regulation [31], we now understand that the aforementioned theoretical models fail to capture the oscillation dynamics quantitatively. This is because cell-size regulation drastically slows down the desynchronization [1]. There are two major sources of variability in the generation time of a cell: the variability in the growth rate, and the variability in the size at which the cells divide. The decay rate of the oscillations is set by the smaller source of noise, the noise in the growth rate of the cells. Meanwhile, any long-term effect of the larger source of noise, the noise in the division size, on the population dynamics is canceled by cell-size regulation. This comes with the realization that under conditions where the growth rate variability is sufficiently low, these oscillations can last longer than 30 generations. This is the time it takes for a 1 ml culture starting with a single cell to saturate. In other words, all one needs to synchronize a large population is to start it from a single cell. This provides a way to probe hard-to-measure single-cell statistics by performing simple population measurements.
There have been very few experimental works on measurements of oscillations in populations starting from single cells, and they only capture the very first few generations on gel plates using simple microscopy [13; 18]. Our theoretical predictions suggest that for many growth conditions, these oscillations can be very easily measured in liquid culture at the late stage when the population is growing deterministically. For this reason, we have written this paper with our experimental colleagues in mind to provide the necessary tools that help decide when these oscillations can be observed.

Figure 2: Two trajectories of cell count growth rates \(\Lambda(t)/\bar{\lambda}\) for two simulations with identical initial conditions showing different oscillation amplitudes at long times. Simulation details: starting from one cell with mass \(m(0)=\bar{m}\) and growth rate \(\lambda_{0}=\bar{\lambda}\), with growth parameters \(\tau_{\text{cor}}=5/\bar{\lambda}\), \(\sigma_{\lambda}^{2}\tau_{\text{cor}}/\bar{\lambda}=0.003\) and division parameters \(\text{CV}_{m_{d}}=0.12\), \(\alpha=0.5\).
It is important to realize that these oscillations are hidden from typical optical density measurements, which estimate the total mass and not directly the cell count. The period of these oscillations is about one generation time, which means one needs multiple repeated measurements of the instantaneous population growth rate per cell cycle (which could be as short as 23 min for _E. coli_). The issue of time resolution can be alleviated by performing the experiment at a lower temperature, where all the processes in cells slow down almost proportionally.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} & TSB & Synth. rich & gcl+12a.a. & gcl+6a.a & glucose & sorbitol & glycerol \\ \hline \(\lambda\) & 0.057 & 0.043 & 0.037 & 0.033 & 0.027 & 0.018 & 0.019 \\ \(\sigma_{\lambda}\) & 0.0060 & 0.0046 & 0.0035 & 0.0033 & 0.0027 & 0.0032 & 0.0024 \\ \(\tau_{\text{cor}}\) & 6.1 & 6.1 & 4.4 & 7.0 & 5.9 & 9.6 & 18.8 \\ CV\({}_{m_{d}}\) & 0.17 & 0.14 & 0.10 & 0.10 & 0.10 & 0.12 & 0.11 \\ \hline \(r\) & 0.018 & 0.010 & 0.0048 & 0.0061 & 0.0035 & 0.0081 & 0.0087 \\ \(\epsilon\) & 0.63 & 0.48 & 0.24 & 0.37 & 0.26 & 0.86 & 0.91 \\ \(\tilde{A}\) & 1.4 & 1.5 & 1.6 & 1.9 & 1.7 & 3.8 & 6.3 \\ \(g_{\text{cut}}\) for \(A_{cut}=0.1\) & 12 & 16 & 34 & 23 & 32 & 12 & 13 \\ \end{tabular}
\end{table}
Table 1: Experimentally measured values of the model parameters for various growth conditions for _E. coli_ obtained from Ref. [27] (see Appendix D for details), together with analytically calculated values for the oscillation decay rate \(r\), relative decay rate \(\epsilon\), typical amplitude pre-factor \(\tilde{A}\), and the number of generations before the amplitude falls below 10% of the average growth rate. All seven growth conditions are below the transition point, with "glucose + 12 amino acids" having the longest-lasting oscillations with amplitude larger than 10%, lasting for over 34 generations. See Ref. [27] for the details of the growth conditions.
Figure 3: Instantaneous relative oscillation amplitude in \(\Lambda(t)\) for 4000 simulations (a) and (c), along with the distribution of the amplitude pre-factor \(A\) (b) and (d). In (a) and (c), the black line is the root mean square \(\sqrt{\langle A^{2}\rangle}\) and the green line its asymptotic prediction \(\bar{A}e^{-rt}\), where \(\bar{A}\) is given by Eq. (13). The brown solid line is the ensemble average’s amplitude \(|\langle Ae^{i\phi}\rangle|\), and the red dashed line is its asymptotic prediction \(A_{\text{ens}}e^{-rt}\), where \(A_{\text{ens}}\) is given by Eq. (11). In (b) and (d) we show the distribution of the amplitude pre-factor \(A\) based on its value at \(t=12\tau_{\text{div}}\). (a) and (b) are based on simulations starting from one cell with \(m(0)=\bar{m}\) and \(\lambda(0)=\bar{\lambda}\), while (c) and (d) start with three cells with masses \(m(0)/\bar{m}=1,1.19,1.56\), chosen such that the initial cell mass synchronicity is \(\Psi(0)=0\).
The population growth rate can be estimated by cell counting measurements performed twice on small samples a few minutes apart. We expect any well-designed experiment to be able to estimate the instantaneous population growth rate with uncertainty well below \(10\%\). For this reason, we have estimated the number of generations it takes before the amplitude of the oscillations falls below \(10\%\) of the mean growth rate in Table 1. For certain growth conditions (glucose + 12 amino acids, see Ref. [27] for the description of the growth media), the oscillations are predicted to maintain an amplitude above \(10\%\) of the mean growth rate for \(34\) generations (this corresponds to longer than the time it takes a \(10\) ml culture to saturate).
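As a minimal sketch of such a two-point estimate, assuming simple Poisson counting noise and using made-up numbers (neither the counts nor the sampling interval come from any actual experiment):

```python
# Estimating the instantaneous population growth rate from two cell counts
# taken a few minutes apart (illustrative numbers, Poisson counting noise).
import numpy as np

rng = np.random.default_rng(2)
lam_true, dt = 0.028, 5.0                          # assumed true rate (1/min), 5 minutes apart
N1 = rng.poisson(2.0e4)                            # first count of a small diluted sample
N2 = rng.poisson(2.0e4 * np.exp(lam_true * dt))    # second count

lam_est = np.log(N2 / N1) / dt
rel_err = np.sqrt(1 / N1 + 1 / N2) / (lam_true * dt)   # relative uncertainty of the estimate

print(f"estimated rate = {lam_est:.4f} per min, relative uncertainty ~ {rel_err:.1%}")
```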
Here, we have shown that when the relative decay rate \(\epsilon\) goes above \(1\), these oscillations are not observable. This threshold corresponds to roughly \(13\%\) variability in growth rate, which is within the range \(6-20\%\) variability reported in the literature [27, 39, 40, 41]. However, careful estimations of the parameters of the model for all seven growth conditions from Ref. [27] put _E. coli_ below this threshold (see Table 1). We do not know if there is a biological significance for cells to stay below this transition point, and we have no reason to believe this is always the case. Moreover, due to the finite uncertainty in any measurement of cell counts, we find it unlikely for these oscillations to be observed in single experiments when \(\epsilon\) is close to one which is the case for some of the growth conditions.
In this work, we have tried to use a realistic model that keeps track of noise in both growth and division, is compatible with models of cell size regulation, and includes correlations in growth rates. One can always write more complicated models of growth and division keeping track of more degrees of freedom, but complicated models come with more parameters. We believe our model captures all the dominant sources of noise and correlations without involving parameters that are unknown or hard to measure. One source of noise and correlation that is ignored in our model is any effect of asymmetry in cell division. _E. coli_ is extremely good at dividing symmetrically; the asymmetry in cell division of _E. coli_ is measured to have a relative variability of order \(2\%\) under certain conditions [30], which would certainly have a negligible effect on any calculations performed in this paper. However, one could expect growth conditions or other species of bacteria in which there is a significant source of noise in the position of the division plane. The asymmetry in cell division could also manifest itself in an asymmetry in the division of the proteins in ribosomes among the two daughter cells, which would lead to discontinuity in the growth process and subtle correlations between the growth rate and cell size [42, 43]. Quantifying the effect of such asymmetries would be the subject of future work.
On the theoretical side, the competition between the transient dynamics and early stochasticity is what we believe to be a much more general phenomenon in stochastic exponential growth models. In most models of stochastic exponential growth, the early phases of growth are highly stochastic, but as the size of the system grows, the dynamics becomes more deterministic. In such cases, the effect of early stochasticity freezes in the distribution of the exponentially growing state. A well-known example of such dynamics is the Yule process, where the agents can spontaneously divide into two at a constant rate in a Markovian fashion. While the early stages of the Yule process are highly stochastic, it asymptotically grows exponentially with a prefactor that is exponentially distributed. Similarly, branching processes such as the Bellman-Harris model of population growth show early stochasticity that freezes in the final state of the system. Stochastic exponential growth can also be studied in the context of stochastic differential equations (SDEs) such as the Cox model with square root noise [44] and its generalizations to power-law noise [45]. Similar effects can be observed in these models as long as the stochasticity grows slower than the system size and they can be mapped to square root noise through a change of variable [45]. The exception is the case of geometric Brownian motion, where the noise grows proportionally to the system's size [46]. Stochastic exponential growth is also observed in autocatalytic reaction networks such as those used in models of the origin of life [47, 48] and models of single cell growth [49, 50]. These models also have early stochasticity that grows asymptotically with the square root of the system size.
While simpler models such as the Yule process and single-variable SDEs do not exhibit other transient dynamics besides the early stochasticity, branching processes such as Bellman-Harris, and all the models with multiple degrees of freedom such as multivariate SDEs, autocatalytic networks, and models of single-cell growth, can exhibit transient behavior which can be oscillatory. We believe the result of this work can and should be generalized to all such models, where there is a transition between a regime in which the transient dynamics of a macroscopic observable can be used to probe microscopic behavior and a regime in which such microscopic information is masked by the early stochasticity. The model of growth and division in this paper is a complicated chimera of branching processes and SDEs. Nevertheless, we have solved this problem analytically and provided simple expressions for the transient dynamics in the presence of early stochasticity.
Besides exponentially growing systems, both synchronized divisions and finite-size stochasticity are present in theoretical models and experimental setups of systems with finite population sizes such as the Moran process and turbidostat. It has recently been shown that the oscillations in the division rate in such systems persist indefinitely due to the finite coalescent time of finite populations [51]. While the transition is not present in these systems due to the lack of exponential growth, we hope the techniques used in this paper prove fruitful in the analysis of such systems. Finite-size models are important in the context of population genetics since populations are unable to grow exponentially over evolutionary
timescales. Understanding the interaction of transient dynamics controlled by single-cell statistics and stochasticity in finite-size populations can provide insight into how evolution acts on cellular physiology.
## Appendix A The cell mass phase expansion
Assume that the population is big enough to the point where the division and birth mass are distributed according to their steady-state distributions. Using Fourier expansion, we can rewrite the total cell count as an expansion in total cell mass and collective cell phase
\[N(t)= \frac{e^{\frac{1}{2}\mathrm{CV}_{m_{d}}^{2}}}{2\ln(2)} \tag{10}\] \[\times \left(M(t)+\sum_{k=1}^{\infty}2e^{-\frac{1}{2}\omega^{2}\mathrm{ CV}_{m_{d}}^{2}}\mathrm{Re}\left[\frac{e^{ik\omega\mathrm{CV}_{m_{d}}^{2}}}{1+ik \omega}\Phi_{k}(t)\right]\right)\]
where \(\omega=2\pi/\ln(2)\) and total cell phase are given by
\[\Phi_{k}(t)=\sum_{j=1}^{N(t)}m_{j}^{1+i\omega k},\qquad M(t)=\Phi_{0}(t) \tag{11}\]
In the next section, we will show that at large times, the total cell mass \(M(t)\) grows exponentially at some rate \(\Lambda_{\infty}\), and the collective mass phase terms \(\Phi_{k}(t)\) grow exponentially while rotating in the complex plane. The higher order terms \(\Phi_{k}(t)\) for \(k\geq 2\) quickly become irrelevant, so we will neglect them. We will just consider the total cell mass \(M(t)\) and the first-order collective mass phase \(\Phi(t)\), for which we drop the index.
## Appendix B The transient dynamics of total mass and synchronicity
In this section, we show that asymptotically, \(\Phi(t)\) grows exponentially while rotating in the complex plane and show how to calculate these rates. We first define the collective mass-phase density of cells with growth state \(\mathbf{x}\) as
\[\Phi(t,\mathbf{x}):=\sum_{j=1}^{N(t)}m_{j}^{1+i\omega}\delta(\mathbf{x}_{j}- \mathbf{x}) \tag{12}\]
Note that this quantity is conserved upon cell divisions. The expected change thus purely depends on the change in cell mass \(m_{j}\) and growth states \(\mathbf{x}_{j}\) of individual cells
\[\left\langle\frac{d\Phi(t,\mathbf{x})}{dt}\middle|\Phi\right\rangle= \sum_{j=1}^{N}\delta(\mathbf{x}_{j}-\mathbf{x})\frac{d}{dt}\left\langle m ^{1+i\omega}\middle|m,\mathbf{x}\right\rangle\] \[+\sum_{j=1}^{N(t)}m_{j}^{1+i\omega}\frac{d}{dt}\left\langle\delta (\mathbf{x}_{j}-\mathbf{x})\middle|m,\mathbf{x}\right\rangle \tag{13}\]
For the first term, we simply find \(\partial_{t}m^{1+i\omega}=(1+i\omega)\lambda(\mathbf{x})m^{1+i\omega}\), since the cell's state is conditioned to be at \(\mathbf{x}\). The second term directly corresponds to the change in growth state probability density \(\partial_{t}p(t,\mathbf{x})\), and we can thus use (3). By combining the terms we get
\[\left\langle\frac{d\Phi(t,\mathbf{x})}{dt}\middle|\Phi\right\rangle=\left( \mathcal{K}+(1+i\omega)\lambda(\mathbf{x}))\right)\Phi(t,\mathbf{x}) \tag{14}\]
The steady-state dynamics of the collective phase are thus governed by \(\left\langle\Phi(t,\mathbf{x})\right\rangle\propto e^{\mu t}\), where \(\mu\) is the leading complex-valued eigenvalue of the operator on the right-hand side of Eq. (14). By noting that the total collective mass phase \(\Phi(t)\) is simply the integral of \(\Phi(t,\mathbf{x})\) over all growth states, we find that its leading expected behavior must also evolve as \(\left\langle\Phi(t)\right\rangle\propto e^{\mu t}\). Analogously we find that the expected total mass evolves as \(\left\langle M(t)\right\rangle\propto e^{\Lambda_{\infty}t}\) where \(\Lambda_{\infty}\) is the leading eigenvalue of \(\mathcal{K}+\lambda(\mathbf{x})\). By applying these transient results to the expectation of the first order expansion in Eq. (10) we find that for large \(t\), \(\left\langle N(t)\right\rangle\) evolves as
\[\left\langle N(t)\right\rangle\propto e^{\Lambda_{\infty}t}\left(1+A_{\mathrm{ ns,ens}}e^{-rt}\cos(\Omega t+\phi_{\mathrm{N,ens}})\right) \tag{15}\]
for some constant \(A_{\mathrm{ens}}\), where \(r=\Lambda_{\infty}-\mathrm{Re}\mu\) and \(\Omega=\mathrm{Im}\mu\). Plugging this into the definition of \(\Lambda_{\mathrm{ens}}(t)\) from (4) yields the first-order behaviour of Eq. (5) after ignoring terms of order \(O(e^{-2rt})\).
## Appendix C The small growth rate variability limit
Based on Eq. (14), one can show that the full solution to the expectation of the collective mass phase can be written as
\[\left\langle\Phi(t)\right\rangle=\left\langle e^{(1+i\omega)\int_{0}^{t} \lambda_{s}ds}\right\rangle\Phi(0). \tag{16}\]
where \(\left\langle.\right\rangle\) is essentially a path integral over growth state space that integrates \(\lambda_{t}\) over all growth rate trajectories along a lineage. The leading complex-valued constant \(\mu\) for which asymptotically \(\langle\Phi(t)\rangle\propto e^{\mu t}\) can be obtained from this expression via
\[\mu=\lim_{t\to\infty}\frac{1}{t}\ln\left\langle e^{(1+i\omega)\int_{0}^{t} \lambda_{s}ds}\right\rangle \tag{17}\]
When the growth rate integral is Gaussian (which holds when \(\lambda_{t}\) is Ornstein-Uhlenbeck) or variations in \(\lambda_{s}\) are small, one can perform a second-order cumulant expansion on Eq. (17) to obtain
\[\mu=(1+i\omega)\bar{\lambda}+(1+i\omega)^{2}D. \tag{18}\]
Here, the lineage steady-state growth rate is
\[\bar{\lambda}:=\lim_{t\to\infty}\frac{1}{t}\left\langle\int_{0}^{t}\lambda_{s} ds\right\rangle=\langle\lambda_{0}\rangle_{ss}, \tag{19}\]
where the subscript \(ss\) denotes that \(\lambda_{0}\) is taken from a lineage steady-state distribution. The growth accumulation diffusion constant \(D\) is
\[D:=\lim_{t\to\infty}\frac{1}{2t}\text{Var}\left(\int_{0}^{t}\lambda_{s}ds\right) =\int_{0}^{\infty}\text{Cov}\left(\lambda_{0},\lambda_{s}\right)_{ss}ds. \tag{29}\]
In an analogous derivation where we substitute \(\omega\to 0\), one obtains the asymptotic population growth rate \(\Lambda_{\infty}=\bar{\lambda}+D\). The population time scales given in Section III can now easily be derived.
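For the reader's convenience, here is a short unpacking of these time scales (our own algebra, following directly from \(\mu=(1+i\omega)\bar{\lambda}+(1+i\omega)^{2}D\) and \(\Lambda_{\infty}=\bar{\lambda}+D\)):

\[r=\Lambda_{\infty}-\mathrm{Re}\,\mu=(\bar{\lambda}+D)-\big(\bar{\lambda}+(1-\omega^{2})D\big)=\omega^{2}D,\qquad\Omega=\mathrm{Im}\,\mu=\omega\bar{\lambda}+2\omega D,\]

so that, to leading order in \(D/\bar{\lambda}\), the oscillation period is set by the mean doubling time while the decay rate is \(\omega^{2}D\); this is consistent with the first-order expression \(\epsilon^{*}\approx 2\omega^{2}D/\bar{\lambda}\) appearing below.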
## Appendix C The oscillation amplitude
In this section, we will discuss how the oscillation amplitude prefactor in Eq. (10) is distributed and derive its typical value. We will assume that \(\lambda_{t}\) is an Ornstein-Uhlenbeck process with mean \(\bar{\lambda}\), variance \(\sigma_{\lambda}^{2}\) and correlation time \(\tau_{\text{cor}}\). To find the amplitude of oscillations in cell count growth rate \(\Lambda(t)\), we assume the population is large enough to the point where \(\partial_{t}M(t)=\Lambda_{\infty}M(t)\) and \(\partial_{t}\Phi(t)=\mu\Phi(t)\). We then plug in Eq. (11) into the definition of \(\Lambda(t)\) to find
\[\Lambda(t)\approx\Lambda_{\infty}\left(1+2\text{Re}\left[e^{(i\omega-\frac{1} {2}\omega^{2})\text{CV}_{m_{d}}^{2}}\Psi(t)\right]\right) \tag{30}\]
we made an approximation \((\mu/\Lambda_{\infty}-1)/(1+i\omega)\approx 1\) for simplicity. Recall that the cell mass synchronicity is given by \(\Psi(t)=\Phi(t)/M(t)\). Eq. (30) tells us that the instantaneous amplitude is equal to \(2e^{-\frac{1}{2}\omega^{2}\text{CV}_{m_{d}}^{2}}|\Psi(t)|\). This is how we determined the instantaneous amplitude trajectories in Fig. 3 (a) and (c). The oscillation fits in Fig. 2 and Fig. 1 were obtained from calculating the simulated cell mass synchronicity \(\Psi(t_{max})\) at the final simulation time \(t_{max}=19\tau_{\text{div}}\) and extrapolating its expected trajectory back in time by plugging \(\Psi(t)\approx\Psi(t_{\text{max}})e^{(-r+i\Omega)(t-t_{\text{max}})}\) into Eq. (30). In this procedure, as \(t_{\text{max}}\) increases, one can obtain better estimates of the asymptotic amplitude pre-factor \(A\) and phase \(\phi\). In fact, by matching Eq. (30) to Eq. (10), we can obtain a formal definition of the random asymptotic amplitude pre-factor and phase as a function of \(\Psi(t)\) at infinity,
\[Ae^{i\phi}:=\lim_{t\to\infty}e^{(i\omega-\frac{1}{2}\omega^{2})\text{CV}_{m_{ d}}^{2}}e^{(r-i\Omega)t}\Psi(t). \tag{31}\]
We will use this definition to analyze the properties of \(A\) as a random variable. Similarly to Eq. (31), one can show that the ensemble average population oscillation amplitude and phase from Eq. (5) satisfy
\[A_{\text{ens}}e^{i\phi_{\text{ens}}}=\lim_{t\to\infty}e^{(i\omega-\frac{1}{2} \omega^{2})\text{CV}_{m_{d}}^{2}}e^{(r-i\Omega)t}\frac{\langle\Phi(t)\rangle}{ \langle M(t)\rangle} \tag{32}\]
Since \(\lambda_{t}\) is an Ornstein-Uhlenbeck process, we know that \(\int_{0}^{t}\lambda_{s}ds\) is normally distributed, where for large \(t\) the mean and variance are
\[\left\langle\int_{0}^{t}\lambda_{s}ds\right\rangle=\bar{\lambda}t+(\lambda_{0 }-\bar{\lambda})\tau_{\text{cor}} \tag{33}\]
and
\[\text{Var}\left(\int_{0}^{t}\lambda_{s}ds\right)=2Dt-3D\tau_{\text{cor}}. \tag{34}\]
Note that both of these are proportional to \(t\), with some shift that scales with the time \(\tau_{\text{cor}}\) it takes for the distribution of \(\lambda_{t}\) to go to its steady-state. We can now calculate the expected values of total mass and collective mass phase using Eq. (34).
\[A_{\text{ens}}=2e^{-\frac{1}{2}\omega^{2}\text{CV}_{m_{d}}^{2}+\frac{3}{2} \omega^{2}\sigma_{\lambda}^{2}\tau_{\text{cor}}^{2}}|\Psi(0)| \tag{35}\]
The term \(\frac{3}{2}\omega^{2}\sigma_{\lambda}^{2}\tau_{\text{cor}}^{2}\) follows from the shift in Eq. (34).
We are now interested in the distribution and typical value of \(A\). Let us assume that variability in \(M(t)\) is negligible. We can write
\[Ae^{i\phi}\approx\lim_{t\to\infty}e^{(i\omega-\frac{1}{2}\omega^{2})\text{CV }_{m_{d}}^{2}}e^{-\mu t}\frac{\Phi(t)}{M(0)} \tag{36}\]
Note that \(\langle Ae^{i\phi}\rangle=A_{\text{ens}}e^{i\phi_{\text{ens}}}\). This already gives us the first indication that the average amplitude of a single simulation is strictly higher than the ensemble average amplitude, by noting that
\[\langle A\rangle=\langle|Ae^{i\phi}|\rangle>|\langle Ae^{i\phi}\rangle|=A_{ \text{ens}} \tag{37}\]
Although we cannot calculate the average amplitude directly, we can use some tricks to find its average square \(\langle A^{2}\rangle\) instead. The ratio between the typical amplitude and ensemble amplitude \(\sqrt{\langle A^{2}\rangle}/A_{ens}\) has a negligible dependence on growth and division variability, so in the following calculations, we will assume \(\bar{\lambda}\tau_{\text{corr}}\ll 1\), whereas \(D\) is finite, and that cells always divide upon attaining a cell mass of 2. In the limit of small growth rate correlations, the integrated growth rate \(\int_{0}^{t}\lambda_{s}ds\) will be identical to a Brownian motion with drift, with diffusion constant \(D\) and drift velocity \(\bar{\lambda}\). For a cell with initial mass \(m_{0}\), its mass at time \(t\) is given by
\[m(t)=m_{0}e^{\int_{0}^{t}\lambda_{s}ds} \tag{38}\]
The time for a cell of initial mass \(m_{0}\) to reach 2 and divide is \(T_{m_{0}}=T(\ln(2/m_{0}))\) where
\[T(u):=\inf\left\{t>0:\int_{0}^{t}\lambda_{s}ds\geq u\right\} \tag{39}\]
is equivalent to the hitting time of a Brownian motion with drift, which is known to have an inverse Gaussian distribution. This gives us a moment-generating function of the division time
\[\left\langle e^{aT(u)}\right\rangle=\exp\left[\frac{u}{2D}\left(\bar{\lambda}- \sqrt{\bar{\lambda}^{2}-4Da}\right)\right] \tag{40}\]
Consider the asymptotic collective mass phase prefactor of a population starting from one cell with mass \(m_{0}\)
\[Z(m_{0}):=\lim_{t\to\infty}\frac{e^{-\mu t}\Phi(m_{0},t)}{m_{0}^{1+i\omega}} \tag{41}\]
This is a complex-valued random variable with \(\langle Z(m_{0})\rangle=1\). The first division takes place at \(t=T_{m_{0}}\), at which the population splits up into two identical sub-populations that start with mass \(1\) at time \(t=T_{m_{0}}\). This gives us a relationship for the collective mass phase similar to the one used to derive Powell's relationship [9].
\[\Phi(m_{0},t)=\Phi_{\uparrow}\left(t-T_{m_{0}}\right)+\Phi_{\downarrow}\left(t -T_{m_{0}}\right), \tag{103}\]
where the arrows are used to distinguish the two sub-populations. Plugging this relationship into Eq. (102) we find
\[Z(m_{0}):=\frac{1}{m_{0}^{1+i\omega}}\left(Z_{\uparrow}+Z_{\downarrow}\right) e^{-\mu T_{m_{0}}} \tag{104}\]
Now we multiply both sides by their conjugate and take their expected values to find
\[\left\langle|Z(m_{0})|^{2}\right\rangle=\frac{1}{m_{0}^{2}}\left(2\left\langle |Z|^{2}\right\rangle+2\right)\left\langle e^{-2\mathrm{Re}\mu T_{m_{0}}}\right\rangle \tag{105}\]
Let us define
\[\epsilon^{*}=2-\frac{1}{2D}\left(\sqrt{\bar{\lambda}^{2}+8D\bar{\lambda}+8(1-\omega^{2})D^{2}}-\bar{\lambda}\right) \tag{106}\]
One can show that up to the first order in \(D/\bar{\lambda}\) we have
\[\epsilon^{*}\approx 2\omega^{2}D/\bar{\lambda}\approx\epsilon \tag{107}\]
The definition in Eq. (106) is chosen such that for all \(u\)
\[\left\langle e^{-2\mathrm{Re}\mu T(u)}\right\rangle=e^{(-2+\epsilon^{*})u} \tag{108}\]
This lets us rewrite Eq. (105) as
\[\left\langle|Z(m_{0})|^{2}\right\rangle=\frac{1}{2}\left(\left\langle|Z|^{2} \right\rangle+1\right)e^{\epsilon^{*}\left(\ln(2)-\ln(m_{0})\right)} \tag{109}\]
If the initial population starts from one cell with mass \(m_{0}=1\), then its value of \(Z(1)\) will be equal in distribution to that of either of the daughter populations. This way one can recursively solve for \(\langle|Z|^{2}\rangle\) in Eq. (109). The full solution for arbitrary initial cell mass is
\[\left\langle|Z(m_{0})|^{2}\right\rangle=\frac{m_{0}^{-\epsilon^{*}}}{2^{1-\epsilon^{*}}-1} \tag{110}\]
This is the final ingredient needed to solve the value of the typical amplitude since we can write
\[\frac{\bar{A}}{A_{\mathrm{ens}}}\approx\lim_{t\to\infty}\frac{\sqrt{\langle|\Phi(t)|^{2}\rangle}}{|\langle\Phi(t)\rangle|}=\sqrt{\left\langle|Z(m_{0})|^{2}\right\rangle} \tag{111}\]
We now use Eq. (110) to find
\[\frac{\bar{A}}{A_{\mathrm{ens}}}\approx\sqrt{\frac{m_{0}^{-\epsilon}}{2^{1-\epsilon}-1}} \tag{112}\]
which gives the typical amplitude for a population stemming from one cell. When the population starts from multiple cells with masses \(m_{j}\), the collective mass phase is a complex superposition of the mass phase trajectories of the populations stemming from each of the individual cells \(\Phi_{j}(m_{j},t)\), so
\[\Phi_{\mathrm{total}}(t)=\sum_{j=1}^{N(0)}\Phi_{j}(m_{j},t) \tag{113}\]
We can now plug this into Eq. (107) and use the single-cell result to obtain
\[\bar{A}=2e^{-\frac{1}{2}\omega^{2}\mathrm{CV}_{m_{b}}^{2}+3\omega^{2}\sigma_{ \lambda}^{2}\tau_{\mathrm{cor}}^{2}}\sqrt{|\Psi(0)|^{2}+J(0)}, \tag{114}\]
where
\[J(0)=\frac{1}{M_{0}^{2}}\sum_{j=1}^{N(0)}m_{j}^{2}\left(\frac{m_{j}^{-\epsilon} }{2^{1-\epsilon}-1}-1\right). \tag{115}\]
For all values of \(\epsilon\) and \(m_{j}\), this is well approximated by Eq. (17) in the main text. When the initial population has a well-mixed mass distribution and \(\Phi_{\mathrm{total}}(0)\) is close to zero, one could argue with the help of a central limit theorem that \(\Phi_{\mathrm{total}}(t)\) and \(Ae^{i\phi}\) have a centered rotationally symmetric bi-variate Gaussian distribution in the complex plane. In that case, we know that \(A\) itself must have a \(\chi_{2}\)-distribution, with a probability density function of
\[f_{A}(a)=\frac{2a}{\bar{A}^{2}}e^{-\frac{a^{2}}{\bar{A}^{2}}} \tag{116}\]
The notion that this is the amplitude distribution for a well-mixed initial population is supported by Fig. 3 (d), where we see that it is a great fit for a population starting from as few as just three cells.
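To make the decay of the synchronicity concrete, here is a minimal Monte Carlo sketch (our own illustrative code, not the simulation behind the figures). It deliberately simplifies the model: single-cell growth rates follow an Ornstein–Uhlenbeck process, every cell divides exactly at mass 2 (so \(\mathrm{CV}_{m_{d}}=0\)), and daughters inherit the mother's growth rate. It tracks \(\Psi(t)=\Phi(t)/M(t)\) and the instantaneous amplitude \(2|\Psi(t)|\), to be compared with the predicted decay rate \(r=\omega^{2}D\); parameters are the TSB column of Table 1.

```python
import numpy as np

rng = np.random.default_rng(1)

lam_bar, sig_lam, tau_cor = 0.057, 0.0060, 6.1     # TSB column of Table 1
omega = 2 * np.pi / np.log(2)
D = sig_lam**2 * tau_cor
r = omega**2 * D                                   # predicted decay rate of |Psi(t)|

dt = 0.05
t_max = 12 * np.log(2) / lam_bar                   # roughly 12 generations

m = np.array([1.0])                                # start from a single cell of mass 1
lam = np.array([lam_bar])                          # ... with growth rate at the mean

amplitude = []
t = 0.0
while t < t_max:
    # Ornstein-Uhlenbeck update (stationary variance sig_lam^2, correlation time tau_cor)
    lam = lam - (lam - lam_bar) * dt / tau_cor \
              + sig_lam * np.sqrt(2 * dt / tau_cor) * rng.standard_normal(lam.size)
    m = m * np.exp(lam * dt)                       # exponential mass growth
    div = m >= 2.0                                 # divide at threshold mass 2
    if div.any():
        m = np.concatenate([m[~div], m[div] / 2, m[div] / 2])
        lam = np.concatenate([lam[~div], lam[div], lam[div]])
    Psi = np.sum(m ** (1 + 1j * omega)) / np.sum(m)   # cell mass synchronicity
    amplitude.append(2 * abs(Psi))                    # instantaneous oscillation amplitude
    t += dt

print("final population size    :", m.size)
print("initial / final amplitude:", amplitude[0], amplitude[-1])
print("predicted decay factor   :", np.exp(-r * t_max))
```

In a single run the final amplitude is a random multiple of \(e^{-rt_{\max}}\) (the random prefactor \(A\) analysed above); it is the decay rate, not the prefactor, that such a run illustrates.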
## Appendix D Determining the growth process parameters
In Ref. [25], a relationship is derived between parameters of an Ornstein-Uhlenbeck growth process \(\bar{\lambda}\), \(\sigma_{\lambda}\), \(\tau_{\mathrm{cor}}\) and the emergent mean, \(\bar{\kappa}\), standard-deviation \(\sigma_{\kappa}\) and mother-daughter correlations \(\rho_{m-d}\) of growth rate averaged over cell cycles. Up to first order in \(\sigma_{\lambda}^{2}/\bar{\lambda}^{2}\), their results are
\[\bar{\kappa}=\bar{\lambda}\left(1+\frac{\sigma_{\lambda}^{2}}{\bar{\lambda}^{2}}h_{1}\left(\frac{\ln(2)}{\bar{\lambda}\tau_{\mathrm{cor}}}\right)\right) \tag{117}\]
\[\sigma_{\kappa}^{2}=\sigma_{\lambda}^{2}h_{1}\left(\frac{\ln(2)}{\bar{\lambda}\tau_{\mathrm{cor}}}\right) \tag{118}\]
\[\rho_{m-d}=h_{2}\left(\frac{\ln(2)}{\bar{\lambda}\tau_{\mathrm{cor}}}\right) \tag{119}\]
where \(h_{1}(z)\) and \(h_{2}(z)\) are auxiliary functions defined as
\[h_{1}(z)=\frac{2}{z}\left(1-\frac{1}{z}(1-e^{-z})\right) \tag{40}\]
\[h_{2}(z)=\frac{1}{2}\frac{(1-e^{-z})^{2}}{z-(1-e^{-z})} \tag{41}\]
By inverting this set of equations we obtained the Ornstein-Uhlenbeck process parameters in Table 1 based on values from Ref. [27]. The values used are listed in Table 2.
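The inversion is straightforward to carry out numerically. The sketch below is our own illustrative code (not the authors' original scripts); it uses the first-order relations above with a standard root finder and recovers the TSB column of Table 1 from the corresponding column of Table 2 up to rounding.

```python
import numpy as np
from scipy.optimize import brentq

def h1(z):
    return (2 / z) * (1 - (1 - np.exp(-z)) / z)

def h2(z):
    return 0.5 * (1 - np.exp(-z)) ** 2 / (z - (1 - np.exp(-z)))

# Per-cycle statistics for TSB (Table 2)
kappa_bar, sigma_kappa, rho_md = 0.057, 0.0045, 0.33

# Invert rho_{m-d} = h2(ln 2 / (lambda_bar * tau_cor)); h2 decreases from 1 to 0
z = brentq(lambda z: h2(z) - rho_md, 1e-3, 50.0)

sigma_lambda = sigma_kappa / np.sqrt(h1(z))                              # sigma_kappa^2 = sigma_lambda^2 h1(z)
lambda_bar = kappa_bar / (1 + (sigma_lambda / kappa_bar) ** 2 * h1(z))   # first-order correction
tau_cor = np.log(2) / (lambda_bar * z)

print(f"lambda_bar ~ {lambda_bar:.4f}, sigma_lambda ~ {sigma_lambda:.4f}, tau_cor ~ {tau_cor:.1f}")
# Table 1 (TSB): 0.057, 0.0060, 6.1
```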
### The transition point for vanishing growth rate correlations
To show that our results are consistent with the transition point derived in the introduction, we want to take a limit of the continuous growth rate model that corresponds to the per-cell growth rate model with vanishing correlations from Ref. [1]. This can be achieved by taking \(\bar{\lambda}\tau_{\text{cor}}\ll 1\) while fixing \(D=\sigma_{\lambda}^{2}\tau_{\text{cor}}\). In this limit we obtain
\[\bar{\kappa}=\bar{\lambda}\left(1+\frac{2}{\ln(2)}\frac{D}{\bar{\lambda}}\right) \tag{42}\]
and
\[\sigma_{\kappa}^{2}=\bar{\lambda}^{2}\frac{2}{\ln(2)}\frac{D}{\bar{\lambda}} \tag{43}\]
and \(\rho_{m-d}\ll 1\). Recall that for a continuous process, the transition point was given by \(164D\leq\bar{\lambda}\). By using the conversion given by Eq. (42) and Eq. (43), we recover the transition point given in Eq. (1) in the introduction.
\[\sigma_{\kappa}\leq 0.13\bar{\kappa} \tag{44}\]
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} & TSB & Synth. rich & gcl+12a.a. & gcl+6a.a & glucose & sorbitol & glycerol \\ \hline \(\bar{\kappa}\) & 0.057 & 0.043 & 0.037 & 0.033 & 0.027 & 0.018 & 0.019 \\ \(\sigma_{\kappa}\) & 0.0045 & 0.0032 & 0.0021 & 0.0022 & 0.0016 & 0.0020 & 0.0018 \\ \(\rho_{m-d}\) & 0.33 & 0.26 & 0.14 & 0.22 & 0.14 & 0.17 & 0.34 \\ CV\({}_{m_{d}}\) & 0.17 & 0.14 & 0.10 & 0.10 & 0.10 & 0.12 & 0.11 \\ \end{tabular}
\end{table}
Table 2: Experimentally measured values of the mean, standard deviation, and mother-daughter correlations of per cell growth rates for E. coli in several growth media as reported in Table S3 of Ref. [27]. The mother-daughter correlations were obtained from the elongation rates in Figure S4 of Ref. [27] by taking the first-generation correlations and subtracting the average of the three final correlations as extrinsic noise. |
2303.06850 | Old and new results on the Furstenberg sets | This paper is a complement to our previous paper [21]. It surveys the works
on the Furstenberg set $S=\{2^{m}3^{n}: n\ge 0, m\ge 0\}$ and its random
version $T$. We also present some new results. For example, it is proved that
$T$ almost surely contains a subset of positive lower density which is
$\frac{4}{3}$-Rider. It is also proved that a class of random sets of integers
are Sidon sets when Bourgain's condition is not satisfied; this generalizes a
result of Kahane-Katznelson. Some open questions about $S$ and $T$ are listed
at the end of the paper. | Aihua Fan, Hervé Queffélec, Martine Quffélec | 2023-03-13T04:42:52Z | http://arxiv.org/abs/2303.06850v1 | # Old and new results on the Furstenberg sets
###### Abstract.
This paper is a complement to [21]. It surveys the works on the Furstenberg set \(S=\{2^{m}3^{n}:n\geq 0,m\geq 0\}\) and its random version \(T\). We also present some new results. For example, it is proved that \(T\) almost surely contains a subset of positive lower density which is \(\frac{4}{3}\)-Rider. It is also proved that a class of random sets of integers are Sidon sets when Bourgain's condition is not satisfied; this generalizes a result of Kahane-Katznelson. Some open questions about \(S\) and \(T\) are listed at the end of the paper.
Key words and phrases: Furstenberg set, Random Furstenberg set, Sidon set, Khintchin class, Uniform distribution, Hartman distribution.

2010 Mathematics Subject Classification: 37A44, 43A46, 60G46.

This survey, enriched by some new results, is devoted to the innocent-looking Furstenberg set of integers
\[S=\{2^{j}\times 3^{k}:j,k\in\mathbb{N}_{0}\}=:\{s_{1}<s_{2}<\cdots<s_{n}< \cdots\},\]
and its random analogue \(T\), to which we come a little later in this paper. The distribution in \(\mathbb{N}\) of the set \(S\) had already been studied by Ramanujan and Hardy [30] at the beginning of the last century. More generally, if \(\{q_{1},\ldots,q_{s}\}\) is a finite set of mutually prime numbers, we denote by \(S(q_{1},\ldots,q_{s})\) the multiplicative semigroup generated by the \(q_{j}^{\prime}s\); when \(\{q_{1},\ldots,q_{s}\}\) are the \(s\) first prime numbers, we get what arithmeticians call the \(s\)-friable numbers [24]. In the extreme case of \(s=1\), the set reduces to a canonical Hadamard set which enjoys many nice properties in ergodic theory and harmonic analysis (as is well-known). It is natural to explore which properties remain true when we switch to \(S(q_{1},\ldots,q_{s})\) with \(s\geq 2\) and to other less lacunary sets.
We particularly focus on the semigroup \(S(2,3)=:S\), the set of \(2\)-friable numbers, also called Furstenberg set because of its link with the famous metric conjecture formulated by Furstenberg in 1960's [25], that we recall in Subsection 1.4. This conjecture and other questions originate from the attempt to explicit the following feeling: "expansions to bases \(2\) and \(3\) look very different!". Note that \(2\) and \(3\) can be replaced by multiplicatively independent bases \(a\) and \(b\) (i.e. \(\log a/\log b\notin\mathbb{Q}\)).
The aim of this study is to explore dynamical as well as harmonic analysis properties of \(S\) and of its random analogue \(T\). This paper is mainly based on our previous one [21]. Also, we take the opportunity to elaborate on some improvements and some new considerations that we omitted or missed in [21]. So, to some extent, this work may appear as a complement to [21].
We will make use of the classical notations: we put \(\mathbb{T}=\mathbb{R}/\mathbb{Z}\) identified with \([0,1)\), \(\|x\|:=d(x,\mathbb{Z})\), \(x=\{x\}+[x]\); \(e_{n}(x)=e(nx):=e^{2i\pi nx}\) for \(n\in\mathbb{Z}\) and \(x\in\mathbb{T}\). We write \(m\) for the Lebesgue measure on \(\mathbb{T}\). \(|E|\) will denote the cardinality of a finite set \(E\). By \(u\lesssim v\) we mean \(u\leq Cv\) for some constant \(C>0\). We also adopt the Hardy notations \(\asymp\) and \(\sim\), Landau's \(o\) and \(O\)[29, p.7] and the English convention \(\mathbb{N}=\{1,2,\dots,\}\). We denote \(a\wedge b\) the gcd of the positive integers \(a,b\). Recall that a _Hadamard set_\(E=(\lambda_{n})\subset\mathbb{N}\) is a set that satisfies \(\lambda_{n+1}/\lambda_{n}\geq r\) for some \(r>1\) and for all \(n\geq 1\).
## 1. Historical introduction and motivations
The non-lacunary behaviour of the Furstenberg set \(S\) seems to have been known for a long time. Let us first mention Hardy and Ramanujan's contribution, followed by major related results.
### Ramanujan and others
In his first letter addressed to Hardy in 1913 (cf. [30], Chapter V), Ramanujan asserted without explanation that
\[|S_{N}|:=|S\cap[1,N]|=\frac{\log 2N\times\log 3N}{2\log 2\log 3}+\frac{1}{2}.\]
Hardy understood this formula as an approximation and stated that there is no evidence to show how accurate Ramanujan supposed it to be. Chapter V of [30] is devoted to this lattice-type problem. In two papers published in 1921 and 1922, Hardy and Littlewood considered this question and proved the following estimation
\[|S_{N}|=\frac{\log 2N\times\log 3N}{2\log 2\log 3}+o(\log N/\log\log N). \tag{1}\]
Ostrowski obtained quite similar results by different methods. The estimation (1) implies that
\[|S_{N}|=\frac{1}{2\log 2\log 3}\log^{2}N+\frac{\log 6}{2\log 2\log 3}\log N+o( \log N),\]
By taking \(N=s_{n}\), we can easily deduce (cf. [21])
\[s_{n}\sim\exp(C\sqrt{n})/\sqrt{6},\quad\text{with }C=\sqrt{2\log 2\log 3}. \tag{2}\]
So we see that \(S\) has an intermediate sparseness with rate of increase both subexponential and superpolynomial. In [21], we have proved a more precise estimate of the remainder term in (1) by using diophantine approximation and discrepancy. We will come back to this in Section 2.
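These asymptotics are easy to confirm by brute force; the following sketch (our own illustrative code) enumerates \(S\) up to a bound and compares \(|S_{N}|\) and the largest element with the formulas above.

```python
import math

def furstenberg_set(limit):
    """All numbers of the form 2^j * 3^k up to `limit`, in increasing order."""
    out = []
    p2 = 1
    while p2 <= limit:
        v = p2
        while v <= limit:
            out.append(v)
            v *= 3
        p2 *= 2
    return sorted(out)

N = 10**12
S = furstenberg_set(N)
n = len(S)

main_term = math.log(2 * N) * math.log(3 * N) / (2 * math.log(2) * math.log(3))
print("|S_N| =", n, "  Hardy-Ramanujan main term =", round(main_term, 1))

C = math.sqrt(2 * math.log(2) * math.log(3))
print("s_n =", S[-1], "  exp(C*sqrt(n))/sqrt(6) =", round(math.exp(C * math.sqrt(n)) / math.sqrt(6)))
```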
### The number \(\alpha=\log 2/\log 3\)
An attractive feature in this study is the role played by the number \(\alpha:=\frac{\log 2}{\log 3}=0.631\cdots\) and its diophantine properties. This transcendental number is diophantine (non-Liouville), which means that for some \(C>0\) and \(\rho\geq 1\)
\[\|q\alpha\|\geq Cq^{-\rho}\text{ for all }\ q\in\mathbb{N}.\]
Such a parameter \(\rho\) is an irrationality exponent of \(\alpha\), which is thus called \(\rho\)_-diophantine_. Concerning a good value of \(\rho\) for \(\alpha\), G. Rhin proved that \(\rho\leq 7.616...\) by using Pade approximants [61]; this estimate has been improved by Wang and Wu to \(\rho\leq 4.11633052....\). [74].
Tijdeman [71, 72], for more general \(S=S(q_{1},\ldots,q_{s})\), established the following inequalities where \(A,B\) are positive constants depending only on \(q_{1},\ldots,q_{s}\):
\[s_{n}/(\log s_{n})^{A}\lesssim s_{n+1}-s_{n}\lesssim s_{n}/(\log s_{n})^{B}.\]
His proof used the Baker's results on linear forms of logarithms.
For the special case of \(S=S(2,3)\), we have obtained an improved form of these inequalities with explicit \(A\) and \(B\) in terms of any irrationality exponent \(\rho\) of \(\alpha\)[21] (see also Theorem 2.1). The estimation on \(|S_{N}|\) will be useful in considering the question whether \(S\) is a Khintchin set, one of the dynamical aspects of \(S\) (see Section 3).
### Two combinatorial properties of \(S\)
\(\bullet\) G. Rauzy observed that the increasing sequence of powers of \(2\) or \(3\) is arranged according to a sturmian law. More precisely, there are one or two powers of \(2\) between two consecutive powers of \(3\), and the sequence \(u\) with values in the alphabet \(\{1,2\}\) obtained by coding one power of \(2\) by \(1\) and two powers of \(2\) by \(2\) is
\[u:=1\ 2\ 1\ 2\ 1\ 2\ 2\ 1\ 2\ 1\ 2\ 2\ 1\ 2\cdots,\]
which is the characteristic sturmian sequence associated to \(\alpha\) (see [1] page 143, for equivalent definitions). Indeed, \(u_{n}=1\) means that there exist \(k<j\) such that \(3^{k}<2^{j}<3^{k+1}\), with \(j+k=n.\) Hence, \(k\log 3<j\log 2<(k+1)\log 3,\) in other words \(0<j\alpha-k<1\); since \(k=n-j\), we get \(0<j(1+\alpha)-n<1\) then \(0<j-n\beta<\beta\), where \(\beta:=1/(1+\alpha)\) with continued fraction expansion \([0;1,\alpha_{1},\alpha_{2},\cdots]\). Finally,
\[u_{n}=1\Longleftrightarrow\{n\beta\}\in]1-\beta,1),\]
which is one of the equivalent descriptions of the characteristic sturmian sequence with slope \(\alpha\in(0,1)\).
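This coding is easy to reproduce by machine. The sketch below (our own illustrative code) counts the powers of \(2\) lying between consecutive powers of \(3\) and checks the result against the Beatty-difference expression \(\lfloor k/\alpha\rfloor-\lfloor(k-1)/\alpha\rfloor\), which is our own way of encoding the same sturmian structure (we do not attempt to reproduce the exact indexing used above).

```python
import math

alpha = math.log(2) / math.log(3)

def word_from_powers(n_terms):
    """u_k = number of powers of 2 strictly between 3^(k-1) and 3^k."""
    word = []
    for k in range(1, n_terms + 1):
        lo, hi = 3 ** (k - 1), 3 ** k
        word.append(sum(1 for j in range(1, 2 * k + 2) if lo < 2 ** j < hi))
    return word

def word_from_beatty(n_terms):
    """Same word via the Beatty-difference formula floor(k/alpha) - floor((k-1)/alpha)."""
    return [math.floor(k / alpha) - math.floor((k - 1) / alpha) for k in range(1, n_terms + 1)]

print(word_from_powers(14))                          # [1, 2, 1, 2, 1, 2, 2, 1, 2, 1, 2, 2, 1, 2]
print(word_from_powers(40) == word_from_beatty(40))  # True
```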
\(\bullet\) The continued fraction expansion of \(\frac{\log 2}{\log 3}\) is related to \(S\). The following observations and more on the convergents to \(\alpha\) can be found in [4]. Here are the first terms of the sequence \(S\):
\[2,\ 3,\ 2^{2},\ 2\times 3,\ \boxed{2^{3},\,3^{2}},\ 2^{2}\times 3,\ 2^{4},\ 2\times 3^{2},\ 2^{3}\times 3,\ \boxed{3^{3},\,2^{5}},\ 2^{2}\times 3^{2},\]
\[2^{4}\times 3,\ 2\times 3^{3},\ 2^{6},\ 2^{3}\times 3^{2},\ 3^{4},\ 2^{5}\times 3,\ 2^{2}\times 3^{3},\ 2^{7},\ 2^{4}\times 3^{2},\ 2\times 3^{4},\]
\[2^{6}\times 3,\ 2^{3}\times 3^{3},\ \boxed{3^{5},\,2^{8}},\ 2^{5}\times 3^{2},\ 2^{2}\times 3^{4},\ 2^{7}\times 3,\ 2^{4}\times 3^{3},\ 2\times 3^{5},\ 2^{9},\]
\[2^{6}\times 3^{2},\ 2^{3}\times 3^{4},\ 3^{6},\ 2^{8}\times 3,\ 2^{5}\times 3^{3},\ 2^{2}\times 3^{5},\ 2^{10},\ 2^{7}\times 3^{2},\ 2^{4}\times 3^{4},\]
\[2\times 3^{6},\ 2^{9}\times 3,\ 2^{6}\times 3^{3},\ 2^{3}\times 3^{5},\ \boxed{2^{11},\,3^{7}},\ \ldots,\ \boxed{2^{19},\,3^{12}},\ \ldots,\ \boxed{3^{19},\,2^{30}},\ \ldots\]
We call a _pure pair_ any pair of consecutive terms \((3^{p},2^{q})\) or \((2^{q},3^{p})\) in the sequence \(S\); these pairs lead exactly to the _Farey approximations_ \((p/q)\) to \(\alpha\), since \(|q\alpha-p|<|q^{\prime}\alpha-p^{\prime}|\) for any \(q^{\prime}<q\); conversely, if \(p/q\) is a _convergent_ to \(\alpha\), then \((3^{p},2^{q})\) or \((2^{q},3^{p})\), according to the parity, is a pure pair in \(S\). We easily identify the first convergents to \(\alpha\) as \(1,1/2,2/3,5/8,12/19,\dots\) whence
\[\alpha=[0;1\ 1\ 1\ 2\ 2\ \cdots].\]
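The pure pairs, and the fractions they produce, can be listed mechanically; in the output of the sketch below (our own illustrative code), the convergents \(1,1/2,2/3,5/8,12/19\) all appear, together with intermediate fractions such as \(3/5\) and \(7/11\).

```python
def furstenberg_set(limit):
    out = []
    p2 = 1
    while p2 <= limit:
        v = p2
        while v <= limit:
            out.append(v)
            v *= 3
        p2 *= 2
    return sorted(out)

def exponent_of(n, base):
    """Return e if n == base**e, otherwise None."""
    e = 0
    while n % base == 0:
        n //= base
        e += 1
    return e if n == 1 else None

S = furstenberg_set(10**6)
for a, b in zip(S[1:], S[2:]):                 # skip s_1 = 1
    qa, pa = exponent_of(a, 2), exponent_of(a, 3)
    qb, pb = exponent_of(b, 2), exponent_of(b, 3)
    if qa is not None and pb is not None:      # pure pair (2^q, 3^p)
        print(f"({a}, {b})  ->  p/q = {pb}/{qa}")
    elif pa is not None and qb is not None:    # pure pair (3^p, 2^q)
        print(f"({a}, {b})  ->  p/q = {pa}/{qb}")
```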
In view of Subsection 1.2, the open question whether \(\alpha\) could be a badly approximable number (i.e. with bounded partial quotients) becomes relevant since then \(\rho=1\) is admissible.
### Furstenberg conjecture and dynamical properties of \(S\)
The study of \(S\) is motivated by a famous, still open, conjecture of Furstenberg, namely: _A continuous (i.e. atomless) probability Borel measure \(\mu\) on \(\mathbb{T}\), \(\times 2\times 3\)-invariant, must be equal to the Lebesgue measure \(m\)._ In terms of the Fourier transform, the conjecture of Furstenberg is stated as follows
\[(C)\ \ \ \widehat{\mu}(2n)=\widehat{\mu}(3n)=\widehat{\mu}(n)\ \ \text{for every $n\in\mathbb{Z}\Longrightarrow\mu=m$.}\]
This conjecture is related to the dynamics of the semi-group \(S\), e.g. the distribution of the orbits \((s_{n}x)\) for various \(x\in\mathbb{T}\). Let \(\Lambda=(\lambda_{n})\subset\mathbb{Z}\); recall that the \(\Lambda\)-orbit of \(x\) is the sequence \((\lambda_{n}x)\) of \(\mathbb{T}\).
**Definition 1.1**.: _We say that \(x\) is \(\Lambda\)-normal if the sequence \((\lambda_{n}x)\) is uniformly distributed modulo 1._
Thanks to the H. Weyl's theorem this means that
\[\frac{1}{N}\sum_{n=1}^{N}e(h\lambda_{n}x)\to 0\ \text{for every integer $h\neq 0$.} \tag{3}\]
It is known that for any increasing sequence \(E=(\lambda_{n})\subset\mathbb{N}\), (3) holds for almost all \(x\in\mathbb{T}\) ([43]) hence the notation:
**Definition 1.2**.: _We denote by \(W(\Lambda)\) the negligible set of non-\(\Lambda\)-normal numbers._
When \(\Lambda=(q^{n})\) for some \(q\geq 2\), the \(\Lambda\)-orbit of \(x\) is just the orbit of \(x\) under the action of the \(q\)-shift \(\sigma_{q}:=x\mapsto qx\) on \(\mathbb{T}\), and we speak of \(q\)-normal numbers. In this case, any orbit \(O_{q}(x):=(\sigma_{q}^{n}x)\) can be described by the \(q\)-adic expansion of \(x\). It is thus easy to construct uncountable sets of \(x\) with non-dense orbit, and continuous \(q\)-invariant probability measures singular with respect to the Lebesgue measure, such as Bernoulli measures. The set \(W(q)\) of non-\(q\)-normal numbers has Hausdorff dimension 1 (which holds more generally for any lacunary (Hadamard) sequence in place of \((q^{n})\); cf. [18, 20]). On the other hand, by the Birkhoff theorem, almost every orbit is \(\mu\)-distributed for any \(q\)-invariant ergodic measure \(\mu\).
Furstenberg proved in [25] that \((s_{n}x)\) is dense for every \(x\notin\mathbb{Q}\), a first notable difference between \(S\) and the lacunary sequences. At the end of his
paper, he claimed that \(W(S)\) is uncountable by suggesting an appropriate family of Liouville numbers inside. This is a first element regarding his conjecture.
Actually, if \(W(S)\) were reduced to \(\mathbb{Q}\), Furstenberg's conjecture would hold. Indeed, if it fails, there exists a continuous probability measure \(\mu\) on \(\mathbb{T}\), \(\times 2\times 3\)-invariant, with \(\widehat{\mu}(a)\neq 0\) for some \(a\in\mathbb{Z}^{*}\). Then we get a contradiction:
\[0\neq|\widehat{\mu}(a)|=\limsup\Big{|}\frac{1}{N}\sum_{n=1}^{N}\widehat{\mu}( as_{n})\Big{|}\leq\limsup\int_{\mathbb{T}}\Big{|}\frac{1}{N}\sum_{n=1}^{N}e(-as_{n} x)\Big{|}d\mu(x)\]
\[\leq\int_{\mathbb{T}}\limsup\Big{|}\frac{1}{N}\sum_{n=1}^{N}e(-as_{n}x)\Big{|} d\mu(x)\leq\mu(\mathbb{Q})=0.\]
Also, in the 1990s, Bergelson asked whether \(S\) could be a _recurrence set_, referring to the dynamical classification developed in [10, 69]. We shall prove in Section 4.2.3 that this is not the case.
Another dynamical classification is more adapted to the set \(S\) and goes back to Hartman [31]. We briefly recall the definition that he introduced.
**Definition 1.3**.: \(E\subset\mathbb{Z}\) _is a Ka-set if there exists a continuous measure \(\mu\) on \(\mathbb{T}\) such that \(\inf_{n\in E}|\widehat{\mu}(n)|>0\). (Here "Ka" stands for R. Kaufman.)_
Hartman observed that \(W(E)\) is uncountable for any Ka-set \(E:=(n_{k})\). Indeed, if not, we would have \(\lim_{N}\frac{1}{N}\sum_{k\leq N}e(n_{k}t)\to 0\) for every \(t\) outside a countable set. As above, this leads to
\[\lim_{N}\frac{1}{N}\sum_{k\leq N}\widehat{\nu}(n_{k})\to 0 \tag{4}\]
for every continuous measure \(\nu\). However, a continuous measure \(\mu\) exists with \(|\widehat{\mu}(n_{k})|>\delta\) since \(E\) is a Ka-set; the continuous measure \(\nu=\mu*\tilde{\mu}\) (\(\tilde{\mu}(A):=\overline{\mu(-A)}\)) in turn satisfies \(\widehat{\nu}(n_{k})>\delta^{2}\) and (4), whence a contradiction.
Later R. Lyons [50], investigating partial answers to the conjecture, asked whether \(S\) could be a Ka-set (conjecturing the opposite), since this would be implied by the disproof of the conjecture. Badea and Grivaux [2] gave a positive answer, recovering the uncountability of \(W(S)\).
It becomes clear that the size and the shape of \(W(S)\) are relevant to the conjecture. In [21], keeping in mind the Hadamard case, we prove that \(W(S)\) is rather big, since it has Hausdorff dimension \(>0.45\) (recently improved to \(\dim_{H}(W(S))=1\) by S. Usuki [73]), and rather well spread out, by constructing a _Rajchman measure_ supported on \(W(S)\).
**1.5. Khintchin class.** Recall that \((\lambda_{n}x)\) is almost surely uniformly distributed for any increasing sequence \(E=(\lambda_{n})\subset\mathbb{N}\). As a consequence, for
every Riemann-integrable function \(f\) we have
\[\frac{1}{N}\sum_{n=1}^{N}f(\lambda_{n}x)\to\int_{\mathbb{T}}fdm\quad a.e. \tag{5}\]
Khinchin conjectured that this holds for \(L^{\infty}\)-functions [41]. Marstrand [52], refuting this conjecture, proved that (5) fails for \(E=\mathbb{N}\), by taking \(f=\mathbf{1}_{O}\in L^{\infty}(\mathbb{T})\) with some well-chosen open set \(O\). This led us to coin the _Khinchin class_ of \(E=(\lambda_{n})\subset\mathbb{N}\) as
\[\mathcal{K}_{E}:=\Big{\{}f\in L^{1}(\mathbb{T}):\frac{1}{N}\sum_{n=1}^{N}f( \lambda_{n}x)\to\int_{\mathbb{T}}fdm\quad\text{a.e.}\Big{\}} \tag{6}\]
For the Furstenberg set \(S\), Marstrand also proved in [52] that \(\mathcal{K}_{S}\) contains \(L^{\infty}(\mathbb{T})\) and, later, Nair [54] got \(\mathcal{K}_{S}=L^{1}(\mathbb{T})\).
In [21] we gave an elementary proof of the inclusion \(\mathcal{K}_{S}\supset L\log^{+}L\). The slight restriction on integrability is due to the following: we need some maximal function to be integrable, contrary to a fake result in Mane's book [51] (Corollary 1.6, page 96), for which we will present a simple counterexample (Theorem 3.4). Actually, using the classical Birkhoff ergodic theorem, we proved a result which recovers Marstrand's result, nearly Nair's result and extends to \(\times 2\times 3\)-invariant probability measures (see Theorem 3.2). This highlights the fact that continuous and singular \(\times 2\times 3\)-invariant measures (counterexamples to the conjecture of Furstenberg), if any, must be carried by \(W(S)\).
**1.6. Lacunarity in harmonic analysis.** Thin sets of integers play a fundamental role as _spectrum_ in harmonic analysis. Here is a generic definition. If \(X\subset L^{1}(\mathbb{T})\) is a Banach space and \(E\subset\mathbb{Z}\) a subset of integers, we set
\[X_{E}=\{f\in X:\widehat{f}(n)=0\text{ for }n\notin E\}.\]
It appears that functions in \(X_{E}\), i.e. functions in \(X\) with Fourier spectrum in a thin set \(E\), enjoy better properties than a generic function in \(X\) (we have \(X_{E}=\{0\}\) in the extreme case \(E=\emptyset\)). Here are some special examples of thinness:
1. _Every_ \(f\in L^{2}_{E}\) _satisfies_ \(\|f\|_{q}\leq C_{q}\|f\|_{2}\) _for some_ \(2<q<\infty\)_. We then say that \(E\) is a \(\Lambda(q)\)-set (in short \(L^{2}_{E}\subset L^{q}\)).
2. _Every_ \(f\in L^{\infty}_{E}\) _satisfies_ \(\sum_{n}|\widehat{f}(n)|<\infty\). We then say that \(E\) is a Sidon set (in short \(L^{\infty}_{E}\subset A(\mathbb{T})\), the Wiener algebra).
3. _Every_ \(f\in L^{\infty}_{E}\) _satisfies_ \(\sum_{n}|\widehat{f}(n)|^{p}<\infty\) _for some fixed_ \(1\leq p<2\)_. We then say that \(E\) is a \(p\)-Sidon set (\(L^{\infty}_{E}\subset A_{p}(\mathbb{T})\)).
It is well known that Hadamard lacunary sets enjoy all these properties (for \(p=1\) and for all \(q<\infty\)). The Furstenberg set \(E=S\), which is less lacunary, is an interesting candidate to be considered from this harmonic analysis viewpoint. Such properties for \(S\) seem poorly known, contrary to the well-understood sumset \(\{2^{j}+3^{k}\}\)[45] (we could say that the characters
on \(\mathbb{T}\) digest better the sum of integers than the product!). However, in 1970's, Gundy and Varopoulos [28] proved that _the set \(S\) is \(\Lambda(q)\) for all \(q>2\)_ (and even more). We have produced a proof in [21] when \(S=S(q_{1},\ldots,q_{s})\) with a careful analysis of the constants and their dependence on the parameter \(s\). The proof uses the square function of a martingale and Burkholder's inequalities.
In a fascinating way, combinatorics reappears in this study of thin sets \(E\); in particular, the rate of growth of the cardinality \(|E_{N}|:=|E\cap[1,N]|\) turns out to be a main tool [21], which immediately implies that \(S\) _is not a Sidon set, and not even \(p\)-Sidon for \(p<4/3\)_; we will come back to this in the following. Another measuring tool, less easy to implement, consists in counting the relations in \(E\), i.e. the solutions of \(\sum\epsilon_{j}n_{j}=0\) with \(\epsilon_{j}\in\{0,\pm 1\}\), where \(E:=(n_{j})\). This is due to the use of Riesz products, which we cannot do without.
**1.7. Random version of Furstenberg set.** In order to understand which properties of \(S\) depend on its arithmetic structure and which on its sparseness, we parallelly considered in [21] a random version \(T\) of \(S\): let \((\delta_{k})\) be a sequence of numbers with \(0\leq\delta_{k}<1\) and \(\sum\delta_{k}=\infty\), and let \((\xi_{k})_{k\geq 1}\) be a sequence of (0-1)-valued independent random variables with _expectation_\(\mathbb{E}(\xi_{k})=:\delta_{k}\); put
\[R=R(\omega)=\{k:\xi_{k}(\omega)=1\}.\]
This is a random set of integers. The random version \(T\) of the Furstenberg set corresponds to the special choice \(\delta_{k}=\log k/k\). Indeed, almost surely \(T\) has the same growth as \(S\) : with \(T_{N}:=T\cap[1,N]\),
\[|T_{N}|\sim\mathbb{E}(|T_{N}|)=\sum_{k=1}^{N}\frac{\log k}{k}\sim\int_{1}^{N} \frac{\log t}{t}dt=\frac{\log^{2}N}{2},\]
where the first "\(\sim\)" is an unconventional law of large numbers (the normalizer is \(\mathbb{E}(|T_{N}|)\), which is not proportional to \(N\)). See [3] for a proof of this law of large numbers.
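One can see this growth on a single sample; the sketch below (our own illustrative code) draws \(T\) with \(\delta_{k}=\log k/k\) and compares its counting function with \(\frac{1}{2}\log^{2}N\).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10**6
k = np.arange(2, N + 1)                 # delta_1 = 0, so we may start at k = 2
delta = np.log(k) / k                   # selection probabilities (all < 1)
T = k[rng.random(k.size) < delta]       # one sample of the random Furstenberg set T

for M in (10**3, 10**4, 10**5, 10**6):
    print(M, int(np.count_nonzero(T <= M)), round(0.5 * np.log(M) ** 2, 1))
```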
Now we have lost the arithmetical property of the initial set \(S\), whence the question of the remaining, or additional, properties of \(T\). Here are some results.
1. \(T\) is almost surely Hartman-distributed (\(W(T)=\mathbb{Q}\)), see Bourgain [11]. More generally, this holds for the random set \(R\) as soon as \(\delta_{k}\downarrow\) and \(m_{N}/\log N\to\infty\) where \(m_{N}=\sum_{k=1}^{N}\delta_{k}\). It is pointed out in [23] that the monotonicity of \(\delta_{k}\) can be weakened to \(\sum_{n=1}^{N}|\delta_{n}-\delta_{n+1}|=o(m_{N})\). In this work, we prove a converse of Bourgain's result (extending a result of Kahane and Katznelson): if \(\delta_{k}\downarrow\) and \(m_{N}=O(\log N)\), then \(R\) is almost surely Sidon, and hence not Hartman-distributed (see Theorem 5.12).
2. \(T\) is almost surely \(p\)-Rider for each \(p>4/3\)[21]. We were not able to prove the same result for \(S\), but will prove here that \(S\) is \(3/2\)-Rider (see Theorem 4.9 and Theorem 4.10).
3. \(T\) contains almost surely a subset \(T^{\prime}\) of positive lower density which is \(\Lambda(p)\) for each \(2<p<\infty\)[48]. We conjecture that \(T\) itself is \(\Lambda(p)\) for each \(p<\infty\).
The following comparative table is very meaningful.
\begin{tabular}{|c|c|} \hline DETERMINISTIC SET \(S\) & RANDOM SET \(T\) \\ \hline Weak lacunarity of \(S\) & Similar lacunarity for \(T\) \\ \hline Hausdorff dimension of \(W(S)=1\) & \(W(T)=\mathbb{Q}\) \\ \hline \(W(S)\) supports a Rajchman measure & \(W(T)=\mathbb{Q}\) \\ \hline \(S\) is a Ka-set & \(T\) is not a Ka-set \\ \hline \(S\) is a Bohr-closed & \(T\) is Bohr-dense \\ \hline \(\mathcal{K}_{S}=L^{1}\) & "every" \(f\in\mathcal{K}_{\mathbb{N}}\) is a.s. contained in \(\mathcal{K}_{T}\) \\ \hline \(S\) is \(\Lambda(p)\) & A big subset of \(T\) is \(\Lambda(p)\). \\ \hline \(S\) is p-Rider for \(p=3/2\), & \(T\) is \(p\)-Rider for \(p>4/3\), \\ how about \(4/3<p<3/2\)? & but not \(p\)-Rider for \(p<4/3\). \\ \hline \end{tabular}
## 2. Arithmetical aspects
It is easy to prove that \(|S_{N}|=:|S\cap[1,N]|\asymp(\log N)^{2}\) whence the rate of growth \(s_{n}\asymp\exp(\sqrt{n})\). Going further needs sharp estimates and more work. More generally, for \(S(q_{1},\ldots,q_{s})\), the multiplicative semi-group generated by coprime integers \(q_{1},\ldots,q_{s}\), Marstrand [52, p.545] gave the following asymptotic estimate
\[|S(q_{1},\ldots,q_{s})\cap[1,n]|=K_{s}(\log n)^{s}+O((\log n)^{s-1})\quad \text{as}\ \ n\to\infty, \tag{7}\]
where
\[K_{s}=\frac{1}{s!}\prod_{j=1}^{s}\frac{1}{\log q_{j}}\]
which is the volume of the simplex \(\{x=(x_{j})\in\mathbb{R}^{s}:\ \sum_{1}^{s}\lambda_{j}x_{j}\leq 1,\ x_{j}\geq 0\}\), with here \(\lambda_{j}=\log q_{j}\).
Our first task in [21], in the case of \(S\), consisted in an improved remainder term with respect to Hardy's one, namely \(o(\log N/\log\log N)\), by using diophantine approximation properties of the number \(\alpha=\frac{\log 2}{\log 3}\) and the discrepancy of the sequence \((n\alpha)\).
According to Gelfond [26], \(\frac{\log a}{\log b}\) is either rational (\(a=N^{p},b=N^{q}\)) or transcendental, so that \(\alpha\) is indeed transcendental. But it is not Liouville meaning that there exists \(\delta>0\) and \(\rho<\infty\) such that
\[\forall q\in\mathbb{N},\qquad\|q\alpha\|\geq\delta q^{-\rho}\]
Such a parameter \(\rho\geq 1\) is called _an irrationality exponent_ of \(\alpha\), which is then called \(\rho\)-diophantine. Concerning a good \(\rho\) for \(\alpha\), as said in the introduction, G. Rhin [61] obtained \(\rho\leq 7.616...\) and many years later Wang and Wu [74] got the improved estimate \(\rho\leq 4.11633052.....\). Based on this estimate, the following result is proved.
**Theorem 2.1** ([21]).: _Assume that \(\alpha\) is \(\rho\)-diophantine; let \(\delta:=\frac{\rho}{\rho+1}\). Then_
\[|S_{N}|=\frac{1}{2\log 2\log 3}\log^{2}N+\frac{\log 6}{2\log 2\log 3}\log N+O(( \log N)^{\delta}). \tag{8}\]
_Equivalently,_
\[|S_{N}|=\frac{\log 2N\times\log 3N}{2\log 2\log 3}+O((\log N)^{\delta}).\]
_We can take \(\rho=4.11633052....\) and \(\delta:=0.80454645\cdots\)_
In our case \(S=S(2,3)\), we could improve the inequalities of Tijdeman for \(S(q_{1},\ldots,q_{s})\), with here explicit constants depending once more on an irrationality exponent \(\rho\) of \(\alpha\).
**Theorem 2.2** ([21]).: _We have that (with \(2r=1/(\rho+1)\))_
\[\frac{1}{(\log s_{n})^{\rho}}\lesssim\frac{s_{n+1}-s_{n}}{s_{n}}\lesssim\frac{ 1}{n^{r}}\lesssim\frac{1}{(\log s_{n})^{2r}}. \tag{9}\]
_We can take \(\rho\sim 4.116\) and \(2r\sim 0.1954\). In particular, \(s_{n+1}-s_{n}\to\infty\) and \(s_{n+1}/s_{n}\to 1\)._
**Remarks**.: 1. The question of to what extent the right-hand inequality in (9) can be strengthened was raised by Tijdeman, who observed that \(\log s_{n}\) is best possible. We can add a small precision to (9): for infinitely many pairs \((s_{n},s_{n+1})\), related to the approximations to \(\alpha\), we have
\[\frac{s_{n+1}-s_{n}}{s_{n}}\leq\frac{2\log 2\log 3}{\log s_{n}}.\]
This is due to the best approximation property of the convergents [21].
2. It then remains to locate those pairs. If \(\alpha\) is a badly approximable number, those pairs appear with bounded gaps and the inequalities can be refined. If not, the authors in [4] observe that \(S\) contains arbitrarily long intervals in a _geometric_ progression: _For infinitely many \(n\) there exist \(a_{n}:=2^{-q_{n}}3^{p_{n}}\in\mathbb{Q}\) and \(j:=j_{n}\geq 1\) such that \(s_{k+1}/s_{k}=a_{n}\) for \(j\leq k\leq j+n-1\)._
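Both phenomena in Theorem 2.2 (gaps tending to infinity, ratios tending to \(1\)) are visible numerically; here is a small illustration (our own code, reusing the same enumeration helper as in the sketch of Section 1.1).

```python
import bisect

def furstenberg_set(limit):
    out = []
    p2 = 1
    while p2 <= limit:
        v = p2
        while v <= limit:
            out.append(v)
            v *= 3
        p2 *= 2
    return sorted(out)

S = furstenberg_set(10**12)
for M in (10**3, 10**6, 10**9, 10**11):
    i = bisect.bisect_left(S, M) - 1          # index of the last element of S below M
    a, b = S[i], S[i + 1]
    print(f"near {M:.0e}:  s_(n+1) - s_n = {b - a},  s_(n+1)/s_n = {b / a:.6f}")
```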
## 3. Dynamical aspects
We now turn to the dynamics of the sequence \(S\) and the description of the \(S\)-orbits, i.e. of \(\{s_{n}x\}\) for \(x\in\mathbb{T}\). Finite orbits do exist and can be described. A topological study has been achieved by Furstenberg who proved the following rigidity result.
**Theorem 3.1** ([25]).: _An infinite closed subset of \(\mathbb{T}\) which is \(S\)-invariant must be \(\mathbb{T}\) itself._
In particular, every infinite \(S\)-orbit is dense in \(\mathbb{T}\). This very interesting result reveals a deep difference between the \(S\)-action and the rich dynamics of \(q\)-shifts (\(q\geq 2\)) where infinite non-dense orbits exist. To go further, we need a more elaborate analysis of the distribution of the points \(s_{n}x\). We say that the sequence of numbers \((u_{k})\subset\mathbb{T}\) is uniformly distributed if, for any \(0\leq a<b<1\),
\[\frac{1}{N}|\{1\leq k\leq N,\ u_{k}\in[a,b)\}|\to b-a;\]
a very useful criterion due to H. Weyl asserts that \((u_{k})\) is uniformly distributed if and only if, for every \(h\neq 0\),
\[\frac{1}{N}\sum_{1\leq k\leq N}e(hu_{k})\to 0.\]
We shall also make use of the following definition.
**Definition 3.1**.: _A sequence \(E:=(n_{k})\) of integers is Hartman-distributed if_
\[\frac{1}{N}\sum_{1\leq k\leq N}e(n_{k}x)\to 0\ \ \forall x\in\mathbb{T},\ x\neq 0.\]
The negligible set \(W(E)\) of \(x\in\mathbb{T}\) such that \((n_{k}x)\) is not uniformly distributed comes into play. Observe that "\(E\) is Hartman-distributed" means "\(W(E)=\mathbb{Q}\)".
### Khintchin class
Consider an arbitrary increasing sequence of integers \(E=\{\lambda_{n}\}_{n\geq 1}\subset\mathbb{N}\). It is well known that \((\lambda_{n}x)\) is a.e. uniformly distributed. Consequently, for every Riemann-integrable function \(f\) we have
\[\frac{1}{N}\sum_{n=1}^{N}f(\lambda_{n}x)\to\int_{\mathbb{T}}fdm\quad a.e. \tag{10}\]
Marstrand [52] proved that when \(E=\{\lambda_{n}\}=\mathbb{N}\), there are bounded functions \(f\in L^{\infty}(\mathbb{T})\) such that (10) fails. We define the _Khinchin class_ of a set \(E=\{\lambda_{n}\}\subset\mathbb{N}\) as that of those Lebesgue integrable functions such that (10) holds.
Marstrand [52] proved that
\[\mathcal{K}_{S}\supset L^{\infty}(\mathbb{T}) \tag{11}\]
and later Nair [54] proved that \(\mathcal{K}_{S}=L^{1}(\mathbb{T})\). Thus the Khintchin class \(\mathcal{K}_{S}\) is completely determined, but the determination of \(\mathcal{K}_{\mathbb{N}}\) is not complete. Koksma [42] proved the following criterion for \(E=\mathbb{N}\):
\[\sum_{|k|\geq 3}|\widehat{f}(k)|^{2}(\log\log|k|)^{3}<\infty\Longrightarrow f\in\mathcal{K}_{\mathbb{N}}.\]
For \(E=\{q^{n}\}\) with an integer \(q\geq 2\), we have \(\mathcal{K}_{E}=L^{1}(\mathbb{T})\) by the Birkhoff ergodic theorem. But for a general lacunary set \(E\), the determination of \(\mathcal{K}_{E}\) is unknown. In many cases, lacunary sequences share nice properties. The sequence \(\{\lambda_{n}\}=\{2^{2^{n}}\}\) is very lacunary, but J. Rosenblatt [65] proved that there exists \(f\in L^{\infty}\) such that (10) fails, in other words \(\mathcal{K}_{\{2^{2^{n}}\}}\not\supset L^{\infty}\). For any Hadamard lacunary sequence \(\Lambda=\{\lambda_{n}\}\), a result of Cuny and Fan [15, Theorem C, p. 2728] implies that
\[\sum_{n>N}|\widehat{f}(n)|^{2}=O\Big{(}\frac{1}{\log^{1+\epsilon}N}\Big{)} \Longleftrightarrow\omega_{f,2}(\delta)=O\Big{(}\frac{1}{|\log\delta|^{1/2+ \epsilon/2}}\Big{)}\Longrightarrow f\in\mathcal{K}_{\Lambda}\]
where \(\omega_{f,2}(\delta)\) denotes the modulus of \(L^{2}\)-continuity of \(f\) defined by
\[\omega_{f,2}(\delta)=\sup_{|t|\leq\delta}\|f(\cdot)-f(\cdot-t)\|_{2}.\]
Actually, the above condition on \(\omega_{f,2}(\delta)\) (with assumption \(\int f=0\)) implies that \(\{f(\lambda_{n}x)\}\) is a convergence system, meaning that
\[\sum_{n=1}^{\infty}|a_{n}|^{2}<\infty\Longrightarrow\sum_{n=1}^{\infty}a_{n} f(\lambda_{n}x)\ \ \text{converges a.e.}\]
and the exponent \(1/2\) is the best possible [15, Proposition 5.2].
Nair's proof of \(\mathcal{K}_{S}=L^{1}(\mathbb{T})\) is based on an ergodic theorem for amenable group actions due to Bewley [7]. Using the classical ergodic theorem, it is possible to give a simple proof of Marstrand's result \(\mathcal{K}_{S}\supset L^{\infty}(\mathbb{T})\), and even nearly of Nair's result, with an extension to a class of \(\times 2\times 3\)-invariant probability measures. The result is stated as follows.
**Theorem 3.2** ([21]).: _Let \(\mu\) be a \(\times 2\times 3\) invariant probability measure, ergodic for one of both shifts. If \(f\in L\log^{+}L\), then,_
\[A_{N}f(x):=\frac{1}{N}\sum_{n\leq N}f(s_{n}x)\to\int fd\mu\quad\mu-a.e.\]
As a consequence, if a "Furstenberg-exotic" measure \(\mu\) exists, namely continuous \(S\)-invariant and \(\mu\neq m\), then it must be supported by \(W(S)\) (actually \(\mu\) is of zero dimension according to a result of Rudolph [68]): indeed, we can suppose that in addition \(\mu\) is \(S\)-ergodic by invoking the \(S\)-ergodic decomposition; now choose \(a\neq 0\) such that \(\widehat{\mu}(a)\neq 0\) and apply theorem 3.2 with \(f=e_{a}\): \(\frac{1}{N}\sum_{n\leq N}e(as_{n}x)\to\widehat{\mu}(a)\)\(\mu\)-a.e. and such \(x^{\prime}s\) belong to \(W(S)\).
A key step in the proof of Theorem 3.2 is the following generalization of Birkhoff's ergodic theorem.
**Theorem 3.3** ([21]).: _Let \((X,\mathcal{B},\mu,T)\) be a measure-preserving dynamical system. Let \((f_{n})\) be a sequence of integrable functions. Suppose that_
\((1)\)__\(f_{n}\to 0\) _a.e. and_ \(\|f_{n}\|_{1}\to 0\)_;_ \((2)\)__\(\sup_{n}|f_{n}|\) _is integrable. Then almost everywhere we have_
\[\frac{1}{N}\sum_{n=0}^{N-1}f_{n}(T^{N-n}x)\to 0.\]
Mane claimed the same conclusion in [51, pages 96-97] without assuming that \(\sup_{n}|f_{n}|\) is integrable. His proof presents a gap. We show here that this extra assumption is mandatory in the general case and cannot be dropped. To that end, we detail a counterexample due to F. Rodriguez-Hertz (kindly indicated to us by L. Flaminio [22]). The underlying idea is that of a "shrinking target". We will give a self-contained and elementary proof.
**Theorem 3.4**.: _Let \((X,\mathcal{B}(X),\mu,T)\) be the Bernoulli system where \(X=\{0,1\}^{\mathbb{N}}\), \(\mu\) is the symmetric Bernoulli measure and \(T:X\to X\) the left shift defined by \(T((x_{i}))=(x_{i+1})\). Let \((\lambda_{n})\) be a non-decreasing sequence of positive integers and let \(E_{n}=\{x:x_{1}=\cdots=x_{\lambda_{n}}=0\}\), a cylinder of length \(\lambda_{n}\). Suppose that_
\[\sum\frac{1}{2^{\lambda_{n}}}=\infty,\qquad\frac{n}{2^{\lambda_{n}}}\to 0.\]
_(one choice is \(\lambda_{n}=\left[\log(n\log n)/\log 2\right]\)). The sequence of functions \((f_{n})\) defined by \(f_{n}=n1_{E_{n}}\) satisfies the condition (1) in Theorem 3.3, but_
\[a.s\quad\varlimsup_{N\to\infty}\frac{1}{2N}\sum_{n=0}^{2N-1}f_{n}(T^{2N-n}x) \geq 1/2.\]
Proof.: We first observe that \(E_{n}\downarrow\{0^{\infty}\}\). It follows that \(f_{n}(x)\to 0\) for all \(x\neq 0^{\infty}\). On the other hand, \(\|f_{n}\|_{1}=n\mu(E_{n})=n/2^{\lambda_{n}}\to 0.\) Thus the condition (1) in Theorem 3.3 is met.
In order to prove the announced inequality, we shall use the following form of the Borel-Cantelli lemma [60, p.368]. Let \((A_{n})\) be a sequence of events. We have \(\mu(\varlimsup A_{n})=1\) if the following conditions are satisfied:
\[\sum_{k=1}^{\infty}\mu(A_{k})=\infty,\qquad\varliminf_{n\to\infty}\frac{\sum _{1\leq k,l\leq n}\mu(A_{k}\cap A_{l})}{\left(\sum_{1\leq k\leq n}\mu(A_{k}) \right)^{2}}=1. \tag{12}\]
Observe that
\[\frac{1}{2N}\sum_{n=0}^{2N-1}f_{n}(T^{2N-n}x)\geq\frac{1}{2N}f_{N}(T^{N}x)= \frac{1}{2N}N1_{E_{N}}(T^{N}x)=\frac{1}{2}1_{A_{N}}(x)\]
where
\[A_{N}=\{x:x_{N+1}=\cdots=x_{N+\lambda_{N}}=0\}=T^{-N}(E_{N}).\]
We only need to check that \(A_{N}\)'s satisfy the condition (12) to validate the Borel-Cantelli lemma. First, \(\sum\mu(A_{n})=\sum\mu(E_{n})=\sum 2^{-\lambda_{n}}=\infty\). Second, set
\[J_{n}=\{(k,l):1\leq k,l\leq n,\ \mu(A_{k}\cap A_{l})\neq\mu(A_{k})\mu(A_{l})\}.\]
Since \(\mu(A_{k}\cap A_{l})=\mu(A_{k})\mu(A_{l})\) for \((k,l)\notin J_{n}\), we get
\[\sum_{1\leq k,l\leq n}\mu(A_{k}\cap A_{l})=\sum_{(k,l)\in J_{n}}[\mu(A_{k}\cap A _{l})-\mu(A_{k})\mu(A_{l})]+\sum_{1\leq k,l\leq n}\mu(A_{k})\mu(A_{l}). \tag{13}\]
Now assume \(1\leq k,l\leq n\) and \(l\geq k\) (the case of \(k\geq l\) can be similarly dealt with). We distinguish two cases:
(i) if \(l>k+\lambda_{k}\), then \((k,l)\notin J_{n}\) because \(A_{k}\) and \(A_{l}\) are independent and hence \(\mu(A_{k}\cap A_{l})=\mu(A_{k})\mu(A_{l})\);
(ii) if \(l\leq k+\lambda_{k}\), we have \(A_{k}\cap A_{l}=\{x_{k+1}=\cdots=x_{l+\lambda_{l}}=0\}\) so that
\[\mu(A_{k}\cap A_{l})=\frac{1}{2^{l-k}}\cdot\frac{1}{2^{\lambda_{l}}}\leq \frac{1}{2^{l-k}}\mu(A_{k}).\]
It follows from (i) and (ii) that
\[\sum_{(k,l)\in J_{n}}\mu(A_{k}\cap A_{l})\leq 2\sum_{\genfrac{}{}{0.0pt}{}{1 \leq k\leq n,}{k\leq l\leq k+\lambda_{k}}}\frac{1}{2^{l-k}}\mu(A_{k})\leq 4 \sum_{k=1}^{n}\mu(A_{k})=o\big{(}\sum_{k=1}^{n}\mu(A_{k})\big{)}^{2}. \tag{14}\]
Similarly, we have
\[\sum_{(k,l)\in J_{n}}\mu(A_{k})\mu(A_{l})\leq 2\sum_{\genfrac{}{}{0.0pt}{}{1\leq k \leq n,}{k\leq l\leq k+\lambda_{k}}}\mu(A_{k})\mu(A_{l})\leq 2\sum_{1\leq k \leq n}\mu(A_{k})\frac{\lambda_{k}}{2^{\lambda_{k}}} \tag{15}\]
\[=o\big{(}\sum_{1\leq k\leq n}\mu(A_{k})\big{)}. \tag{16}\]
The second condition in (12) is thus implied by (13), (14) and (16) and the fact that \(\sum\mu(A_{k})=\infty\).
### Dimension of \(W(s)\) and Rajchman measure on \(W(s)\)
The previous remarks motivate a closer examination of \(W(S)\). We can prove:
**Theorem 3.5** ([21],[73]).: _The set \(W(S)\) satisfies:_
1. \(\dim_{H}(W(S))=1\)_._
2. \(W(S)\) _supports a_ Rajchman _probability measure_ \(\mu\)_, more explicitely,_ \(\widehat{\mu}(h)=O(1/\log\log|h|)\) _as_ \(h\to\infty\)_._
In [21], we have only proved \(\dim_{H}(W(S))\geq 0.451621\), but this estimate has recently been improved to \(\dim_{H}W(S)=1\) by Usuki [73], using a method similar to ours. The second assertion of Theorem 3.5 indicates that \(W(S)\) is not that porous. The same is true of the (uncountable) set \(\mathcal{L}\) of Liouville numbers, though \(\dim_{H}(\mathcal{L})=0\) ([8]). By contrast, the Cantor ternary set \(K\) (satisfying
\(\dim_{H}(K)=\frac{\log 2}{\log 3}\)) is porous: it supports NO Rajchman measure (Kahane-Salem [39]).
Now we are able to make the \(M_{0}\)-property of the set \(W(S)\) precise, by showing that one can hardly do better than the decay stated in Theorem 3.5.
**Theorem 3.6**.: _For any probability measure \(\mu\) supported on \(W(S)\), \(\widehat{\mu}(h)\) cannot decay like \(1/(\log\log h)^{\alpha}\) as \(h\to\infty\) for any \(\alpha>1\)._
We are going to prove Theorem 3.6 relying on the classical criterion of Davenport-Erdos-LeVeque below.
**Proposition 3.7** (cf [12], Lemma 1.8.).: _Let \((\Omega,\mathcal{A},\mu)\) be a probability space and \((X_{n})\) a sequence of complex valued random variables with \(|X_{n}|\leq 1\). Then the averages \(Z_{n}:=\frac{1}{n}\sum_{j=1}^{n}X_{j}\) converge \(\mu\)-a.e. to \(0\) under the assumption_
\[\sum_{n=1}^{\infty}\frac{1}{n}\int_{\Omega}|Z_{n}|^{2}d\mu<\infty.\]
Proof of Theorem 3.6.: We will show that if \(\widehat{\mu}(h)=O\big{(}1/(\log\log h)^{\alpha}\big{)}\) for some \(\alpha>1\), then \(\mu(W(S))=0\), so that \(\mu\) cannot be supported on \(W(S)\). To this end, we only need to show that if \(\ell\) is a non-zero integer, the averages \(A_{n}^{(\ell)}(x):=\frac{1}{n}\sum_{j=1}^{n}e(\ell s_{j}x)\) tend almost everywhere to \(0\), which implies that \((s_{n}x)\) is uniformly distributed \(\mu\)-a.e.
We can assume that \(\ell=1\) (the proof is the same for all \(\ell\)'s). Denote simply \(A_{n}^{(\ell)}\) by \(A_{n}\). We are going to apply Proposition 3.7 by just checking its assumption for \(X_{n}(x)=e(s_{n}x)\). Clearly, by expanding \(|A_{n}|^{2}\) we get
\[\int_{\mathbb{T}}|A_{n}|^{2}d\mu=\frac{1}{n^{2}}\Big{(}n+\sum_{j\neq k,1\leq j,k\leq n}\widehat{\mu}(s_{k}-s_{j})\Big{)}\leq\frac{1}{n^{2}}\Big{(}n+2\sum_{ 1\leq j<k\leq n}|\widehat{\mu}(s_{k}-s_{j})|\Big{)}.\]
But we know that, for \(j<k\) and some numerical constant \(A\), it holds \(s_{k}-s_{j}\geq s_{k}-s_{k-1}\gtrsim s_{k}/(\log s_{k})^{A}\) (cf. Theorem 2.2), and that \(\log s_{k}\asymp\sqrt{k}\) (since \(|S_{N}|\asymp(\log N)^{2}\)), so that
\[\log\log(s_{k}-s_{j})\gtrsim\log\log s_{k}\gtrsim\log(\sqrt{k})\gtrsim\log k\]
and hence \(|\widehat{\mu}(s_{k}-s_{j})|\lesssim 1/(\log k)^{\alpha}.\) This gives us
\[\int_{\mathbb{T}}|A_{n}|^{2}d\mu\lesssim\frac{1}{n^{2}}\big{(}n+\sum_{1\leq j <k\leq n}1/(\log k)^{\alpha}\big{)}\lesssim\frac{1}{n}+\frac{1}{n}\sum_{k=2}^{ n}1/(\log k)^{\alpha}\lesssim\frac{1}{(\log n)^{\alpha}}.\]
So, the assumption of Proposition 3.7 is met.
Theorem 3.6 is a manifestation of an uncertainty principle for measures on \(\mathbb{T}\): the support and the spectrum of \(\mu\) cannot be too small at the same time.
## 4. Harmonic analysis of thin sets
### Combinatorics in harmonic analysis
Combinatorics plays a fundamental role in harmonic analysis of thin sets, under various aspects such as independence, sparseness or arithmetic relations. The cumulative function
\(N\to|E_{N}|:=|E\cap[1,N]|\) of a subset \(E\subset\mathbb{Z}\) accounts for the sparseness of the set and provides necessary conditions for \(E\) to enjoy one of the harmonic properties we are interested in, as recalled below.
**Definition 4.1**.: _Let \(E\) be a set of positive integers, and \(X\) a Banach space of integrable functions on the circle (e.g. \(X=L^{p},1<p<2\)). We say that \(E\) is \(X\)-Paley if the Fourier transform of any function \(f\in X\), once restricted to \(E\), is square-summable, i.e._
\[\widehat{f}_{|E}\in\ell^{2}.\]
_Then, there is a smallest constant \(P(X,E)>0\), the Paley constant of the pair \((X,E)\), such that the following Paley-type inequality holds:_
\[\|\widehat{f}_{|E}\|_{2}\leq P(X,E)\|f\|_{X},\quad\forall f\in X.\]
**Proposition 4.1** ([67]).: _The following holds._
1)_\(E\) is \(H^{1}\)-Paley if and only if \(|E\cap[N,2N]|=O(1)\)._
2) _If_ \(E\) _is_ \(p\)_-Sidon, then_ \(|E_{N}|\leq c(\log N)^{\frac{p}{2-p}}\)_._
3) _If_ \(E\) _is_ \(\Lambda(p)\) _for_ \(p>2\)_, then_ \(|E_{N}|\leq 4\lambda_{p}(E)^{2}N^{2/p}\)_._
As an easy corollary of 1) (this indeed motivated the definition), the set \(\{2^{n}\}\), and more generally any Hadamard set, is \(H^{1}\)-Paley. The known estimate of \(|S_{N}|\), together with 2) and 3), leads to the first negative results on the set \(S\).
**Proposition 4.2**.:
1. \(S\) _is not Sidon and even more,_ \(S\) _is not_ \(p\)_-Sidon for_ \(p<4/3\)_._
2. \(S\) _is not_ \(H^{1}\)_-Paley._
#### 4.1.1. Quasi-independent and Pisier criterion for Sidon sets
**Definition 4.2**.: _A set \(E\subset\mathbb{Z}\backslash\{0\}\) is said to be quasi-independent if, for all distinct elements \(x_{1},\dots,x_{n}\in E\) and for all \(\varepsilon_{1},\dots,\varepsilon_{n}\in\{-1,0,1\}\), the relation \(\sum\varepsilon_{k}x_{k}=0\) implies that all \(\varepsilon_{k}=0\)._
The quantity \(\sum_{k}|\varepsilon_{k}|\) is called the length of the relation. A quasi-independent set is hence a set which contains no relation of positive length.
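For instance, any Hadamard set \(\{n_{k}\}\) with ratio \(n_{k+1}/n_{k}\geq 2\) is quasi-independent: if a relation \(\sum\varepsilon_{j}n_{j}=0\) had positive length and \(k\) were the largest index with \(\varepsilon_{k}\neq 0\), we would get

\[n_{k}=\Big{|}\sum_{j<k}\varepsilon_{j}n_{j}\Big{|}\leq\sum_{j<k}n_{j}\leq n_{k}\sum_{i=1}^{k-1}2^{-i}<n_{k},\]

a contradiction.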
Quasi-independent sets are prototypes of Sidon sets. Since the sidonicity is preserved by finite union (Drury's theorem [16]), it is conjectured that a Sidon set could be a finite union of quasi-independent sets. In this direction, a breakthrough has been made by Pisier [58] with the following characterization.
**Theorem 4.3** ([58]).: \(E\subset\mathbb{Z}\backslash\{0\}\) _is a Sidon set if and only if there exists \(\delta>0\) such that, from every finite subset \(A\subset E\), a quasi-independent set \(B\) can be extracted from \(A\) with \(|B|\geq\delta|A|\)._
An analogue for \(p\)-Sidon and \(p\)-Rider sets will appear and be used later.
#### 4.1.2. Spectrum of continuous measures
The conjecture of Furstenberg, actually, raises questions on continuous measures on \(\mathbb{T}\) and their spectrum. Continuous measures can be described just as well in terms of support (actually of annihilating set) as in terms of Fourier coefficients, thanks to the Wiener criterion (a scent of the uncertainty principle). Russell Lyons ([49]) obtained the following new characterization:
_A measure \(\mu\in M(\mathbb{T})\) is continuous if and only if there exists \((n_{k})\subset\mathbb{Z}\), \(|n_{k}|\to\infty\) such that \(\widehat{\mu}(n_{k}\ell)\to 0\) for every \(\ell\in\mathbb{Z}^{*}\)._
But nothing can be said in general on this sequence \((n_{k})\), a priori depending on \(\mu\). Investigating partial answers to the Furstenberg conjecture, Lyons asked in [50] whether \(S=(s_{n})\) could be a _universal_ such sequence, in the sense that \(\liminf|\widehat{\sigma}(s_{k})|=0\) for every continuous measure \(\sigma\) on \(\mathbb{T}\)?
If this holds, the Furstenberg conjecture would be true: if \(\mu\) is continuous, then for every \(\ell\in\mathbb{Z}\), \(\ell\neq 0\), \(\liminf|\widehat{\mu}(s_{k}\ell)|=0\) since \(T_{\ell}\mu\) is still continuous; if in addition \(\mu\) is \(\times 2\times 3\)-invariant, we get \(|\widehat{\mu}(\ell)|=\liminf|\widehat{\mu}(s_{k}\ell)|=0\) and \(\mu\) is the Lebesgue measure. But S. Grivaux and C. Badea [2] constructed a continuous measure with \(\inf_{s\in S}|\widehat{\mu}(s)|\geq\delta\) for some constant \(\delta>0\); in other terms,
**Theorem 4.4** ([2]).: \(S\) _is a Ka-set._
Thanks to an improved version of Drury's result due to Hartman [31], we can see that _Sidon sets are Ka-sets too_.
A much more restrictive property of measures has been studied by Bergelson et al. in the spirit of dynamics (as a spectral property [5]) and by Eisner and Grivaux with an operator-theoretic point of view ([17]).
**Definition 4.3**.: _A subset \(E=(n_{k})\subset\mathbb{Z}\) is said to be rigid if there exists a continuous probability measure \(\mu\) on \(\mathbb{T}\) such that \(\lim_{k\to\infty}\widehat{\mu}(n_{k})=1\)._
Of course rigid sets are Ka-sets, whence the question whether \(S\) could be a rigid set?
### More on \(S\) as a thin set
#### 4.2.1. \(S\) is \(p\)-Paley, \(1<p<2\)
We mentioned at the beginning of the section that \(S\) cannot be \(H^{1}\)-Paley. The following is proved in [28]: \((L^{p},S)\) is a Paley pair. Here is a preliminary result in this direction, due to Gundy and Varopoulos (cf. [28]).
**Theorem 4.5** ([28]).: _The set \(S\) is \(\Lambda(q)\) for all \(q>2\), i.e. \(||f||_{q}\leq\lambda_{q}(S)||f||_{2}\) for any \(f\in L^{2}_{S}\)._
The proof uses the square function of a martingale corresponding to a decreasing sequence of \(\sigma\)-subalgebras, and Burkholder's inequalities.
**Theorem 4.6** ([28]).: _For any \(g\in L^{p}\) with \(1<p<2\), one has_
\[\Big{(}\sum_{n\in S}|\widehat{g}(n)|^{2}\Big{)}^{1/2}\leq C_{q}\|g\|_{p}\]
_where \(q>2\) is such that \(\frac{1}{p}+\frac{1}{q}=1\), and \(C_{q}<\infty\)._
In addition, we proved in [21] that \(\lambda_{q}(S)\leq Cq^{3/2}\) and that \(C_{q}=P(L^{p},S)\lesssim q^{2}\).
#### 4.2.2. Is \(S\) \(4/3\)-Sidon?
We have mentioned at the beginning of the section that \(S\) is not \(p\)-Sidon for \(p<4/3\) and our feeling is that \(S\) is \(4/3\)-Sidon. We cannot confirm this yet, but we can prove a partial result in this direction, by showing that \(S\) is \(3/2\)-Rider. We recall some facts.
If \(f=\sum a_{k}e_{k}\) is a trigonometric polynomial and \((\varepsilon_{k})\) a Rademacher sequence, the randomized polynomial \(f_{\omega}\) is by definition
\[f_{\omega}=\sum\varepsilon_{k}(\omega)a_{k}e_{k}.\]
**Definition 4.4**.: _A subset \(\Lambda\) of \(\mathbb{N}\) is called a \(p\)-Rider set (\(1\leq p<2\)) if there is a constant \(C\) such that, for every \(f\in\mathcal{P}_{\Lambda}\), we have_
\[\|\widehat{f}\|_{p}\leq C[f]\]
_where \([f]\) denotes the Pisier norm of \(f\) defined by_
\[[f]=\mathbb{E}(\|f_{\omega}\|_{\infty}).\]
The \(1\)-Riderness coincides with the \(1\)-Sidonicity, i.e. Sidonicity (cf. [62]). For \(1<p<2\), the \(p\)-Sidonicity (namely \(\|\widehat{f}\|_{p}\leq C\|f\|_{\infty}\)) implies the \(p\)-Riderness, the converse being open. But it is more flexible to work with \(p\)-Riderness than with \(p\)-Sidonicity. See [21] for more on this.
In order to study the \(p\)-Riderness of \(S\), we invoke two theorems of Rodriguez-Piazza ([63], Lema 2.4 p. 89 and Teorema 2.3 p. 85-86), which were used in [47]. For stating these theorems, we need the following notations. For an arbitrary subset \(\Lambda\) of \(\mathbb{N}\) and \(n=1,2,\ldots\), let us set once and for all
\[\Lambda_{n}=\Lambda\cap[1,n]\text{ and }\Lambda_{I_{n}}=\Lambda\cap I_{n} \quad\text{where }I_{n}=[2^{n},2^{n+1}[\subset\mathbb{N}.\]
For a finite subset \(A\) of \(\mathbb{N}\), we write
\[\psi_{A}=\big{\|}\sum_{k\in A}e_{k}\big{\|}_{\psi_{2}}.\]
Recall that \(\psi_{2}\) designates the gaussian Orlicz function: \(\psi_{2}(x)=e^{x^{2}}-1\) and that, for a function \(f\), it holds [46, p.44, vol.1]
\[\|f\|_{\psi_{2}}\asymp\sup_{q\geq 2}\frac{\|f\|_{q}}{\sqrt{q}}. \tag{16}\]
The following result gives a lower bound for the size of the largest quasi-independent subset in a given finite set.
**Theorem 4.7** ([64], p.89).: _Let \(A\subset\mathbb{N}\) be a finite set. We can find a quasi-independent subset \(E\) of \(A\) with cardinality_
\[|E|\geq\delta\Big{(}\frac{|A|}{\psi_{A}}\Big{)}^{2}\]
_where \(\delta\) is a positive constant._
The following is a necessary and sufficient condition for a set \(E\subset\mathbb{N}\) to be \(p\)-Rider, a condition involving the size of the largest quasi-independent subset in an arbitrary finite subset of \(E\).
**Theorem 4.8** ([64], p.85).: _Let \(1\leq p<2\). A subset \(E\subset\mathbb{N}\) is \(p\)-Rider if and only if for every finite subset \(A\) of \(E\), there exists a quasi-independent subset \(B\) of \(A\) such that_
\[|B|\geq\delta|A|^{\varepsilon}\]
_where \(0<\varepsilon=\frac{2}{p}-1\leq 1\) and where \(\delta\) is a positive constant._
The case \(p=1\) is due to Pisier. Observe that \(\frac{2}{p}-1=1/2\) when \(p=4/3\). We begin with a general fact.
**Theorem 4.9**.: _Let \(\Lambda\subset\mathbb{N}\). Suppose there exist constants \(C>0\) and \(\alpha\geq 1/2\) such that_
\[\forall q>2,\ \forall f\in\mathcal{P}_{\Lambda},\ \ \|f\|_{q}\leq Cq^{\alpha}\|f\|_{2}.\]
_Then \(\Lambda\) is \(p\)-Rider for \(p=\frac{4\alpha}{2\alpha+1}\)._
Proof.: Let \(A\) be a finite subset of \(\Lambda\) with \(|A|=n\). We claim that
\[\|\sum_{j\in A}e_{j}\|_{q}\lesssim\min(q^{\alpha}\sqrt{n},\ n)\ \text{for all}\ q\geq 2. \tag{17}\]
Indeed, firstly, we have obviously \(\|\sum_{j\in A}e_{j}\|_{q}\leq\|\sum_{j\in A}e_{j}\|_{\infty}=n\); secondly the assumption allows us to get
\[\|\sum_{j\in A}e_{j}\|_{q}\lesssim q^{\alpha}\|\sum_{j\in A}e_{j}\|_{2}=q^{ \alpha}\sqrt{n}.\]
From (17) we easily get (separating the cases \(q\leq n^{\frac{1}{2\alpha}}\), \(q\geq n^{\frac{1}{2\alpha}}\)) that
\[\psi_{A}\asymp\sup_{q\geq 2}\frac{1}{\sqrt{q}}\|\sum_{j\in A}e_{j}\|_{q} \lesssim n^{1-\frac{1}{4\alpha}},\]
using the formula (16). Now, Theorem 4.7 provides us with a quasi-independent subset \(E\) of \(A\) of cardinality
\[|E|\gtrsim\left(\frac{n}{\psi_{A}}\right)^{2}\gtrsim n^{\frac{1}{2\alpha}}.\]
Finally, adjust the exponent \(p\) so as to have
\[\frac{1}{2\alpha}=\frac{2}{p}-1,\]
that is \(p=\frac{4\alpha}{2\alpha+1}\). Then Theorem 4.8 allows us to conclude.
If \(\alpha=1/2\), we have \(\frac{4\alpha}{2\alpha+1}=1\), and \(\Lambda\) is a Sidon set.
The Furstenberg set \(S\) is \(\Lambda(q)\) for all \(q<\infty\) with specified constants ([21, Theorem 4.12]). This result allows us to prove a partial result concerning the \(p\)-Riderness of \(S\). It is partial because we conjecture that \(S\) is \(4/3\)-Rider
(perhaps even \(4/3\)-Sidon), but we only prove its \(3/2\)-Riderness. More generally, we conjecture that \(S(q_{1},\ldots,q_{s})\) (where the \(q_{j}\)'s are multiplicatively independent integers) is \(\frac{2s}{s+1}\)-Rider (or even \(\frac{2s}{s+1}\)-Sidon) and this would be optimal. We prove the following partial result for \(S(q_{1},\ldots,q_{s})\).
**Theorem 4.10**.: _The Furstenberg set \(S\) is \(3/2\)-Rider. More generally, let \(s\geq 2\), we have_
1. _The set_ \(S(q_{1},\ldots,q_{s})\) _is_ \(\frac{2s-1}{s}\)_-Rider._
2. _The set_ \(S(q_{1},\ldots,q_{s})\) _is not_ \(p\)_-Rider for_ \(p<\frac{2s}{s+1}\)_._
Proof.: We appeal to [21, Theorem 4.12], which tells us that the assumption of Theorem 4.9 holds for \(\Lambda=S(q_{1},\ldots,q_{s})\) with \(\alpha=s-1/2\), giving
\[\frac{4\alpha}{2\alpha+1}=\frac{2s-1}{s}.\]
The multiplicative character of \(S(q_{1},\ldots,q_{s})\) lurks in the value of that exponent \(\alpha\). The second assertion comes from the mesh condition for \(p\)-Rider sets [21, Proposition 3.2]: if \(\Lambda\) is \(p\)-Rider, we must have \(|\Lambda_{N}|\lesssim(\log N)^{p/(2-p)}\). But we know [21, page 9] that \(|\Lambda_{N}|\asymp(\log N)^{s}\). So that \(s\leq p/(2-p)\), or equivalently \(p\geq\frac{2s}{s+1}\).
#### 4.2.3. \(S\) and the Bohr topology
Recall that if \(G\) is a locally compact abelian group with dual \(\Gamma\), the Bohr compactification \(\beta\Gamma\) of \(\Gamma\) is the dual group of \(G_{d}\), the group \(G\) equipped with the discrete topology. The group \(\beta\Gamma\) is the set of all characters (continuous or not) on \(G\), it is compact and contains \(\Gamma\) as a dense subgroup. We describe this topology when \(G=\mathbb{T}\) and \(\Gamma=\mathbb{Z}\).
**Definition 4.5**.: _The Bohr topology on \(\mathbb{Z}\) is the group topology with the following basis of neighbourhoods of zero:_
\[V(x_{1},\ldots,x_{k},\varepsilon)=\{n\in\mathbb{Z};\ |e(nx_{j})-1|<\varepsilon \,\text{ for }1\leq j\leq k\}\]
_with \(\varepsilon>0\) and \(x_{j}\in\mathbb{R}\), called Bohr neighbourhoods (of \(0\) in \(\mathbb{Z}\))._
The Bohr topology on \(\mathbb{Z}\) is the coarsest topology for which all Fourier transforms of discrete measures on \(\mathbb{T}\) are continuous.
The following assertion follows from the pigeonhole principle and the simultaneous diophantine approximation.
**Proposition 4.11**.: _A Bohr neighbourhood has positive upper density._
We are concerned with \(\beta\mathbb{Z}\) and with some subsets of integers, dense in \(\beta\mathbb{Z}\) (i.e. Bohr-dense) or not. A first class of examples is the following.
**Proposition 4.12**.: _A Hartman-distributed set is Bohr-dense._
It is a direct consequence of the definition. This property of Bohr-density appears in the dynamical classification of (rather big) sets of integers intensively studied by Bergelson, Bourgain, Ruzsa and many others ([6, 10, 69]). We extract the main implications we need:
\[\text{Hartman}-\text{distributed}\Longrightarrow\text{Uniformly recurrent} \Longrightarrow\text{Bohr}-\text{dense},\]
with the definitions to come.
**Definition 4.6**.: _A set \(E\subset\mathbb{Z}\) is said to be recurrent if, for every dynamical system \((X,\mathcal{A},\mu,T)\) and every subset \(A\in\mathcal{A}\) with positive measure, there exists \(h\in E\) such that \(\mu(A\cap T^{-h}A)>0\)._
**Definition 4.7**.: _A set \(E\subset\mathbb{Z}\) is said to be uniformly recurrent if every translate of \(E\) is recurrent._
The question whether the last implication above is reversible remains open.
Back to the Furstenberg set, we show that
**Theorem 4.13**.: _The set \(E:=S(p_{1},\ldots,p_{r})\), in particular \(S\), is Bohr-closed, thus cannot be a recurrent set._
Proof.: We write \(p^{\alpha}\) for \(p_{1}^{\alpha_{1}}\cdots p_{r}^{\alpha_{r}}\), and we write \(\beta\geq\alpha\) if \(\beta_{j}\geq\alpha_{j}\) for all \(j\); and \(\beta>\alpha\) if \(\beta_{j}>\alpha_{j}\) for all \(j\). We now show that \(\mathbb{Z}\backslash E\) is open for the Bohr topology. Indeed, let \(m\notin E\). We distinguish two cases.
\(\bullet\)\(m=0\). Then \(V=N\mathbb{Z}\), where \(N\) has a prime factor \(p>p_{r}\), is a neighbourhood of \(0\) disjoint from \(E\).
\(\bullet\)\(m\neq 0\). One writes
\[m=p^{\alpha}n=:s_{0}\times n\]
with \(s_{0}\in E\), \(n\neq 0,1\) and \(n\wedge p_{1}p_{2}...p_{r}=1\). One can find \(s=p^{\beta}\in E\) with \(\beta>0\) and the \(\beta_{j}\)'s large enough so as to have \(s>n-1\), implying that \(n\not\equiv 1\mod s\). We now claim that the neighbourhood of \(m\),
\[V:=V(m)=m+sp^{\alpha}\mathbb{Z},\]
satisfies \(V\cap E=\emptyset\). Indeed, a relation
\[p^{\gamma}=m+sp^{\alpha}k=p^{\alpha}(n+sk),\]
clearly implies \(\gamma\geq\alpha\). If \(\gamma=\alpha\), then \(n+sk=1\) and \(n\equiv 1\) mod \(s\), contradicting the choice of \(s\). Therefore, we have for example \(\gamma_{1}>\alpha_{1}\). After simplification by \(p_{1}^{\alpha_{1}}\), we get \(p_{1}|(n+sk)\) which is again impossible: since \(p_{1}|s\), we have \((n+sk)\wedge p_{1}=n\wedge p_{1}=1\).
A second dynamical classification due to Hartman concerns small sets of integers and it seems to escape this Bohr property:
\[E\text{ Sidon or }E\text{ rigid}\Longrightarrow E\text{ Ka}-\text{set}\Longrightarrow W(E)\text{ uncountable.}\]
However, Katznelson [40] constructed a Bohr-dense Ka-set and Griesmer [27] constructed a both rigid and Bohr-dense set. The existence of a Sidon set dense in \(\beta\mathbb{Z}\) remains open.
## 5. A random version \(T\) of \(S\)
Recall that we define a random version \(T\) of \(S\) by
\[T=\{k\in\mathbb{N}:\xi_{k}=1\}\]
where \((\xi_{k})\) is a sequence of independent \(0\)-\(1\)-valued random variables with \(\mathbb{E}(\xi_{k})=\delta_{k}:=\frac{\log k}{k}\).
### First comparative results
Here are some results about the random version \(T=(t_{n})\). First, \(T\) shares with \(S\) some sparseness properties.
**Theorem 5.1** ([21]).: _Almost surely, the difference \(t_{n+1}-t_{n}\) satisfies_:__
1) \(\limsup_{n\to\infty}\frac{t_{n+1}-t_{n}}{(t_{n}/\log t_{n})\,\log\log t_{n}} \leq 2;\)__
2) \(\liminf_{n\to\infty}\frac{t_{n+1}-t_{n}}{t_{n}/(\log t_{n})^{3+\delta}}\geq 1 \quad\text{for all $\delta>0$}.\)__
_In particular, the set \(T\) satisfies: \(t_{n+1}-t_{n}\to\infty,\ t_{n+1}/t_{n}\to 1\) a.s._
Here is a dynamical property of \(T\), shared by more general random sets \(R\) studied by Bourgain. Let
\[m_{N}:=\sum_{k=1}^{N}\delta_{k}.\]
**Theorem 5.2** ([11]).: _If \(\delta_{k}\downarrow\) and \(m_{N}/\log N\to\infty\), the corresponding random set \(R\) is almost surely Hartman distributed. In particular, that is the case for \(T\)._
This property of \(R\) is in strong contrast with the case of \(S\).
Now comes a harmonic analysis property of \(T\), which is better than that of \(S\).
**Theorem 5.3** ([48], [21]).: _Almost surely, the set \(T\) is \(p\)-Rider for \(p>4/3\) and not \(p\)-Rider for \(p<4/3\)._
The case \(p<4/3\) is trivial (mesh condition). We do not know if \(T\) is \(4/3\)-Rider, but we move towards this result in the next subsection.
### A large subset \(T^{\prime}\) of \(T\) is \(4/3\)-Rider
In the sequel, we prove the \(4/3\)-Riderness and \(\Lambda(q)\) property of a large subset \(T^{\prime}\) of \(T\). The detailed proof completes a highly sketchy proof in [48, Lemma 3.3]. We do not know whether \(T\) itself possesses these properties of \(T^{\prime}\).
We begin with some notations. Let \(E\subset\mathbb{N}\). For \(E^{\prime}\subset E\), the upper density of \(E^{\prime}\) inside \(E\) is defined by
\[\overline{d}(E^{\prime},E)=\limsup_{N\to\infty}\frac{|E^{\prime}\cap[1,N]|}{| E\cap[1,N]|}.\]
We define similarly the lower density \(\underline{d}(E^{\prime},E)\) with \(\liminf\). Clearly
\[0\leq\underline{d}(E^{\prime},E)\leq\overline{d}(E^{\prime},E)\leq 1.\]
Clearly, \(\underline{d}(T^{\prime},T)>0\) indicates that \(T^{\prime}\) occupies a large portion of \(T\).
**Theorem 5.4**.: _Almost surely, the random set \(T\) contains a subset \(T^{\prime}\) of positive lower density such that_
1. \(T^{\prime}\) _is a_ \(\frac{4}{3}\)_-Rider set;_
2. \(T^{\prime}\) _is a_ \(\Lambda(q)\)_-set for all_ \(q<\infty\)_._
The key point for proving Theorem 5.4 is our main technical result below, which is stated with the notations of Section 4.2.2 and whose long proof is outlined at the end of this subsection.
**Theorem 5.5**.: _Almost surely, the following two estimates hold:_
1. \(|T_{I_{n}}|\asymp n\)_._
2. \(\psi_{T_{I_{n}}}\leq C|T_{I_{n}}|^{1/2}\) _where_ \(C=C(\omega)\) _does not depend on_ \(n\)_._
#### 5.2.1. Proof of Theorem 5.4
Taking Theorem 5.5 for granted, we can prove Theorem 5.4 by a combinatorial argument of [47] which we reproduce here. First, by Theorem 4.7 and Theorem 5.5 (2), we can find a quasi-independent subset \(E_{n}\) of \(T_{I_{n}}\) with cardinality
\[|E_{n}|\geq\delta\Big{(}\frac{|T_{I_{n}}|}{\psi_{T_{I_{n}}}}\Big{)}^{2}\geq \delta^{\prime}|T_{I_{n}}|.\]
Let now
\[T^{\prime}:=\bigcup_{n\geq 1}E_{n}.\]
Notice that \(T^{\prime}_{I_{n}}=E_{n}\) is a quasi-independent subset of size proportional to \(|T_{I_{n}}|\). We are going to see that something similar persists for _arbitrary finite_ subsets of \(T^{\prime}\), with some loss.
We first note that \(T^{\prime}\) has positive lower density in \(T\). Indeed, if \(N\) is given and \(2^{n}\leq N<2^{n+1}\), we know that almost surely \(|T_{N}|\lesssim(\log N)^{2}\) while
\[|T^{\prime}_{N}|\geq\sum_{k=1}^{n-1}|E_{k}|\geq\delta^{\prime}\sum_{k=1}^{n-1 }|T_{I_{k}}|\gtrsim\sum_{k=1}^{n-1}k\gtrsim n^{2}\gtrsim(\log N)^{2}\gtrsim|T_ {N}|,\]
where for the first \(\gtrsim\) we have used Theorem 5.5 (1).
Let now \(A\subset T^{\prime}\) be an arbitrary finite set. We claim that \(A\) contains a quasi-independent subset \(E\) of size \(\gtrsim|A|^{1/2}\). To this end, we put
\[J=\{n:A\cap E_{n}\neq\emptyset\}=\{n_{1}<n_{2}<\cdots<n_{h}\}\]
so that
\[A=\bigcup_{n\in J}(A\cap E_{n}).\]
We look for a big quasi-independent subset \(B\) in \(A\) such that \(|B|\gtrsim|A|^{1/2}\) by distinguishing two cases:
_Case I. \(|A\cap E_{n}|\geq|A|^{1/2}\) for some \(n\in J\)._ Then, take \(B:=A\cap E_{n}\subset E_{n}\).
_Case II. \(|A\cap E_{n}|\leq|A|^{1/2}\) for all \(n\in J\)._ Then, \(h\geq|A|^{1/2}\). Pick a point \(\mu_{j}\) in \(A\cap I_{n_{2j+1}}\) for each \(j\leq K:=\big{[}(h-1)/2\big{]}\) and observe that \(\mu_{j+1}/\mu_{j}\geq 2\)
so that \(B:=\{\mu_{1},\mu_{2},\ldots,\mu_{K}\}\) is a quasi-independent set with cardinality \(\geq\delta^{\prime\prime}h\geq\delta^{\prime\prime}|A|^{1/2}\). This proves our claim.
As the assumptions of Theorem 4.8 are satisfied for \(T^{\prime}\), with \(p\) such that \((2/p)-1=1/2\), i.e. \(p=4/3\), the first part of Theorem 5.4 is thus proved.
Let us now turn to the second part of Theorem 5.4. Assume \(f\in L^{2}_{T^{\prime}}\) and let \(f_{n}=\sum_{k\in E_{n}}\widehat{f}(k)e_{k}\). The Littlewood-Paley theorem [53, Chapter 8] and the convexity of the \(L^{q}\)-norm for \(q\geq 2\) imply
\[\|f\|_{q}\leq C_{q}\|\big{(}\sum_{n}|f_{n}|^{2}\big{)}^{1/2}\|_{q}\leq C_{q} \big{(}\sum_{n}\|f_{n}\|_{q}^{2}\big{)}^{1/2}\]
with \(C_{q}\lesssim q^{3/2}\)[47]. But \(E_{n}\) is quasi-independent, hence Sidon with a Sidon constant \(\leq 8\); consequently, \(E_{n}\) is a \(\Lambda(q)\)-set with \(\lambda_{q}(E_{n})\leq C\sqrt{q}\), so that \(\|f_{n}\|_{q}\leq C\sqrt{q}\|f_{n}\|_{2}\) and we get
\[\|f\|_{q}\lesssim C_{q}\sqrt{q}\big{(}\sum_{n}\|f_{n}\|_{2}^{2}\big{)}^{1/2}= C_{q}\sqrt{q}\|f\|_{2}.\]
This means that \(T^{\prime}\) is a \(\Lambda(q)\)-set, with \(\lambda_{q}(T^{\prime})\lesssim q^{2}\). \(\Box\)
#### 5.2.2. Proof of \(\psi_{T_{I_{n}}}\leq C|T_{I_{n}}|^{1/2}\)
We now prove the second part of Theorem 5.5 (the first one is easy). In what follows, \(C_{0},C_{1},\ldots\) will denote constants. We begin with a simple interpolation lemma, estimating \(\|f\|_{\psi_{2}}\) in terms of \(\|f\|_{\infty}\) and \(\|f\|_{2}\).
**Lemma 5.6**.: _Let \(f\in L^{\infty}\). Then_
\[\|f\|_{\psi_{2}}\leq\frac{\|f\|_{\infty}}{\sqrt{\log(1+\|f\|_{\infty}^{2}/\|f \|_{2}^{2})}}. \tag{18}\]
_In particular, if \(f=\sum_{k\in A}c_{k}e_{k}\) with \(A\) finite and \(c_{k}\) scalars, we have_
\[\|f\|_{\psi_{2}}\leq\frac{\sum|c_{k}|}{\sqrt{\log\big{(}1+\frac{(\sum|c_{k}|)^{ 2}}{\sum|c_{k}|^{2}}\big{)}}}. \tag{19}\]
Proof.: We can assume \(\|f\|_{\infty}=1\). Let \(\lambda=\|f\|_{\psi_{2}}\) and \(\psi_{1}(x)=e^{x}-1\). By the definition of \(\lambda\) and the convexity of \(\psi_{1}\) together with the facts \(|f|\leq 1\) and \(\psi_{1}(0)=0\), we have
\[1=\int\psi_{2}(|f|/\lambda)=\int\psi_{1}(|f|^{2}/\lambda^{2})\leq\psi_{1}(1/ \lambda^{2})\int|f|^{2}.\]
Inverting this relation gives (18), since \(\psi_{1}^{-1}(y)=\log(1+y)\).
Next, observe that \(x\mapsto x/\log(1+x)\) increases on \(\mathbb{R}^{+}\). Then (18) implies (19) through the relations \(\|f\|_{\infty}\leq\sum_{k\in A}|c_{k}|,\ \|f\|_{2}^{2}=\sum_{k\in A}|c_{k}|^{2}\). \(\Box\)
The following specializations of Lemma 5.6 will be used.
**Lemma 5.7**.: _Let \(f=\sum_{k\in I_{n}}a_{k}b_{k}e_{k}\) with \(a_{k},b_{k}\) scalars and \(\sum|a_{k}|^{2}\leq 1\). Then_
\[\|f\|_{\psi_{2}}\leq\frac{(\sum|b_{k}|^{2})^{1/2}}{\sqrt{\log(1+(\sum|b_{k}|^{2 })^{1/2})}}. \tag{20}\]
_Moreover, with \(\delta_{k}=\frac{\log k}{k}\),_
\[\|\sum_{I_{n}}\delta_{k}e_{k}\|_{\psi_{2}}\leq C_{0}\sqrt{n}. \tag{21}\]
Proof.: Apply (19) with \(c_{k}=a_{k}b_{k}\) and Cauchy-Schwarz inequality, remembering that \(\sum|a_{k}|^{2}\leq 1\). The maximum of \((\sum|c_{k}|)^{2}/\sum|c_{k}|^{2}\) is obtained when \(\sum|a_{k}|^{2}=1\) and \(a_{k}\) is proportional to \(|b_{k}|\), that is \(a_{k}=|b_{k}|/\sqrt{\sum|b_{k}|^{2}}\). Then \(\sum|c_{k}|=\sqrt{\sum|b_{k}|^{2}}\), and this gives (20).
Next, for (21), we observe that \(\sum_{k\in I_{n}}\delta_{k}\asymp 2^{n}\times(n/2^{n})\asymp n\) and similarly \(\sum_{k\in I_{n}}\delta_{k}^{2}\asymp 2^{n}\times(n^{2}/4^{n})\asymp(n^{2}/2^{n})\), so that \((\sum\delta_{k})^{2}/\sum\delta_{k}^{2}\asymp 2^{n}\). Now, (19) gives
\[\|\sum_{I_{n}}\delta_{k}e_{k}\|_{\psi_{2}}\lesssim\frac{n}{\sqrt{\log 2^{n}}} \asymp\sqrt{n}.\]
We will apply the above lemma to the random function \(f=\sum_{k\in I_{n}}(\xi_{k}-\delta_{k})e_{k}\), considered as function in the Banach space \(L^{\psi_{2}}\). For this, we first need a simple symmetrization lemma.
**Lemma 5.8**.: _Let \(\mathcal{X}\) be a Banach space, and \(X\) be a \(\mathcal{X}\)-valued and integrable random variable with symmetrization \(\tilde{X}=X-X^{\prime}\) where \(X^{\prime}\) is an independent copy of \(X\). Let \(t>2\,\mathbb{E}(\|X\|)\). Then_
\[\mathbb{P}(\|X\|>2t)\leq 2\mathbb{P}(\|\widetilde{X}\|>t). \tag{22}\]
Proof.: Firstly, Markov's inequality gives
\[\mathbb{P}(\|X\|>t)\leq\frac{\mathbb{E}(\|X\|)}{t}\leq\frac{1}{2}.\]
Secondly, \(\|X\|>2t\) and \(\|X^{\prime}\|\leq t\) imply \(\|\widetilde{X}\|>t\), so that
\[\mathbb{P}(\|\widetilde{X}\|>t) \geq \mathbb{P}(\|X^{\prime}\|\leq t,\|X\|>2t)\] \[= \mathbb{P}(\|X^{\prime}\|\leq t)\mathbb{P}(\|X\|>2t)\geq\frac{1}{ 2}\mathbb{P}(\|X\|>2t).\]
Then (22) follows.
Consider \((X_{k})_{k\in I_{n}}\) with \(X_{k}=\xi_{k}-\delta_{k}\) and their symmetrizations \((\widetilde{X}_{k})_{k\in I_{n}}\). Let now
\[Z_{n}=\|\sum_{k\in I_{n}}X_{k}e_{k}\|_{\psi_{2}},\quad\widehat{Z}_{n}=\|\sum_{ k\in I_{n}}\widetilde{X}_{k}e_{k}\|_{\psi_{2}}.\]
Thanks to (21), it holds
\[\psi_{T_{I_{n}}}:=\|\sum_{k\in I_{n}}\xi_{k}e_{k}\|_{\psi_{2}}\leq Z_{n}+\|\sum_{ I_{n}}\delta_{k}e_{k}\|_{\psi_{2}}\leq Z_{n}+C_{0}\sqrt{n}. \tag{23}\]
In order to majorize \(\widehat{Z}_{n}\) and then \(Z_{n}\), we need an inequality, proved independently by several authors, often referred to as Talagrand's deviation inequality for Lipschitz functions (see e.g. [9], [34, Corollary 4 p. 75], [70, Theorem 3]). Here is a version borrowed from [34].
**Theorem 5.9** ([34], p.75).: _Let \(f:\ell_{2}^{n}\to\mathbb{R}\) be a convex function with Lipschitz constant \(\lambda\). Let_
\[Z=f(\varepsilon_{1},\ldots,\varepsilon_{n})\]
_where \((\varepsilon_{j})_{1\leq j\leq n}\) is a Rademacher sequence. Then, it holds_
\[\mathbb{P}\big{(}|Z-\mathbb{E}(Z)|>t\big{)}\leq a\exp(-b\frac{t^{2}}{\lambda^ {2}})\quad\text{for all $t\geq 1$}\]
_where \(a,b\) are positive absolute constants._
As a corollary, we get
**Theorem 5.10**.: _Let \(A\subset\mathbb{N}\) be a finite set. Let \((\varepsilon_{j})_{j\in A}\) be a Rademacher sequence, and \(v=(v_{j})_{j\in A}\) be vectors in a Banach space \(\mathcal{X}\), with weak moment \(\sigma\) defined by_
\[\sigma=\sigma(v):=\sup_{\varphi\in B_{\mathcal{X}^{*}}}\big{(}\sum_{j\in A}| \varphi(v_{j})|^{2}\big{)}^{1/2}=\sup_{\sum|a_{j}|^{2}\leq 1}\|\sum_{j\in A}a_{j} \,v_{j}\| \tag{24}\]
_where \(B_{\mathcal{X}^{*}}\) is the unit ball of \(\mathcal{X}^{*}\). Set \(Z=\|\sum_{j\in A}\varepsilon_{j}v_{j}\|\). Then, for all \(t>0\), the following two-sided inequality holds with absolute constants \(a,b>0\):_
\[\mathbb{P}\big{(}|Z-\mathbb{E}(Z)|>t\big{)}\leq a\exp\big{(}-b\frac{t^{2}}{ \sigma^{2}}\big{)}. \tag{25}\]
Proof.: We apply Theorem 5.9 to the convex function \(f(x)=\|\sum_{j\in A}x_{j}v_{j}\|\). It suffices to note that \(f\) has Lipschitz constant \(\lambda\) exactly equal to \(\sigma\), which is elementary via Hahn-Banach's theorem.
To exploit this theorem, it is convenient to note first the following.
**Lemma 5.11**.: _For \(Z_{n}=\|\sum_{k\in I_{n}}X_{k}e_{k}\|_{\psi_{2}}\), we have_
\[\mathbb{E}(Z_{n})\leq\ \mathbb{E}(\widehat{Z}_{n})\leq C_{1}\sqrt{n}.\]
Proof.: The left inequality is clear since the \(X_{k}\)'s are centered [33, Theorem 2.6]. Next, the symmetry of the \(\widetilde{X}_{k}\), the \(L^{2}\)-\(L^{\psi_{2}}\) Khintchine inequalities and Fubini's theorem imply
\[\mathbb{E}(\widehat{Z}_{n})\lesssim\big{(}\sum_{k\in I_{n}}V(X_{k})\big{)}^{1/ 2}=\big{(}\sum_{k\in I_{n}}\delta_{k}(1-\delta_{k})\big{)}^{1/2}\leq C_{1} \sqrt{n}.\]
We next claim that
\[t>2C_{1}\sqrt{n}\Longrightarrow\mathbb{P}(Z_{n}>4t)\leq 2\,\mathbb{P}\big{(}\widehat{Z }_{n}-\mathbb{E}(\widehat{Z}_{n})>t\big{)}. \tag{26}\]
Indeed, since \(t>2C_{1}\sqrt{n}\geq 2\mathbb{E}(Z_{n})\), the relation (22) gives us
\[\mathbb{P}(Z_{n}>4t)\leq 2\mathbb{P}(\widehat{Z}_{n}>2t)\leq 2\mathbb{P}(\widehat{ Z}_{n}-\mathbb{E}(\widehat{Z}_{n})>t)\]
since \(\mathbb{E}(\widehat{Z}_{n})\leq C_{1}\sqrt{n}\leq t\).
Hence, we are led to find upper bounds on the RHS of (26). We claim that
\[\mathbb{P}\big{(}\widehat{Z}_{n}-\mathbb{E}(\widehat{Z}_{n})>t\big{)}\leq a \int_{\Omega}\exp\big{(}-b\frac{t^{2}}{\sigma_{\omega}^{2}}\big{)}d\omega \tag{27}\]
where \(\sigma_{\omega}\) denotes the weak moment of the vectors \(v_{k}=\widetilde{X}_{k}(\omega)e_{k},\ k\in I_{n}\), in the Banach space \(\mathcal{X}=L^{\psi_{2}}\). Indeed, it suffices to apply Theorem 5.10 to the variables \(\varepsilon_{k}\) and the vectors \(X_{k}v_{k}\), and a symmetrization argument.
In the following, we estimate \(\sigma_{\omega}\). For simplicity, we will abbreviate \(\sum_{k\in I_{n}}\) to \(\sum\) and \(\prod_{k\in I_{n}}\) to \(\prod\). First, remark that Lemma 5.7 implies
\[\sigma_{\omega}\leq\frac{(\sum|\widetilde{X}_{k}|^{2})^{1/2}}{\sqrt{\log(1+( \sum|\widetilde{X}_{k}|^{2})^{1/2})}}. \tag{28}\]
Second, we are going to prove that
\[W_{n}:=\sum_{k\in I_{n}}|\widetilde{X}_{k}|^{2}=\sum|\xi_{k}-\xi_{k}^{\prime} |^{2}=O(n)\quad a.s.\]
by showing
\[\mathbb{P}(W_{n}\geq 5C_{1}n)\leq\exp(-C_{1}n). \tag{29}\]
Indeed, since \(|\widetilde{X}_{k}|=|\xi_{k}-\tilde{\xi}_{k}|\leq 1\) takes only values \(0\) and \(1\) with \(P(|\widetilde{X}_{k}|=1)=2\delta_{k}(1-\delta_{k})\), Markov's inequality implies
\[\mathbb{P}(W_{n}\geq 5C_{1}n)\leq e^{-5C_{1}n}\prod\mathbb{E}(e^{|\widetilde{X} _{k}|^{2}})\]
But
\[\prod\mathbb{E}(e^{|\widetilde{X}_{k}|^{2}})\leq\prod\mathbb{E}(1+2| \widetilde{X}_{k}|^{2})\leq\prod(1+4\delta_{k})\leq e^{\sum 4\delta_{k}}\leq e^{4C_{1}n}.\]
We have thus proved (29).
If \(W_{n}(\omega)<5C_{1}n\), we have \(\sigma_{\omega}\leq C_{2}\sqrt{n/\log n}\) according to (28). This, together with (29), allows us to write (27) under the form
\[\mathbb{P}\big{(}\widehat{Z}_{n}-\mathbb{E}(\widehat{Z}_{n})>t\big{)}\leq a \int_{W_{n}<5C_{1}n}\exp\big{(}-b\frac{t^{2}}{\sigma_{\omega}^{2}}\big{)}d \omega+a\mathbb{P}(W_{n}\geq 5C_{1}n) \tag{30}\]
\[\leq a\exp(-b\frac{t^{2}}{C_{2}^{2}}\frac{\log n}{n})+a\exp(-C_{1}n). \tag{31}\]
Taking \(t=t_{n}=C_{3}\sqrt{n}\) with large \(C_{3}\) so that \(b\,C_{3}^{2}/C_{2}^{2}\geq 2\), we get, in view of (26), (30) and (31):
\[\sum\mathbb{P}(Z_{n}>4t_{n})\leq 2a\sum\left(n^{-2}+e^{-C_{1}n}\right)<\infty.\]
Now, by Borel-Cantelli's lemma, there exists almost surely an integer \(n_{0}=n_{0}(\omega)\) such that, for all \(n\geq n_{0}\), it holds \(Z_{n}\leq 4t_{n}=4C_{3}\sqrt{n}\). Hence, since we have as well \(|T_{I_{n}}|\asymp n\) almost surely, we get
\[\psi_{T_{I_{n}}}\leq Z_{n}+C_{0}\sqrt{n}\leq(4C_{3}+C_{0})\sqrt{n}\leq C_{4}|T _{I_{n}}|^{1/2},\]
This ends the proof of Theorem 5.5.
### Sharpness of Bourgain's random condition
Let \(R\) be the random set of integers associated with a Bernoulli sequence \((\xi_{k})\) with \(\mathbb{E}(\xi_{k})=\delta_{k}\downarrow\) and let \(m_{N}=\sum_{k=1}^{N}\delta_{k}\). We saw in [21, Theorem 5.3] (Bourgain's result) that if \(m_{N}/\log N\to\infty\), then \(R\) is almost surely Hartman-distributed. We mention in passing that a result of [32] is wrong: the same conclusion is claimed when \(m_{N}\to\infty\). But this _cannot work:_ a (correct) result of Kahane and Katznelson [37, Theorem 1 p. 364] shows that if \(\delta_{k}=1/k\), the corresponding \(R\) is almost surely Sidon, hence not Hartman-distributed. We are going to show here a kind of converse which, on the one hand, shows that Bourgain's result is rather sharp, and on the other hand improves on Theorem 1 in [37], which was proved by a method involving the theory of multiplicative chaos.
**Theorem 5.12**.: _Assume that \(\delta_{k}\) is decreasing and \(m_{N}=O(\log N)\). Then \(R\) is almost surely Sidon, hence not Hartman-distributed._
Proof.: We proceed in two steps.
**Step 1.**_We assume that \(m_{N}\leq c\log N\) for large \(N\) with some \(c\leq 1/(24e)\)._ We first recall a lemma from [47] already used in [21].
**Lemma 5.13**.: _Let \(n\geq 2\) and \(A\geq 1\) be positive integers. Set_
\[\Omega_{n}(A)=\{\omega:R(\omega)\cap[A,\infty[\text{ contains at least one relation of length }n\}.\]
_Then_
\[\mathbb{P}(\Omega_{n}(A))\leq\frac{B^{n}}{n^{n}}\sum_{j>A}\delta_{j}^{2}m_{j}^{n -2},\text{ with }B=4e.\]
Recall that \(R_{k}=R\cap[1,k],\ k=1,2,\ldots\). We will prove
**Lemma 5.14**.: _Let \(A_{n}=e^{n}\). Then_
(1)_\(\sum_{n\geq 1}\mathbb{P}(\Omega_{n}(A_{n}))<1\)._
(2) _Almost surely, \(|R_{A_{n}}|\leq n\) for all integers \(n\) large enough._
Indeed, since \(n\delta_{n}\leq\sum_{k=1}^{n}\delta_{k}\leq c\log n\), we get from Lemma 5.13 that
\[\mathbb{P}(\Omega_{n}(A_{n}))\leq\frac{B^{n}c^{n}}{n^{n}}\sum_{j>A_{n}}\frac {(\log j)^{n}}{j^{2}}.\]
Let \(f_{n}(t)=(\log t)^{n}/t^{2}\). This function decreases on \([A_{n},\infty[\) since
\[f_{n}^{\prime}(t)=\frac{(\log t)^{n-1}}{t^{3}}\big{(}n-2\log t\big{)}\leq 0\]
when \(t>A_{n}=e^{n}\), and then
\[\sum_{j>A_{n}}f_{n}(j)\leq\int_{A_{n}}^{\infty}\frac{(\log t)^{n}}{t^{2}}dt.\]
We now estimate
\[I_{n}:=\int_{A_{n}}^{\infty}\frac{(\log t)^{n}}{t^{2}}dt=\int_{\log A_{n}}^{ \infty}x^{n}e^{-x}dx\leq\int_{0}^{\infty}x^{n}e^{-x}dx=n!\leq n^{n}.\]
So that, simplifying by \(n^{n}\):
\[\mathbb{P}(\Omega_{n}(A_{n}))\lesssim 4(Bc)^{n}\leq 4\times 6^{-n},\]
and \(\sum_{n\geq 1}\mathbb{P}(\Omega_{n}(A_{n}))\leq 4/5<1\) which proves (1).
For (2), by Bernstein's deviation inequality and Borel-Cantelli, we know that almost surely (say for \(\omega\in\Omega_{1}\)), we have for \(n\geq n_{0}(\omega)\):
\[|R(\omega)\cap[1,A_{n}]|\leq 2\mathbb{E}(|R_{A_{n}}(\omega)|)=2m_{A_{n}}\leq 2c \log A_{n}\leq n.\]
This implies that, with \(\Omega_{2}=\big{(}\Omega\setminus\cup_{n\geq 1}\Omega_{n}(A_{n})\big{)}\cap \Omega_{1}\), we have \(\mathbb{P}(\Omega_{2})>0\) with moreover: if \(\omega\in\Omega_{2}\), then, for all \(n\), we have
\[R(\omega)\cap[A_{n},\infty[\text{ contains no relation of length }\leq n. \tag{31}\]
Otherwise, \(R(\omega)\cap[A_{n},\infty[\) would contain a relation of length \(s\) with \(3\leq s\leq n\) and since \(A_{n}\geq A_{s}\), we would have \(\omega\in\Omega_{s}(A_{s})\), contradicting \(\omega\in\Omega_{2}\).
And moreover, for large \(n\), say \(n\geq n_{0}(\omega)\),
\[|R(\omega)\cap[1,A_{n}]|\leq n. \tag{32}\]
Let now \(E\) be a finite subset of \(R(\omega)\) with cardinality \(|E|=2n\) or \(2n+1\) such that \(n\geq n_{0}(\omega)\). By (32) above, we know that
\[|E\cap[1,A_{n}]|\leq|R(\omega)\cap[1,A_{n}]|\leq n.\]
So that
\[|E\cap[A_{n},\infty[|\geq|E|-|E\cap[1,A_{n}]|\geq 2n-n=n.\]
But if we now take
\[F\subset E\cap[A_{n},\infty[\subset R(\omega)\cap[A_{n},\infty[\]
with cardinality \(n\), we see by definition (cf. 31) that \(F\) is quasi-independent. Since \(|F|\geq(1/3)|E|\) and \(E\) is arbitrary, this means, by Pisier's criterion, that \(R(\omega)\) is Sidon. All in all, we proved that \(R(\omega)\) is Sidon with positive probability. Since being Sidon is clearly an asymptotic property for \(R(\omega)\), the zero-one law shows that \(R(\omega)\) is Sidon almost surely.
**Step 2.** Dropping the dependence on \(\omega\), \(R\) is a finite union of sets \(R_{j}\), each satisfying the assumptions of Step 1 and hence almost surely Sidon. Since a finite union of Sidon
sets is again a Sidon set (Drury's theorem), \(R\) itself is Sidon and we are done.
For that, we select a large integer \(M\) such that \(C/M\leq 1/(48e)\) and set
\[R_{j}=\{\xi_{kM+j}:k\geq 1\},\ j=0,1,\ldots,M-1.\]
For each fixed \(j\), \(R_{j}\) consists of selectors of mean \(\delta_{kM+j}\). Clearly, since the \(\delta_{k}\)'s decrease, for each \(0\leq j\leq M-1\) we have
\[\sum_{k=1}^{N}\delta_{kM}\leq\sum_{k=1}^{N}\delta_{kM-j}.\]
Adding those inequalities gives
\[M\sum_{k=1}^{N}\delta_{kM}\leq\sum_{j=0}^{M-1}\sum_{k=1}^{N}\delta_{kM-j}\leq \sum_{l=1}^{NM}\delta_{l}\]
so that
\[\sum_{k=1}^{N}\mathbb{E}(\xi_{kM})=\sum_{k=1}^{N}\delta_{kM}\leq\frac{1}{M} \sum_{l=1}^{NM}\delta_{l}\leq\frac{C}{M}\log(NM)\leq\frac{2C}{M}\log N\]
for \(N\geq M\). As \(2C/M\leq 1/(24e)\), \(R_{0}\) satisfies the assumptions of Step 1, and is almost surely Sidon. We do the same for the other \(R_{j},\ j=1,\ldots,M-1\), and we are done.
**Remark.** Even for \((\delta_{n})\) nonincreasing, the assumption \(m_{N}=O(\log N)\) is more general than the Kahane-Katznelson assumption \(\delta_{n}=O(1/n)\), as shown by the following example. First we choose \(x_{j}=2^{2^{j}}\) so that
\[x_{j+1}=x_{j}^{2},\ \ \log x_{j}\asymp 2^{j},\ \ x_{1}=4.\]
Then we define
\[\delta_{n}=\begin{cases}1\ \ \ \text{if}\ \ \ 1\leq n\leq 4,\\ \frac{\log x_{j+1}}{x_{j+1}}\ \ \ \text{if}\ \ \ x_{j}<n\leq x_{j+1}\ \text{with}\ j\geq 1.\end{cases}\]
Observe that
(i) The sequence \((\delta_{n})\) is clearly _nonincreasing._
(ii) The assumption \(\delta_{n}=O(1/n)\) fails for \(n=x_{j+1}\).
(iii) Finally, for large \(N\), let \(n\) satisfy \(x_{n}<N\leq x_{n+1}\). Then
\[m_{N}=\sum_{k=1}^{N}\delta_{k}\lesssim\sum_{j=1}^{n}\Big{(}x_{j+1}\frac{\log x _{j+1}}{x_{j+1}}\Big{)}=\sum_{j=1}^{n}\log x_{j+1}\lesssim\sum_{j=1}^{n}2^{j} \lesssim\log N.\]
## 6. Questions on \(S\) and \(T\)
**Problem 1**.: _Theorem 2.1 and Theorem 2.2 can be stated and proved in the same way for \(S(q_{1},q_{2})\), but more effort is needed for \(S(q_{1},q_{2},q_{3})\)._
**Problem 2**.: _Is \(S\) a rigid set?_
**Problem 3**.: _Is a Sidon set always rigid?_
**Problem 4**.: _Is a Bohr-dense set always uniformly recurrent?_
**Problem 5**.: _Is there a Bohr-dense Sidon set? Equivalently, a non Bohr-closed Sidon set?_
**Problem 6**.: _Is the Furstenberg set \(S\)\(\frac{4}{3}\)-Rider (even \(\frac{4}{3}\)-Sidon)? More generally, is \(S(q_{1},\ldots,q_{s})\)\(\frac{2s}{s+1}\)-Rider (even \(\frac{2s}{s+1}\)-Sidon)?_
**Problem 7**.: _Is \(T\) a \(\frac{4}{3}\)-Rider set?_
**Problem 8**.: _A simple argument shows that, in Theorem 5.12, we can replace \(m_{N}=O(\log N)\) by \(m_{N_{j}}=O(\log N_{j})\) where \((N_{j})\) is an increasing sequence of integers such that \(N_{j+1}=O(N_{j})\). But we do not know if the assumption \(\liminf_{N\to\infty}m_{N}/\log N<\infty\) is enough._
### Acknowledgments
A. H. Fan is partially supported by NSFC (grant no.11971192 and grant no. 12231013). H. Queffelec and M. Queffelec acknowledge the support of the Labex CEMPI (ANR-11-LABX-0007-01).
|
2307.13014 | Graph Neural Networks For Mapping Variables Between Programs -- Extended
Version | Automated program analysis is a pivotal research domain in many areas of
Computer Science -- Formal Methods and Artificial Intelligence, in particular.
Due to the undecidability of the problem of program equivalence, comparing two
programs is highly challenging. Typically, in order to compare two programs, a
relation between both programs' sets of variables is required. Thus, mapping
variables between two programs is useful for a panoply of tasks such as program
equivalence, program analysis, program repair, and clone detection. In this
work, we propose using graph neural networks (GNNs) to map the set of variables
between two programs based on both programs' abstract syntax trees (ASTs). To
demonstrate the strength of variable mappings, we present three use-cases of
these mappings on the task of program repair to fix well-studied and recurrent
bugs among novice programmers in introductory programming assignments (IPAs).
Experimental results on a dataset of 4166 pairs of incorrect/correct programs
show that our approach correctly maps 83% of the evaluation dataset. Moreover,
our experiments show that the current state-of-the-art on program repair,
greatly dependent on the programs' structure, can only repair about 72% of the
incorrect programs. In contrast, our approach, which is solely based on
variable mappings, can repair around 88.5%. | Pedro Orvalho, Jelle Piepenbrock, Mikoláš Janota, Vasco Manquinho | 2023-07-24T16:14:32Z | http://arxiv.org/abs/2307.13014v2 | # Graph Neural Networks For Mapping Variables Between Programs - Extended Version
###### Abstract
Automated program analysis is a pivotal research domain in many areas of Computer Science -- Formal Methods and Artificial Intelligence, in particular. Due to the undecidability of the problem of program equivalence, comparing two programs is highly challenging. Typically, in order to compare two programs, a relation between both programs' sets of variables is required. Thus, mapping variables between two programs is useful for a panoply of tasks such as program equivalence, program analysis, program repair, and clone detection. In this work, we propose using graph neural networks (GNNs) to map the set of variables between two programs based on both programs' abstract syntax trees (ASTs). To demonstrate the strength of variable mappings, we present three use-cases of these mappings on the task of _program repair_ to fix well-studied and recurrent bugs among novice programmers in introductory programming assignments (IPAs). Experimental results on a dataset of 4166 pairs of incorrect/correct programs show that our approach correctly maps 83% of the evaluation dataset. Moreover, our experiments show that the current state-of-the-art on program repair, greatly dependent on the programs' structure, can only repair about 72% of the incorrect programs. In contrast, our approach, which is solely based on variable mappings, can repair around 88.5%.
## 1 Introduction
The problem of program equivalence, i.e., deciding if two programs are equivalent, is undecidable [33, 6]. On that account, the problem of repairing an incorrect program based on a correct implementation is very challenging. In order to compare both programs, i.e., the correct and the faulty implementation, program repair tools first need to find a relation between both programs' sets of variables. Besides _program repair_[1], the task of mapping variables between programs is also important for _program analysis_[41], _program equivalence_[8], _program clustering_[27, 40], _program synthesis_[30], _clone detection_[15], and _plagiarism detection_[34].
Due to a large number of student enrollments every year in programming courses, providing feedback to novice students in _introductory programming assignments_ (IPAs) requires substantial time and effort by the faculty [42]. Hence, there is an increasing need for systems capable of providing automated, comprehensive, and personalized feedback to students in programming assignments [12, 10, 11, 1]. _Semantic program repair_ has become crucial to provide feedback to each novice programmer by checking their IPA submissions using a pre-defined test suite. Semantic program repair frameworks use a correct implementation, provided by the lecturer or submitted by a previously enrolled student, to repair a new incorrect student's submission. However, the current state-of-the-art tools on semantic program repair [10, 1] for IPAs have two main drawbacks: (1) they require a perfect match between the control flow graphs (loops, functions) of both programs, the correct and the incorrect one; and (2) they require a bijective relation between both programs' sets of variables. Hence, if one of these requirements is not satisfied, then these tools cannot fix the incorrect program with the correct one.
For example, consider the two programs presented in Figure 1. These programs are students' submissions for the IPA of printing all the natural numbers from \(1\) to a given number \(n\). The program in Listing 1 is a semantically correct implementation that uses a for-loop to iterate over all the natural numbers until \(n\). The program in Listing 2 uses a while-loop and an auxiliary function. This program is semantically incorrect since the student forgot to initialize the variable \(j\), a frequent bug among novice programmers called _missing expression/assignment_ [36]. However, in this case, state-of-the-art program repair tools [10, 1] cannot fix the buggy program, since the control flow graphs do not match, either due to the use of different loops (for-loop vs. while-loop) or due to the use of an auxiliary function. Thus, these program repair tools cannot leverage the correct implementation in Listing 1 to repair the faulty program in Listing 2.
To overcome these limitations, in this paper, we propose a novel graph program representation based on the structural information of the _abstract syntax trees_ (ASTs) of imperative programs to learn how to map the set of variables between two programs using _graph neural networks_ (GNNs). Additionally, we present use-cases of program repair where these variable mappings can be applied to repair common bugs in incorrect students' programs that previous tools are not always capable of handling. For example, consider again the two programs presented in Figure 1. Note that having a mapping between both programs' variables (e.g. [n:l, i:j]) lets us reason about, on the level of expressions, which fixes one can perform on the
faulty program in Listing 2. In this case, when comparing variable i with variable j, one would find the _missing assignment_, i.e., j = 1.
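As an illustration only (a minimal sketch, not the repair pipeline described later), the snippet below shows how a variable mapping can expose such a missing assignment; the statement lists are hand-written stand-ins for what an AST traversal of the two programs in Figure 1 would produce.

```python
# Sketch: use a variable mapping to compare assignments between a correct
# and an incorrect program and suggest a missing statement.
# NOTE: naive textual renaming, for illustration only; a real implementation
# would rewrite the AST rather than strings.

def rename(assignments, mapping):
    """Rename variables in simple (lhs, rhs) assignment pairs."""
    renamed = []
    for lhs, rhs in assignments:
        for src, dst in mapping.items():
            rhs = rhs.replace(src, dst)
        renamed.append((mapping.get(lhs, lhs), rhs))
    return renamed

correct_assignments = [("i", "1"), ("i", "i + 1")]   # stand-in for Listing 1
incorrect_assignments = [("j", "j + 1")]             # stand-in for Listing 2
mapping = {"n": "l", "i": "j"}                       # produced by the GNN

missing = [stmt for stmt in rename(correct_assignments, mapping)
           if stmt not in incorrect_assignments]
print(missing)  # [('j', '1')] -> suggests inserting the fix "j = 1;"
```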
Another useful application for mapping variables between different programs is fault localization. There is a body of research on fault localization [16; 21; 22; 23] that requires the use of assertions in order to verify programs. Variable mappings can be helpful in sharing these assertions among different programs. Additionally, several program repair techniques (e.g., SearchRepair [18], Clara [10]) enumerate all possible mappings between two programs' variables during the search for possible fixes, using a correct program [10] or code snippets from a database [18]. Thus, variable mappings can drastically reduce the search space by pruning all the other solutions that use a different mapping.
In programming courses, unlike in production code, typically, there is a reference implementation for each programming exercise. This comes with the challenge of comparing different names and structures between the reference implementation and a student's program. To deal with this challenging task, we propose to map variables between programs using GNNs. Therefore, we explore three tasks to illustrate the advantages of using variable mappings to repair some frequent bugs without considering the incorrect/correct programs' control flow graphs. Hence, we propose to use our variable mappings to fix bugs of: _wrong comparison operator_, _variable misuse_, and _missing expression_. These bugs are recurrent among novice programmers [36] and have been studied by prior work in the field of automated program repair [3; 31; 38; 4].
Experiments on 4166 pairs of incorrect/correct programs show that our GNN model correctly maps 83% of the evaluation dataset. Furthermore, we also show that previous approaches can only repair about 72% of the dataset, mainly due to control flow mismatches. On the other hand, our approach, solely based on variable mappings, can fix 88.5%.
The main contributions of this work are:
* A novel graph program representation that is agnostic to the names of the variables and for each variable in the program contains a representative variable node that is connected to all the variable's occurrences;
* We propose to use GNNs for mapping variables between programs based on our program representation, ignoring the variables' identifiers;
* Our GNN model and the dataset used for this work's training and evaluation, will be made open-source and publicly available on GitHub: [https://github.com/pmorvalho/ecai23-GNNs-for-mapping-variables-between-programs](https://github.com/pmorvalho/ecai23-GNNs-for-mapping-variables-between-programs).
The structure of the remainder of this paper is as follows. First, Section 2 presents our graph program representations. Next, Section 3 describes the GNNs used in this work. Section 4 introduces typical program repair tasks, as well as our program repair approach using variable mappings. Section 5 presents the experimental evaluation where we show the effectiveness of using GNNs to produce correct variable mappings between programs. Additionally, we compare our program repair approach based on the variable mappings generated by the GNN with state-of-the-art program repair tools. Finally, Section 6 describes related work, and the paper concludes in Section 7.
## 2 Program Representations
We represent programs as directed graphs so the information can propagate in both directions in the GNN. These graphs are based on the programs' _abstract syntax trees_ (ASTs). An AST is described by a set of nodes that correspond to non-terminal symbols in the programming language's grammar and a set of tokens that correspond to terminal symbols [14]. An AST depicts a program's grammatical structure [2]. Figure 2(a) shows the AST for the small code snippet presented in Listing 3.
Regarding our graph program representation, firstly, we create a unique node in the AST for each distinct variable in the program and connect all the variable occurrences in the program to the same unique node. Figure 2(b) shows our graph representation for the small code snippet presented in Listing 3. Observe that our representation uses a single node for each variable in the program, the green nodes a and b. Moreover, we consider five types of edges in our representation: child, sibling, read, write, and chronological edges. _Child edges_ correspond to the typical edges in the AST representation that connect each parent node to its children. Child edges are bidirectional in our representation. In Figure 2(b), the black edges correspond to child edges. _Sibling edges_ connect each child to its sibling successor. These edges denote the order of the arguments for a given node and have been used in other program representations [3]. Sibling edges allow the program representation to differentiate between different arguments when the order of the arguments
Figure 1: Two implementations for the IPA of printing all the natural numbers from 1 to a given number \(n\). The program in Listing 2 is semantically incorrect since the variable j, which is the variable being used to iterate over all the natural numbers until the number l, is not being initialized, i.e., the program has a bug of _missing expression_. The mapping between these programs’ sets of variables is [n:l;i:j].
is important (e.g. a binary operation such as \(\leq\)). For example, consider the node that corresponds to the operation \(\sigma(A_{1},A_{2},\ldots,A_{m})\). The parent node \(\sigma\) is connected to each one of its children by a child edge, e.g. \(\sigma\leftrightarrow A_{1},\sigma\leftrightarrow A_{2},\ldots,\sigma \leftrightarrow A_{m}\). Additionally, each child is connected to its successor by a sibling edge, e.g. \(A_{1}\to A_{2},A_{2}\to A_{3},\ldots,A_{m-1}\to A_{m}\). In Figure 2(b), the red dashed edges correspond to sibling edges.
Regarding the _write and read edges_, these edges connect the ID nodes with the unique nodes corresponding to some variable. Write edges are connections between an ID node and its variable node, indicating that the variable is being written. Read edges are also connections between an ID node and its variable node, although these edges indicate that the variable is being read. In Figure 2(b), the blue dashed edge corresponds to a write edge while the green dashed edges correspond to read edges. Lastly, _chronological edges_ establish an order between all the ID nodes connected to some variable. These edges denote the order of the ID nodes for a given variable node. For example, in Figure 2(b), the yellow dashed edge corresponds to a chronological edge between the ID nodes of the variable \(\mathtt{a}\). Besides the sibling and the chronological edges, all the other edges are bidirectional in our representation.
_The novelty of our graph representation_ is that we create a unique variable node for each variable in the program and connect each variable's occurrence to its unique node. This lets us map two variables in two programs, even if their number of occurrences is different in each program. Furthermore, the variable's identifier is suppressed after we connect all the variable's occurrences to its unique node. This way, all the variables' identifiers are anonymized. Prior work on representing programs as graphs [3; 38; 4] uses different nodes for each variable occurrence and takes the variable identifier into consideration in the program representation. Furthermore, to the best of our knowledge, combining all five types of edges (sibling, write, read, chronological, and AST) is also novel. Section 5.3 presents an ablation study on the set of edges to analyze the impact of each type of edge.
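To make the representation concrete, the following minimal sketch builds such a graph for a toy program. It is not the authors' implementation: the paper operates on C ASTs, whereas this sketch uses Python's built-in `ast` module as an analogue, and names such as `program_graph` and the edge-type labels are illustrative only.

```python
import ast
from collections import defaultdict

def program_graph(source: str):
    """Build node labels and typed edges with one unique node per variable."""
    tree = ast.parse(source)
    nodes, edges = [], defaultdict(list)      # edges[edge_type] -> list of (src, dst)
    var_node, last_occurrence = {}, {}        # unique variable nodes, chronological links

    def new_node(label):
        nodes.append(label)
        return len(nodes) - 1

    def visit(node):
        nid = new_node(type(node).__name__)
        children = [visit(c) for c in ast.iter_child_nodes(node)]
        for c in children:                    # child edges (bidirectional in the paper)
            edges["child"].append((nid, c))
        for a, b in zip(children, children[1:]):
            edges["sibling"].append((a, b))   # sibling edges keep argument order
        if isinstance(node, ast.Name):        # an occurrence of a variable
            if node.id not in var_node:       # the identifier itself is anonymized
                var_node[node.id] = new_node("VAR")
            kind = "write" if isinstance(node.ctx, ast.Store) else "read"
            edges[kind].append((nid, var_node[node.id]))
            if node.id in last_occurrence:    # chronological edge between occurrences
                edges["chrono"].append((last_occurrence[node.id], nid))
            last_occurrence[node.id] = nid
        return nid

    visit(tree)
    return nodes, dict(edges)

nodes, edges = program_graph("a = 1\nb = a + 2\nprint(b)")
print(len(nodes), {k: len(v) for k, v in edges.items()})
```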
## 3 Graph Neural Networks (GNNs)
Graph Neural Networks (GNNs) are a subclass of neural networks designed to operate on graph-structured data [20], which may be citation networks [7], mathematical logic [9] or representations of computer code [3]. Here, we use graph representations of a pair of ASTs, representing two programs for which we want to match variables, as the input. The main operative mechanism is to perform _message passing_ between the nodes, so that information about the global problem can be passed between the local constituents. The content of these messages and the final representation of the nodes is parameterized by neural network operations (matrix multiplications composed with a non-linear function). For the variable matching task, we do the following to train the parameters of the network. After several message passing rounds through the edges defined by the program representations above, we obtain numerical vectors corresponding to each variable node in the two programs. We compute scalar products between each possible combination of variable nodes in the two programs, followed by a softmax function. Since the program samples are obtained by program mutation, the correct mapping of variables is known. Hence, we can compute a cross-entropy loss and minimize it so that the network output corresponds to the labeled variable matching. Note that the network has no information on the name of any object, which means that the task must be solved purely based on the structure of the graph representation. Therefore, our method is invariant to the consistent renaming of variables.
Architecture Details.The specific GNN architecture used in this work is the relational graph convolutional neural network (RGCN), which can handle multiple edges or relation types within one graph [35]. The numerical representation of nodes in the graph is updated in the message passing step according to the following equation:
\[\mathbf{x}^{\prime}_{i}=\mathbf{\Theta}_{\text{root}}\cdot\mathbf{x}_{i}+\sum_{r\in\mathcal{R}}\sum_{j\in\mathcal{N}_{r}(i)}\frac{1}{|\mathcal{N}_{r}(i)|}\mathbf{\Theta}_{r}\cdot\mathbf{x}_{j},\]
where \(\mathbf{\Theta}\) are the trainable parameters, \(\mathcal{R}\) stands for the different edge types that occur in the graph, and \(\mathcal{N}_{r}\) the neighbouring nodes of the current node \(i\) that are connected with the edge type \(r\)[32]. After each step, we apply Layer Normalization [5] followed by a Rectified Linear Unit (ReLU) non-linear function.
We use two separate sets of parameters for the message passing phase for the program with the bug and the correct program. Five message passing steps are used in this work.
Figure 2: AST and our graph representation for the small code snippet presented in Listing 3.
After the message passing phase, we obtain numerical vectors representing every node in both graphs. We then calculate dot products \(\vec{a}\cdot\vec{b}\) between the vectors representing variable nodes in the buggy program graph \(a\in A\) and the variable nodes from the correct graph \(b\in B\), where \(A\) and \(B\) are the sets of variable node vectors. A score matrix \(\mathcal{S}\) with dimensions \(|A|\times|B|\) is obtained, to which we apply the softmax function on each row to obtain the matrix \(\mathcal{P}\). The values in each row of \(\mathcal{P}\) can now be interpreted as representing the probability that variable \(a_{i}\) maps to each of the variables \(b_{j}\in B\).
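A minimal PyTorch-style sketch of this scoring step is given below. It assumes the two sets of variable-node embeddings have already been produced by the message passing phase; the function name and tensor shapes are illustrative rather than taken from the authors' code. Note that `F.cross_entropy` applies the softmax internally, so it is fed the raw score matrix \(\mathcal{S}\).

```python
import torch
import torch.nn.functional as F

def mapping_scores(buggy_vars, correct_vars, target=None):
    # buggy_vars:   |A| x d embeddings of variable nodes in the buggy program
    # correct_vars: |B| x d embeddings of variable nodes in the correct program
    # target:       |A| tensor, target[i] = index in B of the true match (known,
    #               because the buggy program is obtained by mutation)
    scores = buggy_vars @ correct_vars.t()        # score matrix S, shape |A| x |B|
    probs = F.softmax(scores, dim=1)              # matrix P, each row sums to 1
    mapping = probs.argmax(dim=1)                 # most likely variable mapping
    loss = F.cross_entropy(scores, target) if target is not None else None
    return probs, mapping, loss

probs, mapping, loss = mapping_scores(torch.randn(3, 16), torch.randn(4, 16),
                                      target=torch.tensor([0, 2, 1]))
print(mapping.tolist(), float(loss))
```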
## 4 Use-Cases: Program Repair
In this section, we propose a few use-cases on how to use variable mappings for program repair, namely to repair bugs of: _wrong comparison operator_, _variable misuse_, and _missing expression_. These bugs are common among novice programmers [36] and have been studied by prior work in the field of automated program repair [3, 31, 38, 4]. The current state-of-the-art semantic program repair tools focused on repairing IPAs, such as Clara [10] and Verifix [1], are only able to fix these bugs if the correct expression in the correct program is located in a similar program structure as the incorrect expression in the incorrect implementation. For example, consider again the two programs presented in Figure 1. If the loop condition were incorrect in the faulty program, Clara and Verifix could not fix it, since the control flow graphs do not match. Thus, these tools would fail due to _structural mismatch_.
The following sections present three program repair tasks that take advantage of variable mappings to repair an incorrect program using a correct implementation for the same IPA without considering the programs' structures. Our main goal is to show the usefulness of variable mappings. We claim that variable mappings are informative enough to repair these three realistic types of bugs. Given a buggy program, we search for and try to repair all three types of bugs. Whenever we find a possible fix, we check if the program is correct using the IPA's test suite.
Bug #1: Wrong Comparison Operator (WCO). Our first use-case concerns faulty programs with the bug of wrong comparison operator (WCO). This is a recurrent bug in students' submissions to IPAs since novice programmers frequently use the wrong operator, e.g., \(\mathrm{i}<=\mathrm{n}\) instead of \(\mathrm{i}<\mathrm{n}\).
We propose tackling this problem solely based on the variable mapping between the faulty and correct programs, ignoring the programs' structure. First, we rename all the variables in the incorrect program based on the variable mapping by changing all the variables' identifiers in the incorrect program with the corresponding variables' identifiers in the correct implementation. Second, we count the number of times each comparison operation appears with a specific pair of variables/expressions in each program. Then, for each comparison operation in the correct program, we compute the mirrored expression, i.e., swapping the operator by its mirrored operator, and swapping the left-side and right-side of the operation. This way, if the incorrect program has the same correct mirrored expression, we can match it with an expression in the correct program. For example, in the programs shown in Figure 1, both loop conditions would match even if they are mirrored expressions, i.e., \(\mathrm{i}<=\mathrm{n}\) and \(\mathrm{n}>=\mathrm{i}\).
Afterwards, we iterate over all the pairs of variables/expressions that appear in comparison operations of the correct program (plus the mirrored expressions) and compare if the same pair of variables/expressions appear the same number of times in the incorrect program, using the same comparison operator. If this is not the case, we try to fix the program using the correct implementation's operator in each operation of the incorrect program with the same pair of variables/expressions. Once the program is fixed, we rename all the variables based on the reverse variable mapping.
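The counting-and-mirroring step described above can be sketched as follows. The snippet assumes comparisons have already been extracted as `(lhs, operator, rhs)` triples and that the buggy program's variables were renamed with the GNN mapping; all names are illustrative, not the authors' implementation.

```python
from collections import Counter

MIRROR = {"<": ">", ">": "<", "<=": ">=", ">=": "<=", "==": "==", "!=": "!="}

def normalize(lhs, op, rhs):
    # "i <= n" and "n >= i" are treated as the same comparison once mirrored.
    return min((lhs, op, rhs), (rhs, MIRROR[op], lhs))

def wco_fix_candidates(buggy_cmps, correct_cmps):
    correct = Counter(normalize(*c) for c in correct_cmps)
    fixes = []
    for lhs, op, rhs in buggy_cmps:
        if correct[normalize(lhs, op, rhs)] > 0:
            continue                              # same comparison exists in the correct program
        for l, o, r in correct:                   # same operand pair, different operator
            if {l, r} == {lhs, rhs} and o != op:
                new_op = o if (l, r) == (lhs, rhs) else MIRROR[o]
                fixes.append(((lhs, op, rhs), (lhs, new_op, rhs)))
    return fixes

# A buggy "i <= n" against a correct (mirrored) "n > i" suggests replacing <= by <.
print(wco_fix_candidates([("i", "<=", "n")], [("n", ">", "i")]))
```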
Bug #2: Variable Misuse (VM). Our second program repair task concerns buggy programs in which a variable is misused, i.e., the student uses the wrong variable in some program location. The wrong variable is of the same type as the correct variable that should be used. Hence, this bug does not produce any compilation errors. This type of bug is common among students and experienced programmers [17, 37]. The task of detecting this specific bug has received much attention from the Machine Learning (ML) research community [3, 38, 43].
Once again, we propose to tackle this problem based on the variable mapping between the faulty program and the correct one, ignoring the programs' structure. We start by renaming all the variables in the incorrect program based on the variable mapping. Then we count the number of times each variable appears in both programs. If a variable, \(\mathrm{x}\), appears more times in the incorrect program than in the correct implementation, and if another variable \(\mathrm{y}\) appears more times in the correct program, then we try to replace each occurrence of \(\mathrm{x}\) in the incorrect program with \(\mathrm{y}\). Once the program is fixed, we rename all the variables based on the reverse variable mapping.
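The occurrence-counting heuristic for variable misuse can be sketched as below, again assuming the buggy program's variables were already renamed with the GNN mapping; the names are illustrative only.

```python
from collections import Counter

def vm_swap_candidates(buggy_counts, correct_counts):
    # Occurrence counts per variable identifier in each program.
    over = [v for v, n in buggy_counts.items() if n > correct_counts.get(v, 0)]
    under = [v for v, n in correct_counts.items() if n > buggy_counts.get(v, 0)]
    # Try replacing an occurrence of an over-used variable x by an under-used y.
    return [(x, y) for x in over for y in under]

print(vm_swap_candidates(Counter({"n": 3, "i": 5}), Counter({"n": 4, "i": 4})))
# -> [('i', 'n')]: some occurrence of i should probably be n
```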
Bug #3: Missing Expression (ME).The last use-case we will focus on is to repair the bug of _missing expressions/assignments_. This bug is also recurrent in students' implementations of IPAs [36]. Frequently, students forget to initialize some variable or to increment a variable of some loop, resulting in a bug of missing expression. However, unlike the previously mentioned bugs, this one has not received much attention from the ML community since it is more complex to repair this program fault. To search for a possible fix, we start by renaming all the variables in the incorrect program based on the variable mapping. Next, we count the number of times each expression appears in both programs. Expressions that appear more frequently in the correct implementation are considered possible repairs. Then, we try to inject these expressions, one at a time, into the incorrect implementation's code blocks and check the program's correctness. Once the program is fixed, we rename all the variables based on the reverse variable mapping. This task is solely based on the variable mapping between the faulty and the correct programs.
## 5 Experiments
Experimental Setup.We trained the Graph Neural Networks on an Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz server with 72 CPUs and 692GB RAM. Networks were trained using NVIDIA GEFORCE GTX 1080 graphics cards with 12GB of memory. All the experiments related to our program repair tasks were conducted on an Intel(R) Xeon(R) Silver computer with 4210R CPUs @ 2.40GHz, using a memory limit of 64GB and a timeout of 60 seconds.
### IPAs Dataset
To evaluate our work, we used C-Pack-IPAs[26], a benchmark of student programs developed during an introductory programming course in the C programming language for ten different IPAs, over two distinct academic years, at Instituto Superior Tecnico. These
IPAs are small imperative programs that deal with integers and input-output operations (see Appendix B).
First, we selected a set of correct submissions, i.e., programs that compiled without any error and satisfied a set of input-output test cases for each IPA. We gathered 238 correct students' submissions from the first year and 78 submissions from the second year. We used the students' submissions from the first year for training and for validating our GNN and the submissions from the second year for evaluating our work.
Since we need to know the real variable mappings between programs (ground truth) to evaluate our representation, we generated a dataset of pairs of correct/incorrect programs to train and evaluate our work with specific bugs. This is a common procedure to evaluate machine learning models in the field of program repair [3, 38, 4, 43, 29]. To generate this dataset, we used MultIPAs [28], a program modifier capable of mutating C programs syntactically, generating semantically equivalent programs, i.e., changing the program's structure but keeping its semantics. There are several program mutations available in MultIPAs: mirroring comparison expressions, swapping the if's then-block with the else-block and negating the test condition, increment/decrement operators mirroring, variable declarations reordering, translating for-loops into equivalent while-loops, and all possible combinations of these program mutations. Hence, MultIPAs has thirty-one different configurations for mutating a program. All these program mutations generate semantically equivalent programs. Afterwards, we also used MultIPAs, to introduce bugs into the programs, such as _wrong comparison operator_ (WCO), _variable misuse_ (VM), _missing expression_ (ME). Hence, we gathered a dataset of pairs of programs and the mappings between their sets of variables (see Appendix A). Each pair corresponds to a real correct student's implementation, and the second program is the student's program after being mutated and with some bug introduced. Thus, this IPA dataset is generated, although based on real programs. The dataset is divided into three different sets: training set, validation set, and evaluation set. The programs generated from _first year_ submissions are divided into a training and validation set based on which students' submissions they derive from. 80% of the students supply the training data, while 20% supply validation data. The evaluation set, which is not used during the machine learning optimization, is chronologically separate: it consists only of _second year_ submissions, to simulate the real-world scenario of new, incoming students. The training set is composed of 3372, 5170, and 2908 pairs of programs from the first academic year for the WCO, VM, and ME bugs, respectively. The validation set, which was used during development to check the generalization of the prediction to unseen data, comprises 1457, 1457, and 1023 pairs of programs from the first year. Note that we subsample from the full spectrum of possible mutations, to keep the training data size small enough to train the network with reasonable time constraints. From each of the 31 combinations of mutations, we use one randomly created sample for each student per exercise. We found that this already introduced enough variation in the training dataset to generalize to unseen data. Finally, the evaluation set is composed of 4166 pairs of programs from the second year (see \(3^{rd}\) row, Table 2). This dataset will be publicly available for reproducibility reasons.
### Training
At training time, since the incorrect program is generated, the mapping between the variables of both programs is known. The network is trained by minimizing the cross entropy loss between the labels (which are categorical integer values indicating the correct mapping) and the values in each corresponding row of the matrix \(\mathcal{P}\). As an optimizer, we used the Adam algorithm with its default settings in PyTorch [19]. The batch size was 1. As there are many different programs generated by the mutation procedures, we took one sample from each mutation for each student. Each network was trained for 20 full passes (epochs) over this dataset while shuffling the order of the training data before each pass. For validation purposes, data corresponding to 20\(\%\) of the students from the first year of the dataset was kept separate and not trained on.
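The training procedure described above can be summarised by the following PyTorch-style sketch. The data format and the assumption that `model(graph_pair)` returns the \(|A|\times|B|\) score matrix are ours for illustration, not the authors' code.

```python
import random
import torch
import torch.nn.functional as F

def train(model, pairs, epochs=20):
    # pairs: list of (graph_pair, target); target holds the known correct mapping,
    # available because the buggy program was generated by mutation.
    opt = torch.optim.Adam(model.parameters())    # Adam with default settings
    for _ in range(epochs):                       # 20 full passes over the data
        random.shuffle(pairs)                     # shuffle before each pass
        for graph_pair, target in pairs:          # batch size 1
            scores = model(graph_pair)            # |A| x |B| score matrix
            loss = F.cross_entropy(scores, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```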
Table 1 shows the percentage of validation data mappings that were exactly correct (accuracy) after 20 epochs of training, using four different GNN models. Each GNN model was trained on programs with the bugs of wrong comparison operator (WCO), variable misuse (VM), missing expression (ME) or all of them (All). Furthermore, each GNN model has its own validation set with programs with a specific type of bug. The GNN model trained on All Bugs was validated using a mix of problems from each bug type. In the following sections, we focus only on this last GNN model (All Bugs).
### Evaluation
Our GNN model was trained on programs with bugs of wrong comparison operator (WCO), variable misuse (VM), and missing expression (ME). We used two evaluation metrics to evaluate the variable mappings produced by the GNN. First, we counted the number of totally correct mappings our GNN was able to generate. We consider a variable mapping totally correct if it correctly maps all the variables between two programs.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{**Buggy Programs**} \\ \cline{2-5}
**Evaluation Metric** & WCO Bug & VM Bug & ME Bug & All Bugs \\ \hline \# Correct Mappings & 87.38\% & 81.87\% & 79.95\% & 82.77\% \\ Avg Overlap Coefficient & 96.99\% & 94.28\% & 94.51\% & 95.05\% \\ \hline \# Programs & 1078 & 1936 & 1152 & 4166 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The number of correct variable mappings generated by our GNN on the evaluation dataset and the average overlap coefficients between the real mappings and our GNN’s variable mappings.
Table 1: Validation mappings fully correct after 20 training epochs.
Secondly, we computed the overlap coefficient between the original variable mappings and the variable mappings generated by our GNN. The overlap coefficient is a similarity metric given by the intersection between the two mappings divided by the length of the variable mapping (see Appendix D).
The first row in Table 2 shows the number of totally correct variable mappings computed by our GNN model. One can see that the GNN maps correctly around 83% of the evaluation dataset. We have also looked into the number of variables in the mappings we were not getting entirely correct. The results showed that programs with more variables (e.g., six or seven) are the most difficult for our GNN to map correctly (see Appendix C). For this reason, we have also computed the overlap coefficient between the GNN's variable mappings and the original mappings (ground truth). The second row in Table 2 shows the average of the overlap coefficients between the original variable mappings and the mappings generated by our GNN model. The overlap coefficient [39] measures the intersection (overlap) between two mappings. If the coefficient is \(100\%\), both sets are equal. One set cannot be a subset of the other since both sets have the same number of variables in our case. The opposite is \(0\%\) overlap, meaning there is no intersection between the two mappings. The GNN achieved an average overlap coefficient of at least 94%, i.e., even if the mappings are not always fully correct, almost 94% of the variables are correctly mapped by the GNN.
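For reference, the overlap coefficient between two variable mappings (see Appendix D) can be computed as in the following sketch; representing a mapping as a dictionary is our own choice for illustration.

```python
def overlap_coefficient(mapping_a, mapping_b):
    # mapping_a, mapping_b: dicts {buggy_variable: correct_variable}
    shared = sum(1 for k, v in mapping_a.items() if mapping_b.get(k) == v)
    return shared / min(len(mapping_a), len(mapping_b))

# Ground truth [n:l; i:j] vs. a prediction that gets only n right -> 0.5
print(overlap_coefficient({"n": "l", "i": "j"}, {"n": "l", "i": "k"}))
```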
Ablation Study.To study the effect of each type of edge in our program representation, we have performed an ablation study on the set of edges. Prior works have done similar ablation studies [3]. Table 3 presents the accuracy of our GNN (i.e., number of correct mappings) on the evaluation dataset after 20 epochs. We can see that the accuracy of our GNN drops from 96% to 53% if we remove the AST edges (index 0), which was expected since these edges provide syntactic information about the program. Removing the sibling edges (index 1) also causes a great impact on the GNN's performance, dropping to 74%. The other edges are also important, and if we remove them, there is a negative impact on the GNN's performance. Lastly, since the AST and sibling edges caused the greatest impact, we evaluated using only these edges on our GNN and got an accuracy of 94.7%. However, the model using all the proposed edges has the highest accuracy of 96.49%.
### Program Repair
This section presents the results of using variable mappings on the three use-cases described in Section 4, i.e., the tasks of repairing bugs of: _wrong comparison operator_ (WCO), _variable misuse_ (VM) and _missing expression_ (ME). For this evaluation, we have also used the two current publicly available program repair tools for fixing introductory programming assignments (IPAs): Clara [10] and Verifix [1]. Furthermore, we have tried to fix each pair of incorrect/correct programs in the evaluation dataset by passing each one of these pairs of programs to every repair method: Verifix, Clara, and our repair approach based on the GNN's variable mappings.
If our repair procedure cannot fix the incorrect program using the most likely variable mapping according to the GNN model, then it generates the next most likely mapping based on the variables' distributions computed by the GNN. Therefore, the repair method iterates over all variable mappings based on the GNN's predictions. Lastly, we have also run the repair approach using as baseline variable mappings generated based on uniform distributions. This case simulates most repair techniques that compute all possible mappings between both programs' variables (e.g., SearchRepair[18]).
Table 4 presents the number of programs repaired by each different repair method. The first row presents the results for the baseline, which was only able to fix around 50% of the evaluation dataset. In the second row, the interested reader can see that Verifix can only repair about 62% of all programs. Clara, presented in the third row, outperforms Verifix, being able to repair around 72% of the whole dataset. The last row presents the GNN model. This model is the best one, repairing 88.5% of the dataset.
The number of executions that resulted in a timeout (60 seconds) is relatively small for Verifix and Clara. Regarding our repair procedure, it either fixes the incorrect program or iterates over all variable mappings until it finds one that fixes the program. Thus, the baseline and the GNN present no failed executions and considerably high rates of executions that end up in timeouts, almost 50% for the baseline and 11.5% in the case of the GNN model. Additionally, Table 4 also presents the failure rate of each technique, i.e., all the computations that ended within 60 seconds and did not succeed in fixing the given incorrect program. Verifix has the highest failure rate, around 35% of the entire evaluation set. Clara also presents a significant failure rate, about 28%. As explained previously, this is the main drawback of these tools. Hence, these results support our claim that it is possible to repair these three realistic bugs solely based on the variable mappings' information without matching the structure of the incorrect/correct programs.
Furthermore, considering all executions, the average number of variable mappings used within 60 seconds is 1.24 variable mappings for the GNN model and 5.6 variable mappings when considering the baseline. The minimum number of mappings generated by both approaches is 1, i.e., both techniques were able to fix at least one incorrect program using the first generated variable mapping. The maximum number of variable mappings generated was 32 (resp. 48) for the GNN (resp. baseline). The maximum number of variable mappings used is high because the repair procedure iterates over all the variable mappings until the program is fixed or the time runs out. Moreover, even if we would only consider using the first variable mapping generated by the GNN model to repair the incorrect programs, we would be able to fix 3377 programs in 60 seconds, corresponding to 81% of the evaluation dataset.
Regarding the time performance of each technique, Figure 3 shows a cactus plot that presents the CPU time spent, in seconds, on repairing each program (\(y\)-axis) against the number of repaired programs (\(x\)-axis) using different repairing techniques. One can clearly see a gap between the different repair methods' time performances. For example, in 10 seconds, the baseline can only repair around 1150 programs, Verifix repairs around 2300, Clara repairs around 2850 programs, while using the GNN's variable mappings we can repair around 3350 programs, i.e., around 17% more.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**Edges Used** & All & (1,2,3,4) & (0,2,3,4) & (0,1,3,4) & (0,1,2,4) & (0,1,2,3) & (0,1) \\ \hline
**Accuracy** & **96.49\%** & 52.53\% & 73.76\% & 95.45\% & 94.87\% & 96.06\% & 94.74\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Percentage of variable mappings fully correct on the validation set for different sets of edges used. Each type of edge is represented by an index using the mapping: {0: AST; 1: sibling; 2: write; 3: read; 4: chronological}.
We are considering the time the GNN takes to generate the variable mappings and the time spent on the repair procedure. However, the time spent by the GNN to generate one variable mapping is almost insignificant. The average time the GNN takes to produce a variable mapping is 0.025 seconds. The minimum (resp. maximum) time spent by the GNN, considering all the executions, is 0.015s (resp. 0.183s).
## 6 Related Work
_Automated program repair_[1, 24, 10, 12, 42] has become crucial to provide feedback to novice programmers by checking their introductory programming assignments (IPAs) submissions using a test suite. In order to repair an incorrect program with a correct reference implementation, Clara[10] requires a perfect match between both programs' control flow graphs and a bijective relation between both programs' variables. Otherwise, Clara returns a structural mismatch error. Verifix[1] aligns the control flow graph (CFG) of an incorrect program with the reference solution's CFG. Then, using that alignment relation and MaxSMT solving, Verifix proposes fixes to the incorrect program. Verifix also requires a compatible control flow graph between the incorrect and the correct program. BugLab[4] is a Python program repair tool that learns how to detect and fix minor semantic bugs. To train BugLab, [4] applied four program mutations and introduced four different bugs to augment their benchmark of Python programs. DeepBug[31] uses rule-based mutations to build a dataset of programs from scratch to train its ML-based program repair tool. Given a program, this tool classifies if the program is buggy or not.
_Mapping variables_ can also be helpful for the task of _code adaptation_, where the repair framework tries to adapt all the variable names in a pasted snippet of code, copied from another program or a Stack Overflow post, to the surrounding preexisting code [25]. AdaptivePaste [25] focuses on a task similar to _variable misuse_ (VM) repair; it uses a sequence-to-sequence model with multi-decoder transformer training to learn programming language semantics and adapt variables in the pasted snippet of code. Recently, several systems were proposed to tackle the VM bug with ML models [3, 13, 41]. These tools classify the variable locations as faulty or correct and then replace the faulty ones through an enumerative prediction of each buggy location [3]. However, none of these methods takes program semantics into account, especially the long-range dependencies of variable usages [25].
## 7 Conclusions
This paper tackles the highly challenging problem of mapping variables between programs. We propose the usage of graph neural networks (GNNs) to map the set of variables between two programs using our novel graph representation that is based on both programs' abstract syntax trees. In a dataset of 4166 pairs of incorrect/correct programs, experiments show that our GNN correctly maps 83% of the evaluation dataset. Furthermore, we leverage the variable mappings to perform automatic program repair. While the current state-of-the-art on program repair can only repair about 72% of the evaluation dataset due to structural mismatch errors, our approach, based on variable mappings, is able to fix 88.5%.
In future work, we propose to integrate our variable mappings into other program repair tools to evaluate the impact of using these mappings to repair other types of bugs. Additionally, we will analyze using our mappings to fix an incorrect program using several correct programs.
## Acknowledgements
This work was supported by Portuguese national funds through FCT under projects UIDB/50021/2020, PTDC/CCI-COM/2156/2021, 2022.03537.PTDC and grant SFRH/BD/07724/2020. This work was also supported by European funds through COST Action CA2011; by the European Regional Development Fund under the Czech project AI&Reasoning no. CZ.02.1.01/0.0/0.0/15_003/0000466 (JP), Amazon Research Awards (JP), and by the Ministry of Education, Youth, and Sports within the program ERC CZ under the project POSTMAN no. LL1902. This article is part of the RICAIP project that has received funding from the EU's Horizon 2020 research and innovation program under grant agreement No 857306.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{3}{c}{**Buggy Programs**} & \multicolumn{3}{c}{**Not Succeeded**} \\ \cline{2-7}
**Repair Method** & WCO Bug & VM Bug & ME Bug & All Bugs & **\% Failed** & **\% Timeouts (60s)** \\ \hline
**Baseline** & 618 (57.33\%) & 1187 (61.31\%) & 287 (24.91\%) & 2092 (50.22\%) & 0 (0.0\%) & **2074 (49.78\%)** \\
**Verifix** & 555 (51.48\%) & 1292 (66.74\%) & 741 (64.32\%) & 2588 (62.12\%) & **1471 (35.31\%)** & 107 (2.57\%) \\
**Clara** & 722 (66.98\%) & 1517 (78.36\%) & 764 (66.32\%) & 3003 (72.08\%) & 1153 (27.68\%) & 10 (0.24\%) \\
**GNN** & **992 (92.02\%)** & **1714 (88.53\%)** & **981 (85.16\%)** & **3687 (88.5\%)** & 0 (0.0\%) & 479 (11.5\%) \\ \hline \hline \end{tabular}
\end{table}
Table 4: The number of programs repaired by each different repair technique: Verifix, Clara, and our repair approach based on our GNN’s variable mappings. The first row shows the results of repairing the programs using variable mappings generated based on uniform distributions (baseline).
Figure 3: Cactus plot - The time spent by each method repairing each program of the evaluation dataset, using a timeout of 60 seconds.
## Appendix A IPAs Dataset Generation
To evaluate our work, we have generated a dataset of pairs of programs based on a benchmark of student programs developed during an introductory programming course in the C programming language for ten different introductory programming assignments (IPAs), over two distinct academic years. We selected only semantically correct submissions, i.e., programs that compiled without any error and satisfied a set of input-output test cases for each IPA.
Afterwards, we generated a dataset of pairs of correct/incorrect programs to train and evaluate our work with specific bugs. The reason to generate programs is that we need to know the real variable mappings between two programs (ground truth) to evaluate our representation. As explained in the paper, we used MultIPAs [28] to generate this dataset. This tool can mutate our programs syntactically, generating semantically equivalent programs. There are several program mutations available in MultIPAs, such as: mirroring comparison expressions, swapping the if's then-block with the else-block and negating the test condition, increment/decrement operators mirroring, variable declarations reordering, translating for-loops into equivalent while-loops, and all possible combinations of these program mutations. Hence, MultIPAs has 31 different configurations for mutating a program. Each program mutation can be applied in more than one place for a given program. Hence, each program mutation can generate several different mutated programs. For example, using the program mutation that reorders variable declarations, each possible reordering generates a different mutated program.
Regarding the generation of buggy programs, we also used MultIPAs to introduce bugs into the programs, such as _wrong comparison operator_ (WCO), _variable misuse_ (VM) and _missing expression_ (ME). Each bug can be applied in more than one place for a given program. Thus, one program can generate several different buggy programs using the same bug. For example, the bug of variable misuse can be applied to each variable occurrence in the program, each of which generates a single buggy program.
Figure 4 presents the generation of our dataset. Firstly, we applied all the available program mutations to each correct student's submission. Then, for each mutated program, we applied all three types of bugs: WCO, VM and ME. Finally, we gathered a dataset of pairs of programs and the mappings between their sets of variables. As Figure 4 shows, each pair of programs, in our generated dataset, corresponds to a correct student's implementation and the student's program after being mutated and with some bug introduced.
## Appendix B Description of IPAs
The set of Introductory Programming Assignments (IPAs) used to train and evaluate the GNN model is part of the C-Pack-IPAs benchmark [26]. In this set of IPAs the students learn how to program with integers, floats, IO operations (mainly printf and scanf), conditionals (if-statements), and simple loops (for and while-loops).
Ipa #1.Write a program that determines and prints the largest of three integers given by the user.
Ipa #2.Write a program that reads two integers 'N, M' and prints the smallest of them in the first row and the largest in the second.
Ipa #3.Write a program that reads two positive integers 'N, M' and prints "yes" if 'M' is a divisor of 'N', otherwise prints "no".
Ipa #4.Write a program that reads three integers and prints them in order on the same line. The smallest number must appear first.
Ipa #5.Write a program that reads a positive integer 'N' and prints the numbers '1..N', one per line.
Ipa #6.Write a program that determines the largest and smallest number of 'N' real numbers given by the user. Consider that 'N' is a value requested from the user. The result must be printed with the command 'printf("min: %f, max: %f\n", min, max)'.
Ipa #7.Write a program that asks the user for a positive integer 'N' and prints the number of divisors of 'N'. Remember that prime numbers have 2 divisors.
Ipa #8.Write a program that calculates and prints the average of 'N' real numbers given by the user. The program should first ask the user for an integer 'N', representing the number of numbers to be entered. The real numbers must be represented by float type. The result must be printed with the command 'printf("%.2f", avg);'.
Ipa #9.Write a program that asks the user for a value 'N' corresponding to a certain period of time in seconds. The program should output this period of time in the format 'HH:MM:SS'.
Ipa #10.Write a program that asks the user for a positive value 'N'. The output should present the number of digits that make up 'N' (on the first line), as well as the sum of the digits of 'N' (on the second line). For example, the number 12345 has 5 digits, and the sum of these digits is 15.
## Appendix C #Correct/Incorrect Mappings vs #Variables
Figure 5 shows a histogram with the number of programs (\(y\)-axis) grouped by their number of variables (\(x\)-axis), distinguishing programs whose variables our GNN model maps totally correctly (#Correct Mappings, in green) from programs with at least one variable mapped incorrectly (#Incorrect Mappings, in red).
## Appendix D Overlap Coefficient
The overlap or Szymkiewicz-Simpson coefficient measures the overlap between two sets (e.g. mappings). This metric can be calculated by dividing the size of the intersection of two sets by the size of the smaller set, as follows:
\[overlap(A,B)=\frac{|A\cap B|}{min(|A|,|B|)} \tag{1}\]
An overlap of \(100\%\) means that both sets are equal or one of them is a subset of the other. The opposite, \(0\%\) overlap, means there is no intersection between both sets. |
2310.08945 | Performance of a PEM fuel cell cathode catalyst layer under oscillating
potential and oxygen supply | A model for impedance of a PEM fuel cell cathode catalyst layer under
simultaneous application of potential and oxygen concentration perturbations is
developed and solved. The resulting expression demonstrates dramatic lowering
of the layer impedance under increase in the amplitude of the oxygen
concentration perturbation. In--phase oscillations of the overpotential and
oxygen concentration lead to formation of a fully transparent to oxygen
sub--layer. This sub--layer works as an ideal non polarizable electrode, which
strongly reduces the system impedance. | Andrei Kulikovsky | 2023-10-13T08:22:57Z | http://arxiv.org/abs/2310.08945v1 | # Performance of a PEM fuel cell cathode catalyst layer under oscillating potential and oxygen supply
###### Abstract
A model for impedance of a PEM fuel cell cathode catalyst layer under simultaneous application of potential and oxygen concentration perturbations is developed and solved. The resulting expression demonstrates dramatic lowering of the layer impedance under increase in the amplitude of the oxygen concentration perturbation. In-phase oscillations of the overpotential and oxygen concentration lead to formation of a fully transparent to oxygen sub-layer. This sub-layer works as an ideal non polarizable electrode, which strongly reduces the system impedance.
PEM fuel cell, catalyst layer, impedance, modeling
## I Introduction
Electrochemical impedance spectroscopy (EIS) has proven to be a unique non-destructive and non-invasive tool for fuel cells characterization [1]. In its classic variant, EIS implies application of a small-amplitude harmonic perturbation of the cell current or potential and measuring the response of the cell potential or current, respectively. In recent years, there has been interest in alternative techniques based on application of pressure (Engebretsen et al. [2], Shirsath et al. [3], Schiffer et al. [4], Zhang et al. [5]) or oxygen concentration (Sorentino et al. [6; 7; 8]) perturbation to the cell and measuring the response of electric variable (potential or current), keeping the second electric variable constant.
Application of pressure oscillations at the cathode channel inlet or outlet inevitably leads to flow velocity oscillations (FVO). Kim et al. [9] and Hwang et al. [10] reported experiments showing dramatic improvement of PEM fuel cell performance under applied FVO. The effect of FVO on the cell performance was more pronounced at lower static flow rates and with increasing FVO amplitude [9]. In [9; 10], the effect has been attributed to improvement of diffusive oxygen transport through the cell due to FVO. Kulikovsky [11; 12] developed a simplified analytical model for impedance of the cell subjected to simultaneous oscillations of potential and air flow velocity. The model has shown reduction of the cell static resistivity upon increase of the FVO amplitude. However, due to the system's complexity, the mechanism of cell performance improvement is not clear.
Below, a much simpler system (the PEM fuel cell cathode catalyst layer) subjected to oscillating in-phase potential and oxygen supply is considered. An analytical model for the CCL impedance under these conditions is developed and solved. The result demonstrates the effect of impedance reduction due to oscillating oxygen supply. In-phase oxygen concentration and overpotential oscillations make part of the catalyst layer at the cathode catalyst layer (CCL)/gas diffusion layer (GDL) interface fully transparent to oxygen, which leads to a dramatic decrease of the system impedance.
## II Model
Consider a problem for impedance of the cathode catalyst layer (CCL) under oscillating potential and oxygen supply (Figure 1). For simplicity, we will assume that the proton transport is fast. The model is based on two equations: the proton charge conservation
\[C_{dl}\frac{\partial\eta}{\partial t}+\frac{\partial j}{\partial x}=-i_{*} \left(\frac{c}{c_{ref}}\right)\exp\left(\frac{\eta}{b}\right) \tag{1}\]
and the oxygen mass transport equation
\[\frac{\partial c}{\partial t}-D_{ox}\frac{\partial^{2}c}{\partial x^{2}}=- \frac{i_{*}}{4F}\left(\frac{c}{c_{ref}}\right)\exp\left(\frac{\eta}{b}\right). \tag{2}\]
Here, \(x\) is the distance through the CCL, \(C_{dl}\) is the double layer capacitance, \(\eta\) is the positive by convention ORR over
Figure 1: Schematic of the cathode catalyst layer and typical shapes of the oxygen concentration, proton current and overpotential through the layer. In–phase harmonic perturbations of the overpotential \(\eta_{1}^{1}\) and oxygen concentration \(c_{1}^{1}\) are applied at the CCL/GDL interface. |
2306.15994 | Systematic analysis of the impact of label noise correction on ML
Fairness | Arbitrary, inconsistent, or faulty decision-making raises serious concerns,
and preventing unfair models is an increasingly important challenge in Machine
Learning. Data often reflect past discriminatory behavior, and models trained
on such data may reflect bias on sensitive attributes, such as gender, race, or
age. One approach to developing fair models is to preprocess the training data
to remove the underlying biases while preserving the relevant information, for
example, by correcting biased labels. While multiple label noise correction
methods are available, the information about their behavior in identifying
discrimination is very limited. In this work, we develop an empirical
methodology to systematically evaluate the effectiveness of label noise
correction techniques in ensuring the fairness of models trained on biased
datasets. Our methodology involves manipulating the amount of label noise and
can be used with fairness benchmarks but also with standard ML datasets. We
apply the methodology to analyze six label noise correction methods according
to several fairness metrics on standard OpenML datasets. Our results suggest
that the Hybrid Label Noise Correction method achieves the best trade-off
between predictive performance and fairness. Clustering-Based Correction can
reduce discrimination the most, however, at the cost of lower predictive
performance. | I. Oliveira e Silva, C. Soares, I. Sousa, R. Ghani | 2023-06-28T08:08:14Z | http://arxiv.org/abs/2306.15994v1 | # Systematic analysis of the impact of label noise correction on ML Fairness
###### Abstract
Arbitrary, inconsistent, or faulty decision-making raises serious concerns, and preventing unfair models is an increasingly important challenge in Machine Learning. Data often reflect past discriminatory behavior, and models trained on such data may reflect bias on sensitive attributes, such as gender, race, or age. One approach to developing fair models is to preprocess the training data to remove the underlying biases while preserving the relevant information, for example, by correcting biased labels. While multiple label noise correction methods are available, the information about their behavior in identifying discrimination is very limited. In this work, we develop an empirical methodology to systematically evaluate the effectiveness of label noise correction techniques in ensuring the fairness of models trained on biased datasets. Our methodology involves manipulating the amount of label noise and can be used with fairness benchmarks but also with standard ML datasets. We apply the methodology to analyze six label noise correction methods according to several fairness metrics on standard OpenML datasets. Our results suggest that the Hybrid Label Noise Correction [1] method achieves the best trade-off between predictive performance and fairness. Clustering-Based Correction [2] can reduce discrimination the most, however, at the cost of lower predictive performance.
Label noise correction, ML fairness, bias mitigation, semi-synthetic data +
Footnote †: This work was partly funded by: Agenda “Center for Responsible AI”, nr. C645008882-00000055, investment project nr. 62, financed by the Recovery and Resilience Plan (PRR) and by European Union - NextGeneration EU.; AISym4Med (101095387) supported by Horizon Europe Cluster 1: Health, ConnectedHealth (n.o - 46858), supported by Competiveness and Internationalisation Operational Programme (POCI) and Lisbon Regional Operational Programme (LSBOA 2020), under the PORTUGAL 2020 Partnership Agreement, through the European Regional Development Fund (ERDF); and Base Funding - UIDB/00027/2020 of the Artificial Intelligence and Computer Science Laboratory - LIACC - funded by national funds through the FCT/MCTES (PIDDAC).
## I Introduction
The widespread use of ML systems in sensitive environments has a profound impact on people's lives when given the power to make life-changing decisions [3]. One well-known example is the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) software. This computer program assesses the recidivism risk of individuals and is used by the American courts to decide whether a person should be released from prison. In a 2016 investigation conducted by ProPublica 1, it was discovered that the system was biased against African-Americans, incorrectly classifying Black offenders as "high-risk" twice as often as White defendants. Another example relates to a less impactful yet more widely present tool in people's lives: Google's targeted ads. A group of researchers proposed AdFisher [4], a tool to gather insights on how user behaviors, Google's transparency tool "Ad Settings", and the presented advertisements interact. Their study revealed that male web users were more likely to be presented with ads for high-paying jobs than their female counterparts. In this context, we can classify an algorithm as unfair if its decisions reflect some kind of prejudice or favoritism towards certain groups of people based on their inherent or acquired characteristics [3].
Footnote 1: [https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing)
The process of learning which factors are relevant to the desired outcome in these tasks involves generalizing from historical examples, which can lead to algorithms being vulnerable to the same biases that people projected in their past decisions. For example, Amazon's ML experts tried to build a recruiting engine to automate the review of job applicants' resumes. However, they realized that their tool was
discriminatory towards women. This was suspected to be the consequence of using training data from the previous ten years, during which most technical positions were applied for and granted to men, leading the system to discard most female applicants2.
Footnote 2: [https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G](https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G)
The goal of _fair machine learning_ is to identify and mitigate these harmful and unacceptable inequalities [5]. When collecting data, prejudice will lead to incorrect labels, as the relationship between an instance's features and its class will be biased. Despite the vast amount of literature on methods for dealing with noisy data, only a few of these studies focus on identifying and correcting noisy labels [2]. This approach of correcting wrongly attributed labels can be leveraged in the context of fair machine learning if we consider discrimination present in the data as noise that can be removed. As such, noise correction techniques can be applied to obtain a feasibly unbiased dataset that can be used to train fair models. Thus, the motivation for this work comes from, to the best of our knowledge, the lack of work exploring the use of label noise correction techniques in training fair models from biased data.
We develop an empirical methodology to systematically evaluate the usefulness of applying label noise correction techniques to guarantee the fairness of predictions made by models trained on biased data. Having an assumedly clean dataset, we first manipulate the labels to simulate the desired amount and type of label noise. The injected noise is group-dependent, meaning that it depends on the value of the specified sensitive attribute. We can parameterize the noise injection process to model various types of discrimination. The considered label noise correction technique is applied to the noisy data to generate a corrected version of the dataset. We train ML classifiers using the _original_, _noisy_, and _corrected_ training sets. The obtained models are then evaluated under different assumptions, measuring the fairness and predictive performance of their predictions on the three test sets (_original_, _noisy_, and _corrected_).
In this empirical study, we test and compare the effectiveness of six label noise correction techniques in improving the generated models' performance. We apply our methodology using multiple standard ML datasets available on OpenML and inject different types of label noise at varying rates. The models are evaluated using four well-known fairness metrics.
The rest of this paper is organized as follows. In Section II, we present an overview of the existing literature and state-of-the-art methods related to ML fairness and label noise. In Section III, we propose the methodology to systematically evaluate the impact of label noise correction methods on the fairness of ML models. In Section IV, we describe the performed experiments and analyze the obtained results in Section V, presenting the corresponding discussion in Section VI. Finally, in Section VII, we review the conclusions that were derived from the developed work.
## II Related work
In this section, we introduce the relevant literature related to label noise and dealing with fairness under label noise.
### _Label noise_
Noise can be defined as non-systematic errors that might complicate an algorithm's ability to uncover the relationship between the features and the class label of a sample [6]. When noise is related to wrongly assigned labels, we are in the presence of label noise. Label noise is a common phenomenon in real-world datasets, and the cost of acquiring non-polluted data is usually high. This makes it of great importance to develop methods that deal with this type of noise [7].
Label noise is particularly important in the case of bias mitigation techniques, which typically assume the existence of clean labels. However, in practice, this is not always the case. In fact, data bias and label corruption are closely related, especially since the accuracy of certain labels is often affected by the subject belonging to a protected group [8]. Label noise can be classified into one of three categories:
* **Random noise**, which corresponds to noise that is randomly distributed and does not depend on the instance's features or label [7], i.e., \(P(\tilde{y})=P(y)\);
* **Y-dependant noise** happens when instances belonging to a particular class are more likely to be mislabeled [7]. This type of label noise assumes that given the clean label \(y\), the noisy label \(\tilde{y}\) is conditionally independent of the instance \(x\), i.e., \(P(\tilde{y}|y,x)=P(\tilde{y}|y)\)[9];
* **XY-dependant noise** depends on both features and target values, meaning that the probability of a sample being mislabeled changes not only according to its particular class but also to the values of its features [7]. This is the type of noise commonly referred to as group-dependant [8] or instance-dependant [9] in the fairness literature. This type of label noise is often related to discrimination. Considering the COMPAS case, for example, the model unfairly predicts African-Americans as having a "high risk" of recidivism more often than Caucasians due to discrimination in past trials, which leads to models that reproduce the same kind of discrimination. In this situation, the probability of an offender being misclassified as "high risk", i.e., the label noise, depends on the _race_ feature, so it is group-dependant.
One way to categorize noise-dealing approaches is to classify the existing methods according to whether they model the noise structure or not [7]. Noise model-free methodologies focus on algorithms that are inherently less sensitive to label noise and thus do not require the explicit modeling of the noise structure. On the other hand, the goal of noise model-based methods is to extract information about the noise structure in the data to leverage it during training [7]. The label noise correction methods we focus on in this work and further present are included in this category.
### _Fairness in the presence of Label Noise_
While many methods have been proposed to promote the fairness of ML classifiers, these usually assume that the training data is not corrupted [8]. However, label noise is a common phenomenon in real-world data that may have negative consequences on model performance when not properly dealt with [6].
One approach to achieve fair classification is to focus on re-weighting the training data to alter its distribution in a way that corrects for the noise process that causes the bias [10]. The authors have shown that training on the re-weighted dataset is equivalent to training on the unobserved unbiased labels. To evaluate how their method performed in comparison to previous approaches, they tested the various methods on a number of benchmark fairness datasets, measuring multiple fairness metrics.
A different line of work focuses on enforcing fairness constraints on the learning process to achieve fair predictions. Research has been conducted in adapting this approach for learning fair classifiers in the presence of label noise [8, 9]. Some authors rewrite the loss function and fairness constraints to deal with label noise [9]. They further propose to model label noise and fairness simultaneously by uncovering the internal causal structure of the data. Surrogate loss functions and surrogate constraints have also been devised to ensure fairness in the presence of label noise [8].
### _Evaluation of Robustness_
The performance of label noise correction methods depends on the level of noise in the data. They are expected to improve fairness by correcting possible biases. For practitioners to apply those methods safely in the real world, it is important to understand their behavior under different noise conditions. However, there is currently a lack of research in understanding how those techniques affect the fairness of models.
To address this limitation, a sensitivity analysis framework for fairness has been developed [11]. It assesses whether the conclusions about the fairness of a model derived from biased data are reliable. This is done by estimating bounds on the fairness metrics under assumptions about the magnitude of label noise. However, this approach still relies on a limited set of fairness benchmarks, limiting the scope of the conclusions since the existing datasets are not representative of many different types and levels of label noise.
In this work, we address the limitations of the empirical evaluation procedures that are usually conducted in the existing work. Instead of making assumptions about the level of label noise, we explicitly manipulate it.
## III Methodology
With the objective of understanding the effect of existing label correction methods on improving the fairness of machine learning classifiers trained on the corresponding corrected data, we propose a methodology for empirically evaluating the efficacy of such techniques in achieving this goal.
Having the _original_ dataset, \(D_{o}\), in which we assume the instances to have correctly assigned labels, the first step is to manipulate the labels. When considering a fairness benchmark dataset, we may use the data as expected, meaning that the sensitive attributes and positive class are the original ones. If we are applying this methodology to a standard classification dataset, we arbitrarily choose the positive class and a binary attribute to be considered as the sensitive one. Given noise rate \(\tau\), noise injection is performed by altering the label of instances with a certain probability depending on the noise rate and whether it belongs to the protected group. By parameterizing this process, we can simulate different types of discrimination. We thus obtain a _noisy_ dataset, \(D_{n}\), that is corrupted by the induced bias.
To simulate different types of biases, we inject group-dependent label noise in the clean datasets in two ways (a sketch of the injection procedure is given after the list):
* **Positive Bias Noise**. This type of label noise is intended to simulate the cases where the instances belonging to the protected group are more likely to be given a positive label (or the ones not belonging to the protected group are systematically assigned to the negative class). For example, this would be equivalent to classifying African-American offenders as having a high risk of re-offending at a higher rate than their Caucasian counterparts. To simulate such bias, we set the label of each instance belonging to the protected group to the positive class with a probability equal to the desired noise rate. Naturally, as the noise rate gets higher, the data gets progressively more imbalanced.
* **Balanced Bias Noise**. In other situations, members of the protected group are favored while, at the same time, non-members are harmed. An example of this type of scenario is the automated selection of job applicants: if the selection process is biased towards preferring male applicants, more men will be selected and, simultaneously, more women will be rejected. We simulate such bias by setting the label of each instance to the positive class if it belongs to the protected group, or to the negative class otherwise, with a probability equal to the desired noise rate. We assume that the positive class is a good outcome, which is not always the case. Nevertheless, this is not expected to affect the conclusions.
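As a concrete illustration of the two injection schemes above, the following minimal numpy sketch makes the parameterization by the noise rate \(\tau\) explicit. It is our own illustration, not part of the released code: binary labels and a binary protected-group indicator are assumed, and the function names are purely illustrative.

```python
import numpy as np

def inject_positive_bias(y, group, tau, seed=0):
    """Positive Bias: protected-group instances are pushed to the positive
    class with probability tau."""
    rng = np.random.default_rng(seed)
    y_noisy = y.copy()
    flip = (group == 1) & (rng.random(len(y)) < tau)
    y_noisy[flip] = 1
    return y_noisy

def inject_balanced_bias(y, group, tau, seed=0):
    """Balanced Bias: protected instances are pushed to the positive class and
    non-protected ones to the negative class, each with probability tau."""
    rng = np.random.default_rng(seed)
    y_noisy = y.copy()
    flip = rng.random(len(y)) < tau
    y_noisy[flip & (group == 1)] = 1
    y_noisy[flip & (group == 0)] = 0
    return y_noisy
```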
The next step is to perform label noise correction by applying the method being analyzed on the training set, obtaining a _corrected_ training set, \(D_{c}\). We first examine the similarity between the original labels and the ones obtained after applying label noise correction to the noisy data. Given a dataset with \(N\) instances, the ability to reconstruct the original labels is measured as the similarity between the _original_ labels and the _corrected_ ones, as shown in Eq. 1. Essentially, this is a measure of the accuracy of the label correction method in obtaining the original labels. However, to avoid confusion, we will refer to it as _reconstruction score_, \(r\).
\[r=\frac{1}{N}\sum_{i=1}^{N}\mathbb{1}\left[\hat{y}_{i}=y_{i}\right] \tag{1}\]
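In code, Eq. (1) is simply the agreement rate between the two label vectors; a minimal numpy sketch (assuming array inputs) would be:

```python
import numpy as np

def reconstruction_score(y_original, y_corrected):
    # Eq. (1): fraction of instances whose corrected label matches the original one
    return float(np.mean(np.asarray(y_original) == np.asarray(y_corrected)))
```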
For each training set (\(D_{o}^{train}\), \(D_{n}^{train}\), and \(D_{c}^{train}\)), we then apply the chosen ML algorithm to it, obtaining the classifiers \(M_{o}\), \(M_{n}\), and \(M_{c}\), respectively. These models are then evaluated under different scenarios.
Firstly, we want to consider the testing scenario where we only have access to corrupted data both for training and testing. The aim is to understand the effect of correcting training data in the case where the discrimination that was present when collecting the training data still exists at testing time. To achieve this, the _corrected_ (\(M_{c}\)) and _noisy_ (\(M_{n}\)) models are evaluated on the _noisy_ test set, \(D_{n}^{test}\). In this case, the intent is to observe if the noise correction methods are able to produce less discriminatory predictions without significant loss in predictive performance.
Our next objective is to understand the effect of correcting biased training data when the discrimination has been eliminated in the meantime and the testing data is unbiased. To achieve this, the models (\(M_{o}\), \(M_{n}\), and \(M_{c}\)) are evaluated on the _original_ test set \(D_{o}^{test}\).
Finally, we extend the previous scenario to remove the assumption that the original data is unbiased. In other words, we analyze the effect of correcting training data when the discrimination has been eliminated in the meantime but the original data was already biased and, thus, its labels are noisy. To achieve this, the _corrected_ model, \(M_{c}\), is evaluated on a test set with labels without noise. However, since we do not have access to the clean labels, we use a label noise correction method to correct the test data as well. We employ the same method that is being analyzed, but a more extensive empirical validation could use different methods or a combination of them. In any case, the results should be interpreted carefully, as the unbiased labels cannot be determined.
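A compact sketch of the three evaluation scenarios is given below. It is a simplified illustration, assuming scikit-learn, the Logistic Regression learner used in our experiments, and a user-supplied `score` function returning the metrics of interest; the variable names (`y_o`, `y_n`, `y_c` for original, noisy and corrected labels) are ours and purely illustrative.

```python
from sklearn.linear_model import LogisticRegression

def fit(X, y):
    return LogisticRegression(max_iter=1000).fit(X, y)

def evaluate_scenarios(X_tr, y_o_tr, y_n_tr, y_c_tr,
                       X_te, y_o_te, y_n_te, y_c_te, score):
    """Train M_o, M_n, M_c and score them under the three testing scenarios."""
    M_o, M_n, M_c = fit(X_tr, y_o_tr), fit(X_tr, y_n_tr), fit(X_tr, y_c_tr)
    return {
        # scenario 1: biased labels both at training and at testing time
        "noisy_test": {"M_n": score(M_n, X_te, y_n_te),
                       "M_c": score(M_c, X_te, y_n_te)},
        # scenario 2: the discrimination has disappeared at testing time
        "original_test": {"M_o": score(M_o, X_te, y_o_te),
                          "M_n": score(M_n, X_te, y_o_te),
                          "M_c": score(M_c, X_te, y_o_te)},
        # scenario 3: test labels corrected by the same noise correction method
        "corrected_test": {"M_c": score(M_c, X_te, y_c_te)},
    }
```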
The diagram presented in Fig. 1 illustrates the explained methodology.
## IV Experiments
To illustrate the use of the proposed methodology, we perform an empirical evaluation of six label noise correction methods to ensure the fairness of ML models. In this section, we describe its key aspects, detailing how we evaluated the methods, explaining the experimental setup, and analyzing the obtained results.
### _Label noise correction methods_
We focus on problems where fairness issues are essentially caused by label noise, with the corruption rates being group-dependent. By assuming that there exist underlying, unknown, and unbiased labels that are overwritten by the observable biased ones, the natural approach is to apply label noise correction techniques to the data in order to obtain a clean dataset to be used in model training. The goal of this approach is to pre-process biased data to remove underlying discrimination, thus enabling classifiers trained on the corrected datasets to deliver predictions that are both accurate and fair. In the conducted experiments, we compared the following label noise correction methods:
* **Bayesian Entropy Noise Correction** (BE) [12]. In this method, multiple Bayesian classifiers are obtained with different training samples. These classifiers are used to obtain a probability distribution for each sample of it belonging to each considered class, which is applied in calculating the instance's information entropy. If the entropy of a sample is below the calculated threshold and its label is different from the predicted one, its value is corrected. These steps are repeated until a stopping criterion has been met;
* **Polishing Labels** (PL) [2]. This method replaces the label of each instance with the most frequent label predicted by a set of models obtained with different training samples (a minimal sketch of this procedure is given after the list);
* **Self-Training Correction** (STC) [2]. This algorithm works by first dividing the data into a noisy and a clean set using a noise-filtering algorithm. These methods identify and remove noisy instances from data, and in this case, the Classification Filter [13] is used. A model is obtained from the clean set and is used to estimate the confidence that each instance in the noisy set is correctly labeled. The most likely mislabeled instance is relabeled to the class determined by the classifier and added to the clean set. These steps are repeated until the desired proportion of labels is corrected;
* **Clustering-Based Correction** (CC) [2]. Firstly, a clustering algorithm is executed on the data multiple times, varying the number of clusters. A set of weights is calculated for each cluster based on its distribution of labels and size and is attributed to all the instances that belong to it. These weights are meant to benefit the most frequent class in the cluster. The weights obtained from each clustering are added up for each instance, and the label with the maximum weight is chosen;
* **Ordering-based Label Noise Correction** (OBNC) [14]. The first step in this algorithm is to learn an ensemble classifier from the data. For each instance, the ensemble decides the label by voting, and the difference between the votes can be used to calculate an ensemble margin. The misclassified samples are ordered in a descending manner based on the absolute value of their margin. The most likely mislabeled instances are relabeled to their predicted classes;
Fig. 1: Diagram of the proposed methodology for empirically evaluating the efficacy of label noise correction methods in ensuring the fairness of classifiers using standard ML datasets.
* **Hybrid Label Noise Correction** (HLNC) [1]. In this approach, the first step is to separate the data into high-confidence and low-confidence samples. This is achieved by applying the k-means algorithm to divide the data into clusters and determining each cluster's label. The instances are classified as high-confidence if their label matches the cluster's label and low-confidence otherwise. The high-confidence samples are used to simultaneously train two very different models, using the SSK-means [15] and Co-training [16] algorithms. These are applied to each low-confidence sample, and if both algorithms give it the same label, then the sample is relabeled and set as high-confidence. This process is repeated until all labels are high-confidence or after a specified number of times.
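To make the flavour of these techniques concrete, the sketch below gives our reading of the simplest of them, Polishing Labels: an ensemble of models is trained on bootstrap samples and every instance is relabeled by majority vote. The base learner, the ensemble size and the binary-label assumption are illustrative choices of ours, not those of the original implementation [2].

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def polishing_labels(X, y, n_models=10, base_estimator=None, seed=0):
    """Relabel every instance with the majority vote of an ensemble of models
    trained on bootstrap samples of the (possibly noisy) data."""
    base_estimator = base_estimator or DecisionTreeClassifier()
    rng = np.random.default_rng(seed)
    votes = np.zeros((len(y), n_models))
    for m in range(n_models):
        idx = rng.integers(0, len(y), size=len(y))        # bootstrap sample
        model = clone(base_estimator).fit(X[idx], y[idx])
        votes[:, m] = model.predict(X)
    return (votes.mean(axis=1) >= 0.5).astype(int)        # majority vote (binary labels)
```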
### _Datasets and Algorithm_
The datasets used in the noise injection experiments are available on OpenML 3 and are summarized in Table I.
Footnote 3: [https://www.openml.org/](https://www.openml.org/)
In the conducted experiments, we used the Logistic Regression algorithm to obtain the classifiers. The code implementing the proposed methodology is available at [https://github.com/reluzita/fair-lnc-evaluation](https://github.com/reluzita/fair-lnc-evaluation).
### _Evaluation Measures_
To evaluate the obtained models, we assessed predictive performance by calculating the Area Under the ROC Curve (AUC) metric [17]. In terms of fairness, the following metrics were analyzed (a sketch of their computation is given after the list):
* **Demographic Parity** (also known as statistical parity) is a statistical group fairness notion that is achieved when individuals from both protected and unprotected groups are equally likely to be predicted as positive by the model [18]. We analyze the Demographic Parity difference between the two groups: \[DP_{dif}=|P(\hat{y}=1|g=0)-P(\hat{y}=1|g=1)|\] (2)
* **Equalized Odds**[19] is satisfied when protected and unprotected groups have equal true positive rates (TPR) and equal false positive rates (FPR). To calculate the Equalized Odds difference, \(EOD_{dif}\), between groups, we first obtain the TPR difference: \[TPR_{dif}=|P(\hat{y}=1|y=1,g=0)-P(\hat{y}=1|y=1,g=1)|\] (3) and the FPR difference: \[FPR_{dif}=|P(\hat{y}=1|y=0,g=0)-P(\hat{y}=1|y=0,g=1)|\] (4) returning the largest of the two values: \[EOD_{dif}=max(TPR_{dif},FPR_{dif})\] (5)
* **Predictive Equality**[20] requires both protected and unprotected groups to have the same false positive rate (FPR), which is related to the fraction of subjects in the negative class that were incorrectly predicted to have a positive value. We obtain the Predictive Equality difference: \[PE_{dif}=|P(\hat{y}=1|y=0,g=0)-P(\hat{y}=1|y=0,g=1)|\] (6)
* **Equal Opportunity**[20] is obtained if both protected and unprotected groups have an equal false negative rate (FNR), i.e., the probability of an individual from the positive class receiving a negative prediction. We use the Equal Opportunity difference: \[EOP_{dif}=|P(\hat{y}=0|y=1,g=0)-P(\hat{y}=0|y=1,g=1)|\] (7)
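Since all four notions reduce to absolute differences of group-conditional rates, they can be computed jointly. The numpy sketch below is our own helper, assuming binary labels and a binary sensitive attribute \(g\), and mirrors Eqs. (2)-(7); note that the Equal Opportunity difference coincides numerically with the TPR difference, since \(|FNR_{0}-FNR_{1}|=|TPR_{1}-TPR_{0}|\).

```python
import numpy as np

def fairness_differences(y_true, y_pred, group):
    """Absolute group differences behind Eqs. (2)-(7); binary labels,
    binary sensitive attribute g in {0, 1}."""
    y_true, y_pred, g = map(np.asarray, (y_true, y_pred, group))
    def rate(mask):
        return y_pred[mask].mean() if mask.any() else np.nan   # P(y_hat = 1 | mask)
    dp  = abs(rate(g == 0) - rate(g == 1))                                    # Eq. (2)
    tpr = abs(rate((g == 0) & (y_true == 1)) - rate((g == 1) & (y_true == 1)))
    fpr = abs(rate((g == 0) & (y_true == 0)) - rate((g == 1) & (y_true == 0)))
    return {"demographic_parity": dp,
            "equalized_odds": max(tpr, fpr),    # Eq. (5)
            "predictive_equality": fpr,         # Eq. (6)
            "equal_opportunity": tpr}           # Eq. (7): FNR difference equals TPR difference
```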
## V Results
Our goal is to analyze the robustness of label correction methods in terms of predictive accuracy as well as fairness, considering models trained in three different ways (\(M_{o}\), \(M_{n}\), and \(M_{c}\)). We first analyze the similarity between the original labels and the ones obtained after applying each label noise correction method to the noisy data.
### _Similarity to original labels after correction_
Fig. 2 shows, on average, how similar each method's correction was to the original labels, considering both types of bias. Regardless of the type of bias, OBNC was the method that achieved the highest similarity to the original labels.
### _Performance evaluation on the noisy test set_
In some cases, we may only have access to biased data both for training and testing the models. As such, we evaluate the predictive performance and fairness of the predictions of the models on the noisy test set. The trade-off between the AUC metric and the Predictive Equality difference metric for different noise rates is shown in Fig. 3, for the **Positive Bias** noise, and in Fig. 4, for the **Balanced Bias** noise. In the remainder of this section, we only present the results in terms of Predictive Equality difference since the same conclusions can be derived from the results that were obtained using any of the aforementioned fairness metrics.
The OBNC method achieved performance similar to using the _noisy_ data, while PL and STC show small improvements, mainly in terms of fairness. The CC method performs the best in achieving fairness, being able to keep discrimination at a minimum even at higher noise rates, as shown in Figure 4, but losing significant predictive performance to do so. The HLNC method maintained its ability to improve fairness at minimum expense to the predictive performance of the resulting models.
### _Performance evaluation on the original test set_
To understand how the label noise correction methods fare on producing accurate and fair predictions from biased data in an environment where these biases are no longer present, we evaluate the performance of the three obtained models on the original test set. The trade-off between the AUC metric and the Predictive Equality difference for each method at different noise rates is shown in Fig. 5, for the **Positive Bias** noise, and in Fig. 6, for the **Balanced Bias** noise.
In this testing scenario, the methods still behave in a similar way to the previous one in relation to each other. The OBNC method was shown to correct the labels in a way that is the most similar to the _original_ train set. Still, the performance of the resulting model is comparable to using the _noisy_ train set. The PL and STC methods achieve a slightly better trade-off between predictive performance and fairness. On the other hand, the CC method shows significant improvements in terms of fairness, but at the expense of a lower AUC score. The BE method achieves a low score in both metrics. Finally, the HLNC method was found to be the best at simultaneously improving both predictive performance and fairness.

Fig. 2: Reconstruction score (\(r\)), representing the similarity between original labels and the ones obtained after applying each label noise correction method for different noise rates.

Fig. 3: Trade-Off between AUC and Predictive Equality difference obtained on the _noisy_ test set when correcting the data injected with Positive Bias noise at different rates using each of the label correction methods. The red dashed line shows the performance of the model obtained from the _noisy_ train set at each noise rate.

Fig. 4: Trade-Off between AUC and Predictive Equality difference obtained on the _noisy_ test set when correcting the data injected with Balanced Bias noise at different rates using each of the label correction methods. The red dashed line shows the performance of the model obtained from the _noisy_ train set at each noise rate.
### _Performance evaluation on the corrected test set_
Finally, we investigate the possibility of applying label noise correction methods on the corrupted test set to simulate having an unbiased testing environment when only corrupted data is available for testing. To do so, we evaluate the performance of the models obtained using corrected train data on the test set corrected using the same method. We then assess whether that performance is similar to the one obtained when testing the same models on the original test set. The results for the AUC metric are presented in Fig. 7, for the Positive Bias noise type, and in Fig. 8, for the Balanced Bias noise type. Considering the Predictive Equality difference metric, the results are shown in Fig. 9 for the Positive Bias noise type, and in Fig. 10, for the Balanced Bias noise type.
In terms of AUC, the PL, STC, and BE methods tend to result in an overestimation of the predictive performance of the resulting model. At the same time, the OBNC method appears to slightly underestimate it. The HLNC method shows similar performance to testing on the _original_ test set in the presence of both types of noise, while the CC method only achieves this when dealing with Positive Bias noise. Regarding the Predictive Equality difference metric, all methods show a performance very similar to using the _original_ test set. A slight underestimation of discrimination can be seen for the PL, STC, and BE methods for the higher noise rates.

Fig. 5: Trade-Off between AUC and Predictive Equality difference obtained on the _original_ test set when correcting the data injected with Positive Bias noise at different rates using each of the label correction methods. The red dashed line shows the performance of the model obtained from the _noisy_ train set at each noise rate.

Fig. 6: Trade-Off between AUC and Predictive Equality difference obtained on the _original_ test set when correcting the data injected with Balanced Bias noise at different rates using each of the label correction methods. The red dashed line shows the performance of the model obtained from the _noisy_ train set at each noise rate.

Fig. 7: Comparison in AUC between testing the model obtained from the data corrected by each method on the original test set and on the test set corrected by the same method, in the presence of Positive Bias noise.

Fig. 8: Comparison in AUC between testing the model obtained from the data corrected by each method on the original test set and on the test set corrected by the same method, in the presence of Balanced Bias noise.
## VI Discussion
The ability to correct the labels does not necessarily guarantee a good compromise between accuracy and fairness. For instance, the OBNC method obtained the highest similarity with the original labels. However, when assessing the compromise between predictive performance and fairness, the OBNC method had a much less satisfactory performance, showing barely any difference from training with the noisy training set.
On the other hand, the CC method, which did not show a high reconstruction score, kept discrimination at minimum values, even at the highest noise rates. However, this was achieved at the cost of lower predictive performance. The nature of the fairness metrics can explain this: e.g., the Predictive Equality metric calculates the difference between the FPR of each group, meaning that if both groups have a high but similar FPR, the predictions are technically fair but not accurate.
We must acknowledge some limitations of this study, as they can impact the generalizability of our findings. The first one is related to an important advantage of the proposed methodology: it can use standard benchmark datasets to assess the robustness of label correction methods, so the analysis can be based on a much larger set of datasets than typical fairness studies use. However, the choice of both the sensitive attribute and the positive class is then arbitrary, meaning that these datasets do not necessarily have distributions similar to real problems with label noise caused by discrimination; the methodology can nevertheless also be applied to benchmark fairness datasets to assess the generality of the results obtained. Additionally, the predicted classes were based on a threshold of 0.5, which is not realistic in many problems where discrimination might be an issue. As the choice of threshold impacts the fairness metrics, it is important to obtain results with other thresholds; in the case of benchmark fairness datasets, problem-specific thresholds can also be used. Therefore, future research will involve applying this methodology to benchmark fairness datasets.
## VII Conclusions
In this work, we tackle the problem of learning fair ML classifiers from biased data. In such a scenario, we look at the inherent discrimination in datasets as label noise that can be eliminated using label noise correction techniques. This way, the corrected data could be used to train fair classifiers using standard ML algorithms without further application of fairness-enhancing techniques. We propose a methodology to empirically evaluate the effect of different label noise correction techniques in improving the fairness and predictive performance of models trained on previously biased data. Our framework involves manipulating the amount and type of label noise and can be used on both fairness benchmarks and standard ML datasets. In the conducted experiments, we analyzed six label noise correction methods. We observed that the Hybrid Label Noise Correction method was able to achieve the best trade-off between fairness and predictive performance.
|
2307.04568 | Global synchronization on time-varying higher-order structures | Synchronization has received a lot of attention from the scientific community
for systems evolving on static networks or higher-order structures, such as
hypergraphs and simplicial complexes. In many relevant real world applications,
the latter are not static but do evolve in time, in this paper we thus discuss
the impact of the time-varying nature of high-order structures in the emergence
of global synchronization.
To achieve this goal we extend the master stability formalism to account, in
a general way, for the additional contributions arising from the time evolution
of the higher-order structure supporting the dynamical systems. The theory is
successfully challenged against two illustrative examples, the Stuart-Landau
nonlinear oscillator and the Lorenz chaotic oscillator. | Md Sayeed Anwar, Dibakar Ghosh, Timoteo Carletti | 2023-07-10T14:00:46Z | http://arxiv.org/abs/2307.04568v1 | # Global synchronization on time-varying higher-order structures
###### Abstract
Synchronization has received a lot of attention from the scientific community for systems evolving on static networks or higher-order structures, such as hypergraphs and simplicial complexes. In many relevant real world applications, the latter are not static but do evolve in time, in this paper we thus discuss the impact of the time-varying nature of high-order structures in the emergence of global synchronization. To achieve this goal we extend the master stability formalism to account, in a general way, for the additional contributions arising from the time evolution of the higher-order structure supporting the dynamical systems. The theory is successfully challenged against two illustrative examples, the Stuart-Landau nonlinear oscillator and the Lorenz chaotic oscillator.
## I Introduction
In the realm of complex systems, synchronization refers to the intriguing ability of coupled nonlinear oscillators to self-organize and exhibit a collective unison behavior without the need for a central controller [1]. This phenomenon, observed in a wide range of human-made and natural systems [2], continues to inspire scientists seeking to unravel its underlying mechanisms.
To study synchronization, network science has proved to be a powerful and effective framework. Here, the interconnected nonlinear oscillators are represented as nodes, while their interactions are depicted as links [3]. However, the classical static network representation has its limitation in modeling many empirical systems, such as social networks [4], brain networks [5; 6], where the connections among individual basic units are adaptable enough to be considered to evolve through time. Therefore, the framework of networks has been generalized as to include time-varying networks [7; 8], whose connections vary with time. The results presented in this framework support the claim that synchronization is enhanced by the dynamics of the supporting medium [9; 10; 11].
Another intrinsic limitation of networks is due to their capability to only model pairwise interactions. To go beyond this issue, scholars have brought to the fore the relevance of higher-order structures, which surpass the traditional network setting that models the interactions between individual basic units only through pairwise links [12; 13; 14; 15; 16]. By considering the simultaneous interactions of many agents, higher-order structures, namely hypergraphs [17] and simplicial complexes [18], offer a more comprehensive understanding of complex systems. These higher-order structures have been proven to produce novel features in various dynamical processes, including consensus [19; 20], random walks [21; 22], pattern formation [12; 23; 24], synchronization [12; 25; 26; 27; 28; 29], social contagion and epidemics [30; 31]. Nevertheless, the suggested framework is not sufficiently general for describing systems with many-body interactions that vary with time. As an example, group interactions in social systems have time-varying nature as the interactions among groups of individuals are not always active but rather change throughout time [32]. Some early works have begun to investigate the time-varying aspect of many-body interactions in various dynamical processes. For instance, time-varying group interactions have been demonstrated to influence the convergence period of consensus dynamics [20] and to predict the onset of endemic state in epidemic spreading [31].
The present work is motivated by these recent research directions, and it aims to take one step further by considering the impact of time-varying higher-order structures in the synchronization of nonlinear oscillators. In this context, a preliminary effort has been reported in [33], that investigates synchronization in time-varying simplicial complexes, limited only to fast switching [34; 35] among distinct static simplicial configurations, implying that the time scale of the simplicial evolution is exceedingly fast compared to that of the underlying dynamical system. In contrast, in the present work, we allow the higher-order structures to evolve freely with time, thus removing any limitations on the imposed time evolution of the higher-order structure. We present the results in the framework of hypergraphs, but they hold true also for simplicial complexes. Under such broad circumstances, we develop a theory to determine the conditions ensuring the stability of a globally synchronized state that generalizes the Master Stability Equation [36] to a setting where the time evolution of underlying higher-order structures is explicitly considered. The generalized framework we discuss here assumes that the coupling functions cancel out when the dynamics of individual oscillators are identical, which is a necessary condition that must be met for the extended system to have a synchronous solution and it has been frequently used in the literature across various domains. The developed theory reveals that the consideration of temporality in group interactions can induce synchronization more easily than static group interactions, tested on higher-order structures of coupled
Stuart Landau oscillators and paradigmatic Lorenz systems.
## II The model
To start with, let us consider a \(m\)-dimensional dynamical system whose time evolution is described by the following ordinary differential equation
\[\frac{d\vec{x}}{dt}=\vec{f}(\vec{x})\,, \tag{1}\]
where \(\vec{x}\in\mathbb{R}^{m}\) denotes the state vector and \(\vec{f}:\mathbb{R}^{m}\to\mathbb{R}^{m}\) some smooth nonlinear function; let us assume moreover that system (1) exhibits an oscillatory behavior, being the latter periodic or irregular; we are thus considering the framework of generic nonlinear oscillators. Let us now consider \(n\) identical copies of system (1) coupled by a symmetric higher-order structure; namely, we allow the nonlinear oscillators to interact in couples, as well as in triplets, quadruplets, and so on, up to interactions among \(D+1\) units. We can thus describe the time evolution of the state vector of the \(i\)-th unit by
\[\dot{\vec{x}}_{i}=\vec{f}(\vec{x_{i}})+\sum_{d=1}^{D}q_{d}\sum_{j_{1},\ldots,j _{d}=1}^{n}A^{(d)}_{ij_{1}\ldots j_{d}}(t)\vec{g}^{(d)}(\vec{x}_{i},\vec{x}_{j _{1}},\ldots,\vec{x}_{j_{d}})\,, \tag{2}\]
where for \(d=1,\ldots,D\), \(q_{d}>0\) denotes the coupling strength, \(\vec{g}^{(d)}:\mathbb{R}^{(d+1)m}\to\mathbb{R}^{m}\) the nonlinear coupling function and \(\mathbf{A}^{(d)}(t)\) the tensor encoding which units are interacting together. More precisely, \(A^{(d)}_{ij_{1}\ldots j_{d}}(t)=1\) if the units \(i,j_{1},\ldots,j_{d}\) do interact at time \(t\); observe indeed that such a tensor depends on time, namely both the intensity of the coupling and which units are coupled change in time. Finally, we assume the time-varying interaction to be symmetric, namely if \(A^{(d)}_{ij_{1}\ldots j_{d}}(t)=1\), then \(A^{(d)}_{\pi(ij_{1}\ldots j_{d})}(t)=1\) for any permutation \(\pi\) of the indexes \(i,j_{1},\ldots,j_{d}\). Let us emphasize that we consider the number of nodes to be fixed, only the interactions change in time; one could relax this assumption by considering a sufficiently large reservoir of nodes, from which the core of the system can recruit new nodes or deposit unused ones.
Let us fix a periodic reference solution, \(\vec{s}(t)\), of system (1). We are interested in determining the conditions under which the orbit \((\vec{s}(t),\ldots,\vec{s}(t))^{\top}\) is a solution of the coupled system (2), and moreover it is stable, namely the \(n\) units globally synchronize and behave at unison. A necessary condition is that the coupling functions vanish once evaluated on such orbit, i.e., \(\vec{g}^{(d)}(\vec{s},\ldots,\vec{s})=0\), for \(d=1,\ldots,D\). This assumption is known in the literature as _non-invasive_ condition.
For the sake of pedagogy, we will hereby consider a particular case of non-invasive couplings and we will refer the interested reader to Appendix A for a general discussion. We are thus assuming the coupling functions \(\vec{g}^{(d)}\) to be _diffusive-like_, namely for each \(d\) there exists a function \(\vec{h}^{(d)}:\mathbb{R}^{dm}\to\mathbb{R}^{m}\) such that
\[\vec{g}^{(d)}(\vec{x}_{i},\vec{x}_{j_{1}},\ldots,\vec{x}_{j_{d}})=\vec{h}^{(d )}(\vec{x}_{j_{1}},\ldots,\vec{x}_{j_{d}})-\vec{h}^{(d)}(\vec{x}_{i},\ldots, \vec{x}_{i})\,. \tag{3}\]
In this way we can straightforwardly ensure that the coupling term in Eq. (3) vanishes once evaluated on the orbit \((\vec{s}(t),\ldots,\vec{s}(t))^{\top}\), allowing thus to conclude that the latter is also a solution of the coupled system.
To study the stability of the reference solution, let us now perturb the synchronous solution \((\vec{s}(t),\ldots,\vec{s}(t))^{\top}\) with a spatially inhomogeneous term, meaning that \(\forall i\in\{1,\ldots,n\}\) we define \(\vec{x}_{i}=\vec{s}+\delta\vec{x}_{i}\). Substituting the latter into Eq. (2) and expanding up to the first order, we obtain
\[\delta\dot{\vec{x}}_{i}=\frac{\partial\vec{f}}{\partial\vec{x}_{i}}\Big{|}_{ \vec{s}}\delta\vec{x}_{i}+\sum_{d=1}^{D}q_{d}\sum_{j_{1},\ldots,j_{d}=1}^{n}B_ {ij_{1}\ldots j_{d}}(t)\sum_{\ell=1}^{d}\frac{\partial\vec{h}^{(d)}}{ \partial\vec{x}_{j_{\ell}}}\Big{|}_{(\vec{s},\ldots,\vec{s})}\delta\vec{x}_{j _{\ell}}\,, \tag{4}\]
where
\[B_{ij_{1}}(t) = A^{(1)}_{ij_{1}}(t)-k^{(1)}_{i}(t)\delta_{ij_{1}}\,,\] \[B_{ij_{1}j_{2}}(t) = A^{(2)}_{ij_{1}j_{2}}(t)-2k^{(2)}_{i}(t)\delta_{ij_{1}j_{2}}\,,\ldots\] \[B_{ij_{1}j_{2}\ldots j_{D}}(t) = A^{(D)}_{ij_{1}j_{2}\ldots j_{D}}(t)-D!k^{(D)}_{i}(t)\delta_{ij _{1}j_{2}\ldots j_{D}}\,,\]
being \(\delta_{ij_{1}j_{2}\ldots j_{D}}\) the generalized multi-indexes Kronecker-\(\delta\), and the (time-varying) \(d\)-degree of node \(i\) is given by
\[k^{(d)}_{i}(t)=\frac{1}{d!}\sum_{j_{1},\ldots,j_{d}=1}^{n}A^{(d)}_{ij_{1} \ldots j_{d}}(t)\,, \tag{5}\]
which represents the number of hyperedges of order \(d\) incident to node \(i\) at time \(t\). Observe that if \(\mathbf{A}^{(d)}\) is weighted, then \(k^{(d)}_{i}(t)\) counts both the number and the weight, it is thus the generalization of the strength of a node. Let us now define
\[k^{(d)}_{ij}(t)=\frac{1}{(d-1)!}\sum_{j_{1},\ldots,j_{d-1}=1}^{n}A^{(d)}_{ijj_{1}\ldots j_{d-1}}(t)\,, \tag{6}\]
namely the number of hyperedges of order \(d\) containing both nodes \(i\) and \(j\) at time \(t\). Again, once \(\mathbf{A}^{(d)}\) is weighted, then \(k^{(d)}_{ij}(t)\) generalizes the link strength. Let us observe that because of the invariance of \(\mathbf{A}^{(d)}\) under index permutation, we can conclude that
\(k_{ij}^{(d)}(t)=k_{ji}^{(d)}(t)\). Finally, we define the generalized time-varying higher-order Laplacian matrix for the interaction of order \(d\) as
\[L_{ij}^{(d)}(t)=\begin{cases}-d!k_{i}^{(d)}(t)&\text{if }i=j\\ (d-1)!k_{ij}^{(d)}(t)&\text{if }i\neq j\end{cases}\,. \tag{7}\]
Observe that such a matrix is symmetric because of the symmetry assumption on the tensors \(\mathbf{A}^{(d)}\). Let us also notice the difference in sign with respect to other conventions used in the literature.
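For the three-body interactions (\(d=2\)) used later in the paper, Eqs. (5)-(7) can be implemented in a few lines. The sketch below is our own illustration, with the time dependence suppressed for brevity: it builds \(\mathbf{L}^{(2)}\) from the symmetric adjacency tensor of a single triangle and shows that its rows sum to zero.

```python
import numpy as np

def laplacian_order2(A2):
    """Build L^(2) (Eq. 7) from a symmetric 3-index adjacency tensor A2,
    where A2[i, j, k] = 1 if nodes i, j, k share a 2-hyperedge (triangle)."""
    k_i  = A2.sum(axis=(1, 2)) / 2.0          # Eq. (5): generalized degree, d = 2
    k_ij = A2.sum(axis=2)                     # Eq. (6): pair degree, (d-1)! = 1
    L2 = k_ij.astype(float).copy()            # off-diagonal: (d-1)! * k_ij^(2)
    np.fill_diagonal(L2, -2.0 * k_i)          # diagonal: -d! * k_i^(2)
    return L2

# a single triangle on nodes {0, 1, 2}: all permutations of (0, 1, 2) are set
A2 = np.zeros((3, 3, 3))
for p in [(0,1,2), (0,2,1), (1,0,2), (1,2,0), (2,0,1), (2,1,0)]:
    A2[p] = 1.0
print(laplacian_order2(A2))   # rows sum to zero by construction
```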
We can then rewrite Eq. (4) as follows
\[\delta\ddot{\vec{x}}_{i} = \frac{\partial\vec{f}}{\partial\vec{x}_{i}}\Big{|}_{\vec{s}} \delta\vec{x}_{i}+\sum_{d=1}^{D}q_{d}\left[\sum_{j_{1}=1}^{n}\frac{\partial \vec{h}^{(d)}}{\partial\vec{x}_{j_{1}}}\Big{|}_{(\vec{s},\ldots,\vec{s})} \delta\vec{x}_{j_{1}}\sum_{j_{2},\ldots,j_{d}=1}^{n}B_{ij_{1}\ldots j_{d}}(t)+ \cdots+\sum_{j_{d}=1}^{n}\frac{\partial\vec{h}^{(d)}}{\partial\vec{x}_{j_{d} }}\Big{|}_{(\vec{s},\ldots,\vec{s})}\delta\vec{x}_{j_{d}}\sum_{j_{1},\ldots,j_ {d-1}=1}^{n}B_{ij_{1}\ldots j_{d}}(t)\right] \tag{8}\] \[= \frac{\partial\vec{f}}{\partial\vec{x}_{i}}\Big{|}_{\vec{s}} \delta\vec{x}_{i}+\sum_{d=1}^{D}q_{d}\sum_{j=1}^{n}L_{ij}^{(d)}(t)\left[\frac {\partial\vec{h}^{(d)}}{\partial\vec{x}_{j_{1}}}+\cdots+\frac{\partial\vec{h }^{(d)}}{\partial\vec{x}_{j_{d}}}\right]_{(\vec{s},\ldots,\vec{s})}\delta\vec{ x}_{j}\,,\]
where we used the fact that \(\frac{\partial\vec{h}^{(d)}}{\partial\vec{x}_{j_{1}}}+\cdots+\frac{\partial\vec{h}^{(d)}}{\partial\vec{x}_{j_{d}}}\) is independent of the indexes, the latter being just placeholders identifying the variable with respect to which the derivative has to be taken. Finally, by defining
\[\mathbf{J}_{f}:=\frac{\partial\vec{f}}{\partial\vec{x}_{i}}\Big{|}_{\vec{s}(t)}\quad\text{and}\quad\mathbf{J}_{h^{(d)}}:=\left[\frac{\partial\vec{h}^{(d)}}{\partial\vec{x}_{j_{1}}}+\cdots+\frac{\partial\vec{h}^{(d)}}{\partial\vec{x}_{j_{d}}}\right]_{(\vec{s},\ldots,\vec{s})}\,,\]

we can rewrite Eq. (8) in the compact form

\[\delta\dot{\vec{x}}_{i}=\mathbf{J}_{f}\delta\vec{x}_{i}+\sum_{d=1}^{D}q_{d}\sum_{j=1}^{n}L_{ij}^{(d)}(t)\mathbf{J}_{h^{(d)}}\delta\vec{x}_{j}\,. \tag{9}\]

If the coupling functions satisfy the natural coupling condition, \(\vec{h}^{(D)}(\vec{x},\ldots,\vec{x})=\cdots=\vec{h}^{(2)}(\vec{x},\vec{x})=\vec{h}^{(1)}(\vec{x})\), then all the Jacobians \(\mathbf{J}_{h^{(d)}}\) coincide with \(\mathbf{J}_{h^{(1)}}\) once evaluated on the synchronous orbit, and the coupling term factorizes as \(\sum_{j}M_{ij}(t)\mathbf{J}_{h^{(1)}}\delta\vec{x}_{j}\), where

\[\mathbf{M}(t)=\sum_{d=1}^{D}q_{d}\mathbf{L}^{(d)}(t)\,. \tag{12}\]

Let us denote by \(\phi^{(\alpha)}(t)\) and \(\mu^{(\alpha)}(t)\) the time-varying eigenvectors and eigenvalues of \(\mathbf{M}(t)\), and let us introduce the matrix \(\mathbf{c}(t)\) accounting for the time evolution of the eigenvectors,

\[\frac{d\vec{\phi}^{(\alpha)}(t)}{dt}=\sum_{\beta}c_{\alpha\beta}(t)\vec{\phi}^{(\beta)}(t)\quad\forall\alpha=1,\ldots,n\,. \tag{13}\]

By projecting the perturbation onto this eigenbasis, namely
\(\delta\vec{x}_{i}=\sum_{\alpha}\delta\hat{\vec{x}}_{\alpha}\phi_{i}^{(\alpha)}\) and recalling the definition of \(\mathbf{c}\) we obtain
\[\frac{d\delta\hat{\vec{x}}_{\beta}}{dt}=\sum_{\alpha}c_{\beta\alpha}(t)\delta \hat{\vec{x}}_{\alpha}+\left[\mathbf{J}_{f}+\mu^{(\beta)}(t)\mathbf{J}_{h^{(1) }}\right]\delta\hat{\vec{x}}_{\beta}\,. \tag{14}\]
Let us observe that the latter formula and the following analysis differ from the one presented in [37] where the perturbation is assumed to align onto a single mode, a hypothesis that ultimately translates in the stationary of the Laplace eigenvectors that is \(\mathbf{c}=\mathbf{0}\). The same assumption is also at the root of the results by [38]; indeed, commuting time-varying networks implies to deal with a constant eigenbasis. In conclusion, Eq. (14) returns the more general description for the projection of the linearized dynamics on a generic time-varying Laplace eigenbasis, and thus allowing us to draw general conclusions without unnecessary simplifying assumptions.
### Regular topologies
An alternative approach to study Eq. (9) is to assume regular topologies [23], namely hypergraphs such that \(\mathbf{L}^{(d)}(t)=\alpha_{d}\mathbf{L}^{(1)}(t)\), for \(d=1,\ldots,D\), with \(\alpha_{1}=1\) and \(\alpha_{d}\in\mathbb{R}_{+}\). Indeed we can use this assumption to obtain from Eq. (9)
\[\delta\hat{\vec{x}}_{i}=\mathbf{J}_{f}\delta\vec{x}_{i}+\sum_{j=1}^{n}L^{(1)} _{ij}(t)\mathbf{J}_{h}\delta\vec{x}_{j}\,, \tag{15}\]
where
\[\mathbf{J}_{h}:=\sum_{d=1}^{D}q_{d}\alpha_{d}\mathbf{J}_{h^{(d)}}\,, \tag{16}\]
that results in a sort of weighted nonlinear coupling term. We can now make use of the existence of a time-varying orthonormal basis of \(\mathbf{L}^{(1)}(t)\), namely \(\psi^{(\alpha)}(t)\), \(\alpha=2,\ldots,n\), associated to eigenvalues \(\Lambda^{(\alpha)}<0\), \(\psi^{(1)}(t)=(1,\ldots,1)^{\top}\) and \(\Lambda^{(1)}=0\), to project \(\delta\vec{x}_{i}\) onto the \(n\) eigendirections, \(\delta\vec{x}_{i}=\sum_{\alpha}\delta\vec{\vec{x}}_{\alpha}\psi_{i}^{(\alpha)}\). Because the latter vary in time we need to define a second \(n\times n\) time dependent matrix \(\mathbf{b}(t)\) given by
\[\frac{d\vec{\psi}^{(\alpha)}(t)}{dt}=\sum_{\beta}b_{\alpha\beta}(t)\vec{\psi }^{(\beta)}(t)\quad\forall\alpha=1,\ldots,n\,, \tag{17}\]
that it is again real, skew-symmetric, with a null first row and first column, i.e., \(b_{\alpha\beta}+b_{\beta\alpha}=0\) and \(b_{1\alpha}=0\), because of the orthonormality condition of eigenvectors. By projecting Eq. (15) onto \(\psi^{(\alpha)}(t)\), we get
\[\frac{d\delta\hat{\vec{x}}_{\beta}}{dt}=\sum_{\alpha}b_{\beta\alpha}(t)\delta \hat{\vec{x}}_{\alpha}+\left[\mathbf{J}_{f}+\Lambda^{(\beta)}(t)\mathbf{J}_{ \hat{h}}\right]\delta\hat{\vec{x}}_{\beta}\,. \tag{18}\]
Let us conclude by observing that the latter equation has the same structure as (14). Those equations determine the generalization of the Master Stability Equation to the case of time-varying higher-order structures. The signature of the time variation of the topology is captured by the matrices \(\mathbf{c}(t)\) or \(\mathbf{b}(t)\) and by the eigenvalues \(\mu^{(\alpha)}(t)\) or \(\Lambda^{(\alpha)}(t)\), while the dynamics (resp. the coupling) is encoded in the Jacobian \(\mathbf{J}_{f}\) (resp. \(\mathbf{J}_{h^{(1)}}\) or \(\mathbf{J}_{\hat{h}}\)).
It is important to notice that, since the eigenvalues \(\mu^{(1)}=0\) and \(\Lambda^{(1)}=0\), and the skew-symmetric matrices \(\mathbf{c}(t),\mathbf{b}(t)\) have null first row and column, in analogy with the MSF approaches carried out on static networks [36] and higher-order structures [39], also in the case of time-varying higher-order structures we can decouple the Master Stability Equation into two components. One component describes the movement along the synchronous manifold, while the other component represents the evolution of the different modes that are transverse to the synchronous manifold. The Maximum Lyapunov Exponent (MLE) associated with the transverse modes measures the exponential growth rate of a tiny perturbation in the transverse subspace. It serves as an enhanced form of Master Stability Function (MSF) and provides valuable insights into the stability of the reference orbit. For the synchronous orbit to be stable, the MLE associated to all transverse modes must be negative. Moreover, the MSF approaches applied to static networks and higher-order structures can be simplified by examining the evolution of the perturbation along each independent eigendirection associated with distinct eigenvalues of the Laplacian matrix. Let us observe that this is not possible in the present case because the matrices \(\mathbf{c}(t)\) and \(\mathbf{b}(t)\) mix the different modes and introduce a complex interdependence among them, making it challenging to disentangle their individual contributions. For this reason, one has to address the problem numerically [10].
To demonstrate the above introduced theory and emphasize the outcomes arising from the modified Master Stability Equations (14) and (18), we will present two key examples in the following sections. Indeed, we will utilize the Stuart-Landau limit cycle oscillator and the chaotic Lorenz system as prototype dynamical systems anchored to each individual node. To simplify the calculations, we assume that the hypergraph consists of only three nodes, three links and one triangle (face), whose weights change in time. Additionally, the eigenvector projection matrices \(\mathbf{c}(t)\) and \(\mathbf{b}(t)\) do not vary in time; this assumption results from a suitable choice of the Laplace eigenbasis as explained later in Appendix B. Finally, to simplify the analysis we also assume the Laplace eigenvalues to be constant in time. Let us stress that despite such assumptions, the proposed framework is very general and can be applied to any time-varying hypergraph.
## III Synchronization of Stuart-Landau oscillators coupled via time-varying higher-order networks
The aim of this section is to present an application of the theory above introduced. We decided to use the Stuart-Landau (SL) model as a prototype example for two reasons; first, it provides the normal form for a generic system close to a supercritical Hopf-bifurcation, second, because of its structure, the Jacobian of the reaction part becomes constant once evaluated on the reference orbit and this simplifies the presentation of the results.
A SL oscillator can be described by a complex amplitude \(w\) that evolves in time according to \(\dot{w}=\sigma w-\beta|w|^{2}w\), where \(\sigma=\sigma_{\Re}+i\sigma_{\Im}\) and \(\beta=\beta_{\Re}+i\beta_{\Im}\) are complex model parameters. The system admits a limit cycle solution \(w_{LC}(t)=\sqrt{\sigma_{\Re}/\beta_{\Re}}e^{i\omega t}\), where \(\omega=\sigma_{\Im}-\beta_{\Im}\sigma_{\Re}/\beta_{\Re}\), that is stable provided \(\sigma_{\Re}>0\) and \(\beta_{\Re}>0\), conditions that we hereby assume.
To proceed in the analysis, we couple together \(n\) identical SL oscillators, each described by a complex amplitude \(w_{j}\), with \(j=1,...,n\), anchored to the nodes of a time-varying hypergraph as prescribed in the previous section, namely
\[\frac{dw_{j}}{dt}=\sigma w_{j}-\beta w_{j}|w_{j}|^{2}+\sum_{d=1}^{D}q_{d}\sum _{j_{1},...,j_{d}=1}^{n}A_{jj_{1}...j_{d}}^{(d)}(t)\vec{g}^{(d)}(w_{j},w_{j_{ 1}},\ldots,w_{j_{d}})\,. \tag{19}\]
For the sake of simplicity, we restrict our analysis to pairwise and three-body interactions, namely \(D=2\) in Eq. (19). We hereby present and discuss the SL synchronization under the diffusive-like coupling hypothesis and by using two different assumptions: regular topology and natural coupling. The case of non-invasive coupling will be presented in Appendix A.1.
### Diffusive-like and regular topology
Let us thus assume the existence of two functions \(h^{(1)}(w)\) and \(h^{(2)}(w_{1},w_{2})\) such that \(g^{(1)}\) and \(g^{(2)}\) do satisfy the diffusive-like condition (3), and let the regular topology condition hold, namely \(\mathbf{L}^{(2)}(t)=\alpha_{2}\mathbf{L}^{(1)}(t)\). Choosing the simplest such functions, i.e., \(h^{(1)}(w)=w\) and \(h^{(2)}(w_{1},w_{2})=w_{1}w_{2}\), and perturbing the limit cycle solution by setting \(w_{j}=W_{LC}(1+\rho_{j})e^{i\theta_{j}}\), where \(\rho_{j}\) and \(\theta_{j}\) are real and small functions for all \(j\), a straightforward computation allows us to write the time evolution of the perturbation as
\[\frac{d}{dt}\begin{pmatrix}\rho_{j}\\ \theta_{j}\end{pmatrix}=\begin{pmatrix}-2\sigma_{\Re}&0\\ -2\beta_{\Im}\frac{\sigma_{\Re}}{\beta_{\Re}}&0\end{pmatrix}\begin{pmatrix} \rho_{j}\\ \theta_{j}\end{pmatrix}+\sum_{\ell}L_{j\ell}^{(1)}\bigg{[}\begin{pmatrix}q_{1, \Re}&-q_{1,\Im}\\ q_{1,\Im}&q_{1,\Re}\end{pmatrix}+2\alpha_{2}\sqrt{\frac{\sigma_{\Re}}{\beta_{ \Re}}}\begin{pmatrix}\cos(\omega t)&-\sin(\omega t)\\ \sin(\omega t)&\cos(\omega t)\end{pmatrix}\begin{pmatrix}q_{2,\Re}&-q_{2,\Im}\\ q_{2,\Im}&q_{2,\Re}\end{pmatrix}\begin{pmatrix}\rho_{\ell}\\ \theta_{\ell}\end{pmatrix}\,, \tag{23}\]
where \(\omega=\sigma_{\Im}-\beta_{\Im}\sigma_{\Re}/\beta_{\Re}\) is the frequency of the limit cycle solution.
By exploiting the eigenvectors \(\psi^{(\alpha)}(t)\) and eigenvalues \(\Lambda^{(\alpha)}(t)\) of \(\mathbf{L}^{(1)}(t)\) to project the perturbation \(\rho_{j}\) and \(\theta_{j}\) we obtain:
\[\frac{d}{dt}\begin{pmatrix}\rho_{\beta}\\ \theta_{\beta}\end{pmatrix}=\sum_{\alpha}b_{\beta\alpha}\begin{pmatrix}\rho_{\alpha}\\ \theta_{\alpha}\end{pmatrix}+\left\{\begin{pmatrix}-2\sigma_{\Re}&0\\ -2\beta_{\Im}\frac{\sigma_{\Re}}{\beta_{\Re}}&0\end{pmatrix}+\Lambda^{(\beta)}\left[\begin{pmatrix}q_{1,\Re}&-q_{1,\Im}\\ q_{1,\Im}&q_{1,\Re}\end{pmatrix}+2\alpha_{2}\sqrt{\frac{\sigma_{\Re}}{\beta_{\Re}}}\begin{pmatrix}\cos(\omega t)&-\sin(\omega t)\\ \sin(\omega t)&\cos(\omega t)\end{pmatrix}\begin{pmatrix}q_{2,\Re}&-q_{2,\Im}\\ q_{2,\Im}&q_{2,\Re}\end{pmatrix}\right]\right\}\begin{pmatrix}\rho_{\beta}\\ \theta_{\beta}\end{pmatrix}\,, \tag{22}\]
where the matrix \(\mathbf{b}\) has been defined in Eq. (17).
For the sake of definiteness and to focus on the impact
of the time-varying topology, we hereby consider a simple higher-order network structure composed of \(n=3\) nodes, three links and one triangle. Moreover, the eigenvalues are assumed to be constant and the time-derivative of the associated eigenvectors projected on the eigenbasis to return a constant matrix \(\mathbf{b}\), for a given \(\Omega\geq 0\)
\[\mathbf{b}=\begin{pmatrix}0&0&0\\ 0&0&\Omega\\ 0&-\Omega&0\end{pmatrix}\,. \tag{23}\]
One can show (see Appendix B and [10]) that those assumptions on the hypergraph correspond to two eigenvectors rotating in a plane orthogonal to the constant eigenvector \(\psi^{(1)}\sim(1,\dots,1)^{\top}\) with frequency \(\Omega>0\). The case \(\Omega=0\) corresponds thus to a static higher-order network structure.
Under those assumptions, Eq. (22) determines a time periodic linear system whose stability can be determined by using Floquet theory. In order to illustrate our results, we let \(q_{1,\Im}\) and \(q_{2,\Im}\) to freely vary in the range \([-5,5]\), while keeping fixed to generic values the remaining parameters, and we compute the Floquet eigenvalue with the largest real part, corresponding thus to the Master Stability Function (MSF) of Eq. (22), as a function of \(q_{1,\Im}\) and \(q_{2,\Im}\). The corresponding results are shown in Fig. 1 for \(\Omega=0\) (panel (a)) and \(\Omega=2\) (panel (b)). By a direct inspection, one can clearly conclude that the parameters region associated with a negative MSF (black region), i.e., to the stability of the SL limit cycle and thus to global synchronization, is larger for \(\Omega>0\) than for \(\Omega=0\).
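A possible numerical recipe for this Floquet computation is sketched below; it is our own sketch, not the authors' released code. The two transverse modes \(\beta=2,3\) of Eq. (22), coupled through \(\mathbf{b}\), form a \(4\)-dimensional time-periodic linear system; integrating the fundamental matrix over one period \(T=2\pi/\omega\) and taking the largest \(\log|\cdot|/T\) of the Floquet multipliers gives the MSF. Parameter values follow the captions of Figs. 1-2 and are otherwise illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# parameters following the captions of Figs. 1-2 (illustrative values)
sg, bt = 1.0 + 4.3j, 1.0 + 1.1j            # sigma, beta
alpha2, Omega = 2.0, 2.0
lam = {2: -1.0, 3: -2.0}                   # Laplace eigenvalues Lambda^(2), Lambda^(3)
q1, q2 = 0.1 - 0.5j, 0.1 + 0.5j            # pairwise / three-body couplings
w = sg.imag - bt.imag * sg.real / bt.real  # limit-cycle frequency
T = 2 * np.pi / abs(w)                     # period of the linearized system

def rot(x):                                # 2x2 rotation matrix
    return np.array([[np.cos(x), -np.sin(x)], [np.sin(x), np.cos(x)]])

def cmat(q):                               # complex number as a 2x2 real matrix
    return np.array([[q.real, -q.imag], [q.imag, q.real]])

J0 = np.array([[-2 * sg.real, 0.0],
               [-2 * bt.imag * sg.real / bt.real, 0.0]])

def jac(t):
    """4x4 Jacobian of the transverse modes (beta = 2, 3) of Eq. (22)."""
    J = np.zeros((4, 4))
    C = cmat(q1) + 2 * alpha2 * np.sqrt(sg.real / bt.real) * rot(w * t) @ cmat(q2)
    for k, beta in enumerate((2, 3)):
        J[2*k:2*k+2, 2*k:2*k+2] = J0 + lam[beta] * C
    J[0:2, 2:4] += Omega * np.eye(2)       # b_{23} = Omega
    J[2:4, 0:2] -= Omega * np.eye(2)       # b_{32} = -Omega
    return J

def msf():
    """Largest Floquet exponent of the transverse dynamics (the MSF)."""
    rhs = lambda t, y: (jac(t) @ y.reshape(4, 4)).ravel()
    sol = solve_ivp(rhs, (0.0, T), np.eye(4).ravel(), rtol=1e-9, atol=1e-12)
    monodromy = sol.y[:, -1].reshape(4, 4)
    return np.log(np.max(np.abs(np.linalg.eigvals(monodromy)))) / T

print(msf())  # a negative value signals a stable synchronous orbit
```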
To study the combined effect of both coupling strengths \(q_{1}\) and \(q_{2}\), we set \(q_{1}=\epsilon_{1}q_{1,0}\) and \(q_{2}=\epsilon_{2}q_{2,0}\), and we compute the MSF as a function of \(\epsilon_{1}\) and \(\epsilon_{2}\), having fixed without loss of generality \(q_{1,0}=0.1-0.5i\) and \(q_{2,0}=0.1-0.5i\). The corresponding results are presented in Fig. 2 for static (\(\Omega=0\), panel (a)) and time-varying (\(\Omega=2\), panel (b)) higher-order structure. We can again conclude that the region of parameters corresponding to global synchronization (black region) is larger in the case of time-varying hypergraph than in the static case.

Figure 1: **Synchronization on time-varying regular higher-order network of coupled SL oscillators**. We report the MSF as a function of \(q_{1,\Im}\) and \(q_{2,\Im}\) for two different values of \(\Omega\), \(\Omega=0\) (panel (a)) and \(\Omega=2\) (panel (b)), by using a color code, we determine the region of stability (black) and the region of instability (yellow). The remaining parameters have been fixed at the values \(\alpha_{2}=2\), \(\sigma=1.0+4.3i\), \(\beta=1.0+1.1i\), \(q_{1,\Re}=0.1\), \(q_{2,\Re}=0.1\), \(\Lambda^{(2)}=-1\), and \(\Lambda^{(3)}=-2\).

Figure 2: **Synchronization on time-varying regular higher-order network of coupled SL oscillators**. The MSF is reported as a function of \(\epsilon_{1}\) and \(\epsilon_{2}\) for two different values of \(\Omega\), \(\Omega=0\) (panel (a)) and \(\Omega=2\) (panel (b)). The color code represents the values of the MSF, negative values (black) while positive values (yellow). The remaining parameters have been fixed at the values \(\alpha_{2}=2\), \(\sigma=1.0+4.3i\), \(\beta=1.0+1.1i\), \(q_{1,0}=0.1-0.5i\), \(q_{2,0}=0.1+0.5i\), \(\Lambda^{(2)}=-1\), and \(\Lambda^{(3)}=-2\).
Our last analysis concerns the effect on the onset of synchronization of the interplay between the frequency \(\Omega\) and the size of the coupling parameters \(\epsilon_{1}\), \(\epsilon_{2}\), still assuming \(q_{1}=\epsilon_{1}q_{1,0}\) and \(q_{2}=\epsilon_{2}q_{2,0}\). In Fig. 3 we report the MSF in the plane \((\Omega,\epsilon_{1})\) for a fixed value of \(\epsilon_{2}\) (panel (a)), and in the plane \((\Omega,\epsilon_{2})\) for a fixed value of \(\epsilon_{1}\) (panel (b)). Let us observe that, for fixed \(\Omega\), synchronization is achieved the more easily the smaller the value of \(\epsilon_{j}\), \(j=1,2\), for which the MSF becomes negative. Let us thus define \(\hat{\epsilon}_{1}(\Omega)=\min\{\epsilon>0:\text{MSF}(\epsilon,\epsilon_{2},\Omega)<0\}\), for fixed \(\epsilon_{2}\), and similarly \(\hat{\epsilon}_{2}(\Omega)\). The results of Fig. 3 clearly show that \(\hat{\epsilon}_{1}(\Omega)<\hat{\epsilon}_{1}(0)\sim 3.5\) and \(\hat{\epsilon}_{2}(\Omega)<\hat{\epsilon}_{2}(0)\sim 4.2\), and thus support our claim that time-varying structures make synchronization easier to achieve.
To support our analysis, we performed numerical simulations of the SL system defined on the simple 3-node time-varying hypergraph. We selected \((\epsilon_{1},\epsilon_{2})=(2.5,0.5)\) and the remaining parameter values as in Fig. 2. By observing the latter figure, we conclude that for the chosen parameters the MSF is positive if \(\Omega=0\) and negative if \(\Omega=2\); hence the SL system should globally synchronize on the time-varying hypergraph, while it would not achieve this state in the static case. The results of Fig. 4 confirm these conclusions; indeed, we can observe that the real parts of the complex state variables are in phase for all \(i\) in the case \(\Omega=2\) (right panel), while this is clearly not the case for \(\Omega=0\) (left panel).
### Diffusive-like and natural coupling
The aim of this section is to replace the condition of regular topology with a condition of natural coupling and consider thus again, a diffusive-like coupling. Let us thus consider now two functions \(h^{(1)}(w)\) and \(h^{(2)}(w_{1},w_{2})\) satisfying the natural coupling assumption, namely
\[h^{(1)}(w)=h^{(2)}(w,w)\,.\]
Figure 3: **Synchronization domains**. We show the MSF in the plane \((\Omega,\epsilon_{1})\) (panel (a)) for \(\epsilon_{2}=0.02\) and in the plane \((\Omega,\epsilon_{2})\) (panel (b)) for \(\epsilon_{1}=0.02\). We can observe that in both panels, the critical value of coupling strengths \(\hat{\epsilon}_{j}(\Omega)\) to achieve synchronization is smaller for \(\Omega>0\) than for \(\Omega=0\). Furthermore, in panel (a) existence of an interval \(\mathcal{I}_{1}=[\Omega_{1},\Omega_{2}]\) can be observed such that for all \(\Omega\in\mathcal{I}_{1}\), there exist three different values of critical coupling \(\hat{\epsilon}_{1}\) for the occurrence of synchronization. In panel (b), we can observe the existence of two intervals \(\mathcal{I}_{2}=[\Omega_{3},\Omega_{4}]\) and \(\mathcal{I}_{3}=[\Omega_{5},\Omega_{6}]\) such that for all \(\Omega\in\mathcal{I}_{2}\) there exist two critical values of \(\hat{\epsilon}_{2}\) and for all \(\Omega\in\mathcal{I}_{3}\) there exist three critical values of \(\hat{\epsilon}_{2}\) for the emergence of synchronization. The remaining parameters are kept fixed at the values \(\alpha_{2}=2\), \(\sigma=1.0+4.3i\), \(\beta=1.0+1.1i\), \(q_{1,0}=0.1-0.5i\), \(q_{2,0}=0.1+0.5i\), \(\Lambda^{(2)}=-1\), and \(\Lambda^{(3)}=-2\).
For the sake of definitiveness, let us fix
\[h^{(1)}(w)=w^{3}\text{ and }h^{(2)}(w_{1},w_{2})=w_{1}(w_{2})^{2}\,. \tag{24}\]
Let us again perturb the limit cycle solution \(w_{LC}(t)=\sqrt{\sigma_{\Re}/\beta_{\Re}}e^{i\omega t}\) by defining \(w_{j}=W_{LC}(1+\rho_{j})e^{i\theta_{j}}\), where \(\rho_{j}\) and \(\theta_{j}\) are real and small functions for all \(j\). A straightforward computation allows us to write the time evolution of \(\rho_{j}\) and \(\theta_{j}\) as
\[\frac{d}{dt}\begin{pmatrix}\rho_{j}\\ \theta_{j}\end{pmatrix}=\begin{pmatrix}-2\sigma_{\Re}&0\\ -2\beta_{\Im}\frac{\sigma_{\Re}}{\beta_{\Re}}&0\end{pmatrix}\begin{pmatrix} \rho_{j}\\ \theta_{j}\end{pmatrix}+3\frac{\sigma_{\Re}}{\beta_{\Re}}\sum_{\ell}M_{j\ell} \begin{pmatrix}\cos(2\omega t)&-\sin(2\omega t)\\ \sin(2\omega t)&\cos(2\omega t)\end{pmatrix}\begin{pmatrix}\rho_{l}\\ \theta_{l}\end{pmatrix}\,, \tag{25}\]
where \(\omega=\sigma_{\Im}-\beta_{\Im}\sigma_{\Re}/\beta_{\Re}\) is the frequency of the limit cycle solution and \(\mathbf{M}\) is the matrix \(q_{1}\mathbf{L}^{(1)}(t)+q_{2}\mathbf{L}^{(2)}(t)\) (see Eq. (12)). Let us observe that in this case, the coupling parameters \(q_{1}\) and \(q_{2}\) should be real numbers if we want to deal with real Laplace matrices, hypothesis that we hereby assume to hold true.
By invoking the eigenvectors \(\phi^{(\alpha)}(t)\) and eigenvalues \(\mu^{(\alpha)}(t)\) of \(\mathbf{M}(t)\), and the matrix \(\mathbf{c}\) (see Eq. (13)), we can project the perturbation \(\rho_{j}\) and \(\theta_{j}\) on the eigenbasis and thus rewrite the time variation of the perturbation as follows
\[\frac{d}{dt}\begin{pmatrix}\rho_{\beta}\\ \theta_{\beta}\end{pmatrix}=\sum_{\alpha}c_{\beta\alpha}\begin{pmatrix}\rho_{ \alpha}\\ \theta_{\alpha}\end{pmatrix}+\left[\begin{pmatrix}-2\sigma_{\Re}&0\\ -2\beta_{\Im}\frac{\sigma_{\Re}}{\beta_{\Re}}&0\end{pmatrix}+3\frac{\sigma_{ \Re}}{\beta_{\Re}}\mu^{(\beta)}\begin{pmatrix}\cos(2\omega t)&-\sin(2\omega t) \\ \sin(2\omega t)&\cos(2\omega t)\end{pmatrix}\right]\begin{pmatrix}\rho_{\beta} \\ \theta_{\beta}\end{pmatrix}\,. \tag{26}\]
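To make the stability computation concrete, the following minimal Python sketch integrates the transverse-mode system obtained from Eq. (26) for the 3-node case (modes \(\alpha=2,3\) coupled through the skew-symmetric matrix \(\mathbf{c}\)) and estimates the MSF as the largest Lyapunov exponent via periodic renormalization. The Stuart-Landau parameters follow the Fig. 5 caption, while the integration time, renormalization interval and example eigenvalues are illustrative assumptions rather than the values used to produce the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stuart-Landau parameters (Fig. 5 caption)
sigma = 1.0 + 4.3j
beta = 1.0 + 1.1j
sr, si = sigma.real, sigma.imag
br, bi = beta.real, beta.imag
omega = si - bi * sr / br                       # limit-cycle frequency

A0 = np.array([[-2.0 * sr, 0.0],
               [-2.0 * bi * sr / br, 0.0]])     # constant part of Eq. (26)

def rot(t):
    """Rotation matrix at twice the limit-cycle frequency."""
    c, s = np.cos(2 * omega * t), np.sin(2 * omega * t)
    return np.array([[c, -s], [s, c]])

def msf(mu2, mu3, Omega, T=200.0, dt=1.0):
    """Largest Lyapunov exponent of the coupled transverse modes
    (rho_2, theta_2, rho_3, theta_3) of Eq. (26)."""
    def rhs(t, x):
        x2, x3 = x[:2], x[2:]
        J2 = A0 + 3 * sr / br * mu2 * rot(t)
        J3 = A0 + 3 * sr / br * mu3 * rot(t)
        dx2 = J2 @ x2 + Omega * x3              # coupling via c_{23} = Omega
        dx3 = J3 @ x3 - Omega * x2              # coupling via c_{32} = -Omega
        return np.concatenate([dx2, dx3])

    x = np.random.default_rng(0).normal(size=4)
    x /= np.linalg.norm(x)
    lyap, t = 0.0, 0.0
    while t < T:                                # integrate in chunks and renormalize
        x = solve_ivp(rhs, (t, t + dt), x, rtol=1e-8, atol=1e-10).y[:, -1]
        n = np.linalg.norm(x)
        lyap += np.log(n)
        x /= n
        t += dt
    return lyap / T                             # longer T gives a better estimate

# Static versus time-varying hypergraph for one (illustrative) pair of eigenvalues
print(msf(mu2=-1.0, mu3=-2.0, Omega=0.0))
print(msf(mu2=-1.0, mu3=-2.0, Omega=2.0))
```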
Let us again assume that we deal with a hypergraph made of 3 nodes and consider a time-independent matrix \(\mathbf{c}\)
\[\mathbf{c}=\begin{pmatrix}0&0&0\\ 0&0&\Omega\\ 0&-\Omega&0\end{pmatrix}\,,\]
for some \(\Omega\geq 0\). The eigenvalue \(\mu^{(1)}=0\) of \(\mathbf{M}\) determines the dynamics parallel to the synchronous manifold. On the other hand, the equations obtained for \(\mu^{(2)}\) and \(\mu^{(3)}\) give the dynamics of the modes transverse to the synchronization manifold. Hence the MSF can be obtained by solving the latter equations, providing the conditions for a globally stable synchronous solution to exist. In Fig. 5, we show the level sets of the MSF as a function of the eigenvalues \(\mu^{(2)}\) and \(\mu^{(3)}\) while keeping the remaining parameters in Eq. (26) fixed at generic nominal values. In panel (a), we consider a static hypergraph, i.e., \(\Omega=0\), while in panel (b) a time-varying one, i.e., \(\Omega=2\). Negative values of the MSF are reported in black and correspond to a globally synchronous state, while positive values of the MSF are shown in yellow. One can clearly appreciate that in the case of the time-varying hypergraph, the MSF is negative for a much larger set of eigenvalues \(\mu^{(2)}\) and \(\mu^{(3)}\), and thus the SL system can synchronize more easily.
## IV Synchronization of Lorenz systems nonlinearly coupled via time-varying higher-order networks
The aim of this section is to show that our results hold true beyond the example of the dynamical system considered above, i.e., the Stuart-Landau model. We thus present an application to the synchronization of chaotic systems on a time-varying higher-order network. For the sake of definiteness, we use the paradigmatic chaotic Lorenz model for the evolution of the individual nonlinear oscillators.
We consider again the scenario of regular topology with the toy model hypergraph structure composed of \(n=3\) nodes described previously; the whole system can thus be described by
\[\begin{cases}\dot{x}_{i}&=a_{1}(y_{i}-x_{i})+\epsilon_{2}\sum\limits_{j=1}^{N }\sum\limits_{k=1}^{N}A_{ijk}^{(2)}(x_{j}^{2}x_{k}-x_{i}^{3})\\ \dot{y}_{i}&=x_{i}(a_{3}-z_{i})-y_{i}+\epsilon_{1}\sum\limits_{j=1}^{N}A_{ij}^{( 1)}(y_{j}-y_{i})\\ \dot{z}_{i}&=x_{i}y_{i}-a_{2}z_{i}\end{cases}\,, \tag{27}\]
where the system parameters are kept fixed at \(a_{1}=10\), \(a_{2}=\frac{8}{3}\), \(a_{3}=28\), for which individual nodes exhibit chaotic trajectories. The pairwise and higher-order structures are related to each other by \(\mathbf{L}^{(2)}=\alpha_{2}\mathbf{L}^{(1)}\). We assume the eigenvalues of the Laplacian \(\mathbf{L}^{(1)}\) to be constant and the matrix \(\mathbf{b}\) to be given by
\[\mathbf{b}=\begin{pmatrix}0&0&0\\ 0&0&\Omega\\ 0&-\Omega&0\end{pmatrix}\quad\text{for some }\Omega\geq 0.\]
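As a complement to the linear stability analysis, one can also integrate Eq. (27) directly and monitor the spread of the node trajectories. The sketch below does this for the static case (\(\Omega=0\)), assuming the 3-node toy structure is a triangle supplemented by the single hyperedge \(\{1,2,3\}\); the coupling strengths are chosen above the static-case threshold \(\hat{\epsilon}_{1}(0)\sim 1.4\) quoted below, but they remain illustrative values rather than the ones used for the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

a1, a2, a3 = 10.0, 8.0 / 3.0, 28.0      # Lorenz parameters as in the text
eps1, eps2 = 2.0, 0.01                  # illustrative coupling strengths (assumption)
N = 3

A1 = np.ones((N, N)) - np.eye(N)        # pairwise adjacency: a triangle (assumption)
A2 = np.zeros((N, N, N))                # adjacency tensor of the hyperedge {1,2,3}
for i in range(N):
    for j in range(N):
        for k in range(N):
            if len({i, j, k}) == 3:
                A2[i, j, k] = 1.0

def rhs(t, u):
    """Right-hand side of Eq. (27); u = [x1, y1, z1, x2, y2, z2, x3, y3, z3]."""
    x, y, z = u[0::3], u[1::3], u[2::3]
    coup2 = np.einsum('ijk,j,k->i', A2, x**2, x) - A2.sum(axis=(1, 2)) * x**3
    dx = a1 * (y - x) + eps2 * coup2
    dy = x * (a3 - z) - y + eps1 * (A1 @ y - A1.sum(axis=1) * y)
    dz = x * y - a2 * z
    return np.ravel(np.column_stack([dx, dy, dz]))

rng = np.random.default_rng(1)
u0 = np.tile([1.0, 1.0, 1.0], N) + 1e-2 * rng.normal(size=3 * N)
sol = solve_ivp(rhs, (0.0, 50.0), u0, rtol=1e-8, atol=1e-10)
x_final = sol.y[0::3, -1]
print("spread of x across nodes at t = 50 (close to 0 if synchronized):",
      x_final.max() - x_final.min())
```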
Let us thus select as reference solution \(\vec{s}(t)\) a chaotic orbit of the isolated Lorenz model and consider, as done previously, the time evolution of a perturbation about such a trajectory. Computations similar to those reported above allow us to obtain a linear non-autonomous system ruling the evolution of the perturbation, whose stability can be numerically inferred by computing the largest Lyapunov exponent, i.e., the MSF. We first considered the impact of the coupling strengths \(\epsilon_{1}\) and \(\epsilon_{2}\) on synchronization; results are reported in Fig. 6, where we present the level sets of the MSF as a function of the above parameters by using a color code: black dots refer to negative MSF while yellow dots to positive MSF. Panel (a) refers to a static hypergraph, i.e., \(\Omega=0\), while panel (b) to a time-varying one, i.e., \(\Omega=3\). One can thus appreciate that the latter setting yields a negative MSF for a larger range of parameters \(\epsilon_{1}\) and \(\epsilon_{2}\), and hence we can conclude that time-varying hypergraphs enhance synchronization also in the case of chaotic oscillators.
We conclude this analysis by studying again the relation between the frequency \(\Omega\) and the size of the coupling parameters \(\epsilon_{1}\), \(\epsilon_{2}\) at the onset of synchronization. In Fig. 7 we show the MSF in the plane \((\Omega,\epsilon_{1})\) for a fixed value of \(\epsilon_{2}=0.01\) (panel (a)), and in the plane \((\Omega,\epsilon_{2})\) for a fixed value of \(\epsilon_{1}=0.2\) (panel (b)). By using again \(\hat{\epsilon}_{1}(\Omega)=\min\{\epsilon>0:\text{MSF}(\epsilon,\epsilon_{2},\Omega)<0\}\), for fixed \(\epsilon_{2}\), and similarly \(\hat{\epsilon}_{2}(\Omega)\), we can conclude that \(\hat{\epsilon}_{1}(\Omega)<\hat{\epsilon}_{1}(0)\sim 1.4\) and \(\hat{\epsilon}_{2}(\Omega)<\hat{\epsilon}_{2}(0)\sim 0.04\), again supporting our claim that time-varying structures make synchronization easier to achieve.
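The critical couplings \(\hat{\epsilon}_{j}(\Omega)\) used here can be estimated numerically by scanning the coupling strength until the MSF first becomes negative. A minimal sketch, assuming a user-supplied function `msf(eps1, eps2, Omega)` (for instance, one built along the lines of the earlier sketch) that returns the largest Lyapunov exponent:

```python
import numpy as np

def critical_eps1(msf, eps2, Omega, eps_grid=np.linspace(0.01, 3.0, 60)):
    """Smallest eps1 on the grid with a negative MSF, i.e. an estimate of
    hat{eps}_1(Omega); returns None if the MSF never becomes negative."""
    for eps1 in eps_grid:
        if msf(eps1, eps2, Omega) < 0:
            return float(eps1)
    return None

# Example usage (msf is assumed to be defined elsewhere):
# print(critical_eps1(msf, eps2=0.01, Omega=0.0))   # expected ~1.4 for the static case
# print(critical_eps1(msf, eps2=0.01, Omega=3.0))   # expected to be smaller
```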
Figure 5: **Synchronization on time-varying higher-order network of coupled SL oscillators with diffusive-like natural coupling.** We report the MSF as a function of the eigenvalues \(\mu^{(2)}\) and \(\mu^{(3)}\) for two different choices of \(\Omega\), \(\Omega=0\) (panel (a)) and \(\Omega=2\) (panel (b)) by using a color code, black is associated to negative values while positive ones are shown in yellow. We characterize the range of the axes by considering the absolute values of the eigenvalues. The remaining parameters are kept fixed at \(\sigma=1.0+4.3i,\,\beta=1.0+1.1i\).
Figure 6: **Synchronization on time-varying regular higher-order network of coupled Lorenz oscillators**. We report the MSF as a function of the coupling strengths, \(\epsilon_{1}\) and \(\epsilon_{2}\), for two different values of \(\Omega\), \(\Omega=0\) (panel (a)) and \(\Omega=3\) (panel (b)), by using a color code, where black dots stand for a negative MSF, i.e., global synchronization, while yellow dots for a positive MSF. The remaining parameters are kept fixed at \(a_{1}=10\), \(a_{2}=\frac{8}{3}\), \(a_{3}=28\), and \(\alpha_{2}=2\).
## V Conclusions
To sum up, we have introduced and studied a generalized framework for the emergence of global synchronization on time-varying higher-order networks and developed a theory for its stability without imposing strong restrictions on the functional time evolution of the higher-order structure. We have demonstrated that the latter can be examined by extending the Master Stability Function technique to the novel framework for specific cases, based either on the inter-node coupling scheme or on the topology of the higher-order structure. Our findings reveal that the behavior of the higher-order network is represented by a matrix that changes over time and possesses skew symmetry. This matrix is derived from the time-dependent evolution of the eigenvectors of the higher-order Laplacian. Additionally, the eigenvalues associated with these eigenvectors can also vary over time and have an impact on shaping the evolution of the introduced disturbance. We have validated the proposed theory on time-varying hypergraphs of coupled Stuart-Landau oscillators and chaotic Lorenz systems, and the results obtained indicate that incorporating temporal aspects into group interactions can facilitate synchronization in higher-order networks compared to static ones.
The framework and concepts presented in this study create opportunities for future research on the impact of temporality in systems where time-varying group interactions have been observed but not yet thoroughly explored due to the absence of a suitable mathematical setting. Importantly, the fact that our theory does not require any restrictions on the time evolution of the underlying structure could offer the possibility of applying it to a diverse range of problems other than synchronization.
|
2305.12573 | High-resolution APEX/LAsMA $^{12}$CO and $^{13}$CO (3-2) observation of
the G333 giant molecular cloud complex : I. Evidence for gravitational
acceleration in hub-filament systems | Hub-filament systems are suggested to be the birth cradles of high-mass stars
and clusters. We apply the FILFINDER algorithm to the integrated intensity maps
of the 13CO (3-2) line to identify filaments in the G333 complex, and extract
the velocity and intensity along the filament skeleton from moment maps. Clear
velocity and density fluctuations are seen along the filaments, allowing us to
fit velocity gradients around the intensity peaks. The velocity gradients
fitted to the LAsMA data and ALMA data agree with each other over the scales
covered by ALMA observations in the ATOMS survey. Changes of velocity gradient
with scale indicate a ''funnel'' structure of the velocity field in PPV space,
indicative of a smooth, continuously increasing velocity gradient from large to
small scales, and thus consistent with gravitational acceleration. The typical
velocity gradient corresponding to a 1 pc scale is ~1.6km/s/pc. Assuming
free-fall, we estimate a kinematic mass within 1 pc of ~1190 M$_\odot$, which
is consistent with typical masses of clumps in the ATLASGAL survey. We find
direct evidence for gravitational acceleration from comparison of the observed
accelerations to those predicted by free-fall onto dense hubs. On large scales,
we find that the inflow may be driven by the larger scale structure, consistent
with hierarchical structure in the molecular cloud and gas inflow from large to
small scales. The hub-filament structures at different scales may be organized
into a hierarchical system extending up to the largest scales probed, through
the coupling of gravitational centers at different scales. We argue that the
''funnel'' structure in PPV space can be an effective probe for the
gravitational collapse motions in molecular clouds. The large scale gas inflow
is driven by gravity, implying that the molecular clouds in G333 complex may be
in the state of global gravitational collapse. | J. W. Zhou, F. Wyrowski, S. Neupane, J. S. Urquhart, N. J. Evans II, E. Vázquez-Semadeni, K. M. Menten, Y. Gong, T. Liu | 2023-05-21T21:35:46Z | http://arxiv.org/abs/2305.12573v1 | High-resolution APEX/LAsMA \({}^{12}\)CO and \({}^{13}\)CO (3-2) observation of the G333 giant molecular cloud complex : I. Evidence for gravitational acceleration in hub-filament systems
###### Abstract
Context:Hub-filament systems are suggested to be the birth cradles of high-mass stars and clusters.
Aims:We investigate the gas kinematics of hub-filament structures in the G333 giant molecular cloud complex using \({}^{13}\)CO (3\(-\)2) observed with the APEX/LAsMA heterodyne camera.
Methods:We apply the FILFINDER algorithm to the integrated intensity maps of the \({}^{13}\)CO \(J\)=3\(-\)2 line to identify filaments in the G333 complex, and we extract the velocity and intensity along the filament skeleton from moment maps. Clear velocity and density fluctuations are seen along the filaments, allowing us to fit velocity gradients around the intensity peaks.
Results:The velocity gradients fitted to the LAsMA data and ALMA data agree with each other over the scales covered by ALMA observations in the ATOMS survey (\(<\) 5 pc). Changes of velocity gradient with scale indicate a "funnel" structure of the velocity field in PPV space, indicative of a smooth, continuously increasing velocity gradient from large to small scales, and thus consistent with gravitational acceleration. The typical velocity gradient corresponding to a 1 pc scale is \(\sim\) 1.6 km s\({}^{-1}\) pc\({}^{-1}\). Assuming free-fall, we estimate a kinematic mass within 1 pc of \(\sim\) 1190 M\({}_{\odot}\), which is consistent with typical masses of clumps in the ATLASGAL survey of massive clumps in the inner Galaxy. We find direct evidence for gravitational acceleration from comparison of the observed accelerations to those predicted by free-fall onto dense hubs with masses from millimeter continuum observations. On large scales, we find that the inflow may be driven by the larger scale structure, consistent with hierarchical structure in the molecular cloud and gas inflow from large to small scales. The hub-filament structures at different scales may be organized into a hierarchical system extending up to the largest scales probed, through the coupling of gravitational centers at different scales.
Conclusions:We argue that the "funnel" structure in PPV space can be an effective probe for the gravitational collapse motions in molecular clouds. The large scale gas inflow is driven by gravity, implying that the molecular clouds in G333 complex may be in the state of global gravitational collapse.
## 1 Introduction
To understand the formation of hierarchical structures in high-mass star formation regions, it is critical to measure the dynamical coupling between density enhancements in giant molecular clouds and gas motion of their local environment (McKee & Ostriker, 2007; Motte et al., 2018; Henshaw et al., 2020). Such studies may distinguish between competing concepts for high-mass star formation, such as monolithic collapse of turbulent cores in virial equilibrium (McKee & Tan, 2003; Krumholz et al., 2007), competitive accretion in a protocluster environment through Bondi-Hoyle accretion (Bonnell et al., 1997, 2001), turbulence-driven inertial-inflow (Padoan et al., 2020) and gravity-dominated global hierarchical collapse (Vazquez-Semadeni et al., 2009; Ballesteros-Paredes et al., 2011; Hartmann et al., 2012; Vazquez-Semadeni et al., 2017, 2019), as discussed in Zhou et al. (2022). High-resolution observations show that density enhancements are organized in filamentary gas networks, especially in hub-filament systems. In such systems, converging flows are funneling matter into the hub through the filaments. Many case studies have suggested that hub-filament systems are the birth cradles of high-mass stars and clusters (Peretto et al., 2013; Henshaw et al., 2014; Zhang et al., 2015; Liu et al., 2016; Yuan et al., 2018; Lu et al., 2018; Issac et al., 2019; Dewangan et al., 2020; Liu et al., 2022; Zhou et al., 2022). Numerical simulations of colliding flows and collapsing clumps reveal velocity gradients along the dense filamentary streams converging toward the hubs (Wang et al., 2010; Gomez & Vazquez-Semadeni, 2014; Smith et al., 2016; Padoan et al., 2020). In observations, velocity gradients along filaments are often interpreted as evidence for
gas inflow along filaments (Kirk et al., 2013; Liu et al., 2016; Yuan et al., 2018; Williams et al., 2018; Chen et al., 2019, 2020; Pillai et al., 2020; Zhou et al., 2022).
Zhou et al. (2022) studied the physical properties and evolution of hub-filament systems in a large sample of proto-clusters that were observed in the ATOMS (ALMA Three-millimeter Observations of Massive Star-forming regions) survey (Liu et al., 2020). They found that hub-filament structures can exist not only in small-scale (\(\sim\)0.1 pc) dense cores but also in large-scale clumps/clouds (\(\sim\)1-10 pc), suggesting that multi-scale hub-filament systems at various scales are common in regions forming massive stellar clusters in various Galactic environments. The filaments in clumps observed in the ATOMS program show clear velocity gradients. The approximately symmetric distribution of positive and negative velocity gradients strongly indicates the existence of converging gas inflows along filaments. The observations confirm that high-mass stars in protoclusters may accumulate most of their mass through longitudinal inflow along filaments. Velocity and density fluctuations are discussed in detail in Henshaw et al. (2020), who detected ubiquitous velocity fluctuations across all spatial scales and galactic environments and discovered oscillatory gas flows with wavelengths ranging from 0.3-400 pc that are coupled to regularly spaced density enhancements, probably formed via gravitational instabilities (Henshaw et al., 2016; Elmegreen et al., 2018). Furthermore, the locations of some of these density enhancements spatially correlate with velocity gradient extrema, indicative of either convergent motion or collapse-induced rotation (Clarke et al., 2016; Misugi et al., 2019).
In Zhou et al. (2022), we measured the velocity gradients around the intensity peaks for all observed cores. The statistical analysis found that velocity gradients are very small at scales larger than \(\sim\)1 pc, probably suggesting the dominance of pressure-driven inertial inflow, which can originate either from large-scale turbulence or from cloud-scale gravitational contraction. Below \(\sim\)1 pc, velocity gradients dramatically increase as filament lengths decrease, indicating that the hub's or core's gravity dominates gas infall on small scales. Due to the FOV limitation of ALMA observation, our previous work was restricted to scales up to about 5 pc. In this paper, following a similar method as described in Zhou et al. (2022), we can generalize the results from the clump-core scale to cloud-clump scale.
Fig.1(b) displays an overview three-color map of the observed field by combining ATLASGAL+Planck 870 \(\mu\)m and GLIMPSE 8.0 and 4.5 \(\mu\)m emission. Fig.1(a) shows the distribution of newly observed \({}^{13}\)CO (3\(-\)2) emission. The interesting individual sub-regions are marked and highlighted below. The main structures of the observed field are the G331 and G333 giant molecular clouds (GMCs), and the G332 ring structure between them.
The G331 GMC is one of the most massive molecular clouds in the Southern Galaxy, in the tangent region of the Norma spiral arm (Bronfman et al., 1989). Using the C\({}^{18}\)O (1-0) integrated emission, Merello et al. (2013) defined the central region of the G331 GMC at \(l=331.523^{\circ}\), \(b=-0.099^{\circ}\), with a distance of 7.5 kpc. It may harbor one of the most extended and luminous regions of massive star formation in the Galactic disk. Caswell and Haynes (1987) determined that the line central velocity of the ionized gas is \(\sim-89\) km s\({}^{-1}\) from observations of the H109\(\alpha\) and H110\(\alpha\) hydrogen recombination lines, which is similar to the peak velocity of the molecular gas in the GMC. The detection of OH and methanol maser emission further provides evidence of active star formation in the G331 GMC (Goss et al., 1970; Caswell et al., 1980; Caswell, 1998; Pestalozzi et al., 2005).
The G333 GMC, centered at \(l\sim 333.2^{\circ}\), \(b\sim-0.4^{\circ}\), is a 1.2\({}^{\circ}\times\) 0.6\({}^{\circ}\) region in the fourth quadrant of the Galaxy at a distance of 3.6 kpc (Lockman, 1979; Bains et al., 2006). Its gas emission takes the form of a string of knots. As a part of the Galactic Ring of molecular clouds at a Galactocentric radius of 3-5 kpc, the G333 molecular cloud complex contains a diverse sample of molecular regions as well as a range of high mass star forming clouds, bright HII regions and infrared point sources, all surrounded by diffuse atomic and molecular gas (Fujiyoshi et al., 2006; Cunningham et al., 2008; Wong et al., 2008; Lo et al., 2009; Jordan et al., 2013; Lowe et al., 2014). OH, H\({}_{2}\)O and CH\({}_{3}\)OH maser lines as tracers of high-mass star formation have been detected from various sources in the region (Caswell et al., 1980, 1995; Breen et al., 2007; Caswell et al., 2011; Breen et al., 2012). Towards the G333 GMC, Bains et al. (2006) used the Mopra 22-m radio telescope to identify five distinct velocity features at \(-105\), \(-90\), \(-70\), \(-50\), and \(-10\) km s\({}^{-1}\) with \({}^{13}\)CO (1\(-\)0) maps. They also found at least three velocity components (\(-55\), \(-50\), and \(-42\) km s\({}^{-1}\)) between \(-65\) and \(-35\) km s\({}^{-1}\), with the brightest feature at \(-50\) km s\({}^{-1}\). Several spiral arms intersect at \(l\sim 333^{\circ}\) along the line-of-sight, i.e., the Sagittarius-Carina arm, the Scutum-Crux arm and the Norma-Cygnus arm (Russeil, 2003; Russeil et al., 2005; Vallee, 2008). This region may therefore be suitable to study the formation of molecular clouds under galaxy dynamics.
Between the G333 and G331 GMCs, there is a cavity at \(l\sim 332^{\circ}\). The upper part presents a ring structure within Galactic coordinates \(332.0^{\circ}<l<332.8^{\circ}\) and \(-0.3^{\circ}<b<0.4^{\circ}\). The principal emission of this ring structure is concentrated between the velocity range \(-55<v_{\rm LSR}<-44\) km s\({}^{-1}\)(Romano et al., 2019) and the average spectral profiles of the CO lines and CI show a peak at \(v_{\rm LSR}=-50\) km s\({}^{-1}\). The distance of the ring is determined to be \(\sim\)3.7 kpc from the Sun (Romano et al., 2019), thus the ring and G333 GMC are located in the same region, referred to as the G333 complex in this work.
## 2 Observation
### APEX/LAsMA data
We mapped a \(3.4^{\circ}\times 1.2^{\circ}\) area centered on \((l,b)=(332.33^{\circ},-0.29^{\circ})\). The observations were conducted between March and August of 2022 using the APEX telescope (Gusten et al., 2006)1. The 7 pixel Large APEX sub-Millimeter Array (LAsMA) receiver was used to observe the \(J=3-2\) transitions of \({}^{12}\)CO (\(\nu_{\rm rest}\sim 345.796\) GHz) and \({}^{13}\)CO (\(\nu_{\rm rest}\sim 330.588\) GHz) simultaneously in the upper and lower sideband, respectively. More details about the receiver are given in Mazumdar et al. (2021). The local oscillator frequency was set at 338.190 GHz in order to avoid contamination of the \({}^{13}\)CO (3\(-\)2) lines due to bright \({}^{12}\)CO (3\(-\)2) emission from the image band. The whole observed region was divided into two parts: the first from \(l\sim 332^{\circ}\) to \(l\sim 334^{\circ}\), and the second from \(l\sim 330.6^{\circ}\) to \(l\sim 332^{\circ}\). Each part was further divided into sub-maps of size \(10^{\prime}\times 10^{\prime}\). Observations were performed in a position switching on-the-fly (OTF) mode. Although the reference positions were carefully chosen, the reference position of the first part still has emission present in the velocity range of \(-140\) to 10 km s\({}^{-1}\) (see Appendix B for details about how this issue was solved).
The data were calibrated using a three-load chopper wheel method, which is an extension of the "standard" method used for millimeter observations (Ulich & Haas 1976), to calibrate the data to the antenna temperature \(T_{A}^{*}\) scale. The data were reduced using the GILDAS package2. A velocity range of \(-190\) to \(60\) km s\({}^{-1}\) was extracted for each spectrum and resampled to an adequate velocity resolution of 0.25 km s\({}^{-1}\) to reduce the noise. The velocity range \(-140\) to 10 km s\({}^{-1}\) was masked before fitting a first-order baseline to each spectrum. The reduced, calibrated data obtained from the different scans were then combined and gridded using a 6\({}^{\prime\prime}\) cell size. The gridding process includes a convolution with a Gaussian kernel with a FWHM size of one-third of the telescope FWHM beam width. The data cubes obtained have a final angular resolution of 19.5\({}^{\prime\prime}\), comparable with the resolution of the ATLASGAL survey.
Footnote 2: [http://www.iram.fr/IRAMFR/GILDAS](http://www.iram.fr/IRAMFR/GILDAS)
After the online calibration, the intensities are obtained on the \(T_{A}^{*}\) (corrected antenna temperature) scale. Apart from the atmospheric attenuation, this also corrects for rear spillover, blockage, scattering and ohmic losses. A beam efficiency value \(\eta_{mb}=0.71\) (Mazumdar et al. 2021) was used to convert intensities from \(T_{A}^{*}\) to the main beam brightness temperature \(T_{mb}\). The final noise levels of the \({}^{12}\)CO (3-2) and \({}^{13}\)CO (3-2) data cubes are \(\sim\)0.32 K and \(\sim\)0.46 K, respectively. In the rest of the paper, we focus on the analysis of the \({}^{13}\)CO (3-2) emission. The information from both lines will be combined in a forthcoming paper.
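For illustration, the short sketch below reproduces the per-spectrum steps described above on a placeholder spectrum: masking the \(-140\) to 10 km s\({}^{-1}\) range before fitting a first-order baseline, subtracting it, and converting from \(T_{A}^{*}\) to \(T_{mb}\) with \(\eta_{mb}=0.71\). The array names, noise level and synthetic spectrum are placeholders, not the actual pipeline.

```python
import numpy as np

eta_mb = 0.71
vel = np.arange(-190.0, 60.0, 0.25)                    # velocity axis [km/s], 0.25 km/s channels
spectrum_ta = np.random.normal(0.0, 0.46, vel.size)    # placeholder T_A* spectrum [K]

# Mask the velocity range containing emission before fitting the baseline
emission = (vel > -140.0) & (vel < 10.0)
coeff = np.polyfit(vel[~emission], spectrum_ta[~emission], deg=1)
baseline = np.polyval(coeff, vel)

# Baseline-subtracted spectrum on the main-beam brightness temperature scale [K]
spectrum_tmb = (spectrum_ta - baseline) / eta_mb
```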
### Archival continuum emission data
The observed region was covered in the infrared by the GLIMPSE survey (Benjamin et al. 2003). The images of the Spitzer Infrared Array Camera (IRAC) at 4.5 and 8.0 \(\mu\)m were
Figure 1: Overview of the entire observed field. The enlarged maps in the first row display several typical hub-filament structures. (a) Background is the integrated intensity map of \({}^{13}\)CO (3-2) for the G333 complex, boxes show several divided sub-regions. Letters mark the sub-structures studied in this work. Red contours show the peak emission of \({}^{13}\)CO (3-2); (b) An overview three-color map of the observed field by combining ATLASGAL+Planck 870 \(\mu\)m (red) and GLIMPSE 8.0 (green) and 4.5 \(\mu\)m (blue) emission, including G333 complex and G331 GMC.
retrieved from the Spitzer Archive. The angular resolutions of the images in the IRAC bands are \(\sim 2^{\prime\prime}\). We also used ATLASGAL+Planck 870 \(\mu\)m data (ATLASGAL combined with Planck data), which are sensitive to a wide range of spatial scales at a resolution of \(\sim 21^{\prime\prime}\)(Csengeri et al., 2016).
## 3 Results
### Velocity components
The average emission spectrum of \({}^{13}\)CO (3\(-\)2) from the entire observed field is shown in Fig. 2, together with the Gaussian curves used to fit it. The fitting is carried out using the GAUSSPY+ software (Riener et al., 2019), which is introduced in detail below. We have divided the spectrum into three velocity components, [\(-120\), \(-76\)] km s\({}^{-1}\), [\(-76\), \(-60\)] km s\({}^{-1}\) and [\(-60\), \(-35\)] km s\({}^{-1}\), in which emission is prominent, marked as peak1, peak2 and peak3. From Fig. 3, we can see that the velocity components peak1 and peak2 represent the two main parts of the G331 GMC, called G331 GMC-blue and G331 GMC-red. peak3 shows a very extended structure over the observed region, with the G333 GMC and the G332 ring as the main parts. There are four main ATLASGAL clump clusters, marked by A, B, C and D in Fig.3(a) (Urquhart et al., 2018, 2022); they correspond well to the different velocity components shown in the moment-1 maps of \({}^{13}\)CO (3\(-\)2) (Fig.3(b)). peak1 and peak2 are associated with the clump clusters C and D. Both the A and B clump clusters are located in the velocity component peak3. peak1 is relatively independent of peak2 and peak3, but the dominant features in peak2 and peak3 appear to have some overlap, yet they can be separated. Hence, we fit Gaussian functions in order to minimise the effects of cross-contamination. peak2 and peak3 can be distinguished by their full widths at half maximum (FWHM). In this paper, we mainly focus on peak3, with the velocity range [\(-60\), \(-35\)] km s\({}^{-1}\), which represents a complete shell structure as shown in Fig. 3(e). The association between peak1, peak2 and peak3 will be discussed in detail in a forthcoming paper.
In order to study the structure of the molecular cloud complex in position-position-velocity (PPV) space, we applied the fully automated Gaussian decomposer GAUSSPY+ (Lindner et al., 2015; Riener et al., 2019), which can decompose the complex spectra of molecular lines into multiple Gaussian components. The four most important parameters in the algorithm are the first and second smoothing parameters \(\alpha_{1}\) and \(\alpha_{2}\), the signal-to-noise ratio (SNR) and the significance criterion. To determine the smoothing parameters \(\alpha_{1}\) and \(\alpha_{2}\), sets of 500 randomly selected spectra from the \({}^{13}\)CO (3\(-\)2) cube were used to train GAUSSPY+. The default values of SNR and significance are 3 and 5, but considering that the moment-1 map needs a 5\(\sigma\) threshold to show a clear velocity field, we adjusted the value of SNR to 5. Other parameters for the decomposition are set to the default values provided by Riener et al. (2019).
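GAUSSPY+ automates the choice of the number of Gaussian components; as a simplified stand-in (not the GAUSSPY+ algorithm itself), the sketch below fits a fixed set of Gaussians to a single spectrum with scipy and keeps only components whose amplitudes exceed the adopted SNR threshold of 5. The initial guesses and noise level are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_gauss(v, *params):
    """Sum of Gaussians; params = (amp1, v1, fwhm1, amp2, v2, fwhm2, ...)."""
    model = np.zeros_like(v)
    for amp, v0, fwhm in zip(params[0::3], params[1::3], params[2::3]):
        sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        model += amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)
    return model

def decompose(vel, spec, guesses, rms, snr=5.0):
    """Fit len(guesses)//3 Gaussians and keep components above snr * rms."""
    popt, _ = curve_fit(multi_gauss, vel, spec, p0=guesses)
    comps = np.array(popt).reshape(-1, 3)
    return comps[comps[:, 0] > snr * rms]

# Example with placeholder guesses for the three main velocity components:
# guesses = [5.0, -90.0, 10.0,  4.0, -68.0, 8.0,  6.0, -50.0, 12.0]
# components = decompose(vel, spec, guesses, rms=0.46)
```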
As shown in Fig. 4, the velocity components separated via the average spectrum are consistent with the decomposition of GAUSSPY+, with the three main velocity components well separated in PPV space. Different from peak1 and peak2, peak3 shows a large-scale continuous structure in the projections along both Galactic longitude and latitude, indicating that the structures in peak3 are physically connected. In particular, the overall structure of peak3 consists of a large-scale shell extending from \(l=330.6^{\circ}\) to \(l=334^{\circ}\). It may be generated by a large-scale compression, such as the shock from supernova explosion events, which will be investigated in a forthcoming paper.
The entire G333 complex seems to have fragmented into several independent molecular clouds; they are treated as the sub-regions marked in Fig. 1, such as s3. Sub-structures in each sub-region, marked by letters in Fig. 1(a), are divided by the bright emission of \({}^{13}\)CO (3-2) shown in red contours, such as s3a. From Fig. 1(b) and Fig. 3(a), we can see that the dense parts of the \({}^{13}\)CO (3\(-\)2) emission correlate tightly with the infrared bright regions. The overlap of different velocity components in sub-regions 5 and 7 is significant, as shown in Fig.3(a). However, these two sub-regions only include a few structures in the velocity range [\(-60\), \(-35\)] km s\({}^{-1}\), thus they are subordinate in the G333 complex.
Through the above analysis, we see that the G331 GMC and the G333 complex overlap on the sky, but they have different velocities and lie at very different distances. Given that the G333 region is closer to us, we focus the remainder of the paper on it.
### Identification of filaments
The first step is to identify and characterize filaments in the G333 complex. Following the method described in Zhou et al. (2022), we use the Moment 0 maps (the integrated intensity maps) of \({}^{13}\)CO (3\(-\)2) to identify filaments in the G333 complex with the FILFINDER algorithm (Koch and Rosolowsky, 2015). The velocity intervals for making the Moment 0 maps are determined from the averaged spectra of each sub-region, marked by vertical dotted lines in Fig. 11. Moreover, a threshold of 5\(\sigma\) is applied in making the moment maps, which reduces the noise contamination effectively. The skeletons of the identified filaments overlaid on the moment-0 maps of the \({}^{13}\)CO (3-2) line emission are shown in Fig. 5. They are highly consistent with the gas structures traced by \({}^{13}\)CO (3-2) as seen by eye, indicating that the structures identified by FILFINDER are reliable. Then, for each sub-region, we break the filamentary network into several filaments. This step is necessary to calculate the offset along each filament before fitting the velocity gradients. To show the velocity field along the filaments more clearly, we try to make each filament long enough.
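The sketch below outlines this workflow with the spectral-cube and FilFinder packages; the file name, velocity interval, noise level and FilFinder parameters are placeholders that would need tuning to reproduce the skeletons shown in Fig. 5.

```python
import astropy.units as u
from spectral_cube import SpectralCube
from fil_finder import FilFinder2D

cube = SpectralCube.read('G333_13CO32.fits')                    # placeholder file name
sub = cube.spectral_slab(-60 * u.km / u.s, -35 * u.km / u.s)    # peak3 velocity range
mom0 = sub.with_mask(sub > 5 * 0.46 * u.K).moment(order=0)      # 5-sigma threshold

fil = FilFinder2D(mom0.hdu, distance=3.6 * u.kpc)
fil.preprocess_image(flatten_percent=95)                        # illustrative parameters
fil.create_mask(adapt_thresh=0.1 * u.pc, size_thresh=500 * u.pix ** 2)
fil.medskel(verbose=False)
fil.analyze_skeletons(skel_thresh=10 * u.pix, branch_thresh=5 * u.pix,
                      prune_criteria='length')
skeleton = fil.skeleton                                         # 2D array of filament spines
```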
Considering that each sub-region includes many sub-structures, it is not surprising that their average spectra show multiple velocity peaks. In Fig. 11, the average spectrum of G333-s3 clearly shows multiple components. We separate out each peak in the line profile, and find that each peak indeed corresponds to a sub-structure. This situation may also exist in other sub-regions. However, these sub-structures are generally well separated spatially and thus do not have a significant impact on the moment-map analysis.
Fig. 6 displays the GAUSSPY+ decomposition of each sub-region in PPV space. There, we can see that the main structures of each sub-region are connected in PPV space and are thus unlikely to be contaminated by velocity components from unrelated foreground or background cloud emission.
Figure 2: Average spectra of \({}^{13}\)CO (3\(-\)2) for the entire region, red dashed profiles represent the components of multi-Gaussian fitting.
Figure 3: ATLASGAL clump distribution and velocity components of the entire observed field. (a) Background is the integrated intensity map of \({}^{13}\)CO (3\(-\)2); color-coded '+' symbols mark the different ATLASGAL clump clusters A (green), B (orange), C (red) and D (magenta); (b) The moment-1 map of \({}^{13}\)CO (3\(-\)2) for the entire region in the overall velocity range; (c), (d) and (e) The moment-1 maps of \({}^{13}\)CO (3\(-\)2) for the three peaks marked in Fig. 2.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline Clump & log(M) & Radius & T\({}_{\rm dust}\) & log(Lum) & v\({}_{\rm LSR}\) & \(\Delta\)V\({}_{1}\) & L\({}_{1}\) & \(\Delta\)V\({}_{2}\) & L\({}_{2}\) \\ & (M\({}_{\odot}\)) & (pc) & (K) & (L\({}_{\odot}\)) & (km s\({}^{-1}\)) & (km s\({}^{-1}\) pc\({}^{-1}\)) & (pc) & (km s\({}^{-1}\) pc\({}^{-1}\)) & (pc) \\ \hline AGAL332.094\(-\)00.421 & 3.155 & 0.865 & 28.4 & 4.789 & \(-\)56.5 & \(-\)2.97 & 1.4 & 1.9 & 1.1 \\ AGAL332.312\(-\)00.556 & 2.947 & 0.831 & 21.4 & 3.835 & \(-\)51.9 & 0.9 & 1.9 & \(-\)1.76 & 0.6 \\ AGAL332.351\(-\)00.436 & 3.034 & 0.935 & 18.3 & 3.483 & \(-\)47.6 & & & \(-\)0.58 & 6.0 \\ AGAL332.411\(-\)00.471 & 2.618 & 0.502 & 19.3 & 2.694 & \(-\)55.8 & 0.3 & 6.0 & \(-\)0.49 & 5.0 \\ AGAL332.467\(-\)00.522 & 3.162 & 0.848 & 19.8 & 3.864 & \(-\)52.6 & 0.2 & 3.0 & \(-\)2.25 & 0.8 \\ AGAL332.606\(-\)00.854 & 2.569 & 0.537 & 19.8 & 3.088 & \(-\)56.2 & \(-\)0.53 & 2.0 & 2.0 & 1.0 \\ AGAL332.647\(-\)00.609 & 3.549 & 1.35 & 28.3 & 5.195 & \(-\)49.2 & \(-\)2.7 & 2.0 & 3.7 & 1.0 \\ AGAL332.694\(-\)00.612 & 3.184 & 0.813 & 26.6 & 4.466 & \(-\)47.6 & \(-\)7.84 & 1.0 & 9.7 & 1.0 \\ AGAL332.694\(-\)00.612 & 3.184 & 0.813 & 26.6 & 4.466 & \(-\)47.6 & 3.3 & 0.8 & \(-\)4.59 & 1.0 \\ AGAL332.726\(-\)00.621 & 2.407 & 0.294 & 24.8 & 3.456 & \(-\)49.7 & \(-\)3.2 & 0.7 & 1.5 & 0.8 \\ AGAL332.751\(-\)00.597 & 2.964 & 1.038 & 24.7 & 3.937 & \(-\)53.1 & & & 0.8 & 1.8 \\ AGAL332.762\(-\)00.641 & 2.32 & 0.623 & 22.4 & 3.313 & \(-\)50.4 & \(-\)4.14 & 0.6 & 0.9 & 1.7 \\ AGAL332.826\(-\)00.549 & 3.724 & 1.367 & 31.4 & 5.543 & \(-\)57.3 & \(-\)0.58 & 6.7 & 2.7 & 1.0 \\ AGAL332.826\(-\)00.549 & 3.724 & 1.367 & 31.4 & 5.543 & \(-\)57.3 & \(-\)2.65 & 2.5 & 3.1 & 1.0 \\ AGAL332.8826\(-\)00.549 & 3.724 & 1.367 & 31.4 & 5.543 & \(-\)57.3 & & & \(-\)0.86 & 8.5 \\ AGAL332.892\(-\)00.569 & 2.595 & 0.485 & 19.7 & 3.19 & \(-\)57.3 & \(-\)1.49 & 1.5 & 0.9 & 2.0 \\ AGAL332.866\(-\)00.587 & 2.364 & 0.169 & 23.3 & 3.113 & \(-\)58.1 & & & \(-\)1.59 & 1.0 \\ AGAL332.962\(-\)00.679 & 3.305 & 0.779 & 22.5 & 4.225 & \(-\)48.5 & 2.0 & 3.0 & \(-\)1.53 & 5.5 \\ AGAL332.969\(-\)00.737 & 2.785 & 0.64 & 13.5 & 2.525 & \(-\)55.6 & \(-\)3.7 & 1.0 & 3.9 & 1.0 \\ AGAL332.995\(-\)00.519 & 2.511 & 0.467 & 18.2 & 2.593 & \(-\)52.8 & & & 3.3 & 1.0 \\ AGAL333.001\(-\)00.436 & 2.848 & 0.865 & 35.7 & 4.989 & \(-\)55.4 & \(-\)3.4 & 0.7 & 1.0 & 2.5 \\ AGAL333.013\(-\)00.466 & 2.821 & 0.606 & 34.5 & 4.968 & \(-\)53.3 & \(-\)1.74 & 1.2 & 3.4 & 0.6 \\ AGAL333.014\(-\)00.521 & 2.986 & 1.056 & 22.6 & 4.152 & \(-\)53.2 & \(-\)2.58 & 1.0 & 1.3 & 2.0 \\ AGAL33.0534\(-\)00.029 & 3.017 & 1.056 & 29.2 & 4.398 & \(-\)45.3 & 1.1 & 1.4 & \(-\)0.42 & 2.1 \\ AGAL333.071\(-\)00.399 & 3.281 & 0.969 & 19.1 & 3.744 & \(-\)53.3 & & & 0.9 & 1.2 \\ AGAL33.089\(-\)00.352 & 2.347 & 0.692 & 28.1 & 3.95 & \(-\)52.7 & & & 0.6 & 2.5 \\ AGAL333.094\(-\)00.524 & 2.728 & 0.623 & 26.5 & 3.966 & \(-\)57.6 & \(-\)1.64 & 1.5 & 0.8 & 3.5 \\ AGAL333.103+00.087 & 2.071 & 0.169 & 17 & 2.119 & \(-\)44.9 & 0.3 & 2.0 & \(-\)0.26 & 3.0 \\ AGAL33.103\(-\)00.502 & 3.251 & 1.367 & 25.5 & 4.601 & \(-\)55.8 & \(-\)0.81 & 6.0 & 1.3 & 1.0 \\ AGAL333.129\(-\)00.559 & 3.843 & 1.402 & 16.1 & 3.983 & \(-\)56.4 & \(-\)0.8 & 4.0 & 2.4 & 2.0 \\ AGAL333.134\(-\)00.431 & 3.959 & 1.506 & 32 & 5.823 & \(-\)51.9 & \(-\)3.3 & 1.3 & 2.0 & 1.7 \\ AGAL333.169\(-\)00.431 & 2.959 & 0.831 & 23 & 4.052 & \(-\)50.8 & \(-\)0.63 & 2.2 & 2.0 & 1.3 \\ AGAL333.248+00.054 & 3.27 & 1.454 & 26.3 & 4.706 & \(-\)46.9 & & & \(-\)0.81 & 5.5 \\ AGAL33.264\(-\)00.291 & 2.229 & 0.485 
& 27.4 & 3.951 & \(-\)52.2 & & & \(-\)3.4 & 0.8 \\ AGAL333.284\(-\)00.387 & 3.678 & 1.211 & 32.1 & 5.435 & \(-\)51.6 & & & 0.7 & 3.0 \\ AGAL333.284\(-\)00.387 & 3.678 & 1.211 & 32.1 & 5.435 & \(-\)51.6 & \(-\)1.66 & 1.8 & 2.5 & 1.0 \\ AGAL333.308\(-\)00.
Moreover, after we extract the velocity and intensity information along the filament skeletons, any overlap of unrelated velocity components should produce anomalies in Fig. 7, such as a sudden break at a certain position of the skeleton. However, that is not the case in our analysis. Furthermore, we only analyze the variations of velocity and intensity along the filament skeletons with a
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline Clump & log(Mass) & Radius & T\({}_{\rm dust}\) & log(Lum) & v\({}_{\rm LSR}\) & \(\Delta\)v\({}_{1}\) & L\({}_{1}\) & \(\Delta\)v\({}_{2}\) & L\({}_{2}\) \\ & (M\({}_{\odot}\)) & (pc) & (K) & (L\({}_{\odot}\)) & (km s\({}^{-1}\)) & (km s\({}^{-1}\) pc\({}^{-1}\)) & (pc) & (km s\({}^{-1}\) pc\({}^{-1}\)) & (pc) \\ \hline AGAL332.166+00.126 & 2.515 & 0.541 & 14.4 & 2.316 & \(-\)49.9 & \(-\)0.65 & 2.0 & 0.7 & 3.5 \\ AGAL332.226\(-\)00.124 & 2.892 & 0.556 & 15.4 & 2.579 & \(-\)50.8 & \(-\)2.63 & 1.0 & 1.6 & 2.0 \\ AGAL332.226\(-\)00.124 & 2.892 & 0.556 & 15.4 & 2.579 & \(-\)50.8 & \(-\)1.57 & 1.5 & 2.9 & 0.7 \\ AGAL332.241\(-\)00.044 & 3.226 & 0.917 & 17 & 3.526 & \(-\)48.1 & \(-\)1.45 & 1.0 & 3.0 & 1.0 \\ AGAL332.254\(-\)00.056 & 2.862 & 0.421 & 14.3 & 2.675 & \(-\)47.7 & & & 0.8 & 1.7 \\ AGAL332.296\(-\)00.094 & 3.15 & 0.932 & 24 & 4.303 & \(-\)48.2 & 0.6 & 3.0 & \(-\)0.36 & 2.5 \\ AGAL332.317\(+\)00.177 & 2.576 & 0.406 & 13.6 & 2.223 & \(-\)48.6 & & & 2.0 & 0.5 \\ AGAL332.352\(-\)00.116 & 2.513 & 0.601 & 27.9 & 3.887 & \(-\)49.8 & \(-\)0.39 & 5.0 & 0.5 & 2.0 \\ AGAL332.442\(-\)00.139 & 2.723 & 0.421 & 18.3 & 2.669 & \(-\)51.3 & & & 0.7 & 2.0 \\ AGAL332.469\(-\)00.131 & 2.643 & 0.556 & 20.3 & 3.088 & \(-\)50.5 & 1.8 & 1.2 & \(-\)1.3 & 1.2 \\ AGAL332.544\(-\)00.124 & 2.857 & 1.037 & 32.5 & 4.591 & \(-\)47.5 & & & \(-\)1.92 & 2.4 \\ AGAL332.604\(-\)00.167 & 3.185 & 1.052 & 17.7 & 3.371 & \(-\)46.2 & & & \(-\)0.89 & 1.6 \\ AGAL332.141\(-\)00.466 & 2.66 & 0.571 & 24.2 & 3.495 & \(-\)57 & & & & \\ AGAL332.144\(-\)00.469 & 2.711 & 0.883 & 24.5 & 4.203 & \(-\)58 & & & 3.0 & 0.8 \\ AGAL332.141\(-\)00.446 & 2.883 & 0.71 & 27.7 & 4.76 & \(-\)56.3 & & & & \\ AGAL332.147\(-\)00.439 & 2.371 & 0.169 & 26.9 & 3.974 & \(-\)57.9 & & & & \\ AGAL332.156\(-\)00.449 & 3.239 & 1.004 & 30 & 5.078 & \(-\)55.6 & & & 3.0 & 0.9 \\ AGAL332.252\(-\)00.539 & 2.821 & 0.45 & 17.8 & 2.608 & \(-\)52.9 & & & & \\ AGAL332.281\(-\)00.547 & 3.133 & 0.71 & 16.9 & 3.299 & \(-\)52.2 & \(-\)1.73 & 1.0 & 2.2 & 1.0 \\ AGAL333.521\(-\)00.241 & 2.713 & 0.415 & 20 & 2.93 & \(-\)48.8 & & & & \\ AGAL333.524\(-\)00.269 & 3.488 & 1.54 & 22.5 & 4.252 & \(-\)48.8 & 2.3 & 1.0 & \(-\)1.36 & 1.0 \\ AGAL332.442\(-\)00.139 & 2.723 & 0.421 & 18.3 & 2.669 & \(-\)51.3 & & & & \\ AGAL332.469\(-\)00.131 & 2.643 & 0.556 & 20.3 & 3.088 & \(-\)50.5 & \(-\)0.74 & 2.0 & 0.7 & 4.5 \\ AGAL332.751\(-\)00.597 & 2.964 & 1.038 & 24.7 & 3.937 & \(-\)53.1 & & & & \\ AGAL332.774\(-\)00.584 & 3.103 & 1.038 & 23 & 4.183 & \(-\)55.2 & \(-\)1.74 & 3.3 & 5.6 & 1.0 \\ \hline \end{tabular}
\end{table}
Table 2: Remaining part of Table 1. Usually one clump corresponds to one hub. Here 2 or 3 clumps separated by the gap indicate that the 2 or 3 clumps together form a hub.
Figure 4: The velocity distribution of the entire observed field in PPV space decomposed by GAUSSPY+ along the Galactic latitude and longitude.
one-pixel width, which can also effectively avoid potential overlap. In Zhou et al. (2022), we found that multiple velocity components are common in hub-filament systems, especially in hub regions. Furthermore, the samples observed in Zhou et al. (2022) with or without multiple velocity components show similar results, such as the velocity gradients estimated from moment maps.
As a conservative check, we can assume that most of the pixels along the filaments have a single velocity component or a dominant one, and then compare the fitting results with the free-fall model and with previous results from ALMA data; their consistency indicates that the assumption is reasonable, and that is the case in our work. Conversely, if the overlap of velocity components were significant, the fitted velocity gradients should show no regularities of the kind found in our work, because the overlap of unrelated velocity components is random. From Fig. 7, we can see good correlations between velocity and density fluctuations, which is consistent with previous studies (Hacar & Tafalla 2011; Henshaw et al. 2014, 2016; Liu et al. 2019; Henshaw et al. 2020). The variation of the velocity gradient with scale shown in Fig. 8 is also consistent with the results in Zhou et al. (2022).
### Velocity gradients
The kinematical features of hub-filament systems can be revealed by the velocity gradients along the filaments. However, it is difficult to carry out this measurement directly in PPV space due to the complex gas motions shown in Fig. 4. In Zhou et al. (2022), we investigated the presence of hub-filament systems in a large sample of 146 proto-clusters using the H\({}^{13}\)CO\({}^{+}\) J=1-0 molecular line in the ATOMS survey. The strongest intensity peaks of the H\({}^{13}\)CO\({}^{+}\) emission coincide with the brightest 3 mm cores or hub regions in those hub-filament systems. We found that filaments are ubiquitous in proto-clusters, and velocity and density fluctuations are seen along these filaments. We first estimated two overall velocity gradients between velocity peaks and valleys at the two sides of the center of the gravitational potential well, ignoring local velocity fluctuations. We also derived additional velocity gradients over smaller distances around the strongest intensity peaks of the H\({}^{13}\)CO\({}^{+}\) emission (see Fig. 6 in Zhou et al. 2022). In this work, the same method is used to derive the velocity gradients from the LAsMA data. The fitted velocity gradients derived from the LAsMA and ALMA data are then combined for a statistical analysis, as shown in Fig. 8.
Figure 5: Background images show the moment 0 maps of \({}^{13}\)CO (3-2). Lines in color present the filament skeletons. Orange circles show the ATLASGAL clumps; the size of each circle represents the clump radius. Red '+' symbols mark the intensity peaks of \({}^{13}\)CO (3-2) emission in Fig. 7.
Fig. 7 shows the intensity-weighted velocity (Moment 1) and integrated intensity (Moment 0) of the \({}^{13}\)CO (3\(-\)2) line emission along the skeletons of selected filaments, which show intense velocity and density fluctuations. Generally, the peaks of the density fluctuations are associated with ATLASGAL clumps, as shown in Fig. 5. The density and velocity fluctuations along filaments may indicate converging gas flows coupled to regularly spaced density enhancements that probably form via gravitational instabilities (Henshaw et al., 2020). In Fig. 7, we can see the typical V-shaped velocity structure around the intensity peaks, which we fit as velocity gradients on both sides of each intensity peak, marked by the straight lines. Fig. 7(a) shows a more complex velocity field with more repetitive segments than the simple structure in Fig. 7(b). The gas kinematics of the other long filaments in Fig. 5 are similar to those in Fig. 7(a). Apart from the local velocity fluctuations or gradients, the long filaments also exhibit velocity gradients on larger scales, which are fitted by ignoring the local velocity fluctuations. Generally, large-scale velocity gradients are associated with several intensity peaks.
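In practice, this amounts to locating the intensity peaks along each skeleton and fitting straight lines to the velocity on either side of each peak. A minimal sketch, assuming one-dimensional arrays of offset (pc), centroid velocity (km s\(^{-1}\)) and integrated intensity extracted along one skeleton; the peak-prominence and fitting-window values are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def fit_gradients(offset, velocity, intensity, half_window=1.0):
    """Fit linear velocity gradients on both sides of each intensity peak.
    half_window is the fitting range in pc around each peak (an assumption)."""
    peaks, _ = find_peaks(intensity, prominence=0.1 * intensity.max())
    results = []
    for p in peaks:
        for side in (-1, +1):
            sel = (side * (offset - offset[p]) >= 0) & \
                  (np.abs(offset - offset[p]) <= half_window)
            if sel.sum() > 3:
                grad, _ = np.polyfit(offset[sel], velocity[sel], deg=1)
                results.append((offset[p], side, grad))   # gradient in km/s/pc
    return results
```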
### Hub-filament structure
Each sub-region marked in Fig. 1(a) contains many sub-structures, which are mostly well separated from each other, except in several crowded regions. Red contours at \(\sim\)50% of the integrated intensity of the \({}^{13}\)CO (3-2) emission show the high density parts of the various structures. In Fig. 1, several typical hub-filament structures are highlighted in the first row; see also Fig. 5 for more details. We can see filamentary structures connected to high-density hubs. Other sub-structures in the entire observed region have more or less similar morphology. Some differences are expected, since the appearance of hub-filament morphology depends on the projection angle and the resolution of the observations.
Figure 6: The intensity and velocity distribution of sub-regions in PPV space based on the decomposition of GAUSSPY+. For other sub-regions, see Fig. A.2 and Fig. A.3 in Sec. A.
Moreover, most structures have the typical kinematic features of hub-filament systems, as discussed in the next sections. For example, the intensity peaks (the hubs) are associated with converging velocities, indicating that the surrounding gas flows may be converging onto the dense structures. In particular, velocity gradients increase at small scales.
Orange circles in Fig. 7 mark the intensity peaks of \({}^{13}\)CO (3\(-\)2) emission associated with ATLASGAL clumps, where the distance between the central coordinates of the ATLASGAL clump and the intensity peak of the \({}^{13}\)CO (3\(-\)2) emission is less than the effective radius of the ATLASGAL clump. These associated clumps are treated as hubs in this work; their properties are listed in Table 1. In the second part of Table 1, some of the peaks have more than one corresponding ATLASGAL clump, and we have added those together, as indicated by a space in the table. We focus on their masses, effective radii and the corresponding velocity gradients. However, as shown in Fig. 7, there are also many peaks of \({}^{13}\)CO (3\(-\)2) emission without corresponding clumps, which are thus not included in the table, but they are similar to the peaks associated with clumps. Generally, large-scale velocity gradients involve many intensity peaks with or without corresponding clumps; it is thus difficult to estimate the properties of their hubs, and these are therefore not given in the table.
### Funnel structure
If we compare the velocity gradients of the molecular gas to iron filings, then just as iron filings trace the shape of a magnetic field, the distribution of velocity gradients may to some extent reflect the shape of the force field that plays the dominant role in the molecular clouds. Below, we investigate the distribution of the molecular velocity gradients in PPV space. In the case of gravitational free-fall onto a central point mass, velocities change as \(v\propto r^{-1/2}\) (\(r=\sqrt{x^{2}+y^{2}}\)). In the case of collapse driven by the gravity of a gaseous mass with a certain density profile, the radial dependence of the infall speed depends on the slope of the density profile (Gomez et al., 2021). In either case, this results in a 'funnel' morphology of the velocity field in PPV space, as illustrated in Fig. 9. The exterior cloud velocities show only gentle variations (small velocity gradient, funneling material from cloud to clump), while the velocities in the interior part near the center change dramatically with scale (large velocity gradient, funneling material from clump to core). We note that the schematic in Fig. 9 is oversimplified. Generally, a molecular cloud contains many clumps, and a clump in turn contains many cores. Here, we choose only one clump and one core to demonstrate the gas inflow and the variation of the velocity gradient from the molecular cloud scale to the core scale. A more realistic case is shown in Fig. 6, where we indeed find that most of the sub-structures show the expected funnel structures in PPV space. In particular, close to the intensity peaks, the velocity gradient increases steeply, as also reflected in Fig. 7.
Moreover, the V-shaped velocity structure around the intensity peaks indicates accelerated material inflowing towards the central hub (Gomez and Vazquez-Semadeni, 2014; Kuznetsova et al., 2015; Hacar et al., 2017; Kuznetsova et al., 2018; Zhou et al., 2022), and can be seen everywhere in Fig. 7. The V-shaped velocity structure can be treated as the projection of the funnel structure from PPV space onto the PV plane. Hence, the funnel structure in PPV space can be an effective probe of the gravitational collapse motions in molecular clouds.
## 4 Discussion
### Characteristic scale \(\sim\)1pc
In this work, we have investigated gas motions on sufficiently large scales (the longest filament is \(\sim\)50 pc), obtaining results similar to those of previous small-scale ALMA observations. We fitted a wide range of values of the velocity gradients, as shown in Fig. 8(a). The velocity gradients are small at large scales, while they become significantly larger at small scales (\(\lesssim\) 1 pc), as is the case in the ATOMS survey (Zhou et al., 2022). For both the ALMA and LAsMA data, most of the fitted velocity gradients concentrate at scales of \(\sim\) 1 pc, as shown in Fig. 8. This is consistent with the fact that 1 pc is considered the characteristic scale of massive clumps (Urquhart et al., 2018). In Fig. 8, the variation of velocity gradients with scale is comparable with the expectations from gravitational free-fall, in the sense that the gradient decreases smoothly with increasing scale. Fig. 10(c) and (d) show results similar to Fig. 8, but only consider the velocity gradients associated with ATLASGAL clumps, listed in Table 1. In particular, in Fig. 10(c), the required central mass in the fitted free-fall model is consistent with the mass distribution observed in Fig. 10(b).
Figure 7: Two selected filaments are used to illustrate the fitting of the velocity gradient. Upper panel: Velocity gradients are fitted in the ranges defined by the red vertical dashed lines, and straight lines show the linear fitting results. Lower panel: Blue and red dotted lines show the normalized velocity and intensity, respectively. Orange circles mark the intensity peaks of \({}^{13}\)CO (3\(-\)2) emission associated with ATLASGAL clumps. For other filaments, see Fig. 4, Fig. 5, Fig. 6, Fig. 7 and Fig. 8 in Sec. A.
### Evidence for Gravitational Acceleration
Fig. 10(a) displays the probability distribution of velocity gradients measured around 1 pc (0.8\(\sim\)1.2 pc), showing that the most frequent velocity gradient is \(\sim 1.6\) km s\({}^{-1}\) pc\({}^{-1}\). Assuming free-fall,
\[\nabla V_{free}=-\frac{d}{dR}\sqrt{\frac{2GM}{R}}=\sqrt{\frac{GM}{2R^{3}}}, \tag{1}\]
we estimate the kinematic mass corresponding to 1 pc to be \(\sim 1190\) M\({}_{\odot}\), which is comparable with the typical mass of clumps in the ATLASGAL survey (Urquhart et al., 2018). In Fig. 10(b), the peak of the associated clumps' mass distribution is also around \(\sim 1000\) M\({}_{\odot}\). Thus such clumps may be gravity-dominated collapsing objects, also consistent with the survey results that most Galactic parsec-scale massive clumps seem to be gravitationally bound no matter how evolved they are (Liu et al., 2016; Urquhart et al., 2018; Evans et al., 2021).
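Eq. (1) can be evaluated directly with astropy units. The short sketch below recovers the \(\sim\)1190 M\({}_{\odot}\) quoted above for a 1.6 km s\({}^{-1}\) pc\({}^{-1}\) gradient at 1 pc and, as an example of the inverse use of Eq. (1), predicts the free-fall gradient for a Table 1-like clump; the example mass and length are illustrative.

```python
import astropy.units as u
from astropy.constants import G

def freefall_mass(grad, R):
    """Mass required to drive a free-fall velocity gradient grad at radius R (Eq. 1)."""
    return (2 * grad**2 * R**3 / G).to(u.M_sun)

def freefall_gradient(M, R):
    """Free-fall velocity gradient at distance R from a central mass M (Eq. 1)."""
    return ((G * M / (2 * R**3)) ** 0.5).to(u.km / u.s / u.pc)

print(freefall_mass(1.6 * u.km / u.s / u.pc, 1.0 * u.pc))   # ~1.2e3 Msun
print(freefall_gradient(10**3.155 * u.M_sun, 1.4 * u.pc))   # illustrative clump values
```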
Based on Eqn. 1, we can use the values of \(M\), \(L\) and \(\nabla V\) listed in Table 1 to compare observations to this simple theory. In Fig. 11, the observed and predicted accelerations do show a clear correlation, strong evidence for gravity accelerating the gas inflow. The scatter in Fig. 11 is to be expected because we measure only the projection of the true velocity vector and hence of the acceleration. For the same reason, we tend to underestimate the observed acceleration. Moreover, in Table 1 we only selected the ATLASGAL clumps associated with the intensity peaks of the \({}^{13}\)CO (3\(-\)2) emission (a deviation within the effective radius of the clump), thus some of them are not the exact gravitational centers. Despite these caveats, the correlation between observations and predictions for the denser clumps is quite good. The less dense clumps tend to lie _above_ the line of equality, suggesting that their masses underestimate the total mass contributing to the gravitational acceleration.
### Large-scale inflow driven by gravity
As shown in Fig. 8 and Fig. 10, the velocity gradients fitted to the LAsMA and ALMA data agree with each other over the range of scales covered by the ALMA observations in the ATOMS survey (\(<5\) pc). Interestingly, the variations of velocity gradients at small scales (\(<1\) pc), middle scales (\(\sim 1\)-\(7.5\) pc) and large scales (\(>7.5\) pc) are consistent with gravitational free-fall onto central masses of \(\sim 500\) M\({}_{\odot}\), \(\sim 5000\) M\({}_{\odot}\) and \(\sim 50000\) M\({}_{\odot}\), respectively. This means that the velocity gradients on larger scales require larger masses to be maintained, which is also revealed by the funnel structure shown in Fig. 9. Indeed, larger masses imply larger scales; that is to say, the larger scale inflow is driven by the larger scale structure, which may be the gravitational clustering of smaller scale structures (Sec. 4.4), consistent with the hierarchical structure in molecular clouds and the gas inflow from large to small scales. Thus the large scale gas inflow may also be driven by gravity, with the molecular clouds in G333 being in a state of global gravitational collapse, which is consistent with the argument that these molecular clouds act as cloud-scale hub-filament structures (see the description in Sec. 3.4). Moreover, as described in Sec. 3.5, the funnel structure of the velocity field in PPV space also supports the large-scale gravitational collapse of the molecular clouds in the G333 complex.
### Hierarchical gravitational structure in molecular cloud
The scales of the hub-filament structures (cloud-scale) in this work are much larger than those (clump-scale) in Zhou et al. (2022). Each large-scale hub-filament structure contains many sub-structures, and thus its kinematic structure will be very complex. Nevertheless, if they are hub-filament systems, we expect them to have kinematic features similar to those of clump-scale hub-filament systems. A hub-filament system must have a common gravitational center, which will dominate the overall gravitational field and thus the velocity field. In Fig. 4, although there are many sub-structures in each sub-region, which cause many velocity and density fluctuations, globally the entire large-scale structure still displays the funnel structure. As discussed in
Figure 8: Statistical analysis of all fitted velocity gradients. (a) Velocity gradient versus the length over which the gradient has been estimated, for all sources in the ATOMS survey (red dots) and the current LAsMA observations (black '+'). The colored lines show free-fall velocity gradients for comparison. For the free-fall model, yellow, blue, cyan and green lines denote masses of 5 M\({}_{\odot}\), 500 M\({}_{\odot}\), 5000 M\({}_{\odot}\) and 50000 M\({}_{\odot}\), respectively; (b), (c) and (d) Blow-up maps for lengths \(<1\) pc (small scale), \(\sim 1\) – 7.5 pc (middle scale) and \(>7.5\) pc (large scale) in panel (a).
Sec. 3.3 and Sec. 4.3, large-scale velocity gradients always involve many intensity peaks, and the larger scale inflow is driven by the larger scale structure, implying that the clustering of local small-scale gravitational structures can act as the gravitational center on larger scales. To some extent, the funnel structure gives an indication of the gravitational potential well formed by this clustering.
Both Zhou et al. (2022) and this work find that gravitationally dominated gas infall becomes apparent when the scale is less than \(\sim\)1 pc. Thus, relatively small-scale hub-filament structures will have better defined and more recognizable morphology than large-scale ones due to the strong local gravitational field. For the large-scale hub-filament structures, background turbulence is more likely to disturb their morphology due to the weaker gravitational confinement on large scales, thus they are more "blurred". To interpret the formation of the large-scale hub-filament structure in the W33 complex, Zhou et al. (2023) suggested that the IRDCs and ATLASGAL clumps in W33-blue can be treated as small-scale hub-filament structures, and W33-blue itself as a huge hub-filament structure. Thus there may be a self-organization process from small-scale hub-filament structures to a large-scale one. If the large scale gravitational center is the clustering of small scale ones, then compared with the latter, the large scale gravitational center will be looser, and its ability to control the gas motions on large scales correspondingly weaker, which is reflected in the loose funnel structure and the small velocity gradients, as shown in this work. In Fig. 6, the small-scale structures indeed have clearer funnel structures than the large-scale ones, implying the role of gravity in shaping the funnel structure.
### G333 in Context of Galactic Molecular Clouds
The same region of the sky, with a slightly larger area (\(332.6^{\circ}<l<333.8^{\circ}\) and \(-0.8^{\circ}<b<0.1^{\circ}\)), has also been studied by Nguyen et al. (2015), using CO and \({}^{13}\)CO (1-0) emission and archival data. They call the G333 GMC the RCW 106 complex after the associated HII region. They determined the following properties for the RCW 106 complex: the total effective diameter is 183 pc, the mass is \(5.9\times 10^{6}\) M\({}_{\odot}\), and the mass surface density is 220 M\({}_{\odot}\) pc\({}^{-2}\). The virial parameter for the whole complex is 0.35, making it strongly gravitationally bound; since the cloud is elongated, they include a linear analysis that also implies a strongly bound cloud. These properties would make the G333 GMC stand out as one of the largest, most massive complexes in the Galaxy (cf. Fig. 7 of Miville-Deschenes et al. 2017), and one of the few that are gravitationally bound (cf. Fig. 2 of Evans et al. 2021). However, Nguyen et al. (2015) took the velocity range of the RCW 106 complex as [-80, -40] km s\({}^{-1}\). As discussed in Sec. 3.1, after a careful decomposition of velocity components, we find that the velocity range [-80, -40] km s\({}^{-1}\) also includes the G331 GMC; thus the mass of the G333 GMC is overestimated in Nguyen et al. (2015). The catalog of Miville-Deschenes et al. (2017), restricted to the velocity range of peak3, yields a mass for the G333 cloud traced by CO 1-0 emission of \(\sim 1.7\times 10^{6}\) M\({}_{\odot}\), with a surface density of \(\sim 120\) M\({}_{\odot}\) pc\({}^{-2}\). Tracers that are biased toward higher volume densities find progressively smaller masses. Karnik et al. (2001) made a far-infrared (FIR) 150 and 210 \(\mu\)m dust emission study of the region; using the canonical gas-to-dust ratio of 100, they measure a total GMC mass of \(\sim 1.8\times 10^{5}\) M\({}_{\odot}\). This is consistent with the total mass of \(\sim 1\times 10^{5}\) M\({}_{\odot}\) estimated by Mookerjea et al. (2004) from a 1.2-mm cold dust continuum emission map and the same gas-to-dust ratio. These estimates show the decreasing fraction of mass in progressively denser regions.
The lower total mass from CO 1-0 based on the catalog of Miville-Deschenes et al. (2017) still makes the G333 complex one of the most massive in the Milky Way. Analysis of the RCW 106 HII region yields a stellar mass of \(48\times 10^{3}\) M\({}_{\odot}\); assuming a timescale of 0.2 Myr (the lifetime of an O7 star), Nguyen et al. (2015) derive a star formation rate so far of 0.25 M\({}_{\odot}\) yr\({}^{-1}\) and an efficiency (\(M_{*}/M_{\rm gas}\)) of 0.008. The very high star formation rate led Nguyen et al. (2015) to describe it as a "mini starburst". It is thus atypical of most molecular clouds but an exemplar of the very small subset that form the majority of stars in the Galaxy.
## 5 Summary
We investigated the gas kinematics of hub-filament structures in the G333 complex using \({}^{13}\)CO (3-2) emission. The main
Figure 9: Schematic diagram of the ‘funnel’ structure in PPV space and on PP plane. PP plane shows a multi-scale hub-filament structure in a molecular cloud, which can turn to a ‘funnel’ structure in PPV space. Red arrow represents the gas inflow.
conclusions are as follows:
1. The G333 complex includes three main velocity components, which can be well separated by the average spectra and the velocity distribution in PPV space. Here the velocity distribution is derived from the decomposition of \({}^{13}\)CO (3-2) emission by GAUSSPY+. The velocity components with velocity ranges [\(-120\), \(-76\)] km s\({}^{-1}\) and [\(-76\), \(-60\)] km s\({}^{-1}\) represent the two main parts of the G331 GMC; the third velocity component, with velocity range [\(-60\), \(-35\)] km s\({}^{-1}\), shows a very extended shell structure from \(l=330.6^{\circ}\) to \(l=334^{\circ}\) with the G333 GMC and the G332 ring as the main parts.
2. The entire G333 complex seems to have fragmented into several independent molecular clouds, with some of the sub-structures in these molecular clouds showing typical hub-filament structures, which also have the typical kinematic features of hub-filament systems. The broken morphology of some very infrared-bright structures indicates that feedback is disrupting the star-forming regions.
3. We use the integrated intensity maps of \({}^{13}\)CO J=3-2 to identify filaments in the G333 complex with the FILFINDER algorithm, and extract the velocity and intensity along the filament skeletons from the moment maps. We find a good correlation between velocity and density fluctuations, and fit the velocity gradients around the intensity peaks. The change in velocity gradients with scale indicates that the morphology of the velocity field in PPV space resembles a "funnel" structure. The funnel structure can be explained as accelerated material inflowing towards the central hub and gravitational contraction of star-forming clouds/clumps. The typical V-shape velocity structure along the filament skeleton can be treated as the projection of the funnel structure from PPV space onto the PV plane. Hence, the funnel structure in PPV space can be an effective probe of gravitational collapse motions in molecular clouds.
4. We have investigated gas motions on sufficiently large scales (the longest filament is \(\sim\)50 pc), but obtain results similar to those of small-scale ALMA observations. The typical velocity gradient corresponding to a one pc scale is \(\sim 1.6\) km s\({}^{-1}\) pc\({}^{-1}\). Assuming the free-fall model, we can predict the gravitational acceleration onto each hub. The observed accelerations correlate well with the predicted ones and have values comparable to the prediction. Given that we observe only one component of the acceleration and that the masses of the hubs are uncertain, this result provides strong evidence for gravitational acceleration of material flowing into hubs from filaments.
5. The velocity gradients fitted by the LAsMA data and the ALMA data agree with each other over the scales covered by the ALMA observations in the ATOMS survey (\(<5\) pc). On large scales, we find that the larger-scale inflow is driven by the larger-scale structure, indicating the hierarchical structure in molecular clouds and the gas inflow from large to small scales. The
Figure 11: Correlation between the left and right terms in Eqn. 1 revealed by the values of \(M\), \(L\) and \(\nabla V\) listed in Table 1. The blue dotted line is the line of equality of the left and right terms. There are two cases in Table 1: a dense clump has two or more filaments leading to it, or only one filament with a gradient can be identified (the rows with only one entry for the gradient). The numbers of clumps in the two cases are 49 and 25, marked by black and red ’+’, respectively.
Figure 10: Statistical analysis of the velocity gradients listed in Table 1. (a) The probability distribution of velocity gradients measured around the 1 pc scale; (b) Mass distribution of the ATLASGAL clumps listed in Table 1; (c) and (d) The same as Fig. 8, but only considering the velocity gradients listed in Table 1.
large-scale gas inflow is driven by gravity, implying that the molecular clouds in G333 may be in a state of global gravitational collapse. The funnel structure of the velocity field shown in PPV space also tends to support the large-scale gravitational collapse.
6. Although there are many sub-structures in each molecular cloud, which cause ubiquitous velocity and density fluctuations, overall the entire large-scale structure still displays a loose funnel structure. Thus the hub-filament structures at different scales may be organized into a hierarchical macroscopic system through the coupling of gravitational centers at different scales.
In short, changes of velocity gradient with scale indicate a "funnel" structure of the velocity field in PPV space, indicative of a smooth, continuously increasing velocity gradient from large to small scales, and thus consistent with gravitational acceleration.
###### Acknowledgements.
This publication is based on data acquired with the Atacama Pathfinder Experiment (APEX) under programme ID M-0109.F-9514A-2022. APEX has been a collaboration between the Max-Planck-Institut für Radioastronomie, the European Southern Observatory, and the Onsala Space Observatory. This work has been supported by the National Key R&D Program of China (No. 2022YFA1603100). Tie Liu acknowledges the support by the National Natural Science Foundation of China (NSFC) through grants No.12122307 and No.12073061, the international partnership program of the Chinese Academy of Sciences through grant No.114231KYSB20200009, and the Shanghai Pujiang Program 20PJ1415500.
|
2304.05326 | Towards Power Characterization of FPGA Architectures To Enable
Open-Source Power Estimation Using Micro-Benchmarks | While in the past decade there has been significant progress in open-source
synthesis and verification tools and flows, one piece is still missing in the
open-source design automation ecosystem: a tool to estimate the power
consumption of a design on specific target technologies. We discuss a
work-in-progress method to characterize target technologies using generic
micro-benchmarks, whose results can be used to establish power models of these
target technologies. These models can further be used to predict the power
consumption of a design in a given use case scenario (which is currently out of
scope). We demonstrate our characterization method on the publicly documented
Lattice iCE40 FPGA technology, and discuss two approaches to generating
micro-benchmarks which consume power in the target device: simple lookup table
(LUT) instantiation, and a more sophisticated instantiation of ring
oscillators. We study three approaches to stimulate the implemented
micro-benchmarks in hardware: Verilog testbenches, micro-controller
testbenches, and pseudo-random linear-feedback-shift-register-(LFSR)-based
testing. We measure the power consumption of the stimulated target devices. Our
ultimate goal is to automate power measurements for technology
characterization; Currently, we manually measure the consumed power at three
shunt resistors using an oscilloscope. Preliminary results indicate that we are
able to induce variable power consumption in target devices; However, the
sensitivity of the power characterization is still too low to build expressive
power estimation models. | Stefan Riesenberger, Christian Krieg | 2023-04-11T16:33:20Z | http://arxiv.org/abs/2304.05326v1 | Towards Power Characterization of FPGA Architectures To Enable Open-Source Power Estimation Using Micro-Benchmarks
###### Abstract
While in the past decade there has been significant progress in open-source synthesis and verification tools and flows, one piece is still missing in the open-source design automation ecosystem: a tool to estimate the power consumption of a design on specific target technologies. We discuss a work-in-progress method to characterize target technologies using generic micro-benchmarks, whose results can be used to establish power models of these target technologies. These models can further be used to predict the power consumption of a design in a given use case scenario (which is currently out of scope). We demonstrate our characterization method on the publicly documented _Lattice_ iCE40 FPGA technology, and discuss two approaches to generating micro-benchmarks which consume power in the target device: simple lookup table (LUT) instantiation, and a more sophisticated instantiation of ring oscillators. We study three approaches to stimulate the implemented micro-benchmarks in hardware: Verilog testbenches, micro-controller testbenches, and pseudo-random linear-feedback-shift-register-(LFSR)-based testing. We measure the power consumption of the stimulated target devices. Our ultimate goal is to automate power measurements for technology characterization; currently, we manually measure the consumed power at three shunt resistors using an oscilloscope. Preliminary results indicate that we are able to induce variable power consumption in target devices; however, the sensitivity of the power characterization is still too low to build expressive power estimation models.
FPGA, power measurements, open source, yosys, nextpnr, verilog
## I Introduction
Awareness of the power consumption of a Field Programmable Gate Array (FPGA) design is important for optimization in power- or energy-constrained use cases. In this work-in-progress paper we present the first part of our work, on creating targeted FPGA circuits and measuring them. The second part uses this basis to fit power estimation models. Table I shows the power estimation features offered by different FPGA vendors and the part of the feature spectrum we want to cover in the second work. An important goal for us is that all the data and useful tools that we create in these works will be published as open source. On the hardware side we are using FPGAs from the Lattice iCE40 family and the open-source tooling based on Yosys and nextpnr that is available for them.
## II Preliminary
The previous works that we looked into had two approaches to obtaining power measurement data. On the one hand, data can be acquired directly by measurements on hardware [1], which requires more thought on how to extract certain data. High sensitivity of the measurement system is also important, which can be difficult to achieve on highly clocked systems like MCUs/CPUs [3]. On the other hand, the data can be acquired by utilizing the existing closed-source vendor power estimation tools [2]. Depending on the features of the tool, these directly provide the power of different components. This second approach requires trust in the accuracy of the vendor-provided tools,
\begin{table}
\begin{tabular}{c} \hline Lattice iCEcube2 \\ \hline Lattice Diamond \\ \hline Intel PowerPlay \\ \hline Xilinx Power Estimator \\ \hline (our solution) IcePwExt \\ \hline \end{tabular}
\end{table} TABLE I: Feature matrix of power estimation vendor tools
which can vary to a substantial degree as mentioned in ([1] referencing [4] and [5]).
## III Simple Benchmarking
Micro-benchmarks are essential for the targeted analysis of certain aspects of FPGA hardware. These benchmarks can be tedious to create and to vary in parameters like component count or placement position. Thus we decided to design a simplified way of creating such micro-benchmarks by providing a tool which generates said benchmarks in Verilog, along with an accompanying constraint file, from a benchmark definition file. The micro-benchmarks depend on the target toolchain and hardware, thus requiring specific handling in the generator tool. In particular, we are looking into Lattice iCE40 FPGAs, due to their well-supported open-source workflow.
### _Introduction_
Micro-benchmarking is an effective means to analyze certain behaviors and parameters of FPGA hardware. The creation of such benchmarks can be tedious and repetitive when hundreds of identical cells have to be placed while varying one of their parameters, such as position or configuration. The structure of such a generator tool has to be aware of the target toolchain in each step, i.e., cells and constraints. Most of the Verilog code used is toolchain-agnostic. Only some special Verilog attributes are toolchain-dependent. The constraint files, on the other hand, are very different for each vendor and possibly toolchain. One of the simplifications made to reduce the complexity of the benchmark circuits is to not include an output. This triggered a nextpnr bug1 that we fixed.
Footnote 1: [https://github.com/YosysHQ/nextpr/pull/944](https://github.com/YosysHQ/nextpr/pull/944)
### _Usage and goal_
The micro-benchmark generator is intended to be used in conjunction with an automated testbench generator and a hardware measurement setup. Our main goal is to accelerate the creation of micro-benchmarks for FPGA hardware analysis by providing a simple tool that generates said benchmarks and their variations. The resulting Verilog files can then be used by other tools, which also utilize the definition file, to implement testbenches that can conduct simulations and automated hardware testing.
### _Methodology_
The basic idea is to instantiate a minimal number of cells and move them around in the FPGA. This includes not connecting the output of the component, to reduce wires. Such output-less components are detected by the optimization tool. To prevent them from being optimized away, it is important to add the Verilog keep attribute.
The various toolchain vendors require different handling of the placement constraints of components. Figure 1 shows the general hierarchy of the functions of the benchmark generator and their data dependencies.
### _Nextpnr, Yosys - Lattice_
Nextpnr is a place-and-route tool that, combined with the synthesis tool Yosys, supports a fully open-source hardware synthesis flow for the Lattice iCE40 FPGA family. To forcefully place a component with nextpnr, one has to specify the attribute given in listing 1. As an example, a lookup table (LUT) is positioned by specifying its X and Y coordinates on the grid of the FPGA and the desired logic cell. The Lattice iCE40UP5K has, for example, 8 logic cells per valid grid position.
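A minimal sketch of such a placement-constrained, output-less LUT instantiation (not the exact contents of listing 1; the coordinates, logic-cell index and LUT_INIT value are placeholders) could look as follows:

```verilog
// Minimal sketch (placeholders, not the original listing): one LUT4 forced
// to a fixed logic cell via nextpnr's BEL attribute; keep prevents the
// output-less cell from being optimized away.
module lut_bench (
    input wire i0, i1, i2, i3
);
    (* keep *)
    (* BEL = "X10/Y10/lc0" *)   // X/Y grid position and logic-cell index
    SB_LUT4 #(
        .LUT_INIT(16'h8000)     // arbitrary truth table
    ) lut_inst (
        .O(),                   // output intentionally left unconnected
        .I0(i0), .I1(i1), .I2(i2), .I3(i3)
    );
endmodule
```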
## IV Testbenches
An accompanying concept to section III is the automated generation of fitting testbenches. The information provided for the benchmark is enough to construct a general stochastic testbench. This section explains the method used to produce stochastic inputs and how they can be used to test designs.
### _LFSR based testing_
To stimulate the input of a device under test (DUT) with stochastic inputs that are not too highly correlated, a pseudo-random sequence from a linear-feedback shift register (LFSR) can be used. This testing method is simple to describe for reproduction, because the LFSR has only 3 degrees of freedom: the polynomial order, the feedback taps and the initial register seed. In practice, these degrees of freedom can be reduced even further when only maximum-length polynomials are chosen. This limits the sets of feedback taps that can be used for a given polynomial order. Lists of polynomial orders and their maximum-length tap configurations can be found in most literature about LFSRs and also on the internet2.
Footnote 2: [http://users.ece.cmu.edu/~koopman/ffsr/](http://users.ece.cmu.edu/~koopman/ffsr/)
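As a concrete illustration (not code from the paper's generator), a maximal-length 4-bit Fibonacci LFSR with feedback taps at bits 4 and 3 (polynomial \(x^{4}+x^{3}+1\), period 15) can be written as:

```verilog
// Minimal sketch: maximal-length 4-bit Fibonacci LFSR (x^4 + x^3 + 1,
// period 15) as a stimulus source; module and signal names are our own.
module lfsr4 (
    input  wire       clk,
    input  wire       rst,       // synchronous reset to a non-zero seed
    output reg  [3:0] state
);
    always @(posedge clk) begin
        if (rst)
            state <= 4'b0001;                           // any non-zero seed
        else
            state <= {state[2:0], state[3] ^ state[2]}; // taps at bits 4 and 3
    end
endmodule
```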
### _Verilog Testbench_
A Verilog testbench can be utilized to simulate the DUT with the stochastic inputs that will later be applied by the hardware testbenches. The simulation of the testbench can be used to acquire an approximate representation of the internal signals of a design running on hardware. It is not fully accurate, since no gate or interconnect delays are taken into account. The values are rather a lower bound on the internal signal changes. These signal changes can be used
Fig. 1: Benchmark generator hierarchy and data flow
to calculate activation rates, which are required, e.g., for most power estimation models.
### _Microcontroller Testbench_
This testbench is used to examine a design on hardware. It is based on a microcontroller that is connected to the FPGA IOs and the measurement device. The FPGA contains the DUT, which is stimulated by stochastic signals from the microcontroller. The measurement device receives a trigger signal from the microcontroller when the testing begins. The microcontroller itself is controlled by a PC, which starts the test runs. The FPGA is also connected to the PC to allow for exchange of the DUT. The data from the measurement device is either copied directly to the PC or indirectly via a data storage, depending on the capabilities of the device.
## V Experimental Setup
Our initial measurement setup (fig. 2) used a USB measurement card with a \(12\,\mathrm{Bit}\) Analog-Digital Converter (ADC). At a supply voltage of \(5\,\mathrm{V}\) this results in at best \(\approx 1.2\,\mathrm{mV}\) of quantization step size. The minimal current that can possibly be measured from a \(1.5\,\mathrm{\SIUnitSymbolOhm}\) shunt resistor is \(\approx 800\,\mathrm{\mu A}\). This turned out to be insufficient measurement accuracy, due to the low-power FPGA used on the iCEBreaker board. In reality, even bigger design targets like the Dhrystone benchmark3 running on a PicoRV32 core4 on the iCEBreaker board were in the range of a few quantization steps, which made those measurements unusable.
Footnote 3: [https://github.com/YosysHQ/picovx32/tree/master/dhrystone](https://github.com/YosysHQ/picovx32/tree/master/dhrystone)
Footnote 4: [https://github.com/YosysHQ/picovx32](https://github.com/YosysHQ/picovx32)
These problems meant that we had to put more focus on the measurement setup and benchmarking. Our target for useful measurements is a minimum of around \(100\,\mathrm{\mu V}\), which would allow currents of \(66\,\mathrm{\mu A}\). This seems achievable by amplifying the differential voltage of the shunt with an operational amplifier or a dedicated Integrated Circuit (IC) and measuring the resulting single-ended voltage with an accurate ADC. For this work we pursued a solution with a dedicated IC, because such chips have better component matching and smaller offset errors in addition to very good noise and frequency characteristics. The only real trade-off for us was the loss of flexibility, but it was worth the resulting simplicity of the measurement setup.
### _Current Sense Amplifier Board_
To ease the measurement of small differential voltages on the shunt resistors, a current-sense amplifier IC, the INA293B5, has been utilized. The most interesting electrical characteristics of this IC are listed in table II.
### _iCEBreaker FPGA Measurement Setup_
The iCEBreaker board uses a low-power Lattice iCE40UP5K FPGA IC, which has a small number of logic cells compared to Xilinx FPGAs. This results in the situation that even the power draw of designs that fill almost the entire FPGA is negligible. The iCEBreaker PCB has preexisting jumper pads, shown in fig. 3, which are by design intended to be used for shunt resistors to allow for current measurements of the FPGA. For our setup, \(1.5\,\mathrm{\SIUnitSymbolOhm}\) resistors have been used. The setup to measure the power of the board is depicted in fig. 4. The differential voltage of the \(V_{\mathrm{core}}\) shunt resistor is amplified by the current-sense amplifier PCB (section V-A), while the IO shunt resistors are connected to subtraction circuits due to their higher currents, which would clip when amplified. The outputs are then measured with an oscilloscope.
### _Amplifier Prototype Validation_
Before the amplifier board could be used for target hardware measurements, it had to be validated to confirm it was functioning properly. The simple setup for validation is depicted in fig. 5. The variable values in table III have been selected in such a way that a current of approximately \(0.1\,\mathrm{mA}\) is measured at the shunt resistor \(R_{\mathrm{SH}}\), resulting in \(1\,\mathrm{mV}\). This differential voltage should, if the circuit is working correctly, then be amplified by the INA293 to around \(0.5\,\mathrm{V}\).
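As a quick sanity check, this corresponds to a nominal gain of roughly \(U_{\mathrm{out}}/U_{\mathrm{diff}}\approx 0.5\,\mathrm{V}/1\,\mathrm{mV}=500\), consistent with the fixed gain of the INA293B5 variant (assumed here to be 500 V/V).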
#### V-C1 Measurement Results
For the measurements it has to be taken into account that the resistance of the connecting points lies in the range of \(0.1\)-\(0.7\,\mathrm{\Omega}\), which includes the wire and contact resistance. The resistor tolerance used in the setup was \(1\,\%\). The three amplifiers have been measured one after another. The measurement results of the differential voltage \(U_{\mathrm{diff}}\), the amplified output voltage \(U_{\mathrm{out}}\) and the calculated amplified voltage \(U_{\mathrm{amp,calc}}\) can be found in table IV. The relative errors of \(1.2\)-\(6.25\,\%\) between the amplified and calculated values seem acceptable when compared to the uncertainties and variances of the whole setup. The overall behavior of the circuit was correct, which validates its functionality for this DC test.
## VI iCEBreaker FPGA - Measurements
The measurements of the Lattice iCE40UP5k FPGA are of particular interest to us for analyzing its internals. To gather information about the properties of the internals of the FPGA, different circuits and measurements have been used. The following sections go over the different circuits and their measurement results.
### _LUT4 - Results_
One of the basic internal building blocks of the FPGA is the LUT with 4 inputs and one output, which is measured in this section. The circuit in this setup consists only of LUTs, which are instantiated by the benchmark generator from section III. The inputs of all LUTs are connected to 4 inputs of the FPGA in parallel. This allows all LUTs to be controlled using only 4 inputs. The idea behind connecting all LUTs in parallel is to be able to measure their power in an additive way. This should result in an increase of power proportional to the number of LUTs used. For example, reducing the LUT count from \(5\,\mathrm{k}\) to \(1\,\mathrm{k}\) would result in approximately a fifth of the power after correcting for a constant offset value.
Figure 6 contains measurements of the iCEBreaker board with \(5\,\mathrm{k}\) and \(1\,\mathrm{k}\) LUTs instantiated on the FPGA, respectively. The inputs were stimulated by a maximum-length 4-bit LFSR every \(100\,\mathrm{\mu s}\) after the rising trigger signal. When looking at fig. 6, one can see the voltage spikes in a regular pattern. This pattern matches the LFSR changing values, and one can also see its period of 15. Comparing the three graphs does not show the desired proportional dependency of the power on the number of LUTs. This means the voltage spikes on the shunt are the result of something else. For further investigation, fig. 7 has been created. The setup for these measurements was one instantiated LUT with only one input connected to one bit of the LFSR. The purpose of this measurement was to investigate how the power relates to the connected signals and their state.
Comparing the measurements from fig. 6 and fig. 7 shows that the voltage on the shunt resistor does not depend on the number of LUTs at all, as there is no real difference in the amplitude of the voltage spikes. On the other hand, the comparison allows the conclusion that the amplitude of the voltage spikes depends on the number of inputs in the high state. In fig. 7, the voltage spikes occur only sparsely because the LFSR still has period 15 and only one bit is connected to the LUT.
### _Ring Oscillator - Results_
A ring oscillator is a chain of an odd number of inverting gates, connecting the output of the last gate with the input of the first. Such a construct is used on FPGAs, for example, to generate random numbers.
As a first attempt to instantiate a ring oscillator the Verilog code in listing 2 has been used. This code should generate a ring oscillator that can be turned on and off with an FPGA input via the AND-gate. The keep attribute should prevent the synthesis tool from optimizing the gates away.
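A minimal sketch of such a gate-level ring oscillator (not the exact code of listing 2; the chain length and names are placeholders) could look as follows:

```verilog
// Minimal sketch (placeholders, not the original listing): ring oscillator
// built from NOT gate primitives, gated by an enable input through an AND
// gate. The keep attribute is intended to stop optimization, but it is lost
// once the generic gates are mapped to LUTs.
module ring_osc_naive (
    input  wire enable,
    output wire osc_out
);
    (* keep *) wire [5:0] chain;           // 5 inverters -> odd inversion count

    and g_en (chain[0], enable, chain[5]); // AND gate closes the loop
    not g1 (chain[1], chain[0]);
    not g2 (chain[2], chain[1]);
    not g3 (chain[3], chain[2]);
    not g4 (chain[4], chain[3]);
    not g5 (chain[5], chain[4]);

    assign osc_out = chain[5];
endmodule
```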
After a few synthesis steps in Yosys, the resulting circuit in fig. 8 shows that the attempt to use keep was futile and that most gates were optimized away. The optimization kicks in because the NOT and AND gates are translated to LUTs, to which the keep attribute does not propagate. This means that a more elaborate way of instantiating the ring oscillator has to be used.
Utilizing the tricks learned from the benchmark generator, we can approach the issue from further down the synthesis flow. The code in listing 3 shows a functionally equivalent implementation of the ring oscillator described in listing 2, but this time the generic gates have been replaced with iCE40-specific LUTs. Figure 9 shows that this time the synthesis tool did not optimize the intermediate gates away. This means that with this approach we can generate the desired ring oscillators that we want to measure on FPGA hardware.
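A minimal sketch of this LUT-based variant (not the exact code of listing 3; the chain length, names and LUT_INIT values are our own choices) could look as follows:

```verilog
// Minimal sketch (placeholders, not the original listing): the same ring
// oscillator built directly from iCE40 SB_LUT4 primitives so that the cells
// survive synthesis. LUT_INIT 16'h5555 realizes O = ~I0 and 16'h8888
// realizes O = I0 & I1.
module ring_osc_lut (
    input  wire enable,
    output wire osc_out
);
    wire [5:0] chain;

    // AND gate closing the loop: chain[0] = enable & chain[5]
    (* keep *) SB_LUT4 #(.LUT_INIT(16'h8888)) lut_and (
        .O(chain[0]), .I0(enable), .I1(chain[5]), .I2(1'b0), .I3(1'b0));

    genvar i;
    generate
        for (i = 0; i < 5; i = i + 1) begin : inv
            // Inverter: chain[i+1] = ~chain[i]
            (* keep *) SB_LUT4 #(.LUT_INIT(16'h5555)) lut_not (
                .O(chain[i+1]), .I0(chain[i]), .I1(1'b0), .I2(1'b0), .I3(1'b0));
        end
    endgenerate

    assign osc_out = chain[5];
endmodule
```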
#### VI-B1 iVerilog iCE40 simulation with delays
Having a working simulation environment is useful to evaluate the behavior of designs beforehand. This is especially important for low-level implementations on FPGA logic blocks and high-frequency circuits like ring oscillators. For the first type, a zero-delay simulation is good enough, but the latter requires approximate timings of the FPGA. A high-frequency ring oscillator can cause the destruction of the FPGA by overloading the internal paths with high-frequency switching. Synthesis tools might detect such high-frequency paths and prevent them, but designers can choose to implement the oscillator anyway. To prevent destruction, it is important to know the approximate oscillation frequency, which can be determined from a simulation that includes the gate delays. This frequency will be an upper bound on the real frequency, since path delays are still missing in this step.
Fig. 8: Schematic of the synthesized NOT-based ring oscillator
Fig. 7: Measurements of 1 LUT4
Fig. 9: Schematic of the synthesized LUT-based ring oscillator
For Verilog simulation, the open-source tool Icarus Verilog (iverilog) can be used. Verilog synthesized for or tailored to the iCE40 architecture can be simulated with iverilog by including the simulation library from Yosys. It is to be noted that the Verilog path delay definitions in Yosys' simulation library do not comply with the Verilog standard, and iverilog errors out when parsing them. This is fixed by our pull request5 to the library.
Footnote 5: [https://github.com/YosysHQ/yosys/pull/3542](https://github.com/YosysHQ/yosys/pull/3542)
Listing 4 shows an example of how to simulate an iCE40-synthesized design with iverilog. The argument -gspecify is necessary to enable the use of Verilog specify blocks, which contain the path delay definitions. With the flag -D, the corresponding Verilog define variables are set. Yosys' simulation library contains timing definitions for the different variants of the iCE40 architecture. To select the low-power variant, the variable 'ICE40_LP=1' has to be defined.
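In practice this amounts to an invocation along the lines of `iverilog -gspecify -DICE40_LP=1 -o tb.vvp testbench.v design_synth.v <yosys-share>/ice40/cells_sim.v`, followed by running the produced `.vvp` file with `vvp`; the file names and the location of Yosys' `cells_sim.v` are placeholders, not the exact contents of listing 4.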
#### VI-B2 Results
For the following results, a total of approximately 3000 NOT gates has been instantiated on the iCEBreaker board. These gates are all chained together in the same fashion as shown in listing 3 for the first ring oscillator \(O1\). The second design \(O2\) consists of two chains of half the length of \(O1\), with double the oscillation frequency. The third design \(O4\) consists of chains of half the length of \(O2\), with double the number of chains, which results in two times the frequency.
Figure 11 shows the oscillating output signal of one chain in each of the three designs. One can clearly observe the doubling of the oscillator frequency between the designs. This confirms that the oscillator designs work properly and scale as intended. The most interesting measurements are found in fig. 10, which shows an increasing voltage on the \(V_{core}\) shunt that corresponds to the core current. Assuming an approximately constant core voltage, it is straightforward to calculate the dynamic power used based on the following simple model.
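A minimal sketch of such a model, assuming the measured amplifier output is divided by the amplifier gain to recover the shunt voltage \(U_{\mathrm{shunt}}\), is \(P_{\mathrm{dyn}}\approx V_{\mathrm{core}}\cdot I_{\mathrm{core}}=V_{\mathrm{core}}\cdot U_{\mathrm{shunt}}/R_{\mathrm{shunt}}\), with \(R_{\mathrm{shunt}}=1.5\,\mathrm{\SIUnitSymbolOhm}\) in our setup.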
## VII Conclusion
Initial results of section V showed that measuring low-power FPGAs is not a simple task. An improved measurement setup alone, compared to the USB measurement card, was not sufficient to get sensible results, as depicted in section VI-A. The influence of the external input signals was too severe, and the low stimulation frequency did not achieve significant changes in power draw. This meant that those results cannot be used for our intended purpose. The latter results in section VI-B, on the other hand, show a significant difference between the three designs, which provides strong evidence that the characteristics of the ring oscillator alone resulted in this change. Because the ring oscillator is constructed solely out of iCE40 LUTs, the measured data contains information about their power characteristics.
## VIII Future Outlook
Further analysis of the measurements in section VI-B has to be done to provide stronger evidence for their plausibility. Based on these results, we will expand the set of test circuits to acquire more data. This will enable us to design appropriate estimation models of the analyzed Lattice hardware. These models will then be fitted to the data to allow for power estimation, which will then be checked against hardware measurements using new test circuits.
|
2310.09483 | Sorting and Selection in Rounds with Adversarial Comparisons | We continue the study of selection and sorting of $n$ numbers under the
adversarial comparator model, where comparisons can be adversarially tampered
with if the arguments are sufficiently close.
We derive a randomized sorting algorithm that does $O(n \log^2 n)$
comparisons and gives a correct answer with high probability, addressing an
open problem of Ajtai, Feldman, Hassadim, and Nelson [AFHN15]. Our algorithm
also implies a selection algorithm that does $O(n \log n)$ comparisons and
gives a correct answer with high probability. Both of these results are a
$\log$ factor away from the naive lower bound. [AFHN15] shows an
$\Omega(n^{1+\varepsilon})$ lower bound for both sorting and selection in the
deterministic case, so our results also prove a discrepancy between what is
possible with deterministic and randomized algorithms in this setting.
We also consider both sorting and selection in rounds, exploring the tradeoff
between accuracy, number of comparisons, and number of rounds. Using results
from sorting networks, we give general algorithms for sorting in $d$ rounds
where the number of comparisons increases with $d$ and the accuracy decreases
with $d$. Using these algorithms, we derive selection algorithms in $d+O(\log
d)$ rounds that use the same number of comparisons as the corresponding sorting
algorithm, but have a constant accuracy. Notably, this gives selection
algorithms in $d$ rounds that use $n^{1 + o(1)}$ comparisons and have constant
accuracy for all $d = \omega(1)$, which still beats the deterministic lower
bound of $\Omega(n^{1+\varepsilon})$. | Christopher Trevisan | 2023-10-14T04:05:47Z | http://arxiv.org/abs/2310.09483v1 | # Sorting and Selection in Rounds with Adversarial Comparisons
###### Abstract
We continue the study of selection and sorting of \(n\) numbers under the adversarial comparator model, where comparisons can be adversarially tampered with if the arguments are sufficiently close.
We derive a randomized sorting algorithm that does \(O(n\log^{2}n)\) comparisons and gives a correct answer with high probability, addressing an open problem of Ajtai, Feldman, Hassadim, and Nelson [1]. Our algorithm also implies a selection algorithm that does \(O(n\log n)\) comparisons and gives a correct answer with high probability. Both of these results are a log factor away from the naive lower bound. [1] shows an \(\Omega(n^{1+\varepsilon})\) lower bound for both sorting and selection in the deterministic case, so our results also prove a discrepancy between what is possible with deterministic and randomized algorithms in this setting.
We also consider both sorting and selection in rounds, exploring the tradeoff between accuracy, number of comparisons, and number of rounds. Using results from sorting networks, we give general algorithms for sorting in \(d\) rounds where the number of comparisons increases with \(d\) and the accuracy decreases with \(d\). Using these algorithms, we derive selection algorithms in \(d+O(\log d)\) rounds that use the same number of comparisons as the corresponding sorting algorithm, but have a constant accuracy. Notably, this gives selection algorithms in \(d\) rounds that use \(n^{1+o(1)}\) comparisons and have constant accuracy for all \(d=\omega(1)\), which still beats the deterministic lower bound of \(\Omega(n^{1+\varepsilon})\).
## 1 Introduction
Comparison-based sorting and selection are two of the most well-studied computational problems, with applications in all aspects of computing. Often, these problems are studied with the goal to minimize the number of comparisons needed. Classical results show that sorting takes \(\Theta(n\log n)\) comparisons [1] and selection takes \(\Theta(n)\) comparisons [10].
Comparison-based sorting and selection have also been extensively studied in the parallel case. The round-based model of parallelism we consider was introduced by Valiant [14] for comparison-based problems, where groups of comparisons are done in rounds of interaction. There followed a long line of research in parallel sorting and selection [1, 2, 3, 15, 16, 17, 18, 19, 20]. Particularly similar to the problems studied in this paper are parallel sorting with limited closure [1, 2] and sorting networks of arity \(k\) with low depth [13, 14, 15, 16, 17, 18, 19, 21, 22], the latter of which we will use to derive our general sorting algorithms.
However, often it is not possible to guarantee comparisons are completely precise. For example, when ranking chess players, or comparing job applicants. Due to this, the problems of sorting and selection with imprecise comparators have also been widely considered.
Depending on the model, the manner in which the comparator is imprecise differs. The adversarial comparison model we will study was introduced in [1]. If the values being compared differ by more than some threshold \(\delta\), the comparison is correct, otherwise the result of the comparison can be chosen arbitrarily by an adversary. There are two adversary models that have been considered (as described in [1]): the _non-adaptive_ model, where all of the comparisons must be predetermined by the adversary before the algorithm is run, and the _adaptive_ model, where the comparisons can be chosen by the adversary at the time they are queried, possibly depending on the previous queries made by the algorithm. In this paper, we focus entirely on the adaptive model, and as our results are all upper bounds, all of our results imply equivalent results for the easier non-adaptive model. By scaling, we assume \(\delta=1\).
Since the comparisons are imprecise, it is impossible to always determine the correct result, so algorithms in this setting instead strive to achieve a small approximation factor, which measures how far away the returned solution is from the correct one. More precisely, we say an ordering \(Y\) of a set \(X\) is a \(k\)-approximate sorting if all inversions differ in value by at most \(k\). In their original paper, Ajtai, Feldman, Hassidim, and Nelson [1] give deterministic \(k\)-approximate algorithms for sorting and selection that use \(O(4^{k}\cdot n^{1+1/2^{k-1}})\) and \(O(2^{k}\cdot n^{1+1/2^{k-1}})\) comparisons respectively. They also give a lower bound of \(\Omega(n^{1+1/2^{k-1}})\) for deterministic \(k\)-approximate sorting and selection. Special consideration is given to the case of selecting the maximum element, for which they give a randomized algorithm that uses \(O(n)\) comparisons and returns a \(3\)-approximation with probability \(1-n^{-r}\). This maximum selection result was then improved by Acharya, Falahatgar, Jafarpour, Orlitsky, and Suresh [1] who gave a randomized algorithm that uses \(O(n\log\frac{1}{\varepsilon})\) comparisons and returns a \(2\)-approximation with probability \(1-\varepsilon\). The study of these problems in the parallel setting was introduced by Gopi, Kamath, Kulkarni, Nikolov, Wu, and Zhang [2], where they gave a randomized \(d\)-round algorithm that uses \(O(n^{1+\frac{1}{2^{d-1}}}d)\) comparisons and returns a \(3\)-approximate maximum with probability \(0.9\). This raises the following questions: can _randomized_ algorithms yield an improvement in sorting and general selection? How many comparisons are required to do sorting and general selection in \(d\) rounds?
### Results, Techniques, and Discussion
To describe our results, we more formally define the model and the problems of approximate sorting and selection.
**Definition 1.1**.: _Suppose we are given \(n\) items \(x_{1},\ldots,x_{n}\) with unknown real values. An adversarial comparator \(C\) is a function that takes two items \(x_{i}\) and \(x_{j}\) and returns \(\max\{x_{i},x_{j}\}\) if \(|x_{i}-x_{j}|>1\) and \(x_{i}\) or \(x_{j}\) adversarially otherwise._
Throughout this paper, we assume the _adaptive adversary_ model [1], where the adversarial comparisons may depend on previous queries made by the algorithm.
We first define a notion of \(k\)-approximate sorting, in which inversions may only occur between values that differ by at most \(k\).
**Definition 1.2**.: _We say \(x_{i}\geq_{k}x_{j}\) if \(x_{i}\geq x_{j}-k\). For sets of items \(Y,Z\), we say \(Y\geq_{k}Z\) if \(x_{i}\geq_{k}x_{j}\) for all \(x_{i}\in Y\), \(x_{j}\in Z\). We say some ordering \(x_{1},\ldots,x_{n}\) of items in a set \(X\) is a \(k\)-approximate sorting if \(x_{j}\geq_{k}x_{i}\) for all \(j>i\). Equivalently, for any pair \(x_{i},x_{j}\) in the wrong order, they must differ by at most \(k\)._
This leads to a notion of approximate \(i\)-selection, in which the result must be the \(i\)-th element of some approximate sorting.
**Definition 1.3**.: _We say an item \(x^{*}\) in a set \(X\) is a \(k\)-approximate \(i\)-selection if there exists a \(k\)-approximate sorting \(x_{1},\ldots,x_{n}\) of \(X\) such that \(x_{i}=x^{*}\)._
We show that this definition is equivalent to the result differing from the "actual" \(i\)-th smallest element by at most \(k\).
**Lemma 1.4**.: _An item \(x_{j}\) is a \(k\)-approximate \(i\)-selection if and only if \(|x_{j}-x_{i}|\leq k\) where \(x_{i}\) is the actual \(i\)-th smallest element of \(X\)._
We proceed with our results. We begin in the non-parallel setting (although our algorithms still have good round guarantees). We provide the following near-optimal approximate sorting algorithm.
**Theorem 1.5**.: _There exists a randomized algorithm that takes \(O(n\log^{2}n)\) comparisons, uses \(O(\log n)\) parallel rounds, and returns a \(4\)-approximate sorting with probability \(>1-\frac{1}{n^{2}}\)._
Since any approximate sorting algorithm must be able to correctly sort any list of numbers after scaling, such an algorithm must take \(\Omega(n\log n)\) comparisons by the well-known lower bound. Thus, this result is a log factor away from optimal. Note that no algorithm can give better than a \(2\)-approximation, as the adversary can force \(0>1>2>0\), which can make \(0,1,2\) indistinguishable [1]. The best prior result is of [1], where they show quicksort gives a \(2\)-approximate sorting in \(O(n\log n)\) expected comparisons against the non-adaptive adversary. This approach falls apart against the adaptive adversary, however, as if all values are the same, the adversary can force all pivots to compare less than all elements, forcing the algorithm to do \(\Omega(n^{2})\) comparisons. Our result shows that it is possible to get a constant approximate sorting in near-optimal number of comparisons even against the adaptive adversary. Previously, this problem had also been studied in the deterministic case [1], where an upper bound of \(O(4^{k}\cdot n^{1+1/2^{k-1}})\) and a lower bound of \(\Omega(n^{1+1/2^{k-1}})\) comparisons were proven for \(k\)-approximate sorting. Taking this result with \(k=4\), we get a lower bound of \(\Omega(n^{9/8})\) for \(4\)-approximate deterministic sorting. Thus, our algorithm shows a distinction between randomized and deterministic algorithms in this problem. To get an \(\widetilde{O}(n)\) deterministic algorithm, one could at best provide a \(\Omega(\log\log n)\)-approximation.
Our algorithm uses the fact that randomized quicksort has good comparison complexity if there are not big groups of close elements, as the adversary cannot force the pivots too far away. Thus, if randomized quicksort does not work, there must be a large cluster of close elements that we can exploit. We then estimate the order of each element using a \(O(\log n)\) size sample of items, using the existence of this cluster to guarantee our estimates are accurate. Finally, we use these approximate orders to find a partition of the input items, and recursively solve as in quicksort.
Our algorithm also implies a similar selection algorithm.
**Corollary 1.6**.: _For any \(i\), there exists a randomized algorithm that takes \(O(n\log n)\) comparisons and returns a \(4\)-approximate \(i\)-selection with probability \(>1-\frac{1}{n^{2}}\)._
Similarly, this result is a log factor away from optimal. Again, the best prior result is of [1], where their analysis also shows that quickselect gives a \(2\)-approximate selection in \(O(n)\) expected comparisons against the non-adaptive adversary. Against the adaptive adversary, this approach fails in an identical way to quicksort. Our result shows that it is possible to get a constant approximate selection in near-optimal number of comparisons even against the adaptive adversary. This was also studied in the deterministic setting [1], where an equivalent \(\Omega(n^{9/8})\) lower bound was shown, so we also show a distinction between randomized and deterministic in this case. Similarly, any \(\widetilde{O}(n)\) deterministic algorithm could at best return a \(\Omega(\log\log n)\)-approximation.
Our approach is identical to the sorting algorithm, except we only have to recursively solve on the relevant side of the partition, as in quickselect.
Next, we provide a family of algorithms that explore the tradeoff between number of rounds, number of comparisons, and approximation factor in the sorting case.
**Theorem 1.7**.: _For any integer \(d>0\), there exists a deterministic algorithm that takes \(d\) rounds, uses \(n^{1+O(1/d)}d\) comparisons, and returns a \(2d\)-approximate sorting._
Again, any approximate sorting algorithm must be able to correctly sort any list of numbers, so such an algorithm must take \(\Omega(n^{1+1/d})\) comparisons [1]. Thus, this algorithm is optimal up to a constant factor of \(1/d\) in the exponent. However, this constant factor is large, as it arises from the notoriously bad constant of the AKS sorting network [1]. The best prior result is the aforementioned deterministic algorithms from [1]. Their \(k\)-approximate algorithm uses \(\Omega(n^{1-\frac{1}{2^{k}-1}})\) rounds and \(O(4^{k}n^{1+1/2^{k-1}})\) comparisons. Thus, their algorithm uses \(\Omega(n^{2/3})\) rounds at best. We drastically improve this by giving algorithms that can use an arbitrarily small number of rounds, which could not be done by any prior algorithm (except for the trivial \(1\) round round robin tournament). However, our comparison bound is worse than that of [1], as it is not possible to achieve their comparison complexity even for regular sorting in rounds.
Our algorithm uses a connection between this problem and the problem of sorting networks that use a sorting oracle of arity \(k\). We use a result based on the AKS sorting network [1] that gives sorting networks with asymptotically optimal depth \(O(\log_{k}n)\). We then show that these networks imply good algorithms for adversarial sorting, by showing that each round can incur at most \(2\) additional approximation error.
Since the constant factor in the exponent is large, we also provide an asymptotically worse algorithm (with respect to \(d\)) with smaller constant that is better for small constant \(d\).
**Theorem 1.8**.: _For any integer \(d>0\), there exists a deterministic algorithm that takes \(d\) rounds, uses \(n^{1+2/\sqrt{d}}d\) comparisons, and returns a \(2d\)-approximate sorting._
Our final result is an extension of these algorithms to selection algorithms that guarantee a constant approximation.
**Theorem 1.9**.: _For any integer \(d>1\) and \(i\), there exists a randomized algorithm that takes \(d+O(\log d)\) rounds, uses \(n^{1+O(1/d)}d\log n\) comparisons, and returns a \(202\)-approximate \(i\)-selection with probability \(>1-\frac{1}{n^{2}}\)._
This result uses the previous sorting result, as well as the maximum selection in rounds result from [1]. Similarly, such an algorithm must take \(\Omega(n^{1+1/(d+O(\log d))})\) comparisons, so our algorithm is optimal up to a constant factor of \(1/d\) in the exponent. The best prior result is again the deterministic selection algorithms of [1], but their algorithms similarly use \(\Omega(n^{1-\frac{1}{2^{k}-1}})=\Omega(n^{2/3})\) rounds. Thus, our algorithm is again a drastic improvement in terms of round complexity. On top of this, for \(d=\omega(1)\), our algorithm uses \(n^{1+o(1)}\) comparisons, which still beats the deterministic lower bound of [1], regardless of the number of rounds the deterministic algorithm uses. Thus, we show that randomized algorithms can beat the best deterministic algorithms even when restricted to an arbitrarily small number of rounds (as long as it increases with \(n\)).
Our algorithm repeatedly approximates the \(k\)-th element by taking \(n^{2/3}\log n\) random subsets of size \(n^{1/3}\), sorting them with depth \(d\), and splitting around position \(k/n^{2/3}\). This results in us reducing the problem to that of size \(n^{5/6}\), which we can then solve with a constant approximation
using one of our sorting algorithms. The depth \(d\) sorting does not guarantee a good approximation, so we instead use a gap-preserving property of all approximate sorting algorithms to show that this must give a good approximation if there are few elements close to the \(k\)-th smallest. If there are many close elements, we can instead sample a large subset and estimate the \(k\)-th smallest directly, which is likely to give us one of the close elements. We then show a method of combining these two algorithms to show it is possible to always get a good approximation.
The following tables summarize the previous results for the problems of approximate sorting and selection along with our contributions.
### Related Work
Imprecise comparisons were first considered by Renyi [14] and Ulam [15] in the setting of binary search. The model described allows for a bounded number of incorrect comparisons. An optimal algorithm for this problem was given by Rivest, Meyer, Kleitman, Winklmann, and Spencer [13] that uses \(O(\log n)\) comparisons. This problem was considered in the parallel setting by Negro, Parlati, and Ritrovato [12] where they give optimal algorithms for a fixed number of rounds and errors.
Binary search has also been considered in the setting where comparisons are incorrect with some probability \(p<\frac{1}{2}\)[10, 1, 13]. Pelc [14] gave an algorithm that uses \(O(\log n)\) comparisons and gives the correct answer with probability \(1-\varepsilon\) if \(p<\frac{1}{3}\). For \(\frac{1}{3}\leq p<\frac{1}{2}\), he gave an algorithm that uses \(O(\log^{2}n)\) comparisons. A later result from Borgstrom and Kosaraju [1] implies an optimal \(O(\log n)\) algorithm for all \(p<\frac{1}{2}\).
Sorting with imprecise comparisons was first considered by Lakshmanan, Ravikuman, and Ganesan [11] in the model where the number of incorrect comparisons is bounded by a function \(e(n)\). They gave a lower bound of \(\Omega(n\log n+en)\) comparisons and an upper bound of \(O(n\log n+en+e^{2})\) comparisons. The upper bound was later improved to match the lower bound by Bagchi [1] and Long [12].
\begin{table}
\begin{tabular}{||c|c|c|c|c|c||} \hline Paper & Adversary & Randomized? & Approximation & Query Complexity & Round Complexity \\ \hline [1] & Non-Adaptive & Deterministic & 2 & \(O(n\log n)\) & \(O(\log n)\) \\ \hline [1] & Adaptive & Deterministic & \(k\) & \(O(2^{k}n^{1+1/2^{k-1}})\) & \(O(n^{1-1/(2^{k}-1)})\) \\ \hline Our Paper & Adaptive & Randomized & 4 & \(O(n\log n)\) & \(O(\log n)\) \\ \hline Our Paper & Adaptive & Deterministic & \(2d\) & \(n^{1+O(1/d)}d\log n\) & \(d\) \\ \hline Our Paper & Adaptive & Randomized & 202 & \(n^{1+O(1/d)}d\log n\) & \(d+O(\log d)\) \\ \hline \end{tabular}
\end{table}
Table 1: Sorting
\begin{table}
\begin{tabular}{||c|c|c|c|c|c||} \hline Paper & Adversary & Randomized? & Approximation & Query Complexity & Round Complexity \\ \hline [1] & Non-Adaptive & Deterministic & 2 & \(O(n)\) & \(O(\log n)\) \\ \hline [1] & Adaptive & Deterministic & \(k\) & \(O(2^{k}n^{1+1/2^{k-1}})\) & \(O(n^{1-1/(2^{k}-1)})\) \\ \hline Our Paper & Adaptive & Randomized & 4 & \(O(n\log n)\) & \(O(\log n)\) \\ \hline Our Paper & Adaptive & Deterministic & \(2d\) & \(n^{1+O(1/d)}d\log n\) & \(d\) \\ \hline Our Paper & Adaptive & Randomized & 202 & \(n^{1+O(1/d)}d\log n\) & \(d+O(\log d)\) \\ \hline \end{tabular}
\end{table}
Table 2: Selection
Sorting with comparisons that are incorrect with probability \(p<\frac{1}{2}\) was considered by Feige, Peleg, Raghavan, and Upfal [12] where they gave an algorithm that uses \(O(n\log(n/\varepsilon))\) queries and gives the correct answer with probability \(1-\varepsilon\).
Another common model is that of sorting networks with faulty comparisons. In the model where \(e\) gates may be faulty (i.e. do nothing), Yao and Yao [13] gave an algorithm that uses \(O(n\log n+en)\) gates. This model has been further studied in the cases where faulty gates may arbitrarily permute their inputs [1, 1] and where gates are faulty with some probability [10].
Recently, a comparison model has been considered where some comparisons are not allowed to be made at all. This model was introduced by Huang, Kannan, and Khanna [10] where they give a randomized algorithm that uses \(\widetilde{O}(n^{3/2})\) comparisons with high probability provided the input is sortable. They also give an algorithm that uses \(\widetilde{O}(\min(n/p^{2},n^{3/2}\sqrt{p}))\) comparisons if the graph of forbidden comparisons is random with edge probability \(1-p\). This was recently improved by Kuszmaul and Narayanan [11] who give corresponding algorithms using \(\widetilde{O}(\sqrt{nm})\) and \(O(n\log(np))\) comparisons respectively.
None of these results apply to our comparison model, as the incorrect comparisons are either bounded or random. In our model, however, there can be any number of incorrect comparisons, which can be chosen adversarially. There is a notion of 'closeness' of elements that allows comparisons to be incorrect, for which there does not exist an analogue in other models. We note that if we were to 'disallow' all comparisons between items that differ by at most 1, using the aforementioned algorithms would give a good approximation. However, this would require additional knowledge of which pairs of elements are sufficiently close, which the algorithm does not have in our model.
Solving comparison-based problems in rounds has also been widely considered [1,
could move the values on each side of the gap arbitrarily far apart without affecting any comparisons. We say some item \(x^{*}\) is a \(k\)-left-approximation of another item \(x_{i}\) if \(x^{*}\geq x_{i}-k\). Similarly, \(x^{*}\) is a \(k\)-right-approximation of \(x_{i}\) if \(x^{*}\leq x_{i}+k\). Intuitively, \(x^{*}\) is a left-approximation if it is not "too far left" of \(x_{i}\), and similarly for a right-approximation. If an element is both a \(k\)-left-approximation and a \(k\)-right-approximation of \(x_{i}\), it must be a \(k\)-approximate \(i\)-selection.
### Randomized Sorting
The main issue with sorting in the adversarial comparison setting is balancing the worst-case approximation factor with the worst-case number of comparisons. Standard sorting algorithms either always guarantee a good approximation but can be forced to do many comparisons (e.g., quicksort), or always guarantee few comparisons but can be forced to give a bad approximation (e.g., mergesort). With deterministic algorithms, it was shown in [1] that it is not possible to get the best of both worlds, where they gave a tradeoff between approximation factor and comparisons. Our algorithm shows that in the randomized case, however, it is possible to be near-optimal in both aspects.
In the style of quicksort, our algorithm aims to partition the input set \(X\) into two sets \(Y\) and \(\overline{Y}\) such that \(\overline{Y}\geq_{4}Y\). If we can guarantee this in every recursive call, we must return a \(4\)-approximate sorting, as no pair differing by more than \(4\) can ever be put in the wrong order [1]. To ensure a good bound on the number of comparisons, we also require \(|Y|,|\overline{Y}|\geq n/8\).
The first phase of our algorithm attempts to partition using random pivots \(O(\log n)\) times. Let \(x_{L}\) and \(x_{R}\) be the \(n/8\)-th smallest and \(n/8\)-th largest elements of \(X\) respectively. For a fixed pivot \(x_{p}\), if \(x_{p}>x_{L}+1\), at least \(n/8\) items must compare less than \(x_{p}\). Thus, if \(|X\cap[x_{L},x_{L}+1]|\leq n/4\), by a Chernoff bound, less than half of the pivots we try will have less than \(n/8\) items compare less with high probability. Similarly, if \(|X\cap[x_{R}-1,x_{R}]|\leq n/4\), less than half of the pivots we try will have less than \(n/8\) items compare greater with high probability. If both of these inequalities are satisfied, it follows that we will find a "good" partition from one of our pivots with high probability. Otherwise, without loss of generality we assume more than half of the pivots had less than \(n/8\) compare less. In this case, we have \(|X\cap[x_{L},x_{L}+1]|>n/4\) with high probability. We will exploit this property in the other phases of the algorithm.
The second phase of our algorithm estimates the order of each element \(x_{i}\) by comparing it to a small subset of \(X\). Then, we create a partition by taking \(Y\) to be the elements with estimated order less than \(n/4\), and \(\overline{Y}\) the elements with estimated order at least \(n/4\). For some fixed item \(x_{i}<x_{L}-1\), there must be \(7n/8\) items that compare greater than it, so by a Chernoff bound, \(x_{i}\) ends up in \(Y\) with high probability. Similarly, if \(x_{i}>x_{L}+2\), there must be \(3n/8\) items that compare less than it (since \(|X\cap[x_{L},x_{L}+1]|>n/4\)), so by a Chernoff bound it ends up in \(\overline{Y}\) with high probability. Thus, by a union bound, \(\overline{Y}\geq_{3}Y\) with high probability. We note that if we did not have such high density in \([x_{L},x_{L}+1]\), the elements of order around \(n/4\) (which could be arbitrarily far apart value-wise) would be impossible to differentiate with a small sample.
The final phase of our algorithm ensures \(|Y|,|\overline{Y}|\geq n/8\). Without loss of generality we assume \(|Y|\leq|\overline{Y}|\). If \(|Y|\geq n/8\), nothing needs to be done as both sets are sufficiently large. Otherwise, we repeatedly sample \(m=O(\log n)\) elements of \(\overline{Y}\), sort them, and move the \(m/8\) smallest elements to \(Y\) until \(|Y|\geq n/8\). Since \(|Y|<n/8\) and \(|X\cap[x_{L},x_{L}+1]|>n/4\) before any iteration, there must be at least \(n/4\) items \(\leq x_{L}\) that are not in \(Y\). Thus, by a Chernoff bound, the \(m/8\) smallest elements are all \(\leq x_{L}\) with high probability. Since Tournament incurs error at most \(2\), it follows that \(\overline{Y}\geq_{4}Y\) at the end with high probability. Note that by choosing a random permutation to guarantee disjoint subsets, we can do this sampling in parallel.
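To make the three phases concrete, the following is a minimal runnable Python sketch (ours, not the paper's pseudocode) of the strategy just described. The constants (number of pivot attempts, sample size, block size) are illustrative stand-ins for the \(c_{1},c_{2},c_{3}\) choices analyzed in Section 4, `less` is an arbitrary (possibly adversarial) comparator supplied by the caller, and no high-probability guarantee is claimed for these toy parameters.

```
import random

def tournament(items, less):
    # 2-approximate sort: order items by their number of pairwise "wins".
    n = len(items)
    wins = [sum(less(items[j], items[i]) for j in range(n) if j != i) for i in range(n)]
    return [items[i] for i in sorted(range(n), key=lambda i: wins[i])]

def rsort(X, less, trials=24):
    n = len(X)
    if n <= 8:
        return tournament(X, less)

    # Pivot phase: try a few random pivots, hoping for a balanced split.
    for _ in range(trials):
        p = random.choice(X)
        Y, Ybar = [], []
        for x in X:
            (Y if less(x, p) else Ybar).append(x)
        if min(len(Y), len(Ybar)) >= n // 8:
            return rsort(Y, less, trials) + rsort(Ybar, less, trials)

    # Sample phase: estimate each element's rank from a few random comparisons
    # and split around estimated rank n/4.
    Y, Ybar = [], []
    for x in X:
        wins = sum(less(random.choice(X), x) for _ in range(trials))
        (Y if wins < trials / 4 else Ybar).append(x)

    # Shifting phase: rebalance by repeatedly moving the extreme part of a
    # Tournament-sorted block from the larger side to the smaller side.
    small_is_left = len(Y) <= len(Ybar)
    small, large = (Y, Ybar) if small_is_left else (Ybar, Y)
    random.shuffle(large)
    B = 8  # toy block size, standing in for the paper's O(log n) blocks
    while len(small) < n // 8:
        chunk, large = tournament(large[:B], less), large[B:]
        k = max(1, B // 8)
        if small_is_left:
            small, large = small + chunk[:k], large + chunk[k:]
        else:
            small, large = small + chunk[-k:], large + chunk[:-k]
    Y, Ybar = (small, large) if small_is_left else (large, small)
    return rsort(Y, less, trials) + rsort(Ybar, less, trials)

# Toy run with an honest comparator.
print(rsort(random.sample(range(1000), 60), lambda a, b: a < b))
```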
### Sorting in Rounds
The main issue with sorting in rounds is guaranteeing good comparison complexity. Many of the state of the art algorithms for sorting in rounds heavily rely on the existence of a "correct" sorting order to guarantee a low number of comparisons.
We use low-depth sorting networks of arity \(m\) using Tournament to implement the sorting oracle. Since each call to Tournament incurs error at most 2, each round incurs error at most 2, so we can show the total error is bounded by 2 times the number of rounds.
### Selection in Rounds
Our algorithm improves the approximation factor from the previous section's algorithm with a small \(O(\log d)\) additional round overhead. We use a similar approach to the maximum selection algorithm given in [1], where we first describe an algorithm that gives a good approximation if there are few elements around the actual answer, then describe an algorithm that gives a good approximation if there are many elements around the actual answer, then show a way to combine them to guarantee a good approximation always. Since we are looking for an element in the middle of the sorted order, however, there is some additional complexity with considering close elements on each side of the desired element.
Let \(x_{i}\) be the actual \(i\)-th smallest element of \(X\). The first part of our algorithm guarantees a good left-side approximation if \(|X\cap[x_{i}-1,x_{i}]|\leq\frac{1}{10}n^{2/3}\). Similarly, it guarantees a good right-side approximation if \(|X\cap[x_{i},x_{i}+1]|\leq\frac{1}{10}n^{2/3}\). We aim to partition \(X\) into three sets \(Z,Y,\Gamma\) such that elements of \(Z\) are "less than" \(x_{i}\), elements of \(\Gamma\) are "greater than" \(x_{i}\), and \(Y\) is the set of "candidate" elements to be \(x_{i}\). We sample \(cn^{2/3}\log n\) subsets of \(X\) of size \(n^{1/3}\), each time sorting using the depth \(d\) algorithm from the previous subsection. We then take the elements within \(n^{1/6}\) of the \(i/n^{2/3}\)-th element of each subset and add them to \(Y\) (the set of candidates). We say that elements in positions to the left of \(i/n^{2/3}-n^{1/6}\) are on the left side, and the rest of the elements are on the right side. After sampling all subsets, elements which are not in the set of candidates are partitioned into \(Z\) and \(\Gamma\) based on how frequently they are on the left side. Assume \(|X\cap[x_{i}-1,x_{i}]|\leq\frac{1}{10}n^{2/3}\); the other case is symmetric. In this case, roughly 90% of the sampled subsets will contain no elements in \([x_{i}-1,x_{i}]\). For each of these subsets, since our sorting algorithm must be _gap-preserving_, the sorting must be correct with respect to \([x_{i}-1,x_{i}]\). It thus follows by a tail bound for the Hypergeometric distribution and a union bound that for all \(x_{j}<x_{i}-1\), \(x_{j}\) will end up in \(Z\cup Y\) with high probability. Similarly, for \(x_{j}>x_{i}\), \(x_{j}\) will end up in \(Y\cup\Gamma\) with high probability. Finally, if \(|Z|\geq i\), we return the maximum element of \(Z\) computed with a depth \(O(\log d)\) maximum finding algorithm from [1]. If \(|Z|+|Y|<i\), we return the minimum element of \(\Gamma\) similarly. Otherwise, we return the \((i-|Z|)\)-th element of \(Y\) as determined by a constant depth, constant approximate sorting algorithm from the previous subsection (since \(|Y|=O(n^{5/6})\)). If \(|Z|\geq i\), there must be some element of \(Z\) that is \(\geq x_{i}\), so since the maximum finding algorithm returns a constant approximation, we return a constant left-approximation. If \(|Z|+|Y|<i\), we return some element of \(\Gamma\), and all elements of \(\Gamma\) are \(\geq x_{i}-1\). Otherwise, there can be at most \(i-|Z|-1\) elements of \(Y\) that are \(<x_{i}\), so the actual \((i-|Z|)\)-th element is \(\geq x_{i}\), so since we use a constant approximate sorting, the element we return is a constant left-approximation as desired.
The second part of our algorithm guarantees a good left side approximation given \(|X\cap[x_{i}-1,x_{i}]|>\frac{1}{10}n^{2/3}\) (and there is a symmetric algorithm that guarantees a good right side approximation). The idea is simple: sample \(O(n^{5/6})\) elements of \(X\), sort them with a constant approximation algorithm in constant rounds, and then take the \((i/n^{1/6}-n^{5/12})\)-th element. By another Hypergeometric tail bound, it follows that we always get a good right-approximation, and also get a good
left-approximation if \(|X\cap[x_{i}-1,x_{i}]|>\frac{1}{10}n^{2/3}\) as desired.
Finally, we describe how to combine these algorithms. First, we run the sparse algorithm and get the result \(x^{*}\). Then, we count the number of elements that compare less than the result. If this value is \(\leq i-1\), we must have \(x^{*}\leq x_{i}+1\). We then call the left-side dense algorithm and return the greater of the two results (according to the comparator). Since one of the two algorithms must return a constant left-approximation, the latter algorithm always returns a constant right approximation, and we already know that \(x^{*}\) is a constant right approximation, it follows that in the end we return a constant approximation. The case where the number of elements that compare less is \(\geq i\) is handled symmetrically.
## 3 Preliminaries
In this section, we give some basic definitions and results in the adversarial comparison setting which will serve as the basis for many of our algorithms.
**Definition 3.1**.: _Element \(x_{j}\) in the set \(X=\{x_{1},\ldots,x_{n}\}\) is of \(k\)-order \(i\) if there exists a partition \(S_{1},S_{2}\) of \(X\backslash\{x_{j}\}\) such that \(|S_{1}|=i-1\), and \(S_{2}\cup\{x_{j}\}\geq_{k}S_{1}\cup\{x_{j}\}\)._
This is the notion of approximate selection that was originally introduced in [1]. We show that this is equivalent to the intuitive notion we previously described, and then show that it is equivalent to something that is easier to work with.
**Lemma 3.2**.: _An item \(x_{j}\) in \(X\) is of \(k\)-order \(i\) if and only if \(x_{j}\) is a \(k\)-approximate \(i\)-selection._
Proof.: Assume \(S_{1},S_{2}\) exist as in the definition of \(x_{j}\) being \(k\)-order \(i\). Then, since \(S_{2}\cup\{x_{j}\}\geq_{k}S_{1}\cup\{x_{j}\}\), the concatenation of the sorted order of \(S_{1}\), \(x_{j}\), and the sorted order of \(S_{2}\) in that order is a \(k\)-approximate sorting by definition. Thus, \(x_{j}\) is a \(k\)-approximate \(i\)-selection as desired.
Similarly, assume there exists a \(k\)-approximate sorting \(Y\) of \(X\) where \(y_{i}=x_{j}\). Then, taking \(S_{1}\) to be the first \(i-1\) elements of \(Y\), and \(S_{2}\) to be the final \(n-i\) elements, it follows by definition that \(S_{2}\cup\{x_{j}\}\geq_{k}S_{1}\cup\{x_{j}\}\). Thus, \(x_{j}\) is of \(k\)-order \(i\) as desired.
**Lemma 3.3**.: _An item \(x_{j}\) in \(X\) is a \(k\)-approximate \(i\)-selection if and only if \(|x_{j}-x_{i}|\leq k\) where \(x_{i}\) is the actual \(i\)-th smallest element of \(X\)._
Proof.: Consider some \(k\)-approximate sorting \(Y\) of \(X\) where \(y_{i}=x_{j}\). Without loss of generality, assume \(x_{j}\leq x_{i}\). Since \(x_{i}\) is the \(i^{\text{th}}\) smallest element of \(X\), there must be at least \(n-i+1\) elements of \(X\) that are \(\geq x_{i}\). Thus, there must exist some element \(x_{\ell}\geq x_{i}\) that is to the left of \(x_{j}\) in \(Y\) since there are only \(n-i\) places to the right that they can be. It follows that \(x_{j}\geq_{k}x_{\ell}\), which implies \(x_{j}\geq_{k}x_{i}\) since \(x_{\ell}\geq x_{i}\). Thus, \(x_{i}\geq x_{j}\geq x_{i}-k\), so \(|x_{j}-x_{i}|\leq k\).
Consider some element \(x_{j}\) in \(X\) such that \(|x_{j}-x_{i}|\leq k\). Let \(Y\) be the sorted order of \(X\), but with \(x_{j}\) and \(x_{i}\) swapped. All pairs that do not contain \(x_{i}\) or \(x_{j}\) must still be in the right order. Pairs \((x_{i},x_{\ell})\) that are in the wrong order must have \(x_{\ell}\) between \(x_{i}\) and \(x_{j}\) (or equal to \(x_{j}\)), so \(x_{\ell}\) and \(x_{i}\) must differ by at most \(k\). An identical argument applies to pairs \((x_{j},x_{\ell})\), so it follows that \(Y\) is a \(k\)-approximate sorting as desired.
**Corollary 3.4**.: _If \(Y\) is a \(k\)-approximate sorting of \(X\), and \(S\) is the actual sorting of \(X\), \(|y_{i}-s_{i}|\leq k\) for all \(i\)._
We define the notion of a _gap-preserving_ algorithm, where elements must be correctly sorted with respect to a gap of size \(1\). This will be useful in proving the correctness of our later algorithms.
**Definition 3.5**.: _We say a sorting algorithm is gap-preserving if, given there exists a gap of length \(1\) in the input, the sorting algorithm returns all elements before the gap before all elements after the gap. Formally, given input \(X\) such that there exists a gap \([y,y+1)\) where \(X\cap[y,y+1)=\emptyset\), the sorting algorithm must return all elements of \(X\) less than \(y\) before all elements of \(X\) greater than \(y\)._
**Lemma 3.6**.: _All approximate sorting algorithms are gap-preserving._
Proof.: We can shift the elements on one side of the gap arbitrarily far away without affecting any comparison results. Thus, if there exists an input for which a \(\tau(n)\)-approximate algorithm is not gap-preserving, then we can make the approximation factor larger than \(\tau(n)\) by shifting one side by more than that, a contradiction.
Recall that the Tournament algorithm for sorting a set \(X\) of items, as originally defined in [1], does all pairwise comparisons, and orders the items by the number of "wins" they have (i.e. the number of elements that compare less).
**Lemma 3.7**.: _Tournament is a \(2\)-approximate sorting algorithm._
Proof.: If \(x_{i}>x_{j}+2\), \(x_{i}\) must compare greater than all elements \(\leq x_{j}+1\) including \(x_{j}\). However, \(x_{j}\) can at most compare greater than all elements \(\leq x_{j}+1\) excluding itself, so it must come before \(x_{i}\) as desired.
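As an illustration (ours, not from the paper), the following toy Python comparator lies whenever the hidden values differ by at most \(1\) and answers correctly otherwise; running Tournament against it, any two output items that end up out of order differ by at most \(2\), as the lemma guarantees. The particular values and lying rule are arbitrary choices.

```
# Toy adversarial comparator: free to answer arbitrarily (here: always "less")
# whenever the hidden values differ by at most 1, and correct otherwise.
def make_adversarial_less(values):
    def less(i, j):
        if abs(values[i] - values[j]) <= 1:
            return True
        return values[i] < values[j]
    return less

def tournament(indices, less):
    # Order items by their number of pairwise "wins" (Lemma 3.7).
    wins = {i: sum(less(j, i) for j in indices if j != i) for i in indices}
    return sorted(indices, key=lambda i: wins[i])

values = [0.0, 0.4, 0.9, 3.0, 3.5, 7.0]
order = tournament(range(len(values)), make_adversarial_less(values))
# Check the 2-approximation: any item placed earlier exceeds a later one by at most 2.
assert all(values[a] <= values[b] + 2
           for pos, a in enumerate(order) for b in order[pos + 1:])
```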
We show that partitioning as in quicksort guarantees a good approximation factor, which will be the basis of our randomized sorting algorithm. This was originally shown in [1].
**Lemma 3.8**.: _Let \(x_{i}\) be some item in a set \(X\). Let \(S=\{x_{j}\mid x_{j}<_{c}x_{i}\}\) and \(T=X\backslash S\). We must have \(T\geq_{2}S\)._
Proof.: All elements of \(S\) must be \(\leq x_{i}+1\), and all elements of \(T\) must be \(\geq x_{i}-1\). Thus, for \(x_{j}\in S,x_{k}\in T\), \(x_{j}-x_{k}\leq(x_{i}+1)-(x_{i}-1)=2\) as desired.
**Lemma 3.9**.: _If a sorting algorithm repeatedly partitions the input set \(X\) into two sets \(S,T\) such that \(T\geq_{k}S\), recursively sorts \(S\) and \(T\) and then concatenates them, it is guaranteed to result in a \(k\)-approximate sorting._
Proof.: Assume for the sake of contradiction that there exist \(x_{i},x_{j}\) for \(i>j\) in the final order such that \(x_{i}\ngeq_{k}x_{j}\). We must have put \(x_{i}\) in \(T\) and \(x_{j}\) in \(S\) in some recursive call, but this contradicts \(T\geq_{k}S\), as desired.
Throughout, when referring to a sorted order, we assume a fixed sorted order with ties broken arbitrarily. Unless otherwise stated, all logarithms are base \(e\). We often ignore rounding errors that vanish for large \(n\).
## 4 A Randomized Sorting (and Selection) Algorithm
In this section we prove Theorem 1.5 by describing an algorithm \(\mathtt{RSort}\). This algorithm is similar to quicksort in the sense that we aim to partition the original set of items \(X\) into two sets \(S,T\), and then recursively sort \(S\) and \(T\) and concatenate them. Recall by Lemma 3.9 that it is sufficient to have \(T\geq_{4}S\) in every call. Thus, our algorithm aims to find a partition \(S,T\) of \(X\) such that \(T\geq_{4}S\). To
ensure the recursion depth is \(O(\log|X|)\), we also aim to have \(|S|,|T|\geq\frac{|X|}{8}\). Our algorithm consists of three phases, which we will analyze independently. Throughout the algorithm, we let \(n\) be the size of the current set \(X\) the function is being called on, and we let \(N\) be the size of the initial set \(X\) that RSort was called on. This distinction is important, as we want our probability guarantees to be with respect to the size of the original caller.
### The Pivot Phase
```
1:\(R\gets 0\)
2:\(L\gets 0\)
3:loop \(8c_{1}\log N\) times
4: pick a pivot \(x_{p}\) at random
5:\(Y\leftarrow\{x\in X:x<_{c}x_{p}\}\)
6:\(\overline{Y}\gets X\backslash Y\)
7:if\(\min(|Y|,|\overline{Y}|)\geq\frac{n}{8}\)then
8:return\((Y,\overline{Y})\)
9:elseif\(|Y|<\frac{n}{8}\)then
10:\(L\gets L+1\)
11:else
12:\(R\gets R+1\)
13:endif
14:endloop
```
**Algorithm 1** Pivot Phase
In this phase, we aim to use a pivot as in quicksort to find the desired partition, in which case we return early. If we do not find such a pivot, the input set has additional structure with high probability, which we will use in the rest of the algorithm.
\(8c_{1}\log N\) elements \(x_{p}\) are randomly chosen and used as pivots. This splits \(X\) into two sets \(Y\) and \(\overline{Y}\) such that \(\overline{Y}\geq_{2}Y\) by Lemma 3.8. If both of these sets are sufficiently large, then we have found our desired partition, and we return early. Otherwise, if \(|Y|<\frac{n}{8}\), we say \(x_{p}\)_goes left_ and if \(|\overline{Y}|<\frac{n}{8}\), we say \(x_{p}\)_goes right_. Let \(x_{L}\) be the \(n/8\)-th smallest element of \(X\) and \(x_{R}\) the \(n/8\)-th largest. If \(x_{p}>x_{L}+1\), then \(x_{p}\) cannot go left, and symmetrically, if \(x_{p}<x_{R}-1\), then \(x_{p}\) cannot go right. Let \(X_{L}=\{x_{p}\in X:x_{p}\leq x_{L}+1\}\) and \(X_{R}=\{x_{p}\in X:x_{p}\geq x_{R}-1\}\). The elements in \(X\backslash(X_{L}\cup X_{R})\) are thus guaranteed to neither go left or go right. Intuitively, if this set is sufficiently big, we expect to find such a pivot. Otherwise, either \(X_{L}\) or \(X_{R}\) must be large. The variables \(L\) and \(R\) in the code count how many pivots go left and right respectively. Again intuitively, we expect \(L>R\) if \(X_{R}\) is small and vice versa. These intuitive statements are captured in the following lemmas:
**Lemma 4.1**.: _If \(|X_{L}|<\frac{3n}{8}\), for any constant \(r>0\), we can choose \(c_{1}\) sufficiently large such that \(\Pr[L\geq 4c_{1}\log N\text{ after pivot phase}]<\frac{1}{N^{r}}\)._
Proof.: Let \(A_{i}\) be a random variable that takes value \(1\) if the \(i^{\text{th}}\) pivot \(x_{p}\) is in \(X_{L}\), and \(0\) otherwise. Note that if \(A_{i}\) is \(0\), we cannot increment \(L\) in the \(i^{\text{th}}\) iteration, so we have \(L\leq A=\sum_{i}A_{i}\). Let \(\mu=\mathbb{E}[A]=8c_{1}\log N\frac{|X_{L}|}{n}\geq c_{1}\log N\). By a Chernoff bound, we have:
\[\Pr[L\geq 4c_{1}\log N\text{ after the pivot phase}] \leq\Pr[A\geq 4c_{1}\log N]\] \[=\Pr[A\geq(1+\delta)\mu]\]
Where \(\delta=\frac{n}{2|X_{L}|}-1>\frac{1}{3}\)
\[\leq e^{-\frac{\delta^{2}\mu}{2+\delta}}\] \[<e^{-\frac{\mu}{21}}\] \[\leq e^{-\log N\frac{c_{1}}{21}}\] \[=N^{-\frac{c_{1}}{21}}\]
Thus, choosing \(c_{1}\geq 21r\), we get
\[\Pr[L\geq 4c_{1}\log N\text{ after the pivot phase}]<\frac{1}{N^{r}}.\]
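For reference (our restatement, not part of the original argument), the multiplicative Chernoff bound is used throughout in the form \(\Pr[A\geq(1+\delta)\mu]\leq e^{-\frac{\delta^{2}\mu}{2+\delta}}\), valid for any \(\delta>0\) when \(A\) is a sum of independent indicator variables with mean \(\mu\).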
**Corollary 4.2**.: _If \(|X_{R}|<\frac{3n}{8}\), for any constant \(r>0\), we can choose \(c_{1}\) sufficiently large such that \(\Pr[R\geq 4c_{1}\log N\text{ after pivot phase}]<\frac{1}{N^{r}}\)._
Proof.: Symmetric.
Throughout the rest of the analysis, we will assume \(L\geq R\). The other case is handled symmetrically.
### The Sample Phase
```
1:\(Y\leftarrow\emptyset\)
2:for\(x_{i}\in X\)do
3:\(C\gets 0\)
4:loop\(8c_{2}\log N\) times
5: Choose \(z\in X\) at random
6:if\(z<_{c}x_{i}\)then
7:\(C\gets C+1\)
8:endif
9:endloop
10:if\(C<2c_{2}\log N\)then
11:\(Y\gets Y\cup\{x_{i}\}\)
12:endif
13:endfor
```
**Algorithm 2** Sample Phase
In this phase, for each element \(x_{i}\) we estimate its position in the sorted array by comparing it to a small subset of \(X\). All elements with estimated position less than \(\frac{n}{4}\) are put in set \(Y\) and the remaining elements are put in \(\overline{Y}\). Since \(|X_{L}|\geq\frac{3n}{8}\), all elements with positions between \(\frac{n}{8}\) and \(\frac{3n}{8}\) are in \([x_{L},x_{L}+1]\). Thus, since all elements \(<x_{L}-1\) compare less than all of these elements, we intuitively expect them to have estimated position less than \(\frac{n}{4}\) even on a small subset. Similarly, since all elements \(>x_{L}+2\) compare greater than all of those elements, we intuitively expect them to have estimated position greater than \(\frac{n}{4}\). These statements are captured in the following lemmas:
**Lemma 4.3**.: _If \(|X_{L}|\geq\frac{3n}{8}\), for any constant \(r>0\) we can choose \(c_{2}\) sufficiently large such that_
\[\Pr[\exists x_{i}<x_{L}-1:C\geq 2c_{2}\log N]<\frac{1}{N^{r}}.\]
Proof.: Let \(U\) be the set of the smallest \(n/8\) elements of \(X\). Consider the iteration of the sample phase's outer loop for some fixed \(x_{i}<x_{L}-1\). Let \(A_{i}\) be a random variable that takes value 1 if the \(i^{\text{th}}\) random element is \(\in U\) and 0 otherwise. Note that if \(A_{i}\) is 0, we cannot increment \(C\) in the \(i^{\text{th}}\) iteration. Thus, \(C\leq A=\sum_{i}A_{i}\). Let \(\mu=\mathbb{E}[A]=8c_{2}\log N\frac{|U|}{n}=c_{2}\log N\). We have:
\[\Pr[C\geq 2c_{2}\log N] \leq\Pr[A\geq 2c_{2}\log N]\] \[=\Pr[A\geq(1+\delta)\mu]\]
Where \(\delta=1\)
\[\leq e^{-\frac{\delta^{2}\mu}{2+\delta}}\] \[=e^{-\frac{\mu}{3}}\] \[=e^{-\log N\frac{c_{2}}{3}}\] \[=N^{-\frac{c_{2}}{3}}\]
Thus, choosing \(c_{2}\geq 3(r+1)\), we get
\[\Pr[C\geq 2c_{2}\log N] \leq\frac{1}{N^{r+1}}\]
By a union bound:
\[\Pr[\exists x_{i}<x_{L}-1:C\geq 2c_{2}\log N] \leq\#\{x_{i}<x_{L}-1\}\frac{1}{N^{r+1}}\] \[<\frac{1}{N^{r}}.\]
**Lemma 4.4**.: _If \(|X_{L}|\geq\frac{3n}{8}\), for any constant \(r>0\) we can choose \(c_{2}\) sufficiently large such that_
\[\Pr[\exists x_{i}>x_{L}+2:C<2c_{2}\log N]<\frac{1}{N^{r}}.\]
Proof.: Similar to the previous Lemma, using the fact that \(|X_{L}|\geq\frac{3n}{8}\implies\#\{x_{i}\leq x_{L}+1\}\geq\frac{3n}{8}\).
**Corollary 4.5**.: _If \(|X_{L}|\geq\frac{3n}{8}\), for any constant \(r>0\) we can choose \(c_{2}\) sufficiently large such that after the sample phase, \(\Pr[\max(Y)>x_{L}+2]<\frac{1}{N^{r}}\)._
Proof.: If \(\max(Y)>x_{L}+2\) then we must have had \(C<2c_{2}\log N\) for some \(x_{i}>x_{L}+2\), which happens with probability \(<\frac{1}{N^{r}}\) by the previous Lemma.
**Corollary 4.6**.: _If \(|X_{L}|\geq\frac{3n}{8}\), for any constant \(r>0\) we can choose \(c_{2}\) sufficiently large such that after the sample phase, \(\Pr[\min(X\backslash Y)<x_{L}-1]<\frac{1}{N^{r}}\)._
### The Shifting Phase
```
1: Let \(P\) be a random permutation of \(X\backslash Y\)
2:\(i\gets 0\)
3:\(B\gets 4c_{3}\log N\)
4:while\(|Y|<\frac{n}{8}\)do
5:\(Z\gets P[i..i+7B)\)
6:\(Z\leftarrow\) Tournament\((Z)\)
7:\(Y\gets Y\cup Z[0..B)\)
8:\(i\gets i+7B\)
9:endwhile
10: Let \(P\) be a random permutation of \(Y\)
11:\(i\gets 0\)
12:while\(|Y|>\frac{7n}{8}\)do
13:\(Z\gets P[i..i+7B)\)
14:\(Z\leftarrow\) Tournament\((Z)\)
15:\(Y\gets Y\backslash Z[6B..7B)\)
16:\(i\gets i+7B\)
17:endwhile
18: Let \(\overline{Y}=X\backslash Y\)
19:return\((Y,\overline{Y})\)
```
**Algorithm 3** Shifting Phase
In this phase, if either \(Y\) or \(\overline{Y}\) is too big, we move some elements to the other set to ensure they both have size \(\geq n/8\). Since the two cases are symmetric, without loss of generality, we assume \(|Y|\leq|\overline{Y}|\). We partition \(\overline{Y}\) into small subsets, and move the minimum \(1/8\)-th of each subset into \(Y\) until \(|Y|\geq n/8\). Since at least \(3n/8\) elements of \(X\) are \(\leq x_{L}+1\), at least \(1/4\)-th of the elements in \(\overline{Y}\) are \(\leq x_{L}+1\), so even for small subsets we expect the smallest \(1/8\)-th to be all \(\leq x_{L}+1\). Thus, since Tournament returns a \(2\)-approximate sorting by Lemma 3.7, we expect the elements we add to \(Y\) to be \(\leq x_{L}+3\). These intuitive statements are captured in the following lemmas:
**Lemma 4.7**.: _If \(|X_{L}|\geq\frac{3n}{8}\) and \(\max(Y)\leq x_{L}+2\) after the sample phase, for any constant \(r>0\) we can choose \(c_{3}\) sufficiently large such that after the shifting phase, \(\Pr[\max(Y)>x_{L}+3]<\frac{1}{N^{r}}\)_
Proof.: Consider some iteration of the first loop of the shifting phase. Recall that Tournament returns a \(2\)-approximate sorting. Thus, if in each iteration of the loop, \(Z\) has at least \(B\) elements \(\leq x_{L}+1\), then we are guaranteed to only add elements \(\leq x_{L}+3\) to \(Y\). Since \(|X_{L}|\geq\frac{3n}{8}\) and \(|Y|<\frac{n}{8}\), there must be at least \(\frac{n}{4}\) elements \(x\in X\backslash Y\) such that \(x\leq x_{L}+1\). Let \(U\) be the set of the \(\frac{n}{4}\) smallest elements of \(X\backslash Y\), breaking ties arbitrarily. Let \(A_{i}\) be a random variable that takes value \(1\) if \(Z_{i}\not\in U\) (before sorting), and \(0\) otherwise. Note that for any subset \(S\) of \(\{A_{i}\}\), \(\Pr\bigl{[}\bigwedge_{i\in S}A_{i}\bigr{]}=\Pr[A_{S_{0}}]\Pr[A_{S_{1}}|A_{S_{0}}]\ldots\Pr\Bigl{[}A_{S_{|S|}}|A_{S_{0}},\ldots,A_{S_{|S|-1}}\Bigr{]}\leq\frac{3n/4}{n}\cdot\frac{3n/4-1}{n-1}\cdots\frac{3n/4-|S|+1}{n-|S|+1}\leq\left(\frac{3}{4}\right)^{|S|}\), since each conditional factor is at most \(\frac{3}{4}\). Let
\(C=\#\{x_{i}\in Z|x_{i}>x_{L}+1\}\). Clearly, \(x_{i}>x_{L}+1\implies x_{i}\not\in U\), so \(C\leq A=\sum_{i}A_{i}\). We have:
\[\Pr[C\geq 6B] \leq\Pr[A\geq 6B]\] \[\leq e^{-7B\left(2\left(\frac{6}{7}-\frac{3}{4}\right)^{2}\right)}\] \[=e^{-\frac{9B}{56}}\] \[=e^{-\log N\frac{9c_{3}}{14}}\] \[=N^{-\frac{9c_{3}}{14}}\]
Thus, choosing \(c_{3}\geq\frac{14(r+1)}{9}\)
\[\leq\frac{1}{N^{r+1}}\]
By a union bound:
\[\Pr[C\geq 6B\text{ on some iteration}] \leq\frac{n}{B}\frac{1}{N^{r+1}}\] \[<\frac{1}{N^{r}}.\]
Here we use the generalized Chernoff bound from Theorem 1.1 of [10].
**Lemma 4.8**.: _If \(|X_{L}|\geq\frac{3n}{8}\) and \(\min(X\backslash Y)\geq x_{L}-1\) after the sample phase, for any constant \(r>0\) we can choose \(c_{3}\) sufficiently large such that after the shifting phase, \(\Pr[\min(X\backslash Y)<x_{L}-2]<\frac{1}{N^{r}}\)._
Proof.: Similar to Lemma 4.7, noting that since \(x_{L}\) is the \(\frac{n}{8}\)-th smallest element of \(X\) and \(|X\backslash Y|<\frac{n}{8}\), there must be at least \(\frac{3n}{4}\) elements \(x\in Y\) such that \(x\geq x_{L}\).
### Tying it together
We conclude the probability bounds for the algorithm and describe the comparison and round complexity.
**Lemma 4.9**.: _For any constant \(r>0\), we can choose \(c_{1},c_{2},c_{3}\) sufficiently large such that the probability that we split \(X\) into sets \(Y,\overline{Y}\) such that \(\overline{Y}\geq_{4}Y\) is \(>1-\frac{1}{N^{r}}\)._
Proof.: As described in the previous sections, there are \(6\) failure points at which something may go wrong and we may end up with \(\overline{Y}\not\geq_{4}Y\). By a union bound, it follows that the probability that \(\overline{Y}\geq_{4}Y\) is at least \(1-\frac{6}{N^{r+1}}>1-\frac{1}{N^{r}}\) for sufficiently large \(c_{1},c_{2},c_{3}\) as desired.
**Theorem 4.10**.: _For any constant \(r>0\), we can choose \(c_{1},c_{2},c_{3}\) sufficiently large such that RSort returns a \(4\)-approximate sorting with probability \(>1-\frac{1}{N^{r}}\)._
Proof.: Recall that it is sufficient for every recursive call to satisfy \(\overline{Y}\geq_{4}Y\). Since we reduce the size of the input by a constant factor in each recursive call, there must be \(O(N)\) total recursive calls. Thus, applying Lemma 4.9 with \(r+2\) in place of \(r\) and taking a union bound, we return a \(4\)-approximate sorting with probability at least \(1-O(N)\frac{1}{N^{r+2}}>1-\frac{1}{N^{r}}\) as desired.
**Theorem 4.11**.: _RSort uses \(O(N\log^{2}N)\) comparisons._
Proof.: It is clear that any recursive call takes \(O(n\log N)\) comparisons. It follows by a well known recurrence that \(O(N\log^{2}N)\) comparisons are thus required in total.
**Theorem 4.12**.: _RSort runs in \(O(\log N)\) rounds._
Proof.: The different iterations of each loop in RSort are clearly independent, so we can do them in parallel. Thus each call to RSort takes \(O(1)\) rounds. Additionally, each layer of the recursion can also be done in parallel. Since we reduce the size of the input by a constant factor in each call, the recursion depth is \(O(\log N)\) and thus the algorithm works in \(O(\log N)\) rounds.
Theorem 1.5 thus follows from the previous three Theorems.
By only recursively solving on the relevant side, this sorting algorithm implies a selection algorithm that returns a \(4\)-approximation with probability \(>1-\frac{1}{N^{r}}\) that uses \(O(N\log N)\) comparisons and \(O(\log N)\) rounds. Corollary 1.6 thus follows.
## 5 A General Sorting Algorithm In Rounds
In this section, we use a connection to sorting networks to give a general sorting algorithm in rounds. We consider sorting networks of arity \(k\): rather than being able to compare and swap two elements, we can sort any \(k\) elements.
**Theorem 5.1**.: _[_5_]_ _For all \(m\geq 2\), there exists an arity \(m\) sorting network of depth \(O(\log_{m}n)\)._
**Corollary 5.2**.: _For any integer \(d>0\), there exists a sorting network of arity \(n^{O(1/d)}\) and depth \(d\)._
This result comes from the AKS sorting network construction [1], which has a notoriously big constant factor. Thus, we also consider asymptotically worse (with respect to \(d\)) networks with smaller constant factors, which are better for small \(d\).
**Theorem 5.3**.: _[_10_]_ _For all \(m\geq 2\), there exists an arity \(m\) sorting network of depth \(4\log_{m}^{2}n\)._
**Corollary 5.4**.: _For any integer \(d>0\), there exists a sorting network of arity \(n^{2/\sqrt{d}}\) and depth \(d\)._
We connect this result to the adversarial comparison setting by showing that these sorting networks imply approximate sorting algorithms. Since Tournament gives a \(2\)-approximate sorting, by implementing the sorting oracle with Tournament, we in some sense guarantee that the total approximation error only accumulates by \(2\) on each level of the network. Thus, for a depth \(d\) network, we get a \(2d\)-approximate algorithm.
**Lemma 5.5**.: _Let \(a\) and \(b\) be arrays of length \(n\). If \(|a_{i}-b_{i}|\leq k\) for all \(i\), then \(|\text{sorted}(a)[i]-\text{sorted}(b)[i]|\leq k\) for all \(i\)._
Proof.: We proceed by induction over \(n\). When \(n=1\), the result is trivial. Otherwise, let \(i=\text{argmin}(a)\), \(j=\text{argmin}(b)\). Without loss of generality, assume \(a[i]\leq b[j]\). If \(i=j\), then \(|\text{sorted}(a)[0]-\text{sorted}(b)[0]|\leq k\) and the result follows by the induction hypothesis. Otherwise, we claim that \(|b[i]-a[j]|\leq k\). If \(b[i]\geq a[j]\), then \(|b[i]-a[j]|=b[i]-a[j]\leq b[i]-a[i]\leq k\). Otherwise, \(|b[i]-a[j]|=a[j]-b[i]\leq a[j]-b[j]\leq k\). Thus, we can swap \(a[i]\) and \(a[j]\) and the assumption still holds. We thus reduce to the already solved \(i=j\) case as desired.
**Lemma 5.6**.: _If there exists a sorting network with arity \(k\) and depth \(d\), then there exists a \(2d\)-approximate sorting algorithm in \(d\) rounds that takes \(O(nkd)\) comparisons._
Proof.: Consider directly running the sorting network, using Tournament to sort each group. Clearly, \(O(dn/k)\) groups are sorted, and each takes \(O(k^{2})\) comparisons, so the total number of comparisons is \(O(nkd)\). We claim that after the \(r\)-th round, the current element at position \(i\) differs from the "correct" element at position \(i\) (the element that would be there if all comparisons were correct) by at most \(2r\). We prove this
by induction. When \(r=0\), the result is trivial. Otherwise, after \(r-1\) rounds, each element must differ by at most \(2r-2\) from the "correct" element. By the previous lemma, it follows that in each group that is being sorted, the elements of the correct sorting of the current elements differ from the elements of the correct sorting of the correct elements by at most \(2r-2\). Since Tournament gives a 2-approximate sorting, it follows by Corollary 3.4 and the triangle inequality that after sorting, the elements differ from the "correct" elements by at most \(2r\), as desired.
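As a concrete and entirely illustrative rendering of this proof (ours, not from the paper), the Python sketch below executes a given arity-\(m\) network with Tournament as the block-sorting oracle. The network itself, e.g. the constructions behind Theorems 5.1 and 5.3, is assumed to be supplied as a list of rounds of disjoint index groups and is not constructed here.

```
def tournament(block, less):
    # 2-approximate sort of one group: order by number of pairwise "wins".
    n = len(block)
    wins = [sum(less(block[j], block[i]) for j in range(n) if j != i) for i in range(n)]
    return [block[i] for i in sorted(range(n), key=lambda i: wins[i])]

def run_network(values, network, less):
    # `network` is a list of rounds; each round is a list of disjoint index tuples.
    values = list(values)
    for rnd in network:
        for group in rnd:  # groups in one round touch disjoint positions
            sorted_block = tournament([values[i] for i in group], less)
            for pos, v in zip(group, sorted_block):
                values[pos] = v
    return values

# Tiny illustrative arity-2 network sorting 3 wires in 3 rounds.
example_network = [[(0, 1)], [(1, 2)], [(0, 1)]]
print(run_network([3, 1, 2], example_network, lambda a, b: a < b))  # [1, 2, 3]
```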
Theorem 1.7 and Theorem 1.8 follow. By letting \(d\) be an arbitrarily large constant, we can get a constant round, constant approximate algorithm that uses \(O(n^{1+\varepsilon})\) comparisons for any \(\varepsilon>0\).
## 6 A General Selection Algorithm In Rounds
In this section, we extend the sorting algorithms in the previous section to selection algorithms that return a constant approximation regardless of \(d\). We first provide an algorithm that gives a good approximation if there are few elements close to the answer. Then, we provide an algorithm that gives a good approximation if there are many elements close to the answer. We then show that it is possible to combine these to always achieve a constant approximation.
### Sparse Selection
Let \(L_{x}=\{x_{i}|x_{k}-1\leq x_{i}\leq x_{k}\}\) and \(R_{x}=\{x_{i}|x_{k}\leq x_{i}\leq x_{k}+1\}\). This part of the algorithm returns a 200-approximation on the left side if \(|L_{x}|\) is sufficiently small, and a 200-approximation on the right side if \(|R_{x}|\) is sufficiently small. Specifically, if \(|L_{x}|\leq\frac{1}{10}n^{2/3}\), then \(x^{*}\geq_{200}x_{k}\) where \(x^{*}\) is the returned item. Similarly, if \(|R_{x}|\leq\frac{1}{10}n^{2/3}\), then \(x_{k}\geq_{200}x^{*}\).
We aim to partition \(X\) into three sets: \(Z,Y,\Gamma\) where \(Z\) is the set of elements definitely to the left of \(x_{k}\), \(Y\) is the set of candidate elements to be \(x_{k}\), and \(\Gamma\) is the set of elements definitely to the right of \(x_{k}\). We also want \(|Y|=O(n^{1-\varepsilon})\), so we can sort \(Y\) with a constant approximate algorithm. We sample \(cn^{2/3}\log n\) subsets of \(X\) of size \(n^{1/3}\), each time sorting with the \(d\) round algorithm from the previous section. We then take the elements of each subset close to the \(k/n^{2/3}\)-th position and add them to the set of candidates. The elements that are not candidates at the end are partitioned into left and right depending on whether they were to the left or the right of the \(k/n^{2/3}\)-th position more frequently. If \(|L_{x}|\) is sufficiently small, we expect most of the subsets to not contain any elements of \(L_{x}\), and thus since Sort must be _gap-preserving_, the subsets must be roughly correctly sorted around position \(k/n^{2/3}-\frac{|L_{x}|}{2}\). Thus, we expect our candidates to be \(\geq x_{k}-1\). Similarly, when \(|R_{x}|\) is sufficiently small, we expect our candidates to be \(\leq x_{k}+1\). This gives us our desired result.
Let \(d>1\) be arbitrary. Let Sort be the \(d\) round sorting algorithm from Theorem 1.7 and let 100-Sort be the sorting algorithm obtained by taking \(d=100\) in Theorem 1.8.
**Theorem 6.1**.: _[_6_]_ _For any \(r>0\), there exists a \(\log_{2}d\) round maximum/minimum finding algorithm GetMax/GetMin that uses \(O(n^{1+\frac{1}{d-1}}\log d\log n)\) comparisons and returns a \(5\)-approximate maximum/minimum with probability \(>1-\frac{1}{n^{r}}\)._
```
1:functionSelectSparse(\(X,k\))
2:\(Y\leftarrow\emptyset\)
3:\(L\leftarrow[0]*n\)
4:loop\(cn^{2/3}\log n\) times
5: Generate a subset \(S\) of \(X\) of size \(n^{1/3}\)
6:\(S\leftarrow\text{Sort}(S)\)
7:\(T\gets S[k/n^{2/3}-n^{1/6}:k/n^{2/3}+n^{1/6}]\)
8:for\(x_{i}\in S[:k/n^{2/3}-n^{1/6}]\)do
9:\(L[i]\gets L[i]+1\)
10:endfor
11:\(Y\gets Y\cup T\)
12:endloop
13:\(Z\leftarrow\emptyset\)
14:for\(i=0..n-1\)do
15:if\(x_{i}\not\in Y\) and \(L[i]>\frac{c}{2}\log n\)then
16:\(Z\gets Z\cup\{x_{i}\}\)
17:endif
18:endfor
19:\(\Gamma\gets X\backslash(Y\cup Z)\)
20:if\(k\leq|Z|\)then
21:return\(\text{GetMax}(Z)\)
22:elseif\(k\leq|Z|+|Y|\)then
23:\(Y\leftarrow\) 100-Sort(\(Y\))
24:return\(Y[k-|Z|-1]\)
25:else
26:return\(\text{GetMin}(\Gamma)\)
27:endif
28:endfunction
```
**Algorithm 4** Sparse Selection
**Lemma 6.2**.: _If \(|L_{x}|\leq\frac{1}{10}n^{2/3}\), for \(r>0\) and \(x_{i}>x_{L}\) there exists \(c\) large enough that \(\Pr\bigl{[}L[i]>\frac{c}{2}\log n\bigr{]}<\frac{1}{n^{r}}\)._
Proof.: Consider the iterations in which \(x_{i}\) is chosen. By a union bound,
\[\Pr[L_{x}\cap S\neq\emptyset\mid x_{i}\in S]\leq|L_{x}|\frac{n^{1/3}-1}{n-1} \leq\frac{1}{10}n^{2/3}\frac{n^{1/3}-1}{n-1}\leq\frac{2}{10}\]
for \(n\) sufficiently large. Conditioning on \(x_{i}\) being \(\in S\), the number of elements of \(S\) that are \(\leq x_{k}\) (call this \(V\)) follows a Hypergeometric\((n,k,n^{1/3})\)† distribution. By a tail bound:
Footnote †: If there are multiple items with value \(x_{k}\), the second argument can be larger than \(k\), but that can only make the bounds better.
\[\Pr\Bigl{[}V\leq k/n^{2/3}-n^{1/6}\mid x_{i}\in S\Bigr{]} =\Pr\Bigl{[}V\leq(k/n-n^{-1/6})n^{1/3}\Bigr{]}\] \[\leq e^{-2(n^{-1/6})^{2}n^{1/3}}\] \[=e^{-2}.\]
If \(L_{x}\cap S=\emptyset\) and \(V>k/n^{2/3}-n^{1/6}\), since Sort is _gap-preserving_ by Lemma 3.6, \(L[i]\) cannot increase in this iteration. It thus follows by a union bound that \(L[i]\) increases with probability at
most \(\frac{2}{10}+e^{-2}<0.4\). Thus, \(\Pr[x_{i}\in S\text{ and }L[i]\text{ increases}]<\frac{0.4}{n^{2/3}}\). Let \(\mu=\mathbb{E}[L[i]]\leq cn^{2/3}\log n\frac{0.4}{n^{2/3}}=0.4c\log n\). By a Chernoff bound:
\[\Pr\Bigl{[}L[i]>\frac{c}{2}\log n\Bigr{]} \leq\Pr[L[i]>(1+1/4)\mu]\] \[\leq e^{-(1/4)^{2}\mu/(2+1/4)}\] \[\leq e^{-0.4c\log n/36}\]
Choosing \(c>90r\):
\[<\frac{1}{n^{r}}\]
as desired.
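For reference (our restatement, not part of the original argument), the tail bound invoked here and in the later lemmas is Hoeffding's bound for sampling without replacement: if \(V\sim\mathrm{Hypergeometric}(n,K,m)\) and \(p=K/n\), then \(\Pr[V\leq(p-t)m]\leq e^{-2t^{2}m}\) for every \(t>0\), and symmetrically \(\Pr[V\geq(p+t)m]\leq e^{-2t^{2}m}\).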
**Corollary 6.3**.: _If \(|L_{x}|\leq\frac{1}{10}n^{2/3}\), for any \(r>0\) there exists \(c\) large enough that \(\Pr[Z\cap(x_{L},\infty)=\emptyset]>1-\frac{1}{n^{r}}\)._
Proof.: This follows by a union bound and the previous lemma.
**Corollary 6.4**.: _If \(|L_{x}|\leq\frac{1}{10}n^{2/3}\), for any \(r>0\) there exists \(c\) large enough that \(\Pr[\Gamma\cap(-\infty,x_{L}-1)=\emptyset]>1-\frac{1}{n^{r}}\)._
Proof.: Symmetric.
Let \(x^{*}\) be the value returned by SelectSparse.
**Lemma 6.5**.: _If \(|L_{x}|\leq\frac{1}{10}n^{2/3}\), for any \(r>0\) there exists \(c\) large enough that \(\Pr[x^{*}\geq_{200}x_{k}]>1-\frac{1}{n^{r}}\)._
Proof.: We claim it suffices that \(Z\cap(x_{L},\infty)=\emptyset\) and \(\Gamma\cap(-\infty,x_{L}-1)=\emptyset\). If \(|Z|\geq k\), then there must be an element of \(Z\) that is \(\geq x_{k}\). Thus, since GetMax returns a 5-approximation with sufficiently large probability, \(x^{*}\geq_{5}x_{k}\) with sufficiently large probability. If \(|Z|+|Y|<k\), then we return some element of \(\Gamma\) which is \(\geq_{1}x_{k}\) if \(\Gamma\cap(-\infty,x_{L}-1)=\emptyset\). Otherwise, if \(|Z|<k\) and \(|Z|+|Y|\geq k\), there must be at most \(k-|Z|-1\) elements of \(Y\) that are \(<x_{k}\). Thus, the \((k-|Z|)\)-th element of \(Y\) is \(\geq x_{k}\). Since 100-Sort returns a 200-approximation, it follows that \(x^{*}\geq_{200}x_{k}\) with sufficiently large probability as desired.
**Corollary 6.6**.: _If \(|R_{x}|\leq\frac{1}{10}n^{2/3}\), for any \(r>0\) there exists \(c\) large enough that \(\Pr[x_{k}\geq_{200}x^{*}]>1-\frac{1}{n^{r}}\)._
Proof.: Symmetric.
**Theorem 6.7**.: _SelectSparse uses \(n^{1+O(1/d)}d\log n\) comparisons and \(d+\max(100,\log_{2}d)\) rounds._
Proof.: All of the comparisons come from Sort, 100-Sort, and GetMax/GetMin. The number of comparisons is thus bounded by \(cn^{2/3}\log n(n^{1/3})^{1+O(1/d)}+n^{1+\frac{1}{d-1}}\log d\log n+(cn^{5/6} \log n)^{6/5}=n^{1+O(1/d)}d\log n\) as desired. All iterations of the loop can be done in parallel, so the number of rounds is bounded by \(d+\log_{2}d\) if GetMax is called, and by \(d+100\) if 100-Sort is called.
### Dense Selection
Here we give two algorithms: Select+ and Select-, the former of which will return a good approximation if \(|R_{x}|>\frac{1}{10}n^{2/3}\), and the latter if \(|L_{x}|>\frac{1}{10}n^{2/3}\).
The idea is simple: take a large sample (of size \(cn^{5/6}\log n\)), sort it with a constant approximate algorithm, and return the element whose position in the sorted sample corresponds to rank roughly \(k\) in \(X\).
```
1:functionSelect\(\pm(X,k)\)
2:\(n\leftarrow|X|\)
3: Generate a subset \(S\) of \(X\) of size \(cn^{5/6}\log n\)
4:\(S\leftarrow\) 100-Sort\((S)\)
5:return\(S[ck\log n/n^{1/6}\pm cn^{5/12}\log n]\)
6:endfunction
```
**Algorithm 5** Dense Selection
Let \(x^{*}\) be the item returned by Select\(\pm\).
**Lemma 6.8**.: _If \(|L_{x}|>\frac{1}{10}n^{2/3}\), for any \(r>0\) there exists \(c\) large enough such that Select\(-\) returns a \(201\)-approximation with probability \(>1-\frac{1}{n^{r}}\)._
Proof.: It suffices to prove that the actual \((ck\log n/n^{1/6}-cn^{5/12}\log n)\)-th smallest element of \(S\) is in \(L_{x}\), since 100-Sort returns a 200-approximation. The number of elements of \(S\) that are \(\leq x_{k}\) (call this \(V\)) follows a Hypergeometric\((n,k,cn^{5/6}\log n)\)‡ distribution. Thus, by a tail bound:
Footnote ‡: Similarly to before, the second argument can be \(>k\), but it only makes the bounds better.
\[\Pr\Bigl{[}V\leq ck\log n/n^{1/6}-cn^{5/12}\log n\Bigr{]} =\Pr\Bigl{[}V\leq(k/n-n^{-5/12})cn^{5/6}\log n\Bigr{]}\] \[\leq e^{-2(n^{-5/12})^{2}cn^{5/6}\log n}\] \[\leq n^{-2c}.\]
Similarly, the number of elements of \(S\) that are \(<x_{k}-1\) (call this \(U\)) follows a Hypergeometric\((n,k-\frac{1}{10}n^{2/3},cn^{5/6}\log n)\)§ distribution. Thus, by a tail bound:
Footnote §: Again, the second argument could be larger.
\[\Pr\Bigl{[}U\geq ck\log n/n^{1/6}-cn^{5/12}\log n\Bigr{]} =\Pr\Bigl{[}U\geq((k-n^{2/3}/10)/n+n^{-1/3}/10-n^{-5/12})cn^{5/6}\log n\Bigr{]}\] \[\leq e^{-2(n^{-1/3}/10-n^{-5/12})^{2}cn^{5/6}\log n}\] \[<n^{-2c}.\]
for \(n\) sufficiently large. Thus, the probability that the actual \((ck\log n/n^{1/6}-cn^{5/12}\log n)\)-th smallest element of \(S\) is in \(L_{x}\) is at least \(1-2n^{-2c}>1-\frac{1}{n^{r}}\) for \(c\) sufficiently large by a union bound.
**Corollary 6.9**.: _If \(|R_{x}|>\frac{1}{10}n^{2/3}\), for any \(r>0\) there exists \(c\) large enough such that Select\(+\) returns a \(201\)-approximation with probability \(>1-\frac{1}{n^{r}}\)._
Proof.: Symmetric.
**Lemma 6.10**.: _If \(x^{*}\) is the item returned by Select\(-\), for any \(r>0\) there exists \(c\) sufficiently large that \(x_{k}\geq_{200}x^{*}\) with probability \(>1-\frac{1}{n^{r}}\)._
Proof.: This is implicitly proven in the previous lemma, where we prove the position in the original array of \(x^{*}\) is less than \(k\).
**Corollary 6.11**.: _If \(x^{*}\) is the item returned by Select+, for any \(r>0\) there exists \(c\) sufficiently large that \(x^{*}\geq_{200}x_{k}\) with probability \(>1-\frac{1}{n^{r}}\)_
Proof.: Symmetric.
**Theorem 6.12**.: \(\text{{Select}}\pm\) _uses \(O(n\log^{6/5}n)\) comparisons and \(100\) rounds._
Proof.: All comparisons are done in 100-Sort, so the number of comparisons is \(O((n^{5/6}\log n)^{6/5})=O(n\log^{6/5}n)\). Since 100-Sort takes 100 rounds, so does Select\(\pm\).
### Combining
```
1:function\(\text{{Count}}(X,x_{i})\)
2:\(c\gets 0\)
3:for\(x\in X\)do
4:if\(x<_{c}x_{i}\)then
5:\(c\gets c+1\)
6:endif
7:endfor
8:return\(c\)
9:endfunction
```
**Algorithm 6** Pivot
```
1:function\(\text{{Select}}(X,k)\)
2:\(x_{i}\leftarrow\text{{SelectSparse}}(X,k)\)
3:\(c_{i}\leftarrow\text{{Count}}(X,x_{i})\)
4:if\(c_{i}<k\)then
5:\(x_{j}\leftarrow\text{{Select}}-(X,k)\)
6:if\(x_{j}>_{c}x_{i}\)thenreturn\(x_{j}\)
7:elsereturn\(x_{i}\)
8:endif
9:else
10:\(x_{j}\leftarrow\text{{Select}}+(X,k)\)
11:if\(x_{j}<_{c}x_{i}\)thenreturn\(x_{j}\)
12:elsereturn\(x_{i}\)
13:endif
14:endif
15:endfunction
```
**Algorithm 7** Selection
**Lemma 6.13**.: _For any \(r>0\), we can choose \(c\) sufficiently large that \(|x_{k}-x^{*}|\leq\max(200,1+\min(|x_{k}-x_{i}|,|x_{k}-x_{j}|))\) with probability \(>1-\frac{1}{n^{r}}\)._
Proof.: By symmetry, we may assume without loss of generality that \(c_{i}<k\). In this case, the algorithm calls Select\(-\) and compares its result \(x_{j}\) with \(x_{i}\). Since we return the maximum of \(x_{i}\) and \(x_{j}\) (according to the
comparator), we must have \(\max(x_{i},x_{j})-1\leq x^{*}\leq\max(x_{i},x_{j})\). By a result from the previous section, \(x_{j}\leq x_{k}+200\) with probability \(>1-\frac{1}{n^{r}}\). If \(x_{i}\) was \(>x_{k}+1\), then it would compare greater than \(x_{k}\) and everything before it, contradicting \(c_{i}<k\). Thus, \(x_{i}\leq x_{k}+1\). It follows that with probability \(>1-\frac{1}{n^{r}}\), \(\max(x_{i},x_{j})\leq x_{k}+200\). Thus, if either \(x_{i}>x_{k}\) or \(x_{j}>x_{k}\), \(|x^{*}-x_{k}|\leq 200\) as desired. Otherwise, if both \(x_{i}\leq x_{k}\) and \(x_{j}\leq x_{k}\), then \(|x_{k}-x^{*}|=x_{k}-x^{*}\leq x_{k}-(\max(x_{i},x_{j})-1)=\min(|x_{k}-x_{i}|,|x _{k}-x_{j}|)+1\) as desired.
**Theorem 6.14**.: _For \(r>0\) there exists \(c\) sufficiently large that Select returns a \(202\)-approximate \(k\)-selection with probability \(>1-\frac{1}{n^{r}}\)._
Proof.: We consider cases based on the sizes of \(L_{x}\) and \(R_{x}\):
If \(|L_{x}|\leq\frac{1}{10}n^{2/3}\) and \(|R_{x}|\leq\frac{1}{10}n^{2/3}\), then we have \(x_{i}\geq_{200}x_{k}\) and \(x_{k}\geq_{200}x_{i}\) with probability \(>1-\frac{2}{n^{r+1}}\), in which case we have \(|x_{k}-x_{i}|\leq 200\). By the previous lemma it follows that \(|x^{*}-x_{k}|\leq 201\) with probability \(>1-\frac{3}{n^{r+1}}\), so we return a 201-approximate \(k\)-selection with probability \(>1-\frac{3}{n^{r+1}}>1-\frac{1}{n^{r}}\) as desired.
If \(|L_{x}|>\frac{1}{10}n^{2/3}\) and \(|R_{x}|>\frac{1}{10}n^{2/3}\), then \(|x_{j}-x_{k}|\leq 201\) with probability \(>1-\frac{1}{n^{r+1}}\). Thus, by the previous lemma, \(|x^{*}-x_{k}|\leq 202\) with probability \(>1-\frac{2}{n^{r+1}}\). It follows that \(x^{*}\) is a 202-approximate \(k\)-selection with probability \(>1-\frac{2}{n^{r+1}}>1-\frac{1}{n^{r}}\) as desired.
If \(|L_{x}|\leq\frac{1}{10}n^{2/3}\) and \(|R_{x}|>\frac{1}{10}n^{2/3}\), we have \(x_{i}\geq_{200}x_{k}\) with probability \(>1-\frac{1}{n^{r+1}}\). Thus, either \(x_{i}\leq x_{k}+1\) in which case we have \(|x_{i}-x_{k}|\leq 200\), or \(x_{i}>x_{k}+1\) in which case \(x_{j}\) must come from Select+ and thus \(|x_{j}-x_{k}|\leq 201\) with probability \(>1-\frac{1}{n^{r+1}}\). By the previous lemma, it thus follows that \(|x^{*}-x_{k}|\leq 202\) with probability \(>1-\frac{3}{n^{r+1}}\). It follows that \(x^{*}\) is a 202-approximate \(k\)-selection with probability \(>1-\frac{3}{n^{r+1}}>1-\frac{1}{n^{r}}\) as desired.
The case where \(|L_{x}|>\frac{1}{10}n^{2/3}\) and \(|R_{x}|\leq\frac{1}{10}n^{2/3}\) is symmetric.
**Theorem 6.15**.: _Select takes \(n^{1+O(1/d)}d\log n\) comparisons and \(d+102+\min(100,\log_{2}d)\) rounds._
Proof.: All comparisons are done in SelectSparse, Select\(\pm\) and Count. The number of comparisons done by the two calls to Count is bounded by \(2n\). Thus, the total number of comparisons is bounded by \(n^{1+O(1/d)}d\log n+n\log^{6/5}n+2n=n^{1+O(1/d)}d\log n\). Each call to Count takes one round, so the total number of rounds is bounded by \(d+\min(100,\log_{2}d)+100+2=d+102+\min(100,\log_{2}d)\).
Theorem 1.9 follows.
## 7 Open Problems
* Is there an algorithm to find a 3-approximate sorting or selection with high probability in \(\widetilde{O}(n)\) time?
* Is there an algorithm to find a constant-approximate sorting with high probability in \(O(n\log n)\) time?
* Is there an algorithm to find a constant-approximate selection with high probability in \(O(n)\) time?
* Can we improve the lower or upper bounds for \(k\)-approximate sorting and selection in \(d\) rounds?
## Acknowledgements
I would like to thank Gautam Kamath for introducing me to this problem, advising me throughout this process, and giving feedback on earlier drafts of this paper. This would not have been possible without his help. I would also like to thank Richard Peng for pointing me in the direction of Gautam, and Yousof Hosny for helpful discussions.
|
2308.13691 | Central elements in the $\mathrm{SL}_d$-skein algebra of a surface | The $\mathrm{SL}_d$-skein algebra $\mathcal{S}_{\mathrm{SL}_d}^q(S)$ of a
surface $S$ is a certain deformation of the coordinate ring of the character
variety consisting of flat $\mathrm{SL}_d$-local systems over the surface. As a
quantum topological object, $\mathcal{S}_{\mathrm{SL}_d}^q(S)$ is also closely
related to the HOMFLYPT polynomial invariant of knots and links in
$\mathbb{R}^3$. We exhibit a very rich family of central elements in this
algebra $\mathcal{S}_{\mathrm{SL}_d}^q(S)$ that appear when the quantum
parameter $q$ is a root of unity. These central elements are obtained by
threading along framed links certain polynomials arising in the elementary
theory of symmetric functions, and related to taking powers in $\mathrm{SL}_d$. | Francis Bonahon, Vijay Higgins | 2023-08-25T22:18:53Z | http://arxiv.org/abs/2308.13691v1 | # Central elements in the \(\mathrm{SL}_{d}\)-skein algebra
###### Abstract.
The \(\mathrm{SL}_{d}\)-skein algebra \(\mathcal{S}^{q}_{\mathrm{SL}_{d}}(S)\) of a surface \(S\) is a certain deformation of the coordinate ring of the character variety consisting of flat \(\mathrm{SL}_{d}\)-local systems over the surface. As a quantum topological object, \(\mathcal{S}^{q}_{\mathrm{SL}_{d}}(S)\) is also closely related to the HOMFLYPT polynomial invariant of knots and links in \(\mathbb{R}^{3}\). We exhibit a very rich family of central elements in this algebra \(\mathcal{S}^{q}_{\mathrm{SL}_{d}}(S)\) that appear when the quantum parameter \(q\) is a root of unity. These central elements are obtained by threading along framed links certain polynomials arising in the elementary theory of symmetric functions, and related to taking powers in \(\mathrm{SL}_{d}\).
This work was developed under the auspices of the Research Training Grant DMS-2135960, _RTG: Algebraic and Geometric Topology at Michigan State_, from the U.S. National Science Foundation.
where \(L^{[e_{1}^{i_{1}}e_{2}^{i_{2}}\dots e_{d-1}^{i_{d-1}}]}\in\mathcal{S}_{\mathrm{SL}_{ d}}^{q}(M)\) is represented by the union of \(i_{1}+i_{2}+\dots i_{d-1}\) disjoint parallel copies of the knot \(L\), taken in the direction of the framing, and with \(i_{1}\) of these copies carrying the weight \(1\), \(i_{2}\) carrying the weight \(2\),..., and \(i_{d-1}\) carrying the weight \(d-1\). A similar construction applies to links \(L\) with several components. See SS2 for details.
**Theorem 1**.: _Suppose that the \(d\)-root \(q^{\frac{1}{d}}\) occurring in the definition of skein modules \(\mathcal{S}_{\mathrm{SL}_{d}}^{q}(M)\) is such that \(q^{\frac{2n}{d}}=1\), and that \(q^{2i}\neq 1\) for every integer \(i\) with \(2\leqslant i\leqslant\frac{d}{2}\). In a thickened surface \(S\times[0,1]\), let \(L=L_{1}\sqcup L_{2}\sqcup\dots\sqcup L_{c}\) be a framed link in which each component \(L_{j}\) carries a weight \(i_{j}\in\{1,2,\dots,d-1\}\). Then the skein \(L^{[\widehat{P}_{d}^{(n,\bullet)}]}\in\mathcal{S}_{\mathrm{SL}_{d}}^{q}(S)\) obtained by threading the reduced power elementary polynomial \(\widehat{P}_{d}^{(n,i_{j})}\in\mathbb{Z}[e_{1},e_{2},\dots,e_{d-1}]\) along each component \(L_{j}\) is central in the skein algebra \(\mathcal{S}_{\mathrm{SL}_{d}}^{q}(S)\) of the surface \(S\)._
Theorem 1 is based on a more general property for skein modules \(\mathcal{S}_{\mathrm{SL}_{d}}^{q}(M)\) of \(3\)-manifolds which, borrowing terminology from [1], is a certain transparency property for threading operations along the reduced power polynomial \(\widehat{P}_{d}^{(n,i)}\in\mathbb{Z}[e_{1},e_{2},\dots,e_{d-1}]\). This property states that, if \(L_{0}\) is a framed link in a \(3\)-manifold \(M\) carrying component weights in \(\{1,2,\dots,d-1\}\) and if \(L\) is a framed knot disjoint from \(L_{0}\), then the skein \(L_{0}\sqcup L^{[\widehat{P}_{d}^{(n,i)}]}\in\mathcal{S}_{\mathrm{SL}_{d}}^{q}(M)\) obtained by threading \(\widehat{P}_{d}^{(n,i)}\) along \(L\) is invariant under any isotopy of \(L\) in \(M\) that is allowed to cross \(L_{0}\).
**Theorem 2**.: _Suppose that the \(d\)-root \(q^{\frac{1}{d}}\) occurring in the definition of skein modules \(\mathcal{S}_{\mathrm{SL}_{d}}^{q}(M)\) is such that \(q^{\frac{2n}{d}}=1\), and that \(q^{2i}\neq 1\) for every integer \(i\) with \(2\leqslant i\leqslant\frac{d}{2}\). Then, for every \(i=1\), \(2\),..., \(d-1\), the threading operation along the reduced power elementary polynomial \(\widehat{P}_{d}^{(n,i)}\in\mathbb{Z}[e_{1},e_{2},\dots,e_{d-1}]\) is transparent in the skein module \(\mathcal{S}_{\mathrm{SL}_{d}}^{q}(M)\) of any oriented \(3\)-manifold \(M\)._
As indicated in Remark 15, the hypothesis in Theorems 1 and 2 that \(q^{2i}\neq 1\) for every integer \(i\) with \(2\leqslant i\leqslant\frac{d}{2}\) is probably unnecessary.
Similar results for \(\mathrm{G}_{2}\)-skeins, where \(\mathrm{G}_{2}\) is the exceptional Lie group of rank \(2\), will appear in [1].
## 1. \(\mathrm{SL}_{d}\)-webs and skein relations
### The \(\mathrm{SL}_{d}\)-skein module of a \(3\)-dimensional manifold
Throughout the article, \(\mathrm{SL}_{d}\) will denote the Lie group of \(d\)-by-\(d\) matrices with determinant \(1\). Because the coefficient ring of this algebraic group is irrelevant for our purposes, we will systematically omit it.
We are here using the version of \(\mathrm{SL}_{d}\)-skein modules that uses the webs developed by Cautis-Kamnitzer-Morrison in [13, 1]. There is another well-known alternative based on Kuperberg-Sikora spiders [14, 15, 16]. See [14] for the equivalence between the two viewpoints.
An \(\mathrm{SL}_{d}\)_-web_ in an oriented \(3\)-dimensional manifold \(M\) is a graph \(W\) embedded in \(M\) endowed with additional data satisfying the following conditions:
1. the graph \(W\) is endowed with a _ribbon structure_ consisting of a thin oriented surface embedded in \(M\) that contains \(W\) and deformation retracts onto it;
2. each edge of \(W\) carries an orientation and a weight \(i\in\{1,2,\ldots,d-1\}\);
3. each vertex of \(W\) is of one of the following three types:
    1. a vertex of type "merge" with two incoming edges of weights \(i\) and \(j\) and one outgoing edge of weight \(i+j\), as in the first picture of Figure 1;
    2. a vertex of type "split" with one incoming edge of weight \(i+j\) and two outgoing edges of weights \(i\) and \(j\), as in the second picture of Figure 1;
    3. a vertex of type "stump" (also called "tag" in [1]) adjacent to exactly one edge of \(W\), which carries weight \(d\), as in the last two pictures of Figure 1;
4. the only edges that are allowed to carry weight \(d\) are those adjacent to a stump;
5. \(W\) can have components that are closed loops, with no vertices, but no component can be the graph with exactly one edge and two stumps.
Along the components of \(W\) that are closed loops, the ribbon structure is equivalent to the very classical notion of _framing_, namely the data of a vector field that is everywhere transverse to the loop (or, equivalently, with a trivialization of the normal bundle of that loop). In particular, framed (oriented) links where each component is colored by a weight \(i\in\{1,2,\ldots,d-1\}\) are fundamental examples of webs.
The \(\operatorname{SL}_{d}\)_-skein module_\(\mathcal{S}^{q}_{\operatorname{SL}_{d}}(M)\) is obtained from the vector space over \(\mathbb{C}\) (say) freely generated by the set of isotopy classes of \(\operatorname{SL}_{d}\)-webs under a set of _skein relations_ that are explicitly listed in [1]. Since we will not need most of them, we are only listing a few in Figures 2-5 and refer to [1] for the full list.
Figure 1. Vertices of a web

Figure 2. A typical skein relation

In these figures, each skein relation should be seen as occurring in a neighborhood of a disk embedded in \(M\), in such a way that the ribbon structures of each web represented are horizontal for the projection to that disk. The sums are over indices \(m\in\mathbb{Z}\), with the following conventions:
1. the sum is limited to those values of \(m\) that lead to edge weights in \(\{0,1,\ldots,d\}\);
2. an edge carrying weight \(0\) and its end vertices should be erased;
3. an edge carrying weight \(d\) should be split into two edges with stumps, with a convention that will be more precisely described when we need it in our proof of Lemma 9.
Figure 3. Another skein relation

Figure 4. Two skein relations involving stumps

Figure 5. Braiding relations

Also, the symbols \(\genfrac{[}{]}{0.0pt}{}{i}{j}_{q}\) represent the _quantum binomials_

\[\genfrac{[}{]}{0.0pt}{}{i}{j}_{q}=\frac{\left[i\right]_{q}\left[i-1\right]_{q}\ldots\left[i-j+1\right]_{q}}{\left[j\right]_{q}\left[j-1\right]_{q}\ldots\left[2\right]_{q}\left[1\right]_{q}}=\frac{\left[i\right]_{q}!}{\left[j\right]_{q}!\left[i-j\right]_{q}!}\]

with the _quantum integers_

\[\left[i\right]_{q}=\frac{q^{i}-q^{-i}}{q-q^{-1}}\]

and the _quantum factorials_

\[\left[i\right]_{q}!=\left[i\right]_{q}\left[i-1\right]_{q}\ldots\left[2\right]_{q}\left[1\right]_{q}.\]
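For readers who wish to experiment numerically, the following short sketch (our own illustration, not part of the paper; the function names are ours) evaluates these quantities. It also illustrates that \([i]_{q}\) vanishes exactly when \(q^{2i}=1\) while \(q^{2}\neq 1\), which is the reason hypotheses of this type appear in Theorems 1, 2 and 13 and in Lemma 12 below.

```python
import numpy as np

def q_int(i, q):
    """Quantum integer [i]_q = (q^i - q^{-i}) / (q - q^{-1})."""
    return (q**i - q**(-i)) / (q - 1 / q)

def q_factorial(i, q):
    """Quantum factorial [i]_q! = [i]_q [i-1]_q ... [1]_q (empty product = 1)."""
    out = 1.0
    for k in range(1, i + 1):
        out *= q_int(k, q)
    return out

def q_binomial(i, j, q):
    """Quantum binomial [i choose j]_q = [i]_q! / ([j]_q! [i-j]_q!)."""
    return q_factorial(i, q) / (q_factorial(j, q) * q_factorial(i - j, q))

# As q -> 1 the quantum binomial tends to the ordinary binomial coefficient.
print(q_binomial(5, 2, 1.0001))        # approximately 10

# With q a primitive 6th root of unity, q^(2*3) = 1 and [3]_q vanishes.
q = np.exp(1j * np.pi / 3)
print(abs(q_int(3, q)) < 1e-12)        # True
```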
We will not need the skein relation of Figure 2, which is shown here only to give the flavor of typical skein relations. However, we will make use of the relations of Figures 3-5.
Note that the _braiding relations_ of Figure 5 require us to fix a \(d\)-root \(q^{\frac{1}{d}}\) of the quantum parameter \(q\in\mathbb{C}-\{0\}\). As a consequence, the skein module \(\mathcal{S}^{q}_{\mathrm{SL}_{d}}(M)\) depends on this choice of \(q^{\frac{1}{d}}\) in spite of the fact that this is not reflected in the notation, which would otherwise be too cumbersome.
These skein relations originate from the representation theory of the quantum group \(\mathrm{U}_{q}(\mathfrak{sl}_{d})\). The skein relations other than the braiding relations of Figure 5 describe all the relations that occur between tensor products of the quantum exterior power representations \(\bigwedge_{q}^{i}\mathbb{C}^{d}\) of \(\mathrm{U}_{q}(\mathfrak{sl}_{d})\). The braiding relations reflect the braiding of the representation category of \(\mathrm{U}_{q}(\mathfrak{sl}_{d})\). See [9] for details.
### The \(\mathrm{SL}_{d}\)-skein algebra of a surface
An important special case is provided by the thickening \(M=S\times[0,1]\) of an oriented surface \(S\). In this case the skein module \(\mathcal{S}^{q}_{\mathrm{SL}_{d}}(S\times[0,1])\) admits a natural algebra structure where the multiplication is defined as follows. If \([W_{1}]\), \([W_{2}]\in\mathcal{S}^{q}_{\mathrm{SL}_{d}}(S)\) are respectively represented by webs \(W_{1}\), \(W_{2}\) in \(S\times[0,1]\), the product \([W_{1}]\bullet[W_{2}]\) is represented by the web \(W_{1}^{\prime}\cup W_{2}^{\prime}\) where \(W_{1}^{\prime}\) is obtained by rescaling \(W_{1}\) inside \(S\times[0,\frac{1}{2}]\) and \(W_{2}^{\prime}\) is obtained by rescaling \(W_{2}\) inside \(S\times[\frac{1}{2},1]\). In practice if, by projection to \(S\), we represent each \(W_{i}\) by the picture of a possibly knotted graph in \(S\), \([W_{1}]\bullet[W_{2}]\) is obtained by placing \(W_{2}\) on top of \(W_{1}\).
The algebra \(\mathcal{S}^{q}_{\mathrm{SL}_{d}}(S\times[0,1])\), denoted as \(\mathcal{S}^{q}_{\mathrm{SL}_{d}}(S)\) for short, is the _\(\mathrm{SL}_{d}\)-skein algebra_ of the oriented surface \(S\).
## 2. Threading a polynomial along a framed link
Let \(L\) be an oriented framed knot in the \(3\)-manifold \(M\), namely a \(1\)-dimensional oriented closed submanifold of \(M\) that is endowed with a nonzero section of its normal bundle. This framing can also be used to define a ribbon structure along \(L\).
Given a polynomial
\[P=\sum_{i_{1},i_{2},\ldots,i_{d-1}=0}^{i_{\mathrm{max}}}a_{i_{1}i_{2}\ldots i _{d-1}}e_{1}^{i_{1}}e_{2}^{i_{2}}\ldots e_{d-1}^{i_{d-1}}\in\mathbb{Z}[e_{1}, e_{2},\ldots,e_{d-1}]\]
in \((d-1)\) variables \(e_{1}\), \(e_{2}\),..., \(e_{d-1}\) with coefficients \(a_{i_{1}i_{2}\ldots i_{d-1}}\in\mathbb{Z}\), the _skein obtained by threading \(P\) along \(L\)_ is defined as the linear combination
\[L^{[P]}=\sum_{i_{1},i_{2},\ldots,i_{d-1}=0}^{i_{\max}}a_{i_{1}i_{2}\ldots i_{d- 1}}L^{[e_{1}^{i_{1}}e_{2}^{i_{2}}\ldots e_{d-1}^{i_{d-1}}]}\in\mathcal{S}_{ \mathrm{SL}_{d}}^{q}(M),\]
where \(L^{[e_{1}^{i_{1}}e_{2}^{i_{2}}\ldots e_{d-1}^{i_{d-1}}]}\in\mathcal{S}_{ \mathrm{SL}_{d}}^{q}(M)\) is represented by the union of \(i_{1}+i_{2}+\ldots i_{d-1}\) disjoint parallel copies of the knot \(L\), taken in the direction of the framing, and with \(i_{1}\) of these copies carrying the weight \(1\), \(i_{2}\) carrying the weight \(2\),..., and \(i_{d-1}\) carrying the weight \(d-1\). In particular, \(L^{[e_{1}^{0}e_{2}^{0}\ldots e_{d-1}^{0}]}\) is represented by the empty link.
More generally, if \(L\) is an oriented framed link with components \(L_{1}\), \(L_{2}\),...\(L_{c}\), the _skein obtained by threading the polynomials \(P_{j}\) along the components \(L_{j}\) of \(L\)_ is defined as the disjoint union
\[L^{[P_{1},P_{2},\ldots,P_{c}]}=L_{1}^{[P_{1}]}\sqcup L_{2}^{[P_{2}]}\sqcup \cdots\sqcup L_{c}^{[P_{c}]}\]
where the parallel copies used to define each \(L_{j}^{[P_{j}]}\) are chosen in disjoint tubular neighborhoods of the \(L_{j}\). Note that, because each \(L_{j}^{[P_{j}]}\) is represented by a linear combination of webs, the disjoint union \(L^{[P_{1},P_{2},\ldots,P_{c}]}\in\mathcal{S}_{\mathrm{SL}_{d}}^{q}(M)\) is also defined by a linear combination of disjoint union of those webs.
Threading a polynomial \(P\in\mathbb{Z}[e_{1},e_{2},\ldots,e_{d-1}]\) is _transparent_ if, for every oriented framed knot \(L\) in a \(3\)-manifold \(M\) and every web \(W\subset M\) that is disjoint from \(L\), the element of \(\mathcal{S}_{\mathrm{SL}_{d}}^{q}(M)\) that is represented by \(L^{[P]}\sqcup W\) is invariant under any isotopy of the knot \(L\) that allows it to cross \(W\).
**Lemma 3**.: _If threading a polynomial \(P\in\mathbb{Z}[e_{1},e_{2},\ldots,e_{d-1}]\) is transparent then, for every surface \(S\) and every oriented framed link \(L\subset S\times[0,1]\), the skein \(L^{[P]}\) obtained by threading \(P\) along each component of \(L\) is central in the skein algebra \(\mathcal{S}_{\mathrm{SL}_{d}}^{q}(S)\)._
Proof.: If \([W]\in\mathcal{S}_{\mathrm{SL}_{d}}^{q}(S)\) is represented by a web \(W\subset S\times[\frac{1}{3},\frac{2}{3}]\), then \([L^{[P]}]\bullet[W]\) is represented by \(L_{1}^{[P]}\sqcup W\) where \(L_{1}\) is obtained by rescaling \(L\) inside \(S\times[0,\frac{1}{3}]\), while \([W]\bullet[L^{[P]}]\) is represented by \(L_{2}^{[P]}\sqcup W\) with \(L_{2}\) obtained by rescaling \(L\) inside \(S\times[\frac{2}{3},1]\). Applying the transparency property to an isotopy moving \(L_{1}\) to \(L_{2}\) shows that \([L^{[P]}]\bullet[W]=[W]\bullet[L^{[P]}]\).
## 3. Power elementary polynomials
In the ring \(\mathbb{Z}[\lambda_{1},\lambda_{2},\ldots,\lambda_{d}]\) of polynomials with integer coefficients in \(d\) variables \(\lambda_{1}\), \(\lambda_{2}\),..., \(\lambda_{d}\), recall that a polynomial is _symmetric_ if it is invariant under all permutations of the variables \(\lambda_{1}\), \(\lambda_{2}\),..., \(\lambda_{d}\). Fundamental examples include the _elementary symmetric polynomials_
\[E_{d}^{(i)}=\sum_{1\leqslant j_{1}<j_{2}<\cdots<j_{i}\leqslant d}\lambda_{j_{ 1}}\lambda_{j_{2}}\ldots\lambda_{j_{i}},\]
defined for \(1\leqslant i\leqslant d\).
There is a well-known connection between the elementary symmetric polynomials \(E_{d}^{(i)}\) and the Lie group \(\mathrm{GL}_{d}\). Namely, if \(A\in\mathrm{GL}_{d}(\mathbb{K})\) is a matrix with coefficients in the field \(\mathbb{K}\), with eigenvalues \(\lambda_{1}\), \(\lambda_{2}\),..., \(\lambda_{d}\) in the algebraic closure of \(\mathbb{K}\), the coefficient of the term of degree \(d-i\) in the characteristic polynomial of \(A\) is equal to \((-1)^{i}E_{d}^{(i)}\). In this situation, we will also write
\[E_{d}^{(i)}(A)=E_{d}^{(i)}(\lambda_{1},\lambda_{2},\dots,\lambda_{d})\in \mathbb{K}.\]
If we are interested in the characteristic polynomial of a power \(A^{n}\), whose eigenvalues are \(\lambda_{1}^{n}\), \(\lambda_{2}^{n}\),..., \(\lambda_{d}^{n}\), it makes sense to consider, for \(n\geqslant 1\) and \(1\leqslant i\leqslant d\), the _power elementary symmetric polynomials_
\[E_{d}^{(n,i)}=\sum_{1\leqslant j_{1}<j_{2}<\dots<j_{i}\leqslant d}\lambda_{j_ {1}}^{n}\lambda_{j_{2}}^{n}\dots\lambda_{j_{i}}^{n}\]
obtained from \(E_{d}^{(i)}\) by replacing each occurrence of the variable \(\lambda_{j}\) with its power \(\lambda_{j}^{n}\). For instance, the case \(n=1\) gives the original elementary symmetric polynomial \(E_{d}^{(1,i)}=E_{d}^{(i)}\), while the case \(i=1\) corresponds to the well-known family of _power sum polynomials_\(E_{d}^{(n,1)}=\sum_{i=1}^{d}\lambda_{i}^{n}\).
**Lemma 4**.: _There exists a unique polynomial \(P_{d}^{(n,i)}\in\mathbb{Z}[e_{1},e_{2},\dots,e_{d}]\) such that \(E_{d}^{(n,i)}\in\mathbb{Z}[\lambda_{1},\lambda_{2},\dots,\lambda_{d}]\) is obtained from \(P_{d}^{(n,i)}\) by replacing each variable \(e_{j}\) with the elementary symmetric polynomial \(E_{d}^{(j)}\in\mathbb{Z}[\lambda_{1},\lambda_{2},\dots,\lambda_{d}]\)._
Proof.: This is an immediate consequence of the very classical property that the subring of symmetric polynomials in \(\mathbb{Z}[\lambda_{1},\lambda_{2},\dots,\lambda_{d}]\) is itself isomorphic to the polynomial ring \(\mathbb{Z}[e_{1},e_{2},\dots,e_{d}]\), by an isomorphism sending each elementary symmetric polynomial \(E_{d}^{(i)}\) to the variable \(e_{i}\). See for instance [12, §I.2].
We call these \(P_{d}^{(n,i)}\in\mathbb{Z}[e_{1},e_{2},\dots,e_{d}]\) the _power elementary polynomials_, not to be confused with the closely connected but formally different power elementary _symmetric_ polynomials \(E_{d}^{(n,i)}\in\mathbb{Z}[\lambda_{1},\lambda_{2},\dots,\lambda_{d}]\), which involve different variables.
Simple considerations show that \(P_{d}^{(1,i)}=e_{i}\) when \(n=1\), and \(P_{d}^{(n,d)}=e_{d}^{n}\) when \(i=d\). More generally, the isomorphism between the ring of symmetric polynomials and the polynomial ring in the elementary symmetric polynomials is fairly algorithmic (see for instance Property (2.3) of [12, §I.2]) and the power elementary polynomials \(P_{d}^{(n,i)}\in\mathbb{Z}[e_{1},e_{2},\dots,e_{d}]\) can be explicitly computed, although their sizes quickly grow with \(d\) and \(n\) and eventually require
the use of mathematical software. For instance, when \(d=4\) and \(n=6\),
\[P_{4}^{(6,1)} =e_{1}^{6}-6e_{1}^{4}e_{2}+9e_{1}^{2}e_{2}^{2}-2e_{2}^{3}+6e_{1}^{3} e_{3}-12e_{1}e_{2}e_{3}+3e_{3}^{2}-6e_{1}^{2}e_{4}+6e_{2}e_{4}\] \[P_{4}^{(6,2)} =e_{2}^{6}-6e_{1}e_{2}^{4}e_{3}+9e_{1}^{2}e_{2}^{2}e_{3}^{2}+6e_{2 }^{3}e_{3}^{2}-2e_{1}^{3}e_{3}^{3}-12e_{1}e_{2}e_{3}^{3}+3e_{3}^{4}+6e_{1}^{2}e_ {2}^{3}e_{4}-6e_{2}^{4}e_{4}\] \[\qquad\qquad\qquad\qquad-12e_{1}^{3}e_{2}e_{3}e_{4}+18e_{1}^{2}e_ {3}^{2}e_{4}+3e_{1}^{4}e_{4}^{2}+9e_{2}^{2}e_{4}^{2}-18e_{1}e_{3}e_{4}^{2}+2e_{ 4}^{3}\] \[P_{4}^{(6,3)} =e_{3}^{6}-6e_{2}e_{3}^{4}e_{4}+9e_{2}^{2}e_{3}^{2}e_{4}^{2}+6e_{1 }e_{3}^{3}e_{4}^{2}-2e_{2}^{3}e_{4}^{3}-12e_{1}e_{2}e_{3}e_{4}^{3}-6e_{3}^{2}e_ {4}^{3}+3e_{1}^{2}e_{4}^{4}+6e_{2}e_{4}^{4}\] \[P_{4}^{(6,4)} =e_{4}^{6}.\]
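For instance, a minimal computational sketch along these lines (ours, written in Python with SymPy; the function name is an invention for illustration) builds \(E_{d}^{(n,i)}\) directly and rewrites it in the elementary symmetric polynomials, reproducing the expressions displayed above.

```python
from itertools import combinations
from sympy import symbols, prod, expand
from sympy.polys.polyfuncs import symmetrize

def power_elementary_polynomial(d, n, i):
    """Express E_d^{(n,i)} as the polynomial P_d^{(n,i)} in the elementary
    symmetric polynomials e_1, ..., e_d (returned in the symbols s1, ..., sd)."""
    lam = symbols(f"l1:{d + 1}")                       # eigenvalue variables
    E_ni = sum(prod(x**n for x in subset)              # E_d^{(n,i)}
               for subset in combinations(lam, i))
    sym_part, remainder, _ = symmetrize(E_ni, *lam, formal=True)
    assert remainder == 0                              # E_d^{(n,i)} is symmetric
    return expand(sym_part)

# Agrees with the displayed P_4^{(6,1)}, with s_j standing for e_j.
print(power_elementary_polynomial(4, 6, 1))
```

Setting the last variable equal to \(1\) in the output then yields the reduced polynomials \(\widehat{P}_{d}^{(n,i)}\) introduced below.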
We are interested in the Lie group \(\mathrm{SL}_{d}\) rather than \(\mathrm{GL}_{d}\). For a matrix \(A\in\mathrm{SL}_{d}(\mathbb{K})\) with eigenvalues \(\lambda_{1}\), \(\lambda_{2}\),..., \(\lambda_{d}\) in the algebraic closure of the field \(\mathbb{K}\), we have that
\[E_{d}^{(d)}(\lambda_{1},\lambda_{2},\ldots,\lambda_{d})=\lambda_{1}\lambda_{2} \ldots\lambda_{d}=\det A=1.\]
It is therefore natural to specialize the polynomial \(P_{d}^{(n,i)}\in\mathbb{Z}[e_{1},e_{2},\ldots,e_{d}]\) by setting \(e_{d}=1\), and to consider the _reduced power elementary polynomial_\(\widehat{P}_{d}^{(n,i)}\in\mathbb{Z}[e_{1},e_{2},\ldots,e_{d-1}]\) defined by
\[\widehat{P}_{d}^{(n,i)}(e_{1},e_{2},\ldots,e_{d-1})=P_{d}^{(n,i)}(e_{1},e_{2},\ldots,e_{d-1},1).\]
**Lemma 5**.: _The power elementary polynomial \(P_{d}^{(n,i)}\) is the unique polynomial in \(\mathbb{Z}[e_{1},e_{2},\ldots,e_{d}]\) such that_
\[E_{d}^{(i)}(A^{n})=P_{d}^{(n,i)}\left(E_{d}^{(1)}(A),E_{d}^{(2)}(A),\ldots,E_{d}^{(d)}(A)\right)\]
_for every \(A\in\mathrm{GL}_{d}\)._
_The reduced power elementary polynomial \(\widehat{P}_{d}^{(n,i)}\) is the unique polynomial in \(\mathbb{Z}[e_{1},e_{2},\ldots,e_{d-1}]\) such that_
\[E_{d}^{(i)}(A^{n})=\widehat{P}_{d}^{(n,i)}\left(E_{d}^{(1)}(A),E_{d}^{(2)}(A), \ldots,E_{d}^{(d-1)}(A)\right)\]
_for every \(A\in\mathrm{SL}_{d}\)._
Proof.: If a matrix \(A\in\mathrm{GL}_{d}\) has eigenvalues \(\lambda_{1}\), \(\lambda_{2}\),..., \(\lambda_{d}\), its \(n\)-th power \(A^{n}\) has eigenvalues \(\lambda_{1}^{n}\), \(\lambda_{2}^{n}\),..., \(\lambda_{d}^{n}\). The fact that \(P_{d}^{(n,i)}\) and \(\widehat{P}_{d}^{(n,i)}\) satisfy the relations indicated then follows from their definitions, noting that \(E_{d}^{(d)}(A)=1\) for every \(A\in\mathrm{SL}_{d}\). The uniqueness property immediately follows from the fact that the polynomials \(E_{d}^{(1)}\), \(E_{d}^{(2)}\),..., \(E_{d}^{(d)}\) are algebraically independent in \(\mathbb{Z}[\lambda_{1},\lambda_{2},\ldots,\lambda_{d}]\) (see [3, §I.2]).
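As a concrete numerical illustration of Lemma 5 (our own sketch, using NumPy), one can check the case \(d=3\), \(n=2\), \(i=2\), for which \(E_{3}^{(2,2)}=e_{2}^{2}-2e_{1}e_{3}\), on a random matrix rescaled to have determinant \(1\).

```python
import numpy as np

def elementary_symmetric(A):
    """E^{(1)}(A), ..., E^{(d)}(A), read off from the characteristic polynomial
    det(x I - A) = x^d - E^{(1)} x^{d-1} + E^{(2)} x^{d-2} - ... + (-1)^d E^{(d)}."""
    d = A.shape[0]
    coeffs = np.poly(A)                        # [1, a_1, ..., a_d]
    return [(-1) ** k * coeffs[k] for k in range(1, d + 1)]

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
A /= np.cbrt(np.linalg.det(A))                 # rescale so that det(A) = 1, i.e. A is in SL_3

e1, e2, e3 = elementary_symmetric(A)           # here e3 = det(A) = 1
E1, E2, E3 = elementary_symmetric(A @ A)       # elementary symmetric functions of A^2

# Lemma 5 with d = 3, n = 2, i = 2:  E_3^{(2)}(A^2) = e2^2 - 2 e1 e3 (= e2^2 - 2 e1 in SL_3).
print(np.isclose(E2, e2**2 - 2 * e1 * e3))     # True
```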
For future reference, we note the following elementary homogeneity property of the power elementary polynomials \(P_{d}^{(n,i)}\in\mathbb{Z}[e_{1},e_{2},\ldots,e_{d}]\).
**Lemma 6**.: _For every scalar \(\theta\in\mathbb{C}\),_
\[P_{d}^{(n,i)}(\theta e_{1},\theta^{2}e_{2},\ldots,\theta^{d}e_{d})=\theta^{ni}P _{d}^{(n,i)}(e_{1},e_{2},\ldots,e_{d}).\]
Proof.: This is an immediate consequence of the property that each elementary symmetric polynomial \(E_{d}^{(i)}\in\mathbb{Z}[\lambda_{1},\lambda_{2},\ldots,\lambda_{d}]\) is homogeneous of degree \(i\), while the power elementary symmetric polynomial \(E_{d}^{(n,i)}\) is homogeneous of degree \(ni\).
The following result is much less natural, but it will play an essential role in the proof of the main result of this article.
**Proposition 7**.: _Given commuting variables \(x_{1}\), \(x_{2}\),..., \(x_{d-1}\) with \(x_{d-1}\) invertible, define_
\[y_{j}=\begin{cases}x_{d-1}^{-1}+x_{1}&\text{if }j=1\\ x_{j-1}x_{d-1}^{-1}+x_{j}&\text{if }2\leqslant j\leqslant d-1.\end{cases}\]
_Then, for every \(n\) and every \(i\) with \(1\leqslant i\leqslant d-1\), we have the following equality_
\[\widehat{P}_{d}^{(n,i)}(y_{1},y_{2},\ldots,y_{d-1})=x_{d-1}^{-n}P_{d-1}^{(n,i -1)}(x_{1},x_{2},\ldots,x_{d-1})+P_{d-1}^{(n,i)}(x_{1},x_{2},\ldots,x_{d-1})\]
_of Laurent polynomials in \(\mathbb{Z}[x_{1},x_{2},\ldots,x_{d-2},x_{d-1}^{\pm 1}]\)._
Proof.: The proof should make the statement less mysterious. Consider the ring homomorphism
\[\varphi\colon\mathbb{Z}[x_{1},x_{2},\ldots,x_{d-2},x_{d-1}^{\pm 1}]\to\mathbb{Z}[ \lambda_{1},\lambda_{2},\ldots,\lambda_{d}]/(\lambda_{1}\lambda_{2}\ldots \lambda_{d}=1)\]
sending each \(x_{j}\) to the elementary symmetric polynomial \(E_{d-1}^{(j)}\in\mathbb{Z}[\lambda_{1},\lambda_{2},\ldots,\lambda_{d-1}]\) in the first \(d-1\) variables, and sending \(x_{d-1}^{-1}\) to \(\lambda_{d}\). Note that \(\varphi\) is well-defined since, in the target space,
\[\varphi(x_{d-1}^{-1})=\lambda_{d}=(\lambda_{1}\lambda_{2}\ldots\lambda_{d-1}) ^{-1}=(E_{d-1}^{(d-1)})^{-1}=\varphi(x_{d-1})^{-1}.\]
Using the fact that the \(E_{d-1}^{(i)}\) are algebraically independent in \(\mathbb{Z}[\lambda_{1},\lambda_{2},\ldots,\lambda_{d-1}]\), a simple argument shows that \(\varphi\) is injective. To prove the proposed relation, we therefore only need to show that the two sides have the same image under \(\varphi\).
The key property underlying the whole result is that, for \(2\leqslant j\leqslant d-1\),
\[\varphi(y_{j}) =\varphi(x_{j-1}x_{d-1}^{-1}+x_{j})=E_{d-1}^{(j-1)}\lambda_{d}+E _{d-1}^{(j)}\] \[=\lambda_{d}\sum_{1\leqslant i_{1}<\cdots<i_{j-1}\leqslant d-1} \lambda_{i_{1}}\lambda_{i_{2}}\ldots\lambda_{i_{j-1}}+\sum_{1\leqslant i_{1}< \cdots<i_{j}\leqslant d-1}\lambda_{i_{1}}\lambda_{i_{2}}\ldots\lambda_{i_{j}}\] \[=\sum_{1\leqslant i_{1}<\cdots<i_{j}\leqslant d}\lambda_{i_{1}} \lambda_{i_{2}}\ldots\lambda_{i_{j}}=E_{d}^{(j)}.\]
A similar argument shows that \(\varphi(y_{1})=E_{d}^{(1)}\).
Then, for the left-hand side of the proposed equality,
\[\varphi\big{(}\widehat{P}_{d}^{(n,i)}(y_{1},y_{2},\ldots,y_{d-1} )\big{)} =\widehat{P}_{d}^{(n,i)}\big{(}\varphi(y_{1}),\varphi(y_{2}), \ldots,\varphi(y_{d-1})\big{)}\] \[=\widehat{P}_{d}^{(n,i)}(E_{d}^{(1)},E_{d}^{(2)},\ldots,E_{d}^{(d -1)})\] \[=P_{d}^{(n,i)}(E_{d}^{(1)},E_{d}^{(2)},\ldots,E_{d}^{(d-1)},1)\] \[=P_{d}^{(n,i)}(E_{d}^{(1)},E_{d}^{(2)},\ldots,E_{d}^{(d-1)},E_{d} ^{(d)})=E_{d}^{(n,i)}\]
using the property that \(E_{d}^{(d)}=\lambda_{1}\lambda_{2}\ldots\lambda_{d}=1\) in the target space of \(\varphi\).
For the right-hand side,
\[\varphi\big{(}x_{d-1}^{-n}P_{d-1}^{(n,i-1)} (x_{1},x_{2},\ldots,x_{d-1})+P_{d-1}^{(n,i)}(x_{1},x_{2},\ldots,x_{d -1})\big{)}\] \[=\varphi(x_{d-1}^{-1})^{n}P_{d-1}^{(n,i-1)}\big{(}\varphi(x_{1}), \varphi(x_{2}),\ldots,\varphi(x_{d-1})\big{)}\] \[\qquad\qquad\qquad\qquad\qquad+P_{d-1}^{(n,i)}\big{(}\varphi(x_{ 1}),\varphi(x_{2}),\ldots,\varphi(x_{d-1})\big{)}\] \[=\lambda_{d}^{n}P_{d-1}^{(n,i-1)}(E_{d-1}^{(1)},E_{d-1}^{(2)}, \ldots,E_{d-1}^{(d-1)})+P_{d-1}^{(n,i)}(E_{d-1}^{(1)},E_{d-1}^{(2)},\ldots,E_{ d-1}^{(d-1)})\] \[=\lambda_{d}^{n}E_{d-1}^{(n,i-1)}+E_{d-1}^{(n,i)}\] \[=\lambda_{d}^{n}\sum_{1\leqslant j_{1}<\cdots<j_{i-1}\leqslant d- 1}\lambda_{j_{1}}^{n}\lambda_{j_{2}}^{n}\ldots\lambda_{j_{i-1}}^{n}+\sum_{1 \leqslant j_{1}<\cdots<j_{i}\leqslant d-1}\lambda_{j_{1}}^{n}\lambda_{j_{2}}^{ n}\ldots\lambda_{j_{i}}^{n}\] \[=\sum_{1\leqslant j_{1}<\cdots<j_{i}\leqslant d}\lambda_{j_{1}}^ {n}\lambda_{j_{2}}^{n}\ldots\lambda_{j_{i}}^{n}=E_{d}^{(n,i)}=\varphi\big{(} \widehat{P}_{d}^{(n,i)}(y_{1},y_{2},\ldots,y_{d-1})\big{)}.\]
Since \(\varphi\) is injective, this concludes the proof.
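Since the identity of Proposition 7 may look opaque at first sight, here is a quick symbolic sanity check (our own sketch, in SymPy) of the smallest nontrivial case \(d=3\), \(n=2\), \(i=2\), where \(P_{2}^{(2,1)}=x_{1}^{2}-2x_{2}\), \(P_{2}^{(2,2)}=x_{2}^{2}\) and \(\widehat{P}_{3}^{(2,2)}(e_{1},e_{2})=e_{2}^{2}-2e_{1}\).

```python
from sympy import symbols, simplify

x1, x2 = symbols("x1 x2")
y1 = 1 / x2 + x1              # y_1 = x_{d-1}^{-1} + x_1
y2 = x1 / x2 + x2             # y_2 = x_1 x_{d-1}^{-1} + x_2

lhs = y2**2 - 2 * y1                            # hat{P}_3^{(2,2)}(y_1, y_2)
rhs = x2**(-2) * (x1**2 - 2 * x2) + x2**2       # x_2^{-n} P_2^{(2,1)} + P_2^{(2,2)}
print(simplify(lhs - rhs) == 0)                 # True
```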
## 4. Computations in the annulus
Inspired by earlier constructions of Morton [14], Le [1] and Queffelec-Wedrich [13], we let \(A=S^{1}\times[0,1]\) be the annulus with two marked points \(x_{0}=(x,0)\) and \(x_{1}=(x,1)\) on the boundary (for an arbitrary \(x\in S^{1}\)). Let \(\mathcal{S}^{q}_{\mathrm{SL}_{d}}(A)_{\mathrm{io}}\) be the vector space generated by webs in \(A\) with boundary \(\{x_{0},x_{1}\}\), with orientation going inward at \(x_{0}\) and outward at \(x_{1}\), and quotiented by the skein relations of [1]. (The subscript io stands for "in-out".)
Figure 6 offers a few examples of webs representing elements of \(\mathcal{S}^{q}_{\mathrm{SL}_{d}}(A)_{\mathrm{io}}\) in \(A\). In particular, let \(I\in\mathcal{S}^{q}_{\mathrm{SL}_{d}}(A)_{\mathrm{io}}\) be represented by the arc \(x\times[0,1]\) of the first picture of Figure 6, endowed with weight \(1\), and let the _twist element_ \(T\in\mathcal{S}^{q}_{\mathrm{SL}_{d}}(A)_{\mathrm{io}}\) be the arc of the second diagram of Figure 6, also endowed with weight \(1\). A more elaborate element \(X_{j}\in\mathcal{S}^{q}_{\mathrm{SL}_{d}}(A)_{\mathrm{io}}\), with \(1\leqslant j\leqslant d-2\), is represented by the third web of Figure 6.
The space \(\mathcal{S}^{q}_{\mathrm{SL}_{d}}(A)_{\mathrm{io}}\) comes with a multiplication
\[\circ\,:\mathcal{S}^{q}_{\mathrm{SL}_{d}}(A)_{\mathrm{io}}\otimes\mathcal{S}^{q }_{\mathrm{SL}_{d}}(A)_{\mathrm{io}}\to\mathcal{S}^{q}_{\mathrm{SL}_{d}}(A)_{ \mathrm{io}}\]
where the skein \(W_{1}\circ W_{2}\) is defined by placing \(W_{1}\) in \(S^{1}\times[0,\frac{1}{2}]\) and \(W_{2}\) in \(S^{1}\times[\frac{1}{2},1]\).
It also comes with a left and a right action of the usual skein algebra \(\mathcal{S}^{q}_{\mathrm{SL}_{d}}(A)\) where, if \([W_{0}]\in\mathcal{S}^{q}_{\mathrm{SL}_{d}}(A)\) and \([W_{1}]\in\mathcal{S}^{q}_{\mathrm{SL}_{d}}(A)_{\mathrm{io}}\), \([W_{0}]\bullet[W_{1}]\) is obtained by placing \([W_{0}]\) below \([W_{1}]\) and \([W_{1}]\bullet[W_{0}]\) is obtained by placing \([W_{0}]\) on top of \([W_{1}]\). We are particularly interested in the elements \(I\bullet L_{j}\) and \(L_{j}\bullet I\in\mathcal{S}^{q}_{\mathrm{SL}_{d}}(A)_{\mathrm{io}}\) illustrated in the last two pictures of Figure 7, where \(L_{j}\in\mathcal{S}^{q}_{\mathrm{SL}_{d}}(A)\) is represented by a simple loop going counterclockwise around the annulus and carrying weight \(j\).
The following lemma states that the elements \(I\), \(T\), \(I\bullet L_{j}\) and \(L_{j}\bullet I\) are central in \(\mathcal{S}^{q}_{\mathrm{SL}_{d}}(A)_{\mathrm{io}}\), for the multiplication \(\circ\).
**Lemma 8**.: _For every \(X\in\mathcal{S}^{q}_{\mathrm{SL}_{d}}(A)_{\mathrm{io}}\),_
\[X\circ I=I\circ X=X,\qquad X\circ T=T\circ X,\]

\[X\circ(I\bullet L_{j})=(I\bullet L_{j})\circ X,\qquad X\circ(L_{j}\bullet I)=(L_{j}\bullet I)\circ X.\]
Proof.: These properties are easily checked by elementary isotopies in the thickened annulus \(A\times[0,1]\).
A less immediate relation between the skeins of Figures 6-7 is provided by the skein relations of §1.1.
**Lemma 9**.: _For \(1\leqslant j\leqslant d-1\),_
\[I\bullet L_{j} =\begin{cases}q^{\frac{d-1}{d}}T-q^{-\frac{1}{d}}X_{1}&\text{ if }j=1\\ (-1)^{j-1}q^{\frac{d-j}{d}}X_{j-1}\circ T+(-1)^{j}q^{-\frac{j}{d}}X_{j}&\text{ if }2\leqslant j\leqslant d-2\\ (-1)^{d-2}q^{\frac{1}{d}}X_{d-2}\circ T+q^{\frac{1-d}{d}}T^{-1}&\text{ if }j=d-1\end{cases}\] \[L_{j}\bullet I =\begin{cases}q^{\frac{1-d}{d}}T-q^{\frac{1}{d}}X_{1}&\text{ if }j=1\\ (-1)^{j-1}q^{\frac{j-d}{d}}X_{j-1}\circ T+(-1)^{j}q^{\frac{j}{d}}X_{j}&\text{ if }2\leqslant j\leqslant d-2\\ (-1)^{d-2}q^{-\frac{1}{d}}X_{d-2}\circ T+q^{\frac{d-1}{d}}T^{-1}&\text{ if }j=d-1\end{cases}\]
_where \(T^{-1}\) is the inverse of \(T\) for the composition operation \(\circ\) (which is also its mirror image)._
Proof.: This follows from an application of the braiding relations of Figure 5, which express \(L_{j}\bullet I\) and \(I\bullet L_{j}\) as a linear combination of two webs.
When \(2\leqslant j\leqslant d-2\), the computation for \(I\bullet L_{j}\) is illustrated in Figure 8. On the right hand side of the equation, the webs represented each have one edge carrying weight \(0\) (represented by a dotted line in the pictures) which must be erased by the conventions stated in §1.1. The first web is easily seen to be isotopic to \(X_{j-1}\circ T\), while the second web is isotopic to \(X_{j}\).
When \(j=1\), the first web occurring in the same computation now has two edge weights equal to \(0\). After erasing the corresponding two edges, the resulting web is isotopic to \(T\). The second web is still isotopic to \(X_{1}\).
When \(j=d-1\), the first web is still \(X_{d-2}\circ T\) but the second web has an edge weight equal to \(d\). We now need to use the conventions of [1] for this case, which we had skipped in our discussion in §1.1. These involve a two-step process, first splitting the weight \(d\) edge into two stumps and then flipping the resulting inward stump to the other side of the split vertex at which it is attached (see the top of Page 358 of [1]). After applying the second and third skein relations of Figure 4 followed by an isotopy, we obtain the mirror image of \(T\), which is also \(T^{-1}\) in \(\mathcal{S}^{q}_{\mathrm{SL}_{d}}(A)_{\mathrm{io}}\). See Figure 9.
This completes the proof of the statement of Lemma 9 for \(I\bullet L_{j}\). The proof for \(L_{j}\bullet I\) is essentially identical.
**Lemma 10**.: _For every \(X\in\mathcal{S}^{q}_{\mathrm{SL}_{d}}(A)_{\mathrm{io}}\) and every \(j\) with \(1\leqslant j\leqslant d-2\),_
\[X\circ X_{j}=X_{j}\circ X.\]
Proof.: By induction on \(j\), the formulas of Lemma 9 show that, for the multiplication by composition \(\circ\), the skein \(X_{j}\in\mathcal{S}^{q}_{\mathrm{SL}_{d}}(A)_{\mathrm{io}}\) can be expressed as a polynomial in the skeins \(I\), \(T\) and \(L_{j}\bullet I\). Since these skeins are central in \(\mathcal{S}^{q}_{\mathrm{SL}_{d}}(A)_{\mathrm{io}}\) by Lemma 8, so is \(X_{j}\).
**Proposition 11**.: _Suppose that the \(d\)-root \(q^{\frac{1}{d}}\) occurring in the braiding relations of Figure 5 is a \(2n\)-root of unity, and let \(\widehat{P}^{(n,i)}_{d}\in\mathbb{Z}[e_{1},e_{2},\ldots,e_{d-1}]\) be the reduced power elementary polynomial of §3. Then, for the framed link \(L\subset A\) and the skein \(I\in\mathcal{S}^{q}_{\mathrm{SL}_{d}}(A)_{\mathrm{io}}\) as above,_
\[L^{[\widehat{P}^{(n,i)}_{d}]}\bullet I=I\bullet L^{[\widehat{P}^{(n,i)}_{d}]}.\]
Proof.: Consider \(\mathcal{S}^{q}_{\mathrm{SL}_{d}}(A)_{\mathrm{io}}\) as a ring for the multiplication by composition \(\circ\). Then, the commutativity property of Lemma 10 shows that there is a unique ring homomorphism
\[\psi\colon\mathbb{Z}[x_{1},x_{2},\ldots,x_{d-2},x_{d-1}^{\pm 1}]\to\mathcal{S}^{q}_{ \mathrm{SL}_{d}}(A)_{\mathrm{io}}\]
such that \(\psi(x_{d-1})=q^{\frac{1-d}{d}}T^{-1}\) and \(\psi(x_{j})=(-1)^{j}q^{\frac{j}{d}}X_{j}\) for every \(j\) with \(1\leqslant j\leqslant d-2\).
Figure 9. The proof of Lemma 9 when \(j=d-1\)
If we set
\[y_{j}=\begin{cases}x_{d-1}^{-1}+x_{1}&\text{if }j=1\\ x_{j-1}x_{d-1}^{-1}+x_{j}&\text{if }2\leqslant j\leqslant d-1\end{cases}\]
as in Proposition 7, the first batch of computations in Lemma 9 show that \(\psi(y_{j})=L_{j}\bullet I\) for every \(j\). Applying the ring homomorphism \(\psi\) to both sides of the conclusion
\[\widehat{P}_{d}^{(n,i)}(y_{1},y_{2},\ldots,y_{d-1})=x_{d-1}^{-n}P_{d-1}^{(n,i- 1)}(x_{1},x_{2},\ldots,x_{d-1})+P_{d-1}^{(n,i)}(x_{1},x_{2},\ldots,x_{d-1})\]
of Proposition 7, we conclude that
\[\widehat{P}_{d}^{(n,i)}(L_{1}\bullet I,L_{2}\bullet I,\ldots,L_{ d-1}\bullet I)\] \[=q^{\frac{n(1-d)}{d}}T^{n}\circ P_{d-1}^{(n,i-1)}\big{(}-q^{\frac {1}{d}}X_{1},+q^{\frac{2}{d}}X_{2},\ldots,(-1)^{d-1}q^{\frac{d-1}{d}}X_{d-1} \big{)}\] \[\qquad\qquad+P_{d-1}^{(n,i)}\big{(}-q^{\frac{1}{d}}X_{1},+q^{ \frac{2}{d}}X_{2},\ldots,(-1)^{d-1}q^{\frac{d-1}{d}}X_{d-1}\big{)}\] \[=(-1)^{n(i-1)}q^{\frac{n(i+1-d)}{d}}\,T^{n}\circ P_{d-1}^{(n,i-1) }\big{(}X_{1},X_{2},\ldots,X_{d-1}\big{)}\] \[\qquad\qquad+(-1)^{ni}q^{\frac{ni}{d}}P_{d-1}^{(n,i)}\big{(}X_{1 },X_{2},\ldots,X_{d-1}\big{)},\]
using Lemma 6 for the second equality.
When evaluating a polynomial on elements of \(\mathcal{S}_{\mathrm{SL}_{d}}^{q}(A)_{\mathrm{io}}\), we used the multiplication by juxtaposition \(\circ\). However, in the case of the skeins \(L_{j}\bullet I\), this evaluation is also closely related to the multiplication by superposition \(\bullet\) and to the threading operation. Indeed, by inspection of the definitions,
\[P(L_{1}\bullet I,L_{2}\bullet I,\ldots,L_{d-1}\bullet I)=L^{[P]}\bullet I\]
for every polynomial \(P\in\mathbb{Z}[e_{1},e_{2},\ldots,e_{d-1}]\). In particular, we now conclude that
\[L^{[\widehat{P}_{d}^{(n,i)}]}\bullet I=(-1)^{n(i-1)}q^{\frac{n(i +1-d)}{d}}\,T^{n}\circ P_{d-1}^{(n,i-1)}\big{(}X_{1},X_{2},\ldots,X_{d-1}\big{)}\] \[\qquad\qquad\qquad+(-1)^{ni}q^{\frac{ni}{d}}\,P_{d-1}^{(n,i)} \big{(}X_{1},X_{2},\ldots,X_{d-1}\big{)}.\]
If we now use the second batch of computations in Lemma 9, where \(q\) is replaced by \(q^{-1}\), the same arguments show that
\[I\bullet L^{[\widehat{P}_{d}^{(n,i)}]}=(-1)^{n(i-1)}q^{-\frac{n(i+1-d)}{d}}\,T^{n}\circ P_{d-1}^{(n,i-1)}\big{(}X_{1},X_{2},\ldots,X_{d-1}\big{)}\] \[\qquad\qquad\qquad\qquad+(-1)^{ni}q^{-\frac{ni}{d}}\,P_{d-1}^{(n,i)}\big{(}X_{1},X_{2},\ldots,X_{d-1}\big{)}.\]
We are now ready to use our hypothesis that \(q^{\frac{1}{d}}\) is a \(2n\)-root of unity, which means that \(q^{\frac{n}{d}}=q^{-\frac{n}{d}}\). The above computations then show that \(L^{[\widehat{P}_{d}^{(n,i)}]}\bullet I=I\bullet L^{[\widehat{P}_{d}^{(n,i)}]}\).
## 5. Central and transparent skeins from power elementary polynomials
We now use Proposition 11 to construct transparent elements in the skein module \(\mathcal{S}_{\mathrm{SL}_{d}}^{q}(M)\). The following lemma will enable us to limit our argument to web edges that carry weight \(1\).
**Lemma 12**.: _Let \(W\) be a web in the \(3\)-manifold \(M\), and let \(B\subset M\) be a ball meeting \(W\) along an arc contained in the interior of an edge \(e\) of \(W\) carrying weight \(i\in\{1,2,\ldots,d-1\}\). Suppose that \(q^{2j}\neq 1\) for every \(j\in\{2,3,\ldots,i\}\), so that the quantum factorial \([i]_{q}!\) is nonzero. Then there exists a web \(W^{\prime}\) in \(M\) such that_
1. \([W]=\frac{1}{[i]_{q}!}[W^{\prime}]\) _in the skein module_ \(\mathcal{S}^{q}_{\mathrm{SL}_{d}}(M)\)_;_
2. _every point of_ \(B\cap W^{\prime}\) _is contained in a weight_ \(1\) _edge of_ \(W^{\prime}\)_;_
3. \(W^{\prime}\) _is contained in an arbitrarily small neighborhood of_ \(W\)_._
Proof.: The skein relation of Figure 3 gives us the relation of Figure 10. The result then follows by repeated application of this property.
**Theorem 13**.: _Suppose that the \(d\)-root \(q^{\frac{1}{d}}\) occurring in the definition of skein modules \(\mathcal{S}^{q}_{\mathrm{SL}_{d}}(M)\) is such that \(q^{\frac{2n}{d}}=1\), and that \(q^{2i}\neq 1\) for every integer \(i\) with \(2\leqslant i\leqslant\frac{d}{2}\). Then, for every \(i=1\), \(2\),..., \(d-1\), threading the reduced power elementary polynomial \(\widehat{P}^{(n,i)}_{d}\in\mathbb{Z}[e_{1},e_{2},\ldots,e_{d-1}]\) is transparent in the skein module \(\mathcal{S}^{q}_{\mathrm{SL}_{d}}(M)\) of any oriented \(3\)-manifold \(M\)._
Proof.: Let \(W\) be a web in an oriented \(3\)-manifold \(M\), and let \(L_{1}\) and \(L_{2}\) be two framed knots in \(M\) that are disjoint from \(W\) and isotopic to each other by an isotopy that is allowed to cross \(W\). We want to show that \(L_{1}^{[\widehat{P}^{(n,i)}_{d}]}\sqcup W\) and \(L_{2}^{[\widehat{P}^{(n,i)}_{d}]}\sqcup W\) represent the same element in \(\mathcal{S}^{q}_{\mathrm{SL}_{d}}(M)\).
By decomposing the isotopy into little steps, we can clearly restrict attention to the case where it crosses \(W\) in exactly one point, located in an edge of \(W\) carrying weight \(i\). If \(i>\frac{d}{2}\), we can use the second skein relation of Figure 4 to replace \(i\) by \(d-i\); we can therefore assume that \(i\leqslant\frac{d}{2}\), and in particular that \([i]_{q}!\neq 0\) by our hypotheses on \(q\). Then, applying Lemma 12 to a small ball around the crossing point enables us to restrict attention to the case where the isotopy crosses \(W\) transversely in one point contained in an edge \(e\) with weight \(1\).
By transversality, we can further choose the isotopy so that, for the annulus \(A\), there is an embedding of \(A\times[0,1]\) in \(M\) such that:
1. the intersection of the edge \(e\) with \(A\times[0,1]\) is equal to \(I\times\frac{1}{2}\) for the arc \(I\) of Figure 6, and the ribbon structure there is horizontal for the projection to \(A\);

2. shortly around the time when the isotopy crosses \(e\), the link is contained in \(A\times[0,1]\), its projection to \(A\) is equal to the link \(L\) of Figure 7, and its ribbon structure is horizontal;

3. the link is contained in \(A\times[\frac{1}{2},1]\) shortly before the isotopy crosses \(W\), and in \(A\times[0,\frac{1}{2}]\) shortly after that.

Figure 10. The proof of Lemma 12
Restricting the isotopy to times near the crossing time, we can even assume that \(L_{1}\) is contained in \(A\times[\frac{1}{2},1]\) and that \(L_{2}\) is contained in \(A\times[0,\frac{1}{2}]\).
We can then apply Proposition 11 to conclude that the intersections of \(L_{1}^{[\widehat{P}_{d}^{(n,i)}]}\sqcup W\) and \(L_{2}^{[\widehat{P}_{d}^{(n,i)}]}\sqcup W\) with \(A\times[0,1]\) differ by a sequence of isotopies and skein relations supported in the interior of \(A\times[0,1]\). Since \(L_{1}^{[\widehat{P}_{d}^{(n,i)}]}\sqcup W\) and \(L_{2}^{[\widehat{P}_{d}^{(n,i)}]}\sqcup W\) coincide outside of \(A\times[0,1]\), we conclude that \([L_{1}^{[\widehat{P}_{d}^{(n,i)}]}\sqcup W]=[L_{2}^{[\widehat{P}_{d}^{(n,i)}] }\sqcup W]\) in \(\mathcal{S}_{\mathrm{SL}_{d}}^{q}(M)\).
Applying Lemma 3, an immediate corollary is that Theorem 13 provides central elements in the skein algebra \(\mathcal{S}_{\mathrm{SL}_{d}}^{q}(S)\).
**Corollary 14**.: _Suppose that the \(d\)-root \(q^{\frac{1}{d}}\) occurring in the definition of the skein algebra \(\mathcal{S}_{\mathrm{SL}_{d}}^{q}(M)\) is such that \(q^{\frac{2n}{d}}=1\), and that \(q^{2i}\neq 1\) for every integer \(i\) with \(2\leqslant i\leqslant\frac{d}{2}\). In a thickened surface \(S\times[0,1]\), let \(L=L_{1}\sqcup L_{2}\sqcup\cdots\sqcup L_{c}\) be a framed link in which each component \(L_{j}\) carries a weight \(i_{j}\in\{1,2,\ldots,d-1\}\). Then the skein \(L^{[\widehat{P}_{d}^{(n,\bullet)}]}\in\mathcal{S}_{\mathrm{SL}_{d}}^{q}(S)\) obtained by threading the reduced power elementary polynomial \(\widehat{P}_{d}^{(n,i_{j})}\in\mathbb{Z}[e_{1},e_{2},\ldots,e_{d-1}]\) along each component \(L_{j}\) is central in the skein algebra \(\mathcal{S}_{\mathrm{SL}_{d}}^{q}(S)\) of the surface \(S\). _
_Remark 15_.: In the statements of Theorem 13 and Corollary 14, the condition that \(q^{2i}\neq 1\) for every \(i\) with \(2\leqslant i\leqslant\frac{d}{2}\) is an artifact of our use of Lemma 12 in the proof, and is probably unnecessary.
## 6. Two conjectures
We conclude with two conjectures.
The first conjecture is the obvious one regarding the center of the skein algebra \(\mathcal{S}_{\mathrm{SL}_{d}}^{q}(S)\). In addition to the elements exhibited in this article, the center of \(\mathcal{S}_{\mathrm{SL}_{d}}^{q}(S)\) admits more obvious elements associated to the punctures of the surface \(S\). Indeed, if \([P_{i}]\in\mathcal{S}_{\mathrm{SL}_{d}}^{q}(S)\) is represented by a small loop going around one of the punctures of the surface \(S\), endowed with a weight \(i\in\{1,2,\ldots,d-1\}\), a simple isotopy shows that \([P_{i}]\) is central in \(\mathcal{S}_{\mathrm{SL}_{d}}^{q}(S)\), and this for any value of \(q\).
**Conjecture 16**.: _Suppose that the \(d\)-root \(q^{\frac{1}{d}}\) occurring in the definition of the skein algebra \(\mathcal{S}_{\mathrm{SL}_{d}}^{q}(S)\) is such that \(q^{\frac{2}{d}}\) is a primitive \(n\)-root of unity. Then, for every oriented surface \(S\) of finite topological type, the center of \(\mathcal{S}_{\mathrm{SL}_{d}}^{q}(S)\) is generated (as a subalgebra) by the skeins \([P_{i}]\) associated to punctures as above, as well as by the skeins \(L^{[\widehat{P}_{d}^{(n,i)}]}\) obtained by threading reduced power elementary polynomials \(\widehat{P}_{d}^{(n,i)}\) around framed knots \(L\subset S\times[0,1]\)._
See [11] for a proof of this conjecture in the case where \(d=2\).
The second conjecture is the full \(\mathrm{SL}_{d}\) analogue of the main statement underlying the results of [1] for \(\mathcal{S}^{q}_{\mathrm{SL}_{2}}(S)\). It essentially asserts that the central skeins \(L^{[\widehat{P}^{(n,i)}_{d}]}\in\mathcal{S}^{q}_{\mathrm{SL}_{d}}(S)\), obtained by threading reduced power elementary polynomials along framed knots, satisfy the skein relations corresponding to \(q^{\frac{1}{d}}=1\).
**Conjecture 17**.: _Let \(S\) be an oriented surface of finite topological type. If the \(d\)-root \(q^{\frac{1}{d}}\) occurring in the definition of the \(\mathrm{SL}_{d}\)-skein algebra is a root of unity of order \(n\) coprime with \(2d\), and if the commutative skein algebra \(\mathcal{S}^{1}_{\mathrm{SL}_{d}}(S)\) is defined with the convention that \(1^{\frac{1}{d}}=1\), there exists an algebra homomorphism_
\[\Phi\colon\mathcal{S}^{1}_{\mathrm{SL}_{d}}(S)\to\mathcal{S}^{q}_{\mathrm{SL} _{d}}(S)\]
_with central image such that, for every skein \([L]\in\mathcal{S}^{1}_{\mathrm{SL}_{d}}(S)\) represented by a framed knot \(L\) carrying weight \(i\in\{1,2,\ldots,d-1\}\), the image \(\Phi\big{(}[L]\big{)}=L^{[\widehat{P}^{(n,i)}_{d}]}\) is obtained by threading the reduced power elementary polynomial \(\widehat{P}^{(n,i)}_{d}\in\mathbb{Z}[e_{1},e_{2},\ldots,e_{d-1}]\) along \(L\), in the sense defined in §2._
_Remark 18_.: It easily follows from the skein relations that the algebra \(\mathcal{S}^{1}_{\mathrm{SL}_{d}}(S)\) is generated by knots colored by a weight \(i\in\{1,2,\ldots,d-1\}\). So the homomorphism \(\Phi\colon\mathcal{S}^{1}_{\mathrm{SL}_{d}}(S)\to\mathcal{S}^{q}_{\mathrm{SL} _{d}}(S)\) is unique if it exists.
The case \(d=2\) of this Conjecture 17 was proved in [1] when \(n\) is odd. See also [1, 1] for related statements with other conditions on \(q^{\frac{1}{2}}\). These properties played a fundamental role in the study of the finite-dimensional representation theory [1, 10, 11] of \(\mathcal{S}^{q}_{\mathrm{SL}_{2}}(S)\).
See [14] for a proof when \(d=3\).
For general \(d\), the homomorphism predicted by Conjecture 17 is likely to be the Frobenius homomorphism \(\Phi\colon\mathcal{S}^{1}_{\mathrm{SL}_{3}}(S)\to\mathcal{S}^{q}_{\mathrm{SL} _{3}}(S)\) constructed for \(d=3\) in [14] (see also [15] for \(d=2\)), and conjectured to exist for all \(d\). See [17] for an explicit construction of this Frobenius homomorphism when the surface has nonempty boundary, and [11] for a related construction. Also see [1, 10] for more general developments.
|
2306.14452 | Phenomenon of multiple reentrant localization in a double-stranded helix
with transverse electric field | The present work explores the potential for observing multiple reentrant
localization behavior in a double-stranded helical (DSH) system, extending
beyond the conventional nearest-neighbor hopping interaction. The DSH system is
considered to have hopping dimerization in each strand, while also being
subjected to a transverse electric field. The inclusion of an electric field
serves the dual purpose of inducing quasiperiodic disorder and strand-wise
staggered site energies. Two reentrant localization regions are identified: one
exhibiting true extended behavior in the thermodynamic limit, while the second
region shows quasi-extended characteristics with partial spreading within the
helix. The DSH system exhibits three distinct single-particle mobility edges
linked to localization transitions present in the system. The analysis in this
study involves examining various parameters such as the single-particle energy
spectrum, inverse participation ratio, local probability amplitude, and more.
Our proposal, combining achievable hopping dimerization and induced correlated
disorder, presents a unique opportunity to study phenomenon of reentrant
localization, generating significant research interest. | Sudin Ganguly, Suparna Sarkar, Kallol Mondal, Santanu K. Maiti | 2023-06-26T06:46:23Z | http://arxiv.org/abs/2306.14452v2 | Phenomenon of multiple reentrant localization in a double-stranded helix with transverse electric field
###### Abstract
The present work explores the potential for observing multiple reentrant localization behavior in a double-stranded helical (DSH) system, extending beyond the conventional nearest-neighbor hopping interaction. The DSH system is considered to have hopping dimerization in each strand, while also being subjected to a transverse electric field. The inclusion of an electric field serves the dual purpose of inducing quasiperiodic disorder and strand-wise staggered site energies. Two reentrant localization regions are identified: one exhibiting true extended behavior in the thermodynamic limit, while the second region shows quasi-extended characteristics with partial spreading within the helix. The DSH system exhibits three distinct single-particle mobility edges linked to localization transitions present in the system. The analysis in this study involves examining various parameters such as the single-particle energy spectrum, inverse participation ratio, local probability amplitude, and more. Our proposal, combining achievable hopping dimerization and induced correlated disorder, presents a unique opportunity to study phenomenon of reentrant localization, generating significant research interest.
## I Introduction
The phenomenon of localization has been a vibrant area of research in condensed matter physics ever since its prediction by P. W. Anderson Anderson (1958). Over the years, the interest in this topic has grown exponentially with the exploration of various fascinating systems across different branches of physics Anderson (1958); Anderson (1959); Anderson (1958); Anderson (1958); Anderson (1958); Anderson (1958). Anderson's seminal work Anderson (1958) demonstrated a metal-insulator transition in a one-dimensional atomic system with uncorrelated site energies, where all energy eigenstates become completely localized regardless of the strength of disorder. However, such an uncorrelated disordered system is considered relatively trivial due to the absence of a finite critical disorder strength and the limited control over it. By imposing constraints on the site energies, one can unveil captivating dynamics and explore more intriguing phenomena within correlated disordered systems Ganesh _et al._ (2014); Ganesh _et al._ (2015); Ganesh _et al._ (2016); Ganesh _et al._ (2017).
To date, a wide range of correlated systems have been employed across various fields, and among them, the Aubry-Andre-Harper (AAH) model Aubry and Andre (1958); Harper (1958) stands out as the most prevalent and adaptable example. In the nearest-neighbor tight-binding (TB) framework, the 1D AAH model with an incommensurate potential demonstrates a distinct transition between localization and delocalization. Below a critical point, all eigenstates are found to be delocalized, while beyond that critical point, they become completely localized Aubry and Andre (1958); Harper (1958); Harper (1958). Recent advancements in the field have introduced several generalizations of this model. These include exponential short-range hopping Harper (1958), flatband networks Anderson (1958), higher dimensions Ganesh _et al._ (2016), power-law hopping Ganesh and Ganesh (2017), flux-dependent hopping Ganesh _et al._ (2018), and nonequilibrium generalized AAH model Ganesh _et al._ (2018), etc. The studies have revealed that beyond the nearest-neighbor TB framework, there is typically a single-particle mobility edge (SPME), which represents a critical energy that differentiates localized states from extended states within the system Ganesh _et al._ (2018). AAH systems have also been experimentally realized using cold atoms and optical waveguides. Ganesh _et al._ (2018); Ganesh _et al._ (2018).
Based on current understanding, it has been firmly established that following a localization transition, all states continue to exhibit localization indefinitely as the disorder strength increases. However, recent studies have revealed that under certain constraints or conditions imposed on the system, this characteristic of indefinite localization may alter. In more recent findings, an intriguing occurrence of reentrant localization has been discovered in 1D quasiperiodic disordered systems Ganesh _et al._ (2018); Ganesh _et al.
to the absence of reentrant behavior. Considering this insight, we focus solely on short-range hopping interactions in our current work. By exclusively examining short-range hopping, we are able to observe and validate the presence of multiple localization phenomena using various analytical techniques. These techniques include analyzing the eigenvalue spectrum, inverse participation ratio, and local probability amplitudes, among others. Through these investigations, we gain valuable insights into the behavior and characteristics exhibited by the system.
The key findings of the present work are: (i) presence of multiple reentrant behavior, specifically two instances of reentrant localization, (ii) states in the first reentrant region exhibit a truly extended nature in the thermodynamic limit, (iii) states in the second reentrant region are quasi-extended, and (iv) implementation of the reentrant phenomenon in a realistic biological system simply by applying an electric field.
The rest of the paper is organized as follows. In Sec. II, we describe the helical geometry, the tight-binding Hamiltonian in presence of transverse electric field, and the theoretical formulae for the quantities required to study the localization phenomenon. The numerical results and our analysis are presented in Sec. III. Finally, in Sec. IV, we conclude our findings.
## II System and theoretical framework
Figure 1 depicts the schematic diagram of a right-handed double-stranded helical geometry. The alternate bondings are assumed to be dimerized as shown by the black (dotted black) and blue (dotted blue) lines in strand-I (strand-II). An electric field \(E_{g}\) is applied perpendicular to the axis of the helix. The nature of hopping is determined by two parameters: the stacking distance \(\Delta h\) and the twisting angle \(\Delta\phi\). When \(\Delta h\) is sufficiently small, indicating densely packed atoms, long-range hopping becomes significant as electrons can effectively hop over larger distances. In contrast, when \(\Delta h\) is considerably large, with atoms separated by greater distances, electron motion is restricted to shorter distances, resulting in a short-range hopping helix. In practice, two prominent examples that fall into the short-range hopping and long-range hopping groups are DNA and protein molecules, respectively [37]. However, in this study, our focus is solely on SRH systems, as mentioned previously.
The tight-binding Hamiltonian for the DSH system in the presence of an external electric field is expressed as
\[H = \sum_{j=I,II}\left[\sum_{n=1}^{N}\epsilon_{j,n}c_{j,n}^{\dagger}c_ {j,n}\right. \tag{1}\] \[+ \left.t_{1}\sum_{n=1,3,5,...}^{N}\left(c_{j,n}^{\dagger}c_{j,n+1} +h.c.\right)\right.\] \[+ \left.t_{2}\sum_{n=2,4,6,...}^{N}\left(c_{j,n}^{\dagger}c_{j,n+1} +h.c.\right)\right.\] \[+ \left.\sum_{n=1}^{N}\sum_{\begin{subarray}{c}m=1\\ m-n\neq 1\end{subarray}}^{N}t_{j,(n,m)}\left(c_{j,n}^{\dagger}c_{j,m}+h.c. \right)\right]\] \[+ \sum_{n}t_{3}\left(c_{I,n}^{\dagger}c_{II,n}+h.c.\right).\]
Here \(j(=I,II)\) represents the strand index, \(c_{j,n}^{\dagger}\) and \(c_{j,n}\) are the usual fermionic creation and annihilation operators at the \(n\)th site of strand-\(j\), respectively.
In the first term of Eq. 1, \(\epsilon_{j,n}\) represents the site energy at site \(n\) of strand-\(j\). When an electric field is applied perpendicular to the helix axis, the site energy undergoes modifications as [35; 38; 39]
\[\epsilon_{I,n}=-\epsilon_{II,n}=eV_{g}\cos{(n\Delta\phi-\beta)}, \tag{2}\]
where \(e\) is the electronic charge and \(V_{g}\) corresponds to the gate voltage associated with the applied electric field. The relationship between the gate voltage and the electric field can be expressed as \(2V_{g}=2E_{g}R\). The reversal in sign observed between the two strands can be attributed to the combined effects of the perpendicular electric field and the helix conformation of the strands [35]. The phase factor \(\beta\) represents the angle between the positive \(x\)-axis
and the incident electric field. This phase factor can be adjusted or modified by changing the direction of the electric field. Equation 2 illustrates that the presence of a perpendicular electric field leads to a harmonic modulation of the site energies along the helical strands. Interestingly, such a modulation is identical to the well-known AAH model [14; 15]. The factor \(eV_{g}\) is analogous to the AAH modulation strength \(W\), \(\Delta\phi\) can be identified with the term \(2\pi b\) (\(b\) an irrational number) and the phase \(\beta\) with the Aubry phase \(\phi_{\nu}\) in the AAH model. By selectively choosing the term \(\Delta\phi\), it becomes possible to achieve a deterministic disordered double-stranded helical system, where the site energies exhibit a correlated pattern resembling the AAH model. This correlation is realized when the DSH system is subjected to the electric field \(E_{g}\).

Figure 1: (Color online). Schematics of a right-handed double-stranded helical geometry in presence of an external electric field of strength \(E_{g}\). The blue balls represent the sites in strand-I, and the red balls represent the sites in strand-II. \(R\) is the radius of the helix and \(\Delta h\) is the stacking distance between adjacent sites. \(\phi=n\Delta\phi\), where \(\Delta\phi\) is the twisting angle between the neighboring sites and \(n\) is the site index in each strand. The alternating black (dotted black) and blue (dotted blue) lines indicate dimerization of the adjacent hoppings in strand-I (strand-II).
The second and third terms in Eq. 1 represent the nearest-neighbor hopping terms in the Hamiltonian. The parameters \(t_{1}\) and \(t_{2}\) indicate that the hopping in the DSH system is dimerized.
The fourth term in Eq. 1 is the beyond nearest-neighbor interaction. \(t_{j,(n,m)}\) is the hopping integral between the sites \(n\) and \(m\) in strand-\(j\) and reads as [35; 40]
\[t_{j,(n,m)}=\left(\frac{t_{1}+t_{2}}{2}\right)\mathrm{e}^{-\left(l_{j,(n,m)}- l_{1}\right)/l_{c}}, \tag{3}\]
where \(l_{j,(n,m)}\) is the Euclidean distance between sites \(n\) and \(m\). With \(n-m=k\), it is expressed as
\[l_{j,(n,m)}=\sqrt{\left[2R\sin\left(\frac{k\Delta\phi}{2}\right)\right]^{2}+ \left[k\Delta h\right]^{2}}, \tag{4}\]
and \(l_{1}\) represents the distance between neighboring sites in both strands, and its value can be calculated using Eq.4 when \(k=1\). On the other hand, \(l_{c}\) denotes the decay exponent. In Eq.3, the first term within the parentheses, \(\left(t_{1}+t_{2}\right)/2\), accounts for an average over a unit cell. This average is utilized in the computation of hopping integrals beyond the nearest-neighbor interactions.
The final term in Eq. 1 corresponds to the inter-strand coupling, which describes the interaction between the two strands and \(t_{3}\) represents the inter-strand hopping integral.
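As an illustration of how these ingredients fit together, the sketch below (our own, not the authors' code; all variable names are ours) assembles the \(2N\times 2N\) matrix of Eq. 1, with site energies from Eq. 2, distance-dependent hoppings from Eqs. 3 and 4, and open boundary conditions, and then diagonalizes it. The parameter values are those quoted later in the text, with the electronic charge set to unity.

```python
import numpy as np

def build_dsh_hamiltonian(N, Vg, beta, R, dh, dphi, lc, t1, t2, t3):
    """2N x 2N DSH Hamiltonian; sites 0..N-1 form strand I, N..2N-1 strand II."""
    H = np.zeros((2 * N, 2 * N))
    n = np.arange(1, N + 1)
    eps_I = Vg * np.cos(n * dphi - beta)            # Eq. 2 (with e = 1)
    np.fill_diagonal(H, np.concatenate([eps_I, -eps_I]))

    def distance(k):                                # Eq. 4, for |n - m| = k
        return np.sqrt((2 * R * np.sin(k * dphi / 2)) ** 2 + (k * dh) ** 2)

    l1 = distance(1)
    for s in (0, N):                                # loop over the two strands
        for i in range(N):
            for j in range(i + 1, N):
                k = j - i
                if k == 1:                          # dimerized nearest-neighbor bonds;
                    t = t1 if i % 2 == 0 else t2    # site index i = 0 corresponds to n = 1
                else:                               # Eq. 3: exponentially decaying hopping
                    t = 0.5 * (t1 + t2) * np.exp(-(distance(k) - l1) / lc)
                H[s + i, s + j] = H[s + j, s + i] = t
    for i in range(N):                              # inter-strand coupling
        H[i, N + i] = H[N + i, i] = t3
    return H

H = build_dsh_hamiltonian(N=500, Vg=1.0, beta=0.0, R=8.0, dh=4.3,
                          dphi=np.pi * (np.sqrt(5) - 1) / 4, lc=0.8,
                          t1=0.5, t2=2.2, t3=1.0)
evals, evecs = np.linalg.eigh(H)                    # single-particle spectrum and states
```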
The inverse participation ratio (IPR) serves as a valuable tool for detecting the transition from a localized state to a delocalized state. This measure allows us to quantify and analyze the spatial distribution of a wavefunction or probability density, providing insights into whether the state is confined to a specific region or spread out across multiple locations. By observing changes in the IPR, we can effectively identify and track the transition as the wavefunction evolves from a localized state to a more delocalized one or vice-versa. For the \(n\)th normalized eigenstate, IPR is defined as [41; 42]
\[\mathrm{IPR}_{n}=\sum_{i}|\psi_{n}^{i}|^{4}. \tag{5}\]
In the case of a highly extended state, in the thermodynamic limit, the IPR tends to zero. On the other hand, for a strongly localized state, the IPR approaches unity.
A complementary tool to characterize the localization transition is the normalized participation ratio (NPR), which for the \(n\)th normalized eigenstate is defined as [41; 42]
\[\mathrm{NPR}_{n}=\left(2N\sum_{i}|\psi_{n}^{i}|^{4}\right)^{-1} \tag{6}\]
where \(2N\) is the total number of sites present in the DSH system. In the case of a highly extended state, the NPR tends to unity in the thermodynamic limit. Conversely, for a strongly localized state, the NPR approaches zero.
The earlier defined \(\mathrm{IPR}_{n}\) and \(\mathrm{NPR}_{n}\) can be modified to characterize the parameter space region where localized and delocalized states coexist. One defines their average over a subset of states \(N_{L}\) as follows [42]
\[\langle\mathrm{IPR}\rangle=\sum_{n}^{N_{L}}\frac{\mathrm{IPR}_{n}}{N_{L}}, \langle\mathrm{NPR}\rangle=\sum_{n}^{N_{L}}\frac{\mathrm{NPR}_{n}}{N_{L}}. \tag{7}\]
When all \(N_{L}\) states are localized, \(\langle\mathrm{IPR}\rangle\) tends to unity, while \(\langle\mathrm{NPR}\rangle\) tends to zero when all \(N_{L}\) states are delocalized. However, in the regime where both \(\langle\mathrm{IPR}\rangle\) and \(\langle\mathrm{NPR}\rangle\) remain finite, the Hamiltonian's spectrum features an intermediate phase with coexisting spatially extended and localized eigenstates, along with the presence of SPME.
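Given the eigenvectors of the Hamiltonian (for instance the columns of evecs returned by np.linalg.eigh in the sketch above), Eqs. 5-7 reduce to a few lines of NumPy; this is again only an illustrative sketch with our own function names.

```python
import numpy as np

def ipr(evecs):
    """IPR_n = sum_i |psi_n^i|^4, for every eigenstate n (columns of evecs)."""
    return np.sum(np.abs(evecs) ** 4, axis=0)

def npr(evecs):
    """NPR_n = (2N * sum_i |psi_n^i|^4)^(-1)."""
    return 1.0 / (evecs.shape[0] * ipr(evecs))

def subset_average(values, state_indices):
    """<IPR> or <NPR> over a chosen subset N_L of eigenstates, Eq. 7."""
    return np.mean(values[state_indices])

# Example: averages over the eigenstates with indices 400-600, as used for Fig. 3.
# subset = np.arange(400, 601)
# print(subset_average(ipr(evecs), subset), subset_average(npr(evecs), subset))
```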
## III Results and discussion
Let us mention the common parameter values before presenting the numerical results. To implement the short-range hopping in the DSH system, we consider physical parameters analogous to those found in the real biological system [43]. DNA has been proposed as an ideal and established example of a short-range hopping system by various research groups. The structural parameters for the said geometry are as follows: the radius is taken as \(R=8\) Å, the stacking distance as \(\Delta h=4.3\) Å, the twisting angle \(\Delta\phi=\pi\left(\sqrt{5}-1\right)/4\), and the decay exponent \(l_{c}=0.8\) Å. From the relation \(\Delta\phi=2\pi b\), we can determine the value of \(b\) for the SRH case, which is incommensurate. All the energies are measured in units of eV. The number of sites in each of the strands is taken as \(N=500\). Unless stated otherwise, we set the dimerized hopping integrals as \(t_{1}=0.5\), \(t_{2}=2.2\), and the inter-strand coupling \(t_{3}=1\).
To study the localization transition, the energy spectrum corresponding to the Hamiltonian in Eq. 1 is plotted as a function of gate voltage \(V_{g}\) (in units of Volts) as shown in Fig. 2(a). Each energy point in the plot is color-coded based on its corresponding IPR value, which is computed according to Eq. 5. To capture the localization transition, our colorbar uses purple for the lowest 10% of the maximum IPR value, highlighting the extended states, and a gradient of increasing gray shades for the rest, reflecting a higher degree of localization. In
Fig. 2(a), the purple color extends throughout the entire region below \(V_{g}\sim 1\), indicating that the IPR values in this region are significantly below 0.1. This observation strongly suggests that the states within this region exhibit extended behavior. Above \(V_{g}\sim 1\), a mixed phase emerges where some states begin to localize while others remain extended, resulting in a combination of both localized and extended states. This mixed phase persists until approximately \(V_{g}\sim 2\). However, beyond this critical value, all the states undergo localization, indicating a complete transition to a fully localized state. Around \(V_{g}\sim 2.6\), an intriguing phenomenon occurs as a small number of states around zero energy undergo a reentrant localization, indicated by a narrow purple patch within the predominantly localized region. This region exhibits a transient delocalization, where a few states regain their extended nature in contrast to the surrounding localized states.
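The data behind Fig. 2 can be generated with the short sketch below. The routine build_hamiltonian(Vg) is an assumed placeholder for the \(2N\times 2N\) matrix of Eq. 1 with the parameters quoted above; it is not reproduced here.

```
import numpy as np

# build_hamiltonian(Vg) is an assumed helper returning the 2N x 2N matrix of Eq. (1).
def scan_gate_voltage(build_hamiltonian, Vg_values):
    """Eigenvalues and per-state IPRs for each gate voltage (data behind Fig. 2)."""
    spectra, iprs = [], []
    for Vg in Vg_values:
        energies, vecs = np.linalg.eigh(build_hamiltonian(Vg))
        spectra.append(energies)
        iprs.append(np.sum(np.abs(vecs) ** 4, axis=0))  # IPR of each eigenstate
    return np.array(spectra), np.array(iprs)

# Example: each point (Vg_values[j], spectra[j, n]) of Fig. 2(a) is colored by iprs[j, n].
# spectra, iprs = scan_gate_voltage(build_hamiltonian, np.linspace(0, 5, 251))
```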
Upon crossing the reentrant zone, all the states return to a localized state. However, as we approach \(V_{g}\sim 4\), a noteworthy phenomenon occurs. Several small purple spots emerge, indicating the presence of a second reentrant region. Within this region, a few states exhibit a transient delocalization before ultimately undergoing localization once again. A better visibility of the situation can be obtained by examining the individual eigenstates' IPR, as depicted in Fig. 2(b). This plot provides a comprehensive view of the localization behavior and allows for a more detailed analysis of the reentrant regions and the transition between extended and localized states. The presence of the first reentrant region, spanning from approximately \(V_{g}\sim 2.5\) to \(2.9\), is clearly evident in the plot. Within this range, a significant number of eigenstates exhibit delocalization, marked by a distinct decrease in their IPRs. Similarly, the occurrence of the second reentrant region around \(V_{g}\sim 3.9\) to \(4.1\) is also observed, with a noticeable deviation from the localized behavior as indicated by a cluster of eigenstates displaying lower IPR values. For a more enhanced visualization, a magnified section of Fig.2(b) is illustrated in Fig.2(c), providing a clear depiction of the aforementioned description. In Fig. 2(c), we observe the presence of two horizontal lines highlighted in purple color immediately following the first reentrant localization. To assess the potential occurrence of another reentrant localization, we thoroughly analyze the IPR values and the \(V_{g}\)-window associated with these two lines. Upon investigation, it becomes evident that the IPR values associated with these horizontal lines are approximately 0.09, indicating the presence of quasi-extended states. However, it should be noted that these horizontal lines appear before the completion of the first reentrant localization. Consequently, we can conclude that these two lines do not represent another instance of reentrant behavior.
To gain insight into the mixed phase zone, we compute the average IPR and NPR over a subset of states \(N_{L}\) from the spectrum of Fig. 2, as defined in Eq. 7.
The quantities \(\langle\)IPR\(\rangle\) and \(\langle\)NPR\(\rangle\) are plotted as a function of \(V_{g}\) in Fig. 3. In this analysis, \(N_{L}\) is considered to be the subset of eigenstates with indices ranging from 400 to 600, taken from Fig. 2(b). All the system parameters remain unchanged as described earlier. In Fig. 3, both \(\langle\)IPR\(\rangle\) and \(\langle\)NPR\(\rangle\) exhibit finite values within the range of \(1.1<V_{g}<1.7\), indicating the presence of a critical region where a mixture of extended and localized states coexist. For \(V_{g}>1.7\), the system undergoes a transition into a fully localized state, where all states become localized. Moreover, in the approximate window of \(2.5<V_{g}<2.9\), a dip in the \(\langle\)IPR\(\rangle\) value is observed, accompanied by a bump in \(\langle\)NPR\(\rangle\). This specific region corresponds to the occurrence of the first reentrant region. Within the chosen subset of states, the system hosts two SPMEs. Considering the limited number of extended states in the second reentrant region, detection of the transition becomes challenging within the same
Figure 3: (Color online). \(\langle\)IPR\(\rangle\) and \(\langle\)NPR\(\rangle\) as a function of \(V_{g}\) for a subset of states ranging from 400 to 600 of Fig. 2(b). The shaded regions indicate the critical zones. All the system parameters remain the same as described in Fig. 2.
Figure 2: (Color online). Density plot. (a) The energy spectrum vs gate voltage \(V_{g}\) with \(t_{1}=0.5\), \(t_{2}=2.2\), and \(t_{3}=1\). (b) The energy index vs gate voltage \(V_{g}\). (c) A magnified version of Fig. 2(b) to provide enhanced clarity. The color map shows IPR values of different energy eigenstates.
plot. Nevertheless, it is important to note that when considering the entire spectrum, the system reveals the presence of three distinct SPMEs.
To explore the extension of states within the reentrant regions, we analyze the local probability amplitudes of different states at varying gate voltages.
This analysis provides insights into the robustness of state extension or localization within the system as the gate voltage \(V_{g}\) changes. The results are presented in Fig. 4. Firstly, we calculate the local probability amplitude \(|\psi_{n}^{i}|^{2}\) for the state \(n=500\) under the zero-field condition, as illustrated in Fig. 4(a). In this disorder-free case, as expected, the local probability amplitudes \(|\psi_{n}^{i}|^{2}\) for all sites exhibit extended behavior. This is evident from the smooth sinusoidal curve and the relatively lower values of probability amplitudes throughout the system. Next, we examine the case where \(V_{g}=1\) and focus on the state \(n=500\), with the corresponding result depicted in Fig. 4(b). Notably, the envelope of the local probability distribution maintains the characteristics observed in the disorder-free scenario. Consequently, the state remains within the extended region. Subsequently, we raise the gate voltage to \(V_{g}=2.3\) and examine the state \(n=249\). As depicted in Fig. 4(c), the probability amplitudes for all sites, except for site index \(i=500\), become vanishingly small. Notably, at this specific site, the probability amplitude assumes a relatively large value of approximately 0.8. This observation indicates that the chosen \(V_{g}\) value indeed induces a fully localized state within the system. In Fig. 4(d), we examine the case where \(V_{g}\) is fixed at 2.65, corresponding to the first reentrant localization region. We consider the state \(n=436\), and observe that the probability amplitudes range from 0 to 0.02, indicating relatively low values. Therefore, it is evident that within the first reentrant region, the considered state retains its extended nature. Upon further increasing \(V_{g}\) to 3.5 and examining the state \(n=249\), it is evident from Fig. 4(e) that the system transitions into a fully localized phase once again. To investigate the second reentrant localization, we examine the case where \(V_{g}=4\) and focus on the state \(n=332\). Interestingly, we observe two broad peaks in the distribution of \(|\psi_{n}^{i}|^{2}\), as shown in Fig. 4(f). The values of the probability amplitude for these peaks are relatively low. Upon closer inspection, we find that these peaks are spread over a span of approximately 40-50 sites, as demonstrated in the two insets of Fig. 4(f). Consequently, this region exhibits a quasi-extended behavior. In Fig. 4(g), we set the gate voltage to \(V_{g}=5\) and examine the state \(n=249\). Notably, the probability amplitude is localized predominantly at site \(i=500\) with a value of approximately 0.8, while the amplitudes at all other sites are vanishingly small. This observation confirms the presence of a fully localized state within the system.
To address and account for any potential finite size effects, we examine the relationship between the minimum IPR value and the system size in the two reentrant regions as shown in Fig. 5. The minimum IPR value for a given system size is determined by identifying the lowest IPR among all states, achieved at a specific value of \(V_{g}\).
We plot IPR\({}_{n}\) as a function of the inverse of the system size \(1/2N\) in the first reentrant region, namely at \(V_{g}=2.65\) as shown in Fig. 5(a). As the system size increases, the IPR\({}_{n}\) value decreases following a scaling behavior of \(\mathcal{O}(1/L)\), where \(L\) represents the system size. Consequently, in the thermodynamic limit, the states within the first reentrant region display a tendency towards a true extended nature. In contrast, the results shown in Fig. 5(b) for the second reentrant region do not exhibit a scaling behavior similar to the first reentrant region. Instead, the IPR\({}_{n}\) value decreases with increasing system size in a step-like fashion. In the limit of large system sizes, it converges to a finite value of approximately 0.055. Considering the lower values of IPR and its behavior with respect to system size, it becomes evident that the states within the second reentrant region do not exhibit a genuine extended nature in the thermodynamic limit. Instead, these states can be characterized as quasi-extended, as observed in Fig. 4(f), where they demonstrate a partial spreading throughout the system.
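The finite-size check of Fig. 5 follows the same pattern; the sketch below again relies on an assumed placeholder build_hamiltonian(Vg, N) for Eq. 1 with \(N\) sites per strand.

```
import numpy as np

# build_hamiltonian(Vg, N) is an assumed helper for Eq. (1) with N sites per strand.
def min_ipr_vs_size(build_hamiltonian, Vg, sizes):
    """Minimum IPR over all eigenstates as a function of 1/2N (cf. Fig. 5)."""
    inv_2N, min_ipr = [], []
    for N in sizes:
        _, vecs = np.linalg.eigh(build_hamiltonian(Vg, N))
        inv_2N.append(1.0 / (2 * N))
        min_ipr.append(np.min(np.sum(np.abs(vecs) ** 4, axis=0)))
    return np.array(inv_2N), np.array(min_ipr)

# e.g. min_ipr_vs_size(build_hamiltonian, Vg=2.65, sizes=[250, 500, 1000, 2000])
```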
Finally, we examine the parameter space spanned by the gate voltage \(V_{g}\) and the hopping integrals in terms of the average IPR (\(\langle\mathrm{IPR}\rangle\)) to identify the regions where the phenomenon of reentrant localization occurs.
Figure 5: (Color online). Minimum IPR\({}_{n}\) as a function of \(1/2N\) in the two reentrant regions, namely, at (a) \(V_{g}=2.65\) and (b) \(V_{g}=4\).
Our focus is solely on the first reentrant region; we do not investigate the second reentrant region for the reasons mentioned above. In Fig. 6(a), we plot the color-coded \(\langle\mathrm{IPR}\rangle\) as a function of \(V_{g}\) and \(t_{2}/t_{1}\). All other physical parameters are the same as described in Fig. 2. We calculate \(\langle\mathrm{IPR}\rangle\) using the same method as described in Fig. 3.
The maximum IPR value in the corresponding color bar is 0.7, and the extended nature is attributed to the range from 0 to 0.07, which corresponds to 10% of the maximum IPR value. Based on the plot, we observe that the reentrant region emerges for values of \(t_{2}/t_{1}\) ranging approximately from 2.9 to 4.5 and for \(V_{g}\) within the range of 1.3 to 2.8. By adjusting the inter-strand coupling \(t_{3}\), it is also feasible to alter the extent of the reentrant region. To visualize this, we plot the color-coded \(\langle\mathrm{IPR}\rangle\) as a function of \(V_{g}\) and \(t_{3}/t_{1}\) in Figure 6(b). We observe an approximate reentrant window occurring for \(t_{3}/t_{1}\) values ranging from 2.2 to 4 and for \(V_{g}\) within the range of 1 to 2.6. It is important to note that the parameter space considered is based on the selected subset of eigenstates, as mentioned earlier. The specific values might slightly vary if we were to consider the entire spectrum.
## IV Conclusion
We have focused on the localization behavior of a DSH system under the influence of a transverse electric field. Each strand of the DSH system is assumed to possess dimerized hopping. The introduction of a transverse electric field gives rise to the emergence of correlated disorder within the system, accompanied by a strand-wise staggered arrangement of site energies. Notably, we have observed a distinctive multiple reentrant behavior, specifically characterized by two instances of reentrant localization. This observation has been made by examining the behavior of the IPR of individual eigenstates within the single-particle spectrum. Each localization transition is accompanied by an SPME, and our system exhibits a total of three SPMEs, two of which are associated with the two reentrant regions. By examining the local probability amplitude and the scaling behavior of the IPR, we have found that the states corresponding to the first reentrant region demonstrate genuine extended characteristics in the thermodynamic limit. However, states within the second reentrant region display a quasi-extended nature. Our investigation reveals that the reentrant region can be influenced and adjusted by modulating both the gate voltage and the hopping integrals.
Considering the ongoing progress in experimental feasibility to achieve hopping dimerization [44; 45; 46; 47; 48; 49] and the potential for inducing correlated disorder (AAH) through a transverse electric field, our proposal is highly compelling and is expected to generate significant interest within the research community. The incorporation of these factors in our study presents a valuable opportunity to observe and study the reentrant localization behavior.
|
2308.02944 | dPASP: A Comprehensive Differentiable Probabilistic Answer Set
Programming Environment For Neurosymbolic Learning and Reasoning | We present dPASP, a novel declarative probabilistic logic programming
framework for differentiable neuro-symbolic reasoning. The framework allows for
the specification of discrete probabilistic models with neural predicates,
logic constraints and interval-valued probabilistic choices, thus supporting
models that combine low-level perception (images, texts, etc), common-sense
reasoning, and (vague) statistical knowledge. To support all such features, we
discuss the several semantics for probabilistic logic programs that can express
nondeterministic, contradictory, incomplete and/or statistical knowledge. We
also discuss how gradient-based learning can be performed with neural
predicates and probabilistic choices under selected semantics. We then describe
an implemented package that supports inference and learning in the language,
along with several example programs. The package requires minimal user
knowledge of deep learning system's inner workings, while allowing end-to-end
training of rather sophisticated models and loss functions. | Renato Lui Geh, Jonas Gonçalves, Igor Cataneo Silveira, Denis Deratani Mauá, Fabio Gagliardi Cozman | 2023-08-05T19:36:58Z | http://arxiv.org/abs/2308.02944v1 | dPASP: A Comprehensive Differentiable Probabilistic Answer Set Programming Environment For Neurosymbolic Learning and Reasoning
###### Abstract
We present dPASP, a novel declarative probabilistic logic programming framework for differentiable neuro-symbolic reasoning. The framework allows for the specification of discrete probabilistic models with neural predicates, logic constraints and interval-valued probabilistic choices, thus supporting models that combine low-level perception (images, texts, etc), common-sense reasoning, and (vague) statistical knowledge. To support all such features, we discuss the several semantics for probabilistic logic programs that can express nondeterministic, contradictory, incomplete and/or statistical knowledge. We also discuss how gradient-based learning can be performed with neural predicates and probabilistic choices under selected semantics. We then describe an implemented package that supports inference and learning in the language, along with several example programs. The package requires minimal user knowledge of deep learning system's inner workings, while allowing end-to-end training of rather sophisticated models and loss functions.
## 1 Introduction
Answer Set Programming (ASP) (Gebser et al., 2012; Lifschitz, 2019) is a powerful declarative paradigm for specifying domain knowledge by means of logic programming. For example, the following program very intuitively describes the causal relationships between stress, smoking and peer pressure (Fierens et al., 2015).
smokes(X) :- stress(X). smokes(X) :- influences(Y,X), smokes(Y).
The program can be extended with a database of known facts such as influences(anna,bill) and stress(anna), and used to conclude that smokes(bill) must be true (according to some selected semantics).
While powerful, ASP cannot cope with uncertainty, which abounds in data-driven situations. For instance, suppose that we only know (or are only willing to ascertain) that Anna influences Bill with probability 0.8:
\[\texttt{|0.8::influences(anna,bill).}\]
Such probabilistic facts appear quite naturally when they are the output of perception models. In particular, when differentiable models such as deep neural networks are used, as is the case of image, text and speech recognition.
Once probabilistic facts are allowed, we enter the realm of Probabilistic Answer Set Programming (PASP) (Cozman and Maua, 2020); and when the probabilistic facts are linked to the output of neural probabilistic classifiers, we name the resulting framework Neural-Probabilistic Logic Programming (NPLP). Essentially, NPLP provides a tight coupling of low-level reasoning models (e.g., image recognition) and high-level reasoning (e.g., planning). By exploiting existing domain knowledge, NPLP allows, among other things, end-to-end gradient-based weakly supervised learning of neural models, thus providing an effective implementation for neurosymbolic reasoning (Yang et al., 2020; Manhaeve et al., 2021).
An important and mostly overlooked component of NPLP systems is the interface connecting real-world objects, neural network models and logic programming languages. Existing implementations of NPLP such as DeepProbLog(Manhaeve et al., 2021) and NeurASP(Yang et al., 2020) require the user to write the "glue" between the neural-probabilistic logic language and the deep learning framework themselves, a task which is not always straightforward. These frameworks are also limited: DeepProbLog forbids negative cycles and nondeterministic knowledge; NeurASP disallows probabilistic facts; and neither of them deals with contradictions or vague uncertain knowledge.
In order to fill those gaps, we introduce dPASP, a new NPLP framework that features a large set of ASP constructs, can learn from both probabilistic facts and neural atoms, and
supports several semantics, from standard maximum entropy probability measures to richer semantics that can accommodate contradictions and partial specification of probabilistic knowledge.
## 2 Background
We review some background knowledge on logic and probabilistic logic programming. We assume the reader has some familiarity with the terminology and basics of logic programming (Gebser et al., 2012), and focus on less common concepts such as L-stable semantics, probability models and the credal L-stable semantics (Rocha and Cozman, 2022).
### Logic Programming
A logic program is a finite set of disjunctive rules of the form:
\[\mid h_{1}\text{; }\ldots\text{; }h_{k}\text{ :- }b_{1}\text{, }\ldots\text{, }b_{n}\text{, not }b_{n+1}\text{, }\ldots\text{, not }b_{n+m}\text{.}\]
where each \(h_{i}\) and \(b_{j}\) is an atom and not denotes default negation. We say that \(\mathsf{head}(r)=\{h_{i}\}_{i=1}^{k}\) is the _head_ of the rule, while the atoms right of :- are the _body_, denoted \(\mathsf{body}(r)\); sets \(\mathsf{body}^{-}(r)\) and \(\mathsf{body}^{+}(r)\) denote, resp., the sets of negated (i.e., preceded by \(\mathsf{not}\)) and non-negated atoms in \(\mathsf{body}(r)\). The rule is disjunctive if the head has two or more atoms. It is called an integrity constraint if the head is empty, and normal if the head is a singleton. And it is a fact if it is normal and the body is empty.
The _Herbrand base_ of a program is the set formed by all ground atoms that can be built using predicate names and constants in the program. The _grounding_ of a program is the propositional program obtained by grounding each rule, that is, replacing variables with constants from the Herbrand base in every consistent way. The semantics of a program with variables is the semantics of its grounding.
Let \(t\), \(f\) and \(u\) denote ground atoms which do _not_ occur in a program \(L\). A three-valued interpretation \(I\), also called a partial interpretation, is a function from the atoms in the Herbrand base to \(\{0,0.5,1\}\) such that \(I(t)=1\), \(I(f)=0\) and \(I(u)=0.5\). We define \(I^{t}=\{A\mid I(A)=1\}\), \(I^{f}=\{A\mid I(A)=0\}\) and \(I^{u}=\{A\mid I(A)=0.5\}\) as the sets of true, false and undefined atoms, respectively, according to \(I\). An interpretation is _total_ if it assigns every atom (other than \(u\)) to either true or false. The interpretation of the positive body of a rule \(r\) is the minimum over the value of its atoms, \(I(\mathsf{body}^{+}(r))=\min\{I(A)\mid A\in\mathsf{body}^{+}(r)\}\). The value of the negative body is the minimum of the complements of the atom's values, \(I(\mathsf{body}^{-}(r))=\min\{1-I(A)\mid A\in\mathsf{body}^{-}(r)\}\). The interpretation of the body is \(I(\mathsf{body}(r))=\min\{I(\mathsf{body}^{+}(r)),I(\mathsf{body}^{-}(r))\}\). Last, the interpretation of the rule's head is the maximum of its atoms, \(I(\mathsf{head}(r))=\max\{I(A)\mid A\in\mathsf{head}(r)\}\).
An interpretation _satisfies_ a rule \(r\) if and only if \(I(\mathsf{head}(r))\geq I(\mathsf{body}(r))\). \(I\) is a _model_ of a program if it satisfies all of its rules. We define a partial order \(\leq\) (reflexive, antisymmetric and transitive) of interpretations as: \(I_{0}\leq I_{1}\) if and only if \(I_{0}(A)\leq I_{1}(A)\) for all \(A\). A model \(I\) is minimal if there is no model \(I^{\prime}\leq I\) such that \(I\neq I^{\prime}\). If \(I\) is total then it is minimal if and only if \(I^{t}\) is \(\subseteq\)-minimal. Note that by (3) an interpretation that assigns undefined to all atoms is always a model as long as the program can be rewritten so it does not contain disjunctive rules nor integrity constraints. Thus, since \(\leq\) is a partial order and the Herbrand base is finite, every normal program admits one or more minimal models (Przymusinski, 1991). This is in contrast to complete (i.e., true/false) semantics, for which a normal program might have none, one or multiple minimal models.
The stability of a model \(I\) is connected to the notion of the program's _reduct_ w.r.t. \(I\), written \(P/I\), obtained by the modified Gelfond-Lifschitz transformation (Przymusinski, 1991). The transformation operates on each atom \(A\in\mathsf{body}^{-}(r)\) in the negative body of a rule \(r\in L\) and replaces it by the atom \(t\), \(f\) or \(u\) corresponding to its semantics in \(I\). Formally, it replaces \(A\) with: (i) \(t\) if \(A\in I^{f}\); or (ii) \(f\) if \(A\in I^{t}\); or (iii) \(u\) if \(A\in I^{u}\). We say that \(I\) is a _partial stable model_ of \(L\) if \(I\) is a minimal model of \(L/I\). This is the partial stable model semantics (P-stable) for logic programs with disjunctions and default negation (Przymusinski, 1991). A stable model \(I_{0}\) is _least undefined_ if there is no other stable model \(I_{1}\) with \(I_{0}^{u}\subset I_{1}^{u}\). That is, \(I\) is least undefined if there is no other stable model that defines (as true or false) more atoms than it. Then we say that \(I\) is a least undefined stable model of \(L\), or L-stable model, for short. This is finally the L-stable model semantics of disjunctive logic programs (Sacca and Zaniolo, 1997).
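As an illustration, a minimal sketch of the modified Gelfond-Lifschitz transformation described above is given below; the rule representation (tuples of head, positive-body and negated-body atoms) is chosen for readability and is not dPASP's internal data structure.

```
# A ground rule is represented as (head_atoms, pos_body_atoms, neg_body_atoms);
# an interpretation maps atoms to 1 (true), 0 (false) or 0.5 (undefined).
T, F, U = "t", "f", "u"   # special atoms assumed not to occur in the program

def reduct(program, interp):
    """Modified Gelfond-Lifschitz reduct P/I for a three-valued interpretation I."""
    reduced = []
    for head, pos, neg in program:
        new_pos = list(pos)
        for atom in neg:
            value = interp[atom]
            # 'not atom' is replaced by t, f or u according to its value in I
            new_pos.append(T if value == 0 else F if value == 1 else U)
        reduced.append((head, tuple(new_pos), ()))   # default negation is eliminated
    return reduced

# Example: reduct([(("p",), ("q",), ("r",))], {"p": 0.5, "q": 1, "r": 0.5})
# turns the rule p :- q, not r. into p :- q, u.
```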
### Probabilistic Logic Programming
Probabilistic logic programming extend logic programs with _annotated disjunctive rules_ (ADR) of the form
\[p_{1}\text{::}a_{1}\text{; }\ldots\text{; }p_{k}\text{::}a_{k}\text{ :- }b_{1}\text{, }\ldots\text{, }b_{n}\text{, not }b_{n+1}\text{, }\ldots\text{, not }b_{n+m}\text{.}\]
where \(p_{1},\ldots,p_{k}\) are nonnegative real values whose sum is equal to one. We also allow ADRs where \(\sum_{i}p_{i}<1\). Semantically, this is a syntactic sugar where the head is extended by atom \(f\) (which is never true) with probability \(1-\sum_{i}p_{i}\). For example, a probabilistic fact is an ADR with a singleton head and empty body, written as:
\[\theta\text{::}a\text{.}\]
It is equivalent to the ADR
\[\theta\text{::}a\text{; }1-\theta\text{::}f\text{.}\]
A probabilistic logic program is a finite set of ADRs and (non-probabilistic) rules. The semantics of probabilistic logic programs extends that of Sato's distribution semantics (Sato, 1995), in which independent random choices induce logic programs.
A _total choice_ independently selects one atom of the head of each ADR in a probabilistic logic program. Note that a total choice can select different atoms for rules with the same head (and different bodies). Each total choice \(\theta\) induces a logic program where each ADR \(r\) is transformed into a normal rule
\[h\text{ :- }b_{1}\text{, }\ldots\text{, not }b_{m}\text{.}\]
where \(h\) is the atom selected by the total choice for rule \(r\), and \(b_{1},\ldots,b_{m}\) are the body.
An interpretation of a probabilistic logic program is simply an interpretation of the logic program obtained by dropping probabilities (i.e., turning ADRs into disjunctive rules). Let \(\theta\) denote both a total choice and its induced logic program, and \(\Gamma(\theta)\) denote the selected models of \(\theta\), according to some semantics. A _probability model_ is a probability measure \(\mathbb{P}\) over the interpretations \(I\) of the program such that (i) \(\mathbb{P}(I)>0\) if and only if \(I\) is in \(\Gamma(\theta)\) for some total choice \(\theta\), and (ii) \(\mathbb{P}(\Gamma(\theta))=\prod_{r}p_{r}\), where the product runs over all ADRs in the probabilistic logic program and \(p_{r}\) is the probability annotating \(\theta(r)\) in rule \(r\). The _maximum entropy_ probability model, or max-ent model for short, is the probability model that splits \(\mathbb{P}(\Gamma(\theta))\) evenly over each stable model, that is, \(\mathbb{P}(I)=\sum_{\theta:I\in\Gamma(\theta)}\mathbb{P}(\Gamma(\theta))/|\Gamma(\theta)|\) for any stable model \(I\).
The _credal semantics_ assigns to each atom \(a\) a pair of lower and upper probabilities such that \(\underline{\mathbb{P}}(a)=\min_{\mathbb{P}}\mathbb{P}(a)\) and \(\overline{\mathbb{P}}(a)=\max_{\mathbb{P}}\mathbb{P}(a)\), where the optimizations are over the set of probability models (which is closed and convex). The max-ent semantics assigns to \(a\) the probability of the corresponding max-ent probability model. Note that by definition we have that \(\mathbb{P}_{\text{max-ent}}(a)\in[\underline{\mathbb{P}}(a),\overline{\mathbb{P}}(a)]\).
Note that the previous probabilistic semantics are agnostic with respect to the logic semantics used to produce \(\Gamma(\theta)\), requiring only that it is non-empty for each \(\theta\). In the literature, such a set is more commonly selected as the set of stable models (Cozman and Maua, 2020) or well-founded models (Fierens et al., 2015).
As we will discuss later, dPASP supports the combination of either the stable, partial, L-stable or SMProbLog semantics, for the logic part, and the credal or max-ent semantics, for the probabilistic part. The result is thus one of eight different semantics, to be selected by the user and according to the task at hand. Not every feature is however available for each selected semantics: for instance, learning under credal or L-stable semantics remains an open problem.
## 3 The dPASP Framework
We now describe the language as well as inference and learning routines of the dPASP framework.
### Language
dPASP extends clingo's syntax (Gebser et al., 2017) to neural-probabilistic logic programs by including annotated disjunctive rules and _neural annotated disjunctive rule_ (NADR). The latter are ADR whose probabilities are set by a(n externally defined) neural network.
The neural networks that parametrize NADRs are specified using standard deep learning frameworks such as PyTorch. To facilitate the integration between the probabilistic logic program and the deep learning framework, dPASP allows the embedding of Python code within the program via the #python guard. Code within this guard is executed and all functions declared are available for use within special predicates in the program. As an example, consider the following #python code prototype for constructing a neural network in PyTorch for MNIST.
#python
def net(): return ...        # Neural network.
def mnist_tr(i): return ...  # first (i=0) or second (i=1) half of the train set as a tensor.
def mnist_te(i): return ...  # first (i=0) or second (i=1) half of the test set as a tensor.
#end.
The specification of the interface between raw data (that is fed to the neural network in the python code part) and program constants is managed by a special rule of the form
\[\left|\begin{array}{l}\texttt{atom(x)}\sim\texttt{test(@arg1), train(@arg2)}\text{.}\end{array}\right.\]
In that rule, atom is a user-defined one-place predicate name used to represent an object fed into the neural network, x is a constant identifying a particular object (since the same network can be used several times in the same program) and test and train are reserved predicates, whose arguments are either paths to CSV files or Python functions as defined in #python. As an example, consider the example of adding two digit images classified by neural networks, as described in Manhaeve et al. (2021); here, the network's inputs are images, with each grounding of the NADR representing one of the two digits in the sum, and different images in the train and test sets.
\[\left|\begin{array}{l}\texttt{input(0)}\sim\texttt{test(@mnist_tr(0)), train(@mnist_te(0))}\text{.}\end{array}\right.\]
Note that 0 and 1 are arbitrary constants used to identify the two distinct inputs of the same neural network. These
constants are associated with pairs of images taken either from the test or the train set, depending on the task being solved.
A NADR's head must contain a single (possibly non-ground) atom of the form \(\ell::\texttt{f(X, }\{\texttt{v}_{1},\ldots,\texttt{v}_{k}\}\)), where \(\ell\) is either a ?, indicating the neural ADR is learnable, or a 1, in which case its parameters are fixed during learning. Variable X is used to ground the NADR according to its body, while \(\texttt{v}_{1},\ldots,\texttt{v}_{k}\) are the possible values the annotated disjunctive rule may take. Optionally, an interval may be passed as a shorthand; for instance, 0..9 is equivalent to writing out all digits from 0 to 9. The network in the NADR is embedded by either passing a function declared in \(\texttt{\#python}\) or a GitHub repository in PyTorch Hub format. If the NADR is learnable, then we may optionally pass parameters to the PyTorch optimizer via the with operator. Finally, the NADR must then be declared with the data predicate as one of the subgoals in its body. In our running MNIST addition example, the neural predicate digit must cover all digits from 0 to 9.
```
?::digit(X, {0..9}) as @net :- input(X).
```
Data embedded into special data predicates (e.g. input(0) and input(1) in the previous example) are passed as PyTorch tensors to the neural networks, allowing for efficient parallelization in the CPU or GPU.
Regular ADRs are declared in a similar fashion to NADRs in dPASP, except they allow setting an initial probability value for the optimization when the rule is learnable. For example:
```
0.57::h1; ?::h2; ?::h3 :- b1, b2, ..., b4.
```
When no initialization value is given, like in the case of h2 and h3, dPASP uniformly distributes the remaining mass to the rest of the atoms. dPASP also supports credal facts in order to specify a credal interval for imprecise inference, where
```
[0.2,0.7]::f.
```
indicates f may take a probability as low as 0.2 and as high as 0.7.
Purely logic rules may contain arithmetic operations and comparisons over variables of annotated disjunctive rules as long as they are safe and appear in the body.
```
sum(Z) :- digit(0, X), digit(1, Y), Z = X + Y.
```
Querying a partial interpretation (possibly conditioned on another interpretation) is done through the special directive #query.
```
#query digit(0, 4).             % P(digit(0, 4))
#query sum(8) | not both_even.  % P(sum(8) | not both_even)
```
The under keyword may be used within #query for querying the probability of an atom being undefined under the L-stable or partial semantics. Each query returns either a precise probability if under the max-entropy semantics, or a pair of probabilities encoding lower and upper bounds if under the credal semantics.
In order to define the semantics of the program, a special directive #semantics may be used. It must receive at least one of the supported logic or probabilistic semantics, i.e. stable model, partial, L-stable, max-entropy or credal semantics. If none is given, dPASP defaults to the credal and stable models semantics. If only one is given, the missing one is set to its respective default semantics.
```
#semantics lstable, maxent.  % L-stable; max-entropy.
#semantics credal, partial.  % Credal; partial.
#semantics stable.           % Stable; credal.
#semantics maxent.           % Max-entropy; stable.
```
The #learn directive specifies the learning procedure to take place in the program, and receives as parameters the training dataset with the observed atoms, as well as the learning parameters such as learning rate, number of epochs and batch size.
```
#learn @observed_atoms, lr = 0.001, niters = 5.
```
### Inference
The most typical inference one draws with neuro-probabilistic logic programming models is to compute the probability of a query atom, possibly conditional on some evidence. That is, if \(\mathbf{q}=\{q_{1},\ldots,q_{m}\}\) and \(\mathbf{e}=\{e_{1},\ldots,e_{n}\}\) are disjoint sets of literals, then we are usually interested in computing
\[\mathbb{P}(\mathbf{q}|\mathbf{e})=\frac{\sum_{I\models\mathbf{q},\mathbf{e}}\mathbb{P}(I)}{\sum_{I\models\mathbf{e}}\mathbb{P}(I)}\,,\]
for some or all probability models \(\mathbb{P}\).
dPASP provides exact inference by enumerating total choices and using clingo's solver to enumerate all models for each induced ASP program. Performing exact inference by that exhaustive approach, be it under the max-ent or credal semantics, limits the scalability of inference to programs with few NADRs and ADRs.
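As a minimal sketch of the model-counting step, the snippet below uses clingo's Python API to count the stable models of the program induced by one total choice; producing that program string from the total choice is assumed to happen elsewhere.

```
import clingo

def count_models(asp_program: str) -> int:
    """Count the stable models of a ground ASP program using clingo."""
    ctl = clingo.Control(["0"])          # "0": enumerate all models
    ctl.add("base", [], asp_program)
    ctl.ground([("base", [])])
    n = 0
    def on_model(_model):
        nonlocal n
        n += 1
    ctl.solve(on_model=on_model)
    return n

# e.g. count_models("a ; b. c :- a.") returns the number of stable models
# of the program induced by one total choice.
```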
More scalable approximate inference based on knowledge compilation (Totis et al., 2021), sampling (Tuckey et al., 2021; Azzolini et al., 2023) and variational methods are planned features for future versions of dPASP, which is currently in early stage development.
#### 3.2.1 Partial, L-stable and SMProbLog semantics
Internally, dPASP only accepts the stable model semantics when performing inference or learning. To enable support for the other semantics, we implement translation procedures to the stable model semantics.
The partial semantics in dPASP is implemented via the translation described in (Janhunen et al., 2006). In a nutshell, dPASP creates an auxiliary atom and rule for each non-probabilistic atom in the program and duplicates logic rules in order to allow undefined values for non-probabilistic atoms. The L-stable semantics is implemented by checking, at each total choice, if there exists a stable model for the program: in the positive case, dPASP performs inference over the stable models of such a program, otherwise it queries from the translated program's partial stable models.
We also implement the SMProbLog semantics, introduced in Totis et al. (2021). The main difference between the L-stable and SMProbLog's semantics is how to deal with undefined atoms. While in L-stable a model may contain undefined, true and false atoms, in SMProbLog, if an atom is set to undefined in a model, then all atoms in this model must also be undefined.
#### 3.2.2 Maximum entropy semantics
If the max-ent semantics is selected, it is sufficient to simply add up the (uniform) probabilities of each model that is consistent with the query; this is the same procedure done in Yang et al. (2020). More formally, the probability of some observation \(O\) under the max-ent semantics is given by
\[\mathbb{P}(O)=\sum_{\theta\in\Theta}\mathbb{P}(\theta)\cdot\frac{N(I_{\theta} \models O)}{N(\theta)}, \tag{1}\]
where \(\Theta\) is the set of all total choices, and \(N(I_{\theta}\models O)\) and \(N(\theta)\) return respectively the number of stable models that are consistent with both \(\theta\) and \(O\), and the number of stable models consistent with \(\theta\).
dPASP computes \(N(I_{\theta}\models O)\) and \(N(\theta)\) by calling clingo's solver and counting, for each total choice \(\theta\), how many models are consistent with observation \(O\) and how many models are in total. The probability \(\mathbb{P}(\theta)\) is easily computable by simply multiplying the probabilities of each probabilistic and neural component (i.e. probabilistic and neural facts, rules and annotated disjunctions), as we assume them to be marginally independent from each other.
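The whole computation of Eq. (1) then reduces to the loop sketched below; the helpers total_choices, prob_of_choice, induced_program and observation_constraints are illustrative placeholders, and count_models is the clingo-based counter sketched above.

```
def maxent_probability(total_choices, prob_of_choice, induced_program,
                       observation_constraints, count_models):
    """P(O) under the max-ent semantics, Eq. (1), by exhaustive enumeration.

    total_choices: iterable over all total choices theta.
    prob_of_choice(theta): product of the probabilities annotating theta.
    induced_program(theta): ASP program induced by theta, as a string.
    observation_constraints: integrity constraints forcing the observation O.
    count_models(program): number of stable models of a program (see above).
    """
    p = 0.0
    for theta in total_choices:
        program = induced_program(theta)
        n_total = count_models(program)                                  # N(theta)
        if n_total == 0:
            continue
        n_obs = count_models(program + "\n" + observation_constraints)   # N(I_theta |= O)
        p += prob_of_choice(theta) * n_obs / n_total
    return p
```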
#### 3.2.3 Credal semantics
For the credal semantics, one is interested in the interval of all probabilities \(\mathbb{P}(\mathbf{q}|\mathbf{e})\) obtained by some probability model. That interval can be described by its lower and upper values, which in dPASP are obtained by the exact algorithm described in Cozman and Maua (2020). In short, given query \(\mathbf{q}=\{q_{1},\ldots,q_{m}\}\) and evidence \(\mathbf{e}=\{e_{1},\ldots,e_{n}\}\) literals, we compute the lower \(\underline{\mathbb{P}}(\mathbf{q}|\mathbf{e})\) and upper \(\overline{\mathbb{P}}(\mathbf{q}|\mathbf{e})\) probabilities by iterating over the total choices \(\theta\) and counting those for which (\(a\)) every model satisfies both \(\mathbf{q}\) and \(\mathbf{e}\), (\(b\)) some model satisfies both \(\mathbf{q}\) and \(\mathbf{e}\), (\(c\)) every model satisfies \(\mathbf{e}\) but does not satisfy some value in \(\mathbf{q}\), and (\(d\)) some model satisfies \(\mathbf{e}\) but does not satisfy some value in \(\mathbf{q}\). The credal interval is then \([0,0]\) if \(b+c=0\) and \(d>0\), \([1,1]\) if \(a+d=0\) and \(b>0\), and \([a/(a+d),b/(b+c)]\) otherwise.
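The case analysis above can be written compactly as follows, where \(a\), \(b\), \(c\) and \(d\) are the four quantities (counts or probability-weighted masses) accumulated over total choices as just described.

```
def credal_interval(a, b, c, d):
    """Lower/upper probability of q given e from the four total-choice quantities.

    a: choices for which every model satisfies q and e;
    b: choices for which some model satisfies q and e;
    c: choices for which every model satisfies e but violates some literal of q;
    d: choices for which some model satisfies e but violates some literal of q.
    """
    if b + c == 0 and d > 0:
        return 0.0, 0.0
    if a + d == 0 and b > 0:
        return 1.0, 1.0
    return a / (a + d), b / (b + c)
```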
Credal facts in dPASP are only available when the credal semantics is selected. To perform inference with credal facts, dPASP constructs four multilinear polynomials corresponding to \(a\), \(b\), \(c\) and \(d\); each term is a total choice \(\theta\), each coefficient is the probability of \(\theta\), and variables in the polynomial are \(x\) if \(X=1\) in \(\theta\) or \(1-x\) otherwise. The domain of the polynomial is the cartesian product of all pairs of lower and upper probabilities in credal facts. The functions \(a(\mathbf{x})/(a(\mathbf{x})+d(\mathbf{x}))\) and \(b(\mathbf{x})/(b(\mathbf{x})+c(\mathbf{x}))\) are then optimized in order to find the two global minimum and maximum respectively, with the first amounting to the lower and the second the upper probabilities of the queries.
### Parameter learning
dPASP currently implements three parameter learning rules for the max-ent stable model semantics: (i) a fixed-point learning procedure for non-neural programs, (ii) a Lagrange multiplier derivation for gradient ascent, and (iii) an implementation of NeurASP's learning procedure (Yang et al., 2020). How to learn the parameters of programs in partial or least undefined stable model semantics either under the max-ent or credal semantics is an open problem.
We now describe the first two learning rules, which as far as we are aware, are novel in the literature. Both rules provide rules for maximizing the log-likelihood \(\mathcal{L}(\mathbf{O})=\sum_{O\in\mathbf{O}}\log\mathbb{P}(O)\) of a set of observations \(\mathbf{O}\) with respect to the parameters \(\mathbf{p}\) of the program, which are the probabilities that annotate ADRs.
#### 3.3.1 Fixed-point parameter learning
We start with the fixed-point learning procedure, which can be used when we have only ADRs (but no NADR).
**Proposition 3.1**.: _Let \(\mathbb{P}(X=x)\) be the probability of a specific probabilistic component \(X\) we wish to learn from the set of observations \(\mathbf{O}\). If the iterated application of the rule_
\[\mathbb{P}(X=x)=\frac{1}{|\mathbf{O}|}\cdot\sum_{O\in\mathbf{O}}\frac{\mathbb{P }(X=x,O)}{\mathbb{P}(O)}. \tag{2}\]
_converges, then it does so to a critical point of the log-likelihood function._
The marginal \(\mathbb{P}(X=x,O)\) can be computed by counting
the models consistent with both the observation \(O\) and \(X=x\) and weighting over the probability of the total choices.
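A minimal sketch of the resulting fixed-point iteration is given below; the oracles joint_prob and marginal_prob are assumed to be implemented by the exhaustive counting routines of Section 3.2, and the names are illustrative.

```
def fixed_point_learning(xs, observations, joint_prob, marginal_prob,
                         n_iters=100, tol=1e-6):
    """Iterate Eq. (2) for one probabilistic component X with values xs.

    joint_prob(x, O, p): P(X=x, O) under the current parameters p (a dict x -> prob).
    marginal_prob(O, p): P(O) under the current parameters p.
    """
    p = {x: 1.0 / len(xs) for x in xs}          # uniform initialization
    for _ in range(n_iters):
        new_p = {x: sum(joint_prob(x, O, p) / marginal_prob(O, p)
                        for O in observations) / len(observations)
                 for x in xs}
        if max(abs(new_p[x] - p[x]) for x in xs) < tol:
            return new_p
        p = new_p
    return p
```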
#### 3.3.2 Lagrangian parameter learning
We now derive an update rule that applies also in the presence of NADRs, and is an alternative to NeurASP's learning rule (Yang et al., 2020). To better understand the need for our alternative parameter learning method, we must first understand the shortcomings of NeurASP's learning rule. The rule updates parameters by \(\mathbf{p}\leftarrow\mathbf{p}-\eta\nabla_{\mathbf{p}}\mathcal{L}(O)\), where \(\eta\) is a learning rate and the gradient components are:
\[\frac{\partial}{\partial p_{x}}\mathcal{L}(O)= \tag{3}\] \[\frac{1}{\mathbb{P}(O)}\sum_{\theta_{x}}\frac{\mathbb{P}(\theta_ {x})}{\mathbb{P}(X=x)}\cdot\frac{N(I_{\theta_{x}}\models O)}{N(\theta_{x})}\] \[\quad\quad-\sum_{\overline{x},\,\overline{x}\neq x}\frac{1}{ \mathbb{P}(O)}\sum_{\theta_{\overline{x}}}\frac{\mathbb{P}(\theta_{\overline{ x}})}{\mathbb{P}(X=\overline{x})}\cdot\frac{N(I_{\theta_{\overline{x}}} \models O)}{N(\theta_{\overline{x}})}.\]
The intuition is that interpretations that are consistent with \(O\) increase the value of the derivative, while interpretations that are not decrease it.
Note, however, that the sum of the updates over all the parameters \(p_{x}\) of a probabilistic component \(X\) is only zero when \(X\) is binary, which means that rule (3) can produce estimates that are outside the feasible set of valid parameters (i.e., they are not probability distributions). This issue can be mitigated by either projecting the parameters back to the feasible set or ensuring that parameter updates lie within the feasible set, for instance by using a softmax layer. To avoid this issue, we instead constrain parameters to remain within the feasible set by employing Lagrange multipliers.
**Proposition 3.2**.: _The constrained derivative of the log-likelihood function with respect to the probability \(\mathbb{P}(X=x)=p_{x}\) of a probabilistic component \(X\) is_
\[\frac{\partial}{\partial p_{x}}\mathcal{L}(O)= \tag{4}\] \[\left(1-\frac{1}{m}\right)\frac{1}{\mathbb{P}(O)}\sum_{\theta_{x }}\frac{\mathbb{P}(\theta_{x})}{\mathbb{P}(X=x)}\cdot\frac{N(I_{\theta_{x}} \models O)}{N(\theta_{x})}\] \[\quad-\frac{1}{m}\sum_{\overline{x},\,\overline{x}\neq x}\frac{1 }{\mathbb{P}(O)}\sum_{\theta_{\overline{x}}}\frac{\mathbb{P}(\theta_{\overline {x}})}{\mathbb{P}(X=\overline{x})}\cdot\frac{N(I_{\theta_{\overline{x}}} \models O)}{N(\theta_{\overline{x}})}.\]
_where \(m\) is the number of possible values \(X\) can take._
Interestingly, (4) yields a similar expression to (3), with the only distinction being the factors \(1-\frac{1}{m}\) and \(\frac{1}{m}\). Thus, when \(m=2\), the Lagrangian rule is equivalent to halving the learning rate of NeurASP's rule. For \(m>2\), rule (4) assigns more weight to the probability of interpretations consistent with the observation, and less weight to its complement. Note that this is more sensible, since the latter sums over more terms than the former.
The extension of (4) to the neural case is trivial; by applying the chain rule on the derivative of the log-likelihood with respect to the output \(p_{x}\) of the neural component \(X\), we easily find that the resulting gradient is (4) multiplied by the derivative of the neural network with respect to network weights \(\mathbf{w}\)
\[\frac{\partial}{\partial\mathbf{w}}\mathcal{L}(O)=\frac{\partial\mathcal{L}(O )}{\partial p_{x}}\frac{\partial p_{x}(\mathbf{x})}{\partial\mathbf{w}}, \tag{5}\]
where \(\frac{\partial p_{x}(\mathbf{x})}{\partial\mathbf{w}}\) is the standard backward pass in a neural network.
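Putting the pieces together, the update for one (probabilistic or neural) component can be sketched as follows; the helpers choices_for_value, consistent_ratio and marginal_prob are assumed to come from the exhaustive enumeration of Section 3.2, and the final multiplication by the network Jacobian in Eq. (5) is left to the deep learning framework's automatic differentiation.

```
def loglik_gradient(p, choices_for_value, consistent_ratio, marginal_prob, O):
    """Gradient of log P(O) w.r.t. the probabilities p[x] of one component, Eq. (4).

    p: dict mapping each value x of the component X to its current probability.
    choices_for_value(x): iterable of (theta, P(theta)) over total choices selecting X=x.
    consistent_ratio(theta, O): N(I_theta |= O) / N(theta).
    marginal_prob(O): P(O) under the current parameters.
    """
    m = len(p)
    P_O = marginal_prob(O)
    # s[x] = (1 / P(O)) * sum over theta_x of P(theta_x) / P(X=x) * N(I |= O) / N(theta_x)
    s = {x: sum(P_theta / p[x] * consistent_ratio(theta, O)
                for theta, P_theta in choices_for_value(x)) / P_O
         for x in p}
    return {x: (1 - 1 / m) * s[x] - (1 / m) * sum(s[y] for y in p if y != x)
            for x in p}
```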
## 4 Experiments
In this section, we present two preliminary experiments showcasing the dPASP system. The first experiment compares the performance of our system against two competitors on the task of parameter learning in image classification. The second showcases a possible use of the credal semantics in cautious ensemble classification.
### MNIST Addition
We compare the performance of dPASP to NeurASP, DeepProbLog and a purely data-driven convolutional neural network (CNN) on the task of learning addition of MNIST image digits, a common distant supervision benchmark for NPLP (Manhaeve et al., 2021). This is a preliminary experiment, as dPASP is still in early development. Given two unlabelled images (e.g. \(\blacksquare\) and \(\blacksquare\) ) of digits, and the corresponding atom (e.g. sum(9)) as a distant label, the program must learn to identify the sum of digits.
The dPASP program to perform this task is quite simple and short if we do not account for the Python code needed for processing the MNIST data. The program in its entirety can be found in Appendix B.
```
#python
def net(): ...        # neural network
def mnist_tr(i): ...  # train images for i-th digit
def mnist_te(i): ...  # test images for i-th digit
def labels(): ...     # sum(X) labels
#end.

input(0) ~ test(@mnist_te(0)), train(@mnist_tr(0)).
input(1) ~ test(@mnist_te(1)), train(@mnist_tr(1)).

?::digit(X, {0..9}) as @net :- input(X).
sum(Z) :- digit(0, X), digit(1, Y), Z = X + Y.

#semantics maxent.

#learn @labels, lr = 0.001, niters = 5, batch = 1000.
```
We briefly highlight the fact that the prior know-how needed to write a program in NeurASP or DeepProbLog can be
a significant barrier to the widespread use of NPLPs. Not only is the user required to have a good grasp of Python, but they must also have a significant understanding of the deep learning system used by the NPLP. For instance, one might compare the equivalent programs in NeurASP1 and DeepProbLog2 to ours in order to understand the steep learning curve of current NPLPs.
Footnote 1: [https://github.com/azreasoners/NeurASP/blob/master/examples/mnistAdd/mnistAdd.py](https://github.com/azreasoners/NeurASP/blob/master/examples/mnistAdd/mnistAdd.py)
Footnote 2: [https://github.com/ML-KULeuven/deepproblog/blob/master/src/deepproblog/examples/MNIST/addition.py](https://github.com/ML-KULeuven/deepproblog/blob/master/src/deepproblog/examples/MNIST/addition.py)
Going back to our preliminary experiment, we follow a similar methodology used in NeurASP (Yang et al., 2020) and DeepProbLog(Manhaeve et al., 2021) for the MNIST digit addition task. Just like the aforementioned works, we split the original MNIST dataset in half, taking the first (resp. second) half as the images of the first (resp. second) digit; the labels observed by the program are the atoms corresponding to the sum of the labels of the two halves. We use the same learning parameters for dPASP, NeurASP, DeepProbLog and CNN: a learning rate of \(0.001\), batch size of \(1000\), and the Adam optimizer for the neural components/networks (Kingma & Ba, 2017).
Figure 1 shows a comparison of both the performance in terms of classification accuracy, as well as training time. The plot on the left compares the accuracy of programs in classifying the correct sum of digits, while the plot on the right shows the digit classification accuracy of the embedded neural networks in the programs while they learn to classify the sum of digits. Curve CNN sum corresponds to the performance of evaluating a CNN whose input is a single image consisting of concatenating the two digits and whose output are the probabilities of the 19 possible two-digit sum values; CNN digit is the accuracy of a single digit classification network under the same parameter conditions as dPASP, NeurASP and DeepProbLog.
Interestingly, both purely data-driven CNN approaches performed poorly compared to NPLPs. In particular, CNN sum struggled to even break 50% accuracy, while CNN digit quickly converged to the 80% mark, below that of NPLPs. We again stress the fact that these results were obtained by subjecting all systems to the same learning parameters.
Comparing dPASP against DeepProbLog and NeurASP, we find that NeurASP achieves better accuracy faster, although dPASP eventually catches up. This difference between dPASP and NeurASP might be explained by the correction factor discussed in Section 3.3.2; the factors involved in the Lagrangian optimization slow down learning, as the gradient is diminished due to the correction.
Of note is the significant difference in training time for the four methods, with a surprising gap between dPASP and CNN. We conjecture that the main factor that explains this discrepancy is implementation overhead: dPASP is mostly written in C, with only the grammar parsing part of the language implemented in Python; in contrast, even though most of the computation in a pure Python script using PyTorch or any other deep learning framework is also done in C, a lot of the boilerplate code runs in Python. Thus, the careful implementation of highly optimized C code might make up for the use of logical inference routines in small problems like MNIST addition.
Given that dPASP is quite faster compared to the other approaches, we argue that this speed-up is a key advantage of dPASP (when the number of probabilistic and neural components is low) and should be exploited. With this in mind, we lower the batch size of dPASP in order to show how, by taking advantage of the careful optimized implementation in dPASP, we might achieve better performance by slightly increasing training time. The dPASP batch 500 curve in Figure 1 shows the impact of performance in terms of accuracy when halving the batch size, which causes only a slight increase (6 seconds) in training time.
### Ensemble Classification
The following program considers pooling probabilistic predictions of different forecasters without information about accuracy of each forecaster. The neural predicates f and g are pre-trained MNIST digit classifiers. Our goal is to perform a digit classification from a two-classifier ensemble and compare a precise strategy that gives equal weight to each of the two classifiers, against a credal strategy that employs a more cautious approach.
```
data(x) ~ test(@mnist_test), train(@mnist_train).
% we have two classifiers/forecasters
class(f). class(g).
1::f(X, {0..9}) as @net1 :- data(X).
1::g(X, {0..9}) as @net2 :- data(X).
pred(f, Y) :- f(X, Y).
pred(g, Y) :- g(X, Y).
% at least one of them is correct
hit(f); hit(g).
% their correctness is consistent
hit(N2)  :- pred(N1, Y), pred(N2, Y), N1 != N2, hit(N1).
miss(N2) :- pred(N1, Y), pred(N2, Y), N1 != N2, miss(N1).
miss(N1); miss(N2) :- pred(N1, Y), pred(N2, Z), Y != Z.
miss(N) :- not hit(N), class(N).
% classification agrees with correct prediction
digit(Y) :- pred(N, Y), hit(N).
% We must pick one of the below semantics.
#semantics credal.   % for credal semantics
%#semantics maxent.  % for max-ent semantics
% Query for the digit probabilities.
#query digit(0). ... #query digit(9).
```
The program encodes the assumption that some of the predictions made by the forecasters must be correct (hit), and rules out inconsistent assessments (e.g., two different predictions being considered both correct).
Using our framework and the previous logic program, we can perform either precise or credal inference. The precise inference amounts to simply computing the probability of each class under the maximum entropy semantics and taking the most likely one. For the credal semantics, we employ a max-max decision strategy to classify digits; for each digit's lower and an upper probabilities, we select the upper (max) as the probability of being that digit. Once all upper probabilities have been selected, we then select the class with the highest (max) probability.
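The two decision strategies compared below reduce to a few lines; the query results are assumed to be available as dictionaries mapping each digit to a precise probability (max-ent) or to a (lower, upper) pair (credal).

```
def maxent_decision(probs):
    """Pick the digit with the highest precise (max-ent) probability."""
    return max(probs, key=probs.get)

def credal_maxmax_decision(intervals):
    """Max-max rule: take each digit's upper probability, then pick the digit
    with the highest upper probability."""
    return max(intervals, key=lambda digit: intervals[digit][1])

# e.g. credal_maxmax_decision({0: (0.1, 0.3), 7: (0.2, 0.6), 9: (0.05, 0.4)}) -> 7
```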
The accuracy for detecting each digit using the two strategies is shown in Table 1. We can see that a more cautious ensemble strategy based on the credal semantics can achieve slightly better performance in terms of accuracy compared to a simple uniform weighting of components.
## 5 Conclusion
We have presented dPASP, a new and flexible framework for neurosymbolic reasoning based on probabilistic logic programming. The framework extends the answer set programming declarative language with probabilistic and neural facts, allowing the specification of uncertain knowledge and the tight integration of deep perception models (image classifiers, named entity recognizers, etc) with logic reasoning and constraint solving.
Unlike other similar systems such as NeurASP and DeepProbLog, the framework implementation provides several different semantics for the logic and probabilistic parts, and a more friendly interface between machine learning components (e.g., PyTorch models) and the logic specification.
The system is also relatively fast for learning a certain class of models, as illustrated by preliminary empirical results. This is due to a careful software implementation, which we release as free and open source at [https://kamel.ime.usp.br/dpasp](https://kamel.ime.usp.br/dpasp).
There is still much to achieve to make the system more broadly applicable. In particular, more efficient learning and inference routines need to be devised to scale to larger domains.
\begin{table}
\begin{tabular}{|c|c c|c|} \hline & Max-Ent & Credal & \# examples \\ \hline
0 & **98.36** & 97.95 & 980 \\
1 & 98.14 & **98.23** & 1135 \\
2 & 92.24 & **92.92** & 1032 \\
3 & **93.56** & 92.57 & 1010 \\
4 & **95.92** & 95.72 & 982 \\
5 & 92.48 & **93.83** & 892 \\
6 & 96.13 & **96.55** & 958 \\
7 & 94.06 & **94.35** & 1028 \\
8 & 89.42 & **89.93** & 974 \\
9 & 46.77 & **47.77** & 1009 \\ \hline Total & 89.73 & **89.99** & 10000 \\ \hline \end{tabular}
\end{table}
Table 1: Accuracy when classifying each MNIST digit under the max-ent or credal semantics. Best accuracy in bold.
Figure 1: Sum and digit classification accuracy and training time for dPASP, NeurASP, DeepProbLog and CNN. On the left, accuracy per iteration of classifying sums; on the right, accuracy of learned networks on classifying digits.
## Appendix A Proofs
**Proposition 3.1**.: _Let \(\mathbb{P}(X=x)\) be the probability of a specific probabilistic component \(X\) we wish to learn from the set of observations \(\mathbf{O}\). If the iterated application of the rule_
\[\mathbb{P}(X=x)=\frac{1}{|\mathbf{O}|}\cdot\sum_{O\in\mathbf{O}}\frac{\mathbb{ P}(X=x,O)}{\mathbb{P}(O)}. \tag{2}\]
_converges, then it does so to a critical point of the log-likelihood function._
Proof.: The log-likelihood function of the program is given by
\[\mathcal{L}(\mathbf{O})=\sum_{O\in\mathbf{O}}\log\sum_{\theta\in\Theta} \mathbb{P}(\theta)\cdot\frac{N(I_{\theta}\models O)}{N(\theta)}. \tag{6}\]
We wish to constrain the probabilities of a probabilistic component \(X\) to \(\sum_{x\in\mathcal{X}}\mathbb{P}(X=x)=1\) and \(\mathbb{P}(X=x)>0\), where \(\mathcal{X}\) is the set of all possible values \(X\) can take. To do this, instead of directly computing the derivative of \(\mathcal{L}\) with respect to \(\mathbb{P}(X=x)\), we instead compute \(\frac{\partial}{\partial w_{x}}\mathcal{L}\), where \(w_{x}\in\mathbb{R}\) is unconstrained and define
\[\mathbb{P}(X=x)=\frac{e^{w_{x}}}{\sum\limits_{x^{\prime}\in\mathcal{X}}e^{w_{ x^{\prime}}}}, \tag{7}\]
i.e. we optimize with respect to a softmax function instead of directly working with the probabilities. With this in mind, the derivative of the log-likelihood function is given by
\[\begin{split}\frac{\partial}{\partial w_{x}}\mathcal{L}(\mathbf{O })=\\ \sum_{O\in\mathbf{O}}\frac{1}{\mathbb{P}(O)}\sum_{\theta\in\Theta }\frac{e^{w_{x}}}{\sum\limits_{x^{\prime}\in\mathcal{X}}e^{w_{x^{\prime}}}} \cdot\mathbb{P}(\theta_{-x})\cdot\frac{N(I_{\theta}\models O)}{N(\theta)}, \end{split} \tag{8}\]
where \(\mathbb{P}(\theta_{-x})\) is the probability of the total choice \(\theta\) excluding the probability of \(X\), or more formally
\[\mathbb{P}(\theta_{-x})\coloneqq\prod_{\begin{subarray}{c}(Y=y)\in\theta\\ Y\neq X\end{subarray}}\mathbb{P}(Y=y)=\frac{\mathbb{P}(\theta)}{\mathbb{P}(X=x )}. \tag{9}\]
We may then split (6) into two terms: one where the total choices \(\Theta_{x}\) agree with the weight \(w_{x}\) to be derived, and another where total choices \(\Theta_{\overline{x}}\) choose other values \(X=\overline{x}\), \(\overline{x}\neq x\) for \(X\)
\[\begin{split}\frac{\partial}{\partial w_{x}}\mathcal{L}(\mathbf{O })=\sum_{O\in\mathbf{O}}\frac{1}{\mathbb{P}(O)}\cdot\\ \overbrace{\left(\frac{\partial}{\partial w_{x}}\sum_{\theta_{x} \in\Theta_{x}}\frac{e^{w_{x}}}{\sum\limits_{x^{\prime}\in\mathcal{X}}e^{w_{x^ {\prime}}}}\cdot\mathbb{P}(\theta_{-x})\cdot\frac{N(I_{\theta_{x}}\models O)}{ N(\theta_{x})}\right.}^{(11)}\\ +\underbrace{\frac{\partial}{\partial w_{x}}\sum_{\theta_{\overline {x}}\in\Theta\overline{x}}\frac{e^{w_{x}}}{\sum\limits_{x^{\prime}\in \mathcal{X}}e^{w_{x^{\prime}}}}\cdot\mathbb{P}(\theta_{-\overline{x}})\cdot \frac{N(I_{\theta_{\overline{x}}}\models O)}{N(\theta_{\overline{x}})} \right)}_{(12)}.\end{split} \tag{10}\]
For the sake of clarity, let us call \(N^{O}_{\theta}=\frac{N(I_{\theta}\models O)}{N(\theta)}\). We may simplify the first term in (10) to
\[\begin{split}&\sum_{\theta_{x}\in\Theta_{x}}\frac{e^{w_{x}}\cdot\sum\limits_{x^{\prime}\in\mathcal{X}}e^{w_{x^{\prime}}}-e^{w_{x}}\cdot e^{w_{x}}}{\sum\limits_{x^{\prime}\in\mathcal{X}}e^{w_{x^{\prime}}}\cdot\sum\limits_{x^{\prime}\in\mathcal{X}}e^{w_{x^{\prime}}}}\cdot\mathbb{P}(\theta_{-x})\cdot N^{O}_{\theta_{x}}=\\ &\sum_{\theta_{x}\in\Theta_{x}}\left(\frac{e^{w_{x}}}{\sum\limits_{x^{\prime}\in\mathcal{X}}e^{w_{x^{\prime}}}}-\left(\frac{e^{w_{x}}}{\sum\limits_{x^{\prime}\in\mathcal{X}}e^{w_{x^{\prime}}}}\right)^{2}\right)\cdot\mathbb{P}(\theta_{-x})\cdot N^{O}_{\theta_{x}}=\\ &\sum_{\theta_{x}\in\Theta_{x}}\mathbb{P}(\theta_{x})\cdot N^{O}_{\theta_{x}}-\sum_{\theta_{x}\in\Theta_{x}}\frac{e^{w_{x}}}{\sum\limits_{x^{\prime}\in\mathcal{X}}e^{w_{x^{\prime}}}}\cdot\mathbb{P}(\theta_{x})\cdot N^{O}_{\theta_{x}}=\\ &\mathbb{P}(X=x,O)-\mathbb{P}(X=x)\cdot\mathbb{P}(X=x,O)\end{split} \tag{11}\]
and the second term to
\[\begin{split}&-\sum_{\theta_{\overline{x}}\in\Theta_{\overline{x}}}\frac{e^{w_{\overline{x}}}}{\sum\limits_{x^{\prime}\in\mathcal{X}}e^{w_{x^{\prime}}}}\cdot\frac{e^{w_{x}}}{\sum\limits_{x^{\prime}\in\mathcal{X}}e^{w_{x^{\prime}}}}\cdot\mathbb{P}(\theta_{-\overline{x}})\cdot N^{O}_{\theta_{\overline{x}}}=\\ &-\sum_{\theta_{\overline{x}}\in\Theta_{\overline{x}}}\mathbb{P}(X=x)\cdot\mathbb{P}(\theta_{\overline{x}})\cdot N^{O}_{\theta_{\overline{x}}}=\\ &-\mathbb{P}(X=x)\cdot\mathbb{P}(X=\overline{x},O).\end{split} \tag{12}\]
Putting (11) and (12) together, we get
\[\begin{split}&\frac{\partial}{\partial w_{x}}\mathcal{L}(\mathbf{O })=\sum_{O\in\mathbf{O}}\frac{1}{\mathbb{P}(O)}\big{[}\mathbb{P}(X=x,O)-\\ &\mathbb{P}(X=x)\cdot\mathbb{P}(X=x,O)-\mathbb{P}(X=x)\cdot \mathbb{P}(X=\overline{x},O)\big{]}=\\ &\sum_{O\in\mathbf{O}}\frac{1}{\mathbb{P}(O)}\big{[}\mathbb{P}(X=x,O)-\mathbb{P}(X=x)\cdot\mathbb{P}(O)\big{]}.\end{split} \tag{13}\]
By setting the derivative of the objective function to zero to
find the critical point, we finally find that
\[\begin{split}&\sum_{O\in\mathbf{O}}\frac{1}{\mathbb{P}(O)}\big{[} \mathbb{P}(X=x,O)-\mathbb{P}(X=x)\cdot\mathbb{P}(O)\big{]}=\\ &\sum_{O\in\mathbf{O}}\left(\frac{\mathbb{P}(X=x,O)}{\mathbb{P}(O )}-\mathbb{P}(X=x)\right)=\\ &\sum_{O\in\mathbf{O}}\frac{\mathbb{P}(X=x,O)}{\mathbb{P}(O)}-| \mathbf{O}|\cdot\mathbb{P}(X=x)=0\\ &\implies\mathbb{P}(X=x)=\frac{1}{|\mathbf{O}|}\sum_{O\in\mathbf{ O}}\frac{\mathbb{P}(X=x,O)}{\mathbb{P}(O)}.\end{split} \tag{14}\]
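For intuition, rule (2) can be exercised numerically. The sketch below (Python) is only an illustration and is not part of dPASP: it assumes that everything in \(\mathbb{P}(X=x,O)\) other than the factor \(\mathbb{P}(X=x)\) is summarized by a fixed, made-up table `lik`, and it iterates the update until it stabilizes.

```python
import numpy as np

def update(p, lik):
    """One application of rule (2): p[x] <- (1/|O|) * sum_o P(X=x, o) / P(o).

    Assumption (not from the paper): lik[o, x] collects everything in P(X=x, o)
    except the factor p[x], so that P(X=x, o) = p[x] * lik[o, x].
    """
    joint = p[None, :] * lik          # joint[o, x] = P(X=x, o)
    marginal = joint.sum(axis=1)      # marginal[o] = P(o)
    return (joint / marginal[:, None]).mean(axis=0)

# Toy run with 3 values for X and 4 observations.
rng = np.random.default_rng(0)
lik = rng.random((4, 3))
p = np.full(3, 1.0 / 3.0)
for _ in range(200):
    p_new = update(p, lik)
    if np.allclose(p_new, p):
        break
    p = p_new
print(p, p.sum())  # a fixed point of rule (2); still a probability distribution
```

Under this simplification the iteration is a standard EM-style fixed-point scheme; in dPASP the quantities \(\mathbb{P}(X=x,O)\) and \(\mathbb{P}(O)\) would instead come from the program's semantics.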
**Proposition 3.2**.: _The constrained derivative of the log-likelihood function with respect to the probability \(\mathbb{P}(X=x)=p_{x}\) of a probabilistic component \(X\) is_
\[\begin{split}&\frac{\partial}{\partial p_{x}}\mathcal{L}(O)=\\ &\left(1-\frac{1}{m}\right)\frac{1}{\mathbb{P}(O)}\sum_{\theta_{x }}\frac{\mathbb{P}(\theta_{x})}{\mathbb{P}(X=x)}\cdot\frac{N(I_{\theta_{x}} \models O)}{N(\theta_{x})}\\ &-\frac{1}{m}\sum_{\overline{x},\,\overline{x}\neq x}\frac{1}{ \mathbb{P}(O)}\sum_{\theta_{\overline{x}}}\frac{\mathbb{P}(\theta_{\overline{ x}})}{\mathbb{P}(X=\overline{x})}\cdot\frac{N(I_{\theta_{\overline{x}}} \models O)}{N(\theta_{\overline{x}})}.\end{split} \tag{4}\]
_where \(m\) is the number of possible values \(X\) can take._
Proof.: Recall that the log-likelihood function is given by
\[\mathcal{L}(\mathbf{O})=\sum_{O\in\mathbf{O}}\log\sum_{\theta\in\Theta} \mathbb{P}(\theta)\cdot\frac{N(I_{\theta}\models O)}{N(\theta)}, \tag{15}\]
and that the derivative of the log-likelihood function with respect to a probabilistic component \(p_{x}=\mathbb{P}(X=x)\) is given by the expression
\[\frac{\partial}{\partial p_{x}}\mathcal{L}(\mathbf{O})=\sum_{O\in\mathbf{O}} \frac{1}{\mathbb{P}(O)}\frac{\partial}{\partial p_{x}}\sum_{\theta\in\Theta} \mathbb{P}(\theta)\cdot\frac{N(I_{\theta}\models O)}{N(\theta)}. \tag{16}\]
We define the objective function as the log-likelihood subject to the restriction that all probabilities over \(X\) sum to one, that is, let \(\mathcal{X}\) be the set of all possible values taken by \(X\), then the new objective function is the log-likelihood with the added Lagrange multiplier \(\lambda\)
\[\hat{\mathcal{L}}(\{p_{x}\}_{x\in\mathcal{X}},\lambda,\mathbf{O})=\mathcal{L} (\mathbf{O})-\lambda\left(\sum_{x\in\mathcal{X}}p_{x}-1\right). \tag{17}\]
Optimization must now take place within the new constrained log-likelihood \(\hat{\mathcal{L}}\). To find the complete expression of (17), we must find the value of the Lagrange multiplier \(\lambda\), which is achievable by noting that the sum of all derivatives with respect to \(X\) must sum to zero
\[\begin{split}&\sum_{x\in\mathcal{X}}\frac{\partial}{\partial p_{x}} \hat{\mathcal{L}}(\{p_{x}\}_{x\in\mathcal{X}},\lambda,\mathbf{O})=\sum_{x\in \mathcal{X}}\left(\frac{\partial}{\partial p_{x}}\mathcal{L}(\mathbf{O})- \lambda\right)=\\ &\sum_{x\in\mathcal{X}}\frac{\partial}{\partial p_{x}}\mathcal{L} (\mathbf{O})-|\mathcal{X}|\cdot\lambda\implies\lambda=\frac{1}{|\mathcal{X} |}\sum_{x\in\mathcal{X}}\frac{\partial}{\partial p_{x}}\mathcal{L}(\mathbf{O}).\end{split} \tag{18}\]
Let us first revisit (16), as it appears as a term of the gradient of our new objective function. We may split (16) into two terms, one where total choices \(\Theta_{x}\) agree with the assignment \(X=x\), and the other where total choices \(\Theta_{\overline{x}}\) choose other values \(X=\overline{x}\), \(\overline{x}\neq x\)
\[\begin{split}(16)=&\sum_{O\in\mathbf{O}}\frac{1}{\mathbb{P}(O)}\left(\frac{\partial}{\partial p_{x}}\sum_{\theta_{x}\in\Theta_{x}}\mathbb{P}(\theta_{x})\frac{N(I_{\theta_{x}}\models O)}{N(\theta_{x})}+\right.\\ &\left.\frac{\partial}{\partial p_{x}}\sum_{\theta_{\overline{x}}\in\Theta_{\overline{x}}}\mathbb{P}(\theta_{\overline{x}})\frac{N(I_{\theta_{\overline{x}}}\models O)}{N(\theta_{\overline{x}})}\right).\end{split} \tag{19}\]
We now derive each coordinate of the gradient of (17)
\[\begin{split}&\frac{\partial}{\partial p_{x}}\hat{\mathcal{L}}(\{p_{x}\}_{x\in\mathcal{X}},\lambda,\mathbf{O})=\frac{\partial}{\partial p_{x}}\mathcal{L}(\mathbf{O})-\lambda=\\ &(19)-\frac{1}{|\mathcal{X}|}\sum_{x^{\prime}\in\mathcal{X}}\frac{\partial}{\partial p_{x^{\prime}}}\mathcal{L}(\mathbf{O}).\end{split} \tag{20}\]
To further simplify the above expression, we must further develop the formula in (19). Within the constraints set by the Lagrange multiplier, we may then simplify (19) as
\[(19)=\sum_{O\in\mathbf{O}}\frac{1}{\mathbb{P}(O)}\sum_{\theta_{x}\in\Theta_{x}}\frac{\mathbb{P}(\theta_{x})}{\mathbb{P}(X=x)}\cdot\frac{N(I_{\theta_{x}}\models O)}{N(\theta_{x})}, \tag{21}\]
since the second term in (19) cancels out to zero. Note that we do not need to write \(p_{\overline{x}}\) as a function of \(p_{x}\) (which would mean that the term would not be equal to zero), as the Lagrange multiplier is already taking into account this relationship. Finally,
\[\begin{split}&(20)=\sum_{O\in\mathbf{O}}\frac{1}{\mathbb{P}(O)}\sum_{\theta_{x}\in\Theta_{x}}\frac{\mathbb{P}(\theta_{x})}{\mathbb{P}(X=x)}\cdot\frac{N(I_{\theta_{x}}\models O)}{N(\theta_{x})}-\\ &\frac{1}{|\mathcal{X}|}\sum_{x^{\prime}\in\mathcal{X}}\sum_{O\in\mathbf{O}}\frac{1}{\mathbb{P}(O)}\sum_{\theta_{x^{\prime}}\in\Theta_{x^{\prime}}}\frac{\mathbb{P}(\theta_{x^{\prime}})}{\mathbb{P}(X=x^{\prime})}\cdot\frac{N(I_{\theta_{x^{\prime}}}\models O)}{N(\theta_{x^{\prime}})}\end{split} \tag{22}\]
can be rearranged so that when \(x^{\prime}=x\), the first and second
terms are subtracted, yielding the final form
\[\begin{split}\frac{\partial}{\partial p_{x}}\hat{\mathcal{L}}(O)=&\left(1-\frac{1}{|\mathcal{X}|}\right)\frac{1}{\mathbb{P}(O)}\sum_{\theta_{x}\in\Theta_{x}}\frac{\mathbb{P}(\theta_{x})}{\mathbb{P}(X=x)}\cdot\frac{N(I_{\theta_{x}}\models O)}{N(\theta_{x})}\\ &-\frac{1}{|\mathcal{X}|}\sum_{\overline{x},\,\overline{x}\neq x}\frac{1}{\mathbb{P}(O)}\sum_{\theta_{\overline{x}}\in\Theta_{\overline{x}}}\frac{\mathbb{P}(\theta_{\overline{x}})}{\mathbb{P}(X=\overline{x})}\cdot\frac{N(I_{\theta_{\overline{x}}}\models O)}{N(\theta_{\overline{x}})};\end{split} \tag{23}\]
call \(m=|\mathcal{X}|\) and we get the expression in our claim.
## Appendix B dPASP MNIST Addition Program
|
2302.09692 | Classification via two-way comparisons | Given a weighted, ordered query set $Q$ and a partition of $Q$ into classes,
we study the problem of computing a minimum-cost decision tree that, given any
query $q$ in $Q$, uses equality tests and less-than comparisons to determine
the class to which $q$ belongs. Such a tree can be much smaller than a lookup
table, and much faster and smaller than a conventional search tree. We give the
first polynomial-time algorithm for the problem. The algorithm extends
naturally to the setting where each query has multiple allowed classes. | Marek Chrobak, Neal E. Young | 2023-02-19T23:18:02Z | http://arxiv.org/abs/2302.09692v1 | # Classification via two-way comparisons
###### Abstract
Given a weighted, ordered query set \(Q\) and a partition of \(Q\) into classes, we study the problem of computing a minimum-cost decision tree that, given any query \(q\in Q\), uses equality tests and less-than comparisons to determine the class to which \(q\) belongs. Such a tree can be much smaller than a lookup table, and much faster and smaller than a conventional search tree. We give the first polynomial-time algorithm for the problem. The algorithm extends naturally to the setting where each query has multiple allowed classes.
## 1 Introduction
Given a weighted, ordered _query_ set \(Q\) partitioned into classes, we study the problem of computing a minimum-cost decision tree that uses equality tests (e.g., "\(q=4\)?") and less-than tests (e.g., "\(q<7\)?") to quickly determine the class of any given query \(q\in Q\). (Here the cost of a tree is the weighted sum of the depths of all queries, where the depth of a given query \(q\in Q\) is the number of tests the tree makes when given query \(q\).) We call such a tree a _two-way-comparison decision tree_ (\(\mathtt{2wcdt}\)). See Figure 1.
A main use case for \(\mathtt{2wcdt}\) is when the number of classes is small relative to the number of queries. In this case a \(\mathtt{2wcdt}\) can be significantly smaller than a lookup table, and, likewise, faster and smaller than a conventional search tree, because a search tree has to identify a given query \(q\) (or the inter-key interval that \(q\) lies in) whereas a decision tree only has to identify \(q\)'s class. Because they can be faster and more compact, \(\mathtt{2wcdts}\) are used in applications such as dispatch trees, which allow compilers and interpreters to quickly resolve method implementations for objects declared with type inheritance [2, 3]. (Each type is assigned a numeric ID via a depth-first search of the inheritance digraph. For each method, a \(\mathtt{2wcdt}\) maps each ID to the appropriate method resolution.)
Chambers and Chen give a heuristic to construct low-cost \(\mathtt{2wcdts}\), but leave open whether minimum-cost \(\mathtt{2wcdts}\) can be found in polynomial time [2, 3]. We give the first polynomial-time algorithm to find minimum-cost \(\mathtt{2wcdts}\). The algorithm runs in time \(O(n^{4})\), where \(n=|Q|\) is the number of distinct query values, matching the best time bound known for a special type of \(\mathtt{2wcdts}\) called _two-way-comparison search trees (\(\mathtt{2wcsts}\))_, discussed below. The algorithm extends naturally to the setting where each query can belong to multiple classes, any one of which is acceptable as
Figure 1: An optimal two-way-comparison decision tree (\(\mathtt{2wcdt}\)) for the problem instance shown on the right. The instance (but not the tree) is from [2, 3, Figure 6]. Each leaf (rectangle) is labeled with the queries that reach it, and below that with the class for the leaf. The table gives the class and weight of each query \(q\in Q=[50]=\{1,2,\ldots,50\}\). The tree has cost \(2055\), about \(11\%\) cheaper than the tree from [2, 3], of cost \(2305\).
an answer for the query. The extended algorithm runs in time \(O(n^{3}m)\), where \(m\) is the sum of the sizes of the classes.
Related work.Various types of decision trees are ubiquitous in the areas of artificial intelligence, machine learning, and data mining, where they are used for data classification, clustering, and regression.
Here we study decision trees for one-dimensional data sets. In theoretical computer science, most work on such trees has focussed on search trees, that is, decision trees that must fully identify the query or the inter-key interval it lies in. Below we briefly summarize the relevant work on such trees; one of our main contributions is to increase the understanding of trees based on two-way comparisons, which are not yet fully understood.
The tractability of finding a minimum-cost search tree depends heavily on the kind of tests that the tree can use. For some kinds of tests, the problem is NP-complete [12]. Early works considered trees in which each test compared the given query value \(q\) to some particular comparison key \(k\), with _three_ possible outcomes: the query value \(q\) is less than, equal to, or greater than \(k\)[6, §14.5], [14, §6.2.2]. (See Figure 4 (a).) We call such trees _three-way-comparison search trees_, or 3wcsts for short. In a 3wcst, the query values that reach any given node form an interval. This leads to a natural \(O(n^{3})\)-time dynamic-programming algorithm with \(O(n^{2})\) subproblems for finding minimum-cost 3wcsts[8]. Knuth reduced the time to \(O(n^{2})\)[13].
In practice each three-way comparison is often implemented by doing a less-than test followed by an equality test. Knuth [14, §6.2.2, Example 33] proposed exploring binary search trees that use these two tests directly in any combination. Such trees are called _two-way-comparison search trees (2wcst)_[1]. For the so-called _successful-queries_ variant, assuming that the query weights are normalized to sum to 1, there is always a 2wcst whose cost exceeds the entropy of the weight distribution by at most 1 [7]. The entropy is a lower bound on the cost of any binary search tree that uses Boolean tests of any kind. This suggests that restricting to less-than and equality tests need not be too costly [7].
Stand-alone equality tests introduce a technical obstacle not encountered with 3wcsts. Namely, while (analogously to 3wcsts) each node of a 2wcst is naturally associated with an interval of queries, not all queries from this interval necessarily reach the node. For this reason the dynamic
programming approach for 3wcsts does not extend easily to 2wcsts. This led early works to focus on restricted classes of 2wcsts, namely _median split trees_[16] and _binary split trees_[11, 15, 9]. These, by definition, constrain the use of equality tests so as to altogether sidestep the aforementioned technical obstacle. _Generalized binary split trees_ are less restrictive, but the only algorithm proposed to find them [10] is incorrect [5]. Similarly, the first algorithms proposed to find minimum-cost 2wcsts (without restrictions) were given without proofs of correctness [17, 18], and the recurrence relations underlying some of those proposed algorithms turned out to be demonstrably wrong [5].
In 1994, Spuler made a conjecture that leads to a natural dynamic program for 2wcsts. Namely, that every instance admits a minimum-cost 2wcst with the _heaviest-first_ property: that is, at any equality-test node \(\langle=h\rangle\), _the comparison key \(h\) is heaviest among keys reaching the node_[18]. In a breakthrough in 2002, Anderson et al proved the conjecture for the so-called _successful-queries_ variant, leading to an \(O(n^{4})\)-time dynamic-programming algorithm to find minimum-cost 2wcsts for that variant [1]. In 2021, Chrobak et al simplified their result (in particular, the handling of keys of equal weights, as discussed later) obtaining an \(O(n^{4})\)-time algorithm for finding minimum-cost 2wcsts[4].
Our contributions. Unfortunately these 2wcst algorithms don't extend easily to 2wcdts. The main obstacle is that for some instances (e.g. in Figure 5) no minimum-cost 2wcdt has the crucial heaviest-first property. To overcome this obstacle we introduce a _splitting_ operation (Definition 7), a correctness-preserving local rearrangement of the tree that can be viewed as an extension of the well-studied rotation operation to a more general class of trees, specifically, to trees whose allowed tests induce a laminar set family (Property 1).
We use splitting to identify an appropriate relaxation of the heaviest-first property that we call being _admissible_ (Definition 4). Most of the paper is devoted to proving the following theorem:
**Theorem 1**.: _If the instance is feasible, then some optimal tree is admissible._
Section 3 gives the proof. Along the way it establishes new structural properties of optimal 2wcdts and 2wcsts. Section 4 shows how Theorem 1 leads to a suitable dynamic program and our main result:
**Theorem 2**.: _There is an \(O(n^{3}m)\)-time algorithm for finding a min-cost 2wcdt._
Remarks.The presentation above glosses over a secondary technical obstacle for 2wcsts. For 2wcst instances with distinct query weights, the heaviest-first property uniquely determines the key of each equality test, so that the subset of queries that reach any given node in a 2wcst with the heaviest-first property must be _one of \(O(n^{4})\) predetermined subsets_. This leads to a natural dynamic program with \(O(n^{4})\) subproblems. (See Section 3.) But this does not hold for instances with non-distinct weights. This obstacle turns out to be more challenging than one might expect. Indeed, there are instances for which, for each of the \(2^{n}\) subsets \(S\) of \(Q\), there is a minimum-cost 2wcst, having the heaviest-first property, with a node \(u\) such that the set of queries reaching \(u\) is \(S\). It appears that one cannot just break ties arbitrarily: it can be that, for two maximum-weight keys \(h\) and \(h^{\prime}\) reaching a given node \(u\), there is an optimal subtree in which \(u\) does an equality-test to \(h\), but none in which \(u\) does an equality-test to \(h^{\prime}\)[4, Figure 3]. Similar issues arise in finding
optimal _binary split trees_--these can be found in time \(O(n^{4})\) if the instance has distinct weights, while for arbitrary instances the best bound known is \(O(n^{5})\)[9].
Nonetheless, using a perturbation argument Chrobak et al show that an arbitrary 2wcst instance can indeed be handled as if it is a distinct-weights instance just by breaking ties among equal weights in an appropriate way [4]. We use the same approach here for 2wcdts.
Most search-tree problems come in two flavors: the _successful-queries_ variant and the _standard_ variant. In the former, the input is an ordered set \(K\) of weighted keys, each comparison must compare the given query value to a particular key in \(K\), and each query must be a value in \(K\). In the latter, the input is augmented with a weight for each open interval between successive keys. Queries (called _unsuccessful queries_) to values in these intervals are also allowed, and must be answered by returning the interval in which the query falls. Our formal definition of 2wcdt generalizes both variants.
## 2 Definitions and technical overview
For the remainder of the paper, fix a 2wcdt instance \((Q,w,\mathcal{C},K)\), where \(Q\) is a totally ordered finite set of _queries_, each with a weight \(w(q)\geq 0\), the set \(\mathcal{C}\subseteq 2^{Q}\) is a collection of _classes_ of queries (where each class has a unique identifier), and \(K\subseteq Q\) is the set of _keys_. Let \(n=|Q|\) and \(m=\sum_{c\in\mathcal{C}}|c|\). The problem is to compute a minimum-cost _two-way-comparison decision tree_
Figure 4: Tree _(a)_ is a three-way-comparison search tree (3wcst). Tree _(b)_ is a two-way-comparison search tree (2wcst) for the same instance. The query (or interval of queries) reaching each (rect-angular) leaf is within the leaf. The weight of the query (or interval) is below the leaf.
(\(2\)wcdt) for the instance (as defined below).
To streamline presentation, throughout the paper we restrict attention to the model of decision trees that allows only less-than and equality tests. Our results extend naturally to decision trees that also use other inequality comparisons between queries and keys. See the end of Section 4 for details.
**Definition 1** (\(2\)wcdt).: _A two-way-comparison decision tree (\(2\)wcdt) is a rooted binary tree \(T\) where each non-leaf node is a test of the form \(\langle<k\rangle\) for some \(k\in K\) such that \(\min Q<k\leq\max Q\), or of the form \(\langle=k\rangle\) for some \(k\in K\), and the two children of the node are labeled with the two possible test outcomes ("yes" or "no"). Each leaf node is labeled with the identifier of one class in \(\mathcal{C}\). This class must contain every query \(q\in Q\) whose search (as defined next) ends at the leaf._
_For each \(q\in Q\), the search for \(q\) in \(T\) starts at the root, then recursively searches for \(q\) in the root's yes-subtree if \(q\) satisfies the root's test, and otherwise in the no-subtree. The search stops at a leaf, called the leaf for \(q\). The path from the root to this leaf is called \(q\)'s search path. We say that \(q\) reaches each node on this path, and \(q\)'s depth in \(T\) is defined as the length of this path (equivalently, the number of comparisons when searching for \(q\)). The cost of \(T\) is the weighted sum of the depths of all queries in \(Q\)._
_A tree \(T\) is called irreducible if, for each node \(u\) in \(T\), (i) at least one query in \(Q\) reaches \(u\), and (ii) if some class \(c\in\mathcal{C}\) contains all the queries that reach \(u\), then \(u\) is a leaf._
_For any \(\ell,r\in Q\), let \(\left[\ell,r\right]_{{}_{Q}}\) and \(\left[\ell,r\right]_{{}_{K}}\) denote the query interval \(\left\{q\in Q:\ell\leq q\leq r\right\}\) and key interval \(\left\{k\in K:\ell\leq k\leq r\right\}=K\cap\left[\ell,r\right]_{{}_{Q}}\)._
Allowing \(K\) and \(Q\) to be specified as we do captures both the successful-queries and standard variants. The successful-queries variant corresponds to the case when \(K=Q\). The standard variant is modeled by having one non-key query between every pair of consecutive keys, and before the minimum key and after the maximum key (so \(|Q\setminus K|=|K|+1\)). Each such non-key query represents an interval between keys.
For ease of exposition, assume without loss of generality that each query belongs to some class, so \(m\geq|Q|\) and the input size is \(\Theta(n+m)=\Theta(m)\). Note that the instance is not necessarily
Figure 5: Three trees for the \(2\)wcdt instance shown in _(d)_. The set of queries reaching each (rectangular) leaf is shown within the leaf (to save space, there \(\iota_{i}\) denotes the inter-key open interval with right boundary \(i\), e.g. \(\iota_{1}=(-\infty,1)\), \(\iota_{2}=(1,2)\)). The associated weights are below the leaf. The optimal tree _(a)_ has cost \(36\) and is not heaviest-first. Each heaviest-first tree (e.g. _(b)_ of cost \(41\) or _(c)_ of cost \(56\)) is not optimal. These properties also hold if each weight is perturbed by a small amount to make the weights distinct.
feasible, that is, it might not have a decision tree. (To be feasible, in addition to each query belonging to some class, each query interval that contains no keys must be contained in some class.) If the instance is feasible, some optimal tree is irreducible, so we generally restrict attention to irreducible trees. As we shall see, in an irreducible tree any given test is used in at most one node.
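To make Definition 1 concrete, here is a minimal sketch (in Python; the data structures and names are ours, purely for illustration and not taken from the paper) of how a 2wcdt answers a query and how its cost is computed.

```python
from dataclasses import dataclass

@dataclass
class Leaf:
    cls: str          # identifier of the class assigned to this leaf

@dataclass
class Test:
    kind: str         # '<' for a less-than test, '=' for an equality test
    key: int          # the comparison key k in K
    yes: object       # subtree followed when the test succeeds
    no: object        # subtree followed when the test fails

def classify(tree, q):
    """Follow q's search path (Definition 1); return (class identifier, depth)."""
    depth, node = 0, tree
    while isinstance(node, Test):
        ok = q < node.key if node.kind == '<' else q == node.key
        node = node.yes if ok else node.no
        depth += 1
    return node.cls, depth

def cost(tree, w):
    """Cost of the tree: the weighted sum of the depths of all queries."""
    return sum(wq * classify(tree, q)[1] for q, wq in w.items())

# Tiny instance: Q = {1, 2, 3}, classes A = {1, 2} and B = {3}.
t = Test('<', 3, yes=Leaf('A'), no=Leaf('B'))
w = {1: 5, 2: 1, 3: 2}
print(classify(t, 2), cost(t, w))   # ('A', 1) and cost 5 + 1 + 2 = 8
```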
**Definition 2** (ordering queries by weight).: _For any query subset \(R\subseteq Q\) and integer \(i\geq 0\) define \(\mathsf{heaviest}_{i}(R)\) to contain the \(i\) heaviest queries in \(R\) (or all of \(R\) if \(i\geq|R|\)). For \(q\in Q\), define \(\mathsf{heavier}(q)\) to contain the queries (in \(Q\)) that are heavier than \(q\). Define \(\mathsf{lighter}(q)\) to contain the queries (in \(Q\)) that are lighter than \(q\). Break ties among query weights arbitrarily but consistently throughout._
Formally, we use the following notation to implement the consistent tie-breaking mentioned above. Fix an ordering of \(Q\) by increasing weight, breaking ties arbitrarily. For \(q\in Q\) let \(\tilde{w}(q)\) denote the rank of \(q\) in the sorted order. Throughout, given distinct queries \(q\) and \(q^{\prime}\), define \(q\) to be lighter than \(q^{\prime}\) if \(\tilde{w}(q)<\tilde{w}(q^{\prime})\) and heavier otherwise (\(\tilde{w}(q)>\tilde{w}(q^{\prime})\)). So, for example \(\mathsf{heaviest}_{i}(R)\) contains the last \(i\) elements in the ordering of \(R\) by increasing \(\tilde{w}(q)\). The symbol \(\bot\) represents the undefined quantity \(\arg\max\emptyset\). Define \(\tilde{w}(\bot)=w(\bot)=-\infty\), \(\mathsf{heavier}(\bot)=Q\), and \(\mathsf{lighter}(\bot)=\emptyset\).
**Definition 3** (intervals and holes).: _Given any non-empty query subset \(R\subseteq Q\), call \(\left[\min R,\max R\right]_{{}_{Q}}\) the query interval of \(R\). Define \(k^{*}(R)\) to be the heaviest key in \(R\), if there is one (that is, \(k^{*}(R)=\arg\max\{\tilde{w}(k):k\in K\cap R\}\)). Define also \(\mathsf{holes}(R)=\left[\min R,\max R\right]_{{}_{Q}}\setminus R\) to be the set of holes in \(R\). We say that a hole \(h\in\mathsf{holes}(R)\) is light if \(\tilde{w}(h)<\tilde{w}(k^{*}(R))\), and otherwise heavy._
_The set of queries reaching a node \(u\) in a tree \(T\) is called \(u\)'s query set, denoted \(Q_{u}\). The query interval, and light and heavy holes, for \(u\) are defined to be those for \(u\)'s query set \(Q_{u}\)._
Overview.Note that each hole \(h\in\mathsf{holes}(Q_{u})\) at a node \(u\) in a tree \(T\) must result from a failed equality test \(\langle=h\rangle\) at a node \(v\) on the path from the root to \(u\) in \(T\). In particular, \(h\in K\). Further, if the hole is light, then \(h\) is not the heaviest key reaching \(v\).
The problem has the following _optimal substructure_ property. Any query subset \(R\subseteq Q\) naturally defines the subproblem \(\pi(R)\) induced by restricting the query set to \(R\) (that is, \(\pi(R)=(R,w,\mathcal{C}_{R},K)\) where \(\mathcal{C}_{R}=\{c\cap R:c\in\mathcal{C}\}\)). In any minimum-cost tree \(T\) for \(R\), if \(T\) is not a leaf, then the yes-subtree and no-subtree of \(T\) must be minimum-cost subtrees for their respective subproblems.
Let \(\mathsf{cost}(R)\) be the minimum cost of an irreducible tree for \(\pi(R)\). (If \(R\) is empty, then \(\mathsf{cost}(R)=\infty\), as no tree for \(R\) is irreducible.) Then the following recurrence holds:
**Observation 1** (recurrence relation).: _Fix any \(R\subseteq Q\). If \(R=\emptyset\), then \(\mathsf{cost}(R)=\infty\). Otherwise, if \((\exists c\in\mathcal{C})\,R\subseteq c\) (that is, \(R\) can be handled by a single leaf labeled \(c\)), then \(\mathsf{cost}(R)=0\). Otherwise, for any allowed test \(u\), let \((R^{\mathsf{yes}}_{u},R^{\mathsf{no}}_{u})\) be the bipartition of \(R\) into those queries that satisfy \(u\) and those that don't. Then_
\[\mathsf{cost}(R)=w(R)\,+\,\min_{u}\ \big{(}\,\mathsf{cost}(R^{\mathsf{yes}}_{u})+ \mathsf{cost}(R^{\mathsf{no}}_{u})\,\big{)}, \tag{1}\]
_where the variable \(u\) ranges over the allowed tests such that \(R^{\mathsf{yes}}_{u}\) and \(R^{\mathsf{no}}_{u}\) are non-empty. (If there are no such tests then \(\mathsf{cost}(R)=\infty\).)_
The goal is to compute \(\mathsf{cost}(Q)\) using a dynamic program that applies recurrence (1) recursively, memoizing results so that for each distinct query set \(R\) the subproblem for \(R\) is solved at most
once. (The algorithm as presented computes only \(\mathsf{cost}(Q)\). It can be extended in the standard way to also compute an optimal tree.) The obstacle is that exponentially many distinct subproblems can arise.
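As a point of reference, recurrence (1) can be transcribed directly into a memoized search. The sketch below (Python, purely illustrative; it is not the paper's algorithm) enumerates allowed tests exactly as in Observation 1 and may therefore create exponentially many distinct query subsets.

```python
from functools import lru_cache

def min_cost(Q, w, classes, K):
    """Direct memoized transcription of recurrence (1); exponential in general."""
    Q = tuple(sorted(Q))
    INF = float('inf')

    # Allowed tests: <k with min Q < k <= max Q, and =k for every key k.
    tests = [('<', k) for k in K if min(Q) < k <= max(Q)] + [('=', k) for k in K]

    def satisfies(q, test):
        kind, k = test
        return q < k if kind == '<' else q == k

    @lru_cache(maxsize=None)
    def cost(R):
        if not R:
            return INF
        if any(set(R) <= c for c in classes):        # R handled by a single leaf
            return 0
        best = INF
        for t in tests:
            yes = tuple(q for q in R if satisfies(q, t))
            no = tuple(q for q in R if not satisfies(q, t))
            if yes and no:
                best = min(best, cost(yes) + cost(no))
        return sum(w[q] for q in R) + best

    return cost(Q)

# Tiny instance: classes {1,2} and {3,4}; a single test <3 suffices, so the cost is 4.
print(min_cost({1, 2, 3, 4}, {q: 1 for q in range(1, 5)},
               [frozenset({1, 2}), frozenset({3, 4})], [1, 2, 3, 4]))
```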
Identity classification without equality tests.For intuition, consider first the variant of our problem in which \(\mathcal{C}\) is the _identity classification_, that is \(\mathcal{C}=\big{\{}\{q\}:q\in Q\big{\}}\), and only less-than tests \(\langle<k\rangle\) are allowed (equality tests are not). In the absence of equality tests, there are no holes. When applying recurrence (1) recursively to \(\mathsf{cost}(Q)\), each query set \(R\) that arises is a query interval. There are \(O(n^{2})\) such query intervals, and for each the right-hand side of the recurrence can be evaluated in \(O(n)\) time. This yields an \(O(n^{3})\)-time algorithm. This approach mirrors a classical dynamic-programming algorithm for 3wcsts[8], as discussed in the introduction.
The algorithm extends easily to arbitrary classifications \(\mathcal{C}\). Recall that a given query set \(R\) can be handled by a leaf (at zero cost) if and only if \(R\subseteq c\) for some \(c\in\mathcal{C}\). This condition can be checked in constant time given \((\ell,r)\) such that \(R=[\ell,r]_{{}_{Q}}\) (after an appropriate precomputation, e.g., for each \(\ell\), precompute the maximum \(r\) for which the condition holds).
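Under these restrictions every subproblem is a query interval, so the recurrence can be memoized over interval endpoints. The sketch below (Python, again only illustrative; the helper names are ours) includes the constant-time leaf check via the precomputation just described.

```python
from functools import lru_cache

def min_cost_lt_only(Q, w, classes, K):
    """Interval DP for the variant that uses only less-than tests."""
    Q = sorted(Q)
    n = len(Q)
    INF = float('inf')

    def fits(l, r):                 # does Q[l..r] fit inside a single class?
        return any(set(Q[l:r + 1]) <= c for c in classes)

    # Leaf-check precomputation: for each l, the largest r with fits(l, r).
    max_r = []
    for l in range(n):
        r = l - 1
        while r + 1 < n and fits(l, r + 1):
            r += 1
        max_r.append(r)

    # split_ok[m]: some key k satisfies Q[m] < k <= Q[m+1], so a test <k
    # can separate Q[l..m] from Q[m+1..r].
    split_ok = [any(Q[m] < k <= Q[m + 1] for k in K) for m in range(n - 1)]

    prefix_w = [0.0]
    for q in Q:
        prefix_w.append(prefix_w[-1] + w[q])

    @lru_cache(maxsize=None)
    def cost(l, r):                 # minimum cost for the query interval Q[l..r]
        if max_r[l] >= r:           # the whole interval fits in one class: a leaf
            return 0.0
        best = INF
        for m in range(l, r):
            if split_ok[m]:
                best = min(best, cost(l, m) + cost(m + 1, r))
        return prefix_w[r + 1] - prefix_w[l] + best

    return cost(0, n - 1)

# Same toy instance as before; again a single test <3 gives total cost 4.
print(min_cost_lt_only([1, 2, 3, 4], {q: 1 for q in range(1, 5)},
                       [frozenset({1, 2}), frozenset({3, 4})], [1, 2, 3, 4]))
```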
Identity classification with equality tests allowed.Next consider the variant when \(\mathcal{C}\) is the identity classification but both kinds of tests are allowed. This is essentially the problem of computing a minimum-cost 2wcst. In this variant, each node \(u\) in a tree \(T\) has query set \(Q_{u}=\left[\min Q_{u},\max Q_{u}\right]_{{}_{Q}}\setminus\mathsf{holes}(Q_{u})\). Applying recurrence (1) naively to \(\mathsf{cost}(Q)\) can yield exponentially many subproblems because \(\mathsf{holes}(Q_{u})\) can be almost any subset of \(\left[\min Q_{u},\max Q_{u}\right]_{{}_{Q}}\). However, as mentioned in Section 1, it is known that some optimal tree \(T\) has the _heaviest-first_ property [1, 4]: for each node \(u\) in \(T\) that does an equality test \(\langle=h\rangle\), the test key \(h\) is the heaviest key reaching \(u\). (Our tie-breaking scheme makes \(h\) unique.) In such a tree there are no light holes. That is, the hole set of any given node \(u\) is the set of heavy holes at \(u\):
\[\mathsf{holes}(Q_{u})=\left[\min Q_{u},\max Q_{u}\right]_{{}_{K}}\cap\mathsf{ heavier}(k^{*}(Q_{u})).\]
(Note that, by the definition of \(k^{*}(Q_{u})\), no keys heavier than \(k^{*}(Q_{u})\) reach \(u\), so the set \(\left[\min Q_{u},\max Q_{u}\right]_{{}_{K}}\cap\mathsf{heavier}(k^{*}(Q_{u}))\) contains exactly the heavy holes at \(u\).)
A non-empty query set \(R\) without light holes is determined by the triple \(\left(\min R,\max R,k^{*}(R)\right)\), so there are \(O(n^{3})\) query sets without light holes. This leads naturally to an \(O(n^{4})\)-time algorithm for instances with distinct weights [1, 4]. (Specifically, redefine \(\mathsf{cost}(R)\) to be the minimum cost of any _heaviest-first_, irreducible tree for \(\pi(R)\). Then \(\mathsf{cost}(R)=\infty\) if \(R\) has at least one light hole. Add this case as a base case to recurrence (1). Apply the modified recurrence recursively to calculate \(\mathsf{cost}(Q)\). Then the number of distinct non-trivial subproblems that arise is \(O(n^{3})\), and each can be solved in \(O(n)\) time.)
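To see why the triple suffices: in a heaviest-first tree, a node's query set can be rebuilt from \((\min Q_{u},\max Q_{u},k^{*}(Q_{u}))\) alone, as in this small sketch (Python; the helper is ours, for illustration only).

```python
def query_set_from_triple(Q, K, rank, lo, hi, kstar):
    """Query set of a node of a heaviest-first tree: the interval [lo, hi]_Q minus
    its heavy holes, i.e. minus the keys strictly heavier than kstar.
    rank[q] plays the role of the tie-broken weight rank (the paper's w-tilde)."""
    keys = set(K)
    return {q for q in Q
            if lo <= q <= hi and not (q in keys and rank[q] > rank[kstar])}

# Example with Q = K = {1,...,5} and weights increasing with the value:
Q = [1, 2, 3, 4, 5]
rank = {q: q for q in Q}
print(query_set_from_triple(Q, Q, rank, lo=1, hi=5, kstar=3))  # {1, 2, 3}: 4 and 5 are heavy holes
```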
Allowing equality tests and an arbitrary classification.The existing results for 2wcsts don't extend to 2wcdts because, as shown in Figure 5, there are 2wcdt instances with distinct weights for which no optimal tree is heaviest-first. But, in some sense, the example in Figure 5 is as bad as it gets. There is an optimal tree in which an appropriate relaxation of the heaviest-first property holds, namely, that each node's query set is _admissible_. Roughly, this means that there are at most three light holes, and the light holes must be taken heaviest first from those keys that don't belong to some class \(c\in\mathcal{C}\) that contains \(k^{*}\) (the heaviest key reaching the node). Here's the formal definition:
**Definition 4** (admissible).: _Consider any query subset \(R\subseteq Q\). The set \(R\) is called admissible if it is non-empty and the set of light holes in \(R\) is either empty or has the form_
\[\mathsf{heaviest}_{b}(\,[\min R,\max R]_{{}_{K}}\cap\mathsf{ lighter}(k^{*}(R))\setminus c\,)\]
_for some \(b\in[3]\) and \(c\in\mathcal{C}\) such that \(k^{*}(R)\in c\)._
_A tree (or subtree) \(T\) is called admissible if the query set of each node in \(T\) is admissible._
Above (and within any mathematical expression), for any integer \(i\), the notation \([i]\) denotes the set \(\{1,2,\ldots,i\}\).
To gain some intuition for the definition, note that, by definition, for any query set \(R\) its holes must be in \([\min R,\max R]_{{}_{K}}\), and its light holes must be in \(\mathsf{lighter}(k^{*}(R))\).
For the algorithm, roughly, we redefine \(\mathsf{cost}(R)=\infty\) if \(R\) is not admissible, add a corresponding base case to recurrence (1), and then recursively apply the modified recurrence to compute \(\mathsf{cost}(Q)\). Each admissible query set \(R\) with no light holes is determined by the triple \((\min R,\max R,k^{*}(R))\). Per Definition 4, each admissible query set \(R\) with at least one light hole is determined by a tuple \((\min R,\max R,k^{*}(R),b,c)\), where \((b,c)\in[3]\times\mathcal{C}\) with \(k^{*}(R)\in c\). It follows that there are \(O(n^{3}+n^{2}m)=O(n^{2}m)\) admissible query subsets, so that, in the recursive evaluation of \(\mathsf{cost}(Q)\), \(O(n^{2}m)\) distinct non-trivial subproblems arise. These are solvable in total time \(O(n^{3}m)\). Section 4 gives the detailed proof.
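Analogously, an admissible query set can be rebuilt from its identifying tuple by removing first the heavy holes and then the light holes prescribed by Definition 4; the sketch below (Python, illustrative only) does exactly that, with \(b=0\) covering the case of no light holes.

```python
def admissible_set(Q, K, rank, lo, hi, kstar, b=0, c=frozenset()):
    """Admissible query set identified by (lo, hi, kstar) and, when it has light
    holes, additionally by b in {1, 2, 3} and a class c containing kstar."""
    interval_keys = [k for k in K if lo <= k <= hi]
    heavy_holes = {k for k in interval_keys if rank[k] > rank[kstar]}
    # Candidates for light holes: interval keys lighter than kstar and outside c.
    candidates = [k for k in interval_keys if rank[k] < rank[kstar] and k not in c]
    light_holes = set(sorted(candidates, key=lambda k: rank[k], reverse=True)[:b])
    return {q for q in Q if lo <= q <= hi} - heavy_holes - light_holes
```

Ranging \((\min R,\max R,k^{*}(R))\) over its \(O(n^{3})\) possibilities, and additionally \((b,c)\) over \([3]\times\{c\in\mathcal{C}:k^{*}(R)\in c\}\) when light holes are present, recovers the \(O(n^{3}+n^{2}m)=O(n^{2}m)\) count of subproblem keys stated above.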
## 3 Some optimal tree is admissible
This section proves Theorem 1: if the instance is feasible, then some optimal tree is admissible. Along the way we establish quite a bit more about the structure of optimal trees. We start with some general terminology for how pairs of tests can relate. Recall that \((Q,w,\mathcal{C},K)\) is a problem instance with at least one correct tree. In any such tree, each edge \(u\to v\) from a node to its child corresponds to one of the two possible outcomes of the test at \(u\). We use \(u\to v\) to denote both the edge and the associated outcome at \(u\). For example, if \(u\) is the node \(\langle<3\rangle\), and \(v\) is the no-child of \(u\), then the outcome \(u\to v\) means that the queried value is at least \(3\).
**Definition 5**.: _Two such outcomes \(u\to v\) and \(x\to y\) are called consistent if \(Q\) contains a query value that satisfies them both. Otherwise they are inconsistent._
_Two tests are said to be equivalent if either for all \(q\in Q\) the two tests give the same outcome for \(q\), or for all \(q\in Q\) the two tests give opposite outcomes for \(q\)._
For example, assume \(Q=[4]\). The yes-outcome of \(\langle<3\rangle\) is inconsistent with the yes-outcome of \(\langle=4\rangle\) and with the no-outcome of \(\langle<4\rangle\), but is consistent with both outcomes of \(\langle=2\rangle\), and with both outcomes of \(\langle<2\rangle\). The tests \(\langle<4\rangle\) and \(\langle=4\rangle\) are equivalent.
Most of the proof requires only the following property of tests:
**Property 1** (_laminarity_).: _Let \(u\) and \(x\) be test nodes. If \(u\) and \(x\) do non-equivalent tests, then, among the four pairs of outcomes between the two nodes, exactly one pair is inconsistent, while the other three pairs are consistent. Formally, let \(u\to v\), \(u\to v^{\prime}\), \(x\to y\), and \(x\to y^{\prime}\) be the two outcomes from \(u\) and the two outcomes from \(x\). Then exactly one pair in \(\{u\to v,u\to v^{\prime}\}\times\{x\to y,x\to y^{\prime}\}\) is inconsistent._
_If \(u\) and \(x\) do equivalent tests, each outcome at \(u\) is consistent with a distinct outcome at \(x\)._
Property 1 is easily verified. (Note that, by the definition of 2wcdts in Section 2, and assuming there is more than one test, each outcome of each test is satisfied by at least one query in \(Q\).) We call Property 1 _laminarity_ because it is equivalent to the laminarity of the collection of sets that has, for each possible test, one set containing the queries that satisfy the test. In our case this laminar collection is
\[\big{\{}\{q\in Q:q<k\}:k\in K,\,\min Q<k\leq\max Q\big{\}}\,\cup\,\big{\{}\{q \}:q\in K\big{\}}.\]
As an example, consider the query set \(Q=[4]\). Then the yes-outcome of \(\langle<3\rangle\) and the yes-outcome of \(\langle=4\rangle\) are inconsistent, while every other pair of outcomes is consistent; e.g., the yes-outcome of \(\langle<3\rangle\) and the no-outcome of \(\langle=4\rangle\) are consistent, as they are both satisfied by the query value 2.
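Laminarity can also be checked mechanically on small instances; the short sketch below (Python, not from the paper) enumerates the outcome pairs of two tests over \(Q=[4]\) and reports the inconsistent ones, matching the examples above.

```python
from itertools import product

Q = [1, 2, 3, 4]

def outcome(test, answer):
    """The set of query values in Q that give the chosen answer to the test."""
    kind, k = test
    sat = {q for q in Q if (q < k if kind == '<' else q == k)}
    return sat if answer else set(Q) - sat

def inconsistent_pairs(t1, t2):
    """Outcome pairs of t1 and t2 that no query value satisfies simultaneously."""
    return [(a1, a2) for a1, a2 in product([True, False], repeat=2)
            if not (outcome(t1, a1) & outcome(t2, a2))]

print(inconsistent_pairs(('<', 3), ('=', 4)))   # [(True, True)] -- exactly one pair
print(inconsistent_pairs(('<', 4), ('=', 4)))   # two pairs: these tests are equivalent
```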
Throughout most of the rest of this section (including Sections 3.1 and 3.2), fix \(T\) to be an arbitrary irreducible tree.
**Property 2**.: _(i) In \(T\), if \(u\) is a proper ancestor of a test node \(v\) then the outcome of \(u\) leading to \(v\) is consistent with both outcomes at \(v\), and the other outcome of \(u\) is consistent with exactly one outcome at \(v\). (ii) No two nodes in \(T\) are equivalent._
Property 2 follows quite easily from the irreducibility of \(T\) and Property 1: The irreducibility of \(T\) implies directly that the outcome of \(u\) leading to \(v\) is consistent with both outcomes at \(v\). This implies that \(u\) and \(v\) are not equivalent, and the second part of (i) then follows from laminarity (Property 1). To justify Property 2(ii), let \(x\) and \(y\) be two different test nodes in \(T\). We have already established that if one of \(x\), \(y\) is an ancestor of the other then they cannot be equivalent. Otherwise, let \(u\) be the lowest common ancestor of \(x\) and \(y\). By (i), the outcome at \(u\) leading to \(x\) is consistent with both outcomes at \(x\), but, using (i) and Property 1, it is inconsistent with one outcome at \(y\). So \(x\) and \(y\) cannot be equivalent.
### Two weight bounds, via splitting
This section introduces _splitting_--a correctness-preserving local rearrangement of the tree that can be viewed as an extension of the well-studied _rotation_ operation to a more general class of trees, specifically, to trees whose admissible tests form a laminar set as described above. The section uses splitting to prove two weight bounds (Lemmas 3 and 4) that are used in subsequent sections.
**Definition 6**.: _Let \(u\) be a node in \(T\), \(T_{u}\) the subtree of \(T\) rooted at \(u\), and \(x\) any allowed test (not necessarily in \(T\)). The \(x\)-consistent path from \(u\) is the maximal downward path from \(u\) in \(T_{u}\) such that each outcome along this path is consistent with both outcomes at \(x\)._
The \(x\)-consistent path from \(u\) is unique because (by laminarity) at most one outcome out of any given node is consistent with both outcomes at \(x\). In the case that \(T_{u}\) contains a node \(\tilde{x}\) that is equivalent to \(x\), the \(x\)-consistent path from \(u\) is the path from \(u\) to \(\tilde{x}\) (using here the irreducibility of \(T\) and that neither outcome at \(\tilde{x}\) is consistent with both outcomes at \(x\)). In the case that \(T_{u}\) contains no such node \(\tilde{x}\), this \(x\)-consistent path from \(u\) ends at a leaf.
Fix a node \(u\) in \(T\) and a test node \(x\), not necessarily in \(T\). Informally, _splitting \(T_{u}\) around \(x\)_replaces subtree \(T_{u}\) of \(T\) by the subtree \(T^{\prime}_{x}\) obtained by the following process: initialize \(T^{\prime}_{x}\) to a subtree with root \(x\), whose yes- and no-subtrees are each a copy of \(T_{u}\), then splice out each redundant test (that is, each test \(w\) such that one of the outcomes at \(w\) is inconsistent with the
outcome at \(x\) that leads to \(w\), so that all queries reaching \(w\) must satisfy the other outcome at \(w\)). The resulting subtree \(T_{x}^{\prime}\) has a particular structure that we'll need to use. The formal definition, below, makes this structure explicit.
In this construction, and in the proofs that follow, we will consider and manipulate downward paths in \(T\). For convenience, we adopt the following convention: If \(u=u_{1}\to u_{2}\to\cdots\to u_{j}\) is any downward path in \(T\) then for each \(i\in[j-1]\), by \(u_{i+1}^{\prime}\) we denote the sibling of \(u_{i+1}\), so each edge \(u_{i}\to u_{i+1}^{\prime}\) leaves this path.
**Definition 7** (splitting).: Splitting \(T_{u}\) around \(x\) yields the subtree \(T_{x}^{\prime}\) produced by the following process. Let \(u=u_{1}\to u_{2}\to\cdots\to u_{d}\) be the \(x\)-consistent path from \(u\), as defined in Definition 6. Initialize \(T_{x}^{\prime}\) to have root \(x\), with yes- and no-subtrees, denoted \(T_{u}^{\mathsf{yes}}\) and \(T_{u}^{\mathsf{no}}\), each a copy of \(T_{u}\).
For each outcome \(\alpha\in\{\mathsf{yes},\mathsf{no}\}\) at \(x\), modify \(T_{u}^{\alpha}\) within \(T_{x}^{\prime}\) as follows. For each \(i\in[d-1]\), if outcome \(u_{i}\to u_{i+1}^{\prime}\) is inconsistent with the \(\alpha\)-outcome at \(x\), within \(T_{u}^{\alpha}\), delete node \(u_{i}\) and the subtree \(T_{u_{i+1}^{\prime}}\), making \(u_{i+1}\) the child of the current parent of \(u_{i}\) in place of \(u_{i}\). For \(i=d\), if \(u_{d}\) is a leaf, stop. Otherwise (\(u_{d}\) is a test node), let \(u_{d}\to y^{\prime}\) be the outcome at \(u_{d}\) that is inconsistent with the \(\alpha\)-outcome at \(x\). Within \(T_{u}^{\alpha}\), delete node \(u_{d}\) and the subtree \(T_{y^{\prime}}\), making the other child \(y\) of \(u_{d}\) the child of the current parent of \(u_{d}\) in place of \(u_{d}\)._
Note that, for each \(\alpha\in\{\mathsf{yes},\mathsf{no}\}\), by the definition of the \(x\)-consistent path from \(u\) and Property 1 (laminarity), outcome \(u_{i}\to u_{i+1}^{\prime}\) is inconsistent with exactly one outcome at \(x\). Also, if \(u_{d}\) is a test node then it must be equivalent to \(x\), so exactly one outcome at \(u_{d}\) is inconsistent with the \(\alpha\)-outcome at \(x\). (See Lemmas 1 and 2 below for a more detailed characterization of the result of splitting.) Figures 6 and 7 give examples of splitting. In Figure 6, \(d=4\) and \(u_{4}\) is a test node (in fact \(x=u_{4}\)). In Figure 7, \(x\) is a new node (not equivalent to any node in \(T_{u}\)), \(d=5\) and \(u_{5}\) is a leaf.
**Lemma 1**.: _For each query \(q\in Q_{u}\), the search for \(q\) in \(T_{x}^{\prime}\) ends at a leaf that is one of the two copies in \(T_{x}^{\prime}\) of the leaf that the search for \(q\) in \(T_{u}\) ends at._
Proof.: The process that produces \(T_{x}^{\prime}\) maintains this property as an invariant. The invariant holds initially when the yes- and no-subtrees of \(x\) in \(T_{x}^{\prime}\) are each copies of \(T_{u}\). Suppose the invariant holds
Figure 6: Splitting a subtree \(T_{u_{1}}\) around descendant \(u_{4}\). The figures in this section draw \(T_{u}\) by drawing \(u_{i}\) and \(u_{i}^{\prime}\) as the left and right children of their parent \(u_{i-1}\), so that the \(u_{4}\)-consistent path from \(u_{1}\) is drawn as a prefix of the left spine. Each rounded half-circle represents a subtree, labeled with its root. Here outcomes \(u_{1}\to u_{2}^{\prime}\) and \(u_{3}\to u_{4}^{\prime}\) are consistent with the outcome \(u_{4}\to u_{5}\) at \(u_{4}\) while outcome \(u_{2}\to u_{3}^{\prime}\) is consistent with the other outcome \(u_{4}\to u_{5}^{\prime}\). In the notation of Lemma 3 (taking \(j=4\)) \(\delta_{2}=\delta_{3}=1\) and \(\beta=2\), and the lemma gives the bound \(w(u_{2}^{\prime})\geq w(u_{5})+2w(u_{5}^{\prime})\).
just before the process deletes a test node \(v\) and its subtree \(T_{y^{\prime}}\) from a subtree \(T_{u}^{\alpha}\) of the current \(T_{x}^{\prime}\). The \(\alpha\)-outcome at \(x\) is inconsistent with the \(v\to y^{\prime}\) outcome at \(v\), and all queries that reach \(v\) in the current tree have outcome \(\alpha\) at \(x\), and therefore they all satisfy the opposite outcome \(v\to y\) at \(v\). So deleting \(v\) and \(T_{y^{\prime}}\) (replacing \(v\) by \(y\)) doesn't change the leaf that any search ends at, thus maintaining the invariant.
Lemma 1 implies that \(T_{x}^{\prime}\) is a correct subtree for query set \(Q_{u}\).
**Lemma 2**.:
1. _For each_ \(i\in[d-1]\)_, outcome_ \(u_{i}\to u_{i+1}^{\prime}\) _is inconsistent with exactly one outcome_ \(\alpha\in\{\texttt{yes},\texttt{no}\}\) _at_ \(x\)_. For this outcome_ \(\alpha\)_, node_ \(u_{i}\) _and subtree_ \(T_{u_{i+1}^{\prime}}\) _are deleted from the_ \(\alpha\)_-subtree_ \(T_{u}^{\alpha}\) _of_ \(x\)_, and are not deleted from the other subtree_ \(T_{u}^{\alpha^{\prime}}\)_, where outcome_ \(\alpha^{\prime}\) _is the opposite of_ \(\alpha\)_._
2. _If_ \(u_{d}\) _is a test node, one outcome at_ \(u_{d}\)_, say_ \(u_{d}\to y\)_, is inconsistent with the yes-outcome at_ \(x\)_, while the other outcome_ \(u_{d}\to y^{\prime}\) _is inconsistent with the no-outcome at_ \(x\)_. Then within_ \(T_{u}^{\texttt{yes}}\) _node_ \(u_{d}\) _and subtree_ \(T_{y}\) _are deleted, while within_ \(T_{u}^{\texttt{no}}\) _node_ \(u_{d}\) _and subtree_ \(T_{y^{\prime}}\) _are deleted._
3. _For each leaf_ \(z\) _in_ \(T_{u}\) _except_ \(u_{d}\) _(if_ \(u_{d}\) _is a leaf), only one of the two copies of_ \(z\) _remains in_ \(T_{x}^{\prime}\)_, and the query set of the remaining copy in_ \(T_{x}^{\prime}\) _is the same as the query set of_ \(z\) _in_ \(T_{u}\)_._
Proof.: For \(i<d\), by the definition of the \(x\)-consistent path from \(u\), each outcome \(u_{i}\to u_{i+1}\) is consistent with both outcomes at \(x\), so, by laminarity, the outcome \(u_{i}\to u_{i+1}^{\prime}\) is inconsistent with exactly one outcome \(\alpha\) at \(x\). Inspecting the construction of \(T_{x}^{\prime}\), we obtain that \(u_{i}\) and its subtree \(T_{u_{i+1}^{\prime}}\) are deleted from \(T_{u}^{\alpha}\) but not from \(T_{u}^{\alpha^{\prime}}\). This implies Part (i).
Recall that if the final node \(u_{d}\) on the \(x\)-consistent path from \(u\) is a test node, then by the definition of this path, \(u_{d}\) must be equivalent to \(x\). So one outcome at \(u_{d}\), say \(u_{d}\to y\), is inconsistent with the yes-outcome at \(x\), while the other outcome \(u_{d}\to y^{\prime}\) at \(u_{d}\) is inconsistent with the no-outcome at \(x\). This and the definition of \(T_{x}^{\prime}\) imply Part (ii).
To prove Part (iii), first consider the case that \(u_{d}\) is a test node. Each leaf in \(T_{u}\) is in one of the subtrees \(T_{u_{i}^{\prime}}\) (\(2\leq i\leq d\)) or in the yes- or no-subtree of \(T_{u_{d}}\). (Note that all these subtrees are disjoint.) In \(T_{x}^{\prime}\), for each of these \(d+1\) subtrees, one of the two copies of the subtree is deleted
from \(T^{\prime}_{x}\). So only one copy of each leaf remains in \(T^{\prime}_{x}\). In the case that \(u_{d}\) is a leaf, for each leaf other than \(u_{d}\), the same reasoning applies (minus the subtrees of \(T_{u_{d}}\)), to show that only one copy of each leaf other than \(u_{d}\) remains in \(T^{\prime}_{x}\). Part (iii) then follows from Lemma 1.
If \(u_{d}\) is a leaf in \(T_{u}\), then both copies of \(u_{d}\) remain in \(T^{\prime}_{x}\), although one can have an empty query set. (In general, \(T^{\prime}_{x}\) might not be irreducible, but this does not affect the proofs below.)
Now we prove the weight bounds that are used in later sections. The proofs of these bounds take advantage of laminarity. Specifically, as \(T\) is irreducible, Property 2(i) implies that if \(u_{i}\) is a proper ancestor of \(u_{j}\) then outcome \(u_{i}\to u^{\prime}_{i+1}\) is consistent with one outcome at \(u_{j}\) and inconsistent with the other.
**Lemma 3**.: _Suppose \(T\) is optimal. Let \(u_{1}\to\cdots\to u_{j+1}\) be any downward path in \(T\). For \(1\leq i\leq j-1\), let \(\delta_{i}\) be the number of ancestors \(u_{s}\) of \(u_{i}\) on the path such that outcomes \(u_{s}\to u^{\prime}_{s+1}\) and \(u_{i}\to u^{\prime}_{i+1}\) are consistent with opposite outcomes at \(u_{j}\). Let \(\beta\) be the number of ancestors \(u_{s}\) of \(u_{j-1}\) whose outcome \(u_{s}\to u^{\prime}_{s+1}\) is consistent with outcome \(u_{j}\to u_{j+1}\) (so \(0\leq\beta\leq j-1\)). Then_
\[w(u^{\prime}_{2})\,\geq\,(j-1-\beta)\,w(u_{j+1})+\beta w(u^{\prime}_{j+1})+ \sum_{i=3}^{j}(\delta_{i-1}-1\,)w(u^{\prime}_{i}).\]
Proof.: Consider splitting subtree \(T_{u_{1}}\) around \(u_{j}\). Because \(T\) is irreducible, both outcomes at \(u_{j}\) are consistent with each outcome along the path \(u_{1}\to\cdots\to u_{j}\), so this path is the \(u_{j}\)-consistent path from \(u_{1}\) used in splitting. By Lemma 2, for each \(i\) with \(2\leq i\leq j\), each descendant of \(u^{\prime}_{i}\) gains one new ancestor \((u_{j})\) and loses \(\delta_{i-1}\) ancestors, namely those ancestors \(u_{s}\) of \(u_{i-1}\) such that outcomes \(u_{i-1}\to u^{\prime}_{i}\) and \(u_{s}\to u^{\prime}_{s+1}\) are consistent with opposite outcomes at \(u_{j}\). Each descendant of \(u_{j+1}\) loses \(j-1-\beta\) ancestors, namely the ancestors \(u_{s}\) of \(u_{j-1}\) whose outcome \(u_{s}\to u^{\prime}_{s+1}\) is inconsistent with \(u_{j}\to u_{j+1}\). Each descendant of \(u^{\prime}_{j+1}\) loses \(\beta\) ancestors, namely the ancestors \(u_{s}\) of \(u_{j-1}\) whose outcome \(u_{s}\to u^{\prime}_{s+1}\) is inconsistent with \(u_{j}\to u^{\prime}_{j+1}\). (Here we use that descendants of \(u_{j}\) already had \(u_{j}\) as an ancestor in \(T_{u}\).) So (using Lemma 2 (iii) and that \(u_{j}\) is not a leaf) splitting increases the cost by \(-(j-1-\beta)w(u_{j+1})-\beta w(u^{\prime}_{j+1})+\sum_{i=2}^{j}(1-\delta_{i-1})w(u^{\prime}_{i})\). By the optimality of \(T\), this quantity must be non-negative. Substituting \(\delta_{1}=0\) and rearranging gives the desired bound.
**Lemma 4**.: _Suppose \(T\) is optimal. Let \(x\) be any test node, not necessarily in \(T\). Let \(u_{1}\to\cdots\to u_{j+1}\) be a prefix of the \(x\)-consistent path from \(u_{1}\). For \(1\leq i\leq j-1\), let \(\delta_{i}\) be the number of ancestors \(u_{s}\) of \(u_{i}\) on the path such that outcomes \(u_{s}\to u^{\prime}_{s+1}\) and \(u_{i}\to u^{\prime}_{i+1}\) are consistent with opposite outcomes at \(x\). Let \(\beta^{\prime}\) be the number of ancestors \(u_{s}\) of \(u_{j}\) whose outcome \(u_{s}\to u^{\prime}_{s+1}\) is consistent with the yes-outcome of \(x\) (so \(0\leq\beta^{\prime}\leq j\)). Then_
\[w(u^{\prime}_{2})\,\geq\,\min(j-1-\beta^{\prime},\beta^{\prime}-1)w(u_{j})+ \sum_{i=3}^{j}(\delta_{i-1}-1)w(u^{\prime}_{i}).\]
Proof.: Consider splitting subtree \(T_{u_{1}}\) around \(x\). By Lemma 2, for each \(i\) with \(2\leq i\leq j\), each descendant of \(u^{\prime}_{i}\) gains one new ancestor \((x)\) and loses \(\delta_{i-1}\) ancestors, namely those ancestors \(u_{s}\) such that outcomes \(u_{i-1}\to u^{\prime}_{i}\) and \(u_{s}\to u^{\prime}_{s+1}\) are consistent with opposite outcomes at \(x\). Each proper descendant of \(u_{j}\) in the yes-subtree of \(T^{\prime}_{x}\) gains one new ancestor \((x)\) and loses at least \(j-\beta^{\prime}\) ancestors, namely the ancestors \(u_{s}\) of \(u_{j}\) on the path whose outcome \(u_{s}\to u^{\prime}_{s+1}\) is inconsistent with the yes-outcome at \(x\). Each proper descendant of \(u_{j}\) in the no-subtree of \(T^{\prime}_{x}\) gains one new ancestor \((x)\) and loses at least \(\beta^{\prime}\) ancestors, namely the ancestors \(u_{s}\) of \(u_{j}\) on the path whose outcome \(u_{s}\to u^{\prime}_{s+1}\) is inconsistent with the no-outcome at \(x\). So the search depth of each proper descendant of \(u_{j}\) increases by at most \(\max(1+\beta^{\prime}-j,1-\beta^{\prime})\). So (using Lemmas 1 and 2 (iii)) splitting increases the cost by at most \(\max(1+\beta^{\prime}-j,1-\beta^{\prime})w(u_{j})+\sum_{i=2}^{j}(1-\delta_{i-1} )w(u^{\prime}_{i})\). By the optimality of \(T\), this quantity must be non-negative. Substituting \(\delta_{1}=0\) and rearranging gives the bound.
### Structural theorem
This section proves the following theorem. The next section uses it to prove Theorem 1. As in the previous section, for any downward path \(u_{1}\to u_{2}\to\cdots\to u_{j}\), by \(u^{\prime}_{i}\) we will denote \(u_{i}\)'s sibling (\(2\leq i\leq j\)).
**Theorem 3**.: _Suppose \(T\) is optimal. Let \(u_{1}\to u_{2}\to\cdots\to u_{d}\) be any downward path in \(T\) (not necessarily starting at the root) such that \(w(u^{\prime}_{2})<w(u_{d})\). Then, for all different nodes \(u_{i}\), \(u_{j}\) on the path, with \(i,j<d\), both outcomes at \(u_{i}\) are consistent with outcome \(u_{j}\to u_{j+1}\)._
Consider the following example for intuition. Suppose that some node \(u\) in \(T\) does an equality test \(\langle=h\rangle\), and, in the no-subtree of \(u\), some node \(x\) has \(w(x)>w(h)\). By the theorem, then, the query value \(q=h\) satisfies all outcomes along the path from the no-child of \(u\) to \(x\).
The only property of the admissible tests that Theorem 3 relies on is laminarity.
Proof of Theorem 3.: If \(i>j\) then the theorem follows directly from Property 2(i). So for the rest of the proof we assume that \(i<j<d\), and we only need to prove that outcomes \(u_{i}\to u^{\prime}_{i+1}\) and \(u_{j}\to u_{j+1}\) are consistent (since we already know that outcomes \(u_{i}\to u_{i+1}\) and \(u_{j}\to u_{j+1}\) are consistent).
Applying Lemma 3 to the path \(u_{1}\to u_{2}\to u_{3}\) (so \(j=2\)) gives \(w(u^{\prime}_{2})\geq(1-\beta)\,w(u_{3})+\beta w(u^{\prime}_{3})\), where \(\beta\) is \(1\) if \(u_{1}\to u^{\prime}_{2}\) is consistent with \(u_{2}\to u_{3}\) and zero otherwise. But \(w(u^{\prime}_{2})<w(u_{d})\leq w(u_{3})\), so \(\beta=1\). So \(w(u^{\prime}_{2})\geq w(u^{\prime}_{3})\) and \(u_{1}\to u^{\prime}_{2}\) is consistent with \(u_{2}\to u_{3}\).
With \(w(u^{\prime}_{2})<w(u_{d})\), this implies \(w(u^{\prime}_{3})<w(u_{d})\). Applying the theorem inductively to the (shorter) path \(u_{2}\to\cdots\to u_{d}\), we have that, for all \(i\) and \(j\) with \(2\leq i<j<d\), \(u_{i}\to u^{\prime}_{i+1}\) is consistent with \(u_{j}\to u_{j+1}\).
Then the only remaining case is for \(i=1\) and \(3\leq j<d\). That is, we need to prove that \(u_{1}\to u^{\prime}_{2}\) is consistent with \(u_{j}\to u_{j+1}\), for all \(j\) with \(3\leq j<d\). Suppose otherwise for contradiction. Fix \(j\) with \(3\leq j<d\) such that \(u_{1}\to u^{\prime}_{2}\) is not consistent with \(u_{j}\to u_{j+1}\). Then, by laminarity and the irreducibility of \(T\), \(u_{1}\to u^{\prime}_{2}\) is consistent with \(u_{j}\to u^{\prime}_{j+1}\). By the previous paragraph \(u_{2}\to u^{\prime}_{3}\) is consistent with \(u_{j}\to u_{j+1}\). So \(u_{1}\to u^{\prime}_{2}\) and \(u_{2}\to u^{\prime}_{3}\) are consistent with different outcomes at \(u_{j}\).
Apply Lemma 3 to the path \(u_{1}\to\cdots\to u_{j+1}\). Let \(\delta_{i}\) and \(\beta\) be as defined in the lemma. As \(u_{1}\to u^{\prime}_{2}\) and \(u_{2}\to u^{\prime}_{3}\) are consistent with different outcomes at \(u_{j}\), we have \(\delta_{i}\geq 1\) for all \(2\leq i\leq j-1\). Likewise we have \(1\leq\beta\leq j-2\), so the bound from the lemma implies \(w(u^{\prime}_{2})\geq w(u_{j+1})+w(u^{\prime}_{j+1})=w(u_{j})\geq w(u_{d})\), contradicting \(w(u^{\prime}_{2})<w(u_{d})\).
### Proof of Theorem 1 (some optimal tree is admissible)
The proofs above rely only on laminarity. The proofs below use the particular structure of less-than and equality tests, and the properties of \(u\)-consistent paths. In particular, in the special case when \(u\) is an equality test, say \(u\) is \(\langle=h\rangle\), the \(u\)-consistent path from \(x\) is the path that the search for \(h\) would take if started at \(x\).
**Lemma 5**.: _Suppose the instance has distinct weights and \(T\) is optimal. Consider any equality-test node \(\langle=h\rangle\) and a key \(k\) with \(w(k)>w(h)\) reaching this node. Then a search for \(h\) from the no-child of \(\langle=h\rangle\) would end at the leaf \(L_{k}\) for \(k\), and the path from \(\langle=h\rangle\) to \(L_{k}\) has at most four nodes (including \(\langle=h\rangle\) and \(L_{k}\)). Also, \(h\) is not in the class that \(T\) assigns to \(k\)._
Proof.: Let \(u_{1}\to u_{2}\to\cdots\to u_{d}\) be the path from \(\langle\,=h\rangle\) to \(L_{k}\). Note that \(u_{1}\), \(u_{2}\), and \(u_{d}\) are \(\langle\,=h\rangle\), the no-child of \(\langle\,=h\rangle\), and \(L_{k}\). We have \(w(u_{d})=w(L_{k})\geq w(k)>w(h)=w(u_{2}^{\prime})\). (Recall that \(u_{2}^{\prime}\) denotes the yes-child of \(u_{1}\).) So, by Theorem 3, we obtain
* \((*)\) For any two different test nodes \(u_{i}\), \(u_{j}\) along the path with \(i,j<d\), both outcomes at \(u_{i}\) are consistent with \(u_{j}\to u_{j+1}\).
Applying this to \(i=1\), we obtain that the \(\langle\,=h\rangle\)-consistent path from \(u_{2}\) ends at \(L_{k}\). So the yes-outcome of \(\langle\,=h\rangle\) is consistent with all outcomes along this path, and thus a search for \(h\) starting from \(u_{2}\) would end in \(L_{k}\), as claimed.
To see that \(h\) cannot be in the class that \(T\) assigns to \(k\), suppose for contradiction that it is. A search for \(h\) starting at \(u_{2}\) would end at \(L_{k}\), so changing the test key at \(\langle\,=h\rangle\) to \(k\) (and relabeling \(u_{2}^{\prime}\) with a class containing \(k\)) gives a correct tree. The modification decreases the cost by \((w(k)-w(h))(d-2)\). By assumption \(w(k)>w(h)\). By the irreducibility of \(T\), the node \(\langle\,=h\rangle\) cannot be replaced by a leaf, so \(d\geq 3\). So the modification gives a correct tree that is cheaper than \(T\), contradicting the optimality of \(T\).
It remains only to show that the length \(d\) of the path from \(\langle\,=h\rangle\) to \(L_{k}\) is at most four. The argument uses the following claim:
**Claim 1**.: \(2\,w(h)\geq w(u_{3})\)_._
We postpone the proof of Claim 1, and show first how the bound \(d\leq 4\) follows from this claim. Assume towards contradiction that \(d\geq 5\), and consider the modification to \(T_{u_{1}}\) illustrated in Figure 8. Namely, replace \(T_{u_{3}}\) by a new equality test \(\langle\,=k\rangle\) (for the key \(k\) from the lemma) whose yes-child is a new leaf labeled with any answer that \(k\) accepts, and whose no-subtree is a copy of \(T_{u_{3}}\). This increases the search depth of every query reaching \(u_{3}\), except key \(k\), by \(1\). It decreases the search depth of \(k\) by at least \(1\). Thus, the increase in cost is at most \((w(u_{3})-w(k))-w(k)\). The optimality of \(T\) implies \(w(u_{3})\geq 2\,w(k)>2\,w(h)\), contradicting Claim 1. So we must have \(d\leq 4\).
To complete the proof of the lemma, it remains to prove Claim 1. The basic idea is to consider splitting \(T_{u_{1}}\) around a suitably chosen test node \(x\), and to apply the bound from Lemma 4 to derive the inequality in Claim 1.
For any two less-than tests along the path, the yes-outcome of the test with smaller key is inconsistent with the no-outcome of the other test, so, using Property \((*)\), the outcomes on the path must be the no-outcome of the test with smaller key and the yes-outcome of the test with larger key. It follows that the path has at most two less-than tests.
For any equality test, its yes-outcome is inconsistent with one outcome of every test. By this and Property \((*)\), for each equality test \(\langle\,=k_{i}\rangle\) on the path, its outcome along the path is the
no-outcome, and the yes-outcome at \(\langle=k_{i}\rangle\) is consistent with every other outcome along the path. That is, the value \(k_{i}\) satisfies every outcome along the path except the no-outcome at \(\langle=k_{i}\rangle\).
Let \(k_{i}\) be the key of test node \(u_{i}\) (\(1\leq i\leq 4\)). Let \(k_{1}^{*},k_{2}^{*},k_{3}^{*},k_{4}^{*}\) be a permutation of \(k_{1},k_{2},k_{3},k_{4}\) such that \(k_{1}^{*}\leq k_{2}^{*}\leq k_{3}^{*}\leq k_{4}^{*}\). Let \(u_{1}^{*},u_{2}^{*},u_{3}^{*},u_{4}^{*}\) be the corresponding permutation of \(u_{1},u_{2},u_{3},u_{4}\). Break ties when choosing the permutation so that \(u_{2}^{*}\) and \(u_{3}^{*}\) do equality tests and \(k_{1}^{*}\leq k_{2}^{*}<k_{3}^{*}\leq k_{4}^{*}\). (This is possible by the conclusions of the previous two paragraphs.) The only possible less-than tests are at \(u_{1}^{*}\) and at \(u_{4}^{*}\): node \(u_{1}^{*}\) could be \(\langle<k_{1}^{*}\rangle\) whose outcome along the path is negative, and node \(u_{4}^{*}\) could be \(\langle<k_{4}^{*}\rangle\) whose outcome along the path is positive.
Now create a new node \(x=\langle<k_{3}^{*}\rangle\). For each equality-test on the path, the outcome on the path is the no-outcome, which is consistent with both outcomes at \(x\). By the previous paragraph and using the key ordering, \(k_{1}^{*}\leq k_{2}^{*}<k_{3}^{*}\leq k_{4}^{*}\), for each of the two possible less-than tests on the path, its outcome along the path is consistent with both outcomes of \(x\). So both outcomes at \(x\) are consistent with all outcomes along the path. Therefore path \(u_{1}\to\cdots\to u_{5}\) is a prefix of the \(x\)-consistent path from \(u_{1}\), satisfying the assumptions of Lemma 4 with \(j=4\). The rest of the argument relies on this lemma.
The following observation will be useful: _among the four nodes on \(u_{1}\to u_{2}\to u_{3}\to u_{4}\), two have both outcomes consistent with the yes-outcome at \(x\), while the other two have both outcomes consistent with the no-outcome at \(x\)_. (Indeed, by the ordering of the keys and routine inspection, the yes-outcome at \(x\) is consistent with both outcomes at \(u_{1}^{*}\) and with both outcomes at \(u_{2}^{*}\). Similarly, the no-outcome at \(x\) is consistent with both outcomes at \(u_{3}^{*}\) and with both outcomes at \(u_{4}^{*}\).)
Next, we claim that outcomes \(u_{1}\to u_{2}^{\prime}\) and \(u_{2}\to u_{3}^{\prime}\) are consistent with the same outcome at \(x\). Towards contradiction, suppose that \(u_{1}\to u_{2}^{\prime}\) and \(u_{2}\to u_{3}^{\prime}\) are consistent with opposite outcomes at \(x\), so, in the notation from Lemma 4, \(\delta_{2}=1\). The observation above implies that outcomes \(u_{3}\to u_{4}^{\prime}\) and \(u_{4}\to u_{5}^{\prime}\) are also consistent with opposite outcomes at \(x\), so \(\delta_{3}=1\). But then (recalling \(j=4\)) Lemma 4 gives the bound \(w(u_{2}^{\prime})\geq w(u_{4})\), contradicting \(w(u_{2}^{\prime})<w(u_{d})\), and proving the claim.
Since \(u_{1}\to u_{2}^{\prime}\) and \(u_{2}\to u_{3}^{\prime}\) are consistent with the same outcome at \(x\), the earlier observation implies that \(u_{3}\to u_{4}^{\prime}\) and \(u_{4}\to u_{5}^{\prime}\) are consistent with the other outcome at \(x\). In this case (again in the notation of Lemma 4) \(\delta_{2}=0\), \(\delta_{3}=2\), and (as before) \(\beta^{\prime}=2\) and \(j=4\), so the lemma gives the bound \(w(u_{2}^{\prime})\geq w(u_{4})-w(u_{3}^{\prime})+w(u_{4}^{\prime})=w(u_{3})-w( u_{3}^{\prime})\). That is, \(w(u_{2}^{\prime})+w(u_{3}^{\prime})\geq w(u_{3})\).
It must be that \(w(u_{2}^{\prime})\geq w(u_{3}^{\prime})\). (Otherwise, by Theorem 3 applied to path \(u_{1}\to u_{2}\to u_{3}^{\prime}\), the \(u_{1}\)-consistent path from \(u_{2}\) would include \(u_{3}^{\prime}\), contradicting that it includes \(u_{3}\).) With the previous inequality this implies \(2\,w(u_{2}^{\prime})\geq w(u_{3})\). Since \(w(u_{2}^{\prime})=w(h)\), this completes the proof of Claim 1, and the whole lemma.
**Lemma 6**.: _If the instance has distinct weights, every irreducible optimal tree is admissible._
Proof.: Let \(T\) be any irreducible optimal tree. Consider any node \(u\) in \(T\). To prove the lemma we show that \(u\)'s query set is admissible. If \(Q_{u}\) has no light holes, then we are done, so assume otherwise. Let \(k^{*}=k^{*}(Q_{u})\) be the heaviest key reaching \(u\). Let \(H_{u}=\mathsf{holes}(Q_{u})\cap\mathsf{lighter}(k^{*})\) be the set of light holes at \(u\) and \(b=|H_{u}|\). Let \(c\) be the class that \(T\) assigns to \(k^{*}\) and \(S=\left[\min Q_{u},\max Q_{u}\right]_{K}\cap\mathsf{lighter}(k^{*})\setminus c\). We want to show \(H_{u}=\mathsf{heaviest}_{b}(S)\) and \(b\in[3]\).
First we show \(H_{u}\subseteq S\). By definition, \(H_{u}\subseteq\left[\min Q_{u},\max Q_{u}\right]_{K}\cap\mathsf{lighter}(k^{*})\). For any light hole \(h\in H_{u}\), key \(k^{*}\) is heavier than \(h\) and reaches the ancestor \(\langle=h\rangle\) of \(u\). Applying Lemma 5 to that ancestor, hole \(h\) is not in \(c\). It follows that \(H_{u}\subseteq S\).
Next we show \(H_{u}=\mathsf{heaviest}_{b}(S)\). Suppose otherwise for contradiction. That is, there are \(k\in S\setminus H_{u}\subseteq Q_{u}\) and \(h\in H_{u}\) such that \(k\) is heavier than \(h\). Keys \(k^{*}\) and \(k\) reach the ancestor \(\langle\,=h\rangle\) of \(u\). Applying Lemma 5 (twice) to that ancestor, the search path for \(h\) starting from the no-child of \(\langle\,=h\rangle\) ends both at \(L_{k^{*}}\) and at the leaf \(L_{k}\) for \(k\). So \(L_{k}=L_{k^{*}}\), which implies that \(k\) is in \(c\), contradicting \(k\in S\). Therefore \(H_{u}=\mathsf{heaviest}_{b}(S)\).
Finally, we show that \(b\leq 3\). Let \(h\in H_{u}\) be the light hole whose test node \(\langle\,=h\rangle\) is closest to the root. Key \(k^{*}\) reaches \(\langle\,=h\rangle\) and weighs more than \(h\). Applying Lemma 5 to that ancestor, the path from \(\langle\,=h\rangle\) to \(L_{k^{*}}\) has at most four nodes (including the leaf). Each light hole has a unique equality-test node on that path. So there are at most three light holes.
Finally we prove Theorem 1:
**Theorem 1**.: _If the instance is feasible, then some optimal tree is admissible._
Proof.: We use a perturbation argument to extend Lemma 6. Assume the instance \(I=(Q,w,\mathcal{C},K)\) is feasible (otherwise we are done). Recall that \(\tilde{w}(q)\) is the rank of \(q\) in the sorting of \(Q\) by weight, breaking ties arbitrarily but consistently, as defined at the beginning of Section 2.
Let \(I^{*}=(Q,w^{*},\mathcal{C},K)\) be an instance obtained from \(I\) by perturbing the query weights infinitesimally so that (i) the perturbed weights are distinct and (ii) sorting \(Q\) by \(w^{*}\) gives the same order as sorting by \(\tilde{w}\). (Specifically, take \(w^{*}(q)=w(q)+\epsilon\,\tilde{w}(q)\), for \(\epsilon\) such that \(0<\epsilon<\delta/n^{3}\), where \(\delta>0\) is less than the absolute difference in cost between any two irreducible trees with distinct costs, and less than the absolute difference between any two distinct weights.) Note that the sets of valid trees for \(I\) and for \(I^{*}\) are the same and finite, and that \(I^{*}\) is a feasible instance with distinct weights.
Let \(T^{*}\) be an optimal, irreducible tree for \(I^{*}\). Applying Lemma 6 to \(T^{*}\) and \(I^{*}\), tree \(T^{*}\) is admissible for \(I^{*}\). By inspection of Definition 1, whether or not a tree is irreducible for \(I\) is independent of \(w\). So \(T^{*}\) is also irreducible for \(I\). By inspection of Definition 4, whether or not a tree is admissible for \(I\) depends only on the tree and the set of query pairs \((h,k)\) such that \(\tilde{w}(h)<\tilde{w}(k)\). This and \(\tilde{w}(h)<\tilde{w}(k)\iff\tilde{w}^{*}(h)<\tilde{w}^{*}(k)\) imply that \(T^{*}\) is also admissible for \(I\). To finish we observe that \(T^{*}\) is also optimal for \(I\).
Recall that \(T\) is an optimal, irreducible tree for \(I\). Letting \(\mathsf{cost}(T)\) and \(\mathsf{cost}^{*}(T)\) be the costs of \(T\) under weight functions \(w\) and \(w^{*}\), we have \(\mathsf{cost}(T^{*})\leq\mathsf{cost}^{*}(T^{*})\leq\mathsf{cost}^{*}(T)\leq \mathsf{cost}(T)+n^{3}\epsilon<\mathsf{cost}(T)+\delta\). So by the choice of \(\delta\) we have \(\mathsf{cost}(T^{*})\leq\mathsf{cost}(T)\). So \(T^{*}\) is optimal for \(I\) as well.
## 4 Algorithm
This section proves Theorem 2, that the problem admits an \(O(n^{3}m)\)-time algorithm. The input is an arbitrary 2wcdt instance \((Q,w,\mathcal{C},K)\). In this section, for any \(R\subseteq Q\) redefine \(\mathsf{cost}(R)\) to be the minimum cost of any _admissible_ tree for the subproblem \(\pi(R)=(R,w,\mathcal{C},K)\) obtained by restricting the query set to \(R\). (Take \(\mathsf{cost}(R)=\infty\) if there is no admissible tree for \(\pi(R)\).) The algorithm returns \(\mathsf{cost}(Q)\), the minimum cost of any admissible tree for \((Q,w,\mathcal{C},K)\). By Theorem 1, this equals the minimum cost of any tree.
The algorithm computes \(\mathsf{cost}(Q)\) by using memoized recursion on the following recurrence relation:
**Recurrence 1**.: _For any \(R\subseteq Q\),_
\[\mathsf{cost}(R)=\begin{cases}\infty&(R\not\in\mathcal{A})\\ 0&(R\in\mathcal{A}\wedge(\exists c\in\mathcal{C})\,R\subseteq c)\\ w(R)+\min_{u}\big{(}\mathsf{cost}(R^{\mathsf{yes}}_{u})+\mathsf{cost}(R^{ \mathsf{no}}_{u})\big{)},&(\text{otherwise})\end{cases}\]
_where above \(\mathcal{A}\) denotes the set of admissible query subsets of \(Q\) (per Definition 4), \((R^{\mathsf{yes}}_{u},R^{\mathsf{no}}_{u})\) is the bipartition of \(R\) into those values that satisfy test \(u\) and those that don't, and \(u\) ranges over the allowed tests such that \(R^{\mathsf{yes}}_{u}\) and \(R^{\mathsf{no}}_{u}\) are admissible. (If there are no such tests then the minimum is infinite.)_
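To make the shape of the recursion concrete, the following Python sketch evaluates Recurrence 1 by memoized recursion over explicit query sets. It is illustrative only: the predicate `is_admissible` (standing in for Definition 4), the class list `classes`, and the allowed test keys `keys` are assumed to be supplied by the caller; explicit frozensets replace the constant-size signatures used later to reach the stated \(O(n^{3}m)\) bound, and both sides of a split are additionally required to be non-empty.

```python
from functools import lru_cache

def make_cost(Q, w, classes, keys, is_admissible):
    """Memoized evaluation of Recurrence 1 over explicit query sets.

    Q: iterable of query values; w: dict mapping value -> weight;
    classes: list of frozensets (the answer classes);
    keys: iterable of allowed test keys;
    is_admissible: callable frozenset -> bool (assumed to encode Definition 4).
    """
    INF = float("inf")

    def split(R, op, k):
        # Bipartition of R by the test <op k> into (yes-part, no-part).
        if op == "<":
            yes = frozenset(q for q in R if q < k)
        else:  # equality test
            yes = frozenset(q for q in R if q == k)
        return yes, R - yes

    @lru_cache(maxsize=None)
    def cost(R):
        if not is_admissible(R):
            return INF
        if any(R <= c for c in classes):
            return 0  # R fits inside a single class: a leaf suffices.
        best = INF
        for k in keys:
            for op in ("<", "="):
                yes, no = split(R, op, k)
                if yes and no and is_admissible(yes) and is_admissible(no):
                    best = min(best, cost(yes) + cost(no))
        return sum(w[q] for q in R) + best  # w(R) plus the best admissible split.

    return cost

# make_cost(...)(frozenset(Q)) then equals cost(Q), the minimum cost of an
# admissible tree (infinite if no admissible tree exists).
```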
There are \(O(n^{2}m)\) admissible query sets. (Indeed, each admissible set \(R\) with no light holes is determined by the triple \((\min R,\max R,k^{*}(R))\). Per Definition 4, each admissible set \(R\) with light holes is determined by a tuple \((\min R,\max R,k^{*}(R),b,c)\), where \((b,c)\in[3]\times\mathcal{C}\) with \(k^{*}(R)\in c\).) So \(O(n^{2}m)\) subproblems arise in recursively evaluating \(\mathsf{cost}(Q)\). To finish we describe how to evaluate the right-hand side of Recurrence 1 for a given \(R\) in \(O(n)\) amortized time.
Assume (by renaming elements in \(Q\) in a preprocessing step) that \(Q=[n]\). Given a non-empty query set \(R\subseteq Q\), define the _signature_ of \(R\) to be
\[\tau(R)=(\min R,\max R,k^{*}(R),H(R)),\]
where \(H(R)=\mathsf{holes}(R)\cap\mathsf{lighter}(k^{*}(R))\) is the set of light holes in \(R\).
For any \(R\), its signature is easily computable in \(O(n)\) time (for example, bucket-sort \(R\) and then enumerate the hole set \(\left[\ell,r\right]_{{}_{Q}}\setminus R\) to find \(H(R)\)). Each signature is in the set
\[\mathcal{S}=Q\times Q\times(K\cup\{\bot\})\times 2^{Q}\]
of _potential signatures_. Conversely, given any potential signature \(t=(\ell,r,k,H^{\prime})\in\mathcal{S}\), the set \(\tau^{-1}(t)\) with signature \(t\), if any, is unique and computable from \(t\) in \(O(n)\) time. (Specifically, \(\tau^{-1}(t)\) is \(Q_{t}=\left[\ell,r\right]_{{}_{Q}}\setminus((K\cap\mathsf{heavier}(k))\cup H ^{\prime})\) provided \(Q_{t}\) is non-empty and has signature \(t\).)
**Lemma 7**.: _After an \(O(n^{3}m)\)-time preprocessing step, given the signature \(\tau(R)\) of any \(R\in\mathcal{A}\), the right-hand side of Recurrence 1 can be computed in amortized time \(O(n)\)._
Proof.: Note that the admissible sets can be enumerated in \(O(n^{3}m)\) time as follows. First do the \(O(n^{3})\) admissible sets without light holes: for each \((\ell,r,k)\in Q\times Q\times(K\cup\{\bot\})\), output \(\tau^{-1}(\ell,r,k,\emptyset)\) if it exists. Next do the \(O(n^{2}m)\) admissible sets with at least one light hole, following Definition 4: for each \((\ell,r,k,b,c)\in Q\times Q\times K\times[3]\times\mathcal{C}\) with \(k\in c\), letting \(H^{\prime}=\mathsf{heaviest}_{b}(\left[\ell,r\right]_{{}_{K}}\cap\mathsf{ lighter}(k)\setminus c)\), if \(H^{\prime}\) is well-defined then output \(\tau^{-1}(\ell,r,k,H^{\prime})\) if it exists.
First we describe the preprocessing step.
_Initialize a dictionary \(A\) holding a record \(A[\tau(R)]\) for each set \(R\) in \(\mathcal{A}\)._ To be able to determine whether a given query set \(R\) is in \(\mathcal{A}\), and to store information (including the memoized cost) for each admissible set \(R\), build a dictionary \(A\) that holds a record \(A[\tau(R)]\) for each \(R\in\mathcal{A}\), indexed by the signature \(\tau(R)\). For now, assume the dictionary \(A\) supports constant-time access to the record \(A[\tau(R)]\) for each \(R\in\mathcal{A}\) given the signature \(\tau(R)\) of \(R\). (We describe a suitable implementation later.) Initialize \(A\) to hold an empty record \(A[\tau(R)]\) for each \(R\in\mathcal{A}\) by enumerating all \(R\in\mathcal{A}\) as described above. This takes \(O(n^{3}m)\) time.
_Identify the leaves._ To identify the sets \(R\in{\cal A}\) that are leaves (that is, such that \((\exists c\in{\cal C})\ R\subseteq c\)) in \(O(n^{3}m)\) time, for each triple \((\ell,r,k)\in Q\times Q\times(K\cup\{\bot\})\), do the following steps.
1. Let \({\cal R}\) contain the admissible sets \(R\) such that \(\tau(R)=(\ell,r,k,H^{\prime})\) for some \(H^{\prime}\). Assume \({\cal R}\) is non-empty (otherwise move on to the next triple). Let \(R_{\emptyset}\) be the set with signature \((\ell,r,k,\emptyset)\), so that \(R=R_{\emptyset}\setminus H(R)\) for \(R\in{\cal R}\). Let \({\cal C}_{\ell}\) contain the classes \(c\in{\cal C}\) such that \(\ell\in c\). Observe that \(|{\cal R}|\leq 4|{\cal C}_{\ell}|\), because \(R_{\emptyset}\) is unique for the triple \((\ell,r,k)\), and then \(R\) is determined from \(R_{\emptyset}\) by the class \(c\in{\cal C}\) and the number \(b\in[3]\) of light holes, per Definition 4.
2. Each set \(R\in{\cal R}\) contains \(\ell\), so \(R\) is a leaf if and only if \(R\subseteq c\) for some \(c\in{\cal C}_{\ell}\). The condition \(R\subseteq c\) is equivalent to \(R_{\emptyset}\setminus H(R)\subseteq c\), which is equivalent to \(R_{\emptyset}\setminus c\subseteq H(R)\). So, any given set \(R\in{\cal R}\) is a leaf if and only if some subset of \(H(R)\) equals \(R_{\emptyset}\setminus c\) for some \(c\in{\cal C}_{\ell}\). Identify all such \(R\) in time \(O(n|{\cal R}|+n|{\cal C}_{\ell}|)\). (Recalling that \(|H(R)|\leq 3\) for each \(R\in{\cal R}\), this is straightforward. One way is to construct the collection \({\cal H}=\bigcup_{R\in{\cal R}}2^{H(R)}\) of subsets of the light-hole sets. Order the elements within each subset in \({\cal H}\) by increasing value, then radix sort \({\cal H}\) into lexicographic order. Do the same for the collection \({\cal L}=\{R_{\emptyset}\setminus c:c\in{\cal C}_{\ell},|R_{\emptyset}\setminus c |\leq 3\}\). Then merge the two collections to find the elements common to both. A given \(R\in{\cal R}\) is a leaf if and only if some subset of \(H(R)\) in \({\cal H}\) also occurs in \({\cal L}\).)
As noted above, we have \(|{\cal R}|\leq 4|{\cal C}_{\ell}|\), so the time spent above on a given triple \((\ell,r,k)\) is \(O(n|{\cal C}_{\ell}|)\). Summing over all triples \((\ell,r,k)\), the total time is \(O(n^{2}\sum_{\ell\in K}n|{\cal C}_{\ell}|)=O(n^{3}m)\).
In \(O(n^{3}m)\) time, identify the \(O(n^{2}m)\) leaves \(R\in{\cal A}\) as described above. For each, record in its entry \(A[\tau(R)]\) that \(R\) is a leaf and \(\mbox{\sf cost}(R)=0\).
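The criterion underlying the leaf-identification step can be phrased compactly. The sketch below checks, for all sets \(R\) sharing one triple \((\ell,r,k)\), whether \(R\subseteq c\) for some \(c\in\mathcal{C}_{\ell}\), using the equivalent condition \(R_{\emptyset}\setminus c\subseteq H(R)\); it uses direct subset tests in place of the radix-sort merge described above, so it illustrates the criterion rather than the exact time bound, and the function name is ours.

```python
def leaves_for_triple(R_sets, R_empty, classes_with_ell):
    """Decide which sets R (each of the form R_empty \\ H(R)) are leaves.

    R_sets: the admissible sets sharing the triple (l, r, k);
    R_empty: the set with no light holes for this triple;
    classes_with_ell: the classes containing l = min R_empty.
    """
    # Only "difference patterns" R_empty \ c of size <= 3 can fit inside some H(R),
    # since every admissible set has at most three light holes.
    small_diffs = {frozenset(R_empty - c)
                   for c in classes_with_ell
                   if len(R_empty - c) <= 3}
    is_leaf = {}
    for R in R_sets:
        H = frozenset(R_empty - R)  # the light holes of R
        # R <= c for some class c  iff  some pattern R_empty \ c is contained in H.
        is_leaf[frozenset(R)] = any(d <= H for d in small_diffs)
    return is_leaf
```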
_Implementing Recurrence 1._ Next we describe how to compute \(\mbox{\sf cost}(R)\), given the signature \(\tau(R)=(\ell,r,k,H^{\prime})\) of any set \(R\subseteq Q\), in \(O(n)\) time.
If \(A\) contains no record \(A[\tau(R)]\), then \(R\) is not admissible, so \(\mbox{\sf cost}(R)=\infty\) and we are done. (Checking this takes constant time, using that if \(|H^{\prime}|\geq 4\) then no lookup is necessary.) So assume \(A\) contains a record \(A[\tau(R)]\).
If the record \(A[\tau(R)]\) already holds a memoized cost for \(R\), then we are done, so assume otherwise. (This implies that \(R\) is not a leaf.) In \(O(n)\) time, build \(R\) from \(\tau(R)\) and calculate the sum \(w(R)\). Then calculate \(\mbox{\sf cost}(R)\) in \(O(n)\) time by evaluating the right-hand side of Recurrence 1, in two stages. In the first stage enumerate all less-than tests that the root \(u\) in Recurrence 1 for \(\mbox{\sf cost}(R)\) can be, using the following steps:
1. Using bucket sort, compute \(R=(q_{1},q_{2},\ldots,q_{j})\) in sorted order. For \(0\leq i\leq j\) define \(R_{i}=(q_{1},q_{2},\ldots,q_{i})\) and \(\overline{R}_{i}=(q_{i+1},q_{i+2},\ldots,q_{j})\).
2. Compute \(k^{*}(R_{i})\) for \(0\leq i\leq j\) in constant time per \(i\) as follows. Start with \(k^{*}(R_{0})=\bot\). Then, for \(i\gets 1,\ldots,j\), compute \(k^{*}(R_{i})\) by using \(k^{*}(R_{i})=q_{i}\) if \(q_{i}\in K\) and \(q_{i}\) is heavier than \(k^{*}(R_{i-1})\), and otherwise \(k^{*}(R_{i})=k^{*}(R_{i-1})\).
3. Compute the light-hole set \(H(R_{i})\) for \(0\leq i\leq j\) from \(H(R)\) in constant time per \(i\), using \(H(R_{i})=\{h\in H(R):h\leq q_{i}\}\) (recall that \(|H(R)|\leq 3\)).
4. Using the results of Steps 2 and 3, for \(0\leq i\leq j\), in constant time per \(i\) compute and store the signature \((q_{1},q_{i},k^{*}(R_{i}),H(R_{i}))\) of \(R_{i}\).
5. Similarly, compute and store the signature \((q_{i+1},q_{j},k^{*}(\overline{R}_{i}),H(\overline{R}_{i}))\) of \(\overline{R}_{i}\) for each \(i\).
6. By "merging" \(R\) and \(K\) (each sorted), identify for each \(h\in K\) the \(i\) such that \((R^{\mbox{\sf yes}}_{<h}),R^{\mbox{\sf no}}_{<h})=(R_{i},\overline{R}_{i})\), thereby determining \(\tau(R^{\mbox{\sf yes}}_{<h})\) and \(\tau(R^{\mbox{\sf no}}_{<h})\). Then there is a term for \(u=\langle\ <h\rangle\) in the recurrence for each \(h\) such that the corresponding \(i\) satisfies \(1\leq i<j\) (that is, \(R_{i}\) and \(\overline{R}_{i}\) are not empty).
In the second stage, enumerate all the equality-tests \(\langle=h\rangle\) for \(h\in K\cap R\) that the root \(u\) can be. For each such test \(u\), we have \(R_{u}^{\mathsf{yes}}=\{h\}\), so \(\tau(R_{u}^{\mathsf{yes}})=(h,h,h,\emptyset)\). For all \(h\not\in\{\min R,\max R,k^{*}(R)\}\) (using that \(|R|\geq 2\), as \(R\) is not a leaf, so \(R_{u}^{\mathsf{no}}\neq\emptyset\)) use that \(\tau(R_{u}^{\mathsf{no}})\) is \((\min R,\max R,k^{*}(R),H(R)\cup\{h\})\), which (as \(|H(R)\cup\{h\}|\leq 4\)) is computable from \(\tau(R)\) in constant time. For each of the (at most three) values \(h\in\{\min R,\max R,k^{*}(R)\}\), using \(R_{u}^{\mathsf{no}}=R\setminus\{h\}\), explicitly compute \(R_{u}^{\mathsf{no}}\) and its signature in \(O(n)\) time. This completes the second stage.
For all tests \(u\) considered above, the values of \(\mathsf{cost}(R_{u}^{\mathsf{yes}})\) and \(\mathsf{cost}(R_{u}^{\mathsf{no}})\) are computed recursively from their signatures \(\tau(R_{u}^{\mathsf{yes}})\) and \(\tau(R_{u}^{\mathsf{no}})\).
Finally, for \(\mathsf{cost}(R)\), return (and memoize in \(A[\tau(R)]\)) \(w(R)\) plus the minimum of \(\mathsf{cost}(R_{u}^{\mathsf{yes}})+\mathsf{cost}(R_{u}^{\mathsf{no}})\), over all such \(u\).
In this way, for each \(R\in\mathcal{A}\), the time to evaluate the right-hand side of the recurrence is \(O(n)\). There are \(O(n^{2}m)\) sets in \(\mathcal{A}\), so the total time is \(O(n^{3}m)\). (Note that \(\mathsf{cost}(R)=\infty\) may also be computed for \(O(n^{3}m)\) non-admissible sets \(R\not\in\mathcal{A}\), but each of these takes constant time.)
_How to implement the dictionary \(A\)._ For each admissible query set \(R\in\mathcal{A}\), the set \(H(R)\) of light holes has size at most three, so the signature \(\tau(R)=(\ell,r,k,H(R))\) has size \(O(1)\). So one way to implement the dictionary \(A\) (so as to support constant-time lookup) is to use a hash table with universal hashing. Then the algorithm uses space \(O(n^{2}m)\), but is randomized. If a deterministic implementation is needed, one can implement the dictionary by storing an \(n\times n\times n\) matrix \(T\) of buckets such that a given bucket \(T[\ell,r,k]\) holds the records for the admissible query sets \(R\) with signatures of the form \(\tau(R)=(\ell,r,k,H^{\prime})\) for some \(H^{\prime}\). Organize the records in this bucket using a trie (prefix tree) of depth 3 keyed by the (sorted) keys in \(H^{\prime}\). This still supports constant-time access, but increases the space to \(O(n^{3}m)\). More generally, for any \(d\geq 1\), one can represent each element \(k\in[n]\) within each set \(H^{\prime}\) as a sequence of \(\lceil\log_{2}(n)/d\rceil\)\(d\)-bit words, then use a trie with alphabet \(\{0,1,\ldots,2^{d}-1\}\) and depth at most \(3\lceil\log_{2}(n)/d\rceil\). Then space is \(\Theta(2^{d}n^{2}m)\) while the access time is \(\Theta(\log(n)/d)\). For example, we can take \(d=\lceil\epsilon\log_{2}n\rceil\) for any constant \(\epsilon\) to achieve space \(O(n^{2+\epsilon}m)\) and access time \(\Theta(1/\epsilon)=\Theta(1)\). Or we can take \(d=1\) and achieve space \(O(n^{2}m)\) and access time \(\Theta(\log n)\), increasing the total time to \(O(n^{3}m\log n)\).
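As a concrete illustration of the interface the dictionary has to support, here is a minimal Python sketch in which the built-in dict plays the role of both the hash table and the trie; the class and method names are ours, not the paper's.

```python
class SignatureDict:
    """Records indexed by signatures (l, r, k, H') with |H'| <= 3.

    A sketch only: Python dicts replace the universal-hashing (randomized) and
    trie-based (deterministic) implementations discussed in the text.
    """

    def __init__(self):
        # bucket for (l, r, k)  ->  { sorted tuple of H'  ->  record }
        self._buckets = {}

    @staticmethod
    def _key(sig):
        l, r, k, holes = sig
        return (l, r, k), tuple(sorted(holes))

    def add_empty_record(self, sig):
        triple, holes = self._key(sig)
        self._buckets.setdefault(triple, {})[holes] = {}

    def get(self, sig):
        """Return the record for sig, or None if sig belongs to no admissible set."""
        triple, holes = self._key(sig)
        return self._buckets.get(triple, {}).get(holes)
```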
Remarks. Theorem 2 follows from Lemma 7.
We note without proof that there is a deterministic variant of the algorithm that uses space \(O(n^{2}m)\) and time \(O(n^{3}m)\). This variant is more complicated so we chose not to present it.
In the common case that \(\mathcal{C}\) partitions \(Q\), so each query \(q\in Q\) is contained in just one class \(c\in\mathcal{C}\), our algorithm can be implemented in time and space \(O(n^{2}m)=O(n^{3})\). To do this, in the above implementation of the dictionary using a matrix \(T\) of buckets, each bucket \(T[\ell,r,k]\) stores the records of at most four sets, so no prefix tree is needed to achieve constant access time and space.
Extending the algorithm to other inequality tests. Our model considers decision trees that use less-than and equality tests. Allowing the negations of these tests is a trivial extension. (E.g., every greater-than-or-equal test \(\langle\geq k\rangle\) is equivalent by swapping the children to the less-than test \(\langle<k\rangle\).) We note without proof that our results also extend easily to the model that allows less-than-or-equal tests (of the form \(\langle\leq k\rangle\)): the proof of Theorem 3 requires only a minor adjustment--specifically, such tests need to be taken into account when proving Claim 1 in the proof of Lemma 5; the extended algorithm then allows such tests in Recurrence 1. |
2301.10693 | The fate of infrared divergences in a finite formulation of field
theory: QED revisited | Within the framework of the recently proposed Taylor-Lagrange regularization
procedure, we reanalyze the calculation of radiative corrections in $QED$ at
next to leading order. Starting from a well defined local bare Lagrangian, the
use of this regularization procedure enables us to manipulate fully finite
elementary amplitudes in the ultra-violet as well as infra-red regimes, in
physical $D=4$ space-time dimensions and for physical massless photons, as
required by gauge invariance. We can thus separately calculate the
electromagnetic form factors of the electron and the cross-section for real
photon emission, each quantity being finite in these physical conditions. We
then discuss the renormalization group equations within this regularization
procedure. Thanks to the taming of infra-red divergencies, the renormalization
group equation associated to the (physical) effective charge exhibits an
ultra-violet stable fixed point at $\alpha^*=0$, showing an asymptotic freedom
type behavior. We finally consider the case of two mass scales, one low and one
heavy, paying particular attention to the natural decoupling properties between
heavy and light degrees-of-freedom. As a direct consequence, the fine structure
constant should be zero in the limit of massless electrons. | Jean-François Mathiot | 2023-01-25T16:55:35Z | http://arxiv.org/abs/2301.10693v1 | # The fate of infrared divergences in a finite formulation of field theory: QED revisited
###### Abstract
Within the framework of the recently proposed Taylor-Lagrange regularization procedure, we reanalyze the calculation of radiative corrections in \(QED\) at next to leading order. Starting from a well defined local bare Lagrangian, the use of this regularization procedure enables us to manipulate fully finite elementary amplitudes in the ultra-violet as well as infra-red regimes, in physical \(D=4\) space-time dimensions and for physical massless photons, as required by gauge invariance. We can thus separately calculate the electromagnetic form factors of the electron and the cross-section for real photon emission, each quantity being finite in these physical conditions. We then discuss the renormalization group equations within this regularization procedure. Thanks to the taming of infra-red divergencies, the renormalization group equation associated to the (physical) effective charge exhibits an ultra-violet stable fixed point at \(\alpha^{*}=0\), showing an asymptotic freedom type behavior. We finally consider the case of two mass scales, one low and one heavy, paying particular attention to the natural decoupling properties between heavy and light degrees-of-freedom. As a direct consequence, the fine structure constant should be zero in the limit of massless electrons.
## 1 Introduction
Following the recent development of a regularization procedure based on the nature of quantum fields as operator valued distributions (\(OPVD\)) - the so-called Taylor-Lagrange regularization scheme (\(TLRS\)) [1, 2] - we shall consider in this study the case of quantum electrodynamics (\(QED\)) in the one-loop, next to leading order (\(NLO\)), approximation. This regularization procedure originates from the observation that the divergences of bare amplitudes can be traced back to the violation of causality due to the ill-defined product of distributions at the same point [3, 4, 5, 6, 7, 8] (see also Refs. [9, 10]). Since the Lagrangian we start from is constructed from the product of fields or derivative of fields at the same point - it is thus called _local_ - the calculation of any elementary amplitude must be done with great care. The correct mathematical treatment for such a case is known since a long time [11, 13, 12]. More recently, these considerations led to the construction of \(TLRS\). According to this procedure, physical
fields are constructed as \(OPVD\), these distributions being applied on test functions with well defined mathematical properties. Since this scheme is completely finite - in a sense that will be defined below - it is not plagued by any arbitrariness due to the way divergences in the ultra-violet (\(UV\)) as well as infra-red (\(IR\)) regimes are cancelled. We can therefore concentrate on the most important, physical, consequences of the finite renormalization of the bare amplitudes, as in any interacting many-body system.
The main properties of \(TLRS\) can be characterized by the following two essential features:
* \(TLRS\) enables us to give a well defined meaning to the Lagrangian we start from. This is of course also the case using dimensional regularization (\(DR\)). Both regularization procedures are thus called _a-priori_ in contrast to _a-posteriori_ regularization procedures, like for instance using a naive cut-off in momentum space. In this latter case, the regularization is done a posteriori at the level of each elementary amplitude and not at the level of the Lagrangian itself.
* The calculation of any elementary amplitude in \(TLRS\) is done in physical conditions, _i.e._ in four space-time dimensions, with no additional non-physical degrees of freedom like for instance (infinitely massive) Pauli-Villars fields, and for massless photons. \(TLRS\) is thus called an _intrinsic_ regularization procedure, in contrast to \(DR\) for which elementary amplitudes still diverge in physical conditions. This last procedure is thus called _extrinsic_.
The construction of \(TLRS\) enables us to treat at the same time \(UV\) as well as \(IR\) singular operators [1, 2]. The \(IR\) singularities do occur in particular when massless degrees of freedom are involved. A textbook example of such a case is given by one-loop corrections in \(QED\). Using \(DR\) for instance, such calculation requires to give, in intermediate steps, a small non-zero mass to the photon. The subsequent massless limit is then taken at the very end of the calculation. These \(IR\) singularities should be properly taken care of before any physical consequences can be drawn from the calculation of a given physical observable. We shall investigate in our study the physical consequences of using \(TLRS\) for the calculation of \(NLO\) corrections in \(QED\). Although these corrections are by now well known examples, the use of this regularization procedure enables a completely new analysis, free from any \(UV\) and \(IR\) divergences. This implies in particular that no intermediate renormalization is necessary. In this sense, \(TLRS\) is at the same time a regularization procedure and a renormalization scheme, with the same acronym. The only renormalization we should worry about is the field strength renormalization for external, on-shell, particles. Thanks to the lack of any \(IR\) divergences, this renormalization factor is well defined for massless photons.
The behavior of any elementary amplitude and any physical observable is governed by two arbitrary scales, as already explained in Ref. [14]:
* the _regularization scale_ denoted by \(\eta\). It is inherent to the regularization procedure which is used to give a mathematical well defined meaning to the local bare Lagrangian we start from. _This scaling variable is dimensionless in \(TLRS\)_.
* The energy scale \(M\) at which an experiment is performed in order to fix the value of the parameters of the Lagrangian. It is more precisely a set of scales, like for instance in \(\phi^{4}\) theories. There is however only one scale in the case of \(QED\). We call this scale the _renormalization point_ since it fixes the kinematical condition where the finite (physical) renormalization of the bare parameters is performed.
These two arbitrary scales are the ones relevant for the calculation of the running of the two universal coupling constants - the bare and the physical ones - using the renormalization group (\(RG\)) equations [14]. The bare coupling constant depends on the regularization scale \(\eta\) only and is denoted by \(\alpha_{0}(\eta)\), while the physical one depends on the renormalization point \(M\) only and is denoted by \(\alpha_{M}(M)\)1. These two coupling constants are universal in the sense that they can be identified independently of the choice of any regularization procedure or any renormalization scheme. Moreover, they can be defined both in the perturbative as well as non-perturbative regimes. Note that the calculation of the physical coupling constant \(\alpha_{M}\), at \(M\neq 0\), is only made possible when \(IR\) divergences are properly taken care of, as we shall see in Sec. 3.
Footnote 1: When \(\alpha_{M}(M)\) is not directly accessible from an experiment at any value of \(M\) - as it is the case for \(QED\) - we can consider equivalently an effective charge directly related to a physical observable as we shall see in Sec. 3.1.
We would like to emphasize the very different nature of these two coupling constants. On the one hand, the bare one - \(\alpha_{0}(\eta)\) - is defined at the level of the bare Lagrangian, and knows nothing about the renormalization scheme which will be used, if any, nor about the physical state which is realized in Nature, like for instance in the presence of spontaneous symmetry breaking. On the other hand, the physical coupling constant \(\alpha_{M}(M)\) is a definite property of this physical state, and is independent of the regularization procedure which has been used. The running of these two coupling constants is therefore governed by two separate \(RG\) equations. The one associated to the \(\eta\)-dependence of \(\alpha_{0}\), called \(RGE(\eta)\), is mass-independent since it is associated to the local character of the Lagrangian we start from, _i.e._ to the \(UV\) limit of elementary amplitudes in momentum space. The \(RG\) equation associated to the \(M\)-dependence of \(\alpha_{M}\), called \(RGE(M)\), is mass-dependent since the kinematical condition \(M\) is finite and not arbitrarily large.
Once elementary amplitudes have been calculated, like for instance the self-energy of the electron, the polarization operator of the photon and the electromagnetic vertex correction, one should consider physical observables. Apart of course from the physical mass of the electron which is used to fix its bare mass, or the fine structure constant which is used to fix the bare coupling constant and its \(\eta\)-dependence, the first simple non-trivial observable is the elastic \(e^{-}-e^{\pm}\) scattering. As a direct consequence of the unique properties of \(TLRS\) recalled above, we shall see in Sec. 3 - in the one-photon-exchange approximation - that this scattering amplitude is finite in physical conditions _i.e._ with a massless photon. We shall also check that the cross-section for soft-photon bremstrahlung is finite in these conditions. The calculation of the electromagnetic form factors of the electron - for an arbitrary precision of the experimental apparatus in order to separate real photon emission from virtual vertex corrections - is thus made possible for the first time. As we shall see in Sec. 4, this has a non trivial consequence in the high energy limit. In this limit, \(QED\) exhibits an \(UV\) stable fixed point for the effective charge with \(\alpha^{*}=0\), showing an asymptotic freedom type behavior.
The plan of our article is the following. We calculate in Sec. 2 the elementary amplitudes in \(QED\) at \(NLO\), and check the Ward identities. The electromagnetic form factors of the electron together with soft-photon bremstrahlung are calculated in Sec. 3. We discuss in Sec. 4 the use of the \(RG\) equations as well as the case of two mass scales and the limit of massless electrons. Our conclusions are drawn in Sec. 5. We recall in A the main physical properties of \(TLRS\), while the calculation of all relevant integrals is detailed in B.
## 2 Elementary amplitudes
For illustration purposes on how to use \(TLRS\) in practice, we recall in this section the calculation of the elementary amplitudes in \(QED\) at \(NLO\). For simplicity, we restrict ourself to the Feynman gauge. The use of different gauges is discussed in Ref. [15]. All the necessary integrals are detailed in B.
### Self-energy of the electron
The electron self-energy is written, with the appropriate test functions \(f_{\sigma}\) (see A),
\[\Sigma(p)=-ie^{2}\lim_{\sigma\to 1^{-}}\int\frac{d^{4}k}{(2\pi)^{4}}\frac{ \gamma^{\mu}(\not{p}-\not{k}+m)\gamma_{\mu}}{k^{2}[(p-k)^{2}-m^{2}]}f_{\sigma} \left[\frac{k^{2}}{m^{2}}\right]f_{\sigma}\left[\frac{(p-k)^{2}}{m^{2}}\right], \tag{1}\]
where \(e\) is the physical charge and \(m\) the physical mass of the electron. The \(IR\) singularity at \(k=0\) is taken care of by the first test function. We thus get
\[\Sigma(p)=2ie^{2}\left[(\not{p}-2m)\overline{I}_{0}-\gamma_{\nu}\overline{I}_ {1}^{\nu}\right], \tag{2}\]
where the integrals \(\overline{I}_{0}\) and \(\overline{I}_{1}^{\mu}\) are given by
\[(\overline{I}_{0},\overline{I}_{1}^{\nu})(p)=\lim_{\sigma\to 1^{-}}\int \frac{d^{4}k}{(2\pi)^{4}}\frac{(1,k^{\nu})}{k^{2}[(p-k)^{2}+m^{2}]}f_{\sigma} \left[\frac{k^{2}}{m^{2}}\right]f_{\sigma}\left[\frac{(k-p)^{2}}{m^{2}}\right]. \tag{3}\]
They are calculated in B. With the most general decomposition
\[\Sigma(p)=mA(p^{2})+\not{p}B(p^{2}), \tag{4}\]
we have
\[A(p^{2}) = \frac{\alpha}{\pi}\int_{0}^{1}dx\ \left[\text{Log}\frac{\eta^{2}}{ \Delta_{p}}+cte\right], \tag{5a}\] \[B(p^{2}) = -\frac{\alpha}{2\pi}\int_{0}^{1}dx(1-x)\ \left[\text{Log}\frac{\eta^{2}}{ \Delta_{p}}+cte\right], \tag{5b}\]
with
\[\Delta_{p}=1-\frac{p^{2}}{m^{2}}(1-x), \tag{6}\]
and \(\alpha=\frac{e^{2}}{4\pi}\). In the above equations, and in all this study, we have indicated by \(cte\) a constant term, independent of any kinematical variable, in order to remind us that the regularization scale \(\eta\) is defined up to a multiplicative constant (see A). Note that the integrals in Eq. (5) do not involve any test function anymore since \(A\) and \(B\) are finite. We recover here the standard result [16].
We shall also need in Sec. 3 the electron field strength renormalization factor \(Z\). This factor is written as [16], at \(NLO\),
\[Z=1+\frac{d\Sigma}{d\not{p}}\bigg{|}_{\not{p}=m}=2m^{2}\left[A^{\prime}(m^{2} )+B^{\prime}(m^{2})\right]+B(m^{2}). \tag{7}\]
The calculation of \(A^{\prime}\) and \(B^{\prime}\) requires some care since both quantities involve \(IR\) singular operators [15]. They are calculated in B. We thus get
\[Z\equiv 1+\delta=1-\frac{\alpha}{4\pi}\left[\text{Log}\ \eta^{2}+cte\right]. \tag{8}\]
This factor is free from any \(IR\) divergences although it is calculated with a massless photon.
### Vacuum polarization of the photon
The calculation of the polarization operator of the photon proceeds similarly. We have
\[\Pi^{\mu\nu}(q)=ie^{2}\lim_{\sigma\to 1^{-}}\int\frac{d^{4}k}{(2\pi)^{4}}\frac{Tr[ \gamma^{\mu}(\not{k}+m)\ \gamma^{\nu}(\not{k}-\not{q}+m)]}{(k^{2}-m^{2})[(k-q)^{2}-m^{2}]}f_{\sigma} \left[\frac{k^{2}}{m^{2}}\right]f_{\sigma}\left[\frac{(k-q)^{2}}{m^{2}}\right]. \tag{9}\]
We thus get
\[\Pi^{\mu\nu}(q)=4ie^{2}\left[2\overline{J}_{2}^{\mu\nu}-g^{\mu\nu}\overline{J} _{2}+g^{\mu\nu}\overline{J}_{1}^{\rho}q_{\rho}-q^{\mu}\overline{J}_{1}^{\nu}-q ^{\nu}\overline{J}_{1}^{\mu}+m^{2}g^{\mu\nu}\overline{J}_{0}\right]. \tag{10}\]
The various integrals entering in Eq. (10) are given by
\[(\overline{J}_{0},\overline{J}_{1}^{\mu},\overline{J}_{2}, \overline{J}_{2}^{\mu\nu})(q)=\lim_{\sigma\to 1^{-}}\int\frac{d^{4}k}{(2\pi)^{4}} \frac{(1,k^{\mu},k^{2},k^{\mu}k^{\nu})}{(k^{2}-m^{2})[(k-q)^{2}-m^{2}]}\\ \times f_{\sigma}\left[\frac{k^{2}}{m^{2}}\right]f_{\sigma}\left[ \frac{(k-q)^{2}}{m^{2}}\right]. \tag{11}\]
They are calculated in B. We have finally
\[\Pi^{\mu\nu}(q^{2})=\Pi(q^{2})\left[g^{\mu\nu}q^{2}-q^{\mu}q^{\nu}\right], \tag{12}\]
where
\[\Pi(q^{2})=-\frac{2\alpha}{\pi}\int_{0}^{1}dx\ x(1-x)\left[\text{Log}\frac{ \eta^{2}}{\Delta_{q}}+cte\right], \tag{13}\]
with
\[\Delta_{q}=1+\frac{Q^{2}}{m^{2}}x(1-x), \tag{14}\]
and \(Q^{2}=-q^{2}\). We can check explicitly here that the photon propagator remains transverse as required by gauge invariance. It is instructive to calculate the limiting cases \(Q^{2}\ll m^{2}\) and \(Q^{2}\gg m^{2}\). We get
\[\Pi(Q^{2}\ll m^{2}) = -\frac{\alpha}{3\pi}\left[\text{Log}\ \eta^{2}-\frac{Q^{2}}{5m^{2}}+cte \right], \tag{15a}\] \[\Pi(Q^{2}\gg m^{2}) = \frac{\alpha}{3\pi}\text{Log}\frac{Q^{2}}{m^{2}}. \tag{15b}\]
These results will be used in Sec. 3 for the calculation of the electromagnetic form factors of the electron.
### Electromagnetic vertex
For simplicity, we calculate here the electromagnetic vertex for external on-shell electrons only. It is given, at \(NLO\), by
\[\Lambda^{\mu}(p,q)=-ie^{2}\bar{u}(p^{\prime})\lim_{\sigma\to 1^{-}} \int\frac{d^{4}k}{(2\pi)^{4}}\frac{\gamma^{\rho}(\not{p}-\not{k}+m) \gamma^{\mu}(\not{p}-\not{k}+m)\gamma_{\rho}}{k^{2}[(p^{\prime}-k)^{2}-m^{2}][ (p-k)^{2}-m^{2}]}\\ \times f_{\sigma}\left[\frac{k^{2}}{m^{2}}\right]f_{\sigma}\left[ \frac{(p-k)^{2}}{m^{2}}\right]f_{\sigma}\left[\frac{p^{\prime}-k)^{2}}{m^{2}} \right]u(p), \tag{16}\]
where \(\bar{u}(p^{\prime})\) and \(u(p)\) are the Dirac spinors, and \(q=p^{\prime}-p\). As usual, we decompose the electromagnetic vertex into two parts. The first one, denoted by \(\Lambda^{\mu}_{UV}\), is a divergent contribution in the \(UV\) domain in the absence of test functions, while the second one,
denoted by \(\Lambda^{\mu}_{IR}\), is convergent in this domain but has still \(IR\) divergences which should be properly taken care of. The first one depends explicitly on the regularization scale \(\eta\) while the second one does not. We get
\[\Lambda^{\mu}_{UV}(p,q)=-ie^{2}\bar{u}(p^{\prime})\lim_{\sigma\to 1^{-}} \int\frac{d^{4}k}{(2\pi)^{4}}\frac{\gamma^{\rho}\not\!k\gamma^{ \mu}\not\!k\gamma_{\rho}}{k^{2}[(p^{\prime}-k)^{2}-m^{2}][(p-k)^{2}-m^{2}]}\] \[\times f_{\sigma}\left[\frac{k^{2}}{m^{2}}\right]f_{\sigma}\left[ \frac{(p-k)^{2}}{m^{2}}\right]f_{\sigma}\left[\frac{(p^{\prime}-k)^{2}}{m^{2}} \right]u(p). \tag{17}\]
We thus can write
\[\Lambda^{\mu}_{UV}(p,q)=2ie^{2}\bar{u}(p^{\prime})\left[2\overline{K}^{\mu\nu} _{2}\gamma_{\nu}-\overline{K}_{2}\gamma^{\mu}\right]u(p), \tag{18}\]
where the integrals \(\overline{K}_{2}\) and \(\overline{K}^{\mu\nu}_{2}\) are given by
\[(\overline{K}_{2},\overline{K}^{\mu\nu}_{2})(p,q)=\lim_{\sigma \to 1^{-}}\int\frac{d^{4}k}{(2\pi)^{4}}\frac{(k^{2},k^{\mu}k^{\nu})}{k^{2}[(p^ {\prime}-k)^{2}-m^{2}][(p-k)^{2}-m^{2}]}\\ \times f_{\sigma}\left[\frac{k^{2}}{m^{2}}\right]f_{\sigma} \left[\frac{(p-k)^{2}}{m^{2}}\right]f_{\sigma}\left[\frac{(p^{\prime}-k)^{2}}{ m^{2}}\right]. \tag{19}\]
They are calculated in B. We finally have
\[\Lambda^{\mu}_{UV}(p,q)=\Phi^{UV}_{1}(Q^{2})\ \bar{u}(p^{\prime})\gamma^{\mu}u(p), \tag{20}\]
with
\[\Phi^{UV}_{1}(Q^{2})=\frac{\alpha}{2\pi}\int_{0}^{1}dx\int_{0}^{1-x}dy\left[ \text{Log}\frac{\eta^{2}}{\Delta}+cte\right], \tag{21}\]
and \(\Delta=(x+y)^{2}+\frac{Q^{2}}{m^{2}}xy\). By a standard change of variable [16] with \(w=x+y\) and \(y=w\xi\), we get, after integration over \(w\) and with the change of notation \(\xi\to x\),
\[\Phi^{UV}_{1}(Q^{2})=\frac{\alpha}{4\pi}\int_{0}^{1}dx\left[\text{Log}\frac{ \eta^{2}}{\Delta_{q}}+cte\right], \tag{22}\]
with \(\Delta_{q}\) given in Eq. (14). In the limits \(Q^{2}\ll m^{2}\) and \(Q^{2}\gg m^{2}\) we have
\[\Phi^{UV}_{1}(Q^{2}\ll m^{2}) = \frac{\alpha}{4\pi}\left[\text{Log}\ \eta^{2}-\frac{Q^{2}}{6m^{2}}+cte\right], \tag{23a}\] \[\Phi^{UV}_{1}(Q^{2}\gg m^{2}) = -\frac{\alpha}{4\pi}\text{Log}\frac{Q^{2}}{m^{2}}. \tag{23b}\]
These results will be used in Sec. 3 for the calculation of the electromagnetic form factors of the electron.
The contribution \(\Lambda^{\mu}_{IR}\) is finite in the \(UV\) domain but has still singularities in the \(IR\) domain in the absence of test functions, as well known. We can write, using the on-shell conditions for the external legs,
\[\Lambda^{\mu}_{IR}(p,q)=-4ie^{2}\bar{u}(p^{\prime})\left[[(p+p^{ \prime})^{\mu}\gamma_{\nu}-(p+p^{\prime})_{\nu}\gamma^{\mu}]\,\overline{K}^{ \nu}_{1}-m\overline{K}^{\mu}_{1}\right.\\ +\left.\gamma^{\mu}\left(m^{2}+\frac{Q^{2}}{2}\right)\overline{K} _{0}\right]u(p), \tag{24}\]
where the integrals \(\overline{K}_{0}\) and \(\overline{K}_{1}^{\lambda}\) are given by

\[(\overline{K}_{0},\overline{K}_{1}^{\lambda})(p,q)=\lim_{\sigma\to 1^{-}}\int\frac{d^{4}k}{(2\pi)^{4}}\frac{(1,k^{\lambda})}{k^{2}[(p^{\prime}-k)^{2}-m^{2}][(p-k)^{2}-m^{2}]}\\ \times f_{\sigma}\left[\frac{k^{2}}{m^{2}}\right]f_{\sigma}\left[\frac{(p-k)^{2}}{m^{2}}\right]f_{\sigma}\left[\frac{(p^{\prime}-k)^{2}}{m^{2}}\right]. \tag{25}\]

They are calculated in B. This gives

\[\Lambda_{IR}^{\mu}(p,q)=\bar{u}(p^{\prime})\left[\gamma^{\mu}\Phi_{1}^{IR}+\frac{i}{2m}\sigma^{\mu\nu}q_{\nu}\ \Phi_{2}\right]u(p), \tag{26}\]

with

\[\Phi_{1}^{IR}(Q^{2})=\frac{\alpha}{2\pi}\lim_{\sigma\to 1^{-}}\int_{0}^{1}dx\int_{0}^{1-x}dy\,\frac{1}{\Delta}\left[(x+y)^{2}+2(x+y)-2+\frac{Q^{2}}{m^{2}}\left(x+y-xy-1\right)\right]F_{\sigma}, \tag{27a}\]
\[\Phi_{2}(Q^{2})=-\frac{\alpha}{\pi}\int_{0}^{1}dx\int_{0}^{1-x}dy\,\frac{1}{\Delta}\,(x+y)(x+y-1). \tag{27b}\]

We have kept in \(\Phi_{1}^{IR}\) the relevant test functions, summarized by \(F_{\sigma}\), in order to take care of the \(IR\) singularities. We recover here of course the well known result for \(\Phi_{2}(Q^{2})\) since it has no infrared divergences. The continuum limit \(\sigma\to 1^{-}\) can then be taken immediately in this case. We thus have, using the results of B.4 and with the same change of variables as above,

\[\Phi_{1}^{IR}(Q^{2})=\frac{\alpha}{4\pi}\int_{0}^{1}\frac{dx}{\Delta_{q}}\left[5-2\,\text{Log}\,\Delta_{q}+\frac{Q^{2}}{m^{2}}\left[2-x(1-x)-\text{Log}\,\Delta_{q}\right]\right], \tag{28a}\]
\[\Phi_{2}(Q^{2})=\frac{\alpha}{2\pi}\int_{0}^{1}\frac{dx}{\Delta_{q}}. \tag{28b}\]

In the particular limits of very small or very large momentum transfer, we have

\[\Phi_{1}^{IR}(Q^{2}\ll m^{2})=\frac{\alpha}{4\pi}\left(5+\frac{2Q^{2}}{3m^{2}}\right), \tag{29a}\]
\[\Phi_{2}(Q^{2}\ll m^{2})=\frac{\alpha}{2\pi}\left(1-\frac{Q^{2}}{6m^{2}}\right), \tag{29b}\]

and

\[\Phi_{1}^{IR}(Q^{2}\gg m^{2})=-\frac{\alpha}{4\pi}\left[\text{Log}\frac{Q^{2}}{m^{2}}\right]^{2}, \tag{30a}\]
\[\Phi_{2}(Q^{2}\gg m^{2})=\frac{\alpha}{\pi}\frac{m^{2}}{Q^{2}}\,\text{Log}\frac{Q^{2}}{m^{2}}. \tag{30b}\]

This completes the calculation of the electromagnetic vertex in \(QED\), using \(TLRS\). As expected, all expressions are finite in physical conditions, _i.e._ in four space-time dimensions and with a massless photon. Note the \(\left[\text{Log}\frac{Q^{2}}{m^{2}}\right]^{2}\) behavior of \(\Phi_{1}^{IR}\) in the large \(Q^{2}\) limit. We shall come back to this point in the next Sections.
### Ward-Takahashi identity
With our notations, the Ward-Takahashi identity is written as
\[\Lambda^{\mu}(p,0)=-\bar{u}(p)\left[\frac{\partial}{\partial p_{\mu}}\Sigma(p) \right]u(p). \tag{31}\]
From the expression (1) for \(\Sigma(p)\) we have
\[\bar{u}(p)\left[\frac{\partial}{\partial p_{\mu}}\Sigma(p)\right]u(p)=-\Lambda^{\mu}(p,0)-4ie^{2}\bar{u}(p)(\gamma^{\mu}p_{\nu}-p^{\mu}\gamma_{\nu})u(p)\,\overline{K}_{1}^{\nu}(p,p)\\ -ie^{2}\bar{u}(p)\left[\lim_{\sigma\to 1^{-}}\int\frac{d^{4}k}{(2\pi)^{4}}\frac{\gamma^{\nu}(\not{p}-\not{k}+m)\gamma_{\nu}}{k^{2}[(p-k)^{2}-m^{2}]}\frac{\partial}{\partial p_{\mu}}F_{\sigma}\right]u(p), \tag{32}\]
with the integral \(\overline{K}_{1}^{\nu}\) calculated in B. With the on-shell condition, the second term in the r.h.s. of Eq. (32) is identically zero. Moreover, the third term in the r.h.s. of this equation is also zero in the continuum limit, since the derivative of the test functions vanishes everywhere except in the asymptotic region, where it goes to zero more rapidly than any power of the momentum, the test functions being rapidly decreasing functions (see A). This ensures the conservation of the Ward identities at that order. Note that this is only true in the continuum limit.
## 3 Physical observables
From the above results, it is easy to anticipate that the use of \(TLRS\) enables us to calculate, for the first time, physical observables free from any \(IR\) divergences. We shall concentrate in this first study on the electromagnetic form factors of the electron. Within \(TLRS\), these form factors are unambiguous since they are \(IR\) finite and do not depend on any regularization scale. They can be extracted from a combination of \(e^{-}-e^{-}\) and \(e^{-}-e^{+}\) elastic scattering cross-sections. It is commonly said that \(IR\) divergences in the calculation of these cross-sections are cancelled once soft-photon bremsstrahlung - which is not separated out experimentally below a given energy threshold \(\Delta E\) of the photon - is considered (the well-known Bloch-Nordsieck mechanism [17]). Note that, strictly speaking, these \(IR\) divergences in first-order perturbation theory have not actually disappeared in this case; they have simply been reinterpreted as \(IR\) divergences appearing when \(\Delta E\) tends to 0 in any _Gedanken_ experiment. This is indeed the price to pay, when using \(DR\) for instance, for not having properly treated these \(IR\) divergences from the start, in physical conditions.
We shall show in Sec. 3.2 how the use of \(TLRS\) enables us to calculate the cross-section for the emission of real massless photons. We first concentrate on the calculation of the electromagnetic form factors.
### The electromagnetic form factors of the electron
The physical amplitude for elastic \(e^{-}-e^{-}\) scattering is written as, in the Feynman gauge and in the one-photon exchange approximation,
\[{\cal M}_{ee}=i\frac{e^{2}}{q^{2}}\left[\bar{u}(p_{1}^{\prime})\Gamma_{\mu}u( p_{1})\right]\times\left[\bar{u}(p_{2}^{\prime})\Gamma^{\mu}u(p_{2})\right], \tag{33}\]
with
\[\Gamma_{\mu}=\gamma_{\mu}F_{1}(Q^{2})+\frac{i}{2m}\sigma_{\mu\nu}q^{\nu}F_{2}( Q^{2}). \tag{34}\]
The form factor \(F_{1}\) is normalized to \(F_{1}(Q^{2}=0)=1\) by definition of the physical electric charge \(e\) of the electron. The various contributions to \({\cal M}_{ee}\) at \(NLO\) are indicated on Fig. 1 in the one-photon-exchange approximation. All these contributions have been calculated in Sec. 2. We thus get, in terms of the bare coupling constant \(\alpha_{0}\),
\[\alpha F_{1}^{2}(Q^{2})=\alpha_{0}Z^{2}+\alpha\Pi(Q^{2})+2\alpha\Phi_{1}(Q^{2}), \tag{35}\]
with \(\Phi_{1}=\Phi_{1}^{UV}+\Phi_{1}^{IR}\), while \(F_{2}\) is simply identical to \(\Phi_{2}\) at that order. The value of \(\alpha_{0}\), and its \(\eta\)-dependence, is then fixed from the calculation of the fine structure constant at \(Q^{2}=0\), with
\[\alpha_{0}=\alpha\left[1-\Pi(0)-2\delta-2\Phi_{1}(0)\right], \tag{36}\]
where \(\delta\) is defined in Eq. (8). We can then calculate the form factor \(F_{1}\) of the electron, with
\[F_{1}(Q^{2})=1+\frac{1}{2}\left[\Pi(Q^{2})-\Pi(0)\right]+\left[\Phi_{1}(Q^{2} )-\Phi_{1}(0)\right]. \tag{37}\]
While this expression is of course not new and goes back to the early days of \(QED\) [18], it is calculated here with a massless photon, as demanded by gauge invariance, and is free from any \(IR\) divergences. As expected, \(F_{1}(Q^{2})\) is \(\eta\)-independent, as it should be.
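The passage from Eqs. (35)-(36) to Eq. (37) is pure \(NLO\) bookkeeping; spelling it out, with \(Z^{2}=1+2\delta+{\cal O}(\alpha^{2})\),

```latex
% NLO expansion of Eq. (35) using alpha_0 from Eq. (36) and Z = 1 + delta:
\begin{align*}
\alpha_{0}Z^{2} &= \alpha\left[1-\Pi(0)-2\Phi_{1}(0)\right]+{\cal O}(\alpha^{3}),\\
F_{1}^{2}(Q^{2}) &= 1+\left[\Pi(Q^{2})-\Pi(0)\right]
                    +2\left[\Phi_{1}(Q^{2})-\Phi_{1}(0)\right]+{\cal O}(\alpha^{2}),
\end{align*}
% and expanding the square root to first order in alpha reproduces Eq. (37).
```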
It is instructive to calculate \(F_{1}(Q^{2})\) in the two limiting kinematical conditions \(Q^{2}\ll m^{2}\) and \(Q^{2}\gg m^{2}\). We get immediately, from the results of Sec. 2,
\[F_{1}(Q^{2}\ll m^{2}) = 1+\frac{\alpha}{4\pi}\frac{Q^{2}}{m^{2}}\frac{19}{30}+{\cal O}( \alpha^{2}), \tag{38a}\] \[F_{1}(Q^{2}\gg m^{2}) = 1-\frac{\alpha}{4\pi}\left[\mbox{Log}\frac{Q^{2}}{m^{2}}\right]^{ 2}+{\cal O}(\alpha^{2}). \tag{38b}\]
The value of \(F_{1}\) in the large momentum region is of particular interest. While it shows the usual \(\left[\mbox{Log}\;Q^{2}\right]^{2}\) behavior, this contribution is finite although the calculation is done, from the start, with a massless photon. This is a direct consequence of using \(TLRS\) which enables us to tame both \(UV\) and \(IR\) divergences in physical conditions.
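As a rough numerical illustration of Eq. (38b) — a minimal sketch, not part of the original derivation, assuming only the NLO expression with \(\alpha\simeq 1/137.036\) and \(m\simeq 0.511\) MeV — the double-logarithmic correction is already sizeable at typical collider momentum transfers:

```python
import math

alpha = 1.0 / 137.036          # fine structure constant at Q^2 = 0
m = 0.511e-3                   # electron mass in GeV

def F1_large_Q2(Q):
    """NLO estimate of F1 for Q^2 >> m^2, Eq. (38b); higher orders neglected."""
    L = math.log(Q**2 / m**2)
    return 1.0 - alpha / (4.0 * math.pi) * L**2

for Q in (1.0, 10.0, 100.0):   # momentum transfer in GeV (illustrative values)
    print(f"Q = {Q:6.1f} GeV   F1 ~ {F1_large_Q2(Q):.3f}")
```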
### Soft photon emission
The elementary cross-section for the emission of a single soft photon is well known [19]. It is given, in first order perturbation theory, by
\[d\sigma(p\to p^{\prime}+\gamma)=d\sigma(p\to p^{\prime})I, \tag{39}\]
Figure 1: Elementary \(e-e\) amplitude at \(NLO\) in the one-photon exchange approximation. The dots indicate similar contributions with self-energy corrections on any of the external legs.
with
\[I=e^{2}\lim_{\sigma\to 1^{-}}\int\frac{d^{3}{\bf k}}{(2\pi)^{3}2\omega}\left[\frac{2p\cdot p^{\prime}}{p\cdot k}-\frac{m^{2}}{(p\cdot k)^{2}}-\frac{m^{2}}{(p^{\prime}\cdot k)^{2}}\right]f_{\sigma}\left[\frac{\omega^{2}}{m^{2}}\right], \tag{40}\]
where \(\omega=|{\bf k}|\). We have kept explicitly in Eq. (40) the appropriate test function for the outgoing photon [15]. For large momentum transfer, and with an upper limit \(\Delta E\) for the energy of the outgoing real photon, we have
\[I=\frac{2\alpha}{\pi}{\rm Log}\frac{Q^{2}}{m^{2}}\lim_{\sigma\to 1^{-}}\int_{0}^{\Delta E}\frac{d\omega}{\omega}f_{\sigma}\left[\frac{\omega^{2}}{m^{2}}\right]\equiv\frac{2\alpha}{\pi}{\rm Log}\frac{Q^{2}}{m^{2}}J. \tag{41}\]
With \(X=\omega^{2}/m^{2}\), we can write
\[J=\frac{1}{2}\lim_{\sigma\to 1^{-}}\int_{0}^{\frac{(\Delta E)^{2}}{m^{2}}}\frac{dX}{X}f_{\sigma}(X). \tag{42}\]
From the properties of \(TLRS\), and by definition of the Pseudo-function (see A), we get
\[J=\frac{1}{2}\int_{0}^{\frac{(\Delta E)^{2}}{m^{2}}}dX\,{\rm Pf}\left[\frac{1}{X}\right]=\frac{1}{2}{\rm Log}\left[\frac{(\Delta E)^{2}}{m^{2}}\right]. \tag{43}\]
The elementary cross-section for the emission of a soft photon is thus
\[d\sigma(p\to p^{\prime}+\gamma)=d\sigma(p\to p^{\prime})\ \frac{\alpha}{\pi}\ {\rm Log}\frac{Q^{2}}{m^{2}}\ {\rm Log}\left[\frac{(\Delta E)^{2}}{m^{2}}\right]. \tag{44}\]
This cross-section no longer shows any \(IR\) divergences associated with the zero mass of the photon. One can thus treat the virtual corrections to the electromagnetic form factors of the electron separately from the emission of soft real photons. This contribution should further be summed to all orders in order to account for the emission of an arbitrary number of real photons [19]. This gives the usual Sudakov-type correction, for large \(Q^{2}\), with
\[I\rightarrow\exp[I]=\exp\left[-\frac{\alpha}{\pi}\ {\rm Log}\frac{Q^{2}}{m^{2}}\ {\rm Log}\left(\frac{m^{2}}{(\Delta E)^{2}}\right)\right]. \tag{45}\]
This correction tends to \(0\) when \(\Delta E\) gets very small, leaving only the virtual photon contribution embedded in the electromagnetic form factors of the electron, independently of the ability of the experimental apparatus to disentangle the emission of soft real photons from elastic \(e-e\) scattering.
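For orientation, a minimal numerical sketch of the Sudakov-type factor of Eq. (45); the values of \(Q\) and \(\Delta E\) below are illustrative choices and are not taken from the text:

```python
import math

alpha = 1.0 / 137.036
m = 0.511e-3                   # electron mass in GeV

def sudakov_factor(Q, dE):
    """exp[I] of Eq. (45): suppression of soft real-photon emission for large Q^2,
    with a soft-photon energy threshold dE (both Q and dE in GeV)."""
    exponent = -alpha / math.pi * math.log(Q**2 / m**2) * math.log(m**2 / dE**2)
    return math.exp(exponent)

for dE in (1e-4, 1e-6, 1e-9):  # soft-photon threshold in GeV
    print(f"dE = {dE:.0e} GeV   exp[I] = {sudakov_factor(10.0, dE):.3e}")
```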
### Comparison with dimensional regularization
It is particularly interesting to compare our results with those using \(DR\) for instance, as far as \(IR\) divergences are concerned. In this latter approach, the only meaningful contribution to consider in order to get an \(IR\) finite physical observable is the sum of the \(IR\) divergent contributions for both the virtual vertex correction and the contribution from (non-detected) soft-photon emission below a given photon energy \(\Delta E\). This sum is simply given [20, 21], for the differential cross-section at large \(Q^{2}\), by:
\[d\sigma(p\to p^{\prime})\ \frac{2\alpha}{\pi}\ {\rm Log}\frac{Q^{2}}{m^{2}} \left[-{\rm Log}\frac{m}{\lambda}+{\rm Log}\frac{2\Delta E}{\lambda}\right], \tag{46}\]
where \(\lambda\) is a small finite mass of the photon. In \(TLRS\), the only (\(IR\)-finite) contribution to compare with comes from soft-photon emission, as calculated in Eq. (44).
It is rewarding to check that both contributions (44) and (46) are identical for large values of \(\Delta E/m\). This ensures that the use of \(TLRS\) should be compatible, at \(NLO\) at least, with all \(QED\)-related experimental results already analyzed in the framework of \(DR\). In all these calculations, \(\Delta E\) should be fixed from the exact threshold for (non-detected) soft-photon emission according to the characteristics of each experimental apparatus. In contrast to \(DR\), the use of \(TLRS\) enables us to unambiguously define the electromagnetic form factors of the electron independently of any experimental considerations, as expected from general arguments for a well-defined theoretical framework. As a direct consequence, we can calculate the effective charge of the electron for an arbitrary value of the energy scale, as we shall see below.
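As a quick numerical check of this statement — a sketch with illustrative values of \(Q\) and \(\Delta E\) — the photon-mass \(\lambda\) cancels in the bracket of Eq. (46), leaving \({\rm Log}(2\Delta E/m)\), so that the radiative factors of Eqs. (44) and (46) differ only by a \({\rm Log}\,2\) inside a logarithm, which becomes negligible for large \(\Delta E/m\):

```python
import math

alpha = 1.0 / 137.036
m = 0.511e-3                      # electron mass in GeV
Q = 10.0                          # illustrative momentum transfer in GeV
Lq = math.log(Q**2 / m**2)

def tlrs_factor(dE):
    """Radiative factor of Eq. (44): (alpha/pi) Log(Q^2/m^2) Log(dE^2/m^2)."""
    return alpha / math.pi * Lq * math.log(dE**2 / m**2)

def dr_factor(dE):
    """Radiative factor of Eq. (46): (2 alpha/pi) Log(Q^2/m^2) Log(2 dE/m),
    once the photon-mass Logs have cancelled."""
    return 2.0 * alpha / math.pi * Lq * math.log(2.0 * dE / m)

for dE in (0.01, 0.1, 1.0):       # soft-photon threshold in GeV
    print(f"dE = {dE:5.2f} GeV   TLRS: {tlrs_factor(dE):.4f}   DR: {dr_factor(dE):.4f}")
```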
## 4 The renormalization group equations
### Decoupling equation
As already mentioned in the Introduction, we must consider separately two \(RG\) equations. The first one, \(RGE(\eta)\), is associated with the independence of physical observables from the dimensionless regularization scale \(\eta\). It concerns the bare parameters and is mass-independent. The second one, \(RGE(M)\), is associated with the independence of physical observables from the dimensionful renormalization point \(M\). It concerns the effective charge \(\alpha_{M}\) and is mass-dependent. These two \(RG\) equations can be obtained simultaneously from the calculation of the physical coupling constant, or effective charge, in terms of the bare one, and similarly for the bare mass. By definition of \(\alpha_{M}\), we can write
\[\alpha_{M}(M)\equiv\alpha F_{1}^{2}(Q^{2}=M^{2})=\alpha_{0}(\eta)Z(\eta)^{2}+ \alpha\Pi(\eta,M^{2})+2\alpha\Phi_{1}(\eta,M^{2}), \tag{47}\]
with \(Z\) given by Eq. (7). For completeness, we have indicated in the above equation the various \(\eta\)- and \(M\)-dependences. From the results of Sec. 2, we can see that the \(\eta\)-dependence of \(\Pi(\eta,M^{2})\) and \(\Phi_{1}(\eta,M^{2})\) can be easily separated out, with
\[\Pi(\eta,M^{2}) = -\frac{\alpha}{3\pi}\left(\mbox{Log }\eta^{2}+cte\right)+ \overline{\Pi}(M^{2}), \tag{48a}\] \[\Phi_{1}(\eta,M^{2}) = \frac{\alpha}{4\pi}\left(\mbox{Log }\eta^{2}+cte\right)+\overline{\Phi}_{1}^{UV}(M^{2})+\overline{\Phi}_{1}^{IR} (M^{2}), \tag{48b}\]
and
\[\overline{\Pi}(M^{2}) = \frac{2\alpha}{\pi}\int_{0}^{1}dx\ x(1-x)\mbox{Log}\left[1+\frac {M^{2}}{m^{2}}x(1-x)\right], \tag{49a}\] \[\overline{\Phi}_{1}^{UV}(M^{2}) = -\frac{\alpha}{4\pi}\int_{0}^{1}dx\ \mbox{Log}\left[1+\frac{M^{2}}{m^{2}}x(1-x)\right]. \tag{49b}\]
We have defined in these equations \(\overline{\Phi}_{1}^{IR}=\Phi_{1}^{IR}(M^{2})-\Phi_{1}^{IR}(M^{2}=0)\) with \(\Phi_{1}^{IR}\) given in Eq. (28a). For convenience, we have normalized \(\overline{\Phi}_{1}^{IR}\) to \(\overline{\Phi}_{1}^{IR}(M^{2}=0)=0\) by including \(\Phi_{1}^{IR}(M^{2}=0)\) into the constant \(cte\) in Eq. (48b). This simply corresponds to a finite on-mass-shell renormalization condition. We can thus write Eq. (47) as
\[\alpha_{M}(M)-\alpha\overline{\Pi}(M^{2})-2\alpha\overline{\Phi}_{1}^{UV}(M^ {2})-2\alpha\overline{\Phi}_{1}^{IR}(M^{2})=\alpha_{0}(\eta)-\frac{\alpha^{2} }{3\pi}\left(\mbox{Log }\eta^{2}+cte\right)\equiv\alpha. \tag{50}\]
Thanks to the Ward identity, the \(\eta\)-dependence of \(\alpha_{0}(\eta)\) is given only by the vacuum polarization of the photon, as is well known, while the energy scale dependence of
the physical coupling constant includes in addition the contribution from the electromagnetic vertex. As shown in Ref.[14], this decoupling property persists when these radiative corrections are summed up to all orders.
Note that the common value of Eq. (50) - which should be independent of both \(\eta\) and \(M\) - is just the fine structure constant \(\alpha\) since it is by construction the value of \(\alpha_{M}(M=0)\). With the results of Sec. 2, we thus have in both limits \(M\ll m\) and \(M\gg m\),
\[\alpha_{M}(M\ll m) = \alpha+\frac{\alpha^{2}}{4\pi}\frac{19}{15}\frac{M^{2}}{m^{2}}+ \mathcal{O}(\alpha^{3}), \tag{51a}\] \[\alpha_{M}(M\gg m) = \alpha-\frac{\alpha^{2}}{2\pi}\left(\mbox{Log}\frac{M^{2}}{m^{2}} \right)^{2}+\mathcal{O}(\alpha^{3}). \tag{51b}\]
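The small-\(M^{2}\) behavior of Eq. (51a) receives contributions from \(\overline{\Pi}\) and \(\overline{\Phi}_{1}^{UV}\) of Eqs. (49a-49b), on top of the \(IR\) vertex part not reproduced here. As a partial numerical cross-check — a minimal sketch; the small-\(M^{2}\) coefficients \(1/15\) and \(-1/24\) quoted in the comments follow from expanding the logarithms and are not stated explicitly in the text:

```python
import math
from scipy.integrate import quad

alpha = 1.0 / 137.036

def Pi_bar(r):
    """Eq. (49a) by numerical quadrature, with r = M^2/m^2."""
    integrand = lambda x: x * (1 - x) * math.log(1 + r * x * (1 - x))
    return 2 * alpha / math.pi * quad(integrand, 0, 1)[0]

def Phi1_UV_bar(r):
    """Eq. (49b) by numerical quadrature, with r = M^2/m^2."""
    integrand = lambda x: math.log(1 + r * x * (1 - x))
    return -alpha / (4 * math.pi) * quad(integrand, 0, 1)[0]

r = 1e-3   # M^2/m^2 << 1
print(Pi_bar(r),      alpha / (15 * math.pi) * r)    # expect ~ (alpha/15 pi) M^2/m^2
print(Phi1_UV_bar(r), -alpha / (24 * math.pi) * r)   # expect ~ -(alpha/24 pi) M^2/m^2
```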
### \(\beta\) functions
The decoupling equation (50) is instructive from many points of view.
_i)_ The behavior of \(\alpha_{0}\) as a function of \(\eta\) should be compared with the behavior of \(\alpha_{R}(\mu)\) in \(DR\) in the minimal subtraction scheme (\(MS\)) as a function of the unit of mass \(\mu\) of \(DR\) [22]. They both give the same mass-independent \(\beta\) function with, in \(TLRS\),
\[\beta_{\eta}\equiv\eta\frac{\partial\alpha_{0}(\eta)}{\partial\eta}=\frac{2 \alpha_{0}^{2}}{3\pi}+\mathcal{O}(\alpha_{0}^{3}). \tag{52}\]
This behavior should not be identified with any physical pattern. It is just the remnant of the scaling properties associated with the local character of the Lagrangian we start from, independently of the relevance of this Lagrangian to describe physical reality in a given energy domain.
_ii)_ The behavior of the physical coupling constant \(\alpha_{M}\) as a function of \(M\) is given by its \(\beta\) function which is written as
\[\beta_{M}\equiv M\frac{\partial\alpha_{M}(M)}{\partial M}\equiv\alpha_{M}^{2} b_{M}(M). \tag{53}\]
It involves three different contributions easily calculated from Eqs. (48-50). The first one is associated with \(\overline{\Pi}(M^{2})\) and is equal, in the limit of large \(M\), to \(\beta_{\eta}\) as expected. The second one is associated with \(\overline{\Phi}_{1}^{UV}(M^{2})\) and has no equivalent in \(\beta_{\eta}\). The third one is associated with \(\overline{\Phi}_{1}^{IR}(M^{2})\), also with no equivalent in \(\beta_{\eta}\); it is \(IR\) finite. To get some insight into these contributions, let us investigate \(\beta_{M}\) in two different limits:
* In the limit of small energy scale, \(M\ll m\), or equivalently in the limit of heavy electron mass, we have \[\beta_{M}(M\ll m)=\frac{\alpha_{M}^{2}}{2\pi}\frac{19}{15}\frac{M^{2}}{m^{2}} +\mathcal{O}(\alpha_{M}^{3}).\] (54) It therefore goes to zero in the limit of infinitely large electron mass. This ensures the decoupling of very heavy degrees-of-freedom (_d.o.f._) from light ones, as expected from a mass-dependent \(RG\) equation.
* In the limit of large momentum scale, \(M\gg m\), we get \[\beta_{M}(M\gg m)=-\frac{2\alpha_{M}^{2}}{\pi}\mbox{Log}\frac{M^{2}}{m^{2}}+ \mathcal{O}(\alpha_{M}^{3}).\] (55) Remarkably enough, the \(\beta_{M}\) function in this limit is negative and mass-dependent. It is discussed in more details below.
_iii)_ The above decoupling equation between \(\eta\)- and \(M\)-dependences is also important for understanding how the requirement for a perturbative calculation to remain valid should be interpreted. The only relevant (physical) coupling constant is \(\alpha_{M}\), expressed in terms of the physical parameter \(M\). This coupling constant should be small compared to 1 in order to be able to perform a meaningful perturbative calculation. This constraint, however, does not imply any constraint on \(\eta\) since the behavior of \(\alpha_{0}\) as a function of \(\eta\) is decoupled from the behavior of \(\alpha_{M}\) as a function of \(M\). In other words, \(\eta\) can be chosen in principle to be very large, with \(\alpha_{0}(\eta)\) also very large, while maintaining a well-defined perturbative calculation in terms of \(\alpha_{M}\). From a practical point of view, however, \(\eta\) should be chosen in order to avoid large numerical cancellations between \(\alpha_{0}(\eta)\) and terms explicitly dependent on \(\eta\), as shown in Eq. (50), order by order in perturbation theory. This argument also carries over to \(DR\) with the identification \(\mu=\eta m\)[14]. We emphasize again that this particular choice of \(\eta\), or equivalently of \(\mu\), should not lead to any physical interpretation.
### Asymptotic behavior
From integration of the \(\beta_{M}\) function of Eq. (53), we immediately get
\[\alpha_{M}(M)=\frac{\alpha}{1-\alpha\int_{0}^{M}d\nu\frac{b_{M}(\nu)}{\nu}}, \tag{56}\]
where \(b_{M}\) is defined in Eq. (53). This effective charge is indicated in Fig. 2. It shows two immediate and far-reaching consequences.
* The physical coupling constant does not show any Landau pole. This is at variance with the bare coupling constant which, as expected from Eq. (52), exhibits a Landau pole at a critical value of the regularization scale \(\eta\). As already emphasized in Ref. [14], this Landau pole for \(\alpha_{0}(\eta)\) should not have any physical interpretation.
* The physical coupling constant shows an asymptotic freedom type behavior at very large energies. This is a direct consequence of the mass-dependent contribution to \(\beta_{M}\) at large \(M^{2}\) originating from the taming of \(IR\) divergences in \(TLRS\) for the calculation of the electromagnetic vertex function.
Figure 2: The effective charge \(\alpha_{M}\) divided by the fine structure constant - or equivalently the square of the \(F_{1}\) form factor of the electron - as a function of \(M\).
The corresponding \(\beta_{M}\) function is indicated in Fig. 3. As expected, it exhibits both an \(IR\) stable fixed point at \(\alpha_{M}(M=0)=\alpha\), and a \(UV\) stable fixed point at \(\alpha_{M}(M\to\infty)=0\equiv\alpha^{*}\). Note that for the calculation of the physical coupling constant, the limit of high energy scale is identical to the limit of small electron mass. This implies immediately that the physical coupling constant at finite \(M\) should tend to zero when the electron mass tends to zero.
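To make the absence of a Landau pole more tangible, the following sketch — an illustration only, not the exact curve of Fig. 2 — integrates the large-\(M\) form of \(\beta_{M}\), Eq. (55), assuming it holds for all \(M\geq m\) with the boundary condition \(\alpha_{M}(m)\simeq\alpha\); under these assumptions the running resums to \(\alpha_{M}(M)=\alpha/[1+(\alpha/2\pi)\,{\rm Log}^{2}(M^{2}/m^{2})]\). It is contrasted with the bare coupling obtained from Eq. (52), with the (assumed) normalization \(\alpha_{0}(\eta=1)=\alpha\), which does exhibit a Landau pole:

```python
import math

alpha = 1.0 / 137.036
m = 0.511e-3   # electron mass in GeV

def alpha_M(M):
    """Physical coupling from resumming Eq. (55), assumed valid for all M >= m,
    with boundary condition alpha_M(m) ~ alpha."""
    L = math.log(M**2 / m**2)
    return alpha / (1.0 + alpha / (2.0 * math.pi) * L**2)

def alpha_0(eta):
    """Bare coupling from integrating Eq. (52), with alpha_0(eta=1) ~ alpha (assumption);
    it diverges at the Landau pole Log(eta) = 3*pi/(2*alpha)."""
    return alpha / (1.0 - 2.0 * alpha / (3.0 * math.pi) * math.log(eta))

for M in (1.0, 1e3, 1e6, 1e9):       # energy scale in GeV
    print(f"M = {M:8.0e} GeV   alpha_M/alpha = {alpha_M(M)/alpha:.3f}")
print("Landau pole of the bare coupling at Log(eta) =", 3 * math.pi / (2 * alpha))
```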
### The case of two mass scales
To complete our discussion, let us consider the case of two fermionic degrees of freedom, one with a low mass \(m_{<}\), the other one with a high mass \(m_{>}\), with the hierarchy \(m_{<}\ll m_{>}\). We shall concentrate for this discussion on the bare and physical coupling constants.
The running of the bare coupling constant is entirely given by the vacuum polarization of the photon. It is mass-independent, as recalled above. The only change when considering these two _d.o.f._ is thus just to multiply the \(\beta_{\eta}\) function by two, without any threshold effects. The running of the bare coupling constant just counts the number of (charged) _d.o.f._ present in the Lagrangian we start from, independently of their mass. Since any physical observable is independent of the regularization scale \(\eta\), this change of the \(\eta\) dependence of the bare coupling constant has absolutely no influence on the calculation of physical observables.
We should thus concentrate only on the behavior of the physical coupling constants as a function of the physical energy scale \(M\), or in other words on the behavior of the electromagnetic form factors for the light or heavy _d.o.f._ We denote by \(\alpha_{M}^{<}\) and \(\alpha_{M}^{>}\) the physical coupling constants of the light and heavy _d.o.f._ respectively. The only new contribution to consider, as compared to the calculation of the electromagnetic form factor in Sec. 3 with \(m=m_{<}\) or \(m=m_{>}\), is the vacuum polarization of the photon. We can identify three characteristic kinematical conditions:
* \(M\ll m_{<}\ll m_{>}\)
Figure 3: The \(\beta_{M}\) function as a function of \(\alpha_{M}\), showing an ultra-violet stable fixed point at \(\alpha^{*}=0\). The inset shows a zoom on \(\beta_{M}\) for \(M\leq 50\) GeV.
Since the vacuum polarization of the photon behaves in this limit like \(M^{2}/m^{2}\), the contributions from both light and heavy _d.o.f._ are negligible and the corresponding form factors are close to 1. This ensures that both coupling constants \(\alpha_{M}^{<}\) and \(\alpha_{M}^{>}\) are equal to \(\alpha\) for \(M=0\), as they should.
* \(m_{<}\ll m_{>}\ll M\) In this case, the contribution from the vacuum polarization of the photon is given by \[\frac{\alpha_{M}^{<}}{6\pi}{\rm Log}\ \frac{M^{2}}{m_{<}^{2}}+\frac{\alpha_{M}^{>} }{6\pi}{\rm Log}\frac{M^{2}}{m_{>}^{2}}.\] (57) This contribution is however subdominant as compared to the \(IR\) contributions in the large \(M\) kinematical region, as detailed in the preceding subsection.
* \(m_{<}\ll M\ll m_{>}\) This case is very similar to the above one, since the contribution from the heavy _d.o.f._ behaves in this condition like \(M^{2}/m_{>}^{2}\) which is subdominant.
According to the above discussion, the physical coupling constant \(\alpha_{M}^{<}\) is thus almost identical to the physical coupling constant with one mass scale only, as discussed in the preceding sections. This is in complete agreement with the decoupling theorem [23]. In contrast, the physical coupling constant \(\alpha_{M}^{>}\) is almost equal to \(\alpha\) except in the very far \(UV\) domain \(M\gg m_{>}\) where it tends to zero like \(\left[{\rm Log}\frac{M^{2}}{m_{>}^{2}}\right]^{-1}\).
### The limit of massless electrons
The discussion of the limit of massless electrons, and the appearance of the associated \(IR\) divergences, is usually done in terms of exceptional or non-exceptional momenta [24]. Momenta are said to be exceptional if any partial sum of external momenta is zero. This classification, however, does not make any reference to whether the amplitude under consideration is a physical one or not. It should thus be clarified.
Let us first recall the three different objects we have to manipulate in any calculation of a cross-section. An _elementary amplitude_ is a single diagram which contributes to this cross-section, as investigated for instance in Sec. 2. Its calculation does not make any a-priori assumption about the external legs. When these legs correspond to physical, real, particles, they are on their mass-shell: the corresponding amplitude is thus called a _physical amplitude_. Finally, a _physical observable_ corresponds to the sum of all physical amplitudes contributing to a physical process at a given order of perturbation theory.
From the above classification, it is clear that the relevant amplitudes to worry about when calculating a physical process are therefore physical amplitudes and not elementary ones. This will be our guiding principle for discussing the limit of massless electrons. What happens, however, for elementary amplitudes with off-shell external legs? This case corresponds to the calculation of diagrams beyond \(NLO\) in perturbation theory. The off-shell self-energy \(\Sigma(p)\), for instance, will be attached in this case to an internal line of a more complex physical amplitude. For this more complex amplitude, the external legs of \(\Sigma(p)\) contribute to internal propagators with appropriate test functions, according to the use of \(TLRS\). These additional test functions prevent any new \(UV\) as well as \(IR\) singularities, in such a way that the final, on-shell physical observable is independent of the regularization scale \(\eta\) and free of any new \(IR\) singularities. The (apparent) singularities appearing for exceptional momenta are thus taken care of in \(TLRS\) thanks to the presence of the test functions in all internal propagators.
According to our discussion in Sec. 4, the massless-electron limit of the different \(\beta\) functions is immediate to work out. On the one hand, the case of \(\beta_{\eta}\) is trivial but also particularly instructive. Since it is mass-independent, its value for a massless electron is given by Eq. (52). It is therefore non-zero, with the bare coupling constant given by (50). This would not be the case, however, had the regularization scale \(\eta\) been dimensionful, as is the case in the standard formulation of \(DR\) for instance. In this scheme indeed, the regularization scale is identified with the unit of mass \(\mu\) of \(DR\) [22]. In the absence of any other mass scale at the level of the \(QED\) Lagrangian, the dimensionless renormalized coupling constant in the \(MS\) scheme cannot depend on a single dimensionful variable only. It should therefore be a constant independent of \(\mu\). This would imply a zero \(\beta\) function for \(QED\) at \(NLO\), in contradiction with the mass-independence of the (non-zero) \(\beta\) function in \(DR+MS\).
On the other hand, the case of \(\beta_{M}\) is not trivial. Since \(\alpha_{M}\) depends on the dimensionful variable \(M\) through the ratio \(M^{2}/m^{2}\), it should be independent of \(M\) in the limit of massless electrons by obvious dimensional arguments. Since this limit is also equivalent to the large \(M\) limit, this implies that, by construction, one should have a \(UV\) stable fixed point at a given value \(\alpha^{*}\) of the physical coupling constant. This is precisely what we get from the analysis of the \(\beta_{M}\) function in Sec. 4.3, with \(\alpha^{*}=0\). In the limit of massless electrons, the physical coupling constant at finite \(M\) should therefore also tend to zero. Note that this is only true in the absence of any other mass scale in the physical world, _i.e._ in the absence of spontaneous symmetry breaking. This is not true for instance for quantum chromodynamics (\(QCD\)).
## 5 Conclusions
We have reanalyzed in this study first order radiative corrections to \(QED\) in the light of the recently proposed regularization procedure called \(TLRS\). While these corrections are by now standard textbook exercises, the use of \(TLRS\) enables a completely new insight into our understanding of quantum corrections: it allows a direct calculation of physical observables in physical conditions starting from the bare Lagrangian itself, without encountering any \(UV\) or \(IR\) divergences. The only renormalization we should worry about is the finite, physical, renormalization needed to calculate physical parameters in terms of bare ones. These features stem from the unique properties of \(TLRS\): as an _a-priori_ regularization procedure it enables us to give a mathematically well-defined meaning to the local Lagrangian we start from, and as an _intrinsic_ regularization procedure, all intermediate calculations are done in physical conditions, _i.e._ in four space-time dimensions with massless photons, as required by gauge invariance.
The analysis of any physical observable should be done in terms of two different sets of parameters: the bare coupling constants and bare masses defined at the level of the bare Lagrangian, and the physical ones - physical coupling constants and physical masses - which are measured experimentally and which are used to fix the value of the bare parameters. These two sets of parameters do depend on two different running variables. On the one hand, the bare coupling constant depends on the regularization scale \(\eta\). This scale is inherent to the local character of the Lagrangian we start from, which is constructed from the product of fields or derivative of fields at the same space-time point. It is therefore associated to the scaling invariance of the \(UV\) limit since for any internal momentum \(k\to\infty\), we also have \(\eta\)\(k\to\infty\), for any finite scaling variable \(\eta\). This regularization scale is therefore dimensionless. This is at variance with the usual dimensionful unit of mass of \(DR\) for instance. Note that using \(DR\) we can also identify a corresponding dimensionless variable, as explained in Refs [14, 15]. On the other hand, the physical coupling constant depends on the kinematical conditions
chosen to measure it. This is the so-called renormalization point \(M\). By construction, \(M\) is dimensionful.
The relationship between these two coupling constants, \(\alpha_{0}\) and \(\alpha_{M}\), is governed by a decoupling equation, as given by Eq. (50). This equation can also be used to understand how the decoupling between heavy and light _d.o.f._ is at work. From its mass-dependence, the physical coupling constant exhibits explicitly the decoupling between these _d.o.f._, as is well known. This behavior, however, is in complete agreement with the mass-independence of the \(\eta\)-dependence of \(\alpha_{0}\), as shown by this decoupling equation. This is indeed expected from general physical arguments since \(\eta\) is associated with the scaling properties of elementary amplitudes in the \(UV\) regime. These are therefore the same for any _d.o.f._ of finite mass, light or heavy. They are just associated with the local character of the Lagrangian we start from. As a direct consequence, the running of the bare coupling constant as a function of \(\eta\) should thus include _all_ (charged) physical _d.o.f._ present in the Lagrangian we start from. This should also be the case for the renormalized coupling constant in \(DR\): the running of the renormalized coupling constant in \(DR+MS\) should include _all_ (charged) physical _d.o.f._ present in the Lagrangian of the Standard Model, with no threshold effects depending on the energy scale under consideration.
Physical observables, when calculated in \(TLRS\), are free from any \(IR\) divergences. This is the case for the electromagnetic form factors of the electron, which can be calculated directly with massless photons. The \(F_{1}\) form factor is then independent of the characteristics of the detector and of its ability to discriminate a single electron from an electron-photon state. This unique property of \(TLRS\) opens the way for a direct, unambiguous measurement of this form factor and, at the same time, could provide a direct non-trivial test of the validity of \(TLRS\). The calculation of this form factor may also have important consequences for precision experiments involving \(QED\), like for instance the calculation of the charge radius of the proton from electron scattering experiments [25]. It also immediately implies the presence of a \(UV\) stable fixed point for the physical coupling constant at \(\alpha^{*}=0\). Note that the \(UV\) behavior of the physical coupling constant is entirely dominated by the behavior of the vertex function in the \(IR\) domain, and is thus \(UV\) complete.
This finding has remarkable consequences. It implies both the absence of a Landau pole for the physical coupling constant and an asymptotic-freedom-type behavior. This \(UV\) fixed point at \(\alpha^{*}=0\) is also required by dimensional analysis since the limit of very high energy scale is identical to the limit of zero electron mass. In this limit, the physical coupling constant - which is dimensionless - should be independent of any energy scale, hence its \(\beta\) function should be zero. It therefore follows from Eq. (55) that \(\alpha_{M}\) should be 0 in this limit, in first order perturbation theory. This may explain why the fine structure constant is small, since the electron mass is also small. The physical coupling constant \(\alpha_{M}\) at very large energy scale \(M\) thus shows an asymptotic-freedom-type behavior, similar to the effective coupling constant of \(QCD\) extracted for instance from the Bjorken sum rule [26]. This behavior should completely change our understanding of a possible unification of the physical coupling constants at very high energy [27].
## Acknowledgement
Preprint of an article published in Int. J. Mod. Phys. A, 2250204 (2022), doi:10.1142/S0217751X22502049 (c) World Scientific Publishing Company.
## Appendix A Main properties of \(TLRS\)
The general mathematical properties of \(TLRS\) have been detailed in Refs. [2] and [15]. Several applications have already been considered: application to light-front dynamics [28], interpretation of the fine-tuning of the Higgs mass in the Standard Model [29], the recovery of the axial anomaly [30], the fate of the trace anomaly [31] and the study of conformal field theories in two dimensions [32]. For completeness, we shall briefly recall in this Appendix the main properties of \(TLRS\).
As already mentioned in the Introduction, this procedure originates from the well-known observation that the divergences of bare amplitudes can be traced back to the violation of causality due to ill-defined products of distributions at the same point [3, 4]. It therefore requires the whole apparatus of the theory of distributions [33] to correctly define any local Lagrangian. We consider here for simplicity the case of a scalar field.
As detailed in Ref. [2], the physical field \(\varphi\) is constructed in \(TLRS\) as a functional of the original quantum field \(\phi(x)\), considered as a distribution, according to
\[\varphi[\rho](x)\equiv\int d^{4}y\phi(y)\rho(x-y), \tag{58}\]
where the reflection-symmetric test function \(\rho\) belongs to the Schwartz space \(\mathscr{S}\) of rapidly decreasing functions [33]. The physical interest of the test function \(\rho\) is that it smears out the original distribution in a space-time domain of typical extension \(a\). The test function can thus be characterized by \(\rho_{a}(x)\) and the physical field by \(\varphi_{a}(x)\equiv\varphi[\rho_{a}](x)\).
For practical calculations, it is convenient to construct the physical fields in momentum space. If we denote by \(f_{\sigma}\) the Fourier transform of the test function \(\rho_{a}\), we can write \(\varphi_{a}\) in terms of creation and annihilation operators, leading to [2]
\[\varphi_{a}(x)\!=\!\!\int\!\frac{d^{3}{\bf p}}{(2\pi)^{3}}\frac{f_{\sigma}( \varepsilon_{p}^{2},{\bf p}^{2})}{2\varepsilon_{p}}\left[a_{\bf p}^{\dagger} e^{ip.x}+a_{\bf p}e^{-ip.x}\right], \tag{59}\]
with \(\varepsilon_{p}^{2}={\bf p}^{2}+m^{2}\). Each propagator, being the contraction of two fields, is proportional to \(f_{\sigma}^{2}\). This test function in momentum space is a dimensionless quantity. It should therefore be expressed in terms of dimensionless arguments. To do that, one introduces an arbitrary scale \(M_{0}\) to "measure" all momenta. In practical calculations, \(M_{0}\) can be any of the non-zero physical masses of the theory under consideration 2. It is taken equal to the electron mass \(m\) in this study. Note that a change in the value of \(M_{0}\) is just equivalent to a redefinition of \(\eta\), without any consequence for physical observables, which are in any case independent of \(\eta\) and \(M_{0}\).
Footnote 2: For purely massless theories, \(M_{0}\) corresponds for instance to the scale fixed to measure any non-zero momentum.
As shown in Ref. [2], it is appropriate to choose \(f_{\sigma}\) as a partition of unity. A simple example of such a function, with \(f_{\sigma}\) constructed from the sum of two elementary functions only, is discussed in Ref. [28]. It is equal to 1 almost everywhere and is 0 outside a finite domain of \(\mathbb{R}^{+4}\), along with all its derivatives (super-regular function). The parameter \(\sigma\), chosen for convenience between 0 and 1, controls the lower and upper limits of the support of \(f_{\sigma}\). Note that the product of two partitions of unity is also a partition of unity. We shall therefore identify \(f_{\sigma}^{2}\) with \(f_{\sigma}\) when needed. As we shall see in B, we do not need to know the precise form of the test function as a partition of unity; we rely only on its asymptotic properties. Note that the construction of the test function as a partition of unity is essential in order to relate its \(IR\) and \(UV\) properties.
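A purely illustrative toy construction — not the specific partition of unity of Ref. [28] — may help visualize the required behavior: a \(C^{\infty}\) function equal to 1 on an inner interval and vanishing, together with all its derivatives, outside a slightly larger one:

```python
import numpy as np

def g(t):
    """exp(-1/t) for t > 0, extended by 0 for t <= 0 (numerical guard on 1/t)."""
    return np.where(t > 0, np.exp(-1.0 / np.maximum(t, 1e-12)), 0.0)

def smoothstep(t):
    """C-infinity transition from 0 (t <= 0) to 1 (t >= 1)."""
    return g(t) / (g(t) + g(1.0 - t))

def f_test(X, lo=0.1, hi=10.0, ramp=0.05):
    """Toy super-regular test function: 1 on [lo*(1+ramp), hi*(1-ramp)],
    0 outside [lo, hi], with smooth edges. Illustrative only."""
    rise = smoothstep((X - lo) / (lo * ramp))
    fall = smoothstep((hi - X) / (hi * ramp))
    return rise * fall

X = np.array([0.05, 0.2, 1.0, 5.0, 9.0, 11.0])
print(f_test(X))   # -> approximately [0, 1, 1, 1, 1, 0]
```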
Requiring locality for the bare Lagrangian we start from implies considering the subsequent limit \(a\to 0\), dubbed _the continuum limit._ In this process, it is essential to preserve the scaling properties
\[\rho_{a}(x)\stackrel{{ a\to 0}}{{\rightarrow}}\rho_{\eta}(x)\ ;\ \varphi_{a}(x) \stackrel{{ a\to 0}}{{\rightarrow}}\varphi_{\eta}(x), \tag{60}\]
where \(\eta\) is an arbitrary, dimensionless, scaling variable since in the limit \(a\to 0\), we also have \(a/\eta\to 0\), for any finite \(\eta\). This arbitrary scale just governs the "speed" at which the continuum limit is reached. Any physical observable should of course be independent of this dimensionless scaling variable, also called regularization scale in order to stick to the common denomination, although this denomination may be misleading when using \(TLRS\) since this regularization scale is dimensionless in this case. From the choice of parametrization of \(f_{\sigma}\), the continuum limit corresponds to \(\sigma\to 1^{-}\).
As we shall see in the next Appendix, any amplitude associated with a singular operator \(T(X)\) is written schematically as
\[\mathcal{A}_{\sigma}=\int_{0}^{\infty}dX\ T(X)\ f_{\sigma}(X). \tag{61}\]
We consider here for simplicity a one-loop amplitude, with a single dimensionless variable \(X\). It is easy to check that a naive implementation of the continuum limit for the test function, with a constant boundary condition like for instance \(X\leq H_{\sigma}=1/(1-\sigma)\), will result in a divergent amplitude \(\lim_{\sigma\to 1^{-}}\mathcal{A}_{\sigma}\), as expected from the calculation of the amplitude in terms of a cut-off in momentum space. However, from the scaling properties (60), we should get
\[\mathcal{A}_{\eta}\equiv\lim_{\sigma\to 1^{-}}\mathcal{A}_{\sigma}. \tag{62}\]
To achieve this, we should rather consider a "running" boundary condition on \(f_{\sigma}\) defined by
\[f_{\sigma}(X\geq H_{\sigma}(X))=0, \tag{63}\]
with
\[H_{\sigma}(X)\equiv\eta^{2}Xg_{\sigma}(X)+(\sigma-1), \tag{64}\]
where \(\eta\) is the dimensionless regularization scale in \(TLRS\)3 with \(\eta^{2}>1\). The function \(g_{\sigma}(X)\) is constructed in such a way that the boundary on \(X\), as defined by Eq. (63), is finite and tends to \(1^{-}\) when \(\sigma\to 1^{-}\). A typical (but not unique) simple form for \(g_{\sigma}(X)\) is given by
Footnote 3: The square of \(\eta\) in (64) is just for convenience since \(X\) is usually identified with the square of a momentum, as shown in B.
\[g_{\sigma}(X)=X^{\sigma-1}. \tag{65}\]
The conditions (63) and (64) amount to an infinitesimal drop-off of the test function in the \(UV\) region, with the drop-off rate governed by the regularization scale \(\eta\). They also preserve the super-regular properties of the test function in the continuum limit, with the test function and all its derivatives being zero at infinity.
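To see how this running boundary behaves in practice, one can solve numerically for the largest \(X\) satisfying \(X=\eta^{2}Xg_{\sigma}(X)+(\sigma-1)\); neglecting the \((\sigma-1)\) term, Eqs. (63-65) give \(X_{max}\simeq\eta^{2/(1-\sigma)}\), which grows without bound in the continuum limit \(\sigma\to 1^{-}\) at a rate governed by \(\eta\). A minimal sketch (the derivation of this estimate is ours and is not quoted in the text):

```python
from scipy.optimize import brentq

def x_max(eta, sigma):
    """Largest solution of X = eta^2 * X**sigma + (sigma - 1), i.e. the UV boundary
    of the support of f_sigma as defined by Eqs. (63)-(65)."""
    h = lambda X: X - eta**2 * X**sigma - (sigma - 1.0)
    guess = eta ** (2.0 / (1.0 - sigma))          # asymptotic estimate
    return brentq(h, 0.5 * guess, 2.0 * guess)

eta = 2.0
for sigma in (0.5, 0.9, 0.99):
    print(sigma, x_max(eta, sigma), eta ** (2.0 / (1.0 - sigma)))
```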
Remarkably enough, this boundary condition defines at the same time the \(UV\) and \(IR\) boundaries once \(f\) is constructed from a partition of unity [28]. The explicit calculation of standard one-loop integrals using \(TLRS\) is thus straightforward, as recalled in B. It relies on the identification of the test function \(f_{\sigma}\) with its Taylor remainder in the \(UV\) as well as \(IR\) domains - thanks to its asymptotic properties - and the subsequent use of the Lagrange formula, hence the name Taylor-Lagrange
regularization scheme [2]. The calculation of elementary amplitudes in the \(UV\) and \(IR\) domains is thus immediate. In the \(UV\) domain, the continuum limit (62) is taken after integration by parts. The extension of \(IR\) singular operators [2, 15] involves the Pseudo-function [33, 32], denoted by \(Pf\), of \(1/X^{n}\), with \(n\geq 1\). This gives, for \(n>1\),
\[\int_{0}^{X_{0}}dX{\rm Pf}\left[\frac{1}{X^{n}}\right]=\lim_{\epsilon\to 0} \left[\int_{\epsilon}^{X_{0}}\frac{dX}{X^{n}}+\frac{1}{1-n}\frac{1}{\epsilon^{n-1}}\right], \tag{66}\]
and for \(n=1\)
\[\int_{0}^{X_{0}}dX{\rm Pf}\left[\frac{1}{X}\right]=\lim_{\epsilon\to 0} \left[\int_{\lambda\epsilon}^{X_{0}}\frac{dX}{X}+{\rm Log}(\epsilon)\right], \tag{67}\]
where \(\lambda\) is an arbitrary scale variable [33]. The value of \(\lambda\) is fixed from the choice of gauge [15]. In the Feynman gauge we have \(\lambda=1\).
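A minimal numerical sketch of this finite-part prescription (with \(\lambda=1\)): for \(1/X\) the combination of Eq. (67) is independent of the cut-off \(\epsilon\) and reproduces \({\rm Log}\,X_{0}\), while for \(1/X^{2}\) the corresponding combination gives \(-1/X_{0}\), consistent with the vanishing of \(\int_{0}^{\infty}dY\,{\rm Pf}[1/Y^{2}]\) used in B.2.1:

```python
import math
from scipy.integrate import quad

def pf_inv_x(X0, eps):
    """Eq. (67) with lambda = 1: int_eps^X0 dX/X + Log(eps) -> Log(X0)."""
    return quad(lambda X: 1.0 / X, eps, X0)[0] + math.log(eps)

def pf_inv_x2(X0, eps):
    """Finite part of 1/X^2 (Eq. (66) with n = 2): int_eps^X0 dX/X^2 - 1/eps -> -1/X0."""
    return quad(lambda X: 1.0 / X**2, eps, X0)[0] - 1.0 / eps

for eps in (1e-2, 1e-3, 1e-4):
    print(eps, pf_inv_x(2.0, eps), pf_inv_x2(2.0, eps))
# first column converges to Log(2) ~ 0.693, second to -1/2
```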
## Appendix B Relevant integrals
For completeness, and as an illustration of the use of \(TLRS\) in practical calculations, we detail in this Appendix all the relevant integrals needed in our study.
### Self-energy of the electron
#### b.1.1 Calculation of \(\overline{I}_{0}\)
We recall here the various steps of the calculation of this simple integral. More details can be found in the Appendix of Ref. [30]. We calculate the relevant integrals in Euclidean space, using the Feynman parametrization. The integral \(\overline{I}_{0}\) is written
\[\overline{I}_{0}(p)=\frac{i}{(2\pi)^{4}}\lim_{\sigma\to 1^{-}}\int_{0}^{1}dx \int d^{4}{\bf K}\frac{1}{[{\bf K}^{2}+m^{2}x\Delta_{p}]^{2}}F_{\sigma},\]
where \(F_{\sigma}\) is a simplified notation for the product of the two test functions, with
\[F_{\sigma}=f_{\sigma}\left[\frac{({\bf K}+xp)^{2}}{M_{0}^{2}}\right]f_{\sigma} \left[\frac{({\bf K}-(1-x)p)^{2}}{M_{0}^{2}}\right].\]
For a non-zero electron mass, it is convenient to choose \(M_{0}\equiv m\). In the absence of test functions, this integral is divergent in the \(UV\) regime only. We can thus safely concentrate on the behavior of the test functions for large \({\bf K^{2}}\). The use of test functions when \(IR\) singular operators are involved is detailed in B.4. In the \(UV\) domain, the arguments of the two test functions are both equivalent to \({\bf K}^{2}/m^{2}\equiv\Delta_{p}X\) with \(\Delta_{p}\neq 0\). We extract here from the running variable \(X\) the scale \(\Delta_{p}\) which depends on the kinematical conditions. This ensures that the integrand \(X/(X+x)^{2}\) is independent of any momentum-dependent scale so that the scaling variable \(\eta\) is also (implicitly) independent of any momentum-dependent scale. We thus have, with the identification \(f_{\sigma}^{2}\sim f_{\sigma}\) valid for a partition of unity,
\[\overline{I}_{0}(p)=\frac{i}{(4\pi)^{2}}\lim_{\sigma\to 1^{-}}\int_{0}^{1}dx \int_{0}^{\infty}dX\frac{X}{(X+x)^{2}}f_{\sigma}\left[\Delta_{p}X\right].\]
From the properties of the test function [2], we can write a Lagrange formula for \(f_{\sigma}\), at fixed support [30], with
\[f_{\sigma}\left[\Delta_{p}X\right]=-X\int_{\Delta_{p}}^{\infty}\frac{dt}{t} \frac{\partial}{\partial X}f_{\sigma}\left[Xt\right].\]
We can then write \(\overline{I}_{0}\) in the following form, after integration by part,
\[\overline{I}_{0}(p)=\frac{i}{(4\pi)^{2}}\lim_{\sigma\to 1^{-}}\int_{0}^{1}dx\int_{0}^{ \infty}dX\frac{\partial}{\partial X}\left[\frac{X^{2}}{(X+x)^{2}}\right]\int_{ \Delta_{p}}^{\infty}\frac{dt}{t}f_{\sigma}[Xt].\]
From the boundary condition (63), the argument of \(f_{\sigma}\) under the integral is bounded from above by the support of the test function given by \(H_{\sigma}(X)\), so that
\[Xt\leq\eta^{2}Xg_{\sigma}(X)\ \ \mbox{and}\ \ \Delta_{p}\leq t\leq\eta^{2}g_{ \sigma}(X).\]
Since the integral over \(X\) is now finite thanks to the derivative, we can safely take the continuum limit \(\sigma\to 1^{-}\) which gives \(g_{\sigma}\to 1\) and \(f_{\sigma}\to 1\). We finally get
\[\overline{I}_{0}(p)=\frac{i}{(4\pi)^{2}}\int_{0}^{1}dx\ \mbox{Log}\frac{\eta^{2}}{ \Delta_{p}}.\]
A different choice for \(M_{0}\) would just induce a multiplicative factor in front of \(\eta^{2}\). This induces a finite additive constant on top of any contribution in Log \(\eta^{2}\), as indicated in the final results for the elementary amplitudes calculated in Sec. 2, with no consequences for any physical observables. This is reminiscent of the flexibility in choosing the unit of mass \(\mu\) of \(DR\) [22], like for instance using either the \(MS\) or the \(\overline{MS}\) schemes.
#### b.1.2 Calculation of \(\overline{I}_{1}^{\mu}\)
The integral \(\overline{I}_{1}^{\mu}\) is written as
\[\overline{I}_{1}^{\mu}(p)=I_{1}^{\mu}(p)+{\bf p}^{\mu}\frac{i}{(2\pi)^{4}}\lim _{\sigma\to 1}\int_{0}^{1}dx\int d^{4}{\bf K}\frac{x}{[{\bf K}^{2}+m^{2}x \Delta_{p}]^{2}}F_{\sigma},\]
with
\[I_{1}^{\mu}(p)=\frac{i}{(2\pi)^{4}}\lim_{\sigma\to 1^{-}}\int_{0}^{1}dx\int d ^{4}{\bf K}\frac{{\bf K}^{\mu}}{[{\bf K}^{2}+m^{2}x\Delta_{p}]^{2}}F_{\sigma},\]
The term in \({\bf p}^{\mu}\) is calculated similarly to \(\overline{I}_{0}\). The calculation of \(I_{1}^{\mu}\) should be done with care since the test functions do depend on all the relevant momenta of the system. Following the calculations of Ref. [30], we start from the identity
\[\frac{\partial}{\partial{\bf K}_{\mu}}\frac{1}{{\bf K}^{2}+m^{2}x\Delta_{p}}= -2\frac{{\bf K}^{\mu}}{({\bf K}^{2}+m^{2}x\Delta_{p})^{2}}.\]
We can thus write immediately
\[I_{1}^{\mu}=-\frac{i}{2(2\pi)^{4}}\lim_{\sigma\to 1^{-}}\int_{0}^{1}dx\int \frac{d^{4}{\bf K}}{(2\pi)^{4}}\frac{\partial}{\partial{\bf K}_{\mu}}\left[ \frac{1}{{\bf K}^{2}+m^{2}x\Delta_{p}}\right]F_{\sigma}.\]
By integration by parts, the surface term is a 3-dimensional integral orthogonal to the \(\mu\)-direction. It should be taken at \({\bf K}_{\mu}\to\pm\infty\). Thanks to the presence of the test functions, this term is identically zero. The remaining integral involves the derivative of \(F_{\sigma}\), with
\[F_{\sigma}=f_{\sigma}\left[\frac{({\bf K}+x{\bf p})^{2}}{m^{2}}\right]f_{ \sigma}\left[\frac{({\bf K}-(1-x){\bf p})^{2}}{m^{2}}\right].\]
One thus gets
\[I_{1}^{\mu}= \frac{i}{m^{2}(2\pi)^{4}}\lim_{\sigma\to 1^{-}}\int_{0}^{1}dx\int \frac{d^{4}{\bf K}}{(2\pi)^{4}}\frac{1}{{\bf K}^{2}+m^{2}x\Delta_{p}}\] \[\times \left[({\bf K}^{\mu}+x{\bf p}^{\mu})f_{\sigma}^{\prime}\left[ \frac{({\bf K}+x{\bf p})^{2}}{m^{2}}\right]f_{\sigma}\left[\frac{({\bf K}-(1- x){\bf p})^{2}}{m^{2}}\right]\right.\] \[+ \left.({\bf K}^{\mu}-(1-x){\bf p}^{\mu})f_{\sigma}\left[\frac{({ \bf K}+x{\bf p})^{2}}{m^{2}}\right]f_{\sigma}^{\prime}\left[\frac{({\bf K}-(1- x){\bf p})^{2}}{m^{2}}\right]\right].\]
In this equation \(f^{\prime}_{\sigma}\) denotes \(\frac{d}{dX}f_{\sigma}(X)\). The integral \(I^{\mu}_{1}\) is a-priori non-zero only in the \(UV\) region where \(f^{\prime}_{\sigma}\neq 0\). In this region, all test functions are equivalent to \(f_{\sigma}\left[\frac{\mathbf{K}^{2}}{m^{2}}\right]\). By symmetry arguments, the integral over \(\mathbf{K}_{\mu}\) is strictly zero and it remains to calculate
\[I^{\mu}_{1}=\frac{i}{(2\pi)^{4}}\frac{\mathbf{p}^{\mu}}{m^{2}}\lim_{\sigma\to 1^{-}}\int_{0}^{1}dx\int\frac{d^{4}\mathbf{K}}{(2\pi)^{4}}\frac{2x-1}{\mathbf{K}^{2}+m^{2}x\Delta_{p}}f_{\sigma}\left[\frac{\mathbf{K}^{2}}{m^{2}}\right]f^{\prime}_{\sigma}\left[\frac{\mathbf{K}^{2}}{m^{2}}\right].\]
With \(\Delta_{p}X=\mathbf{K}^{2}/m^{2}\) we have
\[I^{\mu}_{1}=\frac{i}{2(4\pi)^{2}}\mathbf{p}^{\mu}\lim_{\sigma\to 1^{-}}\int_{0}^{1}dx\,(2x-1)\,\Delta_{p}\int_{0}^{X_{max}}dX\frac{X}{X+x}\left[f^{2}_{\sigma}(\Delta_{p}X)\right]^{\prime}.\]
By integration by parts, we have
\[I^{\mu}_{1}=\frac{i}{2(4\pi)^{2}}\mathbf{p}^{\mu}\lim_{\sigma\to 1^{-}}\int_{0}^{1}dx\,(2x-1)\,\Delta_{p}\\ \left[\left.\frac{X}{X+x}f^{2}_{\sigma}(X)\right|_{0}^{\infty}-\int_{0}^{\infty}dX\left[\frac{X}{X+x}\right]^{\prime}f^{2}_{\sigma}(\Delta_{p}X)\right].\]
Both contributions are finite in the absence of the test functions, so that we can safely take the continuum limit \(\sigma\to 1^{-}\), with \(f_{\sigma}\to 1\), and we finally get
\[I^{\mu}_{1}=0.\]
We recover here rotational invariance. Note that this property is only true in the continuum limit. We thus have finally for \(\overline{I}^{\mu}_{1}\)
\[\overline{I}^{\mu}_{1}(p)=\frac{i}{(4\pi)^{2}}\ p^{\mu}\int_{0}^{1}dx\ x\ \text{Log}\frac{\eta^{2}}{\Delta_{p}}.\]
### Vacuum polarization of the photon
The calculation of \(\overline{J}_{0}\) and \(\overline{J}^{\mu}_{1}\) is very similar to the calculation of \(\overline{I}_{0}\) and \(\overline{I}^{\mu}_{1}\) detailed above, with \(\Delta_{p}\) replaced by \(\Delta_{q}\). We thus get immediately
\[\overline{J}_{0}(p) = \frac{i}{(4\pi)^{2}}\int_{0}^{1}dx\ \text{Log}\frac{\eta^{2}}{ \Delta_{q}},\] \[\overline{J}^{\mu}_{1}(p) = \frac{i}{(4\pi)^{2}}\ p^{\mu}\int_{0}^{1}dx\ x\ \text{Log}\frac{\eta^{2}}{ \Delta_{q}}.\]
#### b.2.1 Calculation of \(\overline{J}_{2}\)
Following the calculation of \(\overline{I}_{0}\), the integral \(\overline{J}_{2}\) is written as
\[\overline{J}_{2}(p)=J_{2}(p)-p^{2}\frac{i}{(2\pi)^{4}}\lim_{\sigma\to 1^{-}} \int_{0}^{1}dx\int d^{4}\mathbf{K}\frac{x^{2}}{[\mathbf{K}^{2}+m^{2}\Delta_{p }]^{2}}F_{\sigma},\]
with
\[J_{2}(p)=-\frac{i}{(2\pi)^{4}}\lim_{\sigma\to 1^{-}}\int_{0}^{1}dx\int d^{4} \mathbf{K}\frac{\mathbf{K}^{2}}{[\mathbf{K}^{2}+m^{2}\Delta_{p}]^{2}}F_{ \sigma}.\]
The term in \(p^{2}\) is calculated similarly to \(\overline{I}_{0}\), and we have for \(J_{2}\)
\[J_{2}(p)=-\frac{i}{(4\pi)^{2}}m^{2}\Delta_{q}\lim_{\sigma\to 1^{-}}\int_{0}^{1} dx\int_{0}^{\infty}dX\frac{X^{2}}{(X+1)^{2}}f_{\sigma}\left[\Delta_{p}X \right].\]
The integrand over \(X\) can be written as
\[\frac{X^{2}}{(X+1)^{2}}=1-\frac{1}{X+1}-\frac{X}{(X+1)^{2}}.\]
It is easy to check that the contribution of the first term to \(J_{2}\) is strictly zero, with
\[\lim_{\sigma\to 1^{-}}\int_{0}^{\infty}dXf_{\sigma}\left[\Delta_{p}X\right]=\lim_{ \sigma\to 1^{-}}\int_{0}^{\infty}\frac{dY}{Y^{2}}f_{\sigma}\left[\frac{ \Delta_{p}}{Y}\right]=\int_{0}^{\infty}dY\mathrm{Pf}\left[\frac{1}{Y^{2}} \right]\equiv 0.\]
It remains
\[J_{2}(p)=\frac{i}{(4\pi)^{2}}m^{2}\Delta_{q}\lim_{\sigma\to 1^{-}}\int_{0}^{1}dx \int_{0}^{\infty}dX\left[\frac{1}{X+1}+\frac{X}{(X+1)^{2}}\right]f_{\sigma} \left[\Delta_{p}X\right].\]
Using the Lagrange formula for \(f_{\sigma}\), and after integration by parts, we get, in the continuum limit
\[J_{2}(p)=\frac{i}{(4\pi)^{2}}m^{2}\Delta_{q}\int_{0}^{1}dx\int_{0}^{\infty}dX \frac{\partial}{\partial X}\left[\frac{X}{X+1}+\frac{X^{2}}{(X+1)^{2}}\right] \int_{\Delta_{q}}^{\eta^{2}}\frac{dt}{t},\]
so that
\[J_{2}(p)=\frac{2i}{(4\pi)^{2}}\ m^{2}\Delta_{q}\int_{0}^{1}dx\ \mathrm{ Log}\frac{\eta^{2}}{\Delta_{q}}.\]
We thus get
\[\overline{J}_{2}(p)=\frac{i}{(4\pi)^{2}}\ m^{2}\Delta_{q}\int_{0}^{1}dx\left( 2+\frac{x^{2}}{\Delta_{q}}\frac{p^{2}}{m^{2}}\right)\mathrm{ Log}\frac{\eta^{2}}{\Delta_{q}}.\]
#### b.2.2 Calculation of \(\overline{J}_{2}^{\mu\nu}\)
Following the calculation of \(\overline{I}_{0}\) and \(\overline{I}_{1}^{\mu}\), the integral \(\overline{J}_{2}^{\mu\nu}\) is written as
\[\overline{J}_{2}^{\mu\nu}=J_{2}^{\mu\nu}+p^{\mu}p^{\nu}\frac{i}{(2\pi)^{4}} \lim_{\sigma\to 1^{-}}\int_{0}^{1}dx\int d^{4}\mathbf{K}\frac{x^{2}}{[ \mathbf{K}^{2}+m^{2}\Delta_{p}]^{2}}F_{\sigma},\]
with
\[J_{2}^{\mu\nu}=\frac{i}{(2\pi)^{4}}\lim_{\sigma\to 1^{-}}\int_{0}^{1}dx\int d ^{4}\mathbf{K}\frac{\mathbf{K}^{\mu}\mathbf{K}^{\nu}}{[\mathbf{K}^{2}+m^{2} \Delta_{p}]^{2}}F_{\sigma},\]
The term in \(p^{\mu}p^{\nu}\) is calculated similarly to \(\overline{I}_{0}\). From symmetry arguments, and in the absence of any external momentum in the continuum limit, we can write
\[J_{2}^{\mu\nu}=Ag^{\mu\nu}.\]
By contraction with \(g_{\mu\nu}\), we get immediately
\[A=\frac{1}{4}g_{\mu\nu}J_{2}^{\mu\nu}.\]
Note that, due to the presence of the test functions, the contraction \(g_{\mu\nu}J_{2}^{\mu\nu}\) _is not_ a-priori equal to \(\tilde{J}_{2}\) written as
\[\tilde{J}_{2}(p)=-\frac{i}{(2\pi)^{4}}\lim_{\sigma\to 1^{-}}g_{\mu\nu}\int_{0}^{ 1}dx\int d^{4}\mathbf{K}\frac{\mathbf{K}^{\mu}\mathbf{K}^{\nu}}{[\mathbf{K}^{ 2}+m^{2}\Delta_{p}]^{2}}F_{\sigma}.\]
The difference, if any, should come from the asymptotic behavior of the test functions in the continuum limit. This prevents reversing the order of the continuum limit \(\sigma\to 1^{-}\) and the contraction with \(g_{\mu\nu}\). This difference cannot therefore depend on any mass scale. Since \(J_{2}\) and \(J_{2}^{\mu\nu}\) have the dimension of a mass squared, it is thus zero. This is however not the case for the integrals \(K_{2}\) and \(K_{2}^{\mu\nu}\), as we shall see below, since these integrals are dimensionless. We thus get
\[J_{2}^{\mu\nu}=\frac{1}{4}g^{\mu\nu}\tilde{J}_{2}.\]
where \(\tilde{J}_{2}\) can be deduced easily from \(J_{2}\) with
\[\tilde{J}_{2}(p)=\frac{2i}{(4\pi)^{2}}\ m^{2}\Delta_{q}\int_{0}^{1}dx\ \mbox{Log}\frac{\eta^{2}}{\Delta_{q}}.\]
### Electromagnetic vertex
The integrals involved in the calculation of the electromagnetic form factor, with three propagators, have already been detailed in Ref. [30] for the calculation of the triangular diagrams leading to the axial anomaly.
#### b.3.1 Calculation of \(\overline{K}_{0}\)
The integral \(\overline{K}_{0}\) is written as
\[\overline{K}_{0}(p,q)=\frac{2i}{(2\pi)^{4}}\lim_{\sigma\to 1^{-}}\int_{0}^{1}dx \int_{0}^{1-x}dy\int d^{4}{\bf K}\frac{1}{({\bf K}^{2}+m^{2}\Delta)^{3}}F_{ \sigma},\]
with
\[F_{\sigma}=f_{\sigma}\left[\frac{({\bf K}+{\bf P})^{2}}{m^{2}}\right]f_{ \sigma}\left[\frac{({\bf K}+{\bf P}-{\bf p}^{\prime})^{2}}{m^{2}}\right]f_{ \sigma}\left[\frac{({\bf K}+{\bf P}-{\bf p})^{2}}{m^{2}}\right],\]
and \(P=xp^{\prime}+yp\). Since this integral is finite, we can safely take the continuum limit with \(f_{\sigma}\to 1\) and get
\[\overline{K}_{0}(p,q)=-\frac{i}{(4\pi)^{2}}\int_{0}^{1}dx\int_{0}^{1-x}dy\frac {1}{\Delta}.\]
#### b.3.2 Calculation of \(\overline{K}_{1}^{\lambda}\)
The integral \(\overline{K}_{1}^{\lambda}\) is written as
\[\overline{K}_{1}^{\lambda}(p,q)=K_{1}^{\lambda}(p,q)+\frac{2i}{(2\pi)^{4}} \lim_{\sigma\to 1^{-}}\int_{0}^{1}dx\int_{0}^{1-x}dy\int d^{4}{\bf K}\frac{{ \bf P}^{\lambda}}{({\bf K}^{2}+m^{2}\Delta)^{3}}F_{\sigma},\]
with
\[K_{1}^{\lambda}(p,q)=\frac{2i}{(2\pi)^{4}}\lim_{\sigma\to 1^{-}}\int_{0}^{1} dx\int_{0}^{1-x}dy\int d^{4}{\bf K}\frac{{\bf K}^{\lambda}}{({\bf K}^{2}+m^{2} \Delta)^{3}}F_{\sigma},\]
In the continuum limit, \(K_{1}^{\lambda}\) is strictly zero, as shown in Ref. [30], so that we get immediately, from the calculation of \(\overline{K}_{0}\),
\[\overline{K}_{1}^{\lambda}(p,q)=-\frac{i}{(4\pi)^{2}}\int_{0}^{1}dx\int_{0}^{1-x}dy\frac{{\bf P}^{\lambda}}{m^{2}\Delta}.\]
#### b.3.3 Calculation of \(\overline{K}_{2}\)
Similarly, the integral \(\overline{K}_{2}\) is written as
\[\overline{K}_{2}(p,q)=K_{2}(p,q)+\frac{2i}{(2\pi)^{4}}\lim_{\sigma\to 1^{-}} \int_{0}^{1}dx\int_{0}^{1-x}dy\int d^{4}\mathbf{K}\frac{\mathbf{P}^{2}}{( \mathbf{K}^{2}+m^{2}\Delta)^{3}}F_{\sigma},\]
with
\[K_{2}(p,q)=\frac{2i}{(2\pi)^{4}}\lim_{\sigma\to 1^{-}}\int_{0}^{1}dx\int_{0}^{1-x} dy\int d^{4}\mathbf{K}\frac{\mathbf{K}^{2}}{(\mathbf{K}^{2}+m^{2}\Delta)^{3}}F_{ \sigma},\]
From the results of Ref. [30] we get immediately
\[\overline{K}_{2}(p,q)=\frac{i}{(4\pi)^{2}}\int_{0}^{1}dx\int_{0}^{1-x}dy\left[ 2\mathrm{Log}\frac{\eta^{2}}{\Delta}-\frac{\mathbf{P}^{2}}{m^{2}\Delta}\right].\]
#### b.3.4 Calculation of \(\overline{K}_{2}^{\mu\nu}\)
\[\overline{K}_{2}^{\mu\nu}(p,q)=K_{2}^{\mu\nu}(p,q)+\frac{i}{(2\pi)^{4}}\lim_{ \sigma\to 1^{-}}\int_{0}^{1}dx\int_{0}^{1-x}dy\int d^{4}\mathbf{K}\frac{ \mathbf{P}^{\mu}\mathbf{P}^{\nu}}{(\mathbf{K}^{2}+m^{2}\Delta)^{3}}F_{\sigma},\]
with
\[K_{2}^{\mu\nu}(p,q)=\frac{i}{(2\pi)^{4}}\lim_{\sigma\to 1^{-}}\int_{0}^{1}dx \int_{0}^{1-x}dy\int d^{4}\mathbf{K}\frac{\mathbf{K}^{\mu}\mathbf{K}^{\nu}}{( \mathbf{K}^{2}+m^{2}\Delta)^{3}}F_{\sigma}.\]
Following the above discussion for the calculation of \(J_{2}^{\mu\nu}\), and according to the results of Ref. [30], we have
\[K_{2}^{\mu\nu}(p,q)=\frac{1}{4}g^{\mu\nu}\left[K_{2}(p,q)+m^{2}\Delta\overline {K}_{0}(p,q)\right].\]
### Infra-red divergences
#### b.4.1 Calculation of \(A^{\prime}\) and \(B^{\prime}\)
The calculation of \(A^{\prime}(m^{2})\) and \(B^{\prime}(m^{2})\) involves a singular integral in \(1/x\). This singularity corresponds to a pole at \(\mathbf{K}=0\). In this kinematical domain, the relevant test function is written as \(f_{\sigma}\left(\frac{x^{2}p^{2}}{m^{2}}\right)\). We thus have, for \(p^{2}=m^{2}\),
\[A^{\prime}(m^{2}) = \frac{\alpha}{\pi m}\lim_{\sigma\to 1^{-}}\int_{0}^{1}dx \frac{1-x}{x}f_{\sigma}(x^{2})=\frac{\alpha}{\pi m}\left[\frac{1}{2}\int_{0}^{ 1}dX\mathrm{Pf}\left(\frac{1}{X}\right)-1\right]\] \[= -\frac{\alpha}{\pi m},\]
and
\[B^{\prime}(m^{2})=-\frac{\alpha}{2\pi m}\lim_{\sigma\to 1^{-}}\int_{0}^{1}dx \frac{(1-x)^{2}}{x}f_{\sigma}(x^{2})=\frac{3\alpha}{4\pi m}.\]
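As a numerical cross-check of these two results — a minimal sketch in which the \({\rm Pf}(1/X)\) prescription of Eq. (67) is implemented, in the variable \(x\) (with \(X=x^{2}\) and \(\lambda=1\)), as a lower cut-off \(\delta\) together with the compensating \({\rm Log}\,\delta\):

```python
import math
from scipy.integrate import quad

alpha = 1.0 / 137.036
m = 0.511e-3   # electron mass in GeV

def pf_reg(integrand, delta):
    """int_delta^1 dx integrand(x) + Log(delta): Pf prescription of Eq. (67)
    rewritten in the variable x (X = x^2, lambda = 1)."""
    return quad(integrand, delta, 1.0)[0] + math.log(delta)

delta = 1e-6
A_prime = alpha / (math.pi * m) * pf_reg(lambda x: (1 - x) / x, delta)
B_prime = -alpha / (2 * math.pi * m) * pf_reg(lambda x: (1 - x)**2 / x, delta)

print(A_prime, -alpha / (math.pi * m))          # expect  -alpha/(pi m)
print(B_prime, 3 * alpha / (4 * math.pi * m))   # expect 3 alpha/(4 pi m)
```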
#### b.4.2 Calculation of \(\Phi_{1}^{IR}\)
The \(IR\) singularities in the calculation of \(\Phi_{1}^{IR}\) originate from the pole at \(w=x+y=0\), _i.e._ for \(x=y=0\). This pole occurs for \(\mathbf{K}=0\) and is taken care of by the test function \(f_{\sigma}\left[\frac{k^{2}}{m^{2}}\right]\). This test function is written, in this limit, as
\[f_{\sigma}\left[\frac{k^{2}}{m^{2}}\right]\to f_{\sigma}\left[\frac{ \mathbf{P}^{2}}{m^{2}}\right]=f_{\sigma}\left[w^{2}\left(1+\frac{Q^{2}}{m^{2}} \xi(1-\xi)\right)\right],\]
with the variables \(w\) and \(\xi\) introduced in Eq. (22). The a-priori singular part of the relevant integrals is thus written as
\[I = \lim_{\sigma\to 1^{-}}\int_{0}^{1}\frac{dw}{w}f_{\sigma}\left(w^{2} \Delta_{q}\right)=\frac{1}{2}\lim_{\sigma\to 1^{-}}\int_{0}^{\Delta_{q}}\frac{ dX}{X}f_{\sigma}(X)=\frac{1}{2}\int_{0}^{\Delta_{q}}dX{\rm Pf}\left( \frac{1}{X}\right)\] \[= \frac{1}{2}{\rm Log}\Delta_{q}.\]
Thanks to the presence of the test function, this contribution is finite, even though we have considered a massless photon. In a calculation using \(DR\) with a finite mass \(\delta\) of the photon, such an integral would have an additional contribution in \({\rm Log}\left(\frac{\delta^{2}}{m^{2}}\right)\).
|
2304.01168 | DeepAccident: A Motion and Accident Prediction Benchmark for V2X
Autonomous Driving | Safety is the primary priority of autonomous driving. Nevertheless, no
published dataset currently supports the direct and explainable safety
evaluation for autonomous driving. In this work, we propose DeepAccident, a
large-scale dataset generated via a realistic simulator containing diverse
accident scenarios that frequently occur in real-world driving. The proposed
DeepAccident dataset includes 57K annotated frames and 285K annotated samples,
approximately 7 times more than the large-scale nuScenes dataset with 40k
annotated samples. In addition, we propose a new task, end-to-end motion and
accident prediction, which can be used to directly evaluate the accident
prediction ability for different autonomous driving algorithms. Furthermore,
for each scenario, we set four vehicles along with one infrastructure to record
data, thus providing diverse viewpoints for accident scenarios and enabling V2X
(vehicle-to-everything) research on perception and prediction tasks. Finally,
we present a baseline V2X model named V2XFormer that demonstrates superior
performance for motion and accident prediction and 3D object detection compared
to the single-vehicle model. | Tianqi Wang, Sukmin Kim, Wenxuan Ji, Enze Xie, Chongjian Ge, Junsong Chen, Zhenguo Li, Ping Luo | 2023-04-03T17:37:00Z | http://arxiv.org/abs/2304.01168v5 | # DeepAccident: A Motion and Accident Prediction Benchmark for V2X Autonomous Driving
###### Abstract
Safety is the primary priority of autonomous driving. Nevertheless, no published dataset currently supports the direct and explainable safety evaluation for autonomous driving. In this work, we propose DeepAccident, a large-scale dataset generated via a realistic simulator containing diverse accident scenarios that frequently occur in real-world driving. The proposed DeepAccident dataset contains 57K annotated frames and 285K annotated samples, approximately 7 times more than the large-scale nuScenes dataset with 40k annotated samples. In addition, we propose a new task, end-to-end motion and accident prediction, based on the proposed dataset, which can be used to directly evaluate the accident prediction ability for different autonomous driving algorithms. Furthermore, for each scenario, we set four vehicles along with one infrastructure to record data, thus providing diverse viewpoints for accident scenarios and enabling V2X (vehicle-to-everything) research on perception and prediction tasks. Finally, we present a baseline V2X model named V2XFormer that demonstrates superior performance for motion and accident prediction and 3D object detection compared to the single-vehicle model.
## 1 Introduction
In recent years, single-vehicle autonomous driving has achieved significant progress owing to the well-established datasets for autonomous driving, such as KITTI [3], nuScenes [1], Waymo [14], etc. Using those datasets, researchers have proposed various representative algorithms for different downstream tasks, including perception [7, 13, 6, 8] and prediction [5, 19].
Nevertheless, single-vehicle autonomous driving suffers from performance degradation in distant or occluded areas
due to poor or partial visibility of raw sensors. One possible solution is to seek the help of vehicle-to-everything (V2X) communication technology, which can provide a complementary perception range or enhanced visibility for the ego vehicle. Based on the additional information source, V2X communication can be further categorized as vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication. Most of the existing V2X datasets [15, 9, 17] support perception tasks but ignore the critical motion prediction task. The only V2X dataset that supports motion prediction is the recently released V2X-seq [18], but it requires ground truth vehicle positions, map topology, and traffic light status as inputs, which is impractical for real-world autonomous driving.
Moreover, mainstream datasets lack an essential attribute for evaluating the safety of autonomous driving: the inclusion of safety-critical scenarios, such as collision accidents. Existing accident datasets [12, 4, 16] suffer from limitations such as low-resolution images captured from a single forward-facing camera without camera parameter information. In addition, these datasets oversimplify the accident prediction task by treating it as a classification or 2D object detection task, which is difficult to interpret or use in the subsequent planning module of autonomous driving.
We propose the DeepAccident dataset, the first V2X autonomous driving dataset supporting end-to-end motion and accident prediction, as well as other typical autonomous driving perception tasks. Using the CARLA simulator [2], we reconstructed diverse real-world driving accidents according to NHTSA pre-crash reports [11]. For each scenario, four vehicles and one infrastructure collected a full set of sensor data, including multi-view RGB cameras and LiDAR, with annotations for perception and motion prediction tasks. This setup fills the gap of a lack of safety-critical scenarios in existing V2X datasets. Additionally, we introduce a new end-to-end accident prediction task to predict collision accidents' occurrence, timing, location, and involved vehicles or pedestrians. An illustration of this task is shown in Figure 1. Lastly, we propose a V2X model named V2XFormer, which demonstrates superior performance compared to the single-vehicle model on the DeepAccident dataset for both perception and prediction.
Our main contributions can be summarized as follows:
* We propose DeepAccident, the first dataset and benchmark for end-to-end V2X motion and accident prediction, which contains diverse collision accidents. Besides, it also supports the multi-agent multi-modality multi-task perception research for other autonomous driving perception tasks.
* Based on DeepAccident, we design a new task named end-to-end accident prediction that predicts the occurrence of collision accidents and their specific timing, location, and vehicles or pedestrians involved.
* We propose a V2X model named V2XFormer for both perception and prediction tasks to serve as a baseline for further research.
## 2 Related Work
**V2X autonomous driving datasets for perception.** The existing V2X datasets primarily focus on perception tasks that demonstrate the improved perception capabilities of V2X approaches. OPV2V [15] supports 3D object detection and tracking in V2V scenarios, while V2X-Sim [9] supports more comprehensive tasks, including BEV map semantic segmentation in both V2V and V2I scenarios. These datasets were generated using simulators. In contrast, DAIR-V2X [17] and V2X-seq [18] are the only publicly available real-world V2X datasets that collect data in V2I scenarios. DAIR-V2X only supports 3D object detection, while V2X-seq is a sequential dataset that also supports 3D object tracking. For comparison, our proposed DeepAccident dataset includes multi-view cameras and LiDAR sensors and supports all the existing datasets' perception tasks.
**V2X autonomous driving datasets for motion prediction.** The V2X-seq dataset [18] is currently the only available dataset that supports the V2X motion prediction task. However, V2X-seq treats motion prediction as a module that follows the perception module and takes ground truth vehicle locations, map topology, and traffic light status as inputs. This modularized pipeline for motion prediction ignores the rich semantics embedded within the raw sensor data and can lead to error accumulation in real-time driving scenarios due to inaccurate perception results. Alternatively, end-to-end motion prediction, which takes the raw sensor data as input and directly generates the motion prediction results, has recently attracted significant research interest [5, 19] owing to its potential to extract more semantics from the raw sensors and its time efficiency. To support the end-to-end motion prediction task, datasets need to provide sequential frames with consistent object IDs and corresponding raw sensor data. Although existing datasets can be modified to support end-to-end motion prediction, they do not include it in their official evaluation benchmarks, as it is a newly emerging task. We mark this situation using the symbol \(\triangle\) in Table 1. In contrast, our proposed DeepAccident dataset includes end-to-end motion prediction in the benchmark and provides code for training and evaluation.
**Autonomous driving datasets for accident prediction.** Currently, there is no V2X dataset available for accident prediction, with existing datasets only covering normal and safe scenarios. However, there are existing accident datasets that operate within a single vehicle or single infrastructure setting. For instance, VIENA\({}^{2}\) and GTACrash create collisions in the GTA V video game by manually driving or losing control and capturing single forward camera images of the ego vehicle. VIENA\({}^{2}\) only provides coarse classification
labels, while GTACrash provides 2D bounding boxes for collided vehicles, treating accident prediction as a 2D dangerous vehicle detection task. YoutubeCrash utilizes real-world collision video clips and manually annotates 2D bounding boxes for the collided vehicles, keeping the same task setting as GTACrash. TAD is a real-world accident dataset captured from a single surveillance camera installed on the infrastructure side. Here, the task of accident prediction is viewed as involving both classification and 2D accident vehicle detection. However, all these existing accident datasets treat accident prediction as a naive classification or 2D dangerous vehicle detection task, which is hard to interpret or utilize in the subsequent autonomous driving planning module. In contrast, our proposed DeepAccident dataset provides fully detailed accident labels, such as the accident vehicle ids and their future colliding trajectories in the V2X scenario.
Table 1 compares our DeepAccident dataset with other autonomous driving datasets in terms of V2X configurations, sensor configurations, supported tasks, and the inclusion of accident scenarios. Meanwhile, Table 2 compares the dataset scale of DeepAccident with other datasets. In summary, the proposed DeepAccident is the only V2X dataset that includes accident scenarios and supports end-to-end motion and accident prediction, as well as other perception tasks in both V2V and V2I settings. Furthermore, DeepAccident also has the largest scale compared to existing datasets.
## 3 DeepAccident Dataset
### Dataset Generation
The accident scenarios in our proposed DeepAccident dataset are designed following the pre-crash report by NHTSA [11], in which various types of collision accidents are reported from the real-world crash data. We design 12 types of accident scenarios in DeepAccident as shown in Figure 2. Our designed accident scenarios generally involve two vehicles with overlapped planned trajectories at signalized and unsignalized intersections. An accident occurs if either vehicle runs a red light or both vehicles are allowed to proceed but show no negotiating behavior. In addition to the two accident vehicles, we spawn two more vehicles, each following behind one of the accident vehicles, to capture diverse viewpoints of the same scene (See Figure 2). Furthermore, a full set of sensors is installed on all four vehicles, and annotated labels are saved independently. Additionally, a full stack of sensors is installed facing toward the intersection on the infrastructure side, resulting in data from four vehicles and one infrastructure of the same scene to support V2X research. The detailed sensor configuration
\begin{table}
\begin{tabular}{l|c|c|c c|c c c c|c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Scenario} & \multirow{2}{*}{Source} & \multicolumn{2}{c|}{Sensor} & \multicolumn{4}{c|}{Tasks} & \multirow{2}{*}{Accident} \\ & & & MTV Cameras & LiDAR & Det. & Track. & Seg. & Mot. & \\ \hline nuScenes [1] & single & real-world & ✓ & ✓ & ✓ & ✓ & ✓ & \(\triangle\) & ✗ \\ Waymo [14] & single & real-world & ✓ & ✓ & ✓ & ✓ & ✗ & \(\triangle\) & ✗ \\ KITTI [3] & single & real-world & ✗ & ✓ & ✓ & ✓ & ✗ & \(\triangle\) & ✗ \\ \hline OPV2V [15] & V2V & simulator & ✓ & ✓ & ✓ & ✓ & ✗ & \(\triangle\) & ✗ \\ V2X-Sim [9] & V2V\&V2I & simulator & ✓ & ✓ & ✓ & ✓ & ✓ & \(\triangle\) & ✗ \\ DAIR-V2X [17] & V2I & real-world & ✗ & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ \\ V2X-seq/Perception [18] & V2I & real-world & ✗ & ✓ & ✓ & ✓ & ✗ & \(\triangle\) & ✗ \\ V2X-Seq/Forecasting [18] & V2I & real-world & ✗ & ✓ & ✓ & ✓ & ✗ & ✗ & ✗ \\ \hline VIENA\({}^{2}\)[12] & single & simulator & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ \\ GTACrash [4] & single & simulator & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ \\ YoutubeCrash [4] & single & real-world & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ \\ TAD [16] & single & real-world & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ \\ \hline DeepAccident (**ours**) & V2V\&V2I & simulator & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of existing autonomous driving datasets, including both real-world and simulated ones, to our proposed DeepAccident dataset. The Mot. task listed in Tasks represents end-to-end motion prediction, and the symbol \(\triangle\) indicates that the required ground truth motion labels are not officially provided but can be obtained via manipulation of the original labels.
\begin{table}
\begin{tabular}{c|c c c c c c c} \hline \hline & KITTI [3] & nuScenes [1] & Waymo [14] & OPV2V [15] & V2X-Sim [9] & V2X-seq [18] & DeepAccident (**ours**) \\ \hline \# of annotated samples & 15K & 40K & 230K & 33K & 47K & 36K & **285K** \\ \# of annotated V2X frames & 0 & 0 & 0 & 11K & 10K & 18K & **57K** \\ annotation frequency (Hz) & 1 & 2 & 10 & 10 & 5 & 10 & **10** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Scale comparison between existing autonomous driving datasets and our proposed DeepAccident. We only list the datasets that have the potential to support the end-to-end motion prediction task.
is provided in Appendix Table 8, and an illustration example of our V2X setting is shown in Appendix Figure 7.
**Accident generation details.** For the two accident vehicles that are designated to collide with each other, we first calculate the intersection point of their planned trajectories and then set their initial positions so that they arrive at a similar time, obtained by dividing the arriving distance by the randomly chosen maximum speed. The two vehicles that are designed to follow the accident vehicles share the same trajectories as the accident vehicles. Thus, in some scenarios, it is the following vehicles that collide instead, and we treat these as valid scenarios as well. Moreover, although not designed intentionally, other accident types occur, such as vehicles hitting crossing pedestrians or colliding with other non-V2X vehicles. We also include these scenarios in the proposed DeepAccident and will design more scenarios in a later version of this dataset. Besides, we have also designed normal scenarios in which the vehicles have overlapping trajectories but show negotiation behavior and obey the traffic rules to avoid collisions, which is achieved by using the in-built TrafficManager in CARLA to coordinately control all the vehicles and pedestrians. Each scenario is stopped when a collision occurs, the ego vehicle completes its planned trajectory, or the scenario time exceeds 10 seconds.
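The spawn-placement logic above reduces to a small geometric computation. The following Python sketch illustrates it under simplifying assumptions (polyline trajectories, arc-length distances); the function and variable names are illustrative and are not taken from the DeepAccident toolkit.

```python
import numpy as np

def place_accident_vehicles(traj_a, traj_b, speed_a, speed_b):
    """Sketch: estimate when each vehicle reaches the (approximate) intersection
    of the two planned trajectories, so spawn points can be chosen such that the
    arrival times roughly coincide.

    traj_a, traj_b: (N, 2) arrays of planned waypoints.
    speed_a, speed_b: randomly chosen maximum speeds in m/s.
    """
    # Approximate the trajectory intersection by the closest pair of waypoints.
    d = np.linalg.norm(traj_a[:, None, :] - traj_b[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmin(d), d.shape)

    # Arc length from each trajectory start to the intersection point.
    dist_a = np.linalg.norm(np.diff(traj_a[: i + 1], axis=0), axis=1).sum()
    dist_b = np.linalg.norm(np.diff(traj_b[: j + 1], axis=0), axis=1).sum()

    # Arrival time = arriving distance / maximum speed; the spawn points can
    # then be shifted along each route until the two times roughly agree.
    return dist_a / speed_a, dist_b / speed_b
```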
### Dataset Statistics
**Scenario distribution.** During data collection, many random factors were involved in increasing the diversity, including the number of spawned surrounding vehicles and pedestrians, weather, and time-of-the-day. As shown in Figure 3, the proposed DeepAccident dataset exhibits a substantial degree of diversity. More dataset statistics can be found in Appendix Figure 10, 11, and 12.
**Supported tasks.** DeepAccident contains both multi-view camera sensors and LiDAR sensors and provides annotations that support end-to-end motion prediction, 3D object detection and tracking, and BEV semantic segmentation. Additionally, we have also proposed a new task named end-to-end accident prediction, which can evaluate the safety of autonomous algorithms directly. These tasks can all be achieved under V2X settings in DeepAccident, thus stimulating more V2X research.
**Dataset size.** As shown in Table 2, the proposed DeepAccident has the largest scale compared to existing datasets. It comprises a total of 285k annotated samples and 57k annotated V2X frames, providing annotations at a high frequency of 10 Hz. In addition, we split the data with an approximate ratio of 0.7, 0.15, and 0.15 for the training, validation, and testing splits, resulting in 203k, 41k, and 41k samples, respectively. A total of 691 scenarios are assigned to the three splits, with 483, 104, and 104 scenarios for training, validation, and testing, respectively.
## 4 End-to-End Motion and Accident Prediction
For this task, we select the camera-based setting to utilize multi-view camera images as inputs for generating motion prediction results for the entire scene. These motion prediction results are then post-processed to determine the occurrence of the accident and the accident vehicle ids, accident positions as well as timing (see Figure 1).
### Network Structure
We choose BEVerse [19] as our baseline single-vehicle model for end-to-end motion prediction. For the V2X setting, we propose a simple yet effective V2X model named V2XFormer, so called because it uses SwinTransformer [10] as the image feature backbone. As shown in Figure 4, V2XFormer shares the same BEV feature extractor as the single-vehicle model, so that each V2X agent extracts a BEV feature centered at its own coordinate system. These BEV features are then spatially warped to the ego vehicle coordinate system and concatenated with the ego vehicle BEV feature. For the BEV feature fusion part, we use a simple average fusion strategy over the aligned BEV features to generate a BEV feature with the same number of channels as the ego vehicle BEV feature. This averaged BEV feature is then fed into the task heads to generate the motion prediction and 3D object detection results. In addition, given that the ego vehicle itself can cause an acci
Figure 2: The designed accident scenarios in DeepAccident can be divided into 12 types across signalized intersections (6 types) and unsignalized intersections (6 types). For each scenario, we have two colliding vehicles with overlapped planned trajectories and another two vehicles following behind them. The designed scenarios include: (1) running against a red light at four-way intersections, (2) left turn against a red light at four-way intersections, (3) unprotected left turn at four-way intersections, (4) right turn against left turn at four-way intersections, (5) right turn against left turn at three-way intersections (6) go straight against right turn at three-way intersections in signalized cases. In unsignalized cases, the designed overlapping trajectories are the same, but there are no traffic lights to affect the vehicle behaviors.
dent with other vehicles or pedestrians, we also require the network to jointly predict the ego vehicle's future motion.
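A minimal PyTorch sketch of the warp-and-fuse step described above is given below. It assumes each agent's BEV feature map is a tensor of shape (B, C, H, W) and that the agents' relative poses have already been converted into 2×3 affine transforms expressed in normalized BEV grid coordinates; the SwinTransformer backbone and the task heads of the actual V2XFormer are omitted, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def warp_to_ego(bev, theta):
    """Warp one agent's BEV feature map (B, C, H, W) into the ego frame.
    `theta` is a (B, 2, 3) affine transform in normalized grid coordinates,
    derived from the relative pose between the agent and the ego vehicle."""
    grid = F.affine_grid(theta, bev.shape, align_corners=False)
    return F.grid_sample(bev, grid, align_corners=False)

def fuse_bev_features(ego_bev, agent_bevs, thetas):
    """Align every agent's BEV feature to the ego frame and average them.
    Averaging the aligned maps is equivalent to concatenating them along the
    channel dimension and pooling back to the ego feature's channel count."""
    aligned = [ego_bev] + [warp_to_ego(b, t) for b, t in zip(agent_bevs, thetas)]
    fused = torch.stack(aligned, dim=0).mean(dim=0)
    return fused  # fed into the motion-prediction and detection heads
```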
### Accident Prediction
**Post-processing for accident prediction.** We propose the end-to-end accident prediction task, which can be achieved via post-processing the motion prediction results. We view the performance on this task as safety metrics for autonomous driving. From the motion prediction results, which consist of several BEV outputs, including centerness, segmentation, offset, and future flow, we can combine them to get the BEV instance segmentation results like the ones shown in Figure 1 for the current moment as well as the future period. For each timestamp, we can approximate the BEV segmentation results for each object as polygons and then find the polygons with the closest distance and store the object ids and positions to represent the accident candidates for this timestamp. By looking for the timestamp with the closest object distance, we determine whether an accident occurred and provide details regarding the colliding object ids and positions at that particular moment. In our experiment, we set a threshold for a dangerous distance of 2.5 meters.
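A possible implementation of this post-processing step is sketched below using the shapely library. The mask-to-coordinate conversion, the data layout, and all names are assumptions made for illustration, not the released evaluation code.

```python
from itertools import combinations
from shapely.geometry import MultiPoint

DANGER_DIST = 2.5  # meters, the dangerous-distance threshold described above

def detect_accident(instance_masks_per_t, xy_of_mask):
    """Sketch of the accident post-processing described above.

    instance_masks_per_t: list over future timestamps; each entry maps an
        object id to its boolean BEV instance mask.
    xy_of_mask: callable turning a boolean mask into the (x, y) metric
        coordinates of its occupied cells (depends on the BEV resolution).
    Returns (occurred, t_idx, id_pair, positions) for the closest approach.
    """
    best = (float("inf"), None, None, None)
    for t, masks in enumerate(instance_masks_per_t):
        # Approximate each object's BEV footprint by the convex hull polygon.
        polys = {oid: MultiPoint(xy_of_mask(m)).convex_hull
                 for oid, m in masks.items() if m.any()}
        for (id_a, pa), (id_b, pb) in combinations(polys.items(), 2):
            d = pa.distance(pb)
            if d < best[0]:
                best = (d, t, (id_a, id_b),
                        (pa.centroid.coords[0], pb.centroid.coords[0]))
    occurred = best[0] < DANGER_DIST
    return occurred, best[1], best[2], best[3]
```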
**Accident prediction accuracy.** To evaluate the accuracy of accident prediction against the ground truth accident information, the same post-processing steps are applied to the ground truth motion to determine the occurrence of future accidents. A prediction is a true positive when: \((i)\) both the prediction and the ground truth indicate the occurrence of an accident, and \((ii)\) the total position difference of the colliding agents between the prediction and the ground truth is less than a threshold. Based on this, we propose a new evaluation metric named Accident Prediction Accuracy (APA) in Equation 1, where we calculate the average accident prediction accuracy over a set of position difference thresholds \(\mathbb{D}=\{5,10,15\}\) meters:
\[\mathrm{APA}=\frac{1}{|\mathbb{D}|}\sum_{d\in\mathbb{D}}\frac{|TP|_{d}}{|TP|_{d}+\frac{1}{2}|FP|_{d}+\frac{1}{2}|FN|_{d}} \tag{1}\]
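For concreteness, the metric can be computed as in the following sketch, assuming the normalization in Eq. (1) is over the number of thresholds \(|\mathbb{D}|\) and that the per-threshold counts have already been obtained by matching predicted and ground-truth accidents as described above:

```python
def accident_prediction_accuracy(tp, fp, fn, thresholds=(5, 10, 15)):
    """APA as in Eq. (1): tp/fp/fn map each position-difference threshold
    d (in meters) to the number of true/false positives and false negatives."""
    per_threshold = [
        tp[d] / (tp[d] + 0.5 * fp[d] + 0.5 * fn[d]) for d in thresholds
    ]
    return sum(per_threshold) / len(thresholds)
```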
**True Positive metrics.** In addition to the APA, we calculate several _True Positive_ metrics (TP metrics) for each true positive accident prediction sample to provide a more detailed interpretation of the performance. These include the error terms for _IDs_, _positions_, and _time_ between the ground truth accident and the predicted accident. For the TP metrics calculation, we set the position difference threshold to 10 meters when deciding the true positive predictions. As for the ID error, it equals zero if the predicted accident objects' ids are the same as the ground truth's, and one otherwise. For the position and time errors, we present them in their native units (_meters_ and _seconds_) and calculate the absolute difference compared to the ground truth. For each TP metric, we calculate the average value over true positive predictions as indicated by Equation 2.
\[\mathrm{mTP}=\frac{1}{|TP|}\sum_{i=1}^{|TP|}\mathrm{TP}_{\mathrm{err}} \tag{2}\]
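A sketch of the corresponding TP-metric computation is given below; the field names of the matched prediction/ground-truth pairs are hypothetical.

```python
import math

def mean_tp_errors(matches):
    """TP metrics as in Eq. (2). Each element of `matches` is assumed to hold
    predicted and ground-truth accident ids, (x, y) positions in meters, and
    accident times in seconds for one true-positive prediction."""
    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    id_err = mean([0.0 if m["pred_ids"] == m["gt_ids"] else 1.0 for m in matches])
    pos_err = mean([math.dist(m["pred_pos"], m["gt_pos"]) for m in matches])
    time_err = mean([abs(m["pred_time"] - m["gt_time"]) for m in matches])
    return id_err, pos_err, time_err
```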
**Number of future motion predictions.** In reality, there could be multiple future motions given the same past states. As a result, both FIERY [5], and BEVerse [19] learn a Gaussian distribution map consisting of a mean map and covariance map over the BEV feature. Thus, by sampling from the learned Gaussian distribution map, we can obtain different BEV features. This enables the motion head to produce diverse motion prediction results.
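The sampling step can be summarized in a few lines of PyTorch; the exact parameterization (for example, whether a log-standard-deviation or full covariance map is predicted) differs between FIERY and BEVerse, so this is only a sketch.

```python
import torch

def sample_future_bev_features(mean_map, log_std_map, n_samples=5):
    """Draw diverse latent BEV features from the learned Gaussian map.
    Returns the mean-based feature plus `n_samples` random draws, in line with
    the evaluation protocol used later in Section 5 (mean plus five samples)."""
    draws = [mean_map + torch.exp(log_std_map) * torch.randn_like(mean_map)
             for _ in range(n_samples)]
    return [mean_map] + draws
```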
## 5 Experiment
**Evaluated tasks.** To show the usefulness of our proposed DeepAccident dataset as a V2X motion and accident prediction benchmark, we focus on the end-to-end motion and accident prediction task and choose the camera-based setting. In addition, we train another 3D object detection head with the motion head to simultaneously compare the perception ability between the V2X models and the single-vehicle model.
**Experiment settings.** We use the settings for motion prediction in BEVerse [19] and FIERY [5] as our default experiment settings. The model predicts 2 seconds into the future, considering 3 frames of past temporal context and 4 frames into the future at 2Hz. We choose BEVerse-tiny as the single-vehicle model and initialize all the V2X models and the single-vehicle model with pretrained weights on nuScenes. For training, we train the models on the training
Figure 3: Distribution of the proposed DeepAccident dataset, including scenario length, number of vehicles and pedestrians in each sample, accident occurrence, weather, and time of the day.
split of DeepAccident for 20 epochs. For evaluation, we randomly sample five BEV features from the learned motion Gaussian distribution, as mentioned in Section 4.2, along with the mean of this learned distribution, to generate six different motion prediction results. Only the motion prediction result obtained from the mean of the learned Gaussian distribution is used to assess motion prediction performance. For accident prediction, we consider a prediction to indicate the occurrence of an accident when any of the sampled motion predictions is found to result in a collision, thereby prioritizing safety.
We report the performance on DeepAccident's validation split in the following sections and include results on the testing split in Appendix Table 7. We compare the overall performance of V2X models with different agent configurations to the single-vehicle model and provide further ablation analysis that considers time-to-collision, accident visibility, and longer prediction horizon settings. Additionally, we conduct experiments on nuScenes to validate the trained models' real-world generalization ability.
### V2X models for prediction and perception
**Evaluation metrics.** We use mIOU and VPQ proposed in FIERY [5] for the motion prediction task, our proposed APA (Accident Prediction Accuracy) and id error, position error for accident prediction task, and detection mAP averaged over center distance matching thresholds of {1,2,4} meters for 3D object detection task.
**Overall performance.** V2X models significantly outperform the single-vehicle baseline in all three tasks, as shown in Table 3. The V2X model with four vehicles and infrastructure exhibits better performance than the single-vehicle model, with increases of 8.3, 5.2, and 9.7 in mIOU, APA, and detection mAP, respectively. V2X communication with the vehicle on the other side demonstrates better performance in motion prediction and 3D object detection, while showing similar performance in accident prediction, compared to V2X communication with the vehicle behind the ego vehicle. V2X-infra yields the best motion
Figure 4: Network details of the proposed V2XFormer. We use the three-V2X-agent setting consisting of ego AV, AV, and Infra for illustration. V2X agents in V2XFormer utilize a shared-weight BEV extractor to extract BEV features based on the multi-view camera observation history within the previous N frames. These features are spatially warped and aligned with the BEV features from the ego vehicle before being concatenated along the channel dimension. For the V2X fusion part, we utilize a simple yet effective average pooling over the channel dimension to generate the aggregated BEV feature, which is then fed into different task heads to produce the prediction results.
\begin{table}
\begin{tabular}{c|c c|c c c|c} \hline \hline \multirow{2}{*}{Config} & \multicolumn{2}{c|}{Motion} & \multicolumn{3}{c|}{Accident} & \multirow{2}{*}{mAP(\(\uparrow\))} \\
& mIoU(\(\uparrow\)) & VPQ(\(\uparrow\)) & APA(\(\uparrow\)) & id err(\(\downarrow\)) & pos err(\(\downarrow\)) & \\ \hline Single vehicle & 43.8 & 31.6 & 61.9 & 0.12 & 3.20 & 26.5 \\ + behind vehicle & 48.3 & 35.7 & 65.7 & 0.10 & 3.03 & 33.1 \\ + other vehicle & 49.5 & 37.0 & 65.1 & 0.09 & 2.99 & 34.5 \\ + infrastructure & 50.0 & 37.4 & 65.0 & 0.11 & 3.12 & 32.9 \\ 3 vehicles & 50.7 & 38.1 & 65.0 & 0.09 & 2.87 & 35.5 \\ 4 vehicles & 51.2 & 38.7 & 66.6 & 0.08 & 2.91 & 35.7 \\ 4 vehicles+infra & 52.1 & 39.5 & 67.1 & 0.08 & 2.88 & 36.2 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance comparison between the single-vehicle model and different v2x configuration models on the validation split of DeepAccident.
prediction performance due to the broad visibility provided by the infrastructure's relatively high sensor mounting position. However, it exhibits slightly lower performance on the detection and accident prediction tasks compared to V2X-other or V2X-behind, possibly due to the simple V2X fusion design of our proposed V2XFormer. Nevertheless, V2X-infra still outperforms the single-vehicle model across all tasks. Finally, gradually incorporating more V2X agents for communication leads to steady performance improvements across all tasks.
**Performance based on Time-To-Collision.** We divide the evaluation data based on Time-To-Collision (TTC) into 2s, 1.5s, 1s, and 0.5s prior to collision, using our prediction horizon of two seconds and a 0.5s frame gap. As we lack ground truth labels for the period after the collision, we only assess prediction results in the valid future time period. According to Table 4, the V2X models outperform the single-vehicle model for motion and accident prediction, especially when the TTC is shorter. For example, the V2X-5agents model outperforms the single-vehicle model by 9.0 and 6.5 for mIOU and APA, respectively, on the 1s TTC data. However, for the 2s TTC data, the accident prediction task is challenging for all models, with the V2X-5agents model even performing worse than the single-vehicle model. This may be attributed to the simple fusion design of the V2X models and suggests an area for future improvement.
**Performance based on accident vehicle visibility.** During the observation period, accident vehicles or pedestrians may be temporarily or consistently invisible from the ego vehicle's perspective, making it challenging to predict their motion with a single-vehicle model and hindering accident prediction. To address this, we evaluate the performance of different V2X models and a single-vehicle model by dividing the evaluation data based on accident vehicle or pedestrian visibility from the ego vehicle side. We define a sample with over half of its observation frames having invisible accident vehicles as an invisible sample for accidents. Figure 5 shows that the performance gap between V2X models and the single-vehicle model is significantly larger when there is limited accident visibility from the ego vehicle side, for both motion prediction and accident prediction tasks. Specifically, V2X-5agent model (4 vehicles + infra) outperforms the single-vehicle model by 12.0 and 8.1 higher mIOU and APA, respectively, for invisible accident scenarios, while the gap is only 7.8 and 5.0 in terms of mIOU and APA for visible accident scenarios.
**Longer prediction horizon.** We also conduct experiments on predicting longer future motion and choose the single-vehicle model as the baseline. As shown in Table 5, predicting longer future motion achieves worse accident prediction accuracy compared to the model with a shorter prediction horizon. For example, the model predicting 4s future achieves almost half the APA of the 2s-setting model on all validation data, achieving 35.4 and 61.9, respectively. Moreover, the models with longer future prediction horizon settings still perform worse when evaluating only the common future period of time (20.4, 25.7, 28.7 for 4s, 3s, and 2s prediction horizon settings on 2s time-to-collision data). However, the 4s-setting model achieves an APA of 10.2 for samples 4s prior to collision, while other models are unable
\begin{table}
\begin{tabular}{c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Config} & \multicolumn{2}{c|}{2s} & \multicolumn{2}{c|}{1.5s} & \multicolumn{2}{c|}{1s} & \multicolumn{2}{c}{0.5s} \\ \cline{2-9} & mIOU & APA & mIoU & APA & mIoU & APA & mIoU & APA \\ \hline Single vehicle & 38.3 & 28.7 & 43.1 & 49.1 & 48.0 & 74.7 & 52.1 & 81.1 \\ + behind vehicle & 41.6 (+3.4) & 31.4 (+2.7) & 47.6 (+4.5) & 48.2 (-0.9) & 53.2 (+5.2) & 80.8 (+6.1) & 58.1 (+6.1) & 85.5 (+4.4) \\ + other vehicle & 43.5 (+5.2) & 30.5 (+1.8) & 49.0 (+5.9) & 53.3 (+4.2) & 54.2 (+6.2) & 78.5 (+3.8) & 58.5 (+6.4) & 83.7 (+2.6) \\ + infrastructure & 44.2 (+5.9) & 30.2 (+1.5) & 49.3 (+6.2) & 50.4 (+1.3) & 54.5 (+6.5) & 79.0 (+4.3) & 59.3 (+7.2) & 85.0 (+3.9) \\ 3 vehicles & 44.1 (+5.9) & 30.5 (+1.8) & 50.1 (+7.1) & 49.6 (+0.9) & 55.6 (+7.6) & 79.0 (+4.3) & 60.3 (+8.2) & 85.3 (+4.2) \\ 4 vehicles & 44.5 (+6.2) & 31.5 (+2.8) & 50.5 (+7.5) & 53.1 (+4.0) & 56.1 (+8.1) & 79.5 (+4.8) & 61.3 (+9.2) & 86.7 (+5.6) \\ 4 vehicle + infra & 45.6 (+7.3) & 28.2 (-0.5) & 51.4 (+8.3) & 53.6 (+4.5) & 56.9 (+9.0) & 81.2 (+6.5) & 62.3 (+10.2) & 87.6 (+6.5) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance comparison between the single-vehicle model and different V2x configuration models for motion mIoU and accident prediction accuracy (APA) at different Time-To-Collision (TTC).
Figure 5: Performance comparison between the single-vehicle model and different v2x configuration models for motion mIoU and accident prediction accuracy (APA) _v.s._ accident visibility.
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline Prediction horizon & all data & 1s & 2s & 3s & 4s \\ \hline
2s & 61.9 & 74.7 & 28.7 & none & none \\
3s & 50.5 & 71.5 & 25.7 & 21.2 & none \\
4s & 35.4 & 56.3 & 20.4 & 14.6 & 10.2 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Performance of single-vehicle models with different prediction horizon settings at different Time-To-Collision (TTC) for accident prediction accuracy.
to predict the accident this early due to their design. These results suggest a trade-off between predicting longer future horizons and achieving satisfactory overall performance.
**Qualitative results.** Figure 6 shows an example where the crossing pedestrian is invisible to the ego vehicle due to the uphill terrain. As a result, the single-vehicle model is unable to detect the pedestrian and fails to predict the upcoming accident. In contrast, the infrastructure provides complementary visibility for the colliding vehicle and pedestrian, allowing the V2X-infra and V2X-5agents models to accurately anticipate the accident.
### Sim2Real Domain Adaptation
To validate the real-world generalization ability of models trained with our proposed DeepAccident dataset, we fine-tune the trained single-vehicle model on nuScenes for five epochs and compare it with the original BEVerse-tiny model trained only on nuScenes, for both the motion prediction and 3D object detection tasks. As shown in Table 6, the model trained with both datasets achieves 1.9 higher mAP and 0.8 higher VPQ on the nuScenes validation dataset, which demonstrates the usefulness of our proposed DeepAccident dataset for real-world scenarios.
## 6 Conclusion
We propose DeepAccident, the first large-scale V2X autonomous driving dataset that includes various collision accident scenarios commonly encountered in real-world driving. Based on this dataset, we introduce the end-to-end motion and accident prediction task and corresponding metrics to assess the accuracy of accident prediction. DeepAccident contains sensor data and annotation labels from four vehicles and one infrastructure for each scenario, enabling V2X research on perception and prediction. Our proposed V2XFormer outperforms the single-vehicle model in both perception and prediction tasks, providing a baseline for future research. The proposed DeepAccident serves as a direct safety benchmark for autonomous driving algorithms and as a supplementary dataset for enhancing the generalization ability of perception methods in both single-vehicle and V2X settings.
**Limitations.** At present, we have not considered communication latency, bandwidth limitations, and estimation errors in the relative poses of V2X agents, all of which are inevitable in real-world scenarios, when transferring messages between V2X agents. Our V2XFormer model is a starting point to demonstrate the potential of V2X perception and prediction. We encourage more V2X methods to be developed to overcome these simplifications by using the DeepAccident dataset.
**Acknowledgement.** We gratefully acknowledge the support of MindSpore, CANN (Compute Architecture for Neural Networks) and Ascend AI Processor used for this research.
\begin{table}
\begin{tabular}{c|c c} \hline Training data & VPQ & mAP \\ \hline nuScenes only & 33.4 & 32.1 \\ DeepAccident + nuScenes & 34.2(+0.8) & 34.0(+1.9) \\ \hline \end{tabular}
\end{table}
Table 6: Performance comparison between the original BEVerse-tiny [19] model and the model trained with both the synthesized DeepAccident data and the real-world nuScenes data for motion prediction (VPQ) and 3D object detection (mAP).
Figure 6: A qualitative result where the ego vehicle is going uphill while the vehicle ahead will collide with a pedestrian crossing the road. The crossing pedestrian is invisible to the ego vehicle due to the uphill terrain. In this case, the infrastructure provides clear visibility of the colliding vehicle and pedestrian, allowing the V2X-infra and V2X-5agent models to successfully predict the accident. The red and green bounding boxes in the images, respectively, represent the colliding vehicles or pedestrians and the other V2X vehicles. |
2306.17173 | Photon: A Cross Platform P2P Data Transfer Application | Modern computing requires efficient and dependable data transport. Current
solutions like Bluetooth, SMS (Short Message Service), and Email have their
restrictions on efficiency, file size, compatibility, and cost. In order to
facilitate direct communication and resource sharing amongst linked devices,
this research study offers a cross-platform peer-to-peer (P2P) data
transmission solution that takes advantage of P2P networks' features.
The system enables cost-effective and high-performance data transport by
using the compute, storage, and network resources of the participating devices.
Simple file sharing, adaptability, dependability, and high performance are some
of the important benefits. The examination of the suggested solution is
presented in this paper and includes discussion of the P2P architecture, data
transfer mechanisms, performance assessment, implementation issues, security
concerns, and the potential difficulties that needs to be addressed.
The research intends to validate the efficacy and potential of the suggested
cross-platform P2P data transfer solution, delivering better efficiency and
dependability for users across various platforms, through practical
investigations and comparisons with existing approaches. | Abhilash Shreedhar Hegde, Amruta Narayana Hegde, Adeep Krishna Keelar, Ananya Mathur | 2023-06-16T06:58:08Z | http://arxiv.org/abs/2306.17173v1 | # Photon: A Cross Platform P2P Data Transfer Application
###### Abstract
Modern computing requires efficient and dependable data transport. Current solutions like Bluetooth, SMS (Short Message Service), and Email have their restrictions on efficiency, file size, compatibility, and cost. In order to facilitate direct communication and resource sharing amongst linked devices, this research study offers a cross-platform peer-to-peer (P2P) data transmission solution that takes advantage of P2P networks' features.
The system enables cost-effective and high-performance data transport by using the compute, storage, and network resources of the participating devices. Simple file sharing, adaptability, dependability, and high performance are some of the important benefits. The examination of the suggested solution is presented in this paper and includes discussion of the P2P architecture, data transfer mechanisms, performance assessment, implementation issues, security concerns, and the potential difficulties that needs to be addressed.
The research intends to validate the efficacy and potential of the suggested cross-platform P2P data transfer solution, delivering better efficiency and dependability for users across various platforms, through practical investigations and comparisons with existing approaches.
Peer-to-Peer Networks, Local Area Networks, Mobile Application, Cross-Platform, Flutter.
## I **Introduction**
Data transfer is the application of computing techniques to transmit electronic or analog data from one device to another. The transferred data may be of any type, size, context, and nature, and the transfer can be accomplished through network-less modes, such as copying data to an external device and then copying it from that device to another. However, this is a time-consuming and even irritating task when the same file is changed and transferred multiple times. While several wireless and communication technologies, namely Bluetooth, SMS (Short Message Service), and Email, enable data transfer, they all possess several flaws. Bluetooth provides good efficiency for very nearby devices and small files, but it takes a large amount of time for larger files and is sometimes incompatible with devices of lower quality or different branding. Data transfer through SMS is impractical considering its limitations in size and cost, and Emails, too, have similar flaws.
A peer-to-peer network is an information technology infrastructure that allows two or more computer systems to connect and share resources without requiring a separate server or server network. Unlike the client-server architecture, there is no central server for processing requests and sending responses, for the peers directly interact with one another. Each computer in a P2P network provides resources to the network and consumes resources that the network provides. Resources such as files, printers, storage, bandwidth, and processing power can be shared between various computers in that network. Advantages of a P2P Network, within the local area, include easy file sharing, reduced costs, adaptability, reliability, high performance, and efficiency.
Given that the existing options have these flaws, and that users often face poor access to the internet or previous solutions that are non-functional, our proposal aims to address the above-mentioned issues with a solution that can tackle them on a wide variety of platforms with ease. Our solution aims to be cross-platform, able to run in any environment and provide the best performance.
## II **Literature Survey**
**Turbo Share** - A file sharing application developed by _Suraj Bhul et al._, [5], based on peer-to-peer data transfer technology. The app supports features like multiple file transfer, viewing file sharing history, average data transfer speed, etc. However, it lacks cross-platform availability as it supports only Android devices.
A paper published by _Paulo R. M. de Andrade, et al._,[2], describes how cross-platform applications work and a comparison of the same with native applications. It draws contrasting features between hybrid, native, and web apps. It shows how operating systems have evolved, what are the needs for cross-platform apps, and the richness offered by them when compared to native apps. It also gives statistical insight into the usage of several types of applications (native, cross-platform, and web apps).
A paper published by _Yoonsik Cheon_ and team [3] describes their experience of converting a natively built Android application into a cross-platform one by means of the Flutter framework. The paper describes their application, which was built on the Java platform, and what it took to convert it to a cross-platform application using the Flutter framework.
A paper by _Mr. D. S. Thosar_ and team, [4], reviews the aspects of file sharing over a local area network in a peer-to-peer setting. The paper discusses the various drawbacks that Bluetooth, SMS (Short Message Service), and Internet-based transfer of data, i.e. files, face, and how a minimal client-to-client transfer technology can help to transfer files.
A paper by _Wenhao Wu_ [1] describes cross-platform app development using React Native and Flutter. React Native is a JavaScript framework that lets developers build apps using JavaScript. It follows a design principle similar to React and uses a declarative approach to design the User Interface (UI). React Native uses the same fundamental user interface building blocks as native Android and iOS apps. The same paper also describes the '**Flutter**' framework which, unlike React Native, does not use JavaScript but rather a newly developed language called '**Dart**'. Dart was developed and is maintained by Google. Flutter provides features like cross-platform support, rich user interface elements, and hot reload.
## III **Proposed Solution: Photon**
Photon is a cross-platform file-transfer application built using the Flutter framework. It uses HTTP to transfer files between devices. One can transfer files between devices that run Photon. (No Wi-Fi router is required; one can use hotspot). Some of the features of photon are,
* One can transfer files across platforms: for instance, between Android, Windows, GNU/Linux-based, and Macintosh-based systems and vice versa.
* One can pick any number of files and transfer them efficiently.
* In contrast to the strictly bound design paradigms such as Model-View-View Model (MVVM), Model-View-Presenter (MVP), Clean Architecture and such, the application uses the "**Material You**" design paradigm, an evolution of the Material Design language. Introduced by Google in 2014, it focuses on customization, personalization, and expressing individuality within the user interface.
* Transfer of data over the internet, while efficient and reliable under favourable conditions, has the drawbacks of being expensive and, under unfavourable conditions, of suffering from latency. In this application, however, data transfer is facilitated by Wireless Fidelity (Wi-Fi) Direct technology, or P2P (Peer to Peer), allowing devices to connect directly with each other via Wi-Fi without an intermediate access point. This method provides faster transfer speeds compared to other technologies such as Bluetooth.
* Uses a secure secret code generation for authentication (internally). Even though the files are streamed at local area network, these files cannot be downloaded/received without using the application. Thus, no external client like a browser can get the transferred files using an URL as the secret code is associated with the URL. It will be regenerated for every session.
* Photon is capable of transferring files at a remarkably high rate, depending upon the Wi-Fi bandwidth; however, no internet connection is required.
### **Architecture of Photon**
Photon was built using the Flutter framework, which is designed to provide a fast, efficient, and reactive application experience. Its widget-based approach and reactive programming model, combined with the built-in navigation system and rich set of tools, make it highly suitable for developing high-quality, responsive, and performant applications.
* Widgets are the building blocks of the Flutter UI (User Interface), from buttons to layout elements, and describe both UI elements and their behaviour. They come in two types
- StatelessWidgets, which cannot change their state, and StatefulWidgets, which can be updated and rebuilt.
* Flutter follows a composition model: complex UI elements are built by combining multiple widgets, which can be nested or combined in various ways to obtain the desired interface.
* Changes to the user interface are triggered by the changes in the application state, defining the reactive behaviour. This approach helps to keep the user interface coordinated with the underlying states.
* Flutter allows developers to make changes in the code and see the changes immediately without restarting the application, speeding the development process by providing a fast feedback loop facilitating iterative development.
* Flutter bypasses the native UI components of the underlying platform and uses its own rendering engine called _Skia_, which renders its UI to the native
canvas, resulting in a highly customizable and performant UI experience.
Photon was built using the Model-View-Controller (MVC) architecture pattern: the Model comprises the data structures that provide the necessary data, the View is the interface that the user sees, and the Controller is the intermediate link between the View and the Model, responsible for handling user input and updating the Model and View appropriately. The separation between the three layers makes the system easier to debug and helps to build stable applications with a clear separation between the application and the business logic. Changes in the View do not affect the Model, making it easier to make changes without breaking the application. In Photon, built with the Flutter framework, multiple screens and views reuse the same logic, reducing code duplication and increasing code reusability.
### **Working of Photon**
Photon works between two interacting devices, one being the Sender of some data and the other being the Receiver of the shared data. Upon starting the Photon application, the user picks between two options (Send and Receive). Upon choosing Send, the Sender first picks the data to be sent (files, pictures, videos, or apk files) from the device (mobile or desktop) and starts the server. The Receiver, upon choosing the Receive option, first discovers the Sender within the same network and, upon discovery, sends a permission request to the Sender, who can either deny or grant permission to start the session between the two. A session is then established in which the Receiver obtains the file index from the Sender, who then transfers the files to the Receiver. After receiving the files, the Receiver can leave and end the session, while the Sender ends the session by stopping the server.
Some of the functionalities are described.
* Writing services enables the end-to-end functionality of the application.
* To facilitate the file picking mechanism, the application must be able to pick files from the device; this capability is provided by the _file_picker_ package, which picks files from local storage.
* To implement the file transfer mechanism, the application uses the Hyper Text Transfer Protocol (HTTP). When Peer 1 sends files to Peer 2, it sets up an HTTP server locally, which Peer 2 accesses by making an HTTP GET request. To improve efficiency, the application does not load the whole file into memory but streams the file as chunks of bytes, so the device never runs out of memory (a minimal sketch of this scheme appears after this list).
* Error Handling is an important feature that helps to handle many unknown scenarios that would otherwise prevent the smooth functioning of the application; Photon is coded with several try-catch blocks for this purpose.
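The transfer scheme described in the bullets above (a local HTTP server, chunked streaming, and a per-session secret code embedded in the URL) can be illustrated with the following language-agnostic sketch, written here in Python even though Photon itself is implemented in Dart/Flutter; all paths, ports, and names are placeholders.

```python
# Minimal sketch of the transfer scheme (Photon implements this in Dart/Flutter).
# The sender serves the picked files over a local HTTP server and streams them
# in chunks; the receiver issues an HTTP GET. A per-session secret code in the
# URL is checked before any bytes are served, so a plain browser cannot fetch
# the files without it.
import http.server
import secrets
import shutil
import urllib.request

SECRET = secrets.token_urlsafe(16)                 # regenerated every session
SHARED_FILES = {"0": "/path/to/picked/file.bin"}   # illustrative file index

class SenderHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        parts = self.path.strip("/").split("/")    # expected: <secret>/<file-id>
        if len(parts) != 2 or parts[0] != SECRET or parts[1] not in SHARED_FILES:
            self.send_error(403)
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.end_headers()
        with open(SHARED_FILES[parts[1]], "rb") as f:
            shutil.copyfileobj(f, self.wfile, length=64 * 1024)  # chunked stream

# Sender side: http.server.HTTPServer(("0.0.0.0", 4040), SenderHandler).serve_forever()

def receive(sender_ip, port, code, file_id, dest_path):
    """Receiver side: stream the HTTP response straight to disk."""
    url = f"http://{sender_ip}:{port}/{code}/{file_id}"
    with urllib.request.urlopen(url) as resp, open(dest_path, "wb") as out:
        shutil.copyfileobj(resp, out, length=64 * 1024)
```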
### **Performance Check of Photon**
To check the performance of Photon, the application was tested across a variety of devices, ranging from mobile-to-mobile transfers to mobile-to-desktop and desktop-to-desktop transfers. Mobile environments include Android and iOS, while desktop environments include Windows (Windows 10, Windows 11), GNU/Linux (Ubuntu 18.10, Ubuntu 20.04), and Macintosh OS (Operating System) (Ventura). The results for various files with sizes ranging from a few megabytes to 5 gigabytes are displayed in Table I.
### **Testing**
One key aspect of system testing for a Flutter app is testing for compatibility across different platforms. This means testing the app on a range of devices, such as smartphones and tablets, with different screen sizes and resolutions, as well as testing the app on different operating systems such as Android, GNU/Linux, Windows and macOS. This helps ensure that the
Figure 1: Sequence diagram showing interaction between peers
app functions correctly on all platforms and that users have a consistent experience across devices. For testing software, various test strategies are to be used such as unit testing, integration testing, system testing etc. The project was put through intensive testing methods to ensure efficiency and accuracy.
### **Conclusion and Future Enhancements**
A comprehensive solution to the problems with data transfer is provided by the suggested Peer- to-Peer Data Transfer App Within Local Area Network, which also delivers a quick, effective, and cross-platform experience. The programme enables direct communication between devices within a local area network by utilising a peer-to-peer network infrastructure, doing away with the requirement for a separate server. This strategy guarantees quicker data transfer speeds while simultaneously saving money. The programme optimises bandwidth usage because it does not rely on the internet, making it the best option for sending large files or sensitive data. Future updates can concentrate on extending support to Android TV devices to further improve the app. With this improvement, the app's functionality would be expanded and smooth data transfer between Android TV devices and other local network devices made possible. Additionally, a user-friendly and visually beautiful application will be made possible through ongoing improvements to the user interface and overall user experience.
## Acknowledgments
We would like to thank our institution "The National Institute of Engineering, Mysuru" For all the support rendered. Also we wish to acknowledge all the contributors of various technical papers and journals for their valuable contribution.
|
2303.03900 | New Perspectives on Regularization and Computation in Optimal
Transport-Based Distributionally Robust Optimization | We study optimal transport-based distributionally robust optimization
problems where a fictitious adversary, often envisioned as nature, can choose
the distribution of the uncertain problem parameters by reshaping a prescribed
reference distribution at a finite transportation cost. In this framework, we
show that robustification is intimately related to various forms of variation
and Lipschitz regularization even if the transportation cost function fails to
be (some power of) a metric. We also derive conditions for the existence and
the computability of a Nash equilibrium between the decision-maker and nature,
and we demonstrate numerically that nature's Nash strategy can be viewed as a
distribution that is supported on remarkably deceptive adversarial samples.
Finally, we identify practically relevant classes of optimal transport-based
distributionally robust optimization problems that can be addressed with
efficient gradient descent algorithms even if the loss function or the
transportation cost function are nonconvex (but not both at the same time). | Soroosh Shafieezadeh-Abadeh, Liviu Aolaritei, Florian Dörfler, Daniel Kuhn | 2023-03-07T13:52:32Z | http://arxiv.org/abs/2303.03900v1 | New Perspectives on Regularization and Computation in Optimal Transport-Based Distributionally Robust Optimization
###### Abstract
We study optimal transport-based distributionally robust optimization problems where a fictitious adversary, often envisioned as nature, can choose the distribution of the uncertain problem parameters by reshaping a prescribed reference distribution at a finite transportation cost. In this framework, we show that robustification is intimately related to various forms of variation and Lipschitz regularization even if the transportation cost function fails to be (some power of) a metric. We also derive conditions for the existence and the computability of a Nash equilibrium between the decision-maker and nature, and we demonstrate numerically that nature's Nash strategy can be viewed as a distribution that is supported on remarkably deceptive adversarial samples. Finally, we identify practically relevant classes of optimal transport-based distributionally robust optimization problems that can be addressed with efficient gradient descent algorithms even if the loss function or the transportation cost function are nonconvex (but not both at the same time).
## 1 Introduction
Because of their relevance for empirical risk minimization in machine learning, stochastic optimization methods are becoming increasingly popular beyond their traditional application domains in operations research and economics [77]. In the wake of the ongoing data revolution and the rapid emergence of ever more complex decision problems, there is also a growing need for stochastic optimization models outputting reliable decisions that are insensitive to input misspecification and easy to compute.
A (static) stochastic optimization problem aims to minimize the expected value \(\mathbb{E}_{Z\sim\mathbb{P}}[\ell(\theta,Z)]\) of an uncertainty-affected loss function \(\ell:\Theta\times\mathcal{Z}\rightarrow(-\infty,\infty]\) across all feasible decisions \(\theta\in\Theta\), where \(Z\in\mathcal{Z}\) is the random vector of all uncertain problem parameters that is governed by some probability distribution \(\mathbb{P}\). To exclude trivialities, we assume throughout the paper that the feasible set \(\Theta\subseteq\mathbb{R}^{m}\) and the support set \(\mathcal{Z}\subseteq\mathbb{R}^{d}\) are non-empty and closed. Despite their simplicity, static stochastic optimization problems are ubiquitous in statistics and machine learning, among many other application domains. However, their practical deployment is plagued by a fundamental challenge, namely that the distribution \(\mathbb{P}\) governing \(Z\) is rarely accessible to the decision-maker. In a data-driven decision situation,
for instance, \(\mathbb{P}\) is only indirectly observable through a set of independent training samples. In this case one may use standard methods from statistics to construct a parametric or non-parametric reference distribution \(\widehat{\mathbb{P}}\) from the data. However, \(\widehat{\mathbb{P}}\) invariably differs from \(\mathbb{P}\) due to inevitable statistical errors, and optimizing in view of \(\widehat{\mathbb{P}}\) instead of \(\mathbb{P}\) may lead to decisions that display a poor performance on test data. For example, in machine learning it is well known that deep neural networks trained in view of the empirical distribution of the training data can easily be fooled by adversarial examples, that is, test samples subject to seemingly negligible noise that cause the neural network to make a wrong prediction. Even worse, the decision problem at hand could suffer from a domain shift, that is, the training data may originate from a distribution other than \(\mathbb{P}\), under which decisions are evaluated.
A promising strategy to mitigate the detrimental effects of estimation errors in the reference distribution \(\widehat{\mathbb{P}}\) would be to minimize the worst-case expected loss with respect to all distributions in some neighborhood of \(\widehat{\mathbb{P}}\). There is ample evidence that, for many natural choices of the neighborhood of \(\widehat{\mathbb{P}}\), this distributionally robust approach leads to tractable optimization models and provides a simple means to derive powerful generalization bounds [15, 27, 28, 31, 48, 60, 84]. Specialized distributionally robust approaches may even enable generalization in the face of domain shifts [28, 29, 51, 87] or may make the training of deep neural networks more resilient against adversarial attacks [47, 79, 83, 88].
More formally, distributionally robust optimization (DRO) captures the uncertainty about the unknown distribution \(\mathbb{P}\) through an ambiguity set \(\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\), that is, an \(\varepsilon\)-neighborhood of the reference distribution \(\widehat{\mathbb{P}}\) with respect to a distance function on the space \(\mathcal{P}(\mathcal{Z})\) of all distributions supported on \(\mathcal{Z}\). Using this ambiguity set, the DRO problem of interest is formulated as the minimax problem
\[\inf_{\theta\in\Theta}\sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{ \mathbb{P}})}\,\mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(\theta,Z)\right]. \tag{1}\]
Problem (1) can be viewed as a zero-sum game between the decision-maker, who chooses the decision \(\theta\), and a fictitious adversary, often envisioned as 'nature', who chooses the distribution \(\mathbb{Q}\) of \(Z\). From now on we define nature's feasible set \(\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})=\{\mathbb{Q}\in\mathcal{P}( \mathcal{Z}):d_{c}(\mathbb{Q},\widehat{\mathbb{P}})\leq\varepsilon\}\) as a pseudo-ball of radius \(\varepsilon\geq 0\) around \(\widehat{\mathbb{P}}\) with respect to an optimal transport discrepancy \(d_{c}:\mathcal{P}(\mathcal{Z})\times\mathcal{P}(\mathcal{Z})\to[0,+\infty]\) defined through
\[d_{c}(\mathbb{P},\widehat{\mathbb{P}})=\inf_{\pi\in\Pi(\mathbb{P},\widehat{ \mathbb{P}})}\mathbb{E}_{\pi}[c(Z,\widehat{Z})].\]
Here, \(c:\mathcal{Z}\times\mathcal{Z}\to[0,+\infty]\) is a prescribed transportation cost function satisfying the identity of indiscernibles (\(c(z,\widehat{z})=0\) if and only if \(z=\widehat{z}\)), and \(\Pi(\mathbb{P},\widehat{\mathbb{P}})\) represents the set of all joint probability distributions of \(Z\) and \(\widehat{Z}\) with marginals \(\mathbb{P}\) and \(\widehat{\mathbb{P}}\), respectively. It is conventional to refer to elements of \(\Pi(\mathbb{P},\widehat{\mathbb{P}})\) as couplings or transportation plans. If the transportation cost function \(c\) is symmetric, then \(d_{c}\) constitutes a semimetric on \(\mathcal{P}_{c}(\mathcal{Z})=\{\mathbb{Q}\in\mathcal{P}(\mathcal{Z}):\mathbb{E }_{Z\sim\mathbb{Q}}[c(Z,\widehat{z}_{0})]<\infty\}\), where \(\widehat{z}_{0}\in\mathcal{Z}\) is an arbitrary reference point. Thus, \(d_{c}\) satisfies all axioms of a metric but the triangle inequality. Moreover, if \(c(z,\widehat{z})=\|z-\widehat{z}\|^{p}\) for some norm \(\|\cdot\|\) on (the span of) \(\mathcal{Z}\) and for some exponent \(p\in\mathbb{N}\), then \(W_{p}=\sqrt[p]{d_{c}}\) constitutes a metric on \(\mathcal{P}_{c}(\mathcal{Z})\)[86, SS 6], which is termed the \(p\)-th Wasserstein distance. Note that the ambiguity set \(\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\) constructed in this way may be interpreted as the family of all probability distributions \(\mathbb{Q}\) that can be obtained by reshaping the reference distribution \(\widehat{\mathbb{P}}\) at a finite cost of at most \(\varepsilon\geq 0\), where the cost of moving unit probability from \(\widehat{z}\) to \(z\) is given by \(c(z,\widehat{z})\).
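For intuition, when both distributions are discrete the discrepancy \(d_{c}(\mathbb{P},\widehat{\mathbb{P}})\) reduces to a finite transportation linear program. The following sketch solves it with scipy for the illustrative cost \(c(z,\widehat{z})=\|z-\widehat{z}\|_{2}^{2}\); the atoms, weights, and cost are chosen purely for illustration, and taking the square root of the returned value would yield the corresponding 2-Wasserstein distance \(W_{2}\).

```python
import numpy as np
from scipy.optimize import linprog

def discrete_ot_discrepancy(p, q, C):
    """d_c between discrete distributions: p (n,) and q (m,) are the atom
    probabilities of P and P-hat, and C[i, j] = c(z_i, zhat_j).  Solves
    min <C, pi>  s.t.  row sums of pi equal p, column sums equal q, pi >= 0."""
    n, m = C.shape
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):                 # row-marginal constraints
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):                 # column-marginal constraints
        A_eq[n + j, j::m] = 1.0
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([p, q]),
                  bounds=(0, None), method="highs")
    return res.fun

# Illustrative example with c(z, zhat) = ||z - zhat||_2^2.
z = np.array([[0.0], [1.0], [2.0]])
zhat = np.array([[0.5], [1.5]])
C = np.array([[np.sum((a - b) ** 2) for b in zhat] for a in z])
print(discrete_ot_discrepancy(np.ones(3) / 3, np.ones(2) / 2, C))
```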
The following regularity conditions will be assumed to hold throughout the paper.
**Assumption 1.1** (Continuity assumptions).:
1. The transportation cost function \(c(z,\widehat{z})\) is lower semi-continuous in \((z,\widehat{z})\).
2. For any \(\theta\in\Theta\), the loss function \(\ell(\theta,\widehat{z})\) is upper semi-continuous and \(\widehat{\mathbb{P}}\)-integrable in \(\widehat{z}\).
Assumption 1.1 (i) ensures that the optimal transport problem in the definition of \(d_{c}(\mathbb{P},\widehat{\mathbb{P}})\) is solvable [86, Theorem 4.1] and admits a strong dual linear program over a space of integrable functions [86, Theorem 5.10]. Together, Assumptions 1.1 (i) and 1.1 (ii) ensure that the inner maximization problem in (1) admits a strong dual minimization problem [14, Theorem 1]; see also Proposition 1.2 below. Optimal transport-based DRO problems of the form (1) are nowadays routinely studied in diverse areas such as statistical learning [2, 12, 13, 20, 21, 32, 40, 74, 75], estimation and filtering [64, 65, 76], control [24, 90], dynamical systems theory [1, 17], hypothesis testing [34], inverse optimization [61], and chance constrained programming [22, 38, 39, 78, 89] etc.; see [46] for a recent survey. The existing literature focuses almost exclusively on Wasserstein ambiguity sets, which are obtained by setting the transportation cost function to \(c(z,\widehat{z})=\|z-\widehat{z}\|^{p}\) for some norm \(\|\cdot\|\) on \(\mathbb{R}^{d}\). Allowing for more general transportation cost functions enables the decision-maker to inject prior information about the likelihood of certain subsets of \(\mathcal{Z}\) into the definition of the ambiguity set. For example, setting \(c(z,\widehat{z})=\infty\cdot\mathds{1}_{z\not\in\mathbb{A}(\widehat{z})}\) for some measurable set \(\mathbb{A}(\widehat{z})\subseteq\mathcal{Z}\) with \(\widehat{z}\in\mathbb{A}(\widehat{z})\) ensures that the probability mass located at \(\widehat{z}\) cannot be moved outside of \(\mathbb{A}(\widehat{z})\). More generally, assigning \(c(z,\widehat{z})\) a large value for every \(\widehat{z}\) in the support of the reference distribution \(\widehat{\mathbb{P}}\) makes it expensive for nature to transport probability mass to \(z\). Thus, \(z\) has a low probability under every distribution in the ambiguity set. The following proposition describes a strong dual of nature's inner maximization problem in (1) and reveals how \(c\) impacts the solution of the underlying DRO problem. This strong duality result was first established in [60, 92] for finite-dimensional problems with discrete reference distributions and then generalized in [14, 33].
**Proposition 1.2** (Strong duality).: If Assumption 1.1 holds, then we have
\[\sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})}\ \mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(\theta,Z)\right]=\inf_{\lambda\geq 0 }\ \lambda\varepsilon+\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}}\left[ \ell_{c}(\theta,\lambda,\widehat{Z})\right] \tag{2}\]
for any \(\theta\in\Theta\) and \(\varepsilon>0\), where \(\ell_{c}(\theta,\lambda,\widehat{z})=\sup_{z\in\mathcal{Z}:\,c(z,\widehat{z}) <\infty}\ \ell(\theta,z)-\lambda c(z,\widehat{z})\).
It is well known that checking whether a given distribution \(\mathbb{Q}\in\mathcal{P}(\mathcal{Z})\) belongs to \(\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\) is \(\#\)P-hard even if \(\widehat{\mathbb{P}}\) is discrete [81, Theorem 2.2]. Thus, unless \(\#\mathrm{P}=\mathrm{FP}\), there is no efficient algorithm to check feasibility in the worst-case expectation problem on the left hand side of (2). Thanks to Proposition 1.2, however, the optimal value of this problem can often be computed efficiently by solving the dual problem on the right hand side of (2), which constitutes a univariate convex stochastic program. Indeed, the dual objective function involves the expected value of \(\ell_{c}(\theta,\lambda,\widehat{Z})\) with respect to the reference distribution \(\widehat{\mathbb{P}}\), which is convex in \(\lambda\). In the following, we will refer to \(\ell_{c}\) as the \(c\)-transform of \(\ell\). A key challenge towards solving the dual problem lies in evaluating \(\ell_{c}\) and in establishing useful structural properties of \(\ell_{c}\) as a function of \(\lambda\). If \(-\ell\) and \(c\) are (piecewise) convex in \(z\) and if \(\mathcal{Z}\) is a convex set, for example, then robust optimization techniques can be used to express \(\ell_{c}(\theta,\lambda,\widehat{z})\) as the optimal value of a finite convex minimization problem. If, additionally, the reference distribution \(\widehat{\mathbb{P}}\) has a finite support, then the entire dual problem on the right hand side of (2) simplifies to a finite convex minimization problem amenable to off-the-shelf solvers [60, 93]. Treating \(\theta\in\Theta\) as an additional decision variable finally yields a reformulation of the original DRO problem (1) as a single (possibly nonconvex) finite minimization problem. This reformulation is convex if \(\ell\) is convex in \(\theta\) and if \(\Theta\) is a convex set.
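To make the duality in (2) concrete, the following sketch compares the two routes numerically in a setting where the support set is replaced by a finite grid, so that the inner supremum becomes a finite maximum: the worst-case expectation is computed once as a linear program over couplings and once via the univariate dual problem over \(\lambda\). The hinge-type loss, the cost and all data are invented for illustration, and the finite-grid simplification is ours; note that neither route requires checking membership of any particular distribution in the ambiguity set.

```python
# A self-contained numerical check (not from the paper) of the duality in (2)
# when Z is a finite grid: the worst-case expectation over the ambiguity set,
# computed as a linear program over couplings, is compared with the value of
# the univariate dual problem over lambda. Loss, cost and data are toy choices.
import numpy as np
from scipy.optimize import linprog, minimize_scalar

zgrid = np.linspace(-3.0, 3.0, 121)            # finite support set Z
zhat = np.array([-1.0, 0.2, 1.5])              # atoms of the reference distribution
p = np.array([0.3, 0.4, 0.3])                  # their probabilities
eps = 0.5                                      # radius of the ambiguity set
theta = 0.7                                    # fixed decision

loss = np.maximum(1.0 - theta * zgrid, 0.0)    # hinge-type loss l(theta, z) at fixed theta
C = np.abs(zgrid[None, :] - zhat[:, None])     # cost c(z_k, zhat_j) = |z_k - zhat_j|
J, K = C.shape

# Primal: max sum_{j,k} pi_{jk} * loss_k  s.t.  sum_k pi_{jk} = p_j,
#         sum_{j,k} pi_{jk} * C_{jk} <= eps,  pi >= 0.  (linprog minimizes, so negate)
A_eq = np.zeros((J, J * K))
for j in range(J):
    A_eq[j, j * K:(j + 1) * K] = 1.0
res = linprog(-np.tile(loss, J), A_ub=C.reshape(1, -1), b_ub=[eps],
              A_eq=A_eq, b_eq=p, bounds=(0, None), method="highs")
primal = -res.fun

# Dual (2): min_{lambda >= 0} lambda*eps + sum_j p_j * max_k [loss_k - lambda*C_{jk}].
def dual_obj(lam):
    return lam * eps + p @ np.max(loss[None, :] - lam * C, axis=1)

sol = minimize_scalar(dual_obj, bounds=(0.0, 50.0), method="bounded")
print("worst-case expectation (primal LP):", primal)
print("dual value over lambda            :", sol.fun)   # agrees up to solver tolerance
```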
If \(-\ell\) or \(c\) fail to be (piecewise) convex in \(z\), \(\mathcal{Z}\) fails to be convex, or \(\widehat{\mathbb{P}}\) fails to have a finite support, then exact convex reformulation results are scarce (an exception is described in [60, SS 6.2]). In
these cases, one may still attempt to attack the dual problem in (2) directly with (stochastic) gradient descent-type algorithms [16, 79]. The efficacy of these methods is predicated on the structural properties of the \(c\)-transform \(\ell_{c}\). If the transportation cost function is representable as \(c(z,\widehat{z})=\Psi(z-\widehat{z})\) for some real-valued univariate function \(\Psi(\cdot)\), and if \(\mathcal{Z}=\mathbb{R}^{d}\), for instance, then \(-\ell_{c}(\theta,\lambda,\cdot)\) is readily recognized as the infimal convolution of \(-\ell(\theta,\cdot)\) and \(\lambda\Psi(\cdot)\). Hence, it can be interpreted as an epigraphical regularization of \(-\ell(\theta,\cdot)\). Indeed, the epigraph of the infimal convolution \(-\ell_{c}(\theta,\lambda,\cdot)\) (essentially) coincides with the Minkowski sum of the epigraphs of \(-\ell(\theta,\cdot)\) and \(\lambda\Psi(\cdot)\)[71, Exercise 1.28 (a)]. Epigraphical regularizations are ubiquitous in approximation theory, and it is known that, as \(\lambda\) tends to infinity, \(-\ell_{c}(\theta,\lambda,\cdot)\) provides an increasingly accurate approximation for \(-\ell(\theta,\cdot)\) that inherits desirable regularity properties from \(\Psi(\cdot)\) such as uniform continuity or smoothness [3, 4, 18, 68]. Specifically, if \(\Psi(\cdot)=\|\cdot\|^{2}\), then \(-\ell_{c}(\theta,\lambda,\cdot)\) reduces to the Moreau-Yosida regularization (or Moreau envelope) [71, Chapter 1.G], which is at the heart of most proximal algorithms [67]. In addition, if \(\Psi(\cdot)=\|\cdot\|\), then \(-\ell_{c}(\theta,\lambda,\cdot)\) reduces to the Pasch-Hausdorff envelope (or Lipschitz regularization) proposed in [71, Example 9.11] and [37], which constitutes the largest \(\lambda\)-Lipschitz continuous function majorized by \(-\ell(\theta,\cdot)\).
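The following sketch (again with invented data) evaluates the \(c\)-transform on a one-dimensional grid for the two cost functions just mentioned. It illustrates that both envelopes majorize the loss, approach it as \(\lambda\) grows, and that the norm cost produces an envelope whose slope never exceeds \(\lambda\).

```python
# A small sketch (not from the paper) computing the c-transform
# l_c(theta, lambda, zhat) = sup_z l(theta, z) - lambda * c(z, zhat) on a 1-D grid,
# for c(z, zhat) = (z - zhat)^2, whose negative transform is the Moreau envelope of
# -l, and for c(z, zhat) = |z - zhat|, which yields the Pasch-Hausdorff (Lipschitz)
# envelope. The loss below is a toy choice with theta absorbed into its shape.
import numpy as np

z = np.linspace(-4.0, 4.0, 801)                      # grid discretizing Z

def loss(z):
    # nonsmooth, non-concave toy loss in z
    return -np.abs(z - 1.0) + 0.5 * np.sin(3.0 * z)

def c_transform(lam, cost):
    # For every zhat on the grid, maximize loss(z) - lam * cost(z - zhat) over the grid.
    diff = z[None, :] - z[:, None]                   # diff[i, k] = z_k - zhat_i
    return np.max(loss(z)[None, :] - lam * cost(diff), axis=1)

for lam in [0.5, 2.0, 10.0]:
    moreau = c_transform(lam, lambda d: d ** 2)      # quadratic cost
    lipsch = c_transform(lam, lambda d: np.abs(d))   # norm cost
    gap_m = np.max(moreau - loss(z))
    gap_l = np.max(lipsch - loss(z))
    # Both envelopes majorize the loss and shrink towards it as lambda grows.
    print(f"lambda={lam:5.1f}  max gap (quadratic)={gap_m:.3f}  max gap (norm)={gap_l:.3f}")

# The envelope obtained from the norm cost is lambda-Lipschitz (up to rounding):
lam = 2.0
lipsch = c_transform(lam, lambda d: np.abs(d))
slopes = np.abs(np.diff(lipsch)) / np.diff(z)
print("max slope of the Pasch-Hausdorff envelope:", slopes.max(), "(lambda =", lam, ")")
```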
This paper aims to develop a deeper understanding of optimal transport-based DRO problems with general transportation cost functions, to showcase different regularizing effects of robustification and to elucidate the connections between DRO and game theory. Specifically, we show that robustification is intimately related to various forms of variation and Lipschitz regularization even if the transportation cost function is not (some power of) a metric. We also derive conditions for the existence and the computability of a Nash equilibrium between the decision-maker and nature in the DRO problem (1), and we demonstrate that nature's Nash strategy can be viewed as a distribution that is supported on remarkably deceptive adversarial examples. Finally, we identify practically relevant classes of optimal transport-based DRO problems that can be addressed with efficient gradient descent algorithms even if the loss function or the transportation cost function are nonconvex (but not both at the same time).
The main contributions of this paper can be summarized as follows.
1. **Existence of Nash equilibria.** We establish weak conditions under which the DRO problem (1), viewed as a zero-sum game between the decision-maker and nature, admits a Nash equilibrium.
2. **Computation of Nash equilibria.** We prove that the dual DRO problem, which is obtained from (1) by interchanging the order of minimization and maximization, can be reduced to a finite convex program under the same set of conditions that already ensured that the primal DRO problem (1) admits a finite convex reduction. We also show that any solutions of the primal and dual DRO problems represent Nash strategies of the decision-maker and nature, respectively.
3. **Adversarial examples.** Every Nash strategy of nature constitutes a best response to a Nash strategy of the decision-maker, but not vice versa. In the context of a binary classification task, we show experimentally that nature's Nash strategy as well as nature's best response to the decision-maker's Nash strategy, as computed by Gurobi, encodes an adversarial dataset. While the adversarial examples implied by nature's best response can only deceive an algorithm, the adversarial examples implied by nature's Nash strategy can even deceive a human.
4. **Higher-order variation and Lipschitz regularization.** We show that, under natural regularity conditions, the worst-case expected loss in (1) is bounded above by the sum of the expected loss under the reference distribution and several regularization terms that penalize certain \(L^{p}\)-norms
or Lipschitz moduli of the higher-order derivatives of the loss function. This result generalizes and unifies several existing results, which have revealed intimate connections between robustification and gradient regularization [32], Hessian regularization [6] and Lipschitz regularization [60].
5. **Numerical solution of nonconvex DROs.** By leveraging techniques from nonconvex optimization such as Toland's duality principle, we show that distributionally robust linear prediction models, which emerge in portfolio selection, regression, classification, newsvendor, linear inverse or phase retrieval problems, can sometimes be solved efficiently by gradient descent-type algorithms even if the loss function or the transportation cost function are nonconvex.
The rest of this paper is organized as follows. In Section 2 we investigate sufficient conditions for the existence and computability of Nash equilibria between the decision-maker and nature, and in Section 3 we shed new light on the intimate relations between regularization and robustification and discuss the numerical solution of certain nonconvex DRO problems. Numerical results are reported in Section 4.
Notation. We denote the inner product of two vectors \(x,y\in\mathbb{R}^{d}\) by \(\langle x,y\rangle\), and for any norm \(\|\cdot\|\) on \(\mathbb{R}^{d}\), we use \(\|\cdot\|_{*}\) to denote its dual norm defined through \(\|y\|_{*}=\sup\{\langle x,y\rangle:\|x\|\leq 1\}\). The domain of a function \(f:\mathbb{R}^{d}\to[-\infty,\infty]\) is defined as \(\text{dom}(f)=\{x\in\mathbb{R}^{d}:f(x)<\infty\}\). The function \(f\) is proper if \(f(x)>-\infty\) for all \(x\in\mathbb{R}^{d}\) and \(\text{dom}(f)\neq\emptyset\). The convex conjugate of \(f\) is defined as \(f^{*}(y)=\sup_{x\in\mathbb{R}^{d}}\langle y,x\rangle-f(x)\). A function \(f\) is (positively) homogeneous of degree \(p\geq 1\) if \(f(\lambda x)=\lambda^{p}f(x)\) for any \(x\in\mathbb{R}^{d}\) and \(\lambda>0\). The perspective function of a proper, convex and lower semi-continuous function \(f:\mathbb{R}^{d}\to(-\infty,\infty]\) is defined through \(f(x,\lambda)=\lambda f(x/\lambda)\) if \(\lambda>0\) and \(f(x,\lambda)=\delta^{*}_{\text{dom}(f^{*})}(x)\) if \(\lambda=0\), where \(\delta_{\text{dom}(f^{*})}\) is the indicator function of the set \(\text{dom}(f^{*})\). By slight abuse of notation, however, we use \(\lambda f(x/\lambda)\) to denote the perspective function \(f(x,\lambda)\) for all \(\lambda\geq 0\). The set of all positive integers up to \(n\in\mathbb{N}\) is denoted by \([n]\).
## 2 Nash Equilibria in DRO
By construction, the DRO problem (1) constitutes a zero-sum game between an agent, who chooses the decision \(\theta\in\Theta\), and some fictitious adversary or 'nature,' who chooses the distribution \(\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\). In this section we will show that, under some mild technical conditions, any optimal decision \(\theta^{\star}\) that solves the primal DRO problem (1) and any optimal distribution \(\mathbb{Q}^{\star}\) that solves the dual DRO problem
\[\sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})}\inf_{ \theta\in\Theta}\ \mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(\theta,Z)\right] \tag{3}\]
form a Nash equilibrium. Thus, \(\theta^{\star}\) and \(\mathbb{Q}^{\star}\) satisfy the saddle point condition
\[\mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(\theta^{\star},Z)\right]\leq\mathbb{E }_{Z\sim\mathbb{Q}^{\star}}\left[\ell(\theta^{\star},Z)\right]\leq\mathbb{E}_{ Z\sim\mathbb{Q}^{\star}}\left[\ell(\theta,Z)\right]\quad\forall\theta\in \Theta,\ \mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}}). \tag{4}\]
In statistics, an optimal solution \(\theta^{\star}\) of the DRO problem (1) is sometimes referred to as a minimax estimator or a robust estimator. When \(\theta^{\star}\) and \(\mathbb{Q}^{\star}\) satisfy the saddle point condition (4), then the robust estimator \(\theta^{\star}\) solves the stochastic program \(\min_{\theta\in\Theta}\ \mathbb{E}_{Z\sim\mathbb{Q}^{\star}}\left[\ell(\theta,Z)\right]\), which minimizes the expected loss under the crisp distribution \(\mathbb{Q}^{\star}\); see also [54, Chapter 5]. For this reason, \(\mathbb{Q}^{\star}\) is often referred to as a least favorable distribution. The existence of \(\mathbb{Q}^{\star}\) makes the robust estimator \(\theta^{\star}\) particularly attractive. Indeed, if \(\mathbb{Q}^{\star}\) exists, then \(\theta^{\star}\) solves a classical stochastic program akin to the ideal learning problem one would want to solve if the true distribution was known. In the remainder of this section, we will first identify mild regularity conditions under which a Nash equilibrium is guaranteed to exist (Section 2.1),
and then we will show that if the reference distribution is discrete and the loss function is convex-piecewise concave, then the dual DRO problem (3) can be reformulated as a finite convex program (Section 2.2). This reformulation will finally enable us to construct a least favorable distribution.
### Existence of Nash Equilibria
The following assumption is instrumental to prove the existence of a Nash equilibrium.
**Assumption 2.1** (Transportation cost).:
1. There exists a reference point \(\widehat{z}_{0}\in\mathcal{Z}\) such that \(\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}}[c(\widehat{Z},\widehat{z}_{0})]\leq C\) for some constant \(C<\infty\).
2. There exists a metric \(d(z,\widehat{z})\) on \(\mathcal{Z}\) with compact sublevel sets such that \(c(z,\widehat{z})\geq d^{p}(z,\widehat{z})\) for some exponent \(p\in\mathbb{N}\).
Assumption 2.1 will enable us to prove that the ambiguity set \(\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\) is weakly compact. We emphasize that Assumption 2.1 is unrestrictive and allows for transportation costs that fail to display common properties such as symmetry, convexity, homogeneity, or the triangle inequality.
**Example 2.2** (Local Mahalanobis transportation cost).: The local Mahalanobis transportation cost is defined as \(c(z,\widehat{z})=\langle z-\widehat{z},A(\widehat{z})(z-\widehat{z})\rangle\), where \(A(\widehat{z})\) represents a positive definite matrix for each \(\widehat{z}\in\mathcal{Z}\)[16]. This transportation cost satisfies Assumption 2.1 (i) if the reference distribution \(\widehat{\mathbb{P}}\) has finite second moments, and it satisfies Assumption 2.1 (ii) for \(d(z,\widehat{z})=\sqrt{\alpha}\,\|z-\widehat{z}\|_{2}\) and \(p=2\) if \(\alpha=\inf_{\widehat{z}\in\mathcal{Z}}\lambda_{\min}(A(\widehat{z}))>0\). However, the local Mahalanobis transportation cost fails to be symmetric.
**Example 2.3** (Discrete metric).: If the transportation cost is identified with the (nonconvex) discrete metric \(c(z,\widehat{z})=\mathds{1}_{\{z\neq\widehat{z}\}}\), then the optimal transport distance reduces to the total variation distance [86, SS 6]. In this case Assumption 2.1 (i) is trivially satisfied, and Assumption 2.1 (ii) holds for \(d(z,\widehat{z})=c(z,\widehat{z})\) and for any \(p\geq 1\) provided that the support set \(\mathcal{Z}\) is compact.
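As a quick numerical companion to Example 2.2, the sketch below constructs a particular local Mahalanobis cost. The matrices \(A(\widehat{z})\) are an arbitrary positive definite choice whose smallest eigenvalue equals one for every \(\widehat{z}\), so Assumption 2.1 (ii) holds with \(d\) the Euclidean metric and \(p=2\); the code spot-checks the lower bound \(c\geq d^{p}\) and the lack of symmetry on random pairs.

```python
# An illustrative sketch (not from the paper) of the local Mahalanobis cost from
# Example 2.2. The positive definite matrices A(zhat) are an invented choice.
import numpy as np

rng = np.random.default_rng(1)
dim = 3

def A(zhat):
    # identity plus a rank-one positive semidefinite term that varies with zhat;
    # its smallest eigenvalue is exactly 1, so alpha = inf_zhat lambda_min(A(zhat)) = 1
    return np.eye(dim) + 0.3 * np.outer(np.sin(zhat), np.sin(zhat))

def c(z, zhat):
    diff = z - zhat
    return diff @ A(zhat) @ diff              # local Mahalanobis transportation cost

# spot check of c(z, zhat) >= ||z - zhat||_2^2 (Assumption 2.1 (ii) with p = 2, alpha = 1)
ok = True
for _ in range(1000):
    z, zh = rng.normal(size=dim), rng.normal(size=dim)
    ok &= c(z, zh) >= np.sum((z - zh) ** 2) - 1e-12
print("c >= d^p on all sampled pairs:", bool(ok))

# the cost is generally asymmetric because A(zhat) depends on the second argument
z0, z1 = rng.normal(size=dim), rng.normal(size=dim)
print("c(z0, z1) =", c(z0, z1), " c(z1, z0) =", c(z1, z0))
```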
To prove the existence of Nash equilibria, we also need the following assumption on the loss function.
**Assumption 2.4** (Loss function).:
1. For any \(z\in\mathcal{Z}\), the function \(\ell(\theta,z)\) is convex and lower semi-continuous in \(\theta\).
2. For any \(\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\) and any \(\bar{\theta}\in\Theta\), there exists a neighborhood \(\mathcal{U}\) of \(\bar{\theta}\) such that the family of functions \(\ell^{-}(\theta,z)=-\min\{0,\ell(\theta,z)\}\) parametrized by \(\theta\in\mathcal{U}\) is uniformly \(\mathbb{Q}\)-integrable in \(z\).
3. For any \(\theta\in\Theta\), there are \(g>0\), \(\widehat{z}_{0}\in\mathcal{Z}\) and \(r\in(0,p)\) with \(\ell(\theta,z)\leq g\left[1+d^{r}(z,\widehat{z}_{0})\right]\) for all \(z\in\mathcal{Z}\).
As we will see below, Assumptions 2.4 (i) and 2.4 (ii) imply that \(\mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(\theta,Z)\right]\) is lower semi-continuous in \(\theta\), while Assumption 2.4 (iii) implies that \(\mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(\theta,Z)\right]\) is weakly upper semi-continuous in \(\mathbb{Q}\). Note that Assumption 2.4 (ii) trivially holds for non-negative loss functions. The following proposition establishes that the inner maximization problem in (1) is solvable, which is a key ingredient needed to prove the existence of a Nash equilibrium. This proposition generalizes [91, Theorems 1 and 2] to ambiguity sets defined in terms of general optimal transport discrepancies.
**Proposition 2.5** (Worst-case expectation).: If the transportation cost satisfies Assumptions 1.1 (i) and 2.1, then the ambiguity set \(\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\) is weakly compact. If in addition the loss function satisfies Assumptions 1.1 (ii) and 2.4 (iii), then the supremum in (2) is finite and attained for any \(\theta\in\Theta\).
Proof.: Throughout this proof we use \(W_{p}\) to denote the \(p\)-th Wasserstein distance with respect to the metric \(d\) from Assumption 2.1 (ii). Specifically, for any probability distributions \(\mathbb{Q}\) and \(\widehat{\mathbb{P}}\) on \(\mathcal{Z}\) we set
\[W_{p}(\mathbb{Q},\widehat{\mathbb{P}})=\left(\inf\nolimits_{\pi\in\Pi(\mathbb{ Q},\widehat{\mathbb{P}})}\mathbb{E}_{(Z,\widehat{Z})\sim\pi}[d(Z,\widehat{Z})^{p}] \right)^{1/p}. \tag{5}\]
In addition, we define the Wasserstein space \(\mathcal{W}_{p}(\mathcal{Z})\subseteq\mathcal{P}(\mathcal{Z})\) as the family of all probability distributions \(\mathbb{Q}\) on \(\mathcal{Z}\) with \(\mathbb{E}_{Z\sim\mathbb{Q}}[d(Z,\widehat{z}_{0})^{p}]<\infty\), where \(\widehat{z}_{0}\in\mathcal{Z}\) is the reference point from Assumption 2.1 (i). By using the triangle inequality for the metric \(d\), one can show that the Wasserstein space is in fact independent of the choice of \(\widehat{z}_{0}\). Next, we define the \(p\)-th Wasserstein ball of radius \(\varepsilon\geq 0\) around \(\widehat{\mathbb{P}}\) as
\[\mathbb{W}_{\varepsilon}(\widehat{\mathbb{P}})=\left\{\mathbb{Q}\in\mathcal{ P}(\mathcal{Z}):W_{p}(\mathbb{Q},\widehat{\mathbb{P}})\leq\varepsilon\right\}.\]
Assumptions 2.1 (i) and 2.1 (ii) imply that \(\widehat{\mathbb{P}}\in\mathcal{W}_{p}(\mathcal{Z})\), which in turn implies via the triangle inequality for the \(p\)-th Wasserstein distance \(W_{p}\) that \(\mathbb{W}_{\varepsilon}(\widehat{\mathbb{P}})\subseteq\mathcal{W}_{p}(\mathcal{Z})\); see also [91, Lemma 1]. Assumption 2.1 (ii) further ensures that \(d_{c}\geq W_{p}^{p}\), and thus the ambiguity set \(\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\) constructed from the optimal transport discrepancy \(d_{c}\) is covered by the ambiguity set \(\mathbb{W}_{\varepsilon^{1/p}}(\widehat{\mathbb{P}})\) constructed from the Wasserstein distance \(W_{p}\).
In order to prove that \(\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\) is weakly compact, note first that the Wasserstein ball \(\mathbb{W}_{\varepsilon^{1/p}}(\widehat{\mathbb{P}})\) is weakly sequentially compact thanks to [91, Theorem 1]. Since the notions of sequential compactness and compactness are equivalent on metric spaces [62, Theorem 28.2] and since the weak topology on \(\mathcal{P}(\mathcal{Z})\) is metrized by the Prokhorov metric [11, Theorem 6.8], \(\mathbb{W}_{\varepsilon^{1/p}}(\widehat{\mathbb{P}})\) is also weakly compact. Next, recall that the transportation cost \(c\) is lower semi-continuous by virtue of Assumption 1.1 (i). This implies via [23, Lemma 5.2] that \(d_{c}(\cdot,\widehat{\mathbb{P}})\) is lower semi-continuous with respect to the weak topology on \(\mathcal{P}(\mathcal{Z})\). Therefore, \(\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\) is weakly closed as a sublevel set of a weakly lower semi-continuous mapping. As any weakly closed subset of a weakly compact set is weakly compact, \(\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\) is indeed weakly compact.
Next, we prove that the supremum in (2) is finite. As \(\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\subseteq\mathbb{W}_{\varepsilon^ {1/p}}(\widehat{\mathbb{P}})\), it is clear that
\[\sup\limits_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})} \mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(\theta,Z)\right]\leq\sup\limits_{ \mathbb{Q}\in\mathbb{W}_{\varepsilon^{1/p}}(\widehat{\mathbb{P}})}\mathbb{E}_ {Z\sim\mathbb{Q}}\left[\ell(\theta,Z)\right].\]
The supremum on the right side of the above inequality is finite by [91, Theorem 2], which applies thanks to Assumptions 1.1 (ii), 2.1 (ii) and 2.4 (iii). In particular, Assumption 2.4 (iii) ensures that there exist \(g>0\), \(\widehat{z}_{0}\in\mathcal{Z}\), and \(r\in(0,p)\) such that the loss function satisfies the required growth condition
\[\ell(\theta,z)\leq g\left[1+d^{r}(z,\widehat{z}_{0})\right]\leq g\left[1+1+d^{ p}(z,\widehat{z}_{0})\right]\leq 2g\left[1+d^{p}(z,\widehat{z}_{0})\right]\quad \forall z\in\mathcal{Z}.\]
Thus, the supremum in (2) has a finite upper bound. On the other hand, it is bounded below by \(\mathbb{E}_{Z\sim\widehat{\mathbb{P}}}[\ell(\theta,Z)]\), which is finite by virtue of Assumption 1.1 (ii).
It remains to be shown that the supremum in (2) is attained. To this end, we first observe that the objective function \(\mathbb{E}_{Z\sim\mathbb{Q}}[\ell(\theta,Z)]\) is weakly upper semi-continuous in \(\mathbb{Q}\) over the ambiguity set \(\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\). This follows immediately from the proof of [91, Theorem 3], which reveals that \(\mathbb{E}_{Z\sim\mathbb{Q}}[\ell(\theta,Z)]\) is weakly upper semi-continuous over \(\mathbb{W}_{\varepsilon^{1/p}}(\widehat{\mathbb{P}})\), and from the observation that \(\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\subseteq\mathbb{W}_{ \varepsilon^{1/p}}(\widehat{\mathbb{P}})\). As \(\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\) is weakly compact, Weierstrass' theorem then guarantees that the supremum in (2) is indeed attained.
Armed with Proposition 2.5, we are now ready to state the main result of this section.
**Theorem 2.6** (Minimax theorem).: If Assumptions 1.1, 2.1 and 2.4 hold and \(\Theta\) is convex, then we have
\[\inf\limits_{\theta\in\Theta}\max\limits_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}( \widehat{\mathbb{P}})}\mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(\theta,Z)\right]= \max\limits_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})} \inf\limits_{\theta\in\Theta}\mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(\theta,Z) \right]. \tag{6}\]
Proof.: The infimum and the maximum in (6) can be interchanged thanks to Sion's minimax theorem [80, Corollary 3.3]. To see that Sion's minimax theorem applies, note first that \(\Theta\) is convex by assumption and that \(\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\) is both convex as well as weakly compact. The convexity of \(\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\) follows from [86, Theorem 4.8], which applies because the transportation cost \(c\) is non-negative and lower semi-continuous by virtue of Assumption 1.1 (i). The weak compactness of \(\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\) was established in Proposition 2.5. Next, note that the objective function \(\mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(\theta,Z)\right]\) inherits convexity and lower-semicontinuity in \(\theta\) from the loss function \(\ell(\theta,z)\). Specifically, lower semi-continuity holds because
\[\liminf_{\theta_{n}\to\theta}\mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(\theta_{n},Z)\right]\geq\mathbb{E}_{Z\sim\mathbb{Q}}\left[\liminf_{\theta_{n}\to\theta} \ell(\theta_{n},Z)\right]\geq\mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(\theta,Z) \right],\]
where the two inequalities follow from Fatou's lemma for random variables with uniformly integrable negative parts, which applies thanks to Assumption 2.4 (ii), and from the lower semi-continuity of the loss function \(\ell(\theta,z)\) in \(\theta\) stipulated in Assumption 2.4 (i), respectively. Finally, one readily verifies that the objective function \(\mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(\theta,Z)\right]\) is concave (in fact, linear) and weakly upper semi-continuous in \(\mathbb{Q}\). This follows directly from the proof of Proposition 2.5. In summary, we have shown that all assumptions of Sion's minimax theorem are satisfied, which implies that the infimum and the maximum in (6) may indeed be interchanged. It remains to be shown that both maxima in (6) are attained. However, this follows immediately from the weak compactness of \(\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\) and the weak upper semi-continuity in \(\mathbb{Q}\) of both the expected loss \(\mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(\theta,Z)\right]\) and the optimal expected loss \(\inf_{\theta\in\Theta}\mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(\theta,Z)\right]\).
Theorem 2.6 generalizes [15, Theorem 2], which holds only for quadratic transportation cost functions, non-negative loss functions and discrete (empirical) reference distributions. Even though we use similar proof techniques, we significantly relax the conditions of [15, Theorem 2] in that we allow \(c\) to be nonconvex, \(\ell\) to adopt both positive and negative values and \(\widehat{\mathbb{P}}\) to be non-discrete. Theorem 2.6 implies that the infimum of the primal DRO problem matches the maximum of the dual DRO problem and that the maximum of the dual DRO problem is attained by some distribution \(\mathbb{Q}^{\star}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\). Without imposing further conditions on \(\Theta\), however, the infimum of the primal DRO problem might not be attained. As \(\mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(\theta,Z)\right]\) is lower semi-continuous in \(\theta\), a sufficient condition for the primal DRO problem to be solved by some \(\theta^{\star}\in\Theta\) is that the feasible set \(\Theta\) is compact. In this case \(\theta^{\star}\) and \(\mathbb{Q}^{\star}\) form a Nash equilibrium. These insights culminate in the following corollary.
**Corollary 2.7** (Existence of Nash equilibria).: If Assumptions 1.1, 2.1 and 2.4 hold and if \(\Theta\) is both convex and compact, then we have
\[\min_{\theta\in\Theta}\max_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{ \mathbb{P}})}\mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(\theta,Z)\right]=\max_{ \mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})}\min_{\theta\in \Theta}\mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(\theta,Z)\right].\]
### Computation of Nash Equilibria
At first sight, the dual DRO problem (3) appears intractable as it constitutes a challenging maximin problem that maximizes the optimal value of a parametric minimization problem over an infinite-dimensional space of probability distributions. It is well known that the primal DRO problem (1) can be reformulated as a finite convex program if the loss function \(\ell(\theta,z)\), the support set \(\mathcal{Z}\), the feasible set \(\Theta\) and the transportation cost \(c(z,\widehat{z})\) display certain convexity properties and if the reference distribution \(\widehat{\mathbb{P}}\) is discrete; see, _e.g._, [60, SS 4.1] or [93, SS 6]. In the remainder of this section we will demonstrate that essentially the same regularity conditions also enable us to reformulate the dual DRO
problem (3) as a finite convex program. The solutions of the emerging convex programs can be used to construct a robust decision \(\theta^{\star}\) and a least favorable distribution \(\mathbb{Q}^{\star}\) that form a Nash equilibrium.
As we will explain below, knowledge of both \(\theta^{\star}\) and \(\mathbb{Q}^{\star}\) offers valuable insights into the decision problem at hand. To date, however, dual DRO problems have only been investigated in specific applications. For example, it is known that the least favorable distribution in distributionally robust minimum mean square error estimation and Kalman filtering problems with a type-2 Wasserstein ambiguity set centered at a Gaussian reference distribution is Gaussian and can be computed efficiently by solving a semidefinite program [65, 76]. When the ambiguity set is defined in terms of the Kullback-Leibler divergence instead of the Wasserstein distance, the least favorable distribution remains Gaussian and can be found in quasi-closed form [55, 56]. Similar results are available for generalized \(\tau\)-divergence ambiguity sets [94, 95]. In addition, Nash equilibria for distributionally robust pricing and auction design problems with rectangular ambiguity sets can sometimes also be derived in closed form [43, 44].
We will now show that general dual DRO problems of the form (3) can often be addressed with methods from convex optimization. To this end, we restrict attention to _discrete_ reference distributions.
**Assumption 2.8** (Reference distribution).: We have \(\widehat{\mathbb{P}}=\sum_{j\in[J]}p_{j}\delta_{\widehat{z}_{j}}\) for some \(J\in\mathbb{N}\), where each probability \(p_{j}\) is strictly positive and \(\delta_{\widehat{z}_{j}}\) denotes the Dirac measure at the atom \(\widehat{z}_{j}\in\mathcal{Z}\).
We also need the following assumption, which constrains the shape of the transportation cost function, the loss function, the support set and the feasible set. Thus, it limits modeling flexibility.
**Assumption 2.9** (Convexity conditions).:
1. The transportation cost function \(c(z,\widehat{z})\) is lower semi-continuous in \((z,\widehat{z})\) and convex in \(z\).
2. The loss function is representable as a pointwise maximum of finitely many saddle functions, that is, we have \(\ell(\theta,z)=\max_{i\in[I]}\ell_{i}(\theta,z)\) for some \(I\in\mathbb{N}\), where \(\ell_{i}(\theta,z)\) is proper, convex and lower semi-continuous in \(\theta\), while \(-\ell_{i}(\theta,z)\) is proper, convex and lower semi-continuous in \(z\).
3. The support set is representable as \(\mathcal{Z}=\{z\in\mathbb{R}^{d}:f_{k}(z)\leq 0\ \forall k\in[K]\}\) for some \(K\in\mathbb{N}\), where each function \(f_{k}(z)\) is proper, convex and lower semi-continuous.
4. The feasible set is representable as \(\Theta=\{\theta\in\mathbb{R}^{m}:g_{l}(\theta)\leq 0\ \forall l\in[L]\}\) for some \(L\in\mathbb{N}\), where each function \(g_{l}(\theta)\) is proper, convex and lower semi-continuous.
Recall that the transportation cost function \(c(z,\widehat{z})\) is non-negative and satisfies the identity of indiscernibles. Together with Assumption 2.9 (i), this immediately implies that \(c(z,\widehat{z})\) is proper, convex and lower semi-continuous in \(z\) for every fixed \(\widehat{z}\) and that it is proper and lower semi-continuous in \(\widehat{z}\) for every fixed \(z\). Assumption 2.9 (ii) requires both \(\ell_{i}(\theta,z)\) and \(-\ell_{i}(\theta,z)\) to be proper, which implies that the saddle function \(\ell_{i}(\theta,z)\) can adopt only finite values. However, the convex functions \(f_{k}(z)\) and \(g_{l}(\theta)\) introduced in Assumptions 2.9 (iii) and 2.9 (iv), respectively, may adopt the value \(\infty\).
In the remainder, we adopt the following definition of a Slater point for a minimization problem.
**Definition 2.10** (Slater point).: A Slater point of the set \(\mathcal{X}=\{x\in\mathbb{R}^{d}:h_{m}(x)\leq 0\ \forall m\in[M]\}\) represented via \(M\) generic inequality constraints is any vector \(x^{s}\in\mathcal{X}\) with \(x^{s}\in\operatorname{ri}(\operatorname{dom}(h_{m}))\) for all \(m\in[M]\) and \(h_{m}(x^{s})<0\) for all \(m\in[M]\) such that \(h_{m}\) is nonlinear. In addition, \(x^{s}\) is a Slater point of the minimization problem \(\inf\{h_{0}(x):x\in\mathcal{X}\}\) if it is a Slater point of \(\mathcal{X}\) and if \(x^{s}\in\operatorname{ri}(\operatorname{dom}(h_{0}))\).
Our results also rely on the following technical Slater conditions. Even though they are indispensable for the proofs of our convex reformulation results below, they do not critically limit modeling flexibility.
**Assumption 2.11** (Slater conditions).:
1. For every \(j\in[J]\), \(\widehat{z}_{j}\in\operatorname{ri}(\operatorname{dom}(c(\cdot,\widehat{z}_{j})))\) is a Slater point for the support set \(\mathcal{Z}\).
2. The feasible set \(\Theta\) admits a Slater point.
Before addressing the dual DRO problem (3), we show that the primal DRO problem (1) admits a finite convex reformulation when the above regularity conditions are satisfied. This reformulation was first derived under the simplifying assumption that \(c(z,\widehat{z})=\|z-\widehat{z}\|\) is defined in terms of a norm on \(\mathcal{Z}\)[60, Theorem 4.2] and later generalized to arbitrary convex transportation cost functions [93, SS 6]. Proposition 2.12 below re-derives this reformulation under the Assumptions 2.8, 2.9 and 2.11 (i), which can be checked _ex ante_. In contrast, the regularity conditions used in [93] can only be checked _ex post_ by solving a convex program. To keep this paper self-contained, we provide a simple proof of Proposition 2.12, which will also allow us to streamline the proof of Proposition 2.14 below.
**Proposition 2.12** (Primal reformulation).: If Assumptions 2.8, 2.9 and 2.11 (i) hold and if \(\varepsilon>0\), then the primal DRO problem (1) has the same infimum as the finite convex program
\[\begin{array}{ll}\inf&\lambda\varepsilon+\sum_{j\in[J]}p_{j}s_{j}\\ \operatorname{s.t.}&\theta\in\Theta,\,\lambda,\tau_{ijk}\in\mathbb{R}_{+},\,s _{j}\in\mathbb{R},\,\zeta^{\ell}_{ij},\zeta^{c}_{ij},\zeta^{f}_{ijk}\in \mathbb{R}^{d}&\forall i\in[I],\,j\in[J],\,k\in[K]\\ &(-\ell_{i})^{*2}(\theta,\zeta^{\ell}_{ij})+\lambda c^{*1}(\zeta^{c}_{ij}/ \lambda,\widehat{z}_{j})+\sum_{k\in[K]}\tau_{ijk}f^{*}_{k}(\zeta^{f}_{ijk}/ \tau_{ijk})\leq s_{j}&\forall i\in[I],\,j\in[J]\\ &\zeta^{\ell}_{ij}+\zeta^{c}_{ij}+\sum_{k\in[K]}\zeta^{f}_{ijk}=0&\forall i \in[I],\,j\in[J],\end{array} \tag{7}\]
where \((-\ell_{i})^{*2}(\theta,\zeta)\) denotes the conjugate of \(-\ell_{i}(\theta,z)\) with respect to its second argument \(z\) for fixed \(\theta\), and where \(c^{*1}(\zeta,\widehat{z})\) denotes the conjugate of \(c(z,\widehat{z})\) with respect to its first argument \(z\) for fixed \(\widehat{z}\).
Proof.: Note first that Assumptions 2.8 and 2.9 (i)-(ii) strengthen Assumption 1.1. In particular, Assumption 2.9 (i) trivially implies Assumption 1.1 (i). In addition, Assumption 2.8 requires \(\widehat{\mathbb{P}}\) to be discrete, and Assumption 2.9 (ii) implies that \(\ell(\theta,z)\) is finite-valued. As any finite-valued function is integrable with respect to any discrete distribution, Assumption 1.1 (ii) trivially holds. Thus, Proposition 1.2 applies and allows us to reformulate the inner supremum in (1) as
\[\sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})}\ \mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(\theta,Z)\right]=\left\{\begin{array}{lll}\inf&\lambda\varepsilon+\sum_{j\in[J]}p_{j}s_{j}\\ \operatorname{s.t.}&\lambda\in\mathbb{R}_{+},\ s_{j}\in\mathbb{R}&\forall j\in[J]\\ &\sup_{z\in\mathcal{Z}}\ \ell(\theta,z)-\lambda c(z,\widehat{z}_{j})\leq s_{j}&\forall j\in[J]\end{array}\right.\]
where the epigraphical decision variable \(s_{j}\) coincides with \(\ell_{c}(\theta,\lambda,\widehat{z}_{j})=\sup_{z\in\mathcal{Z}}\ \ell(\theta,z)-\lambda c(z,\widehat{z}_{j})\) at optimality. Note that since \(\ell(\theta,z)\) is finite-valued, there is no need to make the constraint \(c(z,\widehat{z})<\infty\) explicit. By Assumption 2.9 (ii), the loss function \(\ell\) can be expressed in terms of the saddle functions
\(\ell_{i}\), \(i\in[I]\), and thus the resulting dual problem is equivalent to the following robust convex program.
\[\begin{array}{llll}\inf&\lambda\varepsilon+\sum_{j\in[J]}p_{j}s_{j}\\ \mathrm{s.\,t.}&\lambda\in\mathbb{R}_{+},\ s_{j}\in\mathbb{R}&\forall j\in[J]\\ &\sup_{z\in\mathcal{Z}}\ \ell_{i}(\theta,z)-\lambda c(z,\widehat{z}_{j}) \leq s_{j}&\forall i\in[I],\,j\in[J]\end{array} \tag{8}\]
By the convexity of the support set \(\mathcal{Z}\) imposed by Assumption 2.9 (iii) and thanks to an explicit convex duality result [93, Theorem 2], the embedded maximization problems in (8) can be recast as
\[\sup_{z\in\mathcal{Z}}\ \ell_{i}(\theta,z)-\lambda c(z,\widehat{z}_{j})= \left\{\begin{array}{ll}\min&(-\ell_{i})^{*2}(\theta,\zeta_{ij}^{\ell})+ \lambda c^{*1}(\zeta_{ij}^{c}/\lambda,\widehat{z}_{j})+\sum_{k\in[K]}\tau_{ ijk}f_{k}^{*}(\zeta_{ijk}^{f}/\tau_{ijk})\\ \mathrm{s.\,t.}&\tau_{ijk}\in\mathbb{R}_{+},\ \zeta_{ij}^{\ell},\zeta_{ij}^{c}, \zeta_{ijk}^{f}\in\mathbb{R}^{d}\quad\forall k\in[K]\\ &\zeta_{ij}^{\ell}+\zeta_{ij}^{c}+\sum_{k\in[K]}\zeta_{ijk}^{f}=0\end{array}\right.\]
for all \(i\in[I]\) and \(j\in[J]\). Indeed, Assumption 2.11 (i) and the finiteness of \(\ell_{i}\) implied by Assumption 2.9 (ii) ensure that the primal maximization problem admits a Slater point. This implies both strong duality as well as solvability of the dual minimization problem. Substituting all resulting dual minimization problems into (8) and eliminating the corresponding minimization operators yields (7).
Note that the conjugate \((-\ell_{i})^{*2}\) of the negative saddle function \(-\ell_{i}\) with respect to its second argument is convex by construction. Similarly, when \(\widehat{z}_{j}\) is kept fixed, the perspectives of the conjugates \(c^{*1}(\cdot,\widehat{z}_{j})\) and \(f_{k}^{*}\) are readily seen to be convex. Thus, problem (7) is indeed a convex program.
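To convey the flavor of such reformulations outside the formal statement, the following sketch solves a toy newsvendor instance in which the support set is a finite grid, so that the inner supremum in (2) reduces to finitely many linear constraints and the whole problem collapses to a single linear program. The loss \(\max\{a(\theta-z),b(z-\theta)\}\) is a pointwise maximum of affine saddle functions as in Assumption 2.9 (ii), the cost \(c(z,\widehat{z})=|z-\widehat{z}|\) is convex, and the reference distribution is discrete as in Assumption 2.8. All numerical data are invented, and the finite-grid simplification is ours rather than part of Proposition 2.12.

```python
# A minimal finite-support sketch (not from the paper) of the primal reformulation
# for a toy newsvendor problem. All numerical data are illustrative choices.
import numpy as np
from scipy.optimize import linprog

a, b = 1.0, 3.0                                    # overage / underage cost rates
eps = 0.4                                          # ambiguity radius
zgrid = np.linspace(0.0, 10.0, 201)                # finite support set Z
zhat = np.array([2.0, 5.0, 7.5])                   # atoms of the reference distribution
p = np.array([0.5, 0.3, 0.2])                      # their probabilities
C = np.abs(zgrid[None, :] - zhat[:, None])         # C[j, k] = c(z_k, zhat_j)
J, K = C.shape

# Variables x = (theta, lambda, s_1, ..., s_J); minimize eps*lambda + sum_j p_j * s_j.
cost = np.concatenate(([0.0, eps], p))
rows, rhs = [], []
for j in range(J):
    for k in range(K):
        sj = np.zeros(J); sj[j] = -1.0
        # a*(theta - z_k) - lambda*C[j, k] <= s_j
        rows.append(np.concatenate(([a, -C[j, k]], sj))); rhs.append(a * zgrid[k])
        # b*(z_k - theta) - lambda*C[j, k] <= s_j
        rows.append(np.concatenate(([-b, -C[j, k]], sj))); rhs.append(-b * zgrid[k])
bounds = [(0.0, 10.0), (0.0, None)] + [(None, None)] * J   # theta in Theta = [0, 10], lambda >= 0
res = linprog(cost, A_ub=np.array(rows), b_ub=np.array(rhs), bounds=bounds, method="highs")
print("robust decision theta* =", res.x[0])
print("optimal dual multiplier lambda* =", res.x[1])
print("worst-case expected loss =", res.fun)
```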
Next, we prove that the dual DRO problem (3) also admits a finite convex reformulation if all assumptions of Proposition 2.12 are satisfied and if at least one out of three regularity conditions holds.
**Assumption 2.13** (Self service condition).: At least one of the following three conditions is satisfied: (i) \(\Theta\) is compact, (ii) \(\mathcal{Z}\) is compact or (iii) \(c(z,\widehat{z})\) grows superlinearly with \(z\).
Assumption 2.13 is unrestrictive in practice, and we will later see that it can be further relaxed.
**Proposition 2.14** (Dual reformulation).: If Assumptions 2.8, 2.9, 2.11 and 2.13 hold and if \(\varepsilon>0\), then the dual DRO problem (3) has the same supremum as the finite convex program
\[\begin{array}{llll}\max&-\sum_{i\in[I]}\sum_{j\in[J]}q_{ij}\ell_{i}^{*1}( \alpha_{ij}/q_{ij},\widehat{z}_{j}+\xi_{ij}/q_{ij})-\sum_{l\in[L]}\nu_{l}\,g_{ l}^{*}(\beta_{l}/\nu_{l})\\ \mathrm{s.\,t.}&q_{ij},\nu_{l}\in\mathbb{R}_{+},\ \xi_{ij}\in\mathbb{R}^{d},\ \alpha_{ij},\beta_{l}\in\mathbb{R}^{m}&\forall i\in[I],\,j\in[J],\,l\in[L]\\ &q_{ij}f_{k}(\widehat{z}_{j}+\xi_{ij}/q_{ij})\leq 0&\forall i\in[I],\,j\in[J], \,k\in[K]\\ &\sum_{i\in[I]}q_{ij}=p_{j}&\forall j\in[J]\\ &\sum_{i\in[I]}\sum_{j\in[J]}\alpha_{ij}+\sum_{l\in[L]}\beta_{l}=0\\ &\sum_{i\in[I]}\sum_{j\in[J]}q_{ij}\,c(\widehat{z}_{j}+\xi_{ij}/q_{ij}, \widehat{z}_{j})\leq\varepsilon,\end{array} \tag{9}\]
where \(\ell_{i}^{*1}(\alpha,z)\) denotes the conjugate of \(\ell_{i}(\theta,z)\) with respect to its first argument \(\theta\) for fixed \(z\).
Proof.: We first establish that the infimum of the primal DRO problem (1) equals the maximum of the finite convex program (9) and that (9) is indeed solvable (Step 1). Using the insights from Step 1, we
then show that any optimal solution of the convex program (9) can be used to construct a near-optimal solution of the dual DRO problem (3) and that the two problems have the same supremum (Step 2).
**Step 1.** By Proposition 2.12 for \(\Theta=\{\theta\}\), which applies thanks to Assumptions 2.8, 2.9 and 2.11 (i) and because \(\varepsilon>0\), the supremum of the inner maximization problem in (1) equals
\[\begin{array}{lll}\inf&\lambda\varepsilon+\sum_{j\in[J]}p_{j}s_{j}\\ \mathrm{s.\,t.}&\lambda,\tau_{ijk}\in\mathbb{R}_{+},\ s_{j}\in\mathbb{R},\ \zeta_{ij}^{\ell},\zeta_{ij}^{c},\zeta_{ijk}^{f}\in\mathbb{R}^{d}&\forall i\in[I],\,j\in[J],\,k\in[K]\\ &(-\ell_{i})^{*2}(\theta,\zeta_{ij}^{\ell})+\lambda c^{*1}(\zeta_{ij}^{c}/\lambda,\widehat{z}_{j})+\sum_{k\in[K]}\tau_{ijk}f_{k}^{*}(\zeta_{ijk}^{f}/\tau_{ijk})\leq s_{j}&\forall i\in[I],\,j\in[J]\\ &\zeta_{ij}^{\ell}+\zeta_{ij}^{c}+\sum_{k\in[K]}\zeta_{ijk}^{f}=0&\forall i\in[I],\,j\in[J]\end{array} \tag{10}\]
In [93, SS 6] it is shown that the maximization problem dual to the above minimization problem can be simplified by eliminating auxiliary epigraphical decision variables and applying a linear variable substitution; see problem (19) in [93]. In our notation, this simplified dual convex program is given by
\[\begin{array}{lll}\max&\sum_{i\in[I]}\sum_{j\in[J]}q_{ij}\,\ell_{i}(\theta,\widehat{z}_{j}+\xi_{ij}/q_{ij})\\ \mathrm{s.\,t.}&q_{ij}\in\mathbb{R}_{+},\ \xi_{ij}\in\mathbb{R}^{d}&\forall i\in[I],\,j\in[J]\\ &q_{ij}f_{k}(\widehat{z}_{j}+\xi_{ij}/q_{ij})\leq 0&\forall i\in[I],\,j\in[J],\,k\in[K]\\ &\sum_{i\in[I]}q_{ij}=p_{j}&\forall j\in[J]\\ &\sum_{i\in[I]}\sum_{j\in[J]}q_{ij}\,c(\widehat{z}_{j}+\xi_{ij}/q_{ij},\widehat{z}_{j})\leq\varepsilon\end{array} \tag{11}\]
Strong duality holds thanks to Assumption 2.11 (i), which implies that the dual maximization problem admits a Slater point \((\{q_{ij}^{s},\xi_{ij}^{s}\}_{i,j})\) with \(q_{ij}^{s}=p_{j}/I\) and \(\xi_{ij}^{s}=0\) for all \(i\in[I]\) and \(j\in[J]\). In addition, [93, Proposition 20] implies that the dual problem has a compact feasible set and is thus solvable. Hence, its maximum is indeed attained. Substituting this dual problem back into (1) and interchanging the infimum and the maximum, which is allowed by Sion's minimax theorem [80], we then obtain
\[\max \inf_{\theta\in\Theta}\ \sum_{i\in[I]}\sum_{j\in[J]}q_{ij}\ell_{i}( \theta,\widehat{z}_{j}+\xi_{ij}/q_{ij}) \tag{12}\] \[\mathrm{s.\,t.} q_{ij}\in\mathbb{R}_{+},\ \xi_{ij}\in\mathbb{R}^{d} \forall i\in[I],\,j\in[J]\] \[q_{ij}f_{k}(\widehat{z}_{j}+\xi_{ij}/q_{ij})\leq 0 \forall i\in[I],\,j\in[J],\,k\in[K]\] \[\sum_{i\in[I]}q_{ij}=p_{j} \forall j\in[J]\] \[\sum_{i\in[I]}\sum_{j\in[J]}q_{ij}\,c(\widehat{z}_{j}+\xi_{ij}/q _{ij},\widehat{z}_{j})\leq\varepsilon.\]
The inner minimization problem in (12) has a Slater point because of Assumptions 2.9 (ii) and 2.11 (ii). By [93, Theorem 2], this minimization problem therefore admits a strong dual of the form
\[\max -\sum_{i\in[I]}\sum_{j\in[J]}q_{ij}\ell_{i}^{*1}(\alpha_{ij}/q_{ ij},\widehat{z}_{j}+\xi_{ij}/q_{ij})-\sum_{l\in[L]}\nu_{l}g_{l}^{*}(\beta_{l}/ \nu_{l}) \tag{13}\] \[\mathrm{s.\,t.} \alpha_{ij}\in\mathbb{R}^{m},\ \beta_{l}\in\mathbb{R}^{m},\ \nu_{l}\in \mathbb{R}_{+}\quad\forall i\in[I],\,j\in[J],\,l\in[L]\] \[\sum_{i\in[I]}\sum_{j\in[J]}\alpha_{ij}+\sum_{l\in[L]}\beta_{l}=0,\]
which is solvable. Substituting this dual maximization problem back into (12) finally yields (9). Note that the maximum in (9) is indeed attained because problem (12) as well as the dual of the parametric minimization problem in its objective function are both solvable.
**Step 2.** Fix now any maximizer \((\{q^{\star}_{ij},\xi^{\star}_{ij},\alpha^{\star}_{ij}\}_{i,j},\{\nu^{\star}_{l},\beta^{\star}_{l}\}_{l})\) of problem (9), which exists thanks to the results of Step 1. In Step 2 we will show that this maximizer can be used to construct a maximizer of the dual DRO problem (3) (if (3) is solvable) or a sequence of distributions that are feasible and asymptotically optimal in (3) (if (3) is _not_ solvable). To this end, define three disjoint index sets
\[\mathcal{I}^{+}_{j}=\left\{i\in[I]:q^{\star}_{ij}>0\right\},\quad\mathcal{I}^{ 0}_{j}=\left\{i\in[I]:q^{\star}_{ij}=0,\,\xi^{\star}_{ij}=0\right\}\quad\text{ and}\quad\mathcal{I}^{\infty}_{j}=\left\{i\in[I]:q^{\star}_{ij}=0,\,\xi^{\star}_{ij} \neq 0\right\},\]
which form a partition of \([I]\) for every fixed \(j\). Assume first that \(\mathcal{I}^{\infty}_{j}=\emptyset\) for every \(j\in[J]\). In this case, we can use similar arguments as in [46, SS 2.2] and [93, SS 6] to show that the discrete distribution
\[\mathbb{Q}^{\star}=\sum_{j\in[J]}\sum_{i\in\mathcal{I}^{+}_{j}}q^{\star}_{ij} \delta_{\widehat{z}_{j}+\xi^{\star}_{ij}/q^{\star}_{ij}}\]
solves the dual DRO problem (3). Note first that \(\mathbb{Q}^{\star}\) is indeed a probability distribution because
\[\mathbb{Q}^{\star}(z\in\mathbb{R}^{d})=\sum_{j\in[J]}\sum_{i\in\mathcal{I}^{+} _{j}}q^{\star}_{ij}=\sum_{j\in[J]}\sum_{i\in[I]}q^{\star}_{ij}=\sum_{j\in[J]}p _{j}=1,\]
where the second equality holds because \(q^{\star}_{ij}=0\) for all \(i\in\mathcal{I}^{0}_{j}\cup\mathcal{I}^{\infty}_{j}\), while the third equality follows from the constraints of problem (9). More precisely, we have \(\mathbb{Q}^{\star}(z\in\mathcal{Z})=1\) because the constraints of (9) imply that \(f_{k}(\widehat{z}_{j}+\xi^{\star}_{ij}/q^{\star}_{ij})\leq 0\) whenever \(i\in\mathcal{I}^{+}_{j}\). In addition, one readily verifies that
\[d_{c}(\mathbb{Q}^{\star},\widehat{\mathbb{P}})\leq\sum_{j\in[J]}\sum_{i\in \mathcal{I}^{+}_{j}}q^{\star}_{ij}c(\widehat{z}_{j}+\xi^{\star}_{ij}/q^{\star }_{ij},\widehat{z}_{j})=\sum_{j\in[J]}\sum_{i\in[I]}q^{\star}_{ij}c(\widehat{ z}_{j}+\xi^{\star}_{ij}/q^{\star}_{ij},\widehat{z}_{j})\leq\varepsilon,\]
where the first inequality holds because the transportation plan that moves mass \(q^{\star}_{ij}\) from \(\widehat{z}_{j}\) to \(\widehat{z}_{j}+\xi^{\star}_{ij}/q^{\star}_{ij}\) for all \(j\in[J]\) and \(i\in\mathcal{I}^{+}_{j}\) is an admissible coupling of \(\widehat{\mathbb{P}}\) and \(\mathbb{Q}^{\star}\), the equality holds because \(\mathcal{I}^{\infty}_{j}=\emptyset\) and because \(q^{\star}_{ij}c(\widehat{z}_{j}+\xi^{\star}_{ij}/q^{\star}_{ij},\widehat{z}_ {j})=0c(\widehat{z}_{j}+0/0,\widehat{z}_{j})=0\) for all \(i\in\mathcal{I}^{0}_{j}\) thanks to our conventions for perspective functions, and the second inequality follows from the constraints of problem (9). In summary, we have thus shown that \(\mathbb{Q}^{\star}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\). It remains to be shown that \(\mathbb{Q}^{\star}\) is optimal in (3). To this end, note that
\[\begin{aligned}\sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})}\inf_{\theta\in\Theta}\mathbb{E}_{Z\sim\mathbb{Q}}[\ell(\theta,Z)]&\geq\inf_{\theta\in\Theta}\ \mathbb{E}_{Z\sim\mathbb{Q}^{\star}}[\ell(\theta,Z)]\\ &\geq\inf_{\theta\in\Theta}\ \sum_{j\in[J]}\sum_{i\in\mathcal{I}^{+}_{j}}q^{\star}_{ij}\ell_{i}(\theta,\widehat{z}_{j}+\xi^{\star}_{ij}/q^{\star}_{ij})\\ &=\inf_{\theta\in\Theta}\ \sum_{i\in[I]}\sum_{j\in[J]}q^{\star}_{ij}\ell_{i}(\theta,\widehat{z}_{j}+\xi^{\star}_{ij}/q^{\star}_{ij})\\ &\geq-\sum_{i\in[I]}\sum_{j\in[J]}q^{\star}_{ij}\ell^{*1}_{i}(\alpha^{\star}_{ij}/q^{\star}_{ij},\widehat{z}_{j}+\xi^{\star}_{ij}/q^{\star}_{ij})-\sum_{l\in[L]}\nu^{\star}_{l}g^{*}_{l}(\beta^{\star}_{l}/\nu^{\star}_{l})\\ &=\inf_{\theta\in\Theta}\sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})}\mathbb{E}_{Z\sim\mathbb{Q}}[\ell(\theta,Z)]\geq\sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})}\inf_{\theta\in\Theta}\mathbb{E}_{Z\sim\mathbb{Q}}[\ell(\theta,Z)],\end{aligned} \tag{14}\]
where the first inequality holds because \(\mathbb{Q}^{\star}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\), the second inequality exploits Assumption 2.9 (ii), which implies that \(\ell\) majorizes \(\ell_{i}\) for every \(i\in[I]\), and the first equality follows from the properties of perspective functions and from our assumption that \(\mathcal{I}^{\infty}_{j}\) is empty. The third inequality then holds because \((\{\alpha^{\star}_{ij}\}_{i,j},\{\nu^{\star}_{l},\beta^{\star}_{l}\}_{l})\) is feasible in (13), which is dual to the resulting minimization problem over \(\theta\)
and because any dual feasible solution provides a lower bound on the infimum of the primal problem. The second equality holds because the primal DRO problem (1) and the finite convex program (9) share the same optimal value, which was established in Step 1, while the fourth and last inequality follows from weak duality. Hence, all inequalities in (14) are in fact equalities, which proves that the optimal values of the primal and dual DRO problems (1) and (3) coincide and that \(\mathbb{Q}^{\star}\) solves (3).
Assume now that \(\mathcal{I}_{j}^{\infty}\neq\emptyset\) for some \(j\in[J]\), and select any \(i\in\mathcal{I}_{j}^{\infty}\). Since \(q_{ij}^{\star}=0\), [70, Theorem 8.6] implies that \(\xi_{ij}^{\star}\) is a recession direction of \(\mathcal{Z}\) (see also [93, Lemma B.7 (ii)]), and since \(\xi_{ij}^{\star}\neq 0\), the support set \(\mathcal{Z}\) cannot be compact. Similarly, one readily verifies that \(c(z,\widehat{z})\) cannot grow superlinearly with \(z\) for otherwise the left hand side of the last constraint in (9) would evaluate to infinity [93, Remark 6], which contradicts the feasibility of \((\{q_{ij}^{\star},\xi_{ij}^{\star},\alpha_{ij}^{\star}\}_{i,j},\{\nu_{l}^{ \star},\beta_{l}^{\star}\}_{l})\). By Assumption 2.13, we may thus conclude that the feasible set \(\Theta\) is compact. Next, consider the discrete distributions
\[\mathbb{Q}(n)=\sum_{j\in[J]}\sum_{i\in\mathcal{I}_{j}^{+}\cup\mathcal{I}_{j}^ {\infty}}q_{ij}(n)\,\delta_{z_{ij}(n)}\]
indexed by \(n\in\mathbb{N}\), where
\[q_{ij}(n)=\begin{cases}q_{ij}^{\star}(1-|\mathcal{I}_{j}^{\infty}|/n)&\text{ if }i\in\mathcal{I}_{j}^{+}\\ p_{j}/n&\text{if }i\in\mathcal{I}_{j}^{\infty}\end{cases}\quad\text{ and }\quad z_{ij}(n)=\begin{cases}\widehat{z}_{j}+\xi_{ij}^{\star}/q_{ij}^{ \star}&\text{ if }i\in\mathcal{I}_{j}^{+}\\ \widehat{z}_{j}+n\xi_{ij}^{\star}/p_{j}&\text{if }i\in\mathcal{I}_{j}^{ \infty}.\end{cases}\]
Following [93, SS 5], we now prove that \(\mathbb{Q}(n)\) is feasible and asymptotically optimal in (3) as \(n\) increases. To this end, we first show that \(\mathbb{Q}(n)\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\) for all \(n>\underline{n}=\max_{j\in[J]}|\mathcal{I}_{j}^{\infty}|\). It is easy to verify that \(\mathbb{Q}(n)\) is indeed a probability distribution for every \(n>\underline{n}\) because \(\sum_{j\in[J]}\sum_{i\in\mathcal{I}_{j}^{+}\cup\mathcal{I}_{j}^{\infty}}q_{ij}(n)=1\) and \(q_{ij}(n)\geq 0\) for all \(i\in\mathcal{I}_{j}^{+}\cup\mathcal{I}_{j}^{\infty}\) and \(j\in[J]\) thanks to the constraints of problem (9). In addition, \(\mathbb{Q}(n)\) is supported on \(\mathcal{Z}\), that is, \(z_{ij}(n)\in\mathcal{Z}\) for each \(j\in[J]\) and \(i\in\mathcal{I}_{j}^{+}\cup\mathcal{I}_{j}^{\infty}\). In particular, the constraints of problem (9) imply that \(\xi_{ij}^{\star}\) is a recession direction of \(\mathcal{Z}\) for every \(i\in\mathcal{I}_{j}^{\infty}\) (see, _e.g._, [93, Lemma B.7]). Hence, for any fixed \(j\in[J]\) and \(i\in\mathcal{I}_{j}^{\infty}\), the distribution \(\mathbb{Q}(n)\) sends a decreasing probability mass \(q_{ij}(n)=p_{j}/n\) along the ray emanating from \(\widehat{z}_{j}\) in the direction of \(\xi_{ij}^{\star}\) without ever leaving the support set \(\mathcal{Z}\) as \(n\) grows. Finally, for all \(n>\underline{n}\), one readily verifies that
\[\begin{aligned}d_{c}(\mathbb{Q}(n),\widehat{\mathbb{P}})&\leq\sum_{j\in[J]}\sum_{i\in\mathcal{I}_{j}^{+}\cup\mathcal{I}_{j}^{\infty}}q_{ij}(n)\,c(z_{ij}(n),\widehat{z}_{j})\\ &=\sum_{j\in[J]}\sum_{i\in\mathcal{I}_{j}^{+}}q_{ij}^{\star}\left(1-\frac{|\mathcal{I}_{j}^{\infty}|}{n}\right)c\left(\widehat{z}_{j}+\frac{\xi_{ij}^{\star}}{q_{ij}^{\star}},\widehat{z}_{j}\right)+\sum_{j\in[J]}\sum_{i\in\mathcal{I}_{j}^{\infty}}\frac{p_{j}}{n}\,c\left(\widehat{z}_{j}+n\frac{\xi_{ij}^{\star}}{p_{j}},\widehat{z}_{j}\right)\\ &\leq\sum_{j\in[J]}\sum_{i\in\mathcal{I}_{j}^{+}}q_{ij}^{\star}\,c\left(\widehat{z}_{j}+\frac{\xi_{ij}^{\star}}{q_{ij}^{\star}},\widehat{z}_{j}\right)+\sum_{j\in[J]}\sum_{i\in\mathcal{I}_{j}^{\infty}}\lim_{n\to\infty}\frac{p_{j}}{n}\,c\left(\widehat{z}_{j}+n\frac{\xi_{ij}^{\star}}{p_{j}},\widehat{z}_{j}\right)\\ &=\sum_{j\in[J]}\sum_{i\in[I]}q_{ij}^{\star}\,c\left(\widehat{z}_{j}+\frac{\xi_{ij}^{\star}}{q_{ij}^{\star}},\widehat{z}_{j}\right)\leq\varepsilon,\end{aligned}\]
where the first inequality holds because the transportation plan that moves mass \(q_{ij}(n)\) from \(\widehat{z}_{j}\) to \(z_{ij}(n)\) for all \(j\in[J]\) and \(i\in\mathcal{I}_{j}^{+}\cup\mathcal{I}_{j}^{\infty}\) is an admissible coupling of \(\widehat{\mathbb{P}}\) and \(\mathbb{Q}(n)\), and the second inequality holds because the transportation cost function is non-negative and convex (see Assumption 2.9 (i)), which implies that both terms in the second line of the above expression are non-decreasing in \(n\). The last equality follows from the definition of the perspective function \(q_{ij}^{\star}c(\widehat{z}_{j}+\xi_{ij}^{\star}/q_{ij}^{\star},\widehat{z}_{j})\) for \(q_{ij}^{\star}=0\), and the last inequality uses the constraints of (9). Hence, \(\mathbb{Q}(n)\in\mathbb{B}_{c}(\widehat{\mathbb{P}})\) for all \(n>\underline{n}\). It remains to be shown
that \(\mathbb{Q}(n)\) is asymptotically optimal in (3). Using a similar reasoning as in (14), we obtain
\[\begin{aligned}\sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})}\inf_{\theta\in\Theta}\ \mathbb{E}_{Z\sim\mathbb{Q}}[\ell(\theta,Z)]&\geq\lim_{n\to\infty}\inf_{\theta\in\Theta}\ \mathbb{E}_{Z\sim\mathbb{Q}(n)}[\ell(\theta,Z)]\\ &\geq\lim_{n\to\infty}\inf_{\theta\in\Theta}\ \sum_{j\in[J]}\sum_{i\in\mathcal{I}_{j}^{+}\cup\mathcal{I}_{j}^{\infty}}q_{ij}(n)\,\ell_{i}\left(\theta,z_{ij}(n)\right)\\ &=\inf_{\theta\in\Theta}\ \sum_{j\in[J]}\sum_{i\in\mathcal{I}_{j}^{+}\cup\mathcal{I}_{j}^{\infty}}q_{ij}^{\star}\,\ell_{i}(\theta,\widehat{z}_{j}+\xi_{ij}^{\star}/q_{ij}^{\star})\\ &=\inf_{\theta\in\Theta}\ \sum_{j\in[J]}\sum_{i\in[I]}q_{ij}^{\star}\,\ell_{i}(\theta,\widehat{z}_{j}+\xi_{ij}^{\star}/q_{ij}^{\star})\\ &\geq\left\{\begin{array}{lll}\max&-\sum_{j\in[J]}\sum_{i\in\mathcal{I}_{j}^{+}\cup\mathcal{I}_{j}^{\infty}}q_{ij}^{\star}\ell_{i}^{*1}(\alpha_{ij}/q_{ij}^{\star},\widehat{z}_{j}+\xi_{ij}^{\star}/q_{ij}^{\star})-\sum_{l\in[L]}\nu_{l}\,g_{l}^{*}(\beta_{l}/\nu_{l})\\ \mathrm{s.\,t.}&\alpha_{ij}\in\mathbb{R}^{m},\ \beta_{l}\in\mathbb{R}^{m},\ \nu_{l}\in\mathbb{R}_{+}\quad\forall i\in\mathcal{I}_{j}^{+}\cup\mathcal{I}_{j}^{\infty},\,j\in[J],\,l\in[L]\\ &\sum_{j\in[J]}\sum_{i\in\mathcal{I}_{j}^{+}\cup\mathcal{I}_{j}^{\infty}}\alpha_{ij}+\sum_{l\in[L]}\beta_{l}=0\end{array}\right.\\ &\geq-\sum_{i\in[I]}\sum_{j\in[J]}q_{ij}^{\star}\ell_{i}^{*1}(\alpha_{ij}^{\star}/q_{ij}^{\star},\widehat{z}_{j}+\xi_{ij}^{\star}/q_{ij}^{\star})-\sum_{l\in[L]}\nu_{l}^{\star}g_{l}^{*}(\beta_{l}^{\star}/\nu_{l}^{\star})\\ &=\inf_{\theta\in\Theta}\sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})}\mathbb{E}_{Z\sim\mathbb{Q}}[\ell(\theta,Z)]\geq\sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})}\inf_{\theta\in\Theta}\ \mathbb{E}_{Z\sim\mathbb{Q}}[\ell(\theta,Z)].\end{aligned}\]
Here, the first two inequalities hold because \(\mathbb{Q}(n)\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\) for all \(n>\underline{n}\) and because \(\ell\) majorizes \(\ell_{i}\) for all \(i\in[I]\). The first equality follows from [71, Corollary 7.18], which applies because \(\Theta\) is compact, and from the observation that the uniform convergence of a sequence of functions implies the convergence of their infima. The second equality follows from the properties of perspective functions. The third inequality follows from weak duality; see also problem (13). The fourth inequality holds because \((\{\alpha_{ij}^{*}\}_{i,j},\{\nu_{l}^{*},\beta_{l}^{*}\}_{l})\) is feasible in the emerging dual maximization problem. The third equality exploits our insight from Step 1 that the optimal values of the primal DRO problem (1) and the finite convex program (9) match, and the fifth inequality follows from weak duality. Again, we may conclude that all inequalities in the above expression are in fact equalities, which proves that the optimal values of the primal and dual DRO problems (1) and (3) coincide and that \(\mathbb{Q}(n)\) is asymptotically optimal in (3).
As a byproduct, the proof of Proposition 2.14 reveals that if Assumptions 2.8, 2.9, 2.11 and 2.13 hold and if \(\varepsilon>0\), then the infimum of the primal DRO problem (1) equals the supremum of the dual DRO problem (3). Hence, strong duality holds. The proof of Proposition 2.14 also reveals that the convex reformulation (9) of the dual DRO problem (3) is always solvable and that any maximizer of (9) can be used to construct a least favorable distribution that is optimal in (3), if \(\mathcal{I}_{j}^{\infty}=\emptyset\) for all \(j\in[J]\), or a sequence of distributions that are _asymptotically_ optimal in (3), if \(\mathcal{I}_{j}^{\infty}\neq\emptyset\) for some \(j\in[J]\). Finally, the proof of Proposition 2.14 shows that Assumption 2.13 can be replaced with the following condition.
**Assumption 2.15** (Relaxed self service condition).: At least one of the following two conditions is satisfied: (i) \(\Theta\) is compact or (ii) the index set \(\mathcal{I}_{j}^{\infty}=\{i\in[I]:q_{ij}^{*}=0,\,\xi_{ij}^{*}\neq 0\}\) is non-empty for every \(j\in[J]\), where \((\{q_{ij}^{*},\xi_{ij}^{*},\alpha_{ij}^{*}\}_{i,j},\{\nu_{l}^{*},\beta_{l}^{ *}\}_{l})\) is a maximizer of problem (9).
The next theorem formalizes the above insights, thus identifying conditions under which one can compute Nash equilibria for the DRO problem (1) by solving the finite convex programs (7) and (9).
**Theorem 2.16** (Computing Nash equilibria).: If Assumptions 2.8, 2.9, 2.11 and 2.15 hold and \(\varepsilon>0\), then the optimal values of the primal and dual DRO problems (1) and (3) match, and the following hold.
1. If \((\theta^{\star},\lambda^{\star},\{s^{\star}_{j}\}_{j},\{\zeta^{\ell\star}_{ij},\zeta^{c\star}_{ij}\}_{i,j},\{\tau^{\star}_{ijk},\zeta^{g\star}_{ijk}\}_{i,j,k})\) solves (7), then \(\theta^{\star}\) solves the primal DRO problem (1).
2. If \((\{\beta^{\star}_{l}\}_{l},\{\nu^{\star}_{l}\}_{l},\{q^{\star}_{ij}\}_{i,j},\{\xi^{\star}_{ij}\}_{i,j},\{\alpha^{\star}_{ij}\}_{i,j})\) solves (9) with \(\mathcal{I}^{\infty}_{j}=\emptyset\) for every \(j\in[J]\), then the discrete distribution \(\mathbb{Q}^{\star}=\sum_{j\in[J]}\sum_{i\in\mathcal{I}^{+}_{j}}q^{\star}_{ij}\delta_{\widehat{z}_{j}+\xi^{\star}_{ij}/q^{\star}_{ij}}\) solves the dual DRO problem (3).
Proof.: The claim follows from the proofs of Propositions 2.12 and 2.14 and the above discussion.
The conditions of Theorem 2.16 do not ensure that (7) is solvable and, even though they guarantee the solvability of (9), they do not ensure that (9) has a solution with \(\mathcal{I}^{\infty}_{j}=\emptyset\) for every \(j\in[J]\); see the proofs of Propositions 2.12 and 2.14. However, Theorem 2.16 implies that if (7) is solvable and (9) has a solution with \(\mathcal{I}^{\infty}_{j}=\emptyset\) for all \(j\in[J]\), then the DRO problem (1) admits a Nash equilibrium \((\theta^{\star},\mathbb{Q}^{\star})\), which can be computed from the solutions of the finite convex programs (7) and (9). The following lemma identifies easily checkable sufficient conditions for problem (7) to be solvable.
**Lemma 2.17** (Existence of \(\theta^{\star}\)).: Suppose that Assumptions 2.8, 2.9 and 2.11 hold and that \(\varepsilon>0\). Then, the finite convex program (7) is solvable if \(\Theta\) is compact or if (9) admits a Slater point.
Proof.: Assume first that \(\Theta\) is compact. From the proof of Proposition 2.14 we know that (10) and (11) are dual convex programs and that (11) admits a Slater point. This implies via [93, Theorem 2 (i)] that (10) is solvable irrespective of \(\theta\). As (7) minimizes the minimum of (10) across all \(\theta\in\Theta\) and as \(\Theta\) is compact, we may conclude that problem (7) is solvable. Assume next that (9) admits a Slater point. As (9) is dual to (7), it follows again from [93, Theorem 2 (i)] that (7) must be solvable.
The next lemma identifies conditions ensuring that (9) has a solution with \(\mathcal{I}^{\infty}_{j}=\emptyset\) for all \(j\in[J]\).
**Lemma 2.18** (Existence of \(\mathbb{Q}^{\star}\)).: Suppose that Assumptions 2.8, 2.9 and 2.11 hold and that \(\varepsilon>0\). Then, the finite convex program (9) has a solution with \(\mathcal{I}^{\infty}_{j}=\emptyset\) for all \(j\in[J]\) if the support set \(\mathcal{Z}\) is compact or if the transportation cost function \(c(z,\widehat{z})\) grows superlinearly with \(z\).
Proof.: This is an immediate consequence of the proof of Proposition 2.14.
Under the conditions of Lemma 2.17, any solution of (7) gives rise to a robust decision \(\theta^{\star}\), and under the conditions of Lemma 2.18, any solution of (9) gives rise to a least favorable distribution \(\mathbb{Q}^{\star}\). By construction, \(\theta^{\star}\) and \(\mathbb{Q}^{\star}\) form a Nash equilibrium for the DRO problem (1). The constructive results of this section thus strengthen and complement the existential results of Theorem 2.6 and Corollary 2.7. We emphasize, in particular, that the conditions of Lemma 2.18 are reminiscent of the growth condition specified in Assumption 2.1 (ii), which was instrumental in Theorem 2.6 to prove that the dual DRO problem (3) is solvable. Similarly, the conditions of Lemma 2.17 are reminiscent of the compactness condition that was needed in Corollary 2.7 to prove that the primal DRO problem (1) is solvable.
The results of this section readily extend to decision problems with loss functions that display an arbitrary dependence on some of the uncertain problem parameters provided that nature cannot perturb their marginal distribution. Such decision problems naturally arise in machine learning applications (see Section 4). Propositions 2.12 and 2.14 as well as Theorem 2.16 remain valid in this generalized setting with obvious minor modifications but at the expense of a higher notational overhead; see Appendix A.
## 3 Regularization by Robustification
It is well known that many regularization schemes in statistics and machine learning admit a robustness interpretation. Apart from carrying an aesthetic appeal, such interpretations often lead to generalization bounds for the optimizers of regularized learning models. This prompts us to seek a comprehensive theory of regularization and optimal transport-based robustification that unifies various specialized results scattered across the existing literature and that rationalizes, for the first time, a broad range of higher-order variation regularization schemes. In Section 3.1 we first study the _primal_ regularizing effects of robustification by relating the worst-case expected loss to the expected loss under the reference distribution adjusted by Lipschitz and higher-order variation regularization terms. In Section 3.2 we then study the _dual_ regularizing effects of robustification by relating the worst-case expected loss to the expected value of a regularized (_e.g._, smoothed) loss function under the reference distribution.
### 3.1 Primal Regularizing Effects of Robustification
In this section we will show that, under natural regularity conditions, the worst-case expected loss across all distributions in a generic optimal transport-based ambiguity set is bounded above by the sum of the expected loss under the reference distribution and several regularization terms that penalize \(L^{p}\)-norms and Lipschitz moduli of the higher-order derivatives of the loss function. Our results generalize and unify several existing results, which have revealed intimate connections between robustification and gradient regularization [32], Hessian regularization [6] and Lipschitz regularization [60]; see also [74, 75, 13, 32].
The subsequent discussion focuses on the inner maximization problem in (1), where \(\theta\) is fixed. For ease of exposition, we thus temporarily suppress the dependence of the loss function on \(\theta\) and use the shorthand notation \(\ell(z)\) throughout this section. In addition, we will also adopt the following notational conventions. For any \(k\in\mathbb{Z}_{+}\), we use \(D^{k}\ell(z)\) to denote the totally symmetric tensor of all \(k\)-th order partial derivatives of \(\ell(z)\). Accordingly, \(D^{k}\ell(z)[\xi_{1},\dots,\xi_{k}]\) stands for the directional derivative of \(\ell(z)\) along the directions \(\xi_{i}\in\mathbb{R}^{d}\) for \(i\in[k]\). If \(\xi_{i}=\xi\) for all \(i\in[k]\), then we use the shorthand \(D^{k}\ell(z)[\xi]^{k}\). Any norm \(\|\cdot\|\) on \(\mathbb{R}^{d}\) induces a norm on the space of totally symmetric \(k\)-th order tensors through
\[\|D^{k}\ell(z)\|=\sup_{\xi_{1},\dots,\xi_{k}}\left\{\left|D^{k}\ell(z)[\xi_{1},\dots,\xi_{k}]\right|:\|\xi_{i}\|\leq 1\ \forall i\in[k]\right\}=\sup_{\xi}\left\{\left|D^{k}\ell(z)[\xi]^{k}\right|:\| \xi\|\leq 1\right\},\]
where the second equality follows from the symmetry of \(D^{k}\ell(z)\)[5, Satz 1]. By slight abuse of notation, we use the same symbol \(\|\cdot\|\) for the tensor norm induced by the vector norm \(\|\cdot\|\).
The results of this section will depend on the following smoothness conditions.
**Assumption 3.1** (Smoothness properties of the loss function).:
1. There exist \(G>0\) and \(p\in\mathbb{N}\) such that the loss function \(\ell(z)\) is \(p\) times continuously differentiable and satisfies \(\ell(z)\leq G(1+\|z\|^{p})\), \(\|D^{k}\ell(z)\|\leq G(1+\|z\|^{p-k})\) for all \(k\in[p-1]\), and \(\|D^{p}\ell(z)\|\leq G\) for all \(z\in\mathcal{Z}\).
2. The tensor \(D^{k}\ell(z)\) of \(k\)-th order partial derivatives is Lipschitz continuous in \(z\) for all \(k\in[p-1]\).
We are now ready to state the main result of this section.
**Theorem 3.2** (Regularization by robustification).: Suppose that Assumption 2.1 (i) holds. If there exists a norm \(\|\cdot\|\) on \(\mathbb{R}^{d}\) such that \(c(z,\widetilde{z})\) satisfies Assumption 2.1 (ii) for \(d(z,\widetilde{z})=\|z-\widetilde{z}\|\) and such
that \(\ell(z)\) satisfies Assumption 3.1 (i) with respect to \(\|\cdot\|\), then the worst-case expected loss satisfies
\[\sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})}\mathbb{E}_{Z \sim\mathbb{Q}}\left[\ell(Z)\right]\leq\mathbb{E}_{\widehat{Z}\sim\widehat{ \mathbb{P}}}[\ell(\widehat{Z})]+\sum_{k=1}^{p-1}\frac{\varepsilon^{\frac{1}{p_ {k}}}}{k!}\,\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}}\left[\|D^{k}\ell( \widehat{Z})\|^{q_{k}}\right]^{\frac{1}{q_{k}}}+\frac{\varepsilon}{p!}\,\, \mathrm{lip}(D^{p-1}\ell)<\infty, \tag{15}\]
where \(p_{k}=p/k\) and \(q_{k}=p/(p-k)\) for all \(k\in[p-1]\).
Proof.: Fix any \(\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\), and use \(\pi^{\star}\) to denote an optimal coupling of \(\mathbb{Q}\) and \(\widehat{\mathbb{P}}\), which exists thanks to [86, Theorem 4.1]. By the defining properties of couplings, we have
\[\mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(Z)\right]-\mathbb{E}_{\widehat{Z}\sim \widehat{\mathbb{P}}}[\ell(\widehat{Z})]=\int_{\mathcal{Z}\times\mathcal{Z}} \left[\ell(z)-\ell(\widehat{z})\right]\mathrm{d}\pi^{\star}(z,\widehat{z}).\]
Representing the integrand as a Taylor series with Lagrange remainder yields
\[\ell(z)-\ell(\widehat{z}) =\sum_{k=1}^{p-1}\frac{1}{k!}D^{k}\ell(\widehat{z})\left[z- \widehat{z}\right]^{k}+\frac{1}{p!}D^{p}\ell(\widehat{z}+t\cdot(z-\widehat{z }))\left[z-\widehat{z}\right]^{p}\] \[\leq\sum_{k=1}^{p-1}\frac{1}{k!}\|D^{k}\ell(\widehat{z})\|\|z- \widehat{z}\|^{k}+\frac{1}{p!}\|D^{p}\ell(\widehat{z}+t\cdot(z-\widehat{z})) \|\|z-\widehat{z}\|^{p}\]
for some \(t\in(0,1)\) that depends (measurably) on \(z\) and \(\widehat{z}\) [45, Theorem 2.2.5]. The inequality in the second line follows directly from the multi-linearity of tensors and the definition of the tensor norm. By Hölder's inequality, the integral of the \(k\)-th term in the above sum against \(\pi^{\star}\) admits the estimate
\[\int_{\mathcal{Z}\times\mathcal{Z}}\|D^{k}\ell(\widehat{z})\|\|z -\widehat{z}\|^{k}\mathrm{d}\pi^{\star}(z,\widehat{z}) \leq\left(\int_{\mathcal{Z}\times\mathcal{Z}}\|z-\widehat{z}\|^{ p}\mathrm{d}\pi^{\star}(z,\widehat{z})\right)^{\frac{1}{p_{k}}}\left(\int_{ \mathcal{Z}}\|D^{k}\ell(\widehat{z})\|^{q_{k}}\mathrm{d}\widehat{\mathbb{P}}( \widehat{z})\right)^{\frac{1}{q_{k}}}\] \[\leq\varepsilon^{\frac{1}{p_{k}}}\left(\int_{\mathcal{Z}}\|D^{k} \ell(\widehat{z})\|^{q_{k}}\mathrm{d}\widehat{\mathbb{P}}(\widehat{z})\right) ^{\frac{1}{q_{k}}},\]
where \(p_{k}=p/k\) and \(q_{k}=p/(p-k)\). The second inequality follows from Assumption 2.1 (ii), which ensures that \(c(z,\widehat{z})\geq\|z-\widehat{z}\|^{p}\) for all \(z,\widehat{z}\in\mathcal{Z}\), and from the definition of \(\pi^{\star}\) as an optimal coupling of \(\widehat{\mathbb{P}}\) and \(\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\). A similar reasoning based on Hölder's inequality implies that
\[\int_{\mathcal{Z}\times\mathcal{Z}}\|D^{p}\ell(\widehat{z}+t\cdot (z-\widehat{z}))\|\|z-\widehat{z}\|^{p}\mathrm{d}\pi^{\star}(z,\widehat{z}) \leq\sup_{\widehat{z}\in\mathcal{Z}}\|D^{p}\ell(\widehat{z})\|\int _{\mathcal{Z}\times\mathcal{Z}}\|z-\widehat{z}\|^{p}\mathrm{d}\pi^{\star}(z, \widehat{z})\] \[\leq\varepsilon\sup_{\widehat{z}\in\mathcal{Z}}\|D^{p}\ell( \widehat{z})\|=\varepsilon\,\mathrm{lip}(D^{p-1}\ell),\]
where the dependence of \(t\) on \(z\) and \(\widehat{z}\) is notationally suppressed to avoid clutter. In summary, the above reasoning readily implies that
\[\mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(Z)\right]-\mathbb{E}_{\widehat{Z}\sim \widehat{\mathbb{P}}}[\ell(\widehat{Z})]\leq\sum_{k=1}^{p-1}\frac{\varepsilon ^{\frac{1}{p_{k}}}}{k!}\,\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}} \left[\|D^{k}\ell(\widehat{Z})\|^{q_{k}}\right]^{\frac{1}{q_{k}}}+\frac{ \varepsilon}{p!}\,\mathrm{lip}(D^{p-1}\ell).\]
Note that the right hand side of the resulting inequality is independent of \(\mathbb{Q}\). Thus, the inequality remains valid if we maximize its left hand side across all \(\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\), which finally yields the upper bound in (15). To demonstrate that this upper bound is finite, we observe that
\[\begin{aligned}\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}}\left[\|D^{k}\ell(\widehat{Z})\|^{q_{k}}\right]&\leq\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}}\left[G(1+\|\widehat{Z}\|^{p-k})^{q_{k}}\right]\leq\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}}\left[G(2+\|\widehat{Z}\|)^{p}\right]\\&\leq G^{\prime}\left(1+\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}}\left[\|\widehat{Z}-\widehat{z}_{0}\|^{p}\right]\right)\leq G^{\prime}\left(1+\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}}\left[c(\widehat{Z},\widehat{z}_{0})\right]\right)<\infty\quad\forall k\in[p-1],\end{aligned}\]
where the first two inequalities follow from Assumption 3.1 (i) and the definition of \(q_{k}\), respectively. By the triangle inequality and the binomial theorem, there is \(G^{\prime}>0\) with \(G(2+\|z\|)^{p}\leq G^{\prime}(1+\|z-\widehat{z}_{0}\|^{p})\) for all \(z\in\mathcal{Z}\), and this justifies the third inequality. The fourth inequality exploits Assumption 2.1 (ii), which holds for \(d(z,\widehat{z})=\|z-\widehat{z}\|\), and the fifth inequality follows from Assumption 2.1 (i). Similarly, Assumption 3.1 (i) implies that \(\mathrm{lip}(D^{p-1}\ell)\leq G<\infty\). Thus, the upper bound in (15) is finite.
Theorem 3.2 asserts that the worst-case expected loss with respect to a generic optimal transport-based ambiguity set is bounded above by the sum of the expected loss under the reference distribution, \(p-1\) different variation regularization terms, and a Lipschitz regularization term. The \(k\)-th variation regularization term penalizes the \(L^{q_{k}}\)-norm of the tensor of \(k\)-th order partial derivatives, and the Lipschitz regularization term penalizes the Lipschitz modulus of the tensor of \((p-1)\)-st order partial derivatives of the loss. Variation regularizers such as regularizers penalizing the gradients, Hessians or tensors of higher-order partial derivatives are successfully used in various applications in machine learning. For example, they emerge in the adversarial training of neural networks [30, 36, 42, 57, 66, 85] or in the stabilizing training of generative adversarial networks [35, 63, 72]. In the context of image recovery, regularizers that penalize higher-order partial derivatives are sometimes preferred to total variation regularizers because they prevent undesirable staircase artifacts [19]. For example, the ability of Hessian regularizers (or their finite difference approximations) to mitigate staircase effects has been documented in [52, 53]. More generally, regularizers based on higher-order partial derivatives (or their finite difference approximations) can be used to preserve or enhance various image features such as edges or ridges [41]. Theorem 3.2 gives these heuristic regularization schemes a theoretical justification.
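To make the bound (15) concrete, the following minimal sketch (our own numerical illustration, not part of the analysis above) instantiates it for the quadratic loss \(\ell(z)=\|z\|_2^2\) with \(p=2\) and the squared Euclidean transportation cost, and compares it against the expected loss under one feasible perturbation of an empirical reference distribution; the sample size, dimension and budget are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
J, d, eps = 200, 5, 0.3

Z_hat = rng.normal(size=(J, d))            # empirical reference samples
loss = lambda Z: np.sum(Z ** 2, axis=1)    # l(z) = ||z||_2^2, so p = 2 and lip(Dl) = 2

m = loss(Z_hat).mean()                                            # E[l(Z_hat)]
grad_term = np.sqrt(np.mean(np.sum((2 * Z_hat) ** 2, axis=1)))    # E[||Dl(Z_hat)||^2]^(1/2)
bound = m + np.sqrt(eps) * grad_term + (eps / 2) * 2.0            # right-hand side of (15)

# a feasible competitor: rescale every sample so that the average squared
# transportation cost (1/J) * sum_j ||delta_j||^2 exactly exhausts the budget eps
s = np.sqrt(eps / m)
Z_pert = (1 + s) * Z_hat
assert np.isclose(np.mean(np.sum((Z_pert - Z_hat) ** 2, axis=1)), eps)

print(loss(Z_pert).mean(), "<=", bound)
```

For this particular loss the displayed perturbation already attains the upper bound, so (15) happens to be tight in this instance; for general losses it provides an upper bound only.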
The next corollary shows that the estimate (15) can be simplified by upper bounding all variation regularization terms by corresponding Lipschitz regularizers.
**Corollary 3.3** (Lipschitz regularization).: If all assumptions of Theorem 3.2 as well as Assumption 3.1 (ii) hold, then the worst-case expected loss satisfies
\[\sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})}\mathbb{E}_ {Z\sim\mathbb{Q}}[\ell(Z)]\leq\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}} }[\ell(\widehat{Z})]+\sum_{k=1}^{p-1}\frac{\varepsilon^{\frac{1}{p_{k}}}}{k!} \operatorname{lip}(D^{k-1}\ell)<\infty. \tag{16}\]
Proof.: Assumption 3.1 (ii) implies that \(\|D^{k}\ell(\widehat{Z})\|\leq\sup_{\widehat{z}\in\mathcal{Z}}\|D^{k}\ell(\widehat{z})\|=\operatorname{lip}(D^{k-1}\ell)<\infty\) \(\widehat{\mathbb{P}}\)-almost surely for all \(k\in[p-1]\). Substituting these conservative upper bounds into (15) yields (16).
We now specialize the results of Theorem 3.2 and Corollary 3.3 to Wasserstein ambiguity sets.
**Corollary 3.4** (Regularization by robustification over Wasserstein balls).: If all assumptions of Theorem 3.2 hold and \(\mathbb{W}_{\varepsilon}(\widehat{\mathbb{P}})=\{\mathbb{Q}\in\mathcal{P}( \mathcal{Z}):W_{p}(\mathbb{Q},\widehat{\mathbb{P}})\leq\varepsilon\}\) is the \(p\)-th Wasserstein ball, then
\[\sup_{\mathbb{Q}\in\mathbb{W}_{\varepsilon}(\widehat{\mathbb{P}})}\mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(Z)\right]\leq\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}}[\ell(\widehat{Z})]+\sum_{k=1}^{p-1}\frac{\varepsilon^{k}}{k!}\,\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}}\left[\|D^{k}\ell(\widehat{Z})\|^{q_{k}}\right]^{\frac{1}{q_{k}}}+\frac{\varepsilon^{p}}{p!}\sup_{\widehat{z}\in\mathcal{Z}}\|D^{p}\ell(\widehat{z})\|<\infty, \tag{17a}\] where \(q_{k}=p/(p-k)\) for all \(k\in[p-1]\). If additionally Assumption 3.1 (ii) holds, then \[\sup_{\mathbb{Q}\in\mathbb{W}_{\varepsilon}(\widehat{\mathbb{P}})}\mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(Z)\right]\leq\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}}[\ell(\widehat{Z})]+\sum_{k=1}^{p}\frac{\varepsilon^{k}}{k!}\operatorname{lip}(D^{k-1}\ell)<\infty. \tag{17b}\]
The proof of Corollary 3.4 follows from those of Theorem 3.2 and Corollary 3.3 and is thus omitted. To conclude, we show that the results of this section generalize several bounds from the extant literature.
**Remark 3.5** (Gradient regularization).: Corollary 3.4 is reminiscent of [32, Theorem 1], which shows that robustification with respect to a Wasserstein ball \(\mathbb{W}_{\varepsilon_{J}}(\widehat{\mathbb{P}}_{J})\) of radius \(\varepsilon_{J}=\varepsilon/\sqrt{J}\) around the empirical distribution \(\widehat{\mathbb{P}}_{J}\) on \(J\) independent training samples is approximately equivalent to gradient regularization. Specifically, if \(p>1\), if the loss function \(\ell(z)\) is piecewise smooth and its gradient \(D\ell(z)\) obeys two growth conditions, and if the data-generating distribution has a bounded density, we have
\[\sup_{\mathbb{Q}\in\mathbb{W}_{\varepsilon_{J}}(\widehat{\mathbb{P}}_{J})} \mathbb{E}_{Z\sim\mathbb{Q}}\left[\ell(Z)\right] \;\approx\;\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}_{J}}[\ell(\widehat{ Z})]+\varepsilon_{J}\,\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}_{J}}[\|D \ell(\widehat{Z})\|^{q}]^{\frac{1}{q}},\]
where \(q=p/(p-1)\) is the Hölder conjugate of \(p\), and the approximation error is bounded by a constant multiple of \(\max\{\varepsilon_{J}^{2},\varepsilon_{J}^{p}\}\) with high probability under the data-generating distribution. Corollary 3.4 readily implies that the right hand side of the above expression provides an upper bound on the worst-case expected loss. This can be seen from (17a) by ignoring all terms of the order \(\mathcal{O}(\varepsilon_{J}^{k})\) for \(k>1\).
**Remark 3.6** (Hessian regularization).: Corollary 3.4 also generalizes [6, Remark 10], which establishes an upper bound on the worst-case expected loss across all distributions in a Wasserstein ball with \(p>2\) and \(d(z,\widehat{z})=\|z-\widehat{z}\|_{2}\). This bound penalizes the gradient and the Hessian of the loss function and holds only _asymptotically_ for small \(\varepsilon\). One readily verifies that this bound can be obtained from (17a) by ignoring all terms of the order \(\mathcal{O}(\varepsilon^{k})\) for \(k>2\) and by noting that \(\|D^{2}\ell(\widehat{z})\|_{2}=\lambda_{\max}\left(D^{2}\ell(\widehat{z})\right)\).
**Remark 3.7** (Lipschitz regularization).: For \(p=1\) the upper bound in (17b) collapses to the sum of the expected loss under the reference distribution and the Lipschitz modulus of \(\ell(z)\) weighted by the Wasserstein radius \(\varepsilon\). This bound is exact if \(\mathcal{Z}=\mathbb{R}^{d}\) and the loss function is convex [60, SS 6.2], and it gives classical regularization schemes in statistics and machine learning a robustness interpretation [13, 32, 74, 75]. Note that the upper bound in (17b) fails to be tight for \(p>1\) even if \(\ell(z)\) is convex.
### 3.2 Dual Regularizing Effects of Robustification
A distributionally robust learning model over linear hypotheses is a DRO problem of the form
\[\inf_{\theta\in\Theta}\ \sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}( \widehat{\mathbb{P}})}\mathbb{E}_{Z\sim\mathbb{Q}}\left[L(\langle\theta,Z \rangle)\right], \tag{18}\]
where \(L:\mathbb{R}\to(-\infty,+\infty]\) is a univariate loss function. Thus, (18) is a special case of (1) with \(\ell(\theta,z)=L(\langle\theta,z\rangle)\). Several important problems in operations research and machine learning can be framed as instances of (18). Examples include portfolio optimization and newsvendor problems, plain vanilla or kernelized regression and classification problems, (linear) inverse problems, and phase retrieval problems. For problem (18) to be well-defined, we assume throughout this section that Assumption 1.1 holds. Proposition 1.2 thus implies that the DRO problem (18) is equivalent to the stochastic program
\[\inf_{\theta\in\Theta,\lambda\geq 0}\ \lambda\varepsilon+\mathbb{E}_{\widehat{Z} \sim\widehat{\mathbb{P}}}\left[\ell_{c}(\theta,\lambda,\widehat{Z})\right], \tag{19}\]
whose objective function involves the expectation of the \(c\)-transform \(\ell_{c}(\theta,\lambda,\widehat{Z})\) under the reference distribution \(\widehat{\mathbb{P}}\). Throughout this section we also assume that \(\mathcal{Z}=\mathbb{R}^{d}\), and thus the \(c\)-transform satisfies
\[\ell_{c}(\theta,\lambda,\widehat{z})=\sup_{z\in\text{dom}(c(\cdot,\widehat{z} ))}L(\langle\theta,z\rangle)-\lambda c(z,\widehat{z}). \tag{20}\]
While the conservative bound (15) derived in Section 3.1 exposes the _primal_ regularizing effects, the reformulation (19) exposes the _dual_ regularizing effects of robustification. Indeed, from a primal perspective, robustification essentially amounts to adding various regularization terms to the expected loss, and from a dual perspective, robustification essentially amounts to replacing the loss with its \(c\)-transform, which is best viewed as a regularized version of the loss function, and tuning the corresponding regularization parameter \(\lambda\). The main goal of this section is to shed more light on this dual perspective on robustification. Specifically, we will show that the \(c\)-transform \(\ell_{c}\) can often be expressed in terms of a suitable envelope of \(L\) such as the Pasch-Hausdorff envelope or the Moreau envelope. In addition, we will show how the formulas (19) and (20) can be exploited algorithmically. Indeed, in Section 2.2 we showed that problem (18) is equivalent to a finite convex program if all convexity
conditions of Assumption 2.9 are satisfied. In this section we will show that, as its objective function depends only on a one-dimensional projection of \(z\), problem (18) can sometimes be solved efficiently even if \(L\) fails to be (piecewise) concave or even if \(c\) fails to be convex.
We now leverage techniques from nonconvex optimization to recast the (possibly nonconvex) problem (20) for evaluating the \(c\)-transform as a _univariate_ optimization problem.
**Theorem 3.8** (Nonconvex duality).: The \(c\)-transform (20) admits the following dual reformulations.
1. If \(L\) is proper, convex and lower semi-continuous, while \(c\) is proper and lower semi-continuous, then \(\ell_{c}(\theta,\lambda,\widehat{z})=\sup_{\gamma\in\operatorname{dom}(L^{*}) }\,\lambda c^{*1}(\gamma\theta/\lambda,\widehat{z})-L^{*}(\gamma)\), which is convex in \((\theta,\lambda)\) for any \(\widehat{z}\).
2. If \(c(z,\widehat{z})=\|z-\widehat{z}\|^{p}\) for a norm \(\|\cdot\|\) and \(p\geq 1\), then \(\ell_{c}(\theta,\lambda,\widehat{z})=\sup_{\gamma\in\mathbb{R}}\,L\big{(}( \theta,\widehat{z})+\gamma\|\theta\|_{*}\big{)}-\lambda|\gamma|^{p}\).
The reformulations of the nonconvex problem (20) derived in Theorem 3.8 are generically also nonconvex. As these reformulations involve conjugate functions and dual norms, respectively, we refer to them as _dual_ problems even though they constitute maximization problems like the primal problem (20). These dual problems may be preferable to the primal problem (20) as they involve only a scalar decision. We remark that Theorem 3.8 (ii) generalizes [16, Theorem 1], where \(\|\cdot\|\) is a Mahalanobis norm and \(p=2\). Indeed, Theorem 3.8 (ii) holds for arbitrary norms \(\|\cdot\|\) and exponents \(p\).
Proof of Theorem 3.8.: If \(\lambda>0\), then assertion (i) follows from a classical duality result for nonconvex optimization problems due to Toland [82, § 3.1]. In order to keep this paper self-contained, we nevertheless provide a short proof. Note first that \(L^{**}=L\) thanks to [70, Theorem 12.2], which applies because \(L\) is proper, convex and lower semi-continuous. Thus, the \(c\)-transform can be reformulated as
\[\begin{aligned}
\ell_{c}(\theta,\lambda,\widehat{z})&=\sup_{z\in\operatorname{dom}(c(\cdot,\widehat{z}))}\,L^{**}(\langle\theta,z\rangle)-\lambda c(z,\widehat{z})\\
&=\sup_{z\in\operatorname{dom}(c(\cdot,\widehat{z}))}\,\sup_{\gamma\in\operatorname{dom}(L^{*})}\,\gamma\langle\theta,z\rangle-L^{*}(\gamma)-\lambda c(z,\widehat{z})\\
&=\sup_{\gamma\in\operatorname{dom}(L^{*})}\,\sup_{z\in\operatorname{dom}(c(\cdot,\widehat{z}))}\langle\gamma\theta,z\rangle-\lambda c(z,\widehat{z})-L^{*}(\gamma)\\
&=\sup_{\gamma\in\operatorname{dom}(L^{*})}\,\lambda c^{*1}(\gamma\theta/\lambda,\widehat{z})-L^{*}(\gamma),
\end{aligned}\]
where the second equality holds because the biconjugate \(L^{**}\) is defined as the conjugate of \(L^{*}\) and the last equality follows from the elementary insight that the partial conjugate of \(\lambda c(\cdot,\widehat{z})\) coincides with \(\lambda c^{*1}(\cdot/\lambda,\widehat{z})\). This establishes the claim for \(\lambda>0\). Note that the restriction to \(\operatorname{dom}(L^{*})\) in the last maximization problem is necessary because the objective is undefined if \(\lambda c^{*1}(\gamma\theta/\lambda,\widehat{z})=L^{*}(\gamma)=\infty\).
Assume now that \(\lambda=0\). For clarity of exposition we define \(\Phi(z)=c(z,\widehat{z})\) for \(\widehat{z}\) fixed. As \(L^{**}=L\) and as \(0c(z,\widehat{z})=0\Phi(z)=0\) throughout \(\operatorname{dom}(\Phi)\), the \(c\)-transform can now be reformulated as
\[\ell_{c}(\theta,0,\widehat{z})=\sup_{z\in\operatorname{dom}(\Phi)}L^{**}( \langle\theta,z\rangle)=\sup_{z\in\operatorname{dom}(\Phi)}\,\sup_{\gamma\in \operatorname{dom}(L^{*})}\langle\gamma\theta,z\rangle-L^{*}(\gamma)=\sup_{ \gamma\in\operatorname{dom}(L^{*})}\delta^{*}_{\operatorname{dom}(\Phi)}( \gamma\theta)-L^{*}(\gamma).\]
The desired reformulation follows if we can show that \(\delta^{*}_{\operatorname{dom}(\Phi)}(\gamma\theta)=0\Phi^{*}(\gamma\theta/ 0)=0c^{*1}(\gamma\theta/0,\widehat{z})\). To this end, note first that \(0\in\operatorname{dom}(\Phi^{*})\) because \(\Phi\) is non-negative. Thus, we have
\[\begin{aligned}
\delta^{*}_{\operatorname{dom}(\Phi)}(\gamma\theta)&=\sup_{\Delta\in\mathbb{R}^{d}}\gamma\langle\theta,\Delta\rangle-\inf_{\alpha>0}\Phi(\Delta)/\alpha\;=\;\sup_{\alpha>0}\,\sup_{\Delta\in\mathbb{R}^{d}}\gamma\langle\theta,\Delta\rangle-\Phi(\Delta)/\alpha\\
&=\lim_{\alpha\to+\infty}\alpha^{-1}\sup_{\Delta\in\mathbb{R}^{d}}\alpha\gamma\langle\theta,\Delta\rangle-\Phi(\Delta)\;=\;\lim_{\alpha\to+\infty}\Phi^{*}(\alpha\gamma\theta)/\alpha\\
&=\;0\Phi^{*}(\gamma\theta/0)\;=\;0c^{*1}(\gamma\theta/0,\widehat{z}),
\end{aligned}\]
where the first equality exploits the definition of the support function and the non-negativity of \(\Phi\), which implies that \(\inf_{\alpha>0}\Phi(\Delta)/\alpha=\delta_{\text{dom}(\Phi)}\), whereas the second, third and fourth equalities follow from elementary reformulations and from the definition of the conjugate. The fifth equality holds thanks to [70, Corollary 8.5.2 & Theorem 13.3]. As \(0\in\text{dom}(\Phi^{*})\), these results ensure that the recession function \(\lim_{\alpha\to+\infty}\Phi^{*}(\alpha\gamma\theta)/\alpha\) of \(\Phi^{*}\) coincides with the support function \(\delta^{*}_{\text{dom}(\Phi^{**})}(\gamma\theta)\) of the domain of \(\Phi^{**}\), which in turn is our definition of \(0\Phi^{*}(\gamma\theta/0)\). The last equality follows from the definition of \(\Phi\).
It remains to be shown that \(\ell_{c}(\theta,\lambda,\widehat{z})\) is convex in \((\theta,\lambda)\). This is evident from our dual reformulation of \(\ell_{c}(\theta,\lambda,\widehat{z})\), however, because convexity is preserved by maximization. Hence, assertion (i) follows.
As for assertion (ii), define the auxiliary function \(\Psi(z)=\|z\|^{p}\) for \(z\in\mathbb{R}^{d}\), and recall that \(L\) is an arbitrary (possibly nonconvex) proper function. In this case, the \(c\)-transform can be reformulated as
\[\ell_{c}(\theta,\lambda,\widehat{z}) =\sup_{z\in\mathbb{R}^{d}}\;L(\langle\theta,z\rangle)-\lambda\Psi (z-\widehat{z})\] \[=\sup_{\gamma\in\mathbb{R}}\;\sup_{\Delta\in\mathbb{R}^{d}}\big{\{} L(\langle\theta,\widehat{z}\rangle+\gamma)-\lambda\Psi(\Delta):\langle\theta, \Delta\rangle=\gamma\big{\}}\] \[=\sup_{\gamma\in\mathbb{R}}\;\inf_{\beta\in\mathbb{R}}\;\sup_{ \Delta\in\mathbb{R}^{d}}\;L(\langle\theta,\widehat{z}\rangle+\gamma)-\lambda \Psi(\Delta)+\beta\langle\theta,\Delta\rangle-\beta\gamma.\]
In order to justify the third equality, consider the two maximization problems over \(\gamma\) in the second and the third lines of the above expression. If \(\theta=0\), then \(\gamma=0\) is the only feasible solution in both problems, and one readily verifies that the corresponding objective function values coincide. If \(\theta\neq 0\), on the other hand, then the objective functions of the two problems coincide for every fixed \(\gamma\in\mathbb{R}\) due to strong Lagrangian duality [10, Proposition 5.3.1], which applies because the primal maximization problem over \(\Delta\) in the second line is convex and feasible and because every feasible solution constitutes a Slater point. Recalling the definitions of conjugates and perspectives then yields
\[\ell_{c}(\theta,\lambda,\widehat{z}) =\sup_{\gamma\in\mathbb{R}}\;\inf_{\beta\in\mathbb{R}}\;L(\langle \theta,\widehat{z}\rangle+\gamma)+\Big{(}\sup_{\Delta\in\mathbb{R}^{d}}\; \beta\langle\theta,\Delta\rangle-\lambda\Psi(\Delta)\Big{)}-\beta\gamma\] \[=\sup_{\gamma\in\mathbb{R}}\;\inf_{\beta\in\mathbb{R}}\;L( \langle\theta,\widehat{z}\rangle+\gamma)+\lambda\Psi^{*}(\beta\theta/\lambda) -\beta\gamma. \tag{21}\]
To simplify (21), we assume first that \(p=1\) (Case I) and later that \(p>1\) (Case II).
**Case I \((p=1)\):** As \(\Psi(\cdot)=\|\cdot\|\) is a norm, one can show that \(\Psi^{*}(\cdot)=\delta_{C}(\cdot)\) is the indicator function of the dual norm ball \(C=\{\theta\in\mathbb{R}^{d}:\|\theta\|_{*}\leq 1\}\). If \(\lambda>0\) and \(\theta\neq 0\), we can thus re-express (21) as
\[\begin{aligned}
\sup_{\gamma\in\mathbb{R}}\;\inf_{\beta\in\mathbb{R}}\;L(\langle\theta,\widehat{z}\rangle+\gamma)+\lambda\delta_{C}(\beta\theta/\lambda)-\beta\gamma&=\sup_{\gamma\in\mathbb{R}}\;\inf_{\beta\in\mathbb{R}}\;\left\{L(\langle\theta,\widehat{z}\rangle+\gamma)-\beta\gamma:\|\beta\theta\|_{*}\leq\lambda\right\}\\
&=\sup_{\gamma\in\mathbb{R}}\;L(\langle\theta,\widehat{z}\rangle+\gamma)-\lambda|\gamma|/\|\theta\|_{*}\\
&=\sup_{\gamma\in\mathbb{R}}\;L(\langle\theta,\widehat{z}\rangle+\gamma\|\theta\|_{*})-\lambda|\gamma|,
\end{aligned}\]
where the second equality holds because the constraint \(\|\beta\theta\|_{*}\leq\lambda\) is equivalent to \(|\beta|\leq\lambda/\|\theta\|_{*}\), while the third equality follows from the variable substitution \(\gamma\leftarrow\gamma/\|\theta\|_{*}\). If \(\lambda>0\) and \(\theta=0\), on the other hand, then \(\lambda\Psi^{*}(\beta\theta/\lambda)=\lambda\Psi^{*}(0)=0\), and thus we can re-express (21) as
\[\sup_{\gamma\in\mathbb{R}}\;\inf_{\beta\in\mathbb{R}}\;L(\gamma)-\beta\gamma=L(0)=\sup_{\gamma\in\mathbb{R}}\;L(\langle\theta,\widehat{z}\rangle+\gamma\|\theta\|_{*})-\lambda|\gamma|.\]
Here, the first equality holds because \(\gamma=0\) is the only feasible solution of the maximization problem on the left hand side, and the second equality holds because \(\theta=0\), which implies that the maximization
problem on the right hand side is also solved by \(\gamma=0\). If \(\lambda=0\) and \(\theta\neq 0\), then \(\lambda\Psi^{*}(\beta\theta/\lambda)\) simplifies to \(0\Psi^{*}(\beta\theta/0)=\delta^{*}_{\mathbb{R}^{d}}(\beta\theta)=\delta_{\{0\} }(\beta\theta)\) thanks to our conventions for perspective functions, and (21) becomes
\[\sup_{\gamma\in\mathbb{R}}\;L(\langle\theta,\widehat{z}\rangle+\gamma)=\sup_{ \gamma\in\mathbb{R}}\;L(\langle\theta,\widehat{z}\rangle+\gamma\|\theta\|_{*}) =\sup_{\gamma\in\mathbb{R}}\;L(\langle\theta,\widehat{z}\rangle+\gamma\| \theta\|_{*})-\lambda|\gamma|,\]
where the first equality exploits the substitution \(\gamma\leftarrow\gamma/\|\theta\|_{*}\), and the second equality holds because \(\lambda=0\). If \(\lambda=0\) and \(\theta=0\), finally, then a similar reasoning reveals that (21) reduces to \(L(0)=\sup_{\gamma\in\mathbb{R}}\;L(\langle\theta,\widehat{z}\rangle+\gamma\|\theta\|_{*})-\lambda|\gamma|\). This completes the proof of assertion (ii) for \(p=1\).
**Case II \((p>1)\):** By [93, Lemma B.8 (ii)], the conjugate of \(\Psi(\cdot)=\|\cdot\|^{p}\) is given by \(\Psi^{*}(\cdot)=h(q)\|\cdot\|_{*}^{q}\), where \(h(p)=(p-1)^{p-1}/p^{p}\) and \(q>1\) is the Hölder conjugate of \(p\). If \(\lambda>0\) and \(\theta\neq 0\), (21) reduces to
\[\sup_{\gamma\in\mathbb{R}}\;\inf_{\beta\in\mathbb{R}}\;L(\langle \theta,\widehat{z}\rangle+\gamma)+h(q)\lambda^{1-q}\|\theta\|_{*}^{q}|\beta|^{q }-\beta\gamma =\sup_{\gamma\in\mathbb{R}}\;L(\langle\theta,\widehat{z}\rangle+ \gamma)-\lambda\,|\gamma/\|\theta\|_{*}|^{p}\] \[=\sup_{\gamma\in\mathbb{R}}\;L(\langle\theta,\widehat{z}\rangle+ \gamma\|\theta\|_{*})-\lambda|\gamma|^{p},\]
where the two equalities follow from a tedious analytical calculation and from the substitution \(\gamma\leftarrow\gamma/\|\theta\|_{*}\), respectively. If \(\lambda=0\) or \(\theta=0\), then we can proceed exactly as in Case I to prove that (21) is equivalent to \(\sup_{\gamma\in\mathbb{R}}\;L(\langle\theta,\widehat{z}\rangle+\gamma\|\theta\|_{*})-\lambda|\gamma|^{p}\). This completes the proof of assertion (ii).
Theorem 3.8 shows that, in the context of linear prediction models, evaluating the \(c\)-transform \(\ell_{c}\) is quite generally equivalent to solving a univariate maximization problem. If this maximization problem is easy to solve, then the stochastic program (19) equivalent to the DRO problem (18) can be addressed with stochastic gradient descent-type algorithms even if the reference distribution fails to be discrete and even if the convexity assumptions of Section 2.2 fail to hold. Specifically, if the univariate maximization problem that evaluates the \(c\)-transform \(\ell_{c}\) can be solved analytically, then one may use the envelope theorem [26, Theorem 2.16] to generate stochastic subgradients for the objective function of (19). In this case, the stochastic program (19) is amenable to standard (projected) stochastic subgradient methods; see, _e.g._, [69]. If the \(c\)-transform evaluation problem can only be solved approximately via a numerical procedure such as a grid-search method, a bisection algorithm or a sequential convex optimization approach, then the envelope theorem only provides _inexact_ stochastic subgradients. In this case, and provided that \(L\) is convex, one can use the stochastic subgradient descent algorithms with _inexact_ gradient oracles studied in [16, SS 3] and [81, SS 4] to solve problem (19). As an example, in Appendix B we identify conditions under which the univariate \(c\)-transform evaluation problem purported in Theorem 3.8 (ii) is a convex maximization problem even if \(L\) is convex instead of concave.
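As an illustration of how the dual reformulation of Theorem 3.8 (ii) can be exploited algorithmically, the following sketch evaluates the \(c\)-transform for a smooth but nonconcave loss by a crude grid search over the scalar \(\gamma\) and returns an (inexact) envelope-theorem gradient with respect to \(\theta\); the Euclidean norm, the sigmoid-type loss, the grid and all numerical values are assumptions made purely for illustration.

```python
import numpy as np

def c_transform(theta, lam, z_hat, L, dL, p=2, grid=np.linspace(-10.0, 10.0, 2001)):
    """Approximate sup_gamma L(<theta, z_hat> + gamma * ||theta||_2) - lam * |gamma|^p,
    i.e. the c-transform of Theorem 3.8 (ii) for the Euclidean norm, by a grid search."""
    s, nrm = theta @ z_hat, np.linalg.norm(theta)
    vals = L(s + grid * nrm) - lam * np.abs(grid) ** p
    gamma = grid[np.argmax(vals)]
    # for the Euclidean norm the inner maximizer is z* = z_hat + gamma * theta / ||theta||_2;
    # the envelope theorem then yields the (inexact) gradient dL(<theta, z*>) * z*
    z_star = z_hat + (gamma / nrm) * theta if nrm > 0 else z_hat
    return np.max(vals), dL(theta @ z_star) * z_star

# a nonconcave, smoothed zero-one loss: L(s) = 1 / (1 + exp(s)), written stably via tanh
L = lambda s: 0.5 * (1.0 - np.tanh(0.5 * s))
dL = lambda s: -0.25 / np.cosh(0.5 * s) ** 2
value, grad = c_transform(np.array([1.0, -0.5]), lam=0.8, z_hat=np.array([0.3, 1.2]), L=L, dL=dL)
```

By the same envelope argument, a (sub)gradient with respect to \(\lambda\) is simply \(-|\gamma^{\star}|^{p}\) at the maximizing \(\gamma^{\star}\).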
The \(c\)-transform \(\ell_{c}(\theta,\lambda,\widehat{z})\) defined in (20) can be viewed as an approximation of \(\ell(\theta,\widehat{z})=L(\langle\theta,\widehat{z}\rangle)\), where the parameter \(\lambda\) controls the approximation error. If \(c(z,\widehat{z})=\|z-\widehat{z}\|^{p}\) for some \(p\geq 1\), then the \(c\)-transform is closely related to the \(p\)-th envelope \(L_{p}:\mathbb{R}\times\mathbb{R}_{+}\to[-\infty,+\infty]\) of \(L\), which is defined via
\[L_{p}(s,\lambda)=\sup_{s^{\prime}\in\mathbb{R}}\;L(s^{\prime})-\lambda|s-s^{ \prime}|^{p}.\]
The next proposition uses Theorem 3.8 (ii) to formalize this relation and specializes the strong duality result of Proposition 1.2 to DRO problems of the form (18) with norm-based transportation costs.
**Proposition 3.9** (Strong duality for DRO problems of the form (18)).: Assume that \(\mathcal{Z}=\mathbb{R}^{d}\), that \(c(z,\widehat{z})=\|z-\widehat{z}\|^{p}\), where \(p\geq 1\) and \(\|\cdot\|\) is an arbitrary norm, and that \(\ell(\theta,z)=L(\langle\theta,z\rangle)\), where
the univariate loss function \(L\) is proper and upper semi-continuous and satisfies \(L(s)\leq C(1+|s|^{p})\) for some \(C\geq 0\) and all \(s\in\mathbb{R}\). If additionally Assumption 1.1 holds, then we have
\[\sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})}\mathbb{E}_{Z\sim\mathbb{Q}}\left[L(\langle\theta,Z\rangle)\right]=\inf_{\lambda\geq 0}\lambda\varepsilon\|\theta\|_{*}^{p}+\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}}\left[L_{p}(\langle\theta,\widehat{Z}\rangle,\lambda)\right]. \tag{22}\]
Proof.: If \(\theta=0\), then the left hand side of (22) equals \(L(0)\), and the right hand side of (22) evaluates to
\[\inf_{\lambda\geq 0}L_{p}(0,\lambda)=\lim_{\lambda\to+\infty}L_{p}(0,\lambda)=L (0),\]
where the first equality holds because \(L_{p}(s,\lambda)\) is non-increasing in \(\lambda\) for any fixed \(s\), whereas the second equality follows from Lemma C.1. This establishes the claim for \(\theta=0\). In the remainder of the proof we may thus assume that \(\theta\neq 0\). By Proposition 1.2, which applies thanks to Assumption 1.1, the worst-case expected loss on the left hand side of (22) matches the optimal value of the stochastic program in (2), which involves the \(c\)-transform (20). By Theorem 3.8 (ii), we further have
\[\ell_{c}(\theta,\lambda,\widehat{z})=\sup_{\gamma\in\mathbb{R}}\;L(\langle \theta,\widehat{z}\rangle+\gamma\|\theta\|_{*})-\lambda|\gamma|^{p}=\sup_{ \gamma\in\mathbb{R}}\;L(\gamma)-\frac{\lambda}{\|\theta\|_{*}^{p}}|\gamma- \langle\theta,\widehat{z}\rangle|^{p}=L_{p}\left(\langle\theta,\widehat{z} \rangle,\lambda/\|\theta\|_{*}^{p}\right),\]
where the second and the third equalities follow from the variable substitution \(\gamma\leftarrow\langle\theta,\widehat{z}\rangle+\gamma\|\theta\|_{*}\) and the definition of \(L_{p}\), respectively. Substituting this expression for the \(c\)-transform into (2) finally yields
\[\sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})}\mathbb{E}_{Z\sim\mathbb{Q}}\left[L(\langle\theta,Z\rangle)\right]=\inf_{\lambda\geq 0}\lambda\varepsilon+\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}}\left[L_{p}\left(\langle\theta,\widehat{Z}\rangle,\lambda/\|\theta\|_{*}^{p}\right)\right]=\inf_{\lambda\geq 0}\lambda\varepsilon\|\theta\|_{*}^{p}+\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}}\left[L_{p}(\langle\theta,\widehat{Z}\rangle,\lambda)\right],\]
where the second equality follows from the variable substitution \(\lambda\leftarrow\lambda/\|\theta\|_{*}^{p}\). This establishes the claim for \(\theta\neq 0\). In summary, we have thus shown that (22) holds indeed for every \(\theta\in\mathbb{R}^{d}\).
#### 3.2.1 Pasch-Hausdorff Envelope
Throughout this section we assume that all conditions of Proposition 3.9 hold and that \(p=1\). In this case, \(L_{1}(\cdot,\lambda)\) is usually called the Pasch-Hausdorff envelope of \(L\). One can show that \(L_{1}(\cdot,\lambda)\) coincides with the smallest \(\lambda\)-Lipschitz continuous majorant of \(L\) or evaluates to \(+\infty\) if no such majorant exists [7, Proposition 12.17]. Under the conditions of this section, the univariate minimization problem on the right hand side of (22) can sometimes be solved analytically. This allows us to recover--in a unified and simplified manner--several reformulation results from the recent literature on Wasserstein DRO.
**Example 3.10** (Asymptotically steep Lipschitz continuous loss).: If the asymptotic linear growth rate of the loss function \(L\), which is defined as \(\limsup_{|s|\to\infty}L(s)/|s|\), coincides with the Lipschitz modulus of \(L\), then the Pasch-Hausdorff envelope of \(L\) satisfies \(L_{1}(s,\lambda)=L(s)\) if \(\lambda\geq\operatorname{lip}(L)\) and \(L_{1}(s,\lambda)=+\infty\) otherwise. To see this, assume first that \(\lambda\geq\operatorname{lip}(L)\). In that case, we have
\[L(\langle\theta,\widehat{z}\rangle)\leq\sup_{\gamma\in\mathbb{R}}\;L(\gamma)- \lambda|\gamma-\langle\theta,\widehat{z}\rangle|\leq\sup_{\gamma\in\mathbb{R }}\;L(\langle\theta,\widehat{z}\rangle)-(\lambda-\operatorname{lip}(L))| \gamma-\langle\theta,\widehat{z}\rangle|=L(\langle\theta,\widehat{z}\rangle),\]
where the second inequality holds because the loss function \(L\) is Lipschitz continuous with Lipschitz modulus \(\operatorname{lip}(L)\). If \(\lambda<\operatorname{lip}(L)\), on the other hand, then we have
\[L_{1}(\langle\theta,\widehat{z}\rangle,\lambda) \geq\limsup_{|\gamma|\to\infty}\;L(\gamma)-\lambda|\gamma-\langle \theta,\widehat{z}\rangle|\] \[=L(\langle\theta,\widehat{z}\rangle)+\limsup_{|\gamma|\to\infty}| \gamma-\langle\theta,\widehat{z}\rangle|\left(\frac{L(\gamma)-L(\langle \theta,\widehat{z}\rangle)}{|\gamma-\langle\theta,\widehat{z}\rangle|}- \lambda\right)\] \[=L(\langle\theta,\widehat{z}\rangle)+\limsup_{|\gamma|\to\infty}| \gamma-\langle\theta,\widehat{z}\rangle|\left(\operatorname{lip}(L)-\lambda \right)=+\infty,\]
where the second inequality holds because the asymptotic linear growth rate of \(L\) coincides with \(\operatorname{lip}(L)\). Hence, the objective function in the minimization problem on the right hand side of (22) evaluates to \(+\infty\) whenever \(\lambda<\operatorname{lip}(L)\). As \(\varepsilon\|\theta\|_{*}\geq 0\), we thus have \(\lambda=\operatorname{lip}(L)\) at optimality, and (22) simplifies to
\[\sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})}\mathbb{E}_{Z \sim\mathbb{Q}}\left[L(\langle\theta,Z\rangle)\right]=\mathbb{E}_{\widehat{Z} \sim\widehat{\mathbb{P}}}[L(\langle\theta,\widehat{Z}\rangle)]+\varepsilon \operatorname{lip}(L)\|\theta\|_{*}. \tag{23}\]
Hence, the worst-case expected loss over a \(1\)-Wasserstein ball of radius \(\varepsilon\) coincides exactly with the expected loss under the reference distribution adjusted by the regularization term \(\varepsilon\operatorname{lip}(L)\|\theta\|_{*}\). This result strengthens the inequality derived in Corollary 3.4 for \(p=1\) to an equality. It was first discovered in the context of distributionally robust linear regression [74], where the random vector \(Z=(X,Y)\in\mathbb{R}^{d}\) consists of a multi-dimensional input \(X\in\mathbb{R}^{d-1}\) and a scalar output \(Y\in\mathbb{R}\), the decision \(\theta=(\theta_{x},\theta_{y})\in\mathbb{R}^{d}\) consists of the weight vector \(\theta_{x}\in\mathbb{R}^{d-1}\) of a linear predictor and the constant \(\theta_{y}=-1\), while the prediction loss \(L(\langle\theta,Z\rangle)=L(\langle\theta_{x},X\rangle-Y)\) is determined by a Lipschitz continuous _convex_ function \(L\). Note that convexity is a simple sufficient condition for the asymptotic linear growth rate of \(L\) to coincide with \(\operatorname{lip}(L)\). In the context of distributionally robust linear classification, where the output \(Y\) is restricted to \(+1\) or \(-1\) and the prediction loss is given by \(L(Y\langle\theta,X\rangle)\), it has further been shown that
\[\sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})}\mathbb{E}_{( X,Y)\sim\mathbb{Q}}\left[L(Y\langle\theta,X\rangle)\right]=\mathbb{E}_{( \widehat{X},\widehat{Y})\sim\widehat{\mathbb{P}}}\left[L(\widehat{Y}\langle \theta,\widehat{X}\rangle)\right]+\varepsilon\operatorname{lip}(L)\|\theta\|_ {*}\]
whenever the transportation cost function satisfies \(c((x,y),(\widehat{x},\widehat{y}))=\|x-\widehat{x}\|\) if \(y=\widehat{y}\); \(=+\infty\) if \(y\neq\widehat{y}\), where \(\|\cdot\|\) is an arbitrary norm on the input space [74, 75]. In this case, the output \(Y\) has the same marginal under every distribution \(\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\) as under the reference distribution \(\widehat{\mathbb{P}}\). The above identity can be derived by repeating the arguments that led to (23) with obvious minor modifications. Details are omitted for brevity. The general identity (23) for _nonconvex_ loss functions whose asymptotic linear growth rate coincides with their Lipschitz modulus was first established in [32, Corollary 2].
**Example 3.11** (Zero-one loss).: One readily verifies that the Pasch-Hausdorff envelope of the zero-one loss \(L(s)=\mathds{1}_{\{s\leq 0\}}\) satisfies \(L_{1}(s,\lambda)=\max\{0,1-\max\{0,\lambda s\}\}\). By Proposition 3.9, we thus have
\[\sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})} \mathbb{Q}\left(\langle\theta,Z\rangle\leq 0\right) =\sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P} })}\mathbb{E}_{Z\sim\mathbb{Q}}\left[L(\langle\theta,Z\rangle)\right]\] \[=\min_{\lambda\geq 0}\,\lambda\varepsilon\|\theta\|_{*}+ \mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}}\left[\max\{0,1-\max\{0, \langle\lambda\theta,\widehat{Z}\rangle\}\}\right].\]
If \(\Theta\) is a cone, then the scaling factor \(\lambda\) can be eliminated by using the variable substitution \(\theta^{\prime}\leftarrow\lambda\theta\). In this case, the DRO problem reduces to a stochastic program under the reference distribution, that is,
\[\inf_{\theta\in\Theta}\sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{ \mathbb{P}})}\mathbb{Q}\left(\langle\theta,Z\rangle\leq 0\right)=\inf_{\theta^{ \prime}\in\Theta}\varepsilon\|\theta^{\prime}\|_{*}+\mathbb{E}_{\widehat{Z} \sim\widehat{\mathbb{P}}}\left[\max\{0,1-\max\{0,\langle\theta^{\prime}, \widehat{Z}\rangle\}\}\right].\]
The above identity was first derived in the context of distributionally robust linear classification [40] using recent results on ambiguous chance constraints and their relation to conditional value-at-risk constraints [89]. Our derivation based on the Pasch-Hausdorff envelope is new and shorter.
#### 3.2.2 Moreau Envelope
Throughout this section we assume that all conditions of Proposition 3.9 hold and that \(p=2\). In this case, \(L_{2}(s,\lambda)\) is termed the Moreau envelope of \(L\). Under the conditions of this section, the univariate minimization problem on the right hand side of (22) can be solved analytically in interesting special cases, which leads again to simplified derivations of existing reformulation results.
**Example 3.12** (Quadratic loss).: The Moreau envelope of the quadratic loss function \(L(s)=s^{2}\) satisfies \(L_{2}(s,\lambda)=\lambda s^{2}/(\lambda-1)\) if \(\lambda>1\), \(L_{2}(s,\lambda)=+\infty\cdot\mathds{1}_{\{s\neq 0\}}\) if \(\lambda=1\), and \(L_{2}(s,\lambda)=+\infty\) if \(0\leq\lambda<1\). If we assume that \(\widehat{\mathbb{P}}(\langle\theta,\widehat{Z}\rangle\neq 0)>0\) to rule out trivialities, then Proposition 3.9 implies that
\[\sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})}\mathbb{E}_{Z \sim\mathbb{Q}}\left[L(\langle\theta,Z\rangle)\right]=\min_{\lambda>1}\ \lambda\varepsilon\|\theta\|_{*}^{2}+\frac{\lambda}{\lambda-1}\mathbb{E}_{ \widehat{Z}\sim\widehat{\mathbb{P}}}\left[(\langle\theta,\widehat{Z}\rangle)^ {2}\right]=\left(\sqrt{\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}}\left[( \langle\theta,\widehat{Z}\rangle)^{2}\right]}+\sqrt{\varepsilon}\|\theta\|_{* }\right)^{2}.\]
The second equality in the above expression holds because the minimization problem over \(\lambda\) is solved analytically by \(\lambda^{*}=1+(\sqrt{\varepsilon}\|\theta\|_{*})^{-1}(\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}}[(\langle\theta,\widehat{Z}\rangle)^{2}])^{1/2}\). This identity was first discovered in the context of distributionally robust least squares regression; see [13, Proposition 2].
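The closed form above is easy to check numerically. The sketch below (an illustration relying on synthetic data, the Euclidean norm and SciPy's bounded scalar solver, all of which are assumptions) minimizes the right-hand side of (22) for the quadratic loss over \(\lambda>1\) and compares the result with the closed-form expression.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
J, d, eps = 500, 4, 0.05
Z_hat = rng.normal(size=(J, d))           # synthetic reference samples
theta = rng.normal(size=d)

m = np.mean((Z_hat @ theta) ** 2)         # E[(<theta, Z_hat>)^2]
dual = np.linalg.norm(theta)              # ||theta||_* for the Euclidean norm

# right-hand side of (22) for L(s) = s^2: only lambda > 1 yields a finite value
numeric = minimize_scalar(lambda lam: lam * eps * dual ** 2 + lam / (lam - 1.0) * m,
                          bounds=(1.0 + 1e-9, 1e6), method="bounded").fun
closed_form = (np.sqrt(m) + np.sqrt(eps) * dual) ** 2

print(numeric, closed_form)               # agree up to the tolerance of the scalar solver
```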
**Example 3.13** (Zero-one loss).: The Moreau envelope of the zero-one loss \(L(s)=\mathds{1}_{\{s\leq 0\}}\) is given by \(L_{2}(s,\lambda)=\max\{0,1-\sqrt{\lambda}s\max\{0,\sqrt{\lambda}s\}\}\). If \(\Theta\) is a cone, then, in analogy to Example 3.11, we can use Proposition 3.9 and the variable substitution \(\theta^{\prime}\leftarrow\sqrt{\lambda}\theta\) to conclude that
\[\inf_{\theta\in\Theta}\sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{ \mathbb{P}})}\mathbb{Q}\left(\langle\theta,Z\rangle\leq 0\right)=\inf_{\theta^{ \prime}\in\Theta}\varepsilon\|\theta^{\prime}\|_{*}^{2}+\mathbb{E}_{\widehat{Z }\sim\widehat{\mathbb{P}}}\left[\max\left\{0,1-\langle\theta^{\prime},\widehat {Z}\rangle\max\{0,\langle\theta^{\prime},\widehat{Z}\rangle\}\right\}\right].\]
## 4 Numerical Experiments
All linear and second-order cone programs considered in our experiments are implemented in Python and solved with Gurobi 10.0.0 on a \(2.4\;\mathrm{GHz}\) quad-core machine with \(8\;\mathrm{GB}\;\mathrm{RAM}\). To ensure reproducibility, all source codes are made available at [https://github.com/sorooshafiee/regularization_with_OT](https://github.com/sorooshafiee/regularization_with_OT).
### 4.1 Nash Equilibria
We first illustrate the computation of Nash equilibria between a statistician and nature in the context of a distributionally robust support vector machine problem. We thus assume that \(Z=(X,Y)\in\mathbb{R}^{d}\) consists of a feature vector \(X\in\mathcal{X}\subseteq\mathbb{R}^{d-1}\) and a label \(Y\in\mathcal{Y}=\{-1,+1\}\). In addition, \(\theta\in\Theta=\mathbb{R}^{d-1}\) is the weight vector of a linear classifier, and \(\ell(\theta,Z)=\max\{0,1-Y\langle\theta,X\rangle\}\) is the hinge loss function, which represents the pointwise maximum of \(I=2\) saddle functions. Finally, the reference distribution \(\widehat{\mathbb{P}}\) is set to the empirical (uniform) distribution on \(J\) training samples \(\widehat{z}_{j}=(\widehat{x}_{j},\widehat{y}_{j})\), \(j\in[J]\), and the transportation cost function is defined as \(c((x_{1},y_{1}),(x_{2},y_{2}))=\|x_{1}-x_{2}\|\) if \(y_{1}=y_{2}\), where \(\|\cdot\|\) is an arbitrary norm on \(\mathbb{R}^{d-1}\), and \(c((x_{1},y_{1}),(x_{2},y_{2}))=+\infty\) if \(y_{1}\neq y_{2}\). In this setting, the Nash equilibrium between the statistician and nature can be computed by using the techniques that were developed in Section 2.2 and further generalized in Appendix A. Indeed, one readily verifies that Assumptions 2.8, A.1 and A.2 hold. In addition, as we will see below, one can also show that Assumption 2.15 is satisfied.
This implies that Nash strategies for the statistician and nature can be computed by solving the finite convex programs (30) and (31), which generalize problems (7) and (9), respectively; see Theorem A.5. Specifically, under the given assumptions about the loss and transportation cost functions as well as the reference distribution, one can show that if \(\mathcal{X}=\mathbb{R}^{d-1}\), then (30) simplifies to
\[\begin{array}{ll}\min&\varepsilon\lambda+\frac{1}{J}\sum_{j\in[J]}s_{j}\\ \mathrm{s.\,t.}&\theta\in\mathbb{R}^{d-1},\ \lambda,s_{j}\in\mathbb{R} \forall j\in[J]\\ &\lambda\geq\|\theta\|_{*},\ s_{j}\geq 0,\ s_{j}\geq 1-\widehat{y}_{j} \langle\theta,\widehat{x}_{j}\rangle\quad\forall j\in[J],\end{array} \tag{24}\]
while problem (31) reduces to
\[\max \sum_{j\in[J]}q_{2j} \tag{25}\] \[\mathrm{s.\,t.} q_{ij}\in\mathbb{R}_{+},\ \xi_{ij}\in\mathbb{R}^{d-1} \forall i\in[I],\,j\in[J]\] \[\sum_{i\in[I]}q_{ij}=1/J \forall j\in[J]\] \[\sum_{i\in[I]}\sum_{j\in[J]}\|\xi_{ij}\|\leq\varepsilon\] \[\sum_{j\in[J]}\widehat{y}_{j}(q_{2j}\widehat{x}_{j}+\xi_{2j})=0.\]
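For concreteness, the primal program (24) can be set up in a few lines with an off-the-shelf convex modeling layer. The following sketch uses cvxpy and the 2-norm transportation cost, whose dual norm is again the 2-norm, together with hypothetical toy data; these choices, as well as the call to the default solver rather than Gurobi, are assumptions made only for illustration.

```python
import cvxpy as cp
import numpy as np

# hypothetical toy data: J labeled samples (x_j, y_j) with labels y_j in {-1, +1}
rng = np.random.default_rng(1)
J, d, eps = 20, 3, 0.1
X = np.vstack([rng.normal([1.0, -1.0], np.sqrt(0.5), size=(J // 2, d - 1)),
               rng.normal([-1.0, 1.0], np.sqrt(0.5), size=(J // 2, d - 1))])
y = np.concatenate([np.ones(J // 2), -np.ones(J // 2)])

theta, lam, s = cp.Variable(d - 1), cp.Variable(), cp.Variable(J)
problem = cp.Problem(
    cp.Minimize(eps * lam + cp.sum(s) / J),
    [lam >= cp.norm(theta, 2),                    # dual norm of the 2-norm cost
     s >= 0,
     s >= 1 - cp.multiply(y, X @ theta)])         # epigraph of the hinge loss
problem.solve()
print(problem.value, theta.value)
```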
The following proposition shows that the distributionally robust support vector machine problem under consideration admits a continuum of least favorable distributions, which represent different Nash strategies of nature. This implies that there is in fact a continuum of different Nash equilibria.
**Proposition 4.1** (Non-uniqueness of Nash equilibria).: If \((\{\vec{q}_{j}^{*}\}_{j})\) solves the maximization problem
\[\max_{\vec{q}_{j},\,j\in[J]}\left\{\sum_{j\in[J]}\vec{q}_{j}:\left\|\sum_{j\in[J]}\vec{q}_{j}\widehat{y}_{j}\widehat{x}_{j}\right\|\leq\varepsilon,\ 0\leq\vec{q}_{j}\leq\frac{1}{J}\ \forall j\in[J]\right\}, \tag{26}\]
and if \(\mathcal{J}_{0}=\{j\in[J]:\vec{q}_{j}^{*}=0\}\), \(\mathcal{J}_{+}=\{j\in[J]:\vec{q}_{j}^{*}>0\}\) and \(\tilde{\xi}=-\sum_{j\in[J]}\vec{q}_{j}^{*}\widehat{y}_{j}\widehat{x}_{j}\), then the distribution
\[\mathbb{Q}^{*}(\alpha)=\frac{1}{J}\sum_{j\in\mathcal{J}_{0}}\delta_{(\widehat {x}_{j},\widehat{y}_{j})}+\sum_{j\in\mathcal{J}_{+}}\left(\frac{1}{J}-\vec{q}_ {j}^{*}\right)\delta_{(\widehat{x}_{j},\widehat{y}_{j})}+\sum_{j\in\mathcal{J }_{+}}\vec{q}_{j}^{*}\delta_{(\widehat{x}_{j}+\alpha_{j}\widehat{y}_{j}\tilde {\xi}/\vec{q}_{j}^{*},\widehat{y}_{j})}\]
constitutes a Nash strategy of nature for any \(\alpha\in A=\{\alpha\in\mathbb{R}_{+}^{J}:\sum_{j\in[J]}\alpha_{j}=1,\ \alpha_{j}=0\ \forall j\in \mathcal{J}_{0}\}\).
Proof.: The conic duality theorem [8, Theorem 1.4.2] ensures that the minimization problem (24), which trivially admits a Slater point, and the maximization problem (26) are strong duals. Thus, problem (26) is solvable and has the same optimal value as problem (24). Theorem A.5 further implies that problems (26) and (25) share the same optimal values, too. In the following we use any maximizer \((\{\vec{q}_{j}^{*}\}_{j})\) of (26) to construct a family of optimal solutions \((\{q_{ij}^{*}(\alpha),\xi_{ij}^{*}(\alpha)\}_{i,j})\) for problem (25) parametrized by \(\alpha\in A\). Specifically, we set \(q_{1j}^{*}(\alpha)=1/J-\vec{q}_{j}^{*}\), \(q_{2j}^{*}(\alpha)=\vec{q}_{j}^{*}\) and \(\xi_{1j}^{*}(\alpha)=0\) for all \(j\in[J]\), \(\xi_{2j}^{*}(\alpha)=0\) for all \(j\in[J]\) with \(\vec{q}_{j}^{*}=0\), and \(\xi_{2j}^{*}(\alpha)=\alpha_{j}\widehat{y}_{j}\tilde{\xi}\) for all \(j\in[J]\) with \(\vec{q}_{j}^{*}>0\). By construction, \((\{q_{ij}^{*}(\alpha),\xi_{ij}^{*}(\alpha)\}_{i,j})\) is feasible in (25), and its objective function value in (25) matches the optimal value of problem (26), which equals the optimal value of problem (25). Hence, \((\{q_{ij}^{*}(\alpha),\xi_{ij}^{*}(\alpha)\}_{i,j})\) solves (25) for any \(\alpha\in A\). In addition, one readily verifies that \(\mathcal{I}_{j}^{\infty}(\alpha)=\{i\in[I]:q_{ij}^{*}(\alpha)=0,\,\xi_{ij}^{*} (\alpha)\neq 0\}=\emptyset\) for all \(j\in[J]\) and \(\alpha\in A\). This implies that Assumption 2.15 is satisfied irrespective of \(\alpha\in\mathcal{A}\). By Theorem A.5 (ii) and the definitions of \(\mathcal{J}_{0}\) and \(\mathcal{J}_{+}\), the discrete distribution \(\mathbb{Q}^{*}(\alpha)\) therefore represents a Nash strategy of nature for every \(\alpha\in A\). This observation completes the proof.
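The construction in Proposition 4.1 is directly implementable. The sketch below (again relying on cvxpy and the Euclidean transportation cost, both of which are assumptions for illustration) solves (26), forms \(\tilde{\xi}\) and the index set \(\mathcal{J}_{+}\), and assembles the atoms and probabilities of \(\mathbb{Q}^{\star}(\alpha)\) for the particular choice of \(\alpha\) that spreads the transportation budget evenly over \(\mathcal{J}_{+}\).

```python
import cvxpy as cp
import numpy as np

def least_favorable_distribution(X, y, eps):
    """Solve (26) and assemble nature's Nash strategy Q*(alpha) as in Proposition 4.1,
    with the Euclidean norm and alpha_j = 1/|J_+| on J_+ (one admissible choice)."""
    J = len(y)
    q = cp.Variable(J, nonneg=True)
    cp.Problem(cp.Maximize(cp.sum(q)),
               [cp.norm(cp.multiply(y, q) @ X, 2) <= eps, q <= 1.0 / J]).solve()
    q_bar = np.clip(q.value, 0.0, 1.0 / J)
    xi_tilde = -(q_bar * y) @ X
    plus = q_bar > 1e-9                            # the index set J_+
    alpha = plus / plus.sum()                      # even split of the budget over J_+
    # atoms of Q*(alpha) as (feature, label, probability) triples
    atoms = [(x_j, y_j, 1.0 / J - q_j) for x_j, y_j, q_j in zip(X, y, q_bar)]
    atoms += [(x_j + a_j * y_j * xi_tilde / q_j, y_j, q_j)
              for x_j, y_j, q_j, a_j in zip(X[plus], y[plus], q_bar[plus], alpha[plus])]
    return [a for a in atoms if a[2] > 1e-9]
```

Applied to toy data such as that generated in the previous sketch, different admissible choices of \(\alpha\) produce different least favorable distributions with the same worst-case expected loss, in line with the non-uniqueness asserted by the proposition.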
Consider now an instance of the distributionally robust support vector machine problem with \(d=3\), \(\varepsilon=0.1\) and \(J=20\), where \(\widehat{y}_{j}=1\) and \(\widehat{x}_{j}\) is sampled independently from \(\mathcal{N}((1,-1),\frac{1}{2}I_{2})\) for \(j=1,\ldots,10\), while \(\widehat{y}_{j}=-1\) and \(\widehat{x}_{j}\) is sampled independently from \(\mathcal{N}((-1,1),\frac{1}{2}I_{2})\) for \(j=11,\ldots,20\). In addition, we use the \(p\)-norm to quantify the transportation cost in the feature space for \(p\in\{1,2,\infty\}\). We then solve problem (24) to find an optimal weight vector \(\theta^{\star}\), and we solve problem (26) to construct the index sets \(\mathcal{J}_{0}\) and \(\mathcal{J}_{+}\), the transportation budget \(\tilde{\xi}\) as well as different least favorable distributions \(\mathbb{Q}^{\star}(\alpha)\) with \(\alpha\in\mathcal{A}\). Figure 1 represents all training samples with \(\widehat{y}_{j}=1\) and \(\widehat{y}_{j}=-1\)
as blue and red dots, respectively. The optimal classifier is visualized by the separating hyperplane \(\{x\in\mathbb{R}^{2}:\langle\theta^{\star},x\rangle=0\}\) (solid line) and the maximum margin hyperplanes \(\{x\in\mathbb{R}^{2}:\langle\theta^{\star},x\rangle=\pm 1\}\) (dashed lines). By Proposition 4.1, there are infinitely many least favorable distributions, which are obtained from the empirical distribution by moving probability mass from the samples in \(\mathcal{J}_{+}\) in a direction of increasing hinge loss at a total transportation budget \(\tilde{\xi}\). The left charts of Figure 1 show the empirical distribution, where all training samples in \(\mathcal{J}_{+}\) are designated by a solid circle. The middle charts show a least favorable distribution obtained by assigning the transportation budget \(\tilde{\xi}\) to a single training sample in \(\mathcal{J}_{+}\), and the right charts show the least favorable distribution obtained by evenly distributing the transportation budget \(\tilde{\xi}\) across all training samples in \(\mathcal{J}_{+}\).
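For concreteness, the following NumPy sketch assembles the support points and weights of \(\mathbb{Q}^{*}(\alpha)\) exactly as prescribed by Proposition 4.1. It assumes that a maximizer \(\{\vec{q}_{j}^{*}\}_{j}\) of the dual problem (26) has already been computed with an off-the-shelf convex solver; the function name, argument layout and tolerance are ours and are not part of any reference implementation.

```python
import numpy as np

def least_favorable_distribution(x_hat, y_hat, q_bar, alpha, tol=1e-9):
    """Support points and weights of the Nash strategy Q*(alpha) from Proposition 4.1.

    x_hat: (J, d) training features, y_hat: (J,) labels in {-1, +1},
    q_bar: (J,) maximizer of the dual problem (26), alpha: (J,) nonnegative weights
    summing to one with alpha_j = 0 whenever q_bar_j = 0.
    """
    J = len(y_hat)
    xi = -np.sum(q_bar[:, None] * y_hat[:, None] * x_hat, axis=0)   # transportation budget xi~
    atoms, weights = [], []
    for j in range(J):
        atoms.append((x_hat[j], y_hat[j]))
        weights.append(1.0 / J if q_bar[j] <= tol else 1.0 / J - q_bar[j])
        if q_bar[j] > tol:                                          # j in J_+: add the perturbed atom
            atoms.append((x_hat[j] + alpha[j] * y_hat[j] * xi / q_bar[j], y_hat[j]))
            weights.append(q_bar[j])
    return atoms, np.array(weights)
```

Choosing \(\alpha\) as a unit vector supported on \(\mathcal{J}_{+}\) perturbs a single training sample (middle charts of Figure 1), whereas a uniform \(\alpha\) over \(\mathcal{J}_{+}\) spreads the budget across all samples in \(\mathcal{J}_{+}\) (right charts).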
In the second experiment we use a distributionally robust support vector machine to distinguish greyscale images of handwritten numbers \(3\) and \(8\) from the MNIST 3-vs-8 dataset [49]. Any such image consists of \(784\) pixels with an intensity ranging from \(0\) (black) to \(1\) (white) and thus represents a feature vector \(X\in\mathcal{X}=[0,1]^{d-1}\) with \(d=785\). The corresponding label \(Y\) is set to \(+1\) if the image shows the number \(3\) and to \(-1\) if the image shows the number \(8\). For ease of visualization, we
Figure 1: Different feature distributions in a distributionally robust support vector machine problem: Empirical distribution (left), a least favorable distribution obtained by perturbing a single sample in \(\mathcal{J}_{+}\) (center) and the least favorable distribution obtained by perturbing all samples in \(\mathcal{J}_{+}\) (right).
use the first \(J=10\) records of the dataset as the training samples. In addition, we use the \(\infty\)-norm to quantify the transportation cost in the feature space. The goal of this experiment is to compare the least favorable distributions that solve the dual DRO problem (_i.e._, the Nash strategies of nature) against the worst-case distributions that maximize the expected hinge loss when the classifier's weight vector is fixed to a minimizer \(\theta^{\star}\) of the primal DRO problem (_i.e._, the best response strategies of nature when the statistician plays \(\theta^{\star}\)). Specifically, we construct a least favorable distribution as in Theorem A.52, which is possible because \(\mathcal{I}_{j}^{\infty}=\emptyset\) for every \(j\in[J]\) thanks to the compactness of \(\mathcal{X}\). In addition, we compute \(\theta^{\star}\) as in Theorem A.52 and construct a worst-case distribution for \(\theta^{\star}\) as in [74, § 3.2]. As every Nash strategy is a best response to the adversary's Nash strategy, it is clear that every least favorable distribution is a worst-case distribution for \(\theta^{\star}\). If the finite convex program used to construct the worst-case distribution has multiple optimal solutions, then the reverse implication is generally _false_. The optimization algorithm used to solve this convex program thus outputs an arbitrary worst-case distribution that generically fails to be a least favorable distribution.
Figure 2 visualizes specific worst-case and least favorable distributions found by Gurobi for different radii \(\varepsilon\) of the ambiguity set. The ten images corresponding to \(\varepsilon=0\) represent the features of the ten unperturbed training samples. For any \(\varepsilon>0\), both the worst-case distribution as well as the least favorable distribution are obtained by moving probability mass from some of the training samples to corresponding _adversarial_ samples with the same labels but perturbed features. Whenever this happens, Figure 2 shows the adversarial samples instead of repeating the corresponding training samples, and underneath each adversarial sample we indicate the probability mass--as a percentage of \(1/J\)--inherited from the underlying training sample. We emphasize that the adversarial samples differ from all training samples and were thus 'invented by nature' with the goal to confuse the statistician. The adversarial samples of the worst-case distribution differ from the corresponding training samples only in a few pixels that look like noise to the human eye. Some adversarial samples of the least favorable distribution, however, are truly deceptive. For example, the feature of the sixth training sample arguably shows the number 8, but for any \(\varepsilon>0\) this 8 is ostensibly converted to a 3 that was not present in the training dataset. We conclude that nature's best response to the optimal classifier with weight vector \(\theta^{\star}\) is at best suitable to deceive an algorithm, whereas nature's Nash strategy can even deceive a human. A possible explanation for this observation is that nature's Nash strategy aims to fool _every possible_ classifier and not only _one single optimal_ classifier.
### Distributionally Robust Log-Optimal Portfolio Selection
Assume now that the components of the random vector \(Z\in\mathcal{Z}=\mathbb{R}^{d}\) represent the total returns of \(d\) assets over the next month, say. If the asset returns over consecutive months are serially independent and governed by the same distribution \(\mathbb{P}\in\mathcal{P}(\mathcal{Z})\) satisfying some plausible mild regularity conditions (such as \(\mathbb{P}[Z\in\mathbb{R}^{d}_{++}]=1\)), and if \(\Theta\) represents the probability simplex in \(\mathbb{R}^{d}\), then one can show that the constantly rebalanced portfolio \(\theta\in\Theta\) that maximizes the expected log-utility \(\mathbb{E}_{Z\sim\mathbb{P}}[\log((\theta,Z))]\) generates more wealth than any other causal portfolio strategy with probability 1 in the long run [25, Theorem 15.3.1]. Unfortunately, however, the asset return distribution \(\mathbb{P}\) is unknown in practice. It is therefore natural to study a distributionally robust problem formulation. In contrast to [73], where \(\mathbb{P}\) is assumed to be unknown except for its first- and second-order moments, we model distributional ambiguity here via an optimal transport-based ambiguity set centered at the empirical distribution
\((1/J)\sum_{j\in[J]}\delta_{\widehat{z}_{j}}\) on \(J\) training samples \(\widehat{z}_{j}\in\mathbb{R}^{d}_{++}\), \(j\in[J]\). Thus, we aim to solve
\[\min_{\theta\in\Theta}\sup_{\mathbb{Q}\in\mathbb{B}_{\varepsilon}(\widehat{ \mathbb{P}})}\ \mathbb{E}_{Z\sim\mathbb{Q}}\left[-\log(\langle\theta,Z\rangle)\right], \tag{27}\]
which is an instance of (18) with \(L(s)=-\log(s)\) if \(s>0\) and \(L(s)=+\infty\) if \(s\leq 0\). If the transportation cost function is set to \(c(z,\widehat{z})=\|z-\widehat{z}\|^{p}\) for some norm \(\|\cdot\|\) on \(\mathbb{R}^{d}\) and exponent \(p\geq 1\), then \(\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\) reduces to the \(p\)-th Wasserstein ball of radius \(\varepsilon^{p}\) around \(\widehat{\mathbb{P}}\). One readily verifies that any such Wasserstein ball contains distributions that assign a strictly positive mass to \(0\). Thus, the worst-case expected log-utility of any portfolio \(\theta\in\Theta\) is unbounded from above, which implies that problem (27) is infeasible. To ensure that problem (27) is well-defined, the cost of moving any fixed probability mass towards \(0\) must tend to infinity. This can be ensured, for example, by setting \(c(z,\widehat{z})=\sum_{i\in[d]}|\log(z_{i}/\widehat{z}_{i}))|\) with \(\operatorname{dom}(c(\cdot,\widehat{z}))=\mathbb{R}^{d}_{++}\) for every \(\widehat{z}\in\mathbb{R}^{d}_{++}\). Even though it is nonconvex in both of its arguments, this transportation cost function defines a metric on \(\mathcal{Z}\) that gives rise to a valid optimal transport discrepancy. Elementary calculations show that \(L^{\star}(s)=-1-\log(-s)\) and that \(c^{\star 1}(s,\widehat{z})=\sum_{i\in[d]}h(s_{i}\widehat{z}_{i})\), where the auxiliary function \(h:\mathbb{R}\to(-\infty,+\infty]\) is defined through \(h(s)=-1-\log(-s)\) if \(s<-1\), \(h(s)=s\) if \(-1\leq s\leq 0\) and \(h(s)=+\infty\) if \(s>0\). Note that the proper convex function \(h\) is differentiable on its domain. By Theorem 3.8 (i) and as \(\operatorname{dom}(L^{\star})=(-\infty,0)\), the \(c\)-transform (20) can thus be reformulated as
\[\ell_{c}(\theta,\lambda,\widehat{z})=\sup_{\gamma<0}\ 1+\log(-\gamma)+\sum_{i \in[d]}\lambda\,h(\gamma\theta_{i}\widehat{z}_{i}/\lambda). \tag{28}\]
The next proposition shows that the maximization problem in (28) can be solved efficiently by sorting.
**Proposition 4.2**.: Given \(\theta\in\Theta\), \(\lambda\in\mathbb{R}_{+}\) and \(\widehat{z}\in\mathbb{R}^{d}_{++}\), set \(u_{i}=\theta_{i}\widehat{z}_{i}\) for every \(i\in[d]\), let \(\sigma\) be a permutation of \([d]\) with \(u_{\sigma(1)}\leq u_{\sigma(2)}\leq\cdots\leq u_{\sigma(d)}\), and define
\[k=\max\left\{i\in[d]\ :\ (1-(d-i)\lambda)u_{\sigma(i)}\leq\lambda\sum_{j\in[i]} u_{\sigma(j)}\right\}\quad\text{and}\quad\gamma^{\star}=\frac{(d-k)\lambda-1}{ \sum_{i\in[k]}u_{\sigma(i)}}.\]
If \(\lambda\geq 1/\|\theta\|_{0}\), then problem (28) is solved by \(\gamma^{\star}\). Otherwise, we have \(\ell_{c}(\theta,\lambda,\widehat{z})=+\infty\).
Figure 2: Comparison of worst-case and least favorable distributions for different values of \(\varepsilon\).
Proof.: Assume first that \(\lambda<1/\|\theta\|_{0}\). Then, we have
\[\ell_{c}(\theta,\lambda,\widehat{z}) \geq\lim_{\gamma\to-\infty}\ 1+\log(-\gamma)+\sum_{i\in[d]} \lambda h(\gamma\theta_{i}\widehat{z}_{i}/\lambda)\] \[=\lim_{\gamma\to-\infty}\ 1+\log(-\gamma)+\sum_{i\in[d]:\,\theta_{i}>0 }-\lambda-\lambda\log(-\gamma\theta_{i}\widehat{z}_{i}/\lambda)\] \[=\lim_{\gamma\to-\infty}\ (1-\lambda\,\|\theta\|_{0})(1+\log(- \gamma))-\sum_{i\in[d]:\,\theta_{i}>0}\lambda\log(\theta_{i}\widehat{z}_{i}/ \lambda)=+\infty,\]
where the first equality follows from the definition of \(h\), and the third equality holds because \(\lambda<1/\|\theta\|_{0}\).
Next, assume that \(\lambda>1/\|\theta\|_{0}\). Using a similar reasoning as above, one can then show that the objective function of problem (28) drops to \(-\infty\) as \(\gamma\) either tends to \(-\infty\) or to \(0\). Therefore, problem (28) has a maximizer. In addition, as the objective function of problem (28) is strictly concave and differentiable, this maximizer is unique and fully determined by the first-order optimality condition
\[\frac{1}{\gamma}+\sum_{i\in\mathcal{D}(\gamma)}u_{i}-\sum_{i\in[d]\setminus \mathcal{D}(\gamma)}\frac{\lambda}{\gamma}=0\quad\Longleftrightarrow\quad \gamma=\frac{(d-|\mathcal{D}(\gamma)|)\lambda-1}{\sum_{i\in\mathcal{D}(\gamma)} u_{i}}, \tag{29}\]
where \(\mathcal{D}(\gamma)=\{i\in[d]:\gamma u_{i}\geq-\lambda\}\) and \(u_{i}=\theta_{i}\widehat{z}_{i}\) for every \(i\in[d]\). To show that \(\gamma^{\star}\) solves (29) and (28), it thus suffices to prove that \(\mathcal{D}(\gamma^{\star})=\{\sigma(i):i\in[k]\}\). To this end, we first verify that the critical index \(k\) is well-defined as the maximal element of a non-empty finite set. This is indeed the case because
\[(1-(d-1)\lambda)u_{\sigma(1)}\leq\left(1-\frac{d-1}{d}\right)u_{\sigma(1)}= \frac{u_{\sigma(1)}}{d}\leq\lambda u_{\sigma(1)},\]
where both inequalities follow from our assumption that \(\lambda>1/\|\theta\|_{0}>1/d\) and from the non-negativity of \(u_{\sigma(1)}\). We may thus conclude that \(k\geq 1\). Next, we use induction to show that \(\sigma(i)\in\mathcal{D}(\gamma^{\star})\) for every \(i\in[k]\). As for the base step corresponding to \(i=k\), we may use the definition of \(k\) to find
\[(1-(d-k)\lambda)u_{\sigma(k)}\leq\lambda\sum_{j\in[k]}u_{\sigma(j)}\quad \Longleftrightarrow\quad\gamma^{\star}u_{\sigma(k)}\geq-\lambda,\]
where the equivalence exploits the definition of \(\gamma^{\star}\) in the proposition statement. Hence, \(\sigma(k)\in\mathcal{D}(\gamma^{\star})\). As for the induction step, assume that \(\sigma(i)\in\mathcal{D}(\gamma^{\star})\) for some \(i\in[k]\) with \(i>1\). Thus, we have
\[(1-(d-i)\lambda)u_{\sigma(i)}\leq\lambda\sum_{j\in[i]}u_{\sigma(j)}\quad \Longrightarrow\quad(1-(d-(i-1))\lambda)u_{\sigma(i)}\leq\lambda\sum_{j\in[i -1]}u_{\sigma(j)}.\]
Since the permutation \(\sigma\) sorts the parameters \(\{u_{i}\}_{i\in[d]}\) in ascending order, the last inequality implies
\[(1-(d-(i-1))\lambda)u_{\sigma(i-1)}\leq(1-(d-(i-1))\lambda)u_{\sigma(i)}\leq \lambda\sum_{j\in[i-1]}u_{\sigma(j)}.\]
This in turn allows us to conclude that
\[(1-(d-k)\lambda)u_{\sigma(i-1)}\leq\lambda\sum_{j\in[k]}u_{\sigma(j)}\quad \Longrightarrow\quad\gamma^{\star}u_{\sigma(i-1)}\geq-\lambda,\]
where the implication follows again from the definition of \(\gamma^{\star}\). Hence, \(\sigma(i-1)\in\mathcal{D}(\gamma^{\star})\), which completes the induction step. We have thus shown that \(\sigma(i)\in\mathcal{D}(\gamma^{\star})\) for all \(i\leq k\). In addition, it is clear from the definition of the critical index \(k\) that any \(i>k\) satisfies
\[(1-(d-i)\lambda)u_{\sigma(i)}>\lambda\sum_{j\in[i]}u_{\sigma(j)}\quad \Longrightarrow\quad(1-(d-k)\lambda)u_{\sigma(i)}>\lambda\sum_{j\in[k]}u_{ \sigma(j)}\quad\Longrightarrow\quad\gamma^{\star}u_{\sigma(i)}<-\lambda,\]
and thus \(\sigma(i)\in[d]\backslash\mathcal{D}(\gamma^{\star})\). In summary, we have thus shown that \(\mathcal{D}(\gamma^{\star})=\{\sigma(i):i\in[k]\}\), which confirms via the optimality condition (29) that \(\gamma^{\star}\) solves indeed the maximization problem (28).
Finally, assume that \(\lambda=1/\|\theta\|_{0}\). By using similar arguments as in the last part of the proof, one can show that \(\gamma^{\star}\) remains optimal in (28) but is no longer unique. Details are omitted for brevity.
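The sorting procedure of Proposition 4.2 is simple enough to state as a short NumPy sketch. The function name and the convention of returning NaN in the unbounded case are our own choices, not part of any published code.

```python
import numpy as np

def gamma_star(theta, lam, z_hat):
    """Sorting-based maximizer of the inner problem (28), following Proposition 4.2.

    Returns gamma* when lam >= 1/||theta||_0; otherwise the c-transform is +infinity
    and the function returns np.nan.
    """
    theta = np.asarray(theta, dtype=float)
    z_hat = np.asarray(z_hat, dtype=float)
    if lam < 1.0 / np.count_nonzero(theta):
        return np.nan
    u = np.sort(theta * z_hat)                # u_{sigma(1)} <= ... <= u_{sigma(d)}
    d = len(u)
    csum = np.cumsum(u)
    # critical index k: largest i with (1 - (d - i) * lam) * u_(i) <= lam * sum_{j <= i} u_(j)
    feasible = [i for i in range(1, d + 1)
                if (1.0 - (d - i) * lam) * u[i - 1] <= lam * csum[i - 1]]
    k = max(feasible)
    return ((d - k) * lam - 1.0) / csum[k - 1]
```

For small \(d\), the returned value can be validated against a brute-force evaluation of the objective of (28) on a fine grid of negative \(\gamma\) values.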
By Proposition 1.2, the distributionally robust portfolio selection problem (27) is equivalent to the convex stochastic optimization problem \(\inf_{\theta\in\Theta,\lambda\geq 0}\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}}[f(\theta,\lambda,\widehat{Z})]\) with \(f(\theta,\lambda,\widehat{z})=\lambda\varepsilon+\ell_{c}(\theta,\lambda,\widehat{z})\), and Proposition 4.2 implies that \(f(\theta,\lambda,\widehat{z})=+\infty\) unless \(\lambda\geq 1/\|\theta\|_{0}\), which we may thus impose as an explicit constraint. Proposition 4.2 further enables us to solve the equivalent stochastic program by ordinary or stochastic gradient descent. Indeed, Proposition 4.2 implies via the envelope theorem [26, Theorem 2.16] that if \(\lambda\geq 1/\|\theta\|_{0}\), then the gradients of \(f\) with respect to \(\theta\) and \(\lambda\) are given by
\[\nabla_{\theta}f(\theta,\lambda,\widehat{Z}) =\left(\gamma^{\star}\widehat{Z}_{1}\,h^{\prime}(\gamma^{\star} \theta_{1}\widehat{Z}_{1}/\lambda),\ldots,\gamma^{\star}\widehat{Z}_{d}\,h^{ \prime}(\gamma^{\star}\theta_{d}\widehat{Z}_{d}/\lambda)\right)\] \[\nabla_{\lambda}f(\theta,\lambda,\widehat{Z}) =\varepsilon+\sum_{i\in[d]}h(\gamma^{\star}\theta_{i}\widehat{Z} _{i}/\lambda)-\frac{1}{\lambda}\sum_{i\in[d]}\gamma^{\star}\theta_{i}\widehat{ Z}_{i}\,h^{\prime}(\gamma^{\star}\theta_{i}\widehat{Z}_{i}/\lambda),\]
where \(\gamma^{\star}\) is defined as in Proposition 4.2, and \(h^{\prime}(s)=-1/s\) if \(s<-1\) while \(h^{\prime}(s)=1\) if \(-1\leq s\leq 0\). Under a mild local uniform integrability condition, these random gradients represent unbiased estimators for the deterministic gradients \(\nabla_{\theta}\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}}[f(\theta,\lambda,\widehat{Z})]\) and \(\nabla_{\lambda}\mathbb{E}_{\widehat{Z}\sim\widehat{\mathbb{P}}}[f(\theta,\lambda,\widehat{Z})]\), respectively. In principle, these estimators can thus be used in stochastic gradient descent algorithms. However, they may fail to be Lipschitz continuous in \((\theta,\lambda)\) unless the support of \(\widehat{\mathbb{P}}\) is a compact subset of \(\mathbb{R}^{d}_{++}\). Hence, stochastic gradient descent may fail to converge. Even if \(\widehat{\mathbb{P}}\) is discrete, the Lipschitz modulus of the gradient estimators depends on the atoms of \(\widehat{\mathbb{P}}\) and may thus become arbitrarily large, in which case classical as well as stochastic gradient descent suffer from numerical instability and poor convergence rates. We therefore suggest solving the equivalent stochastic program with the adaptive Golden Ratio Algorithm introduced in [58, § 2]. If \(\widehat{\mathbb{P}}\) is discrete, this algorithm finds a \(\delta\)-suboptimal solution in \(\mathcal{O}(1/\delta)\) iterations.
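To make the gradient formulas concrete, the sketch below evaluates them together with the auxiliary function \(h\) and its derivative. It takes the maximizer \(\gamma^{\star}\) of (28) as an input (it can be computed with the sorting routine sketched after Proposition 4.2); all function names are illustrative rather than taken from an existing code base.

```python
import numpy as np

def h(s):
    """h(s) = -1 - log(-s) for s < -1, h(s) = s for -1 <= s <= 0, and +infinity for s > 0."""
    s = np.asarray(s, dtype=float)
    out = np.where(s < -1.0, -1.0 - np.log(np.maximum(-s, 1e-300)), s)  # guard only avoids warnings
    return np.where(s > 0.0, np.inf, out)

def h_prime(s):
    """h'(s) = -1/s for s < -1 and h'(s) = 1 for -1 <= s <= 0."""
    s = np.asarray(s, dtype=float)
    return np.where(s < -1.0, -1.0 / s, 1.0)

def f_gradients(theta, lam, z_hat, eps, gamma):
    """Envelope-theorem gradients of f(theta, lam, z_hat) = lam * eps + ell_c(theta, lam, z_hat).

    `gamma` is the maximizer gamma* of (28); the formulas mirror the display above.
    """
    theta = np.asarray(theta, dtype=float)
    z_hat = np.asarray(z_hat, dtype=float)
    s = gamma * theta * z_hat / lam
    grad_theta = gamma * z_hat * h_prime(s)
    grad_lam = eps + np.sum(h(s)) - np.sum(gamma * theta * z_hat * h_prime(s)) / lam
    return grad_theta, grad_lam
```

Averaging these sample gradients over the training points \(\widehat{z}_{j}\) yields the full gradient of the empirical objective, while drawing a single \(\widehat{z}_{j}\) at random yields the stochastic gradient discussed in the text.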
In the last experiment we assess the out-of-sample performance of various log-optimal portfolios with \(d=10\) assets. To this end, we assume that the unknown true asset return distribution \(\mathbb{P}\) is log-normal and that its mean and covariance matrix coincide with the empirical mean and the Ledoit-Wolf covariance shrinkage estimator [50] corresponding to the 600 most recent monthly returns in the '10 Industry Portfolios' dataset from the Fama-French online data library.1 Our experiment consists of 1,000 independent trials. In each trial we construct \(\widehat{\mathbb{P}}\) as the empirical distribution on 100 training samples from \(\mathbb{P}\). Different distributionally robust log-optimal portfolios are then obtained by solving the stochastic programming reformulation of (27) with the adaptive Golden Ratio Algorithm for different values of \(\varepsilon\in\{a\cdot 10^{b}:a\in\{1,\ldots,10\},\ b\in\{-4,\cdots,-1\}\}\). The sample average approximation (SAA) portfolio is obtained similarly by setting \(\varepsilon=0\). Finally, the out-of-sample performance \(\mathbb{E}_{Z\sim\mathbb{P}}[-\log(\langle\theta,Z\rangle]\) of any fixed portfolio \(\theta\in\Theta\) is evaluated empirically by using \(10^{6}\) test samples from \(\mathbb{P}\). Figure 3 (a) plots the out-of-sample performance of the distributionally robust portfolios as a function of \(\varepsilon\) averaged across all 1,000 trials. We conclude that one can indeed improve on SAA by accounting for distributional ambiguity. Figure 3 (b) visualizes the distribution of the out-of-sample performance (which reflects the uncertainty of the 100 training samples) of the SAA portfolio and the DRO portfolio corresponding to \(\varepsilon=10^{-2}\). The picture suggests that accounting for distributional ambiguity not only reduces the expected logarithmic disutility but also (significantly) reduces the dispersion of the disutility.
Footnote 1: [http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html](http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html)
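As an illustration of this evaluation protocol, the sketch below draws log-normal test returns whose mean and covariance match prescribed estimates and computes the empirical disutility \(\mathbb{E}_{Z\sim\mathbb{P}}[-\log(\langle\theta,Z\rangle)]\). The moment-matching construction of the underlying normal parameters is one standard way to realize such a distribution and is an assumption on our part; the text does not spell out this step, and all names are ours.

```python
import numpy as np

def lognormal_params(mean, cov):
    """Parameters (m, S) of the underlying normal so that exp(N(m, S)) has the given mean and covariance."""
    mean = np.asarray(mean, dtype=float)
    S = np.log1p(np.asarray(cov, dtype=float) / np.outer(mean, mean))  # must be positive semidefinite
    m = np.log(mean) - 0.5 * np.diag(S)
    return m, S

def out_of_sample_disutility(theta, mean, cov, n_test=10**6, seed=0):
    """Monte Carlo estimate of E_P[-log <theta, Z>] under a log-normal return distribution P."""
    rng = np.random.default_rng(seed)
    m, S = lognormal_params(mean, cov)
    Z = np.exp(rng.multivariate_normal(m, S, size=n_test))
    return float(-np.log(Z @ np.asarray(theta, dtype=float)).mean())
```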
**Acknowledgements.** This research was supported by the Swiss National Science Foundation under the NCCR Automation (grant agreement 51NF40_180545) and under an Early Postdoc.Mobility Fellowship awarded to the first author (grant agreement P2ELP2_195149).
|
2310.03838 | Chameleon: Increasing Label-Only Membership Leakage with Adaptive
Poisoning | The integration of machine learning (ML) in numerous critical applications
introduces a range of privacy concerns for individuals who provide their
datasets for model training. One such privacy risk is Membership Inference
(MI), in which an attacker seeks to determine whether a particular data sample
was included in the training dataset of a model. Current state-of-the-art MI
attacks capitalize on access to the model's predicted confidence scores to
successfully perform membership inference, and employ data poisoning to further
enhance their effectiveness. In this work, we focus on the less explored and
more realistic label-only setting, where the model provides only the predicted
label on a queried sample. We show that existing label-only MI attacks are
ineffective at inferring membership in the low False Positive Rate (FPR)
regime. To address this challenge, we propose a new attack Chameleon that
leverages a novel adaptive data poisoning strategy and an efficient query
selection method to achieve significantly more accurate membership inference
than existing label-only attacks, especially at low FPRs. | Harsh Chaudhari, Giorgio Severi, Alina Oprea, Jonathan Ullman | 2023-10-05T18:46:27Z | http://arxiv.org/abs/2310.03838v2 | # Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning
###### Abstract
The integration of machine learning (ML) in numerous critical applications introduces a range of privacy concerns for individuals who provide their datasets for model training. One such privacy risk is Membership Inference (MI), in which an attacker seeks to determine whether a particular data sample was included in the training dataset of a model. Current state-of-the-art MI attacks capitalize on access to the model's predicted confidence scores to successfully perform membership inference, and employ data poisoning to further enhance their effectiveness. In this work, we focus on the less explored and more realistic _label-only_ setting, where the model provides only the predicted label on a queried sample. We show that existing label-only MI attacks are ineffective at inferring membership in the low False Positive Rate (FPR) regime. To address this challenge, we propose a new attack Chameleon that leverages a novel adaptive data poisoning strategy and an efficient query selection method to achieve significantly more accurate membership inference than existing label-only attacks, especially at low FPRs.
## 1 Introduction
The use of machine learning for training on confidential or sensitive data, such as medical records (Stanfill et al., 2010), financial documents (Ngai et al., 2011), and conversations (Carlini et al., 2021), introduces a range of privacy violations. By interacting with a trained ML model, an attacker might reconstruct data from the training set (Balle et al., 2022; Haim et al., 2022), perform membership inference (Carlini et al., 2022; Shokri et al., 2017; Yeom et al., 2018), learn sensitive attributes (Fredrikson et al., 2015; Mehnaz et al., 2022) or global properties (Ganju et al., 2018; Suri and Evans, 2022) from training data. Membership inference (MI) attacks (Shokri et al., 2017), originally introduced under the name of tracing attacks (Homer et al., 2008), enable an attacker to determine whether or not a data sample was included in the training set of an ML model. While these attacks are less severe than training data reconstruction, they might still constitute a serious privacy violation. Consider a mental health clinic that uses an ML model to predict patient treatment responses based on medical histories. An attacker with access to a certain individual's medical history can learn whether the individual has a mental health condition by performing a successful MI attack.
We can categorize MI attacks into two groups: Confidence-based attacks in which the attacker gets access to the target ML model's predicted confidences, and label-only attacks, in which the attacker only obtains the predicted label on queried samples. Recent literature has primarily focused on confidence-based attacks Bertran et al. (2023); Carlini et al. (2022) that maximize the attacker's success at low False-Positive
Rates (FPRs). Additionally, Tramer et al. (2022) and Chen et al. (2022) showed that introducing data poisoning during training significantly improves the MI performance at low FPRs in the confidence-based scenario.
Nevertheless, in many real-world scenarios, organizations that train ML models provide only hard labels to customer queries. For example, financial institutions might solely indicate whether a customer has been granted a home loan or credit card approval. In such situations, launching an MI attack becomes considerably more challenging, as the attacker loses access to prediction confidences and cannot leverage state-of-the-art attacks such as Carlini et al. (2022), Wen et al. (2023), Bertran et al. (2023). Furthermore, it remains unclear whether existing label-only MI attacks, such as Choquette-Choo et al. (2021) and Li and Zhang (2021), are effective in the low FPR regime and whether data poisoning techniques can be used to amplify the membership leakage in this specific realistic scenario.
In this paper, we first show that existing label-only MI attacks (Choquette-Choo et al., 2021; Li and Zhang, 2021; Yeom et al., 2018) struggle to achieve high True Positive Rate (TPR) in the low FPR regime. We then demonstrate that integrating state-of-the-art data poisoning technique (Tramer et al., 2022) into these label-only MI attacks further degrades their performance, resulting in even lower TPR values at the same FPR. We investigate the source of this failure and propose a new label-only MI attack Chameleon that leverages a novel _adaptive_ poisoning strategy to enhance membership inference leakage in the label-only setting. Our attack also uses an _efficient_ querying strategy, which requires only \(64\) queries to the target model to succeed in the distinguishing test, unlike prior works (Choquette-Choo et al., 2021; Li and Zhang, 2021) that use on the order of a few thousand queries. Extensive experimentation across multiple datasets shows that our Chameleon attack consistently outperforms previous label-only MI attacks, with improvements in TPR at 1% FPR ranging up to \(17.5\times\). Finally, we also provide a theoretical analysis that sheds light on how data poisoning amplifies membership leakage in label-only scenario. To the best of our knowledge, this work represents the first analysis on understanding the impact of poisoning on MI attacks.
## 2 Background and Threat Model
We provide background on membership inference, describe our label-only threat model with poisoning, and analyze existing approaches to motivate our new attack.
Related Work._Membership Inference_ attacks can be characterized into different types based on the level of adversarial knowledge required for the attack. Full-knowledge (or white-box) attacks (Leino and Fredrikson, 2020; Nasr et al., 2018) assume the adversary has access to the internal weights of the model, and therefore the activation values of each layer. In black-box settings the adversary can only query the ML model, for instance through an API, which may return either confidence scores or hard labels. The confidence setting has been studied most, with works like Carlini et al. (2022); Shokri et al. (2017); Ye et al. (2022) training multiple shadow models --local surrogate models-- and modeling the loss (or logit) distributions for members and non-members.
The _label-only_ MI setting, investigated by Choquette-Choo et al. (2021); Li and Zhang (2021); Yeom et al. (2018), models a more realistic threat models that returns only the predicted label on a queried sample. Designing MI attacks under this threat model is more challenging, as the attack cannot rely on separating the model's confidence on members and non-members. Existing label-only MI attacks are based on analyzing the effects of perturbations on the original point on the model's decision. With our work we aim to improve the understanding of MI in the label-only setting, especially in light of recent trends in MI literature.
Current MI research, in fact, is shifting attention towards attacks that achieve high True Positive Rates (TPR) in low False Positive Rate (FPR) regimes (Bertran et al., 2023; Carlini et al., 2022; Liu et al., 2022; Wen et al., 2023; Ye et al., 2022). These recent papers argue that if an attack can reliably breach the privacy of even a small number of potentially vulnerable users, it is still extremely relevant, despite resulting in potentially lower average-case success rates. A second influential research thread exposed the effect that training data poisoning has on amplifying privacy risks. This threat model is particularly relevant when the training data is crowd-sourced, or obtained through automated crawling (common for large datasets), as well as in collaborative learning settings. Tramer et al. (2022) and Chen et al. (2022) showed that data poisoning amplifies MI privacy leakage and increases the TPR values at low FPRs. A related line of research (Chaudhari et al., 2023; Mahloujifar et al., 2022) showcased how data poisoning could be utilized to infer statistical information about the overall properties of the training set, in attacks known as property inference attacks.
Threat Model.We follow the threat model of Tramer et al. (2022) used for membership inference with data poisoning, with adjustments to account for the more realistic label-only setting. The attacker has black-box query access to a trained machine learning model \(M_{t}\), also called target model, that returns only the predicted label on an input query. The attacker's objective is to determine whether a particular target sample was part of \(M_{t}\)'s training set or not. Similarly to Tramer et al. (2022) the attacker \(\mathcal{A}\) has the capability to inject additional poisoned data \(\mathsf{D}_{\mathsf{p}}\) into the training data \(\mathsf{D}_{\mathsf{tr}}\) sampled from a data distribution \(\mathcal{D}\). The attacker can only inject \(\mathsf{D}_{\mathsf{p}}\) once before the training process begins, and the adversary does not participate further in the training process after injecting the poisoned samples. After training completes, the adversary can only interact with the final trained model to obtain predicted labels on selected queried samples.
Analyzing Existing Approaches.Existing label-only MI attacks (Choquette-Choo et al., 2021; Li and Zhang, 2021) propose a decision boundary technique that exploits the existence of adversarial examples to create their distinguishing test. These approaches typically require a large number of queries to the target model to estimate a sample's distance to the model decision boundary. However, these attacks achieve low TPR (e.g., 1.1%) at 1% FPR, when tested on the CIFAR-10 dataset. In contrast, the LiRA confidence-based attack by Carlini et al. (2022) achieves a TPR of 16.2% at 1% FPR on the same dataset. Truth Serum (Tramer et al., 2022) evaluates LiRA with a data poisoning strategy based on label flipping, which significantly increases TPR to 91.4% at 1% FPR once 8 poisoned samples are inserted per challenge point.
A natural first strategy for label-only MI with poisoning is to incorporate the Truth Serum data poisoning method to the existing label-only MI attack (Choquette-Choo et al., 2021) and investigate if the TPR at low FPR can be improved. The Truth Serum poisoning strategy is simply label flipping, where poisoned samples have identical features to the challenge point, but a different label. Surprisingly, the results show a negative outcome, with the TPR decreasing to 0% at 1% FPR after poisoning. This setback compels us to reconsider the role of data poisoning in improving privacy attacks within the label-only MI threat model. We question whether data poisoning can indeed improve label-only MI, and if so, why did our attempt to combine the two approaches fail. In the following section, we provide comprehensive answers to these questions and present a novel poisoning strategy that significantly improves the attack success in the label-only MI setting.
## 3 Chameleon Attack
We first provide some key insights for our attack, then describe the detailed attack procedure, and include some analysis on leakage under MI.
### Attack Intuition
Given the threat model, our main objective is to improve the TPR in the _low_ FPR regime for label-only MI, while reducing the number of queries to the target model \(M_{t}\). To achieve this two-fold objective, we start by addressing the fundamental question of determining an effective poisoning strategy. This involves striking the right balance such that the IN models, those trained with the target point included in the training set, classify the point correctly, and the OUT models, trained without the target point, misclassify it. Without any poisoning, it is likely that both IN and OUT models will classify the point correctly (up to some small training and generalization error). On the other hand, if we insert too many poisoned samples with an incorrect label, then both IN and OUT models will mis-classify the point to the incorrect label. As the attacker only gets access to the labels of the queried samples from the target model \(M_{t}\), over-poisoning would make it implausible to distinguish whether the model is an IN or OUT model.
The state-of-the-art Truth Serum attack (Tramer et al., 2022), which requires access to model confidences, employs a static poisoning strategy by adding a fixed set of \(k\) poisoned replicas for each challenge point. This poisoning strategy fails for label-only MI as often times both IN and OUT models misclassify the target sample, as discussed in Section 2. Our _crucial_ observation is that not all challenge points require the same number of poisoned replicas to create a separation between IN and OUT models. To provide evidence for this insight, we show a visual illustration of model confidences under the same number of poisoned replicas for two challenge points in Figure 1. Therefore, we propose a new strategy that adaptively selects the number of poisoned replicas for each challenge point, with the goal of creating a separation between IN and OUT models. The IN models trained with the challenge point in the training set should classify the point correctly, while the OUT models should misclassify it. Our strategy adaptively adds poisoned replicas until the OUT models consistently misclassify the challenge point at a significantly higher rate than the IN models.
Existing MI attacks with poisoning (Chen et al., 2022; Tramer et al., 2022) utilize confidence scores obtained from the trained model to build a distinguishing test. As our attacker only obtains the predicted labels, we develop a label-only "proxy" metric for estimating the model's confidence, by leveraging the predictions obtained on "close" neighbors of the challenge point. We introduce the concept of a _membership neighborhood_, which is constructed by selecting the closest neighbors based on the KL divergence computed on model confidences. This systematic selection helps us improve the effectiveness of our attack by strategically incorporating only the relevant neighbor predictions. The final component of the attack is the
Figure 1: Impact of poisoning on two different challenge points (a Car and a Ship) on CIFAR-10 dataset. Each row denotes the shift in model confidences (wrt. true label) for IN and OUT models with introduction of poisoned samples.
distinguishing test, in which we compute a score based on the target model \(M_{t}\)'s correct predictions on the membership neighborhood set. These scores are used to compute the TPR at fixed FPR values, as well as other metrics of interest such as AUC and MI accuracy. We provide a detailed description for each stage of our attack below.
### Attack Details
Our Chameleon attack can be described as a three-stage process:
Adaptive Poisoning.Given a challenge point \((x,y)\) and access to the underlying training distribution \(\mathcal{D}\), the attacker constructs a training dataset \(\mathsf{D_{adv}}\sim\mathcal{D}\), such that \((x,y)\notin\mathsf{D_{adv}}\) and a poisoned replica \((x,y^{\prime})\), for some label \(y^{\prime}\neq y\). The goal of the attacker is to construct a small enough poisoned set \(\mathsf{D_{p}}\) such that a model trained on \(\mathsf{D_{adv}}\cup\mathsf{D_{p}}\), which _excludes_\((x,y)\), missclassifies the challenge point. The attacker needs to train their own OUT shadow models (without the challenge point) to determine how many poisoned replicas are enough to misclassify the challenge point. Using a set of \(m\) OUT models instead of a single one increases the chance that any other OUT model (e.g., the target model \(M_{t}\)) has a similar behavior under poisoning. The attacker begins with no poisoned replicas and trains \(m\) shadow models on the training set \(\mathsf{D_{adv}}\). The attacker adds a poisoned replica if the average confidence across the OUT models on label \(y\) is above a threshold \(\mathsf{t_{p}}\), and repeats the process until the models' average confidence on label \(y\) falls below \(\mathsf{t_{p}}\) (threshold where mis-classification for the challenge point occurs). The details of our adaptive poisoning strategy are outlined in Algorithm 1, which describes the iterative procedure for constructing the poisoned set.
```
Input: Challenge point \((x,y)\), poisoned point \((x,y^{\prime})\) where \(y^{\prime}\neq y\), attacker's dataset \(\mathsf{D_{adv}}\), poison threshold \(\mathsf{t_{p}}\) and maximum poisoned iterations \(\mathsf{k_{max}}\).
1: Let \(k\) denote the number of poisoned replicas.
2: For \(k=0,\ldots,\mathsf{k_{max}}\) do:
3:   Construct poisoned dataset \(\mathsf{D_{p}}\) containing \(k\) replicas of \((x,y^{\prime})\).
4:   Train \(m\) OUT models \(\{\theta^{\text{out}}_{1},\ldots,\theta^{\text{out}}_{m}\}\) on dataset \(\mathsf{D_{p}}\cup\mathsf{D_{adv}}\).
5:   Query \(x\) on OUT models and obtain confidences \(c^{y}_{1},\ldots,c^{y}_{m}\) for label \(y\), where \(0\leq c^{y}_{i}\leq 1\).
6:   Compute mean of the confidences \(\mu=\frac{\sum_{i=1}^{m}c^{y}_{i}}{m}\).
7:   If \(\mu\leq\mathsf{t_{p}}\):
8:     break
9:   \(k=k+1\)
Output: Number of poisoned replicas \(k\).
```
**Algorithm 1** Adaptive Poisoning Strategy
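A minimal Python sketch of this loop is given below. Here `train_model` and `confidence` are hypothetical stand-ins for the attacker's own shadow-model training and soft-label querying routines (they are not part of the paper), and the default values mirror the hyperparameters reported in Section 4.1.

```python
import numpy as np

def adaptive_poisoning(x, y, y_poison, D_adv, train_model, confidence,
                       t_p=0.15, k_max=6, m=8):
    """Sketch of Algorithm 1: pick the number of poisoned replicas for one challenge point (x, y).

    `D_adv` is assumed to be a list of (features, label) pairs; `train_model(dataset)` returns a
    trained model and `confidence(model, x, y)` returns the model's confidence on label y for x.
    """
    k = 0
    for k in range(k_max + 1):
        D_p = [(x, y_poison)] * k                                        # k poisoned replicas (x, y') with y' != y
        out_models = [train_model(D_p + list(D_adv)) for _ in range(m)]  # OUT models never see (x, y)
        mu = np.mean([confidence(model, x, y) for model in out_models])
        if mu <= t_p:                                                    # OUT models now misclassify the point
            break
    return k
```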
Note that we _do not_ need to separately train any IN models, i.e., models trained on \(\mathsf{D_{p}}\cup\mathsf{D_{adv}}\cup\{(x,y)\}\), to select the number of poisoned replicas for our challenge point. This is due to our observation that, in presence of poisoning, the average confidence for the true label \(y\) tends to be higher on the IN models when compared
Figure 2: Impact of poisoning on confidences of IN and OUT models (wrt. the true label) for a challenge point in CIFAR-10 dataset.
to the OUT models. Figure 2 illustrates an instance of this phenomenon, where the average confidence on the OUT models decreases at a faster rate than the confidence on the IN models with the addition of more poisoned replicas. As a result, the confidence mean computed on the OUT models (line 6 in Algorithm 1) will always cross the poisoning threshold \(\mathsf{t_{p}}\) first, leading to misclassification of the challenge point by the OUT models before the IN models. Therefore, we are only required to train OUT models for our adaptive poisoning strategy. In practical scenarios, the attacker's objective will typically be to infer membership on a set of points rather than a single point. Our strategy can be easily adapted to handle a set of challenge points with minimal modifications. Our modified algorithm can be found in Appendix D.
Membership Neighborhood.In this stage, the attacker's objective is to create a membership neighborhood set \(\mathsf{S_{nb}^{(x,y)}}\) by selecting close neighboring points to the challenge point. This set is then used to compute a proxy score in order to build a distinguishing test. To construct the neighborhood set, the attacker needs \(N\) shadow models such that the challenge point \((x,y)\) appears in the training set of half of them (IN models), and not in the other half (OUT models). Interestingly, the attacker can reuse the OUT models trained from the previous stage and reduce the computational cost of the process. Using these shadow models, the attacker constructs the neighborhood set \(\mathsf{S_{nb}^{(x,y)}}\) for a given challenge point \((x,y)\). A candidate \((x_{c},y)\), where \(x_{c}\neq x\), is said to be in set \(\mathsf{S_{nb}^{(x,y)}}\), if the original point and the candidate's model confidences are close in terms of KL divergence for both IN and OUT models, i.e., the following conditions are satisfied:
\[\mathsf{KL}(\;\Phi(x_{c})_{\mathsf{IN}}\;||\;\Phi(x)_{\mathsf{IN}}\;)\leq \mathsf{t_{nb}}\;\mathsf{and}\;\mathsf{KL}(\;\Phi(x_{c})_{\mathsf{OUT}}\;|| \;\Phi(x)_{\mathsf{OUT}}\;)\leq\mathsf{t_{nb}} \tag{1}\]
Here, \(\mathsf{KL}()\) calculates the Kullback-Leibler divergence between two distributions. Notations \(\Phi(x_{c})_{\mathsf{IN}}\) and \(\Phi(x_{c})_{\mathsf{OUT}}\) represent the distribution of confidences (wrt. label \(y\)) for candidate \((x_{c},y)\) on the IN and OUT models trained with respect to challenge point \((x,y)\).
Note that the models used in this stage do not need to include poisoning into their training data. We observe that the distribution of confidence values for candidates characterized by low KL divergence tend to undergo similar changes as those of the challenge point when poisoning is introduced. We call such candidates _close neighbors_. In Figure 3, we show how the confidence distribution of a close neighbor closely mimics the confidence distribution of the challenge point as we add two poisoned replicas. Additionally, we
Figure 3: Effect of poisoning on the scaled confidence (logit) distribution of a challenge point and its neighbors. Both the IN and OUT distributions of the near neighbor, unlike the far-away neighbor, exhibit a behavior similar to the challenge point distribution before and after introduction of poisoning.
also demonstrate that the confidence distribution of a _remote neighbor_ is hardly affected by the addition of poisoned replicas and does not exhibit a similar shift in its confidence distribution as the challenge point. Therefore, it is enough to train shadow models without poisoning, which reduces the time complexity of the attack.
In practical implementation, we approximate the distributions \(\Phi(x_{c})_{\mathsf{IN}}\) and \(\Phi(x_{c})_{\mathsf{OUT}}\) using a scaled version of confidences known as logits. Previous work (Carlini et al., 2022; Tramer et al., 2022) showed that logits exhibit a Gaussian distribution, and therefore we compute the KL divergence between the challenge point and the candidate confidences using Gaussians. In Section 4.3, we empirically show the importance of selecting close neighbors.
Distinguishing Test. The final goal of the attacker is to perform the distinguishing test. Towards this objective, the attacker queries the black-box trained model \(M_{t}\) using the challenge point and its neighborhood set \(\mathsf{S}_{\mathsf{nb}}^{(x,y)}\) consisting of \(n\) close neighbors. The attacker obtains a set of predicted labels \(\{\hat{y}_{1},\ldots,\hat{y}_{n+1}\}\) in return and computes the misclassification score of the trained model \(f(x)_{y}=\frac{\sum_{i=1}^{n+1}\mathbb{1}[\hat{y}_{i}\neq y]}{n+1}\). The score \(f(x)_{y}\) denotes the fraction of queried points whose predicted labels do not match the ground-truth label \(y\). This score is then used to predict whether the challenge point was part of the training set or not. Correspondingly, we use the computed misclassification score \(f(x)_{y}\) to calculate various metrics, including TPR at a fixed FPR, AUC, and MI accuracy.
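A small sketch of this scoring step follows. The threshold sweep used to report TPR at a fixed FPR is a standard construction rather than code from the paper, and it assumes the membership score is oriented so that larger values indicate members; with Chameleon's poisoning, members tend to be classified correctly, so a natural choice is the fraction of correct predictions \(1-f(x)_{y}\).

```python
import numpy as np

def misclassification_score(pred_labels, y):
    """f(x)_y: fraction of the n+1 queried labels (challenge point plus neighbors) that differ from y."""
    return float(np.mean(np.asarray(pred_labels) != y))

def tpr_at_fpr(member_scores, nonmember_scores, target_fpr=0.01):
    """TPR at a fixed FPR for a score where larger values indicate membership."""
    member_scores = np.asarray(member_scores, dtype=float)
    nonmember_scores = np.asarray(nonmember_scores, dtype=float)
    thresholds = np.sort(np.unique(np.concatenate([member_scores, nonmember_scores])))
    for t in thresholds:                        # smallest threshold whose FPR stays below the target
        if np.mean(nonmember_scores >= t) <= target_fpr:
            return float(np.mean(member_scores >= t))
    return 0.0
```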
### Label-Only MI Analysis
We now theoretically analyze the impact of poisoning on MI leakage in the label-only setting. Specifically, under certain assumptions, we formulate how data poisoning disparately impacts a model's probability of correctly classifying a data point, depending on whether the data point was a member or not. Given the above modelling framework and a fixed FPR of \(x\%\), we then construct an attack that maximizes the corresponding TPR when \(k\) poisoned replicas related to a challenge point are added to the training set. We refer to it as 'theoretical attack' as it operates within our theoretical modeling framework. In Figure 4, we present our theoretical attack at a FPR of 5% and validate its similarity to practical scenarios, by running a version of our label-only attack on the CIFAR-10 dataset, where we add \(k\) poisoned replicas for a challenge point. Our observations show that TPR improves with the introduction of poisoning and then decreases as the number of poisoned replicas increases for both cases. Thus, our theoretical modeling provides valuable insights into the impact of poisoning in the label-only scenario. The details of our modelling and theoretical attack are provided in Appendix B.
## 4 Experiments
We show that Chameleon significantly improves upon prior label-only MI, then we perform several ablation studies, and finally we evaluate if differential privacy (DP) is an effective mitigation.
Figure 4: Comparing Theoretical and Practical attack under poisoning.
### Experimental Setting
We perform experiments on four different datasets: three computer vision datasets (GTSRB, CIFAR-10 and CIFAR-100) and one tabular dataset (Purchase-100). We use a ResNet-18 convolutional neural network model for the vision datasets. We follow the standard training procedure used in prior works (Carlini et al., 2022; Tramer et al., 2022; Wen et al., 2023), including weight decay and common data augmentations for image datasets, such as random image flips and crops. Each model is trained for 100 epochs, and its training set is constructed by randomly selecting 50% of the original training set.
To instantiate our attack, we pick 500 challenge points at random from the original training set. In the adaptive poisoning stage, we set the poisoning threshold \(\mathsf{t}_{\mathsf{p}}=0.15\), the number of OUT models \(m=8\) and the number of maximum poisoned iterations \(\mathsf{k}_{\mathsf{max}}=6\). In the membership neighborhood stage, we set the neighborhood threshold \(\mathsf{t}_{\mathsf{nb}}=0.75\), and the size of the neighborhood \(|\mathsf{S}_{\mathsf{nb}}^{(x,y)}|=64\) samples. Later in Section 4.3, we vary these parameters and explain the rationale behind selecting these values. To construct neighbors in the membership neighborhood, we generate a set of random augmentations for images and select a subset of \(64\) augmentations that satisfy Eqn. (1). Finally we test our attack on \(64\) target models, trained using the same procedure, including the poisoned set. Among these, 32 serve as IN models, and the remainder as OUT models, in relation to each challenge point. Therefore, the evaluation metrics used for comparison are computed over 32,000 observations.
Evaluation Metrics.Consistent with prior work (Carlini et al., 2022; Chen et al., 2022; Tramer et al., 2022; Wen et al., 2023), our evaluation primarily focuses on True Positive Rate (TPR) at various False Positive Rates (FPRs) namely 0.1%, 1%, 5% and 10%. To provide a comprehensive analysis, we also include the AUC (Area Under the Curve) score of the ROC (Receiver Operating Characteristic) curve and the Membership Inference (MI) accuracy when comparing our attack with prior label-only attacks (Choquette-Choo et al., 2021; Li and Zhang, 2021; Yeom et al., 2018).
### Chameleon attack improves Label-Only MI
We compare our attack against two prior label-only attacks: the Gap attack (Yeom et al., 2018), which predicts any misclassified data point as a non-member and the state-of-the-art Decision-Boundary attack (Choquette-Choo et al., 2021; Li and Zhang, 2021), which uses a sample's distance from the decision boundary to determine its membership status. The Decision-Boundary attack relies on black-box adversarial example attacks (Brendel et al., 2018; Chen et al., 2020). Given a challenge point \((x,y)\), the attack starts from a random point \(x^{\prime}\) for which the model's prediction is _not_ label \(y\) and walks along the boundary while minimizing the distance to \(x\). The perturbation needed to create the adversarial example estimates the distance to the decision boundary, and a sample is considered to be in the training set if the estimated distance is above a threshold, and outside the training set otherwise. Choquette-Choo et al. (2021) showed that their process
| **Label-Only Attack** | **[email protected]% FPR** (G-43 / C-10 / C-100) | **TPR@1% FPR** (G-43 / C-10 / C-100) | **TPR@5% FPR** (G-43 / C-10 / C-100) | **TPR@10% FPR** (G-43 / C-10 / C-100) |
| --- | --- | --- | --- | --- |
| Gap | 0.0% / 0.0% / 0.0% | 0.0% / 0.0% / 0.0% | 0.0% / 0.0% / 0.0% | 0.0% / 0.0% / 0.0% |
| Decision-Boundary | 0.04% / 0.08% / 0.02% | 1.1% / 1.3% / 3.6% | 5.4% / 5.6% / 23.0% | 10.4% / 11.6% / 44.9% |
| **Chameleon (Ours)** | **3.1% / 8.3% / 29.6%** | **11.4% / 22.8% / 52.5%** | **25.9% / 34.7% / 70.9%** | **35.0% / 42.8% / 79.4%** |

Table 1: **Comparing Label-only attacks on GTSRB (G-43), CIFAR-10 (C-10) and CIFAR-100 (C-100) datasets. Our attack achieves high TPR across various FPR values compared to prior attacks.**
closely approximates results obtained with a stronger white-box adversarial example technique (Carlini and Wagner, 2017) using \(\approx\) 2,500 queries per challenge point. Consequently, we directly compare with the stronger white-box version and show that our attack outperforms even this upper bound.
Table 1 provides a detailed comparison of Chameleon with prior label-only MI attacks. Chameleon shows a significant TPR improvement over all FPRs compared to prior works. In particular, for the case of TPR at \(0.1\%\) FPR, prior works achieve TPR values below \(0.08\%\), but Chameleon achieves TPR values ranging from \(3.1\%\) to \(29.6\%\) across the three datasets, marking a substantial improvement ranging from \(77.5\times\) to \(370\times\). At \(1\%\) FPR, the TPR improves by a factor between \(10.36\times\) and \(17.53\times\). Additionally, our attack consistently surpasses prior methods in terms of AUC and MI accuracy metrics. A detailed comparison can be found in Table 2 (Appendix A.1). Notably, Chameleon is significantly more query-efficient, using only \(64\) queries to the target model, compared to the decision-boundary attack, which requires \(\approx\) 2,500 queries for the MI test, making our attack approximately \(39\times\) more query-efficient.
Furthermore, Chameleon requires adding a relatively low number of poisoned points per challenge point: an average of \(3.5\) for GTSRB, \(1.4\) for CIFAR-10, and \(0.6\) for CIFAR-100 datasets for each challenge point. This results in a minor drop in test accuracy of less than \(2\%\), highlighting the stealthiness of our attack. Overall, the results presented in Table 1 show that our adaptive data poisoning strategy significantly amplifies the MI leakage in the label-only scenario while having a marginal effect on the model's test accuracy.
### Ablation Studies
We perform several ablation studies, exploring the effect of the parameters discussed in Section 4.1.
Adaptive Poisoning Stage.We evaluate the effectiveness of adaptive poisoning and the impact of several parameters.
_a) Comparison to Static Poisoning._ Figure 4(a) provides a comparison of our adaptive poisoning approach (Algorithm 1) and a static approach where \(k\) replicas are added per challenge point. Our approach achieves a TPR@\(1\%\) FPR of \(22.9\%\), while the best static approach among the six versions achieves a TPR@\(1\%\) FPR of \(8.1\%\). The performance improvement of \(14.8\%\) in this metric demonstrates the effectiveness of our adaptive poisoning strategy over static poisoning, a strategy used in Truth Serum for confidence-based MI. Additionally, our approach matches the best static approach (with 1 poison) for the AUC metric, achieving an AUC of \(76\%\).
_b) Poisoning Threshold._ Figure 4(b) illustrates the impact of varying the poisoning threshold \(\mathsf{t_{p}}\) (line 7 of Algorithm 1) on the attack's performance. When \(\mathsf{t_{p}}=1\), the OUT model's confidence is at most 1 and Algorithm 1 will not add any poisoned replicas in the training data, but setting \(\mathsf{t_{p}}<1\) allows the algorithm to introduce poisoning. We observe an immediate improvement in the AUC, indicating the benefits of poisoning over the no-poisoning scenario. Similarly, the TPR improves when \(\mathsf{t_{p}}\) decreases, as this forces the OUT model's confidence on the true label to be low, increasing the probability of missclassification. However, setting \(\mathsf{t_{p}}\) close to \(0\) leads to overpoisoning, where an overly restrictive \(\mathsf{t_{p}}\) forces the algorithm to add a large number of poisoned replicas in the training data, negatively impacting the attack's performance. Therefore, setting \(\mathsf{t_{p}}\) between \(0.1\) and \(0.25\) results in high TPR.
_c) Number of OUT Models._ Figure 4(c) shows the impact of the number of OUT models (line 5 of Algorithm 1) on our attack's performance. As we increase the number of OUT models from \(1\) to \(8\), the TPR@\(1\%\) FPR shows a noticeable improvement from \(13.1\%\) to \(22.9\%\). However, further increasing the number of OUT models up to \(64\) only yields a marginal improvement with TPR increasing to \(24.1\%\). Therefore, setting the
number of OUT models to \(8\) strikes a balance between the attack's success and the computational overhead of training these models.
_d) Maximum Poisoned Iterations._ In Figure 4(d), we observe that increasing the maximum number of poisoned iterations \(\mathsf{k}_{\mathsf{max}}\) (line 2 of Algorithm 1) leads to significant improvements in both the TPR and AUC metrics. When \(\mathsf{k}_{\mathsf{max}}=0\), it corresponds to the no poisoning case. The TPR@1% FPR and AUC metrics improve as parameter \(\mathsf{k}_{\mathsf{max}}\) increases. However, the TPR stabilizes when \(\mathsf{k}_{\mathsf{max}}\geq 4\), indicating that no more than 4 poisoned replicas are required per challenge point.
Membership Neighborhood Stage. Next, we explore the impact of varying the membership neighborhood size and the neighborhood threshold \(\mathsf{t}_{\mathsf{nb}}\) individually. For the neighborhood size, we observe a consistent increase in TPR@1% FPR of 0.6% as we increase the number of queries from \(16\) to \(64\), beyond which the TPR oscillates. Thus, we set the neighborhood size at 64 queries for our experiments, achieving satisfactory attack success. For the neighborhood threshold parameter \(\mathsf{t}_{\mathsf{nb}}\), we note a decrease in TPR@1% FPR of 6.2% as we increase \(\mathsf{t}_{\mathsf{nb}}\) from 0.25 to 1.75. This aligns with our intuition that setting \(\mathsf{t}_{\mathsf{nb}}\) to a smaller value prompts our algorithm to select close neighbors, which in turn enhances our attack's performance. The details for these results are in Appendix A.2.
Figure 5: **Ablations for the Adaptive Poisoning stage on the CIFAR-10 dataset. We provide experiments by varying various hyperparameters used in the Adaptive Poisoning stage.**
### Other Data Modalities and Architectures
To show the generality of our attack, we evaluate Chameleon on various model architectures, including ResNet-34, ResNet-50 and VGG-16. We observe a similar trend of high TPR value at various FPRs. Particularly for VGG-16, which has \(12\times\) more trainable parameters than ResNet-18, the attack achieves better performance than ResNet-18 across all metrics, suggesting that more complex models tend to be more susceptible to privacy leakage. We also evaluate Chameleon against a tabular dataset (Purchase-100). Once again, we find a similar pattern of consistently high TPR values across diverse FPR thresholds. In fact, the TPR@1% FPR metric reaches an impressive 45.8% when tested on a two-layered neural network. The details for the experimental setup and the results can be found in Appendix A.3.
### Does Differential Privacy Mitigate Chameleon?
We evaluate the resilience of Chameleon against models trained using a standard differentially private (DP) training algorithm, DP-SGD (Abadi et al., 2016). Our evaluation covers a broad spectrum of privacy parameters, but here we highlight results on \(\epsilon=\{\infty,100,4\}\), which represent no bound, a loose bound and a strict bound on the privacy. At \(\epsilon\) as high as 100, we observe a decline in Chameleon's performance, with TPR@1% FPR decreasing from 22.6% (at \(\epsilon=\infty\)) to 6.1%. Notably, in the case of an even stricter \(\epsilon=4\), we observe that TPR@1% FPR becomes 0%, making our attack ineffective. However, it is also important to note that the model's accuracy also takes a significant hit, plummeting from 84.3% at \(\epsilon=\infty\) to 49.4% at \(\epsilon=4\), causing a substantial 34.9% decrease in accuracy. This trade-off shows that while DP serves as a powerful defense, it does come at the expense of model utility. More comprehensive results using a wider range of \(\epsilon\) values can be found in Appendix A.4.
## 5 Discussion and Conclusion
In this work we propose a new attack that successfully amplifies Membership Inference leakage in the Label-Only setting. Our attack leverages a novel adaptive poisoning and querying strategy, surpassing the effectiveness of prior label-only attacks. Furthermore, we investigate the viability of Differential Privacy as a defense against our attack, considering its impact on model utility. Finally, we offer a theoretical analysis providing insights on the impact of data poisoning on MI leakage.
We demonstrated that Chameleon achieves impressive performance in our experiments, mainly due to the adaptive poisoning strategy we designed. While poisoning is a core component of our approach and allows us to enhance privacy leakage, it also imposes an additional burden on the adversary to mount the poisoning attack. One important remaining open problem in label-only MI attacks is how to operate effectively in the low False Positive Rate (FPR) scenario without the assistance of poisoning. Additionally, our poisoning strategy requires training shadow models. Though our approach generally involves a low number of shadow models, any training operation is inherently expensive and adds computational complexity. An interesting direction for future work is the design of poisoning strategies for label-only membership inference that do not require shadow model training.
## Acknowledgements
We thank Sushant Agarwal and John Abascal for helpful discussions. Alina Oprea was supported by NSF awards CNS-2120603 and CNS-2247484. Jonathan Ullman was supported by NSF awards CNS-2120603, CNS-2232692, and CNS-2247484.
|
2310.05291 | Sample Size Considerations in the Design of Orthopaedic Risk-factor
Studies | Sample size calculations play a central role in study design because sample
size affects study interpretability, costs, hospital resources, and staff time.
For most veterinary orthopaedic risk-factor studies, either the sample size
calculation or the post-hoc power calculation assumes the disease status of
control subjects is perfectly ascertained, when it may not be. That means
control groups may be mixtures of both unaffected cases and some unidentified
affected cases. In this study, we demonstrate the consequences of using
misclassified groups as control groups on the power of risk association tests,
with the intent of showing that control groups with even small
misclassification rates can reduce the power of association tests. In addition,
we offer a range of correction factors to adjust sample size calculations back
to 80% power. This was a simulation study using study designs from published
orthopaedic risk-factor studies. The approach was to use their designs but
simulate the data to include known proportions of misclassified affected
subjects in the control group. The simulated data was used to calculate the
power of a risk-association test. We calculated powers for several study
designs and misclassification rates and compared them to a reference model.
Treating misclassified data as disease-negative only always reduced statistical
power compared to the reference power, and power loss increased with increasing
misclassification rate. For this study, power could be improved back to 80% by
increasing the sample size by a factor of 1.1 to 1.4. Researchers should use
caution in calculating sample sizes for risk-factor studies and consider
adjustments for estimated misclassification rates. | Richard Evans, Antonio Pozzi | 2023-10-08T21:34:09Z | http://arxiv.org/abs/2310.05291v1 | # Sample Size Considerations in the Design of Orthopaedic Risk-factor Studies
###### Abstract
Sample size calculations play a central role in study design because sample size affects study interpretability, costs, hospital resources, and staff time. For most veterinary orthopaedic risk-factor studies, either the sample size calculation or the post-hoc power calculation assumes the disease status of control subjects is perfectly ascertained, when it may not be. That means control groups may be mixtures of both unaffected cases and some unidentified affected cases. In this study, we demonstrate the consequences of using misclassified groups as control groups on the power of risk association tests, with the intent of showing that control groups with even small misclassification rates can reduce the power of association tests. In addition, we offer a range of correction factors to adjust sample size calculations back to 80% power. This was a simulation study using study designs from published orthopaedic risk-factor studies. The approach was to use their designs but simulate the data to include known proportions of misclassified affected subjects in the control group. The simulated data was used to calculate the power of a risk-association test. We calculated powers for several study designs and misclassification rates and compared them to a reference model. Treating misclassified data as disease-negative only always reduced statistical power compared to the reference power, and power loss increased with increasing misclassification rate. For this study, power could be improved back to 80% by increasing the sample size by a factor of 1.1 to 1.4. Researchers should use caution in calculating sample sizes for risk-factor studies and consider adjustments for estimated misclassification rates. |
2304.09534 | Realistic Data Enrichment for Robust Image Segmentation in
Histopathology | Poor performance of quantitative analysis in histopathological Whole Slide
Images (WSI) has been a significant obstacle in clinical practice. Annotating
large-scale WSIs manually is a demanding and time-consuming task, unlikely to
yield the expected results when used for fully supervised learning systems.
Rarely observed disease patterns and large differences in object scales are
difficult to model through conventional patient intake. Prior methods either
fall back to direct disease classification, which only requires learning a few
factors per image, or report on average image segmentation performance, which
is highly biased towards majority observations. Geometric image augmentation is
commonly used to improve robustness for average case predictions and to enrich
limited datasets. So far no method provided sampling of a realistic posterior
distribution to improve stability, e.g. for the segmentation of imbalanced
objects within images. Therefore, we propose a new approach, based on diffusion
models, which can enrich an imbalanced dataset with plausible examples from
underrepresented groups by conditioning on segmentation maps. Our method can
simply expand limited clinical datasets making them suitable to train machine
learning pipelines, and provides an interpretable and human-controllable way of
generating histopathology images that are indistinguishable from real ones to
human experts. We validate our findings on two datasets, one from the public
domain and one from a Kidney Transplant study. | Sarah Cechnicka, James Ball, Hadrien Reynaud, Callum Arthurs, Candice Roufosse, Bernhard Kainz | 2023-04-19T09:52:50Z | http://arxiv.org/abs/2304.09534v2 | # Realistic Data Enrichment for Robust Image Segmentation in Histopathology
###### Abstract
Poor performance of quantitative analysis in histopathological Whole Slide Images (WSI) has been a significant obstacle in clinical practice. Annotating large-scale WSIs manually is a demanding and time-consuming task, unlikely to yield the expected results when used for fully supervised learning systems. Rarely observed disease patterns and large differences in object scales are difficult to model through conventional patient intake. Prior methods either fall back to direct disease classification, which only requires learning a few factors per image, or report on average image segmentation performance, which is highly biased towards majority observations. Geometric image augmentation is commonly used to improve robustness for average case predictions and to enrich limited datasets. So far no method provided sampling of a realistic posterior distribution to improve stability, _e.g._ for the segmentation of imbalanced objects within images. Therefore, we propose a new approach, based on diffusion models, which can enrich an imbalanced dataset with plausible examples from underrepresented groups by conditioning on segmentation maps. Our method can simply expand limited clinical datasets making them suitable to train machine learning pipelines, and provides an interpretable and human-controllable way of generating histopathology images that are indistinguishable from real ones to human experts. We validate our findings on two datasets, one from the public domain and one from a Kidney Transplant study.1
Footnote 1: The source code and trained models will be publicly available at the time of the conference, on huggingface and github.
## 1 Introduction
Large scale datasets with accurate annotations are key to the successful development and deployment of deep learning algorithms for computer vision tasks. Such datasets are rarely available in medical imaging due to privacy concerns and high cost of expert annotations. This is particularly true for histopathology, where gigapixel images have to be processed [34]. This is one of the reasons
why histopathology is, to date, a field in which image-based automated quantitative analysis methods are rare. In radiology, for example, most lesions can be characterised manually into clinically actionable information, _e.g._ measuring the diameter of a tumour. However, this is not possible in histopathology, as quantitative assessment requires thousands of structures to be identified for each case, and most of the derived information is still highly dependent on the expertise of the pathologist. Therefore, supervised Machine Learning (ML) methods quickly became a research focus in the field, leading to the emergence of prominent early methods [25] and, more recently, to high-throughput analysis opportunities for the clinical practice [15, 23, 10]. Feature location, shape, and size are crucial for diagnosis; this high volume of information required makes automatic segmentation essential for computational pathology [15]. The automated extraction of these features should lead to the transition from their time-consuming and error-prone manual assessment to reproducible quantitative metrics-driven analysis, enabling more robust decision-making. Evaluating biopsies with histopathology continues to be the gold standard for identifying organ transplant rejection [22]. However, imbalances and small training sets still prevent deep learning methods from revolutionizing clinical practice in this field.
In this work, we are interested in the generation of training data for the specific case of histopathology image analysis for kidney transplant biopsies. In order to maximize transplant survival rates and patient well-being, it is essential to identify conditions that can result in graft failure, such as rejection, early on. The current diagnostic classification system presents shortcomings for biopsy assessment, due to its qualitative nature, high observer variability, and lack of granularity in crucial areas [32].
**Contribution:** We propose a novel data enrichment method using diffusion models conditioned on masks. Our model allows the generation of photo-realistic histopathology images with corresponding annotations to facilitate image segmentation and to balance datasets comprising few examples of underrepresented classes. In contrast to conventional geometric image augmentation, we generate images that are indistinguishable from real samples to human experts and provide means to precisely control the generation process through segmentation maps. Our method can also be used for expert training, as it can cover the extreme ends of pathological representations through manual definition of segmentation masks.
**Related Work:** Diffusion Models have experienced fast-rising popularity [21, 24, 27]. Many improvements have been proposed [28, 30], some of them suggesting image-to-image transfer methods that can convert simple sketches into photo-realistic images [2]. This is partially related to our approach. However, in contrast to sketch-based synthesis of natural images, we aim at bootstrapping highly performing image segmentation methods from poorly labelled ground truth data.
Data enrichment through synthetic images has been a long-standing idea in the community [33, 19, 9]. So far, this approach was limited by the generative capabilities of auto-encoding [16] or generative adversarial approaches [6]. A domain gap between real and synthetic images often leads to shortcut learning [5]
and biased results with minimal gains. The best results have surprisingly been achieved, not with generative models, but with data imputation by mixing existing training samples to new feature combinations [4, 31]. Sample mixing can be combined with generative models like Generative Adversarial Networks (GAN) to enrich the data [19].
## 2 Method
We want to improve segmentation robustness. We denote the image space as \(\mathcal{X}\) and label mask space as \(\mathcal{Y}\). Formally, we look for different plausible variations within the joint space \(\mathcal{X}\times\mathcal{Y}\) in order to generate extensive datasets \(d_{k}=\{(\mathbf{x}_{n}^{(k)},\mathbf{y}_{n}^{(k)})\}_{n=1}^{N_{k}}\), where \(N_{k}\) is the number of labelled data points in the \(k\)-th dataset. We hypothesise that training a segmentation network \(M_{\theta}\) on combinations of \(d_{k}\), \(d_{a}\cup d_{b}\cup\cdots\cup d_{c}\) with or without samples from an original dataset, will lead to state-of-the-art segmentation performance. We consider any image segmentation model \(M_{\theta}:\mathcal{X}\rightarrow\mathcal{Y}\) that performs pixel-wise classification, _i.e._ semantic segmentation, in \(\mathbb{R}^{C}\), where \(C\) is the number of classes in \(\mathcal{Y}\). Thus, predictions for the individual segmentation class labels can be defined as \(p(\mathbf{y}|\mathbf{x},\theta)=\mathbf{\hat{y}}=softmax(M_{\theta}(\mathbf{x}))\).
Inverting the segmentation prediction to \(p(\mathbf{x}|\mathbf{y},\theta)\) is impractical, as the transformation \(M_{\theta}\) is not bijective, and thus inverting it would yield a _set_ of plausible samples from \(\mathcal{X}\). However, the inversion can be modelled through a constrained sampling method, yielding single plausible predictions \(\mathbf{\hat{x}}\in\hat{\mathcal{X}}\) given \(\mathbf{y}\in\mathcal{Y}\) and additional random inputs \(z\sim\mathcal{N}(0,\sigma)\) holding the random state of our generative process. This can be modelled with diffusion probabilistic models [12]. We can thus define \(D_{\phi}:\mathcal{Z}\rightarrow\hat{\mathcal{X}}\), where \(\mathcal{Z}\) is a set of Gaussian noise samples. This model can be further conditioned on label masks \(\mathbf{y}\) to produce matching elements of the joint space \(\mathcal{X}\times\mathcal{Y}\), yielding \(D_{\xi}:\mathcal{Z}\times\mathcal{Y}\rightarrow\hat{\mathcal{X}}\).
The first step of our approach, shown in Figure 1, is to generate a set of images \(X_{1}=\{\mathbf{x}_{n}^{(1)}|\mathbf{x}_{n}^{(1)}=D_{\theta}(z),z\sim\mathcal{N}(0, \sigma)\}\subset\hat{\mathcal{X}}\) where \(D_{\theta}\) is an unconditional diffusion model trained on real data samples. We then map all samples \(\mathbf{x}_{n}^{(1)}\) to the corresponding elements in the set of predicted label masks \(Y_{1}=\{\mathbf{y}_{n}^{(1)}|\mathbf{y}_{n}^{(1)}=M_{\theta}(\mathbf{x}_{n}^{(1)}),\mathbf{x}_ {n}^{(1)}\ \in\ X_{1}\}\subset\hat{\mathcal{Y}}\), where \(M_{\theta}\) is a segmentation model trained on real data pairs. This creates a dataset noted \(d_{1}\). The second step is to generate a dataset \(d_{2}\), by using a conditional diffusion model \(D_{\xi}\) trained on real images and applied to the data pairs in \(d_{1}\), such that \(X_{2}=\{\mathbf{x}_{n}^{(2)}|\mathbf{x}_{n}^{(2)}=D_{\xi}(\mathbf{y}_{n}^{(1)},z),\mathbf{y}_ {n}^{(1)}\in Y_{1},z\sim\mathcal{N}(0,\sigma)\}\). This lets us generate a much larger and more diverse dataset of image-label pairs, where the images are generated from the labels. Our final step is to use this dataset to train a new segmentation model \(M_{\zeta}\) that largely outperforms \(M_{\theta}\). To do so, we first train \(M_{\zeta}\) on the generated dataset \(d_{2}\) and fine-tune it on the real dataset.
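The following sketch summarises this three-step pipeline; the model interfaces (`D_phi`, `M_theta`, `D_xi`), tensor shapes and the number of samples per mask are illustrative assumptions rather than the authors' exact implementation.

```python
# Illustrative sketch of the enrichment pipeline: D_phi samples images from
# noise, M_theta segments them, and D_xi re-synthesises several images per
# predicted mask; the resulting pairs form dataset d2 used to train M_zeta.
import torch

def build_enriched_dataset(D_phi, M_theta, D_xi, n_seeds, samples_per_mask=4):
    d2 = []
    for _ in range(n_seeds):
        z = torch.randn(1, 3, 64, 64)            # random state z ~ N(0, sigma)
        x1 = D_phi(z)                            # step 1: unconditional sample
        y1 = M_theta(x1).argmax(dim=1)           # step 1: predicted label mask
        for _ in range(samples_per_mask):        # step 2: images from the mask
            x2 = D_xi(torch.randn_like(z), y1)   # mask-conditioned sample
            d2.append((x2.detach(), y1.detach()))
    return d2                                    # pre-training set for M_zeta
```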
**Image Generation:** Diffusion models are a type of generative model producing image samples from Gaussian noise. The idea is to reverse a forward Markovian diffusion process, which gradually adds Gaussian noise to a real image \(\mathbf{x}_{0}\) as a
time sequence \(\{\mathbf{x}_{t}\}_{t=1\dots T}\). The probability distribution \(q\) for the forward sampling process at time \(t\) can be written as a function of the original sample image
\[q\left(\mathbf{x}_{t}\mid\mathbf{x}_{0}\right)=\mathcal{N}\left(\mathbf{x}_{t}; \sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0},\left(1-\bar{\alpha}_{t}\right)\mathbf{I }\right),q\left(\mathbf{x}_{t}\mid\mathbf{x}_{s}\right)=\mathcal{N}\left( \mathbf{x}_{t};\left(\alpha_{t}/\alpha_{s}\right)\mathbf{x}_{s},\sigma_{t|s}^{ 2}\mathbf{I}\right), \tag{1}\]
where \(\bar{\alpha}_{t}=\sqrt{1/\left(1+e^{-\lambda_{t}}\right)}\) and \(\sigma_{t|s}^{2}=\sqrt{\left(1-e^{\lambda_{t}-\lambda_{s}}\right)\sigma_{t}^{ 2}}\) parameterise the variance of the noise schedule, whose logarithmic signal to noise ratio \(\lambda_{t}=\log\left(\alpha_{t}^{2}/\sigma_{t}^{2}\right)\) is set to decrease with t [26, 29]. The joint distribution \(p_{\theta}\) describing the corresponding reverse process is
\[p_{\theta}\left(\mathbf{x}_{0:T}\right):=p\left(\mathbf{x}_{T}\right)\prod_{ t=1}^{T}p_{\theta}\left(\mathbf{x}_{t-1}\mid\mathbf{x}_{t}\right),\quad p_{ \theta}\left(\mathbf{x}_{t-1}\mid\mathbf{x}_{t}\right):=\mathcal{N}\left( \mathbf{x}_{t-1};\mu_{\theta}\left(\mathbf{x}_{t},t,\mathbf{c}\right),\sigma_ {t}\right), \tag{2}\]
where \(\mu_{\theta}\) is the parameter to be estimated, \(\sigma_{t}\) is given and \(\mathbf{c}\) is an additional conditioning variable. Distribution \(p\) depends on the entire dataset and is modelled by a neural network. [12] have shown that learning the variational lower bound on the reverse process is equivalent to learning a model for the noise added to the image at each time step. By modelling \(\mathbf{x}_{t}=\alpha_{t}\mathbf{x_{0}}+\sigma_{t}\epsilon\) with \(\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) we aim to estimate the noise \(\epsilon_{\theta}(x_{t},\lambda_{t},\mathbf{c})\) in order to minimise the loss function
\[\mathcal{L}=\mathbb{E}_{\epsilon,\lambda_{t},\mathbf{c}}\left[w(\lambda_{t}) \left\|\epsilon_{\theta}(x_{t},\lambda_{t},\mathbf{c})-\epsilon\right\|^{2} \right], \tag{3}\]
where \(w(\lambda_{t})\) denotes the weight assigned at each time step [13]. We follow [26] in using a cosine schedule and DDIM [30] continuous time steps for training and sampling. We further use classifier-free guidance [13], avoiding the need for a separate classifier network. The network is trained partly with conditional input and partly with the image alone, such that the resulting noise estimate is a weighted average:
\[\tilde{\epsilon}_{\theta}\left(x_{t},\lambda_{t},\mathbf{c}\right)=\left(1+w \right)\epsilon_{\theta}\left(x_{t},\lambda_{t},\mathbf{c}\right)-w\epsilon_{ \theta}\left(\mathbf{z}_{\lambda}\right). \tag{4}\]
The model can further be re-parameterized using v-parameterization [28] by predicting \(\mathbf{v}\equiv\alpha_{t}\epsilon-\sigma_{t}\mathbf{x}\) rather than just the noise, \(\epsilon\), as before. With v-parameterization, the predicted image for time step \(t\) is now \(\mathbf{\hat{x}}=\alpha_{t}\mathbf{z}_{t}-\sigma_{t}\hat{\mathbf{v}}_{\theta}( \mathbf{z}_{t})\).
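A minimal sketch of the resulting guided denoising step, assuming a network that outputs a v-prediction and an explicit null conditioning for the unconditional branch (both assumptions for illustration):

```python
# Sketch of classifier-free guidance (Eq. 4) applied to a v-parameterized
# model, followed by the predicted image x_hat = alpha_t * z_t - sigma_t * v.
# `model(z, lam, cond)` and `null_cond` are assumed interfaces.
import torch

def guided_denoise(model, z_t, lam_t, cond, null_cond, alpha_t, sigma_t, w=1.0):
    v_cond = model(z_t, lam_t, cond)          # conditional prediction
    v_uncond = model(z_t, lam_t, null_cond)   # unconditional prediction
    v = (1.0 + w) * v_cond - w * v_uncond     # classifier-free guidance, weight w
    return alpha_t * z_t - sigma_t * v        # predicted clean image x_hat
```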
Figure 1: Summary of our dataset generation approach as described in Section 2. We use our diffusion model \(D_{\phi}\) to generate images, \(M_{\theta}\) segments them and \(D_{\xi}\) creates multiple images from these segmentations. Dataset \(d_{2}\) is the one used to train our final model \(M_{\zeta}\).
**Mask conditioning:** Given our proprietary set of histopathology patches, only a small subset of these come with their corresponding segmentation labels. Therefore, when conditioning on segmentation masks, we first train a set of unconditioned cascaded diffusion models using our unlabelled patches. This allows the model to be pre-trained on a much richer dataset, reducing the amount of labelled data needed to get high-quality segmentation-conditioned samples. Conditioning is achieved by concatenating the segmentation mask, which is empty in pre-training, with the noisy image as input into each diffusion model, at every reverse diffusion step. After pre-training, we fine-tune the cascaded diffusion models on the labelled image patches so that the model learns to associate the labels with the structures it has already learnt to generate in pre-training.
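A sketch of this conditioning step, assuming one-hot encoded masks concatenated channel-wise with the noisy input (the UNet interface and encoding are illustrative assumptions):

```python
# Sketch of mask conditioning: the segmentation mask (empty during
# pre-training on unlabelled patches) is concatenated channel-wise with the
# noisy image before every UNet call in the reverse diffusion process.
import torch
import torch.nn.functional as F

def conditioned_unet_call(unet, z_t, t, mask, num_classes):
    if mask is None:                                # unlabelled pre-training
        cond = torch.zeros(z_t.size(0), num_classes, *z_t.shape[2:],
                           device=z_t.device)
    else:                                           # labelled fine-tuning
        cond = F.one_hot(mask, num_classes).permute(0, 3, 1, 2).float()
    return unet(torch.cat([z_t, cond], dim=1), t)   # channel-wise concatenation
```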
**Mask Generation:** We use a nnU-Net [14] to generate label masks through multi-class segmentation. The model is trained through a combination of Dice loss \(\mathcal{L}_{Dice}\) and Cross-Entropy loss \(\mathcal{L}_{CE}\). \(\mathcal{L}_{Dice}\) is used in combination with a Cross Entropy Loss \(\mathcal{L}_{CE}\) to obtain more gradient information during training [14], by giving it more mobility across the logits of the output vector. Additional auxiliary Dice losses are calculated at lower levels in the model. The total loss function for mask generation can therefore be described with
\[\mathcal{L}=\mathcal{L}_{Dice}+\mathcal{L}_{CE}+\beta(\mathcal{L}_{Dice_{1/2}} +\mathcal{L}_{Dice_{1/4}}), \tag{5}\]
where \(\mathcal{L}_{Dice_{1/2}}\) and \(\mathcal{L}_{Dice_{1/4}}\) denote the dice auxiliary losses calculated at a half, and a quarter of the final resolution, respectively.
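A sketch of this combined objective is given below; the soft-Dice helper and the value of \(\beta\) are illustrative assumptions, and nnU-Net's exact implementation differs in details.

```python
# Sketch of Eq. (5): Dice + cross-entropy at full resolution plus auxiliary
# Dice terms at 1/2 and 1/4 resolution, weighted by an assumed beta.
import torch
import torch.nn.functional as F

def soft_dice(logits, target, eps=1e-6):
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, probs.size(1)).permute(0, 3, 1, 2).float()
    inter = (probs * onehot).sum(dim=(2, 3))
    denom = probs.sum(dim=(2, 3)) + onehot.sum(dim=(2, 3))
    return 1.0 - (2.0 * inter / (denom + eps)).mean()

def segmentation_loss(out_full, out_half, out_quarter, target, beta=0.5):
    loss = soft_dice(out_full, target) + F.cross_entropy(out_full, target)
    for out, factor in ((out_half, 0.5), (out_quarter, 0.25)):
        small = F.interpolate(target[:, None].float(), scale_factor=factor,
                              mode="nearest").squeeze(1).long()
        loss = loss + beta * soft_dice(out, small)   # auxiliary Dice terms
    return loss
```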
We train two segmentation models \(M_{\theta}\) and \(M_{\zeta}\). First, for \(M_{\theta}\), we train the nnU-Net on the original data and ground truth label masks. \(M_{\theta}\) is then used to generate the label maps for all the images in \(d_{1}\), the pool of images generated with our unconditional diffusion model \(D_{\phi}\). The second nnU-Net, \(M_{\zeta}\), is pre-trained on our dataset \(d_{2}\) and we fine-tune it on the original data to produce our final segmentation model.
## 3 Evaluation
**Datasets and Preprocessing:** We use two datasets for evaluation. The first one is the public KUMAR dataset [18], which we chose in order to compare directly with the state-of-the-art. KUMAR consists of 30 WSI training images and 14 test images of \(1000\times 1000\) pixels with corresponding labels for tissue and cancer type (Breast, Kidney, Liver, Prostate, Bladder, Colon, and Stomach). During training, the raw images are cropped into patches of \(256\times 256\) and resized to \(64\times 64\) pixels. Due to the very limited amount of data available, we apply extensive data augmentation, including rotation, flipping, color shift, random cropping and elastic transformations. However, the baseline methods [19] only use 16 of the 30 images available for training.
The second dataset is a proprietary collection of Kidney Transplant Pathology WSI slides with an average resolution of \(30000\times 30000\) per slide. These images were tiled into overlapping patches of \(1024\times 1024\) pixels. For this work,
1654 patches, classified as kidney cortex, were annotated by a consultant transplant pathologist with more than ten years of experience and an assistant with 5 years of experience. Among these, 68 patches, belonging to 6 separate WSI, were selected for testing, while the rest were used for training. The dataset also includes tabular data of patient outcomes and history of creatinine scores before and after the transplant. We resize the \(1024\times 1024\) patches down to \(64\times 64\) resolution and apply basic shifts, flips and rotations to augment the data before using it to train our first diffusion model. We apply the same transformations, but with a higher re-scaling of \(256\times 256\), for the first super-resolution diffusion model. The images used to train the second and final super-resolution model are not resized but are still augmented with the same shifts, flips and rotations as for the previous models. We set most of our training parameters similar to those suggested in [26], but use the creatinine scores and patient outcomes as conditioning parameters for our diffusion models.
**Implementation:** We use a set of three cascaded diffusion models similar to [26], starting with a base model that generates \(64\times 64\) images, and then two super-resolution models to upscale to resolutions \(256\times 256\) and \(1024\times 1024\). Conditioning augmentation is used in super-resolution networks to improve sample quality. In contrast to [26], we use v-parametrization [28] to train our super-resolution models (\(64\times 64\to 256\times 256\) and \(256\times 256\to 1024\times 1024\)). These models are much more computationally demanding at each step of the reverse diffusion process, and it is thus crucial to reduce the number of steps during sampling to keep the sampling time in a reasonable range. We find that v-parametrization allows for as few as 256 steps, instead of 1024 in the noise-prediction setting, for the same image quality, while also converging faster. We keep the noise-prediction setting for our base diffusion model, as sampling speed is not an issue at the \(64\times 64\) scale, and changing to v-parametrization with 256 time steps generates images with poorer quality in this case. We use PyTorch v1.6 and consolidate [14, 26] into the same framework. Three Nvidia A5000 GPUs are used to train and evaluate our models. All diffusion models were trained for over 120,000 steps. The kidney study segmentation models were trained for 200 epochs and fine-tuned for 25; the KUMAR study used 800 epochs and was fine-tuned for 100. Training takes about 10 days, and image generation takes 60 s per image. Where real data was used for fine-tuning, it was restricted to 30% of the original dataset. Diffusion models were trained with a learning rate of 1e\(-\)4, and segmentation models were pre-trained with a learning rate of 1e\(-\)3, which dropped to 3e\(-\)6 when no improvement was observed on the validation set for 15 epochs. All models used the Adam optimiser. See the supplemental material for further details about the exact training configurations.
**Setup:** We evaluate the performance of nnU-Net [14] trained on the data enriched by our method. We train on 5 different combinations of training sets, using the same test set for metric comparison, and show the results in Table 1. First, we train a base nnU-Net solely on real image data (1), before fine-tuning it, independently, twice: once with a mixture of real and synthetic images as (2), and once exclusively with synthetic images as (3). The \(4^{th}\) and \(5^{th}\) models
correspond to nnU-Nets retrained from scratch using exclusively synthetic images as (4), and one further fine-tuned on real images as (5) in Table 1.
**Results:** Our quantitative results are summarised and compared to the state-of-the-art in Table 1 using the Dice coefficient (Dice) and Aggregated Jaccard Index (AJI) as suggested by [19]. Qualitative examples are provided in Fig. 2 (left), which illustrates that our model can convert a given label mask into a number of different tissue types and Fig. 2, where we compare synthetic enrichment images of various tissue types from our kidney transplant data.
**Sensitivity analysis:** Out of our 5 models relying on additional synthetic data in the KUMAR dataset experiments, 4 of them outperform all previous SOTA on the Dice score. Results are more nuanced when it comes to the AJI, as AJI overpenalizes overlapping regions [8], and our method does not optimise specifically for this metric. Furthermore, Table 1 shows that, for the KIDNEY dataset, we can reach high performance (88% Dice) while training on 30% (500 samples) of the real KIDNEY data (1a), and training on the full training set (1) yields a Dice score of 94%. We also observe that the model pretrained on synthetic data and fine-tuned on 500 real images (5), outperforms the one only trained on 500 real images (1a). Additionally, we discover that training the model on real data before fine-tuning it on synthetic samples (3) does not work as well as the opposite approach. We argue that pre-training an ML model on generated data gives it a strong prior on large dataset distributions and alleviates the need for many real samples in order to learn the final, exact, decision boundaries, making the learning procedure more data efficient.
Figure 2: Left: Outputs from our models. From left to right: (a) sample from \(D_{\phi}\), (b) overlaid segmentation from \(M_{\theta}\), (c) sample from \(D_{\xi}\), (d) difference map of segmentation from \(M_{\zeta}\) and \(M_{\theta}\). Segmentation colors are: red: Tubuli, blue: Glomeruli, green: Vessels. Right: Diffusion model-generated images conditioned on different tissue types in KUMAR, using the same label mask (e). Generated images are from Liver (f), Bladder (g), and Breast (h) tissues. This shows that our conditioning seems to allow a plausible mask to produce any kind of tissue.
**Discussion:** We have shown that data enrichment with generative diffusion models can help to boost performance in low data regimes, _e.g._, KUMAR data, but also observe that when using a larger dataset, where maximum performance might have already been reached, the domain gap may become prevalent and no further improvement can be observed, _e.g._, full KIDNEY data. Estimating the upper bound for the required labelled ground truth data for image segmentation is difficult in general. However, testing model performance saturation with synthetic data enrichment might be an experimental way forward to test for convergence bounds. Finally, we observe that pre-training on synthetic images and then fine-tuning on real images leads to the best performance in very data-limited scenarios, compared with other training strategies.
## 4 Conclusion
In this paper, we propose and evaluate a new data enrichment and image augmentation scheme based on diffusion models. We generate new, synthetic, high-fidelity images from noise, conditioned on arbitrary segmentation masks. This
\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Dice (\%)} & \multicolumn{3}{c}{AJI (\%)} \\ & & \multicolumn{1}{c}{Seen} & \multicolumn{1}{c}{Unseen} & \multicolumn{1}{c}{All} & \multicolumn{1}{c}{Seen} & \multicolumn{1}{c}{Unseen} & \multicolumn{1}{c}{All} \\ \cline{3-8} & CNN3 [17] & 82.26 & 83.22 & 82.67 & 51.54 & 49.89 & 50.83 \\ & DIST [20] & - & - & - & 55.91 & 56.01 & 55.95 \\ & NB-Net [3] & 79.88 & 80.24 & 80.03 & 59.25 & 53.68 & 56.86 \\ & Mask R-CNN [11] & 81.07 & 82.91 & 81.86 & 59.78 & 55.31 & 57.86 \\ & HoVer-Net [7] (*Res50) & 80.60 & 80.41 & 80.52 & 59.35 & 56.27 & 58.03 \\ & TAFE [1] (*Dense121) & 80.81 & 83.72 & 82.06 & 61.51 & 61.54 & 61.52 \\ & HoVer-Net + InsMix [19] & 80.33 & 81.93 & 81.02 & 59.40 & 57.67 & 58.66 \\ & TAFE + InsMix [19] & 81.18 & 84.40 & 82.56 & **61.98** & **65.07** & **63.31** \\ \cline{2-8} & (1) trained on real & 80.30 & 80.58 & 80.38 & 48.35 & 49.62 & 50.81 \\ & (2) fine-tuned by synthetic+real & 85.65 & 86.96 & 86.03 & 56.60 & 57.70 & 56.91 \\ & (3) fine-tuned by synthetic & 75.83 & 82.28 & 77.67 & 32.29 & 40.06 & 34.51 \\ & (4) trained on synthetic & 84.52 & **89.37** & 85.90 & 48.12 & 57.52 & 50.80 \\ & (5) trained on synthetic, fine-tuned on real & **86.13** & 88.29 & **86.75** & 56.80 & 58.44 & 57.27 \\ \hline \multirow{2}{*}{
\begin{tabular}{} \end{tabular} } & (1) trained on real (100\% data) & & 94.22 & & 77.45 \\ & (1a) trained on real (30\% data) & & 88.01 & & & 62.05 \\ & (2) fine-tuned by synthetic+real & & 92.25 & & & 69.11 \\ & (3) fine-tuned by synthetic & & 89.65 & & 58.59 \\ & (4) trained on synthetic & & 82.00 & & 42.40 \\ & (5) trained on synthetic, fine-tuned on real & & 92.74 & & 71.55 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison with the state-of-the-art methods on the KUMAR dataset (top) and the KIDNEY transplant dataset (bottom). Metrics are chosen as suggested in [19]: Dice and AJI. Variant (1-5) show performance on the full as well as limited (1a) KIDNEY data. Best values in bold.
allows us to synthesise an infinite amount of plausible variations for any given feature arrangement. We have shown that using such enrichment can have a drastic effect on the performance of segmentation models trained from small datasets used for histopathology image analysis, thus providing a mitigation strategy for expensive, expert-driven, manual labelling commitments.
**Acknowledgements:** This work is supported by the UKRI Centre for Doctoral Training in Artificial Intelligence for Healthcare (EP/S023283/1). Dr. Roufosse is supported by the National Institute for Health Research (NIHR) Biomedical Research Centre based at Imperial College Healthcare NHS Trust and Imperial College London. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health. Dr Roufosse's research activity is made possible with generous support from Sidharth and Indira Burman. The authors gratefully acknowledge the scientific support and HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-Universitat Erlangen-Nurnberg (FAU) under the NHR project b143dc22. NHR funding is provided by federal and Bavarian state authorities. NHR@FAU hardware is partially funded by the German Research Foundation (DFG) 440719683. |
2306.07360 | $\mathfrak{m}$-Endoregular lattices | In a previous work, (dual)-$\mathfrak{m}$-Rickart lattices were studied. Now,
in this paper, we introduce $\mathfrak{m}$-endoregular lattices as those
lattices $\mathcal{L}$ such that $\mathfrak{m}$ is a regular monoid, where
$\mathfrak{m}$ is a submonoid with zero of End$_{lin}(\mathcal{L})$. We show
that these lattices can be characterized in terms of $\mathfrak{m}$-Rickart and
$\mathfrak{m}$-dual-Rickart lattices. Also, we compare these new lattices with
those lattices in which every compact element is a complement. We characterize
the $\mathfrak{m}$-endoregular lattices such that every idempotent in
$\mathfrak{m}$ is central in $\mathfrak{m}$ and we show that for these lattices
the complements are a sublattice which is a Boolean algebra. We introduce two
new concepts, $\mathfrak{m}$-$\mathcal{K}$-extending and
$\mathfrak{m}$-$\mathcal{T}$-lifting lattices. For these lattices, we show that
the monoid $\mathfrak{m}$ has a regular quotient monoid provided they satisfy
$\mathfrak{m}$-$C_2$ and $\mathfrak{m}$-$D_2$ respectively. | Mauricio Medina-Bárcenas, Hugo Rincón-Mejía | 2023-06-12T18:32:19Z | http://arxiv.org/abs/2306.07360v1 | # m-endoregular lattices
###### Abstract.
In a previous work, (dual)-m-Rickart lattices were studied. Now, in this paper, we introduce m-endoregular lattices as those lattices \(\mathcal{L}\) such that \(\mathfrak{m}\) is a regular monoid, where \(\mathfrak{m}\) is a submonoid with zero of \(\operatorname{End}_{lin}(\mathcal{L})\). We show that these lattices can be characterized in terms of \(\mathfrak{m}\)-Rickart and \(\mathfrak{m}\)-dual-Rickart lattices. Also, we compare these new lattices with those lattices in which every compact element is a complement. We characterize the m-endoregular lattices such that every idempotent in \(\mathfrak{m}\) is central in \(\mathfrak{m}\) and we show that for these lattices the complements are a sublattice which is a Boolean algebra. We introduce two new concepts, \(\mathfrak{m}\)-\(\mathcal{K}\)-extending and \(\mathfrak{m}\)-\(\mathcal{T}\)-lifting lattices. For these lattices, we show that the monoid \(\mathfrak{m}\) has a regular quotient monoid provided they satisfy \(\mathfrak{m}\)-\(C_{2}\) and \(\mathfrak{m}\)-\(D_{2}\) respectively.
Key words and phrases:Endoregular lattice, Abelian endoregular lattice, \(\mathcal{K}\)-extending lattice, \(\mathcal{T}\)-lifting lattice, linear morphism 2010 Mathematics Subject Classification: Primary 06C05, 06C15, 16D10; Secondary 08A35, 06B35 The first author was supported by the grant "Programa de Becas Posdoctorales en la UNAM 2022" from the Universidad Nacional Autonoma de Mexico (UNAM)
## 1. Introduction
In Section 3, we also study the \(\mathfrak{m}\)-endoregular lattices in which every idempotent of \(\mathfrak{m}\) is central in \(\mathfrak{m}\); we call them \(\mathfrak{m}\)-_abelian-endoregular_. We characterize them (Proposition 3.27, Proposition 3.29) and we show when in an \(\mathfrak{m}\)-abelian-endoregular lattice \(\mathcal{L}\) the set of complements \(C(\mathcal{L})\) is a Boolean sublattice of \(\mathcal{L}\) (Proposition 3.31 and Corollary 3.32). In the last section, Section 4, we introduce the \(\mathfrak{m}\)-\(\mathcal{K}\)-extending and \(\mathfrak{m}\)-\(\mathcal{T}\)-lifting lattices (Definition 4.1). We show that every \(\mathfrak{m}\)-Rickart lattice is \(\mathfrak{m}\)-\(\mathcal{K}\)-extending and every \(\mathfrak{m}\)-dual-Rickart lattice is \(\mathfrak{m}\)-\(\mathcal{T}\)-lifting; moreover, we give the converses (Proposition 4.2 and Proposition 4.3). We define two congruences \(\equiv_{\Delta}\) and \(\equiv^{\nabla}\) on any submonoid \(\mathfrak{m}\subseteq\operatorname{End}_{lin}(\mathcal{L})\). On one hand, it is proved that \(\mathfrak{m}/\equiv_{\Delta}\) is a regular monoid provided that \(\mathcal{L}\) is \(\mathfrak{m}\)-\(\mathcal{K}\)-extending and satisfies \(\mathfrak{m}\)-\(C_{2}\); on the other hand, \(\mathfrak{m}/\equiv^{\nabla}\) is a regular monoid provided that \(\mathcal{L}\) is \(\mathfrak{m}\)-\(\mathcal{T}\)-lifting and satisfies \(\mathfrak{m}\)-\(D_{2}\) (Theorem 4.15 and Theorem 4.19).
Throughout this paper, \(\mathcal{L}\) will denote a (bounded, complete, modular) lattice, the lowest element of \(\mathcal{L}\) will be denoted by \(\mathbf{0}\) and the greatest element will be denoted by \(\mathbf{1}\). Given \(a,b\in\mathcal{L}\), \([a,b]\) will denote the interval \(\{x\in\mathcal{L}\mid a\leq x\leq b\}\). The subset of complements in \(\mathcal{L}\) will be denoted as \(C(\mathcal{L})\). The set of linear endomorphisms of \(\mathcal{L}\) will be denoted as \(\operatorname{End}_{lin}(\mathcal{L})\) which is a monoid with the composition. The letter \(\mathfrak{m}\) will stand for a submonoid with zero of \(\operatorname{End}_{lin}(\mathcal{L})\). All rings will be associative with unit, and all modules will be left modules. Given an \(R\)-module, \(\operatorname{End}_{R}(M)\) will denote the endomorphism ring of \(M\).
## 2. Preliminaries
**Definition 2.1**.: A map between bounded lattices \(\varphi:\mathcal{L}\to\mathcal{L}^{\prime}\) is called a _linear morphism_ if there exists \(\ker_{\varphi}\in\mathcal{L}\) called the _kernel_ of \(\varphi\), and \(a^{\prime}\in\mathcal{L}^{\prime}\) such that
1. \(\varphi(x)=\varphi(x\vee\ker_{\varphi})\) for all \(x\in\mathcal{L}\).
2. \(\varphi\) induces an isomorphism of lattices \(\overline{\varphi}:[\ker_{\varphi},\mathbf{1}]\to[\mathbf{0},a^{\prime}]\) given by \(\overline{\varphi}(x)=\varphi(x)\) for all \(x\in[\ker_{\varphi},\mathbf{1}]\).
_Remark 2.2_.: If the lattices are complete, we will assume that the isomorphism in item (2) of Definition 2.1 is an isomorphism of complete lattices. In this case, a linear morphism commutes with arbitrary joins [1, Proposition 1.3].
**Notation:** Let \(\mathcal{L}\) be a complete modular lattice and \(a,x\in\mathcal{L}\). There are two canonical linear morphisms \(\iota_{x}:[\mathbf{0},x]\to\mathcal{L}\) the inclusion, and \(\rho_{a}:\mathcal{L}\to[a,\mathbf{1}]\) given by \(\rho_{a}(y)=a\lor y\).
**Proposition 2.3** ([9, Proposition 2.4]).: _Let \(\mathcal{L}\) be a bounded modular lattice, and \(x\in\mathcal{L}\) be an element with complement \(x^{\prime}\). Then, the map \(\pi_{x}:\mathcal{L}\to\mathcal{L}\) given by \(\pi_{x}(a)=(a\lor x^{\prime})\wedge x\) is a linear morphism._
**Definition 2.4**.: Let \(\mathcal{L}\) be a bounded modular lattice, and \(x\in\mathcal{L}\) be an element with a complement. The linear morphism \(\pi_{x}\) is called the _projection on \(x\)_.
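For instance, in the lattice of submodules \(\Lambda(M)\) of an \(R\)-module \(M=A\oplus B\), the projection \(\pi_{A}\) of Definition 2.4 is given by \(\pi_{A}(N)=(N+B)\cap A\) for every \(N\leq M\), which is precisely the image of \(N\) under the canonical module projection \(M\to A\) with kernel \(B\).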
_Remark 2.5_.: Let \(\mathcal{L}\) be a complete modular lattice, and \(x,y\in\mathcal{L}\) with \(x\) being a complement. Suppose a linear morphism exists \(\varphi:[\mathbf{0},x]\to[\mathbf{0},y]\). Then \(\varphi\) can be extended to a linear endomorphism of \(\mathcal{L}\) considering \(\widehat{\varphi}=\iota_{y}\varphi\pi_{x}:\mathcal{L}\to\mathcal{L}\).
**Proposition 2.6** ([9, Proposition 2.10]).: _Let \(\mathcal{L}\) be a bounded modular lattice and \(\varepsilon\in\operatorname{End}_{lin}(\mathcal{L})\). If \(\varepsilon\) is idempotent, then \(\mathbf{0}=\ker_{\varepsilon}\wedge\varepsilon(\mathbf{1})\) and \(\mathbf{1}=\ker_{\varepsilon}\vee\varepsilon(\mathbf{1})\)._
**Corollary 2.7**.: _Let \(\mathcal{L}\) be a complete modular lattice, and \(\varepsilon:\mathcal{L}\to\mathcal{L}\) a linear morphism such that \(\varepsilon^{2}=\varepsilon\). Then \(\varepsilon=\pi_{\varepsilon(\mathbf{1})}\)._
Proof.: Since \(\varepsilon(a)=\varepsilon(\varepsilon(a))\) for all \(a\in\mathcal{L}\) and \(\varepsilon\) induces an isomorphism \([\ker_{\varepsilon},\mathbf{1}]\to[\mathbf{0},\varepsilon(\mathbf{1})]\), it follows that \(a\vee\ker_{\varepsilon}=\varepsilon(a)\vee\ker_{\varepsilon}\). Therefore
\[\pi_{\varepsilon(\mathbf{1})}(a)=(a\vee\ker_{\varepsilon})\wedge\varepsilon( \mathbf{1})=(\varepsilon(a)\vee\ker_{\varepsilon})\wedge\varepsilon(\mathbf{1 })=\varepsilon(a),\]
for all \(a\in\mathcal{L}\).
**Corollary 2.8**.: _Let \(\mathcal{L}\) be a bounded modular lattice. Then there exists a bijective correspondence between idempotent linear endomorphisms of \(\mathcal{L}\) and pairs \((x,x^{\prime})\) such that \(x^{\prime}\) is a complement of \(x\) in \(\mathcal{L}\)._
Proof.: Given an idempotent linear endomorphism \(\varepsilon:\mathcal{L}\to\mathcal{L}\), we have the pair \((\ker_{\varepsilon},\varepsilon(\mathbf{1}))\). On the other hand, if \((x,x^{\prime})\) is a pair of elements of \(\mathcal{L}\) such that \(x^{\prime}\) is a complement of \(x\), then the linear endomorphism \(\pi_{x}(a)=(a\lor x^{\prime})\wedge x\) is idempotent since \(\pi_{x}\) fixes every element in \([\mathbf{0},x]\).
Given an \(R\)-module \(M\) and an endomorphism \(f:M\to M\), there is a linear morphism \(f_{*}:\Lambda(M)\to\Lambda(M)\) induced by \(f\). Then, there is a homomorphism of monoids with zero, \((-)_{*}:\operatorname{End}_{R}(M)\to\operatorname{End}_{lin}(\Lambda(M))\). Let \(\mathfrak{E}_{M}\) denote the image of \(\operatorname{End}_{R}(M)\) under \((-)_{*}\). Then \(\mathfrak{E}_{M}\) is a submonoid with zero of \(\operatorname{End}_{lin}(\Lambda(M))\).
**Definition 2.9**.: Let \(\mathcal{L}\) be a complete lattice, and \(\mathfrak{m}\) be a submonoid with zero of \(\operatorname{End}_{lin}(\mathcal{L})\).
* \(\mathcal{L}\) is called \(\mathfrak{m}\)-_Rickart_ if \(\ker_{\varphi}\) has a complement in \(\mathcal{L}\) for all \(\varphi\in\mathfrak{m}\).
* \(\mathcal{L}\) is called _dual-\(\mathfrak{m}\)-Rickart_ if \(\varphi(\mathbf{1})\) has a complement in \(\mathcal{L}\) for all \(\varphi\in\mathfrak{m}\).
If the submonoid we are considering is \(\operatorname{End}_{lin}(\mathcal{L})\), we will omit the \(\mathfrak{m}\).
## 3. \(\mathfrak{m}\)-Endoregular lattices
**Definition 3.1**.: Let \(\mathcal{L}\) be a complete modular lattice and \(\mathfrak{m}\) be a submonoid of \(\operatorname{End}_{lin}(\mathcal{L})\). We say that \(\mathfrak{m}\) is _closed under complements_ if for any \(\varphi\in\mathfrak{m}\) and any complements \(x,y\in\mathcal{L}\) such that \(\varphi\) induces a linear isomorphism \(\varphi|_{x}:[\mathbf{0},x]\cong[\mathbf{0},y]\), it follows that \(\iota_{x}(\varphi|_{x})^{-1}\pi_{y}\in\mathfrak{m}\).
**Definition 3.2**.: Given a lattice \(\mathcal{L}\) and a submonoid with zero, \(\mathfrak{m}\subseteq\operatorname{End}_{lin}(\mathcal{L})\), we will say that \(\mathfrak{m}\)_contains all the projections_ if \(\pi_{x}\in\mathfrak{m}\) for every complement \(x\in\mathcal{L}\).
_Remark 3.3_.: If a submonoid \(\mathfrak{m}\subseteq\operatorname{End}_{lin}(\mathcal{L})\) is closed under complements, then \(\mathfrak{m}\) contains all the projections.
**Proposition 3.4**.: _Let \(M\) be an \(R\)-module. Then \(\mathfrak{E}_{M}\) is closed under complements._
Proof.: Let \(f\in\operatorname{End}_{R}(M)\) such that \(f_{*}:[0,N]\to[0,L]\) is a linear isomorphism with \(N\) and \(L\) direct summands of \(M\). Since \(f_{*}(A)=f(A)=0\) if and only if \(A=0\), \(f|_{N}\) is injective. On the other hand \(f(N)=L\), therefore \(f|_{N}:N\to L\) is an isomorphism. Let \(g:L\to N\) denote the inverse of \(f|_{N}\). Consider the \(R\)-homomorphism \(i(g\oplus 0):M\to M\) where \(i:N\hookrightarrow M\) is the canonical inclusion. Then \((i(g\oplus 0))_{*}=\iota_{N}g_{*}\pi_{L}\). Thus \(\iota_{N}(f_{*})^{-1}\pi_{L}\in\mathfrak{E}_{M}\).
**Proposition 3.5**.: _Let \(\mathcal{L}\) be a complete modular lattice and \(\mathfrak{m}\) be a submonoid of \(\operatorname{End}_{lin}(\mathcal{L})\) closed under complements. The following conditions are equivalent for \(\varphi\in\mathfrak{m}\):_
1. \(\ker_{\varphi}\) _and_ \(\varphi(\mathbf{1})\) _have a complement in_ \(\mathcal{L}\)_._
2. _There exists_ \(\psi\in\mathfrak{m}\) _such that_ \(\varphi=\varphi\psi\varphi\)_._
_Moreover, \(\psi\varphi(\mathbf{1})\) is a complement of \(\ker_{\varphi}\), and \(\ker_{\varphi\psi}\) is a complement of \(\varphi(\mathbf{1})\)._
Proof.: (a)\(\Rightarrow\)(b) By hypothesis, there exists \(x\in\mathcal{L}\) such that \(\mathbf{1}=\ker_{\varphi}\lor x\) and \(\mathbf{0}=\ker_{\varphi}\wedge x\). Then, \(\varphi\) induces a linear isomorphism \(\varphi|_{x}:[\mathbf{0},x]\to[\mathbf{0},\varphi(\mathbf{1})]\). Note that \(\varphi(x)=\varphi(\mathbf{1})\). Define \(\psi:\mathcal{L}\to\mathcal{L}\) as \(\psi=\iota_{x}(\varphi|_{x})^{-1}\pi_{\varphi(\mathbf{1})}\). By hypothesis, \(\psi\in\mathfrak{m}\). Hence
\[\varphi\psi\varphi(a)=\varphi\iota_{x}(\varphi|_{x})^{-1}\pi_{\varphi(\mathbf{1 })}\varphi(a)=\varphi\iota_{x}(\varphi|_{x})^{-1}\varphi(a)=\varphi(a).\]
(b)\(\Rightarrow\)(a) Suppose that there exists \(\psi\in\mathfrak{m}\) such that \(\varphi=\varphi\psi\varphi\). Note that \(\varphi\psi\) and \(\psi\varphi\) are idempotent elements of \(\mathfrak{m}\). We have that
\[\varphi\psi(\mathbf{1})\leq\varphi(\mathbf{1})=\varphi\psi(\varphi(\mathbf{1}) )\leq\varphi\psi(\mathbf{1}).\]
This implies that \(\varphi\psi(\mathbf{1})=\varphi(\mathbf{1})\). It follows from Proposition 2.6 that \(\mathbf{1}=\varphi\psi(\mathbf{1})\vee\ker_{\varphi\psi}=\varphi(\mathbf{1}) \vee\ker_{\varphi\psi}\) and \(\mathbf{0}=\varphi\psi(\mathbf{1})\wedge\ker_{\varphi\psi}=\varphi(\mathbf{1 })\wedge\ker_{\varphi\psi}\). Therefore \(\varphi(\mathbf{1})\) is a complement in \(\mathcal{L}\). Now, let \(\overline{\varphi}:[\ker_{\varphi},\mathbf{1}]\to[\mathbf{0},\varphi(\mathbf{1 })]\) be the isomorphism induced by \(\varphi\). Then,
\[\overline{\varphi}(\psi\varphi(\mathbf{1})\vee\ker_{\varphi})=\varphi(\psi \varphi(\mathbf{1})\vee\ker_{\varphi})=\varphi\psi\varphi(\mathbf{1})= \varphi(\mathbf{1})=\overline{\varphi}(\mathbf{1}).\]
Thus, \(\psi\varphi(\mathbf{1})\vee\ker_{\varphi}=\mathbf{1}\). Note that \(\ker_{\varphi}\leq\ker_{\psi\varphi}\) and \(\ker_{\psi\varphi}\wedge\psi\varphi(\mathbf{1})=\mathbf{0}\). Hence, \(\ker_{\varphi}\wedge\psi\varphi(\mathbf{1})=\mathbf{0}\).
**Corollary 3.6**.: _Let \(M\) be an \(R\)-module. The following conditions are equivalent for \(f\in\operatorname{End}_{R}(M)\):_
1. \(\operatorname{Ker}f\) _and_ \(f(M)\) _are direct summands of_ \(M\)_._
2. _There exists_ \(g\in\operatorname{End}_{R}(M)\) _such that_ \(f(N)=fgf(N)\) _for all_ \(N\leq M\)_._
_Moreover, \(gf(M)\) is a complement of \(\operatorname{Ker}f\), and \(\operatorname{Ker}fg\) is a complement of \(f(M)\)._
**Definition 3.7**.: A monoid \(E\) is said to be _regular_ if, for any \(\varphi\in E\), there exists \(\psi\in E\) such that \(\varphi=\varphi\psi\varphi\).
**Definition 3.8**.: Let \(\mathcal{L}\) be a complete lattice and \(\mathfrak{m}\) be a submonoid with zero of \(\operatorname{End}_{lin}(\mathcal{L})\). The lattice \(\mathcal{L}\) is called \(\mathfrak{m}\)_-endoregular_ if the monoid \(\mathfrak{m}\) is regular. If the submonoid we are considering is \(\operatorname{End}_{lin}(\mathcal{L})\), we will omit the \(\mathfrak{m}\).
An \(R\)-module \(M\) is said to be _endoregular_ if \(\operatorname{End}_{R}(M)\) is a von Neumann regular ring [6]. This definition can be compared with Definition 3.8 as follows:
**Corollary 3.9**.: _The following conditions are equivalent for an \(R\)-module \(M\):_
1. \(M\) _is endoregular._
2. \(\Lambda(M)\) _is an_ \(\mathfrak{E}_{M}\)_-endoregular lattice._
3. _For all_ \(f\in\operatorname{End}_{R}(M)\)_, there exists_ \(g\in\operatorname{End}_{R}(M)\) _such that_ \(f(N)=fgf(N)\) _for all_ \(N\leq M\)_._
4. \(\operatorname{Ker}f\) _and_ \(f(M)\) _are direct summands of_ \(M\) _for all_ \(f\in\operatorname{End}_{R}(M)\)_._
Proof.: (a)\(\Rightarrow\)(b) Let \(f_{*}\in\mathfrak{E}_{M}\). By hypothesis, there exists \(g\in\operatorname{End}_{R}(M)\) such that \(f=fgf\). Then \(f_{*}=(fgf)_{*}=f_{*}g_{*}f_{*}\). Thus \(\mathfrak{E}_{M}\) is a regular monoid.
(b)\(\Rightarrow\)(c) It is clear.
(c)\(\Rightarrow\)(d) It follows from Corollary 3.6.
(d)\(\Rightarrow\)(a) It follows from [6, Proposition 2.3].
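For example, if \(V\) is a vector space over a division ring, then \(\operatorname{Ker}f\) and \(f(V)\) are direct summands of \(V\) for every \(f\in\operatorname{End}(V)\), so \(V\) is endoregular by condition (d); the same argument applies to any semisimple module. In contrast, \(\mathbb{Z}\) is not an endoregular \(\mathbb{Z}\)-module, since the image \(2\mathbb{Z}\) of multiplication by \(2\) is not a direct summand of \(\mathbb{Z}\).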
We mention the following definitions taken from [9].
**Definition 3.10**.: Let \(\mathcal{L}\) be a complete modular lattice and \(\mathfrak{m}\subseteq\operatorname{End}_{lin}(\mathcal{L})\) be a submonoid. It is said that \(\mathcal{L}\) satisfies the \(\mathfrak{m}\)-\(C_{2}\)_condition_ if, for all \(a,x,x^{\prime}\in\mathcal{L}\) with \(x^{\prime}\) a complement of \(x\), whenever there is an isomorphism \(\theta:[\mathbf{0},x]\overset{\cong}{\to}[\mathbf{0},a]\) with \(\iota_{a}\theta\pi_{x}\in\mathfrak{m}\), then \(a\) is a complement.
**Definition 3.11**.: Let \(\mathcal{L}\) be a complete modular lattice, \(a,x\in\mathcal{L}\), and \(\mathfrak{m}\subseteq\mathrm{End}_{lin}(\mathcal{L})\) be a submonoid. It is said that \(\mathcal{L}\) satisfies the \(\mathfrak{m}\)-\(D_{2}\)_condition_ if, whenever there is an isomorphism \(\theta:[a,\mathbf{1}]\stackrel{\cong}{\rightarrow}[\mathbf{0},x]\) with \(x\) a complement in \(\mathcal{L}\) and \(\iota_{x}\theta\rho_{a}\in\mathfrak{m}\), then \(a\) is a complement.
**Theorem 3.12**.: _Let \(\mathcal{L}\) be a complete modular lattice and \(\mathfrak{m}\) be a submonoid of \(\mathrm{End}_{lin}(\mathcal{L})\) closed under complements. The following conditions are equivalent:_
1. \(\mathcal{L}\) _is_ \(\mathfrak{m}\)_-endoregular._
2. \(\mathcal{L}\) _is_ \(\mathfrak{m}\)_-Rickart and satisfies the condition_ \(\mathfrak{m}\)_-_\(C_{2}\)_._
3. \(\mathcal{L}\) _is dual-_\(\mathfrak{m}\)_-Rickart and satisfies the condition_ \(\mathfrak{m}\)_-_\(D_{2}\)_._
4. \(\ker_{\varphi}\) _and_ \(\varphi(\mathbf{1})\) _have a complement in_ \(\mathcal{L}\) _for every_ \(\varphi\in\mathfrak{m}\)_._
Proof.: (a)\(\Leftrightarrow\)(d) follows from Proposition 3.5.
(b)\(\Rightarrow\)(d) Let \(\varphi\in\mathfrak{m}\). Since \(\mathcal{L}\) is \(\mathfrak{m}\)-Rickart, \(\ker_{\varphi}\) is a complement. Let \(x\in\mathcal{L}\) be a complement of \(\ker_{\varphi}\). Then, there is an isomorphism \(\theta=\overline{\varphi}(\ker_{\varphi}\vee\_):[\mathbf{0},x]\to[\ker_{\varphi},\mathbf{1}]\to[\mathbf{0},\varphi(\mathbf{1})]\). We claim that \(\iota_{\varphi(\mathbf{1})}\theta\pi_{x}\in\mathfrak{m}\). Let \(y\in\mathcal{L}\). It follows that
\[\iota_{\varphi(\mathbf{1})}\theta\pi_{x}(y)=\iota_{\varphi(\mathbf{1})}\theta((y\vee\ker_{\varphi})\wedge x)=\iota_{\varphi(\mathbf{1})}\overline{\varphi}(\ker_{\varphi}\vee\_)((y\vee\ker_{\varphi})\wedge x)\]
\[=\iota_{\varphi(\mathbf{1})}\overline{\varphi}(y\vee\ker_{\varphi})=\varphi(y).\]
Therefore \(\iota_{\varphi(\mathbf{1})}\theta\pi_{x}=\varphi\in\mathfrak{m}\). This implies, by the \(\mathfrak{m}\)-\(C_{2}\) condition, that \(\varphi(\mathbf{1})\) is a complement in \(\mathcal{L}\).
(d)\(\Rightarrow\)(b) By hypothesis \(\mathcal{L}\) is \(\mathfrak{m}\)-Rickart. Let \(a,x,x^{\prime}\in\mathcal{L}\) with \(x^{\prime}\) a complement of \(x\). Suppose that there exists an isomorphism \(\theta:[\mathbf{0},x]\to[\mathbf{0},a]\) with \(\iota_{a}\theta\pi_{x}\in\mathfrak{m}\). Let \(\varphi\) denote the composition \(\iota_{a}\theta\pi_{x}\); then \(\varphi(\mathbf{1})=a\). By hypothesis \(a\) is a complement. Thus \(\mathcal{L}\) satisfies the \(\mathfrak{m}\)-\(C_{2}\) condition. (c)\(\Leftrightarrow\)(d) can be proved similarly.
**Corollary 3.13**.: _Let \(\mathcal{L}\) be a complete modular lattice and \(\mathfrak{m}\) be a submonoid of \(\mathrm{End}_{lin}(\mathcal{L})\) closed under complements. The following conditions are equivalent:_
1. \(\mathcal{L}\) _is_ \(\mathfrak{m}\)_-endoregular._
2. \(\mathcal{L}\) _is_ \(\mathfrak{m}\)_-Rickart and dual-_\(\mathfrak{m}\)_-Rickart._
Proof.: (a)\(\Rightarrow\)(b) follows from Theorem 3.12. For (b)\(\Rightarrow\)(a), we have that an \(\mathfrak{m}\)-Rickart lattice satisfies \(\mathfrak{m}\)-\(D_{2}\) condition [9, Proposition 3.42] and a dual-\(\mathfrak{m}\)-Rickart lattice satisfies \(\mathfrak{m}\)-\(C_{2}\) condition [9, Proposition 3.43].
**Corollary 3.14**.: _Let \(\mathcal{L}\) be an indecomposable modular lattice, that is, \(C(\mathcal{L})=\{\mathbf{0},\mathbf{1}\}\), and let \(\mathfrak{m}\subseteq\mathrm{End}_{lin}(\mathcal{L})\) be a submonoid. The following conditions are equivalent:_
1. \(\mathcal{L}\) _is_ \(\mathfrak{m}\)_-endoregular._
2. _Every_ \(0\neq\varphi\in\mathfrak{m}\) _has an inverse._
**Definition 3.15**.: A complete lattice \(\mathcal{L}\) is _compact_ if whenever \(\mathbf{1}=\bigvee_{i\in I}a_{i}\), there exists a finite subset \(F\subseteq I\) such that \(\mathbf{1}=\bigvee_{i\in F}a_{i}\). An element \(c\) in a complete lattice is _compact_ if \([\mathbf{0},c]\) is a compact lattice.
_Remark 3.16_.: Given an \(R\)-module \(M\), the lattice \(\Lambda(M)\) is compact if and only if \(M\) is finitely generated. Similarly, a submodule is compact if and only if it is finitely generated.
**Definition 3.17**.: A complete lattice \(\mathcal{L}\) is _von Neumann regular_ if every compact element of \(\mathcal{L}\) has a complement.
_Remark 3.18_.: For an \(R\)-module \(M\), the lattice \(\Lambda(M)\) is von Neumann regular if and only if every finitely generated (cyclic) submodule of \(M\) is a direct summand. In [12], A. Tuganbaev calls these modules _regular_. When \(M=R\), this definition agrees with that of a _von Neumann regular ring._
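Von Neumann regularity of \(\Lambda(M)\) and endoregularity of \(M\) are, in general, different conditions. For instance, \(\mathbb{Q}\) as a \(\mathbb{Z}\)-module is endoregular, since \(\operatorname{End}_{\mathbb{Z}}(\mathbb{Q})\cong\mathbb{Q}\) is a field, but \(\Lambda(\mathbb{Q})\) is not von Neumann regular: the compact (cyclic) element \(\mathbb{Z}\leq\mathbb{Q}\) has no complement, because any two nonzero submodules of \(\mathbb{Q}\) intersect nontrivially.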
**Lemma 3.19**.: _Let \(\mathcal{L}\) be a complete compact modular lattice. Then \(\varphi(\mathbf{1})\) is compact for every linear morphism \(\varphi:\mathcal{L}\to\mathcal{G}\)._
Proof.: Suppose \(\varphi(\mathbf{1})=\bigvee_{i\in I}a_{i}\). Consider the isomorphism \(\overline{\varphi}:[\ker_{\varphi},\mathbf{1}]\to[\mathbf{0},\varphi(\mathbf{ 1})]\). Then
\[\mathbf{1}=\overline{\varphi}^{-1}\varphi(\mathbf{1})=\overline{\varphi}^{-1} \left(\bigvee_{i\in I}a_{i}\right)=\bigvee_{i\in I}\overline{\varphi}^{-1}(a_ {i}).\]
By hypothesis, there exists a finite subset \(F\subseteq I\) such that \(\mathbf{1}=\bigvee_{i\in F}\overline{\varphi}^{-1}(a_{i})\). Applying \(\varphi\), we get that \(\varphi(\mathbf{1})=\bigvee_{i\in F}a_{i}\). Thus, \(\varphi(\mathbf{1})\) is compact.
Given a complete modular lattice \(\mathcal{L}\) and a submonoid \(\mathfrak{m}\subseteq\operatorname{End}_{lin}(\mathcal{L})\), it is said that an element \(a\in\mathcal{L}\) is \(\mathfrak{m}\)-\(\mathcal{L}\)_-generated_ if there exists a family of linear morphisms \(\{\varphi_{i}:\mathcal{L}\to[\mathbf{0},a]\}_{I}\) such that \(\iota_{a}\varphi_{i}\in\mathfrak{m}\) for all \(i\in I\) and \(\bigvee_{I}\varphi_{i}(\mathbf{1})=a\)[9, Definition 3.48].
**Proposition 3.20**.: _Let \(\mathcal{L}\) be a complete compact modular lattice and \(\mathfrak{m}\) be a submonoid of \(\operatorname{End}_{lin}(\mathcal{L})\) closed under complements. The following conditions are equivalent:_
1. \(\mathcal{L}\) _is von Neumann regular._
2. \(\mathcal{L}\) _is dual-_\(\mathfrak{m}\)_-Rickart, and every compact element is_ \(\mathfrak{m}\)_-_\(\mathcal{L}\)_-generated._
Proof.: (a)\(\Rightarrow\)(b) Let \(\varphi\in\mathfrak{m}\). It follows from Lemma 3.19 that \(\varphi(\mathbf{1})\) is compact. By hypothesis \(\varphi(\mathbf{1})\) has a complement; that is, \(\mathcal{L}\) is dual-\(\mathfrak{m}\)-Rickart. Now, let \(c\in\mathcal{L}\) be a compact element. By hypothesis, \(c\) has a complement, so \(\pi_{c}(\mathbf{1})=c\); moreover, since \(\mathfrak{m}\) is closed under complements, it contains all the projections (Remark 3.3), so \(\pi_{c}\in\mathfrak{m}\). Thus, \(c\) is \(\mathfrak{m}\)-\(\mathcal{L}\)-generated.
(b)\(\Rightarrow\)(a) Let \(c\in\mathcal{L}\) be a compact element. By hypothesis, there exists a family of linear morphisms \(\{\varphi_{i}:\mathcal{L}\to[\mathbf{0},c]\}_{I}\) such that \(\iota_{c}\varphi_{i}\in\mathfrak{m}\) for all \(i\in I\) and \(\bigvee_{i\in I}\varphi_{i}(\mathbf{1})=c\). Since \(c\) is compact, there exists a finite subset \(F\subseteq I\) such that \(c=\bigvee_{i\in F}\varphi_{i}(\mathbf{1})\). Each \(\varphi_{i}(\mathbf{1})=(\iota_{c}\varphi_{i})(\mathbf{1})\) is a complement in \(\mathcal{L}\) because \(\mathcal{L}\) is dual-\(\mathfrak{m}\)-Rickart, so it follows from [9, Proposition 3.24] that \(c\) is a complement.
**Corollary 3.21**.: _The following conditions are equivalent for a finitely generated module \(M\):_
1. \(M\) _is regular (in the sense of_ _[_12_]__)._
2. \(M\) _is dual-Rickart and generates all its cyclic submodules._
3. \(M\) _is dual-Rickart and generates all its finitely generated submodules._
Using Theorem 3.12, we get the following corollaries.
**Corollary 3.22**.: _Let \(\mathcal{L}\) be a complete compact modular lattice and \(\mathfrak{m}\) be a submonoid of \(\operatorname{End}_{lin}(\mathcal{L})\) closed under complements. The following conditions are equivalent:_
1. \(\mathcal{L}\) _is von Neumann regular and satisfies the condition_ \(\mathfrak{m}\)_-_\(D_{2}\)_._
2. \(\mathcal{L}\) _is_ \(\mathfrak{m}\)_-Rickart, dual-_\(\mathfrak{m}\)_-Rickart, and every compact element is_ \(\mathfrak{m}\)_-_\(\mathcal{L}\)_-generated._
3. \(\mathcal{L}\) _is_ \(\mathfrak{m}\)_-endoregular, and every compact element is_ \(\mathfrak{m}\)_-_\(\mathcal{L}\)_-generated._
**Corollary 3.23**.: _The following conditions are equivalent for a finitely generated module \(M\):_
1. \(M\) _is regular (in the sense of_ _[_12_]__) and satisfies the_ \((D_{2})\) _condition._
2. \(M\) _is Rickart, dual-Rickart, and generates all its cyclic submodules._
3. \(M\) _is endoregular and generates all its cyclic submodules._
Recall that a left ideal \(A\) in a monoid \(E\) is a subset such that \(EA\subseteq A\). If \(E\) is a monoid with zero, we also require that \(0\in A\). Right and two-sided ideals in a monoid are defined similarly.
**Lemma 3.24**.: _Let \(E\) be a regular monoid with zero, and let \(A\) be a left ideal of \(E\). If \(A^{2}=0\), then \(A=0\)._
Proof.: Let \(x\in A\). By hypothesis, there exists \(y\in E\) such that \(xyx=x\). Since \(A\) is a left ideal, \(yx\in A\), and hence \(x=x(yx)\in A^{2}=0\).
**Lemma 3.25**.: _Let \(\mathcal{L}\) be an \(\mathfrak{m}\)-endoregular lattice and \(\varepsilon^{2}=\varepsilon\in\mathfrak{m}\) be idempotent. Then, \(\varepsilon\) is central in \(\mathfrak{m}\) if and only if \(\pi_{\ker_{\varepsilon}}\varphi\varepsilon=0\) for all \(\varphi\in\mathfrak{m}\)._
Proof.: Set \(k=\ker_{\varepsilon}\) and suppose \(\pi_{k}\varphi\varepsilon=0\) for all \(\varphi\in\mathfrak{m}\). We claim that \(\varepsilon\varphi\pi_{k}=0\) for all \(\varphi\in\mathfrak{m}\). Let \(\varphi\in\mathfrak{m}\). By hypothesis, \(\varphi\varepsilon(a)\leq\varepsilon(\mathbf{1})\) for all \(a\in\mathcal{L}\). By Corollary 2.7, \(\varepsilon(\varphi\varepsilon(a))=\varphi\varepsilon(a)\) for all \(a\in\mathcal{L}\). Hence \(\varepsilon\varphi\varepsilon=\varphi\varepsilon\). It follows that \(\mathfrak{m}\varepsilon\subseteq\varepsilon\mathfrak{m}\). Thus, \(\mathfrak{m}(\varepsilon\mathfrak{m}\pi_{k})\subseteq\varepsilon\mathfrak{m} \pi_{k}\). Also, \((\varepsilon\mathfrak{m}\pi_{k})^{2}=0\). Since \(\mathfrak{m}\) is a regular monoid, then \(\varepsilon\mathfrak{m}\pi_{k}=0\) by Lemma 3.24, proving the claim.
Let \(a\in\mathcal{L}\) and \(\varphi\in\mathfrak{m}\). By [9, Proposition 2.14] and Corollary 2.7, \(a\leq\pi_{k}(a)\vee\varepsilon(a)\). Then \(\varphi(a)\leq\varphi\pi_{k}(a)\vee\varphi\varepsilon(a)\). Thus \(\varepsilon\varphi(a)\leq\varepsilon\varphi\pi_{k}(a)\vee\varepsilon\varphi \varepsilon(a)=\varepsilon\varphi\varepsilon(a)\). On the other hand, \((a\wedge k)\vee(a\wedge\varepsilon(\mathbf{1}))\leq a\) for all \(a\in\mathcal{L}\). Thus,
\[\varepsilon\varphi\varepsilon(a)=\varepsilon\varphi((a\lor k)\wedge k)\vee \varepsilon\varphi((a\lor k)\wedge\varepsilon(\mathbf{1}))\leq\varepsilon \varphi(a\lor k)=\varepsilon\varphi(a),\]
for all \(a\in\mathcal{L}\). Hence \(\varepsilon\varphi=\varepsilon\varphi\varepsilon=\varphi\varepsilon\), so \(\varepsilon\) is central in \(\mathfrak{m}\). Conversely, if \(\varepsilon\) is central in \(\mathfrak{m}\), then \(\pi_{\ker_{\varepsilon}}\varphi\varepsilon=\pi_{\ker_{\varepsilon}}\varepsilon\varphi=0\) for all \(\varphi\in\mathfrak{m}\), since \(\pi_{\ker_{\varepsilon}}\varepsilon=0\).
**Definition 3.26**.: Let \(\mathcal{L}\) be a bounded lattice and \(\mathfrak{m}\subseteq\operatorname{End}_{lin}(\mathcal{L})\) be a submonoid. \(\mathcal{L}\) is called \(\mathfrak{m}\)-_abelian_ if the idempotents in \(\mathfrak{m}\) are central in \(\mathfrak{m}\). If the monoid we are considering is \(\operatorname{End}_{lin}(\mathcal{L})\) we will omit the \(\mathfrak{m}\).
**Proposition 3.27**.: _Let \(\mathcal{L}\) be a complete modular lattice and \(\mathfrak{m}\) be a submonoid of \(\operatorname{End}_{lin}(\mathcal{L})\) closed under complements. The following conditions are equivalent:_
1. \(\mathcal{L}\) _is_ \(\mathfrak{m}\)_-Rickart, dual-_\(\mathfrak{m}\)_-Rickart, and_ \(\mathfrak{m}\)_-abelian._
2. \(\mathbf{0}=\varphi(\mathbf{1})\wedge\ker_{\varphi}\) _and_ \(\mathbf{1}=\ker_{\varphi}\vee\varphi(\mathbf{1})\) _for all_ \(\varphi\in\mathfrak{m}\)_._
Proof.: (a)\(\Rightarrow\)(b) Let \(\varphi\in\mathfrak{m}\). Let \(k^{\prime}\in\mathcal{L}\) be a complement of \(k=\ker_{\varphi}\). Then \(\mathbf{0}=\varphi(k)=\varphi\pi_{k}(\mathbf{1})=\pi_{k}\varphi(\mathbf{1})\). This implies that \(\varphi(\mathbf{1})\leq k^{\prime}\). Therefore \(\varphi(\mathbf{1})\wedge k\leq k^{\prime}\wedge k=\mathbf{0}\). Now, from Theorem 3.12 there exists \(\psi\in\mathfrak{m}\) such that \(\varphi\psi\varphi=\varphi\). Since \(\varphi(\mathbf{1})=(\varphi\psi)\varphi(\mathbf{1})=\varphi\varphi\psi(\mathbf{ 1})\), \(\mathbf{1}=\varphi\psi(\mathbf{1})\lor k\). On the other hand, let \(x\in\mathcal{L}\) be a complement of \(\varphi(\mathbf{1})\). Then,
\[\varphi\psi(\mathbf{1})=\varphi\psi(\varphi(\mathbf{1})\lor x)=\varphi\psi \varphi(\mathbf{1})\vee\varphi\psi(x)=\varphi(\mathbf{1})\vee\varphi\psi(x)= \varphi(\mathbf{1}\vee\psi(x))=\varphi(\mathbf{1}).\]
Thus, \(\mathbf{1}=\varphi(\mathbf{1})\lor k\).
(b)\(\Rightarrow\)(a) By Theorem 3.12, we have that \(\mathcal{L}\) is \(\mathfrak{m}\)-Rickart and dual-\(\mathfrak{m}\)-Rickart. Let \(\varepsilon,\varphi\in\mathfrak{m}\) with \(\varepsilon^{2}=\varepsilon\). Let \(k=\ker_{\varepsilon}\) and consider \(\alpha=\pi_{k}\varphi\varepsilon\in\mathfrak{m}\). Note that \(\alpha^{2}=0\) because \(\varepsilon\pi_{k}=0\), and by hypothesis \(\mathbf{1}=\alpha(\mathbf{1})\vee\ker_{\alpha}\). Hence \(\alpha(\mathbf{1})=\mathbf{0}\), that is, \(\mathbf{0}=\alpha(\mathbf{1})=\pi_{k}(\varphi\varepsilon(\mathbf{1}))\). Thus \(\varepsilon\) is central in \(\mathfrak{m}\) by Lemma 3.25, that is, \(\mathcal{L}\) is \(\mathfrak{m}\)-abelian.
**Corollary 3.28**.: _Let \(\mathcal{L}\) be a complete modular lattice, \(a\in\mathcal{L}\) be a complement, \(\mathfrak{m}\) be a submonoid of \(\operatorname{End}_{lin}(\mathcal{L})\), and let \(\mathfrak{n}\) be a submonoid of \(\operatorname{End}_{lin}([\mathbf{0},a])\) such that \(\iota_{a}\psi\pi_{a}\in\mathfrak{m}\) for every element \(\psi\in\mathfrak{n}\). Suppose that the monoids \(\mathfrak{m}\) and \(\mathfrak{n}\) are closed under complements. If \(\mathcal{L}\) is \(\mathfrak{m}\)-abelian-endoregular, then \([\mathbf{0},a]\) is an \(\mathfrak{n}\)-abelian-endoregular lattice._
Proof.: Let \(\psi\in\mathfrak{n}\). We have that \(\varphi=\iota_{a}\psi\pi_{a}\in\mathfrak{m}\). Then \(\mathbf{0}=\varphi(\mathbf{1})\wedge\ker_{\varphi}\) and \(\mathbf{1}=\ker_{\varphi}\vee\varphi(\mathbf{1})\) by Proposition 3.27. Note that \(\ker_{\varphi}=\ker_{\psi}\vee x\) where \(x\) is a complement of \(a\) in \(\mathcal{L}\) and \(\varphi(\mathbf{1})=\psi(a)\). Hence,
\[a=a\wedge(\ker_{\varphi}\vee\varphi(\mathbf{1}))=a\wedge(\ker_{\psi}\lor x \vee\psi(a))=(\ker_{\psi}\vee\psi(a))\vee(a\wedge x)=\ker_{\psi}\vee\psi(a).\]
On the other hand, \(\ker_{\psi}\wedge\psi(a)\leq\ker_{\varphi}\wedge\varphi(\mathbf{1})=\mathbf{0}\). Thus \([\mathbf{0},a]\) is an \(\mathfrak{n}\)-abelian-endoregular lattice by Proposition 3.27.
**Proposition 3.29**.: _Let \(\mathcal{L}\) be a complete modular lattice and \(\mathfrak{m}\) be a submonoid of \(\operatorname{End}_{lin}(\mathcal{L})\) closed under complements. The following conditions are equivalent:_
1. \(\mathcal{L}\) _is_ \(\mathfrak{m}\)_-abelian-endoregular._
2. \(\mathcal{L}\) _is_ \(\mathfrak{m}\)_-endoregular, and_ \(\varphi(a)\leq a\) _for every_ \(\mathfrak{m}\)_-_\(\mathcal{L}\)_-generated element_ \(a\in\mathcal{L}\) _and for all_ \(\varphi\in\mathfrak{m}\)_._
Proof.: (a)\(\Rightarrow\)(b) Let \(a\in\mathcal{L}\) be \(\mathfrak{m}\)-\(\mathcal{L}\)-generated, that is, there exists a family of linear morphisms \(\{\varphi_{i}:\mathcal{L}\to[\mathbf{0},a]\}_{I}\) such that \(\iota_{a}\varphi_{i}\in\mathfrak{m}\) for all \(i\in I\) and \(\bigvee_{i\in I}\varphi_{i}(\mathbf{1})=a\). Let \(\psi\in\mathfrak{m}\). Then \(\psi(a)=\psi\left(\bigvee_{i\in I}\varphi_{i}(\mathbf{1})\right)=\bigvee_{i \in I}\psi\varphi_{i}(\mathbf{1})\). Fix \(i\in I\). Then, there exists \(\varphi^{\prime}\in\mathfrak{m}\) such that \(\varphi_{i}\varphi^{\prime}\varphi_{i}=\varphi_{i}\). By hypothesis,
\[\psi\varphi_{i}(\mathbf{1})=\psi\varphi_{i}\varphi^{\prime}\varphi_{i}( \mathbf{1})=\varphi_{i}\varphi^{\prime}\psi\varphi_{i}(\mathbf{1})\leq\varphi_ {i}(\mathbf{1})\leq a.\]
Thus, \(\psi\varphi_{i}(\mathbf{1})\leq a\) for all \(i\in I\). This implies that \(\psi(a)=\bigvee_{i\in I}\psi\varphi_{i}(\mathbf{1})\leq a\).
(b)\(\Rightarrow\)(a) Let \(\varepsilon^{2}=\varepsilon\in\mathfrak{m}\). Then \(\varepsilon(\mathbf{1})\) is \(\mathfrak{m}\)-\(\mathcal{L}\)-generated. Let \(\varphi\in\mathfrak{m}\). Since \(\varphi\varepsilon(\mathbf{1})\leq\varepsilon(\mathbf{1})\), we have by Corollary 2.7 that \(\varepsilon\varphi\varepsilon(a)=\varphi\varepsilon(a)\) for all \(a\in\mathcal{L}\). Hence \(\pi_{\ker_{\varepsilon}}\varphi\varepsilon=0\). By Lemma 3.25, \(\varepsilon\) is central in \(\mathfrak{m}\).
Given a von Neumann regular ring \(R\), it is known that \(R\) is abelian (i.e. the idempotents are central) if and only if the lattice of left direct summands of \(R\) is Boolean [5, Theorem 3.4], [4, Proposition 3.3] (this result also holds for modules; see [10, Proposition 2.11]). For lattices, however, this result is no longer true in general, as the following example shows.
**Example 3.30**.: Let \(\mathcal{L}=\{\mathbf{0},a,b,\mathbf{1}\}\) be the lattice in which \(a\) and \(b\) are incomparable, \(a\wedge b=\mathbf{0}\), and \(a\vee b=\mathbf{1}\).
Then \(\mathcal{L}\) is a Boolean algebra, in particular, \(C(\mathcal{L})=\mathcal{L}\). Consider the following linear endomorphisms of \(\mathcal{L}\):
\[\begin{array}{lcl}\tau(\mathbf{0})=\mathbf{0}&\varphi(\mathbf{0})=\mathbf{0} &\psi(\mathbf{0})=\mathbf{0}\\ \tau(a)=b&\varphi(a)=a&\psi(a)=\mathbf{0}\\ \tau(b)=a&\varphi(b)=\mathbf{0}&\psi(b)=b\\ \tau(\mathbf{1})=\mathbf{1}&\varphi(\mathbf{1})=a&\psi(\mathbf{1})=b.\end{array}\]
Then \(\mathrm{End}_{lin}(\mathcal{L})=\{Id,0,\tau,\varphi,\psi,\tau\varphi,\varphi\tau\}\). Note that \(\varphi^{2}=\varphi\), \(\psi^{2}=\psi\) and \(\psi\varphi=\varphi\psi\). But \(\tau\varphi\neq\varphi\tau\). Thus, \(\mathcal{L}\) is an endoregular lattice which is not abelian. If we take \(\mathfrak{m}=\{Id,0,\varphi,\psi\}\), then \(\mathcal{L}\) is \(\mathfrak{m}\)-abelian-endoregular.
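Concretely, \(\varphi\) is an idempotent of \(\mathrm{End}_{lin}(\mathcal{L})\) which is not central: from the table above, \(\tau\varphi(b)=\tau(\mathbf{0})=\mathbf{0}\) while \(\varphi\tau(b)=\varphi(a)=a\), which witnesses \(\tau\varphi\neq\varphi\tau\).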
**Proposition 3.31**.: _Let \(\mathcal{L}\) be a complete modular lattice and \(\mathfrak{m}\) be a submonoid of \(\mathrm{End}_{lin}(\mathcal{L})\) closed under complements. Suppose \(\mathcal{L}\) is \(\mathfrak{m}\)-endoregular. The following conditions are equivalent:_
1. _Any two idempotents of_ \(\mathfrak{m}\) _commute._
2. \(C(\mathcal{L})\) _is Boolean._
Proof.: Note that \(C(\mathcal{L})\) is a sublattice of \(\mathcal{L}\) because \(\mathcal{L}\) is \(\mathfrak{m}\)-Rickart and dual-\(\mathfrak{m}\)-Rickart [9, Proposition 3.24].
(a)\(\Rightarrow\)(b) Let \(x,y\in C(\mathcal{L})\). Then \(\pi_{x}(y)=\pi_{x}\pi_{y}(\mathbf{1})=\pi_{y}\pi_{x}(\mathbf{1})=\pi_{y}(x) \leq x\wedge y\). We always have that \(x\wedge y\leq\pi_{x}(y)\). Hence \(\pi_{x}(y)=x\wedge y=\pi_{y}(x)\). It follows from [9, Proposition 3.29] that \(C(\mathcal{L})\) is a Boolean algebra.
(b)\(\Rightarrow\)(a) Let \(\alpha,\varepsilon\in\mathfrak{m}\) be two idempotents. We have that \(\mathbf{1}=\alpha(\mathbf{1})\vee\ker_{\alpha}\). By hypothesis, \(\varepsilon(\mathbf{1})=\varepsilon(\mathbf{1})\wedge(\alpha(\mathbf{1})\lor \ker_{\alpha})=(\varepsilon(\mathbf{1})\wedge\alpha(\mathbf{1}))\vee( \varepsilon(\mathbf{1})\wedge\ker_{\alpha})\). Therefore, \(\alpha\varepsilon(\mathbf{1})=\alpha(\varepsilon(\mathbf{1})\wedge\alpha( \mathbf{1}))=\varepsilon(\mathbf{1})\wedge\alpha(\mathbf{1})\leq\varepsilon( \mathbf{1})\). This implies that \(\varepsilon\alpha\varepsilon=\alpha\varepsilon\). Thus \(\pi_{\ker_{\varepsilon}}\alpha\varepsilon=0\). Analogously, \(\varepsilon\alpha\pi_{\ker_{\varepsilon}}=0\). Hence \(\alpha(\ker_{\varepsilon})\leq\ker_{\varepsilon}\). It follows that
\[\varepsilon\alpha(a)=\varepsilon\alpha(a\vee\ker_{\varepsilon})=\varepsilon \alpha(\varepsilon(a)\vee\ker_{\varepsilon})=\varepsilon\alpha\varepsilon(a)\]
for all \(a\in\mathcal{L}\). Thus \(\varepsilon\alpha=\varepsilon\alpha\varepsilon=\alpha\varepsilon\).
Recall that a _semiring_\(S\) is a set with two operations \(+\) and \(\cdot\) such that \((S,+)\) is a commutative monoid and \((S,\cdot)\) is a monoid such that \(a(b+c)=ab+ac\) and \((b+c)a=ba+ca\) for all \(a,b,c\in S\).
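For instance, every ring is a semiring, and the natural numbers \((\mathbb{N},+,\cdot)\) form a semiring which is not a ring. In the next corollary, the semiring structure is assumed on the submonoid \(\mathfrak{m}\) itself, with the multiplication given by composition.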
**Corollary 3.32**.: _Let \(\mathcal{L}\) be a complete modular lattice and \(\mathfrak{m}\) be a submonoid of \(\mathrm{End}_{lin}(\mathcal{L})\) closed under complements. Suppose \((\mathfrak{m},+,\circ)\) is a semiring and \(\mathcal{L}\) is \(\mathfrak{m}\)-endoregular. If \(C(\mathcal{L})\) is Boolean, then \(\mathcal{L}\) is \(\mathfrak{m}\)-abelian._
Proof.: By Proposition 3.31, any two idempotents in \(\mathfrak{m}\) commute. Let \(\varepsilon^{2}=\varepsilon\in\mathfrak{m}\) and \(\psi\in\pi_{\ker_{\varepsilon}}\mathfrak{m}\varepsilon\). Since \(\mathfrak{m}\) is a semiring, \(\varepsilon+\psi\in\mathfrak{m}\) is an idempotent. Therefore \(\psi+\varepsilon=\psi\varepsilon+\varepsilon=(\psi+\varepsilon)\varepsilon=\varepsilon(\psi+\varepsilon)=\varepsilon\psi+\varepsilon=\varepsilon\), because \(\varepsilon\psi=0\). This implies that \(\psi=\psi\varepsilon=\varepsilon\psi=0\). Thus \(\pi_{\ker_{\varepsilon}}\mathfrak{m}\varepsilon=0\). By Lemma 3.25, \(\varepsilon\) is central in \(\mathfrak{m}\).
**Lemma 3.33**.: _Let \(\mathcal{L}\) be a complete modular lattice and \(\mathfrak{m}\) be a submonoid of \(\mathrm{End}_{lin}(\mathcal{L})\). Suppose that \(\mathcal{L}\) is \(\mathfrak{m}\)-abelian-endoregular and that there exists a linear isomorphism \(\theta:[\mathbf{0},a]\to[\mathbf{0},b]\) with \(a,b\in C(\mathcal{L})\). If \(\iota_{b}\theta\pi_{a}\) and \(\iota_{a}\theta^{-1}\pi_{b}\) are in \(\mathfrak{m}\), then \(a=b\)._
Proof.: Since \(\mathcal{L}\) is \(\mathfrak{m}\)-abelian-endoregular,
\[\theta(a\wedge b)=\theta\pi_{a}\pi_{b}(\mathbf{1})=\pi_{b}\theta\pi_{a}( \mathbf{1})=\pi_{b}\theta(a)=\pi_{b}(b)=b=\theta(a).\]
It follows that \(a\wedge b=a\) because \(\theta\) is an isomorphism. Thus \(a\leq b\). Analogously, using the isomorphism \(\theta^{-1}\), we get that \(b\leq a\). Therefore, \(a=b\).
**Proposition 3.34**.: _Let \(\mathcal{L}\) be a complete modular lattice and \(\mathfrak{m}\) be a submonoid of \(\mathrm{End}_{lin}(\mathcal{L})\). Suppose that \(\mathcal{L}\) is \(\mathfrak{m}\)-endoregular. The following conditions are equivalent:_
1. \(\mathcal{L}\) _is_ \(\mathfrak{m}\)_-abelian._
2. _If_ \(x,y\in\mathcal{L}\) _are_ \(\mathfrak{m}\)_-_\(\mathcal{L}\)_-generated and there exists a linear isomorphism_ \(\theta:[\mathbf{0},x]\to[\mathbf{0},y]\)_, then_ \(x=y\)_._
3. _If_ \(x,y\in\mathcal{L}\) _are_ \(\mathfrak{m}\)_-_\(\mathcal{L}\)_-generated and_ \(x\wedge y=\mathbf{0}\)_, then there are no nonzero linear morphisms from_ \([\mathbf{0},x]\) _to_ \([\mathbf{0},y]\)_._
Proof.: (a)\(\Rightarrow\)(b) Since \(x\) is \(\mathfrak{m}\)-\(\mathcal{L}\)-generated, there exists a nonzero linear morphism \(\varphi:\mathcal{L}\to[\mathbf{0},x]\) with \(\iota_{x}\varphi\in\mathfrak{m}\). Let \(a=\varphi(\mathbf{1})\) and \(b=\theta(a)\). Then \(\theta|:[\mathbf{0},a]\to[\mathbf{0},b]\) is a linear isomorphism. Since \(\mathcal{L}\) is \(\mathfrak{m}\)-endoregular, \(a\in C(\mathcal{L})\) and hence \(b\in C(\mathcal{L})\). It follows from Lemma 3.33, that \(\varphi(\mathbf{1})=a=b=\theta(a)\leq y\). Since \(x\) is \(\mathfrak{m}\)-\(\mathcal{L}\)-generated, \(x\leq y\). Analogously, \(y\leq x\).
(b)\(\Rightarrow\)(c) Suppose that \(\varphi:[\mathbf{0},x]\to[\mathbf{0},y]\) is a nonzero linear morphism. Since \(x\) is \(\mathfrak{m}\)-\(\mathcal{L}\)-generated, there exists a linear morphism \(\psi:\mathcal{L}\to[\mathbf{0},x]\) with \(\iota_{x}\psi\in\mathfrak{m}\). Since \(\mathcal{L}\) is \(\mathfrak{m}\)-endoregular, \(\psi(\mathbf{1})\in C(\mathcal{L})\) and \(\psi(\mathbf{1})\wedge\ker_{\varphi}\) is a complement in \([\mathbf{0},\psi(\mathbf{1})]\). In fact, \(\mathbf{0}=(\psi(\mathbf{1})\wedge\ker_{\varphi})\wedge z\) and \(\psi(\mathbf{1})=(\psi(\mathbf{1})\wedge\ker_{\varphi})\lor z\) with \([\mathbf{0},z]\cong[\mathbf{0},\varphi\psi(\mathbf{1})]\). By hypothesis, \(z=\varphi\psi(\mathbf{1})\leq x\wedge y=\mathbf{0}\). Thus, \(\varphi=0\).
(c)\(\Rightarrow\)(a) Let \(\varepsilon^{2}=\varepsilon\in\mathfrak{m}\). Then \(\mathbf{1}=\ker_{\varepsilon}\vee\varepsilon(\mathbf{1})\) and \(\mathbf{0}=\ker_{\varepsilon}\wedge\varepsilon(\mathbf{1})\). By hypothesis, there are no nonzero linear morphisms from \([\mathbf{0},\varepsilon(\mathbf{1})]\) to \([\mathbf{0},\ker_{\varepsilon}]\). It follows that \(\pi_{\ker_{\varepsilon}}\varphi\varepsilon=0\) for all \(\varphi\in\mathfrak{m}\). By Lemma 3.25, \(\varepsilon\) is central in \(\mathfrak{m}\).
**Corollary 3.35**.: _Let \(\mathcal{L}\) be a complete modular lattice and \(\mathfrak{m}\) be a submonoid of \(\operatorname{End}_{lin}(\mathcal{L})\). Suppose that \(\mathcal{L}\) is \(\mathfrak{m}\)-abelian-endoregular. The following conditions are equivalent for \(\varphi\in\mathfrak{m}\):_
1. \(\varphi\) _is injective._
2. \(\varphi\) _is an isomorphism._
3. \(\varphi\) _is surjective._
Proof.: (a)\(\Leftrightarrow\)(b) Suppose that \(\varphi\) is injective. Then, \(\varphi:[\mathbf{0},\mathbf{1}]\to[\mathbf{0},\varphi(\mathbf{1})]\) is a linear isomorphism. By Proposition 3.34, \(\mathbf{1}=\varphi(\mathbf{1})\). The converse is obvious.
(b)\(\Leftrightarrow\)(c) It follows from Proposition 3.27.
**Definition 3.36**.: A bounded lattice \(\mathcal{L}\) is called _Hopfian_ (resp. _cohopfian_) if every injective (resp. surjective) linear endomorphism \(\varphi\in\operatorname{End}_{lin}(\mathcal{L})\) is an isomorphism.
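For example, using the isomorphism \(\overline{\varphi}:[\ker_{\varphi},\mathbf{1}]\to[\mathbf{0},\varphi(\mathbf{1})]\), one can check that every finite bounded lattice is both Hopfian and cohopfian: if \(\varphi\) is injective, then \(\mathcal{L}\cong[\mathbf{0},\varphi(\mathbf{1})]\), and finiteness forces \(\varphi(\mathbf{1})=\mathbf{1}\); dually, if \(\varphi\) is surjective, then \([\ker_{\varphi},\mathbf{1}]\cong\mathcal{L}\) forces \(\ker_{\varphi}=\mathbf{0}\).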
**Corollary 3.37** ([6, Remark 2.23(ii)]).: _Every abelian endoregular module is Hopfian and cohopfian._
Proof.: Suppose that \(f:M\to M\) is a monomorphism. By Corollary 3.35, \(f_{*}:\Lambda(M)\to\Lambda(M)\) is an isomorphism. Let \(x\in M\) and consider \(Rx\leq M\). Hence there exists \(N\leq M\) such that \(f_{*}(N)=Rx\), that is, \(f(N)=Rx\). Therefore, there is \(n\in N\) such that \(f(n)=x\). This implies that \(f\) is surjective and so \(f\) is an isomorphism. The other condition is analogous.
For a complete lattice \(\mathcal{L}\), its _radical_ is defined as \(\operatorname{Rad}(\mathcal{L})=\bigwedge\{c\in\mathcal{L}\mid c\text{ is a coatom}\}\). In a dual way, its _socle_ is defined as \(\operatorname{Soc}(\mathcal{L})=\bigvee\{a\in\mathcal{L}\mid a\text{ is an atom}\}\).
**Corollary 3.38**.: _Let \(\mathcal{L}\) be a complete modular lattice with \(\operatorname{Rad}(\mathcal{L})\neq\mathcal{L}\). If \(\mathcal{L}\) is abelian endoregular, then \(\mathcal{L}\) has at most one atom. Moreover, if \(\mathcal{L}\) has an atom \(a\), then there exists \(c\in\mathcal{L}\) such that \(a\wedge c=\mathbf{0}\), \(a\lor c=\mathbf{1}\), and \(\operatorname{Soc}([\mathbf{0},c])=\mathbf{0}\)._
Proof.: If \(\mathcal{L}\) has an atom \(a\), then there is a coatom \(c\in\mathcal{L}\) and a linear morphism \(\varphi:\mathcal{L}\to\mathcal{L}\) such that \(\ker_{\varphi}=c\) and \(\varphi(\mathbf{1})=a\). This implies that every atom in \(\mathcal{L}\) is \(\mathcal{L}\)-generated. By Proposition 3.34, there is at most one atom. Now suppose that there is one atom \(a\in\mathcal{L}\). By the above, there exists a coatom \(c\in\mathcal{L}\) such that \(\mathbf{1}=c\lor a\) and \(\mathbf{0}=c\wedge a\). Since \(a\) is the unique atom in \(\mathcal{L}\), \(\operatorname{Soc}([\mathbf{0},c])=\mathbf{0}\).
**Corollary 3.39**.: _Let \(\mathcal{L}\) be a finite complete modular lattice. Then \(\mathcal{L}\) is abelian endoregular if and only if \(\mathcal{L}\cong 2\)._
**Lemma 3.40**.: _Let \(\mathcal{L}\) be an upper-continuous complete modular lattice and \(\{a_{i}\}_{I}\) be an independent family of elements of \(\mathcal{L}\) such that \(\bigvee_{i\in I}a_{i}=\mathbf{1}\). Suppose that each interval \([\mathbf{0},a_{i}]\) has a decomposition \(a_{i}=b_{i}\lor c_{i}\) and \(b_{i}\wedge c_{i}=\mathbf{0}\). Then \(\bigvee_{i\in I}b_{i}\) is a complement of \(\bigvee_{i\in I}c_{i}\) in \(\mathcal{L}\)._
Proof.: It is clear that \(\left(\bigvee_{i\in I}b_{i}\right)\vee\left(\bigvee_{i\in I}c_{i}\right)=\mathbf{1}\). On the other hand, since each \(c_{i}\leq a_{i}\), the set \(\{c_{i}\}_{I}\) is independent in \(\mathcal{L}\). By modularity,
\[a_{j}\wedge\left(\bigvee_{i\in I}c_{i}\right)=c_{j}\vee\left(a_{j}\wedge \bigvee_{i\neq j}c_{i}\right)\leq c_{j}\vee\left(a_{j}\wedge\bigvee_{i\neq j} a_{i}\right)=c_{j}.\]
Then
\[b_{j}\wedge\left(\bigvee_{i\in I}c_{i}\right)=b_{j}\wedge a_{j}\wedge\left( \bigvee_{i\in I}c_{i}\right)\leq b_{j}\wedge c_{j}=\mathbf{0}\]
for all \(j\in I\). It follows from [3, Proposition 6.1] that \(\{b_{j}\}\cup\{c_{i}\}_{I}\) is independent in \(\mathcal{L}\) for each \(j\in I\). Let \(\{b_{1},...,b_{k}\}\) be a finite subset of \(\{b_{i}\}_{I}\). We have that \(\{b_{1}\}\cup\{c_{i}\}_{I}\) is independent in \(\mathcal{L}\). Now,
\[b_{2}\wedge\left(b_{1}\vee\bigvee_{i\in I}c_{i}\right)=b_{2}\wedge\left(a_{1} \vee\bigvee_{i\neq 1,2}c_{i}\right)=(b_{2}\wedge a_{2})\wedge\left(a_{1} \vee\bigvee_{i\neq 1}c_{i}\right)\]
\[=b_{2}\wedge\left(c_{2}\vee\left(a_{2}\wedge\left(a_{1}\vee\bigvee_{i\neq 1,2}c_{i}\right)\right)\right)=b_{2}\wedge c_{2}=\mathbf{0}\]
because
\[a_{2}\wedge\left(a_{1}\vee\bigvee_{i\neq 1,2}c_{i}\right)\leq a_{2}\wedge \left(a_{1}\vee\bigvee_{i\neq 1,2}a_{i}\right)=\mathbf{0}.\]
Thus \(\{b_{2},b_{1}\}\cup\{c_{i}\}_{I}\) is independent in \(\mathcal{L}\). By induction, \(\{b_{1},...,b_{k}\}\cup\{c_{i}\}_{I}\) is independent in \(\mathcal{L}\). It follows that \(\left(\bigvee_{i\in F}b_{i}\right)\wedge\left(\bigvee_{i\in I}c_{i}\right)=\mathbf{0}\) for every finite subset \(F\subseteq I\) by [3, Lemma 6.2].
Let \(X\) be the set of finite joins of elements of \(\{b_{i}\}_{I}\), that is, \(x\in X\) if and only if \(x=b_{i_{1}}\vee\cdots\lor b_{i_{k}}\). Then \(X\) is directed and \(\bigvee X=\bigvee_{i\in I}b_{i}\). Note that \(x\wedge\left(\bigvee_{i\in I}c_{i}\right)=\mathbf{0}\) for all \(x\in X\). Since \(\mathcal{L}\) is upper-continuous,
\[\left(\bigvee_{i\in I}b_{i}\right)\wedge\left(\bigvee_{i\in I}c_{i}\right)= \left(\bigvee X\right)\wedge\left(\bigvee_{i\in I}c_{i}\right)=\bigvee_{x\in X }\left(x\wedge\left(\bigvee_{i\in I}c_{i}\right)\right)=\mathbf{0}.\]
Recall that an element \(a\in\mathcal{L}\) is _fully invariant_ if \(\varphi(a)\leq a\) for all \(\varphi\in\mathrm{End}_{lin}(\mathcal{L})\).
**Proposition 3.41**.: _Suppose \(\mathcal{L}\) is an upper-continuous complete modular lattice. Let \(\{a_{i}\}_{I}\) be an independent family of fully invariant elements of \(\mathcal{L}\) such that \(\bigvee_{i\in I}a_{i}=\mathbf{1}\). The following conditions are equivalent:_
1. \(\mathcal{L}\) _is endoregular._
2. \([\mathbf{0},a_{i}]\) _is endoregular for all_ \(i\in I\)_._
Proof.: (a)\(\Rightarrow\)(b) It follows from [9, Proposition 3.13 and Proposition 3.14] and Theorem 3.12.
(b)\(\Rightarrow\)(a) Let \(\varphi:\mathcal{L}\to\mathcal{L}\) be a linear morphism. For any \(i\in I\), \(\varphi(a_{i})\leq a_{i}\). This implies that \(\varphi|:[\mathbf{0},a_{i}]\to[\mathbf{0},a_{i}]\) is a linear morphism. Therefore, \(\ker_{\varphi}\wedge a_{i}\) is a complement in \([\mathbf{0},a_{i}]\). On the other hand,
\[\varphi(\pi_{a_{i}}(\ker_{\varphi}))=\varphi\left(\left(\ker_{\varphi}\vee \bigvee_{i\neq j}a_{j}\right)\wedge a_{i}\right)\leq\varphi\left(\ker_{\varphi} \vee\bigvee_{i\neq j}a_{j}\right)=\varphi\left(\bigvee_{i\neq j}a_{j}\right).\]
Hence,
\[\varphi(\pi_{a_{i}}(\ker_{\varphi}))\leq\varphi\left(\bigvee_{i\neq j}a_{j} \right)\wedge\varphi(a_{i})\leq\left(\bigvee_{i\neq j}a_{j}\right)\wedge a_{i}= \mathbf{0}.\]
Thus, \(\pi_{a_{i}}(\ker_{\varphi})\leq\ker_{\varphi}\wedge a_{i}\) for all \(i\in I\). This implies that \(\ker_{\varphi}=\bigvee_{i\in I}\ker_{\varphi}\wedge a_{i}\). Then, \(\ker_{\varphi}\) is a complement in \([\mathbf{0},\bigvee_{i\in I}a_{i}]=\mathcal{L}\) by Lemma 3.40. On the other hand, \(\varphi(a_{i})\) is complemented in \([\mathbf{0},a_{i}]\) for all \(i\in I\). Therefore \(\varphi(\mathbf{1})=\varphi\left(\bigvee_{i\in I}a_{i}\right)=\bigvee_{i\in I}\varphi(a_{i})\) is complemented in \(\mathcal{L}\) by Lemma 3.40. It follows from Theorem 3.12 that \(\mathcal{L}\) is endoregular.
**Corollary 3.42**.: _Suppose \(\mathcal{L}\) is an upper-continuous complete modular lattice. Let \(\{a_{i}\}_{I}\) be an independent family of elements of \(\mathcal{L}\) such that \(\bigvee_{i\in I}a_{i}=\mathbf{1}\). The following conditions are equivalent:_
1. \(\mathcal{L}\) _is abelian endoregular._
2. \([\mathbf{0},a_{i}]\) _is abelian endoregular and_ \(a_{i}\) _is fully invariant in_ \(\mathcal{L}\) _for all_ \(i\in I\)_._
Proof.: (a)\(\Rightarrow\)(b) It follows from Corollary 3.28 and Proposition 3.29.
(b)\(\Rightarrow\)(a) Given \(\varphi:\mathcal{L}\to\mathcal{L}\), we have a linear morphism \(\varphi_{i}=\varphi|:[\mathbf{0},a_{i}]\to[\mathbf{0},a_{i}]\) with \(\ker_{\varphi_{i}}=\ker_{\varphi}\wedge a_{i}\) and \(\varphi_{i}(a_{i})=\varphi(a_{i})\) for all \(i\in I\). By Proposition 3.27, \(a_{i}=\ker_{\varphi_{i}}\vee\varphi_{i}(a_{i})=(\ker_{\varphi}\wedge a_{i})\vee\varphi(a_{i})\) and \(\mathbf{0}=\ker_{\varphi_{i}}\wedge\varphi_{i}(a_{i})=(\ker_{\varphi}\wedge a_{i})\wedge\varphi(a_{i})\). It follows from Lemma 3.40 that \(\ker_{\varphi}=\bigvee_{i\in I}\ker_{\varphi}\wedge a_{i}\) is a complement of \(\varphi(\mathbf{1})=\bigvee_{i\in I}\varphi(a_{i})\) in \(\mathcal{L}\). Thus \(\mathbf{1}=\ker_{\varphi}\vee\varphi(\mathbf{1})\) and \(\mathbf{0}=\ker_{\varphi}\wedge\varphi(\mathbf{1})\). By Proposition 3.27, \(\mathcal{L}\) is abelian endoregular.
## 4. Regular quotient monoids of linear endomorphisms
Recall that an element \(x\) in a bounded lattice \(\mathcal{L}\) is _superfluous_ if, whenever \(x\lor y=\mathbf{1}\), then \(y=\mathbf{1}\). Equivalently, \(x\) is superfluous in \(\mathcal{L}\) if \(x\) is essential in \(\mathcal{L}^{op}\).
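For example, in a bounded chain every element different from \(\mathbf{1}\) is superfluous and every element different from \(\mathbf{0}\) is essential, whereas in \(\Lambda(\mathbb{Z})\) the only superfluous element is \(\mathbf{0}\), while every nonzero subgroup \(n\mathbb{Z}\) is essential.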
**Definition 4.1**.: Let \(\mathcal{L}\) be a complete lattice and let \(\mathfrak{m}\) be a submonoid with zero of \(\operatorname{End}_{lin}(\mathcal{L})\). The lattice
* \(\mathcal{L}\) is called \(\mathfrak{m}\)-\(\mathcal{K}\)-_extending_ if for every \(\varphi\in\mathfrak{m}\), there exists \(c\in C(\mathcal{L})\) such that \(\ker_{\varphi}\) is essential in \([\mathbf{0},c]\).
* \(\mathcal{L}\) is called \(\mathfrak{m}\)-\(\mathcal{T}\)_-lifting_ if for every \(\varphi\in\mathfrak{m}\), there exists \(c\in C(\mathcal{L})\) with complement \(c^{\prime}\) such that \(c\leq\varphi(\mathbf{1})\) and \(\varphi(\mathbf{1})\wedge c^{\prime}\) is superfluous in \([\mathbf{0},c^{\prime}]\).
If the submonoid we are considering is \(\operatorname{End}_{lin}(\mathcal{L})\), we will omit the \(\mathfrak{m}\).
The notions in Definition 4.1 are related to those given in [9, Definition 4.2] as the following results show.
**Proposition 4.2**.: _Let \(\mathcal{L}\) be a complete modular lattice and let \(\mathfrak{m}\) be a submonoid with zero of \(\operatorname{End}_{lin}(\mathcal{L})\) containing all the projections. The following conditions are equivalent:_
1. \(\mathcal{L}\) _is_ \(\mathfrak{m}\)_-Rickart._
2. \(\mathcal{L}\) _is_ \(\mathfrak{m}\)_-_\(\mathcal{K}\)_-extending and_ \(\mathfrak{m}\)_-_\(\mathcal{K}\)_-nonsingular._
Proof.: (a)\(\Rightarrow\)(b). It is clear that every \(\mathfrak{m}\)-Rickart lattice is \(\mathfrak{m}\)-\(\mathcal{K}\)-extending and it is \(\mathfrak{m}\)-\(\mathcal{K}\)-nonsingular by [9, Lemma 4.14].
(b)\(\Rightarrow\)(a). Let \(\varphi\in\mathfrak{m}\). By hypothesis, there exists \(c\in C(\mathcal{L})\) such that \(\ker_{\varphi}\) is essential in \([\mathbf{0},c]\). Let \(c^{\prime}\) be a complement of \(c\) and \(\psi=\varphi\pi_{c}\in\mathfrak{m}\). Then \(\ker_{\psi}=\ker_{\varphi}\lor c^{\prime}\), which is essential in \(\mathcal{L}\). Since \(\mathcal{L}\) is \(\mathfrak{m}\)-\(\mathcal{K}\)-nonsingular, this implies that \(\psi=0\), and hence \(c\leq\ker_{\varphi}\); therefore \(c=\ker_{\varphi}\). That is, \(\mathcal{L}\) is \(\mathfrak{m}\)-Rickart.
**Proposition 4.3**.: _Let \(\mathcal{L}\) be a complete modular lattice and let \(\mathfrak{m}\) be a submonoid with zero of \(\operatorname{End}_{lin}(\mathcal{L})\) containing all the projections. The following conditions are equivalent:_
1. \(\mathcal{L}\) _is dual-_\(\mathfrak{m}\)_-Rickart._
2. \(\mathcal{L}\) _is_ \(\mathfrak{m}\)_-_\(\mathcal{T}\)_-lifting and_ \(\mathfrak{m}\)_-_\(\mathcal{T}\)_-nonsingular._
Proof.: (a)\(\Rightarrow\)(b) It is clear that every dual-\(\mathfrak{m}\)-Rickart lattice is \(\mathfrak{m}\)-\(\mathcal{T}\)-lifting and it is \(\mathfrak{m}\)-\(\mathcal{T}\)-nonsingular by [9, Lemma 4.16].
(b)\(\Rightarrow\)(a) Let \(\varphi\in\mathfrak{m}\). By hypothesis, there exists \(c\in C(\mathcal{L})\) with complement \(c^{\prime}\) such that \(c\leq\varphi(\mathbf{1})\) and \(\varphi(\mathbf{1})\wedge c^{\prime}\) superfluous in \([\mathbf{0},c^{\prime}]\). We have that \(\pi_{c^{\prime}}\varphi\in\mathfrak{m}\) and \(\pi_{c^{\prime}}(\varphi(\mathbf{1}))=\varphi(\mathbf{1})\wedge c^{\prime}\). Since \(\mathcal{L}\) is \(\mathfrak{m}\)-\(\mathcal{T}\)-nonsingular, \(\pi_{c^{\prime}}\varphi=0\), thus \(\varphi(\mathbf{1})\leq c\). Hence \(\mathcal{L}\) is dual-\(\mathfrak{m}\)-Rickart.
**Definition 4.4**.: Let \(M\) be an \(R\)-module.
* \(M\) is called \(\mathcal{K}\)_-extending_ if \(\Lambda(M)\) is \(\mathfrak{E}_{M}\)-\(\mathcal{K}\)-extending.
* \(M\) is called \(\mathcal{T}\)_-lifting_ if \(\Lambda(M)\) is \(\mathfrak{E}_{M}\)-\(\mathcal{T}\)-lifting.
_Remark 4.5_.: It follows that an \(R\)-module \(M\) is \(\mathcal{K}\)-extending if and only if for every \(f\in\operatorname{End}_{R}(M)\) there exists a direct summand \(N\) of \(M\) such that \(\operatorname{Ker}f\leq^{\text{ess}}N\). On the other hand, \(M\) is \(\mathcal{T}\)-lifting if and only if for every \(f\in\operatorname{End}_{R}(M)\) there exists a decomposition \(M=N\oplus L\) such that \(N\leq f(M)\) and \(f(M)\cap L<<L\).
**Corollary 4.6**.: _The following conditions are equivalent for an \(R\)-module \(M\):_
1. \(M\) _is Rickart._
2. \(M\) _is_ \(\mathcal{K}\)_-extending and_ \(\mathcal{K}\)_-nonsingular._
**Corollary 4.7**.: _The following conditions are equivalent for an \(R\)-module \(M\):_
1. \(M\) _is dual-Rickart._
2. \(M\) _is_ \(\mathcal{T}\)_-lifting and_ \(\mathcal{T}\)_-nonsingular._
**Definition 4.8**.: Let \(\mathcal{L}\) be a complete modular lattice and \(\mathfrak{m}\) a submonoid of \(\operatorname{End}_{lin}(\mathcal{L})\). We define the following relations on \(\mathfrak{m}\):
\(\varphi\equiv_{\Delta}\psi\ \Leftrightarrow\ \text{ there exists $x$ essential in $\mathcal{L}$ \ such that $\varphi(y)=\psi(y)$ \ for all $y\leq x$}\).
\(\varphi\equiv^{\nabla}\psi\ \Leftrightarrow\ \text{ there exists $x$ superfluous in $\mathcal{L}$ \ such that $\varphi(a)\lor x=\psi(a)\lor x$ \ for all $a\in\mathcal{L}$}\).
**Lemma 4.9**.: _Let \(\mathcal{L}\) be a complete modular lattice and \(\varphi\in\operatorname{End}_{lin}(\mathcal{L})\)._
1. _If_ \(x\in\mathcal{L}\) _is essential, then_ \(w=\bigvee\{a\in\mathcal{L}\mid\varphi(a)\leq x\}\) _is essential in_ \(\mathcal{L}\)_._
2. _If_ \(x\in\mathcal{L}\) _is superfluous, then_ \(\varphi(x)\) _is superfluous in_ \(\mathcal{L}\)_._
Proof.: _1._ Let \(w=\bigvee\{a\in\mathcal{L}\mid\varphi(a)\leq x\}\). Note that \(\ker_{\varphi}\leq w\) and \(\varphi(w)\leq x\wedge\varphi(\mathbf{1})\). On the other hand, \(\overline{\varphi}^{-1}(x\wedge\varphi(\mathbf{1}))\leq w\). This implies that \(\overline{\varphi}^{-1}(x\wedge\varphi(\mathbf{1}))=w\). Since \(x\) is essential in \(\mathcal{L}\), \(x\wedge\varphi(\mathbf{1})\) is essential in \([\mathbf{0},\varphi(\mathbf{1})]\). This implies that \(w\) is essential
in \([\ker_{\varphi},\mathbf{1}]\). Let \(y\in\mathcal{L}\) such that \(w\wedge y=\mathbf{0}\). Then \(w\wedge(y\vee\ker_{\varphi})=\ker_{\varphi}\). Hence \(y\vee\ker_{\varphi}=\ker_{\varphi}\), that is, \(y\leq\ker_{\varphi}\). Therefore, \(y\leq w\) and so \(y=\mathbf{0}\). Thus, \(w\) is essential in \(\mathcal{L}\).
_2._ Since \(x\) is superfluous in \(\mathcal{L}\), \(x\vee\ker_{\varphi}\) is superfluous in \([\ker_{\varphi},\mathbf{1}]\). This implies that \(\varphi(x)\) is superfluous in \([\mathbf{0},\varphi(\mathbf{1})]\). Let \(y\in\mathcal{L}\) such that \(\varphi(x)\lor y=\mathbf{1}\). Then \(\varphi(x)\vee(\varphi(\mathbf{1})\wedge y)=\varphi(\mathbf{1})\wedge(\varphi (x)\lor y)=\varphi(\mathbf{1})\). This implies that \(\varphi(\mathbf{1})\wedge y=\varphi(\mathbf{1})\), that is, \(\varphi(\mathbf{1})\leq y\). Therefore, \(\mathbf{1}=\varphi(x)\lor y=y\). Thus \(\varphi(x)\) is superfluous in \(\mathcal{L}\).
**Lemma 4.10**.: _Let \(\mathcal{L}\) be a complete modular lattice. The relations \(\equiv_{\Delta}\) and \(\equiv^{\nabla}\) are congruences on any submonoid \(\mathfrak{m}\subseteq\mathrm{End}_{lin}(\mathcal{L})\)._
Proof.: \((\equiv_{\Delta})\). Let \(\varphi,\psi,\sigma\in\mathfrak{m}\). We have that \(\varphi\equiv_{\Delta}\varphi\) because \(\varphi(y)=\varphi(y)\) for all \(y\leq\mathbf{1}\) and \(\mathbf{1}\) is essential in \(\mathcal{L}\). It is clear that \(\equiv_{\Delta}\) is symmetric. Now, suppose that \(\varphi\equiv_{\Delta}\psi\) and \(\psi\equiv_{\Delta}\sigma\). Then there exist \(x\) and \(w\) essential in \(\mathcal{L}\) such that \(\varphi(y)=\psi(y)\) for all \(y\leq x\) and \(\psi(v)=\sigma(v)\) for all \(v\leq w\). Since \(x\) and \(w\) are essential in \(\mathcal{L}\), so is \(x\wedge w\). Let \(z\leq x\wedge w\). Then \(z\leq x\) and \(z\leq w\). Therefore, \(\varphi(z)=\psi(z)=\sigma(z)\). Thus \(\varphi\equiv_{\Delta}\sigma\). This proves that \(\equiv_{\Delta}\) is an equivalence relation.
Now, suppose that \(\varphi\equiv_{\Delta}\psi\). Then there exists \(x\) essential in \(\mathcal{L}\) such that \(\varphi(y)=\psi(y)\) for all \(y\leq x\). Let \(y\leq x\), then \(\sigma\varphi(y)=\sigma\psi(y)\). Therefore \(\sigma\varphi\equiv_{\Delta}\sigma\psi\). Let \(w=\bigvee\{a\in\mathcal{L}\mid\sigma(a)\leq x\}\). By Lemma 4.9, \(w\) is essential in \(\mathcal{L}\). Note that \(\sigma(w)\leq x\). Let \(v\leq w\). Then \(\sigma(v)\leq\sigma(w)\leq x\). Hence \(\varphi(\sigma(v))=\psi(\sigma(v))\). Thus \(\varphi\sigma\equiv_{\Delta}\psi\sigma\).
\((\equiv^{\nabla})\). Let \(\varphi,\psi,\sigma\in\mathfrak{m}\). We have that \(\varphi\equiv^{\nabla}\varphi\) because \(\varphi(a)\vee\mathbf{0}=\varphi(a)\vee\mathbf{0}\) for all \(a\in\mathcal{L}\). It is clear that \(\equiv^{\nabla}\) is symmetric. Now, suppose that \(\varphi\equiv^{\nabla}\psi\) and \(\psi\equiv^{\nabla}\sigma\). Then there exist \(x\) and \(y\) superfluous in \(\mathcal{L}\) such that \(\varphi(a)\lor x=\psi(a)\lor x\) and \(\psi(a)\lor y=\sigma(a)\lor y\) for all \(a\in\mathcal{L}\). Hence \(\varphi(a)\lor x\lor y=\sigma(a)\lor x\lor y\) for all \(a\in\mathcal{L}\) and \(x\lor y\) is superfluous in \(\mathcal{L}\). Thus \(\varphi\equiv^{\nabla}\sigma\). This proves that \(\equiv^{\nabla}\) is an equivalence relation.
Now, suppose that \(\varphi\equiv^{\nabla}\psi\). Then there exists \(x\) superfluous in \(\mathcal{L}\) such that \(\varphi(a)\lor x=\psi(a)\lor x\) for all \(a\in\mathcal{L}\). It follows that \(\varphi(\sigma(a))\lor x=\psi(\sigma(a))\lor x\) for all \(a\in\mathcal{L}\). Thus \(\varphi\sigma\equiv^{\nabla}\psi\sigma\). On the other hand,
\[\sigma(\varphi(a))\vee\sigma(x)=\sigma(\varphi(a)\lor x)=\sigma(\psi(a)\lor x )=\sigma(\psi(a))\vee\sigma(x)\]
for all \(a\in\mathcal{L}\), and \(\sigma(x)\) is superfluous in \(\mathcal{L}\) by Lemma 4.9. Thus \(\sigma\varphi\equiv^{\nabla}\sigma\psi\).
**Lemma 4.11**.: _Let \(\mathcal{L}\) be a complete modular lattice. Then,_
1. \(0\equiv_{\Delta}\varphi\) _if and only if_ \(\ker_{\varphi}\) _is essential in_ \(\mathcal{L}\)_._
2. \(0\equiv^{\nabla}\varphi\) _if and only if_ \(\varphi(\mathbf{1})\) _is superfluous in_ \(\mathcal{L}\)_._
Proof.: _1._\(\Rightarrow\) Let \(\varphi\in\mathrm{End}_{lin}(\mathcal{L})\) such that \(0\equiv_{\Delta}\varphi\). Then, there exists \(x\) essential in \(\mathcal{L}\) such that \(\mathbf{0}=0(y)=\varphi(y)\) for all \(y\leq x\). This implies that \(x\leq\ker_{\varphi}\) and, therefore \(\ker_{\varphi}\) is essential in \(\mathcal{L}\).
\(\Leftarrow\) It is clear that if \(\ker_{\varphi}\) is essential in \(\mathcal{L}\), then \(0\equiv_{\Delta}\varphi\).
_2._\(\Rightarrow\) Let \(\varphi\in\mathrm{End}_{lin}(\mathcal{L})\) such that \(0\equiv^{\nabla}\varphi\). Then, there exists \(x\) superfluous in \(\mathcal{L}\) such that \(\varphi(a)\lor x=0(a)\lor x=\mathbf{0}\lor x=x\) for all \(a\in\mathcal{L}\). Hence \(\varphi(\mathbf{1})\leq x\). Thus, \(\varphi(\mathbf{1})\) is superfluous in \(\mathcal{L}\).
\(\Leftarrow\) We have that \(\varphi(a)\vee\varphi(\mathbf{1})=0(a)\vee\varphi(\mathbf{1})\) for all \(a\in\mathcal{L}\) and \(\varphi(\mathbf{1})\) superfluous in \(\mathcal{L}\). Thus \(\varphi\equiv^{\nabla}0\).
Let \(\mathcal{L}\) be a complete modular lattice and let \([\varphi]_{\Delta}\) and \([\varphi]^{\nabla}\) denote the equivalence classes of \(\varphi\in\mathrm{End}_{lin}(\mathcal{L})\) with respect to \(\equiv_{\Delta}\) and to \(\equiv^{\nabla}\), respectively. Let \(\Delta\) denote
\[[0]_{\Delta}=\{\varphi\in\mathrm{End}_{lin}(\mathcal{L})\mid 0\equiv_{\Delta}\varphi\}=\{ \varphi\in\mathrm{End}_{lin}(\mathcal{L})\mid\ker_{\varphi}\text{ is essential in }\mathcal{L}\},\]
and let \(\nabla\) denote
\[[0]^{\nabla}=\{\varphi\in\operatorname{End}_{lin}(\mathcal{L})\mid 0\equiv^{ \nabla}\varphi\}=\{\varphi\in\operatorname{End}_{lin}(\mathcal{L})\mid\varphi( \mathbf{1})\text{ is superfluous in }\mathcal{L}\}.\]
The proof of the following lemma is straightforward and we omit it.
**Lemma 4.12**.: _Let \(\mathcal{L}\) be a complete modular lattice. Then,_
1. \(\Delta\) _and_ \(\nabla\) _are ideals of_ \(\operatorname{End}_{lin}(\mathcal{L})\)_._
2. \(\Delta\) _and_ \(\nabla\) _contain no nonzero idempotents._
**Proposition 4.13**.: _Let \(M\) be an \(R\)-module and \(f,g\in\operatorname{End}_{R}(M)\). Then,_
1. _If_ \(\operatorname{Ker}(f-g)\) _is essential in_ \(M\) _then_ \(f_{*}\equiv_{\Delta}g_{*}\)_._
2. _If_ \(\operatorname{Im}(f-g)\) _is superfluous in_ \(M\) _then_ \(f_{*}\equiv^{\nabla}g_{*}\)_._
Proof.: _(1)_ Let \(N\leq\operatorname{Ker}(f-g)\). Then \(f(n)=g(n)\) for all \(n\in N\). In particular, \(f_{*}(N)=f(N)=g(N)=g_{*}(N)\). Thus, \(f_{*}\equiv_{\Delta}g_{*}\).
_(2)_ Let \(N\leq M\) and \(n\in N\). Consider \(f(n)+(f-g)(m)\in f(N)+\operatorname{Im}(f-g)\). Then
\[f(n)+(f-g)(m)=f(n)+(f-g)(m)+g(n)-g(n)=g(n)+(f-g)(n)+(f-g)(m)\]
\[=g(n)+(f-g)(n+m)\in g(N)+\operatorname{Im}(f-g).\]
Hence \(f(N)+\operatorname{Im}(f-g)\subseteq g(N)+\operatorname{Im}(f-g)\). Analogously, \(g(N)+\operatorname{Im}(f-g)\subseteq f(N)+\operatorname{Im}(f-g)\). Thus \(g_{*}(N)+\operatorname{Im}(f-g)=f_{*}(N)+\operatorname{Im}(f-g)\) for all \(N\leq M\), that is, \(f_{*}\equiv^{\nabla}g_{*}\).
**Lemma 4.14**.: _Let \(\mathcal{L}\) be a complete modular lattice and \(\mathfrak{m}\) a submonoid of \(\operatorname{End}_{lin}(\mathcal{L})\)._
1. _If_ \(\mathcal{L}\) _satisfies the_ \(\mathfrak{m}\)_-_\(C_{2}\) _condition, then every monomorphism_ \(\varphi\in\mathfrak{m}\) _such that_ \(\varphi(\mathbf{1})\) _is essential in_ \(\mathcal{L}\)_, is an isomorphism, and_ \([1]_{\Delta}\) _consists of isomorphisms._
2. _If_ \(\mathcal{L}\) _satisfies the_ \(\mathfrak{m}\)_-_\(D_{2}\) _condition, then every surjective_ \(\varphi\in\mathfrak{m}\) _such that_ \(\ker_{\varphi}\) _is superfluous in_ \(\mathcal{L}\) _is an isomorphism, and_ \([1]^{\nabla}\) _consists of isomorphisms._
Proof.: _1._ Let \(\varphi\in\mathfrak{m}\) be a monomorphism such that \(\varphi(\mathbf{1})\) is essential in \(\mathcal{L}\). By \(\mathfrak{m}\)-\(C_{2}\), \(\varphi(\mathbf{1})\) is a complement in \(\mathcal{L}\). It follows that \(\varphi(\mathbf{1})=\mathbf{1}\). Thus, \(\varphi\) is a linear isomorphism. Let \(\varphi\in[1]_{\Delta}\). Then there exists \(x\) essential in \(\mathcal{L}\), such that \(\varphi(y)=y\) for all \(y\leq x\). Suppose that \(z\in\mathcal{L}\) is such that \(\varphi(z)=\mathbf{0}\). Then \(z\wedge x=\varphi(x\wedge z)=\mathbf{0}\). This implies that \(z=\mathbf{0}\). Therefore, \(\varphi\) is a monomorphism. Since \(x=\varphi(x)\leq\varphi(\mathbf{1})\), \(\varphi(\mathbf{1})\) is essential in \(\mathcal{L}\). It follows that \(\varphi\) is an isomorphism.
_2._ Let \(\varphi\in\mathfrak{m}\) be surjective such that \(\ker_{\varphi}\) is superfluous in \(\mathcal{L}\). Then there is an isomorphism \(\overline{\varphi}:[\ker_{\varphi},\mathbf{1}]\to[\mathbf{0},\mathbf{1}]\). It follows from the condition \(\mathfrak{m}\)-\(D_{2}\) that \(\ker_{\varphi}\) is a complement. The hypothesis implies that \(\ker_{\varphi}=\mathbf{0}\). Thus \(\varphi\) is an isomorphism. Let \(\varphi\in[1]^{\nabla}\). Then there exists \(x\) superfluous in \(\mathcal{L}\) such that \(\varphi(a)\lor x=1(a)\lor x=a\lor x\) for all \(a\in\mathcal{L}\). In particular, \(\varphi(\mathbf{1})\lor x=\mathbf{1}\lor x=\mathbf{1}\). Since \(x\) is superfluous, \(\varphi(\mathbf{1})=\mathbf{1}\), that is, \(\varphi\) is surjective. On the other hand, \(x=\mathbf{0}\lor x=\varphi(\ker_{\varphi})\lor x=\ker_{\varphi}\lor x\). This implies that \(\ker_{\varphi}\leq x\). Therefore, \(\ker_{\varphi}\) is superfluous in \(\mathcal{L}\). Thus, \(\varphi\) is an isomorphism.
**Theorem 4.15**.: _Let \(\mathcal{L}\) be a complete modular lattice and \(\mathfrak{m}\) be a submonoid of \(\operatorname{End}_{lin}(\mathcal{L})\) closed under complements. If \(\mathcal{L}\) is \(\mathfrak{m}\)-\(\mathcal{K}\)-extending and satisfies \(\mathfrak{m}\)-\(C_{2}\), then \(\mathfrak{m}/\equiv_{\Delta}\) is a regular monoid._
Proof.: By Lemma 4.10, \(\mathfrak{m}/\equiv_{\Delta}\) is a monoid. Let \(\varphi\in\mathfrak{m}\). Since \(\mathcal{L}\) is \(\mathfrak{m}\)-\(\mathcal{K}\)-extending, there exists \(c\in C(\mathcal{L})\) such that \(\ker_{\varphi}\) is essential in \([\mathbf{0},c]\). Let \(c^{\prime}\) be a complement of \(c\) in \(\mathcal{L}\). The morphism \(\varphi\) induces a linear isomorphism \(\varphi|:[\mathbf{0},c^{\prime}]\rightarrow[\mathbf{0},\varphi(c^{\prime})]\). Since \(\varphi\pi_{c^{\prime}}\in\mathfrak{m}\) and \(\mathcal{L}\) satisfies \(\mathfrak{m}\)-\(C_{2}\), \(\varphi(c^{\prime})\) is a complement in \(\mathcal{L}\). By the hypothesis on \(\mathfrak{m}\), we can extend \((\varphi|)^{-1}\) to a linear endomorphism \(\psi\in\mathfrak{m}\) such that \(\psi\varphi(x)=x\) for all \(x\leq c^{\prime}\). Thus \(\varphi\psi\varphi(c^{\prime})=\varphi(c^{\prime})\). Moreover \(\varphi\psi\varphi(\ker_{\varphi}\lor c^{\prime})=\varphi(\ker_{\varphi}\lor c ^{\prime})\). Since \(\ker_{\varphi}\) is essential in \([\mathbf{0},c]\) and \(c^{\prime}\) is a complement of \(c\), \(\ker_{\varphi}\lor c^{\prime}\) is essential in \(\mathcal{L}\). Let \(y\leq\ker_{\varphi}\lor c^{\prime}\). Then \(\varphi(y)\leq\varphi(\ker_{\varphi}\lor c^{\prime})=\varphi(c^{\prime})\). This implies that there exists \(\mathbf{0}\leq z\leq c^{\prime}\) such that \(\varphi(z)=\varphi(y)\). Therefore \(\varphi\psi\varphi(y)=\varphi\psi\varphi(z)=\varphi(z)=\varphi(y)\). Thus, \(\varphi\psi\varphi\equiv_{\Delta}\varphi\).
Since every endoregular lattice \(\mathcal{L}\) is \(\mathcal{K}\)-extending and satisfies \((C_{2})\), one might expect that, for those lattices, the congruence \(\equiv_{\Delta}\) in \(\operatorname{End}_{lin}(\mathcal{L})\) is trivial, but this is not the case. The following example exhibits an endoregular lattice \(\mathcal{L}\) such that \(\equiv_{\Delta}\) is not trivial.
**Example 4.16**.: Consider the following lattice \(\mathcal{L}\)
It is not difficult to see that every nonzero linear endomorphism of \(\mathcal{L}\) is an isomorphism. Therefore \(\mathcal{L}\) is endoregular. Consider the following endomorphism
\[\begin{array}{l}\varphi(x)=x\quad\text{for all }x\leq a_{0}\\ \varphi(b)=c\\ \varphi(c)=b\\ \varphi(\mathbf{1})=\mathbf{1}\end{array}\]
Then \(\varphi\equiv_{\Delta}id\), since \(\varphi\) and \(id\) agree on \([\mathbf{0},a_{0}]\) and \(a_{0}\) is essential in \(\mathcal{L}\). In fact, \(\operatorname{End}_{lin}(\mathcal{L})=\{0,id,\varphi\}\) and \(\operatorname{End}_{lin}(\mathcal{L})/\equiv_{\Delta}=\{[0],[id]\}\).
The module-theoretic version of Theorem 4.15 is stated for continuous modules [11, Proposition 3.15], that is, modules satisfying the conditions \((C_{1})\) and \((C_{2})\). As an example of Theorem 4.15, we show a lattice that is \(\mathcal{K}\)-extending but does not satisfy \((C_{1})\) [2, Definition 1.1].
**Example 4.17**.: Consider the following lattice \(\mathcal{L}\)
It can easily be seen that \(\mathcal{L}\) does not satisfy \((C_{1})\). There are 5 linear endomorphisms on \(\mathcal{L}\), \(\operatorname{End}_{lin}(\mathcal{L})=\{0,id,\varphi,\psi,\tau\}\), given by
\[\begin{array}{ll}\varphi(\mathbf{0})=\mathbf{0}&\psi(\mathbf{0})=\mathbf{0} &\tau(\mathbf{0})=\mathbf{0}\\ \varphi(a)=\mathbf{0}&\psi(a)=\mathbf{0}&\tau(a)=b\\ \varphi(b)=\mathbf{0}&\psi(b)=\mathbf{0}&\tau(b)=a\\ \varphi(c)=\mathbf{0}&\psi(c)=\mathbf{0}&\tau(c)=c\\ \varphi(\mathbf{1})=a&\psi(\mathbf{1})=b&\tau(\mathbf{1})=\mathbf{1}.\end{array}\]
Hence, \(\mathcal{L}\) is \(\mathcal{K}\)-extending. Then \(\operatorname{End}_{lin}(\mathcal{L})/\equiv_{\Delta}=\{[0],[id],[\tau]\}\) is a regular monoid.
**Corollary 4.18**.: _Let \(\mathcal{L}\) be an indecomposable modular lattice, that is, \(C(\mathcal{L})=\{\mathbf{0},\mathbf{1}\}\). Consider the following conditions:_
1. _Every linear endomorphism_ \(\varphi\notin\Delta\) _has an inverse._
2. \(\operatorname{End}_{lin}(\mathcal{L})/\equiv_{\Delta}\) _is a monoid in which every nonzero element has an inverse._
3. \(\mathcal{L}\) _is_ \(\mathcal{K}\)_-extending._
_Then (1)\(\Rightarrow\)(2)\(\Rightarrow\)(3). Moreover, if \(\mathcal{L}\) is Hopfian, then the three conditions are equivalent._
Proof.: _(1)\(\Rightarrow\)(2)_ It is clear.
_(2)\(\Rightarrow\)(3)_ Let \(\varphi\in\operatorname{End}_{lin}(\mathcal{L})\). By hypothesis, \(\ker_{\varphi}\) is essential in \(\mathcal{L}\) or there exist \(\psi\in\operatorname{End}_{lin}(\mathcal{L})\) and \(x\in\mathcal{L}\) essential such that \(\psi\varphi(y)=id(y)=y\) for all \(y\leq x\). Consider \(\ker_{\varphi}\) and set \(y=x\wedge\ker_{\varphi}\). It follows that \(\mathbf{0}=\psi\varphi(y)=y\). Since \(x\) is essential, \(\ker_{\varphi}=\mathbf{0}\). Thus, \(\mathcal{L}\) is \(\mathcal{K}\)-extending.
Suppose \(\mathcal{L}\) is Hopfian. _(3)\(\Rightarrow\)(1)_ Let \(\varphi\in\operatorname{End}_{lin}(\mathcal{L})\). If \(\ker_{\varphi}\neq\mathbf{0}\), then \(\ker_{\varphi}\) is essential in \(\mathcal{L}\) by the hypothesis. Therefore, \(\varphi\in\Delta\). Now, if \(\ker_{\varphi}=\mathbf{0}\) then \(\varphi\) is an isomorphism because \(\mathcal{L}\) is Hopfian.
**Theorem 4.19**.: _Let \(\mathcal{L}\) be a complete modular lattice and \(\mathfrak{m}\) a submonoid of \(\operatorname{End}_{lin}(\mathcal{L})\) closed under complements. If \(\mathcal{L}\) is \(\mathfrak{m}\)-\(\mathcal{T}\)-lifting and satisfies \(\mathfrak{m}\)-\(D_{2}\), then \(\mathfrak{m}/\equiv^{\nabla}\) is a regular monoid._
Proof.: By Lemma 4.10, \(\mathfrak{m}/\equiv^{\nabla}\) is a monoid. Let \(\varphi\in\mathfrak{m}\). Since \(\mathcal{L}\) is \(\mathfrak{m}\)-\(\mathcal{T}\)-lifting, there exists \(c\in C(\mathcal{L})\) with complement \(c^{\prime}\) such that \(c\leq\varphi(\mathbf{1})\) and \(\varphi(\mathbf{1})\wedge c^{\prime}\) is superfluous in \([\mathbf{0},c^{\prime}]\). Consider the isomorphism \(\overline{\varphi}:[\ker_{\varphi},\mathbf{1}]\to[\mathbf{0},\varphi(\mathbf{1})]\). Then, there exist \(\ker_{\varphi}\leq x,y\leq\mathbf{1}\) such that \(\varphi(x)=c\), \(\varphi(y)=\varphi(\mathbf{1})\wedge c^{\prime}\), and \(\overline{\varphi}\) induces isomorphisms \([\ker_{\varphi},x]\cong[\mathbf{0},c]\) and \([\ker_{\varphi},y]\cong[\mathbf{0},\varphi(\mathbf{1})\wedge c^{\prime}]\). Furthermore, \(x\) and \(y\) are complements of each other in the interval \([\ker_{\varphi},\mathbf{1}]\). Therefore, \([y,\mathbf{1}]\cong[\ker_{\varphi},x]\cong[\mathbf{0},c]\). Since \(\pi_{c}\varphi\in\mathfrak{m}\) and \(\mathcal{L}\) satisfies the \(\mathfrak{m}\)-\(D_{2}\) condition, \(y\in C(\mathcal{L})\). Let
\(z\in\mathcal{L}\) be a complement of \(y\). Then \(\ker_{\varphi}\wedge z=\mathbf{0}\) and hence \(\varphi|:[\mathbf{0},z]\to[\mathbf{0},\varphi(z)]\) is an isomorphism. We claim that \(\varphi(z)\) is a complement of \(c^{\prime}\) in \(\mathcal{L}\). We have that
\[\varphi(\mathbf{1})=\varphi(y\lor z)=\varphi(y)\vee\varphi(z)=(\varphi( \mathbf{1})\wedge c^{\prime})\vee\varphi(z)=\varphi(\mathbf{1})\wedge(c^{ \prime}\vee\varphi(z)).\]
Thus, \(c\leq\varphi(\mathbf{1})\leq c^{\prime}\vee\varphi(\mathbf{1})\). By modularity, \(c^{\prime}\wedge\_:[c,\mathbf{1}]\to[\mathbf{0},c^{\prime}]\) is an isomorphism, and \(c^{\prime}\wedge(c^{\prime}\vee\varphi(z))=c^{\prime}\). This implies that \(c^{\prime}\vee\varphi(z)=\mathbf{1}\). On the other hand, there exists \(0\leq w\leq z\) such that \(\varphi(w)=c^{\prime}\wedge\varphi(z)\). Then \(\varphi(w\vee\ker_{\varphi})=c^{\prime}\wedge\varphi(z)\leq\varphi(y)\). This implies that \(w\vee\ker_{\varphi}\leq y\wedge(z\vee\ker_{\varphi})=\ker_{\varphi}\vee \mathbf{0}=\ker_{\varphi}\). Hence \(w\leq\ker_{\varphi}\). Thus \(c^{\prime}\wedge\varphi(z)=\varphi(w)=\mathbf{0}\). This proves the claim. By the hypothesis on \(\mathfrak{m}\), we can consider the endomorphism \(\psi\in\mathfrak{m}\) given by \(\iota_{z}(\varphi|)^{-1}\pi_{\varphi(z)}:\mathcal{L}\to\mathcal{L}\). Therefore
\[\varphi\psi\varphi(a)=\pi_{\varphi(z)}(\varphi(a))=(\varphi(a)\lor c^{\prime} )\wedge\varphi(z),\]
for all \(a\in\mathcal{L}\). Note that \(((\varphi(a)\lor c^{\prime})\wedge\varphi(z))\lor c^{\prime}=(\varphi(a)\lor c ^{\prime})\wedge(\varphi(z)\lor c^{\prime})=(\varphi(a)\lor c^{\prime})\wedge \mathbf{1}=\varphi(a)\lor c^{\prime}\), for all \(a\in\mathcal{L}\). Hence
\[\varphi\psi\varphi(a)\vee(\varphi(\mathbf{1})\wedge c^{\prime})=\varphi( \mathbf{1})\wedge(\varphi\psi\varphi(a)\lor c^{\prime})=\varphi(\mathbf{1}) \wedge(\varphi(a)\lor c^{\prime})=\varphi(a)\vee(\varphi(\mathbf{1})\wedge c^ {\prime}),\]
for all \(a\in\mathcal{L}\). It is not difficult to see that \(\varphi(\mathbf{1})\wedge c^{\prime}\) is superfluous in \(\mathcal{L}\) because \(\varphi(\mathbf{1})\) is superfluous in \([c,\mathbf{1}]\) and \(c\in C(\mathcal{L})\). Thus \(\varphi\psi\varphi\equiv^{\nabla}\varphi\).
**Corollary 4.20**.: _Let \(\mathcal{L}\) be an indecomposable modular lattice, i.e., \(C(\mathcal{L})=\{\mathbf{0},\mathbf{1}\}\). Consider the following conditions:_
1. _Every linear endomorphism_ \(\varphi\notin\nabla\) _has an inverse._
2. \(\operatorname{End}_{lin}(\mathcal{L})/\equiv^{\nabla}\) _is a monoid in which every nonzero element has an inverse._
3. \(\mathcal{L}\) _is_ \(\mathcal{T}\)_-lifting._
_Then (1)\(\Rightarrow\)(2)\(\Rightarrow\)(3). Moreover, if \(\mathcal{L}\) is cohopfian, then the three conditions are equivalent._
Proof.: _(1)\(\Rightarrow\)(2)_ It is clear.
_(2)\(\Rightarrow\)(3)_ Let \(\varphi\in\operatorname{End}_{lin}(\mathcal{L})\). By hypothesis \(\varphi(\mathbf{1})\) is superfluous in \(\mathcal{L}\) or there exist \(\psi\in\operatorname{End}_{lin}(\mathcal{L})\) and \(x\in\mathcal{L}\) superfluous such that \(\varphi\psi(a)\lor x=a\lor x\) for all \(a\in\mathcal{L}\). Then \(\varphi\psi(\mathbf{1})\lor x=\mathbf{1}\lor x=\mathbf{1}\). This implies that \(\varphi\psi(\mathbf{1})=\mathbf{1}\). Therefore, \(\varphi(\mathbf{1})=\mathbf{1}\). Thus, \(\mathcal{L}\) is \(\mathcal{T}\)-lifting.
Suppose \(\mathcal{L}\) is cohopfian. _(3)\(\Rightarrow\)(1)_ Let \(\varphi\in\operatorname{End}_{lin}(\mathcal{L})\). If \(\varphi(\mathbf{1})\neq\mathbf{1}\), then \(\varphi(\mathbf{1})\) is superfluous in \(\mathcal{L}\) by the hypothesis. Therefore, \(\varphi\in\nabla\). Now, if \(\varphi(\mathbf{1})=\mathbf{1}\) then \(\varphi\) is an isomorphism because \(\mathcal{L}\) is cohopfian.
In [11, Corollary 2.32], it is proved that in a quasi-continuous module, two isomorphic submodules have isomorphic closures (given by the \((C_{1})\) condition). In a dual way, in [11, Theorem 4.24], it is proved that if \(A\) and \(B\) are direct summands of a quasi-discrete module \(M\) such that \(A/X\cong B/Y\) with \(X\) superfluous in \(A\) and \(Y\) superfluous in \(B\), then \(A\cong B\). We finish this section by giving an example which shows that the mentioned results cannot be extended to linear lattices.
**Example 4.21**.: Consider the following lattice \(\mathcal{L}\):
Then \(C(\mathcal{L})=\{\mathbf{0},\mathbf{1},a,d\}\). Hence \(\mathcal{L}\) satisfies (\(C_{3}\)). On the other hand:
\[\begin{array}{l}a\text{ is essential in }[\mathbf{0},a],\\ b\text{ is essential in }[\mathbf{0},d],\\ c\text{ is essential in }[\mathbf{0},\mathbf{1}],\\ d\text{ is essential in }[\mathbf{0},d].\end{array}\]
Thus, \(\mathcal{L}\) satisfies (\(C_{1}\)). Therefore, \(\mathcal{L}\) is quasi-continuous. Note that \([\mathbf{0},a]\) and \([\mathbf{0},b]\) are isomorphic, but \([\mathbf{0},a]\) is not isomorphic to \([\mathbf{0},d]\). On the other hand, \(\mathcal{L}\) is autodual, so \(\mathcal{L}\) is quasi-discrete. We have that \(\mathbf{0}\) is superfluous in \([\mathbf{0},a]\), \(b\) is superfluous in \([\mathbf{0},d]\), and \([\mathbf{0},a]\cong[b,d]\). But \([\mathbf{0},a]\) is not isomorphic to \([\mathbf{0},d]\).
|
2306.08053 | Quantifying Spatial Audio Quality Impairment | Spatial audio quality is a highly multifaceted concept, with many
interactions between environmental, geometrical, anatomical, psychological, and
contextual considerations. Methods for characterization or evaluation of the
geometrical components of spatial audio quality, however, remain scarce,
despite being perhaps the least subjective aspect of spatial audio quality to
quantify. By considering interchannel time and level differences relative to a
reference signal, it is possible to construct a signal model to isolate some of
the spatial distortion. By using a combination of least-square optimization and
heuristics, we propose a signal decomposition method to isolate the spatial
error from a processed signal, in terms of interchannel gain leakages and
changes in relative delays. This allows the computation of simple energy-ratio
metrics, providing objective measures of spatial and non-spatial signal
qualities, with minimal assumptions and no dataset dependency. Experiments
demonstrate the robustness of the method against common spatial signal
degradation introduced by, e.g., audio compression and music source separation.
Implementation is available at https://github.com/karnwatcharasupat/spauq. | Karn N. Watcharasupat, Alexander Lerch | 2023-06-13T18:18:09Z | http://arxiv.org/abs/2306.08053v3 | # Evaluation of Spatial Distortion
###### Abstract
Despite the recent proliferation of spatial audio technologies, the evaluation of spatial quality continues to rely on subjective listening tests, often requiring expert listeners. Based on the duplex theory of spatial hearing, it is possible to construct a signal model for frequency-independent spatial distortion by accounting for inter-channel time and level differences relative to a reference signal. By using a combination of least-square optimization and heuristics, we propose a signal decomposition method to isolate the spatial error from a processed signal. This allows the computation of simple energy-ratio metrics, providing objective measures of spatial and non-spatial signal qualities, with minimal assumptions and no dataset dependency. Experiments demonstrate the robustness of the method against common signal degradation introduced by, e.g., audio compression and music source separation.
Multichannel signal processing, signal decomposition, spatial audio, quality evaluation
## I Introduction
The development of spatial audio technology is intrinsically linked to the spatial hearing ability of human listeners. Human sound localization is commonly understood to be characterizable by the head-related transfer function (HRTF), whereby the shapes and locations of the head, ears, and other anatomical features transform how the sound source originating from a particular location is finally received by the ear. A simplified understanding of this phenomenon, known as the duplex theory [1], can be thought of in terms of the frequency-independent interaural time differences of arrival and the interaural level differences of sound sources between the ears.
The duplex theory, in particular, has been exploited to achieve significant data compression in various lossy audio codecs, such as MP3 [2], AAC [3], Vorbis [4], and Opus [5]. These techniques, however, are known to produce audible spatial artifacts, especially at lower bitrates [6, 7]. Although spatial artifacts resulting from such encoding have been repeatedly shown to affect overall perception of audio quality [8, 9, 10], to the best of our knowledge, few experimental studies have specifically investigated objective measurements of these spatial artifacts. Unsurprisingly, equally few objective metrics have been designed specifically for spatial evaluation of multichannel audio. Most practical evaluation of spatial quality continues to rely on time-consuming and labor-intensive listening tests, often requiring expert listeners [11]. A number of spectral and spatial features were proposed for training a simple model to predict perceptual ratings [12, 13, 14, 15]. More recently, a similar deep neural metric was proposed in [16] for binaural audio. None of these data-dependent approaches, however, is usable outside the channel configuration on which the prediction models were trained. The generalizability of the models is also often called into question when applying the predictor to unseen data with significant domain shift. AMBIQUAL, a metric designed for ambisonic signals derived from structural similarities of the time-frequency representations [17, 18], does not suffer from data dependency but was designed only for ambisonic data.
Despite limited literature specific to spatial evaluation, findings from audio quality evaluation in other subdomains can be adapted for spatial audio evaluation. In multichannel blind source separation (BSS), the bsseval toolbox [19, 20, 21] has been used widely to measure various aspects of signal degradation due to BSS algorithms. To account for filtering error, the toolbox computes a 512-tap least-square multichannel projection filter \(\mathbf{h}\) of reference signal \(\mathbf{s}\) onto the estimated signal \(\hat{\mathbf{s}}\). However, among other criticisms [22], the resulting error signal \(\mathbf{e}_{\text{proj}}=\mathbf{s}*\mathbf{h}-\hat{\mathbf{s}}\) has often been referred to as the "spatial error" signal, despite containing all filtering errors accountable within 512 taps regardless of their spatial relevance. This issue in particular has led to the limited utility of the "source image to spatial distortion ratio" (ISR), relative to other metrics in the widely used bsseval toolbox.
By constraining the projection filter, however, it is possible to exclusively account for some spatial distortion, in particular those accounted for by the duplex theory, due to their frequency independence. Spatial information originating from the room acoustics or head-related transfer functions, however, cannot be easily distinguished _a priori_ from other frequency-dependent distortion. In this work, we propose a decomposition technique1 that distinguishes a subset of spatial distortion from other filtering distortion, allowing the energy ratio between a clean signal and the corresponding spatial error signal to be explicitly computed. The technique was designed such that it can be used either independently for, e.g., codec evaluation, or in conjunction with existing ratio metrics for evaluation of source separation systems.
Footnote 1: Implementation is available at github.com/karnwatcharasupat/spauq. Last accessed: June 16, 2023.
## II Proposed Method
### _Signal Model and Spatial Projection_
In a multichannel audio setting, the filtering error itself can be loosely decomposed into one concerning spatial distortion, such as interchannel time differences (ITD) and interchannel level
differences (ILD), and another concerning frequency response distortion, such as changes in equalization (EQ) or timbre. Admittedly, any spatial effect with a frequency-dependent response, such as those due to room reverberation or the pinna filtering effects, cannot be fully distinguished from EQ distortion. As such, we will only consider filtering operations related to the duplex theory in this work.
Changes to the ITD can be modeled by relative changes in delays over the channels, while changes to the ILD can be modeled by relative changes in the gain of each channel as well as leakages into other channels. For a signal \(\mathbf{s}\) with \(C\) channels, the projected signal \(\tilde{\mathbf{s}}\) can thus be modeled by
\[\tilde{s}_{\mathrm{c}}[n]=\sum_{d}A_{cd}s_{d}[n-\tau_{cd}],\quad\forall c,n. \tag{1}\]
Since we are interested in projecting the _reference_ signal as close to the _estimated_ signal as possible, this results in the least-square optimization objective \(\min_{\mathbf{A},\mathbf{T}}\|\tilde{\mathbf{s}}-\hat{\mathbf{s}}\|^{2}\) where \((\mathbf{T})_{cd}=\tau_{cd}\).
At an optimal \(\mathbf{T}\), the optimal value for each row of \(\mathbf{A}\) can be found by solving the matrix equation2
Footnote 2: See Supplementary Materials for derivation.
\[\mathbf{A}_{c,:}\mathbf{R}^{c} =\hat{\mathbf{R}}_{c,:}, \tag{2}\] \[(\mathbf{R}^{c})_{bd} =\sum_{n}s_{\mathrm{b}}[n-\tau_{cb}]s_{d}[n-\tau_{cd}],\] (3) \[(\hat{\mathbf{R}})_{cd} =\sum_{n}\hat{s}_{\mathrm{c}}[n]s_{d}[n-\tau_{cd}]. \tag{4}\]
Solving for \(\mathbf{T}\) directly remains an open problem due to multiple local minima and non-monotonic gradients. As such, we used a method based on interchannel correlation to assign
\[\tau_{cd}=\arg\max_{\eta}\left|\sum_{n}\hat{s}_{\mathrm{c}}[n]s_{d}[n-\eta] \right|. \tag{5}\]
In other words, each input channel of the reference signal is shifted so that it is maximally correlated with the target channel of the estimate signal or its inversion.
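As a concrete illustration, the following is a minimal sketch of this two-step procedure, assuming NumPy/SciPy and signals stored as (channels, samples) arrays; the function names and the `max_lag` search range are illustrative and are not taken from the released spauq implementation.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def estimate_delays(ref, est, max_lag=4410):
    """Eq. (5): pick the lag maximizing |cross-correlation| between estimate
    channel c and reference channel d (sign conventions should be checked
    against the definition of tau_cd)."""
    C = ref.shape[0]
    tau = np.zeros((C, C), dtype=int)
    for c in range(C):
        for d in range(C):
            xcorr = correlate(est[c], ref[d], mode="full")
            lags = correlation_lags(len(est[c]), len(ref[d]), mode="full")
            keep = np.abs(lags) <= max_lag
            tau[c, d] = lags[keep][np.argmax(np.abs(xcorr[keep]))]
    return tau

def solve_gains(ref, est, tau):
    """Eqs. (2)-(4): least-squares gains A at the estimated delays tau."""
    C = ref.shape[0]
    A = np.zeros((C, C))
    for c in range(C):
        # circular shifts stand in for the ideal integer delays s_d[n - tau_cd]
        S = np.stack([np.roll(ref[d], tau[c, d]) for d in range(C)])
        Rc = S @ S.T            # (R^c)_{bd}
        rhat_c = S @ est[c]     # row c of R-hat
        A[c] = np.linalg.solve(Rc, rhat_c)
    return A
```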
### _Energy-Ratio Metrics_
Once the optimal projection \(\tilde{\mathbf{s}}\) is found, the spatial error signal can be computed using \(\mathbf{e}_{\text{spat}}=\tilde{\mathbf{s}}-\mathbf{s}\), while any other residual error can be computed by treating \(\tilde{\mathbf{s}}\) as the 'new' reference, i.e., \(\mathbf{e}_{\text{resid}}=\hat{\mathbf{s}}-\tilde{\mathbf{s}}\). Thus, the total error between the reference and the estimated signal can be written as
\[\mathbf{e}_{\text{total}}=\hat{\mathbf{s}}-\mathbf{s}=\mathbf{e}_{\text{spat} }+\mathbf{e}_{\text{resid}}. \tag{6}\]
Using the decompositions above, two metrics naturally arise, namely the Signal to Spatial Distortion Ratio (SSR),
\[\text{SSR}(\hat{\mathbf{s}};\mathbf{s})=10\log_{10}\frac{\|\mathbf{s}\|^{2}} {\|\mathbf{e}_{\text{spat}}\|^{2}}, \tag{7}\]
and the Signal to Residual Distortion Ratio (SRR),
\[\text{SRR}(\hat{\mathbf{s}};\mathbf{s})=10\log_{10}\frac{\|\tilde{\mathbf{s}} \|^{2}}{\|\mathbf{e}_{\text{resid}}\|^{2}}. \tag{8}\]
The SSR itself can be considered as a replacement for the ISR, considering only components of the error signal with spatial importance as errors. The SRR effectively acts as the non-spatial SNR, only considering non-spatial errors such as interference, timbral distortion, and additive artifacts.
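Given the reference \(\mathbf{s}\), the estimate \(\hat{\mathbf{s}}\), and the projection \(\tilde{\mathbf{s}}\), the two metrics reduce to a few lines; this is a minimal sketch assuming NumPy arrays of shape (channels, samples), with a small constant added only to avoid division by zero.

```python
import numpy as np

def spatial_metrics(ref, est, proj, eps=1e-12):
    e_spat = proj - ref    # spatial error
    e_resid = est - proj   # residual (non-spatial) error
    ssr = 10 * np.log10(np.sum(ref ** 2) / (np.sum(e_spat ** 2) + eps))    # Eq. (7)
    srr = 10 * np.log10(np.sum(proj ** 2) / (np.sum(e_resid ** 2) + eps))  # Eq. (8)
    return ssr, srr
```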
### _Framewise Computation_
Since the proposed decomposition is relatively easy to compute, the computation can be made in a frame-wise manner. This is particularly helpful in the case of time-variant signals such as music, speech, and environmental sound, where the signal content can drastically change over a time period. This means that most audio processing algorithms may also process the signal in a time-variant manner, leading to time-varying spatial distortion, which in turn requires time-varying decomposition. Following bsseval, we defaulted to a window of 2 s with 50% overlap in our implementation.
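A sketch of the frame-wise evaluation, assuming a sampling rate `fs`, the `spatial_metrics` helper above, and a caller-supplied `project` routine implementing the projection of Section II-A; the 2 s window and 50% overlap follow the defaults stated above.

```python
def framewise_metrics(ref, est, fs, project, win_s=2.0, hop_s=1.0):
    win, hop = int(win_s * fs), int(hop_s * fs)
    ssrs, srrs = [], []
    for start in range(0, ref.shape[-1] - win + 1, hop):
        r, e = ref[:, start:start + win], est[:, start:start + win]
        p = project(r, e)                  # frame-wise spatial projection
        ssr, srr = spatial_metrics(r, e, p)
        ssrs.append(ssr)
        srrs.append(srr)
    return ssrs, srrs
```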
## III Signal Degradation Tests
To evaluate the robustness of the proposed decomposition and thus the proposed metrics, common signal degradations and spatial distortions are evaluated on a subset of the TIMIT Acoustic-Phonetic Continuous Speech Corpus [23]. For the purpose of this robustness check we test various audio degradations on recorded utterances of the sentence SA1, as uttered by 168 different participants with various dialectal variants of American English; SA1 is chosen as it was designed to expose diverse variations in English phoneme pronunciation. The TIMIT Corpus provides single-channel 16-bit PCM audio signals sampled at 16 kHz. In order to simulate known spatialization settings, each mono signal is spatialized to a stereo setup using the constant-power pan law
\[g_{\text{L}}=\cos\left(\frac{\pi}{4}(p+1)\right),\quad g_{\text{R}}=\sin\left( \frac{\pi}{4}(p+1)\right) \tag{9}\]
with \(p\in[-1,1]\).
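A small sketch of this spatialization step (Eq. (9)), assuming a one-dimensional NumPy array for the mono utterance:

```python
import numpy as np

def pan_stereo(mono, p):
    g_l = np.cos(np.pi / 4 * (p + 1))
    g_r = np.sin(np.pi / 4 * (p + 1))
    return np.stack([g_l * mono, g_r * mono])   # shape (2, samples)
```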
### _Panning Error_
The most basic test is to investigate the relationship between the metrics in the case where there is only spatial error and no other type of degradation. This first test is simulated by considering magnitude-only stereo panning errors between the estimate signals and the reference signals. For the stereo setup defined in Eq. (9), the theoretical SRR is positive infinity while the theoretical SSR is given by2
Footnote 2: See Supplementary Materials for derivation.
\[-10\log_{10}\left[2-2\cos\left(\frac{\pi}{4}(\hat{p}-p)\right)\right], \tag{10}\]
where \(p\) is the pan parameter of the reference signal and \(\hat{p}\) is the pan parameter of the signal estimate.
As theoretically expected, the computed SRR values on the SA1 signals of the TIMIT Corpus test set were all effectively positive infinities and are not plotted here due to space constraints. The SSR results are shown in Figure 1 and are consistent with the theoretical values with very small variances.
### _Delay Error_
The next test concerns the channel-wise delay error. The reference signal for this experiment is center-panned (\(p=0\)) with no delay applied. The estimate signals are panned at various errors, with an additional delay of \(\hat{d}_{\text{R}}\) samples applied only on the right channel. In this setup, the theoretical value of the SRR is positive infinity while that of the SSR is given by [2]
\[-10\log_{10}\left[2-2\cos\left(\frac{\pi\hat{p}}{4}\right)+2g_{\text{R}}\hat{g }_{\text{R}}\left(1-\frac{\kappa[\hat{d}_{\text{R}}]}{\kappa[0]}\right)\right] \tag{11}\]
where \(\kappa[\cdot]\) is the autocorrelation function of the mono signal.
The results of the delay test are shown in Figure 2. In general, most experimental values lie within or close to the expected range. Within each pan parameter, the SSR follows roughly the shape of the autocorrelation function of a speech signal, with minima roughly at the delay where the shifted speech signal would be at maximally negative correlation with the unshifted version of itself. With the exception of \(\hat{p}=-1\), where only the left channel of the estimate is present, the distributions of the SRR are nearly identical across all pan values. At lower delays, more SRR values are concentrated at the expected positive infinity (capped at 80 dB for numerical stability). As the delay increases, the SRR values tend to decrease, but remain at relatively high values above 25 dB. Upon inspection of the decomposition, the channel-wise shift values have been estimated correctly for all test signals while the channel-wise gain values are often only _approximating_ the ideal values, with the deviation increasing with \(\hat{d}_{\text{R}}\). The slightly imperfect projection thus results in the observed spread and deviation in the SRR values from the theoretical values.
### _Filtering Error_
Many audio processing algorithms can cause a loss of bandwidth [6, 24]. To test the ability of the proposed method to distinguish other filtering errors from spatially relevant ones, the estimates were forward-backward filtered with a 128-tap low pass filter computed using the Remez exchange algorithm at various cutoff frequencies \(f_{\text{c}}\) with a transition band of one third-octave. The reference signal for this experiment is center-panned (\(p=0\)) with no delay applied. The theoretical SSR for this test is given in Eq. (10).
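A sketch of this degradation, assuming SciPy; design parameters beyond the 128 taps and the one-third-octave transition band stated above are illustrative.

```python
import numpy as np
from scipy.signal import remez, filtfilt

def lowpass_estimate(x, fs, f_c, numtaps=128):
    # one-third-octave transition band, clipped just below the Nyquist frequency
    f_stop = min(f_c * 2 ** (1 / 3), 0.999 * fs / 2)
    taps = remez(numtaps, [0, f_c, f_stop, fs / 2], [1, 0], fs=fs)
    # forward-backward (zero-phase) filtering, applied per channel
    return np.stack([filtfilt(taps, [1.0], ch) for ch in x])
```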
The results of this test are shown in Figure 3. As expected, the SRR increases monotonically as the cutoff frequency increases. At high cutoff frequencies, the SSR is close to the theoretical value. As more of the signal content is lost with decreasing cutoff frequency, the SSR also decreases, with large deviation below 1 kHz. This is somewhat expected given that spatial decomposition will only be valid if most of the signal content is present.
### _Additive Noise_
Another common audio degradation is the addition of noise or other uncorrelated artifacts. In the presence of uncorrelated additive noise, the SRR is theoretically the overall SNR itself, while the theoretical SSR is given by Eq. (10), independent of the SNR. The computed metrics after adding random Gaussian noise at various SNRs are shown in Figure 4. At \(\hat{p}\neq 0\), where the theoretical values are finite, the SSR generally follows the theoretical values with small spread except for the most noisy case where the SNR is \(-24\) dB, demonstrating that the decomposition is generally robust to noise. The experimental SRR values largely follow the SNRs themselves with small spreads, demonstrating that the residual errors consist almost entirely of non-spatial errors.
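A sketch of the noise injection, scaling white Gaussian noise to a target SNR in dB relative to the stereo estimate:

```python
import numpy as np

def add_noise(x, snr_db, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.standard_normal(x.shape)
    # scale the noise so that 10*log10(P_signal / P_noise) equals snr_db
    scale = np.sqrt(np.sum(x ** 2) / (np.sum(noise ** 2) * 10 ** (snr_db / 10)))
    return x + scale * noise
```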
## IV Benchmarks
As a benchmark for the newly introduced metrics, we apply audio compression and music source separation algorithms on the MUSDB18-HQ dataset [25], which provides 50 uncompressed stereo music signals sampled at 44.1 kHz. Audio compression and music source separation are specifically chosen as test cases as these are known to introduce both spatial and non-spatial artifacts on music signals.

Fig. 1: SSR of the estimate signals with respect to the pan parameters. Circular markers represent the experimental values. Dotted lines represent the theoretical values.

Fig. 2: SSR (top) and SRR (bottom) of the estimate signals with respect to the pan and right-channel delay parameters. Circular markers represent the experimental values with the horizontal axis slightly offset for readability. In the top plot, the dashdotted lines connect the median value of the SSR for each delay value; the gray area represents the theoretical range of the SSR.

Fig. 3: SSR (top) and SRR (bottom) of the estimate signals with respect to the cutoff frequencies and pan parameters. Circular markers represent the experimental values with the horizontal axis slightly offset for readability. In the top plot, the dotted lines represent the theoretical values of the SSR.
### _Codec_
As a benchmark, we apply AAC (FAAC 1.30; FAAD2 2.10.0-2), and Opus (libopus 1.3.1) to the test set of MUSDB18-HQ to investigate their impact on SSR and SRR at bitrates from 32 to 320 Kbps. For AAC, the changes in SSR and SRR at each bitrate compared to no joint encoding are shown in Figure 5. Additional plots are provided in the Supplementary Materials. For both AAC and Opus, both SSR and SRR increase as the bitrate increases. The trend in SSR is consistent with the literature on Opus [26, 27] where localization errors increase with decreasing bitrate. In AAC, both mid/side stereo (MS) and intensity stereo (IS) modes generally performed worse in SSR than no joint coding across most ABRs, except for the MS mode at very low ABRs of 32 Kbps and 40 Kbps. In particular, IS also consistently performed worse than MS up to an ABR of 192 Kbps. This is consistent with the knowledge that IS can cause severe spatial artifacts, especially for low-frequency content with decorrelated spatial images [6, 7, 28]. In terms of SRR, which effectively measures the non-spatial fidelity of the codec, the MS mode performs better than no joint coding up to about 112 Kbps while intensity stereo only performs better than no joint coding below 64 Kbps. It is expected that no joint coding performs better than joint coding from about 128 Kbps onwards since joint coding can introduce unnecessary information loss at these bitrates [7].
### _Music Source Separation_
To benchmark the proposed metrics on the music source separation task, we apply Hybrid Demucs (v3) [29], ConvTasNet [30], OpenUnmix [31], and Spleeter [32] on the test set of MUSDB18-HQ. OpenUnmix and Spleeter perform separation in the time-frequency domain using real-valued channel-wise masks on the complex-valued short-time Fourier transform (STFT) spectrogram. ConvTasNet similarly performs real-valued mask-based separation but on a learnable real-valued basis transform. Hybrid Demucs is the only model tested here that does not utilize masking, instead modifying the time-domain signal and the STFT representation directly in its time and time-frequency branches, respectively. Note that OpenUnmix and Spleeter were optimized in the time-frequency domain without considering phase information, while ConvTasNet and Hybrid Demucs were optimized in the time domain.
The performance of the models is shown in Figure 6. The suffix behind the model name refers to the pre-trained weight variants provided by the model developers3. The SRR values, which effectively act as a non-spatial counterpart of the SDRs, are consistent with the SDRs reported in literature. In terms of SSR, Demucs performs the best among the tested models, while other models perform approximately on par with one another. We surmise that the superior performance of Demucs may be due to either or a combination of (i) its non-masking nature, (ii) the use of a direct time-domain processing branch, and/or (iii) direct optimization in the time domain.
Footnote 3: The training data of HDemucs:extra also contains the test set of MUSDB18-HQ and thus may not provide a fair comparison to the rest of the models, which have not seen the test set in their training data.
## V Conclusion
In this work, we proposed a spatial evaluation method using a filter decomposition technique based on the duplex theory of spatial hearing. Tests on common signal degradations demonstrated relatively robust performance. The proposed method is additionally benchmarked on audio compression algorithms and music source separation algorithms, showing results consistent with the perceptual literature. An open-source Python implementation of the proposed method is provided.
Fig. 4: SSR (top) and SRR (bottom) of the estimate signals with respect to the pan and SNR parameters. Circular markers represent the experimental values with the horizontal axis slightly offset for readability. The dotted lines represent the theoretical values.
Fig. 5: Change in SSR and SRR of the test signals compressed by AAC, relative to the operating mode without joint encoding, by operating mode and average bitrates.
Fig. 6: Evaluation results on the MUSDB18-HQ test set. |
2307.07049 | MegaWika: Millions of reports and their sources across 50 diverse
languages | To foster the development of new models for collaborative AI-assisted report
generation, we introduce MegaWika, consisting of 13 million Wikipedia articles
in 50 diverse languages, along with their 71 million referenced source
materials. We process this dataset for a myriad of applications, going beyond
the initial Wikipedia citation extraction and web scraping of content,
including translating non-English articles for cross-lingual applications and
providing FrameNet parses for automated semantic analysis. MegaWika is the
largest resource for sentence-level report generation and the only report
generation dataset that is multilingual. We manually analyze the quality of
this resource through a semantically stratified sample. Finally, we provide
baseline results and trained models for crucial steps in automated report
generation: cross-lingual question answering and citation retrieval. | Samuel Barham, Orion Weller, Michelle Yuan, Kenton Murray, Mahsa Yarmohammadi, Zhengping Jiang, Siddharth Vashishtha, Alexander Martin, Anqi Liu, Aaron Steven White, Jordan Boyd-Graber, Benjamin Van Durme | 2023-07-13T20:04:02Z | http://arxiv.org/abs/2307.07049v1 | # MegaWika: Millions of reports and their sources across 50 diverse languages
###### Abstract
To foster the development of new models for collaborative AI-assisted report generation, we introduce MegaWika, consisting of 13 million Wikipedia articles in 50 diverse languages, along with their 71 million referenced source materials. We process this dataset for a myriad of applications, going beyond the initial Wikipedia citation extraction and web scraping of content, including translating non-English articles for cross-lingual applications and providing FrameNet parses for automated semantic analysis. MegaWika is the largest resource for sentence-level report generation and the only report generation dataset that is multilingual. We manually analyze the quality of this resource through a semantically stratified sample. Finally, we provide baseline results and trained models for crucial steps in automated report generation: cross-lingual question answering and citation retrieval.
## 1 Introduction
There is a surge in popular demand for collaborative AI based on large language models, such as in the authoring of new documents. In this work we introduce a resource meant to foster the development of collaborative authoring of _reports_ based on _multilingual sources_ of information. This resource, MegaWika, is constructed from the largest open, collaborative report authoring dataset in the world, Wikipedia. MegaWika comprises more than 13 million Wikipedia articles across 50 languages. To accomplish this, Wikipedia passages and referenced web source materials are extracted, automatically translated into English, semantically analyzed, and their source materials are scraped and automatically cleaned. Finally, for each of the 71 million passage/source pairs, questions are extracted, yielding more than 120 million automatically-generated question/answer (qa) pairs.
Unlike most other similarly structured datasets, where data typically comes only from some homogeneous, well-behaved corpus (such as Wikipedia exclusively, or collections of news text), our
Wikipedia context passages are each tied to a related document taken from the Internet as a whole, leading to a collection that is stylistically and structurally diverse. Moreover, as the non-English documents were not generated automatically through machine translation, they may be expected to better resemble the corpora targeted by real-world cross-lingual question answering (XLQA) systems as well as cross-lingual information retrieval (CLIR) applications. MegaWika also enables high-quality model pretraining for many tasks, which we exemplify in Section 5.
Automatic processes lead to errors, and in addition, it is not guaranteed that sources cited by the author of a Wikipedia article do in fact represent high quality citations. We perform a number of investigations on the quality of the current resource, and describe steps taken to improve on initial Wikipedia extractions. The full collection is 1.1TB in size, hosted on HuggingFace's dataset repository to allow for easy usage through dataset streaming.1
Footnote 1: [https://huggingface.co/datasets/hltcoe/megawika](https://huggingface.co/datasets/hltcoe/megawika)
In summary:
1. We introduce MegaWika, a naturally cross-lingual dataset consisting of over 120 million English question/answer pairs spread over more than 50 languages, the largest and most diverse resource of its kind. The 50 languages selected for MegaWika were chosen thoughtfully to include examples from a wide range of language families.
2. We provide a novel quantitative analysis of Wikipedia citations' crosslinguality in Section 4. Previous quantitative analyses of Wikipedia's citation behavior across languages have relied mostly on information that can be deduced from the URLs of the citations themselves (e.g., Lewoniewski et al. (2017)); ours relies on scraping and processing a substantial subset of the web citations across the 50 chosen Wikipedias.
3. We provide a semantic analysis for all Wikipedia content, allowing for structured exploration and semantic filtering.
4. We release manual annotations reflecting the level of evidential support between a source and Wikipedia-based questions, to enable building models for automatic assessment.
5. We illustrate the use of MegaWika in information seeking tasks including cross-lingual QA and citation retrieval, with associated model artifacts released.
## 2 Dataset Collection
The collection approach is as follows:
1. **Identify passages** in Wikipedia, i.e., 1-3 sentences with a trailing external web citation.2 Footnote 2: Wiki dumps for each language were downloaded between Mar. 25 of 2022 (English) and Oct. 20, 2022 (Irish); most Wiki dumps were downloaded in April of 2022, which should be considered MegaWika's effective knowledge cutoff.
2. **Scrape** raw HTML from the cited web page, and **Extract** the human-readable **content** from the scraped page, discarding elements extraneous to the underlying source document (navigation bars, menus, advertisements, etc.) - the result we call a _source document_. Here we leverage Trafilatura, an open-source library for web content extraction (Barbaresi, 2021), as sketched just after this list. The process took approximately two months on 11 high-memory-bandwidth AWS instances.
3. In the case of non-English Wikipedia, **translate** the Wikipedia passage into English: this allows MegaWika to serve both as a monolingual resource in each of 50 languages, as well as a cross-lingual resource centered on English.3 Footnote 3: Future work can consider replicating just the translation component atop our effort, in order to build a cross-lingual artifact centered on a non-English language.
4. **Extract events** using the LOME FrameNet parser (Xia et al., 2021); these events correspond with high likelihood to semantically salient factual information in the passage, and are thus relevant answers to natural questions.
5. **Automatically generate questions** based on the English Wikipedia passage (or its translation). In the case of question/answer pairs generated from a translated passage, following the data projection methods in Yarmohammadi et al. (2021), we **align the translated passage
with its non-English original and determine the most likely non-English span corresponding to the English answer.
6. **Locate the answers** to those questions as spans in the cited source document; if they are present, we have an _exact answer match_ in the source.
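A minimal sketch of the scrape-and-extract step (step 2 above), using the Trafilatura library cited there; the URL is purely illustrative and error handling is reduced to returning `None`.

```python
import trafilatura

def scrape_source(url):
    downloaded = trafilatura.fetch_url(url)   # raw page, or None on failure
    if downloaded is None:
        return None
    # keep the human-readable content, dropping navigation, menus, ads, etc.
    return trafilatura.extract(downloaded)

source_text = scrape_source("https://example.org/some-cited-page")
```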
In the following we focus on three key aspects of the pipeline -- **machine translation**, **question/answer generation**, and **semantic analysis**.
**Translate Passages into English.** For each of the 49 non-English languages, we translate their collected Wikipedia passages into English, storing the results alongside the original language version of the passages. Each passage is split into sentences using spaCy, relying on language-specific models where possible (Honnibal et al., 2020). Then, each sentence is translated using M2M-100, a powerful, open-source machine translation system that focuses on balancing data for language pairs beyond English, scaling up to 100 languages (Fan et al., 2021).4 Throughout the dataset collection cycle, we observed that Google Translate often produces higher-quality translations, particularly in low-resource languages. As a result, we provide Google translations (not M2M-100 translations) for the 10 lowest-frequency languages, and an updated release of MegaWika will contain dual translations for all languages. Details and statistics of the Google translation data are in the supplementary materials.
Footnote 4: We use the 418 million-parameter model, with the standard 128k sentence piece tokens, a beam size of 5, and we allow the max number of tokens in a sentence to be clipped to 1,000.
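A minimal sketch of the per-sentence translation step, assuming the publicly released 418M-parameter M2M-100 checkpoint on HuggingFace (cf. footnote 4); the example sentence is illustrative only.

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

def translate_to_english(sentence, src_lang):
    tokenizer.src_lang = src_lang                       # e.g., "ru"
    encoded = tokenizer(sentence, return_tensors="pt")
    generated = model.generate(
        **encoded,
        forced_bos_token_id=tokenizer.get_lang_id("en"),
        num_beams=5,                                    # beam size of 5, as in footnote 4
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

print(translate_to_english("Это пример предложения.", src_lang="ru"))
```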
**Question-Answer Pair Generation.** We generate questions based on the English versions of Wikipedia passages using PAQ (Lewis et al., 2021), a system that outputs factoid questions given a document.5 PAQ involves four steps: (1) passage selection, (2) answer extraction, (3) question generation, (4) filtering. The passage selector is a RoBERTa (Liu et al., 2019) model that is trained to select passages with information for factoid questions. The answer extractor is a BERT (Devlin et al., 2019) model that is trained to detect spans of text that are likely answers to questions. The question generator is a BART (Lewis et al., 2020) model fine-tuned on NQ (Kwiatkowski et al., 2019), TriviaQA (Joshi et al., 2017), and SQuAD (Rajpurkar et al., 2016). This model generates a question conditioned on the selected passage and an extracted answer. The filtering step only keeps questions that are unambiguous. We omit this step here, as we want to include questions that are not directly answered by the Wikipedia article -- but could be answered by the cited source web page. Finally, we record all spans in the source document that exactly match the generated question's answer span.
Footnote 5: In early development we developed a similar generation model independently, but then embraced the PAQ model to have alignment across these related artifacts based on Wikipedia.
Figure 1: The MegaWika collection process, illustrated over a Russian Wikipedia example. Wikipedia articles are split into passages with citations, and the original Wikipedia article is translated and the source link is scraped and kept in the original language. Note that there are two additional steps not shown for clarity: question generation from the machine translated Wikipedia article and question-answer span alignment.

**Semantic Analysis.** To enable semantic-based corpus analysis, we turn to FrameNet (Baker et al., 1998). A frame is defined as a concept that describes some event, relation, or entity. Each frame is associated with a set of roles, which are triggered by certain spans in the sentence. Each passage is parsed using the LOME FrameNet parser (Xia et al., 2021), which predicts which spans of text evoke frames and their associated roles. These annotations enable structured exploration and semantics-based sampling of MegaWika (cf. the supplemental materials for analysis and statistics).
## 3 Dataset Description
**Structure.** Each entry in MegaWika comprises a single Wikipedia article, along with a list of all its extracted passages. Each entry in this list is a dictionary containing: the passage text; its machine translation into English (where appropriate); content extracted from the web source it cites; generated question/answer pairs; and the passage's FrameNet analysis. The full hierarchical structure of an individual entry is specified in detail on MegaWika's dataset card, which is hosted at MegaWika's homepage on HuggingFace. It is also described in Appendix A of the supplementary materials.
**Statistics.** The current version of MegaWika spans some 128 million question/answer pairs, spread across approximately 71 million context-passage pairs (each consisting of (1) a Wikipedia passage, and (2) the core textual content of the linked web page, cited in the Wikipedia passage, which we call a source document). The Wikipedia passages are drawn from English Wikipedia and 49 of the largest non-English Wikipedias (Figure 2). These 49 languages were selected both for the scale of their Wikipedias as well as to ensure coverage of a diverse set of language families.
## 4 Analysis and Evaluation
**Crosslinguality.** Automatic language identification (LID) using PyCLD2 revealed two interesting phenomena, both of which illuminate the inherent cross-lingual nature of Wikipedia source citations. First, while English Wikipedia cites non-English sources quite frequently, a full 11% of online source citations - more than 2 million citations - were to non-English web documents; these web documents span a great diversity of languages, running the gamut from high-resource languages to very low-resource languages. Second, across the 49 non-English Wikipedias, the _majority_ of online source citations were to languages _other_ than the Wikipedia's native language. In fact, 48% were to English web sources, and 19% were to other languages (i.e., other than the language of that Wikipedia version). Only the remaining 33% of citations were to documents in the same language as the Wikipedia passage itself. This phenomenon was found to be most concentrated in low-resource languages: for example, Xhosa, Pashto, and Khmer citations are nearly 90% English; in fact, Xhosa cites no Xhosa sources at all. This is seen even in some high-resource languages. For example, Arabic Wikipedia only cites Arabic websites 9% of the time, Chinese Wikipedia only cites Chinese websites 12% of the time, and Farsi Wikipedia cites Farsi web documents 28.5% of the time (Figure 3).
Footnote 6: Found here: [https://pypi.org/project/pycld2/](https://pypi.org/project/pycld2/)
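A small sketch of the language-identification step with the pycld2 package linked above; the input string is illustrative.

```python
import pycld2

is_reliable, _bytes_found, details = pycld2.detect("Ceci est un exemple de texte.")
# details holds up to three (language name, ISO code, percent, score) tuples
top_language_code = details[0][1] if is_reliable else "un"
```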
**Viewer.** We provide a viewer for manual exploration, hosted in a HuggingFace space. It allows filtering Q/A pairs by triggered FrameNet frames, and inspection of cited source documents (Figure 4).
Figure 2: MegaWika passage/source counts across languages, labeled by ISO 639-1 language codes. The Y-axis is on a log scale. On average, each passage/source pair yields 1.8 question/answer pairs.
### Manual Evaluation
We perform multiple rounds of manual analysis on MegaWika: (1) we first stratify sample passages according to semantic events, asking crowdworkers to verify when highlighted events are supported by the source; then (2) we provide an author-based analysis of a subset of these examples to determine quality of each step of the pipeline; then (3) we devise a second protocol for crowdsource annotation that is based on question answering, which we prove out on the subset examined by the authors; finally (4) we collect a development and test set under this protocol, in order to enable future model-based data filtering and automatic scoring of how well content supports a passage.
Figure 3: Distribution of the source documents by Wikipedia language, labeled by ISO 639-1 language codes.

Figure 4: Example of the MegaWika data viewer; Afrikaans article on MI6 in focus.

**Semantic Stratified Sampling.** To judge the quality of the source extraction with respect to the passage text, we take a sample of the native English portion of MegaWika. To ensure a diverse mix of types of described situations from the passage-source pairs, we sample passages based on the FrameNet frames predicted within those passages.7 We target 5 sampled passages that evoke each frame type: to achieve this we sample a number of passages for each frame based on the evaluated precision of the parser on that type. We set the number of sampled source documents \(D_{i}\) for frame \(f_{i}\), which has precision \(P_{i}\) on the FrameNet test set, as given by a negative binomial distribution:

\[\mathrm{D}_{i}=\begin{cases}\lceil\frac{5}{P_{i}}\rceil,&\text{if }\mathrm{count}(f_{i})\geq 10\\ \lceil\frac{5}{P_{avg}}\rceil,&\text{otherwise}\end{cases}\]
where \(P_{avg}\) is the average precision across all frames in the FrameNet corpus test set. This yields a semantically balanced subset of 3,504 documents, which we then use for human evaluation.
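A sketch of this sampling rule; `precision` maps frame types to parser precision on the FrameNet test set, and `count` is our reading of \(count(f_{i})\) as the number of test-set instances of the frame (both names are illustrative).

```python
import math

def num_to_sample(frame, precision, count, p_avg, target=5, min_count=10):
    p = precision[frame] if count[frame] >= min_count else p_avg
    return math.ceil(target / p)
```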
**Leveraging Frames to Judge Source Document Quality.** Is the information specified in each Wikipedia passage actually contained in a cited source document? For each of our 3,504 passages we highlight the textual span corresponding to the frame for which the passage was sampled. Given the highlighted span we ask annotators (with 3-way redundancy) the following question: _Does the source contain the exact same event highlighted in the passage?_
We use the majority prediction to determine whether the source contains the situation highlighted in the passage text and find that 48.2% of the scraped source texts contain the event highlighted in the Wikipedia passage text. This suggests that approximately half of the events mentioned in a passage can be explicitly traced back to the cited source.
**QA Evaluation.** We then consider the source as a basis for answering questions. We conducted an evaluation on a random subsample of 150 entries from the same document sample, assessed by the authors themselves. We assessed: (1) the quality of the passage extraction, (2) the quality of the source document scrape, (3) the fluency of the generated question, (4) the reasonableness of the question, (5) the answerability of the question given the Wiki passage, (6) the answerability of the question given the source document, and (7) the correctness of the selected answer span. See Table 1, with inter-annotator agreement according to Fleiss's \(\kappa\). We have fair to moderate agreement across all categories of evaluation. We note that roughly half of the sources in this analysis do not provide clear support to information in the Wikipedia passage (answerability given the source: 1.67/3 = 55.67%), which resembles our finding of 48.23% of sources supporting a highlighted event in the previous analysis, based on FrameNet and crowdsourced assessment.
**Strength of Evidential Support Annotation.** Finally, for high-quality question-answer pairs from the previous QA evaluation, we further annotate the scalar _strength_ of evidential support from their corresponding source document. We extend a previous annotation protocol for Uncertain Natural Language Inference (UNLI) (Chen et al., 2020) in order to collect fine-grained labels. Crowdsource workers were presented with the interface shown in Figure 5, paired with a source article.8
\begin{table}
\begin{tabular}{l c c} \hline \hline Category & Avg. & \(\kappa\) \\ \hline Passage extraction & 4.47 / 5 & 0.25 \\ Source scrape & 3.96 / 5 & 0.38 \\ Question fluency & 4.43 / 5 & 0.25 \\ Quest. reasonableness & 3.84 / 5 & 0.20 \\ \hline Answerability/Wiki & 2.38 / 3 & 0.39 \\ Answerability/Source & 1.67 / 3 & 0.43 \\ Answer correctness & 2.30 / 3 & 0.37 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Author evaluation results.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Question & Answer & Score & Most Relevant Source Text \\ \hline What color changes & & & A marker on Riley’s RediRipe stickers detects a chemical called ethylene gas, which is released by fruit or vegetables as they ripen. As that happens, the sticker turns from white to blue. \\ \hline What did the international media want & interviews & 2 & In a day fraught with anxiety declining 8 interviews from various Radio, Print, and TV reporters one TV station wouldn’t take ”NO” for an answer. \\ \hline What is cramond island made of? & dolerite & 1 & Geologically, Craigleith is a laccolith, a dome-shaped igneous intrusion, composed of essexite, a rock popular for the manufacture of curling stones. \\ \hline \hline \end{tabular}
\end{table}
Table 2: Examples scored at different levels of answerability from the QA Evaluation, with the most relevant supporting evidence from the cited source documents provided here as illustration.
The left-hand side of Figure 6 shows that the strength of support from source documents in the MegaWika dataset is spread across levels of support: sources more or less support statements made in Wikipedia. As this analysis aligns with our in-house answerability annotation, we proceed to create a larger evaluation set that focuses on high-quality questions and supporting source documents to support future model-based scoring of sources. We filter instances that contain citation-like or table-like text (content with many "]" and "-" tokens), sample 2.5k instances, and crowdsource-annotate them in a similar way, with 2-way redundancy. These constitute a larger evaluation set with 1.5k validation and 1k test instances. The distribution of labels from the validation set is on the right-hand side of Figure 6. 380 instances were determined by at least one annotator as "unanswerable" through either or both of the two checkboxes showcased in Figure 5; these we assigned a supporting strength of 0.
Footnote 9: Of these 380 instances, 207 are marked as “does not make sense” by at least one of the two annotators, and 199 are marked as “irrelevant” by at least one of the two annotators.
## 5 Example Applications
MegaWika is intended to enable development of models for assisting authors of reports, such as Wikipedia writers needing to locate salient information for their articles. We illustrate two example tasks for finding information: (1) cross-lingual question answering (QA), and (2) citation retrieval.
### Multilingual Question Answering
MegaWika contains more examples than any previous multilingual QA dataset, including XQuAD (11 languages, Artetxe et al. (2020)), TyDiQA (11 languages, Clark et al. (2020)), and MKQA (26 languages, Longpre et al. (2021)). Furthermore, unlike MKQA and XQuAD, our passages are found naturally on Wikipedia and are not translated from English for research purposes. To demonstrate MegaWika's effectiveness for multilingual question answering, we evaluate on XQuAD, subsetting our corpus to only contain the same 11 languages.

Figure 5: Interface for annotating evidential support.

Figure 6: Left: Strength of Evidential Support label distribution for each of the three answerability levels from previous QA evaluation (3 – indisputably answerable and the answer is correct, 1 – completely unanswerable). Light / dark shade covers datapoints within 1 / 1.5 IQR of each category, and the black bar denotes the median. Right: Strength of Evidential Support label distribution on the extended 1.5K English validation set.
**Experiment Settings.** In order to gather a high-quality subset of the MegaWika data, we first filter the data to contain only (question, passage, answer) pairs where the answer can be found with exact string match in the passage. Note that this may exclude some aliases, but in this analysis we focus on high precision and leave higher recall for future work.
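A sketch of this exact-match filter; the field names are illustrative and, as noted above, answer aliases are deliberately not handled.

```python
def exact_match_filter(examples):
    return [
        ex for ex in examples
        if ex["answer"] and ex["answer"] in ex["passage"]
    ]
```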
We then conduct several rounds of self-filtering, repeating the following process: (1) we train XLM-R (Conneau et al., 2020) and mBERT (Devlin et al., 2019) on the data until performance plateaus, using 10k sampled instances as a validation set. (2) We use those trained models to filter the data, keeping only instances which are correctly predicted by either of our two trained models or an English DistilBERT trained on English SQuAD (Sanh et al., 2019). (3) We recompile the data into the next dataset and repeat the process. After each iteration we sample 500k instances per language from the dataset to provide the training data for the next iteration of the QA models (except for DistilBERT).
**Results.** We use the final XLM-R base model from the previous step and evaluate following the translate-train setup in XQuAD (Artetxe et al., 2020). Our results (Table 3) show that each round of filtering produces a higher-quality model; after several rounds of filtering, it achieves higher performance than training on XQuAD. This supports the claim that MegaWika can be used effectively towards building high-quality multilingual QA systems. Furthermore, in this example we limited our experiments to the 11 languages that are used in XQuAD, but MegaWika also exists for 39 other languages, including many low-resource languages where this data will be particularly valuable.
### Multilingual Information Retrieval for Citations
Another example application of MegaWika is for large-scale multilingual information retrieval.
**Experiment Settings.** We first filter the data to avoid train/test leakage and improve dataset quality (similar to Section 5.1). As in our filtering during manual evaluation for evidentiary support, we start by removing all instances that contain citation-like or table-like text, as indicated in Wikipedia by many "!" and "-" tokens. As some Wikipedia pages may have similar text (and sources) across languages, we select instances for the evaluation sets, and then remove all other instances from the corpus that are linked through Wikipedia's language links. As the size of the data is extremely large, removing these instances from the evaluation data has only a minor effect on corpus size and allows us to avoid data leakage from similar texts in a different language.
We follow this selection process to gather 1k instances per language, or as much as available, and 10k of English per evaluation set and 50k instances per language for training. We use the set of all source documents as the retrieval collection, consisting of 72M passages.
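A rough sketch of these cleaning and anti-leakage heuristics is given below; it is not the released pipeline, and the field names (`id`, `passage`, `language_links`) are illustrative assumptions rather than the actual corpus schema.

```python
"""Sketch of the retrieval-data cleaning described above (illustrative only)."""
from typing import Dict, List, Set

def looks_like_table_or_citation(passage: str, threshold: float = 0.2) -> bool:
    # Scraped Wikipedia tables/citations surface as runs of "!" and "-" tokens.
    tokens = passage.split()
    noisy = sum(tok in ("!", "-") for tok in tokens)
    return bool(tokens) and noisy / len(tokens) > threshold

def build_splits(instances: List[Dict], eval_ids: Set[str]) -> Dict[str, List[Dict]]:
    clean = [ex for ex in instances if not looks_like_table_or_citation(ex["passage"])]
    evals = [ex for ex in clean if ex["id"] in eval_ids]
    # Drop training instances connected to any evaluation instance via Wikipedia language links,
    # so near-duplicate passages in other languages cannot leak into training.
    linked = {link for ex in evals for link in ex.get("language_links", [])} | eval_ids
    train = [ex for ex in clean
             if ex["id"] not in eval_ids
             and not ({ex["id"], *ex.get("language_links", [])} & linked)]
    return {"train": train, "eval": evals}
```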
As the number of languages in MegaWika far exceeds that of any previous retrieval model, the closest baseline is the multilingual mDPR model used in Zhang et al. (2022), although it was used for multilingual retrieval rather than cross-lingual retrieval. Note that other cross-lingual models are at most trained on 5 languages, making them a poor comparison. We note that mDPR from MiRACL was trained on XOR QA (Asai et al., 2021) (which was adapted from TyDiQA (Clark et al., 2020)). To allow meaningful comparison, we use the same base model and training process as that in MiRACL, simply changing the training data.
**Results** Table 4 shows that our citation-finding version of mDPR greatly outperforms the baseline, by around 300%. We further note that precision at 1000 (the cutoff typically used for re-ranking) is
\begin{table}
\begin{tabular}{c c|c c c} \hline \hline XQuAD Zero-Shot & XQuAD Translate-Train & Round 1 & Round 2 & Round 3 \\ \hline
65.5 & 77.0 & 73.1 & 76.6 & 78.1 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Selecting high-quality question answering data from MegaWika through self-filtering, scored using exact-match on the XQuAD test set with XLM-R base. We see that one can take MegaWika and sub-select high-quality examples for cross-lingual question answering.
at 48.6. Our citation mDPR also includes a much wider set of languages compared to any previous open-source dense retrieval model, with 50 languages used in training compared to 18 in MiRACL.
While these experiments are meant as baseline illustrations of what MegaWika will enable, these models may already be useful to other researchers and will be released along with the data resource.
## 6 Related Work
**Automated Report Generation** Report generation can have many definitions: Chen et al. (2021) uses Wikipedia articles and tables for data-to-text report generation, while there also exists a large amount of work on taking medical image data and creating medical reports (Li et al., 2019; Yang et al., 2021; Liu et al., 2021, 2021). Other work has derived datasets consisting of question-answer pairs for various usages from Wikipedia articles, which we adopted in our question generation approach (Lewis et al., 2021).
As language models have improved, recent attention has turned towards examining the citations found on Wikipedia as part of report generation. Recent work on this topic has improved the way that these reports cite information through better fact checking (Kamoi et al., 2023; Petroni et al., 2022).
Qian et al. (2023), concurrent to this work, is the closest to ours, as they focus on textual report generation in an open-domain setting. They also build a novel dataset, but like Lewis et al. (2021) use English Wikipedia only. There are several other crucial distinctions between our work and theirs: (1) our work scrapes the citations at the sentence level as compared to scraping the citations section and attempting to match them post-hoc, (2) our work aims to enable collaborative report generation, where AI models assist by finding references and suggesting content for a report one portion at a time (while their work provides a Wikipedia article title and asks the model to generate the full article), and (3) although their raw dataset contains more instances, their data available for training is an order of magnitude smaller than ours (30M English passages in our retrieval collection while they provide 3M).
**Using Multilingual Wikipedia** As Wikipedia is available in many languages, much research has used it for various multilingual applications. Some of these resources (like XQuAD or MKQA (Liu et al., 2019; Artetxe et al., 2020)) use automated or human translations of the English versions, some have used language links (Lewis et al., 2020) and others have used multilingual Wikipedia versions with entirely different questions across languages (e.g. TyDiQA, Clark et al. (2020)).
In the information retrieval setting, very few works have used Wikipedia for large-scale multilingual retrieval. Asai et al. (2021) extended TyDiQA to the cross-lingual open-retrieval question-answering setting (11 languages) while MiRACL extended it further to include 18 languages (Zhang et al., 2022). Our work is different in that we provide information retrieval data for citation-finding, rather than standard web-retrieval. Furthermore, we provide a much larger corpus and include 50 languages.
## 7 Conclusions
We presented MegaWika, a large-scale cross-lingual report generation dataset. We described the collection pipeline necessary to construct such a dataset, analyzed the quality of the resource through three distinct human evaluations, and gave examples of ways MegaWika may be used in downstream tasks. We release this data and associated artifacts through HuggingFace with a custom data browser.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline Model & P@5 & P@10 & P@100 & P@1000 & MRR \\ \hline mDPR (MiRACL) & 5.7 & 6.9 & 11.7 & 18.5 & 4.3 \\ mDPR (Ours) & 18.0 & 21.3 & 33.6 & 48.6 & 14.1 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results for multilingual information retrieval. Our version of mDPR trained with the same architecture is much more adept at finding sources for citations, as measured by precision (P) and mean reciprocal rank (MRR) at 10. MegaWika also enables retrieval for 50 languages, which is 2x more than existing IR collections. |
2307.09064 | Newtonian Program Analysis of Probabilistic Programs | Due to their quantitative nature, probabilistic programs pose non-trivial
challenges for designing compositional and efficient program analyses. Many
analyses for probabilistic programs rely on iterative approximation. This
article presents an interprocedural dataflow-analysis framework, called
NPA-PMA, for designing and implementing (partially) non-iterative program
analyses of probabilistic programs with unstructured control-flow,
nondeterminism, and general recursion. NPA-PMA is based on Newtonian Program
Analysis (NPA), a generalization of Newton's method to solve equation systems
over semirings. The key challenge for developing NPA-PMA is to handle multiple
kinds of confluences in both the algebraic structures that specify analyses and
the equation systems that encode control flow: semirings support a single
confluence operation, whereas NPA-PMA involves three confluence operations
(conditional, probabilistic, and nondeterministic).
Our work introduces $\omega$-continuous pre-Markov algebras ($\omega$PMAs) to
factor out common parts of different analyses; adopts regular infinite-tree
expressions to encode program-execution paths in control-flow hyper-graphs; and
presents a linearization method that makes Newton's method applicable to the
setting of regular-infinite-tree equations over $\omega$PMAs. NPA-PMA allows
analyses to supply a non-iterative strategy to solve linearized equations. Our
experimental evaluation demonstrates that (i) NPA-PMA holds considerable
promise for outperforming Kleene iteration, and (ii) provides great generality
for designing program analyses. | Di Wang, Thomas Reps | 2023-07-18T08:37:02Z | http://arxiv.org/abs/2307.09064v2 | # Newtonian Program Analysis of Probabilistic Programs
###### Abstract.
Due to their _quantitative_ nature, probabilistic programs pose non-trivial challenges for designing compositional and efficient program analyses. Many analyses for probabilistic programs rely on _iterative_ approximation. This article presents an interprocedural dataflow-analysis framework, called _NPA-PMA_, for designing and implementing (partially) _non-iterative_ program analyses of probabilistic programs with unstructured control-flow, nondeterminism, and general recursion. NPA-PMA is based on Newtonian Program Analysis (NPA), a generalization of Newton's method to solve equation systems over semirings. The key challenge for developing NPA-PMA is to handle multiple kinds of _confluences_ in both the algebraic structures that specify analyses and the equation systems that encode control flow: semirings support a single confluence operation, whereas NPA-PMA involves three confluence operations (conditional, probabilistic, and nondeterministic).
Our work introduces _\(\omega\)-continuous pre-Markov algebras_ (\(\omega\)PMAs) to factor out common parts of different analyses; adopts _regular infinite-tree expressions_ to encode program-execution paths in control-flow hypergraphs; and presents a _linearization_ method that makes Newton's method applicable to the setting of regular-infinite-tree equations over \(\omega\)PMAs. NPA-PMA allows analyses to supply a non-iterative strategy to solve linearized equations. Our experimental evaluation demonstrates that (i) NPA-PMA holds considerable promise for outperforming Kleene iteration, and (ii) provides great generality for designing program analyses.
|
2305.07630 | Dynamic Dark Energy from the Local Limit of Nonlocal Gravity | Nonlocal gravity (NLG), a classical extension of Einstein's theory of
gravitation, has been studied mainly in linearized form. In particular,
nonlinearities have thus far prevented the treatment of cosmological models in
NLG. In this essay, we discuss the local limit of NLG and apply this limit to
the expanding homogenous and isotropic universe. The theory only allows
spatially flat cosmological models; furthermore, de Sitter spacetime is
forbidden. The components of the model will have different dynamics with
respect to cosmic time as compared to the standard $\Lambda$CDM model;
specifically, instead of the cosmological constant, the modified flat model of
cosmology involves a dynamic dark energy component in order to account for the
accelerated phase of the expansion of the universe. | Javad Tabatabaei, Abdolali Banihashemi, Shant Baghram, Bahram Mashhoon | 2023-05-12T17:33:51Z | http://arxiv.org/abs/2305.07630v3 | # Dynamic Dark Energy from the Local Limit of Nonlocal Gravity
###### Abstract
Nonlocal gravity (NLG), a classical extension of Einstein's theory of gravitation, has been studied mainly in linearized form. In particular, nonlinearities have thus far prevented the treatment of cosmological models in NLG. In this essay, we discuss the local limit of NLG and apply this limit to the expanding homogenous and isotropic universe. The theory only allows spatially flat cosmological models; furthermore, de Sitter spacetime is forbidden. The components of the model will have different dynamics with respect to cosmic time as compared to the standard \(\Lambda\)CDM model; specifically, instead of the cosmological constant, the modified flat model of cosmology involves a dynamic dark energy component in order to account for the accelerated phase of the expansion of the universe.
PACS: 04.20.Cv
The cosmological observations of cosmic microwave background (CMB) radiation [1] as well as large scale structure (LSS) surveys [2] indicate that in terms of energy, we live in a dark universe with about 70% dark energy, about 25% dark matter, and only about 5% visible matter. The source of the two main dark sectors is still a mystery to us since we only know about their gravitational effects. These two components of the universe contribute to the standard model of cosmology only through gravitation. The standard model of cosmology known as \(\Lambda\)CDM is based on the cosmological principle and Einstein's theory of general relativity (GR). This model assumes that the cosmological constant \(\Lambda\) is the cause of the accelerated universe [3] and dark matter is due to a cold component (CDM). This theory is truly ignorant about its two important players and yet gives us acceptable results for describing the universe on large scales as a homogenous coarse-grained fluid. The standard model also gives a reasonable connection between the universe in its early times and the one we observe at low redshifts through the distribution of structures in the universe [4].
The continuing failure of experiments to find the particles of dark matter naturally leads to the possibility that what appears as dark matter in astrophysics and cosmology is indeed an aspect of the gravitational interaction. While the endeavor to unmask the true identity of the dark sectors continues, there is a persistent notion that perhaps we do not know the gravitational interaction well enough even at the classical level. Einstein's theory of gravitation has been well tested and quite successful on the scales of the solar system and isolated binary star systems [5]; moreover, the detection of gravitational radiation is further evidence in support of GR in connection with binary systems [6; 7]. However, "dark" gravitational effects of unknown origin seem to appear on the scales of galaxies and beyond. A starting point to understand general relativity more deeply would be to investigate the physical basis of the assumptions that underlie Einstein's elegant theory in order to align it with reality.
As a field theory of gravitational interaction, general relativity has been patterned after Maxwell's electrodynamics. Maxwell originally formulated the basic equations of electromagnetism in terms of the electromagnetic fields \((\mathbf{E},\mathbf{B})\) and their excitations in a material
medium \(({\bf D},{\bf H})\). The latter contains the response of the medium in terms of its polarizability and magnetizability, respectively; therefore, there are constitutive relations that connect these fields. In their simplest forms, we have \({\bf D}=\epsilon\,{\bf E}\), where \(\epsilon\) is the electric permittivity of the medium and \({\bf B}=\mu\,{\bf H}\), where \(\mu\) is the corresponding magnetic permeability. Even in Maxwell's time, observations pointed to the necessity of nonlocal and possibly nonlinear constitutive relations [8; 9]. Indeed, history dependence must be taken into account in the description of the properties of material media; for instance, magnetic materials generally exhibit hysteresis. In the process of averaging the response of the atomic medium, the memory of past events must be taken into account. The resulting nonlocal constitutive relations contain kernels that incorporate the atomic and molecular physics of the background medium [10; 11; 12]. Similarly, it appears rather natural that in describing the large scale structure and evolution of the universe, averaging procedures may be necessary for cosmology. This leads to the problem of averaging spacetime; in this connection, see [13] and the references cited therein. On the other hand, nonlocality could be an intrinsic feature of the universal gravitational interaction [14].
Physics is local within the frameworks of special and general theories of relativity [15]. Here, locality means that the observer can make physical predictions on the basis of information contained in the local spacetime patch around itself; indeed, there is no need to keep track of the memory of past events. In extending Lorentz invariance to accelerated systems in Minkowski spacetime, Lorentz transformations are applied point by point along accelerated world lines. Acceleration, ignored locally, is taken into account only through the variation in the velocity of motion that appears in the pointwise Lorentz transformations. This locality principle is physically justified if the velocity of the accelerated system is essentially constant during an elementary act of measurement. For pointlike coincidences of classical point particles and rays of radiation, locality holds; however, according to the Huygens principle waves are intrinsically nonlocal. In connection with classical field measurements, Bohr and Rosenfeld have shown that spacetime averaging is necessary [16]. Furthermore, acceleration can deflect the observer's path from a geometrically defined geodesic, and it may rotate the observer's frame, diverting the frame from how it should naturally evolve along the path via parallel propagation; hence, accelerated motion involves local invariant scales of length and time. The challenge here mainly arises when we ask how an accelerated observer can measure wave phenomena and also how quantum effects are manifested in accelerated frames. It,
therefore, appears that the past history of the accelerated motion should in general be taken into account through a nonlocal kernel that depends upon acceleration. The nonlocality of accelerated systems eventually leads to nonlocal special relativity theory [17].
The cornerstone of general relativity, namely, Einstein's principle of equivalence, is based on the local connection between gravitation and inertia, the same inertia that governs the dynamics of accelerated systems. This circumstance suggests that gravity should be nonlocal as well. The most natural approach to a classical nonlocal generalization of GR would be to introduce history dependence into the theory through a nonlocal constitutive relation in close analogy with the nonlocal electrodynamics of media [18; 19]. To this end, we first need to express GR in a form that resembles electrodynamics. The key idea here is the introduction of an orthonormal tetrad frame field adapted to preferred observers in spacetime and the utilization of an extended geometric framework that employs both the Riemann curvature of spacetime and the Weitzenbock torsion of the preferred frame field. That is, in the extended framework we have one spacetime metric tensor and two metric-compatible connections: the standard torsion-free symmetric Levi-Civita connection and the curvature-free Weitzenbock connection defined via the preferred frame field. The Weitzenbock connection [20] renders the spacetime manifold parallelizable. In the framework of teleparallelism [21], two distant vectors are considered parallel if they have the same components with respect to their local preferred frame fields. It turns out that in this framework, Einstein's theory becomes the well-known teleparallel equivalent of general relativity (TEGR), which is the gauge theory of the four-parameter Abelian group of spacetime translations [22]. Therefore, TEGR, though nonlinear, is formally analogous to electrodynamics and then we can use our intuition about electrodynamics and modify TEGR.
To introduce history dependence within the extended GR framework, let us consider an admissible system of spacetime coordinates \(x^{\mu}\) in spacetime with metric
\[ds^{2}=g_{\mu\nu}(x)\,dx^{\mu}\,dx^{\nu}\,. \tag{1}\]
In this gravitational field, free test particles and null rays follow the geodesics of the spacetime manifold. In our convention, Greek indices run from 0 to 3, Latin indices run from 1 to 3, and the signature of the metric is +2; moreover, we employ units such that \(c=1\). We assume the existence of a preferred set of observers with adapted tetrads \(e^{\mu}{}_{\hat{\alpha}}(x)\) that are
orthonormal, namely,
\[g_{\mu\nu}(x)\,e^{\mu}{}_{\hat{\alpha}}(x)\,e^{\nu}{}_{\hat{\beta}}(x)=\eta_{ \hat{\alpha}\hat{\beta}}\,. \tag{2}\]
Here, indices without hats are normal spacetime indices, hatted indices refer to the tetrad axes in the local tangent space, and \(\eta_{\alpha\beta}=\text{diag}(-1,1,1,1)\) is the Minkowski metric tensor.
The curvature-free Weitzenbock connection is defined in terms of the tetrad frame field as
\[\Gamma^{\mu}_{\alpha\beta}=e^{\mu}{}_{\hat{\rho}}\;\partial_{\alpha}\,e_{\beta }{}^{\hat{\rho}}\,. \tag{3}\]
It simply follows from this definition that \(\nabla_{\nu}\,e_{\mu}{}^{\hat{\alpha}}=0\), where \(\nabla\) denotes covariant differentiation via the Weitzenbock connection. In this way, the preferred tetrad frames are parallel throughout spacetime and provide a natural scaffolding. The _torsion_ tensor corresponding to the Weitzenbock connection is the gravitational field strength within the framework of teleparallelism and is given by
\[C_{\mu\nu}{}^{\alpha}=\Gamma^{\alpha}_{\mu\nu}-\Gamma^{\alpha}_{\nu\mu}=e^{ \alpha}{}_{\hat{\beta}}\Big{(}\partial_{\mu}e_{\nu}{}^{\hat{\beta}}-\partial_{ \nu}e_{\mu}{}^{\hat{\beta}}\Big{)}\,. \tag{4}\]
A remark is in order here regarding the analogy between the gravitational field strength \(C_{\mu\nu}{}^{\hat{\alpha}}\) and the electromagnetic field strength \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\); that is, for each \(\hat{\alpha}=\hat{0},\hat{1},\hat{2},\hat{3}\) in the torsion tensor
\[C_{\mu\nu}{}^{\hat{\alpha}}=e_{\rho}{}^{\hat{\alpha}}C_{\mu\nu}{}^{\rho}= \partial_{\mu}e_{\nu}{}^{\hat{\alpha}}-\partial_{\nu}e_{\mu}{}^{\hat{\alpha}}\,, \tag{5}\]
we have an analogue of the electromagnetic field tensor defined in terms of the vector potential \(e_{\mu}{}^{\hat{\alpha}}\). The extended GR framework also contains the _contorsion_ tensor
\[K_{\mu\nu}{}^{\alpha}=\,^{0}\Gamma^{\alpha}_{\mu\nu}-\Gamma^{\alpha}_{\mu\nu}\,, \tag{6}\]
where \({}^{0}\Gamma^{\alpha}_{\mu\nu}\) denotes the symmetric Levi-Civita connection. It follows from the metric compatibility of the Weitzenbock connection that the contorsion tensor is related to the torsion tensor, namely,
\[K_{\mu\nu\rho}=\frac{1}{2}\left(C_{\mu\rho\nu}+C_{\nu\rho\mu}-C_{\mu\nu\rho} \right). \tag{7}\]
Therefore, the Levi-Civita connection given by the Christoffel symbol is the sum of the Weitzenbock connection and the contorsion tensor. The Einstein tensor and the gravitational field equations, via the Levi-Civita connection, can then be expressed in terms of the teleparallelism framework resulting in the teleparallel equivalent of GR, namely, TEGR [14].
Einstein's field equations expressed in terms of torsion thus become the TEGR field equations
\[\frac{\partial}{\partial x^{\nu}}\,\mathfrak{H}^{\mu\nu}{}_{\hat{\alpha}}+\frac{ \sqrt{-g}}{\kappa}\,\Lambda\,e^{\mu}{}_{\hat{\alpha}}=\sqrt{-g}\left(T_{\hat{ \alpha}}{}^{\mu}+\mathbb{T}_{\hat{\alpha}}{}^{\mu}\right), \tag{8}\]
where \(\Lambda\) is the cosmological constant, \(\kappa:=8\pi G\), and the auxiliary torsion field \(\mathfrak{H}_{\mu\nu\rho}\) is defined via the auxiliary torsion tensor \(\mathfrak{C}_{\alpha\beta\gamma}\) as
\[\mathfrak{H}_{\mu\nu\rho}:=\frac{\sqrt{-g}}{\kappa}\,\mathfrak{C}_{\mu\nu\rho} \,,\qquad\mathfrak{C}_{\alpha\beta\gamma}:=C_{\alpha}\,g_{\beta\gamma}-C_{ \beta}\,g_{\alpha\gamma}+K_{\gamma\alpha\beta}\,. \tag{9}\]
Here, \(C_{\mu}:=C^{\alpha}{}_{\mu\alpha}=-C_{\mu}{}^{\alpha}{}_{\alpha}\) is the torsion vector, \(T_{\mu\nu}\) is the symmetric energy-momentum tensor of matter, and \(\mathbb{T}_{\mu\nu}\) is the traceless energy-momentum tensor of the gravitational field
\[\mathbb{T}_{\mu\nu}:=(\sqrt{-g})^{-1/2}\left(C_{\mu\rho\sigma}\,\mathfrak{H}_{ \nu}{}^{\rho\sigma}-\tfrac{1}{4}g_{\mu\nu}\,C_{\rho\sigma\delta}\,\mathfrak{H} ^{\rho\sigma\delta}\right). \tag{10}\]
Inspired by the electrodynamics of nonlocal media, it is possible to introduce history dependence and equip TEGR with the memory of the past. In the resulting nonlocal theory of gravitation, the gravitational field would be locally defined but would satisfy partial integro-differential field equations. That is, we expect that the modified gravitational field equations would contain a certain average of the gravitational field over past events. Such a term should involve a weight or "kernel" function that in the case of electrodynamics is determined by the nature of the atoms and molecules of the medium. The main difference with the electrodynamics of nonlocal media has to do with the absence of the background atomic medium. Therefore, the nonlocal kernel of the theory in the case of gravitation must ultimately be determined by observation.
We can view the relationship between \(\mathfrak{H}_{\mu\nu\rho}\) and the torsion tensor in Equation (9) as the local constitutive relation of TEGR. In the nonlocal electrodynamics of media, the field equations of electrodynamics are not changed, only the constitutive relation is made nonlocal. In a similar fashion, in nonlocal gravity (NLG), only the constitutive relation of TEGR is modified. This means that we replace \(\mathfrak{H}\) in Equations (8) and (10) by \(\mathcal{H}\) given by
\[\mathcal{H}_{\mu\nu\rho}=\frac{\sqrt{-g}}{\kappa}(\mathfrak{C}_{\mu\nu\rho}+ N_{\mu\nu\rho})\,, \tag{11}\]
where \(N_{\mu\nu\rho}=-N_{\nu\mu\rho}\) is a tensor that is nonlocally related to the torsion tensor. That is, the components of \(N_{\mu\nu\rho}\) measured by the preferred observers of the theory with adapted tetrads \(e^{\mu}{}_{\hat{\alpha}}\) are associated with the corresponding measured components of \(X_{\mu\nu\rho}\) that is directly
connected to the torsion tensor, namely,
\[N_{\hat{\mu}\hat{\nu}\hat{\rho}}(x)=\int\mathcal{K}(x,x^{\prime})\,X_{\hat{\mu} \hat{\nu}\hat{\rho}}(x^{\prime})\sqrt{-g(x^{\prime})}\,d^{4}x^{\prime}\,, \tag{12}\]
where \(\mathcal{K}(x,x^{\prime})\) is the basic causal kernel of NLG and [14; 23; 24]
\[X_{\hat{\mu}\hat{\nu}\hat{\rho}}=\mathfrak{C}_{\hat{\mu}\hat{\nu}\hat{\rho}}+ \check{p}\,(\check{C}_{\hat{\mu}}\,\eta_{\hat{\nu}\hat{\rho}}-\check{C}_{\hat{ \nu}}\,\eta_{\hat{\mu}\hat{\rho}})\,. \tag{13}\]
Here, \(\check{C}^{\mu}\) is the torsion pseudovector defined via the Levi-Civita tensor \(E_{\alpha\beta\gamma\delta}\) by
\[\check{C}_{\mu}:=\frac{1}{3!}C^{\alpha\beta\gamma}\,E_{\alpha\beta\gamma\mu} \tag{14}\]
and \(\check{p}\neq 0\) is a constant dimensionless parameter. The resulting theory of nonlocal gravity (NLG) is rather intricate and no exact solution is known except for the trivial result that the spacetime is Minkowskian in the absence of gravity. Furthermore, de Sitter spacetime is not an exact solution of NLG [24]. On the other hand, linearized NLG has been extensively studied [14]; in fact, in the Newtonian regime of NLG, it has been possible to account for the rotation curves of nearby spiral galaxies as well as for the solar system data [25; 26; 27; 28]. Thus far, the nonlinearity of NLG has made it impossible to find exact solutions for the strong-field regimes such as those involving black holes or cosmological models. However, it is possible that features of NLG survive in the local limit of the theory, just as in electrodynamics the local permittivity \(\epsilon(x)\) and permeability \(\mu(x)\) functions can preserve features of nonlocal electrodynamics of media.
The local limit of Equation (12) can be obtained by assuming that the kernel is proportional to the 4D Dirac delta function, namely,
\[\mathcal{K}(x,x^{\prime}):=\frac{S(x)}{\sqrt{-g(x)}}\,\delta(x-x^{\prime})\,, \tag{15}\]
where \(S(x)\) is a dimensionless scalar function that must be determined based on observational data [29]. In this case, the nonlocal constitutive relation (11) reduces to \(N_{\mu\nu\rho}(x)=S(x)X_{\mu\nu\rho}\); that is,
\[N_{\mu\nu\rho}(x)=S(x)\,[\mathfrak{C}_{\mu\nu\rho}(x)+\check{p}\,(\check{C}_{ \mu}\,g_{\nu\rho}-\check{C}_{\nu}\,g_{\mu\rho})] \tag{16}\]
and the constitutive relation takes the form
\[\mathcal{H}_{\mu\nu\rho}=\frac{\sqrt{-g}}{\kappa}[(1+S)\,\mathfrak{C}_{\mu \nu\rho}+S\,\check{p}\,(\check{C}_{\mu}\,g_{\nu\rho}-\check{C}_{\nu}\,g_{\mu \rho})]\,. \tag{17}\]
Here, the susceptibility function \(S(x)\) is a characteristic of the background spacetime just as \(\epsilon(x)\) and \(\mu(x)\) are features of the medium in electrodynamics. For \(S(x)=0\), we recover TEGR; otherwise, we have a generalization of GR that we expect may retain features of NLG through \(S(x)\). Indeed, Equation (17) implies that to have GR as a limit, we must impose the requirement that \(1+S>0\). In this local limit of nonlocal gravity, explicit deviations from the locality have vanished; however, nontrivial aspects of NLG may have survived through \(S(x)\) that would be interesting to study. We, therefore, explore the cosmological implications of this local limit of NLG.
The very first application of the local limit of NLG to the expanding universe has a dramatic result and implies spatial flatness. We begin with the FLRW metric for a dynamic (expanding or contracting) universe that is homogeneous and isotropic and assume that \(S\) is only a function of _cosmic time_\(t\) with \(dS/dt\neq 0\). That the susceptibility \(S\) should be time dependent is consistent with the assumption of a dynamic universe model. The field equations of the local limit of NLG imply that the only model consistent with these assumptions is the spatially _flat_ model. Therefore, we assume a metric of the form
\[ds^{2}=-dt^{2}+a^{2}(t)\,\delta_{ij}\,dx^{i}\,dx^{j}\,, \tag{18}\]
where \(a(t)\) is the scale factor. The gravitational field equations imply that our modified flat model is governed by
\[3(1+S)\left(\frac{\dot{a}}{a}\right)^{2}=\Lambda+8\pi G\rho \tag{19}\]
and
\[2(1+S)\frac{\ddot{a}}{a}+(1+S)\left(\frac{\dot{a}}{a}\right)^{2}=\Lambda-8 \pi GP-2\frac{dS}{dt}\frac{\dot{a}}{a}\,, \tag{20}\]
where \(\rho(t)\) and \(P(t)\) are the energy density and pressure of the background cosmic medium, respectively. Moreover, we recall that \(1+S>0\); however, to ensure that this relation holds as \(S\) varies with cosmic time, we further assume that \(dS/dt>0\). Next, differentiating Equation (19) with respect to cosmic time \(t\) and using Equation (20), we find
\[\frac{d\rho}{dt}=-3(\rho+P)\frac{\dot{a}}{a}-\frac{3}{8\pi G}\frac{dS}{dt} \left(\frac{\dot{a}}{a}\right)^{2}\,, \tag{21}\]
which implies, for \(\rho+P\geq 0\) and \(dS/dt>0\), that \(\rho\) monotonically decreases as the universe expands.
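Although the argument that follows is purely analytic, Eqs. (19) and (21) form a closed system once a susceptibility \(S(t)\) with \(dS/dt>0\) and a barotropic relation \(P=w\rho\) are assumed, and they can be integrated numerically. The sketch below is not part of the original analysis: the form of \(S(t)\) and all parameter values are illustrative, and units are chosen so that \(8\pi G=1\).

```python
# Numerical sketch of Eqs. (19) and (21) for an assumed susceptibility S(t) with dS/dt > 0
# and barotropic pressure P = w*rho.  S(t) and all parameter values are illustrative only;
# units are chosen so that 8*pi*G = 1.
import numpy as np
from scipy.integrate import solve_ivp

Lam, w = 0.7, 0.0                                  # cosmological constant, equation-of-state parameter
S  = lambda t: 0.1 * (1.0 - np.exp(-t))            # assumed susceptibility: S(0) = 0, dS/dt > 0
dS = lambda t: 0.1 * np.exp(-t)

def rhs(t, y):
    a, rho = y
    H = np.sqrt((Lam + rho) / (3.0 * (1.0 + S(t))))            # Eq. (19): 3(1+S)H^2 = Lambda + rho
    drho = -3.0 * (1.0 + w) * rho * H - 3.0 * dS(t) * H**2     # Eq. (21)
    return [a * H, drho]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 1.0], rtol=1e-8, atol=1e-10)
print(sol.y[1])    # the energy density decreases monotonically, as stated above
```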
Equations (19) and (20) govern the dynamics of the modified flat model of cosmology. In Equation (19), \(1+S\), \(\Lambda\), and \(\rho\) are all positive; hence, the universe expands forever and as
\(t\to\infty\), \(a(t)\) approaches infinity, while \(\rho\) and \(P\) approach zero. It appears that the universe is then dominated by the cosmological constant \(\Lambda\); however, the variation of \(S\) with cosmic time leads to inconsistency.
Let us assume that our model is dominated by the cosmological constant as \(t\to\infty\) and \(\rho=0\). Therefore, Equation (19) implies
\[\frac{\dot{a}}{a}=(\Lambda/3)^{1/2}(1+S)^{-1/2}\,. \tag{22}\]
Taking the derivative of this relation with respect to time, we find
\[\frac{\ddot{a}}{a}-\left(\frac{\dot{a}}{a}\right)^{2}=-\frac{1}{2}(\Lambda/3) ^{1/2}(1+S)^{-3/2}\,\frac{dS}{dt}\,. \tag{23}\]
Let us multiply this relation by \(2(1+S)>0\) and subtract the result from Equation (20) to get
\[3(1+S)\left(\frac{\dot{a}}{a}\right)^{2}=\Lambda-2\frac{dS}{dt}\frac{\dot{a}} {a}+(\Lambda/3)^{1/2}(1+S)^{-1/2}\,\frac{dS}{dt}\,. \tag{24}\]
Plugging Equation (19) in Equation (24) results in
\[2\frac{\dot{a}}{a}\frac{dS}{dt}=(\Lambda/3)^{1/2}(1+S)^{-1/2}\,\frac{dS}{dt}\,, \tag{25}\]
which contradicts Equation (22), since \(dS/dt\) does not vanish. Therefore, de Sitter spacetime is not a solution of our modified flat model. This circumstance is in conformity with the fact that de Sitter spacetime is not a solution of nonlocal gravity (NLG) [24]. The modified flat cosmological model under consideration here will never asymptotically approach a de Sitter phase.
To conclude our discussion of the local limit of nonlocal gravity (NLG), we have shown that the first approximation of NLG, namely its local limit, has an important effect on the evolution of the densities of the constituents of the universe with cosmic time. All of the components of the universe evolve differently from their standard \(\Lambda\)CDM prediction. Accordingly, the modified flat model replaces the cosmological constant with a dark energy component that has an evolving equation of state. An important goal of future cosmological studies is to determine whether the accelerated expansion of the universe is indeed driven by a cosmological constant. The fundamental nonlocal feature of the gravitational interaction predicts that we will find dynamic dark energy in upcoming
decades instead of a cosmological constant [30; 31].
|
2301.04816 | Analytical Approximations for Generalized Landau-Zener Transitions in
Multi-level Non-Hermitian Systems | We study the dynamics of non-adiabatic transitions in non-Hermitian
multi-level parabolic models where the separations of the diabatic energies are
quadratic function of time. The model Hamiltonian has been used to describe the
non-Hermitian dynamics of two pairs of coupled cavities. In the absence of the
coupling between any two pairs of cavities, the wave amplitudes within each
subsystem are described by the tri-confluent Heun functions. When all the
couplings between the cavities are present, we reduce the dynamics into a set
of two coupled tri-confluent Heun equations, from which we derive analytical
approximations for the wave amplitudes at different physical limits. | Chon-Fai Kam, Yang Chen | 2023-01-12T05:14:43Z | http://arxiv.org/abs/2301.04816v1 | Analytical Approximations for Generalized Landau-Zener Transitions in Multi-level Non-Hermitian Systems
###### Abstract
We study the dynamics of non-adiabatic transitions in non-Hermitian multi-level parabolic models where the separations of the diabatic energies are quadratic function of time. The model Hamiltonian has been used to describe the non-Hermitian dynamics of two pairs of coupled cavities. In the absence of the coupling between any two pairs of cavities, the wave amplitudes within each subsystem are described by the tri-confluent Heun functions. When all the couplings between the cavities are present, we reduce the dynamics into a set of two coupled tri-confluent Heun equations, from which we derive analytical approximations for the wave amplitudes at different physical limits.
## I Introduction
Since the dawn of the twentieth century, quantum mechanics has been the foundation of modern technology, from electronic computers to the most precise atomic clocks. One of the basic principles of quantum mechanics is that the spectra of atoms are real and the time evolution of wave functions is unitary, so that the total probability is conserved. As such, it was once widely believed that the Hamiltonian which describes the time evolution of any physical system has to be self-adjoint or Hermitian [1]. However, over the years, people started to realize that the Hermitian requirement is not unbreakable, as deviating from it does not directly imply complex-valued energy spectra. Since Bender and Boettcher's groundbreaking work [2; 3], a new principle has gradually been affirmed: real energy spectra of a physical system are not ensured by Hermiticity, but rather by parity-time (\(\mathcal{PT}\)) symmetry. The fundamental difference between a Hermitian system and a non-Hermitian \(\mathcal{PT}\)-symmetric system is that the former can only describe closed systems that have no exchange with the outer environment, whereas the latter can describe open systems composed of two coupled subsystems, each in contact with the outer environment, in which the gain in one subsystem compensates the loss in the other, so that the entire system is in a dynamical equilibrium [4; 5]. Over the decades, the new principle of pseudo-Hermiticity under \(\mathcal{PT}\)-symmetry has opened up countless new opportunities, and has also revealed various applications in modern technology which range from optics [6] to
One of the most interesting effects of non-Hermitian systems is that the \(\mathcal{PT}\)-symmetry leads to a new type of spectral degeneracies, known as the exceptional points, at which not only a finite number of eigenvalues coincide, but the associated eigenstates also coincide [5]. In contrast to the spectral degeneracies of Hermitian systems, at which the eigenstates can be chosen to be orthogonal to each other, the spectral degeneracies in \(\mathcal{PT}\)-symmetric systems are peculiar as certain eigenstates are completely parallel and the Hamiltonian matrix becomes defective at the exceptional points [7]. These intriguing properties of non-Hermitian physics give rise to many counterintuitive features. For example, a general \(\mathcal{PT}\)-symmetric Hamiltonian may undergo a spontaneous symmetry breaking phase transition, beyond which complex eigenvalues emerge [8; 9]. In particular, when encircling an exceptional point, an unconventional level-crossing behavior will appear, along with a phase change of one eigenstate but not of the other [10; 11].
In ordinary Hermitian quantum mechanics, there is a fundamental physical process called the Landau-Zener transition, which describes the transition between two energy levels of a quantum system directly driven through an avoided crossing [12; 13; 14]. The Landau-Zener transition assumes a constant coupling between bare states in the diabatic basis and a linearly varying separation of diabatic energies [15], which can be exactly solved by the parabolic cylinder functions [16] or the integral representation method [17]. Although the model of Landau-Zener transition has achieved great success over the years, there are indeed cases where the assumption of linear crossing between the diabatic states becomes no longer valid. For the cases in which the crossing points merge together as a result of external fields, the Landau-Zener linearization fails, and the linear-dependence of diabatic energies has to be replaced by a parabolic or superparabolic one [18]. Interestingly, the tunneling dynamics for the parabolic and cubic models can still be exactly solved by the tri-confluent and bi-confluent Heun functions [19; 20]. One can still express the tunneling probability via the Stokes constants by using the Zhu-Nakamura formula [21; 22].
Compared to the two-state scenario, the research on generalized Landau-Zener transitions for complicated systems with more than two states, even in ordinary Hermitian quantum mechanics, has been arduous and in most cases inconclusive. The main reason is that in the conventional Landau-Zener transition, the coupling equations which govern the non-adiabatic transition amplitudes between the two energy levels can be reduced to a single second-order differential equation, e.g., the parabolic cylinder function for linear separation of diabatic energies [16], or the confluent Heun functions for quadratic and cubic separations of diabatic energies [23; 24]. In contrast, in the general multi-state scenario, if one attempts to reduce the coupling equations which govern the non-adiabatic transition amplitudes between different energy levels into a single equation, one would probably obtain an ordinary differential equation with order greater than two. Compared to those of conventional second order differential equations, the analytic properties of solutions of differential equations with order greater than two are harder to obtain by regular methods like asymptotic analysis. The essential difficulty lies in the fact that the Stokes curves for conventional second order differential equations are straight lines which never cross each other, but those for differential equations with order greater than two are non-straight lines which always cross each other unavoidably. This simple fact results in the breakdown of the standard connection formula near the crossing points of the Stokes curve. In this regard, the asymptotic WKB solutions of generalized Landau-Zener non-adiabatic transitions for multi-state systems are in general hard to obtain, if not totally impossible.
Despite its evident importance, the non-Hermitian generalization of the two-level Landau-Zener transition has only recently been analyzed by Longstaff [25], the associated non-Hermitian Landau-Zener-Stuckelberg interferometry was analyzed by Shen [26], and the non-Hermitian generalization of the parabolic and super-parabolic models, in which the exceptional points are driven through at finite speeds which are quadratic or cubic functions of time, has been analyzed by the authors [27]. Compared to the two-state scenario, the research on the non-Hermitian generalization of Landau-Zener non-adiabatic transitions in the multi-state scenario is still at its early stage. Recently, the three-state non-Hermitian Landau-Zener model in the presence of an interaction with the outer environment has been considered [28], and Landau-Zener transitions through a pair of higher order exceptional points have been analyzed by Melanathuru [29]. In this work, based on our analytical approximation methods used in previous studies [23; 24; 27], we will analyze the dynamics of a four-state non-Hermitian \(\mathcal{PT}\)-symmetric system which directly passes through a collection of exceptional points.
## II The model
To begin with, we consider a four-state non-Hermitian system which has been used to describe the dynamics of four coupled cavities with asymmetric losses [30]. The non-Hermitian system consists of two pairs of coupled cavities, each of which is described by a \(2\times 2\) non-Hermitian Hamiltonian
\[H_{i}=\left(\begin{array}{ccc}\omega_{i}-i\Gamma_{0}&\kappa\\ \kappa&\omega_{i}-i\Gamma\end{array}\right), \tag{1}\]
where \(\kappa\) represents the coupling strength between the two cavities, \(\omega_{i}\) (\(i=1,2\)) represents the common resonant frequency of the two cavities, and \(\Gamma_{0}\) and \(\Gamma\) represent the intrinsic losses of the two cavities. The whole non-Hermitian system consists of two pairs of the above-mentioned coupled cavities with the same values of \(\kappa\), \(\Gamma_{0}\) and \(\Gamma\) but different resonant frequencies \(\omega_{1}\) and \(\omega_{2}\). The two pairs of cavities are coupled by connecting each individual cavity of one pair with that of another pair by a small tube, with an inter-pair coupling strength \(\eta\). The \(4\times 4\) non-Hermitian Hamiltonian of the whole system becomes
\[H=\left(\begin{array}{cccc}\omega_{2}-i\Gamma_{0}&\kappa&0&\eta\\ \kappa&\omega_{2}-i\Gamma&\eta&0\\ 0&\eta&\omega_{1}-i\Gamma_{0}&\kappa\\ \eta&0&\kappa&\omega_{1}-i\Gamma\end{array}\right), \tag{2}\]
Evidently, the \(2\times 2\) Hamiltonian for each pair of coupled cavities is \(\mathcal{PT}\)-symmetric only when the intrinsic losses are symmetric, i.e., \(\Gamma_{0}=\Gamma\), where \(\mathcal{P}\equiv\left(\begin{smallmatrix}0&1\\ 1&0\end{smallmatrix}\right)\) denotes the parity operator, and \(\mathcal{T}\) denotes complex conjugation. From the Hamiltonian Eq. (2), one obtains the coupled equations for the four wave amplitudes \(a_{1}\), \(a_{2}\), \(a_{3}\) and \(a_{4}\)
\[i\dot{a}_{1} =(\omega_{2}-i\Gamma_{0})a_{1}+\kappa a_{2}+\eta a_{4}, \tag{3a}\] \[i\dot{a}_{2} =\kappa a_{1}+(\omega_{2}-i\Gamma)a_{2}+\eta a_{3},\] (3b) \[i\dot{a}_{3} =\eta a_{2}+(\omega_{1}-i\Gamma_{0})a_{3}+\kappa a_{4},\] (3c) \[i\dot{a}_{4} =\eta a_{1}+\kappa a_{3}+(\omega_{1}-i\Gamma)a_{4}. \tag{3d}\]
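As a minimal numerical illustration (not taken from Ref. [30]), the Hamiltonian of Eq. (2) can be assembled directly and Eqs. (3a)-(3d) propagated with a standard ODE solver; the parameter values below are arbitrary and only meant to show the structure of the computation.

```python
# Minimal numerical sketch of Eq. (2) and Eqs. (3a)-(3d): assemble the 4x4 non-Hermitian
# Hamiltonian and propagate i da/dt = H a.  All parameter values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

def hamiltonian(w1, w2, kappa, eta, Gamma0, Gamma):
    return np.array([
        [w2 - 1j * Gamma0, kappa,            0.0,              eta             ],
        [kappa,            w2 - 1j * Gamma,  eta,              0.0             ],
        [0.0,              eta,              w1 - 1j * Gamma0, kappa           ],
        [eta,              0.0,              kappa,            w1 - 1j * Gamma ],
    ], dtype=complex)

H = hamiltonian(w1=1.0, w2=1.2, kappa=0.05, eta=0.02, Gamma0=0.01, Gamma=0.03)
print(np.linalg.eigvals(H))                       # complex spectrum of the lossy system

a0 = np.array([1, 0, 0, 0], dtype=complex)        # excite cavity 1
sol = solve_ivp(lambda t, a: -1j * (H @ a), (0.0, 200.0), a0, rtol=1e-9, atol=1e-12)
print(np.abs(sol.y[:, -1]) ** 2)                  # populations decay through the intrinsic losses
```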
Using the change of variable \(b_{i}\equiv e^{\int_{0}^{t}(\bar{\Gamma}+i\bar{\omega})d\tau}a_{i}\) to remove the average resonant frequency \(\bar{\omega}\equiv\frac{1}{2}(\omega_{1}+\omega_{2})\) and the average intrinsic loss \(\bar{\Gamma}\equiv\frac{1}{2}(\Gamma+\Gamma_{0})\), one obtains the new coupled equations for the wave amplitudes \(b_{i}\), which depend only on the relative resonant frequency \(\Delta\omega\equiv\omega_{1}-\omega_{2}\) and the relative intrinsic loss \(\Delta\Gamma\equiv\Gamma-\Gamma_{0}\), as
\[i\dot{b}_{1} =-\Omega^{*}b_{1}+\kappa b_{2}+\eta b_{4}, \tag{4a}\] \[i\dot{b}_{2} =\kappa b_{1}-\Omega b_{2}+\eta b_{3},\] (4b) \[i\dot{b}_{3} =\eta b_{2}+\Omega b_{3}+\kappa b_{4},\] (4c) \[i\dot{b}_{4} =\eta b_{1}+\kappa b_{3}+\Omega^{*}b_{4}, \tag{4d}\]
where \(\Omega\equiv\frac{1}{2}(\Delta\omega+i\Delta\Gamma)\). One may further define \(c_{1}\equiv b_{1}+ib_{2}\), \(c_{2}\equiv b_{1}-ib_{2}\), \(c_{3}\equiv b_{3}+ib_{4}\) and \(c_{4}\equiv b_{3}-ib_{4}\), and obtains
\[i\dot{c}_{1} =-\Delta\omega c_{1}+i\gamma c_{2}+i\eta c_{4}, \tag{5a}\] \[i\dot{c}_{2} =i\gamma^{\prime}c_{1}-\Delta\omega c_{2}-i\eta c_{3},\] (5b) \[i\dot{c}_{3} =i\eta c_{2}+\Delta\omega c_{3}+i\gamma c_{4},\] (5c) \[i\dot{c}_{4} =-i\eta c_{1}+i\gamma^{\prime}c_{3}+\Delta\omega c_{4}, \tag{5d}\]
where \(\gamma\equiv\Delta\Gamma+\kappa\) and \(\gamma^{\prime}\equiv\Delta\Gamma-\kappa\). The Hamiltonian in the diabatic basis then reads
\[H^{\prime}=\left(\begin{array}{cccc}-\Delta\omega&i\gamma&0&i\eta\\ i\gamma^{\prime}&-\Delta\omega&-i\eta&0\\ 0&i\eta&\Delta\omega&i\gamma\\ -i\eta&0&i\gamma^{\prime}&\Delta\omega\end{array}\right). \tag{6}\]
From Eqs. (5a) and (5b), one immediately obtains
\[c_{3} =-\frac{1}{\eta}\left(\dot{c}_{2}-i\Delta\omega c_{2}-\gamma^{ \prime}c_{1}\right), \tag{7a}\] \[c_{4} =\frac{1}{\eta}\left(\dot{c}_{1}-i\Delta\omega c_{1}-\gamma c_{2} \right). \tag{7b}\]
Substitution of Eqs. (7a) and (7b) into Eqs. (5c) and (5d) yields the following coupled equations
\[\ddot{c}_{1}+\left(\eta^{2}+\Delta\omega^{2}-\gamma^{\prime 2}-i\Delta\dot{\omega} \right)c_{1}=2\kappa\dot{c}_{2}+2i\Delta\omega\Delta\Gamma c_{2}, \tag{8a}\] \[\ddot{c}_{2}+\left(\eta^{2}+\Delta\omega^{2}-\gamma^{2}-i\Delta \dot{\omega}\right)c_{2}=-2\kappa\dot{c}_{1}+2i\Delta\omega\Delta\Gamma c_{1}. \tag{8b}\]
In particular, for the \(\mathcal{PT}\)-symmetric case, i.e., \(\Delta\Gamma=0\), one immediately obtains
\[\ddot{c}_{1}+Q(t)c_{1} =2\kappa\dot{c}_{2}, \tag{9a}\] \[\ddot{c}_{2}+Q(t)c_{2} =-2\kappa\dot{c}_{1}, \tag{9b}\]
where \(Q(t)\equiv\eta^{2}-\kappa^{2}+\Delta\omega^{2}(t)-i\Delta\dot{\omega}(t)\). A direct computation yields
\[\dot{c}_{1}c_{2}-\dot{c}_{2}c_{1}-\kappa(c_{1}^{2}+c_{2}^{2})=\text{Const}. \tag{10}\]
In particular, for a parabolic separation of diabatic energies, i.e., \(\Delta\omega=\alpha+\beta t^{2}\), one will obtain
\[Q(t)=\eta^{2}-\kappa^{2}+(\alpha+\beta t^{2})^{2}-2i\beta t=\beta^{2}t^{4}+2\alpha\beta t^{2}-2i\beta t+\alpha^{2}+\eta^{2}-\kappa^{2}. \tag{11}\]
One can see that as \(Q(t)\) is now a quartic function of time, in the absence of the coupling \(\kappa\) within each pair of cavities, Eqs. (9a) and (9b) are decoupled and solved by the superposition of the tri-confluent Heun functions \(T_{1}\) and \(T_{2}\). When the coupling \(\kappa\) is nonzero, one can write \(c_{1}=\sum_{n=0}^{\infty}\kappa^{n}c_{1}^{(n)}\) and \(c_{2}=\sum_{n=0}^{\infty}\kappa^{n}c_{2}^{(n)}\), where \(c_{1}^{(0)}\equiv d_{1}T_{1}+d_{2}T_{2}\) and \(c_{2}^{(0)}\equiv e_{1}T_{1}+e_{2}T_{2}\), and obtains the recursive relations
\[\ddot{c}_{1}^{(n)}+Q(t)c_{1}^{(n)} =2\dot{c}_{2}^{(n-1)}, \tag{12a}\] \[\ddot{c}_{2}^{(n)}+Q(t)c_{2}^{(n)} =-2\dot{c}_{1}^{(n-1)}. \tag{12b}\]
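The perturbative construction above can also be checked against direct numerical integration: Eqs. (9a)-(9b) with the quartic \(Q(t)\) of Eq. (11) are easily solved with a standard ODE solver, and the invariant of Eq. (10) provides a convenient accuracy check. The following sketch uses illustrative parameter values and is not part of the analytical treatment.

```python
# Sketch: integrate the PT-symmetric equations (9a)-(9b) with the quartic Q(t) of Eq. (11)
# and monitor the conserved combination of Eq. (10).  Parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, eta, kappa = 0.3, 0.2, 0.4, 0.1

def Q(t):
    dw = alpha + beta * t**2                       # parabolic separation of diabatic energies
    return eta**2 - kappa**2 + dw**2 - 2j * beta * t

def rhs(t, y):
    c1, c2, d1, d2 = y                             # d1 = dc1/dt, d2 = dc2/dt
    return [d1, d2, -Q(t) * c1 + 2 * kappa * d2, -Q(t) * c2 - 2 * kappa * d1]

y0 = np.array([1.0, 0.0, 0.0, 0.0], dtype=complex)
sol = solve_ivp(rhs, (-10.0, 10.0), y0, rtol=1e-10, atol=1e-12)

c1, c2, d1, d2 = sol.y
invariant = d1 * c2 - d2 * c1 - kappa * (c1**2 + c2**2)   # Eq. (10)
print(np.max(np.abs(invariant - invariant[0])))           # ~0 up to integration error
```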
## III Integrals involving products of Heun functions and their derivatives
To solve the recursive relations, one regards the right hand side of Eqs. (12a) and (12b) as known functions of time, then the \(n\)-th order terms \(c_{1}^{(n)}\) and \(c_{2}^{(n)}\) are integrals of the \((n-1)\)-th order terms \(\dot{c}_{1}^{(n-1)}\) and \(\dot{c}_{2}^{(n-1)}\), which may be expressed as
\[c_{1}^{(n)} =\frac{-2T_{1}}{W}\int T_{2}\dot{c}_{2}^{(n-1)}dt+\frac{2T_{2}}{ W}\int T_{1}\dot{c}_{2}^{(n-1)}dt, \tag{13a}\] \[c_{2}^{(n)} =\frac{2T_{1}}{W}\int T_{2}\dot{c}_{1}^{(n-1)}dt-\frac{2T_{2}}{ W}\int T_{1}\dot{c}_{1}^{(n-1)}dt. \tag{13b}\]
where \(W\equiv T_{1}\dot{T}_{2}-T_{2}\dot{T}_{1}\) is the Wronskian for \(T_{1}\) and \(T_{2}\), which is independent of time. Using integration by parts, Eqs. (13a) - (13b) may be simplified as
\[c_{1}^{(n)} =\frac{2T_{1}}{W}\int\dot{T}_{2}c_{2}^{(n-1)}dt-\frac{2T_{2}}{W} \int\dot{T}_{1}c_{2}^{(n-1)}dt, \tag{14a}\] \[c_{2}^{(n)} =-\frac{2T_{1}}{W}\int\dot{T}_{2}c_{1}^{(n-1)}dt+\frac{2T_{2}}{W} \int\dot{T}_{1}c_{1}^{(n-1)}dt. \tag{14b}\]
To continue, one needs to evaluate integrals of the products of Heun functions and their derivatives. To be more precise, let us consider the following three kinds of indefinite integrals: \(\int t^{n}y^{2}dt\), \(\int t^{n}y\dot{y}dt\) and \(\int t^{n}\dot{y}^{2}dt\), with \(y\) being any linear combination of the tri-confluent Heun functions \(T_{1}\) and \(T_{2}\), which satisfies the tri-confluent Heun equation \(\ddot{y}+Q(t)y=0\), where \(Q(t)\equiv\sum_{k=0}^{4}A_{k}t^{k}\) is a quartic function of time. Using the following relations \(\int t^{n}(y_{1}\dot{y}_{2}+y_{2}\dot{y}_{1})dt=t^{n}y_{1}y_{2}-n\int t^{n-1}y_{1}y_{2}dt\) and \(\int t^{n}(y_{1}\dot{y}_{2}-y_{2}\dot{y}_{1})dt=\frac{W}{n+1}t^{n+1}\), one immediately obtains
\[\int t^{n}y_{1}\dot{y}_{2}dt=\frac{1}{2}\left(t^{n}y_{1}y_{2}+\frac{Wt^{n+1}}{n+1}-n\int t^{n-1}y_{1}y_{2}dt\right), \tag{15}\]
where \(y_{1}\) and \(y_{2}\) are two independent solutions of the tri-confluent equation, and \(W\equiv y_{1}\dot{y}_{2}-y_{2}\dot{y}_{1}\) is the Wronskian of them. The other two kinds of integrals will be more involved to evaluate. Let us define
\[\int t^{n}y_{1}y_{2}dt \equiv P_{n}y_{1}y_{2}+\frac{Q_{n}}{2}(y_{1}\dot{y}_{2}+y_{2}\dot{y}_{1})+R_{n}\dot{y}_{1}\dot{y}_{2}, \tag{16a}\] \[\int t^{n}\dot{y}_{1}\dot{y}_{2}dt \equiv L_{n}y_{1}y_{2}+\frac{M_{n}}{2}(y_{1}\dot{y}_{2}+y_{2}\dot{y}_{1})+N_{n}\dot{y}_{1}\dot{y}_{2}. \tag{16b}\]
The coefficients \(P_{n}\), \(Q_{n}\) and \(R_{n}\) may be determined by taking derivative of Eq. (16a), which yields
\[t^{n}y_{1}y_{2} =(\dot{P}_{n}-QQ_{n})y_{1}y_{2}+\frac{1}{2}(\dot{Q}_{n}+2P_{n}-2QR _{n})(y_{1}\dot{y}_{2}+y_{2}\dot{y}_{1})\] \[+(\dot{R}_{n}+Q_{n})\dot{y}_{1}\dot{y}_{2}. \tag{17}\]
Hence, we obtain \(\dot{P}_{n}-QQ_{n}=t^{n}\), \(\dot{Q}_{n}+2P_{n}-2QR_{n}=0\) and \(\dot{R}_{n}+Q_{n}=0\), which are solved by \(Q_{n}=-\dot{R}_{n}\) and \(P_{n}=\frac{1}{2}\ddot{R}_{n}+QR_{n}\), where \(R_{n}\) satisfies the third order differential equation \(\dddot{R}_{n}+4Q\dot{R}_{n}+2\dot{Q}R_{n}=2t^{n}\). To evaluate the coefficients \(L_{n}\), \(M_{n}\) and \(N_{n}\), one may use the following identity
\[\int t^{n}\dot{y}_{1}\dot{y}_{2}dt =\frac{1}{2}t^{n}(y_{1}\dot{y}_{2}+y_{2}\dot{y}_{1})-\frac{n}{2}t^{ n-1}y_{1}y_{2}\] \[+\int t^{n}Qy_{1}y_{2}dt+\frac{n(n-1)}{2}\int t^{n-2}y_{1}y_{2}dt. \tag{18}\]
Substitution of Eq. (16a) and \(Q(t)\equiv\sum_{k=0}^{4}A_{k}t^{k}\) into Eq. (18) yields
\[L_{n} =\sum_{k=0}^{4}A_{k}P_{n+k}+\frac{n(n-1)}{2}P_{n-2}-\frac{n}{2}t^{ n-1}, \tag{19a}\] \[M_{n} =\sum_{k=0}^{4}A_{k}Q_{n+k}+\frac{n(n-1)}{2}Q_{n-2}+t^{n},\] (19b) \[N_{n} =\sum_{k=0}^{4}A_{k}R_{n+k}+\frac{n(n-1)}{2}R_{n-2}. \tag{19c}\]
###### Acknowledgements.
This study was supported by the National Natural Science Foundation of China (Grant nos. 12104524).
## Appendix A Formal solutions of \(c_{1}^{(n)}\) and \(c_{2}^{(n)}\)
From Eqs. (14a) and (14b), a direct computation will yield
\[c_{1}^{(1)}=tc_{2}^{(0)},c_{2}^{(1)}=-tc_{1}^{(0)}. \tag{20}\]
To proceed further, one can use the following integral identity with respect to any functions \(y_{1}\) and \(y_{2}\) and their derivatives
\[\int t\,y_{1}\dot{y}_{2}dt=\frac{t}{2}y_{1}y_{2}+\frac{Wt^{2}}{4}-\frac{1}{2}\int y_{1}y_{2}dt, \tag{10}\]
where \(W\equiv y_{1}\dot{y}_{2}-y_{2}\dot{y}_{1}\) is the Wronskian with respect to \(y_{1}\) and \(y_{2}\). From this, a direct computation yields
\[c_{1}^{(2)} =\frac{1}{2}(Q_{0}-t^{2})c_{1}^{(0)}+R_{0}\dot{c}_{1}^{(0)}, \tag{11a}\] \[c_{2}^{(2)} =\frac{1}{2}(Q_{0}-t^{2})c_{2}^{(0)}+R_{0}\dot{c}_{2}^{(0)}, \tag{11b}\]
where \(Q_{0}\equiv-\dot{R}_{0}\) and \(R_{0}\) is the solution of the third order differential equation
\[\frac{d^{3}R_{0}(t)}{dt^{3}}+4Q(t)\frac{dR_{0}(t)}{dt}+2\frac{dQ(t)}{dt}R_{0}( t)=2. \tag{12}\]
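For reference, this equation can also be solved numerically by rewriting it as a first-order system. The sketch below uses the coefficients \(A_{k}\) corresponding to Eq. (11) of the main text with illustrative parameter values, and zero initial data at \(t=0\), which is just one convenient choice of particular solution.

```python
# Sketch: solve the third-order equation for R_0(t) as a first-order system.  The coefficients
# A_k correspond to the quartic Q(t) of Eq. (11) with illustrative parameter values.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, eta, kappa = 0.3, 0.2, 0.4, 0.1
A = [alpha**2 + eta**2 - kappa**2, -2j * beta, 2 * alpha * beta, 0.0, beta**2]   # A_0 ... A_4

Qf  = lambda t: sum(a * t**k for k, a in enumerate(A))
dQf = lambda t: sum(k * a * t**(k - 1) for k, a in enumerate(A) if k > 0)

def rhs(t, y):
    r, r1, r2 = y                                  # R_0, R_0', R_0''
    return [r1, r2, 2.0 - 4.0 * Qf(t) * r1 - 2.0 * dQf(t) * r]

sol = solve_ivp(rhs, (0.0, 5.0), np.zeros(3, dtype=complex), rtol=1e-10, atol=1e-12)
print(sol.y[0, -1])                                # value of R_0 at the final time
```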
In order to solve \(c_{1}^{(n)}\) and \(c_{2}^{(n)}\), one needs the following integrals
\[\mathcal{L}_{n}T_{k} =\frac{W}{2}\left[\left(\frac{t^{n+1}}{n+1}-\frac{n}{2}Q_{n-1} \right)T_{k}-nR_{n-1}\dot{T}_{k}\right]\] \[\equiv\frac{W}{2}(\mathcal{E}_{n}T_{k}+\mathcal{F}_{n}\dot{T}_{k}), \tag{13a}\] \[\mathcal{L}_{n}\dot{T}_{k} =\frac{W}{2}(M_{n}T_{k}+2N_{n}\dot{T}_{k})=\frac{W}{2}(\mathcal{G }_{n}T_{k}+\mathcal{H}_{n}\dot{T}_{k}), \tag{13b}\]
where \(k=1,2\), and \(\mathcal{L}_{n}\bullet\equiv T_{1}\int t^{n}\dot{T}_{2}\bullet dt-T_{2}\int t^{n}\dot{T}_{1}\bullet dt\). From this, one may expand the functions \(\mathcal{E}_{n}\), \(\mathcal{F}_{n}\), \(\mathcal{G}_{n}\), and \(\mathcal{H}_{n}\) (and thus \(\mathcal{L}_{n}T_{k}\) and \(\mathcal{L}_{n}\dot{T}_{k}\)) in series of time as
\[\mathcal{L}_{n}T_{k} \equiv\frac{W}{2}\left(\sum_{l=0}^{\infty}E_{nl}t^{l}T_{k}+\sum_{l =0}^{\infty}F_{nl}t^{l}\dot{T}_{k}\right), \tag{14a}\] \[\mathcal{L}_{n}\dot{T}_{k} =\frac{W}{2}\left(\sum_{l=0}^{\infty}G_{nl}t^{l}T_{k}+\sum_{l=0}^ {\infty}H_{nl}t^{l}\dot{T}_{k}\right). \tag{14b}\]
In general, one can express the \(n\)-th order correction terms \(c_{1}^{(n)}\) and \(c_{2}^{(n)}\) in terms of \(c_{1}^{(0)}\), \(c_{2}^{(0)}\), \(\dot{c}_{1}^{(0)}\) and \(\dot{c}_{2}^{(0)}\), and expand the coefficients in series of time as
\[c_{1}^{(n)} \equiv\sum_{k=0}^{\infty}\left(\alpha_{k}^{(n)}c_{1}^{(0)}+\beta_{k}^{(n)}c_{2}^{(0)}+\gamma_{k}^{(n)}\dot{c}_{1}^{(0)}+\delta_{k}^{(n)}\dot{c}_{2}^{(0)}\right)t^{k}, \tag{15a}\] \[c_{2}^{(n)} \equiv\sum_{k=0}^{\infty}\left(\lambda_{k}^{(n)}c_{1}^{(0)}+\mu_{k}^{(n)}c_{2}^{(0)}+\nu_{k}^{(n)}\dot{c}_{1}^{(0)}+\xi_{k}^{(n)}\dot{c}_{2}^{(0)}\right)t^{k}. \tag{15b}\]
Using Eqs. (14a) - (14b), one obtains the \((n+1)\)-th order correction terms
\[c_{1}^{(n+1)} =\frac{2}{W}\sum_{k=0}^{\infty}\left(\lambda_{k}^{(n)}\mathcal{L}_{k}c_{1}^{(0)}+\mu_{k}^{(n)}\mathcal{L}_{k}c_{2}^{(0)}+\nu_{k}^{(n)}\mathcal{L}_{k}\dot{c}_{1}^{(0)}+\xi_{k}^{(n)}\mathcal{L}_{k}\dot{c}_{2}^{(0)}\right)\] \[=\sum_{k,l=0}^{\infty}\Big{[}(\lambda_{k}^{(n)}E_{kl}+\nu_{k}^{(n)}G_{kl})c_{1}^{(0)}+(\mu_{k}^{(n)}E_{kl}+\xi_{k}^{(n)}G_{kl})c_{2}^{(0)}\] \[+(\lambda_{k}^{(n)}F_{kl}+\nu_{k}^{(n)}H_{kl})\dot{c}_{1}^{(0)}+(\mu_{k}^{(n)}F_{kl}+\xi_{k}^{(n)}H_{kl})\dot{c}_{2}^{(0)}\Big{]}t^{l}, \tag{15c}\] \[c_{2}^{(n+1)} =\frac{-2}{W}\sum_{k=0}^{\infty}\Big{[}\alpha_{k}^{(n)}\mathcal{L}_{k}c_{1}^{(0)}+\beta_{k}^{(n)}\mathcal{L}_{k}c_{2}^{(0)}+\gamma_{k}^{(n)}\mathcal{L}_{k}\dot{c}_{1}^{(0)}+\delta_{k}^{(n)}\mathcal{L}_{k}\dot{c}_{2}^{(0)}\Big{]}\] \[=-\sum_{k,l=0}^{\infty}\Big{[}(\alpha_{k}^{(n)}E_{kl}+\gamma_{k}^{(n)}G_{kl})c_{1}^{(0)}+(\beta_{k}^{(n)}E_{kl}+\delta_{k}^{(n)}G_{kl})c_{2}^{(0)}\] \[+(\alpha_{k}^{(n)}F_{kl}+\gamma_{k}^{(n)}H_{kl})\dot{c}_{1}^{(0)}+(\beta_{k}^{(n)}F_{kl}+\delta_{k}^{(n)}H_{kl})\dot{c}_{2}^{(0)}\Big{]}t^{l}. \tag{15d}\]
A direct comparison between Eqs. (15a)-(15b) and (15c)-(15d) yields the recursive relations
\[\alpha_{k}^{(n+1)} =\sum_{j=0}^{\infty}(\lambda_{j}^{(n)}E_{jk}+\nu_{j}^{(n)}G_{jk}), \tag{16a}\] \[\beta_{k}^{(n+1)} =\sum_{j=0}^{\infty}(\mu_{j}^{(n)}E_{jk}+\xi_{j}^{(n)}G_{jk}),\] (16b) \[\gamma_{k}^{(n+1)} =\sum_{j=0}^{\infty}(\lambda_{j}^{(n)}F_{jk}+\nu_{j}^{(n)}H_{jk}),\] (16c) \[\delta_{k}^{(n+1)} =\sum_{j=0}^{\infty}(\mu_{j}^{(n)}F_{jk}+\xi_{j}^{(n)}H_{jk}),\] (16d) \[\lambda_{k}^{(n+1)} =-\sum_{j=0}^{\infty}(\alpha_{j}^{(n)}E_{jk}+\gamma_{j}^{(n)}G_{jk}),\] (16e) \[\mu_{k}^{(n+1)} =-\sum_{j=0}^{\infty}(\beta_{j}^{(n)}E_{jk}+\delta_{j}^{(n)}G_{jk}),\] (16f) \[\nu_{k}^{(n+1)} =-\sum_{j=0}^{\infty}(\alpha_{j}^{(n)}F_{jk}+\gamma_{j}^{(n)}H_{jk}),\] (16g) \[\xi_{k}^{(n+1)} =-\sum_{j=0}^{\infty}(\beta_{j}^{(n)}F_{jk}+\delta_{j}^{(n)}H_{jk}). \tag{16h}\]
## Appendix B Recursive relation for \(R_{n}\)
The coefficients \(R_{n}\) for different \(n\) are not independent. From the identity \(\frac{d}{dt}(\dot{y}_{1}\dot{y}_{2}+Qy_{1}y_{2})=\dot{Q}y_{1}y_{2}\), we obtain the indefinite integral \(\int\dot{Q}y_{1}y_{2}dt=\dot{y}_{1}\dot{y}_{2}+Qy_{1}y_{2}\), which implies that \(\sum_{k=0}^{4}kA_{k}R_{k-1}=1\). Similarly, from the identity \(\frac{d}{dt}[t(\dot{y}_{1}\dot{y}_{2}+Qy_{1}y_{2})-\frac{1}{2}(y_{1}\dot{y}_{2}+y_{2}\dot{y}_{1})]=(2Q+t\dot{Q})y_{1}y_{2}\), we obtain the indefinite integral \(\int(2Q+t\dot{Q})y_{1}y_{2}dt=t(\dot{y}_{1}\dot{y}_{2}+Qy_{1}y_{2})-\frac{1}{2}(y_{1}\dot{y}_{2}+y_{2}\dot{y}_{1})\), which implies that \(\sum_{k=0}^{4}(2+k)A_{k}R_{k}=t\).
In general, one has the following identity \(\int y_{1}y_{2}d(fQ)=f(\dot{y}_{1}\dot{y}_{2}+Qy_{1}y_{2})-\int\dot{f}\dot{y}_{1}\dot{y}_{2}dt\) for an arbitrary function \(f\). In particular, for \(f=t^{n}\), one obtains the following indefinite integral
\[\int(t^{n}\dot{Q}+nt^{n-1}Q)y_{1}y_{2}dt=t^{n}(\dot{y}_{1}\dot{y}_{2}+Qy_{1}y_{2})-n\int t^{n-1}\dot{y}_{1}\dot{y}_{2}dt. \tag{11}\]
Substitution of Eq. (18) into Eq. (11) yields
\[\int(t^{n}\dot{Q}+2nt^{n-1}Q)y_{1}y_{2}dt=t^{n}(\dot{y}_{1}\dot{y}_{2}+Qy_{1}y_{2})-\frac{n}{2}t^{n-1}(y_{1}\dot{y}_{2}+y_{2}\dot{y}_{1})+\frac{n(n-1)}{2}t^{n-2}y_{1}y_{2}-\frac{n(n-1)(n-2)}{2}\int t^{n-3}y_{1}y_{2}dt. \tag{12}\]
From Eq. (12) and using \(t^{n}\dot{Q}+2nt^{n-1}Q=\sum_{k=0}^{4}(2n+k)A_{k}t^{n+k-1}\), one obtains the recursive relations for \(R_{n}\) with \(n\geq 3\)
\[\sum_{k=0}^{4}(2n+k)A_{k}R_{n+k-1}=t^{n}-\frac{n(n-1)(n-2)}{2}R_{n-3}. \tag{13}\]
For example, for the Airy functions which satisfy \(\ddot{y}+Q(t)y=0\) with \(Q(t)=-t\), one recovers the recursive relation obtained by the authors previously [23]
\[R_{n}=\frac{1}{2(2n+1)}\left[n(n-1)(n-2)R_{n-3}-2t^{n}\right]; \tag{14}\]
while for the Bessel functions which satisfy \(\ddot{y}+\lambda^{2}t^{4}y=0\) with \(Q(t)=\lambda^{2}t^{4}\), one recovers the simple recursive relations [23]
\[R_{n+3}=\frac{-1}{4(n+2)\lambda^{2}}\left[n(n-1)(n-2)R_{n-3}-2t^{n}\right]. \tag{15}\]
In general, when \(A_{4}\neq 0\), one can obtain an explicit expression for \(R_{n}\) in terms of \(R_{0}\), \(R_{1}\) and \(R_{2}\) by revising Eq. (13) as
\[R_{n+3}+\sum_{k=-2}^{2}g_{n}^{k+2}R_{n+k}=J_{n}, \tag{16}\]
where \(J_{n}\equiv\frac{2t^{n}-n(n-1)(n-2)R_{n-3}}{2(2n+4)A_{4}}\), \(g_{n}^{k+1}\equiv\frac{(2n+k)A_{k}}{(2n+4)A_{4}}\), and \(g_{n}^{0}\equiv 0\). Clearly, all \(R_{n}\) may be expressed in terms of \(R_{0}\), \(R_{1}\) and \(R_{2}\). Explicitly, the first few terms may be expressed as
\[R_{3} =J_{0}-\sum_{k=0}^{2}g_{0}^{k+2}R_{k}, \tag{17a}\] \[R_{4} =J_{1}-g_{1}^{4}J_{0}-\sum_{k=0}^{2}h_{1}^{k+1}R_{k},\] (17b) \[R_{5} =J_{2}-g_{2}^{4}J_{1}-h_{2}^{3}J_{0}-\sum_{k=0}^{2}w_{2}^{k}R_{k},\] (17c) \[R_{6} =J_{3}-g_{3}^{4}J_{2}-h_{3}^{3}J_{1}-w_{3}^{2}J_{0}\] \[-\sum_{k=1}^{2}u_{3}^{k-1}R_{k}+(g_{1}^{1}h_{3}^{3}+g_{0}^{2}w_{3 }^{2})R_{0}, \tag{17d}\]
where \(h_{n}^{k}\equiv g_{n}^{k}-g_{n}^{4}g_{n-1}^{k+1}\), \(w_{n}^{k}\equiv h_{n}^{k}-h_{n}^{3}g_{n-2}^{k+2}\), and \(u_{n}^{k}\equiv w_{n}^{k}-w_{n}^{2}g_{n-3}^{k+3}\). A direct computation yields
\[R_{3} =\frac{1}{4A_{4}}\left(1-3A_{3}R_{2}-2A_{2}R_{1}-A_{1}R_{0}\right), \tag{18a}\] \[R_{4} =\frac{t}{6A_{4}}-\frac{5A_{3}}{24A_{4}^{2}}-\left(\frac{2A_{2}}{3A_{4}}-\frac{5A_{3}^{2}}{8A_{4}^{2}}\right)R_{2}\] \[-\left(\frac{A_{1}}{2A_{4}}-\frac{5A_{2}A_{3}}{12A_{4}^{2}}\right)R_{1}-\left(\frac{A_{0}}{3A_{4}}-\frac{5A_{1}A_{3}}{24A_{4}^{2}}\right)R_{0},\] (18b) \[R_{5} =\frac{t^{2}}{8A_{4}}-\frac{7A_{3}t}{48A_{4}^{2}}-\frac{3A_{2}}{16A_{4}^{2}}+\frac{35A_{3}^{2}}{192A_{4}^{3}}\] \[-\left(\frac{5A_{1}}{8A_{4}}-\frac{55A_{2}A_{3}}{48A_{4}^{2}}+\frac{105A_{3}^{3}}{192A_{4}^{3}}\right)R_{2}\] \[-\left(\frac{A_{0}}{2A_{4}}-\frac{21A_{1}A_{3}}{48A_{4}^{2}}-\frac{3A_{2}^{2}}{8A_{4}^{2}}+\frac{35A_{2}A_{3}^{2}}{96A_{4}^{3}}\right)R_{1}\] \[+\left(\frac{7A_{0}A_{3}}{24A_{4}^{2}}+\frac{3A_{1}A_{2}}{16A_{4}^{2}}-\frac{35A_{1}A_{3}^{2}}{192A_{4}^{3}}\right)R_{0},\] (18c) \[R_{6} =\frac{t^{3}}{10A_{4}}-\frac{9A_{3}t^{2}}{80A_{4}^{2}}-\left(\frac{2A_{2}}{15A_{4}^{2}}-\frac{63A_{3}^{2}}{480A_{4}^{3}}\right)t\] \[-\frac{7A_{1}}{40A_{4}^{2}}+\frac{161A_{2}A_{3}}{480A_{4}^{3}}-\frac{21A_{3}^{3}}{128A_{4}^{4}}\] \[-\left(\frac{3A_{0}}{5A_{4}}-\frac{87A_{1}A_{3}}{80A_{4}^{2}}-\frac{8A_{2}^{2}}{15A_{4}^{2}}+\frac{49A_{2}A_{3}^{2}}{32A_{4}^{3}}-\frac{189A_{3}^{4}}{384A_{4}^{4}}\right)R_{2}\] \[+\left(\frac{9A_{0}A_{3}}{20A_{4}^{2}}+\frac{3A_{1}A_{2}}{4A_{4}^{2}}-\frac{63A_{1}A_{3}^{2}}{160A_{4}^{3}}-\frac{161A_{2}^{2}A_{3}}{240A_{4}^{3}}+\frac{21A_{2}A_{3}^{3}}{64A_{4}^{4}}\right)R_{1}\] \[-\left(\frac{3}{10A_{4}}-\frac{4A_{0}A_{2}}{15A_{4}^{2}}-\frac{7A_{1}^{2}}{40A_{4}^{2}}+\frac{21A_{0}A_{3}^{2}}{80A_{4}^{3}}\right.\] \[\left.+\frac{161A_{1}A_{2}A_{3}}{480A_{4}^{3}}-\frac{315A_{1}A_{3}^{3}}{1920A_{4}^{4}}\right)R_{0}. \tag{18d}\]
## Appendix C Explicit expressions for \(c_{1}^{(n)}(t)\) and \(c_{2}^{(n)}(t)\) for the cases when \(|t|\ll 1\) and \(|t|\to\infty\)
### Cases for \(|t|\ll 1\) and \(Q(t)\approx-2i\beta t+\alpha^{2}+\eta^{2}-\kappa^{2}\)
When \(|t|\ll 1\), one can simply retain the linear terms in \(Q(t)\) and obtains \(Q(t)\approx A_{1}t+A_{0}=-2i\beta t+\alpha^{2}+\eta^{2}-\kappa^{2}\). After the coordinate transformation \(z\equiv g(t+A_{0}/A_{1})\), \(c_{1}^{(n)}\) and \(c_{2}^{(n)}\) are determined by the recursive relations
\[\ddot{c}_{1}^{(n)}+(A_{1}t+A_{0})c_{1}^{(n)}=2\dot{c}_{2}^{(n-1)},\qquad\ddot{c}_{2}^{(n)}+(A_{1}t+A_{0})c_{2}^{(n)}=-2\dot{c}_{1}^{(n-1)}.\]
Here the zeroth-order terms \(c_{1}^{(0)}(t)\) and \(c_{2}^{(0)}(t)\) are linear combinations of the Airy functions, \(e_{1}\operatorname{Ai}(z)+e_{2}\operatorname{Bi}(z)\). To proceed further, notice that the integral of the product of any linear combinations of the Airy functions has the form
\[\int z^{n}y_{1}y_{2}dz\equiv P_{n}y_{1}y_{2}+\frac{Q_{n}}{2}(y_{1}y_{2}^{\prime} +y_{2}y_{1}^{\prime})+R_{n}y_{1}^{\prime}y_{2}^{\prime}, \tag{10}\]
where \(P_{n}=\frac{1}{2}R_{n}^{\prime\prime}-zR_{n}\), \(Q_{n}=-R_{n}^{\prime}\) and \(R_{n}\) is determined by the third-order differential equation
\[\frac{d^{3}R_{n}}{dz^{3}}-4z\frac{dR_{n}}{dz}-2R_{n}=2z^{n}. \tag{11}\]
A straightforward computation yields \(R_{n}\), \(Q_{n}\) and \(P_{n}\) for \(n\leq 2\)
\[R_{0}=-1,Q_{0}=0,P_{0}=z, \tag{12a}\] \[R_{1}=-\frac{z}{3},Q_{1}=\frac{1}{3},P_{1}=\frac{z^{2}}{3},\] (12b) \[R_{2}=-\frac{z^{2}}{5},Q_{2}=\frac{2z}{5},P_{2}=\frac{z^{3}-1}{ 5}, \tag{12c}\]
where \(R_{n}\) for \(n\geq 3\) is determined by the recursive relation
\[R_{n}=-\frac{z^{n}}{2n+1}+\frac{n(n-1)(n-2)}{2(2n+1)}R_{n-3}. \tag{13}\]
Hence, \(R_{n}\) for \(n\geq 3\) is solved by
\[R_{n} =-\frac{z^{n}}{2n+1}-\frac{n(n-1)(n-2)z^{n-3}}{2(2n+1)(2n-5)}\] \[-\cdots-\frac{n(n-1)\cdots(n-3k+1)z^{n-3k}}{2^{k}(2n+1)\cdots(2n+1-6k)}\] \[=-\frac{z^{n}}{2n+1}\sum_{j=0}^{k}\frac{n!\,\Gamma(\frac{2n+1}{6}-j)}{(n-3j)!\,\Gamma(\frac{2n+1}{6})}(12z^{3})^{-j}, \tag{14}\]
where \(k\) is the number such that \(n-3k\in[0,2]\). Using the relation \(Q_{n}=-R_{n}^{\prime}\), a direct computation yields
\[Q_{n} =\frac{nz^{n-1}}{2n+1}+\frac{n(n-1)(n-2)(n-3)z^{n-4}}{2(2n+1)(2n-5)}\] \[+\cdots+\frac{n(n-1)\cdots(n-3l)z^{n-3l-1}}{2^{l}(2n+1)\cdots(2n+1-6l)}\] \[=\frac{nz^{n-1}}{2n+1}\sum_{j=0}^{l}\frac{(n-1)!\,\Gamma(\frac{2n+1}{6}-j)}{(n-1-3j)!\,\Gamma(\frac{2n+1}{6})}(12z^{3})^{-j}, \tag{15}\]
where \(l\) is the number such that \(n-3l-1\in[0,2]\). Using the above relations, \(R_{n}\), \(Q_{n}\) and \(P_{n}\) for \(3\leq n\leq 5\) can be explicitly expressed as
\[R_{3}=-\frac{z^{3}+3}{7},Q_{3}=\frac{3z^{2}}{7},P_{3}=\frac{z^{ 4}}{7}, \tag{16a}\] \[R_{4}=-\frac{z^{4}+4z}{9},Q_{4}=\frac{4z^{3}+4}{9},P_{4}=\frac{ z^{5}-2z^{2}}{9},\] (16b) \[R_{5}=-\frac{z^{5}+6z^{2}}{11},Q_{5}=\frac{5z^{4}+12z}{11},P_{5}= \frac{z^{6}-4z^{3}-6}{11}. \tag{16c}\]
### Cases for \(|t|\gg 1\) and \(Q(t)\approx\beta^{2}t^{4}\)
In the region \(|t|\gg 1\), one can only keep the highest order term in \(Q(t)\) such that \(Q(t)\approx\beta^{2}t^{4}\). Then, \(c_{1}^{(n)}(t)\) and \(c_{2}^{(n)}(t)\) are determined by the recursive equations
\[\ddot{c}_{1}^{(n)}+\beta^{2}t^{4}c_{1}^{(n)}=2\dot{c}_{2}^{(n-1)}, \tag{17a}\] \[\ddot{c}_{2}^{(n)}+\beta^{2}t^{4}c_{2}^{(n)}=-2\dot{c}_{1}^{(n-1)}. \tag{17b}\]
Here the zeroth order terms \(c_{1}^{(0)}(t)\) and \(c_{2}^{(0)}(t)\) are linear combinations of \(\bar{y}_{1}\equiv\sqrt{t}\,J_{1/6}(\beta t^{3}/3)\) and \(\bar{y}_{2}\equiv\sqrt{t}\,J_{-1/6}(\beta t^{3}/3)\), where \(J_{\nu}(z)\) are the Bessel functions of the first kind defined by
\[J_{\nu}(z)\equiv\sum_{n=0}^{\infty}\frac{(-1)^{n}}{\Gamma(\nu+n+1)n!}\left( \frac{z}{2}\right)^{\nu+2n}. \tag{18}\]
Hence, the fundamental solutions \(y_{1}(t)\) and \(y_{2}(t)\) of the equation \(\ddot{y}+\beta^{2}t^{4}y=0\) and their derivatives have the series expansions
\[y_{1} \equiv\Gamma\left(\frac{7}{6}\right)\left(\frac{\beta}{6}\right)^{-\frac{1}{6}}\bar{y}_{1}=\sum_{n=0}^{\infty}\frac{\Gamma(\frac{7}{6})(-1)^{n}}{\Gamma(\frac{7}{6}+n)n!}\left(\frac{\beta}{6}\right)^{2n}t^{1+6n}, \tag{19a}\] \[y_{2} \equiv\Gamma\left(\frac{5}{6}\right)\left(\frac{\beta}{6}\right)^{\frac{1}{6}}\bar{y}_{2}=\sum_{n=0}^{\infty}\frac{\Gamma(\frac{5}{6})(-1)^{n}}{\Gamma(\frac{5}{6}+n)n!}\left(\frac{\beta}{6}\right)^{2n}t^{6n},\] (19b) \[\dot{y}_{1} =\sum_{n=0}^{\infty}\frac{\Gamma(\frac{1}{6})(-1)^{n}}{\Gamma(\frac{1}{6}+n)n!}\left(\frac{\beta}{6}\right)^{2n}t^{6n},\] (19c) \[\dot{y}_{2} =-\frac{\beta^{2}t^{5}}{5}\sum_{n=0}^{\infty}\frac{\Gamma(\frac{11}{6})(-1)^{n}}{\Gamma(\frac{11}{6}+n)n!}\left(\frac{\beta}{6}\right)^{2n}t^{6n}. \tag{19d}\]
The fundamental solutions \(y_{1}(t)\) and \(y_{2}(t)\) and their derivatives can be expressed in terms of the generalized hypergeometric functions \({}_{\rho}F_{q}(a_{1},\cdots,a_{p};b_{1},\cdots,b_{q};z)\) as
\[y_{1}(t) ={}_{1}F_{2}\left(1;1,\frac{7}{6};\frac{-\beta^{2}t^{6}}{36} \right)t, \tag{21a}\] \[y_{2}(t) ={}_{1}F_{2}\left(1;1,\frac{5}{6};\frac{-\beta^{2}t^{6}}{36} \right),\] (21b) \[\dot{y}_{1}(t) ={}_{1}F_{2}\left(1;1,\frac{1}{6};\frac{-\beta^{2}t^{6}}{36} \right),\] (21c) \[\dot{y}_{2}(t) =-{}_{1}F_{2}\left(1;1,\frac{11}{6};\frac{-\beta^{2}t^{6}}{36} \right)\frac{\beta^{2}t^{5}}{5}, \tag{21d}\]
which obey the initial conditions \(y_{1}(0)=0\), \(y_{2}(0)=1\), \(\dot{y}_{1}(0)=1\) and \(\dot{y}_{2}(0)=0\). Hence, the Wronskian with respect to \(y_{1}\) and \(y_{2}\) is a constant, i.e., \(W(y_{1},y_{2})\equiv y_{1}\dot{y}_{2}-y_{2}\dot{y}_{1}=-1\). A direct computation yields the products of the fundamental solutions and their derivatives
\[y_{1}y_{2} ={}_{1}F_{2}(1;1,\frac{5}{6};\frac{-\beta^{2}t^{6}}{36})\,{}_{1}F_{2}(1;1,\frac{7}{6};\frac{-\beta^{2}t^{6}}{36})\,t,\] \[\dot{y}_{1}\dot{y}_{2} =-{}_{1}F_{2}(1;1,\frac{1}{6};\frac{-\beta^{2}t^{6}}{36})\,{}_{1}F_{2}(1;1,\frac{11}{6};\frac{-\beta^{2}t^{6}}{36})\frac{\beta^{2}t^{5}}{5},\] \[y_{1}\dot{y}_{2}+y_{2}\dot{y}_{1} ={}_{1}F_{2}(1;1,\frac{1}{6};\frac{-\beta^{2}t^{6}}{36})\,{}_{1}F_{2}(1;1,\frac{5}{6};\frac{-\beta^{2}t^{6}}{36})-{}_{1}F_{2}(1;1,\frac{7}{6};\frac{-\beta^{2}t^{6}}{36})\,{}_{1}F_{2}(1;1,\frac{11}{6};\frac{-\beta^{2}t^{6}}{36})\frac{\beta^{2}t^{6}}{5}.\]
Thus, the above products can be expanded in the Taylor series as
\[y_{1}y_{2} =t-\frac{2}{35}\beta^{2}t^{7}+\frac{6}{5005}\beta^{4}t^{13}+O(t^{19}),\] \[y_{1}\hat{y}_{2}+y_{2}\hat{y}_{1} =1-\frac{2}{5}\beta^{2}t^{6}+\frac{6}{385}\beta^{4}t^{12}+O(t^{18}),\] \[\hat{y}_{1}\hat{y}_{2} =-\frac{1}{5}\beta^{2}t^{5}+\frac{2}{55}\beta^{4}t^{11}+O(t^{17}). \tag{101}\]
Similar to the previous case, the integral of the product of the fundamental solutions \(y_{1}(t)\) and \(y_{2}(t)\) with a power weight \(t^{n}\) can be written in the form
\[\int t^{n}y_{1}y_{2}dt\equiv P_{n}y_{1}y_{2}+\frac{Q_{n}}{2}(y_{1}\hat{y}_{2}+ y_{2}\hat{y}_{1})+R_{n}\hat{y}_{1}\hat{y}_{2}, \tag{102}\]
where \(P_{n}=\frac{1}{2}\ddot{R}_{n}+\beta^{2}t^{4}R_{n}\), \(Q_{n}=-\dot{R}_{n}\), and \(R_{n}\) is determined by the third-order differential equation
\[\frac{d^{3}R_{n}}{dt^{3}}+4\beta^{2}t^{4}\frac{dR_{n}}{dt}+8\beta^{2}t^{3}R_{n }=2t^{n}, \tag{103}\]
from which one can show that \(R_{n}\) obey the recursive relation
\[R_{n+6}=\frac{t^{n+3}}{2\beta^{2}(n+5)}-\frac{(n+3)(n+2)(n+1)}{4\beta^{2}(n+5) }R_{n}, \tag{104}\]
A direct computation gives \(R_{n}\), \(Q_{n}\) and \(P_{n}\) for \(n=3,4,5\)
\[R_{3}(t)=\frac{1}{4\beta^{2}},Q_{3}(t)=0,P_{3}(t)=\frac{t^{4}}{4}, \tag{105a}\] \[R_{4}(t)=\frac{t}{6\beta^{2}},Q_{4}(t)=-\frac{1}{6\beta^{2}},P_ {4}(t)=\frac{t^{5}}{6},\] (105b) \[R_{5}(t)=\frac{t^{2}}{8\beta^{2}},Q_{5}(t)=-\frac{t}{4\beta^{2}},P_{5}(t)=\frac{t^{6}}{8}+\frac{1}{8\beta^{2}}. \tag{105c}\]
Then, one only needs to compute \(R_{n}(t)\) for \(n=0\), \(1\) and \(2\). To proceed further, one can expand the function \(R_{0}(t)\) as \(R_{0}(t)=\sum_{k=0}^{\infty}r_{k}t^{k}\). From Eq. (103), one immediately obtains the recursive relation
\[r_{k+6}=\frac{-4\beta^{2}(k+2)}{(k+6)(k+5)(k+4)}r_{k}, \tag{106}\]
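As a quick worked example of this recursion (added here for illustration; \(r_{3}=1/3\) follows from matching the constant term of Eq. (103), and the result can be checked against the series for \(R_{0}(t)\) given below), taking \(k=3\) gives
\[r_{9}=\frac{-4\beta^{2}(3+2)}{9\cdot 8\cdot 7}\,r_{3}=-\frac{20\beta^{2}}{504}\cdot\frac{1}{3}=-\frac{5}{378}\beta^{2},\]
which is precisely the coefficient of \(t^{9}\) in the expansion of \(R_{0}(t)\).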
For \(n=0\), the functions \(R_{0}\), \(Q_{0}\) and \(P_{0}\) can be expressed in terms of the generalized hypergeometric function as
\[R_{0}(t) ={}_{2}F_{3}\left(1,\frac{5}{6};\frac{7}{6},\frac{8}{6},\frac{9}{ 6};-\frac{\beta^{2}t^{6}}{9}\right)\frac{t^{3}}{3}, \tag{107a}\] \[Q_{0}(t) =-{}_{2}F_{3}\left(1,\frac{5}{6};\frac{3}{6},\frac{7}{6},\frac{8 }{6};-\frac{\beta^{2}t^{6}}{9}\right)t^{2},\] (107b) \[P_{0}(t) ={}_{2}F_{3}\left(1,\frac{5}{6};\frac{2}{6},\frac{3}{6},\frac{7}{ 6};-\frac{\beta^{2}t^{6}}{9}\right)t\] (107c) \[+{}_{2}F_{3}\left(1,\frac{5}{6};\frac{7}{6},\frac{8}{6},\frac{9}{ 6};-\frac{\beta^{2}t^{6}}{9}\right)\frac{\beta^{2}t^{7}}{3}.\]
These functions define different entire functions of \(t\), which have the following series expansions
\[R_{0}(t) =\frac{1}{3}t^{3}-\frac{5}{378}\beta^{2}t^{9}+\frac{11}{51597} \beta^{4}t^{15}+O(t^{21}),\] \[Q_{0}(t) =-t^{2}+\frac{5}{42}\beta^{2}t^{8}-\frac{55}{17199}\beta^{4}t^{14} +O(t^{20}),\] \[P_{0}(t) =t-\frac{1}{7}\beta^{2}t^{7}+\frac{5}{546}\beta^{4}t^{13}+O(t^{19}). \tag{108}\]
For \(n=1\), the functions \(R_{1}\), \(Q_{1}\) and \(P_{1}\) can be expressed in terms of the generalized hypergeometric function as
\[R_{1}(t) ={}_{2}F_{3}\left(1,1;\frac{8}{6},\frac{9}{6},\frac{10}{6};-\frac{ \beta^{2}t^{6}}{9}\right)\frac{t^{4}}{12}, \tag{109a}\] \[Q_{1}(t) =-{}_{2}F_{3}\left(1,1;\frac{4}{6},\frac{8}{6},\frac{9}{6};-\frac {\beta^{2}t^{6}}{9}\right)\frac{t^{3}}{3},\] (109b) \[P_{1}(t) ={}_{2}F_{3}\left(1,1;\frac{3}{6},\frac{4}{6},\frac{8}{6};-\frac{ \beta^{2}t^{6}}{9}\right)\frac{t^{2}}{2}\] \[+{}_{2}F_{3}\left(1,1;\frac{8}{6},\frac{9}{6},\frac{10}{6};-\frac {\beta^{2}t^{6}}{9}\right)\frac{\beta^{2}t^{8}}{12}. \tag{110c}\]
These functions define different entire functions of \(t\), which have the following series expansions
\[R_{1}(t) =\frac{1}{12}t^{4}-\frac{1}{360}\beta^{2}t^{10}+\frac{1}{25200}\beta^{4}t^{16}+O(t^{22}),\] \[Q_{1}(t) =-\frac{1}{3}t^{3}+\frac{1}{36}\beta^{2}t^{9}-\frac{1}{1575}\beta^{4}t^{15}+O(t^{21}),\] \[P_{1}(t) =\frac{1}{2}t^{2}-\frac{1}{24}\beta^{2}t^{8}+\frac{1}{504}\beta^{4}t^{14}+O(t^{20}). \tag{111}\]
For \(n=2\), the functions \(R_{2}\), \(Q_{2}\) and \(P_{2}\) can be expressed in terms of the generalized hypergeometric function as
\[R_{2}(t) ={}_{2}F_{3}\left(1,\frac{7}{6};\frac{9}{6},\frac{10}{6},\frac{11}{6};-\frac{\beta^{2}t^{6}}{9}\right)\frac{t^{5}}{30},\] \[Q_{2}(t) =-{}_{2}F_{3}\left(1,\frac{7}{6};\frac{5}{6},\frac{9}{6},\frac{10}{6};-\frac{\beta^{2}t^{6}}{9}\right)\frac{t^{4}}{6},\] \[P_{2}(t) ={}_{2}F_{3}\left(1,\frac{7}{6};\frac{4}{6},\frac{5}{6},\frac{9}{6};-\frac{\beta^{2}t^{6}}{9}\right)\frac{t^{3}}{3}+{}_{2}F_{3}\left(1,\frac{7}{6};\frac{9}{6},\frac{10}{6},\frac{11}{6};-\frac{\beta^{2}t^{6}}{9}\right)\frac{\beta^{2}t^{9}}{30}. \tag{112}\]
These functions define different entire functions of \(t\), which have the following series expansions
\[R_{2}(t) =\frac{1}{30}t^{5}-\frac{7}{7425}\beta^{2}t^{11}+\frac{91}{7573500}\beta^{4}t^{17}+O(t^{23}),\] \[Q_{2}(t) =-\frac{t^{4}}{6}+\frac{7}{675}\beta^{2}t^{10}-\frac{91}{445500}\beta^{4}t^{16}+O(t^{22}),\] \[P_{2}(t) =\frac{t^{3}}{3}-\frac{1}{54}\beta^{2}t^{9}+\frac{7}{10125}\beta^{4}t^{15}+O(t^{21}). \tag{115}\]
From the recursive relation Eq. (104), one can derive \(R_{n}(t)\), \(Q_{n}(t)\) and \(P_{n}(t)\) for \(n=6k+m\) with \(m=0,1,2\), and \(k\) being
any non-negative integer
\[R_{n}(t) =\frac{2t^{n+3}\,{}_{2}F_{3}\left(1,\frac{n+5}{6};\frac{n+7}{6},\frac{n+8}{6},\frac{n+9}{6};-\frac{\beta^{2}t^{6}}{9}\right)}{(n+1)(n+2)(n+3)},\] \[Q_{n}(t) =-\frac{2t^{n+2}\,{}_{2}F_{3}\left(1,\frac{n+5}{6};\frac{n+3}{6},\frac{n+7}{6},\frac{n+8}{6};-\frac{\beta^{2}t^{6}}{9}\right)}{(n+1)(n+2)},\] \[P_{n}(t) =\frac{t^{n+1}\,{}_{2}F_{3}\left(1,\frac{n+5}{6};\frac{n+2}{6},\frac{n+3}{6},\frac{n+7}{6};-\frac{\beta^{2}t^{6}}{9}\right)}{n+1}+\frac{2\beta^{2}t^{n+7}\,{}_{2}F_{3}\left(1,\frac{n+5}{6};\frac{n+7}{6},\frac{n+8}{6},\frac{n+9}{6};-\frac{\beta^{2}t^{6}}{9}\right)}{(n+1)(n+2)(n+3)}. \tag{26}\]
With the analytical expressions of \(R_{n}\), \(Q_{n}\) and \(P_{n}\), one can systematically derive \(c_{1}^{(n)}\) and \(c_{2}^{(n)}\). For example, for \(n=2\), one obtains
\[c_{1}^{(2)}(t) =-\frac{t^{2}}{2}\left[1+{}_{2}F_{3}\left(1,\frac{5}{6};\frac{3}{6},\frac{7}{6},\frac{8}{6};-\frac{\beta^{2}t^{6}}{9}\right)\right]c_{1}^{(0)}(t)+\frac{t^{3}}{3}\,{}_{2}F_{3}\left(1,\frac{5}{6};\frac{7}{6},\frac{8}{6},\frac{9}{6};-\frac{\beta^{2}t^{6}}{9}\right)\dot{c}_{1}^{(0)}(t), \tag{27a}\] \[c_{2}^{(2)}(t) =-\frac{t^{2}}{2}\left[1+{}_{2}F_{3}\left(1,\frac{5}{6};\frac{3}{6},\frac{7}{6},\frac{8}{6};-\frac{\beta^{2}t^{6}}{9}\right)\right]c_{2}^{(0)}(t)+\frac{t^{3}}{3}\,{}_{2}F_{3}\left(1,\frac{5}{6};\frac{7}{6},\frac{8}{6},\frac{9}{6};-\frac{\beta^{2}t^{6}}{9}\right)\dot{c}_{2}^{(0)}(t). \tag{27b}\]
|
2309.01194 | A Survey on Service Route and Time Prediction in Instant Delivery:
Taxonomy, Progress, and Prospects | Instant delivery services, such as food delivery and package delivery, have
achieved explosive growth in recent years by providing customers with
daily-life convenience. An emerging research area within these services is
service Route\&Time Prediction (RTP), which aims to estimate the future service
route as well as the arrival time of a given worker. As one of the most crucial
tasks in those service platforms, RTP stands central to enhancing user
satisfaction and trimming operational expenditures on these platforms. Despite
a plethora of algorithms developed to date, there is no systematic,
comprehensive survey to guide researchers in this domain. To fill this gap, our
work presents the first comprehensive survey that methodically categorizes
recent advances in service route and time prediction. We start by defining the
RTP challenge and then delve into the metrics that are often employed.
Following that, we scrutinize the existing RTP methodologies, presenting a
novel taxonomy of them. We categorize these methods based on three criteria:
(i) type of task, subdivided into only-route prediction, only-time prediction,
and joint route\&time prediction; (ii) model architecture, which encompasses
sequence-based and graph-based models; and (iii) learning paradigm, including
Supervised Learning (SL) and Deep Reinforcement Learning (DRL). Conclusively,
we highlight the limitations of current research and suggest prospective
avenues. We believe that the taxonomy, progress, and prospects introduced in
this paper can significantly promote the development of this field. | Haomin Wen, Youfang Lin, Lixia Wu, Xiaowei Mao, Tianyue Cai, Yunfeng Hou, Shengnan Guo, Yuxuan Liang, Guangyin Jin, Yiji Zhao, Roger Zimmermann, Jieping Ye, Huaiyu Wan | 2023-09-03T14:43:33Z | http://arxiv.org/abs/2309.01194v1 | A Survey on Service Route and Time Prediction in Instant Delivery: Taxonomy, Progress, and Prospects
###### Abstract
Instant delivery services, such as food delivery and package delivery, have achieved explosive growth in recent years by providing customers with daily-life convenience. An emerging research area within these services is service Route&Time Prediction (RTP), which aims to estimate the future service route as well as the arrival time of a given worker. As one of the most crucial tasks in those service platforms, RTP stands central to enhancing user satisfaction and trimming operational expenditures on these platforms. Despite a plethora of algorithms developed to date, there is no systematic, comprehensive survey to guide researchers in this domain. To fill this gap, our work presents the first comprehensive survey that methodically categorizes recent advances in service route and time prediction. We start by defining the RTP challenge and then delve into the metrics that are often employed. Following that, we scrutinize the existing RTP methodologies, presenting a novel taxonomy of them. We categorize these methods based on three criteria: (i) type of task, subdivided into only-route prediction, only-time prediction, and joint route&time prediction; (ii) model architecture, which encompasses sequence-based and graph-based models; and (iii) learning paradigm, including Supervised Learning (SL) and Deep Reinforcement Learning (DRL). Conclusively, we highlight the limitations of current research and suggest prospective avenues. We believe that the taxonomy, progress, and prospects introduced in this paper can significantly promote the development of this field.
service route and time prediction, instant delivery

Footnote †: _(Corresponding author: Huaiyu Wan.)_
## 1 Introduction
Instant delivery services [1, 2, 3], such as logistics and food delivery, are playing an increasingly important role in serving people's daily demands. By the end of 2021, China's online food delivery platforms processed approximately 29.3 billion orders, engaging over 4 million workers and 460 million consumers. A crucial task on these service platforms is service route and time prediction (RTP) [2, 4, 5], which aims to estimate the future service route and arrival time of a worker given his unfinished tasks. The RTP problem has received increasing attention from both academia and industry in recent years, as it is a foundation for building intelligent service platforms, such as the logistics platforms Cainiao1, JD.COM2, and the food delivery platforms Meituan3, GrabFood4. For instance, accurate arrival time prediction can largely alleviate the waiting anxiety of customers [5, 6, 7, 8], thus improving the customer's experience. Moreover, route forecasts can be integrated into dispatching systems, optimizing order assignments in proximity to a worker's route [9, 10, 11]. In light of the above benefits, precise RTP predictions not only elevate user experience but also reduce operational costs, therefore deserving further studies in the research community.
Footnote 1: [https://global.cainiao.com/](https://global.cainiao.com/)
Footnote 2: [https://www.jdl.com/](https://www.jdl.com/)
Footnote 3: [https://www.meituan.com/](https://www.meituan.com/)
Thanks to the wide adoption of personal digital assistant (PDA) devices by workers, massive historical behavior records are collected from their daily operations, such as GPS locations, task accept-times, and task finish-times. This forms the data foundation for learning-based models to mine workers' behavior patterns, particularly in terms of routing and estimated time of arrival, which we focus on in this paper. To this end, we have witnessed a variety of learning-based models dedicated to service route and time prediction in instant delivery in recent years. However, there is no systematic and comprehensive survey to summarize and guide research in this field. This deficiency hinders researchers' grasp of both the present landscape and the evolving trends of this research field. Addressing this need, we introduce
the first survey on RTP techniques, offering a systematic overview and arrangement of the latest endeavors in this domain. Firstly, we define the RTP task and introduce commonly used metrics. Subsequently, we conduct an exhaustive examination of current RTP approaches, stratifying them across three criteria: (i) task perspective (only-route prediction, only-time prediction, both route and time prediction), (ii) model architecture (sequence-based, graph-based), and (iii) learning paradigm (supervised learning, deep reinforcement learning). The proposed taxonomy is shown in Figure 4. At last, we discuss the limitations in current research and suggest potential directions for further exploration. Overall, we summarize our contributions as the following three points:
* _The First Survey on RTP_: To the best of our knowledge, this is the first survey that encompasses a thorough examination of recent advancements in RTP research, ensuring a complete understanding of the field's progress and evolution.
* _A Systematic Taxonomy and Classification_: We create a well-organized taxonomy and classification system for various RTP methods from three perspectives, enabling researchers to better comprehend the relationships between different approaches.
* _Limitations and Future Directions_: We identify the limitations of current works and discuss the potential future research directions in route and time prediction, to inspire innovative ideas and promote growth within the domain.
**Comparisons to Existing Surveys**. Since there are no directly related surveys, we compare our work with existing literature in two key areas: route-related surveys and time-related surveys. Firstly, one direction of route-related surveys primarily concentrates on route optimization [12, 13, 14, 15]. These studies aim to plan the best route for workers based on metrics like travel distance. In contrast, our work centers on route prediction, seeking to forecast the route a worker is most likely to choose. Additionally, another avenue in route-related surveys addresses the next location prediction problem [16, 17, 18, 19, 20], aiming to select the most probable next location a user will visit from a set of common location candidates (i.e., all predictions share the same location candidates). Unlike those studies, the predicted route in RTP is constrained by the unfinished tasks. Specifically, the location candidates in the estimated route are derived from, and vary with, the input of incomplete tasks (i.e., different predictions have different candidates), making it a more challenging problem to tackle. Thirdly, earlier time-related surveys focus on predicting arrival times [21, 22, 23, 24] in the map system, particularly for buses. Our work, however, targets arrival time prediction in instant delivery, a more complex task due to the challenge of predicting workers' actions. In summary, our research addresses the RTP problem in the emerging field of instant delivery, an area that deserves further investigation.
**Organization.** The survey is structured as follows: Section 2 formulates the problem and introduces the frequently employed metrics. Section 3 presents the proposed taxonomy. Sections 4-6 introduce the details of current route prediction, time prediction, and joint route and time prediction methods, respectively. Lastly, Sections 7-8 conclude the survey by analyzing the limitations of current work and pointing out future directions in this research field.
## 2 Preliminaries
We first give a general formulation of RTP to facilitate the following sections (different methods may adopt formulations that deviate slightly from this general form, as we will introduce later). Then we introduce commonly used metrics for evaluating different methods.
**Background.** As shown in Figure 1, in a typical instant delivery service, such as food delivery, the customer first places a task (order) with certain spatial-temporal requirements such as delivery location and required time window (e.g., 9:00 am - 9:30 am) in the online platform, then the platform will dispatch the task to a worker. At last, the worker will try to finish the task while satisfying the time and location requirements.
**Definition 1: Task.** A task represents a pick-up or a delivery order in the platform. Different services have different types of tasks. For instance, there are only pick-up tasks in the package pick-up service, while both pick-up and delivery tasks exist in the food delivery service. Given a task denoted by \(o_{i}\), it is associated with both spatial and temporal features:
\[\mathbf{o}_{i}=(o_{i}^{lat},o_{i}^{lng},o_{i}^{aoi},o_{i}^{type},o_{i}^{at},o_{i}^{ft},o_{i}^{tws},o_{i}^{twe}), \tag{1}\]
where the spatial features of task \(o_{i}\) include:
* \(o_{i}^{lat}\), the latitude of the task;
* \(o_{i}^{lng}\), the longitude of the task;
* \(o_{i}^{aoi}\) (if applicable), the ID of the Area-Of-Interest (AOI) where the task is located in.
* \(o_{i}^{type}\) (if applicable), the type (e.g., school area, residential area) of the task's AOI.
And the temporal features of task \(o_{i}\) include:
* \(o_{i}^{at}\), the accept-time (by worker) of the task.
* \(o_{i}^{ft}\), the finish-time of the task.
* \(o_{i}^{tws}\), the start of the required time window.
* \(o_{i}^{twe}\), the end of the required time window.
In Figure 2, we further give a simple illustration of the time features related to task \(o_{i}\) to facilitate the understanding.
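For concreteness, the attributes in Eq. (1) can be bundled into a small data structure; the following Python sketch is purely illustrative (the field names are our own and not part of any cited platform's schema):

```python
from dataclasses import dataclass

@dataclass
class Task:
    """A pick-up or delivery task, mirroring the attributes of Eq. (1)."""
    lat: float           # o_i^lat, latitude of the task location
    lng: float           # o_i^lng, longitude of the task location
    aoi_id: int          # o_i^aoi, ID of the Area-Of-Interest containing the task
    aoi_type: str        # o_i^type, type of the AOI (e.g., "school", "residential")
    accept_time: float   # o_i^at, when the worker accepted the task
    finish_time: float   # o_i^ft, when the task was finished (unknown for unfinished tasks)
    tw_start: float      # o_i^tws, start of the required time window
    tw_end: float        # o_i^twe, end of the required time window
```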
**Definition 2: Worker.** A worker \(w\) is responsible for the tasks generated by customers. For instance, in the logistics platform, the worker is the courier who picks up/delivers packages, while in the food delivery platform, the worker is the courier who needs to finish both pickup and delivery tasks. Each worker is associated with his personalized features \(\mathbf{x}_{w}\), such as the total number of work days in the platform, the average number of tasks in a day, the average arrival time, etc.

Fig. 1: Illustration of instant delivery service.
### _General Service Route&Time Prediction Problem_
**Definition 3: Finished Tasks.** At a certain time \(t\), a worker \(w\) has \(m\) finished tasks, denoted by \(\mathcal{O}^{f}_{w,t}=<o_{1},\ldots,o_{m}>\), where \({o}^{ft}_{i}\leq t\) for \(i\in\{1,\ldots,m\}\). It is worth mentioning that \(\mathcal{O}^{f}_{w,t}\) is essentially a sequence sorted by the task's finish time, i.e., \({o}^{ft}_{i}\leq{o}^{ft}_{j}\) if \(i<j\). For simplicity, we remove the subscript \(w\) of the variables related to a worker, such as \(\mathcal{O}^{f}_{t}\) in the following description.
**Definition 4: Unfinished Tasks.** At time \(t\), a worker \(w\) can also have \(n\) unfinished tasks, denoted by \(\mathcal{O}^{u}_{t}=\{o_{1},\ldots,o_{n}\}\), where \({o}^{at}_{i}\leq t\leq{o}^{ft}_{i}\) (i.e., accepted but not finished) for \(i\in\{1,\ldots,n\}\). Unlike the finished tasks, we consider \(\mathcal{O}^{u}_{t}\) as a set since the finish time of each task in \(\mathcal{O}^{u}_{t}\) is not known.
Since the unfinished tasks are a prerequisite for all route and time prediction models, here we introduce some features, commonly used by current models, that can influence the couriers' routing decisions and arrival times. Each task \(o_{i}\) is associated with a feature \(\mathbf{x}_{i}\), which can be divided into time-invariant \(\mathbf{x}^{ti}_{i}\) and time-variant \(\mathbf{x}^{tv}_{i}\) features. Specifically, the time-invariant features of the \(i\)-th unfinished task include:
\[\mathbf{x}^{ti}_{i}=({o}^{lat}_{i},{o}^{lng}_{i},{o}^{aoi}_{i},{o}^{type}_{i},{o}^ {at}_{i},{o}^{tws}_{i},{o}^{twe}_{i}). \tag{2}\]
where each feature has been explained in the definition of a task, and the time-variant features include:
\[\mathbf{x}^{tv}_{i}=({o}^{d}_{i},{o}^{tws}_{i}-t,{o}^{twe}_{i}-t,t-{o}^{at}_{i}), \tag{3}\]
* \({o}^{d}_{i}\) is the distance between the task and the worker's current location, since workers tend to visit the nearby task first.
* \({o}^{tws}_{i}-t/{o}^{twe}_{i}-t\) calculates the time duration between the current time and the required time window. Workers tend to visit the more urgent task first.
* \(t-{o}^{at}_{i}\), which is the duration that the task has joined the worker's task pool. The longer a task is in a worker's task pool, the more likely it will be visited next by the worker.
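Putting the above together, a minimal sketch of how the time-invariant (Eq. (2)) and time-variant (Eq. (3)) features of one unfinished task could be assembled at query time \(t\) is given below; the haversine helper and all names are illustrative assumptions rather than the feature pipeline of any specific model:

```python
import math

def haversine_km(lat1, lng1, lat2, lng2):
    """Great-circle distance in kilometres between two (lat, lng) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lng2 - lng1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def build_features(task, worker_lat, worker_lng, t):
    """Return (time-invariant, time-variant) feature lists for one unfinished task."""
    x_ti = [task["lat"], task["lng"], task["aoi_id"], task["aoi_type"],
            task["accept_time"], task["tw_start"], task["tw_end"]]         # Eq. (2)
    dist = haversine_km(worker_lat, worker_lng, task["lat"], task["lng"])  # o_i^d
    x_tv = [dist,
            task["tw_start"] - t,      # time until the required window opens
            task["tw_end"] - t,        # time until the required window closes
            t - task["accept_time"]]   # how long the task has been in the pool, Eq. (3)
    return x_ti, x_tv
```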
**Definition 5: Route Constraints.** In reality, various route constraints can exist in different services, such as the pick-up then delivery constraint (i.e., the delivery location of an order can only be visited after its pick-up location is visited [5, 25]) and capacity constraint (i.e., the total weight of items carried by a worker can not exceed its capacity of load [26, 27]). Route constraints of a problem can be represented by a rule set \(\mathcal{C}\), with each item corresponding to a specific route constraint.
**Definition 6: Route Prediction Problem.** Given a worker \(w\)'s finished and unfinished tasks at time \(t\), route prediction aims to learn a mapping function \(\mathcal{F}_{R}\) to predict the worker's future service route \(\mathbf{\hat{\pi}}\) of unfinished tasks which can satisfy the given route constraints \(\mathcal{C}\), formulated as:
\[\mathcal{F}_{R}(\mathcal{O}^{f}_{t},\mathcal{O}^{u}_{t};\mathcal{C})\to\mathbf{ \hat{\pi}}=[\hat{\pi}_{1},\hat{\pi}_{2}\cdots\hat{\pi}_{n}], \tag{4}\]
where \(\hat{\pi}\) is essentially a permutation of the unfinished tasks \(\mathcal{O}^{u}_{t}\), where \(\hat{\pi}_{i}\) means that the \(i\)-th node in the route is task \(\hat{\pi}_{i}\). Moreover, \(\hat{\pi}_{i}\in\{1,\cdots n\}\) and \(\hat{\pi}_{i}\neq\hat{\pi}_{j}\) if \(i\neq j\).
**Definition 7: Time Prediction Problem.** Given a worker \(w\)'s finished and unfinished tasks at time \(t\), time prediction aims to learn a mapping function \(\mathcal{F}_{T}\) to predict the worker's arrival time for all unfinished tasks, formulated as:
\[\mathcal{F}_{T}(\mathcal{O}^{f}_{t},\mathcal{O}^{u}_{t})\to\mathbf{\hat{\tau}}=[ \hat{\tau}_{1},\hat{\tau}_{2}\cdots\hat{\tau}_{n}], \tag{5}\]
where \(\hat{\tau}_{i}={o}^{ft}_{i}-t\) denotes how long it takes the worker to arrive at (or finish) task \(i\) after the query time \(t\).
**Definition 8: Route&Time Prediction Problem.** Similarly, we formulate the route and time prediction problem as follows:
\[\mathcal{F}_{\mathcal{R}T}(\mathcal{O}^{f}_{t},\mathcal{O}^{u}_{t};\mathcal{C} )\to(\mathbf{\hat{\pi}},\mathbf{\hat{\tau}}). \tag{6}\]
We give an illustration of the route and time prediction problem in Figure 3. And Table I lists all the related notations in the paper.
### _Metrics_
Here, we introduce a comprehensive metric system to evaluate the performance of route prediction and time prediction, respectively.
#### 2.2.1 Evaluation of Route Prediction
Note that in some instant delivery services (e.g., food delivery), the tasks of a worker are not settled from the beginning. Rather, they are revealed over time because the platform can continuously dispatch new tasks to the worker. In that case, a new task coming at \(t^{\prime}\) can change the worker's previous decisions made at \(t\), making observations after \(t^{\prime}\) inaccurate [4, 5] as the label for the sample at time \(t\). Therefore, a better way is to treat the route observations between \(t\) and \(t^{\prime}\) as the label information during training and evaluation; recall that \(t^{\prime}\) is the dispatch time of the first coming task after \(t\).

In the evaluation process, formally, we have the prediction \(\mathbf{\hat{\pi}}=[\hat{\pi}_{1},\ldots,\hat{\pi}_{n}]\) and the label \(\mathbf{\pi}=[\pi_{1},\ldots,\pi_{n^{\prime}}]\), where \(n^{\prime}\leq n\) and \(\mathrm{set}(\mathbf{\pi})\subseteq\mathrm{set}(\mathbf{\hat{\pi}})\). Let \(Y_{\mathbf{\pi}}(i)\) and \(Y_{\mathbf{\hat{\pi}}}(i)\) be the order of node \(i\) in the label and prediction route, respectively.
Fig. 3: Illustration of the route and time prediction problem.
Fig. 2: Illustration for the time features of task \(o_{i}\).
One can evaluate the route prediction performance by the following metrics from both global and local perspectives.
**From the Global Perspective**. Metrics in this line measure the overall similarity of two input sequences, including:
* **KRC**: Kendall Rank Correlation [28] is a statistical criterion to measure the ordinal association between two sequences. Given any task pair \((i,j)\), it is said to be concordant if both \(Y_{\mathbf{\pi}}(i)>Y_{\mathbf{\pi}}(j)\) and \(Y_{\hat{\mathbf{\pi}}}(i)>Y_{\hat{\mathbf{\pi}}}(j)\), or both \(Y_{\mathbf{\pi}}(i)<Y_{\mathbf{\pi}}(j)\) and \(Y_{\hat{\mathbf{\pi}}}(i)<Y_{\hat{\mathbf{\pi}}}(j)\). Otherwise, it is said to be discordant. To calculate this metric, tasks in the prediction are first divided into two sets: i) tasks in label \(\mathcal{O}_{in}=\{\hat{\pi}_{i}|\hat{\pi}_{i}\in\mathbf{\pi}\}\), and ii) tasks not in label \(\mathcal{O}_{not}=\{\hat{\pi}_{i}|\hat{\pi}_{i}\not\in\mathbf{\pi}\}\). We know the order of items in \(\mathcal{O}_{in}\), but it is hard to tell the order of items in \(\mathcal{O}_{not}\); still, we know that the order of all items in \(\mathcal{O}_{in}\) is ahead of that in \(\mathcal{O}_{not}\). Therefore, KRC compares the task pairs \(\{(i,j)|i,j\in\mathcal{O}_{in}\text{ and }i\neq j\}\cup\{(i,j)|i\in \mathcal{O}_{in}\text{ and }j\in\mathcal{O}_{not}\}\). In this way, it is defined as: \[\mathrm{KRC}=\frac{N_{c}-N_{d}}{N_{c}+N_{d}},\] (7) where \(N_{c}\) is the number of concordant pairs, and \(N_{d}\) is the number of discordant pairs.
* **ED:** Edit Distance [29] (ED) is an indicator to quantify the dissimilarity of two sequences, by counting the minimum number of required operations to transform one sequence (in this case, the route prediction) into another (i.e., the actual route), formulated as: \[\mathrm{ED}=\mathrm{EditDistance}(\overline{\mathbf{\pi}},\mathbf{\pi}).\] (8) where \(\overline{\mathbf{\pi}}=\hat{\mathbf{\pi}}\cap\mathbf{\pi}\), which is the common part of the prediction and label, with items' relative order in the prediction preserved.
* **LSD** and **LMD**[4]: The Location Square Deviation (LSD) and the Location Mean Deviation (LMD) measure the degree that the prediction deviates from the label, formulated as: \[\mathrm{LSD} =\frac{1}{n^{\prime}}\sum_{i=1}^{n^{\prime}}(Y_{\pi}(\pi_{i})-Y_{ \hat{\pi}}(\pi_{i}))^{2}\] (9) \[\mathrm{LMD} =\frac{1}{n^{\prime}}\sum_{i=1}^{n^{\prime}}|Y_{\pi}(\pi_{i})-Y_{ \hat{\pi}}(\pi_{i})|.\]
* **DMAE**[30]: It denotes the mean absolute error of the distance differences between generated routes and real routes. It measures how far the generated routes deviate from the real routes in terms of spatial distance. \[\mathrm{DMAE}=\frac{1}{n^{\prime}}\sum_{i=1}^{n^{\prime}}|\mathrm{Distance}( \hat{\pi}_{i},\pi_{i})|,\] (10) where \(\mathrm{Distance}(\cdot)\) is the distance function, which calculates the distance given two tasks.
* **SR@\(k\)[30]**: which represents the relaxed concordancy rate of generated routes compared with real routes. It first calculates the distance between nodes in the generated route and the real route. If the distance is less than \(k\) meters, the two route nodes are considered to be consistent. The number of consistent nodes is then counted, and this count is divided by the length of the routes to obtain the metric. The purpose of relaxing the distance criteria is to account for statistical errors that may arise when workers visit tasks from the same location, formulated as: \[\mathrm{SR}@k=\frac{1}{n^{\prime}}\sum_{i=1}^{n^{\prime}}\mathbb{I}(|\mathrm{ Distance}(\hat{\pi}_{i},\pi_{i})|<k),\] (11) where \(\mathbb{I}(\cdot)\) is the indicator function, and \(\mathbb{I}(|\mathrm{Distance}(\hat{\pi}_{i},\pi_{i})|<k)\) equals 1 if \(|\mathrm{Distance}(\hat{\pi}_{i},\pi_{i})|<k\) else 0.
* **MRR**[5]: The Mean Reciprocal Rank measures whether the model can predict the actual next location with a higher probability, calculated by averaging the reciprocal of the actual locations' ranks: \[\mathrm{MRR}=\frac{1}{n^{\prime}}\sum_{i=1}^{n^{\prime}}\frac{1}{|(Y_{\pi}(\pi_{i})-Y_{\hat{\pi}}(\pi_{i}))|+1}.\] (12)

**From the Local Perspective**. Metrics in this line focus on evaluating the performance of top-\(k\) prediction, including:
* **HR@\(k\)[4]**: Hit-Rate@\(k\) is used to quantify the similarity between the top-\(k\) items of two sequences. It describes how many of the first \(k\) predictions are in the label, which is formulated as follows: \[\mathrm{HR@}k=\frac{|\hat{\mathbf{\pi}}_{[1:k]}\cap\mathbf{\pi}_{[1:k]}|}{k},\] (13) where \(|\cdot|\) means the cardinality of a set.

TABLE I: Summary of symbol notations.

| Notation | Definition |
| --- | --- |
| \(w\) | the target worker |
| \(\mathcal{C}\) | route constraints |
| \(\mathcal{O}_{t}^{f}\) | worker \(w\)'s finished tasks at time \(t\) |
| \(\mathcal{O}_{t}^{u}\) | worker \(w\)'s unfinished tasks at time \(t\) |
| \(\hat{\mathbf{\pi}}\) | \(\hat{\mathbf{\pi}}=\{\hat{\pi}_{1},\dots,\hat{\pi}_{n}\}\), predicted service route |
| \(\mathbf{\pi}\) | \(\mathbf{\pi}=\{\pi_{1},\dots,\pi_{n^{\prime}}\}\), the route label |
| \(n^{\prime}\) | number of tasks in the route label |
| \(Y_{\mathbf{\pi}}(i)\) | the order of task \(i\) in the label route |
| \(Y_{\hat{\mathbf{\pi}}}(i)\) | the order of task \(i\) in the predicted route |
| \(\mathbf{\tau}/\hat{\mathbf{\tau}}\) | actual / predicted arrival time |
| \(\mathbf{E}\) | the embedding matrix of all unfinished tasks |
| \(\mathbf{h}_{j}\) | the hidden state of decoding step \(j\) |
| \(u_{i}^{j}\) | the compatibility score of task \(i\) at decoding step \(j\) |
| **RL-related notation** | |
| \(M\) | the Markov Decision Process |
| \(S\) | the set of states |
| \(A\) | the set of actions |
| \(P\) | the transition probability |
| \(R\) | the reward function |
| \(\gamma\) | the discount factor |
| **Graph-related notation** | |
| \(\mathcal{G}_{t}^{w}\) | input ST-graph of worker \(w\) at time \(t\) |
| \(\mathcal{N}_{i}\) | neighbors of node \(i\) |
| \(\mathbf{X}_{t}^{v}\) / \(\mathbf{X}_{t}^{e}\) | node / edge features at time \(t\) |
| \(\mathbf{E}_{t}\) / \(\mathbf{Z}_{t}\) | node / edge embeddings after encoding |
In summary, KRC, ED, LSD, LMD, and MRR measure the overall similarity of the predicted route and the label route according to tasks' orders in the two sequences. And DMAE, SR@\(k\) measures the overall similarity based on the task's distance in the geographical location. In comparison, HR@\(k\) and Same@\(k\) calculate their similarity from the local perspective. Higher KRC, HR@\(k\), Same@\(k\), SR@\(k\), MRR, and lower ED, LSD, LMD, DMAE mean better performance of the algorithm.
#### 2.2.2 Evaluation of Time Prediction
Time prediction is typically regarded as a regression problem. Thus metrics for the regression problem are employed to evaluate the performance. Let \(\tau_{i}\), \(\hat{\tau}_{i}\) be the actual arrival time and the predicted arrival time, respectively. And \(n\) is the total number of unfinished tasks. The following metrics can be used:
* **MAE**[7]. Mean Absolute Error (MAE) is a commonly used metric, formulated as follows: \[\mathrm{MAE}=\frac{1}{n}\sum_{i=1}^{n}\left|\hat{\tau}_{i}-\tau_{i}\right|.\] (15)
* **RMSE**[7]. Root Mean Squared Error (RMSE) is another commonly used metric: \[\mathrm{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{\tau}_{i}-\tau_{i} \right)^{2}}.\] (16)
* **MAPE**[5]. Mean Absolute Percentage Error, formulated as: \[\mathrm{MAPE}=\frac{1}{n}\sum_{i=1}^{n}\left|\frac{\hat{\tau}_{i}-\tau_{i}}{ \tau_{i}}\right|.\] (17)
* **ACC@\(k\)[32]**. Besides the above traditional metrics, delivery platforms usually provide an interval of arrival time for customer notification. Thus \(\mathrm{ACC@}k\) is introduced by computing the ratio of predictions where the time difference between the predicted time and the true time is less than \(k\) minutes, formulated as \[\mathrm{ACC@}k=\frac{1}{n}\sum_{i=1}^{n}\mathbb{I}(\left|\hat{\tau}_{i}-\tau_{i}\right|<k).\] (18) where \(\mathbb{I}(\cdot)\) is the indicator function, and \(\mathbb{I}(\left|\hat{\tau}_{i}-\tau_{i}\right|<k)\) equals 1 if \(\left|\hat{\tau}_{i}-\tau_{i}\right|<k\) else 0. Usually, ACC@\(30\) is adopted to test the model's ability in one-hour prediction.
Overall, route prediction metrics focus on evaluating the similarity between two ranked sequences, while time prediction metrics evaluate the regression error between predictions and labels.
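A corresponding sketch for the time-prediction metrics, with \(k\) in minutes for ACC@\(k\) (again purely illustrative):

```python
import numpy as np

def time_metrics(pred, true, k=30):
    """MAE, RMSE, MAPE and ACC@k for predicted vs. actual arrival times (in minutes)."""
    pred, true = np.asarray(pred, dtype=float), np.asarray(true, dtype=float)
    mae = np.mean(np.abs(pred - true))            # Eq. (15)
    rmse = np.sqrt(np.mean((pred - true) ** 2))   # Eq. (16)
    mape = np.mean(np.abs((pred - true) / true))  # Eq. (17)
    acc_k = np.mean(np.abs(pred - true) < k)      # Eq. (18)
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape, f"ACC@{k}": acc_k}

print(time_metrics(pred=[25.0, 40.0, 80.0], true=[30.0, 35.0, 120.0]))
```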
## 3 The Proposed Taxonomy
This paper provides a comprehensive review of current state-of-the-art models for the RTP task. In this section, we introduce the overall taxonomy of existing efforts, which is shown in Figure 4. And the summary for neural architectures of existing route and time prediction models is shown in Table II.
In the proposed taxonomy, we classify the existing methods by three dimensions: (i) task type (including route prediction, time prediction, and route-time prediction); (ii) model architecture (sequence-based and graph-based); (iii) learning paradigm (supervised learning, deep reinforcement learning). Here we briefly introduce each classification dimension.
### _Task Type_
Broadly speaking, existing algorithms fall into three categories according to their task type, including:
* **(Only) route prediction**. Models in this category only aim to solve the route prediction problem, including Osquare [2], DeepRoute [4], DeepRoute+ [33], CP-Route [34], Graph2Route [31], DRL4Route [35], and ILRoute [30]. Those methods typically utilize learning-based methods to learn the routing strategies/patterns from workers' massive historical behaviors.
* **(Only) time prediction**. Models in this category focus on directly predicting the arrival time of workers without explicitly modeling the route selection process, including DeepETA [7], OFCT [36], CNN-Model [37], MetaSTP [38], IGT [39], DGM-DTE [40], GSL-QR [41].
* **Route and time prediction**. Intuitively, the arrival time of a worker is influenced by his route selection. On the other hand, route selection can also correlate with the arrival time of finished tasks. Therefore, methods in this line learn the joint prediction of route and time, aiming to boost each task's performance by leveraging their mutual correlation, including FDNET [5], RankETPA [42], I\({}^{2}\)RTP [32], and M\({}^{2}\)G4RTP [43].
Fig. 4: The proposed taxonomy of RTP algorithms for instant delivery. We summarize these methods from three dimensions, (i) from the task perspective, which has three categories: only-route prediction, only-time prediction, and route-time prediction. (ii) from the perspective of model architecture, including sequence-based and graph-based models; (iii) from the perspective of learning paradigm: Supervised Learning (SL) and Deep Reinforcement Learning (DRL).
### _Model Architecture_
Model architecture is also an important perspective for classifying different models, including sequence-based models and graph-based models.
#### 3.2.1 Sequence-based Models
As shown in Figure 5, sequence-based models consider the input (i.e., the unfinished tasks) as a sequence, and utilize a sequence-to-sequence architecture for solving the related task. These models usually resort to an LSTM or Transformer encoder to read the input sequence, and use a Pointer-like [44] decoder to output the desired prediction target. Here we first briefly introduce the two commonly used encoders (LSTM and Transformer), then elaborate on the Pointer-like decoder.
**LSTM Encoder.** Long Short-Term Memory (LSTM) [45] is a type of recurrent neural network (RNN) that addresses the vanishing gradient problem by introducing memory cells with self-connected recurrent units. LSTMs are designed to model sequential data and have been widely used in various tasks such as speech recognition [46, 47], natural language processing [48, 49], and time series prediction [50, 51, 52]. The key feature of LSTM is its ability to capture long-term dependencies by utilizing a gating mechanism that controls the information flow within the network. This mechanism involves three main gates: the input gate, the forget gate, and the output gate. LSTM can be formulated as in Equation 19. To ease the presentation, variables are defined locally, with a slight abuse of notation relative to the previous definitions.
\[\begin{split} f_{t}&=\sigma(\mathbf{W}_{f}\cdot [\mathbf{h}_{t-1},\mathbf{x}_{t}]+\mathbf{b}_{f})\\ i_{t}&=\sigma(\mathbf{W}_{i}\cdot[\mathbf{h}_{t-1}, \mathbf{x}_{t}]+\mathbf{b}_{i})\\ o_{t}&=\sigma(\mathbf{W}_{o}\cdot[\mathbf{h}_{t-1}, \mathbf{x}_{t}]+\mathbf{b}_{o})\\ \tilde{\mathbf{c}}_{t}&=\tanh(\mathbf{W}_{c}\cdot[\mathbf{h }_{t-1},\mathbf{x}_{t}]+\mathbf{b}_{c})\\ \mathbf{c}_{t}&=f_{t}\cdot\mathbf{c}_{t-1}+i_{t}\cdot\tilde{ \mathbf{c}}_{t}\\ \mathbf{h}_{t}&=o_{t}\cdot\tanh(\mathbf{c}_{t}),\end{split} \tag{19}\]
where \(\mathbf{x}_{t}\) is the input at time step \(t\), \(\mathbf{h}_{t}\) is the hidden state at time step \(t\), \(\mathbf{c}_{t}\) is the cell state at time step \(t\), \(\mathbf{W}_{f},\mathbf{W}_{i},\mathbf{W}_{o},\mathbf{W}_{c}\) are weight matrices, \(\mathbf{b}_{f},\mathbf{b}_{i},\mathbf{b}_{o},\mathbf{b}_{c}\) are bias vectors, and \(\sigma\) denotes the sigmoid function.
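For readers who prefer code, a minimal NumPy sketch of one LSTM step that follows Eq. (19) literally is given below; the weight shapes and the toy loop over a five-task sequence are illustrative placeholders:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, params):
    """One LSTM update following Eq. (19); params holds W_f, W_i, W_o, W_c and the biases."""
    z = np.concatenate([h_prev, x_t])                      # [h_{t-1}, x_t]
    f_t = sigmoid(params["W_f"] @ z + params["b_f"])       # forget gate
    i_t = sigmoid(params["W_i"] @ z + params["b_i"])       # input gate
    o_t = sigmoid(params["W_o"] @ z + params["b_o"])       # output gate
    c_tilde = np.tanh(params["W_c"] @ z + params["b_c"])   # candidate cell state
    c_t = f_t * c_prev + i_t * c_tilde
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t

d_x, d_h = 8, 16   # e.g., task feature size and hidden size
rng = np.random.default_rng(0)
params = {f"W_{g}": rng.normal(size=(d_h, d_h + d_x)) * 0.1 for g in "fioc"}
params.update({f"b_{g}": np.zeros(d_h) for g in "fioc"})
h, c = np.zeros(d_h), np.zeros(d_h)
for x_t in rng.normal(size=(5, d_x)):   # encode a toy sequence of 5 tasks
    h, c = lstm_step(x_t, h, c, params)
```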
**Transformer Encoder.** The Transformer encoder is a key component of the Transformer architecture [53], which has revolutionized the field of natural language processing. Unlike traditional recurrent neural networks (RNNs) or convolutional neural networks (CNNs) [54], the Transformer encoder relies solely on self-attention mechanisms to capture dependencies between different words or tokens in a sequence. This self-attention mechanism allows the Transformer to efficiently model pairwise long-range dependencies, making it particularly effective for tasks involving sequential data. Here in the RTP problem, each task can be viewed as an item in the sequence. Specifically, the Transformer encoder consists of several Transformer blocks, each equipped with two layers: (i) the Multi-Head self-Attention (MHA) layer and (ii) the Feed-Forward Network (FFN) layer. The MHA layer is formulated in Equation 20.
\[\begin{split}\text{Attention}(\mathbf{Q},\mathbf{K},\mathbf{V})& =\text{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d_{k}}} \right)\mathbf{V}\\ \text{MultiHead}(\mathbf{Q},\mathbf{K},\mathbf{V})& =\text{concat}(\text{head}_{1},\dots,\text{head}_{h})\mathbf{W}_{ O}\\ \text{head}_{i}&=\text{Attention}(\mathbf{Q} \mathbf{W}_{\mathbf{Q}i},\mathbf{K}\mathbf{W}_{\mathbf{K}i},\mathbf{V} \mathbf{W}_{\mathbf{V}i}).\end{split} \tag{20}\]
Here, \(\mathbf{Q}\), \(\mathbf{K}\), and \(\mathbf{V}\) represent the query, key, and value matrices, respectively. In the self-attention mechanism, all of them are projected from the same input (in our case, the embedding matrix \(\mathbf{E}\) of all unfinished tasks). \(d_{k}\) denotes the dimensionality of the key vectors. The MHA layer computes the attention weights between the query and key vectors; the resulting weighted values from all heads are concatenated and then nonlinearly transformed by the FFN layer to produce the final output. More complex mutual correlations are captured by stacking multiple Transformer blocks.
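A compact NumPy sketch of the self-attention computation of Eq. (20), applied to an embedding matrix of \(n\) unfinished tasks (projection matrices are random placeholders; layer normalization and the FFN are omitted for brevity):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores, axis=-1) @ V

def multi_head_self_attention(E, W_q, W_k, W_v, W_o, n_heads):
    """Multi-head self-attention over task embeddings E of shape (n_tasks, d_model)."""
    d_model = E.shape[1]
    d_head = d_model // n_heads
    heads = []
    for h in range(n_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        heads.append(attention(E @ W_q[:, s], E @ W_k[:, s], E @ W_v[:, s]))
    return np.concatenate(heads, axis=-1) @ W_o

n_tasks, d_model, n_heads = 6, 32, 4
rng = np.random.default_rng(0)
E = rng.normal(size=(n_tasks, d_model))
W_q, W_k, W_v, W_o = (rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(4))
out = multi_head_self_attention(E, W_q, W_k, W_v, W_o, n_heads)   # shape (6, 32)
```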
**PointerNet Decoder**. Pointer Networks (PointerNet) [44] is a type of neural network developed to tackle sequence-to-sequence tasks with varying output lengths. Unlike traditional sequence-to-sequence models, which rely on discrete symbol generation, Pointer Networks learn to output pointers to positions in the input sequence. This makes them particularly useful in the route and time prediction problem, where the output length is not fixed. The core idea of Pointer Networks is the use of an attention mechanism to dynamically select an element from the input sequence as the output at each decoding step. Specifically, PointerNet adopts an RNN such as LSTM as its backbone network. And the equations for the attention mechanism in Pointer Networks are as follows:
\[\begin{split} u_{i}^{j}&=\mathbf{v}^{T}\tanh(\mathbf{W} _{1}\mathbf{e}_{i}+\mathbf{W}_{2}\mathbf{h}_{j})\\ \alpha_{i}^{j}&=\text{softmax}(u_{i}^{j})\\ o_{j}&=\sum_{i=1}^{N}\alpha_{i}^{j}\mathbf{e}_{i}.\end{split} \tag{21}\]
Here, \(\mathbf{e}_{i}\) represents the encoded representation of the \(i\)-th item in the input sequence, \(\mathbf{h}_{j}\) denotes the hidden state of the decoder at step \(j\), and \(u_{i}^{j}\) represents the compatibility score between the \(i\)-th input element and the \(j\)-th decoder state. The attention mechanism calculates the attention weights \(\alpha_{i}^{j}\) by applying a softmax function to the compatibility scores, and \(o_{j}\) is the weighted sum of the input sequence \(\{\mathbf{e}_{1},\dots,\mathbf{e}_{N}\}\) using the attention weights. The core idea of PointerNet is that the attention weight \(\alpha_{i}^{j}\) can further be regarded as the output probability of item \(i\) at decoding step \(j\), i.e., as a pointer directing to the input. Benefiting from the above properties, Pointer Networks have shown promising results in various domains, including routing problems [55, 56], graph optimization [57, 58], and ranking problems [59, 60].
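The following sketch illustrates greedy decoding with the pointer attention of Eq. (21): at each step, tasks that were already output are masked, and the task with the largest attention weight is emitted. The hidden-state update used here (attending from the chosen task's embedding) is a deliberate simplification of the RNN update in actual Pointer Networks:

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def pointer_decode(E, h0, v, W1, W2, n_steps):
    """Greedy pointer decoding: repeatedly point to one unselected task (Eq. 21)."""
    n = E.shape[0]
    selected = []
    h_j = h0
    for _ in range(n_steps):
        u = np.array([v @ np.tanh(W1 @ E[i] + W2 @ h_j) for i in range(n)])  # scores u_i^j
        u[selected] = -1e9            # mask tasks that were already output
        alpha = softmax(u)            # attention weights = output probabilities
        nxt = int(np.argmax(alpha))   # greedy choice of the next task
        selected.append(nxt)
        h_j = E[nxt]                  # simplified state update: attend from the chosen task
    return selected

n_tasks, d = 5, 16
rng = np.random.default_rng(0)
E = rng.normal(size=(n_tasks, d))    # encoded unfinished tasks
v, W1, W2 = rng.normal(size=d), rng.normal(size=(d, d)), rng.normal(size=(d, d))
route = pointer_decode(E, h0=np.zeros(d), v=v, W1=W1, W2=W2, n_steps=n_tasks)
print(route)   # a permutation of task indices 0..4
```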
Fig. 5: Illustration of sequence-based architecture.
#### 3.2.2 Graph-based Models
To effectively capture the spatial correlations between different tasks, graph-based models are introduced. As shown in Figure 6, graph-based models consider the input as a graph, and utilize graph-to-sequence architecture for solving the RTP problem.
**GNN Encoder.** Graph Neural Network (GNN) [61], has emerged as the dominant tool for graph data mining [62, 63]. Due to their powerful ability in modeling pair-wise correlation, GNNs [63, 64, 65, 66] have been widely used in different domains such as node classification [67, 68, 69], graph classification [70, 71, 72] and link prediction [73, 74, 75].
Given a graph \(G=(\mathbf{X},\mathbf{A})\) with \(N\) nodes, where \(\mathbf{X}\in\mathbb{R}^{N\times d_{x}}\) is the node feature matrix, \(d_{x}\) is the feature dimension. \(\mathbf{A}\in\mathbb{R}^{N\times N}\) is the adjacent matrix of the graph. A general formulation [63] of graph neural network can be described as:
\[\mathbf{H}=\sigma\left(\Phi\left(\mathbf{A},\mathbf{X}\right)\mathbf{W} \right), \tag{22}\]
where \(\mathbf{W}\in\mathbb{R}^{d_{x}\times d_{x}}\) denotes a trainable parameter and \(\sigma\) denotes the activation function. \(\Phi\left(\mathbf{A},\mathbf{X}\right)\) is a function (or a rule) that depicts how neighbors' features are aggregated into the target node. From the above formulation, we can see that one of the core tasks for GNNs is to develop an effective aggregation function \(\Phi(\cdot)\). Generally, methods can be classified into two streams:
1) Spectral-based aggregation, where the graph spectral filter is adopted to smooth the input nodes features. For example, ChebNet [61] uses the Chebyshev polynomial to optimize the Laplacians decomposition, which reduces the computational complexity. Following ChebNet, the most popular vanilla GNN [62] defines a symmetric normalized summation function as
\[\Phi\left(\mathbf{A},\mathbf{H}^{l-1}\right)=\mathbf{\tilde{A}}\mathbf{H}^{l -1},\]
where
\[\mathbf{\tilde{A}}=\mathbf{D}^{-\frac{1}{2}}(\mathbf{A}+\mathbf{I})\mathbf{D}^ {-\frac{1}{2}}\in\mathbb{R}^{N\times N}\]
is a normalized adjacent matrix. \(\mathbf{I}\) is the identity matrix and \(\mathbf{D}\) is the diagonal degree matrix with \(\mathbf{D}_{ii}=\sum_{j}(\mathbf{A}+\mathbf{I})_{ij}\).
2) Spatial-based aggregation. Unlike spectral-based GNNs, which operate in the spectral domain by exploiting the eigenvectors of the graph Laplacian, spatial-based GNNs focus on aggregating features directly from the spatial domain. These models incorporate spatial convolutions or pooling operations that aggregate and propagate information based on the spatial proximity of nodes in the graph. For example, GraphSAGE [76] samples a fixed number of neighbors for each node and updates the features, reducing the memory complexity. GAT [77] uses the attention mechanism to adjust the weight of all neighbor nodes. Compared with spectral-based GNNs, spatial-based GNNs have got more attention because of their flexibility in designing the aggregation function.
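As a small illustration of the spectral formulation in Eq. (22), the sketch below implements one vanilla GCN layer with the symmetric normalized adjacency \(\tilde{\mathbf{A}}=\mathbf{D}^{-\frac{1}{2}}(\mathbf{A}+\mathbf{I})\mathbf{D}^{-\frac{1}{2}}\) on a toy task graph (weights are random; this is not the implementation of any cited model):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph convolution: relu(A_hat @ X @ W) with A_hat = D^{-1/2}(A+I)D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))   # D^{-1/2} as a vector
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)          # ReLU activation

# Toy task graph: 4 unfinished tasks, edges between spatially close tasks.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))         # node (task) features
W = rng.normal(size=(8, 16)) * 0.1  # trainable weights
H = gcn_layer(A, X, W)              # (4, 16) task embeddings
```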
**Graph-based Decoder.** A graph-based decoder typically adopts the same architecture as PointerNet, where the attention mechanism is utilized to select candidate nodes and output the route recurrently. Furthermore, graph-based decoders tend to incorporate graph information as prior knowledge in the decoding process. Doing so can improve the accuracy and robustness of predicted routes. For example, Graph2Route [31] constrains the candidate nodes to the neighbors of the node output at the last decoding step, and ILRoute [30] selects the next node from the \(k\)-nearest neighbors of the previously output node.
### _Learning Paradigm_
Models for RTP can also be classified by learning paradigm, which contains two categories, including Supervised Learning (SL-based) and Deep Reinforcement Learning (DRL-based). We illustrate the overall architecture of the two types in Figure 7.
SL-based models learn from labeled training data to make predictions for unseen instances. Here in our case, they learn from the data constructed by workers' massive historical behaviors. This technique is widely used in tasks such as image recognition [78], natural language processing [79, 80], and recommendation systems [81, 82].
On the other hand, DRL-based models combine the principles of deep learning and reinforcement learning to enable the model to learn through interaction with an environment. It involves an agent that takes actions in an environment, receives feedback in the form of rewards/penalties, and learns to optimize its behavior over time. Deep reinforcement learning has achieved remarkable successes in complex tasks such as game playing [83, 84, 85], robotics [86, 87], and autonomous driving [88, 89, 90], showcasing its ability to learn directly from raw sensory data and acquire
Fig. 6: Illustration of graph-based architecture.
Fig. 7: Illustration of SL-based and DRL-based models.
sophisticated decision-making abilities. In the RTP problem, one can consider the model as a route/time prediction agent, to mimic the route selection action of the worker. To this end, DRL methods can be applied to effectively improve the performance of route and time prediction.
In summary, we propose to classify the current RTP models from three perspectives, including the task type, model architecture and learning paradigm in this section. Since the RTP problem is a rising topic in the research community, there is still a lack of models for the topic. Therefore, in the next section, we will dive into each category and introduce the details of models to help a comprehensive understanding of each model's motivation and model design.
## 4 Service Route Prediction
To facilitate the following sections, we first propose a framework that summarizes the current models. It follows the encoder-decoder structure where we identify four key components in it, including the input construction, task encoder, route decoder, and masked loss.
**Input.** This component constructs the model's input (i.e., a problem instance) according to the finished tasks and unfinished tasks that contain both spatial and temporal information. A problem instance \(s_{t}\) can be represented by a task sequence or task graph, which depends on different methods.
**Task Encoder.** The Task Encoder learns the unfinished task representations \(\mathbf{E}_{t}\in\mathbb{R}^{n\times d_{t}}\) at time \(t\) by taking the problem instance \(s_{t}\) as input. Abstractly, we write
\[\mathbf{E}_{t}=\mathbf{TaskEncoder}(s_{t}). \tag{23}\]
It is designed to capture each task's spatial features (e.g., the distance between the task and the worker) and temporal features (e.g., the remaining required time), as well as model the ST-correlations between different tasks. Here we only list the unfinished task embedding as the output, as it is the core input of the route decoder component in the next step. One can add additional output in this step accordingly.
**Route Decoder.** The decoder computes the predicted route \(\mathbf{\hat{\pi}}_{t:}\) based on the embedding matrix \(\mathbf{E}_{t}\) outputted by the encoder, equipped with the task decoding module and the service-dependent mask mechanism. The service-dependent mask mechanism is designed to meet the route constraints \(\mathcal{C}\) during the decoding process, specifically, masking unfeasible tasks at each decoding step. Note that the mask mechanism is service-dependent since different types of service can have different route constraints (as we have introduced in Section 2). And the task decoding mechanism is utilized to select a candidate (an unfinished task) at each decoding step. Some works also consider the worker \(w\)'s personalized feature \(\mathbf{x}_{w}\) into the decoding process, thus the overall route decoder is formulated as:
\[\mathbf{\hat{\pi}}_{t:}=\mathbf{RouteDecoder}(\mathbf{E}_{t},\mathbf{x}_{w}). \tag{24}\]
**Masked Loss.** As mentioned before, in some service scenarios new tasks can be dispatched to the worker at any time. In that case, a new task arriving at \(t^{\prime}\) can change the worker's decisions made at \(t\), making the observations after \(t^{\prime}\) inaccurate as training labels for the sample at time \(t\) [4, 5]. Therefore, most existing works choose the route observations between \(t\) and \(t^{\prime}\) (i.e., \(\mathbf{\pi}_{t:t^{\prime}}\)) as the label information when training the model, where \(t^{\prime}\) is the dispatch time of the first new task after \(t\). In other words, the observations after \(t^{\prime}\) are masked when calculating the loss. We therefore call it the "Masked loss", which can be formulated as:
\[\mathcal{L}=\mathbf{MaskedLoss}(\mathbf{\hat{\pi}}_{t:},\mathbf{\pi}_{t:t^{\prime}}). \tag{25}\]
To conclude, the Input component represents the problem instance with abundant spatial-temporal information. The Task Encoder is supposed to fully capture the spatial-temporal relationship between different tasks. The Route Decoder component decodes the future route based on the encoded task embedding with or without the worker's personalized information. And the Masked Loss component is designed to eliminate the effects of future new coming tasks on the loss calculation of the current sample. Different models have different customization on the input, task encoder, and route decoder. The following part will introduce how those components are implemented in those models.
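The masked loss in Equation 25 can be illustrated with a short sketch: a per-step cross-entropy over the decoded route in which steps after the first new-task dispatch time \(t^{\prime}\) are zeroed out. This is a minimal PyTorch illustration under assumed tensor shapes, not the exact loss of any specific model.

```
import torch
import torch.nn.functional as F

def masked_route_loss(pred_logits, label_route, valid_len):
    """Cross-entropy over decoding steps, keeping only the label route
    observed between t and t' (the first new-task dispatch after t).

    pred_logits: (T, n) logits over n unfinished tasks at each decoding step
    label_route: (T,)   index of the task actually served at each step
    valid_len:   number of steps before t' (observations after it are masked)
    """
    T = pred_logits.size(0)
    mask = (torch.arange(T) < valid_len).float()          # 1 for kept steps
    step_loss = F.cross_entropy(pred_logits, label_route, reduction="none")
    return (step_loss * mask).sum() / mask.sum().clamp(min=1.0)

# toy usage: 5 decoding steps over 5 candidate tasks, only first 3 are valid
logits = torch.randn(5, 5)
label = torch.tensor([2, 0, 4, 1, 3])
print(masked_route_loss(logits, label, valid_len=3))
```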
### _Sequence-based SL Models_
Sequence-based supervised learning models form the first research line among all methods for the RTP problem. Methods in this line include OSquare [2], DeepRoute [4], DeepRoute+ [33], and CP-Route [34].
**OSquare [2].** OSquare is a machine-learning method that treats route prediction as a next-location prediction problem. Algorithm 1 shows the implementation details of OSquare. It uses a point-wise ranking approach: a traditional machine learning model (i.e., LightGBM [91]) outputs the probability of all candidates (i.e., \(\hat{\mathbf{y}}\)) at each step, and the candidate with the maximum probability is selected as the next task; the whole route is then generated recurrently. Overall, the Input component of OSquare constructs a sequence of features, and both the Task Encoder and Route Decoder are implemented with LightGBM.
```
Require: features of unfinished tasks of worker \(w\) at time \(t\), \(\mathbf{X}_{t}=\{\mathbf{x}_{1},\mathbf{x}_{2},\dots,\mathbf{x}_{n}\}\); maximum number of unfinished tasks \(\max\); padding vector \(\mathbf{z}\).
Ensure: predicted pick-up route \(\mathbf{\hat{\pi}}\).
 1: \(\mathbf{\hat{\pi}}\leftarrow[\,]\)
 2: for \(j=1,\dots,n\) do
 3:   for \(m\in\mathbf{\hat{\pi}}\) do
 4:     \(\mathbf{X}_{t}[m]\leftarrow\mathbf{z}\)   // pad tasks already output
 5:   end for
 6:   \(\mathbf{X}_{t}^{\prime}\leftarrow\mathrm{concatenate}(\mathbf{x}_{1},\dots,\mathbf{x}_{n},\mathbf{z}_{n+1},\dots,\mathbf{z}_{\max})\)
 7:   \(\hat{\mathbf{y}}\leftarrow\mathrm{LightGBM}(\mathbf{X}_{t}^{\prime})\)
 8:   \(\hat{\pi}_{j}\leftarrow\mathrm{argmax}_{k}\,\hat{\mathbf{y}}_{k}\), where \(k\in\{1,\dots,n\}\) and \(k\not\in\mathbf{\hat{\pi}}\)
 9:   \(\mathbf{\hat{\pi}}\leftarrow\mathbf{\hat{\pi}}+[\hat{\pi}_{j}]\)
10: end for
```
**Algorithm 1** OSquare.
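The following sketch mirrors the recurrent point-wise ranking of Algorithm 1 in Python. The scorer interface (`predict_proba`) and the `_UniformScorer` stand-in are assumptions for demonstration; in OSquare the scorer would be a trained LightGBM model.

```
import numpy as np

class _UniformScorer:
    """Stand-in for a trained point-wise ranking model in this sketch."""
    def __init__(self, n_slots):
        self.n_slots = n_slots
    def predict_proba(self, x):
        return np.random.rand(x.shape[0], self.n_slots)

def osquare_predict_route(model, task_feats, pad_vec, max_tasks):
    """Recurrent point-wise ranking in the spirit of Algorithm 1."""
    n = len(task_feats)
    feats = [np.asarray(x, dtype=float) for x in task_feats]
    route = []
    for _ in range(n):
        padded = list(feats) + [pad_vec] * (max_tasks - n)
        x = np.concatenate(padded)[None, :]            # one flat input vector
        scores = model.predict_proba(x)[0][:n]         # score of each candidate
        scores = [s if k not in route else -np.inf     # forbid duplicates (line 8)
                  for k, s in enumerate(scores)]
        nxt = int(np.argmax(scores))
        route.append(nxt)
        feats[nxt] = pad_vec                           # pad the chosen task (line 4)
    return route

d, max_tasks = 4, 8
tasks = [np.random.rand(d) for _ in range(5)]
model = _UniformScorer(n_slots=max_tasks)
print(osquare_predict_route(model, tasks, np.zeros(d), max_tasks))
```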
**DeepRoute [4].** DeepRoute is the first deep neural network proposed for the package pick-up route prediction problem. Unlike OSquare, it is a list-wise model that ranks all unfinished tasks at once. Specifically, its components are implemented as follows:
* **Sequence Input**. The input of DeepRoute is a sequence that contains features of unfinished tasks introduced in Section 2 that may affect a courier's routing decision. An unpicked-up package represents a task in DeepRoute.
* **Transformer-based Task Encoder**. DeepRoute adopts the Transformer Encoder to model the spatial-temporal correlations between different tasks, regardless of how far apart two tasks appear in the given sequence.
* **Pointer-based Route Decoder**. The PointerNet decoder is utilized as the backbone network to output the tasks step by step. One route constraint is that no task may be output more than once. To meet this constraint, DeepRoute adopts a mask mechanism that masks the tasks already output. In that case, the compatibility score at decoding step \(j\) in Equation 21 can be rewritten as: \[u_{i}^{j}=\left\{\begin{array}{ll}\mathbf{v}^{T}\tanh(\mathbf{W_{1}}\mathbf{e}_{i} +\mathbf{W_{2}}\mathbf{h}_{j})&\text{if }i\neq\pi_{j^{\prime}}\quad\forall j^{ \prime}<j\\ -\infty&\text{otherwise},\end{array}\right.\] (26) where \(\mathbf{e}_{i}\) is the encoded embedding of task \(i\), and \(\mathbf{h}_{j}\) is the hidden state at decoding step \(j\). The output probability of task \(i\) at decoding step \(j\), i.e., \(y_{i}^{j}\), is calculated by the softmax of the compatibility scores, \(y_{i}^{j}=\operatorname{softmax}(u_{i}^{j})\). A minimal sketch of this masked decoding step follows the list.
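Below is a minimal sketch of the masked pointer decoding step in Equation 26. The tensor shapes and random parameters are assumptions for illustration; in DeepRoute, \(\mathbf{W}_{1}\), \(\mathbf{W}_{2}\), and \(\mathbf{v}\) are learned jointly with the encoder.

```
import torch

def pointer_step(E, h_j, W1, W2, v, visited):
    """One pointer-decoder step (cf. Eq. 26): compatibility scores between the
    decoder hidden state h_j and each encoded task e_i, with previously
    output tasks masked to -inf before the softmax."""
    u = torch.tanh(E @ W1.T + h_j @ W2.T) @ v          # (n,) compatibility scores
    u = u.masked_fill(visited, float("-inf"))          # mask tasks already output
    return torch.softmax(u, dim=-1)                    # output probabilities y_i^j

n, d_e, d_h = 6, 8, 8
E = torch.randn(n, d_e)                  # encoded task embeddings
h = torch.randn(d_h)                     # decoder hidden state at step j
W1, W2, v = torch.randn(d_h, d_e), torch.randn(d_h, d_h), torch.randn(d_h)
visited = torch.zeros(n, dtype=torch.bool); visited[2] = True
print(pointer_step(E, h, W1, W2, v, visited))
```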
**DeepRoute+ [33]**. DeepRoute+ is an improved version of DeepRoute, which models workers' personalized features:
* **Sequence Input**. DeepRoute mainly focuses on modeling the spatial-temporal factors that influence the worker's routing decision. Compared with DeepRoute, DeepRoute+ also models the worker's personalized preference by taking their features \(\mathbf{x}_{w}\) and the latest finished tasks \(\mathcal{O}_{i}^{f}\) as input.
* **Preference-aware Task Encoder**. A worker decision preference module is designed to identify which factors have an important impact on the worker's decision under the current situation. It learns a mapping function that maps the worker's individual features (including total working days and average pick-up number) and the sequence of his latest finished tasks (encoded by a BiLSTM) to a decision preference vector \(\mathbf{p}\). The preference vector is then merged into the Transformer encoder by updating the task embedding \(\mathbf{e}_{i}\) with a Hadamard product: \(\mathbf{e}_{i}=\mathbf{p}\odot\mathbf{e}_{i}\) (a minimal sketch of this gating step follows the list).
* **Pointer-based Route Decoder**. Like DeepRoute, DeepRoute+ also adopts the pointer-based route decoder to output the predicted route.
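Below is a minimal sketch of the preference-gating idea used by DeepRoute+: worker features and a summary of the latest finished tasks are mapped to a preference vector \(\mathbf{p}\) that modulates every task embedding through a Hadamard product. The linear-plus-sigmoid mapping and the pre-computed history embedding are assumptions; the original model derives the history embedding with a BiLSTM.

```
import torch
import torch.nn as nn

class PreferenceGate(nn.Module):
    """Map worker features and a history summary to a preference vector p,
    then gate the task embeddings with a Hadamard product e_i = p * e_i."""
    def __init__(self, d_worker, d_hist, d_task):
        super().__init__()
        self.to_pref = nn.Sequential(nn.Linear(d_worker + d_hist, d_task),
                                     nn.Sigmoid())

    def forward(self, task_emb, worker_feat, hist_emb):
        p = self.to_pref(torch.cat([worker_feat, hist_emb], dim=-1))  # (d_task,)
        return task_emb * p                     # broadcast over all tasks

gate = PreferenceGate(d_worker=4, d_hist=16, d_task=32)
out = gate(torch.randn(10, 32), torch.randn(4), torch.randn(16))
print(out.shape)  # torch.Size([10, 32])
```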
**CP-Route [34]**. CP-Route aims to model the personal information of workers by mining their spatial transfer patterns.
* **Sequence Input**. CP-Route takes two sequences as input: i) the unfinished tasks, and ii) their corresponding AOI IDs.
* **STC-STP Encoder**. It contains two encoder blocks: an STP-aware location embedding block, which incorporates workers' Spatial Transfer Patterns (STP) into the location embedding, and a correlation-aware constraints embedding block, which incorporates the Spatial-Temporal Correlations (STC) into the task embeddings.
* **STC-STP Decoder**. A mixed-distribution-based decoder is designed to simultaneously consider the influence of STC and STP on couriers' final decisions.
### _Sequence-based DRL Models_
The above sequence-based SL models suffer from a mismatch between the training criterion and the test criterion. Specifically, these methods cast the task selection at each step as a classification problem and train the model with the Cross-Entropy (CE) loss, while evaluating it with other measurements, such as LSD [31] and KRC [31], leading to a mismatch between the training and test objectives. Taking Figure 8 as an example, despite producing the same value on the training criterion (i.e., CE), the two cases exhibit quite different results on the test criterion (i.e., LSD). This disparity limits the potential of a "well-trained" model to deliver more favorable performance in terms of the test criteria, which
TABLE II: Summary of the neural architectures of existing route and time prediction models.

| **Model** | **Year** | **Input Information** | **Problem** | **Architecture** | **Learning Paradigm** | **Task Encoder** | **Decoder** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| OSquare [2] | 2019 | Unfinished tasks | RP | Sequence-based | SL | LightGBM | LightGBM |
| DeepRoute [4] | 2021 | Unfinished tasks | RP | Sequence-based | SL | Transformer | Pointer |
| DeepRoute+ [33] | 2021 | Finished & unfinished tasks, worker features | RP | Sequence-based | SL | Transformer | Pointer |
| CP-Route [34] | 2023 | Unfinished tasks | RP | Sequence-based | SL | STC-STP | STC-STP |
| Graph2Route [31] | 2022 | Finished & unfinished tasks, worker features | RP | Graph-based | SL | DynGNN | Graph-based Pointer |
| DRL4Route [35] | 2023 | Unfinished tasks | RP | Sequence-based | DRL | Transformer | Pointer |
| ILRoute [30] | 2023 | Unfinished tasks | RP | Graph-based | DRL | GNN | Pointer |
| DeepETA [7] | 2019 | Finished & unfinished tasks | TP | Sequence-based | SL | BiLSTM | MLP |
| OFCT [36] | 2022 | Unfinished tasks | TP | Sequence-based | SL | MLP | MLP |
| MetaSTP [38] | 2022 | Unfinished tasks | TP | Sequence-based | SL | Transformer | MLP |
| CNN-Model [37] | 2021 | Unfinished tasks | TP | Sequence-based | SL | CNN | MLP |
| IGT [39] | 2023 | Unfinished tasks | TP | Graph-based | SL | Heter-GCN | Transformer |
| DGM-DTE [40] | 2023 | Unfinished tasks | TP | Graph-based | SL | Dual GNN | Multi-task, MLP |
| GSL-QR [41] | 2023 | Unfinished tasks | TP | Graph-based | SL | GSL, GNN | Attention |
| RankETPA [42] | 2023 | Unfinished tasks | RTP | Sequence-based | SL | Transformer | Pointer |
| I\({}^{2}\)Route [32] | 2023 | Unfinished tasks | RTP | Sequence-based | SL | Transformer | Pointer |
| FDNET [5] | 2021 | Unfinished tasks, worker features | RTP | Sequence-based | SL | DeepFM | Pointer |
| M\({}^{2}\)G4RTP [43] | 2023 | Unfinished tasks, worker features | RTP | Graph-based | SL | GAT-e | Multi-task, Pointer |
considerably trims down their performance when applied in real-world systems. Consequently, sequence-based DRL models are proposed to distinguish these two cases during the training process.
Currently, DRL4Route is the only model in this category. We first introduce how route prediction is formulated from the RL perspective, and then elaborate on the model architecture of DRL4Route.
#### 4.2.1 Formulation from the RL perspective
Route prediction can be considered as a sequential decision-making process, where each task on the route is outputted step by step based on previous decisions. It can be modeled as a discrete finite-horizon discounted Markov Decision Process (MDP) [92], in which a route prediction agent interacts with the environment and makes decisions over \(T\) time steps. Formally, MDP is denoted by \(M=(S,A,P,R,s_{0},\gamma,T)\), where \(S\) is the set of states, \(A\) is the set of actions, \(P:S\times A\times S\rightarrow\mathbb{R}_{+}\) is the transition probability, \(R:S\times A\rightarrow\mathbb{R}\) is the reward function, \(s_{0}:S\rightarrow\mathbb{R}_{+}\) is the initial state distribution, \(\gamma\in[0,1]\) is a discount factor, and \(T\) is the total time steps determined by the number of unfinished tasks (in our case \(T\) equals the number of unfinished tasks \(n\)). We introduce the details of the agent, state, action, reward and state transition probability in the following part.
**Route Prediction Agent.** The route prediction agent selects a task from the unfinished task candidates step by step, which can be implemented based on the aforementioned sequence-based SL models.
**State.** The state \(s_{j}\in\mathcal{S}\) represents the environment's condition at the \(j\)-th (\(j\in\{1,\dots,n\}\)) decoding step. It encompasses the relevant information that enables the agent to make decisions at each decoding step. The state is formulated as \(s_{j}=(\mathbf{E},\mathcal{C},\mathbf{h}_{j},\hat{\mathbf{\pi}}_{1:j-1})\), where \(\mathbf{E}\) is the encoded embedding matrix of unfinished task, \(\mathcal{C}\) is the route constraints, \(\mathbf{h}_{j}\) is the hidden state of the \(j\)-th step, and \(\hat{\mathbf{\pi}}_{1:j-1}\) denotes the route generated by the agent up to the \(j\)-th decoding step.
**Action.** An action \(a_{j}\in\mathcal{A}_{j}\) refers to the selection of a task \(\pi_{j}\) based on the current task candidates and states. A joint action \((a_{1},\cdots,a_{n})\in\mathcal{A}=\mathcal{A}_{1}\times\cdots\times\mathcal{ A}_{n}\) forms a predicted route. The action space \(\mathcal{A}_{j}\) specifies the available task candidates that the agent can choose from at the \(j\)-th step. It changes during the decoding process because of the route constraints.
**Reward.** The reward is defined based on the test criteria to align the training and test objectives. Here different rewards can be designed according to different test objectives. Equation 27 shows the reward definition of DRL4Route, whose core idea is giving rewards to actions that are close to the label route based on LSD:
\[r_{j}=\left\{\begin{array}{ll}-\mathrm{LSD}(n^{\prime}+1,j)&\hat{\pi}_{j} \notin\mathbf{\pi},j\leq n^{\prime},(\mathrm{case}~{}1)\\ 0&\hat{\pi}_{j}\notin\mathbf{\pi},j>n^{\prime},(\mathrm{case}~{}2)\\ -\mathrm{LSD}(Y_{\hat{\pi}}(\hat{\pi}_{j})+1,j)&\hat{\pi}_{j}\in\mathbf{\pi},j\neq \hat{\pi}_{j},(\mathrm{case}~{}3)\\ \overline{R}&\hat{\pi}_{j}\in\mathbf{\pi},j=\hat{\pi}_{j},(\mathrm{case}~{}4)\end{array}\right. \tag{27}\]
where \(\overline{R}\) is a hyper-parameter to control the scale of the cumulative reward. \(Y_{\hat{\pi}}(\hat{\mathbf{\pi}}_{j})\) is the order of task \(\hat{\mathbf{\pi}}_{j}\) in the predicted route. And \(n^{\prime}\) is the number of tasks in the label route \(\mathbf{\pi}\).
**State Transition Probability.** The state transition probability \(P(s_{j+1}|s_{j},a_{j}):\mathcal{S}\times\mathcal{A}\times\mathcal{S}\to \mathbb{R}_{+}\) represents the likelihood of transition from state \(s_{j}\) to \(s_{j+1}\) when action \(a_{j}\) is taken at state \(s_{j}\). In DRL4Route, the environment is considered deterministic, meaning that the resulting state \(s_{j+1}\) after taking action \(a_{j}\) from state \(s_{j}\) is predetermined and certain.
**Definition 9: RL-based RP Problem.** Given a state \(s_{j}\) at time \(j\), the route prediction agent generates an action by the current policy \(\pi_{\theta}\) parameterized by \(\theta\), then receives the task-specific reward \(r_{j}\) from the environment. The training goal of RL-based methods is to learn the best parameter \(\theta^{*}\) of the route prediction agent that can maximize the expected cumulative reward, formulated as:
\[\theta^{*}=\arg\max_{\theta}\mathbb{E}_{\pi_{\theta}}\left[\sum_{j=1}^{n} \gamma^{j}r_{j}\right], \tag{28}\]
where \(\gamma\) is the discount factor that controls the tradeoffs between the importance of immediate and future rewards.
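The training objective in Equation 28 maximizes a discounted sum of per-step rewards. The short sketch below computes the discounted returns-to-go from a sequence of rewards; the reward values themselves are placeholders, whereas in DRL4Route they would come from the LSD-based definition in Equation 27.

```
def returns_to_go(rewards, gamma=0.99):
    """Discounted return from each decoding step onward; the sum from step 1
    is the quantity the route prediction agent maximizes (cf. Eq. 28)."""
    running, out = 0.0, []
    for r in reversed(rewards):
        running = r + gamma * running
        out.append(running)
    return list(reversed(out))

# placeholder per-step rewards for a 3-step route
print(returns_to_go([1.0, 0.0, -2.0], gamma=0.9))
```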
#### 4.2.2 DRL4Route Architecture
The overall architecture of DRL4Route is depicted in Figure 9. It adopts an Actor-Critic architecture, which reduces the variance of the policy gradient estimates by providing a reward right after each action. The "Actor" is the route prediction agent, which updates its policy under the guidance of the "Critic". The "Critic" estimates two functions, namely i) the state-value function \(V\) to evaluate the value of a state and ii) the state-action function \(Q\) to evaluate
Fig. 8: Illustration of mismatch between the training and test objectives. The vector is the output probability corresponding to the task.
Fig. 9: DRL4Route Framework [35].
the benefits of taking a certain action under a certain state. Given a policy \(\pi_{\theta}\), the two functions are defined as:
\[Q_{\pi_{\theta}}(s_{j},a_{j})=\mathbb{E}_{\pi_{\theta}}\left[r(\hat{\pi}_{j}, \cdots,\hat{\pi}_{n})|s=s_{j},a=a_{j}\right], \tag{29}\]
\[V_{\pi_{\theta}}(s_{j})=\mathbb{E}_{a_{j}\sim\pi_{\theta}(s_{j})}\left[Q_{\pi_ {\theta}}(s_{j},a=a_{j})\right]. \tag{30}\]
Furthermore, we can use the \(V\) function to estimate the \(Q\) function as shown in Equation 31. Doing so reduces the number of estimated functions, and thus the risk of estimation error.
\[Q_{\pi_{\theta}}(s_{j},a_{j})=\mathbb{E}[r_{j}+\gamma*V_{\pi_{\theta}}(s_{j+1} )]. \tag{31}\]
Some previous efforts [93] find that removing the expectation calculation can significantly accelerate the training process while achieving promising results, formulated as: \(Q_{\pi_{\theta}}(s_{j},a_{j})=r_{j}+\gamma*V_{\pi_{\theta}}(s_{j+1})\).
Based on the above formulation, advantage function \(A_{\pi_{\theta}}\) is defined as subtracting the value function \(V\) from the \(Q\)-function, which is used to reflect the relative superiority of each action and update the model parameters:
\[\begin{split} A_{\pi_{\theta}}(s_{j},a_{j})&=Q_{ \pi_{\theta}}(s_{j},a_{j})-V_{\pi_{\theta}}(s_{j})\\ &\approx r_{j}+\gamma V_{\pi_{\theta}}(s_{j+1})-V_{\pi_{\theta}}( s_{j}).\end{split} \tag{32}\]
**Training.** Overall, the actor first accumulates thousands of samples using the current policy. Based on the generated samples, the critic learns and updates the \(V\) function, which is further used to calculate the advantage approximation \(A_{\pi_{\theta}}(s,a)\). At last, the actor is trained with the following loss function:
\[\mathcal{L}_{\mathrm{actor}}=\frac{1}{K}\sum_{k=1}^{K}\sum_{j=1}^{n}A_{\pi_{ \theta}}(s_{k,j},a_{k,j})\mathrm{log}\pi_{\theta}(a_{k,j}|s_{k,j}), \tag{33}\]
where \(K\) is the total number of samples. The critic is trained via a robust regression loss [94], which is less sensitive to outliers than the \(L_{2}\) loss:
\[\mathcal{L}_{\mathrm{critic}}=\frac{1}{K}\sum_{k=1}^{K}\sum_{j=1}^{n}\mathrm{ smooth}L_{1}(\hat{V}(s_{k,j})-r(\hat{\pi}_{k,j},\cdots,\hat{\pi}_{k,n})), \tag{34}\]
in which \(\mathrm{smooth}L_{1}\) is defined as
\[\mathrm{smooth}L_{1}(x)=\left\{\begin{array}{ll}0.5x^{2}&|x|<1,\\ |x|-0.5&\mathrm{otherwise}.\end{array}\right. \tag{35}\]
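Putting Equations 32-35 together, the sketch below computes a one-step advantage estimate and the resulting actor and critic losses. The tensor shapes, the use of returns-to-go as critic targets, and the sign convention (minimizing the actor loss increases the probability of advantageous actions) are assumptions made for this illustration.

```
import torch
import torch.nn.functional as F

def actor_critic_losses(log_probs, values, rewards, gamma=0.99):
    """One-step advantage A_j = r_j + gamma * V(s_{j+1}) - V(s_j) guides the
    actor; the critic is fit to observed returns with a smooth-L1 loss.

    log_probs: (T,) log pi_theta(a_j | s_j) of the chosen actions
    values:    (T,) critic estimates V(s_j)
    rewards:   (T,) per-step rewards
    """
    next_values = torch.cat([values[1:], values.new_zeros(1)])
    advantage = rewards + gamma * next_values - values            # Eq. 32
    actor_loss = -(advantage.detach() * log_probs).mean()         # cf. Eq. 33
    returns = torch.zeros_like(rewards)                           # returns-to-go
    running = 0.0
    for j in reversed(range(len(rewards))):
        running = rewards[j] + gamma * running
        returns[j] = running
    critic_loss = F.smooth_l1_loss(values, returns)               # cf. Eqs. 34-35
    return actor_loss, critic_loss

lp, v, r = torch.log(torch.rand(5)), torch.randn(5), torch.randn(5)
print(actor_critic_losses(lp, v, r))
```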
### _Graph-based SL Models_
The sequential nature of the above sequence-based methods limits their ability to fully encode the spatial-temporal correlations between different tasks. To overcome the limitations of the sequence-based encoders, graph-based algorithms model a problem instance from the graph perspective and take full advantage of the node/edge features and graph structure of all tasks. A representative method is Graph2Route. We first introduce the problem formulation from the graph perspective, and then we introduce the details of Graph2Route.
#### 4.3.1 Formulation from Graph Perspective
In real scenarios, service tasks are essentially located in different geographic areas. The spatial relationship of those tasks can be naturally described as a graph. Therefore, some works formulate the route prediction task from the graph perspective, which can better represent the intrinsic relationship in a problem instance.
**Definition 10**: **Input ST-Graph.** A problem instance of worker \(w\) at time \(t\) can be defined on a spatial-temporal graph (ST-graph) \(\mathcal{G}_{t}^{w}=(\mathcal{V}_{t},\mathcal{E}_{t},\mathbf{X}_{t}^{v},\mathbf{X}_{t}^{e})\), where \(\mathcal{V}_{t}=\{v_{1},\dots,v_{m+n}\}=\mathcal{O}_{t}^{f}\cup\mathcal{O}_{t}^{u}\) contains both the \(m\) finished tasks and the \(n\) unfinished tasks, with each node corresponding to a task of the worker. \(\mathcal{E}_{t}=\{(i,j)\mid v_{i},v_{j}\in\mathcal{V}_{t}\}\) is the set of edges. To ease the presentation, let \(\overline{n}=m+n\). \(\mathbf{X}_{t}^{v}\in\mathbb{R}^{\overline{n}\times d_{v}}\) and \(\mathbf{X}_{t}^{e}\in\mathbb{R}^{\overline{n}\times\overline{n}\times d_{e}}\) are the node and edge features, respectively, where \(d_{v}\) and \(d_{e}\) are the node and edge feature dimensions. Both contain the spatial-temporal features of different tasks and can be constructed under service-specific settings.
**Definition 11**: **Graph-based RP Problem**. Given the input graph \(\mathcal{G}_{t}^{w}\) of worker \(w\) at time \(t\), the graph-based RP problem aims to learn a mapping function \(\mathcal{F}_{R}\) to predict the worker's future service route \(\hat{\mathbf{\pi}}\) over the unfinished nodes that satisfies the given route constraints \(\mathcal{C}\), formulated as:
\[\mathcal{F}_{R}(\mathcal{G}_{t}^{w},\mathcal{C})=[\hat{\pi}_{1},\hat{\pi}_{2}, \cdots,\hat{\pi}_{n}], \tag{36}\]
where \(\hat{\pi}_{i}\) means that the \(i\)-th node in the route is node \(v_{\hat{\pi}_{i}}\). Moreover, \(\hat{\pi}_{i}\in\{1,\cdots,n\}\) and \(\hat{\pi}_{i}\neq\hat{\pi}_{j}\) if \(i\neq j\). Basically, the graph-based RP problem can be considered a reformulation of the general RP problem from the graph perspective. Figure 10 gives an illustration of the problem, and Table I lists the related notations.
#### 4.3.2 Graph2Route Architecture
Graph2Route investigates service route prediction from the dynamic graph perspective. Traditional models typically treat problem instances at different time steps as independent, relying solely on the request time for prediction. However, in real-life scenarios, the problem instance for a worker evolves over time, such as changes in graph signals or the arrival of new nodes. This evolution establishes natural connections between different problem instances. In other words, the route arrangement at a specific time \(t\) is closely related to previous instances, especially those in proximity. Therefore, Graph2Route defines the decision
context at time \(t\) as the decision-making environment encompassing the workers' activities several time steps prior. By introducing the decision context (represented as \(\Psi\)) in the model, Graph2Route can leverage more valuable information throughout the entire service process, resulting in improved accuracy in route prediction.
* **ST-Graph Input**. The input contains graphs from several consecutive time steps. For each graph, the node features \(\mathbf{X}_{t}^{v}\) are essentially the tasks features introduced in Section 2. The edge set \(\mathcal{E}_{t}\) and edge feature \(\mathbf{X}_{t}^{e}\) are defined according to the distance of two tasks from the spatial (i.e., coordinates) or temporal (i.e., the required time window) perspective. One important feature is the \(k\)-nearest neighbors.
* **DynGNN Task Encoder.** To capture the decision context, a dynamic spatial-temporal graph neural network (DynGNN) is developed that models the evolving relationship between consecutive problem instances. In Equation 37, the encoder first computes the node embeddings \(\mathbf{E}_{t}\in\mathbb{R}^{\overline{n}\times d_{h}}\) and edge embeddings \(\mathbf{Z}_{t}\in\mathbb{R}^{\overline{n}\times\overline{n}\times d_{h}}\) with a GNN (i.e., the spatial-correlation encoding, Spatial-CE) to leverage their spatial interactions. Then it updates the node embeddings based on the previous ones (i.e., the decision context \(\Psi\)) by temporal-correlation encoding (Temporal-CE), which is implemented by an RNN (i.e., GRU [95]) architecture. The main advantage of this design is that it fully utilizes spatial-temporal features while accounting for the decision context; a minimal sketch of this two-stage encoding follows the list. \[\begin{split}\mathbf{E}_{t}&=\text{DynGNN-Enc}( \mathcal{G}_{t}^{w},\Psi)\\ &=\text{Temporal-CE}(\text{Spatial-CE}(\mathcal{G}_{t}^{w}), \mathbf{E}_{t-1}).\end{split}\] (37)
* **Graph-based Personalized Route Decoder.** A graph-based decoder is designed to filter out highly unreasonable solutions. The decoder computes the predicted route \(\hat{\mathbf{\pi}}_{t:}\) with a recurrent attention mechanism that selects a node from the graph based on the embedding matrix \(\mathbf{E}_{t}\). At each decoding step, it only considers the \(k\)-nearest neighbors of the current node as candidates for the next step. Moreover, the worker's features are also incorporated into the attention mechanism to achieve more personalized route prediction.
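The sketch below illustrates the two-stage encoding of Equation 37: a spatial step over the current ST-graph followed by a temporal (GRU) step that carries the previous embeddings \(\mathbf{E}_{t-1}\) as decision context. Mean-pooling over neighbors stands in for the actual GNN, and edge embeddings are omitted; both are simplifications of Graph2Route.

```
import torch
import torch.nn as nn

class DynGNNEncoder(nn.Module):
    """Minimal sketch of Eq. 37: a spatial step (mean aggregation over
    neighbors, standing in for the GNN) followed by a temporal step (GRU cell)
    that carries the decision context E_{t-1} across consecutive instances."""
    def __init__(self, d_in, d_h):
        super().__init__()
        self.spatial = nn.Linear(2 * d_in, d_h)   # self features + aggregated neighbors
        self.temporal = nn.GRUCell(d_h, d_h)

    def forward(self, X, A, E_prev):
        # spatial-correlation encoding: aggregate neighbor features
        deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)
        agg = (A @ X) / deg
        spatial = torch.relu(self.spatial(torch.cat([X, agg], dim=-1)))
        # temporal-correlation encoding: update against the previous context
        return self.temporal(spatial, E_prev)

n, d_in, d_h = 5, 8, 16
enc = DynGNNEncoder(d_in, d_h)
X = torch.randn(n, d_in); A = (torch.rand(n, n) > 0.5).float()
E = torch.zeros(n, d_h)
for _ in range(3):                      # three consecutive problem instances
    E = enc(X, A, E)
print(E.shape)                          # torch.Size([5, 16])
```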
### _Graph-based DRL Models_
The graph-based DRL methods combine the advantages of graph models and DRL for route prediction. The method in this class is ILRoute, which integrates graph neural networks into a DRL framework to extract the multi-source and heterogeneous features in the workers' decision-making process and unveil workers' routing strategies. ILRoute learns workers' routing strategies by imitation learning and leverages the workers' real routes to provide the expert policy. We first introduce how route prediction is formulated as an MDP in the imitation learning framework, then elaborate on the architecture of ILRoute.
#### 4.4.1 Formulation from the Imitation Learning Perspective
In imitation learning, route prediction is formulated as an MDP whose objective is to maximize the accuracy of route prediction. In the MDP, the route prediction agent takes action \(a_{j}\) at the \(j\)-th step based on the state defined as \(s_{j}=(\mathbf{x}_{w},\mathcal{O}_{j}^{u},\mathcal{O}_{j}^{f},\mathbf{X}_{j}^{v},\mathbf{X}_{j}^{e})\), which contains the worker's personalized features, the finished and unfinished task features, and the node and edge features. After an action \(a_{j}\) is taken, the current state \(s_{j}\) transitions to the next state \(s_{j+1}\). In the state transition, the worker's personalized features remain unchanged, while the task features, context features, and route history change from \(s_{j}\) to \(s_{j+1}\) due to the new route node choice and the elapsed time.
Notably, the reward function in ILRoute is learned from the real worker's routes instead of being defined in advance. This reward function measures the similarity between the route generated by the route generator and the real workers' routes. And the reward value is calculated by the discriminator.
#### 4.4.2 ILRoute Architecture
The overall framework of ILRoute is shown in Figure 11. It is equipped with a graph-based route generator and a sequential discriminator. The graph-based route generator takes the state as input and converts them into the route choice of a worker. The sequential discriminator distinguishes the route generated by the graph-based route generator and returns the reward to the generator to revise its policy.
**Graph-based Route Generator.** The graph-based route generator, denoted as \(\pi_{\theta}\), consists of a multi-graph encoder and a PointerNet decoder. The multi-graph encoder is designed to extract multi-source and heterogeneous features as spatial-temporal embeddings and to model the complex relationships among the features that influence a worker's route decisions. Specifically, the spatial-GNN and the temporal-GNN convert the input representation of the \(j\)-th step into a distance-similarity embedding and a time-similarity embedding, which are then concatenated to obtain the hidden embedding \(\mathbf{X}_{j}^{a}\). The PointerNet decoder converts the hidden embeddings \(\mathbf{X}_{j}^{a}\) of the observed route into the worker's next route choice step by step.
**Sequential Discriminator.** The sequential discriminator is designed to distinguish the generation quality of the graph-based route generator compared with the real workers'
Fig. 11: The framework of ILRoute.
route. It also introduces a mobility regularity-aware constraint to reduce route choice exploration with prior spatial continuity knowledge and a personalized constraint mechanism to enhance the personalization of the worker's route decision-making process. The discriminator takes the whole route as input and utilizes an LSTM and a sigmoid function to convert the input into the long-term reward \(r_{l}\), which is calculated as follows:
\[r_{l}=\mathrm{sigmoid}(\mathrm{LSTM}(s_{1},s_{2},\ldots,s_{n})). \tag{38}\]
**Reward Design.** ILRoute introduces a mobility regularity-aware constraint to add an auxiliary reward \(r_{m}\), which assumes that workers will pick up or deliver the nearby tasks first. The calculation of \(r_{m}\) is defined as:
\[r_{m}=-\sum_{j=0}^{n-1}\mathrm{Distance}(l_{j},l_{j+1}), \tag{39}\]
where \(l_{j}\) denotes the task's location visited by the worker at the \(j\)-th step, and Distance denotes the Manhattan distance between two locations. ILRoute also proposes a personalized constraint mechanism to add the mutual regulation between the generated routes sequences \(\hat{\mathbf{\pi}}\) and the worker's personalized features \(\mathbf{x}_{w}\). The mechanism is achieved by maximizing the mutual information \(I(\hat{\mathbf{\pi}};\mathbf{x}_{w})\), which can be calculated as follows:
\[I(\hat{\mathbf{\pi}};\mathbf{x}_{w}) =H(\mathbf{x}_{w})-H(\mathbf{x}_{w}|\hat{\mathbf{\pi}}) \tag{40}\] \[=H(\mathbf{x}_{w})+E_{\hat{\mathbf{\pi}}}E_{\mathbf{x}_{w}|\hat{\mathbf{\pi}}}logp (\mathbf{x}_{w}|\hat{\mathbf{\pi}}),\]
where \(H\) denotes the entropy value, \(E\) denotes the expectation value and \(p\) denotes the probability.
Without access to the posterior \(p(\mathbf{x}_{w}|\hat{\mathbf{\pi}})\), we cannot maximize \(I(\hat{\mathbf{\pi}};\mathbf{x}_{w})\) directly. Here, \(q(\mathbf{x}_{w}|\hat{\mathbf{\pi}})\) is introduced to approximate the true posterior \(p(\mathbf{x}_{w}|\hat{\mathbf{\pi}})\):
\[logp(\mathbf{x}_{w}|\hat{\mathbf{\pi}})=logq(\mathbf{x}_{w}|\hat{\mathbf{\pi}})+log\frac{p(\bm {x}_{w}|\hat{\mathbf{\pi}})}{q(\mathbf{x}_{w}|\hat{\mathbf{\pi}})}. \tag{41}\]
Substituting Equation 41 into Equation 40, it can be observed that \(E_{\mathbf{x}_{w}|\hat{\mathbf{\pi}}}\log\frac{p(\mathbf{x}_{w}|\hat{\mathbf{\pi}})}{q(\mathbf{x}_{w}|\hat{\mathbf{\pi}})}\) is always non-negative. Through the reparameterization trick [96], the left part of Equation 40 can be expressed as follows:
\[I(\hat{\mathbf{\pi}};\mathbf{x}_{w}) \geq\int p(\mathbf{x}_{w})logq(\mathbf{x}_{w}|\hat{\mathbf{\pi}})+H(\mathbf{x}_{w}) \tag{42}\] \[\equiv D_{KL}(p(\mathbf{x}_{w}|\hat{\mathbf{\pi}})||q(\mathbf{x}_{w}|\hat{\mathbf{ \pi}})).\]
To this end, maximizing \(I(\hat{\mathbf{\pi}};\mathbf{x}_{w})\) can be achieved by maximizing \(D_{KL}(p(\mathbf{x}_{w}|\hat{\mathbf{\pi}})||q(\mathbf{x}_{w}|\hat{\mathbf{\pi}}))\). Based on this, a personalized constraint reward \(r_{p}=D_{KL}(p(\mathbf{x}_{w}|\hat{\mathbf{\pi}})||q(\mathbf{x}_{w}|\hat{\mathbf{\pi}}))\) is added to enhance the personalization of the worker's route decision-making process. Therefore, the reward \(r_{D}\) of the discriminator can be obtained as follows:
\[r_{D}=r_{l}+\beta r_{m}+\gamma r_{p}, \tag{43}\]
where \(\beta\) and \(\gamma\) are hyperparameters to control the scale of different rewards.
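A minimal sketch of the combined reward in Equation 43 is shown below: the discriminator reward \(r_{l}\) and the personalized-constraint reward \(r_{p}\) are taken as given scalars, while the mobility term \(r_{m}\) is computed from Manhattan distances between consecutively visited locations as in Equation 39.

```
def manhattan(a, b):
    """Manhattan distance between two (x, y) locations."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def ilroute_reward(r_long, route_locs, r_personal, beta=0.1, gamma=0.1):
    """Total reward r_D = r_l + beta * r_m + gamma * r_p (cf. Eq. 43),
    where r_m (Eq. 39) penalizes long hops between consecutive task
    locations along the generated route."""
    r_mobility = -sum(manhattan(route_locs[j], route_locs[j + 1])
                      for j in range(len(route_locs) - 1))
    return r_long + beta * r_mobility + gamma * r_personal

locs = [(0, 0), (1, 2), (3, 2)]      # locations visited along the generated route
print(ilroute_reward(r_long=0.8, route_locs=locs, r_personal=0.2))
```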
The discriminator is denoted as \(D_{\phi}\), which is parameterized by \(\phi\) and is optimized based on the following loss function:
\[\mathcal{L}_{D}=-\mathbb{E}_{\pi_{\mathbf{\pi}}}[logD_{\phi}(\hat{\mathbf{\pi}})]- \mathbb{E}_{\pi_{\mathbf{\phi}}}[log(1-D_{\phi}(\hat{\mathbf{\pi}}))]-\mathbb{E}_{\pi _{\mathbf{\phi}}}[logq(\mathbf{x}_{w}|\hat{\mathbf{\pi}})], \tag{44}\]
where \(\mathbb{E}_{\pi_{\mathbf{\hat{\pi}}}}\) represents the expectation with respect to the real workers' routes. In addition, \(\mathbb{E}_{\pi_{\mathbf{\theta}}}\) represents the expectation with respect to the routes generated by generator \(\pi_{\theta}\).
**Training.** The generator network \(\pi_{\theta}\) and the discriminator \(D_{\phi}\) are trained together in ILRoute. First, the discriminator is trained by treating the generated routes as negative samples and the real-world sequences as positive samples. Then, a batch of rewards is calculated for the generated routes. Finally, the generator is trained by maximizing the expected reward via the actor-critic algorithm.
## 5 Service Time Prediction
Time prediction models directly predict the arrival time of unfinished tasks without relying on route estimation. We first briefly introduce the difference between time prediction in instant delivery and a related popular research topic, i.e., estimated time of arrival (ETA) prediction in map services.
In map service, ETA prediction especially refers to the travel time estimation given a pair of origin and destination locations. Methods in this topic can be classified into two types: i) path-based methods [97, 98, 99, 100, 101, 102, 103], whose input requires the path information between the origin and destination. ii) path-free methods [104, 105, 106, 107, 108], which do not require the path information.
Unlike map services, the ETA problem in instant delivery focuses on predicting the arrival time for each task in a given set of unfinished tasks. This prediction is based on the worker's current status (such as location) and is essentially a multi-destination prediction problem, which makes it even more challenging, since the worker can freely decide the route, which is unknown when the prediction is made. In that case, the problem setting in our task is distinctly different from the ETA problem in map services. In this section, we focus on methods for instant delivery rather than map services.
### _Sequence-based SL Models_
**DeepETA [7].** DeepETA aims to predict the arrival time of couriers for package delivery. It mainly models three important factors: i) the sequence of the latest delivery route, ii) the regularity of the delivery pattern, and iii) the sequence of packages to be delivered.
\(\bullet\)**Sequence Input.** The input contains the sequence of the latest route and the set of packages to be delivered.
\(\bullet\)**Task Encoder.** Firstly, a latest-route encoder is developed to capture the spatial-temporal and sequential features of the latest delivery route, using a combination of spatial encoding and recurrent cells. Secondly, a frequent-pattern encoder is designed with two attention-based layers, which leverage the similarity between the latest route and future destinations to predict the most probable estimated time of arrival (ETA) based on historical frequent and relative delivery routes.
\(\bullet\)**Time Decoder.** DeepETA adopts a fully connected layer (MLP) to jointly learn the delivery time and output the results.
**OFCT [36]**. OFCT proposes a deep neural network to predict the Order Fulfillment Cycle Time (OFCT), which refers to the amount of time elapsed between a customer placing an order and receiving the meal.
* **Sequence Input**. A main contribution of OFCT is the extraction of numerous features that can influence the arrival time, including i) the spatial-temporal information of the task, ii) supply-and-demand features describing the supply-and-demand status, and iii) couriers' features.
* **Task Encoder.** Equipped with elaborately designed features, OFCT designs a simple model architecture for prediction. As for the encoder, it adopts the fully connected exponential linear units (ELU) [109] and the embedding layer to transform the numerical and categorical features.
* **Time Decoder.** The regression module is a simple MLP with two hidden layers of fully connected ELUs.
**CNN-Model [37]**. CNN-Model proposes an end-to-end system capable of parcel delivery time prediction. It studies applying a series of deep Convolutional Neural Networks (CNNs [110]) to solve this problem, relying solely on the start and end points.
* **Sequence Input**. The input contains the latitude and longitude of the depot and the task destination, the accept time of the task, and weather conditions.
* **CNN-based Task Encoder.** It applied and tested three different convolutional network architectures for learning spatio-temporal features of tasks as well as weather features. The first class of convolutional networks is based on VGG modules [111], which comprises a number of convolutional layers followed by a pooling layer. The second class of convolutional networks is ResNet [112], which helps to mitigate the problem of vanishing gradients by a skip connection. The third class of convolutional networks is SE block [113], which contains a Squeeze Operator and an Excitation Operator.
* **Time Decoder.** This method utilizes 2 fully connected layers to output the estimated delivery time of each task.
**MetaSTP [38]**. MetaSTP proposes a meta-learning-based neural network model to predict the service time, which is the time spent on delivering tasks at a certain location.
* **Sequence Input**. The input contains the fine-grained, aggregated information of undelivered tasks, and context information such as workday and time of day.
* **Task Encoder.** Firstly, a task representation module is developed to extract and embed features of each task, then combines the embeddings with other task features to obtain the fine-grained hidden representation of each task. Secondly, a historical observation encoding module is implemented by self-attention and temporal convolution. It encodes the correlation between the hidden representation of the query task and tasks with labels in the support set. Finally, a location-wise knowledge fusion module is adopted to further enhance the output of the encoder with the location-prior knowledge.
* **Time Decoder.** MetaSTP utilizes fully connected layers to output the estimated delivery time of each task.
* **Training.** MetaSTP follows the paradigm of model-based meta-learning to extract the meta-knowledge that is globally shared among a set of related learning tasks. In training, each individual learning task \(\mathcal{T}\) consists of a support set \(\mathcal{D}^{s}=(\mathbf{x}_{i}^{s},y_{i}^{s})_{i=1}^{N_{s}}\) and a query set \(\mathcal{D}^{q}=(\mathbf{x}_{i}^{q},y_{i}^{q})_{i=1}^{N_{q}}\). The inference of each query sample \(\mathbf{x}^{q}\) is formulated as \(\hat{y}^{q}=f(\mathbf{x}^{q},\mathcal{D}^{s},\theta)\), where \(\theta\) is the meta-knowledge that is globally shared, and \(f\) is a neural network parameterized by \(\theta\). The inference can also be written as: \[\hat{y}^{q}=f_{\theta}(\mathbf{x}^{q},\mathcal{D}^{s}).\] (45) To optimize \(\theta\), we leverage a set of learning tasks already sampled from a learning task distribution \(p(\mathcal{T})\), which is called meta-training tasks \(\mathscr{T}_{\mathrm{meta-train}}\). The optimal \(\theta\) learned from \(\mathscr{T}_{\mathrm{meta-train}}\) should adapt well to any learning task sampled from \(p(\mathcal{T})\) based on Equation 45, which is achieved by optimizing the following meta loss function: \[\mathscr{L}(\theta)=\sum_{\mathcal{T}_{i}\in\mathscr{T}_{\mathrm{meta-train} }}\frac{1}{|\mathcal{D}^{q}_{i}|}\sum_{(\mathbf{x}^{q},y^{q})\in\mathcal{D}^{q}_{ i}}\mathcal{L}(f_{\theta}(\mathbf{x}^{q},\mathcal{D}^{s}_{i}),y^{q}).\] (46) \(\mathcal{D}^{s}_{i}\) and \(\mathcal{D}^{q}_{i}\) are the support and query set of learning task \(\mathcal{T}_{i}\). And \(\mathcal{L}\) is the loss function of a learning task.
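The meta-training objective in Equation 46 can be sketched as follows: for each sampled learning task, the query samples are predicted conditioned on that task's support set, and the losses are averaged before a single gradient step on the shared parameters. The toy prototype-style model and the MSE loss are assumptions for illustration, not the MetaSTP architecture.

```
import torch
import torch.nn.functional as F

class ToyMetaModel(torch.nn.Module):
    """Stand-in for f_theta(x^q, D^s): condition the query prediction
    on a simple mean summary of the support set."""
    def __init__(self, d):
        super().__init__()
        self.lin = torch.nn.Linear(2 * d, 1)
    def forward(self, xq, support):
        xs, ys = support
        proto = xs.mean(dim=0, keepdim=True).expand(xq.size(0), -1)
        return self.lin(torch.cat([xq, proto], dim=-1)).squeeze(-1)

def meta_train_step(model, tasks, optimizer):
    """One optimization step of the meta loss in Eq. 46 over a batch of
    learning tasks, each given as (support, (x_query, y_query))."""
    optimizer.zero_grad()
    loss = 0.0
    for support, (xq, yq) in tasks:
        loss = loss + F.mse_loss(model(xq, support), yq)   # per-task query loss
    loss = loss / len(tasks)
    loss.backward()
    optimizer.step()
    return loss.item()

d = 4
model = ToyMetaModel(d)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tasks = [((torch.randn(6, d), torch.randn(6)),
          (torch.randn(3, d), torch.randn(3))) for _ in range(4)]
print(meta_train_step(model, tasks, opt))
```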
### _Graph-based SL Models_
**IGT [39]**. The goal of IGT is similar to OFCT. It aims to predict the time from user payment to package delivery, given the information of the retailer, origin, destination, and payment time. IGT proposes an Inductive Graph Transformer (IGT) to address the challenges of inductive inference (i.e., models are required to predict the ETA for orders with unseen retailers and addresses) and the high-order interaction of order semantic information.
* **Heterogeneous Graph Input**. To fully model the high-order interaction of order semantic information, a heterogeneous graph is constructed, where each element in order (i.e., retailer, origin, destination, and payment time) is represented as a node in the graph. Two nodes are linked if they occur in the same order. IGT further limits the links according to proposed rules (such as the retailer node can only connect to the origin node) to reduce the density of the graph and the computational complexity.
* **Temporal and Heterogeneous GCN Encoder** (Heter-GCN). To model the heterogeneous graph, it first constructs a set of bipartite subgraphs based on the combinations of node types, and graph convolution is then performed in each bipartite subgraph. At this point, the node embeddings have been injected with information about the graph structure and the inter-correlations between different nodes. Based on the learned node embeddings, IGT further adopts a GRU to analyze temporal correlations along the time-series axis at the node level.
* **Transformer-based Time Decoder**. In the decoder, the raw features of an order and the embedded order embedding are encapsulated into the same vector and then fed into the Transformer for the estimation.
**DGM-DTE [40]**. The studied problem of DGM-DTE is the same as IGT. DGM-DTE targets the challenge of imbalanced delivery time estimations and proposes a dual graph multi-task framework.
* **Multi-graph input**. Given the retailer, origin, destination, and payment time information of orders, DGM-DTE constructs three graphs named spatial, temporal, and
merchant relation graphs. The spatial relation graph is composed of OD pairs (i.e., sending and receiving addresses). The temporal graph represents the periodicity of the payment time in weeks and days. And the merchant graph represents the similarity (defined by historical orders) between merchants.
* **Dual Graph-based Encoder.** To tackle the imbalanced delivery time estimations, it first classifies the input into two classes: head and tail data according to the delivery time distribution. Then, it designs two graph-based representation brunches where GCN and GAT are employed. One learns high-shot data representation in head data, and another re-weights the representations of tail data according to kernel density estimation [114] of labels.
* **Multitask Decoder.** The order representations learned from the dual graph module are then aggregated and fed into a DNN predictor for time estimation. Doing so, the model can focus on both high-shot regional data and rarely labeled data. Moreover, DGM-DTE adopts a multitask learning framework that predicts delivery time from two views, i.e., 1) the classification of head or tail data and 2) the imbalanced data regression.
**GSL-QR [41].** GSL-QR improves the model performance by learning the optimal graph structure and graph embeddings guided by the downstream ETA task.
* **Spatial and Temporal Relation Graph Input.** Given the sending and receiving addresses, payment time, and merchant information of orders, GSL-QR constructs spatial and temporal relation graphs. The spatial relation graph is a similarity relation graph among OD pairs (sending and receiving addresses) built upon their spatial attributes. The edge represents the similarity relation between two OD pairs, and the edge weight is the similarity score. The temporal relation graph is a similarity relation graph among the payment time of orders. Each node is a tuple indicating the payment time of an order placed on the day of the week and the hour of the day.
* **GSL and GNN Encoder.** GSL-QR proposes a Graph Structure Learning (GSL) method for simultaneously learning the optimal relation graph structure and node embeddings for ETA prediction. It uses a metric-learning-based graph learner that first obtains node embeddings from the initial spatial relation graph, then reconstructs the adjacency matrix based on the pairwise similarity of the node embeddings (a minimal sketch follows the list). GAT is utilized in both the learning function and node embedding generation. For the temporal relation graph, GSL-QR uses a similar method, with GCN utilized in the learning function and node embedding generation.
* **Attention-based Decoder.** GSL-QR proposes a multi-attribute adaptive attention aggregation to dynamically measure the contributions of the spatial, temporal, and context attributes. A DNN is used for the final regression prediction. GSL-QR argues that not only the accuracy of ETA prediction but also the order fulfillment rate should be considered; to strike a balance between them, it employs quantile regression to find an optimal point.
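The sketch below illustrates the metric-learning-based graph learner described above: initial node embeddings are projected, a similarity matrix is rebuilt from them, and only the top-\(k\) relations per node are kept as the learned adjacency. The cosine similarity, top-\(k\) sparsification, and dimensions are assumptions; GSL-QR additionally couples this step with GAT/GCN embedding generation and the downstream ETA objective.

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphLearner(nn.Module):
    """Metric-learning-based graph structure learning: project node
    embeddings, rebuild the adjacency from pairwise cosine similarity,
    and keep only the top-k strongest relations per node."""
    def __init__(self, d_in, d_h, k=3):
        super().__init__()
        self.proj = nn.Linear(d_in, d_h)
        self.k = k

    def forward(self, H):
        Z = F.normalize(self.proj(H), dim=-1)
        S = Z @ Z.T                                    # pairwise similarity
        topk = S.topk(self.k, dim=-1).indices
        A = torch.zeros_like(S).scatter_(1, topk, 1.0)  # keep top-k per node
        return A * S.clamp(min=0.0)                     # sparse, weighted adjacency

learner = GraphLearner(d_in=8, d_h=16, k=3)
A_new = learner(torch.randn(10, 8))
print(A_new.shape)  # torch.Size([10, 10])
```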
## 6 Joint Route and Time Prediction
Models within this category aim to predict both the future route and the arrival time of a worker. The rationale behind this is that the arrival time and route are often highly correlated with each other. Existing works mainly cover sequence-based SL models and graph-based SL models.
### _Sequence-based SL Models_
Three methods fall into this class: RankETPA [42], FDNET [5], and I\({}^{2}\)Route [32].
**RankETPA [42].** RankETPA develops a two-step model for package pick-up route and time prediction. It first predicts the future pick-up route, then feeds the predicted route as input to the time prediction.
* **Sequence Input.** The input contains the features of unfinished tasks as described in the preliminary.
* **Task Encoder.** RankETPA has a route predictor and a time predictor. The encoder of the route predictor is an LSTM, which reads the input step by step, while the encoder of the time predictor is a Transformer, which encodes the sequential information of the predicted route and the correlations between different tasks.
* **Route&Time Decoder.** The decoder of the route predictor is a PointerNet, and the decoder of the time predictor is also a Transformer. The route predictor first estimates the future service route, which is converted into a positional encoding and fed into the time decoder.
**FDNET [5].** FDNET is a deep learning method to tackle the food delivery route and time prediction task.
* **Sequence Input.** The input of FDNET contains the features of unfinished tasks and workers' features.
* **Task Encoder.** FDNET has two modules: RP (route prediction) and TP (time prediction). Both of them treat the input as a sequence and share the same LSTM as the encoder. Moreover, DeepFM [115] is adopted to learn the interactions between different features.
* **Route&Time Decoder.** The RP module predicts the probability of each feasible location the worker will visit next and generates the complete delivery route. A model based on RNN and attention is designed to depict the behavior decision process of drivers based on features affecting drivers' behaviors. The TP module predicts the travel time duration between two adjacent locations (from leaving the previous location to arriving at the next one) in the route. A Wide and Deep model [116] is designed to predict the time duration based on the built context, worker and spatiotemporal features. For each step, input locations of the TP module are produced by the RP module, and the result of the TP module will be used to update features for predicting the worker's future behaviors.
**I\({}^{2}\)Route [32].** I\({}^{2}\)Route is proposed for package delivery route and time prediction. It is the first model that explores the inter- and intra-community routing patterns of workers.
* **Sequence Input.** I\({}^{2}\)Route aims to explore the case where only limited information is available in the system. Its input features only contain the latitude, longitude, and community id of the package.
* **Inter- and Intra-Community Task Encoder.** I\({}^{2}\)Route has two modules: i) the inter-community module with LSTM
learns how workers transfer between different communities; ii) the intra-community module pays attention to the trajectory of a worker inside a community and the time duration between consecutive tasks. It adopts the Transformer encoder to capture the correlations between different tasks.
* **Two-level Route&Time Decoder.** I\({}^{2}\)Route explicitly models the inter-community and intra-community transition behaviors with two separate PointerNet-based networks. For prediction inside a community, it designs a residual-based block to jointly predict the next task and the time duration between two consecutive tasks.
### _Graph-based SL Methods_
**M\({}^{2}\)G4RTP [43].** M\({}^{2}\)G4RTP proposes a multi-level and multi-task graph model for joint route and time prediction in the logistics platform.
* **Multi-level Graph Input.** To model both AOI-level and location-level transfer patterns, a multi-level graph is constructed where AOIs and locations are treated as nodes. We illustrate the multi-level graph in Figure 12.
* **Multi-level Graph Task Encoder.** M\({}^{2}\)G4RTP develops a multi-level graph encoder, which is equipped with GAT-e (graph attention network [77] with edge feature accounted) encoding module for modeling workers' high-level transfer mode between AOIs and low-level transfer mode between locations.
* **Multi-task and Multi-level Route&Time Decoder.** A multi-task and multi-level decoder completes both the route and time prediction in a multi-task manner at the location and AOI levels, respectively. It is composed of an AOI-level decoder that ranks all AOIs and provides guidance for the location-level decoder; in the location-level decoder, the tasks inside an AOI are ranked and the arrival time is output. Besides, during training, route prediction (a classification task) and time prediction (a regression task) are trained together in a multi-task manner. Since classification and regression are heterogeneous tasks whose loss values are on different scales, M\({}^{2}\)G4RTP uses a weight assignment technique based on homoscedastic uncertainty [117] to balance these tasks during training (a minimal sketch follows the list).
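The homoscedastic-uncertainty weighting used to balance the route (classification) and time (regression) losses can be sketched as below, following a commonly used simplified form of [117]: each task loss is scaled by a learned precision \(\exp(-s)\) and regularized by \(s\). The exact parameterization in M\({}^{2}\)G4RTP may differ.

```
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Balance heterogeneous route (classification) and time (regression)
    objectives: each task loss is scaled by a learned log-variance so that
    neither task dominates training."""
    def __init__(self):
        super().__init__()
        self.log_var_route = nn.Parameter(torch.zeros(1))
        self.log_var_time = nn.Parameter(torch.zeros(1))

    def forward(self, route_loss, time_loss):
        w_route = torch.exp(-self.log_var_route)
        w_time = torch.exp(-self.log_var_time)
        return (w_route * route_loss + self.log_var_route
                + w_time * time_loss + self.log_var_time).squeeze()

crit = UncertaintyWeightedLoss()
print(crit(torch.tensor(1.3), torch.tensor(0.7)))
```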
### _Applications_
To intuitively show how the RTP methods can serve real-world systems, we illustrate two applications from the Cainiao system in Figure 13.
As shown in Figure 13(a), the Intelligent Order Sorting Service is designed to assist couriers during package pick-up. In this setting, pick-up tasks are generated randomly, as users can place orders at any time, which requires couriers to frequently update their route plans due to incoming tasks. Route prediction simplifies the courier's job by intelligently sorting orders based on the courier's likely future route. Before this service, the platform could only display all pending orders in either a time-focused or distance-focused manner, forcing couriers to sift through all orders to plan their routes. With intelligent sorting, the order list now aligns better with couriers' work habits, easing the burden of route planning.
As shown in Figure 13(b), package pick-up is a face-to-face service, requiring customers to be available until the courier arrives. This often leads to increased waiting anxiety for users, making accurate ETA (Estimated Time of Arrival) crucial. The improved route and time prediction now offer users a more reliable and accurate ETA service. Previously, the platform's ETA service gave a broad 2-hour window, allowing the courier to arrive at any point within that period. With the new precise route and time prediction, the platform can now offer minute-level ETAs and inform users about the number of remaining orders before the courier arrives.
## 7 Limitation and Future Direction
In this section, we first elaborate on the limitations of the current work, then discuss some promising directions for research in this field.
Fig. 12: Modeling the problem from the multi-level graph perspective.
Fig. 13: Applications on Cainiao APP [43].
### _Limitations_
Though current methods have achieved promising performance, there are still some limitations.
**Recurrent decoding mechanism.** Firstly, most decoders for route/time prediction adopt a recurrent architecture [31, 43, 5, 33, 4], where the prediction targets are calculated step by step and the output of the previous step is usually fed as input to the next step. As a result, the recurrent architecture may suffer from efficiency problems. Especially in real-world scenarios such as last-mile delivery, a worker (i.e., courier) can have around (or even more than) 50 tasks at the same time, which poses big challenges for the recurrent decoder.
**Lack of modeling the road network.** Secondly, none of the current models takes the road network into consideration. Most of them only model the spatial-temporal features of the finished or unfinished tasks. Some works also model additional geographical information, such as the community [32] or AOI [43]. Nevertheless, all of them ignore the road network, which is a natural and important source of spatial information; ignoring it can notably compromise the model's efficacy.
**Error accumulation in time prediction.** Thirdly, current solutions for RTP typically use the route prediction results as the input for time prediction, such as [43, 5, 32]. In this way, if the route prediction is wrong, the accuracy of the time prediction is also affected. Moreover, errors in route prediction can accumulate and degrade the performance of time prediction.
**Lack of public data and benchmarks.** Lastly, there is still a lack of public data in this area. Most existing works conduct experiments with private data. For example, OSquare uses data collected by Eleme, DeepRoute uses data collected by Cainiao, and ILRoute uses data from Meituan. Such data settings lack transparency and make it hard to reproduce results. Although one recent work [118] proposes a publicly available dataset (named LaDe) for last-mile delivery, there is still a lack of publicly available and widely accepted datasets and benchmarks, which hinders the development of this area.
### _Future Directions_
There exist some interesting directions for future research.
**More efficient decoding technology for route prediction.** One possible future direction is to develop a more efficient decoding mechanism. There are two ways to achieve this goal. The first is from the perspective of model architecture: a non-autoregressive decoder that generates multiple outputs at once can be explored. The second is from the perspective of model compression [119]: a more lightweight model can be explored to accelerate inference speed when deployed in a real system.
**Modeling of the road network.** As elaborated in the limitations, the road network contains abundant spatial information, and it is also the geographical space where workers finish their tasks. Therefore, one future direction is to model the road network in the model design. For example, one could cast the problem instance onto the road network and reformulate the RTP problem based on it. Against this background, how to effectively incorporate road network information to boost RTP performance would be a challenging and promising direction.
**Modeling of the joint distribution of route and time.** Current route and time prediction models usually treat route prediction and time prediction separately: they either output the route and time in a two-step way [42] or compute the route and time with two modules [5]. However, route and time are strongly correlated with each other. In light of this, a potential future direction is to develop more effective models that represent route and time in a single unified manner, treating the two as a whole and capturing their joint distribution by leveraging abundant spatial-temporal context.
**Consideration of different route constraints.** Current models have already explored the case with the pickup-then-delivery route constraints. However, many different route constraints exist in different scenarios, such as capacity constraints [120, 121], and first-in-last-out constraints [122, 123]. Specific model design may be required to handle different route constraints. It would be another future direction to develop models for route and time prediction problems under different route constraints, so that the solution can better align with the real scenarios.
**Probabilistic RTP forecasting.** Existing efforts all study the scenario where a point estimation is conducted. Specifically, for time prediction, only one point estimate is given by the model, and for route prediction, they usually give one route estimate. However, such predictions cannot quantify the uncertainty of the prediction, which is crucial for downstream tasks that require risk assessment or decision-making under uncertainty [124, 125]. In future work, an interesting as well as challenging direction is to develop models that can give probabilistic forecasts, for example, models that can evaluate the uncertainty of the time prediction. As for route prediction, one can develop models that predict multiple possible routes at once. Furthermore, it would be even more challenging and promising to conduct joint probabilistic route and time forecasting.
## 8 Conclusion
The realm of instant delivery services is experiencing unprecedented growth, largely attributed to its profound impact on enhancing daily living. This paper delves into the route and time prediction (RTP) problem in instant delivery--an essential component for implementing an intelligent delivery platform that has a significant influence on customer satisfaction and operational cost. In this paper, we present the first systematic survey of deep neural networks tailored for service route and time prediction in instant delivery. Specifically, we first introduce the problem, commonly used metrics, and propose a novel taxonomy to classify the existing models from three perspectives. Then, we elaborate on the details of the models in each class, focusing on their motivation and model architecture. At last, we introduce the limitations and discuss the potential future direction in this field. We believe that this review fills the gap in RTP research and ignites further research interest in the RTP problem. |
2304.00276 | NPR: Nocturnal Place Recognition in Streets | Visual Place Recognition (VPR) is the task of retrieving database images
similar to a query photo by comparing it to a large database of known images.
In real-world applications, extreme illumination changes caused by query images
taken at night pose a significant obstacle that VPR needs to overcome. However,
a training set with day-night correspondence for city-scale, street-level VPR
does not exist. To address this challenge, we propose a novel pipeline that
divides VPR and conquers Nocturnal Place Recognition (NPR). Specifically, we
first established a street-level day-night dataset, NightStreet, and used it to
train an unpaired image-to-image translation model. Then we used this model to
process existing large-scale VPR datasets to generate the VPR-Night datasets
and demonstrated how to combine them with two popular VPR pipelines. Finally,
we proposed a divide-and-conquer VPR framework and provided explanations at the
theoretical, experimental, and application levels. Under our framework,
previous methods can significantly improve performance on two public datasets,
including the top-ranked method. | Bingxi Liu, Yujie Fu, Feng Lu, Jinqiang Cui, Yihong Wu, Hong Zhang | 2023-04-01T09:43:58Z | http://arxiv.org/abs/2304.00276v2 | # NPR: Nocturnal Place Recognition in Streets
###### Abstract
Visual Place Recognition (VPR) is the task of retrieving database images similar to a query photo by comparing it to a large database of known images. In real-world applications, extreme illumination changes caused by query images taken at night pose a significant obstacle that VPR needs to overcome. However, a training set with day-night correspondence for city-scale, street-level VPR does not exist. To address this challenge, we propose a novel pipeline that divides VPR and conquers Nocturnal Place Recognition (NPR). Specifically, we first established a street-level day-night dataset, NightStreet, and used it to train an unpaired image-to-image translation model. Then we used this model to process existing large-scale VPR datasets to generate the VPR-Night datasets and demonstrated how to combine them with two popular VPR pipelines. Finally, we proposed a divide-and-conquer VPR framework and provided explanations at the theoretical, experimental, and application levels. Under our framework, previous methods can significantly improve performance on two public datasets, including the top-ranked method.
## 1 Introduction
Visual Place Recognition (VPR) is a fundamental task in the fields of computer vision [4, 19, 34, 37, 38, 48] and robotics [16, 22, 23, 25, 28, 43], which involves returning database images that are similar to a query image by comparing it with a known large-scale image database. As previously reported, challenges for VPR include database scale [6], repeated structures [38], structural changes [18], occlusion [4], viewpoint [7], visual scale [14], illumination [3, 9, 23, 22, 32, 36, 37, 43, 48], and seasonal changes [25, 35]. Almost all recent VPR methods are learned on large-scale datasets [6, 24, 26, 37, 45]. A neural network is typically used to map images into an embedding space that can efficiently distinguish images captured at different places. However, existing datasets restrict the progress of VPR in nighttime street scenes.
**Training sets.** Previous research has established large-scale VPR training sets using the downloading interface of Google Street View [1, 2, 4, 6]. These training sets were collected using either car-mounted panoramic cameras or pedestrian-carried street-view cameras and covered almost all of the challenges above, except for nighttime scenes. Some VPR training sets for autonomous driving scenes include nighttime scenes but lack other challenges [23, 43]. For example, cars are limited to roads and mostly use forward-facing cameras, resulting in a lack of view direction changes.
**Testing sets.** Two representative testing sets for VPR in nighttime street scenes are Tokyo 24/7 [37] and Aachen Day/Night. i) The Tokyo 24/7 dataset comprises daytime, sunset, and nighttime scenes. However, all research
Figure 1: **A demo for NPR.** A 1.2km \(\times\) 1.2km satellite map is presented, where the dense purple dots represent the locations where the database images were captured. Our proposed method can retrieve the correct results from a database of 75,984 images, even when provided with a nighttime-captured image that includes a significant change in the view direction.
[4, 6, 8, 15, 21] has yet to separate these scenes for testing purposes; even the original paper [37] tested only sunset and nighttime together. This oversight obscures the fact that VPR performance is poor at night. ii) The Aachen Day/Night dataset evaluates Visual Localization (VL) but can also assess VPR, as VPR serves as the first stage in a two-stage VL approach [12, 30, 33]. The dataset is divided into daytime and nighttime and ranked on an evaluation system without visible ground truth. The top-ranked method used VPR to recall 50 candidate frames, which is impractical in real applications.
Under the limitations of the training set, a straightforward idea is to apply Image Enhancement (IE) or image-to-image translation to convert the nighttime queries into daytime queries. However, these methods, or their application to VPR, have the following problems: i) They introduce additional computing resources and time. ii) The training sets of learning-based IE usually consist of pairs of low-exposure and high-exposure images rather than night-to-day pairs [44], and are often not even outdoor scenes, so these methods may perform poorly on VPR datasets. iii) Inaccurate or erroneous night-to-day image-to-image translation can result in a degradation of VPR performance [3].
In this article, we address the training-set issue by reversing the "Night-to-Day" direction (i.e., translating daytime images into nighttime style) and propose a method to divide VPR and conquer Nocturnal Place Recognition (NPR). To summarize, our contributions are as follows:
* We propose a dataset comprising street scene images captured during daytime and nighttime and trained an unpaired image-to-image translation network on this dataset.
* Using the above translation network, we processed existing VPR datasets to generate the VPR-Night datasets and demonstrated how two popular VPR pipelines can leverage the VPR-Night datasets.
* We propose a divide-and-conquer VPR framework and provide theoretical, experimental, and practical explanations for the framework. Furthermore, under this framework, previous methods exhibit superior performance on public datasets.
## 2 Related Work
**Visual Place Recognition** has been dominated by deep learning methods. Previous research can be summarized in three main aspects: model, loss function, and data. At the model level, Convolutional Neural Networks (CNNs) [4] or Vision Transformers (ViTs) [41] are typically used as the feature extraction backbone, followed by a pooling or aggregation layer [4, 29], such as NetVLAD. At the loss function level, triplet loss and contrastive loss are commonly used to increase the Euclidean margin for better feature embedding [8]. However, triplet loss has a significant issue with mining hard negative samples. To address this problem, Liu et al. [21] proposed the statistical attention embedding loss, which efficiently utilizes multiple negative samples for supervision. Moreover, Ge et al. [15] utilized network output scores as self-supervised labels and achieved new state-of-the-art results through iterative training. Berton et al. [6] used the Large Margin Cosine Loss (LMCL) in VPR tasks and demonstrated superior performance over triplet loss. At the data level, it can be further divided into two scenarios: road scenes and street scenes. Road scene datasets typically have obvious characteristics [16, 23, 43], such as the camera facing forward and the fixed viewing direction. Although these datasets may contain nighttime data, these characteristics do not suit VPR tasks in street scenes. Street scene datasets are usually obtained from the Google Street View download interface [1, 2, 6], which can be arbitrarily expanded and includes almost all challenges of VPR tasks, except for nighttime scenes. Therefore, it is reasonable to conclude that the performance of existing VPR models in nighttime scenarios benefits from data augmentation during the training phase and the generalization capability during the inference phase.
**Nighttime Computer Vision** involves addressing classic downstream tasks using nighttime images, which can be categorized into two-stage and one-stage methods. Two-stage methods [3, 46] involve converting nighttime images to daytime images before performing downstream tasks, while one-stage methodologies exploit raw nighttime data for training such tasks [11, 48]. For instance, Anoosheh et al. [3] propose a GAN method for converting nighttime images into daytime images and perform retrieval-based localization. Cui et al. [11] propose a method to perform reverse ISP and dark processing and train an end-to-end model for dark object detection. Xu et al. [46] propose a GAN-based approach to convert nighttime images into daytime images for semantic segmentation of autonomous driving scenes. However, in the context of street-level VPR, two-stage methods suffer from poor generalization and increased computational requirements, whereas the one-stage approach is constrained by a scarcity of suitable training datasets. To address this limitation, we introduce a novel paradigm that integrates the two perspectives by transforming daytime images into nighttime images prior to VPR training.
**Image-to-Image Translation** is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set. The classification of these methods is based on the type of training set, which can be pixel-level corresponding image-to-image or unpaired image-to-image [50, 47]. Obtaining large-scale, street-level, pixel-level correspondence day-night image pairs in the real world is an extremely challenging task. Therefore, we have considered the second
type of training set. There are two main categories of methods used for unpaired image-to-image translation: two-side mapping and one-side mapping. The former, which includes CycleGAN [50] and DualGAN [47], relies on the cycle-consistency constraint. This constraint ensures that the translated image can be reconstructed using an inverse mapping. However, the bijective projection required by this approach is often too limiting, which has led to the development of one-side mapping techniques. One such approach involves using a geometry distance to preserve content. DistanceGAN [13], for instance, maintains the distances between images within domains, while GCGAN [5] ensures geometry-consistency between input and output. Another technique, CUT [27], maximizes mutual information between the input and output using contrastive learning. In particular, the diffusion model [49] has recently demonstrated outstanding performance in solving image-to-image translation problems. However, its usage necessitates substantial computational resources. After a thorough comparison of several approaches, we proceeded to train our day-to-night image-to-image translation network utilizing NEGCUT [42] as the foundation.
## 3 The Proposed Dataset
In this section, we introduce the proposed datasets, which are divided into two categories: the NightStreet dataset for training a day-to-night image-to-image translation network, and the VPR-Night datasets for training night-to-day VPR networks.
### The NightStreet Dataset
Some datasets have been previously identified as potentially useful for day-to-night image-to-image translation. Nevertheless, they are not suitable for VPR tasks in street scenes. As depicted in Figure 2 (a), current low-light enhancement research predominantly focuses on tackling low-light conditions (such as those caused by backlighting) or weak exposure. However, utilizing these datasets in reverse fails to accurately capture the changes that occur during day-to-night translation. As demonstrated in Figure 2 (b), there exist significant differences between autonomous driving scenes and street scenes. For example, roadways typically constitute more than one-third of the image content, and images captured while the car is in motion tend to exhibit blurriness. Additionally, lighting conditions (e.g., street lights and car tail lights) generally exhibit limited variation. Regarding Figure 2 (c), several time-lapse photography datasets have been proposed in the image generation field. However, these datasets share a common characteristic: the photographers who capture them tend to focus on distant views and skylines, which differ greatly from urban street scenes.
We constructed day-night image pairs by directly rearranging the query images from the Tokyo 24/7 and Aachen Day/Night datasets. Tokyo 24/7 [37] provides 375 daytime and nighttime images each, while Aachen Day/Night [33] includes 234 daytime and 196 nighttime images. Importantly, we did not exploit the ground-truth relationship between the query and database images. Instead, we trained our translation network under an unpaired-image setting.
### The VPR-Night Datasets
We used a trained day-to-night image-to-image translation model to process the existing VPR datasets and obtained the VPR-Night datasets. Theoretically, this method can be applied to any VPR dataset. In this study, we processed the Pitts-30k and SF-XL-small datasets, and named the new datasets Pitts-30k-N and SF-XL-small-N, respectively.
**Pitts-30k-N.** Pitts-30k is a subset of Pittsburgh-250k that is widely used in the research of VPR because of its high experimental efficiency [4]. It consists of 30k database images from Google Street View and 24k test queries generated from Street View but taken at different times, and is divided into three roughly equal parts for training, validation, and testing. According to the method designed in section 4.2, we only need to perform day-to-night transfer on the test queries. Therefore, there are 24k night-style query images and 30k daytime database images in Pitts-30k-N dataset.
**SF-XL-small-N.** The San Francisco eXtra Large (SF-XL) [6] is a city-scale, dense, time-varying dataset. It crawls 3.43M equirectangular panorama images from Google Street View and divides them into 12 crops, with the entire dataset consisting of 41.2M images. Each crop is labeled with 6 DoF information including GPS and heading orientation. Unlike the Pittsburgh dataset, the training set is not divided into query images and database images. To quickly validate our method, we opted to use a subset of SF-XL, namely SF-XL-small, comprising 100k street-view images. Similarly, SF-XL-small-N contains 100k original images and 100k nighttime images.
Figure 2: **Examples of Day-Night Style Datasets.** (a) is the LOL dataset [44], (b) is the Robotcar dataset [23], (c) is the Night2Day dataset [20], and (d) is the NightStreet dataset.
## 4 Method
In this section, we first introduce the day-to-night image-to-image translation, then describe how to train two VPR pipelines on the generated nighttime data. Finally, we explain the rationale and implementation of dividing VPR and conquering NPR.
### Day-to-Night Image-to-Image Translation
In this section, we introduce a contrastive learning-based unpaired image-to-image translation network [27, 42], which is trained on the NightStreet dataset and generates the VPR-Night datasets. Our goal is to preserve the content of daytime images while specifying nighttime style. We define the daytime images from the input domain as \(\mathcal{X}\subset\mathbb{R}^{H\times W\times 3}\) and the nighttime images from the output domain as \(\mathcal{Y}\subset\mathbb{R}^{H\times W\times 3}\), and aim to learn a mapping \(G:\mathcal{X}\rightarrow\mathcal{Y}\). The NightStreet dataset can be represented as an instance \(X=\{x\in\mathcal{X}\}\) and \(Y=\{y\in\mathcal{Y}\}\). The mapping function \(G\) is decomposed into an encoder \(G_{\mathrm{enc}}\) and a generator \(G_{\mathrm{dec}}\), so the process of producing output images can be represented as:
\[\hat{y}=G(x)=G_{\mathrm{dec}}(G_{\mathrm{enc}}(x)). \tag{1}\]
We encourage the output images to have a nighttime style similar to the target domain by using a discriminator \(D\) and the following adversarial loss [50]:
\[\mathcal{L}_{\mathrm{GAN}}=\mathbb{E}_{y\sim Y}\log D(y)+\mathbb{E}_{x\sim X }\log\left(1-D(G(x))\right). \tag{2}\]
Then, a contrastive learning framework is employed to maintain local content consistency between the input \(x\) and the output \(\hat{y}\). We extract a certain number \(N\) of patches from the images, where one pair of patches, denoted \(q\) and \(p\), is located at the same position in \(x\) and \(\hat{y}\), while the remaining \(N-2\) patches are randomly selected from \(x\). These patches are then fed into a feature extraction network to obtain feature vectors, and an \((N-1)\)-way classification problem is established. The feature extraction network used here is based on \(G_{\mathrm{enc}}\), followed by a 2-layer MLP.
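To make the two objectives above concrete, the following is a minimal PyTorch-style sketch (not the actual NEGCUT-based implementation used in this paper): the adversarial term of Eq. (2), written here in the standard non-saturating form, and the patch-wise \((N-1)\)-way contrastive classification. All tensor names, the temperature value, and the assumption that patch features are already L2-normalized are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def adversarial_losses(D, y_night, y_fake):
    """Adversarial term of Eq. (2): D separates real night images from
    generated ones; G (which produced y_fake) tries to fool D."""
    logit_real, logit_fake = D(y_night), D(y_fake.detach())
    loss_D = F.binary_cross_entropy_with_logits(logit_real, torch.ones_like(logit_real)) \
           + F.binary_cross_entropy_with_logits(logit_fake, torch.zeros_like(logit_fake))
    logit_gen = D(y_fake)  # no detach: gradients flow back into the generator
    loss_G = F.binary_cross_entropy_with_logits(logit_gen, torch.ones_like(logit_gen))
    return loss_D, loss_G

def patch_nce_loss(feat_q, feat_pos, feat_negs, tau=0.07):
    """(N-1)-way patch classification: the patch feature from the generated
    image (feat_q) should match the patch at the same location of the input
    (feat_pos) against N-2 other patches of the input (feat_negs)."""
    # feat_q, feat_pos: (B, C); feat_negs: (B, N-2, C); all L2-normalized.
    l_pos = (feat_q * feat_pos).sum(dim=1, keepdim=True)           # (B, 1)
    l_neg = torch.bmm(feat_negs, feat_q.unsqueeze(2)).squeeze(2)   # (B, N-2)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    target = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, target)  # positive patch sits at index 0
```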
### Two Night-to-Day VPR Pipelines
In this section, we introduce strategies for utilizing nighttime data to train two popular VPR frameworks, namely the triplet network shown in Figure 3 (a) and the classification network shown in Figure 3 (b).
**Triplet Network.** From the training set with GPS, anchor samples \(q\) are selected, and data sharing the same GPS label or with close proximity are considered positive samples \(p\), while the remaining samples are regarded as negative samples \(n\). These samples are then fed into \(f\), a pre-trained deep neural network with an aggregation layer, obtaining feature vectors \(f(q),f(p),\) and \(f(n)\) in the embedding space. Finally, the Euclidean distance between the \(f(q)\) and \(f(p)\), as well as \(f(q)\) and \(f(n)\), are calculated in the embedding space to obtain a triplet loss, which is formulated as:
\[L_{\mathrm{triplet}}=l(||f(q)-f(p)||_{2}^{2}-||f(q)-f(n)||_{2}^{2}+m), \tag{3}\]
Figure 3: **Schematic Diagrams of Two Popular VPR Pipelines.** In (a), the blue, green, and red blocks inside the sphere represent anchor, positive, and negative samples in the embedding space. In (b), the blue, green, and red blocks inside the sphere belong to different classes in the embedding space. In (a) and (b), the black blocks represent nighttime images.
where \(l\) is the hinge loss \(l(x)=\max(x,0)\), \(m\) is the margin parameter that controls the distance between samples of the same class and different classes.
Considering that we aim to achieve Night-to-Day VPR, we need to construct sample pairs with different styles, i.e., we need to transfer the anchor samples to nighttime style or convert both positive and negative samples to nighttime style. The latter method is obviously less efficient than the former.
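As an illustration of the construction above, here is a minimal PyTorch-style sketch of the triplet objective of Eq. (3) with a night-styled anchor; the embedding network `f` (backbone plus aggregation layer) and the margin value are placeholders rather than the exact training configuration.

```python
import torch.nn.functional as F

def night_to_day_triplet_loss(f, q_night, p_day, n_day, margin=0.1):
    """Triplet loss of Eq. (3): the anchor is rendered in nighttime style,
    while the positive and negative come from the daytime database, so the
    embedding learns to match night queries against day references."""
    eq, ep, en = f(q_night), f(p_day), f(n_day)
    d_pos = (eq - ep).pow(2).sum(dim=1)            # squared L2 distance to positive
    d_neg = (eq - en).pow(2).sum(dim=1)            # squared L2 distance to negative
    return F.relu(d_pos - d_neg + margin).mean()   # hinge l(x) = max(x, 0)
```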
**Classification Network.** VPR can be treated as a classification problem based on labels. Following [6], the training set can be partitioned into classes by splitting it into square geographical cells using UTM coordinates \(\{east,north\}\) and further dividing each cell into a set of classes based on the orientation \(\{heading\}\) of each image. We transformed all images in the database into a night style while preserving their original labels. Then we employed the Large Margin Cosine Loss (LMCL) [40] to train a model:
\[L_{\mathrm{lmc}}=\frac{1}{N}\sum_{i}-\log\frac{e^{s(\cos(\theta_{y_{i},i})-m)} }{e^{s(\cos(\theta_{y_{i},i})-m)}+\sum_{j\neq y_{i}}e^{s\cos(\theta_{j},i)}}, \tag{4}\]
subject to
\[\cos(\theta_{j},i)={W_{j}}^{T}x_{i}, \tag{5}\]
where \(N\) is the number of training images, \(x_{i}\) is the \(i\)-th embedding vector corresponding to the ground-truth class \(y_{i}\), \(W_{j}\) is the weight vector of the \(j\)-th class, and \(\theta_{j}\) is the angle between \(W_{j}\) and \(x_{i}\). \(s\) and \(m\) are two hyperparameters that control the weights of the intra-class and inter-class distances in the loss function, respectively.
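For concreteness, a minimal PyTorch-style sketch of the LMCL objective of Eqs. (4)-(5) is given below; the scale \(s\) and margin \(m\) values are illustrative defaults, not the hyperparameters used in our experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LargeMarginCosineLoss(nn.Module):
    """LMCL of Eqs. (4)-(5): classify geographic classes (UTM cells further
    split by heading) with a cosine margin m and a scale s."""
    def __init__(self, dim, num_classes, s=30.0, m=0.40):
        super().__init__()
        self.W = nn.Parameter(torch.randn(num_classes, dim))
        self.s, self.m = s, m

    def forward(self, x, labels):
        # cos(theta_j, i) = W_j^T x_i, with W_j and x_i L2-normalized.
        cos = F.linear(F.normalize(x), F.normalize(self.W))       # (B, num_classes)
        margin = torch.zeros_like(cos).scatter_(1, labels.unsqueeze(1), self.m)
        return F.cross_entropy(self.s * (cos - margin), labels)
```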
### Dividing VPR and Conquering NPR
In this section, we introduce concepts from three fields: deep learning, neuroscience, and computer algorithms. Our aim is to explain the rationale behind the need to separate the night vision problem from the general vision problem1.
Footnote 1: Although we focus solely on the VPR task, we believe that this framework should be applicable to many other computer vision tasks.
**Deep learning.** We utilize data-driven network learning, where the training set and test set should have similar distributions [17]. However, previous research on nighttime VPR does not follow this principle, which is the reason why all methods degrade severely in nighttime scenes. When the fitting ability of the model is limited, increasing the amount of nighttime data will also cause the VPR performance in daytime scenes to degrade.
**Neuroscience.** There are two types of photoreceptor cells distributed on the retina: cone cells and rod cells [39]. Cone cells are primarily responsible for color and detail discrimination and are only activated under relatively adequate lighting conditions. Rod cells, on the other hand, are mainly responsible for identifying the intensity of light and motion, and can be responsive under low-light conditions. Nocturnal animals possess a greater number of rod cells.
**Computer algorithm.** Divide-and-conquer [10] (D&C) is a classic algorithm that decomposes a large problem into several smaller but structurally similar sub-problems, recursively solving these sub-problems, and then combining the sub-problem solutions, to obtain the original problem solution.
Based on the above knowledge, it is suggested that the original model should be used for daytime scenes, while the VPR-Night trained or fine-tuned model should be used for nighttime scenes. Then we provide three implementation ideas for the "divide" step: i) training a day-night network for classification, ii) using the global average brightness and an empirical threshold for classification, and iii) using system time and local sunset time 2 for classification. The first method can achieve better results, but has low practicality in real-world applications. The second method may fail in some specific situations, such as scenes with strong back-light being misclassified as night and scenes with bright light being misclassified as day. The third method is independent of image content, but is only applicable to devices equipped with communication functions, such as smartphones and robots. In our experiments, we use a combination of the second and third methods for decomposing the test set. Finally, we emphasize that the execution of "divide" in the real world is a one-time process, so the additional computation required for new pipelines can be ignored.
Footnote 2: [https://www.timeanddate.com/astronomy/](https://www.timeanddate.com/astronomy/)
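As a concrete illustration of the second and third "divide" heuristics above, the following sketch routes each query to the day or night model; the brightness threshold here is an arbitrary placeholder, not a value tuned in our experiments.

```python
import numpy as np

def is_night_query(image_rgb, capture_time=None, sunrise=None, sunset=None,
                   brightness_threshold=60.0):
    """Decide whether a query should be handled by the night model.
    If capture time and local sunrise/sunset are available (method iii),
    use them; otherwise fall back to the global mean brightness (method ii)."""
    if capture_time is not None and sunrise is not None and sunset is not None:
        return not (sunrise <= capture_time <= sunset)
    gray = np.asarray(image_rgb, dtype=np.float32).mean(axis=2)  # HxW luminance proxy
    return float(gray.mean()) < brightness_threshold

# model = night_model if is_night_query(img, t, sunrise, sunset) else day_model
```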
## 5 Experiments
In this section, we describe the implementation details, the test sets, and the evaluation methods. We provide quantitative and qualitative results, followed by visualizations that demonstrate the utility of NPR as well as its current limitations.
### Implementation details
**Day-to-Night Image-to-Image Translation.** We use the same training parameters as in [42] and select the model from the 400th epoch for inference. Since the image sizes in NightStreet vary and differ greatly from those of the VPR training set, we scale the NightStreet images proportionally with the shorter side fixed at 640 and then randomly crop them to 512\(\times\)512 during training. This maintains scale consistency between the training and testing sets as much as possible. The translation network takes approximately one day to process the SF-XL-small dataset at a resolution of 512\(\times\)512 on a single 3090 Ti.
**Night-to-Day Visual Place Recognition.** The essence of our method is to transfer the nighttime style to existing datasets and combine them with existing pipelines. Therefore, it is compatible with any VPR method and improves its performance in nighttime scenes. We replicated the work of [4, 8, 6] and applied our method to it. Since most previous methods used the VGG-16 backbone, we trained and compared with this backbone. The current top-ranked method on Tokyo 24/7 is based on ResNet-50, so we also conducted experiments with it. We either reproduce the best performance of each baseline method or directly use the model open-sourced by its authors. Specifically, data augmentation was not used during NPR training, and the learning rate used when fine-tuning the models was 1e-6.
**Night-to-Day Visual Localization.** We adopted the hierarchical localization framework provided by [30] and replaced the VPR module with Superpoint [31] and Superglue [12] for local feature extraction and matching, respectively. Notably, we adjusted the input image resolution of the VPR module to the recommended parameters for this project.
### Datasets and evaluation methodology
We reported results on multiple publicly available datasets.
**Tokyo 24/7 v2**[37] contains 75,984 database images from Google Street View and 315 query images taken with mobile phone cameras. This is an extremely challenging dataset in which the queries were taken at daytime, sunset, and nighttime, while the database images were taken only at daytime. However, to the best of our knowledge, no work using this dataset has tested the queries by category; even the original work that proposed this dataset evaluated sunset and night as one category. This has obscured the fact that VPR performance is poor at night. We propose two ways to test the dataset: one is to split directly by labels, and the other is to partition using the exchangeable image file format (EXIF) information and the local sunset time 3. The second method splits the sunset testing set into two categories, tests them separately, and then merges the results.
Footnote 3: [https://www.timeanddate.com/sun/japan/tokyo?month=9&year=2014](https://www.timeanddate.com/sun/japan/tokyo?month=9&year=2014)
**Aachen Day/Night v1**[33] comprises 922 query images captured during day and night, and 4328 reference images collected over a span of two years using hand-held cameras. All query images were captured using mobile phone cameras, hence the Aachen Day-Night dataset considers the scenario of localization using mobile devices, e.g., for augmented or mixed reality applications.
**Other datasets.** Additionally, we employ the Pitts-30k and SF-XL-test-v1 datasets to investigate the phenomenon of model degradation.
**Evaluation metric.** On the VPR datasets, we use recall@N with a 25-meter threshold, i.e., the percentage of queries for which at least one of the first N predictions is within 25 meters of the ground-truth position of the query, following standard procedure [4, 37, 28, 6, 21, 15]. On the visual localization dataset, we evaluate VPR using the localization success rate under different recall@N settings. Specifically, the visual localization system requires higher localization precision than the VPR datasets.
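For clarity, a minimal sketch of the recall@N computation is given below; array names and shapes are illustrative.

```python
import numpy as np

def recall_at_n(query_utm, db_utm, ranked_db_indices, n_values=(1, 5, 10, 20),
                threshold_m=25.0):
    """recall@N with a 25-meter threshold: a query counts as correct if any of
    its top-N retrieved database images lies within 25 m of its ground-truth
    position. query_utm: (Q, 2); db_utm: (D, 2); ranked_db_indices: (Q, K)."""
    recalls = {}
    for n in n_values:
        hits = 0
        for q in range(len(query_utm)):
            top_n = ranked_db_indices[q, :n]
            dists = np.linalg.norm(db_utm[top_n] - query_utm[q], axis=1)
            hits += bool((dists <= threshold_m).any())
        recalls[n] = 100.0 * hits / len(query_utm)
    return recalls
```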
### Comparison with the State-of-the-art Methods
As shown in Table 1, we have replicated the experimental results of NetVLAD [4], DIR [29], DVG [8], and CosPlace [6] on the Tokyo 24/7 dataset.
| Method | Backbone | Aggregation Method | Feature Dim | Loss Function | Training Dataset | R@1 All queries | R@1 Day queries | R@1 Sunset queries | R@1 Night queries |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NetVLAD [4] _(CVPR'16)_ | VGG-16 | NetVLAD | 32768 | Triplet Loss | Pitts-30k | 63.8 | 88.6 | 75.2 | 27.6 |
| NetVLAD-D&C | | | | | Pitts-30k-N | 62.5 (-1.3) | 80.0 (-8.6) | 64.8 (-10.4) | 42.9 (+15.3) |
| NetVLAD-D&C | | | | | - | 68.6 (+4.8) | 88.6 | 74.3 (-0.9) | 42.9 (+15.3) |
| NetVLAD [4] _(CVPR'16)_ | VGG-16 | NetVLAD+PCA | 4096 | Triplet Loss | Pitts-30k | 68.9 | - | - | - |
| DIR [29] _(T-PAMI'18)_ | Res-101 | GeM+FC | 2048 | Triplet Loss | Google Landmark | 74.9 | 92.4 | 81.9 | 50.5 |
| SARE [21] _(ICCV'19)_ | VGG-16 | NetVLAD+PCA | 4096 | SARE-Joint | Pitts-30k | 74.8 | - | - | - |
| SFRS [15] _(ECCV'20)_ | VGG-16 | NetVLAD+PCA | 4096 | SARE-Joint | Pitts-30k | 78.5 | - | - | - |
| APPSVR [28] _(ICCV'21)_ | VGG-16 | APP+PCA | 4096 | Triplet Loss | Pitts-30k | 77.1 | - | - | - |
| DVG [8] _(CVPR'22)_ | Res-18 | NetVLAD | 16384 | Triplet Loss | Pitts-30k | 66.7 | 90.5 | 76.2 | 33.3 |
| DVG-D&C | | | | | - | 74.0 (+7.3) | 85.7 (-4.8) | 80.0 (+3.8) | 56.2 (+22.9) |
| CosPlace [6] _(CVPR'22)_ | VGG-16 | GeM+FC | 512 | LMC Loss | SF-XL (fine-tuning on small-N) | 82.9 (+1.0) | 87.6 (-2.9) | 83.8 (-5.7) | 77.1 (+11.4) |
| CosPlace-D&C | | | | | - | 84.1 (+2.2) | 90.5 | 84.8 (-4.7) | 77.1 (+11.4) |
| CosPlace [6] | ResNet-50 | GeM+FC | 512 | LMC Loss | SF-XL (fine-tuning on small-N) | 88.6 | 95.2 | 90.5 | 80.0 |

Table 1: **Comparisons of various methods on Tokyo 24/7 [37].** We reproduced three methods and added their NPR and D&C versions. As previously mentioned, none of the previous methods conducted experiments on the last three columns. Therefore, we currently only report their Recall@1 metrics for all queries.
We also cite the results of NetVLAD with PCA [4], SARE [21], SFRS [15], and APPSVR [28] from the Deep Visual Geo-Localization Benchmark [8]. Our method was applied to these three replicated methods, and the resulting variants are labeled with the name suffixes NPR and D&C accordingly. The results can be summarized in a few points:
1) The Recall@1 for nighttime queries is significantly lower than that for daytime queries across all methods. The Recall@1 for all queries is a trade-off between the two. This confirms our viewpoint that the challenge of nighttime VPR has been overlooked for a considerable period of time.
2) All methods trained on the VPR-Night datasets showed a significant improvement in performance on nighttime queries. Models with weaker fitting abilities, such as VGG-16 and ResNet-18, showed a corresponding degradation in daytime scenes, whereas networks with stronger fitting abilities, such as ResNet-50, were able to improve Recall@1 for both daytime and nighttime queries.
3) To address the issue of imbalanced performance between daytime and nighttime for small models, a divide-and-conquer algorithm can effectively maintain performance balance, which is particularly beneficial for models deployed on mobile platforms.
As illustrated in Figure 4, we present the variation of accuracy with respect to different recall values, N. Our proposed method shows a significant performance improvement over the baseline approach across all recall values. It is worth noting that the vertical axes of the two plots have different origins.
As shown in Table 2, our proposed method outperforms the baseline approach on the nighttime testing subset of the Aachen dataset. While our localization success rates at R@10 and R@20 are slightly lower than those of NetVLAD with PCA, we think this may be attributed to the fact that the size of the database cannot fully reflect the advantages of our model, and our output dimensionality is also significantly lower than that of NetVLAD.
### Daytime VPR experiments
While we did not have high expectations for the performance of the NPR during daytime, we still demonstrated its performance on other daytime datasets and compared it with baseline methods. As shown in Table 3, our method exhibits a slight decrease in performance on the Pitts-30k and SF-XL-test-v1 datasets.
As shown in Figure 6, our incremental improvement significantly enhances performance at night. These nocturnal query images not only exhibit extreme illumination differences from the database images, but also involve structural changes, viewpoint changes, and scale variations. This is highly consistent with real-world scenarios.
### Limitations
We believe that the limitation lies not in our theory but in the current implementation. i) We require a larger NightStreet dataset to capture a richer range of day-to-night variations. Fortunately, the loose requirement for unpaired images makes the dataset easily extensible, and constructing the NightStreet dataset is undoubtedly less challenging than creating a large-scale, street-level, day-night corresponding VPR dataset. ii) Rendering large-scale datasets such as SF-XL requires significant GPU resources. We plan to address both of these limitations in our future work.
## 6 Conclusions
In this work, we address the challenging problem of nighttime VPR, which has been hindered by the lack of appropriate training datasets and inaccurate testing methodologies. To overcome these issues, we propose a dedicated pipeline for Nocturnal Place Recognition. First, we construct the NightStreet dataset and train a day-to-night image-to-image translation network. We then apply the network to process existing large-scale VPR datasets and demonstrate how to integrate them into two popular VPR pipelines. Finally, we introduce the idea of differentiating between VPR and NPR, providing a multidimensional interpretation. Our experimental results show that our pipeline significantly improves previous methods.
Figure 5: **Examples of translation results from SF-XL-small-N.** Each row corresponds to one location. (a), (c), and (e) represent the images captured at different times for the same location, while (b), (d), and (f) correspond to the nighttime images generated from (a), (c), and (e), respectively.
Figure 6: **Examples of retrieval results for challenging queries on Tokyo 24/7.** Each column corresponds to one query case: the query is shown in the first row, the top retrieved image using our best method (CosPlace-NPR) in the second, and the top retrieved image using our best baseline (CosPlace) in the last row. Correct retrievals are indicated with a green border, while incorrect retrievals are indicated with a red border. |
2302.07697 | Slopes of modular forms and geometry of eigencurves | Under a stronger genericity condition, we prove the local analogue of ghost
conjecture of Bergdall and Pollack. As applications, we deduce in this case (a)
a folklore conjecture of Breuil--Buzzard--Emerton on the crystalline slopes of
Kisin's crystabelian deformation spaces, (b) Gouvea's
$\lfloor\frac{k-1}{p+1}\rfloor$-conjecture on slopes of modular forms, and (c)
the finiteness of irreducible components of the eigencurve. In addition,
applying combinatorial arguments by Bergdall and Pollack, and by Ren, we deduce
as corollaries in the reducible and strongly generic case, (d) Gouvea--Mazur
conjecture, (e) a variant of Gouvea's conjecture on slope distributions, and
(f) a refined version of Coleman's spectral halo conjecture. | Ruochuan Liu, Nha Xuan Truong, Liang Xiao, Bin Zhao | 2023-02-15T14:53:26Z | http://arxiv.org/abs/2302.07697v1 | # Slopes of modular forms and geometry of eigencurves
###### Abstract.
Under a stronger genericity condition, we prove the local analogue of the ghost conjecture of Bergdall and Pollack. As applications, we deduce in this case (a) a folklore conjecture of Breuil-Buzzard-Emerton on the crystalline slopes of Kisin's crystabelian deformation spaces, (b) Gouvea's \(\lfloor\frac{k-1}{p+1}\rfloor\)-conjecture on slopes of modular forms, and (c) the finiteness of irreducible components of the eigencurve. In addition, applying combinatorial arguments by Bergdall and Pollack, and by Ren, we deduce as corollaries in the reducible and strongly generic case, (d) the Gouvea-Mazur conjecture, (e) a variant of Gouvea's conjecture on slope distributions, and (f) a refined version of Coleman's spectral halo conjecture.
Key words and phrases: Eigencurves, slope of \(U_{p}\) operators, overconvergent modular forms, completed cohomology, weight space, Gouvea's conjecture, Gouvea-Mazur conjecture, Kisin crystabelian deformation space.

2010 Mathematics Subject Classification: 11F33 (primary), 11F85 (secondary).

R. Liu is partially supported by the National Natural Science Foundation of China under agreement No. NSFC-11725101 and the Tencent Foundation. N. Truong is partially supported by L. Xiao's NSF grant DMS-1752703. L. Xiao is partially supported by Simons Collaboration Grant #278433, NSF grants DMS-1502147 and DMS-1752703, the Chinese NSF grant NSFC-12071004, the Recruitment Program of Global Experts of China, and a grant from the Chinese Ministry of Education. B. Zhao is partially supported by an AMS-Simons Travel Grant.
forms essentially of \(\operatorname{GL}_{2}(\mathbb{Q}_{p})\)-type. In this paper, the \(p\)-adic valuation is normalized so that \(v_{p}(p)=1\).
The general study of slopes of modular forms dates back to the 1990s, when Gouvea and Mazur made several profound and intriguing conjectures on these slopes, based on extensive numerical computations. These conjectures were later extended and refined by Buzzard, Calegari, and many other mathematicians; see [1, 1, 2, 3]. Certain very special cases were also proved, based on either the coincidence that a certain modular curve has genus \(0\) (e.g. [1]), or the still computationally manageable \(p\)-adic local Langlands correspondence when the slopes are small (e.g. [1, 2, 3, 1, 1, 1]). Unfortunately, despite strong numerical evidence, little theoretical progress was made towards these conjectures in the general case.
In recent breakthrough work [1, 2, 3, 4], Bergdall and Pollack unified all historically important conjectures regarding slopes into one conjecture: the _ghost conjecture_, which roughly gives a combinatorially defined "toy model", called the _ghost series_, of the characteristic power series of the \(U_{p}\)-action on the space of overconvergent modular forms. The purpose of this work and its prequel [11] is to prove this ghost conjecture and place it under the framework of the \(p\)-adic local Langlands conjecture. We now state our main theorem, followed by a discussion of all of its corollaries, and then conclude the introduction with a short overview of the proof.
### Statement of main theorems
To be precise, we fix an odd prime \(p\geq 5\) and an isomorphism \(\overline{\mathbb{Q}}_{p}\simeq\mathbb{C}\). Let \(E\) be a finite extension of \(\mathbb{Q}_{p}\) with ring of integers \(\mathcal{O}\) and residue field \(\mathbb{F}\). Let \(\bar{r}:\operatorname{Gal}_{\mathbb{Q}}\to\operatorname{GL}_{2}(\mathbb{F})\) be an absolutely irreducible representation. Let \(\operatorname{S}_{k}(\Gamma_{0}(Np);\psi)_{\bar{r}}\subseteq\operatorname{S}_{k}^{\dagger}(\Gamma_{0}(Np);\psi)_{\bar{r}}\) denote the spaces of classical and overconvergent modular forms of weight \(k\), level \(\Gamma_{0}(Np)\), and nebentypus character \(\psi\) of \(\mathbb{F}_{p}^{\times}\), localized at the Hecke maximal ideal corresponding to \(\bar{r}\), respectively. (Our convention on the associated Galois representation is the cyclotomic twist of that of [1, 1, 1, 1]; see § 1.26 for more discussion.)
It is a theorem of Coleman and Kisin that \(\operatorname{S}_{k}(\Gamma_{0}(Np);\psi)_{\bar{r}}\) is "almost" the subspace of \(\operatorname{S}_{k}^{\dagger}(\Gamma_{0}(Np);\psi)_{\bar{r}}\) spanned by \(U_{p}\)-eigenforms with slopes \(\leq k-1\) (the forms of slope \(k-1\) are a bit tricky and we do not discuss them in this introduction; see Proposition 2.11(1)). Thus, to understand the slopes of the \(U_{p}\)-action on \(\operatorname{S}_{k}(\Gamma_{0}(Np);\psi)_{\bar{r}}\), it suffices to understand the slopes of the Newton polygon of the characteristic power series of the \(U_{p}\)-action on \(\operatorname{S}_{k}^{\dagger}(\Gamma_{0}(Np);\psi)_{\bar{r}}\).
It is a theorem of Coleman that one may interpolate the characteristic power series of \(U_{p}\)-actions on spaces of overconvergent modular forms of all weights \(k\), as follows. Let \(\omega_{1}:\operatorname{I}_{\mathbb{Q}_{p}}\twoheadrightarrow\operatorname{ Gal}(\mathbb{Q}_{p}(\mu_{p})/\mathbb{Q}_{p})\cong\mathbb{F}_{p}^{\times}\) denote the _first fundamental character_ of the inertial subgroup \(\operatorname{I}_{\mathbb{Q}_{p}}\) at \(p\); so \(\det(\bar{r}|_{\operatorname{I}_{\mathbb{Q}_{p}}})=\omega_{1}^{c}\) for some \(c\in\{0,\dots,p-2\}\). Write \(\omega:\mathbb{F}_{p}^{\times}\to\mathcal{O}^{\times}\) for the Teichmuller character, and put \(w_{k}:=\exp(p(k-2))-1\) for each \(k\in\mathbb{Z}\). Then there exists a power series \(C_{\bar{r}}(w,t)\in\mathcal{O}\llbracket w,t\rrbracket\) such that
\[C_{\bar{r}}(w_{k},t)=\det\bigl{(}\operatorname{I}_{\infty}-U_{p}t;\;\operatorname{S}_{k}^{\dagger}\big{(}\Gamma_{0}(Np);\omega^{k-1-c}\big{)}_{\bar{r}}\bigr{)}.\]
_The ghost conjecture aims, under a condition we specify later, to find a "toy model" power series \(G_{\bar{\rho}}(w,t)\) that has the same Newton polygon as \(C_{\bar{r}}(w,t)\) for every evaluation of \(w\), but only depends on the restriction \(\bar{\rho}=\bar{r}|_{\operatorname{I}_{\mathbb{Q}_{p}}}\)._ Here and later, for a power series \(C(t):=1+a_{1}t+a_{2}t^{2}+\dots\in\mathcal{O}\llbracket t\rrbracket\), the Newton polygon \(\operatorname{NP}(C(t))\) is the lower convex hull of the
points \((n,v_{p}(a_{n}))\) for all \(n\). In particular, the slopes of \(\operatorname{NP}(C_{\bar{r}}(w_{k},-))\) are precisely the slopes of \(U_{p}\)-action on \(\operatorname{S}_{k}^{\dagger}(\Gamma_{0}(Np);\omega^{k-1-c})_{\bar{r}}\).
The _key requirement_ for the ghost conjecture is that \(\bar{r}_{p}:=\bar{r}|_{\operatorname{Gal}_{\mathbb{Q}_{p}}}\) is _reducible and generic_, namely, \(c\equiv a+2b+1\bmod(p-1)\) for some \(a\in\{1,\ldots,p-4\}\) and \(b\in\{0,\ldots,p-2\}\), and
* (reducible and nonsplit case) either \(\bar{r}_{p}|_{\operatorname{I}_{\mathbb{Q}_{p}}}\simeq\bar{\rho}:=\begin{pmatrix}\omega_{1}^{a+b+1}&*\neq 0\\ 0&\omega_{1}^{b}\end{pmatrix}\), the _unique_ non-trivial extension in \(\operatorname{H}^{1}(\operatorname{I}_{\mathbb{Q}_{p}},\omega_{1}^{a+1})^{\operatorname{Gal}_{\mathbb{F}_{p}}}=\operatorname{H}^{1}(\operatorname{Gal}_{\mathbb{Q}_{p}},\omega_{1}^{a+1})\) or
* (reducible and split case) \(\bar{r}_{p}|_{\operatorname{I}_{\mathbb{Q}_{p}}}\simeq\bar{\rho}^{\text{ss}} :=\omega_{1}^{a+b+1}\oplus\omega_{1}^{b}\).
We need one more technical input to state our theorem (for which we give a working definition): there exists an integer \(m(\bar{r})\) such that
\[\dim\operatorname{S}_{k}(\Gamma_{0}(Np);\omega^{k-1-c})_{\bar{r}}-\frac{2k}{p- 1}m(\bar{r})\text{ is bounded as }k\to\infty.\]
Such \(m(\bar{r})\) always exists. In fact, we will prove more precise dimension formulas in Definition-Proposition 2.12.
For the \(\bar{\rho}\) above, we defined in [11] a power series \(G_{\bar{\rho}}(w,t)=\sum\limits_{n\geq 0}g_{n}(w)t^{n}\in\mathbb{Z}_{p}[w][t]\) analogous to the ghost series in [1]. (We will quickly recall its definition after the theorem below.)
Our main result is the following. It was essentially conjectured by Bergdall and Pollack [1, 1] (and is slightly adapted in the prequel [11] of this series).
**Theorem 1.3** (Ghost conjecture).: _Assume \(p\geq 11\). Assume that \(\bar{r}:\operatorname{Gal}_{\mathbb{Q}}\to\operatorname{GL}_{2}(\mathbb{F})\) is an absolutely irreducible representation such that \(\bar{r}|_{\operatorname{Gal}_{\mathbb{Q}_{p}}}\) is reducible and that \(\bar{r}|_{\operatorname{I}_{\mathbb{Q}_{p}}}\) is isomorphic to either \(\bar{\rho}\) or \(\bar{\rho}^{\text{ss}}\) above with \(2\leq a\leq p-5\). Then for every \(w_{\star}\in\mathfrak{m}_{\mathbb{C}_{p}}\), the Newton polygon \(\operatorname{NP}\big{(}C_{\bar{r}}(w_{\star},-)\big{)}\) is the same as the Newton polygon \(\operatorname{NP}\big{(}G_{\bar{\rho}}(w_{\star},-)\big{)}\), stretched in both \(x\)- and \(y\)-directions by \(m(\bar{r})\), except possibly for their slope zero parts._
**Remark 1.4**.:
1. We have complete results for the slope zero part; see Theorem 8.7 for details. In fact, our Theorem 8.7 is a much more general statement for the space of automorphic forms of general \(\operatorname{GL}_{2}(\mathbb{Q}_{p})\)-type.
2. It is conjectured that Theorem 1.3 holds for \(a=1\) and \(a=p-4\), and for smaller primes \(p\). We explain the technical difficulties later in Remarks 2.8 and 5.6.
3. In Remark 8.8, we also explain how to extend Theorem 1.3 to the case when the global representation \(\bar{r}\) is reducible. The only difference is some additional dimension computation.
We quickly recall the definition of ghost series \(G_{\bar{\rho}}(w,t)=1+\sum\limits_{n\geq 1}g_{n}(w)t^{n}\in\mathbb{Z}_{p}[w][t]\); see Definition 2.5 and the following discussion for examples and formulas. Assume that \(\bar{r}|_{\operatorname{I}_{\mathbb{Q}_{p}}}\simeq\bar{\rho}\). For each \(k\equiv a+2b+2\bmod(p-1)\) and \(k\geq 2\), define
\[d_{k}^{\mathrm{ur}}:=\tfrac{1}{m(\bar{r})}\dim\operatorname{S}_{k}\big{(}\Gamma_{0}(N)\big{)}_{\bar{r}}\quad\text{and}\quad d_{k}^{\mathrm{Iw}}:=\tfrac{1}{m(\bar{r})}\dim\operatorname{S}_{k}\big{(}\Gamma_{0}(Np)\big{)}_{\bar{r}}.\]
Then we have
\[g_{n}(w)=\prod_{k\equiv a+2b+2\bmod(p-1)}(w-w_{k})^{m_{n}(k)},\]
where the exponents \(m_{n}(k)\) are given by the following recipe
\[m_{n}(k)=\begin{cases}\min\left\{n-d_{k}^{\mathrm{ur}},\,d_{k}^{\mathrm{Iw}}-d_{k}^{\mathrm{ur}}-n\right\}&\text{ if }d_{k}^{\mathrm{ur}}<n<d_{k}^{\mathrm{Iw}}-d_{k}^{\mathrm{ur}},\\ 0&\text{ otherwise.}\end{cases}\]
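For instance, with purely illustrative numbers (not actual dimensions of spaces of modular forms): if for some such weight \(k\) one had \(d_{k}^{\mathrm{ur}}=3\) and \(d_{k}^{\mathrm{Iw}}=10\), then \(m_{n}(k)=\min\{n-3,\,7-n\}\) for \(3<n<7\), so the factor \((w-w_{k})\) would appear in \(g_{4}(w)\) and \(g_{6}(w)\) with multiplicity \(1\) and in \(g_{5}(w)\) with multiplicity \(2\), and would not appear in \(g_{n}(w)\) for any other \(n\).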
We point out that the ghost series \(G_{\bar{\rho}}(w,t)\) depends only on \(\bar{\rho}\), or equivalently the numbers \(p\), \(a\), and \(b\); _it does not depend on \(N\) or the global representation \(\bar{r}\)_.
A very primitive form of the ghost conjecture was first raised in [1], only for the case when \(p=2\) and \(N=1\). Later, similar types of ghost series for other small primes were conjectured in [10, 11]. The general form of the ghost series was first introduced by Bergdall and Pollack [1, 2]. _We emphasize that Bergdall and Pollack's work is of crucial importance to this paper._
In [11], we made an analogous local ghost conjecture which starts in a completely abstract setting: set \(\mathrm{K}_{p}=\mathrm{GL}_{2}(\mathbb{Z}_{p})\); consider _a primitive \(\mathcal{O}\llbracket\mathrm{K}_{p}\rrbracket\)-projective augmented module associated to \(\bar{\rho}\)_, that is, a projective \(\mathcal{O}\llbracket\mathrm{K}_{p}\rrbracket\)-module \(\widetilde{\mathrm{H}}\) on which the \(\mathrm{K}_{p}\)-action extends to a continuous \(\mathrm{GL}_{2}(\mathbb{Q}_{p})\)-action, satisfying certain appropriate conditions (that are naturally satisfied in the automorphic setup). From this, one can similarly define analogues of classical and overconvergent forms, and the main result of this paper is the following analogue of Theorem 1.3 in the abstract setup, which we call the _local ghost theorem_.
**Theorem 1.5** (Local ghost theorem).: _Assume that \(p\geq 11\). Let \(\bar{\rho}=\begin{pmatrix}\omega_{1}^{a+b+1}&*\neq 0\\ 0&\omega_{1}^{b}\end{pmatrix}\) be the reducible, nonsplit, and generic residual representation with \(a\in\{2,\ldots,p-5\}\) and \(b\in\{0,\ldots,p-2\}\) as above. Let \(\widetilde{\mathrm{H}}\) be a primitive \(\mathcal{O}\llbracket\mathrm{K}_{p}\rrbracket\)-projective augmented module of type \(\bar{\rho}\), and let \(\varepsilon\) be a character of \((\mathbb{F}_{p}^{\times})^{2}\) relevant to \(\bar{\rho}\). Then for the characteristic power series \(C_{\widetilde{\mathrm{H}}}^{(\varepsilon)}(w,t)\) of the \(U_{p}\)-action on overconvergent forms associated to \(\widetilde{\mathrm{H}}\), and the combinatorially defined ghost series \(G_{\bar{\rho}}^{(\varepsilon)}(w,t)\), we have, for every \(w_{\star}\in\mathfrak{m}_{\mathbb{C}_{p}}\), \(\mathrm{NP}(G_{\bar{\rho}}^{(\varepsilon)}(w_{\star},-))=\mathrm{NP}(C_{ \widetilde{\mathrm{H}}}^{(\varepsilon)}(w_{\star},-))\)._
Compared to Theorem 1.3, we here allow characters on both \(\mathbb{F}_{p}^{\times}\)-factors of the Iwahori group \(\mathrm{Iw}_{p}=\left(\begin{smallmatrix}\mathbb{Z}_{p}^{\times}&\mathbb{Z}_{p}\\ p\mathbb{Z}_{p}&\mathbb{Z}_{p}^{\times}\end{smallmatrix}\right)\). We refer to Section 2 for more discussion of the undefined notation.
The benefit of extending Theorem 1.3 to the purely local ghost Theorem 1.5 is that the latter works for the "universal" \(\mathcal{O}\llbracket\mathrm{K}_{p}\rrbracket\)-projective augmented module. More precisely, if \(\bar{r}_{p}:\mathrm{Gal}_{\mathbb{Q}_{p}}\to\mathrm{GL}_{2}(\mathbb{F})\) is a residual local Galois representation whose restriction to \(\mathrm{I}_{\mathbb{Q}_{p}}\) is \(\bar{\rho}\), then Paskunas in [12] defined a certain projective envelope \(\widetilde{P}\) of \(\pi(\bar{r}_{p})^{\vee}\) in the category of Pontryagin dual of smooth admissible torsion representations of \(\mathrm{GL}_{2}(\mathbb{Q}_{p})\), so that the endomorphism ring of \(\widetilde{P}\) is isomorphic to the deformation ring \(R_{\bar{r}_{p}}\) of \(\bar{r}_{p}\). The upshot is that there exists an element \(x\) in the maximal ideal of \(R_{\bar{r}_{p}}\) such that for every \(x_{\star}\in\mathfrak{m}^{\prime}\) for \(\mathfrak{m}^{\prime}\) the maximal ideal in some finite extension \(\mathcal{O}^{\prime}\) of \(\mathcal{O}\), \(\widetilde{P}_{\mathcal{O}^{\prime}}/(x-x_{\star})\widetilde{P}_{\mathcal{O}^{ \prime}}\) is always a primitive \(\mathcal{O}^{\prime}\llbracket\mathrm{K}_{p}\rrbracket\)-projective augmented module of type \(\bar{\rho}\). Thus Theorem 1.5 applies and gives the corresponding slopes for overconvergent forms constructed out of \(\widetilde{P}_{\mathcal{O}^{\prime}}/(x-x_{\star})\widetilde{P}_{\mathcal{O}^{ \prime}}\).
Comparing this with the Galois side, we obtain immediately the list of slopes on the triangulline deformation space of \(\bar{r}_{p}\) a la Breuil-Hellmann-Schraen [1]. (Moreover, we observe that this also provides the knowledge of the slopes for triangulline deformation space
of \(\bar{r}_{p}^{\rm ss}\), for free.) Finally, by a bootstrapping argument, our result implies the ghost conjecture for a general automorphic setup using global triangulation results such as [11, 12], in particular Theorem 1.3.
A discussion of the proof of Theorem 1.5 will be given later in § 1.25.
**Remark 1.6**.: We make several quick comments at the philosophical level on the proof.
1. It is essential to work over the entire weight space and harness the integrality of the characteristic power series over the weight ring \(\mathcal{O}\llbracket w\rrbracket\). The pattern of slopes of \(G_{\bar{\rho}}^{(\varepsilon)}(w_{k},-)\) can be very complicated and subtle; see for example the cited proof of Proposition 2.18. The combinatorics involved seems to suggest that working over a single weight \(k\) to treat all slopes would be combinatorially extremely difficult.
2. The bootstrapping step makes use of essentially the full power of the known \(p\)-adic local Langlands correspondence for \(\operatorname{GL}_{2}(\mathbb{Q}_{p})\) (which might be downgraded to only assuming Breuil-Mezard conjecture for \(\operatorname{GL}_{2}(\mathbb{Q}_{p})\)). But the proof of Theorem 1.5 (in the primitive case) does not make use of the \(p\)-adic local Langlands correspondence.
**Remark 1.7**.: We point to several possible extensions of Theorem 1.5.
1. In addition to slopes of \(\operatorname{NP}\big{(}C_{\widetilde{\operatorname{H}}}^{(\varepsilon)}(w_{ k},-)\big{)}\), we may ask, for each root \(\alpha\) of \(C_{\widetilde{\operatorname{H}}}^{(\varepsilon)}(w_{k},-)\), what \(\alpha/p^{v_{p}(\alpha)}\) modulo \(\varpi\) is. It seems to be possible that, if we know this for the \(U_{p}\)-action on the space of "modular forms" with weight \(2\) and character \(\omega^{b}\times\omega^{a+b}\), then we may deduce this answer for all slopes of multiplicity one. Suggested by this, it is natural to ask whether for every root \(\alpha\), one may subtract a fixed value \(\alpha_{0}\in\mathbb{C}_{p}\) (combinatorially determined and independent of \(\widetilde{\operatorname{H}}\)) so that \(\alpha-\alpha_{0}\) is always contained in \(p^{\beta}\mathfrak{m}_{\mathbb{C}_{p}}\) for some maximal possible \(\beta\). Translating this to the Galois side, we conjecture perhaps overly optimistically that, when \(\bar{r}|_{\operatorname{Gal}_{\mathbb{Q}_{p}}}\) is reducible and generic, each irreducible component of every Kisin's semistabelian deformation space has Breuil-Mezard multiplicity \(1\). In fact, this can be proved in the crystabelian case with wild inertia type, in the forthcoming work of [1].
2. It is very natural to ask whether the method of this paper extends to the case when \(\bar{r}|_{\operatorname{Gal}_{\mathbb{Q}_{p}}}\) is irreducible, or even non-generic. Our most optimistic answer is "maybe", but we have not carefully investigated this case. The key difference is that, when \(\bar{r}|_{\operatorname{Gal}_{\mathbb{Q}_{p}}}\) is irreducible and generic, the smallest slope at any classical point seems to depend on the automorphic data. However, some initial computation suggests that although \(\operatorname{NP}(C_{\widetilde{\operatorname{H}}}^{(\varepsilon)}(w_{ \star},-))\) can be complicated, if we only consider the convex hull of points whose horizontal coordinates are even integers, then there might be a hope of an analogue of ghost series. Analogous to (1), if we are extremely optimistic, we would make a wild conjecture that, when \(\bar{r}|_{\operatorname{Gal}_{\mathbb{Q}_{p}}}\) is irreducible and generic, each irreducible component of every Kisin's semistabelian deformation space has Breuil-Mezard multiplicity \(2\).
3. In [10], Buzzard proposed an algorithm which is expected to produce slopes of modular forms inductively, at least under the _Buzzard-regular_ condition. We will not include a discussion of this, but leave it to the interested readers. We only point out that this has been numerically verified extensively; see [1, Fact 3.1].
The logical process and relations with various conjectures we address in this paper are summarized in the following diagram:
We now discuss these corollaries.
### Application A: Breuil-Buzzard-Emerton conjecture
Let \(\bar{r}_{p}:\operatorname{Gal}_{\mathbb{Q}_{p}}\to\operatorname{GL}_{2}(\mathbb{F})\) be a residual local Galois representation, and let \(R^{\square}_{\bar{r}_{p}}\) denote the framed deformation ring. For \(k\in\mathbb{Z}_{\geq 2}\) and a finite-image character \(\underline{\psi}=\psi_{1}\times\psi_{2}:(\mathbb{Z}_{p}^{\times})^{2}\to \mathcal{O}^{\times}\), Kisin [10] defines a quotient \(R^{\square,k-1,\underline{\psi}}_{\bar{r}_{p}}\) of \(R^{\square}_{\bar{r}_{p}}\) parametrizing lifts of \(\bar{r}_{p}\) that are potentially crystalline with Hodge-Tate weights \((0,k-1)\) and inertial type \(\underline{\psi}\).
For each homomorphism \(x^{*}:R^{\square,k-1,\underline{\psi}}_{\bar{r}_{p}}\to E^{\prime}\) with \(E^{\prime}\) a finite extension of \(E\), let \(\mathcal{V}_{x}\) denote the deformation of \(\bar{r}_{p}\) at \(x\). Then the \(2\)-dimensional space \(\mathbb{D}_{\mathrm{pcrys}}(\mathcal{V}_{x})\) carries \(E^{\prime}\)-linear commuting actions of \(\operatorname{Gal}(\mathbb{Q}_{p}(\mu_{p^{\infty}})/\mathbb{Q}_{p})\) and the crystalline Frobenius \(\phi\) (see Notation 7.1 for the definition of \(\mathbb{D}_{\mathrm{pcrys}}(\mathcal{V}_{x})\)).
The following [1, Conjecture 4.1.1] was initially conjectured by Breuil, Buzzard, and Emerton in their personal correspondences around 2005.
**Theorem 1.9** (Breuil-Buzzard-Emerton conjecture).: _Assume that \(p\geq 11\) and that \(\bar{r}_{p}\) is reducible and very generic, that is, \(\bar{r}_{p}|_{\mathrm{I}_{\mathbb{Q}_{p}}}\simeq\bar{\rho}\) or \(\bar{\rho}^{\mathrm{ss}}\) with \(\bar{\rho}\) defined above and \(a\in\{2,\dots,p-5\}\) and \(b\in\{0,\dots,p-2\}\). Let \(k\), \(\underline{\psi}\), \(R^{\square,k-1,\underline{\psi}}_{\bar{r}_{p}}\), and \(x^{*}\) be as above. Let \(m\) denote the minimal positive integer such that \(\psi_{1}\psi_{2}^{-1}\) is trivial on \((1+p^{m}\mathbb{Z}_{p})^{\times}\), and let \(\alpha\) be an eigenvalue of \(\phi\) acting on the subspace of \(\mathbb{D}_{\mathrm{pcrys}}(\mathcal{V}_{x})\) where \(\operatorname{Gal}(\mathbb{Q}_{p}(\mu_{p^{\infty}})/\mathbb{Q}_{p})\) acts through \(\psi_{1}\). Then_
\[v_{p}(\alpha)\in\begin{cases}\big{(}\frac{a}{2}+\mathbb{Z}\big{)}\cup\mathbb{Z }&\text{ when }m=1,\\ \frac{1}{(p-1)p^{m-1}}\mathbb{Z}&\text{ when }m\geq 2.\end{cases}\]
This is proved in Corollary 7.10, in fact as a corollary of Theorem 7.6 which identifies all possible slopes on the trianguline deformation spaces with slopes of the Newton polygon of \(G^{(\varepsilon)}_{\bar{\rho}}(w,t)\). The idea of the proof is essentially explained in the paragraph after Theorem 1.5, namely, that applying Theorem 1.5 to the universal \(\operatorname{GL}_{2}(\mathbb{Q}_{p})\)-representation defined by Paskunas shows that the slopes of the crystalline Frobenius actions are exactly determined by the \(U_{p}\)-slopes on the corresponding overconvergent forms, which are in turn equal to the slopes of \(G_{\bar{\rho}}^{(\varepsilon)}(w,t)\). Now the integrality statement follows from a (not-at-all-trivial) property of ghost series [11, Corollaries 4.14 and 5.24].
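To make the constraint concrete (the numbers are chosen purely for illustration): for \(p=13\) and \(a=5\), the case \(m=1\) forces \(v_{p}(\alpha)\in\big{(}\tfrac{5}{2}+\mathbb{Z}\big{)}\cup\mathbb{Z}\), so the valuation is either an integer or has fractional part \(\tfrac{1}{2}\); for \(m=2\), the theorem only forces \(v_{p}(\alpha)\in\tfrac{1}{156}\mathbb{Z}\), since \((p-1)p^{m-1}=12\cdot 13=156\).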
**Remark 1.10**.:
1. What was originally conjectured in [1, Conjecture 4.1.1] also includes non-generic cases, which our method cannot treat at the moment.
2. There have been several attempts [1, 1, 1, 1, 1, 1] at various versions of this theorem, based on the mod \(p\) local Langlands correspondence. In fact, their goals are much more ambitious: classify the reduction of all crystalline or crystabelian representations with slopes less than or equal to a particular number, typically less than or equal to \(3\). In their range, their work even addresses non-generic cases that we cannot touch. Our advantage is to be able to treat all possible slopes.
3. Analogous to Theorem 1.9, Jiawei An obtained some partial results towards the \(p\)-adic valuations of \(\mathcal{L}\)-invariants of semistable deformations of \(\bar{\rho}\).
### Application B: Gouvea's \(\left\lfloor\frac{k-1}{p+1}\right\rfloor\)-conjecture
In the 1990s, Gouvea numerically computed the \(T_{p}\)-slopes in \(S_{k}(\Gamma_{0}(N))\) as \(k\to\infty\) and found in [1, SS 4] that, almost always, the slopes are less than or equal to \(\left\lfloor\frac{k-1}{p+1}\right\rfloor\).
Interpreting this using the framework of the (\(p\)-adic local) Langlands program, we should consider instead the \(T_{p}\)-slopes on \(S_{k}(\Gamma_{0}(N))_{\bar{r}}\) (or equivalently the lesser \(U_{p}\)-slopes on old forms in \(S_{k}(\Gamma_{0}(pN))_{\bar{r}}\) after \(p\)-stabilization) when localized at a residual Galois representation \(\bar{r}\) as in SS 1.2. If we assume further that \(\bar{r}|_{\mathbb{I}_{\mathbb{Q}_{p}}}\) is isomorphic to \(\bar{\rho}\) or \(\bar{\rho}^{\text{ss}}\) as above, it is expected that the slopes are always less than or equal to \(\left\lfloor\frac{k-1}{p+1}\right\rfloor\).
This conjecture also has its Galois theoretic counterpart, which seems more intrinsic. Roughly speaking, this folklore conjecture asserts that for any crystalline representation \(V\) of Hodge-Tate weight \((0,k-1)\), if the \(p\)-adic valuation of the trace of the \(\phi\)-action on \(\mathbb{D}_{\text{crys}}(V)\) is strictly larger than \(\left\lfloor\frac{k-1}{p+1}\right\rfloor\), then \(V\) has an irreducible reduction.
Our following result partially answers the contrapositive statement.
**Theorem 1.12** (Gouvea's \(\left\lfloor\frac{k-1}{p+1}\right\rfloor\)-conjecture).: _Assume \(p\geq 11\). Let \(\bar{r}_{p}\) be a residual local Galois representation such that \(\bar{r}_{p}|_{\mathbb{I}_{\mathbb{Q}_{p}}}\simeq\bar{\rho}\) or \(\bar{\rho}^{\text{ss}}\) with \(\bar{\rho}\) defined above and \(a\in\{2,\ldots,p-5\}\) and \(b\in\{0,\ldots,p-2\}\). Let_
\[\underline{\psi}:(\mathbb{Z}_{p}^{\times})^{2}\twoheadrightarrow\Delta^{2} \xrightarrow{\omega^{-s_{\varepsilon}}\times\omega^{-s_{\varepsilon}}} \mathcal{O}^{\times}\]
_be a character with \(s_{\varepsilon}\in\{0,\ldots,p-2\}\), and fix \(k\in\mathbb{Z}_{\geq 2}\) such that \(k\equiv a+2s_{\varepsilon}\bmod(p-1)\)._
_Let \(R_{\bar{r}_{p}}^{\square,k-1,\underline{\psi}}\) be Kisin's crystabelian deformation ring as above and let \(x^{*}:R_{\bar{r}_{p}}^{\square,k-1,\underline{\psi}}\to E^{\prime}\) be a homomorphism. Then for the trace \(a_{p,x}\) of the \(\phi\)-action on \(\mathbb{D}_{\text{pcrys}}(\mathcal{V}_{x})\), we have_
\[v_{p}(a_{p,x})\leq\Big{\lfloor}\frac{k-1-\min\{a+1,p-2-a\}}{p+1}\Big{\rfloor}.\]
This is proved in Corollary 7.10.
**Remark 1.13**.:
1. We in fact proved a stronger statement with bound \(\left\lfloor\frac{k-1-\min\{a+1,p-2-a\}}{p+1}\right\rfloor\) as opposed to \(\left\lfloor\frac{k-1}{p+1}\right\rfloor\). The correct way to interpret this is as follows: consider a crystalline representation \(V\) where one of the Frobenius eigenvalues has slope \(\left\lfloor\frac{k-1}{p+1}\right\rfloor=\frac{k-1-c}{p+1}\) with \(c\in\{0,\ldots,p\}\); then the reduction of \(V\) corresponds to the case when \(a=c-1\) or \(a=p-2-c\). Such a statement might even make sense when "\(a=-1\) or \(a=p-2\)", except our theorem will not be able to address this.
2. The original Galois-theoretic version of Gouvea's conjecture was proved with the weaker bound \(\left\lfloor\frac{k-1}{p-1}\right\rfloor\) by Berger-Li-Zhu [1] and the bound \(\left\lfloor\frac{k-1}{p}\right\rfloor\) by Bergdall-Levin [1]. Both results essentially use tools from \(p\)-adic Hodge theory: the former uses Wach modules and the latter uses Kisin modules.
3. The estimate of the slopes of the crystalline Frobenius \(\phi\) comes from the estimate of the slopes of the ghost series, which turns out to involve a rather subtle inequality on the sums of digits of the \(p\)-adic expansions of certain numbers. See [13, Proposition 4.28] for the non-formal part of the proof.
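To give a sense of the strength of the bound in Theorem 1.12 (numbers chosen purely for illustration): for \(p=13\), \(a=5\), and \(k=101\) (so that \(k\equiv a+2s_{\varepsilon}\bmod 12\) with \(s_{\varepsilon}=0\)), the theorem gives \(v_{p}(a_{p,x})\leq\big{\lfloor}\frac{100-6}{14}\big{\rfloor}=6\), slightly sharper than Gouvea's bound \(\big{\lfloor}\frac{100}{14}\big{\rfloor}=7\).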
### Application C: Finiteness of irreducible components of eigencurves
Near the end of the introduction of the seminal paper [11] of Coleman and Mazur, they listed many far-reaching open questions; among them, one particularly intriguing question is whether the eigencurve has finitely many irreducible components, as somewhat "suggested" by the fact that all non-Hida components have infinite degree over the weight space. As far as we understand, almost nothing was known towards this question. As a corollary of our main theorem, we provide what we believe to be the first positive theoretical evidence towards this question, namely, the eigencurve associated to an \(\bar{r}\) that is reducible and very generic at \(p\) has finitely many irreducible components.
Let us be more precise. Keep the notation as in Theorem 1.3. Let \(\mathcal{W}:=(\operatorname{Spf}\mathcal{O}[\![w]\!])^{\operatorname{rig}}\) denote the rigid analytic weight open unit disk and let \(\mathbb{G}_{m}^{\operatorname{rig}}\) denote the rigid analytification of \(\mathbb{G}_{m,\mathbb{Q}_{p}}\). Let \(\operatorname{Spc}(\bar{r})\) denote the zero locus of \(C_{\bar{r}}(w,t)\), as a rigid analytic subspace of \(\mathbb{G}_{m}^{\operatorname{rig}}\times\mathcal{W}\); it carries a natural weight map wt to \(\mathcal{W}\). By Hida theory, this spectral curve is the disjoint union \(\operatorname{Spc}(\bar{r})=\operatorname{Spc}(\bar{r})_{=0}\sqcup \operatorname{Spc}(\bar{r})_{>0}\), where \(\operatorname{Spc}(\bar{r})_{=0}\) (possibly empty) is the component with slope zero, corresponding to the Hida family. It is well known that \(\operatorname{Spc}(\bar{r})_{=0}\) is finite over \(\mathcal{W}\), and hence has finitely many components. We prove the following in Corollary 9.7.
**Theorem 1.15**.: _Assume \(p\geq 11\) and that \(\bar{r}|_{\operatorname{Gal}_{\mathbb{Q}_{p}}}\) is reducible and generic with \(2\leq a\leq p-5\). Then \(\operatorname{Spc}(\bar{r})_{>0}\) has finitely many irreducible components. In fact, each irreducible component \(\mathcal{Z}\) of \(\operatorname{Spc}(\bar{r})_{>0}\) is the zero locus of a power series \(C_{\mathcal{Z}}(w,t)\in\mathcal{O}[\![w,t]\!]\) such that for every \(w_{\star}\in\mathfrak{m}_{\mathbb{C}_{p}}\), the \(\operatorname{NP}\big{(}C_{\mathcal{Z}}(w_{\star},-)\big{)}\) is the same as \(\operatorname{NP}\big{(}G_{\bar{\rho}}(w_{\star},-)\big{)}\) with the slope-zero part removed, and stretched in both \(x\)- and \(y\)-directions by some constant \(m(\mathcal{Z})\)._
In fact, what we prove is that, for every power series \(C(w,t)\) whose positive slopes agree with the ghost series (up to a fixed multiplicity), any irreducible factor of \(C(w,t)\) has the same property; see Theorem 9.6.
### Application D: Gouvea-Mazur conjecture
In the pioneering work of Gouvea and Mazur [1], they investigated how slopes of (classical) modular forms vary when the weight \(k\) changes \(p\)-adically. Their extensive numerical data suggests that when the weights \(k_{1}\) and \(k_{2}\) are \(p\)-adically close, the slopes of modular forms of weights \(k_{1}\) and \(k_{2}\) agree. More precisely, they made the following conjecture.
**Conjecture 1.17** (Gouvea-Mazur).: _There is a function \(M(n)\) linear in \(n\) such that if \(k_{1},k_{2}>2n+2\) and \(k_{1}\equiv k_{2}\bmod(p-1)p^{M(n)}\), then the sequences of \(U_{p}\)-slopes (with multiplicities) on \(\operatorname{S}_{k_{1}}(\Gamma_{0}(Np))\) and \(\operatorname{S}_{k_{2}}(\Gamma_{0}(Np))\) agree up to slope \(n\)._
Originally, Gouvea and Mazur made this conjecture with \(M(n)=n\), but Buzzard and Calegari [1] found explicit counterexamples. The current modified version, Conjecture 1.17, is still expected by experts. The only proved result, due to Wan [10], has \(M(n)\) quadratic in \(n\).
It is natural to consider this conjecture for the \(\bar{r}\)-localized subspaces \(\mathrm{S}_{k}(\Gamma_{0}(Np))_{\bar{r}}\). Under the same hypothesis as above, combining Theorem 1.3 with a combinatorial result on ghost series by Rufei Ren [11], we prove in Theorem 8.10 the following variant of the Gouvea-Mazur conjecture.
**Theorem 1.18**.: _Assume \(p\geq 11\) and that \(\bar{r}:\mathrm{Gal}_{\mathbb{Q}}\to\mathrm{GL}_{2}(\mathbb{F})\) is an absolutely irreducible representation such that \(\bar{r}_{p}|_{\mathrm{I}_{\mathbb{Q}_{p}}}\) is isomorphic to \(\bar{\rho}\) or \(\overline{\rho}^{\mathrm{ss}}\) above with \(2\leq a\leq p-5\). Then for weights \(k_{1},k_{2}>2n+2\) such that \(k_{1}\equiv k_{2}\equiv a+2b+1\bmod(p-1)\) and \(v_{p}(k_{1}-k_{2})\geq n+5\), the sequence of \(U_{p}\)-slopes (with multiplicities) on \(\mathrm{S}_{k_{1}}(\Gamma_{0}(Np))_{\bar{r}}\) and \(\mathrm{S}_{k_{2}}(\Gamma_{0}(Np))_{\bar{r}}\) agree up to slope \(n\)._
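As a concrete instance (numbers chosen purely for illustration): for \(p=13\) and \(n=3\), the theorem says that any two weights \(k_{1},k_{2}>8\) in the fixed residue class \(a+2b+1\bmod 12\) with \(k_{1}\equiv k_{2}\bmod 13^{8}\) have the same \(U_{p}\)-slopes (with multiplicities) up to slope \(3\) on the \(\bar{r}\)-localized spaces.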
### Application E: Gouvea's slope distribution conjecture
For slopes of modular forms, Gouvea made extensive numerical computations. In his paper [10], titled "Where the slopes are", he made the following intriguing conjecture.
**Conjecture 1.20**.: _Fix a tame level \(N\) (relatively prime to \(p\)). For each \(k\), write \(\alpha_{1}(k),\dots,\alpha_{d}(k)\) for the list of \(U_{p}\)-slopes on \(\mathrm{S}_{k}(\Gamma_{0}(Np))\), and let \(\mu_{k}\) denote the uniform probability measure of the multiset \(\{\frac{\alpha_{1}(k)}{k-1},\dots,\frac{\alpha_{d}(k)}{k-1}\}\subset[0,1]\). Then the measure \(\mu_{k}\) weakly converges to_
\[\frac{1}{p+1}\delta_{[0,\frac{1}{p+1}]}+\frac{1}{p+1}\delta_{[\frac{p}{p+1},1] }+\frac{p-1}{p+1}\delta_{\frac{1}{2}}, \tag{1.20.1}\]
_where \(\delta_{[a,b]}\) denotes the uniform probability measure on the interval \([a,b]\), and \(\delta_{\frac{1}{2}}\) is the Dirac measure at \(\frac{1}{2}\)._
The symmetry between \(\delta_{[0,\frac{1}{p+1}]}\) and \(\delta_{[\frac{p}{p+1},1]}\) follows from the usual \(p\)-stabilization process, namely the old form slopes can be paired so that the sum of each pair is \(k-1\). The Dirac measure at \(\frac{1}{2}\) corresponds to the newform slopes, where the \(U_{p}\)-eigenvalues are \(\pm p^{\frac{k-2}{2}}\).
In [1], the authors defined abstract ghost series and showed that the slopes of the Newton polygon of an abstract ghost series satisfy an analogue of Gouvea's distribution conjecture. So combining their work and Theorem 1.3, we obtain the following. (See Theorem 8.11.)
**Theorem 1.21**.: _Assume \(p\geq 11\) and that \(\bar{r}:\mathrm{Gal}_{\mathbb{Q}}\to\mathrm{GL}_{2}(\mathbb{F})\) is an absolutely irreducible representation such that \(\bar{r}_{p}|_{\mathrm{I}_{\mathbb{Q}_{p}}}\) is isomorphic to \(\bar{\rho}\) or \(\overline{\rho}^{\mathrm{ss}}\) above with \(2\leq a\leq p-5\). For \(k\equiv a+2b+2\bmod(p-1)\), let \(\alpha_{1}(k),\alpha_{2}(k),\dots\) denote the \(U_{p}\)-slopes of \(\mathrm{S}_{k}(\Gamma_{0}(Np))_{\bar{r}}\) in increasing order, and let \(\mu_{k}\) denote the probability measure for the set \(\{\frac{\alpha_{1}(k)}{k-1},\frac{\alpha_{2}(k)}{k-1},\dots\}\). Let \(m(\bar{r})\) be the mod-\(p\)-multiplicity defined in SS 1.2. Then_
1. _When_ \(i\leq\dim\mathrm{S}_{k}(\Gamma_{0}(N))_{\bar{r}}\)_, we have_ \(\alpha_{i}(k)=\frac{p-1}{2m(\bar{r})}\cdot i+O(\log k)\) _when_ \(\bar{r}_{p}|_{\mathrm{I}_{\mathbb{Q}_{p}}}\simeq\bar{\rho}\)_, and_ \(\alpha_{i}(k)=\frac{p-1}{m(\bar{r})}\cdot i+O(\log k)\) _when_ \(\bar{r}_{p}|_{\mathrm{I}_{\mathbb{Q}_{p}}}\simeq\bar{\rho}^{\mathrm{ss}}\)_._
2. _As_ \(k\to\infty\) _while keeping_ \(k\equiv a+2b+2\bmod(p-1)\)_, the measure_ \(\mu_{k}\) _weakly converges to the probability measure (_1.20.1_)._
### Application F: a refined Coleman's spectral halo conjecture
In Coleman and Mazur's foundational paper [12] on eigencurves, they also raised an important conjecture regarding the behavior of the associated eigencurve near the boundary of the weight disks:
they conjectured that the eigencurve is an infinite disjoint union of annuli such that each irreducible component is finite and flat over the weight annulus; this was largely inspired by Emerton's thesis [10]. The first proved result in this direction was by Buzzard and Kilford [1], who proved this result when \(N=1\) and \(p=2\). Some additional examples when \(p\) is small were subsequently provided [12, 13, 14, 15]. The first result for a more general situation was obtained by Daqing Wan together with the first and third authors in [16], which, roughly speaking, proved the following.
**Theorem 1.23**.: _Let \(C_{D}(w,t)\) denote the characteristic power series analogously defined as in SS 1.2 but for automorphic forms on a definite quaternion algebra \(D\) over \(\mathbb{Q}\) that is split at \(p\). Let \(\operatorname{Spc}(D)\) denote the zero locus of \(C_{D}(w,t)\) in \(\mathcal{W}\times\mathbb{G}_{m}^{\operatorname{rig}}\), and_
\[\mathcal{W}_{(0,1)}=\left\{w_{\star}\in\mathcal{W}\;\big{|}\;v_{p}(w_{\star}) \in(0,1)\right\}\quad\text{and}\quad\operatorname{Spc}_{(0,1)}(D)=\operatorname {Spc}(D)\cap\operatorname{wt}^{-1}(\mathcal{W}_{(0,1)}).\]
_Then \(\operatorname{Spc}_{(0,1)}(D)\) is an infinite disjoint union \(X_{0}\bigsqcup X_{(0,1)}\bigsqcup X_{1}\bigsqcup X_{(1,2)}\bigsqcup\cdots\) such that_
1. _for each point_ \((w_{\star},a_{p})\in X_{I}\) _for_ \(I=[n,n]\) _(abbreviated as_ \(n\)_) or_ \((n,n+1)\)_, we have_ \[v_{p}(a_{p})\in(p-1)\cdot v_{p}(w_{\star})\cdot I,\]
2. _the weight map_ \(\operatorname{wt}:X_{I}\to\mathcal{W}_{(0,1)}\) _is finite and flat._
This was later generalized to the Hilbert case when \(p\) splits, by Johansson-Newton [16], and by Rufei Ren and the fourth author [11]. The case corresponding to modular forms, namely the "original Coleman-Mazur" conjecture, was established by Hansheng Diao and Zijian Yao in [15]. Unfortunately, Theorem 1.23 and all these generalizations do not give further information on the slope ratios \(v_{p}(a_{p})/v_{p}(w_{\star})\) inside the open intervals \((p-1)\cdot(n,n+1)\). When \(\bar{r}\) satisfies the conditions of our ghost theorem, the slopes of ghost series automatically give the following refined version of the above theorem. (See Theorem 8.12.)
**Theorem 1.24**.: _Assume \(p\geq 11\) and that \(\bar{r}:\operatorname{Gal}_{\mathbb{Q}}\to\operatorname{GL}_{2}(\mathbb{F})\) is an absolutely irreducible representation such that \(\bar{r}_{p}|_{\operatorname{I}_{\mathbb{Q}_{p}}}\) is isomorphic to \(\bar{\rho}\) above with \(2\leq a\leq p-5\). Let \(\operatorname{Spc}(\bar{r})\) denote the zero locus of \(C_{\bar{r}}(w,t)\) inside \(\mathcal{W}\times\mathbb{G}_{m}^{\operatorname{rig}}\), and put \(\operatorname{Spc}(\bar{r})_{(0,1)}=\operatorname{Spc}(\bar{r})\cap \operatorname{wt}^{-1}(\mathcal{W}_{(0,1)})\). Then \(\operatorname{Spc}(\bar{r})_{(0,1)}\) is a disjoint union \(Y_{1}\bigsqcup Y_{2}\bigsqcup\cdots\) such that_
1. _for each point_ \((w_{\star},a_{p})\in Y_{n}\)_,_ \(v_{p}(a_{p})=(\deg g_{n}(w)-\deg g_{n-1}(w))\cdot v_{p}(w_{\star})\)_, and_
2. _the weight map_ \(\operatorname{wt}:Y_{n}\to\mathcal{W}_{(0,1)}\) _is finite and flat of degree_ \(m(\bar{r})\)_._
A similar result can be stated when \(\bar{r}\) is split; we refer to Theorem 8.12 for the details.
### Overview of the proof of Theorem 1.5
There are two main inputs in proving Theorem 1.5. We explain these first. Recall that \(\operatorname{K}_{p}=\operatorname{GL}_{2}(\mathbb{Z}_{p})\); we may reduce to the case when \(\bar{\rho}=\begin{pmatrix}\omega_{1}^{a+1}&*\neq 0\\ &1\end{pmatrix}\), namely \(b=0\). Theorem 1.5 involves the following local data: let \(\widetilde{\operatorname{H}}\) be the projective envelope of \(\operatorname{Sym}^{a}\mathbb{F}^{\oplus 2}\) as a right \(\mathcal{O}[\![\operatorname{K}_{p}]\!]\)-module, and we extend the \(\operatorname{K}_{p}\)-action to a continuous (right) action by \(\operatorname{GL}_{2}(\mathbb{Q}_{p})\) so that \(\begin{pmatrix}p&0\\ 0&p\end{pmatrix}\) acts trivially. Then for each character \(\psi\) of \((\mathbb{F}_{p}^{\times})^{2}\) and a character \(\varepsilon_{1}\) of \(\mathbb{F}_{p}^{\times}\), we may define spaces of abstract classical
and overconvergent forms
\[\begin{split}\mathrm{S}_{k}^{\mathrm{Iw}}(\psi)&=\mathrm{S}_{\widetilde{\mathrm{H}},k}^{\mathrm{Iw}}(\psi)\ :=\mathrm{Hom}_{\mathcal{O}[\![\mathrm{Iw}_{p}]\!]}\,\big{(}\widetilde{\mathrm{H}},\,\mathrm{Sym}^{k-2}\,\mathcal{O}^{\oplus 2}\otimes\psi\big{)},\\ \mathrm{S}_{k}^{\mathrm{ur}}(\varepsilon_{1})&=\mathrm{S}_{\widetilde{\mathrm{H}},k}^{\mathrm{ur}}(\varepsilon_{1})\ :=\mathrm{Hom}_{\mathcal{O}[\![\mathrm{K}_{p}]\!]}\,\big{(}\widetilde{\mathrm{H}},\,\mathrm{Sym}^{k-2}\,\mathcal{O}^{\oplus 2}\otimes\varepsilon_{1}\circ\det\big{)},\\ \mathrm{S}_{k}^{\dagger}(\psi)&=\mathrm{S}_{\widetilde{\mathrm{H}},k}^{\dagger}(\psi)\ :=\mathrm{Hom}_{\mathcal{O}[\![\mathrm{Iw}_{p}]\!]}\,\big{(}\widetilde{\mathrm{H}},\,\mathcal{O}\langle z\rangle\otimes\psi\big{)}.\end{split} \tag{1.25.1}\]
These abstract and overconvergent forms behave exactly as their automorphic counterparts, equipped with the corresponding \(U_{p}\)-operators, \(T_{p}\)-operators, Atkin-Lehner involutions, and theta maps. (See SS 2.4 and Proposition 2.11.)
**Main input I:**\(p\)-stabilization process; see SS 3.3 and Proposition 3.5. When \(\psi=\tilde{\varepsilon}_{1}=\varepsilon_{1}\times\varepsilon_{1}\), the standard \(p\)-stabilization process can be summarized by the following diagram
Here the space \(\mathrm{S}_{\widetilde{\mathrm{H}},k}^{\mathrm{ur}}(\varepsilon_{1})\) carries a natural \(T_{p}\)-action and \(\mathrm{S}_{\widetilde{\mathrm{H}},k}^{\mathrm{Iw}}(\tilde{\varepsilon}_{1})\) carries a \(U_{p}\)-action and an Atkin-Lehner involution. The maps \(\iota_{1},\iota_{2},\mathrm{proj}_{1},\mathrm{proj}_{2}\) are the natural ones. Write \(d_{k}^{\mathrm{ur}}(\varepsilon_{1}):=\mathrm{rank}_{\mathcal{O}}\,\mathrm{S}_ {k,\widetilde{\mathrm{H}}}^{\mathrm{ur}}(\varepsilon_{1})\) and \(d_{k}^{\mathrm{Iw}}(\tilde{\varepsilon}_{1}):=\mathrm{rank}_{\mathcal{O}}\, \mathrm{S}_{k,\widetilde{\mathrm{H}}}^{\mathrm{Iw}}(\tilde{\varepsilon}_{1})\). The key observation is the equality:
\[U_{p}(\varphi)=\iota_{2}(\mathrm{proj}_{1}(\varphi))-\mathrm{AL}(\varphi)\quad \text{for all }\varphi\in\mathrm{S}_{\widetilde{\mathrm{H}},k}^{\mathrm{Iw}}(\tilde{ \varepsilon}_{1}). \tag{1.25.2}\]
Under the usual power basis, the matrix of \(U_{p}\) on \(\mathrm{S}_{\widetilde{\mathrm{H}},k}^{\mathrm{Iw}}(\tilde{\varepsilon}_{1})\) is then decomposed as the sum of
* a matrix with \(\mathrm{rank}\leq d_{k}^{\mathrm{ur}}(\varepsilon_{1})\approx\frac{1}{p+1}d_{k }^{\mathrm{Iw}}(\tilde{\varepsilon}_{1})\), and
* an antidiagonal matrix for the Atkin-Lehner involution.
Essentially this observation alone already shows that the characteristic power series of the upper-left \(n\times n\) submatrix of the \(U_{p}\)-action on abstract overconvergent forms is divisible by the coefficient \(g_{n}(w)\) of the ghost series (but in a larger ring \(\mathcal{O}\langle w/p\rangle\)); see Corollary 3.10. Unfortunately, we need much more work to control the other minors of the matrix of \(U_{p}\).
**Main input II:** halo estimate (for the center of the weight disk); see Lemma 3.14 and the more refined version in Corollary 3.27.
As a right \(\mathcal{O}[\![\mathrm{Iw}_{p}]\!]\)-module, we may write
\[\widetilde{\mathrm{H}}=e_{1}\mathcal{O}[\![\mathrm{Iw}_{p}]\!]\otimes_{ \mathcal{O}[(\mathbb{F}_{p}^{\times})^{2}],1\otimes\omega^{a}}\mathcal{O} \oplus e_{2}\mathcal{O}[\![\mathrm{Iw}_{p}]\!]\otimes_{\mathcal{O}[(\mathbb{F} _{p}^{\times})^{2}],\omega^{a}\otimes 1}\mathcal{O}.\]
Thus, there is a natural power basis of \(\mathrm{S}_{k}^{\dagger}(\psi)\) of the form
\[e_{1}^{*}z^{s_{\psi,1}},\,e_{1}^{*}z^{s_{\psi,1}+p-1},\,e_{1}^{*}z^{s_{\psi,1}+ 2(p-1)},\,\cdots,e_{2}^{*}z^{s_{\psi,2}},\,e_{2}^{*}z^{s_{\psi,2}+p-1},\,e_{2} ^{*}z^{s_{\psi,2}+2(p-1)},\,\cdots,\]
for some integers \(s_{\psi,1},s_{\psi,2}\in\{0,\ldots,p-2\}\) chosen to match the characters; see SS 2.10 for details. It is natural to consider the \(U_{p}\)-action with respect to this basis and the associated Hodge polygon. Some time between the two papers [20] and [21], the authors realized that this estimate is not sharp enough. One should instead use the so-called Mahler basis, or rather _the modified Mahler basis_, which amounts to replacing the monomials above by the following polynomials:
\[f_{1}(z)=\frac{z^{p}-z}{p},\quad f_{\ell+1}(z)=\frac{f_{\ell}(z)^{p}-f_{\ell}(z) }{p}\quad\text{for }\ell\geq 1;\]
\[\text{for }n=n_{0}+pn_{1}+p^{2}n_{2}+\cdots,\quad\text{define }\mathbf{m}_{n}(z):=z^{n_{0}}f_{1}(z)^{n_{1}}f_{2}(z)^{n_{2}}\cdots.\]
Then \(\{\mathbf{m}_{n}(z):n\in\mathbb{Z}_{\geq 0}\}\) forms a basis of the space of continuous functions on \(\mathbb{Z}_{p}\) with values in \(\mathbb{Z}_{p}\), denoted by \(\mathcal{C}^{0}(\mathbb{Z}_{p};\mathbb{Z}_{p})\). It turns out that the estimate for the \(U_{p}\)-operator with respect to this basis is slightly sharper than the estimate for the power basis. This improvement is the other key to our proof.
We make two remarks here: first, our modified Mahler basis is an approximation of the usual Mahler basis \(\binom{z}{n}\); ours has the advantage that each basis element is an eigenvector for the action of \(\mathbb{F}_{p}^{\times}\); second, compared to the estimate in [10], we also need to treat some "pathological cases", e.g. coefficients whose degree is close to a large power of \(p\). Such "distractions" complicate our proof a lot.
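To make the recipe concrete (a direct consequence of the definitions above): for \(n=p+2=2+1\cdot p\) we get \(\mathbf{m}_{p+2}(z)=z^{2}f_{1}(z)=z^{2}\cdot\frac{z^{p}-z}{p}\), a polynomial of degree \(p+2\); for \(n=p^{2}+1=1+0\cdot p+1\cdot p^{2}\) we get \(\mathbf{m}_{p^{2}+1}(z)=z\,f_{2}(z)\), of degree \(p^{2}+1\). In general \(\deg\mathbf{m}_{n}(z)=n\), and each \(f_{\ell}\), hence each \(\mathbf{m}_{n}\), maps \(\mathbb{Z}_{p}\) to \(\mathbb{Z}_{p}\) by Fermat's little theorem applied inductively.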
With the two main input I and II discussed, we now sketch the proof of Theorem 1.5. A more detailed summary can be found at the beginning of Section 4.
In a rough form, Theorem 1.5 says that \(C_{\widetilde{\mathrm{H}}}^{(\varepsilon)}(w,t)=1+\sum_{n\geq 1}c_{n}(w)t^{n}\) and \(G_{\bar{\rho}}^{(\varepsilon)}(w,t)=1+\sum_{n\geq 1}g_{n}(w)t^{n}\) are "close" to each other. This leads us to the following.
* (Lagrange interpolation) For each \(n\), we formally apply Lagrange interpolation to \(c_{n}(w)\) relative to the zeros \(w_{k}\) of \(g_{n}(w)\) (with multiplicity): (1.25.3) \[c_{n}(w)=\sum_{m_{n}(k)\neq 0}a_{k}(w)\cdot\frac{g_{n}(w)}{(w-w_{k})^{m_{n}(k)} }+h(w)g_{n}(w).\] We give a sufficient condition on the \(p\)-adic valuations of the coefficients of \(a_{k}(w)\) that would imply Theorem 1.5. This is Proposition 4.4. In fact, we prove a similar \(p\)-adic valuation condition for _all_ (principal or not) \(n\times n\)-submatrices \(\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\) of the matrix of \(U_{p}\) with respect to the power basis, where \(\underline{\zeta}\) and \(\underline{\xi}\) are row and column index sets.
* (Cofactor expansion argument) The key equality (1.25.2) writes the matrix \(\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\) as the sum of a matrix which is simple at \(w_{k}\) and a matrix which has small rank at \(w_{k}\). Taking the cofactor expansion with respect to this decomposition, we reduce the needed estimate to an estimate on the power series expansion of the characteristic power series of smaller minors. This step involves some rather subtle inductive processes that we defer to Section 5 for the discussion.
* (Estimating power series expansions for smaller minors) This is to complete the inductive argument by proving that the known estimate on the Lagrange interpolation coefficients implies the needed estimate on the power series expansion of the characteristic power series of smaller minors. This part is relatively straightforward, but is entangled with some pathological cases, where the refined halo estimate is essentially needed.
### Roadmap of the paper
The first five sections are devoted to proving the local ghost conjecture (Theorem 1.5 or Theorem 2.7). This is divided as follows: Section 2 collects background information on the local ghost conjecture from [11]; Section 3 establishes the two main inputs of the proof as explained in SS 1.25; Sections 4, 5, and 6 treat precisely Steps I, III, and II of SS 1.25, respectively. (We swapped the order for logical coherence.) In Section 7, we recall a known-to-experts result: applying Emerton's locally analytic Jacquet functor to the Paskunas modules precisely outputs Breuil-Hellmann-Schraen's trianguline deformation space (Theorem 7.18). Combining this with the local ghost theorem, we deduce a theorem on the slopes of the trianguline deformation space (Theorem 7.6). Applications A and B are corollaries of this. Section 8 is the second part of the bootstrapping argument: using the knowledge of the slopes on trianguline deformation spaces, we determine the \(U_{p}\)-slopes for any so-called \(\mathcal{O}\llbracket\mathrm{K}_{p}\rrbracket\)-projective arithmetic module (Theorem 8.7). In the case of modular forms, this specializes to Theorem 1.3. Applications D, E, and F follow from this immediately. Finally, in Section 9, we prove the finiteness of irreducible components of spectral curves, namely Theorem 1.15.
### Acknowledgments
This paper would not have been possible without the great ideas from the work of John Bergdall and Robert Pollack [1]. Part of the proof is inspired by the evidence provided by their numerical computations. We especially thank them for sharing their ideas and insight at an early stage and for many interesting conversations. We thank Yiwen Ding and Yongquan Hu for multiple helpful discussions, especially on Paskunas functors. We also thank Christophe Breuil, Matthew Emerton, Toby Gee, Bao Le Hung, Rufei Ren, and Daqing Wan for inspiring communications. We thank all the people contributing to the SAGE software, as much of our argument relied on first testing via heavy computer simulations.
R. Liu is partially supported by the National Natural Science Foundation of China under agreement No. NSFC-11725101 and the Tencent Foundation. N. Truong is partially supported by L.Xiao's NSF grant DMS-1752703. L. Xiao is partially supported by Simons Collaboration Grant #278433, NSF grant DMS-1502147 and DMS-1752703, the Chinese NSF grant NSFC-12071004, Recruitment Program of Global Experts of China, and a grant from the Chinese Ministry of Education. B. Zhao is partially supported by AMS-Simons Travel Grant.
### Notation and normalization
For a field \(k\), we write \(\overline{k}\) for its algebraic closure.
Throughout the paper, we fix a prime number \(p\geq 5\). Let \(\mathrm{I}_{\mathbb{Q}_{p}}\subset\mathrm{Gal}(\overline{\mathbb{Q}}_{p}/\mathbb{Q}_{p})\) denote the inertia subgroup, and \(\omega_{1}:\mathrm{I}_{\mathbb{Q}_{p}}\twoheadrightarrow\mathrm{Gal}(\mathbb{Q}_{p}(\mu_{p})/\mathbb{Q}_{p})\cong\mathbb{F}_{p}^{\times}\) the _first fundamental character_.
The reciprocity map \(\mathbb{Q}_{p}^{\times}\to\mathrm{Gal}_{\mathbb{Q}_{p}}^{\mathrm{ab}}\) is normalized so that \(p\) is sent to the geometric Frobenius element. The character \(\chi_{\mathrm{cycl}}:\mathbb{Q}_{p}^{\times}\to\mathbb{Z}_{p}^{\times}\) given by \(\chi_{\mathrm{cycl}}(x)=x|x|\) extends to the _cyclotomic character_ of \(\mathrm{Gal}_{\mathbb{Q}_{p}}\). The Hodge-Tate weight of \(\chi_{\mathrm{cycl}}\) in our convention is \(-1\).
Let \(\Delta\cong(\mathbb{Z}/p\mathbb{Z})^{\times}\) be the torsion subgroup of \(\mathbb{Z}_{p}^{\times}\), and let \(\omega:\Delta\to\mathbb{Z}_{p}^{\times}\) be the Teichmuller character. For an element \(\alpha\in\mathbb{Z}_{p}^{\times}\), we often use \(\bar{\alpha}\in\Delta\) to denote its reduction modulo \(p\).
All hom spaces in this paper refer to the spaces of continuous homomorphisms. For \(M\) a topological \(\mathcal{O}\)-module, we write \(\mathcal{C}^{0}(\mathbb{Z}_{p};M)\) for the space of continuous functions on \(\mathbb{Z}_{p}\) with values in \(M\).
Let \(E\) be a finite extension of \(\mathbb{Q}_{p}(\sqrt{p})\), as the coefficient field. Let \(\mathcal{O}\), \(\mathbb{F}\), and \(\varpi\) denote its ring of integers, residue field, and a uniformizer, respectively.
The \(p\)-adic valuation \(v_{p}(-)\) and \(p\)-adic norm are normalized so that \(v_{p}(p)=1\) and \(|p|=p^{-1}\).
We use \(\lceil x\rceil\) to denote the ceiling function and \(\lfloor x\rfloor\) to denote the floor function.
We shall encounter both the \(p\)-adic logarithm \(\log(x)=(x-1)-\frac{(x-1)^{2}}{2}+\cdots\) for \(x\) a \(p\)-adic or formal element, and the natural logarithm \(\ln(-)\) from real analysis.
For each \(m\in\mathbb{Z}\), we write \(\{m\}\) for the unique integer satisfying the conditions
\[0\leq\{m\}\leq p-2\quad\text{and}\quad m\equiv\{m\}\bmod p-1.\]
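For instance, \(\{p\}=1\) and \(\{-3\}=p-4\); this notation appears below in expressions such as \(\{a+s_{\varepsilon}\}\) and \(k_{\varepsilon}=2+\{a+2s_{\varepsilon}\}\).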
For a square matrix \(M\) with coefficients in a ring \(R\), we write \(\mathrm{Char}(M;t):=\det(I-Mt)\in R\llbracket t\rrbracket\) (if it exists), where \(I\) is the identity matrix. When \(U\) acting on an \(R\)-module is given by such a matrix \(M\), we write \(\mathrm{Char}(U;t)\) for \(\mathrm{Char}(M;t)\).
For a power series \(F(t)=\sum_{n\geq 0}c_{n}t^{n}\in\mathbb{C}_{p}[\![t]\!]\) with \(c_{0}=1\), we use \(\operatorname{NP}(F)\) to denote its _Newton polygon_, i.e. the convex hull of the points \((n,v_{p}(c_{n}))\) for all \(n\); the slopes of the segments of \(\operatorname{NP}(F)\) are often referred to as the slopes of \(F(t)\). For two Newton polygons \(A\) and \(B\), let \(A\#B\) denote the Newton polygon (starting at \((0,0)\)) whose slopes (with multiplicity) are the disjoint union of those of \(A\) and \(B\).
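As a small worked example: for \(F(t)=1+pt+pt^{2}\), the relevant points are \((0,0)\), \((1,1)\), and \((2,1)\); the point \((1,1)\) lies above the segment from \((0,0)\) to \((2,1)\), so \(\operatorname{NP}(F)\) is that single segment and the slopes of \(F\) are \(\frac{1}{2},\frac{1}{2}\).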
For a formal \(\mathcal{O}\)-scheme \(\operatorname{Spf}(R)\), let \(\operatorname{Spf}(R)^{\operatorname{rig}}\) denote the associated rigid analytic space over \(E\).
## 2. Recollection of the local ghost conjecture
In [1, 1, 1], Bergdall-Pollack proposed a conjectural combinatorial recipe to compute the slopes of modular forms. This was reformulated by the authors [10] in a setup that can be adapted to the context of modularity lifting techniques. In this section, we first recall the construction as well as the statement of the local ghost conjecture; notations mostly follow from [10] and we refer to _loc. cit._ for details. After this, we quickly recall the power basis of abstract classical and overconvergent forms as well as the dimension formulas for spaces of abstract classical forms.
**Notation 2.1**.: Recall the following subgroups of \(\operatorname{GL}_{2}(\mathbb{Q}_{p})\).
\[\operatorname{K}_{p}:=\operatorname{GL}_{2}(\mathbb{Z}_{p})\supset \operatorname{Iw}_{p}:=\begin{pmatrix}\mathbb{Z}_{p}^{\times}&\mathbb{Z}_{p} \\ p\mathbb{Z}_{p}&\mathbb{Z}_{p}^{\times}\end{pmatrix}\supset\operatorname{Iw}_{ p,1}:=\begin{pmatrix}1+p\mathbb{Z}_{p}&\mathbb{Z}_{p}\\ p\mathbb{Z}_{p}&1+p\mathbb{Z}_{p}\end{pmatrix}.\]
Fix a finite extension \(E\) of \(\mathbb{Q}_{p}\) which contains a chosen square root \(\sqrt{p}\) of \(p\) (for technical convenience later). Let \(\mathcal{O}\), \(\mathbb{F}\), and \(\varpi\) denote its ring of integers, residue field, and a uniformizer, respectively.
For a pair of non-negative integers \((a,b)\), we use \(\sigma_{a,b}\) to denote the _right_\(\mathbb{F}\)-representation \(\operatorname{Sym}^{a}\mathbb{F}^{\oplus 2}\otimes\det^{b}\) of \(\operatorname{GL}_{2}(\mathbb{F}_{p})\). When \(a\in\{0,\ldots,p-1\}\) and \(b\in\{0,\ldots,p-2\}\), \(\sigma_{a,b}\) is irreducible. These representations exhaust all irreducible (right) \(\mathbb{F}\)-representations of \(\operatorname{GL}_{2}(\mathbb{F}_{p})\). We call them the _Serre weights_. We use \(\operatorname{Proj}_{a,b}\) to denote the projective envelope of \(\sigma_{a,b}\) as a (right) \(\mathbb{F}[\operatorname{GL}_{2}(\mathbb{F}_{p})]\)-module.
**Definition 2.2**.: ([10, Definition 2.22]) Fix a _reducible, nonsplit, and generic_ residual representation \(\bar{\rho}:\operatorname{I}_{\mathbb{Q}_{p}}\to\operatorname{GL}_{2}(\mathbb{ F})\) of the inertia subgroup:
\[\bar{\rho}\simeq\begin{pmatrix}\omega_{1}^{a+b+1}&*\neq 0\\ 0&\omega_{1}^{b}\end{pmatrix}\qquad\text{for $1\leq a\leq p-4$ and $0\leq b\leq p-2$}, \tag{2.2.1}\]
where \(\omega_{1}\) is the first fundamental character, and \(*\) stands for the _unique_ nontrivial extension (up to isomorphism) in the class \(\operatorname{H}^{1}(\operatorname{I}_{\mathbb{Q}_{p}},\omega_{1}^{a+1})^{\operatorname{Gal}_{\mathbb{Q}_{p}}}=\operatorname{H}^{1}(\operatorname{Gal}_{\mathbb{Q}_{p}},\omega_{1}^{a+1})\).
An _\(\mathcal{O}[\![\operatorname{K}_{p}]\!]\)-projective augmented module_\(\widetilde{\operatorname{H}}\) is a finitely generated _right_ projective \(\mathcal{O}[\![\operatorname{K}_{p}]\!]\)-module whose right \(\operatorname{K}_{p}\)-action extends to a right continuous \(\operatorname{GL}_{2}(\mathbb{Q}_{p})\)-action. We say that \(\widetilde{\operatorname{H}}\) is _of type \(\bar{\rho}\) with multiplicity \(m(\widetilde{\operatorname{H}})\)_ if
1. (Serre weight) \(\overline{\operatorname{H}}:=\widetilde{\operatorname{H}}/(\varpi, \operatorname{I}_{1+p\operatorname{M}_{2}(\mathbb{Z}_{p})})\) is isomorphic to a direct sum of \(m(\widetilde{\operatorname{H}})\) copies of \(\operatorname{Proj}_{a,b}\) as a right \(\mathbb{F}[\operatorname{GL}_{2}(\mathbb{F}_{p})]\)-module.
The topology on such \(\widetilde{\operatorname{H}}\) is the one inherited from the \(\mathcal{O}[\![\operatorname{K}_{p}]\!]\)-module structure.
We say \(\widetilde{\operatorname{H}}\) is _primitive_ if \(m(\widetilde{\operatorname{H}})=1\) and \(\widetilde{\operatorname{H}}\) satisfies the following additional conditions:
2. (Central character I) the action of \(\left(\begin{smallmatrix}p&0\\ 0&p\end{smallmatrix}\right)\) on \(\widetilde{\mathrm{H}}\) is the multiplication by an invertible element \(\xi\in\mathcal{O}^{\times}\), and
3. (Central character II) there exists an isomorphism \(\widetilde{\mathrm{H}}\cong\widetilde{\mathrm{H}}_{0}\widehat{\otimes}_{\mathcal{O}}\mathcal{O}[\![(1+p\mathbb{Z}_{p})^{\times}]\!]\) of \(\mathcal{O}[\mathrm{GL}_{2}(\mathbb{Q}_{p})]\)-modules, where \(\widetilde{\mathrm{H}}_{0}\) carries an action of \(\mathrm{GL}_{2}(\mathbb{Q}_{p})\) which is trivial on elements of the form \(\left(\begin{smallmatrix}\alpha&0\\ 0&\alpha\end{smallmatrix}\right)\) for \(\alpha\in(1+p\mathbb{Z}_{p})^{\times}\), and the latter factor \(\mathcal{O}[\![(1+p\mathbb{Z}_{p})^{\times}]\!]\) carries the natural action of \(\mathrm{GL}_{2}(\mathbb{Q}_{p})\) through the map \(\mathrm{GL}_{2}(\mathbb{Q}_{p})\stackrel{{\mathrm{det}}}{{\longrightarrow}}\mathbb{Q}_{p}^{\times}\xrightarrow{\,p^{r}\delta\,\mapsto\,\delta/\omega(\bar{\delta})\,}(1+p\mathbb{Z}_{p})^{\times}\).
**Remark 2.3**.: We quickly remind the readers here that, for the local theory of ghost conjecture, we only treat the case when \(\bar{\rho}\) is reducible and _nonsplit_, or equivalently, when there is only one Serre weight. It is the later bootstrapping argument in SS 7 and SS 8 that allows us to deduce the general reducible case from the nonsplit case.
### Space of abstract forms
Let \(\widetilde{\mathrm{H}}\) be an \(\mathcal{O}[\![\mathrm{K}_{p}]\!]\)-projective augmented module of type \(\bar{\rho}\) with multiplicity \(m(\widetilde{\mathrm{H}})\).
(1) Set \(\Delta:=\mathbb{F}_{p}^{\times}\) and write \(\omega:\mathbb{F}_{p}^{\times}\to\mathbb{Z}_{p}^{\times}\) for the Teichmuller character. For each \(\alpha\in\mathbb{Z}_{p}\), write \(\bar{\alpha}\) for its reduction modulo \(p\).
A character \(\varepsilon\) of \(\Delta^{2}\) is called _relevant_ to \(\bar{\rho}\) if it is of the form
\[\varepsilon=\omega^{-s_{\varepsilon}+b}\times\omega^{a+s_{\varepsilon}+b}\]
for some \(s_{\varepsilon}\in\{0,\dots,p-2\}\).
Recall that there is a canonical identification \(\mathcal{O}[\![(1+p\mathbb{Z}_{p})^{\times}]\!]\cong\mathcal{O}[\![w]\!]\) by sending \([\alpha]\) for \(\alpha\in(1+p\mathbb{Z}_{p})^{\times}\) to \((1+w)^{\log(\alpha)/p}\), where \(\log(-)\) is the formal \(p\)-adic logarithm. In particular, for each \(k\in\mathbb{Z}\), we set
\[w_{k}:=\exp(p(k-2))-1.\]
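A small observation that follows directly from this normalization: \(w_{2}=0\), and for \(k\neq 2\) we have \(v_{p}(w_{k})=v_{p}(p(k-2))=1+v_{p}(k-2)\), since \(\exp(x)-1\) and \(x\) have the same \(p\)-adic valuation whenever \(v_{p}(x)>\frac{1}{p-1}\).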
For a character \(\varepsilon\) of \(\Delta^{2}\), write \(\mathcal{O}[\![w]\!]^{(\varepsilon)}\) for \(\mathcal{O}[\![w]\!]\), but equipped with the universal character
\[\chi_{\mathrm{univ}}^{(\varepsilon)}:\Delta\times\mathbb{Z}_{p}^{\times}\longrightarrow\mathcal{O}[\![w]\!]^{(\varepsilon),\times},\qquad(\bar{\alpha},\,\delta)\longmapsto\varepsilon(\bar{\alpha},\bar{\delta})\cdot(1+w)^{\log(\delta/\omega(\bar{\delta}))/p},\]
where \(\bar{\delta}\) is the reduction of \(\delta\) modulo \(p\) and \(\omega(\bar{\delta})\) is the Teichmuller lift of \(\bar{\delta}\). The _weight disk_\(\mathcal{W}^{(\varepsilon)}:=\big{(}\operatorname{Spf}\mathcal{O}[\![w]\!]^{(\varepsilon)}\big{)}^{\mathrm{rig}}\) for \(\varepsilon\) is the associated rigid analytic space over \(E\). The universal character extends to a character of \(B^{\mathrm{op}}(\mathbb{Z}_{p})=\left(\begin{smallmatrix}\mathbb{Z}_{p}^{\times}&0\\ p\mathbb{Z}_{p}&\mathbb{Z}_{p}^{\times}\end{smallmatrix}\right)\), still denoted by \(\chi_{\mathrm{univ}}^{(\varepsilon)}\), given by
\[\chi_{\mathrm{univ}}^{(\varepsilon)}\big{(}\begin{smallmatrix}\alpha&0\\ \gamma&\delta\end{smallmatrix}\big{)}=\chi_{\mathrm{univ}}^{(\varepsilon)}(\bar{\alpha},\delta). \tag{2.4.1}\]
Fix a relevant character \(\varepsilon\) for the rest of this subsection. Consider the induced representation (for the _right action convention_)
\[\operatorname{Ind}_{B^{\mathrm{op}}(\mathbb{Z}_{p})}^{\mathrm{ Iw}_{p}}(\chi_{\mathrm{univ}}^{(\varepsilon)}):=\big{\{}\text{continuous functions }f:\mathrm{Iw}_{p}\to\mathcal{O}[\![w]\!]^{(\varepsilon)};\] \[\qquad\qquad f(gb)=\chi_{\mathrm{univ}}^{(\varepsilon)}(b)\cdot f (g)\text{ for }b\in B^{\mathrm{op}}(\mathbb{Z}_{p})\text{ and }g\in\mathrm{Iw}_{p}\big{\}} \tag{2.4.3}\] \[\cong\mathcal{C}^{0}(\mathbb{Z}_{p};\mathcal{O}[\![w]\!]^{( \varepsilon)}), \tag{2.4.2}\]
where \(\mathcal{C}^{0}(\mathbb{Z}_{p};-)\) denotes the space of continuous functions on \(\mathbb{Z}_{p}\) with values in \(-\); the isomorphism is given by \(f\mapsto h(z)=f\big{(}\big{(}\begin{smallmatrix}1&z\\ 0&1\end{smallmatrix}\big{)}\big{)}\). Our choice of convention is such that the left
action on its dual, i.e. on the distributions \(\mathcal{D}_{0}(\mathbb{Z}_{p};\mathcal{O}\llbracket w\rrbracket^{(\varepsilon)})\), is the natural one, and this will be compatible with Emerton's locally analytic Jacquet functor for the lower triangular matrices used later [10]; see SS 7.20 for the discussion.
This space (2.4.2) carries an action of the monoid
\[\mathbf{M}_{1}=\big{\{}\big{(}\begin{smallmatrix}\alpha&\beta\\ \gamma&\delta\end{smallmatrix}\big{)}\in\mathrm{M}_{2}(\mathbb{Z}_{p});\ p|\gamma,\,p\nmid\delta,\,\alpha\delta- \beta\gamma\neq 0\big{\}},\]
given by the explicit formula (setting determinant \(\alpha\delta-\beta\gamma=p^{r}d\) with \(d\in\mathbb{Z}_{p}^{\times}\))
\[h\big{|}_{\begin{smallmatrix}\alpha&\beta\\ \gamma&\delta\end{smallmatrix}}(z)=\varepsilon(\bar{d}/\bar{\delta},\bar{ \delta})\cdot(1+w)^{\log\big{(}(\gamma z+\delta)/\omega(\bar{\delta})\big{)}/p }\cdot h\Big{(}\frac{\alpha z+\beta}{\gamma z+\delta}\Big{)}. \tag{2.4.4}\]
(2) For the \(\widetilde{\mathrm{H}}\) and a relevant character \(\varepsilon\) as above, use \(\mathcal{O}\langle w/p\rangle^{(\varepsilon)}\) to denote the same ring \(\mathcal{O}\langle w/p\rangle\) equipped with the associated universal character as given in (2.4.1). We define the space of _abstract \(p\)-adic forms_ and the space of _families of abstract overconvergent forms_ to be
\[\mathrm{S}^{(\varepsilon)}_{\text{$p$-adic}}=\mathrm{S}^{( \varepsilon)}_{\widetilde{\mathrm{H}},\text{$p$-adic}} := \mathrm{Hom}_{\mathcal{O}\llbracket\mathrm{Iw}_{p}\rrbracket} \big{(}\widetilde{\mathrm{H}},\,\mathrm{Ind}^{\mathrm{Iw}_{p}}_{B^{\infty}( \mathbb{Z}_{p})}(\chi^{(\varepsilon)}_{\mathrm{univ}})\big{)}\cong\mathrm{Hom }_{\mathcal{O}\llbracket\mathrm{Iw}_{p}\rrbracket}\big{(}\widetilde{\mathrm{H}}, \,\mathcal{C}^{0}(\mathbb{Z}_{p};\mathcal{O}\llbracket w\rrbracket^{( \varepsilon)})\big{)},\] \[\mathrm{S}^{\dagger,(\varepsilon)}=\mathrm{S}^{\dagger,( \varepsilon)}_{\widetilde{\mathrm{H}}} := \mathrm{Hom}_{\mathcal{O}\llbracket\mathrm{Iw}_{p}\rrbracket} \big{(}\widetilde{\mathrm{H}},\,\mathcal{O}\langle w/p\rangle^{(\varepsilon)} \langle z\rangle\big{)},\]
respectively. Viewing power series in \(z\) as continuous functions on \(\mathbb{Z}_{p}\) induces a natural inclusion
\[\mathcal{O}\langle w/p\rangle^{(\varepsilon)}\langle z\rangle\hookrightarrow \mathcal{C}^{0}(\mathbb{Z}_{p};\mathcal{O}\llbracket w\rrbracket^{(\varepsilon) })\otimes_{\mathcal{O}\llbracket w\rrbracket}\mathcal{O}\langle w/p\rangle,\]
such that the \(\mathbf{M}_{1}\)-action on the latter space given by (2.4.4) stabilizes the subspace. This induces a natural inclusion
\[\mathrm{S}^{\dagger,(\varepsilon)}\hookrightarrow\mathrm{S}^{(\varepsilon)}_{ \text{$p$-adic}}\otimes_{\mathcal{O}\llbracket w\rrbracket}\mathcal{O}\langle w /p\rangle. \tag{2.4.5}\]
The space \(\mathrm{S}^{(\varepsilon)}_{\text{$p$-adic}}\) (resp. \(\mathrm{S}^{\dagger,(\varepsilon)}\)) carries an \(\mathcal{O}\llbracket w\rrbracket\)-linear (resp. \(\mathcal{O}\langle w/p\rangle\)-linear) \(U_{p}\)-action: fixing a decomposition of the double coset \(\mathrm{Iw}_{p}\big{(}\begin{smallmatrix}p^{-1}&0\\ 0&1\end{smallmatrix}\big{)}\mathrm{Iw}_{p}=\coprod_{j=0}^{p-1}v_{j}\mathrm{Iw}_{p}\) (e.g. \(v_{j}=\big{(}\begin{smallmatrix}p^{-1}&0\\ j&1\end{smallmatrix}\big{)}\) and \(v_{j}^{-1}=\big{(}\begin{smallmatrix}p&0\\ -jp&1\end{smallmatrix}\big{)}\)), the \(U_{p}\)-operator sends \(\varphi\in\mathrm{S}^{(\varepsilon)}_{\text{$p$-adic}}\) (resp. \(\varphi\in\mathrm{S}^{\dagger,(\varepsilon)}\)) to
\[U_{p}(\varphi)(x)=\sum_{j=0}^{p-1}\varphi(xv_{j})|_{v_{j}^{-1}}\quad\text{for all $x\in\widetilde{\mathrm{H}}$}. \tag{2.4.6}\]
The \(U_{p}\)-operator does not depend on the choice of coset representatives. As explained in [11, SS 2.10 and Lemma 2.14], the characteristic power series of the \(U_{p}\)-action on \(\mathrm{S}^{\dagger,(\varepsilon)}\) and \(\mathrm{S}^{(\varepsilon)}_{\text{$p$-adic}}\) are well-defined and are equal; we denote it by
\[C^{(\varepsilon)}(w,t)=C^{(\varepsilon)}_{\widetilde{\mathrm{H}}}(w,t)=\sum_{n \geq 0}c^{(\varepsilon)}_{n}(w)t^{n}\in\Lambda\llbracket t\rrbracket=\mathcal{O} \llbracket w,t\rrbracket.\]
The main subject of the local ghost conjecture is to provide an "approximation" of \(C^{(\varepsilon)}(w,t)\).
For each integer \(k\in\mathbb{Z}\), evaluating at \(w=w_{k}:=\exp((k-2)p)-1\), we arrive at the space of _abstract overconvergent forms of weight \(k\) and character_\(\psi=\varepsilon\cdot(1\times\omega^{2-k})\):
\[\mathrm{S}^{\dagger}_{k}(\psi)=\mathrm{S}^{\dagger}_{\widetilde{\mathrm{H}},k} (\psi):=\mathrm{S}^{\dagger,(\varepsilon)}\otimes_{\mathcal{O}\langle w/p \rangle,w\mapsto w_{k}}\mathcal{O},\]
carrying compatible \(U_{p}\)-actions. Moreover, the characteristic power series for the \(U_{p}\)-action is precisely \(C^{(\varepsilon)}(w_{k},t)\).
(3) For each integer \(k\geq 2\), setting \(\psi=\varepsilon\cdot(1\times\omega^{2-k})\), we have a canonical inclusion
\[{\mathcal{O}}[z]^{\leq k-2}\otimes\psi\ \subset{\mathcal{O}}\langle w/p\rangle^{( \varepsilon)}\langle z\rangle\otimes_{{\mathcal{O}}\langle w/p\rangle,w \mapsto w_{k}}{\mathcal{O}},\]
such that the \({\mathbf{M}}_{1}\)-action on the latter given by (2.4.4) stabilizes the submodule. So we may define the space of _abstract classical forms of weight \(k\) and character \(\psi\)_ to be the \(U_{p}\)-equivariant submodule
\[\operatorname{S}_{k}^{\operatorname{Iw}}(\psi)=\operatorname{S}_{\widetilde{ \operatorname{H}},k}^{\operatorname{Iw}}(\psi):=\operatorname{Hom}_{{\mathcal{O }}[\operatorname{Iw}_{p}]}\left(\widetilde{\operatorname{H}},\,{\mathcal{O}}[z ]^{\leq k-2}\otimes\psi\right)\ \subset\ \operatorname{S}_{k}^{\dagger}(\psi),\]
where \({\mathcal{O}}[z]^{\leq k-2}\) means the space of polynomials of degree \(\leq k-2\). In particular, the characteristic power series of the \(U_{p}\)-action on \(\operatorname{S}_{k}^{\operatorname{Iw}}(\psi)\) divides \(C^{(\varepsilon)}(w_{k},t)\).
When \(\widetilde{\operatorname{H}}\) is primitive, set
\[d_{k}^{\operatorname{Iw}}(\psi):=\operatorname{rank}_{{\mathcal{O}}} \operatorname{S}_{k}^{\operatorname{Iw}}(\psi).\]
(4) Recall the notation \(\{-\}\) as defined at the end of the introduction. We define \(k_{\varepsilon}:=2+\{a+2s_{\varepsilon}\}\in\{2,\ldots,p\}\). When the character \(\psi:\Delta^{2}\to{\mathcal{O}}^{\times}\) takes the form of \(\psi=\tilde{\varepsilon}_{1}:=\varepsilon_{1}\times\varepsilon_{1}\), and the integer \(k\in{\mathbb{Z}}_{\geq 2}\) satisfies \(\tilde{\varepsilon}_{1}\cdot(1\times\omega^{k-2})=\varepsilon=\omega^{-s_{\varepsilon}+b}\times\omega^{a+s_{\varepsilon}+b}\), we must have \(\varepsilon_{1}=\omega^{-s_{\varepsilon}+b}\) and \(k\equiv k_{\varepsilon}\bmod p-1\). In this case, \({\mathcal{O}}[z]^{\leq k-2}\otimes\varepsilon_{1}\circ\det\) carries a natural action of the monoid \(\operatorname{M}_{2}({\mathbb{Z}}_{p})^{\det\neq 0}\), given, for \(\left(\begin{smallmatrix}\alpha&\beta\\ \gamma&\delta\end{smallmatrix}\right)\in\operatorname{M}_{2}({\mathbb{Z}}_{p})^{\det\neq 0}\) (setting the determinant \(\alpha\delta-\beta\gamma=p^{r}d\) with \(d\in{\mathbb{Z}}_{p}^{\times}\)), by
\[h|_{\left(\begin{smallmatrix}\alpha&\beta\\ \gamma&\delta\end{smallmatrix}\right)}(z)=\varepsilon_{1}(\bar{d})\cdot( \gamma z+\delta)^{k-2}h\Big{(}\frac{\alpha z+\beta}{\gamma z+\delta}\Big{)}.\]
Define the space of _abstract classical forms with \(\operatorname{K}_{p}\)-level of weight \(k\) and central character \(\varepsilon_{1}\)_ to be
\[\operatorname{S}_{k}^{\operatorname{ur}}(\varepsilon_{1})=\operatorname{S}_{ \widetilde{\operatorname{H}},k}^{\operatorname{ur}}(\varepsilon_{1}):= \operatorname{Hom}_{{\mathcal{O}}[\operatorname{K}_{p}]}\big{(}\widetilde{ \operatorname{H}},\,{\mathcal{O}}[z]^{\leq k-2}\otimes\varepsilon_{1}\circ \det\big{)}.\]
This space carries an action of the \(T_{p}\)-operator: taking a coset decomposition \(\operatorname{K}_{p}\big{(}\begin{smallmatrix}p^{-1}&0\\ 0&1\end{smallmatrix}\big{)}\operatorname{K}_{p}=\coprod_{j=0}^{p}u_{j}\operatorname{K}_{p}\) (e.g. \(u_{j}=\left(\begin{smallmatrix}1&jp^{-1}\\ 0&p^{-1}\end{smallmatrix}\right)\) and \(u_{j}^{-1}=\left(\begin{smallmatrix}1&-j\\ 0&p\end{smallmatrix}\right)\) for \(j=0,\ldots,p-1\), and \(u_{p}=\left(\begin{smallmatrix}p^{-1}&0\\ 0&1\end{smallmatrix}\right)\) and \(u_{p}^{-1}=\left(\begin{smallmatrix}p&0\\ 0&1\end{smallmatrix}\right)\)), the \(T_{p}\)-operator sends \(\varphi\in\operatorname{S}_{k}^{\operatorname{ur}}(\varepsilon_{1})\) to
\[T_{p}(\varphi)(x)=\sum_{j=0}^{p}\varphi(xu_{j})|_{u_{j}^{-1}}\quad\text{for all $x\in\widetilde{\operatorname{H}}$}. \tag{2.4.7}\]
(5) For each relevant character \(\varepsilon=\omega^{-s_{\varepsilon}+b}\times\omega^{a+s_{\varepsilon}+b}\), set \(\tilde{\varepsilon}_{1}=\omega^{-s_{\varepsilon}+b}\times\omega^{-s_{ \varepsilon}+b}\). Assume that \(\widetilde{\operatorname{H}}\) is primitive. For each \(k\in{\mathbb{Z}}_{\geq 2}\) satisfying \(k\equiv k_{\varepsilon}\bmod p-1\), set
\[d_{k}^{\operatorname{ur}}(\varepsilon_{1}):=\operatorname{rank}_{{\mathcal{O}}} \operatorname{S}_{k}^{\operatorname{ur}}(\varepsilon_{1})\quad\text{and}\quad d _{k}^{\operatorname{new}}(\varepsilon_{1}):=d_{k}^{\operatorname{Iw}}(\tilde {\varepsilon}_{1})-2d_{k}^{\operatorname{ur}}(\varepsilon_{1})\]
The ranks \(d_{k}^{\operatorname{Iw}}(\psi)\), \(d_{k}^{\operatorname{ur}}(\varepsilon_{1})\) and \(d_{k}^{\operatorname{new}}(\varepsilon_{1})\) defined above depend only on \(a\), \(b\), \(s_{\varepsilon}\), \(\psi\), and \(k\). For their precise formulas, see Definition-Proposition 2.12 later.
(6) Since the definition of \(\operatorname{S}_{k}^{\operatorname{Iw}}(\psi)\) and \(\operatorname{S}_{k}^{\operatorname{ur}}(\varepsilon_{1})\) only uses the \(\operatorname{K}_{p}\)-module structure of \(\widetilde{\operatorname{H}}\), it follows that, for an \(\mathcal{O}[\![\operatorname{K}_{p}]\!]\)-projective augmented module \(\widetilde{\operatorname{H}}\) of type \(\bar{\rho}\) with multiplicity \(m(\widetilde{\operatorname{H}})\),
\[\operatorname{rank}_{{\mathcal{O}}}\operatorname{S}_{\widetilde{\operatorname{H}},k}^{\operatorname{Iw}}(\psi)=m(\widetilde{\operatorname{H}})\cdot d_{k}^{ \operatorname{Iw}}(\psi)\quad\text{and}\quad\operatorname{rank}_{{\mathcal{O}}} \operatorname{S}_{\widetilde{\operatorname{H}},k}^{\operatorname{ur}}(\varepsilon_{1})=m( \widetilde{\operatorname{H}})\cdot d_{k}^{\operatorname{ur}}(\varepsilon_{1}). \tag{2.4.8}\]
**Definition 2.5**.: Following [1], we define the _ghost series_ of type \(\bar{\rho}\) over \(\mathcal{W}^{(\varepsilon)}\) to be the formal power series
\[G^{(\varepsilon)}(w,t)=G^{(\varepsilon)}_{\bar{\rho}}(w,t)=1+\sum_{n=1}^{ \infty}g^{(\varepsilon)}_{n}(w)t^{n}\in\mathcal{O}\llbracket w,t\rrbracket,\]
where each coefficient \(g^{(\varepsilon)}_{n}(w)\) is a product
\[g^{(\varepsilon)}_{n}(w)=\prod_{\begin{subarray}{c}k\geq 2\\ k\equiv k_{\varepsilon}\bmod p-1\end{subarray}}(w-w_{k})^{m^{(\varepsilon)}_{ n}(k)}\in\mathbb{Z}_{p}[w] \tag{2.5.1}\]
with exponents \(m^{(\varepsilon)}_{n}(k)\) given by the following recipe
\[m^{(\varepsilon)}_{n}(k)=\begin{cases}\min\big{\{}n-d^{\text{ur}}_{k}(\varepsilon_{1}),\,d^{\text{Iw}}_{k}(\tilde{\varepsilon}_{1})-d^{\text{ur}}_{k}(\varepsilon_{1})-n\big{\}}&\text{ if }d^{\text{ur}}_{k}(\varepsilon_{1})<n<d^{\text{Iw}}_{k}(\tilde{\varepsilon}_{1})-d^{\text{ur}}_{k}(\varepsilon_{1}),\\ 0&\text{ otherwise.}\end{cases}\]
For a fixed \(k\), the sequence \((m^{(\varepsilon)}_{n}(k))_{n\geq 1}\) is given by the following palindromic pattern
\[\underbrace{0,\ldots,0}_{d^{\text{ur}}_{k}(\varepsilon_{1})},1,2,3,\ldots, \tfrac{1}{2}d^{\text{new}}_{k}(\varepsilon_{1})-1,\tfrac{1}{2}d^{\text{new}}_{ k}(\varepsilon_{1}),\tfrac{1}{2}d^{\text{new}}_{k}(\varepsilon_{1})-1,\ldots,3,2,1,0,0,\ldots, \tag{2.5.2}\]
where the maximum \(\tfrac{1}{2}d^{\text{new}}_{k}(\varepsilon_{1})\) appears at the \(\tfrac{1}{2}d^{\text{Iw}}_{k}(\tilde{\varepsilon}_{1})\)th place.
When \(m^{(\varepsilon)}_{n}(k)\neq 0\), we often refer to \(w_{k}\) as a _ghost zero_ of \(g^{(\varepsilon)}_{n}(w)\).
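For readers who wish to experiment numerically with the recipe above (the numerical evidence for the ghost conjecture was gathered this way, cf. the SAGE computations mentioned in the acknowledgments), here is a minimal Python sketch; the function name and interface are ours, and the dimensions \(d^{\text{ur}}_{k}(\varepsilon_{1})\) and \(d^{\text{Iw}}_{k}(\tilde{\varepsilon}_{1})\) are taken as inputs, their closed formulas being given in Definition-Proposition 2.12.

```python
def ghost_multiplicities(d_ur, d_iw, n_max):
    """Exponents m_n(k) from Definition 2.5 for n = 1, ..., n_max.

    d_ur -- rank of the space of abstract classical forms of K_p-level,
    d_iw -- rank of the space of abstract classical forms of Iwahori level.
    """
    mults = []
    for n in range(1, n_max + 1):
        if d_ur < n < d_iw - d_ur:
            mults.append(min(n - d_ur, d_iw - d_ur - n))
        else:
            mults.append(0)
    return mults

# Example: d_ur = 2, d_iw = 10 (so d_new = 6) reproduces the palindromic
# pattern (2.5.2): zeros, then 1, 2, 3, 2, 1, then zeros again.
print(ghost_multiplicities(2, 10, 9))  # [0, 0, 1, 2, 3, 2, 1, 0, 0]
```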
**Conjecture 2.6** (Local ghost conjecture).: _Let \(\bar{\rho}=\left(\begin{smallmatrix}\omega^{a+b+1}_{1}\ast\neq 0\\ 0&\omega^{b}_{1}\end{smallmatrix}\right):\mathrm{I}_{\mathbb{Q}_{p}}\to \mathrm{GL}_{2}(\mathbb{F})\) be a reducible, nonsplit, and generic residual representation with \(a\in\{1,\ldots,p-4\}\) and \(b\in\{0,\ldots,p-2\}\), as in (2.2.1). Let \(\widetilde{\mathrm{H}}\) be a primitive \(\mathcal{O}\llbracket\mathds{K}_{p}\rrbracket\)-projective augmented module of type \(\bar{\rho}\), and let \(\varepsilon\) be a character of \(\Delta^{2}\) relevant to \(\bar{\rho}\). We define the characteristic power series \(C^{(\varepsilon)}(w,t)\) of \(U_{p}\)-action and the ghost series \(G^{(\varepsilon)}(w,t)\) for \(\widetilde{\mathrm{H}}\) as in this section. Then for every \(w_{\star}\in\mathfrak{m}_{\mathbb{C}_{p}}\), we have \(\mathrm{NP}(G^{(\varepsilon)}(w_{\star},-))=\mathrm{NP}(C^{(\varepsilon)}(w_{ \star},-))\)._
The main local result of this paper is the following.
**Theorem 2.7**.: _The Conjecture 2.6 holds when \(p\geq 11\) and \(2\leq a\leq p-5\)._
**Remark 2.8**.: We point out that the conditions \(a\not\in\{1,p-4\}\) and \(p\geq 11\) are essentially needed only at various places in the proof of Proposition 5.4(1); see also Remark 5.6. We do not know whether a more subtle discussion of the boundary cases could retrieve the theorem when \(a\in\{1,p-4\}\) or \(p=7\). The condition \(p\geq 7\) is required at more places, e.g. [11, Corollary 5.10].
As pointed out in [11, Remark 2.30], after twisting, we may and will assume that \(b=0\) and that \(\left(\begin{smallmatrix}p&0\\ 0&p\end{smallmatrix}\right)\) acts trivially on \(\widetilde{\mathrm{H}}\).
**Hypothesis 2.9**.: From now on till the end of Section 6 (with the exception of Proposition 2.14 and the following remarks), we assume that \(\widetilde{\mathrm{H}}\) is a primitive \(\mathcal{O}\llbracket\mathds{K}_{p}\rrbracket\)-projective augmented module of type \(\bar{\rho}\), with \(b=0\) and \(\xi=1\). In particular, \(\overline{\mathrm{H}}=\widetilde{\mathrm{H}}/(\varpi,\mathrm{I}_{1+p\mathrm{ M}_{2}(\mathbb{Z}_{p})})\simeq\mathrm{Proj}_{a,0}\), and \(\left(\begin{smallmatrix}p&0\\ 0&p\end{smallmatrix}\right)\) acts trivially on \(\widetilde{\mathrm{H}}\).
For the rest of this section, we recall important definitions and results regarding abstract forms and ghost series that we have proved in the prequel [11]; we refer to _loc. cit._ for details and proofs.
### Power basis
In [11, SS 3], we constructed a power basis of the space of abstract (overconvergent) forms. Let \(\widetilde{\mathrm{H}}\) be as above. As explained in [11, SS 3.2], we may write \(\widetilde{\mathrm{H}}\) as an \(\mathcal{O}\llbracket\mathrm{Iw}_{p}\rrbracket\)-module
\[\widetilde{\mathrm{H}}\cong e_{1}\mathcal{O}[\![\mathrm{Iw}_{p}]\!]\otimes_{\mathcal{O}[\bar{\mathrm{T}}],\chi_{1}}\mathcal{O}\ \oplus\ e_{2}\mathcal{O}[\![\mathrm{Iw}_{p}]\!]\otimes_{\mathcal{O}[\bar{\mathrm{T}}],\chi_{2}}\mathcal{O} \tag{2.10.1}\]
for the two characters \(\chi_{1}=1\times\omega^{a}\) and \(\chi_{2}=\omega^{a}\times 1\) of \(\bar{\mathrm{T}}=\Delta^{2}\) (embedded diagonally in \(\mathrm{Iw}_{p}\)). Moreover, by [11, Lemma 3.3] we may require that \(e_{i}\big{(}\begin{smallmatrix}0&1\\ p&0\end{smallmatrix}\big{)}=e_{3-i}\) for \(i=1,2\). We fix such an isomorphism (2.10.1).
For a _relevant_ character \(\varepsilon=\omega^{-s_{\varepsilon}}\times\omega^{a+s_{\varepsilon}}\) of \(\Delta^{2}\), we have
\[\mathrm{S}^{\dagger,(\varepsilon)}=\mathrm{Hom}_{\mathcal{O}[ \mathrm{Iw}_{p}]}\left(\widetilde{\mathrm{H}},\,\mathcal{O}\langle w/p\rangle \langle z\rangle\otimes(\omega^{-s_{\varepsilon}}\times\omega^{a+s_{ \varepsilon}})\right)\] \[\cong e_{1}^{*}\cdot\big{(}\mathcal{O}\langle w/p\rangle\langle z \rangle\otimes(\omega^{-s_{\varepsilon}}\times\omega^{a+s_{\varepsilon}}) \big{)}^{\bar{\mathrm{T}}=1\times\omega^{a}}\oplus e_{2}^{*}\cdot\big{(} \mathcal{O}\langle w/p\rangle\langle z\rangle\otimes(\omega^{-s_{\varepsilon }}\times\omega^{a+s_{\varepsilon}})\big{)}^{\bar{\mathrm{T}}=\omega^{a} \times 1}.\]
It follows that the following list is a basis of \(\mathrm{S}^{\dagger,(\varepsilon)}\) and also a basis of \(\mathrm{S}^{\dagger}_{k}\big{(}\varepsilon\cdot(1\times\omega^{2-k})\big{)}\) for every \(k\in\mathbb{Z}_{\geq 2}\):
\[\mathbf{B}^{(\varepsilon)}:=\big{\{}e_{1}^{*}z^{s_{\varepsilon}},e_{1}^{*}z^{ p-1+s_{\varepsilon}},e_{1}^{*}z^{2(p-1)+s_{\varepsilon}},\ldots;e_{2}^{*}z^{ \{a+s_{\varepsilon}\}},e_{2}^{*}z^{p-1+\{a+s_{\varepsilon}\}},e_{2}^{*}z^{2( p-1)+\{a+s_{\varepsilon}\}},\ldots\big{\}}. \tag{2.10.2}\]
When \(k\geq 2\), the subsequence consisting of terms whose power in \(z\) is less than or equal to \(k-2\) forms a basis of \(\mathrm{S}^{\mathrm{Iw}}_{k}\big{(}\varepsilon\cdot(1\times\omega^{2-k}) \big{)}\).
The _degree_ of each basis element \(\mathbf{e}=e_{i}^{*}z^{j}\in\mathbf{B}^{(\varepsilon)}\) is the exponent of \(z\), namely, \(\deg(e_{i}^{*}z^{j})=j\). We order the elements in \(\mathbf{B}^{(\varepsilon)}\) as \(\mathbf{e}_{1}^{(\varepsilon)},\mathbf{e}_{2}^{(\varepsilon)},\ldots\) with increasing degrees. (Under our generic assumption \(1\leq a\leq p-2\), the degrees of elements of \(\mathbf{B}^{(\varepsilon)}\) are pairwise distinct.) Writing \(\mathbf{B}^{(\varepsilon)}_{k}\) for the subset of elements of \(\mathbf{B}^{(\varepsilon)}\) with degree \(\leq k-2\), it is a basis of \(\mathrm{S}^{\mathrm{Iw}}_{k}\big{(}\varepsilon\cdot(1\times\omega^{2-k})\big{)}\).
Write \(\mathrm{U}^{\dagger,(\varepsilon)}\in\mathrm{M}_{\infty}(\mathcal{O}\langle w/p\rangle)\) for the matrix of the \(\mathcal{O}\langle w/p\rangle\)-linear \(U_{p}\)-action on \(\mathrm{S}^{\dagger,(\varepsilon)}\) with respect to the power basis \(\mathbf{B}^{(\varepsilon)}\); for \(k\in\mathbb{Z}_{\geq 2}\), the evaluation of \(\mathrm{U}^{\dagger,(\varepsilon)}\) at \(w=w_{k}\) is the matrix \(\mathrm{U}^{\dagger,(\varepsilon)}_{k}\) of the \(U_{p}\)-action on \(\mathrm{S}^{\dagger}_{k}\big{(}\varepsilon\cdot(1\times\omega^{2-k})\big{)}\) (with respect to \(\mathbf{B}^{(\varepsilon)}\)). In particular,
\[\mathrm{Char}(\mathrm{U}^{\dagger,(\varepsilon)};t)=C^{(\varepsilon)}(w,t)\quad \text{and}\quad\mathrm{Char}(\mathrm{U}^{\dagger,(\varepsilon)}_{k};t)=C^{( \varepsilon)}(w_{k},t).\]
The following are standard facts regarding theta maps and the Atkin-Lehner involutions.
**Proposition 2.11**.: _Fix notation as above and let \(k\in\mathbb{Z}_{\geq 2}\)._
1. _(Theta maps) Put_ \(\psi=\varepsilon\cdot(1\times\omega^{2-k})\)_,_ \(\varepsilon^{\prime}=\varepsilon\cdot(\omega^{k-1}\times\omega^{1-k})\) _with_ \(s_{\varepsilon^{\prime}}=\{s_{\varepsilon}+1-k\}\)_, and_ \(\psi^{\prime}=\varepsilon^{\prime}\cdot(1\times\omega^{k})=\psi\cdot\tilde{ \omega}^{k-1}\)_. There is an exact sequence_ (2.11.1) \[0\longrightarrow\mathrm{S}^{\mathrm{Iw}}_{k}(\psi)\longrightarrow\mathrm{S}^{\dagger}_{k}(\psi)\xrightarrow{\ \big{(}\frac{d}{dz}\big{)}^{k-1}\circ\ }\mathrm{S}^{\dagger}_{2-k}(\psi^{\prime})\] _which is equivariant for the usual_ \(U_{p}\)_-action on the first two terms and the_ \(p^{k-1}U_{p}\)_-action on the third term. Here the map_ \(\big{(}\frac{d}{dz}\big{)}^{k-1}\circ\) _sends an element_ \(\varphi\in\mathrm{S}^{\dagger}_{k}(\psi)\)_, viewed as a map from_ \(\widetilde{\mathrm{H}}\) _to_ \(\mathcal{O}\langle z\rangle\)_, to its post-composition with_ \(\big{(}\frac{d}{dz}\big{)}^{k-1}\)_. The sequence_ (2.11.1) _is right exact when restricted to the subspace where the_ \(U_{p}\)_-slopes are finite._
_More accurately, the matrix_ \(\mathrm{U}_{k}^{\dagger,(\varepsilon)}\) _is a block-upper-triangular matrix of the form_ (2.11.2) \[\mathrm{U}_{k}^{\dagger,(\varepsilon)}=\begin{pmatrix}\mathrm{U}_{k}^{\mathrm{ Iw},(\varepsilon)}&*\\ 0&p^{k-1}D^{-1}\mathrm{U}_{2-k}^{\dagger,(\varepsilon^{\prime})}D\end{pmatrix},\] _where the_ \(d_{k}^{\mathrm{Iw}}\big{(}\varepsilon\cdot(1\times\omega^{2-k})\big{)}\times d _{k}^{\mathrm{Iw}}\big{(}\varepsilon\cdot(1\times\omega^{2-k})\big{)}\) _upper-left block_ \(\mathrm{U}_{k}^{\mathrm{Iw},(\varepsilon)}\) _is the matrix for the_ \(U_{p}\)_-action on_ \(\mathrm{S}_{k}^{\mathrm{Iw}}\big{(}\varepsilon\cdot(1\times\omega^{2-k})\big{)}\) _with respect to_ \(\mathbf{B}_{k}^{(\varepsilon)}\)_,_ \(D\) _is the diagonal matrix whose diagonal entries are indexed by_ \(\mathbf{e}=e_{i}^{*}z^{j}\in\mathbf{B}^{(\varepsilon)}\) _with_ \(j\geq k-1\)_, and are given by_ \(j(j-1)\cdots(j-k+2)\)_._ _In particular, all finite_ \(U_{p}\)_-slopes of_ \(\mathrm{S}_{k}^{\dagger}(\psi)\) _strictly less than_ \(k-1\) _are the same as the finite_ \(U_{p}\)_-slopes of_ \(\mathrm{S}_{k}^{\mathrm{Iw}}(\psi)\)_; and the multiplicity of_ \(k-1\) _as a_ \(U_{p}\)_-slope of_ \(\mathrm{S}_{k}^{\dagger}(\psi)\) _is the sum of the multiplicity of_ \(k-1\) _as a_ \(U_{p}\)_-slope of_ \(\mathrm{S}_{k}^{\mathrm{Iw}}(\psi)\) _and the multiplicity of_ \(0\) _as a_ \(U_{p}\)_-slope of_ \(\mathrm{S}_{2-k}^{\dagger}(\psi^{\prime})\)_._
2. _(Atkin-Lehner involutions) Write_ \(\psi=\varepsilon\cdot(1\times\omega^{2-k})=\psi_{1}\times\psi_{2}\) _as a character of_ \(\Delta^{2}\) _(where we allow_ \(\psi_{1}=\psi_{2}\)_). Put_ \(\psi^{s}=\psi_{2}\times\psi_{1}\) _and_ \(\varepsilon^{\prime\prime}=\varepsilon\cdot\psi^{s}\cdot\psi^{-1}\) _so that_ \(s_{\varepsilon^{\prime\prime}}=\{k-2-a-s_{\varepsilon}\}\)_. Then we have a well-defined natural_ Atkin-Lehner involution_:_ (2.11.3) \[\mathrm{AL}_{(k,\psi)}:\mathrm{S}_{k}^{\mathrm{Iw}}(\psi)\longrightarrow\mathrm{S}_{k}^{\mathrm{Iw}}(\psi^{s}),\qquad\mathrm{AL}_{(k,\psi)}(\varphi)(x):=\varphi\Big{(}x\big{(}\begin{smallmatrix}0&1\\ p&0\end{smallmatrix}\big{)}^{-1}\Big{)}\Big{|}_{\big{(}\begin{smallmatrix}0&1\\ p&0\end{smallmatrix}\big{)}}.\] _Here the last_ \(\big{|}_{\begin{pmatrix}0&1\\ p&0\end{pmatrix}}\) _is the usual action on_ \(\mathcal{O}[z]^{\leq k-2}\) _but the_ trivial _action on the factor_ \(\psi^{s}\)_._ _Explicitly, for_ \(i=1,2\) _and any_ \(j\)_, or for any_ \(\ell=1,\ldots,d_{k}^{\mathrm{Iw}}(\psi^{s})\)_,_ (2.11.4) \[\mathrm{AL}_{(k,\psi)}(e_{i}^{*}z^{j})=p^{k-2-j}\cdot e_{3-i}^{*}z^{k-2-j}, \qquad\mathrm{AL}_{(k,\psi)}(\mathbf{e}_{\ell}^{(\varepsilon)})=p^{k-2-\deg \mathbf{e}_{\ell}}\mathbf{e}_{d_{k}^{\mathrm{Iw}}(\psi^{s})+1-\ell}^{(\varepsilon ^{\prime\prime})},\] _where we added superscripts to the power basis element to indicate the corresponding character. In particular, we have_ (2.11.5) \[\mathrm{AL}_{(k,\psi^{s})}\circ\mathrm{AL}_{(k,\psi)}=p^{k-2}.\] _When_ \(\psi_{1}\neq\psi_{2}\) _(or equivalently_ \(k\not\equiv k_{\varepsilon}\bmod(p-1)\)_), we have an equality_ (2.11.6) \[U_{p}\circ\mathrm{AL}_{(k,\psi)}\circ U_{p}=p^{k-1}\cdot\mathrm{AL}_{(k,\psi)}\] _as maps from_ \(\mathrm{S}_{k}^{\mathrm{Iw}}(\psi)\) _to_ \(\mathrm{S}_{k}^{\mathrm{Iw}}(\psi^{s})\)_. Consequently, when_ \(\psi_{1}\neq\psi_{2}\)_, we can pair the slopes for the_ \(U_{p}\)_-action on_ \(\mathrm{S}_{k}^{\mathrm{Iw}}(\psi)\) _and the slopes for the_ \(U_{p}\)_-action on_ \(\mathrm{S}_{k}^{\mathrm{Iw}}(\psi^{s})\) _so that each pair adds up to_ \(k-1\)_. In particular all slopes belong to_ \([0,k-1]\)_._
Proof.: See [11, Propositions 3.10 and 3.12].
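As a quick consistency check, (2.11.5) follows directly from the explicit formula (2.11.4): applying \(\mathrm{AL}_{(k,\psi^{s})}\) to \(\mathrm{AL}_{(k,\psi)}(e_{i}^{*}z^{j})\) gives
\[\mathrm{AL}_{(k,\psi^{s})}\big{(}p^{k-2-j}\,e_{3-i}^{*}z^{k-2-j}\big{)}=p^{k-2-j}\cdot p^{k-2-(k-2-j)}\,e_{i}^{*}z^{j}=p^{k-2}\,e_{i}^{*}z^{j},\]
as asserted.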
The following summarizes the dimension formulas for the spaces of abstract classical forms (see [11, SS 4] for the proofs).
**Definition-Proposition 2.12**.: _Let \(\widetilde{\mathrm{H}}\) be a primitive \(\mathcal{O}[\![\mathrm{K}_{p}]\!]\)-projective augmented module of type \(\bar{\rho}\) and let \(\varepsilon=\omega^{-s_{\varepsilon}}\times\omega^{a+s_{\varepsilon}}\) be a relevant character of \(\Delta^{2}\)._
1. _We have_ \[d_{k}^{\mathrm{Iw}}\big{(}\varepsilon\cdot(1\times\omega^{2-k})\big{)}=\Big{\lfloor} \frac{k-2-s_{\varepsilon}}{p-1}\Big{\rfloor}+\Big{\lfloor}\frac{k-2-\{a+s_{ \varepsilon}\}}{p-1}\Big{\rfloor}+2.\]
2. _Set_ \(\delta_{\varepsilon}:=\left\lfloor\frac{s_{\varepsilon}+\{a+s_{\varepsilon}\}}{p-1}\right\rfloor\)_. In particular, when_ \(k=k_{\varepsilon}+(p-1)k_{\bullet}\) _for_ \(k_{\bullet}\in\mathbb{Z}_{\geq 0}\)_, we have_ \[d_{k}^{\rm{Iw}}(\tilde{\varepsilon}_{1})=2k_{\bullet}+2-2\delta_{\varepsilon}.\]
3. _Introduce two integers_ \(t_{1}^{(\varepsilon)},t_{2}^{(\varepsilon)}\in\mathbb{Z}\)_:_ * _when_ \(a+s_{\varepsilon}<p-1\)_,_ \(t_{1}^{(\varepsilon)}=s_{\varepsilon}+\delta_{\varepsilon}\) _and_ \(t_{2}^{(\varepsilon)}=a+s_{\varepsilon}+\delta_{\varepsilon}+2\)_;_ * _when_ \(a+s_{\varepsilon}\geq p-1\)_,_ \(t_{1}^{(\varepsilon)}=\{a+s_{\varepsilon}\}+\delta_{\varepsilon}+1\) _and_ \(t_{2}^{(\varepsilon)}=s_{\varepsilon}+\delta_{\varepsilon}+1\)_._ _Then for_ \(k=k_{\varepsilon}+(p-1)k_{\bullet}\) _with_ \(k_{\bullet}\in\mathbb{Z}_{\geq 0}\)_, we have_ \[d_{k}^{\rm{ur}}(\varepsilon_{1})=\Big{\lfloor}\frac{k_{\bullet}-t_{1}^{( \varepsilon)}}{p+1}\Big{\rfloor}+\Big{\lfloor}\frac{k_{\bullet}-t_{2}^{( \varepsilon)}}{p+1}\Big{\rfloor}+2.\]
4. _Recall the power basis_ \({\bf B}^{(\varepsilon)}=\{{\bf e}_{1}^{(\varepsilon)},{\bf e}_{2}^{( \varepsilon)},\dots\}\)_. Define the_ \(n\)th Hodge slope _to be_ \[\lambda_{n}^{(\varepsilon)}:=\deg{\bf e}_{n}^{(\varepsilon)}-\Big{\lfloor} \frac{\deg{\bf e}_{n}^{(\varepsilon)}}{p}\Big{\rfloor}.\] _If_ \(a+s_{\varepsilon}<p-1\)_, we have_ (2.12.1) \[\deg g_{n+1}^{(\varepsilon)}(w)-\deg g_{n}^{(\varepsilon)}(w)-\lambda_{n+1}^{ (\varepsilon)}=\begin{cases}1&\text{ if }n-2s_{\varepsilon}\equiv 1,3,\dots,2a+1 \bmod 2p,\\ -1&\text{ if }n-2s_{\varepsilon}\equiv 2,4,\dots,2a+2\bmod 2p,\\ 0&\text{ otherwise.}\end{cases}\] _If_ \(a+s_{\varepsilon}\geq p-1\)_, we have_ (2.12.2) \[\deg g_{n+1}^{(\varepsilon)}(w)-\deg g_{n}^{(\varepsilon)}(w)-\lambda_{n+1}^{ (\varepsilon)}=\begin{cases}1&\text{ if }n-2s_{\varepsilon}\equiv 2,4,\dots,2a+2 \bmod 2p,\\ -1&\text{ if }n-2s_{\varepsilon}\equiv 3,5,\dots,2a+3\bmod 2p,\\ 0&\text{ otherwise.}\end{cases}\] _In either case, we have_ (2.12.3) \[\deg g_{n}^{(\varepsilon)}(w)-(\lambda_{1}^{(\varepsilon)}+\dots+\lambda_{n}^ {(\varepsilon)})=\begin{cases}0&\text{ if }\deg{\bf e}_{n+1}-\deg{\bf e}_{n}=a,\\ 0\text{ or }1&\text{ if }\deg{\bf e}_{n+1}-\deg{\bf e}_{n}=p-1-a.\end{cases}\] _Moreover, the differences_ \(\deg g_{n+1}^{(\varepsilon)}(w)-\deg g_{n}^{(\varepsilon)}(w)\) _are strictly increasing in_ \(n\)_._
Proof.: For (1), see [11, Proposition 4.1]. For (2), see [11, Corollary 4.4]. For (3), see [11, Proposition 4.7]. For (4), see [11, Proposition 4.11].
It will be helpful to recall here the following example from [11, Example 2.25], which may help motivate some of the arguments later.
**Example 2.13**.: Suppose that \(p=7\) and \(a=2\). We list below the dimensions \(d_{k}^{\rm{Iw}}\big(\varepsilon\cdot(1\times\omega^{2-k})\big)\) for small \(k\)'s.
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \(\varepsilon\) & \(k\) & \(2\) & \(3\) & \(4\) & \(5\) & \(6\) & \(7\) & \(8\) & \(9\) & \(10\) & \(11\) & \(12\) & \(13\) & \(14\) \\ \hline \(1\times\omega^{2}\) & \(d_{k}^{\rm Iw}(1\times\omega^{4-k})=\lfloor\frac{k+2}{6}\rfloor+\lfloor\frac{k+4}{6}\rfloor\) & \(1\) & \(1\) & \(2^{*}\) & \(2\) & \(2\) & \(2\) & \(3\) & \(3\) & \(4^{*}\) & \(4\) & \(4\) & \(4\) & \(5\) \\ \hline \(\omega^{5}\times\omega^{3}\) & \(d_{k}^{\rm Iw}(\omega^{5}\times\omega^{5-k})=\lfloor\frac{k+1}{6}\rfloor+\lfloor\frac{k+3}{6}\rfloor\) & \(0\) & \(1\) & \(1\) & \(2\) & \(2^{*}\) & \(2\) & \(2\) & \(3\) & \(3\) & \(4\) & \(4^{*}\) & \(4\) & \(4\) \\ \hline \(\omega^{4}\times\omega^{4}\) & \(d_{k}^{\rm Iw}(\omega^{4}\times\omega^{-k})=\lfloor\frac{k}{6}\rfloor+\lfloor\frac{k-2}{6}\rfloor\) & \(0^{*}\) & \(0\) & \(1\) & \(1\) & \(2\) & \(2\) & \(2^{*}\) & \(2\) & \(3\) & \(3\) & \(4\) & \(4\) & \(4^{*}\) \\ \hline \(\omega^{3}\times\omega^{5}\) & \(d_{k}^{\rm Iw}(\omega^{3}\times\omega^{1-k})=\lfloor\frac{k-1}{6}\rfloor+\lfloor\frac{k+1}{6}\rfloor\) & \(0\) & \(0\) & \(0^{*}\) & \(1\) & \(1\) & \(2\) & \(2\) & \(2^{*}\) & \(3\) & \(3\) & \(4\) & \(4\) & \(4^{*}\) \\ \hline \(\omega^{2}\times 1\) & \(d_{k}^{\rm Iw}(\omega^{2}\times\omega^{2-k})=\lfloor\frac{k+4}{6}\rfloor+\lfloor\frac{k}{6}\rfloor\) & \(1\) & \(1\) & \(1\) & \(1\) & \(2^{*}\) & \(2\) & \(3\) & \(3\) & \(3\) & \(3\) & \(4^{*}\) & \(4\) & \(5\) \\ \hline \(\omega\times\omega\) & \(d_{k}^{\rm Iw}(\omega\times\omega^{3-k})=\lfloor\frac{k+3}{6}\rfloor+\lfloor\frac{k-1}{6}\rfloor\) & \(0^{*}\) & \(1\) & \(1\) & \(1\) & \(1\) & \(2\) & \(2^{*}\) & \(3\) & \(3\) & \(3\) & \(3\) & \(4\) & \(4^{*}\) \\ \hline \end{tabular}
The superscript \(*\) indicates where the character is equal to \(\tilde{\varepsilon}_{1}\), in which case \(d_{k}^{\rm ur}(\varepsilon_{1})\) makes sense. In the table below, we list the information on dimensions of abstract classical forms with level \(\mathrm{K}_{p}\) and \(\mathrm{Iw}_{p}\).
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \(\varepsilon\) & Triples \(\left(k,\ d_{k}^{\rm ur}(\varepsilon_{1}),\ d_{k}^{\rm new}(\varepsilon_{1})\right)\) on the corresponding weight disk \\ \hline \(1\times\omega^{2}\) & \((4,1,0)\) & \((10,1,2)\) & \((16,1,4)\) & \((22,1,6)\) & \((28,2,6)\) & \((34,2,8)\) & \((40,2,10)\) \\ \hline \(\omega^{5}\times\omega^{3}\) & \((6,0,2)\) & \((12,1,2)\) & \((18,1,4)\) & \((24,1,6)\) & \((30,1,8)\) & \((36,2,8)\) & \((42,2,10)\) \\ \hline \(\omega^{4}\times\omega^{4}\) & \((2,0,0)\) & \((8,0,2)\) & \((14,0,4)\) & \((20,1,4)\) & \((26,1,6)\) & \((32,1,8)\) & \((38,1,10)\) \\ \hline \(\omega^{3}\times\omega^{5}\) & \((4,0,0)\) & \((10,0,2)\) & \((16,0,4)\) & \((22,0,6)\) & \((28,1,6)\) & \((34,1,8)\) & \((40,1,10)\) \\ \hline \(\omega^{2}\times 1\) & \((6,0,2)\) & \((12,1,2)\) & \((18,1,4)\) & \((24,1,6)\) & \((30,1,8)\) & \((36,2,8)\) & \((42,2,10)\) \\ \hline \(\omega\times\omega\) & \((2,0,0)\) & \((8,0,2)\) & \((14,0,4)\) & \((20,1,4)\) & \((26,1,6)\) & \((32,1,8)\) & \((38,1,10)\) \\ \hline \end{tabular}
The first four terms of the ghost series on the \(\varepsilon=(1\times\omega^{2})\)-weight disk (corresponding to the first rows in the above two tables) are as follows.
\[g_{1}^{(\varepsilon)}(w) =1,\] \[g_{2}^{(\varepsilon)}(w) =(w-w_{10})(w-w_{16})(w-w_{22}),\] \[g_{3}^{(\varepsilon)}(w) =(w-w_{16})^{2}(w-w_{22})^{2}(w-w_{28})(w-w_{34})(w-w_{40})(w-w_{4 6}),\] \[g_{4}^{(\varepsilon)}(w) =(w-w_{16})(w-w_{22})^{3}(w-w_{28})^{2}\cdots(w-w_{46})^{2}(w-w_{52 })\cdots(w-w_{70}).\]
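To see how these data are produced, consider the entry \((k,d_{k}^{\mathrm{ur}}(\varepsilon_{1}),d_{k}^{\mathrm{new}}(\varepsilon_{1}))=(10,1,2)\) on the \(\varepsilon=(1\times\omega^{2})\)-disk. Here \(k_{\varepsilon}=4\), so \(k=10\) corresponds to \(k_{\bullet}=1\); moreover \(s_{\varepsilon}=0\), \(\delta_{\varepsilon}=0\), \(t_{1}^{(\varepsilon)}=0\), and \(t_{2}^{(\varepsilon)}=4\). Definition-Proposition 2.12 then gives
\[d_{10}^{\mathrm{Iw}}(\tilde{\varepsilon}_{1})=2\cdot 1+2-0=4,\qquad d_{10}^{\mathrm{ur}}(\varepsilon_{1})=\Big{\lfloor}\tfrac{1-0}{8}\Big{\rfloor}+\Big{\lfloor}\tfrac{1-4}{8}\Big{\rfloor}+2=0-1+2=1,\qquad d_{10}^{\mathrm{new}}(\varepsilon_{1})=4-2\cdot 1=2.\]
By (2.5.2), the multiplicity of the ghost zero \(w_{10}\) in \(g_{n}^{(\varepsilon)}(w)\) is \(0,1,0,0,\ldots\) for \(n=1,2,3,\ldots\), which is why \((w-w_{10})\) appears exactly once above, namely in \(g_{2}^{(\varepsilon)}(w)\).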
Before proceeding, we prove an interesting coincidence of ghost series, for which we temporarily drop the condition \(b=0\) in Hypothesis 2.9. This is of crucial importance for our later argument to treat the residually split case.
**Proposition 2.14**.: _Consider the residual representation \(\bar{\rho}^{\prime}:\mathrm{I}_{\mathbb{Q}_{p}}\to\mathrm{GL}_{2}(\mathbb{F})\) given by_
\[\bar{\rho}^{\prime}\simeq\begin{pmatrix}1&*\neq 0\\ 0&\omega_{1}^{a+1}\end{pmatrix}=\begin{pmatrix}\omega_{1}^{(p-3-a)+(a+1)+1}&* \neq 0\\ 0&\omega_{1}^{a+1}\end{pmatrix}.\]
_Set \(a^{\prime}=p-3-a\) and \(b^{\prime}=a+1\) accordingly. For \(s_{\varepsilon}\in\{0,\ldots,p-2\}\), write \(s_{\varepsilon}^{\prime}=\{a+s_{\varepsilon}+1\}\) so that \(\varepsilon=\omega^{-s_{\varepsilon}}\times\omega^{a+s_{\varepsilon}}=\omega^{-s _{\varepsilon}^{\prime}+b^{\prime}}\times\omega^{a^{\prime}+s_{\varepsilon}^{ \prime}+b^{\prime}}\)._
1. _When_ \(s_{\varepsilon}\notin\{0,p-2-a\}\)_, we have_ \[G_{\bar{\rho}}^{(\varepsilon)}(w,t)=G_{\bar{\rho}^{\prime}}^{(\varepsilon)}(w,t).\] _In the other two cases, we have_ (2.14.1) \[G_{\bar{\rho}}^{(1\times\omega^{a})}(w,t)=1+tG_{\bar{\rho}^{\prime}}^{(1\times \omega^{a})}(w,t)\quad\text{and}\quad G_{\bar{\rho}^{\prime}}^{(\omega^{a+1} \times\omega^{-1})}(w,t)=1+tG_{\bar{\rho}}^{(\omega^{a+1}\times\omega^{-1})}(w,t).\]
2. _Fix_ \(w_{\star}\in\mathfrak{m}_{\mathbb{C}_{p}}\)_. The Newton polygons_ \(\operatorname{NP}\left(G_{\bar{\rho}}^{(\varepsilon)}(w_{\star},-)\right)\) _and_ \(\operatorname{NP}\left(G_{\bar{\rho}^{\prime}}^{(\varepsilon)}(w_{\star},-)\right)\) _agree, except that when_ \(\varepsilon=1\times\omega^{a}\) _(resp._ \(\varepsilon=\omega^{a+1}\times\omega^{-1}\)_),_ \(\operatorname{NP}\left(G_{\bar{\rho}}^{(\varepsilon)}(w_{\star},-)\right)\) _has one more (resp. one less) slope_ \(0\) _segment than that of_ \(\operatorname{NP}\left(G_{\bar{\rho}^{\prime}}^{(\varepsilon)}(w_{\star},-)\right)\)_._
**Remark 2.15**.: The representations \(\bar{\rho}\) and \(\bar{\rho}^{\prime}\) have the same semisimplification. On the Galois side, the Galois representations associated to, say, overconvergent modular forms are typically irreducible, in which case one cannot distinguish the different reductions \(\bar{\rho}\) and \(\bar{\rho}^{\prime}\). This is reflected in the statement of Proposition 2.14: the ghost series for \(\bar{\rho}\) is almost the same as the ghost series for \(\bar{\rho}^{\prime}\) over the same weight disk. Moreover, the additional subtle relation in (2.14.1) accounts for the cases when the associated Galois representations are ordinary (and reducible).
The Galois side of this proposition is discussed later in SS 7.11, and later used in Theorem 7.6 to extend our results from the reducible nonsplit case to the reducible split case.
Proof of Proposition 2.14.: (1) We add a prime to indicate the corresponding construction for \(\bar{\rho}^{\prime}\), e.g. we write \(k^{\prime}_{\varepsilon}\), \(d^{\mathrm{Iw}\prime}_{k}(\tilde{\varepsilon}_{1})\), etc. First of all, for the given \(s_{\varepsilon}\), we have
\[k_{\varepsilon}=2+\{a+2s_{\varepsilon}\}=2+\{a^{\prime}+2s^{\prime}_{ \varepsilon}\}=k^{\prime}_{\varepsilon}.\]
This means that the ghost zeros of \(G_{\bar{\rho}}^{(\varepsilon)}(w,t)\) and of \(G_{\bar{\rho}^{\prime}}^{(\varepsilon)}(w,t)\) are congruent modulo \(p-1\). The main difference comes from Definition-Proposition 2.12(2):
\[\delta_{\varepsilon}-\delta^{\prime}_{\varepsilon}=\Big{\lfloor}\frac{s_{ \varepsilon}+\{a+s_{\varepsilon}\}}{p-1}\Big{\rfloor}-\Big{\lfloor}\frac{\{a +s_{\varepsilon}+1\}+\{s_{\varepsilon}-1\}}{p-1}\Big{\rfloor}=\begin{cases}-1& \text{if }s_{\varepsilon}=0\\ 1&\text{if }s_{\varepsilon}=p-2-a\\ 0&\text{otherwise}.\end{cases}\]
For \(k=k_{\varepsilon}+(p-1)k_{\bullet}\) with \(k_{\bullet}\in\mathbb{Z}_{\geq 0}\), Definition-Proposition 2.12(2) says that
\[d^{\mathrm{Iw}}_{k}(\tilde{\varepsilon}_{1})=2k_{\bullet}+2-2\delta_{\varepsilon},\quad d^{\mathrm{Iw}\prime}_{k}(\tilde{\varepsilon}_{1})=2k_{\bullet}+2-2\delta^{\prime}_{\varepsilon}. \tag{2.15.1}\]
For computing \(d^{\mathrm{ur}}_{k}(\varepsilon_{1})\) and \(d^{\mathrm{ur}\prime}_{k}(\varepsilon_{1})\), we list the values of \(t_{1}^{(\varepsilon)}\), \(t_{2}^{(\varepsilon)}\), \(t_{1}^{(\varepsilon)\prime}\), and \(t_{2}^{(\varepsilon)\prime}\) in the following table (see the definition in Definition-Proposition 2.12(3)).
\begin{tabular}{|c|c|c|c|c|} \hline & \(s_{\varepsilon}=0\) & \(1\leq s_{\varepsilon}\leq p-3-a\) & \(s_{\varepsilon}=p-2-a\) & \(s_{\varepsilon}\geq p-1-a\) \\ \hline \hline \(t_{1}^{(\varepsilon)}\) & \(\delta_{\varepsilon}\) & \(s_{\varepsilon}+\delta_{\varepsilon}\) & \(p-2-a+\delta_{\varepsilon}\) & \(a+s_{\varepsilon}+\delta_{\varepsilon}-p+2\) \\ \hline \(t_{2}^{(\varepsilon)}\) & \(a+\delta_{\varepsilon}+2\) & \(a+s_{\varepsilon}+\delta_{\varepsilon}+2\) & \(p+\delta_{\varepsilon}\) & \(s_{\varepsilon}+\delta_{\varepsilon}+1\) \\ \hline \(t_{1}^{(\varepsilon)^{\prime}}\) & \(a+\delta_{\varepsilon}+2\) & \(s+\delta_{\varepsilon}\) & \(\delta_{\varepsilon}-1\) & \(a+s_{\varepsilon}+\delta_{\varepsilon}-p+2\) \\ \hline \(t_{2}^{(\varepsilon)^{\prime}}\) & \(p+1+\delta_{\varepsilon}\) & \(a+s_{\varepsilon}+\delta_{\varepsilon}+2\) & \(p-2-a+\delta_{\varepsilon}\) & \(s_{\varepsilon}+\delta_{\varepsilon}+1\) \\ \hline \end{tabular} This together with Definition-Proposition 2.12(3) (and (2.15.1)) implies the following.
* When \(s_{\varepsilon}\not\in\{0,p-2-a\}\), \(t_{i}^{(\varepsilon)}=t_{i}^{(\varepsilon)\prime}\) for \(i=1,2\). So for every \(k=k_{\varepsilon}+(p-1)k_{\bullet}\) as above, \(d^{\mathrm{Iw}}_{k}(\tilde{\varepsilon}_{1})=d^{\mathrm{Iw}\prime}_{k}(\tilde{\varepsilon}_{1})\) and \(d^{\mathrm{ur}}_{k}(\varepsilon_{1})=d^{\mathrm{ur}\prime}_{k}(\varepsilon_{1})\). This implies that \(G_{\bar{\rho}}^{(\varepsilon)}(w,t)=G_{\bar{\rho}^{\prime}}^{(\varepsilon)}(w,t)\).
* When \(s_{\varepsilon}=0\), we have \(\varepsilon=1\times\omega^{a}\). In this case, \(t_{1}^{(\varepsilon)\prime}=t_{2}^{(\varepsilon)}\), yet \(t_{2}^{(\varepsilon)\prime}=t_{1}^{(\varepsilon)}+p+1\), and \(\delta^{\prime}_{\varepsilon}=\delta_{\varepsilon}+1\). It follows that for every \(k=k_{\varepsilon}+(p-1)k_{\bullet}\) as above, \[d^{\mathrm{Iw}}_{k}(\tilde{\varepsilon}_{1})=d^{\mathrm{Iw}\prime}_{k}(\tilde{\varepsilon}_{1})+2\quad\text{and}\quad d^{\mathrm{ur}}_{k}(\varepsilon_{1})=d^{\mathrm{ur}\prime}_{k}(\varepsilon_{1})+1.\] This implies that \(m_{n}^{(\varepsilon)}(k)=m_{n-1}^{(\varepsilon)\prime}(k)\) for every \(n\geq 1\). It follows that \(G_{\bar{\rho}}^{(1\times\omega^{a})}(w,t)=1+tG_{\bar{\rho}^{\prime}}^{(1\times\omega^{a})}(w,t)\).
* When \(s_{\varepsilon}=p-2-a\), \(\varepsilon=\omega^{a+1}\times\omega^{-1}\). In this case, the roles of \(\bar{\rho}\) and \(\bar{\rho}^{\prime}\) are somewhat swapped, and we deduce that \[d_{k}^{\mathrm{Iw}\prime}(\tilde{\varepsilon}_{1})=d_{k}^{\mathrm{Iw}}(\tilde{ \varepsilon}_{1})+2\quad\text{and}\quad d_{k}^{\mathrm{ur}\prime}(\varepsilon_{ 1})=d_{k}^{\mathrm{ur}}(\varepsilon_{1})+1.\] This implies that \(G_{\bar{\rho}^{\prime}}^{(\omega^{a+1}\times\omega^{-1})}(w,t)=1+tG_{\bar{ \rho}}^{(\omega^{a+1}\times\omega^{-1})}(w,t)\).
Part (2) of the Proposition follows from (1) immediately.
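For a concrete instance, take \(p=7\) and \(a=2\) as in Example 2.13, so that \(a^{\prime}=p-3-a=2\) and \(b^{\prime}=a+1=3\). For \(\varepsilon=1\times\omega^{2}\) (i.e. \(s_{\varepsilon}=0\)) one has \(\delta_{\varepsilon}=0\) and hence \(\delta^{\prime}_{\varepsilon}=1\); at \(k=10\) this gives
\[\big{(}d_{10}^{\mathrm{Iw}}(\tilde{\varepsilon}_{1}),\,d_{10}^{\mathrm{ur}}(\varepsilon_{1})\big{)}=(4,1)\ \text{ for }\bar{\rho},\qquad\big{(}d_{10}^{\mathrm{Iw}\prime}(\tilde{\varepsilon}_{1}),\,d_{10}^{\mathrm{ur}\prime}(\varepsilon_{1})\big{)}=(2,0)\ \text{ for }\bar{\rho}^{\prime},\]
so the ghost zero \(w_{10}\) appears in \(g_{2}^{(\varepsilon)}\) of the ghost series for \(\bar{\rho}\) but in \(g_{1}^{(\varepsilon)}\) of the ghost series for \(\bar{\rho}^{\prime}\), matching the shift by \(t\) in (2.14.1).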
The slopes predicted by ghost series also satisfy properties analogous to the theta maps and the Atkin-Lehner involutions, as stated below.
**Proposition 2.16**.: _Let \(\varepsilon\) be a relevant character. For \(k\equiv k_{\varepsilon}\bmod(p-1)\), we write_
\[g_{n,\hat{k}}^{(\varepsilon)}(w):=g_{n}^{(\varepsilon)}(w)\big{/}(w-w_{k})^{m_{n}^{(\varepsilon)}(k)}. \tag{2.16.1}\]
_Fix \(k_{0}\geq 2\). Write \(d:=d_{k_{0}}^{\mathrm{lw}}(\varepsilon\cdot(1\times\omega^{2-k_{0}}))\) in this proposition._
1. _(Compatibility with theta maps) Put_ \(\varepsilon^{\prime}:=\varepsilon\cdot(\omega^{k_{0}-1}\times\omega^{1-k_{0}})\) _with_ \(s_{\varepsilon^{\prime}}=\{s_{\varepsilon}+1-k_{0}\}\)_. For every_ \(\ell\geq 1\)_, the_ \((d+\ell)\)_th slope of_ \(\mathrm{NP}(G^{(\varepsilon)}(w_{k_{0}},-))\) _is_ \(k_{0}-1\) _plus the_ \(\ell\)_th slope of_ \(\mathrm{NP}(G^{(\varepsilon^{\prime})}(w_{k_{0}},-))\)_. In particular, the_ \((d+\ell)\)_th slope of_ \(\mathrm{NP}(G^{(\varepsilon)}(w_{k_{0}},-))\) _is at least_ \(k_{0}-1\)_._
2. _(Compatibility with Atkin-Lehner involutions) Assume that_ \(k_{0}\not\equiv k_{\varepsilon}\bmod(p-1)\)_. Put_ \(\varepsilon^{\prime\prime}=\omega^{-s_{\varepsilon^{\prime\prime}}}\times \omega^{a+s_{\varepsilon^{\prime\prime}}}\) _with_ \(s_{\varepsilon^{\prime\prime}}:=\{k_{0}-2-a-s_{\varepsilon}\}\)_. Then for every_ \(\ell\in\{1,\ldots,d\}\)_, the sum of the_ \(\ell\)_th slope of_ \(\mathrm{NP}(G^{(\varepsilon)}(w_{k_{0}},-))\) _and the_ \((d-\ell+1)\)_th slope of_ \(\mathrm{NP}(G^{(\varepsilon^{\prime\prime})}(w_{k_{0}},-))\) _is exactly_ \(k_{0}-1\)_. In particular, the_ \(\ell\)_th slope of_ \(\mathrm{NP}(G^{(\varepsilon)}(w_{k_{0}},-))\) _is at most_ \(k_{0}-1\)_._
3. _(Compatibility with_ \(p\)_-stabilizations) Assume that_ \(k_{0}\equiv k_{\varepsilon}\bmod(p-1)\)_. Then for every_ \(\ell\in\{1,\ldots,d_{k_{0}}^{\mathrm{ur}}(\varepsilon_{1})\}\)_, the sum of the_ \(\ell\)_th slope of_ \(\mathrm{NP}(G^{(\varepsilon)}(w_{k_{0}},-))\) _and the_ \((d-\ell+1)\)_th slope of_ \(\mathrm{NP}(G^{(\varepsilon)}(w_{k_{0}},-))\) _is exactly_ \(k_{0}-1\)_._
4. _(Gouvea's inequality) Assume that_ \(k_{0}\equiv k_{\varepsilon}\bmod(p-1)\)_. Then the first_ \(d_{k_{0}}^{\mathrm{ur}}(\varepsilon_{1})\) _slopes of_ \(\mathrm{NP}(G^{(\varepsilon)}(w_{k_{0}},-))\) _are all less than or equal to_ (2.16.2) \[\frac{p-1}{2}(d_{k_{0}}^{\mathrm{ur}}(\varepsilon_{1})-1)-\delta_{\varepsilon} +\beta_{[d_{k_{0}}^{\mathrm{ur}}(\varepsilon_{1})-1]}^{(\varepsilon)}\leq \Big{\lfloor}\frac{k_{0}-1-\min\{a+1,p-2-a\}}{p+1}\Big{\rfloor},\] _where for the relevant_ \(\varepsilon\)_, we set_ \(\beta_{[n]}^{(\varepsilon)}=\begin{cases}t_{1}^{(\varepsilon)}&\text{if $n$ is even}\\ t_{2}^{(\varepsilon)}-\frac{p+1}{2}&\text{if $n$ is odd}.\end{cases}\)__
5. _(Ghost duality) Assume_ \(k_{0}\equiv k_{\varepsilon}\bmod(p-1)\)_. Then for each_ \(\ell=0,\ldots,\frac{1}{2}d_{k_{0}}^{\mathrm{new}}(\varepsilon_{1})-1\)_,_ (2.16.3) \[v_{p}\big{(}g_{d_{k_{0}}^{\mathrm{Iw}}(\tilde{\varepsilon}_{1})-d_{k_{0}}^{\mathrm{ur}}(\varepsilon_{1})-\ell,\hat{k}_{0}}^{(\varepsilon)}(w_{k_{0}})\big{)}-v_{p}\big{(}g_{d_{k_{0}}^{\mathrm{ur}}(\varepsilon_{1})+\ell,\hat{k}_{0}}^{(\varepsilon)}(w_{k_{0}})\big{)}=(k_{0}-2)\cdot(\tfrac{1}{2}d_{k_{0}}^{\mathrm{new}}(\varepsilon_{1})-\ell).\] _In particular, the_ \((d_{k_{0}}^{\mathrm{ur}}(\varepsilon_{1})+1)\)_th to the_ \((d_{k_{0}}^{\mathrm{Iw}}(\tilde{\varepsilon}_{1})-d_{k_{0}}^{\mathrm{ur}}(\varepsilon_{1}))\)_th slopes of_ \(\mathrm{NP}(G^{(\varepsilon)}(w_{k_{0}},-))\) _are all equal to_ \(\frac{k_{0}-2}{2}\)_._
6. _(Ghost duality variant) Assume that_ \(k_{0}\equiv k_{\varepsilon}\bmod(p-1)\)_. We set_ (2.16.4) \[\Delta_{k_{0},\ell}^{\prime(\varepsilon)}:=v_{p}\big{(}g_{\frac{1}{2}d_{k_{0}}^{\mathrm{Iw}}(\tilde{\varepsilon}_{1})+\ell,\hat{k}_{0}}^{(\varepsilon)}(w_{k_{0}})\big{)}-\tfrac{k_{0}-2}{2}\ell,\quad\text{for }\ell=-\tfrac{1}{2}d_{k_{0}}^{\mathrm{new}}(\varepsilon_{1}),\ldots,\tfrac{1}{2}d_{k_{0}}^{\mathrm{new}}(\varepsilon_{1}).\] _Let_ \(\underline{\Delta}_{k_{0}}^{(\varepsilon)}\) _denote the convex hull of the points_ \((\ell,\Delta_{k_{0},\ell}^{\prime(\varepsilon)})\) _for_ \(\ell=-\tfrac{1}{2}d_{k_{0}}^{\mathrm{new}}(\varepsilon_{1}),\ldots,\tfrac{1}{2}d_{k_{0}}^{\mathrm{new}}(\varepsilon_{1})\)_, and let_ \((\ell,\Delta_{k_{0},\ell}^{(\varepsilon)})\) _denote the corresponding points on_ \(\underline{\Delta}_{k_{0}}^{(\varepsilon)}\)_. Then we have_ (2.16.5) \[\Delta_{k_{0},\ell}^{\prime(\varepsilon)}=\Delta_{k_{0},-\ell}^{\prime(\varepsilon)}\quad\text{and}\quad\Delta_{k_{0},\ell}^{(\varepsilon)}=\Delta_{k_{0},-\ell}^{(\varepsilon)}\quad\text{for all }\ell=-\tfrac{1}{2}d_{k_{0}}^{\mathrm{new}}(\varepsilon_{1}),\ldots,\tfrac{1}{2}d_{k_{0}}^{\mathrm{new}}(\varepsilon_{1}).\]
Proof.: (1), (2), (3), and (5) are [LTXZ22\({}^{+}\), Proposition 4.18(1)(2)(3)(4)], respectively. (4) is [LTXZ22\({}^{+}\), Proposition 4.28]. (6) is a corollary of (5); see [LTXZ22\({}^{+}\), Notation 5.1] for a more careful discussion.
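To illustrate the ghost duality (2.16.3) numerically, take \(p=7\), \(a=2\), \(\varepsilon=1\times\omega^{2}\), and \(k_{0}=10\) as in Example 2.13, so that \(d_{10}^{\mathrm{ur}}(\varepsilon_{1})=1\), \(d_{10}^{\mathrm{Iw}}(\tilde{\varepsilon}_{1})=4\), and \(\tfrac{1}{2}d_{10}^{\mathrm{new}}(\varepsilon_{1})=1\). Granting the standard estimate \(v_{p}(w_{k}-w_{k^{\prime}})=1+v_{p}(k-k^{\prime})\) for \(k\equiv k^{\prime}\bmod(p-1)\), the ghost series in Example 2.13 give, for \(\ell=0\),
\[v_{p}\big{(}g_{3,\hat{k}_{0}}^{(\varepsilon)}(w_{10})\big{)}-v_{p}\big{(}g_{1,\hat{k}_{0}}^{(\varepsilon)}(w_{10})\big{)}=(2+2+1+1+1+1)-0=8=(k_{0}-2)\cdot\tfrac{1}{2}d_{10}^{\mathrm{new}}(\varepsilon_{1}),\]
and correspondingly the \(2\)nd and \(3\)rd slopes of \(\mathrm{NP}(G^{(\varepsilon)}(w_{10},-))\) are both equal to \(\tfrac{k_{0}-2}{2}=4\).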
In [LTXZ22\({}^{+}\), SS 5], we carefully studied the properties of the vertices of the Newton polygon of ghost series. We record the main definition and results here.
**Definition 2.17**.: ([LTXZ22\({}^{+}\), Definition 5.11]) Fix a relevant character \(\varepsilon=\omega^{-s_{\varepsilon}}\times\omega^{a+s_{\varepsilon}}\). For \(k\equiv k_{\varepsilon}\bmod(p-1)\) and \(w_{\star}\in\mathfrak{m}_{\mathbb{C}_{p}}\), let \(L^{(\varepsilon)}_{w_{\star},k}\) denote the largest number (if it exists) in \(\{1,\ldots,\frac{1}{2}d^{\rm new}_{k}(\varepsilon_{1})\}\) such that
\[v_{p}(w_{\star}-w_{k})\geq\Delta^{(\varepsilon)}_{k,L^{(\varepsilon)}_{w_{ \star},k}}-\Delta^{(\varepsilon)}_{k,L^{(\varepsilon)}_{w_{\star},k}-1}. \tag{2.17.1}\]
When such \(L^{(\varepsilon)}_{w_{\star},k}\) exists, we call the intervals
\[{\rm nS}^{(\varepsilon)}_{w_{\star},k}:=\big{(}\tfrac{1}{2}d^{\rm Iw}_{k}(\tilde{\varepsilon}_{1})-L^{(\varepsilon)}_{w_{\star},k},\,\tfrac{1}{2}d^{\rm Iw}_{k}(\tilde{\varepsilon}_{1})+L^{(\varepsilon)}_{w_{\star},k}\big{)}\subset\overline{{\rm nS}}^{(\varepsilon)}_{w_{\star},k}:=\big{[}\tfrac{1}{2}d^{\rm Iw}_{k}(\tilde{\varepsilon}_{1})-L^{(\varepsilon)}_{w_{\star},k},\,\tfrac{1}{2}d^{\rm Iw}_{k}(\tilde{\varepsilon}_{1})+L^{(\varepsilon)}_{w_{\star},k}\big{]}\]
the _near-Steinberg range_ for \((w_{\star},k)\). When no such \(L^{(\varepsilon)}_{w_{\star},k}\) exists, write \({\rm nS}^{(\varepsilon)}_{w_{\star},k}=\overline{{\rm nS}}^{(\varepsilon)}_{w _{\star},k}=\emptyset\).
For a positive integer \(n\), we say \((\varepsilon,w_{\star},n)\) or simply \((w_{\star},n)\) is _near-Steinberg_ if \(n\) belongs to the near-Steinberg range \({\rm nS}^{(\varepsilon)}_{w_{\star},k}\) for some \(k\).
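For a small example of these notions, keep \(p=7\), \(a=2\), \(\varepsilon=1\times\omega^{2}\), and \(k=10\) as above, and again grant \(v_{p}(w_{k}-w_{k^{\prime}})=1+v_{p}(k-k^{\prime})\) for \(k\equiv k^{\prime}\bmod(p-1)\). Then \(\tfrac{1}{2}d_{10}^{\mathrm{new}}(\varepsilon_{1})=1\), \(\tfrac{1}{2}d_{10}^{\mathrm{Iw}}(\tilde{\varepsilon}_{1})=2\), and one computes from Example 2.13
\[\Delta_{10,0}^{\prime(\varepsilon)}=v_{p}\big{(}g_{2,\hat{k}}^{(\varepsilon)}(w_{10})\big{)}=v_{p}\big{(}(w_{10}-w_{16})(w_{10}-w_{22})\big{)}=2,\qquad\Delta_{10,\pm 1}^{\prime(\varepsilon)}=4,\]
so all three points are vertices of \(\underline{\Delta}_{10}^{(\varepsilon)}\) and \(\Delta_{10,1}^{(\varepsilon)}-\Delta_{10,0}^{(\varepsilon)}=2\). Hence \(L_{w_{\star},10}^{(\varepsilon)}=1\) exists precisely when \(v_{p}(w_{\star}-w_{10})\geq 2\), in which case \({\rm nS}_{w_{\star},10}^{(\varepsilon)}=(1,3)\): the single index \(n=2\) is near-Steinberg for such \(w_{\star}\).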
**Proposition 2.18**.: _Fix a relevant character \(\varepsilon\) and \(w_{\star}\in\mathfrak{m}_{\mathbb{C}_{p}}\)._
1. _For any integer_ \(k^{\prime}=k_{\varepsilon}+(p-1)k^{\prime}_{\bullet}\neq k\) _with_ \(v_{p}(w_{k^{\prime}}-w_{k})\geq\Delta_{k,L_{w_{\star},k}}-\Delta_{k,L_{w_{\star},k}-1}\)_, we have the following exclusion_ \[\tfrac{1}{2}d^{\rm Iw}_{k^{\prime}}\notin\overline{{\rm nS}}_{w_{\star},k}\quad\text{and}\quad d^{\rm ur}_{k^{\prime}},d^{\rm Iw}_{k^{\prime}}-d^{\rm ur}_{k^{\prime}}\notin{\rm nS}_{w_{\star},k}.\]
2. _For every_ \(n\in\mathbb{N}\)_, the point_ \(\big{(}n,v_{p}(g^{(\varepsilon)}_{n}(w_{\star}))\big{)}\) _is a vertex of_ \({\rm NP}(G^{(\varepsilon)}(w_{\star},-))\) _if and only if_ \((\varepsilon,w_{\star},n)\) _is not near-Steinberg._
3. _For a fixed_ \(n\in\mathbb{N}\)_, the set of elements_ \(w_{\star}\in\mathfrak{m}_{\mathbb{C}_{p}}\) _for which_ \(\big{(}n,v_{p}(g^{(\varepsilon)}_{n}(w_{\star}))\big{)}\) _is a vertex of_ \({\rm NP}\left(G^{(\varepsilon)}(w_{\star},-)\right)\) _form a quasi-Stein open subset of the weight disk_ \(\mathcal{W}^{(\varepsilon)}\)__ \[{\rm Vtx}^{(\varepsilon)}_{n}:=\mathcal{W}^{(\varepsilon)}\backslash\bigcup_{k}\Big{\{}w_{\star}\in\mathfrak{m}_{\mathbb{C}_{p}}\ \Big{|}\ v_{p}(w_{\star}-w_{k})\geq\Delta^{(\varepsilon)}_{k,|\frac{1}{2}d^{\rm Iw}_{k}(\tilde{\varepsilon}_{1})-n|+1}-\Delta^{(\varepsilon)}_{k,|\frac{1}{2}d^{\rm Iw}_{k}(\tilde{\varepsilon}_{1})-n|}\Big{\}},\] _where the union is taken over all_ \(k=k_{\varepsilon}+(p-1)k_{\bullet}\) _with_ \(k_{\bullet}\in\mathbb{Z}\) _such that_ \(n\in\big{(}d^{\rm ur}_{k}(\varepsilon_{1}),d^{\rm Iw}_{k}(\tilde{\varepsilon}_{1})-d^{\rm ur}_{k}(\varepsilon_{1})\big{)}\)_._
4. _The set of near-Steinberg ranges_ \({\rm nS}^{(\varepsilon)}_{w_{\star},k}\) _for all_ \(k\) _is nested, i.e. for any two such open near-Steinberg ranges, either they are disjoint or one is contained in another._ _A near-Steinberg range_ \({\rm nS}^{(\varepsilon)}_{w_{\star},k}\) _is called_ maximal _if it is not contained in other near-Steinberg ranges. Over a maximal near-Steinberg range, the slope of_ \({\rm NP}(G^{(\varepsilon)}(w_{\star},-))\) _belongs to_ (2.18.1) \[\tfrac{a}{2}+\mathbb{Z}+\mathbb{Z}\big{(}\max\{v_{p}(w_{\star}-w_{k^{\prime}})|w _{k^{\prime}}\text{ is a zero of }g^{(\varepsilon)}_{n}(w)\text{ for some }n\in{\rm nS}^{( \varepsilon)}_{w_{\star},k}\}\big{)}.\]
5. _For_ \(k_{0}\equiv k_{\varepsilon}\bmod(p-1)\)_, the following statements are equivalent for_ \(\ell\in\{0,\ldots,\tfrac{1}{2}d^{\rm new}_{k_{0}}(\varepsilon_{1})-1\}\)_:_ 1. _the point_ \((\ell,\Delta^{\prime(\varepsilon)}_{k_{0},\ell})\) _is not a vertex of_ \(\underline{\Delta}^{(\varepsilon)}_{k_{0}}\)_;_ 2. \(\tfrac{1}{2}d^{\rm Iw}_{k_{0}}(\tilde{\varepsilon}_{1})+\ell\in{\rm nS}_{w_{k_{0}},k_{1}}\) _for some_ \(k_{1}>k_{0}\)_; and_ 3. \(\tfrac{1}{2}d^{\rm Iw}_{k_{0}}(\tilde{\varepsilon}_{1})-\ell\in{\rm nS}_{w_{k_{0}},k_{2}}\) _for some_ \(k_{2}<k_{0}\)_._
6. _For any_ \(k_{0}\equiv k_{\varepsilon}\bmod(p-1)\) _and any_ \(k\in\mathbb{Z}\)_, the slopes of_ \({\rm NP}(G^{(\varepsilon)}(w_{k},-))\) _and of_ \(\underline{\Delta}^{(\varepsilon)}_{k_{0}}\) _with multiplicity one belong to_ \(\mathbb{Z}\)_; the other slopes all have even multiplicity and belong to_ \(\frac{a}{2}+\mathbb{Z}\)_._
Proof.: (1) is [11, Proposition 5.16(1)]. (2) is [11, Theorem 5.19(2)]. (3) follows from (2) and Definition 2.17: a point \((\varepsilon,w_{\star},n)\) is near-Steinberg if and only if
\[n\in{\rm nS}^{(\varepsilon)}_{w_{\star},k}=\big{(}\tfrac{1}{2}d^{\rm Iw}_{k}(\tilde{\varepsilon}_{1})-L^{(\varepsilon)}_{w_{\star},k},\,\tfrac{1}{2}d^{\rm Iw}_{k}(\tilde{\varepsilon}_{1})+L^{(\varepsilon)}_{w_{\star},k}\big{)},\]
or equivalently, \(|n-\tfrac{1}{2}d^{\rm Iw}_{k}(\tilde{\varepsilon}_{1})|<L^{(\varepsilon)}_{w_{\star},k}\), for some \(k=k_{\varepsilon}+(p-1)k_{\bullet}\) with \(k_{\bullet}\in\mathbb{Z}_{\geq 0}\); by (2.17.1), this is further equivalent to
\[v_{p}(w_{\star}-w_{k})\geq\Delta^{(\varepsilon)}_{k,|\frac{1}{2}d^{\rm Iw}_{k}(\tilde{\varepsilon}_{1})-n|+1}-\Delta^{(\varepsilon)}_{k,|\frac{1}{2}d^{\rm Iw}_{k}(\tilde{\varepsilon}_{1})-n|}.\]
(4) is a reformulation of [11, Theorem 5.19(1)(3)]. (5) is [11, Proposition 5.26]. (6) combines [11, Corollary 5.24 and Proposition 5.26].
We conclude this section by recalling a technical estimate on the differences of the \(\Delta\)'s that we will frequently use in this paper. The following is [11, Corollary 5.10].
**Proposition 2.19**.: _Assume \(p\geq 7\). Take integers \(\ell,\ell^{\prime},\ell^{\prime\prime}\in\{0,\ldots,\tfrac{1}{2}d^{\rm new}_{k}(\varepsilon_{1})\}\) with \(\ell\leq\ell^{\prime}\leq\ell^{\prime\prime}\) and \(\ell^{\prime\prime}>\ell\). Suppose that \(k^{\prime}=k_{\varepsilon}+(p-1)k^{\prime}_{\bullet}\) is a weight such that_
\[d^{\rm ur}_{k^{\prime}}(\varepsilon_{1}),\text{ or }d^{\rm Iw}_{k^{\prime}}(\tilde{\varepsilon}_{1})-d^{\rm ur}_{k^{\prime}}(\varepsilon_{1})\text{ belongs to }\big{[}\tfrac{1}{2}d^{\rm Iw}_{k}(\tilde{\varepsilon}_{1})-\ell^{\prime},\tfrac{1}{2}d^{\rm Iw}_{k}(\tilde{\varepsilon}_{1})+\ell^{\prime}\big{]}. \tag{2.19.1}\]
_Then we have_
\[\Delta^{(\varepsilon)}_{k,\ell^{\prime\prime}}-\Delta^{\prime(\varepsilon)}_{ k,\ell}-(\ell^{\prime\prime}-\ell^{\prime})\cdot v_{p}(w_{k}-w_{k^{\prime}})\geq( \ell^{\prime}-\ell)\cdot\Big{\lfloor}\frac{\ln((p+1)\ell^{\prime\prime})}{ \ln p}+1\Big{\rfloor}+\frac{1}{2}\big{(}\ell^{\prime\prime 2}-\ell^{2}\big{)}.\]
_In particular, we have_
\[\Delta^{(\varepsilon)}_{k,\ell^{\prime\prime}}-\Delta^{\prime(\varepsilon)}_{ k,\ell}\geq\frac{1}{2}\big{(}\ell^{\prime\prime 2}-\ell^{2}\big{)}+1.\]
**Remark 2.20**.: As pointed out in [11, Corollary 5.10], there are at most two weights \(k^{\prime}\) satisfying both \(v_{p}(w_{k^{\prime}}-w_{k})\geq\big{\lfloor}\frac{\ln((p+1)\ell^{\prime\prime})}{\ln p}+2\big{\rfloor}\) and (2.19.1) with \(\ell^{\prime}\) replaced by \(\ell\). In the case when there are two such weights, say \(k^{\prime}_{1}\) and \(k^{\prime}_{2}\), then up to swapping \(k^{\prime}_{1}\) and \(k^{\prime}_{2}\), we have \(d^{\rm ur}_{k^{\prime}_{1}}(\varepsilon_{1}),d^{\rm Iw}_{k^{\prime}_{2}}(\tilde{\varepsilon}_{1})-d^{\rm ur}_{k^{\prime}_{2}}(\varepsilon_{1})\in\big{[}\tfrac{1}{2}d^{\rm Iw}_{k}(\tilde{\varepsilon}_{1})-\ell^{\prime},\tfrac{1}{2}d^{\rm Iw}_{k}(\tilde{\varepsilon}_{1})+\ell^{\prime}\big{]}\); and between \(d^{\rm ur}_{k^{\prime}_{1}}(\varepsilon_{1})\) and \(d^{\rm Iw}_{k^{\prime}_{2}}(\tilde{\varepsilon}_{1})-d^{\rm ur}_{k^{\prime}_{2}}(\varepsilon_{1})\), one is \(\geq\tfrac{1}{2}d^{\rm Iw}_{k}(\tilde{\varepsilon}_{1})\) and one is \(\leq\tfrac{1}{2}d^{\rm Iw}_{k}(\tilde{\varepsilon}_{1})\).
**Remark 2.21**.: By [11, Lemma 5.2], asymptotically, \(\Delta^{(\varepsilon)}_{k,\ell+1}-\Delta^{(\varepsilon)}_{k,\ell}\sim\tfrac{p-1}{2}\ell\) (when \(\ell\) is large). Intuitively and roughly, the set of vertices \({\rm Vtx}^{(\varepsilon)}_{n}\) in Proposition 2.18(3) is obtained from the open unit disk \(\mathcal{W}^{(\varepsilon)}\) by removing a disk of radius about \(p^{-(a+2)}\) or \(p^{a+1-p}\) centered at \(w_{k^{(\varepsilon)}_{\rm mid}(n)}\), two disks of radius roughly \(p^{1-p}\) centered at \(w_{k^{(\varepsilon)}_{\rm mid}(n)\pm(p-1)}\), \(\ldots\), two disks of radius roughly \(p^{(1-p)\ell/2}\) centered at \(w_{k^{(\varepsilon)}_{\rm mid}(n)\pm\ell(p-1)}\), and so on, where \(k^{(\varepsilon)}_{\rm mid}(n)\) is the unique positive integer \(k\equiv k_{\varepsilon}\bmod(p-1)\) such that \(\tfrac{1}{2}d^{\rm Iw}_{k}(\tilde{\varepsilon}_{1})=n\).
## 3. Two key inputs on abstract classical forms
In this section, we give the two key inputs for our proof of the local ghost conjecture (Theorem 2.7):
(1) The first one is a careful study of the \(p\)-stabilization of abstract classical forms;
(2) The second one is to use the modified Mahler basis to give an estimate of \(\mathrm{U}^{\dagger,(\varepsilon)}\).
**Notation 3.1**.: In this section, we fix a residual representation \(\bar{\rho}=\begin{pmatrix}\omega_{1}^{a+1}&*\neq 0\\ 0&1\end{pmatrix}\) with \(1\leq a\leq p-4\), and a primitive \(\mathcal{O}[\![\mathrm{K}_{p}]\!]\)-projective augmented module \(\widetilde{\mathrm{H}}\) of type \(\bar{\rho}\) on which \(\begin{pmatrix}p&0\\ 0&p\end{pmatrix}\) acts trivially.
We fix a relevant character \(\varepsilon=\omega^{-s_{\varepsilon}}\times\omega^{a+s_{\varepsilon}}\) of \(\Delta^{2}\). When no confusion arises, we suppress \(\varepsilon\) from the notation in the proofs (but still keep the full notations in the statements), for example, writing \(s\), \(d_{k}^{\mathrm{Iw}}\), and \(d_{k}^{\mathrm{ur}}\) for \(s_{\varepsilon}\), \(d_{k}^{\mathrm{Iw}}(\tilde{\varepsilon}_{1})\), and \(d_{k}^{\mathrm{ur}}(\varepsilon_{1})\), respectively.
Before proceeding, we give a very weak Hodge bound for the matrix \(\mathrm{U}^{\dagger,(\varepsilon)}\). A much finer estimate will be given later in this section.
**Proposition 3.2**.: _We have \(\mathrm{U}^{\dagger,(\varepsilon)}\in\mathrm{M}_{\infty}(\mathcal{O}\langle w /p\rangle)\). More precisely,_
1. _the row of_ \(\mathrm{U}^{\dagger,(\varepsilon)}\) _indexed by_ \(\mathbf{e}\) _belongs to_ \(p^{\frac{1}{2}\deg\mathbf{e}}\mathcal{O}\langle w/p\rangle\)_, and_
2. _for each_ \(k\in\mathbb{Z}\)_, the row of_ \(\mathrm{U}^{\dagger,(\varepsilon)}|_{w=w_{k}}\) _indexed by_ \(\mathbf{e}\) _belongs to_ \(p^{\deg\mathbf{e}}\mathcal{O}\)_._
Proof.: For a monomial \(h=z^{n}\) and \(\bigl{(}\begin{smallmatrix}p\alpha&\beta\\ p\gamma&\delta\end{smallmatrix}\bigr{)}\in\bigl{(}\begin{smallmatrix}p\mathbb{Z}_{p}&\mathbb{Z}_{p}\\ p\mathbb{Z}_{p}&\mathbb{Z}_{p}^{\times}\end{smallmatrix}\bigr{)}\) with determinant \(pd\) for \(d\in\mathbb{Z}_{p}^{\times}\), the action (2.4.4) is given by
\[h\big{|}_{\bigl{(}\begin{smallmatrix}p\alpha&\beta\\ p\gamma&\delta\end{smallmatrix}\bigr{)}}(z) = \varepsilon(\bar{d}/\bar{\delta},\bar{\delta})\cdot(1+w)^{\log \bigl{(}(p\gamma z+\delta)/\omega(\bar{\delta})\bigr{)}/p}\cdot h\Bigl{(} \frac{p\alpha z+\beta}{p\gamma z+\delta}\Bigr{)}\] \[= \varepsilon(\bar{d}/\bar{\delta},\bar{\delta})\cdot\sum_{n\geq 0 }w^{n}\binom{\log\bigl{(}(p\gamma z+\delta)/\omega(\bar{\delta})\bigr{)}/p}{n} \cdot h\Bigl{(}\frac{p\alpha z+\beta}{p\gamma z+\delta}\Bigr{)}.\]
Note that \(\frac{w^{n}}{n!}=(\frac{w}{p})^{n}\cdot\frac{p^{n/2}}{n!}\cdot p^{n/2}\). So it is not difficult to see that the above expression belongs to \(\mathcal{O}\langle w/p\rangle\langle p^{1/2}z\rangle\). Part (1) of the proposition follows.
When \(w=w_{k}\), we can rewrite the above equality as
\[h\big{|}_{\bigl{(}\begin{smallmatrix}p\alpha&\beta\\ p\gamma&\delta\end{smallmatrix}\bigr{)}}(z)=\varepsilon(\bar{d}/\bar{\delta},\bar{\delta})\Bigl{(}\frac{p\gamma z+\delta}{\omega(\bar{\delta})}\Bigr{)}^{k-2}\cdot h\Bigl{(}\frac{p\alpha z+\beta}{p\gamma z+\delta}\Bigr{)}\in\mathcal{O}[\![pz]\!].\]
From this, we see that the row of \(\mathrm{U}^{\dagger,(\varepsilon)}|_{w=w_{k}}\) indexed by \(\mathbf{e}\) belongs to \(p^{\deg\mathbf{e}}\mathcal{O}\).
### \(p\)-stabilization process
Recall from Proposition 2.11(2) the natural Atkin-Lehner involution
\[\mathrm{AL}_{(k,\tilde{\varepsilon}_{1})}:\mathrm{S}_{k}^{\mathrm{Iw}}(\tilde{ \varepsilon}_{1})\longrightarrow\mathrm{S}_{k}^{\mathrm{Iw}}(\tilde{ \varepsilon}_{1}).\]
We define the following four maps
\[\iota_{1},\,\iota_{2}:\ \mathrm{S}_{k}^{\mathrm{ur}}(\varepsilon_{1})=\mathrm{Hom}_{\mathcal{O}[\![\mathrm{K}_{p}]\!]}\left(\widetilde{\mathrm{H}},\,\mathcal{O}[z]^{\leq k-2}\otimes\tilde{\varepsilon}_{1}\right)\ \longrightarrow\ \mathrm{S}_{k}^{\mathrm{Iw}}(\tilde{\varepsilon}_{1})=\mathrm{Hom}_{\mathcal{O}[\![\mathrm{Iw}_{p}]\!]}\left(\widetilde{\mathrm{H}},\,\mathcal{O}[z]^{\leq k-2}\otimes\tilde{\varepsilon}_{1}\right)\]
and \(\mathrm{proj}_{1},\,\mathrm{proj}_{2}:\ \mathrm{S}_{k}^{\mathrm{Iw}}(\tilde{\varepsilon}_{1})\longrightarrow\mathrm{S}_{k}^{\mathrm{ur}}(\varepsilon_{1})\), given by, for \(\psi\in\mathrm{S}_{k}^{\mathrm{ur}}(\varepsilon_{1})\), \(\varphi\in\mathrm{S}_{k}^{\mathrm{Iw}}(\tilde{\varepsilon}_{1})\), and \(x\in\widetilde{\mathrm{H}}\),
\[\iota_{1}(\psi)=\psi,\qquad\iota_{2}(\psi)(x)=\psi\Big{(}x\big{(}\begin{smallmatrix}0&1\\ p&0\end{smallmatrix}\big{)}^{-1}\Big{)}\Big{|}_{\big{(}\begin{smallmatrix}0&1\\ p&0\end{smallmatrix}\big{)}},\qquad\mathrm{proj}_{1}(\varphi)(x)=\sum_{j\in\{0,\ldots,p-1,\star\}}\varphi(xu_{j})\big{|}_{u_{j}^{-1}}.\]
Here \(u_{j}=\left(\begin{smallmatrix}1&0\\ j&1\end{smallmatrix}\right)\) for \(j=0,\dots,p-1\) and \(u_{\star}=\left(\begin{smallmatrix}0&1\\ 1&0\end{smallmatrix}\right)\) form a set of coset representatives of \(\mathrm{Iw}_{p}\backslash\mathrm{K}_{p}\). (In fact, the definitions of \(\mathrm{proj}_{1}\) and \(\mathrm{proj}_{2}\) do not depend on this choice of coset representatives.)
**Remark 3.4**.: As we will not need it, we leave it as an interesting exercise for the reader to check that for \(\psi\in\mathrm{S}_{k}^{\mathrm{ur}}(\varepsilon_{1})\) and the \(T_{p}\)-operator defined in (2.4.7), we have
\[U_{p}(\iota_{1}(\psi))=p\cdot\iota_{2}(\psi)\quad\text{and}\quad U_{p}(\iota_ {2}(\psi))=\iota_{2}(T_{p}(\psi))-p^{k-2}\iota_{1}(\psi).\]
It then follows that the \(U_{p}\)-action on the span of \(\iota_{2}(\psi)\) and \(\iota_{1}(\psi)\) is given by the matrix
\[\begin{pmatrix}T_{p}&p\\ -p^{k-2}&0\end{pmatrix}.\]
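In particular, the characteristic polynomial of this \(2\times 2\) matrix is
\[X^{2}-T_{p}X+p^{k-1},\]
the familiar quadratic governing the two \(p\)-stabilizations of a classical eigenform of level prime to \(p\).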
The following is a key (although simple) feature of \(p\)-stabilization.
**Proposition 3.5**.: _We have the following equality_
\[U_{p}(\varphi)=\iota_{2}(\mathrm{proj}_{1}(\varphi))-\mathrm{AL}_{(k,\tilde{ \varepsilon}_{1})}(\varphi),\quad\text{for all }\varphi\in\mathrm{S}_{k}^{ \mathrm{Iw}}(\tilde{\varepsilon}_{1}). \tag{3.5.1}\]
Proof.: For \(\varphi\in\mathrm{S}_{k}^{\mathrm{Iw}}\) and \(x\in\widetilde{\mathrm{H}}\), we have
\[\iota_{2}(\mathrm{proj}_{1}(\varphi))(x)-\mathrm{AL}_{(k)}( \varphi)(x)=\sum_{j=0,\dots,p-1,\star}\varphi\Big{(}x\big{(}\begin{smallmatrix} p^{-1}&0\\ 0&1\end{smallmatrix}\big{)}u_{j}\Big{)}\Big{|}_{u_{j}^{-1}\big{(}\begin{smallmatrix} p&0\\ 0&1\end{smallmatrix}\big{)}}-\varphi\Big{(}x\big{(}\begin{smallmatrix}0&p^{-1}\\ 1&0\end{smallmatrix}\big{)}\Big{)}\Big{|}_{\big{(}\begin{smallmatrix}0&1\\ p&0\end{smallmatrix}\big{)}}\] \[=\sum_{j=0}^{p-1}\varphi\Big{(}x\big{(}\begin{smallmatrix}p^{-1}& 0\\ 0&1\end{smallmatrix}\big{)}\big{(}\begin{smallmatrix}1&0\\ j&1\end{smallmatrix}\big{)}\Big{)}\Big{|}_{\big{(}\begin{smallmatrix}1&0\\ j&1\end{smallmatrix}\big{)}^{-1}\big{(}\begin{smallmatrix}p&0\\ 0&1\end{smallmatrix}\big{)}}=\sum_{j=0}^{p-1}\varphi\Big{(}x\big{(}\begin{smallmatrix }p^{-1}&0\\ j&1\end{smallmatrix}\big{)}\Big{)}\Big{|}_{\big{(}\begin{smallmatrix}p^{-1}&0\\ j&1\end{smallmatrix}\big{)}^{-1}}=U_{p}(\varphi)(x).\]
Here in the first equality, when we unwind the definition of \(\iota_{2}\), we use the matrix \(\left(\begin{smallmatrix}p&0\\ 0&1\end{smallmatrix}\right)\) as opposed to \(\left(\begin{smallmatrix}0&1\\ p&0\end{smallmatrix}\right)\) (using the \(\mathrm{GL}_{2}(\mathbb{Z}_{p})\)-equivariance). The second equality comes from canceling the last term in the first row with the term \(j=\star\) in the sum.
**Proposition 3.6**.: _For \(k\equiv k_{\varepsilon}\ \mathrm{mod}\ (p-1)\), consider the power basis \(\mathbf{B}_{k}^{(\varepsilon)}=\{\mathbf{e}_{1}^{(\varepsilon)},\mathbf{e}_{ 2}^{(\varepsilon)},\dots,\mathbf{e}_{d_{k}^{\mathrm{Iw}}(\tilde{\varepsilon}_{1 })}^{(\varepsilon)}\}\) of \(\mathrm{S}_{k}^{\mathrm{Iw}}(\tilde{\varepsilon}_{1})\) from (2.10.2), ordered with increasing degrees. Let \(\mathrm{U}_{k}^{\mathrm{Iw},(\varepsilon)}\) be the matrix of the \(U_{p}\)-operator on \(\mathrm{S}_{k}^{\mathrm{Iw}}(\tilde{\varepsilon}_{1})\) under \(\mathbf{B}_{k}^{(\varepsilon)}\)._
1. _The matrix_ \(\mathrm{L}_{k}^{(\varepsilon),\mathrm{cl}}\) _for the_ \(\mathrm{AL}_{(k,\tilde{\varepsilon}_{1})}\)_-action with respect to the basis_ \(\mathbf{B}_{k}^{(\varepsilon)}\) _is the anti-diagonal matrix with entries_ \[p^{\deg\mathbf{e}_{1}^{(\varepsilon)}},p^{\deg\mathbf{e}_{2}^{(\varepsilon)}},\dots,p^{\deg\mathbf{e}_{d_{k}^{\mathrm{Iw}}(\varepsilon_{1})}^{(\varepsilon)}}\] _from upper right to lower left. (The superscript_ \(\mathrm{cl}\) _indicates that the matrix is for classical forms as opposed to overconvergent ones.)_
2. _The matrix_ \(\mathrm{U}_{k}^{\mathrm{Iw},(\varepsilon)}\) _is the sum of_ * _the antidiagonal matrix_ \(-\mathrm{L}_{k}^{(\varepsilon),\mathrm{cl}}\) _above, and_ * _a_ \(d_{k}^{\mathrm{Iw}}(\tilde{\varepsilon}_{1})\times d_{k}^{\mathrm{Iw}}(\tilde{ \varepsilon}_{1})\)_-matrix with rank_ \(\leq d_{k}^{\mathrm{ur}}(\varepsilon_{1})\)_._
Proof.: (1) is just a special case of Proposition 2.11 (2), when \(\psi=\tilde{\varepsilon}_{1}\). (2) follows from (1) and the equality (3.5.1), because \(\varphi\mapsto\iota_{2}(\mathrm{proj}_{1}(\varphi))\) has rank at most \(d_{k}^{\mathrm{ur}}\) as it factors through the smaller space \(\mathrm{S}_{k}^{\mathrm{ur}}\) of rank \(d_{k}^{\mathrm{ur}}\).
**Notation 3.7**.: Here and later, we shall frequently refer to the _corank_ of an \(n\times n\)-matrix \(B\); it is \(n\) minus the rank of \(B\).
**Corollary 3.8**.: _The multiplicities of \(\pm p^{(k-2)/2}\) as eigenvalues of the \(U_{p}\)-action on \(\mathrm{S}_{k}^{\mathrm{Iw}}(\tilde{\varepsilon}_{1})\) are at least \(\frac{1}{2}d_{k}^{\mathrm{new}}(\varepsilon_{1})\) each._
Proof.: By Proposition 3.6 (1), the matrix \(\mathrm{L}_{k}^{\mathrm{cl}}\) for the Atkin-Lehner operator is semisimple and has eigenvalues \(\pm p^{(k-2)/2}\) each with multiplicity \(\frac{1}{2}d_{k}^{\mathrm{Iw}}\); so \(\mathrm{L}_{k}^{\mathrm{cl}}\pm p^{(k-2)/2}I\) has rank exactly \(\frac{1}{2}d_{k}^{\mathrm{Iw}}\), where \(I\) is the \(d_{k}^{\mathrm{Iw}}\times d_{k}^{\mathrm{Iw}}\)-identity matrix. By Proposition 3.6 (2), \(\mathrm{U}_{k}^{\mathrm{Iw}}\pm p^{(k-2)/2}I\) has corank at least \(\frac{1}{2}d_{k}^{\mathrm{Iw}}-d_{k}^{\mathrm{ur}}=\frac{1}{2}d_{k}^{\mathrm{ new}}\). The corollary follows.
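For instance, in Example 2.13 (with \(p=7\), \(a=2\), \(\varepsilon=1\times\omega^{2}\)) and \(k=10\), we have \(\tfrac{1}{2}d_{10}^{\mathrm{new}}(\varepsilon_{1})=1\), so the corollary guarantees that each of
\[\pm p^{(k-2)/2}=\pm 7^{4}\]
occurs at least once among the four \(U_{p}\)-eigenvalues on \(\mathrm{S}_{10}^{\mathrm{Iw}}(\tilde{\varepsilon}_{1})\).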
**Remark 3.9**.: It will follow from our local ghost conjecture Theorem 2.7 together with Proposition 2.11(4) that the multiplicities of the eigenvalues \(\pm p^{(k-2)/2}\) are exactly \(\frac{1}{2}d_{k}^{\mathrm{new}}(\varepsilon_{1})\).
The following statement gives a philosophical explanation of the palindromic pattern of (2.5.2) in Definition 2.5 of ghost series.
**Corollary 3.10** (Weak corank theorem).: _If we write \(\mathrm{U}^{\dagger,(\varepsilon)}(\underline{n})\in\mathrm{M}_{n}(\mathcal{O }\langle w/p\rangle)\) for the upper left \(n\times n\)-submatrix of \(\mathrm{U}^{\dagger,(\varepsilon)}\), then \(\det(\mathrm{U}^{\dagger,(\varepsilon)}(\underline{n}))\in\mathcal{O}\langle w /p\rangle\) is divisible by \(p^{-\deg g_{n}^{(\varepsilon)}}g_{n}^{(\varepsilon)}(w)\) (inside \(\mathcal{O}\langle w/p\rangle\))._
Proof.: We need to show that, for each \(k\equiv k_{\varepsilon}\bmod(p-1)\) such that \(m_{n}(k)>0\), \(\det(\mathrm{U}^{\dagger}(\underline{n}))\) is divisible by \(\left(w/p-w_{k}/p\right)^{m_{n}(k)}\). (Note here the coefficients belong to \(\mathcal{O}\langle w/p\rangle\); so we need to divide each ghost factor by \(p\).) For this, it is enough to show that the evaluation of \(\mathrm{U}^{\dagger}(\underline{n})\) at \(w=w_{k}\), i.e. the matrix \(\mathrm{U}_{k}^{\dagger}(\underline{n})\), has corank \(\geq m_{n}(k)\).
Indeed, let \(\mathrm{L}_{k}^{\mathrm{cl}}(\underline{n})\) denote the upper left \(n\times n\)-submatrix of \(\mathrm{L}_{k}^{\mathrm{cl}}\); then by Proposition 3.6 (1)(2),
\[\mathrm{rank}(\mathrm{U}_{k}^{\dagger}(\underline{n}))\leq d_{k}^{\mathrm{ur}}+ \mathrm{rank}\,\mathrm{L}_{k}^{\mathrm{cl}}(\underline{n})=\begin{cases}d_{k}^ {\mathrm{ur}}&\text{ if }n\leq\frac{1}{2}d_{k}^{\mathrm{Iw}}\\ d_{k}^{\mathrm{ur}}+2(n-\frac{1}{2}d_{k}^{\mathrm{Iw}})&\text{ if }n\geq\frac{1}{2}d_{k}^{ \mathrm{Iw}}.\end{cases}\]
So the corank of \(\mathrm{U}_{k}^{\dagger}(\underline{n})\) is at least \(n-d_{k}^{\mathrm{ur}}\) if \(n\leq\frac{1}{2}d_{k}^{\mathrm{Iw}}\), and at least \(d_{k}^{\mathrm{Iw}}-d_{k}^{\mathrm{ur}}-n\) if \(n\geq\frac{1}{2}d_{k}^{\mathrm{Iw}}\); in other words, corank \(\mathrm{U}_{k}^{\dagger}(\underline{n})\geq m_{n}(k)\). The corollary is proved.
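Concretely, in Example 2.13 (with \(p=7\), \(a=2\), \(\varepsilon=1\times\omega^{2}\)) the corollary asserts, for \(n=3\), that
\[\det\big{(}\mathrm{U}^{\dagger,(\varepsilon)}(\underline{3})\big{)}\ \text{is divisible by}\ \Big{(}\tfrac{w-w_{16}}{p}\Big{)}^{2}\Big{(}\tfrac{w-w_{22}}{p}\Big{)}^{2}\Big{(}\tfrac{w-w_{28}}{p}\Big{)}\Big{(}\tfrac{w-w_{34}}{p}\Big{)}\Big{(}\tfrac{w-w_{40}}{p}\Big{)}\Big{(}\tfrac{w-w_{46}}{p}\Big{)}\]
in \(\mathcal{O}\langle w/p\rangle\), since \(g_{3}^{(\varepsilon)}(w)=(w-w_{16})^{2}(w-w_{22})^{2}(w-w_{28})(w-w_{34})(w-w_{40})(w-w_{46})\) has degree \(8\).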
**Remark 3.11**.: This corollary seems to give some theoretical support for the definition of the ghost series, and it already gives us confidence towards proving the local ghost conjecture (Theorem 2.7). In reality, we still need to combine more sophisticated estimates on the \(p\)-adic valuations with the corank argument in the corollary above.
**Remark 3.12**.: With some effort, using the representation theory of \(\mathbb{F}[\mathrm{GL}_{2}(\mathbb{F}_{p})]\) and considering the standard Hodge polygon for the power basis, one may show that there exists an \(\mathcal{O}\)-basis \(\mathbf{v}_{1},\ldots,\mathbf{v}_{d_{k}^{\mathrm{ur}}}\) of \(\mathrm{S}_{k}^{\mathrm{ur}}(\varepsilon_{1})\) such that the following list
\[p^{-\deg\mathbf{e}_{1}}\iota_{2}(\mathbf{v}_{1}),\,\ldots,\,p^{-\deg\mathbf{e}_{d_{k}^{\mathrm{ur}}}}\iota_{2}(\mathbf{v}_{d_{k}^{\mathrm{ur}}}),\,\mathbf{e}_{d_{k}^{\mathrm{ur}}+1},\,\ldots,\,\mathbf{e}_{d_{k}^{\mathrm{Iw}}-d_{k}^{\mathrm{ur}}},\,\iota_{1}(\mathbf{v}_{d_{k}^{\mathrm{ur}}}),\,\ldots,\,\iota_{1}(\mathbf{v}_{1})\]
gives an \(\mathcal{O}\)-basis of \(\operatorname{S}_{k}^{\operatorname{Iw}}(\tilde{\varepsilon}_{1})\), and the matrix of the \(U_{p}\)-action with respect to this basis has a block shape adapted to this partition of the basis.
2. _The basis_ \(\{\mathbf{m}_{n}(z);\ n\in\mathbb{Z}_{\geq 0}\}\) _is an orthonormal basis of_ \(\mathcal{C}^{0}\big{(}\mathbb{Z}_{p};\mathbb{Z}_{p}[\![w]\!]^{(\varepsilon)}\big{)}\)_._
3. _If_ \(P=(P_{m,n})_{m,n\geq 0}\) _denotes the matrix of the action of_ \(\big{(}\begin{smallmatrix}\alpha&\beta\\ \gamma&\delta\end{smallmatrix}\big{)}\in\mathbf{M}_{1}\) _with respect to the modified Mahler basis, then_ (3.14.1) \[P_{m,n}\in\begin{cases}p^{\max\{m-n,\,0\}}\mathcal{O}\langle w/p\rangle&\text{ if }\big{(}\begin{smallmatrix}\alpha&\beta\\ \gamma&\delta\end{smallmatrix}\big{)}\in\mathbf{M}_{1}\\ p^{m-\lfloor n/p\rfloor}\mathcal{O}\langle w/p\rangle&\text{ if }\big{(}\begin{smallmatrix} \alpha&\beta\\ \gamma&\delta\end{smallmatrix}\big{)}\in\big{(}\begin{smallmatrix}p\mathbb{Z}_{p }&\mathbb{Z}_{p}\\ p\mathbb{Z}_{p}&\mathbb{Z}_{p}^{\times}\end{smallmatrix}\big{)}^{\det\neq 0}\;.\end{cases}\]
Proof.: (1) We need to check that the degree of each term in each \(f_{i}(z)\) is congruent to \(1\) modulo \(p-1\). This is true for \(f_{1}(z)\), and inductively, we may write \(f_{i}(z)=zh_{i}(z^{p-1})\) and see that \(f_{i+1}(z)=\frac{1}{p}\big{(}z^{p}h_{i}(z^{p-1})^{p}-zh_{i}(z^{p-1})\big{)}=\frac{1}{p}z\big{(}z^{p-1}h_{i}(z^{p-1})^{p}-h_{i}(z^{p-1})\big{)}\), every term of which again has degree congruent to \(1\) modulo \(p-1\).
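To illustrate (1), assume the normalization \(f_{1}(z)=\frac{z^{p}-z}{p}\) as in (3.13.2) and take \(p=3\) and \(n=5=2+1\cdot 3\); then
\[\mathbf{m}_{5}(z)=z^{2}f_{1}(z)=\frac{z^{5}-z^{3}}{3},\]
which has degree \(5\), and its two terms have degrees \(5\) and \(3\), both congruent to \(5\) modulo \(p-1=2\).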
(2) Let \(B=(B_{m,n})_{m,n\geq 0}\) denote the change of basis matrix from the usual Mahler basis \(\big{\{}\binom{z}{n};\ n\in\mathbb{Z}_{\geq 0}\big{\}}\) to the modified Mahler basis \(\{\mathbf{m}_{n}(z);\ n\in\mathbb{Z}_{\geq 0}\}\) so that
\[\mathbf{m}_{n}(z)=\sum_{m=0}^{\infty}B_{m,n}\binom{z}{m}.\]
Since the degree of \(\mathbf{m}_{n}(z)\) is \(n\), \(B_{m,n}=0\) if \(m>n\). By comparing the coefficients of \(z^{n}\) using (3.13.3), we see that \(B_{n,n}\in\mathcal{O}^{\times}\). Moreover, when \(z\in\mathbb{Z}_{p}\), each \(f_{i}(z)\) takes values in \(\mathbb{Z}_{p}\), and thus \(z^{n_{0}}f_{1}(z)^{n_{1}}f_{2}(z)^{n_{2}}\cdots\) takes values in \(\mathbb{Z}_{p}\); so it is an integral linear combination of \(1,z,\binom{z}{2},\dots,\binom{z}{n}\), that is, \(B_{m,n}\in\mathbb{Z}_{p}\) for \(m\leq n\). Therefore, the infinite matrix \(B\) is an invertible upper triangular matrix in \(\mathrm{M}_{\infty}(\mathcal{O})\). Part (2) follows.
For (3), let \(P^{\prime}=(P^{\prime}_{m,n})_{m,n\geq 0}\) denote the matrix of the action of \(\big{(}\begin{smallmatrix}\alpha&\beta\\ \gamma&\delta\end{smallmatrix}\big{)}\) on \(\mathcal{C}^{0}(\mathbb{Z}_{p};\mathcal{O}[\![w]\!]^{(\varepsilon)})\) with respect to the Mahler basis \(1,z,\dots,\binom{z}{n},\dots\). Then [14, Proposition 3.14 (1)] implies that
(a) when \(\big{(}\begin{smallmatrix}\alpha&\beta\\ \gamma&\delta\end{smallmatrix}\big{)}\in\mathbf{M}_{1}\), \(P^{\prime}_{m,n}\in(p,w)^{\max\{m-n,0\}}\mathcal{O}[\![w]\!]\subseteq p^{\max \{m-n,0\}}\mathcal{O}\langle w/p\rangle\), and
(b) when \(\big{(}\begin{smallmatrix}\alpha&\beta\\ \gamma&\delta\end{smallmatrix}\big{)}\in\binom{p\mathbb{Z}_{p}\ \mathbb{Z}_{p}\ \mathbb{Z}_{p}\ \mathbb{Z}_{p}^{\times} }{\det\neq 0}\), \(P^{\prime}_{m,n}\in(p,w)^{\max\{0,m-\lfloor n/p\rfloor\}}\mathcal{O}[\![w]\!] \subseteq p^{\max\{m-\lfloor n/p\rfloor,0\}}\mathcal{O}\langle w/p\rangle\).
Changing basis, we have \(P=B^{-1}P^{\prime}B\). Yet \(B\in\mathrm{M}_{\infty}(\mathcal{O})\) is upper triangular with \(p\)-adic units on the diagonal; the same holds true for \(B^{-1}\). From this, we deduce that \(P\) satisfies the same bound (3.14.1).
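For example, still assuming \(f_{1}(z)=\frac{z^{p}-z}{p}\), take \(p=3\) and \(n=3\), so that \(\mathbf{m}_{3}(z)=f_{1}(z)\); then
\[\mathbf{m}_{3}(z)=\frac{z^{3}-z}{3}=2\binom{z}{2}+2\binom{z}{3},\]
so the third column of \(B\) has entries \(B_{2,3}=B_{3,3}=2\in\mathbb{Z}_{3}\), with the diagonal entry \(B_{3,3}=2\in\mathbb{Z}_{3}^{\times}\), as asserted in part (2).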
**Notation 3.15**.: By Lemma 3.14 (1), each \(\mathbf{m}_{n}(z)\) is an eigenvector for the \(\bar{\mathrm{T}}\)-action. So we may "distribute the modified Mahler basis over the weight disks" as follows.
For the fixed relevant character \(\varepsilon=\omega^{-s_{\varepsilon}}\times\omega^{a+s_{\varepsilon}}\) (and possibly suppressing \(\varepsilon\) from the notation occasionally), recall the power basis \(\mathbf{e}_{1}^{(\varepsilon)},\mathbf{e}_{2}^{(\varepsilon)},\dots\) of \(\mathrm{S}^{\dagger,(\varepsilon)}\) defined in SS 2.10. For each \(\mathbf{e}_{n}^{(\varepsilon)}=e_{i}^{*}z^{\deg\mathbf{e}_{n}^{(\varepsilon)}}\) with \(i=1,2\), we define the _associated modified Mahler basis_
\[\mathbf{f}_{n}=\mathbf{f}_{n}^{(\varepsilon)}:=e_{i}^{*}\cdot\mathbf{m}_{\deg \mathbf{e}_{n}^{(\varepsilon)}}(z);\]
then Lemma 3.14 (1) above implies that \(\mathbf{f}_{n}^{(\varepsilon)}\) is a \(\mathbb{Q}_{p}\)-linear combination of \(\mathbf{e}_{1}^{(\varepsilon)},\dots,\mathbf{e}_{n}^{(\varepsilon)}\), and \(\deg\mathbf{f}_{n}^{(\varepsilon)}=\deg\mathbf{e}_{n}^{(\varepsilon)}\). Let \(\mathbf{C}=\mathbf{C}^{(\varepsilon)}\) denote the collection of \(\mathbf{f}_{n}^{(\varepsilon)}\) for all \(n\in\mathbb{Z}_{\geq 0}\); it is the _modified Mahler basis_ of \(\mathrm{S}_{p\text{-adic}}^{(\varepsilon)}\) introduced in SS 2.4(2).
For the rest of this section, we aim to "translate" the halo bound for the \(U_{p}\)-action on \(\mathrm{S}_{p\text{-adic}}^{(\varepsilon)}\) with respect to \(\mathbf{C}^{(\varepsilon)}\) to a bound on the \(U_{p}\)-action with respect to \(\mathbf{B}^{(\varepsilon)}\). (This seems to be stronger than the naive Hodge bound on the power basis.)
We write \(Y=(Y_{m,n})_{m,n\geq 0}\) and \(\mathrm{Y}^{(\varepsilon)}=(\mathrm{Y}_{\mathbf{e}_{m}^{(\varepsilon)},\mathbf{f}_{n}^{(\varepsilon)}})_{m,n\geq 0}\in\mathrm{M}_{\infty}(\mathbb{Q}_{p})\) for the change of basis matrices between the modified Mahler basis (3.13.2) and the normalized power basis, that is,
\[\mathbf{m}_{n}(z)=\sum_{m\geq 0}Y_{m,n}z^{m},\quad\text{and}\quad\mathrm{Y}_{ \mathbf{e}_{m}^{(\varepsilon)},\mathbf{f}_{n}^{(\varepsilon)}}=Y_{\deg\mathbf{ e}_{m},\deg\mathbf{f}_{n}}. \tag{3.15.1}\]
The following estimate on \(Y_{m,n}\) is important.
**Lemma 3.16**.: _The matrix \(Y\) is an upper triangular matrix in \(\mathrm{M}_{\infty}(\mathbb{Q}_{p})\), with diagonal entries \(Y_{n,n}\in(n!)^{-1}\mathbb{Z}_{p}^{\times}\). Moreover, \(Y_{m,n}=0\) unless \(n-m\) is divisible by \(p-1\)._
_Write the inverse of \(Y\) as \(((Y^{-1})_{m,n})_{m,n\geq 0}\). Then we have an estimate (when \(n\geq m\)):_
\[v_{p}(Y_{m,n})\geq-v_{p}(m!)+\Big{\lfloor}\frac{m}{p}\Big{\rfloor}- \Big{\lfloor}\frac{n}{p}\Big{\rfloor}-\Big{\lfloor}\frac{n-m}{p^{2}-p}\Big{\rfloor}, \tag{3.16.2}\] \[v_{p}((Y^{-1})_{m,n})\geq v_{p}(n!)+\Big{\lfloor}\frac{m}{p} \Big{\rfloor}-\Big{\lfloor}\frac{n}{p}\Big{\rfloor}-\Big{\lfloor}\frac{n-m}{ p^{2}-p}\Big{\rfloor}. \tag{3.16.1}\]
Proof.: It is clear that \(Y\) is upper triangular, and the vanishing of \(Y_{m,n}\) when \(p-1\) does not divide \(n-m\) is also obvious from the definition of modified Mahler basis. The statement \(Y_{n,n}\in(n!)^{-1}\mathbb{Z}_{p}^{\times}\) already follows from (3.15.1).
Let \(D\) (resp. \(E\)) denote the diagonal matrix whose \(n\)th diagonal entry is equal to \(p^{\lfloor n/p\rfloor}/n!\) (resp. \(p^{\lfloor n/p\rfloor}\)), and set \(Y^{\prime}=D^{-1}YE\). It suffices to prove that
\[v_{p}(Y^{\prime}_{m,n})\geq-\Big{\lfloor}\frac{n-m}{p^{2}-p}\Big{\rfloor}\quad \text{and}\quad v_{p}((Y^{\prime-1})_{m,n})\geq-\Big{\lfloor}\frac{n-m}{p^{2} -p}\Big{\rfloor} \tag{3.16.3}\]
In fact, the second inequality follows from the first (3.16.3). Indeed, \(Y^{\prime}_{n,n}\in\mathbb{Z}_{p}^{\times}\) together with the condition \(v_{p}(Y^{\prime}_{m,n})\geq-\Big{\lfloor}\frac{n-m}{p^{2}-p}\Big{\rfloor}\geq- \frac{n-m}{p^{2}-p}\) implies that \(v_{p}((Y^{\prime-1})_{m,n})\geq-\frac{n-m}{p^{2}-p}\). Since \(Y^{\prime-1}\in\mathrm{M}_{\infty}(\mathbb{Q}_{p})\), we are forced to have that \(v_{p}((Y^{\prime-1})_{m,n})\geq-\Big{\lfloor}\frac{n-m}{p^{2}-p}\Big{\rfloor}\).
It remains to prove the first estimate (3.16.3) on \(v_{p}(Y^{\prime}_{m,n})\). Rewrite (3.15.1) as (for \(n=n_{0}+pn_{1}+p^{2}n_{2}+\cdots\))
\[p^{\lfloor n/p\rfloor}z^{n_{0}}f_{1}(z)^{n_{1}}f_{2}(z)^{n_{2}}\cdots=\sum_{m= 0}^{n}\frac{p^{\lfloor m/p\rfloor}}{m!}Y^{\prime}_{m,n}z^{m}=:\sum_{m=0}^{n}Y^ {\prime\prime}_{m,n}z^{m}.\]
We then need to show that
\[v_{p}(Y^{\prime\prime}_{m,n})\geq-\Big{\lfloor}\frac{n-m}{p^{2}-p}\Big{\rfloor} -v_{p}\Big{(}\Big{\lfloor}\frac{m}{p}\Big{\rfloor}!\Big{)}. \tag{3.16.4}\]
Note that the function on the right hand side of (3.16.4) is sub-additive in both \(n-m\) and \(m\). So it is enough to prove the inequality (3.16.4) for \(n=p^{i}\). One immediately checks the case of \(i=0\) and \(1\). In general, \(Y^{\prime\prime}_{m,n}\) is the \(z^{m}\)-coefficient of \(p^{p^{i-1}}f_{i}(z)\). We prove this by induction on \(i\), assuming (3.16.4) is already proved for \(n=p^{i}\) (\(i\geq 1\)). Then for \(n=p^{i+1}\), we rewrite
\[p^{p^{i}}f_{i+1}(z)=\frac{1}{p}\big{(}p^{p^{i-1}}f_{i}(z)\big{)}^{p}+p^{p^{i-1}( p-1)-1}\cdot\big{(}p^{p^{i-1}}f_{i}(z)\big{)}.\]
The estimate (3.16.4) for the second factor above is clear (as \(p^{p^{i-1}(p-1)-1}\) has a huge \(p\)-adic valuation). For the first term, we consider the polynomial expression of \(p^{p^{i-1}}f_{i}(z)=\sum a_{m}z^{m}\) and the binomial expansion for the \(p\)th power. For the terms _not_ of the form \(a_{m}^{p}z^{pm}\), the
binomial coefficient is divisible by \(p\) and hence cancels the denominator \(p\). The statement (3.16.4) follows from its convexity in \(n-m\) and \(m\) and the inductive hypothesis on \(p^{p^{i-1}}f_{i}(z)\). For the term in the \(p\)th power of the form \(a_{m}^{p}z^{pm}\), the inductive hypothesis says that
\[v_{p}(a_{m})\geq-\Big{\lfloor}\frac{p^{i}-m}{p^{2}-p}\Big{\rfloor}-v_{p}\Big{(} \Big{\lfloor}\frac{m}{p}\Big{\rfloor}!\Big{)}.\]
From this, we claim that
\[v_{p}\Big{(}\frac{a_{m}^{p}}{p}\Big{)}=pv_{p}(a_{m})-1\geq-\Big{\lfloor}\frac {p^{i+1}-pm}{p^{2}-p}\Big{\rfloor}-v_{p}\big{(}m!\big{)}. \tag{3.16.5}\]
Indeed, this follows from the inequality \(v_{p}(m!)\geq pv_{p}\big{(}\lfloor m/p\rfloor!\big{)}+1\) if \(m\geq p\), and when \(m\in\{1,\dots,p-1\}\), the factorial part has no contribution, and we compute explicitly
\[\Big{\lfloor}\frac{p^{i}-m}{p^{2}-p}\Big{\rfloor}=p^{i-2}+p^{i-3}+\dots+1;\]
\[\Big{\lfloor}\frac{p^{i+1}-pm}{p^{2}-p}\Big{\rfloor}=\begin{cases}p^{i-1}+p^ {i-2}+\dots+1&\text{ if }m=0,1\\ p^{i-1}+p^{i-2}+\dots+p&\text{ if }m\in\{2,\dots,p-1\}.\end{cases}\]
From this, we deduce (3.16.5) when \(m\leq p-1\).
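As a quick sanity check of (3.16.1) (again assuming \(f_{1}(z)=\frac{z^{p}-z}{p}\)), take \(p=3\) and \(n=3\), so that \(\mathbf{m}_{3}(z)=\frac{z^{3}-z}{3}\) and hence \(Y_{1,3}=-\tfrac{1}{3}\), \(Y_{2,3}=0\), \(Y_{3,3}=\tfrac{1}{3}\). For \((m,n)=(1,3)\), the right hand side of (3.16.1) is
\[-v_{3}(1!)+\Big{\lfloor}\frac{1}{3}\Big{\rfloor}-\Big{\lfloor}\frac{3}{3}\Big{\rfloor}-\Big{\lfloor}\frac{2}{6}\Big{\rfloor}=-1,\]
matching \(v_{3}(Y_{1,3})=-1\); for \((m,n)=(3,3)\) it is \(-v_{3}(3!)+1-1-0=-1\), matching \(v_{3}(Y_{3,3})=-1\). The vanishing of \(Y_{2,3}\) reflects that \(p-1\) does not divide \(3-2\).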
**Notation 3.17**.: We have the following list of matrices of \(U_{p}\) with respect to the given bases:
* \(\mathrm{U}^{\dagger}=\mathrm{U}^{\dagger,(\varepsilon)}=\big{(}\mathrm{U}^{ \dagger,(\varepsilon)}_{\mathbf{c}_{m},\mathbf{e}_{n}}\big{)}_{m,n\geq 0}\) for \(U_{p}:\big{(}\mathrm{S}^{\dagger,(\varepsilon)},\mathbf{B}^{(\varepsilon)} \big{)}\longrightarrow\big{(}\mathrm{S}^{\dagger,(\varepsilon)},\mathbf{B}^{( \varepsilon)}\big{)}\);
* \(\mathrm{U}_{\mathbf{C}}=\mathrm{U}^{(\varepsilon)}_{\mathbf{C}}=\big{(} \mathrm{U}^{(\varepsilon)}_{\mathbf{C},\mathbf{f}_{m},\mathbf{f}_{n}}\big{)} _{m,n\geq 0}\) for \(U_{p}:\big{(}\mathrm{S}^{(\varepsilon)}_{p\text{-adic}},\mathbf{C}^{( \varepsilon)}\big{)}\longrightarrow\big{(}\mathrm{S}^{(\varepsilon)}_{p\text{-adic }},\mathbf{C}^{(\varepsilon)}\big{)}\);
* \(\mathrm{U}_{\mathbf{C}\to\mathbf{B}}=\mathrm{U}^{(\varepsilon)}_{\mathbf{C} \to\mathbf{B}}=\big{(}\mathrm{U}^{(\varepsilon)}_{\mathbf{C}\to\mathbf{B}, \mathbf{e}_{m},\mathbf{f}_{n}}\big{)}_{m,n\geq 0}\) for \(U_{p}:\big{(}\mathrm{S}^{(\varepsilon)}_{p\text{-adic}},\mathbf{C}^{( \varepsilon)}\big{)}\longrightarrow\big{(}\mathrm{S}^{\dagger,(\varepsilon)}, \mathbf{B}^{(\varepsilon)}\big{)}\).
In particular, we have the following equalities
\[\mathrm{U}^{(\varepsilon)}_{\mathbf{C}\to\mathbf{B}}=\mathrm{Y}^{(\varepsilon )}\mathrm{U}^{(\varepsilon)}_{\mathbf{C}}\quad\text{and}\quad\mathrm{U}^{ \dagger,(\varepsilon)}=\mathrm{U}^{(\varepsilon)}_{\mathbf{C}\to\mathbf{B}} \mathrm{Y}^{(\varepsilon),-1}. \tag{3.17.1}\]
A key input in our later proof of the local ghost conjecture is that the halo estimate from [13] "propagates" to estimates on \(\mathrm{U}^{(\varepsilon)}_{\mathbf{C}}\) and \(\mathrm{U}^{(\varepsilon)}_{\mathbf{C}\to\mathbf{B}}\).
**Proposition 3.18**.: _The matrix \(\mathrm{U}^{(\varepsilon)}_{\mathbf{C}}\) satisfies the following halo estimate:_
\[\mathrm{U}^{(\varepsilon)}_{\mathbf{C},\mathbf{f}_{m},\mathbf{f}_{n}}\in p^{ \deg\mathbf{e}^{(\varepsilon)}_{m}-\lfloor\deg\mathbf{e}^{(\varepsilon)}_{n}/ p\rfloor}\mathcal{O}\langle w/p\rangle. \tag{3.18.1}\]
Proof.: The \(U_{p}\)-action on \(\mathrm{S}_{p\text{-adic}}\) is a uniform limit of finite sums of actions \(\big{|}_{\big{(}\begin{smallmatrix}\alpha&\beta\\ \gamma&\delta\end{smallmatrix}\big{)}}\) with matrices \(\big{(}\begin{smallmatrix}\alpha&\beta\\ \gamma&\delta\end{smallmatrix}\big{)}\in\big{(}\begin{smallmatrix}p\mathbb{Z}_{p}&\mathbb{Z}_{p}\\ p\mathbb{Z}_{p}&\mathbb{Z}_{p}^{\times}\end{smallmatrix}\big{)}^{\det\in p\mathbb{Z}_{p}^{\times}}\) (see for example [11, (2.9.1)]). The estimate (3.18.1) for \(\mathrm{U}_{\mathbf{C},\mathbf{f}_{m},\mathbf{f}_{n}}\) follows from (3.14.1).
**Remark 3.19**.: This proposition is our new essential input to the local ghost conjecture. The analogous direct estimate of \(\mathrm{U}^{\dagger,(\varepsilon)}\) is more subtle.
**Notation 3.20**.: For an infinite matrix \(\mathrm{U}\) (indexed by \(\mathbb{N}\)) and two finite sets of nonnegative integers \(\underline{\zeta}:=\{\zeta_{1}<\zeta_{2}<\dots<\zeta_{n}\}\) and \(\underline{\xi}:=\{\xi_{1}<\xi_{2}<\dots<\xi_{n}\}\), we write \(\mathrm{U}(\underline{\zeta}\times\underline{\xi})\) for the \(n\times n\)-submatrix of \(\mathrm{U}\) with row indices \(\zeta_{1},\dots,\zeta_{n}\) and column indices \(\xi_{1},\dots,\xi_{n}\). When \(\underline{\zeta}=\underline{\xi}\), we write \(\mathrm{U}(\underline{\zeta})\) instead. For example, we often write \(\underline{n}=(1<2<\dots<n)\) and thus \(\mathrm{U}(\underline{n})\) is the upper left \(n\times n\)-submatrix we have considered above.
For \(\underline{\zeta}\subset\mathbb{N}\) a subset, define \(\deg(\underline{\zeta}):=\sum_{\zeta\in\underline{\zeta}}\deg\mathbf{e}_{\zeta}\).
**Definition-Proposition 3.21** (General corank theorem).: _For every \(k\equiv k_{\varepsilon}\bmod(p-1)\) and every two finite sets of integers \(\underline{\zeta}\) and \(\underline{\xi}\) of size \(n\) as above, we set_
\[r_{\underline{\zeta}\times\underline{\xi}}(k)=r_{\underline{\zeta} \times\underline{\xi}}^{(\varepsilon)}(k):=\#\big{\{}i\in\{1,\ldots,d_{k}^{ \operatorname{lw}}(\tilde{\varepsilon}_{1})\}\bigm{|}i\in\underline{\xi} \text{ and }d_{k}^{\operatorname{lw}}(\tilde{\varepsilon}_{1})+1-i\in\underline{\zeta} \big{\}},\] \[s_{\underline{\xi}}(k)=s_{\underline{\xi}}^{(\varepsilon)}(k):= \#\big{\{}i\in\underline{\xi}\bigm{|}i>d_{k}^{\operatorname{lw}}(\tilde{ \varepsilon}_{1})\big{\}}.\]
_In other words, \(r_{\underline{\zeta}\times\underline{\xi}}(k)\) is the number of "classical basis" elements in \(\mathbf{B}^{(\varepsilon)}\) indexed by \(\underline{\zeta}\) that are sent to \(\underline{\xi}\) by \(\operatorname{\widetilde{AL}}_{(k,\varepsilon_{1})}\), and \(s_{\underline{\xi}}(k)\) is the number of basis elements in \(\mathbf{B}^{(\varepsilon)}\) indexed by \(\underline{\xi}\) which are "non-classical"._
_Then the corank of \(\operatorname{U}_{k}^{\dagger,(\underline{\zeta})}(\underline{\zeta}\times \underline{\xi})\) is at least_
\[m_{\underline{\zeta}\times\underline{\xi}}(k)=m_{\underline{\zeta}\times \underline{\xi}}^{(\varepsilon)}(k):=n-d_{k}^{\operatorname{ur}}(\varepsilon _{1})-r_{\underline{\zeta}\times\underline{\xi}}(k)-s_{\underline{\xi}}(k). \tag{3.21.1}\]
_Consequently, \(\det\big{(}\operatorname{U}^{\dagger,(\varepsilon)}(\underline{\zeta}\times \underline{\xi})\big{)}\in\mathcal{O}(w/p)\) is divisible by \(((w-w_{k})/p)^{\max\{0,m_{\underline{\zeta}\times\underline{\xi}}(k)\}}\) in \(\mathcal{O}(w/p)\)._
_When \(\underline{\zeta}=\underline{\xi}\), we write \(r_{\underline{\zeta}}=r_{\underline{\zeta}}^{(\varepsilon)}(k)\) and \(m_{\underline{\zeta}}=m_{\underline{\zeta}}^{(\varepsilon)}(k)\) for \(r_{\underline{\zeta}\times\underline{\zeta}}(k)\) and \(m_{\underline{\zeta}\times\underline{\zeta}}(k)\), respectively._
_Taking \(\underline{\xi}=\underline{n}\) with \(d_{k}^{\operatorname{ur}}(\varepsilon_{1})<n<d_{k}^{\operatorname{lw}}( \tilde{\varepsilon}_{1})-d_{k}^{\operatorname{ur}}(\varepsilon_{1})\), we recover Corollary 3.10._
Proof.: By the property of the theta map (2.11.2), \(\operatorname{U}_{k}^{\dagger}\) is an upper triangular block matrix. So
\[\operatorname{rank}\big{(}\operatorname{U}_{k}^{\dagger}(\underline{\zeta}\times\underline{\xi})\big{)}\leq s_{\underline{\xi}}(k)+\operatorname{rank}\big{(}\operatorname{U}_{k}^{\dagger}\big{(}(\underline{\zeta}\cap\underline{d_{k}^{\operatorname{lw}}})\times(\underline{\xi}\cap\underline{d_{k}^{\operatorname{lw}}})\big{)}\big{)}.\]
By Proposition 3.6 (3), \(\operatorname{U}_{k}^{\operatorname{lw}}\) is the sum of a matrix with \(\operatorname{rank}\leq d_{k}^{\operatorname{ur}}\) and an anti-diagonal matrix; so
\[\operatorname{rank}\big{(}\operatorname{U}_{k}^{\dagger}\big{(}(\underline{\zeta}\cap\underline{d_{k}^{\operatorname{lw}}})\times(\underline{\xi}\cap\underline{d_{k}^{\operatorname{lw}}})\big{)}\big{)}\leq d_{k}^{\operatorname{ur}}+r_{\underline{\zeta}\times\underline{\xi}}(k);\]
The corank formula (3.21.1) follows by combining the above two inequalities.
The corollary and the last statement are immediate consequences of the above discussion.
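To see how the count works, consider the following purely hypothetical values: say \(d_{k}^{\operatorname{lw}}(\tilde{\varepsilon}_{1})=20\) and \(d_{k}^{\operatorname{ur}}(\varepsilon_{1})=1\). For \(\underline{\zeta}=\{1,2,3,4\}\) and \(\underline{\xi}=\{5,6,7,8\}\) we find \(r_{\underline{\zeta}\times\underline{\xi}}(k)=0\) and \(s_{\underline{\xi}}(k)=0\), so
\[m_{\underline{\zeta}\times\underline{\xi}}(k)=4-1-0-0=3,\]
and \(((w-w_{k})/p)^{3}\) divides \(\det\big{(}\operatorname{U}^{\dagger,(\varepsilon)}(\underline{\zeta}\times\underline{\xi})\big{)}\); whereas for \(\underline{\zeta}=\underline{\xi}=\{9,10,11,12\}\) we find \(r_{\underline{\zeta}}(k)=4\), so \(m_{\underline{\zeta}}(k)=4-1-4-0=-1\) and no divisibility is asserted.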
### Refined halo estimates
In our later proof of the local ghost theorem, we inevitably encounter a rather pathological case, which demands a slightly refined halo bound depending on the \(p\)-adic expansions of the row and column indices (see the proof of Proposition 5.4(1)). The reader is invited to skip the proofs in this portion on a first reading, and to come back only after seeing the complication that arises in the proof of Proposition 5.4(1).
For this part of the argument, we fix a matrix \(\big{(}\begin{smallmatrix}pa&b\\ pc&d\end{smallmatrix}\big{)}\in\big{(}\begin{smallmatrix}p\mathbb{Z}_{p}& \mathbb{Z}_{p}\\ p\mathbb{Z}_{p}&\mathbb{Z}_{p}^{\times}\end{smallmatrix}\big{)}\) with determinant \(p^{u}\delta\in p^{u}\mathbb{Z}_{p}^{\times}\). Let \(P=(P_{m,n})_{m,n\geq 0}\) and \(Q=(Q_{m,n})_{m,n\geq 0}\) denote the matrix of
\[\big{|}_{\big{(}\begin{smallmatrix}pa&b\\ pc&d\end{smallmatrix}\big{)}}:\big{(}\mathcal{C}^{0}(\mathbb{Z}_{p};\mathcal{O} \llbracket w\rrbracket^{(\varepsilon)}),(\mathbf{m}_{n}(z))_{n\geq 0}\big{)} \to \big{(}\mathcal{C}^{0}(\mathbb{Z}_{p};\mathcal{O} \llbracket w\rrbracket^{(\varepsilon)}),(\mathbf{m}_{n}(z))_{n\geq 0}\big{)} \text{and}\] \[\big{|}_{\big{(}\begin{smallmatrix}pa&b\\ pc&d\end{smallmatrix}\big{)}}:\big{(}\mathcal{C}^{0}(\mathbb{Z}_{p};\mathcal{O} \llbracket w\rrbracket^{(\varepsilon)}),(\mathbf{m}_{n}(z))_{n\geq 0}\big{)} \to \big{(}\mathcal{C}^{0}(\mathbb{Z}_{p};\mathcal{O} \llbracket w\rrbracket^{(\varepsilon)}),\big{(}\binom{z}{n}\big{)}_{n\geq 0} \big{)},\]
respectively. Let \(B\) denote the change of basis matrix from the usual Mahler basis \(\{\binom{z}{n};n\in\mathbb{Z}_{\geq 0}\}\) to the modified Mahler basis \(\{\mathbf{m}_{n}(z);n\in\mathbb{Z}_{\geq 0}\}\) as introduced in the proof of Lemma 3.14 so that \(P=B^{-1}Q\). Then (the proof of) Lemma 3.14 implies that \(B\in\operatorname{M}_{\infty}(\mathbb{Z}_{p})\) is an upper triangular matrix with diagonal entries in \(\mathbb{Z}_{p}^{\times}\).
**Notation 3.23**.: For two positive integers \(m,n\), write \(m=m_{0}+pm_{1}+\cdots\) and \(n=n_{0}+pn_{1}+\cdots\) for their \(p\)-adic expansions (so that each \(m_{i}\) and \(n_{i}\) belongs to \(\{0,\ldots,p-1\}\)). Let \(D(m,n)\) denote the number of indices \(i\geq 0\) such that \(n_{i+1}>m_{i}\).
The following are some elementary facts, whose proofs we leave to the readers.
**Lemma 3.24**.: _Let \(m,n\) be two nonnegative integers._
1. _We have_ \(D(m+1,n)+1\geq D(m,n)\) _and_ \(D(m,n)+1\geq D(m,n+c)\) _for any_ \(c\in\{1,\ldots,p\}\)_._
2. _Assume that_ \(m\geq\lfloor\frac{n}{p}\rfloor\)_. Then we have_ \[v_{p}\Big{(}\binom{m}{m-\lfloor\frac{n}{p}\rfloor}\Big{)}\geq D(m,n).\]
3. _We have an equality_ \[\binom{z}{m}\binom{z}{n}=\sum_{j=\max\{m,n\}}^{m+n}\binom{j}{j-m,j-n,m+n-j}\binom{z}{j},\] _where_ \(\binom{j}{j-m,j-n,m+n-j}\) _is the generalized binomial coefficient._
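For instance, with \(p=3\), \(m=4=1+1\cdot 3\) and \(n=8=2+2\cdot 3\): the only index \(i\) with \(n_{i+1}>m_{i}\) is \(i=0\) (as \(n_{1}=2>m_{0}=1\)), so \(D(4,8)=1\). Part (2) then predicts
\[v_{3}\Big{(}\binom{4}{4-\lfloor 8/3\rfloor}\Big{)}=v_{3}\Big{(}\binom{4}{2}\Big{)}=v_{3}(6)=1\geq D(4,8)=1,\]
with equality in this case.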
**Proposition 3.25**.: _We have the following refined estimate:_
\[P_{m,n},\,Q_{m,n}\in p^{D(m,n)}\cdot p^{m-\lfloor n/p\rfloor}\mathcal{O} \langle\tfrac{w}{p}\rangle. \tag{3.25.1}\]
Proof.: We first explain that (3.25.1) for \(Q_{m,n}\) implies that for \(P_{m,n}\). As \(P=B^{-1}Q\), we have \(P_{m,n}=\sum_{\ell\geq 0}(B^{-1})_{m,\ell}Q_{\ell,n}.\) So it is enough to prove that (when \(\ell\geq m\))
\[D(\ell,n)+\ell-\lfloor n/p\rfloor\geq D(m,n)+m-\lfloor n/p\rfloor.\]
But this follows from Lemma 3.24(1).
Now we focus on proving (3.25.1) for \(Q_{m,n}\). Recall from (2.4.4) that
\[\mathbf{m}_{n}\big{|}_{\binom{pa\ b}{pc\ d}}(z) =\varepsilon(\delta/\bar{d},\bar{d})\cdot(1+w)^{\log(\frac{pcz+d} {\omega(d)})/p}\mathbf{m}_{n}\Big{(}\frac{paz+b}{pcz+d}\Big{)}\] \[=\sum_{r\geq 0}\varepsilon(\delta/\bar{d},\bar{d})\cdot p^{r} \Big{(}\frac{w}{p}\Big{)}^{r}\binom{\log(\frac{pcz+d}{\omega(d)})/p}{r}\cdot \mathbf{m}_{n}\Big{(}\frac{paz+b}{pcz+d}\Big{)}. \tag{3.25.2}\]
We need to go back to several arguments in [10, § 3]. As proved in [10, Lemma 3.13], \(\binom{\log(\frac{pcz+d}{\omega(d)})/p}{r}\) is a \(\mathbb{Z}_{p}\)-linear combination of \(p^{s-r}\binom{z}{s}\) for \(s\in\mathbb{Z}_{\geq 0}\). So to prove (3.25.1) for \(Q_{m,n}\), it suffices to prove that, for every \(s\geq 0\), when expanding
\[p^{s}\binom{z}{s}\cdot\mathbf{m}_{n}\Big{(}\frac{paz+b}{pcz+d}\Big{)}\]
with respect to the Mahler basis \(\{\binom{z}{n}\ |\ n\in\mathbb{Z}_{\geq 0}\}\), the \(m\)th coefficient has \(p\)-adic valuation greater than or equal to \(m-\lfloor n/p\rfloor+D(m,n)\). For this, we need to reproduce the argument in [10, Lemma 3.12]: write
\[n!\cdot\mathbf{m}_{n}\Big{(}\frac{paz+b}{pcz+d}\Big{)}=\sum_{t\geq 0}c_{t}\cdot t! \binom{z}{t}\in\mathbb{Z}_{p}[\![pz]\!],\]
then [17, Lemma 3.11] implies that \(v_{p}(c_{t})\geq t\). Moreover, as \(\mathbf{m}_{n}(\frac{paz+b}{pcz+d})\in\mathcal{C}(\mathbb{Z}_{p},\mathcal{O})\), we know that (when \(t<\lfloor\frac{n}{p}\rfloor\)), \(v_{p}(c_{t})\geq v_{p}(\frac{n!}{t!})\). Using the combinatorial identity in Lemma 3.24(3), we deduce that
\[p^{s}\binom{z}{s}\cdot\mathbf{m}_{n}\Big{(}\frac{paz+b}{pcz+d} \Big{)} =\sum_{t\geq 0}c_{t}p^{s}\frac{t!}{n!}\binom{z}{s}\binom{z}{t}\] \[=\sum_{t\geq 0}\sum_{j\geq\max\{s,t\}}^{s+t}c_{t}p^{s}\frac{t!}{n! }\binom{j}{j-s,j-t,s+t-j}\binom{z}{j}.\]
Taking the term with \(j=m\geq s\), we need to show that whenever \(s+t\geq m\geq t\), we have
\[v_{p}\Big{(}c_{t}p^{s}\frac{t!}{n!}\cdot\binom{m}{m-s,m-t,s+t-m}\Big{)}\geq m- \Big{\lfloor}\frac{n}{p}\Big{\rfloor}+D(m,n).\]
Plugging in the earlier inequality \(v_{p}(c_{t})\geq\max\{t,v_{p}(\frac{n!}{t!})\}\), we need to show that
\[s-m+\Big{\lfloor}\frac{n}{p}\Big{\rfloor}+\max\Big{\{}t+v_{p}\Big{(}\frac{t!} {n!}\Big{)},0\Big{\}}+v_{p}\Big{(}\binom{m}{m-s,m-t,s+t-m}\Big{)}\geq D(m,n). \tag{3.25.3}\]
Now we forget the meaning of \(n\) and \(m\) as indices for basis elements, and prove (3.25.3) as an abstract inequality.
(i) When \(t\geq\lfloor\frac{n}{p}\rfloor\) (so in particular, \(m\geq\lfloor n/p\rfloor\)), we have \(t+v_{p}\big{(}\frac{t!}{n!}\big{)}\geq 0\), so it suffices to prove that
\[s+t-m+v_{p}\Big{(}\frac{t!}{\lfloor n/p\rfloor!}\Big{)}+v_{p}\Big{(}\binom{m} {m-s,m-t,s+t-m}\Big{)}\geq D(m,n).\]
This follows from the binomial identity and the inequalities below
\[\frac{t!}{\lfloor n/p\rfloor!}\binom{m}{m-s,m-t,s+t-m}=\binom{m}{m-\lfloor n /p\rfloor}\binom{t}{m-s}\cdot\frac{(m-\lfloor n/p\rfloor)!}{(m-t)!},\]
\[v_{p}\Big{(}\binom{m}{m-\lfloor\frac{n}{p}\rfloor}\Big{)}\geq D(m,n)\quad \text{and}\quad s+t-m\geq 0.\]
(ii) When \(t<\lfloor\frac{n}{p}\rfloor\), the inequality (3.25.3) is equivalent to
\[s-m+\Big{\lfloor}\frac{n}{p}\Big{\rfloor}+v_{p}\Big{(}\binom{m}{m-s,m-t,s+t- m}\Big{)}\geq D(m,n). \tag{3.25.4}\]
Write \(\ell:=\lfloor\frac{n}{p}\rfloor-t\) and \(n^{\prime}:=n-p\ell\). Note that Lemma 3.24(1) implies that
\[D(m,n^{\prime})+\ell\geq D(m,n).\]
So (3.25.4) follows from the same inequality with \(n\) replaced by \(n^{\prime}\). This is the case already treated in (i). The proposition is proved.
**Notation 3.26**.: Fix a relevant character \(\varepsilon\). Let \(\underline{\lambda}\) and \(\underline{\eta}\) be two subsets of positive integers of cardinality \(n\); for each such integer \(\lambda_{i}\), we write \(\deg\mathbf{e}_{\lambda_{i}}^{(\varepsilon)}=\lambda_{i,0}+p\lambda_{i,1}+\cdots\) in its \(p\)-adic expansion, and similarly for the \(\eta_{i}\)'s. _To reiterate, we are expanding \(\deg\mathbf{e}_{\lambda_{i}}^{(\varepsilon)}\) (as opposed to \(\lambda_{i}\))_, as they correspond to the \(m\) and \(n\) in Proposition 3.25. For each \(j\geq 0\), we define
\[D_{\leq\alpha}^{(\varepsilon)}(\underline{\lambda},j):=\#\{i\mid\lambda_{i,j} \leq\alpha\},\]
counting the number of \(\lambda_{i}\)'s whose \(j\)th digit is less than or equal to \(\alpha\). When \(\alpha=0\), we write \(D_{=0}^{(\varepsilon)}(\underline{\lambda},j)\) for \(D_{\leq\alpha}^{(\varepsilon)}(\underline{\lambda},j)\). We define \(D_{=0}^{(\varepsilon)}(\underline{\eta},j+1)\) similarly (but using the \((j+1)\)th digit). We define a tuple version of \(D(m,n)\) as follows:
\[D^{(\varepsilon)}(\underline{\lambda},\underline{\eta})=\sum_{j\geq 0}\Big{(} \max\big{\{}D_{=0}^{(\varepsilon)}(\underline{\lambda},j)-D_{=0}^{(\varepsilon )}(\underline{\eta},j+1),\ 0\big{\}}\Big{)}.\]
Similar to Lemma 3.24(1), we have the following obvious inequalities: if \(\underline{\eta}^{\prime}\) is given by \(\eta^{\prime}_{i}=\eta_{i}\) except for one \(i_{0}\) where \(\eta^{\prime}_{i_{0}}=\eta_{i_{0}}+1\) (so \(\deg\mathbf{e}_{\eta^{\prime}_{i_{0}}}^{(\varepsilon)}-\deg\mathbf{e}_{\eta_{ i_{0}}}^{(\varepsilon)}\in\{a,p-1-a\}\)), then
\[D^{(\varepsilon)}(\underline{\lambda},\underline{\eta}^{\prime})+1\geq D^{( \varepsilon)}(\underline{\lambda},\underline{\eta}). \tag{3.26.1}\]
**Corollary 3.27**.: _Keep the notation as above. Write \(\mathrm{U}_{\mathbf{C}}^{(\varepsilon)}(\underline{\lambda}\times\underline{ \eta})\) for the submatrix of \(\mathrm{U}_{\mathbf{C}}^{(\varepsilon)}\) with row indices in \(\underline{\lambda}\) and column indices in \(\underline{\eta}\). Then_
\[v_{p}\big{(}\mathrm{det}\big{(}\mathrm{U}_{\mathbf{C}}^{(\varepsilon)}( \underline{\lambda}\times\underline{\eta})\big{)}\big{)}\geq D^{(\varepsilon) }(\underline{\lambda},\underline{\eta})+\sum_{i=1}^{n}\Big{(}\deg\mathbf{e}_{ \lambda_{i}}^{(\varepsilon)}-\Big{\lfloor}\frac{\deg\mathbf{e}_{\eta_{i}}^{( \varepsilon)}}{p}\Big{\rfloor}\Big{)} \tag{3.27.1}\]
Proof.: Write \(\mathrm{det}\big{(}\mathrm{U}_{\mathbf{C}}(\underline{\lambda}\times \underline{\eta})\big{)}=\sum\limits_{\sigma\in S_{n}}\mathrm{sgn}(\sigma) \cdot\mathrm{U}_{\mathbf{C},\mathbf{f}_{\lambda_{\sigma(1)}},\mathbf{f}_{\eta _{1}}}\cdots\mathrm{U}_{\mathbf{C},\mathbf{f}_{\lambda_{\sigma(n)}},\mathbf{f}_ {\eta_{n}}}\). By Proposition 3.25, for every permutation \(\sigma\in S_{n}\),
\[v_{p}\big{(}\mathrm{U}_{\mathbf{C},\mathbf{f}_{\lambda_{\sigma(i)}},\mathbf{f} _{\eta_{i}}}\big{)}\geq\deg\mathbf{e}_{\lambda_{\sigma(i)}}-\Big{\lfloor} \frac{\deg\mathbf{e}_{\eta_{i}}}{p}\Big{\rfloor}+D\big{(}\deg\mathbf{e}_{ \lambda_{\sigma(i)}},\deg\mathbf{e}_{\eta_{i}}\big{)}.\]
Then the corollary is reduced to the following combinatorial inequality:
\[\sum_{i=1}^{n}D\big{(}\deg\mathbf{e}_{\lambda_{\sigma(i)}},\deg\mathbf{e}_{ \eta_{i}}\big{)}\geq D(\underline{\lambda},\underline{\eta}).\]
But this is clear, as the total contribution to all \(D\big{(}\deg\mathbf{e}_{\lambda_{\sigma(i)}},\deg\mathbf{e}_{\eta_{i}}\big{)}\)'s from the \(j\)th digit is at least \(\max\big{\{}D_{=0}(\underline{\lambda},j)-D_{=0}(\underline{\eta},j+1),\,0 \big{\}}\).
**Remark 3.28**.: We remark that \(D(\underline{\lambda},\underline{\eta})\) is often zero, e.g. when \(\underline{\lambda}=\underline{\eta}=\underline{n}\). As stated earlier, this notation is introduced to treat certain pathological cases; see the proof of Proposition 5.4(1) where our finer estimate in Corollary 3.27 is used.
Moreover, the same argument above in fact proves a stronger statement with \(D^{(\varepsilon)}(\underline{\lambda},\underline{\eta})\) in (3.27.1) replaced by \(\sum\limits_{j\geq 0}\Big{(}\max\limits_{\alpha=0,\ldots,p-2}\big{\{}D_{\leq \alpha}^{(\varepsilon)}(\underline{\lambda},j)-D_{\leq\alpha}^{(\varepsilon)}( \underline{\eta},j+1),\ 0\big{\}}\Big{)}\). But (3.27.1) seems to work better with our later inductive proof of Proposition 5.4(1)
We end this section with a technical lemma that is useful for computing \(D^{(\varepsilon)}(\underline{\lambda},\underline{\eta})\).
**Lemma 3.29**.: _Let \(n\) be a positive number._
1. _For every_ \(j\geq 0\)_, we have_ \(D_{=0}^{(\varepsilon)}(\underline{n},j)\leq D_{=0}^{(\varepsilon)}(\underline{ n},j+1)\)_._
2. _Write_ \(\deg\mathbf{e}_{n}^{(\varepsilon)}=n_{0}+pn_{1}+\cdots\) _in its_ \(p\)_-adic expansion. If either_ \(n_{j+1}=p-1\) _or_ \(n_{j}=n_{j+1}=0\)_, then_ \(D_{=0}^{(\varepsilon)}(\underline{n},j)=D_{=0}^{(\varepsilon)}(\underline{n},j+1)\)_._
_In particular, \(D^{(\varepsilon)}(\underline{n},\underline{n})=0\) for any \(n\)._
Proof.: Let \(\widetilde{D}_{=0}(\underline{n},j)\) be the set of nonnegative integers \(m\leq\deg\mathbf{e}_{n}\) whose \(p\)-adic expansion has \(j\)th digit equal to \(0\) and which are congruent to \(s_{\varepsilon}\) or \(a+s_{\varepsilon}\) modulo \(p-1\). Then \(D_{=0}(\underline{n},j)=\#\widetilde{D}_{=0}(\underline{n},j)\). The key is that for any nonnegative integer \(m=m_{0}+m_{1}p+\cdots\) that is congruent to \(s_{\varepsilon}\) or \(a+s_{\varepsilon}\) modulo \(p-1\) and has \(m_{j}=0\), we set
\[m^{\prime}=m_{0}+m_{1}p+\cdots+m_{j-1}p^{j-1}+m_{j+1}p^{j}+m_{j+2}p^{j+2}+\cdots.\]
Then \(m\leftrightarrow m^{\prime}\) defines a bijection between the nonnegative integers congruent to \(s_{\varepsilon}\) or \(a+s_{\varepsilon}\) modulo \(p-1\) whose \(j\)th digit is \(0\) and those whose \((j+1)\)th digit is \(0\). Yet \(m\leq\deg\mathbf{e}_{n}\) implies that \(m^{\prime}\leq\deg\mathbf{e}_{n}\). So \(D_{=0}(\underline{n},j)\leq D_{=0}(\underline{n},j+1)\).
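For instance, with \(p=3\) and \(j=1\), the integer \(m=11=2+0\cdot 3+1\cdot 9\) (so \(m_{1}=0\)) is sent to
\[m^{\prime}=2+1\cdot 3=5,\]
whose digit in position \(j+1=2\) is \(0\); note that \(m^{\prime}\leq m\) and that \(m-m^{\prime}=m_{2}(p^{2}-p)=6\) is divisible by \(p-1\), so the congruence class modulo \(p-1\) is preserved.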
The equality holds if and only if \(m^{\prime}\leq\deg\mathbf{e}_{n}\) implies \(m\leq\deg\mathbf{e}_{n}\). This latter condition holds under the assumption of (2). So under the assumption of (2), we have \(D_{=0}(\underline{n},j)=D_{=0}(\underline{n},j+1)\). The lemma follows.
## 4. Proof of local ghost conjecture I: Lagrange interpolation
In this and the next two sections, we keep Hypothesis 2.9 on \(\widetilde{\mathrm{H}}\), i.e. \(\widetilde{\mathrm{H}}\) is a primitive \(\mathcal{O}\llbracket\mathrm{K}_{p}\rrbracket\)-projective augmented module of type \(\bar{\rho}=\left(\begin{smallmatrix}\omega_{1}^{a+1}&*\neq 0\\ 0&1\end{smallmatrix}\right)\) such that \(\left(\begin{smallmatrix}p&0\\ 0&p\end{smallmatrix}\right)\) acts trivially on \(\widetilde{\mathrm{H}}\). We devote this and the next two sections to the proof of the local ghost conjecture, namely, Theorem 2.7. The proof is roughly divided into three steps, which we give a quick overview below. To simplify this introduction, we fix a relevant character \(\varepsilon=\omega^{-s_{\varepsilon}}\times\omega^{a+s_{\varepsilon}}\), and suppress it from the notation.
In a rough form, Theorem 2.7 says that \(C(w,t)\) and \(G(w,t)\) are "close" to each other; in particular, this says that, for each \(n\), near each zero \(w_{k}\) of \(g_{n}(w)\), the function \(c_{n}(w)\) is very small. This leads us to the following.
Step I: (Lagrange interpolation) For each \(n\), we formally apply Lagrange interpolation to \(c_{n}(w)\) relative to the zeros \(w_{k}\) of \(g_{n}(w)\) (with multiplicity), that is, to obtain a formula of the form (4.0.1) \[c_{n}(w)=\sum_{\begin{subarray}{c}k\equiv k_{\varepsilon}\bmod(p-1)\\ m_{n}(k)\neq 0\end{subarray}}a_{k}(w)\cdot g_{n,\hat{k}}(w)+h(w)g_{n}(w).\] We give a sufficient condition on the \(p\)-adic valuations of the coefficients of \(a_{k}(w)\) that would imply Theorem 2.7. This is Proposition 4.4.
In fact, we shall prove a similar \(p\)-adic valuation condition for _all_ (principal or not) \(n\times n\)-submatrices of the matrix of \(U_{p}\) with respect to the power basis. More precisely, given two tuples \(\underline{\zeta}\) and \(\underline{\xi}\) of \(n\) positive integers, we apply the same Lagrange interpolation (4.0.1) to \(\det(\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi}))\) in place of \(c_{n}(w)\), and we shall fix \(\underline{\zeta}\) and \(\underline{\xi}\) for the rest of this introduction and still use \(a_{k}(w)\) and \(h(w)\) to denote the corresponding power series appearing in (4.0.1) (with \(c_{n}(w)\) replaced by \(\det(\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi}))\)).
We point out that this is a question for each individual zero \(w_{k}\) of \(g_{n}^{(\varepsilon)}(w)\). To simplify the discussion of this introduction, we only consider one such \(k\) for which \(n<\frac{1}{2}d_{k}^{\mathrm{Iw}}\); the other case has little variation. We write each \(a_{k}(w)\) as \(a_{k,0}+a_{k,1}(w-w_{k})+a_{k,2}(w-w_{k})^{2}+\cdots\), and we need to prove that for every \(i<m_{n}(k)\),
\[v_{p}(a_{k,i})\geq\Delta_{k,\frac{1}{2}d_{k}^{\mathrm{new}}-i}-\Delta^{\prime }_{k,\frac{1}{2}d_{k}^{\mathrm{new}}-m_{n}(k)}+\frac{1}{2}\big{(}\deg( \underline{\zeta})-\deg(\underline{\xi})\big{)}, \tag{4.0.2}\]
where the term \(\frac{1}{2}(\deg(\underline{\zeta})-\deg(\underline{\xi}))\) is introduced to "balance" the total degrees of basis elements in \(\underline{\zeta}\) and \(\underline{\xi}\). Here, a subtle technical point is that we truly need to use \(\Delta-\Delta^{\prime}\) in order to implement the induction we perform later; see the comments after the statement of Proposition 4.7. As we shall explain just after the statement of Theorem 5.2, the proof of Theorem 2.7 is then reduced to prove (4.0.2).
Step II: (Cofactor expansion argument) We reduce the proof of (4.0.2) to an estimate on the determinant of the minors of \(\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\) of smaller size.
For simplicity, assume that \(s_{\underline{\xi}}(k)=0\) (see Definition-Proposition 3.21). Then the corank theorem (Definition-Proposition 3.21) implies that \(a_{k,i}=0\) when \(i<m_{\underline{\zeta}\times\underline{\xi}}(k)-2r_{\underline{\zeta}\times \underline{\xi}}(k)\). Moreover, we can write \(\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})=\mathrm{T}_{k}( \underline{\zeta}\times\underline{\xi})+\mathrm{A}_{k}(\underline{\zeta} \times\underline{\xi})\), where \(\mathrm{A}_{k}(\underline{\zeta}\times\underline{\xi})\) has coefficients in \(E\) and has exactly \(r_{\underline{\zeta}\times\underline{\xi}}(k)\) nonzero entries (coming from the matrix for the Atkin-Lehner operator at \(w_{k}\)), and \(\mathrm{T}_{k}(\underline{\zeta}\times\underline{\xi})\) is a matrix in \(E\langle w/p\rangle\) whose evaluation at \(w=w_{k}\) has rank at most \(d_{k}^{\mathrm{ur}}\).
We apply a version of cofactor expansion to \(\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})=\mathrm{A}_{k}( \underline{\zeta}\times\underline{\xi})+\mathrm{T}_{k}(\underline{\zeta} \times\underline{\xi})\), to express \(\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi}) \bigr{)}\) as a linear combination of the determinant of smaller minors of \(\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\) plus a term that is divisible by \((w-w_{k})^{m_{\underline{\zeta}\times\underline{\xi}}(k)}\). This way, we essentially reduce the question of estimating \(v_{p}(a_{k,i})\) to the question of estimating the Taylor coefficients for the determinant of smaller minors, when expanded as a power series in \(E[\![w-w_{k}]\!]\) (see the Step III below). There are several subtleties when executing this plan; we leave the discussion to the corresponding points, especially the discussion before Lemma 6.4 and SS 6.10.
Step III: (Estimating power series expansion for smaller minors) Interestingly enough, what is needed in Step II from the inductive proof is an estimate of \(v_{p}(a^{\prime}_{k,i})\) in the expansion of \(c_{n^{\prime}}(w)/g_{n^{\prime},\hat{k}}(w)=\sum_{i\geq 0}a^{\prime}_{k,i}(w-w_{k})^{i}\) in \(E[\![w-w_{k}]\!]\) not for \(i<m_{n^{\prime}}(k)\) but for \(i\geq m_{n^{\prime}}(k)\).
This estimate will be deduced in Proposition 5.4 from the estimate of the Lagrange interpolation coefficients \(a^{\prime}_{k^{\prime},i}\) of \(c_{n^{\prime}}(w)\) for _other_ \(k^{\prime}\neq k\) and \(i\leq m_{n^{\prime}}(k^{\prime})\), as well as the polynomial \(h^{\prime}(w)\) that appears in the Lagrange interpolation of the determinant of the smaller minor. The latter gives the most trouble; in most cases, it follows immediately from the usual halo estimate, but in some pathological cases, we need to invoke the refined halo estimate in Proposition 3.25.
To streamline the logical flow, we will prove Step I in this section, then prove Step III in the next section, and finally complete Step II in Section 6.
We first give a quick discussion on the "ordinary" part of the characteristic power series.
**Proposition 4.1**.:
1. _For a relevant character_ \(\varepsilon\)_,_ \(c_{1}^{(\varepsilon)}(w)\in\mathcal{O}[\![w]\!]\) _is a unit if and only if_ \(\varepsilon=1\times\omega^{a}\)_._
2. _For a relevant character_ \(\varepsilon\) _and_ \(k\in\mathbb{Z}_{\geq 2}\)_, writing_ \(d_{\varepsilon,k}:=d_{k}^{\mathrm{lw}}(\varepsilon\cdot(1\times\omega^{2-k}))\)_, then_ \(\bigl{(}d_{\varepsilon,k},v_{p}(c_{d_{\varepsilon,k}}^{(\varepsilon)}(w_{k})) \bigr{)}\) _is a vertex of_ \(\mathrm{NP}(C^{(\varepsilon)}(w_{k},-))\)_, and_ \(\bigl{(}d_{\varepsilon,k},v_{p}(g_{d_{\varepsilon,k}}^{(\varepsilon)}(w_{k})) \bigr{)}\) _is a vertex of_ \(\mathrm{NP}(G^{(\varepsilon)}(w_{k},-))\)_._
Proof.: (1) We first show that for \(s_{\varepsilon}>0\), \(c_{1}^{(\varepsilon)}(w)\) is not a unit in \(\mathcal{O}[\![w]\!]\). Indeed, in this case, Definition-Proposition 2.12(3) implies that \(t_{1}^{(\varepsilon)},t_{2}^{(\varepsilon)}\geq\delta_{\varepsilon}+1\); so for \(k=k_{\varepsilon}+(p-1)\delta_{\varepsilon}=2+s_{\varepsilon}+\{a+s_{ \varepsilon}\}\), Definition-Proposition 2.12(2)(3) imply respectively \(d_{k}^{\mathrm{ur}}(\varepsilon_{1})=0\), and \(d_{k}^{\mathrm{lw}}(\tilde{\varepsilon}_{1})=2\)
This means that \(\mathrm{S}_{k}^{\mathrm{Iw}}(\tilde{\varepsilon}_{1})\) consists only of newforms; so the \(U_{p}\)-slopes are \(\frac{k-2}{2}=\frac{s_{\varepsilon}+\{a+s_{\varepsilon}\}}{2}>0\). In particular, this shows that \(v_{p}(c_{1}^{(\varepsilon)}(w_{k}))>0\) and thus \(c_{1}^{(\varepsilon)}(w)\) is not a unit.
When \(s_{\varepsilon}=0\) (and thus \(\varepsilon=1\times\omega^{a}\)), \(c_{1}^{(1\times\omega^{a})}(w_{2})\) is a \(p\)-adic unit as proved in [13, Proposition A.7]. So \(c_{1}^{(1\times\omega^{a})}(w)\in\mathcal{O}\llbracket w\rrbracket^{\times}\).
(2) By part (1) and Proposition 2.11(2), the \(d_{\varepsilon,k}\)th slope in \(\mathrm{NP}(C(w_{k},-))\) is \(\leq k-1\) and the equality holds precisely when \(s_{\varepsilon^{\prime\prime}}:=\{k-2-a-s_{\varepsilon}\}=0\). Similarly, part (1) and Proposition 2.11(1) imply that the \((d_{\varepsilon,k}+1)\)th slope is \(\geq k-1\) and the equality holds if and only if \(s_{\varepsilon^{\prime}}:=\{1+s_{\varepsilon}-k\}=0\). Yet, clearly, \(1+s_{\varepsilon}\) and \(2+a+s_{\varepsilon}\) are never congruent modulo \(p-1\). So the \(d_{\varepsilon,k}\)th slope and the \((d_{\varepsilon,k}+1)\)th slope of \(\mathrm{NP}(C(w_{k},-))\) are never equal, proving that \(\big{(}d_{\varepsilon,k},v_{p}(c_{d_{\varepsilon,k}}(w_{k}))\big{)}\) is a vertex of \(\mathrm{NP}(C(w_{k},-))\).
The same argument above with Proposition 2.11 replaced by Proposition 2.16 proves that \(\big{(}d_{\varepsilon,k},v_{p}(g_{d_{\varepsilon,k}}(w_{k}))\big{)}\) is a vertex of \(\mathrm{NP}(G(w_{k},-))\).
We recall the standard Lagrange interpolation formula, as our main tool to study the local ghost conjecture.
**Definition-Lemma 4.2**.: _Let \(f(w)\in\mathcal{O}\langle w/p\rangle\) be a power series, and let \(g(w)=(w-x_{1})^{m_{1}}\cdots(w-x_{s})^{m_{s}}\in\mathbb{Z}_{p}[w]\) be a polynomial with zeros \(x_{1},\ldots,x_{s}\in p\mathbb{Z}_{p}\) and multiplicities \(m_{1},\ldots,m_{s}\in\mathbb{N}\). Then we can uniquely write \(f(w)\) as_
\[f(w)=\sum_{i=1}^{s}\Big{(}A_{i}(w)\frac{g(w)}{(w-x_{i})^{m_{i}}}\Big{)}+h(w) \cdot g(w), \tag{4.2.1}\]
_where each \(A_{j}(w)\in E[w]\) is a polynomial of degree \(<m_{j}\), and \(h(w)\in E\langle w/p\rangle\), characterized by the condition that for each \(i\), \(f(w)\equiv A_{i}(w)\frac{g(w)}{(w-x_{i})^{m_{i}}}\) modulo \((w-x_{i})^{m_{i}}\) when viewed as power series in \(E\llbracket w-x_{i}\rrbracket\)._
_We call the expression (4.2.1) the Lagrange interpolation of \(f(w)\) along \(g(w)\)._
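As a toy example of (4.2.1), take \(f(w)=w^{2}\) and \(g(w)=(w-p)^{2}\) (so \(s=1\), \(x_{1}=p\), \(m_{1}=2\)); expanding \(f\) around \(w=p\) gives
\[w^{2}=\big{(}p^{2}+2p(w-p)\big{)}+1\cdot(w-p)^{2},\]
so \(A_{1}(w)=p^{2}+2p(w-p)\) (of degree \(<2\)) and \(h(w)=1\), and indeed \(f(w)\equiv A_{1}(w)\) modulo \((w-p)^{2}\).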
**Notation 4.3**.: For \(n\in\mathbb{N}\) and a relevant character \(\varepsilon\), recall the notation \(g_{n,\hat{k}}^{(\varepsilon)}(w)\) from (2.16.1). We write the \(n\)th coefficient \(c_{n}^{(\varepsilon)}(w)\) of the characteristic power series \(C^{(\varepsilon)}(w,t)\) in terms of its Lagrange interpolation along \(g_{n}^{(\varepsilon)}(w)\) as follows.
\[c_{n}^{(\varepsilon)}(w)=\sum_{\begin{subarray}{c}k\equiv k_{\varepsilon} \bmod(p-1)\\ m_{n}^{(\varepsilon)}(k)\neq 0\end{subarray}}\!\!\!\big{(}A_{k}^{(n, \varepsilon)}(w)\cdot g_{n,\hat{k}}^{(\varepsilon)}(w)\big{)}+h_{n}^{( \varepsilon)}(w)\cdot g_{n}^{(\varepsilon)}(w), \tag{4.3.1}\]
where \(A_{k}^{(n,\varepsilon)}(w)=A_{k,0}^{(n,\varepsilon)}+A_{k,1}^{(n,\varepsilon) }(w-w_{k})+\cdots+A_{k,m_{n}^{(\varepsilon)}(k)-1}^{(n,\varepsilon)}(w-w_{k}) ^{m_{n}^{(\varepsilon)}(k)-1}\in E[w]\) is a polynomial of degree \(\leq m_{n}^{(\varepsilon)}(k)-1\), and \(h_{n}^{(\varepsilon)}(w)\in E\langle w/p\rangle\).
**Proposition 4.4**.: _To prove Theorem 2.7, it suffices to prove that, for every relevant character \(\varepsilon\), every \(n\in\mathbb{N}\), and every ghost zero \(w_{k}\) of \(g_{n}^{(\varepsilon)}(w)\), we have_
\[v_{p}(A_{k,i}^{(n,\varepsilon)})\geq\Delta_{k,\frac{1}{2}d_{k}^{\mathrm{new}}(\varepsilon_{1})-i}^{(\varepsilon)}-\Delta_{k,\frac{1}{2}d_{k}^{\mathrm{new}}(\varepsilon_{1})-m_{n}^{(\varepsilon)}(k)}^{(\varepsilon)\prime}\quad\text{for}\quad i=0,1,\ldots,m_{n}^{(\varepsilon)}(k)-1. \tag{4.4.1}\]
Proof.: We assume that (4.4.1) holds for every \(\varepsilon\), \(n\), \(k\) as above. Then Theorem 2.7 clearly follows from the following two claims:
**Claim 1**: Every point \((n,v_{p}(c_{n}^{(\varepsilon)}(w_{\star})))\) lies on or above \(\mathrm{NP}(G^{(\varepsilon)}(w_{\star},-))\).
**Claim 2**: If \(\big{(}n,v_{p}(g_{n}^{(\varepsilon)}(w_{\star}))\big{)}\) is a vertex of \(\mathrm{NP}(G^{(\varepsilon)}(w_{\star},-))\), then we have \(v_{p}(c_{n}^{(\varepsilon)}(w_{\star}))=v_{p}(g_{n}^{(\varepsilon)}(w_{\star}))\).
Through the Lagrange interpolation (4.3.1), we will reduce the two Claims to the following.
**Statement 4.5**.: For each \(w_{\star}\in\mathfrak{m}_{\mathbb{C}_{p}}\) and each \(k=k_{\varepsilon}+(p-1)k_{\bullet}\) such that \(m_{n}^{(\varepsilon)}(k)\neq 0\),
1. The point \(\big{(}n,v_{p}\big{(}A_{k}^{(n,\varepsilon)}(w_{\star})g_{n,\hat{k}}^{( \varepsilon)}(w_{\star})\big{)}\big{)}\) lies on or above the Newton polygon \(\mathrm{NP}(G^{(\varepsilon)}(w_{\star},-))\); and
2. moreover if \(\big{(}n,v_{p}(g_{n}^{(\varepsilon)}(w_{\star}))\big{)}\) is a vertex of \(\mathrm{NP}(G^{(\varepsilon)}(w_{\star},-))\), then \(v_{p}\big{(}A_{k}^{(n,\varepsilon)}(w_{\star})g_{n,\hat{k}}^{( \varepsilon)}(w_{\star})\big{)}>v_{p}\big{(}g_{n}^{(\varepsilon)}(w_{\star}) \big{)}\).
Indeed, we will prove (a strengthened version of) this later in Proposition 4.7. We now assume Statement 4.5 to finish the proof of Proposition 4.4. For this, we fix a relevant character \(\varepsilon\) and omit it from the notations when no confusion arises.
_Proof of_ **Claim 1** _assuming Statement 4.5(1)._
Fix \(n\in\mathbb{N}\). First, by Proposition 2.19, \(\Delta_{k,\frac{1}{2}d_{k}^{\mathrm{new}}-i}>\Delta^{\prime}_{k,\frac{1}{2}d_ {k}^{\mathrm{new}}-m_{n}(k)}\) for any \(i=0,\ldots,m_{n}(k)-1\); so condition (4.4.1) implies that each \(A_{k,i}^{(n)}(w)\in\mathcal{O}[\![w]\!]\). But we know that \(c_{n}(w)\in\mathcal{O}[\![w]\!]\); it follows that \(h_{n}(w)\in\mathcal{O}[\![w]\!]\) (even though the Lagrange interpolation happens in a bigger ring \(E\langle w/p\rangle\)). From this, we deduce that the last term in (4.3.1) satisfies: for every \(w_{\star}\in\mathfrak{m}_{\mathbb{C}_{p}}\),
\[v_{p}\big{(}h_{n}(w_{\star})\cdot g_{n}(w_{\star})\big{)}\geq v_{p}(g_{n}(w_{ \star})).\]
By Statement 4.5(1), the evaluations at \(w_{\star}\) of all other terms in the Lagrange interpolation (4.0.1) have \(p\)-adic valuation greater than or equal to the height of \(G^{(\varepsilon)}(w_{\star},-)\) at \(x=n\). Claim 1 follows.
_Proof of_ **Claim 2** _assuming Statement 4.5(2)._
It is enough to show that, in the Lagrange interpolation (4.3.1), \(h_{n}^{(\varepsilon)}(w)\in\mathcal{O}[\![w]\!]^{\times}\) is a unit. Indeed, if this is known, and if \(\big{(}n,v_{p}(g_{n}^{(\varepsilon)}(w_{\star}))\big{)}\) is a vertex of \(\mathrm{NP}(G^{(\varepsilon)}(w_{\star},-))\), then Statement 4.5(2) implies
\[v_{p}\big{(}A_{k}(w_{\star})g_{n,\hat{k}}^{(\varepsilon)}(w_{\star})\big{)}> v_{p}(g_{n}^{(\varepsilon)}(w_{\star}))\quad\text{yet}\quad v_{p}\big{(}h_{n}^{( \varepsilon)}(w_{\star})g_{n}^{(\varepsilon)}(w_{\star})\big{)}=v_{p}(g_{n}^ {(\varepsilon)}(w_{\star})).\]
From this, we deduce that \(v_{p}(c_{n}^{(\varepsilon)}(w))=v_{p}(g_{n}^{(\varepsilon)}(w))\).
To prove that \(h_{n}^{(\varepsilon)}(w)\) is a unit, we take one \(k\not\equiv k_{\varepsilon}\bmod(p-1)\) such that \(d_{k}^{\mathrm{Iw}}(\varepsilon\cdot(1\times\omega^{2-k}))=n\). (This is possible because per Definition-Proposition 2.12(1), \(s_{\varepsilon}\) and \(\{a+s_{\varepsilon}\}\) are not "adjacent" in the cycle modulo \(p-1\).) Set \(s_{\varepsilon^{\prime\prime}}:=\{k-2-a-s_{\varepsilon}\}\). By Proposition 4.1(2), \(\big{(}n,v_{p}(c_{n}^{(\varepsilon)}(w_{k}))\big{)}\) is a vertex of \(\mathrm{NP}(C^{(\varepsilon)}(w_{k},-))\) and \(\big{(}n,v_{p}(c_{n}^{(\varepsilon^{\prime\prime})}(w_{k}))\big{)}\) is a vertex of \(\mathrm{NP}(C^{(\varepsilon^{\prime\prime})}(w_{k},-))\).
We use the Atkin-Lehner involution between \(\mathrm{S}_{k}^{\mathrm{Iw}}(\varepsilon\cdot(1\times\omega^{2-k}))\) and \(\mathrm{S}_{k}^{\mathrm{Iw}}(\varepsilon^{\prime\prime}\cdot(1\times\omega^{2-k}))\). Combining Proposition 2.11(2) and Proposition 2.16(2), we deduce that
\[v_{p}(c_{n}^{(\varepsilon)}(w_{k}))+v_{p}(c_{n}^{(\varepsilon^{\prime\prime}) }(w_{k}))=(k-1)n=v_{p}(g_{n}^{(\varepsilon)}(w_{k}))+v_{p}(g_{n}^{( \varepsilon^{\prime\prime})}(w_{k})).\]
As argued above, for each zero \(w_{k_{1}}\) of \(g_{n}^{(\varepsilon)}(w)\) and each zero \(w_{k_{2}}\) of \(g_{n}^{(\varepsilon^{\prime\prime})}(w)\), we have
\[v_{p}\big{(}A_{k_{1}}(w_{k})g_{n,\hat{k}_{1}}^{(\varepsilon)}(w_{k})\big{)}>v_ {p}(g_{n}^{(\varepsilon)}(w_{k}))\quad\text{and}\quad v_{p}\big{(}A_{k_{2}}(w_{k })g_{n,\hat{k}_{2}}^{(\varepsilon^{\prime\prime})}(w_{k})\big{)}>v_{p}(g_{n}^{ (\varepsilon^{\prime\prime})}(w_{k})).\]
From this together with (4.0.1), we deduce that
\[v_{p}(g_{n}^{(\varepsilon)}(w_{k})h_{n}^{(\varepsilon)}(w_{k}))+v_{p}(g_{n}^{( \varepsilon^{\prime\prime})}(w_{k})h_{n}^{(\varepsilon^{\prime\prime})}(w_{k}))= (k-1)n=v_{p}(g_{n}^{(\varepsilon)}(w_{k}))+v_{p}(g_{n}^{(\varepsilon^{\prime \prime})}(w_{k})).\]
Since \(h_{n}^{(\varepsilon)}(w),h_{n}^{(\varepsilon^{\prime\prime})}(w)\in\mathcal{O}[\![w]\!]\), we deduce that \(h_{n}^{(\varepsilon)}(w_{k}),h_{n}^{(\varepsilon^{\prime\prime})}(w_{k})\in \mathcal{O}^{\times}\); so \(h_{n}^{(\varepsilon)}(w)\) and \(h_{n}^{(\varepsilon^{\prime\prime})}(w)\) are both units in \(\mathcal{O}[\![w]\!]\).
To sum up, we have completed the proof of Proposition 4.4 assuming Statement 4.5.
We record here a technical result [11, Proposition 5.16] that we shall frequently use in the proof of Statement 4.5.
**Proposition 4.6**.: _Fix \(w_{\star}\in\mathfrak{m}_{\mathbb{C}_{p}}\) and a weight \(k\equiv k_{\varepsilon}\bmod(p-1)\)._
1. _Let_ \(\mathrm{nS}_{w_{\star},k}^{(\varepsilon)}=\big{(}\frac{1}{2}d_{k}^{\mathrm{ Iw}}(\tilde{\varepsilon}_{1})-L_{w_{\star},k}^{(\varepsilon)},\,\frac{1}{2}d_{k}^{ \mathrm{Iw}}(\tilde{\varepsilon}_{1})+L_{w_{\star},k}^{(\varepsilon)}\big{)}\) _be a near-Steinberg range. Then for any integer_ \(k^{\prime}=k_{\varepsilon}+(p-1)k^{\prime}_{\star}\neq k\) _such that_ \(v_{p}(w_{k^{\prime}}-w_{k})\geq\Delta_{k,L_{w_{\star},k}^{(\varepsilon)}}^{( \varepsilon)}-\Delta_{k,L_{w_{\star},k}^{(\varepsilon)}-1}^{(\varepsilon)}\)_;_ _the ghost multiplicity_ \(m_{n}^{(\varepsilon)}(k^{\prime})\) _is linear in_ \(n\) _when_ \(n\in\overline{\mathrm{nS}_{w_{\star},k}^{(\varepsilon)}}\)_._
2. _Let_ \(\mathbf{k}:=\{k,k_{1},\ldots,k_{r}\}\) _with each_ \(k_{i}\equiv k_{\varepsilon}\bmod(p-1)\) _be a set of integers including_ \(k\)_. Suppose that there is an interval_ \([n_{-},n_{+}]\) _such that, for any_ \(k^{\prime}=k_{\varepsilon}+(p-1)k^{\prime}_{\star}\notin\mathbf{k}\) _with_ \(v_{p}(w_{k^{\prime}}-w_{k})\geq v_{p}(w_{\star}-w_{k})\)_, the ghost multiplicity_ \(m_{n}^{(\varepsilon)}(k^{\prime})\) _is linear in_ \(n\) _when_ \(n\in[n_{-},n_{+}]\)_. Then for any set of constants_ \((A_{n})_{n\in[n_{-},n_{+}]}\)_, the two lists of points_ \[P_{n}=\big{(}n,A_{n}+v_{p}(g_{n,\hat{\mathbf{k}}}^{(\varepsilon)}(w_{\star}))\big{)},\quad Q_{n}=\big{(}n,A_{n}+v_{p}(g_{n,\hat{\mathbf{k}}}^{(\varepsilon)}(w_{k}))\big{)}\quad\text{ with }n\in[n_{-},n_{+}]\] _differ by a linear function, where_ \(g_{n,\hat{\mathbf{k}}}^{(\varepsilon)}(w):=g_{n}^{(\varepsilon)}(w)\big{/}\prod_{k^{\prime}\in\mathbf{k}}(w-w_{k^{\prime}})^{m_{n}^{(\varepsilon)}(k^{\prime})}\)_._
Proof.: In this proof, \(\varepsilon\) will be fixed throughout; so we suppress it from the notation. We will treat the two statements uniformly. In the proof below, when writing \(k_{0}\), we mean an empty object in the case of the first statement and the given weight \(k_{0}\) in the case of the second statement (much as [11, Theorem 5.19] is proved in a way that applies also to [11, Proposition 5.26]). Moreover, when treating the second statement, we will write \(w_{\star}\) for \(w_{k_{0}}\). We separate the discussion into the following three cases:
Case A: Assume that \(n\in\mathrm{nS}_{w_{\star},k}\) so that \((n,v_{p}(g_{n}(w_{\star})))\) is not a vertex of \(\mathrm{NP}(G(w_{\star},-))\) by Proposition 2.18(2). We need to prove a non-strict inequality in this case.
Write \(L=L_{w_{\star},k}\) for short, so \(n\in\mathrm{nS}_{w_{\star},k}=(\frac{1}{2}d_{k}^{\mathrm{Iw}}-L,\frac{1}{2}d_{k}^{\mathrm{Iw}}+L)\) and \(v_{p}(w_{\star}-w_{k})\geq\Delta_{k,L}-\Delta_{k,L-1}\). We quickly remark that, for the second statement, the condition \(v_{p}(w_{k_{0}}-w_{k})\geq\Delta_{k,L}-\Delta_{k,L-1}\) implies that the ghost multiplicity \(m_{n^{\prime}}(k_{0})\) is linear in \(n^{\prime}\in\overline{\mathrm{nS}}_{w_{k_{0}},k}\) by Proposition 4.6(1). In particular, \(\overline{\mathrm{nS}}_{w_{k_{0}},k}\subseteq[d_{k_{0}}^{\mathrm{ur}},d_{k_{0}}^{\mathrm{Iw}}-d_{k_{0}}^{\mathrm{ur}}]\).
We need to prove that, for each \(i=0,\ldots,m_{n}(k)-1\), the point
\[P:=\big{(}n,\,v_{p}\big{(}A(w_{\star}-w_{k})^{i}\cdot g_{n,\hat{k},\hat{k}_{0 }}(w_{\star})\big{)}\big{)}\]
lies on or above the line segment \(\overline{Q_{-}Q_{+}}\) with
\[Q_{-}:=\big{(}\tfrac{1}{2}d_{k}^{\mathrm{Iw}}-L,\;v_{p}\big{(}g_{\frac{1}{2}d_{k}^{\mathrm{Iw}}-L,\hat{k}_{0}}(w_{\star})\big{)}\big{)}\quad\text{and}\quad Q_{+}:=\big{(}\tfrac{1}{2}d_{k}^{\mathrm{Iw}}+L,\;v_{p}\big{(}g_{\frac{1}{2}d_{k}^{\mathrm{Iw}}+L,\hat{k}_{0}}(w_{\star})\big{)}\big{)}.\]
(Here we do not need to require that \(Q_{-}\) and \(Q_{+}\) are vertices of the lower convex hull of all points \(\big{(}n^{\prime},v_{p}(g_{n^{\prime},\hat{k}_{0}}(w_{k_{0}}))\big{)}_{n^{ \prime}\in\mathbb{N}^{*}}\))
Applying Proposition 4.6(2) to the case with \(\mathbf{k}=\{k,k_{0}\}\), \(w_{\star}\), and \([n_{-},n_{+}]=\big{[}\tfrac{1}{2}d_{k}^{\mathrm{Iw}}-L,\tfrac{1}{2}d_{k}^{\mathrm{Iw}}+L\big{]}\), we are reduced to proving that the point
\[P^{\prime}=\big{(}n,\;v_{p}(A)+i\cdot v_{p}(w_{\star}-w_{k})+v_{p}\big{(}g_{n, \hat{k},\hat{k}_{0}}(w_{k})\big{)}\big{)}\]
lies on or above the line segment \(\overline{Q^{\prime}_{-}Q^{\prime}_{+}}\) with
\[Q^{\prime}_{-}=\big{(}\tfrac{1}{2}d_{k}^{\mathrm{Iw}}-L,\;(\tfrac{1}{2}d_{k}^{\mathrm{new}}-L)\cdot v_{p}(w_{\star}-w_{k})+v_{p}\big{(}g_{\frac{1}{2}d_{k}^{\mathrm{Iw}}-L,\hat{k},\hat{k}_{0}}(w_{k})\big{)}\big{)},\]
\[Q^{\prime}_{+}=\big{(}\tfrac{1}{2}d_{k}^{\mathrm{Iw}}+L,\;(\tfrac{1}{2}d_{k}^{\mathrm{new}}-L)\cdot v_{p}(w_{\star}-w_{k})+v_{p}\big{(}g_{\frac{1}{2}d_{k}^{\mathrm{Iw}}+L,\hat{k},\hat{k}_{0}}(w_{k})\big{)}\big{)}.\]
Moreover, if we write \(n=\tfrac{1}{2}d_{k}^{\mathrm{Iw}}+\ell\), then
\[v_{p}\big{(}g_{n,\hat{k},\hat{k}_{0}}(w_{k})\big{)} = \Delta^{\prime}_{k,\ell}+\tfrac{k-2}{2}\ell-m_{n}(k_{0})\cdot v_{p}(w_{k}-w_{k_{0}})\quad\text{and}\] \[v_{p}\big{(}g_{\frac{1}{2}d_{k}^{\mathrm{Iw}}\pm L,\hat{k},\hat{k}_{0}}(w_{k})\big{)} = \Delta^{\prime}_{k,\pm L}\pm\tfrac{k-2}{2}L-m_{\frac{1}{2}d_{k}^{\mathrm{Iw}}\pm L}(k_{0})\cdot v_{p}(w_{k}-w_{k_{0}}).\]
So to prove that \(P^{\prime}\) lies on or above \(\overline{Q^{\prime}_{-}Q^{\prime}_{+}}\) (through shifting in the \(y\)-direction by a linear function with slope \(\frac{k-2}{2}\), or slope \(\frac{k-2}{2}\pm v_{p}(w_{k}-w_{k_{0}})\) for the second statement, and shifting in the \(x\)-direction by \(\frac{1}{2}d_{k}^{\mathrm{Iw}}\)), it is equivalent to show that the point
\[P^{\prime\prime}:=\big{(}\ell,\;v_{p}(A)+\big{(}i-\tfrac{1}{2}d_{k}^{\mathrm{ new}}+L\big{)}\cdot v_{p}(w_{\star}-w_{k})+\Delta^{\prime}_{k,\ell}\big{)}\]
lies on or above the line connecting the points
\[Q^{\prime\prime}_{-}:=(-L,\,\Delta^{\prime}_{k,-L})\quad\text{and}\quad Q^{ \prime\prime}_{+}:=(L,\,\Delta^{\prime}_{k,L}).\]
By ghost duality (2.16.5), \(\Delta^{\prime}_{k,-L}=\Delta^{\prime}_{k,L}\). So \(Q^{\prime\prime}_{-}\) and \(Q^{\prime\prime}_{+}\) have the same \(y\)-coordinate. Therefore, we need only to prove the following inequality
\[v_{p}(A)+\big{(}i-\tfrac{1}{2}d_{k}^{\mathrm{new}}+L\big{)}\cdot v_{p}(w_{ \star}-w_{k})\geq\Delta^{\prime}_{k,L}-\Delta^{\prime}_{k,|\ell|}.\]
Using condition (4.7.1) \(v_{p}(A)\geq\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-i}-\Delta_{k,|\ell|}^{\prime}\) and using \(\Delta_{k,L}=\Delta_{k,L}^{\prime}\) (as \((L,\Delta_{k,L})\) is a vertex of \(\underline{\Delta}_{k}\)), we are reduced to proving that
\[\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-i}-\Delta_{k,L}^{\prime}=\Delta_{k,\frac {1}{2}d_{k}^{\rm new}-i}-\Delta_{k,L}\geq\left(\tfrac{1}{2}d_{k}^{\rm new}-i-L \right)\cdot v_{p}(w_{\star}-w_{k}). \tag{4.7.2}\]
But this follows from the convexity of \(\underline{\Delta}_{k}\) and the fact
\[\Delta_{k,L}-\Delta_{k,L-1}\leq v_{p}(w_{\star}-w_{k})<\Delta_{k,L+1}-\Delta_{ k,L}.\]
This concludes the proof of the first statement of the proposition when \(n\in{\rm nS}_{w_{\star},k}\).
Case B: We assume that \(n\notin{\rm nS}_{w_{\star},k}\), and that \(\left(\tfrac{1}{2}d_{k}^{\rm new}-m_{n}(k),\Delta_{k,\frac{1}{2}d_{k}^{\rm new }-m_{n}(k)}^{\prime}\right)\) is a vertex of \(\underline{\Delta}_{k}\), so that \(\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-m_{n}(k)}^{\prime}=\Delta_{k,\frac{1}{2} d_{k}^{\rm new}-m_{n}(k)}.\) In this case, we have
\[v_{p}(w_{\star}-w_{k})<\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-m_{n}(k)+1}-\Delta _{k,\frac{1}{2}d_{k}^{\rm new}-m_{n}(k)}. \tag{4.7.3}\]
We need to prove that for every \(i=0,\ldots,m_{n}(k)-1\),
\[v_{p}\big{(}A(w_{\star}-w_{k})^{i}\cdot g_{n,\hat{k},\hat{k}_{0}}(w_{\star}) \big{)}>v_{p}\big{(}g_{n,\hat{k}_{0}}(w_{\star})\big{)}. \tag{4.7.4}\]
But this inequality is equivalent to
\[v_{p}(A)>(m_{n}(k)-i)\cdot v_{p}(w_{\star}-w_{k}). \tag{4.7.5}\]
This follows from the following sequence of inequalities:
\[v_{p}(A)\;\geq\;\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-i}-\Delta^{\prime}_{k,\frac{1}{2}d_{k}^{\rm new}-m_{n}(k)}\;=\;\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-i}-\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-m_{n}(k)}\;>\;(m_{n}(k)-i)\cdot v_{p}(w_{\star}-w_{k}),\]
where the first inequality is (4.7.1), the equality uses the vertex assumption of this case, and the last inequality follows from (4.7.3) and the convexity of \(\underline{\Delta}_{k}\). This proves (4.7.5), and hence (4.7.4).

Case C: Finally, we assume that \(n\notin{\rm nS}_{w_{\star},k}\) and that \(\big{(}\tfrac{1}{2}d_{k}^{\rm new}-m_{n}(k),\Delta^{\prime}_{k,\frac{1}{2}d_{k}^{\rm new}-m_{n}(k)}\big{)}\) is not a vertex of \(\underline{\Delta}_{k}\); in this case \(n\) lies in a near-Steinberg range \({\rm nS}_{w_{k},k^{\prime}}\) for some \(k^{\prime}\equiv k_{\varepsilon}\bmod(p-1)\) with \(k^{\prime}\neq k\), and we write \(L^{\prime}:=L_{w_{k},k^{\prime}}\).
Set \(\gamma:=v_{p}(k-k^{\prime})\geq 1\). Then
(4.7.8) \[\tfrac{1}{2}d_{k}^{\text{new}}-m_{n}(k)\geq\tfrac{1}{2}d_{k^{\prime}}^{\text{Iw}}-\tfrac{1}{2}d_{k}^{\text{Iw}}-L^{\prime}=k^{\prime}_{\bullet}-k_{\bullet}-L^{\prime}\geq p^{\gamma}-L^{\prime}.\]
Recall that the two points \(R_{\pm}^{\circ\circ}\) correspond to two endpoints of a segment on \(\underline{\Delta}_{k}\) by our choice of \(L^{\prime}\). So this is equivalent to proving that
\[v_{p}(A)+(i-m_{n}(k))\cdot v_{p}(w_{\star}-w_{k})+\Delta^{\prime}_{k,\frac{1}{2}d _{k}^{\text{new}}-m_{n}(k)}\geq\Delta_{k,\frac{1}{2}d_{k}^{\text{new}}-m_{n}(k)}.\]
Taking (4.7.1) into account, it suffices to show that
\[\Delta_{k,\frac{1}{2}d_{k}^{\text{new}}-i}+(i-m_{n}(k))\cdot v_{p}(w_{\star}-w _{k})\geq\Delta_{k,\frac{1}{2}d_{k}^{\text{new}}-m_{n}(k)}.\]
But this follows from the inequality \(v_{p}(w_{\star}-w_{k})<\Delta_{k,\frac{1}{2}d_{k}^{\text{new}}-m_{n}(k)+1}-\Delta_{k,\frac{1}{2}d_{k}^{\text{new}}-m_{n}(k)}\) and the convexity of \(\underline{\Delta}_{k}\). The proposition is proved in this case.
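For clarity, the convexity step at the end of this argument can be spelled out as follows. Write \(s=\frac{1}{2}d_{k}^{\text{new}}-i\) and \(t=\frac{1}{2}d_{k}^{\text{new}}-m_{n}(k)\); assuming, as in the range considered here, that \(i<m_{n}(k)\), we have \(s>t\). Reading the convexity of \(\underline{\Delta}_{k}\) as saying that the increments \(\Delta_{k,j+1}-\Delta_{k,j}\) are non-decreasing in \(j\) (this is how the convexity is used above), we get
\[\Delta_{k,s}-\Delta_{k,t}=\sum_{j=t}^{s-1}\bigl{(}\Delta_{k,j+1}-\Delta_{k,j}\bigr{)}\geq(s-t)\bigl{(}\Delta_{k,t+1}-\Delta_{k,t}\bigr{)}>(s-t)\cdot v_{p}(w_{\star}-w_{k})=(m_{n}(k)-i)\cdot v_{p}(w_{\star}-w_{k}),\]
which is exactly the displayed inequality above.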
Proposition 4.7 completes the proof of Proposition 4.4. To summarize, in this section, we reduced the proof of Theorem 2.7 to proving the condition (4.4.1).
## 5. Proof of local ghost conjecture II: halo bound estimates
In this section, we implement Step III of the proof of Theorem 2.7 as laid out at the beginning of § 4; Step II will be discussed in the next section.
As in the previous section, we fix a primitive \(\mathcal{O}[\![\mathbb{K}_{p}]\!]\)-projective augmented module \(\widetilde{\text{H}}\) satisfying Hypothesis 2.9. We will also fix a relevant \(\varepsilon=\omega^{-s_{\varepsilon}}\times\omega^{a+s_{\varepsilon}}\) throughout this and the next section, and suppress it entirely from the notation. For this and the next section, we assume that \(2\leq a\leq p-5\); this is used in the proof of Proposition 5.4(1).
To prove the estimate (4.4.1), we will show a similar result about the Lagrange interpolation of the determinant of every (not necessarily principal) minor.
**Notation 5.1**.: Let \(\underline{\zeta}=\{\zeta_{1},\ldots,\zeta_{n}\}\) and \(\underline{\xi}=\{\xi_{1},\ldots,\xi_{n}\}\) be two sets of \(n\) positive integers, and let \(\text{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\) be the \(\underline{\zeta}\times\underline{\xi}\)-minor of the matrix of \(U_{p}\)-action on the power basis. Applying the Lagrange interpolation (Definition-Lemma 4.2) to \(\det(\text{U}^{\dagger}(\underline{\zeta}\times\underline{\xi}))\) along \(g_{n}(w)\), we have
\[\det\bigl{(}\text{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr{)}=\sum_{\begin{subarray}{c}k\equiv k_{\varepsilon}\bmod(p-1)\\ m_{n}(k)\neq 0\end{subarray}}\bigl{(}A_{k}^{(\underline{\zeta}\times\underline{\xi})}(w)\cdot g_{n,\hat{k}}(w)\bigr{)}+h_{\underline{\zeta}\times\underline{\xi}}(w)\cdot g_{n}(w), \tag{5.1.1}\]
where \(h_{\underline{\zeta}\times\underline{\xi}}(w)\in E\langle w/p\rangle\) and \(A_{k}^{(\underline{\zeta}\times\underline{\xi})}(w)\) is a polynomial in \(E[w]\) of degree \(\leq m_{n}(k)-1\), expanded as
\[A_{k}^{(\underline{\zeta}\times\underline{\xi})}(w)=A_{k,0}^{(\underline{\zeta}\times\underline{\xi})}+A_{k,1}^{(\underline{\zeta}\times\underline{\xi})}(w-w_{k})+\cdots+A_{k,m_{n}(k)-1}^{(\underline{\zeta}\times\underline{\xi})}(w-w_{k})^{m_{n}(k)-1}. \tag{5.1.2}\]
**Theorem 5.2**.: _Assume that \(2\leq a\leq p-5\). For all finite subsets \(\underline{\zeta}\) and \(\underline{\xi}\) of size \(n\), and every ghost zero \(w_{k}\) of \(g_{n}(w)\), we have the following inequality for every \(i=0,1,\ldots,m_{n}(k)-1\),_
\[v_{p}(A_{k,i}^{(\underline{\zeta}\times\underline{\xi})})\geq\Delta_{k,\frac{ 1}{2}d_{k}^{\text{new}}-i}-\Delta^{\prime}_{k,\frac{1}{2}d_{k}^{\text{new}}-m_ {n}(k)}+\tfrac{1}{2}\bigl{(}\deg(\underline{\zeta})-\deg(\underline{\xi}) \bigr{)}. \tag{5.2.1}\]
Since \(c_{n}(w)=(-1)^{n}\sum_{\underline{\xi}}\det\bigl{(}\text{U}^{\dagger}(\underline{\xi}\times\underline{\xi})\bigr{)}\) is the sum over all principal minors of size \(n\), we see that, for each \(n\) and each ghost zero \(w_{k}\) of \(g_{n}(w)\),
\[A_{k,i}^{(n)}=(-1)^{n}\sum_{\underline{\xi}}A_{k,i}^{(\underline{\xi}\times \underline{\xi})}.\]
So condition (4.4.1) (and hence Theorem 2.7) follows from Theorem 5.2 above (note that the term \(\frac{1}{2}\bigl{(}\deg(\underline{\zeta})-\deg(\underline{\xi})\bigr{)}\) in (5.2.1) vanishes for principal minors). The proof of Theorem 5.2 will be concluded in § 6.8 (and § 6.13).
### Proof of Theorem 5.2 when \(n=1\)
When \(n=1\), the condition \(m_{1}(k)>0\) for \(k=k_{\varepsilon}+(p-1)k_{\bullet}\) is equivalent to saying that \(d_{k}^{\rm ur}=0\) and \(d_{k}^{\rm Iw}=2k_{\bullet}+2-\delta_{\varepsilon}\geq 2\). In this case, \(m_{1}(k)=1\), so we need only consider the case \(i=0\). For one such ghost zero \(k\) and indices \(\zeta,\xi\in\mathbb{N}\), \(A_{k,0}^{(\zeta\times\xi)}={\rm U}_{\mathbf{e}_{\zeta},\mathbf{e}_{\xi}}^{\dagger}|_{w=w_{k}}\). Moreover, note that \((\frac{1}{2}d_{k}^{\rm new},\Delta_{k,\frac{1}{2}d_{k}^{\rm new}}^{\prime})\) is always a vertex of \(\underline{\Delta}_{k}\), so
\[\Delta_{k,\frac{1}{2}d_{k}^{\rm new}}=\tfrac{k-2}{2}\cdot\tfrac{1}{2}d_{k}^{ \rm new}\quad\text{and}\quad\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-1}=v_{p}(g_{1,\hat{k}}(w_{k}))+\tfrac{k-2}{2}\cdot(\tfrac{1}{2}d_{k}^{\rm new}-1).\]
It suffices to prove that
\[v_{p}({\rm U}_{\mathbf{e}_{\zeta},\mathbf{e}_{\xi}}^{\dagger}|_{w=w_{k}})\geq \tfrac{k-2}{2}-v_{p}\big{(}g_{1,\hat{k}}(w_{k})\big{)}+\tfrac{1}{2}(\deg \mathbf{e}_{\zeta}-\deg\mathbf{e}_{\xi}). \tag{5.3.1}\]
If \(\xi>d_{k}^{\rm Iw}\), (5.3.1) follows from combining the inequalities \(\frac{1}{2}(k-2-\deg\mathbf{e}_{\xi})\leq 0\) and \(v_{p}({\rm U}_{\mathbf{e}_{\zeta},\mathbf{e}_{\xi}}^{\dagger}|_{w=w_{k}})\geq\deg(\mathbf{e}_{\zeta})\) by Proposition 3.2(2).
If \(\zeta>d_{k}^{\rm Iw}\), (5.3.1) follows from the inequality \(v_{p}({\rm U}_{\mathbf{e}_{\zeta},\mathbf{e}_{\xi}}^{\dagger}|_{w=w_{k}})\geq\deg(\mathbf{e}_{\zeta})\geq\tfrac{1}{2}(k-2+\deg(\mathbf{e}_{\zeta}))\) by Proposition 3.2(2).
When \(\zeta,\xi\in\{1,\ldots,d_{k}^{\rm Iw}\}\), the block \({\rm U}^{\dagger}(\underline{d_{k}^{\rm Iw}})|_{w=w_{k}}\) is anti-diagonal; set \(\zeta^{\rm op}=d_{k}^{\rm Iw}+1-\zeta\). In this case,
\[v_{p}({\rm U}_{\mathbf{e}_{\zeta},\mathbf{e}_{\zeta^{\rm op}}}^{\dagger}|_{w=w _{k}})=\deg\mathbf{e}_{\zeta}=\tfrac{k-2}{2}+\tfrac{1}{2}\deg\mathbf{e}_{\zeta }-\tfrac{1}{2}\deg\mathbf{e}_{\zeta^{\rm op}}.\]
(5.3.1) follows from this. This completes the proof of Theorem 5.2 when \(n=1\).
The following is the main result for Step III in the proof of Theorem 2.7.
**Proposition 5.4**.: _Assume that \(p\geq 11\) and that \(2\leq a\leq p-5\). Fix a relevant character \(\varepsilon\) of \(\Delta^{2}\) and subsets \(\underline{\zeta}\) and \(\underline{\xi}\) of positive integers of cardinality \(n\). Recall the Lagrange interpolation formula from Notation 5.1:_
\[\det\bigl{(}{\rm U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr{)} =\sum_{\begin{subarray}{c}k\equiv k_{\varepsilon}\bmod(p-1)\\ m_{n}(k)\neq 0\end{subarray}}\bigl{(}A_{k}^{(\underline{\zeta}\times\underline{ \xi})}(w)\cdot g_{n,\hat{k}}(w)\bigr{)}+h_{\underline{\zeta}\times\underline{ \xi}}(w)\cdot g_{n}(w), \tag{5.4.1}\]
\[\text{with}\quad A_{k}^{(\underline{\zeta}\times\underline{\xi})}(w)=A_{k,0}^ {(\underline{\zeta}\times\underline{\xi})}+A_{k,1}^{(\underline{\zeta}\times \underline{\xi})}(w-w_{k})+\cdots+A_{k,m_{n}(k)-1}^{(\underline{\zeta}\times \underline{\xi})}(w-w_{k})^{m_{n}(k)-1}.\]
_Assume that, for every ghost zero \(w_{k}\) of \(g_{n}(w)\), the inequality (5.2.1) holds. Then_
1. \(h_{\underline{\zeta}\times\underline{\xi}}(w)\in p^{\frac{1}{2}(\deg( \underline{\zeta})-\deg(\underline{\xi}))}\mathcal{O}\langle w/p\rangle\)_; and_
2. _for every ghost zero_ \(w_{k_{0}}\) _of_ \(g_{n}(w)\)_, if we expand formally in_ \(E[\![w-w_{k_{0}}]\!]\)_:_ (5.4.2) \[\det\bigl{(}{\rm U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr{)} \big{/}g_{n,\hat{k}_{0}}(w)=\sum_{i\geq 0}a_{k_{0},i}^{(\underline{\zeta}\times \underline{\xi})}(w-w_{k_{0}})^{i},\] _then we have the following estimate for_ \(i=m_{n}(k_{0}),\ldots,\frac{1}{2}d_{k_{0}}^{\rm new}\)_:_ (5.4.3) \[v_{p}\big{(}a_{k_{0},i}^{(\underline{\zeta}\times\underline{\xi})}\big{)}\geq \tfrac{1}{2}\big{(}\deg(\underline{\zeta})-\deg(\underline{\xi})+(\tfrac{1}{2}d _{k_{0}}^{\rm new}-i)^{2}-(\tfrac{1}{2}d_{k_{0}}^{\rm new}-m_{n}(k_{0}))^{2} \big{)}+\Delta_{k_{0},\frac{1}{2}d_{k_{0}}^{\rm new}-i}-\Delta_{k_{0},\frac{1}{2 }d_{k_{0}}^{\rm new}-i}^{\prime}.\]
**Remark 5.5**.: This proposition involves the coefficients of the Taylor expansion of some determinant of the minor with exponent _greater than or equal to_ the corresponding ghost multiplicity; in contrast, condition (5.2.1) concerns the coefficients in the Taylor expansions of \(\det\bigl{(}{\rm U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr{)}/g_{ n,\hat{k}}(w)\) with exponents _strictly less_ than the corresponding ghost multiplicity.
Proof of Proposition 5.4.: We first show that (2) follows from (1). We fix a ghost zero \(w_{k_{0}}\) for the discussion. It suffices to prove the analogue of (5.4.3) for each of the factors in the Lagrange interpolation (5.4.1), that is, explicitly,
1. if we expand formally \(h_{\underline{\zeta}\times\underline{\xi}}(w)=\sum_{i\geq 0}a_{k_{0},h,i}(w-w_{k_{0}}) ^{i}\) in \(E[\![w-w_{k_{0}}]\!]\), then \(v_{p}(a_{k_{0},h,i})\geq\frac{1}{2}\big{(}\deg(\underline{\zeta})-\deg( \underline{\xi})\big{)}\); and
2. for each ghost zero \(w_{k}\neq w_{k_{0}}\) of \(g_{n}(w)\) and each \(j=0,\ldots,m_{n}(k)-1\), if we expand formally in \(E[\![w-w_{k_{0}}]\!]\),
\[A_{k,j}^{(\zeta\times\underline{\xi})}(w-w_{k})^{j}\cdot\frac{(w-w_{k_{0}})^{m _{n}(k_{0})}}{(w-w_{k})^{m_{n}(k)}}=\sum_{i\geq m_{n}(k_{0})}a_{k_{0},k,i}^{(j) }(w-w_{k_{0}})^{i}, \tag{5.5.1}\]
then we have
\[v_{p}(a_{k_{0},k,i}^{(j)})\geq\tfrac{1}{2}\big{(}\deg(\underline{\zeta})-\deg( \underline{\xi})+(\tfrac{1}{2}d_{k_{0}}^{\text{new}}-i)^{2}-(\tfrac{1}{2}d_{k_ {0}}^{\text{new}}-m_{n}(k_{0}))^{2}\big{)}+\Delta_{k_{0},\tfrac{1}{2}d_{k_{0} }^{\text{new}}-i}-\Delta_{k_{0},\tfrac{1}{2}d_{k_{0}}^{\text{new}}-i}^{\prime}, \tag{5.5.2}\]
for \(i=m_{n}(k_{0}),\ldots,\tfrac{1}{2}d_{k_{0}}^{\text{new}}\).
Statement 1 follows from part (1) of the proposition, and we prove statement 2 as follows. The case when \(i=m_{n}(k_{0})\) is essentially already handled by Proposition 4.7: indeed, from the formal expansion (5.5.1), we deduce that
\[v_{p}(a_{k_{0},k,m_{n}(k_{0})}^{(j)})=v_{p}\big{(}A_{k,j}^{(\zeta\times \underline{\xi})}\big{)}-(m_{n}(k)-j)\cdot v_{p}(w_{k_{0}}-w_{k}).\]
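Indeed, the left-hand side of (5.5.1) equals \(A_{k,j}^{(\underline{\zeta}\times\underline{\xi})}\,(w-w_{k_{0}})^{m_{n}(k_{0})}(w-w_{k})^{j-m_{n}(k)}\); expanding \((w-w_{k})^{j-m_{n}(k)}=\bigl{(}(w_{k_{0}}-w_{k})+(w-w_{k_{0}})\bigr{)}^{j-m_{n}(k)}\) as a power series in \(w-w_{k_{0}}\), its constant term is \((w_{k_{0}}-w_{k})^{j-m_{n}(k)}\), so \(a_{k_{0},k,m_{n}(k_{0})}^{(j)}=A_{k,j}^{(\underline{\zeta}\times\underline{\xi})}\,(w_{k_{0}}-w_{k})^{j-m_{n}(k)}\); taking \(v_{p}\) gives the displayed equality.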
But the second statement of Proposition 4.7 together with the assumed condition (5.2.1) implies that
Combining these two gives
\[v_{p}(a_{k_{0},k,m_{n}(k_{0})}^{(j)})\] \[\geq\tfrac{1}{2}\big{(}\deg(\underline{\zeta})-\deg(\underline{\xi})\big{)}+\Delta_{k_{0},n-\tfrac{1}{2}d_{k_{0}}^{\text{Iw}}}+\tfrac{k_{0}-2}{2}(n-\tfrac{1}{2}d_{k_{0}}^{\text{Iw}})-v_{p}\big{(}g_{n,\hat{k},\hat{k}_{0}}(w_{k_{0}})\big{)}-m_{n}(k)\cdot v_{p}(w_{k_{0}}-w_{k})\] \[=\tfrac{1}{2}\big{(}\deg(\underline{\zeta})-\deg(\underline{\xi})\big{)}+\Delta_{k_{0},\tfrac{1}{2}d_{k_{0}}^{\text{Iw}}-n}-\Delta_{k_{0},\tfrac{1}{2}d_{k_{0}}^{\text{Iw}}-n}^{\prime}.\]
This is the same as (5.5.2) (when \(i=m_{n}(k_{0})\)).
The case when \(i>m_{n}(k_{0})\) is easier (compared to the proof of Proposition 4.7). In this case, we will prove an inequality stronger than (5.5.2), without the \(\Delta-\Delta^{\prime}\) term at the end. The estimate (5.2.1) we assumed implies that, for \(j=0,\ldots,m_{n}(k)-1\),
\[v_{p}\big{(}A_{k,j}^{(\underline{\zeta}\times\underline{\xi})}\big{)} \geq \tfrac{1}{2}\big{(}\deg(\underline{\zeta})-\deg(\underline{\xi})\big{)}+\Delta_{k,\tfrac{1}{2}d_{k}^{\rm new}-j}-\Delta_{k,\tfrac{1}{2}d_{k}^{\rm new}-m_{n}(k)}^{\prime}\] \[\geq \tfrac{1}{2}\big{(}\deg(\underline{\zeta})-\deg(\underline{\xi})\big{)}+1+\tfrac{1}{2}(\tfrac{1}{2}d_{k}^{\rm new}-j)^{2}-\tfrac{1}{2}(\tfrac{1}{2}d_{k}^{\rm new}-m_{n}(k))^{2},\]
where the second inequality follows from Proposition 2.19. So we need to prove the following inequality:
\[\tfrac{1}{2}(\tfrac{1}{2}d_{k}^{\text{new}}-m_{n}(k_{0}))^{2}- \tfrac{1}{2}(\tfrac{1}{2}d_{k}^{\text{new}}-i)^{2}+1+\tfrac{1}{2}(\tfrac{1}{2} d_{k}^{\text{new}}-j)^{2}-\tfrac{1}{2}(\tfrac{1}{2}d_{k}^{\text{new}}-m_{n}(k))^{2}\] \[\geq(1+v_{p}(k_{\bullet}-k_{0\bullet}))\cdot\big{(}(i-m_{n}(k_{0} ))+(m_{n}(k)-j)\big{)}.\]
To simplify notations, we set \(\gamma:=v_{p}(k_{\bullet}-k_{0\bullet})\), \(x=\frac{1}{2}d_{k_{0}}^{\rm new}-m_{n}(k_{0})>y=\frac{1}{2}d_{k_{0}}^{\rm new}-i\geq 0\), and \(z=\frac{1}{2}d_{k}^{\rm new}-j>w=\frac{1}{2}d_{k}^{\rm new}-m_{n}(k)\geq 0\). We need to prove that
\[x^{2}-y^{2}+2+z^{2}-w^{2}\geq 2(1+\gamma)(x-y+z-w). \tag{5.5.3}\]
When \(\gamma=0\), the inequality (5.5.3) is straightforward (it is spelled out after the case analysis below). So we assume \(\gamma\geq 1\) below. Note that \(|k_{\bullet}-k_{0\bullet}|\geq p^{\gamma}\) and so \(|\frac{1}{2}d_{k}^{\rm Iw}-\frac{1}{2}d_{k_{0}}^{\rm Iw}|\geq p^{\gamma}\). So we have \(x+w\geq p^{\gamma}\). We separate several cases:
* If \(x+y\geq 2\gamma+2\) and \(z+w\geq 2\gamma+2\), the inequality (5.5.3) is clear.
* If \(z+w\leq 2\gamma+1\), then \(w\leq\gamma\). The condition \(x+w\geq p^{\gamma}\) implies that \(x\geq p^{\gamma}-w\). So \[(z-w)(2+2\gamma-z-w) \leq \Big{(}\frac{(z-w)+2+2\gamma-z-w}{2}\Big{)}^{2}=(\gamma+1-w)^{2},\] \[(x-y)(x+y-(2+2\gamma)) \geq p^{\gamma}-w-2-2\gamma.\] The difference of these two bounds (plus \(2\)) is \[p^{\gamma}-w-2\gamma-(\gamma+1-w)^{2}\geq p^{\gamma}-3\gamma-(\gamma+1)^{2}\geq 0\mbox{ if }\gamma\geq 2.\] When \(\gamma=1\) and \(w=1\), the left hand side is also larger than \(0\). When \(\gamma=1\) and \(w=0\), we have \(x\geq p\), and so \[(x-y)(x+y-4)\geq 2x-5\geq 5\quad\mbox{and}\quad(z-w)(4-z-w)\leq 4.\] Hence (5.5.3) still holds.
* If \(x+y\leq 2\gamma+1\), a similar argument proves (5.5.3); we leave this as an exercise for interested readers.
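For completeness, the case \(\gamma=0\) excluded at the beginning of this case analysis can be spelled out as follows: for integers \(x>y\geq 0\) we have
\[x^{2}-y^{2}-2(x-y)=(x-y)(x+y-2)\geq-1,\]
since either \(x+y\geq 2\) or \((x,y)=(1,0)\); similarly \(z^{2}-w^{2}-2(z-w)\geq-1\) for integers \(z>w\geq 0\). Adding these two inequalities and the constant \(2\) gives (5.5.3) when \(\gamma=0\).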
This completes the proof of (2) assuming (1).
We now turn to prove (1) of Proposition 5.4. The proof resembles the proof of Claim 1 in Proposition 4.4. By (5.2.1) and Proposition 2.19,
\[v_{p}(A_{k,i}^{(\underline{\zeta}\times\underline{\xi})})\geq\tfrac{1}{2}\big{(}\deg(\underline{\zeta})-\deg(\underline{\xi})\big{)}+\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-i}-\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-m_{n}(k)}^{\prime}\geq\tfrac{1}{2}\big{(}\deg(\underline{\zeta})-\deg(\underline{\xi})\big{)}+m_{n}(k)-i,\] and hence \[A_{k}^{(\underline{\zeta}\times\underline{\xi})}(w)\cdot g_{n,\hat{k}}(w)\in p^{\frac{1}{2}(\deg(\underline{\zeta})-\deg(\underline{\xi}))+\deg g_{n}(w)}\cdot\mathcal{O}[w/p].\]
So if we can prove that
\[\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi}) \bigr{)}\in p^{\frac{1}{2}(\deg(\underline{\zeta})-\deg(\underline{\xi}))+\deg g _{n}(w)}\cdot\mathcal{O}\langle w/p\rangle, \tag{5.5.4}\]
then we would deduce that \(h_{\underline{\zeta}\times\underline{\xi}}(w)\cdot g_{n}(w)\in p^{\frac{1}{2}(\deg(\underline{\zeta})-\deg(\underline{\xi}))+\deg g_{n}(w)}\cdot\mathcal{O}\langle w/p\rangle\), and hence that \(h_{\underline{\zeta}\times\underline{\xi}}(w)\in p^{\frac{1}{2}(\deg(\underline{\zeta})-\deg(\underline{\xi}))}\cdot\mathcal{O}\langle w/p\rangle\). So we now focus on proving (5.5.4).
We go back to the discussion of the halo estimate near the end of Section 3. Recall the matrices \(\mathrm{U}_{\mathbf{C}}\) and \(\mathrm{Y}\) from Notation 3.17. For each ordered tuple \(\underline{\eta}=(\eta_{1},\ldots,\eta_{n})\in\mathbb{N}^{n}\), write \(\mathrm{U}_{\mathbf{C}}(\underline{\zeta}\times\underline{\eta})\) for the submatrix with rows in \(\underline{\zeta}\) and columns in \(\underline{\eta}\). Then the equality \(\mathrm{U}^{\dagger}=\mathrm{Y}\mathrm{U}_{\mathbf{C}}\mathrm{Y}^{-1}\) of (3.17.1) implies that
\[\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr{)}=\sum_{\underline{\lambda},\underline{\eta}}\Big{(}\mathrm{Y}_{\mathbf{e}_{\zeta_{1}},\mathbf{e}_{\lambda_{1}}}\cdots\mathrm{Y}_{\mathbf{e}_{\zeta_{n}},\mathbf{e}_{\lambda_{n}}}\cdot\det\bigl{(}\mathrm{U}_{\mathbf{C}}(\underline{\lambda}\times\underline{\eta})\bigr{)}\cdot(\mathrm{Y}^{-1})_{\mathbf{e}_{\eta_{1}},\mathbf{e}_{\xi_{1}}}\cdots(\mathrm{Y}^{-1})_{\mathbf{e}_{\eta_{n}},\mathbf{e}_{\xi_{n}}}\Big{)}, \tag{5.5.5}\]
where the sum runs over all ordered tuples \(\underline{\lambda}=(\lambda_{1},\ldots,\lambda_{n}),\underline{\eta}=(\eta_{1}, \ldots,\eta_{n})\in\mathbb{N}^{n}\). It is enough to check (5.5.4) for each term above when \(\lambda_{i}\geq\zeta_{i}\) and \(\eta_{i}\leq\xi_{i}\) for every \(i\) (as Y and Y\({}^{-1}\) are upper triangular by Lemma 3.16).
Note that the condition \(\lambda_{i}\geq\zeta_{i}\) and Lemma 3.16 imply that
\[v_{p}(\mathrm{Y}_{\mathbf{e}_{\zeta_{i}},\mathbf{e}_{\lambda_{i} }})+\frac{1}{2}\big{(}\deg\mathbf{e}_{\lambda_{i}}-\deg\mathbf{e}_{\zeta_{i}} \big{)}+v_{p}(\deg\mathbf{e}_{\lambda_{i}}!)\] \[\geq \frac{\deg\mathbf{e}_{\lambda_{i}}-\deg\mathbf{e}_{\zeta_{i}}}{2}+ v_{p}\Big{(}\frac{\deg\mathbf{e}_{\lambda_{i}}!}{\deg\mathbf{e}_{\zeta_{i}}!} \Big{)}+\Big{\lfloor}\frac{\deg\mathbf{e}_{\zeta_{i}}}{p}\Big{\rfloor}-\Big{ \lfloor}\frac{\deg\mathbf{e}_{\lambda_{i}}}{p}\Big{\rfloor}-\Big{\lfloor} \frac{\deg\mathbf{e}_{\lambda_{i}}-\deg\mathbf{e}_{\zeta_{i}}}{p^{2}-p}\Big{\rfloor}\] \[\geq \frac{\deg\mathbf{e}_{\lambda_{i}}-\deg\mathbf{e}_{\zeta_{i}}}{2}+ v_{p}\Big{(}\Big{\lfloor}\frac{\deg\mathbf{e}_{\lambda_{i}}}{p}\Big{\rfloor}! \Big{)}-v_{p}\Big{(}\Big{\lfloor}\frac{\deg\mathbf{e}_{\zeta_{i}}}{p}\Big{ \rfloor}!\Big{)}-\Big{\lfloor}\frac{\deg\mathbf{e}_{\lambda_{i}}-\deg\mathbf{ e}_{\zeta_{i}}}{p^{2}-p}\Big{\rfloor}\geq 0\]
By a similar argument, the condition \(\xi_{i}\geq\eta_{i}\) and Lemma 3.16 imply that
\[v_{p}\big{(}(\mathrm{Y}^{-1})_{\mathbf{e}_{\eta_{i}},\mathbf{e}_{\xi_{i}}}\big{)}\geq\frac{1}{2}\big{(}\deg\mathbf{e}_{\eta_{i}}-\deg\mathbf{e}_{\xi_{i}}\big{)}+v_{p}(\deg\mathbf{e}_{\eta_{i}}!).\]
So to prove that each term of the right hand side of (5.5.5) belongs to \(p^{\frac{1}{2}(\deg(\underline{\zeta})-\deg(\underline{\xi}))+\deg g_{n}(w)}\cdot\mathcal{O}\langle w/p\rangle\), it suffices to show the following
**Claim**: Assume that \(\lambda_{1}<\cdots<\lambda_{n}\) and \(\eta_{1}<\cdots<\eta_{n}\). Then \(v_{p}\big{(}\mathrm{det}\big{(}\mathrm{U}_{\mathbf{C}}(\underline{\lambda}\times\underline{\eta})\big{)}\big{)}\) (meaning the \(p\)-adic valuation in the ring \(\mathcal{O}\langle w/p\rangle\)) is greater than or equal to
\[\deg g_{n}(w)+\frac{\deg(\underline{\lambda})-\deg(\underline{\eta})}{2}+\sum_{i=1}^{n}v_{p}\Big{(}\frac{\deg\mathbf{e}_{\lambda_{i}}!}{\deg\mathbf{e}_{\eta_{i}}!}\Big{)}.\]
To be extremely careful about the boundary case, we set
(5.5.6) \[\boldsymbol{\delta}:=\deg g_{n}(w)-\sum_{i=1}^{n}\Big{(}\deg\mathbf{e}_{i}-\Big{\lfloor}\frac{\deg\mathbf{e}_{i}}{p}\Big{\rfloor}\Big{)}\in\{0,1\}.\]
We first verify the Claim in two special cases; the first case (i) uses the precise condition under which \(\boldsymbol{\delta}\) can be equal to \(1\), and follows from the standard halo estimate of Proposition 3.18.
2. When \(\underline{\lambda}=\{1,\ldots,n-1,n+1\}\) and \(\underline{\eta}=\underline{n}\). Let \(\gamma\) be the largest nonnegative integer such that \(p^{\gamma}\) divides a number in \(\{\deg\mathbf{e}_{n}+1,\ldots,\deg\mathbf{e}_{n+1}\}\). Then \(v_{p}(\deg\mathbf{e}_{n+1}!)-v_{p}(\deg\mathbf{e}_{n}!)=\gamma\). We need to prove (5.5.7) \[v_{p}\big{(}\text{det}\big{(}\text{U}_{\mathbf{C}}(\underline{\lambda},\underline{n})\big{)}\big{)}\geq\deg g_{n}(w)+\frac{\deg\mathbf{e}_{n+1}-\deg\mathbf{e}_{n}}{2}+\gamma.\] By Corollary 3.27, we have \[v_{p}\big{(}\text{det}\big{(}\text{U}_{\mathbf{C}}(\underline{\lambda},\underline{n})\big{)}\big{)}\geq D(\underline{\lambda},\underline{n})+\sum_{i=1}^{n}\Big{(}\deg\mathbf{e}_{i}-\Big{\lfloor}\frac{\deg\mathbf{e}_{i}}{p}\Big{\rfloor}\Big{)}+\big{(}\deg\mathbf{e}_{n+1}-\deg\mathbf{e}_{n}\big{)}.\] It suffices to prove that \[D(\underline{\lambda},\underline{n})+\frac{\deg\mathbf{e}_{n+1}-\deg\mathbf{e}_{n}}{2}\geq\boldsymbol{\delta}+\gamma.\] As \(\boldsymbol{\delta}=1\) only happens when \(\deg\mathbf{e}_{n+1}-\deg\mathbf{e}_{n}=p-1-a\), the condition \(2\leq a\leq p-5\) is enough to imply that \(\frac{\deg\mathbf{e}_{n+1}-\deg\mathbf{e}_{n}}{2}\geq\boldsymbol{\delta}+1\). On the other hand, we use Lemma 3.29 to note that \(D_{=0}(\underline{n},0)=\cdots=D_{=0}(\underline{n},\gamma-1)\), so for every \(j=0,\ldots,\gamma-2\) \[D_{=0}(\underline{\lambda},j)=D_{=0}(\underline{n},j+1)+1.\] It follows that \(D(\underline{\lambda},\underline{n})\geq\gamma-1\). The Claim is proved in this case.
We remark that the proof of (i) follows from the standard halo estimate, Proposition 3.18. On the other hand, as shown by the proof of (ii), the usual halo bound of Proposition 3.18 cannot be used to control the \(\gamma\) on the right hand side of (5.5.7); the subtle improvement of the halo estimate in Corollary 3.27 is essential for this proof.
We now prove Claim under the assumption that \(\underline{\lambda}\neq\underline{n}\); the argument shares certain similarities with the proof of (ii). By Corollary 3.27, it suffices to show that
\[D(\underline{\lambda},\underline{\eta})+\sum_{i=1}^{n}\Big{(}\deg\mathbf{e}_{ \lambda_{i}}-\Big{\lfloor}\frac{\deg\mathbf{e}_{\eta_{i}}}{p}\Big{\rfloor} \Big{)}\geq\deg g_{n}(w)+\sum_{i=1}^{n}\bigg{(}\frac{\deg\mathbf{e}_{\lambda_ {i}}-\deg\mathbf{e}_{\eta_{i}}}{2}+v_{p}\Big{(}\frac{\deg\mathbf{e}_{\lambda_ {i}}!}{\deg\mathbf{e}_{\eta_{i}}!}\Big{)}\bigg{)},\]
or equivalently, to show that
\[D(\underline{\lambda},\underline{\eta})+\sum_{i=1}^{n}\bigg{(}\frac{\deg \mathbf{e}_{\lambda_{i}}+\deg\mathbf{e}_{\eta_{i}}}{2}+v_{p}\Big{(}\Big{\lfloor} \frac{\deg\mathbf{e}_{\eta_{i}}}{p}\Big{\rfloor}!\Big{)}\bigg{)}\geq\deg g_{n} (w)+\sum_{i=1}^{n}v_{p}(\deg\mathbf{e}_{\lambda_{i}}!). \tag{5.5.8}\]
We first reduce to the case when \(\underline{\eta}=\underline{n}\). For this, it suffices to show that for a subset \(\underline{\eta}^{\prime}\subset\mathbb{N}\) of size \(n\) with \(\eta_{i}^{\prime}=\eta_{i}\) for all \(i\) except for some \(i=i_{0}\), for which \(\eta_{i_{0}}^{\prime}-\eta_{i_{0}}=1\), we have
\[D(\underline{\lambda},\underline{\eta}^{\prime})+\frac{\deg\mathbf{e}_{\eta_{ i_{0}}^{\prime}}-\deg\mathbf{e}_{\eta_{i_{0}}}}{2}+v_{p}\bigg{(}\frac{\lfloor \deg\mathbf{e}_{\eta_{i_{0}}^{\prime}}/p\rfloor!}{\lfloor\deg\mathbf{e}_{\eta_ {i_{0}}}/p\rfloor!}\bigg{)}\geq D(\underline{\lambda},\underline{\eta}). \tag{5.5.9}\]
The condition \(2\leq a\leq p-5\) implies that \(\deg\mathbf{e}_{\eta^{\prime}_{i_{0}}}-\deg\mathbf{e}_{\eta_{i_{0}}}\geq 2\); so (5.5.9) follows from (3.26.1).
Now, we assume \(\underline{\eta}=\underline{n}\). It remains to show that for any subset \(\underline{\lambda}\subseteq\mathbb{N}\) of cardinality \(n\) (with \(\underline{\lambda}\neq\underline{n}\)),
\[D(\underline{\lambda},\underline{n})+\sum_{i=1}^{n}\frac{\deg\mathbf{e}_{ \lambda_{i}}-\deg\mathbf{e}_{i}}{2}\geq\boldsymbol{\delta}+\sum_{i=1}^{n}v_{p }\Big{(}\frac{\deg\mathbf{e}_{\lambda_{i}}!}{\deg\mathbf{e}_{i}!}\Big{)}. \tag{5.5.10}\]
Moreover, we may assume that \(\underline{\lambda}\neq\{1,\ldots,n-1,n+1\}\) as it has been treated in (ii).
For this, we make an induction on \(\underline{\lambda}\); each time, we replace the largest element in \(\underline{\lambda}\), say \(\lambda_{n}\), by the smallest element in \(\underline{n}\) but not in \(\underline{\lambda}\), say \(n_{-}\). Since we have ruled out two special cases of \(\underline{\lambda}\), we must have \(\lambda_{n}-n_{-}\geq 2\).
Write \(\underline{\lambda}^{\prime}=\underline{\lambda}\cup\{n_{-}\}\backslash\{ \lambda_{n}\}\). We need to prove
\[D(\underline{\lambda},\underline{n})+\frac{\deg\mathbf{e}_{\lambda_{n}}-\deg \mathbf{e}_{n_{-}}}{2}\geq\boldsymbol{\delta}+D(\underline{\lambda}^{\prime}, \underline{n})+v_{p}\Big{(}\frac{\deg\mathbf{e}_{\lambda_{n}}!}{\deg\mathbf{e }_{n_{-}}!}\Big{)}.\]
Indeed, if \(\underline{\lambda}^{\prime}\neq\underline{n}\), we do not even need the \(\boldsymbol{\delta}\) on the right hand side to complete the induction. If \(\underline{\lambda}^{\prime}=\underline{n}\), the above inequality is equivalent to (5.5.10) by Lemma 3.29.
Write \(\gamma\) for the maximal \(p\)-adic valuation for integers between \(\deg\mathbf{e}_{n_{-}}+1\) and \(\deg\mathbf{e}_{\lambda_{n}}\); so that we must have \(v_{p}\big{(}\frac{\deg\mathbf{e}_{\lambda_{n}}!}{\deg\mathbf{e}_{n_{-}}!} \big{)}\leq\gamma+\big{\lfloor}\frac{\deg\mathbf{e}_{\lambda_{n}}-\deg\mathbf{ e}_{n_{-}}-2}{p-1}\big{\rfloor}\). We need to prove
\[D(\underline{\lambda},\underline{n})+\frac{\deg\mathbf{e}_{\lambda_{n}}-\deg \mathbf{e}_{n_{-}}}{2}-\Big{\lfloor}\frac{\deg\mathbf{e}_{\lambda_{n}}-\deg \mathbf{e}_{n_{-}}-2}{p-1}\Big{\rfloor}\geq\boldsymbol{\delta}+D(\underline{ \lambda}^{\prime},\underline{n})+\gamma. \tag{5.5.11}\]
Let \(\delta\) be the unique integer such that \(\deg\mathbf{e}_{\lambda_{n}}-\deg\mathbf{e}_{n_{-}}\in\big{(}(p-1)p^{\delta-1},(p-1)p^{\delta}\big{]}\); it is clear that \(\gamma\geq\delta-1\). The case when \(\gamma\leq\delta\) is easier, and we discuss it first. In this case, when changing \(\deg\mathbf{e}_{\lambda_{n}}\) to \(\deg\mathbf{e}_{n_{-}}\), only the last \(\gamma\) digits in the \(p\)-adic expansion may change from some nonzero number to \(0\). So \(D(\underline{\lambda},\underline{n})\geq D(\underline{\lambda}^{\prime},\underline{n})-\gamma\). We need to prove that
\[\frac{\deg\mathbf{e}_{\lambda_{n}}-\deg\mathbf{e}_{n_{-}}}{2}-\Big{\lfloor} \frac{\deg\mathbf{e}_{\lambda_{n}}-\deg\mathbf{e}_{n_{-}}-2}{p-1}\Big{\rfloor} \geq 2\gamma+\boldsymbol{\delta}. \tag{5.5.12}\]
If \(\gamma=0\), then \(\deg\mathbf{e}_{\lambda_{n}}-\deg\mathbf{e}_{n_{-}}=p-1\); so (5.5.12) says \(\frac{p-1}{2}\geq\boldsymbol{\delta}\), which is obvious. If \(\delta\geq\gamma=1\), the condition \(2\leq a\leq p-5\) implies that the left hand side \(\geq\frac{p-1}{2}\geq 2+\boldsymbol{\delta}\). If \(\delta\geq\gamma\geq 2\), it is clear that the left hand side \(\geq\frac{1}{4}(p-1)p^{\gamma-1}>2\gamma+\boldsymbol{\delta}\). This completes the proof of (5.5.12) when \(\gamma\leq\delta\).
From now on, we assume that \(\gamma>\delta\); in this case, the \(p\)-adic expansions of \(\deg\mathbf{e}_{\lambda_{n}}\) and \(\deg\mathbf{e}_{n_{-}}\) look like
\[\deg\mathbf{e}_{\lambda_{n}}=\overline{\cdots\cdots\,\alpha_{ \gamma+2}\,\alpha_{\gamma+1}\alpha_{\gamma}\,0\,0\,\cdots\cdots\cdots\,0\,0\, \alpha_{\delta}\,\cdots\alpha_{0}},\] \[\deg\mathbf{e}_{n_{-}}=\overline{\cdots\cdots\,\alpha_{\gamma+2} \,\alpha_{\gamma+1}(\alpha_{\gamma}-1)\,(p-1)\,\cdots\,(p-1)\,\alpha_{\delta} ^{\prime}\,\cdots\alpha_{0}^{\prime}}. \tag{5.5.13}\]
Here each \(\alpha_{j}\) and \(\alpha_{j}^{\prime}\) belong to \(\{0,\ldots,p-1\}\) (and \(\alpha_{\gamma}\geq 1\)), and the two numbers \(\deg\mathbf{e}_{\lambda_{n}}\) and \(\deg\mathbf{e}_{n_{-}}\) agree beyond the \((\gamma+1)\)th digits. The condition \(\deg\mathbf{e}_{\lambda_{n}}-\deg\mathbf{e}_{n_{-}}\leq(p-1)p^{\delta}\) implies that \(\alpha_{\delta}^{\prime}\neq 0\).
We need to trace back to the definition of \(D(\underline{\lambda},\underline{n})\) in Notation 3.26 to compute
\[D(\underline{\lambda},\underline{n})-D(\underline{\lambda}^{\prime}, \underline{n})\\ =\sum_{j\geq 0}\Big{(}\max\big{\{}D_{=0}(\underline{\lambda},j)-D_{=0}( \underline{n},j+1),\,0\big{\}}-\max\big{\{}D_{=0}(\underline{\lambda}^{\prime}, j)-D_{=0}(\underline{n},j+1),\,0\big{\}}\Big{)}. \tag{5.5.14}\]
By Lemma 3.29 and the fact \(n_{-}\leq n\leq\lambda_{n}\),
\[D_{=0}(\underline{n},\delta+1)=\cdots=D_{=0}(\underline{n},\gamma-1).\]
It is not hard to see that for every \(j\in\{\delta+1,\ldots,\gamma-2\}\)
\[D_{=0}(\underline{\lambda}^{\prime},j)\geq D_{=0}(\underline{n},j)=D_{=0}( \underline{n},j+1).\]
Yet (5.5.13) implies that for each such \(j\), \(D_{=0}(\underline{\lambda},j)=D_{=0}(\underline{\lambda}^{\prime},j)+1.\) The contribution of the \(j\)th term to (5.5.14) is \(1\). For \(j=\gamma\), Lemma 3.29 implies that
\[D_{=0}(\underline{\lambda}^{\prime},\gamma)\leq D_{=0}(\underline{n},\gamma) \leq D_{=0}(\underline{n},\gamma+1).\]
So the \((j=\gamma)\)th term in (5.5.14) is zero.
Summarizing the above discussion, we have
\[D(\underline{\lambda},\underline{n})\geq D(\underline{\lambda}^{\prime}, \underline{n})+\max\{\gamma-\delta-2,\,0\}-\delta, \tag{5.5.15}\]
where the term \(\gamma-\delta-2\) comes from the \(j\)th terms with \(j\in\{\delta+1,\ldots,\gamma-2\}\) and the term \(-\delta\) corresponds to \(j\in\{0,\ldots,\delta-1\}\). The terms with \(j=\delta\) or \(\gamma-1\) have nonnegative contributions to (5.5.14). To prove (5.5.11), it suffices to show that
\[\frac{\deg\mathbf{e}_{\lambda_{n}}-\deg\mathbf{e}_{n_{-}}}{2}-\Big{\lfloor} \frac{\deg\mathbf{e}_{\lambda_{n}}-\deg\mathbf{e}_{n_{-}}-2}{p-1}\Big{\rfloor} \geq 2\delta+2+\boldsymbol{\delta}. \tag{5.5.16}\]
If \(\delta=0\), then \(\deg\mathbf{e}_{\lambda_{n}}-\deg\mathbf{e}_{n_{-}}=p-1\); so (5.5.16) says \(\frac{p-1}{2}\geq 2+\boldsymbol{\delta}\), which is obvious. If \(\delta\geq 2\), it is clear that the left hand side \(\geq\frac{1}{4}(p-1)p^{\delta-1}>2\delta+2+\boldsymbol{\delta}\). If \(\delta=1\) and \(\lambda_{n}-n_{-}\geq 4\), the left hand side \(\geq p-2\geq 4+\boldsymbol{\delta}\).
For the remaining case \(\lambda_{n}-n_{-}=3\), we have \(\delta=1\). Now (5.5.16) becomes
\[\frac{\deg\mathbf{e}_{\lambda_{n}}-\deg\mathbf{e}_{n_{-}}}{2}-\Big{\lfloor} \frac{\deg\mathbf{e}_{\lambda_{n}}-\deg\mathbf{e}_{n_{-}}-2}{p-1}\Big{\rfloor} \geq 4+\boldsymbol{\delta}.\]
The left hand side is \(\geq\frac{p-1}{2}\geq 4+\boldsymbol{\delta}\) as we have assumed \(p\geq 11\). This finishes the proof of Claim under the assumption that \(\underline{\lambda}\neq\underline{n}\).
Finally we prove Claim for \(\underline{\lambda}=\underline{\eta}=\underline{n}\). By (5.5.5) and the fact that \(Y^{-1}\) is upper triangular, we have
\[\det(\mathrm{U}^{\dagger}(\underline{n})) =\sum_{\underline{\lambda},\underline{\eta}\in\mathbb{N}^{n}}\det(\mathrm{U}_{\mathbf{C}}(\underline{\lambda}\times\underline{\eta}))\prod_{i=1}^{n}Y_{\mathbf{e}_{i},\mathbf{e}_{\lambda_{i}}}Y_{\mathbf{e}_{\eta_{i}},\mathbf{e}_{i}}^{-1}\] \[=\sum_{\underline{\lambda}\in\mathbb{N}^{n}}\det(\mathrm{U}_{\mathbf{C}}(\underline{\lambda}\times\underline{n}))\prod_{i=1}^{n}Y_{\mathbf{e}_{i},\mathbf{e}_{\lambda_{i}}}Y_{\mathbf{e}_{i},\mathbf{e}_{i}}^{-1}\]
As Claim has been proved for all \(\mathrm{U}_{\mathbf{C}}(\underline{\lambda}\times\underline{n})\)'s with \(\underline{\lambda}\neq\underline{n}\), if we write \(f(w)=\det(\mathrm{U}^{\dagger}(\underline{n}))-\det(\mathrm{U}_{\mathbf{C}}( \underline{n}))\), we have \(f(w)\in p^{\deg g_{n}(w)}\mathcal{O}\langle\frac{w}{p}\rangle\). By Corollary 3.10, there exists \(h(w)=\sum\limits_{n\geq 0}h_{n}\cdot(\frac{w}{p})^{n}\in\mathcal{O}\langle \frac{w}{p}\rangle\), \(h_{n}\in\mathcal{O}\) for all \(n\), such that \(\det(\mathrm{U}^{\dagger}(\underline{n}))=p^{-\deg g_{n}(w)}g_{n}(w)h(w)\). For simplicity, we set \(d=\deg g_{n}(w)\) and \(g_{n}(w)=\sum_{i=0}^{d}p^{i}c_{i}w^{d-i}\) with \(c_{0}=1\) and \(c_{i}\in\mathbb{Z}_{p}\), \(i=1,\ldots,d\). If there exists an integer \(M\) satisfying \(v_{p}(h_{M})<d\), let \(m\) be the largest integer with this property (it exists as \(h(w)\in\mathcal{O}\langle\frac{w}{p}\rangle\)). The \(w^{d+m}\)-coefficient of \(\det(\mathrm{U}^{\dagger}(\underline{n}))=p^{-\deg g_{n}(w)}g_{n}(w)h(w)\) is \(p^{-d}\sum_{i=0}^{d}p^{i}c_{i}\frac{h_{m+i}}{p^{m+i}}\), which has \(p\)-adic valuation \(-d+v_{p}(\frac{h_{m}}{p^{m}})<-m\). On the other hand, it follows from Lemma 3.14 that \(\det(\mathrm{U}_{\mathbf{C}}(\underline{n}))\in\mathcal{O}[\![w]\!]\), and we see from the
equality \(\det(\mathrm{U}^{\dagger}(\underline{n}))=\det(\mathrm{U}_{\mathbf{C}}(\underline{n}))+f(w)\) that the \(p\)-adic valuation of the \(w^{d+m}\)-coefficient of \(\det(\mathrm{U}^{\dagger}(\underline{n}))\) is greater than or equal to \(-m\), which is a contradiction. Hence \(v_{p}(h_{m})\geq d\) for all \(m\) and \(\det(\mathrm{U}^{\dagger}(\underline{n}))\in g_{n}(w)\mathcal{O}\langle\tfrac{w}{p}\rangle\subset p^{\deg g_{n}(w)}\mathcal{O}\langle\tfrac{w}{p}\rangle\). So we also have \(\det(\mathrm{U}_{\mathbf{C}}(\underline{n}))\in p^{\deg g_{n}(w)}\mathcal{O}\langle\tfrac{w}{p}\rangle\). This concludes the proof of Claim as well as the proof of Proposition 5.4.
**Remark 5.6**.: We point out that the proof of this proposition is where the conditions \(a\notin\{1,p-4\}\) and \(p\geq 11\) are used. The problem is rooted in the number \(\boldsymbol{\delta}=\deg g_{n}(w)-\sum_{i=1}^{n}\bigl{(}\deg\mathbf{e}_{i}-\bigl{\lfloor}\frac{\deg\mathbf{e}_{i}}{p}\bigr{\rfloor}\bigr{)}\in\{0,1\}\) measuring the error from the halo estimate in Corollary 3.27.
## 6. Proof of local ghost conjecture III: cofactor expansions
Now, we come to explain Step II as outlined at the beginning of Section 4, which aims to reduce Theorem 5.2 to the estimate we have proved in Proposition 5.4 for subminors. We conclude the proof of Theorem 5.2 and hence Theorem 2.7 at the end of this section.
Keep the notations from the previous section, and recall that a relevant character \(\varepsilon\) is fixed throughout yet suppressed from the notation.
**Notation 6.1**.: We fix \(n\in\mathbb{N}\) and a weight \(k=k_{\varepsilon}+(p-1)k_{\bullet}\) such that \(m_{n}(k)\neq 0\).
Similar to Proposition 3.6(2), let \(\mathrm{L}_{k}\in\mathrm{M}_{\infty}(\mathcal{O})\) denote the following infinite matrix:
* The upper-left \((d_{k}^{\mathrm{Iw}}\times d_{k}^{\mathrm{Iw}})\)-block of \(\mathrm{L}_{k}\) is the Atkin-Lehner operator \(\mathrm{AL}_{(k,\varepsilon_{1})}\) acting on the power basis \(\mathbf{B}_{k}\); it is an antidiagonal matrix whose \((i,d_{k}^{\mathrm{Iw}}+1-i)\)-entry is \(p^{\deg\mathbf{e}_{i}}\).
* Entries of \(\mathrm{L}_{k}\) other than the upper-left \(d_{k}^{\mathrm{Iw}}\times d_{k}^{\mathrm{Iw}}\) are the same as the corresponding entries of \(\mathrm{U}^{\dagger}\) evaluated at \(w=w_{k}\).
This matrix \(\mathrm{L}_{k}\) is block upper triangular by (2.11.2) of Proposition 2.11(1). Set
\[\mathrm{T}_{k}:=\mathrm{U}^{\dagger}-\mathrm{L}_{k}\;\in\;\mathrm{M}_{\infty}( \mathcal{O}\langle w/p\rangle).\]
For two subsets of integers \(\underline{\zeta}=(\zeta_{1}<\cdots<\zeta_{n})\) and \(\underline{\xi}=(\xi_{1}<\cdots<\xi_{n})\) of cardinality \(n\), we let \(\mathrm{L}_{k}(\underline{\zeta}\times\underline{\xi})\) (resp. \(\mathrm{T}_{k}(\underline{\zeta}\times\underline{\xi})\)) denote the submatrices of \(\mathrm{L}_{k}\) (resp. \(\mathrm{T}_{k}\)) with rows in \(\underline{\zeta}\) and columns in \(\underline{\xi}\). Then Definition-Proposition 3.21 says that the corank of \(\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})|_{w=w_{k}}\) is at least \(n-d_{k}^{\mathrm{ur}}-r_{\underline{\zeta}\times\underline{\xi}}(k)-s_{ \underline{\xi}}(k)\). In the following discussion, we will use the sets \(\underline{\zeta}\) and \(\underline{\xi}\) as the natural row and column index sets of the matrices \(\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\), \(\mathrm{L}_{k}(\underline{\zeta}\times\underline{\xi})\) and \(\mathrm{T}_{k}(\underline{\zeta}\times\underline{\xi})\).
We also need a sign convention: when computing the determinant of a matrix like \(\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\), we write the row and columns in increasing order of the numbers in \(\underline{\zeta}\) and \(\underline{\xi}\). For a subset \(I\subseteq\underline{\zeta}\), we write \(\mathrm{sgn}(I,\underline{\zeta})\) to mean the sign of permutation that sends \(\underline{\zeta}\) to the _ordered_ disjoint union of \(I\sqcup(\underline{\zeta}-I)\), where elements in each of \(I\) and \(\underline{\zeta}-I\) are in increasing order.
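To illustrate the sign convention: if \(\underline{\zeta}=\{2,5,7\}\) and \(I=\{5\}\), then the ordered disjoint union \(I\sqcup(\underline{\zeta}-I)\) is \((5,2,7)\), obtained from \((2,5,7)\) by one transposition, so \(\mathrm{sgn}(I,\underline{\zeta})=-1\); for \(I=\{5,7\}\) the ordered union is \((5,7,2)\), a cyclic rotation of \((2,5,7)\), so \(\mathrm{sgn}(I,\underline{\zeta})=+1\).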
The following key linear algebra result roughly states that, modulo an appropriate power of \(w-w_{k}\), we may express the determinant of \(\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\) as the linear combination of determinants of minors of smaller sizes.
**Lemma 6.2**.: _Let \(k\), \(\mathrm{U}^{\dagger}\), \(\mathrm{T}_{k}\), \(\mathrm{L}_{k}\), \(\underline{\zeta}\), and \(\underline{\xi}\) be as above. Fix \(J_{0}\subseteq\underline{\xi}\) a subset of cardinality \(j_{0}\). We write \(\mathrm{T}_{k}(\underline{\zeta}\times\underline{\xi};J_{0})\) for the \(\underline{\zeta}\times\underline{\xi}\)-matrix whose \((\underline{\xi}-J_{0})\)-columns are given by that of \(\mathrm{U}^{\dagger}\) and whose \(J_{0}\)-columns are given by that of \(\mathrm{T}_{k}\). Then_
\[\det\bigl{(}\mathrm{T}_{k}(\underline{\zeta}\times\underline{\xi}; J_{0})\bigr{)}=\sum_{J\subseteq J_{0}}\sum_{\begin{subarray}{c}I\subseteq\underline{ \zeta}\\ \#I=\#J\end{subarray}}(-1)^{\#J}\mathrm{sgn}(I,\underline{\zeta})\mathrm{sgn}(J, \underline{\xi})\\ \cdot\det\bigl{(}\mathrm{L}_{k}(I\times J)\bigr{)}\cdot\det\bigl{(} \mathrm{U}^{\dagger}((\underline{\zeta}-I)\times(\underline{\xi}-J))\bigr{)}. \tag{6.2.1}\]
_In particular, as power series in \(E[\![w-w_{k}]\!]\), we have the following congruence_
\[\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{ \xi})\bigr{)}\equiv\sum_{\begin{subarray}{c}J\subseteq J_{0}\\ J\neq\emptyset\end{subarray}}\sum_{\begin{subarray}{c}I\subseteq\underline{ \zeta}\\ \#I=\#J\end{subarray}}(-1)^{\#J-1}\mathrm{sgn}(I,\underline{\zeta})\mathrm{sgn}( J,\underline{\xi})\cdot\det\bigl{(}\mathrm{L}_{k}(I\times J)\bigr{)}\\ \cdot\det\bigl{(}\mathrm{U}^{\dagger}((\underline{\zeta}-I)\times( \underline{\xi}-J))\bigr{)}\qquad\mathrm{mod}\;(w-w_{k})^{\mathrm{corank\,T}_{ k}(\underline{\zeta}\times\underline{\xi};J_{0})|_{w=w_{k}}}. \tag{6.2.2}\]
Proof.: The equality (6.2.1) is a purely formal linear algebra equality and it does not need the special properties of the matrices \(\mathrm{U}^{\dagger}\), \(\mathrm{T}_{k}\), and \(\mathrm{L}_{k}\) beyond the equality \(\mathrm{L}_{k}+\mathrm{T}_{k}=\mathrm{U}^{\dagger}\). Indeed, we may write \(\mathrm{T}_{k}(\underline{\zeta}\times\underline{\xi};J_{0})=\mathrm{U}^{ \dagger}(\underline{\zeta}\times\underline{\xi})+(-\mathrm{L}_{k}(\underline{ \zeta}\times J_{0}))\), where we view \(\mathrm{L}_{k}(\underline{\zeta}\times J_{0})\) as a \(\underline{\zeta}\times\underline{\xi}\)-matrix whose \((\underline{\xi}-J_{0})\)-columns are zero. Since taking determinant is (multi)-linear with respect to the columns, taking the cofactor expansion with respect to the expression above exactly gives (6.2.1). For example, if \(\mathrm{L}_{k}(\underline{\zeta}\times\underline{\xi})\) has only four nonzero entries, at the (upper left) \(\{\zeta_{1},\zeta_{2}\}\times\{\xi_{1},\xi_{2}\}\)-minor, and \(J_{0}=\{\xi_{1},\xi_{2}\}\), then \(\mathrm{T}_{k}(\underline{\zeta}\times\underline{\xi};J_{0})=\mathrm{T}_{k}( \underline{\zeta}\times\underline{\xi})\) and the formula (6.2.1) reads
\[\det\bigl{(}\mathrm{T}_{k}(\underline{\zeta}\times\underline{\xi};J_{0})\bigr{)}=\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr{)}-\sum_{i,j=1}^{2}(-1)^{i-j}L_{\zeta_{i},\xi_{j}}\det\bigl{(}\mathrm{U}^{\dagger}((\underline{\zeta}-\zeta_{i})\times(\underline{\xi}-\xi_{j}))\bigr{)}+\det\bigl{(}\mathrm{L}_{k}(\{\zeta_{1},\zeta_{2}\}\times\{\xi_{1},\xi_{2}\})\bigr{)}\cdot\det\bigl{(}\mathrm{U}^{\dagger}((\underline{\zeta}-\{\zeta_{1},\zeta_{2}\})\times(\underline{\xi}-\{\xi_{1},\xi_{2}\}))\bigr{)}.\]
Here \(L_{\zeta_{i},\xi_{j}}\) is the \((\zeta_{i},\xi_{j})\)-entry of \(L_{k}\).
The congruence relation (6.2.2) follows immediately from (6.2.1) and the observation that \(\det\bigl{(}\mathrm{T}_{k}(\underline{\zeta}\times\underline{\xi};J_{0})\bigr{)}\) is divisible by \((w-w_{k})^{\mathrm{corank\,T}_{k}(\underline{\zeta}\times\underline{\xi};J_{0})|_{w=w_{k}}}\) in \(E[\![w-w_{k}]\!]\).
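For the divisibility used here, note the following general fact: if \(M\) is a square matrix over \(E[\![w-w_{k}]\!]\) such that \(M|_{w=w_{k}}\) has corank \(c\), then one may perform invertible column operations with coefficients in \(E\) so that \(c\) columns of \(M|_{w=w_{k}}\) vanish; the corresponding columns of \(M\) are then divisible by \(w-w_{k}\), and hence \(\det(M)\in(w-w_{k})^{c}E[\![w-w_{k}]\!]\).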
**Notation 6.3**.: Now, we apply Lemma 6.2 to the situation of Theorem 5.2 with the fixed integer \(n\geq 2\), a ghost zero \(w_{k}\) of \(g_{n}(w)\), and subsets \(\underline{\zeta}\) and \(\underline{\xi}\) of cardinality \(n\). Then we have \(\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\), \(\mathrm{T}_{k}(\underline{\zeta}\times\underline{\xi})\), \(\mathrm{L}_{k}(\underline{\zeta}\times\underline{\xi})\), \(r_{\underline{\zeta}\times\underline{\xi}}(k)\), and \(s_{\underline{\xi}}(k)\) as defined above. Let \(J_{\underline{\zeta}\times\underline{\xi}}\) denote the set consisting of _all_\(\xi_{j}\in\underline{\xi}\) such that either \(\xi_{j}>d_{k}^{\mathrm{Iw}}\) or \(d_{k}^{\mathrm{Iw}}+1-\xi_{j}\in\underline{\zeta}\). Then \(\#J_{\underline{\zeta}\times\underline{\xi}}=r_{\underline{\zeta}\times \underline{\xi}}(k)+s_{\underline{\xi}}(k)\).
We introduce the following notation to reorganize the congruence relation from Lemma 6.2. For every \(j\leq r_{\underline{\zeta}\times\underline{\xi}}(k)+s_{\underline{\xi}}(k)\), we denote
\[\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi}) \bigr{)}_{j}:=\sum_{\begin{subarray}{c}I\subseteq\underline{\zeta}\\ \#I=j\end{subarray}}\sum_{\begin{subarray}{c}J\subseteq J_{\underline{\zeta} \times\underline{\xi}}\\ \#J=j\end{subarray}}\mathrm{sgn}(I,\underline{\zeta})\mathrm{sgn}(J,\underline{ \xi})\cdot\det\bigl{(}\mathrm{L}_{k}(I\times J)\bigr{)}\cdot\det\bigl{(} \mathrm{U}^{\dagger}((\underline{\zeta}-I)\times(\underline{\xi}-J))\bigr{)}. \tag{6.3.1}\]
This is a signed sum of the products of the determinants of some minors of \(\mathrm{U}^{\dagger}\) of size \(n-j\) with the determinants of the complementary minors in \(\mathrm{L}_{k}\). In particular, \(\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr{)}_{0}=\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr{)}\), and Lemma 6.2 above (applied to the case \(J_{0}=J_{\underline{\zeta}\times\underline{\xi}}\)) implies that
\[\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr{)}\equiv\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr{)}_{1}-\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr{)}_{2}+\cdots\\ +(-1)^{r_{\underline{\zeta}\times\underline{\xi}}(k)+s_{\underline{\xi}}(k)-1}\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr{)}_{r_{\underline{\zeta}\times\underline{\xi}}(k)+s_{\underline{\xi}}(k)}\qquad\mathrm{mod}\;(w-w_{k})^{n-d_{k}^{\mathrm{ur}}}. \tag{6.3.2}\]
(Note that Definition-Proposition 3.21 implies that \(\mathrm{T}_{k}(\underline{\zeta}\times\underline{\xi};J_{\underline{\zeta}\times\underline{\xi}})\big{|}_{w=w_{k}}\) has corank at least \(n-d_{k}^{\mathrm{ur}}\).)
Our argument needs an elaborated version of (6.3.2), with one goal: we try to write \(\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr{)}\) as a linear combination of minors of the smallest possible size (after reducing modulo an appropriate power of \(w-w_{k}\)). This is the content of the following lemma.
**Lemma 6.4**.: _Keep the notation as above. For a fixed nonnegative integer \(j_{0}\leq r_{\underline{\zeta}\times\underline{\xi}}(k)+s_{\underline{\xi}}(k)-1\), we have the following congruence of power series in \(E[\![w-w_{k}]\!]\):_
\[\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi}) \bigr{)}\equiv\sum_{j>j_{0}}(-1)^{j-j_{0}-1}\binom{j-1}{j_{0}}\cdot\det\bigl{(} \mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr{)}_{j}\quad \mathrm{mod}\;(w-w_{k})^{\max\{0,n-d_{k}^{\mathrm{ur}}-j_{0}\}}. \tag{6.4.1}\]
_More generally, for all nonnegative integers \(\ell\leq j_{0}<r_{\underline{\zeta}\times\underline{\xi}}(k)+s_{\underline{\xi}}(k)\), we have the following congruence of power series in \(E[\![w-w_{k}]\!]\):_
\[\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi}) \bigr{)}_{\ell}\equiv\sum_{j>j_{0}}(-1)^{j-j_{0}-1}\binom{j-\ell-1}{j_{0}-\ell }\binom{j}{\ell}\cdot\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times \underline{\xi})\bigr{)}_{j}\quad\mathrm{mod}\;(w-w_{k})^{\max\{0,n-d_{k}^{ \mathrm{ur}}-j_{0}\}}. \tag{6.4.2}\]
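Note that (6.4.1) is precisely the case \(\ell=0\) of (6.4.2): indeed, \(\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr{)}_{0}=\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr{)}\) and \(\binom{j}{0}=1\).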
Proof.: We first prove (6.4.2) in the special case when \(\ell=j_{0}\). When \(\ell=j_{0}=0\), this is exactly (6.3.2). To treat the general case with \(\ell=j_{0}\), we apply Lemma 6.2 (especially (6.2.2)) to each factor \(\det\bigl{(}\mathrm{U}^{\dagger}((\underline{\zeta}-I)\times(\underline{\xi}- J))\bigr{)}\) appearing in (6.3.1), to deduce the following:
\[\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr{)}_{j_{0}}:=\sum_{\begin{subarray}{c}I\subseteq\underline{\zeta}\\ \#I=j_{0}\end{subarray}}\sum_{\begin{subarray}{c}J\subseteq J_{\underline{\zeta}\times\underline{\xi}}\\ \#J=j_{0}\end{subarray}}\mathrm{sgn}(I,\underline{\zeta})\mathrm{sgn}(J,\underline{\xi})\cdot\det\bigl{(}\mathrm{L}_{k}(I\times J)\bigr{)}\cdot\det\bigl{(}\mathrm{U}^{\dagger}((\underline{\zeta}-I)\times(\underline{\xi}-J))\bigr{)}\] \[\equiv \sum_{\begin{subarray}{c}I\subseteq\underline{\zeta}\\ \#I=j_{0}\end{subarray}}\sum_{\begin{subarray}{c}J\subseteq J_{\underline{\zeta}\times\underline{\xi}}\\ \#J=j_{0}\end{subarray}}\mathrm{sgn}(I,\underline{\zeta})\mathrm{sgn}(J,\underline{\xi})\cdot\det\bigl{(}\mathrm{L}_{k}(I\times J)\bigr{)}\cdot\sum_{\begin{subarray}{c}J^{\prime}\subseteq J_{\underline{\zeta}\times\underline{\xi}}-J\\ J^{\prime}\neq\emptyset\end{subarray}}\sum_{\begin{subarray}{c}I^{\prime}\subseteq\underline{\zeta}-I\\ \#I^{\prime}=\#J^{\prime}\end{subarray}}(-1)^{\#J^{\prime}-1}\] \[\mathrm{sgn}(I^{\prime},\underline{\zeta}-I)\mathrm{sgn}(J^{\prime},\underline{\xi}-J)\cdot\det\bigl{(}\mathrm{L}_{k}(I^{\prime}\times J^{\prime})\bigr{)}\cdot\det\bigl{(}\mathrm{U}^{\dagger}((\underline{\zeta}-I-I^{\prime})\times(\underline{\xi}-J-J^{\prime}))\bigr{)}\]
modulo \((w-w_{k})^{\max\{0,n-d_{k}^{\mathrm{ur}}-j_{0}\}}\). Here we make use of Definition-Proposition 3.21 to deduce that rank \(\mathrm{T}_{k}\bigl{(}(\underline{\zeta}-I)\times(\underline{\xi}-J);J_{ \underline{\zeta}\times\underline{\xi}}-J\bigr{)}|_{w=w_{k}}\) is at most \(d_{k}^{\mathrm{ur}}\) and hence its corank is at least \(n-j_{0}-d_{k}^{\mathrm{ur}}\).
We set \(I^{\prime\prime}=I\sqcup I^{\prime}\) and \(J^{\prime\prime}=J\sqcup J^{\prime}\) and set \(j:=\#I^{\prime\prime}=\#J^{\prime\prime}>j_{0}\). Then the above long expression is equal to
\[\sum_{j>j_{0}}(-1)^{j-j_{0}-1}\sum_{\begin{subarray}{c}I^{\prime\prime}\subseteq\underline{\zeta}\\ \#I^{\prime\prime}=j\end{subarray}}\sum_{\begin{subarray}{c}J^{\prime\prime}\subseteq J_{\underline{\zeta}\times\underline{\xi}}\\ \#J^{\prime\prime}=j\end{subarray}}\sum_{\begin{subarray}{c}I\subseteq I^{\prime\prime}\\ \#I=j_{0}\end{subarray}}\sum_{\begin{subarray}{c}J\subseteq J^{\prime\prime}\\ \#J=j_{0}\end{subarray}}\mathrm{sgn}(I,\underline{\zeta})\mathrm{sgn}(J,\underline{\xi})\mathrm{sgn}(I^{\prime\prime}-I,\underline{\zeta}-I)\mathrm{sgn}(J^{\prime\prime}-J,\underline{\xi}-J)\] \[\quad\cdot\det\bigl{(}\mathrm{L}_{k}(I\times J)\bigr{)}\cdot\det\bigl{(}\mathrm{L}_{k}((I^{\prime\prime}-I)\times(J^{\prime\prime}-J))\bigr{)}\cdot\det\bigl{(}\mathrm{U}^{\dagger}((\underline{\zeta}-I^{\prime\prime})\times(\underline{\xi}-J^{\prime\prime}))\bigr{)}.\]
Using the sign equality
\[\mathrm{sgn}(I,\underline{\zeta})\mathrm{sgn}(I^{\prime\prime}-I,\underline{ \zeta}-I)=\mathrm{sgn}(I^{\prime\prime},\underline{\zeta})\mathrm{sgn}(I,I^{ \prime\prime})\]
and the similar sign equality for \(\xi\), \(J\), \(J^{\prime}\), and \(J^{\prime\prime}\), we may rewrite the above sum as
\[\sum_{j>j_{0}}(-1)^{j-j_{0}-1}\sum_{\begin{subarray}{c}I^{\prime \prime}\subseteq\zeta\\ \#I^{\prime\prime}=j\end{subarray}}\sum_{\begin{subarray}{c}J^{\prime\prime} \subseteq J_{\zeta\times\underline{\xi}}\\ \#J^{\prime\prime}=j\end{subarray}}\operatorname{sgn}(I^{\prime\prime}, \underline{\zeta})\operatorname{sgn}(J^{\prime\prime},\underline{\xi}) \cdot\det\bigl{(}\operatorname{U}^{\dagger}((\underline{\zeta}-I^{\prime\prime} )\times(\underline{\xi}-J^{\prime\prime}))\bigr{)}\] \[\cdot\sum_{\begin{subarray}{c}I\subseteq I^{\prime\prime}\\ \#I=j_{0}\end{subarray}}\sum_{\begin{subarray}{c}J\subseteq J^{\prime\prime} \\ \#J=j_{0}\end{subarray}}\operatorname{sgn}(I,I^{\prime\prime})\operatorname{ sgn}(J,J^{\prime\prime})\cdot\det\bigl{(}\operatorname{L}_{k}(I\times J)\bigr{)} \cdot\det\bigl{(}\operatorname{L}_{k}((I^{\prime\prime}-I)\times(J^{\prime \prime}-J))\bigr{)}.\]
But the sum in the second line is simply \(\binom{j}{j_{0}}\) times \(\det\bigl{(}\operatorname{L}_{k}(I^{\prime\prime}\times J^{\prime\prime})\bigr{)}\) by the generalized Laplace expansion of determinants (recalled below), where the factor \(\binom{j}{j_{0}}\) corresponds to the number of choices of the subset \(I\subseteq I^{\prime\prime}\). From this, we deduce that
\[\det\bigl{(}\operatorname{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr{)}_{j_{0}} \equiv \sum_{j>j_{0}}(-1)^{j-j_{0}-1}\sum_{\begin{subarray}{c}I^{\prime\prime}\subseteq\underline{\zeta}\\ \#I^{\prime\prime}=j\end{subarray}}\sum_{\begin{subarray}{c}J^{\prime\prime}\subseteq J_{\underline{\zeta}\times\underline{\xi}}\\ \#J^{\prime\prime}=j\end{subarray}}\operatorname{sgn}(I^{\prime\prime},\underline{\zeta})\operatorname{sgn}(J^{\prime\prime},\underline{\xi})\] \[\cdot\det\bigl{(}\operatorname{U}^{\dagger}((\underline{\zeta}-I^{\prime\prime})\times(\underline{\xi}-J^{\prime\prime}))\bigr{)}\cdot\binom{j}{j_{0}}\cdot\det\bigl{(}\operatorname{L}_{k}(I^{\prime\prime}\times J^{\prime\prime})\bigr{)}\]
\(\bmod(w-w_{k})^{\max\{0,n-d_{k}^{\mathrm{ur}}-j_{0}\}}\). This is exactly (6.4.2) when \(\ell=j_{0}\).
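The generalized Laplace expansion used in the penultimate step reads as follows (with the sign convention introduced before Lemma 6.2): for every fixed subset \(I\subseteq I^{\prime\prime}\) with \(\#I=j_{0}\),
\[\det\bigl{(}\mathrm{L}_{k}(I^{\prime\prime}\times J^{\prime\prime})\bigr{)}=\sum_{\begin{subarray}{c}J\subseteq J^{\prime\prime}\\ \#J=j_{0}\end{subarray}}\mathrm{sgn}(I,I^{\prime\prime})\mathrm{sgn}(J,J^{\prime\prime})\cdot\det\bigl{(}\mathrm{L}_{k}(I\times J)\bigr{)}\cdot\det\bigl{(}\mathrm{L}_{k}((I^{\prime\prime}-I)\times(J^{\prime\prime}-J))\bigr{)};\]
summing over the \(\binom{j}{j_{0}}\) choices of \(I\) yields the factor \(\binom{j}{j_{0}}\) above.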
We now prove the general case by induction on the difference \(j_{0}-\ell\). The base case \(\ell=j_{0}\) has just been treated. Assume that we have proved (6.4.2) for smaller values of \(j_{0}-\ell\). Then we have the following congruences (corresponding to the cases of \((\ell,j_{0}-1)\) and \((j_{0},j_{0})\)).
\[\det\bigl{(}\operatorname{U}^{\dagger}(\underline{\zeta}\times \underline{\xi})\bigr{)}_{\ell} \equiv \sum_{j>j_{0}-1}(-1)^{j-j_{0}}\binom{j-\ell-1}{j_{0}-\ell-1} \binom{j}{\ell}\cdot\det\bigl{(}\operatorname{U}^{\dagger}(\underline{\zeta} \times\underline{\xi})\bigr{)}_{j}\mod(w-w_{k})^{\max\{0,n-d_{k}^{\mathrm{ur}} -j_{0}+1\}},\] \[\det\bigl{(}\operatorname{U}^{\dagger}(\underline{\zeta}\times \underline{\xi})\bigr{)}_{j_{0}} \equiv \sum_{j>j_{0}}(-1)^{j-j_{0}-1}\binom{j}{j_{0}}\cdot\det\bigl{(} \operatorname{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr{)}_{j }\mod(w-w_{k})^{\max\{0,n-d_{k}^{\mathrm{ur}}-j_{0}\}}.\]
Plugging the second congruence into the first one (and reducing modulo the smaller power \((w-w_{k})^{\max\{0,n-d_{k}^{\mathrm{ur}}-j_{0}\}}\)), we immediately deduce (6.4.2) by noting
\[\binom{j_{0}}{\ell}\binom{j}{j_{0}}-\binom{j-\ell-1}{j_{0}-\ell-1}\binom{j}{ \ell}=\binom{j-\ell-1}{j_{0}-\ell}\binom{j}{\ell}.\]
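This identity can be checked directly: by the subset-of-a-subset identity \(\binom{j_{0}}{\ell}\binom{j}{j_{0}}=\binom{j}{\ell}\binom{j-\ell}{j_{0}-\ell}\) and Pascal's rule \(\binom{j-\ell}{j_{0}-\ell}-\binom{j-\ell-1}{j_{0}-\ell-1}=\binom{j-\ell-1}{j_{0}-\ell}\), we have
\[\binom{j_{0}}{\ell}\binom{j}{j_{0}}-\binom{j-\ell-1}{j_{0}-\ell-1}\binom{j}{\ell}=\binom{j}{\ell}\Bigl{(}\binom{j-\ell}{j_{0}-\ell}-\binom{j-\ell-1}{j_{0}-\ell-1}\Bigr{)}=\binom{j-\ell-1}{j_{0}-\ell}\binom{j}{\ell}.\]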
**Remark 6.5**.: We point out a variant of the above lemma that we will use later. Fix any power series \(\eta(w)\in 1+(w-w_{k})E[\![w-w_{k}]\!]\). For \(J_{0}\subseteq J_{\underline{\zeta}\times\underline{\xi}}\), write
\[\widetilde{\Upsilon}_{k}(\underline{\zeta}\times\underline{\xi};J_{0}):= \operatorname{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})-\eta(w)^{- 1}\cdot\operatorname{L}_{k}(\underline{\zeta}\times J_{0})\in\operatorname{M} _{\infty}(E[\![w-w_{k}]\!]);\]
then we obtain a formula for \(\det\bigl{(}\widetilde{\operatorname{T}}_{k}(\underline{\zeta}\times\underline{\xi};J_{0})\bigr{)}\) analogous to (6.2.1), with an additional factor \(\eta(w)^{-\#J}\) on the right hand side. Moreover, \(\widetilde{\operatorname{T}}_{k}(\underline{\zeta}\times\underline{\xi};J_{0})|_{w=w_{k}}=\operatorname{T}_{k}(\underline{\zeta}\times\underline{\xi};J_{0})|_{w=w_{k}}\), so the two matrices have the same corank. So if we define the analogue of (6.3.1) to be
\[\det\bigl{(}\operatorname{U}^{\dagger}(\underline{\zeta}\times \underline{\xi})\bigr{)}_{j}^{\sim}:=\eta(w)^{-j}\cdot\det\bigl{(} \operatorname{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr{)}_{j} \\ =\sum_{\begin{subarray}{c}I\subseteq\underline{\zeta}\\ \#I=j\end{subarray}}\sum_{\begin{subarray}{c}J\subseteq J_{\zeta\times \underline{\xi}}\\ \#J=j\end{subarray}}\operatorname{sgn}(I,\underline{\zeta})\text{sgn}(J, \underline{\xi})\cdot\eta(w)^{-j}\cdot\det\bigl{(}\operatorname{L}_{k}(I \times J)\bigr{)}\cdot\det\bigl{(}\operatorname{U}^{\dagger}((\underline{ \zeta}-I)\times(\underline{\xi}-J))\bigr{)}, \tag{6.5.1}\]
exactly the same argument as in Lemmas 6.2 and 6.4 shows that, for all nonnegative integers \(\ell\leq j_{0}<r_{\underline{\zeta}\times\underline{\xi}}(k)+s_{\underline{\xi}}(k)\), we have the following congruence of power series in \(E[\![w-w_{k}]\!]\):
\[\det\bigl(\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr)_{\ell}^{\sim}\equiv\sum_{j>j_{0}}(-1)^{j-j_{0}-1}\binom{j-\ell-1}{j_{0}-\ell}\binom{j}{\ell}\cdot\det\bigl(\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr)_{j}^{\sim}\mod(w-w_{k})^{\max\{0,n-d_{k}^{\mathrm{ur}}-j_{0}\}}. \tag{6.5.2}\]
**Notation 6.6**.: To further simplify notations later, we normalize
\[B_{k,i}^{(\underline{\zeta}\times\underline{\xi})}:=p^{\frac{1}{2}(\deg( \underline{\xi})-\deg(\underline{\zeta}))}\cdot A_{k,i}^{(\underline{\zeta} \times\underline{\xi})}\cdot g_{n,\hat{k}}(w_{k}). \tag{6.6.1}\]
So condition (5.2.1) is equivalent to, for \(i=0,1,\ldots,m_{n}(k)-1\),
\[v_{p}\bigl(B_{k,i}^{(\underline{\zeta}\times\underline{\xi})}\bigr)\geq\Delta_{k,\frac{1}{2}d_{k}^{\mathrm{new}}-i}-\tfrac{k-2}{2}\bigl(\tfrac{1}{2}d_{k}^{\mathrm{Iw}}-n\bigr). \tag{6.6.2}\]
Further, we normalize the minors appearing in the formula (6.4.2) as follows and consider their expansions as power series in \(E[\![w-w_{k}]\!]\):
\[p^{\frac{1}{2}(\deg(\underline{\xi})-\deg(\underline{\zeta}))}\cdot\frac{\det \bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr{)} _{\ell}}{g_{n-\ell,\hat{k}}(w)/g_{n-\ell,\hat{k}}(w_{k})}=\sum_{i\geq 0}B_{k,i}^{( \underline{\zeta}\times\underline{\xi},\ell)}(w-w_{k})^{i}. \tag{6.6.3}\]
This normalization has in mind that the natural way to understand each sum of minor determinants appearing in \(\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi}) \bigr{)}_{\ell}\) is through its Lagrange interpolation along \(g_{n-\ell}(w)\). In particular, when \(\ell=0\), \(B_{k,i}^{(\underline{\zeta}\times\underline{\xi},0)}\) is equal to \(B_{k,i}^{(\underline{\zeta}\times\underline{\xi})}\) in (6.6.1) for \(i=0,\ldots,m_{n}(k)-1\).
As a convention, if \(i<0\), we set \(B_{k,i}^{(\underline{\zeta}\times\underline{\xi},\ell)}=0\).
The following estimate on \(B_{k,i}^{(\underline{\zeta}\times\underline{\xi},\ell)}\) can be harvested from the inductive hypothesis and Proposition 5.4.
**Lemma 6.7**.: _Assume that \(p\geq 11\) and \(2\leq a\leq p-5\). Keep the notation as above and assume that Theorem 5.2 holds for minors of size strictly smaller than \(n\). Assume that \(\ell\in\bigl\{1,2,\ldots,\min\{n-d_{k}^{\mathrm{ur}},\,r_{\underline{\zeta}\times\underline{\xi}}(k)+s_{\underline{\xi}}(k)\}\bigr\}\) is taken so that \(m_{n-\ell}(k)\leq m_{n}(k)-1\) (the latter condition is equivalent to requiring \(\ell\geq 2n-d_{k}^{\mathrm{Iw}}+1\) when \(n\geq\tfrac{1}{2}d_{k}^{\mathrm{Iw}}\)). We have, for every \(i\in\{m_{n-\ell}(k),\ldots,m_{n}(k)-1\}\),_
\[v_{p}\bigl(B_{k,i}^{(\underline{\zeta}\times\underline{\xi},\ell)}\bigr)\geq\Delta_{k,\frac{1}{2}d_{k}^{\mathrm{new}}-m_{n-\ell}(k)}-\tfrac{k-2}{2}\bigl(\tfrac{1}{2}d_{k}^{\mathrm{Iw}}-n\bigr)-\tfrac{1}{2}\bigl((\tfrac{1}{2}d_{k}^{\mathrm{new}}-m_{n-\ell}(k))^{2}-(\tfrac{1}{2}d_{k}^{\mathrm{new}}-i)^{2}\bigr) \tag{6.7.1}\]
\[\geq\Delta_{k,\frac{1}{2}d_{k}^{\mathrm{new}}-i}-\tfrac{k-2}{2}\bigl(\tfrac{1}{2}d_{k}^{\mathrm{Iw}}-n\bigr). \tag{6.7.2}\]
Later, we will refer to (6.7.1) as the strong estimate and to (6.7.2) as the weak estimate.
Proof.: Here the second inequality follows from Proposition 2.19. We now prove the first one. Recall that \(\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi}) \bigr{)}_{\ell}\) is a \(\mathbb{Z}\)-linear combination of
\[\det\bigl(\mathrm{L}_{k}(I\times J)\bigr)\cdot\det\bigl(\mathrm{U}^{\dagger}((\underline{\zeta}-I)\times(\underline{\xi}-J))\bigr)\]
over subsets \(I\subseteq\underline{\zeta}\) and \(J\subseteq J_{\underline{\zeta}\times\underline{\xi}}\) of cardinality \(\ell\). Consider the following Taylor expansion in \(E[\![w-w_{k}]\!]\):
\[\frac{\det\bigl(\mathrm{L}_{k}(I\times J)\bigr)\cdot\det\bigl(\mathrm{U}^{\dagger}((\underline{\zeta}-I)\times(\underline{\xi}-J))\bigr)}{g_{n-\ell,\hat{k}}(w)}=\sum_{i\geq 0}A_{k,i}^{(\underline{\zeta}\times\underline{\xi},I,J)}(w-w_{k})^{i}; \tag{6.7.3}\]
comparing to (6.6.3), we did not multiply the left hand side with \(p^{\frac{1}{2}(\deg(\underline{\xi})-\deg(\underline{\zeta}))}\cdot g_{n-\ell,\hat{k}}(w_{k})\). As \(v_{p}(g_{n-\ell,\hat{k}}(w_{k}))=\Delta^{\prime}_{k,\frac{1}{2}d_{k}^{\operatorname{new}}-m_{n-\ell}(k)}-\frac{k-2}{2}\big(\frac{1}{2}d_{k}^{\operatorname{new}}-m_{n-\ell}(k)\big)\), to prove condition (6.7.1), it is enough to show that
\[v_{p}\big{(}A_{k,i}^{(\underline{\zeta}\times\underline{\xi},I, J)}\big{)}\geq\Delta_{k,\frac{1}{2}d_{k}^{\operatorname{new}}-m_{n-\ell}(k)}- \Delta^{\prime}_{k,\frac{1}{2}d_{k}^{\operatorname{new}}-m_{n-\ell}(k)}+ \tfrac{k-2}{2}\cdot\ell\\ -\tfrac{1}{2}\big{(}(\tfrac{1}{2}d_{k}^{\operatorname{new}}-m_{n -\ell}(k))^{2}-(\tfrac{1}{2}d_{k}^{\operatorname{new}}-i)^{2}\big{)}+\tfrac{1 }{2}(\deg(\underline{\zeta})-\deg(\underline{\xi})). \tag{6.7.4}\]
(Here we secretly used the condition that \(m_{n-\ell}(k)<m_{n}(k)-1\).)
Using the notation from Proposition 5.4(2), write
\[\det\bigl(\mathrm{U}^{\dagger}((\underline{\zeta}-I)\times(\underline{\xi}-J))\bigr)\big/g_{n-\ell,\hat{k}}(w)=\sum_{i\geq 0}a_{k,i}^{((\underline{\zeta}-I)\times(\underline{\xi}-J))}(w-w_{k})^{i};\]
then Proposition 5.4(2) and the inductive assumption of the lemma show that
\[v_{p}\big(a_{k,i}^{((\underline{\zeta}-I)\times(\underline{\xi}-J))}\big)\geq\Delta_{k,\frac{1}{2}d_{k}^{\operatorname{new}}-m_{n-\ell}(k)}-\Delta^{\prime}_{k,\frac{1}{2}d_{k}^{\operatorname{new}}-m_{n-\ell}(k)}\\ -\tfrac{1}{2}\big(-\deg(\underline{\zeta}-I)+\deg(\underline{\xi}-J)+(\tfrac{1}{2}d_{k}^{\operatorname{new}}-m_{n-\ell}(k))^{2}-(\tfrac{1}{2}d_{k}^{\operatorname{new}}-i)^{2}\big).\]
Therefore, to prove (6.7.4) and hence the lemma, it is enough to show that
\[v_{p}\big{(}\det(\mathrm{L}_{k}(I\times J))\big{)}\geq\tfrac{k-2 }{2}\cdot\ell+\tfrac{1}{2}\big{(}\deg(\underline{\zeta})-\deg(\underline{\xi })\big{)}-\tfrac{1}{2}\big{(}\deg(\underline{\zeta}-I)-\deg(\underline{\xi}- J)\big{)}\\ =\tfrac{k-2}{2}\cdot\ell+\tfrac{1}{2}(\deg(I)-\deg(J)). \tag{6.7.5}\]
Write \(J=J^{\prime}\sqcup J^{\prime\prime}\) with \(J^{\prime}=J\cap d_{k}^{\mathrm{Iw}}\). For each \(\xi\in J^{\prime}\), write \(\xi^{\operatorname{op}}:=d_{k}^{\mathrm{Iw}}+1-\xi\in\underline{\zeta}\) (since \(\xi\in J_{\underline{\zeta}\times\underline{\xi}}\)). Define \(I^{\prime}:=\{\xi^{\operatorname{op}}\,|\,\xi\in J^{\prime}\}\) and \(I^{\prime\prime}=I\backslash I^{\prime}\). Then the \(\xi\)th column of \(\mathrm{L}_{k}(I\times J)\) has only one nonzero entry at \((\xi^{\operatorname{op}},\xi)\), which is \(p^{\deg\mathbf{e}_{\xi^{\operatorname{op}}}}\) as introduced in Notation 6.1. So
\[\det(\mathrm{L}_{k}(I\times J))=p^{\sum_{\xi\in J^{\prime}}\deg\mathbf{e}_{ \xi^{\operatorname{op}}}}\cdot\det(\mathrm{L}_{k}(I^{\prime\prime}\times J^{ \prime\prime})).\]
Note that for each \(\xi\in J^{\prime}\), there is a tautological equality \(\deg\mathbf{e}_{\xi^{\operatorname{op}}}=\frac{k-2}{2}+\frac{1}{2}\big{(}\deg \mathbf{e}_{\xi^{\operatorname{op}}}-\deg\mathbf{e}_{\xi}\big{)}\). So (6.7.5) is equivalent to the following inequality
\[v_{p}\big{(}\det(\mathrm{L}_{k}(I^{\prime\prime}\times J^{\prime\prime}))\big{)} \geq\tfrac{k-2}{2}\cdot\#J^{\prime\prime}+\tfrac{1}{2}(\deg(I^{\prime\prime})- \deg(J^{\prime\prime})). \tag{6.7.6}\]
But this is clear as every element \(\xi\in J^{\prime\prime}\) satisfies \(\deg\mathbf{e}_{\xi}>k-2\); so \(\frac{k-2}{2}\#J^{\prime\prime}\leq\frac{1}{2}\deg(J^{\prime\prime})\), and every entry of \(\mathrm{U}^{\dagger}(I^{\prime\prime}\times J^{\prime\prime})\) in the \(\zeta\)'s row belongs to \(p^{\frac{1}{2}\deg(\mathbf{e}_{\zeta})}\mathcal{O}\langle w/p\rangle\) by Proposition 3.2; so its evaluation at \(w=w_{k}\) belongs to \(p^{\frac{1}{2}\deg(\mathbf{e}_{\zeta})}\mathcal{O}\). (6.7.6) clearly follows from this weak Hodge bound estimate. The lemma is proved.
### Proof of Theorem 5.2
We are now ready to start the proof of Theorem 5.2, by induction on \(n\), that is, we assume that Theorem 5.2 has been proved for all minors of size strictly smaller than \(n\), and we hope to prove Theorem 5.2 for all \(n\times n\) minors. The case of \(n=1\) has been handled in §5.3.
We quickly recall the setup: we have fixed a relevant character \(\varepsilon\) (and suppressed it from all notations), an integer \(n\geq 2\), two finite subsets \(\underline{\zeta}\) and \(\underline{\xi}\) of cardinality \(n\), and an integer \(k=k_{\varepsilon}+(p-1)k_{\bullet}\) such that \(m_{n}(k)\neq 0\). The elements \(B_{k,i}^{(\underline{\zeta}\times\underline{\xi})}\) for \(i=0,1,\dots,m_{n}(k)-1\) are defined in Notation 6.6 by the Lagrange interpolation of \(\det\bigl(\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr)\) along \(g_{n}(w)\) (after an appropriate normalization), or equivalently are determined by the Taylor expansion of \(\det\bigl(\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr)\) as a power series in \(E[\![w-w_{k}]\!]\). We will prove inductively the following.
**Statement 6.9**.: Keep the notation as above. For every \(i\leq m_{n}(k)-1\) and every \(\ell\in\big\{0,1,\ldots,\min\{n-d_{k}^{\rm ur},\,r_{\underline{\zeta}\times\underline{\xi}}(k)+s_{\underline{\xi}}(k)\}\big\}\) such that \(m_{n-\ell}(k)\leq m_{n}(k)\) (which, when \(n\geq\frac{1}{2}d_{k}^{\rm Iw}\), is equivalent to requiring that \(\ell\geq 2n-d_{k}^{\rm Iw}\) or \(\ell=0\)), we have
\[v_{p}\big(B_{k,i}^{(\underline{\zeta}\times\underline{\xi},\ell)}\big)\geq\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-i}-\tfrac{k-2}{2}\big(\tfrac{1}{2}d_{k}^{\rm Iw}-n\big). \tag{6.9.1}\]
Then condition (6.6.2) or equivalently Theorem 5.2 is the special case of Statement 6.9 when \(\ell=0\).
_For the rest of this section, as \(k\) is already fixed, we will write \(r_{\underline{\zeta},\underline{\xi}}\), \(s_{\underline{\xi}}\), and \(m_{\underline{\zeta},\underline{\xi}}\) for \(r_{\underline{\zeta},\underline{\xi}}(k)\), \(s_{\underline{\xi}}(k)\), and \(m_{\underline{\zeta},\underline{\xi}}(k)\), respectively._
### First stab at Statement 6.9
Definition-Proposition 3.21 says that \(\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi}) \bigr{)}\) and more generally every \(\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi}) \bigr{)}_{\ell}\) is divisible by \((w-w_{k})^{\max\{0,n-d_{k}^{\rm ur}-r_{\underline{\zeta}\times\underline{\xi} }-s_{\underline{\xi}}\}}\) in \(E[\![w-w_{k}]\!]\). So if \(i<n-d_{k}^{\rm ur}-r_{\underline{\zeta}\times\underline{\xi}}-s_{\underline{ \xi}}\), \(B_{k,i}^{(\underline{\zeta}\times\underline{\xi},\ell)}=0\) and the corresponding condition (6.9.1) automatically holds.
Now consider the next easiest case when \(i=n-d_{k}^{\rm ur}-r_{\underline{\zeta}\times\underline{\xi}}-s_{\underline{ \xi}}=m_{\underline{\zeta}\times\underline{\xi}}\). We may assume that \(i\geq 0\), otherwise there is nothing to prove. In this case, \(m_{n-r_{\underline{\zeta}\times\underline{\xi}}-s_{\underline{\xi}}}(k)=m_{ \underline{\zeta}\times\underline{\xi}}(k)=i\). So in the particular case when \(\ell=r_{\underline{\zeta}\times\underline{\xi}}+s_{\underline{\xi}}\), the weak estimate (6.7.2) exactly gives (6.9.1). Now we assume that \(\ell\in\{0,\ldots,r_{\underline{\zeta}\times\underline{\xi}}+s_{\underline{ \xi}}-1\}\). Applying Lemma 6.4 to the case when \(j_{0}=r_{\underline{\zeta}\times\underline{\xi}}+s_{\underline{\xi}}-1\), we deduce that
\[\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi}) \bigr{)}_{\ell}\equiv\binom{r_{\underline{\zeta}\times\underline{\xi}}+s_{ \underline{\xi}}}{\ell}\cdot\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta }\times\underline{\xi})\bigr{)}_{r_{\underline{\zeta}\times\underline{\xi}}+s _{\underline{\xi}}}\mod(w-w_{k})^{i+1}.\]
Comparing the coefficients of \((w-w_{k})^{i}\), we immediately deduce that
\[v_{p}\big(B_{k,i}^{(\underline{\zeta}\times\underline{\xi},\ell)}\big)=v_{p}\Big(\binom{r_{\underline{\zeta}\times\underline{\xi}}+s_{\underline{\xi}}}{\ell}B_{k,i}^{(\underline{\zeta}\times\underline{\xi},r_{\underline{\zeta}\times\underline{\xi}}+s_{\underline{\xi}})}\Big)\overset{(6.7.2)}{\geq}\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-i}-\tfrac{k-2}{2}\big(\tfrac{1}{2}d_{k}^{\rm Iw}-n\big).\]
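(In more detail: by Definition-Proposition 3.21, as recalled above, both \(\det\bigl(\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr)_{\ell}\) and \(\det\bigl(\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr)_{r_{\underline{\zeta}\times\underline{\xi}}+s_{\underline{\xi}}}\) are divisible by \((w-w_{k})^{i}\), while the normalizing factors \(g_{n-\ell,\hat{k}}(w)/g_{n-\ell,\hat{k}}(w_{k})\) appearing in (6.6.3) are power series with constant term \(1\). Hence the congruence above identifies the coefficients of \((w-w_{k})^{i}\) and gives
\[B_{k,i}^{(\underline{\zeta}\times\underline{\xi},\ell)}=\binom{r_{\underline{\zeta}\times\underline{\xi}}+s_{\underline{\xi}}}{\ell}\,B_{k,i}^{(\underline{\zeta}\times\underline{\xi},r_{\underline{\zeta}\times\underline{\xi}}+s_{\underline{\xi}})},\]
from which the displayed equality of valuations follows.)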
This proves Statement 6.9 in the corresponding situation.
Since the situation in general is more complicated, we consider another case when \(i=m_{\underline{\zeta}\times\underline{\xi}}+1=n-d_{k}^{\rm ur}-r_{\underline{ \zeta}\times\underline{\xi}}-s_{\underline{\xi}}+1\), to illustrate the new phenomenon by spelling out all the terms involved. First of all, in the special cases \(\ell=r_{\underline{\zeta}\times\underline{\xi}}+s_{\underline{\xi}}\) and \(\ell=r_{\underline{\zeta}\times\underline{\xi}}+s_{\underline{\xi}}-1\), Statement 6.9 just restates the weak estimate (6.7.2). So we assume below that \(\ell\in\{0,\ldots,r_{\underline{\zeta}\times\underline{\xi}}+s_{\underline{ \xi}}-2\}\). We apply Lemma 6.4 to the case when \(j_{0}=r_{\underline{\zeta}\times\underline{\xi}}+s_{\underline{\xi}}-2\) to deduce that, modulo \((w-w_{k})^{i+1}\)
\[\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi}) \bigr{)}_{\ell}\equiv\binom{j_{0}+1}{\ell}\det\bigl{(}\mathrm{U}^{\dagger}( \underline{\zeta}\times\underline{\xi})\bigr{)}_{j_{0}+1}-(j_{0}-\ell+1) \binom{j_{0}+2}{\ell}\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times \underline{\xi})\bigr{)}_{j_{0}+2}.\]
Dividing both sides by \(p^{\frac{1}{2}(\deg(\underline{\xi})-\deg(\underline{\zeta}))}\cdot g_{n-\ell,\hat{k}}(w)/g_{n-\ell,\hat{k}}(w_{k})\) and further by \((w-w_{k})^{i-1}\) (to kill the auxiliary powers), we arrive at, modulo \((w-w_{k})^{2}\),
\[B_{k,i-1}^{(\underline{\zeta}\times\underline{\xi},\ell)}+B_{k,i}^{(\underline{\zeta}\times\underline{\xi},\ell)}(w-w_{k})\] \[\equiv\ \binom{j_{0}+1}{\ell}\frac{g_{n-j_{0}-1,\hat{k}}(w)/g_{n-j_{0}-1,\hat{k}}(w_{k})}{g_{n-\ell,\hat{k}}(w)/g_{n-\ell,\hat{k}}(w_{k})}\Big(B_{k,i-1}^{(\underline{\zeta}\times\underline{\xi},j_{0}+1)}+B_{k,i}^{(\underline{\zeta}\times\underline{\xi},j_{0}+1)}(w-w_{k})\Big)\] \[-(j_{0}-\ell+1)\binom{j_{0}+2}{\ell}\frac{g_{n-j_{0}-2,\hat{k}}(w)/g_{n-j_{0}-2,\hat{k}}(w_{k})}{g_{n-\ell,\hat{k}}(w)/g_{n-\ell,\hat{k}}(w_{k})}\Big(B_{k,i-1}^{(\underline{\zeta}\times\underline{\xi},j_{0}+2)}+B_{k,i}^{(\underline{\zeta}\times\underline{\xi},j_{0}+2)}(w-w_{k})\Big). \tag{6.10.1}\]
Suggested by this, we consider the following.
**Notation 6.11**.: For every \(j\geq 0\), we write the following power series expansion:
\[\eta_{j}(w):=\frac{g_{n-j,\hat{k}}(w)/g_{n-j,\hat{k}}(w_{k})}{g_{n,\hat{k}}(w)/g_ {n,\hat{k}}(w_{k})}=1+\eta_{j,1}(w-w_{k})+\eta_{j,2}(w-w_{k})^{2}+\cdots\in E \llbracket w-w_{k}\rrbracket. \tag{6.11.1}\]
Comparing the \((w-w_{k})\)-coefficients in (6.10.1), we deduce
\[B_{k,i}^{(\zeta\times\underline{\xi},\ell)} =\binom{j_{0}+1}{\ell}B_{k,i}^{(\zeta\times\underline{\xi},j_{0 }+1)}-(j_{0}-\ell+1)\binom{j_{0}+2}{\ell}B_{k,i}^{(\zeta\times\underline{\xi}, j_{0}+2)}\] \[+ \binom{j_{0}+1}{\ell}(\eta_{j_{0}+1,1}-\eta_{\ell,1})B_{k,i-1}^ {(\zeta\times\underline{\xi},j_{0}+1)}-(j_{0}-\ell+1)\binom{j_{0}+2}{\ell}( \eta_{j_{0}+2,1}-\eta_{\ell,1})B_{k,i-1}^{(\zeta\times\underline{\xi},j_{0}+2 )}.\]
By the weak estimate (6.7.2), the first two terms have \(p\)-adic valuation greater than or equal to \(\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-i}-\frac{k-2}{2}\big(\frac{1}{2}d_{k}^{\rm Iw}-n\big)\). But we need to show that the sum of the latter two terms does not interfere here. Our strategy is to show that _the power series \(\eta_{j}(w)\) is "approximately" the same as \(\eta_{1}(w)^{j}\)_, so that each \(\eta_{j,1}\) is "approximately" equal to \(j\cdot\eta_{1,1}\), and we are thus reduced to proving
\[\binom{j_{0}+1}{\ell}\cdot(j_{0}-\ell+1)\cdot B_{k,i-1}^{(\zeta\times\underline {\xi},j_{0}+1)}=(j_{0}-\ell+2)(j_{0}-\ell+1)\binom{j_{0}+2}{\ell}\cdot B_{k,i-1 }^{(\zeta\times\underline{\xi},j_{0}+2)}, \tag{6.11.2}\]
which follows from what we just proved in the case of \(i=m_{\underline{\zeta}\times\underline{\xi}}(k)\), namely
\[B_{k,m_{\underline{\zeta}\times\underline{\xi}}(k)}^{(\underline{\zeta}\times\underline{\xi},j_{0}+1)}=(j_{0}+2)\cdot B_{k,m_{\underline{\zeta}\times\underline{\xi}}(k)}^{(\underline{\zeta}\times\underline{\xi},j_{0}+2)}.\]
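(Explicitly, substituting this equality with \(i-1=m_{\underline{\zeta}\times\underline{\xi}}(k)\) into (6.11.2), both sides become multiples of \(B_{k,i-1}^{(\underline{\zeta}\times\underline{\xi},j_{0}+2)}\), and (6.11.2) reduces to the elementary binomial identity
\[(j_{0}+2)\binom{j_{0}+1}{\ell}=(j_{0}+2-\ell)\binom{j_{0}+2}{\ell},\]
which is immediate from the factorial expressions of both sides.)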
**Remark 6.12**.: Note that it is important to cancel major terms in different \(\eta\)-functions, especially when \(i\) is almost as large as \(\frac{1}{2}d_{k}^{\rm new}\); in this case, the difference \(\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-(i-1)}-\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-i}\approx\frac{p-1}{2}(\frac{1}{2}d_{k}^{\rm new}-i)\), yet the term \(\eta_{\ell,1}\) roughly has \(p\)-adic valuation equal to the maximal \(v_{p}(w_{k^{\prime}}-w_{k})\), for \(k^{\prime}\) running over the zeros of \(g_{n}(w)\), which is about \(\ln k/\ln p\). As we will show below, the terms that do not get canceled through (6.11.2) have relatively large \(p\)-adic valuation, controlled by the difference \(\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-(i-1)}-\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-i}\).
Implementing this strategy in the special case is not particularly easier than in the general case, so we now proceed directly to prove Statement 6.9 (in the general case).
### Proof of Statement 6.9
The proof is by induction on \(i\), starting with the smallest case \(i=m_{\underline{\zeta}\times\underline{\xi}}=n-d_{k}^{\rm ur}-r_{\underline{\zeta}\times\underline{\xi}}-s_{\underline{\xi}}\) already treated in §6.10 (and when \(i<m_{\underline{\zeta}\times\underline{\xi}}\), Statement 6.9 also holds automatically). Now, let \(i_{0}\in\{m_{\underline{\zeta}\times\underline{\xi}}+1,\ldots,m_{n}(k)-1\}\), and suppose that Statement 6.9 has been proved for all nonnegative integers \(i<i_{0}\). We may clearly assume that \(i_{0}\geq 0\), as otherwise there is nothing to prove. We set
\[j_{0}:=r_{\underline{\zeta}\times\underline{\xi}}+s_{\underline{\xi}}-(i_{0}- m_{\underline{\zeta}\times\underline{\xi}}+1)=n-d_{k}^{\rm ur}-i_{0}-1.\]
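Here the second equality follows from the relation \(m_{\underline{\zeta}\times\underline{\xi}}=n-d_{k}^{\rm ur}-r_{\underline{\zeta}\times\underline{\xi}}-s_{\underline{\xi}}\) recalled above: indeed,
\[r_{\underline{\zeta}\times\underline{\xi}}+s_{\underline{\xi}}-(i_{0}-m_{\underline{\zeta}\times\underline{\xi}}+1)=(n-d_{k}^{\rm ur}-m_{\underline{\zeta}\times\underline{\xi}})-i_{0}+m_{\underline{\zeta}\times\underline{\xi}}-1=n-d_{k}^{\rm ur}-i_{0}-1.\]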
Then when \(\ell>j_{0}\), one can check that \(i_{0}\geq m_{n-\ell}(k)\) and thus Statement 6.9 just repeats (6.7.1). We henceforth assume \(\ell\in\{0,\ldots,j_{0}\}\). First, we apply Lemma 6.4 to deduce that
\[\det\bigl(\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr)_{\ell}\equiv\sum_{j>j_{0}}(-1)^{j-j_{0}-1}\binom{j-\ell-1}{j_{0}-\ell}\binom{j}{\ell}\cdot\det\bigl(\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi})\bigr)_{j}\quad\mathrm{mod}\;(w-w_{k})^{i_{0}+1}. \tag{6.13.1}\]
We point out that the condition \(j>j_{0}=n-d_{k}^{\rm ur}-i_{0}-1\) implies that
\[m_{n-j}(k)=n-j-d_{k}^{\rm ur}=i_{0}+1-(j-j_{0})<m_{n}(k). \tag{6.13.2}\]
Instead of using the numbers \(B_{k,i}^{(\underline{\zeta}\times\underline{\xi},j)}\) to express the Taylor expansion of the above in \(E[\![w-w_{k}]\!]\), we define the following:
\[\Big{(}\sum_{i\geq 0}B_{k,i}^{(\zeta\times\xi,j)}(w-w_{k})^{i}\Big{)}\cdot \frac{\eta_{j}(w)}{\eta_{1}(w)^{j}}=\sum_{i\geq 0}C_{k,i}^{(\zeta\times\xi,j)}(w-w _{k})^{i}\quad\in\ E[\![w-w_{k}]\!]. \tag{6.13.3}\]
Or equivalently by (6.6.3), in \(E[\![w-w_{k}]\!]\), we have an equality
\[p^{\frac{1}{2}(\deg(\underline{\xi})-\deg(\underline{\zeta}))}\cdot\frac{ \det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi}) \bigr{)}_{j}}{g_{n,\hat{k}}(w)/g_{n,\hat{k}}(w_{k})}\cdot\eta_{1}(w)^{-j}=\sum _{i\geq 0}C_{k,i}^{(\zeta\times\xi,j)}(w-w_{k})^{i}. \tag{6.13.4}\]
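(The equivalence of (6.13.3) and (6.13.4) is immediate from Notation 6.11 together with (6.6.3) applied with \(\ell=j\): since
\[\eta_{j}(w)=\frac{g_{n-j,\hat{k}}(w)/g_{n-j,\hat{k}}(w_{k})}{g_{n,\hat{k}}(w)/g_{n,\hat{k}}(w_{k})},\]
multiplying the left hand side of (6.6.3) by \(\eta_{j}(w)/\eta_{1}(w)^{j}\) replaces the denominator \(g_{n-j,\hat{k}}(w)/g_{n-j,\hat{k}}(w_{k})\) by \(g_{n,\hat{k}}(w)/g_{n,\hat{k}}(w_{k})\) and introduces the factor \(\eta_{1}(w)^{-j}\).)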
In fact, changing \(B_{k,i}^{(\underline{\zeta}\times\underline{\xi},j)}\) to \(C_{k,i}^{(\underline{\zeta}\times\underline{\xi},j)}\) is "harmless" for the purpose of our proof.
**Proposition 6.14**.: _Fix \(j\in\{0,\ldots,\min\{n-d_{k}^{\rm ur},r_{\underline{\zeta}\times\underline{\xi}}+s_{\underline{\xi}}\}\}\) such that \(m_{n-j}(k)\leq m_{n}(k)\) (which, when \(n\geq\frac{1}{2}d_{k}^{\rm Iw}\), is equivalent to requiring that \(j\geq 2n-d_{k}^{\rm Iw}\) or \(j=0\)), and assume that Statement 6.9 holds true for all nonnegative integers \(i<i_{0}\); then_
\[v_{p}\bigl(B_{k,i_{0}}^{(\underline{\zeta}\times\underline{\xi},j)}\bigr)\geq\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-i_{0}}-\tfrac{k-2}{2}\bigl(\tfrac{1}{2}d_{k}^{\rm Iw}-n\bigr)\]
\[\Longleftrightarrow\quad v_{p}\bigl(C_{k,i_{0}}^{(\underline{\zeta}\times\underline{\xi},j)}\bigr)\geq\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-i_{0}}-\tfrac{k-2}{2}\bigl(\tfrac{1}{2}d_{k}^{\rm Iw}-n\bigr).\]
We temporarily assume this technical result, whose proof will be given later in §6.18.
**Remark 6.15**.: For the rest of the inductive proof of Statement 6.9, we will only need the analogue of the seemingly weaker version of Lemma 6.7, namely \(v_{p}\bigl(C_{k,i}^{(\underline{\zeta}\times\underline{\xi},\ell)}\bigr)\geq\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-i}^{\prime}-\tfrac{k-2}{2}\bigl(\tfrac{1}{2}d_{k}^{\rm Iw}-n\bigr)\) when \(i\geq m_{n-j}(k)\). The stronger inequality in Lemma 6.7 is only used to enable transferring estimates between \(B_{k,i}^{(\underline{\zeta}\times\underline{\xi},\ell)}\)'s and \(C_{k,i}^{(\underline{\zeta}\times\underline{\xi},\ell)}\)'s.
**Lemma 6.16**.: _For every nonnegative integer \(\ell^{\prime}\leq j^{\prime}_{0}<r_{\underline{\zeta}\times\underline{\xi}}+s _{\underline{\xi}}\), we have_
\[C_{k,n-d_{k}^{\rm ur}-j^{\prime}_{0}-1}^{(\zeta\times\xi,\ell^{\prime})}=\sum_ {j^{\prime}=j^{\prime}_{0}+1}^{r_{\underline{\zeta}\times\xi}+s_{\underline{ \xi}}}(-1)^{j^{\prime}-j^{\prime}_{0}-1}\binom{j^{\prime}-\ell^{\prime}-1}{j^{ \prime}_{0}-\ell^{\prime}}\binom{j^{\prime}}{\ell^{\prime}}C_{k,n-d_{k}^{\rm ur }-j^{\prime}_{0}-1}^{(\zeta\times\xi,j^{\prime})} \tag{6.16.1}\]
Proof.: Applying Remark 6.5 to the case \(\eta(w):=\eta_{1}(w)\), we see that (6.5.2) implies that for all nonnegative integers \(\ell^{\prime}\leq j^{\prime}_{0}<r_{\underline{\zeta}\times\underline{\xi}}+s_{\underline{\xi}}\), modulo \((w-w_{k})^{\max\{0,n-d_{k}^{\rm ur}-j^{\prime}_{0}\}}\) in \(E[\![w-w_{k}]\!]\),
\[\det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi}) \bigr{)}_{\ell^{\prime}}\cdot\eta_{1}(w)^{-\ell^{\prime}}\equiv\sum_{j^{\prime }>j^{\prime}_{0}}(-1)^{j^{\prime}-j^{\prime}_{0}-1}\binom{j^{\prime}-\ell^{ \prime}-1}{j^{\prime}_{0}-\ell^{\prime}}\binom{j^{\prime}}{\ell^{\prime}}\cdot \det\bigl{(}\mathrm{U}^{\dagger}(\underline{\zeta}\times\underline{\xi}) \bigr{)}_{j^{\prime}}\cdot\eta_{1}(w)^{-j^{\prime}}.\]
(6.16.1) follows from dividing the above congruence by \(p^{\frac{1}{2}(\deg(\underline{\zeta})-\deg(\underline{\xi}))}\cdot g_{n,\hat{ k}}(w)/g_{n,\hat{k}}(w_{k})\) and then taking the coefficient of \((w-w_{k})^{n-d_{k}^{\rm ur}-j^{\prime}_{0}-1}\).
### Proof of Statement 6.9 assuming Proposition 6.14
Continuing with the inductive proof of Statement 6.9 initiated in §6.13, recall that the inductive hypothesis says that Statement 6.9 holds for all \(i<i_{0}\) for some \(i_{0}\in\{m_{\underline{\zeta}\times\underline{\xi}}+1,\ldots,m_{n}(k)-1\}\). By the inductive hypothesis, the assumption of Proposition 6.14 holds, and thus \(v_{p}\big(C_{k,i_{0}}^{(\underline{\zeta}\times\underline{\xi},j)}\big)\geq\Delta^{\prime}_{k,\frac{1}{2}d_{k}^{\rm new}-i_{0}}-\frac{k-2}{2}\big(\frac{1}{2}d_{k}^{\rm Iw}-n\big)\) for every \(j>j_{0}\) by Lemma 6.7 and Proposition 6.14, where \(j_{0}=r_{\underline{\zeta}\times\underline{\xi}}+s_{\underline{\xi}}-(i_{0}-m_{\underline{\zeta}\times\underline{\xi}}+1)\). (In particular, if \(n\geq\frac{1}{2}d_{k}^{\rm Iw}\), then \(j_{0}\geq 2n-d_{k}^{\rm Iw}\).)
Then using the formula (6.16.1) in the case when \(j_{0}^{\prime}=j_{0}\), \(\ell^{\prime}=\ell\) (and thus \(n-d_{k}^{\rm ur}-j_{0}^{\prime}-1=i_{0}\)), we deduce that \(C_{k,i_{0}}^{(\underline{\zeta}\times\underline{\xi},\ell)}\) is a \(\mathbb{Z}\)-linear combination of the \(C_{k,i_{0}}^{(\underline{\zeta}\times\underline{\xi},j)}\)'s with \(j>j_{0}\). Thus, \(v_{p}\big(C_{k,i_{0}}^{(\underline{\zeta}\times\underline{\xi},\ell)}\big)\geq\Delta^{\prime}_{k,\frac{1}{2}d_{k}^{\rm new}-i_{0}}-\frac{k-2}{2}\big(\frac{1}{2}d_{k}^{\rm Iw}-n\big)\). By Proposition 6.14 again, we deduce that \(B_{k,i_{0}}^{(\underline{\zeta}\times\underline{\xi},\ell)}\) satisfies the same estimate; this completes the inductive proof of Statement 6.9, and hence concludes the proof of the local ghost Theorem 2.7 (assuming Proposition 6.14).
### Proof of Proposition 6.14
We now come back to prove this last missing piece for the proof of Statement 6.9 and the local ghost Theorem 2.7. We **claim** that if we expand
\[\frac{\eta_{j}(w)}{\eta_{1}(w)^{j}}=1+\eta_{(j),1}(w-w_{k})+\eta_{(j),2}(w-w_{ k})^{2}+\cdots\in E\llbracket w-w_{k}\rrbracket,\]
then for every \(t\in\{1,\ldots,m_{n}(k)-1\}\), setting \(q_{t}:=\min\{m_{n}(k)-t,m_{n-j}(k)\}\), we have
\[v_{p}(\eta_{(j),t})>\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-(q_{t}+t)}-\Delta_{k, \frac{1}{2}d_{k}^{\rm new}-q_{t}}+\frac{1}{2}\big{(}(\frac{1}{2}d_{k}^{\rm new }-q_{t})^{2}-(\frac{1}{2}d_{k}^{\rm new}-(q_{t}+t))^{2}\big{)}. \tag{6.18.1}\]
We first prove the statement of the proposition assuming this claim: from the definition of \(C_{k,i}^{(\underline{\zeta}\times\underline{\xi},j)}\) in (6.13.3), we see that
\[C_{k,i_{0}}^{(\zeta\times\underline{\xi},j)}=B_{k,i_{0}}^{(\zeta\times \underline{\xi},j)}+\sum_{i=0}^{i_{0}-1}B_{k,i}^{(\zeta\times\underline{\xi}, j)}\cdot\eta_{(j),i_{0}-i}.\]
When \(i<m_{n-j}(k)\) (and \(i<i_{0}\)), set \(t:=i_{0}-i\) so that \(q_{t}=\min\{m_{n}(k)+i-i_{0},m_{n-j}(k)\}\). In particular, \(q_{t}+t>i_{0}\) as \(i<m_{n-j}(k)\) and \(i_{0}<m_{n}(k)\). In either case, we have
\[v_{p}\big{(}B_{k,i}^{(\zeta\times\underline{\xi},j)}\eta_{(j),i_ {0}-i}\big{)} \geq\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-i}-\frac{k-2}{2}\big{(} \frac{1}{2}d_{k}^{\rm lw}-n\big{)}+\big{(}\Delta_{k,\frac{1}{2}d_{k}^{\rm new} -(q_{t}+t)}-\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-q_{t}}\big{)}\] \[\geq\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-i}-\frac{k-2}{2}\big{(} \frac{1}{2}d_{k}^{\rm lw}-n\big{)}+\big{(}\Delta_{k,\frac{1}{2}d_{k}^{\rm new }-i_{0}}-\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-i}\big{)}\] \[=\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-i_{0}}-\frac{k-2}{2}\big{(} \frac{1}{2}d_{k}^{\rm lw}-n\big{)}.\]
Here, for the first inequality, we used Statement 6.9 to estimate \(v_{p}\big(B_{k,i}^{(\underline{\zeta}\times\underline{\xi},j)}\big)\) and used (6.18.1) (but forgetting the term \(\frac{1}{2}\big((\frac{1}{2}d_{k}^{\rm new}-q_{t})^{2}-(\frac{1}{2}d_{k}^{\rm new}-(q_{t}+t))^{2}\big)\)) to estimate \(\eta_{(j),i_{0}-i}\). The second inequality follows from the convexity of \(\underline{\Delta}_{k}\) and the fact that \(q_{t}+t>i_{0}\).
When \(i\geq m_{n-j}(k)\) (and \(i<i_{0}<m_{n}(k)\)), we need the strong estimate in (6.7.1). Set \(t=i_{0}-i\) and in this case,
\[q_{t}+t=\min\{m_{n}(k),m_{n-j}(k)+i_{0}-i\}=m_{n-j}(k)+i_{0}-i\leq i_{0}.\]
In this case, we have \(m_{n-j}(k)=q_{t}\leq q_{t}+t\leq i_{0}<m_{n}(k)\).
By the stronger estimate (6.7.1) and (6.18.1), we have
\[\begin{split}& v_{p}\big{(}B_{k,i}^{(\zeta\times\underline{\xi},j)} \eta_{(j),i_{0}-i}\big{)}\\ \geq&\ \Big{(}\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-m_{n-j} (k)}-\tfrac{1}{2}\big{(}(\tfrac{1}{2}d_{k}^{\rm new}-m_{n-j}(k))^{2}-(\tfrac{1} {2}d_{k}^{\rm new}-i)^{2}\big{)}-\tfrac{k-2}{2}\big{(}\tfrac{1}{2}d_{k}^{\rm lw }-n\big{)}\Big{)}\\ &\ \ +\Big{(}\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-(q_{t}+t)}-\Delta_{k, \frac{1}{2}d_{k}^{\rm new}-m_{n-j}(k)}+\tfrac{1}{2}\big{(}(\tfrac{1}{2}d_{k}^{ \rm new}-m_{n-j}(k))^{2}-(\tfrac{1}{2}d_{k}^{\rm new}-(q_{t}+t))^{2}\big{)} \Big{)}\\ =&\ \Delta_{k,\frac{1}{2}d_{k}^{\rm new}-(q_{t}+t)}- \tfrac{1}{2}\big{(}(\tfrac{1}{2}d_{k}^{\rm new}-(q_{t}+t))^{2}-(\tfrac{1}{2}d_ {k}^{\rm new}-i)^{2}\big{)}-\tfrac{k-2}{2}\big{(}\tfrac{1}{2}d_{k}^{\rm lw}-n \big{)}\\ \geq&\ \Delta_{k,\frac{1}{2}d_{k}^{\rm new}-(q_{t}+t)}- \tfrac{1}{2}\big{(}(\tfrac{1}{2}d_{k}^{\rm new}-(q_{t}+t))^{2}-(\tfrac{1}{2}d_ {k}^{\rm new}-i_{0})^{2}\big{)}-\tfrac{k-2}{2}\big{(}\tfrac{1}{2}d_{k}^{\rm lw }-n\big{)}\\ \geq&\ \Delta_{k,\frac{1}{2}d_{k}^{\rm new}-i_{0}}- \tfrac{k-2}{2}\big{(}\tfrac{1}{2}d_{k}^{\rm lw}-n\big{)},\end{split}\]
where the last inequality follows from Proposition 2.19. Granting the claim, the proposition then follows from the two estimates above.
It remains to prove the **claim**, namely the inequality (6.18.1). By the definition of \(\eta_{j}\) in Notation 6.11, we may rewrite
\[\eta_{j}(w)=\prod_{\begin{subarray}{c}k^{\prime}\equiv k_{\varepsilon}\bmod(p-1)\\ k^{\prime}\neq k\end{subarray}}\Big(1+\frac{w-w_{k}}{w_{k}-w_{k^{\prime}}}\Big)^{m_{n-j}(k^{\prime})-m_{n}(k^{\prime})},\]
\[\begin{split}\frac{\eta_{j}(w)}{\eta_{1}(w)^{j}}& =\prod_{\begin{subarray}{c}k^{\prime}\equiv k_{\varepsilon}\bmod (p-1)\\ k^{\prime}\neq k\end{subarray}}\Big{(}1+\frac{w-w_{k}}{w_{k}-w_{k^{\prime}}} \Big{)}^{m_{n-j}(k^{\prime})-m_{n}(k^{\prime})-j(m_{n-1}(k^{\prime})-m_{n}(k^ {\prime}))}\\ &=1+\eta_{(j),1}(w-w_{k})+\eta_{(j),2}(w-w_{k})^{2}+\cdots.\end{split} \tag{6.18.2}\]
Set \(m_{n,j}(k^{\prime}):=m_{n-j}(k^{\prime})-m_{n}(k^{\prime})-j(m_{n-1}(k^{\prime })-m_{n}(k^{\prime}))\). The weight \(k^{\prime}\) term in the product expression of \(\frac{\eta_{j}(w)}{\eta_{1}(w)^{j}}\) is not \(1\) (or equivalently \(m_{n,j}(k^{\prime})\neq 0\)) only when the function \(n^{\prime}\mapsto m_{n^{\prime}}(k^{\prime})\) for \(n^{\prime}\in[n-j,n]\) fails to be linear, or equivalently, at least one of \(d_{k^{\prime}}^{\rm ur}\), \(d_{k^{\prime}}^{\rm lw}-d_{k^{\prime}}^{\rm ur}\), or \(\tfrac{1}{2}d_{k^{\prime}}^{\rm lw}\) belongs to \((n-j,n)\). We call those weights \(k^{\prime}\)_bad weights_.
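(To see why linearity makes the corresponding factor trivial: if \(m_{n^{\prime}}(k^{\prime})=\alpha-\beta n^{\prime}\) for all \(n^{\prime}\in[n-j,n]\), then
\[m_{n,j}(k^{\prime})=\bigl(m_{n-j}(k^{\prime})-m_{n}(k^{\prime})\bigr)-j\bigl(m_{n-1}(k^{\prime})-m_{n}(k^{\prime})\bigr)=\beta j-j\beta=0,\]
so the weight-\(k^{\prime}\) factor in (6.18.2) is indeed equal to \(1\).)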
The upshot of the proof is the following: (6.18.2) implies that for \(t\in\{1,\ldots,m_{n}(k)-1\}\), \(\eta_{(j),t}\) is the sum of terms of the form
\[\prod_{\alpha=1}^{t}\frac{1}{w_{k}-w_{k^{\prime}_{\alpha}}}, \tag{6.18.3}\]
where each \(k^{\prime}_{\alpha}\) is a bad weight, satisfying certain constraints: if \(m_{n,j}(k^{\prime}_{\alpha})>0\), the multiplicity of \(k^{\prime}_{\alpha}\) appearing in (6.18.3) is less than or equal to \(m_{n,j}(k^{\prime}_{\alpha})\); if \(m_{n,j}(k^{\prime}_{\alpha})<0\), the expansion (6.18.2) is considered as a Taylor expansion, so there is no constraint on the multiplicity of \(k^{\prime}_{\alpha}\) appearing in (6.18.3). Roughly speaking, we will prove that, among all these bad weights, there is at most one \(k^{\prime}_{\alpha}\) such that \(v_{p}(w_{k}-w_{k^{\prime}_{\alpha}})\) is extraordinarily large. When we cite Proposition 2.19 later, most of the \(v_{p}(w_{k}-w_{k^{\prime}_{\alpha}})\) will be controlled by the term \((\ell^{\prime}-\ell)\cdot\lfloor\frac{\ln((p+1)\ell)}{\ln p}+1\rfloor\), and the exceptional \(w_{k^{\prime}_{\alpha}}\) corresponds to the distinguished weight \(k^{\prime}_{\alpha}\) therein.
We now make this proof more precise. Write \(n^{*}:=n\) if \(n\leq\frac{1}{2}d_{k}^{\rm lw}\) and \(n^{*}=d_{k}^{\rm lw}-n\) if \(n\geq\frac{1}{2}d_{k}^{\rm lw}\). In particular, \(m_{n^{*}}(k)=m_{n}(k)\) and \(n^{*}\leq n\). From the above discussion, we reduce the proof to the following:
let \({\mathcal{S}}=\{k^{\prime}_{\alpha}\,|\,\alpha=1,\ldots,t\}\) be a set of bad weights (not necessarily distinct) such that, if \(m_{n,j}(k^{\prime}_{\alpha})>0\) for some \(\alpha\in[1,t]\), the multiplicity of \(k^{\prime}_{\alpha}\) in \({\mathcal{S}}\) is less than or equal to \(m_{n,j}(k^{\prime}_{\alpha})\). Then we have
\[\sum_{\alpha=1}^{t}v_{p}(w_{k}-w_{k^{\prime}_{\alpha}})\leq\Delta_{k,\frac{1} {2}d_{k}^{\rm new}-q_{t}}-\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-(q_{t}+t)}- \frac{1}{2}\big{(}(\frac{1}{2}d_{k}^{\rm new}-q_{t})^{2}-(\frac{1}{2}d_{k}^{ \rm new}-(q_{t}+t))^{2}\big{)} \tag{6.18.4}\]
Suppose that there exists some \(\alpha\in[1,t]\) such that either \(d_{k^{\prime}_{\alpha}}^{\rm ur}\) or \(d_{k^{\prime}_{\alpha}}^{\rm lw}-d_{k^{\prime}_{\alpha}}^{\rm ur}\) belongs to \([n^{*},d_{k}^{\rm lw}-n^{*})\). Without loss of generality, we can assume \(\alpha=t\).
1. When \(t\leq m_{n}(k)-m_{n-j}(k)\), we have \(q_{t}=q_{t-1}=m_{n-j}(k)\). The number \(s\coloneqq\frac{1}{2}d_{k}^{\rm new}-q_{t}-t+1\) satisfies \(\frac{1}{2}d_{k}^{\rm lw}-n^{*}\leq s-1\), and hence \([n^{*},d_{k}^{\rm lw}-n^{*})\subset[\frac{1}{2}d_{k}^{\rm lw}-(s-1),\frac{1}{ 2}d_{k}^{\rm lw}+(s-1)]\). Proposition 2.19 (with \(\ell=\ell^{\prime}=s-1<\ell^{\prime\prime}=s\)) implies \(v_{p}(w_{k}-w_{k^{\prime}_{t}})\leq\Delta_{k,s}-\Delta_{k,s-1}-\frac{1}{2}(s^{ 2}-(s-1)^{2})\). To prove (6.18.4), it suffices to prove (6.18.5) \[\sum_{\alpha=1}^{t-1}v_{p}(w_{k}-w_{k^{\prime}_{\alpha}})\leq\Delta_{k,\frac{ 1}{2}d_{k}^{\rm new}-q_{t-1}}-\Delta_{k,\frac{1}{2}d_{k}^{\rm new}-(q_{t-1}+t- 1)}-\frac{1}{2}\big{(}(\frac{1}{2}d_{k}^{\rm new}-q_{t-1})^{2}-(\frac{1}{2}d_{ k}^{\rm new}-(q_{t-1}+t-1))^{2}\big{)}.\]
2. When \(t>m_{n}(k)-m_{n-j}(k)\), we have \(q_{t}=m_{n}(k)-t\) and \(q_{t-1}=q_{t}+1\). The number \(s\coloneqq\frac{1}{2}d_{k}^{\rm new}-q_{t}\) satisfies \(\frac{1}{2}d_{k}^{\rm lw}-n^{*}\leq s-1\). A similar argument as in (1) shows that we can again reduce to proving (6.18.5). Repeating the above argument, we can assume that none of the bad weights \(k^{\prime}_{\alpha}\) in (6.18.4) satisfies that either \(d_{k^{\prime}_{\alpha}}^{\rm ur}\) or \(d_{k^{\prime}_{\alpha}}^{\rm lw}-d_{k^{\prime}_{\alpha}}^{\rm ur}\) belongs to \([n^{*},d_{k}^{\rm lw}-n^{*})\).
Set \(\gamma\coloneqq[\frac{\ln((p+1)(\frac{1}{2}d_{k}^{\rm new}-q_{t}))}{\ln p}+1]\). We first remark that if some \(k^{\prime}_{\alpha}\) satisfies \(\frac{1}{2}d_{k^{\prime}_{\alpha}}^{\rm lw}\in(n-j,n)\), then \(|k^{\prime}_{\alpha\bullet}-k_{\bullet}|<j\). By our assumption \(m_{n-j}(k)\leq m_{n}(k)\) we always have \(\frac{1}{2}d_{k}^{\rm new}-q_{t}\geq\frac{j}{2}\) and hence \(v_{p}(w_{k}-w_{k^{\prime}_{\alpha}})\leq 1+\lfloor\frac{\ln j}{\ln p}\rfloor\leq\gamma\).
1. If \(v_{p}(w_{k}-w_{k^{\prime}_{\alpha}})\leq\gamma\) for all \(\alpha\in[1,t]\), since \((n-j,n)\subset[\frac{1}{2}d_{k}^{\rm lw}-(\frac{1}{2}d_{k}^{\rm new}-q_{t}), \frac{1}{2}d_{k}^{\rm lw}+(\frac{1}{2}d_{k}^{\rm new}-q_{t})]\), we can apply Proposition 2.19 to \(l=\frac{1}{2}d_{k}^{\rm new}-(q_{t}+t)<l^{\prime}=l^{\prime\prime}=\frac{1}{2 }d_{k}^{\rm new}-q_{t}\) and \(k^{\prime}=k^{\prime}_{\alpha}\) for all \(\alpha\in[1,t]\), and have \[\sum_{\alpha=1}^{t}v_{p}(w_{k}-w_{k^{\prime}_{\alpha}})\leq t\cdot\gamma\leq \Delta_{k,\frac{1}{2}d_{k}^{\rm new}-q_{t}}-\Delta_{k,\frac{1}{2}d_{k}^{\rm new }-(q_{t}+t)}-\frac{1}{2}\big{(}(\frac{1}{2}d_{k}^{\rm new}-q_{t})^{2}-(\frac{ 1}{2}d_{k}^{\rm new}-(q_{t}+t))^{2}\big{)}.\]
2. If \(v_{p}(w_{k}-w_{k^{\prime}})\geq\gamma+1\) for some \(k^{\prime}\in{\mathcal{S}}\), we assume that the multiplicity of \(k^{\prime}\) in \({\mathcal{S}}\) is \(M>0\) and \(k^{\prime}=k^{\prime}_{\alpha}\) for \(\alpha\in[t-M+1,t]\). It follows from Remark 2.20 and our assumption on \({\mathcal{S}}\) that \(k^{\prime}\) is the unique element in \({\mathcal{S}}\) with the property \(v_{p}(w_{k}-w_{k^{\prime}})\geq\gamma+1\). Moreover we have that either \(d_{k^{\prime}}^{\rm ur}\) or \(d_{k^{\prime}}^{\rm lw}-d_{k^{\prime}}^{\rm ur}\) belongs to \((n-j,n^{*})\) while \(\frac{1}{2}d_{k^{\prime}}^{\rm lw}\notin(n-j,n)\). When \(d_{k^{\prime}}^{\rm ur}\in(n-j,n^{*})\), we have \(\frac{1}{2}d_{k^{\prime}}^{\rm lw}\geq n\) and hence \(m_{n-j}(k^{\prime})=0\),\(m_{n}(k^{\prime})=n-d_{k^{\prime}}^{\rm ur}\) and \(m_{n-1}(k^{\prime})=m_{n}(k^{\prime})-1\). It follows that \(m_{n,j}(k^{\prime})=d_{k^{\prime}}^{\rm ur}-n+j>0\) and
\[\frac{1}{2}d_{k}^{\rm Iw}-d_{k^{\prime}}^{\rm ur}=\frac{1}{2}d_{k}^{\rm Iw}-n+j-m_{n,j}(k^{\prime})\leq\frac{1}{2}d_{k}^{\rm new}-q_{t}-m_{n,j}(k^{\prime}). \tag{6.18.6}\]
When \(d^{\mathrm{Iw}}_{k^{\prime}}-d^{\mathrm{ur}}_{k^{\prime}}\in(n-j,n^{*})\), we have \(\frac{1}{2}d^{\mathrm{Iw}}_{k^{\prime}}\leq n-j\) and hence \(m_{n-j}(k^{\prime})=d^{\mathrm{Iw}}_{k^{\prime}}-d^{\mathrm{ur}}_{k^{\prime}}-(n -j)>0\) and \(m_{n-1}(k^{\prime})=m_{n}(k^{\prime})=0\). It follows that \(m_{n,j}(k^{\prime})=d^{\mathrm{Iw}}_{k^{\prime}}-d^{\mathrm{ur}}_{k^{\prime}}- (n-j)\) and
\[\frac{1}{2}d^{\mathrm{Iw}}_{k}-(d^{\mathrm{Iw}}_{k^{\prime}}-d^{\mathrm{ur}}_{ k^{\prime}})=\frac{1}{2}d^{\mathrm{Iw}}_{k}-n+j-m_{n,j}(k^{\prime})\leq\frac{1}{2}d^{ \mathrm{new}}_{k}-q_{t}-m_{n,j}(k^{\prime}). \tag{6.18.7}\]
Let \(\ell=\frac{1}{2}d^{\mathrm{new}}_{k}-(q_{t}+t)\), \(\ell^{\prime\prime}=\frac{1}{2}d^{\mathrm{new}}_{k}-q_{t}\) and \(\ell^{\prime}=\ell^{\prime\prime}-M\in[\ell,\ell^{\prime\prime}]\). It follows from (6.18.6) and (6.18.7) that either \(d^{\mathrm{ur}}_{k^{\prime}}\) or \(d^{\mathrm{Iw}}_{k^{\prime}}-d^{\mathrm{ur}}_{k^{\prime}}\) belongs to \([\frac{1}{2}d^{\mathrm{Iw}}_{k}-\ell^{\prime},\frac{1}{2}d^{\mathrm{Iw}}_{k}+ \ell^{\prime}]\). We apply Proposition 2.19 to \(\ell\), \(\ell^{\prime}\), \(\ell^{\prime\prime}\) and \(k^{\prime}\), and have
\[\sum_{\alpha=1}^{t}v_{p}(w_{k}-w_{k^{\prime}_{\alpha}}) \leq(t-M)\cdot\gamma+M\cdot v_{p}(w_{k}-w_{k^{\prime}})\leq \Delta_{k,\ell^{\prime\prime}}-\Delta_{k,\ell}-\frac{1}{2}(\ell^{\prime \prime 2}-\ell^{2})\] \[=\Delta_{k,\frac{1}{2}d^{\mathrm{new}}_{k}-q_{t}}-\Delta_{k,\frac{1 }{2}d^{\mathrm{new}}_{k}-(q_{t}+t)}-\frac{1}{2}\big{(}(\frac{1}{2}d^{\mathrm{ new}}_{k}-q_{t})^{2}-(\frac{1}{2}d^{\mathrm{new}}_{k}-(q_{t}+t))^{2}\big{)}.\]
This completes the proof of the **claim** and thus Proposition 6.14 and Theorem 2.7.
## 7. Trianguline deformation space and crystalline slopes
In this section, we recall the trianguline deformation space defined by Breuil-Hellmann-Schraen [1] and then compare it with the eigenvariety attached to Paskunas' universal deformation of representations of \(\mathrm{GL}_{2}(\mathbb{Q}_{p})\)[13]. This, together with the known \(p\)-adic local Langlands correspondence for \(\mathrm{GL}_{2}(\mathbb{Q}_{p})\), allows us to transport the local ghost theorem to results regarding slopes on trianguline deformation spaces.
The arguments in this section are relatively well known to experts, but we include some slightly awkward arguments in order to treat central characters, for completeness.
**Notation 7.1**.: As in previous sections, let \(p\) be an odd prime, and let \(E,\mathcal{O},\mathbb{F}\) be coefficient rings as in §1.26. For a formal \(\mathcal{O}\)-scheme \(\mathrm{Spf}(R)\), let \(\mathrm{Spf}(R)^{\mathrm{rig}}\) denote the associated rigid analytic space over \(E\). We will later frequently write \(E^{\prime}\) to mean a finite extension of \(E\), typically in the situation of referring to a point of \(\mathrm{Spf}(R)^{\mathrm{rig}}\) over \(E^{\prime}\); we will freely do so without defining \(E^{\prime}\), and in such cases, we use \(\mathcal{O}^{\prime}\), \(\varpi^{\prime}\), and \(\mathbb{F}^{\prime}\) to denote the corresponding ring of integers, uniformizer, and residue field, respectively.
For a crystabelian representation \(V\) of \(\mathrm{Gal}_{\mathbb{Q}_{p}}\) (with coefficients in \(E^{\prime}\)), write \(\mathbb{D}_{\mathrm{pcrys}}(V)\) for the limit of the crystalline functor for \(\mathbb{Q}_{p}(\mu_{p^{n}})\) with \(n\) sufficiently large.
We normalize the local class field theory so that the Artin map \(\mathbb{Q}_{p}^{\times}\to\mathrm{Gal}_{\mathbb{Q}_{p}}^{\mathrm{ab}}\) sends \(p\) to the _geometric_ Frobenius. In what follows, we will practically identify characters of \(\mathbb{Q}_{p}^{\times}\) (with values in \(\mathcal{O}^{\times}\) or \(\mathbb{F}^{\times}\)) and characters of \(\mathrm{Gal}_{\mathbb{Q}_{p}}\).
We use the following notations for local Galois representations:
* For \(\alpha\in\mathbb{F}^{\times}\) or \(\mathcal{O}^{\times}\), write \(\mathrm{ur}(\alpha)\) for the one-dimensional unramified representation of \(\mathrm{Gal}_{\mathbb{Q}_{p}}\) sending the geometric Frobenius element to \(\alpha\).
* Let \(\omega_{1}:\mathrm{Gal}_{\mathbb{Q}_{p}}\to\mathrm{Gal}(\mathbb{Q}_{p}(\mu_{p}) /\mathbb{Q}_{p})\cong\mathbb{F}_{p}^{\times}\) denote the _first fundamental character_ of \(\mathrm{Gal}_{\mathbb{Q}_{p}}\).
* Let \(\chi_{\mathrm{cycl}}:\mathbb{Q}_{p}^{\times}\subset\mathrm{Gal}_{\mathbb{Q}_{p} }^{\mathrm{ab}}\to\mathrm{Gal}(\mathbb{Q}_{p}(\mu_{p^{\infty}})/\mathbb{Q}_{p}) \cong\mathbb{Z}_{p}^{\times}\) denote the cyclotomic character; its reduction modulo \(p\) is precisely \(\omega_{1}\).
Recall \(\Delta:=\mathbb{F}_{p}^{\times}\), the isomorphism \(\mathcal{O}[\![(1+p\mathbb{Z}_{p})^{\times}]\!]\cong\mathcal{O}[\![w]\!]\), and the universal character \(\chi_{\mathrm{univ}}^{(\varepsilon)}:\Delta\times\mathbb{Z}_{p}^{\times}\to\mathcal{O}[\![w]\!]^{(\varepsilon),\times}\) associated to a character \(\varepsilon\) of \(\Delta^{2}\) from §2.4(1). For each \(\varepsilon\), call
\(\mathcal{W}^{(\varepsilon)}:=(\operatorname{Spf}\mathcal{O}[\![w]\!]^{(\varepsilon)})^{\operatorname{rig}}\) the _weight space labeled by \(\varepsilon\)_. Put \(\mathcal{W}:=\bigcup_{\varepsilon}\mathcal{W}^{(\varepsilon)}\); it parametrizes continuous characters of \(\Delta\times\mathbb{Z}_{p}^{\times}\). Write \(\chi_{\operatorname{univ}}:\Delta\times\mathbb{Z}_{p}^{\times}\to\mathcal{O}_{\mathcal{W}}^{\times}\) for the universal character. Put \(\mathcal{W}_{0}:=(\operatorname{Spf}\mathcal{O}[\![w]\!])^{\operatorname{rig}}\), parametrizing continuous characters of \((1+p\mathbb{Z}_{p})^{\times}\).
Let \(\widetilde{\mathcal{W}}:=(\operatorname{Spf}\mathcal{O}[\![(\mathbb{Z}_{p}^{ \times})^{2}]\!])^{\operatorname{rig}}\) be the rigid analytic space parametrizing continuous characters of \((\mathbb{Z}_{p}^{\times})^{2}\). There is a natural isomorphism
(7.1.1)
_Here, we used \(\chi(\bar{\delta},\alpha)\) as opposed to \(\chi(\bar{\alpha},\delta)\) because our later convention uses the lower triangular matrix local analytic Jacquet functor. The additional factor \(\alpha\) at the beginning indicates a twist by cyclotomic character in our convention._ Under this isomorphism, we may view \(\mathcal{W}\) as a subspace of \(\widetilde{\mathcal{W}}\) where the universal character is trivial on \(\{1\}\times(1+p\mathbb{Z}_{p})^{\times}\); and at the same time, we have a projection map \(\operatorname{pr}_{W}:\widetilde{\mathcal{W}}\to\mathcal{W}\), along \(\mathcal{W}_{0}\).
Later, we often consider a rigid analytic space \(\mathcal{X}\) and the morphism \(\operatorname{id}_{\mathcal{X}}\times\operatorname{pr}_{W}:\mathcal{X}\times \widetilde{\mathcal{W}}\to\mathcal{X}\times\mathcal{W}\); we write \(\operatorname{pr}_{W}\) for it when no confusion arises.
**Notation 7.2**.: For the rest of this paper, we use \(\bar{r}_{p}:\operatorname{Gal}_{\mathbb{Q}_{p}}\to\operatorname{GL}_{2}( \mathbb{F})\) to denote a reducible and generic residual local Galois representation
\[\bar{r}_{p}=\begin{pmatrix}\operatorname{ur}(\bar{\alpha}_{1})\omega_{1}^{a+b+ 1}&*\\ 0&\operatorname{ur}(\bar{\alpha}_{2})\omega_{1}^{b}\end{pmatrix}:\operatorname{ Gal}_{\mathbb{Q}_{p}}\to\operatorname{GL}_{2}(\mathbb{F})\]
with \(a\in\{1,\dots,p-4\}\), \(b\in\{0,\dots,p-2\}\), and \(\bar{\alpha}_{1},\bar{\alpha}_{2}\in\mathbb{F}^{\times}\). We say \(\bar{r}_{p}\) is _split_ if \(*=0\) and _nonsplit_ if \(*\neq 0\). The condition on \(a\) ensures that there is a unique such nontrivial extension when \(\bar{r}_{p}\) is nonsplit, because \(\operatorname{H}^{1}(\operatorname{Gal}_{\mathbb{Q}_{p}},\operatorname{ur}( \bar{\alpha}_{2}^{-1}\bar{\alpha}_{1})\omega^{a+1})\) is one-dimensional.
We often write \(\bar{\rho}:\operatorname{I}_{\mathbb{Q}_{p}}\to\operatorname{GL}_{2}( \mathbb{F})\) for the corresponding residual inertia representation:
* (nonsplit case) \(\bar{\rho}=\begin{pmatrix}\omega_{1}^{a+b+1}&*\neq 0\\ 0&\omega_{1}^{b}\end{pmatrix}\), where \(*\) is the _unique_ nontrivial extension (up to isomorphism) in the class \(\operatorname{H}^{1}(\operatorname{I}_{\mathbb{Q}_{p}},\omega_{1}^{a+1})^{ \operatorname{Gal}_{\mathbb{Q}_{p}}}=\operatorname{H}^{1}(\operatorname{Gal}_ {\mathbb{Q}_{p}},\omega_{1}^{a+1})\); and
* (split case) \(\bar{\rho}^{\operatorname{ss}}=\omega_{1}^{a+b+1}\oplus\omega_{1}^{b}\).
We occasionally use a companion representation \(\bar{\rho}^{\prime}\) for the same construction with parameters \((a,b)\) changed to \((a^{\prime},b^{\prime})=(p-3-a,a+b+1)\), or equivalently, \(\bar{\rho}^{\prime}=\begin{pmatrix}\omega_{1}^{a+b+1}&0\\ *\neq 0&\omega_{1}^{b}\end{pmatrix}\).
_These notations \(\bar{\rho}\), \(\bar{\rho}^{\prime}\) and \(\bar{\rho}^{\operatorname{ss}}\) are fixed throughout the rest of this paper._
### Trianguline deformation spaces
Let \(\mathcal{T}\) denote the rigid analytic space parametrizing continuous characters of \((\mathbb{Q}_{p}^{\times})^{2}\), or more precisely,
\[\mathcal{T}:=\mathbb{G}_{m}^{\operatorname{rig}}\times\mathbb{G}_{m}^{\operatorname{rig}}\times\widetilde{\mathcal{W}}, \tag{7.3.1}\]
where \(\mathbb{G}_{m}^{\operatorname{rig}}=\cup_{n\in\mathbb{N}}\operatorname{Spm} \big{(}\mathbb{Q}_{p}\langle\frac{u}{p^{n}},\frac{p^{n}}{u}\rangle\big{)}\) is the rigid analytic \(\mathbb{G}_{m}\). The point on \(\mathcal{T}\) associated to a character \((\delta_{1},\delta_{2}):(\mathbb{Q}_{p}^{\times})^{2}\to\mathbb{C}_{p}^{\times}\) is \((\delta_{1}(p),\delta_{2}(p),\delta_{1}|_{\mathbb{Z}_{p}^{\times}},\delta_{2}|_ {\mathbb{Z}_{p}^{\times}})\). There is a natural _weight map_\(\operatorname{wt}:\mathcal{T}\to\widetilde{\mathcal{W}}\). Define \(\mathcal{T}_{\operatorname{reg}}\) to be the Zariski open subspace of \(\mathcal{T}\), where neither \(\delta_{1}/\delta_{2}\) nor \(\delta_{2}/\delta_{1}\) is a character of \(\mathbb{Q}_{p}^{\times}\) in the following list:
\[x\mapsto x^{n}\text{ and }x\mapsto x^{n}\chi_{\operatorname{cycl}}(x)\text{ with }n\in\mathbb{Z}_{\geq 0}.\]
Let \(\bar{r}_{p}:\operatorname{Gal}_{\mathbb{Q}_{p}}\to\operatorname{GL}_{2}(\mathbb{F})\) be a residual Galois representation. Let \(R^{\square}_{\bar{r}_{p}}\) denote the framed deformation ring of \(\bar{r}_{p}\) parametrizing deformations of \(\bar{r}_{p}\) into matrix representations of \(\operatorname{Gal}_{\mathbb{Q}_{p}}\) with coefficients in noetherian complete local \(\mathcal{O}\)-algebras. Then the Krull dimension of \(R^{\square}_{\bar{r}_{p}}\) is \(9\). Let \(V^{\square}_{\mathrm{univ}}\) denote the universal (matrix) representation over \(R^{\square}_{\bar{r}_{p}}\).
Let \(\mathcal{X}^{\square}_{\bar{r}_{p}}\) denote the rigid analytic space over \(E\) associated to the formal scheme \(\operatorname{Spf}R^{\square}_{\bar{r}_{p}}\); it has dimension \(8\). Write \(\mathcal{V}^{\square}_{\mathrm{univ}}\) for the associated universal representation over \(\mathcal{X}^{\square}_{\bar{r}_{p}}\). For a point \(x\in\mathcal{X}^{\square}_{\bar{r}_{p}}\) over \(E^{\prime}\), write \(\mathcal{V}_{x}\) for the Galois representation of \(\operatorname{Gal}_{\mathbb{Q}_{p}}\) over \(E^{\prime}\) obtained by specializing the universal representation at \(x\).
Following [1, Definition 2.4], we define the trianguline deformation space as follows.
**Definition 7.4**.: Let \(U^{\square,\operatorname{tri}}_{\bar{r}_{p},\operatorname{reg}}\) denote the set of closed points \((x,\delta_{1},\delta_{2})\in\mathcal{X}^{\square}_{\bar{r}_{p}}\times \mathcal{T}_{\operatorname{reg}}\) (with some residue field \(E^{\prime}\)) such that the associated \((\varphi,\Gamma)\)-module \(\mathbb{D}^{\dagger}_{\mathrm{rig}}(\mathcal{V}_{x})\) sits in an exact sequence
\[0\to\mathcal{R}_{E^{\prime}}(\delta_{1})\to\mathbb{D}^{\dagger}_{\mathrm{rig} }(\mathcal{V}_{x})\to\mathcal{R}_{E^{\prime}}(\delta_{2})\to 0, \tag{7.4.1}\]
where \(\mathcal{R}_{E^{\prime}}\) is the Robba ring for \(\mathbb{Q}_{p}\) with coefficients in \(E^{\prime}\); see [1, §6] and [11] for the notation \(\mathcal{R}_{E^{\prime}}(-)\) and related discussions on triangulations of \((\varphi,\Gamma)\)-modules.
The _trianguline deformation space of \(\bar{r}_{p}\)_, denoted by \(\mathcal{X}^{\square,\operatorname{tri}}_{\bar{r}_{p}}\), is the Zariski closure of \(U^{\square,\operatorname{tri}}_{\bar{r}_{p},\operatorname{reg}}\) inside the product \(\mathcal{X}^{\square}_{\bar{r}_{p}}\times\mathcal{T}\).
**Proposition 7.5**.:
1. _The space_ \(\mathcal{X}^{\square,\operatorname{tri}}_{\bar{r}_{p}}\) _is a subspace of_ \(\mathcal{X}^{\square}_{\bar{r}_{p}}\times\mathcal{T}\) _consisting of points_ \((x,\delta_{1},\delta_{2})\) _for which_ \(\det(\mathcal{V}_{x})\) _corresponds to_ \(\delta_{1}\delta_{2}\) _under local class field theory. Moreover, set_ \(\mathcal{X}^{\square,\operatorname{tri},\circ}_{\bar{r}_{p}}:=\mathcal{X}^{ \square,\operatorname{tri}}_{\bar{r}_{p}}\cap\big{(}\mathcal{X}^{\square}_{\bar{ r}_{p}}\times(\mathbb{G}^{\mathrm{rig}}_{m})^{2}\times\mathcal{W}\big{)}\)_, then_ (_7.1.1_) _induces an isomorphism_ _which is compatible with projections to the factor_ \((\mathbb{G}^{\mathrm{rig}}_{m})^{2}\)_._
2. _The set_ \(U^{\square,\operatorname{tri}}_{\bar{r}_{p},\operatorname{reg}}\) _is the set of closed points of a Zariski open and dense subspace_ \(\mathcal{U}^{\square,\operatorname{tri}}_{\bar{r}_{p},\operatorname{reg}}\) _of_ \(\mathcal{X}^{\square,\operatorname{tri}}_{\bar{r}_{p}}\)_. The space_ \(\mathcal{X}^{\square,\operatorname{tri}}_{\bar{r}_{p}}\) _is equidimensional of dimension_ \(7\)_._
Proof.: (1) obviously holds for points in \(U^{\square,\operatorname{tri}}_{\bar{r}_{p},\operatorname{reg}}\) and hence for \(\mathcal{X}^{\square,\operatorname{tri}}_{\bar{r}_{p}}\). (2) is proved in [1, Theorem 2.6].
The main theorem of this section is the following.
**Theorem 7.6**.: _Assume that \(p\geq 11\). Let \(\bar{r}_{p}:\operatorname{Gal}_{\mathbb{Q}_{p}}\to\operatorname{GL}_{2}(\mathbb{F})\) be a residual local Galois representation as in Notation 7.2 with \(2\leq a\leq p-5\), and let \(\bar{\rho}\) be as defined therein. Let \(\mathcal{X}^{\square,\operatorname{tri}}_{\bar{r}_{p}}\) be the trianguline deformation space defined above. Let \(\underline{x}=(x,\delta_{1},\delta_{2})\) be an \(E^{\prime}\)-point of \(\mathcal{X}^{\square,\operatorname{tri}}_{\bar{r}_{p}}\). Then the character \(\varepsilon=\delta_{2}|_{\Delta}\times\delta_{1}|_{\Delta}\cdot\omega^{-1}\) is relevant to \(\bar{r}_{p}|_{\mathbb{I}_{\mathbb{Q}_{p}}}\). Put \(w_{\star}:=(\delta_{1}\delta_{2}^{-1}\chi^{-1}_{\operatorname{cycl}})(\exp(p))-1\) (for the image of \(\underline{x}\) in \(\mathcal{W}\) under \(\operatorname{pr}_{W}\))._
1. _If_ \(v_{p}(\delta_{1}(p))=-v_{p}(\delta_{2}(p))>0\)_, then_ \(v_{p}(\delta_{1}(p))\) _is equal to a slope appearing in_ \(\operatorname{NP}\big{(}G^{(\varepsilon)}_{\bar{\rho}}(w_{\star},-)\big{)}\)_._
2. _If_ \(v_{p}(\delta_{1}(p))=0\)_, then either_ \(\varepsilon=\omega^{b}\times\omega^{a+b}\)_, or_ \(\varepsilon=\omega^{a+b+1}\times\omega^{b-1}\) _and_ \(\bar{r}_{p}|_{\mathbb{I}_{\mathbb{Q}_{p}}}\simeq\bar{\rho}^{\ast}\)_._
3. _If_ \(v_{p}(\delta_{1}(p))=\frac{k}{2}-1\) _and_ \(\delta_{1}|_{\mathbb{Z}^{\times}_{p}}=\chi^{k-1}_{\operatorname{cycl}}\delta_{2}|_{\mathbb{Z}^{\times}_{p}}\) _for some integer_ \(k\in\mathbb{Z}_{\geq 2}\)_, then_ \(\delta_{1}(p)=p^{k-2}\delta_{2}(p)\)_._
_Conversely, fix characters \(\delta_{1}|_{\mathbb{Z}_{p}^{\times}}\) and \(\delta_{2}|_{\mathbb{Z}_{p}^{\times}}\) such that \(\varepsilon\) defined above is relevant to \(\bar{r}_{p}|_{\mathbb{I}_{\mathbb{Q}_{p}}}\). Then every nonzero slope of \(\mathrm{NP}\left(G_{\bar{\rho}}^{(\varepsilon)}(w_{\star},-)\right)\) for \(w_{\star}:=(\delta_{1}\delta_{2}^{-1}\chi_{\mathrm{cycl}}^{-1})(\exp(p))-1\), appears as \(v_{p}(\delta_{1}(p))\) at some closed point \(\underline{x}=(x,\delta_{1},\delta_{2})\in\mathcal{X}_{\bar{r}_{p}}^{\square, \mathrm{tri}}\) (for some continuous characters \(\delta_{1},\delta_{2}\) of \(\mathbb{Q}_{p}^{\times}\) extending the given \(\delta_{1}|_{\mathbb{Z}_{p}^{\times}}\) and \(\delta_{2}|_{\mathbb{Z}_{p}^{\times}}\))._
The proof of this theorem will occupy the rest of this section, and is concluded in §7.22. We quickly remark that case (1) corresponds to the case when \(\mathcal{V}_{x}\) is reducible, and case (3) mostly concerns the case when \(\mathcal{V}_{x}\) is semistable and noncrystalline (after a twist).
Temporarily admitting this theorem, we first deduce a couple of corollaries that partially answer a conjecture of Breuil-Buzzard-Emerton on crystalline slopes of Kisin's crystabelian deformation spaces and a conjecture of Gouvea on slopes of crystalline deformation spaces.
### Kisin's crystabelian deformation space
Let \(\bar{r}_{p}\), \(R_{\bar{r}_{p}}^{\square}\), and \(V_{\mathrm{univ}}^{\square}\) be as above. Let \(\underline{\psi}=\psi_{1}\times\psi_{2}:(\mathbb{Z}_{p}^{\times})^{2}\to E^{\times}\) be a finite-order character, and let \(\underline{k}=(k_{1},k_{2})\in\mathbb{Z}^{2}\) with \(k_{1}<k_{2}\) be a pair of Hodge-Tate weights. (In our convention, \(\chi_{\mathrm{cycl}}\) has Hodge-Tate weight \(-1\).) In [11], Kisin proved that there is a unique \(\mathcal{O}\)-flat quotient \(R_{\bar{r}_{p}}^{\square,\underline{k},\underline{\psi}}\) of \(R_{\bar{r}_{p}}^{\square}\), called _Kisin's crystabelian deformation ring_, such that a homomorphism \(x^{*}:R_{\bar{r}_{p}}^{\square}\to E^{\prime}\) factors through \(R_{\bar{r}_{p}}^{\square,\underline{k},\underline{\psi}}\) if and only if \(\mathcal{V}_{x}\) is potentially crystalline with Hodge-Tate weights \((k_{1},k_{2})\) and the action of \(\mathrm{I}_{\mathbb{Q}_{p}}\) on \(\mathbb{D}_{\mathrm{pcrys}}(\mathcal{V}_{x})\) is isomorphic to \(\psi_{1}\oplus\psi_{2}\). (Here \(\mathbb{D}_{\mathrm{pcrys}}(-)\) is defined in Notation 7.1.) When \(R_{\bar{r}_{p}}^{\square,\underline{k},\underline{\psi}}\) is nonzero, each of its irreducible components has Krull dimension \(6\). Moreover, the associated rigid analytic space \(\mathcal{X}_{\bar{r}_{p}}^{\square,\underline{k},\underline{\psi}}:=\big(\operatorname{Spf}R_{\bar{r}_{p}}^{\square,\underline{k},\underline{\psi}}\big)^{\mathrm{rig}}\) is smooth of dimension \(5\) over \(E\).
**Corollary 7.8**.: _Assume that \(p\geq 11\). Let \(\bar{r}_{p}:\mathrm{Gal}_{\mathbb{Q}_{p}}\to\mathrm{GL}_{2}(\mathbb{F})\) be a residual local Galois representation as in Notation 7.2 with \(2\leq a\leq p-5\). Let \(\underline{\psi}\) and \(\underline{k}\) be as above, and let \(x\) be an \(E^{\prime}\)-point of \(\mathcal{X}_{\bar{r}_{p}}^{\square,\underline{k},\underline{\psi}}\). Let \(\alpha_{x}\) be an eigenvalue of the \(\phi\)-action on the subspace of \(\mathbb{D}_{\mathrm{pcrys}}(\mathcal{V}_{x})\) where \(\mathrm{Gal}(\mathbb{Q}_{p}(\mu_{p^{\infty}})/\mathbb{Q}_{p})\) acts through \(\psi_{1}\). Write \(w_{\star}:=(\psi_{1}\psi_{2}^{-1})(\exp(p))\exp(p(k_{2}-k_{1}-1))-1\) (for the image of \(x\) in \(\mathcal{W}\) under \(\mathrm{pr}_{W}\)). Then the character \(\varepsilon=\psi_{2}|_{\Delta}\cdot\omega^{-k_{2}}\times\psi_{1}|_{\Delta} \cdot\omega^{-k_{1}-1}\) is relevant to \(\bar{r}_{p}|_{\mathbb{I}_{\mathbb{Q}_{p}}}\), and_
1. _if_ \(v_{p}(\alpha_{x})-k_{1}\notin\{0,k_{2}-k_{1}\}\)_, then it is equal to a slope appearing in_ \(\mathrm{NP}\left(G_{\bar{\rho}}^{(\varepsilon)}(w_{\star},-)\right)\)_;_
2. _if_ \(v_{p}(\alpha_{x})\in\{k_{1},k_{2}\}\)_, then_ \(\mathcal{V}_{x}\) _is reducible; and_
3. _in the special case_ \(\psi_{1}=\psi_{2}\)_, we have_ \(v_{p}(\alpha_{x})\neq\frac{k_{2}-k_{1}}{2}-1\)_._
_Conversely, every slope of \(\mathrm{NP}\left(G_{\bar{\rho}}^{(\varepsilon)}(w_{\star},-)\right)\) belonging to \((0,k_{2}-k_{1})\) (but not equal to \(\frac{k_{2}-k_{1}}{2}-1\) when \(\psi_{1}=\psi_{2}\)) appears as the \(v_{p}(\alpha_{x})-k_{1}\) at some point \(x\in\mathcal{X}_{\bar{r}_{p}}^{\square,\underline{k},\underline{\psi}}\)._
Proof.: If \(v_{p}(\alpha_{x})\in\{k_{1},k_{2}\}\), the standard \(p\)-adic Hodge theory implies that \(\mathcal{V}_{x}\) is reducible.
We henceforth assume that we are in situation (1), i.e. \(v_{p}(\alpha_{x})\notin\{k_{1},k_{2}\}\). This essentially follows from Theorem 1.5 because all crystabelian representations are trianguline. More precisely, let \(x\in\mathcal{X}_{\bar{r}_{p}}^{\square,\underline{k},\underline{\psi}}(E^{\prime})\) be a closed point. By possibly replacing \(E^{\prime}\) by a quadratic extension, the actions of the crystalline Frobenius \(\phi\) and of \(\mathrm{Gal}(\mathbb{Q}_{p}(\mu_{p^{\infty}})/\mathbb{Q}_{p})\) on \(\mathbb{D}_{\mathrm{pcrys}}(\mathcal{V}_{x})\) admit two (generalized) eigencharacters: \((\alpha_{1},\psi_{1})\) and \((\alpha_{2},\psi_{2})\), with \(\psi_{1},\psi_{2}\) in the data defining the deformation space and \(\alpha_{1},\alpha_{2}\in E^{\prime\times}\). We can also always assume that \((\alpha_{1},\psi_{1})\) is a genuine
eigencharacter. Define characters \(\delta_{i}:\mathbb{Q}_{p}^{\times}\to E^{\prime\times}\) with \(i=1,2\) by
\[\delta_{i}(p)=p^{-k_{i}}\alpha_{i},\quad\delta_{i}|_{\mathbb{Z}_{p}^{\times}}=x^ {-k_{i}}\psi_{i}.\]
Standard facts about Berger's functor provide a triangulation
\[0\to\mathcal{R}_{E^{\prime}}(\delta_{1})\to\mathbb{D}_{\mathrm{rig}}(\mathcal{ V}_{x})\to\mathcal{R}_{E^{\prime}}(\delta_{2})\to 0. \tag{7.8.1}\]
Indeed, if this fails, it must be that the eigenspace for \((\alpha_{1},\psi_{1})\) agrees with \(\mathrm{Fil}^{k_{2}}\mathbb{D}_{\mathrm{pcrys}}(\mathcal{V}_{x})\); then the admissibility condition for \(\mathbb{D}_{\mathrm{pcrys}}(\mathcal{V}_{x})\) forces \(v_{p}(\alpha_{1})=k_{2}\), contradicting our assumption.
Now, (7.8.1) upgrades \(x\) to a point \((x,\delta_{1},\delta_{2})\) of \(\mathcal{X}_{\bar{r}_{p}}^{\square,\mathrm{tri}}\), for which \(v_{p}(\delta_{1}(p))=v_{p}(\alpha_{1})-k_{1}\). (1) now follows from Theorem 7.6, with
\[w_{\star}:=(\delta_{1}\delta_{2}^{-1}\chi_{\mathrm{cycl}}^{-1})(\exp(p))-1=(\psi_{1}\psi_{2}^{-1})(\exp(p))\exp(p(k_{2}-k_{1}-1))-1. \tag{7.8.2}\]
It remains to prove (3). Assume that \(\psi_{1}=\psi_{2}\). Suppose that the subspace \(\mathcal{Y}\) of \(\mathcal{X}_{\bar{r}_{p}}^{\square,\underline{k},\underline{\psi}}\) where \(v_{p}(\alpha_{x})-k_{1}=\frac{k_{2}-k_{1}}{2}-1\) is nonempty. Then this is a rigid analytic subspace, so in particular, \(\dim\mathcal{Y}=5\). For each \(x\in\mathcal{Y}\), \(\delta_{1}|_{\mathbb{Z}_{p}^{\times}}=\chi_{\mathrm{cycl}}^{k_{2}-k_{1}}\delta _{2}|_{\mathbb{Z}_{p}^{\times}}\). Theorem 7.6(3) implies that \(\delta_{1}(p)=p^{k_{2}-k_{1}-2}\delta_{2}(p)\). This means that \(\mathcal{Y}\) is confined to the subspace where the ratio of two Frobenius eigenvalues on \(\mathbb{D}_{\mathrm{pcrys}}(\mathcal{V}_{x})\) is precisely \(p\). Let \(x\) be a point of \(\mathcal{Y}\). The dimension of the tangent space of \(\mathcal{X}_{\bar{r}_{p}}^{\square,\underline{k},\underline{\psi}}\) at \(x\) is equal to \(1+3+\dim\mathrm{H}_{f}^{1}(\mathrm{Gal}_{\mathbb{Q}_{p}},\mathrm{Ad}(\mathcal{V }_{x}))\), where \(1\) comes from the infinitesimal central twist of \(\mathcal{V}_{x}\) by an unramified representation, \(3\) comes from the framing variables, and the one-dimensional \(\mathrm{H}_{f}^{1}(\mathrm{Gal}_{\mathbb{Q}_{p}},\mathrm{Ad}(\mathcal{V}_{x}))\) corresponds to varying the ratio of two Frobenius eigenvalues. But our earlier discussion shows that the ratio of two Frobenius eigenvalues on \(\mathbb{D}_{\mathrm{pcrys}}(\mathcal{V}_{x})\) is fixed to be \(p\). Hence the tangent space of \(\mathcal{Y}\) at \(x\) has dimension at most \(4\), contradicting \(\dim\mathcal{Y}=5\). (3) is proved.
Conversely, given a slope of \(\mathrm{NP}\left(G_{\bar{\rho}}^{(\varepsilon)}(w_{\star},-)\right)\) belonging to \((0,k_{2}-k_{1})\) (and not being equal to \(\frac{k_{2}-k_{1}}{2}-1\) when \(\psi_{1}=\psi_{2}\)), Theorem 7.6 defines a triangulation (7.8.1) with \(\mathcal{V}_{x}\) having the reduction \(\bar{r}_{p}\). The slope condition implies that (7.8.1) belongs to the type \(\mathscr{S}_{+}^{\mathrm{cris}}\) in [11]. So \(\mathcal{V}_{x}\) is crystabelian.
**Remark 7.9**.:
1. We omit a full discussion of the case \(v_{p}(\alpha_{x})\in\{k_{1},k_{2}\}\), which is a standard exercise in \(p\)-adic Hodge theory.
2. Possibly up to replacing \(E\) by a degree \(2\) extension when \(\psi_{1}=\psi_{2}\), it is possible to embed \(\mathcal{X}_{\bar{r}_{p}}^{\square,\underline{k},\underline{\psi}}\) into \(\mathcal{X}_{\bar{r}_{p}}^{\square,\mathrm{tri}}\) as a rigid analytic subspace, but this construction is a little messy to present in the ordinary, critical, or Frobenius non-semisimple cases. We content ourselves with a pointwise description and leave the "global" argument to interested readers.
The following positively answers a conjecture of Breuil-Buzzard-Emerton and a conjecture of Gouvea when the residual Galois representation is reducible and generic. We refer to § 1.8 and § 1.11 for a discussion of their history, and to Remarks 1.10 and 1.13 for comments on prior related works.
**Corollary 7.10**.: _Assume that \(p\geq 11\). Let \(\bar{r}_{p}:\mathrm{Gal}_{\mathbb{Q}_{p}}\to\mathrm{GL}_{2}(\mathbb{F})\) be a residual local Galois representation as in Notation 7.2 with \(2\leq a\leq p-5\). Let \(\underline{\psi}\), \(\underline{k}\), \(x\), \(\alpha_{x}\) be as in Corollary 7.8._
1. _If_ \(m\) _denotes the minimal_ positive _integer such that_ \(\psi_{1}\psi_{2}^{-1}\) _is trivial on_ \((1+p^{m}\mathbb{Z}_{p})^{\times}\)_, then_ \[v_{p}(\alpha_{x})\in\begin{cases}\left(\frac{a}{2}+\mathbb{Z}\right)\cup \mathbb{Z}&\text{ when }m=1,\\ \frac{1}{(p-1)p^{m-1}}\mathbb{Z}&\text{ when }m\geq 2.\end{cases}\]
2. _If_ \(\psi_{1}=\psi_{2}\)_, then_ \[v_{p}(\alpha_{x})-k_{1}\text{ or }k_{2}-1-v_{p}(\alpha_{x})\text{ belongs to }\Big{[}0,\,\frac{k_{2}-k_{1}-1-\min\{a+1,p-2-a\}}{p+1} \Big{]}.\]
Proof.: (1) When \(m=1\), this follows from Corollary 7.8 and Proposition 2.18(6). When \(m\geq 2\), \(v_{p}(w_{\star})=\frac{1}{(p-1)p^{m-1}}\), and the slopes of \(\operatorname{NP}\big{(}G_{\bar{\rho}}^{(\varepsilon)}(w_{\star},-)\big{)}\) are precisely \(v_{p}(w_{\star})\cdot\big{(}\deg g_{n}^{(\varepsilon)}(w)-\deg g_{n-1}^{( \varepsilon)}(w)\big{)}\) for \(n\in\mathbb{N}\), each with multiplicity one, by the last line of Definition-Proposition 2.12(4). In this case, (1) follows from this and Corollary 7.8.
(2) If \(\psi_{1}=\psi_{2}\), then \(v_{p}(\alpha_{x})-k_{1}\) is a slope of \(\operatorname{NP}\big{(}G_{\bar{\rho}}^{(\varepsilon)}(w_{k_{2}-k_{1}},-) \big{)}\) which is not \(\frac{k_{2}-k_{1}}{2}-1\). By Proposition 2.16(3)(4), either \(v_{p}(\alpha_{x})-k_{1}\) belongs to \(\big{[}0,\,\frac{k_{2}-k_{1}-1-\min\{a+1,p-2-a\}}{p+1}\big{]}\), or \((k_{2}-k_{1}-1)-(v_{p}(\alpha_{x})-k_{1})=k_{2}-1-v_{p}(\alpha_{x})\) belongs to this set.
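For orientation, here is a purely illustrative numerical instance of part (2) of Corollary 7.10; the specific values below are chosen by us only to make the bound concrete. Take \(p=13\) and \(a=3\) (so that \(p\geq 11\) and \(2\leq a\leq p-5\)), take \(\psi_{1}=\psi_{2}\), and choose \(\underline{k}\) with \(k_{2}-k_{1}=30\). Then \(\min\{a+1,p-2-a\}=\min\{4,8\}=4\), and (2) asserts that
\[v_{p}(\alpha_{x})-k_{1}\in\Big{[}0,\,\tfrac{30-1-4}{13+1}\Big{]}=\Big{[}0,\,\tfrac{25}{14}\Big{]}\qquad\text{or}\qquad k_{2}-1-v_{p}(\alpha_{x})\in\Big{[}0,\,\tfrac{25}{14}\Big{]};\]
in other words, the normalized slope \(v_{p}(\alpha_{x})-k_{1}\) lies either near \(0\) or near \(k_{2}-k_{1}-1=29\), and avoids the middle range, in the spirit of Gouvea's conjecture.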
The rest of this section is devoted to proving Theorem 7.6, which is completed in § 7.22.
### Reducing Theorem 7.6 to the nonsplit case
We first show that Theorem 7.6 for \(\bar{r}_{p}\) nonsplit implies the theorem for \(\bar{r}_{p}\) split. This is essentially because, at least pointwise for an irreducible trianguline representation, there are lattices with different reductions.
To make this precise, we first note that the character \(\varepsilon=\delta_{2}|_{\Delta}\times\delta_{1}|_{\Delta}\cdot\omega^{-1}\) is always relevant to \(\bar{r}_{p}|_{\mathrm{I}_{\mathbb{Q}_{p}}}\) by considering \(\det\mathcal{V}_{x}\). Next, by twisting all representations by \(\omega\circ\omega_{1}^{-b}:\operatorname{Gal}_{\mathbb{Q}_{p}}\to\mathbb{F}_{p} ^{\times}\to\mathcal{O}^{\times}\), we may reduce to the case when \(b=0\).
Now suppose that Theorem 7.6 holds for nonsplit residual local Galois representations. Let \(\bar{r}_{p}\) be a split residual local Galois representation as in Notation 7.2 with \(\ast=0\) and \(b=0\). Then there is a unique nonsplit residual local Galois representation \(\bar{r}_{p}^{\prime}\) which is an extension of \(\operatorname{ur}(\bar{\alpha}_{2})\) by \(\operatorname{ur}(\bar{\alpha}_{1})\omega^{a+1}\). In particular, \(\bar{r}_{p}^{\prime}|_{\mathrm{I}_{\mathbb{Q}_{p}}}\simeq\bar{\rho}=\left(\begin{smallmatrix} \omega_{1}^{a+1}&\ast\neq 0\\ 0&1\end{smallmatrix}\right)\) as in Notation 7.2.
Let \(\underline{x}=(x,\delta_{1},\delta_{2})\) be an \(E^{\prime}\)-point of \(\mathcal{U}_{\bar{r}_{p},\operatorname{reg}}^{\square,\operatorname{tri}}\). (By Zariski density, it is enough to consider points in the regular locus.) We separate two cases.
(1) If \(\mathcal{V}_{x}\) is irreducible, then it is well known that, after possibly enlarging \(E^{\prime}\), \(\mathcal{V}_{x}\) admits an \(\mathcal{O}^{\prime}\)-lattice \(\mathcal{V}_{x}^{\circ}\) such that \(\mathcal{V}_{x}^{\circ}/\varpi^{\prime}\mathcal{V}_{x}^{\circ}\simeq\bar{r}_{p} ^{\prime}\). It follows that \(\underline{x}^{\prime}:=(\mathcal{V}_{x}^{\circ},\delta_{1},\delta_{2})\) also defines a point on \(\mathcal{U}_{\bar{r}_{p}^{\prime},\operatorname{reg}}^{\square,\operatorname{tri}}\). Theorem 7.6 for \(\underline{x}^{\prime}\) then implies Theorem 7.6 for \(\underline{x}\).
(2) If \(\mathcal{V}_{x}\) is reducible, i.e. there exists an exact sequence \(0\to\mathcal{V}_{x}^{+}\to\mathcal{V}_{x}\to\mathcal{V}_{x}^{-}\to 0\) of representations of \(\operatorname{Gal}_{\mathbb{Q}_{p}}\). There are two possibilities:
(a) If \(\delta_{1}(p)\in\mathcal{O}^{\prime\times}\), then (7.4.1) produces an exact sequence of Galois representations. In particular, \(\mathcal{R}_{E^{\prime}}(\delta_{1})\) is isomorphic to either \(\mathbb{D}_{\operatorname{rig}}(\mathcal{V}_{x}^{+})\) or \(\mathbb{D}_{\operatorname{rig}}(\mathcal{V}_{x}^{-})\). This will imply that \(\delta_{2}|_{\Delta}\times\delta_{1}|_{\Delta}\cdot\omega^{-1}=1\times\omega^{a}\) or \(\omega^{a+1}\times\omega^{-1}\), proving (2a).
(b) If \(v_{p}(\delta_{1}(p))>0\), this falls in the case \(\mathscr{S}_{+}^{\text{ord}}\) of the classification of trianguline representations in [11, § 1.2]. In particular, \(v_{p}(\delta_{1}(p))=w(\delta_{1}\delta_{2}^{-1})\in\mathbb{N}\), where \[w(\delta_{1}\delta_{2}^{-1}):=\lim_{\begin{subarray}{c}\gamma\in\mathbb{Z}_{p} ^{\times}\\ \gamma\to 1\end{subarray}}\frac{\log\big{(}(\delta_{1}\delta_{2}^{-1})(\gamma)\big{)}}{\log(\chi_{ \operatorname{cycl}}(\gamma))}\]
is (the negative of) the generalized Hodge-Tate weight. (In [12], Colmez calls \(w(\delta_{1}\delta_{2}^{-1})\) the Hodge-Tate weight because in his convention the cyclotomic character has Hodge-Tate weight \(1\).) Put \(k:=w(\delta_{1}\delta_{2}^{-1})+1\). In this case, there is another triangulation \[0\to t^{k-1}\mathcal{R}_{E^{\prime}}(\delta_{2})\to\mathbb{D}_{\rm rig}(\mathcal{V}_{x})\to t^{1-k}\mathcal{R}_{E^{\prime}}(\delta_{1})\to 0,\] which produces precisely the exact sequence \(0\to\mathcal{V}_{x}^{+}\to\mathcal{V}_{x}\to\mathcal{V}_{x}^{-}\to 0\). This in particular shows that \[\varepsilon=\delta_{2}|_{\Delta}\times\delta_{1}|_{\Delta}\cdot\omega^{-1}= \omega^{a-k+2}\times\omega^{k-2}.\] We need to show that \(k-1\) is a slope of \({\rm NP}\left(G_{\bar{\rho}}^{(\varepsilon)}(w_{\star},-)\right)\), by directly exhibiting such a slope. There are two subcases we need to consider.
(i) If \(\delta_{1}|_{(1+p{\mathbb{Z}}_{p})^{\times}}=\delta_{2}|_{(1+p{\mathbb{Z}}_{p })^{\times}}\), then \(w_{\star}=(\delta_{1}\delta_{2}^{-1}\chi_{\rm cycl}^{-1})(\exp(p))-1=w_{k}\). We invoke the compatibility of Atkin-Lehner involution and \(p\)-stabilization with ghost series in Proposition 2.16(2)(3): the \(d_{k}^{\rm Iw}(\omega^{a-k+2}\times 1)\)th slope of \({\rm NP}\left(G_{\bar{\rho}}^{(\varepsilon)}(w_{k},-)\right)\) is precisely \(k-1\) minus the first slope of \({\rm NP}\left(G_{\bar{\rho}}^{(\varepsilon^{\prime\prime})}(w_{k},-)\right)\) with \(s_{\varepsilon^{\prime\prime}}=k-2-a-(k-2-a)=0\). So the latter ghost slope is \(0\), and thus the former ghost slope is \(k-1\), i.e. \(v_{p}(\delta_{1}(p))\) is a slope of \({\rm NP}\left(G_{\bar{\rho}}^{(\varepsilon)}(w_{k},-)\right)\).
(ii) If the minimal positive integer \(m\) such that \(\delta_{1}|_{(1+p^{m}{\mathbb{Z}}_{p})^{\times}}=\delta_{2}|_{(1+p^{m}{\mathbb{Z}}_{p})^{\times}}\) satisfies \(m\geq 2\), then we are in the "halo region"; in particular, \(v_{p}(w_{\star})=\frac{1}{p^{m-2}(p-1)}\). In this case, Definition-Proposition 2.12(4) implies that the \(n\)th slope of \({\rm NP}\left(G_{\bar{\rho}}^{(\varepsilon)}(w_{\star},-)\right)\) is just \(\frac{1}{p^{m-2}(p-1)}\big{(}\deg g_{n}^{(\varepsilon)}(w)-\deg g_{n-1}^{( \varepsilon)}(w)\big{)}\). We compute this explicitly using the formulas in Definition-Proposition 2.12(4) with \(s_{\varepsilon}=\{k-a-2\}\).
    * If \(a+s_{\varepsilon}<p-1\), note that \(p^{m-1}(k-1)-1\equiv k-2\equiv a+s_{\varepsilon}\bmod(p-1)\). So for \(N=\frac{p^{m-1}(k-1)-1-\{k-2\}}{p-1}+1\), we have \({\bf e}_{2N}^{(\varepsilon)}=e_{2}^{*}z^{p^{m-1}(k-1)-1}\). Moreover, we have \(\frac{p^{m-1}(k-1)-1-\{k-2\}}{p-1}+1-\{k-2-a\}-(a+2)\equiv\{k-2\}-\{k-2-a\}-a= 0\pmod{p}\). This implies by (2.12.1) with \(n=2N-1\) and the "otherwise case" (as \(2N-2s_{\varepsilon}\equiv 2a+4\pmod{p}\)), \[\deg g_{2N}^{(\varepsilon)}(w)-\deg g_{2N-1}^{(\varepsilon)}(w)=\deg{\bf e}_{2 N}^{(\varepsilon)}-\Big{\lfloor}\frac{\deg{\bf e}_{2N}^{(\varepsilon)}}{p}\Big{\rfloor}=p^{m-2}(p-1)(k-1).\] So the \(2N\)th slope of \({\rm NP}\left(G_{\bar{\rho}}^{(\varepsilon)}(w_{\star},-)\right)\) is \(k-1\).
    * If \(a+s_{\varepsilon}\geq p-1\), the argument is similar. Still, we put \(N=\frac{p^{m-1}(k-1)-1-\{k-2\}}{p-1}+1\), but \({\bf e}_{2N-1}^{(\varepsilon)}=e_{2}^{*}z^{p^{m-1}(k-1)-1}\). We have a similar congruence \(N+1-\{k-2-a\}-(a+3)\equiv\{k-2\}-\{k-2-a\}-a+3=p\equiv 0\pmod{p}\). This implies by (2.12.2) with \(n=2N-1\) and the "otherwise case" (as \(2N-1-2s_{\varepsilon}\equiv 2a+5\pmod{p}\)) that \[\deg g_{2N-1}^{(\varepsilon)}(w)-\deg g_{2N-2}^{(\varepsilon)}(w)=\deg{\bf e}_{ 2N-1}^{(\varepsilon)}-\Big{\lfloor}\frac{\deg{\bf e}_{2N-1}^{(\varepsilon)}}{p} \Big{\rfloor}=p^{m-2}(p-1)(k-1).\] This means that the \((2N-1)\)th slope of \({\rm NP}\left(G_{\bar{\rho}}^{(\varepsilon)}(w_{\star},-)\right)\) is \(k-1\).
Up to now, we have checked (1)-(3) of Theorem 7.6. Conversely, suppose that \(\delta_{1}|_{\mathbb{Z}_{p}^{\times}}\) and \(\delta_{2}|_{\mathbb{Z}_{p}^{\times}}\) are given as in Theorem 7.6. Put \(w_{\star}:=(\delta_{1}\delta_{2}^{-1}\chi_{\mathrm{cycl}}^{-1})(\exp(p))-1\). Let \(\lambda\) be a slope of \(\mathrm{NP}\left(G_{\bar{\rho}}^{(\varepsilon)}(w_{\star},-)\right)\).
1. If \(\lambda>0\), Theorem 7.6 for the nonsplit representation \(\bar{r}_{p}^{\prime}\) produces an \(E^{\prime}\)-point \(\underline{x}^{\prime}=(x^{\prime},\delta_{1},\delta_{2})\in\mathcal{X}_{\bar{ r}_{p}^{\prime}}^{\square,\mathrm{tri}}\) with \(v_{p}(\delta_{1}(p))=\lambda\). Reversing the argument in (1) gives the needed point of \(\mathcal{X}_{\bar{r}_{p}}^{\square,\mathrm{tri}}\).
2. If \(\lambda=0\), we must have \(\varepsilon=1\times\omega^{a}\). We construct a point on \(\mathcal{X}_{\bar{r}_{p}}^{\square,\mathrm{tri}}\) directly. Lift \(\bar{\alpha}_{i}\in\mathbb{F}^{\times}\) for each \(i=1,2\) to \(\delta_{i}(p)\in\mathcal{O}^{\times}\). Then \(\mathcal{R}_{E^{\prime}}(\delta_{1})\oplus\mathcal{R}_{E^{\prime}}(\delta_{2})\) is the \((\varphi,\Gamma)\)-module of \(\delta_{1}\oplus\delta_{2}\), which reduces to \(\bar{r}_{p}\) automatically, with the correct slope and characters.
This completes the reduction of Theorem 7.6 to the reducible, nonsplit, and generic case.
**Remark 7.12**.:
1. Supposedly, the proof of (2)(b)(ii) should also follow from an analogous compatibility of the Atkin-Lehner involution for ghost series with wild characters. We leave that for interested readers.
2. It is a very interesting question whether the above correspondence of points between \(\mathcal{U}_{\bar{r}_{p},\mathrm{reg}}^{\square,\mathrm{tri}}\) and \(\mathcal{U}_{\bar{r}_{p}^{\prime},\mathrm{reg}}^{\square,\mathrm{tri}}\) can be made "global" at the level of rigid analytic spaces or even at the level of formal schemes; this seems to be rather subtle.
**Assumption 7.13**.: In view of § 7.11, we assume that \(\bar{r}_{p}\) is nonsplit for the rest of this section. In particular, \(\bar{r}_{p}|_{\mathrm{I}_{\mathbb{Q}_{p}}}\simeq\bar{\rho}\). We write \(\bar{r}_{p}:\mathrm{Gal}_{\mathbb{Q}_{p}}\to\mathrm{GL}_{2}(\mathbb{F})\) as
\[\bar{r}_{p}=\begin{pmatrix}\bar{\chi}_{1}&\ast\neq 0\\ 0&\bar{\chi}_{2}\end{pmatrix},\quad\text{with }\bar{\chi}_{1}=\mathrm{ur}( \bar{\alpha}_{1})\cdot\omega_{1}^{a+b+1}\text{ and }\bar{\chi}_{2}=\mathrm{ur}(\bar{\alpha}_{2})\cdot\omega_{1}^{b}. \tag{7.13.1}\]
### Paskunas modules
To relate the study of local ghost series with the triangulline deformation space, we make use of the Paskunas modules in [10] for deformation of \(p\)-adic representations of \(\mathrm{GL}_{2}(\mathbb{Q}_{p})\). As [10] mainly considers the case with a fixed central character, some of our constructions later may be slightly awkward. Similar arguments to remove central character constraints can be found in [1, Appendix A] and [7]. Let \(\zeta:\mathrm{Gal}_{\mathbb{Q}_{p}}\to\mathcal{O}^{\times}\) be a character that induces a character of \(\mathbb{Q}_{p}^{\times}\) by local class field theory.
* Let \(\mathrm{Mod}_{\mathrm{Gal}_{\mathbb{Q}_{p}}}^{\mathrm{pro}}\) be the category of profinite \(\mathcal{O}\)-modules \(V\) with continuous \(\mathrm{Gal}_{\mathbb{Q}_{p}}\)-actions.
* Let \(\mathfrak{C}\) be the category of profinite \(\mathcal{O}\)-modules \(M\) with continuous _right_\(\mathrm{GL}_{2}(\mathbb{Q}_{p})\)-actions for which
* the right \(\mathrm{GL}_{2}(\mathbb{Z}_{p})\)-action on \(M\) extends to a right \(\mathcal{O}[\![\mathrm{GL}_{2}(\mathbb{Z}_{p})]\!]\)-module structure on \(M\), and
* for every vector \(v\) in the Pontryagin dual \(M^{\vee}:=\mathrm{Hom}_{\mathcal{O}}(M,E/\mathcal{O})\) equipped with the induced left \(\mathrm{GL}_{2}(\mathbb{Q}_{p})\)-action, the left \(\mathcal{O}[\mathrm{GL}_{2}(\mathbb{Q}_{p})]\)-submodule generated by \(v\) is of finite length.
* Let \(\mathfrak{C}_{\zeta}\) be the subcategory of \(\mathfrak{C}\) consisting of objects on which \(\mathbb{Q}_{p}^{\times}\) acts by \(\zeta\).
We chose to work with right \(\mathcal{O}[\![\mathrm{GL}_{2}(\mathbb{Q}_{p})]\!]\)-actions on objects of \(\mathfrak{C}\) to match our definition of \(\mathcal{O}[\![\mathrm{K}_{p}]\!]\)-projective augmented modules in Definition 2.2. This can be easily translated from references [10, 11, 12, 13] by considering the inverse action.
There is a natural covariant _modified Colmez functor_
\[\check{\mathbf{V}}_{\zeta}:\mathfrak{C}_{\zeta}\to\mathrm{Mod}_{\mathrm{Gal}_{ \mathbb{Q}_{p}}}^{\mathrm{pro}},\]
which is compatible with taking projective limits and whose evaluation on finite length objects \(M\) is given by \(\check{\mathbf{V}}_{\zeta}(M):=\mathbf{V}(M^{\vee})^{\vee}(\chi_{\text{cycl}}\zeta)\), where \((-)^{\vee}=\operatorname{Hom}_{\mathcal{O}}(-,E/\mathcal{O})\) is the Pontryagin duality and \(\mathbf{V}(-)\) is the functor defined in [10]. In particular, for two characters \(\bar{\eta}_{1},\bar{\eta}_{2}:\mathbb{Q}_{p}^{\times}\to\mathbb{F}^{\times}\) such that \(\bar{\eta}_{1}\bar{\eta}_{2}\bar{\chi}_{\text{cycl}}^{-1}=\zeta\bmod\varpi\),
\[\check{\mathbf{V}}_{\zeta}\Big{(}\operatorname{Ind}_{B(\mathbb{Q}_{p})}^{ \operatorname{GL}_{2}(\mathbb{Q}_{p})}\big{(}\bar{\eta}_{1}\otimes\bar{\eta}_ {2}\bar{\chi}_{\text{cycl}}^{-1}\big{)}^{\vee}\Big{)}\cong\bar{\eta}_{1}.\]
We note that for a different character \(\zeta^{\prime}:\operatorname{Gal}_{\mathbb{Q}_{p}}\to\mathcal{O}^{\times}\),
\[\check{\mathbf{V}}_{\zeta\zeta^{\prime}}(M\otimes\zeta^{\prime}\circ\det) \cong\check{\mathbf{V}}_{\zeta}(M)\otimes\zeta^{\prime}. \tag{7.14.1}\]
We focus on the case of Assumption 7.13. Take the earlier \(\zeta\) to satisfy \(\zeta\equiv\omega^{a+2b}\bmod\varpi\).
Let \(\pi(\bar{r}_{p})\) denote the smooth representation of \(\operatorname{GL}_{2}(\mathbb{Q}_{p})\) over \(\mathbb{F}\) associated to \(\bar{r}_{p}\) by the \(\bmod p\) Langlands correspondence. Explicitly, \(\pi(\bar{r}_{p})\) is the nontrivial extension \(\bar{\pi}_{1}-\bar{\pi}_{2}\) with
\[\bar{\pi}_{1}=\operatorname{Ind}_{B(\mathbb{Q}_{p})}^{\operatorname{GL}_{2}( \mathbb{Q}_{p})}\big{(}\bar{\chi}_{2}\otimes\bar{\chi}_{1}\bar{\chi}_{\text{ cycl}}^{-1}\big{)}\quad\text{and}\quad\bar{\pi}_{2}=\operatorname{Ind}_{B( \mathbb{Q}_{p})}^{\operatorname{GL}_{2}(\mathbb{Q}_{p})}\big{(}\bar{\chi}_{1} \otimes\bar{\chi}_{2}\bar{\chi}_{\text{cycl}}^{-1}\big{)}.\]
In particular, we have
\[\check{\mathbf{V}}_{\zeta}(\pi(\bar{r}_{p})^{\vee})\cong\check{\mathbf{V}}_{ \zeta}(\bar{\pi}_{2}^{\vee}-\bar{\pi}_{1}^{\vee})\cong(\bar{\chi}_{1}-\bar{ \chi}_{2})\cong\bar{r}_{p}.\]
This is independent of the choice of \(\zeta\) and agrees with [13, § 8]; yet [13, § 6.1] seems to have a minor error by swapping \(\bar{\pi}_{1}\) and \(\bar{\pi}_{2}\), which is later corrected in [11].
Let \(\mathbf{1}_{\text{tw}}\) denote \(\mathcal{O}\llbracket u,v\rrbracket\) equipped with a \(\mathbb{Q}_{p}^{\times}\)-action where \(p\) acts by multiplication by \(1+u\) and \(a\in\mathbb{Z}_{p}^{\times}\) acts by multiplication by \((1+v)^{\log(a/\omega(\bar{a}))/p}\); this action extends to an action of \(\operatorname{Gal}_{\mathbb{Q}_{p}}\) via local class field theory.
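Concretely — this unwinding is ours and is only meant to orient the reader — a continuous \(\mathcal{O}\)-algebra homomorphism \(\mathcal{O}\llbracket u,v\rrbracket\to\mathcal{O}^{\prime}\), \(u\mapsto u_{0}\), \(v\mapsto v_{0}\) (with \(u_{0},v_{0}\in\mathfrak{m}_{\mathcal{O}^{\prime}}\)), specializes \(\mathbf{1}_{\text{tw}}\) to the continuous character
\[\mathbb{Q}_{p}^{\times}\to\mathcal{O}^{\prime\times},\qquad p\mapsto 1+u_{0},\qquad a\in\mathbb{Z}_{p}^{\times}\mapsto(1+v_{0})^{\log(a/\omega(\bar{a}))/p},\]
and, for \(p\) odd, these are exactly the continuous characters of \(\mathbb{Q}_{p}^{\times}\) that are trivial on \(\mu_{p-1}\subset\mathbb{Z}_{p}^{\times}\) and take values in \(1+\mathfrak{m}_{\mathcal{O}^{\prime}}\). In this sense \(\mathbf{1}_{\text{tw}}\) serves as a universal twisting module.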
As \(\operatorname{End}_{\operatorname{Gal}_{\mathbb{Q}_{p}}}(\bar{r}_{p})\cong \mathbb{F}\), the deformation problem of \(\bar{r}_{p}\) is representable by a noetherian complete local \(\mathcal{O}\)-algebra \(R_{\bar{r}_{p}}\). Let \(R_{\bar{r}_{p}}^{\zeta}\) denote the quotient parametrizing the deformations of \(\bar{r}_{p}\) with fixed determinant \(\zeta\); let \(\mathfrak{m}_{R_{\bar{r}_{p}}^{\zeta}}\) denote its maximal ideal. Let \(V_{\text{univ}}^{\zeta}\) denote the universal deformation of \(\bar{r}_{p}\) over \(R_{\bar{r}_{p}}^{\zeta}\). It is well known that there is a (noncanonical) isomorphism
\[R_{\bar{r}_{p}}^{\square}\cong R_{\bar{r}_{p}}^{\zeta}\widehat{\otimes}_{ \mathcal{O}}\mathcal{O}\llbracket u,v,z_{1},z_{2},z_{3}\rrbracket,\]
so that the framed and unframed universal deformations of \(\bar{r}_{p}\) satisfy:
\[V_{\text{univ}}^{\zeta}\widehat{\boxtimes}_{\mathcal{O}}\mathbf{1}_{\text{tw}} \widehat{\otimes}_{\mathcal{O}}\mathcal{O}\llbracket z_{1},z_{2},z_{3} \rrbracket\cong V_{\text{univ}}^{\square}.\]
Following [13, § 8], we have the following.
**Theorem 7.15**.: _Keep the notation as above. Let \(\widetilde{P}_{\zeta}\twoheadrightarrow\bar{\pi}_{1}^{\vee}\) be a projective envelope of \(\bar{\pi}_{1}^{\vee}\) in \(\mathfrak{C}_{\zeta}\) and put \(R_{\pi_{1},\zeta}:=\operatorname{End}_{\mathfrak{C}_{\zeta}}(\widetilde{P}_{\zeta})\)._
1. _The_ \(\check{\mathbf{V}}_{\zeta}(\widetilde{P}_{\zeta})\) _can be viewed as a_ \(2\)_-dimensional representation of_ \(\operatorname{Gal}_{\mathbb{Q}_{p}}\) _over_ \(R_{\pi_{1},\zeta}\) _lifting_ \(\bar{r}_{p}\)_; this induces an isomorphism_ \(R_{\bar{r}_{p}}^{\zeta}\xrightarrow{\cong}R_{\pi_{1},\zeta}\)_, and_ \(\check{\mathbf{V}}_{\zeta}(\widetilde{P}_{\zeta})\cong V_{\text{univ}}^{\zeta}\)_._
2. _Define the following object in_ \(\mathfrak{C}\)_:_ (7.15.1) \[\widetilde{P}^{\square}:=\widetilde{P}_{\zeta}\widehat{\boxtimes}_{\mathcal{O}} \mathbf{1}_{\text{tw}}\widehat{\otimes}_{\mathcal{O}}\mathcal{O}\llbracket z_{1},z_ {2},z_{3}\rrbracket,\] _equipped with the tensor product right_ \(\operatorname{GL}_{2}(\mathbb{Q}_{p})\)_-action (which is_ \(\mathcal{O}\llbracket z_{1},z_{2},z_{3}\rrbracket\)_-linear). Then_ \(\widetilde{P}^{\square}\) _carries a natural_ \(R_{\bar{r}_{p}}^{\square}\)_-action from the left that commutes with the right_ \(\operatorname{GL}_{2}(\mathbb{Q}_{p})\)_-action. Moreover,_ \(\widetilde{P}^{\square}\) _does not depend on the choice of_ \(\zeta\)_._
3. _There exists_ \(x\in\mathfrak{m}_{R^{\zeta}_{\bar{r}_{p}}}\setminus\left(\mathfrak{m}^{2}_{R^{\zeta}_{\bar{r}_{p}}}+(\varpi)\right)\) _such that_ \(\widetilde{P}^{\square}\) _is isomorphic to the projective envelope of_ \(\operatorname{Sym}^{a}\mathbb{F}^{\oplus 2}\otimes\det^{b}\) _as a right_ \(\mathcal{O}\llbracket u,x,z_{1},z_{2},z_{3}\rrbracket[\![\operatorname{GL}_{2}( \mathbb{Z}_{p})]\!]\)_-module._
Proof.: (1) is [13, Corollary 8.7]. For (2), the left \(R^{\square}_{\bar{r}_{p}}\)-action comes from the isomorphism \(R^{\zeta}_{\bar{r}_{p}}\cong R_{\pi_{1},\zeta}\) proved in (1). The independence of the choice of \(\zeta\) follows from (7.14.1).
We now prove (3). For \(A=\mathcal{O}\) or \(\mathcal{O}\llbracket x\rrbracket\), let \(\operatorname{Mod}^{\operatorname{fg}}_{A[\operatorname{GL}_{2}(\mathbb{Z}_{p} )],\zeta}\) denote the category of finitely generated right \(A[\![\operatorname{GL}_{2}(\mathbb{Z}_{p})]\!]\)-modules with the scalar \(\mathbb{Z}_{p}^{\times}\) acting by \(\zeta\). By [13, Theorem 5.2], there exists \(x\in\mathfrak{m}_{R^{\zeta}_{\bar{r}_{p}}}\) such that \(x:\widetilde{P}_{\zeta}\to\widetilde{P}_{\zeta}\) is injective and \(\widetilde{P}_{\zeta}/x\widetilde{P}_{\zeta}\) is the projective envelope of \((\operatorname{soc}_{\operatorname{GL}_{2}(\mathbb{Z}_{p})}\!\bar{\pi}_{1})^{\vee}=\operatorname{Sym}^{a}\mathbb{F}^{\oplus 2}\otimes\det^{b}\) in \(\operatorname{Mod}^{\operatorname{fg}}_{\mathcal{O}[\operatorname{GL}_{2}( \mathbb{Z}_{p})],\zeta}\). In addition, [10, Theorem 3.3(iii)] proves that \(x\notin\left(\mathfrak{m}^{2}_{R^{\zeta}_{\bar{r}_{p}}}+(\varpi)\right)\). It then remains to show that \(\widetilde{P}_{\zeta}\) is projective in \(\operatorname{Mod}^{\operatorname{fg}}_{\mathcal{O}\llbracket x\rrbracket[\operatorname{GL}_{2}(\mathbb{Z}_{p})],\zeta}\), as the projectivity is preserved for tensor products of the form in (7.15.1). (Note that the variable \(v\) in (7.15.1) measuring the central twist by \((1+p\mathbb{Z}_{p})^{\times}\) is "absorbed" into the projective envelope as an \(\mathcal{O}\llbracket\operatorname{GL}_{2}(\mathbb{Z}_{p})\rrbracket\)-module.) Choose a character \(\eta\) of \((1+p\mathbb{Z}_{p})^{\times}\) such that \(\zeta|_{(1+p\mathbb{Z}_{p})^{\times}}=\eta^{2}\). Then it is enough to show that \(\widetilde{P}_{\zeta}\otimes\eta^{-1}\circ\det\) is a projective right \(\mathcal{O}\llbracket x\rrbracket[\![H]\!]\)-module with \(H=\operatorname{GL}_{2}(\mathbb{Z}_{p})/(1+p\mathbb{Z}_{p})^{\times}\), or equivalently,
\[\operatorname{Tor}^{\mathcal{O}[\![x]\!][H]}_{>0}(\widetilde{P}_{\zeta}\otimes \eta^{-1}\circ\det,\ \mathbb{F})=0.\]
But this follows immediately from the spectral sequence
\[E^{2}_{\bullet,\bullet}=\operatorname{Tor}^{\mathcal{O}[\![H]\!]}_{\bullet} \Big{(}\operatorname{Tor}^{\mathcal{O}[\![x]\!][H]}_{\bullet}\big{(}\widetilde{ P}_{\zeta}\otimes\eta^{-1}\circ\det,\ \mathcal{O}\llbracket H\rrbracket\big{)},\ \mathbb{F}\Big{)}\ \Rightarrow\ \operatorname{Tor}^{\mathcal{O}[\![x]\!][H]}_{\bullet} \big{(}\widetilde{P}_{\zeta}\otimes\eta^{-1}\circ\det,\ \mathbb{F}\big{)}\]
and the properties of \(\widetilde{P}_{\zeta}/x\widetilde{P}_{\zeta}\) above.
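For the reader's convenience, we indicate how this is deduced (a routine unwinding, supplied here by us and not spelled out above). Since \(\mathcal{O}[\![H]\!]\cong\mathcal{O}[\![x]\!][\![H]\!]/(x)\), the inner Tor in the spectral sequence is computed by the Koszul resolution \(0\to\mathcal{O}[\![x]\!][\![H]\!]\xrightarrow{x}\mathcal{O}[\![x]\!][\![H]\!]\to\mathcal{O}[\![H]\!]\to 0\), giving
\[\operatorname{Tor}^{\mathcal{O}[\![x]\!][H]}_{i}\big{(}\widetilde{P}_{\zeta}\otimes\eta^{-1}\circ\det,\ \mathcal{O}\llbracket H\rrbracket\big{)}\cong\begin{cases}(\widetilde{P}_{\zeta}/x\widetilde{P}_{\zeta})\otimes\eta^{-1}\circ\det&\text{if }i=0,\\ 0&\text{if }i\geq 1,\end{cases}\]
where the vanishing for \(i=1\) uses that \(x\) acts injectively on \(\widetilde{P}_{\zeta}\). The spectral sequence therefore collapses to \(\operatorname{Tor}^{\mathcal{O}[\![H]\!]}_{\bullet}\big{(}(\widetilde{P}_{\zeta}/x\widetilde{P}_{\zeta})\otimes\eta^{-1}\circ\det,\ \mathbb{F}\big{)}\), which vanishes in positive degrees because \((\widetilde{P}_{\zeta}/x\widetilde{P}_{\zeta})\otimes\eta^{-1}\circ\det\) is a projective \(\mathcal{O}[\![H]\!]\)-module: it is the \(\eta^{-1}\circ\det\)-twist of the projective envelope described above, and the twist makes \((1+p\mathbb{Z}_{p})^{\times}\) act trivially.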
**Remark 7.16**.:
1. It is proved in [1, Theorem 6.18] that \(\widetilde{P}_{\zeta}\widehat{\boxtimes}_{\mathcal{O}}\mathbf{1}_{\operatorname {tw}}\) is isomorphic to the projective envelope of \(\pi_{1}^{\vee}\) in \(\mathfrak{C}\).
2. It is tempting to use the "less-heavy" tool of patched completed homology of Caraiani-Emerton-Gee-Geraghty-Paskunas-Shin in [1] and the globalization process therein to reproduce the above construction instead of using the Paskunas module. Unfortunately, we do not know how to implement this idea. The main difficulty is that, while [1] provides a "minimal patching" in the sense that the patched module is of rank \(1\) over the patched version of the local Galois deformation ring \(R_{\infty}[1/p]\), to invoke our local ghost Theorem 2.7, we need the patched completed homology to be the projective envelope of a Serre weight as an \(S_{\infty}[\![\operatorname{GL}_{2}(\mathbb{Z}_{p})]\!]\)-module. So we would need a certain mod-\(p\)-multiplicity-one assumption that compares \(S_{\infty}\) with \(R_{\infty}\), which does not seem to be available.
### Comparison with trianguline deformation space
Continue to consider the \(\bar{r}_{p}\) in (7.13.1). We apply Emerton's locally analytic Jacquet functor [10] to \(\widetilde{P}^{\square}\in\mathfrak{C}\) and compare it with the trianguline deformation space \(\mathcal{X}^{\square,\operatorname{tri}}_{\bar{r}_{p}}\). In a nutshell, we will prove that the reduced eigenvariety \(\operatorname{Eig}(\widetilde{P}^{\square})^{\operatorname{red}}\) associated to \(\widetilde{P}^{\square}\) is isomorphic to \(\mathcal{X}^{\square,\operatorname{tri}}_{\bar{r}_{p}}\) and the \(U_{p}\)-action on \(\operatorname{Eig}(\widetilde{P}^{\square})\) corresponds to the universal character \(\delta_{2}(p)^{-1}\) on \(\mathcal{X}^{\square,\operatorname{tri}}_{\bar{r}_{p}}\).
We first recall the formal part of the construction from [1, SS 3] and [1, SS A.4]. Write \(S^{\square}:=\mathcal{O}\llbracket u,x,z_{1},z_{2},z_{3}\rrbracket\), viewed as a natural subring of \(R^{\square}_{\overline{r_{p}}}\), which induces a morphism
\[\mathrm{pr}^{\square}:\mathcal{X}^{\square}_{\overline{r_{p}}}\to\mathcal{S}^{ \square}:=\mathrm{Spf}(S^{\square})^{\mathrm{rig}}.\]
Consider the Schikhof dual of \(\widetilde{P}^{\square}\):
\[\Pi^{\square}:=\mathrm{Hom}^{\mathrm{cont}}_{\mathcal{O}}\big{(}\widetilde{P}^{ \square},E\big{)}.\]
Applying the locally analytic Jacquet functor construction of Emerton [1], we obtain
(7.17.1)
which may be viewed as a coherent sheaf over the Stein space \(\mathcal{X}^{\square}_{\overline{r_{p}}}\times\mathcal{T}\) that further induces a _coherent_ sheaf \(\mathrm{pr}^{\square}_{*}\mathcal{M}^{\square}\) over \(\mathcal{S}^{\square}\times\mathcal{T}\) (where \(\mathcal{T}=(\mathbb{G}^{\mathrm{rig}}_{m})^{2}\times\widetilde{\mathcal{W}}\) is defined in (7.3.1)). Here,
* \((\Pi^{\square})^{R^{\square}_{\overline{r_{p}}}\text{-an}}\subseteq(\Pi^{ \square})^{S^{\square}\text{-an}}\) are respectively locally \(R^{\square}_{\overline{r_{p}}}\)-analytic and \(S^{\square}\)-analytic vectors as defined in [1, Definition 3.2], and they are equal by [1, Proposition 3.8] as \(\widetilde{P}^{\square}\) is finitely generated over \(S^{\square}\llbracket\mathrm{GL}_{2}(\mathbb{Z}_{p})\rrbracket\);
* \(J_{B}(-)\) is the locally analytic Jacquet functor of Emerton defined in [1] (with respect to the lower triangular matrices to match our computation with the setup in SS 2.4, which further agrees with [1]);
* \((-)^{\prime}_{b}\) is the strong dual for Frechet spaces; and
* \(\mathrm{swap}:\mathcal{T}\to\mathcal{T}\) is the morphism swapping two factors, i.e. sending \((\delta_{1},\delta_{2})\mapsto(\delta_{2},\delta_{1})\). (This is inserted because we used the locally analytic Jacquet functor relative to the lower triangular Borel subgroup, in contrast to [1] and [1] where the upper triangular Borel subgroup are used.)
**Theorem 7.18**.: _Let \(\mathrm{Eig}(P^{\square})\) denote the schematic support of \(\mathcal{M}^{\square}\) over \(\mathcal{X}^{\square}_{\overline{r_{p}}}\times\mathcal{T}\)._
1. _The space_ \(\mathrm{Eig}(P^{\square})\) _is contained in the subspace of_ \(\mathcal{X}^{\square}_{\overline{r_{p}}}\times\mathcal{T}\) _consisting of points_ \((x,\delta_{1},\delta_{2})\) _for which_ \(\det(\mathcal{V}_{x})\) _corresponds to_ \(\delta_{1}\delta_{2}\) _under the local class field theory._
2. _The reduced subscheme of_ \(\mathrm{Eig}(P^{\square})\) _is precisely the trianguline deformation space_ \(\mathcal{X}^{\square,\mathrm{tri}}_{\overline{r_{p}}}\) _(Definition_ 7.4_)._
Proof.: (1) is clear because (if \(\zeta(p)=\zeta(1+p)=1\)), the right actions of \(\left(\begin{smallmatrix}p&0\\ 0&p\end{smallmatrix}\right)\) and the diagonal \(\mathbb{Z}_{p}^{\times}\) on \(\widetilde{P}^{\square}\) are precisely given on \(\mathbf{1}_{\mathrm{tw}}\), which agrees with the \(\mathcal{O}\llbracket u,v\rrbracket\)-action as described just before Theorem 7.15.
(2) is proved at the beginning of [1, Page 134] (except that we have the framing variables, and we used the lower triangular Borel subgroup for the locally analytic Jacquet functor). We summarize the gist for the benefit of the readers.
At an \(E^{\prime}\)-point \(x=(\mathcal{V}_{x},\delta_{1,x},\delta_{2,x})\in\mathcal{X}^{\square}_{ \overline{r_{p}}}\times\mathcal{T}\), let \(\mathfrak{p}_{x}\subseteq R^{\square}_{\overline{r_{p}}}\) be the corresponding prime ideal. Then \(\Pi^{\square}[\mathfrak{p}_{x}]=\pi(\mathcal{V}_{x})\) is the \(p\)-adic Banach space representation over \(E^{\prime}\) attached to \(\mathcal{V}_{x}\). So \(x\) lies in \(\mathcal{X}^{\square,\mathrm{tri}}_{\overline{r_{p}}}\) if and only if there is a \((\mathbb{Q}_{p}^{\times})^{2}\)-embedding
\[\delta_{2,x}\times\delta_{1,x}\hookrightarrow J_{\bar{B}}\big{(}\Pi^{\square,R ^{\square}_{\overline{r_{p}}}\text{-an}}[\mathfrak{p}_{x}]\big{)}=J_{\bar{B}}( \pi(\mathcal{V}_{x})^{\mathrm{an}}).\]
(Note that, comparing to [1] where \(J_{B}(-)\) is used, the lower triangular locally analytic Jacquet functor has the effect of "swapping" two factors.) By the description of locally analytic vectors for \(p\)-adic local Langlands correspondence [11, 1] (and the full power of \(p\)-adic local Langlands correspondence), there is an embedding \(\mathcal{U}^{\square,\mathrm{tri}}_{\overline{r_{p}},\mathrm{reg}}\hookrightarrow \mathrm{Eig}(P^{\square})\).
Applying a typical construction of eigenvarieties shows that points in \(\mathcal{U}_{\bar{r}_{p},\text{reg}}^{\square,\text{tri}}\) are also Zariski-dense and accumulating in \(\text{Eig}(P^{\square})\). This completes the proof that \(\mathcal{X}_{\bar{r}_{p}}^{\square,\text{tri}}\) is isomorphic to the reduced subscheme of \(\text{Eig}(P^{\square})\).
**Remark 7.19**.: In fact, one can prove that, in our case, \(\text{Eig}(P^{\square})=\mathcal{X}_{\bar{r}_{p}}^{\square,\text{tri}}\).
### Relating locally analytic Jacquet functor with local ghost theorem I
We will deduce Theorem 7.6 by applying the local ghost Theorem 2.7 to \(\widetilde{P}^{\square}\) with all possible evaluations of the formal variables \(u,x,z_{1},z_{2},z_{3}\). For this, we need an intermediate step to relate the characteristic power series of abstract \(p\)-adic forms in the local ghost theorem with the abstract construction of eigenvarieties in § 7.17. This is essentially explained in [11, Proposition 4.2.36]: one may compute the locally analytic Jacquet functor when \(\widetilde{P}^{\square}\) is a finite projective \(S^{\square}[\![\text{K}_{p}]\!]\)-module, using the eigenvariety machine of Buzzard.
Let \(\mathfrak{d}_{\bar{N}}\) denote the _right_ ideal of \(\mathcal{O}[\![\text{Iw}_{p}]\!]\) generated by \(\big{[}\left(\begin{smallmatrix}1&0\\ p&1\end{smallmatrix}\right)\big{]}-1\); then by Iwasawa decomposition, we may write
(7.20.1)
where the \(\mathcal{D}_{0}\big{(}\mathbb{Z}_{p};-)\) is the space of measures on \(\mathbb{Z}_{p}\), dual to \(\mathcal{C}^{0}(\mathbb{Z}_{p};-)\). Here the induced left \(\text{Iw}_{p}\)-action on the right hand side of (7.20.1) extends to an action of \(\mathbf{M}_{1}=\left(\begin{smallmatrix}\mathbb{Z}_{p}&\mathbb{Z}_{p}\\ p\mathbb{Z}_{p}&\mathbb{Z}_{p}^{\times}\end{smallmatrix}\right)^{\text{det}\neq 0}\) given by, for \(\left(\begin{smallmatrix}\alpha&\beta\\ \gamma&\delta\end{smallmatrix}\right)\in\mathbf{M}_{1}\) with \(\alpha\delta-\beta\gamma=p^{r}d\) for \(d\in\mathbb{Z}_{p}^{\times}\),
Note that (after tensoring with \(\mathcal{O}[\![w]\!]^{(\varepsilon)}\)) this is precisely dual to the right \(\mathbf{M}_{1}\)-action on \(\mathcal{C}^{0}\big{(}\mathbb{Z}_{p};\mathcal{O}[\![w]\!]^{(\varepsilon)}\big{)}\) given by (2.4.4). We define the space of _abstract \(p\)-adic distributions_ associated to \(\widetilde{P}^{\square}\) to be
\[\text{S}^{\vee}_{\widetilde{P}^{\square},p\text{-adic}}:=\widetilde{P}^{ \square}\widehat{\otimes}_{\mathcal{O}[\![\text{Iw}_{p}]\!]}\mathcal{D}_{0} \big{(}\mathbb{Z}_{p};\,\mathcal{O}[\![(\mathbb{Z}_{p}^{\times})^{2}]\!]\big{)},\]
equipped with the infinite product topology (for which it is automatically _compact_). Then we have a tautological isomorphism (from the tensor-hom adjunction)
(7.20.2)
Define an \(S^{\square}[\![(\mathbb{Z}_{p}^{\times})^{2}]\!]\)-linear operator \(U_{p}^{\vee}\) on \(\text{S}^{\vee}_{\widetilde{P}^{\square},p\text{-adic}}\) given by (choosing a coset decomposition \(\text{Iw}_{p}\big{(}\begin{smallmatrix}p^{-1}&0\\ 0&1\end{smallmatrix}\big{)}\text{Iw}_{p}=\coprod_{j=0}^{p-1}v_{j}\text{Iw}_{p}\), e.g. \(v_{j}=\big{(}\begin{smallmatrix}p^{-1}&0\\ j&1\end{smallmatrix}\big{)}\) and \(v_{j}^{-1}=\big{(}\begin{smallmatrix}p&0\\ -jp&1\end{smallmatrix}\big{)}\)),
\[U_{p}^{\vee}(x\otimes\mu):=\sum_{j=0}^{p-1}xv_{j}\otimes v_{j}^{-1}\mu\qquad \text{for $x\in\widetilde{P}^{\square}$ and $\mu\in\mathcal{D}_{0}\big{(}\mathbb{Z}_{p};\,\mathcal{O}[\![(\mathbb{Z}_{p}^ {\times})^{2}]\!]\big{)}$.}\]
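One checks directly that \(U_{p}^{\vee}\) does not depend on the choice of coset representatives (a standard verification, recorded here for convenience): if \(v_{j}\) is replaced by \(v_{j}\gamma_{j}\) with \(\gamma_{j}\in\text{Iw}_{p}\), then, since the completed tensor product is taken over \(\mathcal{O}[\![\text{Iw}_{p}]\!]\),
\[x(v_{j}\gamma_{j})\otimes(v_{j}\gamma_{j})^{-1}\mu=(xv_{j})\gamma_{j}\otimes\gamma_{j}^{-1}(v_{j}^{-1}\mu)=xv_{j}\otimes v_{j}^{-1}\mu.\]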
Applying an argument similar to [11, § 2.10] (or essentially Buzzard's original eigenvariety machine in [10]), we may define a characteristic power series for the \(S^{\square}[\![(\mathbb{Z}_{p}^{\times})^{2}]\!]\)-linear \(U_{p}^{\vee}\)-action on \(\text{S}^{\vee}_{\widetilde{P}^{\square},p\text{-adic}}\):
\[C_{\widetilde{P}^{\square}}(t)=1+c_{1}t+c_{2}t^{2}+\cdots\in S^{\square}[\![( \mathbb{Z}_{p}^{\times})^{2}]\!][\![t]\!].\]
Let \(\widetilde{\mathrm{Spc}}(\widetilde{P}^{\square})\) denote the hypersurface of \(\mathcal{S}^{\square}\times\widetilde{\mathcal{W}}\times\mathbb{G}_{m}^{\mathrm{ rig}}\) cut out by \(C_{\widetilde{P}^{\square}}(t)\). Then Buzzard's general eigenvariety machine [10] outputs a coherent sheaf \(\mathcal{N}^{\square}\) on \(\widetilde{\mathrm{Spc}}(\widetilde{P}^{\square})\). On the other hand, the left \(R^{\square}_{\widetilde{r}_{p}}\)-action on \(\widetilde{P}^{\square}\) (extending the \(S^{\square}\)-action) induces an action of \(R^{\square}_{\widetilde{r}_{p}}\) on the coherent sheaf \(\mathcal{N}^{\square}\) over \(\mathcal{S}^{\square}\times\widetilde{\mathcal{W}}\times\mathbb{G}_{m}^{ \mathrm{rig}}\). Considering the image of \(R^{\square}_{\widetilde{r}_{p}}\) in the endomorphism algebra \(\mathrm{End}_{\widetilde{\mathrm{Spc}}(\widetilde{P}^{\square})}(\mathcal{N}^{\square})\) induces a coherent sheaf \(\mathcal{M}^{\square\prime}\) on \(\mathcal{X}^{\square}_{\widetilde{r}_{p}}\times\widetilde{\mathcal{W}}\times \mathbb{G}_{m}^{\mathrm{rig}}\) whose pushforward along \(\mathcal{X}^{\square}_{\widetilde{r}_{p}}\to\mathcal{S}^{\square}\) is isomorphic to \(\mathcal{N}^{\square}\).
In fact, \(\mathcal{M}^{\square\prime}\) is essentially the same as \(\mathcal{M}^{\square}\) of (7.17.1) in the following sense. By Theorem 7.18(1), \(\mathcal{M}^{\square}\) is supported on the subspace
\[\mathcal{Z}=\big{\{}(x,\delta_{1},\delta_{2})\in\mathcal{X}^{\square}_{ \widetilde{r}_{p}}\times\mathcal{T}\;\big{|}\;\mathrm{det}\mathcal{V}_{x}(p)= \delta_{1}(p)\delta_{2}(p)\big{\}}. \tag{7.20.3}\]
The natural map
(7.20.4)
induces an isomorphism \(\iota:\mathcal{Z}\xrightarrow{\cong}\mathcal{X}^{\square}_{\widetilde{r}_{p}} \times\widetilde{\mathcal{W}}\times\mathbb{G}_{m}^{\mathrm{rig}}\). Then \(\iota^{*}\mathcal{M}^{\square\prime}\cong\mathcal{M}^{\square}\); in particular, the reduced subscheme of the support of \(\mathcal{M}^{\square\prime}\) is precisely \(\mathcal{X}^{\square,\mathrm{tri}}_{\widetilde{r}_{p}}\) by Theorem 7.18. Here we point out three subtleties in normalizations:
1. The \(U_{p}^{\vee}\)-operator is associated to the double coset \(\mathrm{Iw}_{p}\big{(}\begin{smallmatrix}p^{-1}&0\\ 0&1\end{smallmatrix}\big{)}\mathrm{Iw}_{p}\), and the zeros of \(C_{\widetilde{P}^{\square}}(t)\) give the _reciprocals_ of the \(U_{p}^{\vee}\)-eigenvalues;
2. the swapping of \(\delta_{1}\) and \(\delta_{2}\) caused by taking \(J_{\bar{B}}(-)\) as opposed to \(J_{B}(-)\); and
3. another twist by the cyclotomic character is built into the theory of locally analytic Jacquet functors.
### Relating locally analytic Jacquet functor with local ghost theorem II
It remains to relate \(C_{\widetilde{P}^{\square}}(t)\) and the slopes appearing in the local ghost Theorem 2.7. For each homomorphism \(y^{*}:S^{\square}=\mathcal{O}\llbracket u,x,z_{1},z_{2},z_{3}\rrbracket\to \mathcal{O}^{\prime}\), write \(\widetilde{P}_{y}:=\widetilde{P}^{\square}\widehat{\otimes}_{S^{\square},y^{*} }\mathcal{O}^{\prime}\). Then Theorem 7.15(3) implies that \(\widetilde{P}_{y}\) is a primitive \(\mathcal{O}\llbracket\mathrm{K}_{p}\rrbracket\)-projective augmented module of type \(\bar{\rho}\), where the conditions (2) and (3) of Definition 2.2 are clear from (7.15.1).
For a relevant character \(\varepsilon\) of \(\Delta^{2}\), recall there is a natural quotient map
(7.21.1)
for \(\alpha,\delta\in\mathbb{Z}_{p}^{\times}\). Note that this quotient map is a twist of (7.1.1). The homomorphism (7.21.1) together with \(y^{*}\) defines an embedding
\[y\otimes\varepsilon:\mathcal{W}^{(\varepsilon)}_{\mathcal{O}^{\prime}} \hookrightarrow\mathcal{S}^{\square}\times\widetilde{\mathcal{W}}.\]
The isomorphism (7.20.2) implies a canonical \(\mathcal{O}^{\prime}\llbracket w\rrbracket\)-linear isomorphism
\[\mathrm{S}^{\vee}_{\widetilde{P}^{\square},\text{$p$-adic}}\otimes_{S^{\square} \llbracket(\mathbb{Z}_{p}^{\times})^{2}\rrbracket,y^{*}\otimes\varepsilon^{*}} \mathcal{O}^{\prime}\llbracket w\rrbracket^{(\varepsilon)}\quad\cong\quad \mathrm{Hom}_{\mathcal{O}^{\prime}\llbracket w\rrbracket^{(\varepsilon)}}\; \Big{(}\mathrm{S}^{(\varepsilon)}_{\widetilde{P}^{\square}_{y},\text{$p$-adic}}, \mathcal{O}^{\prime}\llbracket w\rrbracket^{(\varepsilon)}\Big{)}, \tag{7.21.2}\]
which can be expressed in terms of a pairing: for \(x\in\widetilde{P}^{\square}\), \(\mu\in\mathcal{D}_{0}\big{(}\mathbb{Z}_{p};\mathcal{O}^{\prime}\llbracket w \rrbracket^{(\varepsilon)}\big{)}\), and \(\varphi\in\mathrm{S}^{(\varepsilon)}_{\widetilde{P}^{\square}_{y},p\text{-adic}}\),
\[\big{\langle}\varphi,x\otimes\mu\big{\rangle}:=\langle\varphi(x),\mu\rangle.\]
We deduce the compatibility of the \(U_{p}^{\vee}\)-operator on the left hand side of (7.21.2) with the dual of the \(U_{p}\)-action on the right hand side as follows: with the notation as above and \(v_{j}=\big{(}\begin{smallmatrix}p^{-1}&0\\ j&1\end{smallmatrix}\big{)}\) for \(j=0,\dots,p-1\),
\[\langle U_{p}(\varphi),x\otimes\mu\rangle =\langle U_{p}(\varphi)(x),\mu\rangle=\Big{\langle}\sum_{j=0}^{p- 1}\varphi(xv_{j})|_{v_{j}^{-1}},\,\mu\Big{\rangle}=\Big{\langle}\sum_{j=0}^{p -1}\varphi(xv_{j}),\,v_{j}^{-1}\mu\Big{\rangle}\] \[=\Big{\langle}\varphi,\,\sum_{j=0}^{p-1}xv_{j}\otimes v_{j}^{-1} \mu\Big{\rangle}=\langle\varphi,\,U_{p}^{\vee}(x\otimes\mu)\rangle.\]
This in particular means that, under the map \(y^{*}\otimes\varepsilon^{*}:S^{\square}\llbracket(\mathbb{Z}_{p}^{\times})^{2 }\rrbracket\to\mathcal{O}^{\prime}\llbracket w\rrbracket^{(\varepsilon)}\), we have an identity of characteristic power series:
\[(y^{*}\otimes\varepsilon^{*})\big{(}C_{\widetilde{P}^{\square}}(t)\big{)}=C_{ \widetilde{P}^{\square}_{y}}^{(\varepsilon)}(w,t). \tag{7.21.3}\]
Write \(\mathrm{Spc}^{(\varepsilon)}(\widetilde{P}^{\square}_{y})\) for the zero locus of \(C_{\widetilde{P}^{\square}_{y}}^{(\varepsilon)}(w,t)\) inside \(\mathcal{W}^{(\varepsilon)}\times\mathbb{G}_{m}^{\mathrm{rig}}\). Then \((y\otimes\varepsilon)^{-1}\big{(}\widetilde{\mathrm{Spc}}(\widetilde{P}^{ \square})\big{)}=\mathrm{Spc}^{(\varepsilon)}(\widetilde{P}^{\square}_{y})\).
### Proof of Theorem 7.6
Now, we conclude the proof of Theorem 7.6. By the discussion in § 7.11, we may assume that \(\bar{r}_{p}\) is reducible, nonsplit and generic with \(a\in\{2,\dots,p-5\}\) and \(b=0\). Let \(\underline{x}=(x,\delta_{1},\delta_{2})\in\mathcal{X}_{\bar{r}_{p}}^{\square, \mathrm{tri}}\) be an \(E^{\prime}\)-point; set \(w_{\star}:=(\delta_{1}\delta_{2}^{-1}\chi_{\mathrm{cycl}}^{-1})(\exp(p))-1\) and \(\varepsilon=\delta_{2}|_{\Delta}\times\delta_{1}|_{\Delta}\cdot\omega^{-1}\), which is relevant as already shown in § 7.11. We need to show that \(-v_{p}(\delta_{2}(p))\) is equal to a slope appearing in \(\mathrm{NP}\left(G_{\bar{\rho}}^{(\varepsilon)}(w_{\star},-)\right)\).
The argument is summarized by the following diagram:
(7.22.1)
By Proposition 7.5(5), we may assume that \(\delta_{2}|_{(1+p\mathbb{Z}_{p})^{\times}}\) is trivial. Write \(y\) for the image of \(\underline{x}\) in \(\mathcal{S}^{\square}\) and let \(y^{*}:S^{\square}\to E^{\prime}\) be the induced map. Then the image of \(\underline{x}\) in \(\mathrm{Supp}(\mathrm{pr}_{*}^{\square}\mathcal{M}^{\square})\) is precisely given by \((y,\delta_{1},\delta_{2})\). In particular, the map taking the value of \(\delta_{2}(p)\) on \(\mathcal{X}_{\bar{r}_{p}}^{\square,\mathrm{tri}}\) factors through \(\mathrm{Supp}(\mathrm{pr}_{*}^{\square}\mathcal{M}^{\square})\).
As explained in § 7.20, the image of \(\underline{x}\) in \(\widetilde{\mathrm{Spc}}(\widetilde{P}^{\square})\) admits a cyclotomic twist from (7.20.4); so it is \(\underline{x}^{\prime}:=(y,\delta_{2},\delta_{1}\chi_{\mathrm{cycl}}^{-1})\). In particular, the image of \(\underline{x}^{\prime}\) in \(\mathcal{S}^{\square}\times\widetilde{\mathcal{W}}\) is precisely \(y\otimes\varepsilon(w_{\star})\) with \(w_{\star}=\delta_{1}\chi_{\mathrm{cycl}}^{-1}(\exp(p))-1\) (recall that \(\delta_{2}|_{(1+p\mathbb{Z}_{p})^{\times}}\) is trivial) and \(\varepsilon=\delta_{2}|_{\Delta}\times\delta_{1}|_{\Delta}\cdot\omega^{-1}\). So \(v_{p}(\delta_{2}(p))\) at \(\underline{x}^{\prime}\) can be read off from \(\mathrm{Spc}^{(\varepsilon)}(\widetilde{P}_{y}^{\square})\). By the local ghost Theorem 2.7, \(-v_{p}(\delta_{2}(p))\) is a slope of \(\mathrm{NP}\left(G_{\bar{\rho}}^{(\varepsilon)}(w_{\star},-)\right)\). This proves Theorem 7.6 except for (3).
For Theorem 7.6(3), we may twist the point \(x\) so that \(\delta_{1}(p)\delta_{2}(p)=1\); this translates to requiring that \(\left(\begin{smallmatrix}p&0\\ 0&p\end{smallmatrix}\right)\) acts trivially on \(\widetilde{P}^{\square}\). As argued above, it suffices to show that for the given \(k\), all slopes \(\frac{k-2}{2}\) appearing in \(\mathrm{NP}\left(C_{\widetilde{P}_{y}^{\square}}^{(\varepsilon)}(w_{k},-)\right)\) (with multiplicity \(d_{k}^{\mathrm{new}}(\varepsilon_{1})\)) genuinely come from the zeros \(\pm p^{-(k-2)/2}\) of \(C_{\widetilde{P}_{y}^{\square}}^{(\varepsilon)}(w_{k},-)\). Indeed, by Corollary 3.8, the multiplicities of the \(U_{p}\)-eigenvalues \(\pm p^{-(k-2)/2}\) on \(\mathrm{S}_{\widetilde{P}_{y}^{\square},k}^{\mathrm{Iw}}(\tilde{\varepsilon}_ {1})\) are \(\frac{1}{2}d_{k}^{\mathrm{new}}(\varepsilon_{1})\) each. Theorem 7.6(3) is proved.
## 8. Bootstrapping and ghost conjecture
In this section, we perform a bootstrapping argument to prove a global ghost conjecture (Theorem 1.3) when the residual Galois representation \(\bar{r}\) is irreducible yet its restriction to \(\mathrm{Gal}_{\mathbb{Q}_{p}}\) is reducible and very generic (\(2\leq a\leq p-5\) and \(p\geq 11\)). The global ghost conjecture implies the following (with the help of [1] and [1]) for the \(\bar{r}\)-localized space of modular forms:
* a version of the Gouvea-Mazur conjecture,
* Gouvea's conjecture on slope distributions, and
* a refined version of Coleman-Mazur-Buzzard-Kilford spectral halo conjecture.
In fact, we adopt an axiomatic approach to proving the global ghost conjecture, borrowing a setup from [1, § 2, § 5] and [1, § 4.2]; this allows our theorem to be applicable to the cohomology of general Shimura varieties associated to a group \(G\) which is essentially \(\mathrm{GL}_{2}(\mathbb{Q}_{p})\) at a \(p\)-adic place.
In this section, let \(\bar{r}_{p}\) be a residual local Galois representation as in Notation 7.2.
### Hecke actions
Instead of developing the theory of Hecke actions for general \(\mathrm{K}_{p}\)-types as in [1, § 4], we focus on the simplest case with one-dimensional representations.
Let \(\varepsilon=\omega^{-s_{\varepsilon}+b}\times\omega^{a+s_{\varepsilon}+b}\) be a relevant character of \(\Delta^{2}\); write \(\varepsilon_{1}=\omega^{-s_{\varepsilon}+b}\) as before, and set \(k_{\varepsilon}:=2+\{a+2s_{\varepsilon}\}\in\{2,\ldots,p\}\). Let \(\widetilde{\mathrm{H}}\) be a \(\mathrm{K}_{p}\)-projective augmented module. For each \(k=k_{\varepsilon}+(p-1)k_{\bullet}\) with \(k_{\bullet}\in\mathbb{Z}_{\geq 0}\), we defined a \(T_{p}\)-operator in § 2.4(4) on \(\mathrm{S}_{k}^{\mathrm{ur}}(\varepsilon_{1})=\mathrm{Hom}_{\mathcal{O}[\![ \mathrm{K}_{p}]\!]}\left(\widetilde{\mathrm{H}},\mathcal{O}[z]^{\leq k-2}\otimes \varepsilon_{1}\circ\det\right)\). There is also a similarly defined operator \(S_{p}\) on \(\mathrm{S}_{k}^{\mathrm{ur}}(\varepsilon_{1})\) given by, for \(\varphi\in\mathrm{S}_{k}^{\mathrm{ur}}(\varepsilon_{1})\) and \(x\in\widetilde{\mathrm{H}}\),
\[S_{p}(\varphi)(x)=\varphi\big{(}x\big{(}\begin{smallmatrix}p^{-1}&0\\ 0&p^{-1}\end{smallmatrix}\big{)}\big{)}.\]
The action of \(S_{p}\) is invertible and commutes with the \(T_{p}\)-operator. So \(\mathrm{S}_{k}^{\mathrm{ur}}(\varepsilon_{1})\) admits an \(\mathcal{O}[T_{p},S_{p}^{\pm 1}]\)-module structure.
Recall Kisin's crystabelian deformation ring from § 7.7. Let \(R_{\bar{r}_{p}}^{\square,1-k,\varepsilon_{1}}\) be the quotient of \(R_{\bar{r}_{p}}^{\square}\) parametrizing crystabelian representations with Hodge-Tate weights \((1-k,0)\) such that \(\mathrm{Gal}(\overline{\mathbb{Q}}_{p}/\mathbb{Q}_{p})\) acts on \(\mathbb{D}_{\mathrm{pcrys}}(-)\) by \(\varepsilon_{1}\) (see Notation 7.1 for the definition of \(\mathbb{D}_{\mathrm{pcrys}}(-)\)). Let \(\mathcal{V}_{1-k}\) denote the universal representation on \(\mathcal{X}_{\bar{r}_{p}}^{\square,1-k,\varepsilon_{1}}:=\big{(}\,\mathrm{Spf} \,R_{\bar{r}_{p}}^{\square,1-k,\varepsilon_{1}}\big{)}^{\mathrm{rig}}\),
then \(\mathbb{D}_{\mathrm{pcrys}}(\mathcal{V}_{1-k})\) is locally free of rank two over \(\mathcal{X}_{\bar{r}_{p}}^{\square,1-k,\varepsilon_{1}}\), equipped with a linear action of crystalline Frobenius \(\phi\). In particular, our condition says that \(\mathcal{V}_{1-k}\otimes\varepsilon_{1}^{-1}\) is crystalline.
Define elements \(s_{p}\in\mathcal{O}\big{(}\mathcal{X}_{\bar{r}_{p}}^{\square,1-k,\varepsilon_ {1}}\big{)}^{\times}\) and \(t_{p}\in\mathcal{O}\big{(}\mathcal{X}_{\bar{r}_{p}}^{\square,1-k,\varepsilon_ {1}}\big{)}\) such that
\[\det(\phi)=p^{k-1}s_{p}^{-1}\quad\text{and}\quad\mathrm{tr}(\phi)=s_{p}^{-1}t_ {p}.\]
As both \(s_{p}\) and \(t_{p}\) take bounded values, we have \(s_{p}\in R_{\bar{r}_{p}}^{\square,1-k,\varepsilon_{1}}\big{[}\frac{1}{p}\big{]} ^{\times}\) and \(t_{p}\in R_{\bar{r}_{p}}^{\square,1-k,\varepsilon_{1}}\big{[}\frac{1}{p}\big{]}\).
Following [5, § 4], we define a natural homomorphism
\[\eta_{k}:\mathcal{O}[T_{p},S_{p}^{\pm 1}]\to R_{\bar{r}_{p}}^{\square,1-k, \varepsilon_{1}}\big{[}\frac{1}{p}\big{]}\quad\text{given by}\quad\eta_{k}(T_{p})=t_{p}, \text{ and }\eta_{k}(S_{p})=s_{p}. \tag{8.1.1}\]
**Definition 8.2**.: Recall \(\mathrm{K}_{p}=\mathrm{GL}_{2}(\mathbb{Z}_{p})\), and the representations \(\bar{\rho}\) and \(\bar{\rho}^{\mathrm{ss}}\) from Notation 7.2. For a Serre weight \(\sigma_{a,b}\), write \(\mathrm{Proj}_{\mathcal{O}[\mathrm{K}_{p}]}(\sigma_{a,b})\) for the projective envelope of \(\sigma_{a,b}\) as an \(\mathcal{O}[\![\mathrm{K}_{p}]\!]\)-module.
An \(\mathcal{O}[\![\mathrm{K}_{p}]\!]\)_-projective arithmetic module of type \(\bar{r}_{p}\)_ is an \(\mathcal{O}[\![\mathrm{K}_{p}]\!]\)-projective augmented module \(\widetilde{\mathrm{H}}\) equipped with a continuous _left_ action of \(R_{\bar{r}_{p}}^{\square}\) satisfying the following conditions.
1. The left \(R_{\bar{r}_{p}}^{\square}\)-action on \(\widetilde{\mathrm{H}}\) commutes with the right \(\mathrm{GL}_{2}(\mathbb{Q}_{p})\)-action.
2. The induced \(\mathrm{K}_{p}\)-action makes \(\widetilde{\mathrm{H}}\) a right \(\mathcal{O}[\![\mathrm{K}_{p}]\!]\)-module isomorphic to * \(\mathrm{Proj}_{\mathcal{O}[\![\mathrm{K}_{p}]\!]}(\sigma_{a,b})^{\oplus m( \widetilde{\mathrm{H}})}\) for some \(m(\widetilde{\mathrm{H}})\in\mathbb{N}\), if \(\bar{r}_{p}|_{\mathrm{I}_{\mathbb{Q}_{p}}}\simeq\bar{\rho}\), or * \(\mathrm{Proj}_{\mathcal{O}[\![\mathrm{K}_{p}]\!]}(\sigma_{a,b})^{\oplus m^{ \prime}(\widetilde{\mathrm{H}})}\oplus\mathrm{Proj}_{\mathcal{O}[\![\mathrm{K}_ {p}]\!]}(\sigma_{p-3-a,a+b+1})^{\oplus m^{\prime\prime}(\widetilde{\mathrm{H }})}\) for some \(m^{\prime}(\widetilde{\mathrm{H}}),m^{\prime\prime}(\widetilde{\mathrm{H}}) \in\mathbb{N}\), if \(\bar{r}_{p}|_{\mathrm{I}_{\mathbb{Q}_{p}}}\simeq\bar{\rho}^{\mathrm{ss}}\) (writing \(m(\widetilde{\mathrm{H}}):=m^{\prime}(\widetilde{\mathrm{H}})+m^{\prime\prime}( \widetilde{\mathrm{H}})\) in this case).
3. For every relevant character \(\varepsilon=\omega^{-s_{\varepsilon}+b}\times\omega^{a+s_{\varepsilon}+b}\) and every \(k=k_{\varepsilon}+(p-1)k_{\bullet}\) with \(k_{\bullet}\in\mathbb{Z}_{\geq 0}\), the induced \(R_{\bar{r}_{p}}^{\square}\)-action on \(\mathrm{S}_{\widetilde{\mathrm{H}},k}^{\mathrm{ur}}(\varepsilon_{1})\) factors through the quotient \(R_{\bar{r}_{p}}^{\square,1-k,\varepsilon_{1}}\). Moreover, the Hecke action of \(\mathcal{O}[T_{p},S_{p}^{\pm 1}]\) on \(\mathrm{S}_{\widetilde{\mathrm{H}},k}^{\mathrm{ur}}(\varepsilon_{1})\) defined in § 8.1 agrees with the composition \[\mathcal{O}[T_{p},S_{p}^{\pm 1}]\xrightarrow{\eta_{k}}R_{\bar{r}_{p}}^{ \square,1-k,\varepsilon_{1}}\big{[}\frac{1}{p}\big{]}\to\mathrm{End}_{E}\left( \mathrm{S}_{\widetilde{\mathrm{H}},k}^{\mathrm{ur}}(\varepsilon_{1})\otimes_{ \mathcal{O}}E\right).\]
When \(\bar{r}_{p}|_{\mathrm{I}_{\mathbb{Q}_{p}}}=\bar{\rho}\), we say that \(\widetilde{\mathrm{H}}\) is _primitive_ if \(m(\widetilde{\mathrm{H}})=1\).
In either case, we call \(m(\widetilde{\mathrm{H}})\) the _multiplicity_ of \(\widetilde{\mathrm{H}}\).
**Remark 8.3**.:
1. In applications, all the \(\mathcal{O}[\![\mathrm{K}_{p}]\!]\)-projective arithmetic modules we encounter are known to satisfy conditions analogous to Definition 8.2(3) for all _crystabelian_ representations. (Such compatibility can be alternatively deduced by comparing to trianguline deformations.) But formulating such a condition is slightly more subtle; we refer to for example [5, Definition 1.5] or [11, § 4.2].
2. Our definition is essentially different from and (in most cases) weaker than the notion of \(\mathcal{O}[\mathrm{GL}_{2}(\mathbb{Q}_{p})]\)-modules \(\mathcal{M}_{\infty}\) with arithmetic actions (see for example, [5, 5, 1, 12, 13]) in the following aspects: (a) their \(\mathcal{M}_{\infty}\) is a module over \(R_{\infty}=R_{\bar{r}_{p}}^{\square}[\![z_{1},\dots,z_{g}]\!]\) for some dummy variables; our \(\widetilde{\mathrm{H}}\) may be viewed as \(\mathcal{M}_{\infty}\) after evaluating the \(z_{i}\)'s; (b) they typically require \(\mathcal{M}_{\infty}\widehat{\otimes}\operatorname{Sym}^{k-2}\mathcal{O}^{\oplus 2}\) to be maximal Cohen-Macaulay over \(R_{\bar{\rho}}^{\square,k-1}[\![z_{1},\dots,z_{g}]\!]\); we do not need this.
3. When \(\bar{r}_{p}|_{\mathrm{I}_{\mathbb{Q}_{p}}}=\bar{\rho}^{\mathrm{ss}}\), it may happen in practice that \(m^{\prime}(\widetilde{\mathrm{H}})\neq m^{\prime\prime}(\widetilde{\mathrm{H}})\).
4. We do not need to require primitive \(\mathcal{O}[\![\mathrm{K}_{p}]\!]\)-projective arithmetic modules to satisfy the two additional conditions in Definition 2.2(2)(3).
**Example 8.4** (Quaternionic case).: We illustrate by an example how our abstract setup appears naturally in the study of cohomology of Shimura varieties.
Fix an absolutely irreducible residual Galois representation \(\bar{r}:\operatorname{Gal}_{\mathbb{Q}}\to\operatorname{GL}_{2}(\mathbb{F})\) such that \(\bar{r}|_{\operatorname{Gal}_{\mathbb{Q}_{p}}}\simeq\bar{r}_{p}\) for a residual local representation that we consider in Notation 7.2. Let \(D\) be a quaternion algebra over \(\mathbb{Q}\) that is unramified at \(p\); we fix an isomorphism \(D\otimes\mathbb{Q}_{p}\cong\operatorname{M}_{2}(\mathbb{Q}_{p})\). Set
\[i(D):=\begin{cases}1&\text{ if }D\otimes_{\mathbb{Q}}\mathbb{R}\cong \operatorname{M}_{2}(\mathbb{R}),\text{ which we call the {\em indefinite} case};\\ 0&\text{ if }D\otimes_{\mathbb{Q}}\mathbb{R}\cong\mathbb{H},\text{ which we call the {\em definite} case}.\end{cases}\]
Fix an open compact subgroup \(K^{p}\subseteq(D\otimes\mathbb{A}_{f}^{p})^{\times}\) such that \(K^{p}\mathrm{K}_{p}\) is _neat_, i.e. \(gD^{\times}g^{-1}\cap K^{p}\mathrm{K}_{p}=\{1\}\) for every \(g\in(D\otimes\mathbb{A}_{f})^{\times}\). For any open compact subgroup \(K^{\prime}_{p}\subseteq\mathrm{K}_{p}\), let \(\operatorname{Sh}_{D^{\times}}(K^{p}K^{\prime}_{p})\) denote the associated (complex) Shimura variety, with \(\mathbb{C}\)-points given by
\[\operatorname{Sh}_{D^{\times}}(K^{p}K^{\prime}_{p})(\mathbb{C})=\begin{cases}D ^{\times}\backslash(D\otimes\mathbb{A}_{f})^{\times}/K^{p}K^{\prime}_{p}&\text{ when }i(D)=0\\ D^{\times}\backslash\mathfrak{H}^{\pm}\times(D\otimes\mathbb{A}_{f})^{\times}/K^ {p}K^{\prime}_{p}&\text{ when }i(D)=1,\end{cases}\]
where \(\mathfrak{H}^{\pm}:=\mathbb{C}\backslash\mathbb{R}\). Then for \(n\in\mathbb{N}\), the tower of subgroups \(\mathrm{K}_{p,n}:=\left(\begin{smallmatrix}1+p^{n}\mathbb{Z}_{p}&p^{n}\mathbb{ Z}_{p}\\ p^{n}\mathbb{Z}_{p}&1+p^{n}\mathbb{Z}_{p}\end{smallmatrix}\right)\subseteq \mathrm{K}_{p}\) defines a tower of Shimura varieties:
\[\cdots\to\operatorname{Sh}_{D^{\times}}(K^{p}\mathrm{K}_{p,n})\to\cdots\to \operatorname{Sh}_{D^{\times}}(K^{p}\mathrm{K}_{p,1})\to\operatorname{Sh}_{D^{ \times}}(K^{p}\mathrm{K}_{p}).\]
The \(i(D)\)_th completed homology group localized at \(\bar{r}\)_ (with \(h=0\))
\[\widetilde{\mathrm{H}}_{\infty}:=\varprojlim_{n}\mathrm{H}_{i(D)}^{\text{Betti }}\big{(}\mathrm{Sh}_{D^{\times}}(K^{p}\mathrm{K}_{p,n})(\mathbb{C}),\mathcal{O }\big{)}_{\bar{r}}^{\text{cplx=1}},\]
where the subscript \(\bar{r}\) indicates localization at the maximal Hecke ideal attached to \(\bar{r}\), and the superscript cplx=1 is void when \(i(D)=0\); when \(i(D)=1\), it means taking the subspace on which the complex conjugation acts by \(1\) (so that we only take a one-dimensional subspace of the associated \(2\)-dimensional Galois representation).
This \(\widetilde{\mathrm{H}}_{\infty}\) is a \(\mathrm{K}_{p}\)-projective augmented module. Indeed, this is obvious if \(i(D)=0\); when \(i(D)=1\), this is because, for any open compact subgroup \(K^{\prime}_{p}\subseteq\operatorname{GL}_{2}(\mathbb{Q}_{p})\), the localization
\[\mathrm{H}_{i}^{\text{Betti}}\big{(}\mathrm{Sh}_{D^{\times}}(K^{p}K^{\prime}_{p })(\mathbb{C}),\mathbb{F}\big{)}_{\bar{r}}=0\text{ unless }i=1, \tag{8.4.1}\]
and the projectivity of \(\widetilde{\mathrm{H}}_{\infty}\) follows from studying the usual Tor-spectral sequence. Moreover, \(\widetilde{\mathrm{H}}_{\infty}\) carries an action of \(R_{\bar{r}}\), the Galois deformation ring of \(\bar{r}\). To make this compatible with our setup of Definition 8.2, we choose an isomorphism \(R_{\bar{r}}^{\Box}\cong R_{\bar{r}}[\![y_{1},y_{2},y_{3}]\!]\) and demand that \(y_{1},y_{2},y_{3}\) act trivially on \(\widetilde{\mathrm{H}}_{\infty}\). This then induces a natural \(R_{\bar{r}_{p}}^{\Box}\)-action on \(\widetilde{\mathrm{H}}_{\infty}\), upgrading \(\widetilde{\mathrm{H}}_{\infty}\) to an \(\mathcal{O}[\![\mathrm{K}_{p}]\!]\)-projective arithmetic module of type \(\bar{r}_{p}\), where the condition Definition 8.2(3) is the usual local-global compatibility of automorphic forms on \(D^{\times}\).
In this case, the spaces of abstract classical forms defined in SS 2.4 recover the usual etale cohomology groups: for \(k\in\mathbb{Z}_{\geq 2}\) and characters \(\varepsilon_{1}\) of \(\Delta\) and \(\psi\) of \(\Delta^{2}\), we have
\[\mathrm{S}^{\mathrm{ur}}_{\widetilde{\mathrm{H}},k}(\varepsilon_{1}) \,\otimes_{\mathcal{O}}E=\mathrm{Hom}_{\mathcal{O}[\mathrm{K}_{p}]} \left(\widetilde{\mathrm{H}},\,E[z]^{\leq k-2}\otimes\varepsilon_{1}\circ \det\right)\] \[\qquad\cong\mathrm{H}^{i(D)}_{\mathrm{Betti}}\big{(}\mathrm{Sh}_{D ^{\times}}(K^{p}\mathrm{K}_{p})(\mathbb{C}),\,\mathrm{Sym}^{k-2}\,\mathcal{H} \otimes\varepsilon_{1}\circ\det\big{)}_{\bar{r}}^{\mathrm{cplx}=1}\cong\big{(} \mathrm{S}^{D}_{k}(K^{p}\mathrm{K}_{p})\otimes\varepsilon_{1}\circ\det\big{)}_ {\bar{r}},\] \[\mathrm{S}^{\mathrm{Iw}}_{\widetilde{\mathrm{H}},k}(\psi) \,\otimes_{\mathcal{O}}E=\mathrm{Hom}_{\mathcal{O}[\mathrm{Iw}_{p}]} \left(\widetilde{\mathrm{H}},\,E[z]^{\leq k-2}\otimes\psi\right)\] \[\qquad\cong\mathrm{H}^{i(D)}_{\mathrm{Betti}}\big{(}\mathrm{Sh}_{D ^{\times}}(K^{p}\mathrm{Iw}_{p})(\mathbb{C}),\,\mathrm{Sym}^{k-2}\,\mathcal{H} \otimes\psi\big{)}_{\bar{r}}^{\mathrm{cplx}=1}\cong\mathrm{S}^{D}_{k}(K^{p} \mathrm{Iw}_{p};\psi)_{\bar{r}}.\]
Here \(\mathcal{H}\) is the usual rank \(2\) local system on \(\mathrm{Sh}_{D^{\times}}(K^{p}K^{\prime}_{p})\) associated to the dual of the standard representation of \(K^{\prime}_{p}\subset\mathrm{K}_{p}\), \(S^{D}_{k}(-)\) denotes the space of automorphic forms on \(\mathrm{Sh}_{D^{\times}}\), and the displayed isomorphisms are isomorphisms of Hecke modules. This example allows us to deduce results on classical modular forms or quaternionic automorphic forms from our abstract setup.
**Remark 8.5**.: Similar constructions can be made for Shimura varieties associated to a more general group \(G\) for which \(G^{\mathrm{ad}}_{\mathbb{Q}_{p}}\) admits a factor isomorphic to \(\mathrm{PGL}_{2,\mathbb{Q}_{p}}\) (after properly treating the central characters), as long as one can prove a vanishing result similar to (8.4.1). (Such techniques are available, for example, in [10].)
**Example 8.6** (Patched version).: Another source of \(\mathcal{O}[\![\mathrm{K}_{p}]\!]\)-projective arithmetic modules is the patched completed homology of Caraiani-Emerton-Gee-Geraghty-Paskunas-Shin in [1]. More precisely, let \(\mathcal{G}_{2}\) be the group scheme over \(\mathbb{Z}\) defined in [11, SS2.1], which contains \(\mathrm{GL}_{2}\times\mathrm{GL}_{1}\) as a subgroup of index \(2\), and admits a natural homomorphism \(\nu:\mathcal{G}_{2}\to\mathrm{GL}_{1}\). Let \(F\) be a CM field with maximal totally real subfield \(F^{+}\), \(\bar{r}:\mathrm{Gal}_{F^{+}}\to\mathcal{G}_{2}(\mathbb{F})\) a residual global representation, and \(G\) a definite unitary group over \(F^{+}\) satisfying the following list of properties:
1. \(\bar{r}^{-1}(\mathrm{GL}_{2}(\mathbb{F})\times\mathbb{F}^{\times})=\mathrm{ Gal}_{F}\), and write \(\bar{r}|_{\mathrm{Gal}_{F}}\) for the representation \(\bar{r}:\mathrm{Gal}_{F}\to\mathrm{GL}_{2}(\mathbb{F})\times\mathbb{F}^{ \times}\xrightarrow{\mathrm{pr}_{1}}\mathrm{GL}_{2}(\mathbb{F})\);
2. \(\nu\circ\bar{r}=\bar{\chi}^{-1}_{\mathrm{cycl}}\), where \(\bar{\chi}_{\mathrm{cycl}}\) is the reduction of the cyclotomic character;
3. there is a \(p\)-adic place \(\mathfrak{p}\) of \(F^{+}\) which splits into \(\tilde{\mathfrak{p}}\tilde{\mathfrak{p}}^{c}\) in \(F\) such that \(F_{\tilde{\mathfrak{p}}}\cong F^{+}_{\mathfrak{p}}\cong\mathbb{Q}_{p}\) and \(\bar{r}|_{\mathrm{Gal}_{F_{\tilde{\mathfrak{p}}}}}\cong\bar{r}_{p}\), for \(\bar{r}_{p}\) we consider in Notation 7.2;
4. \(\bar{r}(\mathrm{Gal}_{F(\zeta_{p})})\) is adequate in the sense of [12, Definition 2.3]; in particular, \(\bar{r}\) is irreducible;
5. \(\overline{F}^{\ker\mathrm{ad}\bar{r}|_{\mathrm{Gal}_{F}}}\) does not contain \(F(\zeta_{p})\).
6. \(G\) is an outer form of \(\mathrm{GL}_{2}\) with \(G\times_{F^{+}}F\cong\mathrm{GL}_{2,F}\);
7. if \(v\) is a finite place of \(F^{+}\), then \(G\) is quasi-split at \(v\);
8. if \(v\) is an infinite place of \(F^{+}\), then \(G(F^{+}_{v})\cong U_{2}(\mathbb{R})\), and
9. \(\bar{r}\) is automorphic in the sense of [12, Definition 5.3.1].
Fix an isomorphism \(G(\mathcal{O}_{F^{+}_{p}})\cong\mathrm{GL}_{2}(\mathbb{Z}_{p})=\mathrm{K}_{p}\), and fix a neat open compact subgroup \(K^{\mathfrak{p}}\subseteq G(\mathbb{A}^{(\mathfrak{p})}_{F^{+},f})\). As above, consider the subgroups \(\mathrm{K}_{p,n}:=\left(\begin{smallmatrix}1+p^{n}\mathbb{Z}_{p}& p^{n}\mathbb{Z}_{p}\\ p^{n}\mathbb{Z}_{p}&1+p^{n}\mathbb{Z}_{p}\end{smallmatrix}\right) \subseteq\mathrm{K}_{p}\) for each \(n\). With these global data, [1] constructed a patched completed homology \(\widetilde{\mathrm{H}}_{\infty}\), that patches the usual completed homology
\[\widetilde{\mathrm{H}}_{0}\big{(}G(\mathbb{Q})\backslash G(\mathbb{A}_{f})/K^{ \mathfrak{p}},\mathcal{O}\big{)}_{\bar{r}}:=\varprojlim_{n\to\infty}\mathrm{H}_{0 }\big{(}G(\mathbb{Q})\backslash G(\mathbb{A}_{f})/K^{\mathfrak{p}}\mathrm{K}_{p,n},\mathcal{O}\big{)}_{\bar{r}}.\]
The additional structure associated to \(\widetilde{\mathrm{H}}_{\infty}\) is summarized by a commutative diagram (8.6.1), whose ingredients are the following:
* \(S_{\infty}=\mathcal{O}\llbracket z_{1},\ldots,z_{h}\rrbracket\) is the ring of formal power series formed by patching variables and framing variables;
* \(\widetilde{\mathrm{H}}_{\infty}\) is a projective right \(S_{\infty}\llbracket\mathrm{K}_{p}\rrbracket\)-module isomorphic to
* \(\mathrm{Proj}_{S_{\infty}\llbracket\mathrm{K}_{p}\rrbracket}(\sigma_{a,b})^{\oplus m(\widetilde{\mathrm{H}})}\) for some \(m(\widetilde{\mathrm{H}})\in\mathbb{N}\), if \(\bar{r}_{p}|_{\mathrm{I}_{\mathbb{Q}_{p}}}\simeq\bar{\rho}\), or
* \(\mathrm{Proj}_{S_{\infty}\llbracket\mathrm{K}_{p}\rrbracket}(\sigma_{a,b})^{ \oplus m^{\prime}(\widetilde{\mathrm{H}})}\oplus\mathrm{Proj}_{S_{\infty} \llbracket\mathrm{K}_{p}\rrbracket}(\sigma_{p-3-a,a+b+1})^{\oplus m^{\prime \prime}(\widetilde{\mathrm{H}})}\) for some \(m^{\prime}(\widetilde{\mathrm{H}}),m^{\prime\prime}(\widetilde{\mathrm{H}}) \in\mathbb{N}\), if \(\bar{r}_{p}|_{\mathrm{I}_{\mathbb{Q}_{p}}}\simeq\bar{\rho}^{\mathrm{ss}}\);
* the right \(\mathrm{K}_{p}\)-action on \(\widetilde{\mathrm{H}}_{\infty}\) extends to a continuous right \(\mathrm{GL}_{2}(\mathbb{Q}_{p})\)-action;
* \(\widetilde{\mathrm{H}}_{\infty}\) is essentially constructed as an inverse limit, carrying an action of the inverse limit of deformation rings \(R^{\square}_{\bar{r},Q_{n}}/\mathfrak{m}^{n}_{Q_{n}}\), which commutes with the right \(\mathrm{GL}_{2}(\mathbb{Q}_{p})\)-action;
* the action of \(S_{\infty}\) on \(\widetilde{\mathrm{H}}_{\infty}\) factors through that of \(\varprojlim_{n}R^{\square}_{\bar{r},Q_{n}}/\mathfrak{m}^{n}_{Q_{n}}\);
* the local deformation ring \(R^{\square}_{\bar{r}_{p}}\) naturally maps to \(\varprojlim_{n}R^{\square}_{\bar{r},Q_{n}}/\mathfrak{m}^{n}_{Q_{n}}\) and acts on \(\widetilde{\mathrm{H}}_{\infty}\);
* one may lift the homomorphism to \(R^{\square}_{\bar{r}_{p}}\) (somewhat arbitrarily).
Then a main result of [2, Theorem 4.1] shows that, for any homomorphism \(y^{\star}:S_{\infty}\to\mathcal{O}^{\prime}\), the module \(\widetilde{\mathrm{H}}_{y^{\star}}:=\widetilde{\mathrm{H}}_{\infty}\widehat{\otimes}_{S_{\infty},y^{\star}}\mathcal{O}^{\prime}\) naturally carries the structure of an \(\mathcal{O}\llbracket\mathrm{K}_{p}\rrbracket\)-projective arithmetic module in the sense of Definition 8.2, by verifying the local-global compatibility condition (3).
Recall the residual representations \(\bar{\rho}\), \(\bar{\rho}^{\prime}\), and \(\bar{\rho}^{\mathrm{ss}}\) from Notation 7.2. The main theorem of this paper is the following.
**Theorem 8.7**.: _Assume that \(p\geq 11\). Let \(\bar{r}_{p}\) be a residual local Galois representation as in Notation 7.2 with \(a\in\{2,\ldots,p-5\}\). Let \(\widetilde{\mathrm{H}}\) be an \(\mathcal{O}\llbracket\mathrm{K}_{p}\rrbracket\)-projective arithmetic module of type \(\bar{r}_{p}\) and multiplicity \(m(\widetilde{\mathrm{H}})\) in the sense of Definition 8.2. Fix a relevant character \(\varepsilon\) of \(\Delta^{2}\). Let \(C^{(\varepsilon)}_{\widetilde{\mathrm{H}}}(w,t)\) denote the characteristic power series for the \(U_{p}\)-action on the space of abstract \(p\)-adic forms associated to \(\widetilde{\mathrm{H}}\), as defined in SS 2.4(2)._
_Then for every \(w_{\star}\in\mathfrak{m}_{\mathbb{C}_{p}}\), the Newton polygon \(\mathrm{NP}\left(C^{(\varepsilon)}_{\widetilde{\mathrm{H}}}(w_{\star},-)\right)\) is the same as the Newton polygon \(\mathrm{NP}\left(G^{(\varepsilon)}_{\bar{\rho}}(w_{\star},-)\right)\), stretched in both \(x\)- and \(y\)-directions by \(m(\widetilde{\mathrm{H}})\), except that the slope zero part of \(\mathrm{NP}\left(C^{(\varepsilon)}_{\widetilde{\mathrm{H}}}(w_{\star},-)\right)\)_
* _has length_ \(m^{\prime}(\widetilde{\mathrm{H}})\) _when_ \(\bar{r}_{p}\) _is split and_ \(\varepsilon=\omega^{b}\times\omega^{a+b}\)_, and_
* _has length_ \(m^{\prime\prime}(\widetilde{\mathrm{H}})\) _when_ \(\bar{r}_{p}\) _is split and_ \(\varepsilon=\omega^{a+b+1}\times\omega^{b-1}\)_._
The Newton polygon described in Theorem 8.7(2) is the convex polygon whose slope multiset is the disjoint union of \(m^{\prime}(\widetilde{\mathrm{H}})\) copies of the slope multiset of \(\mathrm{NP}\left(G^{(\varepsilon)}_{\bar{\rho}}(w_{\star},-)\right)\) and \(m^{\prime\prime}(\widetilde{\mathrm{H}})\) copies of the slope multiset of \(\mathrm{NP}\left(G^{(\varepsilon)}_{\bar{\rho}^{\prime}}(w_{\star},-)\right)\), by Proposition 2.14.
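As a toy illustration of the stretching operation (our own example with made-up slopes): if at some \(w_{\star}\) the ghost Newton polygon has vertices \((0,0)\), \((1,\alpha)\), \((2,\alpha+\beta)\) with \(\alpha<\beta\), then stretching by \(m(\widetilde{\mathrm{H}})=2\) multiplies all vertex coordinates by \(2\),
\[(0,0),\ (1,\alpha),\ (2,\alpha+\beta)\ \rightsquigarrow\ (0,0),\ (2,2\alpha),\ (4,2\alpha+2\beta),\]
so the slope multiset \(\{\alpha,\beta\}\) becomes \(\{\alpha,\alpha,\beta,\beta\}\): each slope is repeated \(m(\widetilde{\mathrm{H}})\) times.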
In view of Example 8.4, Theorem 1.3 follows immediately from this theorem.
Proof.: This is divided into two steps. We first show that at each point \(w_{\star}\in\mathfrak{m}_{\mathbb{C}_{p}}\), all possible slopes of \(\operatorname{NP}\big{(}C_{\widetilde{\operatorname{H}}}^{(\varepsilon)}(w_{\star},-)\big{)}\) are contained in the set of slopes of the Newton polygon of the corresponding ghost series; this comes from "embedding" the eigencurve into the trianguline deformation space (essentially following the standard classicality argument and the global triangulation results [11, 12]). With this at hand, we can "link" together the slopes at various \(w_{\star}\) to determine the multiplicities of each slope appearing in \(\operatorname{NP}\big{(}C_{\widetilde{\operatorname{H}}}^{(\varepsilon)}(w_{\star},-)\big{)}\).
We fix a relevant character \(\varepsilon\) throughout the entire proof.
**Step I:** Let \(\operatorname{Spc}^{(\varepsilon)}(\widetilde{\operatorname{H}})\) denote the hypersurface in \(\mathcal{W}^{(\varepsilon)}\times\mathbb{G}_{m}^{\operatorname{rig}}\) defined by \(C_{\widetilde{\operatorname{H}}}^{(\varepsilon)}(w,t)\); it is the _spectral curve_ in the sense of [10]. Applying the construction of [10, SS5] to the algebra \(R_{\bar{r}_{p}}^{\square}[U_{p}]\) acting on \(\widetilde{\operatorname{H}}\), we obtain an _eigenvariety_ \(\operatorname{Eig}^{(\varepsilon)}(\widetilde{\operatorname{H}})\) over \(\operatorname{Spc}^{(\varepsilon)}(\widetilde{\operatorname{H}})\) (which also lives over \(\mathcal{X}_{\bar{r}_{p}}^{\square}\)). These spaces fit into a natural commutative diagram relating the spectral curve and the eigenvariety.
Consider the following natural embedding
\[\iota^{(\varepsilon)}:\ \operatorname{Eig}^{(\varepsilon)}(\widetilde{\operatorname{H}})\ \ni\ \underline{x}=(x,w_{\star},a_{p})\ \longmapsto\ (x,\delta_{1},\delta_{2}), \tag{8.7.1}\]
where \(\delta_{1}\) and \(\delta_{2}\) are continuous characters of \(\mathbb{Q}_{p}^{\times}\) uniquely determined by the conditions
* \(\delta_{2}(p)=a_{p}^{-1}\), \(\delta_{1}(p)\delta_{2}(p)=\det(\mathcal{V}_{x})(p)\),
* \(\delta_{1}(\exp(p))=w_{\star}\), \(\delta_{2}(\exp(p))=1\), and
* \(\varepsilon=\delta_{2}|_{\Delta}\times\delta_{1}|_{\Delta}\cdot\omega^{-1}\).
We claim that \(\iota^{(\varepsilon)}\big{(}\operatorname{Eig}^{(\varepsilon)}(\widetilde{ \operatorname{H}})^{\operatorname{red}}\big{)}\subseteq\mathcal{X}_{\widetilde {r}_{p}}^{\square,\operatorname{tri}}\). This is a standard argument using the density of classical points; we only sketch the argument.
First we prove this for _very classical points_: an \(E^{\prime}\)-point \(\underline{x}=(x,w_{\star},a_{p})\in\mathcal{X}_{\bar{r}_{p}}^{\square}\times\mathcal{W}^{(\varepsilon)}\times\mathbb{G}_{m}^{\operatorname{rig}}\) is called _very classical_ if \(w_{\star}=w_{k}\) with \(k\geq 2\) and \(k\equiv k_{\varepsilon}\bmod(p-1)\), and if \(v_{p}(a_{p})<\frac{k-2}{2}\). For such a point, the classicality result Proposition 2.11(1) shows that the abstract \(p\)-adic \(U_{p}\)-eigenform associated to the point \(\underline{x}\) belongs to \(\operatorname{S}_{k}^{\operatorname{ur}}(\varepsilon_{1})\). So condition Definition 8.2(3) implies that \(x\) in fact belongs to \(\operatorname{Spf}(R_{\bar{r}_{p}}^{\square,\underline{k},\varepsilon_{1}})^{\operatorname{rig}}\), which further implies that \(\mathcal{V}_{x}\) is crystalline, and the two characters \(\delta_{1}\) and \(\delta_{2}\) upgrade it to a point of \(\mathcal{X}_{\bar{r}_{p}}^{\square,\operatorname{tri}}\), i.e. \(\iota^{(\varepsilon)}(\underline{x})\in\mathcal{X}_{\bar{r}_{p}}^{\square,\operatorname{tri}}\).
It remains to show that very classical points are Zariski dense in each irreducible component of \(\operatorname{Eig}^{(\varepsilon)}(\widetilde{\operatorname{H}})\). As \(\operatorname{Spc}^{(\varepsilon)}(\widetilde{\operatorname{H}})\) is defined by a Fredholm series, [11, Theorem 4.2.2] shows that every irreducible component of \(\operatorname{Spc}^{(\varepsilon)}(\widetilde{\operatorname{H}})\) is defined by a Fredholm series and hence is surjective onto \(\mathcal{W}\). Fix an irreducible component \(\mathcal{Z}\) of \(\operatorname{Eig}^{(\varepsilon)}(\widetilde{\operatorname{H}})\) and pick a point \(\underline{x}=(x,w_{k_{\varepsilon}},a_{p})\). There exists an open affinoid neighborhood \(U\) of \(\underline{x}\) that maps surjectively onto an open neighborhood \(\operatorname{wt}(U)\) of \(w_{k_{\varepsilon}}\in\mathcal{W}^{(\varepsilon)}\) and on which \(v_{p}(\delta_{2}(p))\) is constant. Then there are infinitely many weights \(w_{k}\in\operatorname{wt}(U)\) with \(k\equiv k_{\varepsilon}\bmod(p-1)\) and \(k>2v_{p}(a_{p})+2\), and each point
in \(\operatorname{wt}^{-1}(w_{k})\cap U\) is a very classical point. This means that very classical points are Zariski dense in \(U\) and hence in \(\mathcal{Z}\). Taking Zariski closure proves that \(\iota^{(\varepsilon)}\big{(}\operatorname{Eig}^{(\varepsilon)}(\widetilde{ \operatorname{H}})^{\operatorname{red}}\big{)}\subseteq\mathcal{X}_{\bar{r}_{p} }^{\square,\operatorname{tri}}\).
As a corollary of this claim and Theorem 7.6, for each closed point \(\underline{x}=(w_{\star},a_{p})\in\operatorname{Spc}^{(\varepsilon)}(\widetilde{\operatorname{H}})\), \(v_{p}(a_{p})\) is always a slope of \(\operatorname{NP}\big{(}G_{\bar{\rho}}^{(\varepsilon)}(w_{\star},-)\big{)}\), with only one possible exception: \(v_{p}(a_{p})=0\), \(\bar{r}_{p}\) is split, and \(\varepsilon=\omega^{a+b+1}\times\omega^{b-1}\). (Recall that \(\bar{\rho}\) always denotes the _nonsplit_ reducible residual representation of \(\operatorname{I}_{\mathbb{Q}_{p}}\), regardless of whether \(\bar{r}_{p}\) is split or not; see Notation 7.2.)
**Step II:** Write \(\operatorname{wt}:\operatorname{Spc}^{(\varepsilon)}(\widetilde{ \operatorname{H}})\hookrightarrow\mathcal{W}^{(\varepsilon)}\times\mathbb{G}_ {m}^{\operatorname{rig}}\to\mathcal{W}^{(\varepsilon)}\) for the natural weight map. Recall from Proposition 2.18(3) that, for each fixed \(n\in\mathbb{N}\), all elements \(w_{\star}\in\mathcal{W}^{(\varepsilon)}\) for which \((n,v_{p}(g_{n}^{(\varepsilon)}(w_{\star})))\) is a vertex of \(\operatorname{NP}\big{(}G_{\bar{\rho}}^{(\varepsilon)}(w_{\star},-)\big{)}\) form a quasi-Stein open subspace of \(\mathcal{W}^{(\varepsilon)}\):
\[\operatorname{Vtx}_{n}^{(\varepsilon)}:=\mathcal{W}^{(\varepsilon)}\backslash\bigcup_{k}\Big{\{}w_{\star}\in\mathfrak{m}_{\mathbb{C}_{p}}\;\Big{|}\;v_{p}(w_{\star}-w_{k})\geq\Delta_{k,|\frac{1}{2}d_{k}^{\operatorname{Iw}}(\tilde{\varepsilon}_{1})-n|+1}^{(\varepsilon)}-\Delta_{k,|\frac{1}{2}d_{k}^{\operatorname{Iw}}(\tilde{\varepsilon}_{1})-n|}^{(\varepsilon)}\Big{\}},\]
where the union is taken over all \(k=k_{\varepsilon}+(p-1)k_{\bullet}\) with \(k_{\bullet}\in\mathbb{Z}\) such that \(n\in\big{(}d_{k}^{\operatorname{ur}}(\varepsilon_{1}),d_{k}^{\operatorname{ Iw}}(\tilde{\varepsilon}_{1})-d_{k}^{\operatorname{ur}}(\varepsilon_{1})\big{)}\). The space \(\operatorname{Vtx}_{n}^{(\varepsilon)}\) is irreducible because it is obtained by removing finitely many closed disks from \(\mathcal{W}^{(\varepsilon)}\). For a rigorous argument, we write
\[\operatorname{Vtx}_{n}^{(\varepsilon)}=\bigcup_{\delta\in\mathbb{Q}_{>0},\, \delta\to 0^{+}}\operatorname{Vtx}_{n}^{(\varepsilon),\delta}\quad\text{with}\]
\[\operatorname{Vtx}_{n}^{(\varepsilon),\delta}:=\Bigg{\{}w_{\star}\in\mathfrak{m}_{\mathbb{C}_{p}}\;\Bigg{|}\;\begin{array}{l}v_{p}(w_{\star})\geq\delta,\text{ and }\\ \text{ for each }k=k_{\varepsilon}+(p-1)k_{\bullet}\text{ s.t. }n\in\big{(}d_{k}^{\operatorname{ur}}(\varepsilon_{1}),d_{k}^{\operatorname{Iw}}(\tilde{\varepsilon}_{1})-d_{k}^{\operatorname{ur}}(\varepsilon_{1})\big{)},\\ v_{p}(w_{\star}-w_{k})\leq\Delta_{k,|\frac{1}{2}d_{k}^{\operatorname{Iw}}(\tilde{\varepsilon}_{1})-n|+1}^{(\varepsilon)}-\Delta_{k,|\frac{1}{2}d_{k}^{\operatorname{Iw}}(\tilde{\varepsilon}_{1})-n|}^{(\varepsilon)}-\delta.\end{array}\Bigg{\}}.\]
Note that for every \(w_{\star}\in\operatorname{Vtx}_{n}\), the left slope at \(x=n\) of \(\operatorname{NP}\big{(}G_{\bar{\rho}}^{(\varepsilon)}(w_{\star},-)\big{)}\) is strictly less than the right slope because \((n,v_{p}(g_{n}^{(\varepsilon)}(w_{\star})))\) is a vertex of \(\operatorname{NP}\big{(}G_{\bar{\rho}}^{(\varepsilon)}(w_{\star},-)\big{)}\). By compactness, we deduce that for each such \(\delta\in(0,1)\cap\mathbb{Q}\), there exists an \(\epsilon_{\delta}\in(0,1)\cap\mathbb{Q}\) such that the following two subspaces are the same:
\[\operatorname{Spc}^{(\varepsilon)}(\widetilde{\operatorname{H}})_{n}^{\delta}:= \bigg{\{}(w_{\star},a_{p})\in\operatorname{Spc}^{(\varepsilon)}(\widetilde{ \operatorname{H}})\;\bigg{|}\;\begin{array}{l}w_{\star}\in\operatorname{Vtx}_{n }^{(\varepsilon),\delta},\text{ and }\\ -v_{p}(a_{p})\leq\text{ the left slope at }x=n\text{ of }\operatorname{NP}\big{(}G_{\bar{\rho}}^{( \varepsilon)}(w_{\star},-)\big{)}\;\bigg{\}},\end{array}\]
\[\operatorname{Spc}^{(\varepsilon)}(\widetilde{\operatorname{H}})_{n}^{\delta,+}:= \bigg{\{}(w_{\star},a_{p})\in\operatorname{Spc}^{(\varepsilon)}(\widetilde{ \operatorname{H}})\;\bigg{|}\;\begin{array}{l}w_{\star}\in\operatorname{Vtx}_{n }^{(\varepsilon),\delta},\text{ and }\\ -v_{p}(a_{p})\leq\epsilon_{\delta}+\text{ the left slope at }x=n\text{ of }\operatorname{NP}\big{(}G_{\bar{\rho}}^{( \varepsilon)}(w_{\star},-)\big{)}\;\bigg{\}}.\end{array}\]
By (the proof of) Kiehl's finiteness theorem, this implies that \(\operatorname{wt}_{*}(\mathcal{O}_{\operatorname{Spc}^{(\varepsilon)}( \widetilde{\operatorname{H}})_{n}^{\delta}})\) is finite over \(\operatorname{Vtx}_{n}^{(\varepsilon),\delta}\). Yet, \(\operatorname{Spc}^{(\varepsilon)}(\widetilde{\operatorname{H}})_{n}^{\delta}\) is flat over \(\operatorname{Vtx}_{n}^{(\varepsilon),\delta}\) by [10, Lemma 4.1] and \(\operatorname{Vtx}_{n}^{(\varepsilon),\delta}\) is irreducible. So \(\operatorname{Spc}^{(\varepsilon)}(\widetilde{\operatorname{H}})_{n}^{\delta}\) has constant degree over \(\operatorname{Vtx}_{n}^{(\varepsilon),\delta}\). Letting \(\delta\to 0^{+}\) (while \(\epsilon_{\delta}\to 0^{+}\)), we deduce that \(\operatorname{Spc}^{(\varepsilon)}(\widetilde{\operatorname{H}})_{n}=\bigcup_{ \delta\to 0^{+}}\operatorname{Spc}^{(\varepsilon)}(\widetilde{\operatorname{H}})_{n}^{\delta}\) is finite and flat of constant degree over \(\operatorname{Vtx}_{n}^{(\varepsilon)}\).
It remains to compute this degree for each \(n\). We have proved in Proposition 4.1(2) that for each \(k\) such that \(n=d_{k}^{\operatorname{Iw}}(\varepsilon\cdot(1\times\omega^{2-k}))\), \(\big{(}n,v_{p}(g_{n}^{(\varepsilon)}(w_{k}))\big{)}\) is a vertex of \(\operatorname{NP}\big{(}G_{\bar{\rho}}^{(\varepsilon)}(w_{k},-)\big{)}\); in particular, \(w_{k}\in\operatorname{Vtx}_{n}^{(\varepsilon)}\). In this case, SS 2.4(6) (applied separately to \(\operatorname{Proj}_{\mathcal{O}[\![\mathrm{K}_{p}]\!]}(\sigma_{a,b})\) and to \(\operatorname{Proj}_{\mathcal{O}[\![\mathrm{K}_{p}]\!]}(\sigma_{p-3-a,a+b+1})\) if \(\bar{r}_{p}\) is split) implies that
\[\deg\big{(}\operatorname{Spc}^{(\varepsilon)}(\widetilde{\operatorname{H}})_{n}\big{/}\operatorname{Vtx}_{n}^{(\varepsilon)}\big{)}=\operatorname{rank}_{\mathcal{O}}\operatorname{S}_{\widetilde{\operatorname{H}},k}^{\mathrm{Iw}}(\varepsilon\cdot(1\times\omega^{2-k}))=\begin{cases}m(\widetilde{\operatorname{H}})\cdot n&\text{when $\bar{r}_{p}$ is non-split,}\\ m(\widetilde{\operatorname{H}})\cdot n&\text{when $\bar{r}_{p}$ is split and $\varepsilon\notin\{\omega^{b}\times\omega^{a+b},\omega^{a+b+1}\times\omega^{b-1}\}$},\\ m(\widetilde{\operatorname{H}})\cdot(n-1)+m^{\prime}(\widetilde{\operatorname{H}})&\text{when $\bar{r}_{p}$ is split and $\varepsilon=\omega^{b}\times\omega^{a+b}$},\\ m(\widetilde{\operatorname{H}})\cdot n+m^{\prime\prime}(\widetilde{\operatorname{H}})&\text{when $\bar{r}_{p}$ is split and $\varepsilon=\omega^{a+b+1}\times\omega^{b-1}$}.\end{cases}\]
Here we implicitly used Proposition 2.14 to identify the ghost series for \(\bar{\rho}\) and for \(\bar{\rho}^{\prime}\). In particular, when \(\bar{r}_{p}\) is split, the first slope of \(\operatorname{NP}(G_{\bar{\rho}}^{(\varepsilon)}(w_{\star},-))\) is zero if \(\varepsilon=\omega^{b}\times\omega^{a+b}\) and is nonzero if \(\varepsilon=\omega^{a+b+1}\times\omega^{b-1}\); hence the slight variant description above.
We also point out that when \(\bar{r}_{p}\) is split and \(\varepsilon=\omega^{a+b+1}\times\omega^{b-1}\), applying the same argument above using \(\bar{\rho}^{\prime}\) in places of \(\bar{\rho}\), we may deduce that the slope zero part of \(\operatorname{Spc}^{(\varepsilon)}(\widetilde{\operatorname{H}})\) has degree \(m^{\prime\prime}(\widetilde{\operatorname{H}})\) over \(\mathcal{W}^{(\varepsilon)}\).
From this, we immediately deduce the slopes of \(\operatorname{NP}\big{(}C_{\widetilde{\operatorname{H}}}^{(\varepsilon)}(w_{ \star},-)\big{)}\) at each point \(w_{\star}\in\mathfrak{m}_{\mathbb{C}_{p}}\) are exactly \(m(\widetilde{\operatorname{H}})\) disjoint copies of the multiset of the slopes of \(\operatorname{NP}\big{(}G_{\bar{\rho}}^{(\varepsilon)}(w_{\star},-)\big{)}\), except that for the slope zero part of \(\operatorname{NP}\big{(}C_{\widetilde{\operatorname{H}}}^{(\varepsilon)}(w_{ \star},-)\big{)}\)
* has length \(m^{\prime}(\widetilde{\operatorname{H}})\) when \(\bar{r}_{p}\) is split and \(\varepsilon=\omega^{b}\times\omega^{a+b}\), and
* has length \(m^{\prime\prime}(\widetilde{\operatorname{H}})\) when \(\bar{r}_{p}\) is split and \(\varepsilon=\omega^{a+b+1}\times\omega^{b-1}\).
Theorem 8.7 is proved.
**Remark 8.8**.: (1) The construction of the spectral curve using Buzzard's machine in Step I agrees with Emerton's construction as explained in the proof of [1, Proposition 4.2.36].
(2) We expect that our method of proof can be generalized to the case of the \(\bar{r}\)-localized space of modular forms when the residual Galois representation \(\bar{r}\) is reducible. In this case, the corresponding \(\widetilde{\operatorname{H}}\) is no longer projective as an \(\mathcal{O}[\![\mathrm{K}_{p}]\!]\)-module, which causes additional difficulties. We leave this to interested readers.
In what follows, we give three applications: the Gouvea-Mazur conjecture, Gouvea's slope distribution conjecture, and a refined spectral halo theorem. We refer to SS 1.16, SS 1.19, and SS 1.22, respectively, for a discussion of the history of these conjectures. Here, we give their statements and proofs directly. These applications share the following setup.
**Notation 8.9**.: For the rest of this section, assume that \(p\geq 11\). Let \(\bar{r}_{p}\) be a residual Galois representation as in Notation 7.2 with \(a\in\{2,\ldots,p-5\}\). Let \(\widetilde{\operatorname{H}}\) be an \(\mathcal{O}[\![\mathrm{K}_{p}]\!]\)-projective arithmetic module of type \(\bar{r}_{p}\) and multiplicity \(m(\widetilde{\operatorname{H}})\).
Fix a relevant character \(\varepsilon\) of \(\Delta^{2}\). For each \(k\in\mathbb{Z}_{\geq 2}\), let
\[\alpha_{1}^{(\varepsilon)}(k),\alpha_{2}^{(\varepsilon)}(k),\ldots \tag{8.9.1}\]
denote the list of \(U_{p}\)-slopes on \(\operatorname{S}_{k}^{\dagger,(\varepsilon)}\) counted with multiplicity, which contains the \(U_{p}\)-slopes on \(\operatorname{S}_{k}^{\mathrm{Iw}}(\varepsilon\cdot(1\times\omega^{2-k}))\) as the first \(d_{k}^{\mathrm{Iw}}(\varepsilon\cdot(1\times\omega^{2-k}))\) terms.
**Theorem 8.10** (\(\bar{r}_{p}\)-version of Gouvea-Mazur conjecture).: _Keep the notation and assumptions in Notation 8.9. Let \(n\in\mathbb{N}\). For weights \(k_{1},k_{2}>2n+2\) such that \(k_{1}\equiv k_{2}\equiv a+2b+1\bmod(p-1)\) and \(v_{p}(k_{1}-k_{2})\geq n+5\), the sequences of \(U_{p}\)-slopes (8.9.1) for \(k_{1}\) and for \(k_{2}\) agree up to slope \(n\)._
Proof.: By Theorem 8.7, the sequence (8.9.1) (except for possibly the first several zeros) is precisely the sequence of slopes of \(\operatorname{NP}\left(G_{\bar{\rho}}^{(\varepsilon)}(w_{k},-)\right)\) with multiplicity \(m(\widetilde{\operatorname{H}})\). The claim then follows from [12, Theorem 1.4], which proves the corresponding statement for the ghost slopes.
**Theorem 8.11** (\(\bar{r}_{p}\)-version of Gouvea's slope distribution conjecture).: _Keep the notation and assumptions in Notation 8.9. For each \(k\equiv k_{\varepsilon}\bmod(p-1)\), let \(\mu_{k}\) denote the uniform probability measure for the multiset_
\[\left\{\frac{\alpha_{1}^{(\varepsilon)}(k)}{k-1},\ \frac{\alpha_{2}^{(\varepsilon)}(k)}{k-1},\ \dots,\ \frac{\alpha_{d_{k}^{\mathrm{Iw}}(\varepsilon_{1})}^{(\varepsilon)}(k)}{k-1}\right\}\subset[0,1].\]
1. _When a positive integer_ \(i\) _satisfies_ \[i<\begin{cases}m(\widetilde{\operatorname{H}})\cdot d_{k}^{\operatorname{ur}}(\varepsilon_{1})-m^{\prime\prime}(\widetilde{\operatorname{H}})&\text{ if }\bar{r}_{p}\text{ is split and }\varepsilon=\omega^{b}\times\omega^{a+b},\\ m(\widetilde{\operatorname{H}})\cdot d_{k}^{\operatorname{ur}}(\varepsilon_{1})+m^{\prime\prime}(\widetilde{\operatorname{H}})&\text{ if }\bar{r}_{p}\text{ is split and }\varepsilon=\omega^{a+b+1}\times\omega^{b-1},\\ m(\widetilde{\operatorname{H}})\cdot d_{k}^{\operatorname{ur}}(\varepsilon_{1})&\text{ otherwise,}\end{cases}\] _we have_ \(\alpha_{i}(k)=\frac{p-1}{2}\cdot\frac{i}{m(\widetilde{\operatorname{H}})}+O(\log k).\)
2. _As_ \(k\to\infty\) _while keeping_ \(k\equiv k_{\varepsilon}\bmod(p-1)\)_, the measure_ \(\mu_{k}\) _weakly converges to the probability measure_ \[\frac{1}{p+1}\delta_{[0,\frac{1}{p+1}]}+\frac{1}{p+1}\delta_{[\frac{p}{p+1},1] }+\frac{p-1}{p+1}\delta_{\frac{1}{2}},\] _where_ \(\delta_{[a,b]}\) _denotes the uniform probability measure on the interval_ \([a,b]\)_, and_ \(\delta_{\frac{1}{2}}\) _is the Dirac measure at_ \(\frac{1}{2}\)_._
Proof.: By Theorem 8.7, the sequence (8.9.1) is precisely the slopes of \(\operatorname{NP}\left(G_{\bar{\rho}}^{(\varepsilon)}(w_{k},-)\right)\) with multiplicity \(m(\widetilde{\operatorname{H}})\) (except that when \(\bar{r}_{p}\) is split and \(\varepsilon=\omega^{b}\times\omega^{a+b}\) or \(\omega^{a+b+1}\times\omega^{b-1}\), the multiplicities of the slope zero part are precisely \(m^{\prime}(\widetilde{\operatorname{H}})\) and \(m^{\prime\prime}(\widetilde{\operatorname{H}})\), respectively). The power series \(G_{\bar{\rho}}^{(\varepsilon)}(w,t)\) is an abstract ghost series in the sense of [1] with
\[A=\frac{2m(\widetilde{\operatorname{H}})}{p+1}\quad\text{and}\quad B=\frac{2(p -1)\cdot m(\widetilde{\operatorname{H}})}{p+1}\]
by Definition-Proposition 2.12 (and SS 2.4(6)). With this, (1) and (2) follow from [1, Theorem 3.1 and Corollary 3.2].
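As a quick consistency check (ours, not needed for the argument), the limiting measure in Theorem 8.11(2) has total mass
\[\frac{1}{p+1}+\frac{1}{p+1}+\frac{p-1}{p+1}=\frac{1+1+(p-1)}{p+1}=1,\]
so it is indeed a probability measure.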
**Theorem 8.12** (refined spectral halo conjecture).: _Keep the notation and assumptions in Notation 8.9. Let \(\operatorname{wt}:\mathcal{W}^{(\varepsilon)}\times\mathbb{G}_{m}^{\operatorname {rig}}\to\mathcal{W}^{(\varepsilon)}\) be the projection to weight space, and let \(\operatorname{Spc}^{(\varepsilon)}(\widetilde{\operatorname{H}})\) denote the zero locus of \(C_{\widetilde{\operatorname{H}}}^{(\varepsilon)}(w,t)\) in \(\mathcal{W}^{(\varepsilon)}\times\mathbb{G}_{m}^{\operatorname{rig}}\). Set_
\[\mathcal{W}^{(\varepsilon)}_{(0,1)}=\left\{w_{\star}\in\mathcal{W}^{( \varepsilon)}\ \big{|}\ v_{p}(w_{\star})\in(0,1)\right\}\quad\text{and}\quad \operatorname{Spc}^{(\varepsilon)}_{(0,1)}(\widetilde{\operatorname{H}})= \operatorname{Spc}^{(\varepsilon)}(\widetilde{\operatorname{H}})\cap \operatorname{wt}^{-1}(\mathcal{W}^{(\varepsilon)}_{(0,1)}).\]
_Then \(\operatorname{Spc}^{(\varepsilon)}_{(0,1)}(\widetilde{\operatorname{H}})\) is a disjoint union \(Y_{0}\bigsqcup Y_{1}\bigsqcup Y_{2}\bigsqcup\dotsb\) such that_
1. \(Y_{0}\) _is non-empty only when_ \(\bar{r}_{p}\) _is split and_ \(\varepsilon=\omega^{a+b+1}\times\omega^{b-1}\)_, in which case, for each point_ \((w_{\star},a_{p})\in Y_{0}\)_,_ \(v_{p}(a_{p})=0\)_, and_ \(\deg\big{(}Y_{0}/\mathcal{W}_{(0,1)}^{(\varepsilon)}\big{)}=m^{\prime\prime}( \widetilde{\mathrm{H}})\)_._
2. _for each point_ \((w_{\star},a_{p})\in Y_{n}\) _with_ \(n\geq 1\)_,_ \(v_{p}(a_{p})=(\deg g_{n}^{(\varepsilon)}(w)-\deg g_{n-1}^{(\varepsilon)}(w)) \cdot v_{p}(w_{\star})\)_, and_
3. _the weight map_ \(\mathrm{wt}:Y_{n}\to\mathcal{W}_{(0,1)}^{(\varepsilon)}\) _is finite and flat of degree_ \(m(\widetilde{\mathrm{H}})\)_, except when_ \(\bar{r}_{p}\) _is split,_ \(\varepsilon=\omega^{b}\times\omega^{a+b}\)_, and_ \(n=1\)_, in which case_ \(\deg\big{(}Y_{1}/\mathcal{W}_{(0,1)}^{(\varepsilon)}\big{)}=m^{\prime}( \widetilde{\mathrm{H}})\)_._
Proof.: By Theorem 8.7, for each \(w_{\star}\in\mathcal{W}^{(\varepsilon)}_{(0,1)}\), the slopes of \(\mathrm{NP}\left(C_{\widetilde{\mathrm{H}}}^{(\varepsilon)}(w_{\star},-)\right)\) are precisely the slopes of \(\mathrm{NP}\left(G_{\bar{\rho}}^{(\varepsilon)}(w_{\star},-)\right)\) with multiplicity \(m(\widetilde{\mathrm{H}})\) (except that when \(\bar{r}_{p}\) is split and \(\varepsilon=\omega^{b}\times\omega^{a+b}\) or \(\omega^{a+b+1}\times\omega^{b-1}\), the multiplicities of the slope zero part are precisely \(m^{\prime}(\widetilde{\mathrm{H}})\) and \(m^{\prime\prime}(\widetilde{\mathrm{H}})\), respectively). But when \(v_{p}(w_{\star})\in(0,1)\), we have \(v_{p}(g_{n}^{(\varepsilon)}(w_{\star}))=\deg g_{n}^{(\varepsilon)}(w)\cdot v_{p}(w_{\star})\). Moreover, Definition-Proposition 2.12(4) says that the difference \(\deg g_{n}^{(\varepsilon)}(w)-\deg g_{n-1}^{(\varepsilon)}(w)\) is strictly increasing in \(n\). It follows that we may "distribute" the points \((w_{\star},a_{p})\in\mathrm{Spc}_{(0,1)}^{(\varepsilon)}(\widetilde{\mathrm{H}})\) according to the ratio \(v_{p}(a_{p})/v_{p}(w_{\star})\) into the disjoint spaces \(Y_{n}\) as described in (1) and (2). The theorem follows.
## 9. Irreducible components of eigencurves
In this section, we prove the finiteness of irreducible components of the spectral curve associated to an \(\mathcal{O}\llbracket\mathrm{K}_{p}\rrbracket\)-projective arithmetic module \(\widetilde{\mathrm{H}}\) of type \(\bar{r}_{p}\). In particular, this applies to the case of eigencurves associated to overconvergent modular forms (with appropriate Hecke maximal ideal localization) and provides some positive theoretical evidence towards a question asked by Coleman and Mazur in their seminal paper [10, page 4], under our reducible nonsplit and very generic condition.
We will separate the discussion for ordinary part and the non-ordinary part.
**Notation 9.1**.: Let \(\bar{r}_{p}\) and \(\bar{\rho}\) be as in Notation 7.2 and let \(\widetilde{\mathrm{H}}\) be an \(\mathcal{O}\llbracket\mathrm{K}_{p}\rrbracket\)-projective arithmetic module of type \(\bar{r}_{p}\) and multiplicity \(m(\widetilde{\mathrm{H}})\).
For each relevant character \(\varepsilon\) of \(\Delta^{2}\), define the nonordinary part of the ghost series to be
\[G_{\bar{\rho},\mathrm{nord}}^{(\varepsilon)}(w,t):=\begin{cases}\big{(}G_{ \bar{\rho}}^{(\omega^{b}\times\omega^{a+b})}(w,t)-1\big{)}/t&\text{ if } \varepsilon=\omega^{b}\times\omega^{a+b},\\ G_{\bar{\rho}}^{(\varepsilon)}(w,t)&\text{ otherwise.}\end{cases}\]
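Concretely (a trivial reformulation, added for orientation): writing \(G_{\bar{\rho}}^{(\omega^{b}\times\omega^{a+b})}(w,t)=1+\sum_{n\geq 1}g_{n}^{(\omega^{b}\times\omega^{a+b})}(w)t^{n}\), the first case above is simply the coefficient shift
\[G_{\bar{\rho},\mathrm{nord}}^{(\omega^{b}\times\omega^{a+b})}(w,t)=\sum_{n\geq 1}g_{n}^{(\omega^{b}\times\omega^{a+b})}(w)\,t^{n-1},\]
which, at a point \(w_{\star}\) where \(v_{p}\big{(}g_{1}^{(\omega^{b}\times\omega^{a+b})}(w_{\star})\big{)}=0\), has the effect of removing one slope-zero segment from the Newton polygon.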
The following is the main subject of our study.
**Definition 9.2**.: Fix a rational number \(\lambda\in(0,1)\cap\mathbb{Q}\). Put \(\mathcal{W}_{\geq\lambda}:=\mathrm{Spm}\,E\langle w/p^{\lambda}\rangle\). Let \(\mathbb{A}^{1,\mathrm{rig}}=\bigcup_{n\in\mathbb{N}}(\mathrm{Spm}\,E\langle p^{ n}t\rangle)\) denote the rigid affine line.
1. A _Fredholm series_ over \(\mathcal{W}_{\geq\lambda}\) is a power series \(F(w,t)\in E\langle w/p^{\lambda}\rangle\llbracket t\rrbracket\) such that \(F(w,0)=1\) and \(F(w,t)\) converges over \(\mathcal{W}_{\geq\lambda}\times\mathbb{A}^{1,\mathrm{rig}}\). Let \(\mathcal{Z}(F)\) denote its zero locus in \(\mathcal{W}_{\geq\lambda}\times\mathbb{A}^{1,\mathrm{rig}}\), as a rigid analytic subvariety. We say \(F\) is nontrivial if \(F\neq 1\).
2. A Fredholm series \(F(w,t)\) is _of ghost type \(\bar{r}_{p}\) and \(\varepsilon\)_, if for every \(w_{\star}\in\mathcal{W}_{\geq\lambda}(\mathbb{C}_{p})\), \(\mathrm{NP}(F(w_{\star},-))\) is the same as \(\mathrm{NP}\left(G_{\bar{\rho},\mathrm{nord}}^{(\varepsilon)}(w_{\star},-)\right)\), but stretched in the \(x\)- and \(y\)-directions by some \(m(F)\in\mathbb{N}\). This \(m(F)\) is called the _multiplicity_ of \(F\). We also say that the subvariety \(\mathcal{Z}(F)\) is of _ghost type \(\bar{r}_{p}\) and \(\varepsilon\)_.
We emphasize that the condition \(\lambda\in(0,1)\cap\mathbb{Q}\) implies that \(\mathcal{W}_{\geq\lambda}\) contains some "halo region", namely some part to which Theorem 8.12 applies (even though our argument does not use Theorem 8.12 logically).
The following lemma factors out the slope zero part of the characteristic power series.
**Lemma 9.3**.: _Let \(\bar{r}_{p}\), \(\varepsilon\), and \(\widetilde{\mathrm{H}}\) be as in Notation 9.1 with \(a\in\{2,\ldots,p-5\}\) and \(p\geq 11\). Let \(C_{\widetilde{\mathrm{H}}}^{(\varepsilon)}(w,t)=1+\sum_{n\geq 1}c_{n}^{( \varepsilon)}(w)t^{n}\in\mathcal{O}\llbracket w,t\rrbracket\) denote the characteristic power series of \(U_{p}\)-action on the abstract overconvergent forms associated to \(\widetilde{\mathrm{H}}\). Then there is a canonical factorization in \(\mathcal{O}\llbracket w,t\rrbracket\):_
\[C_{\widetilde{\mathrm{H}}}^{(\varepsilon)}(w,t)=C_{\widetilde{\mathrm{H}},\mathrm{ord}}^{(\varepsilon)}(w,t)\cdot C_{\widetilde{\mathrm{H}},\mathrm{nord}}^{(\varepsilon)}(w,t), \tag{9.3.1}\]
_where \(C_{\widetilde{\mathrm{H}},\mathrm{nord}}^{(\varepsilon)}(w,t)\) is a Fredholm series of ghost type \(\bar{r}_{p}\) and \(\varepsilon\) with multiplicity \(m(\widetilde{\mathrm{H}})\), and \(C_{\widetilde{\mathrm{H}},\mathrm{ord}}^{(\varepsilon)}(w,t)\) is a polynomial_
* _of degree_ \(m(\widetilde{\mathrm{H}})\) _when_ \(\varepsilon=\omega^{b}\times\omega^{a+b}\) _and_ \(\bar{r}_{p}\) _is nonsplit,_
* _of degree_ \(m^{\prime}(\widetilde{\mathrm{H}})\) _when_ \(\varepsilon=\omega^{b}\times\omega^{a+b}\) _and_ \(\bar{r}_{p}\) _is split,_
* _of degree_ \(m^{\prime\prime}(\widetilde{\mathrm{H}})\) _when_ \(\varepsilon=\omega^{a+b+1}\times\omega^{b-1}\) _and_ \(\bar{r}_{p}\) _is split, and_
* _of degree_ \(0\) _otherwise._
_Moreover, the constant term of \(C_{\widetilde{\mathrm{H}},\mathrm{ord}}^{(\varepsilon)}(w,t)\) is \(1\) and its top degree coefficient belongs to \(\mathcal{O}\llbracket w\rrbracket^{\times}\)._
Proof.: This follows from Theorem 8.7 and the standard Weierstrass Preparation Theorem.
**Remark 9.4**.: In fact, Lemma 9.3 holds under much weaker assumptions, such as \(1\leq a\leq p-4\) and \(p\geq 5\).
**Proposition 9.5**.: _Let \(F(w,t)\in E\langle w/p^{\lambda}\rangle\llbracket t\rrbracket\) be a nontrivial Fredholm series. Then there exists a unique nonempty set of positive integers \(\{n_{i}\}\) and nonempty finite set of distinct irreducible nontrivial Fredholm series \(\{P_{i}\}\) such that \(F=\prod P_{i}^{n_{i}}\). Moreover, the irreducible components of \(\mathcal{Z}(F)\) endowed with their reduced structures are the \(\mathcal{Z}(P_{i})\)'s._
Proof.: This is [13, Theorem 1.3.7] and [12, Corollary 4.2.3].
The main theorem of this section is the following (which holds under weaker conditions \(p\geq 5\) and \(1\leq a\leq p-4\)).
**Theorem 9.6**.: _Let \(F(w,t)\in E\langle w/p^{\lambda}\rangle\llbracket t\rrbracket\) be a nontrivial Fredholm series of ghost type \(\bar{r}_{p}\) and \(\varepsilon\) with multiplicity \(m(F)\). Then any Fredholm series \(H(w,t)\) dividing \(F(w,t)\) is of ghost type \(\bar{r}_{p}\) and \(\varepsilon\) with some multiplicity \(m(H)\leq m(F)\)._
The proof of Theorem 9.6 will occupy the rest of this section. We note the following.
**Corollary 9.7**.: _Let \(\bar{r}_{p}\), \(\varepsilon\), and \(\widetilde{\mathrm{H}}\) be as in Lemma 9.3, and in particular \(a\in\{2,\ldots,p-5\}\) and \(p\geq 11\). Then \(\mathrm{Spc}^{(\varepsilon)}(\widetilde{\mathrm{H}})=\mathrm{Spc}_{\mathrm{ord}}^{(\varepsilon)}(\widetilde{\mathrm{H}})\bigsqcup\mathrm{Spc}_{\mathrm{nord}}^{(\varepsilon)}(\widetilde{\mathrm{H}})\) is a disjoint union of the slope zero subspace and the positive slope subspace._
1. _The ordinary subspace_ \(\mathrm{Spc}_{\mathrm{ord}}^{(\varepsilon)}(\widetilde{\mathrm{H}})\) _is nonempty only when_ \(\varepsilon=\omega^{b}\times\omega^{a+b}\)_, or when_ \(\varepsilon=\omega^{a+b+1}\times\omega^{b-1}\) _and_ \(\bar{r}_{p}\) _is split. In this case,_ \(\mathrm{wt}:\mathrm{Spc}_{\mathrm{ord}}^{(\varepsilon)}(\widetilde{\mathrm{H}})\to\mathcal{W}^{(\varepsilon)}\) _is finite and flat of degree_ \(m(\widetilde{\mathrm{H}})\)_._
2. _The nonordinary subspace_ \(\operatorname{Spc}^{(\varepsilon)}_{\operatorname{nord}}(\widetilde{\operatorname{H}})\) _has finitely many irreducible components; every irreducible component is of ghost type_ \(\bar{r}_{p}\) _and_ \(\varepsilon\)_, and the total multiplicity is_ \(m(\widetilde{\operatorname{H}})\)_._
Proof.: The factorization in Lemma 9.3 gives the decomposition \(\operatorname{Spc}^{(\varepsilon)}(\widetilde{\operatorname{H}})=\operatorname{Spc}^{(\varepsilon)}_{\operatorname{ord}}(\widetilde{\operatorname{H}})\bigsqcup\operatorname{Spc}^{(\varepsilon)}_{\operatorname{nord}}(\widetilde{\operatorname{H}})\), and (2) follows from Theorem 9.6 immediately.
Further specializing Corollary 9.7 to the case of modular forms proves Theorem 1.15.
**Remark 9.8**.:
1. While Theorem 9.6 works for \(a\in\{1,\ldots,p-4\}\), Corollary 9.7 holds under the slightly more restrictive assumption that \(a\in\{2,\ldots,p-5\}\) and \(p\geq 11\), which is needed for Theorem 8.7.
2. A philosophical implication of Theorem 9.6 and Corollary 9.7 is that _the non-ordinary part of the spectral curve seems to share certain "rigidity" or "finiteness" similar to that of the ordinary part_.
3. It is clear from Corollary 9.7 that if \(\bar{\rho}\) is nonsplit and \(m(\widetilde{\operatorname{H}})=1\), then \(\operatorname{Spc}^{(\varepsilon)}_{\operatorname{nord}}(\widetilde{ \operatorname{H}})\) is irreducible. It is natural to ask: when \(\bar{\rho}\) is split and \(m(\widetilde{\operatorname{H}})=2\), can one prove that \(\operatorname{Spc}^{(\varepsilon)}_{\operatorname{nord}}(\widetilde{ \operatorname{H}})\) is irreducible? In general, suppose that we are in an automorphic setting with all tame local conditions being "primitive" (e.g. having \(\ell\)-adic Breuil-Mezard multiplicity one), does it imply that \(\operatorname{Spc}^{(\varepsilon)}_{\operatorname{nord}}(\widetilde{ \operatorname{H}})\) is irreducible?
**Notation 9.9**.: Fix \(\lambda\in(0,1)\cap\mathbb{Q}\) for the rest of this section. In what follows, we write \(\overline{\mathcal{W}}\) to denote the base change of the rigid analytic space \(\mathcal{W}\) to \(\mathbb{C}_{p}\). For a rigid analytic space \(\overline{\mathcal{X}}\) over \(\mathbb{C}_{p}\), write \(\overline{\mathcal{X}}^{\operatorname{Berk}}\) for the associated Berkovich space.
For a Fredholm series \(F(w,t)=1+f_{1}(w)t+\cdots\in E\langle w/p^{\lambda}\rangle\llbracket t \rrbracket\) and \(w_{\star}\in\overline{\mathcal{W}}^{\operatorname{Berk}}_{\geq\lambda}\), define the Newton polygon \(\operatorname{NP}\big{(}F(w_{\star},-)\big{)}\) to be the convex hull of \((0,0)\) and
\[\Big{(}n,-\frac{\ln|f_{n}(w_{\star})|}{\ln p}\Big{)}\text{ for }n\in\mathbb{N}.\]
Then \(w_{\star}\mapsto\operatorname{NP}\big{(}F(w_{\star},-)\big{)}\) is continuous over \(\overline{\mathcal{W}}^{\operatorname{Berk}}_{\geq\lambda}\).
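At a closed point this recovers the usual valuation-theoretic Newton polygon: with the standard normalization \(|x|=p^{-v_{p}(x)}\) on \(\mathbb{C}_{p}\) (the normalization we assume here, consistent with the use of \(v_{p}\) elsewhere), one has
\[-\frac{\ln|f_{n}(w_{\star})|}{\ln p}=v_{p}\big{(}f_{n}(w_{\star})\big{)}\qquad\text{for closed points }w_{\star}\in\overline{\mathcal{W}}_{\geq\lambda}(\mathbb{C}_{p}),\]
so the definition above extends the earlier Newton polygons from closed points to Berkovich points.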
For a closed point \(w_{\star}\in\overline{\mathcal{W}}\) and \(r\in\mathbb{Q}_{>0}\), write
\[\mathbf{D}(w_{\star},r):=\big{\{}w\in\overline{\mathcal{W}}(\mathbb{C}_{p})\bigm{|}v_{p}(w-w_{\star})\geq r\big{\}}\]
for the closed disk, and \(\eta_{w_{\star},r}\) for the associated Gaussian point.
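Recall the standard formula for evaluating at a Gaussian point (added here for convenience; it is used in the toy computation below): for \(f(w)=\sum_{i\geq 0}a_{i}(w-w_{\star})^{i}\) convergent on \(\mathbf{D}(w_{\star},r)\),
\[|f(\eta_{w_{\star},r})|=\sup_{w\in\mathbf{D}(w_{\star},r)}|f(w)|=\max_{i\geq 0}\,|a_{i}|\,p^{-ir}.\]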
The following standard harmonicity fact is the key to our proof of Theorem 9.6; see for example [10, Proposition 11.1.2].
**Definition-Lemma 9.10**.: _Use \(\breve{\mathcal{O}}\) to denote the completion of the maximal unramified extension of \(\mathcal{O}\) with fraction field \(\breve{E}\) and residual field \(\overline{\mathbb{F}}\). Let \(f(w)\in E\langle w/p^{\lambda}\rangle\) be a power series, \(w_{\star}\in\overline{\mathcal{W}}_{\geq\lambda}(\mathbb{C}_{p})\) a closed point, and \(\mu\in(\lambda,\infty)\cap\mathbb{Z}\). Define the following slope derivatives: for \(\bar{\alpha}\in\overline{\mathbb{F}}\) (fixing a lift \(\alpha\in\mathcal{O}_{\breve{E}}\) of \(\bar{\alpha}\))_
\[V^{+}_{w_{\star},\mu}(f):=\lim_{\epsilon\to 0^{+}}\Big{(}-\frac{\ln \big{|}f(\eta_{w_{\star},\mu-\epsilon})\big{|}-\ln\big{|}f(\eta_{w_{\star},\mu}) \big{|}}{\ln p\cdot\epsilon}\Big{)},\] \[V^{\bar{\alpha}}_{w_{\star},\mu}(f):=\lim_{\epsilon\to 0^{+}}\Big{(}-\frac{\ln \big{|}f(\eta_{w_{\star}+\alpha p^{\mu},\mu+\epsilon})\big{|}-\ln\big{|}f(\eta_{w _{\star}+\alpha p^{\mu},\mu})\big{|}}{\ln p\cdot\epsilon}\Big{)}. \tag{9.10.1}\]
_Then we have_
\[V^{+}_{w_{\star},\mu}(f)+\sum_{\bar{\alpha}\in\overline{\mathbb{F}}}V^{\bar{ \alpha}}_{w_{\star},\mu}(f)=0. \tag{9.10.2}\]
_Here each \(V^{\bar{\alpha}}_{w_{\star},\mu}(f)\) is independent of the choice of the lift \(\alpha\), and all but finitely many terms in the sum (9.10.2) are zero._
_Such a definition and the harmonicity (9.10.2) extend in a natural way to rational functions of the form \(f(w)/g(w)\) with \(f(w),g(w)\in E\langle w/p^{\lambda}\rangle\) by setting \(V^{?}_{w_{\star},\mu}(f/g):=V^{?}_{w_{\star},\mu}(f)-V^{?}_{w_{\star},\mu}(g)\) with \(?=+\) or \(\bar{\alpha}\in\overline{\mathbb{F}}\) (whenever the limits exist)._
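As a toy illustration of (9.10.1) and (9.10.2) (our own computation, not needed in what follows), take \(f(w)=w\), \(w_{\star}=0\), and any integer \(\mu>\lambda\). By the Gaussian point formula above, \(|f(\eta_{0,r})|=p^{-r}\) for all \(r\), and \(|f(\eta_{\alpha p^{\mu},\mu+\epsilon})|=\max\big{(}|\alpha|\,p^{-\mu},\,p^{-(\mu+\epsilon)}\big{)}\). Plugging into (9.10.1) gives
\[V^{+}_{0,\mu}(f)=-1,\qquad V^{\bar{0}}_{0,\mu}(f)=+1,\qquad V^{\bar{\alpha}}_{0,\mu}(f)=0\ \ \text{for }\bar{\alpha}\neq 0,\]
so the sum in (9.10.2) is indeed \(-1+1+0+\cdots=0\).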
### Proof of Theorem 9.6
In this entire proof, we fix a relevant character \(\varepsilon\) and suppress all superscripts \((\varepsilon)\). Assume that \(F(w,t)=H(w,t)\cdot H^{\prime}(w,t)\) for Fredholm series \(H,H^{\prime}\in E\langle w/p^{\lambda}\rangle\llbracket t\rrbracket\). Then for any \(w_{\star}\in\overline{\mathcal{W}}_{\geq\lambda}(\mathbb{C}_{p})\), the slopes in \(\operatorname{NP}\big{(}H(w_{\star},-)\big{)}\) (resp. \(\operatorname{NP}\big{(}H^{\prime}(w_{\star},-)\big{)}\)) form a subset of slopes of \(\operatorname{NP}\big{(}F(w_{\star},-)\big{)}\), which is further a subset of slopes of \(\operatorname{NP}\big{(}G_{\bar{\rho},\operatorname{nord}}(w_{\star},-)\big{)}\). Put
\[F(w,t)=1+f_{1}(w)t+\cdots,\ H(w,t)=1+h_{1}(w)t+\cdots,\ \text{and}\ H^{\prime}(w,t)=1+h_{1}^{ \prime}(w)t+\cdots.\]
Recall from Proposition 2.18(3) that, for each fixed \(n\in\mathbb{N}\), all elements \(w_{\star}\in\overline{\mathcal{W}}_{\geq\lambda}(\mathbb{C}_{p})\) for which \((n,v_{p}(g_{n}(w_{\star})))\) is a vertex of \(\operatorname{NP}\big{(}G_{\bar{\rho},\operatorname{nord}}(w_{\star},-)\big{)}\) form an open subspace of \(\overline{\mathcal{W}}_{\geq\lambda}\):
\[\overline{\operatorname{Vtx}}_{n,\geq\lambda}:=\overline{\mathcal{W}}_{\geq\lambda}\Big{\backslash}\bigcup_{k}\overline{\mathbf{D}}\big{(}w_{k},\,\Delta_{k,|\frac{1}{2}d^{\operatorname{Iw}}_{k}(\tilde{\varepsilon}_{1})-n|+1}-\Delta_{k,|\frac{1}{2}d^{\operatorname{Iw}}_{k}(\tilde{\varepsilon}_{1})-n|}\big{)},\]
where \(\overline{\mathbf{D}}(w_{k},r)\) is the base change of \(\mathbf{D}(w_{k},r)\) over \(\mathbb{C}_{p}\), and the union is taken over all \(k=k_{\varepsilon}+(p-1)k_{\bullet}\) with \(k_{\bullet}\in\mathbb{Z}\) such that \(n\in\big{(}d^{\operatorname{ur}}_{k}(\varepsilon_{1}),d^{\operatorname{Iw}}_{k}(\tilde{\varepsilon}_{1})-d^{\operatorname{ur}}_{k}(\varepsilon_{1})\big{)}\). The Berkovich space \(\overline{\operatorname{Vtx}}_{n,\geq\lambda}^{\operatorname{Berk}}\) is clearly connected.
In what follows, we write \(\operatorname{slp}_{n}(w_{\star})\) for the \(n\)th slope in \(\operatorname{NP}\big{(}G_{\bar{\rho},\operatorname{nord}}(w_{\star},-)\big{)}\). The proof is divided into three steps.
**Step I**: For each \(n\), we will prove that the total multiplicity of the \(n\) smallest slopes of \(\operatorname{NP}\big{(}G_{\bar{\rho},\operatorname{nord}}(w_{\star},-)\big{)}\) in \(\operatorname{NP}\big{(}H(w_{\star},-)\big{)}\) is constant in \(w_{\star}\in\overline{\operatorname{Vtx}}_{n,\geq\lambda}\); write \(m(H,n)\) for this constant. We define \(m(H^{\prime},n)\) for \(H^{\prime}\) similarly. It is then clear that \(m(H,n)+m(H^{\prime},n)=n\cdot m(F)\).
To this end, it is sufficient to show that the total multiplicity \(\operatorname{totmult}_{n}(w_{\star})\) of those slopes in \(\operatorname{NP}(H(w_{\star},-))\) that are less than or equal to \(\operatorname{slp}_{n}(w_{\star})\) is a locally constant function on \(\overline{\operatorname{Vtx}}_{n,\geq\lambda}^{\operatorname{Berk}}\). We proceed by induction on \(n\) and start from the case \(n=0\). Now suppose the claim is proved for smaller \(n\). For \(w_{\star}\in\overline{\operatorname{Vtx}}_{n,\geq\lambda}^{\operatorname{Berk}}\), suppose \(\operatorname{totmult}_{n}(w_{\star})=m\), which is obviously less than or equal to \(n\cdot m(F)\). Since \((n,v_{p}(g_{n}(w_{\star})))\) is a vertex of \(\operatorname{NP}\big{(}G_{\bar{\rho},\operatorname{nord}}(w_{\star},-)\big{)}\), we have \(\mu:=\operatorname{slp}_{n+1}(w_{\star})-\operatorname{slp}_{n}(w_{\star})>0\). On the other hand, note that \(w_{\star}\mapsto\operatorname{NP}\big{(}H(w_{\star},-)\big{)}\) is continuous for the Berkovich topology. We may choose a neighborhood \(U\) of \(w_{\star}\) in \(\overline{\operatorname{Vtx}}_{n,\geq\lambda}^{\operatorname{Berk}}\) such that the following conditions hold for any \(w^{\prime}_{\star}\in U\):
* \(\operatorname{slp}_{n+1}(w^{\prime}_{\star})-\operatorname{slp}_{n}(w^{\prime}_{ \star})>\frac{\mu}{2}\),
* for every \(1\leq i\leq n\), the difference between the \(i\)-th slopes of \(\operatorname{NP}\big{(}H(w_{\star},-)\big{)}\) and \(\operatorname{NP}\big{(}H(w^{\prime}_{\star},-)\big{)}\) is strictly less than \(\frac{\mu}{3n\cdot m(F)}\), and
* for every \(1\leq i\leq n\), \[|v_{p}(h_{i}(w_{\star}))-v_{p}(h_{i}(w^{\prime}_{\star}))|<\frac{\mu}{6}.\]
Suppose \(\operatorname{totmult}_{n}(w^{\prime}_{\star})=m^{\prime}\). If \(m^{\prime}<m\), then by the inductive hypothesis and the first two conditions, we may deduce that the sum of the first \(m\) slopes of \(\operatorname{NP}\big{(}H(w^{\prime}_{\star},-)\big{)}\) minus the sum of the first \(m\) slopes of \(\operatorname{NP}\big{(}H(w_{\star},-)\big{)}\), which is nothing but \(v_{p}(h_{m}(w^{\prime}_{\star}))-v_{p}(h_{m}(w_{\star}))\), is at least \(\frac{\mu}{2}-\frac{m\mu}{3nm(F)}\geq\frac{\mu}{2}-\frac{\mu}{3}=\frac{\mu}{6}\). This contradicts the third condition. Hence \(m^{\prime}\geq m\). A similar argument yields \(m\geq m^{\prime}\).
**Step II**: We prove a technical claim. For each integer \(n\geq 1\), Definition-Proposition 2.12(2) implies that there is a unique weight \(k=k_{\varepsilon}+(p-1)(n+\delta_{\varepsilon}-2)\) such that \(k\equiv k_{\varepsilon}\bmod(p-1)\) and \(\frac{1}{2}d_{k}^{\operatorname{Iw}}(\tilde{\varepsilon}_{1})=n-1\).
**Claim**: for all \(\epsilon\in(0,\frac{1}{2})\) and all \(\alpha\in\mathcal{O}_{\mathbb{C}_{p}}\),
1. the point \(\eta_{w_{k},\Delta_{k,1}-\Delta_{k,0}-\epsilon}\) belongs to the subspaces \(\overline{\operatorname{Vtx}}_{n,\geq\lambda}^{\operatorname{Berk}}\) and \(\overline{\operatorname{Vtx}}_{n-1,\geq\lambda}^{\operatorname{Berk}}\) of \(\overline{\mathcal{W}}_{\geq\lambda}^{\operatorname{Berk}}\), and
2. the points \(\eta_{w_{k}+\alpha p^{\Delta_{k,1}-\Delta_{k,0}},\,\Delta_{k,1}-\Delta_{k,0}+\epsilon}\) belong to the subspaces \(\overline{\operatorname{Vtx}}_{n,\geq\lambda}^{\operatorname{Berk}}\) and \(\overline{\operatorname{Vtx}}_{n-2,\geq\lambda}^{\operatorname{Berk}}\) but not to \(\overline{\operatorname{Vtx}}_{n-1,\geq\lambda}^{\operatorname{Berk}}\).
Proof: By Proposition 2.18(3), the disc \(\overline{\mathbf{D}}(w_{k},\Delta_{k,1}-\Delta_{k,0})\) is removed from \(\overline{\mathcal{W}}_{\geq\lambda}\) in forming \(\overline{\operatorname{Vtx}}_{n-1,\geq\lambda}^{\operatorname{Berk}}\), so for \(\epsilon\in(0,\frac{1}{2})\),
* the points \(\eta_{w_{k}+\alpha p^{\Delta_{k,1}-\Delta_{k,0}},\,\Delta_{k,1}-\Delta_{k,0}+\epsilon}\) do not belong to \(\overline{\operatorname{Vtx}}_{n-1,\geq\lambda}^{\operatorname{Berk}}\), and
* the point \(\eta_{w_{k},\Delta_{k,1}-\Delta_{k,0}-\epsilon}\) does not belong to the removed disc \(\overline{\mathbf{D}}(w_{k},\Delta_{k,1}-\Delta_{k,0})\).
On the other hand, to get \(\overline{\operatorname{Vtx}}_{n-1\pm 1,\geq\lambda}^{\operatorname{Berk}}\), we need to remove the disc \(\overline{\mathbf{D}}(w_{k},\Delta_{k,2}-\Delta_{k,1})\). But by [17, Lemmas 5.6 and 5.8], we have \(\Delta_{k,2}-\Delta_{k,1}\geq\Delta_{k,1}-\Delta_{k,0}+1\); so none of the points in (1) and (2) belong to this disc \(\overline{\mathbf{D}}(w_{k},\Delta_{k,2}-\Delta_{k,1})\). So (1) and (2) hold for this particular "removed disc centered at \(w_{k}\)". We need to explain that other discs removed to get \(\overline{\operatorname{Vtx}}_{n-1-s,\geq\lambda}^{\operatorname{Berk}}\) with \(s\in\{\pm 1,0\}\) will not interfere with the points in (1) and (2).
Now take any \(k^{\prime}=k_{\varepsilon}+(p-1)k^{\prime}_{\bullet}\neq k\) and any \(s\in\{\pm 1,0\}\). The condition \(\frac{1}{2}d_{k}^{\operatorname{Iw}}(\tilde{\varepsilon}_{1})=n-1\) can be rewritten (via Definition-Proposition 2.12) as
\[(n-1-s)-\tfrac{1}{2}d_{k^{\prime}}^{\operatorname{Iw}}(\tilde{\varepsilon}_{1} )=k_{\bullet}-k^{\prime}_{\bullet}-s.\]
By Proposition 2.18(3), the corresponding disc removed from \(\overline{\mathcal{W}}_{\geq\lambda}\) to get \(\overline{\operatorname{Vtx}}_{n-1-s,\geq\lambda}^{\operatorname{Berk}}\) is precisely \(\overline{\mathbf{D}}(w_{k^{\prime}},\Delta_{k^{\prime},|k_{\bullet}-s-k^{ \prime}_{\bullet}|+1}-\Delta_{k^{\prime},|k_{\bullet}-s-k^{\prime}_{\bullet}|})\).
Suppose to the contrary that \(\overline{\mathbf{D}}(w_{k^{\prime}},\Delta_{k^{\prime},|k_{\bullet}-s-k^{ \prime}_{\bullet}|+1}-\Delta_{k^{\prime},|k_{\bullet}-s-k^{\prime}_{\bullet}|})\) contains one of the points in (1) and (2) for some \(s\in\{\pm 1,0\}\). Then we have
* (for the radii) \(\Delta_{k,1}-\Delta_{k,0}+\epsilon\geq\Delta_{k^{\prime},|k_{\bullet}-s-k^{ \prime}_{\bullet}|+1}-\Delta_{k^{\prime},|k_{\bullet}-s-k^{\prime}_{\bullet}|}\), and
* (for the centers) \(v_{p}(w_{k^{\prime}}-w_{k})\geq\min\big{\{}\Delta_{k^{\prime},|k_{\bullet}-s-k^{ \prime}_{\bullet}|+1}-\Delta_{k^{\prime},|k_{\bullet}-s-k^{\prime}_{\bullet}|}, \ \Delta_{k,1}-\Delta_{k,0}-\epsilon\big{\}}\).
Yet the differences \(\Delta_{k^{\prime},|k_{\bullet}-s-k^{\prime}_{\bullet}|+1}-\Delta_{k^{\prime},|k_{\bullet}-s-k^{\prime}_{\bullet}|}\) and \(\Delta_{k,1}-\Delta_{k,0}\) belong to \(\frac{1}{2}\mathbb{Z}\) by Proposition 2.18(6), and \(v_{p}(w_{k^{\prime}}-w_{k})\in\mathbb{Z}\). Since \(\epsilon\in(0,\frac{1}{2})\), the two inequalities above still hold after setting \(\epsilon=0\): indeed, if \(a,b\in\frac{1}{2}\mathbb{Z}\) satisfy \(a\geq b-\epsilon\) with \(0<\epsilon<\frac{1}{2}\), then \(a\geq b\). In particular,
\[v_{p}(w_{k^{\prime}}-w_{k})\geq\Delta_{k^{\prime},|k_{\bullet}-s-k^{\prime}_{ \bullet}|+1}-\Delta_{k^{\prime},|k_{\bullet}-s-k^{\prime}_{\bullet}|}. \tag{9.11.1}\]
This inequality implies that \(n-1-s\in\overline{\mathrm{nS}}_{w_{k^{\prime}},k}\) by Definition 2.17, and thus \(\mathrm{nS}_{w_{k^{\prime}},k}\) contains at least one of \(\{n-3,n-2,\ldots,n+1\}\). This would imply by Proposition 2.18(5) that at least one of \((0,\Delta_{k,0})\), \((1,\Delta_{k,1})\), or \((2,\Delta_{k,2})\) is not a vertex of \(\underline{\Delta}_{k}\); this contradicts [17, Lemmas 5.6 and 5.8] (which say that the "first" \(p-1\) points on \(\underline{\Delta}_{k}\) are vertices). This completes the proof of the Claim in Step II.
**Step III**: Write \(m(H):=m(H,1)\) and \(m(H^{\prime}):=m(H^{\prime},1)\). We will prove inductively that \(m(H,n)=n\cdot m(H)\) and \(m(H^{\prime},n)=n\cdot m(H^{\prime})\).
The inductive base is clear. We assume that the above statement holds for smaller \(n\)'s. For the integer \(n\) in consideration, we take the weight \(k\) as in Step II.
By Step II(1), \(\eta_{w_{k},\Delta_{k,1}-\Delta_{k,0}-\epsilon}\) belongs to both \(\overline{\mathrm{Vtx}}_{n,\geq\lambda}\) and \(\overline{\mathrm{Vtx}}_{n-1,\geq\lambda}^{\mathrm{Berk}}\) for all \(\epsilon\in(0,\frac{1}{2})\). By Step I and the inductive hypothesis, we have
\[|h_{m(H,n)}(\eta_{w_{k},\Delta_{k,1}-\Delta_{k,0}-\epsilon})|=\Big{|}g_{n-1}^{ m(H)}(\eta_{w_{k},\Delta_{k,1}-\Delta_{k,0}-\epsilon})\cdot\Big{(}\frac{g_{n}}{g_ {n-1}}\Big{)}^{m(H,n)-m(H,n-1)}(\eta_{w_{k},\Delta_{k,1}-\Delta_{k,0}-\epsilon })\Big{|}.\]
By continuity, the above equality holds for \(\epsilon=0\) as well. So in particular, for the slope derivatives at \(\eta_{w_{k},\Delta_{k,1}-\Delta_{k,0}}\) defined in (9.10.1), we have
\[V_{w_{k},\Delta_{k,1}-\Delta_{k,0}}^{+}(h_{m(H,n)})=V_{w_{k},\Delta_{k,1}- \Delta_{k,0}}^{+}\Big{(}g_{n-1}^{m(H)}\cdot\Big{(}\frac{g_{n}}{g_{n-1}}\Big{)} ^{m(H,n)-m(H,n-1)}\Big{)}. \tag{9.11.2}\]
On the other hand, by Step II(2), for every \(\alpha\in\mathcal{O}_{\mathbb{C}_{p}}\) and any \(\epsilon\in[0,\frac{1}{2})\), \(\eta_{w_{k}+\alpha p^{\Delta_{k,1}-\Delta_{k,0}},\Delta_{k,1}-\Delta_{k,0}+ \epsilon}\) is contained in \(\overline{\mathrm{Vtx}}_{n,\geq\lambda}^{\mathrm{Berk}}\) and \(\overline{\mathrm{Vtx}}_{n-2,\geq\lambda}^{\mathrm{Berk}}\) but not in \(\overline{\mathrm{Vtx}}_{n-1,\geq\lambda}^{\mathrm{Berk}}\). It follows that the Newton polygon of \(G_{\bar{\rho},\mathrm{nord}}(w,-)\) at each of those points is a straight line of width \(2\) from \(n-2\) to \(n\). We therefore deduce that for \(\bar{\alpha}\in\overline{\mathbb{F}}\),
\[V_{w_{k},\Delta_{k,1}-\Delta_{k,0}}^{\bar{\alpha}}(h_{m(H,n)})=V_{w_{k}, \Delta_{k,1}-\Delta_{k,0}}^{\bar{\alpha}}\Big{(}g_{n-2}^{m(H)}\cdot\Big{(} \frac{g_{n}}{g_{n-2}}\Big{)}^{(m(H,n)-m(H,n-2))/2}\Big{)}. \tag{9.11.3}\]
Taking the sum of (9.11.2) and (9.11.3) for all \(\bar{\alpha}\in\overline{\mathbb{F}}\) and using the harmonicity equality (9.10.2) (for \(h_{m(H,n)}\) in the first equality and for \(g_{n}\) and \(g_{n-2}\) in the third equality), we deduce that
\[0\stackrel{\text{(9.10.2)}}{=}V_{w_{k},\Delta_{k,1}-\Delta_{k,0}}^{+}\big{(}h_{m(H,n)}\big{)}+\sum_{\bar{\alpha}\in\overline{\mathbb{F}}}V_{w_{k},\Delta_{k,1}-\Delta_{k,0}}^{\bar{\alpha}}\big{(}h_{m(H,n)}\big{)}\] \[= V_{w_{k},\Delta_{k,1}-\Delta_{k,0}}^{+}\Big{(}g_{n-1}^{m(H)}\cdot\Big{(}\frac{g_{n}}{g_{n-1}}\Big{)}^{m(H,n)-m(H,n-1)}\Big{)}\] \[+\sum_{\bar{\alpha}\in\overline{\mathbb{F}}}V_{w_{k},\Delta_{k,1}-\Delta_{k,0}}^{\bar{\alpha}}\Big{(}g_{n-2}^{m(H)}\cdot\Big{(}\frac{g_{n}}{g_{n-2}}\Big{)}^{(m(H,n)-m(H,n-2))/2}\Big{)}\] \[\stackrel{\text{(9.10.2)}}{=}V_{w_{k},\Delta_{k,1}-\Delta_{k,0}}^{+}\Big{(}\Big{(}\frac{g_{n}g_{n-2}}{g_{n-1}^{2}}\Big{)}^{(m(H,n)-m(H,n-1)-m(H))/2}\Big{)}.\]
(The third equality also makes use of \(m(H,n-1)-m(H,n-2)=m(H)\) on the exponents of \(g_{n}\) and \(g_{n-2}\).)
To show that \(m(H,n)=n\cdot m(H)\), or equivalently \(m(H,n)-m(H,n-1)=m(H)\), it then suffices to show that
\[2V_{w_{k},\Delta_{k,1}-\Delta_{k,0}}^{+}(g_{n-1})\neq V_{w_{k},\Delta_{k,1}- \Delta_{k,0}}^{+}(g_{n})+V_{w_{k},\Delta_{k,1}-\Delta_{k,0}}^{+}(g_{n-2}). \tag{9.11.4}\]
By definition, for \(i\in\{n-2,n-1,n\}\), we have
\[V^{+}_{w_{k},\Delta_{k,1}-\Delta_{k,0}}(g_{i})=\sum_{v_{p}(w_{k^{\prime}}-w_{k}) \geq\Delta_{k,1}-\Delta_{k,0}}m_{i}(k^{\prime}) \tag{9.11.5}\]
is the sum of ghost zero multiplicities for those weights \(k^{\prime}\equiv k_{\varepsilon}\mod(p-1)\) such that \(v_{p}(w_{k^{\prime}}-w_{k})\geq\Delta_{k,1}-\Delta_{k,0}\). Note that the function \(i\mapsto m_{i}(k^{\prime})\) is linear over \(i\in\{n-2,n-1,n\}\) except when \(i\) equals \(\frac{1}{2}d^{\rm Iw}_{k^{\prime}}\), \(d^{\rm Iw}_{k^{\prime}}-d^{\rm ur}_{k^{\prime}}\), or \(d^{\rm ur}_{k^{\prime}}\). However, by the definition of the near-Steinberg range in Definition 2.17, the condition \(v_{p}(w_{k^{\prime}}-w_{k})\geq\Delta_{k,1}-\Delta_{k,0}\) implies that \(n-1\) belongs to the near-Steinberg range for \((w_{k^{\prime}},k)\). Yet Proposition 2.18(1) (for \(L_{w_{k^{\prime}},k}\geq 1\)) implies that the condition \(v_{p}(k^{\prime}_{\bullet}-k_{\bullet})\geq\Delta_{k,1}-\Delta_{k,0}\) excludes the cases \(i=d^{\rm Iw}_{k^{\prime}}-d^{\rm ur}_{k^{\prime}}\) and \(i=d^{\rm ur}_{k^{\prime}}\). So the only \(k^{\prime}\) appearing in the sum (9.11.5) for which \(i\mapsto m_{i}(k^{\prime})\) fails to be linear on this range is \(k^{\prime}=k\), in which case \(2m_{n-1}(k)-m_{n}(k)-m_{n-2}(k)=2\). It then follows that
\[2V^{+}_{w_{k},\Delta_{k,1}-\Delta_{k,0}}(g_{n-1})-V^{+}_{w_{k},\Delta_{k,1}- \Delta_{k,0}}(g_{n})-V^{+}_{w_{k},\Delta_{k,1}-\Delta_{k,0}}(g_{n-2})=2.\]
So (9.11.4) is not an equality. This completes the inductive proof of Step III.
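To make the second-difference computation above concrete (purely as an illustration, assuming the usual tent-shaped ghost multiplicity pattern \(m_{i}(k)=\min\{i-d_{k}^{\mathrm{ur}},\,d_{k}^{\mathrm{Iw}}-d_{k}^{\mathrm{ur}}-i\}\) near its peak, with character arguments suppressed and the peak at \(i=\frac{1}{2}d_{k}^{\mathrm{Iw}}=n-1\)): one gets \(m_{n-1}(k)=n-1-d_{k}^{\mathrm{ur}}\) and \(m_{n}(k)=m_{n-2}(k)=n-2-d_{k}^{\mathrm{ur}}\), so that
\[2m_{n-1}(k)-m_{n}(k)-m_{n-2}(k)=2(n-1-d_{k}^{\mathrm{ur}})-2(n-2-d_{k}^{\mathrm{ur}})=2.\]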
**Remark 9.12**.: The claim in Step II can probably be proved without referring to heavy results such as Proposition 2.18(3)(4)(5), but that would make the proof longer.
|
2310.19860 | Homologically area-minimizing surfaces that cannot be calibrated | In 1974, Federer proved that all area-minimizing hypersurfaces on orientable
manifolds were calibrated by weakly closed differential forms. However, in this
manuscript, we prove the contrary in higher codimensions: calibrated
area-minimizers are non-generic. This is surprising given that almost all known
examples of area-minimizing surfaces are confirmed to be minimizing via
calibration. Let integers $d\ge 1$ and $c\ge 2$ denote dimensions and
codimensions, respectively. Let $M^{d+c}$ denote a closed, orientable, smooth
manifold of dimension $d+c$. For each $d$-dimensional integral homology class
$[\Sigma]$ on $M$, we introduce $\Omega_{[\Sigma]}$ as the set of metrics for
which any $d$-dimensional homologically area-minimizing surface in the homology
class $[\Sigma]$ in any $g\in \Omega_{[\Sigma]}$ cannot be calibrated by any
weakly closed measurable differential form. Our main result is that
$\Omega_{[\Sigma]}$ is always a non-empty open set. To exemplify the prevalence
of such phenomenon, we show that for any homology class $[\Sigma]$ on
$\mathbb{CP}^n,$ the closure of $\Omega_{[\Sigma]}$ contains the Fubini-Study
metric. In the hypersurface case, we show that even when a smooth
area-minimizer is present, the calibration forms are compelled in some cases to
have a non-empty singular set. This provides an answer to a question posed by
Michael Freedman. Finally, we show that the ratio of the integral minimal mass
to the real minimal mass is unbounded when we consider all Riemannian metrics.
Also, it is always possible to fill a multiple of a homology class with an
arbitrarily small area compared to the class itself. This settles the
Riemannian version of several conjectures by Frank Morgan, Brian White and
Robert Young. | Zhenhua Liu | 2023-10-30T17:58:22Z | http://arxiv.org/abs/2310.19860v2 | # Homologically area-minimizing surfaces that cannot be calibrated: infinite Lavrentiev gap
###### Abstract.
In 1974, Federer [17] proved that all area-minimizing hypersurfaces on orientable manifolds were calibrated by weakly closed differential forms. However, in this manuscript, we prove the contrary in higher codimensions: calibrated area-minimizers are non-generic. This is surprising given that almost all known examples of area-minimizing surfaces are confirmed to be minimizing via calibration.
Let integers \(d\geq 1\) and \(c\geq 2\) denote dimensions and codimensions, respectively. Let \(M^{d+c}\) denote a closed, orientable, smooth manifold of dimension \(d+c\). For each \(d\)-dimensional integral homology class \([\Sigma]\) on \(M\), we introduce \(\Omega_{[\Sigma]}\) as the set of metrics for which any \(d\)-dimensional homologically area-minimizing surface in the homology class \([\Sigma]\) in any \(g\in\Omega_{[\Sigma]}\) cannot be calibrated by any weakly closed measurable differential form. Our main result is that \(\Omega_{[\Sigma]}\) is always a non-empty open set.
To exemplify the prevalence of such phenomenon, we show that for any homology class \([\Sigma]\) on \(\mathbb{CP}^{n}\), the closure of \(\Omega_{[\Sigma]}\) contains the Fubini-Study metric.
In the hypersurface case, we show that even when a smooth area-minimizer is present, the calibration forms are compelled in some cases to have a non-empty singular set. This provides an answer to a question posed by Michael Freedman [19].
The above phenomenon is essentially due to the Lavrentiev phenomenon of the minimal mass of homology classes. In this direction, for \([\Sigma]\) with \(d\geq 1,c\geq 2\), we show that the ratio of the integral minimal mass to the real minimal mass is unbounded when we consider all Riemannian metrics. Also, it is always possible to fill a multiple of a homology class with an arbitrarily small area compared to the class itself. This settles the Riemannian version of several conjectures by Frank Morgan [34], Brian White [41] and Robert Young [43].
###### Contents
* 1 Introduction
* 2 Basic Notations
* 3 Lemmas About Calibrations
* 4 Proof of Theorem 1.5
* 5 Proof of Theorem 1.1,1.4,1.6
* 6 Proof of Theorem 1.2
* 7 Proof of Theorem 1.3
## 1. Introduction
In this paper, area-minimizing surfaces refer to area-minimizing integral currents, which, roughly speaking, are oriented surfaces counted with multiplicity that minimize the area functional. Calling them surfaces is justified thanks to Almgren's Big
Theorem ([2]) and the work of De Lellis' school ([7][8][9][6]). Their results show that \(n\)-dimensional area-minimizing integral currents are smooth manifolds outside of a rectifiable singular set of dimension at most \(n-2.\) (In the codimension \(1\) case, the dimension of the singular set can be reduced to \(n-7\) by [16].)
On the other hand, the foundational work [18] establishes that, on a closed Riemannian manifold, any integral homology class admits a representative that is area-minimizing. Consequently, we have extremely powerful existence and regularity results at our disposal.
On the contrary, proving that a selected representative of a homology class is area-minimizing poses substantial challenges. To the author's knowledge, over the past \(70\) years, calibrations ([23]) have essentially remained the sole method available, encompassing Kahler, special Lagrangian, associative, and Cayley calibrations in special holonomy geometries, Lawlor's vanishing calibrations [26], Morgan and Mackenzie's planar calibrations [31][36], and foliations by minimal or mean convex/concave hypersurfaces in the hypersurface case.
Furthermore, Federer proved in [17] that every area-minimizing hypersurface residing on compact orientable manifolds can be calibrated by a weakly closed differential form.
In light of mounting evidence, a natural question arises:
**Are calibrated area-minimizers also generic in higher codimensions, given that calibration is predominantly the only way for proving area-minimization?**
There exist sporadic counterexamples. Torsion classes, for instance, are trivially non-calibratable. For non-torsion classes, Section 5.11 of [17] gives a \(3\)-torus counterexample. Nonetheless, it remains an open question whether such non-calibratable minimizers are a general occurrence or occur only on special manifolds.
To resolve the above question, we give a negative answer as follows.
**Theorem 1.1**.: _Assume that,_
* \(M^{d+c}\) _is a compact closed orientable_ \(d+c\) _dimensional smooth manifold with_ \(d\geq 1,c\geq 2.\)__
* \([\Sigma]\) _is a_ \(d\)_-dimensional integral homology class on_ \(M\)_._
_Define \(\Omega_{[\Sigma]}\) to be the set of metrics such that for any metric \(g\in\Omega_{[\Sigma]}\), no \(d\)-dimensional homologically area-minimizing integral current in the homology class \([\Sigma]\) can be calibrated by real flat cochains. Then_
* \(\Omega_{[\Sigma]}\) _is open._
* \(\Omega_{[\Sigma]}\) _is non-empty._
_Remark 1_.: By 4.4.19 in [18] and Section 4.6 in [17], any bounded weakly closed measurable differential form is a real flat cochain and vice versa.
_Remark 2_.: The proof of the above theorem gives an abundance of area-minimizing integral currents whose multiples are not area-minimizing, even on manifolds with no torsion classes at all.
_Remark 3_.: In view of the above theorem, Camillo De Lellis conjectures that \(\Omega_{[\Sigma]}\) is dense.
One could postulate, in opposition to Theorem 1.1, that the \(\Omega_{[\Sigma]}\) metrics are hand-tailored to the problem, presupposing that proximity to standard classical metrics would imply calibration of area-minimizers. Regrettably, such an assumption is unfounded.
**Theorem 1.2**.: _For \(c,d\geq 1,\) let \([\Sigma]\) be a \(d\)-dimensional integral homology class on a \(\frac{d+c}{2}\)-dimensional complex projective space \(\mathbb{CP}^{\frac{d+c}{2}}.\) Then in the notation of Theorem 1.1, the Fubini-Study metric is in the closure of \(\Omega_{[\Sigma]}\)._
_Remark 4_.: This shows that in higher codimensions, metrics with non-calibratable minimizers are not artificial constructions but indeed very natural.
In codimension 1, we always have a calibration by [17]. Professor Michael Freedman raised the following question in personal communication ([20]), to paraphrase him,
**When there is a smooth area-minimizing hypersurface, what is the best regularity of calibration forms?**
We show that the answer might not be as optimistic as one hopes in Theorem A.1 of [20].
**Theorem 1.3**.: _Let \(d\geq 7\) be an integer and \(M^{d+1}\) a compact smooth orientable manifold. For every \(d\)-dimensional integral homology class \([\Sigma]\), there exists a smooth metric \(g\) so that_
* _there are smooth area-minimizing hypersurfaces in_ \([\Sigma]\) _in the metric_ \(g\)_,_
* _any flat cochain calibrating_ \([\Sigma]\) _must have a singular set with positive (possibly infinite) Hausdorff_ \(d-7\) _measure._
Now we are ready to delve deeper into the ideas behind Theorem 1.1. The existence of non-calibrated minimizers is closely connected to the Lavrentiev phenomenon for the minimal area, and is a corollary of the following results. First we need some notions.
**Definition 1.1**.: For an integral current (or an integral homology class) \(T\) on a smooth Riemannian manifold \((M,g)\), define the real and integral minimal mass as
\[\min\mathbf{M}^{g}_{\mathbb{R}}([T]) =\inf_{S\text{ homologous to }T\text{ over }\mathbb{R}}\mathbf{M}(S)\] \[\min\mathbf{M}^{g}_{\mathbb{Z}}([T]) =\inf_{S\text{ homologous to }T\text{ over }\mathbb{Z}}\mathbf{M}(S).\]
_Remark 5_.: Here homologous over \(\mathbb{Z}\) (or \(\mathbb{R}\)) means an integral current (or real flat chain) homologous in \(\mathbb{Z}\) (or \(\mathbb{R}\)) coefficient, respectively. See Section 2.2. Also note that \(\min\mathbf{M}\) depends only on the homology class of \(T\) if \(\partial T=0\).
Federer proved in [17] that for any integer \(k\), we have
\[\min\mathbf{M}^{g}_{\mathbb{R}}(k[T])=k\min\mathbf{M}^{g}_{\mathbb{ R}}([T]),\] \[\min\mathbf{M}^{g}_{\mathbb{R}}([T])\leq\min\mathbf{M}^{g}_{ \mathbb{Z}}([T]),\] \[\lim_{k\to\infty}\frac{\min\mathbf{M}^{g}_{\mathbb{Z}}(k[T])}{k} =\min\mathbf{M}^{g}_{\mathbb{R}}([T]).\]
Moreover, by [17], \(T\) is calibrated if and only if \(\min\mathbf{M}^{g}_{\mathbb{R}}([T])=\min\mathbf{M}^{g}_{\mathbb{Z}}([T]).\)
Frank Morgan ([34]), Brian White ([41]) and Robert Young ([43]) have investigated the Lavrentiev phenomenon of
\[\frac{\min\mathbf{M}^{g}_{\mathbb{Z}}([T])}{\min\mathbf{M}^{g}_{\mathbb{R}}([T ])}>1,\frac{k\min\mathbf{M}^{g}_{\mathbb{Z}}([T])}{\min\mathbf{M}^{g}_{ \mathbb{Z}}(k[T])}>1. \tag{1.1}\]
They have all raised the following two questions (Problem 1.13 in [1]), to rephrase them,
* **Is**\(\min\mathbf{M}^{g}_{\mathbb{Z}}([\Sigma])/\min\mathbf{M}^{g}_{\mathbb{R}}([ \Sigma])\)**bounded?**
* **Is it possible that**\(\min\mathbf{M}^{g}_{\mathbb{Z}}(k[T])<\min\mathbf{M}^{g}_{\mathbb{Z}}([T])\)?
In other words, how large can the Lavrentiev gaps be in the two inequalities in (1.1)? We show that both gaps are unbounded in the homology setting.
**Theorem 1.4**.: _With the same assumption of Theorem 1.1,_
\[\sup_{g}\frac{\min\mathbf{M}_{\mathbb{Z}}^{g}([\Sigma])}{\min\mathbf{M}_{ \mathbb{R}}^{g}([\Sigma])}=\infty,\]
_with \(g\) ranging over all smooth Riemannian metrics._
_Remark 6_.: Note that the above ratio is independent of constant scalings of the metric.
This follows from a much more refined result.
**Theorem 1.5**.: _With the same assumption of Theorem 1.1, if \([\Sigma]\) is a non-torsion class, then there exists a sequence of integers \(k_{j}\geq 2,k_{j}\to\infty\) so that for any \(k_{j}\) and any integer \(m\geq 3,\) there exists a smooth Riemannian metric \(g_{j,m}\) on \(M\), so that,_
\[\frac{\min\mathbf{M}_{\mathbb{Z}}^{g_{j,m}}([\Sigma])}{\min\mathbf{M}_{ \mathbb{Z}}^{g_{j,m}}(k_{j}[\Sigma])}\geq m,\]
_Remark 7_.: Here \(\{k_{j}\}\) can be any sequence so that \(k_{j}[\Sigma]\) admit a smoothly embedded representative.
_Remark 8_.: Thus, it is always possible to fill multiples of a homology class much more efficiently than the class itself.
The above conclusion cannot be extended to torsion classes as shown in the following theorem.
**Theorem 1.6**.: _With the same assumption of Theorem 1.1, if \([\Sigma]\) is an \(n\)-torsion class, \(k\) is coprime to and smaller than \(n,\)\(l\) the smallest natural number with \(kl\equiv 1(\mod n),\) then for all metric \(g\), we have_
\[\max\{l^{-1},(n-l)^{-1}\}\leq\frac{\min\mathbf{M}_{\mathbb{Z}}^{g}(k[\Sigma])}{\min\mathbf{M}_{\mathbb{Z}}^{g}([\Sigma])}\leq\min\{k,n-k\}.\]
_Remark 9_.: Here \(n\) is the least natural number with \(n[\Sigma]=0.\)
_Remark 10_.: The same conclusion holds for \(n\)-torsion classes in \(\mod v\) coefficient homology, with \(\min\mathbf{M}_{\mathbb{Z}}^{g}\) replaced with corresponding minimal mass \(\mod v.\)
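As a purely illustrative numerical instance of these bounds (not taken from the source): suppose \([\Sigma]\) is a \(5\)-torsion class and \(k=2\); then \(l=3\) since \(2\cdot 3\equiv 1\ (\mathrm{mod}\ 5)\), and Theorem 1.6 gives, for every metric \(g\),
\[\tfrac{1}{2}=\max\{\tfrac{1}{3},\tfrac{1}{2}\}\leq\frac{\min\mathbf{M}_{\mathbb{Z}}^{g}(2[\Sigma])}{\min\mathbf{M}_{\mathbb{Z}}^{g}([\Sigma])}\leq\min\{2,3\}=2.\]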
Interestingly, the method of proving Theorem 1.5 relies on inspecting area-minimizing currents in finite coefficient homology. To the author's knowledge, this is the first time in the homology setting that \(\mod v\) mass-minimizing currents are used to study integral and real mass-minimizing currents.
### Sketch of proof
Theorem 1.6 follows directly from the multiplicative structure of finite abelian groups. Theorems 1.1 and 1.4 are straightforward consequences of Theorem 1.5. For readers only interested in Theorem 1.1, a one-sentence summary is as follows. Metrics with the non-calibratable property in Theorem 1.1 form an open set, so it suffices to find at least one metric with non-calibratable area-minimizers, of which plenty exist thanks to Theorem 1.5.
The key idea to prove Theorem 1.5 is that the multiplicative structure of finite abelian groups can sometimes force multiples of a class to admit smaller area.
Let us illustrate the proof for Theorem 1.5 with the simplest possible case, \(d=1,M=S^{1}\times S^{c},\)\([\Sigma]=[S^{1}\times\text{point}],\) and \(k_{1}=2.\) Consider the \(\mod 2m-1\) coefficient homology. We have \(m(2[\Sigma])\equiv[\Sigma]\mod(2m-1).\) In other words, \(m\) times the class \(2[\Sigma]\) is class \([\Sigma]\) in \(\mathbb{Z}/(2m-1)\mathbb{Z}\) coefficient. Now take a connected smooth loop \(\gamma\) representing \(2[\Sigma]\). If one can construct a metric so that \(\gamma\) is area-minimizing
in \(2[\Sigma]\) with \(\mathbb{Z}\) coefficient and \(m\gamma\) is area-minimizing in \(2m[\Sigma]\equiv[\Sigma]\mod(2m-1)\) in \(\mathbb{Z}/(2m-1)\mathbb{Z}\) coefficient, then we are done. The reason is that taking finite coefficients only increases the number of possible competitors, so any \(\mathbb{Z}\) coefficient minimizer must have area at least that of \(m\gamma\).
This minimizing property of \(\gamma\) and \(m\gamma\) can be simultaneously achieved by using Zhang's constructions in [44] and [45], and using the ideas of Morgan in [33]. The connectedness of \(\gamma\) is essential here. Otherwise, the above argument fails and the minimizer in \(2m[\Sigma]\equiv[\Sigma]\) might simply be some components of \(\gamma\).
For Theorem 1.2, we find a smooth connected algebraic subvariety \(N\) representing the class \(2[\Sigma]\). Then we deform the metric arbitrarily close to the Fubini-Study metric while making \(N\) uniquely minimizing.
To prove Theorem 1.3, note that any calibration form must simultaneously calibrate all area-minimizers in the homology class. Thus it suffices to construct a metric where both singular and regular minimizers exist.
## Acknowledgements
I cannot thank my advisor Professor Camillo De Lellis enough for his unwavering support while I have been recovering from illness. I feel so lucky that I have Camillo as my advisor. Many thanks go to him for countless helpful suggestions regarding this manuscript and others. I would also like to thank Professor Michael Freedman, whose personal communications partially inspired this paper. Last but not least, the author wants to thank Professor Frank Morgan for his constant support and pioneering work that has inspired many constructions in the author's works.
## 2. Basic Notations
In this section, we will fix our notations.
### Manifolds and neighborhoods
We will reserve \(M\) to denote an ambient smooth compact closed orientable Riemannian manifold. Submanifolds will be denoted by \(N,L\), etc. We will use the following sets of definitions and notations.
**Definition 2.1**.: Let \(N\) be a smooth submanifold of a Riemannian manifold \(M.\)
* \(B_{r}^{M}(N)\) denotes a tubular neighborhood of \(N\) inside \(M\) in the intrinsic metric on \(M\) of radius \(r\). When there is no a priori choice of metric, we use an arbitrary metric.
* \(\mathbf{T}_{p}M\) denotes the tangent space to \(M\) at \(p.\) We will often regard \(\mathbf{T}_{p}N\) as a subspace of \(\mathbf{T}_{p}M.\)
* \(\exp_{N\subset M}^{\perp}\) denotes the normal bundle exponential map of the inclusion \(N\subset M.\)
* FocalRad\({}_{N}^{M}\) is the focal radius of \(N\) in \(M\), i.e., the radius below which the normal bundle exponential map remains injective.
* \(\pi_{N}^{M}\) denotes the nearest distance/normal bundle exponential map projection from \(M\) to \(N\), with respect to the intrinsic metric of \(M\), defined on \(B_{r}^{M}(N)\) with \(r\) less than the focal radius of \(N\) inside \(M\).
_Remark 11_.: We will often drop the super-/subscripts \(M,N\) when there is no confusion.
We need the following Lemma about representing homology classes.
**Lemma 2.1**.: _With the same assumptions as in Theorem 1.1, if \([\Sigma]\) is not a torsion class, then there exists a sequence of positive integers \(k_{j}\to\infty\) so that \(k_{j}[\Sigma]\) can be represented by a smooth connected embedded orientable submanifold \(N.\)_
_Remark 12_.: This is the only place we have used the condition \(c\geq 2\).
Proof.: First, let us show that for an integral homology class to have an embedded oriented representative is equivalent to having an embedded connected oriented representative. It is clear that the latter condition implies the former. Let us assume the former condition.
Since the codimension \(c\) is larger than \(1\), it is always possible to use connected sums to connect different components of the representative.
The connected sum is done as follows. Connect any two components using a curve. Then use transversality to make the curve intersect the two components only at its end points. Replace the curve with the sphere bundle of its normal bundle, used as a neck, to perform the connected sum. Orientation can always be taken into account by twisting the curve around the normal bundle of the end points, which is locally always a ball times \(\mathbb{R}^{2}\). Note that in this process both the transversality of the curve and the orientation of the connected sum utilize the condition \(c\geq 2\).
Thus, we only need to prove the proposition with the connectedness condition removed. Suppose there are only finitely many integers \(0<k_{1}<\cdots<k_{n}\) so that we have a smoothly embedded representative for \(k_{j}[\Sigma]\), with \(k_{n}=0\) if there are none. Since \([\Sigma]\) is not a torsion class, \((k_{n}+1)[\Sigma]\) is a nontrivial integral homology class which has no smoothly embedded representative. By Theorem II.29 of [40], there exists a non-zero integer \(N,\) so that \(N(k_{n}+1)[\Sigma]\) has a smoothly embedded representative, and \(N(k_{n}+1)[\Sigma]\) does not equal \(k_{j}[\Sigma]\) for any \(j\), since \([\Sigma]\) is non-torsion. This is a contradiction.
### Currents and real flat chains
When we mention a surface \(T\), we mean an integral current \(\llbracket T\rrbracket\). For a comprehensive introduction to integral currents, the standard references are [38] and [15]. We will adhere to their notations. Our manuscript mostly focuses on the differential geometric side and in fact, no a priori knowledge of currents is needed. Every time we mention a current, the reader can just assume it to be a sum of chains representing oriented surfaces with singularities. We will use the following definition of irreducibility of currents.
**Definition 2.2**.: A closed integral current \(T\) is irreducible in \(U,\) if we cannot write \(T=S+W,\) with \(\partial S=\partial W=0,\) and \(S,W\) nonzero, and \(\mathbf{M}(T)=\mathbf{M}(S)+\mathbf{M}(W).\)
_Remark 13_.: It is easy to see that the definition of irreducibility above is equivalent to 1.c.i of Section 2.7 of [28], if we assume the other assumptions in that Section.
Also, we will use the differential geometry convention of closedness. An integral current \(T\) is closed, if \(\partial T=0\) and \(T\) has compact support.
The primary references for real flat chains are [15] and [17]. Technically speaking, they are the closure of real-multiplicity polyhedral chains with finite mass and boundary mass under the flat topology (4.1.23 of [15]). For our purposes, the only concrete examples of real flat chains used are integral currents.
_Remark 14_.: By Section 3 of [17], the real flat chains in the same real homology class on a compact Riemannian manifold form a closed set. Moreover, there exist homologically mass-minimizing real flat chains in any real homology class.
### Mass and comass
The comass of a \(d\)-dimensional differential form \(\phi\) in a metric \(g\) is defined as
\[\operatorname{comass}_{g}\phi=\sup_{x}\sup_{P\subset\mathbf{T}_{x}M}\phi( \frac{P}{|P|_{g}}),\]
where \(P\) ranges over \(d\)-dimensional oriented planes in the tangent space to \(M.\)
The mass of a current is defined as
\[\mathbf{M}_{g}(T)=\sup_{\operatorname{comass}_{g}(\phi)\leq 1}T(\phi).\]
### Calibrations and real flat cochains
For calibrations, the reader should be familiar with the definitions of comass and calibrations (Section II.3 and II.4 in [23]). The primary reference is [23]. The most important concept to keep in mind is the fundamental theorem of calibrations (Theorem 4.2 in [23]), which states that calibrated currents are area-minimizing among homologous competitors. We will apply this theorem numerous times without explicit citation.
Flat cochains are defined as bounded continuous linear functionals on flat chains. By Section 4.1.19 in [18], they are equivalent to measurable forms which, together with their weak exterior derivatives, have finite mass.
**Definition 2.3**.: For a flat cochain \(\alpha,\) we say that \(\alpha\) is a calibration cochain on a Riemannian manifold \(M,\) if
* \(d\alpha=0,\)
* \(\alpha(T)\leq\mathbf{M}(T),\) for all real flat chains \(T.\)
We say a real flat chain \(T\) is calibrated by \(\alpha,\) if
\[\alpha(T)=\mathbf{M}(T).\]
_Remark 15_.: By definition of calibration, it is clear that any classical calibration form serves as a calibration cochain.
We also need to define the singular set of a calibration cochain.
**Definition 2.4**.: For a calibration flat cochain, the singular set is defined to be the set of points at which every representing weakly closed form fails to be continuous.
## 3. Lemmas About Calibrations
In this section, we need to collect several useful lemmas about calibrations.
### Basic facts
**Lemma 3.1**.: _A real flat chain calibrated by a flat cochain in the sense of Definition 2.3 is mass-minimizing in its real homology class._
Proof.: This follows directly from Definition 2.3.
**Lemma 3.2**.: _If a real flat cochain \(\phi\) calibrates a mass-minimizing flat chain \(T\) in a homology class \([T]\), then it calibrates all the mass-minimizing currents in \([T].\)_
Proof.: Let \(T^{\prime}\) be any other mass-minimizing chain in \([T].\) Then we have \(T^{\prime}-T=\partial R\) for some real flat chain \(R.\) Thus we have \((T^{\prime}-T)(\phi)=\partial R(\phi)=R(d\phi)=0.\) This implies \(\mathbf{M}(T)=\phi(T)=\phi(T^{\prime})\leq\mathbf{M}(T^{\prime}).\) Since \(T^{\prime}\) is also mass-minimizing, we deduce that \(\phi(T^{\prime})=\mathbf{M}(T^{\prime}).\)
**Lemma 3.3**.: _For any real homology class \([\Sigma]\) on a closed compact smooth Riemannian manifold \(M,\) every mass-minimizing flat chain in \([\Sigma]\) is calibrated by some flat cochain \(\phi.\)_
Proof.: This is just Section 4.12 of [17].
**Lemma 3.4**.: _Let \([\Sigma]\) be a torsion class in integral homology. Then any area-minimizing integral current in \([\Sigma]\) cannot be calibrated._
Proof.: If not, suppose there is a calibration flat cochain \(\alpha\) that calibrates an area-minimizing integral current \(T\) in \([\Sigma].\) Let \(k\) be a non-zero integer so that \(k[\Sigma]=0.\) Then \(kT\) is a boundary, so \(\mathbf{M}(T)=\alpha(T)=\frac{1}{k}\alpha(kT)=0,\) a contradiction.
**Lemma 3.5**.: _Let \(T\) be a homologically mass-minimizing real flat chain calibrated by a flat cochain \(\alpha.\)_
* _Then_ \(aT\) _is homologically mass minimizing for any real number_ \(a.\)
* _If_ \(T\) _is the unique real flat chain calibrated by_ \(\alpha\) _in its real homology class then_ \(aT\) _is the unique mass minimizing real flat chain in its homology class for any real number_ \(a\)_._
Proof.: If \(a\) is zero, there is nothing to prove. From now on, we suppose \(a\neq 0.\) Note that
\[\frac{a}{|a|}\alpha(aT)=|a|\alpha(T)=|a|M(T)=M(aT).\]
In case of \(a>0,\) we deduce the result from \(\alpha(S)\leq\mathbf{M}(S)\) for any real flat chain \(S.\) In case of \(a<0,\) note that \(-\alpha\) is also a calibration cochain in the sense of Definition 2.3 and calibrates \(-T\). This proves the first claim.
If \(T\) is the unique flat chain calibrated by \(\alpha\) in its real homology class, then it is clearly the unique mass-minimizer by definition. For \(a>0,\) suppose \(S\) is another mass-minimizing real flat chain \(\alpha\) in the homology class \(a[T],\) then we have
\[\frac{\mathbf{M}(aT)}{a}=\frac{\mathbf{M}(S)}{a}=\mathbf{M}(S/a)\geq\alpha(S/ a)=\alpha(T)=\mathbf{M}(T).\]
Thus, we have \(\mathbf{M}(S/a)=\alpha(S/a).\) This implies \(S=aT.\) If \(a<0,\) we run the argument with \(-\alpha\) replacing \(\alpha\) and \(-T\) replacing \(T.\)
### Continuity of mass
**Lemma 3.6**.: _For any current \(T\) and two different smooth metrics \(g,h,\) we have_
\[\|h/g\|_{C^{0}}^{d/2}\,\mathbf{M}_{g}(T)\geq\mathbf{M}_{h}(T)\geq\|g/h\|_{C^{0}}^{-d/2}\,\mathbf{M}_{g}(T).\]
_Remark 16_.: Here \(\|g/h\|_{C^{0}}\) is defined as
\[\|g/h\|_{C^{0}}=\sup_{x\in M}\sup_{v\in\mathbf{T}_{x}M}\frac{g(v,v)}{h(v,v)}.\]
Proof.: By symmetry between \(g\) and \(h,\) it suffices to prove the left-hand inequality: applying it with the roles of \(g\) and \(h\) exchanged gives \(\mathbf{M}_{g}(T)\leq\|g/h\|_{C^{0}}^{d/2}\,\mathbf{M}_{h}(T),\) which is the right-hand inequality. Let \(c=\|h/g\|_{C^{0}},\) so that \(h\leq c\,g\) pointwise and hence \(|P|_{h}\leq c^{d/2}|P|_{g}\) for every simple \(d\)-vector \(P.\) Then for any \(d\)-dimensional differential form \(\phi,\) we have
\[\operatorname{comass}_{h}(\phi)=\sup_{x}\sup_{P}\phi_{x}\bigg{(}\frac{P}{|P|_{h}}\bigg{)}\geq\sup_{x}\sup_{P}\phi_{x}\bigg{(}\frac{P}{c^{d/2}|P|_{g}}\bigg{)}=\frac{1}{c^{d/2}}\operatorname{comass}_{g}(\phi),\]
where \(P\) runs through all \(d\)-dimensional oriented planes (and it suffices to consider \(P\) with \(\phi_{x}(P)\geq 0\), which is possible since both orientations are allowed). This implies that
\[\{\phi|\operatorname{comass}_{h}(\phi)\leq 1\}\subset\{\phi|\operatorname{comass}_{g}(\phi)\leq c^{d/2}\}.\]
Since the sup over a subset is always smaller than or equal to that over the whole set, we deduce that
\[\mathbf{M}_{h}(T)=\sup_{\operatorname{comass}_{h}(\phi)\leq 1}T(\phi)\leq\sup_{\operatorname{comass}_{g}(\phi)\leq c^{d/2}}T(\phi)=\sup_{\operatorname{comass}_{g}(c^{-d/2}\phi)\leq 1}c^{d/2}\,T(c^{-d/2}\phi)=c^{d/2}\mathbf{M}_{g}(T).\]
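As a sanity check (an illustrative local example, not from the source), take \(d=1\), let \(T\) be a unit segment along the \(x\)-axis in flat \(\mathbb{R}^{2}\) (say, inside a large flat torus so that compactness is not an issue), let \(g\) be the Euclidean metric and \(h=dx^{2}+100\,dy^{2}\). Then \(\mathbf{M}_{g}(T)=\mathbf{M}_{h}(T)=1\), \(\|h/g\|_{C^{0}}=100\), \(\|g/h\|_{C^{0}}=1\), and the lemma reads
\[100^{1/2}\cdot 1=10\;\geq\;1\;\geq\;1^{-1/2}\cdot 1=1,\]
showing in particular that the two bounds need not be sharp simultaneously.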
**Lemma 3.7**.: _Let \(g_{j}\to g\) be a sequence of smooth metrics \(\{g_{j}\}\) converging to \(g.\) For any sequence of mass-minimizing real flat chain \(T_{j}\in[\Sigma]\) in the sequence of metric \(g_{j}\), any converging subsequence converges to a mass-minimizing flat chain in \(g\), and we have_
\[\lim_{j}\mathbf{M}_{g_{j}}(T_{j})=\mathbf{M}_{g}(T),\]
_where \(T\) is any mass-minimizing flat chain in metric \(g\) and \(\mathbf{M}_{g_{j}}\) denote the mass in \(g_{j}\)._
Proof.: By Section 3.7 of [17], the set \(\{T_{j}\}\) is pre-compact with respect to \(g.\) Let \(\{T_{j_{k}}\}\) be any converging subsequence and \(T^{\prime}\) its limit. Now apply the smoothing deformation (Section 4.7 of [17]) to \(T^{\prime}-T_{j_{k}}.\) By Section 4.7 (mainly page 377), for any \(\epsilon>0,\) there exists \(N_{\epsilon},\) so that for all \(k\geq N_{\epsilon},\) we have \(T^{\prime}-T_{j_{k}}=S_{j_{k}}+\partial R_{j_{k}},\) with \(\mathbf{M}_{g}(S_{j_{k}}),\mathbf{M}_{g}(R_{j_{k}})\leq\epsilon,\) and \(S_{j_{k}},R_{j_{k}}\) also being real flat chains. Writing \(T^{\prime}=S_{j_{k}}+T_{j_{k}}+\partial R_{j_{k}},\) using the triangle inequality and the mass-minimizing property of \(T_{j_{k}},\) we have
\[\mathbf{M}_{g_{j_{k}}}(T^{\prime})\geq\mathbf{M}_{g_{j_{k}}}(T_{j_{k}}+ \partial R_{j_{k}})-\mathbf{M}_{g_{j_{k}}}(S_{j_{k}})\geq\mathbf{M}_{g_{j_{k}} }(T_{j_{k}})-\mathbf{M}_{g_{j_{k}}}(S_{j_{k}}).\]
This implies \(\liminf\mathbf{M}_{g_{j_{k}}}(T^{\prime})\geq\limsup\mathbf{M}_{g_{j_{k}}}(T_ {j_{k}}).\) Now use Lemma 3.6, we deduce that
\[\mathbf{M}_{g}(T^{\prime})\geq\limsup\mathbf{M}_{g_{j_{k}}}(T_{j_{k}}). \tag{3.1}\]
On the other hand, by lower-semicontinuity of mass, e.g., Section 2.8 of [17], we have
\[\mathbf{M}_{g}(T^{\prime})\leq\liminf\mathbf{M}_{g}(T_{j_{k}}).\]
By Lemma 3.6, this gives
\[\mathbf{M}_{g}(T^{\prime})\leq\liminf\mathbf{M}_{g_{j_{k}}}(T_{j_{k}}). \tag{3.2}\]
Combining (3.1) and (3.2) gives \(\lim_{j_{k}}\mathbf{M}_{g_{j_{k}}}(T_{j_{k}})=\mathbf{M}_{g}(T^{\prime}).\)
If we can prove \(T^{\prime}\) is mass-minimizing, then we are done, since all mass-minimizers in the same homology class must have the same mass. To see this, we have
\[\mathbf{M}_{g_{j_{k}}}(T^{\prime}+\partial W)= \mathbf{M}_{g_{j_{k}}}(S_{j_{k}}+T_{j_{k}}+\partial R_{j_{k}}+ \partial W)\] \[\geq -\mathbf{M}_{g_{j_{k}}}(S_{j_{k}})+\mathbf{M}_{g_{j_{k}}}(T_{j_{ k}}+\partial R_{j_{k}}+\partial W)\] \[\geq -\mathbf{M}_{g_{j_{k}}}(S_{j_{k}})+\mathbf{M}_{g_{j_{k}}}(T_{j_{ k}}).\]
Taking limits on both sides and using Lemma 3.6 gives \(\mathbf{M}_{g}(T^{\prime}+\partial W)\geq\mathbf{M}_{g}(T^{\prime}).\)
_Remark 17_.: Replacing the smoothing deformation with deformation theorem of integral currents, the above argument shows that the mass is continuous on area-minimizing integral currents in sequences of metrics.
### Non-calibratable condition
**Lemma 3.8**.: _An area-minimizing integral current \(T\) in \([\Sigma]\) is calibrated by a flat cochain if and only if \(\mathbf{M}(T)=\min\mathbf{M}_{\mathbb{R}}([T]).\)_
Proof.: Let \(Z\) be a mass-minimizing real flat chain in the real homology class of \(T,\) so that \(\mathbf{M}(Z)=\min\mathbf{M}_{\mathbb{R}}([T]).\) If \(T\) is calibrated by a flat cochain \(\phi,\) then we have
\[\mathbf{M}(T)=\phi(T)=\phi(Z)\leq\mathbf{M}(Z).\]
By mass-minimality of \(Z,\) we also have \(\mathbf{M}(Z)\leq\mathbf{M}(T),\) so \(\mathbf{M}(T)=\mathbf{M}(Z).\)
If \(\mathbf{M}(T)=\mathbf{M}(Z),\) then by 4.12 in [17], there exists a calibration flat cochain \(\lambda\) with \(\lambda(Z)=\mathbf{M}(Z).\) We have \(\mathbf{M}(T)\geq\lambda(T)=\lambda(Z)=\mathbf{M}(Z).\) Since \(\mathbf{M}(T)=\mathbf{M}(Z),\) we deduce that \(\mathbf{M}(T)=\lambda(T).\) Thus \(T\) is calibrated by \(\lambda.\)
**Lemma 3.9**.: _Let \(M\) be a smooth Riemannian manifold \(M\) with a metric \(g\) and \([\Sigma]\) a \(d\)-dimensional integral homology class. If no area-minimizing integral current in \([\Sigma]\) can be calibrated in \(g,\) then there exists an open set \(\Omega_{[\Sigma]}\) containing \(g\), so that for any metric \(g^{\prime}\in\Omega_{[\Sigma]},\) no area-minimizing integral current in \([\Sigma]\) can be calibrated by a flat cochain._
Proof.: By Lemma 3.8 if no area-minimizing integral current in \([\Sigma]\) in \(g\) can be calibrated, we must have
\[\inf_{\text{real flat chain }T\in[\Sigma]}\mathbf{M}_{g}(T)<\inf_{\text{ integral current }T\in[\Sigma]}\mathbf{M}_{g}(T).\]
However, by Lemma 3.7 and the remark following it, both sides of the inequality are continuous functions of the metric. Hence the strict inequality persists for all metrics sufficiently close to \(g\), which by Lemma 3.8 finishes the proof.
### Miscellaneous facts
**Lemma 3.10**.: _The singular set of any real flat cochain calibrating an area-minimizing hypersurface contains the singular set of all area-minimizing hypersurfaces in the same class._
Proof.: Theorem 4.2 in [3].
**Lemma 3.11**.: _Consider \(\mathbb{R}^{2n}=\mathbb{C}^{n},\) with standard metric and coordinate_
\[(x_{1},y_{1},\cdots,x_{n},y_{n}).\]
_Define_
\[\psi=\sum_{j}a_{j}Q_{j}^{*},\]
_with \(a_{j}\) real numbers, \(Q_{j}\) holomorphic \(l\)-dimensional planes spanned by the coordinate axes, and \(*\) the musical isomorphism. Then we have_
\[\operatorname{comass}\psi=\max_{j}|a_{j}|.\]
_Moreover, if \(|a_{j}|<1\) for all \(j\geq 2\) and \(a_{1}=1,\) then \(\psi\) calibrates only the plane \(Q_{1}.\)_
Proof.: The comass equality is just Theorem 2.2 in [14]. To show the claim about calibrating only \(Q_{1},\) we induct on \(l.\) If \(l=1,\) we write
\[\psi=Q_{1}^{*}+\sum_{j\geq 2}a_{j}Q_{j}^{*}.\]
By Theorem 2.2 in [14], the sum part has comass \(\max_{j\geq 2}|a_{j}|<1.\) Thus, only alternative (i) of Lemma 2.1 of [14] can happen, and we deduce that the comass \(1\) is only achieved at \(Q_{1}.\) Suppose this is true for \(l=1,\cdots,k-1.\) Now suppose \(l=k.\) Without loss of generality, we can suppose that \(Q_{1}\) is spanned by the \(x_{1},y_{1},\cdots,x_{l},y_{l}\) coordinate axes. We sort out the situation into two possible cases.
One is that \(Q_{1}\) does not intersect any other \(Q_{j}.\) Then, arguing as in the \(l=1\) case, we are done.
The other is that \(Q_{1}\) does intersect some other \(Q_{j}\). Without loss of generality, suppose \(Q_{1}\cap Q_{2}\) includes the span of \(x_{1},y_{1}\) axes. Then we can factor out all \(dx_{1}dy_{1}\) terms, i.e., writing
\[\psi=dx_{1}dy_{1}\wedge(dx_{2}dy_{2}\wedge\cdots\wedge dx_{l}dy_{l}+\sum_{j}a_{j}\tilde{Q_{j}}^{*})+\sum_{m}a_{m}(Q_{m}^{\prime})^{*}.\]
Here \(\tilde{Q_{j}}\) are \((l-1)\)-dimensional complex planes spanned by coordinate axes not including \(x_{1},y_{1},\) and the \(Q_{m}^{\prime}\) are \(l\)-dimensional complex planes spanned by coordinate axes not including \(x_{1},y_{1}.\)
The second sum has comass smaller than \(1\) by Theorem 2.2 of [14]. Thus by Lemma 2.1 of [14], only alternative (i) of that lemma can happen. In other words, if \(Q\) is a plane calibrated by \(\psi,\) then it is calibrated by \(dx_{1}dy_{1}\wedge(dx_{2}dy_{2}\wedge\cdots\wedge dx_{l}dy_{l}+\sum_{j}a_{j}\tilde{Q_{j}}^{*}).\) By Proposition 7.10 in [23], this implies that \(Q\) is the product of the \(x_{1}y_{1}\)-plane and a plane \(Q^{\prime}\) calibrated by \(dx_{2}dy_{2}\wedge\cdots\wedge dx_{l}dy_{l}+\sum_{j}a_{j}\tilde{Q_{j}}^{*}.\) By the inductive hypothesis, we are done.
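As a purely numerical illustration of the comass statement (a crude Monte Carlo sketch, not part of the proof), consider \(\psi=a_{1}\,dx_{1}\wedge dy_{1}+a_{2}\,dx_{2}\wedge dy_{2}\) on \(\mathbb{R}^{4}=\mathbb{C}^{2}\). The helper `comass_estimate` below is hypothetical and written only for this illustration; the sampled maximum approaches the true comass \(\max_{j}|a_{j}|\) from below as the number of samples grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def comass_estimate(a1, a2, n_samples=200_000):
    """Crude Monte Carlo lower estimate of the comass of
    psi = a1 dx1^dy1 + a2 dx2^dy2 on R^4, coordinates (x1, y1, x2, y2)."""
    # Antisymmetric coefficient matrix A of psi, so psi(u, v) = u^T A v.
    A = np.zeros((4, 4))
    A[0, 1], A[1, 0] = a1, -a1
    A[2, 3], A[3, 2] = a2, -a2
    best = 0.0
    for _ in range(n_samples):
        # Random orthonormal 2-frame (e1, e2) via QR of a Gaussian 4x2 matrix;
        # e1 ^ e2 is then a unit simple 2-vector, i.e. an oriented 2-plane.
        Q, _ = np.linalg.qr(rng.standard_normal((4, 2)))
        e1, e2 = Q[:, 0], Q[:, 1]
        best = max(best, abs(e1 @ A @ e2))
    return best

# Expected to be close to max(|a1|, |a2|) = 1.0 by the comass formula above.
print(comass_estimate(1.0, 0.4))
```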
### Zhang's constructions
**Lemma 3.12**.: _With the same assumptions as in Theorem 1.1, suppose \([\Sigma]\) is non-torsion and admits a connected embedded representative \(N\). Let \(m\) be any positive integer. Then there exists a smooth Riemannian metric \(\hat{g}\) on \(M\), so that_
* _there is a smooth neighborhood_ \(W\) _around_ \(N,\) _and a retract_ \(\pi_{N}:W\to N,\) _so that_ \(\pi_{N}\) _is area-non-increasing in_ \(\hat{g},\)__
* \(N\) _has area precisely_ \(1,\)__
* _any stationary varifold in_ \(M\) _whose support is not contained in_ \(W\) _has area larger than_ \(t,\) _where_ \(t\) _is any prescribed positive real number,_
* \(N\) _is calibrated by a smooth form in_ \(M.\)__
Proof.: Equip \(M\) with an arbitrary smooth metric. Take \(B_{r}(N)\) to be a tubular neighborhood of \(N\). By Lemma 3.4 and Remark 3.5 of [44], we get a retract \(\pi_{N}\) and a smooth metric \(\overline{g}\) on \(B_{r}(N)\) so that \(\pi_{N}^{*}(dvol_{N})\) has comass at most \(1.\) Applying Lemma 5.1 of [28], we see that \(\pi_{N}\) is area-non-increasing.
Since \([N]\) is a non-torsion class, by Section 3.3 of [44], there exists a smooth metric \(\tilde{g}\) and a closed \(d\)-dimensional calibration form \(\Phi\), so that \(\Phi\) calibrates \(N.\) Moreover, \(\Phi\) and \(\tilde{g}\) restricted to \(B_{\frac{3}{5}r}(N)\) (here \(B\) is measured in \(\tilde{g}\)) equal \(\pi_{N}^{*}(dvol_{N})\) and \(\overline{g},\) respectively.
Now take \(W=B_{\frac{3}{5}r}(N)\) and apply the proof of Theorem 4.1 of [45] to get a new metric \(\hat{g}\). A constant scaling of \(\Phi\) remains a calibration that calibrates \(N.\) Moreover, any stationary varifold not contained in \(W\) has area larger than \(t\) times the area of \(N,\) with \(t\) an arbitrarily large constant. Just rescale the metric so that \(N\) has area \(1\). We are done.
## 4. Proof of Theorem 1.5
By Lemma 2.1, there exists a sequence of integers larger than \(1,\)\(k_{j}\to\infty,\) so that \(k_{j}[\Sigma]\) can be represented by a smooth connected embedded orientable submanifold. The proof for different \(k_{j}\) and \(m\) is the same, so we fix one \(j\) and one \(m\) and suppose the connected embedded oriented representative is \(N.\)
By Lemma 3.12 and Lemma 3.8, there exists a smooth metric \(\hat{g},\) so that
1. there is an area-non-increasing retract \(\pi_{N}:W\to N\) from a neighborhood \(W\) of \(N\) onto \(N,\)
2. \(N\) realizes \(\min\mathbf{M}_{\mathbb{Z}}^{\hat{g}}(k_{j}[\Sigma]),\) and has area \(1,\)
3. any stationary varifold not contained in \(W\) has area larger than \(m.\)
4. \(\min\mathbf{M}_{\mathbb{Z}}^{\hat{g}}(k_{j}[\Sigma])=\min\mathbf{M}_{\mathbb{ R}}^{\hat{g}}(k_{j}[\Sigma]).\)
There exists a largest integer \(l,\) so that \([\Sigma]=l[\Sigma_{0}]\) for some non-torsion integral homology class \([\Sigma_{0}].\)
Write \(k=k_{j}\) for brevity and consider the mod \((mk-1)l\) homology, in other words, homology with coefficients in \(\mathbb{Z}/(mk-1)l\mathbb{Z}.\) By the universal coefficient theorem [25], \(H_{d}(M)\otimes\mathbb{Z}/(mk-1)l\mathbb{Z}\) canonically injects into \(H_{d}(M,\mathbb{Z}/(mk-1)l\mathbb{Z})\). Use \(I\) to denote this map.
Then
\[mI(k[\Sigma])=mklI([\Sigma_{0}])=lI([\Sigma_{0}])=I([\Sigma]).\]
Moreover, since \(m\geq 3\) and \(k\geq 2,\) the classes \(I(k[\Sigma])\) and \(I([\Sigma])\) are both non-zero.
Now let \(T\) be a mass-minimizing mod \((mk-1)l\) current in the class \(I([\Sigma]).\) Then \(T\) is non-trivial. By bullet point (3) above, if \(T\) is not contained in \(W\), then we are done. If \(T\) is contained in \(W\), then a straightforward adaptation of Theorem 2.2. of [33] and the fact that \(m\leq\frac{mk-1}{2}\) show that \(mN\) is an area-minimizing current mod \((mk-1)l\) in \(W.\) However, the homology of \(W\) is generated by \(N,\) so we deduce that \(T\) must be homologous to \(mN\) in \(W\) mod \((mk-1)l.\) This implies that \(T\) has the same mass as \(mN,\) namely \(m.\) Since mod \((mk-1)l\) minimizing adds
more competitors, we deduce that any area-minimizing integral current in the class \([\Sigma]\) must have area at least that of \(T.\) We are done.
## 5. Proof of Theorem 1.1,1.4,1.6
### Proof of Theorem 1.4
By Theorem 1.5, for \(k_{1}\geq 2,\) and any integer \(m\geq 3,\) there exists a smooth Riemannian metric \(g_{1,m}\) so that
\[\min\mathbf{M}_{\mathbb{Z}}^{g_{1,m}}(k_{1}[\Sigma])=\min\mathbf{M}_{\mathbb{ R}}^{g_{1,m}}(k_{1}[\Sigma]),\frac{\min\mathbf{M}_{\mathbb{Z}}^{g_{1,m}}([ \Sigma])}{\min\mathbf{M}_{\mathbb{Z}}^{g_{1,m}}(k_{1}[\Sigma])}>m.\]
This implies that
\[\min\mathbf{M}_{\mathbb{R}}^{g_{1,m}}([\Sigma])=\frac{1}{k_{1}}\min\mathbf{M} _{\mathbb{R}}^{g_{1,m}}(k_{1}[\Sigma])=\frac{1}{k_{1}}\min\mathbf{M}_{\mathbb{ Z}}^{g_{1,m}}(k_{1}[\Sigma])<\frac{1}{m}\min\mathbf{M}_{\mathbb{Z}}^{g_{1,m}}([ \Sigma]).\]
Let \(m\to\infty.\) We are done.
### Proof of Theorem 1.1
First of all, by Lemma 3.4, we only have to consider non-torsion classes, i.e., \(n[\Sigma]\neq 0\) for any \(n>0.\) Then the assertion follows directly from Lemma 3.8, Lemma 3.9 and Theorem 1.4.
### Proof of Theorem 1.6
If \([\Sigma]\) is \(n\)-torsion, then we have \(k[\Sigma]=-(n-k)[\Sigma].\) Note that the multiplication by \(-1\) map sends integral currents in \([\Sigma]\) bijectively to integral currents in \(-[\Sigma]\). On the other hand, for any integer \(n,\) we always have \(\mathbf{M}(nT)=|n|\mathbf{M}(T)\) directly from definition. Thus, we have
\[\min\mathbf{M}_{\mathbb{Z}}^{g}(k[\Sigma])= \min\mathbf{M}_{\mathbb{Z}}^{g}(-(n-k)[\Sigma])=\min\mathbf{M}_{ \mathbb{Z}}^{g}((n-k)[\Sigma]),\] \[\min\mathbf{M}_{\mathbb{Z}}^{g}(k[\Sigma])= \inf_{S\text{ homologous to }k[\Sigma]\text{ over }\mathbb{Z}}\mathbf{M}(S)\leq\inf_{S\text{ homologous to }[\Sigma]\text{ over }\mathbb{Z}}\mathbf{M}(kS)\] \[= k\min\mathbf{M}_{\mathbb{Z}}^{g}([\Sigma]),\]
and similarly
\[\min\mathbf{M}_{\mathbb{Z}}^{g}((n-k)[\Sigma])\leq(n-k)\min\mathbf{M}_{ \mathbb{Z}}^{g}([\Sigma]).\]
Combining the three gives
\[\frac{\min\mathbf{M}_{\mathbb{Z}}^{g}(k[\Sigma])}{\min\mathbf{M}_{\mathbb{Z}} ^{g}([\Sigma])}\leq\min\{k,n-k\}. \tag{5.1}\]
On the other hand, if \(kl\equiv 1(\mod n),\) then
\[l(k[\Sigma])=[\Sigma].\]
Apply (5.1) with \(k[\Sigma]\) replacing \([\Sigma],\)\(l\) replacing \(k\), \([\Sigma]\) replacing \(k[\Sigma]\), we are done.
## 6. Proof of Theorem 1.2
Note that \(2[\Sigma]\) admits a smoothly embedded connected representative subvariety \(N.\) This can be done in many ways. For instance, suppose \([\Sigma]\) is \(k\) times the generator of the \(d\)-dimensional homology of \(\mathbb{CP}^{\frac{d+c}{2}}\). Then consider the inclusion of subspaces \(\mathbb{CP}^{\frac{d}{2}+1}\subset\mathbb{CP}^{\frac{d+c}{2}}.\) It sends any degree \(2k\) projective hypersurface of the former into the class \(2[\Sigma]\) of the latter, e.g., by Mayer-Vietoris. A connected degree \(2k\) hypersurface in \(\mathbb{CP}^{\frac{d}{2}+1}\) can be constructed explicitly, e.g., the Fermat hypersurfaces.
Now if we consider the corresponding affine variety back in \(\mathbb{C}^{\frac{d+c}{2}+1}\), we have a complex algebraic cone \(C\) corresponding to \(N.\) Let \(\pi_{C}\) be the nearest distance projection onto \(C.\) Then by construction, the orthogonal complement to \(\ker\pi_{C}\) is a smooth distribution of complex \(\frac{d}{2}\) dimensional planes. Use
\[\tilde{\Gamma}\]
to denote this distribution on \(\mathbb{C}^{\frac{d+c}{2}+1}\). The distribution is invariant under the diagonal action of \(U(1)\) and under scalings by real parameters, since \(C\) is invariant under both and the metric on \(\mathbb{R}^{n}\) is preserved up to scalings. Note that the complex structure and the Fubini-Study metric on complex projective space can be defined via the projection from the unit sphere, see e.g. Section 6.5 of [21].
Thus by projecting onto the complex projective space, we deduce a smooth distribution
\[\Gamma\]
of complex \(\frac{d}{2}\)-dimensional planes in a tubular neighborhood \(B_{2r}(N)\) of \(N,\) so that \(\Gamma|_{N}\) equals the tangent space to \(N.\)
### New metric
We need to construct a sequence of new metrics \(\{g_{j}\}.\) Here the notations are as follows.
* \(g_{FS}\) is the Fubini-Study metric,
* \(g_{\Gamma}\) is \(g_{FS}\) restricted to \(\Gamma,\)
* \(g_{\Gamma^{\perp}}\) is \(g_{FS}\) restricted to the orthogonal complement to \(\Gamma,\)
* \(\mathrm{dist}(p,N)\) is the distance of \(p\) to \(N,\)
* \(\beta\) is an even function monotonically increasing on \([0,\infty),\) equal to \((1+\frac{1}{j})\) on \([\frac{4}{9}r,\infty)\) and equal to \(1\) only at \(0.\)
**Definition 6.1**.: \[g_{j}(p)=\begin{cases}(1+\frac{1}{j})g_{FS}(p)&\text{ on }B_{r}(N)^{\complement},\\ (1+\frac{1}{j})g_{\Gamma^{\perp}}(p)+\beta(\mathrm{dist}(p,N)^{2})g_{\Gamma}(p)&\text{ on }B_{r}(N).\end{cases}\]
It is straightforward to verify that \(g_{j}\) converges smoothly to \(g_{FS}\) as \(j\to\infty.\)
**Lemma 6.1**.: _The normalized \(\frac{d}{2}\)-th power of the Kahler form, \(\omega_{d/2}=\frac{1}{(d/2)!}\omega^{d/2},\) is still a calibration in \(g_{j}\) and calibrates only \(N\) in \(2[\Sigma].\)_
Proof.: \(\omega_{d/2}\) is a calibration on \(B_{r}(N)^{\complement}\) with comass \((1+\frac{1}{j})^{-\frac{d}{2}}\) by Theorem 2.2 of [14]. For any point \(p\in B_{r}(N),\) the tangent space \(\mathbf{T}_{p}\) to \(\mathbb{CP}^{\frac{d+c}{2}}\) admits a splitting into orthogonal complex subspaces \(\Gamma\) and \(\Gamma^{\perp}.\) In \(g_{FS},\) take an orthonormal basis \(\{e_{1},Je_{1},\cdots,e_{d/2},Je_{d/2}\}\) of \(\Gamma\) and \(\{f_{1},Jf_{1},\cdots,f_{c/2},Jf_{c/2}\}\) of \(\Gamma^{\perp}\). Then we have
\[\omega_{d/2}=\frac{1}{(\frac{d}{2})!}\big{(}\sum_{l}e_{l}^{*}\wedge(Je_{l})^ {*}+\sum_{m}f_{m}^{*}\wedge(Jf_{m})^{*}\big{)}^{d/2},\]
where \(*\) denotes the musical isomorphism in \(g_{FS}.\) Now we reinterpret the above expression in the rescaled orthonormal basis in \(g_{j}\). On \(B_{r}(N),\) we have
\[\omega_{d/2}= \frac{1}{(\frac{d}{2})!}\Big{(}\beta^{-1}\sum_{l}\beta^{1/2}e_{l }^{*}\wedge\beta^{1/2}(Je_{l})^{*}\] \[+(1+\frac{1}{j})^{-1}\sum_{m}(1+\frac{1}{j})^{1/2}f_{m}^{*}\wedge (1+\frac{1}{j})^{1/2}(Jf_{m})^{*})^{d/2}.\]
By Lemma 3.11, we deduce that
\[\mathrm{comass}_{g_{j}}\,\omega_{d/2}=\max\{\beta^{-\frac{d}{2}},(1+\frac{1} {j})^{-\frac{d}{2}}\},\]
which is clearly at most \(1.\)
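To spell out the elementary maximization used in the last step: the monomials in the expansion of \(\omega_{d/2}\) in the \(g_{j}\)-orthonormal coframe carry coefficients \(\beta^{-a}(1+\frac{1}{j})^{-b}\) with \(a+b=\frac{d}{2}\) and \(\beta,1+\frac{1}{j}\geq 1\), so
\[\max_{a+b=d/2}\beta^{-a}\Big(1+\tfrac{1}{j}\Big)^{-b}=\Big(\min\big\{\beta,1+\tfrac{1}{j}\big\}\Big)^{-d/2}=\max\Big\{\beta^{-d/2},\Big(1+\tfrac{1}{j}\Big)^{-d/2}\Big\}.\]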
Lemma 3.11 shows that \(\omega_{d/2}(Q)=1\) for a unit simple \(d\)-vector \(Q\) in \(B_{r}(N)\) if and only if \(Q\in\Gamma\) and \(\beta=1.\) This implies \(Q\) is tangent to \(N\). By the constancy theorem (4.1.7 in [15]), integer multiples of \(N\) are the unique integral currents calibrated by \(\omega_{d/2}.\) Since \(2[\Sigma]\) is not a torsion class, we deduce that \(\omega_{d/2}\) only calibrates \(N\) in \(2[\Sigma].\)
Since \(N\) is irreducible by Lemma 2.10 in [28], arguing as in the previous sections shows that no area-minimizing current in \([\Sigma]\) can be calibrated in \(g_{j}.\) By Lemma 3.9, we are done.
## 7. Proof of Theorem 1.3
By Lemma 3.10, it suffices to construct a metric with at least one smooth minimizer and at least one singular minimizer.
By the representation theorem of [30], there exists a primitive class \([\gamma],\) so that \([\gamma]\) can be represented by a smoothly embedded minimal hypersurface \(N\) and \([\Sigma]=k[\gamma]\) for some \(k>0.\)
Since the normal bundle to \(N\) is trivial, we can flow \(N\) to one side and get another embedded representative \(N^{\prime}.\) Let \(C\) be any \(7\)-dimensional area-minimizing hypercone in \(\mathbb{R}^{8}.\) Consider the following hypersurface
\[\sigma(C)=(C\times\mathbb{R}^{(d-7)+1})\cap S^{8+(d-7)}\subset\mathbb{R}^{8+(d -7)+1}.\]
By Lemma 4.1 in [28], \(\sigma(C)\) is smooth outside of \(0\times S^{d-7}\subset\mathbb{R}^{8}\times\mathbb{R}^{(d-7)+1},\) and near the singular set \(\sigma(C)\) can be sent to truncated \(C\) times \(S^{d-7}\) by a diffeomorphism. Note that by Theorem 2.1 in [22], there is a foliation of \(\mathbb{R}^{8}\) by area-minimizing hypersurfaces with \(C\) as one leaf. Let \(\nu\) be the unit normal vector field of the foliation; then \(\phi=*\nu^{*}\) is a calibration form with an isolated singularity at \(0.\) Here the first \(*\) is the Hodge star and the second \(*\) is the musical isomorphism. By taking \(\phi\wedge dvol_{S^{d-7}}\) and taking a product metric, we get a calibration form that calibrates \(\sigma(C)\) near its singular set.
Now pick a point \(p\) in \(N^{\prime}\). Then we can embed \(\sigma(C)\subset S^{8+(d-7)}-\text{pt}\) into the upper half ball of \(B_{r}(p)\) for \(r<\frac{1}{2}d(N,N^{\prime}).\) Then we can do a connected sum to connect \(\sigma(C)\) to \(N^{\prime}.\) Note that \(N^{\prime}\#\sigma(C)\) still represents the same homology class as \(N.\)
By Lemmas 2.9, 2.8, and 2.7 (in this order) of [28], there exists a smooth metric \(g,\) so that both \(N^{\prime}\#\sigma(C)\) and \(N\) are homologically area-minimizing in \([\gamma],\) the primitive class associated with \([\Sigma].\)
By Section 5.10 of [17] and Lemma 3.5, we deduce that \(k(N^{\prime}\#\sigma(C))\) and \(kN\) are both mass-minimizing real flat chains in \([\Sigma].\) By Lemma 3.10, we are done.
|
2307.02131 | Beyond Known Reality: Exploiting Counterfactual Explanations for Medical
Research | The field of explainability in artificial intelligence (AI) has witnessed a
growing number of studies and increasing scholarly interest. However, the lack
of human-friendly and individual interpretations in explaining the outcomes of
machine learning algorithms has significantly hindered the acceptance of these
methods by clinicians in their research and clinical practice. To address this
issue, our study uses counterfactual explanations to explore the applicability
of "what if?" scenarios in medical research. Our aim is to expand our
understanding of magnetic resonance imaging (MRI) features used for diagnosing
pediatric posterior fossa brain tumors beyond existing boundaries. In our case
study, the proposed concept provides a novel way to examine alternative
decision-making scenarios that offer personalized and context-specific
insights, enabling the validation of predictions and clarification of
variations under diverse circumstances. Additionally, we explore the potential
use of counterfactuals for data augmentation and evaluate their feasibility as
an alternative approach in our medical research case. The results demonstrate
the promising potential of using counterfactual explanations to enhance
acceptance of AI-driven methods in clinical research. | Toygar Tanyel, Serkan Ayvaz, Bilgin Keserci | 2023-07-05T09:14:09Z | http://arxiv.org/abs/2307.02131v5 | # Beyond Known Reality: Exploiting Counterfactual Explanations for Medical Research
###### Abstract
The field of explainability in artificial intelligence (AI) has witnessed a growing number of studies and increasing scholarly interest. However, the lack of human-friendly and individual interpretations in explaining the outcomes of machine learning algorithms has significantly hindered the acceptance of these methods by clinicians in their research and clinical practice. To address this, our study employs counterfactual explanations to explore "what if?" scenarios in medical research, aiming to expand our understanding beyond existing boundaries on magnetic resonance imaging (MRI) features for diagnosing pediatric posterior fossa brain tumors. In our case study, the proposed concept provides a novel way to examine alternative decision-making scenarios that offer personalized and context-specific insights, enabling the validation of predictions and clarification of variations under diverse circumstances. Additionally, we explore the potential use of counterfactuals for data augmentation and evaluate their feasibility as an alternative approach in our medical research case. The results demonstrate the promising potential of using counterfactual explanations to enhance trust and acceptance of AI-driven methods in clinical research.
counterfactual explanations · posterior fossa pediatric brain tumors · magnetic resonance imaging · explainable artificial intelligence
## 1 Introduction
As we incorporate automated decision-making systems into the real world, explainability and accountability questions become increasingly important [1]. In some fields, such as medicine and healthcare, ignoring or failing to address such a challenge can seriously limit the adoption of computer-based systems that rely on machine learning (ML) and computational intelligence methods for data analysis in real-world applications [2, 3, 4]. Previous research in eXplainable Artificial Intelligence (XAI) has primarily focused on developing techniques to interpret decisions made by black box ML models. For instance, widely used approaches such as local interpretable model-agnostic explanations (LIME) [5] and shapley additive explanations (SHAP) [6] offer attribution-based explanations to interpret ML models. These methods can assist computer scientists and ML experts in understanding the reasoning behind the predictions made by AI models. However, end-users, including clinicians and patients, may be more interested in understanding the practical implications of the ML model's predictions in relation to themselves, rather than solely focusing on how the models arrived at their predictions. For example, patients' primary concern lies not only in obtaining information about their illness but also in seeking guidance on how to regain their health. Understanding the decision-making process of either the doctor or the ML model is of lesser importance to them.
Counterfactual explanations [7] are a form of model-agnostic interpretation technique that identifies the minimal changes needed in input features to yield a different output, aligned with a specific desired outcome. This approach holds
promise in enhancing the interpretability and accountability of AI models by offering deeper insights into their decision-making processes. Our proposed approach aims to provide enhanced transparency regarding the relationship between MRI features, moving beyond generating actionable outcomes solely for individual patients. Through counterfactual explanations, previously unseen decisions within the decision space can be brought to light. Numerous questions can be explored, such as how to determine the modifications required to transform a patient's diagnosis from one tumor subtype to another. Initially, posing such a question may seem nonsensical and illogical since an individual's actual tumor type cannot be magically altered. However, considering the challenge of distinguishing these two tumor types in clinical settings, asking such a question can effectively demonstrate which features are more informative in differentiating tumor types. Counterfactual explanations enable us to identify the characteristics that distinguish two patient types with the smallest changes in features. Consequently, a deeper understanding of the interactions between MRI features and tumors can be gained; unveiling previously undisclosed outcomes that may be concealed in existing ML studies.
Furthermore, we have identified a potential contribution to clinical practice whereby a new patient with only MRI data available can have their tumor type estimated using a counterfactual approach, prior to receiving histopathological results. Since there is no prior label available for the patient, they are given an "unknown" label and the counterfactual approach is used for each tumor type, allowing estimation of the tumor type with the lowest distance and smallest change in features. While this approach shares similarities with ML, the crucial distinction lies in retaining information about the reasoning behind the estimated tumor type and its corresponding feature changes. This, in turn, can enhance our understanding and the use of AI models in clinical practice.
Last but not least, in situations where the acquisition of data is limited or not possible, various data augmentation methods have been developed to enhance the performance of ML and related applications [8, 9, 10]. However, these methods also give rise to additional issues while fulfilling their intended purpose, such as introducing biased shifts in data distribution. To address this issue, we employed counterfactuals generated from different spaces in order to balance the data by maximizing its diversity, and subsequently reported the results for different scenarios.
### Brief Introduction to Posterior Fossa Pediatric Brain Tumors
Brain tumors represent the predominant form of cancer in children, constituting more than 25% of all cases. Specifically, the posterior fossa (PF) region comprises approximately 60-70% of these tumors and encompasses subtypes such as medulloblastoma (MB), ependymoma (EP), pilocytic astrocytoma (PA), and brainstem glioma (BG).
Clinical information obtained from radiological interpretations and histopathological analysis of tumors plays a crucial role in the diagnosis, prognosis, and treatment of PF tumors in pediatric patients. Histopathological evaluation is essential for the initial diagnosis, providing valuable insights into patient prognosis and guiding clinical and therapeutic decisions. It serves as the established standard for differentiating between various PF tumor types. However, performing biopsies of different PF brain tumors carries significant risks of morbidity and mortality, in addition to being costly. Recent advancements in characterizing tumor subtypes using cross-sectional diagnostic imaging have shown promise in predicting differential survival rates and treatment responses. This progress holds significant potential for future treatment stratification in PF tumors. Hence, the development of a novel non-invasive diagnostic tool is of utmost importance for precisely classifying tumor type and grade, as well as aiding in treatment planning.
Magnetic resonance imaging (MRI) has emerged as the leading non-invasive imaging modality. It offers inherent advantages such as excellent soft-tissue contrast while avoiding the potential hazards of ionizing radiation. Conventional MRI protocols, including T1-weighted (T1W), T2-weighted (T2W), and fluid-attenuated inversion recovery (FLAIR) sequences, have shown promising results in distinguishing various PF tumor types in pediatric patients [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29]. Furthermore, diffusion-weighted imaging (DWI) combined with apparent diffusion coefficient (ADC) mapping enables the assessment of physiological characteristics. It facilitates the differentiation of low- and high-grade tumors, as well as their distinct subtypes [30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43].
## 2 Material & Methods
### Ethics Statement and Patient Characteristics
This prospective study (Ref: 632 QP-ND2 dated 12 May 2019) was carried out in both Radiology and Neurosurgery departments, and was approved by the Institutional Review Board in accordance with the 1964 Helsinki declaration. Written informed consent was obtained from authorized guardians of patients prior to the MRI procedure. Our study comprised a cohort of 112 pediatric patients diagnosed with PF tumors, including 42 with MB, 25 with PA, 34 with BG, and 11 with EP. All BG patients were confirmed based on full agreement between neuroradiologists and neurosurgeons, whereas the remaining MB, PA, and EP patients underwent either surgery or biopsy for histopathological confirmation.
### Data Acquisition and Assessment of MRI Features
For all patients, MRI exams including T1W, T2W, FLAIR, DWI (b values: 0 and 1000) with ADC, and contrast-enhanced T1W (CE-T1) sequences with macrocyclic gadolinium-based contrast enhancement (0.1 ml/kg Gadovist, Bayer, Germany, or 0.2 ml/kg Dotarem, Guerbet, France) were collected in the supine position using a 1.5 Tesla MRI scanner (Multiva, Philips, Best, the Netherlands).
The Medical Imaging Interaction Toolkit (German Cancer Research Center, Division of Medical Image Computing, Heidelberg, Germany) was utilized to measure the region of interest (ROI) of PF tumors and normal-appearing parenchyma and to subsequently assess the following MRI features: signal intensities (SIs) of T2, T1, FLAIR, T1CE, DWI, and ADC. Ratios between the PF tumor and parenchyma were calculated by dividing the SI of the tumor by the SI of the normal-appearing parenchyma for T2, T1, FLAIR, T1CE, DWI, and ADC. Additionally, ADC values were quantified for both the PF tumor and parenchyma on the ADC map using the MR Diffusion tool available in Philips IntelliSpace Portal, version 11 (Philips, Best, The Netherlands). It is worth noting that, prior to analysis, bias field correction was applied to every image to correct for nonuniform grayscale intensities in the MRI caused by field inhomogeneities.
### Standardization
Prior to ML training, the dataset was subjected to a standardization process using Python (version 3.9.13) with the Scikit-Learn library (version 1.0.2). This technique involved transforming the data to have a mean of zero and a standard deviation of one. To standardize all numerical attributes, the Scikit-Learn StandardScaler class was employed, which subtracted the mean and scaled the values to unit variance, ensuring the data was in a standardized format. To determine the standard score of a sample \(x_{i}\), the following formula is used:
\[z=\frac{x_{i}-\mu}{\sigma}, \tag{1}\]
where, \(\mu\) represents the mean of the training samples, and \(\sigma\) represents their standard deviation.
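As a rough illustration, this standardization step can be reproduced with a few lines of scikit-learn; the feature values below are made up, and only the column names echo the MRI features described above.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Illustrative values only; in the study each row is one patient and each
# column one of the MRI features described in Section 2.2.
df = pd.DataFrame({
    "T2_Tumor": [1286.0, 1534.0, 1778.0, 1709.0],
    "ADC_Tumor": [1.009, 0.54, 1.879, 1.59],
})

scaler = StandardScaler()                      # z = (x - mean) / std, per feature
df[df.columns] = scaler.fit_transform(df)
print(df.round(3))
```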
### Distance Calculation
When using counterfactuals as classifiers, the significant scale difference between the actual values of the MRI features in Tables 1 and 2 makes it meaningless to compare distances directly. To address this issue, we disregarded the unchanged values indicated by '-', rescaled all available values to a standard scale, and then reintroduced them. Subsequently, we computed the Euclidean distance between the generated counterfactual values and the current factual (i.e., the new patient). The tumor type is then assigned by minimizing this distance, since the smallest distance corresponds to the least dissimilarity (Table 3).
The Euclidean distance formula can be represented as following:
\[\mathrm{Distance}=\sqrt{\sum_{i=1}^{n}(x_{i}-y_{i})^{2}} \tag{2}\]
In the formula, \(x_{i}\) and \(y_{i}\) represent the values of the corresponding features in the current row and baseline row, respectively. The summation symbol \(\sum\) accumulates the squared differences over the features, the square root is then applied to obtain the Euclidean distance, and \(n\) denotes the number of features (columns) in the dataset.
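A minimal sketch of this assignment rule is given below; it assumes the factual and counterfactual feature values are already on a common scale and that unchanged features are marked with '-' exactly as in Tables 1-3. The column names and numbers are illustrative.

```python
import numpy as np
import pandas as pd

def cf_distance(factual: pd.Series, counterfactual: pd.Series) -> float:
    """Euclidean distance where '-' entries (unchanged features) contribute zero."""
    cf = counterfactual.replace("-", np.nan).astype(float).fillna(factual)
    return float(np.sqrt(((factual - cf) ** 2).sum()))

# Illustrative, already-scaled values for a new patient and two candidate counterfactuals.
factual = pd.Series({"T2_T": -0.58, "ADC_T": -0.82, "T1CE_T": -0.80})
cf_by_type = {
    "MB": pd.Series({"T2_T": "-", "ADC_T": "-", "T1CE_T": 0.84}),
    "EP": pd.Series({"T2_T": -0.23, "ADC_T": "-", "T1CE_T": "-"}),
}

distances = {t: cf_distance(factual, cf) for t, cf in cf_by_type.items()}
predicted = min(distances, key=distances.get)   # tumor type with the least dissimilarity
print(distances, "->", predicted)
```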
### Statistical Analysis
The statistical analysis was conducted using the t-test from the scipy library (version 1.10.1). A two-tailed p-value of <0.05 was considered statistically significant.
The analysis was performed as follows: First, the analysis involved assessing whether the counterfactuals generated by changing the tumor type from \(\mathcal{X}\) to \(\mathcal{Y}\) underwent a statistically significant change (dependent t-test). Second, it involved analyzing whether the counterfactuals generated by changing the tumor type from \(\mathcal{X}\) to \(\mathcal{Y}\) exhibited significant similarity to the original (factual) patients with tumor type \(\mathcal{Y}\) that we previously had (Welch's t-test).
Five counterfactuals were generated for each patient transition from tumor type \(\mathcal{X}\) to \(\mathcal{Y}\). The dependent and independent t-tests treated the generated counterfactuals in different ways. When measuring how different the newly generated data are from the original data, we created more samples than our original sample size, so the equal-length requirement of the dependent (paired) analysis could not be satisfied directly. To address this, we designed a different analysis approach: for each counterfactual, the data of the corresponding factual patient was taken as the baseline, and a significance test was then performed on the old versus new values of the five features that changed the most in that counterfactual.
In the case of independent analysis, all generated counterfactuals were independently tested by including all patients present in the real data for each of the three features that underwent the most significant changes. The corresponding values for these three features were tested independently.
In summary, the fundamental difference between the two tests can be considered as being patient-based and feature-based in nature.
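For concreteness, the two tests map onto scipy as follows; the arrays below are illustrative stand-ins for the per-patient feature values used in the study.

```python
import numpy as np
from scipy import stats

# Paired (dependent) test: one feature, same patients before (factual) and
# after (counterfactual) the X -> Y transformation.
before = np.array([1286.0, 1534.0, 1778.0, 1709.0])
after = np.array([1423.2, 1490.0, 913.6, 860.3])
t_dep, p_dep = stats.ttest_rel(before, after)

# Welch's (independent) test: counterfactual values of one feature versus the
# values of real patients of the target tumor type (unequal sizes and variances).
cf_values = np.array([1423.2, 913.6, 860.3])
real_values = np.array([1286.0, 1311.0, 1175.0, 902.0])
t_ind, p_ind = stats.ttest_ind(cf_values, real_values, equal_var=False)

print(f"paired p = {p_dep:.4f}, Welch p = {p_ind:.4f}")  # two-tailed, alpha = 0.05
```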
### Distribution Plotting
To generate individual kernel density estimation (KDE) plots for each feature, we utilized the kdeplot function from the Seaborn package (version 0.11.2). By specifying a hue parameter (e.g., Tumor Type), we were able to incorporate a meaningful association using this method. Consequently, we transformed the default marginal plot into a layered KDE plot. This approach tackles the challenge of reconstructing the density function \(f\) using an independent and identically distributed (iid) sample \(x_{1},x_{2},...,x_{n}\) from the respective probability distribution.
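The layered KDE plots could be generated roughly as follows; the synthetic data below merely stands in for the real MRI feature table.

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ADC_Tumor": np.concatenate([rng.normal(0.7, 0.2, 40), rng.normal(1.7, 0.3, 40)]),
    "Tumor Type": ["MB"] * 40 + ["PA"] * 40,    # hue variable, as in Fig. 3
})

sns.kdeplot(data=df, x="ADC_Tumor", hue="Tumor Type", fill=True, common_norm=False)
plt.tight_layout()
plt.show()
```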
### Machine Learning
To reduce overfitting and convergence issues of the counterfactuals, especially for EP, we used fewer patients for this task: 25 patients each from MB, PA, and BG, and 11 patients from EP. For testing, to ensure the reliability of our ML models, particularly with a small dataset, we conducted five runs using stratified random sampling based on tumor type, with 55% of patients for training and 45% for testing.
Using nine ML models, including support vector machine (SVM), adaboost (ADA), logistic regression (LR), random forest classifier (RF), decision tree classifier (DT), gradient boosting classifier (GB), catboost classifier (CB), extreme gradient boosting classifier (XGB) and voting classifier (VOTING), we evaluated the models on the raw data with the outcomes prior to our counterfactual interpretations. CB and XGB were obtained from CatBoost version 1.1.1 and XGBoost version 1.5.1 libraries, respectively, while the other models were obtained from the Scikit-Learn library.
We assessed the performance of the models using precision, recall, and F1 score, which were calculated based on the counts of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). In order to ensure an accurate interpretation of the ML results, we opted not to balance the labels. Instead, we employed macro precision, macro recall, and macro F1 score metrics, which take into account the contributions of all labels equally. This approach enabled us to observe the genuine impact of the varying label frequencies, EP in this case.
The validation metrics used in ML are as follows:
\[\mathrm{Macro\,Precision}=\frac{1}{n}\sum_{i=1}^{n}\frac{\mathrm{TP}_{i}}{ \mathrm{TP}_{i}+\mathrm{FP}_{i}} \tag{3}\]
\[\mathrm{Macro\,Recall}=\frac{1}{n}\sum_{i=1}^{n}\frac{\mathrm{TP}_{i}}{ \mathrm{TP}_{i}+\mathrm{FN}_{i}} \tag{4}\]
\[\mathrm{Macro\,F1\,Score}=\frac{1}{n}\sum_{i=1}^{n}\frac{2\times\mathrm{TP}_{i }}{2\times\mathrm{TP}_{i}+\mathrm{FP}_{i}+\mathrm{FN}_{i}} \tag{5}\]
where \(n\) represents the total number of classes or categories.
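A compact sketch of the evaluation loop for one of the nine models (LR, the one ultimately retained) is shown below; the synthetic data replaces the real cohort, and the split ratio mirrors the 55%/45% protocol.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 12 MRI features and 4 tumor classes.
X, y = make_classification(n_samples=112, n_features=12, n_informative=6,
                           n_classes=4, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.45, stratify=y, random_state=0)   # stratified by tumor type

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Macro averaging weights the four classes equally, matching Eqs. (3)-(5).
precision, recall, f1, _ = precision_recall_fscore_support(
    y_test, y_pred, average="macro", zero_division=0)
print(f"macro precision {precision:.3f}, recall {recall:.3f}, F1 {f1:.3f}")
```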
## 3 Counterfactual Explanations
Given the challenges associated with local approximations, it is worthwhile to explore prior research in the "explanation sciences" to identify potential alternative strategies for generating reliable and practical post-hoc interpretations that benefit the stakeholders affected by algorithmic decisions [1, 44]. To create explanations that are understandable and useful for both experts and non-experts, it is logical to investigate theoretical and empirical studies that shed light on how humans provide and receive explanations [45]. Over the past few decades, the fields of philosophy of science and epistemology have shown increasing interest in theories related to counterfactual causality and contrastive explanations [44, 46, 47, 48, 49].
In philosophy, counterfactuals serve not only to assess the relationship between a mental state and reality, but also to determine whether a mental state can be considered as knowledge. The problem of identifying knowledge with justified true belief is complicated by various counterexamples, such as Gettier cases (1963) [50]. However, some scholars proposed additional conditions to address these counterexamples. This literature highlighted two significant counterfactual conditions:
**Sensitivity:** If \(\rho\) were false, \(\mathcal{S}\) would not believe that \(\rho\).
**Safety:** If \(\mathcal{S}\) were to believe that \(\rho\), \(\rho\) would not be false.
Both of these conditions express the notion that \(\mathcal{S}\)'s beliefs must be formed in a manner that is sensitive to the truthfulness of \(\rho\). Counterfactual semantics has been influenced by this idea in various ways, including the establishment of the non-equivalence of the two conditions, their clarification, and the resolution of potential counterexamples [51].
This concept has sparked a fresh wave of counterfactual analyses that employ new methodologies. Hitchcock [52, 53] and Woodward [54], for instance, constructed counterfactual analyses of causation using Bayesian networks (also known as "causal models") and structural equations. The basic idea of the analysis can be summarized as follows: "\(\mathcal{X}\) can be considered a cause of \(\mathcal{Y}\) only if there exists a path from \(\mathcal{X}\) to \(\mathcal{Y}\), and changing the value of \(\mathcal{X}\) alone results in a change in the value of \(\mathcal{Y}\)".
Ginsberg (1986) [55] initiated his discussion by outlining the potential significance of counterfactuals in artificial intelligence and summarizing the philosophical insights that have been drawn regarding them. Following this, Ginsberg provided a structured explanation of counterfactual implication and analyzed the challenges involved in executing it. Over time, numerous developments in the fields of artificial intelligence and cognitive science, including the Bayesian epistemology approach, have gone beyond what was previously envisioned by Ginsberg regarding the potential application of artificial intelligence and counterfactuals [45, 56, 57, 58, 59, 7]. Furthermore, Verma et al. [60] conducted a comprehensive review of the counterfactual literature, analyzing its utilization in over 350 research papers.
In recent times, there has been a growing interest in the concept of counterfactual explanations, which aim to provide alternative perturbations capable of changing the predictions made by a model. In simple terms, when given an input feature \(x\) and the corresponding output produced by an ML model \(f\), a counterfactual explanation involves modifying the input to generate a different output \(y\) using the same algorithm. To further explain this concept, Wachter et al. [7] introduce the following formulation in their proposal:
\[c=\arg\min_{c}\ell(f(c),y)+|x-c| \tag{6}\]
The initial component \(\ell\) of the formulation encourages the counterfactual \(c\) to deviate from the original prediction, aiming for a different outcome. Meanwhile, the second component ensures that the counterfactual remains in proximity to the original instance, thereby emphasizing the importance of maintaining similarity between the two.
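The following toy sketch mirrors the spirit of this formulation with a generic optimizer: a squared loss on the predicted probability of the desired class stands in for \(\ell\), an L1 term keeps the counterfactual near the original instance, and the data, model, and weight \(\lambda\) are purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LogisticRegression

# Toy binary model standing in for f.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
f = LogisticRegression().fit(X, y)

x = X[0]                                            # factual instance
target = 1 - int(f.predict(x.reshape(1, -1))[0])    # the "different output" y

def objective(c, lam=0.1):
    # First term pulls f(c) toward the target class; second term keeps c close to x.
    p_target = f.predict_proba(c.reshape(1, -1))[0, target]
    return (1.0 - p_target) ** 2 + lam * np.abs(x - c).sum()

res = minimize(objective, x0=x.copy(), method="Nelder-Mead")
print("factual class:", 1 - target,
      "counterfactual class:", int(f.predict(res.x.reshape(1, -1))[0]))
```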
### Generating Counterfactual Explanations
The argument of causality in counterfactuals applies to various situations that involve making decisions about an individual's future [61, 62, 63, 64, 65, 66, 67, 68, 69]. In many cases, if the received response is _negative_, it is not sufficient merely to learn that the response is _negative_; it is quite important to understand how the results can be improved or modified in the future without making significant and unrealistic changes to the data.
We propose that counterfactual explanations can effectively utilize factual knowledge obtained from MRI features. These features serve as differentiators for distinct tumors, enabling the differentiation between two tumors with minimal adjustments. In essence, this approach aims to identify the most distinguishing characteristics when comparing tumors. This methodology becomes especially valuable when conventional diagnostic methods struggle to differentiate between tumor types. By integrating interpretations and explanations into ML models, we have the potential to identify key features that contribute to accurate tumor classification and, ultimately, improve patient outcomes.
The concept of _data manifold_ proximity, as depicted in Fig. 1, is an important constraint that needs to be carefully addressed. It is important to have confidence in the credibility of a counterfactual explanation, which entails generating a set of features that bear resemblance to prior observations encountered by the classifier. If a counterfactual produces unrealistic features that diverge from the training data or disrupt the observed associations between features, it would be considered impractical and outside the norm established by the training data points [70]. Hence, it is essential to ensure that generated counterfactuals are realistic, closely aligning with the training data and preserving the observed feature associations. To address this issue, several methods, including constraint-based approaches, have been consciously integrated into algorithms. For instance, changing the "age" and "gender" parameters would be highly unreasonable; therefore, we specify this in the model and prevent counterfactual from deviating from reality. In this research, we impose restrictions on parenchyma features, which serve as a reference point for tissue characteristics. We integrated
parenchyma features during the training phase, but they were constrained from being modified during the counterfactual generation process.
The Diverse Counterfactual Explanations (DiCE) library [71] provides us with easy-to-use and modifiable code to accomplish this task. The main concept is to frame the process of finding these explanations as an optimization problem, similar to how adversarial examples are found (e.g., DeepFool [72]). However, the crucial distinction is that for explanations, we require perturbations that not only alter the output of the ML model but are also varied and realistic to implement.
The counterfactual generation engine of DiCE incorporates diversity and feasibility constraints, which involve several factors: diversity through determinantal point processes, proximity, sparsity, and user constraints. In short, the optimization formula can be written as follows:
\[C(x)=\arg\min_{c_{1},...,c_{k}}\frac{1}{k}\sum_{i=1}^{k}\ell(f(c_{i}),y)+\frac {\lambda_{1}}{k}\sum_{i=1}^{k}\mathrm{dist}(c_{i},x)-\lambda_{2}\,\mathrm{dpp \_diversity}(c_{1},...,c_{k}), \tag{7}\]
where the function \(\ell\) was chosen as hinge loss,
\[\ell=\max(0,1-z*\mathrm{logit}(f(c))), \tag{8}\]
where \(z\) is assigned a value of -1 when \(y\) equals 0, and a value of 1 when \(y\) equals 1. The term \(\mathrm{logit}(f(c))\) refers to the unscaled output generated by the ML model. For instance, it represents the final logits that are input into a softmax layer to facilitate predictions within a neural network.
The term \(\mathrm{dpp\_diversity}\) is represented as
\[\mathrm{dpp\_diversity}=\mathrm{det}(\mathrm{K}), \tag{9}\]
where \(\mathrm{K}_{i,j}=\frac{1}{1+\mathrm{dist}(c_{i},c_{j})}\) and \(\mathrm{dist}(c_{i},c_{j})\) indicates a measurement of distance between the two counterfactual examples.
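In code, the generation step could look roughly like the sketch below, which uses DiCE's model-agnostic interface rather than the gradient-based optimizer behind Eq. (7); the synthetic data, the integer class encoding (with 0-3 standing in for MB/EP/PA/BG), and the parameter values are assumptions of this example only.

```python
import dice_ml
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the MRI feature table; labels 0-3 play the role of the tumor types.
rng = np.random.default_rng(0)
cols = [f"{seq}_{kind}" for seq in ["T2", "FLAIR", "DWI", "ADC", "T1", "T1CE"]
        for kind in ["Tumor", "Ratio", "Parenchyma"]]
train_df = pd.DataFrame(rng.normal(size=(120, len(cols))), columns=cols)
train_df["tumor_type"] = pd.qcut(train_df["T2_Tumor"] + train_df["ADC_Tumor"],
                                 4, labels=False)

clf = LogisticRegression(max_iter=1000).fit(train_df[cols], train_df["tumor_type"])

d = dice_ml.Data(dataframe=train_df, continuous_features=cols, outcome_name="tumor_type")
m = dice_ml.Model(model=clf, backend="sklearn")
explainer = dice_ml.Dice(d, m, method="random")

# Five diverse counterfactuals pushing one patient toward class 2, with the
# parenchyma (reference) features excluded from modification.
query = train_df[cols].iloc[[0]]
cf = explainer.generate_counterfactuals(
    query, total_CFs=5, desired_class=2,
    features_to_vary=[c for c in cols if "Parenchyma" not in c])
cf.visualize_as_dataframe(show_only_changes=True)
```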
## 4 Results
What if the counterfactual explanations graciously provide us with additional insights into classification?
DiCE provides multi-class training capability, allowing us to develop a framework that facilitates joint training and response acquisition for all four tumor types. Fig. 1 illustrates the visualization of the idea for binary classification. This framework aims to leverage counterfactual explanations, acting as a classifier, to determine the tumor type that best aligns with the numerical MRI data of a newly arrived patient. Furthermore, it strives to uncover the factors and distinctive characteristics that differentiate this tumor type from others, even when only numeric MRI data is available for the patient.
By utilizing all four tumor types, we essentially construct a decision space of reality with our existing patients. As the new patient is guided through this space, attempting to transform into each disease sequentially, the degree of
Figure 1: The illustration of generating counterfactual explanations on tumor types. The figure on the left depicts the process of manipulating features and outcomes using a black-box ML model with a counterfactual approach. On the right-hand side, a more detailed depiction of what happens during this process and the concept of “_counterfactual explanation_” is shown with two example tumor types. The green path (CF2) is a valid counterfactual, while the red one (CF1) represents a fanciful generation.
self-modification required for each specific tumor condition will vary. As the required changes decrease, it can be inferred that the patient is closer to that particular tumor type since they necessitate fewer modifications. Similarly, understanding the level of dissimilarity and the contributing features to this dissimilarity has been explored as a critical approach in determining the tumor type.
Our approach for using counterfactual explanations as classifiers avoids the need to separate different test sets. As a result, the performance of ML models significantly surpasses the baseline scores, with only a few patients excluded from the decision space to simulate the scenario of newly arriving patients. In this case, all training samples become test patients, as we explore the decision space. To achieve this, DiCE provides valuable information regarding misclassified samples, enabling us to exclude the associated counterfactuals from the statistical analysis through post-processing.
The LR model outperformed the other models in overall performance. Furthermore, when we experimented with alternative models for generating counterfactuals, we observed that a large number of patients failed to converge to the target disease compared to LR. This made it challenging to write automated code covering all patients in DiCE when analyzing counterfactuals. LR resolved this issue, since it was able to converge for the transformation of all the provided patients. Therefore, we decided to continue using LR for generating counterfactuals.
Figure 2 and Table 1 illustrate the concept of using counterfactuals as a classifier. This approach can be explained as follows: We have a newly arrived patient who has undergone only an MRI scan, and it is not possible to determine the type of tumor based solely on the MRI images. Our approach aims to generate alternative realities or what if? scenarios (e.g., "what if we had MB? how much would it change?" or "what if we had EP?") for patient \(x\) by utilizing the factual MRI data we possess and leveraging information from the previously obtained decision space. By applying a what-if scenario to each tumor, we can clearly identify which tumor type the available data is closer to or which MRI features need to be adjusted to achieve closer proximity. The overall feature distance, suitably scaled, provides an indication of how different the tumor type of the new patient could be compared to others.
Table 1 provides detailed information about an unknown patient whose ground-truth classification is EP. In the case of the MB counterfactual sample, changes are observed in the FLAIR_Tumor and T1CE_Tumor features, resulting in distances of -663 and 417.5, respectively. As for the EP group, the only noticeable change is observed in the T2_Tumor feature, with a distance of 137.2. On the other hand, significant changes can be observed for the PA group in the T2_Tumor (1286 to 2290.2), ADC_Tumor (1.009 to 2), and T1CE_Tumor (892 to 1492.5) features. Similarly, for the BG group, differences can be observed in DWI_Tumor (1175 to 544.23), ADC_Tumor (1.009 to 2), and T1CE_Ratio (1.595 to 0.781). Based on these findings, it can be inferred that fewer changes are required in our factual data to align with the characteristics of EP. Consequently, we can conclude that the patient most likely has EP and further investigate the discrepancies in features among the other tumor types. Furthermore, Table 2 is included to present additional potential clinical cases. The last patient belonging to each tumor type was chosen, removed from the decision space, and treated as an unknown case.
The MB counterfactual sample exhibits differences only in the FLAIR_Ratio feature, with a change from 1.141 to 0.742. In the case of EP, increases are observed in the FLAIR_Tumor feature (from 1107 to 2493) and the ADC_Ratio feature (from 0.87 to 2.316). Similarly, for PA, changes are observed in the T2_Ratio feature (from 1.638 to 2.61) and the ADC_Ratio feature (from 0.87 to 2.892). The algorithm selects changes in the ADC_Tumor feature (from 0.54 to 2.05) and ADC_Ratio (from 0.87 to 2.917) for the BG group.
In the case of PA, the factual data reveals differences in the T2_Ratio, FLAIR_Ratio, ADC_Tumor, and T1_Ratio features. Specifically, the changes in the MB counterfactual sample are from 2.297 to 0.97, 1.143 to 0.608, 1.879 to 0.4, and 0.57 to 0.535, respectively. For EP, decreases are observed in the T2_Tumor (1778 to 913.6), T2_Ratio (2.297 to 0.968), DWI_Tumor (809 to 402.32), DWI_Ratio (0.805 to 0.476), and ADC_Tumor (1.879 to 0.4) features. Conversely, for PA, no significant changes are observed except for the DWI_Ratio feature, which changes from 0.805 to 1.025. Regarding the BG group, changes occur in the T2_Ratio feature (from 2.297 to 1.072) and the T1CE_Ratio feature (from 1.125 to 0.768).
In the case of the last instance, BG, significant changes are required in almost all features to transform it into an MB patient compared to its factual data. For EP, changes are observed in the T2_Tumor feature (from 1709 to 860.3), T2_Ratio feature (from 1.724 to 0.908), FLAIR_Tumor feature (from 1150 to 2019), and ADC_Tumor feature (from 1.59 to 0.34). In the case of PA, changes occur in the ADC_Tumor feature (from 1.59 to 1.56) and the T1CE_Tumor feature (from 326 to 1234). Conversely, in the BG counterfactual, minimal changes are observed in the features, with only the FLAIR_Ratio feature undergoing a change from 1.189 to 0.752.
In Tables 1, 2 and 3, T represents Tumor, and R represents Ratio (Tumor/Parenchyma). The variable \(x\) denotes the actual MRI feature values of a new unknown labeled patient. Although we have access to the ground-truth in this testing, we assume that we do not. The variable \(x_{cf}\) represents a hypothetical scenario in which we transform the patient into all possible tumor types to obtain counterfactual outputs. These outputs help us identify which features are
similar and need to be altered to correspond to each tumor type. The symbol (-) indicates no modification in the feature. For example, in Table 1, the patient is identified as an EP patient based on having the lowest overall feature distance to the EP type. Thus, we predict that this new patient most likely has EP.
| | Tumor Type | T2_T | T2_R | FLAIR_T | FLAIR_R | DWI_T | DWI_R | ADC_T | ADC_R | T1_T | T1_R | T1CE_T | T1CE_R |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Factual (\(x\)) | unknown (EP) | 1286 | 1.529 | 1311 | 1.341 | 1175 | 1.088 | 1.009 | 1.771 | 473 | 0.84 | 892 | 1.595 |
| Counterfactual (\(x_{cf}\)) | MB | - | - | 648 | - | - | - | - | - | - | - | 1309.5 | - |
| Counterfactual (\(x_{cf}\)) | EP | 1423.2 | - | - | - | - | - | - | - | - | - | - | - |
| Counterfactual (\(x_{cf}\)) | PA | 2290.2 | - | - | - | - | - | 2 | - | - | - | 1492.5 | - |
| Counterfactual (\(x_{cf}\)) | BG | - | - | - | - | 544.23 | - | 2 | - | - | - | - | 0.781 |

Table 1: This table presents the results of our proposed method utilizing counterfactuals (Fig. 2).
| | Tumor Type | T2_T | T2_R | FLAIR_T | FLAIR_R | DWI_T | DWI_R | ADC_T | ADC_R | T1_T | T1_R | T1CE_T | T1CE_R |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Factual (\(x\)) | unknown (MB) | 1534 | 1.638 | 1107 | 1.141 | 1883 | 1.614 | 0.54 | 0.87 | 513 | 0.842 | 818 | 1.327 |
| Counterfactual (\(x_{cf}\)) | MB | - | - | - | 0.742 | - | - | - | - | - | - | - | - |
| Counterfactual (\(x_{cf}\)) | EP | - | - | 2493 | - | - | - | - | 2.316 | - | - | - | - |
| Counterfactual (\(x_{cf}\)) | PA | - | 2.61 | - | - | - | - | - | 2.892 | - | - | - | - |
| Counterfactual (\(x_{cf}\)) | BG | - | - | - | - | - | - | 2.05 | 2.917 | - | - | - | - |
| Factual (\(x\)) | unknown (PA) | 1778 | 2.297 | 1085 | 1.143 | 809 | 0.805 | 1.879 | 2.685 | 4.99 | 0.57 | 747 | 1.125 |
| Counterfactual (\(x_{cf}\)) | MB | - | 0.967 | - | 0.608 | - | - | 0.4 | - | - | 0.535 | - | - |
| Counterfactual (\(x_{cf}\)) | EP | 913.6 | 0.968 | - | - | 402.32 | 0.476 | 0.4 | - | - | - | - | - |
| Counterfactual (\(x_{cf}\)) | PA | - | - | - | - | - | 1.025 | - | - | - | - | - | - |
| Counterfactual (\(x_{cf}\)) | BG | - | 1.072 | - | - | - | - | - | - | - | - | - | 0.768 |
| Factual (\(x\)) | unknown (BG) | 1709 | 1.724 | 1750 | 1.189 | 1112 | 0.674 | 1.59 | 2.148 | 373 | 0.643 | 326 | 0.652 |
| Counterfactual (\(x_{cf}\)) | MB | 1000.9 | - | 974 | 0.669 | - | - | 0.5 | 0.677 | 1424 | 0.573 | 479.9 | 0.76 |
| Counterfactual (\(x_{cf}\)) | EP | 860.3 | 0.908 | 2019 | - | - | - | 0.34 | - | - | - | - | - |
| Counterfactual (\(x_{cf}\)) | PA | - | - | - | - | - | - | 1.56 | - | - | - | 1234 | - |
| Counterfactual (\(x_{cf}\)) | BG | - | - | - | 0.752 | - | - | - | - | - | - | - | - |

Table 2: The table presents additional counterfactual cases generated for different newly arriving patients. The Tumor and Ratio features are not on the same scale for direct comparison due to their mathematical dependency.
Figure 2: The figure illustrates a clinical scenario demonstrating the practical application of counterfactuals and how they can be utilized in practice.
In Table 3, we present the same samples as depicted in Tables 1 and 2, but with standardized features to enable meaningful distance measurements. This explicit representation of tumor classification allows for better comparison. For an actual patient diagnosed as MB, the distance from the counterfactual explanation generated using the MBs is 2.5, and it is close to an EP counterfactual explanation. In the case of a new patient diagnosed as EP, it is 2.985 units away from one of the closest MB counterfactual explanations and 0.35 units away from the generated EP counterfactual. If the patient is diagnosed as PA, it is 3.157 units away from the second closest BG counterfactual explanation and 1.252 units away from the PA counterfactual explanation. Lastly, for a patient diagnosed as BG, the distance from the generated BG counterfactual is 1.853 units, and it is 2.574 units away from one of the closest PA counterfactual explanations. Entries marked (-) have the same values as the original data and are shown this way for simplicity and clearer presentation. The choice of distance metric generally does not change the resulting ranking; it only alters the magnitude of the distances.
### Revealing Key MRI Features through Counterfactual Explanations
As discussed in Section 3.1, counterfactual explanations can provide insights into feature importance. These explanations allow us to understand the reasoning behind ML model decisions and offer valuable options for restriction. In clinical settings, visible changes in features through counterfactual explanations can be more relevant and meaningful for real-world evaluations and applications.
Considering that we generated five counterfactuals for each patient, we obtained 125 explanations for MB, PA, and BG, and 55 explanations for EP. Table 4 illustrates our reporting method for the counterfactual analysis results of one case scenario (MB to EP): it shows the patient count, the total number of counterfactual explanations generated for them, and how often each feature was changed in these counterfactuals, which is used to identify the top three influential features. For instance, "FLAIR_Tumor 71 changes" signifies that, out of 125 counterfactuals, 71 involved a modification of FLAIR_Tumor when transforming MB into EP. FLAIR_Tumor therefore creates such a distinction between these two tumors that the model considers altering it particularly effective for shifting the decision from one side of the decision space to the other. The larger the number of "changes" for a feature, the stronger the evidence that the optimization repeatedly selects that feature, and hence that it strongly influences the decision.
Table 5 presents the findings from each tumor pair to identify feature differences between different tumor types. The observed changes in features align with expected outcomes from clinical studies. MB and EP tumors are distinguished by FLAIR and ADC features. MB and PA typically exhibit differences in T2 and ADC. MB and BG, on the other hand, show variations primarily in ADC, T2, and T1CE. In the case of EP and PA, T2 exhibits the most significant changes, while variations in ADC and T1CE are also observed. The most distinguishing features between EP and BG are T1CE_Ratio and ADC_Tumor. As for PA and BG, the T2_Ratio feature has been identified as a crucial factor in creating differentiation. Additionally, significant variations in T1CE features are frequently observed, further contributing to the dissimilarity between these tumor types.
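The counts reported in Tables 4 and 5 can be derived mechanically by comparing each generated counterfactual with its originating factual row; the helper below is a sketch of that bookkeeping, with the `factual_id` link column being an assumption of this example.

```python
import numpy as np
import pandas as pd

def count_feature_changes(factuals: pd.DataFrame, counterfactuals: pd.DataFrame,
                          tol: float = 1e-9) -> pd.Series:
    """Per feature, count how many counterfactuals differ from their factual value.

    `counterfactuals` is assumed to carry a 'factual_id' column pointing back to
    the index of the originating patient in `factuals`.
    """
    feature_cols = [c for c in counterfactuals.columns if c != "factual_id"]
    originals = factuals.loc[counterfactuals["factual_id"], feature_cols].to_numpy()
    changed = np.abs(counterfactuals[feature_cols].to_numpy() - originals) > tol
    return pd.Series(changed.sum(axis=0), index=feature_cols).sort_values(ascending=False)

# e.g. count_feature_changes(mb_patients, mb_to_ep_cfs).head(3) would reproduce the
# top-3 entries of the "MB to EP" case in Table 5 (names here are illustrative).
```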
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} & **Tumor Type** & **T2,T** & **T2,R** & **FLAIR,T** & **FLAIR,R** & **DWL,T** & **DWL,R** & **ADC,T** & **ADC,R** & **T1,T** & **T1,R** & **T1CE,T** & **T1CE,T** & **T1CE,R** & **Distance** \\ \hline _Factual (z)_ & _unknown (MB)_ & 0 & -0.5 & -0.5 & 0.5 & 0 & 0 & -0.5 & -1.191 & 0 & 0 & 0 & 0 & - \\ Counterfactual (\(x_{cf}\)) & MB & - & - & - & -2 & - & - & - & - & - & - & - & - & **2.5** \\ Counterfactual (\(x_{cf}\)) & EP & - & - & 2 & - & - & - & - & 0.370 & - & - & - & - & 2.948 \\ Counterfactual (\(x_{cf}\)) & PA & - & - & - & - & - & - & - & 0.993 & - & - & - & - & 3.319 \\ Counterfactual (\(x_{cf}\)) & BG & - & - & - & - & - & - & - & 2.020 & - & - & - & - & 3.337 \\ \hline _Factual (z)_ & _unknown (EP)_ & -0.583 & 0 & 0.5 & 0 & 0.5 & 0 & -0.816 & 0 & 0 & 0 & -0.795 & 0.5 & - \\ Counterfactual (\(x_{cf}\)) & MB & - & - & - & - & - & - & - & - & - & - & 0.836 & - & 2.985 \\ Counterfactual (\(x_{cf}\)) & EP & -0.233 & - & - & - & - & - & - & - & - & - & - & **0.300** \\ Counterfactual (\(x_{cf}\)) & PA & 1.982 & - & - & - & - & - & 1.225 & - & - & - & - & 1.550 & - & 4.031 \\ Counterfactual (\(x_{cf}\)) & BG & - & - & - & - & - & -2 & - & 1.225 & - & - & - & - & -2 & 4.082 \\ \hline _Factual (z)_ & _unknown (PU)_ & 0.5 & 1.223 & 0 & 0.5 & 0.5 & 0.124 & 0.816 & 0 & 0 & 0.5 & 0 & 0.5 & - \\ Counterfactual (\(x_{cf}\)) & MB & - & -0.871 & - & 2 & - & - & -1.225 & - & - & - & - & - & 4.588 \\ Counterfactual (\(x_{cf}\)) & EP & - & -0.869 & - & - & - & -2.1749 & -1.225 & - & - & - & - & - & 4.955 \\ Counterfactual (\(x_{cf}\)) & PA & - & - & - & - & - & 1.377 & - & - & - & - & - & - & **1.252** \\ Counterfactual (\(x_{cf}\)) & BG & - & -0.705 & - & - & - & - & - & - & - & - & - & - & 2.3157 \\ \hline _Factual (z)_ & _unknown (BG)_ & 0.811 & 0.5 & 0.373 & 0.811 & 0 & 0 & 0.831 & 0.5 & -0.5 & 0.5 & 0.4002 & -0.5 & - \\ Counterfactual (\(x_{cf}\)) & MB & -1.033 & - & -0.847 & -1.393 & - & - & -1.079 & -2 & 2 & -0.166 & 2 & 6.109 \\ Counterfactual (\(x_{cf}\)) & EP & -1.400 & -2 & 1.966 & - & - & -1.360 & - & - & - & - & - & 4.627 \\ Counterfactual (\(x_{cf}\)) & PA & - & - & - & - & - & - & 0.778 & - & - & - & 1.971 & - & 2.574 \\ Counterfactual (\(x_{cf}\)) & BG & - & - & - & -1.041 & - & - & - & - & - & - & - & **1.883** \\ \end{tabular}
\end{table}
Table 3: Distance results for counterfactuals generated on feature-wise scaled data for four distinct newly arriving patients with varying tumor types.
The results presented in Table 5, along with the visualization in Fig. 3, provide insights into the PA example as follows: When considering the scenarios of MB to PA or PA to MB, it is generally observed that similar distributions dominate. The distinctiveness of the distributions between MB and PA becomes evident when examining the top 5 features that exhibit the most variation, as shown in Fig. 3 and discussed in our previous study [73]. Furthermore, when examining nearly identical distributions between BG and PA, a lack of discernibility is found. This is further supported by the absence of these features among the important features for BG to PA or PA to BG, as demonstrated in Table 5. These findings indicate that the algorithm effectively operates in accordance with our expectation during counterfactual generation, which involves altering the most distinct features to achieve maximum impact with minimal modification. Notably, T1CE features and T2_Ratio are identified as the most distinctive features between PA and BG, as shown in Fig. 5 of [73].
Number of patients: 25. Number of generated counterfactuals: 125.

| Feature | Changes | Feature | Changes |
|---|---|---|---|
| FLAIR_Tumor | 71 changes | T1_Ratio | 6 changes |
| ADC_Tumor | 33 changes | T1CE_Tumor | 6 changes |
| ADC_Ratio | 29 changes | T2_Tumor | 3 changes |
| DWI_Ratio | 18 changes | T2_Parenchyma | 0 changes |
| FLAIR_Ratio | 17 changes | FLAIR_Parenchyma | 0 changes |
| DWI_Tumor | 12 changes | DWI_Parenchyma | 0 changes |
| T1_Tumor | 10 changes | ADC_Parenchyma | 0 changes |
| T1CE_Ratio | 7 changes | T1_Parenchyma | 0 changes |
| T2_Ratio | 6 changes | T1CE_Parenchyma | 0 changes |

Table 4: This example analysis presents the variations in characteristics observed during the generation of counterfactual instances for the transition from MB to EP.
Figure 3: Original data distributions for MB - PA and BG - PA, focusing on specific features.
| Transition | Feature (changes) | Feature (changes) | Feature (changes) |
|---|---|---|---|
| MB to EP | FLAIR_Tumor (71) | ADC_Tumor (33) | ADC_Ratio (29) |
| MB to PA | T2_Ratio (87) | T2_Tumor (55) | ADC_Tumor (43) |
| MB to BG | T2_Tumor (64) | ADC_Tumor (52) | T1CE_Ratio (43) |
| EP to MB | T1CE_Tumor (18) | FLAIR_Ratio (16) | FLAIR_Tumor (13) |
| EP to PA | T2_Tumor (34) | T2_Ratio (26) | ADC_Tumor (19) |
| EP to BG | ADC_Tumor (22) | DWI_Ratio (16) | T1CE_Ratio (15) |
| PA to MB | ADC_Ratio (91) | FLAIR_Ratio (88) | ADC_Tumor (76) |
| PA to EP | T2_Ratio (95) | T2_Tumor (81) | T1CE_Tumor (45) |
| PA to BG | T2_Ratio (95) | T1CE_Tumor (66) | T1CE_Ratio (54) |
| BG to MB | FLAIR_Ratio (85) | ADC_Tumor (71) | ADC_Ratio (71) |
| BG to EP | T1CE_Ratio (90) | T1_Tumor (50) | DWI_Ratio (48) |
| BG to PA | T1CE_Ratio (53) | T1CE_Tumor (48) | T2_Ratio (32) |

Table 5: The three most important features for each changing reality case.
### Statistical Analysis of Generated Counterfactuals
During the construction of the counterfactual tumor \(y\) from the original tumor \(x\), we conducted a dependent test to assess the statistical difference between \(x\) and \(y\), as explained in Section 2.5. Apart from the PA to MB transition (e.g., \(p\)=0.04763, \(p\)=0.0307), no significant differences were observed in other tumor transitions. This result can be attributed to both the fundamental optimization principle of minimizing changes during counterfactual generation and the distribution distances shown in Fig. 3. Specifically, Fig. 3(a) demonstrates a distinct separation in the distributions during the PA to MB transition, requiring a significantly larger change for transformation.
Table 6 presents the statistical similarity obtained when each tumor is transformed to represent the "what if?" scenario of other tumors. In other words, when we transform tumor \(x\) to tumor \(x^{\prime}\)=\(y\), we know that \(x^{\prime}\) is still dependent on \(x\). Therefore, we measure how similar \(x^{\prime}\) is to the original distribution of \(y\) on the feature where \(x\) undergoes the most significant change. A high \(p\)-value indicates that we cannot reject the hypothesis that the two distributions are the same, implying that the counterfactual explanations we generate sufficiently resemble the original distribution for that particular feature.
As expected, when attempting self-transformation on each tumor type, the obtained \(p\)-values were notably high. Evaluating at a significance level of 0.05, several features closely aligned with the actual feature distribution of the patients, making them indistinguishable from the ground truth. The following features exhibited this characteristic: FLAIR_Ratio and FLAIR_Tumor in the case of transforming EP to MB, ADC_Ratio when transforming PA to MB, ADC_Ratio during the transformation from MB to EP, T2_Tumor and T1CE_Tumor in the context of PA to EP transformation, DWI_Ratio when transforming BG to EP, T2_Tumor for MB to PA transformation, T2_Tumor and T2_Ratio in the case of EP to PA transformation, T1CE_Ratio and T1CE_Tumor during BG to PA transformation, and DWI_Ratio when transforming EP to BG. Fig. 4 presents some of these cases along with their KDE distributions.
Table 6: The results of hypothesis tests comparing the original data with the generated data.
### Pushing the Boundaries of Data Augmentation through Alternative Realities
During the construction of counterfactuals, we employed downsampling for MB and BG to align with the number of PA patients (25) during training, considering it appropriate. EP had a count of 11, and we did not increase it. The baseline results for this scenario can be observed in Table 7. For evaluation, the train-test splitting was conducted with a ratio of 45% for the baseline dataset, 35% for EP augmentation, and 25% for EP-PA-BG augmentation.
To address the data imbalance, we examined the inclusion of generated counterfactuals for data augmentation. For example, by equalizing EP with the other tumor types and incorporating 14 different generated counterfactuals alongside the originals, we excluded EP-to-EP instances. Opting for transitions from various tumor types to maximize variance and generalizability, we achieved an improvement of up to 12.06% as shown in Table 7, case A.
To incorporate the previously set aside MB and BG data, we aligned all tumor types, except themselves, with counterfactuals generated from different tumor types. BG, PA, and EP were included with MB, and all were evaluated as
Figure 4: The distributions of the original data and the generated data.
a group of 42 patients, which was the maximum patient count for one tumor type. When considering the counterfactuals as actual patients, the outcomes align with the results presented in Table 7, case B.
Furthermore, in the case examined in Table 7, case C, 11 patients were included from each tumor type in the test set, resulting in no actual EP patients in the training set. Consequently, during training, we had 31 real samples for MB, 0 real and 31 counterfactual samples for EP, 14 real and 17 counterfactual samples for PA, and 23 real and 8 counterfactual samples for BG. Notably, when evaluating on real samples, the results were intriguing. Despite the absence of real EP patients in the training data, the model successfully identified 5 out of the 11 patients, leading to an overall baseline score that was, on average, 0.76% higher.
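A sketch of the augmentation logic for case A is given below as a helper function; the dataframe names, the label column, and the choice of classifier are assumptions of this example, and evaluation is kept on real patients only.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support

def train_with_counterfactual_augmentation(train_df, cf_df, test_df,
                                           label="tumor_type", cf_label="EP"):
    """Append counterfactuals as extra samples of `cf_label`, train LR, score on real data."""
    cf_df = cf_df.assign(**{label: cf_label})
    augmented = pd.concat([train_df, cf_df], ignore_index=True)

    features = [c for c in augmented.columns if c != label]
    clf = LogisticRegression(max_iter=1000).fit(augmented[features], augmented[label])

    # The test set contains only real patients, so augmentation never leaks into evaluation.
    y_pred = clf.predict(test_df[features])
    p, r, f1, _ = precision_recall_fscore_support(
        test_df[label], y_pred, average="macro", zero_division=0)
    return clf, {"precision": p, "recall": r, "f1": f1}
```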
## 5 Discussion
The spatial heterogeneity in tumor characteristics presents a substantial clinical challenge in pediatric brain tumors. Specifically, tumors originating from the posterior fossa often exhibit overlapping imaging features, leading to difficulties in accurate differentiation, even for experienced clinicians. Accurate diagnosis is of paramount importance as each tumor type requires specific treatment strategies, directly impacting patient outcomes and overall quality of life. Despite the promising advancements in AI and medical imaging, the inherent black-box nature of most models and the challenges in convincing clinicians for everyday use often restrict these studies to the realm of research. It is crucial to aspire for these developments to become interactive and trustworthy tools that clinicians can readily utilize in real-life scenarios. Hence, our study introduces a novel approach to the existing literature, offering valuable insights into the underlying patterns and relationships among the features observed in MRI scans. We hypothesize that exploring "what if?" scenarios can significantly enhance our understanding of alternative outcomes and their implications for clinical decision-making. To the best of our knowledge, this research represents a pioneering effort in the investigation of pediatric brain tumors, highlighting its substantial influence on the interpretability and generalizability of ML models in this domain. By exploring alternative scenarios and their impact, we aim to contribute to the advancement of precise diagnostics and improve patient care in this challenging field.
The primary objective of this study was to enhance the interpretability of ML models' outcomes and provide additional insights using a novel approach. Despite being debated in the fields of philosophy and psychology for half a century, the core idea of counterfactuals has been employed in the field of artificial intelligence under various names, and their complete implementation is relatively recent. In this study, we transformed this idea into the clinical literature to extract valuable information that could be beneficial for clinicians in real-life scenarios. We aim to demonstrate both the alternative possibilities in the decision space and the underlying reasons behind the selected decision by utilizing alternative realities. To achieve this, we perturbed the original data by imposing various constraints during a relatively straightforward mathematical optimization process.
The generated counterfactual explanations provide evidence that there is not always a single definitive choice in life. When considering the diversity in individuals' biological characteristics, it becomes apparent that approaching each case may require a personalized approach. This notion aligns with the concept of personalized healthcare, which has been extensively explored in the health literature [74, 75, 76, 77]. In other words, our approach involves producing explanations tailored to each newly arrived patient by drawing insights from previous patients. By leveraging the decision space, we can identify the closest data points in terms of biological characteristics to the newly arrived patient and construct
| | Precision | Recall | F1 Score |
|---|---|---|---|
| Baseline | 73.15 \(\pm\) 9.48 | 72.20 \(\pm\) 4.78 | 71.28 \(\pm\) 5.62 |
| A | 84.83 \(\pm\) 4.95 | 83.75 \(\pm\) 3.72 | 83.34 \(\pm\) 3.65 |
| B | 86.31 \(\pm\) 4.57 | 84.64 \(\pm\) 4.69 | 84.85 \(\pm\) 4.72 |
| C | 73.58 | 72.73 | 72.04 |

Table 7: The impact of data augmentation using counterfactuals on classification scores is presented in the table. **(A)** For the first augmentation scenario, only EP counterfactuals are added, resulting in a dataset with 25 samples each for MB, EP, PA, and BG. **(B)** In the second augmentation scenario, counterfactuals for EP, PA, and BG are added to balance the number of samples with the original count of MB. Assuming all counterfactual examples represent real data, this scenario results in a dataset comprising 42 samples each for MB, EP, PA, and BG. **(C)** The third scenario involves moving all real samples to the test set, with 11 patients in each category. Consequently, no factual EP samples are left in the training set, and the model is trained accordingly. In all cases, LR consistently yields the best results, and all the reported results in the table are from the LR classifier.
alternative realities specific to that individual. These alternative scenarios allow us to observe the differences in the tumor on the MRI and gain insights into which tumor type it is most closely related to.
As there were no existing counterfactual studies in the literature regarding PF tumors, our study aimed to bridge this gap by subjecting the obtained outputs to various statistical tests. The objective was to provide a comprehensive exploration of the subject matter for enhanced clarity. We specifically investigated two aspects: first, the potential utilization of the generated counterfactuals as a post-classifier, and second, whether they could reveal significant MRI features associated with the corresponding tumor. These investigations were conducted with meticulous attention. Furthermore, we examined the potential impact of reintroducing these diverse counterfactuals into the dataset to address the issue of data imbalance. Statistical tests were also performed to assess the similarity of counterfactuals generated from different spaces to the transformed target space. The results of these tests are presented in Section 4.3.
The LR model exhibited the highest score in our evaluation, and therefore, we utilized it to generate counterfactual explanations. To facilitate the model's learning process and balance the data, we included 11 instances of the EP tumor type while selecting 25 patients from the remaining tumor types, despite there being more instances of certain tumor types.
To automate the generation of counterfactuals for all patients, we developed a framework. However, in cases where an optimal counterfactual explanation cannot be found, the process is halted. Currently, addressing such situations comprehensively is not feasible, and updates to the DiCE framework are necessary. Although we did not encounter this problem with the LR model, an alternative, when there is a sufficient number of patients, is to work with a subsample and replace any excluded patient with another patient from the actual population for statistical testing purposes. Instead of DiCE, alternative counterfactual algorithms may be employed that can solve the optimization problem more efficiently.
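As a minimal illustration of this workflow, the Python sketch below uses the dice-ml library to request counterfactuals for one query instance from a trained logistic-regression pipeline and catches the case where no counterfactual is found. The feature names, class labels and synthetic data are placeholders rather than the study's actual inputs.

```python
# Sketch: generating counterfactuals with the DiCE library (dice-ml).
# All names (feature columns, class labels) are illustrative, not the study's.
import numpy as np
import pandas as pd
import dice_ml
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = ["t2_intensity", "adc_mean", "edema_ratio"]          # hypothetical MRI features
X = pd.DataFrame(rng.normal(size=(100, 3)), columns=features)
y = (X["adc_mean"] + 0.5 * X["t2_intensity"] > 0).astype(int)    # 0 = "MB-like", 1 = "EP-like"
df = X.assign(label=y)

clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

d = dice_ml.Data(dataframe=df, continuous_features=features, outcome_name="label")
m = dice_ml.Model(model=clf, backend="sklearn")
exp = dice_ml.Dice(d, m, method="random")                        # "genetic"/"kdtree" also available

query = X.iloc[[0]]                                              # a newly arrived patient
try:
    cfs = exp.generate_counterfactuals(query, total_CFs=3, desired_class=1,
                                       features_to_vary=features)
    cfs.visualize_as_dataframe(show_only_changes=True)
except Exception as err:                                         # DiCE may fail to converge
    print("no counterfactual found for this patient:", err)
```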
Obtaining specific results for individual patients is not problematic and can be achieved through parameter adjustments or by utilizing different models. However, if the goal is to validate the study and focus on medical research rather than practical outputs, manually deriving counterfactual explanations for each patient in order to perform a more comprehensive statistical analysis would be both less effective and more time-consuming. Therefore, as demonstrated in this study, there is a clear need for at least a semi-automated system.
Fig. 2 and Table 1 depict a hypothetical scenario involving a patient with an initially unknown EP tumor. The radiologist examining the MR images was uncertain about whether the tumor was of the MB or EP type. A key challenge in such cases is the lack of additional information, which often necessitates invasive procedures like brain surgery and tissue sampling for histopathological analysis to obtain a definitive diagnosis. To overcome this issue, we generate alternative scenarios based solely on the MRI features. These scenarios provide additional quantitative information to the radiologist, enabling them to assess the response based on the individual's biological characteristics.
Moreover, Table 2 presents examples of other potential tumor cases, while Table 3 demonstrates the efficacy of our approach in identifying patients with diverse tumor types that were previously unidentified and not encompassed within the decision space. While ML models can also accomplish this task, our method offers an additional advantage by preserving information regarding tissue characteristics, which in turn reveals similarities or differences among tumors. Additionally, our approach calculates distances to other tumors by transforming the features onto a common scale through standard scaling, providing valuable insight about proximity. This information aids our understanding of the differentiation among tumors in the dataset.
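The Python sketch below illustrates the kind of distance computation described here: features are standard-scaled and the Euclidean distance from one patient to each tumor-type centroid is reported. The feature names and data are illustrative placeholders.

```python
# Sketch: distances from one patient to tumor-type centroids after standard scaling.
# Feature names and data are illustrative placeholders.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(40, 3)), columns=["f1", "f2", "f3"])
labels = np.repeat(["MB", "EP", "PA", "BG"], 10)

scaler = StandardScaler().fit(X)      # scale every feature to zero mean / unit variance
Xs = scaler.transform(X)

patient = Xs[0]                       # the newly arrived (scaled) patient
for tumor in ["MB", "EP", "PA", "BG"]:
    centroid = Xs[labels == tumor].mean(axis=0)
    print(tumor, np.linalg.norm(patient - centroid))
```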
Table 4 presents the total count of modifications made to susceptible features, with the exception of the parenchyma features, which serve as reference points, when generating samples for different patients. The statistical report enables human verification of the optimization process, wherein minimal changes are implemented to achieve the desired outcome. It also confirms that the features exhibiting the highest variations during the generation of alternative realities are those with the most distinct distributions between two tumors. To facilitate the analysis of their distributions, we present Fig. 3 as a visual representation.
Table 5 presents the top three most variable features extracted from the reports obtained for all tumor matches in Table 4. Table 6 exhibits a statistical analysis demonstrating the high degree of similarity between the generated data and reality across different data spaces, specifically focusing on the most frequently selected features. A high \(p\)-value indicates that the generated samples cannot be well distinguished, implying the effectiveness of the independent transformation process, which produces significant alternative realities separate from the original space. Fig. 4 illustrates an example of some transformations from Table 6, displaying their corresponding \(p\)-values, as well as the kernel density estimation of the generated data in comparison to the original data.
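A hedged sketch of such a similarity test is given below. The specific test is not pinned down here, so a two-sample Kolmogorov-Smirnov test is used as a stand-in: it compares a generated feature against its original distribution, and a large p-value means the two samples cannot be distinguished.

```python
# Sketch: two-sample test of a generated feature against the original distribution.
# A large p-value means the two samples cannot be distinguished.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
original = rng.normal(loc=1.0, scale=0.3, size=42)     # placeholder feature values
generated = rng.normal(loc=1.05, scale=0.3, size=42)   # counterfactual values for the same feature

stat, p_value = ks_2samp(original, generated)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
```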
Ultimately, we investigated the potential of the generated alternative realities for data augmentation. The reliability of data augmentation methods such as SMOTE [10] is a subject of controversy in medical research due to their algorithmic dependencies and the often insignificant impact of the generated data on the distribution. These kinds of approaches often prioritize test-performance improvement above all else, without considering alignment with reality. As a result, the generated data mostly lacks interpretability and becomes disconnected from real-world scenarios. We believe that accepting this practice as universally valid would be misguided. In situations where both the available data and testing conditions are limited, relying solely on these approaches may not be suitable for ensuring generalizability, and it is essential to recognize the limitations and potential drawbacks associated with using generated data for generalization purposes. In medical studies with limited data, we propose that the generated counterfactuals can provide an alternative solution to this problem. Table 7 presents the performance evaluation of the data augmentation methods we assessed.
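The sketch below illustrates the augmentation idea of Table 7 in schematic form: counterfactual rows for the minority class are appended to the training data before refitting the LR classifier. The arrays and class indices are synthetic placeholders, not the study's data.

```python
# Sketch of the augmentation idea in Table 7: append counterfactual rows to the
# minority class before retraining. All arrays below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(3)
X_train = rng.normal(size=(86, 3)); y_train = rng.integers(0, 4, size=86)   # 4 tumor classes
X_test  = rng.normal(size=(20, 3)); y_test  = rng.integers(0, 4, size=20)

cf_rows = rng.normal(size=(14, 3))            # counterfactuals generated for the minority class
cf_labels = np.full(14, 1)                    # class index 1 standing in for EP

X_aug = np.vstack([X_train, cf_rows])
y_aug = np.concatenate([y_train, cf_labels])

clf = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
print("macro F1:", f1_score(y_test, clf.predict(X_test), average="macro"))
```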
The results presented in Table 7, case C cannot be directly compared with the baseline due to the inclusion of additional test patients during the testing phase. However, it is evident that the inclusion of more real test samples leads to a slight improvement under the given training conditions. It is important to acknowledge that some of the EP patients pose challenges in terms of differentiation, as mentioned in [73]. When these difficult patients are included in the training data, it can lead to a significant elevation in scores; if they instead end up in the test set, they act as unpredictable outliers, negatively impacting the overall results. Despite these challenges, achieving accurate predictions for half of these patients without utilizing any real EP data in the training set is commendable and warrants attention in future research.
Moreover, the incorporation of counterfactual explanations holds potential for identifying and addressing model bias in medical diagnosis [78; 79]. In certain healthcare data scenarios, removing constraints such as gender or ethnicity, which we previously recommended adding to this approach as restrictions, may facilitate fairness, transparency, and accountability in algorithmic decision-making processes. However, further research and implementation efforts are required to explore and validate the applicability of counterfactual explanations in addressing model bias in medical research and practice.
There are several limitations that should be considered in the present study. One limitation pertains to the implementation of the DiCE method, which may pose challenges when applied to diverse datasets. The method sometimes requires extensive optimization time and can encounter difficulties in finding a convergence point, potentially hindering the generation of accurate counterfactual explanations. To address this issue, alternative counterfactual explanation methods (e.g., Dutta et al. [80], Maragno et al. [81]) can be explored in conjunction with our proposed approach. For a more comprehensive collection of counterfactual algorithms, readers can refer to Guidotti's review paper [82]. Additionally, the dataset utilized in this study has limitations in terms of its scope and size. While it included an adequate number of samples for training ML models, it may not fully capture the range of scenarios encountered in clinical practice, thus potentially limiting the generalizability of the findings to other datasets. Furthermore, the dataset only encompassed four specific types of pediatric PF tumors, which may not represent the entire spectrum of pediatric brain tumors. Future studies should consider expanding the sample size and incorporating additional advanced MRI protocols, such as semiquantitative and quantitative perfusion MRI and MR spectroscopy, to gain deeper insights into the diagnostic and prognostic value of MRI features for pediatric PF tumors.
## 6 Conclusion
In conclusion, this paper introduces a novel perspective on interpretability in medical research, focusing on pediatric PF brain tumors as a case study. Leveraging counterfactual explanations, the study offers personalized and context-specific insights, validating predicted outcomes and shedding light on variations in predictions under different circumstances.
The proposed approach shows great promise in enhancing the interpretability of MRI features for medical research studies. By bridging the gap between ML algorithms and clinical decision-making, it has the potential to facilitate the adoption of advanced computational techniques in medical practice. Clinicians can benefit from valuable insights gained from the generated counterfactual explanations, leading to improved decision-making processes and ultimately better patient outcomes. Notably, the counterfactual explanations generated in this study maintain statistical and clinical fidelity in many cases, underscoring their significance.
To fully realize the potential of this approach, further research and validation are essential. Integrating counterfactual explanations into existing clinical workflows and evaluating their performance in real-world scenarios will be critical to ensuring the reliability and practicality of this method. The continued development and refinement of utilizing counterfactual explanations in MRI-based diagnoses could revolutionize the medical field, benefiting both patients and healthcare providers. Therefore, future studies with larger datasets within the same domain or for different diseases could yield even more robust alternative realities constructed from MRI features. Overall, this study represents a
significant step forward in moving beyond known reality and improving the application of ML in medical research and practice.
## 7 Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## 8 Institutional Review Board Statement
After obtaining approval from the Institutional Review Board of Children Hospital of 02 with approval number [Ref: 632 QD-ND2 dated 12 May 2019], we conducted the study in both Radiology and Neurosurgery departments in accordance with the 1964 Helsinki declaration.
## 9 Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
## 10 Data & Code Availability
The datasets generated and/or analyzed during the current study are not publicly available due to privacy concerns but are available from Dr. Keserci upon reasonable request.
The source codes of the presented study can be accessed at: [https://github.com/toygarr/counterfactual-explanations-for-medical-research](https://github.com/toygarr/counterfactual-explanations-for-medical-research)
|
2303.06878 | The System Description of dun_oscar team for The ICPR MSR Challenge | This paper introduces the system submitted by dun_oscar team for the ICPR MSR
Challenge. Three subsystems for task1-task3 are described respectively. In
task1, we develop a visual system which includes an OCR model, a text tracker,
and an NLP classifier for distinguishing subtitles and non-subtitles. In task2,
we employ an ASR system which includes an AM with 18 layers and a 4-gram LM.
Semi-supervised learning on unlabeled data is also vital. In task3, we employ
the ASR system to improve the visual system, so that some false subtitles can be
corrected by a fusion module. | Binbin Du, Rui Deng, Yingxin Zhang | 2023-03-13T05:53:42Z | http://arxiv.org/abs/2303.06878v1 | # The System Description of dun_oscar team for The ICPR MSR Challenge
###### Abstract
This paper introduces the system submitted by the dun_oscar team for the ICPR MSR Challenge. Three subsystems for task1-task3 are described respectively. In task1, we develop a visual system which includes an OCR model, a text tracker, and an NLP classifier for distinguishing subtitles and non-subtitles. In task2, we employ an ASR system which includes an AM with 18 layers and a 4-gram LM. Semi-supervised learning on unlabeled data is also vital. In task3, we employ the ASR system to improve the visual system, so that some false subtitles can be corrected by a fusion module.
## 1 Introduction
Extracting subtitles in videos is a challenging task, due to the cluttered background texts in the visual modality and the low speech quality in the audio modality. Though the task is difficult, it is not a popular research problem. Many previous studies focus only on OCR and ASR, which are core parts of a subtitle extraction system.
In the OCR domain, many algorithms are proposed to solve the difficult problems in detection and recognition. EAST[1], PSENet[2], DBNet[3] and SAST[4] are all recent popular OCR detection algorithms; they are widely used for their computational efficiency or their ability to detect texts with arbitrary shape. We employ the SAST[4] algorithm in our detection model. In addition to the usual text center line segmentation branch, SAST utilizes three other segmentation branches to localize texts more precisely.
Two common classes of methods for OCR recognition are: 1) CTC-based, such as CRNN[5], STAR-Net[6], EnEsCTC[7]; 2) Attention-based, such as ASTER[8], MASTER[9]. Attention-based methods usually achieve better results on public research datasets, due to their ability to handle text with irregular shape. But in the video subtitle scenario, the shape of text is usually a regular rotated box. Due to the good performance of CTC-based methods for Chinese characters, we employ a CTC-based method to recognize text content in the visual modality.
In the ASR domain, the hybrid Attention/CTC training framework[10] is the most popular among end-to-end (E2E) automatic speech recognition systems; we also employ this training framework in our system.
Methods in the ASR inference stage vary. [10] utilizes beam search on the attention-based branch and rescores the n-best results with the CTC branch. Wenet[11] adopts a different decoding method: beam search is applied in the CTC branch first, and the n-best results are rescored with the attention branch. In all methods, a powerful LM is very helpful. In our system, we employ a decoding method like [11]. A 4-gram LM is used for rescoring in the decoding stage.
In task1, extracting subtitles in the visual modality, an OCR model is necessary but not sufficient; we also need a module to extract the real subtitles and discard the cluttered background texts. In this task, we develop a text tracker and an NLP classification model to achieve this goal. The training corpus of the NLP model comes from the text filtered by the text tracker. The text tracker can identify most non-subtitles, and the NLP model can strengthen this ability.
For task2, extracting subtitles in the audio modality, an ASR system can satisfy the requirement. To achieve better results, we design the inference method as mentioned before. In the training stage, strong data augmentation and semi-supervised learning[12] are conducted.
For task3, extracting subtitles with both visual and audio modalities, we employ a fusion module to integrate the two single-modality results. Subtitles in the visual modality are extracted first, then subtitles in the audio modality are extracted; finally, the audio subtitles are used to improve the visual subtitles by removing non-subtitles, inserting missing subtitles, and correcting falsely recognized subtitles.
## 2 Extracting subtitles in visual modality
The system for task1 consists of an OCR module and a subtitle extractor module. The pipeline is shown in Fig 1.
### OCR module
We choose the SAST algorithm[4] with a Resnet50 backbone to build our detector. As mentioned in [4], various geometric properties of text regions, including text center line, text border offset, text center offset, and text vertex offset, are adopted to ease training and reconstruct the precise polygon of a text instance.
For the recognizer, we choose a CTC-based method. We employ ResnetXT-18 as the backbone network, and two multi-head self-attention layers are inserted before the classification layer.
### Subtitle extractor module
After the OCR module, all texts in the video have been detected and recognized. A subtitle extractor to distinguish subtitles from non-subtitles is necessary.
To achieve this goal, we first employ a simple image classifier to filter out some simple non-subtitles. The classifier accepts a single image as input and outputs the probability that the image is a subtitle. A high-frequency rule also works in this step.
The remaining non-subtitle texts are difficult to identify from single-image information alone. We develop a tracker using the Hungarian algorithm to build connections between texts with similar positions. After the tracker processes the whole video, all texts are divided into several instances located at similar positions.
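A minimal sketch of the association step is shown below: text boxes from consecutive frames are matched by solving a linear assignment problem over a centre-distance cost, with a gating threshold deciding when a new text instance is started. The cost function and threshold are assumptions made for illustration.

```python
# Sketch: associating text boxes across two frames with the Hungarian algorithm.
# Boxes are (x_center, y_center); the cost and gating threshold are assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

prev_boxes = np.array([[100, 620], [400, 80]], dtype=float)   # tracked text positions
curr_boxes = np.array([[402, 82], [101, 621], [250, 300]], dtype=float)

cost = np.linalg.norm(prev_boxes[:, None, :] - curr_boxes[None, :, :], axis=-1)
rows, cols = linear_sum_assignment(cost)

MAX_SHIFT = 20.0   # pixels; larger jumps start a new text instance
matched = []
for r, c in zip(rows, cols):
    if cost[r, c] < MAX_SHIFT:
        matched.append(c)
        print(f"track {r} -> detection {c} (cost {cost[r, c]:.1f})")
unmatched = sorted(set(range(len(curr_boxes))) - set(matched))
print("new instances:", unmatched)
```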
At the end of this module, an NLP classifier with three convolution layers and an FC layer performs the final filtering. After all non-subtitles are filtered out, the subtitles in consecutive frames are merged according to their text similarity.
### Training data mining
According to the requirements of the ICPR MSR competition, only the 10K ChineseOCR synthetic dataset, the LSVT dataset[13], the provided training data with audio annotation, and our own synthetic data can be used.
For synthetic data, we first select 10K samples from the ChineseOCR dataset; the selection rule is to maximize the number of different characters. When synthesizing our own data, we use the provided corpus to obtain 100k samples with simplified Chinese characters, 100k samples with traditional Chinese characters, and 20k samples with special punctuation.
In the LSVT dataset, there are 30k training samples with full annotations and 400k training samples with weak annotations. The weak annotations only consist of transcribed texts, not the locations of the texts. The data with full annotations can be used to train our OCR detection and recognition models directly. The data with weak annotations can be assigned strong annotations by comparing them with the recognized results generated by our OCR model. By iteratively updating our model and mining the weakly labeled data, less than 300k samples with weak annotations are assigned strong labels to train our model.
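The sketch below outlines this weak-to-strong label promotion in schematic form: the current OCR model is run on a weakly annotated image, and a detected box is kept only if its recognized text matches one of the weak transcripts. The function `detect_and_recognize` is a hypothetical stand-in for the OCR model, not part of the described system.

```python
# Sketch: promoting weak annotations (text only) to strong ones (text + box).
# `detect_and_recognize` stands in for the current OCR model and is hypothetical.
from typing import Callable

def mine_strong_labels(image, weak_texts: list[str],
                       detect_and_recognize: Callable) -> list[tuple]:
    """Keep a detected box only if its recognized text matches a weak transcript."""
    strong = []
    remaining = set(weak_texts)
    for box, predicted_text in detect_and_recognize(image):
        if predicted_text in remaining:
            strong.append((box, predicted_text))   # the box inherits the weak transcript
            remaining.discard(predicted_text)
    return strong
```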
The mining procedure for the provided training data with audio annotation in task1 is similar to that for LSVT. An initial model is obtained from the available public data, and the mining procedure is then carried out iteratively.
## 3 Extracting subtitles in audio modality
The system for task2 is a pure ASR system.
### Acoustic model
The acoustic model is the most important part of an E2E ASR system. Conformer[14] is a popular model architecture and achieves competitive results on many public datasets. Some studies show that the bigger the model, the smaller the CER[15]. But for this challenge, the amount of training data is limited; a big model may overfit the training set and achieve worse results in the test stage. To alleviate this problem, stochastic depth[16] is applied in our acoustic model training stage.
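A minimal PyTorch sketch of stochastic depth applied to a residual encoder block is shown below; the drop probability, block internals and dimensions are illustrative and not the exact configuration used here.

```python
# Sketch: stochastic depth ("drop path") for a residual encoder block in PyTorch.
# The drop probability and the feed-forward block are illustrative placeholders.
import torch
import torch.nn as nn

class StochasticDepthBlock(nn.Module):
    def __init__(self, dim: int = 512, p_drop: float = 0.1):
        super().__init__()
        self.p_drop = p_drop
        self.ff = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim), nn.ReLU(),
                                nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        branch = self.ff(x)
        if self.training:
            if torch.rand(()) < self.p_drop:
                return x                                # skip the whole block this step
            return x + branch / (1.0 - self.p_drop)     # rescale the surviving branch
        return x + branch                               # full residual update at inference

block = StochasticDepthBlock()
out = block(torch.randn(4, 10, 512))
print(out.shape)
```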
Our acoustic model adopts Conformer as the encoder architecture; the attention dim is 512, the number of heads is 16, and the number of layers is 18.
Figure 1: Extracting subtitles in visual modality
### Decoder
An ASR system with a well-designed decoder usually performs better than a pure acoustic model. In our system, a decoder like that in [11] is employed. Beam search is conducted in the CTC branch first, and the attention branch is applied to rescore the n-best results.
The previous decoding procedure only operates on the different branches of the acoustic model; an extra LM can also be employed in this stage. To improve the adaptability to the TV show domain, a universal LM trained on a common corpus and a domain-specific LM trained on the corpus of this challenge work simultaneously.
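A schematic sketch of this rescoring step is given below: each n-best hypothesis carries a CTC score, an attention rescore and log-probabilities from the common and domain LMs, and the hypothesis with the best weighted sum is selected. The scores and interpolation weights are placeholders, not the values used in the submitted system.

```python
# Sketch: rescoring n-best ASR hypotheses with two language models.
# Scores are log-probabilities; the interpolation weights are assumptions.
nbest = [
    {"text": "subtitle hypothesis 1", "ctc": -12.3, "att": -11.8,
     "lm_common": -20.1, "lm_domain": -18.9},
    {"text": "subtitle hypothesis 2", "ctc": -12.1, "att": -12.5,
     "lm_common": -25.7, "lm_domain": -24.0},
]

W_CTC, W_ATT, W_LM_COMMON, W_LM_DOMAIN = 0.3, 0.7, 0.4, 0.6

def total_score(h: dict) -> float:
    return (W_CTC * h["ctc"] + W_ATT * h["att"]
            + W_LM_COMMON * h["lm_common"] + W_LM_DOMAIN * h["lm_domain"])

best = max(nbest, key=total_score)
print(best["text"])
```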
### Training data mining
According to the requirements of this competition, only the aishell1 training data[17] and the provided training data with visual annotation can be used. The provided training data need to be converted into corresponding audio and transcribed texts.
The provided annotations are frame-wise, and there is much repeated content in them. Firstly, we merge the frame-wise annotations as in Fig 3, then we employ an ASR system trained on the aishell1 data to filter out the non-subtitles. The data mining procedure is conducted over four rounds.
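The merging step can be sketched as collapsing runs of identical frame-wise transcripts into (start, end, text) segments, as below; the frame data is a toy example.

```python
# Sketch: collapsing repeated frame-wise subtitle annotations into segments.
# Input: (frame_index, text) pairs; output: (start_frame, end_frame, text).
from itertools import groupby

frames = [(0, "hello"), (1, "hello"), (2, "hello"), (3, "world"), (4, "world")]

segments = []
for text, group in groupby(frames, key=lambda ft: ft[1]):
    items = list(group)
    segments.append((items[0][0], items[-1][0], text))

print(segments)   # [(0, 2, 'hello'), (3, 4, 'world')]
```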
In addition to the annotated data, 200h of unlabeled data is provided. Semi-supervised learning on this data is also useful to improve accuracy. A method like [12] is conducted over six rounds, as shown in Fig 4.
## 4 Extracting subtitles with both visual and audio modality
The system for task3 is the most complicated: we employ a subtitle extractor for the visual modality, a subtitle extractor for the audio modality, and a fusion module to integrate the two single-modality results.
The two single-modality subtitle extractors are developed with the methods described in the previous two chapters. The fusion module is shown in Fig 5. A filter, a remover and a padder work in this module.
The fusion module combines the two modalities at the result level, which is more flexible than combining at the model level. The visual subtitle extractor produces some subtitles, which have been filtered by the subtitle tracker and the NLP classifier.
The produced subtitles are first processed by a splitter: some subtitles that are mistakenly merged in the visual system can be split with reference to the ASR results, as in Fig 6.
There is a filter in this system to remove false candidates. If one subtitle set spanning multiple frames has several different candidate contents, we compare the visual subtitle with the audio subtitle; the similarities of characters and syllables are considered to rank the candidates. The common LM score is also employed. Some examples are shown in Fig 7.
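A simplified sketch of this candidate ranking is given below: character similarity to the ASR transcript is combined with a language-model score. The syllable comparison and the real 4-gram LM are replaced here by placeholders, and the weights are assumptions.

```python
# Sketch: ranking visual subtitle candidates against the ASR transcript.
# Character similarity via difflib; the syllable similarity and the real 4-gram LM
# are replaced by a placeholder scoring function.
from difflib import SequenceMatcher

def char_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def lm_score(text: str) -> float:
    return -0.1 * len(text)        # placeholder for a real n-gram LM log-probability

def rank(candidates: list[str], asr_text: str) -> str:
    def score(c: str) -> float:
        return 0.7 * char_similarity(c, asr_text) + 0.3 * lm_score(c)
    return max(candidates, key=score)

print(rank(["the cat sat", "the cot sat"], asr_text="the cat sat"))
```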
The remover in the fusion module removes subtitles with low similarity to the ASR results, and the padder inserts subtitles that are missing in the visual modality, as in Fig 8.
Figure 4: Semi-supervised learning
Figure 3: Merge the framewise annotations
Figure 6: Splitter in the fusion module
Figure 2: Multi-LM decoder
## 5 Experiments and Results
### Experimental Setup
The training data of the OCR model consists of four parts: (1) the LSVT dataset with full annotations and weak annotations; (2) the 10K ChineseOCR dataset; (3) the provided training data; (4) 220K of our own synthetic data. The training data of the ASR model consists of two parts: (1) the aishell1 training dataset; (2) the provided training data. The provided training data of each task is only used in that specified task.
The backbone of OCR detector is Resnet50, and the backbone of OCR recognizer is ResnetXT-18 with two MHSA layers. The threshold of the image classifier in the subtitle extractor is 0.05.
The backbone of the ASR acoustic model is Conformer; the attention dim is 512, the number of heads is 16, and the number of layers is 18. The common LM and the domain LM are both 4-gram.
### Results on three tasks
The results of the three tasks are listed in Tab 1.
## 6 Conclusions
In the visual system, an OCR model with a SAST detector and a CTC-based recognizer is developed, and a subtitle extractor is built to distinguish subtitles from non-subtitles. In the audio system, a well-designed ASR system is built to extract all subtitles in the audio modality.
In the most complicated task3, in addition to the separate audio and visual systems, a fusion module is designed to handle some hard cases in a single modality.
Combining the submitted results on the validation set and the test set, we take third, third and first place in the three tasks respectively.
\begin{table}
\begin{tabular}{l|l|l} \hline \hline Task & Dataset & CER \\ \hline \multirow{2}{*}{Task1} & validation & 0.1692 \\ \cline{2-3} & test & 0.1159 \\ \hline \multirow{2}{*}{Task2} & validation & 0.1851 \\ \cline{2-3} & test & 0.2162 \\ \hline \multirow{2}{*}{Task3} & validation & 0.1604 \\ \cline{2-3} & test & 0.1319 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results of the three tasks
Figure 5: Fusion module that combines two modalities
Figure 8: Padder in the fusion module
Figure 7: Filter in the fusion module |
2305.02990 | Accelerating GW calculations through machine learned dielectric matrices | The GW approach produces highly accurate quasiparticle energies, but its
application to large systems is computationally challenging, which can be
largely attributed to the difficulty in computing the inverse dielectric
matrix. To address this challenge, we develop a machine learning approach to
efficiently predict density-density response functions (DDRF) in materials. For
this, an atomic decomposition of the DDRF is introduced as well as the
neighbourhood density-matrix descriptor both of which transform in the same way
under rotations. The resulting DDRFs are then used to evaluate quasiparticle
energies via the GW approach. This technique is called the ML-GW approach. To
assess the accuracy of this method, we apply it to hydrogenated silicon
clusters and find that it reliably reproduces HOMO-LUMO gaps and quasiparticle
energy levels. The accuracy of the predictions deteriorates when the approach
is applied to larger clusters than those included in the training set. These
advances pave the way towards GW calculations of complex systems, such as
disordered materials, liquids, interfaces and nanoparticles. | Mario G. Zauchner, Andrew Horsfield, Johannes Lischner | 2023-05-04T16:57:26Z | http://arxiv.org/abs/2305.02990v3 | # Accelerating GW calculations through machine learned dielectric matrices
###### Abstract
The GW approach produces highly accurate quasiparticle energies, but its application to large systems is computationally challenging, which can be largely attributed to the difficulty in computing the inverse dielectric matrix. To address this challenge, we develop a machine learning approach to efficiently predict density-density response functions (DDRF) in materials. For this, an atomic decomposition of the DDRF is introduced as well as the neighbourhood density-matrix descriptor both of which transform in the same way under rotations. The resulting DDRFs are then used to evaluate quasiparticle energies via the GW approach. This technique is called the ML-GW approach. To assess the accuracy of this method, we apply it to hydrogenated silicon clusters and find that it reliably reproduces HOMO-LUMO gaps and quasiparticle energy levels. The accuracy of the predictions deteriorates when the approach is applied to larger clusters than those included in the training set. These advances pave the way towards GW calculations of complex systems, such as disordered materials, liquids, interfaces and nanoparticles.
## I Introduction
Density functional theory (DFT)[1; 2] has shown tremendous success in the calculation of electronic ground-state properties. However, it is well known that band gaps of solids and HOMO-LUMO gaps of molecules are often significantly underestimated when computed using Kohn-Sham (KS) eigenvalues [3; 4]. In order to remedy this issue, the GW method [5; 6; 7] is often employed in which a self-energy correction to the DFT KS energies is computed. The resulting quasiparticle energies are in excellent agreement with experimental measurements for a wide range of materials. However, the large numerical effort required for GW calculations and its unfavorable scaling with system size restrict applications to relatively small systems [8; 9]. The most expensive step is the computation of the interacting density-density response function (DDRF) which is closely related to the inverse dielectric matrix. In particular, the non-interacting DDRF is typically computed by carrying out a slowly-converging summation over all unoccupied states [8; 10; 11]. Afterwards, the non-interacting DDRF must be inverted to calculate the interacting DDRF.
These difficulties have led to the development of model DDRFs (or model dielectric matrices). For example, Hybertsen and Louie constructed a model dielectric matrix based on the assumption that the local screening response of the material is similar to that of a homogeneous medium with the same local density [12]. A similar model was also proposed by Cappellini et al. [13; 14]. However, it has proven difficult to generalize these model dielectric functions to highly non-uniform systems, such as isolated molecules or nano-clusters whose screening properties are substantially different from uniform systems. To overcome this limitation, Rohlfing [9] proposed to express the dielectric matrix as a sum of atomic contributions attributing a density response resulting from a Gaussian-shaped charge density to each atom. This model dielectric matrix contains a number of parameters which need to be determined, for example by comparison to calculated RPA dielectric functions.
In recent years, machine learning (ML) techniques have been widely adopted to predict scalar properties of materials, such as the total energy. A key ingredient in ML approaches is the descriptor which parametrizes the atomic and chemical structure of the material. Many descriptors used in computational chemistry are explicitly constructed to be invariant under rotations and translations: for example, ACE [15], SOAP [16], the Coulomb matrix [17; 18], bag-of-bonds [19] or fingerprint-based descriptors have been shown to be reliable descriptors for the prediction of scalar quantities. When predicting tensors or functions, however, it is no longer sufficient to employ a rotationally invariant descriptor. To alleviate this problem, Grisafi et al. [20] developed a symmetry-adapted version of the SOAP kernel which is equivariant under rotations and was successfully used in the prediction of polarizability tensors and first hyperpolarizabilities [21; 20], dipole moments [22] and electronic densities [23]. Several other groups also explored ML approaches for the electronic density including Brockherde et al.[24], Alred et al. [25] and Chandrasearan and co-workers [26]. Moreover, the construction of group-equivariant neural networks, such as Clebsch-Gordan networks [27; 28; 29], tensor-field networks [30] and spherical convolutional neural networks (CNNs) [31; 32] have seen significant developments in recent years and the implementation of these methods has been significantly simplified by frameworks such as e3NN [33] developed by Geiger et al. [34], thus providing promising alternatives to the symmetry-adapted SOAP for the learning of functions.
To the best of our knowledge, however, there has been no attempt to develop ML models for the prediction of non-local response functions, such as the DDRF. Predicting such quantities is a formidable challenge: for example, the DDRF of a small silicon cluster can be tens of
gigabytes in size when represented in a plane-wave basis even when a modest plane-wave cutoff is used. To address this problem, we introduce a decomposition of the DDRF into atomic contributions which can be predicted using ML techniques. To ensure that the ML model appropriately incorporates the transformation properties of the DDRF, we also develop a new descriptor called neighbourhood density-matrix (NDM) which transforms in the same way as the DDRF under rotations and is used in conjunction with a dense neural network to predict the atomic contributions to the DDRF. We then use the ML DDRFs to carry out GW calculations of hydrogenated silicon clusters. This approach which we refer to as the ML-GW method produces accurate GW quasiparticle energies at a significantly reduced computational cost compared to standard implementations.
## II Results
### Theoretical results
The GW method yields accurate quasiparticle energies by applying a self-energy correction to the mean-field KS energy levels. The GW self-energy \(\Sigma(\mathbf{r},\mathbf{r}^{\prime},\omega)\) is calculated from the one-electron Green's function \(G(\mathbf{r},\mathbf{r}^{\prime},\omega)\) and the screened Coulomb interaction \(W(\mathbf{r},\mathbf{r}^{\prime},\omega)\) according to [7; 8; 35]
\[\Sigma(\mathbf{r},\mathbf{r}^{\prime},\omega)=\frac{i}{2\pi}\int e ^{-i\delta\omega^{\prime}}G(\mathbf{r},\mathbf{r}^{\prime},\omega\!+\!\omega^ {\prime})W(\mathbf{r},\mathbf{r}^{\prime},\omega^{\prime})d\omega^{\prime} \tag{1}\]
with \(\delta\) denoting a positive infinitesimal. The screened Coulomb interaction is in turn computed from the bare Coulomb interaction \(v(\mathbf{r},\mathbf{r}^{\prime})\) and the inverse dielectric matrix \(\epsilon^{-1}(\mathbf{r},\mathbf{r}^{\prime},\omega)\) via
\[W(\mathbf{r},\mathbf{r}^{\prime},\omega)=\int\epsilon^{-1}( \mathbf{r},\mathbf{r}_{2},\omega)v(\mathbf{r}_{2},\mathbf{r}^{\prime})d \mathbf{r}_{2}, \tag{2}\]
which demonstrates that the dielectric matrix constitutes a key ingredient in GW calculations. It can be obtained from the interacting DDRF \(\chi(\mathbf{r},\mathbf{r}^{\prime},\omega)\) according to
\[\epsilon^{-1}(\mathbf{r},\mathbf{r}^{\prime},\omega)=\delta( \mathbf{r},\mathbf{r}^{\prime})+\int v(\mathbf{r},\mathbf{r}_{2})\chi( \mathbf{r}_{2},\mathbf{r}^{\prime},\omega)d\mathbf{r}_{2}. \tag{3}\]
In the remainder of this paper, we will assume that the frequency dependence of the dielectric matrix can be approximated by the generalized plamon-pole approximation (GPP) [7; 36; 37]. As a consequence, only the static DDRF \(\chi(\mathbf{r},\mathbf{r}^{\prime})\equiv\chi(\mathbf{r},\mathbf{r}^{\prime},\omega=0)\) needs to be determined.
Within the random-phase approximation (RPA), the interacting static DDRF is given by
\[\chi(\mathbf{r},\mathbf{r}^{\prime})= \chi_{0}(\mathbf{r},\mathbf{r}^{\prime})+ \tag{4}\] \[\int d\mathbf{r}_{1}d\mathbf{r}_{2}\chi_{0}(\mathbf{r},\mathbf{r }_{1})v(\mathbf{r}_{1},\mathbf{r}_{2})\chi(\mathbf{r}_{2},\mathbf{r}^{\prime}) \tag{5}\]
with \(\chi_{0}(\mathbf{r},\mathbf{r}^{\prime})\) denoting the static non-interacting DDRF which is typically computed as a sum over empty and occupied states [10; 11] according to
\[\chi_{0}(\mathbf{r},\mathbf{r}^{\prime})= \sum_{ij}\frac{f_{i}(1-f_{j})}{\epsilon_{i}-\epsilon_{j}}\times \tag{6}\] \[\left[\phi_{i}^{*}(\mathbf{r})\phi_{j}(\mathbf{r})\phi_{j}^{*}( \mathbf{r}^{\prime})\phi_{i}(\mathbf{r}^{\prime})+\text{c.c.}\right]. \tag{7}\]
Here, \(\epsilon_{i}\), \(f_{i}\) and \(\phi_{i}(\mathbf{r})\) denote the orbital energy, occupancy and wavefunctions of the KS state \(i\).
Equations (5) and (7) highlight the two main challenges in computing the DDRF: (1) the calculation of the non-interacting DDRF requires a summation of all empty states which is slowly converging and (2) the calculation of the interacting DDRF requires a matrix inversion which scales unfavorably with system size.
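In a finite basis, Eq. (5) becomes a matrix equation that can be solved as a linear system rather than by explicit inversion of the full dielectric matrix. The numpy sketch below illustrates this with toy matrices standing in for \(\chi_{0}\) and the Coulomb kernel.

```python
# Sketch: Eq. (5) in a finite basis, chi = chi0 + chi0 @ v @ chi,
# solved as the linear system (I - chi0 @ v) chi = chi0. Matrices are toy inputs.
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.normal(size=(n, n))
chi0 = -(A @ A.T)                 # symmetric, negative semidefinite toy chi0
B = rng.normal(size=(n, n))
v = B @ B.T + n * np.eye(n)       # symmetric positive definite toy Coulomb matrix

chi = np.linalg.solve(np.eye(n) - chi0 @ v, chi0)

# consistency check against the RPA equation itself
assert np.allclose(chi, chi0 + chi0 @ v @ chi)
print(np.round(chi[:2, :2], 3))
```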
#### ii.1.1 Atomic decomposition of the density-density response function
In order to bypass the expensive computation of the DDRF and pave the way towards a machine learning approach, we propose to express \(\chi(\mathbf{r},\mathbf{r}^{\prime})\) as a sum of atomic contributions \(\chi_{i}(\mathbf{r},\mathbf{r}^{\prime})\) according to
\[\chi(\mathbf{r},\mathbf{r}^{\prime})=\sum_{i=1}^{N}\chi_{i}( \mathbf{r},\mathbf{r}^{\prime}), \tag{8}\]
where \(i\) labels atoms and \(N\) is the total number of atoms.
How this partitioning is achieved is not immediately obvious. However, the atomic contributions to the DDRF should have the following properties: (1) the atomic contributions should be localized in the vicinity of the corresponding atom, (2) they should retain the global symmetry of \(\chi\), i.e. \(\chi(\mathbf{r},\mathbf{r}^{\prime})=\chi(\mathbf{r}^{\prime},\mathbf{r})\), and (3) they should integrate to zero, i.e. \(\int\chi(\mathbf{r},\mathbf{r}^{\prime})d\mathbf{r}=\int\chi(\mathbf{r}, \mathbf{r}^{\prime})d\mathbf{r}^{\prime}=0\), to ensure that the change in the charge density induced by a perturbing potential is overall charge neutral [8].
We start by expressing the DDRF in a localized basis set of real orbitals \(\{\phi_{\alpha_{a}}^{a}(\mathbf{r})\}\), where \(a\) labels the atom on which the basis function is centered and \(\alpha_{a}\) indexes the orbital on site \(a\). We should note that exploiting locality in GW calculations through local orbital representations has been done before, even as early as the first practical applications of the GW method by Strinati et al. [6]. In this basis the DDRF is given by
\[\chi(\mathbf{r},\mathbf{r}^{\prime})=\sum_{a,\alpha_{a}}\sum_{b, \alpha_{b}}\chi_{\alpha_{a}\alpha_{b}}^{ab}\phi_{\alpha_{a}}^{a}(\mathbf{r}) \phi_{\alpha_{b}}^{b}(\mathbf{r}^{\prime}), \tag{9}\]
where \(\chi_{\alpha_{a}\alpha_{b}}^{ab}\) is a symmetric matrix. This expression suggests the following decomposition of the DDRF into atomic contributions
\[\chi_{i}(\mathbf{r},\mathbf{r}^{\prime})=\frac{1}{2}\sum_{\alpha _{i}}\sum_{b,\alpha_{b}}\bigg{(}\chi_{\alpha_{i}\alpha_{b}}^{ib}\phi_{\alpha_{ i}}^{i}(\mathbf{r})\phi_{\alpha_{b}}^{b}(\mathbf{r}^{\prime})\\ +\chi_{\alpha_{b}\alpha_{i}}^{bi}\phi_{\alpha_{b}}^{b}(\mathbf{r} )\phi_{\alpha_{i}}^{i}(\mathbf{r}^{\prime})\bigg{)}. \tag{10}\]
We refer to the representation of the DDRF in the basis \(\{\phi^{a}_{\alpha_{a}}(\mathbf{r})\}\) as 2-center DDRF (2C-DDRF) because it contains pairs of basis functions which are centered on different atoms.
Using the symmetry of \(\chi^{iw}_{\alpha_{i}\alpha_{w}}\) and the fact that the basis functions are real, it can be easily verified that \(\chi_{i}(\mathbf{r},\mathbf{r}^{\prime})=\chi_{i}(\mathbf{r}^{\prime},\mathbf{ r})\). We can also ensure that \(\int\chi_{i}(\mathbf{r},\mathbf{r}^{\prime})d\mathbf{r}=0\) by removing all s-orbitals from the basis: see computational methods section for details. The locality of \(\chi_{i}(\mathbf{r},\mathbf{r}^{\prime})\) is directly inherited from the corresponding properties of the full DDRF. In particular, we have found that the expansion coefficients \(\chi^{iw}_{\alpha_{i}\alpha_{w}}\) decay rapidly as the distance between atom \(i\) and atom \(w\) increases [38].
We stress that this atomic representation of the DDRF is exact, i.e. \(\sum_{i}\chi_{i}(\mathbf{r},\mathbf{r}^{\prime})\) reproduces the full interacting DDRF when the local basis sets is complete. However, the atomic contributions to the DDRF contain contributions from pairs of basis functions which are centered on different atoms, see Eq. (10). These contributions are difficult to learn using atom-centered descriptors.
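The decomposition of Eq. (10) amounts to assigning half of each row and half of each column of the symmetric coefficient matrix to the atom on which the corresponding basis function is centered. The numpy sketch below illustrates this with a toy matrix and verifies that the atomic contributions sum back to the full matrix; the sizes and the basis-to-atom map are placeholders.

```python
# Sketch of Eq. (10): splitting the symmetric coefficient matrix chi_{ab} into
# per-atom contributions whose sum recovers the full matrix. Toy sizes only.
import numpy as np

rng = np.random.default_rng(1)
n_basis = 8
atom_of_basis = np.array([0, 0, 0, 1, 1, 2, 2, 2])     # which atom each basis fn sits on

M = rng.normal(size=(n_basis, n_basis))
chi = 0.5 * (M + M.T)                                   # symmetric full coefficient matrix

atomic = []
for i in range(atom_of_basis.max() + 1):
    P = np.diag((atom_of_basis == i).astype(float))
    atomic.append(0.5 * (P @ chi + chi @ P))            # rows + columns of atom i, halved

assert np.allclose(sum(atomic), chi)
print("atom 0 block norm:", np.linalg.norm(atomic[0]))
```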
To make progress, we exploit the localization of \(\chi_{i}(\mathbf{r},\mathbf{r}^{\prime})\) and expand it in terms of a set of basis functions \(\psi^{i}_{nlm}(\mathbf{r})=Y_{lm}(\hat{\mathbf{r}})R_{n}(|\mathbf{r}|)\) (with \(Y_{lm}\) denoting the spherical harmonics and \(R_{n}\) a set of radial functions) which are all centered on atom \(i\) according to
\[\chi_{i}(\mathbf{r},\mathbf{r}^{\prime})=\sum_{nlm}\sum_{n^{\prime}l^{\prime}m^{\prime}}\chi^{(i)}_{nlmn^{\prime}l^{\prime}m^{\prime}}\\ \quad Y_{lm}(\hat{\mathbf{r}})Y^{*}_{l^{\prime}m^{\prime}}(\hat{\mathbf{r}}^{\prime})R_{n}(|\mathbf{r}|)R^{*}_{n^{\prime}}(|\mathbf{r}^{\prime}|) \tag{11}\]
with \(\chi^{(i)}_{nlmn^{\prime}l^{\prime}m^{\prime}}\) denoting the expansion coefficients given by
\[\chi^{(i)}_{nlmn^{\prime}l^{\prime}m^{\prime}}=\int\int d\mathbf{r}d\mathbf{r}^{\prime}\chi_{i}(\mathbf{r},\mathbf{r}^{\prime})R^{*}_{n}(|\mathbf{r}|)R_{n^{\prime}}(|\mathbf{r}^{\prime}|)\\ Y^{*}_{lm}(\hat{\mathbf{r}})Y_{l^{\prime}m^{\prime}}(\hat{\mathbf{r}}^{\prime}). \tag{12}\]
These coefficients can be learned using a neural network based on atom-centered descriptors. We refer to the representation of the DDRF in the basis \(\{\psi^{i}_{nlm}(\mathbf{r})\}\) as 1-center DDRF (1C-DDRF) because it only contains pairs of basis functions centered on the same atom.
#### ii.1.2 Neighbourhood density-matrix descriptor
As discussed in the introduction, it is not appropriate to use a scalar descriptor (such as the standard SOAP descriptor [39]) that is invariant under rotations to develop a ML model for the DDRF: the behaviour of the atomic DDRFs under rotations is determined by their analytical form: see Eq. (11). In particular, we show in the Appendix that the coefficients of the atomic DDRF transform according to
\[\tilde{\chi}^{(i)}_{nlm_{1}n^{\prime}l^{\prime}m_{2}}=\sum_{m,m^{\prime}}D^{l}_{m_{1}m}(\hat{R})D^{l^{\prime}*}_{m_{2}m^{\prime}}(\hat{R})\chi^{(i)}_{nlmn^{\prime}l^{\prime}m^{\prime}}, \tag{13}\]
where \(\tilde{\chi}^{(i)}_{nlm_{1}n^{\prime}l^{\prime}m_{2}}\) denote the coefficients of the transformed DDRF, \(\hat{R}\) is a rotation and \(D^{l}_{mm^{\prime}}(\hat{R})\) is a Wigner D-matrix [40].
Next, we construct the NDM descriptor, which transforms under rotations in the same way as the atomic DDRF. The starting point for such a descriptor is a non-local extension of the smooth neighbourhood density of atom \(i\) of species \(\eta\) employed in the SOAP descriptor [16], defined as
\[\rho^{\eta}_{i}(\mathbf{r},\mathbf{r}^{\prime})=\sum_{k\in\eta} \sum_{l\in\eta}e^{-\alpha(\mathbf{r}-\mathbf{r}_{k})^{2}}e^{-\alpha(\mathbf{r }^{\prime}-\mathbf{r}_{l})^{2}}, \tag{14}\]
where \(k\) and \(l\) run over atoms in the neighbourhood of atom \(i\) within a cut-off radius \(R_{cut}\) and \(\alpha\) is a hyperparameter which describes the size of an atom. The NDM is then expanded in a basis of spherical harmonics and radial basis functions \(R_{n}(|\mathbf{r}|)\) according to
\[\rho^{\eta}_{i}(\mathbf{r},\mathbf{r}^{\prime})=\sum_{nlm}\sum_{n^{\prime}l^{\prime}m^{\prime}}\rho^{(i,\eta)}_{nlmn^{\prime}l^{\prime}m^{\prime}}\\ \quad Y_{lm}(\hat{\mathbf{r}})Y^{*}_{l^{\prime}m^{\prime}}(\hat{\mathbf{r}}^{\prime})R_{n}(|\mathbf{r}|)R^{*}_{n^{\prime}}(|\mathbf{r}^{\prime}|), \tag{15}\]
with \(\rho^{(i,\eta)}_{nlmn^{\prime}l^{\prime}m^{\prime}}\) being expansion coefficients. The above equation shows that the NDM transforms in the same way as the atomic DDRF: see Appendix for additional details. Therefore, we use the expansion coefficients as a descriptor for learning the DDRF.
We note that the NDM can be written as the product of two neighbourhood densities \(\rho^{\eta}_{i}(\mathbf{r})=\sum_{k\in\eta}\exp\{-\alpha(\mathbf{r}-\mathbf{ r}_{k})^{2}\}\) according to
\[\rho^{\eta}_{i}(\mathbf{r},\mathbf{r}^{\prime})=\rho^{\eta}_{i}( \mathbf{r})\rho^{\eta}_{i}(\mathbf{r}^{\prime}). \tag{16}\]
Similar to the NDM, \(\rho^{\eta}_{i}(\mathbf{r})\) can be expanded in a basis of spherical harmonics and radial basis functions \(R_{n}(|\mathbf{r}|)\) with coefficients \(\rho^{(i,\eta)}_{nlm}\). It follows that
\[\rho^{(i,\eta)}_{nlmn^{\prime}l^{\prime}m^{\prime}}=\rho^{(i,\eta)}_{nlm}\rho^{(i,\eta)}_{n^{\prime}l^{\prime}m^{\prime}}, \tag{17}\]
which demonstrates that the coefficients of the neighbourhood density contain the same information as the coefficients of the neighbourhood density matrix. Indeed, we have found in our calculations that both types of coefficients perform equally when used as descriptors to predict the atomic DDRFs. We further note that the coefficients of the 3-body version of the SOAP descriptor \(d^{(\eta)}_{nn^{\prime}l}\) can be obtained from the NDM using
\[d^{(\eta)}_{nn^{\prime}l}=\sum_{l^{\prime}mm^{\prime}}\sqrt{\frac{8\pi^{2}}{2l+ 1}}\rho^{(i,\eta)}_{nlm}\rho^{(i,\eta)}_{n^{\prime}l^{\prime}m^{\prime}}\delta_{ ll^{\prime}}\delta_{mm^{\prime}}, \tag{18}\]
in the case where there is no coupling between different atomic species \(\eta\).
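The numpy sketch below illustrates Eqs. (17) and (18) with random placeholder coefficients: the NDM coefficients are formed as an outer product of the neighbourhood-density coefficients, and contracting over \(l=l^{\prime}\), \(m=m^{\prime}\) recovers a SOAP-like power spectrum. The array sizes are illustrative.

```python
# Sketch of Eqs. (17)-(18): the NDM coefficients are an outer product of the
# neighbourhood-density coefficients, and contracting them gives a SOAP-like
# power spectrum. Coefficients here are random placeholders.
import numpy as np

n_max, l_max = 3, 2
rng = np.random.default_rng(2)
# rho[n, l, m] with m stored at index m + l_max
rho = rng.normal(size=(n_max, l_max + 1, 2 * l_max + 1))

# Eq. (17): NDM coefficients as an outer product of density coefficients
ndm = np.einsum("alm,bkq->almbkq", rho, rho)

# Eq. (18): contract over l = l', m = m' to recover the power spectrum d_{n n' l}
d = np.zeros((n_max, n_max, l_max + 1))
for l in range(l_max + 1):
    ms = slice(l_max - l, l_max + l + 1)                 # valid m values for this l
    d[:, :, l] = np.sqrt(8 * np.pi**2 / (2 * l + 1)) * np.einsum(
        "am,bm->ab", rho[:, l, ms], rho[:, l, ms])
print(ndm.shape, d.shape)
```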
### Machine learning
We apply our ML approach for predicting DDRFs to hydrogenated silicon clusters and then use the DDRFs
to calculate GW quasiparticle energies for these systems. We refer to this technique as the ML-GW approach. The atomic positions of the clusters were constructed as described in the methods section and then relaxed using DFT.
To establish the accuracy of this approach, we first investigate the error in the GW quasiparticle energies resulting from the expansion of the DDRF in terms of the intermediate local basis \(\{\phi^{a}_{\alpha_{a}}(\mathbf{r})\}\): see Eq. (9). Fig. 1 compares the HOMO-LUMO gaps obtained from mean-field DFT-PBE calculations, a standard plane-wave G\({}_{0}\)W\({}_{0}\) calculation using a generalized plasmon-pole approximation [7; 37] and a G\({}_{0}\)W\({}_{0}\) calculation using the 2C-DDRF, where the DDRF is expanded in terms of a modified version of the admm-2 basis set [41]: see methods section. The DFT-PBE results show that the HOMO-LUMO gap decreases with increasing cluster size from \(E_{g}\approx 4.8\) eV for the smallest cluster containing 10 Si atoms to \(E_{g}\approx 3\) eV for the biggest cluster with almost 60 Si atoms. This decrease is a consequence of quantum confinement effects which are less pronounced for bigger clusters. The plane-wave GW HOMO-LUMO gaps show a similar trend as a function of cluster size, but the gaps are larger than the DFT-PBE gaps by several electron volts. Interestingly, the GW corrections are larger for smaller clusters than for larger clusters. As a consequence, the reduction in the GW HOMO-LUMO gaps as a function of cluster size is larger compared to the DFT-PBE result: in particular, the gap is as large as 8.6 eV for the smallest clusters and shrinks to 5.5 eV for the largest clusters corresponding to a decrease of 3.1 eV (compared to a decrease of 1.8 eV in the DFT-PBE HOMO-LUMO gap energies). Similar results were obtained by Chelikowsky et al. [42] who also carried out GW calculations of hydrogenated Si clusters. In particular, they found that the HOMO-LUMO gap shrinks from \(\sim 9\) eV for a 10 Si atom cluster to \(\sim 6.5\) eV for a 47 Si atom cluster. The GW results obtained with the 2C-DDRF are qualitatively similar to the plane-wave GW results. However, the HOMO-LUMO gaps that are obtained with this approach are consistently \(\sim 0.4\) eV smaller than the plane-wave results. This is a consequence of the incompleteness of the local basis set.
Next, we determine the 1C-DDRF. For the basis set we use solid harmonic Gaussians with optimized decay coefficients: see methods section. Fig. 2 (a) compares the HOMO-LUMO gaps from G\({}_{0}\)W\({}_{0}\) calculations with the 1C-DDRF to those obtained with the 2C-DDRF and also to plane-wave G\({}_{0}\)W\({}_{0}\) results. For small clusters, the HOMO-LUMO gaps obtained with the 1C-DDRF are smaller than those obtained with the 2C-DDRF, while the opposite behaviour is observed for larger clusters. The largest difference between the two methods is obtained for clusters containing \(\sim 40\) Si atoms. The root-mean-square error (RMSE) of the 1C-basis results relative to the 2C-basis results is 0.22 eV and the RMSE relative to the plane-wave results is 0.45 eV for all clusters. Fig. 2 (b) shows the HOMO and LUMO quasiparticle energies. It can be seen that better agreement with the plane-wave result is obtained for the LUMO than for the HOMO.
Fig. 3 (a) shows the quasiparticle energy corrections of the ten lowest conduction orbitals and the ten highest valence orbitals from plane-wave G\({}_{0}\)W\({}_{0}\) and G\({}_{0}\)W\({}_{0}\) with the 1C-DDRF. The corrections obtained with the 1C-DDRF follow a similar trend as those obtained from the plane-wave calculation. For the unoccupied states, the quantitative agreement is better than for the occupied states, but the 1C-DDRF results for the unoccupied states are scattered over a larger energy range than the plane-wave results. To analyze the errors that arise from the use of the 1C-DDRF in more detail, Fig. 3 (b) shows a two-dimensional histogram of the difference in QP corrections between plane-wave G\({}_{0}\)W\({}_{0}\) and G\({}_{0}\)W\({}_{0}\) with the 1C-DDRF. For the occupied states the differences are mostly smaller than 0.4 eV, while they are somewhat smaller for the unoccupied states. The RMSE over all energy levels is 0.32 eV.
Now that we have established the accuracy of the method used to generate the training set, we use a dense neural network (NN) in conjunction with the NDM descriptor to generate the coefficients of the 1C-DDRF according to
\[\chi^{(i)}_{nlmn^{\prime}l^{\prime}m^{\prime}}=f(\rho^{(i,Si)}_{nlm},\rho^{(i,H)}_{nlm}), \tag{19}\]
where \(f\) is the neural network function. The hydrogen and silicon environment descriptors are concatenated into a single vector before being fed into the neural network. A separate network is trained for Si and H contributions to the DDRF. The exact architecture of the network as well as the practical computation of the atomic decomposition and the descriptors are described in the Methods section. To generate the training data for the neural
Figure 1: HOMO-LUMO gaps of hydrogenated silicon clusters from DFT-PBE Kohn-Sham eigenvalues, plane-wave G\({}_{0}\)W\({}_{0}\) and G\({}_{0}\)W\({}_{0}\) calculations using the 2C-DDRF, see section ”Atomic decomposition of the density-density response function”.
network, we start from the set of relaxed hydrogenated Si clusters that were studied above. From each relaxed cluster, we generate six new configurations by randomly displacing the atoms with the magnitude of the displacements being drawn from a uniform distribution with a maximum of 0.1 Å. For these clusters, we then calculate the 1C-DDRF.
Once the neural network is trained on the 1C-DDRF of the randomly displaced clusters, we use it to calculate the 1C-DDRFs of the relaxed clusters and then determine quasiparticle energies via the ML-GW approach. Fig. 4 compares the HOMO-LUMO gaps from ML-GW and GW with explicitly calculated 1C-DDRFs. Except for the smallest cluster, the ML-GW method accurately reproduces the HOMO-LUMO gaps of the explicit GW calculations. The worse performance for the smallest cluster is a consequence of the training set which contains a large number of bigger clusters containing atomic environments that differ from those found in the smallest clusters. The overall RMSE of the ML-GW method relative to the explicit GW with the 1C-basis is only 0.15 eV, but reduces to 0.06 eV when the smallest cluster is excluded.
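A minimal PyTorch sketch of the regression model of Eq. (19) is shown below: a dense network maps the concatenated Si and H descriptors of one atom to its flattened DDRF coefficients. All layer sizes and dimensions are illustrative, not the architecture described in the Methods section.

```python
# Sketch of Eq. (19): a dense network mapping the concatenated Si and H
# neighbourhood descriptors of one atom to its flattened DDRF coefficients.
# All dimensions and layer sizes are illustrative, not those of the study.
import torch
import torch.nn as nn

DESC_DIM = 2 * 512        # concatenated Si + H descriptor (hypothetical size)
COEF_DIM = 1024           # flattened chi^{(i)} coefficients (hypothetical size)

model = nn.Sequential(
    nn.Linear(DESC_DIM, 1024), nn.SiLU(),
    nn.Linear(1024, 1024), nn.SiLU(),
    nn.Linear(1024, COEF_DIM),
)

descriptors = torch.randn(32, DESC_DIM)     # a batch of atomic environments
targets = torch.randn(32, COEF_DIM)         # explicitly computed coefficients

optim = torch.optim.Adam(model.parameters(), lr=1e-3)
optim.zero_grad()
loss = nn.functional.mse_loss(model(descriptors), targets)
loss.backward()
optim.step()
print(float(loss))
```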
Fig. 5 shows the difference in QP corrections between ML-GW and GW with the 1C-DDRF for the 10 highest valence states and 10 lowest conduction states. ML
Figure 3: (a) Quasiparticle corrections from plane-wave G\({}_{0}\)W\({}_{0}\) and G\({}_{0}\)W\({}_{0}\) with the 1C-DDRF for the 10 highest valence orbitals and the 10 lowest conduction orbitals of hydrogenated silicon clusters. (b) Histogram of difference in quasiparticle corrections from plane-wave G\({}_{0}\)W\({}_{0}\) and G\({}_{0}\)W\({}_{0}\) calculations with the 1C-DDRF for the 10 highest valence orbitals and the 10 lowest conduction orbitals of hydrogenated silicon clusters. The mean-field energies are referenced to the middle of the mean-field HOMO-LUMO gap.
Figure 2: (a) HOMO-LUMO gaps of hydrogenated silicon clusters from plane-wave G\({}_{0}\)W\({}_{0}\) and G\({}_{0}\)W\({}_{0}\) calculations using the 2C-DDRF and G\({}_{0}\)W\({}_{0}\) calculations using the 1C-DDRF, see section "Atomic decomposition of the density-density response function". (b) HOMO and LUMO energies of hydrogenated Si clusters.
GW produces QP shifts for both valence and conduction states within 0.1 eV from the explicit G\({}_{0}\)W\({}_{0}\) with the 1C-DDRF. The majority of valence states exhibit a positive error, while for conduction states, the error is largely negative.
Fig. 6 compares the ML-G\({}_{0}\)W\({}_{0}\) QP corrections to plane-wave G\({}_{0}\)W\({}_{0}\) results. As expected, the differences are very similar to those between plane-wave G\({}_{0}\)W\({}_{0}\) and the explicit G\({}_{0}\)W\({}_{0}\) with the 1C-basis. In particular, the RMSE is 0.35 eV for all clusters and reduces to 0.30 eV when the smallest cluster is excluded. This result demonstrates that the key obstacle to improving the ML-GW approach is the development of a better basis set.
Finally, we test the ability of the ML-GW approach to predict the quasiparticle energies of clusters which are larger than those included in the training data. For this, we only include clusters with up to \(N_{max}\) Si atoms in the training set with \(N_{max}\) being 60, 50 and 40. Again, the training set only includes clusters with randomly displaced atoms and the test set consists of the relaxed clusters. The predicted ML-GW results for the whole set of relaxed clusters are shown in Fig. 7. From this graph, it is clear that the accuracy of the prediction for the largest clusters deteriorates as \(N_{max}\) is reduced: while for \(N_{max}=60\), the gaps and QP corrections for clusters with more than 60 Si atoms are still highly accurate, larger differences are observed for \(N_{max}=50\). For \(N_{max}=40\), errors as large as 1 eV are obtained for the gaps of clusters with around 50 Si atoms. Fig. 7(f) shows that the large error in the gaps is a consequence of having a negative error in the QP shifts for occupied states and a positive error in the shift for unoccupied states. In other words: instead of a cancellation, we get an accumulation of errors when computing HOMO-LUMO gaps.
## III Discussion
We have developed a machine learning approach to predict the interacting density-density response function (DDRF) of materials. To achieve this, we introduce a decomposition of the DDRF into atomic contributions which form the output of a neural network. We also introduce the NDM descriptor which is a generalization of the
Figure 4: HOMO-LUMO gaps of hydrogenated silicon clusters from plane-wave G\({}_{0}\)W\({}_{0}\) and G\({}_{0}\)W\({}_{0}\) calculations using the 1C-DDRF and ML-G\({}_{0}\)W\({}_{0}\).
Figure 5: Histogram of difference in quasiparticle corrections from G\({}_{0}\)W\({}_{0}\) using the 1C-DDRF and ML-G\({}_{0}\)W\({}_{0}\) for the 10 highest valence orbitals and the 10 lowest conduction orbitals of hydrogenated silicon clusters. The mean-field energies are referenced to the middle of the mean-field HOMO-LUMO gap. The energies of the smallest cluster were excluded.
Figure 6: Histogram of difference in quasiparticle corrections from plane-wave G\({}_{0}\)W\({}_{0}\) and ML-G\({}_{0}\)W\({}_{0}\) DDRF for the 10 highest valence orbitals and the 10 lowest conduction orbitals of hydrogenated silicon clusters. The mean-field energies are referenced to the middle of the mean-field HOMO-LUMO gap. The energies of the smallest cluster were excluded.
Figure 7: HOMO-LUMO gaps (left panels) and errors in quasiparticle shifts (right panels) from explicit G\({}_{0}\)W\({}_{0}\) calculations with the 1C-DDRF and from ML-G\({}_{0}\)W\({}_{0}\) trained on clusters containing up to \(N_{max}=60\) Si atoms (upper panels), \(N_{max}=50\) Si atoms (middle panels) and \(N_{max}=40\) Si atoms (lower panels). The red vertical line indicates \(N_{max}\). The panels on the right hand side only contain results for clusters with more Si atoms than \(N_{max}\). The mean-field energies are referenced to the middle of the mean-field HOMO-LUMO gap.
widely used SOAP descriptor [16]: instead of symmetrizing the descriptor using a Haar integral over a symmetry group [43], we construct the tensor product of the expansion coefficients of the neighbourhood density which transforms under rotation in the same way as the atomic contributions to the DDRF. Thus, while not fully covariant, our approach is able to distinguish between different orientations of a chemical environment, which is a key requirement for predicting functions, such as the DDRF.
The machine learning technique for DDRFs is then combined with the GW approach. The resulting approach is called the ML-GW approach. We apply this method to hydrogenated silicon clusters. The ML-GW approach reproduces HOMO-LUMO gaps and quasiparticle energies of GW calculations using the explicitly calculated 1C-DDRF, i.e. the DDRF in a pair basis where the basis functions of each pair are centered on the same atom, with an accuracy of about 0.1 eV. The accuracy of the results deteriorates when it is applied to clusters which are larger than those included in the training set.
However, the error of ML-GW is significantly larger when compared to standard plane-wave GW results: HOMO-LUMO gaps are reproduced to within 0.5 eV, but the error reduces to 0.4 eV when the smallest cluster is excluded from the test set. These errors are comparable to those obtained by Rohlfing in his GW calculations for silane using a model dielectric function [9].
These findings demonstrate that the main challenge towards improving the ML-GW method is the construction of better local basis sets for the DDRF. The basis used for the 2C-DDRF can be improved straightforwardly by using larger basis sets, such as aug-admm-2, admm-3 or aug-admm-3 [41]. However, it is more difficult to increase the basis used for the 1C-DDRF as this leads to linear dependencies which deteriorate the predictive accuracy of the neural network. This was also observed by Grisafi et al. [23] when predicting the expansion coefficients of the electronic density using the symmetry-adapted SOAP kernel [20]. In the future, we plan to explore the use of orthogonal radial basis sets, such as Laguerre polynomials, instead of solid harmonic Gaussians.
We expect that the ML-GW method can be applied to calculate quasiparticle energies in systems that have so far been out of reach for standard implementations. Examples include disordered materials, liquids, interfaces or nanoparticles. It could also be combined with on-the-fly machine learning methods [44] to perform GW calculations on molecular-dynamics snapshots to determine finite-temperature quasiparticle energies.
## IV Methods
### Data generation
The atomic structures of the hydrogenated silicon clusters were obtained in the same way as described by Zauchner et al. [45]: starting from the Si\({}_{123}\)H\({}_{100}\) cluster of the silicon Quantum Dot data set [46], we remove the silicon atom furthest from the centre of the cluster, terminate the dangling bonds with hydrogen atoms and relax the resulting structure using DFT. The process is repeated until only 10 silicon atoms remain. In addition, we also include the clusters with fewer than 123 Si atoms from the silicon Quantum Dot data set. From this set of silicon clusters, only clusters with fewer than 60 silicon atoms were used in the training set for DDRF prediction. From each cluster with fewer than 60 silicon atoms, we created six additional clusters in which random displacements were added to the atomic positions. The displacements were drawn from a normal distribution with a mean of 0 Å and a standard deviation of 0.1 Å. Finally, calculations were also carried out for clusters with between 60 and 70 silicon atoms. These clusters are not part of the training set, but are used to test the extrapolation capacity of the ML approach.
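The random-displacement augmentation used to build the training set is straightforward to script. The sketch below is a minimal NumPy illustration of this step (the function name and the toy cluster are our own; in practice the structures come from the relaxed clusters described above):

```python
import numpy as np

def make_perturbed_copies(positions, n_copies=6, sigma=0.1, seed=0):
    """Return `n_copies` randomly displaced versions of an (N, 3) array of
    atomic positions; every Cartesian component of every atom is shifted by an
    independent draw from N(0, sigma^2), with sigma = 0.1 Angstrom as above."""
    rng = np.random.default_rng(seed)
    positions = np.asarray(positions, dtype=float)
    return [positions + rng.normal(0.0, sigma, size=positions.shape)
            for _ in range(n_copies)]

# Toy example: six perturbed copies of a five-atom cluster.
toy_cluster = np.array([[0.0, 0.0, 0.0], [0.9, 0.9, 0.9], [-0.9, -0.9, 0.9],
                        [-0.9, 0.9, -0.9], [0.9, -0.9, -0.9]])
augmented = make_perturbed_copies(toy_cluster)
```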
### DFT and GW calculations
The DDRF and QP corrections were calculated using the BerkeleyGW software package [47; 48]. Mean-field DFT calculations were performed using the Quantum Espresso code [49; 50]. Norm-conserving pseudopotentials from the Quantum Espresso Pseudopotential Library were used. The parameters of the DFT calculations were the same as those used by Zauchner et al. [45]: a plane-wave cut-off of 65 Ry, and a supercell with sufficient vacuum to avoid interactions between periodic images. For the calculation of the DDRF, a total of 1000 Kohn-Sham states were used in the summation, together with a plane-wave cut-off of 6 Ry and a truncated Coulomb interaction. The QP corrections were calculated using the generalized plasmon-pole approximation (GPP) [7], an explicit sum over 1000 Kohn-Sham states and a static remainder correction [51]. To calculate the HOMO and LUMO energies, the vacuum level was determined by averaging the electrostatic potential over the faces of the supercell.
### Projection onto intermediate basis
We first use BerkeleyGW to calculate the inverse dielectric matrix \(\epsilon_{\mathbf{G}\mathbf{G}^{\prime}}^{-1}\) in a plane-wave basis [48]. From this, we determine the interacting DDRF via
\[\chi_{\mathbf{G}\mathbf{G}^{\prime}}=(\epsilon^{-1}_{\mathbf{G}\mathbf{G}^{\prime}}-\delta_{\mathbf{G}\mathbf{G}^{\prime}})/v_{\mathbf{G}} \tag{20}\]
with \(v_{\mathbf{G}}\) being the Fourier transform of the truncated Coulomb interaction.
Next, the DDRF in real space is obtained as
\[\chi(\mathbf{r},\mathbf{r}^{\prime})=\frac{1}{V}\sum_{\mathbf{G},\mathbf{G}^{ \prime}}e^{i\mathbf{G}\cdot\mathbf{r}}\chi_{\mathbf{G}\mathbf{G}^{\prime}}e^{ -i\mathbf{G}^{\prime}\cdot\mathbf{r}^{\prime}}, \tag{21}\]
where \(V\) is the volume of the supercell.
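As an illustration, once \(\epsilon^{-1}_{\mathbf{G}\mathbf{G}^{\prime}}\) and \(v_{\mathbf{G}}\) are available as arrays, Eq. (20) amounts to a simple element-wise operation. The NumPy sketch below shows this step on a toy matrix; the array names are illustrative and do not correspond to the BerkeleyGW output format.

```python
import numpy as np

def ddrf_from_eps_inv(eps_inv, v_G):
    """Interacting DDRF in the plane-wave basis, Eq. (20):
    chi_GG' = (eps_inv_GG' - delta_GG') / v_G, with v_G attached to the row index G."""
    n_G = eps_inv.shape[0]
    return (eps_inv - np.eye(n_G)) / v_G[:, None]

# Toy example with three plane waves.
eps_inv = np.array([[0.50, 0.01, 0.00],
                    [0.01, 0.80, 0.02],
                    [0.00, 0.02, 0.90]])
v_G = np.array([10.0, 5.0, 2.5])
chi = ddrf_from_eps_inv(eps_inv, v_G)
```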
Starting from a set of real atom-centred basis functions \(\phi^{i}_{\alpha_{i}}(\mathbf{r})\), where \(\alpha_{i}\) labels the basis function on atom \(i\), we construct an orthogonal basis set \(\tilde{\phi}^{i}_{\alpha_{i}}(\mathbf{r})\)
\[\tilde{\phi}^{i}_{\alpha_{i}}(\mathbf{r})=\sum_{k}\sum_{\alpha_{k}}A^{\alpha_{i} \alpha_{k}}_{ik}\phi^{k}_{\alpha_{k}}(\mathbf{r}), \tag{22}\]
where \(A^{\alpha_{i}\alpha_{k}}_{ik}\) is the matrix of eigenvectors of the overlap matrix. The coefficients of the DDRF when expanded in the orthogonalized basis are
\[\tilde{\chi}^{ij}_{\alpha_{i}\alpha_{j}}=\frac{1}{V}\sum_{\mathbf{ G},\mathbf{G}^{\prime}}\chi_{\mathbf{G},\mathbf{G}^{\prime}}\\ \times\int_{-\infty}^{\infty}\tilde{\phi}^{i}_{\alpha_{i}}( \mathbf{r})e^{i\mathbf{G}\cdot\mathbf{r}}d\mathbf{r}\int_{-\infty}^{\infty}e^ {-i\mathbf{G}^{\prime}\cdot\mathbf{r}^{\prime}}\tilde{\phi}^{j}_{\alpha_{j}} (\mathbf{r}^{\prime})d\mathbf{r}^{\prime}, \tag{23}\]
where, due to the localised nature of the basis functions, we extended the integral from an integral over the supercell to an integral over all space. These integrals are proportional to the Fourier transforms of the basis functions (or their complex conjugates).
We then transform back to the non-orthogonal localised basis set using Eq. (22) to find
\[\chi(\mathbf{r},\mathbf{r}^{\prime})=\sum_{\alpha_{i}\alpha_{j}}\sum_{ij}\tilde{\chi}^{ij}_{\alpha_{i}\alpha_{j}}\tilde{\phi}^{i}_{\alpha_{i}}(\mathbf{r})\tilde{\phi}^{j}_{\alpha_{j}}(\mathbf{r}^{\prime})=\\ \sum_{\alpha_{k}\alpha_{l}}\sum_{kl}\sum_{\alpha_{i}\alpha_{j}}\sum_{ij}A^{\alpha_{i}\alpha_{k}}_{ik}A^{\alpha_{j}\alpha_{l}}_{jl}\tilde{\chi}^{ij}_{\alpha_{i}\alpha_{j}}\phi^{k}_{\alpha_{k}}(\mathbf{r})\phi^{l}_{\alpha_{l}}(\mathbf{r}^{\prime})\\ =\sum_{\alpha_{k}\alpha_{l}}\sum_{kl}\chi^{kl}_{\alpha_{k}\alpha_{l}}\phi^{k}_{\alpha_{k}}(\mathbf{r})\phi^{l}_{\alpha_{l}}(\mathbf{r}^{\prime}), \tag{24}\]
where we defined
\[\chi^{kl}_{\alpha_{k}\alpha_{l}}=\sum_{\alpha_{i}\alpha_{j}}\sum_{ij}\tilde{\chi}^{ij}_{\alpha_{i}\alpha_{j}}A^{\alpha_{i}\alpha_{k}}_{ik}A^{\alpha_{j}\alpha_{l}}_{jl}. \tag{25}\]
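The orthogonalization and back-transformation steps in Eqs. (22)-(25) can be prototyped with dense linear algebra. The sketch below uses a canonical-orthogonalization variant, which additionally rescales the overlap-matrix eigenvectors by the inverse square root of their eigenvalues and discards near-null directions; this rescaling and the eigenvalue cutoff are our own assumptions beyond the bare eigenvector transformation described in the text.

```python
import numpy as np

def orthogonalizing_matrix(S, tol=1e-8):
    """Rows of A define orthonormalized functions phi_tilde_i = sum_k A[i, k] phi_k
    (cf. Eq. (22)), so that A @ S @ A.T = I; eigenvalues below `tol` are dropped
    to suppress linear dependencies in the basis."""
    evals, evecs = np.linalg.eigh(S)
    keep = evals > tol
    return (evecs[:, keep] / np.sqrt(evals[keep])).T

def back_to_nonorthogonal(chi_tilde, A):
    """Eq. (25): chi_kl = sum_ij chi_tilde_ij A[i, k] A[j, l]."""
    return A.T @ chi_tilde @ A

# Toy example with a three-function, slightly non-orthogonal basis.
S = np.array([[1.0, 0.2, 0.0], [0.2, 1.0, 0.1], [0.0, 0.1, 1.0]])
A = orthogonalizing_matrix(S)
chi_tilde = np.diag([1.0, -0.5, 0.2])
chi = back_to_nonorthogonal(chi_tilde, A)
```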
The basis functions we employed are the real solid harmonic Gaussians as defined in LibInt2 [52]
\[\phi_{lm}(r,\theta,\phi)=N_{l}(\beta)r^{l}e^{-\beta r^{2}}R_{lm}(\theta,\phi), \tag{26}\]
where \(\beta\) is a decay parameter, \(N_{l}(\beta)\) is a normalization factor and \(R_{lm}\) are the real spherical harmonics given by [53]
\[R_{lm}(\theta,\phi)= \tag{27}\] \[\begin{cases}\frac{i}{\sqrt{2}}\left(Y_{l-|m|}(\theta,\phi)-(-1)^{m}Y_{l|m|}(\theta,\phi)\right)&\text{if }m<0\\ Y_{lm}(\theta,\phi)&\text{if }m=0\\ \frac{1}{\sqrt{2}}\left(Y_{l-|m|}(\theta,\phi)+(-1)^{m}Y_{l|m|}(\theta,\phi)\right)&\text{if }m>0,\end{cases} \tag{28}\]
where \(Y_{lm}(\theta,\phi)\) are the complex spherical harmonics with the Condon-Shortley phase convention. Kuang and Lin showed that the Fourier transform of the complex solid harmonic Gaussians is again a solid harmonic Gaussian [54]
\[\frac{1}{(2\pi)^{3/2}}\int d\mathbf{r}e^{-i\mathbf{G}\cdot \mathbf{r}}N_{l}(\beta)r^{l}e^{-\beta r^{2}}Y_{lm}(\hat{\mathbf{r}})\\ =(-i)^{l}\tilde{N}_{l}(\beta)G^{l}e^{-G^{2}/(4\beta)}Y_{lm}(\hat{ \mathbf{G}}), \tag{29}\]
with \(\tilde{N}_{l}(\beta)=N_{l}(\beta)/(2\beta)^{3/2}\). The Fourier transform of the real solid harmonic Gaussians can then be easily computed using Eq. (28).
The basis set used in this work is a modified version of the admm-2 basis set [41] (see Appendix for details), in which the s-orbitals were removed and contracted Gaussians were uncontracted into individual basis functions. Removing the s-orbitals ensures that \(\int d\mathbf{r}\chi(\mathbf{r},\mathbf{r}^{\prime})=0\) since only the Fourier transform of s-orbitals has a \(\mathbf{G}=0\) contribution.
### Projection onto atomic basis
The fully atom-centred basis set also consists of solid harmonic Gaussians. The basis set was constructed following the same procedure as in the DScribe library [55], where individual basis functions are given by
\[\psi_{nlm}(r,\theta,\phi)=N_{l}(\beta_{nl})r^{l}e^{-\beta_{nl}r^{2}}R_{lm}( \theta,\phi), \tag{30}\]
where the basis set is truncated at a maximum angular momentum \(l_{max}\) and a maximum principal quantum number \(n_{max}\). For silicon atoms we use \(l_{max}=n_{max}=4\). For hydrogen atoms we use \(l_{max}=n_{max}=3\).
The exponents \(\beta_{nl}\) are constructed such that the corresponding basis functions decay to a small threshold value \(T\) at a cutoff radius \(R_{n}\), i.e. \(\beta_{nl}=-\ln(\frac{T}{R_{n}^{l}})/R_{n}^{2}\) with \(T=10^{-3}\) Å\({}^{l}\) being a threshold parameter. The cutoff radius \(R_{n}=R_{i}+(R_{o}-R_{i})/n\) lies between an inner radius \(R_{i}\) and an outer radius \(R_{o}\). For hydrogen atoms, we used \(R_{i}=0.1\) Å and \(R_{o}=3.0\) Å, and for silicon atoms, we used \(R_{i}=1.0\) Å and \(R_{o}=8.0\) Å. Both \(R_{i}\) and \(R_{o}\) were optimized to minimize linear dependencies in the basis set, as such dependencies significantly deteriorate the accuracy of the neural network predictions. A similar observation was made by Grisafi et al. [23] when learning electron densities, although a different approach was taken to remedy this issue in their work.
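For concreteness, the tabulation of the decay parameters can be written as a short helper; the sketch below follows the cutoff-radius and threshold conventions quoted above, with the function name being our own.

```python
import numpy as np

def gto_exponents(n_max, l_max, r_inner, r_outer, threshold=1e-3):
    """Decay parameters beta[n-1, l] for radial functions r^l * exp(-beta r^2),
    chosen so that the radial part drops to `threshold` at the cutoff radius
    R_n = r_inner + (r_outer - r_inner) / n (radii in Angstrom)."""
    betas = np.empty((n_max, l_max + 1))
    for n in range(1, n_max + 1):
        r_n = r_inner + (r_outer - r_inner) / n
        for l in range(l_max + 1):
            betas[n - 1, l] = -np.log(threshold / r_n**l) / r_n**2
    return betas

# Silicon setting used in the text: n_max = l_max = 4, R_i = 1.0 A, R_o = 8.0 A.
beta_si = gto_exponents(4, 4, 1.0, 8.0)
```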
In order to compute the coefficients of the atomic contributions to the DDRF in the fully atom-centred basis, the same procedure as for the intermediate basis was used: the basis was first orthogonalized by computing the eigenvectors of the overlap matrix. The atomic DDRFs in the intermediate basis were then projected onto the orthogonalized fully atom-centred basis, with the overlaps between the different basis functions being computed using LibInt-2 [52]. Finally, the atomic DDRFs were transformed back to the non-orthogonal basis, producing the desired coefficients \(\chi^{(i)}_{nlm,n^{\prime}l^{\prime}m^{\prime}}\).
### Descriptors
The basis set for neighbourhood densities was generated using the same procedure as for the fully atom-centred basis for the DDRF. However, s-orbitals were not removed and the basis functions of the admm-2 basis set were not included. We used \(R_{i}=1.0\) Å for both hydrogen and silicon atoms, and \(R_{o}=4.0\) Å for hydrogen atoms and \(R_{o}=9.0\) Å for silicon atoms. The exponents of the Gaussians in Eq. (14) were set such that the standard deviation of the Gaussians is 0.5 Å. LibInt-2 [52] was again used to compute the required integrals for the projection.
### Neural network
A dense neural network with four hidden layers with 2000, 1500, 1000 and 2000 nodes, respectively, was constructed for both silicon and hydrogen atoms. Each layer uses a Leaky-ReLU activation function with a leak parameter of 0.1. The output layer was further symmetrized by adding its transpose. The loss used was the mean-squared error between the predicted and true expansion coefficients \(\chi^{(i)}_{nlm,n^{\prime}l^{\prime}m^{\prime}}\). The neural network was trained on the perturbed clusters for 20,000 epochs. We found that adding dropout to the layers does not significantly improve the quasiparticle energies resulting from the predictions, which is likely due to the similarity between the atomic environments in the training and test sets.
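A schematic Keras reconstruction of this network is shown below. The descriptor length, the output size, the choice of optimizer and the framework itself are illustrative assumptions (the text does not specify them); only the layer widths, the Leaky-ReLU slope, the output symmetrization and the mean-squared-error loss follow the description above.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_atomic_ddrf_net(descriptor_dim, n_basis):
    """Dense network (2000-1500-1000-2000 hidden nodes, Leaky ReLU with slope 0.1)
    predicting the matrix of atomic DDRF expansion coefficients; the raw output
    is symmetrized by adding its transpose."""
    inp = layers.Input(shape=(descriptor_dim,))
    x = inp
    for width in (2000, 1500, 1000, 2000):
        x = layers.Dense(width)(x)
        x = layers.LeakyReLU(alpha=0.1)(x)
    raw = layers.Reshape((n_basis, n_basis))(layers.Dense(n_basis * n_basis)(x))
    sym = layers.Add()([raw, layers.Permute((2, 1))(raw)])  # output plus its transpose
    model = Model(inp, sym)
    model.compile(optimizer="adam", loss="mse")  # optimizer choice is an assumption
    return model

# Example: a small network for a 128-dimensional descriptor and 20 basis functions.
net = build_atomic_ddrf_net(descriptor_dim=128, n_basis=20)
```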
## V Data availability statement
The data that support the findings of this study are available upon reasonable request from the authors.
## VI Acknowledgements
This work was supported through a studentship in the Centre for Doctoral Training on Theory and Simulation of Materials at Imperial College London funded by the EPSRC (EP/L015579/1). We acknowledge the Thomas Young Centre under grant number TYC-101. This work used the ARCHER2 UK National Supercomputing Service via J.L.'s membership of the HEC Materials Chemistry Consortium of UK, which is funded by EPSRC (EP/L000202).
|
2301.10863 | Shape Reconstruction from Thoracoscopic Images using Self-supervised
Virtual Learning | Intraoperative shape reconstruction of organs from endoscopic camera images
is a complex yet indispensable technique for image-guided surgery. To address
the uncertainty in reconstructing entire shapes from single-viewpoint occluded
images, we propose a framework for generative virtual learning of shape
reconstruction using image translation with common latent variables between
simulated and real images. As it is difficult to prepare sufficient amount of
data to learn the relationship between endoscopic images and organ shapes,
self-supervised virtual learning is performed using simulated images generated
from statistical shape models. However, small differences between virtual and
real images can degrade the estimation performance even if the simulated images
are regarded as equivalent by humans. To address this issue, a Variational
Autoencoder is used to convert real and simulated images into identical
synthetic images. In this study, we targeted the shape reconstruction of
collapsed lungs from thoracoscopic images and confirmed that virtual learning
could improve the similarity between real and simulated images. Furthermore,
shape reconstruction error could be improved by 16.9%. | Tomoki Oya, Megumi Nakao, Tetsuya Matsuda | 2023-01-25T23:08:41Z | http://arxiv.org/abs/2301.10863v1 | # Shape Reconstruction from Thoracoscopic Images using Self-supervised Virtual Learning
###### Abstract
Intraoperative shape reconstruction of organs from endoscopic camera images is a complex yet indispensable technique for image-guided surgery. To address the uncertainty in reconstructing entire shapes from single-viewpoint occluded images, we propose a framework for generative virtual learning of shape reconstruction using image translation with common latent variables between simulated and real images. As it is difficult to prepare sufficient amount of data to learn the relationship between endoscopic images and organ shapes, self-supervised virtual learning is performed using simulated images generated from statistical shape models. However, small differences between virtual and real images can degrade the estimation performance even if the simulated images are regarded as equivalent by humans. To address this issue, a Variational Autoencoder is used to convert real and simulated images into identical synthetic images. In this study, we targeted the shape reconstruction of collapsed lungs from thoracoscopic images and confirmed that virtual learning could improve the similarity between real and simulated images. Furthermore, shape reconstruction error could be improved by 16.9%.
Keywords:3D shape reconstruction Virtual learning Variational Autoencoder Endoscopic image.
## 1 Introduction
Endoscopic camera images have been used to reach organs during surgery. However, it is difficult to grasp the three-dimensional structure of organs inside the body owing to the high uncertainty caused by organ deformation and narrow visibility during surgery. Therefore, research is underway to reconstruct the 3D shape of organs from endoscopic images that are acquired during surgery. In this paper, we focus on the reconstruction of 3D shapes using single-viewpoint endoscopic camera images, which is a key technique for image-guided surgery.
Shape reconstruction of biological organs from endoscopic camera images has been reported in studies on the recovery of three-dimensional information of the entire stomach, including texture, from gastric endoscopic images[1], and on model-based positioning of the liver[2]. The former reconstructs shape information from video, but only recovers the shape of the organ surface within
the visible range, and the latter calculates the deformation using parameter optimization, but the computational cost and the stability of the optimization calculation remain issues. To address these issues, there have been reports on attempts to predict 3D shapes and deformations using statistical shape models and graph convolutional networks, which describe the morphological characteristics of biological organs in terms of low-dimensional parameters[3][4][5][6]. In the field of medical imaging, data augmentation with simulated data generated from statistical shape models has been shown to be effective. Tang et al. achieved an improvement in segmentation accuracy with a data augmentation method using statistical shape models compared with a method using real images[7]. However, it has been reported that if an excessive number of simulated images is used for training, the accuracy decreases despite the increase in training data; hence, the problem of treating simulated and real images equally still remains.
In this paper, we propose a framework for self-supervised virtual learning for the shape reconstruction of deformed organs using image translation with common latent variables between simulated and real images from endoscopic camera images. This method solves the problem of insufficient training data in medical imaging by using statistically generated simulated images. In this paper, we refer to this learning method as "virtual learning." We use a Variational Autoencoder (VAE)[8] to extract latent variables that are common to both simulated and real images, and construct a framework that accepts two images with different features that humans may regard as equivalent as inputs and converts them into identical synthetic images. Although some studies[9] have reported the generation of medical images that represent pathological, morphological, and anatomical variations of VAEs, there have been no reports of their application to simulation-based learning, which is the subject of this study. Since our framework can improve the similarity between simulated and real images generated from 3D CT images of individual patients, it can reconstruct the shape of lungs from endoscopic images in thoracoscopic lung cancer resection with higher accuracy than conventional methods.
Figure 1: Examples of real and simulated images in endoscopic images, (a) real image \(I_{R}\) extracted from a surgical video, (b) different appearances generated by changing weights, and (c) simulated image \(I_{S}\) corresponding to the real image
## 2 Method
### Dataset and Problem Definition
In this study, 128 still images were collected from videos of thoracoscopic resections of 9 cases of the right lung and 3 cases of the left lung. The lung shape \(M\) during insufflation was constructed from the CT data taken before the surgery, and the lung shape \(\hat{M}\) during collapse was constructed from the cone-beam CT data taken during the surgery. The lung shape is a mesh model consisting of 402 vertices and 796 triangles.
Fig. 1 shows an example of an endoscopic image \(I_{R}\) and a simulated image \(I_{S}\) generated from the statistical shape models. In Fig. 1(a), part of the surface of the upper lobe of the lung and the inner wall of the thoracic cavity can be seen. Fig. 1(b) shows an example of reproducing different lung shapes by changing the weights of the principal components of the statistical shape models as well as the camera parameters. The lung statistical shape model is a statistical model with low-dimensional parameters describing the displacement vectors per vertex obtained from the vertex-correlated lung shape meshes obtained by mesh deformation alignment[10][11] between \(M\) and \(\hat{M}\). Owing to the collapse deformation, the lung shape at the time of air inclusion, which can be obtained before surgery, differs significantly from the lung shape at the time of collapse during surgery. Therefore, the simulated image reproduces lungs with various appearances and solves the problem pertaining to a lack of data, which is a persistent issue in the field of medical imaging. Fig. 1(c) shows the results of manually adjusting each parameter to correspond to the surgical scene under the supervision of a surgeon. Since the lung surface is transparent and the tumor location, bronchus, and vascular structures can be comprehended, it can be used for surgical assistance. If the parameters of statistical shape models can be automatically calculated using the endoscopic images and the two can be aligned, the shape of organs and anatomical structures can be identified during surgery.
In endoscopic surgery, several situations are considered for the endoscopic images and target organs to be captured during an operation. In this study, we considered the following situations that are commonly observed in most surgeries.
* The initial shape \(M\) of the organ to be operated on is obtained from the 3D-CT measured before the operation.
* A part of the surface of the organ can be observed in the endoscopic image \(I_{R}\), but more than 50 % of the entire organ is occluded.
* Information on the position of the endoscope camera and the direction of the eye are not available.
* The organ is deformed during observation, but its deformation is statistically predictable.
In this study, we aimed to obtain the 3D shape of an organ under surgery directly from the endoscope image via virtual learning using a simulated image
that can be generated from the statistical shape models under the above conditions, and using image translation with common latent variables between the endoscope image \(I_{R}\) and the simulated image \(I_{S}\). The images used in this study were standardized to 180\(\times\)120 pixels, and 10,000 simulated images generated for each case were used for training.
### Framework
Fig. 2 shows the proposed framework, which consists of a VAE to achieve image transformation with common latent variables between real and simulated images, and a convolutional neural network (CNN) to learn the relationships between synthetic images and model parameters. In the VAE, the real \(I_{R}\) and the simulated \(I_{S}\) images, which have different features but are regarded as equivalent by humans, are transformed into an identical synthetic image. Then, in the CNN, we perform self-supervised virtual learning, using the synthetic image and the initial shape \(M\) of the organ as input, and outputting the deformation mesh and camera parameters. The details of the VAE are described in Section 2.3, and the details of the CNN are described in Section 2.4.
### Image translation
We propose a transformation function \(f\) that translates real and simulated images, which have different detailed features but are regarded as equivalent by humans, into an identical synthetic image. In this study, the input images to VAE are the real image \(I_{R}\) and the simulated image \(I_{S}\), which is an artificial representation of \(I_{R}\). The transformation function \(f\) plays the role of translating \(I_{R}\) to \(I_{S}\) and reconstructing \(I_{S}\) to \(I_{S}\).
Figure 2: Proposed Framework. Variational Autoencoder (VAE) trains the common latent variables in the real image \(I_{R}\) and the simulated image \(I_{S}\), while the convolutional neural network (CNN) trains the relationship between the synthetic image and the model parameters \(\theta\).
The VAE consists of an encoder, which converts input data \(x\) into latent variables \(z\), and a decoder, which reconstructs the same input data \(x\) from the latent variables \(z\). The encoder and decoder in our study each consist of three fully connected layers. Let \(\phi\) and \(\theta\) be the parameters of the neural networks of the encoder and decoder, and let \(q_{\phi}(z|x)\) and \(p_{\theta}(x|z)\) be the probability distributions of the encoder and decoder, respectively. The approximate posterior \(q_{\phi}(z|x)\) is a Gaussian distribution; the encoder learns the parameters \(\phi\) of \(q_{\phi}(z|x)\) and outputs its mean \(\mu\) and standard deviation \(\sigma\), while the decoder learns the parameters \(\theta\) of \(p_{\theta}(x|z)\), which reconstructs \(x\) from \(z\).
Since \(I_{R}\) and \(I_{S}\) are compressed into a common latent variable, and \(f(I_{R})\) and \(f(I_{S})\) are output from the common latent variable, we assumed that it would be possible to reconstruct the identical synthetic image from the common latent variable; therefore, we adopted VAE in this study. Let \(g\) be the transformation function of VAE in the usual learning method[12]. When the real image \(I_{R}\) is input into the variable function \(g\), the relationship between the input \(I_{R}\) and the output \(g(I_{R})\) is \(g(I_{R})\approx I_{R}\). When the simulated image \(I_{S}\) is input into the variable function \(g\), the relationship between the input \(I_{S}\) and the output \(g(I_{S})\) is \(g(I_{S})\approx I_{S}\). By contrast, in this study, when the real image \(I_{R}\) is input into the transformation function \(f\) of VAE, the relationship between the input \(I_{R}\) and the output \(f(I_{R})\) is \(f(I_{R})\approx I_{S}\). When the simulated image \(I_{S}\) is used as input, the relationship between the input \(I_{S}\) and the output \(f(I_{S})\) is \(f(I_{S})\approx I_{S}\). By using the transformation function \(f\) to transform two images with different features into the identical synthetic image, we assumed it would be possible to learn that \(I_{R}\) and \(I_{S}\) are equivalent. In addition, synthetic images \(f(I_{R})\) and \(f(I_{S})\), which are transformed from \(I_{R}\) and \(I_{S}\), are highly similar to each other.
In VAE, the error between the input image to the encoder and the image reconstructed by the decoder must be minimized. In addition, a regularization term that minimizes the error between the distribution of \(q_{\phi}(z|x)\) estimated by the encoder and \(p_{\theta}(z)\), which is the prior distribution of \(z\)(both Gaussian), is required. Therefore, in VAE training, we introduce a loss function that is a weighted sum of the reconstruction error shown in the first term of Eq. (1) and the Kullback-Leibler Divergence (KLD) shown in the second term of Eq. (1).
\[\mathcal{L}=\|x-\tilde{x}\|_{2}^{2}+\lambda_{KL}D_{KL}(q(z|x)\parallel p(z)) \tag{1}\]
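A minimal Keras sketch of such a VAE and of the loss in Eq. (1) is given below. The layer widths, latent dimensionality and activations are our own illustrative assumptions; the paper only states that the encoder and decoder each consist of three fully connected layers, and that for a real image the reconstruction target is the corresponding simulated image.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_encoder(input_dim, latent_dim=32):
    """Three fully connected layers producing the mean and log-variance of q_phi(z|x)."""
    x_in = layers.Input(shape=(input_dim,))
    h = x_in
    for width in (1024, 512, 256):   # widths are illustrative assumptions
        h = layers.Dense(width, activation="relu")(h)
    return Model(x_in, [layers.Dense(latent_dim)(h), layers.Dense(latent_dim)(h)])

def build_decoder(output_dim, latent_dim=32):
    """Three fully connected layers mapping a latent code back to an image vector."""
    z_in = layers.Input(shape=(latent_dim,))
    h = z_in
    for width in (256, 512, 1024):
        h = layers.Dense(width, activation="relu")(h)
    return Model(z_in, layers.Dense(output_dim, activation="sigmoid")(h))

def reparameterize(z_mean, z_log_var):
    """Sample z = mu + sigma * eps with eps ~ N(0, I)."""
    eps = tf.random.normal(tf.shape(z_mean))
    return z_mean + tf.exp(0.5 * z_log_var) * eps

def vae_loss(x_target, x_recon, z_mean, z_log_var, lambda_kl=5.0):
    """Eq. (1): squared reconstruction error plus weighted KL divergence between
    N(z_mean, exp(z_log_var)) and the standard Gaussian prior. For a real image
    I_R, x_target is the corresponding simulated image I_S."""
    recon = tf.reduce_sum(tf.square(x_target - x_recon), axis=-1)
    kl = -0.5 * tf.reduce_sum(
        1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1)
    return tf.reduce_mean(recon + lambda_kl * kl)
```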
### Virtual learning
In this section, we describe virtual learning using the simulated images that can be generated statistically. In statistical shape models, the collapse deformations assumed during surgery can be generated before surgery from the preoperative CT models of individual patients by changing the weights related to the lung collapse volume. The organ deformations are modeled around the average collapse deformation learned from the training data. The vertex coordinates \(\upsilon_{i}^{{}^{\prime}}\) of the generated \(\hat{M}\) can be obtained by Eq. (2).
\[\upsilon_{i}^{{}^{\prime}}=\upsilon_{i}+\bar{u}+\omega\cdot u_{i} \tag{2}\]
where \(\upsilon_{i}\) is the position of the vertex of the lung shape during aeration, \(\bar{u}\) is the average displacement due to collapse deformation, and \(u_{i}\) is the displacement vector obtained from the lung statistical shape model. \(\omega\) is the weight of the displacement, where \(\omega<1.0\) represents a squeezed shape compared with the average displacement due to collapse deformation, and \(\omega>1.0\) represents a bulged shape compared with the average displacement due to collapse deformation. In this study, the parameters were varied with a width of 0.1 around the average displacement due to collapse deformation (\(\omega=1.0\)).
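Eq. (2) itself is a one-line operation on the mesh vertices; the NumPy sketch below illustrates how a family of collapsed shapes can be generated by sweeping \(\omega\) (the array contents are placeholders, the real meshes come from the registered CT data):

```python
import numpy as np

def deform_vertices(v, u_mean, u_mode, omega):
    """Eq. (2): v'_i = v_i + u_bar + omega * u_i.

    v       : (402, 3) vertices of the inflated lung mesh
    u_mean  : (402, 3) average displacement due to collapse deformation
    u_mode  : (402, 3) displacement vector from the statistical shape model
    omega   : scalar weight (< 1 more squeezed, > 1 more bulged than average)
    """
    return v + u_mean + omega * u_mode

# Sweep omega around the average deformation (omega = 1.0) in steps of 0.1.
v = np.zeros((402, 3)); u_mean = np.zeros((402, 3)); u_mode = np.zeros((402, 3))
shapes = [deform_vertices(v, u_mean, u_mode, w) for w in np.arange(0.5, 1.51, 0.1)]
```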
Next, \(\hat{M}\) is rendered with the set camera parameters to generate the simulated image \(I_{S}\). The camera parameters that are expected to change during the surgery are the camera position in three dimensions and the focus point in two dimensions, for a total of five dimensions. In this study, a large number of simulated images \(I_{S}\) were generated by changing the parameters with a width of 15 mm in the depth direction of the camera position, 25 mm in the other two directions, and 15 mm in the horizontal and vertical directions of the focus point for the right lung case, and 10 mm in the three directions of the camera position and the two directions of the focus point mentioned above for the left lung case, centering on the camera parameters assumed in advance by the surgeon. To implement the image transformation described in Section 2.3 accurately, \(I_{S}\) was converted into a binary image of the lung region and the background, and then into a lung contour image by a morphological transformation[13].
#### Reconstruction Error
Since the purpose of this study was to obtain the 3D shape of an organ adjusted to an endoscope image, it was desirable to minimize not only the difference between the estimated shape and the target shape but also the deviation of the position and orientation of the obtained shape. In the proposed method, Reconstruction Error, defined in Eq. (3), is introduced as a loss function to calculate the error between mesh vertices after projection.
\[\mathcal{L}_{R}=\frac{1}{n}\sum_{i=1}^{n}\|M_{\theta}\upsilon_{i}-M_{\hat{ \theta}}\hat{\upsilon}_{i}\| \tag{3}\]
where \(M_{\theta}\) and \(M_{\hat{\theta}}\) are the target and estimated projection matrices, respectively, \(\upsilon_{i}\) and \(\hat{\upsilon_{i}}\) are the target and estimated mesh vertex points, respectively, and \(M\) is a \(4\times 4\) matrix used for the perspective projection transformation in generating the rendering image of the organ mesh with camera parameters.
#### Parameter Loss
Since the space in the thoracic cavity is limited, the motion of the endoscopic camera is restricted within a certain range. To constrain the range of variations in the camera parameters, the proposed method introduces the Parameter Loss defined in Eq. (4) as a loss function to calculate the error between the estimated
value \(\hat{\theta}\) and the true value \(\theta\) of the camera parameters and the displacement \(\omega\) in the collapse deformation of the lung.
\[\mathcal{L}_{P}=\frac{1}{n}\sum_{i=1}^{n}|\theta_{i}-\hat{\theta}_{i}|_{2}^{2} \tag{4}\]
The loss function \(\mathcal{L}\) for the entire virtual learning is defined as a weighted linear sum of two loss functions as Eq. (5):
\[\mathcal{L}=\mathcal{L}_{R}+\lambda_{P}\mathcal{L}_{P} \tag{5}\]
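A minimal TensorFlow sketch of the loss terms in Eqs. (3)-(5) is given below; it applies the projection matrices literally as in Eq. (3) (homogeneous coordinates and any perspective division are implementation details we do not model here), and the function names are our own.

```python
import tensorflow as tf

def reconstruction_error(M_true, M_pred, v_true, v_pred):
    """Eq. (3): mean distance between mesh vertices after applying the 4x4
    projection matrices; v_* are (n, 4) homogeneous vertex coordinates."""
    p_true = tf.matmul(v_true, M_true, transpose_b=True)
    p_pred = tf.matmul(v_pred, M_pred, transpose_b=True)
    return tf.reduce_mean(tf.norm(p_true - p_pred, axis=-1))

def parameter_loss(theta_true, theta_pred):
    """Eq. (4): mean squared error on camera parameters and collapse weight."""
    return tf.reduce_mean(tf.square(theta_true - theta_pred))

def total_loss(M_true, M_pred, v_true, v_pred, theta_true, theta_pred, lambda_p=0.5):
    """Eq. (5): weighted sum of reconstruction error and parameter loss."""
    return (reconstruction_error(M_true, M_pred, v_true, v_pred)
            + lambda_p * parameter_loss(theta_true, theta_pred))
```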
## 3 Experiments
By comparing shape reconstruction methods, the effectiveness of the proposed framework in organ shape reconstruction using a single endoscope image was confirmed. The entire network was implemented in Python 3.6.8, Keras, and TensorFlow GPU libraries. Virtual learning was performed using the Adam optimizer with the following parameters: batch size = 60, number of epochs = 500, dropout rate = 0.5, and learning rate = \(1.0\times 10^{-3}\). A single NVIDIA GeForce RTX 2070 GPU was used for training, and the virtual learning took 1.4 hours.
The implementation of this framework enables us to output 3D shapes aligned to the endoscope image from the initial shape. In this study, the mean absolute error (MAE), defined as the shape reconstruction error shown in Section 2.4, was used as the evaluation metric for shape reconstruction accuracy. The MAE evaluates the average error between 3D shape vertices in the camera coordinate system. Based on the comparison conditions used in the study by Tang et al., this study compared the results obtained using only real images [7], using simulated images generated from statistical shape models (virtual learning) [7], and using synthetic images obtained by transforming simulated images with the VAE (the proposed method). For the loss-function weights, \(\lambda_{KL}=5\) and \(\lambda_{P}=0.5\) were adopted based on the results of several sets of experiments.
Cross-validation was performed over all 12 cases, with one case held out for testing in each fold. Table 1 shows the evaluation results of the shape reconstruction in 11 cases, excluding Case 12 owing to the small amount of data. As a result, it can be confirmed that the shape reconstruction error of this method is significantly smaller than that of the method based on Tang et al.'s idea (one-way analysis of variance, ANOVA; \(p<0.05\) significance level).
\begin{table}
\begin{tabular}{c c c} \hline Real & Virtual & Proposed \\ \hline
75.7 \(\pm\) 33.6 & 27.2 \(\pm\) 15.5 & 22.6 \(\pm\) 12.7 \\ \hline \end{tabular}
\end{table}
Table 1: Shape reconstruction error for the proposed method (Proposed), real image only (Real), and virtual learning (Virtual). The mean \(\pm\) standard deviation of MAE [mm].
Fig. 3 shows the visualization of the shape reconstruction results. In Fig. 3(a), the tumor location and vascular structure of the 3D image of the proposed method are closer to the target image than those of other methods. The MAE of the method using real images only is 112.6 mm, virtual learning is 18.7 mm, and the proposed method is 9.7 mm. Fig. 3(b) shows that the output image of the proposed method is closer in appearance to the target image. Fig. 3(c) shows that the pose of the 3D image is closer to the target image than the other methods; however, there is a shift in the tumor position. Virtual learning can reduce the vertex-level error. In addition, the proposed image transformation method enables the constraint of camera parameters to be applied to the real image, and thus an appropriate pose can be estimated.
## 4 Conclusion
In this paper, we proposed a framework of self-supervised virtual learning for the problem of organ shape reconstruction from endoscopic camera images. Virtual learning based on simulated images generated from statistical shape models was used to cope with the small amount of training data, and the transformation to a common generated image with common latent variables between real and simulated images was realized using a VAE. However, the variation in the available real images is limited, and the quality of the image transformation results therefore varies. In the future, we plan to improve the performance of the image transformation.
Figure 3: Shape reconstruction results for the proposed method (Proposed), real image only (Real), and virtual learning (Virtual). The proposed method can more accurately estimate the tumor position, vascular structure, and posture. |
2304.04010 | Non-asymptotic approximations of Gaussian neural networks via
second-order Poincaré inequalities | There is a growing interest on large-width asymptotic properties of Gaussian
neural networks (NNs), namely NNs whose weights are initialized according to
Gaussian distributions. A well-established result is that, as the width goes to
infinity, a Gaussian NN converges in distribution to a Gaussian stochastic
process, which provides an asymptotic or qualitative Gaussian approximation of
the NN. In this paper, we introduce some non-asymptotic or quantitative
Gaussian approximations of Gaussian NNs, quantifying the approximation error
with respect to some popular distances for (probability) distributions, e.g.
the $1$-Wasserstein distance, the total variation distance and the
Kolmogorov-Smirnov distance. Our results rely on the use of second-order
Gaussian Poincar\'e inequalities, which provide tight estimates of the
approximation error, with optimal rates. This is a novel application of
second-order Gaussian Poincar\'e inequalities, which are well-known in the
probabilistic literature for being a powerful tool to obtain Gaussian
approximations of general functionals of Gaussian stochastic processes. A
generalization of our results to deep Gaussian NNs is discussed. | Alberto Bordino, Stefano Favaro, Sandra Fortini | 2023-04-08T13:52:10Z | http://arxiv.org/abs/2304.04010v1 | # Non-asymptotic approximations of Gaussian neural networks via second-order Poincare inequalities
###### Abstract
There is a growing interest on large-width asymptotic properties of Gaussian neural networks (NNs), namely NNs whose weights are initialized according to Gaussian distributions. A well-established result is that, as the width goes to infinity, a Gaussian NN converges in distribution to a Gaussian stochastic process, which provides an asymptotic or qualitative Gaussian approximation of the NN. In this paper, we introduce some non-asymptotic or quantitative Gaussian approximations of Gaussian NNs, quantifying the approximation error with respect to some popular distances for (probability) distributions, e.g. the 1-Wasserstein distance, the total variation distance and the Kolmogorov-Smirnov distance. Our results rely on the use of second-order Gaussian Poincare inequalities, which provide tight estimates of the approximation error, with optimal rates. This is a novel application of second-order Gaussian Poincare inequalities, which are well-known in the probabilistic literature for being a powerful tool to obtain Gaussian approximations of general functionals of Gaussian stochastic processes. A generalization of our results to deep Gaussian NNs is discussed.
## 1 Introduction
There is a growing interest on large-width asymptotic properties of Gaussian neural networks (NNs), namely NNs whose weights or parameters are initialized according to Gaussian distributions (Neal, 1996; Williams, 1997; Der and Lee, 2005; Garriga-Alonso et al., 2018; Lee et al., 2018; Matthews et al., 2018; Novak et al., 2018; Antognini, 2019; Hanin, 2019; Yang, 2019; Aitken and Gur-Ari, 2020; Andreassen and Dyer, 2020; Bracale et al., 2021; Eldan et al., 2021; Basteri and Trevisan, 2022). Let \(\mathcal{N}(\mu,\sigma^{2})\) be a Gaussian distribution with mean \(\mu\) and variance \(\sigma^{2}\), and consider: i) an input \(\mathbf{x}\in\mathbb{R}^{d}\), with \(d\geq 1\); ii) a collection of (random) weights \(\theta=\{w_{i}^{(0)},w,b_{i}^{(0)},b\}_{i\geq 1}\) such that \(w_{i,j}^{(0)}\stackrel{{ d}}{{=}}w_{j}\), with the \(w_{i,j}^{(0)}\)'s being independent and identically distributed as \(\mathcal{N}(0,\sigma_{w}^{2})\), and \(b_{i}^{(0)}\stackrel{{ d}}{{=}}b\), with the \(b_{i}^{(0)}\)'s being independent and identically distributed as \(\mathcal{N}(0,\sigma_{b}^{2})\) for \(\sigma_{w}^{2},\sigma_{b}^{2}>0\); iii) an activation function \(\tau:\mathbb{R}\rightarrow\mathbb{R}\). Then, a (fully connected
feed-forward) Gaussian NN is defined as follows:
\[f_{\mathbf{x}}(n)[\tau,n^{-1/2}]=b+\frac{1}{n^{1/2}}\sum_{j=1}^{n}w_{j}\tau(\langle w _{j}^{(0)},\mathbf{x}\rangle_{\mathbb{R}^{d}}+b_{j}^{(0)}), \tag{1}\]
with \(n^{-1/2}\) being a scaling factor. Neal (1996) characterized the infinitely wide limit of the NN (1), showing that, as \(n\to+\infty\), for any \(\mathbf{x}\in\mathbb{R}^{d}\) the NN \(f_{\mathbf{x}}(n)[\tau,n^{-1/2}]\) converges in distribution to a Gaussian random variable (RV). That is, as a function of \(\mathbf{x}\), the infinitely wide limit of the NN is a Gaussian stochastic process. The proof is an application of the classical Central Limit Theorem (CLT), thus relying on minimal assumptions on \(\tau\) to ensure that \(\mathbb{E}[(g_{j}(\mathbf{x}))^{2}]\) is finite, where \(g_{j}(\mathbf{x})=w_{j}\tau(\langle w_{j}^{(0)},\mathbf{x}\rangle_{\mathbb{R}^{d}}+b_ {j}^{(0)})\). The result of Neal (1996) has been extended to more general matrix input, i.e. \(p>1\) inputs of dimension \(d\), and to deep Gaussian NNs, assuming a "sequential growth" (Der and Lee, 2005) and a "joint growth" (Matthews et al., 2018) of the width over the NN's layers. These results provide asymptotic or qualitative Gaussian approximations of Gaussian NNs, as they do not provide the rate at which the NN converges to the infinitely wide limit.
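The finite-width distribution of the NN in (1) is straightforward to sample, which also underlies the numerical illustrations discussed later in the paper; the NumPy sketch below (function and parameter names are ours) draws independent realizations of \(f_{\mathbf{x}}(n)[\tau,n^{-1/2}]\) for a given input and activation.

```python
import numpy as np

def sample_gaussian_nn(x, n, tau, sigma_w=1.0, sigma_b=1.0, n_samples=10000, seed=0):
    """Draw realizations of the shallow Gaussian NN in (1):
    f_x(n) = b + n^{-1/2} * sum_j w_j * tau(<w_j^{(0)}, x> + b_j^{(0)})."""
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    W0 = rng.normal(0.0, sigma_w, size=(n_samples, n, x.shape[0]))  # inner weights
    b0 = rng.normal(0.0, sigma_b, size=(n_samples, n))              # inner biases
    w = rng.normal(0.0, sigma_w, size=(n_samples, n))               # outer weights
    b = rng.normal(0.0, sigma_b, size=n_samples)                    # outer bias
    return b + (w * tau(W0 @ x + b0)).sum(axis=1) / np.sqrt(n)

samples = sample_gaussian_nn(x=np.ones(3), n=50, tau=np.tanh)
```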
### Our contribution
In this paper, we consider non-asymptotic or quantitative Gaussian approximations of the NN (1), quantifying the approximation error with respect to some popular distances for (probability) distributions. To introduce our results, let \(d_{W_{1}}\) be the 1-Wasserstein distance and consider a Gaussian NN with a 1-dimensional unitary input, i.e. \(d=1\) and \(x=1\), unit weight variance, i.e. \(\sigma_{w}^{2}=1\), and no biases, i.e. \(b_{i}^{(0)}=b=0\) for any \(i\geq 1\). Under this setting, our result reads as follows: if \(\tau\in C^{2}(\mathbb{R})\) is such that \(\tau\) and its first and second derivatives are bounded above by the polynomial envelope \(a+b|x|^{\gamma}\), for \(a,b,\gamma>0\), and \(N\sim\mathcal{N}(0,\sigma^{2})\) with \(\sigma^{2}\) being the variance of the NN, then for any \(n\geq 1\)
\[d_{W_{1}}(f_{1}(n)[\tau,n^{-1/2}],N)\leq\frac{K_{\sigma^{2}}}{n^{1/2}}, \tag{2}\]
with \(K_{\sigma^{2}}\) being a constant that can be computed explicitly. The polynomial envelope assumption is not new in the study of large-width properties of Gaussian NNs (Matthews et al., 2018; Yang, 2019), and it is critical to achieve the optimal rate \(n^{-1/2}\) in the estimate (2) of the approximation error. In general, we show that an approximation analogous to (2) holds true for the Gaussian NN (1), with the approximation being with respect to the 1-Wasserstein distance, the total variation distance and the Kolmogorov-Smirnov distance. Our results rely on the use of second-order Gaussian Poincare inequalities, or simply second-order Poincare inequalities, first introduced in Chatterjee (2009) and Nourdin et al. (2009) as a powerful tool to obtain Gaussian approximation of general functionals of Gaussian stochastic processes. Here, we make use of some refinements of second-order Poincare inequalities developed in Vidotto (2020), which have the advantage of providing tight estimates of the approximation error, with (presumably) optimal rates. An extension of (2) is presented for Gaussian NNs with \(p>1\) inputs, whereas a generalization of our results to deep Gaussian NNs is discussed with respect to the "sequential growth" and the "joint growth" of the width over the NN's layers.
### Related work
While there exists a vast literature on infinitely wide limits of Gaussian NNs, as well as their corresponding asymptotic approximations, only a few recent works have investigated non-asymptotic approximations of Gaussian NNs. To the best of our knowledge, the work of Eldan et al. (2021) is the first to consider the problem of non-asymptotic approximations of Gaussian NNs, focusing
on NNs with Gaussian distributed weights \(w_{i,j}\)'s and Rademacher distributed weights \(w_{i}\)'s. For such a class of NNs, they established a quantitative CLT in an infinite-dimensional functional space, metrized with the Wasserstein distance, providing rates of convergence to a Gaussian stochastic process. For deep Gaussian NNs (Der and Lee, 2005; Matthews et al., 2018), the work of Basteri and Trevisan (2022) first established a quantitative CLT in the 2-Wasserstein distance, providing the rate at which a deep Gaussian NN converges to its infinitely wide limit. Such a result relies on basic properties of the Wasserstein distance, which allow for quantitatively tracking the hidden layers and yield a proof by induction, with the triangular inequality being applied to obtain independence from the previous layers. See Favaro et al. (2022) for an analogous result in the sup-norm distance. Our work is close to that of Basteri and Trevisan (2022), in the sense that we deal with NNs for which all the weights are initialized according to Gaussian distributions, and we consider their approximations through Gaussian RVs. The novelty of our work lies in the use of second-order Poincare inequalities, which allow reducing the problem to a direct computation of the gradient and Hessian of the NN, and provide estimates of the approximation error with optimal rates, and tight constants, with respect to distances other than the Wasserstein distance alone. This is the first work to make use of second-order Poincare inequalities as a tool to obtain non-asymptotic Gaussian approximations of Gaussian NNs.
### Organization of the paper
The paper is structured as follows. In Section 2 we present an overview on second-order Poincare inequalities, recalling some of the main results of Vidotto (2020) that are critical to prove our non-asymptotic Gaussian approximations of Gaussian NNs. Section 3 contains the non-asymptotic Gaussian approximation of the NN (1), as well as its extension for the NN (1) with \(p>1\) inputs, where Section 4 contains some numerical illustrations of our approximations. In Section 5 we discuss the extension of our results to deep Gaussian NNs, and we present some directions for future research.
## 2 Preliminaries on second-order Poincare inequalities
Let \((\Omega,\mathcal{F},\mathbb{P})\) be a generic probability space on which all the RVs are assumed to be defined. We denote by \(\perp\!\!\!\perp\) the independence between RVs, we make use of the acronym "iid" to refer to RVs that are independent and identically distributed, and we denote by \(\|X\|_{L^{q}}:=(\mathbb{E}[X^{q}])^{1/q}\) the \(L^{q}\) norm of the RV \(X\). In this work, we consider some popular distances between (probability) distributions of real-valued RVs. Let \(X\) and \(Y\) be two RVs in \(\mathbb{R}^{d}\), for some \(d\geq 1\). We denote by \(d_{W_{1}}\) the 1-Wasserstein distance, that is,
\[d_{W_{1}}(X,Y)=\sup_{h\in\mathscr{H}}|\mathbb{E}[h(X)]-\mathbb{E}[h(Y)]|,\]
where \(\mathscr{H}\) is the class of all functions \(h:\mathbb{R}^{d}\to\mathbb{R}\) such that it holds true that \(\|h\|_{\mathrm{Lip}}\;\leq 1\), with \(\|h\|_{\mathrm{Lip}}\;=\sup_{x,y\in\mathbb{R}^{d},x\neq y}|h(x)-h(y)|/\|x-y\|_ {\mathbb{R}^{d}}\). We denote by \(d_{TV}\) the total variation distance, that is,
\[d_{TV}(X,Y)=\sup_{B\in\mathscr{B}(\mathbb{R}^{d})}|\mathbb{P}(X\in B)-\mathbb{P}(Y\in B)|,\]
where \(\mathscr{B}\left(\mathbb{R}^{d}\right)\) is the Borel \(\sigma\)-field of \(\mathbb{R}^{d}\). Finally, we denote by \(d_{KS}\) the Kolmogorov-Smirnov distance, i.e.
\[d_{KS}(X,Y)=\sup_{z_{1},\ldots,z_{d}\in\mathbb{R}}|\mathbb{P}\left(X\in\times_ {i=1}^{d}\left(-\infty,z_{i}\right]\right)-\mathbb{P}\left(Y\in\times_{i=1}^{ d}\left(-\infty,z_{i}\right]\right)|.\]
We recall the following interplays between some of the above distances: i) \(d_{KS}(\cdot,\cdot)\leq d_{TV}(\cdot,\cdot)\); ii) if \(X\) is a real-valued RV and \(N\sim\mathcal{N}(0,1)\) is the standard Gaussian RV then \(d_{KS}(X,N)\leq 2\sqrt{d_{W_{1}}(X,N)}\).
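In practice, these distances can be estimated from samples; the sketch below uses SciPy's one-dimensional Wasserstein and two-sample Kolmogorov-Smirnov routines as sample-based surrogates of \(d_{W_{1}}\) and \(d_{KS}\) for real-valued RVs (these are Monte Carlo approximations of the population distances, not exact computations).

```python
import numpy as np
from scipy.stats import wasserstein_distance, ks_2samp

def empirical_distances(samples_x, samples_y):
    """Sample-based estimates of d_W1 and d_KS between two real-valued RVs."""
    return (wasserstein_distance(samples_x, samples_y),
            ks_2samp(samples_x, samples_y).statistic)

rng = np.random.default_rng(0)
d_w1, d_ks = empirical_distances(rng.normal(size=5000),
                                 rng.standard_t(df=5, size=5000))
```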
Second-order Poincare inequalities provide a useful tool for Gaussian approximation of general functionals of Gaussian fields (Chatterjee, 2009; Nourdin et al., 2009). See also Nourdin and Peccati (2012) and references therein for a detailed account. For our work, it is useful to recall some results developed in Vidotto (2020), which provide improved versions of the second-order Poincare inequality first introduced in Chatterjee (2009) for random variables and then extended in Nourdin et al. (2009) to general infinite-dimensional Gaussian fields. Let \(N\sim\mathcal{N}(0,1)\). Second-order Poincare inequalities can be seen as an iteration of the so-called Gaussian Poincare inequality, which states that
\[\mathrm{Var}[f(N)]\leq\mathbb{E}[f^{\prime}(N)^{2}] \tag{3}\]
for every differentiable function \(f:\mathbb{R}\to\mathbb{R}\), a result that was first discovered in a work by Nash (1956) and then reproved by Chernoff (1981). The inequality (3) implies that if the \(L^{2}\) norm of the RV \(f^{\prime}(N)\) is small, then so are the fluctuations of the RV \(f(N)\). The first version of a second-order Poincare inequality was obtained in Chatterjee (2009), where it is proved that one can iterate (3) in order to assess the total variation distance between the distribution of \(f(N)\) and the distribution of a Gaussian RV with matching mean and variance. The precise result is stated in the following theorem.
**Theorem 2.1** (Chatterjee (2009) - second-order Poincare inequality).: _Let \(X\sim\mathcal{N}\left(0,I_{d\times d}\right)\). Take any \(f\in C^{2}(\mathbb{R}^{d})\), and \(\nabla f\) and \(\nabla^{2}f\) denote the gradient of \(f\) and Hessian of \(f\), respectively. Suppose that \(f(X)\) has a finite fourth moment, and let \(\mu=\mathbb{E}[f(X)]\) and \(\sigma^{2}=\mathrm{Var}[f(X)]\). Let \(N\sim\mathcal{N}(\mu,\sigma^{2})\) then_
\[d_{TV}(f(X),N)\leq\frac{2\sqrt{5}}{\sigma^{2}}\left\{\mathbb{E}\left[\|\nabla f (X)\|_{\mathbb{R}^{d}}^{4}\right]\right\}^{1/4}\left\{\mathbb{E}\left[\|\nabla ^{2}f(X)\|_{op}^{4}\right]\right\}^{1/4}, \tag{4}\]
_where \(\|\cdot\|_{op}\) stands for the operator norm of the Hessian \(\nabla^{2}f(X)\) regarded as a random \(d\times d\) matrix._
Nourdin et al. (2009) pointed out that the Stein-type inequalities that lead to (4) are special instances of a more general class of inequalities, which can be obtained by combining Stein's method and Malliavin calculus on an infinite-dimensional Gaussian space. In particular, Nourdin et al. (2009) obtained a general version of (4), involving functionals of arbitrary infinite-dimensional Gaussian fields. Both (4) and its generalization in Nourdin et al. (2009) are known to give suboptimal rates of convergence. This is because, in general, it is not possible to obtain an explicit computation of the expectation of the operator norm involved in the estimate of total variation distance, which leads to move further away from the distance in distribution and use bounds on the operator norm instead of computing it directly. To overcome this drawback, Vidotto (2020) adapted to the Gaussian setting an approach recently developed in Last et al. (2016) to obtain second-order Poincare inequalities for Gaussian approximation of Poisson functionals, yielding estimates of the approximation error that are (presumably) optimal. The next theorem states Vidotto (2020, Theorem 2.1) for the special case of a function \(f(X)\), with \(f\in C^{2}\left(\mathbb{R}^{d}\right)\) such that its partial derivatives have sub-exponential growth, and \(X\sim\mathcal{N}\left(0,I_{d\times d}\right)\). See Appendix A for an overview of Vidotto (2020, Theorem 2.1).
**Theorem 2.2** (Vidotto (2020) - 1-dimensional second-order Poincare inequality).: _Let \(F=f(X)\), for some \(f\in C^{2}\left(\mathbb{R}^{d}\right)\), and \(X\sim\mathcal{N}\left(0,I_{d\times d}\right)\) such that \(E[F]=0\) and \(E\left[F^{2}\right]=\sigma^{2}\). Let
\(N\sim\mathcal{N}\left(0,\sigma^{2}\right)\), then_
\[d_{M}(F,N)\leq c_{M}\sqrt{\sum_{l,m=1}^{d}\left\{\mathbb{E}\left[\left(\langle \nabla_{l,\cdot}^{2}F,\nabla_{m,\cdot}^{2}F\rangle\right)^{2}\right]\right\}^{1/ 2}\left\{\mathbb{E}\left[\left(\nabla_{l}F\nabla_{m}F\right)^{2}\right]\right\} ^{1/2}}, \tag{5}\]
_where \(\langle\cdot,\cdot\rangle\) is the scalar product, \(M\in\{TV,KS,W_{1}\},c_{TV}=\frac{4}{\sigma^{2}},c_{KS}=\frac{2}{\sigma^{2}},c _{W_{1}}=\sqrt{\frac{8}{\sigma^{2}\pi}}\) and \(\nabla_{i,\cdot}^{2}F\) is the \(i\)-th row of the Hessian matrix of \(F=f(X)\) while \(\nabla_{i}F\) is the \(i\)-th element of the gradient of \(F\)._
The next theorem generalizes Theorem 2.2 to multidimensional functionals. In particular, for any \(p>1\), the next theorem states Vidotto (2020, Theorem 2.3) for the special case of a function \((f_{1}(X),\ldots,f_{p}(X))\), with \(f_{1},\ldots,f_{p}\in C^{2}\left(\mathbb{R}^{d}\right)\) such that its partial derivatives have sub-exponential growth, and \(X\sim\mathcal{N}\left(0,I_{d\times d}\right)\). See Appendix A for a brief overview of Vidotto (2020, Theorem 2.3).
**Theorem 2.3** (Vidotto (2020) - \(p\)-dimensional second-order Poincare inequality).: _For any \(p>1\) let \((F_{1},\ldots,F_{p})=(f_{1}(X),\ldots,f_{p}(X))\), for some \(f_{1},\ldots,f_{p}\in C^{2}(\mathbb{R}^{d})\), and \(X\sim\mathcal{N}\left(0,I_{d\times d}\right)\) such that \(E\left[F_{i}\right]=0\) for \(i=1,\ldots,p\) and \(E\left[F_{i}F_{j}\right]=c_{ij}\) for \(i,j=1,\ldots,p\), with \(C=\{c_{ij}\}_{i,j=1,\ldots,p}\) being a symmetric and positive definite matrix, i.e. a variance-covariance matrix. Let \(N\sim\mathcal{N}(0,C)\), then_
\[d_{W_{1}}(F,N) \tag{6}\] \[\quad\leq 2\sqrt{p}\left\|C^{-1}\right\|_{2}\|C\|_{2}\sqrt{\sum_{i, k=1}^{p}\sum_{l,m=1}^{d}\left\{\mathbb{E}\left[\left(\langle\nabla_{l,\cdot}^{2}F_{i },\nabla_{m,\cdot}^{2}F_{i}\rangle\right)^{2}\right]\right\}^{1/2}\left\{ \mathbb{E}\left[\left(\nabla_{l}F_{k}\nabla_{m}F_{k}\right)^{2}\right]\right\} ^{1/2}}\]
_where \(\left\|\cdot\right\|_{2}\) is the spectral norm of a matrix._
## 3 Main results
In this section, we present the main result of the paper, namely a non-asymptotic Gaussian approximation of the NN (1), quantifying the approximation error with respect to the 1-Wasserstein distance, the total variation distance and the Kolmogorov-Smirnov distance. It is useful to start with the simple setting of a Gaussian NN with a 1-dimensional unitary input, i.e. \(d=1\) and \(x=1\), unit weight variance, i.e. \(\sigma_{w}^{2}=1\), and no biases, i.e. \(b_{i}^{(0)}=b=0\) for any \(i\geq 1\). That is, we consider the NN
\[F:=f_{1}(n)[\tau,n^{-1/2}]=\frac{1}{n^{1/2}}\sum_{j=1}^{n}w_{j}\tau(w_{j}^{(0 )}). \tag{7}\]
By means of a straightforward calculation, one has \(\mathbb{E}[F]=0\) and \(\mathrm{Var}[F]=\mathbb{E}_{Z\sim\mathcal{N}(0,1)}[\tau^{2}(Z)]\). As \(F\) in (7) is a function of independent standard Gaussian RVs, Theorem 2.2 can be applied to approximate \(F\) with a Gaussian RV with the same mean and variance as \(F\), quantifying the approximation error.
**Theorem 3.1**.: _Let \(F\) be the NN (7) with \(\tau\in C^{2}(\mathbb{R})\) such that \(|\tau(x)|\leq a+b|x|^{\gamma}\) and \(\left|\frac{d^{l}}{dx^{l}}\tau(x)\right|\leq a+b|x|^{\gamma}\) for \(l=1,2\) and some \(a,b,\gamma\geq 0\). If \(N\sim\mathcal{N}(0,\sigma^{2})\) with \(\sigma^{2}=\mathbb{E}_{Z\sim\mathcal{N}(0,1)}[\tau^{2}(Z)]\), then for any \(n\geq 1\)_
\[d_{M}\left(F,N\right)\leq\frac{c_{M}}{\sqrt{n}}\sqrt{3(1+\sqrt{2})}\cdot\|a+b |Z|^{\gamma}\|_{L_{4}}^{2}, \tag{8}\]
_where \(Z\sim\mathcal{N}(0,1)\), \(M\in\{TV,KS,W_{1}\}\), with corresponding constants \(c_{TV}=4/\sigma^{2}\), \(c_{KS}=2/\sigma^{2}\), and \(c_{W_{1}}=\sqrt{8/\sigma^{2}\pi}\)._
Proof.: To apply Theorem 2.2, we start by computing some first and second order partial derivatives. That is,
\[\begin{cases}\frac{\partial F}{\partial w_{j}}=n^{-1/2}\tau(w_{j}^{(0)})\\ \\ \frac{\partial F}{\partial w_{j}^{(0)}}=n^{-1/2}w_{j}\tau^{\prime}(w_{j}^{(0)})\\ \\ \nabla_{w_{j},w_{i}}^{2}F=0\\ \\ \nabla_{w_{j},w_{i}^{(0)}}^{2}F=n^{-1/2}\tau^{\prime}(w_{j}^{(0)})\delta_{ij}\\ \\ \nabla_{w_{j}^{(0)},w_{i}^{(0)}}^{2}F=n^{-1/2}w_{j}\tau^{\prime\prime}(w_{j}^{(0)})\delta_{ij}\end{cases}\]
with \(i,j=1\ldots n\). Then, by a direct application of Theorem 2.2, we obtain the following preliminary estimate
\[d_{M}\left(F,N\right)\leq c_{M}\Bigg\{\sum_{j=1}^{n}2\left\{\mathbb{E}\left[\left(\left\langle\nabla_{w_{j},\cdot}^{2}F,\nabla_{w_{j}^{(0)},\cdot}^{2}F\right\rangle\right)^{2}\right]\mathbb{E}\left[\left(\frac{\partial F}{\partial w_{j}}\frac{\partial F}{\partial w_{j}^{(0)}}\right)^{2}\right]\right\}^{1/2}\] \[\quad+\left\{\mathbb{E}\left[\left(\left\langle\nabla_{w_{j},\cdot}^{2}F,\nabla_{w_{j},\cdot}^{2}F\right\rangle\right)^{2}\right]\mathbb{E}\left[\left(\frac{\partial F}{\partial w_{j}}\frac{\partial F}{\partial w_{j}}\right)^{2}\right]\right\}^{1/2}\] \[\quad+\left\{\mathbb{E}\left[\left(\left\langle\nabla_{w_{j}^{(0)},\cdot}^{2}F,\nabla_{w_{j}^{(0)},\cdot}^{2}F\right\rangle\right)^{2}\right]\mathbb{E}\left[\left(\frac{\partial F}{\partial w_{j}^{(0)}}\frac{\partial F}{\partial w_{j}^{(0)}}\right)^{2}\right]\right\}^{1/2}\Bigg\}^{1/2},\]
which can be further developed. In particular, we can write the right-hand side of the previous estimate as
\[c_{M}\bigg{\{}\sum_{j=1}^{n}2\left\{\mathbb{E}\left[\left(\frac{ 1}{n}w_{j}\tau^{\prime}\left(w_{j}^{(0)}\right)\tau^{\prime\prime}\left(w_{j }^{(0)}\right)\right)^{2}\right]\mathbb{E}\left[\left(\frac{1}{n}w_{j}\tau \left(w_{j}^{(0)}\right)\tau^{\prime}\left(w_{j}^{(0)}\right)\right)^{2} \right]\right\}^{1/2}\] \[\quad+\left\{\mathbb{E}\left[\left(\frac{1}{\sqrt{n}}\tau^{ \prime}\left(w_{j}^{(0)}\right)\right)^{4}\right]\mathbb{E}\left[\left(\frac{ 1}{\sqrt{n}}\tau\left(w_{j}^{(0)}\right)\right)^{4}\right]\right\}^{1/2}\] \[\quad+\left\{\mathbb{E}\left[\left(\frac{1}{n}\left\{\tau^{ \prime}\left(w_{j}^{(0)}\right)\right\}^{2}+\frac{1}{n}w_{j}^{2}\left\{\tau^{ \prime\prime}\left(w_{j}^{(0)}\right)\right\}^{2}\right)^{2}\right]\mathbb{E }\left[\left(\frac{1}{\sqrt{n}}w_{j}\tau^{\prime}\left(w_{j}^{(0)}\right) \right)^{4}\right]\right\}^{1/2}\bigg{\}}^{1/2}\] \[\quad+\left\{\mathbb{E}\left[\left(\tau^{\prime}\left(w_{j}^{(0) }\right)\right)^{4}\right]\mathbb{E}\left[\left(\tau\left(w_{j}^{(0)}\right) \right)^{4}\right]\right\}^{1/2}\] \[\quad+\left\{\mathbb{E}\left[\left(\left\{\tau^{\prime}\left(w _{j}^{(0)}\right)\right\}^{2}+w_{j}^{2}\left\{\tau^{\prime\prime}\left(w_{j}^{(0 )}\right)\right\}^{2}\right)^{2}\right]\mathbb{E}\left[\left(w_{j}\tau^{ \prime}\left(w_{j}^{(0)}\right)\right)^{4}\right]\right\}^{1/2}\bigg{\}}^{1/2}\]
\[\stackrel{{(idid)}}{{=}}\frac{c_{M}}{\sqrt{n}}\bigg{\{}2 \left\{\mathbb{E}\left[\left(\tau^{\prime}\left(Z\right)\tau^{\prime\prime}\left(Z \right)\right)^{2}\right]\mathbb{E}\left[\left(\tau\left(Z\right)\tau^{\prime} \left(Z\right)\right)^{2}\right]\right\}^{1/2}\] \[\quad+\left\{\mathbb{E}\left[\left(\tau^{\prime}\left(Z\right) \right)^{4}\right]\mathbb{E}\left[\left(\tau\left(Z\right)\right)^{4}\right] \right\}^{1/2}\] \[\quad+\left\{\mathbb{E}\left[\left(\left\{\tau^{\prime}\left(Z \right)\right\}^{2}+w_{j}^{2}\left\{\tau^{\prime\prime}\left(Z\right)\right\}^ {2}\right)^{2}\right]\mathbb{E}\left[\left(w_{j}\tau^{\prime}\left(Z\right) \right)^{4}\right]\right\}^{1/2}\bigg{\}}^{1/2}\] \[\quad+\left\{\mathbb{E}\left[\left(\left\{\tau^{\prime}\left(Z \right)\right\}^{4}\right]\mathbb{E}\left[\left(\tau\left(Z\right)\right)^{4} \right]\right\}^{1/2}\] \[\quad+\left\{\mathbb{E}\left[\left(\left\{\tau^{\prime}\left(Z \right)\right\}^{4}\right]+2\mathbb{E}\left[\left\{\tau^{\prime}\left(Z \right)\right\}^{2}\left\{\tau^{\prime\prime}\left(Z\right)\right\}^{2}\right] +3\mathbb{E}\left[\left\{\tau^{\prime\prime}\left(Z\right)\right\}^{4}\right] \right)3\mathbb{E}\left[\left\{\tau^{\prime}\left(Z\right)\right\}^{4} \right]\right\}^{1/2}\] \[\quad+\left\{\mathbb{E}\left[\left|\tau^{\prime}\left(Z\right) \right|^{4}\right]\mathbb{E}\left[\left|\tau\left(Z\right)\right|^{4}\right] \right\}^{1/2}\] \[\quad+\left\{\left(\mathbb{E}\left[\left|\tau^{\prime}\left(Z \right)\right|^{4}\right]+2\mathbb{E}\left[\left|\tau^{\prime}\left(Z\right) \right|^{2}\right]+3\mathbb{E}\left[\left|\tau^{\prime\prime}\left(Z\right) \right|^{4}\right]\right)3\mathbb{E}\left[\left|\tau^{\prime}\left(Z\right) \right|^{4}\right]\right\}^{1/2}\bigg{\}}^{1/2},\]
where \(Z\sim\mathcal{N}(0,1)\). Now, since \(\tau\) is polynomially bounded and the square root is an increasing function,
\[d_{M}\left(F,N\right)\leq\frac{c_{M}}{\sqrt{n}}\bigg{\{}2 \left\{\mathbb{E}\left[\left(a+b|Z|^{\gamma}\right)^{4}\right]\mathbb{E} \left[\left(a+b|Z|^{\gamma}\right)^{4}\right]\right\}^{1/2}\] \[\quad+\left\{\mathbb{E}\left[\left(a+b|Z|^{\gamma}\right)^{4} \right]\mathbb{E}\left[\left(a+b|Z|^{\gamma}\right)^{4}\right]\right\}^{1/2}\] \[\quad+\left\{18\mathbb{E}\left[\left(a+b|Z|^{\gamma}\right)^{4} \right]\mathbb{E}\left[\left(a+b|Z|^{\gamma}\right)^{4}\right]\right\}^{1/2} \bigg{\}}^{1/2}\] \[=\frac{c_{M}}{\sqrt{n}}\sqrt{3\sqrt{2+3}}\left\{\mathbb{E}\left[ \left(a+b|Z|^{\gamma}\right)^{4}\right]\right\}^{1/2}\] \[=\frac{c_{M}}{\sqrt{n}}\sqrt{3(1+\sqrt{2})}\|a+b|Z|^{\gamma}\|_{L _{4}}^{2},\]
where \(Z\sim\mathcal{N}(0,1)\).
The proof of Theorem 3.1 shows how a non-asymptotic approximation of \(F\) can be obtained by a direct application of Theorem 2.2. In particular, the estimate (8) of the approximation error \(d_{M}\left(F,N\right)\) has the optimal rate \(n^{-1/2}\) with respect to the \(1\)-Wasserstein distance, the total variation distance and the Kolmogorov-Smirnov distance. As for the constant, it depends on the variance \(\mathbb{E}_{Z\sim\mathcal{N}(0,1)}[\tau^{2}(Z)]\) of \(F\). Once the activation function \(\tau\) is specified, \(\mathbb{E}_{Z\sim\mathcal{N}(0,1)}[\tau^{2}(Z)]\) can be evaluated by means of an exact or approximate calculation, or a suitable lower bound for it can be provided.
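As a concrete illustration (ours, not part of the original derivation), the variance \(\mathbb{E}_{Z\sim\mathcal{N}(0,1)}[\tau^{2}(Z)]\) and the resulting estimate (8) can be evaluated numerically once \(\tau\) is fixed; the minimal Python sketch below does this for \(\tau=\tanh\), for which the growth condition holds with \(a=1\), \(b=0\), and the width \(n\) is an arbitrary choice.

```python
# Minimal sketch (ours): evaluating sigma^2 = E[tau^2(Z)] and the bound (8)
# for tau = tanh, which satisfies the growth condition with a = 1, b = 0.
import numpy as np
from scipy import integrate

def std_normal_pdf(z):
    return np.exp(-z ** 2 / 2) / np.sqrt(2 * np.pi)

tau = np.tanh
sigma2, _ = integrate.quad(lambda z: tau(z) ** 2 * std_normal_pdf(z), -10, 10)

l4_norm_sq = 1.0                     # ||a + b|Z|^gamma||_{L^4}^2 reduces to 1 when a = 1, b = 0
c_TV = 4.0 / sigma2                  # constant of Theorem 3.1 for the total variation distance
n = 1000                             # width of the hidden layer (arbitrary choice)
bound = c_TV / np.sqrt(n) * np.sqrt(3 * (1 + np.sqrt(2))) * l4_norm_sq
print(f"sigma^2 ~= {sigma2:.4f}, TV bound for n={n}: {bound:.4f}")
```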
Now, we extend Theorem 3.1 to the more general case of the Gaussian NN (1), showing that the problem still reduces to an application of Theorem 2.2. In particular, it is convenient to write (1) as follows:
\[F:=\frac{1}{n^{1/2}}\sigma_{w}\sum_{j=1}^{n}w_{j}\tau(\sigma_{w} \langle w_{j}^{(0)},\mathbf{x}\rangle+\sigma_{b}b_{j}^{(0)})+\sigma_{b}b, \tag{9}\]
with \(w_{j}^{(0)}=[w_{j,1}^{(0)},\ldots,w_{j,d}^{(0)}]^{T}\) and \(w_{j}\overset{d}{=}w_{j,i}^{(0)}\overset{\text{iid}}{\sim}\ \mathcal{N}(0,1)\). We set \(\Gamma^{2}=\sigma_{w}^{2}\|\mathbf{x}\|^{2}+\sigma_{b}^{2}\), and for \(n\geq 1\) we consider a collection \((Y_{1},\ldots,Y_{n})\) of independent standard Gaussian RVs. Then, from (9) we can write
\[F\overset{d}{=}\frac{1}{n^{1/2}}\sigma_{w}\sum_{j=1}^{n}w_{j}\tau \left(\Gamma Y_{j}\right)+\sigma_{b}b.\]
As before, straightforward calculations lead to \(\mathbb{E}[F]=0\) and \(\operatorname{Var}[F]=\sigma_{w}^{2}\mathbb{E}_{Z\sim\mathcal{N}(0,1)}\left[\tau^{2}\left(\Gamma Z\right)\right]+\sigma_{b}^{2}\). As \(F\) in (9) is a function of independent standard Gaussian RVs, Theorem 2.2 can be applied to approximate \(F\) with a Gaussian RV with the same mean and variance as \(F\), quantifying the approximation error. This approximation is stated in the next theorem, whose proof is in Appendix B.
**Theorem 3.2**.: _Let \(F\) be the NN (9) with \(\tau\in C^{2}(\mathbb{R})\) such that \(|\tau(x)|\leq a+b|x|^{\gamma}\) and \(\left|\frac{d^{l}}{dx^{l}}\tau(x)\right|\leq a+b|x|^{\gamma}\) for \(l=1,2\) and some \(a,b,\gamma\geq 0\). If \(N\sim\mathcal{N}(0,\sigma^{2})\) with \(\sigma^{2}=\sigma_{w}^{2}\mathbb{E}_{Z\sim\mathcal{N}(0,1)}\left[\tau^{2}\left(\Gamma Z\right)\right]+\sigma_{b}^{2}\) and \(\Gamma=(\sigma_{w}^{2}\|\mathbf{x}\|^{2}+\sigma_{b}^{2})^{1/2}\), then for any \(n\geq 1\)_
\[d_{M}\left(F,N\right)\leq\frac{c_{M}\sqrt{\Gamma^{2}+\Gamma^{4}(2 +\sqrt{3(1+2\Gamma^{2}+3\Gamma^{4})})}\|a+b|\Gamma Z|^{\gamma}\|_{L^{4}}^{2}} {\sqrt{n}}, \tag{10}\]
_where \(Z\sim\mathcal{N}(0,1)\), \(M\in\{TV,KS,W_{1}\}\), with corresponding constants \(c_{TV}=4/\sigma^{2},c_{KS}=2/\sigma^{2},c_{W_{1}}=\sqrt{8/\sigma^{2}\pi}\)._
We observe that Theorem 3.1 can be recovered from Theorem 3.2. In particular, the estimate (8) of the approximation \(d_{M}\left(F,N\right)\) can be recovered from the estimate (10) by setting \(\sigma_{b}=0\), \(\sigma_{w}=1\) and \(\mathbf{x}=1\). As for Theorem 3.1, the constant depends on the variance \(\sigma_{w}^{2}\mathbb{E}_{Z\sim\mathcal{N}(0,1)}\left[\tau^{2}\left(\Gamma Z \right)\right]+\sigma_{b}^{2}\) of \(F\). Therefore, to apply Theorem 3.2 one needs to evaluate the variance of \(F\), by means of an exact or approximate calculation, or to provide a suitable lower bound for it, as we have discussed previously.
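For instance, the following sketch (ours; the input vector, the variances \(\sigma_{w},\sigma_{b}\) and the width are arbitrary choices) evaluates the quantities entering the estimate (10) for \(\tau=\tanh\), again with \(a=1\), \(b=0\).

```python
# Sketch (ours): evaluating Gamma, Var[F] and the estimate (10) for tau = tanh.
import numpy as np
from scipy import integrate

sigma_w, sigma_b = 1.0, 0.5
x = np.array([0.3, -1.2, 0.7])                 # an arbitrary input in R^d
Gamma = np.sqrt(sigma_w ** 2 * x @ x + sigma_b ** 2)

pdf = lambda z: np.exp(-z ** 2 / 2) / np.sqrt(2 * np.pi)
sigma2 = sigma_w ** 2 * integrate.quad(
    lambda z: np.tanh(Gamma * z) ** 2 * pdf(z), -10, 10)[0] + sigma_b ** 2

l4_norm_sq = 1.0                               # ||a + b|Gamma Z|^gamma||_{L^4}^2 with a = 1, b = 0
c_W1 = np.sqrt(8 / (sigma2 * np.pi))           # constant for the 1-Wasserstein distance
n = 500
bound = (c_W1 / np.sqrt(n) * l4_norm_sq
         * np.sqrt(Gamma ** 2 + Gamma ** 4 * (2 + np.sqrt(3 * (1 + 2 * Gamma ** 2 + 3 * Gamma ** 4)))))
print(f"Gamma = {Gamma:.3f}, Var[F] = {sigma2:.3f}, W1 bound for n={n}: {bound:.3f}")
```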
We conclude by presenting an extension of Theorem 3.2 to a Gaussian NN with \(p>1\) inputs \([\mathbf{x_{1}},\ldots,\mathbf{x_{p}}]^{T}\), where \(\mathbf{x_{i}}\in\mathbb{R}^{d}\) for \(i=1,\ldots,p\). In particular, we consider the NN \(F:=[F_{1},\ldots,F_{p}]^{T}\) where
\[F_{i}:=\frac{1}{n^{1/2}}\sigma_{w}\sum_{j=1}^{n}w_{j}\tau(\sigma_ {w}\langle w_{j}^{(0)},\mathbf{x_{i}}\rangle+\sigma_{b}b_{j}^{(0)})+\sigma_{b}b, \tag{11}\]
with \(w_{j}^{(0)}=[w_{j,1}^{(0)},\ldots,w_{j,d}^{(0)}]^{T}\) and \(w_{j}\overset{d}{=}w_{j,i}^{(0)}\overset{d}{=}b_{j}^{(0)}\overset{d}{=}b\overset{\text{iid}}{\sim}\ \mathcal{N}(0,1)\). Since the parameters are jointly distributed according to a multivariate standard Gaussian distribution, Theorem 2.3 can be applied to approximate \(F\) with a Gaussian random vector whose mean and covariance are the same as those of \(F\), quantifying the approximation error. The resulting estimate of the approximation error depends on the maximum and the minimum eigenvalues, i.e. \(\lambda_{1}(C)\) and \(\lambda_{p}(C)\) respectively, of the covariance matrix \(C\), whose \((i,k)\)-th entry is given by
\[\mathbb{E}[F_{i}F_{k}]=\sigma_{w}^{2}\mathbb{E}[\tau(Y_{i})\tau(Y_{k})]+\sigma _{b}^{2}, \tag{12}\]
where \(Y\sim\mathcal{N}(0,\sigma_{w}^{2}X^{T}X+\sigma_{b}^{2}\mathbf{1}\mathbf{1}^{T})\), with \(\mathbf{1}\) being the all-one vector of dimension \(p\) and \(X\) being the \(d\times p\) matrix whose columns are the inputs \(\{\mathbf{x_{i}}\}_{i\in[p]}\). This approximation is stated in the next theorem, whose proof is in Appendix C.
**Theorem 3.3**.: _Let \(F=[F_{1},\ldots,F_{p}]^{T}\) with \(F_{i}\) being the NN (11), for \(i=1,\ldots,p\), with \(\tau\in C^{2}(\mathbb{R})\) such that \(|\tau(x)|\leq a+b|x|^{\gamma}\) and \(\left|\frac{d^{l}}{dx^{l}}\tau(x)\right|\leq a+b|x|^{\gamma}\) for \(l=1,2\) and some \(a,b,\gamma\geq 0\). Furthermore, let \(C\) be the covariance matrix of \(F\), whose entries are given in (12), and define \(\Gamma_{i}^{2}=\sigma_{w}^{2}\|\mathbf{x_{i}}\|^{2}+\sigma_{b}^{2}\) and \(\Gamma_{ik}=\sigma_{w}^{2}\sum_{j=1}^{d}|x_{ij}x_{kj}|+\sigma_{b}^{2}\). If \(N=[N_{1},\ldots,N_{p}]^{T}\sim\mathcal{N}(0,C)\), then for any \(n\geq 1\)_
\[d_{W_{1}}\left(F,N\right)\leq 2\sigma_{w}^{2}\tilde{K}\frac{\lambda_{1}(C)}{ \lambda_{p}(C)}\sqrt{\frac{p}{n}}, \tag{13}\]
_where \(\lambda_{1}(C)\) and \(\lambda_{p}(C)\) are the maximum and the minimum eigenvalues of \(C\), respectively, and where_
\[\tilde{K}=\bigg{\{}\sum_{i,k=1}^{p}(\Gamma_{i}^{2}+\sqrt{3(1+2\Gamma_{i}^{2}+3 \Gamma_{i}^{4})}\Gamma_{ik}^{2}+2\Gamma_{i}^{2}\Gamma_{ik})\|a+b|\Gamma_{i}Z| ^{\gamma}\|_{L^{4}}^{2}\|a+b|\Gamma_{k}Z|^{\gamma}\|_{L^{4}}^{2}\bigg{\}}^{1/2},\]
_with \(Z\sim\mathcal{N}(0,1)\)._
The estimate (13) of the approximation error \(d_{W_{1}}\left(F,N\right)\) depends on the spectral norms of the covariance matrix \(C\) and the precision matrix \(C^{-1}\). Such spectral norms must be computed explicitly for the specific activation \(\tau\) in use, or at least bounded from above, in order to apply Theorem 3.3. This boils down to finding the greatest eigenvalue \(\lambda_{1}\) and the smallest eigenvalue \(\lambda_{p}\) of the matrix \(C\), which can be done for a broad class of activations with classical optimization techniques, or at least bounding \(\lambda_{1}\) from above and \(\lambda_{p}\) from below (Diaconis and Stroock, 1991; Guattery et al., 1999).
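To make the last step concrete, the following sketch (ours; the inputs and hyperparameters are arbitrary, and no claim is made about the authors' own implementation) estimates the entries (12) of \(C\) by Monte Carlo for \(\tau=\tanh\) and then extracts \(\lambda_{1}(C)\) and \(\lambda_{p}(C)\) numerically.

```python
# Sketch (ours): Monte-Carlo estimate of the covariance matrix C in (12) and of
# its extreme eigenvalues, which enter the estimate (13); tau = tanh.
import numpy as np

rng = np.random.default_rng(0)
sigma_w, sigma_b = 1.0, 0.5
X = np.array([[0.3, -1.2, 0.7],
              [1.0,  0.1, -0.4]]).T            # d x p matrix whose columns are the inputs (d = 3, p = 2)
p = X.shape[1]

cov_Y = sigma_w ** 2 * X.T @ X + sigma_b ** 2 * np.ones((p, p))   # covariance of Y, see above (12)
Y = rng.multivariate_normal(np.zeros(p), cov_Y, size=200_000)

C = sigma_w ** 2 * (np.tanh(Y).T @ np.tanh(Y)) / Y.shape[0] + sigma_b ** 2
eigvals = np.linalg.eigvalsh(C)                # returned in ascending order
print("lambda_p(C) =", eigvals[0], " lambda_1(C) =", eigvals[-1])
```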
## 4 Numerical illustrations
In this section, we present a brief simulation study for two specific choices of the activation function: i) \(\tau(x)=\tanh x\), which is polynomially bounded with parameters \(a=1\) and \(b=0\); ii) \(\tau(x)=x^{3}\), which is polynomially bounded with parameters \(a=6\), \(b=1\) and \(\gamma=3\). Each of the plots below is obtained as follows: for a fixed width of \(n=k^{3}\), with \(k\in\{1,\cdots,16\}\), we simulate 5000 points from a SLNN as in Theorem 3.1 to produce an estimate of the distance between the NN and a Gaussian RV with mean \(0\) and variance \(\sigma^{2}\), which is estimated by means of a Monte-Carlo approach. Estimates of the KS and TV distance are produced by means of the functions _KolmogorovDist_ and _TotVarDist_ from the package **distEx** by Ruckdeschel et al. (2006) while those of the 1-Wasserstein distance using the function _wasserstein1d_ from the package **transport** by Schuhmacher et al. (2022). We repeat this procedure 500 times for every fixed \(n=k^{3}\) with \(k\in\{1,\cdots,16\}\), compute the sample mean (black dots) and the 2.5-th and the 97.5-th sample percentiles (red dashed lines), and compare these estimates with the theoretical bound given by Theorem 3.1 (blue line).
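The simulations above are carried out in R; purely as an illustration of the same procedure, the sketch below (ours) performs a single Monte-Carlo replication in Python for \(\tau(x)=x^{3}\) and one fixed width, comparing the simulated network outputs with a Gaussian sample of matched variance.

```python
# Sketch (ours): one Monte-Carlo replication of the experiment described above,
# for tau(x) = x^3, a single width n, and two of the three distances.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
tau = lambda x: x ** 3
n, n_points = 8 ** 3, 5000

# n_points realisations of the shallow NN (7): F = n^{-1/2} sum_j w_j tau(w_j^(0))
w0 = rng.standard_normal((n_points, n))
w = rng.standard_normal((n_points, n))
F = (w * tau(w0)).sum(axis=1) / np.sqrt(n)

sigma2 = 15.0                                  # E[tau^2(Z)] = E[Z^6] = 15 for Z ~ N(0,1)
G = rng.normal(0.0, np.sqrt(sigma2), n_points) # Gaussian sample with the same variance

ks = stats.ks_2samp(F, G).statistic            # Kolmogorov-Smirnov estimate
w1 = stats.wasserstein_distance(F, G)          # 1-Wasserstein estimate
print(f"n = {n}: KS ~ {ks:.4f}, W1 ~ {w1:.4f}")
```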
The plots confirm that the distance between a shallow NN and an arbitrary Gaussian RV, with the same mean and variance, is asymptotically bounded from above by \(n^{-1/2}\) and that the approximation gets better and better as \(n\to\infty\). This is evident in the case \(\tau(x)=x^{3}\), where there is a clear decay between \(n=1\) and \(n=1000\). This behaviour does not show up for \(\tau(x)=\tanh x\), since \(\tanh x\sim x\), for \(x\to 0\), and Gaussian RVs are more likely to attain values in a neighbourhood of zero.
Figure 1: Estimates of the Kolmogorov-Smirnov distance for a Shallow NN of varying width \(n=k^{3}\), \(k\in\{1,\cdots,16\}\), with \(\tau(x)=\tanh x\) (left) and \(\tau(x)=x^{3}\) (right). The blue line is the theoretical bound of Theorem 3.1, the black dots are sample means of the Monte-Carlo sample, while the red-dashed lines represent a \(95\%\) sample confidence interval.
Figure 3: Estimates of the 1-Wasserstein distance for a Shallow NN of varying width \(n=k^{3}\), \(k\in\{1,\cdots,16\}\), with \(\tau(x)=\tanh x\) (left) and \(\tau(x)=x^{3}\) (right).
Figure 2: Estimates of the Total Variation distance for a Shallow NN of varying width \(n=k^{3}\), \(k\in\{1,\cdots,16\}\), with \(\tau(x)=\tanh x\) (left) and \(\tau(x)=x^{3}\) (right).
## 5 Discussion
We introduced some non-asymptotic Gaussian approximations of Gaussian NNs, quantifying the approximation error with respect to the 1-Wasserstein distance, the total variation distance and the Kolmogorov-Smirnov distance. As a novelty, our work relies on the use of second-order Poincare inequalities, which lead to estimates of the approximation error with optimal rate and tight constants. This is the first work to make use of second-order Poincare inequalities for non-asymptotic Gaussian approximations of Gaussian NNs. For a Gaussian NN with a single input, the estimate in Theorem 3.2 requires evaluating or estimating the variance \(\sigma_{w}^{2}\mathbb{E}_{Z\sim\mathcal{N}(0,1)}\left[\tau^{2}\left(\Gamma Z\right)\right]+\sigma_{b}^{2}\) of \(F\), whereas for a Gaussian NN with \(p>1\) inputs, the estimate in Theorem 3.3 requires evaluating or estimating the extreme eigenvalues \(\lambda_{1}(C)\) and \(\lambda_{p}(C)\) of the covariance matrix \(C\). Our approach based on second-order Poincare inequalities remains valid in the more general setting of deep Gaussian NNs. Both Theorem 3.2 and Theorem 3.3 can be extended to deep Gaussian NNs, at the cost of more involved algebraic calculations, as well as more involved estimates of the approximation errors. For instance, for an input \(\mathbf{x}\in\mathbb{R}^{d}\) one may consider a deep Gaussian NN with \(L\) layers, i.e.
(14)
with
(15)
and apply Theorem 2.2 to \(F\) as defined in (14) and (15). Such an application requires dealing with complicated expressions for the gradient and the Hessian, which, however, is a purely algebraic problem.
Related to the choice of the activation, one can also try to relax the hypothesis of polynomial boundedness and use an arbitrary \(\tau\in C^{2}(\mathbb{R})\). There is nothing wrong in doing so, as Corollary 2.2 and 2.3 still apply, with the only difference that the bound would be less explicit than the one we found here. Furthermore, one could also think about relaxing the hypothesis to include \(C^{1}\) or just continuous activations, like the famous ReLU function (i.e. \(\tau(x)=\max\{0,x\}\)), which is excluded from our analysis. Some results in this direction can be found in Eldan et al. (2021), though using Rademacher weights for the hidden layer. In this regard, we tried to derive a specific bound for the ReLU function by applying Corollary 2.2 to a sequence of smooth approximating functions and then passing to the limit. In particular, we approximated the ReLU function with a smooth approximation indexed by a parameter \(m\), applied Theorem 2.2 to it for a generic \(m\) using the 1-Wasserstein distance, and obtained a bound depending on \(m\). Then, the idea would have been to take the limit of this bound as \(m\to\infty\) and hopefully obtain a non-trivial
bound, but that was not the case as the limit exploded. The same outcome was found using the SAU approximating sequence, i.e.
\[H(m,x):=\frac{1}{m\sqrt{2\pi}}\exp\biggl{\{}-\frac{1}{2}m^{2}x^{2}\biggr{\}}+ \frac{x}{2}+\frac{x}{2}\operatorname{erf}\left\{\frac{mx}{\sqrt{2}}\right\},\]
where \(\operatorname{erf}\left(\cdot\right)\) denotes the error function. This fact probably indicates the impossibility of applying the results of Vidotto (2020) in the context of continuous activation functions such as the ReLU function, and the necessity to come up with new results on second-order Poincare inequalities to fill this gap. Such results would not be trivial at all, since Theorem A.2 needs each \(F_{1},\ldots,F_{d}\) to be in \(\mathbb{D}^{2,4}\), so that two degrees of smoothness are required. This is not "the fault" of Vidotto (2020), but it is due to the intrinsic character of the equation \(f^{\prime\prime}(x)-xf^{\prime}(x)=h(x)-\mathbb{E}h(Z)\) with \(Z\sim\mathcal{N}(0,1)\) in dimension \(p\geq 2\).
## Acknowledgements
Stefano Favaro is grateful to Professor Larry Goldstein for having introduced him to second-order Poincare inequalities, and to Professor Dario Trevisan for the many stimulating conversations on Gaussian NNs. Stefano Favaro received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 817257. Stefano Favaro is also affiliated to IMATI-CNR "Enrico Magenes" (Milan, Italy).
|
2306.05577 | Squeezing equivalence of quantum harmonic oscillators under different
frequency modulations | The papers by Janszky and Adam [Phys. Rev. A {\bf 46}, 6091 (1992)] and Chen
\textit{et al.} [Phys. Rev. Lett. {\bf 104}, 063002 (2010)] are examples of
works where one can find the following equivalences: belonging to the following
class: quantum harmonic oscillators subjected to different time-dependent
frequency modulations, during a certain time interval $\tau$, exhibit exactly
the same final null squeezing parameter ($r_f=0$). In the present paper, we
discuss a more general case of squeezing equivalence, where the final squeezing
parameter can be non-null ($r_f\geq0$). We show that when the interest is in
controlling the forms of the frequency modulations, but keeping free the choice
of the values of $r_f$ and $\tau$, this in general demands numerical
calculations to find these values leading to squeezing equivalences (a
particular case of this procedure recovers the equivalence found by Jansky and
Adams). On the other hand, when the interest is not in previously controlling
the form of these frequencies, but rather $r_f$ and $\tau$ (and also some
constraints, such as minimization of energy), one can have analytical solutions
for these frequencies leading to squeezing equivalences (particular cases of
this procedure are usually applied in problems of shortcuts to adiabaticity, as
done by Chen \textit{et al.}). In this way, this more general squeezing
equivalence discussed here is connected to recent and important topics in the
literature as, for instance, generation of squeezed states and the obtaining of
shortcuts to adiabaticity. | Stanley S. Coelho, Lucas Queiroz, Danilo T. Alves | 2023-06-08T22:34:08Z | http://arxiv.org/abs/2306.05577v3 | # Squeezing equivalence of quantum harmonic oscillators under different frequency jumps
###### Abstract
In their studies on the squeezing produced by a sequence of sudden frequency changes of a quantum harmonic oscillator, Janszky and Adam [Phys. Rev. A **46**, 6091 (1992)] found the following equivalence: a harmonic oscillator, under a sequence of two sudden frequency jumps, from \(\omega_{0}\) to \(\omega_{1}\) and back to \(\omega_{0}\) (after a time interval \(\tau\)), exhibits, for \(\tau=k\pi/\omega_{1}\) (\(k\in\mathbb{N}\)), exactly the same squeezing parameter as the harmonic oscillator whose frequency would remain constant [specifically, \(r(t>\tau)=0\)]. In the present paper, we show an extended version of this equivalence, demonstrating how to set up different sequences of two sudden frequency jumps, so that, despite having different intermediate frequencies during a time interval \(\tau\), they result in a same value \(r(t>\tau)\neq 0\) (and, consequently, in the same physical quantities that depend on it) after the jumps cease. Applied to a particular situation, our formulas recover the equivalence obtained by Janszky and Adam.
## I Introduction
The quantum harmonic oscillator potential with time-dependent parameters is relevant in modeling several problems in physics [1; 2; 3; 4; 5], and has been investigated in different situations, such as in the description of the interaction between a spinless charged quantum particle and a time-dependent external classical electromagnetic field [6; 7; 8], in the quantum particle motion in traps [9; 10; 11; 12], as well as in the quantization of the free electromagnetic field in nonstationary media [13; 14; 15; 16; 17]. In quantum circuit electrodynamics, a double superconducting quantum interference device can be modeled as a harmonic oscillator with time-dependent frequency, consisting of an important system for the study of the dynamical Casimir effect [18; 19; 20]. In the description of scalar fields in Friedmann-Robertson-Walker spacetime and in the analysis of particle production in de Sitter spacetime, time-dependent harmonic oscillators have also been considered, since the Hamiltonian of the field modes can be mapped onto the Hamiltonian of a quantum harmonic oscillator with time-dependent mass and frequency [21; 22; 23; 24]. In the context of shortcuts to adiabaticity, time-dependent quantum harmonic oscillators are also utilized [25; 26; 27; 28; 29].
A particular case of a quantum harmonic oscillator with time-dependent parameters is one that presents sudden frequency jumps [30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42]. In Ref. [30], Janszky and Yushin showed that these jumps create squeezed states (see also Refs. [31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42]). While this procedure is still widely used [43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54], it is worth noting that sudden jumps in frequency are not the only way to create squeezed states in quantum harmonic oscillators. For example, in Refs. [55; 56; 57], it is shown that squeezing will be generated even for smoothly varying time-dependent frequencies. In Ref. [36], Janszky and Adam found an interesting equivalence: a harmonic oscillator, under a sequence of two sudden frequency jumps, from \(\omega_{0}\) (for \(t\leq 0\)) to \(\omega_{1}\), and returning to \(\omega_{0}\) (for \(t>\tau\)), exhibits, for \(\tau=k\pi/\omega_{1}\) (\(k\in\mathbb{N}\)), a null squeezing parameter \([r(t>\tau)=0]\), exactly as occurs for a harmonic oscillator whose frequency would remain constant.
In the present paper, we show an extended version of the mentioned equivalence found by Janszky and Adam [36], demonstrating how to set up different sequences of two sudden frequency jumps, so that, despite having different intermediate frequencies during a time interval \(\tau\), they result in a same value \(r(t>\tau)\neq 0\), after the jumps cease. Since the quantum fluctuations of the position and momentum operators, mean energy value, mean number of excitations, and the transition probabilities between different states depend on the squeezing parameter \(r(t)\) (see, for instance, Ref. [54] and references therein), the extended equivalence presented here implies the equivalence of all these quantities, for \(t>\tau\). When applied to the particular situation discussed by Janszky and Adam [36], our formulas recover the equivalence obtained by these authors.
The paper is organized as follows. In Sec. II we make a brief review of the solution of the Schrodinger equation associated with the quantum harmonic oscillator with time-dependent frequency, obtained via the Lewis-Riesenfeld (LR) dynamical invariant method [2; 3; 4]. We also make a brief review of the results discussed in Ref. [54], also obtained via the LR method, for the squeezing parameter and squeezing phase, quantum fluctuations of the position and momentum operators, mean energy, mean number of excitations, and the transition probabilities between different states. In Sec. III, we investigate the dynamics of a quantum harmonic oscillator with initial frequency \(\omega_{0}\), that undergoes a sudden jump to an intermediate time-dependent frequency \(f(t)\) and, after a certain time interval \(\tau\), suddenly returns to its initial frequency \(\omega_{0}\) [we remark that the results of this section, themselves, generalize those obtained in Ref. [54], where \(f(t)=\omega_{1}\)]. In Sec. IV, we obtain the conditions that allow us to set up different sequences of two sudden frequency jumps, with different intermediate time-dependent frequencies \(f(t)\) and \(f^{\prime}(t)\), so that they result in a same value \(r(t>\tau)\neq 0\) (and, consequently, in the same physical quantities that depend on it) after the jumps cease. In Sec. V, taking as basis typical functions \(f(t)\) and \(f^{\prime}(t)\), as those considered in Refs. [9; 35; 58; 59], we apply the conditions obtained by us to exemplify situations exhibiting squeezing equivalence of oscillators under different
frequency jumps. In Sec. VI, we present our final remarks.
## II Analytical method
In this section, by means of the LR method [2; 3; 4], we make a brief review of the solution of the Schrodinger equation associated with the quantum harmonic oscillator with time-dependent frequency, as well as of the results discussed in Ref. [54] for the squeezing parameter and related quantities to it.
### The wave function of the time-dependent quantum harmonic oscillator
Let us consider the Schrodinger equation for a one-dimensional harmonic oscillator with time-independent mass \(m_{0}\) and time-dependent frequency \(\omega(t)\):
\[i\hbar\frac{\partial\Psi(x,t)}{\partial t}=\left[-\frac{\hbar^{2}}{2m_{0}} \frac{\partial^{2}}{\partial x^{2}}+\frac{1}{2}m_{0}\omega(t)^{2}x^{2}\right] \!\Psi(x,t). \tag{1}\]
According to the LR method [2; 3; 4], the general solution of Eq. (1) is
\[\Psi(x,t)=\sum_{n=0}^{\infty}C_{n}\Psi_{n}(x,t), \tag{2}\]
in which the coefficients \(C_{n}\) depend only on the initial conditions and
\[\Psi_{n}(x,t)=\exp[i\alpha_{n}(t)]\Phi_{n}(x,t), \tag{3}\]
where \(\alpha_{n}(t)\) are phase functions, given by
\[\alpha_{n}(t)=-\left(n+\frac{1}{2}\right)\int_{0}^{t}\frac{dt^{\prime}}{m_{0} \rho(t^{\prime})^{2}}, \tag{4}\]
and the functions \(\Phi_{n}(x,t)\) are defined by
\[\Phi_{n}(x,t)=\frac{1}{\sqrt{2^{n}n!}}\exp\biggl\{\frac{im_{0}}{2\hbar}\biggl[\frac{\dot{\rho}(t)}{\rho(t)}+\frac{i}{m_{0}\rho(t)^{2}}\biggr]x^{2}\biggr\}\left[\frac{1}{\pi\hbar\rho(t)^{2}}\right]^{\frac{1}{4}}H_{n}\biggl[\frac{x}{\hbar^{\frac{1}{2}}\rho(t)}\biggr], \tag{5}\]
with \(H_{n}\) being the Hermite polynomial of order \(n\). The function \(\rho(t)\) in Eqs. (4) and (5) is a real parameter which is a solution of the Ermakov-Pinney equation [60; 61]
\[\ddot{\rho}(t)+\omega(t)^{2}\rho(t)=\frac{1}{m_{0}^{2}\rho(t)^{3}}. \tag{6}\]
For the case in which the frequency is always constant [\(\omega(t)=\omega_{0}\)], the solution of Eq. (6) is \(\rho(t)=\rho_{0}\), in which
\[\rho_{0}=\frac{1}{\sqrt{m_{0}\omega_{0}}}, \tag{7}\]
so that \(\Psi_{n}(x,t)\) falls back to the wave function \(\Psi_{n}^{(0)}(x,t)\) of a harmonic oscillator with time-independent mass and frequency.
### Squeezing parameter and related quantities
It is well known that the quantum states \(\Psi_{n}(x,t)\) of the time-dependent oscillator are squeezed (see, for example, Refs. [3; 4]). Thus, we can define the squeezing parameter \(r(t)\) and the squeezing phase \(\phi(t)\) [with \(r(t)\geq 0\) and \(0\leq\phi(t)\leq 2\pi\)] in terms of the parameter \(\rho(t)\) (see, for instance, Ref. [54] and references therein):
\[r(t) = \cosh^{-1}\bigl{\{}(4m_{0}\omega_{0})^{-\frac{1}{2}}[m_{0}^{2} \dot{\rho}(t)^{2}+\rho(t)^{-2} \tag{8}\] \[+\,m_{0}^{2}\omega_{0}^{2}\rho(t)^{2}+2m_{0}\omega_{0}]^{\frac{1}{ 2}}\bigr{\}},\] \[\phi(t) = \cos^{-1}\left\{\frac{1+m_{0}\omega_{0}\rho(t)^{2}-2\cosh^{2}[r( t)]}{2\sinh[r(t)]\cosh[r(t)]}\right\}. \tag{9}\]
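As a purely illustrative aid (ours, not taken from Ref. [54]), both quantities can be evaluated directly from \(\rho(t)\) and \(\dot{\rho}(t)\); the sketch below does so in Python and, for the constant-frequency solution \(\rho=\rho_{0}\), \(\dot{\rho}=0\), returns \(r=0\), as expected.

```python
# Sketch (ours): evaluating the squeezing parameter (8) and phase (9) from rho and
# its time derivative; small numerical guards are added for the r = 0 limit.
import numpy as np

def squeezing(rho, rho_dot, m0, w0):
    arg = np.sqrt(m0**2 * rho_dot**2 + rho**-2 + m0**2 * w0**2 * rho**2 + 2 * m0 * w0)
    r = np.arccosh(max(arg / np.sqrt(4 * m0 * w0), 1.0))   # guard against rounding below 1
    if r == 0.0:
        return 0.0, 0.0                                     # phase is undefined when r = 0
    cos_phi = (1 + m0 * w0 * rho**2 - 2 * np.cosh(r)**2) / (2 * np.sinh(r) * np.cosh(r))
    return r, np.arccos(np.clip(cos_phi, -1.0, 1.0))

m0, w0 = 1.0, 1.0
print(squeezing(rho=1 / np.sqrt(m0 * w0), rho_dot=0.0, m0=m0, w0=w0))   # -> (0.0, 0.0)
```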
In terms of these parameters, the variances of the operators \(\hat{x}\) and \(\hat{p}\) are given by [54]
\[\langle[\Delta\hat{x}]^{2}\rangle(n,t) = \bigl{\{}\cosh^{2}\left[r(t)\right]+2\sinh\left[r(t)\right] \cosh\left[r(t)\right] \tag{10}\] \[\times\cos\left[\phi(t)\right]+\sinh^{2}\left[r(t)\right]\bigr{\}} \langle[\Delta\hat{x}]_{0}^{2}\rangle(n,t),\] \[\langle[\Delta\hat{p}]^{2}\rangle(n,t) = \bigl{\{}\cosh^{2}\left[r(t)\right]-2\sinh\left[r(t)\right]\cosh \left[r(t)\right]\] (11) \[\times\cos\left[\phi(t)\right]+\sinh^{2}\left[r(t)\right]\bigr{\}} \langle[\Delta\hat{p}]_{0}^{2}\rangle(n),\]
where \(\langle[\Delta\hat{x}]_{0}^{2}\rangle(n)=(n+1/2)\,\hbar/m_{0}\omega_{0}\), \(\langle[\Delta\hat{p}]_{0}^{2}\rangle(n)=(n+1/2)\,m_{0}\omega_{0}\hbar\), and the mean energy \(E(n,t)\) of the system is defined by
\[E(n,t) = \frac{1}{2m_{0}}\bigl{\{}\cosh^{2}[r(t)]-2\sinh[r(t)]\cosh[r(t)] \tag{12}\] \[\times\cos[\phi(t)]+\sinh^{2}[r(t)]\bigr{\}}\langle[\Delta\hat{p}] _{0}^{2}\rangle(n)\] \[+\,\frac{1}{2}m_{0}\omega(t)^{2}\bigl{\{}\cosh^{2}[r(t)]+2\cos[ \phi(t)]\] \[\times\sinh[r(t)]\cosh[r(t)]+\sinh^{2}[r(t)]\bigr{\}}\] \[\times\,\langle[\Delta\hat{x}]_{0}^{2}\rangle(n).\]
One can also determine the mean number of excitations \(N(n,t)\) for a system subjected to this time-varying potential. This is given by [62; 54; 63; 64]
\[N(n,t)=n+(2n+1)\sinh^{2}[r(t)]. \tag{13}\]
This means that a system can be excited due to the temporal variations in its frequency. The transition probability \(\mathcal{P}(t)_{\mu\rightarrow\nu}\) between different states \(\mu\) and \(\nu\) is given by
\[\mathcal{P}(t)_{\mu\rightarrow\nu} = \left[\sum_{k=\frac{|\nu-\mu|}{2}}^{\frac{\mu+\nu}{2}}\frac{\left( \begin{array}{c}\frac{\mu+\nu}{2}\\ \frac{\mu+\nu}{2}\end{array}\right)\left(\begin{array}{c}\frac{\mu+\nu+2k-2}{ \frac{\mu+\nu}{2}}\\ \frac{\mu+\nu}{2}\end{array}\right)k!}{\left(k-\frac{|\nu-\mu|}{2}\right)! \bigl{[}N(0,t)+1\bigr{]}^{\frac{k}{2}}}\right]^{2} \tag{14}\] \[\times\,\frac{2^{\mu+\nu}[\min(\mu,\nu)!]^{2}\bigl{[}N(0,t)\bigr{]} ^{\frac{|\nu-\mu|}{2}}}{\mu!\nu!\bigl{[}N(0,t)+1\bigr{]}^{\frac{1}{2}}},\]
for even values of \(|\nu-\mu|\) and \(\mathcal{P}(t)_{\mu\rightarrow\nu}=0\) for odd values of \(|\nu-\mu|\), where \(\min(\mu,\nu)\) is the smallest value between \(\mu\)
and \(\nu\). For an oscillator initially in the fundamental state, the probability of this system being excited to different energy levels is \(\mathcal{P}_{e}(t)=1-\mathcal{P}(t)_{0\,\rightarrow\,0}\)[49].
## III The model
Let us consider the frequency model defined by
\[\omega(t)=\begin{cases}\omega_{0},&t\leq 0,\\ f(t),&0<t\leq\tau,\\ \omega_{0},&t>\tau,\end{cases} \tag{15}\]
where \(\tau\) is the duration of the time interval between frequency jumps, and \(f(t)\) is an arbitrary time-dependent function. In this section, we obtain the general solution of the Ermakov-Pinney equation [Eq. (6)] for this model, and discuss the effects caused by frequency changes on the quantities that depend on the squeezing parameter \(r(t)\) in the interval \(t>\tau\).
### Solution of the \(\rho(t)\) parameter
Due to the form of Eq. (15), the \(\rho(t)\) parameter can be written as [hereafter, the same convention of indices and time-intervals is considered for \(r(t)\) and all related quantities]:
\[\rho(t)=\begin{cases}\rho_{0},&t\leq 0,\\ \rho_{1}(t),&0<t\leq\tau,\\ \rho_{2}(t),&t>\tau,\end{cases} \tag{16}\]
where \(\rho_{0}\) is given by Eq. (7), and \(\rho_{1}(t)\) and \(\rho_{2}(t)\) are calculated next.
For the interval \(0<t\leq\tau\), the general solution for \(\rho_{1}(t)\) is given by [61; 65; 66]
\[\rho_{1}(t)=[A_{1}u_{1}(t)^{2}+B_{1}v_{1}(t)^{2}+2C_{1}u_{1}(t)v_{1}(t)]^{ \frac{1}{2}}, \tag{17}\]
where \(u_{1}(t)\) and \(v_{1}(t)\) are two linearly independent solutions of the classical harmonic oscillator equation
\[\ddot{z}(t)+f(t)^{2}z(t)=0, \tag{18}\]
in which the constants \(A_{1}\), \(B_{1}\) and \(C_{1}\) are determined from the continuity conditions [5; 54]
\[\rho_{1}(t=0)=\frac{1}{\sqrt{m_{0}\omega_{0}}},\quad\dot{\rho}_{1}(t=0)=0, \tag{19}\]
and by the relation
\[A_{1}B_{1}-C_{1}^{2}=\frac{1}{\{m_{0}W[u_{1}(t),v_{1}(t)]_{t=0}\}^{2}}, \tag{20}\]
where \(W[u_{1}(t),v_{1}(t)]=u_{1}(t)\dot{v}_{1}(t)-\dot{u}_{1}(t)v_{1}(t)\) is the Wronskian determinant of \(u_{1}(t)\) and \(v_{1}(t)\). Applying these conditions, one finds that the coefficients \(A_{1}\), \(B_{1}\) and \(C_{1}\) are given by
\[A_{1} = \frac{\omega_{0}^{2}v_{1}^{2}|_{t=0}+\dot{v}_{1}^{2}|_{t=0}}{m_{ 0}\omega_{0}\{W[u_{1},v_{1}]_{t=0}\}^{2}}, \tag{21}\] \[B_{1} = \frac{\omega_{0}^{2}u_{1}^{2}|_{t=0}+\dot{u}_{1}^{2}|_{t=0}}{m_{ 0}\omega_{0}\{W[u_{1},v_{1}]_{t=0}\}^{2}},\] (22) \[C_{1} = -\frac{\omega_{0}^{2}u_{1}|_{t=0}v_{1}|_{t=0}+\dot{u}_{1}|_{t=0} \dot{v}_{1}|_{t=0}}{m_{0}\omega_{0}\{W[u_{1},v_{1}]_{t=0}\}^{2}}. \tag{23}\]
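As a minimal check of Eqs. (17)-(23) (our own sketch, using the simplest choice of a constant intermediate frequency \(f(t)=\omega_{1}\), for which \(u_{1}=\sin\omega_{1}t\) and \(v_{1}=\cos\omega_{1}t\) solve Eq. (18)), the coefficients and \(\rho_{1}(t)\) can be assembled as follows; the initial conditions (19) are recovered automatically.

```python
# Sketch (ours): building rho_1(t) from two independent solutions of (18) through
# the coefficients (21)-(23), here for the constant case f(t) = w1.
import numpy as np

m0, w0, w1 = 1.0, 1.0, 2.0
u1, du1 = lambda t: np.sin(w1 * t), lambda t: w1 * np.cos(w1 * t)
v1, dv1 = lambda t: np.cos(w1 * t), lambda t: -w1 * np.sin(w1 * t)

W0 = u1(0) * dv1(0) - du1(0) * v1(0)                   # Wronskian at t = 0
A1 = (w0**2 * v1(0)**2 + dv1(0)**2) / (m0 * w0 * W0**2)
B1 = (w0**2 * u1(0)**2 + du1(0)**2) / (m0 * w0 * W0**2)
C1 = -(w0**2 * u1(0) * v1(0) + du1(0) * dv1(0)) / (m0 * w0 * W0**2)

rho1 = lambda t: np.sqrt(A1 * u1(t)**2 + B1 * v1(t)**2 + 2 * C1 * u1(t) * v1(t))
print(rho1(0.0), 1 / np.sqrt(m0 * w0))                 # both equal rho_0, as required by (19)
```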
For the interval \(t>\tau\), the general solution for \(\rho_{2}(t)\) has the form
\[\rho_{2}(t)=[A_{2}u_{2}(t)^{2}+B_{2}v_{2}(t)^{2}+2C_{2}u_{2}(t)v_{2}(t)]^{ \frac{1}{2}}, \tag{24}\]
where \(u_{2}(t)=\sin(\omega_{0}t)\) and \(v_{2}(t)=\cos(\omega_{0}t)\). By means of the continuity conditions
\[\rho_{1}(t=\tau)=\rho_{2}(t=\tau),\quad\dot{\rho}_{1}(t=\tau)=\dot{\rho}_{2}( t=\tau), \tag{25}\]
and the relation
\[A_{2}B_{2}-C_{2}^{2}=\frac{1}{(m_{0}\omega_{0})^{2}}, \tag{26}\]
one can express the coefficients \(A_{2}\), \(B_{2}\) and \(C_{2}\) in terms of the parameter \(\rho_{1}(t)\) and its time derivative \(\dot{\rho}_{1}(t)\), both calculated at \(t=\tau\):
\[A_{2} = \rho_{1}^{2}|_{t=\tau}+\Big{[}\frac{\rho_{1}|_{t=\tau}\dot{\rho} _{1}|_{t=\tau}}{\omega_{0}}-C_{2}\Big{]}\cot(\omega_{0}\tau), \tag{27}\] \[B_{2} = \rho_{1}^{2}|_{t=\tau}-\Big{[}\frac{\rho_{1}|_{t=\tau}\dot{\rho} _{1}|_{t=\tau}}{\omega_{0}}+C_{2}\Big{]}\tan(\omega_{0}\tau),\] (28) \[C_{2} = \frac{\{m_{0}^{2}[\omega_{0}^{2}\rho_{1}^{4}|_{t=\tau}-\rho_{1} ^{2}|_{t=\tau}\dot{\rho}_{1}^{2}|_{t=\tau}]-1\}\sin(2\omega_{0}\tau)}{2m_{0}^ {2}\omega_{0}^{2}\rho_{1}^{2}|_{t=\tau}}\] (29) \[+\ \frac{\rho_{1}|_{t=\tau}\dot{\rho}_{1}|_{t=\tau}\cos(2\omega_{0} \tau)}{\omega_{0}}.\]
### Squeezing parameter and related quantities for \(t>\tau\)
It is interesting to analyze the consequences generated by the intermediate frequency \(f(t)\) on the dynamics of the system for \(t>\tau\). From Eqs. (8) and (24), we have that the squeezing parameter \(r_{2}(t)\) is given by
\[r_{2}(t) = \cosh^{-1}\bigl{\{}(4m_{0}\omega_{0})^{-\frac{1}{2}}[m_{0}^{2} \dot{\rho}_{1}^{2}|_{t=\tau}+\rho_{1}^{-2}|_{t=\tau} \tag{30}\] \[+\ m_{0}^{2}\omega_{0}^{2}\rho_{1}^{2}|_{t=\tau}+2m_{0}\omega_{0} ]^{\frac{1}{2}}\bigr{\}}.\]
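For illustration only (our sketch, continuing the constant-frequency example used after Eqs. (21)-(23)), Eq. (30) can be evaluated in closed form for \(f(t)=\omega_{1}\); for \(\tau=k\pi/\omega_{1}\) it returns \(r_{2}=0\), anticipating the equivalence recovered analytically in Sec. V.1.

```python
# Sketch (ours): r_2 of Eq. (30) for a constant intermediate frequency f(t) = w1,
# with rho_1 from the coefficients (21)-(23) for u1 = sin(w1 t), v1 = cos(w1 t).
import numpy as np

def r2_constant_jump(m0, w0, w1, tau):
    A1, B1 = w0 / (m0 * w1**2), 1 / (m0 * w0)
    rho_sq = A1 * np.sin(w1 * tau)**2 + B1 * np.cos(w1 * tau)**2
    rho = np.sqrt(rho_sq)
    drho = (A1 - B1) * w1 * np.sin(w1 * tau) * np.cos(w1 * tau) / rho
    arg = np.sqrt(m0**2 * drho**2 + 1 / rho_sq + m0**2 * w0**2 * rho_sq + 2 * m0 * w0)
    return np.arccosh(max(arg / np.sqrt(4 * m0 * w0), 1.0))   # guard against rounding

m0, w0, w1 = 1.0, 1.0, 2.0
print(r2_constant_jump(m0, w0, w1, tau=np.pi / w1))   # ~ 0: no residual squeezing
print(r2_constant_jump(m0, w0, w1, tau=0.4))          # > 0: squeezing survives for t > tau
```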
Note that the squeezing parameter is time-independent for \(t>\tau\). Therefore, all quantities that depend only on \(r_{2}(t)\) will be time-independent when the oscillator frequency returns to \(\omega_{0}\). Besides this, the squeezing phase \(\phi_{2}(t)\) is given by
\[\phi_{2}(t) = \cos^{-1}\biggl{\{}\frac{\mathrm{sech}[r_{2}(t)]}{4m_{0}\omega_{0} \sinh[r_{2}(t)]}\bigl{[}\bigl{(}m_{0}^{2}\omega_{0}^{2}\rho_{1}^{2}|_{t=\tau} \tag{31}\] \[-\ m_{0}^{2}\dot{\rho}_{1}^{2}|_{t=\tau}-\rho_{1}^{-2}|_{t=\tau} \bigr{)}\cos[2\omega_{0}(t-\tau)]\] \[+\ 2m_{0}^{2}\omega_{0}\rho_{1}|_{t=\tau}\dot{\rho}_{1}|_{t=\tau}\sin[2 \omega_{0}(t-\tau)]]\biggr{\}}.\]
As can be seen, in contrast with \(r_{2}(t)\), \(\phi_{2}(t)\) is time-dependent. Consequently, \(\langle[\Delta\hat{x}]^{2}_{2}\rangle(n,t)\) and \(\langle[\Delta\hat{p}]^{2}_{2}\rangle(n,t)\) will also depend explicitly on time for \(t>\tau\), since these quantities depend on \(\phi_{2}(t)\)[54].
Furthermore, from Eqs. (12), (30) and (31), we obtain the mean energy \(E_{2}(n,t)\):
\[E_{2}(n,t)=\{2\cosh^{2}[r_{2}(t)]-1\}E_{0}(n), \tag{32}\]
where \(E_{0}(n)=(n+1/2)\hbar\omega_{0}\). Notably, the mean energy of the system for \(t>\tau\) is time-independent, as expected. That is, even if the expression for \(E(n,t)\) in Eq. (12) contains squeezing-phase-dependent terms, the time contribution of \(\phi(t)\) will effectively cancel out if the frequency returns to \(\omega_{0}\). However, it can be seen that \(E_{2}(n,t)\geq E_{0}(n)\), even though the initial and final frequencies are the same [54]. Using Eqs. (13), (30) and (32), one can determine the mean number of excitations \(N_{2}(n,t)\) as
\[N_{2}(n,t)=\frac{E_{2}(n,t)-E_{0}(0)}{2E_{0}(0)}. \tag{33}\]
We highlight that there is excitation even for \(n=0\) since \(E_{2}(0,t)>E_{0}(0)\), which means that, under jumps in its frequency, a quantum oscillator initially in its ground state can become excited [54, 36]. Given this, the transition probability \(\mathcal{P}_{2}(t)_{\mu\to\nu}\) between different states is obtained by substituting Eq. (33) (with \(n=0\)) into Eq. (14). So, even when the frequency returns to \(\omega_{0}\) after an instant \(\tau\), one can find a non-zero \(\mu\to\nu\) transition probability. This is a direct consequence of Eq. (33).
The Eqs. (30)-(33) generalize, for a generic intermediate frequency \(f(t)\), the results found in Ref. [54] for the particular case where \(f(t)=\omega_{1}\).
## IV General condition for squeezing equivalence of oscillators under different frequency jumps
In this section, we discuss how to set up two different sequences of sudden frequency jumps, so that both result in the same squeezing parameter \(r(t)\) for \(t>\tau\). Thus, beyond the sequence described in Eq. (15), let us consider another sequence, with a different intermediate time-dependent frequency \(f^{\prime}(t)\):
\[\omega^{\prime}(t)=\begin{cases}\omega_{0},&t\leq 0,\\ f^{\prime}(t),&0<t\leq\tau,\\ \omega_{0},&t>\tau.\end{cases} \tag{34}\]
The associated parameter \(\rho^{\prime}(t)\) is given by
\[\rho^{\prime}(t)=\begin{cases}\rho_{0},&t\leq 0,\\ \rho^{\prime}_{1}(t),&0<t\leq\tau,\\ \rho^{\prime}_{2}(t),&t>\tau.\end{cases} \tag{35}\]
The solutions for \(\rho^{\prime}_{1}(t)\) and \(\rho^{\prime}_{2}(t)\) are obtained following the same steps mentioned in Sec. III.1, but replacing \(f(t)\) by \(f^{\prime}(t)\) in Eq. (18). We represent these solutions by inserting a prime superscript in all objects with index \(1\) or \(2\) in Eqs. (17) to (29), excepting in \(u_{2}(t)\) and \(v_{2}(t)\), which will remain the same functions. For example, for \(\rho^{\prime}_{2}(t)\), we write
\[\rho^{\prime}_{2}(t)=\left[A^{\prime}_{2}u_{2}(t)^{2}+B^{\prime}_{2}v_{2}(t)^ {2}+2C^{\prime}_{2}u_{2}(t)v_{2}(t)\right]^{\frac{1}{2}}. \tag{36}\]
Since \(f(t)\neq f^{\prime}(t)\), in general \(\rho^{\prime}_{2}(t)\neq\rho_{2}(t)\). On the other hand, even when \(f(t)\neq f^{\prime}(t)\), one can look for sets of specific values of the characteristic parameters contained in the functions \(f(t)\) and \(f^{\prime}(t)\), and also \(\tau\), leading to
\[\rho_{2}(t)=\rho^{\prime}_{2}(t). \tag{37}\]
Comparing Eqs. (24) and (36), one can see that these specific values can be found by solving the following system of equations:
\[A_{2}=A^{\prime}_{2},\ \ \ B_{2}=B^{\prime}_{2},\ \ \ C_{2}=C^{\prime}_{2}. \tag{38}\]
Their solutions lead to Eq. (37), which implies equivalence of the squeezing parameter for \(t>\tau\), that is,
\[r_{2}(t)=r^{\prime}_{2}(t). \tag{39}\]
This, in turn, means that all physical quantities investigated here, depending on the squeezing parameter [see Eqs. (30)-(33)], will be the same for \(t>\tau\). Thus, solutions of Eq. (38) show how to set up two different sequences of sudden frequency jumps in a quantum oscillator, so that they result in a same value for the squeezing parameter or, in other words, in a squeezing equivalence of oscillators.
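In practice, this search can be carried out numerically. The sketch below (ours; it integrates Eq. (18) for a generic \(f(t)\) rather than using closed-form solutions, and all names are illustrative) returns the residual of the system (38) for two candidate intermediate frequencies; feeding this residual to a standard root-finder such as scipy.optimize.fsolve, with the dimensionless parameter combinations quoted in the tables of Sec. V as unknowns, yields parameter sets of the kind listed there.

```python
# Sketch (ours): residual of the system (38) for two intermediate frequencies,
# with rho_1(tau) obtained by numerical integration of (18).
import numpy as np
from scipy.integrate import solve_ivp

m0 = 1.0

def rho1_at_tau(f, tau, w0):
    """rho_1(tau) and its time derivative, from two numerical solutions of (18)."""
    def rhs(t, y):                               # y = (u, u', v, v')
        u, du, v, dv = y
        return [du, -f(t) ** 2 * u, dv, -f(t) ** 2 * v]
    sol = solve_ivp(rhs, (0.0, tau), [1.0, 0.0, 0.0, 1.0], rtol=1e-10, atol=1e-12)
    u, du, v, dv = sol.y[:, -1]
    # coefficients (21)-(23) for u(0)=1, u'(0)=0, v(0)=0, v'(0)=1 (Wronskian = 1)
    A1, B1, C1 = 1.0 / (m0 * w0), w0 / m0, 0.0
    rho_sq = A1 * u ** 2 + B1 * v ** 2 + 2 * C1 * u * v
    rho = np.sqrt(rho_sq)
    drho = (A1 * u * du + B1 * v * dv + C1 * (u * dv + du * v)) / rho
    return rho, drho

def coeffs_A2B2C2(f, tau, w0):
    """Coefficients (27)-(29) characterising rho_2(t) for t > tau."""
    r, dr = rho1_at_tau(f, tau, w0)
    C2 = ((m0 ** 2 * (w0 ** 2 * r ** 4 - r ** 2 * dr ** 2) - 1) * np.sin(2 * w0 * tau)
          / (2 * m0 ** 2 * w0 ** 2 * r ** 2) + r * dr * np.cos(2 * w0 * tau) / w0)
    A2 = r ** 2 + (r * dr / w0 - C2) / np.tan(w0 * tau)   # assumes tan(w0*tau) != 0
    B2 = r ** 2 - (r * dr / w0 + C2) * np.tan(w0 * tau)
    return np.array([A2, B2, C2])

def residual(f, f_prime, tau, w0):
    """Vanishes exactly when the two jump sequences satisfy the system (38)."""
    return coeffs_A2B2C2(f, tau, w0) - coeffs_A2B2C2(f_prime, tau, w0)
```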
## V Applications
In this section, we solve the corresponding system of equations (38) for some pairs of different sequences of jumps, taking as a basis typical functions \(f(t)\) and \(f^{\prime}(t)\) such as those considered in Refs. [59, 35, 9, 58]. In this way, we exemplify some situations exhibiting squeezing equivalence of oscillators under different frequency jumps. We remark that, since the expressions for the coefficients \(A_{2}\), \(B_{2}\), \(C_{2}\), \(A^{\prime}_{2}\), \(B^{\prime}_{2}\) and \(C^{\prime}_{2}\) are too lengthy to display (except for the case discussed in Sec. V.1), we write explicitly in each example only the functions \(u_{1}(t)\), \(v_{1}(t)\), \(u^{\prime}_{1}(t)\) and \(v^{\prime}_{1}(t)\), from which the coefficients are calculated according to Sec. III.1. Moreover, except for the case analyzed in Sec. V.1 (which has a direct solution), the system of equations (38) forms a set of transcendental equations, so that only numerical solutions are possible.
### Case \(f(t)=\omega_{1}\) and \(f^{\prime}(t)=\omega_{0}\): recovering the equivalence found by Janszky and Adam
For the case where \(f(t)=\omega_{1}\) and \(f^{\prime}(t)=\omega_{0}\), the coefficients \(A_{2}\), \(B_{2}\) and \(C_{2}\) are given by [54]
\[A_{2} = \frac{1}{m_{0}\omega_{0}^{3}\omega_{1}^{2}}\Big{\{}\omega_{0}^{2} \omega_{1}^{2}+\big{[}(\omega_{0}^{4}-\omega_{1}^{4})\sin^{2}(\omega_{0}\tau)- \omega_{0}^{2}\omega_{1}^{2} \tag{40}\] \[+\,\omega_{1}^{4}\big{]}\sin^{2}(\omega_{1}\tau)+\omega_{0}\omega_ {1}(\omega_{0}^{2}-\omega_{1}^{2})\sin(2\omega_{0}\tau)\] \[\times\sin(2\omega_{1}\tau)/2\Big{\}},\] \[B_{2} = \frac{1}{m_{0}\omega_{0}^{3}\omega_{1}^{2}}\Big{\{}\omega_{0}^{2 }\omega_{1}^{2}+\big{[}(\omega_{1}^{4}-\omega_{0}^{4})\sin^{2}(\omega_{0}\tau )-\omega_{0}^{2}\omega_{1}^{2}\] (41) \[+\,\omega_{0}^{4}\big{]}\sin^{2}(\omega_{1}\tau)-\omega_{0}\omega_ {1}(\omega_{0}^{2}-\omega_{1}^{2})\sin(2\omega_{0}\tau)\] \[\times\sin(2\omega_{1}\tau)/2\Big{\}},\] \[C_{2} = \frac{1}{m_{0}\omega_{0}^{3}\omega_{1}^{2}}\Big{\{}\big{[}( \omega_{0}^{2}+\omega_{1}^{2})\sin(2\omega_{0}\tau)\sin(\omega_{1}\tau)/2\] (42) \[-\,2\omega_{0}\omega_{1}[\sin^{2}(\omega_{0}\tau)-1/2]\cos(\omega_ {1}\tau)\big{]}\] \[\times(\omega_{0}^{2}-\omega_{1}^{2})\sin(\omega_{1}\tau)\Big{\}},\]
and the coefficients \(A_{2}^{\prime}\), \(B_{2}^{\prime}\) and \(C_{2}^{\prime}\) have the form
\[A_{2}^{\prime}=B_{2}^{\prime}=\frac{1}{m_{0}\omega_{0}},\quad C_{2}^{\prime}=0. \tag{43}\]
For a squeezing equivalence of oscillators to occur, the system of equations (38) must be satisfied. In this case, the only way this condition can be satisfied (beyond the trivial case \(\omega_{1}=\omega_{0}\)) is if \(\sin(\omega_{1}\tau)=0\), which implies that \(\tau=\tau_{k}\), where \(\tau_{k}=k\pi/\omega_{1}\) (\(k\in\mathbb{N}\)). Consequently, for \(\tau=\tau_{k}\), we find \(r_{2}(t)=r_{2}^{\prime}(t)=0\). This is the equivalence found by Janszky and Adam in Ref. [36].
### Case \(f(t)=\omega_{1}\sqrt{1+\cos(\beta t)}\) and \(f^{\prime}(t)=0\)
Now, we investigate a model with
\[f(t)=\omega_{1}\sqrt{1+\cos(\beta t)}, \tag{44}\]
in which \(\beta\) (\(\beta\geq 0\)) is a parameter with frequency dimension. The function \(f(t)\) in Eq. (44) was used, for instance, to describe the motion of a quantum particle in a Paul trap [9; 35; 65]. The two linearly independent solutions of Eq. (18) for the function (44) are given by
\[u_{1}(t)=M_{C}\left(\frac{4\omega_{1}^{2}}{\beta^{2}},-\frac{2 \omega_{1}^{2}}{\beta^{2}},\frac{\beta t}{2}\right), \tag{45}\] \[v_{1}(t)=M_{S}\left(\frac{4\omega_{1}^{2}}{\beta^{2}},-\frac{2 \omega_{1}^{2}}{\beta^{2}},\frac{\beta t}{2}\right), \tag{46}\]
where \(M_{C}\) and \(M_{S}\) are the even and odd Mathieu functions [65], respectively.
When \(f^{\prime}(t)=0\), the two linearly independent solutions of the corresponding Eq. (18) are given by
\[u_{1}^{\prime}(t)=1,\quad v_{1}^{\prime}(t)=\gamma t, \tag{47}\]
where \(\gamma\) is a constant (which has frequency dimension) that does not influence the final result of \(\rho(t)\). This case (as well as the cases that are discussed in the Secs. V.3 and V.4) is equivalent to making the system free in the interval \(0<t\leq\tau\).
For \(f(t)\) given by Eq. (44) and \(f^{\prime}(t)=0\), some possible solutions for the system of equations (38) are shown in Table 1.
For example, for \(\beta\tau=5.1059600\), \(\omega_{1}/\beta=2.9193661\) and \(\omega_{0}\tau=2.1538655\), we find \(r_{2}(t)=r_{2}^{\prime}(t)=0.9347407\). For this example, we show in Fig. 1 the time evolution of \(\omega(t)/\omega_{0}\) and \(\omega^{\prime}(t)/\omega_{0}\) [Fig. 1(a)], \(\rho(t)/\rho_{0}\) and \(\rho^{\prime}(t)/\rho_{0}\) [Fig. 1(b)], and of the squeezing parameters \(r(t)\) and \(r^{\prime}(t)\) [Fig. 1(c)].
These results indicate that, for \(t>\tau\), a system whose intermediate frequency is non-zero \([f(t)\neq 0]\) can behave like a system that was free in the interval \(0<t\leq\tau\) [\(f^{\prime}(t)=0\)], even though it has never been free in the interval \(0<t\leq\tau\). The description of particles or free systems is important, for example, in monitoring the position of free masses, being relevant in applications involving gravitational wave detection [67; 68; 69]. Moreover, in the study of ideal quantum gases confined in harmonic traps, free expansions of these gases can be performed considering that \(\omega(t\leq 0)=\omega_{0}\) and \(\omega(t>0)=0\), making it possible to analyze, for instance, the fermionization of a one-dimensional gas of impenetrable bosons (Tonks-Girardeau gas) [70; 71].
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(\beta\tau\) & \(\omega_{1}/\beta\) & \(\omega_{0}\tau\) \\ \hline \hline
3.2015175 & 2.0185614 & 2.0559670 \\ \hline
5.1059600 & 2.9193661 & 2.1538655 \\ \hline
8.7801658 & 2.9601533 & 3.6972349 \\ \hline
9.3040419 & 2.9227866 & 4.7317833 \\ \hline \end{tabular}
\end{table}
Table 1: Some examples of solutions of the system of equations (38), for \(f(t)\) given by Eq. (44) and \(f^{\prime}(t)=0\).
### Case \(f(t)=\omega_{1}\sqrt{1+\beta t}\) and \(f^{\prime}(t)=0\)
We now investigate the model defined by
\[f(t)=\omega_{1}\sqrt{1+\beta t}. \tag{48}\]
In Ref. [35], a similar model to Eq. (48) was used, for example, to study how states with nonclassical properties can be generated from a harmonic oscillator with time-dependent frequency. The two linearly independent solutions of Eq. (18) for the function (48) are
\[u_{1}(t) =\mathrm{Ai}\biggl{[}-\biggl{(}\frac{\omega_{1}}{\beta}\biggr{)} ^{\frac{3}{2}}(1+\beta t)\biggr{]}, \tag{49}\] \[v_{1}(t) =\mathrm{Bi}\biggl{[}-\biggl{(}\frac{\omega_{1}}{\beta}\biggr{)} ^{\frac{3}{2}}(1+\beta t)\biggr{]}, \tag{50}\]
where \(\mathrm{Ai}\) and \(\mathrm{Bi}\) are the Airy functions of the first and second kind, respectively.
For \(f(t)\) given by Eq. (48) and \(f^{\prime}(t)=0\), some possible solutions for the system of equations (38) are shown in Table 2.
For example, for \(\beta\tau=9.5930013,\omega_{1}/\beta=1.0263230\) and \(\omega_{0}\tau=1.9429223\), we find \(r_{2}(t)=r_{2}^{\prime}(t)=0.8610489\). For this example, we show in Fig. 3 the time evolution of \(\omega(t)/\omega_{0}\) and \(\omega^{\prime}(t)/\omega_{0}\) [Fig. 3(a)], \(\rho(t)/\rho_{0}\) and \(\rho^{\prime}(t)/\rho_{0}\) [Fig. 3(b)], and of the squeezing parameters \(r(t)\) and \(r^{\prime}(t)\) [Fig. 3(c)].
### Case \(f(t)=\omega_{1}\exp(-\beta t)\) and \(f^{\prime}(t)=0\)
Now, we investigate a model with
\[f(t)=\omega_{1}\exp(-\beta t). \tag{51}\]
In Refs. [58; 59], a similar function is used, for instance, to show how an ion trap can be used to simulate quantum fields in an expanding universe. The two linearly independent solutions of Eq. (18) associated with Eq. (51) are given by
\[u_{1}(t)=J_{0}\left(\frac{\omega_{1}e^{-\beta t}}{\beta}\right),\ \ v_{1}(t)=Y_{0}\left(\frac{\omega_{1}e^{-\beta t}}{\beta}\right), \tag{52}\]
in which \(J_{0}\) and \(Y_{0}\) are the Bessel functions of the first and second kind (both of order \(0\)), respectively.
For \(f(t)\) given by Eq. (51) and \(f^{\prime}(t)=0\), some possible solutions for the system of equations (38) are shown in Table 3. For example, for \(\beta\tau=2.6531081\), \(\omega_{1}/\beta=3.2818124\) and \(\omega_{0}\tau=2.9526195\), we find \(r_{2}(t)=r_{2}^{\prime}(t)=1.1815499\). For this example, we show in Fig. 3 the time evolution of \(\omega(t)/\omega_{0}\) and \(\omega^{\prime}(t)/\omega_{0}\) [Fig. 3(a)], \(\rho(t)/\rho_{0}\) and \(\rho^{\prime}(t)/\rho_{0}\) [Fig. 3(b)], and of the squeezing parameters \(r(t)\) and \(r^{\prime}(t)\) [Fig. 3(c)].
### Case \(f(t)=\omega_{1}\sqrt{1+\cos(\beta t)}\) and \(f^{\prime}(t)=\omega_{1}\sqrt{1+\beta t}\)
For the case where \(f(t)\) is given by Eq. (44) and \(f^{\prime}(t)\) is given by Eq. (48), some possible solutions for the system of equations (38) are shown in Table 4.
For example, for \(\beta\tau=4.8792077\), \(\omega_{1}/\beta=2.3935311\) and \(\omega_{0}\tau=3.5686165\), we find \(r_{2}(t)=r_{2}^{\prime}(t)=0.8820916\). For this example, we show in Fig. 4 the time evolution of \(\omega(t)/\omega_{0}\) and \(\omega^{\prime}(t)/\omega_{0}\) [Fig. 4(a)], \(\rho(t)/\rho_{0}\) and \(\rho^{\prime}(t)/\rho_{0}\) [Fig. 4(b)], and of the squeezing
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(\beta\tau\) & \(\omega_{1}/\beta\) & \(\omega_{0}\tau\) \\ \hline \hline
2.7258218 & 0.9260177 & 1.0561689 \\ \hline
3.8068925 & 1.1101122 & 1.1640887 \\ \hline
9.5930013 & 1.0263230 & 1.9429223 \\ \hline
12.699861 & 0.7863563 & 2.5252892 \\ \hline \end{tabular}
\end{table}
Table 2: Some examples of solutions of the system of equations (38), for \(f(t)\) given by Eq. (48) and \(f^{\prime}(t)=0\).
parameters \(r(t)\) and \(r^{\prime}(t)\) [Fig. 4(c)].
### Case \(f(t)=\omega_{1}\sqrt{1+\cos(\beta t)}\) and \(f^{\prime}(t)=\omega_{1}\exp(-\beta t)\)
For \(f(t)\) given by Eq. (44) and \(f^{\prime}(t)\) given by Eq. (51), some possible solutions for the system of equations (38) are shown in Table 5. For example, for \(\beta\tau=4.4298520\), \(\omega_{1}/\beta=0.6737269\) and \(\omega_{0}\tau=8.1922966\), we find \(r_{2}(t)=r_{2}^{\prime}(t)=0.8602827\). For this example, we show in Fig. 5 the time evolution of \(\omega(t)/\omega_{0}\) and \(\omega^{\prime}(t)/\omega_{0}\) [Fig. 5(a)], \(\rho(t)/\rho_{0}\) and \(\rho^{\prime}(t)/\rho_{0}\) [Fig. 5(b)], and of the squeezing parameters \(r(t)\) and \(r^{\prime}(t)\) [Fig. 5(c)].
### Case \(f(t)=\omega_{1}\sqrt{1+\beta t}\) and \(f^{\prime}(t)=\omega_{1}\exp(-\beta t)\)
For \(f(t)\) given by Eq. (48) and \(f^{\prime}(t)\) given by Eq. (51), some possible solutions for the system of equations (38) are shown in Table 6.
For example, for \(\beta\tau=1.8329388\), \(\omega_{1}/\beta=5.0872912\) and \(\omega_{0}\tau=1.7022843\), we find \(r_{2}(t)=r_{2}^{\prime}(t)=0.3952816\). For this example, we show in Fig. 6 the time evolution of \(\omega(t)/\omega_{0}\) and \(\omega^{\prime}(t)/\omega_{0}\) [Fig. 6(a)], \(\rho(t)/\rho_{0}\) and
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(\beta\tau\) & \(\omega_{1}/\beta\) & \(\omega_{0}\tau\) \\ \hline \hline
1.6937302 & 1.3433719 & 1.9604572 \\ \hline
4.2998749 & 0.6576699 & 7.1294955 \\ \hline
4.4298520 & 0.6737269 & 8.1922966 \\ \hline
8.8721273 & 7.5091655 & 13.318275 \\ \hline \end{tabular}
\end{table}
Table 5: Some examples of solutions of the system of equations (38) for \(f(t)\) given by Eq. (44) and \(f^{\prime}(t)\) given by Eq. (51).
and \(\rho^{\prime}(t)/\rho_{0}\) [Fig. 6(b)], and of the squeezing parameters \(r(t)\) and \(r^{\prime}(t)\) [Fig. 6(c)].
## VI Final remarks
In the present paper, in Sec. III, we show that Eqs. (30)-(33) extend, for a generic intermediate frequency \(f(t)\) in Eq. (15), the results found in Ref. [54] for the particular case where \(f(t)=\omega_{1}\). Taking as basis this generalization, we showed in Sec. IV how to set up different sequences of two sudden frequency jumps [here described by Eqs. (15) and (34)] in a quantum harmonic oscillator, so that, despite having different intermediate frequencies \([f(t)\neq f^{\prime}(t)]\), they result in a same squeezing parameter \(r_{2}(t)\) [Eq. (39)] after the jumps cease at an instant \(t=\tau\). To obtain prescriptions for achieving such equivalence, we looked for conditions under which the \(\rho(t)\) parameter [a real solution of the Ermakov-Pinney equation (6) that arose from using the Lewis-Riesenfeld dynamical invariant method] of two systems with different intermediate frequencies is the same after the jumps cease [Eq. (37)]. We showed that such prescriptions can be obtained by looking for sets of specific values of the characteristic parameters contained in the functions \(f(t)\) and \(f^{\prime}(t)\), and also \(\tau\), leading to solutions of Eq. (38).
As applications, in Sec. V we found sets of specific values of such characteristic parameters and \(\tau\), solving Eq. (38) for some pairs of different sequences of sudden jumps, taking as a basis typical functions \(f(t)\) and \(f^{\prime}(t)\) such as those considered in Refs. [9; 35; 58; 59]. Examples of such sets of parameters are given in Tables 1-6, and the squeezing equalities [\(r_{2}(t)=r_{2}^{\prime}(t)\)] are exhibited in Figs. 1-6. For instance, the different sequences of sudden jumps shown in Fig. 1(a) lead to the same squeezing parameter \(r_{2}(t)\) shown in Fig. 1(c) (and similarly for Figs. 2-6).
The equivalence between sequences of sudden jumps discussed here, in the sense that they lead to the same squeezing parameter after the jumps cease [and, consequently, to the same physical quantities that depend on it (see Secs. III and IV)], can be of interest, since experiments involving quantum harmonic oscillators under sequences of sudden jumps in their frequencies have already been performed [47; 52].
###### Acknowledgements.
S.S.C. and L.Q. were supported by Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior - Brazil (CAPES), Finance Code 001.
|
2306.13315 | Abstractive Text Summarization for Resumes With Cutting Edge NLP
Transformers and LSTM | Text summarization is a fundamental task in natural language processing that
aims to condense large amounts of textual information into concise and coherent
summaries. With the exponential growth of content and the need to extract key
information efficiently, text summarization has gained significant attention in
recent years. In this study, LSTM and pre-trained T5, Pegasus, BART and
BART-Large model performances were evaluated on the open source dataset (Xsum,
CNN/Daily Mail, Amazon Fine Food Review and News Summary) and the prepared
resume dataset. This resume dataset consists of many information such as
language, education, experience, personal information, skills, and this data
includes 75 resumes. The primary objective of this research was to classify
resume text. Various techniques such as LSTM, pre-trained models, and
fine-tuned models were assessed using a dataset of resumes. The BART-Large
model fine-tuned with the resume dataset gave the best performance. | Öykü Berfin Mercan, Sena Nur Cavsak, Aysu Deliahmetoglu, Senem Tanberk | 2023-06-23T06:33:20Z | http://arxiv.org/abs/2306.13315v1 | # Abstractive Text Summarization for Resumes With Cutting Edge NLP Transformers and LSTM
# Abstractive Text Summarization for Resumes With Cutting Edge NLP Transformers and LSTM
Oyku Berfin Mercan, Sena Nur Cavsak, Aysu Deliahmetoglu (Intern), Senem Tanberk
[email protected], [email protected], [email protected], 0000-0003-1668-0365
Huawei Turkey Research and Development Center, Istanbul
###### Abstract
Text summarization is a fundamental task in natural language processing that aims to condense large amounts of textual information into concise and coherent summaries. With the exponential growth of content and the need to extract key information efficiently, text summarization has gained significant attention in recent years. In this study, LSTM and pre-trained T5, Pegasus, BART and BART-Large model performances were evaluated on the open source dataset (Xsum, CNN/Daily Mail, Amazon Fine Food Review and News Summary) and the prepared resume dataset. This resume dataset consists of many information such as language, education, experience, personal information, skills, and this data includes 75 resumes. The primary objective of this research was to classify resume text. Various techniques such as LSTM, pre-trained models, and fine-tuned models were assessed using a dataset of resumes. The BART-Large model fine-tuned with the resume dataset gave the best performance.
Abstractive Text Summarization, Pre-trained Language Models, ROUGE
## I Introduction
In the recent year, Natural Language Processing (NLP) becomes a popular research area. With the help of advances in technology, a large number of information and documents are collected in the form of text data in the digital world rapidly. In order to obtain meaningful results from text data efficiently, researchers study to develop NLP tasks such as text classification, question answering, text generation and text summarization. The task of text summarization in NLP has become a research of interest, considering the rapid increase in the number of documents, it is important to minimize the time wasted and unnecessary information density in the process of obtaining useful information from the document. Text summarization aims to create, short and accurate summary from the large text data without human intervention. Text summarization is separated into two main types; extractive text summarization and abstractive text summarization. Extractive text summarization creates summary with selected important information using the same words from main text. Differently, abstractive text summarization creates better summary with different words and flexible representations as humans. With the advance in deep learning, many studies have focused on abstractive text summarization [1, 2]. Song et al [1] developed an LSTM-CNN-based Abstracting Text Summarization model. Firstly, they extracted the sentences from the source sentences with the Multiple Order Semantic Parsing model. Then they created text summaries using the deep learning method. The authors used CNN/Daily Mail and Gigaword data for this model and compared performance. Hanunggul et al. [2] examined the effect of local attention in the LSTM model to generate abstract text summaries. They used the Amazon Fine Food Review dataset and evaluated the performance of the model using the GloVe dataset. The findings showed that the ROUGE-1 outperformed the global attention-based model as it produced more words in the actual summary. On the other hand, the local attention-based model achieved higher ROUGE-2 scores because it generated more word pairs found in the actual summary.
Abstractive text summarization achieves strong performance with transformer-based pre-trained language models [3, 4, 5, 6, 7, 8, 9, 10]. Zolotareva et al. [3] applied sequence-to-sequence recurrent neural networks and transfer learning techniques with the Text-to-Text Transfer Transformer to the text summarization problem, developing a transfer-learning-based model for abstractive text summarization with the Transformer/T5 framework on the BBC News dataset. Ranganathan and Abuka [4] introduced a text summarization method based on the transformer architecture, specifically the Text-to-Text Transfer Transformer (T5) model, with the goal of condensing long texts into concise but informative summaries. The researchers did this for the UC Irvine (UCI) drug reviews dataset by training and testing the T5 model on human-generated summaries. In addition, PEGASUS improvements were made using the T5 model on the BBC News dataset. Zhang et al. [5] proposed the PEGASUS model for abstractive text summarization. The researchers explored different methods for selecting gap sentences and found that choosing principal sentences yielded the best results; by optimizing the model's configuration, they achieved state-of-the-art performance on 12 datasets. Lalitha et al. [6] used various abstractive summarization techniques, including T5, BART and PEGASUS, to extract essential information from medical documents and provide concise summaries suited to users' interests, evaluating the models with ROUGE metrics; among the tested models, the most effective was PEGASUS with a ROUGE score of 0.37. In [7], Borah et al. evaluated the abstractive text summarization performance of T5 on the open-source CNN/Daily Mail, MSMO and XSum datasets, showing that T5 gives short and fluent summaries, with the best results obtained on the MSMO dataset. Another study [8] compared the abstractive text summarization performance of the pre-trained BART, T5 and PEGASUS models on the BBC News dataset; pre-trained models from HuggingFace were fine-tuned and evaluated for summarization, and the experiments showed that the T5 model gives the highest ROUGE score. Another similar study is [9],
2303.08855 | Refined Bohr inequality for functions in $\mathbb{C}^n$ and in complex
Banach spaces | In this paper, we first obtain a refined version of the Bohr inequality of
norm-type for holomorphic mappings with lacunary series on the polydisk in
$\mathbb{C}^n$ under some restricted conditions. Next, we determine the refined
version of the Bohr inequality for holomorphic functions defined on a balanced
domain $ G $ of a complex Banach space $ X $ that take values in the unit disk
$ \mathbb{D} $. Furthermore, as a consequence of one of these results, we obtain
a refined version of the Bohr-type inequalities for harmonic functions $
f=h+\bar{g} $ defined on a balanced domain $ G\subset X $. All the results are
proved to be sharp. | Sabir Ahammed, Molla Basir Ahamed | 2023-03-15T18:15:45Z | http://arxiv.org/abs/2303.08855v1 | # Refined Bohr Inequality for Functions in \(\mathbb{C}^{n}\) and in Complex Banach Spaces
###### Abstract.
In this paper, we first obtain a refined version of the Bohr inequality of norm-type for holomorphic mappings with lacunary series on the polydisk in \(\mathbb{C}^{n}\) under some restricted conditions. Next, we determine the refined version of the Bohr inequality for holomorphic functions defined on a balanced domain \(G\) of a complex Banach space \(X\) that take values in the unit disk \(\mathbb{D}\). Furthermore, as a consequence of one of these results, we obtain a refined version of the Bohr-type inequalities for harmonic functions \(f=h+\bar{g}\) defined on a balanced domain \(G\subset X\). All the results are proved to be sharp.
Key words and phrases: Bohr phenomenon, holomorphic function, harmonic mappings, lacunary series, balanced domain. 2020 Mathematics Subject Classification: Primary 30A10; 30B10; 30C45; 32A05; 32A10; 32K05.
## 1. Introduction
Bohr's power series theorem, discovered a century ago in the context of the study of Bohr's absolute convergence problem for Dirichlet series, is now an active area of research for different function spaces. In [24], Harald Bohr proved that for every holomorphic function \(f\) on the unit disc \(\mathbb{D}:=\{z\in\mathbb{C}:|z|<1\}\)
\[\sup_{|z|\leq\frac{1}{3}}\sum_{n=0}^{\infty}\left|\frac{f^{(n)}(0)}{n!}z^{n} \right|\leq||f||_{\infty}:=\sup_{z\in\mathbb{D}}|f(z)|, \tag{1.1}\]
and the radius \(1/3\) is optimal. The constant \(1/3\) is famously known as the Bohr radius and (1.1) is known as the Bohr inequality for the class \(\mathcal{B}\) of analytic self-maps of the unit disk \(\mathbb{D}\).
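To illustrate why the constant \(1/3\) in (1.1) cannot be improved, we recall a standard computation (included here only for orientation): testing the inequality on the Möbius maps \(f_{a}(z)=(a-z)/(1-az)\), \(a\in[0,1)\), whose majorant series at \(|z|=r\) equals

\[a+(1-a^{2})\sum_{n=1}^{\infty}a^{n-1}r^{n}=a+\frac{(1-a^{2})r}{1-ar},\]

one sees that this quantity exceeds \(||f_{a}||_{\infty}=1\) precisely when \(r>1/(1+2a)\); letting \(a\to 1^{-}\) shows that no radius larger than \(1/3\) can serve in (1.1).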
Several investigations and new problems on Bohr's inequality in one complex variable have appeared in the literature (see [6, 11, 16, 19, 35, 36, 38] and references therein). A detailed account of research on the Bohr radius problem, known as Bohr's phenomenon, can be found in the survey article [13], and further references can be found in the research book [39]. Bohr's theorem received greater interest in 1995 after it was used by Dixon [30] to characterize Banach algebras that satisfy von Neumann's inequality. Since then, a lot of research has been devoted to extending Bohr's result to multidimensional and abstract settings. In fact, the generalization of Bohr's theorem to various function spaces is now an active area of research, and different versions of the Bohr inequality have been established over the last three decades. For instance, Aizenberg _et al._ [8] and Aytuna and Djakov [20] have studied the Bohr property of bases for holomorphic functions of several
complex variables (see, e.g., [9, 33, 34, 40, 43, 49]). More precisely, in 1997, Boas and Khavinson [23] introduced the \(N\)-dimensional Bohr radius \(K_{N}\) (\(N>1\)) for the polydisk \(\mathbb{D}^{N}=\mathbb{D}\times\cdots\times\mathbb{D}\), which generated extensive research activity on Bohr radius problems. Boas and Khavinson [23] defined the \(N\)-dimensional Bohr radius \(K_{N}\) as the largest radius \(r>0\) such that every complex polynomial \(\sum_{\alpha\in\mathbb{N}_{0}^{N}}c_{\alpha}z^{\alpha}\) in \(N\) variables satisfies
\[\sup_{z\in r\mathbb{D}^{N}}\sum_{\alpha\in\mathbb{N}_{0}^{N}}|c_{\alpha}z^{\alpha}|\leq\sup_{z\in\mathbb{D}^{N}}\bigg{|}\sum_{\alpha\in\mathbb{N}_{0}^{N}}c_{\alpha}z^{\alpha}\bigg{|}.\]
Equivalently, the constant \(K_{N}\) is the largest radius \(r\) satisfying \(\sum_{\alpha}|c_{\alpha}z^{\alpha}|<1\) for all \(z\) with \(||z||_{\infty}:=\max\{|z_{1}|,|z_{2}|,\ldots,|z_{N}|\}<r\) and all \(f(z)=\sum_{\alpha}c_{\alpha}z^{\alpha}\in\mathcal{H}(\mathbb{D}^{N},\mathbb{D})\). In recent years, a lot of attention has been paid to multidimensional generalizations of Bohr's theorem. For different aspects of the multidimensional Bohr phenomenon, including recent advances in this topic, we refer to the articles by Aizenberg [7], Liu and Ponnusamy [46], Paulsen [49], Defant and Frerick [29], Kumar [40] and also [43, 44] and references therein.
Throughout the paper, we let \(\mathbb{N}_{0}\) denote the set of non-negative integers, \(\mathbb{D}^{n}\) the open unit polydisk in \(\mathbb{C}^{n}\), and \(\mathcal{H}\left(X,Y\right)\) the set of holomorphic mappings from \(X\) into \(Y\); the symbol \({}^{\prime}\) stands for transpose. By means of an example, Liu and Liu [44] have shown that the Bohr inequality of norm type for holomorphic mappings with lacunary series on the unit polydisk in \(\mathbb{C}^{n}\) does not hold in general. More precisely, it is shown in [44] that the function \(f\) given by
\[f(z)=\left(z_{1}\frac{z_{1}-\frac{1}{\sqrt{2}}}{1-\frac{1}{\sqrt{2}}z_{1}},z_{2 }\frac{z_{2}-\frac{2}{\sqrt{5}}}{1-\frac{2}{\sqrt{5}}z_{2}}\right)^{\prime}, \;z=(z_{1},z_{2})^{\prime}\]
satisfies \(f\in\mathcal{H}(\mathbb{D}^{2},\overline{\mathbb{D}^{2}})\) and
\[\sum_{s=1}^{\infty}\frac{||D^{s}f(0)\left(z^{s}\right)||}{s!}>\frac{1}{\sqrt{ 2}}\left(\frac{2}{\sqrt{5}}+\frac{1}{\sqrt{2}}\right)>1\text{ for }z=\left(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}\right).\]
To overcome this problem, the authors of [44] studied the Bohr inequality under some restricted conditions on functions \(f\) belonging to the classes \(\mathcal{H}\left(\mathbb{D}^{n},\mathbb{C}^{n}\right)\) and \(\mathcal{H}\left(\mathbb{D}^{n},\overline{\mathbb{D}^{n}}\right)\). In [44], the Bohr inequality of norm-type is investigated for holomorphic mappings \(f\) in the class \(\mathcal{H}\left(\mathbb{D}^{n},\mathbb{C}^{n}\right)\) with lacunary series under some restricted conditions on \(f\).
To establish the Bohr inequality for holomorphic functions in \(\mathbb{C}^{n}\), the following lemma was obtained by Liu and Liu in [44].
**Lemma 2.1**.: [44] Let \(m\in\mathbb{N}_{0}\), \(N\in\mathbb{N}\) and
\[\begin{cases}\phi_{1}(r)=2r^{N}+r-1,\;r\in[0,1),\\ \phi_{2}(r)=4r^{2(N-m)}+4r^{N+1-2m}-4r^{N-2m}+r^{2}-2r+1,\;r\in[0,1),\;N>2m,\\ \phi_{3}(r)=4r^{N}+r^{2+2m-N}-2r^{1+2m-N}+r^{2m-N}+4r-4,\;r\in[0,1),\;m+1\leq N\leq 2m.\end{cases}\]
Then there exists the maximal positive root for each \(\phi_{k}(r)=0\) (\(k=1,2,3\)).
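As a quick numerical sanity check of Lemma 2.1 (a small illustrative script, not part of [44]), the maximal positive roots can be computed directly for the two cases quoted later in the text, giving \(r_{1,0}=1/3\) and \(r_{2,1}=3/5\):

```python
import numpy as np

# Numerical sanity check: maximal positive roots of the polynomials phi_k
# in Lemma 2.1 for two cases quoted later in the text.

# N = 1, m = 0 falls under phi_1(r) = 2r^N + r - 1 = 3r - 1.
phi1_roots = np.roots([3.0, -1.0])

# N = 2, m = 1 satisfies m + 1 <= N <= 2m, so phi_3 applies:
# phi_3(r) = 4r^2 + r^{2+2m-N} - 2r^{1+2m-N} + r^{2m-N} + 4r - 4 = 5r^2 + 2r - 3.
phi3_roots = np.roots([5.0, 2.0, -3.0])

def max_positive_root(roots):
    """Largest real positive root of a polynomial (roots as returned by np.roots)."""
    real = [z.real for z in roots if abs(z.imag) < 1e-10 and z.real > 0]
    return max(real)

print(max_positive_root(phi1_roots))  # 0.3333... = 1/3
print(max_positive_root(phi3_roots))  # 0.6       = 3/5
```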
In what follows, \(\lfloor x\rfloor\) denotes the largest integer not more than \(x\), where \(x\) is a real number. We recall here a key result from [48] which will be useful to prove our results of this paper.
Lemma 2.2: [48] Suppose that \(f(z)=\sum_{n=0}^{\infty}a_{n}z^{n}\in\mathcal{H}(\mathbb{D},\mathbb{D})\). Then for any \(N\in\mathbb{N}\), the following inequality holds:
\[\sum_{n=N}^{\infty}|a_{n}|r^{n}+sgn(t)\sum_{n=1}^{t}|a_{n}|^{2} \frac{r^{N}}{1-r}+\left(\frac{1}{1+|a_{0}|}+\frac{r}{1-r}\right)\sum_{n=t+1}^ {\infty}|a_{n}|^{2}r^{2n}\leq(1-|a_{0}|^{2})\frac{r^{N}}{1-r},\] \[\text{for }\ |z|=r\in[0,1),\text{ where }t=\lfloor(N-1)/2\rfloor.\]
The Bohr inequality for functions \(f\in\mathcal{H}\left(\mathbb{D}^{n},\mathbb{C}^{n}\right)\) with lacunary series of the form
\[f(z) =(a_{1}z_{1}^{m}+g_{1}(z),a_{2}z_{2}^{m}+g_{2}(z),\ldots,a_{n}z_ {n}^{m}+g_{n}(z))^{\prime}\] \[=\frac{D^{m}f(0)\left(z^{m}\right)}{m!}+\sum_{s=N}^{\infty}\frac{ D^{s}f(0)\left(z^{s}\right)}{s!}\]
are obtained under restricted conditions in [44] as follows.
Theorem 2.1: [44] Let \(m\in\mathbb{N}_{0}\), \(a=(a_{1},a_{2},\ldots,a_{n})^{\prime}\), \(N\geq m+1\), \(f\in\mathcal{H}\left(\mathbb{D}^{n},\mathbb{C}^{n}\right)\) be given by (2.1), \(|a_{l}|=||a||=\max_{1\leq l\leq n}\{|a_{l}|\}\), \(l=1,2,\ldots,n\) and \(a_{j}z_{j}^{m}+g_{j}(z)\in\mathcal{H}\left(\mathbb{D}^{n},\overline{\mathbb{D }}\right),\) where \(\frac{D^{m}f(0)(z^{m})}{m!}=(a_{1}z_{1}^{m},a_{2}z_{2}^{m},\ldots,a_{n}z_{n}^{ m})^{\prime}\), and \(j\) satisfies \(|z_{j}|=||z||=\max_{1\leq l\leq n}\{|z_{l}|\}.\) Then
\[\frac{||D^{m}f(0)\left(z^{m}\right)||}{m!}+\sum_{s=N}^{\infty}\frac{||D^{s}f( 0)\left(z^{s}\right)||}{s!}\leq 1\]
for \(||z||=r\leq r_{N,m}\), where \(r_{N,m}\) is the maximal positive root of the equation \(\phi_{k}(r)=0\) (\(k=1,2,3\)) given in Lemma 2.1.
Theorem 2.2: [44] Let \(m\in\mathbb{N}_{0}\), \(N\geq m+1\), \(f(z)=\frac{D^{m}f(0)(z^{m})}{m!}+\sum_{s=N}^{\infty}\frac{D^{s}f(0)(z^{s})}{s! }\in\mathcal{H}\left(\mathbb{D}^{n},\overline{\mathbb{D}}^{n}\right).\) If \(\frac{|D^{m}f_{l}(0)(z^{m})|}{m!}=\frac{||D^{m}f(0)(z^{m})||}{m!}\), \(l=1,2,\ldots n\), then
\[\frac{||D^{m}f(0)\left(z^{m}\right)||}{m!}+\sum_{s=N}^{\infty}\frac{||D^{s}f( 0)\left(z^{s}\right)||}{s!}\leq 1\]
for \(||z||=r\leq r_{N,m}\), where \(r_{N,m}\) is the maximal positive root of the equation \(\phi_{k}(r)=0\) (\(k=1,2,3\)) given in Lemma 2.1.
Furthermore, the Bohr inequality of norm type for _symmetric functions_\(f\in\mathcal{H}\left(\mathbb{D}^{n},\overline{\mathbb{D}^{n}}\right)\) with lacunary series of the form
\[f(z)=\frac{D^{m}f(0)\left(z^{m}\right)}{m!}+\sum_{s=1}^{\infty}\frac{D^{sk+m}f( 0)(z^{sk+m})}{(sk+m)!}\]
is established in [44] and it is shown that the result is sharp.
**Theorem 2.3.** [44] Let \(m,k\in\mathbb{N}_{0}\), \(0\leq m\leq k\) and \(f\in\mathcal{H}\left(\mathbb{D}^{n},\overline{\mathbb{D}^{n}}\right)\) be given by (2.2). If \(\frac{|D^{m}f_{l}(0)(z^{m})|}{m!}=\frac{||D^{m}f(0)(z^{m})||}{m!}\), \(l=1,2,\ldots,n\), then
\[\frac{||D^{m}f(0)\left(z^{m}\right)||}{m!}+\sum_{s=1}^{\infty}\frac{||D^{sk+m}f (0)\left(z^{sk+m}\right)||}{(sk+m)!}\leq 1\]
for \(||z||=r\leq r_{k,m}\), where \(r_{k,m}\) is the maximal positive root of the equation
\[-6r^{k-m}+r^{2(k-m)}+8r^{2k}+1=0. \tag{2.3}\]
Each \(r_{k,m}\) is sharp.
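As a quick consistency check (a direct computation, not stated explicitly in [44]): for \(k=1\) and \(m=0\), equation (2.3) reduces to

\[-6r+r^{2}+8r^{2}+1=9r^{2}-6r+1=(3r-1)^{2}=0,\]

so that \(r_{1,0}=1/3\), in agreement with the classical Bohr radius in (1.1).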
In recent years, refining Bohr-type inequalities has been an active research topic. Many researchers have investigated refined Bohr-type inequalities for certain classes of analytic functions, for classes of harmonic mappings on the unit disk \(\mathbb{D}\) or on the shifted disk \(\Omega_{\gamma}:=\{z\in\mathbb{C}:|z+\frac{\gamma}{1-\gamma}|<\frac{1}{1-\gamma}\}\), where \(0\leq\gamma<1\), and for operator-valued functions. For detailed information on such studies, the reader is referred to [2, 5, 17, 46, 47, 48, 51] and the references therein. However, the refined version of the Bohr inequality for holomorphic functions in \(\mathbb{C}^{n}\) has not yet been explored. Inspired by the methods in [44], we are interested in investigating refined versions of the Bohr inequality for holomorphic functions in \(\mathbb{C}^{n}\) and in establishing their sharpness. Hence, the following questions arise naturally.
**Question 2.1.** Can we establish refined versions of Theorems 2.1 to 2.3? Can we show that they are sharp while keeping the radii unchanged?
To answer Question 2.1, we shall establish refined Bohr inequalities of norm type for holomorphic mappings with lacunary series on the unit polydisk in \(\mathbb{C}^{n}\) under the restricted conditions considered in [44]. In fact, we show in Theorems 2.4 and 2.5 that the constants \(r_{1,0}=1/3\) and \(r_{2,1}=3/5\) are both optimal. With the help of Lemmas 2.1 and 2.2, we obtain the following result as a sharp refinement of Theorem 2.1.
**Theorem 2.4.** Let \(m\in\mathbb{N}_{0}\), \(a=(a_{1},a_{2},\ldots,a_{n})^{\prime}\), \(N\geq m+1\), \(f\in\mathcal{H}\left(\mathbb{D}^{n},\mathbb{C}^{n}\right)\) be given by (2.1), \(|a_{l}|=||a||=\max_{1\leq l\leq n}|a_{l}|\), \(l=1,2,\ldots,n\) and \(a_{j}z_{j}^{m}+g_{j}(z)\in\mathcal{H}\left(\mathbb{D}^{n},\overline{\mathbb{D}}\right),\) where \(\frac{D^{m}f(0)(z^{m})}{m!}=(a_{1}z_{1}^{m},a_{2}z_{2}^{m},\ldots,a_{n}z_{n}^{m})^{\prime}\), and \(j\) satisfies \(|z_{j}|=||z||=\max_{1\leq l\leq n}|z_{l}|.\) Then
\[\mathcal{A}_{m}^{f}(||z||) :=\frac{||D^{m}f(0)\left(z^{m}\right)||}{m!}+\sum_{s=N}^{\infty} \frac{||D^{s}f(0)\left(z^{s}\right)||}{s!}+sgn(t)\sum_{s=1}^{t}\left(\frac{||D ^{s}f(0)\left(z^{s}\right)||}{s!}\right)^{2}\frac{||z||^{N-2s}}{1-||z||}\] \[\quad+\left(\frac{||z||^{m}}{||z||^{m}+\frac{||D^{m}f(0)(z^{m})|| }{m!}}+\frac{||z||}{1-||z||}\right)\sum_{s=t+1}^{\infty}\left(\frac{||D^{s}f(0 )\left(z^{s}\right)||}{s!}\right)^{2}\leq 1\]
for \(||z||=r\leq r_{N,m}\), where \(t=\lfloor(N-1)/2\rfloor\) and \(r_{N,m}\) is the maximal positive root of the equation \(\phi_{k}(r)=0\) (\(k=1,2,3\)) are given in Lemma 2.1.
The second result we obtain is the following and it is established as a refined version of Theorem 2.2.
**Theorem 2.5**.: Let \(m\in\mathbb{N}_{0}\), \(N\geq m+1\), \(f(z)=\frac{D^{m}f(0)\left(z^{m}\right)}{m!}+\sum_{s=N}^{\infty}\frac{D^{s}f(0) \left(z^{s}\right)}{s!}\in\mathcal{H}\left(\mathbb{D}^{n},\overline{\mathbb{D }}^{n}\right).\) If \(\frac{|D^{m}f_{l}(0)\left(z^{m}\right)|}{m!}=\frac{||D^{m}f(0)\left(z^{m} \right)||}{m!}\), \(l=1,2,\dots n\), then
\[\mathcal{B}_{m}^{f}(r) :=\frac{||D^{m}f(0)\left(z^{m}\right)||}{m!}+\sum_{s=N}^{\infty} \frac{||D^{s}f(0)\left(z^{s}\right)||}{s!}+sgn(t)\sum_{s=1}^{t}\left(\frac{||D ^{s}f(0)\left(z^{s}\right)||}{s!}\right)^{2}\frac{||z||^{N-2s}}{1-||z||}\] \[\quad+\left(\frac{||z||^{m}}{||z||^{m}+\frac{||D^{m}f(0)\left(z^{ m}\right)||}{m!}}+\frac{||z||}{1-||z||}\right)\sum_{s=t+1}^{\infty}\left(\frac{||D ^{s}f(0)\left(z^{s}\right)||}{s!}\right)^{2}\leq 1\]
for \(||z||=r\leq r_{N,m}\), where \(t=\lfloor(N-1)/2\rfloor\) and \(r_{N,m}\) is the maximal positive root of the equation \(\phi_{k}(r)=0\) (\(k=1,2,3\)) given in Lemma 2.1.
As the final result of this section, we obtain an analogue of Theorem 1.1, which is a refined version of Theorem 2.3.
**Theorem 2.6**.: Let \(m,k\in\mathbb{N}_{0}\), \(0\leq m\leq k\) and \(f\in\mathcal{H}\left(\mathbb{D}^{n},\overline{\mathbb{D}^{n}}\right)\) be given by (2.2). If \(\frac{|D^{m}f_{l}(0)\left(z^{m}\right)|}{m!}=\frac{||D^{m}f(0)\left(z^{m}\right)||}{m!}\), \(l=1,2,\ldots,n\), then
\[\mathcal{C}_{m}^{f}(||z||):= \frac{||D^{m}f(0)\left(z^{m}\right)||}{m!}+\sum_{s=1}^{\infty} \frac{||D^{sk+m}f(0)\left(z^{sk+m}\right)||}{(sk+m)!}+\left(\frac{1}{||z||^{m} +\frac{||D^{m}f(0)\left(z^{m}\right)||}{m!}}\right.\] \[\quad+\left.\frac{||z||^{k-m}}{1-||z||^{k}}\right)\sum_{s=1}^{ \infty}\left(\frac{||D^{sk+m}f(0)\left(z^{sk+m}\right)||}{(sk+m)!}\right)^{2} \leq 1\]
for \(||z||=r\leq R_{k,m}\left(|c_{0}|\right)\), where \(|c_{0}|=\frac{||D^{m}f(0)\left(z_{0}^{m}\right)||}{m!}\) and \(R_{k,m}\left(|c_{0}|\right)\) is the maximal positive root of the equation
\[\left(1-|c_{0}|-|c_{0}|^{2}\right)r^{m+k}+r^{k}+|c_{0}|r^{m}-1=0. \tag{2.4}\]
Each \(R_{k,m}\left(|c_{0}|\right)\) is sharp.
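For orientation, a direct computation from (2.4) (not carried out in the text) shows, for \(m=0\) and \(k=1\), that

\[\left(1-|c_{0}|-|c_{0}|^{2}\right)r+r+|c_{0}|-1=0,\qquad\text{i.e.}\qquad r=\frac{1-|c_{0}|}{2-|c_{0}|-|c_{0}|^{2}}=\frac{1}{2+|c_{0}|},\]

so that \(R_{1,0}(|c_{0}|)=1/(2+|c_{0}|)\), which decreases to \(1/3\) as \(|c_{0}|\to 1^{-}\).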
Setting \(\frac{D^{m}f_{l}(0)\left(z^{m}\right)}{m!}=a_{l}z_{l}^{m}\), \(l=1,2,\dots,n\), we have the following result as a consequence of Theorem 2.6.
**Corollary 2.1**.: Let \(m,k\in\mathbb{N}\), \(a=(a_{1},a_{2},\dots,a_{n})^{\prime}\), \(0\leq m\leq k\),
\[f(z) =\left(a_{1}z_{1}^{m}+g_{1}(z),a_{2}z_{2}^{m}+g_{2}(z),\dots,a_{n }z_{n}^{m}+g_{n}(z)\right)^{\prime}\] \[=\frac{D^{m}f(0)(z^{m})}{m!}+\sum_{s=1}^{\infty}\frac{D^{sk+m}f(0 )(z^{sk+m})}{(sk+m)!}\in\mathcal{H}\left(\mathbb{D}^{n},\overline{\mathbb{D}^ {n}}\right),\]
\(|a_{1}z_{1}^{m}|=\dots=|a_{n}z_{n}^{m}|\), where \(\frac{D^{m}f(0)(z^{m})}{m!}=\left(a_{1}z_{1}^{m},a_{2}z_{2}^{m},\dots,a_{n}z_{n }^{m}\right)^{\prime}\). Then \(\mathcal{C}_{m}^{f}(||z||)\leq 1\) for \(||z||=r\leq R_{k,m}\left(|c_{0}|\right)\), where \(R_{k,m}\left(|c_{0}|\right)\) is the maximal positive root of (2.4). Each \(R_{k,m}\left(|c_{0}|\right)\) is sharp.
**Corollary 2.2**.: We note that Corollary 2.1 is a sharp refined version of Corollary 2.1 in [44].
We now present the proofs of Theorems 2.4, 2.5 and 2.6, applying Lemma 2.2.
**Proof of Theorem 2.4.** Let \(z\in\mathbb{D}^{n}\setminus\{0\}\) be fixed and denote \(z_{0}=z/||z||\). Let us define a function \(h_{j}(\lambda):=f_{j}(\lambda z_{0})\), \(\lambda\in\mathbb{D}\). Then it is easy to see that \(h_{j}\in\mathcal{H}(\mathbb{D},\overline{\mathbb{D}})\) and
\[h_{j}(\lambda)=a_{j}\left(\frac{z_{j}}{||z||}\right)^{m}\lambda^{m}+\sum_{s=N} ^{\infty}\frac{D^{s}f_{j}(0)(z_{0}^{s})}{s!}\lambda^{s}\]
from the hypothesis of Theorem 2.4, where \(j\) satisfies \(|z_{j}|=||z||=\max_{1\leq l\leq n}\{|z_{l}|\}\). We write \(b_{m}=a_{j}\left(\frac{z_{j}}{||z||}\right)^{m},\)\(b_{s}=\frac{D^{s}f_{j}(0)(z_{0}^{s})}{s!}\), \(s=N,N+1,\dots\). Then it is easy to see that the function \(\omega(\lambda)=b_{m}+\sum_{s=N}^{\infty}b_{s}\lambda^{s-m}\in\mathcal{H}( \mathbb{D},\overline{\mathbb{D}})\) due to \(h_{j}\in\mathcal{H}(\mathbb{D},\overline{\mathbb{D}})\). Also from the hypothesis, \(|b_{m}|=|a_{j}|=||a||\). In view of Lemma 2.2 for the function \(h_{j}\in\mathcal{H}(\mathbb{D},\overline{\mathbb{D}})\), we get that
\[\sum_{s=N}^{\infty}|b_{s}||\lambda|^{s} +sgn(t)\sum_{s=1}^{t}|b_{s}|^{2}\frac{\left|\lambda\right|^{N}}{ 1-|\lambda|}+\left(\frac{1}{1+|b_{m}|}+\frac{|\lambda|}{1-|\lambda|}\right) \sum_{s=t+1}^{\infty}|b_{s}|^{2}|\lambda|^{2s}\] \[\leq\frac{(1-|b_{m}|^{2})|\lambda|^{N}}{1-|\lambda|}\]
and thus, we obtain the following estimate
\[\sum_{s=N}^{\infty}\frac{|D^{s}f_{j}(0)(z_{0}^{s})|}{s!}|\lambda| ^{s}+sgn(t)\sum_{s=1}^{t}\left(\frac{|D^{s}f_{j}(0)(z_{0}^{s})|}{s!}\right)^{ 2}\frac{\left|\lambda\right|^{N}}{1-|\lambda|}\] \[+\left(\frac{1}{1+|a_{j}|}+\frac{|\lambda|}{1-|\lambda|}\right) \sum_{s=t+1}^{\infty}\left(\frac{|D^{s}f_{j}(0)(z_{0}^{s})|}{s!}\right)^{2}| \lambda|^{2s}\leq\frac{(1-|a_{j}|^{2})|\lambda|^{N}}{1-|\lambda|}\text{ for }j=1,2,\dots,n.\]
Setting \(|\lambda|=||z||=r\) and \(z=z_{0}||z||\), using maximum modulus principle, this implies that
\[\sum_{s=N}^{\infty}\frac{||D^{s}f(0)(z^{s})||}{s!}+sgn(t)\sum_{s= 1}^{t}\left(\frac{||D^{s}f(0)(z^{s})||}{s!}\right)^{2}\frac{||z||^{N-2s}}{1- ||z||}\] \[+\left(\frac{||z||^{m}}{||z||^{m}+\frac{||D^{m}f(0)(z^{m})||}{m!} }+\frac{||z||}{1-||z||}\right)\sum_{s=t+1}^{\infty}\left(\frac{||D^{s}f(0)(z^{ s})||}{s!}\right)^{2}\leq\frac{(1-||a||^{2})||z||^{N}}{1-||z||}.\]
Therefore, we obtain that
\[\frac{||D^{m}f(0)(z^{m})||}{m!}+\sum_{s=N}^{\infty}\frac{||D^{s}f(0) (z^{s})||}{s!}+sgn(t)\sum_{s=1}^{t}\left(\frac{||D^{s}f(0)(z^{s})||}{s!}\right)^{ 2}\frac{||z||^{N-2s}}{1-||z||}\] \[+\left(\frac{||z||^{m}}{||z||^{m}+\frac{||D^{m}f(0)(z^{m})||}{m!} }+\frac{||z||}{1-||z||}\right)\sum_{s=t+1}^{\infty}\left(\frac{||D^{s}f(0)(z^{s} )||}{s!}\right)^{2}\] \[\leq||a||r^{m}+\frac{(1-||a||^{2})r^{N}}{1-r}:=\mathcal{M}_{f}(|| a||,r).\]
To prove the desired inequality \(\mathcal{A}_{m}^{f}(||z||)\leq 1\) for \(||z||=r\leq r_{N,m}\), it is enough to show that the inequality \(\mathcal{M}_{f}(||a||,r)\leq 1\) holds for \(r\leq r_{N,m}\). To achieve this, the analysis of the following cases is sufficient.
**Case-I.** Let \(m=0\). Since \(0\leq||a||<1\), a simple computation shows that
\[\mathcal{M}_{f}(||a||,r) =||a||+(1-||a||^{2})\frac{r^{N}}{1-r}\leq||a||+2(1-||a||)\frac{r^ {N}}{1-r}\] \[=1+(1-||a||)\left(-1+\frac{2r^{N}}{1-r}\right)\] \[\leq 1\ \text{for}\,r\leq r_{N,m},\]
where \(r_{N,m}\) is root of equation \(\phi_{1}(r)=0\) given in Lemma 2.1.
**Case-II.** Let \(N>2m\), \(m=1,2,\dots\). Then we see that
\[\mathcal{M}_{f}(||a||,r) \leq 1-\frac{r^{N}}{1-r}\left(||a||-\frac{1-r}{2r^{N-m}}\right)^{ 2}+\frac{4r^{2(N-m)}+4r^{N+1-2m}-4r^{N-2m}+r^{2}-2r+1}{4(1-r)r^{N-2m}}\] \[\leq 1\ \text{for}\ \ r\leq r_{N,m},\]
where \(r_{N,m}\) is the root of equation \(\phi_{2}(r)=0\), given in Lemma 2.1.
**Case-III.** Let \(m+1\leq N\leq 2m\), \(m=1,2,\dots\). Then we deduce that
\[\mathcal{M}_{f}(||a||,r) \leq 1-\frac{r^{N}}{1-r}\left(||a||-\frac{1-r}{2r^{N-m}}\right)^{2}\] \[\ +\frac{4r^{N}+r^{2+2m-N}-2r^{1+2m-N}+r^{2m-N}+4r-4}{4(1-r)}\] \[\leq 1\ \text{for}\ \ r\leq r_{N,m},\]
where \(r_{N,m}\) is the root of equation \(\phi_{3}(r)=0\) given in Lemma 2.1. Therefore, we conclude that \(\mathcal{A}_{m}^{f}(||z||)\leq 1\) holds for \(r\leq r_{N,m}\), where \(r_{N,m}\) is given in Lemma 2.1.
It is not difficult to check that \(f_{a}(z)=(f_{1}(z_{1}),a,a,\dots,a)^{\prime}\), \(z=(z_{1},z_{2},\dots,z_{n})^{\prime}\in\mathbb{D}^{n}\), where \(f_{1}(z_{1})=(a-z_{1})/(1-az_{1})\) for some \(a\in[0,1]\), satisfies the conditions of Theorem 2.4. Also, we can write \(f_{1}(z_{1})=a-(1-a^{2})\sum_{s=1}^{\infty}a^{s-1}z_{1}^{s}\). Putting \(z=(r,0,\dots,0)^{\prime}\), \(0\leq r<1\), we see that \(\frac{||D^{s}f_{a}(0)(z^{s})||}{s!}=(1-a^{2})a^{s-1}r^{s}\) and, for
\(m=0,\)\(\frac{||D^{m}f_{a}(0)(z^{m})||}{m!}=a.\) In fact, for \(f_{a},\) a simple computation shows that
\[\mathcal{A}_{m}^{f_{a}}(r) =a+\sum_{s=1}^{\infty}(1-a^{2})a^{s-1}r^{s}+sgn(t)\sum_{s=1}^{t}(1 -a^{2})^{2}a^{2(s-1)}r^{2s}\frac{r^{N-2s}}{1-r}\] \[\quad+\left(\frac{1}{1+a}+\frac{r}{1-r}\right)\sum_{s=1}^{\infty} (1-a^{2})^{2}a^{2(s-1)}r^{2s}\] \[=1+(1-a)G(a,r), \tag{2.5}\]
where \(G(a,r):=-1+\frac{(1+a)r}{1-ar}+\frac{(1-a^{2})r^{2}}{(1-r)(1-ar)}+\frac{(1+a)r ^{N}(1-a^{2t})sgn(t)}{1-r}\) and
\[\lim_{a\to 1^{-}}G(a,r)=-1+\frac{2r}{1-r}.\]
Clearly, the right side of (2.5) is greater than \(1\) if \(r>1/3.\) This implies that the constant \(r_{1,0}=1/3\) is optimal.
Next, to show that the constant \(r_{2,1}=3/5\) is optimal, we consider the function \(f_{a}(z)=(f_{1}(z_{1}),a,a,\ldots,a)^{\prime},\)\(z=(z_{1},z_{2},\ldots,z_{n})^{\prime}\in\mathbb{D}^{n},\) where \(f_{1}(z_{1})=z_{1}(a-z_{1})/(1-az_{1})\) for some \(a\in[0,1]\) and it is easy to see that \(f_{a}\) satisfies the condition of Theorem 2.4. By a similar argument as above, it can be easily shown that \(r_{2,1}=3/5\) is optimal. This completes the proof.
**Proof of Theorem 2.5.** Let \(z\in\mathbb{D}^{n}\setminus\{0\}\) be fixed and we denote \(z_{0}=z/||z||.\) We define a function \(h_{l}(\lambda):=f_{l}(\lambda z_{0}),\)\(\lambda\in\mathbb{D},\)\(l=1,2,\ldots n.\) Clearly, \(h_{l}\in\mathcal{H}(\mathbb{D},\overline{\mathbb{D}})\) and we easily deduce that
\[h_{l}(\lambda)=\frac{D^{m}f_{l}(0)\left(z_{0}^{m}\right)}{m!}\lambda^{m}+\sum _{s=N}^{\infty}\frac{D^{s}f_{l}(0)\left(z_{0}^{s}\right)}{s!}\lambda^{s}\]
from the condition of Theorem 2.5. Hence, we easily deduce that \(\omega(\lambda)=b_{m}+\sum_{s=N}^{\infty}b_{s}\lambda^{s-m}\in\mathcal{H}( \mathbb{D},\overline{\mathbb{D}})\) due to \(h_{l}\in\mathcal{H}(\mathbb{D},\overline{\mathbb{D}}),\) where \(b_{m}=\frac{D^{m}f_{l}(0)\left(z_{0}^{m}\right)}{m!}\) and \(b_{s}=\frac{D^{s}f_{l}(0)\left(z_{0}^{s}\right)}{s!}\) for \(s=N,N+1,\ldots,.\) Because \(\omega\in\mathcal{H}(\mathbb{D},\overline{\mathbb{D}}),\) in view of Lemma 2.2, we obtain the following estimate
\[\frac{|D^{m}f_{l}(0)\left(z_{0}^{m}\right)|}{m!}|\lambda|^{m}+ \sum_{s=N}^{\infty}\frac{|D^{s}f_{l}(0)\left(z_{0}^{s}\right)|}{s!}|\lambda|^{s }+sgn(t)\sum_{s=1}^{t}\left(\frac{|D^{s}f_{l}(0)\left(z_{0}^{s}\right)|}{s!} \right)^{2}\frac{|\lambda|^{N}}{1-|\lambda|}\] \[+\left(\frac{1}{1+\frac{|D^{m}f_{l}(0)\left(z_{0}^{m}\right)|}{m! }}+\frac{|\lambda|}{1-|\lambda|}\right)\sum_{s=t+1}^{\infty}\left(\frac{|D^{s} f_{l}(0)\left(z_{0}^{s}\right)|}{s!}\right)^{2}|\lambda|^{2s}\] \[\leq\frac{|D^{m}f_{l}(0)\left(z_{0}^{m}\right)|}{m!}|\lambda|^{m} +\frac{\left(1-\left(\frac{|D^{m}f_{l}(0)\left(z_{0}^{m}\right)|}{m!}\right)^ {2}\right)}{1-|\lambda|}|\lambda|^{N}\ \ \text{for}\ \ l=1,2,\ldots,n.\]
Set \(|\lambda|=||z||=r\) and \(z=z_{0}||z||\), we obtain that
\[\frac{||D^{m}f(0)\left(z^{m}\right)||}{m!}+\sum_{s=N}^{\infty}\frac{ ||D^{s}f(0)(z^{s})||}{s!}+sgn(t)\sum_{s=1}^{t}\left(\frac{||D^{s}f(0)(z^{s})||}{ s!}\right)^{2}\frac{||z||^{N-2s}}{1-||z||}\] \[+\left(\frac{||z||^{m}}{||z||^{m}+\frac{||D^{m}f(0)(z^{m})||}{m!} }+\frac{||z||}{1-||z||}\right)\sum_{s=t+1}^{\infty}\left(\frac{||D^{s}f(0)(z^{s })||}{s!}\right)^{2}\] \[\leq||a||||z||^{m}+\frac{(1-||a||^{2})||z||^{N}}{1-||z||}=||a||r^{m }+\frac{(1-||a||^{2})r^{N}}{1-r},\]
where \(||a||=\frac{||D^{m}f(0)\left(z_{0}^{m}\right)||}{m!}.\) We arrive at the desired conclusions by employing a similar argument to that given in the proof of Theorem 2.4. Hence, we omit the details. With this, the theorem's proof is concluded.
**Proof of Theorem 2.6.** Fix \(z\in\mathbb{C}^{n}\setminus\{0\}\) and set \(z_{0}=z/||z||\). Letting \(h_{l}(\lambda)=f_{l}(\lambda z_{0})\) for \(\lambda\in\mathbb{D}\), \(l=1,2,\ldots,n\), it is easy to see that \(h_{l}\in\mathcal{H}(\mathbb{D},\overline{\mathbb{D}})\) and from the hypothesis, we can express the function \(h_{l}\) in the following form of a series
\[h_{l}(\lambda)=b_{m}\lambda^{m}+\sum_{s=1}^{\infty}b_{sk+m}\lambda^{sk+m}= \frac{D^{m}f_{l}(0)\left(z_{0}^{m}\right)}{m!}\lambda^{m}+\sum_{s=1}^{\infty} \frac{D^{sk+m}f_{l}(0)\left(z_{0}^{sk+m}\right)}{(sk+m)!}\lambda^{sk+m}.\]
We write \(\mu=\lambda^{k}\). Then it yields that
\[\omega(\mu)=c_{0}+\sum_{s=1}^{\infty}c_{s}\mu^{s}\in\mathcal{H}(\mathbb{D}, \overline{\mathbb{D}})\text{ due to }h_{l}\in\mathcal{H}(\mathbb{D}, \overline{\mathbb{D}}).\]
Here \(c_{s}=b_{sk+m}=\frac{D^{sk+m}f_{l}(0)\left(z_{0}^{sk+m}\right)}{(sk+m)!},\ s=1,2,3,\ldots\) and \(c_{0}=\frac{D^{m}f_{l}(0)\left(z_{0}^{m}\right)}{m!}\). In view of Lemma 2.2 (with \(N=1\)), a simple computation shows that
\[|c_{0}||\lambda|^{m}+\sum_{s=1}^{\infty}|c_{s}||\lambda|^{ks+m}+ \left(\frac{1}{1+|c_{0}|}+\frac{|\lambda|^{k}}{1-|\lambda|^{k}}\right)\sum_{s= 1}^{\infty}|c_{s}|^{2}|\lambda|^{2ks+m}\] \[\leq|\lambda|^{m}\left(|c_{0}|+\frac{(1-|c_{0}|^{2})|\lambda|^{k} }{1-|\lambda|^{k}}\right)\]
This gives the following estimate
\[\frac{|D^{m}f_{l}(0)\left(z_{0}^{m}\right)|}{m!}|\lambda|^{m}+ \sum_{s=1}^{\infty}\frac{|D^{sk+m}f_{l}(0)\left(z_{0}^{sk+m}\right)|}{(sk+m)!} |\lambda|^{ks+m}\] \[\quad+\left(\frac{1}{|\lambda|^{m}+\frac{|D^{m}f_{l}(0)\left(z_{ 0}^{m}\right)|}{m!}|\lambda|^{m}}+\frac{|\lambda|^{k-m}}{1-|\lambda|^{k}} \right)\sum_{s=1}^{\infty}\left(\frac{|D^{sk+m}f_{l}(0)\left(z_{0}^{sk+m} \right)|}{(sk+m)!}\right)^{2}|\lambda|^{2(ks+m)}\] \[\leq\left(|C_{0}||\lambda|^{m}+\frac{(1-|c_{0}|^{2})|\lambda|^{k+m }}{1-|\lambda|^{k}}\right),\ \ \text{for}\ \ l=1,2,\ldots,n.\]
We set \(|\lambda|=||z||=r\) and \(z=z_{0}||z||\). Then by the maximum modulus principle, a simple calculation confirms that
\[\frac{||D^{m}f(0)\left(z^{m}\right)||}{m!}+\sum_{s=1}^{\infty}\frac {||D^{sk+m}f(0)\left(z^{sk+m}\right)||}{\left(sk+m\right)!}+\left(\frac{1}{||z ||^{m}+\frac{||D^{m}f(0)\left(z^{m}\right)||}{m!}}\right.\] \[\left.\quad+\frac{||z||^{k-m}}{1-||z||^{k}}\right)\left(\sum_{s=1} ^{\infty}\frac{||D^{sk+m}f(0)\left(z^{sk+m}\right)||}{\left(sk+m\right)!} \right)^{2}\] \[\leq||z||^{m}\left(\left|c_{0}\right|+\left(1-|c_{0}|^{2}\right) \frac{||z||^{k}}{1-||z||^{k}}\right)\] \[=1+\frac{\left(1-|c_{0}|-|c_{0}|^{2}\right)||z||^{m+k}+||z||^{k}+ |c_{0}|r^{m}-1}{1-||z||^{k}}\leq 1\]
for \(||z||=r\leq R_{k,m}\left(|c_{0}|\right)\), where \(R_{k,m}\left(|c_{0}|\right)\) is the unique root in \((0,1)\) of equation (2.4). The next step is to show that the constant \(R_{k,m}\left(|c_{0}|\right)\) is sharp. To this end, we consider the function \(f_{a}\) defined by
\[f_{a}(z)=\left(z_{1}^{m}\frac{a-z_{1}^{k}}{1-az_{1}^{k}},z_{2}^{m}\frac{a-z_{2 }^{k}}{1-az_{2}^{k}},\ldots,z_{n}^{m}\frac{a-z_{n}^{k}}{1-az_{n}^{k}}\right)\]
for \(z=\left(z_{1},z_{2},\ldots,z_{n}\right)^{\prime}\in\mathbb{D}^{n}\) and \(a\in[0,1)\). In this case, we take \(z=\left(z_{1},0,\ldots,0\right)^{\prime}\), which implies that \(||z||=|z_{1}|=r\), and, according to the definition of the Fréchet derivative, we get that
\[\begin{cases}\frac{||D^{sk+m}f_{a}(0)\left(z^{sk+m}\right)||}{\left(sk+m\right)!}=\left|\frac{\partial^{sk+m}f_{1}(0)}{\partial z_{1}^{sk+m}}\cdot\frac{z_{1}^{sk+m}}{\left(sk+m\right)!}\right|&\text{for }s\geq 1,\\ \frac{||D^{m}f_{a}(0)\left(z^{m}\right)||}{m!}=\left|\frac{\partial^{m}f_{1}(0)}{\partial z_{1}^{m}}\cdot\frac{z_{1}^{m}}{m!}\right|&\text{for }s=0,\end{cases}\]
where
\[f_{1}(z)=z_{1}^{m}\left(\frac{a-z_{1}^{k}}{1-az_{1}^{k}}\right)=az_{1}^{m}- \left(1-a^{2}\right)\sum_{s=1}^{\infty}a^{s-1}z_{1}^{sk+m}.\]
A simple computation gives that
\[\frac{||D^{m}f_{a}(0)(z^{m})||}{m!}=ar^{m}\;\;\text{and}\;\;\frac{||D^{sk+m}f_{ a}(0)\left(z^{sk+m}\right)||}{\left(sk+m\right)!}=(1-a^{2})a^{s-1}r^{sk+m}.\]
Thus we see that
\[\frac{||D^{m}f(0)\left(z^{m}\right)||}{m!}+\sum_{s=1}^{\infty}\frac{ ||D^{sk+m}f(0)\left(z^{sk+m}\right)||}{(sk+m)!}+\left(\frac{1}{||z||^{m}+\frac{ ||D^{m}f(0)\left(z^{m}\right)||}{m!}}\right.\] \[\quad+\frac{||z||^{k-m}}{1-||z||^{k}}\left(\sum_{s=1}^{\infty} \frac{||D^{sk+m}f(0)\left(z^{sk+m}\right)||}{(sk+m)!}\right)^{2}\] \[=ar^{m}+\left(1-a^{2}\right)\sum_{s=1}^{\infty}a^{s-1}r^{sk+m}+ \left(\frac{r^{-m}}{1+a}+\frac{r^{k-m}}{1-r^{k}}\right)\left(1-a^{2}\right)^{2 }\sum_{s=1}^{\infty}a^{2s-2}r^{2sk+2m}\] \[=r^{m}\left(a+\left(1-a^{2}\right)\frac{r^{k}}{1-r^{k}}\right)\]
which is bigger than \(1\) if and only if \(r>R_{k,m}(a)\). This establishes the sharpness of \(R_{k,m}(a)\), and with this the proof of the theorem is completed.
## 3. Refined versions of the Bohr's inequality in complex Banach spaces
There are a few articles on the Bohr inequality in complex Banach spaces. In this section, we consider refined versions of Bohr's phenomenon in complex Banach spaces. Let \(X\) and \(Y\) be complex Banach spaces and \(\mathcal{B}_{Y}\) the unit ball in \(Y\). For domains \(G\subset X\) and \(\Omega\subset Y\), let \(\mathcal{H}(G,\Omega)\) be the set of all holomorphic functions from \(G\) into \(\Omega\). Any mapping \(f\in\mathcal{H}(G,\Omega)\) can be expanded in the following series
\[f(x)=\sum_{s=0}^{\infty}\frac{1}{s!}D^{s}f(0)\left(x^{s}\right), \tag{3.1}\]
where \(D^{s}f(0)\), \(s\in\mathbb{N}\), denotes the \(s\)-th Frechet derivative of \(f\) at \(0\), which is a bounded symmetric \(s\)-linear mapping from \(\prod_{i=1}^{s}X\) to \(\mathbb{C}\). It is understood that \(D^{0}f(0)\left(x^{0}\right)=f(0)\). A domain \(G\subset X\) is said to be _balanced_ if \(zG\subset G\) for all \(z\in\mathbb{D}\). Given a balanced domain \(G\), we denote by \(K_{X}^{G}(\Omega)\) the _higher dimensional Bohr radius_, that is, the largest non-negative number \(r\) such that
\[\sum_{s=1}^{\infty}\left|\frac{1}{s!}D^{s}f(0)\left(x^{s}\right)\right|\leq d(f(0),\partial\Omega)\]
holds for all \(x\in rG\) and all holomorphic functions \(f\in\mathcal{H}(G,\Omega)\) with the expansion (3.1) about the origin. Here, \(d\) denotes the _Euclidean distance_ between \(f(0)\) and the boundary \(\partial\Omega\) of the domain \(\Omega\). It is easy to see that the classical Bohr inequality (1.1) states that \(K_{\mathbb{C}}^{G}(\mathbb{D})=1/3.\) In recent years, researchers have paid considerable attention to the study of the Bohr inequality and its refined versions for Banach spaces. For example, Aizenberg [7] obtained that \(K_{\mathbb{C}^{n}}^{G}(\mathbb{D})\geq 1/3\) for any balanced domain \(G\subset\mathbb{C}^{n}\), and also showed that the constant \(K_{\mathbb{C}^{n}}^{G}(\mathbb{D})=1/3\) is best possible when \(G\) is a convex domain. Moreover, under the restriction that \(f\in\mathcal{H}(G,\mathbb{D})\) satisfies \(f(0)=0\), Liu and Ponnusamy [46] improved this bound to \(K_{\mathbb{C}^{n}}^{G}(\mathbb{D})\geq 1/\sqrt{2}\) for any balanced domain \(G\subset\mathbb{C}^{n}\), and obtained that the constant \(K_{\mathbb{C}^{n}}^{G}(\mathbb{D})=1/\sqrt{2}\) is best possible if \(G\) is a convex domain. Furthermore,
Hamada _et al._ [34] have established a generalization of the Bohr inequality to holomorphic mappings \(f\in\mathcal{H}(G,\mathcal{B}_{Y})\), where \(G\) is a bounded balanced domain in a Banach space \(X\) and \(\mathcal{B}_{Y}\) is the (homogeneous) unit ball in a complex Banach space \(Y\), and have shown that the Bohr radius cannot be improved if \(\mathcal{B}_{Y}\) is the unit ball of a \(J^{*}\)-algebra, i.e., \(K_{X}^{G}(\mathbb{D})=1/3\) (see [34, Corollary 3.2]). For a simply connected domain \(\Omega\) in the complex plane \(\mathbb{C}\), Bhowmik and Das [25, Theorem 3] have obtained a lower bound on the quantity \(K_{X}^{G}(\mathbb{D})\). Also, a generalized Bohr radius \(R_{p,q}(X)\), where \(p,q\in[1,\infty)\), is obtained in [28] for a complex Banach space \(X\). Moreover, an \(n\)-variable version \(R_{p,q}^{n}(X)\) of the quantity \(R_{p,q}(X)\) is considered in [28], where \(R_{p,q}^{n}(X)\) is determined for an infinite-dimensional complex Hilbert space \(\mathcal{H}\). Various other results related to the multidimensional Bohr radius have appeared recently (see [7, 9, 10, 18, 19, 22, 28, 43, 49] and references therein).
Motivated by the work of Ali _et al._ [15] and [35, Corollary 1], a problem concerning symmetric analytic functions was raised (see [36, Problem 1]), which was answered completely by establishing the following result.
**Theorem 3.1**.: [36] Given \(k,m\in\mathbb{N}\), \(f(z)=\sum_{s=0}^{\infty}a_{sk+m}z^{sk+m}\in\mathcal{H}(\mathbb{D},\mathbb{D})\). Then
\[\sum_{s=0}^{\infty}|a_{sk+m}z^{sk+m}|\leq 1\;\text{for}\;r\leq r_{k,m},\]
where \(r_{k,m}\) is the maximal positive root of the equation (2.3). The number \(r_{k,m}\) is the best possible.
A multidimensional generalization of Theorem 3.1 was recently established in [19] for functions with lacunary series in the class \(\mathcal{H}(G,\mathbb{D})\). The result is as follows.
**Theorem 3.2**.: [19] Let \(k,m\in\mathbb{N}\), \(0\leq m\leq k\). Suppose that \(G\) is a bounded balanced domain in a complex Banach space \(X\) and that \(f\in\mathcal{H}(G,\mathbb{D})\) is of the form \(f(x)=\frac{D^{m}f(0)(x^{m})}{m!}+\sum_{s=1}^{\infty}\frac{D^{sk+m}f(0)\left(x^{sk+m}\right)}{(sk+m)!}\). Then we have
\[\frac{|D^{m}f(0)\left(x^{m}\right)|}{m!}+\sum_{s=1}^{\infty}\frac{|D^{sk+m}f( 0)\left(x^{sk+m}\right)|}{(sk+m)!}\leq 1\;\text{for}\;x\in\left(r_{k,m}\right)G.\]
Here the constant \(r_{k,m}\) is the maximal positive root in \((0,1)\) of the equation (2.3). The radius \(r_{k,m}\) is best possible.
For further improvement of the inequality in Theorem 3.2, it is natural to raise the following question.
**Question 3.1**.: Can we establish an analogue of Theorem 1.1 which is a sharp refined version of Theorem 3.2?
Ponnusamy _et al._ [50] established a refined version of Bohr's inequality in the case \(f\in\mathcal{H}(\mathbb{D},\mathbb{D})\) with \(f(0)=0\). We recall their result.
**Theorem 3.3**.: Suppose that \(f(z)=\sum_{n=1}^{\infty}a_{n}z^{n}\in\mathcal{H}(\mathbb{D},\mathbb{D}).\) Then
\[\sum_{n=1}^{\infty}|a_{n}|r^{n}+\left(\frac{1}{1+|a_{1}|}+\frac{r}{1-r}\right) \sum_{n=2}^{\infty}|a_{n}|^{2}r^{2n-1}\leq 1\;\;\text{for}\;\;r\leq\frac{3}{5}.\]
The number \(3/5\) is sharp.
We now state our result answering Question 3.1 for functions with lacunary series in the class \(\mathcal{H}(G,\mathbb{D})\). In order to obtain the sharp estimate, we use a recent approach of Liu _et al._ [48], which they used to investigate the Bohr radius for symmetric functions \(f\in\mathcal{H}(\mathbb{D},\mathbb{D})\). In the last section, we show that an application of Theorem 3.4 helps us to establish a multidimensional refined version of the Bohr-type inequality for harmonic functions \(f\) from a bounded balanced domain \(G\) into \(\mathbb{D}\).
**Theorem 3.4**.: Let \(k,m\in\mathbb{N}\), \(0\leq m\leq k\). Suppose that \(G\) is a bounded balanced domain in a complex Banach space \(X\) and that \(f\in\mathcal{H}(G,\mathbb{D})\) is of the form \(f(x)=\frac{D^{m}f(0)(x^{m})}{m!}+\sum_{s=1}^{\infty}\frac{D^{sk+m}f(0)\left(x^{sk+m}\right)}{(sk+m)!}\). Then
\[\mathcal{I}_{m,k}^{f}(r):= \frac{\left|D^{m}f(0)\left(x^{m}\right)\right|}{m!}+\sum_{s=1}^{ \infty}\frac{\left|D^{sk+m}f(0)\left(x^{sk+m}\right)\right|}{(sk+m)!}\] \[\quad+\left(\frac{1}{r^{m}+\frac{\left|D^{m}f(0)\left(x^{m} \right)\right|}{m!}}+\frac{r^{k-m}}{1-r^{k}}\right)\sum_{s=1}^{\infty}\left( \frac{\left|D^{sk+m}f(0)\left(x^{sk+m}\right)\right|}{(sk+m)!}\right)^{2}\leq 1\]
for \(x\in\left(R_{k,m}(\left|c_{0}\right|)\right)G\), where \(R_{k,m}(\left|c_{0}\right|)\) is the maximal positive root in \((0,1)\) of the equation given by (2.4).
As a consequence of Theorem 3.4 (for \(m=0\) and \(k=1\)), we obtain the following corollary which is in fact a refined version of [34, Corollary 3.2].
**Corollary 3.1**.: Let \(f\in\mathcal{H}(G,\mathbb{D})\) be of the form \(f(x)=\sum_{s=0}^{\infty}\frac{D^{s}f(0)(x^{s})}{s!}\), where \(D^{s}f(0)\), \(s\in\mathbb{N}\) denote the \(s\)-th Frechet derivative of \(f\) at \(0\). Then
\[\left|f(0)\right|+\sum_{s=1}^{\infty}\frac{\left|D^{s}f(0)\left(x^{s}\right) \right|}{s!}+\left(\frac{1}{1+\left|f(0)\right|}+\frac{r}{1-r}\right)\sum_{s= 1}^{\infty}\left(\frac{\left|D^{s}f(0)\left(x^{s}\right)\right|}{s!}\right)^{ 2}\leq 1\]
for \(x\in(1/3)G\). Here, the constant \(1/3\) is best possible.
Now we concentrate on obtaining an analogue of Theorem 3.3 for a bounded balanced domain \(G\) in a complex Banach space \(X\) and \(f\in\mathcal{H}(G,\mathbb{D})\) with lacunary series; we obtain the following sharp refined version of Bohr's inequality.
**Theorem 3.5**.: Let \(G\) be a bounded balanced domain in a complex Banach space \(X\) and \(f\in\mathcal{H}(G,\mathbb{D})\) be of the form \(f(x)=\sum_{s=1}^{\infty}\frac{D^{s}f(0)(x^{s})}{s!}\). Then we have
\[\sum_{s=1}^{\infty}\frac{\left|D^{s}f(0)(x^{s})\right|}{s!}+\left(\frac{1}{r+ \frac{\left|Df(0)(x)\right|}{1!}}+\frac{1}{1-r}\right)\sum_{s=1}^{\infty} \left(\frac{\left|D^{s}f(0)(x^{s})\right|}{s!}\right)^{2}\leq 1 \tag{3.2}\]
for \(x\in(3/5)G\). The constant \(3/5\) is best possible.
**Proof of Theorem 3.4**.: Fix an arbitrary \(y\in G\) and let \(F(z):=f(zy)\), \(z\in\mathbb{D}\). Then it is easy to see that \(F:\mathbb{D}\to\mathbb{D}\) is holomorphic and
\[F(z)=\frac{D^{m}f(0)\left(y^{m}\right)}{m!}z^{m}+\sum_{s=1}^{\infty}\frac{D^{sk +m}f(0)\left(y^{sk+m}\right)}{(sk+m)!}z^{sk+m}=z^{m}g(z^{k}),\]
where \(g(z):=c_{0}+\sum_{s=1}^{\infty}c_{s}z^{s}\in\mathcal{H}\left(\mathbb{D},\mathbb{D}\right)\) and
\[c_{0}=\frac{D^{m}f(0)\left(y^{m}\right)}{m!}\text{ and }c_{s}=\frac{D^{sk+m}f(0) \left(y^{sk+m}\right)}{(sk+m)!},\;s=1,2,\ldots\]
Because \(g\in\mathcal{H}\left(\mathbb{D},\mathbb{D}\right),\) in view of Lemma 2.2 (with \(N=1\)), we have
\[\sum_{s=0}^{\infty}|c_{s}||z|^{sk}+\left(\frac{1}{1+|c_{0}|}+\frac{|z|^{k}}{1-|z|^{k}}\right)\sum_{s=1}^{\infty}|c_{s}|^{2}|z|^{2sk}\leq|c_{0}|+\left(1-|c_{0}|^{2}\right)\frac{|z|^{k}}{1-|z|^{k}}\]
and, multiplying both sides by \(|z|^{m}\) and substituting the values of \(c_{s}\), we obtain
\[\sum_{s=0}^{\infty}\frac{|D^{sk+m}f(0)\left(y^{sk+m}\right)|}{(sk+ m)!}|z|^{sk+m}\] \[\quad+\left(\frac{1}{|z|^{m}+\frac{|D^{m}f(0)(y^{m})|}{m!}|z|^{m }}+\frac{|z|^{k-m}}{1-|z|^{k}}\right)\sum_{s=1}^{\infty}\left(\frac{|D^{sk+m}f (0)\left(y^{sk+m}\right)|}{(sk+m)!}\right)^{2}|z|^{2sk+2m}\] \[\leq|z|^{m}|c_{0}|+\left(1-|c_{0}|^{2}\right)\frac{|z|^{k+m}}{1-| z|^{k}}\] \[=1+\frac{\left(\left(1-|c_{0}|-|c_{0}|^{2}\right)\right)|z|^{m+k}+ |z|^{k}+|c_{0}||z|^{m}-1}{1-|z|^{k}}.\]
Therefore, for the setting \(|z|=r\) and \(x=y|z|\), the inequality \(\mathcal{I}_{m,k}^{f}(r)\leq 1\) holds for \(x\in\left(R_{k,m}(|c_{0}|)\right)G\), where \(R_{k,m}(|c_{0}|)\) is the maximal positive root in \((0,1)\) of (2.4).
To prove that the constant \(R_{k,m}(|c_{0}|)\) cannot be improved, we use a technique similar to that in the proofs of [19] and [34]. We show that the inequality \(\mathcal{I}_{m,k}^{f}(r)\leq 1\) does not hold for \(x\in r_{0}G\), where \(r_{0}\in(R_{k,m}(|c_{0}|),1)\). We know that there exist a \(c\in(0,1)\) and \(\gamma\in\partial G\) such that \(cr_{0}>R_{k,m}(|c_{0}|)\) and \(c\sup\{||x||:x\in\partial G\}<||\gamma||\). Let us consider a function \(h\) on \(G\) defined by
\[h(x):=W\left(\frac{c\Psi_{\gamma}(x)}{||\gamma||}\right)\text{ and }W(z):=z^{m}\left(\frac{a-z^{k}}{1-az^{k}}\right),\]
where \(\Psi_{\gamma}\) is a bounded linear functional on \(X\) with \(\Psi_{\gamma}(\gamma)=||\gamma||\), \(||\Psi_{\gamma}||=1\), and \(a\in[0,1)\). It is easy to check that \(c\Psi_{\gamma}(x)/||\gamma||\in\mathbb{D}\) and \(h\in\mathcal{H}(G,\mathbb{D})\). Choosing \(x=r_{0}\gamma\), we get
\[h(r_{0}\gamma)=\left(cr_{0}\right)^{m}\left(\frac{a-\left(cr_{0}\right)^{k}}{1- a\left(cr_{0}\right)^{k}}\right)=\left(cr_{0}\right)^{m}\left(a-\left(1-a^{2} \right)\sum_{s=1}^{\infty}a^{s-1}\left(cr_{0}\right)^{sk}\right).\]
Thus, a tedious computation gives that
\[\sum_{s=0}^{\infty}\frac{|D^{sk+m}f(0)\left(y^{sk+m}\right)|}{(sk+m)!}\left(cr_{0}\right)^{sk+m}\] \[\quad+\left(\frac{1}{(cr_{0})^{m}+\frac{|D^{m}f(0)(y^{m})|}{m!}(cr_{0})^{m}}+\frac{(cr_{0})^{k-m}}{1-(cr_{0})^{k}}\right)\sum_{s=1}^{\infty}\left(\frac{|D^{sk+m}f(0)\left(y^{sk+m}\right)|}{(sk+m)!}\right)^{2}\left(cr_{0}\right)^{2sk+2m}\] \[=\sum_{s=0}^{\infty}\frac{|D^{sk+m}f(0)\left(y^{sk+m}\right)|}{(sk+m)!}\left(cr_{0}\right)^{sk+m}\] \[\quad+\left(\frac{1}{1+\frac{|D^{m}f(0)(y^{m})|}{m!}}+\frac{(cr_{0})^{k}}{1-(cr_{0})^{k}}\right)\sum_{s=1}^{\infty}\left(\frac{|D^{sk+m}f(0)\left(y^{sk+m}\right)|}{(sk+m)!}\right)^{2}\left(cr_{0}\right)^{2sk+m}\] \[=(cr_{0})^{m}a+\left(1-a^{2}\right)\sum_{s=1}^{\infty}a^{s-1}\left(cr_{0}\right)^{sk+m}+\left(\frac{1}{1+a}+\frac{\left(cr_{0}\right)^{k}}{1-(cr_{0})^{k}}\right)\left(1-a^{2}\right)^{2}\sum_{s=1}^{\infty}a^{2s-2}\left(cr_{0}\right)^{2ks+m}\] \[=(cr_{0})^{m}\left(a+\left(1-a^{2}\right)\frac{(cr_{0})^{k}}{1-(cr_{0})^{k}}\right)\] \[=1+\frac{\left(\left(1-a-a^{2}\right)\left(cr_{0}\right)^{m+k}+\left(cr_{0}\right)^{k}+a\left(cr_{0}\right)^{m}-1\right)}{1-(cr_{0})^{k}}>1.\]
This shows that the constant \(R_{k,m}(|c_{0}|)\) cannot be improved. \(\Box\)
**Proof of Theorem 3.5.** Fix an arbitrary \(y\in G\) and let \(F(z):=f(zy)\) for \(z\in\mathbb{D}\). Then it is easy to see that \(F:\mathbb{D}\to\mathbb{D}\) is holomorphic and
\[F(z)=\sum_{s=1}^{\infty}\frac{D^{s}f(0)(y^{s})}{s!}z^{s}=\sum_{s=1}^{\infty}b_ {s}z^{s}=:\varphi(z),\]
where \(b_{s}=\frac{D^{s}f(0)(y^{s})}{s!}\) for s=1, 2, \(\ldots\) and \(\varphi(z)=z\sum_{s=1}^{\infty}b_{s}z^{s-1}=z\sum_{s=0}^{\infty}B_{s}z^{s}\), where \(B_{s}:=b_{s+1}\) for \(s=0,1,2,\ldots\). Clearly, \(\sum_{s=0}^{\infty}B_{s}z^{s}\in\mathcal{H}(\mathbb{D},\overline{\mathbb{D}})\). In view of Lemma 2.2 (with \(N=1\)), we must have
\[\sum_{s=0}^{\infty}|B_{s}||z|^{s}+\left(\frac{1}{1+|B_{0}|}+\frac{|z|}{1-|z|} \right)\sum_{s=1}^{\infty}|B_{s}|^{2}|z|^{2s}\leq|B_{0}|+\left(1-|B_{0}|^{2} \right)\frac{|z|}{1-|z|}\]
which implies that
\[\sum_{s=0}^{\infty}|b_{s+1}||z|^{s}+\left(\frac{1}{1+|b_{1}|}+\frac{|z|}{1-|z|} \right)\sum_{s=1}^{\infty}|b_{s+1}|^{2}|z|^{2s}\leq|B_{0}|+\left(1-|B_{0}|^{2} \right)\frac{|z|}{1-|z|}.\]
In fact, we have
\[\sum_{s=1}^{\infty}\frac{|D^{s+1}f(0)(y^{s+1})|}{(s+1)!}|z|^{s}+ \left(\frac{1}{1+\frac{|Df(0)(y)|}{1!}}+\frac{|z|}{1-|z|}\right)\sum_{s=1}^{ \infty}\left(\frac{|D^{s+1}f(0)(y^{s+1})|}{(s+1)!}\right)^{2}|z|^{2s}\] \[\leq|B_{0}|+\left(1-|B_{0}|^{2}\right)\frac{|z|}{1-|z|}.\]
Multiplying both sides by \(|z|\), the above inequality takes the following form
\[\sum_{s=1}^{\infty}\frac{|D^{s+1}f(0)(y^{s+1})|}{(s+1)!}|z|^{s+1}+ \left(\frac{|z|^{2}}{|z|+\frac{|Df(0)(y)|}{1!}|z|}+\frac{|z|^{2}}{1-|z|}\right) \sum_{s=1}^{\infty}\left(\frac{|D^{s+1}f(0)(y^{s+1})|}{(s+1)!}\right)^{2}|z|^{ 2s+1}\] \[\leq|B_{0}||z|+\left(1-|B_{0}|^{2}\right)\frac{|z|^{2}}{1-|z|}.\]
Setting \(|z|=r\) and \(x=y|z|\), we easily obtain
\[\sum_{s=1}^{\infty}\frac{|D^{s+1}f(0)(x^{s+1})|}{(s+1)!}+\left( \frac{1}{r+\frac{|Df(0)(x)|}{1!}}+\frac{1}{1-r}\right)\sum_{s=1}^{\infty} \left(\frac{|D^{s+1}f(0)(x^{s+1})|}{(s+1)!}\right)^{2}\] \[\leq|B_{0}|r+\left(1-|B_{0}|^{2}\right)\frac{r^{2}}{1-r}\] \[\leq 1+\frac{J(|B_{0}|)}{1-r},\]
where \(J(t):=-1+r+tr(1-r)+(1-t^{2})\,r^{2}\). Thus the desired inequality holds whenever \(J(t)\leq 0\); we show that this is the case for \(r\leq 3/5\). It can easily be shown that \(J(t)\) has a critical point at \(t_{0}=(1-r)/(2r)\) and attains its maximum there. These observations lead us to
\[J(t)\leq J(t_{0})=\frac{(5r-3)(r+1)}{4}\leq 0\text{ for }r\leq\frac{3}{5}.\]
Therefore, the desired inequality is established.
Finally, we show that inequality (3.2) does not hold for \(x\in r_{0}G\), where \(r_{0}\in(3/5,1)\). We know that there exist \(c\in(0,1)\) and \(\gamma\in\partial G\) such that \(cr_{0}>3/5\) and \(c\sup\{||x||:x\in\partial G\}\leq||\gamma||\). Now, we consider a function \(h\) on \(G\) defined by
\[h(x):=\omega\left(\frac{c\Psi_{\gamma}(x)}{||\gamma||}\right)\text{ and }\omega(z):=z\left(\frac{a-z}{1-az}\right),\]
where \(\Psi_{\gamma}\) is a bounded linear functional on \(X\) with \(\Psi_{\gamma}(\gamma)=||\gamma||\), \(||\Psi_{\gamma}||=1\), and \(a\in[0,1)\). It is easy to check that \(c\Psi_{\gamma}(x)/||\gamma||\in\mathbb{D}\) and \(h\in\mathcal{H}(G,\mathbb{D})\). Choosing \(x=r_{0}\gamma\), we get
\[h(r_{0}\gamma)=(cr_{0})\left(\frac{a-(cr_{0})}{1-a(cr_{0})}\right)=a(cr_{0})- \left(1-a^{2}\right)\sum_{s=1}^{\infty}a^{s-1}(cr_{0})^{s+1}.\]
By a routine computation, we get that
\[|D^{s}h(0)(y)|(cr_{0})+\sum_{s=1}^{\infty}\frac{|D^{s}h(0)(y^{s})|}{s!}(cr_{0})^{s}\] \[\quad+\left(\frac{1}{cr_{0}+a(cr_{0})}+\frac{1}{1-cr_{0}}\right)\sum_{s=1}^{\infty}\left(\frac{|D^{s}h(0)(y^{s})|}{s!}(cr_{0})^{s}\right)^{2}\] \[=a(cr_{0})+\left(1-a^{2}\right)\sum_{s=2}^{\infty}a^{s-2}(cr_{0})^{s}+\left(\frac{1}{cr_{0}+a(cr_{0})}+\frac{1}{1-(cr_{0})}\right)\sum_{s=2}^{\infty}\left(1-a^{2}\right)^{2}a^{2s-4}(cr_{0})^{2s}\] \[=a(cr_{0})+\left(1-a^{2}\right)\frac{(cr_{0})^{2}}{1-a(cr_{0})}\] \[>1.\]
This shows that the constant \(3/5\) is sharp.
### Refined Bohr inequality for harmonic functions in balanced domains
Methods of harmonic mappings have been applied to study and solve fluid flow problems (see [14, 26]). For example, in 2012, Aleman and Constantin [14] established a connection between harmonic mappings and ideal fluid flows. In fact, Aleman and Constantin developed an ingenious technique to solve the incompressible two-dimensional Euler equations in terms of univalent harmonic mappings (see [26] for details).
A complex-valued function \(f(z)=u(x,y)+iv(x,y)\) is called harmonic in \(\mathbb{D}\) if both \(u\) and \(v\) satisfy Laplace's equation, \(\bigtriangledown^{2}u=0\) and \(\bigtriangledown^{2}v=0\), where
\[\bigtriangledown^{2}:=\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2} }{\partial y^{2}}.\]
It is well known that, under the assumption \(g(0)=0\), a harmonic function \(f\) has the unique canonical representation \(f=h+\overline{g}\), where \(h\) and \(g\) are analytic functions in \(\mathbb{D}\), called the analytic and co-analytic parts of \(f\), respectively. If, in addition, \(f\) is univalent, then we say that \(f\) is univalent harmonic on a domain \(\Omega\). A locally univalent harmonic mapping \(f=h+\overline{g}\) is sense-preserving whenever its Jacobian \(J_{f}(z):=|f_{z}(z)|^{2}-|f_{\bar{z}}(z)|^{2}=|h^{\prime}(z)|^{2}-|g^{\prime}(z)|^{2}>0\) for \(z\in\mathbb{D}\).
In 2010, Abu-Muhanna [6] considered for the first time the Bohr radius for the class of complex-valued harmonic functions \(f=h+\bar{g}\) defined in \(\mathbb{D}\) with \(|f(z)|<1\) for all \(z\in\mathbb{D}.\) After this, Kayumov _et al._ [37] studied the Bohr radius for locally univalent harmonic mappings, Liu and Ponnusamy [45] determined the Bohr radius for \(k\)-quasiconformal harmonic mappings, Evdoridis _et al._ [32] studied an improved version of Bohr's inequality for locally univalent harmonic mappings, and Ahamed [1, 3] studied refined Bohr-Rogosinski inequalities for certain classes of harmonic mappings. Recently, Arora [19] studied a Bohr-type inequality for harmonic functions with lacunary series in complex Banach spaces.
A harmonic mapping \(f=h+\bar{g}\) defined in a bounded balanced domain \(G\) into \(\mathbb{D}\) can be expressed in lacunary series as
\[f(x)=\sum_{s=0}^{\infty}\frac{D^{sk+m}h(0)\left(x^{sk+m}\right)}{(sk+m)!}+ \overline{\sum_{s=0}^{\infty}\frac{D^{sk+m}g(0)\left(x^{sk+m}\right)}{(sk+m)!}}, \tag{3.3}\]
where \(h\) and \(g\) are in \(\mathcal{H}(G,\mathbb{D})\). It is established in [19] that
\[\sum_{s=0}^{\infty}\left(\frac{|D^{sk+m}h(0)\left(x^{sk+m}\right)|}{(sk+m)!}+ \frac{|D^{sk+m}g(0)\left(x^{sk+m}\right)|}{(sk+m)!}\right)\leq 2 \tag{3.4}\]
for \(x\in(r_{k,m})G\), where \(r_{k,m}\) is the maximal positive root of equation (2.3) and the radius \(r_{k,m}\) is best possible.
In view of the above observations, it is natural to ask: _can we obtain a multi-dimensional refined version of the Bohr-type inequality for harmonic functions \(f\) from a bounded balanced domain \(G\) into \(\mathbb{D}\)?_ We give an affirmative answer to this question, considering a refined version of (3.4), by establishing the following sharp result.
**Theorem 3.6**.: Let \(G\) be a bounded balanced domain in a complex Banach space \(X\). Suppose that \(k,m\in\mathbb{N}\) with \(0\leq m\leq k\), and that \(f=h+\bar{g}\) is a harmonic mapping from \(G\) into \(\mathbb{D}\) given by (3.3). Then the inequality \(A_{h}(x)+A_{g}(x)\leq 2\) holds for \(x\in(R_{m,k}(|c_{0}|))G\), where \(R_{m,k}(|c_{0}|)\) is the maximal positive root in \((0,1)\) of equation (2.4), and where we define
\[A_{V}(x):= \sum_{s=0}^{\infty}\left|\frac{D^{sk+m}V(0)\left(x^{sk+m}\right) }{(sk+m)!}\right|\] \[\quad+\left(\frac{1}{r^{m}+\frac{|D^{m}V(0)(x^{m})|}{m!}}+\frac{ r^{k-m}}{1-r^{k}}\right)\sum_{s=1}^{\infty}\left(\left|\frac{D^{sk+m}V(0) \left(x^{sk+m}\right)}{(sk+m)!}\right|\right)^{2}\]
for \(V=h,g\). The radius \(R_{m,k}(|c_{0}|)\) is best possible.
**Proof of Theorem 3.6**.: By assumption \(f=h+\bar{g}\) and \(h,g\in\mathcal{H}(G,\mathbb{D})\). Therefore, by applying Theorem 3.4 to the functions
\[h(x)=\sum_{s=0}^{\infty}\frac{D^{sk+m}h(0)\left(x^{sk+m}\right)}{(sk+m)!}\; \text{and}\;g(x)=\sum_{s=0}^{\infty}\frac{D^{sk+m}g(0)\left(x^{sk+m}\right)}{ (sk+m)!},\]
we easily obtain that \(A_{h}(x)\leq 1\) and \(A_{g}(x)\leq 1\) for \(x\in(R_{m,k}(|c_{0}|))G\), where \(R_{m,k}(|c_{0}|)\) is the maximal root in \((0,1)\) of equation (2.4). Adding these two inequalities yields that
\[A_{h}(x)+A_{g}(x)\leq 2\;\text{for}\;x\in(R_{m,k}(|c_{0}|))G.\]
Clearly, the desired inequality is established. Next, in order to show that the constant \(R_{m,k}(|c_{0}|)\) is best possible, we use an argument similar to that used in the proof of Theorem 3.4. Hence, we consider the function \(h\) given in the above example, evaluated at \(r_{0}\gamma\),
with the same choice of \(r_{0}\), \(c\) and \(\gamma\). Therefore, for \(|\lambda|=1\), it is easy to see that
\[h(r_{0}\gamma)+\overline{\lambda h(r_{0}\gamma)}=\left(1+\bar{\lambda}\right)\left(cr_{0}\right)^{m}\left(a-\left(1-a^{2}\right)\sum_{s=1}^{\infty}a^{s-1}\left(cr_{0}\right)^{sk}\right).\]
Hence, an easy computation shows that
\[A_{h}(x)+A_{g}(x)= 2\left(cr_{0}\right)^{m}\left(a+\left(1-a^{2}\right)\sum_{s=0}^ {\infty}a^{s-1}\left(cr_{0}\right)^{sk}\right.\] \[\left.\quad+\left(\frac{1}{1+a}+\frac{(cr_{0})^{k}}{1-(cr_{0})^{k }}\right)\frac{(1-a^{2})^{2}(cr_{0})^{2}}{1-a^{2}(cr_{0})^{2k}}\right)>2\]
and with this shows that the constant \(R_{m,k}(|c_{0}|)\) is sharp. In fact, the argument used to establish the best possible part in the proof of Theorem 3.4 will also show the last inequality on the right side.
**Acknowledgment:** The authors would like to thank the anonymous referees for their elaborate comments and suggestions which will improve significantly the presentation of the paper.
**Compliance of Ethical Standards**
**Conflict of interest** The authors declare that there is no conflict of interest regarding the publication of this paper.
**Data availability statement** Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
|
2306.04669 | Spectral variability of photospheric radiation due to faculae II:
Facular contrasts for cool main-sequence stars | Magnetic features on the surface of stars, such as spots and faculae, cause
stellar spectral variability on time-scales of days and longer. For stars other
than the Sun, the spectral signatures of faculae are poorly understood,
limiting our ability to account for stellar pollution in exoplanet transit
observations. Here we present the first facular contrasts derived from
magnetoconvection simulations for K0, M0 and M2 main-sequence stars and compare
them to previous calculations for G2 main-sequence stars. We simulate
photospheres and immediate subsurface layers of main-sequence spectral types
between K0 and M2, with different injected vertical magnetic fields (0 G, 100
G, 300 G and 500 G) using MURaM, a 3D radiation-magnetohydrodynamics code. We
show synthetic spectra and contrasts from the UV (300 nm) to the IR (10000 nm)
calculated using the ATLAS9 radiative transfer code. The calculations are
performed for nine viewing angles to characterise the facular radiation across
the disc. The brightness contrasts of magnetic regions are found to change
significantly across spectral type, wavelength and magnetic field strength,
leading to the conclusion that accurate contrasts cannot be found by scaling
solar values. This is due to features of different size, apparent structure and
spectral brightness emerging in the presence of a given magnetic field for
different spectral types. | Charlotte M. Norris, Yvonne C. Unruh, Veronika Witzke, Sami K. Solanki, Natalie A. Krivova, Alexander I. Shapiro, Robert Cameron, Benjamin Beeck | 2023-06-07T15:40:07Z | http://arxiv.org/abs/2306.04669v1 | Spectral variability of photospheric radiation due to faculae II: Facular contrasts for cool main-sequence stars.
###### Abstract
Magnetic features on the surface of stars, such as spots and faculae, cause stellar spectral variability on time-scales of days and longer. For stars other than the Sun, the spectral signatures of faculae are poorly understood, limiting our ability to account for stellar pollution in exoplanet transit observations. Here we present the first facular contrasts derived from magnetoconvection simulations for K0, M0 and M2 main-sequence stars and compare them to previous calculations for G2 main-sequence stars. We simulate photospheres and immediate subsurface layers of main-sequence spectral types between K0 and M2, with different injected vertical magnetic fields (0 G, 100 G, 300 G and 500 G) using MURaM, a 3D radiation-magnetohydrodynamics code. We show synthetic spectra and contrasts from the UV (300 nm) to the IR (10 000 nm) calculated using the ATLAS9 radiative transfer code. The calculations are performed for nine viewing angles to characterise the facular radiation across the disc. The brightness contrasts of magnetic regions are found to change significantly across spectral type, wavelength and magnetic field strength, leading to the conclusion that accurate contrasts cannot be found by scaling solar values. This is due to features of different size, apparent structure and spectral brightness emerging in the presence of a given magnetic field for different spectral types.
keywords: stars: activity - stars: atmospheres - stars: late-type
## 1 Introduction
Magnetic activity is thought to be the main source of variability of late-type stars on time-scales of a day and longer. Over 96% of the total solar irradiance (TSI) variability on these time-scales is reproduced by models attributing the variability to magnetic features on the solar surface (including dark spots and bright faculae (Krivova et al., 2006; Yeo et al., 2017). The radiation observed from each magnetic feature on the surface of the Sun changes as the feature evolves. Additionally, the radiation is modulated due to the movement of the feature across the observed disc caused by rotation. Therefore, both the appearance and disappearance of features on the surface, as well as their location on the disc, affect the radiative output of a star.
Although faculae are small compared to spots, and often low contrast at disc centre, they strongly influence the TSI and solar spectral irradiance (SSI) and, due to their large number and longer lifetimes, lead to the Sun being brighter in times of high magnetic activity (see, e.g., Solanki et al., 2013). Therefore, being able to characterise the radiation from faculae is important for fully understanding the variability of the Sun and likely also of other stars. Stellar variability in turn is important to understand as a noise source for planetary characterisation, particularly when using the transit method (for a recent overview, see, e.g., Rackham et al., 2022). The presence of stellar activity has two main effects on the transit lightcurves. On the one hand, the occultation of active regions leads to "stellar noise" with bumps and troughs due to occulted dark and bright regions (see, e.g., Oshagh et al., 2014; Kirk et al., 2016; Espinoza et al., 2019). On the other hand, the change in the radiation emitted from the unocculted stellar disc affects the transit depth through what is often termed the "transit light-source effect" (TLSE) (see Rackham et al., 2018, 2019).
Small-scale magnetic features on stars other than the Sun are currently unresolvable, leaving the Sun as our main source of information regarding faculae. However, even on the Sun, the contrasts of faculae are difficult to measure, resulting in measurements having only been taken in a few wavelength bands (Chapman & McGuire, 1977; Ermolli et al., 2013; Yeo et al., 2013). To obtain full spectra of these features, model atmospheres and radiative transfer methods must be employed.
One dimensional atmospheric models have been used to synthesise spectra for quiet-Sun and solar facular regions (see, e.g., Fontenla et al., 1993; Unruh et al., 1999). More recently, one dimensional photospheric models have been used by Witzke et al. (2018) to explore the effects of metallicity on facular contrasts for stars with solar effective temperature. To calculate intensity spectra at different viewing angles, the optical depth is artificially adjusted to account for the longer path length traversed when the stellar atmosphere is viewed towards the limb. This method does not account for the corrugated nature of the granular structure in the atmosphere, or the 3D nature of the faculae. Therefore, while these models capture the overall disc-integrated properties of faculae reasonably well, they do not reproduce the observed limb-dependent facular contrasts. To obtain accurate centre-to-limb variations of magnetic regions on stars, 3D models must be employed.
A range of such models now exist, including Stagger (Galsgaard & Nordlund, 1996; Magic et al., 2013; Cubas Armas & Fabbian, 2021), CO5BOLD (Freytag et al., 2012; Salhab et al., 2018), and MURaM (see Vogler et al., 2005; Beeck et al., 2013a, and Sec. 2.1 for more detail). These have been employed to model line profile shapes and brightness variations on late-type stars. While some differences remain between the different magnetoconvection models, there is generally good agreement on the overall granulation properties and observables (see, e.g., Beeck et al., 2012, for a comparison of Stagger, CO5BOLD and MURaM).
In Norris et al. (2017), henceforth paper i, 3D model atmospheres were used to obtain spectra from the UV to the IR for magnetised regions on G2V spectral type stars at different viewing angles on the stellar disc. Here, we employ the same methods as in paper i, described in section 2, for spectral types of K0V, M0 and M2V. This paper explores the differences in radiation emitted from simulated magnetic regions on various main-sequence spectral types from M2 to G2. The spectral types compared in this paper are characterised by different effective temperatures, \(T_{\rm eff}\), and strengths of gravitational acceleration, \(\log g\), defined in Table 1.
Images of the emergent intensities at a range of wavelengths and viewing angles are presented in Sec. 3. Mean contrast spectra with respect to field-free simulations are explored in Sec. 4. The mean contrast spectra are used to compare different spectral types and activities for regions of otherwise similar properties (e.g., the number of granules). Our findings are discussed and summarised in Sec. 5
## 2 Synthesising Spectra
### MURaM simulations
The mean spectra presented in this paper were derived from 3D snapshots of the surface of late-type stars. The snapshots are taken from different simulation runs that were either magnetic field-free or had different spatially averaged magnetic flux densities. The grids of temperature, pressure, density, magnetic field and velocity for "box-in-a-star" simulations, containing the photosphere and upper layers of the convection zone, were produced using the MURaM magneto-hydrodynamics code (Vogler et al., 2005). MURaM allows for tuning of the effective temperature of the atmosphere by adjusting the constant entropy density inflow through the bottom boundary (Beeck, 2014); \(\log g\) can also be set, allowing for the simulation of different spectral types. The boundaries for the box in the horizontal direction are periodic. The simulations used here are as described in Beeck et al. (2013a), with the upper boundary closed for flows. They use the OPAL equation-of-state (Rogers, 1994; Rogers et al., 1996). Heating and cooling rates are calculated by solving the radiative transfer equation in four bins following the approach by Nordlund (1982). Opacities in the four bins are derived from the ATLAS9, solar metallicity, opacity distribution functions (ODFs - Kurucz, 1993; Castelli & Kurucz, 2001).
Simulations were performed for four combinations of effective temperature and gravitational acceleration to represent stellar spectral types between G2V and M2V (see Table 1). The grid of simulations used for the G2, K0, M0 and M2 spectral types in this study have been presented in Beeck et al. (2013a,b, 2015a,b) and Beeck (2014). Results of the spectral synthesis from G2 simulations have been described in Yeo et al. (2017) and Norris et al. (2017). Here we present equivalent calculations for K0, M0 and M2 main-sequence stars. Each spectral type has a different horizontal and vertical resolution chosen such that the simulation box contains a similar number of granules (approximately 25 for field-free simulations). All simulations are 512 by 512 pixels in the horizontal direction while the number of depth pixels changes from one simulation run to another. Table 1 shows the parameters used for each spectral type. The table includes the magnetic field-free effective temperature (see below), logarithmic gravitational acceleration (\(\log g\)) used in the MURaM simulations, the depth and extent of the simulation cubes, as well as the vertical (\(z\)) and horizontal (\(x/y\)) resolution.
Magnetic field-free (henceforth hydrodynamic or field-free) simulations are run for several hours, a time comparable to the Kelvin-Helmholtz time scale of the simulation boxes. After this time, the simulations are independent of the initial conditions. The simulations are then run further and at least six snapshots are selected at roughly constant time intervals for a given spectral type. On average this time interval is 6 minutes for field-free simulations. This time interval is chosen to allow the granulation patterns to evolve sufficiently
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Spectral & field-free & \(\log g\) & depth & width & \multicolumn{2}{c}{Resolution} & snap- \\ type & \(T_{\rm eff}\) [K] & [cgs] & [km] & [km] & \(z\) & \(x/y\) & shots \\ \hline G2V & 5847 \(\pm\) 6 & 4.44 & 3000 & 9000 & 10.0 & 17.6 & 10 \\ K0V & 4903 \(\pm\) 6 & 4.61 & 1800 & 6000 & 6.0 & 11.7 & 6 \\ M0V & 3906 \(\pm\) 1 & 4.83 & 1000 & 2500 & 4.0 & 4.9 & 6 \\ M2V & 3676 \(\pm\) 1 & 4.83 & 800 & 1500 & 3.2 & 3.0 & 6 \\ \hline \end{tabular}
\end{table}
Table 1: Spectral types and simulation parameters. Effective temperatures, \(T_{\rm eff}\), and their standard deviations are for the field-free simulations and are derived from disc-integrated synthetic intensity spectra calculated with ATLAS9 in this study (see text); the sample standard deviations include a correction factor, \(N-1\). The gravitational field strength is given in cgs units in column 3. The total depth of the cubes is listed in column 4; the horizontal extent is given in column 5. Columns 6 and 7 list the resolution in the vertical (\(z\)) and horizontal directions; column 8 gives the number of snapshots, \(N\). All cubes encompass 512 by 512 pixels in the horizontal directions.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Spectral & \(\langle B_{z}\rangle\) & \(\epsilon_{\rm bol}\) & \(\sigma_{c}\) & \(\Delta T\) & snap- \\ type & [G] & & & [K] & shots \\ \hline G2V & 100 & 0.016 & 0.006 & +24 & 10 \\ G2V & 300 & 0.039 & 0.009 & +56 & 10 \\ G2V & 500 & 0.050 & 0.007 & +72 & 10 \\ K0V & 100 & 0.019 & 0.006 & +23 & 6 \\ K0V & 300 & 0.033 & 0.007 & +40 & 12 \\ K0V & 500 & 0.027 & 0.008 & +32 & 6 \\ M0V & 100 & 0.004 & 0.001 & +4 & 6 \\ M0V & 300 & 0.005 & 0.001 & +5 & 14 \\ M0V & 500 & 0.000 & 0.001 & 0 & 6 \\ M2V & 100 & 0.002 & 0.002 & +2 & 6 \\ M2V & 300 & \(-0.002\) & 0.002 & \(-2\) & 12 \\ M2V & 500 & \(-0.014\) & 0.001 & \(-13\) & 6 \\ \hline \end{tabular}
\end{table}
Table 2: List of magnetic simulations. Columns 1 and 2 list the main-sequence spectral types and mean magnetic field; columns 3 and 4 list the disc-integrated bolometric contrast and its standard deviation with respect to the non-magnetic snapshots. Column 5 gives the difference between the effective temperature (as a measure of the bolometric flux) of the listed magnetic and the non-magnetic run. The last column lists the number of snapshots used for the facular intensities; see Tab. 1 for the non-magnetic snapshots.
between snapshots to make them independent. However, on the time-scales of these snapshots, granules tend to re-appear in similar locations; thus, the granule positions will be correlated across the snapshots (Cegla et al., 2018). While the average total radiative flux of the snapshots is close to the average over the whole relaxed portion of the simulation, longer time series (see, e.g., Salhab et al., 2018; Thaler & Spruit, 2014) are necessary to accurately determine the intrinsic variability of the bolometric flux along with the bolometric flux deficits or enhancements due to the magnetic flux.
Translated into effective temperature, the standard deviation for the field-free snapshots is given in Table 1 along with the mean effective temperature, \(T_{\rm eff}\). The \(T_{\rm eff}\) listed here has been derived from disc-integrated spectral intensities calculated with ATLAS9 (see Sec. 2.2 for details). In a first step, the spectral intensities were integrated between 149.5 nm and 160 000 nm to yield bolometric intensities. As our calculations include limb distances between \(\mu=0.2\) and \(\mu=1.0\), a 3-parameter non-linear limb darkening law (Sing et al., 2009) was then used to extrapolate intensities to the limb and integrated over the stellar disc to yield bolometric fluxes. The effective temperatures listed for the K0, M0 and M2 simulations are slightly different from the temperatures given in Beeck et al. (2013a) mainly due to the different number of frequency bins and angles in ATLAS9 and MURaM. The G2V simulations used here are those presented in Norris et al. (2017) and Yeo et al. (2017) and differ from the ones in Beeck et al. (2013a).
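As a concrete illustration of this step, the following minimal sketch (not the pipeline used for this work; function and variable names are ours, and the \(\mu\)-dependent bolometric mean intensities are assumed to have already been formed from the synthetic spectra) fits a three-parameter non-linear limb-darkening law of the form \(I(\mu)/I(1)=1-c_{2}(1-\mu)-c_{3}(1-\mu^{3/2})-c_{4}(1-\mu^{2})\), integrates over the stellar disc and converts the resulting bolometric flux into an effective temperature via the Stefan-Boltzmann law:

```python
import numpy as np
from scipy.optimize import curve_fit

SIGMA_SB = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def limb_darkening(mu, c2, c3, c4):
    """Three-parameter non-linear law: I(mu)/I(mu=1)."""
    return 1.0 - c2 * (1.0 - mu) - c3 * (1.0 - mu**1.5) - c4 * (1.0 - mu**2)

def teff_from_intensities(mu_grid, i_bol):
    """Effective temperature from bolometric mean intensities I(mu) [W m^-2 sr^-1],
    given on the simulated limb-distance grid (here 0.2 <= mu <= 1.0)."""
    i_disc_centre = i_bol[np.argmax(mu_grid)]
    coeffs, _ = curve_fit(limb_darkening, mu_grid, i_bol / i_disc_centre,
                          p0=(0.5, 0.1, 0.1))
    # Disc integration F = 2*pi * int_0^1 I(mu) mu dmu, extrapolating to the limb
    mu_fine = np.linspace(0.0, 1.0, 1001)
    flux = 2.0 * np.pi * i_disc_centre * np.trapz(
        limb_darkening(mu_fine, *coeffs) * mu_fine, mu_fine)
    return (flux / SIGMA_SB) ** 0.25
```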
Homogeneous vertical magnetic fields of 100 G, 300 G\({}^{1}\) and 500 G are injected into one of the relaxed hydrodynamic snapshots to represent a range of activity levels. For each initial average vertical magnetic field, \(\langle B_{z}\rangle\), the simulations are allowed to relax. To determine whether simulations are relaxed, the effective temperature, \(T_{\rm eff}\), and the horizontally averaged magnetic energy density of the simulation are confirmed to be quasi-stationary; see Beeck (2014) for more information on this relaxation criterion. As in the case of the field-free simulations, snapshots are taken at intervals that allow for sufficient evolution of the granulation pattern. As the number of surface elements is large and synthesising spectra is computationally expensive, we use between six and fourteen snapshots for each of our chosen values of the average magnetic field. The average bolometric flux for these snapshots is close to the average over the whole interval of time when the simulation is in a relaxed state. The bolometric contrasts, the standard deviations and effective temperature differences of the magnetic simulations with respect to the non-magnetic simulations are listed in Tab. 2.
Footnote 1: The K0, M0 and M2 simulations with \(\langle B_{z}\rangle=300\) G were not presented in Beeck et al. (2013a,b, 2013a,b) and are new to this publication.
### Spectral Synthesis
Emergent intensities from the ultraviolet (UV) to the far infrared (FIR) are synthesised using the radiative transfer code, ATLAS9 (Kurucz, 1992; Castelli & Kurucz, 1994). Rays are laid through each pixel of the simulation box, producing a grid of 1D atmospheres. For viewing angles away from disc centre, the rays are inclined at an angle, \(\theta\), to the local normal. These sight lines are pivoted at the mean geometrical depth (\(z\)) where the disc-centre optical depth at 500 nm (\(\tau_{\rm 500,\,DC}\)) is unity. This causes surface features to appear at a similar \(x/y\) position in the snapshot at each limb distance, \(\mu=\cos\theta\). We assume that intensities are a function of viewing angle only. As the extent of the simulation boxes is small compared to the stellar radius, we can neglect changes in the viewing angle within a box. Furthermore, at the low spectral resolution considered here, the position of the box relative to the rotation axis of the star does not need to be taken into account.
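The effect of inclining the sight lines can be sketched as a shear of each horizontal layer about the pivot depth. The simplified example below is our own illustration, restricted to rays inclined in the \(x\)-\(z\) plane and to per-layer linear interpolation with hypothetical variable names; the actual calculation interpolates in 2D along the line of sight, as described in the next paragraph.

```python
import numpy as np

def incline_cube(cube, z_grid, z_pivot, dx, theta_deg):
    """Resample a periodic (nz, nx, ny) cube of one quantity along sight lines
    inclined by theta in the x-z plane, pivoting at z_pivot (the mean depth
    where the disc-centre tau_500 is unity)."""
    shift_px = (z_grid - z_pivot) * np.tan(np.radians(theta_deg)) / dx
    nz, nx, ny = cube.shape
    out = np.empty_like(cube)
    x = np.arange(nx)
    for k in range(nz):
        pos = (x + shift_px[k]) % nx          # periodic horizontal boundaries
        lo = np.floor(pos).astype(int)
        w = (pos - lo)[:, None]               # linear interpolation weights
        out[k] = (1.0 - w) * cube[k, lo] + w * cube[k, (lo + 1) % nx]
    return out
```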
Inclined rays are calculated between \(\mu=1.0\) and 0.2 in increments of 0.1 by 2D linear interpolation along the line of sight at depth intervals that match that of the simulation. For each ray the column mass, electron number density and continuum absorption coefficient at 500 nm are derived. For use in ATLAS9, these parameters are interpolated on to an evenly spaced \(\log(\tau_{\rm 500})\) grid with 256 points between \(\log(\tau_{\rm 500})=-5.0\) and 2.5, where most of the radiation is produced.
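The subsequent regridding onto the fixed optical-depth scale amounts to a one-dimensional interpolation per ray; a minimal sketch (our own, assuming \(\log\tau_{500}\) increases monotonically along the ray and using hypothetical names) is:

```python
import numpy as np

def regrid_to_log_tau(log_tau_ray, column, n_points=256, lo=-5.0, hi=2.5):
    """Interpolate ray quantities (temperature, column mass, electron number
    density, continuum opacity at 500 nm, ...) onto an evenly spaced
    log10(tau_500) grid between lo and hi."""
    log_tau_grid = np.linspace(lo, hi, n_points)
    regridded = {name: np.interp(log_tau_grid, log_tau_ray, values)
                 for name, values in column.items()}
    return log_tau_grid, regridded
```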
Synthetic spectra are produced for each sight line of the MURaM box using solar metallicity ODF tables to account for the opacity of the millions of atomic and molecular spectral lines in a stellar atmosphere, particularly in the UV (Castelli & Kurucz, 2001, 2004). In paper 1, outdated ODF tables were used. These have been updated here to the ODFs presented in Castelli & Kurucz (2004), with additionally updated H\({}_{2}\)O contributions produced in 2012. These ODFs use abundances from Grevesse & Sauval (1998), as well as TiO lines from Schwenke (1998). Other improvements for the new ODFs are described in Castelli & Kurucz (2004). For each pixel, we calculate spectra ranging from 149.5 nm to 160 000 nm. The spectra are calculated for 1040 wavelength bins that correspond to the resolution provided by the ATLAS9 ODFs. The resolution of the spectra thus ranges from 1 nm in the UV, to 2 nm in the visible, and up to 20 000 nm in the far infrared.
The spectral synthesis calculations are performed under the assumption of local thermodynamic equilibrium (LTE). This assumption becomes less valid high in the atmosphere, leading to intensities calculated for wavelengths below 300 nm to deviate significantly from more accurate, but time intensive, non-LTE calculations. We thus focus on wavelengths longward of 300 nm here (except for the computation of the bolometric intensities and \(T_{\rm eff}\) values). Further calculations are under way to explore non-LTE effects for wavelengths below 300 nm (Tagirov et al., 2023).
## 3 Intensity Images for Different Spectral Types and Magnetic Flux Densities
### Disc-centre images
Figures 1 to 3 show disc-centre images for K0V, M0V and M2V simulations, respectively. These spectral types were chosen as examples of how the surface structure of a star changes with \(T_{\rm eff}\) and \(\log g\), both in the field-free and magnetic cases (comparable images for the G2 simulations were presented in paper 1). The top row in each of the figures shows the unsigned vertical magnetic field at an optical depth at 500 nm, \(\tau_{\rm 500}\), of one. Images of emergent intensity for four different wavelength bins are shown to represent a variety of behaviours in typically distinct wavelength regions: 388 nm, 602 nm, 1610 nm and 8040 nm from top to bottom\({}^{2}\). Due to the opacity minimum at \(\sim\)1600 nm, the images at 1610 nm allow us to see deepest into the stellar photospheres. Radiation at 600 nm typically forms at intermediate depth (though the presence of TiO shifts the formation depth to higher layers for the M2 stars), while radiation at 388 nm (due to a dense forest of lines) and 8040 nm is predominantly formed in the upper photosphere.
Footnote 2: For the images presented here, the intensity bin widths range from 2 nm in the visible to 40 nm in the infrared; specifically, the intensity bins are 387 nm to 389 nm, 601 nm to 603 nm, 1605 nm to 1615 nm, and 8020 nm to 8040 nm.
The columns in Figs. 1 to 3 show, from left to right, field-free, \(\langle B_{z}\rangle=100\) G, \(\langle B_{z}\rangle=300\) G and \(\langle B_{z}\rangle=500\) G snapshots. The colour bars to the right of the magnetic field images indicate the magnetic flux density in \(\mathrm{kG}\). For ease of comparison, the intensities of the snapshots have been normalised to the mean intensity of the field-free snapshot at each of the displayed wavelengths. The greyscales have been adjusted so that they saturate at \(2\sigma_{\lambda}\), where \(\sigma_{\lambda}\) is the standard deviation of intensities \(I_{\lambda}\) at the wavelength for which the snapshot is shown (this is done purely for display purposes; the monochromatic intensities \(I_{\lambda}\) tend to show non-Gaussian distributions, see Sec. 3.3). The brightness temperatures corresponding to the mean intensities of the snapshots at the different wavelengths are given in Tab. 1.
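The normalisation and greyscale clipping described above can be reproduced in a few lines; the sketch below is a display-only convenience with our own names, and the centring of the clip about the mean is an assumption:

```python
import numpy as np

def display_image(image, i_mean_field_free):
    """Normalise a monochromatic intensity image by the field-free mean and
    clip the greyscale at roughly +/- 2 standard deviations for display only."""
    norm = image / i_mean_field_free
    mean, sigma = norm.mean(), norm.std()
    return np.clip(norm, mean - 2.0 * sigma, mean + 2.0 * sigma)
```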
For all spectral types, granulation, caused by convection, is seen. The size of these convective cells, or granules, changes with spectral type. In the mid-IR, the K0V and G2V snapshots (the latter of which have been discussed in paper 1 and are not shown here) granules appear darker than the intergranular lanes. This is known as reversed granulation (see, e.g., images at 8040 nm in the bottom row of Fig. 1). This occurs due to the reversal of horizontal temperature fluctuations in the middle to upper photosphere due to the balance of adiabatic expansion and radiative heating, as described in Cheung et al. (2007). This reversed granulation also appears in the UV (not shown here) for K0V to M2V stars. The intergranular lanes appear wider in the NUV and mid IR than in the visible and NIR due to the pressure in the atmosphere being lower at the height were the NUV/MIR radiation is formed, leading to more diffuse features.
When magnetic field is injected into the simulation, convective motions of the plasma sweep the flux into the intergranular lanes. For all stars, with an injected vertical magnetic field, \(\langle B_{z}\rangle\), of 100 G, mainly bright features are present, as seen in the second column of Fig. 1 to Fig. 3. For the K0-star simulations, these bright features stand out particularly clearly at 388 nm (due to the temperature-sensitive CN lines) where they can be more than 50% brighter than the mean intensity. For the cooler M2-type stars, contrasts are only slightly larger at 388 nm than at 602 nm, as the CN bands become relatively less important and TiO features in the visible gain in prominence. At 8040 nm, contrasts are typically very low (of the order of 5%, see bottom rows of Figs 1 to 3). As radiation emerges higher in the atmosphere where gas pressure is low, the bright magnetic features take up more of the surface area and stand out very clearly against the subdued granulation patterns.
In some selected wavelength regions such as near \(1.6\,\mu\mathrm{m}\), the otherwise bright features appear dark or show very low contrast compared to the granules. This is due to \(1.6\,\mu\mathrm{m}\) being close to the H\({}^{-}\) opacity minimum, allowing radiation from deeper in the star to emerge. At these wavelengths, radiation seen in the granules will be produced in deeper and hotter layers that are closer in geometrical depth to where radiation is seen emerging from magnetic features, leading to only weak intensity contrasts. In the deeper layers, the magnetic features are also less visible, because they are narrower than higher in the atmosphere (e.g., Solanki et al., 1999).
For \(\langle B_{z}\rangle=300\) G and even more so for 500 G, larger dark features emerge in addition to bright features for all spectral types; these vary in size and number and also according to spectral type, as discussed in the following paragraphs. The resulting (mean) contrast of a magnetic region is produced by the sum of these dark and bright features; mean contrasts will be discussed further in Sec. 4.
K0 stars (Fig. 1) have noticeable bright features for \(\langle B_{z}\rangle=100\) G that increase in number as \(\langle B_{z}\rangle\) increases. For \(\langle B_{z}\rangle=300\) G and 500 G the bright magnetic features are fragmented into smaller sections and for \(\langle B_{z}\rangle=500\) G most of the intergranular lanes are filled with magnetic features. Although the small magnetic features are bright at most wavelengths presented, they are very low contrast, if not dark, around \(1.6\,\mu\mathrm{m}\). A small number of dark features develop for \(\langle B_{z}\rangle=300\) G; these increase in size and number as the magnetic flux density increases. These dark features stand out clearly at 602 nm and 1610 nm; at 388 nm they appear somewhat more diffuse, while at 8040 nm the emission is dominated by the extended bright features that surround them. Therefore, at different wavelengths the balance between dark and bright features varies, resulting in changes in the overall contrast of the magnetic region. For example, the mean contrast (with respect to the mean hydrodynamic intensity) of the \(\langle B_{z}\rangle=500\) G snapshot is positive at 388 nm and 8040 nm, and negative for the same snapshot at 602 nm and 1610 nm (see also Tab. 1).
For M0 and M2 snapshots with \(\langle B_{z}\rangle=100\) G, magnetic features have low contrast or are dark at 1610 nm, but have high contrasts for 388 nm, 602 nm and 8040 nm. For both M-type stars with \(\langle B_{z}\rangle=500\) G large dark features are present, with very few bright features. For M0 snapshots (Fig. 2), the dark features are elongated and more filament-like in shape. They, similarly to the dark features for the K0 snapshot, become less dark at 8040 nm than at 602 nm or 1610 nm. For M2 simulations (Fig. 3), the dark features are larger relative to the granule sizes, and are dark throughout the NUV to the NIR. At which magnetic flux densities dark features first appear depends on the spectral type. Considering the intermediate \(\langle B_{z}\rangle=300\) G simulations, almost no dark features are present for the K0V simulations. The corresponding M0V simulations show some prominent dark features (that somewhat resemble the K0V 500 G simulations) while the M2V simulations are predominantly dark.
### Images at a limb distance of 0.5
Figures 4 to 6 show the same snapshots as Figs. 1 to 3, but now seen at a limb distance of \(\mu=\cos\theta=0.5\), corresponding to a limb angle of \(60^{\circ}\). The images have been normalised by the mean intensity of the field-free simulations at \(\mu=0.5\) (see Tab. 2 for the corresponding brightness temperatures). The inclined viewing angle highlights the corrugated aspect of the granulation. Relative to disc centre, a larger fraction of the hot walls of the magnetic features can be seen, leading to enhanced brightening. As for the disc-centre images, features seen at deeper atmospheric layers (i.e., in the NIR) are less extended and show little brightening.
For snapshots with \(\langle B_{z}\rangle=100\) G, images are typically dominated by bright features; an exception here are images in the NIR at the opacity minimum. For the 300 G and 500 G snapshots we see a mix of dark and bright features with some bright pixels exceeding the mean disc centre intensity. For the K0V and M0V simulations the contribution of the bright features dominates even at \(\langle B_{z}\rangle\) of 500 G, though at most wavelengths the mean contrasts are highest for the \(\langle B_{z}\rangle=300\) G simulations. For the M2V simulations, dark features dominate at \(\langle B_{z}\rangle=500\) G. To illustrate the range of intensities and their distribution, we consider histograms of the images in more detail in the next section.
### Intensity histograms
For a more detailed view of intensity distributions for different spectral types, and to highlight their limb dependence, Fig. 7 shows histograms for K0 (left column), M0 (middle column) and M2 (right column) main-sequence stars at four different wavelengths (388 nm, 602 nm, 1610 nm and 8040 nm from top to bottom). Intensity bin sizes of the histograms vary across wavelength and spectral type, and are given in the caption of the figure. Disc-centre values are
Figure 1: Emergent intensities for K0 snapshots. _Top row:_ unsigned vertical magnetic flux density (in units of kG) at a depth where \(\tau_{500}\) is unity. _Rows 2 to 5:_ disc-centre emergent intensities at 388 nm, 602 nm, 1610 nm and 8040 nm. From left to right, the images are for field-free, \(\langle B_{z}\rangle=100\) G, 300 G and 500 G, respectively. The \(x\) and \(y\)-axes show the size of the snapshots in Mm. Intensities have been normalised by the mean intensity of the field-free simulation (see Tab. 1). Greyscales for intensity images saturate at \(\pm 2\sigma_{\lambda}\), where \(\sigma_{\lambda}\) is the standard deviation of the intensity within a snapshot at wavelength \(\lambda\).
Figure 2: As Fig. 1, but for M0V simulations.
Figure 3: As Fig. 1, but for M2V simulations.
shown with solid lines, while \(\mu=0.5\) histograms are shown by dashed lines. All intensities have been normalised by the field-free disc-centre intensity at the given wavelength.
The histograms are obtained by combining data from all snapshots. The coloured horizontal bars at the top of each graph in Fig. 7 show the range of means of all snapshots at a given \(\mu\) and \(\langle B_{z}\rangle\). These represent the temporal variations of the simulations. Very little temporal variation in mean snapshot intensity is seen compared to the width of the distribution of pixel values. The limited time sampling of our snapshots means that these temporal variations are possibly underestimated due to granules generally re-appearing in the same positions over these time-scales. We note that the shapes of the histograms are insensitive to these small variations in the mean.
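For concreteness, such a pooled histogram could be assembled as follows (a sketch with hypothetical names; the bin edges and the field-free reference intensity used for normalisation are inputs, and the returned range of snapshot means corresponds to the coloured bars in Fig. 7):

```python
import numpy as np

def pooled_histogram(snapshots, i_ref, bin_edges):
    """Histogram of pixel intensities pooled over all snapshots at one
    wavelength and limb distance, normalised so that the bins sum to unity."""
    pooled = np.concatenate([(s / i_ref).ravel() for s in snapshots])
    counts, _ = np.histogram(pooled, bins=bin_edges)
    fractions = counts / counts.sum()
    snapshot_means = [float((s / i_ref).mean()) for s in snapshots]
    return fractions, (min(snapshot_means), max(snapshot_means))
```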
The mean intensities of the magnetic field-free snapshots (indicated by vertical grey lines in Fig. 7) decrease from disc centre (solid lines) to \(\mu=0.5\) (dashed lines) in all plots. This demonstrates that limb darkening is occurring for these wavelengths for all chosen spectral types. The histogram shapes change dramatically as a function of wavelength as illustrated by the four example wavelengths shown here. The 602 nm histograms at disc centre closely resemble the disc-centre bolometric intensity histograms presented in Beeck et al. (2015) and Salhab et al. (2018), in particular the narrowing of the distributions for later spectral types and the change in the peak asymmetry between early and late-type K / early M stars (the histograms in Salhab et al. (2018) are for mean magnetic fluxes of 50 G and show slightly less developed high-intensity shoulders compared to the histograms with \(\langle B_{z}\rangle\)= 100 G here).
At disc centre, the central distributions of the histograms tend to show only small differences between magnetic field-free and \(\langle B_{z}\rangle=100\) G histograms, though all magnetic snapshots show an increase in the number of bright pixels. The largest differences appear
Figure 4: Emergent intensities at \(\mu=0.5\) from K0 simulated atmospheric snapshots. As for the corresponding disc-centre images (Figs 1 to 3), the intensities have been normalised by the mean intensity of the field-free simulation; the greyscales saturate at \(2\sigma_{\lambda}\). The equivalent brightness temperatures for the snapshots shown here are listed in Tab. 10.
Figure 5: Emergent intensities at \(\mu=0.5\) from M0 simulated atmospheric snapshots; see Fig. 4.
at 388 nm and, for the M2 star in particular, also at 8040 nm. The magnetic features emerging in the \(\langle B_{z}\rangle=100\) G simulations are generally small and therefore only a few pixels are moved from the peak to the high-intensity tail when radiation emerges from lower atmospheric layers. When radiation emerges from higher in the atmosphere (for example at 8040 nm) where the pressure is lower, magnetic features take up a larger area, leading to a larger high-intensity tail, and fewer pixels in the peak. Differences become more marked for larger magnetisations: for \(\langle B_{z}\rangle=300\) G and 500 G, dark features form that lead to extended low-intensity tails. At many wavelengths there is also an increase in the number of bright pixels, leading to most histograms developing more pronounced shoulders at larger intensity values. The following paragraphs discuss peculiarities for different spectral types.
**K0V:** Fig. 7 shows that the histograms for K0V stars mainly have single asymmetric peaks at both disc centre and \(\mu=0.5\). This is somewhat different from what was seen for G2V stars (see figure 7 in Norris et al., 2017, for ease of comparison, our histograms are replotted on a linear scale in Fig. B in the Appendix). This is likely due to the less discrete transition between granules and inter-granular lanes for K0 stars compared to G2 stars that leads to less separation between the intensities seen in each feature type. Disc-centre histograms in the NIR (see, e.g., 1610 nm histograms in Fig. 7) are the only exception as a double peak is seen for all but the \(\langle B_{z}\rangle=500\) G simulations; this is due to the granules being more defined at this wavelength.
When the mean magnetic field is increased, the number of small-scale (predominantly bright) and larger-scale (predominantly dark) magnetic features increases (see Fig. 1) which leads to a broadening of the histograms. At disc centre, increasingly prominent low-intensity tails are seen that can be associated with larger magnetic features. Due to their larger diameter, the surrounding atmosphere is unable to heat the larger features sufficiently, so they appear dark by comparison. At the same time, high-intensity tails develop due to the increased number of high-contrast small features. At disc centre in the NIR (where radiation emerges from deeper atmospheric layers) the area taken up by bright magnetic features is negligible and cannot be discerned on the histograms shown here. The combination of bright and dark features results in a flattening of the peak (or peaks in the case of NIR). The relative importance of bright and dark features changes with wavelength, for example, the distribution is close to symmetric at 388 nm, while there is a large low-intensity tail in the NIR.
Looking towards the limb, at \(\mu=0.5\), the histogram peak height decreases with increasing magnetic field as more magnetic concentrations form in the intergranular lanes. These magnetic features allow radiation from deeper layers of the hot granular walls to emerge when the viewing angle is increased, lengthening the high-intensity tails. In the visible and NIR the main histogram peaks become more symmetric for larger \(\langle B_{z}\rangle\); in addition, there is a strengthening of the low-intensity tails. This increase in low intensities is due to larger, cool, magnetic features, which are dark at these wavelengths. When viewed at an angle, these features allow more of the hot walls to be visible (hence the corresponding larger high intensity tails), but, due to their size, some dark areas of the features will still remain visible, increasing the number of dark pixels as \(\langle B_{z}\rangle\) increases.
**M stars:** As with K0V stars, Fig. 7 shows that the M0V and M2V snapshots tend to have singly peaked distributions of intensity (an exception are the non-magnetic snapshots at 388 nm, in particular for the M2V star). Contrary to the K0 simulations, the low-intensity tails increase more strongly than the high-intensity tails as \(\langle B_{z}\rangle\) increases from 100 G to 500 G. This is due to the emergence of large features that are dark at the chosen wavelengths, as illustrated in the images in Fig. 3 (right-hand column). Small-scale magnetic features are less conspicuous, except around 8 \(\mu\)m where radiation emerges from high atmospheric layers; at these heights, the granulation signature is very weak with some intergranular lanes showing weak positive contrasts, a sign of beginning reversed granulation. At 1.6\(\mu\)m where radiation emerges from deeper atmospheric layers, the granules appear at very uniform brightness, while the intergranular lanes extend over a range of brightness. This leads to very asymmetric histograms with a peak at the granular brightness and a sharp drop at the high-intensity side. This sharp drop remains present even in the magnetic simulations as small-scale features have very low contrast and are almost invisible. For \(\langle B_{z}\rangle=500\) G, the larger dark features stand out very clearly and result in extended low-intensity tails.
At 388 nm, 602 nm and 1 610 nm, these dark features remain very pronounced even at inclined viewing angles, resulting in prominent low-intensity tails. At the same time, and similar to K0V stars, the
Figure 6: Emergent intensities at \(\mu=0.5\) from M2 simulated atmospheric snapshots; see Fig. 4.
higher intensity tails increase for \(\mu=0.5\) as the hot and bright walls of the magnetic features are revealed. This effect is less pronounced for the cooler M-type stars where the Wilson depression is smaller (see Beeck et al., 2015; Salhab et al., 2018). Comparatively little brightening is seen at 1.6\(\mu\)m where there is little change in the asymmetry of the histogram shape, though the dramatic decrease at the high-intensity side of the peak is softened for larger \(\langle B_{z}\rangle\).
In summary, the histograms in Fig. 7 show that the distribution of intensities seen in a stellar atmosphere changes significantly across spectral type, wavelength and magnetic field strength. The strong dependence on spectral type and wavelength shows that, at high spatial resolution, the appearance of spectral features cannot simply be scaled from solar values.
## 4 Contrast spectra of magnetic regions for G, K and early M stars
The following section presents investigations into the effect of spectral type, magnetic field and wavelength on the intensities emerging from these simulated regions of a star. Since observations of other stars cannot spatially resolve the stellar surface, the full spatial resolution of the simulated intensities is often not required when these outputs are applied. Therefore, in this section we present the mean emergent intensities across a simulation box. To account for the varying total intensities emitted by different spectral types, we compare the contrasts of the magnetised regions with respect to the hydrodynamic snapshots of the same spectral type.
Contrasts are calculated using the total mean of all snapshots of a given spectral type with a given initial magnetic field. These are taken to represent the mean that would be observed for an area of the size of the simulation boxes. The means thus include pixels of very
Figure 7: Semi-log histograms of intensity for K0, M0 and M2 snapshots (left, middle and right-hand columns, respectively). Black, purple, red and yellow lines indicate field free, \(\langle B_{z}\rangle=\)100 G, 300 G and 500 G simulations, respectively. Intensities have been normalised to the mean intensities of the field-free disc-centre snapshots. Minimum and maximum values of the mean intensities of the individual snapshots that are used to generate these combined histograms are indicated by the small coloured horizontal bars. Histograms are shown at two different limb positions, \(\mu=1.0\) (solid lines) and \(\mu=0.5\) (dashed lines). From top to bottom, graphs show wavelengths of 388 nm, 602 nm, 1610 nm, and 8 040 nm. The histograms have been normalised such that the sum over all bins equals unity.
low magnetic field and of dark magnetic features that are not strictly facular (but which we count as faculae, as these magnetic structures are much smaller than starspots and typically smaller than granules). The contrast of a magnetic region is calculated as
\[C(\langle B_{z}\rangle,\lambda,\mu)\,=\,\frac{I(\langle B_{z}\rangle,\lambda,\mu )-I(0,\lambda,\mu)}{I(0,\lambda,\mu)}. \tag{1}\]
Here, \(C(\langle B_{z}\rangle,\lambda,\mu)\) is the spectral contrast for a given average vertical magnetic field, \(\langle B_{z}\rangle\), and limb angle, \(\mu\); \(I(\langle B_{z}\rangle,\lambda,\mu)\) is the mean spectral intensity over all pixels in all snapshots for a given \(\langle B_{z}\rangle\) and \(\mu\); and \(I(0,\lambda,\mu)\) is the mean spectral intensity over all pixels in all hydrodynamic boxes at a given \(\mu\).
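Equation (1) translates directly into code; the sketch below (our own, assuming the synthesised intensities are held as arrays over snapshots, pixels, wavelength and \(\mu\) for one value of \(\langle B_{z}\rangle\)) returns the mean contrast spectrum:

```python
import numpy as np

def contrast_spectrum(i_magnetic, i_hydro):
    """Mean contrast C(<Bz>, lambda, mu) following Eq. (1).

    i_magnetic, i_hydro: arrays of shape (n_snapshots, n_pixels, n_lambda, n_mu)
    with the synthesised intensities of the magnetic and field-free runs."""
    i_mag = i_magnetic.mean(axis=(0, 1))   # average over snapshots and pixels
    i_hyd = i_hydro.mean(axis=(0, 1))
    return (i_mag - i_hyd) / i_hyd
```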
Tab. 2 lists the bolometric (wavelength-integrated) contrasts of the magnetic snapshots for all four spectral types. For a given spectral type, we find that contrasts initially increase as the mean magnetic field \(\langle B_{z}\rangle\) increases, before decreasing again as more dark pores form. The magnetic flux density for which the bolometric contrast peaks decreases with the effective temperature of the star. The largest disc-integrated bolometric contrasts are seen for the G2 simulations, where they can reach 5% for \(\langle B_{z}\rangle\) = 500 G, followed by contrasts for K0. Here the bolometric contrasts are highest for \(\langle B_{z}\rangle\) = 300 G where they reach 3.3%. Bolometric contrasts for M-type stars are very low, and can be either positive (e.g., roughly 0.5% for M0, \(\langle B_{z}\rangle\) at 100 G and 300 G), neutral (M0, \(\langle B_{z}\rangle\) = 500 G; M2, \(\langle B_{z}\rangle\) = 100 G and 300 G), or negative (M2, \(\langle B_{z}\rangle\) = 500 G). These lower M-star contrasts mirror low contrasts seen for granulation (see Beeck et al., 2013b) and for spots (see Panja et al., 2020) on late-type stars.
Disc-integrated bolometric contrasts only tell part of the story, as contrasts change significantly with wavelength and limb distance. Figure 8 shows mean contrast spectra for \(\langle B_{z}\rangle\) = 300 G with respect to field-free simulations for all four spectral types. The top panel shows the disc-centre contrasts, while the bottom panel shows contrasts at \(\mu\) = 0.5. To bring out lower-contrast features, we use an expanded scale for wavelengths above 800 nm. Generally, contrasts increase towards the limb, though the rate of the increase varies with spectral type, magnetic activity and wavelength region. For example, for M0, we observe a switch in the sign for the visible and NIR contrasts from predominantly dark at disc centre to bright at a limb distance of 0.5.
Even though the disc-integrated bolometric contrasts are largest for the G2-star calculations, faculae at disc centre are brighter for the K0 star compared to the G2 star in the NUV, visible and NIR. This may partly be due to the maximum of the Planck function shifting to redder wavelengths for K stars. Indeed, in the IR the trend is reversed, as expected if this is the cause. Closer to the limb the G2-star contrast catches up with that of the K0 star, even in NUV and visible. This may have to do with the deeper Wilson depression seen for spots on earlier-type stars (see Panja et al., 2020), meaning that a larger area of the hot walls becomes visible (see also Beeck et al., 2015a).
By and large, contrasts decrease from the UV to the visible and NIR, taking on a minimum near 1.6 \(\mu\)m and increasing slightly towards longer wavelengths, with a distinct peak around 4.5 \(\mu\)m (dominantly caused by rovibrational CO lines). Complexity is introduced by different atmospheric species becoming more prominent in the spectra for different spectral types. For example, contrasts for M0 and M2 stars show strong variations between 400 and 900 nm due to molecular bands, such as the TiO band heads at 517 nm, 619 nm and 709 nm, while other stars vary more smoothly at these wavelengths. Fig. 8 reveals that there is no obvious trend with stellar temperature that could easily be used to scale spectral contrasts.
Figs 9, 10 and 11 show the changes in average contrast for different magnetic flux densities for K0, M0 and M2 spectral types, respectively (results from G2 main-sequence MURaM simulations were presented in Norris et al., 2017). For all spectral types considered here, the contrasts obtained from simulations with higher magnetic flux densities show stronger spectral features and a steeper wavelength dependence. The contrast spectra for weak mean fields, \(\langle B_{z}\rangle\) = 100 G, are positive (i.e., bright) at most wavelengths and disc positions and show less variation with wavelength. However, even for such relatively low magnetic flux densities the contrast spectra are very structured and show numerous spectral features.
The contrasts derived from the K0 main-sequence simulations show similar spectral features to those derived for the Sun from G2 main-sequence simulations (see Norris et al., 2017). Two noticeable differences are the strengthening of the feature around 520 nm (most likely due to (0,0) MgH with band head at 521 nm, combined with the Mg b triplet) and a weaker response of the CO feature near 4.5 \(\mu\)m in the K0-star contrasts. As seen in Fig. 1, dark pore-like features begin to appear in the presence of a strong mean magnetic field. When looking at disc centre (top row in Fig. 9) bright features dominate for \(\langle B_{z}\rangle\) = 300 G. For \(\langle B_{z}\rangle\) = 500 G, bright and dark features largely balance, resulting in very low contrasts in most of the visible and negative contrasts around the opacity minimum at 1.6 \(\mu\)m. Closer to the limb (e.g., at \(\mu\) = 0.5 as shown in the lower row of Fig. 9) contrasts increase and become positive throughout. The contrast increases more strongly for larger flux densities such that the contrasts at \(\langle B_{z}\rangle\) = 500 G almost matches that at 300 G, even though disc-centre intensities are typically more than a factor of two lower.
As shown in Fig. 10, contrasts for M0 stars with \(\langle B_{z}\rangle\) = 100 G are positive at all wavelengths and limb positions. For \(\langle B_{z}\rangle\)= 300 G and \(\langle B_{z}\rangle\)= 500 G, disc-centre contrasts above \(\sim\)350 nm are mostly negative, except in some of the molecular features. The negative contrast is due to the magnetic features becoming large relative to the Wilson depression which is smaller for later-type stars, (see Beeck et al., 2015a). The radiation from the hot walls is then inefficient at heating the cooled magnetic region, making them appear dark compared to the surrounding atmosphere. Heating of the upper layers of the magnetic atmospheres is strong, however, resulting in a steep increase of UV contrasts below \(\sim\) 320 nm.
Away from disc centre (see lower panel in Fig. 10), contrasts are positive for all magnetic flux densities considered, with the \(\langle B_{z}\rangle\) = 500 G simulations showing the steepest increase of contrasts towards the limb. When integrated over the stellar disc, the flux contrasts for \(\langle B_{z}\rangle\) = 100 G and 300 G are positive at all wavelengths. For \(\langle B_{z}\rangle\) = 500 G, disc-integrated contrasts are positive in the UV and most of the visible and mid IR, but they are negative for most wavelengths between approximately 800 nm and 4300 nm. Indeed, as shown in Tab. 2, the bolometric contrast vanishes, i.e., the equivalent \(T_{\rm eff}\) for the \(\langle B_{z}\rangle\) = 500 G M0 simulations is equal to that of the field-free simulations.
At disc centre and for \(\langle B_{z}\rangle\) = 100 G, the M0 and M2 contrasts show many similarities, though the M0 contrasts remain positive throughout, while M2 contrasts are largely negative in the NIR. Many of the prominent spectral features seen in the M0 contrasts also stand out at M2 (see Fig. 11), though the strength of the features varies. For example at \(\langle B_{z}\rangle\) = 300 G and 500 G, the M0 contrasts show stronger features in the visible and relatively flat contrasts in the IR, while M2 contrasts show broad molecular lines at 1.8 and 2.4 \(\mu\)m. Contrary to what is seen for the hotter spectral types, only little brightening is seen at \(\mu\) = 0.5 for the M2 contrasts at \(\langle B_{z}\rangle\) = 300 G and 500 G.
To highlight the effects of the emission properties of the facular features as well as those of the corrugated aspect of the granulation and the geometric shape of the magnetic features, we have also included contrasts derived from 1D radiative equilibrium atmospheres (grey lines in Figs 9 to 11). The 1D radiative equilibrium atmospheres were calculated using ATLAS9 (Kurucz, 1992) for effective temperatures that closely match (within a degree) the disc-integrated effective temperatures of the magnetic and non-magnetic MURaM simulations as provided in Tabs 1 and 2. In all cases, the contrasts derived from the 1D atmospheres show a flatter spectral response with fewer and weaker spectral features compared to the contrasts derived from the 3D simulations. They also show very different centre-to-limb variability. This is particularly noticeable for strong \(\langle B_{z}\rangle\) where the disc-centre contrasts are typically negative, while contrasts closer to the limb become positive, irrespective of the overall bolometric disc-averaged contrast of the simulation box. Contrasts derived from 1D atmospheres do not display such a switch and remain either positive or negative at all limb angles, depending on whether the feature has a higher or lower effective temperature.
## 5 Discussion and Summary
Using 3-dimensional box-in-a-star simulations and spectral synthesis, we have calculated spectral intensities for wavelengths between approximately 250 nm and 160 000 nm for main-sequence spectral types of G2V (see paper i), K0, M0 and M2V. We have observed that, for a given injected magnetic field and spectral type (i.e., \(T_{\rm eff}\) and \(\log g\)), features of different sizes, apparent structure and spectral brightness will emerge. The resulting intensity distributions are typically double peaked - reflecting the stellar granulation - with extended tails due to small-scale bright and dark magnetic features.
As these small-scale features are unresolved on stars other than the Sun, contrasts were calculated for mean intensities over a simulation box. These contrasts have been found to be complicated functions of effective temperature, magnetic field, and wavelength, leading to the conclusion that accurate contrast values cannot be obtained using simple effective temperature scaling relations and solar values. Contrasts derived from the magneto-convection simulations are very different from contrasts obtained from 1D radiative-equilibrium atmospheres for effective temperatures that correspond to the overall bolometric contrast of the simulation boxes. This is not surprising given that they are obtained from a collection of atmospheres that represent a large variety of different magnetic and non-magnetic features. In particular, the "magnetic contrasts" show steeper increases towards shorter (NUV) wavelengths and much more pronounced spectral features than the equivalent radiative-equilibrium contrasts. In addition, many of the NIR contrasts are negative (dark) at disc centre, but become positive (bright) closer to the limb. Such behaviour is only observed in models that account for the geometric effects of the granulation (see also Witzke et al., 2022).
The calculations presented here will help improve stellar reconstructions for variability studies and stellar noise parameterizations in planetary transits. The spectral data can be used to represent small-scale magnetic features, as was done for the Sun by Yeo et al. (2017). Different spatial scales, from one pixel to the full simulation box, can be selected and used to interpret observations, or as parameters in a larger-scale stellar simulation to obtain more accurate spectral outputs (including for revised centre-to-limb variations) for typical active-region surface distributions (see, e.g., Nemec et al., 2022).
Detailed centre-to-limb calculations from field-free stellar granulation simulations are now available for a large range of spectral types (e.g., Magic et al., 2015) and can be used to model exoplanetary transits (Morello et al., 2017). Calculations such as the ones presented in this paper will allow accounting for the presence of active
Figure 8: Average contrast spectra relative to field-free simulations for all MURaM spectral types for snapshots with \(\langle B_{\rm z}\rangle=300\) G. Contrasts are shown at disc centre (top), and \(\mu=0.5\) (bottom) for wavelengths between 300 nm and 10 000 nm using logarithmic \(x\) axes. The \(y\)-axis scale changes (by a factor of 5) at 800 nm to bring out the spectral shape of the contrast in the visible and IR.
Figure 10: Average contrast spectra as in Fig. 9 but for spectral type M0. The \(y\)-axis scales differ at disc centre and \(\mu=0.5\).
Figure 9: Average contrast spectra relative to field-free simulations for K0 spectral types with \(\langle B_{z}\rangle=\{100,300,500\}\) G (see also Fig. 8). Grey lines show contrasts for 1D radiative equilibrium atmospheres with approximately the same bolometric contrasts as the \(\langle B_{z}\rangle=100\) G magnetic simulations (see Table 2). The shading indicates the standard deviation of the contrast derived from the differences between the mean spectra calculated from individual snapshots of the same magnetisation; they thus reflect the temporal variations in the MURaM simulations.
regions and their effects on the centre-to-limb variation of the stellar intensity.
In addition, they can also be used to estimate the stellar contamination of exoplanet transits. A small number of transits show evidence of exoplanets passing in front of bright features (e.g., Kirk et al., 2016; Espinoza et al., 2019). These allow tentative estimates of the contrasts and filling factors of the occulted features, though measurements so far mainly cover the visible wavelength range where contrasts show a relatively flat spectral response. As a result there is a strong degeneracy between feature contrast and filling factor.
Assuming the transit-profile deformations to be due to the presence of bright stellar surface features, Kirk et al. (2016) found contrasts of the order of 12% and 9% in the \(u^{\prime}\) and \(g^{\prime}\) bands for WASP-52, a K2 host star. In their model, the bright feature's centre is at a limb distance of \(\mu=0.78\). For our closest comparison spectral type, K0, 300 G contrasts at \(\mu=0.8\) are 11% and 5% for the \(u^{\prime}\) and \(g^{\prime}\) filters, respectively\({}^{3}\). While the \(g^{\prime}\) values in particular are lower, these contrasts are in reasonably good agreement with the contrasts presented in Kirk et al. (2016).
Footnote 3: Contrasts have been estimated from the filter responses only; i.e., camera and atmospheric effects have been neglected.
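Band contrasts of this kind can be estimated by weighting the mean synthetic intensities with a filter transmission curve before forming the ratio; the sketch below is our own (the filter curve arrays are assumed inputs), and, as noted in footnote 3, camera and atmospheric effects are neglected:

```python
import numpy as np

def band_contrast(wavelength, i_magnetic, i_quiet, filter_wl, filter_response):
    """Filter-weighted facular contrast, e.g. for the u' or g' band, at one mu."""
    response = np.interp(wavelength, filter_wl, filter_response,
                         left=0.0, right=0.0)
    f_magnetic = np.trapz(response * i_magnetic, wavelength)
    f_quiet = np.trapz(response * i_quiet, wavelength)
    return f_magnetic / f_quiet - 1.0
```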
From transit observations of WASP-19b, where the host star is a mid G-type star with \(T_{\rm eff}\)\(\sim\)5460 K, Espinoza et al. (2019) determined noticeably larger contrasts of the order of 10% (in the red and NIR) up to 20% near 450 nm for a large bright feature close to \(\mu=0.8\). Such large contrasts are not seen in our K0 or G2 simulations except at very large viewing angles where \(\mu\leq 0.5\) (\(\theta\geq 60^{\circ}\)), see, e.g., the lower panel in Fig. 8). We note, however, that the contrast spectra shown here are derived from means over a full simulation snapshot. Individual features show much stronger brightness enhancements as demonstrated by the histograms and images such as, e.g., Figs 1 and 4 for K0.
In addition to direct occultations and thus deformations of the transit lightcurve, the presence of unocculted active regions can bias measurements of the lightcurve depth (and hence the inferred extent of exoplanet atmospheres) through the transit light-source effect, TLSE (Rackham et al., 2018, 2019). Most current exoplanet retrieval algorithms account for the effect of active regions, though they tend to model faculae using "hot-star" spectra. Witzke et al. (2022) showed that 1D radiative-equilibrium stellar atmosphere models fail to capture the spectral slope and limb dependence of the facular contrasts for G2 main-sequence stars. The calculations here confirm these findings for K0, M0 and M2 main-sequence stars: 1D radiative equilibrium models (shown as grey lines on Figs 9 to 11) show a much weaker UV excess slope, fewer spectral features and different centre-to-limb behaviour than facular contrasts derived from magneto-convection models.
Recent examples of exoplanet atmosphere retrievals where radiation from unocculted bright features affects the transit depth include GJ 1214b (Rackham et al., 2017), WASP-79b (Rathcke et al., 2021), and WASP-103b (Kirk et al., 2021). Assuming hot-star spectra, the derived excess temperatures were of the order of 350 K for GJ 1214 (M4.5V, \(T_{\rm eff}\)\(\approx\) 3300 K), 450 K to 500 K for WASP-79 (F5V, \(T_{\rm eff}\)\(\approx\) 6600 K) and 350 K for WASP-103 (F8V, \(T_{\rm eff}\)\(\approx\) 6100 K). All of the host stars lie outside the effective temperature range considered here so that direct comparisons are not possible.
Given the trends observed for early M stars, we would expect faculae to manifest quite differently on the M4.5 star GJ 1214 compared to what is seen on the Sun. For M2 stars, we derive very low or even negative contrasts at 300 G (i.e. features darker than the quiet stellar surface), except close to the limb and in the UV. It is thus difficult to see how unocculted active regions akin to the ones modelled here
Figure 11: Average contrast spectra as in Fig. 9 but for spectral type M2.
can mimic a spectrally flat brightening in excess of 300 K. However, as the TLSE is due to a difference in emission between the occulted and unocculted region of the star, a lower transit depth would also ensue if the planet were to pass in front of an activity belt with an even coverage of dark features that are small enough so as not to show up as distinct "bumps" in the transit lightcurve.
The G2 simulations can act as a rough indicator of expected contrasts for the late-F stars WASP 79 and WASP 103, though simulations for hotter stars are expected to yield larger temperature differences than the cooler G2 simulations (see Beeck et al., 2015). We derive an effective temperature difference of 55 K and 71 K between the mean disc-integrated bolometric fluxes for 300 G and 500 G simulations relative to non-magnetic simulations. While these bolometric temperatures are much lower than inferred from transit spectroscopy, brightness temperatures away from disc centre can be much higher, especially in the UV or in specific bands, such as in the G-band (\(\sim\)430 nm) where the mean brightness temperature at a limb distance of \(\mu=0.5\) is approximately 180 K higher for the 500-G simulations compared to the non-magnetic simulations. For the four wavelength regions shown in Figs 1 to 7, mean brightness temperature differences at \(\mu=0.5\) are approximately 260 K (388 nm), 120 K (602 nm), 80 K (1.61 \(\mu\)m) and 112 K (8.04 \(\mu\)m). As part of the active-region temperature fit in exoplanet retrievals is driven by the spectral slope of the contrast, it is unlikely that the mean temperature offsets derived from hot-star spectra can be compared directly to brightness temperatures derived from magnetoconvection simulations.
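The brightness temperatures quoted here follow from inverting the Planck function at the wavelength in question; a minimal version (our own, in SI units) is:

```python
import numpy as np

H_PLANCK = 6.62607015e-34   # Planck constant [J s]
C_LIGHT = 2.99792458e8      # speed of light [m s^-1]
K_BOLTZ = 1.380649e-23      # Boltzmann constant [J K^-1]

def brightness_temperature(i_lambda, wavelength):
    """Brightness temperature for a specific intensity i_lambda [W m^-3 sr^-1]
    at a wavelength given in metres."""
    x = 2.0 * H_PLANCK * C_LIGHT**2 / (wavelength**5 * i_lambda)
    return (H_PLANCK * C_LIGHT / (wavelength * K_BOLTZ)) / np.log1p(x)
```

Differences such as those quoted above then follow from applying this to the magnetic and field-free mean intensities at the same wavelength and limb distance.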
Indeed, our calculations confirm that it is difficult to unambiguously detect bright features in the visible or NIR with current instrumentation; observations in the UV offer more spectral leverage due to the larger facular contrasts. At high signal-to-noise ratios and spectral resolution it should become possible to trace facular signatures in some of the strong spectral features (e.g., TiO bands for M-type and the CN band for K-type stars). As these spectral domains are also strongly spot affected, detailed modelling of spot contrasts will become necessary, in particular for the later spectral types where observations (see, e.g., Berdyugina, 2005) and models (Panja et al., 2020) infer smaller temperature differences between starspots and their surrounding photospheres.
The spectral synthesis calculations presented here rely on the assumption of LTE. While LTE calculations are sufficient for visible and IR wavelengths, they provide a poor approximation of the emergent intensity in the UV. For instance, Shapiro et al. (2010) showed that quiet-Sun LTE continuum fluxes are typically 40% too low between 210 nm and 250 nm. Non-LTE effects on the emergent intensities will be explored in a future paper (Tagirov et al., 2023).
All results in this paper assume solar metallicity. Metallicity plays an important role in the opacities in a stellar atmosphere, as it affects both line and continuum opacity sources (particularly H\({}^{-}\)). Spectral lines have been found to be the main source of variability on time-scales of a day and longer on the Sun (Shapiro et al., 2015). As such, understanding the role of metallicity is important for accurate modelling of limb darkening, facular contrasts and stellar variability. Higher metallicity has been observed to be associated with higher variability in Karoff et al. (2018) and attributed to the larger facular contrasts resulting from the metallicity increase. 1D atmospheric models have also been used to demonstrate this increase of facular contrast and variability with increased metallicity (Witzke et al., 2018). To properly account for these changes with metallicity in stellar and planetary studies, these 3D calculations should be reproduced for a variety of metallicities.
Finally, this paper does not fully explore the changes in centre-to-limb variations with spectral type, although these data would allow it. Nor does it select individual magnetic features to explore how their typical structures and emergent intensities vary. These issues will be explored in future papers.
## Acknowledgments
The authors would like to thank the referee for their helpful comments and swift response. CMN acknowledges support through studentship funding of the UK Science and Technology Facilities Council (STFC). This work was also supported by the German Federal Ministry of Education and Research under project 01LG1209A. YCU would like to thank James Kirk for useful discussions, and acknowledges financial support through STFC Grants ST/S000372/1 and ST/W000989/1. We would also like to thank the International Space Science Institute, Bern, for their support of science teams 335 and 373 and the resulting helpful discussions.
For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising.
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author.
|
2307.14753 | Real-space topological localizer index to fully characterize the
dislocation skin effect | The dislocation skin effect exhibits the capacity of topological defects to
trap an extensive number of modes in two-dimensional non-Hermitian systems.
Similar to the corresponding skin effects caused by system boundaries, this
phenomenon also originates from nontrivial topology. However, finding the
relationship between the dislocation skin effect and nonzero topological
invariants, especially in disordered systems, can be obscure and challenging.
Here, we introduce a real-space topological invariant based on the spectral
localizer to characterize the skin effect on two-dimensional lattices. We
demonstrate that this invariant consistently predicts the occurrence and
location of both boundary and dislocation skin effects, offering a unified
approach applicable to both ordered and disordered systems. Our work
demonstrates a general approach that can be utilized to diagnose the
topological nature of various types of skin effects, particularly in the
absence of translational symmetry when momentum-space descriptions are
inapplicable. | Nisarg Chadha, Ali G. Moghaddam, Jeroen van den Brink, Cosma Fulga | 2023-07-27T10:24:14Z | http://arxiv.org/abs/2307.14753v1 | # Real-space topological localizer index to fully characterize the dislocation skin effect
###### Abstract
The dislocation skin effect exhibits the capacity of topological defects to trap an extensive number of modes in two-dimensional non-Hermitian systems. Similar to the corresponding skin effects caused by system boundaries, this phenomenon also originates from nontrivial topology. However, finding the relationship between the dislocation skin effect and nonzero topological invariants, especially in disordered systems, can be obscure and challenging. Here, we introduce a real-space topological invariant based on the _spectral localizer_ to characterize the skin effect on two-dimensional lattices. We demonstrate that this invariant consistently predicts the occurrence and location of both boundary and dislocation skin effects, offering a unified approach applicable to both ordered and disordered systems. Our work demonstrates a general approach that can be utilized to diagnose the topological nature of various types of skin effects, particularly in the absence of translational symmetry when momentum-space descriptions are inapplicable.
## I Introduction
The global conservation of energy ensures that the dynamics of the system together with its environment is Hermitian. However, in some cases it is more convenient to treat the system separately, while introducing the external coupling effectively as a non-Hermitian interaction in the system [1]. Non-Hermitian descriptions are thus commonly used to study optical systems with gain and loss [2], electronic circuits with external contacts [3], atomic systems coupled to probes [4], or acoustic systems [5]. In these cases, energies can be complex, and eigenstates are not guaranteed to form an orthonormal basis [6], leading to phenomena that have no counterpart in Hermitian systems.
One such phenomenon is the non-Hermitian skin effect (NHSE) [7; 8; 9; 10; 11; 12], which denotes the localisation of an extensive number of eigenstates at the boundary of the system [13]. The NHSE is a consequence of nontrivial bulk topology: With periodic boundary conditions, the nonzero winding number of the bulk spectrum around a point in the complex plane marks the presence of a nontrivial point gap within which boundary states accumulate [8; 14].
Recently it has been pointed out that the NHSE is not necessarily a boundary property, but that it may also occur at topological defects such as dislocations [15; 16; 17] and disclinations [18]. In this regard, the topology of non-Hermitian systems parallels that of Hermitian ones, allowing the application of conventional bulk-defect correspondence [19; 20; 21] to determine the combinations of system symmetries and defect types that lead to topologically protected gapless modes. In practice, however, there are several factors that complicate the task of computing the topological invariants responsible for the defect NHSE. For example, previous works have shown examples of systems where dislocations host a NHSE for which the conventional bulk-defect correspondence does not apply [15; 16]. Moreover, the topological invariants are usually computed in an effective Brillouin zone composed of the original momentum space and supplemented by additional degrees of freedom which parameterize the surface surrounding the defect [19; 21]. It is not a priori clear how this can be done when momentum is not a good quantum number, as is the case in disordered, fractal, quasicrystalline, or even amorphous models [22].
In this work, we examine the defect-induced NHSE from a different perspective. We turn to a real-space topological invariant called the _localizer index_. The latter is one of a family of versatile topological invariants that were initially introduced to study Hermitian topological insulators [23; 24; 25; 26; 27; 28; 29; 30; 31], but have since been extended to study a variety of phases. These include metals and semimetals [32; 33; 34; 35; 36], higher-order topological phases [33], and more recently the 1D NHSE [37], Floquet phases [38], as well as line-gapped non-Hermitian phases [39]. We show that one particular localizer index, originally meant to characterize one-dimensional (1D) Hermitian systems [23], can be adapted to study the topological properties of the NHSE in two-dimensional (2D), point-gapped non-Hermitian systems. One of its advantages is that, given a concrete system, it allows for the direct detection of the topology associated to both boundaries as well as dislocations. This approach sidesteps the need for constructing an effective Brillouin zone, and is thus ideally suited for the study of disordered systems.
We begin in Sec. II by introducing a simple model of a topologically-nontrivial 2D non-Hermitian system, constructed as a stack of parallel Hatano-Nelson chains [15; 16]. We describe the process of introducing different types of dislocations into the system and show that they host a NHSE, similar to the boundaries of the model. Sec. III is devoted to understanding the observed NHSEs from a topological point of view, based on a mapping that relates the topology of Hermitian and non-Hermitian systems. After a brief review of previous approaches, we introduce the spectral localizer, describe its application to 1D Hermitian models, and expand its usage to 2D non-Hermitian models. We highlight the advantages of a direct, real-space formulation of the topological invariant in Sec. IV, by showing that it correctly predicts the robustness of the NHSE against onsite potential disorder. Finally, we conclude in Sec. V, suggesting that a variety of different types of skin effects, in systems of varying dimensionality and symmetry class, may be amenable to a localizer-based topological description.
## II Model
The Hatano-Nelson (HN) model is one of the simplest systems exhibiting the NHSE [40; 41]. It consists of a 1D chain with one orbital per unit cell and nearest neighbour non-reciprocal hoppings given by \(t_{x}(1\pm\gamma)\). Under open boundaries, the Hatano-Nelson chain shows the NHSE with an exponential accumulation of all eigenstates towards the boundary. The direction of accumulation is given by the largest hopping.
Following Ref. [16] we create a periodic 2D system by stacking HN chains with an inter-chain coupling strength \(t_{y}\). This gives the weak Hatano-Nelson model, with the Hamiltonian
\[H(\mathbf{k})=2t_{x}\cos k_{x}+2t_{y}\cos k_{y}-2i\gamma t_{x}\sin k_{x}, \tag{1}\]
where \(\mathbf{k}=(k_{x},k_{y})\) is the momentum vector. In the following we will set \(t_{x}=1\) as the energy scale of the problem, expressing all other energy scales relative to it. All numerical results are obtained using the Kwant library [42], and our own code is included in the Supplemental Material.
For \(|t_{y}|<1\), the complex spectrum has a point-gap [shown in Fig. 1(a)]. It can be considered as a collection of periodic HN chains with a momentum-dependent chemical potential \(2t_{y}\cos k_{y}\)[16], such that the spectrum consists of a set of ellipses displaced relative to each other along the real energy axis. For a finite-sized system with open boundary conditions (OBC), the non-reciprocity along the \(x\)-direction leads to the formation of a NHSE. To describe the latter, we turn to the real-space probability density summed over all states,
\[\rho(\mathbf{r})=\sum_{n}|\langle\mathbf{r}|\psi_{n}\rangle|^{2}, \tag{2}\]
where \(\mathbf{r}\) is the position of a lattice site, \(|\mathbf{r}\rangle\) is the position ket, \(|\psi_{n}\rangle\) is the \(n^{\text{th}}\) right eigenstate of the Hamiltonian, and the sum runs over all states. The summed probability density (SPD) \(\rho\), plotted in Fig. 1(b), shows an exponential accumulation towards the boundary. In effect, each open HN chain in the stack produces its own non-Hermitian skin effect, with the same, \(y\)-independent localization length.
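As an illustration of how Eq. (2) is evaluated in practice, the following minimal NumPy sketch builds a real-space Hamiltonian of the weak Hatano-Nelson model with OBC and accumulates the SPD. The paper's own results were obtained with Kwant; the lattice size, parameter values, and site-indexing convention below are illustrative assumptions rather than the authors' code.

```python
# Minimal NumPy sketch of Eq. (2) for the weak Hatano-Nelson model with OBC.
# Lattice size, parameters, and the site-indexing convention are illustrative.
import numpy as np

def weak_hatano_nelson(Lx=25, Ly=25, tx=1.0, ty=0.4, gamma=0.3):
    """Real-space Hamiltonian on an Lx x Ly lattice, with site index i = x + Lx*y."""
    N = Lx * Ly
    H = np.zeros((N, N), dtype=complex)
    idx = lambda x, y: x + Lx * y
    for y in range(Ly):
        for x in range(Lx):
            if x + 1 < Lx:  # non-reciprocal hopping along the chains (x direction)
                H[idx(x + 1, y), idx(x, y)] = tx * (1 + gamma)
                H[idx(x, y), idx(x + 1, y)] = tx * (1 - gamma)
            if y + 1 < Ly:  # reciprocal inter-chain coupling (y direction)
                H[idx(x, y + 1), idx(x, y)] = ty
                H[idx(x, y), idx(x, y + 1)] = ty
    return H

H = weak_hatano_nelson()
_, psi = np.linalg.eig(H)                    # right eigenvectors as (unit-norm) columns
rho = np.sum(np.abs(psi) ** 2, axis=1)       # Eq. (2): one value per lattice site
rho = rho.reshape(25, 25)                    # rows: y, columns: x
print(rho[0, :3], rho[0, -3:])               # compare the two x-edges of the system
```

Comparing the two \(x\)-edges of the resulting \(\rho\) shows the exponential accumulation on the side favoured by the larger hopping.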
We introduce dislocations in the lattice by removing one or more rows of sites at fixed \(y\)-coordinates and gluing the two resulting edges together using the same \(t_{y}\) hopping as in the rest of the bulk. An example of a system formed in this way is shown in Fig. 2(a), and contains two dislocations. Each is characterized by a Burgers vector, the additional translation required to form a closed loop around the dislocation core, compared to a loop that does not encircle the defect [43; 21]. In units of the lattice constant, the left-most dislocation has a Burgers vector \(\mathbf{B}=(B_{x},B_{y})=(0,1)\), whereas \(\mathbf{B}=(0,-1)\) for the rightmost dislocation.
We use periodic boundary conditions (PBC) in order to suppress the boundary NHSE, and examine the effect of the dislocations on the SPD in Fig. 2(b). As a result, we reproduce the findings of Ref. [16]: Depending on the sign of \(B_{y}\), there is either an accumulation or a depletion of the density relative to that far from the dislocations. These phenomena have been termed the skin and anti-skin effect.
Going beyond the previous results of Ref. [16], we also turn to a system with double the Burgers vectors. This is achieved by removing two rows of sites in the cut and glue procedure, yielding \(B_{y}=\pm 2\). The SPD, shown in Fig. 2(c), shows the peak and dip at the point defects in the same manner as for \(B_{y}=\pm 1\), but with a larger amplitude and width. Thus, the NHSE is still present regardless of the parity of \(B_{y}\). Finally, we note that with OBC the boundary NHSE hinders the visibility of the dislocation NHSE. We find that the peak and dip appearing in Fig. 2(b, c) are no longer visible in this case, except in the regime of weakly-coupled chains, \(|t_{y}|\ll 1\).
Figure 1: Panel (a) shows the spectrum of the momentum-space Hamiltonian Eq. (1) in the complex energy plane. The spectrum consists of a set of ellipses displaced relative to each other along the real axis, and shows a point gap around the origin, \(E=0\). Panel (b) shows the SPD of Eq. (2) for a finite-sized system consisting of 25\(\times\)25 sites with OBC. For ease of visualization, \(\rho\) is also shown as a varying color scale. For both panels, we use \(t_{y}=0.4\) and \(\gamma=0.3\) in units of \(t_{x}\).
## III Topology of the dislocation skin effect
The topological properties of non-Hermitian systems with a point gap can be studied by means of a _Hamiltonian-doubling procedure_, which maps them to Hermitian systems with the same topological classifications [8]. Specifically, for a non-Hermitian Hamiltonian \(H\) we construct a Hermitian
\[\widetilde{H}=\begin{pmatrix}0&H\\ H^{\dagger}&0\end{pmatrix}. \tag{3}\]
The latter obeys chiral symmetry, \(\Gamma\widetilde{H}=-\widetilde{H}\Gamma\), with \(\Gamma=\text{diag}(\mathbb{I},-\mathbb{I})\), where \(\mathbb{I}\) is an identity matrix of the same size as \(H\).
The NHSE present in the non-Hermitian \(H\) maps to the topologically-protected zero energy modes of the Hermitian \(\widetilde{H}\), and the two systems have equal-valued topological invariants [8]. For the Hatano-Nelson Hamiltonian \(H\), the doubled Hamiltonian \(\widetilde{H}\) is an SSH chain [44; 45] whose 1D winding number is the same as the point-gap winding number of \(H\). Therefore, the topological invariant describing the SPD accumulation in the Hatano-Nelson model is the same as the bulk winding number for the doubled system.
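A short sketch of the doubling procedure of Eq. (3) is given below; `H` stands for any square non-Hermitian Hamiltonian matrix (for instance the real-space matrix built above), and the chiral-symmetry check is included only as a sanity test.

```python
import numpy as np

def doubled_hamiltonian(H):
    """Hermitian doubled Hamiltonian of Eq. (3) and the chiral operator Gamma."""
    N = H.shape[0]
    zero = np.zeros_like(H)
    H_tilde = np.block([[zero, H], [H.conj().T, zero]])
    Gamma = np.diag(np.concatenate([np.ones(N), -np.ones(N)]))
    assert np.allclose(Gamma @ H_tilde @ Gamma, -H_tilde)  # chiral symmetry check
    return H_tilde, Gamma
```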
For our case, the doubling procedure Eq. (3) maps the stack of Hatano-Nelson chains into a stack of SSH chains. At each dislocation, we observe one zero-energy state in the case \(|B_{y}|=1\), whereas two states are present at each defect when \(|B_{y}|=2\). We note that when the doubled system has OBC, the zero modes at the bulk defects coexist with gapless states at the boundaries of the system, as shown in Fig. 3. Thus, we expect that the topological invariants characterizing the dislocations should be identifiable under OBC, even though the boundary NHSE, when present, obscures the presence of a defect NHSE.
### Previous approaches to characterize the dislocation skin effect
According to the conventional bulk-defect correspondence established for Hermitian systems [19], the topological invariant characterizing a dislocation in 2D is computed from a three-dimensional (3D) effective Hamiltonian that surrounds the defect, \(\widetilde{H}(k_{x},k_{y},s)\), where \(k_{x,y}\) are the two original bulk momenta, and \(s\in[0,1]\) is a periodic variable describing a circle around the defect. For a non-Hermitian \(H\) that does not have any additional symmetries (class A in the Altland-Zirnbauer classification [46; 47]), \(\widetilde{H}\) belongs to class AIII, and the dislocation invariant is expected to take the form of a 3D winding
Figure 2: Panel (a) shows a sketch of the weak Hatano-Nelson model in the presence of two dislocations. The dashed lines show two closed contours, one of which encircles the dislocation and one which does not. The path that encircles the defect ends up with a net displacement equal to the Burgers vector, here \(\mathbf{B}=(B_{x},B_{y})=(0,1)\), shown by the thick arrow. Panels (b, c) show the skin and anti-skin effect at the two dislocations. There is an accumulation or a depletion of \(\rho\) compared to its bulk value depending on whether \(B_{y}\) is positive or negative. In each case, the system size is \(40\times 20\) sites, the distance between the two dislocations is \(20\) sites, \(t_{y}=0.4\), and \(\gamma=0.4\). Panel (b) shows the case of unit Burgers vectors, \(\mathbf{B}=(0,\pm 1)\), whereas the dislocations in panel (c) have \(\mathbf{B}=(0,\pm 2)\).
Figure 3: The local density of zero modes, \(\rho_{0}(\mathbf{r})\), for the doubled Hamiltonian Eq. (3) is shown in panel (a) for PBC and in panel (c) for OBC. \(\rho_{0}(\mathbf{r})\) is the summation of the eigenvector probability carried out over only the zero energy modes, defined using a tolerance of \(10^{-4}\) in units of \(t_{x}\). Correspondingly, the spectra of the two systems are shown in panels (b) and (d). In all plots, we use \(t_{y}=\gamma=0.4\), a system size of \(60\times 30\) sites, and introduce dislocations with \(B_{y}=\pm 1\) that are \(10\) sites apart. In the case of PBC, there are only two zero-energy modes, each one localized at a dislocation core. With OBC, these defect modes coexist with boundary states formed by the topological end-modes of each SSH chain in the stack.
number:
\[W_{3}=\int_{\text{BZ}\times\mathcal{S}}\frac{d^{2}\mathbf{k}ds}{12\pi}\epsilon_{ \mu\nu\rho}\text{Tr}[(q^{-1}\partial_{\mu}q)(q^{-1}\partial_{\nu}q)(q^{-1} \partial_{\rho}q)], \tag{4}\]
where \(\epsilon_{\mu\nu\rho}\) is the anti-symmetric Levi-Civita tensor, and \(q(\mathbf{k},s)\) is the off-diagonal block of the Hermitian Hamiltonian \(\widetilde{H}(k_{x},k_{y},s)\) in a basis where the chiral symmetry operator is of the form \(\text{diag}(\mathbb{I},-\mathbb{I})\). Using the doubling construction Eq. (3) means that for the stack of Hatano-Nelson chains \(q=H\). As pointed out in Ref. [16], however, since the model has a single band, \(q(\mathbf{k},s)\) is a scalar, so the Levi-Civita summation gives a vanishing contribution to the invariant. Therefore, \(W_{3}\) fails to capture the topology of the one-band model, regardless of which type of dislocation is considered.
As an alternative to the 3D winding number, Ref. [15] proposed an invariant given by the 1D winding number of the bulk Hamiltonian along specific lines of the Brillouin zone for which \(\mathbf{B}\cdot\mathbf{k}=\pi\,\text{mod}\,2\pi\). Thus, for \(B_{y}=\pm 1\), the index is the \(k_{x}\) winding number at \(k_{y}=\pi\), while for \(B_{y}=\pm 2\) it is the sum of \(k_{x}\) winding numbers at \(k_{y}=\pm\pi/2\). Ref. [16], on the other hand, proposed an invariant of the form \(\vartheta=\nu_{x}B_{y}-\nu_{y}B_{x}\), where \(\nu_{x}\) and \(\nu_{y}\) are weak topological invariants that predict the appearance of the boundary NHSE. These indices are defined as the 1D winding numbers of the bulk spectrum along a particular momentum direction, averaged over the perpendicular momentum direction. Thus,
\[\nu_{j}=\int\frac{d^{2}\mathbf{k}}{i(2\pi)^{2}}\,H(\mathbf{k})^{-1}\,\partial_ {k_{j}}H(\mathbf{k})\,\,, \tag{5}\]
with \(j=x,y\). Here \(\vartheta\) can take arbitrary integer values, consistent with the observation of a NHSE both for \(B_{y}=\pm 1\) and \(\pm 2\). It can be derived starting from a Chern-Simons invariant defined in the effective Brillouin zone \((k_{x},k_{y},s)\)[16], but the latter only captures the parity of \(\vartheta\) and does not yield the expected \(\mathbb{Z}\) classification. Moreover, the fact that both Refs. [15; 16] define their invariants in momentum space hinders their use in disordered systems.
### Localizer index
We turn to a real space description of the dislocation NHSE. To this end, we consider a Hermitian matrix called the _spectral localizer_[48; 23], which is constructed from a real-space, 1D Hermitian Hamiltonian in class AIII. It takes the form
\[L=(\widetilde{X}+i\widetilde{H})\Gamma, \tag{6}\]
where \(\widetilde{H}\) is the Hamiltonian matrix, \(\Gamma\) is the chiral symmetry matrix, and \(\widetilde{X}=\text{diag}(x_{1}-x_{0},x_{2}-x_{0},x_{3}-x_{0},\ldots)\) contains the positions of the lattice sites relative to a given origin, \(x_{0}\). From \(L\), it is possible to define a \(\mathbb{Z}\)-valued topological invariant called localizer index [23]:
\[\nu_{L}=\frac{1}{2}\text{Sig}\,L, \tag{7}\]
where Sig refers to the matrix signature - the difference in the number of positive and negative eigenvalues.
For an SSH chain with OBC, \(\nu_{L}\) predicts the number of zero-energy modes at each end whenever the origin of space, \(x_{0}\), is positioned deep in the bulk of the chain. It gives a trivial answer when the origin is outside of the chain, e.g. when all lattice site positions \(x_{j}>x_{0}\). In effect, the localizer index is equal to the net number of zero-energy modes (counted with their chirality), at positions away from the origin \(x_{0}\).
In our case, the Hamiltonian doubling procedure Eq. (3) yields an array of SSH chains oriented along the \(x\)-direction and stacked along \(y\). As shown in Fig. 3, with OBC \(\widetilde{H}\) hosts zero modes both at its boundaries, in correspondence to the boundary NHSE, as well as at dislocations, in correspondence to the dislocation NHSE. Thus, we expect that the localizer index will give a unified prediction for both boundary as well as defect states by considering \(\nu_{L}\) as a function of \(x_{0}\).
We have computed the localizer index for the doubled Hamiltonian Eq. (3), taking \(\widetilde{X}\) to represent the lattice positions in the horizontal \(x\) direction and independent of the sites' position in the vertical \(y\) direction. As shown in Fig. 4, \(\nu_{L}\) correctly captures the number of zero modes as \(x_{0}\) is varied across the system, both in the case \(B_{y}=\pm 1\) as well as for \(B_{y}=\pm 2\). When \(x_{0}\) is positioned outside the lattice, \(\nu_{L}=0\) yields a trivial answer. As \(x_{0}\) enters the bulk of the system, \(\nu_{L}\) changes by an amount equal to the number of boundary zero-modes, i.e. the number of SSH chains. The same invariant, however, also correctly identifies the number of zero-modes bound to the dislocation, showing a jump by \(\pm 1\) in Fig. 4(a) or by \(\pm 2\) in Fig. 4(b) as \(x_{0}\) is swept across the dislocation core. Thus, the localizer index also allows to determine the position of the topologically protected states: A difference of \(\nu_{L}\) between two different values of \(x_{0}\) is a topological invariant counting the number of protected zero-modes in a particular region of space.
Due to the mapping between the NHSE of the weak Hatano-Nelson model \(H\) and the zero modes of the stack
Figure 4: Panel (a) shows the topological index sweeping over \(x_{0}\) for a \(40\times 20\) lattice with \(B_{y}=\pm 1\), \(\gamma=0.4,t_{y}=0.4\). The index shows a jump equal to 1 when the origin is between the point defects. Panel (b) shows the variation of \(\nu_{L}\) for the same system with \(B_{y}=\pm 2\). \(\nu_{L}\) jumps by 2 between the point-defects, showing the complete topological classification yielded by the localizer index.
of SSH chain \(\widetilde{H}\), the localizer index can thus predict the appearance and position of the NHSE both at boundaries as well as at dislocations.
We now go one step further and re-express the localizer index in such a way that it depends on the non-Hermitian Hamiltonian directly, thus avoiding the need for a doubling procedure. As detailed in Appendix A, using the block LDU decomposition together with Sylvester's law of inertia we obtain
\[\nu_{L}=\frac{1}{2}\text{Sig}(X)-\frac{1}{2}\text{Sig}\left(X+H^{\dagger}X^{-1}H\right), \tag{8}\]
where \(X\) is the position operator corresponding to the non-Hermitian \(H\). In order to prevent \(X\) from becoming singular, this formula requires that the origin of space \(x_{0}\) be chosen such that it does not exactly coincide with one of the lattice site positions, which can be achieved for any finite discrete system. Importantly, it is faster to evaluate numerically: For a matrix of size \(n\times n\), computing the signature has \(\mathcal{O}(n^{3})\) complexity, which means that Eq. (8) is eight times faster than Eq. (7), while producing, as we have checked, an answer identical to that shown in Fig. 6.
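The following sketch evaluates \(\nu_{L}\) directly from the non-Hermitian Hamiltonian via Eq. (8); `x_sites` collects the \(x\)-coordinates of the lattice sites, and \(x_{0}\) is assumed not to coincide with any of them, as required above. This is a schematic implementation under those assumptions, not the authors' code.

```python
import numpy as np

def signature(M):
    """Matrix signature: number of positive minus number of negative eigenvalues."""
    w = np.linalg.eigvalsh(M)
    return int(np.sum(w > 0) - np.sum(w < 0))

def localizer_index(H, x_sites, x0):
    """nu_L of Eq. (8); x_sites are the site x-coordinates and x0 must avoid them."""
    d = np.asarray(x_sites, dtype=float) - x0
    X = np.diag(d)
    M = X + H.conj().T @ np.diag(1.0 / d) @ H   # X + H^dagger X^{-1} H (Hermitian)
    return 0.5 * signature(X) - 0.5 * signature(M)
```

Sweeping \(x_{0}\) across the lattice and recording the jumps of \(\nu_{L}\) yields curves of the type shown in Fig. 4.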
We note that the localizer index's ability to describe the topological properties of both boundaries and dislocations in a unified manner goes beyond the conventional bulk-boundary and bulk-defect correspondence. According to the latter, different invariants are generally needed for the two, as mentioned at the beginning of this section.
## IV Robustness against disorder
The NHSE occurring in the weak Hatano-Nelson model does not require any symmetry, so it is expected to be robust against disorder. We test this hypothesis by adding on-site potential disorder to the model, choosing for each site \(j\) a random potential \(\omega_{j}\), drawn independently from the uniform distribution \([-W/2,W/2]\). \(W\) therefore encodes the strength of disorder.
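Concretely, the disorder can be added as a random diagonal term; the sketch below assumes a dense real-space Hamiltonian matrix and a fixed random seed chosen only for reproducibility.

```python
import numpy as np

def add_onsite_disorder(H, W, seed=0):
    """Add an onsite potential drawn uniformly from [-W/2, W/2] to every site."""
    rng = np.random.default_rng(seed)
    return H + np.diag(rng.uniform(-W / 2.0, W / 2.0, size=H.shape[0]))
```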
Beyond testing for the robustness of the dislocation NHSE, adding disorder also allows us to check the validity of the localizer index in a regime where previous invariants do not apply, since momentum is not a good quantum number. We show in Fig. 5(a) the average topological invariant describing the defect NHSE for a system containing dislocations with \(B_{y}=\pm 1\). The index is computed as the difference of \(\nu_{L}\) [Eq. (8)] for two values of \(x_{0}\) on either side of the left-most dislocation (\(x_{0}=7.1\) and \(x_{0}=20.1\)). We compare it with another indicator of NHSE robustness, the bulk gap of the doubled Hamiltonian. We find that the system remains robust against disorder up to values of \(W\) of the order of \(t_{x}\), showing a well-quantized average invariant. When disorder strength is increased further, the bulk gap decreases and the index loses its quantization. In Fig. 5(b) we examine the SPD of a single disorder configuration at \(W=1\). While \(\rho\) is clearly noisy in the bulk of the system compared to Fig. 2(b), the peak and dip corresponding to the skin and anti-skin effect are clearly visible, consistent with the well-quantized localizer index. Finally, we note that neither the distribution of gap sizes nor that of \(\nu_{L}\) is Gaussian, so that the error bars of Fig. 5 cannot be determined simply from the variance. We detail their calculation in Appendix B.
## V Summary and outlook
In this work, we have revisited the skin effect occurring at dislocations in 2D non-Hermitian systems, using the simple toy model introduced in Refs. [15; 16]. We have found that a real-space topological invariant, called the localizer index, can fully capture the presence and the position of the NHSE, both at boundaries as well as at dislocations. This is in contrast to previous approaches, which rely on different invariant formulas.
One of the main advantages of localizer invariants is their ability to probe a given system directly and without the need for momentum space. Thus, we expect that this index may play an especially useful role in those systems where momentum space is inaccessible, such as in disordered or in amorphous systems.
Finally, we note that the index we have used is one of a large family of localizer invariants, which have been shown to apply to a variety of different Hermitian topological insulators, with different symmetries as well as in different dimensions. Due to the mapping relating the topology of Hermitian and non-Hermitian Hamiltonians, we expect that similar types of localizer index can be useful to characterize other types of NHSE, such as the ones protected by time-reversal symmetry [49].
Figure 5: Panel (a): Average localizer index characterizing the dislocation NHSE (red), and average bulk gap of the doubled Hamiltonian (blue) as a function of disorder strength \(W\). The system consists of \(40\times 20\) lattice sites and contains two dislocations with \(B_{y}=\pm 1\) that are positioned 20 lattice sites apart. We use \(t_{y}=\gamma=0.4\) and each point is obtained by averaging over 100 independent disorder realizations. The localizer index characterizing the dislocation topology is obtained as a difference between the values of \(\nu_{L}\) computed at \(x_{0}=20.1\) and \(x_{0}=7.1\). Panel (b) shows the SPD for a single disorder configuration at disorder strength \(W=1\). The skin and anti-skin peaks can be prominently seen.
###### Acknowledgements.
N.C. acknowledges financial support from the Working Internships in Science and Engineering (WISE) fellowship from the German Academic Exchange Service (Deutscher Akademischer Austauschdienst, DAAD) as well as the Kishore Vaigyanik Protsahan Yojana (KVPY) fellowship from the Dept. of Science and Technology, Govt. of India. A.G.M. acknowledges financial support from the Academy of Finland (Project 331094) and Jane and Aatos Erkko Foundation. We acknowledge financial support from the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) under Germany's Excellence Strategy through the Wurzburg-Dresden Cluster of Excellence on Complexity and Topology in Quantum Matter--ct.qmat (EXC 2147, Project No. 390858490).
## Appendix A Topological invariant for the non-Hermitian system
Here we derive an alternate expression for the localizer index in Eq. (6) applied to doubled Hamiltonians of the form in Eq. (3). The position operator for the doubled system has the block form:
\[\widetilde{X}=\begin{pmatrix}X&\mathbf{0}\\ \mathbf{0}&X\end{pmatrix} \tag{10}\]
\(X=\mathrm{diag}(x_{1}-x_{0},x_{2}-x_{0},\ldots,x_{N}-x_{0})\) is the position operator for the non-Hermitian system. Substituting the block representations of the matrices in Eq. (6), we get the invariant as
\[\nu_{L}=\frac{1}{2}\mathrm{Sig}\begin{pmatrix}X&-iH\\ iH^{\dagger}&-X\end{pmatrix}. \tag{11}\]
The matrix in the argument is the localizer \(L\), and its signature can be written in terms of the signature of its blocks. We use the following identity for the block LDU decomposition:
\[\begin{pmatrix}A&B\\ C&D\end{pmatrix}=\begin{pmatrix}\mathbb{I}&\mathbf{0}\\ CA^{-1}&\mathbb{I}\end{pmatrix}\begin{pmatrix}A&\mathbf{0}\\ \mathbf{0}&D-CA^{-1}B\end{pmatrix}\begin{pmatrix}\mathbb{I}&A^{-1}B\\ \mathbf{0}&\mathbb{I}\end{pmatrix} \tag{12}\]
Assuming that \(X\) is invertible, meaning that we choose \(x_{0}\neq x_{j}\) for all \(j\), we apply this block LDU decomposition for \(L\), resulting in:
\[L=SMS^{\dagger}, \tag{13}\]
where
\[S=\begin{pmatrix}\mathbb{I}&\mathbf{0}\\ iH^{\dagger}X^{-1}&\mathbb{I}\end{pmatrix}\] (14a) and \[M=\begin{pmatrix}X&\mathbf{0}\\ \mathbf{0}&-X-H^{\dagger}X^{-1}H\end{pmatrix} \tag{14b}\]
Eq. (13) is called a congruence relation, which by Sylvester's law implies that \(\mathrm{Sig}(L)=\mathrm{Sig}(M)\)[50; 51; 52].
Since \(M\) is block diagonal, its signature is simply the sum of the signatures of the diagonal blocks. Therefore, the localizer index can be written in terms of the quantities for the non-Hermitian system as:
\[\nu_{L}=\frac{1}{2}\mathrm{Sig}(X)-\frac{1}{2}\mathrm{Sig}\left(X+H^{\dagger} X^{-1}H\ \right) \tag{15}\]
Aside from providing an eight-fold speed-up for numerical computation, this also provides the expression for the invariant directly in terms of the non-Hermitian system and gives the same answer as the invariant calculated on the doubled system, as shown in Fig. 6.
## Appendix B Determining error bars
The choice of the error bars is made in accordance with the characteristics of the distribution of the corresponding variable. Since the energy gap is a continuous variable which is bounded below by 0, we use the first and third quartiles of the distribution as the lower and upper error bounds. This measure of the dispersion of the data is known as the interquartile range, and gives the interval where the middle half of data points are contained.
Since the topological index is always an integer by definition, the corresponding distribution is discrete. We thus use the bootstrap method [53]. We create data sets of the same size as the original data set (100 disorder realizations for each \(W\)) by randomly resampling data points from the original set with replacement, and calculate the mean of each resampled set. For each disorder value, we generate 10000 such resampled sets and report the interquartile range of the resulting means as the lower and upper error bars.
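For concreteness, a minimal sketch of this bootstrap estimate is given below; the sample array, number of resamplings, and random seed are the only inputs, and the function returns the first and third quartiles of the bootstrap means used as error bars.

```python
import numpy as np

def bootstrap_iqr_of_mean(samples, n_boot=10000, seed=0):
    """Lower/upper error bars: first and third quartiles of the bootstrap means."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples)
    means = np.array([rng.choice(samples, size=samples.size, replace=True).mean()
                      for _ in range(n_boot)])
    return np.percentile(means, 25), np.percentile(means, 75)
```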
|
2307.05886 | Towards Quantitative Evaluation of Crystal Structure Prediction
Performance | Crystal structure prediction (CSP) is now increasingly used in the discovery
of novel materials with applications in diverse industries. However, despite
decades of developments, the problem is far from being solved. With the
progress of deep learning, search algorithms, and surrogate energy models,
there is a great opportunity for breakthroughs in this area. However, the
evaluation of CSP algorithms primarily relies on manual structural and
formation energy comparisons. The lack of a set of well-defined quantitative
performance metrics for CSP algorithms make it difficult to evaluate the status
of the field and identify the strengths and weaknesses of different CSP
algorithms. Here, we analyze the quality evaluation issue in CSP and propose a
set of quantitative structure similarity metrics, which when combined can be
used to automatically determine the quality of the predicted crystal structures
compared to the ground truths. Our CSP performance metrics can be then utilized
to evaluate the large set of existing and emerging CSP algorithms, thereby
alleviating the burden of manual inspection on a case-by-case basis. The
related open-source code can be accessed freely at
https://github.com/usccolumbia/CSPBenchMetrics | Lai Wei, Qin Li, Sadman Sadeed Omee, Jianjun Hu | 2023-07-12T03:18:45Z | http://arxiv.org/abs/2307.05886v1 | # Towards Quantitative Evaluation of Crystal Structure Prediction Performance
###### Abstract
Crystal structure prediction (CSP) is now increasingly used in the discovery of novel materials with applications in diverse industries. However, despite decades of developments, the problem is far from being solved. With the progress of deep learning, search algorithms, and surrogate energy models, there is a great opportunity for breakthroughs in this area. However, the evaluation of CSP algorithms primarily relies on manual structural and formation energy comparisons. The lack of a set of well-defined quantitative performance metrics for CSP algorithms makes it difficult to evaluate the status of the field and identify the strengths and weaknesses of different CSP algorithms. Here, we analyze the quality evaluation issue in CSP and propose a set of quantitative structure similarity metrics, which when combined can be used to automatically determine the quality of the predicted crystal structures compared to the ground truths. Our CSP performance metrics can then be utilized to evaluate the large set of existing and emerging CSP algorithms, thereby alleviating the burden of manual inspection on a case-by-case basis. The related open-source code can be accessed freely at [https://github.com/usccolumbia/CSPBenchMetrics](https://github.com/usccolumbia/CSPBenchMetrics).
benchmark, materials discovery, crystal structure prediction, distance metric, performance metrics
## 1 Introduction
Deep learning-based AlphaFold has been revolutionizing the field of molecular biology by predicting tens of thousands of protein structures from sequences [1], which can accelerate the understanding of protein structures and functions. However, in the field of materials science, a similar crystal structure prediction problem, which aims to determine the stable structure given only a material composition, remains to be solved. If solved, it can dramatically accelerate the discovery of novel functional materials, as many material properties such as thermal conductivity, band gap, and elastic constants can be conveniently calculated using first-principles codes such as the Density Functional Theory (DFT)-based VASP.
Traditionally, CSP algorithms have mainly been based on DFT calculations of energies combined with global search algorithms. However, the complexity and demanding computing resources of DFT make it challenging to develop new CSP algorithms. Although the formation energy can be efficiently predicted by graph neural networks (GNNs) [2, 3, 4], this approach involves a substantial trade-off between accuracy and computational cost. However, recent progress in deep neural network-based energy potentials [5] has demonstrated that it is feasible to develop usable CSP algorithms using only
neural potentials [6]. It can be expected that an increasing number of CSP algorithms will emerge, as has happened in the protein structure prediction field with the CASP competitions organized regularly since 1994. In that case, large-scale benchmark studies and objective quantitative evaluation of CSP prediction performance will be needed to illuminate the progress and weaknesses of different algorithms, as has been done throughout CASP history.
Currently, there are three main categories of crystal structure prediction algorithms: search-based, template-based, and deep learning-based CSP algorithms. The global search-based CSP algorithms such as USPEX and CALYPSO combine search algorithms with DFT energy calculations for structure search. There are also several open-source algorithms such as CrySPY [7], XtalOpt [8], GASP [9], and AIRSS [10, 11]. However, the most widely used and well-established leading software packages for de novo CSP are the GA-based USPEX and the particle swarm optimization (PSO)-based CALYPSO. Despite their closed source code, their binary programs can be easily obtained, and both come with several advanced search techniques such as symmetry handling, crowding niches, and so on. Global search has also been combined with universal neural potentials for crystal structure prediction, as done by the GN-OA algorithm in [6] and AGOX [12]. With the many possible search algorithms [13], the family of such algorithms can keep growing. The second category of CSP algorithms is template-based element substitution combined with relaxation, including TCSP [14] and CSPML [15], which use rules and machine learning models to guide the selection of structure templates. The last category consists of deep learning-based algorithms inspired by AlphaFold [16].
With these emerging CSP algorithms, it is critical to benchmark and evaluate their performances in predicting structures of varying complexity so that strengths and obstacles can be identified. However, upon a thorough examination of the relevant literature, it is surprising to find that most of the CSP prediction results are manually verified by authors on a case-by-case basis. This verification process typically involves structure inspection, comparison of formation enthalpies, DFT-calculated energy analysis, examination of property distributions, computation of distances between structures, or a combination of these methods. There has been a severe lack of quantitative measures for objective evaluation of CSP prediction performance. In one of the earliest reports of USPEX (Universal Structure Predictor: Evolutionary Xtallography), which used evolutionary algorithms to identify stable and novel crystal structures that may not be easily discovered through traditional experimental or theoretical methods [17], the authors compared the energy difference between the predicted structures and the ground truth and then compared the structural similarity by manually inspecting the predicted structures against the experimentally determined structures. Similar approaches have been used for verifying predicted crystal structures in related CSP works [18, 19] using evolutionary algorithms. In a related study by Hofmann et al. [20], the authors used the largest distance between the unit cell edges and the nearest grid point of the experimental structure to evaluate CSP performance. Another widely used method for generating predicted crystal structures is CALYPSO (Crystal structure AnalYsis by Particle Swarm Optimization), which was developed by Wang et al. [21]. In this work, energy distributions and the distance against distortion for graphite and diamond structures were utilized to verify the predicted structures. Additionally, in the work by Tong et al. [22] on accelerating CALYPSO using data-driven learning of a potential energy surface, the authors employed the evolution of the root mean square errors (RMSEs) of the predicted energy and force by the Gaussian Approximation Potential (GAP) for the testing set to evaluate the CALYPSO structure search for a specific cluster. A vector-type structure descriptor distance has also been used for comparing the predicted structures against the ground truths [23].
The metrics used in validating the predicted structures against ground truths were usually set by the authors with a certain arbitrariness. So far, there is no set of quantitative indicators of the quality of the predicted crystal structures. Table 1 provides an overview of the evaluation methods used in state-of-the-art CSP works. The abbreviations M-i, M-o, M-e, M-s, and M-d represent manual structural inspection, comparison with experimentally observed structures, comparison of energy or enthalpy values, success rate analysis, and computation of distances between structures, respectively. It is noteworthy that computational methods such as DFT-energy or enthalpy calculations are commonly employed in many studies. However, manual structural similarity inspection methods continue to be widely used even today, which leads the casual reader to wonder how exactly a predicted crystal structure is evaluated in terms of its prediction quality, especially when the predicted structure does not exactly match the ground truth. Additionally, energy or enthalpy calculations for structure similarity evaluation using DFT can be time-consuming. Furthermore, performance evaluation methods such as success rates and ad hoc distance calculations between structures present challenges in standardizing, validating, and comparing the CSP results.
Inspired by the variety of quantitative metrics used in evaluating molecule generation algorithms by the benchmark MOSES [36], here we aim to address the challenges in defining good structure distance/similarity scores to measure the quality of CSP algorithms. We evaluated a series of energy- and structure-based performance metrics for crystal structure prediction algorithms. For each metric, we checked how its values correlate with the formation energy differences and perturbation deviations between the predicted structures and the ground truth structures. We tested these correlations for both random perturbations (applied to each atomic site independently) and symmetric perturbations [33] (applied only to Wyckoff sites without disrupting the symmetry), both of which have been adopted in CSP algorithms. We also showed that while no single metric can fully characterize the quality of a predicted structure against the ground truth, together they capture the key structural similarity. We additionally applied these metrics to compare the performance of CSP algorithms based on different search algorithms. We have also used these metrics to visualize the search trajectories of the structures for the GN-OA algorithm and explained its key limitations.
## 2 Method
### Evaluation metrics
Evaluation metrics play a crucial role in materials science research, as they provide a quantitative way to assess the performance and effectiveness of different material structure prediction algorithms. Currently, there are many evaluation benchmark metrics and tools in the molecule research area, such as RDKit [37] and MOSES [36]. However, in the field of materials informatics, we do not have a unified standard for evaluating the similarity between two crystal structures that arise during the crystal structure prediction process. Here we introduce a set of benchmark metrics for CSP that combine the energy distance with several common structure distance metrics, including the M3GNet energy distance, minimal Wyckoff RMSE distance, minimal Wyckoff MAE distance, RMS distance, RMS anonymous distance, Sinkhorn distance, Chamfer distance, Hausdorff distance, superpose RMSD distance, graph edit distance, XRD spectrum distance, and fingerprint distance, in order to standardize the comparison of material structure prediction algorithms.
Structure similarity in CSP has a unique property: the candidate structure and the ground truth structure being compared have the same number of atoms within the given unit cell. The key step is then to match the atoms of one structure to the corresponding atoms of the other structure so as to minimize the MAE. There are several desirable characteristics for a good structure similarity measure: 1) correlation: the structure difference should correlate well with the distance metric; 2) convergence: as the predicted structures approach the ground truth during the CSP search, the distance metric scores should converge to 0; 3) applicability: the distance metric should apply not just to very similar structures, but also to relatively distant intermediate structures, a property that the success rate metric lacks. Here we introduce eleven distance metrics that may be used in evaluating the prediction performance of CSP algorithms.
#### 2.1.1 Energy Distance (ED)
The formation energy is the energy required to form a material from its constituent elements in their reference states, which can provide information regarding the stability and reactivity of materials. While DFT calculation of formation energy is ideal for accuracy, it is too slow in many applications. As a result, machine learning-based energy models
\begin{table}
\begin{tabular}{|l|l|c|c|c|c|c|c|} \hline
**Author** & **Algorithm** & **Year** & **M-i** & **M-e** & **M-s** & **M-d** \\ \hline Hofmann DWM [20] & Data Mining & 2003 & ✓ & & & \\ Scott M. Woodley [24] & Evolutionary Algorithm & 2004 & ✓ & ✓ & & \\ R. Oganov [17] & Evolutionary Algorithm & 2006 & ✓ & ✓ & & \\ R. Oganov [18] & Evolutionary Algorithm & 2006 & ✓ & ✓ & & \\ Christopher C. Fischer [25] & Data Mining & 2006 & ✓ & & & \\ Kuo Bao [26] & Hopping method & 2009 & & ✓ & & \\ Giancarlo Trimarchi [27] & Evolutionary Algorithm & 2009 & ✓ & ✓ & & \\ R. Oganov [19] & Evolutionary Algorithm & 2010 & & ✓ & & \\ David C. Loinie [8] & Evolutionary Algorithm & 2011 & & ✓ & & \\ Yanchao Wang [21] & Particle swarm optimization (PSO) & 2012 & & ✓ & ✓ & ✓ \\ S Q Wu [28] & Evolutionary Algorithm & 2013 & ✓ & & & \\ Anton O. Oliynyk [29] & Data-Driven: ML & 2017 & & ✓ & & \\ Qunchao Tong [22] & Particle swarm optimization (PSO) & 2018 & & ✓ & & ✓ \\ Maximilian Amsler [30] & Hopping method & 2018 & & ✓ & & \\ Asma Nouira [31] & Data-Driven: ML & 2018 & & ✓ & ✓ & \\ Evgeny V. Podryabinkin [32] & Evolutionary Algorithm & 2019 & ✓ & & & \\ Lai Wei [14] & Template-Based Substitution & 2022 & & & & ✓ \\ Xuecheng Shao [33] & A symmetry-orientated method & 2022 & ✓ & & ✓ & \\ Xiangyang Liu [34] & Evolutionary Algorithm & 2022 & ✓ & & & ✓ \\ Yanchao Wang [35] & Particle swarm optimization (PSO) & 2022 & ✓ & & & \\ Guanjian Cheng [6] & Data-Driven: ML & 2022 & ✓ & & ✓ & \\ \hline \end{tabular}
\end{table}
Table 1: Overview of state-of-the-art CSP works for validating predicted structures. The abbreviations denote the validation methods used: M-i, manual inspection; M-e, comparison of energy or enthalpy; M-s, success rate; M-d, computation of distances between structures.
have become an important topic with significant progress recently for materials discovery and design. Here we use the M3GNet [5], a graph neural network-based surrogate potential model, to calculate the formation energies of the ground truth structure and the predicted structure and then their energy distance. The formula is shown in the following equation:
\[\mathrm{ED}=\left|E_{p}-E_{g}\right| \tag{1}\]
where \(E_{p}\) is the energy of the predicted structure and \(E_{g}\) is that of the ground truth structure.
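A minimal sketch of Eq. (1) is shown below; `formation_energy` is a placeholder for whichever M3GNet-based predictor is used (it is not a specific API call from that package), so the snippet only fixes the bookkeeping of the metric itself.

```python
def energy_distance(struct_pred, struct_gt, formation_energy):
    """ED of Eq. (1); `formation_energy` is a user-supplied callable returning the
    M3GNet-predicted formation energy of a structure (a placeholder, not a fixed API)."""
    return abs(formation_energy(struct_pred) - formation_energy(struct_gt))
```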
#### 2.1.2 Wyckoff position fraction coordinate distance (WD)
The Wyckoff position fraction coordinate distance is used to compare two structures that have the same Wyckoff position configurations in the symmetrized structures. It is useful to measure the similarity of the candidate structures and the ground truth structures for those CSP algorithms that can search structures while preserving symmetry (space groups).
RMSE stands for Root Mean Square Error, which can be calculated as the square root of the average of the squared differences between the predicted and actual values.
\[\mathrm{WD_{RMSE}}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2}} \tag{2}\]
where the variables \(y\) and \(\hat{y}\) represent the actual and predicted values, respectively. To calculate this distance, the input structures have to be symmetrized first, e.g. using Pymatgen's SpacegroupAnalyzer module.
Minimal MAE Distance is the minimum mean absolute error (MAE) distance between two sets of data points. It is a measure of the closeness between two sets of data points. The MAE distance is calculated by taking the absolute difference between corresponding data points in the two sets of data, summing these absolute differences, and dividing by the total number of data points. Let \(X=\{x_{1},x_{2},\ldots,x_{n}\}\) and \(Y=\{y_{1},y_{2},\ldots,y_{n}\}\) be two sets of \(n\) data points. The mean absolute error (MAE) distance between \(X\) and \(Y\) is defined as:
\[\mathrm{WD_{MAE}}=\frac{1}{n}\sum_{i=1}^{n}\left|x_{i}-y_{i}\right| \tag{3}\]
The minimal MAE distance is the smallest possible MAE distance that can be obtained between \(X\) and \(Y\) by permuting the data points in \(X\) and \(Y\).
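The two Wyckoff-coordinate distances can be sketched as follows, assuming the structures have already been symmetrized (e.g. with Pymatgen's SpacegroupAnalyzer) and their fractional coordinates extracted as \(n\times 3\) arrays; periodic-image handling is omitted for brevity, and the minimal MAE is obtained with the Hungarian assignment rather than by explicit enumeration of permutations.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wyckoff_rmse(y_true, y_pred):
    """RMSE of Eq. (2) between already-paired fractional coordinates (n x 3 arrays)."""
    diff = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.sqrt(np.mean(diff ** 2)))

def minimal_mae(coords_a, coords_b):
    """Minimal MAE of Eq. (3) over site permutations, via the Hungarian assignment."""
    A, B = np.asarray(coords_a), np.asarray(coords_b)
    cost = np.abs(A[:, None, :] - B[None, :, :]).mean(axis=2)  # pairwise per-site MAE
    rows, cols = linear_sum_assignment(cost)
    return float(cost[rows, cols].mean())
```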
#### 2.1.3 Adjacency Matrix Distance (AMD)
The adjacency matrix (M) is widely used to represent the connection topology of atoms for a given crystal structure. The value of \(M(i,j)\) is set to 1 if there exists a bond between atom \(i\) and atom \(j\), and to 0 otherwise. Here we use the canonical distance as the cutoff distance for a pair of element types to define the connection status. Given two structures \(S_{1}\) and \(S_{2}\) with adjacency matrices \(M_{1}\) and \(M_{2}\), the adjacency matrix distance is defined as:
\[\mathrm{AMD}=1-\frac{2n}{n_{1}+n_{2}} \tag{4}\]
where \(n\) is the number of matrix cells that both matrices have the value of 1. \(n_{1}\) is the number of matrix cells of \(M_{1}\) with the value of 1 and \(n_{2}\) is the number of matrix cells of \(M_{2}\) with the value of 1.
The AMD can be used to measure the topological similarity between two compared structures, which are assumed to have an equal number of atomic sites. However, we found that the correlation between the perturbation magnitude and the AMD is weak (see Supplementary Figures \(S3\) and \(S4\)).
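A sketch of Eq. (4) is given below; constructing the adjacency matrices themselves (i.e., choosing the cutoff distance for each element pair) is omitted, and the inputs are assumed to be 0/1 matrices of equal size.

```python
import numpy as np

def adjacency_matrix_distance(M1, M2):
    """AMD of Eq. (4); M1 and M2 are 0/1 adjacency matrices of the two structures."""
    M1 = np.asarray(M1, dtype=bool)
    M2 = np.asarray(M2, dtype=bool)
    n = np.sum(M1 & M2)              # bonds present in both structures
    n1, n2 = np.sum(M1), np.sum(M2)
    return 1.0 - 2.0 * n / (n1 + n2)
```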
#### 2.1.4 Pymatgen RMS distance (PRD) and RMS Anonymous Distance (PRAD)
We calculate RMS Anonymous Distance using the structure_matcher module of the PyMatGen (Python Materials Genomics) package [38], which allows distinct species in one structure to map to another. It also calculates the root-mean-square error (RMSE) between two structures according to equation 2, but its atomic site matching process does not consider the difference of atom types before calculating the RMSD. It is useful in cases where one wants to compare the overall structural similarity between two structures, without being concerned with the differences in atom types.
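A sketch of how these two distances can be obtained with Pymatgen's structure_matcher module is shown below; the file names are hypothetical, and the exact return values (e.g., the behaviour when no match is found) should be checked against the installed Pymatgen version.

```python
# File names below are hypothetical; return formats should be checked against
# the installed pymatgen version.
from pymatgen.core import Structure
from pymatgen.analysis.structure_matcher import StructureMatcher

s_true = Structure.from_file("ground_truth.cif")
s_pred = Structure.from_file("predicted.cif")

matcher = StructureMatcher()
prd = matcher.get_rms_dist(s_true, s_pred)        # PRD: species-aware RMS distance
prad = matcher.get_rms_anonymous(s_true, s_pred)  # PRAD: species-agnostic matching
print(prd, prad)
```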
#### 2.1.5 Sinkhorn Distance (SD)
The Sinkhorn Distance (SD) [39] is a distance metric used to compare two probability distributions or point clouds in high-dimensional spaces. To represent two crystal structures as point clouds, we can consider their atomic sites. Given two structures \(S_{1}=\{p_{1},p_{2},\ldots,p_{m}\}\) and \(S_{2}=\{q_{1},q_{2},\ldots,q_{n}\}\), where \(p_{i}\) and \(q_{i}\) are atomic sites of structures \(S_{1}\) and \(S_{2}\), respectively, we can formally define SD as:
\[SD\left(S_{1},S_{2}\right)=\frac{1}{\epsilon}\Bigg{(}\sum_{i,j}T_{i,j}\log \frac{T_{i,j}}{u_{i}v_{j}}-\log C\Bigg{)} \tag{5}\]
In Eq. 5, \(S_{1}\) and \(S_{2}\) are the crystal structures to be compared, and \(u\) and \(v\) denote the marginals of \(S_{1}\) and \(S_{2}\) which represent the total mass at each point (atomic site) in each distribution, respectively, \(\epsilon\) denotes a regularization parameter, \(C\) is a normalization constant and \(T\) is the transport plan which can be computed via the following equation:
\[T_{i,j}=\exp(-\epsilon h_{i,j})u_{i}v_{j} \tag{6}\]
In Eq. 6, \(h_{i,j}\) denotes the cost of transporting one unit of mass from site \(i\) in \(S_{1}\) to site \(j\) in \(S_{2}\).
SD can be thought of as a regularized version of the Earth Mover's Distance (EMD), also referred to as the Wasserstein Distance, where the smoothness of the transport plan is controlled by the regularization parameter \(\epsilon\). The transport plan gets quite sparse and the SD approaches the EMD when \(\epsilon\) is very large. The transport plan becomes very dense and the SD approaches the Euclidean Distance when \(\epsilon\) is very tiny.
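The snippet below is a standard Sinkhorn-Knopp sketch of an entropy-regularized transport cost between two site sets with uniform weights; it follows the usual formulation rather than the exact normalization of Eq. (5), and the regularization parameter `reg` plays the role of \(1/\epsilon\).

```python
import numpy as np
from scipy.spatial.distance import cdist

def sinkhorn_cost(P, Q, reg=1.0, n_iter=500):
    """Entropy-regularized transport cost between two site sets with uniform weights."""
    C = cdist(P, Q)                    # pairwise Euclidean costs h_ij
    K = np.exp(-C / reg)               # Gibbs kernel; use a log-domain solver for tiny reg
    a = np.full(len(P), 1.0 / len(P))  # uniform marginal u
    b = np.full(len(Q), 1.0 / len(Q))  # uniform marginal v
    u = np.ones_like(a)
    for _ in range(n_iter):            # Sinkhorn-Knopp fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    T = np.diag(u) @ K @ np.diag(v)    # transport plan
    return float(np.sum(T * C))        # transport cost <T, C>
```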
#### 2.1.6 Chamfer Distance (CD)
The Chamfer Distance (CD) [40] is defined as the average distance from each point in one point cloud to its nearest neighbor in the other, summed symmetrically over both clouds. Similar to SD, we also represent two crystal structures by their atomic sites to define it. Given two structures \(S_{1}=\{p_{1},p_{2},\ldots,p_{m}\}\) and \(S_{2}=\{q_{1},q_{2},\ldots,q_{n}\}\), where \(p_{i}\) and \(q_{i}\) are atomic sites of structures \(S_{1}\) and \(S_{2}\), respectively, we can formally define CD as:
\[CD\left(S_{1},S_{2}\right)=\frac{1}{m}\sum_{p\in S_{1}}\min_{q\in S_{2}}\|p-q \|_{2}+\frac{1}{n}\sum_{q\in S_{2}}\min_{p\in S_{1}}\|q-p\|_{2} \tag{7}\]
In Eq. 7, \(\|p-q\|_{2}\) is the Euclidean distance between sites \(p\) and \(q\). The CD is relatively fast and easy to compute, can handle large point sets, and is less sensitive to outliers and noise in the data. However, it also has some drawbacks, such as being dependent on the metric used to measure distances between points and being insensitive to the relative ordering of the points.
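A direct sketch of Eq. (7) using pairwise Euclidean distances is given below; the inputs are assumed to be \(m\times 3\) and \(n\times 3\) arrays of Cartesian site coordinates.

```python
import numpy as np
from scipy.spatial.distance import cdist

def chamfer_distance(P, Q):
    """Chamfer distance of Eq. (7) between two sets of atomic site coordinates."""
    D = cdist(P, Q)                                   # pairwise Euclidean distances
    return float(D.min(axis=1).mean() + D.min(axis=0).mean())
```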
#### 2.1.7 Hausdorff Distance (HD)
The Hausdorff Distance (HD) [41] measures the maximum distance between any point in one set and its nearest point in the other set, or vice versa. We represent two crystal structures as atomic sites to define them. Given two structures as \(S_{1}=\{p_{1},p_{2},\ldots,p_{m}\}\) and \(S_{2}=\{q_{1},q_{2},\ldots,q_{n}\}\) where \(p_{i}\) and \(q_{i}\) are atomic sites of structure \(S_{1}\) and \(S_{2}\), respectively, we can formally define HD as:
\[HD(S_{1},S_{2})=\max\Big{\{}\sup_{p\in S_{1}}\inf_{q\in S_{2}}\|p-q\|,\sup_{q \in S_{2}}\inf_{p\in S_{1}}\|q-p\|\Big{\}} \tag{8}\]
In Eq. 8, \(\|p-q\|\) is the distance between sites \(p\) and \(q\), which can be any distance metric such as the Euclidean, Manhattan, or Minkowski distance. The \(\sup\) function takes the supremum, or least upper bound, of the distances over all points in one set, and the \(\inf\) function takes the infimum, or greatest lower bound, of the distances over all points in the other set. The \(\max\) function takes the maximum of the two suprema, which makes the Hausdorff distance a symmetric metric.
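The symmetric Hausdorff distance of Eq. 8 can be computed, for example, from SciPy's directed Hausdorff routine; the sketch below is illustrative and uses the Euclidean metric.

```python
# Illustrative sketch of the symmetric Hausdorff distance of Eq. 8 (Euclidean metric).
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(P, Q):
    """P: (m, 3) and Q: (n, 3) arrays of site coordinates."""
    return max(directed_hausdorff(P, Q)[0], directed_hausdorff(Q, P)[0])

P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
Q = np.array([[0.1, 0.0, 0.0], [1.5, 0.0, 0.0]])
print(hausdorff_distance(P, Q))
```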
#### 2.1.8 Superpose Distance (SPD)
The Superpose Distance (SPD) is a measure of structural similarity between two 3D structures, commonly used for protein structures. SPD is essentially a variation of the RMSE, a commonly used metric for quantifying the structural similarity between two protein structures. We again represent the two crystal structures by their atomic sites. Given two structures \(S_{1}=\{p_{1},p_{2},\ldots,p_{n}\}\) and \(S_{2}=\{q_{1},q_{2},\ldots,q_{n}\}\), where \(p_{i}\) and \(q_{i}\) are atomic sites of structures \(S_{1}\) and \(S_{2}\), respectively, we use the Superpose3D [42] package, which takes the two structures as input, represented as two sets of atomic sites of the same length \(N\). It attempts to superimpose them using rotations, translations, and optionally scale transformations, treating them as rigid objects, in order to minimize the RMSE across corresponding sites. The RMSE of the paired sites after alignment is taken as the SPD, which can be defined by the following equation:
\[SPD=\sqrt{\frac{1}{N}\sum_{n=1}^{N}\sum_{i=1}^{s_{l}}\left|S_{1_{ni}}-\left( \sum_{j=1}^{s_{l}}hR_{ij}S_{2_{nj}}+T_{i}\right)\right|^{2}} \tag{9}\]
In Eq. 9, \(s_{l}\) denotes the dimension of the atomic sites, \(R_{ij}\) denotes the rotation matrix (an \(s_{l}\times s_{l}\) array representing the rotation) with \(|R|=1\), \(T_{i}\) denotes a translation vector, and \(h\) denotes a scalar. One limitation of this distance measure is that its superposition/alignment does not consider the atomic types of the sites.
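For intuition, the sketch below computes a superposition RMSD with the Kabsch algorithm (optimal rotation and translation only); the optional scale factor of Eq. 9 and the exact Superpose3D implementation are omitted, so it is illustrative rather than a drop-in replacement.

```python
# Illustrative Kabsch-based superposition RMSD (rotation + translation, no scaling).
import numpy as np

def superpose_rmsd(S1, S2):
    """S1, S2: (N, 3) arrays of paired site coordinates of the same length N."""
    A = S1 - S1.mean(axis=0)                     # center both point sets
    B = S2 - S2.mean(axis=0)
    U, _, Vt = np.linalg.svd(B.T @ A)            # Kabsch: optimal rotation via SVD
    d = np.sign(np.linalg.det(U @ Vt))           # guard against improper rotations
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    diff = A - B @ R                             # residuals after optimal superposition
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

S1 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
S2 = np.array([[0.0, 0.0, 0.1], [0.0, 1.0, 0.1], [-1.0, 0.0, 0.1]])  # rotated + shifted copy
print(superpose_rmsd(S1, S2))  # close to zero for an exact rigid transformation
```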
#### 2.1.9 Graph Edit Distance (GED)
The Graph Edit Distance (GED) [43] is a distance metric used to compare two graphs with possibly different numbers of nodes and edges. It measures the minimum number of operations needed to convert one graph into another. The permitted operations are insertions, deletions, and substitutions of nodes and edges. GED is defined by the following equation:
\[GED(G_{1},G_{2})=\min_{\zeta}\Big{(}\sum_{i\in V_{1}}\alpha_{\zeta}(i)+\sum_{ (i,j)\in E_{1}}\beta_{\zeta}(i,j)\Big{)} \tag{10}\]
In Eq. 10, \(G_{1}=(V_{1},E_{1})\) and \(G_{2}=(V_{2},E_{2})\) are the two input graphs to be compared, \(\zeta\) is a graph edit path that maps nodes and edges from \(G_{1}\) to \(G_{2}\), and \(\alpha\) and \(\beta\) are the costs of editing a node and an edge, respectively.
Here, we use a measure of the similarity between two crystal structures represented as graphs, based on the differences in the connectivity and bonding patterns of the atoms in the structures. We use the atomic number difference as the node substitution cost and the length difference of two edges as the edge substitution cost. The GED then uses the linear sum assignment algorithm, also known as the Hungarian algorithm [44], to find an assignment of nodes in structure A to nodes of structure B that minimizes the total substitution cost. The algorithm works by iteratively constructing a dual feasible solution, which is then used to generate an alternating path in a bipartite graph. This alternating path is used to update the current assignment and improve the overall cost. We use the implementation in the package of [45] for this distance calculation.
The GED considers the atomic element types during its site alignment process, which compensates for the corresponding shortcoming of the Superpose distance (SPD). However, we found that the correlation between the perturbation magnitude and the GED is weak (see Supplementary Figures S1 and S2).
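The node-matching step described above can be sketched with SciPy's linear-sum-assignment solver; the snippet below only illustrates the atomic-number substitution cost and the Hungarian assignment, omitting the edge (bond-length) costs and the full GED bookkeeping.

```python
# Illustrative sketch of the Hungarian node assignment using atomic-number substitution costs.
import numpy as np
from scipy.optimize import linear_sum_assignment

def node_assignment_cost(z_a, z_b):
    """z_a, z_b: arrays of atomic numbers of the sites of structures A and B."""
    cost = np.abs(np.subtract.outer(z_a, z_b))   # |Z_i - Z_j| substitution cost matrix
    rows, cols = linear_sum_assignment(cost)     # Hungarian / linear-sum assignment
    return cost[rows, cols].sum(), list(zip(rows.tolist(), cols.tolist()))

total_cost, mapping = node_assignment_cost(np.array([3, 8, 8]), np.array([8, 8, 11]))
print(total_cost, mapping)
```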
#### 2.1.10 X-ray Diffraction Spectrum Distance (XD)
X-ray diffraction (XRD) is a technique used to determine the atomic and molecular structure of a material by analyzing the diffraction patterns resulting from X-ray interactions with a crystal. To quantify the similarity between two material structures based on their XRD features, we calculate the Euclidean distance between their XRD feature vectors, which we refer to as the X-ray Diffraction (XRD) Spectrum Distance.
Given two crystal structures \(S_{1}=(p_{x_{1}},p_{x_{2}},\ldots,p_{x_{n}})\) and \(S_{2}=(q_{x_{1}},q_{x_{2}},\ldots,q_{x_{n}})\), where \(p_{x_{i}}\) and \(q_{x_{i}}\) represents the XRD features of structures \(S_{1}\) and \(S_{2}\), respectively, in an \(n\)-dimensional space, the Euclidean XRD spectrum distance \(XD\) between the two structures can be represented as follows:
\[XD\left(S_{1},S_{2}\right)=\sqrt{\sum_{i=1}^{n}\left(q_{x_{i}}-p_{x_{i}}\right) ^{2}} \tag{11}\]
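A possible realization of this distance is sketched below: each structure's XRD pattern is computed with PyMatGen's XRDCalculator, binned onto a common 2θ grid to produce fixed-length feature vectors, and the Euclidean distance between the two vectors is returned. The binning and normalization choices here are assumptions made for illustration and are not necessarily the featurization used in this work.

```python
# Illustrative sketch of an XRD spectrum distance (binning scheme is an assumption).
import numpy as np
from pymatgen.analysis.diffraction.xrd import XRDCalculator

def xrd_features(structure, n_bins=128, two_theta_range=(0, 90)):
    pattern = XRDCalculator().get_pattern(structure, two_theta_range=two_theta_range)
    hist, _ = np.histogram(pattern.x, bins=n_bins, range=two_theta_range, weights=pattern.y)
    return hist / (np.linalg.norm(hist) + 1e-12)     # normalized intensity histogram

def xrd_distance(s1, s2):
    return float(np.linalg.norm(xrd_features(s1) - xrd_features(s2)))
```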
#### 2.1.11 Orbital Field Matrix Distance (OD)
The Orbital Field Matrix (OFM) is calculated for each site in the supercell by considering the valence shell electrons of neighboring atoms. Each crystal structure is first transformed into a supercell, which prevents sites from coordinating with themselves. The average of the per-site orbital field matrices is then used to characterize the structure. The Orbital Field Matrix (OFM) distance is determined by calculating the Euclidean distance between the OFM features of two structures.
Given two crystalline structures \(S_{1}=\left(p_{o_{1}},p_{o_{2}},\ldots,p_{o_{n}}\right)\) and \(S_{2}=\left(q_{o_{1}},q_{o_{2}},\ldots,q_{o_{n}}\right)\), where \(p_{o_{i}}\) and \(q_{o_{i}}\) represents the OFM features of structures \(S_{1}\) and \(S_{2}\), respectively, in an \(n\)-dimensional space, the Euclidean OFM distance \(OD\) between two structures can be defined as the following equation:
\[OD\left(S_{1},S_{2}\right)=\sqrt{\sum_{i=1}^{n}\left(q_{o_{i}}-p_{o_{i}}\right) ^{2}} \tag{12}\]
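One way to obtain the OFM features is matminer's OrbitalFieldMatrix featurizer, which returns the structure-averaged orbital field matrix as a flat vector; the sketch below assumes matminer is installed and simply takes the Euclidean norm of the feature difference.

```python
# Illustrative sketch of the OFM distance (assumes matminer is installed).
import numpy as np
from matminer.featurizers.structure import OrbitalFieldMatrix

def ofm_distance(s1, s2):
    ofm = OrbitalFieldMatrix()                   # structure-averaged orbital field matrix
    f1 = np.asarray(ofm.featurize(s1))           # s1, s2: pymatgen Structure objects
    f2 = np.asarray(ofm.featurize(s2))
    return float(np.linalg.norm(f1 - f2))
```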
### Evaluation procedure
We evaluate the utility of the selected distance metrics for CSP studies in three ways: 1) studying how the structure distance metrics change with structure perturbation; 2) comparing the performance of three CSP algorithms over a set of test structures; 3) understanding the search dynamics or behavior of the optimization algorithms using the trajectories of the search process.
## 3 Results
### Evaluation of performance metrics
To evaluate how well different performance metrics reflect the actual closeness of the predicted structures to the ground truth structures, we use two perturbation methods to generate two sets of perturbed crystal structures with varying magnitudes for a given stable structure. We then calculate how their formation energy differences correlate with the performance metric distances as well as with the perturbation magnitudes. The first perturbation method directly changes the coordinates of all sites by a random uniform percentage \(p\) (from 1% to 50%, with 1000 sampled points) without considering the space group symmetry, so the resulting structures may lose their space symmetry. By perturbing a given stable crystal structure with an increasing series of perturbation magnitudes, we can simulate the search process of CSP algorithms to a certain degree. The second perturbation method comes from the PyXtal package [46], which can apply two types of perturbations to a given structure: one changes the lattice parameters by a given percentage; the other perturbs the atomic coordinates of the Wyckoff sites with a given magnitude in Å. Here we focus on this symmetry-preserving Wyckoff site coordinate perturbation combined with a 2% lattice perturbation.
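The first, symmetry-agnostic perturbation can be sketched as follows with PyMatGen; the exact sampling scheme used to generate our perturbed sets may differ, so this is only an illustration.

```python
# Illustrative sketch of a symmetry-agnostic perturbation of all fractional coordinates.
import numpy as np
from pymatgen.core import Structure

def random_perturb(structure: Structure, p: float, seed: int = 0) -> Structure:
    """Shift every fractional coordinate by a uniform random amount of up to +/- p percent."""
    rng = np.random.default_rng(seed)
    frac = structure.frac_coords.copy()
    frac += frac * rng.uniform(-p / 100.0, p / 100.0, size=frac.shape)
    return Structure(structure.lattice, structure.species, frac % 1.0)
```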
Figure 1 (a) shows the parity plot of the perturbation magnitude and the formation energy distances of the perturbed structures with respect to the ground truth structure SrTiO\({}_{3}\). Here the formation energy is predicted using the universal machine-learned M3GNet potential [5]. As the perturbation percentage goes up, the energy difference between the perturbed structures and the stable structure also goes up. We also find that the range of energy distances for a given perturbation percentage increases with the perturbation magnitude, indicating that highly disrupted structures tend to have diverse energy values. Figure 1 (b)-(l) shows the correlation between the perturbation magnitude and the eleven performance metrics, and three broad types of behavior can be distinguished. Figure 1 (b)-(i) show an essentially linear correlation of the perturbation with the following distance metrics: Wyckoff RMSE, Wyckoff MAE, Anonymous RMS, RMS distance, Sinkhorn distance, Chamfer distance, Hausdorff distance, and Superpose RMSD. Among these, Wyckoff RMSE, RMS distance, and Hausdorff distance (Figure 1 (b)(e)(h)) show the highest degree of linearity. The remaining metrics demonstrate a certain degree of nonlinear correlation, namely the XRD Spectrum distance, OFM distance, and FingerPrint distance, sorted by degree of nonlinearity. All eleven metrics can be used to measure the similarity between candidate structures and the ground truth structure.
While the eleven performance metrics show a good correlation with the structure perturbation in Figure 1, they are evaluated over the randomly perturbed structures which neglect the symmetry relationships among equivalent Wyckoff atomic sites. However, many efficient CSP algorithms use symmetry-obeying search operators which do not violate the atomic symmetry relationship during the coordinate search. To simulate this situation, we generate a second set of symmetry-preserving perturbed structures from the ground truth structure ZrSO. Figure 2 shows the correlations of performance metrics with respect to the perturbation magnitude.
Figure 1: Structure distances vs perturbation size evaluated over the randomly perturbed structures of SrTiO\({}_{3}\)
Figure 2: Structure distances vs perturbation size over the symmetrically perturbed structures of ZrSO. The dataset is generated by applying small (5%) perturbations to the lattice parameters a/b/c while the fractional coordinates are perturbed by 2% to 100%. The space group remains unchanged.
Compared to the randomly perturbed structures of SrTiO\({}_{3}\), symmetrical perturbations, in which the perturbation is applied to the coordinates of Wyckoff sites, may lead to more pronounced structural changes. The correlation between the perturbation magnitude and the performance metrics is weaker (Figure 2), because symmetrical structures possess internal repetition and symmetry, so a perturbation at one point propagates equivalent perturbations to other points. Consequently, accurately describing the perturbation based solely on its magnitude becomes challenging. In such scenarios, the influence of perturbations can extend beyond their immediate vicinity, impacting a broader range of elements or variables within the system. Therefore, to fully comprehend and analyze the effects of symmetrical perturbations, it is crucial to have a comprehensive understanding of the system's dynamics and interactions. From Figure 2 (a), we find that the perturbations quickly lead to high variations in the energies of the perturbed structures, even though the pattern is similar to that of the random perturbation when the perturbation magnitude/percentage is small. For the distance metrics in Figure 2 (b, c, f, g, h, i), we observe regular patterns at the bottom that are similar to the trends in the random perturbation results (Figure 1). The overhead dots at the top right show the impact of symmetric perturbation: a relatively small perturbation can also cause large distance changes. We also find that Figures 2 (j, k, l) have much higher variation than Figures 1 (j, k, l), respectively. These results show that our selected distance metrics tend to have higher variation when used to evaluate structure similarity for symmetrically perturbed structures. It is also found that the RMS distance is not even usable for symmetric perturbations (Figure 2 (e)).
### Using distance metrics to compare CSP algorithms
From the Materials Project database [47], we individually chose five crystal structures for each composition type, encompassing binary, ternary, and quaternary compounds, which gives a total of fifteen test structures. We also added CaS (mp-1672) as one additional target. To compare the performance of three GN-OA algorithms [6], based on three optimization methods including random search (RAS), Bayesian optimization (BO), and particle swarm optimization (PSO), we applied these three algorithms (GN-OA-RAS, GN-OA-BO, GN-OA-PSO) to the 16 targets. For RAS and BO, we set the initial population size to 200 and the total number of iterations to 20,000. For the PSO algorithm, we set the initial population size to 200 and the number of generations to 100.
Table 2 shows the distance metrics between the ground truth structure of CaS and its predicted structures by the three CSP algorithms, two of which (GN-OA-RAS and GN-OA-BO) successfully predict the ground-state structure within 5000 iteration steps. The computed distance metrics demonstrate similar performance across all measures. The energy distances for RAS, BO, and PSO are 3.4074, 2.9220, and 2.9493, respectively. Additionally, the XRD spectrum distance values of 2.8915, 2.8651, and 2.8636, along with the corresponding OFM distance values of 0.1535, 0.1419, and 0.1425 for RAS, BO, and PSO, indicate highly consistent results. Notably, both the graph edit distance and the fingerprint distance exhibit perfect matches with the ground truth, with values of 0 for each algorithm. Figure 3 shows the comparison of the ground truth with the crystal structures of CaS predicted by the RAS, BO, and PSO algorithms. Figures 3 (b), 3 (c), and 3 (d) exhibit a striking structural similarity to Figure 3 (a).
Table 3 shows the distance metrics of the predicted versus the ground truth structures for the binary target ScBe\({}_{5}\). We found that the PSO algorithm achieved the highest performance for all distances except the formation energy. The energy distances are 12.4945, 23.6137, and 4.7720 for RAS, BO, and PSO, respectively. Figure 4 (d) shows a more symmetrical structure, more similar to the ground truth, than Figures 4 (b) and 4 (c), which suggests a balanced distribution of elements and a sensible arrangement. The Sinkhorn distance, Chamfer distance, and Hausdorff distance of PSO, which are 2.7904, 0.9301, and 1.2743, respectively, are also significantly smaller than those of RAS and BO. All the distances indicate a higher similarity between the structure presented in Figure 4 (d) and the ground truth structure.

As shown in Table 4, the distances to the ground truth structure of LiZrO\({}_{2}\) for the BO and PSO results are nearly identical, with energy distances of 5.6953 and 5.6967, respectively, compared to 68.7212 for RAS. Although the formation energies of RAS, BO, and PSO do not differ significantly, the structures predicted by BO and PSO (Figure 5 (c) and Figure 5 (d)) are more similar to each other, as they have almost identical formation energies. Figure 5 (b) shows a higher symmetry than Figure 5 (c) and Figure 5 (d), which may also be the reason why the Sinkhorn distance, the Chamfer distance, and the Hausdorff distance indicate better performance for RAS.

For the quaternary material LiTiSe\({}_{2}\)O (Table 5), the BO algorithm achieves better performance in terms of most distance metrics. Among all the distance metrics, BO shows the best performance for the energy distance, with a value of 19.2273 compared to 93.8576 for RAS and 106.6080 for PSO. In addition, BO outperforms RAS and PSO in terms of the more challenging indicators, including Wyckoff RMSE and Wyckoff MAE, with values of 0.3464 and 0.2458, respectively. Similarly, it has the best results in terms of the Sinkhorn, Chamfer, and Hausdorff distances, which are 23.6450, 2.8741, and 2.6054, respectively. Figure 6 (c) shows the structure predicted by BO, which is more symmetrical than those predicted by RAS and PSO in Figure 6 (b) and Figure 6 (d).

The figures and tables presented above demonstrate that our distance metrics accurately capture the differences between the predicted crystal structures and the ground truths, highlighting that more symmetrical and stable structures tend to allow the CSP algorithms to achieve better performance. Performance evaluation metrics of the additional 12 targets are shown in Supplementary Tables S1 to S12.
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline
\multicolumn{4}{|c|}{**LiZrO\({}_{2}\)**} \\ \hline
Algorithm & RAS & BO & PSO \\ \hline
Formation Energy & -1.3519 & **-1.4883** & **-1.4883** \\ \hline
Energy Distance & 68.7212 & **5.6953** & 5.6967 \\ \hline
Wyckoff RMSE & N/A & N/A & N/A \\ \hline
Wyckoff MAE & N/A & N/A & N/A \\ \hline
Anonymous RMS & N/A & N/A & N/A \\ \hline
RMS Distance & N/A & N/A & N/A \\ \hline
Sinkhorn Distance & **31.7423** & 37.2488 & 37.0778 \\ \hline
Chamfer Distance & **3.0679** & 3.5888 & 3.6062 \\ \hline
Hausdorff Distance & **3.5578** & 4.4537 & 4.6748 \\ \hline
Superpose RMSD & 2.9837 & **2.8446** & 2.9019 \\ \hline
Edit Graph Distance & **33** & 36 & 36 \\ \hline
FingerPrint Distance & 3.5050 & **2.8482** & **2.8482** \\ \hline
XRD Spectrum Distance & **1.7977** & 2.0243 & 2.0243 \\ \hline
OFM Distance & 0.4376 & **0.2690** & 0.2691 \\ \hline
\end{tabular}
\end{table}
Table 4: A metrics table generated by comparing the ground truth structure of LiZrO\({}_{2}\) with the structures obtained using three different optimization algorithms from GN-OA: Random Acceleration Search (RAS), Particle Swarm Optimization (PSO), and Bayesian Optimization (BO).
Figure 5: Comparison of the ground truth and predicted crystal structures of LiZrO\({}_{2}\) by the RAS, BO, and PSO algorithms.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
\multicolumn{4}{|c|}{**LiTiSe\({}_{2}\)O**} \\ \hline
Algorithm & RAS & BO & PSO \\ \hline
Formation Energy & -0.7530 & **-1.3047** & -1.1161 \\ \hline
Energy Distance & 93.8576 & **19.2273** & 106.6080 \\ \hline
Wyckoff RMSE & 0.3658 & **0.3464** & 0.3958 \\ \hline
Wyckoff MAE & 0.2669 & **0.2458** & 0.3240 \\ \hline
Anonymous RMS & N/A & N/A & N/A \\ \hline
RMS Distance & N/A & N/A & N/A \\ \hline
Sinkhorn Distance & 31.9493 & **23.6450** & 30.4329 \\ \hline
Chamfer Distance & 3.3169 & **2.8741** & 3.5282 \\ \hline
Hausdorff Distance & 3.2332 & **2.6054** & 2.8125 \\ \hline
Superpose RMSD & **6.7413** & _7.3776_ & 7.1555 \\ \hline
Edit Graph Distance & **12** & 15 & 14 \\ \hline
FingerPrint Distance & 2.6597 & **2.0178** & 2.1986 \\ \hline
XRD Spectrum Distance & 1.9977 & **1.2185** & 1.7125 \\ \hline
OFM Distance & 0.4486 & 0.6285 & **0.4263** \\ \hline
\end{tabular}
\end{table}
Table 5: A metrics table generated by comparing the ground truth structure of LiTiSe\({}_{2}\)O with the structures obtained using three different optimization algorithms from GN-OA: Random Acceleration Search (RAS), Particle Swarm Optimization (PSO), and Bayesian Optimization (BO).
### Trajectory studies of the GN-OA search algorithms in CSP of targets without polymorph structures
Here we exploit the multi-dimensional CSP performance metrics to investigate the search behavior of the three optimization algorithms used in the GN-OA CSP package [6]. We applied their random search, Bayesian optimization (BO), and particle swarm optimization (PSO) to search for the structures of Ca\({}_{4}\)S\({}_{4}\) and Ba\({}_{3}\)Na\({}_{3}\)Bi\({}_{3}\). For Ca\({}_{4}\)S\({}_{4}\), we allocated 5,000 structure generations to the random search algorithm, which generated 147 valid structures (readable by Pymatgen). We allocated 1,000 structure generation steps to the BO algorithm, which traversed 140 valid structures. For PSO, we used 5,000 structure generation steps, which created only 100 valid structures over the search process. For all the valid structures encountered during a search, we calculated their distance metrics to the ground truth structure and then mapped the distance features to 2D points using t-SNE. We then plot the trajectory of the structure search over time by connecting two consecutive points if the newer structure has lower energy than the previous one. The results are shown in Figure 7. Note that the green triangles indicate the starting points while the red stars represent the ground truth structures.
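The trajectory construction described above can be sketched as follows; the variable names (distance_vectors, energies) are placeholders for data collected from a search run, and the exact t-SNE settings used for the figures are not reproduced here.

```python
# Illustrative sketch of the t-SNE trajectory plot (variable names are placeholders).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_trajectory(distance_vectors, energies):
    """distance_vectors: (n_structures, n_metrics); energies: (n_structures,)."""
    xy = TSNE(n_components=2, init="pca", random_state=0).fit_transform(
        np.asarray(distance_vectors))
    plt.scatter(xy[:, 0], xy[:, 1], c=energies, s=10)
    for i in range(1, len(energies)):            # connect consecutive energy-improving steps
        if energies[i] < energies[i - 1]:
            plt.plot(xy[[i - 1, i], 0], xy[[i - 1, i], 1], lw=0.5, color="gray")
    plt.colorbar(label="predicted formation energy")
    plt.show()
```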
First, we found that for all three algorithms it is challenging to generate valid structures during the search. Both the random search and PSO generated fewer than 150 valid structures out of 5,000 structure generations each. In comparison, the BO algorithm generated 140 valid structures with only 1,000 structure generations. This is consistent with the observation of the GN-OA authors that BO has better performance in their CSP experiments. From Figure 7 (a) and (c), we find that the random search and PSO algorithms tend to jump around in a larger design space. In contrast, the BO algorithm is more focused during its search (Figure 7 (b)).
We further applied the three search algorithms to the structure prediction of a ternary compound, Ba\({}_{3}\)Na\({}_{3}\)Bi\({}_{3}\), which is more challenging than Ca\({}_{4}\)S\({}_{4}\). Figure 8 shows the trajectories of the three algorithms. First, we found that all three algorithms have much more difficulty generating valid structures, especially the random search algorithm, which generated only 121 valid structures during its 50,000 tries. In contrast, BO and PSO generated 195 and 177 valid structures during their 3,000 and 6,000 structure generation steps, respectively, although the success rates of structure generation are still very low. The search trajectory patterns of the random search and PSO remain similar, jumping around a large area, while the BO algorithm is more focused in its structure search. Compared to Figure 7 (b), however, the structure range in Figure 8 is larger due to the higher complexity of the target structure Ba\({}_{3}\)Na\({}_{3}\)Bi\({}_{3}\). Our CSP-metrics-based trajectory analysis shows that current algorithms need to significantly improve their structure generation success rate to achieve higher efficiency in CSP.
Figure 7: Trajectories of three search algorithms in crystal structure prediction of Ca\({}_{4}\)S\({}_{4}\): (a) Random search with 5000 structure generation steps, 147 valid structures found; (b) Bayesian Optimization with 1000 structure generations, 140 valid structures found; (c) PSO with 5000 structure generations, 100 valid structures found. The green triangles indicate the starting points while the red stars indicate the ground truth structures.
## 4 Conclusion
Due to the complexity of structural changes during the search process of crystal structure prediction algorithms, it is very difficult to measure the similarity of candidate structures to the ground truth, especially when the algorithms cannot find the exact solution. This is particularly challenging when the candidate structure and the target structure have different spatial symmetries (space groups). We find that it is infeasible to use a single structure similarity measure to describe the CSP prediction quality of different algorithms. By evaluating a set of 9 structure distance measures (which we call CSPMetrics), we find that using them together yields a quantitative method for measuring the quality of predicted crystal structures compared to the ground truths. Applying our CSPMetrics set has allowed us to gain interesting insights into the structures visited during the search process of different CSP algorithms. While there is certainly room to further improve the metrics so that they better capture the prediction errors occurring during CSP algorithm searches, we believe our current CSPMetrics can serve as a good starting point for characterizing and benchmarking different CSP algorithms. The availability of the source code additionally makes such evaluations easy.
## 5 Data and Code Availability
The test crystal structures are downloaded from the Materials Project database at http://www.materialsproject.org. The source code can be found at https://github.com/usccolumbia/CSPBenchMetrics.
## 6 Contribution
Conceptualization, J.H.; methodology, J.H., L.W., Q.L., S.O.; software, L.W., J.H., Q.L.; resources, J.H.; writing-original draft preparation, J.H., L.W., Q.L., S.O.; writing-review and editing, J.H., L.W.; visualization, J.H., L.W.; supervision, J.H.; funding acquisition, J.H.
## Acknowledgement
The research reported in this work was supported in part by National Science Foundation under the grant 2110033. The views, perspectives, and content do not necessarily represent the official views of the NSF.
# Presentation of Jean-Marie Souriau's book "Structure des systèmes dynamiques"

Géry de Saxcé, Charles-Michel Marle
###### Abstract
Jean-Marie Souriau's book "Structure des systemes dynamiques", published in 1970, republished recently by Gabay, and translated into English under the title "Structure of Dynamical Systems, a Symplectic View of Physics", is a work of exceptional richness which, fifty years after its publication, is still topical. In this paper, we give a rather detailed description of its content and we intend to highlight the ideas that, to us, are the most creative and promising.
Jean-Marie Souriau's book "Structure des systemes dynamiques" [1] was published in 1970 in a book collection intended for students in the first year of a master's degree in mathematics. It is in fact directed to mathematicians, whether beginners or experienced, wishing to know the applications of mathematics to the physical sciences, and to physicists concerned with knowing certain mathematical tools useful for their research. The author was very aware of this since, in his Introduction, he gives reading recommendations adapted to both categories of readers. It is a work of exceptional richness which, fifty years after its publication, is still topical. In the first part, we shall describe its content. In the second one, we shall discuss its most original aspects. As a conclusion, we shall indicate why this book still seems to us today a valuable source for students and researchers in mathematics, mechanics and physics.
## 1 Book content
### A quick survey.
This book comprises a rather long introduction (20 pages) and five chapters. The first two chapters, entitled _Differential geometry_ and _Symplectic geometry_, are purely mathematical. The third one, entitled _Mechanics_, begins with a classical presentation of the mechanics of systems of material points. Very soon, the author introduces the concept of the _manifold of motions_ of a system. Then he applies the methods of symplectic geometry presented in the previous chapter to both classical and relativistic mechanical systems. An important paragraph deals with isolated systems admitting a symmetry group acting transitively on the manifold of motions, which the author considers as models of _elementary particles_. Next the author studies the dynamics of systems of elementary particles, taking into account their interactions. The fourth chapter, entitled _Statistical mechanics_, contains two sections. The first one, essentially mathematical, presents measure and integration theories, together with notions of probability theory. The author defines the _statistical states_ of a dynamical system and the _entropy_ of a statistical state, and proposes a generalisation of the notion of _Gibbs state_. In the second section, he uses these notions to treat certain systems encountered in physics: the classical and relativistic ideal gas and systems of particles of null mass, among others. He clarifies the interpretation of the parameters on which the Gibbs state depends in terms of thermodynamical quantities, and proposes an interesting generalisation of the concept of thermodynamical equilibrium. The fifth and last chapter, entitled _Geometric quantization_, presents a construction which allows one to associate
to a symplectic manifold satisfying certain conditions another manifold, of larger dimension, called by the author a _quantum manifold_. This construction is due to the author. A slightly different but equivalent construction was proposed independently by the American mathematician B. Kostant [2]. In the second and last section of the fifth chapter, the author applies this construction to the quantization of physical systems.
The book is illustrated by many figures, most of which are very meaningful schematic representations of the geometric constructions used by the author. The references to other books or papers, fairly limited in number, are not gathered in a bibliographic list but indicated in footnotes. An index, very detailed and easy to use, and a list of the main notations complete the book.
### Detailed presentation
_The introduction._ The author evokes the book by Joseph-Louis Lagrange (1736-1813), _Mecanique analytique_ [3], written at the end of the 18th century. This famous work is at the origin of the _mecanique analytique classique_ which was, during the 19th century and the first half of the 20th century, an essential part of scientific education in French universities and high schools. According to the author, it is an unfinished work in which some chapters are only sketched. For him, the form used to present this theory and the concepts it uses (instead of concepts, the author writes _categories_, in the epistemological, Aristotelian or Kantian sense) were fixed by Lagrange's successors such as Simeon Denis Poisson (1781-1840), William Rowan Hamilton (1805-1865) and Carl Gustav Jakob Jacobi (1804-1851). The author considers that the form thus given to analytical mechanics, although it gives this theory a formal mathematical perfection, has lost an important part of Lagrange's thought. The great discoveries of the first quarter of the 20th century (special and general relativity, quantum theory) taught us that the words _time_, _space_ and _matter_ do not necessarily have the obvious meaning ascribed to them. For the author, classical analytical mechanics, which remains an essential ingredient of current physical theories, is not outdated, although certain concepts it uses are, because they do not have the required covariance, in other words because they are in contradiction with Galilean relativity. He wishes to show in his book that a better consideration of Lagrange's thought allows a formulation of this theory compatible with the most recent discoveries in the physical sciences.
Using the very simple example of the motion of a material point, the author explains which concepts he is going to use: the evolution space, the space of motions of a system, Lagrange's form. The concept of the space of motions of a dynamical system seems to us the most important: it is the set of all possible motions of the considered system. The author discusses in depth its usefulness, often underestimated by scholars mostly interested in the study of one particular motion of the considered system. He also presents all the mathematical tools that he will use: differential forms, Lie groups, symplectic forms, etc. He then reviews fairly accurately every chapter of his book and completes his introduction with advice to the readers.
_The first chapter, Differential Geometry._ This chapter presents in less than 70 pages numerous delicate notions: differential manifolds, tangent and cotangent fiber bundles, submanifolds, covering spaces, vector fields and differential equations, the Lie bracket of two vector fields, the exterior derivative, foliations, Lie groups, the calculus of variations. The author presents the concept of differential manifold in a rather original way which does not rely on a prior presentation of topological manifolds, undoubtedly to make this notion easily accessible to beginning students. Differential manifolds are not assumed to be Hausdorff. The _spaces of motions_ of certain mechanical systems encountered in Chapter III are indeed non-Hausdorff manifolds. The author's language sometimes slightly differs from that generally used: for example, he calls an _embedding_ what most geometers call an _injective immersion_. However, readers can easily avoid misunderstandings, since the author scrupulously defines all the terms he uses.
The paragraph devoted to Lie groups contains a detailed presentation of the actions of a Lie group on a differential manifold and of its adjoint representation. The author will define the coadjoint representation in the next chapter, with the study of the moment map of the action of a dynamical group on a presymplectic or symplectic manifold. The main classical Lie groups (linear, orthogonal, unitary, symplectic) are described in a very original manner. The section about calculus of variations presents, besides the classical Euler-Lagrange equations, the extremality criterion using Cartan's form, often called _Euler-Cartan theorem_, that establishes a link between calculus of variation and symplectic geometry, together with Noether's theorem.
_Chapter II, Symplectic Geometry._ It is also essentially mathematical. Clearly shorter than the previous one (46 pages), it begins with the study of a finite dimensional vector space \(E\) equipped with a bilinear skew-symmetric form \(\sigma\). The author defines the concepts of _orthogonality_ with respect to \(\sigma\), and of _isotropic_, _co-isotropic_ and _self-orthogonal_ vector subspaces, the latter also called by other authors _Lagrangian vector subspaces_ when \(\sigma\) is nondegenerate. The author proves that the rank of \(\sigma\) is always even and that given a coisotropic
vector subspace, one can always build a basis of the kernel of \(\sigma\), then complete it to obtain a basis of \(E\), called _canonical_, in which the expression of \(\sigma\) is very simple. When \(\sigma\) is non degenerate, it is called a _symplectic form_, the dimension of \(E\) is even and the couple \((E,\sigma)\) is called a _symplectic vector space_. The author then defines the _symplectic group_ of \((E,\sigma)\) and studies its natural action on \(E\) and its properties, together with those of its Lie algebra. He shows in particular that \(E\) has a complex vector space structure adapted, in a precise mathematical sense, to the symplectic form \(\sigma\), and that it can be endowed with a Hermitian form of which \(\sigma\) is the imaginary part.
The author next defines symplectic and presymplectic manifolds and studies their properties. He shows that, under certain conditions, the quotient of a presymplectic manifold by the kernel of its presymplectic form is a symplectic manifold, a result that he will use in the following chapter to define the _space of motions_ of a dynamical system. He shows that to each differentiable function defined on a symplectic manifold there is an associated vector field, which he calls its _symplectic gradient_; the flow of this vector field leaves the symplectic form unchanged. He proves that the set of differentiable functions defined on a symplectic manifold is endowed with a binary operation, the _Poisson bracket_, which makes it a Lie algebra of infinite dimension. He defines and studies some remarkable submanifolds of a symplectic manifold (isotropic, co-isotropic and self-orthogonal submanifolds), studies their properties and gives several examples thereof. He proves _Darboux theorem_, whereby every point of a symplectic manifold belongs to the domain of a chart, called _canonical_, in which the symplectic form is expressed in a simple manner. Next he defines and studies the _symplectomorphisms_, also called _canonical transformations_, and their generalisations, the _canonical similarities_, together with the _infinitesimal canonical transformations_ (called by other authors _locally Hamiltonian vector fields_).
The author calls _dynamical group_ of a symplectic or presymplectic manifold a Lie group acting on it by canonical transformations. He calls _moment map_ of the dynamical group \(G\) of a presymplectic or symplectic manifold \(M\) a smooth map \(\Psi\) from \(M\) into the vector space \(\mathcal{G}^{*}\), dual of the Lie algebra \(\mathcal{G}\) of \(G\), such that for every \(Z\in\mathcal{G}\), the infinitesimal generator of the action on \(M\) of the one-parameter subgroup generated by \(Z\) is the symplectic gradient of the function which associates, to every \(x\in M\), the real number \(\left\langle\Psi(x),Z\right\rangle\). In the terminology used by most geometers, this infinitesimal generator is called the _Hamiltonian vector field_ whose Hamiltonian is the function \(x\mapsto\left\langle\Psi(x),Z\right\rangle\).
The author gives several examples of dynamical groups, indicates sufficient conditions for the existence of a moment map and studies its properties. This leads him to propose a generalisation of Noether's theorem encountered in the part of the previous chapter concerning the calculus of variations. After a quick presentation of the cohomology of Lie groups and algebras, the author shows that to any moment map of the action of a dynamical group \(G\) on a connected presymplectic or symplectic manifold, there always exists an associated cocycle \(\theta\) of \(G\) valued in the dual \(\mathcal{G}^{*}\) of its Lie algebra. The differential of \(\theta\) at the neutral element, which is the cocycle of the Lie algebra \(\mathcal{G}\) associated to \(\theta\), is a skew-symmetric bilinear form on \(\mathcal{G}\). The author calls _symplectic cocycle_ a cocycle of \(G\) valued in \(\mathcal{G}^{*}\) satisfying this property. He proves that the addition to the moment map of a constant element of \(\mathcal{G}^{*}\) modifies the cocycle \(\theta\) by the addition of a coboundary, and therefore does not change its cohomology class, which depends only on the \(G\)-action, not on the choice of the moment map. Moreover the cocycle \(\theta\) allows to define an affine action of \(G\) on the dual \(\mathcal{G}^{*}\) of its Lie algebra whose linear part is the coadjoint representation. The author then proves that the moment map is equivariant with respect to this affine action of \(G\) on \(\mathcal{G}^{*}\) and its action on \(M\) as a dynamical group of this manifold. The orbits of this affine action are submanifolds embedded in \(\mathcal{G}^{*}\) (embedded in the author's sense; most geometers would rather say _immersed_) and are endowed with a symplectic form whose expression involves the cocycle of \(\mathcal{G}\), that is the differential of \(\theta\) at the neutral element. Nowadays this important result is expressed by saying that \(\mathcal{G}^{*}\) possesses a _Poisson structure_ whose _symplectic leaves_ are the orbits of the affine action for which the moment map is equivariant, that this Poisson structure remains unchanged under this affine action and that the moment map is a _Poisson map_. This Poisson structure on \(\mathcal{G}^{*}\) was discovered by the Norwegian mathematician Sophus Lie (1842-1899) in the special case in which the cocycle \(\theta\) vanishes. In the general case, it was rediscovered independently by Alexander Kirillov, Bertram Kostant and the author, Jean-Marie Souriau. In his book _Structure des systemes dynamiques_, he does not use the concept of Poisson structure; rather he uses the fact that the orbits of the affine action of \(G\) on \(\mathcal{G}^{*}\) are symplectic manifolds, which he calls _symplectic manifolds defined by a Lie group_. He shows that when the dynamical group \(G\) of a symplectic manifold \((M,\sigma)\) is connected, and when its action is transitive and possesses a moment map \(\Psi\), this map is a local symplectomorphism of \(M\) onto an orbit of the affine action of \(G\) on \(\mathcal{G}^{*}\) for which \(\Psi\) is equivariant. The map \(\Psi\) is a symplectomorphism if and only if, moreover, the isotropy group of a point of \(M\) is connected. He presents some examples of this important result, often called nowadays the _Kostant-Souriau theorem_.
_Chapter III, Mechanics._ Relatively long (105 pages), this chapter begins with the study of a mechanical system composed of material points in a fixed Galilean frame. The author writes _Newton's equation_ expressing the equality of the force acting on each of those points and the product of the mass by its acceleration. He briefly treats the case of a single material point placed in a Coulomb field (Kepler's problem), then the _N-body problem_ of celestial mechanics. He then introduces the notion of _constraint_, presents the _principle of virtual work_ and the conditions in which a constraint is called _ideal_. To study the motion of a rigid body, he uses the _group of Euclidean displacements_ and establishes the equations of motion.
Returning to the case of a system of \(N\) material points without constraints, subjected to forces expressed by differentiable functions of the time and of the positions and velocities of these points, the author shows that the equations of motion are expressed as the differential equation associated to a vector field depending on time, defined on the _evolution space_ of the system (a set, isomorphic to \(\mathbb{R}^{6N+1}\), made of multiplets composed of the time and of the positions and velocities of the \(N\) material points). These equations determine a _foliation in curves_ of the evolution space. Each of these curves is the mathematical expression of a possible motion of the system. This is why the author calls _space of motions_ of the system the set made of these curves. He proves that the space of motions is a differential manifold (not always Hausdorff) of dimension \(6N\) and that there exists a natural projection, smooth and everywhere of rank \(6N\), from the evolution space onto the space of motions. Then he proves the existence, on the evolution space of the system, of a remarkable 2-form \(\sigma\), which he calls _Lagrange's form_, because it was used in 1808 by Lagrange in his works in celestial mechanics. As pointed out by the author, this form was used in mechanics around 1950 by the French mathematician Francois Gallissot [4]. For a single material point of mass \(m\) whose position and velocity vectors are, respectively, \(\mathbf{r}\) and \(\mathbf{v}\), on which acts a force \(\mathbf{F}\), the components of these three vectors in a fixed orthonormal frame of the space being, respectively, \((r_{1},r_{2},r_{3})\), \((v_{1},v_{2},v_{3})\) and \((F_{1},F_{2},F_{3})\), this form reads
\[\sigma=\sum_{i=1}^{3}(m\mathrm{d}v_{i}-F_{i}\mathrm{d}t)\wedge(\mathrm{d}r_{i }-v_{i}\mathrm{d}t)\,,\text{ that can be written }\sigma=(m\mathrm{d}\mathbf{v}-\mathbf{F}\mathrm{d}t)\wedge( \mathrm{d}\mathbf{r}-\mathbf{v}\mathrm{d}t)\,,\]
where the symbol \(\wedge\) denotes the operator combining the dot product and the exterior product.
The Lagrange form of a system of \(N\) material points is the sum of the Lagrange forms of all points of the system. It determines the vector field of which the system's motions are the integral curves, since its kernel is the sub-bundle that determines the foliation in curves of the evolution space. The author remarks that it is always possible to choose \(N\) differentiable vector fields \(\mathbf{B}_{j}\), defined on the evolution space, and to define \(N\) other vector fields \(\mathbf{E}_{j}\) so that the force \(\mathbf{F}_{j}\) acting on the \(j\)-th material point is \(\mathbf{F}_{j}=\mathbf{E}_{j}-\mathbf{B}_{j}\times\mathbf{v}_{j}\), where \(\mathbf{v}_{j}\) is the velocity of this material point.
When changing the reference frame, the parameterization of the evolution space and the expression of the Lagrange form are modified, particularly because inertial forces must be included in the forces acting on the material points (centrifugal force and Coriolis' force). The author states the _principle of Galilean relativity_ which claims the existence of preferential reference frames, called _inertial reference frames_, such that the expression of the Lagrange form of an _isolated_ system is the same in all inertial reference frames. He shows that the relative motion of an inertial reference frame with respect to another one is a translation motion at a constant velocity. The set of all changes of inertial reference frames is a Lie group, called the _Galilei group_. The author gives a matrix expression of this group, which is of dimension 10. It acts on the evolution space of an unconstrained system of material points by an action which preserves the Lagrange form. Using the principle of virtual works, the author extends the notions of evolution space, Lagrange form and space of motions, for a system of \(N\) material points involving ideal constraints, which can be either holonomic or non holonomic with a linear dependence on the velocities. The results obtained for the unconstrained systems of material points remain valid, provided that the constraints remain the same in all inertial reference frames.
The author calls _Maxwell's principle_ the hypothesis that the exterior derivative of the Lagrange 2-form of a general dynamical system (not necessarily made of material points) vanishes. For a system of material points, since the vector fields \(\mathbf{B}_{j}\) which appear in the forces \(\mathbf{F}_{j}=\mathbf{E}_{j}-\mathbf{B}_{j}\times\mathbf{v}_{j}\) can be freely chosen, the author formulates Maxwell's principle in the following form: the vector fields \(\mathbf{B}_{j}\) can be chosen in such a way that the Lagrange form is closed. This condition determines these fields in a unique manner. The author proves that, as a consequence of this principle, the vector fields \(\mathbf{E}_{j}\) and \(\mathbf{B}_{j}\) must not depend on the velocities of the material points and must verify both _Maxwell equations_
\[\mathrm{rot}\mathbf{E}_{j}+\frac{\partial\mathbf{B}_{j}}{\partial t}=0\,,\quad \mathrm{div}\,\mathbf{B}_{j}=0\,,\quad 1\leq j\leq N\,.\]
The author shows that Maxwell's principle is well verified in a lot of cases: the \(N\)-body problem, a material point in the gravity field, an electrically charged particle in an external electromagnetic field. In this latter
case, the above equations are the first two Maxwell's equations (Maxwell-Faraday equation and Maxwell-Thomson equation) that must be verified by the external electromagnetic field. The author deduces from them the well-known formula giving the expression of the _Laplace force_, and concludes that this force _is not_ a relativistic effect since it results from the application of Maxwell's principle in the framework of classical mechanics.
By contrast, the creation of an electromagnetic field by electric charges in motion, mathematically described by the last two Maxwell equations (the Maxwell-Ampere equation and the Maxwell-Gauss equation), is a relativistic effect which does not appear in the framework of classical mechanics. The author adopts Maxwell's principle as a new _law of mechanics_. This allows him to study, in the framework of classical mechanics, systems more general than those made of material points. For systems of material points, Maxwell's principle allows, under certain conditions, to define a Lagrangian and to show that the Lagrange form is nothing else than the exterior derivative of the Cartan form defined in the first chapter, in the study of the calculus of variations. One can then apply the _principle of least action_, often considered as an essential piece of analytical mechanics. Moreover, when the Lagrangian is hyper-regular, one can associate to it a Hamiltonian and use the Lagrangian and Hamiltonian formalisms for the study of motions. Without denying the importance of the principle of least action nor the usefulness of these formalisms, the author declares that these concepts seem to him less fundamental than Maxwell's principle. This viewpoint seems to him justified because the existence of a Lagrangian is ensured only locally, and because there exist important systems, such as those made of particles with spin, to which Maxwell's principle applies while they do not have a globally defined Lagrangian. In the sequel, the author will not use the principle of least action except to present, very briefly, the _method of variation of constants_ introduced by Lagrange in 1809 during his works on the slow variations of the orbital elements of planets.
The Lagrange form projects onto the manifold of motions, and its projection is a closed form, which is symplectic since it is automatically non degenerate. When the system is isolated, the Galilei group acts on the evolution space and on the space of motions. The moment map of this action on the evolution space is a first integral of the motion. The author details its ten components, which can be regrouped in three vectors \(\mathbf{p,l,g}\) and a scalar \(E\). He gives their physical meaning: \(\mathbf{p}\) is the total linear momentum; \(\mathbf{l}\) is the total angular momentum; the equality \(\mathbf{g}=\mathrm{Constant}\) conveys the fact that the center of mass moves on a straight line at constant velocity; the scalar \(E\), defined modulo an additive constant, is the total energy.
Using a result due to V. Bargmann [5], who proved that the symplectic cohomology of the Galilei group is of dimension 1, the author shows that the class of symplectic cohomology of the action of the Galilei group on the space of motions of an isolated system made of material points in interaction can be interpreted as the _total mass_. The writing of the equations of motion gives two noteworthy results: the vector fields \(\mathbf{B}_{j}\) necessarily all vanish; the total force and the total torque of the interaction forces are necessarily zero. The first result shows that in classical mechanics, a system made of material points cannot describe moving magnets. The second one expresses the _principle of equality of action and reaction_, which appears as a consequence of Maxwell's principle and of the principle of Galilean relativity.
Next the author presents several properties of the dynamical groups of a dynamical system and gives numerous examples. For the Kepler problem (motion of a material point in a Coulomb field) he shows how to _regularize_ the manifold of motions and explains the origin of the exceptional first integral, often called the _Lenz vector_ or the _Laplace vector_, which should rather be called the _eccentricity vector_. Although the author does not say it, it should be stressed that this first integral was discovered by the Swiss mathematician Jakob Hermann (1678-1733) [6].
In the paragraph _The Principles of symplectic mechanics_, the author first works in the framework of non relativistic, classical mechanics. He no longer is limited to systems of material points and adopts the three following assertions as new axioms of mechanics:
1. The space of the motions of a dynamical system is a _connected symplectic manifold_.
2. If several dynamical systems evolve independently, the manifold of motions of the composite system is the _symplectic direct product_ of the spaces of motions of the component systems.
3. If a dynamical system is isolated, its manifold of motions admits the _Galilei group_ as a dynamical group.
It is an _extension_ of the principles generally admitted in classical mechanics, which will allow the author to consider new dynamical systems of physical interest.
Since, for an isolated system, the author identifies the _mass_ of the system with the number \(m\) which marks the class of cohomology of the Galilei group action on the space of motions, he can now consider systems
of positive, null or negative mass. He keeps, for the more general systems that he will consider, the physical interpretation of the components of the Galilean moment which he has previously given: the vectors \(\mathbf{l}\) and \(\mathbf{p}\) are, respectively, the _angular momentum_ and the _linear momentum_ of the system, and the scalar \(E\) is its _energy_. As for \(\mathbf{g}\), it allows, for a system of non vanishing mass \(m\), to define the _center of mass_ of the system, which the author calls the _center of gravity_ or the _barycenter_: it is the point whose position vector, at each instant \(t\), is \(\mathbf{R}=(\mathbf{p}t+\mathbf{g})/m\). Hence this point moves on a straight line at constant velocity \(\mathbf{p}/m\). The author chooses as fundamental quantities the length \(L\), the time \(T\) and the action \(A\), and indicates the dimensional equations of the quantities encountered in the mathematical description of the dynamical system: coordinates of an element of the evolution space, Lagrange form, components of the matrices of an element of the Galilei group, of an element of its Lie algebra and of an element of its dual.
Next, the author proves two theorems concerning the action of a dynamical group on a symplectic manifold, which he might have placed in Chapter II. The first provides, from a moment map of the action of a dynamical group \(G\) on a connected symplectic manifold \(V\) and the associated symplectic cocycle, the expressions of a moment map and of the associated symplectic cocycle for the restriction of this action to an invariant subgroup \(G^{\prime}\) of \(G\). The second applies to a moment map \(\psi\) of the action of a connected Abelian dynamical group \(G\) on a connected symplectic manifold \(V\) when the differential \(f\) at the identity element of the symplectic cocycle \(\theta\) associated to this moment map is a nondegenerate bilinear form on the Lie algebra \(\mathcal{G}\) of \(G\).
The author points out that there exist two Abelian subgroups of dimension \(3\) of the Galilei group of changes of inertial reference frame of space-time: the group of translations of the frame of space, the frame of time remaining unchanged, and the group of changes of inertial reference frame of space-time in which a first inertial frame is replaced by a second one whose relative motion with respect to the first one is a translation at a constant velocity, the time frame remaining once again unchanged. The direct product of these two groups is a normal subgroup of the Galilei group, isomorphic to the additive group \(\mathbb{R}^{6}\).
Applying the above-mentioned theorems, the author shows that the space of motions of an isolated system of mass \(m\neq 0\) is the direct product of two spaces of motions: the _space of motions of the center of mass_, isomorphic to the space of motions of a material point of mass \(m\), and the _space of motions around the center of mass_, in which the center of mass remains always at the origin of the reference frame of space. This important result, called the _barycentric decomposition_ of the motions of an isolated system, is well known in classical mechanics. The Galilei group is a dynamical group both of the space of motions of the center of mass and of the space of motions around the center of mass, but acts on the latter only through the quotient by its normal Abelian subgroup isomorphic to \(\mathbb{R}^{6}\). This quotient is isomorphic to \(\mathrm{SO}(3)\times\mathbb{R}\). Hence the direct product of the Galilei group and of \(\mathrm{SO}(3)\times\mathbb{R}\) is a dynamical group of any isolated system of non vanishing mass. The author gives several examples, including the non-classical one of a particle with spin.
Next the author tackles the study of the relativistic systems. He calls _Lorentz frame_ an inertial reference frame of space-time in which the time unit has been chosen in such a way that the velocity of light is equal to \(1\). He presents the essential mathematical concepts used in the special theory of relativity : the Minkowski space, the Lorentz group and the Poincare group. He gives the expressions of a diffeomorphism of \(\mathrm{SO}(3)\times\mathbb{R}^{3}\) onto the restricted Lorentz group ( the connected component of this group containing the identity) and of matrix representations of the Lie algebras of these groups. He points out, without proof, several important properties: the Lie algebra of the Poincare group is equal to its derived algebra; the symplectic cohomology of the Poincare group, and more generally its cohomology valued in the dual of its Lie algebra, are trivial.
In special relativity, the passage from a Lorentz frame to another one is made by the action of an element of the restricted Poincare group. The author establishes the formula of change of Lorentz frames and gives its matrix expression. The first two axioms of symplectic mechanics remain unchanged, and in the third one the Galilei group is replaced by the restricted Poincare group. Therefore these axioms are
1. The space of motions of a dynamical system is a _connected symplectic manifold_.
2. If several dynamical systems evolve independently, the manifold of motions of the composite system is the _direct product_ of the spaces of motions of the component systems.
3. If a dynamical system is isolated, its manifold of motions admits the _restricted Poincare group_ as a dynamical group.
It is always possible to choose the moment map of the action of the restricted Poincare group on the space of motions of an isolated relativistic dynamical system so that the cocycle associated to the moment map is null. This condition determines the moment map in an unique way, while in classical mechanics, the moment map of the action of the Galilei group on the space of motions of an isolated system depends on an
arbitrary additive constant. Moreover, in relativistic mechanics, the barycentric decomposition of motions of an isolated system no longer exists.
The intersection of the Galilei and Poincare groups, considered as subgroups of the group of affine transformations of space-time, is a dynamical group of dimension 7 of isolated dynamical systems, both classical and relativistic. The moment map of its action on the space of motions of an isolated system is composed of the vectors \(\mathbf{l}\), \(\mathbf{p}\) and the scalar \(E\), which the author interpreted in classical mechanics as the angular momentum, the linear momentum and the energy. For a relativistic system, the author chooses to keep for these quantities the same interpretation as in classical mechanics. The choice of a Lorentz frame makes it possible to associate to the couple \((\mathbf{p},E)\) a vector \(P\) of the Minkowski space-time, called the _4-momentum_ or the _energy-momentum vector_ of the system. Next the author indicates some formulae useful in the geometry of the oriented Minkowski space, referring for more details to his book [7]. The action of the Poincare group on the space of motions of an isolated relativistic system reveals a second vector \(W\) of the Minkowski space-time, called the _polarization vector_, orthogonal to the energy-momentum vector \(P\).
In a long section, the author proposes a mechanistic description of elementary particles. In the framework of relativistic mechanics, an isolated dynamical system is said to be _elementary_ when the Poincare group acts transitively on the space of its motions. The moment map of its action is then a symplectic diffeomorphism of this space onto a coadjoint orbit of the Poincare group. For the author, the _elementary systems_ so defined are mathematical models for the _elementary particles_ of physicists. He uses the type (timelike, spacelike or lightlike, in other words isotropic) of the quadrivectors \(P\) and \(W\), defined in the previous section, for a classification of elementary systems. By these means, he obtains a large part of the physicists' classification of elementary particles. His results are briefly summarized below.
**Case 1, a particle with spin.** This is the case when \(P\) is timelike and when \(W\) (which, being orthogonal to \(P\), is spacelike) is non-zero. The author proves that two real numbers \(m\) and \(s\), whose expressions are given in terms of \(P\) and \(W\), can be interpreted as the _mass_ and the _spin_ of the particle. These numbers are constant on the space of motions of the system. The mass \(m\) is non-zero, but can be either positive or negative, while the spin \(s\) is always strictly positive. For each given pair \((m,s)\), with \(m\neq 0\) and \(s>0\), there exists only one model of particle with mass \(m\) and spin \(s\). Its space of motions is 8-dimensional. For each motion of the system, there exists an affine straight line of the Minkowski space-time, parallel to the timelike quadrivector \(P\), interpreted as the _trajectory_ of the particle. By expressing the energy-momentum quadrivector \(P\) in any Lorentz reference frame, the author observes that the norm of the velocity \(\mathbf{v}\) of the particle in that reference frame is always smaller than 1 (which is, with the chosen units, the norm of the velocity of light) and obtains Einstein's famous formula
\[E=\frac{m}{\sqrt{1-\|\textbf{v}\|^{2}}}.\]
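One way to recover this formula, under the usual conventions (signature \((+,-,-,-)\) and velocity of light equal to \(1\), a choice which may differ in detail from the author's), is to write \(P=(E,\mathbf{p})\) in the chosen Lorentz frame, with \(m^{2}=E^{2}-\|\mathbf{p}\|^{2}\) and \(\mathbf{v}=\mathbf{p}/E\); then

\[E^{2}\bigl(1-\|\mathbf{v}\|^{2}\bigr)=m^{2}\,,\quad\text{hence}\quad E=\frac{m}{\sqrt{1-\|\mathbf{v}\|^{2}}}\approx m+\tfrac{1}{2}m\|\mathbf{v}\|^{2}\quad\text{for small }\|\mathbf{v}\|\,,\]

the last approximation exhibiting, for \(m>0\), the rest energy and the classical kinetic energy.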
**Case 2, a particle without spin.** This is the case when \(P\) is timelike and \(W=0\). There still exists a real number \(m\neq 0\), expressed in terms of \(P\), interpreted as the _mass_ of the particle, constant on the space of motions of the system. For each given real \(m\neq 0\), there exists only one model of particle without spin with mass \(m\), and its space of motions is 6-dimensional. Still in that case, for each motion of the system, there exists an affine straight line of the Minkowski space-time, parallel to the timelike quadrivector \(P\), interpreted as the _trajectory_ of the particle. In any Lorentz reference frame the norm of the velocity of the particle is always smaller than 1, and the energy \(E\) and the velocity \(\mathbf{v}\) of the particle are related by Einstein's formula written above.
**Case 3, a massless particle.** This is the case when both \(P\) and \(W\) are non-zero and lightlike. The author defines three real numbers \(\eta=\pm 1\), \(\chi=\pm 1\) and \(s>0\), interpreted, respectively, as the _sign of the energy_, the _helicity_ and the _spin_ of the particle, expressed in terms of \(P\) and \(W\), constant on the space of motions of the system. For each triple \((\eta,\chi,s)\) of real numbers satisfying \(\eta=\pm 1\), \(\chi=\pm 1\) and \(s>0\), there exists only one model of massless particle with sign of the energy \(\eta\), helicity \(\chi\) and spin \(s\), and its manifold of motions is 6-dimensional. For each motion of the system, there exists a three-dimensional affine subspace of the Minkowski space-time which, interpreted in any Lorentz reference frame, can be described as a two-dimensional spacelike plane moving at the velocity of light, called the _wavefront_ of the particle. There is no longer an affine straight line of the Minkowski space-time which can be considered as the trajectory of the particle. More precisely, for each Lorentz reference frame, there is such an affine straight line, which is lightlike and contained in the wavefront of the particle. However, this straight line depends on the chosen Lorentz reference frame and sweeps the whole wavefront when the considered Lorentz reference frame takes all the possible values.
The author briefly indicates the existence of other elementary systems, for example _tachyons_, which do not correspond to known elementary particles. Then he looks at _non-relativistic elementary particles_, beginning with particles without spin. He obtains a mathematical model of such particles by means of a
suitable change of variables in which the velocity of light \(c\) appears, and then by letting \(c\mapsto+\infty\). The model so obtained corresponds to material points considered in classical non-relativistic mechanics. The same procedure starting with relativistic particles with spin leads to a model of non-relativistic material point with spin, interpreted as a _proper angular momentum_. He briefly indicates another way in which mathematical models of non-relativistic particles could be obtained, using as spaces of motions orbits of affine actions of the Galilei group on the dual of its Lie algebra, which may involve a symplectic cocycle. By this means he obtains models of non-relativistic massless particles moving at an infinite velocity, each model being characterized by three real numbers \(\chi=\pm 1\), \(s>0\) and \(k>0\), called, respectively, the _helicity_, the _spin_ and the _color_ of the considered particle.
At the end of this section, the author explains that the theory of general relativity argues in favour of physical elementary dynamical systems whose space of motions admits the full Poincare group \(G^{\prime}\) as a dynamical group. Such a system's space of motions may have several connected components. The full Poincare group \(G^{\prime}\) has four connected components, and all its elements are obtained by composition of an element of the restricted Poincare group \(G\) (the connected component of the neutral element) with elements of two discrete subgroups, each with two elements: the group of _space inversions_ (exchange of right and left), and the group of _time reversals_ (exchange of past and future). The author fully discusses geometric properties of the moment map of a Hamiltonian action of \(G^{\prime}\) on a (maybe non-connected) symplectic manifold, as well as geometric properties of its coadjoint orbits, and presents their consequences for physical isolated elementary systems whose space of motions admits \(G^{\prime}\) as a dynamical group. He considers elementary particles first with a non-zero mass, then with zero mass. Known massless particles (photons and neutrinos) exist with two opposite helicities, and the author considers this fact as an argument in favour of the admission of the group of space inversions as an invariance group of mechanics.
Chapter III ends with a study of particle dynamics, first in the framework of classical mechanics in a fixed inertial reference frame. Starting from the dynamics of a free material point, the author explains how to describe the dynamics of an electrically charged particle submitted to an electric field \(\mathbf{E}\) and a magnetic field \(\mathbf{B}\). He uses a modification of the Lagrange form to account for the effects of \(\mathbf{E}\), \(\mathbf{B}\) and the electric charge of the particle. Using the fact that \(\mathbf{E}\) and \(\mathbf{B}\) satisfy the Maxwell equations, he proves that this modified Lagrange form still is closed, and that its kernel, of dimension 1, still determines a foliation in curves of the evolution space, and derives the equations of motion. The same procedure, in which the symplectic form of the space of free motions of a relativistic particle is used (instead of that of the space of free motions of a classical particle), leads to the equations of motion of a relativistic electrically charged particle submitted to an electromagnetic field. In these equations, in good agreement with experimental results, appears the relativistic correction of the linear momentum well known to physicists.
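Written in conventional vector notation (ours rather than the author's), with \(e\) the electric charge, the equations of motion obtained in this way are the familiar Lorentz force equations, in the classical and then in the relativistic case:

\[m\frac{\mathrm{d}\mathbf{v}}{\mathrm{d}t}=e\bigl(\mathbf{E}+\mathbf{v}\times\mathbf{B}\bigr)\,,\qquad\frac{\mathrm{d}}{\mathrm{d}t}\Bigl(\frac{m\mathbf{v}}{\sqrt{1-\|\mathbf{v}\|^{2}}}\Bigr)=e\bigl(\mathbf{E}+\mathbf{v}\times\mathbf{B}\bigr)\,,\]

the second displaying the relativistic correction of the linear momentum mentioned above.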
For a particle with spin, either classical or relativistic, submitted to an electromagnetic field, the author explains that experiments show that still another term must be added to the Lagrange form. This term is the product by a constant \(\mu\) (later interpreted as the modulus of the magnetic moment of the particle) of the exterior derivative of a 1-form \(\varpi\) which, in the non-relativistic limit, is the product of \(\mathrm{d}t\) by a function. The equations of motion so obtained in the non-relativistic approximation are in good agreement with the Stern and Gerlach experiments, as well as with the precession of spin and the magnetic resonance phenomena. For a relativistic particle these equations become very complicated. When the electromagnetic field is constant, they are used for the measurement of the _anomalous magnetic moment_ of particles.
The author now considers \(n\) non-interacting particles, which may be either free, or subjected to a field. Let \(U_{i}\) be the space of motions of the system made by the \(i\)-th particle alone. The space of motions \(U\) of the system is the direct product of the spaces of motions \(U_{i}\), \(1\leq i\leq n\). The author describes two different ways in which an evolution space can be built for the system, which lead to two different evolution spaces, respectively called the _synchronous evolution space_ and the _asynchronous evolution space_. The synchronous evolution space is obtained by adding to the space of motions \(U\) one dimension for the time, and by taking the initial conditions of all particles at the same time. Its dimension is \(\dim U+1\). The asynchronous evolution space, built by taking the initial conditions of the particles at different times, is of dimension \(\dim U+n\), and involves \(n\) different times, one for each particle. Both the synchronous and the asynchronous evolution spaces are presymplectic manifolds which project onto the space of motions \(U\). The synchronous evolution space should be used only for non-relativistic particles since it involves a notion of simultaneity, which in relativistic physics depends on the choice of a Lorentz reference frame. When all the particles of the system are identical and submitted to the same field, the spaces of motions \(U_{i}\) for all \(i\in\{1,\ldots,n\}\), are equal to \(U_{1}\) and one could think that the space of motions of the system is \((U_{1})^{n}\). However, experiments have shown that motions which only differ by the labelling of particles should be identified. The true space of motions of the system is the set of equivalence classes of \(n\)-tuples \((x_{1},\ldots,x_{n})\in(U_{1})^{n}\) which satisfy \(x_{i}\neq x_{j}\) for \(i\neq j\), \(1\leq i,j\leq n\), two \(n\)-tuples \((x_{1},\ldots,x_{n})\) and \((x_{1}^{\prime},\ldots,x_{n}^{\prime})\) being equivalent if
there exists a permutation \(\sigma\) of \(\{1,\ldots,n\}\) such that \(x^{\prime}_{i}=x_{\sigma(i)}\) for all \(i\), \(1\leq i\leq n\). The author indicates the expression of the Lagrange form and remarks that the use of a set of equivalence classes as a set of motions of the system is a consequence of the indiscernibility of particles which does not involve quantum mechanics.
A "classical" method for obtaining the equations of motion of a system of interacting particles uses the synchronous evolution space of the system when the particles do not interact, with a Lagrange form modified by addition of a suitable term involving an _interaction potential_. This method works successfully for celestial mechanics. The author uses it for a system of non-relativistic particles with spin, indicates the equations of motion and, when the system is isolated, writes the formulae which express the constancy of the Galilean moment map. The equations so obtained take into account the electrostatic and magnetostatic forces exerted by each moving particle on the other particles. However, they do not account for other small, but measurable relativistic effects, such as the Laplace force. To account for these effects, the author considers the use of the asynchronous evolution space. He proves that the use of that space would prohibit the existence of interactions between the particles, and concludes that a way to get around this difficulty could be to abandon the idea of localized particles, the distinction between particles and fields being a non-relativistic approximation.
The _scattering theory_ is an approximate mathematical description of a system of interacting particles in which it is assumed that when each particle is far enough from all other particles, the interaction forces can be neglected. The author presents a mathematically rigorous version of this theory. He assumes that unscattered and scattered dynamical systems share the same evolution space with two different Lagrange forms, equal on an open subset \(\Omega\) of the evolution space. The complement of \(\Omega\), on which the Lagrange forms of the scattered and unscattered systems are not equal, is called the _scattering source_. A scattered motion is said to be _constrained_ when it is wholly contained in the scattering source. The author looks at unconstrained scattered motions contained in the scattering source only for a bounded time interval. He states, without a complete proof, a theorem according to which such a motion coincides with two different unscattered motions, one before it enters the scattering source and another one after it has finally left the scattering source. The correspondence so established between two unscattered motions is a symplectomorphism between two open subsets of the set of motions of the unscattered system. The author defines the _symmetry group_ of the scattering source and proves that it is a dynamical group for both systems, scattered as well as unscattered. For _bounded_, _static_ or _conservative_ scattering sources, some properties of this group can be deduced. The author briefly presents the dynamical system made by photons in a refracting telescope. Due to a term containing a length (later interpreted as the wavelength of the light) in the formulae he obtains, the image of a distant star can never be a point. When this term is neglected, one obtains the _geometrical optics_ approximation. Using the scattering theory, the author discusses, in relativistic physics, the reflection of light on a moving mirror and obtains formulae for the _Doppler effect due to reflection_.
Collisions of relativistic free particles with non-zero masses are finally discussed by the author, with the help of the scattering theory. Though no valid model of relativistic interacting particles is available, the author obtains some properties of the symplectomorphism which relates the free motions of the particles before and after their collision. He proves that this symplectomorphism commutes with the action of elements of the Poincare group, which implies that the total momentum and the total energy of the system of particles are conserved by collisions. As a conclusion, he states that knowing this symplectomorphism could make it possible to study the _constrained motions_ by the technique of analytic continuation.
_Chapter IV, Statistical mechanics._ It contains two sections. The first one, of about 50 pages, is essentially mathematical. The second one, of about 35 pages, presents the principles of statistical mechanics.
The first section of this chapter begins by introducing various concepts related to smooth manifolds, topological spaces, ordered vector spaces (called _Riesz spaces_), normed vector spaces, especially Banach and Hilbert spaces. A very condensed course (about 30 pages) in Measure Theory and Integration follows, with some notions in Probability. The author uses the presentation, favoured by the Bourbaki school, in which a measure on a smooth Hausdorff manifold \(V\) is an element of the topological dual vector space of the space of continuous, compactly supported functions on \(V\), instead of defining first measurable subsets of \(V\), and then a measure as a countably additive function defined on the set of measurable subsets. Then the author defines _probability measures_. A measure \(\lambda\) on \(V\) is said to be _defined by an everywhere positive continuous field of densities_ on \(V\), when its expression in every chart is the product, by an everywhere strictly positive continuous function, of the Lebesgue measure. The measure \(\lambda\) being fixed, the author considers a probability measure whose density with respect to \(\lambda\) is a continuous function \(\rho\geq 0\). He defines
the \(\lambda\)_-entropy_ of this probability measure by setting
\[s_{\lambda}(\rho)=\begin{cases}\int_{V}-\rho(x)\log\bigl{(}\rho(x)\bigr{)} \lambda(\mathrm{d}x)&\text{if this integral converges,}\\ -\infty&\text{if the above integral does not converge}\,.\end{cases}\]
By convention, at points \(x\in V\) where \(\rho(x)=0\), the value taken by the function \(x\mapsto-\rho(x)\log\bigl{(}\rho(x)\bigr{)}\) is \(0\).
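A simple illustration, not taken from the book: when the total measure \(\lambda(V)\) is finite, the constant density \(\rho=1/\lambda(V)\) defines a probability measure whose \(\lambda\)-entropy is

\[s_{\lambda}(\rho)=\int_{V}-\frac{1}{\lambda(V)}\log\Bigl(\frac{1}{\lambda(V)}\Bigr)\lambda(\mathrm{d}x)=\log\lambda(V)\,.\]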
A continuous map \(\Psi\), defined on \(V\), with values in a finite-dimensional vector space \(E\) being given, the author calls _generalized Gibbs probability law_ any completely continuous probability law for which the map \(\Psi\) is integrable, whose density \(\rho\) with respect to \(\lambda\) is expressed as
\[\rho(x)=\exp\Bigl{(}-\bigl{(}z+\langle Z,\Psi(x)\rangle\bigr{)}\Bigr{)}\,, \quad\text{with }x\in V\,.\]
In the above equality, \(Z\) is an element of the dual vector space \(E^{*}\) of \(E\), which must be such that the integrals below, appearing on the right hand sides of the equalities defining \(I_{0}\) and \(I_{1}\), are convergent. The real \(z\) must be chosen in such a way that the integral of \(\rho\) on \(V\) with respect to the measure \(\lambda\) is \(1\).
The _normal distribution_ on \(\mathbb{R}^{n}\) is a generalized Gibbs probability law for a suitable choice of the map \(\Psi\).
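For instance, with assumptions and notation that are ours rather than the author's, take \(V=\mathbb{R}\), \(\lambda\) the Lebesgue measure and \(\Psi(x)=(x,x^{2})\) with values in \(E=\mathbb{R}^{2}\); writing \(Z=(Z_{1},Z_{2})\) with \(Z_{2}>0\), the density of the corresponding generalized Gibbs law is

\[\rho(x)=\exp\Bigl(-\bigl(z+Z_{1}x+Z_{2}x^{2}\bigr)\Bigr)\,,\]

which, once \(z\) is adjusted so that \(\int_{\mathbb{R}}\rho(x)\,\lambda(\mathrm{d}x)=1\), is the normal density of mean \(\mu=-Z_{1}/(2Z_{2})\) and variance \(\sigma^{2}=1/(2Z_{2})\).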
The mean value of \(\Psi\), for the probability law of density \(\rho\) with respect to \(\lambda\), being denoted by \(M\), and setting
\[I_{0}=\int_{V}\exp\Bigl{(}-\langle Z,\Psi(x)\rangle\Bigr{)}\lambda(\mathrm{d} x)\,,\quad I_{1}=\int_{V}\Psi(x)\exp\Bigl{(}-\langle Z,\Psi(x)\rangle\Bigr{)} \lambda(\mathrm{d}x)\,,\]
the author can write
\[z=\log(I_{0})\,,\quad M=\int_{V}\Psi(x)\rho(x)\lambda(\mathrm{d}x)=\frac{I_{1 }}{I_{0}}\,.\]
The \(\lambda\)-entropy of the generalized Gibbs law of density \(\rho\) with respect to \(\lambda\) is
\[s(\rho)=z+\langle Z,M\rangle\,.\]
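This expression follows in one line from the definitions, since \(-\log\rho(x)=z+\langle Z,\Psi(x)\rangle\) and \(\int_{V}\rho(x)\,\lambda(\mathrm{d}x)=1\):

\[s(\rho)=\int_{V}\rho(x)\bigl(z+\langle Z,\Psi(x)\rangle\bigr)\lambda(\mathrm{d}x)=z+\Bigl\langle Z,\int_{V}\Psi(x)\rho(x)\,\lambda(\mathrm{d}x)\Bigr\rangle=z+\langle Z,M\rangle\,.\]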
The author proves that, on the set of completely continuous probability laws for which the mean value of \(\Psi\) is \(M\), the \(\lambda\)-entropy functional has a strict maximum at the generalized Gibbs probability law of density \(\rho\) with respect to \(\lambda\). Moreover, when the set of values of \(\Psi\) is not contained in an affine subspace of \(E\) of dimension strictly smaller than \(\dim E\), he proves that the map \(Z\mapsto M\) is injective. He derives conditions for the differentiability with respect to \(Z\), under the \(\int\) sign, of the integrals \(I_{0}\) and \(I_{1}\). He will later improve these results in his paper [8], published a few years after his book [1].
The author then assumes that the Hausdorff manifold \(V\) is endowed with a symplectic form and that a connected Lie group \(G\) acts on it by a Hamiltonian action. For \(\lambda\), he chooses the Liouville measure, and for the map \(\Psi\) a moment map of the action of \(G\). The vector space \(E\) is therefore now the dual vector space \(\mathfrak{g}^{*}\) of the Lie algebra \(\mathfrak{g}\) of \(G\), and its dual \(E^{*}\) is \(\mathfrak{g}\). He denotes by \(\Omega\) the largest open subset of \(\mathfrak{g}\) on which the integrals \(I_{0}\) and \(I_{1}\) are convergent and define differentiable functions of the variable \(Z\) whose differentials are continuous and can be obtained by differentiation under the \(\int\) sign. A generalized Gibbs probability law can then be associated to each \(Z\in\Omega\). The author indicates the corresponding expressions of \(z=\log(I_{0})\), of the entropy \(s\) and of the mean value \(M\) of the moment map, considered as functions of the variable \(Z\in\mathfrak{g}\). He proves that when \(G\) acts effectively on \(V\), the map \(Z\mapsto M\) is injective and open, therefore is a diffeomorphism of \(\Omega\) onto an open subset \(\Omega^{*}\) of \(\mathfrak{g}^{*}\). By using the inverse map, \(z\), \(s\) and \(Z\) can be considered as functions of the variable \(M\), which spans the open subset \(\Omega^{*}\) of \(\mathfrak{g}^{*}\). The author proves that these functions can be differentiated and indicates the expressions of their differentials. Adding a constant to the moment map does not change the generalized Gibbs laws on the manifold \(V\). Their set is called by the author the _Gibbs set_ of the considered Lie group action. Moreover the author proves that the open subset \(\Omega\) of \(\mathfrak{g}\) is a union of orbits of the adjoint action of \(G\) and is endowed with a negative definite Euclidean metric.
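The differentials alluded to here take a simple form; with \(z=\log I_{0}\) and \(s=z+\langle Z,M\rangle\), a direct computation under the stated differentiability assumptions (a sketch consistent with, but not copied from, the author's formulas) gives

\[\mathrm{d}z=-\langle\mathrm{d}Z,M\rangle\,,\qquad\mathrm{d}s=\langle Z,\mathrm{d}M\rangle\,,\]

so that \(M\) is minus the derivative of \(z\) with respect to \(Z\), and \(Z\) is the derivative of \(s\) with respect to \(M\).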
Results obtained by the author in the first section of Chapter IV are, in the second section, applied to dynamical systems encountered in physics. A _statistical state_ of such a system is a probability law on the space of its motions. The author explains that in the _kinetic theory of gases_, a gas in a container at rest in a Galilean reference frame is considered as an assembly of very small material particles whose motions are governed by the laws of classical mechanics, interacting by instantaneous elastic collisions between themselves and with the walls of the container. With these assumptions, the gas can be modelled by a conservative Hamiltonian system. The one-dimensional group of time translations acts on the symplectic manifold of motions of the system by a Hamiltonian action, with the energy as a moment map. The author explains that the entropy of the system increases with time, so assuming that the _natural equilibria_ of the gas are elements of the Gibbs set of the group of time translations is a very reasonable assumption. Each
Gibbs state is determined by an element \(Z\) of the one-dimensional Lie algebra of this group, which is a way of measuring the _gas temperature_. Each Gibbs state is unaffected by the action of the group of time translations, since the adjoint action of this group on its Lie algebra is trivial. The author indicates the dimensional equations of quantities which determine a Gibbs state, or appear in its description. He explains that when a compound system is the union of several subsystems, the tensor product of natural equilibria of the subsystems is a natural equilibrium of the compound system only when the element \(Z\) of the Lie algebra of the group of time translations which determines the natural equilibria is the same for all subsystems, in other words only when the temperatures of all the subsystems are equal. If this condition is not satisfied, the compound system is in a state of natural equilibrium only when the subsystems cannot exchange energy between them. As soon as exchanges of energy can occur, even when they are very tiny, the compound system is no longer in a natural state of equilibrium.
For the determination of the Gibbs states of a gas, one has to calculate the integral \(I_{0}\). The _ideal gas approximation_ is when, for this calculation, the only motions taken into account are those for which, at the considered time, no collisions occur between particles or between a particle and the walls of the container, and when the total volume of the particles is considered as negligible in comparison with the volume of the container. The author successively considers monoatomic and polyatomic ideal gases and derives, for a Gibbs state, the probability distribution of velocities of the particles, which was determined by Maxwell in 1860. He explains the principle of an _ideal gas thermometer_ used for precise measurements of the temperature. He is led to the formula \(Z=1/(kT)\), which expresses the absolute temperature \(T\) in terms of the element \(Z\) of the Lie algebra of the group of time translations, and of _Boltzmann's constant_ \(k\). He obtains the Mariotte, Gay-Lussac and Avogadro laws for perfect gases well known to physicists. He proves that for an ideal gas thermometer, the temperature is a random variable whose probability law is very concentrated around its mean value, and converges (for the weak topology) towards the Dirac measure when the number of particles tends to infinity.
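For the record, in such a Gibbs state the velocity \(\mathbf{v}\) of a particle of mass \(m\) has, in standard notation (with \(T\) the absolute temperature and \(k\) Boltzmann's constant), the probability density

\[f(\mathbf{v})\;\propto\;\exp\Bigl(-\frac{m\|\mathbf{v}\|^{2}}{2kT}\Bigr)\,,\]

which is Maxwell's distribution; the corresponding mean kinetic energy per particle is \(\tfrac{3}{2}kT\).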
For a conservative system, the entropy \(s\) of a Gibbs state is a smooth function of the energy \(E\) whose derivative is the element \(Z\) of the Lie algebra which determines the Gibbs state. Units being chosen, \(Z\) becomes a real number. Experimental results show that this number is always strictly positive. The author can therefore consider the energy \(E\) of a Gibbs state as a function of its entropy \(s\) and of other variables which describe the system. For example, when the considered system is a gas, these additional variables are the volume of the container, the number of particles, etc. The author assumes that these variables can be globally described by an element \(u\) in a smooth Hausdorff manifold. Infinitesimal variations of a Gibbs state are therefore described by the differential of \(E\),
\[\mathrm{d}E=\frac{\mathrm{d}s}{Z}+\varpi\mathrm{d}u=T\mathrm{d}S+\varpi\mathrm{ d}u\,,\]
where \(\varpi\) denotes the partial differential of \(E\) with respect to \(u\), and \(S=ks\) (\(k\) being Boltzmann's constant). The author calls \(S\) and \(u\) the _position variables_, \(T\) and \(\varpi\) the associated _tension variables_. For example, when the considered system is a gas in a container whose volume may vary, the above expression becomes
\[\mathrm{d}E=T\mathrm{d}S-p\mathrm{d}V\,,\]
where \(p\) is the _pressure_.
The author then explains that the infinitesimal variation of the energy \(E\) is the sum of the infinitesimal variations of two quantities \(Q\) and \(W\), called respectively the _heat_ and the _work_, which are not functions, but rather _action integrals_ in the sense of the calculus of variations. Variations of heat and work depend on the path followed while the adjustable variables which describe the state of the system change. Different types of evolution of the state of a system should be separately considered, for example _adiabatic evolution_, _isothermal evolution_, etc.
The author defines the _heat capacities_ of a gas at _constant volume_ and at _constant pressure_, and indicates the expressions of the thermodynamic functions of an _ideal fluid_. These expressions depend on the chosen model for the particles. A table indicates their expressions and values for various models (material point, particle with spin, etc). Agreement with experimental results is good for monoatomic gases, not so good for polyatomic gases and for solids at low temperatures.
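As an illustration (ours, using the usual elementary formulas), for a monoatomic ideal gas of \(N\) particles modelled as material points one has \(E=\tfrac{3}{2}NkT\) and \(pV=NkT\), hence

\[C_{V}=\frac{3}{2}Nk\,,\qquad C_{p}=C_{V}+Nk=\frac{5}{2}Nk\,,\qquad\frac{C_{p}}{C_{V}}=\frac{5}{3}\,,\]

values that agree well with measurements on monoatomic gases, as recalled above.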
Since the group of time translations is a subgroup, but not a _normal_ subgroup, of the Galilei group, a dynamical system conservative in some inertial reference frame is not conservative in a different inertial reference frame. This important remark leads the author to introduce the new concept of _covariant statistical mechanics_ by proposing the following principle:
_When a dynamical system is invariant by the action of some Lie subgroup \(G^{\prime}\) of the Galilei group, its natural equilibria are the elements of the Gibbs set of the action of \(G^{\prime}\)._
Each natural equilibrium of such a system is determined by an element \(Z\) of the Lie algebra \(\mathfrak{g}^{\prime}\) of \(G^{\prime}\), which is a Lie subalgebra of the Lie algebra of the Galilei group. The author observes that \(Z\) generalizes the inverse of the temperature, and discusses its physical interpretation. For this purpose he chooses an inertial reference frame \(\mathcal{R}_{0}\) of the Galilei space-time and considers the reference frame \(\mathcal{R}=\exp(iZ)\mathcal{R}_{0}\), which generally is not inertial. He looks at the indications given, in the reference frame \(\mathcal{R}\), by an ideal gas thermometer in equilibrium with the considered system. It amounts to observing the system in a moving frame. The author proves that it is as if \(\mathcal{R}\) were an inertial frame and the particles of gas were submitted to additional forces (inertial forces, centrifugal and Coriolis forces). By this means he obtains general formulae for the probability density of the Gibbs state associated to \(Z\). He then discusses in greater detail several examples: the wind (the relative motion of \(\mathcal{R}\) with respect to \(\mathcal{R}_{0}\) is a translation at constant velocity), an accelerating rocket (this relative motion is a uniformly accelerated translation), a gas in a centrifuge (this relative motion is now a rotation around an axis at a constant angular velocity). In this last example he also considers a system made of particles with spin, and finds that the most probable orientation of the particle's spin is parallel to the rotation axis.
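For a gas of material points of mass \(m\), the densities obtained in this way reduce to familiar formulas, given here as an illustration in our notation: in the uniformly accelerated frame with acceleration \(a\) along a coordinate \(\zeta\), and in the centrifuge rotating at angular velocity \(\omega\) (with \(r\) the distance to the rotation axis),

\[\rho_{\text{rocket}}(\zeta)\;\propto\;\exp\Bigl(-\frac{ma\zeta}{kT}\Bigr)\,,\qquad\rho_{\text{centrifuge}}(r)\;\propto\;\exp\Bigl(\frac{m\omega^{2}r^{2}}{2kT}\Bigr)\,,\]

the first being the barometric formula, the second showing the accumulation of particles far from the axis.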
One may wish to apply the above principle for a system invariant by the whole Galilei group. However, the subset \(\Omega\) of the Lie algebra of the Galilei group made by elements which determine a Gibbs state is then empty. Looking at the motions of the system around its center of mass, the author is led to consider the just discussed equilibria in a rotating frame. For the author, the rotation of celestial bodies observed in Astronomy confirms the validity of his principle of covariant statistical mechanics.
For a relativistic dynamical system, the Galilei group must be replaced with the Poincare group. As above, the Gibbs set of the action of the whole Poincare group is empty, and the author is led to consider systems invariant by the action of a subgroup \(G^{\prime}\) of the Poincare group. He presents in more detail several examples: an ideal gas in a container at rest in an inertial frame (in that example Maxwell's distribution law for the velocities of particles is slightly modified); relativistic wind; statistical equilibria of photons. In this last example, the number of photons cannot be fixed. The author explains how such a system can be described and even takes into account the fact that photons can have two opposite circular polarizations. However, his formula is not in agreement with the one obtained by Planck for the black-body radiation, which is in very good agreement with experimental observations. The author concludes that this last example must be dealt with in the framework of quantum mechanics.
_Chapter V, Geometric quantization._ It contains two long sections, each of around 40 pages. Before describing their contents, let us recall that a _contact form_ on a smooth Hausdorff manifold \(Y\) is a differential 1-form \(\omega\), which nowhere vanishes on \(Y\), whose exterior derivative \(\mathrm{d}\omega\) is non degenerate on \(\ker\omega\). The existence of a contact form on the manifold \(Y\) has several important consequences: \(Y\) must be odd-dimensional, the 2-form \(\mathrm{d}\omega\) is a _presymplectic form_ and there exists on \(Y\) a unique vector field \(R_{Y}\), called the _Reeb vector field_ (in honour of the French mathematician Georges Reeb), determined by the two equalities \(\mathrm{i}(R_{Y})(\mathrm{d}\omega)=0\) and \(\mathrm{i}(R_{Y})\omega=1\).
The author defines a _quantum manifold_ as a smooth manifold \(Y\) endowed with a contact form \(\omega\) such that all integral curves of the Reeb vector field \(R_{Y}\) are \(2\pi\)-periodic. These curves are then the orbits of an operation on \(Y\) of the one-dimensional torus. The set of these curves, in other words the quotient of \(Y\) by this operation, is a symplectic manifold \((U,\sigma_{U})\), called by the author the _basis_ of the quantum manifold \(Y\). The quantum manifold projects on \(U\), and the pull-back by the projection map of the symplectic form \(\sigma_{U}\) is the presymplectic form \(\mathrm{d}\omega\).
A _quantization_ of a given Hausdorff symplectic manifold \((U,\sigma_{U})\) is defined by the author as the construction of a quantum manifold whose basis is \((U,\sigma_{U})\). A symplectic manifold is said to be _quantizable_ when its quantization is possible. The author indicates, without proof, a necessary and sufficient condition for a given symplectic manifold to be quantizable: the integrals of its symplectic form over two-dimensional cycles must be of the form \(2n\pi\), with \(n\) an integer. This condition is satisfied when \(\sigma_{U}\) is the exterior derivative of a 1-form. The manifolds of motions of many mechanical systems, for example harmonic oscillators, Kepler's problem, and the \(N\)-body problem of celestial mechanics, are therefore quantizable. The author proves that the manifold of motions of a non-relativistic particle with spin is quantizable if and only if the spin of the particle is integer or half-integer, when expressed with \(\hbar\) as unit. The author indicates several examples of quantizable symplectic manifolds, together with the full description of the corresponding quantum manifolds: two-dimensional spheres of integer or half-integer radii, spaces of motions of a relativistic particle, first with a non-zero mass, then with zero mass, with spin \(1/2\) or \(1\).
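When the symplectic form of \(U\) is exact, a quantization is easy to exhibit explicitly (a minimal sketch, in our notation rather than the author's): if \(\sigma_{U}=\mathrm{d}\alpha\), take \(Y=U\times T\), with \(\theta\) the angle coordinate on the one-dimensional torus \(T\), and

\[\omega=\mathrm{pr}_{U}^{*}\alpha+\mathrm{d}\theta\,.\]

Then \(\mathrm{d}\omega=\mathrm{pr}_{U}^{*}\sigma_{U}\), the Reeb vector field is \(\partial/\partial\theta\), whose integral curves \(\{u\}\times T\) are \(2\pi\)-periodic, and the basis of this quantum manifold is indeed \((U,\sigma_{U})\).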
Isomorphisms of quantum manifolds are called by the author _quantomorphisms_. Any quantomorphism between two quantum manifolds projects onto a symplectomorphism between their bases. Two quantizations \(Y\) and \(Y^{\prime}\) of the same symplectic manifold are said to be _equivalent_ when there exists between them
a quantomorphism which projects onto the identity map of their common basis. A symplectic manifold is said to be _monoquantizable_ when all its quantizations are equivalent.
A group \(\Gamma\) of quantomorphisms of a quantum manifold \(Y\) projects onto a group of symplectomorphisms of its basis \(U\), and its projection is a group homomorphism. Conversely, a group \(G\) of symplectomorphisms of the basis \(U\) is said to be _liftable_ if there exists a group \(\Gamma\) of quantomorphisms of \(Y\) which projects onto it. When in addition the projection of \(\Gamma\) onto \(G\) is a group isomorphism, \(\Gamma\) is said to be an _isomorphic lift_ of \(G\). The author proves that the set of isomorphic lifts of \(G\) is in one to one correspondence with the set of its characters (group homomorphisms of \(G\) into the one-dimensional torus).
A theorem due to the author states that a simply connected quantizable symplectic manifold is monoquantizable. Another theorem explains how to quantize a symplectic manifold \(U\) when a quantization of a covering manifold \(U^{\prime}\) of \(U\) is known. Conversely, when a quantization \(Y\) of \(U\) is known and when the group of symplectomorphisms of \(Y\) determined by the covering manifold \(U^{\prime}\) isomorphically lifts onto a group \(\Gamma\) of quantomorphisms of \(Y\), \(\Gamma\) is a discrete group and can be used to build a covering manifold \(Y^{\prime}\) of \(Y\), which quantizes \(U^{\prime}\). Using these two theorems, it can be proven that there exist as many non-equivalent quantizations of a quantizable connected symplectic manifold as its homotopy group has distinct characters. Therefore, being simply connected, the two-dimensional sphere, any symplectic vector space, the space of motions of a free particle (with or without spin, non-relativistic or relativistic), when quantizable, are monoquantizable. The author then discusses the quantizability of the manifold of motions of a system of \(N\) particles without interactions. If each of these particles can be distinguished from the others and has an integer or half-integer spin, the space of motions of the system is quantizable and simply connected, therefore monoquantizable. But if each particle cannot be distinguished from the others, the space of motions of the system has exactly two non-equivalent quantizations, corresponding to the two distinct characters of the group of permutations of a set of \(N\) elements. These two quantizations are the Bose-Einstein and Fermi-Dirac quantizations, well known to physicists.
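Indeed, for \(N\geq 2\) the group of permutations of a set of \(N\) elements has exactly two characters, the trivial one and the signature \(\sigma\mapsto\operatorname{sgn}(\sigma)\). Informally, and not in the author's own formulation, the two quantizations correspond in the language of wave functions to the symmetric and antisymmetric behaviours

\[\Psi(x_{\sigma(1)},\ldots,x_{\sigma(N)})=\Psi(x_{1},\ldots,x_{N})\ \ \text{(Bose-Einstein)}\,,\qquad\Psi(x_{\sigma(1)},\ldots,x_{\sigma(N)})=\operatorname{sgn}(\sigma)\,\Psi(x_{1},\ldots,x_{N})\ \ \text{(Fermi-Dirac)}\,.\]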
A smooth vector field, defined on a quantum manifold \(Y\), whose flow acts on \(Y\) by quantomorphisms, is called by the author an _infinitesimal quantomorphism_. The Lie derivative of the contact form \(\omega\) defined on \(Y\) with respect to an infinitesimal quantomorphism vanishes. The author proves that each infinitesimal quantomorphism is associated to a smooth function defined on the basis \(U\) of \(Y\). For example, the Reeb vector field is an infinitesimal quantomorphism associated to the constant function whose value at any point in \(U\) is \(1\). The Lie bracket of two infinitesimal quantomorphisms is an infinitesimal quantomorphism, whose associated function is the Poisson bracket of the functions associated to these two infinitesimal quantomorphisms.
The author then discusses the _quantization_ of a dynamical group of a quantizable symplectic manifold \(U\), with the quantum manifold \(Y\) as quantization. When a Lie group \(G\) acts on \(Y\) by quantomorphisms, it also acts on the basis \(U\) by symplectomorphisms, the projection of \(Y\) onto \(U\) being equivariant with respect to these actions. Therefore \(G\) is a dynamical group of \(U\), and is said to be a _quantizable_ dynamical group. A quantizable dynamical group of \(U\) is always liftable. The author proves that when a dynamical group of \(U\) is quantizable, its symplectic cohomology is zero. He gives examples which prove that this necessary condition is not sufficient.
A dynamical group of \(U\) which is liftable, but not quantizable, may have an extension which still is a dynamical group of \(U\) and is quantizable. As a first example, the author considers a symplectic vector space \(E\) which acts on itself by translations. It is a dynamical group of \(E\). The product \(Y=E\times T\), where \(T\) is the one-dimensional torus, is a quantization of \(E\), which happens to be a Lie group, called the _Weyl group_, extension of the Abelian group \(E\). It acts on itself by translations, which are quantomorphisms. The Weyl group is therefore a quantizable extension of the Abelian group \(E\), which itself is not quantizable since its symplectic cohomology is not zero. As a second example, the author considers the group \(\mathrm{SO}(3)\) acting on a two-dimensional sphere \(S_{2}\) centered on the origin of \(\mathbb{R}^{3}\), endowed with its element of area form as symplectic form. This symplectic manifold is quantizable if its total area is an integer or a half-integer. In this latter case, the author proves that the dynamical group \(\mathrm{SO}(3)\), which is liftable since \(S_{2}\) is simply connected, is not quantizable, though its symplectic cohomology is zero. He also proves that the group \(\mathrm{SU}(2)\) is a quantizable extension of \(\mathrm{SO}(3)\). A similar situation occurs for the restricted Poincare group, which is a dynamical group of the manifold of motions of a relativistic particle. This group is quantizable if and only if the particle's spin is integer. If the particle's spin is half-integer, there exists a quantizable extension of the restricted Poincare group described by the author in terms of _Dirac's spinors_.
The author denotes by \(\mathcal{H}(Y)\) the vector space of smooth, complex-valued and compactly supported functions defined on a quantum manifold \(Y\), which are equivariant with respect to the one-dimensional torus \(T\) actions on \(Y\) by the flow of the Reeb vector field and on \(\mathbb{C}\) by multiplication (the torus \(T\) being identified with the set of complex numbers of modulus \(1\)). After indicating the definitions and results in topology and
functional analysis he is going to use, he proves that \(\mathcal{H}(Y)\) is a pre-Hilbert space. He defines the _Hilbert space of the quantum manifold \(Y\)_ as the completion of \(\mathcal{H}(Y)\). He defines also \(C^{*}\)-algebras, indicates some of their properties and proves that the set of bounded linear endomorphisms of a Hilbert space is a \(C^{*}\)-algebra. He very shortly presents many concepts about operators on a Hilbert space: self-adjoint, normal, Hermitian, unitary operators, etc. For a given quantum manifold \(Y\) of basis \(U\), the author then explains how to associate a Hermitian operator \(\widehat{u}\) on \(\mathcal{H}(Y)\) to any smooth function \(u\) defined on the basis \(U\). He proves that the map \(u\mapsto\widehat{u}\) is linear and injective, that for \(u=1\), \(\widehat{u}\) is the identity of \(\mathcal{H}(Y)\), and that for any pair \((u,u^{\prime})\) of smooth functions on \(U\),
\[\widehat{u}\circ\widehat{u^{\prime}}-\widehat{u^{\prime}}\circ\widehat{u}=-i \widehat{\{u,u^{\prime}\}}\,.\]
He illustrates this important result in the case when \(Y\) is the quantization of a symplectic vector space, the smooth functions considered being canonical coordinates on this vector space. The operators associated in this way to smooth functions are often not bounded, which makes their study and use very difficult. He proposes to directly build quantomorphisms acting on \(Y\), without taking the detour through infinitesimal quantomorphisms and exponentiation. He proves a theorem which asserts the existence of an injective homomorphism of the group of quantomorphisms of \(Y\) into the group of unitary operators on the Hilbert space of \(Y\).
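In that illustration, taking for \(u\) and \(u^{\prime}\) a pair of canonical coordinates \(q\) and \(p\) with \(\{q,p\}=1\) (our choice of convention), the displayed relation and the fact that \(\widehat{1}\) is the identity of \(\mathcal{H}(Y)\) give

\[\widehat{q}\circ\widehat{p}-\widehat{p}\circ\widehat{q}=-i\,\widehat{\{q,p\}}=-i\,\mathrm{id}_{\mathcal{H}(Y)}\,,\]

which is, up to sign and \(\hbar\) conventions, the canonical commutation relation used by Heisenberg and Pauli.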
The second section of Chapter V begins with the statement and explanation of the _Correspondence Principle_, well known to physicists. Quantum mechanics is not an autonomous theory: it cannot be formulated without using classical mechanics. According to the Correspondence Principle, for each physical phenomenon described in the framework of quantum mechanics, there should exist a corresponding "classical approximation" described in the framework of classical mechanics. For the author, this principle is an argument in favour of the extension of classical mechanics he proposed in Chapter II, the _Maxwell Principle_. Without this extension, the use of classical mechanics would be limited to systems made of material points, excluding many phenomena encountered in physics such as particles with spin.
The _quantization_ of a classical mechanical system is the construction of the quantum mechanical system of which this classical system is an approximation. In this section the author examines the possible application of the mathematical theory of Geometric Quantization developed in the previous section to the quantization of real systems encountered in physics. His initial assumption is: the space \(U\) of classical motions of the system is a _quantizable symplectic manifold_. The first consequence of this assumption is: expressed with the constant \(\hbar\) as unit, the value of the spin of particles must be either an integer or a half-integer. This consequence is in perfect agreement with experimental results.
Let the quantum manifold \(Y\) be a quantization of the classical manifold of motions \(U\). A _state vector_ of \(Y\) is any element \(\Psi\) with norm \(1\) of the pre-Hilbert space \(\mathcal{H}(Y)\). The author explains its probabilistic interpretation: for each \(\xi\in Y\), the real non-negative number \(|\Psi(\xi)|^{2}=\overline{\Psi(\xi)}\Psi(\xi)\) depends only on the projection \(x\) of \(\xi\) on the manifold of motions \(U\); the function so defined on \(U\) is the probability density, with respect to the Liouville measure, of a probability law on \(U\) which is the _statistical state_ of the system (in the sense defined in Chapter IV) corresponding to the state vector \(\Psi\).
An _observable_ in classical mechanics is a smooth function \(u\) defined on the classical manifold of motions \(U\). According to the British scientist P.A.M. Dirac, _observables_ in quantum mechanics are Hermitian operators. In agreement with this view, the author has proven in the previous section that a Hermitian operator \(\widehat{u}\) could be associated to the classical observable \(u\). This operator is the quantum observable of which \(u\) is the classical approximation. When applied to the phase space of a classical conservative system at a given time \(t\), this idea leads to Dirac's _quantum equations of motion_ and to the _commutation relations_ chosen by the physicists W. Pauli and W. Heisenberg as fundamental ingredients of quantum mechanics.
In favorable cases, an isotropic foliation of the classical space of motions \(U\) can be lifted into a _Planck foliation_ of the quantum manifold \(Y\), that is, a foliation such that the contact form defined on \(Y\) vanishes on all vectors tangent to the leaves. When a state vector \(\Psi\) is constant on each leaf of this Planck foliation, this state vector is said by the author to _satisfy Planck's condition_ (relative to the considered Planck foliation). The _Planck space_ is the set of state vectors which satisfy Planck's condition. For a conservative system, using as isotropic foliation of \(U\) the foliation whose leaves are the integral curves of the Hamiltonian vector field whose Hamiltonian is the energy, by writing explicitly Planck's condition, the author obtains the famous equality \(E=h\nu\) which relates the energy \(E\), the Planck constant \(h\) and the frequency \(\nu\).
The author then assumes that a dynamical group \(G\) acts on the classical manifold of motions \(U\), with a moment map, and that there exists an Abelian normal subgroup \(\widehat{G}\) of \(G\) whose symplectic cohomology is zero. He uses as isotropic foliation of \(U\) the foliation whose leaves are tangent to the vector sub-bundle of \(TU\) generated by Hamiltonian vector fields whose Hamiltonians are the components of the moment map of the action of \(\widehat{G}\). He lifts this foliation into a Planck foliation of \(Y\), writes explicitly the corresponding Planck's condition, and determines the associated Planck space. For an isolated system, \(G\) will be either
the restricted Poincare group or the Galilei group, according to whether the system is relativistic or non-relativistic. The Abelian subgroup \(\widehat{G}\) will be the group of space-time translations. The author explains how Planck's condition leads to the quantum wave equations. For a non-relativistic material point, the author obtains by this means the _Schrodinger equation_, and for a relativistic material point, the _Klein-Gordon equation_. For a non-relativistic particle with spin \(1/2\), using a \(\mathbb{C}^{2}\)-valued variable as state vector, the author proves that both its components satisfy a Schrodinger equation; it is the description of electrons proposed by Pauli. For a relativistic particle of spin \(1/2\), he obtains _Dirac's equation_ and proves that rather than the Poincare group itself, it is a quantizable extension of this group which acts on the set of solutions of this equation. For him, the non-quantizability of the Poincare group provides a natural explanation of this fact, well known to physicists. The author applies the same procedure to a massless particle, first with spin \(1/2\), then with spin \(1\).
Next the author considers an assembly of particles of a given type in indeterminate number. He denotes by \(U_{\Phi}\) its manifold of motions and by \(U\) the manifold of motions of the system made by a single particle of the considered type. He calls \(U_{\Phi}\) the _Fock's manifold_ and explains that as a set, it is the set of all finite subsets of \(U\). As a manifold, \(U_{\Phi}\) is the sum of disjoint parts of different dimensions, each part \(U_{n}\) being the manifold of motions of the system made of a fixed number \(n\) of particles, with \(0\leq n<+\infty\). For \(U_{0}\), the author takes a singleton considered as a \(0\)-dimensional manifold. As seen in the previous section, for each \(n\geq 2\), the part \(U_{n}\) of \(U_{\Phi}\) has two non-equivalent quantizations, one for each distinct character of the group of permutations of a set of \(n\) elements. The author explains that for some physical considerations, the same character should be chosen for all possible values of \(n\), therefore he obtains two non-equivalent quantizations \(Y_{\Phi}\) of \(U_{\Phi}\). On the pre-Hilbert space \(\mathcal{H}(Y_{\Phi})\), he defines _creation_ and _annihilation_ operators, which respectively increase or decrease the number of particles by one unit. Two different cases must be distinguished, depending on whether the particles are _fermions_ or _bosons_.
At the end of Chapter V, the author discusses the notion of _average value_ of an observable for a given state vector. Planck's condition appears as a sufficient condition so that the statistical and quantum mechanical averages of an observable coincide. Finally the author uses the notion of _function of positive type_ defined on a group, with values in \(\mathbb{C}\), to enlarge the definitions of a _quantum state_ and of _average value_ of an observable, to include states defined by a _density operator_ encountered in quantum chemistry.
## 2 Comments
The great originality of the book _Structure des systemes dynamiques_ clearly appears in the detailed presentation of its contents given above. In this section, we will first try to identify the most innovative and promising ideas which can be found in it. Then we will write a few words about the terminology and the notations used by the author, which do not lack originality either.
_Remarkable scientific concepts._ Innovative ideas presented in the first two chapters are mainly related to the ways in which difficult subjects can be taught to beginner students. The first really remarkable scientific concept which appears in this book is, in our opinion, presented in Chapter III. The author begins with a short and rather classical account of the general principles of classical mechanics. Then he proves that on the evolution space of a dynamical system, there exists a remarkable presymplectic form, which he calls the _Lagrange form_. Its kernel determines the vector field whose integral curves are the _motions_ of the system. This form projects onto the set of all possible motions, called the _space of motions_, and its projection is a _symplectic form_, _i.e._, a closed, non degenerate 2-form. The author chooses this property, called the _Maxwell principle_, as the founding principle of mechanics. For us, it is a very important innovation: the traditionally used concepts, such as the _configuration space_, the _evolution space_, the _phase space_, go into the background and leave the first place to the _space of motions_ and to the _Lagrange form_ with which it is endowed. Thanks to this innovation, the author will be able to deal, in the framework of classical mechanics, with the dynamics of a particle with spin, though there is no Lagrangian for such a system, and with assemblies of an indeterminate number of such particles. Similarly, he will be able to describe systems made of massless relativistic particles, though there is no evolution space for such systems.
The space of motions of a dynamical system is not very often considered by modern authors, though it appeared as early as 1808 in the works of Lagrange. This very natural concept has a nice mathematical property: the space of motions is always endowed with a smooth manifold structure. The _phase portrait_ of a dynamical system, a frequently used and closely related concept, very often has a much more complicated structure. One may wonder why the concept of space of motions is not used more by modern authors. Maybe it is because for some dynamical systems, the space of motions is a _non-Hausdorff_ manifold. Another possible explanation is that some scientists are interested in the thorough description of particular
motions of a system, rather than in the study of the set of all possible motions. By showing that many important results can be deduced from the symmetries of the space of motions of a system, the author proves that this reluctance is unfounded. For example, in Chapter III, section 12, he proves that the principle of equality of action and reaction appears as a consequence of Galilean relativity and Maxwell's principle.
In classical (non-relativistic) mechanics, the cohomological interpretation of the _mass_ of an isolated dynamical system is, in our opinion, another innovative idea worth mentioning. A cohomology class appears indeed in the mathematical expression of the action of the Galilei group on the space of motions of such a system. It is an element of a one-dimensional vector space, and the author has proven that it can be interpreted as the mass of the system. In classical mechanics, it is legitimate to consider isolated systems with a positive, null or negative mass. It will be done by the author in his study of the behaviour of elementary systems with respect to time or space reversals.
Another originality of this book is that it presents dynamical systems in the framework of relativistic physics as well as in the framework of classical, non-relativistic mechanics. This is made possible thanks to the use of the concept of space of motions. In classical, non-relativistic mechanics, the Kepler problem and the \(n\)-body problem are very good mathematical models for systems of gravitationally interacting material points. In relativistic physics, there is no such mathematical model for a system of interacting electrically charged particles. These particles interact by means of the electromagnetic field created by their motion. Once created, this electromagnetic field evolves according to its own laws. Up to now, no mathematical model is available to describe the motions of the particles by the integral curves of a vector field defined on some hypothetical evolution space, only depending on the positions and motions of the particles. The author manages to get around this difficulty by using the space of motions of the system and Maxwell's principle.
In classical mechanics, the symmetry group of the space of motions of an isolated dynamical system is the Galilei group. In relativistic physics, this group is the Poincare group. This change of symmetry group has important consequences clearly described by the author. The status of _mass_ in relativistic physics is very different from its status in classical mechanics, due to the fact that the symplectic cohomology of the Poincare group is trivial. In relativistic physics, there is no longer a _barycentric decomposition_ of the space of motions of an isolated system, as in classical mechanics, because the Poincare group has no privileged normal Abelian subgroup, as the Galilean group has.
The works of the German mathematician Emmy Noether [9] told us that _first integrals_ of a Lagrangian or Hamiltonian system very often are linked with symmetries of the equations of motion. Most textbooks in classical mechanics published before the author's book only indicate how a real valued first integral is determined by each one-parameter symmetry group of these equations. We believe that Jean-Marie Souriau is, with Stephen Smale [10], among the first scientists who considered the geometric structure of the full set of these first integrals: they are the components of the _moment map_ of the symmetry group's action, defined on the evolution space and valued in the dual vector space of the Lie algebra of this group.
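In the notation most readers will be familiar with (ours, not the author's), the correspondence described above can be summarized as follows. For a Hamiltonian action of a Lie group \(G\) on a symplectic manifold \((M,\omega)\), the moment map \(J:M\to\mathfrak{g}^{*}\) is characterized by

\[i_{X_{\xi}}\,\omega=\mathrm{d}\,\langle J,\xi\rangle\qquad\text{for every }\xi\in\mathfrak{g},\]

where \(X_{\xi}\) is the fundamental vector field associated with \(\xi\). When the dynamics is generated by a \(G\)-invariant Hamiltonian, each component \(\langle J,\xi\rangle\) Poisson-commutes with it and is therefore a first integral; Noether's one-parameter statement is recovered by restricting attention to a single \(\xi\).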
An _elementary system_ is a relativistic dynamical system whose space of motions is an homogeneous space of the restricted Poincare group. This purely mathematical definition, due to the author, seems to us very important because the classification of elementary systems reveals many properties of physicists' _elementary particles_, especially their _mass_, their _spin_ and, for massless particles, their _helicity_ (whose possible values, \(1\) or \(-1\), correspond to the two different circular polarizations of photons). In the non-relativistic approximation, elementary systems are models of elementary particles in the framework of classical mechanics. By this means, Geometric Optics appears, when the particle's spin is negligible, as a part of classical mechanics. The author's works so appear in continuity with those of Hamilton [11], who introduced the _characteristic function_ first in Optics before using it in mechanics. Interested readers will find in the long Introduction of the book [12] a very nice discussion of the symplectic aspects of geometric Optics and Electromagnetism.
Chapter IV also contains several innovations worth mentioning. For the author, a _statistical state_ of a dynamical system is a probability measure on the space of motions of the system. This definition is more natural than that generally used, for example by George Mackey [13] who, instead of the space of motions, uses the phase space at a given time. However, the most important innovation contained in this chapter seems to us the generalization of the notion of a Gibbs state, in which the energy is replaced by the moment map of the action of a symmetry group. This very natural generalization (the energy being the moment map of the action of the one-dimensional group of time translations) involves remarkable new features when the symmetry group is not Abelian. This group acts on the dual space of its Lie algebra by an affine action involving a symplectic cocycle, with the coadjoint action as linear part, for which the moment map is equivariant. By using generalized Gibbs states, the author develops a kind of _Lie groups thermodynamics_, which seems to us very interesting, both from the mathematician's viewpoint and for possible applications
in physics. In this theory, the generalized temperature and the generalized quantity of heat are, respectively, elements of the Lie algebra of the symmetry group and of its dual vector space. The author proves that an open convex subset of the Lie algebra is endowed with a Riemannian metric which plays an important part in Statistics and Information theory.
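Schematically, and again in notation which is not the author's, the generalized Gibbs states referred to here can be written as

\[z(\beta)=\int_{M}e^{-\langle J(x),\,\beta\rangle}\,\mathrm{d}\lambda(x),\qquad \rho_{\beta}(x)=\frac{e^{-\langle J(x),\,\beta\rangle}}{z(\beta)},\qquad\beta\in\Omega\subset\mathfrak{g},\]

where \(J:M\to\mathfrak{g}^{*}\) is the moment map of the symmetry group's action on the space of motions \(M\), \(\lambda\) a reference measure, and the open subset \(\Omega\) and partition function \(z\) are our labels. The generalized temperature \(\beta\) lives in the Lie algebra and the generalized quantity of heat \(Q(\beta)=-\mathrm{d}(\log z)(\beta)\) in its dual, as stated above; ordinary thermodynamics corresponds to taking for \(G\) the one-parameter group of time translations, for which \(\langle J,\beta\rangle\) reduces to \(E/(kT)\).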
The seeds of Geometric Quantization, presented in Chapter V, can be found in George Mackey's small book [13], published in 1963. Jean-Marie Souriau [1] and Bertram Kostant [2], its main creators, independently proposed two slightly different, but equivalent versions of this theory. Taking a symplectic manifold as the base, Kostant defines a bundle whose fibres are complex lines, endowed with a connection whose curvature is the symplectic form of its base. Similarly, Souriau defines a bundle whose fibres are circles, endowed with a contact form \(\omega\), whose Reeb vector field is tangent to the fibres. Moreover, the exterior derivative of \(\omega\) projects onto the symplectic form of the base. Souriau's circle bundle is the _principal bundle_ which is associated to Kostant's complex line bundle, and the connection form of Kostant's line bundle is the contact form \(\omega\) of Souriau. While Souriau quantizes the _manifold of motions_ of a classical dynamical system, Kostant quantizes its _phase space_ at a given time. However, this difference in their choice of the quantized symplectic manifold may not be as important as it seems, because Souriau often uses the local symplectomorphism which exists between the space of motions and the phase space, especially when he derives the quantized wave equations.
Many remarkable results are presented in Chapter V. For instance, the author proves that a quantizable system made by indistinguishable particles has exactly two non-equivalent quantizations, both experimentally observed: they are the Fermi-Dirac and Bose-Einstein quantizations well known by physicists. He proves too that wave functions of quantum mechanics can be described with functions defined on the quantum manifold, and that the usual spatio-temporal description is obtained by means of a Fourier transform. The quantization of the space of motions of an assembly of indistinguishable particles leads to the _second quantization formalism_: quantum vacuum, creation and annihilation operators. The author also shows that the geometric obstructions encountered when the action of a symmetry group on the classical space of motions of a system is lifted to the quantum manifold explain some facts well known by physicists: the lift of the Abelian group of translations is the non-Abelian _Weyl group_; elements of the Galilean group can be separately lifted, but this full group does not act on the quantum manifold (a fact due to a cohomological obstruction discovered by Valentine Bargmann [5]); the Poincare group can be lifted for a particle of integer spin, but not for a particle of half-integer spin, in which case it is a two-sheeted covering space of the Poincare group which can be lifted, which leads to the use of Dirac's spinors.
Finally, we must quote the many conjectural ideas presented by the author in the last section of his book, for example about the behaviour of elementary systems with respect to space or time reversals, and about a generalization of the notion of quantum state.
**Language and notations of the author.** The author's language, although slightly different from the one generally used in differential geometry, is perfectly logical and understandable. One of the very few terms which could cause misunderstandings is that of _embedding_ (in French, _plongement_). For the author, an embedding is a smooth map defined on a smooth manifold \(V\), or on an open subset of \(V\), with values in another smooth manifold \(V^{\prime}\), which is injective (the author uses the term _regular_ for _injective_) and whose rank is everywhere equal to the dimension of \(V\). Most geometers call such a map _an injective immersion_, and use the term _embedding_ for injective immersions which, in addition, are homeomorphisms of their domain of definition onto their image, endowed with the topology induced by that of \(V^{\prime}\). It seems to us that the choice made by the author is very reasonable because injective immersions are much more frequently encountered than embeddings. For example, orbits of a Lie group action, as well as leaves of a foliation, always are immersed in the manifold in which they are contained, and much more rarely embedded. This indication only concerns the original version of the book in French. In its English translation [14], the translators use the usual term _injective immersion_ and, in a footnote, indicate that for the author a _submanifold_ is an _immersed submanifold_.
Another particularity of the author's language which seems to us worth mentioning, is about the notion of an _Euclidean vector space_ of finite dimension. For the author, it is a real or complex finite-dimensional vector space \(E\) endowed with a symmetric, bilinear form \(g\) satisfying the following conditions.
* If \(E\) is a real vector space, \(g\) is assumed to be non degenerate, which means that for any \(x\in E\), \(x\neq 0\), there exists \(y\in E\) such that \(g(x,y)\neq 0\). The author does not impose the condition that \(g\) be positive. He still uses the term _Euclidean vector space_, not the term _pseudo-Euclidean vector space_ used by most mathematicians, when \(g\) is non degenerate without being positive definite.
* If \(E\) is a complex \(n\)-dimensional vector space, \(g\) is a symmetric, bilinear form _for the structure of \(2n\)-dimensional real vector space of \(E\)_ underlying its structure of \(n\)-dimensional complex vector space. The form \(g\) is assumed to be non degenerate and to satisfy, for every pair \((x,y)\) of elements in \(E\), \(g(ix,iy)=g(x,y)\), where \(i=\sqrt{-1}\).
This convention seems to us very logical and useful, because it unifies two different concepts often taught separately: the concept of real Euclidean finite-dimensional vector space and that of complex finite-dimensional Hermitian vector space.
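A concrete illustration of this convention (our example, not taken from the book): on \(E=\mathbb{R}^{4}\), the Minkowski form

\[g(x,y)=x^{0}y^{0}-x^{1}y^{1}-x^{2}y^{2}-x^{3}y^{3}\]

is non degenerate without being positive, so \(E\) is a Euclidean vector space in the author's sense, although most texts would call it pseudo-Euclidean. In the complex case, if \(h\) is a non degenerate Hermitian form on \(E\), then \(g(x,y)=\operatorname{Re}h(x,y)\) is symmetric, \(\mathbb{R}\)-bilinear and satisfies \(g(ix,iy)=g(x,y)\), which shows how the usual Hermitian vector spaces fall under the same definition.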
Some notations of the author seem to us disconcerting. For example, he calls _variable_ any symbol, such as the letter \(y\), and associates it with a map. He denotes by \([\)value \(y](a)\) the value of the map associated to \(y\) at the point \(a\) of its domain of definition. He uses different symbols for the variable, the associated map, the point of its domain of definition and its value at that point. So it is sometimes difficult to understand some expressions he writes, because the reader must simultaneously have in mind the meaning of many symbols.
Given a vector field \(f\) defined on a smooth manifold \(V\), the author calls _derivation_ associated to \(f\) the operation which, to any smooth map \(A\) defined on \(V\) and valued in a smooth manifold \(V^{\prime}\), associates the map whose value, at each \(x\in V\), is the image of the vector \(f(x)\) by the linear map tangent to \(A\) at \(x\). He denotes derivations by symbols such as \(d\) or \(\delta\), other than the symbol which denotes the vector field (here the symbol which denotes the vector field is \(f\)). When the author considers several vector fields, he uses symbols such as \(d\), \(d_{1}\), \(\delta\), \(\delta_{1}\), etc to denote the corresponding derivations. Of course, these conventions are perfectly logical and coherent. However, their use may make the reading of some formulae rather difficult. Moreover, since the symbol \(d\) is used for derivations associated to vector fields, the author cannot use it for the exterior derivation of differential forms, as done by most mathematicians. For the exterior derivation of differential forms, the author uses the symbol \(\nabla\), which may disconcert some readers.
Let us finally indicate a few peculiarities of the notations used by the author in exterior differential calculus. He does not use the symbol \(\wedge\) for the exterior product of differential forms, which makes some formulae heavy. He denotes by \(\eta(X)\) the interior product of a differential form \(\eta\) by a vector field \(X\), while many other mathematicians denote it by \(\mathrm{i}(X)\eta\), \(\mathrm{i}_{X}\eta\) or \(\mathrm{i}_{X}(\eta)\). It is probably for this reason that he denotes by \(\eta(X_{1})(X_{2})\cdots(X_{p})\) the evaluation of the \(p\)-form \(\eta\) on the vector fields \(X_{1},X_{2},\ldots,X_{p}\), instead of writing more simply \(\eta(X_{1},\ldots,X_{p})\). At first glance, this convention may seem simple and convenient. However, it makes some formulae heavy, such as the Cartan formula \(\mathcal{L}(X)=\mathrm{i}(X)\,\mathrm{d}+\mathrm{d}\,\mathrm{i}(X)\), which expresses the Lie derivative with respect to a vector field \(X\) as the anticommutator of the interior product by \(X\) with the exterior derivative.
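For reference, in the notation most readers will know (\(\mathrm{d}\) for the exterior derivative, \(\mathrm{i}(X)\) for the interior product), the Cartan formula applied to a differential form \(\eta\) reads

\[\mathcal{L}(X)\,\eta=\mathrm{i}(X)\,\mathrm{d}\eta+\mathrm{d}\big(\mathrm{i}(X)\,\eta\big),\]

which in the author's conventions becomes \(\mathcal{L}(X)\,\eta=(\nabla\eta)(X)+\nabla\big(\eta(X)\big)\).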
## 3 Novel research ideas in Jean-Marie Souriau's footsteps
Every research book is a survey, at a given time, of the state of knowledge on its subject, and is therefore necessarily limited. Just as the author considered that Lagrange's book _Mecanique analytique_ was unfinished, we believe that _Structure des systemes dynamiques_ is a very in-depth work opening new research paths.
## 4 Conclusion
We cannot fail to be impressed when reading this book by the extent and the thoroughness of the author's knowledge, in mathematics as well as in mechanics and physics, and by the originality and the depth of his thoughts. Jean-Marie Souriau is the author of two other very remarkable books, _Geometrie et Relativite_ [15] and _Calcul lineaire_ [7], which are also very rich and original and deserve to be read, and read again. We believe that among the paths for research he indicates in these books, many are not yet fully explored.
|
2303.09593 | Dynamic generation of photonic spatial quantum states with an all-fiber
platform | Photonic spatial quantum states are a subject of great interest for
applications in quantum communication. One important challenge has been how to
dynamically generate these states using only fiber-optical components. Here we
propose and experimentally demonstrate an all-fiber system that can dynamically
switch between any general transverse spatial qubit state based on linearly
polarized modes. Our platform is based on a fast optical switch based on a
Sagnac interferometer combined with a photonic lantern and few-mode optical
fibers. We show switching times between spatial modes on the order of 5 ns and
demonstrate the applicability of our scheme for quantum technologies by
demonstrating a measurement-device-independent (MDI) quantum random number
generator based on our platform. We run the generator continuously over 15
hours, acquiring over 13.46 Gbits of random numbers, of which we ensure that at
least 60.52% are private, following the MDI protocol. Our results show the use
of photonic lanterns to dynamically create spatial modes using only fiber
components, which due to their robustness and integration capabilities, have
important consequences for photonic classical and quantum information
processing. | A. Alarcón, J. Argillander, D. Spegel-Lexne, G. B. Xavier | 2023-03-16T18:40:45Z | http://arxiv.org/abs/2303.09593v1 | ###### Abstract
Photonic spatial quantum states are a subject of great interest for applications in quantum communication. One important challenge has been how to dynamically generate these states using only fiber-optical components. Here we propose and experimentally demonstrate an all-fiber system that can dynamically switch between any general transverse spatial qubit state based on linearly polarized modes. Our platform is based on a fast optical switch based on a Sagnac interferometer combined with a photonic lantern and few-mode optical fibers. We show switching times between spatial modes on the order of 5 ns and demonstrate the applicability of our scheme for quantum technologies by demonstrating a measurement-device-independent (MDI) quantum random number generator based on our platform. We run the generator continuously over 15 hours acquiring over 13.46 Gbits of random numbers, of which we ensure that at least 60.52% are private following the MDI protocol. Our results show the use of photonic lanterns in order to dynamically create spatial modes using only fiber components, which due to its robustness and integration capabilities have important consequences for photonic classical and quantum information processing.
**Dynamic Generation of Photonic Spatial Quantum States with an All-Fiber Platform**
**A. Alarcon\({}^{1,2,*}\), J. Argillander\({}^{1,2}\), D. Spegel-Lexne\({}^{1}\), and G.B. Xavier\({}^{1}\)**
\({}^{1}\)_Institutionen for Systemteknik, Linkopings Universitet, SE-581 83 Linkoping, Sweden_
\({}^{2}\)_The authors contributed equally to this work_
\({}^{*}\)[email protected]
## 1 Introduction
In quantum technologies individual quantum and entangled states are employed for information processing, providing significant advantages in areas such as computing and communication security [1, 2]. The latter has motivated the integration of quantum communication technology with current classical communication systems in order to share existing telecom infrastructure [3, 4]. In order to make this integration possible, quantum communications systems must be compatible with the telecommunication optical spectrum, the employed components and optical fibers. Single-mode fibers (SMF) have long been, and continue to be, the preferred platform for quantum communication experiments [5, 6]. However, the growing demand for data traffic has forced us to rethink the existing technological infrastructure [7]. For instance, spatial division multiplexing (SDM) technologies make it possible to use the transverse spatial properties of a light beam to multiplex information and increase data transport capacity [8], as well as offering advantages for other fields such as in quantum information [4].
These recent technological advances have motivated various studies that have allowed a quantum system to be encoded in terms of its transverse optical spatial modes [9-14]. One type of SDM fiber that has gained considerable attention recently are few-mode fibers (FMFs) [15], specially due to their ability to support orbital angular momentum modes [11, 13, 16, 17]. The FMF core is slightly larger cross-sectionally than the core of an SMF, allowing it to carry more than one transverse spatial mode. The main approach is to prepare \(N\) spatial modes and propagate an \(N\)-dimensional qudit represented by a spatial superposition of Gaussian modes through the FMF. However, a common problem in quantum communication with transverse spatial modes is that it usually relies on bulk optical components such as q-plates [18], spatial light modulators [19], or cylindrical lens mode converters [20], employed to manipulate the spatial states, which all suffer from slow response times, thus limiting the performance in communication applications and hindering integration with fiber-optic systems.
In order to start handling these issues, several fiber-based techniques have been implemented to create spatial superpositions of light, such as the use of long-period gratings (LPGs) [21 - 23] or mode-selective couplers [24, 25]. Even though the generation efficiency of spatial modes in these devices is higher than that of bulk components, they are still usually based on manual tuning. One promising technique is based on creating parallel path states of light (with a beamsplitter for instance), which are then converted into superpositions of linearly polarized (LP) modes with a photonic lantern (PL) and then launched onto a few-mode fiber [13, 16]. The PL has been used to increase the capacity of telecommunications channels through the multiplexing of multiple transverse spatial channels in the same fiber [7, 8]. Our commercial PL (Phoenix Photonics) has an FMF at the output and 3 independent single-mode fibers at the input. We can assign each of these Gaussian input modes to one of the three lowest order LP modes in the FMF. The internal structure is made of an adiabatic taper that provides a slow transition from the input single-mode fibers to the FMF in such a way that it transforms the single-mode input to the proper LP mode
at the output. The spatial modes in the FMF will be given by the interaction of the supermodes in the internal structure of the PL, and the mapping is produced by a matching process between the effective indices of the tapered region and the incoming spatial modes [16, 26]. This device has also shown remarkable capability to integrate with other SDM technologies and fiber optic communication systems [7, 26], which makes it a great candidate for generating spatial modes. In this paper we show that a Sagnac interferometer acting as a tunable beamsplitter combined with a photonic lantern can be used to dynamically, and at high-speed, generate any two-dimensional spatial state of the form \(\alpha|0\rangle+\beta e^{i\phi}|1\rangle\), with \(\alpha,\beta\in[0,1]\) where \(|0\rangle\) and \(|1\rangle\) correspond to two orthogonal LP modes in the FMF. It is worth noting in particular that we can also generate the specific superpositions \(|\text{OAM}_{\pm}\rangle=1/\sqrt{2}(|\text{LP}_{11\text{a}}\rangle+e^{\pm i\pi/ 2}|\text{LP}_{11\text{b}}\rangle)\), which are the orbital angular momentum modes (OAM) of index \(\pm 1\), currently under intense investigation in optical and quantum communication systems [3]. Our system can switch between two orthogonal spatial states with a speed on the range of 5 ns, limited by the driving electronics. We demonstrate the applicability of our system by dynamically preparing different quantum states as required in a measurement-device-independent (MDI) quantum random number generator (QRNG) [27]. We demonstrate the stability of our system by continuously measuring over 15 hours with an average random number generation rate of \(205.82\pm 19.95\) kbps of data which, after randomness extraction, passes the widely adopted NIST 800-22 test suite [28]. Another recent work was able to demonstrate the feasibility of generating random numbers from superpositions of transverse spatial states over ring-core fibers (RCFs) [29], with a proposal to upgrade it to a measurement device-independent case based on optical switches on an integrated photonic circuit. Our results go beyond that by already demonstrating the MDI protocol thanks to our dynamic spatial state preparation method, and also without the need of a long fiber to generate temporal delays among the spatial states. We have used only off-the-shelf components, which shows the feasibility of employing FMF technology in applications in quantum information. Our results show a reliable and fast technique to prepare transverse spatial modes of light in optical fibers, with potential applications in classical and quantum information fields.
## 2 Setup
The experimental setup to prepare the transverse spatial states of light is shown in Fig.**[1]** a). The weak coherent source (WCS) consists of a continuous DFB semiconductor laser (\(\lambda\)= 1546.92 nm), followed by a fiber pigtailed lithium niobate intensity modulator (IM) and then a variable optical attenuator in order to produce weak coherent states (not shown for the sake of clarity). The weak coherent states then go to an optical circulator (C) before entering one of the input ports of a 50:50 bidirectional fiber coupler (FC). The Sagnac loop (SL) is constructed by connecting the outputs of the FC together with a lithium niobate pigtailed telecom phase modulator (PM), used to give a phase shift \(\phi_{R}\), and a 50 m long fiber optic spool as a delay line. The phase modulator is used to change the relative phase between the two counter propagating directions inside the loop, and the delay line is needed to ensure sufficient time separation to apply adequate phase modulation to adjust for different output probabilities of the loop. This is achieved by correctly choosing the relative delay between the IM within the SPS and the PM within the loop. A field-programmable gate array (FPGA) applies an electrical signal to the PM to ensure that only the wavepacket propagating in the clockwise direction is subjected to a phase shift. Two manual polarization controllers are used to align the polarization of the wave packets traveling clockwise and counterclockwise in order to maximize the interference when they are recombined in the bidirectional fiber coupler.
We create the well known general path state \(|\psi\rangle=\alpha|0\rangle+i\beta|1\rangle\), where \(|0\rangle\) and \(|1\rangle\) denote the path states at the output of the SL. \(\alpha=\frac{1}{2}\left(1-e^{i\phi_{R}}\right)\) and \(\beta=\frac{1}{2}\left(1+e^{i\phi_{R}}\right)\) are the probability amplitudes of the photon to be routed through \(|0\rangle\) and \(|1\rangle\) (after the circulator) respectively. Thus the SL operates as a tunable beamsplitter controlled by \(\phi_{R}\)[35]. One of the paths is connected to another pigtailed lithium niobate phase modulator (\(\phi_{x}\)) to create the superposition \(|\psi\rangle=\alpha|0\rangle+e^{i\phi_{x}}\beta|1\rangle\). Both paths are then connected to a mode-selective photonic lantern (PL1) whose main function is to take \(N\) input single-mode fibers, and map each one of them into \(N\) corresponding LP modes in a FMF [26].
In our case PL1 makes the following mapping: \(|0\rangle\rightarrow|\text{LP}_{11\text{a}}\rangle\) and \(|1\rangle\rightarrow|\text{LP}_{11\text{b}}\rangle\). Two additional manual polarization controllers are placed on each path \(|0\rangle\) and \(|1\rangle\) to optimize the modal excitation at the output of PL1, which consists of an FMF capable of supporting three spatial modes of propagation: \(|\text{LP}_{01}\rangle\), \(|\text{LP}_{11\text{a}}\rangle\), and \(|\text{LP}_{11\text{b}}\rangle\). The mapping performed by the photonic lantern creates the state \(|\psi\rangle=\alpha|\text{LP}_{11\text{a}}\rangle+e^{i\phi_{x}}\beta|\text{LP}_ {11\text{b}}\rangle\) in the FMF. By acting on the phase modulators \(\phi_{x}\) and \(\phi_{R}\) it is possible to create any state on the surface of the Bloch sphere.
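As an illustration of the state-preparation rule just described, the following minimal numerical sketch (illustrative only, not the control code of the experiment; the function name is ours) maps the two phase settings onto the prepared spatial qubit:

```python
import numpy as np

def prepared_state(phi_R, phi_x):
    """Spatial qubit prepared by the platform, in the (|LP11a>, |LP11b>) basis.

    phi_R sets the splitting ratio of the Sagnac loop (tunable beamsplitter);
    phi_x is the relative phase applied to the second path.
    """
    alpha = 0.5 * (1 - np.exp(1j * phi_R))  # amplitude on path |0> -> LP11a
    beta = 0.5 * (1 + np.exp(1j * phi_R))   # amplitude on path |1> -> LP11b
    return np.array([alpha, np.exp(1j * phi_x) * beta])

# phi_R = pi routes all light into the first path, i.e. prepares |LP11a>.
print(np.abs(prepared_state(np.pi, 0.0)) ** 2)       # ~ [1, 0]
# phi_R = pi/2 gives |alpha| = |beta| = 1/sqrt(2); sweeping phi_x then moves
# the state around the equator of the Bloch sphere (LP and OAM superpositions).
print(np.abs(prepared_state(np.pi / 2, 0.0)) ** 2)   # ~ [0.5, 0.5]
```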
## 3 Creating arbitrary spatial superpositions
In order to demonstrate that we can generate any arbitrary photonic state \(\psi\), we employ the setup in Fig.[**1**] b). We remove the attenuation from the WCS such that it is now working at standard power levels. Furthermore the pulse width is adjusted to be 100 ns to take into account the slower response time of the InGaAs CCD camera used to measure the intensity and phase profiles of the generated states. The generated light pulses following the IM are split in two paths with a 50:50 fiber coupler, with the lower path connected to our dynamic spatial state generator producing the state \(|\psi\rangle\), which is then collimated with a 10x microscope objective and launched onto a bulk optics 50:50 beamsplitter and then imaged onto the CCD camera. The other arm propagates through single-mode fiber before being launched into free-space with another 10x microscope objective, and is superposed with the beam following the state generator at the 50:50 beamsplitter placed before the camera. By interfering a spherical wave of the fundamental Gaussian mode with a higher order mode generated by our platform, it is possible to visualize the interferogram and the intensity profiles on the camera [21, 22, 30]. A manual polarization controller and a fiber delay line are used in the upper arm to ensure indistinguishability of the two paths on the beamsplitter. A variable attenuator is also employed to ensure equal optical power in both arms before the beamsplitter.
With the setup described above we are able to measure the amplitude profile of each spatial mode (by blocking the upper path), or the phase profile when both paths are allowed to interfere at the beamsplitter. We measure the eigenstates of the three possible mutually unbiased bases (MUBs) in the two-dimensional Hilbert space formed by the superpositions of linearly polarized modes in a few-mode fiber. The three bases are \(\{|\mathrm{LP}_{11\mathrm{a}}\rangle,|\mathrm{LP}_{11\mathrm{b}}\rangle\}\), \(\{|\mathrm{LP}_{+}\rangle,|\mathrm{LP}_{-}\rangle\}\) where \(|\mathrm{LP}_{\pm}\rangle=(1/\sqrt{2})(|\mathrm{LP}_{11\mathrm{a}}\rangle\pm| \mathrm{LP}_{11\mathrm{b}}\rangle)\) and \(\{|\mathrm{OAM}_{+}\rangle,|\mathrm{OAM}_{-}\rangle\}\), where \(|\mathrm{OAM}_{\pm}\rangle=(1/\sqrt{2})(|\mathrm{LP}_{11\mathrm{a}}\rangle+e^{ \pm i\pi/2}|\mathrm{LP}_{11\mathrm{b}}\rangle)\). The theoretical amplitude and phase profiles are shown in Fig.[**2**] for each of the two states of each MUB, as well as the experimental results obtained with the setup of Fig.[**1**] b), showing these results match the theoretical predictions, demonstrating the ability of our spatial state generator to be able to faithfully reproduce any photonic
Figure 1: Experimental setup. a) Our transverse spatial state generator (delimited in the dashed box). We employ a Sagnac loop acting as a tunable beamsplitter to create superpositions of the \(|0\rangle\) and \(|1\rangle\) path states. By combining this with a relative phase shift \(\phi_{x}\), we can then create any general superposition of the form \(\alpha|0\rangle+\beta e^{i\phi_{x}}|1\rangle\), with \(\alpha\) and \(\beta\) determined by the internal phase setting \(\phi_{R}\) in the Sagnac loop. The path state is then mapped to the general superposition of linearly polarized modes \(\alpha|\mathrm{LP}_{11\mathrm{a}}\rangle+\beta e^{i\phi_{x}}|\mathrm{LP}_{11 \mathrm{b}}\rangle\) in the FMF by the photonic lantern. b) We employ an infrared CCD camera to characterize the amplitude and the interferogram of the generated spatial states. c) We demonstrate a measurement-device-independent quantum random number generator by using our state generator to dynamically switch between \(|\mathrm{LP}_{11\mathrm{a}}\rangle\), \(|\mathrm{LP}_{11\mathrm{b}}\rangle\) and its superposition. The projection is done on the path basis by converting from the LP states back to the path states, and detecting the outputs with single-photon detectors. ATT: Optical attenuator; BS: Beamsplitter; C: Optical circulator; CCD: Charged coupled device camera; D: Single-photon detector or amplified p-i-n photodiode; FC: Bidirectional fiber coupler; FMF: Few-mode fiber; PL: Photonic lantern; Pol: Optical polarizer; WCS: Weak coherent source
state \(\psi\) in two-dimensional space. In order to quantify the similarity of the simulated and measured amplitude profiles we measure the correlation between the 2D-Fourier spectrum of the amplitude profiles. From this data we calculate an average correlation of \(94.2\pm 3.7\%\). The deviation can be explained by the use of different exposure settings and nonideal alignment of the CCD camera. The interferometer containing the phase modulator \(\phi_{x}\) is partially insulated on an optical table, and is stable enough for the measurements displayed in Fig.[**2**] to be performed. For applications requiring the superpositions \(\ket{\mathrm{LP}_{\pm}}\) and \(\ket{\mathrm{OAM}_{\pm}}\) to be stable over longer time periods, active phase stabilization can be carried out with the injection of a reference laser at a different wavelength in the interferometer and implementing an active electronic control system.
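The exact similarity metric is not spelled out in full; the short sketch below shows one straightforward reading of "correlation between the 2D Fourier spectra of the amplitude profiles", with the Pearson-style normalization being our assumption rather than a detail taken from the text:

```python
import numpy as np

def fourier_spectrum_correlation(img_sim, img_meas):
    """Correlation between the 2D Fourier magnitude spectra of a simulated
    and a measured intensity profile (two arrays of equal shape)."""
    spec_a = np.abs(np.fft.fftshift(np.fft.fft2(img_sim)))
    spec_b = np.abs(np.fft.fftshift(np.fft.fft2(img_meas)))
    a = (spec_a - spec_a.mean()).ravel()
    b = (spec_b - spec_b.mean()).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```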
We now show the time response of the scheme when performing dynamic changes of photonic states (Fig. 3). We create 120 ns optical pulses which are sent to the Sagnac loop. We then generate a modulation pulse of 45 ns width by using a FPGA, which is used to drive \(\phi_{R}\) inside the loop, adjusted to switch from \(\ket{\mathrm{LP}_{11\mathrm{a}}}\) to \(\ket{\mathrm{LP}_{11\mathrm{b}}}\) in the FMF. The time delay to the modulation of \(\phi_{R}\) is synchronized with the 120 ns amplitude pulse such that only the middle portion of the optical pulse is routed in the Sagnac loop. We then measure the outputs of the second lantern (Fig. 1c)) with two amplified p-i-n photodiodes. We are thus able to show a switching of the spatial modes \(\ket{\mathrm{LP}_{11\mathrm{a}}}\) to \(\ket{\mathrm{LP}_{11\mathrm{b}}}\). The inset shows a zoom of the spatial state transition where a rise time of 5.2 ns and a fall time of 2.4 ns are shown. This response time is limited by our custom driving electronics and the bandwidth of our p-i-n diode. However, our system can increase its performance even more using high-speed electronics as has been demonstrated in works using the Sagnac configuration [31].
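For completeness, a small sketch of how such rise and fall times can be extracted from a sampled photodiode trace; we assume the usual 10%-90% convention, which the text does not state explicitly:

```python
import numpy as np

def rise_time(t, v, lo=0.1, hi=0.9):
    """10%-90% rise time of a single rising edge (t, v: sampled trace)."""
    v = (v - v.min()) / (v.max() - v.min())  # normalize the trace to [0, 1]
    t_lo = t[np.argmax(v >= lo)]             # first crossing of the low level
    t_hi = t[np.argmax(v >= hi)]             # first crossing of the high level
    return t_hi - t_lo
```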
Figure 2: Theoretical and experimental amplitude and interferogram distributions of the different spatial states generated by our scheme, as imaged with an infrared InGaAs CCD camera (Fig. 1b))
## 4 Demonstration of a spatial state quantum random number generator
We then show that the dynamic and fast switching enabled by our spatial state generator allows the implementation of a measurement-device-independent (MDI) quantum random number generator (QRNG). QRNGs are essential to many applications, including simulations, online gambling and cryptography [32, 33]. They are based on the fundamental principles of quantum mechanics, where the randomness does not depend on our own lack of knowledge regarding the physical process. One way to implement a QRNG is through measurements on weak coherent states, observing and recording the outcomes of a measurement made on one of their degrees of freedom, for example, polarization [34], the relative phase of two paths
**[35]** or arrival time at a detector [36].
Traditional protocols for quantum random number generation depend on an implicit trust in the device used for state preparation and in the device used for measurement. While showing an advantage over deterministic methods for entropy generation and being comparatively practical to implement, they are prone to side-channel attacks, either in the detector or in the state preparation. To overcome the required trust in the devices, so called device-independent (DI) protocols have been developed which are able to generate genuine randomness while requiring no a priori assumptions about, or trust in, the devices. However, there are major technical challenges to overcome, and as such they are typically very slow compared to other implementations, and are not yet practical for most applications [33]. MDI protocols are a convenient alternative, as they are able to relax hardware requirements given the assumptions that Alice (the transmitter) has well characterized hardware responsible for state preparation while Bob (the measurement device) is untrusted [27]. Alice randomly chooses some experimental rounds to prepare and measure certain states that produce deterministic results at Bob in order to test the measurement device. It is then possible to bound the amount of information that has leaked to an eavesdropper, thus placing a lower bound on the amount of private bits that are being generated. In other rounds, in order to generate random numbers, Alice prepares a quantum state capable of generating a random output upon measurement projection. In this way, MDI protocols are highly applicable to practical situations where the measurement hardware cannot be fully trusted or characterized.
Due to the dynamic nature of state preparation in MDI-QRNGs, our spatial state preparation platform is ideal to implement such a protocol. In our implementation the randomness comes from performing projective measurements of the spatial quantum superpositions of weak coherent states \(|\psi\rangle\) onto the orthogonal elements \(|\text{LP}_{11\text{a}}\rangle\) or \(|\text{LP}_{11\text{b}}\rangle\) as Fig.**[1]** c) shows. The continuous wave laser is chopped
Figure 3: Demonstration of dynamic switching. a) The 120 ns long optical pulses entering the Sagnac interferometer. b) Detected optical pulse after the second lantern (PL2) when \(\phi_{R}\) is modulated with a 45 ns wide pulse, centered within the 120 ns optical pulse c) Inset showing a high-resolution plot of the routing demonstrating a rise time and fall time of 5.2 ns and 2.4 ns respectively.
up into 20 ns pulses at a repetition rate of 300 kHz using an intensity modulator (omitted in Fig.**[1]** for brevity) and attenuated in order to create weak coherent states with an average photon number of 0.16 per detection gate at the output of the spatial state generator. Therefore, the contribution of multiphoton events is small in our experiment (\(\sim\)1.15%), and all double detection events are discarded. The output of our platform is connected to a second lantern PL2 in a back-to-back connection which makes the inverse mapping: \(\ket{\mathrm{LP_{11a}}}\rightarrow\ket{0}\) and \(\ket{\mathrm{LP_{11b}}}\rightarrow\ket{1}\), thus performing the measurement projection in the basis spanned by \(\ket{\mathrm{LP_{11a}}}\) and \(\ket{\mathrm{LP_{11b}}}\). Note that if we adjust \(\alpha\) and \(\beta\) in the spatial state generator to have the same value by adjusting the driving voltage of \(\phi_{R}\), the incoming weak coherent state will have the same probability of being projected at port 1 or port 2 of PL2. The measurement performed in this fashion is equivalent to measuring in the path (computational) basis, and is thus independent of \(\phi_{x}\). Thus the MDI-QRNG is independent of the phase instability of the interferometer at the input of the lantern. This driving voltage level is fine-tuned using the FPGA in such a way that the entropy of the generated random numbers is maximized. Following the same procedure, it is also possible to obtain either \(\alpha=0\) or \(\beta=0\) in order to carry out deterministic measurement outputs by setting \(\phi_{R}\) accordingly, in which case the entropy of the generated bit sequence is minimized. We employ a single-detector scheme with time multiplexing in order to be able to measure both outputs **[35]**, which implies an extra 50% loss at the detection stage. This means that the two outputs \(\mathrm{D_{0}}\) and \(\mathrm{D_{1}}\) in Fig.**[1]** c) are mapped to an early and a late time-bin which are then detected serially by one single-photon detector. We employ a single IdQuantique id210 single-photon detection module working in gated mode, with 10% detection efficiency, a detection gate width of 5 ns, and dark count probability of 0.005%. The total losses for the spatial state generator add up to approximately 14 dB, where 3 dB come from the two passes through the circulator, 3 dB for each phase modulator, 4 dB for the photonic lantern and an extra 1 dB for connector losses. We stress that these losses are not an issue for a weak coherent state source such as ours, since they are all concentrated on the transmitter. For the receiver an extra 4 dB loss comes from the second photonic lantern.
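The quoted multiphoton contribution follows from the stated mean photon number if the weak coherent states are assigned Poissonian photon statistics; a quick numerical check:

```python
import numpy as np

mu = 0.16                               # mean photon number per detection gate
p0 = np.exp(-mu)                        # P(n = 0)
p1 = mu * np.exp(-mu)                   # P(n = 1)
p_multi = 1 - p0 - p1                   # P(n >= 2): multiphoton contribution
print(f"P(n >= 2)          = {p_multi:.4f}")             # ~0.0115, i.e. ~1.15%
print(f"P(n >= 2 | n >= 1) = {p_multi / (1 - p0):.4f}")  # fraction of occupied gates
```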
We run the MDI-QRNG protocol by randomly choosing in each measurement block whether to run test or randomness generation mode, with probabilities of 5% and 95% respectively. This random decision is given by a classical random number generator based on shift-registers implemented in the FPGA. We note that the additional source of randomness needed to run the MDI protocol is chosen by the user, based on something they can trust. Furthermore, under test mode, each of the two deterministic states is chosen with 50% probability, such that the test states \(\ket{\mathrm{LP_{11a}}}\) or \(\ket{\mathrm{LP_{11b}}}\) are prepared with 2.5% overall probability on average. The test data is used to estimate the fraction of private bits generated by Alice and Bob, while the randomness mode data will generate the final output after randomness extraction. Depending on the measurement mode, different voltages are applied to the PM to modify \(\phi_{R}\) inside the Sagnac loop. Furthermore, the FPGA registers the detection events from the single photon detector, and assigns a binary '0' ['1'] if the photon impinges on the detector at the time slots corresponding to the different outputs of the lantern. Each measurement block consists of 32 kilobits, which are stored in a buffer inside the FPGA before they are sent over Ethernet to a personal computer for randomness extraction and assessment. We adjust the voltage driving the phase modulator \(\phi_{R}\) to have close to equal probability of detecting a 0 or a 1, i.e. maximize the entropy of the generated sequence. The results of the 15 hour measurement run are displayed in Fig. 4a, where we plot the raw 8-bit Shannon entropy of each 32 kbit block as a function of time, giving an average of \(7.15\pm 0.12\) bits/Byte, thus demonstrating the stability of the system, with the deviation from the ideal case given mainly by the crosstalk of the photonic lantern. We also display the probabilities to detect the orthogonal test states \(\ket{\mathrm{LP_{11a}}}\) or \(\ket{\mathrm{LP_{11b}}}\) over the entire run, with the inset showing a narrower time range for these probabilities. The probabilities are defined as \(P_{\ket{\mathrm{LP_{11a}}}}=\frac{N_{0}}{N_{0}+N_{1}}\) and \(P_{\ket{\mathrm{LP_{11b}}}}=\frac{N_{1}}{N_{0}+N_{1}}\) where \(N_{0}\)\([N_{1}]\) are the number of detections in \(\mathrm{D_{0}}\)\([\mathrm{D_{1}}]\) for each measurement block when the state \(\ket{\mathrm{LP_{11a}}}\)\([\ket{\mathrm{LP_{11b}}}]\) is prepared and measured, i.e. when the QRNG is operating in test mode. Fig. 4b shows the normalized counts across the two detection time slots, as a function of a complete sweep of the driving voltage of phase \(\phi_{R}\).
From the observed probabilities of recording a binary '0' ['1'] when working under test mode we are able to calculate an adversary's guessing probability \(P_{\mathrm{guess}}(\omega_{x})\). In this case, \(\omega_{x}\) can be any of the three states belonging to the set formed by \(\{\ket{\mathrm{LP_{11a}}},\ket{\mathrm{LP_{11b}}},\ket{\phi}\}\) where \(\ket{\phi}\) is the superposition \(\ket{\phi}=1/\sqrt{2}(\ket{\mathrm{LP_{11a}}}+e^{i\phi_{x}}\ket{\mathrm{LP_{11b }}})\), which is used as the randomness generation state. Regardless of the value of \(\phi_{x}\) it will yield the same probabilities at the single-photon detectors, as the measurement is done in the path (computational) basis. Following the procedure in **[12]** we cast the problem into a simple convex optimization problem solved with semi-definite programming. We optimize over all POVMs that encode all possible guessing strategies an adversary Eve could use that are compatible with the observed probabilities in our experiment. From the observed success probability \(P_{\ket{\mathrm{LP_{11a}}}}=0.9973\pm 0.0144\) and
\(P_{\ket{\mathrm{LP_{11b}}}}=0.9913\pm 0.00093\) we compute Eve's guessing probability to be 0.64687. Utilizing the fact that the number of private bits is given by \(H_{\rm min}(x^{*})=-\log_{2}P_{\rm guess}(x^{*})\) we are able to certify that at least 60.52% of the bits are private, under the assumption that all two-photon events lower the amount of generated privacy.
Once the randomness extraction is done through a universal hashing extractor (Toeplitz extractor), following the methodology in [35, 37], we run the generated sequence through the NIST 800-22 statistical test suite with the results displayed in Table 1. Each of the 15 NIST tests is run on 1 Mbit blocks, and as we can see, the average p-values for all blocks in each of the tests are clearly higher than the significance level of 0.01, meaning that our sequence is random within the confidence limits. Furthermore the proportion of passes for each 1 Mbit block within each test is higher than the confidence value of 0.98, based on the number of trials that are done within each test.
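For readers unfamiliar with the extraction step, the following is a minimal sketch of Toeplitz (universal) hashing; the block size, the seed handling and the output length (chosen here to keep roughly 60% of the raw bits, mirroring the certified fraction of private bits) are illustrative choices of ours, not the exact parameters used in the experiment:

```python
import numpy as np

def toeplitz_extract(raw_bits, seed_bits, m):
    """Hash n raw bits down to m output bits with an m x n binary Toeplitz
    matrix whose diagonals are defined by the n + m - 1 seed bits."""
    n = len(raw_bits)
    assert len(seed_bits) == n + m - 1
    # T[i, j] = seed_bits[i - j + n - 1] is constant along each diagonal.
    T = np.array([[seed_bits[i - j + n - 1] for j in range(n)] for i in range(m)],
                 dtype=int)
    return T.dot(np.asarray(raw_bits, dtype=int)) % 2

rng = np.random.default_rng(seed=1)
raw = rng.integers(0, 2, size=1024)              # one block of raw bits
seed = rng.integers(0, 2, size=1024 + 620 - 1)   # public uniform seed
extracted = toeplitz_extract(raw, seed, m=620)   # ~60% of the block length
```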
## 5 Conclusion
We have designed and tested an all-fiber platform that is able to generate any two-dimensional general transverse spatial state based on linear polarized modes in few-mode optical fibers. Our scheme is based on spatial division multiplexing technology, and is a good candidate as an ultra-fast generator of transverse spatial states, such as orbital angular momentum modes, for applications in classical and quantum communications. The speed of the system comes from the fact that it employs electro-optical telecom modulators, instead of bulk optical elements from other popular implementations. This capability distinguishes our approach from what has been done before, making this proposal attractive in any application where fast switching of spatial states is needed. Furthermore, all the components used to develop the platform are commercially available, which is an advantage when replicating the results obtained in this work.
We demonstrate the versatility of our platform in an implementation of an MDI-QRNG protocol, where dynamic switching of quantum states is needed. The system shows great stability over 15 hours of continuous operation. Our scheme can be further scaled to higher dimensions by using few-mode fibers [13], beam splitters
**[12]**, and lanterns [38] that support a higher number of spatial modes. Although we have focused our efforts on showing that the platform has great potential to be employed as a basis for the integration of optical and quantum communications, many other areas can benefit directly from these results such as quantum imaging [39], astronomy [40], and metrology [41].
Figure 4: a) Probability of detecting a photon in \(\rm D_{0}\) [\(\rm D_{1}\)] when the test states \(\rm|LP_{11a}\rangle\) [\(\rm|LP_{11b}\rangle\)] are prepared and measured, and the raw entropy. Inset shows a zoom on the probabilities over 1 hour of the experimental run. b) Photon counts in \(\rm D_{0}\) [\(\rm D_{1}\)] when sweeping the voltage applied to the PM inside the Sagnac loop. The non-ideal extinction of \(\rm D_{0}\) is a result of afterpulsing, which could be mitigated with longer time-bin separation
## 6 Acknowledgements
The authors acknowledge support from CENIIT Linkoping University, the Swedish Research Council (VR 2017-04470), the Knut and Alice Wallenberg Foundation through the Wallenberg Center for Quantum Technology (WACQT) and by the QuantERA grant SECRET (VR 2019-00392).
|
2301.08792 | Inherent Limits on Topology-Based Link Prediction | Link prediction systems (e.g. recommender systems) typically use graph
topology as one of their main sources of information. However, automorphisms
and related properties of graphs beget inherent limits in predictability. We
calculate hard upper bounds on how well graph topology alone enables link
prediction for a wide variety of real-world graphs. We find that in the
sparsest of these graphs the upper bounds are surprisingly low, thereby
demonstrating that prediction systems on sparse graph data are inherently
limited and require information in addition to the graph topology. | Justus I. Hibshman, Tim Weninger | 2023-01-20T20:29:34Z | http://arxiv.org/abs/2301.08792v3 | # Inherent Limits on Topology-Based Link Prediction
###### Abstract
Link prediction systems (_e.g._ recommender systems) typically use graph topology as one of their main sources of information. However, automorphisms and related properties of graphs beget inherent limits in predictability. We calculate hard _upper_ bounds on how well graph topology alone enables link prediction for a wide variety of real-world graphs. We find that in the sparsest of these graphs the upper bounds are surprisingly low, thereby demonstrating that prediction systems on sparse graph data are inherently limited and require information in addition to the graph topology.
## I Introduction
Graph-based link prediction systems are widely used to recommend a wide variety of products and services. These modern-day systems are even used to find friends and surface possible romantic partners. Usually, these recommendations are based on a combination of node features and the topology (_i.e._ the link-structure) of the graph-data. However, most graphs are known to possess symmetries (_i.e._ automorphisms) in their topology. These automorphisms reduce the prediction task to guesswork and therefore place inherent limits on the predictability of many tasks and datasets. This raises two foundational questions in machine learning on graphs: What are the inherent limits on a graph structure's predictability, and do contemporary systems approach these performance limits?
The goal of the present work is to answer these questions. To do so we investigate how much information a graph's topology alone can provide to a link prediction algorithm. For example, consider the following scenario illustrated in Fig. 1: Imagine you are shown both a link prediction task and the answer to the same task (_i.e._ the links you must predict); imagine further that you are asked to perform link prediction on an anonymized version of the same problem. We call this the _maximally informed link prediction task_. We then ask: what is the best that an algorithm could do at this maximally informed link prediction task? Answering this question gives a hard upper bound on the performance that _any_ algorithm could achieve when presented with the standard prediction task.
At first, it may seem that the maximally informed link prediction task is trivially easy. However, as Figure 1 shows, _symmetries_ in the graph can render several possible edge-predictions structurally identical and therefore equally valid.
Of course, in practice, most systems will perform much worse at the standard link prediction task than an ideal algorithm could perform at the maximally informed task. Any link prediction algorithm will have some inherent (often implicit) modeling assumptions about the graph. For example, a simple model like triadic closure assumes that the likelihood of an edge is proportional to the number of triangles the edge would be involved in [4; 7; 11; 14]. Expressed in a Bayesian fashion, we can think of a link prediction algorithm as conditionalizing on evidence, where the data the algorithm sees is its evidence and the algorithm's inherent assumptions form its prior. One could think about an algorithm performing the maximally informed link prediction task as an algorithm doing standard link prediction with a perfect prior (_i.e._ 100% confidence on the correct graph). We focus on the maximally informed link prediction task because it enables us to establish limits that exist in the data itself, without making _any_ assumptions about the link prediction algorithm.
To our knowledge, this is the first work to quantify a maximal performance score that link predictors can obtain on a given task. Other work in predictability has focused on measures of predictability distinct from evaluation scores. For instance, Abeliuk et al. study how the predictability of time-series data degrades as the amount of data available decreases; they quantify predictability in terms of permutation entropy and signal self-correlation, as well as actual prediction performance of specific models [1]. Permutation entropy has also been found to be useful to measure predictability in ecology and physics, and self(auto)-correlation in finance [1; 3; 10; 17].
Scholars have also analyzed predictability limits in other domains. For instance, some have used notions of entropy to measure predictability limits on human travel [18; 23] and disease outbreaks [22]. Predictability is related to system complexity and chaos [5]. For instance, minute uncertainties on initial conditions can greatly limit one's ability to make accurate weather forecasts [26].
Others have done excellent work on the related but distinct case that the ground truth (_i.e._ correct output) itself is uncertain or inherently fuzzy. For instance, in these sorts of settings one might need an alternate way of scoring a classifier, such as Survey Equivalence [20]. Rather than fuzzy ground-truth, the present work focuses on cases where
Figure 1: Toy example graph with three held-out edges for link prediction testing. In this case, there exist eight equally plausible edge sets that form a graph isomorphic to the original. However, only one of these eight is “correct” from the perspective of standard link prediction evaluation.
the correct output is clearly known during evaluation, but where limits in predictability come from symmetries within the input data.
Ultimately, we find that commonly used graph datasets have surprisingly low predictability limits and that some GNNs appear to have exceeded these upper bounds in their reported results. We offer some possible explanations for these findings in our discussion.
## II Formalisms
### Graphs
We represent a graph \(G\) as \(G=(V,E)\) where \(V\) is the set of vertices (_i.e._, nodes) and \(E\) is the set of edges. The edges are pairs of vertices. If the graph's connections are considered to have a direction, we say that \(E\subseteq V\times V\) and that the graph's _non-edges_ are \((V\times V)\setminus E\). If the connections do not have a direction, then the edges are unordered pairs: \(E\subseteq\{\{a,b\}\ |\ (a,b)\in V\times V\}\). However, for simplicity, it is standard to always write \((a,b)\) rather than \(\{a,b\}\) even when talking about undirected graphs. An edge of the form \((a,a)\) is called a _self-loop_.
### Isomorphisms
Given two graphs \(G_{1}=(V_{1},E_{1})\) and \(G_{2}=(V_{2},E_{2})\), we say that they are _isomorphic_ if there exists a way to align the two graphs' vertices so that the structures overlap perfectly. Formally, \(G_{1}\) and \(G_{2}\) are isomorphic (expressed as \(G_{1}\cong G_{2}\)) if there exists a bijection between the vertices \(f:V_{1}\to V_{2}\) such that \((a,b)\in E_{1}\leftrightarrow(f(a),f(b))\in E_{2}\). In this case the function \(f\) is called an _isomorphism_. In this paper, whenever we refer to two graphs as being _equivalent_ or _identical_ we mean that they are isomorphic.
If \(f\) is an isomorphism between two graphs \(G_{1}\) and \(G_{2}\) we will sometimes denote this as \(G_{1}\cong_{f}G_{2}\).
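A small illustration of the definition, using networkx on two toy graphs of ours:

```python
import networkx as nx

# Two labelings of the same 4-cycle: structurally identical, different names.
G1 = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0)])
G2 = nx.Graph([("a", "c"), ("c", "b"), ("b", "d"), ("d", "a")])

print(nx.is_isomorphic(G1, G2))  # True: some bijection f aligns the edge sets
# One explicit isomorphism f, as a node mapping from G1 to G2:
matcher = nx.algorithms.isomorphism.GraphMatcher(G1, G2)
print(next(matcher.isomorphisms_iter()))  # e.g. {0: 'a', 1: 'c', 2: 'b', 3: 'd'}
```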
### Automorphism Orbits
Within the context of a single graph, the _automorphism orbit_ of an object (_i.e._ a vertex or an edge) captures its equivalence with other objects in the graph. Two objects are in the same orbit if and only if the data _in no way_ distinguishes between the two objects.
An _automorphism_ of a graph is an isomorphism of the graph with itself. That is, an automorphism of a graph \(G=(V,E)\) is a bijective function \(f:V\to V\) such that:
\[(a,b)\in E\leftrightarrow(f(a),f(b))\in E\]
The set of all automorphisms of a graph \(G\) form the _automorphism group_ of the graph and is denoted \(\operatorname{Aut}(G)\).
The _automorphism orbits_ of a graph typically refer to collections of equivalent vertices; however, they can also refer to collections of equivalent edges. The orbit of a vertex \(a\) in graph \(G\) is the set \(\operatorname{AO}_{G}(a)=\{f(a)\ |\ f\in\operatorname{Aut}(G)\}\). Similarly, the orbit of an edge \(e=(a,b)\) in graph \(G\) is the set \(\operatorname{AO}_{G}(e)=\{(f(a),f(b))\ |\ f\in\operatorname{Aut}(G)\}\). Note that \(a\in\operatorname{AO}_{G}(a)\) and \(e\in\operatorname{AO}_{G}(e)\) due to the trivial automorphism \(f(x)=x\).
We can even consider the orbits of _non-existent_ edges (_i.e._ non-edges). Let \((a,b)\notin E\) be an edge which is not in \(G\). We can still define the orbit of \((a,b)\) to be \(\{(f(a),f(b))\ |\ f\in\operatorname{Aut}(G)\}\). These orbits are collections of edges not in \(G\) which are equivalent given \(G\).
In the context of this paper, two objects (vertices or edges) are considered equivalent if they are in each other's orbits. We will denote the set of all automorphism orbits for the vertices of a graph \(G\) as \(\mathcal{AO}_{V,G}\) - likewise with the edges and non-edges: \(\mathcal{AO}_{E,G}\) and \(\mathcal{AO}_{\bar{E},G}\) respectively. The set of all automorphism orbits forms a partition over the entities.
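For small graphs, the orbits can be computed directly by enumerating all automorphisms; the brute-force helper below is only meant to make the definitions concrete and does not scale to large graphs. Orbits of non-edges can be obtained the same way by iterating over the non-edges instead of the edges.

```python
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def automorphism_orbits(G):
    """Vertex and edge automorphism orbits of a small undirected graph G."""
    autos = list(GraphMatcher(G, G).isomorphisms_iter())  # all automorphisms of G
    vertex_orbits = {v: frozenset(f[v] for f in autos) for v in G.nodes()}
    edge_orbits = {(a, b): frozenset(frozenset((f[a], f[b])) for f in autos)
                   for a, b in G.edges()}
    return vertex_orbits, edge_orbits

# Path graph 0-1-2-3: endpoints 0 and 3 are equivalent, as are 1 and 2, and
# the two outer edges (0,1) and (2,3) share an orbit.
v_orb, e_orb = automorphism_orbits(nx.path_graph(4))
print(v_orb[0] == v_orb[3], e_orb[(0, 1)] == e_orb[(2, 3)])  # True True
```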
### K-hop Neighborhoods
In practice, most link prediction algorithms do not use the entire graph when predicting the probability of edge membership. Rather, they tend to use local context. We formalize one intuitive notion of local context here that will be used throughout the paper.
Given a node or an edge, we can consider the nodes surrounding the entity to be the collection of nodes you could reach by beginning at the node (or the edge's endpoints), and taking up to \(k\) steps across edges for some value \(k\). We can express this formally as follows:
Given a vertex \(x\in V\), let \(N(x)\) be \(x\)'s neighbors - that is, \(N(x)=\{y\ |\ (y,x)\in E\vee(x,y)\in E\}\). Now, given a set of vertices \(S\subseteq V\), we can define that set's neighbors to be \(N(S)=\bigcup_{x\in S}N(x)\). For any natural number \(k\), we define the \(k\)-hop neighbors of \(S\) to be \(N_{k}(S)=S\) when \(k=0\) and \(N_{k}(S)=N_{k-1}(S)\cup N(N_{k-1}(S))\) when \(k>0\).
We can now define the \(k\)-hop neighborhood of a set of vertices \(S\). It is the induced subgraph on the \(k\)-hop neighbors. Formally, it is the graph
\[G_{k}(S)=(N_{k}(S),\{(a,b)\ |\ (a,b)\in E\wedge a,b\in N_{k}(S)\})\]
For an edge \(e=(a,b)\) we define its \(k\)-hop neighborhood \(G_{k}(e)\) to be the \(k\)-hop neighborhood of its endpoints: \(G_{k}(\{a,b\})\).
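The following short Python sketch, with helper names of our own choosing, mirrors the recursive definition of \(N_{k}(S)\) and the induced subgraph \(G_{k}(S)\); neighbors are taken in both edge directions, matching the definition of \(N(x)\).

```python
def k_hop_neighborhood(edges, seed, k):
    """Return (nodes, induced_edges) of the k-hop neighborhood G_k(seed).

    `edges` is an iterable of pairs (a, b); `seed` is a set of vertices
    (an edge's endpoints when computing G_k(e)).
    """
    adjacency = {}
    for a, b in edges:
        adjacency.setdefault(a, set()).add(b)
        adjacency.setdefault(b, set()).add(a)

    frontier = set(seed)
    reached = set(seed)
    for _ in range(k):
        # Level-by-level expansion is equivalent to N_k(S) = N_{k-1}(S) u N(N_{k-1}(S)).
        frontier = {y for x in frontier for y in adjacency.get(x, ())} - reached
        reached |= frontier

    induced = [(a, b) for a, b in edges if a in reached and b in reached]
    return reached, induced

# Example: the 1-hop neighborhood of the non-edge (0, 3) in a small path graph.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
nodes, sub = k_hop_neighborhood(edges, {0, 3}, k=1)
print(sorted(nodes), sub)   # nodes {0, 1, 2, 3, 4} and the edges induced on them
```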
## III Link Predictors and their evaluation
### Link Predictors
A link predictor is essentially a binary classifier for non-edges. It produces a verdict indicating whether a given non-edge is, or should be, a member of the graph.
Let \(G=(V,E)\) be a graph and \(\bar{E}\) be the set of non-edges in \(G\); that is, \(\bar{E}=\{(a,b)\ |\ a,b\in V\wedge(a,b)\notin E\}\). A hard link predictor (_i.e._ hard binary classifier) for \(G\) and \(\bar{E}\) is a process/algorithm/function \(\ell_{G}:\bar{E}\rightarrow\{\texttt{Positive},\texttt{Negative}\}\) that gives a non-edge a label (Positive/Negative). A soft link predictor (_i.e._, soft binary classifier) for \(G\) and \(\bar{E}\) is a function \(\ell_{G}:\bar{E}\rightarrow\mathbb{R}\) that gives a non-edge a score. The higher the score, the more likely the non-edge is considered to be one of the Positives; the lower the score, the more likely the non-edge is considered to be a Negative. The function may be the result of training a model on a collection of correct edges/non-edges via manual parameter tuning, statistical analysis, or any number of other methods.
In practice, soft classifier scores are often probabilities. Further, the scores are often turned into hard labels by picking a threshold value \(t\) and giving all entities with a score \(\geq t\) the Positive label and all others the Negative label.
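A two-line sketch (ours) of this thresholding step:

```python
def harden(scores, t):
    """Turn soft scores into hard Positive/Negative labels at threshold t."""
    return {e: ("Positive" if s >= t else "Negative") for e, s in scores.items()}

print(harden({("a", "b"): 0.9, ("a", "c"): 0.2}, t=0.5))
```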
In the present work we require that a soft link predictor be deterministic when scoring edges and that the score that it gives for one non-edge does not depend on whether the classifier has already given a score to another non-edge. These requirements, while often desirable, might not hold true of some actual procedures/algorithms.
We again note that the present work focuses exclusively on the graph topology. Any notion of labels or ids or other information on the nodes and edges is therefore ignored for the purposes of the present work. This means that the representation or ordering of vertex labels does not change the algorithm's prediction. It makes no difference to the algorithm whether the vertices are labeled 1 through \(n\), \(n\) through 1, or labeled with names like "Alice" and "Bob." Formally, we express these requirements with the following:
\[e_{1}\in\mathrm{AO}_{G}(e_{2})\rightarrow\ell_{G}(e_{1})=\ell_{G}(e_{2})\]
The point of this is to formalize the principle that: _When all information the predictor uses to classify two non-edges is structurally the same, then the output scores are the same._ When we consider link predictors that only use the \(k\)-hop neighborhood of an edge to score the edge, this principle becomes:
\[\left(\exists f.\ G_{k}(e_{1})\cong_{f}G_{k}(e_{2})\wedge f(e_{1})\in \mathrm{AO}_{G_{k}(e_{2})}(e_{2})\right)\rightarrow\ell_{G}(e_{1})=\ell_{G}(e _{2})\]
This is to say that if edges \(e_{1}\) and \(e_{2}\) play an identical role in identical \(k\)-hop neighborhoods, then they are given the same score. Note that by definition \(e_{1}\in\mathrm{AO}_{G}(e_{2})\) implies \(\left(\exists f.\ G_{k}(e_{1})\cong_{f}G_{k}(e_{2})\wedge f(e_{1})\in\mathrm{ AO}_{G_{k}(e_{2})}(e_{2})\right)\) for any \(k\).
### Performance Scores for Link Predictors
In practice, almost all link prediction classifiers are soft classifiers. There are a number of nuances to how these scores are obtained that are worth highlighting here.
When evaluating soft classifiers, researchers tend to evaluate the predictors across different thresholds \(t\). Each (soft predictor, threshold) pair represents a possible hard link predictor. Thus the performance of a soft predictor can be considered to be the goodness of the collection of hard predictors it offers. This can be measured in terms of different criteria. One common criterion is the relationship between the predictor's True Positive Rate (TPR) and False Positive Rate (FPR), which generates the widely used ROC curve. Another common criterion is the relationship between the predictor's Precision and Recall, which leads to the Precision-Recall curve. For an in-depth analysis exploring the relationship between ROC curves and Precision-Recall curves (PR curves), we recommend the paper by Davis and Goadrich [8].
In the context of ROC and PR curves, an interpolation between points represents a way of combining the two hard classifiers (the two points) into a new hard classifier. This can be done by picking a value \(\alpha\in[0,1]\) and tossing an \(\alpha\)-weighted coin every time an entity is scored to decide which of the two hard classifiers to use for the entity. This is implicitly how we as well as Davis and Goadrich perform interpolation. It turns out that the popular trapezoidal interpolation is incorrect for Precision-Recall space, because hard classifiers cannot be combined to get precision-recall pairs that interpolate linearly [8].
Sometimes, rather than calculating the AUPR exactly, it is approximated with a measure called Average Precision (AP). Rather than doing a complex interpolation between two precision-recall points, Average Precision simply uses the precision of the rightmost point (the point with the higher recall).
## IV Optimal prediction performance
Recall that \(G=(V,E)\) is a graph and \(\bar{E}\) is the set of non-edges in \(G\). Let \(L:\bar{E}\rightarrow\{\texttt{Positive},\texttt{Negative}\}\) be the correct labeling of those non-edges and let \(P=\{e\ |\ e\in\bar{E}\wedge L(e)=\texttt{Positive}\}\) be the set of positives.
### ROC, AUPR, and AP
We prove in the Appendix that the optimal ROC and AUPR scores a soft classifier can obtain equal the ROC/AUPR scores obtained from a classifier \(\ell_{G}^{*}:\bar{E}\rightarrow\mathbb{R}\) which satisfies the following property:
\[\ell_{G}^{*}(e_{1})\geq\ell_{G}^{*}(e_{2})\leftrightarrow\frac{|\text{AO}_{G} (e_{1})\cap P|}{|\text{AO}_{G}(e_{1})|}\geq\frac{|\text{AO}_{G}(e_{2})\cap P|}{ |\text{AO}_{G}(e_{2})|} \tag{1}\]
Amongst other things, this means that a classifier which correctly outputs the probabilities that an object is a Positive is an optimal classifier in these metrics. Note that if incorrect (_e.g._ trapezoidal) interpolation is used for AUPR calculation then this fact no longer holds true.
This property of optimal classifiers permits us to easily compute the maximal ROC/AUPR scores that any algorithm could have obtained on a given dataset and task. We provide code both for proper AUPR calculation and for optimal ROC/AUPR scores at [repository link to be added upon paper acceptance].
Let \(\langle O_{1},O_{2},...,O_{|\mathcal{AO}_{\bar{E},G}|}\rangle\) be an ordering of the automorphism orbits that respects the property in Equation 1. Further, let \(p_{0}=0\) and \(p_{i}=p_{i-1}+|P\cap O_{i}|\) for \(1\leq i\leq|\mathcal{AO}_{\bar{E},G}|\). Likewise let \(t_{0}=0\) and \(t_{i}=t_{i-1}+|O_{i}|\) for \(1\leq i\leq|\mathcal{AO}_{\bar{E},G}|\).
Once we can assume this ordering, we can simply apply the standard formulae for ROC and AUPR. Expressed in terms of the notation we are using, these formulae are as follows:
\[\text{Max ROC}=\sum_{i=0}^{|\mathcal{AO}_{\bar{E},G}|-1}\frac{p_{i+1}-p_{i}}{ |P|}\cdot\frac{(t_{i+1}-p_{i+1})+(t_{i}-p_{i})}{2|\bar{E}\setminus P|} \tag{2}\]
\[\text{Max AUPR}=\sum_{i=0}^{|\mathcal{AO}_{\bar{E},G}|-1}\frac{p_{i+1}-p_{i}} {|P|}\cdot\frac{p_{i+1}-p_{i}}{t_{i+1}-t_{i}}\cdot\left(1+\left(\frac{p_{i}}{p _{i+1}-p_{i}}-\frac{t_{i}}{t_{i+1}-t_{i}}\right)\cdot\ln\left(\frac{t_{i+1}}{ t_{i}}\right)\right) \tag{3}\]
The AUPR formula comes from integrating over the precision-recall point interpolation technique discussed above in Sec. III.2. Note that when \(p_{i+1}-p_{i}=0\) (_i.e._\(|P\cap O_{i+1}|=0\)), the formula inside the summation becomes zero; there are no divisions by zero from \(p_{i+1}-p_{i}=0\). Further \(t_{i+1}-t_{i}=|O_{i+1}|>0\). Lastly, when \(t_{i}=0\), then \(p_{i}=0\), and because \(\lim_{x\to 0^{+}}x\ln\frac{1}{x}=\lim_{x\to 0^{+}}x\left(\ln(1)-\ln(x) \right)=0\), we do not get a division by zero issue with \(t_{i}\).
Equipped with these formulae, we can now begin to calculate the maximum possible performance scores on actual prediction tasks.
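A small Python sketch of this calculation, assuming the orbits are supplied as (size, number of positives) pairs; the ROC area is accumulated with the standard trapezoid between consecutive (FPR, TPR) points and the AUPR with the interpolation of Equation 3. Function and variable names are ours.

```python
import math

def max_roc_aupr(orbits, num_positives, num_negatives):
    """Upper bounds on ROC-AUC and AUPR for an idealized classifier.

    `orbits` is a list of (orbit_size, positives_in_orbit) pairs for the
    non-edge automorphism orbits. The optimal classifier (Eq. 1) scores
    orbits by their positive fraction, so we sweep them best-first.
    """
    orbits = sorted(orbits, key=lambda o: o[1] / o[0], reverse=True)
    P, N = float(num_positives), float(num_negatives)

    roc, aupr = 0.0, 0.0
    p_prev, t_prev = 0.0, 0.0           # cumulative positives / entities seen so far
    for size, pos in orbits:
        p_next, t_next = p_prev + pos, t_prev + size
        dp, dt = p_next - p_prev, t_next - t_prev

        # ROC: trapezoid between consecutive (FPR, TPR) points.
        fpr_prev, fpr_next = (t_prev - p_prev) / N, (t_next - p_next) / N
        roc += (fpr_next - fpr_prev) * (p_prev + p_next) / (2.0 * P)

        # AUPR: exact precision-recall interpolation, Eq. (3); the term is zero when dp == 0,
        # and the logarithmic part vanishes when t_prev == 0 (then p_prev == 0 as well).
        if dp > 0:
            log_term = 0.0 if t_prev == 0 else \
                (p_prev / dp - t_prev / dt) * math.log(t_next / t_prev)
            aupr += (dp / P) * (dp / dt) * (1.0 + log_term)

        p_prev, t_prev = p_next, t_next
    return roc, aupr

# Example: one orbit of 10 non-edges with 9 positives, one of 10 with 1 positive.
print(max_roc_aupr([(10, 9), (10, 1)], num_positives=10, num_negatives=10))
# -> (0.9, ~0.8755)
```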
As we mentioned above, Average Precision (AP) is sometimes used to approximate AUPR. However, the nice result we prove for ROC and AUPR concerning equivalence class ordering does _not_ hold for AP. Fortunately, our upper bound on AUPR is also an upper bound on AP, so we can still upper bound the AP scores that one might obtain. We provide a short proof of this in the appendix.
## V Methodology
Our main experiment is to calculate how maximum link prediction scores vary with the amount of information given to an idealized algorithm. We run this test on a wide variety of real-world graphs. The procedure runs as follows:
1. Begin with a graph \(G=(V,E)\) and an edge removal probability \(p\) (we set \(p\gets 0.1\)).
2. Define the set of negatives \(N\) as all edges not in \(G\).
3. Remove each edge in \(G\) with probability \(p\) (independently) and add the removed edges to the set of positives \(P\). Call the resulting graph \(H\leftarrow(V,E\setminus P)\).
4. Get a (hashed) canonical representation for each non-edge's automorphism orbit in \(H\).
5. Use the collected information to calculate the maximum scores via equations 2 and 3.
6. Assign \(k\gets 1\).
7. Get a (hashed) canonical representation of the \(k\)-hop neighborhood for each non-edge in \(H\) where the non-edge's endpoints are given a distinct color from the rest of the nodes.
8. Use the collected information to calculate the maximum scores when using at most \(k\) hops of information about a non-edge.
9. If the performance limit just obtained from step 8 is equal to (or within 0.005 of) the performance limit obtained from step 5, then stop. Otherwise, assign \(k\gets k+1\) and go to step 7.
We perform the above procedure multiple times for each graph. Each iteration corresponds to different, random possible sets of missing edges; each set of missing edges can be slightly different in terms of the limit on its predictability. We get the mean value and 95% confidence interval for each distinct value of \(k\).
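A rough Python sketch of this procedure on a small graph is given below. One substitution is worth flagging: instead of an exact canonical form for the colored \(k\)-hop neighborhoods, it uses networkx's Weisfeiler-Lehman graph hash (available in networkx \(\geq\) 2.6), which is isomorphism-invariant but may merge structurally distinct neighborhoods; the classes it produces can therefore be coarser than the exact ones, and the score they yield can only understate the exact \(k\)-hop limit. The grouped counts can be fed into the Max ROC/AUPR sketch above; all function names are ours.

```python
import random
import networkx as nx

def khop_signature(G, u, v, k):
    """Hash of the k-hop neighborhood around non-edge (u, v), endpoints marked.

    Weisfeiler-Lehman hashing is isomorphism-invariant but may merge
    structurally distinct neighborhoods, so the induced classes can be
    coarser than the exact ones used in the paper.
    """
    nodes = set(nx.ego_graph(G, u, radius=k)) | set(nx.ego_graph(G, v, radius=k))
    sub = G.subgraph(nodes).copy()
    colors = {n: ("endpoint" if n in (u, v) else "other") for n in sub}
    nx.set_node_attributes(sub, colors, "color")
    return nx.weisfeiler_lehman_graph_hash(sub, node_attr="color", iterations=3)

def neighborhood_classes(G, p=0.1, k=1, seed=0):
    """Hide each edge with probability p, then group the non-edges of H by signature."""
    rng = random.Random(seed)
    positives = {e for e in G.edges() if rng.random() < p}
    H = G.copy()
    H.remove_edges_from(positives)

    classes = {}                                  # signature -> (size, positives)
    for u, v in nx.non_edges(H):
        sig = khop_signature(H, u, v, k)
        size, pos = classes.get(sig, (0, 0))
        is_pos = (u, v) in positives or (v, u) in positives
        classes[sig] = (size + 1, pos + is_pos)
    return list(classes.values()), len(positives)

if __name__ == "__main__":
    G = nx.karate_club_graph()
    classes, n_pos = neighborhood_classes(G, p=0.1, k=1)
    n_neg = sum(size for size, _ in classes) - n_pos
    # These (size, positives) pairs can be fed into the max_roc_aupr sketch above.
    print(len(classes), "classes;", n_pos, "hidden test edges")
```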
We tested the link-prediction limits on a wide variety of real-world graphs. They are listed in Table 1.
## VI Results
### Sparsity Tends to Lower the Upper-Bound
We found that on most graphs, the upper bounds were near 100%, even when using 1-hop neighborhoods; we suspect that this is because when degrees are high enough there is still a large number of possible 1-hop neighborhoods such that the hypothetical optimal algorithm can take advantage of the slightest difference between neighborhoods. However, we found that on the sparsest graphs the results told a different and very interesting story.
We show the results for the four sparsest graphs: the Cora and Citeseer citation (sub)graphs, the CCSB-YI1 Protein-Protein Interaction graph, and a US Powergrid network. The results are in Figure 2. In particular, we focus on the AUPR values, because even though link prediction papers often report ROC scores, link predictors can easily get large ROC scores due to the class imbalance (the sheer number of non-edges) [24].
In summary, our results give good evidence that when data becomes sparse enough, graph topology alone is severely limited in its ability to indicate a difference between genuine and fake missing edges.
Figure 2: Hard upper bounds on link prediction performance as it varies with the amount of information given to a link prediction algorithm. The horizontal line shows the limit when using the entire graph (\(k=\infty\)). Ten percent of the graph’s edges were randomly selected as test edges. The multiple points at a single value of \(k\) are from different sets of randomly chosen test edges; left-right jitter is employed to aid in visualization. Error bars are 95% confidence intervals.
### Negative Sampling Methodologies Produce Artificially High Scores
We were curious to see how these fundamental limits compared to recent reports of link-prediction performance. As a small case study, we considered the Graph Convolutional Neural Network Auto-Encoder (GCNAE), a widely used and referenced model that can perform topology-only link-prediction [13]. In the original GCNAE paper, the authors tested their model on undirected versions of the Cora and Citeseer citation networks. They reported ROC scores and AP scores.
We found that AP scores they reported for the Citeseer network were well _above_ our upper bound, indicating that there was a difference in the calculated AP. We suspect the difference is that in the GCNAE paper's tests the number of negative edges was downsampled, perhaps to one negative test edge per positive test edge. This sort of downsampling is common when performing link prediction evaluation with AP or AUPR; however downsampling tends to boost the AP and AUPR scores significantly relative to what they would have been if the full set of negatives was used in testing [24]. The GCNAE paper does not specify if (and/or how much) downsampling occurred.
The paper's reported ROC scores are well below our upper bound on ROC. This makes sense as the ROC score is not affected by downsampling [24]. If we downsample the number of negative edges to one negative edge per positive edge when calculating the AP limits, we get that the GCNAE's AP performance is also well below the upper bound. We show the numeric results in Table 2.
The point of this case study is twofold. Firstly, a metric (_e.g._ AP) may have different meanings depending on how it is used, and our methodology may be able to help retroactively determine which approach was used if the original paper does not specify.
Secondly, and perhaps of greater interest, state of the art link prediction systems using the topology of a network do not reach the topology-based upper limit on performance. We take this to suggest either that state of the art link prediction systems have room for improvement in their use of graph topology _or_ that what structurally differentiates the \(k\)-hop neighborhoods of true edges from the \(k\)-hop neighborhoods of false edges in our tests is basically noise that an algorithm should not pay attention to if it wishes to generalize well. After all, if for example the 1-hop neighborhoods of two different non-edges both have 20 edges per neighborhood and differ in only one place, should we expect a link prediction algorithm to always treat that difference as significant? We propose some future work in Section VII.1 for exploring how the upper limit on performance changes when the resolution of the data is a bit blurrier, thereby reducing this noise.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} Graph & (**Un**)**Directed & (**Un**)**Weighted & \(|V|\) & \(|E|\) & \# Self-loops \\ \hline \hline Species 1 Brain [1] & D & U & 65 & 1139 & 0 \\ \hline Highschool Friendships [2] & D & W & 70 & 366 & 0 \\ \hline Foodweb [2] & D & U & 183 & 2476 & 18 \\ \hline Jazz Collaboration [2] & U & U & 198 & 5484 & 0 \\ \hline Faculty Hiring (C.S.) & D & W & 206 & 2929 & 124 \\ \hline Congress Mentions [2] & D & W & 219 & 586 & 2 \\ \hline Medical Innovation [2] & D & U & 241 & 1098 & 0 \\ \hline C-Elegans Metabolic [2] & U & U & 453 & 2025 & 0 \\ \hline USA Top 500 Airports (2002) [3] & D & U & 500 & 5960 & 0 \\ \hline Eucore Emails [4] & D & U & 1005 & 24929 & 642 \\ \hline Roget Concepts [3] & D & U & 1010 & 5074 & 1 \\ \hline CCSB-YI1 [5] & U & U & 1278 & 1641 & 168 \\ \hline MySQL Fn. Calls [6] & D & U & 1501 & 4212 & 13 \\ \hline USA Airports (2010) [3] & D & U & 1574 & 28236 & 0 \\ \hline Collins Yeast [3] & U & U & 1622 & 9070 & 0 \\ \hline Cora Citation [1] & D & U & 2708 & 5429 & 0 \\ \hline Citeseer Citation [1] & D & U & 3264 & 4536 & 0 \\ \hline Roman Roads (1999) [3] & D & U & 3353 & 8870 & 0 \\ \hline USA Powergrid [3] & U & U & 4941 & 6594 & 0 \\ \end{tabular}
\end{table}
Table 1: Graphs used for tests – The edge count does not include self-loops, which are listed separately – Sources: [1]: [21], [2]: [15], [3]: [6], [4]: [16], [5]: [25], [6]: [19]
#### vi.2.1 Caveats
There are three caveats to our assessment that the GCNAE paper downsampled. Firstly, it is hypothetically possible that the authors could have gotten extremely lucky in terms of the set of edges randomly removed to form the test set. Our analysis does not exhaustively consider all possible test sets; it merely considers a number of randomly sampled test sets. However, we consider an explanation of the score in terms of a lucky test edge set rather unlikely given the narrow band of upper bounds our samples form and because the reported ROC scores are well below the upper limit - it would be difficult to get very lucky on the harsher metric (AP) but not the easier one (ROC).
The second caveat is that even when running in _featureless_ mode (_i.e._ topology-only prediction), the GCNAE model is given each node's index as a feature. If the nodes' indices were randomly assigned, these features would simply be noise and would do nothing to help link-prediction performance. However, it is possible that the way the datasets were originally compiled ordered the nodes such that, for example, edges between nodes listed close together were more common in the dataset than they would have been if the dataset's node indices were randomly assigned.
The third caveat is that in a paper by the same authors with the same datasets, the authors specify that they add a self-loop to each node [12]; if they allow those self-loop edges to be among the test set, those edges become relatively easy to predict.
## VII Discussion
### Applications, Extensions, and Limitations
In addition to the fact that our methodology gives insights about topology-based link prediction, we believe the kind of analysis we offer in this paper can be extended and expanded. We observed how maximum possible performance on a particular binary classification task (_i.e._ link prediction) varies with the amount of information available to the classifier. At a certain resolution, inputs to the algorithm look identical. In our analysis, the differing resolutions were the differing \(k\) for the \(k\)-hop neighborhood subgraphs. However, these resolutions could hypothetically be any reasonable representation of the data.
If these kinds of equivalence classes can be created at widely varying resolutions for a classification task, then researchers will begin to be able to say things like "our algorithm works as well on the full data as an optimal algorithm would work on data of resolution \(X\)."
The key ingredient for our analysis was that at a given resolution we were able to establish equivalence classes on the objects being classified (the non-edges); this let us calculate how well a hypothetical optimal algorithm would perform on those classes. We were able to get equivalence classes because our equality relation on two test objects was isomorphic equivalence of the objects' \(k\)-hop neighborhoods _and thus the relation was transitive_. Yet we expect that even in cases where a transitive equality relation is not immediately available, one could create such a relation by using a distance measure to cluster test inputs and then defining equality as being in the same cluster. The more fine-grained the clusters, the higher the resolution of data given to the hypothetical optimal algorithm.
The main limit we are aware of for this kind of analysis is that at high data resolution, noise can easily dominate the analysis. That is to say, at high resolution, random noise tends to render each entity to be classified unique, and thus the hypothetical, optimal algorithm with a perfect prior will be able to correctly distinguish any two entities and get a perfect score.
For example, consider link prediction on pure noise: That is, consider the process of randomly generating a graph where each edge is present independently with some probability \(p\) and then randomly hiding some fraction of the
\begin{table}
\begin{tabular}{c||l|l||l|l|l} Graph & GCNAE ROC & ROC Upper-Bound & GCNAE AP & AP Upper-Bound & AP Upper-Bound (1:1 Downsampling) \\ \hline \hline Cora & 0.843 \(\pm\) 2e-4 & 0.99992 \(\pm\) 3e-5 & 0.881 \(\pm\) 1e-4 & 0.903 \(\pm\) 0.020 & 0.99999 \(\pm\) 9e-6 \\ \hline Citeseer & 0.787 \(\pm\) 2e-4 & 0.9981 \(\pm\) 3e-4 & 0.841 \(\pm\) 1e-4 & 0.686 \(\pm\) 0.019 & 0.9989 \(\pm\) 5e-4 \\ \end{tabular}
\end{table}
Table 2: Comparison to the GCNAE’s Reported Results – The \(\pm\) symbol indicates the 95% confidence interval. We conclude that the GCNAE paper is likely downsampling negative test edges in the process of calculating AP. More importantly, once downsampling is factored in, there is a notable gap between the hypothetical ideal performance and state of the art topology-based performance. We discuss this more in Sec. VI.B. Note: These results are for undirected versions of the graphs, whereas the results in Fig. 2 are for the directed versions (GCNAE only does link prediction on undirected graphs). Also, note that the version of Citeseer that the GCNAE paper used has some extra nodes with no links, whereas the version we used for Fig. 2 does not.
edges to create a link prediction task. Such a random graph will likely have no global symmetry [9], so at high data resolution (_e.g._\(k=3\)) every non-edge will be unique and thus the hypothetical, optimal algorithm with a perfect prior will obtain a perfect score, even though a real-world algorithm that does not mystically foreknow the answer can do no better than considering each edge to be equally likely, because that is in fact how the graph was constructed.
Fortunately for our kind of analysis, real-world algorithms are usually designed to ignore noise in the first place, so a data resolution that successfully filters out noise can simultaneously be relevant and provide a non-trivial upper bound on optimal performance.
### Conclusion
We presented a methodology for calculating hard limits on how well a link prediction algorithm could perform when using structural information only. This helps analyze how much information graph structure does or does not provide for link prediction. We found that very sparse graphs give rise to significant inherent difficulties and therefore contain strong caps on optimal performance.
We also observed that a state of the art topology-based link prediction method performs well below the upper bound in some cases, which we believe either means that the link prediction algorithms have serious room for improvement _or_ that our test sometimes picks up on "noise" that indeed differentiates edges from non-edges but which an algorithm should not be expected to pick up on because that noise would not behave in any consistent or inferable manner. These observations prompted our discussion on further avenues of discovery and extensions of our methodology; we expect that an analysis similar to ours which finds a way to obtain performance upper bounds at varying degrees of blurring the noise would provide further insights.
###### Acknowledgements.
This work was funded by grants from the US National Science Foundation (#1652492 and #CCF-1822939).
|
2302.00846 | A time-dependent Markovian model of a limit order book | This paper considers a Markovian model of a limit order book where
time-dependent rates are allowed. With the objective of understanding the
mechanisms through which a microscopic model of an orderbook can converge to
more general diffusion than a Brownian motion with constant coefficient, a
simple time-dependent model is proposed. The model considered here starts by
describing the processes that govern the arrival of the different orders such
as limit orders, market orders and cancellations. In this sense, this is a
microscopic model rather than a ``mesoscopic'' model where the starting point
is usually the point processes describing the times at which the price changes
occur and aggregate in these all the information pertaining to the arrival of
individual orders. Furthermore, several empirical studies are performed to shed
some light into the validity of the modeling assumptions and to verify whether
certain stocks satisfy the conditions for their price process to converge to a
more complex diffusion. | Jonathan A. Chávez-Casillas | 2023-02-02T03:15:56Z | http://arxiv.org/abs/2302.00846v1 | # A time-dependent Markovian model of a limit order book
###### Abstract
This paper considers a Markovian model of a limit order book where time-dependent rates are allowed. With the objective of understanding the mechanisms through which a microscopic model of an orderbook can converge to more general diffusion than a Brownian motion with constant coefficient, a simple time-dependent model is proposed. The model considered here starts by describing the processes that govern the arrival of the different orders such as limit orders, market orders and cancellations. In this sense, this is a microscopic model rather than a "mesoscopic" model where the starting point is usually the point processes describing the times at which the price changes occur and aggregate in these all the information pertaining to the arrival of individual orders. Furthermore, several empirical studies are performed to shed some light into the validity of the modeling assumptions and to verify whether certain stocks satisfy the conditions for their price process to converge to a more complex diffusion.
keywords:Limit order book, Price process, Diffusion approximation, Time-dependency
## 1 Introduction
The evolution of trading markets has progressed considerably in the last few decades. One of the main drivers of such evolution has been the creation and usage of fast-paced technological developments. In the past, liquidity was provided by the so-called market makers, which collected buy and sell orders from all market participants to set bid and ask quotes. While this traditional method was generally accepted, mainly because of the lack of alternatives, it was also criticized due to the bias and questionable conflict of interest of the market maker. Nowadays, most exchanges use completely automated platforms called Electronic Communication Networks (ECN). These ECNs enable a continuous double auction trading mechanism, which eliminates the need for a market maker or an intermediary that matches the opposite parties in a trade. The auction that the ECN manages is called "continuous double" since traders can submit orders in the form of bids (i.e., buy orders) as well as asks (i.e., sell orders) at any point in time. ECNs significantly increase the speed of trading, taking only a few milliseconds from sending an order to its execution.
A bid limit order (resp. ask limit order) specifies the quantity and the price at which a trader wants to buy (resp. sell) a certain asset. The limit order book consists of the collection of limit orders from every trader. Outstanding limit orders are stored in different queues inside the order book. These queues are ordered by price and type (bid or ask). The difference in price between the lowest ask price and the highest bid price is called the spread.
The counterparts of limit orders are market orders, which allow traders to buy and sell at the best available price. While limit orders will not trigger an immediate transaction, market orders are immediately executed. In this sense, limit orders accumulate, create, and extend the size of queues at both sides of the LOB, while market orders remove limit orders from the best available price. Sometimes informed traders are associated with traders that place market orders, while uninformed traders are associated with the ones that place limit orders, but this goes against the fact that many of the most successful hedge funds make extensive use of limit orders (see Bouchaud et al. (2009)).
In addition to limit orders and market orders, cancellation of limit orders is another common operation. The basic idea of a cancellation is that a trader is no longer willing to buy or sell at the specified price. Cancellations account for a large fraction of the operations on an order book, partly due to the introduction and evolution of high frequency trading (see Harris (2003)), in which the inter-arrival times of limit orders and cancellations occur at a millisecond time scale (see Cont and de Larrard (2013)).
An important feature of a LOB is that traders can choose between submitting limit and market orders. The biggest advantage of limit orders is the possibility of matching better prices than the ones they can obtain with market orders, but as a drawback, there is a risk of never being executed. Conversely, market orders never match at prices better than the best bid or the best ask, but the execution is certain and immediate. Usually, the bid-ask spread can be considered as a measure of how expensive the certainty and immediacy of buying or selling the underlying asset is.
From a modeling perspective, sometimes it is important to identify the different types of traders that are able to participate in the market. As stated in Foucault et al. (2003), LOBs allowed traders to immediately obtain liquidity, but at the same time, they also allow other traders to supply liquidity to those who require it later. On the exchanges, most traders combine limit orders and market orders to create a trading strategy according to their needs and the current state of the order book. However, broadly, as detailed in Foucault et al. (2003), traders with short-horizon strategies, as arbitrageurs, technical traders, and indexes, prefer to post market orders, while, traders with long-horizon strategies, as portfolio managers, place limit orders.
As pointed out in Gould et al. (2013), there are many practical advantages in understanding LOB dynamics. Examples of these are: gaining clearer insight into how best to act in a given market situation (cf. Harris and Hasbrouck (1996)), devising optimal order execution strategies (Obizhaeva and Wang (2013), Law (2015), Cartea and Jaimungal (2015)), minimizing the market impact (Eisler et al. (2012)); designing better electronic trading algorithms (Engle et al. (2006)), and assessing market stability (Kirilenko et al. (2011)).
Due to the complexity of a LOB, when interpreted as a dynamical system, any attempt to model a LOB requires considerable assumptions. One of the most important assumptions is how the order flow evolves in time. One of the seminal works in modeling a complete LOB in continuous time is the one proposed by Cont et al. (2010). This is a zero-intelligence model, in which the order flows follow independent Poisson processes whose rate parameters depend on the type of order, distance to the bid-ask spread, and state of the order book. With this model, the authors try to understand how the frequency at which certain events occur depends on the state of the order book. Another important trait of this model is the use of a power-law distribution for the arrival of limit orders depending on their relative price, which fits well with empirical observations.
In an attempt to simplify the dynamics of the order book and provide estimates for the volatility in terms of the parameters governing the order flow, Cont and de Larrard (2013) propose a model that keeps track of the first level of the order book instead of the whole LOB. When there was a depletion in either side of this simplified Level-I order book, the amounts of orders available at the next best prices were assumed to be drawn from a distribution \(f\) on \(\mathbb{N}^{2}\), and the spread was always kept constant at \(1\) tick. The authors' justification for this simplified framework was that many traders can only view the depths available at the best prices and, also, that the percentage of time, in liquid markets, that the spread is \(1\) tick is typically larger than \(97\%\). In this model, arrivals of limit, market and cancellation orders are modeled as independent Poisson processes. One of the advantages of this approach is the ability to estimate analytically, depending just on the Poisson processes' parameters and the depth distribution \(f\), quantities of interest such as the volatility, the distribution of time until the next price change, the distribution and auto-correlation of price changes, and the conditional probability that the price moved in a specified direction given the current state of the order book.
As noted in Chavez-Casillas et al. (2019), one of the main drawbacks of the model presented in Cont and de Larrard (2013) is the assumption that the limit orders, market orders and cancellations arrive as homogeneous Poisson processes. In an attempt to improve on this assumption, in Chavez-Casillas et al. (2019), the authors assume that all the orders arrive as non-homogeneous Poisson processes, but in contrast to the model presented in Cont and de Larrard (2013), the random clock driving the arrivals of these orders is not reset after a price change. In fact, this random clock signaling the arrival of new orders is never reset and, instead of a reset, an assumption of periodicity is used. That is, it is assumed that every week, every day, or every given period of time, the intensities of the processes signaling the arrival of orders are the same. While this approach is suitable to analyze the behaviour of the orderbook at a mesoscopic time scale, the distributional properties of the inter-arrival times between price changes and their consequences on the heavy-traffic approximation of the price process are not discussed, which is the aim of the present study with some corresponding empirical data analysis.
By analyzing the distributional properties of the inter-arrival times and allowing the intensity of the point process dictating their occurrence to be time-dependent, it can be shown that the volatility in the long-run dynamics of the price process possesses, in certain cases, a time-dependent component. This is an improvement that attempts to reconcile the macro-price dynamics of the orderbook, as a more general process than merely a Brownian motion with constant volatility, with a model built from the microscopic price-formation time scale. Indeed, in the existing literature, most of the models that start from the modeling of how orders arrive and go up to the macro-price formation conclude that the price process is a Brownian motion with constant volatility. The aim of the present paper is to shed some light on how such models can be extended in such a way that the limiting price process converges to a more general diffusion.
One of the aims of this paper is to analyze the different cases that arise when the price process is modeled from a microscopic scale, that is, from a model that starts by considering the arrival of individual orders, rather than from a mesoscopic scale, that is, from a model that takes as a baseline the arrival of price changes and aggregates all the information carried by individual orders up to the arrival of a price change. This entails more complex dynamics, and there are few models in the literature that build such bottom-up estimates and models; while one of the aims of the present work is to expand the existing literature and to study empirically whether the assumptions of the model can be validated, it is also meant to serve as a first step in creating more complex models that start at a microscopic scale and can provide conditions under which the long-run dynamics of the price process possess a more complex structure than a simple Brownian motion with constant volatility.
This paper is organized as follows. In section 2 a precise description of the Limit Order Book model is presented with its corresponding assumptions. In Section 3 the distributional properties of the model and the long-run dynamics of the price process are analyzed depending on the tail behaviour of the time span between price changes. In section 4 the six study cases from this paper are introduced and analyzed. Finally, in section 5 some conclusions are presented along with further research directions in an attempt to generalize the price process when modeling from the micro-price dynamics.
## 2 Description of the model
In this paper, a Level-1 Limit Order Book model is discussed using as a framework the model proposed in Cont and de Larrard (2013). However, in contrast to that model, the point processes describing the arrivals of limit orders have a time-dependent periodic rate proportional to the rate governing the arrival of market orders plus cancellations.
The next paragraphs describe the modeling assumptions of the orderbook model as well as some of its basic dynamics. First, as was mentioned before, only the best level at each side will be considered, together with its corresponding size. This assumption is justified by the fact that most of the operations inside an orderbook are performed at Level-I, as pointed out in Cont and de Larrard (2013). Also, in this sense, the orderbook is assumed to have a unitary tick size since, for the stocks considered in section 4, the spread stays within 1 tick more than 90% of the time.
A more controversial assumption, which is meant to be relaxed in further research, is the classical assumption in these models that the order volume is also unitary. Since the model presented is inspired by the original model in Cont and de Larrard (2013), the same assumption is kept, but it is the author's interest to relax it. However, contrasting with the original model and creating a much more realistic assumption, limit orders at the bid and ask sides of the book arrive independently according to inhomogeneous Poisson processes \(L^{b}_{t}\) and \(L^{a}_{t}\), with intensities \(\lambda^{b}_{t}\) and \(\lambda^{a}_{t}\) respectively. Similarly, market orders plus cancellations at the bid and ask sides of the book arrive independently according to inhomogeneous Poisson processes \(M^{b}_{t}\) and \(M^{a}_{t}\), with intensities \(\mu^{b}_{t}\) and \(\mu^{a}_{t}\) respectively. Notice that, since in this model limit orders increase the queue size while both market orders and cancellations decrease it, the latter two are merged into one process, which is natural given our assumption of unitary volume for all orders. Further, it is assumed that the processes \(L^{a}_{t},L^{b}_{t},M^{a}_{t}\) and \(M^{b}_{t}\) are all pairwise independent.
Finally, we describe what happens every time there is a price change. Whenever the bid queue or the ask queue gets depleted, a price change occurs: if the bid side is depleted, the price decreases, and if the ask queue is depleted, the price increases. After a price change, the depth of the orderbook is accounted for through the distribution of two new queue sizes. That is, after a price change, both the bid and the ask prices move by one tick in the direction of the change, and the sizes of both queues are redrawn from some distribution \(f\) on \(\mathbb{N}^{2}\).
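A minimal simulation sketch of these dynamics is given next; it assumes, for simplicity, the same intensities on both sides of the book and uses Lewis-type thinning for the inhomogeneous Poisson arrivals. All function names, parameter names and example values are ours, and the reset of the arrival clock after each price change, introduced in the next section, is not applied here.

```python
import numpy as np

def simulate_level1_lob(T, lam, mu, lam_bar, mu_bar, draw_f, p0=100.0, tick=0.01, seed=0):
    """Simulate the Level-1 book by thinning an inhomogeneous Poisson stream.

    lam(t), mu(t): intensities of limit orders and of market orders plus cancellations;
    lam_bar, mu_bar: constant upper bounds on lam(t), mu(t), needed for thinning;
    draw_f(rng): draws the new (bid, ask) queue sizes after a price change.
    """
    rng = np.random.default_rng(seed)
    bid, ask = draw_f(rng)
    t, price, path = 0.0, p0, [(0.0, p0)]
    rate_bar = 2.0 * (lam_bar + mu_bar)            # bound on the total event rate

    while t < T:
        t += rng.exponential(1.0 / rate_bar)       # candidate event time
        rates = np.array([lam(t), mu(t), lam(t), mu(t)])   # Lb, Mb, La, Ma
        if rng.uniform(0.0, rate_bar) > rates.sum():
            continue                                # thinned: no event at t
        event = rng.choice(4, p=rates / rates.sum())
        if event == 0:
            bid += 1
        elif event == 1:
            bid -= 1
        elif event == 2:
            ask += 1
        else:
            ask -= 1
        if bid == 0 or ask == 0:                    # a queue is depleted: price change
            price += tick if ask == 0 else -tick
            bid, ask = draw_f(rng)
            path.append((t, price))
    return path

# Example: lambda_t = 5/(1+t), mu_t = 6/(1+t); queue depths drawn from a geometric law.
path = simulate_level1_lob(
    T=600.0,
    lam=lambda t: 5.0 / (1.0 + t), mu=lambda t: 6.0 / (1.0 + t),
    lam_bar=5.0, mu_bar=6.0,
    draw_f=lambda rng: (rng.geometric(0.3), rng.geometric(0.3)),
)
print(len(path) - 1, "price changes; last price", path[-1][1])
```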
## 3 Distributional Properties of the Inter-arrival times and the Price Process
For this model it will be assumed that, after each price change, the clock signaling the arrivals of new orders and cancellations in the orderbook is reset, so that if \(\tau_{n}\), \(n\geq 1\), represents the inter-arrival time between the \((n-1)\)-th and \(n\)-th price changes, then the sequence \(\{\tau_{n}\}_{n\geq 1}\) becomes an independent sample.
In order to accurately describe the distribution of the inter-arrival time between price changes, some general notation that will be used throughout is introduced. Let \(\{(a_{n},b_{n})\}_{n\geq 1}\) be a sequence of independent random variables generated from the distribution \(f\) on \(\mathbb{N}^{2}\). These will represent the initial amount of orders at the bid and ask side of the order book, respectively, after the \(n\)-th price change. For \(t\geq 0\), also let \(\{X_{x_{n-1}}^{(n)}(t)\}_{n\geq 1}\) and \(\{Y_{y_{n-1}}^{(n)}(t)\}_{n\geq 1}\) be two families of mutually independent one-dimensional continuous-time random walks on the nonnegative integers parametrized by their starting point, i.e. for any \(n\in\mathbb{N}\), \(X_{x_{n-1}}^{(n)}(0)=x_{n-1}\) and \(Y_{y_{n-1}}^{(n)}(0)=y_{n-1}\), whose generators \(\mathscr{L}_{t}^{(X,n)}\) and \(\mathscr{L}_{t}^{(Y,n)}\) are given by
\[\mathscr{L}_{t}^{(X,n)}(i,j)=\mathscr{L}_{t}^{(Y,n)}(i,j):=\left\{\begin{array} []{rcl}0&\text{if}&i=0,\ j\geq 0,\\ \mu_{t}&\text{if}&1\leq i,\ j=i-1,\\ \lambda_{t}&\text{if}&1\leq i,\ j=i+1,\\ -(\lambda_{t}+\mu_{t})&\text{if}&1\leq i,\ j=i,\\ 0&\text{if}&|i-j|>1\end{array}\right. \tag{1}\]
Note that \(0\) is an absorbing state for any Markov chain with generator \(\mathscr{L}_{t}^{(X,\,\ast)}\) or \(\mathscr{L}_{t}^{(Y,\,\ast)}\). When a chain reaches the absorbing point \(0\), one calls it extinction.
Let \(\sigma_{x_{n-1}}^{(X,n)}\) and \(\sigma_{y_{n-1}}^{(Y,n)}\) be the extinction times of the independent Markov chains \(X_{x_{n-1}}^{(n)}(t)\) and \(Y_{y_{n-1}}^{(n)}(t)\). Further, set \(\tau_{0}:=0\) and \(\tau_{n}:=\min\left(\sigma_{x_{n-1}}^{(X,n)},\sigma_{y_{n-1}}^{(Y,n)}\right)\).
The dynamics of the orderbook can be described as follows. Let \(q_{t}=(q_{t}^{b},q_{t}^{a})\) be the amount of bid and ask orders at time \(t\), with \((x_{0},y_{0})\) the initial amount of orders at each side of the book, that is, \(q_{0}=(x_{0},y_{0})\). For \(0\leq t<\tau_{1}\), define \(q_{t}:=(X_{x_{0}}^{(1)}(t),Y_{y_{0}}^{(1)}(t))\), so that \(\tau_{1}\) is the time at which the first price change occurs. At time \(\tau_{1}\), both queues move one tick (to the right if the ask queue is depleted or to the left if the bid queue is depleted). The sizes of the ask and bid queues are then set to \((x_{1},y_{1})\) (which are drawn from the distribution \(f\) on \(\mathbb{N}^{2}\)), and \(q_{\tau_{1}}\) is defined as \((x_{1},y_{1})\). Furthermore, for \(\tau_{1}\leq t<\tau_{2}\), \(q_{t}\) is defined as \((X_{x_{1}}^{(2)}(t),Y_{y_{1}}^{(2)}(t))\) and the process continues. To simplify notation, we denote by \(\sigma_{a}^{n}\) and \(\sigma_{b}^{n}\) the random variables \(\sigma_{x_{n-1}}^{(X,n)}\) and \(\sigma_{y_{n-1}}^{(Y,n)}\), respectively.
### Distribution of the inter-arrival time between price changes
Because of the independence between the ask and the bid side of the book before the first price change, to analyze the distribution of \(\tau_{1}\), it is enough to study one side of the orderbook, say the ask. In this case, an explicit formula for \(\mathbb{P}[\sigma_{a}^{1}>t]\) is given by the following result whose proof is deferred to Appendix A.
**Proposition 3.1**.: _Let \(q_{t}^{a}\) be the process defined above starting at \(x\), that is, the Markov process with generator given by Equation 1 such that \(q_{0}^{a}=x\). Fix any \(T>0\) and let \(\bar{u}(t,z):\mathbb{R}_{+}\times(\mathbb{N}\cup\{0\})\to\mathbb{R}\) be a bounded function such that \(t\mapsto\bar{u}(t,z)\) is \(C^{1}(\mathbb{R}_{+})\) for all \(z\) and \((t,z)\mapsto\partial\bar{u}(t,z)/\partial t\) is bounded and satisfies the conditions:_
\[\left\{\begin{array}{rcl}\left(\frac{\partial}{\partial r}+\mathscr{L}_{r} \right)\bar{u}(T-r,z)=0&\text{for}&0<r<T,\,z\in\mathbb{N}.\\ \bar{u}(T-r,0)=0&\text{for}&0\leq r<T.\\ \bar{u}(0,z)=1&\text{for}&z\in\mathbb{N}.\end{array}\right. \tag{2}\]
_Then,_
\[\bar{u}(T,x)=\mathbb{P}[\sigma_{a}^{1}(x)>T],\]
_where \(\sigma_{a}^{1}=\inf\{t>0\mid q_{t}^{a}=0\}\)._
As a first approach some assumptions on the behaviour of the rates \(\lambda_{t}\) and \(\mu_{t}\) will be made.
**Assumption 1:** There exists a function \(\alpha_{t}:\mathbb{R}_{+}\to\mathbb{R}_{+}\) such that for some positive constants \(\lambda\) and \(\mu\),
\[\lambda_{t}=\lambda\alpha_{t}\qquad\qquad\text{and}\qquad\qquad\mu_{t}=\mu\alpha_{t} \tag{3}\]
Let \(\mathcal{H}_{t}\) denote the generator of the Markov process describing the dynamics of the queues' size under Assumption 1, that is,
\[\mathcal{H}_{t}(i,j)=\left\{\begin{array}{ccc}0&\text{if}&i=0,\ j\geq 0,\\ \mu\alpha_{t}&\text{if}&1\leq i,\ j=i-1,\\ \lambda\alpha_{t}&\text{if}&1\leq i,\ j=i+1,\\ -(\lambda\alpha_{t}+\mu\alpha_{t})&\text{if}&1\leq i,\ j=i,\\ 0&\text{if}&|i-j|>1.\end{array}\right. \tag{4}\]
In the case when the rates are constant, that is, when \(\alpha_{t}\equiv 1\) in Assumption 1 above, the generator \(\mathscr{L}_{t}\) given in Equation 1 reduces to
\[\mathcal{Q}(i,j)=\left\{\begin{array}{ccc}0&\text{if}&i=0,\ j\geq 0,\\ \mu&\text{if}&1\leq i,\ j=i-1,\\ \lambda&\text{if}&1\leq i,\ j=i+1,\\ -(\lambda+\mu)&\text{if}&1\leq i,\ j=i,\\ 0&\text{if}&|i-j|>1.\end{array}\right. \tag{5}\]
Before proceeding, we will define the inter-arrival times between price changes as follows. For \(\bullet\in\{a,b\}\), let \(\sigma^{1}_{\bullet,\mathcal{Q}}\) be the extinction times of the Markov processes as described in Section 3 where \(\mathscr{L}^{(X,n)}_{t}(i,j)=\mathscr{L}^{(Y,n)}_{t}(i,j)=\mathcal{Q}(i,j)\). Define in the same way the sequence \(\{\tau^{n}_{\mathcal{Q}}\}_{n\geq 0}\) of inter-arrival times and the queue process \(q_{t,\mathcal{Q}}:=(q^{b}_{t,\mathcal{Q}},q^{a}_{t,\mathcal{Q}})\).
Analogously, for \(\bullet\in\{a,b\}\), let \(\sigma^{1}_{\bullet,\mathcal{H}}\) be the extinction times of the Markov processes as described in Section 3 when \(\mathscr{L}^{(X,n)}_{t}(i,j)=\mathscr{L}^{(Y,n)}_{t}(i,j)=\mathcal{H}_{t}(i,j)\). Define in the same way the sequence \(\{\tau^{n}_{\mathcal{H}}\}_{n\geq 0}\) of inter-arrival times and the queue process \(q_{t,\mathcal{H}}:=(q^{b}_{t,\mathcal{H}},q^{a}_{t,\mathcal{H}})\).
The following lemma gives the distribution of \(\sigma^{1}_{a,\mathcal{Q}}\).
**Lemma 3.2**.: _For the difference operator \(\mathcal{Q}\) given by (5), a solution to the initial value problem_
\[\left\{\begin{array}{ccc}\left(\frac{\partial}{\partial r}+\mathcal{Q} \right)\bar{u}(T-r,z)=0&\text{for}&0<r<T,\ z\in\mathbb{N}.\\ \bar{u}(T-r,0)=0&\text{for}&0\leq r<T.\\ \bar{u}(0,z)=1&\text{for}&z\in\mathbb{N}.\end{array}\right. \tag{6}\]
_is given by the function_
\[\bar{u}_{\mathcal{Q}}(T,x)=\left(\frac{\mu}{\lambda}\right)^{x/2}\int_{T}^{ \infty}\frac{x}{s}I_{x}\left(2s\sqrt{\lambda\mu}\right)e^{-s(\lambda+\mu)}ds, \tag{7}\]
_where \(I_{\nu}(\cdot)\) is the modified Bessel function of the first kind. Consequently, \(\mathbb{P}[\sigma^{1}_{a,\mathcal{Q}}>T]=\bar{u}_{\mathcal{Q}}(T,x)\)._
Proof.: When \(\mathscr{L}_{t}\equiv\mathcal{Q}\), the model reduces to the case consider in Cont and de Larrard (2013). Therefore, the result follows from Proposition 1 therein.
It is important to analyze the tail behaviour of the survival distribution for \(\sigma^{1}_{a,\mathcal{Q}}\). The following lemma, whose proof is deferred to Appendix A, establishes such behaviour.
**Lemma 3.3**.: _Let \(\mathcal{C}=(\sqrt{\mu}-\sqrt{\lambda})^{2}\). Then, for a sufficiently large \(T\),_
\[\mathbb{P}[\sigma^{1}_{a,\mathcal{Q}}>T\mid q^{a}_{0}=x]\sim\left\{\begin{array}{lcl}\left(\frac{\mu}{\lambda}\right)^{x/2}\frac{x}{\sqrt{\pi}\,\mathcal{C}\,(\lambda\mu)^{1/4}}\frac{e^{-\mathcal{C}T}}{T^{1/2}}&\text{if}&\lambda<\mu\\ \frac{x}{\lambda\sqrt{\pi}}\frac{1}{\sqrt{T}}&\text{if}&\lambda=\mu\end{array}\right.\]
_Consequently, as expected, if \(\lambda=\mu\), \(\mathbb{E}[\sigma^{1}_{a,\mathcal{Q}}]=\infty\), whereas if \(\lambda<\mu\), for every \(k\in\mathbb{N}\),_
\[\mathbb{E}\left[\left(\sigma^{1}_{a,\mathcal{Q}}\right)^{k}\right]<\infty.\]
_Remark 3.4_.: Notice that if \(\lambda=\mu\), the results in Lemma 3.3 agree with the results obtained in Eq. (6) in Cont and de Larrard (2013). However, if \(\lambda<\mu\), Eq. (5) in Cont and de Larrard (2013) says that
\[\mathbb{P}[\sigma^{1}_{a,\mathcal{Q}}>T\mid q^{a}_{0}=x]\sim\frac{x(\lambda+ \mu)}{2\lambda(\mu-\lambda)}\frac{1}{T},\]
which is not correct, due to the well-known fact that for a birth-and-death process with a death rate larger than its birth rate, the extinction time \(\sigma^{1}_{a,\mathcal{Q}}\) has moments of all orders (an easy way to see this is to use the Moment Generating Function (MGF) computed in Proposition 1 in Cont and de Larrard (2013) and observe that if \(\lambda<\mu\), then the MGF is defined on an open interval around \(0\); cf. Billingsley (1995), Section 21).
Lemma 3.2 allows a closed formula to be obtained for the distribution of \(\sigma^{1}_{a,\;\mathcal{H}}\), when the rates are proportional to each other, as in Assumption 1. Such a formula is described in the following proposition, whose proof is deferred to Appendix A.
**Proposition 3.5**.: _Under Assumption 1, for \(A_{t}=\int_{0}^{t}\alpha_{s}ds\), the distribution of \(\sigma^{1}_{a,\mathcal{H}}\) is given by_
\[\mathbb{P}[\sigma^{1}_{a,\mathcal{H}}>T|\;q^{a}_{0}=x]=\left(\frac{\mu}{ \lambda}\right)^{x/2}\int_{A_{T}}^{\infty}\frac{x}{s}I_{x}\left(2s\sqrt{ \lambda\mu}\right)e^{-s(\lambda+\mu)}ds,\]
_where \(I_{\nu}(\cdot)\) is the modified Bessel function of the first kind._
_Remark 3.6_.: Lemma 3.2 and Proposition 3.5 imply that
\[\mathbb{P}[\sigma^{1}_{a,\mathcal{H}}>T|\;q^{a}_{0}=x]=\mathbb{P}[\sigma_{a, \mathcal{Q}}>A_{T}|\;q^{a}_{0}=x].\]
This implies that the distribution of the time between price changes in the present model is comparable to the distribution of the inter-arrival time between price changes for the model presented in Cont and de Larrard (2013).
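The survival function in Equation (7), together with its time-changed version from Proposition 3.5, can be evaluated numerically. The following sketch (ours) uses scipy's exponentially scaled Bessel function to keep the integrand stable for large \(s\); function and parameter names are ours.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ive

def survival_constant_rates(T, x, lam, mu):
    """P[sigma > T | q_0 = x] from Eq. (7), evaluated numerically.

    Uses ive(x, z) = I_x(z) * exp(-z), so the integrand becomes
    (x/s) * ive(x, 2 s sqrt(lam mu)) * exp(-s (sqrt(mu) - sqrt(lam))^2),
    which avoids overflow of the Bessel function for large s.
    """
    C = (np.sqrt(mu) - np.sqrt(lam)) ** 2

    def integrand(s):
        return (x / s) * ive(x, 2.0 * s * np.sqrt(lam * mu)) * np.exp(-C * s)

    value, _ = quad(integrand, T, np.inf)
    return (mu / lam) ** (x / 2.0) * value

print(survival_constant_rates(T=2.0, x=5, lam=1.0, mu=1.2))
# Under Assumption 1 (Proposition 3.5 / Remark 3.6), the time-dependent survival
# probability is obtained by replacing T with A_T = int_0^T alpha_s ds.
```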
Finally, we present the distribution of the time for the first price change.
**Corollary 3.7**.: _Under Assumption 1, for \(A_{t}=\int_{0}^{t}\alpha_{s}ds\), the distribution of \(\tau^{1}_{\mathcal{H}}\) is given by_
\[\mathbb{P}[\tau^{1}_{\mathcal{H}}>T|\;q^{a}_{0}=x,q^{b}_{0}=y]=\mathbb{P}[\sigma^{1}_{a,\mathcal{H}}>T|\;q^{a}_{0}=x]\,\mathbb{P}[\sigma^{1}_{b,\mathcal{H}}>T|\;q^{b}_{0}=y],\]
_where each factor is given by the formula in Proposition 3.5._
Proof.: The result follows from the fact that \(\tau^{n}_{\mathcal{H}}=\sigma^{n}_{a,\mathcal{H}}\wedge\sigma^{n}_{b,\mathcal{H}}\), Proposition 3.5 and the independence between \(\sigma^{n}_{a,\mathcal{H}}\) and \(\sigma^{n}_{b,\mathcal{H}}\).
Now, the asymptotic behaviour of the survival distribution function of \(\tau^{1}_{\mathcal{H}}\) is presented and its proof is deferred to Appendix A.
**Lemma 3.8**.: _Let \(\mathcal{C}=(\sqrt{\mu}-\sqrt{\lambda})^{2}\). Then,_
\[\mathbb{P}\left[\tau^{1}_{\mathcal{H}}>T\;\Big|\;q^{a}_{0}=x,q^{b}_{0}=y\right]\sim\left\{\begin{array}{lcl}\left(\frac{\mu}{\lambda}\right)^{(x+y)/2}\frac{xy}{\pi\mathcal{C}^{2}\sqrt{\lambda\mu}}\frac{\exp(-2A_{T}\mathcal{C})}{A_{T}}&\text{if}&\lambda<\mu\\ \frac{xy}{\lambda^{2}\pi}\frac{1}{A_{T}}&\text{if}&\lambda=\mu\end{array}\right.\]
_Moreover, if_
* \(\alpha_{t}\sim t^{s}\log^{m}(t)\) _as_ \(t\to\infty\) _for some_ \(s\neq-1\)_,_ \(m\geq 0\)_, and_ \(n\in\mathbb{N}\)_,_ \[\mathbb{E}\left[\left(\tau^{1}_{\mathcal{H}}\right)^{n}\;\Big{|}\;q^{a}_{0}=x,q^{b}_{0}=y\right]\left\{\begin{array}{rcl}<\infty&\text{if}&\lambda<\mu\\ <\infty&\text{if}&\lambda=\mu\text{ and }n<s+1\\ =\infty&\text{if}&\lambda=\mu\text{ and }n\geq s+1\end{array}\right.\]
* \(\alpha_{t}\sim k/t\) _as_ \(t\to\infty\) _for some_ \(k>0\)_,_ \[\mathbb{E}\left[\left(\tau^{1}_{\mathcal{H}}\right)^{n}\;\Big{|}\;q^{a}_{0}=x,q^{b}_{0}=y\right]\left\{\begin{array}{rcl}<\infty&\text{if}&n<2k\mathcal{C }\text{ and }\lambda<\mu\\ =\infty&\text{if}&n\geq 2k\mathcal{C}\text{ and }\lambda<\mu\\ =\infty&\text{if}&\lambda=\mu\end{array}\right.\]
### Long-run dynamics of the price process
We are interested in analyzing the asymptotic behaviour of the number of price changes up to time \(t\). That is, in describing
\[N^{\star}_{t}:=\max\{n>0\;|\;\tau^{1}_{\star}+\tau^{2}_{\star}+\ldots+\tau^{n }_{\star}\leq t\}, \tag{8}\]
where \(\tau^{n}_{\mathcal{Q}}\) and \(\tau^{n}_{\mathcal{H}}\) are defined above.
The next proposition, whose proof is deferred to Appendix A, provides an expression which relates the distribution of the partial sums for the waiting times between price changes for the models with the generators \(\mathcal{H}_{t}\) and \(\mathcal{Q}\).
**Proposition 3.9**.: _Let \(S^{n}_{\mathcal{H}}:=\tau^{1}_{\mathcal{H}}+\tau^{2}_{\mathcal{H}}+\ldots+\tau^{n}_{\mathcal{H}}\) and \(S^{n}_{\mathcal{Q}}:=\tau^{1}_{\mathcal{Q}}+\tau^{2}_{\mathcal{Q}}+\ldots+\tau^{n}_{\mathcal{Q}}\). Then,_
\[\mathbb{P}[S^{n}_{\mathcal{H}}\leq t]=\mathbb{P}[S^{n}_{\mathcal{Q}}\leq A_{t}],\]
_where \(A_{t}=\int_{0}^{t}\alpha_{s}ds\) in accordance with Assumption 1._
The following results provide the convergence of the price process. For presentation purposes we separate them into the case when \(\lambda<\mu\) and when \(\lambda=\mu\).
**Theorem 3.10**.: _Assume \(\lambda<\mu\) and let \(\mathcal{C}=(\sqrt{\mu}-\sqrt{\lambda})^{2}\). Then, under Assumption 1, for \(A_{t}=\int_{0}^{t}\alpha_{s}ds\),_
* _If_ \(\frac{\alpha_{t}}{t^{s}}\to\widetilde{K}\) _with_ \(s\neq-1\) _or if_ \(\frac{\alpha_{t}}{t^{-1}}\to K\) _as_ \(t\to\infty\)_, with_ \(2\mathcal{C}K>1\)_, the rescaled price process converges and for the sequence_ \(t_{n}=nt\) _and a constant_ \(\sigma\)_, in distribution,_ \[\frac{s_{t_{n}}}{\sqrt{n}}\Rightarrow\sigma W_{t}\]
* _If_ \(\frac{\alpha_{t}}{t^{-1}}\sim K\) _with_ \(2\mathcal{C}K\leq 1\) _as_ \(t\to\infty\)_, the rescaled price process converges and for the sequence_ \(t_{n}=tn^{1/(2\mathcal{C}K)}\) _and a constant_ \(\sigma\)_, in distribution,_ \[\frac{s_{t_{n}}}{\sqrt{n}}\Rightarrow\sigma\int_{0}^{t}\sqrt{\frac{1}{u^{1-2\mathcal{C}K}}}dW_{u}\]
**Theorem 3.11**.: _Assume \(\lambda=\mu\). Then, under Assumption 1, for \(A_{t}=\int_{0}^{t}\alpha_{s}ds\),_
* _If_ \(\alpha_{t}\sim t^{-1+s}\) _as_ \(t\to\infty\)_, for any_ \(s>1\)_, the rescaled price process converges and for the sequence_ \(t_{n}=nt\) _and a constant_ \(\sigma\)_, in distribution,_ \[\frac{s_{t_{n}}}{\sqrt{n}}\Rightarrow\sigma W_{t}\]
* _If_ \(\alpha_{t}\sim t^{-1+s}\) _as_ \(t\to\infty\) _for any_ \(s\in(0,1]\)_, the rescaled price process converges and for the sequence_ \(t_{n}=t(n)^{1/s}\log(n)\) _and a constant_ \(\sigma\)_, in distribution,_ \[\frac{s_{t_{n}}}{\sqrt{n}}\Rightarrow\sigma\int_{0}^{t}\sqrt{\frac{1}{u^{s}}} dW_{u}\]
* _If_ \(\alpha_{t}=o(t^{-1+s})\)_, for some_ \(s>0\)_, the price process cannot be rescaled to ensure convergence._
_Remark 3.12_.: It is important to notice that we recover the model proposed by Cont and de Larrard (2013) in our current setting. Indeed, for recovering their model, \(\alpha_{t}\) should be chosen such that \(\alpha_{t}\equiv 1=t^{0}\). In such case,
* If \(\lambda<\mu\) and \(\alpha_{t}=1\), then \(\alpha_{t}/t^{-1}\to\infty\) and the result in Cont and de Larrard (2013) follows immediately from Theorem 3.10.
* If \(\lambda=\mu\) and \(\alpha_{t}=t^{-1+1}\), it becomes the borderline case in Theorem 3.11. In such case, we can see from the proof of the aforementioned theorem that we need to choose the rescaling sequence \(t_{n}=tn\log(n)\), but we get convergence to a Brownian motion with constant volatility.
## 4 Empirical Results
In this paper six different stocks are analyzed. These stocks vary in the sector they belong to and in their trading frequency, among other properties. The main goal is to examine how the different quantities fit the model and to contrast them with the model in Cont and de Larrard (2013). The selected stocks were CSCO, FB, INTC, MSFT, LBTYK and VOD, and all were analyzed on the week of Nov 3rd to Nov 7th of 2014.
The first quantity to be analyzed is the distribution of the time between price changes. From Lemma 3.8, it follows that
\[\mathbb{P}\left[\tau_{\mathcal{H}}^{1}>T\;\Big|\;q_{0}^{a}=x,q_{0}^{b}=y\right]\sim\left\{\begin{array}{cc}\left(\frac{\mu}{\lambda}\right)^{(x+y)/2}\frac{xy}{\pi\mathcal{C}^{2}\sqrt{\lambda\mu}}\frac{\exp(-2A_{T}\mathcal{C})}{A_{T}}&\mbox{if}\quad\lambda<\mu\\ \frac{xy}{\lambda^{2}\pi}\frac{1}{A_{T}}&\mbox{if}\quad\lambda=\mu\end{array}\right.\]
Then, by summing over all possible initial positions of the queues and multiplying by the probability that such a position occurs, we have that
\[\mathbb{P}\left[\tau_{\mathcal{H}}^{1}>T\right]\sim\left\{\begin{array}{cc}\sum\limits_{x=1}^{\infty}\sum\limits_{y=1}^{\infty}\left(\frac{\mu}{\lambda}\right)^{(x+y)/2}\frac{xyf(x,y)}{\pi\mathcal{C}^{2}\sqrt{\lambda\mu}}\frac{\exp(-2A_{T}\mathcal{C})}{A_{T}}&\mbox{if}\quad\lambda<\mu\\ \sum\limits_{x=1}^{\infty}\sum\limits_{y=1}^{\infty}\frac{xyf(x,y)}{\lambda^{2}\pi}\frac{1}{A_{T}}&\mbox{if}\quad\lambda=\mu\end{array}\right.\]
By taking derivatives and using L'Hopital's rule, we have that if \(f_{\tau_{\mathcal{H}}^{1}}(T)\) is the density of \(\tau_{\mathcal{H}}^{1}\), then
\[f_{\tau_{\mathcal{H}}^{1}}(t)\sim\left\{\begin{array}{cc}\sum\limits_{x=1}^{\infty}\sum\limits_{y=1}^{\infty}\left(\frac{\mu}{\lambda}\right)^{(x+y)/2}\frac{xyf(x,y)}{\pi\mathcal{C}^{2}\sqrt{\lambda\mu}}\frac{(2\mathcal{C}A_{T}+1)\alpha_{T}\exp(-2A_{T}\mathcal{C})}{A_{T}^{2}}&\mbox{if}\quad\lambda<\mu\\ \sum\limits_{x=1}^{\infty}\sum\limits_{y=1}^{\infty}\frac{xyf(x,y)}{\lambda^{2}\pi}\frac{\alpha_{T}}{A_{T}^{2}}&\mbox{if}\quad\lambda=\mu\end{array}\right. \tag{9}\]
Figure 1 shows the empirical densities of \(f_{\tau_{\mathcal{H}}^{1}}(t)\) for the six selected stocks. As will be shown in Table 3, in all six cases it happens that, on average, \(\lambda<\mu\), but in almost all cases \(\lambda/\mu>0.9\).
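To make the construction of these empirical densities concrete, the following Python sketch estimates the density of the durations between consecutive mid-price changes from a timestamped price series. It is only an illustration: the synthetic data, the function name and the binning choice are assumptions standing in for the actual order-book records analyzed in this section.

```python
import numpy as np

def interarrival_density(timestamps, prices, bins=50):
    """Empirical density of the time between consecutive price changes."""
    timestamps = np.asarray(timestamps, dtype=float)
    prices = np.asarray(prices, dtype=float)

    # Instants at which the (mid-)price actually changed.
    change_idx = np.flatnonzero(np.diff(prices) != 0) + 1
    change_times = timestamps[change_idx]

    # Durations between consecutive price changes.
    durations = np.diff(change_times)

    # Histogram normalised to integrate to one (an empirical density).
    density, edges = np.histogram(durations, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, density

# Hypothetical usage with synthetic data standing in for one trading day.
rng = np.random.default_rng(0)
event_times = np.cumsum(rng.exponential(0.05, size=20_000))
mid_price = 100 + 0.01 * np.cumsum(rng.choice([-1, 0, 0, 1], size=20_000))
centers, density = interarrival_density(event_times, mid_price)
```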
Next, the intensities of Limit orders at the ask side, \(\lambda_{t}^{a}\), and Market orders plus Cancellations, \(\mu_{t}^{a}\), are plotted for each stock. In each plot the intensity \(\lambda_{t}^{a}\) or \(\mu_{t}^{a}\) is computed for each day of the week and a power-law fit is found using regression. The results of the regression are summarized in Table 1 after the corresponding figure.
Figure 1: Densities of the inter-arrival time between price changes on the six stocks for the week of Nov 3rd to Nov 7th of 2014.
In order to approximate the long-run dynamics of the price process as stated in Theorems 3.10 and 3.11, the two regimes \(\lambda<\mu\) (Theorem 3.10) and \(\lambda=\mu\) (Theorem 3.11) have to be considered, and the decay of the order-flow intensities has to be estimated from the data. To this end, a power law fit to the intensity of Limit Orders \(\lambda_{t}^{a}\) at the ask side was performed for each of the six stocks analyzed; that is, a regression is performed to fit \(\lambda_{t}^{a}\approx\frac{K_{\lambda,a}}{t^{s}}\). Similarly, a power law fit to the intensity of Market Orders plus Cancellations \(\mu_{t}^{a}\) at the ask side was performed; in this case, a regression is performed to fit \(\mu_{t}^{a}\approx\frac{K_{\mu,a}}{t^{r}}\). The following table summarizes the power law fits to the intensities of the analyzed stocks.
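The regressions above reduce to ordinary least squares in log-log coordinates. The sketch below illustrates one way to carry out such a fit, assuming the intensity has already been estimated on a grid of intraday times (e.g. order counts per bin divided by the bin length); the names and the synthetic data are placeholders rather than the code used to produce Table 1.

```python
import numpy as np

def fit_power_law(t, intensity):
    """Fit intensity(t) ~ K / t**s by least squares on logs; returns (K, s)."""
    t = np.asarray(t, dtype=float)
    intensity = np.asarray(intensity, dtype=float)
    mask = (t > 0) & (intensity > 0)

    # log intensity = log K - s * log t, so the slope of the fit is -s.
    slope, intercept = np.polyfit(np.log(t[mask]), np.log(intensity[mask]), 1)
    return np.exp(intercept), -slope

# Hypothetical example: an intensity decaying roughly like t^(-1/2) with noise.
rng = np.random.default_rng(1)
t = np.linspace(1.0, 23_400.0, 500)                 # seconds in a trading day
lam = 2.0 / np.sqrt(t) * np.exp(0.05 * rng.standard_normal(t.size))
K, s = fit_power_law(t, lam)
print(f"K = {K:.3f}, s = {s:.3f}")                  # expect K near 2 and s near 0.5
```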
Next, the intensities of Limit orders at the bid side, \(\lambda_{t}^{b}\), and Market orders plus Cancellations, \(\mu_{t}^{b}\), are plotted for each stock. As before, in each plot the intensity \(\lambda_{t}^{b}\) or \(\mu_{t}^{b}\) is computed for each day of the week and a power-law fit is found using regression. The results of the regression are summarized in Table 2 after the corresponding figure.
Figure 4: Daily intensities of Limit Orders at the Bid side for the six stocks considered on the week of Nov 3rd to Nov 7th of 2014 and their corresponding power law fit.
In the same fashion as before, a power law fit to the intensity of Limit Orders \(\lambda_{t}^{b}\) at the bid side was performed for each of the six stocks analyzed. That is, a regression is performed to fit \(\lambda_{t}^{b}\approx\frac{K_{\lambda,b}}{t^{s}}\). Similarly, a power law fit to the intensity of Market Orders plus Cancellations \(\mu_{t}^{b}\) at the bid side was performed; in this case, a regression is performed to fit \(\mu_{t}^{b}\approx\frac{K_{\mu,b}}{t^{r}}\). The following table summarizes the power law fits to the intensities of the analyzed stocks.
Finally, in order to assess how close the quotients \(\lambda_{t}^{a}/\mu_{t}^{a}\) and \(\lambda_{t}^{b}/\mu_{t}^{b}\) are to behaving like constants, plots of these quotients are presented along with their averages. First, the quotients at the ask side are presented and then the quotients at the bid side. As can be observed, in all cases the quotient is less than 1, indicating that the queues are in the stationary regime.
Figure 5: Daily intensities of Market Orders plus Cancellations on the Bid side for the six stocks considered on the week of Nov 3rd to Nov 7th of 2014 and their corresponding power law fit.
Next, the quotient on the bid side is displayed.
The last component of this section is a table that compares the mean of the quotient \(\lambda_{t}^{a}/\mu_{t}^{a}\) for the ask side of all six stocks with the same quotient \(\lambda_{t}^{b}/\mu_{t}^{b}\) for the bid side for all six stocks.
Figure 6: Plot of the quotient \(\lambda_{t}^{a}/\mu_{t}^{a}\) versus the time \(t\). The assumption of a constant quotient is contrasted here.
Figure 7: Plot of the quotient \(\lambda_{t}^{b}/\mu_{t}^{b}\) versus the time \(t\). The assumption of a constant quotient is contrasted here.
## 5 Conclusions and Further Research
For this paper, a simple limit order book model was proposed with the intention to further study the empirical features of these complicated systems. In particular, this paper focuses on understanding the empirical features of the inter-arrival times between order submissions and how these empirical features may affect the fluid dynamics of the price process. Indeed, as shown in Section 3, depending on the speed at which the density of the times between arrivals of orders decays, the long-run dynamics of the price process might have a time-dependent volatility. This is an important feature because it will be interesting to find conditions under which the long-run dynamics of the price process converge to a more general Itô diffusion than a simple Brownian motion with constant volatility, say a geometric Brownian motion, which is one of the most used models for stock prices. Further, many of the existing models, to the knowledge of the author, that try to achieve this convergence define a point process that counts the arrivals of the price changes but not of the orders. That is, many work at a mesoscopic level but not at the microscopic level generated by accounting for the individual orders.
In this attempt to create simple models that generalize the long-run dynamics of the price process, this paper has shown that different cases might arise. For example, CSCO, INTC and VOD exhibit a quotient \(\lambda_{t}/\mu_{t}<1\), implying that they fall under the case covered in Theorem 3.10, and since all of them have a tail that decays as a power law with exponent different from \(-1\), they will converge to a simple Brownian motion with constant volatility. However, for the other three stocks (FB, MSFT and LBTYK), the quotient \(\lambda_{t}/\mu_{t}<1\) is very close to \(1\), implying that they fall under the case covered in Theorem 3.11, and here two cases arise: while MSFT and LBTYK have a tail that decays slower than \(t^{-1}\) and thus will converge to a Brownian motion with a time-dependent volatility, FB exhibits a tail that is barely heavier than \(t^{-1}\), implying that it will converge to a standard Brownian motion with constant volatility.
As can be seen with this small sample of stocks, many different scenarios have arisen, implying that the conditions imposed in the model are attainable. Of course, the model has some limitations and many simplifications took place, but the author believes that this is the first step towards obtaining more realistic models such as a GBM. A good example of how such models have been obtained, but where the scale taken is a mesoscopic one, is provided in Jaisson and Rosenbaum (2015), where the authors use nearly unstable Hawkes processes to achieve convergence to a GBM starting from modeling the times of arrivals of the price changes. An interesting direction would then be to consider how to use similar results to achieve convergence to such processes starting from the arrival of individual orders and not from the aggregated data. The difficult part in all of these models is to understand the tail behaviour of the stopping time that signals a price change, such as the one provided in Lemma 3.8. While the author believes that many more interesting features can be obtained by substituting the inhomogeneous Poisson process with a more general point process such as a Hawkes process, or even better, a state-dependent Hawkes process (one where the intensity depends on the state of the process), the complicated part is to unravel the behavior of the aforementioned stopping time, and this will be an interesting research direction for the near future.
\begin{table}
\begin{tabular}{|l|c|c|} \hline
**Stock** & **Mean of the quotient \(\lambda_{t}^{a}/\mu_{t}^{a}\)** & **Mean of the quotient \(\lambda_{t}^{b}/\mu_{t}^{b}\)** \\ \hline \hline CSCO & 0.9598 & 0.9392 \\ \hline FB & 0.9927 & 0.9993 \\ \hline INTC & 0.9441 & 0.9544 \\ \hline MSFT & 0.9901 & 0.9912 \\ \hline LBTYK & 0.9998 & 0.9498 \\ \hline VOD & 0.8919 & 0.9255 \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of the mean quotient of the intensities for limit orders vs market orders plus cancellations at the ask side and the bid side for all six stocks analyzed. |
2306.03660 | Empir3D : A Framework for Multi-Dimensional Point Cloud Assessment | Advancements in sensors, algorithms, and compute hardware have made 3D
perception feasible in real time. Current methods to compare and evaluate the
quality of a 3D model, such as Chamfer, Hausdorff, and Earth-Mover's distance,
are uni-dimensional and have limitations, including an inability to capture
coverage, local variations in density and error, and sensitivity to outliers.
In this paper, we propose an evaluation framework for point clouds (Empir3D)
that consists of four metrics: resolution to quantify the ability to
distinguish between individual parts in the point cloud, accuracy to measure
registration error, coverage to evaluate the portion of missing data, and
artifact score to characterize the presence of artifacts. Through detailed
analysis, we demonstrate the complementary nature of each of these dimensions
and the improvements they provide compared to the aforementioned
uni-dimensional measures. Furthermore, we illustrate the utility of Empir3D by
comparing our metrics with uni-dimensional metrics for two 3D perception
applications (SLAM and point cloud completion). We believe that Empir3D
advances our ability to reason about point clouds and helps better debug 3D
perception applications by providing a richer evaluation of their performance.
Our implementation of Empir3D, custom real-world datasets, evaluations on
learning methods, and detailed documentation on how to integrate the pipeline
will be made available upon publication. | Yash Turkar, Pranay Meshram, Christo Aluckal, Charuvahan Adhivarahan, Karthik Dantu | 2023-06-06T13:23:42Z | http://arxiv.org/abs/2306.03660v2 | # PQM: A Point Quality Evaluation Metric for Dense Maps
###### Abstract
LiDAR-based mapping/reconstruction are important for various applications, but evaluating the quality of the dense maps they produce is challenging. The current methods have limitations, including the inability to capture completeness, structural information, and local variations in error. In this paper, we propose a novel point quality evaluation metric (PQM) that consists of four sub-metrics to provide a more comprehensive evaluation of point cloud quality. The completeness sub-metric evaluates the proportion of missing data, the artifact score sub-metric recognises and characterizes artifacts, the accuracy sub-metric measures registration accuracy, and the resolution sub-metric quantifies point cloud density. Through an ablation study using a prototype dataset, we demonstrate the effectiveness of each of the sub-metrics and compare them to popular point cloud distance measures. Using three LiDAR SLAM systems to generate maps, we evaluate their output map quality and demonstrate the metric's robustness to noise and artifacts. Our implementation of PQM, datasets and detailed documentation on how to integrate with your custom dense mapping pipeline can be found at github.com/droneslab/pqm
## I Introduction
Dense maps play a crucial role in numerous applications including but not limited to autonomous driving, search and rescue, service robotics, and augmented reality. Among the different ways to build dense maps, dense SLAM is particularly interesting due to its ability to produce high-fidelity maps as point clouds that are used for tasks such as localization, re-localization, place recognition, and cross-robot localization, while also requiring real-time execution, which imposes various tradeoffs in map quality. The point clouds produced by these algorithms have been proposed for use in advanced monitoring, sophisticated manipulation, augmented reality, and fine-grained control.
LiDAR-based SLAM methods are increasingly popular thanks to advances in technology. The affordability and improved accuracy of LiDAR sensors now allow for the real-time creation of high-quality, dense point clouds. Fig.2 provides a comparison between point clouds generated using an engineering-grade LiDAR, specifically the Z+F Imager 5016 from the HILTI SLAM dataset [4] (on the left), and an inexpensive Hesai Pandar XT-32 using FAST-LIO2 [1], an online direct registration-based SLAM method (on the right). It is clear from the visual comparison that FAST-LIO2 produces point clouds that are nearly as high-quality as those generated by the engineering-grade LiDAR, which is generally very expensive.
The evaluation of point clouds poses a significant challenge due to the complexity of capturing all aspects of mapping accuracy. While some measures, such as the Absolute Trajectory Error (ATE), can assess the difference between expected and measured translation and rotation, they do not fully account for map quality and are only indirect measures. Given a reference map, it is important to identify how much of the reference map is captured by a mapping method, how close the created map is to the reference, whether the method created anomalies that do not exist in the reference (we call
Fig. 1: PQM is the only metric that correctly identifies FAST-LIO2 [1] as having the highest quality among all candidates despite CD and HD indicating LeGO-LOAM [2] and Puma [3] have the highest quality, respectively, in a visual comparison.
these _artifacts_) and if the density of the resultant point cloud is similar to the reference or sparser.
Popular methods for comparing point clouds and meshes such as Chamfer distance (CD), Hausdorff distance (HD), and Earth Mover's distance (EMD), have limitations. CD is insensitive to point density and significantly influenced by outliers. Therefore, it serves as a poor performance metric to characterize point cloud completeness or map artifacts. On the other hand, while EMD can detect changes in density, the requirement for a one-to-one correspondence between compared maps is usually too strict and can lead to ignoring local fine-grained structural details. Additionally, EMD is significantly more computationally expensive than CD, which can limit its practical applications. Overall, neither CD nor EMD is ideally suitable for evaluating the quality of generated shapes, as they may fail to capture coverage or completeness, structural information, and local variations in error. Therefore, an ideal evaluation method should be efficient and accurately reflect the presence of artifacts or missing data while considering all factors affecting the quality of point clouds for the above applications. A good metric should have the following:
* The metric should capture coverage and completeness of the point cloud, as well as structural information, to provide a comprehensive evaluation of quality.
* The metric should be computationally efficient and handle large datasets to be useful for practical applications.
* The metric should accurately penalize artifacts while rewarding higher density and resolution.
To address these challenges, we propose a novel point quality evaluation metric (PQM) that provides a comprehensive and thorough assessment of point cloud quality. PQM comprises four sub-metrics, each evaluating a different aspect of point clouds' quality:
* **Completeness:** Measures the proportion of missing data in a point cloud map. It is critical for applications such as autonomous driving and robotics, where having a complete point cloud is essential for ensuring safety.
* **Artifact Score:** Measures the proportion of non-existent artifacts added in error. It is useful in detecting the impact of artifacts on visual fidelity, especially in augmented reality and virtual environment creation.
* **Accuracy:** Measures how close the points are to their true positions. It is vital for infrastructure inspection and manufacturing quality control, where registration accuracy plays a crucial role.
* **Resolution:** Measures the density of the point cloud map. It is an indicator of how detailed the map is and can enhance fine-grained manipulation and object recognition precision.
In conclusion, PQM provides a comprehensive evaluation of point cloud quality by addressing various aspects of LiDAR maps, making it a valuable tool for several applications. The contributions of this paper are as follows:
* Propose PQM for evaluating point cloud maps.
* Provide an efficient multi-threaded implementation.
* Evaluate the metric in simulation over three maps and three SLAM systems.
* Perform an ablation study on the effect of mapping errors.
* Provide an open-source suite to the research community to enable point cloud map quality evaluation. The GitHub link is in the abstract.
## II Related Work
SLAM systems can produce point cloud maps with varying levels of density and fidelity using different sensors. Visual SLAM systems that use monocular cameras [5], stereo cameras [6], and RGBD cameras [7] to produce dense [7, 6] or sparse [5] point clouds have been proposed.
Recent advances in sensor technology, efficient libraries [8, 9], and faster computing have enabled real-time LiDAR mapping. Following [10], LiDAR SLAM systems have steadily improved, both in localization and mapping performance.
There is a growing interest in dense 3D mapping using LiDAR SLAM. LOAM [10], LeGO-LOAM [2], LiDAR [11], LVI-SAM [12], LINS [13] and FAST-LIO2 [14, 1] generate relatively dense point clouds using online localization and mapping. Methods like Puma [3] and SHINE [15] output meshes by performing offline mapping and localization either solely with sequential LiDAR scans or with additional odometry information.
SLAM systems are generally evaluated for their localization and re-localization performance with the Absolute Trajectory Error (ATE), as seen in [16],[17],[18], with changes in environmental factors such as illumination. Although ATE is a good measure of a SLAM system's localization performance, it is a poor measure of map quality. In some cases, ATE can be used to evaluate the overall structure of the map, but not its density and completeness. For example, ORB-SLAM [5] is known for good localization and tracking performance even though it produces sparse point cloud maps. In other cases, trajectory error may not be a sufficient metric to evaluate mapping performance. In [19], the authors use a WiFi-based distributed mapping system which cannot be evaluated with the ATE since a ground truth trajectory is hard to obtain in a distributed mapping scenario. Thus, the authors use known landmark (April tag [20]) positions to evaluate their system, indicating a need for a metric to
Fig. 2: Left: Ground truth from HILTI SLAM Dataset [4] using Z+F Imager 5016; Right: FAST-LIO2 using Hesai Pandar XT-32
evaluate the map quality directly.
Reconstruction error calculated over point clouds is a direct approach to evaluating map accuracy. Chamfer distance (CD), Hausdorff distance (HD), and Earth Mover's distance (EMD) are popular distance metrics used in computing reconstruction error in point clouds [21],[3]. However, there are limitations in using distance-based reconstruction error as a performance metric to measure map quality.
For example, CD (Eq.1) is computed as the sum of distances in two point clouds, usually referred to as source and candidate. For each point in the source, the distance to its nearest neighbor in the candidate point cloud is computed and vice versa. The sum of distances over both point clouds is the CD. It is fast to compute and it can capture the overall similarity between two point clouds. However, it does not take into account the local variations and structural information in the point clouds, which can be important in some applications. Secondly, it is insensitive to density distribution. Finally, it is significantly influenced by outliers.
\[d_{Chamfer}(A,B)=\sum_{a\in A}\min_{b\in B}\lVert a-b\rVert_{2}+\sum_{b\in B} \min_{a\in A}\lVert a-b\rVert_{2} \tag{1}\]
As another example, HD (Eq. 2) is calculated as the maximum of the point-to-set distances between the source and candidate point clouds. This means that for each point in one point cloud, the distance to its nearest point in the other point cloud is calculated, and the maximum of all such distances over both directions is the HD. It captures the similarity between two point clouds, including their overall arrangement. However, it is computationally expensive and not as efficient as CD.
\[d_{Hausdorff}(A,B)=\max(\sup_{a\in A}d(a,B),\sup_{b\in B}d(A,b)\,) \tag{2}\]
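For reference, Eqs. (1)-(2) can be evaluated directly with nearest-neighbour queries on a k-d tree. The sketch below, based on SciPy, is only meant to make the definitions concrete; it is not the implementation evaluated later in this paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(A, B):
    """Sum of nearest-neighbour distances in both directions (Eq. 1)."""
    d_ab, _ = cKDTree(B).query(A)   # for each a in A, distance to nearest b in B
    d_ba, _ = cKDTree(A).query(B)   # for each b in B, distance to nearest a in A
    return d_ab.sum() + d_ba.sum()

def hausdorff_distance(A, B):
    """Symmetric Hausdorff distance (Eq. 2)."""
    d_ab, _ = cKDTree(B).query(A)
    d_ba, _ = cKDTree(A).query(B)
    return max(d_ab.max(), d_ba.max())

# Toy example: two noisy samplings of the same planar patch.
rng = np.random.default_rng(0)
A = rng.uniform(size=(5_000, 3)) * np.array([1.0, 1.0, 0.0])
B = A + 0.001 * rng.standard_normal(A.shape)
print(chamfer_distance(A, B), hausdorff_distance(A, B))
```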
Dense point clouds generated by some SLAM systems like FAST-LIO2 rival those of survey and engineering grade LiDARs, as seen in Fig. 2. This means they can be used for applications that need high-resolution point clouds such as GIS analysis [22], infrastructure inspection, 3D reconstruction, and object detection [23, 24], to name a few. With hardware and algorithmic advancements that produce such detailed point clouds, there is a need for a way to measure the difference in the quality of these point clouds. Popular metrics like CD, HD, and EMD have difficulty in capturing coverage, completeness, structural information, and local variations in error, and are computationally expensive. Additionally, these methods do not account for artifacts or missing data and do not evaluate the components of quality discretely. Although methods like [25, 26, 27] exist, they focus on specific applications like visual quality and point cloud generation, acting as a loss function for neural network training. [27] provides a way to measure the accuracy and completeness of meshes generated by multi-view stereo reconstruction but does not account for resolution and artifacts. [18] notes the lack of ground truth to evaluate point clouds; we address this by using simulated datasets where ground truth from the simulation environment is available in the form of meshes. We then sample these meshes to acquire ground truth point clouds.
Therefore, we propose the Point Quality Metric (PQM), which addresses some limitations of existing metrics, namely: (i) capturing a notion of map coverage, (ii) penalizing non-existent artifacts, (iii) measuring accuracy, and (iv) rewarding higher density and resolution. We believe our proposal provides a framework for the comprehensive evaluation of two point clouds based on their completeness, accuracy, artifacts, and resolution.
## III Method
This section describes the proposed metric PQM and the evaluation framework, including an ablation study to independently measure the effectiveness of each sub-metric. As mentioned above, point clouds generated by LiDAR SLAM methods, although dense, can be inaccurate and incomplete due to the path taken by the robot and registration errors. Further, these point clouds can contain artifacts (anomalies or points not present in the ground truth) that degrade the overall quality.
### _Point Quality Metric (PQM)_
We denote the source (ground truth) point cloud by \(A=\{a_{i}\}\), which we refer to as \(pcd_{A}\). Similarly, we denote the candidate point cloud by \(B=\{b_{i}\}\), referred to as \(pcd_{B}\), where \(a_{i}\) and \(b_{i}\) are in \(R^{d}\) and \(i=1,\ldots,N\). Our goal is to measure the difference in quality between the source point cloud \(pcd_{A}\) and the candidate point cloud \(pcd_{B}\). Quality, as defined in Sec. III, is a weighted combination of the four sub-metrics: completeness, artifact score, accuracy, and resolution. Each sub-metric contributes to the overall quality of the point cloud, and evaluating them independently enables us to assess the effect of each sub-metric on the overall quality.
\(Q_{PQM}\) denotes the overall quality given by eq. 7 while \(Q_{c}\), \(Q_{t}\), \(Q_{a}\) and \(Q_{r}\) denote the individual sub-metrics completeness, artifact score, accuracy, and resolution respectively.
**Region Splitting**:
To efficiently evaluate large point clouds, we divide them into smaller regions of equal size, or "cells", with each cell having a size of \(r\). This enables comparisons to be carried out in parallel and provides insight into the quality of different areas within the point cloud. The point cloud is split into \(N\) such cells, and
Fig. 3: Valid points - Candidate points within \(\epsilon\) of source point are marked valid while others are marked invalid (artifacts)
sub-metrics are computed for each cell. Cells are denoted as \(cell_{A^{j}}\in pcd_{A}\) and \(cell_{B^{j}}\in pcd_{B}\), where \(j=1\ldots N\). PQM is normalized between 0 and 1, where 1 represents the best quality and 0 represents the worst quality. In contrast, geometric distance metrics such as CD, HD, and EMD are typically calculated such that a score of 0 represents a perfect match, and any value greater than 0 represents a degree of mismatch.
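One simple way to realize this region splitting is to bin points by integer voxel indices, as in the sketch below; the cell size \(r\) is the only parameter, and the grouping strategy shown here is an illustrative assumption rather than the released implementation.

```python
import numpy as np
from collections import defaultdict

def split_into_cells(points, r):
    """Group an (N, 3) array of points into cubic cells of side r.

    Returns a dict mapping the integer cell index (i, j, k) to the
    points falling inside that cell.
    """
    points = np.asarray(points, dtype=float)
    cells = defaultdict(list)
    for key, p in zip(map(tuple, np.floor(points / r).astype(int)), points):
        cells[key].append(p)
    return {k: np.asarray(v) for k, v in cells.items()}
```

Cells obtained in this way can be processed independently, which is what allows the per-cell sub-metrics to be computed in parallel.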
#### Iii-A1 Resolution
We define resolution per cell as the ratio of the density (pts/volume) of \(cell_{B}\), a cell in \(pcd_{B}\), to the density (pts/volume) of \(cell_{A}\), a cell in \(pcd_{A}\), as given in Eq. (3). Overall resolution (\(Q_{r}\)) is the mean of \(q_{r}\) over \(N\) cells. Resolution determines the level of detail in the point cloud. Low resolution can cause loss of texture and of smaller objects, making the point cloud unusable for applications that require high fidelity and detail.
\[q_{r}=\left(\frac{\rho_{cell_{B}}}{\rho_{cell_{A}}}\right) \tag{3}\]
#### Iii-A2 Accuracy
Accuracy is measured from the sum of distances between every point in \(cell_{B}\), a cell in \(pcd_{B}\), and its nearest neighbor in \(cell_{A}\), a cell in \(pcd_{A}\), provided that this distance is less than the threshold \(\epsilon\); the sum is normalized by the product of the number of points in \(cell_{B}\) and \(\epsilon\), as given in Eq. (4). The normalization is performed over (\(|B|\times\epsilon\)) as this is the maximum total distance possible if all points in \(cell_{B}\) are valid (i.e. have neighbors within \(\epsilon\) distance in \(cell_{A}\)). Overall accuracy (\(Q_{a}\)) is the mean of \(q_{a}\) over all \(N\) cells.
\[q_{a} = \left(1\!-\!\left(\frac{1}{\epsilon\,|cell_{B}|}\right)\!\times\! \sum_{b\in cell_{B}}\!s(a,b)\right) \tag{4}\]
where,
\(s(a,b)=\begin{cases}\min\limits_{a\in cell_{A}}\!\|a-b\|_{2}&,\text{if}\min \limits_{a\in cell_{A}}\!\|a-b\|_{2}\leq\epsilon\\ 0&,\text{otherwise}\end{cases}\)
#### Iii-A3 Completeness
Completeness is the ratio of valid points (Fig. 3) of \(cell_{B}\) (i.e. points within \(\epsilon\) distance of \(cell_{A}\)) to the total points in \(cell_{A}\) given by (eq.5). This gives us a measure of how complete a given region is compared to the ground truth and can be used to estimate missing areas in the candidate point cloud. Overall completeness is given by \(Q_{c}\), the mean of all (\(q_{c}\)) over \(N\) cells.
\[q_{c}=\left(\frac{|\left\{b_{i}\in cell_{B}:\min\limits_{a\in cell_{A}}\!\|a-b \|_{2}\leq\epsilon\right\}|}{|cell_{A}|}\right) \tag{5}\]
#### Iii-A4 Artifact Score
Artifacts (Fig. 3) are defined as the points in \(cell_{B}\) but not in \(cell_{A}\). These are generated due to reflections, distortion, or misregistration of points. Artifact score is the ratio of valid points of \(cell_{B}\) (i.e. points within \(\epsilon\) distance of \(cell_{A}\)) to the total points in \(cell_{B}\) given by (eq.6). Similar to III-A3, the overall artifact score is given by (\(Q_{t}\)).
\[q_{t}=\left(\frac{\left|\left\{b_{i}\in cell_{B}:\min\limits_{a\in cell_{A}}\! \|a-b\|_{2}\leq\epsilon\right\}|\right.}{|cell_{B}|}\right) \tag{6}\]
**Overall Map Quality**:
PQM is computed as the mean of the weighted sum of \(q_{r}\), \(q_{a}\), \(q_{c}\), and \(q_{t}\) over all \(N\) cells. The weights \(\left\{\omega_{r},\omega_{a},\omega_{c},\omega_{t}\right\}\) correspond to each of these sub-metrics, respectively. For all experiments in this paper, we equally weight each sub-metric \(\left(\left\{\omega_{r},\omega_{a},\omega_{c},\omega_{t}\right\}\!=\!0.25\right)\) to ensure that they contribute equally to the overall quality score. However, these weights can be adjusted to meet the specific requirements of a given application.
\[Q_{PQM}(A,B,\epsilon) = \frac{1}{N}\sum_{j=1}^{N}(\omega_{r}.q_{r}^{j}\!+\!\omega_{a}.q_{ a}^{j}\!+\!\omega_{c}.q_{c}^{j}\!+\!\omega_{t}.q_{t}^{j}) \tag{7}\]
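To make Eqs. (3)-(7) concrete, the following sketch computes the four sub-metrics for each pair of corresponding cells and averages their weighted sum. It is a simplified illustration under stated assumptions: cells are cubes of equal volume (so the density ratio reduces to a point-count ratio), only cells of the source cloud are visited, and a source cell with no candidate points is scored zero. It is not the reference implementation released with the paper.

```python
import numpy as np
from collections import defaultdict
from scipy.spatial import cKDTree

def split_into_cells(points, r):
    cells = defaultdict(list)
    for key, p in zip(map(tuple, np.floor(points / r).astype(int)), points):
        cells[key].append(p)
    return {k: np.asarray(v) for k, v in cells.items()}

def cell_submetrics(cell_a, cell_b, eps):
    """q_r, q_a, q_c, q_t of Eqs. (3)-(6) for one pair of equal-volume cells."""
    d, _ = cKDTree(cell_a).query(cell_b)   # nearest source point for every candidate point
    valid = d <= eps
    q_r = len(cell_b) / len(cell_a)        # density ratio (equal volumes -> point-count ratio)
    q_a = 1.0 - d[valid].sum() / (eps * len(cell_b))
    q_c = valid.sum() / len(cell_a)
    q_t = valid.sum() / len(cell_b)
    return q_r, q_a, q_c, q_t

def pqm(source, candidate, r, eps, w=(0.25, 0.25, 0.25, 0.25)):
    """Weighted mean of the sub-metrics over all source cells (Eq. 7)."""
    source = np.asarray(source, dtype=float)
    candidate = np.asarray(candidate, dtype=float)
    src_cells = split_into_cells(source, r)
    cand_cells = split_into_cells(candidate, r)
    scores = []
    for key, cell_a in src_cells.items():
        cell_b = cand_cells.get(key)
        if cell_b is None or len(cell_b) == 0:
            q = (0.0, 0.0, 0.0, 0.0)       # assumption: an empty candidate cell scores zero
        else:
            q = cell_submetrics(cell_a, cell_b, eps)
        scores.append(sum(wi * qi for wi, qi in zip(w, q)))
    return float(np.mean(scores))
```

Different use cases can then be accommodated simply by changing the weights \(w\), the cell size \(r\), or the tolerance \(\epsilon\), as discussed later in the paper.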
### _Controlled Ablation Study_
To evaluate PQM, we perform an ablation study using a prototype point cloud (Stanford Bunny [28]) model. The study involves applying various degradations to \(pcd_{A}\), the source model, and using PQM, CD, and HD to evaluate the quality at each step.
#### Iii-B1 Artifacts
We add points to \(pcd_{A}\) to simulate artifacts (\(pcd_{B}\) is a copy of \(pcd_{A}\) with added artifacts), which can be caused by sensor noise or registration errors. A set percentage \(p\) of points is added to each cell, uniformly sampled from a spherical artifact 1/10\({}^{th}\) the size of the cell and placed at the center of the cell. This also affects resolution since the number of points in \(pcd_{B}\) increases.
#### Iii-B2 Completeness
A patch of points is removed from \(pcd_{A}\) to simulate incompleteness, which can be caused by inconsistent mapping, down-sampling, and/or sensor noise. A set percentage \(p\) of the nearest neighbors of a randomly selected point is removed per cell.
#### Iii-B3 Accuracy
Gaussian noise is added to \(pcd_{A}\) to simulate loss of accuracy. The noise applied to the candidate point cloud has zero mean and a finite \(\sigma\), where \(\sigma\) is the variance applied to the points in a random normal direction.
#### Iii-B4 Resolution
Uniform down-sampling is applied to \(pcd_{A}\) to simulate the reduction in resolution and reduce the complexity of the point cloud while preserving its overall structure. This is achieved by sampling every \(k^{th}\) point in the current cell, where \(k\) is the control parameter.
Fig. 4: Simulation Worlds used for evaluation. Mai City (left), Village (center), Warehouse (right)
Fig. 5 shows the bunny model with 50% degradation (for artifacts, completeness, accuracy) and a sampling rate of 5 (for resolution) where \(pcd_{A}\) (green) is the source point cloud and \(pcd_{B}\) (red) are candidates after degradation.
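The four degradations of this ablation can be scripted in a few lines. The sketch below shows one plausible way to generate them with NumPy; the specific sampling choices (sphere radius, patch selection, random seed) are assumptions for illustration rather than the exact procedure used to produce Fig. 5.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_artifacts(points, frac, radius):
    """Append points sampled uniformly inside a small sphere at the centroid."""
    n_new = int(frac * len(points))
    directions = rng.standard_normal((n_new, 3))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    radii = radius * rng.uniform(size=(n_new, 1)) ** (1.0 / 3.0)   # uniform in the ball
    return np.vstack([points, points.mean(axis=0) + directions * radii])

def remove_patch(points, frac):
    """Remove a contiguous patch: the nearest neighbours of a random seed point."""
    n_remove = int(frac * len(points))
    seed = points[rng.integers(len(points))]
    order = np.argsort(np.linalg.norm(points - seed, axis=1))
    return np.delete(points, order[:n_remove], axis=0)

def add_noise(points, sigma):
    """Perturb every point with zero-mean Gaussian noise."""
    return points + rng.normal(0.0, sigma, size=points.shape)

def downsample(points, k):
    """Keep every k-th point (uniform down-sampling)."""
    return points[::k]
```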
### _Map Evaluation Framework_
To further study PQM, we collect LiDAR scans in purpose-built simulation environments (Fig. 4) and build point cloud maps using several popular LiDAR SLAM systems. The simulation environments are built using the Gazebo simulator [29] and mesh models of the worlds. We simulate an Ouster OS1-128 LiDAR on a Clearpath Husky platform to collect the LiDAR scans. The simulation worlds include the outdoor world provided by the Mai-City dataset [3] and two other worlds called "Village" and "Warehouse" as seen in Fig. 4. The environments are designed to represent city blocks and warehouses, complete with buildings, trees, storage pallets, and other elements commonly found in the real world. The use of simulation allows for more controlled testing conditions, ground truth maps, and the generation of visual data from multiple views for testing and evaluation.
The candidate point clouds are generated using LeGO-LOAM [2], FAST-LIO2 [1], and Puma [3]. LeGO-LOAM stands for lightweight and ground-optimized LiDAR odometry and mapping. The system outputs a dense point cloud and real-time odometry. FAST-LIO2 employs a tightly-coupled LiDAR+IMU method to generate dense point clouds in real time. This is achieved through the direct registration of scans with minimal downsampling facilitated by the use of an iKD-tree [30] for fast point-wise and block-wise operations. Puma [3] employs a unique approach to mesh generation by performing frame-to-mesh registration and Poisson Surface Reconstruction [31],[32] to generate a mesh. While this process is not real-time, the resulting meshes are lightweight and highly representative of the real world.
Finally, the point clouds generated by these methods (candidates) in the simulated worlds (Fig. 6) and their respective ground truths (source) are used to evaluate PQM. We also measure CD and HD between the candidate and source point clouds. The next section shows the comparative results.
## IV Experimental Evaluation
PQM is implemented using Open3D [33] and PDAL [34] libraries and runs entirely on a desktop workstation with 6C/12T CPU and 32 GB of Memory. We also provide a CUDA accelerated implementation with PyTorch [35], which can be advantageous for large point clouds.
Experimental evaluation is performed on three simulation datasets, the HILTI dataset [4], and the Stanford bunny [28]. Candidate point clouds are evaluated against the source point cloud sampled from the meshes which, as mentioned earlier, were used in the simulation worlds. Meshes are sampled into point clouds where the number of points is the maximum over all point clouds generated by the three SLAM methods.
### _Ablation Study_
To evaluate the performance of PQM in isolation, we degrade the source point cloud ([28]) as described in Sec. III-B. Experiments were performed for a range of degradations (0% to 90% in increments of 5%, and sampling rates of 1 to 19) with varying cell sizes, i.e. 0.05, 0.04, 0.03, 0.02 (meters). Fig. 7 shows results with cell size 0.05 (dividing the model into 4x4x3 cells). \(\epsilon\) is maintained as half the average distance between points in \(pcd_{A}\). \(\epsilon\) is a tunable parameter and can be chosen based on the application. All trials are evaluated with weights \(\{\omega_{c},\omega_{t},\omega_{a},\omega_{r}\}=0.25\) and \(\epsilon=0.0002\).
#### Iv-A1 Artifact Score
Tab. I and Fig. 7 show that adding artifacts proportionally decreases \(Q_{t}\), while \(Q_{a}\) and \(Q_{c}\) are unchanged. \(Q_{r}\) also shows a change due to the spherical artifact, as mentioned in Sec. III-B.1.
#### Iv-A2 Completeness
Tab. I and Fig. 7 show that completeness \(Q_{c}\) decreases as points are removed from \(pcd_{A}\), while \(Q_{a}\) and \(Q_{t}\) are unchanged. \(Q_{r}\) also shows a change due to the decrease in total points, as mentioned in Sec. III-B.2.
#### Iv-A3 Accuracy
Tab. I and Fig. 7 show a decrease in \(Q_{a}\) and, in turn, in \(Q_{PQM}\). \(\sigma\) for the Gaussian noise is constrained to \(\epsilon=0.0002\) to ensure \(Q_{c}\) and \(Q_{t}\) are not affected. The negligible change in the other sub-metrics can be due to the overflow of boundary points. This demonstrates the efficacy of using PQM where deviations in accuracy might be small.
#### Iv-A4 Resolution
Tab. I shows a decrease in resolution with an increase in the sampling rate. This is unlike Sec. IV-A2, as removing a continuous patch of points affects completeness more than resolution, while down-sampling degrades \(Q_{r}\) and \(Q_{c}\) almost equally, if not exactly, as seen in Fig. 7.
#### Iv-A5 Chamfer and Hausdorff distance
Similarly, Fig. 7 shows the change in CD and HD for each degradation. CD (green) shows a gradual increase, which signifies an increasing degradation in quality, but it fails to give insight into which aspect of quality is affected. Further, its value is unbounded and hence cannot provide a normalized quality value for a reasonable comparison. In the case of HD (blue), the values do not show any correlation with the applied degradation.
Fig. 5: Ablation study. (a) Original unmodified point cloud (b) Arbitrary clusters added in error (c) Clusters of points removed to test completeness (d) Random points added with gaussian noise (50%) and (e) Points down-sampled to 20%
\begin{table}
\begin{tabular}{l|c|c c c c c c c c c} \hline \hline
**Target Sub-metic** & **Parameter** & Candidate Pts & GT pts & \(D_{chamfer}\) & \(D_{hausdorff}\) & \(Q_{r}\) & \(Q_{a}\) & \(Q_{c}\) & \(Q_{t}\) & \(Q_{PQM}\) \\ \hline \multirow{3}{*}{Artifact Score (\%)} & 25 & 213586 & 100106 & 10.5184 & 0.0342 & 0.8586 & 0.9999 & 1.0000 & 0.8676 & 0.9315 \\ & 50 & 256309 & 100106 & 21.0403 & 0.0360 & 0.8708 & 0.9999 & 1.0000 & 0.7692 & 0.9100 \\ & 75 & 299028 & 100106 & 31.5700 & 0.0398 & 0.8857 & 0.9998 & 1.0000 & 0.6926 & 0.8945 \\ \hline \multirow{3}{*}{Completeness (\%)} & 25 & 170877 & 100106 & 0.0004 & 0.0003 & 0.9998 & 0.9008 & 0.9999 & 0.9994 & 0.9749 \\ & 50 & 170877 & 100106 & 0.0009 & 0.0003 & 1.0000 & 0.8051 & 0.9999 & 0.9986 & 0.9509 \\ & 75 & 170877 & 100106 & 0.0015 & 0.0003 & 0.9999 & 0.7033 & 0.9996 & 0.9946 & 0.9244 \\ \hline \multirow{3}{*}{Accuracy (\%)} & 25 & 128168 & 100106 & 0.3108 & 0.0147 & 0.9768 & 1.0000 & 0.9688 & 1.0000 & 0.9864 \\ & 50 & 85445 & 100106 & 1.6109 & 0.0199 & 0.8645 & 1.0000 & 0.8395 & 1.0000 & 0.9260 \\ \cline{1-1} & 75 & 42726 & 100106 & 6.7446 & 0.0248 & 0.6278 & 1.0000 & 0.4713 & 1.0000 & 0.7748 \\ \hline \multirow{3}{*}{Resolution (sample rate)} & 5 & 34187 & 100106 & 0.0455 & 0.0027 & 0.4666 & 1.0000 & 0.4399 & 1.0000 & 0.7266 \\ \cline{1-1} & 10 & 17099 & 100106 & 0.1031 & 0.0039 & 0.2553 & 1.0000 & 0.2266 & 1.0000 & 0.6205 \\ \cline{1-1} & 15 & 11406 & 100106 & 0.1624 & 0.0054 & 0.1481 & 1.0000 & 0.1491 & 1.0000 & 0.5743 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Ablation Study
Fig. 6: Qualitative evaluation of SLAM systems. The figure shows, from top to bottom, dense maps generated using PUMA, FAST-LIO, LeGo-LOAM, and ground truth. Worlds from left to right are Mai-City, Warehouse, Village, and Exp04 sequence from the HILTI dataset
Fig. 7: Ablation Study. (a), (b), (c), and (d) depict the impact of adding artifacts, removing points, adding noise, and down-sampling on the proposed quality metrics, individual sub-metrics, CD and HD. Our method provides further insight into map quality.
Results of these experiments show that PQM is effective in detecting and quantifying changes in map quality due to different types and levels of degradation. The scores generated by PQM correlate with the expected change in resulting value for each targeted sub-metric.
Overall, the experiment demonstrates the effectiveness of the PQM metric in evaluating the quality of point clouds and detecting changes in map quality due to different types and levels of degradation and hence can act as a tool to detect point cloud suitability for specific applications.
### _Evaluation on SLAM systems_
The performance of PQM on large, realistic point cloud maps generated using the SLAM systems mentioned above was evaluated exhaustively in three simulated worlds and the exp04 sequence of the HILTI dataset. The results of this evaluation are presented in Fig. 6, which shows the qualitative differences between the maps generated by the SLAM systems and the ground truth. Tab. II shows the sub-metrics for each map. Each sub-metric evaluates a certain aspect of quality as defined in Sec. III. PQM is computed as the weighted sum of these sub-metrics, where the weights (Eq. 7) are set by users based on application. For all evaluations, the weights were set to 0.25, making the contribution of each sub-metric equal for overall quality. We also set \(\epsilon=0.1\) and cell size \(r=10\) for all tests in Tab. II. More suitable values were not explored for all maps due to long computing times. A study of the effect of \(\epsilon\) and \(r\) is shown in Tab. III.
Visually, it is apparent that FAST-LIO2 generates the highest-quality maps compared to the other methods in terms of resolution, completeness, and accuracy. However, the highlighted CD and HD distance values in Tab. II do not reflect this observation. We can see that the CD and HD distances correctly identify the best map only 1/4 and 0/4 times, respectively. On the other hand, PQM consistently identifies the point clouds generated by FAST-LIO2 as the ones with the highest overall quality. The sub-metrics are also indicative of PQM's performance, with only \(Q_{a}\) being incorrect in 2 instances (HILTI and Warehouse). This is likely due to the threshold of \(\epsilon=0.1\) used. However, a lower \(\epsilon\) value correctly identifies \(Q_{a}\), as seen in Tab. III. The highlighted values show that as \(\epsilon\) is reduced (\(\epsilon=0.01\)), quality is calculated correctly 2/3 of the time, which is a much higher rate.
## V Discussion
**Need for reference point clouds**: For the scope of this paper, we assume that candidate maps are compared against ground truth point clouds or meshes. While acquiring the ground truth maps can be difficult in real-world scenarios, we make the following observations: (i) To effectively quantify the mapping performance of a SLAM system a ground truth or reference is necessary, (ii) Ground truth can be generated using an engineering grade LiDAR which can generate centimeter-accurate if not millimeter-accurate point clouds ([4], Fig. 2), and (iii) Alternatively, we can evaluate in simulation where ground truth is readily available. Recent advancements in high fidelity simulators [36, 37] make this a lucrative option. Our evaluation framework uses three simulation scenarios as shown in Figure 4.
**Customizability**: While we provide one holistic PQM metric, the intent is to expose the various dimensions of importance as highlighted by the sub-metrics: completeness, artifact score, accuracy, and resolution. We envision the user customizing the weights and the various parameters to better suit their application. This will allow better quantification of map quality. Along those lines, the introduction of the cell size \(r\) and the threshold \(\epsilon\) enables the user to tune PQM to a particular use case. The smaller the \(\epsilon\), the lower the tolerance for accuracy, completeness, and artifact score. The cell size helps reduce the computational complexity for large point clouds, the metrics can be computed in parallel, and as the cell size is decreased the resolution for local variations increases.
\begin{table}
\begin{tabular}{c|c|c c c c c c c c} \hline \hline
**Candidate** & **Method** & Candidate Pts & GT pts & \(D_{chamfer}\) & \(D_{hausdorff}\) & \(Q_{r}\) & \(Q_{a}\) & \(Q_{c}\) & \(Q_{t}\) & \(Q_{PQM}\) \\ \hline \multirow{2}{*}{HILTI} & LeGO LOAM & 84761 & 6845275 & 1535250.251 & 6.4576 & 0.0023 & **0.4600** & 0.0010 & 0.0759 & 0.1348 \\ & FAST-LIO2 & 5832076 & 6845275 & **602159.2716** & **5.5596** & **0.2177** & 0.3836 & **0.3681** & **0.4268** & **0.3491** \\ \hline \multirow{3}{*}{Mai City} & LeGO LOAM & 268003 & 78513113 & 614644320.9 & **21.7358** & 0.0145 & 0.5940 & 0.0016 & 0.6559 & 0.3165 \\ & FAST-LIO2 & 78513193 & 7851313 & **560122060.2** & 21.8017 & **0.3517** & **0.6244** & **0.2327** & **0.6994** & **0.4771** \\ & Puma & 78513052 & 78513113 & 1595554060 & 22.3074 & 0.2932 & 0.4689 & 0.0378 & 0.0736 & 0.2184 \\ \hline \multirow{3}{*}{Village} & LeGO LOAM & 402346 & 53079602 & **235825489.3** & 54.1642 & 0.0593 & 0.6428 & 0.0005 & 0.1026 & 0.2013 \\ & FAST-LIO2 & 53079702 & 53079602 & 256641433.7 & 53.8912 & **0.3061** & **0.7615** & **0.2214** & **0.4514** & **0.4351** \\ & Puma & 53079702 & 53079602 & 2595147655 & **35.0528** & 0.2858 & 0.3927 & 0.0667 & 0.0904 & 0.2089 \\ \hline \multirow{3}{*}{Warehouse} & LeGO LOAM & 901658 & 111853794 & **31136219.94** & **31136219.94** & 0.173512 & 0.0051 & **0.7866** & 0.0015 & 0.2018 & 0.2487 \\ & FAST-LIO2 & 111853794 & 111853794 & 475765579.7 & 12.9406 & **0.4961** & 0.7068 & **0.5190** & **0.8649** & **0.6467** \\ & Puma & 111853794 & 111853794 & 572566956.5 & **12.3049** & 0.4565 & 0.7498 & 0.2474 & 0.2937 & 0.4368 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Evaluation on SLAM Methods
\begin{table}
\begin{tabular}{c|c|c|c c c c} \hline \hline
**Method** & \(r\) & \(\epsilon\) & \(Q_{r}\) & \(Q_{a}\) & \(Q_{c}\) & \(Q_{t}\) & \(Q_{PQM}\) \\ \hline FAST-LIO2 & 1 & & **0.2054** & 0.3582 & **0.2435** & **0.3156** & **0.2807** \\ LeGO-LOAM & 2 & 0.0038 & **0.4139** & 0.0009 & 0.0519 & 0.1176 \\ LeGO-LOAM & 2 & 0.1 & 0.005 & **0.4727** & 0.001 & 0.0564 & 0.1338 \\ FAST-LIO2 & & **0.2059** & 0.3189 & **0.2651** & **0.3162** & **0.2765** \\ LeGO-LOAM & 5 & & 0.0034 & **0.5077** & 0.0008 & 0.0447 & 0.1392 \\ FAST-LIO2 & & & **0.1844** & 0.4241 & **0.2991** & **0.386** & **0.3234** \\ \hline LeGO-LOAM & 1 & & 0.0038 & 0.4385 & 0 & 0.0035 & 0.1115 \\ FAST-LIO2 & & & **0.2054** & **0.5103** & **0.0097** & **0.0127** & **0.1845** \\ LeGO-LOAM & & & 0.005 & **0.4991** & 0.0001 & 0.0038 & 0.127 \\ FAST-LIO2 & & & **0.2059** & 0.4725 & **0.0097** & **0.011** & **0.1748** \\ FAST-LIO2 & & & **0.1844** & **0.6139** & **0.0111** & **0.0121** & **0.2054** \\ LeGO-LOAM & 5 & & 0.0034 & 0.5284 & 0.0001 & 0.0033 & 0.1338 \\ \hline \hline \end{tabular}
\end{table} TABLE III: PQM with varying \(\epsilon\) and \(r\) values (HILTI Exp04)
## VI Conclusion
We propose a novel point quality evaluation metric (PQM) that comprehensively evaluates the quality of dense point cloud maps generated by SLAM algorithms. PQM is designed to capture aspects of mapping accuracy that are not addressed by existing evaluation metrics and can help improve the performance of SLAM algorithms in various applications. The experimental results presented in this paper demonstrate the effectiveness and robustness of PQM in evaluating the quality of point clouds. We open-source our evaluation framework with extensive documentation for others to use the PQM metric in their evaluation of dense mapping methods.
|
2306.13473 | Cosmological Perturbations in the Teleparallel analog of Horndeski
gravity | In this work we study the cosmological perturbations in
Bahamonde-Dialektopoulos-Levi Said (BDLS) theory, i.e. the teleparallel analog
of Horndeski gravity. In order to understand the evolution of structure in a
cosmological model, it is necessary to study its cosmology not only in the
background but also perturbatively. Both Horndeski and its teleparallel analog
have been analyzed a lot in the literature, but in order to study them
quantitatively, we need to know their cosmological perturbations. That is why,
we study here the scalar-vector-tensor decomposition of the theory and we also
express the so-called alpha parameters in terms of the arbitrary functions of
the theory, that designate the deviation from the {\Lambda}CDM model. We have
explored tensor, vector and scalar perturbation of the action up to second
order, which drastically opens up new possibilities on searches in the
parameter space of scalar-tensor theories in the context of observations. | Bobomurat Ahmedov, Konstantinos F. Dialektopoulos, Jackson Levi Said, Abdurakhmon Nosirov, Odil Yunusov, Zinovia Oikonomopoulou | 2023-06-23T12:29:28Z | http://arxiv.org/abs/2306.13473v1 | # Cosmological Perturbations in the Teleparallel analog of Horndeski gravity
###### Abstract
In this work we study the cosmological perturbations in Bahamonde-Dialektopoulos-Levi Said (BDLS) theory, i.e. the teleparallel analog of Horndeski gravity. In order to understand the evolution of structure in a cosmological model, it is necessary to study its cosmology not only in the background but also perturbatively. Both Horndeski and its teleparallel analog have been analyzed a lot in the literature, but in order to study them quantitatively, we need to know their cosmological perturbations. That is why, we study here the scalar-vector-tensor decomposition of the theory and we also express the so-called _alpha_ parameters in terms of the arbitrary functions of the theory, that designate the deviation from the \(\Lambda\)CDM model. We have explored tensor, vector and scalar perturbation of the action up to second order, which drastically opens up new possibilities on searches in the parameter space of scalar-tensor theories in the context of observations.
## I Introduction
The ever increasing precision in the measurement of the expansion of the Universe [1; 2] has led to the possibility that the Universe may be expanding faster than predicted by the \(\Lambda\)CDM concordance model [3]. While the \(\Lambda\)CDM model has had many theoretical [4; 5; 6] and observational open questions [7], this would constitute a larger disagreement. Indeed, over the last few years the problem of cosmic tensions has been expressed in several cosmological parameters [8; 9; 10] with the value of the Hubble constant being the parameter most in contention. These cosmic tensions have primarily emerged as divergences between reported cosmological parameter values based either on direct measurements from the late Universe such as Type Ia supernovae, the tip of the red giant branch measurements, strong lensing as well as many other approaches [11; 12; 13; 14], as compared with indirect measurements from the cosmic microwave background (CMB) radiation and big bang nucleosynthesis in addition to other approaches to this regime of the Universe [15; 16; 17; 18; 19]. The growing tension between cosmological parameters measured either directly or indirectly has prompted a re-evaluation of modifications of the concordance models that have been developed in the literature over the last few
decades.
There have been a wide variety of different approaches to modifying the concordance model given the appearance of cosmic tensions in recent years. This has taken a variety of forms such as the reconsideration of the cosmological principle [20; 21], early Universe dark energy models [22], extra degrees of freedom in the form of additional neutrino species in the early Universe [23; 24], modified gravity [25] and many others [26]. Modified gravity is particularly interesting since it gives a clear avenue in which to modify the cosmic evolution both at the background and perturbative levels at any regimes in the cosmic history of the Universe. However, there are many directions in which general relativity (GR) can be modified as the gravitational component of the concordance model. One interesting approach that has gained momentum in recent years is metric-affine gravity where the underlying connection on which GR is based is exchanged with other geometries [27; 28]. This may be a more natural way of modifying GR since it does not require ad hoc conditions on the action. Teleparallel gravity (TG) [29] is one such approach in which the curvature associated with the Levi-Civita connection \(\hat{\Gamma}^{\sigma}{}_{\mu\nu}\) (over-circles denotes quantities based on the curvature of the Levi-Civita connection in this work) is substituted with a torsional teleparallel connection \(\Gamma^{\sigma}{}_{\mu\nu}\).
The teleparallel connection is curvature-free and satisfies metricity [30; 31; 32]. The result is an altogether novel reformulation of gravitational models which can, following a particular prescription of teleparallel objects produce a teleparallel equivalent of general relativity (TEGR) [33; 34], which is dynamically equivalent to GR but constructed in a totally different way. Thus, GR and TEGR agree on all classical phenomenology but may be different in the IR limit which may provide more possible directions for quantum theories of gravity. As in curvature based theories of gravity, TEGR can be modified in various different directions. The first modification to TEGR and the most well-known is \(f(T)\) gravity [35; 36; 37; 38; 39; 40; 41; 42; 43; 44] where the TEGR Lagrangian \(T\) (the torsion scalar) is generalized to an arbitrary function thereof. This produces a second order theory in terms of metric components and agrees with a growing number of observational phenomena. Another direction in which to modify the TEGR action is to consider it as part of a larger scalar-tensor framework akin to Horndeski gravity [45].
For curvature based settings, Horndeski gravity is the largest class of second order theories in which a single scalar field is added to the Einstein-Hilbert action. The observation of the gravitational wave event GW170817 [46] and its electromagnetic counterpart GRB170817A [47] has placed severe constraints on this model limiting the most exotic branches of the theory [48]. Within this context, a teleparallel analogue of Horndeski gravity was proposed in Ref. [49]. Here, some further conditions had to be placed on the ensuing class of theories since TG tends to observe a wider range of Lovelock terms as compared with curvature based theories [50]. This framework of theories has since been further investigated in various scenarios. In Ref. [51] it was found that a much larger class of models are admitted that tolerate a speed of light propagation speed for gravitational waves, while in Ref. [52] the spectrum of polarization modes was analyzed in the context of the various branches of the theory. The post-Newtonian parametrization was studied in Ref. [53] where it was found that most of the exotic ingredients of the theory pass the standard tests in this regime. More recently, the class of models has been explored through the prism of Noether symmetries in Ref. [54] where the full classification was developed, while in Refs. [55; 56] the well-tempering approach developed in Ref. [57] for regular Horndeski gravity was applied in this scenario and where the mechanics of well-tempering was better tuned to the TG setting. There has also been initial work on the stability of the theory with a particular focus on theoretical conditions that can be derived from Minkowski space as described Ref. [58]. It is thus critical to analyse in a concrete way the full cosmology of this new teleparallel analogue of Horndeski gravity. The background Friedmann and Klein-Gordon equations can be found in Ref. [49]. We are hence motivated to determine the full cosmological perturbation equations around a cosmological background. We do this here together with an initial analysis of these perturbations through some examples. The work is divided as follows, TG and its Horndeski analogue are discussed in Sec. II while in Sec. III the cosmological perturbations are presented. Some example applications of these perturbation equations are shown in Sec. IV. Specifically, we calulcate the tensor and scalar primordial power spectra and we express the tensor-to-scalar ratio in terms of the arbitrary functions of our theory. In addition, we formulate the alpha parameters that could be used to distinguish between \(\Lambda\)CDM and modified descirptions of cosmology. The main results are then summarized in Sec. V where we describe the impact of this work within the broader literature on the topic.
## II Teledenski gravity: a teleparallel analogue to Horndeski theory
Let us first provide a brief introduction to teleparallel gravity and its background cosmological dynamics.
### Teleparallel Gravity
Curvature-based theories of gravity, such as GR, depend on a geometric framework in which the Levi-Civita connection \(\hat{\Gamma}^{\sigma}{}_{\mu\nu}\) (over-circles refer to any quantities based on the Levi-Civita connection) is the basis of the geometric objects
of the theory, such as the Riemann tensor [59]. Thus, the Levi-Civita connection is used throughout curvature-based models of gravity such as in the construction of the Einstein-Hilbert action through the Ricci scalar. TG offers a different avenue in which to construct theories of gravity in which the curvature-based connection is replaced with the torsional teleparallel connection \(\Gamma^{\sigma}{}_{\mu\nu}\)[29; 30; 31; 32].
On a more practical level, curvature and torsional theories of gravity differ in that the former is based on the metric tensor \(g_{\mu\nu}\) and its derivatives, whereas TG is built using the tetrad \(e^{A}{}_{\mu}\), which accounts for the gravitational variables of the system, and a flat spin connection \(\omega^{B}{}_{C\nu}\). Here, Greek indices refer to coordinates on the general manifold while Latin ones refer to the local Minkowski spacetime. These objects also appear in GR but they are convoluted in that setting making them largely impractical, while in TG the spin connection is an inertial object. The tetrad is directly linked to the metric tensor through
\[g_{\mu\nu}=e^{A}{}_{\mu}e^{B}{}_{\nu}\eta_{AB}\quad\text{and}\quad\eta_{AB}=E_{ A}{}^{\mu}E_{B}{}^{\nu}g_{\mu\nu}\,, \tag{1}\]
where \(E_{A}{}^{\mu}\) is the inverse tetrad. Here one can observe that there are an infinite number of possible choices for the tetrad components, and so its the spin connection that acts to retain the diffeomorphism invariance of the system for these different choices.
The tetrad-spin connection pair represent the possible components for a particular spacetime, and so the teleparallel connection can be written in these terms through [31; 32]
\[\Gamma^{\lambda}{}_{\nu\mu}=E_{A}{}^{\lambda}\partial_{\mu}e^{A}{}_{\nu}+E_{ A}{}^{\lambda}\omega^{A}{}_{B\mu}e^{B}{}_{\nu}\,, \tag{2}\]
where the spin connection is guaranteed to be flat through the condition [30]
\[\partial_{[\mu}\omega^{A}{}_{|B|\nu]}+\omega^{A}{}_{C[\mu}\omega^{C}{}_{|B| \nu]}\equiv 0\,. \tag{3}\]
There also exist unique frames for any spacetime in which the spin connection terms all vanish for particular choices of the tetrad components. This is called the Weitzenbock gauge [60], and is consistently applied when the spin connection field equations identically vanish for these choices.
Gravitational scalars in TG are built by replacing the Levi-Civita connection with its teleparallel analogue. The result of this is that the Riemann tensor vanishes identically \(R^{\alpha}{}_{\beta\gamma\epsilon}(\Gamma^{\sigma}{}_{\mu\nu})\equiv 0\) (while the regular Riemann tensor remains nonzero \(\mathring{R}^{\alpha}{}_{\beta\gamma\epsilon}(\mathring{\Gamma}^{\sigma}{}_{ \mu\nu})\neq 0\)). Thus, we need to define a torsion tensor that is based solely on the teleparallel connection, namely [29; 61]
\[T^{A}{}_{\mu\nu}:=2\Gamma^{A}{}_{[\nu\mu]}\,, \tag{4}\]
where square brackets denote the antisymmetric operator. The torsion tensor represents the field strength of the theory [30], and is invariant under local Lorentz and diffeomorphic transformations [62].
The torsion tensor can be decomposed into three irreducible parts [63; 64]
\[a_{\mu} :=\frac{1}{6}\epsilon_{\mu\nu\lambda\rho}T^{\nu\lambda\rho}\,, \tag{5}\] \[v_{\mu} :=T^{\lambda}{}_{\lambda\mu}\,,\] (6) \[t_{\lambda\mu\nu} :=\frac{1}{2}\left(T_{\lambda\mu\nu}+T_{\mu\lambda\nu}\right)+ \frac{1}{6}\left(g_{\nu\lambda}v_{\mu}+g_{\nu\mu}v_{\lambda}\right)-\frac{1}{ 3}g_{\lambda\mu}v_{\nu}\,, \tag{7}\]
which are the axial, vector, and purely tensorial parts, respectively, and where \(\epsilon_{\mu\nu\lambda\rho}\) is the totally antisymmetric Levi-Civita tensor in four dimensions. Using this decomposition, unique gravitational scalar invariants can be built [65]
\[T_{\text{ax}} :=a_{\mu}a^{\mu}=-\frac{1}{18}\left(T_{\lambda\mu\nu}T^{\lambda \mu\nu}-2T_{\lambda\mu\nu}T^{\mu\lambda\nu}\right)\,, \tag{8}\] \[T_{\text{vec}} :=v_{\mu}v^{\mu}=T^{\lambda}{}_{\lambda\mu}T_{\rho}{}^{\rho\mu}\,,\] (9) \[T_{\text{ten}} :=t_{\lambda\mu\nu}t^{\lambda\mu\nu}=\frac{1}{2}\left(T_{\lambda \mu\nu}T^{\lambda\mu\nu}+T_{\lambda\mu\nu}T^{\mu\lambda\nu}\right)-\frac{1}{ 2}T^{\lambda}{}_{\lambda\mu}T_{\rho}{}^{\rho\mu}\,, \tag{10}\]
which form the set of all general scalar invariants that are not parity violating and involve, at most, quadratic contractions of the torsion tensor.
There is a particular combination of the axial, vector, and purely tensorial scalar invariants that produce the so-called torsion scalar [30]
\[T:=\frac{3}{2}T_{\rm ax}+\frac{2}{3}T_{\rm ten}-\frac{2}{3}T_{\rm vec}=\frac{1}{2 }\left(E_{A}{}^{\lambda}g^{\rho\mu}E_{B}{}^{\nu}+2E_{B}{}^{\rho}g^{\lambda\mu}E_ {A}{}^{\nu}+\frac{1}{2}\eta_{AB}g^{\mu\rho}g^{\nu\lambda}\right)T^{A}{}_{\mu \nu}T^{B}{}_{\rho\lambda}\,, \tag{11}\]
which turns out to be an incredibly important scalar since it is equal to the Ricci scalar (up to a total divergence term) [65]
\[R=\mathring{R}+T-\frac{2}{e}\partial_{\mu}\left(eT^{\lambda}{}_{\lambda}{}^{\mu}\right)=0\,, \tag{12}\]
where the Ricci scalar \(R\) calculated with the teleparallel connection vanishes, as described above, and where \(e=\det\left(e^{A}{}_{\mu}\right)=\sqrt{-g}\) is the tetrad determinant. The regular curvature-based Ricci scalar can be equivalently expressed as
\[\mathring{R}=-T+\frac{2}{e}\partial_{\mu}\left(eT^{\lambda}{}_{ \lambda}{}^{\mu}\right):=-T+B\,, \tag{13}\]
where \(B\) is this total divergence term.
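As a concrete orientation for the cosmological setting used below, for the diagonal tetrad \(e^{A}{}_{\mu}=\mathrm{diag}(1,a(t),a(t),a(t))\) of a flat FLRW spacetime these scalars evaluate to

\[T=6H^{2}\,,\qquad B=6\left(3H^{2}+\dot{H}\right)\,,\qquad\mathring{R}=-T+B=6\left(\dot{H}+2H^{2}\right)\,,\]

where \(H=\dot{a}/a\), so that the combination \(-T+B\) indeed reproduces the usual curvature-based Ricci scalar.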
The action that is based on the linear form of the torsion scalar produces the teleparallel equivalent of general relativity (TEGR), which differs from the Einstein-Hilbert action only by a boundary term and is therefore dynamically equivalent to GR [28, 29]. Using the same rationale as in curvature-based theories of gravity, one can directly generalize the TEGR action to an f(T) gravity [32, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70] in which the Lagrangian of the theory is raised from \(T\) to an arbitrary function \(f(T)\). The f(T) field equations have an advantage over their curvature-based analogue in that they remain generically second order in derivatives at the level of the field equations.
In this work we investigate the teleparallel analogue of Horndeski gravity, which is built on the interactions of gravitational objects with a single scalar field. In TG, the scalar field and matter couple to gravity in the same way as in GR (via the minimal coupling prescription), where partial derivatives are promoted to Levi-Civita covariant derivatives, that is [29, 71]
\[\partial_{\mu}\rightarrow\mathring{\nabla}_{\mu}\,, \tag{14}\]
which only applies to the matter sector. With this in hand, we can review the recently proposed teleparallel analog of Horndeski gravity [49, 51, 53], also called Bahamonde-Dialektopoulos-Levi Said (BDLS) theory. This is based on three conditions, namely (i) the field equations must be at most second order in their derivatives of the tetrads; (ii) the scalar invariants will not be parity violating; and (iii) the number of contractions with the torsion tensor is limited to being at most quadratic. Without these conditions, the ensuing action would not contain a finite number of terms. Due, in part, to the second order nature of many extensions of TG, it turns out that the resulting action is an extension of the regular form of Horndeski gravity. The result of these conditions is that the terms of regular Horndeski gravity are found, as well as additional terms that are linear in contractions with the torsion tensor [49]
\[I_{2}=v^{\mu}\phi_{;\mu}\,, \tag{15}\]
where \(\phi\) is the scalar field, as well as terms that are quadratic in this respect
\[J_{1} =a^{\mu}a^{\nu}\phi_{;\mu}\phi_{;\nu}\,, \tag{16}\] \[J_{3} =v_{\sigma}t^{\sigma\mu\nu}\phi_{;\mu}\phi_{;\nu}\,,\] (17) \[J_{5} =t^{\sigma\mu\nu}t_{\sigma}{}^{\alpha}{}_{\nu}\phi_{;\mu}\phi_{;\alpha}\,,\] (18) \[J_{6} =t^{\sigma\mu\nu}t_{\sigma}{}^{\alpha\beta}\phi_{;\mu}\phi_{;\nu}\phi_{;\alpha}\phi_{;\beta}\,,\] (19) \[J_{8} =t^{\sigma\mu\nu}t_{\sigma\mu}{}^{\alpha}\phi_{;\nu}\phi_{;\alpha}\,,\] (20) \[J_{10} =\epsilon^{\mu}{}_{\nu\sigma\rho}a^{\nu}t^{\alpha\rho\sigma}\phi_{;\mu}\phi_{;\alpha}\,, \tag{21}\]
where semicolons represent covariant derivatives with respect to the Levi-Civita connection.
Therefore, we can write the teleparallel analogue of Horndeski gravity as an action through
\[\mathcal{S}_{\rm BDLS}=\int d^{4}x\,e\mathcal{L}_{\rm Tele}+\sum_{i=2}^{5}\int d^{ 4}x\,e\mathcal{L}_{i}+\int d^{4}x\,e\mathcal{L}_{\rm m}\,, \tag{22}\]
where the contributions from regular Horndeski gravity continue to appear as [45]
\[\mathcal{L}_{2} :=G_{2}(\phi,X)\,, \tag{23}\] \[\mathcal{L}_{3} :=-G_{3}(\phi,X)\mathring{\Box}\phi\,,\] (24) \[\mathcal{L}_{4} :=G_{4}(\phi,X)\left(-T+B\right)+G_{4,X}(\phi,X)\left[\left( \mathring{\Box}\phi\right)^{2}-\phi_{;\mu\nu}\phi^{;\mu\nu}\right]\,,\] (25) \[\mathcal{L}_{5} :=G_{5}(\phi,X)\mathring{G}_{\mu\nu}\phi^{;\mu\nu}-\frac{1}{6}G_{ 5,X}(\phi,X)\left[\left(\mathring{\Box}\phi\right)^{3}+2\phi_{;\mu}^{\phantom {;\mu}\nu}\phi_{;\nu}^{\phantom{;\nu}\alpha}\phi_{;\alpha}^{\phantom{;\mu} \mu}-3\phi_{;\mu\nu}\phi^{;\mu\nu}\mathring{\Box}\phi\right]\,, \tag{26}\]
which turn out to be equivalent to their corresponding regular Horndeski contributions except that they are calculated using teleparallel objects rather than the metric, but which continue to produce the same contributions to equations of motion for particular systems, and where
\[\mathcal{L}_{\rm Tele}:=G_{\rm Tele}\left(\phi,X,T,T_{\rm ax},T_{\rm vec},I_{2 },J_{1},J_{3},J_{5},J_{6},J_{8},J_{10}\right)\,, \tag{27}\]
where the kinetic term is defined as \(X:=-\frac{1}{2}\partial^{\mu}\phi\partial_{\mu}\phi\), \(\mathcal{L}_{\rm m}\) is the matter Lagrangian in the Jordan conformal frame, \(\mathring{G}_{\mu\nu}\) is the standard Einstein tensor, and where commas represent regular partial derivatives. For the limit where \(G_{\rm Tele}=0\), we recover regular Horndeski gravity.
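To orient the reader, a minimal illustrative choice of these functions (not a new result of this work, and with \(M_{\rm Pl}\) denoting the reduced Planck mass in a normalization that is purely conventional here) is

\[G_{\rm Tele}=0\,,\qquad G_{4}=\frac{M_{\rm Pl}^{2}}{2}\,,\qquad G_{2}=X-V(\phi)\,,\qquad G_{3}=G_{5}=0\,,\]

for which \(\mathcal{L}_{4}=\frac{M_{\rm Pl}^{2}}{2}\left(-T+B\right)=\frac{M_{\rm Pl}^{2}}{2}\mathring{R}\) and the action reduces to general relativity with a minimally coupled canonical scalar field. Similarly, keeping only a nontrivial \(G_{\rm Tele}(T)\) with all other functions switched off gives an \(f(T)\)-type Lagrangian.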
### Background Cosmology
By varying the action with respect to the tetrad, spin connection and scalar field, we can determine the field equations, as presented in Ref. [53]. The increased generality of the theory produces vastly larger field equations. For this reason, we consider only the equations of motion for a (maximally symmetric) flat FLRW cosmology
\[\mathrm{d}s^{2}=-N(t)^{2}\mathrm{d}t^{2}+a(t)^{2}(\mathrm{d}x^{2}+\mathrm{d}y ^{2}+\mathrm{d}z^{2})\,, \tag{28}\]
where \(N(t)\) is the lapse function (set to unity once the equations of motion are determined), and \(a(t)\) is the scale factor. To obtain the modified equations of motion, we consider the tetrad choice \(e^{a}_{\phantom{a}\mu}=\mathrm{diag}(N(t),a(t),a(t),a(t))\) which is compatible with the Weitzenbock gauge [30; 31].
Taking a variation with respect to the dynamical variables of \(N(t)\), \(a(t)\), and \(\phi(t)\), we obtain the equations of motion of the system for a flat homogeneous and isotropic background. This results in the Friedmann equation
\[\mathcal{E}_{\rm Tele}+\sum_{i=2}^{5}\mathcal{E}_{i}=0\,, \tag{29}\]
where
\[\mathcal{E}_{\rm Tele} =6H\dot{\phi}\tilde{G}_{6,I_{2}}+12H^{2}\tilde{G}_{6,T}+2X\tilde{ G}_{6,X}-\tilde{G}_{6}\,, \tag{30}\] \[\mathcal{E}_{2} =2XG_{2,X}-G_{2}\,,\] (31) \[\mathcal{E}_{3} =6X\dot{\phi}HG_{3,X}-2XG_{3,\phi}\,,\] (32) \[\mathcal{E}_{4} =-6H^{2}G_{4}+24H^{2}X(G_{4,X}+XG_{4,XX})-12HX\dot{\phi}G_{4,\phi X }-6H\dot{\phi}G_{4,\phi}\,,\] (33) \[\mathcal{E}_{5} =2H^{3}X\dot{\phi}\left(5G_{5,X}+2XG_{5,XX}\right)-6H^{2}X\left(3 G_{5,\phi}+2XG_{5,\phi X}\right)\,, \tag{34}\]
and
\[\mathcal{L}_{\rm Tele}=\tilde{G}_{6}(\phi,X,T,I_{2})\,, \tag{35}\]
which represents all the nonvanishing scalars for \(G_{\rm Tele}\), the Hubble parameter is defined as \(H=\dot{a}/a\), and dots denote derivatives with respect to cosmic time. The torsion scalar takes on the form \(T=6H^{2}\), while \(I_{2}=3H\dot{\phi}\) and \(X=\frac{1}{2}\dot{\phi}^{2}\), and commas denote partial derivatives. Taking a variation with respect to the scale factor leads to the second Friedmann equation
\[\mathcal{P}_{\rm Tele}+\sum_{i=2}^{5}\mathcal{P}_{i}=0\,, \tag{36}\]
where
\[\mathcal{P}_{\rm Tele} =-3H\dot{\phi}\check{G}_{6,I_{2}}-12H^{2}\check{G}_{6,T}-\frac{d }{dt}\Big{(}4H\check{G}_{6,T}+\dot{\phi}\,\check{G}_{6,I_{2}}\Big{)}+\check{G}_ {6}\,, \tag{37}\] \[\mathcal{P}_{2} =G_{2}\,,\] (38) \[\mathcal{P}_{3} =-2X\left(G_{3,\phi}+\ddot{\phi}G_{3,X}\right)\,,\] (39) \[\mathcal{P}_{4} =2\left(3H^{2}+2\dot{H}\right)G_{4}-12H^{2}XG_{4,X}-4H\dot{X}G_ {4,X}-8\dot{H}XG_{4,X}\] \[\qquad-8HX\dot{X}G_{4,XX}+2\left(\ddot{\phi}+2H\dot{\phi}\right) G_{4,\phi}+4XG_{4,\phi\phi}+4X\left(\ddot{\phi}-2H\dot{\phi}\right)G_{4,\phi X}\,,\] (40) \[\mathcal{P}_{5} =-2X\left(2H^{3}\dot{\phi}+2H\dot{H}\dot{\phi}+3H^{2}\ddot{\phi} \right)G_{5,X}-4H^{2}X^{2}\ddot{\phi}G_{5,XX}\] \[\qquad+4HX\left(\dot{X}-HX\right)G_{5,\phi X}+2\left[2\frac{d}{dt }\left(HX\right)+3H^{2}X\right]G_{5,\phi}+4HX\dot{\phi}G_{5,\phi\phi}\,. \tag{41}\]
Finally, the modified Klein-Gordon equation can be determined by taking a variation with respect to the scalar field, giving
\[\frac{1}{a^{3}}\frac{\mathrm{d}}{\mathrm{d}t}\Big{[}a^{3}(J+J_{\rm Tele}) \Big{]}=P_{\phi}+P_{\rm Tele}\,, \tag{42}\]
where the standard Horndeski terms appear as \(J\) and \(P_{\phi}\) which come from the Lagrangian terms \(\mathcal{L}_{i}\), where \(i=2,..,5\) and [72]
\[J =\dot{\phi}G_{2,X}+6HXG_{3,X}-2\dot{\phi}G_{3,\phi}+6H^{2}\dot{ \phi}\left(G_{4,X}+2XG_{4,XX}\right)-12HXG_{4,\phi X}\] \[\qquad+2H^{3}X\left(3G_{5,X}+2XG_{5,XX}\right)-6H^{2}\dot{\phi} \left(G_{5,\phi}+XG_{5,\phi X}\right)\,, \tag{43}\] \[P_{\phi} =G_{2,\phi}-2X\left(G_{3,\phi\phi}+\ddot{\phi}G_{3,\phi X}\right) +6\left(2H^{2}+\dot{H}\right)G_{4,\phi}\] \[\qquad+6H\left(\dot{X}+2HX\right)G_{4,\phi X}-6H^{2}XG_{5,\phi \phi}+2H^{3}X\dot{\phi}G_{5,\phi X}\,, \tag{44}\]
while \(J_{\rm Tele}\) and \(P_{\rm Tele}\) are new terms related to the teleparallel Horndeski, given by
\[J_{\rm Tele} =\dot{\phi}\tilde{G}_{6,X}\,, \tag{45}\] \[P_{\rm Tele} =-9H^{2}\tilde{G}_{6,I_{2}}+\tilde{G}_{6,\phi}-3\frac{\mathrm{d}}{\mathrm{d}t}\left(H\tilde{G}_{6,I_{2}}\right)\,. \tag{46}\]
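As an illustrative consistency check, under the minimal GR-like choice mentioned above (\(G_{\rm Tele}=0\), \(G_{4}=M_{\rm Pl}^{2}/2\), \(G_{2}=X-V(\phi)\), \(G_{3}=G_{5}=0\)), Eqs. (29), (36) and (42) reduce respectively to the familiar background equations

\[3M_{\rm Pl}^{2}H^{2}=\frac{1}{2}\dot{\phi}^{2}+V\,,\qquad M_{\rm Pl}^{2}\left(3H^{2}+2\dot{H}\right)=-\left(\frac{1}{2}\dot{\phi}^{2}-V\right)\,,\qquad\ddot{\phi}+3H\dot{\phi}+V_{,\phi}=0\,,\]

of a minimally coupled scalar field in GR (in the absence of additional matter).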
Interestingly, the arguments of \(\tilde{G}_{6}\) do not depend on \(T_{\rm ax}\) and \(T_{\rm ten}\) since they are zero for the flat FLRW metric. For this reason, we can simply write the contributions in terms of \(T_{\rm vec}\) since \(T=-(2/3)T_{\rm vec}=6H^{2}\) for this case. Naturally, one should not expect this to also be true at perturbative level [51]. Indeed, in this work we explore the cosmological perturbations of the teleparallel analogue of Horndeski gravity. This will open the way for deeper investigations into specific models of the framework that may lead to a better understanding of which models are most compatible with observations of the cosmic evolution of the Universe.
## III Cosmological perturbations
The goal of cosmological perturbation theory, in general, is to connect the physics of the early Universe to CMB anisotropies and large-scale structure and to provide the initial conditions for numerical simulations of structure formation. In this setting, the physical quantities can be decomposed into a homogeneous background, whose dependence is restricted only to cosmic time, and perturbative parts which depend on spacetime. Linear perturbations around a spatially flat FLRW spacetime in Horndeski gravity have been studied in [72]. Following that, our starting point is to establish the adequate form of the tetrad matrix and then proceed to the evaluation of the action for tensor, vector and scalar perturbations. In what follows, we work in the unitary gauge, where the scalar field perturbations vanish, i.e. \(\delta\phi=0\).
The first-order expansion of the metric can be parametrized as
\[g_{\mu\nu}=g^{0}_{\mu\nu}+\delta g_{\mu\nu}\,, \tag{47}\]
where \(g^{0}_{\mu\nu}\) in our case is the spatially flat FLRW background, while \(\delta g_{\mu\nu}\) can be expressed in an irreducible form as
\[\delta g_{\mu\nu}\rightarrow\begin{pmatrix}-2\alpha&aB_{i}+a^{2}\partial_{i} \beta\\ aB_{i}+a^{2}\partial_{i}\beta&2a^{2}\left[\zeta\delta_{ij}+\tfrac{1}{2}h_{ij}+ 2\partial_{(i}h_{j)}\right]\end{pmatrix}, \tag{48}\]
where \(h_{ij}\) is a symmetric, traceless (\(h_{ij}\delta^{ij}=0\)) and divergenceless (\(\partial^{i}h_{ij}=0\)) tensor perturbation, \(B_{i}=b_{i}+\beta_{i}\), where \(b_{i}\), \(\beta_{i}\) and \(h_{i}\) are solenoidal vector perturbations (i.e. \(\partial_{i}X^{i}=0\)), and \(\alpha\), \(\beta\) as well as \(\zeta\) are scalar perturbations. According to Eq. (1) there are many tetrads that could reproduce the above metric. In what follows, we choose to work with
\[e^{A}{}_{\mu}=\bar{e}^{A}{}_{\mu}+\delta e^{A}{}_{\mu}\,, \tag{49}\]
where \(\bar{e}^{A}{}_{\mu}=\text{diag}(N,a,a,a)\) is the background flat FLRW tetrad and \(\delta e^{A}{}_{\mu}\) is the perturbed tetrad that is given by
\[\delta e^{A}{}_{\mu}\rightarrow\begin{pmatrix}\alpha&-a\beta_{i}\\ \\ \delta^{I}_{i}(\partial^{i}\beta+b^{i})&a\delta^{Ii}\left[\zeta\delta_{ij}+ \tfrac{1}{2}h_{ij}+\tfrac{1}{8}h_{ik}h_{kj}+2\partial_{(i}h_{j)}\right]\end{pmatrix}, \tag{50}\]
where capital Latin indices indicate components in the local Lorentz spacetime, while small ones indicate the associated 3-dimensional spatial part. For further details one can check Refs. [30; 58].
### Tensor perturbation
We consider the tensor perturbations of Eq. (50) to be described by two functions of spacetime, \(h_{+}(t,x,y,z)\) and \(h_{\times}(t,x,y,z)\), as the two components of a symmetric, traceless and divergenceless tensor \(h_{ij}\). For simplicity, we assume that the perturbations lie in the x-y plane, so that the z-axis is along the direction of the wavevector \(\vec{k}\), meaning \(\hat{k}=\hat{z}\). Setting \(\alpha,\beta,\zeta\) and \(b_{i},\beta_{i},h_{i}\) to zero, the tetrad (49) takes the form
\[e^{A}{}_{\mu}\rightarrow\begin{pmatrix}1&0&0&0\\ \\ 0&-a-\tfrac{1}{2}ah_{+}-\tfrac{1}{8}a\left(h_{+}^{2}+h_{\times}^{2}\right)&- \tfrac{1}{2}ah_{\times}&0\\ \\ 0&-\tfrac{1}{2}ah_{\times}&-a+\tfrac{1}{2}ah_{+}-\tfrac{1}{8}a\left(h_{+}^{2}+ h_{\times}^{2}\right),&0\\ \\ 0&0&0&a\end{pmatrix}, \tag{51}\]
where \(a\) stands for the scale factor. The associated line element obtained from Eq. (1) becomes

\[ds^{2}=-N^{2}\mathrm{d}t^{2}+a^{2}(\delta_{ij}+h_{ij})\mathrm{d}x^{i}\mathrm{d}x^{j}\,, \tag{52}\]
where the background value of the lapse function is considered to be unity. The scalar invariants of the decomposition of the torsion tensor become
\[T_{\mathrm{ax}}= \quad 0\, \tag{53}\] \[T_{\mathrm{vec}}= \,-\frac{9\dot{a}^{2}}{a^{2}}\,\] (54) \[T_{\mathrm{ten}}= \,\frac{3}{4a^{2}}\,\left[\,(\mathbf{\nabla}h_{+})^{2}+(\mathbf{\nabla}h _{\times})^{2}-a^{2}\dot{h}_{+}^{2}-a^{2}\dot{h}_{\times}^{2}\,\right]\,. \tag{55}\]
Taking these into account, the action (22) becomes up to second-order perturbations,
\[S_{\mathrm{T}}^{(2)}=\int\mathrm{d}t\mathrm{d}^{3}x\ \frac{a^{3}}{4}\left[\, \mathcal{G}_{T}\dot{h}_{ij}^{2}-\frac{\mathcal{F}_{T}}{a^{2}}(\mathbf{\nabla}h_{ ij})^{2}\,\right], \tag{56}\]
where
\[\mathcal{G}_{\mathrm{T}}=2\left(G_{4}-2XG_{4,X}+XG_{5,\phi}-HX \dot{\phi}G_{5,X}+2XG_{\mathrm{Tele},J_{8}}+\frac{X}{2}G_{\mathrm{Tele},J_{5}}- G_{\mathrm{Tele},T}\right), \tag{57}\] \[\mathcal{F}_{\mathrm{T}}=2\left(G_{4}-XG_{5,\phi}-X\ddot{\phi}G_ {5,X}-G_{\mathrm{Tele},T}\right), \tag{58}\]
where \(G_{i,A}\) denotes the derivative of \(G_{i}\) with respect to \(A\). Notice that in the limiting case where \(G_{\mathrm{Tele}}\) vanishes, Eqs. (57) and (58) reduce to the standard Horndeski results for the perturbed action [72].
This action represents how tensor perturbations propagate on a flat FLRW background cosmology. By taking a variation of Eq. (56), we find the propagation equation of gravitational waves, which confirms the result in Ref. [51]. We can similarly use this result in the standard \(\alpha_{i}\) parametrization of cosmological perturbations, as we do later on. Specifically, the squared speed of the gravitational waves is given by
\[c_{\mathrm{T}}^{2}=\frac{\mathcal{F}_{\mathrm{T}}}{\mathcal{G}_{\mathrm{T}}}\,, \tag{59}\]
which is not equal to unity for arbitrary \(G_{i}\)'s. In addition, one can see from the action (56) that the following conditions should hold
\[\mathcal{F}_{\mathrm{T}}>0\quad\text{and}\quad\mathcal{G}_{\mathrm{T}}>0\,, \tag{60}\]
in order to avoid ghost and gradient instabilities.
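As an illustration of these conditions, for the minimal choice \(G_{4}=M_{\rm Pl}^{2}/2\), \(G_{5}=0\) and a teleparallel sector depending only on \((\phi,X,T,I_{2})\), so that \(G_{{\rm Tele},J_{5}}=G_{{\rm Tele},J_{8}}=0\), Eqs. (57) and (58) give

\[\mathcal{G}_{\rm T}=\mathcal{F}_{\rm T}=2\left(G_{4}-G_{{\rm Tele},T}\right)\,,\qquad c_{\rm T}^{2}=1\,,\]

so that gravitational waves propagate at the speed of light while the effective Planck mass is shifted by \(G_{{\rm Tele},T}\); the stability conditions then simply require \(G_{4}-G_{{\rm Tele},T}>0\).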
### Vector perturbation
Another important component of the cosmological perturbation decomposition is the vector perturbations, which are divergence-free in nature. Usually, vector perturbations decay in an expanding background cosmology, unless they are driven by anisotropic stress. We do not expect anything different to happen here, but to be certain we study them in detail. Fixing \(\alpha\), \(\beta\), \(\zeta\) and \(h_{ij}\) to vanish in (50), the tetrad becomes
\[e^{A}_{\ \mu}\rightarrow\left(\begin{array}{cc}1&-a\beta_{i}\\ \\ \delta_{i}^{I}b^{i}\ a\delta^{Ii}\left[\delta_{ij}+2\partial_{(i}h_{j)}\right] \end{array}\right). \tag{61}\]
The scalar invariants of the theory for the tetrad (61) take the form
\[T_{\rm ax}= \frac{1}{9a^{2}}\left[\mathbf{\nabla}\times\left(\mathbf{\beta}-\mathbf{b} \right)\right]^{2}, \tag{62}\] \[T_{\rm vec}= -\frac{9\hat{a}^{2}}{a^{2}}+\frac{1}{a^{2}}\Bigg{\{}9\hat{a}^{2} \left(2\mathbf{b}\mathbf{\beta}+\mathbf{\beta}^{2}\right)-6\hat{a}\left[a\left[\mathbf{b}\dot{ \mathbf{\beta}}-2(\mathbf{\nabla}\times\mathbf{h})(\mathbf{\nabla}\times\dot{\mathbf{h}})\right]+( \mathbf{\nabla}\times\mathbf{b})(\mathbf{\nabla}\times\mathbf{h})\right]-\] (63) \[-6\hat{a}(\mathbf{b}+\mathbf{\beta})\left(\mathbf{\nabla}^{2}\mathbf{h}-a\dot{ \mathbf{\beta}}\right)+\left(\mathbf{\nabla}^{2}\mathbf{h}-a\dot{\mathbf{\beta}}\right)^{2} \Bigg{\}},\] \[T_{\rm ten}= -\frac{1}{a^{2}}\left[a^{2}\dot{\mathbf{\beta}}^{2}-(\mathbf{\nabla} \times\mathbf{b})^{2}-(\mathbf{\nabla}\times\mathbf{b})(\mathbf{\nabla}\times\mathbf{\beta})-( \mathbf{\nabla}\times\mathbf{\beta})^{2}+3a(\mathbf{\nabla}\times\dot{\mathbf{h}})(\mathbf{b}-a \mathbf{h})+a\dot{\mathbf{\beta}}\mathbf{\nabla}^{2}\mathbf{h}+(\nabla^{2}\mathbf{h})^{2}\right]. \tag{64}\]
We expand the action keeping up to quadratic order terms in the perturbations and we get
\[S_{\rm V}^{(2)}= \int{\rm d}t{\rm d}^{3}xa^{3}\Bigg{[}\frac{A_{1}}{a^{2}}(\mathbf{ \nabla}^{2}\mathbf{h})^{2}+A_{2}\dot{\mathbf{\beta}}^{2}+A_{3}(\mathbf{\nabla}\times\dot{ \mathbf{h}})^{2}-\frac{A_{3}}{a}(\mathbf{\nabla}\times\dot{\mathbf{h}})(\mathbf{\nabla}\times \mathbf{b})+\frac{A_{4}}{a^{2}}(\mathbf{\nabla}\times\mathbf{b})^{2}+\frac{A_{5}}{a^{2}}( \mathbf{\nabla}\times\mathbf{\beta})^{2} \tag{65}\] \[+\frac{A_{6}}{a}(\mathbf{\nabla}\times\mathbf{h})(\mathbf{\nabla}\times\mathbf{ \beta})+\frac{A_{7}}{a}(\mathbf{\nabla}\times\mathbf{h})(\mathbf{\nabla}\times\dot{\mathbf{ \beta}})+\frac{A_{8}}{a^{2}}(\mathbf{\nabla}\times\mathbf{\beta})(\mathbf{\nabla}\times \mathbf{b})\Bigg{]}\,\]
where
\[A_{1}:= \frac{X}{18}\left(2XG_{\rm Tele,J_{6}}-6G_{\rm Tele,J_{3}}-5G_{ \rm Tele,J_{5}}-2G_{\rm Tele,J_{8}}\right)+G_{\rm Tele,T_{\rm vec}}, \tag{66}\] \[A_{2}:= \frac{2X}{9}\left(2XG_{\rm Tele,J_{6}}+3G_{\rm Tele,J_{3}}-5G_{ \rm Tele,J_{5}}-2G_{\rm Tele,J_{8}}\right)+G_{\rm Tele,T_{\rm vec}},\] (67) \[A_{3}:= X\left(-4G_{4,X}-2G_{5,X}H\dot{\phi}+2G_{5,\phi}+G_{\rm Tele,J_{5 }}+4G_{\rm Tele,J_{8}}\right)+2\left(G_{4}-G_{\rm Tele,T}\right),\] (68) \[A_{4}:= \frac{a}{18}\left[3X\left(-6G_{4X}-3G_{5X}H\dot{\phi}+3G_{5\phi }+3G_{\rm Tele,J5}+6G_{\rm Tele,J8}+2G_{\rm Tele,J10}\right)+9G_{4}-9G_{\rm Tele,T}+2G_{\rm Tele,T_{\rm vec}}\right]\,\] (69) \[A_{5}:= \frac{X}{2}\left(-2G_{4,X}-G_{5,X}H\dot{\phi}+G_{5,\phi}+2G_{ \rm Tele,J_{5}}\right)+\frac{1}{2}G_{4}-\frac{1}{2}G_{\rm Tele,T}+\frac{1}{9}G _{\rm Tele,T_{\rm ax}}-\frac{2X}{3}G_{\rm Tele,J_{10}},\] (70) \[A_{6}:= -2\frac{d}{dt}(2XG_{4X}-XG_{5\phi}-G_{4})-2\dot{\phi}XG_{5,X} \dot{H}-\dot{\phi}G_{\rm Tele,I_{2}}-4XG_{5.X}H^{2}\dot{\phi}\] (71) \[-H\Bigg{\{}2X\left[4G_{4,X}+\left(2XG_{5,XX}+3G_{5,X}\right)\ddot {\phi}+2XG_{5,\phi X}-2G_{5,\phi}\right]-4G_{4}+4G_{\rm Tele,T}-6G_{\rm Tele,T_ {\rm vec}}\Bigg{\}},\] \[A_{7}:= \frac{X}{9}\left(-36G_{4,X}-18G_{5,X}H\dot{\phi}+18G_{5,\phi}-4XG _{\rm Tele,J_{6}}+3G_{\rm Tele,J_{3}}+10G_{\rm Tele,J_{5}}+4G_{\rm Tele,J_{8}} \right)+\] (72) \[+2\left(G_{4}-G_{\rm Tele,T}+G_{\rm Tele,T_{\rm vec}}\right),\] \[A_{8}:= X\left(-2G_{4,X}-G_{5,X}H\dot{\phi}+G_{5,\phi}+G_{\rm Tele,J_{5 }}\right)+G_{4}-G_{\rm Tele,T}-\frac{2}{9}G_{\rm Tele,T_{\rm ax}}+\frac{X}{3}G_{ \rm Tele,J_{10}}. \tag{73}\]
The coefficients of \(\mathbf{\beta}^{2},\mathbf{\beta}\mathbf{b}\) and \(\mathbf{\nabla}\times\mathbf{h}\) vanish thanks to the background equations. If we impose the conditions \(A_{1}=0\) and \(A_{3}=0\), we are left with one dynamical variable \(\mathbf{\beta}\) and two auxiliary fields \(\mathbf{h}\) and \(\mathbf{b}\). By varying with respect to these two auxiliary fields, we obtain the following constraint equations:
\[2\frac{A_{4}}{a^{2}}\left(\mathbf{\nabla}\times\mathbf{b}\right)+\frac{A_{ 8}}{a^{2}}\left(\mathbf{\nabla}\times\mathbf{\beta}\right)=0, \tag{74}\] \[\frac{A_{6}}{a}\left(\mathbf{\nabla}\times\mathbf{\beta}\right)+\frac{A_{7} }{a}\left(\mathbf{\nabla}\times\dot{\mathbf{\beta}}\right)=0. \tag{75}\]
Using Eqs. (74) and (75) we can rewrite the action as given below:
\[S_{\rm V}^{(2)}=\int\mathrm{d}t\mathrm{d}^{3}xa^{3}\Bigg{[}\mathcal{G}_{\rm V} \dot{\mathbf{\beta}}^{2}-\frac{\mathcal{F}_{V}}{a^{2}}(\mathbf{\nabla}\times\mathbf{\beta})^ {2}\Bigg{]}, \tag{76}\]
where
\[\mathcal{G}_{\rm V}=A_{2}\quad\text{and}\quad\mathcal{F}_{\rm V}=\frac{\mathrm{ A}_{8}^{2}}{4\mathrm{A}_{4}}-\mathrm{A}_{5}. \tag{77}\]
From Eq. (76) we can see that, in order to avoid ghost and gradient instabilities, the following inequalities should be satisfied
\[\mathcal{F}_{\rm V}>0,\qquad\mathcal{G}_{\rm V}>0. \tag{78}\]
As in the curvature-based Horndeski cosmology case, these perturbations are found to contribute little to the resulting cosmology, and by and large, do not impact the evolution of the Universe after inflation.
### Scalar perturbation
As with the tensor and vector perturbations, setting the vectors \(b_{i}\), \(\beta_{i}\), \(h_{i}\) and the tensor \(h_{ij}\) to zero, the tetrad (50) takes the form
\[e^{A}{}_{\mu}\rightarrow\left(\begin{array}{cc}1+\alpha&0\\ a\delta_{i}^{I}\partial^{i}\beta&a\delta_{i}^{I}(1+\zeta)\end{array}\right). \tag{79}\]
As we will see later on, we can use the constraint equations to remove \(\alpha\) and \(\beta\) and thus get the quadratic action only in terms of a single variable, \(\zeta\). In the case of scalar perturbations, axial, vectorial and tensorial parts of the torsion tensor are
\[T_{\rm ax}= \,0, \tag{80}\] \[T_{\rm vec}= -\frac{9\dot{a}^{2}}{a^{2}}+\frac{6\dot{a}}{a^{2}}\left[3\dot{a} \alpha+a(\mathbf{\nabla}^{2}\beta-3\dot{\zeta})\right]+\frac{1}{a^{2}}\Big{\{}-27 \alpha^{2}\dot{a}^{2}+6\,a\dot{a}\left[3\mathbf{\nabla}\beta\,\mathbf{\nabla}\zeta-2 \alpha(\mathbf{\nabla}^{2}\beta-3\,\dot{\zeta})\,\right]\,+\] (81) \[+\Big{[}\left(2\mathbf{\nabla}\zeta+\mathbf{\nabla}\alpha\right)^{2}-a^{ 2}(\mathbf{\nabla}^{2}\beta-3\dot{\zeta})^{2}\,\Big{]}\,\Big{\}},\] \[T_{\rm ten}= \,\bigg{[}\frac{(\mathbf{\nabla}\zeta-\mathbf{\nabla}\alpha)^{2}}{a^{2}} -(\mathbf{\nabla}^{2}\beta)^{2}\bigg{]}\,. \tag{82}\]
After plugging perturbed tetrad Eq. (79) into the action and expanding it to the second order, we get
\[S_{\rm S}^{(2)}=\int\mathrm{d}t\mathrm{d}^{3}xa^{3}\Bigg{[}-3\mathcal{A}\dot{\zeta}^{2}+\frac{\mathcal{B}}{a^{2}}(\mathbf{\nabla}\zeta)^{2}+\Sigma\alpha^{2}-2\Theta\alpha\mathbf{\nabla}^{2}\beta+2\mathcal{A}\dot{\zeta}\mathbf{\nabla}^{2}\beta+6\Theta\alpha\dot{\zeta}-2\mathcal{C}\alpha\frac{\mathbf{\nabla}^{2}}{a^{2}}\zeta\Bigg{]}, \tag{83}\]
where the new coefficients
\[\mathcal{A}:= 2\Big{[}G_{4}-2XG_{4,X}+XG_{5,\phi}-G_{\text{Tele,T}}+\frac{3}{2}( G_{\text{Tele,}_{T_{\text{vec}}}}-XG_{\text{Tele,}I_{2}I_{2}})-3H^{2}(4G_{\text{Tele,}TT}-12G_{ \text{Tele,}TT_{\text{vec}}}\] \[+9G_{\text{Tele,}T_{\text{vec}}T_{\text{vec}}})-H(XG_{5,X}+6G_{ \text{Tele,}TI_{2}}-9G_{\text{Tele,}T_{\text{vec}}I_{2}})\dot{\phi}\Big{]}, \tag{84}\] \[\mathcal{B}:= \frac{2}{9}\Big{(}9G_{4}-9XG_{5,\phi}-6XG_{\text{Tele,}J_{3}}-5XG _{\text{Tele,}J_{5}}+2X^{2}G_{\text{Tele,}J_{6}}-2XG_{\text{Tele,}J_{8}}-9G_{ \text{Tele,}T}+18G_{\text{Tele,}T_{\text{vec}}}\] \[-9XG_{5,X}\ddot{\phi}\Big{)},\] (85) \[\mathcal{C}:= 2\big{(}G_{4}-2XG_{4,X}+XG_{5,\phi}-HXG_{5,X}\dot{\phi}-G_{\text {Tele,T}}+G_{\text{Tele,}T_{\text{vec}}}\big{)}\] \[+\frac{X}{9}(3G_{\text{Tele,}J_{3}}+10G_{\text{Tele,}J_{5}}-4XG_{ \text{Tele,}J_{6}}+4G_{\text{Tele,}J_{8}})\,,\] (86) \[\Sigma:= X\,(G_{\text{Tele,}X}+2XG_{\text{Tele,}XX}+2XG_{2,XX}+G_{2,X}-2XG _{3,\phi X}-2G_{3,\phi})+3H\dot{\phi}\big{(}4XG_{\text{Tele,}XI_{2}}+G_{\text {Tele,}I_{2}}\] \[+2X^{2}G_{3,XX}+4XG_{3,X}-4X^{2}G_{4,\phi XX}-10XG_{4,\phi X}-2G _{4,\phi}\big{)}+3H^{2}\big{(}12XG_{\text{Tele,}I_{2}}+8XG_{\text{Tele,}XT}\] \[+2G_{\text{Tele,}T}-2G_{4}-3G_{\text{Tele,}T_{\text{vec}}}+8X^{3} G_{4,XXX}+32X^{2}G_{4,XX}+14XG_{4,X}-12XG_{\text{Tele,}XT_{\text{vec}}}-4X^{3}G_{5,\phi XX}\] \[-18X^{2}G_{5,\phi X}-12XG_{5,\phi}\big{)}+2H^{3}\dot{\phi}\left(3 6G_{\text{Tele,}TI_{2}}-54G_{\text{Tele,}T_{\text{vec}}I_{2}}+2X^{3}G_{5,XXX }+13X^{2}G_{5,XX}+15XG_{5,X}\right)\] \[+18H^{4}\left(-12G_{\text{Tele,}TT_{\text{vec}}}+4G_{\text{Tele, }TT}+9G_{\text{Tele,}T_{\text{vec}}T_{\text{vec}}}\right),\] (87) \[\Theta:= -6H^{3}(4G_{\text{Tele,}TT}-12G_{\text{Tele,}TT_{\text{vec}}}+9G_ {\text{Tele,}T_{\text{vec}}T_{\text{vec}}})+H(2G_{4}-8XG_{4,X}-8X^{2}G_{4,XX }+6XG_{5,\phi}\] \[+4X^{2}G_{5,\phi X}-2G_{\text{Tele,}T}+3G_{\text{Tele,}T_{\text{vec }}}-6XG_{\text{Tele,}I_{2}I_{2}}-4XG_{\text{Tele,}XT}+6XG_{\text{Tele,}XT_{ \text{vec}}})\] \[-H^{2}(5XG_{5,X}+2X^{2}G_{5,XX}+18G_{\text{Tele,}TI_{2}}-27G_{ \text{Tele,}T_{\text{vec}}}I_{2})\dot{\phi}\] \[-(XG_{3,X}-G_{4,\phi}-2XG_{4,\phi X}+\frac{1}{2}G_{\text{Tele,} I_{2}}+XG_{\text{Tele,}XI_{2}})\dot{\phi}\,, \tag{88}\]
have been introduced. The coefficients of \(\zeta^{2}\) and \(\alpha\zeta\) vanish due to the background equations. Checking the limiting case, it is easy to see that if \(G_{\text{Tele}}\to 0\) we recover the same results for the action and its coefficients as in the Horndeski case of Ref. [72]. To be more precise, if we consider the vanishing of the teleparallel terms, we get \(\mathcal{A}=\mathcal{C}=\mathcal{G}_{\text{T}}\) and \(\mathcal{B}=\mathcal{F}_{\text{T}}\).
By varying the action (83) with respect to \(\alpha\) and \(\beta\), one obtains a set of equations
\[\Sigma\alpha-\Theta\mathbf{\nabla}^{2}\beta+3\Theta\dot{\zeta}- \mathcal{C}\frac{\mathbf{\nabla}^{2}}{a^{2}}\zeta=0\,, \tag{89}\] \[\Theta\alpha-\mathcal{A}\dot{\zeta}=0\,. \tag{90}\]
Using these equations, we can eliminate \(\alpha\) and \(\beta\) from the action (83) and rewrite it as
\[S_{\text{S}}^{(2)}=\int\text{d}t\text{d}^{3}xa^{3}\Bigg{[}\mathcal{G}_{\mathcal{ S}}\dot{\zeta}^{2}-\frac{\mathcal{F}_{\mathcal{S}}}{a^{2}}(\mathbf{\nabla}\zeta)^{2} \Bigg{]}, \tag{91}\]
where the new coefficients are
\[\mathcal{G}_{\text{S}}=3\mathcal{A}+\frac{\Sigma\mathcal{A}^{2}}{ \Theta^{2}}\,, \tag{92}\] \[\mathcal{F}_{\text{S}}=\frac{1}{a}\frac{d}{dt}\Bigg{(}\frac{a \mathcal{A}\mathcal{C}}{\Theta}\Bigg{)}-\mathcal{B}\,. \tag{93}\]
We can also express the \(\mathcal{A}\), \(\mathcal{B}\) and \(\mathcal{C}\) coefficients in terms of \(\mathcal{F}_{\text{T}}\), \(\mathcal{G}_{\text{T}}\) and \(G_{\text{Tele}}\) as
\[\mathcal{A} =\mathcal{G}_{\text{T}}+f_{1}\left(G_{\text{Tele}}\right), \tag{94}\] \[\mathcal{B} =\mathcal{F}_{\text{T}}+f_{2}\left(G_{\text{Tele}}\right),\] (95) \[\mathcal{C} =\mathcal{G}_{\text{T}}+f_{3}\left(G_{\text{Tele}}\right). \tag{96}\]
where
\[f_{1}\left(G_{\rm Tele}\right) =3G_{\rm Tele,T_{\rm wee}}-X\left(G_{\rm Tele,J_{5}}+G_{\rm Tele,J_{8} }+3G_{\rm Tele,I_{2}I_{2}}\right)+6H\dot{\phi}\left(3G_{\rm Tele,T_{\rm wee}I_{2 }}-2G_{\rm Tele,TI_{2}}\right)+ \tag{97}\] \[+6H^{2}\left(12G_{\rm Tele,TT_{\rm wee}}-4G_{\rm Tele,TT}-9G_{\rm Tele,T_{\rm wee}T_{\rm wee}}\right),\] \[f_{2}\left(G_{\rm Tele}\right) =\frac{2}{9}\left(18G_{\rm Tele,T_{\rm wee}}-6XG_{\rm Tele,J_{3}}-5XG _{\rm Tele,J_{5}}-2XG_{\rm Tele,J_{8}}+2X^{2}G_{\rm Tele,J_{6}}\right),\] (98) \[f_{3}\left(G_{\rm Tele}\right) =\frac{1}{9}\left(18G_{\rm Tele,T_{\rm wee}}+3XG_{\rm Tele,J_{3}}+XG _{\rm Tele,J_{5}}-32XG_{\rm Tele,J_{8}}-4X^{2}G_{\rm Tele,J_{6}}\right). \tag{99}\]
We can write \(\mathcal{G}_{S}\) and \(\mathcal{F}_{S}\) in terms of \(\mathcal{G}_{T}\) and \(\mathcal{F}_{T}\) as follows
\[\mathcal{G}_{\rm S} =3\mathcal{G}_{T}+\frac{\Sigma}{\Theta^{2}}\left[\mathcal{G}_{ \rm T}+f_{1}\left(G_{\rm Tele}\right)\right]^{2}+3f_{1}(G_{\rm Tele})\,, \tag{100}\] \[\mathcal{F}_{\rm S} =\frac{1}{a}\frac{d}{dt}\Bigg{\{}\frac{a\left[\mathcal{G}_{\rm T }+f_{1}\left(G_{\rm Tele}\right)\right]\left[\mathcal{G}_{\rm T}+f_{3}\left(G _{\rm Tele}\right)\right]}{\Theta}\Bigg{\}}-\mathcal{F}_{T}-f_{2}(G_{\rm Tele })\,. \tag{101}\]
The squared sound speed is given by \(c_{\rm S}^{2}=\mathcal{F}_{\rm S}/\mathcal{G}_{\rm S}\), and the absence of ghost and gradient instabilities requires
\[\mathcal{F}_{\rm S}>0,\qquad\mathcal{G}_{\rm S}>0. \tag{102}\]
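As a check of these conditions, for the minimal GR-like choice used above (\(G_{\rm Tele}=0\), \(G_{2}=X-V(\phi)\), \(G_{4}=M_{\rm Pl}^{2}/2\), \(G_{3}=G_{5}=0\)) one finds \(\mathcal{A}=\mathcal{B}=\mathcal{C}=M_{\rm Pl}^{2}\), \(\Theta=HM_{\rm Pl}^{2}\) and \(\Sigma=X-3H^{2}M_{\rm Pl}^{2}\), so that Eqs. (92) and (93) reduce to

\[\mathcal{G}_{\rm S}=\frac{X}{H^{2}}=M_{\rm Pl}^{2}\epsilon\,,\qquad\mathcal{F}_{\rm S}=M_{\rm Pl}^{2}\epsilon\,,\qquad c_{\rm S}^{2}=1\,,\]

with \(\epsilon=-\dot{H}/H^{2}\), so both conditions are satisfied whenever \(\epsilon>0\).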
These considerations are essential for understanding the speed at which scalar perturbations propagate from the early Universe which can impact the predicted size of baryonic acoustic oscillations.
## IV Applications
The results we found in the above analysis can be used in several applications in cosmology, especially in the early Universe. In order to understand the formation and evolution of large-scale structure one has to understand the nature of cosmological perturbations. In this section, we will discuss the power spectrum of the perturbations above, and we will provide the form of the alpha parameters that can be used to distinguish between the concordance model and other alternatives.
### Primordial power spectrum
The seeds of all structure in the Universe are considered to be the primordial fluctuations, whose origin is most probably related to inflation. In the very early Universe, as the scale factor grew quasi-exponentially, quantum fluctuations of the inflaton field were stretched outside the horizon and froze; at later stages they re-entered the horizon and set the initial conditions for the formation of large-scale structure. These fluctuations are usually described by their power spectrum, which has both scalar and tensor modes. That is what we discuss in this section.
_Tensor perturbations._ The quadratic action (56) in tensor perturbations can be re-written using the canonical variables
\[\mathrm{d}y_{\rm T}:=\frac{\mathcal{F}_{\rm T}^{1/2}}{a\mathcal{G}_{\rm T}^{1/2}}\mathrm{d}t\,,\;\;z_{\rm T}:=\frac{a}{2}(\mathcal{F}_{\rm T}\mathcal{G}_{\rm T})^{1/4}\,,\;\;v_{ij}(y_{\rm T},\mathbf{x}):=z_{\rm T}h_{ij}\,, \tag{103}\]
as
\[\mathcal{S}_{\rm T}^{(2)}=\int\mathrm{d}y_{\rm T}\mathrm{d}^{3}x\left[(v_{ij}^{\prime})^{2}-(\mathbf{\nabla}v_{ij})^{2}+\frac{z_{\rm T}^{\prime\prime}}{z_{\rm T}}v_{ij}^{2}\right]\,, \tag{104}\]
where prime denotes differentiation with respect to \(y_{\rm T}\). Varying this action with respect to \(v_{ij}\) and solving its equation, we see that at superhorizon scales we get
\[v_{ij}\propto z_{\rm T}\,,\quad v_{ij}\propto z_{\rm T}\int\frac{\mathrm{d}y_{ \rm T}}{z_{\rm T}^{2}}\,, \tag{105}\]
or in terms of the non-canonical variables
\[h_{ij}=\text{const}\,,\quad h_{ij}=\int^{t}\frac{\text{d}t^{\prime}}{a^{3}{\cal G }_{\text{T}}}\,. \tag{106}\]
To evaluate the power spectral density, we assume that
\[\epsilon:=-\frac{\dot{H}}{H^{2}}\simeq\text{const}\,,\ f_{\text{T}}:=\frac{ \dot{\cal F}_{\text{T}}}{H{\cal F}_{\text{T}}}\simeq\text{const}\,,\ g_{\text{T}}:= \frac{\dot{\cal G}_{\text{T}}}{H{\cal G}_{\text{T}}}\simeq\text{const}\,. \tag{107}\]
In order for the canonical time coordinate to run from \(-\infty\) to \(0\) as the Universe expands, we have to impose the condition
\[\epsilon+\frac{f_{\text{T}}-g_{\text{T}}}{2}<1\,, \tag{108}\]
while in order for the second solution in Eq. (105)-(106) to decay, we have to assume that
\[\epsilon-g_{\text{T}}<3\,. \tag{109}\]
Expanding the tensor perturbations in terms of the eigenfunctions \(e^{i\mathbf{k}\cdot\mathbf{x}}\) of the Laplacian and the polarization tensor \(\text{e}_{ij}\) in the Fourier space, we can solve the mode functions equation to get
\[v_{ij}=\frac{\sqrt{\pi}}{2}\sqrt{-y_{\text{T}}}H^{(1)}_{\nu_{ \text{T}}}(-ky_{\text{T}})\text{e}_{ij}\,, \tag{110}\]
with \(H^{(1)}_{\nu_{\text{T}}}\) being the Hankel function of first kind (plus sign) and \(\nu_{\text{T}}\) being a positive scalar defined as
\[\nu_{\text{T}}:=\frac{3-\epsilon+g_{\text{T}}}{2-2\epsilon-f_{\text{T}}+g_{ \text{T}}}\,. \tag{111}\]
Thus, the power spectrum of the primordial tensor fluctuations becomes
\[{\cal P}_{\text{T}}=8\gamma_{\text{T}}\frac{{\cal G}_{\text{T}}^{1/2}}{{\cal F }_{\text{T}}^{3/2}}\frac{H^{2}}{4\pi^{2}}\Big{|}_{-ky_{\text{T}}=1}\,, \tag{112}\]
where
\[\gamma_{\text{T}}=2^{2\nu_{\text{T}}-3}\left|\frac{\Gamma(\nu_{ \text{T}})}{\Gamma(3/2)}\right|^{2}(1-\epsilon-\frac{f_{\text{T}}}{2}+\frac{ g_{\text{T}}}{2})\,.\]
We evaluate the power spectrum at the sound horizon exit, i.e. \(-ky_{\text{T}}=1\), since \(c_{\text{T}}\neq c\) in general. The tensor spectral index is given by
\[n_{\text{T}}=3-2\nu_{\text{T}}\,, \tag{113}\]
and the scale-invariant limit for tensor perturbations would be \(\nu_{\text{T}}=3/2\). From Eq. (113) we see that the gravitational wave spectrum could have a blue tilt if
\[n_{\text{T}}>0\Rightarrow 4\epsilon+3f_{\text{T}}-g_{\text{T}}<0\,. \tag{114}\]
Notice that the conditions (108) and (109) are not affected by this, so even if B-mode polarization were to be detected in the CMB, the theory could still be a good candidate. However, we should mention that the assumptions (107) may not always hold, even when the above conditions are satisfied.
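As a quick sanity check, in the limit in which the teleparallel modifications are negligible and \(\mathcal{F}_{\rm T}\simeq\mathcal{G}_{\rm T}\simeq{\rm const}\) (\(f_{\rm T}=g_{\rm T}=0\)), Eq. (111) gives \(\nu_{\rm T}=(3-\epsilon)/(2-2\epsilon)\) and hence

\[n_{\rm T}=3-2\nu_{\rm T}=-\frac{2\epsilon}{1-\epsilon}\simeq-2\epsilon\,,\]

recovering the standard red-tilted tensor spectrum of single-field slow-roll inflation in GR.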
_Scalar perturbations._ In order to canonically normalize the action (91) we define the following variables
\[\text{d}y_{\text{S}}:=\frac{{\cal F}_{\text{S}}^{1/2}}{a{\cal G}_{\text{S}}^{ 1/2}}\text{d}t\,,\ z_{\text{S}}:=\sqrt{2}a({\cal F}_{\text{S}}{\cal G}_{\text {S}})^{1/4}\,,\ u(y_{\text{S}},\mathbf{x}):=z_{\text{S}}\zeta\,. \tag{115}\]
Plugging them in the quadratic action we get
\[\mathcal{S}_{\rm S}^{(2)}=\frac{1}{2}\int\mathrm{d}y_{\rm S}\mathrm{d}^{3}x\left[( u^{\prime})^{2}-(\mathbf{\nabla}u)^{2}+\frac{z_{\rm S}^{\prime\prime}}{z_{\rm S}}u^{2} \right]\,, \tag{116}\]
where, as previously, prime denotes differentiation with respect to the canonical time variable, \(y_{\rm S}\).
Following the same procedure as with the tensor perturbations to evaluate the power spectrum in this case, we assume
\[\epsilon:=-\frac{\dot{H}}{H^{2}}\simeq\mathrm{const}\,,\;f_{\rm S}:=\frac{\dot{\mathcal{F}}_{\rm S}}{H\mathcal{F}_{\rm S}}\simeq\mathrm{const}\,,\;g_{\rm S}:=\frac{\dot{\mathcal{G}}_{\rm S}}{H\mathcal{G}_{\rm S}}\simeq\mathrm{const}\,. \tag{117}\]
Then, the power spectrum will be given by
\[\mathcal{P}_{\rm S}=\frac{\gamma_{\rm S}}{2}\frac{\mathcal{G}_{\rm S}^{1/2}}{ \mathcal{F}_{\rm S}^{3/2}}\frac{H^{2}}{4\pi^{2}}\Big{|}_{-ky_{\rm S}=1}\,, \tag{118}\]
where
\[\nu_{\rm S}:=\frac{3-\epsilon+g_{\rm S}}{2-2\epsilon-f_{\rm S}+g_{\rm S}}\]
and
\[\gamma_{\rm S}=2^{2\nu_{\rm S}-3}\left|\frac{\Gamma(\nu_{\rm S})}{\Gamma(3/2) }\right|^{2}(1-\epsilon-\frac{f_{\rm S}}{2}+\frac{g_{\rm S}}{2})\,.\]
The scalar spectral index is given by
\[n_{\rm S}=4-2\nu_{\rm S}\,, \tag{119}\]
and thus an exactly scale-invariant scalar spectrum (\(n_{\rm S}=1\)) should obey
\[\epsilon+\frac{3f_{\rm S}}{4}-\frac{g_{\rm S}}{4}=0\,. \tag{120}\]
If we consider the limit \(\epsilon,f_{\rm T},g_{\rm T},f_{\rm S},g_{\rm S}\ll 1\), we get \(\nu_{\rm T},\nu_{\rm S}\to 3/2\) and thus \(\gamma_{\rm T},\gamma_{\rm S}\to 1\). The tensor-to-scalar ratio is then given by
\[r=16\frac{\mathcal{F}_{\rm S}}{\mathcal{F}_{\rm T}}\frac{c_{\rm S}}{c_{\rm T} }\,. \tag{121}\]
Given that, one can consider any model of inflation that is a sub-class of the BDLS theory and find the form of \(r\) in terms of the slow-roll parameters.
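For instance, under the minimal GR-like choice used in the checks above (\(G_{\rm Tele}=0\), \(G_{4}=M_{\rm Pl}^{2}/2\), \(G_{2}=X-V\)), one finds \(\mathcal{F}_{\rm T}=M_{\rm Pl}^{2}\), \(\mathcal{F}_{\rm S}=M_{\rm Pl}^{2}\epsilon\) and \(c_{\rm S}=c_{\rm T}=1\), so that Eq. (121) reduces to the familiar single-field result

\[r=16\,\epsilon\,.\]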
### Alpha parameters
It has been proposed in [73] that perturbation dynamics can be solely described by four functions, \(\alpha_{i}\). Similarly to effective field theory methods, one can use observations to constrain the value of these four parameters, without specifying any particular model or initial conditions and thus test possible deviations from \(\Lambda\)CDM.
Given a specific background evolution, which can also be obtained merely by the observations [74; 75; 76; 77; 78; 79], and the value of the matter density today [80], one can fully determine the evolution of large-scale structure in the Universe. The physical meaning of these time-dependent functions is as follows:
* _Kineticity:_\(\alpha_{\rm K}\). Quantifies the kinetic energy of the curvature perturbations. It is essentially unconstrained by observations, since it has essentially no impact on the observables, while large values tend to suppress the sound speed of the perturbations. Contributions to \(\alpha_{\rm K}\) come from all of \(G_{2},G_{3},G_{4},G_{5}\) and \(G_{\rm Tele}\).
* _Braiding:_\(\alpha_{\rm B}\). Its presence signals a mixing between the kinetic term of the scalar field and the metric. Like \(\alpha_{\rm K}\), it only affects the curvature perturbations, giving rise to a quintessence-like force. It is the reason for dark energy clustering and receives contributions from \(G_{\rm Tele}\) and all \(G_{i}\) functions except the potential-like term \(G_{2}\).
* _Planck mass run rate:_\(\alpha_{\rm M}\). Describes the running of the effective Planck mass; a constant rescaling of the Planck mass would not affect the physics, but its time variation contributes to both curvature and tensor perturbations and also creates anisotropic stress in the Jordan frame. It receives contributions from the \(G_{4},G_{5}\) and \(G_{\rm Tele}\) terms.
* _Tensor speed excess:_\(\alpha_{\rm T}\). Describes the deviation of the propagation speed of gravitational waves from the speed of light, through the relation \(c_{\rm T}^{2}=1+\alpha_{\rm T}\); its value can be constrained by cosmological observations and experiments. Its contributions come from \(G_{4},G_{5}\) and \(G_{\rm Tele}\).
As already seen in [81; 82] the most general parametrization of the gravitational wave perturbation equation on a flat cosmological background in modified gravity takes the form
\[\ddot{h}_{ij}+(3+\alpha_{\rm M})H\dot{h}_{ij}+(1+\alpha_{\rm T})\frac{k^{2}}{a^{2}}h_{ij}=0\,, \tag{122}\]
where dots denote derivatives with respect to cosmic time and \(k\) is the wavenumber of the perturbation in Fourier space.
In BDLS theory the excess tensor speed parameter is given by [51]
\[M_{*}^{2}\alpha_{\rm T}\equiv 2X\left(2G_{4,X}-2G_{5,\phi}-(\ddot{\phi}-H\dot{\phi})G_{5,X}-2G_{\rm Tele,J_{8}}-\frac{1}{2}G_{\rm Tele,J_{5}}\right)\,, \tag{123}\]
and the effective Planck mass by
\[M_{*}^{2}\equiv 2\left(G_{4}-2XG_{4,X}+XG_{5,\phi}-HX\dot{\phi}G_{5,X}+2XG _{\rm Tele,J_{8}}+\frac{1}{2}XG_{\rm Tele,J_{5}}-G_{\rm Tele,T}\right)\,. \tag{124}\]
In addition, the Planck mass run rate parameter can be determined by
\[HM_{*}^{2}\alpha_{\rm M}\equiv\frac{{\rm d}M_{*}^{2}}{{\rm d}t}\,. \tag{125}\]
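As a consistency check of these definitions, for the minimal choice \(G_{\rm Tele}=0\), \(G_{4}=M_{\rm Pl}^{2}/2\) and \(G_{5}=0\), Eqs. (123)-(125) give

\[M_{*}^{2}=M_{\rm Pl}^{2}\,,\qquad\alpha_{\rm T}=0\,,\qquad\alpha_{\rm M}=0\,,\]

so that Eq. (122) reduces to the standard propagation equation for gravitational waves in GR.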
The remaining two alpha-parameters, i.e. braiding and kineticity, are expressed respectively as
\[\mathcal{A}H\alpha_{B}= 2\dot{\phi}(XG_{3X}-G_{4\phi}-2XG_{4\phi X})+2XH(4G_{4X}+8XG_{4XX}-4G_{5\phi}-4XG_{5\phi X}+ \tag{126}\] \[+3G_{\rm Tele,I_{2}I_{2}}+4G_{\rm Tele,XT}-6G_{\rm Tele,XT_{\rm vec}})+2\dot{\phi}XH^{2}(3G_{5X}+2XG_{5XX})+\] \[+\dot{\phi}(G_{\rm Tele,I_{2}}+12H^{2}G_{\rm Tele,TI_{2}}-18H^{2}G_{\rm Tele,T_{\rm vec}I_{2}}+2XG_{\rm Tele,XI_{2}}),\]
\[\mathcal{A}H^{2}\alpha_{K}= 2X(G_{2X}+2XG_{2XX}-2G_{3\phi}-2XG_{3\phi X})+12\dot{\phi}XH(G_{ 3X}+XG_{3XX}-3G_{4\phi X}-2XG_{4\phi XX})+ \tag{127}\] \[+12XH^{2}(G_{4X}+8XG_{4XX}+4X^{2}G_{4XXX}-G_{5\phi}-5XG_{5\phi X }-2X^{2}G_{5\phi XX})+\] \[+4\dot{\phi}XH^{3}(3G_{5X}+7XG_{5XX}+2X^{2}G_{5XXX})+2X(9H^{2}G_ {\rm Tele,I_{2}I_{2}}+2XG_{\rm Tele,XX}+6\dot{\phi}HG_{\rm Tele,XI_{2}}+G_{\rm Tele,X}).\]
Notice that in the \(G_{\rm Tele}\to 0\) limit, \(\mathcal{A}\to M_{*}^{2}\) and one can derive the respective expressions in the standard Horndeski cosmology [83]. The presence of the teleparallel contribution here, in terms of \(G_{\rm Tele}\), might seemingly complicate things, but it also adds interesting phenomenology to the structure of the theory. This can become evident from the fact that functions that were severely constrained in the standard Horndeski formulation, such as \(G_{4,X}\) and \(G_{5}\), can survive in this setup.
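As an illustration, for a minimally coupled quintessence field (\(G_{\rm Tele}=0\), \(G_{2}=X-V\), \(G_{4}=M_{\rm Pl}^{2}/2\), \(G_{3}=G_{5}=0\)) Eqs. (126) and (127) give

\[\alpha_{\rm B}=0\,,\qquad\alpha_{\rm K}=\frac{2X}{M_{\rm Pl}^{2}H^{2}}=\frac{\dot{\phi}^{2}}{M_{\rm Pl}^{2}H^{2}}\,,\]

which are the standard values quoted in the curvature-based literature, providing a check of the expressions above.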
Furthermore, one can rewrite the expressions in the action (91) in terms of the alpha parameters and obtain
\[\Theta=\frac{\mathcal{A}H}{2}(2-\alpha_{B})\ \text{and}\ \Sigma=- \frac{\mathcal{A}H^{2}}{2}(6-\alpha_{K}-6\alpha_{B}), \tag{128}\] \[\mathcal{G_{S}}=\frac{2\mathcal{A}D}{(2-\alpha_{B})^{2}},\ \text{ where}\ \text{D}= \alpha_{\rm K}+\frac{3}{2}\alpha_{\rm B}^{2}.\]
If we additionally define a parameter \(\alpha_{X}\) through \(\mathcal{C}H\alpha_{X}=d\mathcal{C}/dt\), the squared sound speed can be written in the following form
\[c_{s}^{2}=\frac{\mathcal{C}}{\mathcal{A}}\ \frac{(2-\alpha_{B})(H^{2}(1+\alpha_{X})- \dot{H})+\dot{\alpha_{B}}H}{DH^{2}}-\frac{\mathcal{B}}{\mathcal{A}}\ \frac{(2-\alpha_{B})^{2}}{2D}. \tag{129}\]
Here we introduced the new parameter \(\alpha_{X}\) using an approach similar to that for \(\alpha_{M}\) in Horndeski gravity, in order to give a more compact form to the expression for the squared sound speed. Also, as mentioned earlier, if we set the teleparallel terms to zero (\(G_{\rm Tele}\to 0\)), then \(\mathcal{A}=\mathcal{C}=\mathcal{G}_{T}\) and \(\mathcal{B}=\mathcal{F}_{T}\), which means that we recover the same expression for the squared sound speed as in the Horndeski case.
## V Conclusions
A homogeneous and isotropic FLRW Universe is the pedestal of modern cosmology, even if it has been argued otherwise [20]. It is on top of this background that the cosmological perturbations propagate and become the source of many of our observations.
In this paper, we study in detail the cosmological perturbations around a flat FLRW spacetime in the teleparallel analog of Horndeski gravity. It has been shown before [49] that BDLS theory presents a much richer phenomenology compared to standard curvature-based Horndeski gravity, because of the presence of the \(G_{\rm Tele}\) function, which depends on all those teleparallel quantities that do not exist in pure Riemannian geometry. Specifically, it was found in Ref. [51] that in BDLS theory terms like \(G_{4,X}\) and \(G_{5}(\phi,X)\) can survive, in contrast with the standard Horndeski formulation in Riemannian geometry, where they are severely constrained (if not eliminated) because they predict a different propagation speed for gravitational waves.
Here we present the scalar, vector and tensor perturbations in this theory and we show how the propagation speed of both tensor and scalar perturbations is affected by the presence of the extra teleparallel Lagrangian. In addition, after normalizing the quadratic scalar and tensor perturbations action, we switch to the Fourier space and calculate the power spectra of primordial fluctuations. We also express the tensor-to-scalar ratio in terms of the perturbations coefficients in the action. In this way, one could assume a specific teleparallel Horndeski model, fix the \(G_{i}\) and \(G_{\rm Tele}\) functions and calculate immediately the tensor-to-scalar ratio.
Last but not least, we present the perturbations in terms of the so-called _alpha_ parameters, that is the four parameters that one can constrain solely from observations without the need to specify any physical model. In this way, we could possibly discriminate between the concordance cosmological model and alternative descriptions.
As far as future projects are concerned, it would be very interesting to see how this analysis is affected in a cosmological spacetime with non-trivial spatial curvature. Furthermore, we plan to investigate possible evasions of the no-go theorem that manifests itself in the Riemannian Horndeski gravity regarding bouncing solutions.
###### Acknowledgements.
The work was supported by the PNRR-III-C9-2022-I9 call, with project number 760016/27.01.2023, by the Nazarbayev University Faculty Development Competitive Research Grant No. 11022021FD2926 and by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the "First Call for H.F.R.I. Research Projects to support Faculty members and Researchers and the procurement of high-cost research equipment grant" (Project Number: 2251). This article is also based upon work from COST Action CA21136 Addressing observational tensions in cosmology with systematics and fundamental physics (CosmoVerse) supported by COST (European Cooperation in Science and Technology).
arXiv:2302.01580v3 | Chikun Ding, Yu Shi, Jun Chen, Yuebing Zhou, Changqing Liu, Yuehua Xiao | 2023-02-03 | http://arxiv.org/abs/2302.01580v3

# Rotating BTZ-like black hole and central charges in Einstein-bumblebee gravity
###### Abstract
We obtain an exact rotating BTZ-like black hole solution by solving the corresponding gravitational field equations and the bumblebee motion equations in Einstein-bumblebee gravity theory. The result is presented for purely radial Lorentz symmetry violation and can only exist with a linear functional potential of the bumblebee field. This black hole has two horizons and an ergosphere which depend on the bumblebee coupling constant \(\ell\). The concepts of the area and volume of the horizon should be renewed in this LV spacetime due to the nontrivial contribution of the coupling between the bumblebee field and the Ricci tensor. Only in this way can the entropy-area relation, the first law of thermodynamics and the Smarr formula still be constructed. We also study the AdS/CFT correspondence of this black hole and find that the entropy product of its inner and outer horizons is universal. Thus the central charges of the dual CFT on the boundary can be obtained via the thermodynamic method, and they can reproduce the black hole mass and angular momentum in the bulk.
## I Introduction
The standard model (SM) of particle physics and general relativity (GR) cannot explain everything in the universe, such as dark energy or what occurs in the vicinity of a black hole. Thus, at very high energy scales, one reconsiders combining the SM with GR in a unified theory, i.e., "quantum gravity". The standard model extension (SME), proposed by Kostelecky and collaborators [1; 2; 3; 4; 5; 6; 7; 8], is an effective field theory combining GR and the SM at low energy scales, and incorporates all possible background fields that violate a fundamental symmetry of nature, Lorentz invariance, at high energy scales (around the Planck scale). Studying Lorentz violation (LV) is a useful approach toward investigating the foundations of modern physics. Besides the SME, string theory [9], noncommutative field theories [10; 11; 12], spacetime-varying fields [13; 14; 15], loop quantum gravity theory [16; 17], brane world scenarios [18; 19; 20], massive gravity [21] and Einstein-aether theory [22; 23] are other proposals involving Lorentz violation.
SME can be used to calculate a number of predictions which can be tested in modern high-precision experiments [8; 24]. The primary LV term in the gravity sector of SME takes the form \(s^{\mu\nu}R_{\mu\nu}\), where \(s^{\mu\nu}\) is a tensor field possessing a nonzero background configuration that defines preferred frames, and \(R_{\mu\nu}\) is the Ricci tensor. The simplest realization of \(s^{\mu\nu}\) is built from the bumblebee field \(B^{\mu}\), which is expected to produce new and maybe even eccentric phenomena through the coupling term \(B^{\mu}B^{\nu}R_{\mu\nu}\). The name "bumblebee" comes from the alleged remark that bumblebees should not be able to fly since we did not know how their wings produce lift [25]. A surprising property of this bumblebee gravity is that, despite the absence of \(U(1)\) gauge symmetry, it does not forbid the propagation of massless vector modes [26]. So one expects to reveal a variety of physical relics which may be of interest in studies of dark energy and dark matter, due to the appearance of Nambu-Goldstone and massive Higgs modes in theories with spontaneous Lorentz symmetry breaking [6; 7; 27].
In this Einstein-bumblebee gravity theory, Bertolami and Paramos studied the 4-dimensional static vacuum solutions including the purely radial, the radial/temporal or the axial/temporal LV [28]. They found that there exists an exact black hole solution for a purely radial bumblebee field; for the radial/temporal LV, there exists only a slightly perturbed metric, for which one cannot constrain the physical parameters from the observed limits on the PPN (parameterized post-Newtonian) parameters. In recent years, Casana _et al_[29] obtained an exact Schwarzschild-like black hole solution for the purely radial bumblebee field. Xu _et al_ studied the radial/temporal bumblebee field and the properties of some general numerical static black hole solutions [30]. In 2020, we obtained an exact Kerr-like solution and studied its black hole shadow [31]. Though this solution does not seem to satisfy the bumblebee field equations exactly, Liu _et al_ found that they can still be satisfied under certain conditions, i.e., it can be considered as an approximate solution of the bumblebee field equations [32]. After that we derived a slowly rotating black hole solution which satisfies all field equations [33]. Lately, we derived a black hole solution and a cosmological solution of Einstein-Gauss-Bonnet gravity coupled to a bumblebee field [34], and found that the Gauss-Bonnet term and the bumblebee field can both act as a form of dark energy. In a high dimensional spacetime, we obtained an exact AdS-like black hole solution [35], and found that the conceptions of black hole horizon area/entropy and volume inside the horizon should be renewed due to its anisotropy.
In this paper, we seek a rotating black hole solution in 3-dimensional spacetime. The first such black hole solution is the BTZ (Banados, Teitelboim and Zanelli) solution [36; 37], which is asymptotically anti-de Sitter (AdS) and has no curvature singularity. However, it is a genuine black hole due to the presence of an event horizon, Hawking radiation and entropy, and it plays a significant role in understanding physical properties in higher dimensions through many toy models [38; 39]. Black hole thermodynamics has raised some challenging questions: a statistical derivation of black hole entropy and an account of its microstates. A promising idea is the holographic principle: there is a nontrivial match between features of 2-dimensional conformal field theory
(CFT) and features of black holes [40]. The microscopic degrees of freedom of the black hole are described by a CFT living on the boundary. Quantum studies around BTZ black holes can help to better understand the AdS/CFT correspondence, as well as the T-duality and U-duality relations to classes of asymptotically flat black strings [41; 42].
We will derive a 3-dimensional rotating black hole solution and study the central charges of the dual CFT in the theory of Einstein gravity coupled to the bumblebee field. The rest of the paper is organized as follows. In Sec. II we give the background for the Einstein-bumblebee theory. In Sec. III, we give the black hole solution by solving the gravitational and bumblebee field equations. In Sec. IV, we study its central charges and find some effects of the Lorentz-breaking constant \(\ell\). Sec. V is for a summary.
## II Einstein-bumblebee gravity in 3-dimensional spacetime
In the bumblebee gravity model, one introduces the bumblebee vector field \(B_{\mu}\), which has a nonzero vacuum expectation value, leading to a spontaneous Lorentz symmetry breaking in the gravitational sector via a given potential. In the three dimensional spacetime, the action of Einstein-bumblebee gravity is [33],
\[\mathcal{S}=\int d^{3}x\sqrt{-g}\Big{[}\frac{R-2\Lambda}{2\kappa}+\frac{ \varrho}{2\kappa}B^{\mu}B^{\nu}R_{\mu\nu}-\frac{1}{4}B^{\mu\nu}B_{\mu\nu}-V( B_{\mu}B^{\mu}\mp b^{2})+\mathcal{L}_{M}\Big{]}, \tag{2.1}\]
where \(R\) is Ricci scalar and \(\Lambda\) is the cosmological constant. \(\kappa=4\pi G/c^{4}\) for the three dimensions, where \(G\) is the Newtonian constant.
The coupling constant \(\varrho\) controls the non-minimal gravitational interaction with the bumblebee field \(B_{\mu}\). The term \(\mathcal{L}_{M}\) represents possible interactions with matter or external currents. The constant \(b\) is a real positive constant. The potential \(V(B_{\mu}B^{\mu}\mp b^{2})\) triggers Lorentz and/or \(CPT\) (charge, parity and time) violation. It gives a nonzero vacuum expectation value (VEV) to the bumblebee field \(B_{\mu}\), indicating that the vacuum of this model acquires a preferred direction in spacetime. This potential has a minimum at \(B^{\mu}B_{\mu}\pm b^{2}=0\) with \(V^{\prime}(b_{\mu}b^{\mu})=0\), ensuring the breaking of the \(U(1)\) symmetry, where the field \(B_{\mu}\) acquires a nonzero VEV, \(\langle B^{\mu}\rangle=b^{\mu}\). The vector \(b^{\mu}\) is a function of the spacetime coordinates and has a constant norm \(b_{\mu}b^{\mu}=\mp b^{2}\), where the \(\pm\) signs mean that \(b^{\mu}\) is timelike or spacelike, respectively. The bumblebee field strength is
\[B_{\mu\nu}=\partial_{\mu}B_{\nu}-\partial_{\nu}B_{\mu}. \tag{2.2}\]
This antisymmetry of \(B_{\mu\nu}\) implies the constraint [27]
\[\nabla^{\mu}\nabla^{\nu}B_{\mu\nu}=0. \tag{2.3}\]
Varying the action (2.1) with respect to the metric yields the gravitational field equations
\[G_{\mu\nu}+\Lambda g_{\mu\nu}=\kappa T^{B}_{\mu\nu}+\kappa T^{M}_{\mu\nu}, \tag{2.4}\]
where \(G_{\mu\nu}=R_{\mu\nu}-g_{\mu\nu}R/2\), and the bumblebee energy momentum tensor \(T^{B}_{\mu\nu}\) is
\[T^{B}_{\mu\nu}=B_{\mu\alpha}B^{\alpha}_{\nu}-\frac{1}{4}g_{\mu \nu}B^{\alpha\beta}B_{\alpha\beta}-g_{\mu\nu}V+2B_{\mu}B_{\nu}V^{\prime}\] \[+\frac{\varrho}{\kappa}\Big{[}\frac{1}{2}g_{\mu\nu}B^{\alpha}B^{ \beta}R_{\alpha\beta}-B_{\mu}B^{\alpha}R_{\alpha\nu}-B_{\nu}B^{\alpha}R_{ \alpha\mu}\] \[+\frac{1}{2}\nabla_{\alpha}\nabla_{\mu}(B^{\alpha}B_{\nu})+\frac {1}{2}\nabla_{\alpha}\nabla_{\nu}(B^{\alpha}B_{\mu})-\frac{1}{2}\nabla^{2}(B_ {\mu}B_{\nu})-\frac{1}{2}g_{\mu\nu}\nabla_{\alpha}\nabla_{\beta}(B^{\alpha}B^ {\beta})\Big{]}. \tag{2.5}\]
The prime denotes differentiation with respect to the argument,
\[V^{\prime}=\frac{\partial V(x)}{\partial x}\Big{|}_{x=B^{\mu}B_{\mu}\pm b^{2}}. \tag{2.6}\]
Varying instead with respect to the bumblebee field generates the bumblebee equations of motion (supposing that there is no coupling between the bumblebee field and \(\mathcal{L}_{M}\)),
\[\nabla^{\mu}B_{\mu\nu}=2V^{\prime}B_{\nu}-\frac{\varrho}{\kappa}B^{\mu}R_{\mu \nu}. \tag{2.7}\]
The contracted Bianchi identities (\(\nabla^{\mu}G_{\mu\nu}=0\)) lead to conservation of the total energy-momentum tensor
\[\nabla^{\mu}T_{\mu\nu}=\nabla^{\mu}\big{(}T^{B}_{\mu\nu}+T^{M}_{\mu\nu}\big{)} =0. \tag{2.8}\]
We suppose that there is no matter field and that the bumblebee field is frozen at its VEV, as in Refs. [28; 29], i.e.,
\[B_{\mu}=b_{\mu}. \tag{2.9}\]
The potential is taken to have either a smooth quadratic form or a linear form,
\[V=\frac{k}{2}X^{2};\qquad V=\frac{\lambda}{2}X,\qquad X=(B_{\mu}B^{\mu}-b^{2}), \tag{2.10}\]
where \(\lambda\) and \(k\) are constants. Both potentials vanish, \(V=0\), under the condition (2.9). Then the first two terms in Eq. (2.5) are like those of the electromagnetic field; the only distinctive features are the coupling terms to the Ricci tensor. Under this condition, Eq. (2.4) leads to the gravitational field equations [33]
\[G_{\mu\nu}+\Lambda g_{\mu\nu}=\kappa(2V^{\prime}b_{\mu}b_{\nu}+b_{\mu\alpha}b ^{\alpha}_{\nu}-\frac{1}{4}g_{\mu\nu}b^{\alpha\beta}b_{\alpha\beta})+\varrho \Big{(}\frac{1}{2}g_{\mu\nu}b^{\alpha}b^{\beta}R_{\alpha\beta}-b_{\mu}b^{ \alpha}R_{\alpha\nu}-b_{\nu}b^{\alpha}R_{\alpha\mu}\Big{)}+\bar{B}_{\mu\nu}, \tag{2.11}\]
with
\[\bar{B}_{\mu\nu}=\frac{\varrho}{2}\Big{[}\nabla_{\alpha}\nabla_{\mu}(b^{ \alpha}b_{\nu})+\nabla_{\alpha}\nabla_{\nu}(b^{\alpha}b_{\mu})-\nabla^{2}(b_{ \mu}b_{\nu})-g_{\mu\nu}\nabla_{\alpha}\nabla_{\beta}(b^{\alpha}b^{\beta}) \Big{]}. \tag{2.12}\]
## III Rotating BTZ-like black hole solution
A stationary, rotationally symmetric black hole metric in the three-dimensional spacetime has the form
\[ds^{2}=-e^{2\Phi(r)}dt^{2}+e^{2\psi(r)}dr^{2}+r^{2}\big{[}\Omega(r)dt+d\phi\big{]} ^{2}, \tag{3.1}\]
where \(\Phi(r)\), \(\psi(r)\) and \(\Omega(r)\) are functions of the radial coordinate \(r\) only.
In the present study, we assume that the bumblebee field has a radial vacuum energy expectation, because the spacetime curvature has a strong radial variation whereas the temporal changes are very slow. The bumblebee field is therefore supposed to be spacelike (\(b_{\mu}b^{\mu}=\) positive constant), of the form
\[b_{\mu}=\big{(}0,b_{0}e^{\psi(r)},0\big{)}, \tag{3.2}\]
where \(b_{0}\) is a positive constant. Then the bumblebee field strength is
\[b_{\mu\nu}=\partial_{\mu}b_{\nu}-\partial_{\nu}b_{\mu}, \tag{3.3}\]
whose components are all zero, and hence its divergence also vanishes, i.e.,
\[\nabla^{\mu}b_{\mu\nu}=0. \tag{3.4}\]
From the equation of motion (2.7), we have
\[b^{\mu}R_{\mu\nu}=\frac{2\kappa}{\varrho}V^{\prime}b_{\nu}. \tag{3.5}\]
The gravitational field equations (2.11) become
\[G_{\mu\nu}+\Lambda g_{\mu\nu}=\bar{B}_{\mu\nu}+2\kappa V^{\prime}b_{\mu}b_{\nu }+\varrho\Big{(}\frac{1}{2}g_{\mu\nu}b_{0}^{2}e^{-2\psi}R_{11}-b_{\mu}b^{\alpha }R_{\alpha\nu}-b_{\nu}b^{\alpha}R_{\alpha\mu}\Big{)}. \tag{3.6}\]
For the metric (3.1), the nonzero components of Einstein tensor \(G_{\mu\nu}\), Ricci tensor \(R_{11}\) and the bumblebee tensor \(\bar{B}_{\mu\nu}\) are shown in the appendix. Substituting the equation \(G_{22}+\Lambda g_{22}=\bar{B}_{22}\) into \(G_{02}+\Lambda g_{02}=\bar{B}_{02}\), one can obtain
\[(1+\ell)\big{[}(r\Omega^{\prime\prime}+3\Omega^{\prime})-r\Omega^{\prime}( \Phi^{\prime}+\psi^{\prime})\big{]}=0, \tag{3.7}\]
where \(\ell=\varrho b_{0}^{2}\) and the prime denotes the derivative with respect to the argument. By using the equation of motion (3.5), one can obtain
\[R_{11}=\frac{2\kappa V^{\prime}}{\varrho}e^{2\psi}. \tag{3.8}\]
Substituting the equations \(G_{22}+\Lambda g_{22}=\bar{B}_{22}\) and (3.8) into \(G_{00}+\Lambda g_{00}=\bar{B}_{00}\), one can obtain
\[(1+\ell)\left[r\Omega(r\Omega^{\prime\prime}+3\Omega^{\prime})+\frac{1}{4}r^{2} \Omega^{\prime 2}-e^{2\Phi}\frac{\psi^{\prime}}{r}-r^{2}\Omega\Omega^{\prime}(\Phi^{ \prime}+\psi^{\prime})\right]+e^{2\Phi+2\psi}\Lambda=0. \tag{3.9}\]
Substituting the equation (3.8) into \(G_{11}+\Lambda g_{11}=\bar{B}_{11}\), one can obtain
\[(1+\ell)\left[e^{2\Phi}\frac{\Phi^{\prime}}{r}+\frac{1}{4}r^{2}\Omega^{\prime 2 }\right]+e^{2\Phi+2\psi}\Lambda=0. \tag{3.10}\]
The above two equations (3.9) and (3.10) can give that
\[r\Omega\big{[}(r\Omega^{\prime\prime}+3\Omega^{\prime})-r\Omega^{\prime}(\Phi ^{\prime}+\psi^{\prime})\big{]}-\frac{1}{r}e^{2\Phi}(\Phi^{\prime}+\psi^{ \prime})=0. \tag{3.11}\]
Then the Eqs. (3.7) and (3.11) can give the following equations
\[\Phi^{\prime}+\psi^{\prime}=0, \tag{3.12}\] \[r\Omega^{\prime\prime}+3\Omega^{\prime}=0. \tag{3.13}\]
Eq. (3.13) can give
\[\Omega=-\frac{j}{2r^{2}}, \tag{3.14}\]
where \(j\) is an integration constant related to the angular momentum. From Eq. (3.12), one can assume that \(e^{2\Phi}=f(r)\) and \(e^{2\psi}=C/f(r)\), where \(C\) is a constant to be determined. Substituting this into the equations \(G_{11}+\Lambda g_{11}=\bar{B}_{11}\) and \(G_{22}+\Lambda g_{22}=\bar{B}_{22}\), one obtains
\[\big{(}rf^{\prime\prime}+f^{\prime}-\frac{j^{2}}{r^{3}}\big{)}+\frac{4C}{1+ \ell}\Lambda r=0, \tag{3.15}\]
which can give that
\[f(r)=-m-\frac{C\Lambda r^{2}}{(1+\ell)}+\frac{j^{2}}{4r^{2}}, \tag{3.16}\]
where \(m\) is another integration constant, related to the mass. In order to get a BTZ-like solution, we choose the constant \(C=(1+\ell)\).
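As a quick independent check of the algebra (not part of the derivation itself), one can verify symbolically that the function (3.16) indeed solves Eq. (3.15), e.g. with SymPy:

```python
# Verify that f(r) of Eq. (3.16) satisfies r f'' + f' - j^2/r^3 + 4*C*Lambda*r/(1+ell) = 0.
import sympy as sp

r, m, j, C, Lam, ell = sp.symbols('r m j C Lambda ell')

f = -m - C*Lam*r**2/(1 + ell) + j**2/(4*r**2)                                # Eq. (3.16)
lhs = r*sp.diff(f, r, 2) + sp.diff(f, r) - j**2/r**3 + 4*C*Lam*r/(1 + ell)   # Eq. (3.15)

print(sp.simplify(lhs))   # expected output: 0
```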
### No black hole solution for \(V=kX^{2}/2\)
The bumblebee motion equation (3.8) can be rewritten as follows:
\[\frac{f^{\prime}}{r}-f^{\prime\prime}+\frac{j^{2}}{4r^{2}}=\frac{4\kappa V^{ \prime}C}{\varrho}. \tag{3.17}\]
If the bumblebee potential takes the quadratic form \(V=kX^{2}/2\), then \(V^{\prime}=0\) at its VEV. Substituting the solution (3.16) into the above Eq. (3.17), one obtains a vanishing cosmological constant, i.e., \(\Lambda=0\). Therefore, in this case, there is no black hole solution.
### New black hole solution for \(V=\lambda X/2\)
If the bumblebee potential takes the linear form \(V=\lambda X/2\), then \(V^{\prime}=\lambda/2\) at its VEV. Substituting the solution (3.16) into the equation of motion (3.17), one obtains the cosmological constant, i.e.,
\[\Lambda=(1+\ell)\frac{\kappa\lambda}{2\varrho}. \tag{3.18}\]
Defining an effective cosmological constant \(\Lambda_{e}=\kappa\lambda/2\varrho\), the new rotating BTZ-like black hole solution is
\[ds^{2}=-f(r)dt^{2}+\frac{(1+\ell)}{f(r)}dr^{2}+r^{2}(d\phi-\frac{j}{2r^{2}}dt)^ {2},\qquad f(r)=-m-(1+\ell)\Lambda_{e}r^{2}+\frac{j^{2}}{4r^{2}}. \tag{3.19}\]
This metric represents a purely radial Lorentz-violating black hole solution in a 3-dimensional spacetime. When \(j=0\), it reduces to the static BTZ-like black hole solution with
\[f(r)=-m-(1+\ell)\Lambda_{e}r^{2}, \tag{3.20}\]
which is consistent with the result of Eq. (29) in Ref. [35] when \(D=3\).
Its Kretschmann scalar is
\[R_{\mu\nu\rho\tau}R^{\mu\nu\rho\tau}=12\Lambda_{e}^{2}, \tag{3.21}\]
which is a finite constant in the whole spacetime, just as for the original BTZ black hole, so there is also no curvature singularity at the origin. The total energy-momentum tensor of the metric (3.19) is
\[T_{\nu}^{\mu}=\frac{1}{\kappa}\left(\begin{array}{ccc}-\epsilon&0&0\\ 0&p_{r}&0\\ 0&0&p_{t}\end{array}\right), \tag{3.22}\]
where \(\epsilon\) is the energy density, \(p_{r}\) is the radial pressure and \(p_{t}\) is the tangential pressure, which read
\[-\epsilon=p_{r}=p_{t}=-\Lambda_{e}. \tag{3.23}\]
It has two horizons: the inner (Cauchy) horizon \(r_{-}\) and the outer (event) horizon \(r_{+}\), which can be read off from \(f(r_{\pm})=0\),
\[r_{\pm}=\sqrt{\frac{2}{-(1+\ell)\Lambda_{e}}}\left(\sqrt{M+\sqrt{-(1+\ell) \Lambda_{e}J}}\pm\sqrt{M-\sqrt{-(1+\ell)\Lambda_{e}J}}\right), \tag{3.24}\]
where \(M=m/8\) is its ADM mass and \(J=j/8\) is its angular momentum. Like the Kerr black hole, it has an ergosphere \(r_{erg}\), which can be read off from the vanishing of the time-time component, \(g_{tt}=0\),
\[r_{erg}=\sqrt{r_{+}^{2}+r_{-}^{2}}. \tag{3.25}\]
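As an independent sanity check (not part of the derivation above), note that squaring Eq. (3.24) gives \(r_{\pm}^{2}=\frac{4}{a}\big(M\pm\sqrt{M^{2}-aJ^{2}}\big)\) with \(a\equiv-(1+\ell)\Lambda_{e}>0\); the short SymPy snippet below verifies that these are the roots of \(f(r)=0\) and that their sum reproduces the ergosphere radius of Eq. (3.25).

```python
# Sanity check of Eqs. (3.24)-(3.25): with x = r^2, m = 8M, j = 8J and a = -(1+ell)*Lambda_e,
# f(r) = 0 becomes a*x^2 - 8*M*x + 16*J^2 = 0, whose roots are x_(+/-) = r_(+/-)^2.
import sympy as sp

x, a, M, J = sp.symbols('x a M J', positive=True)

F = a*x**2 - 8*M*x + 16*J**2               # r^2 * f(r), written in x = r^2
xp = (4/a)*(M + sp.sqrt(M**2 - a*J**2))    # r_+^2, i.e. the square of Eq. (3.24) with the upper sign
xm = (4/a)*(M - sp.sqrt(M**2 - a*J**2))    # r_-^2

print(sp.expand(F.subs(x, xp)), sp.expand(F.subs(x, xm)))   # expected: 0 0
# g_tt = 0 gives r_erg^2 = 8M/a, which equals r_+^2 + r_-^2 as stated in Eq. (3.25)
print(sp.simplify(xp + xm - 8*M/a))                          # expected: 0
```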
The surface gravity \(\kappa\) of the event horizon is
\[\kappa=-\frac{1}{\sqrt{1+\ell}}\Big{(}(1+\ell)\Lambda_{e}r_{+}+\frac{16J^{2}}{r_{ +}^{3}}\Big{)}, \tag{3.26}\]
which can be read from the formula [31],
\[\kappa=-\frac{1}{2}\lim_{r\to r_{+}}\sqrt{\frac{1}{g_{rr}X}}\frac{dX}{dr}, \qquad X=g_{tt}-\frac{g_{t\phi}^{2}}{g_{\phi\phi}}. \tag{3.27}\]
It is easy to prove that in the present case, the function \(X=-f(r)\).
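For completeness, this can be seen directly from the metric components of Eq. (3.19),

\[g_{tt}=-f(r)+\frac{j^{2}}{4r^{2}},\qquad g_{t\phi}=-\frac{j}{2},\qquad g_{\phi\phi}=r^{2},\]

so that

\[X=g_{tt}-\frac{g_{t\phi}^{2}}{g_{\phi\phi}}=-f(r)+\frac{j^{2}}{4r^{2}}-\frac{j^{2}}{4r^{2}}=-f(r).\]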
## IV Central charge of the dual CFT
According to the holographic principle [43], one may expect that for each sector of 3-dimensional gravity which is either asymptotically anti-de Sitter (AdS) or AdS-like, there exists a dual 2-dimensional conformal field theory (CFT) that might live on the boundary of this AdS spacetime. Brown _et al._ found that a nontrivial central charge appears in the algebra of the canonical generators, which is just the Virasoro central charge [44]. From the black hole thermodynamics aspect, Yekta [45] obtained the central charges of the CFT, which can be constructed using the thermodynamics of the outer and inner horizons. He found the result to be in complete agreement with that obtained via the method of asymptotic symmetry group analysis [46; 47; 48]. In this section, we study the central charges by using the thermodynamic method of black hole/CFT correspondence proposed in [49; 50; 51].
From the horizon equations \(f(r_{\pm})=0\), one can rewrite the black hole mass as
\[M=-\frac{1}{8}(1+\ell)\Lambda_{e}r_{\pm}^{2}+\frac{2J^{2}}{r_{\pm}^{2}}. \tag{4.1}\]
The temperature of outer horizon \(T_{+}\) is defined by
\[T_{+}=-\frac{1}{2\pi\sqrt{1+\ell}}\Big{[}(1+\ell)\Lambda_{e}r_{+}+\frac{16J^{ 2}}{r_{+}^{3}}\Big{]}=-\frac{\sqrt{1+\ell}\Lambda_{e}}{2\pi r_{+}}(r_{+}^{2}- r_{-}^{2}). \tag{4.2}\]
The temperature of the inner horizon \(T_{-}\) should be a geometrical positive quantity and is constant over the inner horizon [40], so it should be,
\[T_{-}=\frac{1}{2\pi\sqrt{1+\ell}}\Big{[}(1+\ell)\Lambda_{e}r_{-}+\frac{16J^{2 }}{r_{-}^{3}}\Big{]}=-\frac{\sqrt{1+\ell}\Lambda_{e}}{2\pi r_{-}}(r_{+}^{2}- r_{-}^{2}). \tag{4.3}\]
There are three thermodynamical quantities: the entropies \(S_{\pm}\) of the inner and outer horizons, the angular momentum \(J\) and the pressure \(P=-\Lambda/8\pi(1+\ell)^{2}\); i.e., the mass \(M\) can be expressed as a function of these quantities, \(M=M(S,J,P)\). The entropies are
\[S_{\pm}=\int\frac{dM}{T_{\pm}}=\frac{1}{2}\sqrt{1+\ell}\pi r_{\pm}, \tag{4.4}\]
the angular velocities \(\Omega_{\pm}\) and the thermodynamical volumes \(V_{\pm}\) are
\[\Omega_{\pm}=\left(\frac{\partial M}{\partial J}\right)_{S,P}=\sqrt{8\pi(1+\ell) P}\frac{r_{\mp}}{r_{\pm}},\qquad V_{\pm}=\left(\frac{\partial M}{\partial P} \right)_{S,J}=(1+\ell)\pi r_{\pm}^{2}. \tag{4.5}\]
So there exists the first law of thermodynamics and the Smarr formula for the outer horizon
\[dM=T_{+}dS_{+}+\Omega_{+}dJ+V_{+}dP,\qquad 0\cdot M=T_{+}S_{+}-2PV_{+}+ \Omega_{+}J. \tag{4.6}\]
For the inner horizon, the Killing vector is spacelike inside the event horizon, so one should assign a negative energy \(-M\) to the inner horizon [40], similarly to the negative energies within the ergosphere. The first law of thermodynamics and the Smarr formula for the inner horizon are then
\[dM=-T_{-}dS_{-}+\Omega_{-}dJ+V_{-}dP,\qquad 0\cdot M=-T_{-}S_{-}-2PV_{ -}+\Omega_{-}J. \tag{4.7}\]
It is obvious from the relations (4.2), (4.3) and (4.4) that the equality \(T_{+}S_{+}=T_{-}S_{-}\) holds. This means that the entropy product of the inner and outer horizons,
\[S_{+}S_{-}=\sqrt{\frac{1+\ell}{-\Lambda_{e}}}\pi^{2}J, \tag{4.8}\]
is universal (mass-independent). So the central charges of the left- and right-moving sectors in the dual CFT must be the same,
\[c_{L}=c_{R}=6\frac{d}{dJ}\left(\frac{S_{+}S_{-}}{4\pi^{2}}\right) =\frac{3}{2}\sqrt{\frac{1+\ell}{-\Lambda_{e}}}. \tag{4.9}\]
These results of 2-dimensional CFT on the boundary can be used to describe 3-dimensional AdS gravity in the bulk.
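As with the horizon structure, these relations can be checked symbolically. The snippet below (an independent sanity check, with \(L\equiv-\Lambda_{e}>0\) and everything parametrized by the horizon radii \(r_{\pm}\)) confirms \(T_{+}S_{+}=T_{-}S_{-}\), the entropy product (4.8) and, as a consequence, the central charges (4.9).

```python
# Check T_+ S_+ = T_- S_- and the universal entropy product of Eq. (4.8).
import sympy as sp

ell, L, rp, rm = sp.symbols('ell L r_p r_m', positive=True)   # L := -Lambda_e > 0
a = (1 + ell)*L

# M and J written through the horizon radii (roots of a*r^4 - 8*M*r^2 + 16*J^2 = 0)
M = a*(rp**2 + rm**2)/8
J = sp.sqrt(a)*rp*rm/4

S_p, S_m = sp.sqrt(1 + ell)*sp.pi*rp/2, sp.sqrt(1 + ell)*sp.pi*rm/2    # Eq. (4.4)
T_p = sp.sqrt(1 + ell)*L*(rp**2 - rm**2)/(2*sp.pi*rp)                  # Eq. (4.2)
T_m = sp.sqrt(1 + ell)*L*(rp**2 - rm**2)/(2*sp.pi*rm)                  # Eq. (4.3)

print(sp.simplify(T_p*S_p - T_m*S_m))                                  # expected: 0
# Eq. (4.8): S_+ S_- = pi^2 * J * sqrt((1+ell)/L); both sides are positive, so compare squares
print(sp.expand((S_p*S_m)**2 - (sp.pi**2*J)**2*(1 + ell)/L))           # expected: 0
# Since S_+ S_- is linear in J, Eq. (4.9) follows:
# c_L = c_R = 6 d/dJ[S_+ S_-/(4 pi^2)] = (3/2) sqrt((1+ell)/L)
```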
Defining the left-moving temperature \(T_{L}\) and right-moving temperature \(T_{R}\) of the dual CFT,
\[T_{L}=\left(\frac{1}{T_{+}}+\frac{1}{T_{-}}\right)^{-1}=-\frac{ \sqrt{1+\ell}}{2\pi}\Lambda_{e}(r_{+}-r_{-}),\qquad T_{R}=\left(\frac{1}{T_{+ }}-\frac{1}{T_{-}}\right)^{-1}=-\frac{\sqrt{1+\ell}}{2\pi}\Lambda_{e}(r_{+}+r _{-}), \tag{4.10}\]
then the entropies of the inner and outer horizon can be represented by
\[S_{\pm}=\frac{\pi^{2}}{3\sqrt{-(1+\ell)\Lambda_{e}}}(c_{L}T_{L} \pm c_{R}T_{R}). \tag{4.11}\]
In addition, defining the energies of the left- and right-moving sectors of the dual CFT as
\[E_{L}=\frac{\pi^{2}}{6\sqrt{-(1+\ell)\Lambda_{e}}}c_{L}T_{L}^{2} =\frac{M-\sqrt{-(1+\ell)\Lambda_{e}}J}{2},E_{R}=\frac{\pi^{2}}{6\sqrt{-(1+\ell )\Lambda_{e}}}c_{R}T_{R}^{2}=\frac{M+\sqrt{-(1+\ell)\Lambda_{e}}J}{2}, \tag{4.12}\]
then the black hole mass \(M\) and angular momentum \(J\) can be represented by
\[M=E_{L}+E_{R},\qquad J=\frac{1}{\sqrt{-(1+\ell)\Lambda_{e}}}(E_{ L}-E_{R}). \tag{4.13}\]
In this way, the central charges on the boundary can be used to reproduce the entropy, mass and angular momentum in the bulk.
## V Summary
In this paper, we have studied Einstein gravity coupled to a bumblebee field in a 3-dimensional spacetime and obtained an exact rotating BTZ-like black hole solution. The bumblebee field does not affect the locations of the black hole horizon and ergosphere.
This black hole differs from the 4-dimensional Schwarzschild-like [29] or Kerr-like [31; 33] black holes in that it cannot be asymptotically flat and has no curvature singularity at the origin. It is asymptotically AdS and has isotropic pressure, but it also differs from the 4- or higher-dimensional AdS-like black holes [35; 52], whose pressures are anisotropic.
We study the AdS/CFT correspondence of this black hole and find that the entropy product of its inner and outer horizons is universal. Hence the central charges of the dual CFT on the boundary can be obtained via the thermodynamic method, and they can reproduce the black hole mass and angular momentum in the bulk.
###### Acknowledgements.
This work was supported by the Scientific Research Fund of the Hunan Provincial Education Department under No. 19A257, the National Natural Science Foundation (NNSFC) of China (grant No. 11247013), Hunan Provincial Natural Science Foundation of China (grant No. 2015JJ2085 and No. 2020JJ4284).
## Appendix A Some quantities I
In this appendix, we show the nonzero components of the Einstein tensor for the metric (3.1). They are as follows:
\[G_{00}=e^{-2\psi}r^{2}\Omega^{2}(\Phi^{\prime\prime}+\Phi^{\prime 2}- \Phi^{\prime}\psi^{\prime})+e^{-2\psi}r^{2}\Omega(-\Omega^{\prime\prime}+\Phi ^{\prime}\Omega^{\prime}+\Omega^{\prime}\psi^{\prime})\] \[\qquad\qquad-\frac{1}{4}e^{-2\Phi-2\psi}r^{2}(e^{2\Phi}+3r^{2} \Omega^{2})\Omega^{\prime 2}-3e^{2\psi}r\Omega\Omega^{\prime}+e^{2\Phi-2\psi} \frac{\psi^{\prime}}{r},\] (A1) \[G_{02}=e^{-2\psi}r^{2}\Omega(\Phi^{\prime\prime}+\Phi^{\prime 2}- \Phi^{\prime}\psi^{\prime})+\frac{1}{2}e^{-2\psi}r^{2}(-\Omega^{\prime\prime}+ \Phi^{\prime}\Omega^{\prime}-\Omega^{\prime}\psi^{\prime})\] \[\qquad\qquad-\frac{3}{4}e^{-2\Phi-2\psi}r^{4}\Omega\Omega^{\prime 2 }-\frac{3}{2}e^{-2\psi}r\Omega^{\prime},\] (A2) \[G_{11}=\frac{\Phi^{\prime}}{r}+\frac{1}{4}e^{-2\Phi}r^{2}\Omega^ {\prime 2},\] (A3) \[R_{11}=\frac{\psi^{\prime}}{r}-(\Phi^{\prime\prime}+\Phi^{\prime 2 }-\Phi^{\prime}\psi^{\prime})+\frac{1}{2}e^{-2\Phi}r^{2}\Omega^{\prime 2},\] (A4) \[G_{22}=e^{-2\psi}r^{2}(\Phi^{\prime\prime}+\Phi^{\prime 2}-\Phi^{ \prime}\psi^{\prime})-\frac{3}{4}e^{-2\Phi-2\psi}r^{4}\Omega^{\prime 2}.\] (A5)
The nonzero components of \(\bar{B}_{\mu\nu}\) are
\[\bar{B}_{00}=-\frac{\varrho b_{0}^{2}}{2}e^{-2\psi}(e^{-2\Phi}+r^{2}\Omega^{2})(\Phi^{\prime\prime}+\Phi^{\prime 2}-\Phi^{\prime}\psi^{\prime})+\varrho b_{0}^{2}e^{-2\psi}r^{2}\Omega(\Omega^{\prime\prime}-\Phi^{\prime}\Omega^{\prime}-\Omega^{\prime}\psi^{\prime})\] \[\qquad\qquad+\frac{\varrho b_{0}^{2}}{2}e^{-2\Phi-2\psi}r^{2}(e^{2\Phi}+r^{2}\Omega^{2})\Omega^{\prime 2}+3\varrho b_{0}^{2}e^{-2\psi}r\Omega\Omega^{\prime}-\frac{\varrho b_{0}^{2}}{2r}e^{-2\psi}(e^{2\Phi}+r^{2}\Omega^{2})\psi^{\prime}, \tag{A6}\] \[\bar{B}_{02}=-\frac{\varrho b_{0}^{2}}{2}e^{-2\psi}r^{2}\Omega(\Phi^{\prime\prime}+\Phi^{\prime 2}-\Phi^{\prime}\psi^{\prime})+\frac{\varrho b_{0}^{2}}{2}e^{-2\psi}r^{2}(\Omega^{\prime\prime}-\Phi^{\prime}\Omega^{\prime}-\Omega^{\prime}\psi^{\prime})\] \[\qquad\qquad+\frac{\varrho b_{0}^{2}}{2}e^{-2\Phi-2\psi}r^{4}\Omega\Omega^{\prime 2}-\frac{\varrho b_{0}^{2}}{2}e^{-2\psi}r(\Omega\psi^{\prime}-3\Omega^{\prime}), \tag{A7}\] \[\bar{B}_{11}=-\frac{\varrho b_{0}^{2}}{2}(\Phi^{\prime\prime}+\Phi^{\prime 2}-\Phi^{\prime}\psi^{\prime})-\frac{\varrho b_{0}^{2}}{2r}(2\Phi^{\prime}-\psi^{\prime}), \tag{A8}\] \[\bar{B}_{22}=-\frac{\varrho b_{0}^{2}}{2}e^{-2\psi}r^{2}(\Phi^{\prime\prime}+\Phi^{\prime 2}-\Phi^{\prime}\psi^{\prime})+\frac{\varrho b_{0}^{2}}{2}e^{-2\Phi-2\psi}r^{4}\Omega^{\prime 2}-\frac{\varrho b_{0}^{2}}{2}e^{-2\psi}r\psi^{\prime}. \tag{A9}\]
|
2304.05468 | A Survey of Resources and Methods for Natural Language Processing of
Serbian Language | The Serbian language is a Slavic language spoken by over 12 million speakers
and well understood by over 15 million people. In the area of natural language
processing, it can be considered a low-resourced language. Also, Serbian is
considered a high-inflectional language. The combination of many word
inflections and low availability of language resources makes natural language
processing of Serbian challenging. Nevertheless, over the past three decades,
there have been a number of initiatives to develop resources and methods for
natural language processing of Serbian, ranging from developing a corpus of
free text from books and the internet, annotated corpora for classification and
named entity recognition tasks to various methods and models performing these
tasks. In this paper, we review the initiatives, resources, methods, and their
availability. | Ulfeta A. Marovac, Aldina R. Avdić, Nikola Lj. Milošević | 2023-04-11T19:33:41Z | http://arxiv.org/abs/2304.05468v1 | # A Survey of Resources and Methods for Natural Language Processing of Serbian Language
###### Abstract
The Serbian language is a Slavic language spoken by over 12 million speakers and well understood by over 15 million people. In the area of natural language processing, it can be considered a low-resourced language. Also, Serbian is considered a high-inflectional language. The combination of many word inflections and low availability of language resources makes natural language processing of Serbian challenging. Nevertheless, over the past three decades, there have been a number of initiatives to develop resources and methods for natural language processing of Serbian, ranging from developing a corpus of free text from books and the internet, annotated corpora for classification and named entity recognition tasks to various methods and models performing these tasks. In this paper, we review the initiatives, resources, methods, and their availability.
**Keywords:** natural language processing, text mining, language resources, Serbian language
## 1 Introduction
The Serbian language is a south Slavic language currently actively spoken by about 12 million people worldwide. It is one of four mutually intelligible varieties of pluricentric language called Serbo-Croatian (other varieties include Croatian, Bosnian, and Montenegrin). Serbo-Croatian languages are morphologically rich (Delic et al., 2010), containing many inflections of words, due to three genders, seven cases for nouns, and seven tenses for verbs, whose inflections are followed by other parts of speech and word types, as well as twelve sound changes occurring in word inflections (Klajn, 2005). Serbian is also the only European language that is formally digraphic and whose speakers are functionally digraphic, using both Cyrillic and Latin alphabets (Magner, 2001). The majority of Serbian speakers live in Serbia (6,330,919 based on 2011 census1), but a significant number of speakers also live in Montenegro, Bosnia and Herzegovina, Croatia, Macedonia, Slovenia, Albania, Hungary, Austria, Sweden, Germany, and other countries. The Serbian language is an official language in Serbia, Bosnia and Herzegovina, Montenegro, while it is recognized as a minority language in Croatia, Macedonia, Romania, Hungary, Slovakia, and the Czech Republic. Variants of the Serbo-Croatian language (Serbian, Croatian, Bosnian, and Montenegrin) are spoken by about 19 million people, and therefore the importance of these languages are quite significant (Eberhard et al., 2022).
Footnote 1: [https://data.stat.gov.rs/Home/Result/31020104017languageCode=en-US](https://data.stat.gov.rs/Home/Result/31020104017languageCode=en-US)
Natural language processing is a branch of artificial intelligence that examines methods to analyze, process and ultimately make natural languages understandable for computers (Reshamwala et al., 2013). Therefore, the field is addressing many challenges related to human/natural languages. Even though a majority of work in the field is predominantly done on the English language, there has been also work on other languages.
High morphological complexity, variety of word inflection, and relatively low amount of resources available for Serbian and Serbo-Croatian pose a challenge for natural language processing and language technologies. The morphological richness of Serbo-Croatian makes it particularly interesting for examining how natural language processing methods perform on languages with a variety of inflection and how to efficiently handle word inflection in morphologically rich and low-resource languages. In sense of language technologies and natural language processing, Serbian cannot be viewed in isolation, as differences between Serbian, Croatian, Bosnian, and Montenegrin are small, and often approaches developed for one of these variants would perform well on others. Despite these challenges, there have been several initiatives, organizations, and significant academic work performed to address some of the specific challenges in Serbian. A number of resources and corpora for syntactic analysis, classification, and named entity recognition were developed, as well as a number of approaches for document analysis, classification, semantic similarity, and even analysis of rhetorical figures such as similes.
The development of digital lexical resources is an important and strategic task for every language and should be a national priority. The results of natural language processing depend on the quality and volume of the available digital resources, as well as on the availability and comprehensiveness of tools for processing them (Nenadic, 2004). Our goal is to collect the available resources and methods for processing textual data in the Serbian language, describe them, and identify shortcomings and the resources whose development is most needed according to current trends in NLP. To the best of our knowledge, this is the first recent review of NLP resources and methods for Serbian of this scope.
### A brief history of NLP resources and method development in Serbia
The development of the first digital corpora in the Balkans started shortly after the development of the first digital corpora in the world: it was initiated by the psychologist Djordje Kostic in 1957, with the goal of developing language technologies for speech recognition and machine translation from the Serbo-Croatian language. This corpus was developed until 1962; however, it was not digitally processed, so the first digital corpus was published in Zagreb in 1967. That corpus contained the epic Osman by Ivan Gundulic, prepared by Zeljko Bujas. The development of corpora and corpus linguistics in the western Balkans in the period between 1950 and 1990 is presented by Dobric (2012). Language resources and tools that were mainly developed at the Faculty of Mathematics, University of Belgrade, until the year 2003 have been previously reviewed (Vitas et al., 2003b, a). Within the META-NET project in 2012, the language resources for 23 official languages of the European Union were analyzed, and the book "The Serbian Language in the Digital Age" was published as part of the white paper series (Vitas et al., 2012). The Regional Linguistic Data Initiative (ReLDI) project has made a significant contribution to promoting the relevance and importance of open language resources for Serbian and related languages (Samardzic et al., 2015). The open and freely available language resources for processing the Serbian language, developed within the ReLDI project or independently built, are briefly presented by Batanovic et al. (2020).
In this paper, we aim to review corpora, resources, methods, models, and tools that were developed over time for the Serbian language. We intentionally limit this review to the Serbian language only. While we agree that some approaches do work as well on related languages in the Serbo-Croatian group of languages, there are still small differences between them, that would make evaluation and comparison of the resources and methods challenging.
## 2 Review methodology
To cover all authors who deal with natural language processing in Serbian, we started with the National Repository of Dissertations in Serbia ([https://nardus.mpn.gov.rs/](https://nardus.mpn.gov.rs/)). By using keywords such as natural language
processing, NLP, text mining, text data processing, computational linguistics, electronic dictionaries, corpora, sentiment analysis, emotional analysis, text classification, lexical resources, and other synonyms and related terms, we identified dissertations that contain these keywords. From the most relevant dissertations, those that deal with natural language processing in Serbian were selected. Additionally, a set of dissertations were identified by searching for known NLP scientists, supervisors, and groups at Serbian universities. Dissertations were identified by searching for known scientists acting as a supervisor or a member of the thesis committee. A total of 29 dissertations in the field of natural language processing were selected. By analyzing the dissertations and references cited in them, we identified 316 papers indicating NLP for Serbian. Further searches were conducted on Google Scholar for prominent authors (or author groups) and selected topics.
We reviewed the dissertations and papers we identified, excluding those that did not pertain to natural language processing in Serbian, and classified them based on their topic and date of publication. In this review, we follow this classification, with each section covering a broad area of natural language processing. The content in each section is primarily arranged in chronological order.
## 3 Corpora
A corpus is a set of machine-readable texts representing a sample of a language or text type. Corpora can be classified based on their parameters such as medium, scope (size), domain, purpose, period, source, method of annotation, number of languages involved, etc. (Vitas et al., 2003b). Given that corpora can include texts in one or more languages, they are divided into monolingual and multilingual corpora. According to this classification, we will present the corpora of the Serbian language.
### Monolingual corpora
_The Diachronic Corpus of the Serbo-Croatian Language_ (DCSCL, Table 1) of Professor Kostic digitized in 2003 contains texts from the period from the 12th to the 20th century, divided into five-time samples. The corpus comprises 11 million words that have been manually annotated with lemmas and information on various morphological categories such as gender, number, case, person, tense, and more (Kostic, 2014). In 1981, the NLP group at the Faculty of Mathematics (NLP_MATF) initiated the development of a corpus for the contemporary Serbian language. The first version of this corpus, named _"The Untagged Corpus of Contemporary Serbian Language"_ (UCCSL, Table 1), was created in 2003. This corpus contains literature published during or after the 20th century and lacks any annotations. Subsequently, the corpus was enhanced by incorporating bibliographic information into the corpus texts, and this new version was called _"SrpKor2003"_ (SrpKor2003, Table 1) (Krstev and Vitas, 2005; Utvic, 2014).
Most of the monolingual corpora have morphosyntactic annotation and bibliographic information about the corpus texts. Morphosyntactic annotation is a linguistic annotation that adds tags to the token: type of speech (Part of Speech Tagging), canonical form or lemma (lemmatization), and morphological word categories. By expanding SrpKor2003, a new version of the corpus of contemporary Serbian _"SrpKor2013"_ (SrpKor2013, Table 1) was created, which contains literary and artistic texts by Serbian writers in the 20th and 21st centuries, as well as scientific texts, administrative texts, and general texts. _The Corpus of Contemporary Serbian_ contains bibliographic data and it has been automatically morphosyntactically annotated (with part-of-speech and lemmas). It contains more than 122 million words. Its subset _"Lematized Corpus of the Modern Serbian Language"_ (SrpLemKor, Table 1) contains 3.7 million corpus words. Both corpora are available with registration under a license (Popovic, 2010; Utvic, 2011).
Among the available corpora of the Serbian/Serbo-Croatian language at the Faculty of Mathematics of the University of Belgrade2, there are also the following monolingual corpora. _Henning's Corpus of Serbo-Croatian_ (HennC, Table 1) consists of approximately 700,000 words of Serbo-Croatian. The texts are taken from modern Yugoslav fiction and all Serbo-Croatian-speaking areas are represented (Serbia, Croatia, Montenegro, and Bosnia-Herzegovina) (Corpora etc, 1992). _The Untagged Corpus of Vuk's Folk Proverbs_ (UnVukC, Table 1), containing folk proverbs along with Vuk's comments on them (Krstev, 1997). Besides this corpus, Vuk's collection of similes has been augmented by employing grammatical rules, machine learning, and manual review. As a result, a corpus of contemporary similes containing 852 similes was developed (VukSimC, Table 1)3(Milosevic and Nenadic, 2016, 2018). _Electoral Crisis 2000_ corpus, which includes the entire webcasts of the daily newspaper "Politika" from September 10th to October 20th, 2000, and the _Labeled corpus of the Serbian language_, which consists of texts with a minimal set of structural labels, lack the detailed information on size and are available on the same source4.
Footnote 2: [http://www.korpus.matf.bg.ac.rs/prezentacija/korpusi.html](http://www.korpus.matf.bg.ac.rs/prezentacija/korpusi.html)
Footnote 3: [https://ezbikra.starisloveeni.com](https://ezbikra.starisloveeni.com)
Footnote 4: [http://www.korpus.matf.bg.ac.rs/prezentacija/korpusi.html](http://www.korpus.matf.bg.ac.rs/prezentacija/korpusi.html)
There are smaller corpora that have been collected mostly for specific domains (medicine, law, etc.) and particular purposes (named entity recognition, semantic similarity, etc.). Among them are the corpora _MRCOR1_ and _MRCOR2_ (Table 1), consisting of medical reports obtained from 32 medical centers in Serbia. The primary dataset contains 2212 medical reports with a diagnosis of measles. The other dataset consists of 2000 medical reports with ten different types of diagnoses. Medical and non-medical terms are manually marked in the medical reports, and a diagnosis code is assigned to each report (Avdic et al., 2020). A corpus (DMRC, Table 1) of 100 discharge lists and 50 reports from doctors from the Faculty of Dentistry at the University of Belgrade was used to evaluate the system's effectiveness in automatically analyzing temporal expressions in medical narrative texts. Previously, the texts
Table 1: Monolingual Serbian corpora. [The table body is not preserved in this copy; the recoverable column headers are: corpus label, text type, number of units, annotation.] Annotation target: U - unannotated, MS - morphosyntactic, L - lemmatization, M - morphological categories, PoS - part-of-speech tagging, S - structural, NE - named entity, UK - unknown annotation, A - accentuation, B - bibliographic, SD - syntactic dependencies, P - plausibility, MT - medical terms, NMT - non-medical terms, SS - semantic similarity; annotation type: AA - automatically annotated, MA - manually annotated.
had been automatically de-identified, but the time expressions had not been changed (Jacimovic et al., 2015). The _LAWC_ (Table 1) set of data includes a collection of 1120 texts of laws, segmented into a total of 59167 texts of individual articles of law (Petrovic and Stankovic, 2019). The corpus _LTC_ (Table 1) consists of legal texts in electronic form that are available on the website of the National Assembly of the Republic of Serbia. The laws passed by the end of May 2014 contain 681 texts (Vasiljevic, 2015).
AlfaNum which deals with automatic speech recognition (ASR), has built its resource _AlfaNum Text Corpus_ (ATC, Table 1), characterized by morphological categories and accentuation and contains approximately 200,000 words (Secujski and Delic, 2008). _The Named Entities Evaluation Corpus for Serbian_ (SrpNEval, Table 1) consists of 2000 short news Serbian daily newspapers from 2005 and 2006. Both the Cyrillic and Latin official scripts for the Serbian language are used in the corpus (Krstev et al., 2012). _ReLDI-NormTagNER-sr 2.1_ (NormTagNER, Table 1) is a manually annotated corpus of Serbian tweets for evaluation of tokenization, sentence segmentation, word normalization, morphosyntactic tagging, lemmatization, and named entities recognition of non-standard Serbian language (Milicevic and Ljubesic, 2016). _SETimes.SR_ (SETimes.SR, Table 1) is a reference training corpus of Serbian, which has been annotated on multiple levels. The texts in SETimes.SR were obtained from the multilingual parallel corpus _SETimes_ (SETimes, Table 2), which is a collection of news articles from the now-defunct Southeast European Times news portal (Batanovic et al., 2018), (Batanovic et al., 2018). Sentences from online press sources were collected for _The Serbian Corpus of Paraphrases_ (paraphrase.sr, Table 2). A binary similarity score was manually assigned to each pair of sentences, indicating whether the sentences in the pair are semantically similar enough to be considered close paraphrases (Batanovic et al., 2011). Another corpus for determining semantic similarity, _The Serbian Corpus of Short News Texts_ - (STS.news.sr, Table 2), consists of 1192 pairs of sentences in Serbian that were collected from news sources on the internet (Batanovic et al., 2018).
Old Serbian novels from the 1840s to the 1920s are collected in _SrELTeC_ (SrELTeC, Table 1) and have been digitally preserved as part of the COST action CA16204 (Stankovic et al., 2021). _ELTeC_'s section for Serbian contains 120 novels (Odebrecht et al., 2021). The novels have structural annotations, and sentence splitting, words are POS-tagged, lemmatized and seven classes of named entities are annotated. Some of the other resources available through the ELG5 portal are _SrpELTeC-gold_, _SrpKor4Tagging_, and _RudKorP_ (Table 1). The corpus for training the recognition of named entities SrpELTeC-gold is a sub-corpus of the literary corpus of the Serbian language, marked with named entities by the SrpNER(Krstev et al., 2014) system (Todorovic et al., 2021). The SrpKor4Tagging corpus was formed by combining literary and administrative texts in the Serbian language (Stankovic et al., 2020). The RudKorP corpus contains texts in the field of mining and processing of mineral raw
materials, created at the University of Belgrade, Faculty of Mining and Geology (Utvic et al., 2019). There are several more corpora available through the Clarin.si 6 platform, which are shown at the bottom of Table 1. _TorlakKor_ is a corpus of transcripts of interviews with the local population of Timok (an area in southeastern Serbia) (Vukovic, 2020). The _COPA-SR_ dataset (_Choice of Plausible Alternatives in Serbian_) is a translation of the English _COPA_ dataset (Ljubesic et al., 2022). _CorFoA_ is a corpus of Serbian forms of the address containing transcripts of biographical interviews with 19 participants (Lemmenmeier-Batinic et al., 2021). _MLNews_ is a comprehensive corpus of news articles that are Serbian language-related. It is complemented with a separate corpus of citizens' online comments on the news articles, available as _MLN-COM_(Bogetic and Batanovic, 2020, 2020). The web corpus of the Serbian language _srWaC_ was built by crawling the.rs top-level domain for Serbia in 2014 (Ljubesic and Klubicka, 2016; Ljubesic and Klubicka, 2016). _CorLeg_ is a corpus of legislation texts of the Republic of Serbia which was created using a large number of Serbian Legislation texts gathered from the official website 7(Bogdanovic and Tosic, 2022).
Footnote 6: [https://www.clarin.si](https://www.clarin.si)
Footnote 7: [https://www.pravno-informacioni-sistem.rs/](https://www.pravno-informacioni-sistem.rs/)
### Multilingual corpora
Multilingual corpora are a particular type of corpus that contains texts written in multiple languages. Parallel corpora include both the original texts and their translations into one or more other languages presented in such a way that their logical structure is explicitly connected at the document, chapter, paragraph, sentence, or word level. Table 2 shows multilingual corpora containing original texts or translations in the Serbian language. One of the early attempts to develop multilingual corpora is the creation of an alignment corpus of Plato's _"Republic"_ containing translations into 21 languages, including Serbian. The corpus has been annotated at the sentence level and has been utilized for both tool development and automated alignment (Krstev and Vitas, 2011). The multilingual language resources and tools for extracting information from the language corpora of CEE languages (Central and Eastern European Languages), called MULTEXT-East8 were created as part of the project Multext. The book "1984" is included in this parallel and sentence-aligned corpora, _Multext-East Corpora_ (G. Orwell's "1984")(G.O.1984, Table 2), along with translations into several other languages. Krstev and Vitas (2011) created a translation of this novel into Serbian and a morphosyntactic annotation in the MULTEXT-East format, for which they had previously developed a specification for the Serbian language. The parallel corpus _Verne80days_ (Table 2) contains the French original and 17 translations of Jules Verne's novel "Around the World in 80 Days". The alignment was performed on the subsentence level for each language (Vitas et al., 2008). _The Serbian-French Corpus_ (SrpFranKor, Table 2), which consists of 31 subsentence-aligned texts
that were originally written in French and then translated into Serbian and vice versa, is the first bilingual corpus in the Serbian language (Vitas and Krstev, 2006; Vitas et al., 2006). _ParCoLab_ (Table 2) is a parallel online searchable corpus consisting of sentence-alignment texts in French, Serbian, English, Spanish, and Occitan. Each of these languages is at the same time a source language and a translation language (Balvet et al., 2014).
_The Serbian-English Corpus_ (SrpEngKo, Table 2) is the second bilingual collection. It consists of English source texts aligned with their translations into Serbian, and visa-versa, as well as several aligned English and Serbian translations of literary texts originally written in French (Krstev and Vitas, 2011). The corpus _SETimes_ is based on the articles posted on the news website SETimes.com. Bulgarian, Bosnian, Greek, English, Croatian, Macedonian, Romanian, Albanian, and Serbian are among the ten languages in which the news is available. Part of the SETimes, sub-corpus _BALKANTIMES_ was used for the expansion of SrpEngKo (Batanovic et al., 2018). Parallel texts from the fields of law, business, education, and health care are also added to SrpEngKor, resulting in the creation of the sub-corpus _Serbian-English Law Finance Education and Health_ (SELFEH, Table 2). Almost 150 parallel texts make up SELFEH, which was utilized in term extraction and machine translation research as well as to test various taggers for the Serbian language (Utvic, 2011). Another Serbian-English corpus is _srenWaC_ (Table 2), which consists of sentence-aligned parallel texts pulled from the.rs top-level domain (Ljubesic et al., 2016). In addition to the SrpFranKo and SrpEngKo bilingual corpora, a similar corpus was created for the German (SrpNemKor, Table 2). It contains 48,004 translated pairs of literary texts in Serbian and German, which are aligned to the sentence level. Available tools for annotation of named entities in texts in both languages as well as tools for terminology extraction were applied to the prepared parallel corpus (Andonovski et al., 2019; Andonovski, 2020).
Additionally, there are multilingual parallel corpora, some of which are displayed at the table's end (Table 2). _OpenSubtitles_ is a database with about 4 million sentence-level translations of movies and television shows in more than 62 different languages (Tiedemann, 2012). _The Bosnian, Croatian, and Serbian Web Corpora_ (BsHrSrWaC, Table 2) are top-level-domain web corpora. They were used to create a method for separating similar languages that is based on unigram language modeling on the crawled data only (Ljubesic and Klubicka, 2014). The Twitter user dataset (Twitter-HBS, Table 2) consists of tweets and their language tag (Bosnian, Croatian, Montenegrin, or Serbian). The main goal of creating this corpus is discrimination between closely related languages at the level of Twitter users (Ljubesic and Rupnik, 2022). The _PE2rr_ corpus includes source language texts from many fields, as well as automatically produced translations into a number of morphologically rich languages, post-edited versions of those texts, and error annotations of the post-edit processes that were carried out. This corpus contains texts in Spanish, German, Serbian,
Table 2: Multilingual corpora containing original texts or translations in the Serbian language. [The table body is not preserved in this copy; the recoverable column headers are: corpus label, text type, number of units, annotation, languages.]
Slovene and English (Popovic and Arcan, 2016). The _BERTic-data_ text collection contains more than 8 billion tokens of mostly web-crawled text written in Bosnian, Croatian, Montenegrin, or Serbian. The collection was used to train the BERTic transformer model (Ljubesic and Lauc, 2021). The Wikipedia dumps of the Bosnian, Croatian, Macedonian, Montenegrin, Serbian, Serbo-Croatian, and Slovenian Wikipedias were collected in the comparable corpus _CLASSLA-Wikipedia_ (CLASSLA-Wiki, Table 2). The linguistic annotation was performed with the classla package 9, on all levels available for a specific language (Ljubesic et al., 2021). Corpora for sentiment analysis are presented in a separate chapter.
Footnote 9: [https://pypi.org/project/classla/](https://pypi.org/project/classla/)
## 4 Language resources
### Dictionaries and terminologies
The term electronic dictionary refers to a dictionary that is used for text processing. It contains information valuable for solving problems of segmentation, morphological analysis, and, in part, syntactic and semantic text processing (Vitas and Krstev, 2009). The automatic processing of text begins by analyzing individual words, which are the base units of the analyzed text. At times, individual words may not be the most appropriate base units for processing natural language. Therefore, there are two types of dictionaries: mono-lexemic dictionaries, which consist of single words, and polylexemic dictionaries, which consist of multi-word units (Andonovski, 2020).
The international network of laboratories for computational linguistics, RELEX (Laporte, 2003), has created a model for building electronic morphological dictionaries that have been adopted by numerous organizations dealing with natural language processing. The _Unitex_10 system works with electronic morphological dictionaries developed according to this model. These are dictionaries in DELA format (_Dictionnaires Electroniques du LADL - Laboratoire d'Automatique Documentaire et Linguistique_). In order to distinguish between monolexemic and polylexemic units, this electronic dictionary is organized into two separate subsystems: a dictionary of monolexemic units (DELAS - simple forms and DELAF - inflected forms) and a dictionary of polylexemic units (DELAC - compound forms, and DELACF - compound inflected forms).
Footnote 10: [https://unitexgramlab.org/language-resources](https://unitexgramlab.org/language-resources)
Based on these models, electronic morphological dictionaries of the Serbian language in Latin and Cyrillic (SrpMD) were built within the Group for Language Technologies of the University of Belgrade (Krstev, 1997; Vitas et al., 2003b, a; Krstev et al., 2006, 2010). According to (Mladenovic, 2016), the SrpMD system currently contains 148,000 lemmas and over 1,000 finite transducers that generate more than 5 million DELAF entries. The tool Leximir (Stankovic et al., 2011) is used as a dictionary management system. It is a multipurpose tool for supporting computational linguists in developing, maintaining, and exploiting e-dictionaries.
The accentuation-morphological dictionary was created at the Faculty of Technical Sciences in Novi Sad and it contains over 4 million entries. It is used for context analysis within text-to-speech and automatic speech recognition systems for Serbian (Secujski and Delic, 2008).
Ljubesic et al. (2015) presented _MWELex_, a multilingual lexicon of Croatian, Slovene, and Serbian multi-word expressions (MWE) that were extracted from parsed corpora. The _srMWELex_ lexicon v0.5 was automatically built during a short-term scientific mission within the PARSEME COST action. It contains multi-word expression candidates extracted with the DepMWEx tool from the _srWaC_ v1.0 web corpus and consists of 22 290 entries and 3 273 369 multi-word units. The freely available morphological lexicon srLex is introduced in (Ljubesic et al., 2016). It consists of 105 359 lexemes and 5 327 361 (token, lemma, MSD) triples.
Miletic (2017) described the creation of a morphosyntactic e-dictionary for the Serbian language. It is derived from the Wiktionary edition for Serbo-Croatian, a manually POS-tagged corpus and specialized proposition list. This lexicon contains 1 226 638 million wordforms for 117 445 lemmas, corresponding to a total of 3 066 214 unique triples (wordform, lemma, MSD - morpho-syntactic description), and it is aimed for POS (part of speech) tagging and parsing tasks.
The DELAS-TOP and DELAS-PERS are dictionaries that respectively list geographic names and Serbian personal names (Krstev et al., 2008; Pavlovic-Lazetic et al., 2004; Grass et al., 2002). The dictionary of geographic names DELA-TOP covers geographic concepts at the level of a high-school atlas (approximately 20.000 toponyms, oronyms, and hydronyms with their corresponding derivatives). The dictionary of personal names has been created from the list of the names of 1.7 million inhabitants of Belgrade as established in 1993. Based on this list, two dictionaries were constructed: DELA-FName for the first names, and DELA-LName for the last names (Vitas et al., 2003a).
The dictionary of librarianship and information sciences contains terminology (Kovacevic et al., 2004) used in the theory and practice of librarianship, information sciences, and related fields in Serbian, English, and German. The online version of the dictionary currently contains: 40,000 definitions (approximately 14,000 in Serbian); 900 definitions or annotations of terms that are part of library standards; 2,300 acronyms of international and national organizations and institutions; 190 addresses of relevant websites11.
Footnote 11: [http://rbi.nb.rs/srlat/dict.html](http://rbi.nb.rs/srlat/dict.html)
The electronic geological dictionary (GeoIISSTerm) is a specially prepared taxonomy of basic geological concepts and terms, and it is used for IT needs as an elementary resource in the formation of domains in the Geological Information System of Serbia (GeoIISS)(Stankovic et al., 2011).
Vujicic-Stankovic et al. (2014) extended SrpMD with 636 entries of simple words and 612 entries of MWEs (multi-word expressions) from the culinary domain.
Grljevic (2016) provided several dictionaries for sentiment analysis in the field of education in her doctoral dissertation (sentiment words, domain-specific phrases, negation keywords, and stop words that are identified from the corpus). Negation signals, negative quantifiers, and particle intensifiers were added to the sentiment lexicon (Ljajic and Marovac, 2019). Similarly, for sentiment analysis, a domain-oriented stop words collection was created (Mladenovic, 2016). In a separate chapter on sentiment analysis below, sentiment word lexicons and other lexicons used in sentiment analysis are described more.
Avdic et al. (2020) created medical dictionaries for Serbian: names of diagnoses (7942 entries), diagnosis code (14194 entries), Latin names of the diagnosis (3794 entries), therapies (2232 drugs and 1317 ampoules, 2255 different terms), symptoms for the diagnosis of measles B05 (95 entries), specialties (41 entries), abbreviations from the medical domain. Non-medical dictionaries created in the same research are a set of negation symbols in the medical domain, places, and names.
Ostrogonac et al. (2020) created a domain vocabulary of jobs in Serbian. It has two versions, one of 40 thousand, one of 80 thousand words, and 30 thousand lemmas, and they are included in Python library _nlpheart_.
The Serbian stop word dictionary (SSW dictionary) contains 1241 different stop words for the Serbian language. It was created based on the grammar of the Serbian language, as well as by comparison with the available sets of stop words for Serbian and a set of stop words for Croatian. The SSW dictionary contains words in the different forms in which they appear, and each word is accompanied by a word type label. The SSW dictionary is available as a CSV file - SSWdictionary.csv. The file contains two columns: word and label. The label describes the type of word: auxiliary verbs (V), pronouns (PRON), adverbs (ADV), prepositions (PREP), conjunctions (CONJ), exclamations (EXCL), particles (PART) and abbreviations (ABBR) (Marovac et al., 2021).
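For illustration, a minimal way to use such a stop word list in a preprocessing pipeline is sketched below; it assumes the CSV file has a header row with the two columns described above, and the file path is only illustrative.

```python
# Minimal sketch: load the SSW dictionary and filter stop words from a token list.
import csv

def load_stop_words(path="SSWdictionary.csv"):
    """Read the stop word column of the SSW dictionary into a set."""
    with open(path, encoding="utf-8") as f:
        return {row["word"].lower() for row in csv.DictReader(f)}

def remove_stop_words(tokens, stop_words):
    """Keep only tokens that are not listed as stop words."""
    return [t for t in tokens if t.lower() not in stop_words]

stop_words = load_stop_words()
print(remove_stop_words(["ovo", "je", "primer", "rečenice"], stop_words))
```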
The SrHurtLex (Stankovic et al., 2020) is a lexicon created for the detection of abusive words in Serbian. It is created using the lexical database Leximirka, the system of Serbian morphologic dictionaries SrpMD, and The Dictionary of Serbian Language (DS) (Vujanic, 2007), where the multi-word expressions labeled in dictionaries as augmentative, pejorative, derogatory, vulgar, etc. were collected.
### Ontologies
The term "ontology" originates from philosophy and it represents science about existing concepts (types of things) and their relations (Vujicic-Stankovic, 2016). In computer science, ontology is a structure that describes concepts, their relations, and existing constraints. Their purpose is the automatic sharing and reuse of knowledge between humans and computer, and between computers. Both parts which are included in sharing process have to have a certain level of understanding of the exchangeable information.
The hierarchy of classes is called taxonomy. Commonly, ontology describes terms and relationships between them for a particular domain.
The semantic network that describes proper names and their relations was developed during the Prolex project (Krstev et al., 2007). It consists of 2000 proper names, mainly names of states and their capital cities.
The RudOnto is a terminological resource developed at the Faculty of Mining and Geology in Belgrade, and it is the reference resource for mining terminology in Serbian. It is managed by a terminological information system, and intended to produce the derived terminological resources in subfields of mining engineering, such as planning and management of exploitation, mine safety or mining equipment management (Stankovic et al., 2011).
Tomasevic (2018) developed a mining domain ontology, _RuDokOnto_, for the purpose of collecting, describing, and systematizing mining project documentation throughout the phases of the mining project's life cycle, in a way that links it with other related ontologies.
RetFig is a linguistic domain, descriptive, formal ontology for rhetorical figures in Serbian and describes 98 figures (Mladenovic and Mitrovic, 2013).
### Word networks
In traditional dictionaries, lexical entries are alphabetically ordered and each entry lists definitions for all of its possible meanings. In WordNet, all words expressing the same concept are grouped together in a set of synonyms (synset). _Serbian WordNet (SerWN)_ (Krstev et al., 2004; Koeva et al., 2008) is the lexical-semantic net for Serbian. Its development started within the BalkaNet project (Mladenovic et al., 2020), and when the project finished in 2004, SerWN had 8000 synsets. After that, its development continued, especially in the biological, biomedical, psycho-linguistic, gastrointestinal and other domains. Its structure is basically the same as that of the Princeton WordNet (PWN) (Miller and Fellbaum, 2007): it is organized as nodes (synsets) and the relations between them. Every word in a synset is represented as a character string (literal), followed by the sense of that literal in the given synset. As a word can have multiple meanings, it can be part of multiple synsets.
According to Koeva et al. (2008), _SerWN_ consists of 13612 synsets, 23139 literals, 18210 relations, 314 derived, and 83 derivatives.
Krstev et al. (2014) developed an ontology for the culinary domain in Serbian, and the Serbian Wordnet is enhanced with the synsets from this domain. This ontology is used for the determination of similarity between recipes and query expansion.
As a lexical resource, _SerWN_ has been applied in multi-member lexical unit research (Krstev et al., 2010; Mladenovic et al., 2014), text classification (Pavlovic-Lazetic and Graovac, 2010), the search of multilingual digital databases (Stankovic et al., 2012), recognizing rhetorical figures (Mitrovic et al., 2017; Mladenovic and Mitrovic, 2013; Mitrovic, 2014), analyzing feelings expressed in the text (Mitrovic et al., 2015) and others.
Vujicic-Stankovic created an ontology for the culinary domain and expanded SerWN with 1,404 synsets from this domain, so that it contains a total of 1,797 such synsets (Vujicic-Stankovic et al., 2014; Vujicic-Stankovic, 2016).
The Universal Dependencies (UD) project12 aims to develop cross-linguistically consistent treebank annotation for many languages and to provide a universal inventory of categories and guidelines that facilitate consistent annotation of similar constructions across languages, while allowing language-specific extensions when necessary. As a part of this project, a Serbian treebank was created, based on the SETimes corpus (Samardzic et al., 2017).
Footnote 12: [https://universaldependencies.org/introduction.html](https://universaldependencies.org/introduction.html)
## 5 Lexical and syntactic analysis methods
### Transliteration and diacritic restoration
The tool for automatic diacritic restoration of text that is potentially missing diacritics (e.g. transforming "kuca" (dog) into "kuća" (house), where necessary) is described by Ljubesic et al. (2016). The accuracy of the tool is 99.5% on standard and 99.2% on nonstandard language.
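The tool above is trained on corpus data; as a toy illustration of the problem itself, a naive lexicon-lookup approach can generate all candidate re-diacritized forms of a word and keep those found in a reference word list (the tiny word list below is our own assumption, used only for the example):

```python
# Naive diacritic restoration: enumerate candidate forms and keep the ones found in a lexicon.
from itertools import product

SUBS = {"c": "cčć", "s": "sš", "z": "zž", "d": "dđ"}      # possible re-diacritizations per letter
LEXICON = {"kuca", "kuća", "čaša", "šešir", "đak"}         # toy word list for illustration only

def candidates(word):
    options = [SUBS.get(ch, ch) for ch in word]
    return {"".join(p) for p in product(*options)}

def restore(word):
    hits = candidates(word) & LEXICON
    return sorted(hits) if hits else [word]

print(restore("casa"))   # -> ['čaša']
print(restore("kuca"))   # ambiguous: both 'kuca' and 'kuća' are valid words
```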
Transliteration in Serbian is straightforward because each sound corresponds to a single character. Characters map almost directly from Cyrillic to Latin, with the exception of a few letters that map from a single Cyrillic character to two Latin characters (e.g. њ → nj, љ → lj, or ђ → dj). Systems for transliteration between the Serbian Cyrillic and Latin alphabets have existed since the 1950s (Matthews, 1952; Aurousseau, 1953; Gerych, 1965). Among the newer tools for this task is the Python package _nlpheart_ (Ostrogonac et al., 2020), which supports conversion between the Cyrillic and Latin alphabets.
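A compact sketch of the Cyrillic-to-Latin direction is given below; the three letters that expand to digraphs are handled by the same mapping, and capitalisation is preserved in a simple way.

```python
# Serbian Cyrillic -> Latin transliteration; љ, њ and џ expand to two Latin letters.
CYR2LAT = {
    "љ": "lj", "њ": "nj", "џ": "dž",
    "а": "a", "б": "b", "в": "v", "г": "g", "д": "d", "ђ": "đ", "е": "e",
    "ж": "ž", "з": "z", "и": "i", "ј": "j", "к": "k", "л": "l", "м": "m",
    "н": "n", "о": "o", "п": "p", "р": "r", "с": "s", "т": "t", "ћ": "ć",
    "у": "u", "ф": "f", "х": "h", "ц": "c", "ч": "č", "ш": "š",
}

def cyr_to_lat(text):
    out = []
    for ch in text:
        mapped = CYR2LAT.get(ch.lower(), ch.lower())
        # keep capitalisation; for a digraph only its first letter is upper-cased
        out.append(mapped.capitalize() if ch.isupper() else mapped)
    return "".join(out)

print(cyr_to_lat("Љубљана и Ђорђе"))   # -> Ljubljana i Đorđe
```

The opposite direction (Latin to Cyrillic) is slightly harder, because letter pairs such as "lj", "nj" and "dž" may stand either for one Cyrillic letter or for a sequence of two (e.g. "injekcija", "nadživeti"), so practical converters typically keep a small list of exceptions.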
### Tokenization and stemming
Sentence tokenization is the process of dividing a text into its constituent sentences. Word tokenization aims to divide sentences into simple units, tokens, which are usually words, numbers, and punctuation marks. A number of multi-language tokenizers can tokenize Serbian texts. Most of these tools are available as Python modules, such as Cutter (Graen et al., 2018), Spacy13, and the CLASSLA and Reldi14 tokenizers. The Cutter tokenizer has a variant for online tokenization15. The CLASSLA tokenizer is an adapted Stanford NLP Python library with improvements for specific languages (a fork of Stanza for processing Slovenian, Croatian, Serbian, Macedonian, and Bulgarian)16. The Turanjin tokenizer for Serbian is available as a PHP library17. There is no precise information about, or comparison of, tokenization accuracy on Serbian documents.
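As a minimal illustration of word tokenization, the sketch below uses a simple regular expression; it is not one of the tools listed above and ignores language-specific subtleties such as abbreviations and multi-word tokens.

```python
import re

# Minimal regex-based word tokenizer sketch: words (including letters with
# Serbian diacritics) and individual punctuation marks become separate tokens.
TOKEN_RE = re.compile(r"\w+|[^\w\s]", re.UNICODE)

def tokenize(text: str) -> list[str]:
    return TOKEN_RE.findall(text)

print(tokenize("Kuća je, naravno, velika."))
# -> ['Kuća', 'je', ',', 'naravno', ',', 'velika', '.']
```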
Ostrogonac et al. (2020) present the Python package _nlpheart_ for text processing of Serbian, which includes transliteration, tokenization, normalization, and automatic preparation of text for the application of machine learning models.
Stemming is the process of removing word endings, such as inflectional and derivational suffixes. The remaining part is a reduced form of the word called a stem; the stem generally differs from the dictionary form of the word (the lemma). The first stemming tool (stemmer) for Serbian is described by Keselj and Sipka (2008); it is rule-based (1000 rules) with an accuracy of 79%. Based on this stemmer, Milosevic (2012b) created a new stemmer with a reduced number of rules (180) and an accuracy of 90%. Another solution, created by S. Petkovic et al.18, is based on a stemmer for Croatian (precision of 0.986 and recall of 0.961, F1 0.973, for Croatian)19; there is no information about its stemming accuracy for Serbian. Batanovic et al. reimplemented the optimal and greedy stemmers of Keselj and Sipka (2008), improved the greedy algorithm proposed by Milosevic (2012b), and reimplemented a stemmer for Croatian by Ljubesic & Pandzic (a refinement of the algorithm presented by Ljubesic et al. (2007)) as a WEKA package (Holmes et al., 1994), the SCS Stemmers (Batanovic et al., 2016).
Footnote 18: Stefan Petkovic and Dragan Ivanovic, Stemmer for Serbian language, 2019. [https://snowballstern.org/algorithms/serbian/stemmer.html](https://snowballstern.org/algorithms/serbian/stemmer.html) (accessed Apr 26, 2022)
Footnote 19: Ljubesic, Nikola. Pandzic, Ivan. Stemmer for Croatian, [http://nlp.ffrg.hr/resources/tools/stemmer-for-coroatian/](http://nlp.ffrg.hr/resources/tools/stemmer-for-coroatian/)
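The following sketch illustrates the rule-based suffix-stripping principle behind such stemmers; the suffix list is a tiny illustrative subset and does not reproduce the rules of any of the published stemmers.

```python
# Sketch of a rule-based suffix-stripping stemmer. The suffix list is a small
# illustrative subset, not the actual rules of any published Serbian stemmer.
SUFFIXES = sorted(
    ["ovima", "ama", "ima", "om", "em", "og", "eg", "oj", "u", "e", "a", "i"],
    key=len, reverse=True,  # try longer suffixes first
)

def stem(word: str, min_stem_len: int = 3) -> str:
    lower = word.lower()
    for suf in SUFFIXES:
        # Strip the suffix only if a sufficiently long stem remains.
        if lower.endswith(suf) and len(lower) - len(suf) >= min_stem_len:
            return lower[: -len(suf)]
    return lower

print([stem(w) for w in ["kućama", "kućom", "kuće"]])
# -> ['kuć', 'kuć', 'kuć']
```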
The stem is not a dictionary word form; it is the part shared by words with the same semantic core. Therefore, in some normalization methods, n-gram analysis is used as an alternative to stemming: a word is normalized to a single substring of its letters of size n (tri-gram, tetra-gram, etc.). The advantage is that the n-gram approach is language-independent, i.e., it does not require any rules, lexicons, or corpora. Marovac et al. (2012) used n-gram analysis in the normalization of Serbian text.
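A minimal sketch of this kind of normalization, with the prefix length n as an illustrative parameter:

```python
# Sketch of n-gram normalization as a language-independent alternative to
# stemming: each word is reduced to a single character n-gram (its prefix of
# length n), so no rules, lexicons, or corpora are needed.
def ngram_normalize(word: str, n: int = 4) -> str:
    return word.lower()[:n]

print([ngram_normalize(w) for w in ["normalizacija", "normalizovati", "norma"]])
# -> ['norm', 'norm', 'norm']
```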
### Lemmatization and Part-of-speech tagging
Lemmatization is a process that aims to determine the base morphological form of a word (the lemma), which corresponds to a headword in a dictionary. This step is especially important for languages with rich inflectional morphology, such as Serbian. A given word form can have multiple possible lemmas depending on the context, so some lemmatizers use information obtained by POS or MSD tagging to achieve better accuracy.
There are a number of lemmatization approaches: rule-based, simple statistical-based methods, and machine learning-based methods (Akhmetov et al., 2020).
LemmaGen (Jursic et al., 2010) is a learning algorithm for the automatic generation of lemmatization rules in the form of a refined RDR (Ripple Down Rules) tree structure. It was compared with the CST (Dalianis and Jongejan, 2006) and RDR (Plasson et al., 2008) lemmatization algorithms; its lemmatization accuracy on Serbian corpora is given in Table 3.
BTagger20 (Gesmundo and Samardzic, 2012) is a bidirectional tagger-lemmatizer that implements the lemmatization-as-tagging paradigm. Models trained on the Serbian G.O.1984 corpus reach overall accuracies of 97.72% for lemmatization and 86.65% for MSD tagging.
Footnote 20: [https://github.com/agesmundo/BTagger](https://github.com/agesmundo/BTagger)
Agic et al. (2013) tested the hidden Markov model trigram tagger HunPos, the lemmatization-capable PurePos, TreeTagger, the support vector machine tagger SVMTool, the CST data-driven rule-based lemmatizer, and BTagger on Serbian corpora; the results are given in Table 3.
POS (part-of-speech) tagging is an NLP task in which words in a text are annotated with their corresponding grammatical categories (parts of speech: verb, noun, adjective, pronoun, etc.). POS tagging with more detailed information about grammatical categories is MSD tagging (morphosyntactic tagging, i.e., tagging with morphosyntactic descriptions).
Finite-state automata for lexical and syntactic analysis that take morpho-syntactic labels into account were described in (Krstev, 1997).
Secujski and Kupusinac used HMMs for morphosyntactic tagging on the Alfanum and G.O.1984 corpora. The accuracy of annotation largely depends on the type of text, and some texts are more suitable for automatic annotation than others: for the Alfanum corpus an error rate of 18.44% was obtained, while for "1984" it was as high as 26.97%.
Popovic (2008, 2010) evaluated five taggers (TreeTagger, SVMTool, the Brill rule-based tagger, Trigrams'n'Tags (TnT), and MXPOST) on three corpora (_"Helsinske sveske br. 15, nacionalne manjine i pravo"_, the Serbian Radio Diffusion Law with materials from UNDP workshops, and G.O.1984). TnT showed the best overall performance, while TreeTagger and SVMTool performed better in special cases.
A POS tagger for Serbian and Croatian based on CRF (conditional random fields) is described in (Ljubesic et al., 2016). It is trained on a manually annotated corpus of Croatian in combination with the hrLex/srLex lexicons for each language.
\begin{table}
\begin{tabular}{l l l l} \hline Tool label & Application & Corpus & Accuracy \\ \hline Keselj Stemmer (Keselj and Sipka, 2008) & stemmer & unknown & 79.0\% \\ Milosevic Stemmer (Milosevic, 2012b) & stemmer & Politika & 90.05\% \\ LemmaGen (Jursic et al., 2010) & lemmatizer & Multext-East & up to 86.15\% \\ \hline \end{tabular}
\end{table}
Table 3: Tools for stemming, lemmatization, and POS/MSD tagging and their accuracy on Serbian corpora
The set of morpho-syntactic labels used in the corpus was created according to the revised MULTEXT-East V5 specification for Croatian and Serbian. The reported accuracy for Serbian is 92.33% for MSD tagging and 97.86% for POS tagging.
The tools for tokenization, stemming, lemmatization, and POS and MSD tagging, together with their accuracy on Serbian corpora, are shown in Table 3.
## 6 Classification
Text classification is the process of categorizing text into predefined groups or categories based on its content. It is most often performed using supervised machine learning techniques.
Graovac (2014) proposed two methods for classifying texts based on their content. The first represents a document as a profile containing a fixed number of byte n-grams appearing in the document and uses a dissimilarity measure to determine the class to which the document belongs. This method is language-independent and requires neither pre-processing of the text nor prior knowledge of its content or of the language in which it is written. The second method relies on the information contained in the Serbian WordNet and the Serbian electronic dictionary.
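The sketch below illustrates the profile-based idea: each class and each document is represented by its most frequent byte n-grams, and a document is assigned to the class with the smallest profile dissimilarity. The dissimilarity used here is a simple overlap-based stand-in, not the exact measure of Graovac (2014).

```python
from collections import Counter

def profile(text: str, n: int = 3, top_k: int = 500) -> list[bytes]:
    """Return the top_k most frequent byte n-grams of a text."""
    data = text.encode("utf-8")
    grams = Counter(data[i:i + n] for i in range(len(data) - n + 1))
    return [g for g, _ in grams.most_common(top_k)]

def dissimilarity(doc_profile: list[bytes], class_profile: list[bytes]) -> float:
    """Overlap-based stand-in for a profile dissimilarity measure."""
    overlap = len(set(doc_profile) & set(class_profile))
    return 1.0 - overlap / max(len(doc_profile), 1)

def classify(doc: str, class_profiles: dict[str, list[bytes]]) -> str:
    """Assign the document to the class with the least dissimilar profile."""
    doc_p = profile(doc)
    return min(class_profiles, key=lambda c: dissimilarity(doc_p, class_profiles[c]))

# Usage (sport_texts / politics_texts are hypothetical concatenated class corpora):
# class_profiles = {"sport": profile(sport_texts), "politika": profile(politics_texts)}
# print(classify(new_document, class_profiles))
```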
Petrovic proposes models based on neural networks for the machine prediction of links or references within the text of newly enacted laws and other regulations (Petrovic and Janicijevic, 2019; Petrovic, 2020). Training and validation of the neural networks (RNN - recurrent neural networks, CNN - convolutional neural networks, and HAN - hierarchical attention networks) are performed on a labeled dataset built by assigning to each segment of the text of a law (each article) a label indicating the existence or non-existence of a link or reference in that segment. The training procedure is based on a large dataset comprising a collection of 1120 law texts, segmented into a total of 59,167 individual articles.
For all methods, the number of trainable parameters was reduced by over 99%.
### Similarity
Marovac et al. (2013) proposed a method for similarity search of documents in Serbian. The query is represented as a word vector, as are the documents being searched. The documents are grouped using the k-means clustering algorithm, and keywords are extracted using TF and IDF features and n-grams. Similarity between the query and the documents is calculated using the cosine measure, Jaccard's coefficient, or Euclidean distance. Furlan et al. (2013) proposed a new algorithm, LInSTSS, which, when determining the semantic similarity of two short texts, also takes into account the specificity of the words these texts contain. The evaluation was carried out on a corpus of paraphrases for the Serbian language created in the same research.
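A minimal sketch of query-document similarity with TF-IDF vectors and the cosine measure, in the spirit of the approaches above (the documents and the query are placeholders), using scikit-learn:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder documents and query; in practice these would be normalized
# (stemmed or n-gram reduced) Serbian texts.
documents = [
    "zakon o zaštiti podataka o ličnosti",
    "recept za domaću pitu sa jabukama",
    "izmene zakona o porezu na dohodak",
]
query = ["novi zakon o porezu"]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform(query)

scores = cosine_similarity(query_vector, doc_vectors)[0]
best = scores.argmax()
print(best, scores[best])  # index of the most similar document and its score
```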
A solution for similarity search in e-government using the Apache Lucene tool is described by Nikolic (2016). Petrovic and Stankovic (2019) demonstrated how different text preparation methods influence the calculation of text similarity.
Batanovic (2020) presented the process of handling semantic tasks using statistical modeling and machine learning. STS.news.sr is a news corpus created and used for the semantic similarity task, in which the similarity of news items is annotated with a score. The implementation is provided in the Java library STSFineGrain, available on GitHub. For semantic similarity, a combination of word alignment and the averaging of word vectors was used, with the word vectors built from the srWaC corpus (the web corpus of Serbian). The effects of three different stemming techniques on text similarity for Serbian were also evaluated, and a new similarity measure was proposed, called Part-of-Speech and Term Frequency weighted Short-Text Semantic Similarity.
### Sentiment analysis
Sentiment analysis is the process of analyzing and extracting people's opinions, thoughts, and impressions about various topics, products, and services expressed in a piece of text. It can be investigated at several levels: document level, sentence level, phrase level, and aspect level (Wankhade et al., 2022). Sentiment analysis requires specific lexical resources, such as a dictionary of sentiment words, tools for processing negation and stylistic figures, and so on. One of the first tools for sentence-level sentiment analysis for Serbian was given by Milosevic (2012a). A binary classification into negative and positive sentiment was performed using the Naive Bayes (NB) algorithm. A stemmer (Milosevic, 2012b) designed for this purpose was used as part of the preprocessing. Stop words were eliminated, and negation was handled by prefixing the word that follows a negation signal (words like no, none) with 'NE_'. The sentiment analyzer was made publicly available as a web tool21.
Footnote 21: [https://inspiratron.org/SerbianSentiment.php](https://inspiratron.org/SerbianSentiment.php)
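The 'NE_' prefixing described above can be sketched as a simple preprocessing step; the negation cue list and the one-token window below are illustrative assumptions, not the exact resources used in the cited work.

```python
# Minimal sketch of negation handling: words that follow a negation signal
# are prefixed with 'NE_'. The cue list and window size are illustrative.
NEGATION_CUES = {"ne", "nije", "nisam", "nema", "niko", "ništa"}

def mark_negation(tokens: list[str], window: int = 1) -> list[str]:
    marked, to_mark = [], 0
    for tok in tokens:
        if tok.lower() in NEGATION_CUES:
            marked.append(tok)
            to_mark = window          # mark the next `window` tokens
        elif to_mark > 0:
            marked.append("NE_" + tok)
            to_mark -= 1
        else:
            marked.append(tok)
    return marked

print(mark_negation("film nije dobar".split()))
# -> ['film', 'nije', 'NE_dobar']
```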
Maximum entropy (ME), support vector machine (SVM), and NB machine learning methods were used to analyze tweet sentiment (Jolic, 2015). Procedures are proposed to reduce the noise in such messages and increase accuracy. The best accuracy, 80.5%, was achieved with the ME method using unigrams; when unigrams and bigrams were applied and negation and phrases were also considered, accuracy increased to 82.7%.
Mladenovic et al. (2016) chose a hybrid approach that uses a sentiment dictionary extended with morphological forms from the morphological dictionary SrbMD and with synonyms from Serbian WordNet, in order to reduce the drawbacks of using stemmers in morphologically rich languages. A sentiment dictionary was created (Mladenovic, 2016) containing 1053 expressions (and 10,704 inflectional forms) classified into 24 emotion categories and augmented with synonyms and phrases. SentiWordNet was integrated with Serbian WordNet to provide sentiment tags for the Serbian WordNet synsets; a total of 4044 synsets were marked, and an additional sentiment dictionary with 971 inflectional forms was created from them. Using the TF-IDF approach, 577 of the most frequent words (1428 inflectional forms) from the 122-million-word corpus of contemporary Serbian, SrpKor2013, were used to construct the stop-word list, and a domain-oriented collection of stop words with 1372 inflectional forms was generated using the TF approach. The method was trained on a news set (TrN, Table 4) with two topics, "bad news" and "good news", which was automatically categorized and balanced by sentiment. Two sets were used for testing: a set of news (TsN, Table 4) collected from a source other than TrN, and a set of movie reviews (TsMR, Table 4) collected from a website and tagged with sentiment based on the grades attached to them (neither test set is balanced). These resources were used to develop the Serbian document-level sentiment analysis framework (SAFOS), which applies the maximum entropy approach with unigram, bigram, and trigram features. SAFOS was evaluated using hold-out test sets and 10-fold cross-validation (CV). The combination of unigram and bigram features reduced by "sentiment feature" mapping produced the best classification accuracy in both hold-out tests (78.3% on the TsMR set and 79.2% on the TsN set). Because it was trained and tested on data from the same domain, the system performed better in 10-fold CV, with an accuracy of 95.6%.
The SerbMR corpus of movie reviews (Batanovic et al., 2016), in which each review is labeled according to the grade given by its author, was examined to identify the best machine-learning features and simple text-processing options for sentiment classification. By combining the obtained optimal attributes with NBSVM (a combination of a multinomial Naive Bayes classifier and a support vector machine classifier), an accuracy of up to 85.55% was achieved for two classes and up to 62.69% for three classes. A comparison of different methods for morphological normalization indicated that using a stemmer is better than lemmatization for sentiment analysis. The stemmer of Ljubesic and Pandzic gave the best accuracy on the SerbMR dataset: 86.11% for two classes and up to 63.02% for three classes (Batanovic and Nikolic, 2017).
According to the studies cited above, merely identifying the presence of negation is insufficient for determining sentiment. The collection of film reviews in (Batanovic et al., 2016) was subjected to the traditional method of processing negation, which changes the polarity of words that follow a negation signal. For three classes, marking two words after the negation led to the largest improvement in sentiment analysis accuracy (0.94%), while for two classes, marking only the first word after the negation gave the best improvement (0.66%). Processing rules for semantic negation, which improved the sentiment classification of short informal texts, are described in Ljajic and Marovac (2019). These rules were tested on a set of tweets about public personalities that was manually labeled with sentiment (TWPP, Table 4). A machine learning method using additional attributes based on the proposed negation-processing rules improves sentiment analysis accuracy on this set by up to 1.45% for three classes and up to 0.82% for two classes; when applied only to tweets containing negation, the improvement rises to 2.65% for three classes and 1.65% for two classes. For this study, dictionaries of negation signals (25), negative quantifiers (56), and intensifiers were constructed, as well as a sentiment dictionary of 5632 sentiment words (reduced to morphological stems: 4058 negative and 1574 positive). The impact of various morphological normalizations on sentiment analysis was also examined on this set of tweets: using the stemmer of Milosevic (2012b) proved preferable to normalization with the morphological dictionary SrbMD (accuracy 85.27%), and reducing words to 4-grams produced good results with little resource usage (Ljajic et al., 2019).
Aspect-based sentiment analysis deals with the identification of sentiment (negative, neutral, positive) and the determination of aspects (sentiment targets) in a sentence. Nikolic et al. (2020) proposed an aspect-based sentiment analysis of student opinion surveys in Serbian. Two datasets were used, and sentiment analysis was performed at the finest level of text granularity - the level of the sentence segment (phrase and sentence).
A collection of official student surveys (OSS, Table 4) makes up the first dataset, while the second is a set of online reviews of professors and lecturers (OSPL, Table 4) previously created for the paper (Grljevic, 2016). The OSS and OSPL corpora were automatically annotated for sentiment (negative, neutral, positive) and then manually annotated for aspects (ranging from lower-level features, such as lectures, helpfulness, materials, and organization, to higher-level aspects, such as professor, course, and other). For aspect classification, a cascade classifier (a collection of binary SVM classifiers, each trained to distinguish between two distinct aspects) was employed. The quality of the aspect analysis was influenced by the corpus, as seen in the F-measures of 0.89 for the OSS corpus and 0.78 for the OSPL corpus.
Sentiment analysis includes specific subtasks such as polarity detection, subjectivity detection, and sarcasm detection. An annotation approach with six sentiment labels was created to satisfy the requirements of these particular tasks and to enable multiple interpretations of sentiment (Batanovic et al., 2020). SentiComments.SR (Table 4), a corpus of short texts in Serbian, was manually annotated using this multi-level annotation scheme; it contains 3490 short movie comments (up to 50 tokens each) (Batanovic, 2020). On this corpus, linear classifiers using bag-of-words and/or bag-of-embeddings features were evaluated under different morphological normalizations and negation-processing techniques. The combination of bag-of-words and bag-of-embeddings attributes yielded significant classification improvements for all sentiment analysis subtasks (F-measure: polarity 0.783, subjectivity 0.885, four-class sentiment analysis 0.655, six-class sentiment analysis 0.586). Due to the insufficient number of sarcastic texts in the corpus, the results for sarcasm detection are not representative.
The sentiment lexicon Senti-Pol-sr (Stankovic et al., 2022) was created on the basis of three existing lexicons (NRC, AFINN, and Bing) and was manually corrected. The lexicon contains 6454 distinct tokens, and its initial version is publicly available.
The lexicon was used for sentiment analysis on a balanced dataset extracted from SrpELTeC, consisting of 1089 manually labeled sentences with 363 instances each of positive, neutral, and negative sentiment.
\begin{table}
\begin{tabular}{l l l l l} \hline Corpus label & Text type & Number of texts & Annotation & Reference \\ \hline TrN & News & 2000 & S; AA & (Mladenovic et al., 2016) \\ TsN & News & 779 & S; AA & (Mladenovic et al., 2016) \\ TsMR & Movie reviews & 2237 & S; AA & (Mladenovic et al., 2016) \\ OSPL & Education reviews & 3863 & S, A; AA + MA & (Grljevic, 2016) \\ OSS & Education reviews & 2472 & S, A; AA + MA & (Nikolic et al., 2020) \\ SerbMR & Movie reviews & 2523 & S; AA & (Batanovic et al., 2016) \\ SentiComments.SR & Short texts & 3490 & S; MA & (Batanovic, 2020) \\ ParlaSent-BCS v1.0 & Parliamentary sentences & 2600 & S; MA & (Mochtak et al., 2022) \\ TWPP & Tweets & 7664 & S; MA & (Ljajic and Marovac, 2019) \\ TWVA & Tweets & 8817 & S, R; MA & (Ljajic et al., 2022) \\ MRSA & Music reviews & 1830 & S; AA & (Draskovic et al., 2022) \\ SMSSA & SMS messages & 6171 & S; MA & (Sandrih, 2019) \\ TW15 & Tweets & 1643735 & S; MA & (Mozetic et al., 2016) \\ \hline \end{tabular}
\end{table}
Table 4: Corpora for sentiment analysis (annotation: S = sentiment, A = aspect, R = relevance; AA = automatic annotation, MA = manual annotation)
This approach achieved its best accuracy of 87.8% on SrpELTeC with 2 classes and 71.9% with 3 classes using MNB with a bag-of-words representation combined with the sentiment lexicon features. Among models trained with LR, NB, decision trees, random forests, SVM, and k-NN, the best accuracy of 87.8% was obtained with LR. It was also shown that training on the labeled movie review dataset (SerbMR) cannot be successfully transferred to sentence-level sentiment analysis in old novels.

Draskovic et al. (2022) developed machine-learning models for sentiment analysis using three different datasets. The first set (MRSA, Table 4) was created for that research by collecting music reviews from 13 portals, ensuring that the set was balanced. The second dataset is the previously mentioned set of movie reviews, while the third is the MARD set of music album reviews. MARD was originally compiled in English and then translated into Serbian using the Google Translate API. Standard classification models (NB, LR, and SVM) and hybrid models (combining a linear model with NB) were applied to these datasets. The hybrid NB-LR model gave good average results (58% for three classes and 79% for two classes). It was shown that the film and music review sets can be used together to improve model quality, whereas extending the model with reviews translated from English does not improve performance, due to differences in vocabulary and review writing style as well as the quality of the translated text.

The influence of emoticons, informal speech, and lexical and other language features on mood in a set of SMS messages (SMSSA, Table 4) is presented by Sandrih (2019). A total of 621 features were selected and divided into three main categories: lexical (based on characters and words), syntactic (emoticons, abbreviations), and stylistic. Using linear SVM classification, an accuracy of 92.3% was obtained; sentence-based sentiment classification and emotion recognition are suggested as ways to further improve the classification of SMS messages.

By analyzing 1.6 million manually tagged tweets in 15 different European languages, of which 73,783 are in Serbian (TW15, Table 4), Mozetic and Grcar found that the quality of a classification model depends much more on the quality and size of the training data than on the type of trained model. The experiments showed no statistically significant difference between the performance of the top classification models (five of them based on SVM, with the NB classifier applied for reference).
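The combination of a bag-of-words representation with sentiment-lexicon counts and a multinomial Naive Bayes classifier can be sketched as follows; the tiny lexicon and training sentences are placeholders, not the actual Senti-Pol-sr resources.

```python
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Placeholder lexicon and training data (illustrative only).
POSITIVE = {"dobar", "odličan", "sjajan"}
NEGATIVE = {"loš", "užasan", "dosadan"}

def lexicon_counts(texts):
    """Two extra features per text: counts of positive and negative lexicon words."""
    rows = []
    for t in texts:
        toks = t.lower().split()
        rows.append([sum(w in POSITIVE for w in toks),
                     sum(w in NEGATIVE for w in toks)])
    return csr_matrix(rows)

train_texts = ["film je odličan", "roman je dosadan", "sjajan koncert", "loš album"]
train_labels = ["pos", "neg", "pos", "neg"]

vectorizer = CountVectorizer()
X_bow = vectorizer.fit_transform(train_texts)
X = hstack([X_bow, lexicon_counts(train_texts)])  # bag-of-words + lexicon features

clf = MultinomialNB().fit(X, train_labels)
test = ["koncert je bio loš"]
X_test = hstack([vectorizer.transform(test), lexicon_counts(test)])
print(clf.predict(X_test))  # predicted sentiment label
```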
Transfer learning is one of the advanced techniques in AI that allows a pre-trained model to transfer its knowledge to a new model. It is frequently used in sentiment analysis and can produce good results, particularly in the absence of large labeled datasets.
Batanovic presented the results of applying neural language models based on transformer architectures to the sentiment analysis subtasks of short texts from the SentiComments.SR corpus (Batanovic et al., 2020). Three transformer-based models were used: multilingual BERT (Devlin et al., 2018), multilingual DistilBERT (Sanh et al., 2019), and XLM MLM (Conneau and Lample, 2019). Fine-tuned multilingual transformer-based models achieved the same or better performance than linear models on all sentiment analysis subtasks. For each subtask, XLM MLM produced the best F-measure results: 0.793 for polarity, 0.887 for subjectivity, 0.686 for four-class sentiment analysis, and 0.627 for six-class sentiment analysis.
Based on a sample of parliamentary discussions, Mochtak et al. (2022) demonstrated that transformer models produce noticeably better results than simpler architectures. The dataset consists of sentences of average length from the corpora of parliamentary proceedings in the region of the former Yugoslavia - Bosnia and Herzegovina, Croatia, and Serbia. A set of 2600 sentences (ParlaSent-BCS v1.0, Table 4), including 876 with only positive, 876 with only negative, and 866 without sentiment words, was selected using the Croatian gold-standard sentiment lexicon (Glavas et al., 2012), translated to Serbian with a rule-based Croatian-Serbian translator (Klubicka et al., 2016). The dataset contains 1059 sentences from the Serbian parliament. It was manually annotated at the sentence level using the multi-level annotation schema described by Batanovic et al. (2020) and is available online. Four models were compared: FastText (Bojanowski et al., 2017) with pre-trained CLARIN.SI word embeddings (Ljubesic and Erjavec, 2018), XLM-R (Conneau et al., 2019), CroSloEngual BERT (Ulcar and Robnik-Sikonja, 2020), and BERTic (Ljubesic and Lauc, 2021). BERTic gave the best results (macro F1 0.7941 \(\pm\) 0.0101). Compared to Bosnian and Croatian, Serbian proved to be the most difficult language to predict.

Using BERTic for sentiment analysis, Ljajic et al. (2022) expanded an annotated dataset used for topic analysis of tweets expressing negative sentiment towards COVID-19 vaccination. A collection of 8817 vaccination-related tweets in Serbian (TWVA, Table 4) was manually labeled as relevant or irrelevant with respect to COVID-19 vaccination sentiment, and relevant tweets were further labeled as positive, negative, or neutral. On this dataset, BERTic classified tweets as relevant or irrelevant with 94.7% accuracy and classified relevant tweets as negative, positive, or neutral with 85.7% accuracy. The annotated set was then expanded by this classifier: starting from 1770 manually annotated negative tweets, another 1516 negative tweets were automatically labeled, forming the dataset used for the topic analysis. The topic analysis was carried out using latent Dirichlet allocation (LDA) and non-negative matrix factorization (NMF). It highlighted topics that are potential reasons for vaccine skepticism: worries about adverse reactions, efficacy, inadequate testing, mistrust of authorities, and conspiracy theories.
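A minimal sketch of NMF-based topic extraction over TF-IDF features, of the kind used in the tweet analysis above; the example tweets and the number of topics are illustrative.

```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# `tweets` stands in for the preprocessed negative-sentiment tweets.
tweets = [
    "plašim se neželjenih efekata vakcine",
    "vakcina nije dovoljno testirana",
    "ne verujem institucijama ni farmaceutskim kompanijama",
    "neželjeni efekti se prijavljuju svaki dan",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(tweets)

# Factorize the TF-IDF matrix into 2 illustrative topics.
nmf = NMF(n_components=2, random_state=0)
nmf.fit(X)

terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(nmf.components_):
    top = [terms[j] for j in topic.argsort()[-3:][::-1]]  # top 3 terms per topic
    print(f"topic {i}: {top}")
```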
## 7 Named entity recognition
Named-entity recognition (NER) is a task that seeks to locate and classify named entities mentioned in unstructured text into categories such as personal
names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc. NER as an NLP task is fairly old, gaining popularity with the Message Understanding Conferences in the mid-1990s (Sekine, 2004). However, NER for Serbian was not addressed substantially until the 2010s.
Vitas and Pavlovic-Lazetic (2008) developed a system that uses morphological and lexical analysis in combination with dictionaries (of Serbian and transcribed English first names, and of geographical locations) for the recognition of personal names and geographical entities. The system uses e-dictionaries and transducer-based rules or grammars for the disambiguation of proper names and geopolitical entities (Krstev et al., 2007; Vitas and Pavlovic-Lazetic, 2008).
Ljubesic et al. (2013) proposed the first machine learning system, based on conditional random fields, for the recognition of personal names, organizations, and locations for Croatian and Slovene, which are closely related to Serbian. They trained the method on a set of annotated web and news corpora (SETimes, Vjesnik, and corpora for both Slovene and Croatian developed as a student project (Filipic et al., 2012)). As features, they used linguistic features and distributional-similarity features computed from large unannotated monolingual corpora. Their experiments showed that distributional features improve the F1 score by 7-8 points, while morphological features add a further 3-4 points; however, as the size of the training data increases, the morphosyntactic and distributional features lose their importance for NER. The resources used for building this NER system are publicly available.
Another approach, based on the earlier application of rules encoded in transducers and thesauri (Krstev, 1997), was enhanced for the recognition of personal names and geopolitical names (Krstev et al., 2014). Dictionaries are used for matching tokens and phrases, while recursive transition networks (grammar graphs) from Unitex (Paumier et al., 2002) are used to resolve ambiguities (e.g., by taking into account grammatical rules such as case-number-gender agreement). The reported system favors precision over recall, with a precision of 0.96 and a recall of 0.88.
For the purpose of comparing NER approaches on multilingual aligned texts (bitexts), a system called NERosetta was developed (Krstev et al., 2013; Krstev C et al., 2013). To illustrate the system, 7 bitexts involving 5 languages (French, English, Greek, Serbian, Croatian) and 5 different NER systems were used (one for Serbian (Krstev et al., 2014), one for Croatian (Ljubesic et al., 2013), one for English (Stanford NER), and two for French). The evaluated entity types were Person, Organization, and Location, with some of the NER systems also providing annotations for time, date, money, percent, and others. The demo application is available on the web 22.
Footnote 22: [http://www.korpus.matf.bg.ac.rs/nerosetta/](http://www.korpus.matf.bg.ac.rs/nerosetta/)
A dictionary approach with the addition of transducer-based grammars (Krstev et al., 2014) was used to create a gold-standard dataset of news articles annotated with personal names. This dataset was then used to train machine learning approaches, namely Stanford NER and SpaCy (Sandrih et al., 2019). Their evaluation indicated that the rule-based approach performed best by F1-score, while Stanford NER had the best recall.
Tanasijevic (2019) developed a system for labeling cultural heritage documents with metadata. To this end, she developed a system that recognizes entities such as years and personal names, as well as the topics of the tagged documents.
A transformer-based model was also introduced for several tasks in Serbian, Croatian, and Slovene, including NER (Ljubesic and Lauc, 2021). The model was pre-trained on web-crawled texts in Serbian, Bosnian, Croatian, and Montenegrin, comprising 8 billion tokens, and then fine-tuned for NER on several openly available datasets, such as SETimes.SR (Batanovic et al., 2018), a corpus of news articles, and ReLDI-sr (Ljubesic et al., 2017), a corpus of annotated tweets. For reference, the authors compared this model with CroSloEngual BERT (Ulcar and Robnik-Sikonja, 2020) and multilingual BERT (Devlin et al., 2018); the language-specific BERT-based models significantly outperformed multilingual BERT.
Apart from the general domain, a decent amount of work has been done on medical named entity recognition for Serbian. One of the previously described general-domain systems was adapted for the de-identification of clinical texts (Jacimovic et al., 2014, 2015); it recognized persons, dates, geographic locations, organizations, and numbers using vocabularies and transducer grammar rules, with a reported overall F1 score of 0.94.

Puflovic et al. (2016) created a model based on character and word n-grams. The dataset was obtained from a neurological clinic, and the system was designed to recognize names of diseases, names of medications, abbreviations, numbers representing dosage, dates or times, and medical treatment success. One hundred documents were manually checked, with accuracy ranging from 64% to 90%.

A mathematical model for medical term recognition was proposed by Avdic et al. (2020). Three methods were proposed. The first is based on dictionary matching of terms. The second uses a formula for labeling words contained in the training set, where confidence is calculated as the number of instances in which a word is labeled with a certain label in the training set divided by the total count of that word in the training set. The third method extends the second with several rules for terms containing errors and abbreviations, so that they too can be tagged correctly. Labeled entities included medical terms (symptoms, symptom descriptions, diagnoses, biochemical analyses, Latin words, anatomical names of organs, therapies, and other medical terms) and non-medical terms (numbers, negation symbols, and other words). The best-performing model was the third one, with an F1 score of 0.937, while the highest F1 score for medical terms was 0.896.

Methods based on deep neural networks and multilingual language models were proposed by Kaplan et al. (2022). They used manually annotated corpora from the Clinic for Nephrology at the University Clinical Center of Serbia (203 discharge summaries annotated by two computer science Ph.D. students adapting the 2012 i2b2 temporal relation challenge annotation schema). They created models based on conditional random fields (CRF), multilingual transformers (multilingual BERT and XLM-RoBERTa), long short-term memory (LSTM) recurrent neural networks, and their ensembles. The CRF method used hand-crafted features commonly found in the literature (the word, its stem, the word shape, the previous 3 words, the next 3 words, etc.). For the LSTM model, the authors used gensim's word2vec embeddings as input to the LSTM network, followed by a CRF token classifier. The study showed that the highest precision was achieved with the CRF-based model, the highest recall with a multilingual transformer model, and the best F1 score with the LSTM-CRF-based model. The best overall performance was achieved by an ensemble of the models with majority voting (F1 score of 0.892).
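The hand-crafted token features typically fed to a CRF sequence tagger can be sketched as follows with sklearn-crfsuite; the feature set and the toy training sentence are illustrative, not the exact configuration of the cited study.

```python
import sklearn_crfsuite

def token_features(sent, i):
    """Simple hand-crafted features for the i-th token of a sentence."""
    word = sent[i]
    return {
        "word": word,
        "lower": word.lower(),
        "is_title": word.istitle(),
        "is_digit": word.isdigit(),
        "suffix3": word[-3:],
        "prev": sent[i - 1].lower() if i > 0 else "<S>",
        "next": sent[i + 1].lower() if i < len(sent) - 1 else "</S>",
    }

# Toy training data (illustrative placeholders).
train_sents = [["Pacijent", "Petar", "Petrović", "prima", "terapiju"]]
train_tags = [["O", "B-PER", "I-PER", "O", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in train_sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, train_tags)
print(crf.predict(X))
```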
## 8 Language models
A language model estimates the probability of a word in a sequence. Many approaches have been proposed for building language models, from simply counting word occurrences in a large text corpus, through adding lexical and syntactic features, to learning word probabilities with neural networks. Early language models were purely statistical, while since around 2014 there has been a proliferation of neural language models - language models based on neural networks.
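A minimal sketch of a purely statistical bigram language model with add-one smoothing, estimating the probability of a word given the previous word from a toy corpus:

```python
from collections import Counter

# Toy corpus (illustrative placeholder sentences).
corpus = ["student čita knjigu", "student piše rad", "profesor čita rad"]
tokens = [s.split() for s in corpus]

unigrams = Counter(w for sent in tokens for w in sent)
bigrams = Counter((sent[i], sent[i + 1]) for sent in tokens for i in range(len(sent) - 1))
vocab_size = len(unigrams)

def prob(word: str, prev: str) -> float:
    """Add-one smoothed bigram probability P(word | prev)."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

print(prob("čita", "student"))    # seen bigram  -> higher probability
print(prob("knjigu", "profesor")) # unseen bigram -> smoothed low probability
```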
Language models are prerequisites for many natural language processing tasks. Accordingly, many of the works on classification (Graovac, 2014), sentiment analysis (Milosevic, 2012a; Jolic, 2015; Grljevic, 2016; Batanovic and Nikolic, 2017; Ljajic and Marovac, 2019), and named entity recognition (Sandrih et al., 2019) used traditional n-gram language models, at times enriched with lexical, morphological, or syntactic features. These systems were described earlier in this review.
Ostrogonac (2018), in his PhD thesis, reviews and compares language models for Serbian up to 2018. In this work, he proposes the first neural language model for Serbian, based on recurrent neural networks trained on a corpus of morphologically annotated Serbian text. He also creates a hybrid model that uses parts of speech and lemmas and matches word sequences either to n-grams in the corpus or to partially lemmatized sequences. These models are compared with more traditional n-gram models on the task of correcting semantic and grammatical errors in text. An error is detected by setting a threshold on the difference in log-likelihood between a language model with morphological features and one without them. While setting such thresholds may be challenging, the work showed potential use cases for neural language models trained specifically for Serbian.
There has been significant effort by international researchers to create multilingual neural language models, some of which also cover Serbian, such as FastText (Bojanowski et al., 2017), multilingual BERT (Devlin et al., 2018), XLM-R (Conneau et al., 2019), and XLM MLM (Conneau and Lample, 2019). Batanovic, in his Ph.D. thesis (Batanovic, 2020), compared a number of n-gram language models on the tasks of sentiment analysis and text similarity. He further compared these language models and methods with fine-tuned multilingual transformer-based models (multilingual BERT base (Devlin et al., 2018), multilingual DistilBERT (Sanh et al., 2019), and XLM MLM (Conneau and Lample, 2019)), showing that the transformer models outperform all n-gram based models in all cases (including those with a large number of morphological, lexical, and syntactic features).
The first, and at the time of writing the only, transformer-based language model specifically trained for Serbian, Croatian, Bosnian, and Montenegrin is BERTic (Ljubesic and Lauc, 2021). BERTic is trained using the ELECTRA approach (Clark et al., 2020), which involves training a smaller generator model alongside the main discriminator model, whose task is to decide whether a given word is an original word from the text or a word produced by the generator. The model is trained on a corpus of 8 billion tokens crawled from the web in Serbian, Croatian, Bosnian, and Montenegrin. While there was previously a BERT-based model for Croatian and Slovenian, CroSloEngual BERT (Ulcar and Robnik-Sikonja, 2020), BERTic outperformed it on almost all tasks (morphological annotation, NER, social media geolocation prediction, and commonsense causal reasoning), mainly because of the larger corpus and the computational efficiency of the ELECTRA approach.
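Such a pretrained model can be reused for downstream classification through the Hugging Face transformers library, as sketched below; the model identifier classla/bcms-bertic is an assumption and should be checked against the published release.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# The model identifier is an assumption; verify it against the published release.
MODEL_ID = "classla/bcms-bertic"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# Attach an (untrained) 3-class classification head to the pretrained encoder.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=3)

inputs = tokenizer("Ova knjiga je odlična.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities (head not yet fine-tuned)
```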
## 9 Conclusion and future directions
Research on natural language processing for Serbian has a long tradition, going back to the second half of the 1990s. During this time, many approaches to the lexical, morphological, syntactic, and semantic processing of text have been explored. In the past decade, the number of researchers and of published studies on natural language processing for Serbian has increased significantly, and several universities and research institutes in Serbia have established natural language research groups.
Serbian is a highly inflected language, and therefore many challenges in natural language processing are specific to it, such as the most efficient way of tokenization, handling inflections in various tasks, and handling negation. While some work has been done on these challenges, they remain open research questions. Basic lexical and morphological tasks, such as transliteration, diacritic restoration, tokenization, stemming, lemmatization, and part-of-speech tagging, are quite well researched, with many approaches presented, evaluated, and compared. Some classification tasks, such as sentiment analysis, have also been researched extensively; sentiment analysis appears to have gained considerable interest after 2012. Named entity recognition has been researched for several entity types, such as proper and personal names and locations, and a few approaches have been proposed for biomedical NER.
On the other hand, some methods and tasks have still not been adequately addressed for Serbian. Many classification tasks other than sentiment analysis have not been explored, and language resources for them are missing. As noted above, methods have been proposed for only a basic set of named entities, and domain-specific classification and named entity recognition methods are still lacking.
Methods from the semantic web, ontologies, and semantic networks have not proliferated much in Serbian NLP, as only a few papers touch on this subject. The most significant research in this space has been the development of the Serbian WordNet, but this is rather a morphological and lexical network than something that can be considered a semantic network. Language resources for many of these tasks are still missing.
While a language-specific BERT-based model has been trained, there is only a single initiative to create this kind of language model. Likewise, resources such as sentence embedding or document embedding methods have not yet been developed. These methods would also contribute significantly to the creation of methods for summarization, question answering, language-specific semantic search, and machine translation.
At the moment, there is a proliferation of large language models, such as GPT-3 (Brown et al., 2020), LaMDA (Thoppilan et al., 2022), and ChatGPT (Ouyang et al., 2022). While these models are multilingual and can generate text in Serbian, there has not yet been much research on prompt engineering or fine-tuning such language models for Serbian.
## 10 Acknowledgements
This paper is partially supported by the Ministry of Education, Science, and Technological Development of the Republic of Serbia, Project No. III44007.